The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the number of central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions, to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320–450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationship between fluid intelligence and pre-attentive change detection of social-emotional information.
Citation: Liu T, Xiao T, Li X, Shi J (2015) Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study. PLoS ONE 10(9): e0138199. https://doi.org/10.1371/journal.pone.0138199
Editor: Piia Susanna Astikainen, University of Jyväskylä, FINLAND
Received: September 29, 2014; Accepted: August 27, 2015; Published: September 16, 2015
Copyright: © 2015 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This research was supported by the National Natural Science Foundation of China (Grant No. 31370020), the Natural Science Foundation for the Youth of China (Grant No. 31000468), the Scientific Foundation of Institute of Psychology, Chinese Academy of Sciences for Outstanding Doctoral Dissertation and President Award of Chinese Academy of Sciences (No.Y0CX272B01). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The nature of human intelligence is an enduring topic in the scientific study of cognition. Fluid intelligence, or the g factor of intelligence, has been widely adopted to describe the component of intelligence that is present from birth and is not shaped by knowledge or experience. This factor indicates how well individuals can adapt to their emotional and non-emotional environments. Studies of cognitive characteristics have consistently found that individuals with high IQ have better memory, attention, and cognitive control abilities than individuals with average IQ [1–2]. However, theories differ on the relationship between intelligence and social-emotional abilities: Spearman's psychometric theory of intelligence posited at least modest correlations between an individual's social-emotional abilities and his or her cognitive abilities. In contrast, Gardner's multiple intelligence theory proposed complete independence of emotional abilities from cognitive abilities or academic aptitude.
The investigation of the social-emotional abilities of intellectually gifted individuals dates back to Terman. Several studies have provided empirical evidence of a positive relationship between intelligence and social-emotional abilities, such as ego resiliency, self-efficacy, self-esteem, and reduced vulnerability [6–9]. It has been observed that individuals' emotional abilities (such as emotion perception, emotion generation, emotion understanding, and emotion regulation) correlate with fluid intelligence [10–11], and that higher IQ scores are associated with faster responses during selective attention tasks involving affective information. Moreover, adolescence is an extremely important period for the neurodevelopment of social-emotional abilities [13–14]. Children with higher IQ scores show better performance on emotional intelligence tests, suggesting that children with high IQ might have better emotion perception and management abilities than their average IQ peers. However, it remains unknown whether adolescents with high IQ also have better automatic neural processing of social-emotional information. The current study was intended to investigate the relationship between fluid intelligence and the neural activation underlying pre-attentive facial expression processing, and thereby to provide electrophysiological evidence.
Facial expressions contain essential social-emotional information, and electrophysiological studies using the event-related potential (ERP) technique have reported that the ERP components P100, N170, and P300 are associated with three different stages in the perception of facial expressions [16–19]. Additionally, the automatic detection of minor changes in facial expressions is even more crucial for social-interpersonal communication. To study the automatic neural processing of facial expressions, passive expression-related oddball tasks have been widely adopted. An ERP component of particular interest has been the visual mismatch negativity (vMMN), which is measured by subtracting the neural responses to standard (frequently presented) stimuli from those to deviant (randomly and infrequently presented) stimuli. The expression-related vMMN has been regarded as an index of the automatic neural processing of facial expressions [20–30]. Moreover, Stefanics et al. complemented the classic oddball paradigm with an accompanying primary task, consisting of pressing a response button rapidly in response to changes of a fixation cross, to improve the methodological validity of the expression-related vMMN. They observed a significant vMMN to deviant emotional faces over the bilateral temporal-occipital electrode sites. Although the neural mechanism of the expression-related vMMN remains unclear, most researchers regard the vMMN elicited by deviant facial expressions as reflecting automatic and unintentional processes of predictive memory representation [20,26,28].
The main aim of the current study was to investigate whether adolescents with different levels of intelligence have different automatic neural processing of facial expressions, and thereby to elucidate the relationship between fluid intelligence and the automatic processing of emotional information. We used a paradigm similar to that of Stefanics et al., introducing a centrally presented visual primary task to occupy the participant's attention and displaying a passive emotion-related oddball paradigm on both sides of the primary task. Participants were instructed to concentrate on the central crosses and to accomplish the primary task as quickly and as accurately as they could, while ignoring the bilaterally presented facial expressions. Two oddball conditions were used: a fearful condition, in which fearful expressions served as deviant stimuli and neutral faces as standard stimuli, and a happy condition, in which happy expressions served as deviant stimuli and neutral faces as standard stimuli. We compared vMMN responses between the happy and fearful conditions. Based on Spearman's psychometric theory of intelligence and the performance-based emotional intelligence findings of Zeidner et al., we hypothesized that adolescents with high IQ would show better automatic processing in both happy and fearful conditions than their average IQ peers, as indexed by greater vMMN amplitudes over the frontal and occipito-temporal brain areas.
Materials and Methods
This study was approved by the Ethics Committee of the Institute of Psychology, Chinese Academy of Sciences. Written informed consent was obtained from children and their parents.
Two groups of adolescent males (a high IQ group and an average IQ group) were enrolled in the study. The high IQ group (n = 17, ages 13.3–14.2 years, mean age 13.7 years) was recruited from a gifted education system called the "Gifted Youth Class", which offers a curriculum emphasizing science domains such as mathematics, physics, chemistry, and biology. The "Gifted Youth Class" enrolls 30 children from about 1800–2000 candidates each year based on their scores on classical intelligence tests and on cognitive abilities such as attention, memory, and executive functions. Participants in the average IQ group (n = 19, ages 13.2–14.3 years, mean age 13.7 years) were recruited from a conventional middle school, and had similar ages and parental socioeconomic status (SES) to those in the high IQ group. All participants were adolescent males, because most members of the "Gifted Youth Class" were boys, and selecting only male participants avoided increased variability and the need to consider additional covariates, such as girls' pubertal status and menstrual cycle. All adolescents were right-handed, with normal or corrected-to-normal visual acuity, and none had psychiatric or neurological problems.
Intelligence was measured via two classic intelligence tests: Cattell's Culture Fair Test (55 items, 1 point/item, scale range 0–55) and Raven's Standard Progressive Matrices (60 items, 1 point/item, scale range 0–60). Participants' IQ scores are presented in Table 1. These two instruments are regarded as among the most promising tests of fluid intelligence, and have been shown to load highly on the g factor of intelligence [33–34]. The Cattell test scores ranged from 48 to 53 in the high IQ group and from 36 to 43 in the average IQ group, whereas the Raven test scores ranged from 53 to 57 in the high IQ group and from 42 to 47 in the average IQ group. Two-tailed t-tests showed that the high IQ group achieved significantly higher scores than the average IQ group on both intelligence tests (Cattell: t = 5.5, p < 0.001; Raven: t = 5.1, p < 0.001).
Since parental SES is known to be a crucial factor in children's cognitive and emotional development [35–36], the current study also controlled for SES factors, specifically parental wealth and maternal education, between the two groups. Parental wealth was calculated as the family's average monthly income since the child's birth, and maternal education was measured as the mother's highest educational degree. Two-tailed t-tests showed no significant differences between the two groups in SES scores (ps > 0.05). Detailed descriptions of the participants' SES information are presented in Table 1.
Stimuli and Procedure
Fig 1 illustrates the stimuli and procedure. The presentation screen was a computer monitor (17 inches, 1024 × 768 resolution at 100 Hz refresh rate) with a black background.
Two identical facial expressions were displayed bilaterally, one on each side of the central fixation cross. The presentation of the faces and the changes of the fixation cross were independent. Each face-pair was presented for 150 ms, followed by an inter-stimulus interval of 300–700 ms. The cross changed occasionally during each block, and participants were required to detect the changes and to report at the end of each block how many times the cross had changed.
Similar to several previous vMMN studies [20,28], the primary task was displayed in the central visual field, and the expression-related oddball stimuli (facial expressions) were presented bilaterally to the central fixation cross. The expression-related oddball paradigm was displayed independently of the primary task. Participants were required to focus their attention on the primary task and to ignore the facial expression stimuli. This design guaranteed that the participant's attention was focused on the primary task and that the perception of the emotional information presented in the oddball paradigm was automatic. Participants were instructed to detect and count how many times the central cross ("+") changed: either the horizontal line of the cross became longer than its vertical line, or the vertical line became longer than the horizontal line. Participants reported the number of changes at the end of each block by key press.
Each cross change in the primary task lasted 300 ms, after which the cross returned to its original size. During each block, the cross changed randomly from zero to nine times. Participants were required to concentrate on counting the number of cross changes and to press the corresponding number sticker on the keyboard ("0" to "9") at the end of each block to report how many times the central cross had changed. Participants were instructed to use the left index finger to press "0", "1", "2", "3", or "4" for zero to four changes, and the right index finger to press "5", "6", "7", "8", or "9" for five to nine changes. The answer screen remained until a button was pressed. This design aimed to guarantee that the participant's attention was fully engaged with the primary task. The accuracy and reaction times of the count reports were analyzed with a 2×2 ANOVA, with Intelligence (high IQ, average IQ) and Expression condition (fearful, happy) as independent variables.
The facial expression images were from 10 Chinese models (5 males, 5 females) showing neutral, happy, and fearful expressions. Two identical expressions from the same model were displayed synchronously on both sides of the central cross. Each face was displayed in light grey, subtending a visual angle of 6° horizontally and 8° vertically at a 65 cm viewing distance. Each face-pair was displayed for 150 ms, followed by an inter-stimulus interval of 300–700 ms. The oddball condition was either happy or fearful, with the presentation order of the two conditions randomized across participants. In the happy oddball condition, happy expressions were presented as the deviant stimuli (probability of 0.2) and neutral expressions as the standard stimuli (probability of 0.8). In the fearful oddball condition, fearful expressions were displayed as the deviant stimuli (probability of 0.2) and neutral expressions as the standard stimuli (probability of 0.8). Each condition consisted of 6 practice blocks and 60 formal blocks, and there were 480 standard stimuli and 120 deviant stimuli per expression condition. Deviants and standards were presented pseudo-randomly: there were no fewer than two standards between subsequent deviants, and no block began with a deviant. The vMMN responses evoked by the facial expression stimuli were analyzed to measure the individual's automatic processing of facial expressions.
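The sequencing constraints above (at least two standards between subsequent deviants, no sequence beginning with a deviant) can be illustrated with a small constructive generator. This is only a sketch; the paper does not specify the stimulus-presentation software, and the function name and structure here are hypothetical:

```python
import random

def make_oddball_sequence(n_standard=480, n_deviant=120, min_gap=2, seed=0):
    """Build a pseudo-random oddball sequence ('S' = standard, 'D' = deviant)
    with at least `min_gap` standards before every deviant, which also
    guarantees the sequence never starts with a deviant."""
    rng = random.Random(seed)
    spare = n_standard - n_deviant * min_gap
    if spare < 0:
        raise ValueError("not enough standards to satisfy min_gap")
    # each deviant is packaged as [S]*min_gap + [D]; the spare standards are
    # scattered randomly into the slots before each package and at the end
    extras = [0] * (n_deviant + 1)
    for _ in range(spare):
        extras[rng.randrange(n_deviant + 1)] += 1
    seq = []
    for i in range(n_deviant):
        seq += ['S'] * (extras[i] + min_gap) + ['D']
    seq += ['S'] * extras[n_deviant]
    return seq
```

A constructive scheme like this is preferable to rejection sampling: with 120 deviants among 600 stimuli, a uniformly shuffled sequence almost never satisfies the spacing constraint by chance.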
EEG recording and analysis
Electroencephalograms (EEG) were recorded from 64 scalp electrodes via a NeuroScan Quik-Cap, with electrodes placed according to the extended 10–20 system. The horizontal and vertical EOG (HEOG and VEOG) were monitored via four bipolar electrodes positioned at the outer canthi of the eyes and at the inferior and superior areas of the left eye, respectively. Electrode impedance was kept below 5 kΩ. The EEG signal was continuously recorded at a sampling rate of 500 Hz using a nose reference, amplified with SynAmps amplifiers, and online band-pass filtered at 0.05–100 Hz. The EEG signal was then epoched and averaged from 100 ms before to 500 ms after stimulus onset, with the 100 ms pre-stimulus interval used for baseline correction. Epochs were screened for artifacts: epochs contaminated by eye movements or muscle potentials, or exceeding ±70 μV at any electrode, were excluded from averaging. A chi-square test showed that the remaining trial numbers were similar across groups (high IQ and average IQ) and Expression conditions (fearful and happy) (χ2(1,35) = 1.01, p = 0.3). The EEG was re-referenced to the common average potential and filtered off-line with a zero phase shift (bandwidth: 0–30 Hz, slope: 24 dB/octave). Overall, fewer than 10% of the epochs were excluded from further analyses.
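The epoching, baseline correction, and amplitude-threshold rejection described above can be sketched in a few lines of plain NumPy. This is a simplified illustration under the stated parameters (500 Hz sampling, 100 ms baseline, ±70 μV criterion), not the authors' actual pipeline, and the function and argument names are hypothetical:

```python
import numpy as np

def epoch_and_reject(eeg, onsets, sfreq=500, tmin=-0.1, tmax=0.5, thresh=70.0):
    """Cut stimulus-locked epochs from continuous EEG (channels x samples),
    subtract the mean of the 100 ms pre-stimulus baseline per channel, and
    drop any epoch exceeding +/- thresh (in microvolts) at any electrode."""
    pre = int(-tmin * sfreq)    # samples before stimulus onset (50 at 500 Hz)
    post = int(tmax * sfreq)    # samples after stimulus onset (250 at 500 Hz)
    kept = []
    for onset in onsets:        # onset = sample index of stimulus delivery
        ep = eeg[:, onset - pre : onset + post].astype(float)
        ep -= ep[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
        if np.abs(ep).max() <= thresh:                  # artifact threshold
            kept.append(ep)
    return np.array(kept)       # shape: epochs x channels x samples
```

Each retained epoch spans 300 samples (100 ms baseline plus 500 ms post-stimulus at 500 Hz); averaging the retained epochs per stimulus type then yields the ERPs.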
For the accuracy of the count reports, no significant main effects or interaction effects were observed. For reaction times, the main effect of Intelligence was significant (F(1,34) = 4, p < 0.05, η2 = 0.15), and post hoc pairwise comparisons (Sidak-adjusted) showed that the high IQ group responded faster than the average IQ group. The main effect of Expression condition was also significant (F(1,34) = 10.5, p < 0.005, η2 = 0.24): responses in the primary task were faster in the happy oddball condition than in the fearful condition (p < 0.05).
ERP responses in the happy and fearful oddball conditions
Fig 2 shows the grand-average ERPs elicited by the standard and deviant stimuli in the happy and fearful oddball conditions. To analyze the automatic processing of affective deviants, the vMMN was calculated by subtracting the ERP responses to standard stimuli from the ERP responses to deviant stimuli [37–40]. vMMN responses were analyzed over the frontal and occipito-temporal areas within three time windows: early vMMN (50–130 ms), middle vMMN (150–300 ms), and late vMMN (320–450 ms). The region of interest (ROI) for the frontal areas comprised electrodes F1, F3, F5, F2, F4, and F6, whereas the ROI for the occipito-temporal areas comprised electrodes TP7, P7, PO7, CB1, O1, TP8, P8, PO8, CB2, and O2, consistent with a previous study in adults. Repeated-measures ANOVAs were conducted on the peak amplitudes of the vMMN components, with Intelligence (high IQ, average IQ), Expression condition (fearful, happy), and ROI (frontal, occipito-temporal) as independent variables.
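The deviant-minus-standard subtraction and the peak-amplitude measurement can be illustrated with a short sketch. It assumes the 500 Hz sampling rate and 100 ms baseline described in the recording section; the ERP arrays, function names, and synthetic waveform below are purely illustrative:

```python
import numpy as np

SFREQ, BASELINE_MS = 500, 100  # sampling rate (Hz) and pre-stimulus baseline (ms)

def window_samples(t0_ms, t1_ms):
    """Convert a post-stimulus window in ms to sample indices within an
    epoch that begins 100 ms before stimulus onset."""
    to_idx = lambda t: int((t + BASELINE_MS) * SFREQ / 1000)
    return to_idx(t0_ms), to_idx(t1_ms)

def vmmn_peak(erp_deviant, erp_standard, t0_ms, t1_ms):
    """Peak (most negative) amplitude of the deviant-minus-standard
    difference wave within a time window. The erp_* inputs are 1-D arrays:
    ERPs already averaged over trials and over the electrodes of an ROI."""
    diff = erp_deviant - erp_standard
    i0, i1 = window_samples(t0_ms, t1_ms)
    return diff[i0:i1].min()

# illustrative example with a synthetic negative deflection near 90 ms,
# measured in the early vMMN window (50-130 ms)
t = np.arange(-100, 500, 1000 / SFREQ)                     # time axis in ms
standard = np.zeros_like(t)
deviant = -2.0 * np.exp(-((t - 90) ** 2) / (2 * 15 ** 2))  # dip at 90 ms
print(vmmn_peak(deviant, standard, 50, 130))               # prints -2.0
```

Applying the same measurement per participant, ROI, and time window yields the amplitudes entered into the repeated-measures ANOVAs.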
The left occipito-temporal waveform was the average neural activation at electrodes TP7, P7, PO7, CB1, and O1. The right occipito-temporal waveform was obtained from the average of TP8, P8, PO8, CB2, and O2.
The mean vMMN amplitudes in both expression conditions are presented in Table 2, and the raw vMMN amplitude data are provided in the Supporting Information file (S1 Data). The vMMN waveforms in the fearful and happy conditions are displayed in Fig 3, and Fig 4 presents the topographic maps of the deviant-minus-standard difference waves for the two IQ groups.
The vMMNs in the deviant-fearful minus standard-fearful condition. The left frontal waveform was the average neural activation at electrodes F1, F3, and F5; the right frontal waveform was obtained from F2, F4, and F6. The left occipito-temporal waveform was from TP7, P7, PO7, CB1, and O1; the right occipito-temporal waveform was from TP8, P8, PO8, CB2, and O2.
For the early vMMN (50–130 ms), the interaction of Intelligence × Expression (F(1,34) = 4.3, p < 0.05, η2 = 0.11) indicated that the high IQ group had more negative vMMN amplitudes than the average IQ group in the happy oddball condition (p < 0.05). Post hoc analyses showed that the high IQ group had more negative vMMN responses in the happy oddball condition than in the fearful condition (p < 0.001).
For the middle vMMN (150–300 ms), the interaction of Expression × ROI was marginally significant (F(1,34) = 4.6, p = 0.06, η2 = 0.13), whereby the occipito-temporal areas had more negative vMMN than the frontal areas in the happy condition (p < 0.01). There were no IQ-related main effects or interaction effects for vMMN during this epoch.
For the late vMMN (320–450 ms), the interaction of Intelligence × Expression × ROI was significant (F(1,34) = 8.1, p < 0.01, η2 = 0.19), such that in the fearful condition, high IQ adolescents had more negative vMMN than average IQ adolescents over both frontal and occipito-temporal areas (ps < 0.05), and in the happy condition, the average IQ group had greater vMMN amplitudes than the high IQ group over the occipito-temporal areas (p < 0.01). Post hoc analyses also showed that over the occipito-temporal areas, high IQ adolescents had more negative vMMN in the fearful condition than in the happy condition (p < 0.05), whereas average IQ adolescents had greater vMMN in the happy condition than in the fearful condition (p < 0.05).
The current study investigated the relationship between intelligence and the neural activation associated with automatic facial expression processing, with the aim of adding electrophysiological evidence to the body of research. The behavioral results showed that adolescents with high IQ were faster than their average IQ peers in reporting how many times the fixation cross changed, indicating better performance on this cognitive task [1–2]. Participants also responded more quickly in the happy condition than in the fearful condition, which might indicate that positive affect (i.e., context or mood) facilitated cognitive processes and/or that negative affect impaired them.
vMMN responses have been widely studied across a large time range, from 100 ms to 580 ms, over the temporal, occipital, and frontal brain areas, and the current vMMN responses were analyzed within separate time windows: early vMMN (50–130 ms), middle vMMN (150–300 ms), and late vMMN (320–450 ms). The current expression-related early vMMN appeared to start earlier than the traditional vMMN (approximately 100 ms), which might be because adolescents are extremely sensitive to affective information and show faster pre-attentive processing of facial expressions [43–44]. The exact cognitive processes these vMMN responses reflect are still unknown; some recent studies suggest that vMMN responses can be regarded as perceptual prediction error signals [39–40].
Close relationships between fluid intelligence and automatic neural processes were observed for the early and late vMMNs, but not for the middle vMMN. The vMMN in the 150–300 ms epoch is regarded as reflecting differences in the N170. Astikainen et al. found that an ERP component at 130 ms latency was elicited not only in the oddball condition but also in an equal-probability condition, suggesting that it reflects both the detection of regularity violations (pure vMMN) and the encoding of emotional information in faces; the N170, in contrast, was sensitive only to emotional expressions, not to stimulus probability. No significant main effect of IQ was found for the current middle vMMN, consistent with a prior study reporting that adolescents with different IQ levels had similar N170 amplitudes to positive and negative faces during a facial expression perception task. These findings might indicate that adolescents with different IQ levels have comparable structural encoding abilities for facial expressions.
More importantly, adolescents with high IQ showed more negative early vMMN amplitudes than adolescents with average IQ in the happy oddball condition. This suggests that adolescents with high IQ might have better pre-attentive processing of positive expressions than their average IQ peers. For the late vMMN, high IQ adolescents showed greater vMMN amplitudes than average IQ adolescents in the fearful condition, whereas average IQ adolescents had larger vMMN over the temporal-occipital areas than high IQ adolescents in the happy condition. This demonstrates that adolescents with different intellectual levels show different perceptual biases toward emotional information of different affective valences: individuals with high IQ had better automatic change detection for both happy (early vMMN) and fearful (late vMMN) minor deviants than adolescents with average IQ, whereas adolescents with average IQ might show better automatic change detection for happy deviants than high IQ adolescents during the late vMMN responses. These findings suggest that individuals' fluid intelligence abilities correlate with their emotion-related behaviors and social functioning [46–49].
Prior studies have consistently shown that vMMN responses correlate with individual cognitive and emotional abilities. For example, Stefanics and Czigler observed that vMMN amplitudes to right hands with unexpected laterality correlated with Edinburgh handedness scores, revealing a close association between vMMN responses and the strength of hand preferences. Csukly et al. observed attenuated vMMN amplitudes in patients with schizophrenia; the patients' impaired vMMN responses were significantly associated with decreased emotion recognition performance, further revealing the complex interactions between emotional and cognitive processes [51–52]. Furthermore, a relationship between vMMN responses and autism spectrum personality traits has also been demonstrated: individuals with higher autism spectrum quotient scores had smaller vMMN amplitudes to happy deviants. These findings illustrate that the close associations previously discovered between automatic prediction error responses (mainly the auditory MMN, aMMN) and behavioral measures in cognitive tasks also exist for vMMN responses [53–55]. Regarding the relationship between aMMN responses and cognitive abilities, a previous study adopting a classical auditory oddball paradigm with non-emotional stimuli found that highly intelligent children had better automatic detection of minor auditory changes than average IQ children, reflected in larger amplitudes of the aMMN and the late discriminative negativity (LDN) in the former. Additionally, higher mental ability has been shown to be associated with larger aMMN amplitudes and shorter aMMN latencies to deviant stimuli, as compared to lower mental ability [57–60]. Furthermore, the current study also showed that adolescents with high IQ had greater vMMN to fearful deviant stimuli over the frontal areas, as compared to average IQ adolescents.
This might indicate a difference in the maturity of the prefrontal cortex between the two IQ groups [51,61–65]. Generally, these findings support the view that there exist specific neural mechanisms associated with human intelligence and automatic neural processes [34,63, 66–67].
There were several limitations to the current study. First, no formal, reliable measure of social cognition or emotional intelligence was used; in future work, we would measure emotional intelligence via the Mayer–Salovey–Caruso Emotional Intelligence Test and consider personality traits such as depression, social anxiety, and empathy, given that these variables might modulate emotion perception at a subclinical level. Second, more types of face models (e.g., simple schematic faces, complex schematic faces, and photographs of real human faces expressing emotions) might be adopted to investigate whether vMMN components are affected by lower-level physical differences among emotional stimuli.
In summary, the present study found that adolescents with high IQ can automatically perceive minor visual changes in positive expressions, as reflected in enhanced neural activation in the early vMMN. For the late vMMN, high IQ adolescents had better automatic processing of fearful expressions than their average IQ peers, and average IQ adolescents had enhanced pre-attentive processing of happy expressions over the occipito-temporal areas. These findings demonstrate that adolescents with high IQ can process and store minor changes in both positive and negative information outside the focus of attention for further memory representation, to a greater degree than adolescents with average IQ. The current study thus sheds light on the essential relationship between fluid intelligence and automatic facial expression perception.
We would like to express our warmest thanks to all the children for their participation, and many thanks to the academic editor and three reviewers for their thoughtful and helpful comments.
Conceived and designed the experiments: TL JS. Performed the experiments: TL XL. Analyzed the data: TL TX. Contributed reagents/materials/analysis tools: TX. Wrote the paper: TL JS.
- 1. Sternberg RJ. Handbook of intelligence. Cambridge: Cambridge University Press; 2000.
- 2. Schweizer K, Moosbrugger H. Attention and working memory as predictors of intelligence. Intelligence 2004; 32: 329–347.
- 3. Spearman CE. The abilities of man. London: Macmillan; 1927.
- 4. Gardner H. Frames of mind: The theory of multiple intelligences. New York: Basic Books; 1983.
- 5. Terman LM. Mental and physical traits of a thousand gifted children. Stanford, CA: Stanford University Press; 1925.
- 6. Austin EJ, Deary IJ, Whiteman MC, Fowkes FGR, Pedersen NL, Rabbitt P, et al. Relationships between ability and personality: Does intelligence contribute positively to personal and social adjustment? Pers Individ Dif 2002; 32: 1391–1411.
- 7. Coplan JD, Hodulik S, Mathew SJ, Mao X, Hof PR, Gorman JM, et al. The relationship between intelligence and anxiety: an association with subcortical white matter metabolism. Front Evol Neurosci 2012; 3: 8. pmid:22347183
- 8. Wartenburger I, Kühn E, Sassenberg U, Foth M, Franz EA, van der Meer E. On the relationship between fluid intelligence, gesture production, and brain structure. Intelligence 2010; 38: 193–201.
- 9. Zeidner M, Matthews G. Intelligence and Personality. In Sternberg RJ (Ed.), Handbook of Intelligence (pp.518–610). New York: Cambridge University Press; 2000.
- 10. Mayer JD, Caruso D, Salovey P. Emotional intelligence meets traditional standards for an intelligence. Intelligence 1999; 27: 267–298.
- 11. Mayer JD, Salovey P, Caruso DR, Sitarenios G. Measuring emotional intelligence with the MSCEIT V2.0. Emotion 2003; 3: 97–105. pmid:12899321
- 12. Fiori M, Antonakis J. Selective attention to emotional stimuli: What IQ and openness do, and emotional intelligence does. Intelligence 2012; 40: 245–254.
- 13. Burnett S, Sebastian C, Kadosh KC, Blakemore S-J. The social brain in adolescence: evidence from functional magnetic resonance imaging and behavioural studies. Neurosci Biobehav Rev 2011; 35: 1654–1664. pmid:21036192
- 14. Casey BJ, Duhoux S, Cohen MM. Adolescence: what do transmission, transition, and translation have to do with it? Neuron 2010; 67: 749–760. pmid:20826307
- 15. Zeidner M, Shani-Zinovich I, Matthews G, Roberts RD. Assessing emotional intelligence in gifted and non-gifted high school students: outcomes depend on the measure. Intelligence 2005; 33: 369–391.
- 16. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. J Cogn Neurosci 1996; 8: 551–565. pmid:20740065
- 17. Campanella S, Quinet P, Bruyer R, Crommelinck M, Guerit JM. Categorical perception of happiness and fear facial expressions: an ERP study. J Cogn Neurosci 2002; 14: 210–227. pmid:11970787
- 18. Luo W, Feng W, He W, Wang NY, Luo YJ. Three stages of facial expression processing: ERP study with rapid serial visual presentation. Neuroimage 2010; 49: 1857–1867. pmid:19770052
- 19. Pourtois G, Dan ES, Grandjean D, Sander D, Vuilleumier P. Enhanced extrastriate visual response to bandpass spatial frequency filtered fearful faces: Time course and topographic evoked-potentials mapping. Hum Brain Mapp 2005; 26: 65–79. pmid:15954123
- 20. Stefanics G, Csukly G, Komlósi S, Czobor P, Czigler I. Processing of unattended facial emotions: A visual mismatch negativity study. Neuroimage 2012; 59: 3042–3049. pmid:22037000
- 21. Astikainen P, Hietanen JK. Event-related potentials to task-irrelevant changes in facial expressions. Behav Brain Funct 2009; 5: 30. pmid:19619272
- 22. Astikainen P, Cong F, Ristaniemi T, Hietanen JK. Event-related potentials to unattended changes in facial expressions: detection of regularity violations or encoding of emotions? Front Hum Neurosci 2013; 7: 557. pmid:24062661
- 23. Chang Y, Xu J, Shi N, Zang B, Zhao L. Dysfunction of processing task-irrelevant emotional faces in major depressive disorder patients revealed by expression-related visual MMN. Neurosci Lett 2010; 472: 33–37. pmid:20117175
- 24. Fujimura T, Okanoya K. Event-related potentials elicited by pre-attentive emotional changes in temporal context. PLoS One 2013; 8: e63703. pmid:23671693
- 25. Gayle LC, Gal D, Kieffaber PD. Measuring affective reactivity in individuals with autism spectrum personality traits using the visual mismatch negativity event-related brain potential. Front Hum Neurosci 2012; 6:334. pmid:23267324
- 26. Kimura M, Kondo H, Ohira H, Schröger E. Unintentional temporal context-based prediction of emotional faces: an electrophysiological study. Cereb Cortex 2012; 22: 1774–1785. pmid:21945904
- 27. Kreegipuu K, Kuldkepp N, Sibolt O, Toom M, Allik J, Näätänen R. vMMN for schematic faces: automatic detection of change in emotional expression. Front Hum Neurosci 2013; 7: 714.
- 28. Li X, Lu Y, Sun G, Gao L, Zhao L. Visual mismatch negativity elicited by facial expressions: new evidence from the equiprobable paradigm. Behav Brain Funct 2012; 8: 7. pmid:22300600
- 29. Susac A, Ilmoniemi RJ, Pihko E, Supek S. Neurodynamic studies on emotional and inverted faces in an oddball paradigm. Brain Topogr 2004; 16: 265–268. pmid:15379225
- 30. Zhao L, Li J. Visual mismatch negativity elicited by facial expressions under non-attentional condition. Neurosci Lett 2006; 410: 126–131. pmid:17081690
- 31. Cattell RB. Theory of fluid and crystallized intelligence: a critical experiment. J Educ Psychol 1963; 54: 1–22.
- 32. Raven JC, Court JH, Raven J. Raven’s Progressive Matrices and Vocabulary Scales. New York: Psychological Corporation; 1977.
- 33. Conway ARA, Cowan N, Bunting MF, Therriault DJ, Minkoff SRB. A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence. Intelligence 2002; 30:163–183.
- 34. Duncan J, Seitz RJ, Kolodny J, Bor D, Herzog H, Ahmed A, et al. A neural basis for general intelligence. Science 2000; 289: 457–460. pmid:10903207
- 35. Bradley RH, Corwyn RF. Socioeconomic status and child development. Annu Rev Psychol 2002; 53: 371–399. pmid:11752490
- 36. Turkheimer E, Haley A, Waldron M, D’Onofrio B, Gottesman II. Socioeconomic status modifies heritability of IQ in young children. Psychol Sci 2003; 14:623–628. pmid:14629696
- 37. Czigler I. Visual mismatch negativity: violation of nonattended environmental regularities. J Psychophysiol 2007; 21: 224–230.
- 38. Czigler I. Representation of regularities in visual stimulation: Event-related potentials reveal the automatic acquisition. In Czigler I. & Winkler I. (Eds.), Unconscious Memory Representation in Perception (pp. 107–132). Amsterdam: John Benjamins Publishing Company; 2010.
- 39. Stefanics G, Kremláček J, Czigler I. Visual mismatch negativity: a predictive coding view. Front Hum Neurosci 2014; 8: 666.
- 40. Stefanics G, Astikainen P, Czigler I. Visual mismatch negativity (vMMN): a prediction error signal in the visual modality. Front Hum Neurosci 2015; 8: 1074. pmid:25657621
- 41. Rowe G, Hirsh JB, Anderson AK. Positive affect increases the breadth of attentional selection. Proc Natl Acad Sci U S A 2007; 104: 383–388. pmid:17182749
- 42. Barrett LF, Kensinger EA. Context is routinely encoded during emotion perception. Psychol Sci 2010; 21:595–599. pmid:20424107
- 43. Dahl RE. Adolescent development and the regulation of behavior and emotion: introduction to part VIII. Ann N Y Acad Sci 2004; 1021: 294–295. pmid:15251899
- 44. Scherf KS, Luna B, Avidan G, Behrmann M. What precedes which: developmental neural tuning in face- and place-related cortex. Cereb Cortex 2011; 21: 1963–1980.
- 45. Liu T, Xiao T, Li X, Shi J. Neural mechanism of facial expression perception in intellectually gifted adolescents. Neurosci Lett 2015; 592: 22–26. pmid:25736949
- 46. Deary IJ, Taylor MD, Hart CL, Wilson V, Smith GD, Blane D, et al. Intergenerational social mobility and mid-life status attainment: Influences of childhood intelligence, childhood social factors, and education. Intelligence 2005; 33: 455–472.
- 47. Konstantopoulos S, Modi M, Hedges LV. Who are America’s gifted? Am J Educ 2001; 109: 344–382.
- 48. Lubinski D, Benbow C. States of excellence. Am Psychol 2000; 55: 137–150. pmid:11392857
- 49. Strenze T. Intelligence and socioeconomic success: A meta-analytic review of longitudinal research. Intelligence 2007; 35: 401–426.
- 50. Csukly G, Stefanics G, Komlósi S, Czigler I, Czobor P. Emotion-related visual mismatch responses in schizophrenia: impairments and correlations with emotion recognition. PLoS One 2013; 8: e75444. pmid:24116046
- 51. Colom R, Burgaleta M, Román FJ, Karama S, Álvarez-Linera J, Abad FJ, et al. Neuroanatomic overlap between intelligence and cognitive factors: Morphometry methods provide support for the key role of the frontal lobes. Neuroimage 2013; 72: 143–152. pmid:23357078
- 52. Bush G, Luu P, Posner MI. Cognitive and emotional influences in anterior cingulate cortex. Trends Cogn Sci 2000; 4: 215–222. pmid:10827444
- 53. Stefanics G, Czigler I. Automatic prediction error responses to hands with unexpected laterality: an electrophysiological study. Neuroimage 2012; 63: 253–261. pmid:22776450
- 54. Todd J, Myers R, Pirillo R, Drysdale K. Neuropsychological correlates of auditory perceptual inference: a mismatch negativity (MMN) study. Brain Res 2010; 1310: 113–123. pmid:19914219
- 55. Toyomaki A, Kusumi I, Matsuyama T, Kako Y, Ito K, Koyama T. Tone duration mismatch negativity deficits predict impairment of executive function in schizophrenia. Prog Neuropsychopharmacol Biol Psychiatry 2008; 32: 95–99. pmid:17764800
- 56. Liu T, Shi J, Zhang Q, Zhao D, Yang J. Neural mechanisms of auditory sensory processing in children with high intelligence. Neuroreport 2007; 18: 1571–1575. pmid:17885604
- 57. Beauchamp CM, Stelmack RM. The chronometry of mental ability: an event-related potential analysis of an auditory oddball discrimination task. Intelligence 2006; 34: 571–586.
- 58. Houlihan M, Stelmack RM. Mental ability and mismatch negativity: Pre-attentive discrimination of abstract feature conjunctions in auditory sequences. Intelligence 2012; 40: 239–244.
- 59. Sculthorpe LD, Stelmack RM, Campbell KB. Mental ability and the effect of pattern violation discrimination on P300 and mismatch negativity. Intelligence 2009; 37: 405–411.
- 60. Troche SJ, Houlihan ME, Stelmack R, Rammsayer TH. Mental ability and the discrimination of auditory frequency and duration change without focused attention: An analysis of mismatch negativity. Pers Individ Dif 2010; 49: 228–233.
- 61. Duncan J. Intelligence tests predict brain response to demanding task events. Nat Neurosci 2003; 6:207–208. pmid:12601376
- 62. Gray JR, Chabris CF, Braver TS. Neural mechanisms of general fluid intelligence. Nat Neurosci 2003; 6: 316–322. pmid:12592404
- 63. Green AE, Kraemer DJM, DeYoung CG, Fossella JA, Gray JR. A gene-brain-cognition pathway: prefrontal activity mediates the effect of COMT on cognitive control and IQ. Cereb Cortex 2013; 23: 552–559. pmid:22368081
- 64. Liu T, Xiao T, Shi J, Zhao D. Response preparation and cognitive control of highly intelligent children: a Go-Nogo event-related potential study. Neuroscience 2011; 180: 122–128.
- 65. Liu T, Xiao T, Shi J, Zhao D, Liu J. Conflict control of children with different intellectual levels: an ERP study. Neurosci Lett 2011; 490: 101–106.
- 66. Deary IJ, Penke L, Johnson W. The neuroscience of human intelligence differences. Nat Rev Neurosci 2010; 11: 201–211. pmid:20145623
- 67. Gray JR, Thompson PM. Neurobiology of intelligence: Science and ethics. Nat Rev Neurosci 2004; 5: 471–482. pmid:15152197
- 68. Campanella S, Falbo L, Rossignol M, Grynberg D, Balconi M, Verbanck P, et al. Sex differences on emotional processing are modulated by subclinical levels of alexithymia and depression: a preliminary assessment using event-related potentials. Psychiatry Res 2012; 197: 145–153. pmid:22397916