
An extremely fast neural mechanism to detect emotional visual stimuli: A two-experiment study

Abstract

Defining the brain mechanisms underlying the initial evaluation of emotional stimuli is a key but largely unexplored step toward understanding affective processing. Event-related potentials (ERPs), especially suited for investigating this issue, were recorded in two experiments (n = 36 and n = 35). We presented emotionally negative (spiders) and neutral (wheels) silhouettes homogenized with regard to their visual parameters. In Experiment 1, stimuli appeared at fixation or in the periphery (200 trials per condition and location), the former eliciting an N40 (39 milliseconds) and a P80 (or C1: 80 milliseconds) component, and the latter only a P80. In Experiment 2, stimuli were presented only at fixation (500 trials per condition). Again, an N40 (45 milliseconds) was observed, followed by a P100 (or P1: 105 milliseconds). Analyses revealed significantly greater N40-C1P1 peak-to-peak amplitudes for spiders in both experiments, and ANCOVAs showed that these effects were not explained by C1P1 alone: processes underlying N40 also contributed significantly. Source analyses pointed to V1 as an N40 focus (more clearly in Experiment 2). Sources for C1P1 included V1 (P80) and V2/LOC (P80 and P100). These results and their timing point to low-order structures (such as visual thalamic nuclei or the superior colliculi), or to the visual cortex itself, as candidates for initial evaluation structures.

Introduction

Despite the growing interest in and knowledge of the neural mechanisms sustaining emotional processing, several basic, key issues remain far from understood. One of them, especially relevant in evolutionary terms, is how the brain deals so rapidly with emotional stimulation, organizing behavioral reactions that, in some circumstances, occur within four- or five-tenths of a second (e.g., [1]). An obvious and necessary prior neural process is detecting emotional stimuli or, in other words, initially evaluating the incoming sensory input and marking it, if pertinent, as dangerous, appetitive, or, in general, affectively loaded. These initial evaluation structures (IESs) remain undefined, although several hypotheses have been proposed. An intimately related and unsolved issue is the latency at which these structures can elicit electrophysiological traces of their evaluative activity, by themselves or through their cortical projections. Crucially, this latency may help to reinforce some of the proposals on IESs over others.

For example, the detection of subliminal facial expressions by the amygdala, often conceptualized as a core IES (e.g., see reviews or meta-analyses in [2–5]), takes more than 250 milliseconds (ms) to be reflected in the visual cortex [6]. However, the visual cortex shows greater activity to emotional stimuli than to neutral ones before 100 ms [7]. Thus, it is improbable that the amygdala mediates this early visual cortical activity. Moreover, while emotional faces elicit an increased response in the amygdala as compared to neutral faces as soon as 74 ms, amygdalar discrimination of emotional non-facial stimuli occurs beyond 150 ms from stimulus onset [8]. Again, evidence exists of visual cortex discrimination of emotional non-facial stimuli before 100 ms [7]. In other words, current data point to candidates other than the amygdala as IESs. An alternative hypothesis is that initial (albeit rudimentary) evaluation may reside in faster (≈30 ms) first-order structures (i.e., those receiving direct visual inputs from the retina). This low-order IES hypothesis [9] points to the visual thalamus, mainly the lateral geniculate nucleus (LGN)-thalamic reticular nucleus (TRN) tandem, which recent work reveals to be active processors of the visual input rather than passive relays, as traditionally assumed (see reviews in [10, 11]), with the contribution of other first- and second-order thalamic and non-thalamic nuclei such as the superior colliculus (e.g., [12]). Finally, a third hypothesis is that the visual cortex itself is the initial evaluator, its activity not being relevantly mediated by any previous evaluation process [13].

Thus, the latency of the first biased response to emotional stimuli in the visual cortex indirectly informs on the nature of the IES involved. In particular, shorter latencies would support non-amygdalar hypotheses. The best non-invasive methodologies for studying the response latencies of human neural processes are the magnetoelectric ones (EEG and MEG), whose temporal resolution is much higher than that of other non-invasive neuroimaging techniques. In the visual domain, the earliest trace in event-related potentials (ERPs), one of the neural signals the EEG provides, signaling cortical processing is usually attributed to the C1 component. This component initiates at ≈60 ms from stimulus onset, peaks at ≈80 ms, and originates mainly in the striate cortex or V1 [14, 15], although contributions of V2 and V3 have also been proposed [14, 16]. Importantly, the C1 peak shows enhanced amplitudes in response to emotional facial expressions as compared to neutral ones [17–20]. Non-facial, consciously perceived emotional stimuli of a pictorial nature have not been explored in this respect, but emotional effects have been observed in response to affective words between 50 and 100 ms [20].

Although less frequently, visual ERP components preceding C1 have also been reported in the research domain [21–27]. It is important to note, however, that visual ERP components as early as 30 ms are well established in clinical practice in response to visual flashes [28]. This pre-C1 activity, which lacks a consensual nomenclature, will be referred to as N40 hereafter. The scarce data available on the origin of N40 point to V1 as one of its sources. Indeed, the onset of V1 activity once the geniculo-cortical inputs arrive may occur as early as 18–20 ms in the macaque monkey [29, 30], with an average reported latency of 26 ms [30]. An extrapolation to humans following the rough 3/5 ratio characterizing macaque vs. human latencies (e.g., [31]) yields an approximate latency of 40 ms in our species. The contribution of V1 to the generation of N40 has also been reported in humans [32], along with thalamic sources [33]. Interestingly, N40 has been shown to be modulated by attention, so it appears to be sensitive to cognitive factors [25, 33]. Modulation of such early ERP activity by emotional stimuli has not been explored yet, however. Thus, the scarce studies exploring these extremely early electrophysiological traces of visual processing have employed non-emotional stimuli. Conversely, ERP studies presenting emotional stimuli have not been designed to explore N40, which presents a relatively low amplitude and hence a low signal-to-noise ratio (SNR).

We carried out two experiments to explore whether ERP activity originating in the visual cortex during the first 100 ms is enhanced by non-facial, supraliminal, emotional visual stimuli, in order to advance the characterization of IESs. To this aim, we introduced several methodological implementations that help enhance the SNR of early visual ERP components. First, the number of trials was larger than usual in ERP research on emotional stimulation. Second, stimuli presented Gestalt characteristics such as closed contours and compact shape (they consisted of silhouettes), since these are optimal for increasing the response of contour-sensitive neurons present in V1 and V2 (e.g., [34]). Third, in Experiment 1, stimuli were presented at several spatial locations, given that cognitive (attentional) and emotional effects on early visual ERP components (such as C1) may be modulated (and even neutralized) depending on their position in the visual field (attentional effects: [35]; emotional effects: [7]). Experiment 2 employed only stimuli at fixation, given the results of Experiment 1, which allowed for a further increase in the number of trials. This second experiment also included several design modifications to control for potential alternative explanations of the effects observed in Experiment 1.

Materials and methods

Data and supplemental material availability

The data associated with both experiments are available at https://osf.io/9bc2y. Supplemental material mentioned hereafter is also available at that link.

Participants

Forty-four individuals participated in Experiment 1, although data from only 36 of them could eventually be analyzed, as explained later. These 36 participants (age range 18 to 24 years, mean = 19.46, SD = 1.14, 29 women) were Psychology students who provided their written informed consent and received academic compensation for their participation. In the case of Experiment 2, the initial sample consisted of thirty-nine individuals, none of whom had participated in Experiment 1. The data from only 35 of them (age range 17 to 24 years, mean = 19.68, SD = 2.11, 30 women) could eventually be analyzed, as explained later. They also provided their written informed consent (the consent of the only participant under the legal age of majority in Spain, 18 years, was also signed by one of the parents).

In both cases, sample sizes allowed a statistical power of at least 0.8 to be reached for comparisons of two dependent means (spiders vs. wheels, in this case), assuming medium effect sizes, which are usual in studies on early ERPs (computations were carried out with G*Power [36]). Both experiments were designed in accordance with the Declaration of Helsinki and had been previously approved by the Universidad Autónoma de Madrid's Ethics Committee. Participants attended the laboratory between November 10 and 30, 2020 (Experiment 1) and between September 18 and November 20, 2023 (Experiment 2).
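The underlying computation can be sketched without G*Power: for a paired t-test, power follows from the noncentral t distribution. The snippet below is a sketch, not the authors' actual computation; it assumes a two-tailed test, alpha = 0.05, and a medium effect size of Cohen's dz = 0.5.

```python
import numpy as np
from scipy import stats

def paired_t_power(n, d, alpha=0.05):
    """Power of a two-tailed paired t-test for effect size d (Cohen's dz)."""
    df = n - 1
    ncp = d * np.sqrt(n)                      # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-tailed critical value
    # probability of exceeding either critical bound under H1
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

def required_n(d, power=0.8, alpha=0.05):
    """Smallest n reaching the target power (brute-force search)."""
    n = 2
    while paired_t_power(n, d, alpha) < power:
        n += 1
    return n

print(required_n(0.5))   # smallest sufficient n for a medium effect
```

Under these assumptions the minimal sufficient sample is 34 participants, so samples of n = 36 and n = 35 exceed the 0.8 target.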

Stimuli (common to Experiments 1 and 2)

Participants were seated in an electrically shielded, sound-attenuated room. They were asked to place their chin on a chinrest kept at a fixed distance (40 cm) from the screen (VIEWpixx®, 120 Hz) throughout the experiment. The presentation software was Psychtoolbox 3, and the communication between the stimulation PC and the Biosemi© EEG recording system ran over optic fiber. The inevitable lag between the marks signaling stimulus onsets ('triggers') in the EEG recordings and the actual onset on the screen was measured with a photoelectric sensor, as described in https://www.youtube.com/watch?v=0BPwcciq8u8, and corrected during pre-processing.

Two types of stimuli were presented to participants (Fig 1): 20 emotional silhouettes (spiders) and 20 neutral ones (wheels), all in black over a white background. The size of the stimuli (figure + ground) was 14° × 14°. Spiders are among the top five most feared animals [37] and cause the most prevalent animal-related phobia [38]. Indeed, spiders are rated as negatively valenced stimuli by relatively large samples in emotional picture databases (e.g., IAPS: [39]; EmoMadrid: [40]). To test whether spider silhouettes were also effective as negatively valenced stimuli, and wheels as neutral, they were previously evaluated by an independent sample of 447 participants (397 women, mean age = 19.51, SD = 1.46), who rated their emotional valence on a 7-point Likert scale ranging from "very negative" (1) to "very positive" (7). Spiders were rated as negative (mean = 1.704, standard error of the mean [SEM] = 0.038) and wheels as neutral (i.e., in the intermediate values of the scale: mean = 3.918, SEM = 0.030). The difference between the two categories was highly significant (F(1,446) = 2557.289, p < 0.001, ƞ2p = 0.852).

Fig 1. Experimental design.

A: Size and possible locations of the stimulus in each trial. B: Schematic representation of one portion of the stimulus sequence. One of the exemplars of spider and wheel probes are depicted, each in one of the five possible spatial locations.

https://doi.org/10.1371/journal.pone.0299677.g001

As mentioned in the introduction, stimuli presenting Gestalt characteristics such as closed contours or compact shape, as is the case with silhouettes, are optimal for increasing the response of contour-sensitive neurons present in V1 and V2 [34]. Moreover, the use of black silhouettes over a white background inherently equalizes color and contrast, which may influence early visual ERPs (color: [41]; contrast: [42]), across experimental categories. Luminosity (i.e., figure surface against background) and the spatial frequency of the silhouettes, which may also influence the ERP components of interest (luminosity: [43]; spatial frequency: [44]), were manipulated so that they did not significantly differ between categories (spiders vs. wheels). Details on these two low-level characteristics and the statistical contrasts, as well as the stimuli themselves, are provided in EmoMadrid (https://www.psicologiauam.es/CEACO/EmoMadrid/EMsiluetas.htm). In sum, the only visual parameter besides emotional meaning clearly differing between spiders and wheels was their shape, which in any case shared certain key characteristics across categories (e.g., wheel spokes may resemble spider legs and vice versa). More importantly, shape per se has been reported to first affect ERPs at latencies longer than those explored in this study [45–47].
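As an illustration of how such a low-level check can be performed, the sketch below computes the figure surface (fraction of black pixels) of two toy binary images; the shapes are hypothetical stand-ins, not the actual EmoMadrid silhouettes.

```python
import numpy as np

def figure_surface_ratio(silhouette):
    """Fraction of the image covered by the (black) figure.

    `silhouette` is a 2-D array where 0 = black figure and 255 = white
    ground, mimicking the black-over-white stimuli described in the text.
    """
    return float(np.mean(silhouette == 0))

# Hypothetical toy stimuli: a filled square "wheel" and a sparser "spider"
wheel = np.full((100, 100), 255)
wheel[30:70, 30:70] = 0          # 40 x 40 black figure
spider = np.full((100, 100), 255)
spider[45:55, 20:80] = 0         # 10 x 60 black figure

print(figure_surface_ratio(wheel), figure_surface_ratio(spider))
```

Comparing such ratios across the two categories (e.g., with a t-test over all exemplars) is one way to verify that luminosity does not differ systematically.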

Procedures

Experiment 1.

Each spider and wheel appeared 10 times, in random order, in one of the five locations depicted in Fig 1: one at fixation (FIX) and four peripheral (the center of each peripheral position was 32.5° from the center of the screen). The peripheral positions were upper-left visual field (UL), upper-right (UR), lower-left (LL), and lower-right (LR). This resulted in 200 trials per emotional category and location (20 exemplars × 10 presentations), for a total of 2000 trials (200 × 2 categories × 5 locations). Each stimulus, whatever its location, was displayed for 150 ms, and the inter-trial interval (ITI) was 850 ms. Participants were instructed to look at all times at the fixation dot at the center of the screen, which was marked with a blue circle (0.3° radius, RGB = 0, 0, 255) during the inter-stimulus intervals. The whole stimulus sequence lasted ≈32 minutes, so it was divided into eight blocks to provide brief rest periods. To keep attention constantly engaged on the stimulation, the inter-stimulus fixation dot randomly changed its color from blue to red (255, 0, 0) in 1 to 5 trials per block (0.5–2.5% of trials per block), and participants were instructed to mentally count these changes and report the total number after each block (this sum differed from block to block). None of the participants deviated by more than one color change from the correct answer in any block. As explained later, each red-dot trial, and the next one, were removed before analyses.

Experiment 2.

Several methodological changes were introduced in Experiment 2. First, further increasing the number of trials was considered a priority to boost the SNR: each category (spiders and wheels) was presented 500 times (vs. 200 in Experiment 1). Second, and also to increase this ratio, a jitter was added to the ITI to avoid phase-locking of alpha EEG activity to the stimulus presentation rate. Thus, the ITI took five different durations (750, 800, 850, 900, and 950 ms), the average being 850 ms (as in Experiment 1's fixed ITI). Third, we changed the color of the fixation dot. In Experiment 1, the contrast between the fixation dot (dark blue) and either the figure (black) or the background (white) was unbalanced between spiders and wheels when they were presented at fixation: the dot was surrounded by the figure (black, i.e., less contrast) in 85% of the spider trials but only 25% of the wheel trials. Thus, the same stimuli were presented, but the fixation dot was now grey (RGB: 128, 128, 128), equidistant from white (255, 255, 255) and black (0, 0, 0), to rule out any influence of this dot-stimulus contrast on the observed effects. And fourth, Experiment 2 presented stimuli only at fixation, given that in Experiment 1 peripheral stimuli failed to elicit the N40 component and failed to show significant differences between spiders and wheels in P80, as described and discussed in the Results and Discussion sections.
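The jittered ITI can be sketched as follows; the sketch assumes, hypothetically, that the five durations (750 to 950 ms in 50 ms steps) were drawn equally often, which makes the mean match Experiment 1's fixed 850 ms ITI.

```python
import random

# Assumed ITI durations (ms); their mean is 850 ms
ITIS_MS = [750, 800, 850, 900, 950]

def jittered_itis(n_trials, seed=0):
    """Draw one ITI per trial, balanced across the five durations."""
    rng = random.Random(seed)
    # assign durations in balanced fashion, then shuffle their order
    seq = [ITIS_MS[i % len(ITIS_MS)] for i in range(n_trials)]
    rng.shuffle(seq)
    return seq

seq = jittered_itis(1000)
print(sum(seq) / len(seq))  # mean ITI across the run
```

Balancing before shuffling guarantees the empirical mean equals 850 ms whenever the trial count is a multiple of five.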

The stimuli were identical to those employed in Experiment 1 and maintained the same size and duration, but were presented only at fixation, as indicated. As in Experiment 1, the presentation order was random and the total run (≈16 minutes) was divided into blocks, four in this case. In each of them, the fixation dot (grey in this experiment) changed to red in 1 to 5 trials per block (0.5–2.5% of trials per block), and the task again consisted of mentally counting the number of changes to red and reporting it at the end of each block. The data from one of the discarded participants (see Participants section) were excluded due to a deviation of more than one from the correct number of changes in one of the blocks (no other participant exceeded this deviation limit in any block).

Recording and pre-processing (common to Experiments 1 and 2)

Electroencephalographic (EEG) activity was recorded using an active electrode cap (Biosemi®) with Ag-AgCl electrodes, in which the EEG signal is preamplified at the electrode. Sixty-four electrodes were placed on the scalp following a homogeneous distribution (see Supplemental material) and the international 10–20 system. Following the BioSemi design, the voltage at each active electrode was recorded with respect to a common mode sense (CMS) active electrode, with a passive electrode (DRL) replacing the ground electrode. All scalp electrodes were referenced offline to the nose tip. Electrooculographic (EOG) data were recorded supra- and infraorbitally (vertical EOG), as well as from the left versus right orbital rim (horizontal EOG), to detect blinks and ocular deviations from the fixation point. An online analog low-pass filter was set at 104 Hz (5th-order CIC filter), with no high-pass filter. Recordings were continuously digitized at a sampling rate of 512 Hz. An offline digital Butterworth band-pass filter of 0.01 to 30 Hz (2nd order, zero-phase forward-and-reverse, i.e., two-pass, filter) was applied to the continuous (pre-epoched) data using the Fieldtrip software (http://fieldtrip.fcdonders.nl; [48]). Setting the high-pass cutoff at 0.1 Hz or less has been recommended for studying early ERP components [49]. The continuous recording was divided into 300 ms epochs per trial, beginning 100 ms before probe stimulus onset.
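The offline band-pass stage was run in Fieldtrip (MATLAB); a rough Python equivalent of a 2nd-order, zero-phase (two-pass) Butterworth band-pass at 0.01-30 Hz and 512 Hz sampling might look as follows (a sketch, not the authors' code).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 512.0  # sampling rate (Hz), as in the recordings

# 2nd-order Butterworth band-pass (0.01-30 Hz), applied forward and reverse
# (zero-phase, two-pass). Second-order sections are used for numerical
# stability given the very low 0.01 Hz edge.
sos = butter(2, [0.01, 30.0], btype="bandpass", fs=FS, output="sos")

def zero_phase_bandpass(x):
    """Filter a 1-D continuous EEG trace without phase distortion."""
    return sosfiltfilt(sos, x)

# Toy check: a 10 Hz component should pass, a 100 Hz component should not
t = np.arange(0, 4, 1 / FS)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 100 * t)
y = zero_phase_bandpass(x)
```

Applying the filter to the continuous (pre-epoched) data, as the text specifies, avoids the edge artifacts that short epochs would introduce at such a low high-pass cutoff.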

EEG epochs corresponding to trials in which the fixation dot changed its color (see the previous section), as well as those corresponding to the subsequent trial, were eliminated to avoid effects of this control task, which was irrelevant to our aims. Blink-derived artifacts were removed through an independent component analysis (ICA)-based strategy [50], as provided in Fieldtrip. After the ICA-based removal process, a second stage of inspection of the EEG data was conducted to automatically discard trials in which any EEG channel exceeded ±100 μV and/or its global amplitude (i.e., maximum minus minimum amplitude) deviated from the across-trial average by more than 3.5 standard deviations. The minimum number of trials accepted for averaging was 150 per participant and condition (i.e., each category presented at each location). Data from six of the discarded participants in Experiment 1 (see Participants section) were eliminated because they did not meet this criterion, and the other two had to be discarded because of data storage issues. In Experiment 1, this trial and participant rejection procedure led to the average admission of 179 (SD = 7), 180 (8), 181 (8), 182 (8), and 181 (7) trials at each of the five locations in the case of spiders, and of 181 (SD = 8), 181 (7), 180 (8), 181 (7), and 180 (9) in the case of wheels, the difference between stimulus categories being non-significant (F(9,315) = 0.915, p = 0.512, ƞ2p = 0.025). For Experiment 2, the criterion was set at a minimum of 400 trials, and three of the discarded participants did not meet it. In this experiment, the trial and participant rejection procedure led to the average admission of 451 trials (SD = 16) in the case of spiders and 452 (SD = 15) in the case of wheels, the difference between stimulus categories being non-significant (F(1,34) = 0.137, p = 0.714, ƞ2p = 0.004).
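The two automatic rejection criteria can be sketched as follows (a simplified stand-in for the Fieldtrip-based pipeline; the array layout is an assumption, the thresholds are those described above).

```python
import numpy as np

def reject_trials(epochs, abs_limit=100.0, sd_limit=3.5):
    """Flag trials to discard, following the two criteria in the text.

    `epochs` has shape (n_trials, n_channels, n_samples), in microvolts.
    A trial is flagged if any channel exceeds +/- abs_limit uV, or if any
    channel's global amplitude (max minus min) deviates from that channel's
    across-trial mean by more than sd_limit standard deviations.
    """
    over_abs = np.abs(epochs).max(axis=2) > abs_limit      # (trials, channels)
    ptp = epochs.max(axis=2) - epochs.min(axis=2)          # global amplitude
    mu, sd = ptp.mean(axis=0), ptp.std(axis=0)
    over_sd = np.abs(ptp - mu) > sd_limit * sd
    return (over_abs | over_sd).any(axis=1)                # True = reject
```

Keeping the two masks separate also makes it easy to report how many trials each criterion removed.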

Data analysis

Experiment 1.

First, recordings were baseline-corrected using the 100 ms prestimulus interval. Next, we proceeded to identify and quantify the first visual component of the ERPs. As illustrated in Fig 2 (where only the final 50 ms portion of the baseline appears for graphical purposes; grand averages including the whole baseline are available in Supplemental material), an N40 component is visible in the grand averages in response to FIX stimuli, being less evident in response to peripheral stimuli. To objectively confirm the existence of the N40 component, we determined whether amplitudes greater than the typical baseline amplitude existed in its corresponding time window, considering at the same time that N40 typically presents a very low SNR, so overly stringent criteria could mask this component. Thus, for each condition and within the 30–60 ms interval, the occurrence of N40 was confirmed when at least two neighboring channels (within the relevant scalp region, i.e., the posterior hemiscalp) presented at least two consecutive voltage points whose amplitude was beyond ±1.5 times the standard deviation of the corresponding baseline. This procedure revealed that N40 occurred in the FIX conditions (both spiders and wheels), but not in any of the remaining eight conditions (four peripheral locations × two types of stimuli). To define the N40 peak latency in response to FIX stimuli, recordings at parietal and occipital electrodes, bilaterally, were averaged together to provide a meta-average (Fig 2). The latency of the most negative value of the meta-average between 30 and 60 ms was defined as the N40 peak, which was 39 ms. N40 amplitude to FIX stimuli was therefore individually quantified as the average amplitude within the 36 to 42 ms window of interest (WOI).
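A minimal sketch of this confirmation rule might look as follows; the channel neighborhood structure and the time axis are hypothetical placeholders, not the actual montage.

```python
import numpy as np

def n40_present(erp, times, baseline_sd, neighbors, k=1.5):
    """Sketch of the N40 confirmation rule described in the text.

    erp: (n_channels, n_samples) condition average; times: latencies in ms;
    baseline_sd: per-channel SD of the baseline; neighbors: dict mapping each
    posterior channel index to the indices of its neighboring channels.
    N40 is confirmed if two neighboring channels each show at least two
    consecutive samples beyond +/- k * baseline SD within 30-60 ms.
    """
    win = (times >= 30) & (times <= 60)

    def channel_hit(ch):
        dev = np.abs(erp[ch, win]) > k * baseline_sd[ch]
        return bool(np.any(dev[:-1] & dev[1:]))  # two consecutive points

    hits = {ch for ch in neighbors if channel_hit(ch)}
    return any(ch in hits and any(n in hits for n in nbrs)
               for ch, nbrs in neighbors.items())
```

The rule is deliberately lenient (only two channels, 1.5 SD), in line with the text's concern that stricter criteria could mask a low-SNR component.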

Fig 2. Experiment 1: Windows of interest (WOI) and their outputs, sPCA and source estimation.

WOIs for N40 (blue bar), only found in FIX, and P80 (green bars), patent in all conditions, are represented over meta-averages computed from the electrode sites marked in black in each scalp map (FIX: fixation, LL: lower left, LR: lower right, UL: upper left, UR: upper right). Temporal and amplitude scales are the same for all locations and are defined in FIX. Shadows surrounding meta-average lines represent the standard error of means. Topographic maps of sPCA-derived relevant factor scores corresponding to each WOI are also depicted, as well as source estimations corresponding to WOIs of FIX conditions, which were those finally showing significant effects (see Supplemental material for source estimations in the rest of conditions).

https://doi.org/10.1371/journal.pone.0299677.g002

The next component in time (P80) was also detected and its amplitude quantified in all conditions, since it was patent in all of them, including the peripheral ones (Fig 2). Thus, the P80 peak latency was defined by averaging together recordings at parietal and occipital electrodes, bilaterally in the case of FIX stimuli and contralaterally for peripheral stimuli, to obtain meta-averages (Fig 2). The latency of the most positive value of the meta-averages between 60 and 110 ms was defined as the P80 peak: 80 ms for FIX stimuli (WOI for computing individual amplitudes: 74–86 ms), 88 ms for LL (WOI: 82–94 ms), 86 ms for LR (WOI: 80–92 ms), 90 ms for UL (WOI: 84–96 ms), and 101 ms for UR (WOI: 95–107 ms); see Table 1.

We also measured the differential N40-P80, or peak-to-peak, amplitude, a classical way of computing amplitudes (e.g., [51–53]) that has recently been shown to be useful for exploring early visual ERP components [7]. Two advantages of this measure may be underlined. First, it is less susceptible to data processing settings such as the high-pass filter cutoff frequency, which significantly affects traditional (monophasic) amplitude measures of early components [49, 54], or the length of the baseline, another critical aspect in this regard. Second, it allows quantifying neural processes that transversally affect neighboring components by increasing the absolute amplitude of both. To this aim, the individual difference between P80 and N40 amplitudes (each computed as explained above) was calculated for FIX stimuli (Fig 2), given that peripheral stimuli did not elicit the N40 component.
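In code, the WOI-mean and peak-to-peak quantifications reduce to simple window averages; the sketch below uses the FIX windows reported above (36-42 ms and 74-86 ms) on a single-channel ERP.

```python
import numpy as np

def woi_mean(erp, times, t0, t1):
    """Average amplitude of a single-channel ERP within [t0, t1] ms."""
    return erp[(times >= t0) & (times <= t1)].mean()

def peak_to_peak(erp, times, n40_woi=(36, 42), p80_woi=(74, 86)):
    """N40-P80 peak-to-peak amplitude: P80 WOI mean minus N40 WOI mean.

    Default windows are those of the FIX condition in Experiment 1.
    """
    return woi_mean(erp, times, *p80_woi) - woi_mean(erp, times, *n40_woi)
```

Because the same subtraction removes any constant offset, this measure is insensitive to baseline-length choices, as the text notes.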

To avoid the multiple comparison problem that such analyses are potentially affected by, the experimental effects on N40 and P80 were analyzed by submitting their amplitudes in the corresponding WOIs to a spatial principal component analysis (sPCA) in SPSS 26.0 [55]. This procedure reduces the electrode information (64 levels) to a small number of spatial factors (SFs) explaining, for the whole experimental sample, most of the variance due to the scalp location of the recordings. Importantly, principal component analysis has long been defended as preferable to traditional methodologies for detecting and quantifying ERP components (e.g., [56–59]). In the space domain (sPCA), the main advantage of PCA over classical procedures based on visual inspection of topographies to define regions of interest is that it presents each ERP component separately and in its 'clean' shape, extracting and quantifying it free of the influence of adjacent or subjacent components. Indeed, several neural processes (and hence, several electrical signals) may concur at any given moment, and the recording at any scalp location at that moment is the electrical balance of these different neural processes; such a recording can stymie visual inspection. Spatial PCA, in which the variables are the electrodes and the cases are participants × conditions, separates ERP components along space, each spatial factor ideally reflecting one of the concurrent neural processes occurring at a given moment or temporal interval. Additionally, sPCA provides a reliable division of the scalp into different recording regions. Basically, each region or spatial factor (SF) is formed by the scalp points where recordings tend to covary. As a result, the shape of the sPCA-configured regions is functionally based and scarcely resembles the shape of the geometrically configured regions usually defined by traditional procedures. The spatial factor score, the sPCA-derived single parameter (per participant and condition) in which each SF is quantified, 'summarizes' the behavior of the whole set of electrodes it involves (with different weights) and is linearly related to the original amplitudes.
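For illustration, a generic spatial PCA with varimax rotation (a sketch, not the SPSS routine actually used) can be written as follows, with cases (participants × conditions) as rows and electrodes as columns.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-8):
    """Varimax rotation of a loadings matrix (channels x factors)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R

def spatial_pca(X, n_factors):
    """sPCA sketch: variables = electrodes, cases = participants x conditions.

    X has shape (cases, channels). Returns varimax-rotated loadings and
    factor scores linearly related to the original amplitudes.
    """
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_factors]   # strongest components first
    loadings = vecs[:, order] * np.sqrt(vals[order])
    loadings = varimax(loadings)
    scores = Xc @ np.linalg.pinv(loadings).T     # regression-based scores
    return loadings, scores
```

Each column of the rotated loadings defines one functionally based scalp region, and the corresponding score column is the single per-case value submitted to the statistical contrasts.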

A separate sPCA was applied to each stimulus location (FIX, LL, LR, UL, UR), given i) that N40 was only elicited by FIX stimuli and ii) that the latency of P80 varied across locations, as indicated. Components were selected based on the scree test and subsequently submitted to varimax rotation, which provides optimal performance in sPCA [60]. Factor scores corresponding to the SFs showing a parietal/occipital distribution (bilateral for FIX conditions or contralateral to stimulus location for peripheral conditions), the distribution relevant to early visual ERPs, were then submitted to statistical contrasts. A double contrast strategy was carried out using the JASP software [61]. First, a one-tailed frequentist repeated-measures Student's t-test (one-tailed given that our aim was to detect the earliest trace of sensory gain, i.e., greater activity in visual processing structures, towards emotional stimuli) was carried out introducing Emotion of the probe (spiders, wheels) as factor. Effect sizes in these tests were computed using Cohen's d. Second, Bayesian paired-samples t-tests using the default prior (0.707), corresponding to medium effect sizes, were carried out on the same data to test the likelihood of the data under H1 (spider > wheel) over H0 (spider = wheel) (BF10).
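The double contrast can be reproduced outside JASP; the sketch below pairs a one-tailed frequentist test (scipy) with a numerically integrated JZS Bayes factor (Rouder et al.'s default-prior formulation, scale r = 0.707). Note that this sketch computes the two-sided BF, which differs somewhat from JASP's one-sided BF10.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def one_tailed_paired_t(spiders, wheels):
    """One-tailed paired t-test (H1: spiders > wheels) with Cohen's d."""
    diff = np.asarray(spiders) - np.asarray(wheels)
    t, p_two = stats.ttest_rel(spiders, wheels)
    p_one = p_two / 2 if t > 0 else 1 - p_two / 2
    d = diff.mean() / diff.std(ddof=1)
    return t, p_one, d

def jzs_bf10(t, n, r=0.707):
    """Two-sided JZS Bayes factor for a paired/one-sample t statistic.

    Integrates over the g prior (inverse-gamma(1/2, 1/2)); r = 0.707
    mirrors JASP's default Cauchy prior scale on the effect size.
    """
    v = n - 1
    null_lik = (1 + t**2 / v) ** (-(v + 1) / 2)

    def integrand(g):
        a = 1 + n * g * r**2
        return (a**-0.5 * (1 + t**2 / (a * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g**-1.5 * np.exp(-1 / (2 * g)))

    alt_lik, _ = quad(integrand, 0, np.inf)
    return alt_lik / null_lik
```

BF10 values above 1 favor H1 and values below 1 favor H0, with the conventional 'anecdotal' (1-3) and 'strong' (>10) evidence bands used in the Results.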

Finally, to better characterize N40 and P80, their sources were estimated via the Minimum Norm (MN) method using the current density map algorithm as implemented in Brainstorm v2021 [62]. To this aim, average amplitudes within the WOIs of each component showing significant effects in the previous (statistical contrast) step were submitted to this algorithm (depth weighting order and maximal amount: 0.5 and 10, respectively; noise covariance regularization: 0.1; SNR: 10), which was applied to a realistic cortex model defined through the openMEEG package [63, 64]. Source estimations of the P80 component in conditions insensitive to the experimental treatment (i.e., peripheral conditions) are nevertheless also available in Supplemental material.

Experiment 2.

After baseline (100 ms) correction, the first analytic task was to identify and quantify the first visual component of the ERPs. As may be appreciated in Fig 3 (where only the final 50 ms portion of the baseline is represented for graphical purposes; grand averages showing the complete baseline are available in Supplemental material), an N40 component is clearly visible in the grand averages, so the confirmation procedure carried out in Experiment 1 was not necessary this time. To define the N40 peak latency, recordings at parietal and occipital electrodes, bilaterally, were averaged together to provide a meta-average (Fig 3). The latency of the most negative value of the meta-average between 30 and 60 ms was defined as the N40 peak, which was 45 ms. N40 amplitude was therefore individually quantified as the average amplitude within the 42 to 48 ms WOI. The next component in time (P100) was also quantified after its peak latency was defined from the same meta-average (Fig 3): the most positive value between 70 and 130 ms, which occurred at 105 ms. The WOI defined to quantify the average amplitude of this component was 99 to 111 ms. We also measured the differential N40-P100, or peak-to-peak, amplitude. To this aim, the individual difference between P100 and N40 amplitudes, each computed as explained above, was calculated (Fig 3).

Fig 3. Experiment 2: Windows of interest (WOI) and their outputs, sPCA and source estimation.

WOIs for N40 and P100 (green bars) are represented over meta-averages computed from the electrode sites marked in black in the scalp map. Shadows surrounding meta-average lines represent the standard error of means. Topographic maps of the sPCA-derived relevant factor scores corresponding to each WOI and source estimations corresponding to WOIs are also depicted.

https://doi.org/10.1371/journal.pone.0299677.g003

The same sPCA-based quantification method explained for Experiment 1 was also followed here on the WOIs corresponding to N40 and P100. Likewise, the double contrast strategy (frequentist and Bayesian) described for the previous experiment was performed again, using the same parameters, on the factor scores yielded by the sPCA, introducing Emotion of the probe (spiders, wheels) as factor. Finally, the sources of N40 and P100 were estimated via the Minimum Norm (MN) source localization algorithm following the same specifications as in Experiment 1.

Results

Experiment 1

N40 (stimuli at fixation).

The sPCA was computed on N40 amplitudes to FIX stimuli only, as explained above. This analysis yielded five SFs explaining most of the variance of the 64 electrodes (88.11%). SF2 was the one showing a bilateral occipital/parietal distribution and is therefore the relevant one in this case. Figs 2 and 4 show the topography of N40-SF2, and Fig 4 depicts meta-averaged recordings from representative electrodes of this SF, along with descriptive plots. Its factor scores were then submitted to repeated-measures t-tests, both frequentist and Bayesian, on the factor Emotion (spiders, wheels). As shown in Table 1, the former yielded significant differences (t(35) = -1.747, p = 0.045, d = -0.291), spiders showing more negative N40 scores/amplitudes (Fig 4). However, Bayesian analyses found only anecdotal evidence in favor of H1 (greater, i.e., more negative, N40 amplitude for spiders than for wheels): BF10 = 1.342.

Fig 4. Experiment 1: Descriptive data.

N40, P80, and N40-P80 peak-to-peak factor scores (linearly related to amplitudes) in response to spiders and wheels presented at fixation in relevant spatial factors. Violin plots show individual distribution and line graphs show means and standard error of means (error bars). For illustrative purposes, grand averages (center) are computed from five representative electrodes (marked in red) within the regions of maximal factorial load. Shadows surrounding grand average lines represent the standard error of means.

https://doi.org/10.1371/journal.pone.0299677.g004

The MN source estimation analysis on N40 elicited by FIX stimuli was carried out on the average amplitude within the N40 WOI, as indicated. As illustrated in Fig 2, this analysis yielded V1 as one of the sources (concretely, the caudal apex of the calcarine sulcus), but also other foci at prefrontal areas (Table 2). This disparity of sources may point to a low SNR and possible spurious solutions rather than to widespread cortical activation, an issue that will be discussed later and that was addressed in Experiment 2.

P80 (stimuli at fixation and at the periphery).

P80 was clearly elicited by all stimuli, whatever their location, so sPCAs were applied to all conditions. Table 1 shows the number of SFs extracted for each stimulus spatial location in the case of P80, along with their total explained variance, which exceeded 86% in all cases. The factorial loadings of the SFs showing an occipital/parietal distribution -bilateral for FIX stimuli and contralateral for peripheral stimuli- which were those relevant to our aims (i.e., SF5 for FIX, SF4 for UL, SF2 for UR and LL, and SF3 for LR), are represented in Figs 2 and 4.

Factor scores derived from each SF were subsequently contrasted via repeated-measures T-tests on the factor Emotion (spiders, wheels) and, since five contrasts were carried out (one per stimulus location), alpha was submitted to the Bonferroni adjustment procedure to avoid multiple comparison-derived type I errors. This adjustment set alpha at 0.01 (0.05/5). Spiders elicited significantly greater P80 amplitudes than wheels when presented at fixation (t(35) = 3.525, p<0.001, d = 0.588): Fig 4. In contrast, peripheral conditions did not yield significant spiders>wheels differences: Table 1. On the other hand, Bayesian analyses confirmed strong evidence in favor of H1 (spiders>wheels) for FIX stimuli: BF10 = 53.470, and null evidence (or even strong evidence in favor of H0) for peripheral conditions (Table 1). However, peripheral stimuli were actually perceived and discriminated, as revealed by later ERP components (these analyses and their results are described in Supplemental material, as they fall outside the scope of this study).
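
The contrast-plus-correction scheme can be sketched as below. This is an illustrative example on synthetic data: the effect sizes and factor-score values are invented, and only the frequentist branch is shown; the point is the Bonferroni-adjusted alpha (0.05/5 = 0.01) applied to one paired contrast per stimulus location.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)

n_participants, n_locations = 36, 5
alpha = 0.05 / n_locations  # Bonferroni adjustment: 0.05 / 5 contrasts = 0.01

# Hypothetical paired factor scores (spiders vs. wheels), one row per location.
spiders = rng.normal(0.4, 1.0, size=(n_locations, n_participants))
wheels = rng.normal(0.0, 1.0, size=(n_locations, n_participants))

results = []
for loc in range(n_locations):
    t, p = ttest_rel(spiders[loc], wheels[loc])  # paired (repeated-measures) t-test
    results.append((t, p, p < alpha))            # judged against the adjusted alpha
```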

Source estimation on P80 amplitude to FIX stimuli returned V1, bilaterally, as the main focus of activity (x = -11, y = -105, z = -12), along with bilateral foci in V2/LOC -lateral occipital cortex- (x = -30, y = -101, z = 2), with no other relevant foci in the rest of the cortex (V3, also present in this area, is an unlikely source since its main role is color processing): Table 2 and Fig 2. Supplemental material also includes the source estimation of P80 for peripheral conditions, showing that the main foci were located at visual cortices contralateral to the stimulus location, more dorsally for stimuli presented in the lower visual field and more ventrally for stimuli in the upper visual field.

N40-P80 peak-to-peak amplitudes (stimuli at fixation).

Finally, the N40-P80 differential amplitude was computed as the difference of the amplitudes of both components, each measured as indicated above, in response to FIX stimuli. These differences, calculated for each channel, condition (spider or wheel), and participant, were then submitted to a sPCA. The critical factor in this case was SF5 which, as illustrated in Figs 2 and 4, presented maximal loadings at midline parietal/occipital areas. The repeated-measures T-test contrasting its factor scores as a function of the factor Emotion (spiders, wheels) yielded significantly greater N40-P80 peak-to-peak amplitudes to spiders than to wheels, with a large effect size (t(35) = 4.547, p<0.001, d = 0.758). The Bayesian repeated-measures T-test on these data found ‘extreme’ evidence in favor of H1 (spiders>wheels): BF10 = 764.941.
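
The peak-to-peak computation can be sketched as follows, again on synthetic data. The array shapes, sampling, and window indices here are assumptions for illustration; the real analysis measured each component within its empirically defined WOI before subtracting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical averaged ERPs (microvolts): participants x conditions x channels x samples.
n_subj, n_cond, n_chan, n_samp = 36, 2, 64, 300
erp = rng.normal(size=(n_subj, n_cond, n_chan, n_samp))

# Illustrative windows of interest as sample indices (the real WOIs were
# defined around the N40 and P80 peaks in the recordings).
n40_woi = slice(30, 50)
p80_woi = slice(70, 95)

n40 = erp[..., n40_woi].mean(axis=-1)  # mean amplitude within the N40 WOI
p80 = erp[..., p80_woi].mean(axis=-1)  # mean amplitude within the P80 WOI

# Peak-to-peak difference per channel, condition, and participant; this matrix
# is what would subsequently feed the sPCA.
p2p = p80 - n40
```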

In order to test whether this significant sensitivity of N40-P80 peak-to-peak factor scores or amplitudes could be explained by either N40 or P80 alone (which would make it a redundant result), a repeated-measures ANCOVA was carried out using SPSS 26.0. N40-P80 peak-to-peak amplitudes were introduced as the dependent variable, and N40 and P80 amplitudes, separately, as covariates. Both covariates were significantly related to the N40-P80 peak-to-peak amplitude [N40: F(1, 54.931) = 10.144, p = 0.002; P80: F(1, 62.322) = 133.905, p<0.001]. Indeed, the effect of Emotion of the probe on N40-P80 peak-to-peak amplitude was lost after controlling for both N40 and P80 amplitudes [F(1, 41.437) = 3.598, p = 0.065]. In other words, N40-P80 peak-to-peak effects depend on both N40 and P80 and reflect a neural process transversally affecting both deflections.
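
The logic of this ANCOVA can be sketched with a simplified model on synthetic data. Note the assumptions: the data here are invented, and the sketch uses an ordinary least-squares ANCOVA that ignores the within-subject structure, whereas the actual analysis was a repeated-measures ANCOVA in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical long-format data: one row per participant x condition, with the
# peak-to-peak amplitude as dependent variable and the single-component
# amplitudes as covariates.
n = 36
df = pd.DataFrame({
    "emotion": np.tile(["spider", "wheel"], n),
    "n40": rng.normal(size=2 * n),
    "p80": rng.normal(size=2 * n),
})
# Construct p2p so that it depends on both covariates, plus a little noise.
df["p2p"] = df["p80"] - df["n40"] + rng.normal(scale=0.1, size=2 * n)

# ANCOVA: effect of Emotion on p2p after controlling for N40 and P80 amplitudes.
model = smf.ols("p2p ~ C(emotion) + n40 + p80", data=df).fit()
```

Inspecting `model.summary()` would show, for these synthetic data, covariate coefficients near -1 (n40) and +1 (p80), mirroring the idea that the peak-to-peak measure carries variance from both deflections.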

Experiment 2

The sPCA computed on N40 amplitudes yielded five SFs explaining most of the variance of the 64 electrodes (89.87%). SF2 showed a distribution similar to that of the relevant N40-related spatial factor in Experiment 1. Neither frequentist (t(34) = -0.403, p = 0.345, d = -0.068) nor Bayesian analyses (BF10 = 0.254) indicated significantly greater amplitudes for spiders than for wheels. As for P100, five SFs explained 93.40% of the variance, with SF3 presenting a distribution similar to that of the relevant P80 spatial factor in Experiment 1. Again, both frequentist (t(34) = -0.925, p = 0.819, d = -0.156) and Bayesian contrasts (BF10 = 0.102) failed to find significant spiders>wheels differences. Figs 3 and 5 show the topography of N40-SF2 and P100-SF3.

Fig 5. Experiment 2: Descriptive data.

N40-P100 peak-to-peak factor scores (linearly related to amplitudes) in response to spiders and wheels in the relevant spatial factor. Violin plots show individual distribution and line graphs show means and standard error of means (error bars). For illustrative purposes, grand averages (center) are computed from three representative electrodes (marked in red) within the region of maximal factorial load. Shadows surrounding grand average lines represent the standard error of means.

https://doi.org/10.1371/journal.pone.0299677.g005

As summarized in Table 3, N40-P100 peak-to-peak amplitudes did show significant effects, as in Experiment 1. Six sPCA components were extracted (explaining 96.111% of the variance) and, among them, SF6 showed a midline parietal distribution similar to that of the relevant N40-P80 factor in Experiment 1. The frequentist contrast revealed significantly greater N40-P100 peak-to-peak amplitudes for spiders than for wheels (t(34) = 2.334, p = 0.013, d = 0.394), as did the Bayesian test (BF10 = 3.819). Fig 5 depicts meta-averaged recordings from representative electrodes within SF6, along with descriptive plots. As in Experiment 1, and to test whether this significant sensitivity of N40-P100 peak-to-peak factor scores or amplitudes could be explained by either N40 or P100 alone, a repeated-measures ANCOVA was carried out following the same procedure: N40-P100 peak-to-peak amplitudes were introduced as the dependent variable, and N40 and P100 amplitudes, separately, as covariates. The covariates were differently related to the N40-P100 peak-to-peak amplitude: the relationship was significant in the case of N40 (F(1, 44.581) = 8.425, p = 0.006) but not in the case of P100 (F(1, 64.022) = 1.386, p = 0.243). However, the effect of Emotion of the probe on N40-P100 peak-to-peak amplitude remained significant after controlling for the individual N40 and P100 amplitudes (F(1, 30.596) = 7.464, p = 0.010), suggesting additional mechanisms explaining this effect besides those reflected in these covariates.

The MN source estimation analysis was carried out on the average amplitudes within the N40 and P100 WOIs, as indicated (Table 4). As illustrated in Fig 3, this analysis yielded V1 as the clear main source of N40 (concretely, the caudal apex of the calcarine sulcus: x = -11, y = -105, z = -12). No other sources were observed for N40 in Experiment 2, probably due to the increased SNR provided by the methodological implementations previously mentioned. As for P100, the main source was located at V2/LOC (posterior part of the middle occipital gyrus: x = -45, y = -91, z = -3; see footnote 2). A second source for P100 was located in the superior parietal lobule (x = -26, y = -64, z = 68).

Discussion

Previous studies place the earliest electrophysiological trace of emotional detection at around 80 ms from stimulus onset, concretely in the C1 component of ERPs [7, 17–19]. The present two experiments confirm how quickly our visual system can detect certain emotional stimuli, but place this capability even earlier, starting at ≈40 ms from stimulus onset. This initial detection mechanism lasts up to ≈100 ms (P80 and P100 -which may be identified with the traditional C1 and P1- in Experiments 1 and 2, respectively). The sources of this activity were located in visual cortices: V1 in the case of N40, both V1 and secondary cortices in the case of P80/C1, and mainly secondary cortices in the case of P100/P1. This 40–100 ms window, reflected in the N40-C1P1 peak-to-peak amplitude, involves perceptual-attentional mechanisms implying interactions between primary and secondary visual cortices that are transversal to the whole window rather than circumscribed to a single ERP deflection. These results will be discussed in detail below, but it is important to underline at this point their novelty: to the best of our knowledge, they are the first to report the discrimination of emotional visual stimuli so early in time. This is understandable, since pre-C1 activity, little studied in general, has not been explored in response to emotional stimuli, either because studies analyzing this early visual ERP activity employed neutral stimuli or because experiments presenting emotional stimuli were not oriented or designed to record this activity.

As regards Experiment 1, the early capability of the brain to detect emotional stimuli was partially revealed by N40, which showed mixed evidence in the frequentist and Bayesian contrasts, and robustly by the N40-P80 peak-to-peak amplitude. This extremely fast activity reflecting the discrimination of emotional visual stimuli, manifested in N40-P80, was statistically shown to be due to both N40 and P80, and not to either of them separately. The visual cortex was found to be at the origin of both components. In the case of N40, solutions included V1. This first visual cortical stage receives the majority of its inputs from the lateral geniculate nucleus of the thalamus [65]. Additionally, source estimation solutions unexpectedly included prefrontal areas, whose involvement seems improbable at this latency. These prefrontal foci likely reflect analytical noise and pointed to the desirability of increasing SNR in the second experiment. P80 sources were cleaner and also involved V1, along with V2 and/or the lateral occipital cortex (LOC). Both V2 and LOC are progressively involved in object recognition, from contour and shape processing in V2 [66] to more global object identification in LOC [67].

These Experiment 1 effects were only observed for stimuli presented at fixation. However, spider>wheel differences in response to peripheral stimuli emerged in later ERP components, beyond our scope, as shown in Supplemental material, demonstrating that these stimuli were actually perceived and evaluated. Two methodological factors may contribute to explaining the unexpected lack of sensitivity of the earliest ERP components to peripheral stimuli. First, SNR in early visual components is even lower for peripheral vision, since it is underrepresented (compared to foveal vision), in terms of the number of neurons involved, both in the visual thalamus and in V1 (e.g., [68]). Second, the task required attention to be directed towards fixation (i.e., to color changes in the fixation dot). Considering that attention yields a biased competition at the perceptual level, whereby limited processing resources prioritize attended spatial locations over unattended ones [69, 70], peripheral stimuli may have evoked diminished activity for this reason as well. These issues were beyond the scope of the second experiment -focused on stimuli at fixation to avoid an excessive number of trials- but deserve to be explored in the future.

Experiment 2 revealed early visual ERP components similar to those found in Experiment 1. Thus, both an N40 (presenting a slightly higher latency: 45 ms) and a subsequent positive component, in this case P100 (peaking at 105 ms), were evident at parietal and occipital regions. The variations in the latency of both components with respect to Experiment 1 are probably due, along with the different sample of participants, to the implementations introduced in the experimental design. The most influential was probably the variable (rather than fixed) ITI, implemented to minimize alpha phase synchronization, which especially affects P1 latency [71]. Importantly, Experiment 2 also confirmed the emotional effect on the peak-to-peak amplitude involving both components, N40-P100 in this case. Thus, this amplitude was again significantly greater for spiders than for wheels. This replication reinforces the main finding of Experiment 1 and allows us to rule out that it was explained by possible confounding factors, such as the contrast of the fixation dot over the background described in the Procedures section.

As in Experiment 1, this peak-to-peak amplitude effect was not explained by N40 or P100 separately, although the involvement of the former was stronger according to the ANCOVAs. Moreover, neither N40 nor P100 showed significant effects when their single amplitudes were analyzed. In this regard, Experiment 2 confirms the usefulness of analyzing peak-to-peak amplitudes in early visual ERPs, which appears to provide more complete and robust information (or information less dependent on the experimental design) than single-component analyses, at least when processes are transversal to two deflections rather than circumscribed to one of them, as seems to occur here. This classical way of measuring ERPs [51–53] has recently been revealed as useful for exploring early visual ERP components [7] and, as developed in the Data Analysis section, is less affected by signal processing procedures such as filtering or baseline definition.

Increasing SNR in Experiment 2 also allowed us to obtain cleaner source estimations, particularly in the case of N40. This time, the origin of this component was clearly located in V1, with no other appreciable sources. The origin of P100 was also the visual cortex, although the contribution of V1 was not as evident as in the case of P80 (Experiment 1), probably due to its longer latency (≈25 ms longer). In line with previous studies, sources involved secondary areas [14, 72], concretely V2/LOC and the superior parietal lobule (SPL). The former source, involved in object recognition, was also observed and discussed in Experiment 1 with respect to P80, suggesting an -at least partial- functional link between P80 and P100. The SPL is a parietal area highly involved in attentional processes, both exogenous and endogenous, being a key node in the dorsal attention network [73]. This parietal area is consistently involved in attentional capture by emotional distractors (i.e., stimuli irrelevant to the task, as in this case; [74]). Moreover, this attentional capture by affective stimuli is typically reflected in P1, among other components [74]. Therefore, P100/P1 appears to reflect advanced stages of object identification (also observed in P80/C1 in Experiment 1), along with exogenous attention mechanisms.

The findings of both experiments, and particularly their timing, have several important theoretical implications for emotional processing, particularly concerning the existing hypotheses on IESs. High-order structures such as the amygdala have been defended as key IESs capable of modulating the activity of the visual cortex, among other cerebral structures, at very short latencies (e.g., see reviews in [2–5]). However, the latency of the amygdala’s enhanced response to emotional stimuli is not compatible with the present data. Concretely, and according to intracranial EEG recordings, the earliest amygdala response to non-facial emotional visual stimuli occurs beyond 150 ms [8]. Moreover, even the amygdalar response to facial expressions, which is faster as indicated in the Introduction, does not modulate visual cortex activity until more than 250 ms later [6]. The alternatives to the amygdala hypothesis as an IES are currently under open debate. On the one hand, the “central position” postulates that the sensory cortex itself “is responsible for smart (fast and precise) initial evaluation of environmental threat” ([13]; p. 349). On the other hand, the “peripheral position” proposes that “beyond the central modulation of sensory experience, most sensory systems are tuned to conduct value-based appraisal of the environment before signals reach the cortex” ([75]; p. 917).

The results of our two experiments point to non-amygdalar candidates as IESs and, although they are compatible with both the central and the peripheral alternative hypotheses, this key issue is worth being -at least briefly- discussed. Emerging evidence points to the capability of earlier, first-order structures in the visual pathway, such as the visual thalamus (see [9] for a review) or the superior colliculi ([12]; non-human data), to modulate their activity depending on the salience of the stimulus without the involvement of the visual cortex. Importantly, these structures modulate defensive behavior in response to visual stimuli in rodents (visual thalamus: [76]; superior colliculus: [77]). Moreover, their abnormal activity has been proposed to be linked to affective problems such as anxiety or phobias in these same studies, pointing to their crucial role at the clinical level as well. In our opinion, the key idea is that evaluation is a multistage process that requires all of its steps (as each depends on the previous one) -rudimentary and precise, fast and slow- to be “smart” and to allow adaptive coping with emotional situations. In this chain of evaluative stages, the visual cortex, the amygdala, and other evaluative structures would play crucial roles at different moments. However, as for the initial stage, which is the scope of this study, the peripheral hypothesis seems better positioned according to the scarce data available so far. In any case, further research is needed to advance this debate.

A remark on possible alternative interpretations and future directions should be made. While low-level visual parameters were controlled and homogenized between spiders and wheels, high-level differences apart from their emotional content exist, such as their semantic category (e.g., natural vs. artificial, or animal vs. object). Although this possibility cannot be ruled out, we consider it very remote. Thus, while emotional stimuli are, by definition, relevant for the individual, the natural or animal condition of an item is orthogonal to relevance. For example, the relevance of a sparrow is minimal compared to that of a snake for the majority of the population (as revealed by normative data in the emotional picture databases cited above); in the non-animal/artificial category, an empty pot vs. a pistol pointing at us would be a parallel example. Critically, the evolutionary pressure for an extremely swift evaluation mechanism would be higher for detecting emotion/relevance than for carrying out a semantic categorization. Moreover, semantic processing of stimulation occurs later according to current data (e.g., the first traces of animal vs. non-animal discrimination occur beyond 100 ms: [78, 79]). In any case, this extremely fast mechanism is worth being further explored by introducing additional stimulus categories, including emotionally positive items, and by manipulating the natural vs. artificial (or animal vs. non-animal) condition, to test the generalizability of the current results. Relatedly, exploring individual differences is of great interest, given that the activity of early evaluation structures is modulated by individual experience thanks to the feedback they receive from other brain areas. For example, top-down modulation of visual cortices from evaluative structures higher in the hierarchy, such as the ventral prefrontal cortex [80], or of the visual thalamus from the visual cortex [81], plays a critical role in the activity of the initial evaluators.
To conclude, this two-experiment study provides data on one of the least explored stages of affective processing and points to earlier-than-expected emotional evaluation processes.

References

  1. Zhang W, Lu J. Time course of automatic emotion regulation during a facial Go/Nogo task. Biol Psychol. 2012;89:444–449. pmid:22200654
  2. Adolphs R. Fear, faces, and the human amygdala. Curr Opin Neurobiol. 2008;18:166–172.
  3. Costafreda SG, Brammer MJ, David AS, Fu CHY. Predictors of amygdala activation during the processing of emotional stimuli: A meta-analysis of 385 PET and fMRI studies. Brain Res Rev. 2008;58:57–70. pmid:18076995
  4. Öhman A. Automaticity and the amygdala: Nonconscious responses to emotional faces. Curr Dir Psychol Sci. 2002;11:62–66.
  5. Zald DH. The human amygdala and the emotional evaluation of sensory stimuli. Brain Res Brain Res Rev. 2003;41:88–123. pmid:12505650
  6. Wang Y, Luo L, Chen G, Luan G, Wang X, Wang Q, et al. Rapid processing of invisible fearful faces in the human amygdala. J Neurosci. 2023;43:1405–1413. pmid:36690451
  7. Carretié L, Fernández-Folgueiras U, Álvarez F, Cipriani G, Tapia M, Kessel D. Fast unconscious processing of emotional stimuli in early stages of the visual cortex. Cereb Cortex. 2022;32:4331–4344. pmid:35059708
  8. Méndez-Bértolo C, Moratti S, Toledano R, Lopez-Sosa F, Martínez-Alvarez R, Mah YH, et al. A fast pathway for fear in human amygdala. Nat Neurosci. 2016;19:1041. pmid:27294508
  9. Carretié L, Yadav RK, Méndez-Bértolo C. The missing link in early emotional processing. Emotion Rev. 2021;13:225–244.
  10. Fiebelkorn IC, Kastner S. Functional specialization in the attention network. Annu Rev Psychol. 2020;71:221–249. pmid:31514578
  11. Ghodrati M, Khaligh-Razavi S, Lehky SR. Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role. Prog Neurobiol. 2017;156:214–255. pmid:28634086
  12. Méndez CA, Celeghin A, Diano M, Orsenigo D, Ocak B, Tamietto M. A deep neural network model of the primate superior colliculus for emotion recognition. Philos Trans R Soc B. 2022;377(1863):20210512. pmid:36126660
  13. Li W, Keil A. Sensing fear: fast and precise threat evaluation in human sensory cortex. Trends Cogn Sci. 2023;27:341–352. pmid:36732175
  14. Capilla A, Melcón M, Kessel D, Calderón R, Pazo-Álvarez P, Carretié L. Retinotopic mapping of visual event-related potentials. Biol Psychol. 2016;118:114–125. pmid:27235686
  15. Di Russo F, Martínez A, Hillyard SA. Source analysis of event-related cortical activity during visuo-spatial attention. Cereb Cortex. 2003;13:486–499. pmid:12679295
  16. Ales JM, Yates JL, Norcia AM. V1 is not uniquely identified by polarity reversals of responses to upper and lower visual field stimuli. Neuroimage. 2010;52:1401–1409. pmid:20488247
  17. Acunzo D, MacKenzie G, van Rossum MCW. Spatial attention affects the early processing of neutral versus fearful faces when they are task-irrelevant: A classifier study of the EEG C1 component. Cogn Affect Behav Neurosci. 2019;19:123–137. pmid:30341623
  18. Eldar S, Yankelevitch R, Lamy D, Bar-Haim Y. Enhanced neural reactivity and selective attention to threat in anxiety. Biol Psychol. 2010;85:252–257. pmid:20655976
  19. Pourtois G, Grandjean D, Sander D, Vuilleumier P. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb Cortex. 2004;14:619–633. pmid:15054077
  20. Rellecke J, Palazova M, Sommer W, Schacht A. On the automaticity of emotion processing in words and faces: event-related brain potentials evidence from a superficial task. Brain Cogn. 2011;77:23–32. pmid:21794970
  21. Buchner H, Gobbelé R, Wagner M, Fuchs M, Waberski TD, Beckmann R. Fast visual evoked potential input into human area V5. Neuroreport. 1997;8:2419–2422. pmid:9261801
  22. Ffytche DH, Guy CN, Zeki S. The parallel visual motion inputs into areas V1 and V5 of human cerebral cortex. Brain. 1995;118:1375–1394. pmid:8595471
  23. Foxe JJ, Simpson GV. Flow of activation from V1 to frontal cortex in humans. Exp Brain Res. 2002;142:139–150.
  24. Inui K, Kakigi R. Temporal analysis of the flow from V1 to the extrastriate cortex in humans. J Neurophysiol. 2006;96:775–784. pmid:16835365
  25. Moradi F, Liu LC, Cheng K, Waggoner RA, Tanaka K, Ioannides AA. Consistent and precise localization of brain activity in human primary visual cortex by MEG and fMRI. Neuroimage. 2003;18:595–609. pmid:12667837
  26. Proverbio AM, Esposito P, Zani A. Early involvement of the temporal area in attentional selection of grating orientation: an ERP study. Cogn Brain Res. 2002;13:139–151. pmid:11867258
  27. Yoshida F, Hirata M, Onodera A, et al. Noninvasive spatiotemporal imaging of neural transmission in the subcortical visual pathway. Sci Rep. 2017;7:4424. pmid:28667266
  28. Odom JV, Bach M, Brigell M, Holder GE, McCulloch DL, Mizota A, Tormene AP, International Society for Clinical Electrophysiology of Vision. ISCEV standard for clinical visual evoked potentials (2016 update). Doc Ophthalmol. 2016;133:1–9.
  29. Maunsell JH, Gibson JR. Visual response latencies in striate cortex of the macaque monkey. J Neurophysiol. 1992;68:1332–1344. pmid:1432087
  30. Schroeder CE, Mehta AD, Givre SJ. A spatiotemporal profile of visual system activation revealed by current source density analysis in the awake macaque. Cereb Cortex. 1998;8:575–592. pmid:9823479
  31. Kelly SP, Schroeder CE, Lalor EC. What does polarity inversion of extrastriate activity tell us about striate contributions to the early VEP? A comment on Ales et al. (2010). Neuroimage. 2013;76:442–445. pmid:22504764
  32. Proverbio AM, Del Zotto M, Zani A. Electrical neuroimaging evidence that spatial frequency-based selective attention affects V1 activity as early as 40–60 ms in humans. BMC Neurosci. 2010;11:1–13.
  33. Proverbio AM, Broido V, De Benedetto F, Zani A. Scalp-recorded N40 visual evoked potential: Sensory and attentional properties. Eur J Neurosci. 2021;54:6553–6574. pmid:34486754
  34. Ko HK, von der Heydt R. Figure-ground organization in the visual cortex: Does meaning matter? J Neurophysiol. 2018;119:160–176. pmid:28978761
  35. Mohr KS, Kelly SP. The spatiotemporal characteristics of the C1 component and its modulation by attention. Cogn Neurosci. 2018;9:71–74. pmid:28971714
  36. Faul F, Erdfelder E, Buchner A, Lang A. Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behav Res Methods. 2009;41:1149–1160. pmid:19897823
  37. Gerdes AB, Uhl G, Alpers GW. Spiders are special: Fear and disgust evoked by pictures of arthropods. Evol Hum Behav. 2009;30:66–73.
  38. Jacobi F, Wittchen HU, Hölting C, Höfler M, Pfister H, Müller N, et al. Prevalence, co-morbidity and correlates of mental disorders in the general population: results from the German Health Interview and Examination Survey (GHS). Psychol Med. 2004;34:597–611. pmid:15099415
  39. Lang PJ, Bradley MM, Cuthbert BN. International affective picture system (IAPS): Affective ratings of pictures and instruction manual. Gainesville, FL: University of Florida; 2005.
  40. Carretié L, Tapia M, López-Martín S, Albert J. EmoMadrid: An emotional pictures database for affect research. Motiv Emot. 2019;43:929–939.
  41. Paulus WM, Hömberg V, Cunningham K, Halliday AM, Rohde N. Colour and brightness components of foveal visual evoked potentials in man. Electroencephalogr Clin Neurophysiol. 1984;58:107–119. pmid:6204836
  42. Foxe JJ, Strugstad EC, Sehatpour P, Molholm S, Pasieka W, Schroeder CE, et al. Parvocellular and magnocellular contributions to the initial generators of the visual evoked potential: high-density electrical mapping of the “C1” component. Brain Topogr. 2008;21:11–21. pmid:18784997
  43. Johannes S, Münte TF, Heinze HJ, Mangun GR. Luminance and spatial attention effects on early visual processing. Cogn Brain Res. 1995;2:189–205. pmid:7580401
  44. Nakashima T, Kaneko K, Goto Y, Abe T, Mitsudo T, Ogata K, et al. Early ERP components differentially extract facial features: evidence for spatial frequency-and-contrast detectors. Neurosci Res. 2008;62:225–235. pmid:18809442
  45. Bradley MM, Hamby S, Löw A, Lang PJ. Brain potentials in perception: picture complexity and emotional arousal. Psychophysiology. 2007;44:364–373. pmid:17433095
  46. Hillyard SA, Teder-Sälejärvi WA, Münte TF. Temporal dynamics of early perceptual processing. Curr Opin Neurobiol. 1998;8:202–210. pmid:9635203
  47. Van Strien JW, Christiaans G, Franken IH, Huijding J. Curvilinear shapes and the snake detection hypothesis: an ERP study. Psychophysiology. 2016;53:252–257. pmid:26481589
  48. Oostenveld R, Fries P, Maris E, Schoffelen JM. FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput Intell Neurosci. 2011. pmid:21253357
  49. Acunzo DJ, MacKenzie G, van Rossum MCW. Systematic biases in early ERP and ERF components as a result of high-pass filtering. J Neurosci Methods. 2012;209:212–218. pmid:22743800
  50. Jung TP, Makeig S, Humphries C, Lee TW, Mckeown MJ, Iragui V, et al. Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000;37:163–178. pmid:10731767
  51. Begleiter H, Porjesz B, Tenner M. Neuroradiological and neurophysiological evidence of brain deficits in chronic alcoholics. Acta Psychiatr Scand. 1980;62:3–13. pmid:6935921
  52. Hillyard SA, Picton TW. On and off components in the auditory evoked potential. Percept Psychophys. 1978;24:391–398. pmid:745926
  53. Verleger R, Cohen R. Effects of certainty, modality shift and guess outcome on evoked potentials and reaction times in chronic schizophrenics. Psychol Med. 1978;8:81–93. pmid:635071
  54. Widmann A, Schröger E. Filter effects and filter artifacts in the analysis of electrophysiological data. Front Psychol. 2012;3:233. pmid:22787453
  55. IBM Corp. IBM SPSS Statistics for Windows, Version 26. Armonk, NY: IBM Corp; 2019.
  56. Chapman RM, McCrary JW. EP component identification and measurement by principal components analysis. Brain Cogn. 1995;27:288–310. pmid:7626278
  57. Coles MGH, Gratton G, Kramer AF, Miller GA. Principles of signal acquisition and analysis. In: Coles MGH, Donchin E, Porges SW, editors. Psychophysiology: Systems, processes and applications. Amsterdam: Elsevier; 1986. p. 83–221.
  58. Dien J. Applying principal components analysis to event-related potentials: A tutorial. Dev Neuropsychol. 2012;37:497–517. pmid:22889342
  59. Donchin E, Heffley EF. Multivariate analysis of event-related potential data: A tutorial review. In: Otto D, editor. Multidisciplinary perspectives in event-related brain potential research. Washington, DC: Government Printing Office; 1978. p. 555–572.
  60. Dien J. Evaluating two-step PCA of ERP data with geomin, infomax, oblimin, promax, and varimax rotations. Psychophysiology. 2010;47:170–183. pmid:19761521
  61. JASP Team. JASP (Version 0.15) [Computer software]. 2021.
  62. Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM. Brainstorm: A user-friendly application for MEG/EEG analysis. Comput Intell Neurosci. 2011;2011:879716. pmid:21584256
  63. Gramfort A, Papadopoulo T, Olivi E, Clerc M. OpenMEEG: opensource software for quasistatic bioelectromagnetics. Biomed Eng Online. 2010;9:45. pmid:20819204
  64. Kybic J, Clerc M, Abboud T, Faugeras O, Keriven R, Papadopoulo T. A common formalism for the integral formulations of the forward EEG problem. IEEE Trans Med Imaging. 2005;24:12–28. pmid:15638183
  65. Hubel DH, Wiesel TN. Laminar and columnar distribution of geniculo-cortical fibers in the macaque monkey. J Comp Neurol. 1972;146:421–450. pmid:4117368
  66. Anzai A, Peng X, Van Essen DC. Neurons in monkey visual area V2 encode combinations of orientations. Nat Neurosci. 2007;10:1313–1321. pmid:17873872
  67. Grill-Spector K, Kourtzi Z, Kanwisher N. The lateral occipital complex and its role in object recognition. Vision Res. 2001;41:1409–1422. pmid:11322983
  68. Azzopardi P, Cowey A. The overrepresentation of the fovea and adjacent retina in the striate cortex and dorsal lateral geniculate nucleus of the macaque monkey. Neuroscience. 1996;72:627–639. pmid:9157310
  69. Beck DM, Kastner S. Stimulus context modulates competition in human extrastriate cortex. Nat Neurosci. 2005;8:1110–1116. pmid:16007082
  70. Desimone R. Visual attention mediated by biased competition in extrastriate visual cortex. Philos Trans R Soc Lond B Biol Sci. 1998;353:1245–1255. pmid:9770219
  71. Gruber WR, Klimesch W, Sauseng P, Doppelmayr M. Alpha phase synchronization predicts P1 and N1 latency and amplitude size. Cereb Cortex. 2005;15:371–377. pmid:15749980
  72. Di Russo F, Stella A, Spitoni G, Strappini F, Sdoia S, Galati G, et al. Spatiotemporal brain mapping of spatial attention effects on pattern-reversal ERPs. Hum Brain Mapp. 2012;33:1334–1351. pmid:21500317
  73. 73. Corbetta M, Patel G, Shulman GL. The reorienting system of the human brain: From environment to theory of mind. Neuron. 2008;58:306–324. pmid:18466742
  74. 74. Carretié L. Exogenous (automatic) attention to emotional stimuli: A review. Cogn Affect Behav Neurosci. 2014;14:1228–1258. pmid:24683062
  75. 75. Kryklywy JH, Ehlers MR, Anderson AK, Todd RM. From architecture to evolution: multisensory evidence of decentralized emotion. Trends Cogn Sci. 2020;24:916–929. pmid:32917534
  76. 76. Salay L. D., & Huberman A. D. (2021). Divergent outputs of the ventral lateral geniculate nucleus mediate visually evoked defensive behaviors. Cell Rep. 2021;37:109792. pmid:34610302
  77. 77. Muthuraju S, Talbot T, Brandão ML. Dopamine D2 receptors regulate unconditioned fear in deep layers of the superior colliculus and dorsal periaqueductal gray. Behav Brain Res. 2016;297:116–123. pmid:26455877
  78. 78. Cichy RM, Pantazis D. Multivariate pattern analysis of MEG and EEG: A comparison of representational structure in time and space. Neuroimage. 2017;158:441–454. pmid:28716718
  79. 79. Crouzet SM, Joubert OR, Thorpe SJ, Fabre-Thorpe M. Animal detection precedes access to scene category. PLoS One. 2012;7(12):e51471. pmid:23251545
  80. 80. Bar M, Kassam KS, Ghuman AS, Boshyan J, Schmidt AM, Dale AM et al. Top-down facilitation of visual recognition. Proc Natl Acad Sci USA. 2006:103: 449–454. pmid:16407167
  81. 81. Briggs F, Usrey WM. Emerging views of corticothalamic function. Curr Opin Neurobiol. 2008;18:403–407. pmid:18805486