Abstract
A data-adaptive, multiscale version of Rényi’s quadratic entropy (RQE) is introduced for emotional state discrimination from EEG recordings. The algorithm is applied to scalp EEG recordings of 30 participants watching 4 emotionally charged video clips taken from a validated public database. Krippendorff’s inter-rater statistic reveals that the multiscale RQE of the mid-frontal scalp electrodes best discriminates between five emotional states. Multiscale RQE is also applied to joint scalp EEG, amygdala, and occipital pole intracranial recordings of an implanted patient watching a neutral and an emotionally charged video clip. Unlike for the neutral video clip, the RQEs of the mid-frontal scalp electrodes and the amygdala-implanted electrodes are observed to coincide in the time range where the crux of the emotionally charged video clip is revealed. During this time range, the phase synchrony between the amygdala and mid-frontal recordings is also maximal, as is our 30 participants’ inter-rater agreement on the same video clip. A source reconstruction exercise using the intracranial recordings supports our assertion that the amygdala could contribute to mid-frontal scalp EEG. In contrast, no such contribution was observed for the occipital pole’s intracranial recordings. Our results suggest that emotional states discriminated from mid-frontal scalp EEG are likely to be mirrored by differences in amygdala activations, in particular when recorded in response to emotionally charged scenes.
Citation: Tonoyan Y, Chanwimalueang T, Mandic DP, Van Hulle MM (2017) Discrimination of emotional states from scalp- and intracranial EEG using multiscale Rényi entropy. PLoS ONE 12(11): e0186916. https://doi.org/10.1371/journal.pone.0186916
Editor: Gilles van Luijtelaar, Radboud Universiteit, NETHERLANDS
Received: April 11, 2017; Accepted: October 10, 2017; Published: November 3, 2017
Copyright: © 2017 Tonoyan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All raw data files are available from the KU Leuven Box database (https://kuleuven.app.box.com/v/EntropyEmotion?sortColumn=date&sortDirection=desc&pageSize=20&pageNumber=0).
Funding: YT is supported by the Interuniversity Attraction Poles Programme Belgian Science Policy (IUAP P7/11). MMVH is supported by research grants received from the Financing program (PFV/10/008), an interdisciplinary research project (IDO/12/007), and an industrial research fund project (IOF/HB/12/021) of the KU Leuven, the Belgian Fund for Scientific Research – Flanders (G088314N, G0A0914N), the Interuniversity Attraction Poles Programme – Belgian Science Policy (IUAP P7/11), the Flemish Regional Ministry of Education (Belgium) (GOA 10/019), and the Hercules Foundation (AKUL 043). DM is supported by an EPSRC grant (EP/K025643/1).
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
The identification and discrimination of emotional states from EEG is notoriously challenging, mainly due to the difficulty of gauging the electromagnetic activity elicited by the cortical structures involved in processing emotional information [1]. Yet it is potentially useful for a broad range of important applications, such as diagnosing and treating patients with dysfunctional processing of emotional information [2], emotion-sensitive interactive games, affective interfaces, and emotion-sensitive tutoring systems [3–5]. This article sets out to address these challenges, but rather than evaluating the power in the standard EEG frequency bands, as is traditionally done [6–13], or the amplitudes and latencies of event-related potentials (ERPs) in response to emotion-evoking stimuli [14–19], we conduct the analysis within the realm of complexity science [20].
Several entropy-based metrics of signal complexity have already been proposed for discriminating emotional states. Aftanas and co-workers [21] showed that, when viewing images evoking negative and positive emotions, higher values of EEG correlation dimension complexity are observed than when viewing neutral images. Hosseini and Naghibi-Sistani [22] applied two entropic metrics (approximate and wavelet entropy) to discriminate between two emotional states (calm-neutral and negative-excited) in response to viewing sequences of emotion-inducing pictures and achieved 73.25% classification accuracy. Jie et al. [23] applied sample entropy to EEG data obtained from two binary emotion recognition tasks (positive vs. negative emotion, both with high arousal, and music clips with different arousal levels) and achieved classification accuracies of 80.43% and 79.11%, respectively.
A promising development in signal complexity analysis is Rényi entropy (RE). Originally introduced as a generalization of Shannon entropy [24], RE has enjoyed several successful EEG-based clinical applications [13,25]. However, RE has been less widely utilized for EEG-based mental and affective state detection. Sourina and co-workers [26] used an RE-based variant (the Hausdorff dimension) to calculate the fractal dimension of EEG for real-time emotion quantification and classification, and Bajaj and Pachori [27] used RE together with other EEG complexity measures for emotion detection.
However, one should also be aware that, at least according to some authors [28], neither strictly periodic nor completely random signals should be regarded as complex, but rather signals that possess long-range correlations across multiple temporal scales. As the cited studies assess entropy on a single scale, and since the entropy metrics used become maximal for random signals, their outcomes could confound randomness with complexity. To avoid this confusion, Costa and co-workers [29] accounted for the interdependence between entropy and scale and proposed to calculate entropy, in their case sample entropy, on different temporal scales (whence, multiscale sample entropy, MSE), using the so-called coarse-graining approach (averaging the signal over non-overlapping windows of increasing length; a minimal sketch follows this paragraph). However, as the latter effectively corresponds to a smoothing operation, predominantly low-frequency signal components are captured. To overcome this, empirical mode decomposition (EMD) [30] and its multivariate extension (MEMD) [31] have been suggested: a fully data-driven, time-frequency technique that decomposes a signal into a finite set of amplitude/frequency modulated components, called intrinsic mode functions (IMFs). A further improvement is multivariate MSE (MMSE), which accounts for both within- and cross-channel dependencies and can be combined with MEMD into MEMD-enhanced MMSE [32]. When applying EMD to EEG recordings, entropy can be estimated in each IMF individually. Sharma et al. [33] recently applied this technique with several entropic complexity measures, including RE, to predict focal epileptic seizures from EEG. Also recently, Tonoyan and co-workers [34] used MEMD in combination with MSE to discriminate 5 emotional states from mid-frontal EEG recordings during the viewing of emotionally charged video clips compiled by Schaefer and co-workers [35]. Although the results were encouraging, MSE-based entropy estimation turned out to be not only sensitive to the choice of window length but also computationally intensive.
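To make the coarse-graining step concrete, a minimal Python/NumPy sketch is given below; the function name and example values are ours, not from [29].

```python
import numpy as np

def coarse_grain(x, scale):
    """Coarse-grain a 1-D signal by averaging non-overlapping windows
    of length `scale` (the procedure proposed by Costa et al. [29])."""
    n_windows = len(x) // scale
    return x[:n_windows * scale].reshape(n_windows, scale).mean(axis=1)

# Example: at scale 3, a 128 Hz signal is effectively resampled to ~43 Hz;
# the averaging acts as a low-pass filter, which is the smoothing issue
# that motivates the EMD-based alternative discussed in the text.
x = np.random.randn(1280)
x_scale3 = coarse_grain(x, 3)
```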
Our aim for this study is to curb these drawbacks by revisiting MEMD-based EEG signal complexity in response to emotionally charged video clips, but now based on Rényi’s quadratic entropy (RQE) applied to whole scalp recordings of 30 subjects. We use Krippendorff’s inter-rater statistic [36] to identify the scalp electrodes that best discriminate between emotional states across subjects. We also recorded intracranial EEG (iEEG) of the amygdala and the occipital cortex jointly with scalp EEG of a patient viewing emotional and neutral video clips. The amygdala is considered to be strongly involved in emotional processing [37–48]. Motivated by Makeig and co-workers [49], who considered similar signal processing techniques for analyzing Human Intracranial Electrophysiology (HIE) and scalp EEG, we apply the same MEMD-based RQE method to the intracranial recordings and verify whether we can discriminate between the neutral and the emotional movies in a way similar to the jointly recorded scalp EEG.
2. Materials and methods
2.1 Materials
2.1.1 Video clips.
We considered emotionally charged video clips from the public database developed by Schaefer and co-workers [35] (http://nemo.psp.ucl.ac.be/FilmStim/) (Table 1). The spoken language of these video clips was French or dubbed into French. Each video clip in the database is labeled in terms of emotional category (fear, anger, sadness, disgust, amusement, tenderness, or neutral; further called the “standard label”), which Schaefer and co-workers obtained by asking their participants to report what they personally felt in reaction to the video clips, not what they believed people would generally feel. The average duration of the video clips was approximately 3 minutes.
Table 1. Names of the videos between quotes with scene numbers between round brackets, standard labels between square brackets, positive or negative emotional affect scores (mean scores between round brackets), and our participants’ self-labels (underlined, followed by the number of respondents between brackets). Video clips were taken from http://nemo.psp.ucl.ac.be/FilmStim/; standard labels and affect scores were taken from Table 1 in [35] or provided by Alexandre Schaefer (personal communication).
We performed two experiments, further called the main and the control experiment. We selected two sets of 4 video clips (out of the 70 video clips in the database), all with top-ten scores in their respective emotional categories. In the first set, used for the main experiment, 3 out of 4 videos (disgust, amusement, tenderness) additionally had top-ten positive or negative affect scores (cf. Table 1); the 4th video (sadness) did not have a top-ten affect score. It was selected to verify the statement of Aftanas and co-workers [21] that higher complexity values can be observed for affective stimuli than for more neutral ones. As these videos have different emotional labels and affect scores, we wanted to see whether this translates into a difference in EEG complexity values. For the second set, 4 video clips were chosen that were not on the top-ten affect score list: as these videos shared lower affect scores, we used them in our experiment as controls. All videos were presented in random order to our participants.
For the implanted patient, as the implant serves a medical purpose, the time window available to us for performing experiments was restricted. Hence, we used only two video clips: fragment 3 of ‘Schindler’s List’ as an emotional video (standard-labeled as ‘anger’) and a weather forecast as an example of a more neutral video. The data recorded from this patient were used as a case study for assessing the differential activation of the amygdala in response to the two video clips and the putative relation between amygdala activation and scalp EEG.
After watching each video clip, all participants (including the implanted patient) were asked to categorize the evoked emotion (after being provided with the same list as used by Schaefer et al.: fear, anger, sadness, disgust, amusement, tenderness, neutral). We further call these the ‘self-labels’. Note that our participants were not informed about the video clips’ standard labels. Only in the case of ‘anger’ were the self-labels not unanimous, as 16 participants reported ‘disgust’ instead (‘Sleepers’, a video about child abuse). To summarize, we have 3 labels (Table 1):
- standard label: the emotional category of each video clip, taken from Schaefer et al.;
- self-label: the emotional category of each video clip selected by our participants from the list of standard labels used by Schaefer et al.;
- affect score: the self-reported positive or negative emotional affect score of each video clip, also taken from Schaefer et al.
Unless noted otherwise, we will use the self-labels for labeling the entropy curves.
2.1.2 Participants.
The main experiment was performed with 30 healthy volunteers (20 female, 10 male; mean age = 32.48, SD = 15.77, age range 19–70) who mastered French (i.e., French as mother tongue or French-Dutch bilinguals). We also recruited 2 non-French-speaking volunteers (one Flemish (Dutch)-speaking 24-year-old male and one Armenian-speaking 28-year-old female). For the experiment with the control videos (with low affect scores), we tested 6 volunteers (2 female, 4 male; mean age = 27, SD = 1.6, age range 25–30). Volunteers were recruited via emails, social media posts, flyers, and billboard announcements. Some were university graduate students (KU Leuven, VUB), often regular subjects in our EEG experiments, and were paid. No participant had any known neurological or psychiatric disorder. Ethical approval for this study was granted by an independent ethical committee (“Commissie voor Medische Ethiek” of UZ Leuven, our University Hospital). The study was conducted in accordance with the most recent version of the Declaration of Helsinki (2013). Before participating in the experiment, all recruited participants were informed about the goal of the study, their task, and what would be done with the recorded data (privacy), after which they read and signed the informed consent form previously approved by the said ethical committee. All EEG recordings were performed between 10/12/2014 and 13/05/2015. The raw scalp EEG recordings are available from https://kuleuven.box.com/v/EntropyEmotion.
2.1.3 EEG recording and preprocessing.
Participants were tested in a sound-attenuated, darkened room with a constant temperature of 20°C, sitting in front of an LCD screen. Each participant’s task was to watch the video clips and report the emotional categories. While they viewed the video clips, EEG was recorded continuously using 32 active electrodes, evenly distributed over the entire scalp (positioning and naming convention following a subset of the extended 10–20 system), using a BioSemi ActiveTwo system (BioSemi, Amsterdam, the Netherlands), as well as an electro-oculogram (EOG) using the set-up of Croft and co-workers [50]. The EEG signals were re-referenced offline from the original common mode sense reference [51] (CMS, positioned next to electrode Pz) to the average of two additional electrodes placed on the subject’s mastoids. The duration of the experiment, excluding electrode setup, was 20 minutes. The EEG signals were filtered using a 4th-order Butterworth bandpass filter with range 0.5–30 Hz. The signals, originally sampled at 2048 Hz, were downsampled to 128 Hz (with anti-aliasing) to reduce computational costs. Finally, the EOG signal was used to remove eye artifacts following the Revised Artifact-Aligned Averaging (RAAA) procedure described in [50].
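For illustration, a comparable filtering and downsampling chain can be sketched in Python/SciPy. The band edges, filter order, and sampling rates follow the text; the zero-phase filtering and staged decimation are our own implementation choices, not necessarily those of the authors.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, decimate

def preprocess(eeg, fs_in=2048, stages=(4, 4)):
    """Band-pass filter (0.5-30 Hz, 4th-order Butterworth) and
    downsample 2048 Hz -> 128 Hz (16x, done in two stages of 4 so that
    decimate()'s built-in anti-aliasing filter stays well-behaved).

    eeg: array of shape (n_channels, n_samples)."""
    sos = butter(4, [0.5, 30.0], btype="bandpass", fs=fs_in, output="sos")
    out = sosfiltfilt(sos, eeg, axis=-1)   # zero-phase band-pass
    for q in stages:
        out = decimate(out, q, axis=-1, zero_phase=True)
    return out
```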
2.1.4 Intracranial EEG recording and preprocessing.
We also recorded intracranial EEG (iEEG), also termed electrocorticography (ECoG), from a patient scheduled for resective surgery as part of her epileptic seizure treatment, while she viewed an emotional and a neutral video clip. The patient was implanted with a depth electrode (10 contacts, contact size 2.4/1.1 mm (length/diameter) and 4 mm inter-contact spacing) extending from the right hippocampus into the amygdala (2 contact points in the amygdala) and with a subdural electrode grid (4x5 electrodes with 4 mm electrode diameter, 2.3 mm electrode exposure, and 10 or 15 mm inter-contact spacing) over the right occipital cortex (S5 Appendix). Recordings were made with a Micromed digital video-compatible EEG recording system (Micromed Spa, Mogliano Veneto, Italy). The sampling frequency of the recording was set to 1028 Hz. Offline preprocessing and downsampling were done using the same parameters as for the scalp-recorded EEG, except that a CMS reference per intracranial electrode (10 contacts) and grid (20 contacts) was used. Ethical approval for this study was granted by an independent ethical committee (“Commissie voor Medische Ethiek” of UZ Gent), and the study was conducted in accordance with the most recent version of the Declaration of Helsinki (2013). Before participating in the experiment, the patient was informed about the goal of the study, her task, and what would be done with the recorded data (privacy), after which she read and signed the informed consent form previously approved by the said ethical committee. The recordings were done on 29/02/2016.
2.2 Methods
2.2.1 Sample entropy.
We compare 3 entropy-based methods. The first is sample entropy (SE) [52]. It is based on the conditional probability that two sequences that are similar to each other for m consecutive data points (samples), within a tolerance level r, remain similar when one more data point is added to each sequence. Formally, SE is expressed as:

$$\mathrm{SE}(m,r,N) = -\ln\frac{A^{m+1}(r)}{B^{m}(r)}$$

where $B^{m}(r)$ is the probability that the similarity between two sequences of length m obeys the tolerance level r, $A^{m+1}(r)$ the probability that the similarity between the same two sequences, now extended with one data point (thus of length m + 1), also obeys r, and N the number of sequences. The tolerance level r is usually set to a percentage of the standard deviation of the normalized data; in our case, we selected 15% [31].
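A minimal univariate sketch of this estimator is given below; we assume the customary Chebyshev (maximum) distance between templates and the 15% tolerance mentioned above, and the function name is ours.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Univariate sample entropy, SE = -ln(A / B) (cf. [52]):
    B = number of template pairs of length m within tolerance r,
    A = number of the same pairs still within r at length m + 1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n_templ = len(x) - m          # same template count for m and m + 1

    def pair_count(length):
        t = np.array([x[i:i + length] for i in range(n_templ)])
        dist = np.max(np.abs(t[:, None] - t[None, :]), axis=-1)  # Chebyshev
        return (np.count_nonzero(dist <= r) - n_templ) / 2       # drop self-matches

    B, A = pair_count(m), pair_count(m + 1)
    return -np.log(A / B) if A > 0 else np.inf
```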
To estimate sample entropy in the multivariate case (MSE), the sequences are formulated as follows. Recalling multivariate embedding theory [53], for a d-variate time series $\{x_{k,i}\}_{i=1}^{N}$, $k = 1, \ldots, d$, the multivariate embedded sequence (the so-called composite delay vector) can be constructed as:

$$X_m(i) = [x_{1,i},\, x_{1,i+\tau_1},\, \ldots,\, x_{1,i+(m_1-1)\tau_1},\, x_{2,i},\, \ldots,\, x_{2,i+(m_2-1)\tau_2},\, \ldots,\, x_{d,i},\, \ldots,\, x_{d,i+(m_d-1)\tau_d}]$$

where $M = [m_1, m_2, \ldots, m_d]$ is the embedding vector and $\tau = [\tau_1, \tau_2, \ldots, \tau_d]$ the time lag vector. The above sample entropy definition is then applied to the composite delay vectors.
In our case, we have d electrode channels, n the number of samples or data points in a sequence (further called a ‘snippet’, as its length is small in practice), and N the number of sequences. The computational complexity of MSE for multichannel EEG recordings is cubic: ϑ(dNn + dn).
2.2.2 Rényi’s entropy.
Consider a discrete random variable x that adopts n values with probabilities p1, p2, …, pn. When the k-th value delivers $I_k = -\log_2 p_k$ bits of information, the total amount of information becomes the so-called Shannon entropy:

$$H = \sum_{k=1}^{n} p_k I_k = -\sum_{k=1}^{n} p_k \log_2 p_k \qquad (1)$$
In (1), a linear averaging operator is assumed; however, in general, for any monotonic function g(x) with an inverse g−1(x), the general mean associated with g(x) for a set of real values {xk, k = 1, …, n} with probabilities {pk, k = 1, …, n} can be written as:

$$\bar{x} = g^{-1}\!\left(\sum_{k=1}^{n} p_k\, g(x_k)\right)$$
Hence, using the general mean, the total amount of information becomes:

$$H = g^{-1}\!\left(\sum_{k=1}^{n} p_k\, g(I_k)\right) \qquad (2)$$

with g(x) a Kolmogorov-Nagumo invertible function. Rényi proved that, when the criterion of additivity of independent events is applied to (2), the range of usable functions g(x) is dramatically restricted. There are two possible classes:
- g(x) = cx, with c a constant, so that the general mean becomes linear and Shannon entropy (1) is obtained;
- $g(x) = c\,2^{(1-\alpha)x}$, which implies that the entropy definition becomes:

$$H_\alpha = \frac{1}{1-\alpha}\,\log_2\!\left(\sum_{k=1}^{n} p_k^{\alpha}\right) \qquad (3)$$

which is called Rényi’s information measure of order α.
We are particularly interested in the case α = 2, further called Rényi’s quadratic entropy (RQE), as the term between brackets in (3) then corresponds to the expected value of the probability density function (PDF):

$$H_2 = -\log_2\!\left(\sum_{k=1}^{n} p_k^{2}\right)$$

We wish to estimate RQE directly from the sampled signal using a Gaussian kernel $G_\sigma$ for Parzen’s density estimate:

$$\hat{p}(x) = \frac{1}{N}\sum_{i=1}^{N} G_\sigma(x - x_i)$$

with N the number of samples and σ the kernel size or bandwidth parameter. Hence, we obtain:

$$H_2 = -\log_2\!\left(\frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} G_{\sigma\sqrt{2}}(x_i - x_j)\right) = -\log_2 \mathrm{IP}$$

where IP is the information potential [54]. Note that, for the multivariate case, the average of the univariate kernels is taken. For ease of reference, we will call this multivariate RQE (MRQE). The bandwidth σ is a free parameter that can be chosen according to Silverman’s rule:

$$\sigma = \sigma_X \left(\frac{4}{(d+2)\,n}\right)^{\!1/(d+4)}$$

where n is the number of data points in a snippet, d the data dimensionality (number of electrodes), and σX the standard deviation of the data.
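A sketch of this estimator for a single channel is given below (Python/NumPy); it assumes the standard form of Silverman’s rule quoted above, and the function name is ours. For the multivariate case, such univariate estimates would be averaged over electrodes, as stated in the text.

```python
import numpy as np

def rqe(x, d=1):
    """Rényi quadratic entropy of a snippet via the information potential:
    IP = (1/N^2) * sum_ij G_{sigma*sqrt(2)}(x_i - x_j),  RQE = -log2(IP)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Silverman's rule for the kernel bandwidth (our reading of the text)
    sigma = x.std() * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))
    var2 = 2.0 * sigma ** 2                  # variance of G_{sigma*sqrt(2)}
    diff = x[:, None] - x[None, :]
    kernel = np.exp(-diff ** 2 / (2.0 * var2)) / np.sqrt(2.0 * np.pi * var2)
    return -np.log2(kernel.mean())           # kernel.mean() is the IP
```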
The computational complexity of MRQE is quadratic: ϑ(dn + n).
2.2.3 Kernel-based Shannon entropy.
As a third entropy-based method, we consider Shannon entropy (ShE):

$$H = -\sum_{k} p_k \log_2 p_k$$

where pk is a probability density function (PDF). As in the case of RQE, we estimate the PDF directly from the samples using a Gaussian kernel for Parzen’s density estimate:

$$\hat{p}(x) = \frac{1}{n}\sum_{i=1}^{n} G_\sigma(x - x_i)$$

where n is the number of data points in a snippet and σ the kernel size or bandwidth parameter. Similarly to RQE, we develop the multivariate case with Shannon entropy (MShE). The computational complexity of MShE is quadratic as well: ϑ(dn + n).
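The corresponding plug-in (resubstitution) estimator can be sketched analogously; this reflects our reading of the text rather than the authors’ exact implementation.

```python
import numpy as np

def kernel_shannon_entropy(x, sigma):
    """Plug-in Shannon entropy: evaluate the Parzen density estimate at
    the samples themselves and average -log2 of those values."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    p_hat = np.mean(np.exp(-diff ** 2 / (2 * sigma ** 2))
                    / (sigma * np.sqrt(2 * np.pi)), axis=1)
    return -np.mean(np.log2(p_hat))
```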
2.2.4 Multivariate Empirical Mode Decomposition (MEMD).
With empirical mode decomposition (EMD), a signal x(t) of length M data points is split into l = log2 M narrow-band, amplitude/frequency modulated components called intrinsic mode functions (IMFs) [55]:

$$x(t) = \sum_{i=1}^{l} \mathrm{IMF}_i(t)$$
IMF1 corresponds to the highest-frequency component and the subsequent IMFs to lower, more narrow-band frequency components. The last component, IMFl, is the trend in the signal and is usually omitted from further analysis. The multivariate extension of EMD (MEMD) [31] aligns similar frequency bands across multiple channels, thus providing an assessment of their possible interdependence (the mode alignment property).
2.2.5 MEMD-enhanced multiscale, multivariate entropy.
There are at least two ways to compute a multiscale version of entropy when using MEMD: we can compute the multivariate entropy (MSE, MRQE, or MShE) for each scale individually (IMFs) or for the accumulated scales (cumulative IMFs, CIMFs). We explain the algorithm for the CIMF case (a code sketch follows the list below):
- obtain the IMFs for each subject’s entire recording length of each video clip (M data points, with M in our case between 30000 and 52000, depending on video clip length) by applying the MEMD method to a given number d of EEG electrodes;
- accumulate the IMFs one by one, starting with the first one, CIMF1 = IMF1, and then add the second IMF to the first, CIMF2 = IMF1 + IMF2, and so on, until all IMFs are added;
- for each CIMF, calculate the multivariate entropy (d electrodes) for each non-overlapping snippet of a previously defined EEG signal track and take the average over all N snippets (MSE), or calculate the univariate entropy per electrode for those snippets and take the average over all d electrodes (MRQE, MShE);
- plot the entropy estimates as a function of CIMF1, CIMF2, … to obtain the MEMD-enhanced multiscale, multivariate entropy curve of the targeted EEG signal track (i.e., MEMD-enhanced MMSE, MMRQE, MMShE).
Note that the total number of IMFs is log2 M [55], but the algorithm itself selects the number of IMFs in a data-driven, subject-dependent way (in our case, between 15 and 17 IMFs); hence, for clarity’s sake, we decided to show 15 IMFs for all subjects. Both operations (limiting the number of IMFs and omitting the trend) imply that the original signal is only approximated by the last CIMF, so their entropies could differ.
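A sketch of the CIMF pipeline is given below; it assumes the IMFs have already been computed by a MEMD implementation (e.g., the Matlab code linked below, or a port thereof) and reuses the rqe() sketch from section 2.2.2. The 12-sample snippet corresponds to 100 ms at 128 Hz.

```python
import numpy as np

def mmrqe_curve(imfs, snippet_len=12):
    """MEMD-enhanced multiscale RQE over cumulative IMFs.

    imfs: array (n_imfs, n_channels, n_samples) from a MEMD decomposition.
    Returns one value per CIMF: the univariate RQE averaged over all
    non-overlapping snippets and all channels (cf. the list above)."""
    n_imfs, n_channels, n_samples = imfs.shape
    cimfs = np.cumsum(imfs, axis=0)       # CIMF_k = IMF_1 + ... + IMF_k
    n_snips = n_samples // snippet_len
    curve = np.empty(n_imfs)
    for k in range(n_imfs):
        vals = [rqe(cimfs[k, ch, s * snippet_len:(s + 1) * snippet_len])
                for ch in range(n_channels) for s in range(n_snips)]
        curve[k] = np.mean(vals)
    return curve
```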
For the interested reader, the Matlab code for the multiscale MSE algorithm can be found at http://www.commsp.ee.ic.ac.uk/~mandic/research/Complexity_Stuff.htm, and the Matlab code for the multivariate EMD at http://www.commsp.ee.ic.ac.uk/~mandic/research/emd.htm.
2.2.6 Scalp space projection of intracranial recordings.
In order to explore the relation between the scalp and intracranial recordings in response to emotion-evoking and neutral video clips, we performed a source reconstruction analysis with the Brainstorm toolbox, as it can work directly with intracranial electrodes. We started with the patient’s post-implantation MRI head scan to define the standard Montreal Neurological Institute (MNI) stereotactic coordinates of the depth electrode’s contact points in the amygdala-hippocampal complex and the electrode positions of the grid covering the occipital cortex. Then, we used the patient’s pre-implantation MRI scan to extract the head and cortex envelopes (inner and outer skull, scalp surface, and cortex) using the BrainSuite software [56]. Fiducial points were selected manually. For the forward model, Boundary Element Method (BEM) surfaces were created using 15000 dipoles on the entire MRI volume, so as to include deep sources, with the OpenMEEG BEM model [57]. For the inverse modeling, as there was no resting state data available, the identity matrix was selected as the noise covariance, and the sLORETA algorithm was used for distributed source modeling [58].
The depth electrode had 10 contact points but, on anatomical grounds, the first 2 contact points were considered to be in the amygdala. However, as their recordings did not display a linear relation (no obvious Pearson correlation, see S4 Appendix), and as the recordings of the second contact point instead seemed to correlate with those of contact points 3 to 6, we only used contact point 1 to simulate data and see how it projects back onto scalp space. In order to obtain a sizeable estimate of amygdala activation, we assumed that 1 cm3 of amygdala tissue, in proportion to the entire head volume, corresponds to about 27 amygdala dipoles out of the 15000 in total. We set all values of the head model to zero, except for the said 27 dipoles, which we filled with contact point 1’s recordings. As the head model for the entire MRI volume is not constrained in orientation, but an orientation needs to be defined for our simulations, we chose four possible orientations for simulating our deep source (the X-, Y-, and Z-directions, and also an equally mixed version with equal weight in each direction) and projected the result onto the entire scalp.
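Schematically, this zero-out-and-project step is a plain linear forward operation. The sketch below uses hypothetical variable names and assumes a precomputed, orientation-unconstrained lead-field matrix (as produced by the BEM forward model described above).

```python
import numpy as np

def project_deep_source(leadfield, dipole_idx, contact_ts, orientation):
    """Forward-project an amygdala source onto the scalp.

    leadfield:   (n_scalp_channels, n_dipoles, 3) forward gain matrix
    dipole_idx:  indices of the ~27 dipoles assigned to the amygdala
    contact_ts:  (n_samples,) recording of depth-electrode contact 1
    orientation: length-3 unit vector (X, Y, Z, or the mixed case)"""
    n_ch, n_dip, _ = leadfield.shape
    sources = np.zeros((n_dip, 3, len(contact_ts)))
    # every dipole stays zero except the amygdala ones, driven by contact 1
    sources[dipole_idx] = orientation[:, None] * contact_ts[None, :]
    return np.einsum("cdo,dot->ct", leadfield, sources)  # scalp time series

mixed = np.ones(3) / np.sqrt(3)   # the equally weighted orientation
```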
Next, we wanted to examine whether the sources that generated the intracranial recordings of the electrode grid covering the occipital lobe also contribute to the frontal scalp recordings. For this, we first solved the inverse problem, starting from the 20 electrode grid recordings, and then addressed the forward problem to see how these sources show up in the frontal part of the scalp.
3. Results
3.1 Discriminability of emotional categories
MEMD-enhanced multiscale, multivariate entropy estimates of the preprocessed scalp EEG recordings were computed only over the last 100 s of each video clip, as the video clips were constructed by Schaefer and co-workers to have their climax at the very end (see also section 3.5). For the sake of comparison, in the same way as in [34], we considered electrodes F3 and F4 (mid-frontal area) (d = 2), took N = 1000 non-overlapping snippets of length 100 ms (n = 12 samples at 128 Hz), and averaged the entropy estimates across snippets for each participant. The MEMD-enhanced MMSE, MMRQE, and MMShE curves for all 30 participants, grouped by their self-reported emotional categories (see Table 1), are shown in Fig 1. We observe that the complexity curves of the three entropies are similar, with similar discriminability between emotional states. We also observe that the curve labeled ‘sadness’ has the lowest entropy values (note that it also has the lowest affect score of all video clips used), thus confirming Aftanas and co-workers’ assertion [21]. However, when assessing the effect of snippet length, we observe that the MEMD-enhanced MMRQE curves are less sensitive to the choice of snippet length than those of the other 2 approaches (see S1 Appendix). We also calculated the entropies of the original (uniscale) EEG. The uniscale case exhibits a much lower discriminability (in fact, only 2 groups of plots remain), clearly showing the advantage of the multiscale approach. We also calculated the MEMD-enhanced MMRQE curves for the 2 non-French-speaking participants (S6 Appendix). Their curves are now virtually indistinguishable, except for the ‘Life is beautiful’ video clip (self-labeled as tenderness). Hence, the physical parameters of the video clips do not explain the discriminability of the MMRQE curves (see also the low inter-rater agreement of the occipital and auditory cortex responses in Fig 2, left panel, discussed further below).
Curves are labeled in terms of our 30 participants’ self-reports (self-labels); the corresponding uniscale entropy values for the original EEG recordings are also shown (cf. the horizontal axis tick labeled ‘raw’). Error bars are standard errors of the average MMSE, MMRQE, and MMShE values per subject. Results are shown for a 100 ms snippet length. Color convention: purple = anger, red = amusement, green = disgust, black = tenderness, blue = sadness.
The coefficient shows the degree to which the participants’ RQEs for CIMF6 are in agreement.
We also computed the MEMD-enhanced MMRQE curves of the 6 participants in the control experiment (see S7 Appendix). Note that these 4 video clips were not taken from the top-ten list of affect scores (Table 1). The individual curves are much less discriminable, and their ranking, in terms of self-labels, is now subject-dependent.
As the peak frequency of IMFi is given by $f_s/2^{i+1}$ [55], with a sampling rate of fs = 128 Hz the frequency range spanned by the first 6 IMFs runs from 1 Hz (IMF6) to 32 Hz (IMF1), which corresponds to the bandwidth of the Butterworth filter applied in the preprocessing (0.5–30 Hz). This also explains the asymptotic behavior of our entropy curves; hence, we will further focus on CIMF6. When applying a linear mixed model [59] to the CIMF6 results of MMRQE, with self-label, gender, and age as fixed effects and subject as random effect, we found that self-label is significant (p < 0.0001), whereas age (p = 0.083) and gender (p = 0.68) are not.
For comparison’s sake, we also considered the case where entropy is plotted as a function of the individual IMF index, thus considering separate scales. The result is shown in S2 Appendix: we observe a much lower discriminability between emotional states compared to the cumulative case (CIMF).
Finally, as the computational complexity of MMRQE is quadratic and that of MMSE cubic, the former is also advantageous for computational efficiency reasons.
3.2 Discriminability of multiscale RQE per electrode
We statistically assessed to what extent the RQEs per self-reported emotional category are in agreement across subjects (inter-rater reliability, inter-rater agreement). We used the so-called interval version of Krippendorff's alpha statistic (using the kriAlpha function in Matlab) for our 30 participants (main experiment), given their 5 self-reported emotional categories (self-labels), and determined the scalp distribution of the alpha statistic of the RQEs (per electrode, thus univariate). To simplify the comparison, we restricted ourselves to CIMF6. The results when using 100 ms snippets on the last 100 s of each video, computing the EMD for all 32 electrodes jointly (MEMD) and retaining CIMF6, are shown in Fig 2 (left panel). We observe that the frontal region of the scalp has the highest alpha coefficients, with much smaller values over the occipital and auditory cortices. For comparison’s sake, we also computed the scalp distribution of the alpha coefficient for the original (uniscale) EEG data (see S3 Appendix): the coefficients now turn out to be negative, implying a discordance in the RQE values between subjects. In order to assess the statistical significance of our results, we compared the alpha coefficient of each electrode of the main group (30 subjects) with that of the corresponding electrode of the control group (6 subjects). Hereto, we determined a distribution of the alpha coefficient of the control group using a simple block bootstrap strategy (Matlab’s bootstrp function) in which the alpha coefficient is computed for 1000 subsets of 5 subjects (random sampling with replacement). The result for the control group is shown in Fig 2 (right panel). The scalp plot of the p-values resulting from a simple t-test is shown in Fig 3. One observes that the most discriminative electrodes are F3, FC1, FC5, T7, CP5, CP2, C4, T8, FC6, F4, and F8 (at a 5% significance level).
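For readers without access to the kriAlpha Matlab function, the interval-metric alpha for a complete raters × units matrix (no missing values, which suffices here) can be sketched as follows; the formulas follow Krippendorff’s standard definitions [36], and the function name is ours.

```python
import numpy as np

def krippendorff_alpha_interval(data):
    """Krippendorff's alpha (interval metric) for a complete matrix
    of shape (n_raters, n_units): alpha = 1 - D_o / D_e, using
    squared differences as the distance function."""
    data = np.asarray(data, dtype=float)
    m, u = data.shape
    n = m * u                                 # total number of values
    d_o = 0.0                                 # observed disagreement
    for unit in data.T:
        diff = unit[:, None] - unit[None, :]
        d_o += np.sum(diff ** 2) / (m - 1)
    d_o /= n
    allv = data.ravel()                       # expected disagreement
    d_e = np.sum((allv[:, None] - allv[None, :]) ** 2) / (n * (n - 1))
    return 1.0 - d_o / d_e

# bootstrap over subject subsets, analogous to the control-group analysis:
# alphas = [krippendorff_alpha_interval(data[np.random.choice(6, 5)])
#           for _ in range(1000)]
```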
In order to identify which subset of electrodes best discriminates between the 5 self-reported emotional categories, we conducted a ‘greedy’ search on the alpha coefficients of the main group (30 participants), computed from the last 100 seconds of each movie. We started the search from F3, the most significant electrode in Fig 2 (left panel), computed CIMF6 for this electrode only (to avoid effects from other electrodes), and plotted its alpha coefficient based on the main group’s RQE. Then, we added the next most significant channel, F4, computed CIMF6 for F3-F4, and plotted the alpha coefficient based on the multivariate RQE (channels F3-F4, thus d = 2), and so on (Fig 4). One observes that F3-F4-Fz shows the best discriminability. This confirms that, at least for our case, emotion discrimination can be achieved using only 3 mid-frontal channels, which also reduces the computational cost of the MEMD-enhanced MMRQE algorithm (recall that the algorithm’s complexity scales with the number of channels).
3.3 Temporal evolution of multiscale RQE discriminability
We also considered the temporal evolution of Krippendorff’s alpha coefficient of the main group, but now using the entire length of each video clip. We used 100 ms snippets and the 3 mid-frontal electrodes (F3, F4, Fz) selected by our greedy search for the MEMD calculation (Fig 4), and computed the multivariate RQE (MRQE, d = 3) for CIMF6. We observe from Fig 5 that the alpha coefficient, and hence the discriminability across participants, increases and remains relatively stable towards the end of the video clip. This confirms our motivation to focus on the last 100 s of the EEG recordings.
3.4 Multiscale RQE of intracranial EEG
We also computed the MEMD-enhanced MMRQE for the intracranial EEG recordings in the amygdala (2 contact points) and over the occipital area (20 contact points) while the patient viewed an emotional (fragment from ‘Schindler’s List’, self-labeled by the patient as ‘anger’) and a neutral (weather forecast, self-labeled as ‘neutral’) video clip (Fig 6). As in the scalp EEG case, we selected the last 100 s of each video clip, determined the MEMD for the 2 amygdala contact points (depth electrode) and separately for the 20 occipital contact points (subdural electrode grid), and finally plotted the multiscale, multivariate RQE (MMRQE) using non-overlapping 100 ms snippets. We observe that the MMRQE curves for the amygdala are well separable, in contrast to those of the occipital area (Fig 6, left vs. right panel).
Shown are the results for the (a) amygdala- and (b) occipital implants in a patient viewing emotional and neutral video clips (blue vs. red labeled curves). Error bars correspond to standard errors of individual snippet MMRQEs.
3.5 Temporal evolution of multiscale RQE for joint scalp/intracranial EEG
Finally, we computed the MRQE for CIMF6 using the joint scalp (mid-frontal) EEG and intracranial EEG (amygdala) recorded while viewing both video clips (emotional and neutral). Note that the MEMD is computed with reference to the listed electrodes in each case. The results are shown in Fig 7. We observe that, for the ‘anger’ video, the MRQE curves of the intracranial EEG (amygdala) and the mid-frontal scalp EEG seem to converge and even overlap at the end of the video (Fig 7, left panel). In order to statistically assess this observation, we modeled the temporal evolution of both MRQEs for 8 successive, non-overlapping time intervals, where each interval consists of 20 successive recording samples (the interval between 2 successive samples is 1 s). Then, for each recording, we adopted a linear mixed model approach [59,60] with MRQE as continuous outcome and the time intervals as fixed effects:

$$\mathrm{MRQE}_{ij} = \alpha + \sum_{t=1}^{7} \beta_t\, x_{tij} + b_{0i} + b_{1i}\, t_{ij} + \varepsilon_{ij}$$

where subscript i refers to the modeled recording, α is the overall intercept, β1 … β7 are the overall time interval-specific slopes (with xtij dummy-coding the time intervals), b0i and b1i are the latent random intercept and random slope, and εij ~ N(0, σ2) is random noise. To correct for possible EEG recording-specific differences, a random intercept and a random slope were used. Finally, we tested the hypothesis that the MRQEs of both signals (scalp and intracranial) at a given time interval were drawn from the same distribution. For all time intervals except the last two (40 s in total), the hypothesis is rejected; hence, the MRQE curves indeed become indistinguishable at the end of the ‘anger’ video. Note that this also corresponds to the time range producing the best discriminability (alpha coefficient) of the main group’s scalp EEG recordings (section 3.3), where the crux of the video unfolds. In contrast, no such overlap is observed for the ‘neutral’ video (Fig 7, right panel). All analyses were performed using SAS, release 9.4.
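The analyses were run in SAS (as noted above); for illustration, a rough analogue of this random-intercept/random-slope model in Python’s statsmodels could look as follows, with hypothetical column names and data layout.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical layout: one row per (recording, sample) with columns
# 'mrqe' (outcome), 'interval' (1..8, fixed effect), 'time' (sample
# time, random slope) and 'recording' (grouping factor)
df = pd.read_csv("mrqe_intervals.csv")

model = smf.mixedlm("mrqe ~ C(interval)", df,
                    groups=df["recording"],   # random intercept per recording
                    re_formula="~time")       # plus a random slope on time
result = model.fit()
print(result.summary())
```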
Shown are the results for CIMF6 for amygdala iEEG and mid-frontal scalp EEG (F3, F4, Fz electrodes) in response to emotional and neutral video clips.
In order to gain insight into the origin of the overlap between the two MRQE curves, we analyzed the phase synchrony between the mid-frontal scalp EEG and the intracranial amygdala signals, using the phase-locking statistics (PLS) technique introduced by Lachaux and co-workers [61]. The results are shown in Fig 8 (left panel). After a transitional period at the onset of both videos, the phase synchrony of the emotional video (‘anger’) increases sharply towards the end, which is not the case for the neutral video, whose phase synchrony stays constant. As these results are from only one subject, there is not enough statistical power to decide whether the difference between the two curves, and between the low and high synchrony values of the emotional video, is significant. We therefore applied the surrogate data method called Iterative Amplitude Adjusted Fourier Transform (IAAFT), originally proposed by Schreiber and Schmitz [62,63], to the amygdala and mid-frontal recordings of each video. The IAAFT algorithm generates surrogate signals with identical amplitude distributions and approximately identical amplitude spectra, while the cross-correlation between the original signals is destroyed. We used Matlab’s Chaotic Systems Toolbox (https://nl.mathworks.com/matlabcentral/fileexchange/1597-chaotic-systems-toolbox/content/IAAFT.m) with c = 10, maxiter = 1000, frequency range 0.05–30 Hz, and trial length 200 ms.
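The phase-locking value at the heart of the PLS technique can be sketched as follows (Python/SciPy; function names are ours). The 26-sample window corresponds to the 200 ms trial length mentioned above at 128 Hz.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV = |mean(exp(i*(phi_x - phi_y)))|, with instantaneous phases
    obtained from the analytic (Hilbert) signal (cf. Lachaux et al. [61])."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

def plv_over_time(x, y, win=26):
    """Windowed PLV between two equal-length signals."""
    n = len(x) // win
    return np.array([phase_locking_value(x[i*win:(i+1)*win],
                                         y[i*win:(i+1)*win])
                     for i in range(n)])
```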
Shown are the results for the emotional (‘anger’) and neutral video clips (left panel) and for the corresponding surrogate signal distributions (right panel) (red = neutral video, blue = emotional video).
The results are illustrated in Fig 8 (right panel). The increasing phase synchrony of the emotional video is significantly different from its surrogate distribution, whereas the phase synchrony of the neutral video is indistinguishable from its surrogate distribution. The high phase synchrony at the end of the emotional video supports the similarity in MRQE values between the scalp and intracranial recordings (Fig 7): these recordings exhibit strong phase synchrony and share similar degrees of complexity near the end of the emotional video.
Finally, using Brainstorm, we simulated, for each of the 4 amygdala source orientations (see Methods section 2.2.6), the EEG signal (128-channel full-scalp configuration) in response to the available amygdala recordings of both video clips. As above, we computed the phase synchrony between the simulated and the actually recorded mid-frontal (F3, Fz, F4) EEG and again applied the IAAFT algorithm to generate surrogates. The results are shown in S8 Appendix for the case where the amygdala dipole orientations are equally weighted in the X-, Y-, and Z-directions (the “mixed” case, see Methods section 2.2.6); the results for the X-, Y-, and Z-oriented dipoles are comparable. We observe that the phase synchrony behaves quite similarly to Fig 8. The phase synchrony result in S8 Appendix supports our hypothesis that, at least for the last part of the emotional video, the amygdala could indeed be contributing to the synchrony observed in our mid-frontal scalp recordings, but we hasten to add that other sources (which we did not model here) could be contributing as well. When repeating the exercise for the sources that evoked the occipital electrode grid response, and simulating their EEG activity on the scalp, we observed neither significant phase synchrony levels for the mid-frontal electrodes nor any difference between the neutral and emotional videos. This result is also in line with our complexity results (Fig 8).
4. Discussion
Recently, several algorithms have been proposed linking EEG signal complexity (in particular, entropy) to distinct emotional states. Jie and co-workers [23] performed emotion recognition from the univariate, uniscale sample entropy (SE) of prefrontal EEG recordings. However, the EEG recordings of subjects whose emotion self-reports were not in line with the pre-defined labels were excluded from their analysis. In our previous work [34], we used MEMD-enhanced multiscale, multivariate sample entropy (MMSE) to discriminate multiple emotional states from EEG recordings in response to emotion-evoking video clips. Contrary to Jie and co-workers, we used self-reports of emotional category for labeling the signal complexity results. Furthermore, unlike previous studies [64], the emotional state was discriminated even from a single video clip (single-trial design). This strategy was also adopted in the present article, but in the context of MEMD-enhanced MMRQE, a multiscale, multivariate version of Rényi’s quadratic entropy (RQE). Although the results were similar, the Rényi-based approach revealed two advantages. Firstly, MMRQE’s computational complexity is quadratic whereas that of MMSE [34] is cubic, which rendered the Rényi-based approach computationally much more feasible for joint electrode analyses and electrode selection strategies. Secondly, the MMSE-based method is much more sensitive to snippet length, as its discriminability of emotional categories quickly degrades with snippet length; for MMRQE we observed a much broader range of usable snippet lengths.
It has been claimed that changes in emotional state (also in relation to dysfunctions) can be discerned from frontal lobe recordings [65,66]. In [67] it was shown that the orbital frontal cortex plays a critical role in the cognitive control of emotion (especially in the case of suppressing emotional responses), and that activity in this region reflects subsequent appraisal processes related to viewing emotional stimuli (see also [38]). Jie et al. [23] achieved emotion recognition from prefrontal EEG recordings and Tonoyan et al. [34] from mid-frontal recordings. By using Krippendorff’s inter-rater reliability analysis in combination with a greedy search for the optimal electrodes, we showed in the present article that the mid-frontal electrodes F3, Fz, and F4 scored best in discriminating emotional states (Fig 4), but this does not preclude a contribution of central and temporal electrodes, as reported in other studies [37,44,68]. Interestingly, the EEG signals over the occipital and auditory cortices exhibited a much lower discriminability, which implies that our results are probably not explainable by sensory or auditory activations elicited when viewing the video clips.
The question arises whether EEG signal complexity could be used to ‘label’ arbitrary video clips in terms of emotional states. There are several observations that temper our expectations. Firstly, there is a concern on the labeling side: the same video clip can receive different emotional category reports from different viewers, as we observed for the ‘Sleepers’ movie (Table 1). Another concern, maybe the most important one, is that signal complexity is not static but rather evolves while watching the video clip, and this affected the discriminability of our signal complexity results in terms of self-reported emotional categories (see Fig 5). However, we also observed that the discriminability increased towards the end of the video clips. This is in alignment with the way the video clips were constructed by Schaefer and co-workers: the clips have their climax at the end (roughly the last 100 s). We therefore recommend restricting EEG complexity analysis to the most affective scenes, as for those the highest inter-rater agreement can be expected.
How, then, to interpret signal complexity? One suggestion is to relate it to affect score: the affect scores of the video clips we used are available in [35] and are averages of self-reported affect scores based on 10 positive and 10 negative items (each on a 5-point scale). The video clip ‘City of Angels’ showed both a lower affect score (Table 1) and a lower MMRQE curve (Fig 1) than the ‘La cité de la peur’ video clip, which is in alignment with the assertion of Aftanas and co-workers [21] that higher complexity values can be observed for more affective stimuli. However, we could not observe such a relationship for the other video clips. This could be due to the ways the affect scores of [35] (a 20-item checklist per clip) and our emotional self-labels (one emotional category per clip) were collected, and to whether the differences in reported affect scores are significant (note that no standard deviations are provided in [35]). Video clips with low affect scores seem to be more difficult to discriminate based on signal complexity, as we showed in the control experiment (S7 Appendix): although the range of the RQE values was similar to that of the main experiment, the MMRQE curves were much less discriminable and their ranking varied across participants. By using video clips with the highest affect scores (cf. 3 of the 4 video clips of the main experiment), or by contrasting the highest and lowest affect score video clips (cf. the ‘City of Angels’ video clip), we could achieve discriminability based on signal complexity (Fig 1). Affect score thus seems to play a role in signal complexity, but it does not tell the whole story. Another suggestion is to relate signal complexity to emotional arousal, as these self-reports are also available in [35]. As shown in our earlier work, and as also pertains to our MMRQE results, the relation between signal complexity and emotional arousal was not significant (using a linear mixed model regression analysis), neither for the whole video clip (since in [35] the self-reports were collected in this way) nor for the last 100 s (the ‘climax’).
In order to shed light on the significance of scalp EEG signal complexity, we also examined intracranial EEG from the amygdala, recorded in a patient viewing emotional and neutral video clips. The amygdala is considered to be strongly involved in emotional processing. Guillory and co-workers [69] reviewed 64 invasive studies on human emotion, including the amygdala [37–46]. Moreover, Sergerie and co-workers [47] showed that the amygdala responds to all visual emotional stimuli regardless of their valence, and Aggleton and Mishkin even claimed it is the gateway to sensory emotion [48]. By applying the same MEMD-based MMRQE method to the intracranial recordings, we observed a clear discrimination between the MMRQE curves of the neutral and the emotional movie (labeled by the patient as evoking anger) for the amygdala but not for the occipital area (Fig 6). This result is in agreement with several studies demonstrating the involvement of the amygdala in negative emotion processing. Oya et al. [40] showed that significant changes in gamma power in the amygdala were selectively obtained in response to visual images judged to be aversive, but not in response to those judged pleasant or neutral. Naccache and co-workers [42] obtained differential ERP responses in the amygdala to negative emotional words. We observed that the multiscale RQE computed for the mid-frontal scalp electrodes and the amygdala electrodes overlapped, and that their phase synchrony reached a maximum, at the end of the emotional video where the crux is revealed. In contrast, no such overlap or phase synchrony was observed for the ‘neutral’ video. This was also observed for the phase synchrony between the original and the reconstructed mid-frontal scalp activations, the latter reconstructed from our amygdala recordings using our head model. In summary, our findings suggest that emotional state discrimination from scalp EEG is more likely to be supported by differences in amygdala activation when recorded in response to emotionally charged scenes.
5. Conclusion
We introduced multiscale, multivariate Rényi quadratic entropy (MMRQE) for analyzing the EEG recordings of 30 participants viewing 4 emotionally charged video clips taken from a validated database [35]. We compared our approach to the multiscale, multivariate version of sample entropy (MMSE) and showed that the results were similar, but with the advantage that MMRQE is less computationally intensive and less sensitive to track (snippet) length. We also applied our method to intracranial EEG recordings of the amygdala and observed that the RQEs of the emotional and neutral video clips could be discriminated, and that the MMRQE of the emotional video coincided with that of the mid-frontal scalp electrodes at the end of the clip, where the climax of the video is revealed. This was further confirmed by the increasing phase synchrony between the mid-frontal and amygdala recordings. This provides conclusive evidence for the ability of the proposed multiscale entropy-based method to discriminate emotional states.
Supporting information
S2 Appendix. MMRQE plotted in terms of individual IMFs.
https://doi.org/10.1371/journal.pone.0186916.s002
(PDF)
S3 Appendix. Scalp distribution of original EEG’s alpha coefficient.
https://doi.org/10.1371/journal.pone.0186916.s003
(PDF)
S4 Appendix. Correlation matrix of depth electrode.
https://doi.org/10.1371/journal.pone.0186916.s004
(PDF)
S5 Appendix. Location of subdural electrode grid.
https://doi.org/10.1371/journal.pone.0186916.s005
(PDF)
S6 Appendix. MMRQE results for non-French speaking subjects, main experiment.
https://doi.org/10.1371/journal.pone.0186916.s006
(PDF)
S7 Appendix. MMRQE results, control experiment.
https://doi.org/10.1371/journal.pone.0186916.s007
(PDF)
S8 Appendix. Phase synchrony between simulated and mid-frontal EEG.
https://doi.org/10.1371/journal.pone.0186916.s008
(PDF)
Acknowledgments
The authors are grateful to Prof. Alexandre Schaefer (School of Business, Monash University, Victoria, Malaysia) for his valuable input on the interpretation of the video clip data; to Prof. Alfred Meurs, Dr. Evelien Carrette, and Prof. Paul Boon (Neurology Department, Ghent University Hospital) and Prof. Dirk Van Roost (Department of Neurosurgery, Ghent University Hospital) for their assistance with the ECoG experiment; and to Mansoureh Fahimi Hnazaee (Laboratory of Neuro- and Psychophysiology, KU Leuven) for performing the source reconstruction analysis. YT is supported by the Interuniversity Attraction Poles Programme Belgian Science Policy (IUAP P7/11). MMVH is supported by research grants received from the Financing program (PFV/10/008), an interdisciplinary research project (IDO/12/007), and an industrial research fund project (IOF/HB/12/021) of the KU Leuven, the Belgian Fund for Scientific Research – Flanders (G088314N, G0A0914N), the Interuniversity Attraction Poles Programme – Belgian Science Policy (IUAP P7/11), the Flemish Regional Ministry of Education (Belgium) (GOA 10/019), and the Hercules Foundation (AKUL 043). Part of the reported work was done while MMVH was a Visiting Academic at Imperial College London. DM is supported by an EPSRC grant (EP/K025643/1).
References
- 1. John ER, Prichep LS, Fridman J, Easton P. Neurometrics: computer-assisted differential diagnosis of brain dysfunctions. Science [Internet]. 1988 Jan 8;239(4836):162–9. Available from: http://www.sciencemag.org/content/239/4836/162.abstract
- 2. Nunez PL, Cutillo BA. Neocortical dynamics and human EEG rhythms. Oxford University Press, USA; 1995.
- 3. Choppin A. EEG-based human interface for disabled individuals: Emotion expression with neural networks. Unpublished master’s thesis; 2000.
- 4. Millán J d R, Rupp R, Müller-Putz GR, Murray-Smith R, Giugliemma C, Tangermann M, et al. Combining brain–computer interfaces and assistive technologies: state-of-the-art and challenges. Front Neurosci. 2010;4.
- 5. Nijholt A. BCI for games: A “state of the art” survey. In: Entertainment Computing-ICEC 2008. Springer; 2009. p. 225–8.
- 6. Davidson RJ. Anterior cerebral asymmetry and the nature of emotion. Brain Cogn [Internet]. 1992 Sep [cited 2015 Jul 7];20(1):125–51. Available from: http://www.sciencedirect.com/science/article/pii/027826269290065T pmid:1389117
- 7. Kostyunina MB, Kulikov MA. Frequency characteristics of EEG spectra in the emotions. Neurosci Behav Physiol [Internet]. 1996;26(4):340–3. Available from: http://europepmc.org/abstract/MED/8912339 pmid:8912339
- 8. Shemiakina NV, Dan’ko SG. Influence of the emotional perception of a signal on the electroencephalographic correlates of the creative activity. Fiziol Cheloveka. 2004;30(2):22–9. pmid:15150971
- 9. Jaušovec N, Jaušovec K, Gerlič I. Differences in event-related and induced EEG patterns in the theta and alpha frequency bands related to human emotional intelligence. Neurosci Lett. 2001;311(2):93–6. pmid:11567786
- 10. Vecchiato G, Astolfi L, Fallani FDV, Cincotti F, Mattia D, Salinari S, et al. Changes in brain activity during the observation of TV commercials by using EEG, GSR and HR measurements. Brain Topogr. 2010;23(2):165–79. pmid:20033272
- 11. Bekkedal MYV, Rossi J, Panksepp J. Human brain EEG indices of emotions: delineating responses to affective vocalizations by measuring frontal theta event-related synchronization. Neurosci Biobehav Rev. 2011;35(9):1959–70. pmid:21596060
- 12. Wang X-W, Nie D, Lu B-L. Emotional state classification from EEG data using machine learning approach. Neurocomputing. 2014;129:94–106.
- 13. Daly I, Malik A, Hwang F, Roesch E, Weaver J, Kirke A, et al. Neural correlates of emotional responses to music: an EEG study. Neurosci Lett. 2014;573:52–7. pmid:24820541
- 14. Paulmann S, Kotz SA. An ERP investigation on the temporal dynamics of emotional prosody and emotional semantics in pseudo-and lexical-sentence context. Brain Lang. 2008;105(1):59–69. pmid:18177699
- 15. Spreckelmeyer KN, Kutas M, Urbach T, Altenmüller E, Münte TF. Neural processing of vocal emotion and identity. Brain Cogn. 2009;69(1):121–6. pmid:18644670
- 16. Balconi M, Lucchiari C. EEG correlates (event-related desynchronization) of emotional face elaboration: a temporal analysis. Neurosci Lett. 2006;392(1):118–23.
- 17. Zhang D, He W, Wang T, Luo W, Zhu X, Gu R, et al. Three stages of emotional word processing: an ERP study with rapid serial visual presentation. Soc Cogn Affect Neurosci. 2014;9(12):1897–903. pmid:24526185
- 18. Jessen S, Kotz SA. The temporal dynamics of processing emotions from vocal, facial, and bodily expressions. Neuroimage. 2011;58(2):665–74. pmid:21718792
- 19. Stefanics G, Csukly G, Komlósi S, Czobor P, Czigler I. Processing of unattended facial emotions: a visual mismatch negativity study. Neuroimage. 2012;59(3):3042–9. pmid:22037000
- 20. Goldberger AL, Peng C-K, Lipsitz LA. What is physiologic complexity and how does it change with aging and disease? Neurobiol Aging. 2002;23(1):23–6. pmid:11755014
- 21. Aftanas LI, Lotova NV, Koshkarov VI, Makhnev VP, Mordvintsev YN, Popov SA. Non-linear dynamic complexity of the human EEG during evoked emotions. Int J Psychophysiol. 1998;28(1):63–76. pmid:9506311
- 22. Hosseini SA, Naghibi-Sistani MB. Emotion recognition method using entropy analysis of EEG signals. Int J Image, Graph Signal Process. 2011;3(5):30.
- 23. Jie X, Cao R, Li L. Emotion recognition based on the sample entropy of EEG. Biomed Mater Eng. 2013;24(1):1185–92.
- 24. Rényi A. On measures of entropy and information. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability; 1961. p. 547–61. Available from: http://www.maths.gla.ac.uk/~tl/Renyi.pdf
- 25. Arefian NM, Zali AR, Seddighi AS, Fathi M, Teymourian H, Dabir S, et al. Clinical analysis of eeg parameters in prediction of the depth of anesthesia in different stages: a comparative study. Tanaffos. 2009;8(2):46–53.
- 26. Sourina O, Wang Q, Liu Y, Nguyen MK. A real-time fractal-based brain state recognition from EEG and its application. Proc Biosignals [Internet]. 2011;82–91. Available from: http://www.ntu.edu.sg/home/eosourina/Papers/BIOSIGNALS_2011_164_CR.pdf
- 27. Bajaj V, Pachori RB. Detection of human emotions using features based on the multiwavelet transform of EEG signals. In: Brain-Computer Interfaces. Springer; 2015. p. 215–40.
- 28. Kantz H, Schreiber T. Nonlinear time series analysis. Vol. 7. Cambridge University Press; 2004.
- 29. Costa M, Goldberger A, Peng C-K. Multiscale entropy analysis of biological signals. Phys Rev E [Internet]. 2005 Feb [cited 2014 Mar 24];71(2):021906. Available from: http://link.aps.org/doi/10.1103/PhysRevE.71.021906
- 30. Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc R Soc A Math Phys Eng Sci [Internet]. 1998 Mar 8 [cited 2014 Apr 28];454(1971):903–95. Available from: http://rspa.royalsocietypublishing.org/cgi/doi/10.1098/rspa.1998.0193
- 31. Rehman N, Mandic DP. Multivariate empirical mode decomposition. Proc R Soc A Math Phys Eng Sci [Internet]. 2009 Dec 23 [cited 2014 Mar 24];466(2117):1291–302. Available from: http://rspa.royalsocietypublishing.org/cgi/doi/10.1098/rspa.2009.0502
- 32. Ahmed MU, Rehman N, Looney D, Rutkowski TM, Mandic DP. Dynamical complexity of human responses: a multivariate data-adaptive framework. Bull Polish Acad Sci Tech Sci [Internet]. 2012;60(3):433–45. Available from: http://www.degruyter.com/view/j/bpasts.2012.60.issue-3/v10175-012-0055-0/v10175-012-0055-0.xml
- 33. Sharma R, Pachori RB, Acharya UR. Application of Entropy Measures on Intrinsic Mode Functions for the Automated Identification of Focal Electroencephalogram Signals. Entropy. 2015;17(2):669–91.
- 34. Tonoyan Y, Looney D, Mandic DP, Van Hulle MM. Discriminating Multiple Emotional States from EEG Using a Data-Adaptive, Multiscale Information-Theoretic Approach. Int J Neural Syst. 2016;
- 35. Schaefer A, Nils F, Sanchez X, Philippot P. Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cogn Emot [Internet]. 2010;24(7):1153–72. Available from: http://www.informaworld.com/smpp/title~content=t713682755
- 36. Hayes AF, Krippendorff K. Answering the call for a standard reliability measure for coding data. Commun Methods Meas. 2007;1(1):77–89.
- 37. Fish DR, Gloor P, Quesney FL, Oliver A. Clinical responses to electrical brain stimulation of the temporal and frontal lobes in patients with epilepsy. Brain. 1993;116(2):397–414.
- 38. Halgren E, Walter RD, Cherlow DG, Crandall PH. Mental phenomena evoked by electrical stimulation of the human hippocampal formation and amygdala. Brain. 1978;101(1):83–115. pmid:638728
- 39. Fried I, MacDonald KA, Wilson CL. Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron. 1997;18(5):753–65. pmid:9182800
- 40. Oya H, Kawasaki H, Howard MA, Adolphs R. Electrophysiological responses in the human amygdala discriminate emotion categories of complex visual stimuli. J Neurosci. 2002;22(21):9502–12. pmid:12417674
- 41. Krolak-Salmon P, Hénaff M-A, Vighetto A, Bertrand O, Mauguière F. Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: a depth electrode ERP study in human. Neuron. 2004;42(4):665–76. pmid:15157426
- 42. Naccache L, Gaillard R, Adam C, Hasboun D, Clémenceau S, Baulac M, et al. A direct intracranial record of emotions evoked by subliminal words. Proc Natl Acad Sci U S A. 2005;102(21):7713–7. pmid:15897465
- 43. Lanteaume L, Khalfa S, Régis J, Marquis P, Chauvel P, Bartolomei F. Emotion induction after direct intracerebral stimulations of human amygdala. Cereb Cortex. 2007;17(6):1307–13. pmid:16880223
- 44. Meletti S, Tassi L, Mai R, Fini N, Tassinari CA, Lo Russo G. Emotions induced by intracerebral electrical stimulation of the temporal lobe. Epilepsia. 2006;47(s5):47–51.
- 45. Pourtois G, Spinelli L, Seeck M, Vuilleumier P. Temporal precedence of emotion over attention modulations in the lateral amygdala: Intracranial ERP evidence from a patient with temporal lobe epilepsy. Cogn Affect Behav Neurosci. 2010;10(1):83–93. pmid:20233957
- 46. Sato W, Kochiyama T, Uono S, Matsuda K, Usui K, Inoue Y, et al. Rapid amygdala gamma oscillations in response to fearful facial expressions. Neuropsychologia. 2011;49(4):612–7. pmid:21182851
- 47. Sergerie K, Chochol C, Armony JL. The role of the amygdala in emotional processing: a quantitative meta-analysis of functional neuroimaging studies. Neurosci Biobehav Rev. 2008;32(4):811–30. pmid:18316124
- 48. Aggleton JP, Mishkin M. The amygdala: sensory gateway to the emotions. Emot Theory, Res Exp. 1986;3:281–99.
- 49. Makeig S, Debener S, Onton J, Delorme A. Mining event-related brain dynamics. Trends Cogn Sci. 2004;8(5):204–10. pmid:15120678
- 50. Croft RJ, Chandler JS, Barry RJ, Cooper NR, Clarke AR. EOG correction: a comparison of four methods. Psychophysiology [Internet]. 2005 Jan [cited 2015 Jul 7];42(1):16–24. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15720577
- 51. van Rijn ACM, Peper A, Grimbergen CA. High-quality recording of bioelectric events. Med Biol Eng Comput. 1991;29(4):433–40. pmid:1787761
- 52. Richman JS, Moorman JR. Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol [Internet]. 2000 Jun;278(6):H2039–49. Available from: http://www.ncbi.nlm.nih.gov/pubmed/10843903 pmid:10843903
- 53. Gautama T, Mandic DP, Van Hulle MM. A differential entropy based method for determining the optimal embedding parameters of a signal. In: Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03). IEEE; 2003. p. VI-29.
- 54. Principe JC. Information theoretic learning: Renyi's entropy and kernel perspectives. Springer Science & Business Media; 2010.
- 55. Costa M, Priplata AA, Lipsitz LA, Wu Z, Huang NE, Goldberger AL, et al. Noise and poise: enhancement of postural complexity in the elderly with a stochastic-resonance-based therapy. EPL (Europhysics Letters). 2007;77(6):68008.
- 56. Shattuck DW, Leahy RM. BrainSuite: an automated cortical surface identification tool. Med Image Anal. 2002;6(2):129–42. pmid:12045000
- 57. Gramfort A, Papadopoulo T, Olivi E, Clerc M. OpenMEEG: opensource software for quasistatic bioelectromagnetics. Biomed Eng Online. 2010;9(1):45.
- 58. Pascual-Marqui RD. Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find Exp Clin Pharmacol. 2002;24(Suppl D):5–12.
- 59. Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics. 1982;38(4):963–74. pmid:7168798
- 60. Verbeke G, Molenberghs G. Linear mixed models for longitudinal data. Springer Science & Business Media; 2009.
- 61. Lachaux J-P, Rodriguez E, Martinerie J, Varela FJ. Measuring phase synchrony in brain signals. Hum Brain Mapp. 1999;8(4):194–208. pmid:10619414
- 62. Gautama T, Mandic DP, Van Hulle MM. Signal nonlinearity in fMRI: a comparison between BOLD and MION. IEEE Trans Med Imaging. 2003;22(5):636–44. pmid:12846432
- 63. Schreiber T, Schmitz A. Improved surrogate data for nonlinearity tests. Phys Rev Lett. 1996;77(4):635. pmid:10062864
- 64. Soleymani M, Koelstra S, Patras I, Pun T. Continuous emotion detection in response to music videos. In: Face and Gesture 2011. IEEE; 2011. p. 803–8. Available from: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5771352
- 65. Bartolic EI, Basso MR, Schefft BK, Glauser T, Titanic-Schefft M. Effects of experimentally-induced emotional states on frontal lobe cognitive task performance. Neuropsychologia. 1999;37(6):677–83. pmid:10390029
- 66. Stuss DT, Gow CA, Hetherington CR. “No longer Gage”: frontal lobe dysfunction and emotional changes. J Consult Clin Psychol. 1992;60(3):349. pmid:1619089
- 67. Ochsner KN, Gross JJ. The cognitive control of emotion. Trends Cogn Sci. 2005;9(5):242–9. pmid:15866151
- 68. Holt DJ, Kunkel L, Weiss AP, Goff DC, Wright CI, Shin LM, et al. Increased medial temporal lobe activation during the passive viewing of emotional and neutral facial expressions in schizophrenia. Schizophr Res. 2006;82(2):153–62.
- 69. Guillory SA, Bujarski KA. Exploring emotions using invasive methods: review of 60 years of human intracranial electrophysiology. Soc Cogn Affect Neurosci. 2014;nsu002.