Article

Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals

1 China National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
2 Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(3), 841; https://doi.org/10.3390/s18030841
Submission received: 9 February 2018 / Revised: 4 March 2018 / Accepted: 7 March 2018 / Published: 12 March 2018
(This article belongs to the Special Issue Advanced Physiological Sensing)

Abstract
Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study, we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip, the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythms of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.

1. Introduction

Given that emotion plays an important role in our daily lives and work, the real-time assessment and regulation of emotions can improve our lives. For example, emotion recognition will facilitate the natural advancement of human–machine interaction and communication. Furthermore, recognizing the real emotional state of patients, particularly patients with expression problems, will help improve the quality of medical care. In recent years, emotion recognition based on electroencephalogram (EEG) signals has gained considerable attention. Emotion recognition is a crucial component of human–computer interaction (HCI) systems and can effectively improve communication between humans and machines [1,2].
However, emotion recognition based on EEG signals is challenging given the vague boundaries and individual variations presented by emotions. Moreover, in theory, we cannot obtain the “ground truth” of human emotions, that is, the true labels of EEG signals that correspond to different emotional states, because emotion is a function of time, context, space, language, culture, and race. Therefore, researchers have used various affective materials, such as images, sounds, and videos, to elicit emotions. Affective video materials are widely used by researchers given that these materials can expose subjects to real-life scenarios through the visual and aural stimuli that they provide.
DEAP is a multimodal dataset used to analyze human affective states. This dataset contains EEG and peripheral physiological signals acquired from 32 participants as they watched 40 one-minute-long excerpts of music videos [3]. MAHNOB-HCI is another multimodal database of recorded responses to affective movie stimuli. A multimodal setup was established for the synchronized recording of face videos, audio signals, eye-gaze data, and peripheral/central nervous system physiological signals of 27 participants [4]. Zheng et al. developed the SEED dataset to investigate stable patterns over time for emotion recognition from EEG. Fifteen subjects participated in the experiment, and each subject performed the experiment in three sessions, with an interval of one week or longer between sessions [5]. Liu et al. constructed a standard database of 16 emotional film clips selected from over one thousand film excerpts and proposed a system for real-time recognition of movie-induced emotion through the analysis of EEG signals [6].
Various features and extraction methods based on the above datasets have been proposed for the recognition of emotions from EEG signals. These methods include time domain, frequency domain, joint time-frequency analysis, and empirical mode decomposition (EMD) techniques [7].
The statistical parameters of EEG series, including the first and second difference, mean value, and power, are usually utilized as features in time domain techniques [8]. Nonlinear features, including fractal dimension [9,10], sample entropy [11] and nonstationary index [12], have been utilized for emotion recognition. Hjorth features [13] and higher order crossing features [14] have also been used in EEG studies [15,16].
Time-frequency analysis is based on the spectrum of EEG signals, and the energy, power, power spectral density and differential entropy (DE) [17] of a certain subband are utilized as features. Short-time Fourier transform (STFT) [18,19], Hilbert-Huang transform [20,21] and discrete wavelet transform [22,23,24,25] are the most commonly used techniques for spectral calculation. Higher frequency subbands, such as Beta (16–32 Hz) and Gamma (32–64 Hz) bands, have been verified to outperform lower subbands in emotion recognition [3,26].
Mert et al. extracted the entropy, power, power spectral density, correlation, and asymmetry of intrinsic mode functions (IMF) as features through EMD and then utilized independent component analysis (ICA) to reduce the dimensions of the feature set. Classification accuracy was computed with all the subjects merged together [27]. Zhuang et al. utilized the multidimensional information of IMF, namely the first difference of the time series, the first difference of the phase, and the normalized energy, as features. They then verified the classification performance of their method with the DEAP dataset and found that its classification accuracy is superior to that of the Gamma-band DE [28].
Other features extracted from electrode combinations, such as the coherence and asymmetry of electrodes in different brain regions [29,30,31], and graph-theoretic features [32], have been utilized. Jenke et al. compared the performance of different features and obtained a guiding rule for feature extraction and selection [33].
Some other strategies, such as the utilization of deep networks, have also been investigated to improve classification performance. Zheng used a deep neural network to investigate critical frequency bands and channels for emotion recognition [34]. Yang used a hierarchical network with subnetwork nodes for emotion recognition [35]. Li et al. designed a hybrid deep-learning model that combines the convolutional neural network and recurrent neural network to extract task-related features. They then performed experiments with the DEAP dataset [36].
All the above datasets and methods for emotion recognition are based on external affective stimuli. However, few studies on self-induced emotion recognition from EEG have been conducted despite their importance to endogenous emotion recognition. Liu et al. investigated the profile of autonomic nervous responses during the experience of five basic self-induced emotions (sadness, happiness, fear, anger, and surprise) and a neutral state. The ECG and respiratory activity of fourteen healthy volunteers were recorded while they read passages with the five basic emotional tones and a neutral tone to elicit the corresponding endogenous emotions. They found that it was feasible and effective to recognize users’ affective states on the basis of the peripheral physiological response patterns of ECG and respiratory activity. However, their research did not include the patterns of EEG signals for self-induced emotion [37]. The stability, performance and neural patterns of self-induced emotion recognition based on EEG signals remain unknown. Moreover, whether self-induced emotion and affective stimuli-induced emotion share commonalities remains a point of contention. The main contributions of this study to EEG-based emotion recognition can be summarized as follows:
(1)
We have developed an emotional EEG dataset for the evaluation of stable patterns of self-induced emotion across subjects. To the best of our knowledge, a public EEG dataset for analyzing the classification performance of stable neural patterns in the recognition of self-induced emotion is unavailable.
(2)
We systematically compared self-induced emotion with movie-induced emotion and found that these two types of emotions share numerous commonalities.
(3)
We analyzed the important features, electrode distribution, and average neural patterns of different self-induced emotions. Our analytical results will support future efforts for real-time recognition of endogenous emotions in real life.
(4)
We confirmed that self-induced emotions exhibit subject-independent neural signatures and relatively stable EEG patterns at critical frequency bands and brain regions.
This paper is structured as follows: a detailed description of the experimental setup is presented in Section 2. A discussion of the methodology is provided in Section 3. The classification results and analysis are presented in Section 4. The discussion is given in Section 5, and the conclusion is given in Section 6.

2. Experiment Setup

We designed a novel emotion experiment to collect EEG data for the investigation of different emotional states. Our experiment differs from those underlying existing publicly available datasets.

2.1. Experimental Protocol

We designed an experiment and recorded the EEG signals of 30 participants. Each participant watched 18 Chinese movie clips from the Chinese affective video system [38]. These movie clips were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger, and fear.
These emotional movie clips contained scenes and audio that exposed participants to real-life scenarios and elicited strong subjective and physiological changes. The details of the movie clips used in our experiment are listed in Table 1. All six categories of movie clips were randomly presented to the participants. The participants performed a practice trial to familiarize themselves with the system; a short video was shown during this unrecorded trial. Next, the researcher started the EEG signal recording and left the room, and the participant began the experiment by pressing a key on the keyboard. The formal experiment started with a baseline recording: a fixation cross was displayed, after which the participants were asked to close their eyes and stay relaxed while a 4 min baseline signal was recorded. Then, the 18 movie clips were presented in 18 trials, each consisting of the following steps (see Figure 1):
(1)
5 s display of the current trial number to inform the participants of their progress.
(2)
5 s of baseline signal collection (fixation cross).
(3)
Display of the movie clip.
(4)
1 min of self-elicitation of emotion, during which participants closed their eyes and attempted to recall scenes from the movie clip that they had just watched.
(5)
10 s of self-assessment for arousal and valence.
(6)
45 s of rest time.
After watching each movie clip, we asked the participants to recall a scene from the movie to self-elicit emotion. This method enabled us to tag self-induced emotion and movie-induced emotion in one experiment. Then, the participants opened their eyes and self-assessed their levels of arousal and valence. Self-assessment manikins [39] were used to visually represent arousal and valence levels (see Figure 2). The manikins were displayed in the middle of the screen with the numbers 1–9 printed below them. Participants used a keyboard to directly input the number that corresponded to their arousal and valence levels. At the end of each trial, participants had 45 s of rest. They could drink water and relax.

2.2. EEG Data Acquisition

Participants were selected through interviews and questionnaire administration. Beck Anxiety Inventory [40], Hamilton Anxiety Rating Scale [41], and Hamilton Rating Scale for Depression [42] tests were administered to exclude individuals with anxiety, depression, or physical abnormalities, as well as those using sedatives and psychotropic drugs. Finally, 30 native Chinese undergraduate and graduate students (20 males and 10 females) with an average age of 23.73 years (range = 18–35, SD = 2.98) participated in our experiment. All participants were right-handed with normal or corrected-to-normal vision and normal hearing. Before the experiments, the participants were informed about the experiment and were instructed to sit comfortably, watch the forthcoming movie clips attentively without diverting their attention from the screen, and refrain as much as possible from overt movements. The movie clips were presented on a 23-inch screen (refresh frequency = 60 Hz). To minimize eye movements, all stimuli were displayed at the center of the screen. A stereo amplifier was used, and the volume was set at a suitable level. The software E-Prime 2.0 (Psychology Software Tools, Sharpsburg, PA, USA) was used to present stimuli, mark synchronization labels, and record the participants’ ratings. Figure 3 shows the experimental environment just before the start of the experiment.
EEG signals were recorded with a g.HIamp System (g.tec Medical Engineering, Linz, Austria). The parameters of the recording system were set in accordance with Table 2. The layout of 62 electrodes followed the international 10–20 system, as shown in Figure 4. The Fz electrode was used for reference calculation. Thus, the number of effective electrodes was 61.

3. Method

EEG signals were preprocessed to remove ocular artifacts. Two types of features were then extracted: DE based on STFT and the first difference of IMF1 based on EMD. The minimal redundancy–maximal relevance (MRMR) algorithm was used for feature dimension reduction, and the retained features were fed into a support vector machine (SVM) for classification. The whole process is shown in Figure 5.

3.1. Data Preprocessing

To ensure that all emotional EEG data had the same length, we took the last 50 s of each video and of the 1 min self-elicitation of emotion for analysis. Before feature extraction, the EEG signals were band-pass filtered to 0.1–80 Hz to remove low-frequency drift and high-frequency interference. Then, electrooculography (EOG) artifacts were removed with the blind-source separation algorithm FastICA [43]: each subject’s signal was decomposed into 61 independent components (ICs), and the EOG-related ICs were identified and removed. Figure 6a,b illustrate EEG data before and after the removal of EOG artifacts. Finally, the 5 s of pretrial baseline was removed from the EEG signals.
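As a minimal illustration of these two steps, the sketch below band-pass filters a raw recording and removes manually identified ocular components with FastICA. It assumes the data are available as a NumPy array of shape (61 channels × samples) at 512 Hz; the function names and the eog_ic_indices argument are illustrative rather than the original implementation.

```python
# Preprocessing sketch (assumption: `eeg` is a NumPy array of shape
# (n_channels=61, n_samples) sampled at 512 Hz; ocular ICs are chosen by hand).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

FS = 512  # sampling rate from Table 2


def bandpass(eeg, low=0.1, high=80.0, order=4):
    """Zero-phase band-pass filter applied channel-wise (0.1-80 Hz)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=1)


def remove_eog(eeg, eog_ic_indices):
    """Decompose into 61 ICs with FastICA, zero the EOG ICs, and reconstruct."""
    ica = FastICA(n_components=eeg.shape[0], random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg.T)       # (n_samples, n_components)
    sources[:, eog_ic_indices] = 0.0         # drop the ocular components
    return ica.inverse_transform(sources).T  # back to (n_channels, n_samples)


# Example: eeg_clean = remove_eog(bandpass(eeg), eog_ic_indices=[0, 3])
```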

3.2. Feature Extraction

In this study, we segmented the EEG signals by using a 2 s window with 50% overlap between two consecutive windows. Figure 7 shows the feature extraction process. Each emotional EEG recording lasted 50 s, so a 2 s window with a 1 s step yielded 49 samples per recording; across the 18 movie clips, we therefore acquired 49 × 18 = 882 labeled samples for each subject. Two types of features were utilized for emotion recognition: DE based on STFT and the first difference of component IMF1 based on EMD.
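A short sketch of this segmentation, assuming one 50 s trial is stored as a (61, 50 × 512) NumPy array; the helper name is illustrative.

```python
# Segmentation sketch: 2 s windows with 50% overlap, so one 50 s trial
# yields 49 windows and 18 trials yield 882 samples per subject.
import numpy as np


def segment(trial, fs=512, win_s=2.0, overlap=0.5):
    """Cut a (61, n_samples) trial into overlapping windows."""
    win = int(win_s * fs)
    step = int(win * (1.0 - overlap))
    starts = range(0, trial.shape[1] - win + 1, step)
    return np.stack([trial[:, s:s + win] for s in starts])  # (49, 61, 1024)
```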

3.2.1. DE Based on STFT

We utilized STFT for the time-frequency analysis of EEG signals. The STFT window length was 128 samples with 50% overlap. For an EEG signal s(t) and analysis window γ, the STFT is:
$$\mathrm{STFT}_{s,\gamma}(t,f)=\int_{-\infty}^{+\infty} s(\tau)\,\gamma(\tau-t)\,e^{-j2\pi f\tau}\,d\tau=\int_{-\infty}^{+\infty} s(\tau)\,\gamma^{*}_{t,f}(\tau)\,d\tau, \qquad \gamma_{t,f}(\tau)=\gamma(\tau-t)\,e^{\,j2\pi f\tau}$$
From $\mathrm{STFT}_{s,\gamma}(t,f)$, we calculated the power of the δ, θ, α, β, and γ bands (frequency ranges in Table 3) from the spectrogram:
$$\mathrm{spectrogram}\{s[n]\}(m,f_k)=\left|S(m,f_k)\right|^{2}$$
DE is defined as follows:
$$\mathrm{DE}=\log\!\left(\left|S(m,f_k)\right|^{2}\right)$$
Given that the EEG signals have 61 effective channels and five frequency bands, we extracted 5 × 61 = 305 DE features from each sample.
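The sketch below follows the DE definition above: it computes the STFT spectrogram with a 128-sample window, averages the power within each band of Table 3, and takes the logarithm. The array shapes and the averaging over time frames are assumptions about the exact implementation.

```python
# DE feature sketch: band power from the STFT spectrogram, then DE = log(power).
# Assumes `seg` is one (61, 1024) window sampled at 512 Hz.
import numpy as np
from scipy.signal import stft

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 64)}  # Table 3


def de_features(seg, fs=512, nperseg=128):
    f, _, S = stft(seg, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    power = np.abs(S) ** 2                     # |S(m, f_k)|^2
    feats = []
    for lo, hi in BANDS.values():
        band = (f >= lo) & (f <= hi)           # 4 Hz bins at this window length
        # mean band power over frequency bins and time frames, per channel
        feats.append(np.log(power[:, band, :].mean(axis=(1, 2))))
    return np.concatenate(feats)               # 5 bands x 61 channels = 305 DEs
```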

3.2.2. First Difference of IMF Based on EMD

EMD decomposes EEG signals into a set of IMFs through an automatic sifting process. Each IMF represents a different frequency component of the original signal and must satisfy two conditions: (1) over the whole data set, the number of extrema and the number of zero crossings must be equal or differ at most by one; (2) at each point, the mean value of the upper and lower envelopes must be zero [7].
EMD functions similarly to an adaptive high-pass filter: it first sifts out the fastest-changing component, and the oscillations become smoother as the IMF level increases. Each component is band-limited and reflects the characteristics of the instantaneous frequency. Figure 8 shows a segment of the original EEG signal and the corresponding first five decomposed IMFs.
As stated in [28], the IMF1 components with high oscillation frequency play a more important role in emotion recognition than those with low oscillation frequency. Therefore, we extracted the first difference Dt of the IMF1 time series as a feature. For an IMF1 component with N points, {imf(1), imf(2), …, imf(N)}, Dt is defined as follows:
$$D_t=\frac{1}{N-1}\sum_{n=1}^{N-1}\left|\,imf(n+1)-imf(n)\,\right|$$
We utilized log(Dt) as the feature. With 61 effective channels, this yields 61 EMD-based features; thus, each sample is represented by 366 features in total (305 DE features and 61 EMD-based features).
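A hedged sketch of this feature using the PyEMD package (an assumption; any EMD implementation with a comparable interface would serve):

```python
# EMD feature sketch: log of the first difference D_t of IMF1, per channel.
import numpy as np
from PyEMD import EMD  # assumption: the "EMD-signal" (PyEMD) package


def log_dt_imf1(seg):
    """log(D_t) of IMF1 for every channel of a (61, n_samples) window."""
    emd = EMD()
    feats = []
    for ch in seg:                           # decompose channel by channel
        imf1 = emd(ch)[0]                    # first (highest-frequency) IMF
        dt = np.abs(np.diff(imf1)).mean()    # (1/(N-1)) * sum |imf(n+1) - imf(n)|
        feats.append(np.log(dt))
    return np.asarray(feats)                 # 61 EMD-based features per sample
```

Concatenating the 305 DE values with these 61 values gives the 366-dimensional feature vector per 2 s sample.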

3.3. Dimensionality Reduction

Previous research [33] has shown that the MRMR algorithm developed by Ding and Peng [44] is suitable for emotional feature selection and outperforms other methods, such as ReliefF [45] and effect-size-based feature selection methods. MRMR utilizes mutual information to characterize the suitability of a feature subset. Mutual information between two random variables x and y is defined as:
$$I(x;y)=\iint p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy$$
where p(x) and p(y) are the marginal probability density functions of x and y, respectively, and p(x,y) is their joint probability distribution. If I(x;y) equals zero, the two random variables x and y are statistically independent.
The MRMR method aims to optimize two criteria simultaneously: (1) the maximal-relevance criterion D, which maximizes the average mutual information I(xi;y) between each feature xi and the target vector y; (2) the minimal-redundancy criterion R, which minimizes the average mutual information I(xi;xj) between pairs of selected features. The algorithm finds near-optimal features by forward selection. Given an already chosen set Sk of k features, the next feature is selected by maximizing the combined criterion D − R:
$$\max_{x_j\in \chi\setminus S_k}\left[I(x_j;y)-\frac{1}{k}\sum_{x_i\in S_k} I(x_j;x_i)\right]$$
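The following is an illustrative forward-selection implementation of this criterion using scikit-learn's mutual information estimators; it is a sketch of the D − R selection rule, not the original code.

```python
# MRMR sketch: greedy forward selection maximizing I(x_j; y) - mean_i I(x_j; x_i).
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression


def mrmr(X, y, k=20):
    """Return the indices of k features chosen by the incremental D-R criterion."""
    relevance = mutual_info_classif(X, y)          # I(x_i; y) for every feature
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = []
        for j in remaining:
            red = (np.mean([mutual_info_regression(X[:, [j]], X[:, i])[0]
                            for i in selected]) if selected else 0.0)
            scores.append(relevance[j] - red)      # combined criterion D - R
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```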

3.4. Classification

The extracted features were fed into an SVM for classification. SVM is widely used for emotion recognition [46,47] and has promising applications in many fields. In our study, the SVM classifier was implemented with LIBSVM, using a linear kernel and the default parameter settings [48].
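A minimal classification sketch with a linear-kernel SVM; scikit-learn's SVC wraps LIBSVM, so this mirrors the setup described here (the feature standardization step is an added assumption, not mentioned in the text).

```python
# Linear SVM sketch; SVC wraps LIBSVM with default parameters (C = 1.0).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```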

4. Results

4.1. Classification of Self-Induced Emotions

We explored the classification of self-induced emotions by performing three subject-dependent experiments:
● Movie-Induced Emotion Recognition
Movie-induced emotional data were used as the training and testing sets for this classification task. Each subject watched 18 movie clips. In binary classification, samples from the joy movie clips (three clips) were labeled positive, and samples from the sad, disgust, anger, and fear movie clips (12 clips) were labeled negative. We utilized the 49 samples from one movie clip as the testing set and the other 686 samples from the remaining 14 movie clips as the training set to avoid correlations between the training and testing sets. The final accuracy for each subject was obtained by averaging the 15 results from the 15 tested movie clips. To classify emotions into six discrete categories, we utilized 49 × 2 samples from two movie clips of each emotional category as the training set and the 49 samples from the one remaining movie clip as the testing set, again to avoid correlations between the training and testing sets. The final accuracy for each subject was obtained by averaging the three results from the three tested movie clips.
● Self-Induced Emotion Recognition
Self-induced emotional data were used as the training and testing sets for this classification task. Each subject recalled 18 movie clips. In binary classification, samples from the recollection of joy movie clips (three clips) were labeled positive, whereas those from the recollection of sad, disgust, anger, and fear movie clips (12 clips in total) were labeled negative. Each time, we utilized the 49 samples from one movie clip as the testing set and the other 686 samples from the remaining 14 movie clips as the training set to avoid correlations between the training and testing sets. The final accuracy for each subject was obtained by averaging the 15 results from the 15 test sets. To classify emotions into six discrete categories, we utilized 49 × 2 samples from the recollection of two movie clips of each emotion category as the training set and the 49 samples from the remaining movie clip as the testing set. The final accuracy for each subject was obtained by averaging the three results from the three test sets.
● Prediction of Self-Induced Emotion through Movie-Induced Emotion
We utilized all 49 × 18 samples of movie-induced emotional data as the training set to establish a classification model. We then used this model to predict the categories of self-induced emotion, with all 49 × 18 samples of self-induced emotional data serving as the testing set (a code sketch of these evaluation schemes is given below).
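The sketch below outlines these evaluation schemes under the stated leave-one-clip-out protocol. The arrays X_movie and X_self, the label vectors y_*, and the per-sample clip indices clip_* are assumed to be restricted to the clips relevant to each task (e.g., the 15 non-neutral clips in the binary case); all names are illustrative.

```python
# Evaluation sketch: leave-one-clip-out within a paradigm (tasks 1 and 2) and
# cross-paradigm prediction (task 3).
import numpy as np
from sklearn.svm import SVC


def leave_one_clip_out(X, y, clips, eval_clips):
    """Train on all other clips, test on one held-out clip; return mean accuracy."""
    accs = []
    for c in eval_clips:
        train, test = clips != c, clips == c
        clf = SVC(kernel="linear").fit(X[train], y[train])
        accs.append(clf.score(X[test], y[test]))
    return float(np.mean(accs))


# Task 3: train on movie-induced samples, test on self-induced samples.
# clf = SVC(kernel="linear").fit(X_movie, y_movie)
# acc_cross = clf.score(X_self, y_self)
```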

4.1.1. Classification of Positive and Negative Emotions

Table 4 shows the binary classification accuracies of the 30 participants in the three experiment tasks described above. The average accuracy for the binary classification of self-induced emotion is 87.36%, which is close to that of movie-induced emotion (87.20%). The average accuracy obtained for the third experiment task is 78.53%, which is far above the chance level of 50%. These findings indicate that self-induced emotion and movie-induced emotion share numerous commonalities. In the future, a model established on affective-stimulus-induced emotion could therefore be used to predict endogenous emotions more comprehensively.
We also report additional measures of classification performance because the training set contains unbalanced numbers of positive and negative samples. Figure 9 illustrates the ROC curves of the three experiment tasks. The areas under the curve (AUC) are 0.9047, 0.8996, and 0.8102, indicating that the model robustly discriminates positive from negative emotions for both self-induced and movie-induced emotions.
Table 5 provides the F1 score and classification accuracy for binary emotion recognition. The F1 score of positive samples is lower than that of negative samples because the training set contains more negative samples than positive samples. The F1 scores of negative samples for the three experiment tasks are 0.94, 0.92, and 0.86. In terms of both F1 score and accuracy, the classification performance for self-induced emotions is similar to that for movie-induced emotions.
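These metrics can be reproduced with standard scikit-learn calls, assuming a fitted classifier clf and a held-out set (X_test, y_test) with 1 for positive and 0 for negative emotion; the variable names are illustrative.

```python
# Metric sketch: ROC/AUC from the SVM decision values, plus per-class F1 and accuracy.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

scores = clf.decision_function(X_test)      # signed distance to the hyperplane
y_pred = clf.predict(X_test)

auc = roc_auc_score(y_test, scores)                 # area under the ROC curve
f1_pos = f1_score(y_test, y_pred, pos_label=1)      # F1 for positive emotion
f1_neg = f1_score(y_test, y_pred, pos_label=0)      # F1 for negative emotion
acc = accuracy_score(y_test, y_pred)
```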

4.1.2. Classification of Emotions into Six Discrete Categories

Table 6 shows the accuracies obtained when the emotions of the 30 participants are classified into six discrete categories in the three experiment tasks. The average classification accuracy for self-induced emotion is 54.52%, which is close to that for movie-induced emotion (55.65%). The average accuracy for the third case, the prediction of self-induced emotion through movie-induced emotion, is 49.92%, which is far above the chance level of 16.67%.
The average confusion matrices of all participants under the three experiment tasks are illustrated in Figure 10. Figure 10a,b show that joy is classified best, followed by neutral emotion. Among the four negative emotions, disgust is classified best for self-induced emotions, whereas anger is classified best for movie-induced emotions. Figure 10c shows that the model established on movie-induced emotion predicts self-induced neutral emotion best, followed by joy; among the four negative emotions, anger is predicted best. The four negative emotions are easily misclassified as one another, indicating that negative emotions share some commonalities.

4.2. Dimensionality Reduction

For each sample, we extracted 366 features in total. Are these features effective in emotion recognition? Which features and electrodes are more important in self-induced emotion recognition? In this subsection, we use the MRMR method to analyze the important features and electrodes for self-induced emotion recognition.
Figure 11 illustrates the dimensionality reduction performance of the MRMR algorithm. The binary classification of self-induced emotion recognition achieves an accuracy of 85.21% and that of movie-induced emotion achieves an accuracy of 83.75% when the top 10 ranked features sorted by MRMR are selected for recognition. Accuracy increases continuously with the increasing number of utilized features. When 366 features are utilized, the classification accuracy for self-induced emotions is 87.36% and that for movie-induced emotion is 87.20%.
When the top 10 ranked features sorted by MRMR are selected for the classification of emotions into six discrete categories, the classification accuracy for self-induced emotion is 46.70% and that for movie-induced emotion is 46.47%. Accuracy increases continuously as the number of utilized features increases. When 366 features are utilized, the classification accuracy for self-induced emotions is 54.52% and that for movie-induced emotion is 55.65%.
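The accuracy-versus-feature-count curves of Figure 11 can be generated along the lines of the sketch below, reusing the hypothetical mrmr() and leave_one_clip_out() helpers sketched earlier; X, y, and clips denote one subject's feature matrix, labels, and clip indices, and eval_clips the clips held out in turn.

```python
# Figure 11 sketch: accuracy as a function of the number of top-ranked MRMR features.
feature_counts = [10, 60, 110, 160, 210, 260, 310, 366]
ranking = mrmr(X, y, k=366)  # full MRMR ordering of the 366 features
curve = [leave_one_clip_out(X[:, ranking[:k]], y, clips, eval_clips)
         for k in feature_counts]
```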
For the classification of emotions into six discrete categories, we selected the top 20 electrodes in accordance with the MRMR ranking of the 366 features. The results are shown in Table 7. The DE of electrode TP8 in the Beta band; the DE of electrodes AF7, AF8, FP1, FP2, F6, F8, FC6, FT8, T7, T8, TP8, TP9, TP10, CP6, P8, O1, O2, and Oz in the Gamma band; and the first difference of IMF1 of electrodes T7, T8, and C6 decomposed through EMD play an important role in the classification of movie-induced emotions.
The DE of electrodes AF7, AF8, FP1, FC5, FC6, FT7, FT8, T7, T8, TP7, TP8, TP9, TP10, C5, C6, CP6, P8, O1, and Oz in the Gamma band and the first difference of IMF1 of electrodes FT8, T8, TP10, and CP6 decomposed through EMD play an important role in the classification of self-induced emotions.
The features of high-frequency bands thus provide outstanding classification performance. These features include the DE of the Gamma band and the first difference of IMF1, the EMD component with the highest oscillation frequency.
Figure 12 shows the distribution of the top 20 subject-independent electrodes selected on the basis of the MRMR ranking. As can be seen from the figure, electrodes C5, C6, CP6, T7, T8, TP8, TP9, and TP10 on the temporal lobe; electrodes AF7, AF8, and FP1 on the prefrontal lobe; and electrodes O1, O2, and Oz on the occipital lobe play important roles in emotion recognition. This finding shows that the neural patterns of external movie-induced emotion and internal self-induced emotion share common characteristics. Some of these important characteristics of self-induced emotion can thus lay the foundation for endogenous emotion recognition.

4.3. Neural Signatures and Patterns of Self-Induced Emotion

We analyzed the important features and average neural patterns of different self-induced emotions. Figure 13 shows the boxplots of 10 important features of self-induced emotion. The figure shows that different emotions can be effectively identified by setting proper thresholds for different electrodes and features. For example, joy can be effectively distinguished from sadness, disgust, anger, and fear when the DE threshold of electrode T7 in the Gamma band is set to 0.6.
Figure 14 and Figure 15 show the average brain topographies of movie-induced emotion and self-induced emotion, respectively. The six discrete emotion categories do not have significantly different brain topographies under the DE features of the Delta (1–4 Hz), Theta (4–8 Hz), and Alpha (8–12 Hz) bands. However, a slight difference in the left temporal lobe is noted under the DE of the Beta (12–30 Hz) band.
Under the DE of the Gamma (30–64 Hz) band, the six discrete categories of self-induced emotion show significant differences at electrodes T7, T8, TP7, TP8, TP9, and TP10 on both temporal lobes; electrodes O1, O2, and Oz on the occipital lobe; and electrodes AF7, AF8, FP1, and FP2 on the prefrontal lobe. The feature values on both sides of the temporal and occipital lobes are higher for joy than for the other emotions. The feature value of the frontal lobe is highest for disgust among all emotions. Neutrality has the lowest feature value over the entire brain topography compared with the other five emotion categories. Similar results are observed for movie-induced emotions.
Under feature Dt based on EMD, the six discrete categories of self-induced emotion result in significant differences in electrodes T7, T8, TP7, TP8, TP9, and TP10 on both temporal lobes and in electrodes FPz, FP1, and FP2 on the prefrontal lobe. Disgust has the highest feature value at the prefrontal lobe, and joy has the highest feature value in the left temporal and occipital lobes. Similar results are observed for movie-induced emotions.
The important electrodes and features inferred from the average brain topography are consistent with those selected by MRMR (refer to Section 4.2). Therefore, self-induced emotions do exhibit distinct neural patterns, and these patterns have much in common with those of stimulus-induced emotions. This finding is meaningful for the real-time recognition of comprehensive endogenous emotions.

5. Discussion

Emotion recognition from EEG signals has achieved significant progress in recent years. Previous research has mainly focused on emotions induced by external affective stimuli, and few studies on the classification of self-induced emotion from EEG are available. The main contributions of this study can be summarized as follows:
First, we designed an experiment that considers two types of emotions: movie-induced emotion and self-induced emotion. Thirty participants took part in our experiment, and we developed an EEG-based dataset for the evaluation of the patterns of self-induced emotion across subjects.
Second, we evaluated classification performance for self-induced emotions. We achieved an average accuracy of 87.36% in discriminating positive from negative emotions and an average accuracy of 54.52% in classifying emotions into six discrete categories. We achieved similar accuracies for classifying movie-induced emotions. We also utilized movie-induced emotional data as a training set to establish a classification model. We used this model to classify self-induced emotions and achieved 78.53% accuracy in discriminating positive from negative emotions and 49.92% accuracy in classifying emotions into six discrete categories.
Third, we analyzed the important features and electrode distributions through the MRMR algorithm. We found that the DE of the Gamma band and the first difference of IMF1 decomposed through EMD have good classification performance. The important electrodes are distributed in the bilateral temporal lobe (C5, C6, CP6, T7, T8, TP8, TP9, and TP10), the prefrontal lobe (AF7, AF8, and FP1), and the occipital lobe (O1, O2, and Oz). We also discovered that self-induced emotion and movie-induced emotion share numerous commonalities.
Finally, by analyzing the average brain topography of all the participants over all experimental sessions, we obtained the neural patterns of self-induced emotion as follows: Disgust is associated with the highest feature value of the prefrontal lobe; joy is associated with high feature values of bilateral temporal lobe and occipital lobes; and negative emotions elicit asymmetries in the bilateral temporal lobe. Moreover, the important brain regions and electrodes that we identified on the basis of average brain topography are consistent with those selected through the MRMR algorithm.
Our study is limited by our small sample size. We only collected EEG signals from 30 participants. In the future, we will collect additional EEG signals to verify our analysis and conclusion. In addition, we will investigate the real-time recognition of comprehensive endogenous emotion to promote the practical application of emotion recognition based on EEG signals.

6. Conclusions

We compiled a dataset comprising the EEG signals of 30 participants for the analysis of self-induced emotion. We then identified EEG features, electrode distributions and stable neural patterns that are significantly associated with self-induced emotion. We found that the DE of the Gamma band and the first difference of IMF1 decomposed through EMD perform better in emotion recognition than other features. Electrodes distributed in the bilateral temporal (C5, C6, CP6, T7, T8, TP8, TP9, and TP10), prefrontal (AF7, AF8, and FP1), and occipital (O1, O2, and Oz) lobes are more important in the discrimination of self-induced emotion than electrodes in other regions. In addition, self-induced emotions exhibit characteristic neural patterns. For example, disgust is associated with the highest feature values in the prefrontal lobe; joy is associated with high feature values in the bilateral temporal and occipital lobes; and negative emotions elicit apparent asymmetries in the bilateral temporal lobe. Moreover, we discovered that self-induced and movie-induced emotions share many commonalities. Our research lays a substantial foundation for the real-time recognition of comprehensive endogenous emotion. In future work, we will explore deep learning techniques for emotion recognition, developing a deep neural network structure suited to emotional EEG signals to improve classification accuracy. One possible approach is to adopt stochastic configuration network techniques [49].

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61701089, No. 61601518 and No. 61372172) and the National Key R&D Program of China under grant 2017YFB1002502.

Author Contributions

Ning Zhuang is mainly responsible for research design, data collection, data analysis and manuscript writing of this study. Ying Zeng is mainly responsible for data collection and data analysis. Kai Yang is mainly responsible for data collection and document retrieval. Chi Zhang is mainly responsible for data collection and production of charts. Li Tong is mainly responsible for research design and data analysis. Bin Yan is mainly responsible for research design and manuscript writing.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Han, C.H.; Lim, J.H.; Lee, J.H.; Kim, K.; Im, C.H. Data-Driven User Feedback: An Improved Neurofeedback Strategy Considering the Interindividual Variability of EEG Features. Biomed Res. Int. 2016, 2016, 3939815. [Google Scholar] [CrossRef] [PubMed]
  2. Clemmensen, T.; Kaptelinin, V.; Nardi, B. Making HCI theory work: An analysis of the use of activity theory in HCI research. Behav. Inf. Technol. 2016, 35, 1–20. [Google Scholar] [CrossRef]
  3. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis: Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31. [Google Scholar] [CrossRef]
  4. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans. Affect. Comput. 2012, 3, 42–55. [Google Scholar] [CrossRef]
  5. Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying Stable Patterns over Time for Emotion Recognition from EEG. IEEE Trans. Affect. Comput. 2016, PP. [Google Scholar] [CrossRef]
  6. Liu, Y.J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-Time Movie-Induced Discrete Emotion Recognition from EEG Signals. IEEE Trans. Affect. Comput. 2017, PP. [Google Scholar] [CrossRef]
  7. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Chi, C.T.; Liu, H.H. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proc. Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar]
  8. Takahashi, K. Remarks on Emotion Recognition from Multi-Modal Bio-Potential Signals. Proc. IEEE Int. Conf. Ind. Technol. 2004, 2, 1138–1143. [Google Scholar]
  9. Sourina, O.; Liu, Y. A Fractal-based Algorithm of Emotion Recognition from EEG using Arousal-Valence Model. In Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, Rome, Italy, 26–29 January 2011; pp. 209–214. [Google Scholar]
  10. Liu, Y.; Sourina, O. Real-Time Subject-Dependent EEG-Based Emotion Recognition Algorithm. Trans. Comput. Sci. 2014, 7848, 101–120. [Google Scholar]
  11. Jie, X.; Cao, R.; Li, L. Emotion recognition based on the sample entropy of EEG. Bio-Med. Mater. Eng. 2014, 24, 1185–1192. [Google Scholar]
  12. Kroupi, E.; Yazdani, A.; Ebrahimi, T. EEG Correlates of Different Emotional States Elicited during Watching Music Videos. In Affective Computing and Intelligent Interaction; Springer: Berlin/Heidelberg, Germany, 2011; pp. 457–466. [Google Scholar]
  13. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef]
  14. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion Recognition From EEG Using Higher Order Crossings. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 186–197. [Google Scholar] [CrossRef] [PubMed]
  15. Ansari-Asl, K.; Chanel, G.; Pun, T. A channel selection method for EEG classification in emotion assessment based on synchronization likelihood. In Proceedings of the 15th European Signal Processing Conference, Poznan, Poland, 3–7 September 2007; pp. 1241–1245. [Google Scholar]
  16. Horlings, R.; Datcu, D.; Rothkrantz, L.J.M. Emotion recognition using brain activity. In Proceedings of the International Conference on Computer Systems & Technology, Gabrovo, Bulgaria, 12–13 June 2008; Volume 25. [Google Scholar]
  17. Duan, R.N.; Zhu, J.Y.; Lu, B.L. Differential entropy feature for EEG-based emotion classification. Proc. Int. IEEE/EMBS Conf. Neural Eng. 2013, 8588, 81–84. [Google Scholar]
  18. Chanel, G.; Ansari-Asl, K.; Pun, T. Valence-arousal evaluation using physiological signals in an emotion recall paradigm. Proc. IEEE Int. Conf. Syst. Man Cybern. 2007, 37, 2662–2667. [Google Scholar]
  19. Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-based emotion recognition in music listening. IEEE Trans. Bio-Med. Eng. 2010, 57, 1798–1806. [Google Scholar]
  20. Hadjidimitriou, S.K.; Hadjileontiadis, L.J. Toward an EEG-based recognition of music liking using time-frequency analysis. IEEE Trans. Bio-Med. Eng. 2012, 59, 3498–3510. [Google Scholar] [CrossRef] [PubMed]
  21. Uzun, S.S.; Yildirim, S.; Yildirim, E. Emotion primitives estimation from EEG signals using Hilbert Huang Transform. In Proceedings of the 2012 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Hong Kong, China, 5–7 January 2012. [Google Scholar]
  22. Murugappan, M.; Rizon, M.; Nagarajan, R.; Yaacob, S. EEG feature extraction for classifying emotions using FCM and FKM. In Proceedings of the 7th WSEAS International Conference on Applied Computer and Applied Computational Science, Hangzhou, China, 6–8 April 2008. [Google Scholar]
  23. Mohammadi, Z.; Frounchi, J.; Amiri, M. Wavelet-based emotion recognition system using EEG signal. Neural Comput. Appl. 2017, 28, 1985–1990. [Google Scholar] [CrossRef]
  24. Murugappan, M. Human emotion classification using wavelet transform and KNN. Proc. Int. Conf. Pattern Anal. Intell. Robot. 2011, 1, 148–153. [Google Scholar]
  25. Özerdem, M.S.; Polat, H. Emotion recognition based on EEG features in movie clips with channel selection. Brain Inform. 2017, 4, 241–252. [Google Scholar] [CrossRef] [PubMed]
  26. Wichakam, I.; Vateekul, P. An evaluation of feature extraction in EEG-based emotion prediction with support vector machines. In Proceedings of the 2014 11th International Joint Conference on Computer Science and Software Engineering (JCSSE), Chon Buri, Thailand, 14–16 May 2014. [Google Scholar]
  27. Mert, A.; Akan, A. Emotion recognition from EEG signals by using multivariate empirical mode decomposition. Pattern Anal. Appl. 2016, 21, 81–89. [Google Scholar] [CrossRef]
  28. Zhuang, N.; Zeng, Y.; Tong, L.; Zhang, C.; Zhang, H.; Yan, B. Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain. Biomed Res. Int. 2017, 2017, 8317357. [Google Scholar] [CrossRef] [PubMed]
  29. Reuderink, B.; Mühl, C.; Poel, M. Valence, arousal and dominance in the EEG during game play. Int. J. Autom. Adapt. Comm. Syst. 2013, 6, 45–62. [Google Scholar] [CrossRef]
  30. Brown, L.; Grundlehner, B.; Penders, J. Towards wireless emotional valence detection from EEG. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC, Boston, MA, USA, 30 August–3 September 2011. [Google Scholar]
  31. Rozgic, V.; Vitaladevuni, S.N.; Prasad, R. Robust EEG emotion classification using segment level decision fusion. Proc. IEEE Int. Conf. Acoust. 2013, 32, 1286–1290. [Google Scholar]
  32. Gupta, R.; Laghari, K.U.R.; Falk, T.H. Relevance vector classifier decision fusion and EEG graph-theoretic features for automatic affective state characterization. Neurocomputing 2016, 174, 875–884. [Google Scholar] [CrossRef]
  33. Jenke, R.; Peer, A.; Buss, M. Feature Extraction and Selection for Emotion Recognition from EEG. IEEE Trans. Affect. Comput. 2014, 5, 327–339. [Google Scholar] [CrossRef]
  34. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Autonom. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  35. Yang, Y.; Wu, Q.M.J.; Zheng, W.L.; Lu, B.L. EEG-based emotion recognition using hierarchical network with subnetwork nodes. IEEE Trans. Cogn. Dev. Syst. 2017, PP. [Google Scholar] [CrossRef]
  36. Li, X.; Song, D.; Zhang, P.; Yu, G.; Hou, Y.; Hu, B. Emotion recognition from multi-channel EEG data through Convolutional Recurrent Neural Network. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016. [Google Scholar]
  37. Liu, Y.; Wang, S.; Fu, X. Patterns of Cardiorespiratory Activity Associated with Five Basic Emotions. J. Comput. Res. Dev. 2016, 53, 716–725. [Google Scholar]
  38. Xu, P.; Huang, Y.X.; Luo, Y.J. Establishment and assessment of native Chinese affective video system. Chin. Ment. Health J. 2010, 24, 551–554. [Google Scholar]
  39. Morris, J.D. Observations SAM: The self-assessment manikin—An efficient cross-cultural measurement of emotional response. J. Adv. Res. 1995, 35, 63–68. [Google Scholar]
  40. Brenner, L.A. Beck Anxiety Inventory; Springer: New York, NY, USA, 2011; pp. 359–361. [Google Scholar]
  41. Schneider, H.; Esbitt, S.; Gonzalez, J.S. Hamilton Anxiety Rating Scale; Springer: New York, NY, USA, 2013; pp. 886–887. [Google Scholar]
  42. Hamilton, M. The Hamilton Rating Scale for Depression; Springer: Berlin/Heidelberg, Germany, 1986; pp. 143–152. [Google Scholar]
  43. Hyvärinen, A. The Fixed-Point Algorithm and Maximum Likelihood Estimation for Independent Component Analysis. Neural Process. Lett. 1999, 10, 1–5. [Google Scholar] [CrossRef]
  44. Ding, C.; Peng, H. Minimum redundancy feature selection from microarray gene expression data. J. Bioinf. Comput. Biol. 2003, 3, 185–205. [Google Scholar] [CrossRef]
  45. Zhang, J.; Chen, M.; Zhao, S.; Hu, S.; Shi, Z.; Cao, Y. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition. Sensors 2016, 16, 1558. [Google Scholar] [CrossRef] [PubMed]
  46. Takahashi, K. Remarks on SVM-based emotion recognition from multi-modal bio-potential signals. In Proceedings of the RO-MAN 2004 13th IEEE International Workshop on Robot and Human Interactive Communication, Kurashiki, Japan, 22 September 2004. [Google Scholar]
  47. Chanel, G.; Kronegg, J.; Grandjean, D.; Pun, T. Emotion Assessment: Arousal Evaluation Using EEG’s and Peripheral Physiological Signals. In Proceedings of the International Workshop on Multimedia Content Representation, Classification and Security, Istanbul, Turkey, 11–13 September 2006. [Google Scholar]
  48. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  49. Wang, D.; Li, M. Stochastic Configuration Networks: Fundamentals and Algorithms. IEEE Trans. Cybern. 2017, 47, 3466–3479. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Experimental protocol.
Figure 2. Self-assessment for arousal and valence.
Figure 3. Experimental environment.
Figure 4. EEG cap layout for 62 channels.
Figure 5. Block diagram of emotion recognition.
Figure 6. EEG signals before and after the removal of EOG artifacts. (a) EEG signals contaminated by EOG artifacts. (b) EEG signals without EOG artifacts.
Figure 7. Feature extraction process.
Figure 8. EEG signals and their corresponding first five IMFs.
Figure 9. ROC curve of binary emotional classification in three experiment tasks. Positive emotions are distinguished from negative emotions. The AUC values of binary classification for movie-induced emotion recognition, self-induced emotion recognition, and prediction of self-induced emotion through movie-induced emotion are 0.9047, 0.8996, and 0.8102, respectively.
Figure 10. Average confusion matrix for the classification of emotions of 30 participants into six discrete categories. (a) Average confusion matrix for movie-induced emotion recognition. (b) Average confusion matrix for self-induced emotion recognition. (c) Average confusion matrix for prediction of self-induced emotion through movie-induced emotion.
Figure 11. Dimensionality reduction using MRMR. MRMR is used to sort 366 features for each participant. The top 10, 60, 110, 160, 210, 260, and 310 features and all 366 features are utilized for emotion recognition. Average accuracy is computed for all participants. (a) Binary emotional classification with different numbers of features. (b) Classification of emotions into six discrete categories with different numbers of features.
Figure 12. Distribution of top 20 subject-independent features selected on the basis of MRMR ranking. (a) Movie-induced emotion recognition. (b) Self-induced emotion recognition.
Figure 13. Distribution of 10 important electrode features associated with self-induced emotion.
Figure 14. Average neural patterns of different movie-induced emotions in all participants. The DE of frequency band Delta (1–4 Hz), Theta (4–8 Hz), Alpha (8–12 Hz), Beta (12–30 Hz), and Gamma (30–64 Hz), and the first difference Dt of IMF1 decomposed through EMD are illustrated from top to bottom.
Figure 15. Average neural patterns for different self-induced emotions of all participants. DE of frequency band Delta (1–4 Hz), Theta (4–8 Hz), Alpha (8–12 Hz), Beta (12–30 Hz), and Gamma (30–64 Hz) and the first difference Dt of IMF1 decomposed by EMD are illustrated from top to bottom.
Table 1. Brief description of movie clips used in the emotion experiment.
No. | Label | Movie Name | Length (s)
1 | Joy | More Haste Less Speed | 109
2 | Joy | Bie Na Zi Ji Bu Dang Gan Bu | 142
3 | Joy | Flirting Scholar | 112
4 | Neutral | IP Package | 70
5 | Neutral | Hardware Conflict | 65
6 | Neutral | IDE Interface Repair | 77
7 | Sad | My Brothers and Sisters | 146
8 | Sad | Mom Love Me Once More | 136
9 | Sad | Warm Spring | 101
10 | Disgust | Black Sun 731 (1) | 100
11 | Disgust | Black Sun 731 (3) | 68
12 | Disgust | Vomit | 90
13 | Anger | Fist of Fury (2) | 66
14 | Anger | Kangxi Dynasty | 94
15 | Anger | Conman in Tokyo | 107
16 | Fear | Help Me | 50
17 | Fear | The Game of Killing (1) | 159
18 | Fear | Inner Senses | 86
Table 2. Parameter settings of EEG recording system.
Parameters | Settings
Amplifier | g.tec g.HIamp
Sampling Frequency | 512 Hz
Band-Pass Filter Frequency | 0.1–100 Hz
Notch Frequency | 50 Hz
Electrode Layout | International 10–20 System
GND Electrode Position | AFz
Reference Electrode Position | Fz, Right Earlobe
Electrode Material | Ag/AgCl
EEG Recording Software | g.Recorder
Table 3. Frequency band ranges of EEG signals.
Frequency Band | Bandwidth (Hz)
δ (Delta) | 1–4 Hz
θ (Theta) | 4–8 Hz
α (Alpha) | 8–12 Hz
β (Beta) | 12–30 Hz
γ (Gamma) | 30–64 Hz
Table 4. Accuracies of binary classification for the discrimination of positive emotions from negative emotions (Standard deviations are shown in parentheses).
No. Participant | Movie-Induced Emotion Recognition, Accuracy (%) | Self-Induced Emotion Recognition, Accuracy (%) | Prediction of Self-Induced Emotion through Movie-Induced Emotion, Accuracy (%)
1 | 93.33 | 87.48 | 82.99
2 | 99.84 | 98.91 | 87.21
3 | 97.01 | 99.73 | 95.51
4 | 90.20 | 88.71 | 86.94
5 | 94.01 | 82.59 | 93.61
6 | 97.41 | 85.17 | 86.12
7 | 93.47 | 76.46 | 78.91
8 | 99.86 | 99.32 | 93.06
9 | 86.94 | 82.04 | 73.33
10 | 74.69 | 96.05 | 80.14
11 | 89.93 | 87.62 | 80.95
12 | 87.48 | 92.65 | 81.63
13 | 89.52 | 77.14 | 48.84
14 | 67.89 | 68.16 | 69.93
15 | 92.65 | 90.20 | 88.98
16 | 82.04 | 65.58 | 50.07
17 | 77.69 | 94.69 | 56.05
18 | 86.26 | 79.05 | 88.44
19 | 71.70 | 82.72 | 96.46
20 | 87.07 | 90.07 | 79.32
21 | 69.52 | 85.17 | 81.90
22 | 79.18 | 85.71 | 31.02
23 | 93.20 | 87.35 | 48.57
24 | 87.21 | 90.88 | 82.85
25 | 86.80 | 87.35 | 60.14
26 | 97.96 | 97.82 | 99.32
27 | 88.98 | 86.12 | 80.82
28 | 89.93 | 92.80 | 89.93
29 | 86.53 | 91.43 | 98.78
30 | 77.69 | 91.84 | 84.08
Average | 87.20 (8.74) | 87.36 (8.19) | 78.53 (16.66)
Table 5. Performance of binary classification for the discrimination of positive emotions from negative emotions (Standard deviations are shown in parentheses).
Task | Predicted Positive / Actual Positive | Predicted Positive / Actual Negative | Predicted Negative / Actual Positive | Predicted Negative / Actual Negative | Positive F1-Score | Negative F1-Score | Accuracy (%)
Movie-Induced Emotion Recognition | 2788 | 1200 | 1622 | 16440 | 0.66 | 0.94 | 87.20 (8.74)
Self-Induced Emotion Recognition | 2667 | 1044 | 1743 | 16596 | 0.67 | 0.92 | 87.36 (8.19)
Prediction of Self-Induced Emotion through Movie-Induced Emotion | 2578 | 2902 | 1832 | 14738 | 0.52 | 0.86 | 78.53 (16.66)
Table 6. Accuracies for the classification of emotions into six discrete categories (Standard deviations are shown in parentheses).
No. Participant | Movie-Induced Emotion Recognition, Accuracy (%) | Self-Induced Emotion Recognition, Accuracy (%) | Prediction of Self-Induced Emotion through Movie-Induced Emotion, Accuracy (%)
1 | 65.65 | 63.04 | 65.65
2 | 65.87 | 56.24 | 47.28
3 | 56.01 | 63.72 | 50.00
4 | 39.68 | 57.71 | 66.55
5 | 51.93 | 50.45 | 69.05
6 | 55.10 | 53.74 | 42.06
7 | 57.03 | 40.82 | 33.79
8 | 72.45 | 71.77 | 47.51
9 | 41.27 | 64.29 | 40.93
10 | 52.39 | 39.57 | 54.20
11 | 56.92 | 52.95 | 39.68
12 | 60.32 | 54.20 | 46.94
13 | 39.46 | 46.15 | 40.36
14 | 41.38 | 58.05 | 64.51
15 | 54.42 | 56.69 | 41.27
16 | 43.65 | 40.82 | 41.95
17 | 42.52 | 60.66 | 46.71
18 | 65.99 | 44.67 | 51.25
19 | 49.89 | 64.29 | 56.92
20 | 60.20 | 55.33 | 31.07
21 | 73.24 | 70.52 | 29.48
22 | 41.27 | 41.16 | 42.74
23 | 60.54 | 29.59 | 48.64
24 | 58.50 | 58.50 | 60.54
25 | 51.70 | 47.05 | 26.76
26 | 81.52 | 82.54 | 85.94
27 | 62.13 | 64.63 | 51.47
28 | 55.22 | 49.43 | 48.30
29 | 59.52 | 44.78 | 69.16
30 | 53.63 | 52.38 | 56.92
Average | 55.65 (10.39) | 54.52 (11.02) | 49.92 (13.09)
Table 7. Top 20 electrodes for the classification of emotions into six discrete categories (Electrodes are selected in accordance with MRMR ranking).
Type of Emotion | Beta (DE) | Gamma (DE) | EMD (Dt)
Movie-Induced Emotion | T8 | AF7, AF8, FP1, FP2, F6, F8, FC6, FT8, T7, T8, TP8, TP9, TP10, C5, C6, CP6, P8, O1, O2, Oz | T7, T8, C6
Self-Induced Emotion | / | AF7, AF8, FP1, FC5, FC6, FT7, FT8, T7, T8, TP7, TP8, TP9, TP10, C5, C6, CP6, P8, O1, O2, Oz | T8, TP10, C6, FT8
