Article

Predicting Exact Valence and Arousal Values from EEG

LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal
* Author to whom correspondence should be addressed.
Sensors 2021, 21(10), 3414; https://doi.org/10.3390/s21103414
Submission received: 29 March 2021 / Revised: 20 April 2021 / Accepted: 11 May 2021 / Published: 14 May 2021
(This article belongs to the Special Issue Biomedical Signal Acquisition and Processing Using Sensors)

Abstract:
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing gaining increasing relevance. Although researchers have used these signals to recognize emotions, most studies only identify a limited set of emotional states (e.g., happiness, sadness, anger, etc.) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. To create it, we studied the best features, brain waves, and machine learning models that are currently in use for emotion classification. This systematic analysis revealed that the best prediction model uses a KNN regressor (K = 1) with Manhattan distance, features from the alpha, beta and gamma bands, and the differential asymmetry from the alpha band. Results, using the DEAP, AMIGOS and DREAMER datasets, show that our model can predict valence and arousal values with a low error (MAE < 0.06, RMSE < 0.16) and a strong correlation between predicted and expected values (PCC > 0.80), and can identify four emotional classes with an accuracy of 84.4%. The findings of this work show that the features, brain waves and machine learning models typically used in emotion classification tasks can be used in more challenging situations, such as the prediction of exact values for valence and arousal.

1. Introduction

Emotions play an undeniably important role in human lives. They are involved in a plethora of cognitive processes such as decision-making, perception, social interactions and intelligence [1]. Thus, identifying a person’s emotional state has become a necessity. Let us consider a scenario where we want to identify the emotional state of subjects from their EEG signals. However, we do not just want to identify whether a person is feeling positive or negative, or whether they are feeling a certain discrete emotion (e.g., happiness or disgust). We want more than that: we want to know the exact valence and arousal values the person is feeling. This offers a wider range of emotional states and has the advantage that it can later be converted into discrete emotions if we wish.
There are several works that identify the emotional state of a person from EEG, as we discuss in Section 2, but the vast majority identify a small number of states, such as high/low valence and high/low arousal, or one of the quadrants of the circumplex model of affect (HAHV, HALV, LALV and LAHV, where H, L, A and V stand for high, low, arousal and valence, respectively). Thus, while several approaches for identifying discrete emotions have been proposed in recent years, little attention has been paid to the prediction of exact values for valence and arousal (see Figure 1).
With this in mind, in this paper, we seek to answer the following research questions: RQ1) Can EEG be used to predict the exact values of valence and arousal? RQ2) Are the typical features, brain waves and machine learning models used for the classification of emotions suitable for the prediction of exact valence and arousal values? RQ3) Are the predicted valence and arousal values suitable for classification tasks with good accuracy?
To that end, we analyzed features from different domains (time, frequency and wavelet) extracted from the EEG signal, brain waves and machine learning methods for regression. For this purpose, we used three datasets (DEAP [2], AMIGOS [3] and DREAMER [4]) containing EEG signals collected during emotion elicitation experiments, together with the self-assessment of valence and arousal performed by the participants. We extracted time, frequency and wavelet features from EEG, considering the alpha, beta and gamma bands, namely the three Hjorth parameters (activity, mobility and complexity), Spectral Entropy, Wavelet Energy and Entropy, and IMF Energy and Entropy, as we describe in Section 3.
Experimental results, using a subject-independent setup with 10-fold cross-validation technique, show that our proposed model can predict valence and arousal values with a low error and a strong correlation between predicted and expected values (Section 5.2). Furthermore, in two subject-independent classification tasks (two classes and four classes), our model surpasses the state-of-the-art (Section 5.3).
Our main contributions can be summarized as follows:
  • A systematic study of the best features, brain waves and machine learning models for predicting exact valence and arousal values (Section 4);
  • Identification of the two best machine learning regressors (KNN and RF), out of seven, for predicting values for valence and arousal (Section 4.3);
  • Combination and study of features from the time, frequency and wavelet domain, complemented with asymmetry features, for valence and arousal prediction (Section 4.4 and Section 4.5);
  • A model able to predict exact values for valence and arousal with a low error, which can also predict emotional classes with the highest accuracy among state-of-the-art methods (Section 5).

2. Background and Related Work

To properly understand emotion recognition systems from EEG, we need to know: (1) the set of emotions to be detected and how they are modeled; (2) how EEG signals are related to emotions; (3) which brain waves and features best describe emotional changes in people; (4) which machine learning methods are most appropriate for emotion recognition.

2.1. Emotions

Emotions are generated whenever a perception of an important change in the environment or in the physical body appears. There are two main scientific ways of explaining the nature of emotions. According to the cognitive appraisal theory, emotions are judgments about the extent to which the current situation meets our goals or favors our personal well-being [5]. Alternatively, James and Lange [6,7] argued that emotions are perceptions of changes in our body such as heart rate, breathing rate, perspiration and hormone levels. Either way, emotions are conscious experiences characterized by intense mental activity and a certain degree of pleasure or displeasure.
There are two perspectives to represent emotions: discrete and dimensional. In the discrete perspective, all humans are thought to have an innate set of basic emotions that are cross-culturally recognizable. A popular example is Ekman’s six basic emotions (anger, disgust, fear, happiness, sadness and surprise) [8]. In the dimensional perspective, emotions are represented by the valence, arousal and dominance dimensions [9]. Valence, as used in psychology, means the intrinsic attractiveness or aversion of an event, object or situation, varying from negative to positive. Arousal is the physiological and psychological state of being awake or having the sensory organs stimulated to a point of perception, ranging from sleepy to excited. Dominance corresponds to the strength of the emotion. Dimensional continuous models are more accurate in describing a broader range of spontaneous, everyday emotions when compared to categorical models of discrete emotions [10]. For example, while the latter can only describe happiness, the dimensional representation can discriminate between several emotions near happiness, such as aroused, astonished, excited, delighted, etc. (Figure 1).

2.2. Physiological Signals and EEG

The use of physiological responses to characterize people’s emotional state has gained increasing attention. There are several physiological signals that can be used for this purpose, namely the electrical activity of the heart (ECG), galvanic skin response (GSR), electromyography (EMG), respiration rate (RR), functional magnetic resonance imaging (fMRI) or electroencephalography (EEG).
The latter provides great time resolution and fast data acquisition while being non-invasive and inexpensive, making it a good candidate to measure people’s emotional state. The frequency of EEG measurements ranges from 1 to 80 Hz, with amplitudes of 10 to 100 microvolts [11]. Brain waves are usually categorized into five frequency bands: Delta (δ): 1–4 Hz; Theta (θ): 4–7 Hz; Alpha (α): 8–13 Hz; Beta (β): 13–30 Hz; and Gamma (γ): above 30 Hz, each one being more prominent in certain states of mind. Delta are the slowest waves, being most pronounced during non-rapid eye movement (NREM) sleep. Theta waves are associated with subconscious activities, such as dreaming, and are present in meditative states of mind. Alpha waves appear predominantly during wakeful relaxation mental states with the eyes closed, and are most visible over the parietal and occipital lobes [12]. Beta wave activity, on the other hand, is related to an active state of mind, more prominent in the frontal cortex during intense focused mental activity [12]. Lastly, Gamma rhythms are thought to be associated with intense brain activity for the purpose of running certain cognitive and motor functions. According to the literature, there is also a strong correlation between these waves and different affective states [1].

2.3. Brain Waves and Features

One early decision when working with EEG for emotion recognition is related to the number of electrodes to use. In the literature, this number varies from only 2 electrodes [13,14] to a maximum of 64 electrodes [15,16], with the most common value revolving around 32 [2,17,18,19,20]. Usually, the placement of the electrodes on the scalp is done according to the international 10–20 system.
Another decision is related to the use of monopoles or dipoles. The former records the potential difference relative to a neutral electrode connected to an ear lobe or mastoid, while the latter collects the potential difference between two paired electrodes, thus allowing for the extraction of asymmetry features [16,19]. The asymmetry concept has been used in many experiments, and states that the difference in activity between the hemispheres reflects emotional positivity (valence). Higher activity in the left hemisphere is related to a positive emotion (high valence), while higher activity in the right hemisphere is related to a negative emotion (low valence). According to the literature, the electrodes positioned in the frontal and parietal lobes are the most used because they have produced the best results.
Regarding brain waves, most researchers use the set comprised of theta, alpha, beta and gamma. Some also use delta [15,21] or a custom set of EEG frequencies [22,23], while Petrantonakis et al. [24,25] used only alpha and beta frequencies, as these had produced the best results in previous works. The same holds for Zhang et al. [14,26], who used only beta frequencies.
Concerning the features to be extracted from the EEG, there is a great variety in the literature, with several authors using more than one type of feature extraction algorithm. The most used methods have been Fourier transforms such as the Short-Time Fourier Transform (STFT) or Discrete Fourier Transform (DFT) [16,27], statistical features (mean, standard deviation, kurtosis, skewness, Pearson correlation) [19,28], Hjorth parameters (HP) [19,29,30], Power Spectral Density (PSD) [2,15,18], Wavelet Transform (WT) [15,31], Empirical Mode Decomposition (EMD) or Hilbert–Huang Spectrum (HHS) [24,26,27], entropy measures such as Differential Entropy (DE) [16,32], Approximate Entropy (AE) [15], Sample Entropy (SampEn) [26] and Wavelet Entropy (WE) [15,31], Higher Order Crossings (HOC) [25], Fractal Dimensions (FD) [15,29,33], Auto Regressive models (AR) [14] and the Hurst Exponent (HE) [15]. When dipoles are used, the extracted features are called asymmetry measures. These include Differential and Rational Power Spectral Asymmetry (DPSA and RPSA) [2,18,34] and Mutual Entropy (ME)/Mutual Information (MI) [19]. It is also common to analyze the similarities between time series using phase coherence [19].

2.4. Emotion Classification

According to our literature review, almost all authors have used machine learning classifiers to recognize emotions. The most used have been the Support Vector Machines (SVM) [15,25,34], followed by K-Nearest Neighbors (KNN) [16,30,35]. Other classifiers used have been the Naive Bayes (NB) [2], Multi-Layer Perceptron (MLP) [34,35], Logistic Regression (LR) [16,32] and Random Forest (RF) [23,27]. In addition to these works that used hand-crafted features, there are other solutions that use deep-learning approaches, such as Artificial Neural Networks (ANN) [30], Deep Learning Networks (DLN) [18], Deep Belief Networks (DBN) [16], Long Short-Term Memory (LSTM) networks [36,37] or a combination of the latter with Convolutional Neural Network (CNN) [38,39].
The emotion classification problem has been addressed in one of three ways: (i) identification of discrete emotions such as happiness, fear or disgust [24,27,34,40,41,42]; (ii) distinction between high/low arousal and high/low valence [2,3,4,19,29,31,43]; and (iii) finding the quadrant in the valence/arousal space [13,14,19,21,44,45]. In the last two cases, researchers create two classifiers, one to discern between high/low valence and the other for high/low arousal. Although binary classification is the most common, there are works in which researchers have performed multi-class classification [23,32]. There are also some works that included all positive emotions in one class and all negative emotions in another, sometimes with the addition of a neutral class [15,16].
Table 1 summarizes the main characteristics of a subset of the reviewed papers, which include the database used, brain waves utilized, features extracted, classifiers employed and the set of emotions recognized. These works were chosen according to their relevance and novelty.

3. Materials and Methods

To create our model, we explored different brain waves and features, and trained multiple regressors using annotated datasets. Here, we describe all of them, together with the metrics used to evaluate the quality of the predictions.

3.1. Datasets

For our study, we used the AMIGOS [3], DEAP [2] and DREAMER [4] datasets, whose main characteristics are shown in Table 2.
The data used from the AMIGOS dataset corresponds to the scenario where the 40 participants were alone while watching 16 short videos: four in each quadrant of the circumplex model of affect. The EEG signals were recorded using the Emotiv EPOC Neuroheadset, using 14 electrode channels. The DEAP dataset contains data collected using 40 music videos, 10 in each quadrant. The EEG signal was recorded using 32 active AgCl electrodes with the Biosemi ActiveTwo system. The DREAMER dataset contains EEG signals recorded using the Emotiv EPOC Neuroheadset. Signals were collected from 23 participants while they watched 18 film clips selected to elicit nine emotions (amusement, excitement, happiness, calmness, anger, disgust, fear, sadness and surprise).
In the three datasets, participants performed a self-assessment of their perceived arousal, valence and dominance values using the Self-Assessment Manikin (SAM) [46]. In the case of DEAP and AMIGOS, participants also selected the basic emotion (neutral, happiness, sadness, surprise, fear, anger and disgust) they were feeling at the beginning of the study (before receiving any stimulus), and then after watching each video.

3.2. Brain Waves

As we presented in the related work section, there is no consensus on which brain waves to use. However, considering the published results, we can see that the best accuracy is attained when using alpha, beta and/or gamma waves. Therefore, we studied only these three types of brain waves.

3.3. Features

The features analyzed in our work were selected based on their effectiveness, simplicity and computational speed, according to prior works, as described in Section 2. We studied the Hjorth parameters, Spectral Entropy, Wavelet Energy and Entropy, and IMF Energy and Entropy.

3.3.1. Hjorth Parameters

The Hjorth parameters [47] are obtained by applying signal processing techniques in the time domain, giving an insight into the statistical properties of the signal. The three Hjorth parameters are: activity, mobility and complexity (Equations (1)–(3)).
Activity gives a measure of the squared standard deviation of the amplitude of the signal x ( t ) , indicating the surface of the power spectrum in the frequency domain. That is, the activity value is large if the higher frequency components are more common, and low otherwise. Activity corresponds to the variance of the signal.
$$ \mathrm{Activity} = \mathrm{var}\big(x(t)\big) \tag{1} $$
Mobility represents the mean frequency or the proportion of standard deviation of the power spectrum. This is defined as the square root of the activity of the first derivative of the signal divided by the activity of the signal.
$$ \mathrm{Mobility} = \sqrt{\frac{\mathrm{Activity}\big(x'(t)\big)}{\mathrm{Activity}\big(x(t)\big)}} \tag{2} $$
Complexity indicates how the shape of a signal is similar to a pure sine wave, and gives an estimation of the bandwidth of the signal. It is defined as the ratio between the mobility of the first derivative and the mobility of the signal.
$$ \mathrm{Complexity} = \frac{\mathrm{Mobility}\big(x'(t)\big)}{\mathrm{Mobility}\big(x(t)\big)} \tag{3} $$
To summarize, the three parameters can be referred to as the average power, the average power of the normalized derivative and the average power of the normalized second derivative of the signal, respectively.
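To make this concrete, the three parameters can be computed from finite differences of the sampled signal. The following is a minimal NumPy sketch (the function name and implementation details are our own illustration, not the code used in our experiments):

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth parameters of a 1-D signal, per Equations (1)-(3)."""
    dx = np.diff(x)    # first derivative via finite differences
    ddx = np.diff(dx)  # second derivative
    activity = np.var(x)                                       # Eq. (1)
    mobility = np.sqrt(np.var(dx) / activity)                  # Eq. (2)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility  # Eq. (3)
    return activity, mobility, complexity
```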

3.3.2. Spectral Entropy

Entropy is a concept related to uncertainty or disorder. The Spectral Entropy of a signal is based on Shannon’s entropy [48] from information theory, and it measures the irregularity or complexity of digital signals in the frequency domain. After performing a Fourier Transform, the signal is converted into a power spectrum, and the information entropy of the latter represents the Power Spectral Entropy of the signal [49]. Consider x_i to be a random variable and p(x_i) its respective probability; then, the Shannon Entropy can be calculated as follows:
$$ H(x) = -\sum_{i=1}^{N} p(x_i)\,\log_2 p(x_i) \tag{4} $$
The Spectral Entropy treats the signal’s normalized power distribution in the frequency domain as a probability distribution, and calculates its Shannon Entropy. Therefore, the Shannon Entropy in this context is the Spectral Entropy of the signal if we consider p(x_i) to be the probability distribution of the power spectrum:
$$ p(x_i) = \frac{Psd(x_i)}{\sum_j Psd(x_j)} \tag{5} $$
Psd(x_i) is the power spectral density, which is equal to the squared magnitude of the signal’s Discrete Fourier Transform.
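As an illustration, this computation reduces to a few lines of NumPy (a sketch with a function name of our choosing; the PSD is estimated here directly from the DFT):

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum, per Eqs. (4)-(5)."""
    psd = np.abs(np.fft.rfft(x)) ** 2  # power spectral density estimate
    p = psd / psd.sum()                # normalize into a probability distribution
    p = p[p > 0]                       # drop empty bins to avoid log2(0)
    return -np.sum(p * np.log2(p))     # Shannon entropy, Equation (4)
```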

3.3.3. Wavelet Energy and Entropy

Wavelet transformation is a spectral analysis technique in which any function can be represented as an infinite series of wavelets. The main idea behind this analysis is to represent a signal as a linear combination of a particular set of functions. This set is obtained by shifting and dilating a single prototype wavelet ψ(t) called the mother wavelet [50]. This is realized by considering all possible integer translations of ψ(t), while dilation is obtained by multiplying t by a scaling factor, which is usually a factor of two [51]. Equation (6) shows how wavelets are generated from the mother wavelet:
$$ \psi_{j,k}(t) = 2^{j/2}\,\psi\big(2^{j}t - k\big) \tag{6} $$
where j indicates the magnitude and scale of the function (dilation) and k specifies the translation in time.
The Discrete Wavelet Transform (DWT) is derived from the continuous wavelet transform with a discrete input. It analyzes the signal in several frequency bands, with different resolutions, decomposing the signal into both a coarse approximation and detailed information. For this, it applies consecutive scaling and wavelet functions. Scaling functions are related to low-pass filters and wavelet functions to high-pass filters [50].
The first application of the high-pass and low-pass filters produces the detailed coefficient D1 and the approximation coefficient A1, respectively. Then, the first approximation A1 is decomposed again (into A2 and D2) and the process is repeated, taking into consideration the frequency components of the signal we want to isolate [51]. Given that, in this work, we only consider alpha, beta and gamma frequencies, the number of decomposition levels used is three (D1–D3). Thus, D1 corresponds to gamma, D2 to beta and D3 to alpha. The mother wavelet chosen was db4, since it had already been shown to produce good results in similar works.
Finally, after obtaining the detailed coefficients of the desired bands (decomposition levels), the Wavelet Energy can be computed by summing the squared absolute values of these coefficients. The Wavelet Entropy can be calculated in a similar way to the Spectral Entropy.
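A sketch of this procedure using the PyWavelets package (our choice of library for illustration; the paper does not name its implementation):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_entropy(x, wavelet="db4", level=3):
    """Per-level Wavelet Energy and Entropy from a 3-level db4 DWT."""
    coeffs = pywt.wavedec(x, wavelet, level=level)  # [A3, D3, D2, D1]
    details = coeffs[1:][::-1]                      # reorder to [D1, D2, D3]
    feats = {}
    for i, d in enumerate(details, start=1):
        energy = np.sum(np.abs(d) ** 2)    # Wavelet Energy of level Di
        p = d ** 2 / np.sum(d ** 2)        # normalized coefficient energies
        p = p[p > 0]                       # avoid log2(0)
        entropy = -np.sum(p * np.log2(p))  # Wavelet Entropy of level Di
        feats[f"D{i}"] = (energy, entropy)
    return feats  # D1 ~ gamma, D2 ~ beta, D3 ~ alpha
```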

3.3.4. IMF Energy and Entropy

Empirical Mode Decomposition (EMD) is a data-driven method for processing non-stationary, nonlinear, stochastic signals, which makes it ideal for the analysis and processing of EEG signals [52]. The EMD algorithm decomposes a signal x(t) into a finite set of AM–FM oscillating components c(t) called Intrinsic Mode Functions (IMFs) with specific frequency bands. Each IMF satisfies two conditions: (i) the number of local extrema (maxima and minima) and the number of zero crossings differ by at most one; (ii) the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero [52]. The general workflow of the EMD algorithm to decompose a signal is described in Algorithm 1.
Algorithm 1 EMD decomposition steps.
1: Identify all extrema (maxima and minima)
2: Create the upper u(t) and lower l(t) envelopes by connecting the maxima and minima separately with a cubic spline curve
3: Find the mean of the envelopes as m(t) = (u(t) + l(t)) / 2
4: Take the difference between the data and the mean: d(t) = x(t) − m(t)
5: Decide whether d(t) is an IMF or not by checking the two basic conditions described above and the stoppage criterion
6: If d(t) is not an IMF, repeat steps 1–5 on d(t) as many times as needed until it satisfies the conditions
7: If d(t) is an IMF, assign it to an IMF component c(t)
8: Repeat steps 1–7 on the residue, r(t) = x(t) − c(t), as input data
9: The process stops when the residue contains no more than one extremum
After decomposition by EMD, the original signal x(t) is a linear combination of N IMF components c_i(t) and a final residual part r_N(t), as shown in Equation (7).
$$ x(t) = \sum_{i=1}^{N} c_i(t) + r_N(t) \tag{7} $$
EMD works as an adaptive high-pass filter, which isolates the fastest changing components first. Thus, the first IMFs contain information in the high frequency spectrum, while the last IMFs contain information within the lowest frequency spectrum. Since each component is band-limited, it reflects the characteristics of the instantaneous frequencies [53]. For this work, we focused on the first IMF, which roughly contains information within the gamma frequency range, the second IMF, which contains the beta frequency spectrum, and the third IMF, which contains the alpha band [54]. To obtain the energy and entropy of the IMFs, we used the methods described in the previous sections.
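For illustration, the sifting procedure of Algorithm 1 can be sketched as follows. This minimal version uses a fixed number of sifting iterations as a simplified stoppage criterion; a production system (or a dedicated package such as PyEMD) would apply a proper one:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def emd(x, max_imfs=3, sift_iters=10):
    """Minimal EMD sifting loop following Algorithm 1 (sketch only)."""
    t = np.arange(len(x))
    residue, imfs = x.astype(float), []
    for _ in range(max_imfs):
        d = residue.copy()
        for _ in range(sift_iters):  # simplified stoppage criterion
            maxima = argrelextrema(d, np.greater)[0]  # step 1
            minima = argrelextrema(d, np.less)[0]
            if len(maxima) < 4 or len(minima) < 4:
                break  # too few extrema to build spline envelopes
            upper = CubicSpline(maxima, d[maxima])(t)  # u(t), step 2
            lower = CubicSpline(minima, d[minima])(t)  # l(t)
            d = d - (upper + lower) / 2.0              # steps 3-4
        imfs.append(d)         # step 7: accept d(t) as an IMF c(t)
        residue = residue - d  # step 8: continue on the residue
        n_ext = (len(argrelextrema(residue, np.greater)[0])
                 + len(argrelextrema(residue, np.less)[0]))
        if n_ext <= 1:
            break  # step 9: residue has at most one extremum
    return imfs, residue  # IMF1 ~ gamma, IMF2 ~ beta, IMF3 ~ alpha
```

The IMF Energy and Entropy are then obtained from each returned component exactly as in the previous sections.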

3.4. Regression Methods

Since we intended to identify continuous valence and arousal values, the machine learning methods to be used should be regressors rather than classifiers. We studied seven methods: Linear Regression (LR), Additive Regression (AR), Decision Tree (DT), K-Nearest Neighbors (KNN), Random Forest (RF) and Support Vector Machines for Regression (SVR) with two different kernels. These were chosen based on the analysis of the related work (see Section 2).

3.5. Metrics

To evaluate the regressors’ accuracy, we used three measures: the mean absolute error (MAE), Pearson correlation coefficient (PCC) and the root-mean-square error (RMSE) [55]. MAE measures the average magnitude of the errors in a set of predictions, without considering their direction. RMSE also measures the average magnitude of the error, but gives a relatively high weight to large errors. In our case, both metrics express the average model prediction error on a scale from 0 to 1. PCC measures the linear correlation between the ground-truth and predicted values. For MAE and RMSE, the lower the value the better, while for PCC, the closer to 1 the better. The formulas for each of the described measures are presented in Equations (8)–(10), where y represents a series of N ground-truth samples, and ŷ a series of N predicted values.
$$ MAE = \frac{1}{N}\sum_{i=1}^{N} \left|\hat{y}_i - y_i\right| \tag{8} $$
$$ RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(\hat{y}_i - y_i\right)^2} \tag{9} $$
$$ PCC = \frac{N\sum_{i=1}^{N}\hat{y}_i y_i - \sum_{i=1}^{N}\hat{y}_i\sum_{i=1}^{N}y_i}{\sqrt{N\sum_{i=1}^{N}\hat{y}_i^2 - \left(\sum_{i=1}^{N}\hat{y}_i\right)^2}\,\sqrt{N\sum_{i=1}^{N}y_i^2 - \left(\sum_{i=1}^{N}y_i\right)^2}} \tag{10} $$
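These measures translate directly into code; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE and PCC, as defined in Equations (8)-(10)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mae = np.mean(np.abs(y_pred - y_true))           # Eq. (8)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))  # Eq. (9)
    pcc = np.corrcoef(y_pred, y_true)[0, 1]          # equivalent to Eq. (10)
    return mae, rmse, pcc
```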

4. Proposed Model

In this section, we describe both the process that led to the creation of the feature vectors and the analysis performed to create our model.

4.1. Feature Vector

To compute the feature vector from the EEG signal, we started by performing a pre-processing step (Figure 2). Here, we first detrended the signal and eliminated the 50 Hz power line frequency by applying a notch filter. Then, to remove artifacts, we applied adaptive filtering techniques (for ECG artifacts) and wavelet thresholding (for EOG artifacts).
To extract the alpha, beta and gamma bands, we applied three FIR band-pass filters to the EEG signal. Then, we computed the Hjorth parameters and Spectral Entropy for each of the bands. Wavelet and IMF-based features were calculated directly on the signal obtained after the pre-processing step, because these algorithms already decompose the signal into band-limited components.
We used a 4 s epoch, with 50% overlap, and computed the selected features for this window of the EEG. These values were selected based on the literature and after some preliminary tests.
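A sketch of these pre-processing, band-extraction and epoching steps using SciPy is shown below. The sampling rate, filter length and the 30–45 Hz gamma cut-off are illustrative assumptions, and the adaptive-filtering/wavelet-thresholding artifact removal is omitted for brevity:

```python
import numpy as np
from scipy.signal import detrend, iirnotch, filtfilt, firwin

FS = 128  # Hz; assumed sampling rate (DEAP's preprocessed EEG is 128 Hz)

def preprocess(x, fs=FS):
    """Detrend and remove the 50 Hz power-line component (notch filter)."""
    x = detrend(x)
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(b, a, x)

def band(x, lo, hi, fs=FS, numtaps=129):
    """FIR band-pass, e.g. band(x, 8, 13) for alpha, band(x, 30, 45) for gamma."""
    taps = firwin(numtaps, [lo, hi], pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], x)

def epochs(x, fs=FS, length_s=4, overlap=0.5):
    """Split the signal into 4 s windows with 50% overlap."""
    win = int(length_s * fs)
    step = int(win * (1 - overlap))
    return [x[i:i + win] for i in range(0, len(x) - win + 1, step)]
```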

4.2. Methodology

To create our model for valence and arousal prediction, we used the DEAP dataset as ground-truth, since it has been widely used and validated by the research community. To that end, we performed an analysis of multiple factors (regressors, brain asymmetry, waves and features) to identify the best configurations for our prediction models. The steps followed are represented in Figure 3 and described below:
  • Regressors Selection: In this step, we compared the accuracy of the seven regressors selected for our analysis. For that, we used a feature vector composed of all the features computed for all electrodes and all waves (but without the asymmetry features);
  • Brain Asymmetry: After identifying the best regressors, we checked if the use of brain asymmetry could improve the results, and if so, which type of asymmetry and waves produced the best results;
  • Features by Waves: We verified the accuracy achieved using each feature (for each brain wave) individually. To perform feature selection, we used forward selection (a wrapper method) [56], where we started by ranking the features based on their PCC values, and then added them one by one to the model, until the results no longer improved;
  • Regressor Optimization: Finally, after identifying the best features, waves and regressors, we optimized the parameters of the selected regressors.
To train the several models on each step, we assigned to the feature vector extracted from each EEG epoch the self-reported values of valence and arousal present in the DEAP dataset. Since these values were reported for the overall video, all epochs from a video were annotated with the same valence and arousal values.
In all the experiments, we considered all the participants, and used a subject-independent setup through a 10-fold cross-validation. We randomly divided the dataset into 10 folds, then in turn, nine folds were used for training and the other one for testing. We report accuracy in terms of the three metrics (PCC, MAE, RMSE) as the average from the 10 iterations.
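A sketch of this evaluation loop is shown below; we use scikit-learn here for illustration, whereas the actual experiments were run in Weka 3.8 (see Section 4.3):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor

def cross_validate(X, y, model=None, folds=10, seed=42):
    """Subject-independent 10-fold CV over epochs pooled across all subjects."""
    model = model or KNeighborsRegressor(n_neighbors=1)
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    maes = []
    for train_idx, test_idx in kf.split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        maes.append(np.mean(np.abs(pred - y[test_idx])))
    return float(np.mean(maes))  # average MAE over the 10 iterations
```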

4.3. Regressors Selection

We started our analysis by identifying the best regression methods among the seven enumerated in Section 3.4: Additive Regression (AR), Decision Tree (DT), K-Nearest Neighbors (KNN), Linear Regression (LR), Random Forest (RF) and Support Vector Machine for Regression (SVR). The latter was tested with two different kernels, linear and Radial Basis Function (RBF). We used the versions of these machine learning algorithms present in the Weka 3.8 software [57], with their default parameters.
We performed three tests for each regressor, one for each frequency band (alpha, beta and gamma). The feature vector used had a dimension of 256, composed of eight features for each of the 32 channels: the three Hjorth parameters (H1, H2, H3), Spectral Entropy (SE), Wavelet Energy (WP), Wavelet Entropy (WE), IMF Energy (IMFP) and IMF Entropy (IMFE).
As shown in Table 3, the RF and KNN regressors achieved the best results overall for the three bands. Although the DT presents better results than the KNN for the gamma band, it is worse for the other two bands. SVR, AR and LR present the worst results for all bands. As such, we selected RF and KNN (with K = 1) to be used in the remaining tests.

4.4. Asymmetry Features

The brain asymmetry concept has been widely used for emotion recognition, particularly for valence classification. Here, we tested two types of asymmetry: differential and rational. The former was calculated by taking the difference in feature values between homologous channels of opposing hemispheres (e.g., F3–F4, F7–F8, etc.). The rational asymmetry was calculated by dividing the feature values along the same homologous channels. The resulting feature vector, for both asymmetries, had a dimension of 112 (8 features for 14 asymmetry channels). We also combined the two feature vectors into a single one.
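Both asymmetry types can be derived from per-channel feature vectors as sketched below; the channel pairs listed are an illustrative subset, not the exact montage used:

```python
import numpy as np

# Hypothetical pairing of homologous electrodes across hemispheres.
PAIRS = [("F3", "F4"), ("F7", "F8"), ("C3", "C4"), ("P3", "P4"),
         ("O1", "O2"), ("T7", "T8"), ("FC5", "FC6")]

def asymmetry(features, kind="differential"):
    """Differential (left - right) or rational (left / right) asymmetry.

    `features` maps each channel name to its vector of per-channel
    feature values (e.g., the 8 features used in this work).
    """
    rows = []
    for left, right in PAIRS:
        l, r = np.asarray(features[left]), np.asarray(features[right])
        rows.append(l - r if kind == "differential" else l / r)
    return np.concatenate(rows)
```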
From Table 4, we can see that the differential asymmetry of the alpha waves produced the best results for the prediction of valence, using both regressors. The best results for arousal were achieved using a combination of both asymmetries of the gamma waves, using RF, and the rational asymmetry of the beta spectrum, using KNN. In both predictions (valence and arousal) the RF regressor achieved the best results.

4.5. Features by Waves

To identify the best features per wave (or wave–feature pairs), we analyzed each pair individually, by training and testing a model for valence and another for arousal. Figure 4 shows the PCC values for the eight features per wave and for the two regressors. As we can see, the Wavelet Energy (WP) and the first Hjorth parameter (H1—Activity) produced the best results. On the other hand, the Spectral Entropy (SE) and the third Hjorth parameter (H3—Complexity) generated the worst results. Overall, beta and gamma features yielded the best results for both regressors.
To identify the set of wave–feature pairs that produced the best values, we first ranked them by their PCC value, and then added them one by one, starting from those with higher PCC values, and stopping when the inclusion of a new feature did not improve the results any further. We did this for each band separately and for band combinations, as well as using both regressors. The resulting set of waves and features for each combination is presented in Table 5.
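This greedy wrapper procedure can be sketched as follows (the `evaluate` callback, which trains a model on a candidate feature set and returns its PCC, is a hypothetical stand-in for our training pipeline):

```python
def forward_select(ranked_pairs, evaluate):
    """Greedy forward selection over wave-feature pairs pre-ranked by PCC."""
    selected, best = [], float("-inf")
    for pair in ranked_pairs:                # best individual PCC first
        score = evaluate(selected + [pair])  # train and score the model
        if score > best:
            selected.append(pair)
            best = score
        else:
            break  # stop once adding a pair no longer improves the PCC
    return selected, best
```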
Considering each wave alone, gamma and beta exhibit the best results in either regressor, for both valence and arousal. In general, KNN required more features than RF to achieve similar results, when considering single waves.
The combination of features from several waves improved the results, both for valence and arousal. For KNN, the combination of the three waves generated the best results, while for RF the addition of the alpha waves with other waves did not bring any improvement.
As we have seen in Table 4, the alpha differential asymmetry yielded the best results for valence prediction. Thus, we studied it to identify its best features. In Table 5, we can observe that the alpha differential asymmetry (αDA) generated much better results for valence than for arousal, as expected according to the literature.
Finally, we joined the alpha differential asymmetry with the best combination of waves. As we can see, we achieved the best results for arousal with this set of features. For valence, the values of PCC did not improve, but the MAE value for the KNN regressor was the smallest.
Before we selected the best features for each model, we performed a small test with the AMIGOS dataset. The test revealed that the combination of the alpha differential asymmetry with features from other waves yielded better results than using the asymmetry features only. Consequently, to attain a more generalized model, that is, one that would be accurate for any dataset with EEG characteristics and stimulus elicitation similar to those tested, we chose the following models for valence and arousal prediction:
  • KNN: All features except the Spectral Entropy (SE) from the three bands, plus the alpha differential asymmetry, with all features except the third Hjorth parameter (H3). This yields a feature vector of dimension 770 for DEAP (7 features × 3 waves × 32 channels + 7 features from alpha waves × 14 asymmetry channels) and 343 for AMIGOS and DREAMER (7 features × 3 waves × 14 channels + 7 features from alpha waves × 7 asymmetry channels).
  • RF: First Hjorth parameter (H1) and the Wavelet Energy (WP) from the beta and gamma waves, plus the alpha differential asymmetry from the first Hjorth parameter (H1), the Wavelet Energy (WP) and Wavelet Entropy (WE). The resulting feature vector has a dimension of 170 for DEAP (2 features × 2 waves × 32 channels + 3 features from alpha waves × 14 asymmetry channels) and 77 for AMIGOS and DREAMER (2 features × 2 waves × 14 channels + 3 features from alpha waves × 7 asymmetry channels).

4.6. Optimizing the Models

After identifying the best configuration for the four models, we performed hyperparameter tuning to optimize the regressors. For KNN, we tested several K values (1, 3, 5, 7, 11 and 21), and used the Manhattan distance in the neighbor search instead of the Euclidean distance. For RF, we tested 50, 500, 750 and 1000 trees (instead of the default 100).
From Table 6, we can see that KNN yielded the best results when K = 1. For RF, the results for 500, 750 and 1000 trees are equal for MAE and RMSE, with a very small difference for PCC. Therefore, we opted for 500 trees, since it has a lower computational cost.
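In scikit-learn terms (an illustrative equivalent of our Weka configuration), the final regressors would be instantiated as:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

# Final tuned configurations: K = 1 with Manhattan distance, and 500 trees.
knn = KNeighborsRegressor(n_neighbors=1, metric="manhattan")
rf = RandomForestRegressor(n_estimators=500, random_state=42)
```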

5. Experimental Evaluation

To assess the quality and generalization of the two models identified in the previous section, we conducted two experiments. One to evaluate the accuracy of the predicted values of valence and arousal, and another to assess the classification accuracy using the predicted values.

5.1. Setup

We conducted the evaluation using three datasets, DEAP, AMIGOS and DREAMER. For each dataset, we created the models using the settings identified in the previous section. In this way, we can understand how generalizable these settings are. We assessed the quality of the proposed models for two tasks: prediction and classification. In the former, we evaluated the models’ ability to predict the valence and arousal values, while in the latter, we measured the accuracy in identifying two classes (low/high arousal, low/high valence) and four classes (quadrants), using the estimated values to perform the classification tasks. In all the experiments, we used a subject-independent setup with a 10-fold cross-validation approach.

5.2. Prediction Results

As we can see in Table 7, both models (KNN and RF) achieved very good results for the three datasets, with PCC values greater than 0.755 and MAE values smaller than 0.158. In fact, although the best models were found using the DEAP dataset, the quality of the prediction for the two unseen datasets (AMIGOS and DREAMER) is even better than for DEAP. This shows that the two identified models are generic enough to deal with new data.
Overall, results show that these models can predict valence and arousal with low error and a strong correlation with the expected values. The KNN presents the lowest errors (MAE) in all situations, while for PCC and RMSE both regressors present very similar values.

5.3. Classification Results

The final step of our evaluation consisted of evaluating the models in two subject-independent emotion classification tasks. One where we distinguish between low/high arousal and low/high valence (two classes), and another where we identify the quadrant in the valence/arousal space (four classes). To that end, we used the pair of predicted valence and arousal values.
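A sketch of this conversion, assuming the predicted valence and arousal values were scaled to [0, 1] so that 0.5 separates the low and high classes (the threshold and function name are ours):

```python
def to_quadrant(valence, arousal, mid=0.5):
    """Map a predicted (valence, arousal) pair to a quadrant label."""
    if arousal >= mid:
        return "HAHV" if valence >= mid else "HALV"
    return "LAHV" if valence >= mid else "LALV"
```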

5.3.1. Arousal and Valence Binary Classification

In this classification task, we computed the accuracy rate for arousal and valence by averaging the classification rates for their low and high classes. We obtained these values for both regressors (KNN and RF) using the three datasets (see Table 8). As we can see, the KNN model achieved the best results for two datasets (DEAP and AMIGOS), while RF was slightly better than KNN on the DREAMER dataset. Thus, overall, we consider the KNN model to be the best one.
In Table 9, we compare the results achieved by the best identified model (KNN) with several recent works using the DEAP dataset. As we can see, our model achieved the highest classification rate, with values around 89.8% for both valence and arousal.

5.3.2. Arousal and Valence Quadrants Classification

In the second classification task, we identified the quadrant where the pair of predicted valence and arousal values was located. The classification results for KNN, RF and a mix of both (KNN for arousal and RF for valence due to the PCC values in Table 6) for the three datasets are shown in Figure 5. We present the results in the form of confusion matrices where rows represent the true quadrant and columns represent the predicted quadrant. It can be seen that the KNN-based models generated the best results for all datasets. This was foreseeable due to the small MAE values that these models displayed earlier (Table 6). We achieved better accuracy results for the two unseen datasets (AMIGOS and DREAMER), which shows that the features, brain waves and machine learning methods identified are generic enough to be used in unknown data.
Finally, we compared the classification results of the best identified model, with some recent approaches that perform a four class classification using the DEAP dataset. As we can see in Table 10, our best model (KNN) presents the best result, achieving an accuracy of 84.4%.

5.4. Discussion

The main goal of this work was to study the features, brain waves, and regressors that would ensure the creation of accurate and generalizable models for identifying emotional states from EEG, through the prediction of exact values for valence and arousal.
Our search for the best prediction models started with the comparison of several machine learning approaches, chosen based on their regular use and overall effectiveness and efficiency. RF and KNN achieved the highest PCC values and the lowest errors (MAE and RMSE), when compared to the remaining ones. Additionally, these regressors are relatively fast, making them good options for interactive applications where results should be produced in real-time.
The analysis of the features revealed that the first Hjorth parameter (H1), Wavelet Energy (WP) and IMF Energy (IMFP) generated the best accuracies on all frequency bands tested. These are features heavily correlated with power spectrum density. The other features, although not so relevant, also proved to be significant, as their inclusion in the KNN-based models improved the results. The only exception was the Spectral Entropy (SE), which deteriorated the results whenever it was included. The beta- and gamma-based features generated the best accuracies, which is consistent with the state-of-the-art.
The inclusion of the differential asymmetry of the alpha spectrum (αDA) considerably improved the valence prediction, as shown in Table 5. This corroborates the valence hypothesis, which states that the left hemisphere is dominant for processing positive emotions, while the right is dominant for negative ones [63].
After identifying the best features, we optimized the machine learning regressors by testing different values for their parameters (number of trees for RF, and K for KNN). For RF, we identified 500 trees, and for KNN, K = 1. We also changed the spatial metric of the KNN to the Manhattan distance, which improved the results.
To compare our results with previous approaches, we transformed the predicted valence and arousal values into high/low arousal and high/low valence (two classes) and the corresponding quadrant of the circumplex model of affect (four classes). In both classification scenarios, the identified KNN model achieved the highest accuracy, obtaining a value of 89.8% for two classes and 84.4% for four classes. These results are even more encouraging if we consider that they were obtained by predicting the arousal and valence values rather than directly from a classifier trained to identify classes (as the related work did). This means that we can accurately assess the emotional level of individuals by predicting arousal and valence values, and if necessary we can also identify discrete emotional classes.
From the achieved results, we can conclude that EEG can be used for predicting exact valence and arousal values (RQ1), and that the typical features, brain waves and machine learning models used for classification of emotions can be used for predicting exact valence and arousal values (RQ2). Finally, the two classification scenarios where we converted the predicted valence and arousal values into classes showed that our proposed model produces high quality results in classification tasks (RQ3).

6. Conclusions

In this work, we investigated the best combination of features, brain waves and regressors to build the best possible model to predict exact valence and arousal values. We identified KNN and RF as the best machine learning methods for regression. In general, the features extracted within the beta and gamma frequencies were the most accurate, and the brain asymmetry concept of the alpha band proved to be useful for predicting valence. In the end, the KNN-based model, using all features except the Spectral Entropy, achieved the best accuracy for arousal and valence prediction, as well as for classification. A comparison with previous works, using the DEAP dataset, shows that the identified model presents the highest accuracies for two and four classes, achieving 89.8% and 84.4%, respectively.
As future work, one can explore the use of these features and regressors in the analysis and classification of other physiological signals, since, according to [64], entropy features in combination with RF showed good results for analyzing ECG signals.

Author Contributions

Conceptualization, F.G., S.M.A. and M.J.F.; software, F.G.; supervision, M.J.F.; validation, F.G.; writing—original draft, F.G., S.M.A. and M.J.F.; writing—review and editing, F.G., S.M.A. and M.J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by FCT through project AWESOME, ref. PTDC/CCI-INF/29234/ 2017, and the LASIGE Research Unit, ref. UIDB/00408/2020 and ref. UIDP/00408/2020. Soraia M. Alarcão is funded by an FCT grant, ref. SFRH/BD/138263/2018.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alarcão, S.M.; Fonseca, M.J. Emotions Recognition Using EEG Signals: A Survey. IEEE Trans. Affect. Comput. 2019, 10, 374–393.
  2. Koelstra, S.; Mühl, C.; Soleymani, M.; Lee, J.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31.
  3. Miranda Correa, J.A.; Abadi, M.K.; Sebe, N.; Patras, I. AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups. IEEE Trans. Affect. Comput. 2018, 1–14.
  4. Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition through EEG and ECG Signals from Wireless Low-cost Off-the-Shelf Devices. IEEE J. Biomed. Health Inform. 2018, 22, 98–107.
  5. Smith, C.A.; Lazarus, R.S. Emotion and Adaptation. In Handbook of Personality: Theory and Research; The Guilford Press: New York, NY, USA, 1990; Chapter 23; pp. 609–637.
  6. James, W. The Physical Basis of Emotion. Psychol. Rev. 1894, 1, 516–529.
  7. Lange, C.G. The Emotions. In Psychology Classics; Williams & Wilkins (Original Work Published 1885): Baltimore, MD, USA, 1922; Chapter 2; pp. 33–90.
  8. Ekman, P. Basic Emotions; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 1999; Chapter 3.
  9. Mehrabian, A.; Russell, J. An Approach to Environmental Psychology; M.I.T. Press: Cambridge, MA, USA, 1974.
  10. Gunes, H.; Schuller, B. Categorical and Dimensional Affect Analysis in Continuous Input: Current Trends and Future Directions. Image Vis. Comput. 2013, 31, 120–136.
  11. Dalgleish, T. The Emotional Brain. Nat. Rev. Neurosci. 2004, 5, 583–589.
  12. Onton, J. High-Frequency Broadband Modulation of Electroencephalographic Spectra. Front. Hum. Neurosci. 2009, 3, 61.
  13. Munoz, R.; Olivares, R.; Taramasco, C.; Villarroel, R.; Soto, R.; Barcelos, T.S.; Merino, E.; Alonso-Sánchez, M.F. Using Black Hole Algorithm to Improve EEG-Based Emotion Recognition. Comput. Intell. Neurosci. 2018, 2018, 3050214.
  14. Zhang, Y.; Zhang, S.; Ji, X. EEG-Based Classification of Emotions Using Empirical Mode Decomposition and Autoregressive Model. Multimed. Tools Appl. 2018, 77, 26697–26710.
  15. Wang, X.W.; Nie, D.; Lu, B.L. Emotional State Classification from EEG Data Using Machine Learning Approach. Neurocomputing 2014, 129, 94–106.
  16. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175.
  17. Soleymani, M.; Lichtenauer, J.; Pun, T.; Pantic, M. A Multimodal Database for Affect Recognition and Implicit Tagging. IEEE Trans. Affect. Comput. 2011, 3, 42–55.
  18. Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation. Sci. World J. 2014, 2014, 627892.
  19. Chen, M.; Han, J.; Guo, L.; Wang, J.; Patras, I. Identifying Valence and Arousal Levels via Connectivity Between EEG Channels. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII), Xi’an, China, 21–24 September 2015; pp. 63–69.
  20. Sharma, R.; Pachori, R.B.; Sircar, P. Automated Emotion Recognition Based on Higher Order Statistics and Deep Learning Algorithm. Biomed. Signal Process. Control 2020, 58, 101867.
  21. Aguiñaga, A.R.; Ramirez, M.A.L. Emotional States Recognition, Implementing a Low Computational Complexity Strategy. Health Inform. J. 2018, 24, 146–170.
  22. Chen, T.; Ju, S.; Ren, F.; Fan, M.; Gu, Y. EEG Emotion Recognition Model Based on the LIBSVM Classifier. Measurement 2020, 164.
  23. Gupta, V.; Chopda, M.D.; Pachori, R.B. Cross-Subject Emotion Recognition Using Flexible Analytic Wavelet Transform from EEG Signals. IEEE Sens. J. 2019, 19, 2266–2274.
  24. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion Recognition from Brain Signals Using Hybrid Adaptive Filtering and Higher Order Crossings Analysis. IEEE Trans. Affect. Comput. 2010, 1, 81–97.
  25. Petrantonakis, P.C.; Hadjileontiadis, L.J. A Novel Emotion Elicitation Index Using Frontal Brain Asymmetry for Enhanced EEG-Based Emotion Recognition. IEEE Trans. Inf. Technol. Biomed. 2011, 15, 737–746.
  26. Zhang, J.; Chen, M.; Zhao, S.; Hu, S.; Shi, Z.; Cao, Y. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition. Sensors 2016, 16, 1558.
  27. Ackermann, P.; Kohlschein, C.; Bitsch, J.Á.; Wehrle, K.; Jeschke, S. EEG-Based Automatic Emotion Recognition: Feature Extraction, Selection and Classification Methods. In Proceedings of the IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; pp. 1–6.
  28. Mehmood, R.M.; Lee, H.J. A Novel Feature Extraction Method Based on Late Positive Potential for Emotion Recognition in Human Brain Signal Patterns. Comput. Electr. Eng. 2016, 53, 444–457.
  29. Atkinson, J.; Campos, D. Improving BCI-Based Emotion Recognition by Combining EEG Feature Selection and Kernel Classifiers. Expert Syst. Appl. 2016, 47, 35–41.
  30. Mert, A.; Akan, A. Emotion Recognition from EEG Signals by Using Multivariate Empirical Mode Decomposition. Pattern Anal. Appl. 2018, 21, 81–89.
  31. Mohammadi, Z.; Frounchi, J.; Amiri, M. Wavelet-based Emotion Recognition System using EEG Signal. Neural Comput. Appl. 2017, 28, 1985–1990.
  32. Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying Stable Patterns over Time for Emotion Recognition from EEG. IEEE Trans. Affect. Comput. 2019, 10, 417–429.
  33. Sourina, O.; Liu, Y. A Fractal-Based Algorithm of Emotion Recognition From EEG Using Arousal-Valence Model. In Proceedings of the International Conference on Bio-Inspired Systems and Signal Processing, Rome, Italy, 26–29 January 2011; Volume 2, pp. 209–214.
  34. Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-Based Emotion Recognition in Music Listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806.
  35. Chen, J.; Hu, B.; Moore, P.; Zhang, X.; Ma, X. Electroencephalogram-based Emotion Assessment System using Ontology and Data Mining Techniques. Appl. Soft Comput. 2015, 30, 663–674.
  36. Kim, B.H.; Jo, S. Deep Physiological Affect Network for the Recognition of Human Emotions. IEEE Trans. Affect. Comput. 2020, 11, 230–243.
  37. Du, X.; Ma, C.; Zhang, G.; Li, J.; Lai, Y.K.; Zhao, G.; Deng, X.; Liu, Y.J.; Wang, H. An Efficient LSTM Network for Emotion Recognition from Multichannel EEG Signals. IEEE Trans. Affect. Comput. 2020.
  38. Li, Y.; Huang, J.; Zhou, H.; Zhong, N. Human Emotion Recognition with Electroencephalographic Multidimensional Features by Hybrid Deep Neural Networks. Appl. Sci. 2017, 7, 1060.
  39. Kang, J.S.; Kavuri, S.; Lee, M. ICA-Evolution Based Data Augmentation with Ensemble Deep Neural Networks Using Time and Frequency Kernels for Emotion Recognition from EEG-Data. IEEE Trans. Affect. Comput. 2019.
  40. Liu, Y.; Sourina, O.; Nguyen, M.K. Real-Time EEG-Based Emotion Recognition and Its Applications. In Transactions on Computational Science XII; Springer: Berlin/Heidelberg, Germany, 2011; pp. 256–277.
  41. Murugappan, M.; Murugappan, S. Human Emotion Recognition Through Short Time Electroencephalogram (EEG) Signals Using Fast Fourier Transform (FFT). In Proceedings of the IEEE 9th International Colloquium on Signal Processing and its Applications, Kuala Lumpur, Malaysia, 8–10 March 2013; pp. 289–294.
  42. Cai, J.; Chen, W.; Yin, Z. Multiple Transferable Recursive Feature Elimination Technique for Emotion Recognition Based on EEG Signals. Symmetry 2019, 11, 683.
  43. Thammasan, N.; Moriyama, K.; Ichi Fukui, K.; Numao, M. Familiarity Effects in EEG-Based Emotion Recognition. Brain Inform. 2017, 4, 39–50.
  44. Alazrai, R.; Homoud, R.; Alwanni, H.; Daoud, M.I. EEG-Based Emotion Recognition using Quadratic Time-Frequency Distribution. Sensors 2018, 18, 2739.
  45. Xu, H.; Wang, X.; Li, W.; Wang, H.; Bi, Q. Research on EEG Channel Selection Method for Emotion Recognition. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, 6–8 December 2019; pp. 2528–2535.
  46. Bradley, M.M.; Lang, P.J. Measuring Emotion: The Self-Assessment Manikin and the Semantic Differential. J. Behav. Ther. Exp. Psychiatry 1994, 25, 49–59.
  47. Hjorth, B. EEG Analysis Based on Time Domain Properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310.
  48. Shannon, C.E. Communication Theory of Secrecy Systems. Bell Syst. Tech. J. 1949, 28, 656–715.
  49. Zhang, A.; Yang, B.; Huang, L. Feature Extraction of EEG Signals Using Power Spectral Entropy. In Proceedings of the International Conference on BioMedical Engineering and Informatics, Sanya, China, 27–30 May 2008; Volume 2, pp. 435–439.
  50. Subasi, A. EEG Signal Classification Using Wavelet Feature Extraction and a Mixture of Expert Model. Expert Syst. Appl. 2007, 32, 1084–1093.
  51. Islam, M.K.; Rastegarnia, A.; Yang, Z. Methods for Artifact Detection and Removal From Scalp EEG: A Review. Neurophysiol. Clin./Clin. Neurophysiol. 2016, 46, 287–305.
  52. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995.
  53. Zhuang, N.; Zeng, Y.; Tong, L.; Zhang, C.; Zhang, H.; Yan, B. Emotion Recognition from EEG Signals Using Multidimensional Information in EMD Domain. BioMed Res. Int. 2017, 2017, 8317357.
  54. Tsai, F.F.; Fan, S.Z.; Lin, Y.S.; Huang, N.E.; Yeh, J.R. Investigating Power Density and the Degree of Nonlinearity in Intrinsic Components of Anesthesia EEG by the Hilbert-Huang Transform: An Example Using Ketamine and Alfentanil. PLoS ONE 2016, 11, e0168108.
  55. Kossaifi, J.; Tzimiropoulos, G.; Todorovic, S.; Pantic, M. AFEW-VA Database for Valence and Arousal Estimation In-the-Wild. Image Vis. Comput. 2017, 65, 23–36.
  56. Tang, J.; Alelyani, S.; Liu, H. Feature Selection for Classification: A Review. In Data Classification: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2014; pp. 37–64.
  57. Eibe, F.; Hall, M.A.; Witten, I.H. The WEKA Workbench. Online Appendix. In Data Mining: Practical Machine Learning Tools and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2016.
  58. Liu, Z.T.; Xie, Q.; Wu, M.; Cao, W.H.; Li, D.Y.; Li, S.H. Electroencephalogram Emotion Recognition Based on Empirical Mode Decomposition and Optimal Feature Selection. IEEE Trans. Cogn. Dev. Syst. 2018, 11, 517–526.
  59. Wang, Z.; Tong, Y.; Heng, X. Phase-Locking Value Based Graph Convolutional Neural Networks for Emotion Recognition. IEEE Access 2019, 7, 93711–93722.
  60. Liu, J.; Meng, H.; Li, M.; Zhang, F.; Qin, R.; Nandi, A.K. Emotion Detection from EEG Recordings Based on Supervised and Unsupervised Dimension Reduction. Concurr. Comput. Pract. Exp. 2018, 30, e4446.
  61. Yin, Z.; Wang, Y.; Liu, L.; Zhang, W.; Zhang, J. Cross-Subject EEG Feature Selection for Emotion Recognition Using Transfer Recursive Feature Elimination. Front. Neurorobot. 2017, 11, 19.
  62. Ozel, P.; Akan, A.; Yilmaz, B. Synchrosqueezing Transform Based Feature Extraction from EEG Signals for Emotional State Prediction. Biomed. Signal Process. Control 2019, 52, 152–161.
  63. Alves, N.T.; Fukusima, S.S.; Aznar-Casanova, J.A. Models of Brain Asymmetry in Emotional Processing. Psychol. Neurosci. 2008, 1, 63–66.
  64. Li, T.; Zhou, M. ECG Classification Using Wavelet Packet Entropy and Random Forests. Entropy 2016, 18, 285.
Figure 1. The typical set of emotions recognized in the literature (a–d) and what we want to achieve with our work (e). A: arousal; V: valence. (best seen in color).
Figure 2. Steps to compute the feature vector from the raw EEG signal.
Figure 3. Steps of the analysis to identify the best configuration for our valence and arousal prediction models. (Lin: Linear; RBF: Radial Basis Function).
Figure 4. PCC values when each feature of each wave was used individually for predicting arousal and valence values. (best seen in color).
Figure 5. Confusion matrices of four quadrants classification for the KNN (top), RF (middle) and a combination of both regressors (bottom), using the datasets DEAP (left), AMIGOS (center) and DREAMER (right). (best seen in color).
Table 1. A brief summary of the analyzed works and their main characteristics.

Work | Database | Features (Brain Waves) | Classifier | Emotions (#Classes)
[17] | MAHNOB-HCI | PSD, DPSA (θ, α, β, γ) | SVM | arousal (3); valence (3)
[2] | DEAP | PSD, APSD (θ, α, β, γ) | NB | arousal (2); valence (2)
[41] | Video (Own) | SampEn, Spectral Centroid (α, β, γ) | KNN, PNN | disgust, happy, surprise, fear and neutral (5)
[18] | DEAP | PSD (θ, α, β, γ) | DLN, SVM, NB | arousal (3); valence (3)
[15] | Video (Own) | DPSA, WT, WE, AE, FD, HE (δ, θ, α, β, γ) | SVM | positive and negative (2)
[19] | DEAP | Pearson correlation, Phase coherence, MI (θ, α, β, γ) | SVM | arousal (2); valence (2)
[16] | SEED | PSD, DE, Differential/Rational asymmetry (δ, θ, α, β, γ) | KNN, LR, SVM, DBN | positive, negative and neutral (3)
[27] | DEAP | PSD, STFT, HHS, HOC (θ, α, β, γ) | RF, SVM | anger, surprise, other (3)
[29] | DEAP | Statistical, PSD, HP, FD (θ, α, β, γ) | SVM | arousal (2); valence (2)
[31] | DEAP | WP, WE (θ, α, β, γ) | SVM, KNN | arousal (2); valence (2)
[43] | DEAP, Music | PSD, FD, differential asymmetry (δ, θ, α, β, γ) | SVM, MLP, C4.5 | arousal (2); valence (2)
[13] | MAHNOB-HCI | EMD, SampEn (β) | SVM | high/low valence/arousal (4)
[14] | DEAP | EMD, AR (β) | SVM | high/low valence/arousal (4)
[4] | DREAMER | Logarithmic PSD (θ, α, β) | SVM | arousal (2); valence (2)
[3] | AMIGOS | Logarithmic PSD, APSD (θ, α, β, γ) | NB | arousal (2); valence (2)
[21] | DEAP | WP (δ, θ, α, β, γ) | ANN, SVM | high/low valence/arousal (4)
[44] | DEAP | Quadratic time-frequency distributions (Custom) | SVM | high/low valence/arousal (4)
[23] | DEAP, SEED | Flexible Analytic WT, Rényi's Quadratic Entropy (Custom) | SVM, RF | high/low valence/arousal (4); positive, negative and neutral (3)
[42] | DEAP | PSD, APSD, Shannon Entropy, SE, ZCR, Statistical (θ, α, β, γ) | LSSVM (Least Square SVM) | joy, peace, anger and depression (4)
[45] | DEAP | WP, WE (θ, α, β, γ) | KELM | high/low valence/arousal (4)
[20] | DEAP, SEED | WT, High Order Statistics (δ, θ, α, β, γ) | DLN | high/low valence/arousal (4); positive, negative and neutral (3)
[22] | Video (Own) | LZC, WT, Cointegration Degree, EMD, AE (Custom) | SVM | arousal (2); valence (2)

Feature Extraction: AE—Approximate Entropy, APSD—Asymmetric Power Spectrum Density, AR—Auto Regressive models, DE—Differential Entropy, DPSA—Differential Power Spectral Asymmetry, EMD—Empirical Mode Decomposition, FD—Fractal Dimensions, HE—Hurst Exponent, HHS—Hilbert–Huang Spectrum, HOC—Higher Order Crossings, HP—Hjorth Parameters, LZC—Lempel-Ziv Complexity, MI—Mutual Information, PSD—Power Spectrum Density, SampEn—Sample Entropy, SE—Spectral Entropy, STFT—Short-Time Fourier Transform, WE—Wavelet Entropy, WP—Wavelet Energy, WT—Wavelet Transform and ZCR—Zero-Crossing Rate. Classifier: ANN—Artificial Neural Networks, DBN—Deep Belief Networks, DLN—Deep Learning Networks, KELM—Extreme Learning Machine with kernel, KNN—K-Nearest Neighbors, LR—Logistic Regression, MLP—Multi-Layer Perceptron, NB—Naive Bayes, RF—Random Forest and SVM—Support Vector Machines.
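For readers who want to experiment with the features listed above, the following is a minimal, illustrative sketch (not the authors' implementation) of two of them: the Hjorth parameters [47] and a normalized spectral entropy [48,49]. It assumes NumPy/SciPy; function names, the synthetic signal, and the sampling rate are placeholders.

```python
import numpy as np
from scipy.signal import welch

def hjorth_parameters(x):
    """Hjorth activity, mobility and complexity of a 1-D EEG band signal."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def spectral_entropy(x, fs):
    """Shannon entropy of the normalized power spectral density (in [0, 1])."""
    _, psd = welch(x, fs=fs)
    p = psd / psd.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p)) / np.log2(len(p))

# Example on one second of a synthetic 128 Hz signal
rng = np.random.default_rng(0)
x = rng.standard_normal(128)
print(hjorth_parameters(x), spectral_entropy(x, fs=128))
```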
Table 2. Main characteristics of the AMIGOS, DEAP and DREAMER datasets.

 | AMIGOS | DEAP | DREAMER
#Videos | 16 (4 per quadrant) | 40 (10 per quadrant) | 18 (nine emotions twice)
Type | movie extracts | music videos | film clips
Duration | <250 s | 60 s | 65–393 s
Physiological Signals | EEG, GSR, ECG | EEG, GSR, BVP, RESP, SKT, EOG, EMG | EEG, ECG
Participants | 40 (13 female) | 32 (16 female) | 23 (9 female)

Physiological signals: electroencephalography (EEG), galvanic skin response (GSR), electrocardiography (ECG), blood volume pulse (BVP), respiration (RESP), skin temperature (SKT), electrooculography (EOG) and electromyography (EMG).
Table 3. Results for each regressor per wave. Values in bold represent the two best results for each column in each wave. Grey rows represent the selected regressors.

Wave | Regressor | Arousal PCC | Arousal MAE | Arousal RMSE | Valence PCC | Valence MAE | Valence RMSE
Alpha | AR | 0.248 | 0.205 | 0.245 | 0.172 | 0.224 | 0.264
Alpha | DT | 0.339 | 0.194 | 0.241 | 0.263 | 0.216 | 0.261
Alpha | KNN (K = 1) | 0.459 | 0.158 | 0.263 | 0.417 | 0.178 | 0.290
Alpha | LR | 0.273 | 0.202 | 0.244 | 0.203 | 0.222 | 0.262
Alpha | RF | 0.517 | 0.177 | 0.219 | 0.480 | 0.197 | 0.238
Alpha | SVR (linear) | 0.251 | 0.200 | 0.248 | 0.232 | 0.237 | 0.242
Alpha | SVR (RBF) | 0.244 | 0.212 | 0.235 | 0.214 | 0.211 | 0.241
Beta | AR | 0.297 | 0.201 | 0.242 | 0.229 | 0.221 | 0.261
Beta | DT | 0.431 | 0.180 | 0.232 | 0.404 | 0.196 | 0.249
Beta | KNN (K = 1) | 0.605 | 0.118 | 0.225 | 0.588 | 0.130 | 0.244
Beta | LR | 0.301 | 0.200 | 0.242 | 0.280 | 0.216 | 0.257
Beta | RF | 0.643 | 0.159 | 0.199 | 0.636 | 0.172 | 0.213
Beta | SVR (linear) | 0.273 | 0.198 | 0.247 | 0.259 | 0.225 | 0.243
Beta | SVR (RBF) | 0.260 | 0.215 | 0.229 | 0.241 | 0.226 | 0.239
Gamma | AR | 0.297 | 0.201 | 0.242 | 0.256 | 0.219 | 0.259
Gamma | DT | 0.472 | 0.173 | 0.227 | 0.486 | 0.183 | 0.237
Gamma | KNN (K = 1) | 0.409 | 0.174 | 0.275 | 0.387 | 0.191 | 0.297
Gamma | LR | 0.247 | 0.203 | 0.246 | 0.255 | 0.218 | 0.259
Gamma | RF | 0.669 | 0.153 | 0.194 | 0.667 | 0.164 | 0.205
Gamma | SVR (linear) | 0.218 | 0.202 | 0.251 | 0.211 | 0.218 | 0.265
Gamma | SVR (RBF) | 0.213 | 0.203 | 0.249 | 0.176 | 0.221 | 0.264
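As an illustration of how a comparison like Table 3 can be set up, here is a hedged sketch using scikit-learn and SciPy. The feature matrix X and target y are random stand-ins for one band's feature vectors and the normalized arousal (or valence) ratings, and the single train/test split is a simplification of the evaluation protocol; the AR regressor is omitted.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

regressors = {
    "DT": DecisionTreeRegressor(random_state=0),
    "KNN (K = 1)": KNeighborsRegressor(n_neighbors=1),
    "LR": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVR (linear)": SVR(kernel="linear"),
    "SVR (RBF)": SVR(kernel="rbf"),
}

# Stand-ins: 200 trials, 32 features from one band; targets in [0, 1].
rng = np.random.default_rng(0)
X, y = rng.random((200, 32)), rng.random(200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, reg in regressors.items():
    pred = reg.fit(X_tr, y_tr).predict(X_te)
    pcc, _ = pearsonr(y_te, pred)                    # correlation
    mae = mean_absolute_error(y_te, pred)            # mean absolute error
    rmse = mean_squared_error(y_te, pred) ** 0.5     # root mean squared error
    print(f"{name:14s} PCC={pcc:.3f} MAE={mae:.3f} RMSE={rmse:.3f}")
```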
Table 4. Results for two types of asymmetry, and their combination, per wave. Bold values represent the best results for KNN and RF on each column.

Reg. | Asymmetry | Wave | Arousal PCC | Arousal MAE | Arousal RMSE | Valence PCC | Valence MAE | Valence RMSE
KNN | Rational | Alpha | 0.410 | 0.175 | 0.275 | 0.366 | 0.196 | 0.302
KNN | Rational | Beta | 0.558 | 0.132 | 0.238 | 0.550 | 0.144 | 0.255
KNN | Rational | Gamma | 0.447 | 0.165 | 0.266 | 0.448 | 0.177 | 0.283
KNN | Differential | Alpha | 0.411 | 0.182 | 0.275 | 0.739 | 0.125 | 0.191
KNN | Differential | Beta | 0.338 | 0.195 | 0.290 | 0.337 | 0.209 | 0.308
KNN | Differential | Gamma | 0.271 | 0.212 | 0.305 | 0.279 | 0.227 | 0.322
KNN | Both | Alpha | 0.472 | 0.162 | 0.260 | 0.672 | 0.134 | 0.215
KNN | Both | Beta | 0.530 | 0.141 | 0.245 | 0.525 | 0.152 | 0.262
KNN | Both | Gamma | 0.422 | 0.172 | 0.271 | 0.423 | 0.185 | 0.289
RF | Rational | Alpha | 0.516 | 0.176 | 0.218 | 0.481 | 0.196 | 0.237
RF | Rational | Beta | 0.681 | 0.151 | 0.191 | 0.679 | 0.162 | 0.202
RF | Rational | Gamma | 0.694 | 0.147 | 0.188 | 0.694 | 0.157 | 0.194
RF | Differential | Alpha | 0.610 | 0.166 | 0.205 | 0.862 | 0.120 | 0.153
RF | Differential | Beta | 0.674 | 0.151 | 0.192 | 0.672 | 0.162 | 0.203
RF | Differential | Gamma | 0.689 | 0.148 | 0.189 | 0.685 | 0.158 | 0.200
RF | Both | Alpha | 0.586 | 0.169 | 0.209 | 0.812 | 0.141 | 0.176
RF | Both | Beta | 0.680 | 0.150 | 0.191 | 0.682 | 0.161 | 0.202
RF | Both | Gamma | 0.696 | 0.146 | 0.187 | 0.694 | 0.156 | 0.198
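The two asymmetry variants in Table 4 follow the usual definitions: differential asymmetry subtracts, and rational asymmetry divides, a feature computed on symmetric left/right electrode pairs. A minimal sketch under those definitions follows; the electrode pairs and the per-channel feature dictionary are illustrative assumptions, not the exact montage used here.

```python
import numpy as np

# Illustrative symmetric pairs from the 10-20 system (an assumption).
PAIRS = [("F3", "F4"), ("F7", "F8"), ("T7", "T8"), ("P3", "P4"), ("O1", "O2")]

def asymmetry_features(feat, kind="differential"):
    """feat: dict mapping channel name -> scalar feature (e.g., band power)."""
    out = []
    for left, right in PAIRS:
        if kind == "differential":
            out.append(feat[left] - feat[right])  # left minus right
        else:
            out.append(feat[left] / feat[right])  # left over right
    return np.array(out)

# Example with random band powers (offset to keep ratios well defined)
rng = np.random.default_rng(0)
channels = ["F3", "F4", "F7", "F8", "T7", "T8", "P3", "P4", "O1", "O2"]
power = dict(zip(channels, rng.random(10) + 0.1))
print(asymmetry_features(power))
print(asymmetry_features(power, kind="rational"))
```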
Table 5. Results for the different features by waves, and their combinations. Values in bold represent the best results for KNN and RF, for valence and arousal. Grey rows represent the selected features.

Reg. | Waves & Features | Arousal PCC | Arousal MAE | Arousal RMSE | Valence PCC | Valence MAE | Valence RMSE
KNN | α (All − SE) | 0.505 | 0.147 | 0.252 | 0.451 | 0.167 | 0.281
KNN | β (All − SE) | 0.621 | 0.114 | 0.221 | 0.597 | 0.126 | 0.241
KNN | γ (WP + H1 + IMFP) | 0.535 | 0.139 | 0.244 | 0.528 | 0.149 | 0.260
KNN | α, β (All − SE) | 0.689 | 0.093 | 0.199 | 0.658 | 0.105 | 0.221
KNN | α, γ (All − SE) | 0.640 | 0.109 | 0.215 | 0.603 | 0.122 | 0.239
KNN | β, γ (All − SE) | 0.672 | 0.099 | 0.205 | 0.652 | 0.110 | 0.224
KNN | α, β, γ (All − SE) | 0.722 | 0.084 | 0.189 | 0.691 | 0.095 | 0.211
KNN | αDA (All − H3) | 0.430 | 0.178 | 0.270 | 0.774 | 0.115 | 0.178
KNN | α, β, γ (All − SE) + αDA (All − H3) | 0.743 | 0.078 | 0.182 | 0.750 | 0.082 | 0.189
RF | α (All − SE − H3) | 0.543 | 0.173 | 0.215 | 0.505 | 0.193 | 0.234
RF | β (WP + H1) | 0.669 | 0.150 | 0.192 | 0.661 | 0.161 | 0.204
RF | γ (WP + H1 + IMFP) | 0.719 | 0.138 | 0.180 | 0.717 | 0.145 | 0.190
RF | β, γ (H1 + WP) | 0.722 | 0.138 | 0.180 | 0.719 | 0.147 | 0.190
RF | αDA (H1 + WP + WE) | 0.642 | 0.159 | 0.198 | 0.901 | 0.097 | 0.126
RF | β, γ (H1 + WP) + αDA (H1 + WP + WE) | 0.748 | 0.136 | 0.175 | 0.845 | 0.119 | 0.155
Table 6. Results for the optimization of the KNN and RF models, for different values of K, Manhattan distance and number of trees (T). Values in bold represent the best results for each model. Grey rows represent the selected parameters.

Model | Par. | Arousal PCC | Arousal MAE | Arousal RMSE | Valence PCC | Valence MAE | Valence RMSE
KNN | K = 1 | 0.794 | 0.062 | 0.163 | 0.795 | 0.066 | 0.172
KNN | K = 3 | 0.725 | 0.120 | 0.175 | 0.725 | 0.128 | 0.185
KNN | K = 5 | 0.684 | 0.137 | 0.185 | 0.689 | 0.146 | 0.194
KNN | K = 7 | 0.655 | 0.147 | 0.192 | 0.663 | 0.156 | 0.201
KNN | K = 11 | 0.622 | 0.156 | 0.199 | 0.633 | 0.166 | 0.208
KNN | K = 21 | 0.579 | 0.166 | 0.208 | 0.595 | 0.176 | 0.217
RF | T = 50 | 0.740 | 0.137 | 0.176 | 0.839 | 0.119 | 0.156
RF | T = 100 | 0.748 | 0.136 | 0.175 | 0.845 | 0.119 | 0.155
RF | T = 500 | 0.755 | 0.135 | 0.174 | 0.852 | 0.118 | 0.153
RF | T = 750 | 0.755 | 0.135 | 0.174 | 0.852 | 0.118 | 0.153
RF | T = 1000 | 0.756 | 0.135 | 0.174 | 0.853 | 0.118 | 0.153

KNN features: α, β, γ (All − SE) + αDA (All − H3); RF features: β, γ (H1 + WP) + αDA (H1 + WP + WE).
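For reference, the selected KNN configuration in Table 6 (K = 1 with Manhattan distance) is straightforward to instantiate; a sketch assuming scikit-learn follows, with random placeholder feature vectors standing in for the feature pipeline described above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# K = 1 with Manhattan (L1) distance, as selected in Table 6.
knn = KNeighborsRegressor(n_neighbors=1, metric="manhattan")

rng = np.random.default_rng(0)
X_train = rng.random((100, 40))   # placeholder feature vectors
y_train = rng.random((100, 2))    # columns: [arousal, valence] in [0, 1]
X_test = rng.random((5, 40))

knn.fit(X_train, y_train)         # KNeighborsRegressor accepts 2-D targets
print(knn.predict(X_test))        # exact arousal/valence per test trial
```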
Table 7. Prediction results using the datasets DEAP, AMIGOS and DREAMER.

Reg. | Dataset | Arousal PCC | Arousal MAE | Arousal RMSE | Valence PCC | Valence MAE | Valence RMSE
KNN | DEAP | 0.794 | 0.062 | 0.163 | 0.795 | 0.066 | 0.172
KNN | AMIGOS | 0.830 | 0.045 | 0.129 | 0.808 | 0.063 | 0.175
KNN | DREAMER | 0.806 | 0.058 | 0.165 | 0.812 | 0.076 | 0.213
RF | DEAP | 0.755 | 0.135 | 0.174 | 0.852 | 0.118 | 0.153
RF | AMIGOS | 0.789 | 0.115 | 0.148 | 0.769 | 0.158 | 0.195
RF | DREAMER | 0.864 | 0.099 | 0.142 | 0.870 | 0.128 | 0.181
Table 8. Accuracy values (%) for arousal and valence binary classification. Values in bold represent the best results for each column.

Model | DEAP Arousal | DEAP Valence | AMIGOS Arousal | AMIGOS Valence | DREAMER Arousal | DREAMER Valence
KNN | 89.84 | 89.83 | 92.46 | 90.69 | 93.72 | 92.16
RF | 80.62 | 85.91 | 85.98 | 83.00 | 93.79 | 93.65
Table 9. Comparison of the accuracy (%) of the proposed model with previous works, for arousal and valence binary classification (low/high arousal, low/high valence). Values are from the original papers and using the DEAP dataset.

Year | Method | Arousal | Valence
2020 | Deep Physiological Affect Network (Convolutional LSTM with a temporal loss function) [36] | 79.03 | 78.72
2020 | Attention-based LSTM with Domain Discriminator [37] | 72.97 | 69.06
2019 | Spectrum centroid and Lempel–Ziv complexity from EMD; KNN [58] | 86.46 | 84.90
2019 | Ensemble of CNNs with LSTM model [39] | — | 84.92
2019 | Phase-locking value-based graph CNN [59] | 77.03 | 73.31
2018 | Time, frequency and connectivity features combined with mRMR and PCA for feature reduction; Random Forest [60] | 74.30 | 77.20
2017 | Transfer recursive feature elimination; least square SVM [61] | 78.67 | 78.75
2012 | EEG power spectral features + asymmetry, from four bands; naive Bayes classifier [2] (DEAP paper) | 62.00 | 57.60
2021 | Proposed model: α, β, γ (All − SE) + αDA (All − H3); KNN, K = 1 | 89.84 | 89.83
Table 10. Comparison of the accuracy (%) of the proposed model with previous works, for the four quadrants classification. Values are from the original papers and using the DEAP dataset.

Year | Method | Accuracy
2020 | Nonlinear higher order statistics and deep learning algorithm [20] | 82.01
2019 | Wavelet energy and entropy; Extreme Learning Machine with kernel [45] | 80.83
2019 | Time-frequency analysis using multivariate synchrosqueezing transform; Gaussian SVM [62] | 76.30
2018 | Wavelet energy; SVM classifier [21] | 81.97
2018 | Flexible analytic wavelet transform + information potential to extract features; Random Forest [23] | 71.43
2017 | Hybrid deep learning neural network (CNN + LSTM) [38] | 75.21
2016 | Discriminative Graph regularized Extreme Learning Machine with differential entropy features [32] | 69.67
2021 | Proposed model: α, β, γ (All − SE) + αDA (All − H3); KNN, K = 1 | 84.40
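Because the proposed model predicts continuous values, the binary and four-quadrant labels used in Tables 8–10 can be derived from its output by thresholding. A sketch of that mapping follows, assuming ratings normalized to [0, 1] and 0.5 as the high/low boundary (the threshold is an assumption for illustration).

```python
def quadrant(arousal: float, valence: float, thr: float = 0.5) -> str:
    """Map predicted (arousal, valence) values to HAHV/HALV/LAHV/LALV."""
    a = "HA" if arousal >= thr else "LA"   # high/low arousal
    v = "HV" if valence >= thr else "LV"   # high/low valence
    return a + v

print(quadrant(0.64, 0.82))  # -> HAHV
print(quadrant(0.12, 0.31))  # -> LALV
```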

Short Biography of Authors

Filipe Galvão holds a master's degree in Biomedical Engineering and Biophysics from the Faculty of Sciences, University of Lisbon (2020). His research interests include Emotion Recognition, Medical Signal Analysis, Signal Processing and Brain–Computer Interfaces.
Soraia M. Alarcão holds a master's degree in Information Systems and Computer Engineering from IST/ULisbon (2014), and is currently a PhD student at the Informatics Department of the Faculty of Sciences, University of Lisbon, Portugal. She has been a researcher at LASIGE since 2014. Her research interests include Accessibility, Emotion Recognition, Human–Computer Interaction, Health Systems and Multimedia Information Retrieval.
Manuel J. Fonseca holds a PhD (2004) in Information Systems and Computer Engineering from IST/ULisbon, is an Associate Professor at the Faculty of Sciences, University of Lisbon, and a senior researcher at LASIGE. His main research areas include Human–Computer Interaction, Emotion Recognition, Brain–Computer Interfaces, Multimedia Information Retrieval, Sketch Recognition and Health Systems. He is a senior member of the IEEE and the ACM, and a member of Eurographics.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
