A Novel Approach for Emotion Recognition Based on EEG Signal Using Deep Learning
Abstract
1. Introduction
- To the best of our knowledge, this study is the first in which the GAMEEMO dataset was analyzed with the DeepBiLSTM model.
- The deep learning model was observed to achieve better performance than the machine learning models.
- The EEG signals in the GAMEEMO dataset were obtained with a portable EEG device; the results indicate that portable EEG devices are at least as successful as conventional devices.
- The discrete emotion model and the dimensional emotion model were observed to yield different classification performances.
2. Related Works
3. Material and Methods
3.1. GAMEEMO Dataset
- In the first stage, the subjects played four different computer games for 5 min each, so 20 min of EEG data was obtained per subject.
- After each game, the subjects filled out the SAM (Self-Assessment Manikin) form, which was used to label the emotions elicited by that game.
3.2. Feature Extraction Methods
3.2.1. Empirical Mode Decomposition
3.2.2. Variational Mode Decomposition
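Since the feature extraction pipeline combines EMD, VMD, and simple statistics (see Section 4), a minimal sketch of such a decomposition step is given below. It is a hedged illustration, not the authors' code: it assumes the third-party PyEMD (EMD-signal) and vmdpy packages, the signal is a placeholder, and the VMD parameter values shown are common defaults rather than values reported in the paper.

```python
# A minimal sketch of EMD/VMD-based feature extraction, assuming the
# third-party PyEMD (EMD-signal) and vmdpy packages. Reducing each
# component to min/max/mean follows the text; the signal and the VMD
# parameters (alpha, tau, K, ...) are illustrative assumptions.
import numpy as np
from PyEMD import EMD   # pip install EMD-signal
from vmdpy import VMD   # pip install vmdpy

def statistical_features(component: np.ndarray) -> np.ndarray:
    # The study keeps only three statistics per component.
    return np.array([component.min(), component.max(), component.mean()])

signal = np.random.randn(1024)  # placeholder for one EEG channel segment

# Empirical Mode Decomposition: data-driven split into intrinsic mode functions.
imfs = EMD()(signal)

# Variational Mode Decomposition: K modes obtained by solving a constrained
# variational problem (parameter values here are common defaults).
modes, _, _ = VMD(signal, alpha=2000, tau=0.0, K=5, DC=0, init=1, tol=1e-7)

emd_features = np.concatenate([statistical_features(m) for m in imfs])
vmd_features = np.concatenate([statistical_features(m) for m in modes])
```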
3.3. Classifier
3.4. K-Fold Validation
4. Application Results and Discussion
- A total of 1568 (14 × 28 × 4) EEG signal segments, represented by 4704 (14 × 3 × 28 × 4) feature values in total, were used in the input layer. Here, 14 is the number of EEG channels, 28 the number of subjects, 3 the number of statistical features collected per signal, and 4 the number of zones of the arousal-valence plane.
- 256-unit BiLSTM was used in the second layer. ReLU (rectified linear unit) was preferred as the activation function.
- BiLSTM with 128 units was preferred for the third layer. As in the previous layer, ReLU is employed as the activation function in this layer.
- A total of 64 BiLSTM units were used in the fourth layer. As with the previous two layers, ReLU was used as the activation function.
- Then, the data were flattened into a one-dimensional vector.
- After flattening, batch normalization was applied to bring the activations to a common scale, stabilizing training.
- Then, dropout was applied to prevent overfitting, removing 15% of the neurons from the architecture.
- Two fully connected layers were considered. While 512 neurons were used in the first fully connected layer, the number of neurons was reduced to 256 in the second fully connected layer.
- The last layer performs the classification: two neurons were used for binary-class classification, and four neurons for multi-class classification, matching the number of classes.
- Sigmoid is used as the activation function of the classification layer in binary-class classification, and Softmax in multi-class classification.
- The model error is measured with the binary cross-entropy function in binary-class classification and with categorical cross-entropy in multi-class classification.
- The Adam optimizer was used in both classification processes, with a learning rate of 0.001 and a decay value of 0.00001.
- The number of epochs of the model was determined as 250.
- Validation of the model was carried out with 10-fold cross-validation.
- All these parameters were determined by a trial-and-error approach; a code sketch of the resulting architecture is given below.
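Taken together, the bullet points above describe an architecture along the following lines. This is a hedged Keras sketch, not the authors' code: the layer widths, activations, dropout rate, output layers, losses, and Adam learning rate come from the text, while the input shape, dense-layer activations, and function name are illustrative assumptions.

```python
# A minimal Keras sketch of the DeepBiLSTM model described above.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_deep_bilstm(timesteps: int, n_features: int, n_classes: int) -> tf.keras.Model:
    binary = n_classes == 2
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),  # input shaping is an assumption
        layers.Bidirectional(layers.LSTM(256, activation="relu", return_sequences=True)),
        layers.Bidirectional(layers.LSTM(128, activation="relu", return_sequences=True)),
        layers.Bidirectional(layers.LSTM(64, activation="relu", return_sequences=True)),
        layers.Flatten(),             # make all data one-dimensional
        layers.BatchNormalization(),  # normalize the flattened activations
        layers.Dropout(0.15),         # drop 15% of the neurons
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        # 2 neurons + sigmoid for binary, 4 neurons + softmax for multi-class.
        layers.Dense(n_classes, activation="sigmoid" if binary else "softmax"),
    ])
    # The text reports a learning rate of 0.001 and a decay of 0.00001; on
    # older Keras versions the decay can be passed as Adam(..., decay=1e-5).
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="binary_crossentropy" if binary else "categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```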
- For the SVM model, a linear kernel was used for binary classification, while the GRB (Gaussian Radial Basis) kernel was applied for multi-class classification.
- The size of the margin value (C) was set to 1.5.
- For the random forest model, the maximum depth was set to 25.
- A total of 500 estimators (trees) were used.
- For the kNN model, three neighbors were used and the distance between neighbors was calculated with the Euclidean metric.
- The weights of the model were set with the 'distance' parameter.
- The brute-force algorithm was used for the neighbor search.
- A leaf size of 20 was used. (A scikit-learn sketch of these settings follows below.)
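For comparison, the machine learning baselines with the hyperparameters listed above could be set up in scikit-learn roughly as follows; the feature matrix X and label vector y are assumed to exist.

```python
# A hedged scikit-learn sketch of the three comparison classifiers with the
# hyperparameters reported above.
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

svm_binary = SVC(kernel="linear", C=1.5)   # binary classification
svm_multi = SVC(kernel="rbf", C=1.5)       # Gaussian radial basis kernel
rf = RandomForestClassifier(max_depth=25, n_estimators=500)
# leaf_size has no effect with brute-force search but is kept to match the text.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean",
                           weights="distance", algorithm="brute",
                           leaf_size=20)

# 10-fold cross-validation, as used for all models in the study:
# scores = cross_val_score(knn, X, y, cv=10)
```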
- It was observed that the deep learning algorithm outperforms certain machine learning algorithms despite the small amount of data, even though deep learning studies generally require large datasets.
- Although the data distribution in binary classification is not very distinct, the proposed deep learning model achieved acceptable performance.
- Manual feature extraction is generally not performed when classifying with deep learning. In this study, deep learning was applied after feature extraction, and extracting features from the signals may have caused some information to be lost. Despite this, the proposed deep learning model achieved a successful classification.
- Due to the large sample length of the signals in the dataset, the raw data could not be processed directly and feature extraction was required, which may also have caused some information loss. Processing the raw data could increase the performance of the deep learning algorithm.
- The performance of the deep learning model and the other machine learning models varies with the selected feature extraction methods; other feature extraction methods may capture more information and positively affect performance.
- Although the amount of data is small, the proposed method was successful. However, this alone does not make the model reliable; to establish its reliability, it should be tested on different datasets or the results obtained here should be reproduced.
- Emotion is an abstract concept that varies from person to person; therefore, no single study can reach a fully definitive conclusion.
- In current studies, visual or aural stimuli are generally used separately, which does not produce a sufficiently strong stimulating effect.
- In some other studies in the literature, visual and aural stimuli are used at the same time, usually as videos or music clips. However, these stimuli are not as effective as computer games.
- Traditional EEG devices are generally used in current studies; these devices are both difficult and costly to use.
- In this study, emotion analysis based on EEG signals was observed to be more effective, since EEG signals cannot be manipulated by the subjects and real emotions cannot be hidden.
- Classification according to the dimensional emotion model (arousal-valence) was observed to be more effective than the discrete emotion model (positive-negative). This suggests that, as the number of classes increases, the classification becomes more robust and the deep learning model learns better.
- The raw EEG signals were used in the study without any additional resampling; in this way, the processing load was reduced and time was saved.
- With this study, it has been observed that portable EEG devices are at least as effective as conventional devices. We think that the results of this study will encourage researchers to use portable devices.
- Only statistical methods were used for feature extraction, so the data were not very complex: just the maximum, minimum, and average values were used, which reduced the size of the data. This did not adversely affect the classifiers, and an effective classification was still performed, indicating that a few concise features can be more effective in classification than a large number of features.
- It has been observed that the deep learning algorithm is more effective than some of the machine learning algorithms.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Naji, M.; Firoozabadi, M.; Azadfallah, P. Emotion classification during music listening from forehead biosignals. SIViP 2013, 9, 1365–1375.
- Diener, E.; Chan, M.Y. Happy people live longer: Subjective well-being contributes to health and longevity. Appl. Psychol. Health Well Being 2011, 3, 1–43.
- Alakuş, T.B.; Türkoğlu, İ. Determination of effective EEG channels for discrimination of positive and negative emotions with wavelet decomposition and support vector machines. Int. J. Inform. Technol. 2019, 12, 229–237.
- Turnip, A.; Simbolon, A.I.; Amri, M.F.; Sihombing, P.; Setiadi, R.H.; Mulyana, E. Backpropagation neural networks training for EEG-SSVEP classification of emotion recognition. Internetw. Indones. J. 2017, 9, 53–57.
- Hu, Y.; Dong, J.; Zhou, S. Audio-textual emotion recognition based on improved neural networks. Math. Probl. Eng. 2019, 2019, 1–9.
- Lasri, I.; Solh, A.R.; Belkacemi, M.E. Facial emotion recognition of students using convolutional neural network. In Proceedings of the 3rd International Conference on Intelligent Computing in Data Sciences, Marrakech, Morocco, 29–30 October 2019.
- Garber-Barron, M.; Si, M. Using body movement and posture for emotion detection in non-acted scenarios. In Proceedings of the IEEE International Conference on Fuzzy Systems, Brisbane, QLD, Australia, 10–15 June 2012.
- Sassenrath, C.; Sassenberg, K.; Ray, D.G.; Scheiter, K.; Jarodzka, H. A motivational determinant of facial emotion recognition: Regulatory focus affects recognition of emotions in faces. PLoS ONE 2014, 9, e112383.
- Wioleta, S. Using physiological signals for emotion recognition. In Proceedings of the 6th Conference on Human System Interactions, Sopot, Poland, 6–8 June 2013.
- Yan, J.; Chen, S. A EEG-based emotion recognition model with rhythm and time characteristics. Brain Inf. 2019, 6, 7.
- Casson, A.J.; Yates, D.C.; Smith, S.J.M.; Duncan, J.S.; Rodriguez-Villegas, E. Wearable electroencephalography. IEEE Eng. Med. Biol. Mag. 2010, 29, 44–56.
- Bashivan, P.; Rish, I.; Heisig, S. Mental state recognition via wearable EEG. arXiv 2016, arXiv:1602.00985.
- Horlings, R.; Datcu, D.; Rothkrantz, L.J.M. Emotion recognition using brain activity. In Proceedings of the 9th International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing, Gabrovo, Bulgaria, 12–13 June 2008.
- Gu, S.; Wang, F.; Patel, N.P.; Bourgeois, J.A.; Huang, J.H. A model for basic emotions using observations of behavior in drosophila. Front. Psychol. 2019, 10, 1–13.
- Hania, W.M.B.; Lachiri, Z. Emotion classification in arousal-valence dimension using discrete affective keywords tagging. In Proceedings of the International Conference on Engineering & MIS, Monastir, Tunisia, 8–10 May 2017.
- Stickel, C.; Ebner, M.; Steinbach-Nordmann, S.; Searle, G.; Holzinger, A. Emotion detection: Application of the valence arousal space for rapid biological usability testing to enhance universal access. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, San Diego, CA, USA, 19–24 July 2009.
- Bradley, M.M.; Lang, P.J. International affective digitized sounds (IADS): Stimuli, instruction manual and affective ratings. In Technical Report, No: B-2; The Center for Research in Psychophysiology, University of Florida: Gainesville, FL, USA, 1999.
- Raval, D.; Sakle, M. A literature review on emotion recognition system using various facial expression. IJARIIE 2015, 1, 326–329.
- Alakuş, T.B.; Türkoğlu, İ. Database for an emotion recognition system based on EEG signals and various computer games—GAMEEMO. Biomed. Signal Process. Control 2020, 60, 101951.
- Alakuş, T.B.; Türkoğlu, İ. Emotion recognition with deep learning using GAMEEMO data set. Electron. Lett. 2020, 56, 1364–1367.
- Alex, M.; Tariq, U.; Al-Shargie, F.; Mir, H.S.; Nashash, H.A. Discrimination of genuine and acted emotional expressions using EEG signal and machine learning. IEEE Access 2020, 8, 191080–191089.
- Pandey, P.; Seeja, K.R. Subject independent emotion recognition from EEG using VMD and deep learning. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 1730–1738.
- Sharan, R.V.; Berkovsky, S.; Taib, R.; Koprinska, I.; Li, J. Detecting personality traits using inter-hemispheric asynchrony of the brainwaves. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society, Montreal, QC, Canada, 20–24 July 2020.
- Priya, T.H.; Mahalakshmi, P.; Naidu, V.P.S.; Srinivas, M. Stress detection from EEG using power ratio. In Proceedings of the International Conference on Emerging Trends in Information Technology and Engineering, Vellore, India, 24–25 February 2020.
- Matlovic, T.; Gaspar, P.; Moro, R.; Simko, J.; Bielikova, M. Emotions detection using facial expressions recognition and EEG. In Proceedings of the 11th International Workshop on Semantic and Social Media Adaptation and Personalization, Thessaloniki, Greece, 20–21 October 2016.
- Gao, Y.; Wang, X.; Potter, T.; Zhang, J.; Zhang, Y. Single-trial EEG emotion recognition using granger causality/transfer entropy analysis. J. Neurosci. Methods 2020, 346, 108904.
- Salama, E.S.; El-Khoribi, R.A.; Shoman, M.E.; Shalaby, M.A.W. A 3D-convolutional neural network framework with ensemble learning techniques for multi-modal emotion recognition. Egypt. Inf. J. 2021, 22, 167–176.
- Nguyen, T.H.; Chung, W.Y. Negative news recognition during social media news consumption using EEG. IEEE Access 2019, 7, 133227–133236.
- Xu, X.; Zhang, Y.; Tang, M.; Gu, H.; Yan, S.; Yang, J. Emotion recognition based on double tree complex wavelet transform and machine learning in internet of things. IEEE Access 2019, 7, 154114–154120.
- Colominas, M.A.; Schlotthauer, G.; Torres, M.E. Improved complete ensemble EMD: A suitable tool for biomedical signal processing. Biomed. Signal Process. Control 2014, 14, 19–29.
- Li, X.; Dong, L.; Li, B.; Lei, Y.; Xu, N. Microseismic signal denoising via empirical mode decomposition, compressed sensing, and soft-thresholding. Appl. Sci. 2020, 10, 2191.
- Boudraa, A.O.; Cexus, J.C.; Benramdane, S.; Beghdadi, A. Noise filtering using empirical mode decomposition. In Proceedings of the 9th International Symposium on Signal Processing and Its Applications, Sharjah, UAE, 12–15 February 2007.
- Molla, K.I.; Rahman, M.S.; Sumi, A.; Banik, P. Empirical mode decomposition analysis of climate changes with special reference to rainfall data. Discret. Dyn. Nat. Soc. 2006, 2006, 045348.
- Cura, O.K.; Atli, S.K.; Türe, H.S.; Akan, A. Epileptic seizure classifications using empirical mode decomposition and its derivative. Biomed. Eng. Online 2020, 19, 10.
- Nunes, J.C.; Delechelle, E. Empirical mode decomposition: Applications on signal and image processing. Adv. Adapt. Data Anal. 2009, 1, 125–175.
- Lahmiri, S.; Boukadoum, M. Biomedical image denoising using variational mode decomposition. In Proceedings of the IEEE Biomedical Circuits and Systems Conference, Lausanne, Switzerland, 22–24 October 2014.
- Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2013, 62, 531–544.
- Jiang, L.; Zhou, X.; Che, L.; Rong, S.; Wen, H. Feature extraction and reconstruction by using 2D-VMD based on carrier-free UWB radar application in human motion recognition. Sensors 2019, 19, 1962.
- Islam, M.; Ahmed, T.; Mostafa, S.S.; Yusuf, S.U.; Ahmad, M. Human emotion recognition using frequency & statistical measures of EEG signal. In Proceedings of the International Conference on Informatics, Electronics and Vision, Dhaka, Bangladesh, 17–18 May 2013.
- Li, C.; Zhang, Y.; Ren, X. Modeling hourly soil temperature using deep BiLSTM neural network. Algorithms 2020, 13, 173.
- Siami-Namini, S.; Tavakoli, N.; Namin, A.S. The performance of LSTM and BiLSTM in forecasting time series. In Proceedings of the IEEE International Conference on Big Data, Los Angeles, CA, USA, 9–12 December 2019.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015.
- Graves, A.; Fernandez, S.; Schmidhuber, J. Bidirectional LSTM networks for improved phoneme classification and recognition. In Proceedings of the 15th International Conference on Artificial Neural Networks, Warsaw, Poland, 11–15 September 2005.
- Rodriguez, J.D.; Perez, A.; Lozano, J.A. Sensitivity analysis of k-fold cross validation in prediction error estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 569–575.
- Zhang, Z. Introduction to machine learning: K-nearest neighbors. Ann. Transl. Med. 2016, 4, 218.
- Imandoust, S.B.; Bolandraftar, M. Application of k-nearest neighbor (KNN) approach for predicting economic events: Theoretical background. Int. J. Eng. Res. Appl. 2013, 3, 605–610.
- Vapnik, V.N. The Nature of Statistical Learning Theory, 2nd ed.; Springer: New York, NY, USA, 2000.
- Lihong, Z.; Ying, S.; Yushi, Z.; Cheng, Z.; Yi, Z. Face recognition based on multi-class SVM. In Proceedings of the Chinese Control and Decision Conference, Guilin, China, 17–19 June 2009.
- Ghatasheh, N. Business analytics using random forest trees for credit risk prediction: A comparison study. Int. J. Adv. Sci. Technol. 2014, 72, 19–30.
- Mustaqeem; Sajjad, M.; Kwon, S. Clustering-based speech emotion recognition by incorporating learned features and deep BiLSTM. IEEE Access 2020, 8, 79861–79875.
- Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681.
- Pasupa, K.; Sunhem, W. A comparison between shallow and deep architecture classifiers on small dataset. In Proceedings of the 8th International Conference on Information Technology and Electrical Engineering, Yogyakarta, Indonesia, 5–6 October 2016.
- Deng, L.; Yu, D. Deep Learning: Methods and Applications, 1st ed.; Now Foundations and Trends: Boston, MA, USA, 2014.
- Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864.
- Fan, J.; Upadhye, S.; Worster, A. Understanding receiver operating characteristic (ROC) curves. CJEM 2006, 8, 19–20.
- Flach, P.A.; Wu, S. Repairing concavities in ROC curves. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, 30 July–5 August 2005.
- Al-Fahoum, A.S.; Al-Frahiat, A.A. Methods of EEG signal features extraction using linear analysis in frequency and time-frequency domains. ISRN Neurosci. 2014, 2014, 730218.
- Ridouh, A.; Boutana, D.; Bourennane, S. EEG signals classification based on time frequency analysis. J. Circuits Syst. Comput. 2017, 26, 1750198.
- Aslan, M. CNN based efficient approach for emotion recognition. J. King Saud Univ. Comput. Inf. Sci. 2021, 34, 1–12.
- Kumar, A.; Kumar, A. DEEPHER: Human emotion recognition using an EEG-based deep learning network model. Eng. Proc. 2021, 10, 32.
- Huang, G.; Song, Z. Analysis of bimodal emotion recognition method based on EEG signals. In Proceedings of the 2nd International Seminar on Artificial Intelligence, Networking and Information Technology, Shanghai, China, 15–17 October 2021.
- Toraman, S.; Dursun, Ö.O. GameEmo-CapsNet: Emotion recognition from single-channel EEG signals using the 1D capsule networks. Trait. Signal 2021, 38, 1689–1698.
- Abdulrahman, A.; Baykara, M. Feature extraction approach based on statistical methods and wavelet packet decomposition for emotion recognition using EEG signals. In Proceedings of the International Conference on Innovations in Intelligent SysTems and Applications, Kocaeli, Turkey, 25–27 August 2021.
- Tuncer, T.; Doğan, Ş.; Subaşı, A. LEDPatNet19: Automated emotion recognition model based on nonlinear LED pattern feature extraction function using EEG signals. Cogn. Neurodyn. 2022, 16, 779–790.
Game Symbol | Stimulus Type | Discrete Model | Dimensional Model |
---|---|---|---|
G1 | Boring | Negative emotion | LANV (low arousal, negative valence) zone |
G2 | Calm | Positive emotion | LAPV (low arousal, positive valence) zone |
G3 | Horror | Negative emotion | HANV (high arousal, negative valence) zone |
G4 | Funny | Positive emotion | HAPV (high arousal, positive valence) zone |
Fold | kNN | SVM | RF | DeepBiLSTM |
---|---|---|---|---|
F1 | 64.58% | 51.89% | 54.23% | 69.14% |
F2 | 63.25% | 55.78% | 51.27% | 71.42% |
F3 | 60.24% | 56.87% | 56.41% | 70.83% |
F4 | 64.58% | 51.23% | 53.62% | 69.50% |
F5 | 66.47% | 56.41% | 51.24% | 72.48% |
F6 | 62.96% | 53.87% | 55.68% | 70.20% |
F7 | 64.12% | 58.97% | 58.00% | 70.00% |
F8 | 62.24% | 55.87% | 55.29% | 69.60% |
F9 | 66.39% | 55.26% | 58.74% | 71.22% |
F10 | 69.87% | 58.79% | 64.26% | 74.52% |
Mean | 64.47% | 55.49% | 55.87% | 70.89% |
Standard Deviation | 2.65% | 2.58% | 3.86% | 1.63% |
Fold | kNN | SVM | RF | DeepBiLSTM |
---|---|---|---|---|
F1 | 66.32% | 62.35% | 65.85% | 89.40% |
F2 | 61.47% | 60.47% | 72.45% | 87.14% |
F3 | 63.63% | 62.84% | 71.48% | 81.48% |
F4 | 68.97% | 69.87% | 70.32% | 87.44% |
F5 | 67.89% | 63.12% | 74.87% | 95.65% |
F6 | 66.21% | 61.28% | 68.95% | 88.51% |
F7 | 65.87% | 67.74% | 69.41% | 95.95% |
F8 | 67.84% | 67.45% | 70.32% | 86.42% |
F9 | 67.41% | 64.05% | 73.82% | 93.86% |
F10 | 69.44% | 68.44% | 67.44% | 97.44% |
Mean | 66.40% | 64.76% | 70.49% | 90.33% |
Standard Deviation | 2.43% | 3.32% | 2.78% | 5.16% |
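As a quick sanity check on the two result tables above (a sketch assuming NumPy), the summary rows can be reproduced from the fold accuracies; the reported standard deviations correspond to the sample standard deviation (ddof=1) of the ten folds.

```python
# Reproducing the Mean and Standard Deviation rows of the DeepBiLSTM columns.
import numpy as np

table1 = np.array([69.14, 71.42, 70.83, 69.50, 72.48,
                   70.20, 70.00, 69.60, 71.22, 74.52])  # DeepBiLSTM, first table
table2 = np.array([89.40, 87.14, 81.48, 87.44, 95.65,
                   88.51, 95.95, 86.42, 93.86, 97.44])  # DeepBiLSTM, second table

for name, scores in [("first table", table1), ("second table", table2)]:
    # Prints 70.89%/1.63% and 90.33%/5.16%, matching the reported rows.
    print(f"{name}: mean = {scores.mean():.2f}%, std = {scores.std(ddof=1):.2f}%")
```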
Ref. | AI Methods | Emotion Model | Feature Extraction Methods | Performance Criteria | Results (Avg.) |
---|---|---|---|---|---|
[19] | MLPNN, SVM, kNN | Discrete and dimensional | Statistical features, DWT, Hjorth features, Shannon entropy, logarithmic energy entropy, sample entropy, multi-scale entropy | Accuracy | 80% |
[20] | BiLSTM, kNN, SVM, ANN | Discrete model | Spectral entropy | Accuracy | 76.91% |
[21] | kNN, SVM, ANN | Discrete model | DWT, EMD | Accuracy | 94.3% |
[22] | DNN | Dimensional model | VMD | Accuracy | 61.88% |
[57] | kNN, NB, DT, CNN | Dimensional model | EMD, VMD, entropy, HFD (Higuchi’s Fractal Dimension) | Accuracy | 95.20% |
[58] | SVM | Discrete model | EMD, VMD | Accuracy | 90.63% |
[23] | kNN, LR, NB, SVM | Discrete model | ANOVA | Accuracy | 95.45% |
[24] | SVM, kNN | Discrete model | PSD (Power Spectral Density) | Accuracy | 99.42% |
[26] | SVM | Discrete model | HOG, GC, TE | Accuracy | 95.21% |
This study | SVM, kNN, RF, DeepBiLSTM | Discrete and dimensional model | EMD, VMD, statistical features | Accuracy | 80.81% |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).