Automatic Recognition of Multiple Emotional Classes from EEG Signals through the Use of Graph Theory and Convolutional Neural Networks
Abstract
1. Introduction
- Presenting an automatic deep model that classifies three emotion categories: positive, negative, and neutral.
- Creating a database of EEG signals based on musical stimuli.
- The proposed deep model exhibits high resistance to environmental noise.
- Achieving higher classification accuracy for positive, negative, and neutral emotions than other recent studies.
- Achieving the highest accuracy from only three EEG channels (C3, C4, and Pz), which makes the proposed model usable in real-time environments.
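Since the highlights describe combining graph theory with a convolutional network over a small set of EEG channels, a minimal sketch of one common way to form a channel graph may help: thresholding the absolute channel-by-channel correlation. This is a hypothetical construction for illustration only; the paper's actual graph definition (Section 3.3) may differ, and the `channel_graph` helper and its `threshold` parameter are assumptions, not the authors' code.

```python
import numpy as np

def channel_graph(eeg, threshold=0.5):
    """Binary adjacency matrix from channel-wise correlation.

    eeg: array of shape (n_channels, n_samples), e.g. the three
    channels C3, C4, and Pz mentioned in the highlights.
    """
    corr = np.corrcoef(eeg)                     # channel x channel correlations
    adj = (np.abs(corr) >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                  # remove self-loops
    return adj

rng = np.random.default_rng(0)
eeg = rng.standard_normal((3, 1000))            # 3 channels, 1000 samples
A = channel_graph(eeg)
```

The resulting 3 × 3 adjacency matrix is symmetric with a zero diagonal, which is what a graph-convolution layer typically consumes.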
2. Background
2.1. A Brief Overview of GANs
2.2. A Brief Overview of the Combination of Graph Theory and Deep Convolutional Networks
3. Materials and Methods
3.1. Data Gathering
3.2. Pre-Processing
3.3. Graph Design in Our Pipeline
3.4. Architecture
3.5. Training, Validation, and Test Series
4. Experimental Findings
4.1. Optimization Findings
4.2. Simulation Findings
4.3. Comparison
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Nomenclature
Acronym | Phrase |
---|---|
EEG | Electroencephalogram |
GANs | Generative Adversarial Networks |
BCI | Brain-Computer Interface |
ECG | Electrocardiogram |
EMG | Electromyogram |
GSR | Galvanic Skin Response |
ML | Machine Learning |
KNN | K-Nearest Neighbor |
LSTM | Long Short-Term Memory |
CNN | Convolutional Neural Networks |
FC | Fully Connected |
DWT | Discrete Wavelet Transform |
TR | Temporal Relative |
STFT | Short-Time Fourier Transform |
G | Generator |
D | Discriminator |
ROC | Receiver Operating Characteristic |
References
- Agung, E.S.; Rifai, A.P.; Wijayanto, T. Image-based facial emotion recognition using convolutional neural network on emognition dataset. Sci. Rep. 2024, 14, 14429. [Google Scholar] [CrossRef] [PubMed]
- Alsaadawı, H.F.T.; Daş, R. Multimodal Emotion Recognition Using Bi-LG-GCN for MELD Dataset. Balk. J. Electr. Comput. Eng. 2024, 12, 36–46. [Google Scholar] [CrossRef]
- Alslaity, A.; Orji, R. Machine learning techniques for emotion detection and sentiment analysis: Current state, challenges, and future directions. Behav. Inf. Technol. 2024, 43, 139–164. [Google Scholar] [CrossRef]
- Deshmukh, S.; Chaudhary, S.; Gayakwad, M.; Kadam, K.; More, N.S.; Bhosale, A. Advances in Facial Emotion Recognition: Deep Learning Approaches and Future Prospects. In Proceedings of the 2024 MIT Art, Design and Technology School of Computing International Conference (MITADTSoCiCon), Pune, India, 25–27 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–3. [Google Scholar]
- Farashi, S.; Bashirian, S.; Jenabi, E.; Razjouyan, K. Effectiveness of virtual reality and computerized training programs for enhancing emotion recognition in people with autism spectrum disorder: A systematic review and meta-analysis. Int. J. Dev. Disabil. 2024, 70, 110–126. [Google Scholar] [CrossRef] [PubMed]
- Geetha, A.; Mala, T.; Priyanka, D.; Uma, E. Multimodal Emotion Recognition with deep learning: Advancements, challenges, and future directions. Inf. Fusion 2024, 105, 102218. [Google Scholar]
- Rahmani, M.; Mohajelin, F.; Khaleghi, N.; Sheykhivand, S.; Danishvar, S. An Automatic Lie Detection Model Using EEG Signals Based on the Combination of Type 2 Fuzzy Sets and Deep Graph Convolutional Networks. Sensors 2024, 24, 3598. [Google Scholar] [CrossRef]
- Guo, X.; Zhang, Y.; Lu, S.; Lu, Z. Facial expression recognition: A review. Multimed. Tools Appl. 2024, 83, 23689–23735. [Google Scholar] [CrossRef]
- Hazmoune, S.; Bougamouza, F. Using transformers for multimodal emotion recognition: Taxonomies and state of the art review. Eng. Appl. Artif. Intell. 2024, 133, 108339. [Google Scholar] [CrossRef]
- Jajan, K.I.K.; Abdulazeez, E.A.M. Facial Expression Recognition Based on Deep Learning: A Review. Indones. J. Comput. Sci. 2024, 13, 183–204. [Google Scholar] [CrossRef]
- Li, J.; Washington, P. A comparison of personalized and generalized approaches to emotion recognition using consumer wearable devices: Machine learning study. JMIR AI 2024, 3, e52171. [Google Scholar] [CrossRef]
- Mumtaz, W.; Rasheed, S.; Irfan, A. Review of challenges associated with the EEG artifact removal methods. Biomed. Signal Process. Control 2021, 68, 102741. [Google Scholar] [CrossRef]
- Ahmed, M.Z.I.; Sinha, N.; Ghaderpour, E.; Phadikar, S.; Ghosh, R. A novel baseline removal paradigm for subject-independent features in emotion classification using EEG. Bioengineering 2023, 10, 54. [Google Scholar] [CrossRef] [PubMed]
- Sheykhivand, S.; Mousavi, Z.; Rezaii, T.Y.; Farzamnia, A. Recognizing emotions evoked by music using CNN-LSTM networks on EEG signals. IEEE Access 2020, 8, 139332–139345. [Google Scholar] [CrossRef]
- Baradaran, F.; Farzan, A.; Danishvar, S.; Sheykhivand, S. Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals. Electronics 2023, 12, 2232. [Google Scholar] [CrossRef]
- Baradaran, F.; Farzan, A.; Danishvar, S.; Sheykhivand, S. Automatic Emotion Recognition from EEG Signals Using a Combination of Type-2 Fuzzy and Deep Convolutional Networks. Electronics 2023, 12, 2216. [Google Scholar] [CrossRef]
- Yang, L.; Wang, Y.; Yang, X.; Zheng, C. Stochastic weight averaging enhanced temporal convolution network for EEG-based emotion recognition. Biomed. Signal Process. Control 2023, 83, 104661. [Google Scholar] [CrossRef]
- Hussain, M.; AboAlSamh, H.A.; Ullah, I. Emotion recognition system based on two-level ensemble of deep-convolutional neural network models. IEEE Access 2023, 11, 16875–16895. [Google Scholar] [CrossRef]
- Khubani, J.; Kulkarni, S. Inventive deep convolutional neural network classifier for emotion identification in accordance with EEG signals. Soc. Netw. Anal. Min. 2023, 13, 34. [Google Scholar] [CrossRef]
- Peng, G.; Zhao, K.; Zhang, H.; Xu, D.; Kong, X. Temporal relative transformer encoding cooperating with channel attention for EEG emotion analysis. Comput. Biol. Med. 2023, 154, 106537. [Google Scholar] [CrossRef]
- Xu, J.; Qian, W.; Hu, L.; Liao, G.; Tian, Y. EEG decoding for musical emotion with functional connectivity features. Biomed. Signal Process. Control 2024, 89, 105744. [Google Scholar] [CrossRef]
- Alotaibi, F.M. An AI-inspired spatio-temporal neural network for EEG-based emotional status. Sensors 2023, 23, 498. [Google Scholar] [CrossRef] [PubMed]
- Qiao, Y.; Mu, J.; Xie, J.; Hu, B.; Liu, G. Music emotion recognition based on temporal convolutional attention network using EEG. Front. Hum. Neurosci. 2024, 18, 1324897. [Google Scholar] [CrossRef] [PubMed]
- Li, C.-L.; Chang, W.-C.; Cheng, Y.; Yang, Y.; Póczos, B. Mmd gan: Towards deeper understanding of moment matching network. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; Volume 30. [Google Scholar]
- Javidialsaadi, A.; Mondal, S.; Subramanian, S. Model checks for two-sample location-scale. J. Nonparametric Stat. 2023, 36, 749–779. [Google Scholar] [CrossRef]
- Arjmandi, H.; Zhao, X. Social Media Impact on FEMA Funding Programs. In Proceedings of the AMCIS 2024, Salt Lake City, UT, USA, 15–17 August 2024. [Google Scholar]
- Atashpanjeh, H.; Behfar, A.; Haverkamp, C.; Verdoes, M.M.; Al-Ameen, M.N. Intermediate help with using digital devices and online accounts: Understanding the needs, expectations, and vulnerabilities of young adults. In Conference on Human-Computer Interaction; Springer: Cham, Switzerland, 2022; pp. 3–15. [Google Scholar]
- Behfar, A.; Atashpanjeh, H.; Al-Ameen, M.N. Can Password Meter Be More Effective towards User Attention, Engagement, and Attachment? A Study of Metaphor-Based Designs. In Proceedings of the Companion Publication of the 2023 Conference on Computer Supported Cooperative Work and Social Computing, Minneapolis, MN, USA, 14–18 October 2023; pp. 164–171. [Google Scholar]
- Karimzadeh, M.; Basvoju, D.; Vakanski, A.; Charit, I.; Xu, F.; Zhang, X. Machine Learning for Additive Manufacturing of Functionally Graded Materials. Materials 2024, 17, 3673. [Google Scholar] [CrossRef]
- Shabani, S.; Majkut, M.; Dykas, S.; Smołka, K.; Lakzian, E. An investigation comparing various numerical approaches for simulating the behaviour of condensing flows in steam nozzles and turbine cascades. Eng. Anal. Bound. Elem. 2024, 158, 364–374. [Google Scholar] [CrossRef]
- Hosseini, A.; Yahouni, Z.; Feizabadi, M. Scheduling AIV transporter using simulation-based supervised learning: A case study on a dynamic job-shop with three workstations. IFAC-PapersOnLine 2023, 56, 8591–8597. [Google Scholar] [CrossRef]
- Chen, D.; Hosseini, A.; Smith, A.; Nikkhah, A.F.; Heydarian, A.; Shoghli, O.; Campbell, B. Performance Evaluation of Real-Time Object Detection for Electric Scooters. arXiv 2024, arXiv:2405.03039. [Google Scholar]
- Kiani, S.; Salmanpour, A.; Hamzeh, M.; Kebriaei, H. Learning Robust Model Predictive Control for Voltage Control of Islanded Microgrid. IEEE Trans. Autom. Sci. Eng. 2024, 10, 10–15. [Google Scholar] [CrossRef]
- Zarean Dowlat Abadi, J.; Iraj, M.; Bagheri, E.; RabieiPakdeh, Z.; Dehghani Tafti, M.R. A Multiobjective Multiproduct Mathematical Modeling for Green Supply Chain considering Location-Routing Decisions. Math. Probl. Eng. 2022, 2022, 7009338. [Google Scholar] [CrossRef]
- Abdi Chooplou, C.; Kahrizi, E.; Fathi, A.; Ghodsian, M.; Latifi, M. Baffle-Enhanced Scour Mitigation in Rectangular and Trapezoidal Piano Key Weirs: An Experimental and Machine Learning Investigation. Water 2024, 16, 2133. [Google Scholar] [CrossRef]
- Ahmadirad, Z. Evaluating the influence of AI on market values in finance: Distinguishing between authentic growth and speculative hype. Int. J. Adv. Res. Humanit. Law 2024, 1, 50–57. [Google Scholar] [CrossRef]
- Mahdavimanshadi, M.; Anaraki, M.G.; Mowlai, M.; Ahmadirad, Z. A Multistage Stochastic Optimization Model for Resilient Pharmaceutical Supply Chain in COVID-19 Pandemic Based on Patient Group Priority. In Proceedings of the 2024 Systems and Information Engineering Design Symposium (SIEDS), Charlottesville, VA, USA, 3 May 2024; pp. 382–387. [Google Scholar]
- Yousefzadeh, M.; Hasanpour, M.; Zolghadri, M.; Salimi, F.; Yektaeian Vaziri, A.; Mahmoudi Aqeel Abadi, A.; Jafari, R.; Esfahanian, P.; Nazem-Zadeh, M.-R. Deep learning framework for prediction of infection severity of COVID-19. Front. Med. 2022, 9, 940960. [Google Scholar] [CrossRef] [PubMed]
- EskandariNasab, M.; Raeisi, Z.; Lashaki, R.A.; Najafi, H. A GRU–CNN model for auditory attention detection using microstate and recurrence quantification analysis. Sci. Rep. 2024, 14, 8861. [Google Scholar] [CrossRef] [PubMed]
- Zhang, S.; Tong, H.; Xu, J.; Maciejewski, R. Graph convolutional networks: A comprehensive review. Comput. Soc. Netw. 2019, 6, 11. [Google Scholar] [CrossRef]
- Habibi, A.; Damasio, A. Music, feelings, and the human brain. Psychomusicology Music. Mind Brain 2014, 24, 92. [Google Scholar] [CrossRef]
- Seifi, N.; Al-Mamun, A. Optimizing Memory Access Efficiency in CUDA Kernel via Data Layout Technique. J. Comput. Commun. 2024, 12, 124–139. [Google Scholar] [CrossRef]
- Wang, H.; Hu, D. Comparison of SVM and LS-SVM for regression. In Proceedings of the 2005 International Conference on Neural Networks and Brain, Beijing, China, 13–15 October 2005; pp. 279–283. [Google Scholar]
- Taud, H.; Mas, J.-F. Multilayer perceptron (MLP). Geomat. Approaches Model. Land Change Scenar. 2018, 451–455. [Google Scholar]
- Ukey, N.; Yang, Z.; Li, B.; Zhang, G.; Hu, Y.; Zhang, W. Survey on exact knn queries over high-dimensional data space. Sensors 2023, 23, 629. [Google Scholar] [CrossRef]
- Chua, L.O. CNN: A vision of complexity. Int. J. Bifurc. Chaos 1997, 7, 2219–2425. [Google Scholar] [CrossRef]
- Vaziri, A.Y.; Makkiabadi, B.; Samadzadehaghdam, N. EEGg: Generating Synthetic EEG Signals in Matlab Environment. Front. Biomed. Technol. 2023, 10, 370–381. [Google Scholar]
Emotion | Music Played |
---|---|
Negative I | Pishdaramad Esfehani |
Positive I | Azari 6 and 8 |
Negative II | Pishdaramad Homayoun |
Positive II | Azari 6 and 8 |
Positive III | Bandari 6 and 8 |
Negative III | Afshari |
Negative IV | Pishdaramad Esfehani |
Positive IV | Persian 6 and 8 |
Negative V | Pishdaramad Dashti |
Positive V | Bandari 6 and 8 |
Layers | Weight Tensor | Bias | Parameters |
---|---|---|---|
GConv I | (x1, 125,000/K, 125,000/K) | 125,000/K | (15,625,000,000/K²) × x1 + (125,000/K) |
GConv II | (x2, 125,000/K, 62,500/K) | 62,500/K | (7,812,500,000/K²) × x2 + round(62,500/K) |
GConv III | (x3, 62,000/K, round(31,250/K)) | 30,000/K | (1,860,000,000/K²) × x3 + (30,000/K) |
GConv IV | (x4, round(31,250/K), round(15,625/K)) | round(15,625/K) | (488,281,250/K²) × x4 + round(15,625/K) |
Flattening Layer | (round(31,250/K), 2) | 2 | round(62,500/K) + 2 |
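The per-layer parameter counts in the table follow the usual rule for a layer with weight tensor (filters, n_in, n_out) plus one bias vector: filters × n_in × n_out + bias size. A short sketch, using an illustrative K = 5 and x1 = 8 filters (placeholder values, not the paper's configuration):

```python
def gconv_param_count(n_filters, n_in, n_out, n_bias):
    # weight tensor (n_filters, n_in, n_out) plus one bias vector
    return n_filters * n_in * n_out + n_bias

# GConv I from the table: inputs and outputs are both 125,000/K wide,
# and the bias has the same length.
K, x1 = 5, 8
n = 125_000 // K                     # 25,000
total = gconv_param_count(x1, n, n, n)
print(total)  # 5_000_025_000
```

This matches the tabulated formula (15,625,000,000/K²) × x1 + (125,000/K), since 125,000² = 15,625,000,000.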
Parameters | Values | Optimal Value |
---|---|---|
Batch Size in GAN | 4, 6, 8, 10, 12 | 8 |
Optimizer in GAN | Adam, SGD, Adamax | Adamax |
Number of Conv in GAN | 3, 4, 5, 6 | 6 |
Learning Rate in GAN | 0.1, 0.01, 0.001, 0.0001 | 0.001 |
Number of GConv | 2, 3, 4, 5, 6, 7 | 4 |
Batch Size in DFCGN | 8, 16, 32 | 16 |
Activation function | ReLU, Leaky-ReLU, TF-2 | Leaky-ReLU |
Learning Rate in DFCGN | 0.1, 0.01, 0.001, 0.0001, 0.00001 | 0.001 |
Dropout Rate | 0.1, 0.2, 0.3 | 0.3 |
Weight of optimizer | | |
Error function | MSE, Cross Entropy | Cross Entropy |
Optimizer in DFCGN | Adam, SGD, Adadelta, Adamax | Adam |
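Searching a hyperparameter table like this one amounts to enumerating the Cartesian product of the candidate values. A minimal sketch, assuming an exhaustive grid over part of the DFCGN search space (the `grid` keys below mirror the table; the enumeration itself is illustrative, not the authors' optimization procedure):

```python
from itertools import product

# Hypothetical grid mirroring four rows of the DFCGN search space above.
grid = {
    "batch_size": [8, 16, 32],
    "learning_rate": [0.1, 0.01, 0.001, 0.0001, 0.00001],
    "dropout": [0.1, 0.2, 0.3],
    "optimizer": ["Adam", "SGD", "Adadelta", "Adamax"],
}

def candidates(grid):
    """Yield every hyperparameter combination as a dict."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(candidates(grid))
print(len(configs))  # 3 * 5 * 3 * 4 = 180
```

Each candidate dict would then be scored on the validation series, and the best-scoring one reported as the "Optimal Value" column.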
Measurement Index | 2-Class | 3-Class |
---|---|---|
Accuracy | 99.1 | 98.2 |
Sensitivity | 98.4 | 97.2 |
Precision | 99.4 | 97.8 |
Specificity | 97.8 | 96.3 |
Kappa coefficient | 0.8 | 0.9 |
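The four rate-based indices in the table have standard confusion-matrix definitions. A small sketch, using illustrative counts (the `tp`/`fp`/`tn`/`fn` values below are made up for demonstration and are not taken from the paper):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard definitions of the indices reported in the table."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall)
        "precision": tp / (tp + fp),     # positive predictive value
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Illustrative confusion-matrix counts (not from the paper):
m = binary_metrics(tp=98, fp=1, tn=97, fn=2)
```

For multi-class results such as the 3-class column, these indices are typically computed per class and then averaged.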
Repetitions | First | Second | Third | Fourth | Fifth | Sixth | Seventh | Eighth | Ninth | Tenth |
---|---|---|---|---|---|---|---|---|---|---|
ACC (%) | 91 | 94 | 92 | 97 | 90 | 93 | 94 | 94 | 95 | 94 |
Repetitions | Eleventh | Twelfth | Thirteenth | Fourteenth | Fifteenth | Sixteenth | Seventeenth | Eighteenth | Nineteenth | Twentieth |
ACC (%) | 93 | 97 | 90 | 94 | 92 | 93 | 98 | 91 | 93 | 93 |
Average Accuracy (%) | 93.4 |
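The reported average is the plain mean of the twenty repetition accuracies, which can be checked directly:

```python
# Accuracies from the 20 repetitions tabulated above.
acc = [91, 94, 92, 97, 90, 93, 94, 94, 95, 94,
       93, 97, 90, 94, 92, 93, 98, 91, 93, 93]
average = sum(acc) / len(acc)
print(average)  # 93.4
```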
Research | Datasets | Algorithms | ACC (%) |
---|---|---|---|
Sheykhivand et al. [14] | Private | CNN + LSTM | 97 |
Baradaran et al. [15] | Private | DCNN | 98 |
Baradaran et al. [16] | Private | Type 2 Fuzzy + CNN | 98 |
Yang et al. [17] | Deap, Seed | SITCN | 95 |
Hussain et al. [18] | Deap, Seed | LP-1D-CNN | 98.43 |
Khubani et al. [19] | Private | DCNN | 97.12 |
Peng et al. [20] | Deap, Seed | Temporal Relative (TR) Encoding | 95.58 |
Xu et al. [21] | Private | Functional Connectivity Features | 97 |
Alotaibi et al. [22] | Deap, Seed | GoogLeNet DNN | 96.95 |
Qiao et al. [23] | Private | CNN-SA-BiLSTM | 96.43 |
Our Model | Private | Graph Theory + CNN | 99.2 (2-class), 98.3 (3-class) |
Method | Feature Learning (ACC) | Handcrafted Features (ACC) |
---|---|---|
KNN | 72% | 78% |
SVM | 74% | 89% |
CNN | 93% | 73% |
MLP | 78% | 90% |
Proposed Model | 99% | 78% |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Mohajelin, F.; Sheykhivand, S.; Shabani, A.; Danishvar, M.; Danishvar, S.; Lahijan, L.Z. Automatic Recognition of Multiple Emotional Classes from EEG Signals through the Use of Graph Theory and Convolutional Neural Networks. Sensors 2024, 24, 5883. https://doi.org/10.3390/s24185883