Lightweight Building of an Electroencephalogram-Based Emotion Detection System
Abstract
1. Introduction
- Signal acquisition: This step measures brain signals using a particular type of sensor device. Several noninvasive neuroimaging methods have been employed in BCI studies, including electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS), and positron emission tomography (PET). All of these methods have been reported as accepted and safe imaging procedures used in research facilities and hospitals, in both clinical and nonclinical contexts. However, EEG is used most often in BCI research because the process is noninvasive and poses minimal risk to the research subject. Moreover, the usability, reliability, and cost-effectiveness of EEG devices, together with the portability that makes studies easier to conduct and participants easier to recruit, have driven the increased adoption of this method in applied research. These advantages, however, are accompanied by challenges such as low spatial resolution and a signal-to-noise ratio that is difficult to manage.
- Signal processing and translation: This BCI process includes the following steps:
- Signal preprocessing: This step deals with the filtering of acquired signals and removal of noise. Basically, signals are amplified, filtered, digitized, and transmitted to a computer.
- Feature extraction/selection: This step deals with the analysis of digital signals to discriminate relevant signal characteristics. The signals are then represented in a compact form suitable for translation into output commands through selecting a subset and reducing the dimensionality of features.
- Feature classification: The subsequent signal features are fed into the feature translation algorithm, which translates the features into a control signal for the output device or into commands that accomplish the user’s intent.
- Application/feedback: The control signal from the signal processing and translation stage causes changes in the environment, the device, or the feedback mechanism of the EEG system. Many EEG-based BCI systems have been developed for different applications, all sharing the same goal of translating users’ intent into actions without using peripheral nerve impulses and muscles.
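The preprocessing and feature-extraction steps above can be illustrated with a minimal sketch in Python with NumPy and SciPy. The band limits, sampling rate, and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 128  # sampling rate in Hz (an assumption; DEAP's preprocessed data uses 128 Hz)

def bandpass(eeg, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter for one EEG channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)

def band_power(eeg, band, fs=FS):
    """Average spectral power of `eeg` inside the frequency `band` (Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# One synthetic 10 s channel reduced to a compact 4-value feature vector
# (theta, alpha, beta, gamma band powers)
rng = np.random.default_rng(0)
channel = rng.standard_normal(FS * 10)
filtered = bandpass(channel, 4.0, 45.0)          # filtering / noise removal
features = [band_power(filtered, b)              # compact representation
            for b in [(4, 8), (8, 13), (13, 30), (30, 45)]]
```

The resulting low-dimensional feature vector is the kind of compact representation that a feature classification stage would then translate into output commands.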
2. Background
2.1. EEG Correlates of Emotion
2.2. EEG-Based Emotion Detection Applications
2.3. NeuCube-Based Spiking Neural Networks
1. Data encoding, in which EEG samples are converted into spike trains using encoding methods such as threshold-based representation (TBR), which generates a spike whenever the change in the EEG data exceeds a specific threshold.
2. SNN cube initialization, in which each neuron in the SNN cube is connected to its nearby neurons within a specific distance threshold.
3. Unsupervised training of the cube, in which EEG data in the form of spike trains are used to modify the initially set connection weights, so that the same groups of spiking neurons in the SNN cube are activated when similar input stimuli are presented.
4. Classification, in which the same dataset used for the unsupervised training phase is propagated through the SNN cube to train the neurons of the output-layer classifier. For each sample, a new neuron is generated in the classifier, and the state of the SNN cube is measured to establish a connection between this new neuron and a specific group of spiking neurons in the cube. The weight of this connection adapts according to the dataset used, linking each sample (represented as a neuron in the output layer) to a specific emotional stimulus (emotion class), represented as a specific group of spiking neurons in the SNN cube. NeuCube parameters, such as the SNN learning rate, the encoding-method threshold, and the spike firing rate, can affect the classification accuracy; therefore, an optimization module is proposed in this architecture to obtain values of some of these parameters that could lead to enhanced accuracy.
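The TBR encoding in step 1 can be sketched in a few lines of Python/NumPy. The function name and example signal are hypothetical; this is a simplified illustration of the idea, not NeuCube's exact encoder:

```python
import numpy as np

def tbr_encode(signal, threshold=0.5):
    """Threshold-based representation (TBR): emit a spike whenever the
    change between consecutive samples exceeds `threshold`.
    Returns +1 (positive spike), -1 (negative spike), or 0 per step."""
    diff = np.diff(signal)
    spikes = np.zeros_like(diff, dtype=int)
    spikes[diff > threshold] = 1
    spikes[diff < -threshold] = -1
    return spikes

signal = np.array([0.0, 0.2, 0.9, 0.8, 0.1, 0.15])
spikes = tbr_encode(signal, threshold=0.5)
# diffs are 0.2, 0.7, -0.1, -0.7, 0.05 -> spikes [0, 1, 0, -1, 0]
```

Only the large upward and downward changes cross the 0.5 threshold, so the continuous signal is reduced to a sparse bipolar spike train.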
3. Related Works
4. Proposed System
4.1. Dataset
4.2. NeuCube-Based SNN Classifier
- Data were encoded into spikes using the threshold-based representation (TBR) method with a threshold of 0.5.
- The SNN cube was initialized with 1000 spiking neurons and a small-world radius of 2.5, which defined the distance constraining the number of each neuron’s initial connections to its neighboring neurons.
- Unsupervised training of the cube was performed with a neuron firing threshold of 0.5 and an unsupervised STDP learning rate of 0.01.
- The output-layer classifier was trained with a modulation factor of 0.8 and a drift value of 0.005. The architecture of the SNN cube trained using 60 samples is shown in Figure 6a, with the output layer colored by true labels in Figure 6b and by predicted labels in Figure 6c.
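The small-world initialization described above can be sketched as follows. This is a simplified illustration, not NeuCube's exact rule: the 10 × 10 × 10 grid layout and the initial weight range are assumptions:

```python
import numpy as np

def init_small_world(coords, radius=2.5, rng=None):
    """Connect each neuron to every neighbour within `radius` (Euclidean
    distance), assigning small random initial weights. A sketch of the
    small-world initialization step, not NeuCube's exact procedure."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(coords)
    # Pairwise Euclidean distances between all neuron positions
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Connect only distinct neurons within the small-world radius
    mask = (dist > 0) & (dist <= radius)
    weights = np.where(mask, rng.uniform(-0.1, 0.1, size=(n, n)), 0.0)
    return weights

# A 10 x 10 x 10 grid gives the 1000 spiking neurons in the configuration above
grid = np.array([[x, y, z]
                 for x in range(10) for y in range(10) for z in range(10)],
                dtype=float)
w = init_small_world(grid, radius=2.5)
```

With radius 2.5 each neuron connects only to its local neighbourhood, which is what keeps the cube sparsely, locally connected before unsupervised STDP training adjusts the weights.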
5. Results and Discussion
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279.
- Shih, J.J.; Krusienski, D.J.; Wolpaw, J.R. Brain-computer interfaces in medicine. Mayo Clin. Proc. 2012, 87, 268–279.
- Al-Nafjan, A.; Hosny, M.; Al-Ohali, Y.; Al-Wabil, A. Review and classification of emotion recognition based on EEG brain-computer interface system research: A systematic review. Appl. Sci. 2017, 7, 1239.
- Tavanaei, A.; Ghodrati, M.; Kheradpisheh, S.R.; Masquelier, T.; Maida, A. Deep learning in spiking neural networks. Neural Netw. 2019, 111, 47–63.
- Kasabov, N.; Capecci, E. Spiking neural network methodology for modelling, classification and understanding of EEG spatio-temporal data measuring cognitive processes. Inf. Sci. 2015, 294, 565–575.
- Kasabov, N. NeuCube EvoSpike architecture for spatio-temporal modelling and pattern recognition of brain signals. In Computer Vision; Springer: Trento, Italy, 2012; Volume 7477, pp. 225–243.
- Kasabov, N. NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Netw. 2014, 52, 62–76.
- Doborjeh, Z.G.; Doborjeh, M.G.; Kasabov, N. Attentional bias pattern recognition in spiking neural networks from spatio-temporal EEG data. Cogn. Comput. 2017, 10, 35–48.
- Peter, C.; Urban, B. Emotion in human-computer interaction. In Expanding the Frontiers of Visual Analytics and Visualization; Springer: London, UK, 2012; pp. 239–262.
- Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation. Sci. World J. 2014, 2014, 1–10.
- Pan, J.; Xie, Q.; Huang, H.; He, Y.; Sun, Y.; Yu, R.; Li, Y. Emotion-related consciousness detection in patients with disorders of consciousness through an EEG-based BCI system. Front. Hum. Neurosci. 2018, 12, 12.
- Gica, S.; Poyraz, B.C.; Gulec, H. Are emotion recognition deficits in patients with schizophrenia states or traits? A 6-month follow-up study. Indian J. Psychiatry 2019, 61, 45–52.
- Iyer, K.K.; Au, T.R.; Angwin, A.J.; Copland, D.A.; Dissanayaka, N.N. Source activity during emotion processing and its relationship to cognitive impairment in Parkinson’s disease. J. Affect. Disord. 2019, 253, 327–335.
- Charpentier, J.; Kovarski, K.; Houy-Durand, E.; Malvy, J.; Saby, A.; Bonnet-Brilhault, F.; Latinus, M.; Gomot, M. Emotional prosodic change detection in autism spectrum disorder: An electrophysiological investigation in children and adults. J. Neurodev. Disord. 2018, 10, 28.
- Malaia, E.; Cockerham, D.; Rublein, K. Visual integration of fear and anger emotional cues by children on the autism spectrum and neurotypical peers: An EEG study. Neuropsychologia 2019, 126, 138–146.
- Alakus, T.B.; Gonen, M.; Turkoglu, I. Database for an emotion recognition system based on EEG signals and various computer games—GAMEEMO. Biomed. Signal Process. Control 2020, 60, 101951.
- Stavroulia, K.E.; Christofi, M.; Baka, E.; Michael-Grigoriou, D.; Magnenat-Thalmann, N.; Lanitis, A. Assessing the emotional impact of virtual reality-based teacher training. Int. J. Inf. Learn. Technol. 2019, 36, 192–217.
- Feradov, F.; Mporas, I.; Ganchev, T. Evaluation of features in detection of dislike responses to audio–visual stimuli from EEG signals. Computers 2020, 9, 33.
- Proverbio, A.M.; Camporeale, E.; Brusa, A. Multimodal recognition of emotions in music and facial expressions. Front. Hum. Neurosci. 2020, 14, 14.
- Hsu, J.-L.; Zhen, Y.-L.; Lin, T.-C.; Chiu, Y.-S. Affective content analysis of music emotion through EEG. Multimedia Syst. 2017, 24, 195–210.
- Gupta, R.; Abadi, M.K.; Cabré, J.A.C.; Morreale, F.; Falk, T.H.; Sebe, N. A quality adaptive multimodal affect recognition system for user-centric multimedia indexing. In Proceedings of the 2016 ACM International Conference on Multimedia Retrieval (ICMR ’16); ACM: New York, NY, USA, 2016; pp. 317–320.
- Aldayel, M.; Ykhlef, M.; Al-Nafjan, A. Deep learning for EEG-based preference classification in neuromarketing. Appl. Sci. 2020, 10, 1525.
- Wei, Z.; Wu, C.; Wang, X.; Supratak, A.; Wang, P.; Guo, Y. Using support vector machine on EEG for advertisement impact assessment. Front. Neurosci. 2018, 12, 76.
- Lamti, H.A.; Ben Khelifa, M.M.; Alimi, A.M.; Gorce, P. Emotion detection for wheelchair navigation enhancement. Robotica 2014, 34, 1209–1226.
- López-Hernández, J.L.; González-Carrasco, I.; López-Cuadrado, J.L.; Ruiz, B. Towards the recognition of the emotions of people with visual disabilities through brain-computer interfaces. Sensors 2019, 19, 2620.
- Khosrowabadi, R. Stress and perception of emotional stimuli: Long-term stress rewiring the brain. Basic Clin. Neurosci. 2018, 9, 107–120.
- Mehmood, R.M.; Lee, H.J. Towards building a computer aided education system for special students using wearable sensor technologies. Sensors 2017, 17, 317.
- Pavlidis, N.G.; Tasoulis, D.; Plagianakos, V.; Nikiforidis, G.; Vrahatis, M. Spiking neural network training using evolutionary algorithms. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks; IEEE: Montreal, QC, Canada, 2005; Volume 4, pp. 2190–2194.
- Tan, C.; Šarlija, M.; Kasabov, N. Spiking neural networks: Background, recent development and the NeuCube architecture. Neural Process. Lett. 2020, 1–27.
- Taherkhani, A.; Belatreche, A.; Li, Y.; Cosma, G.; Maguire, L.P.; McGinnity, T. A review of learning in biologically plausible spiking neural networks. Neural Netw. 2020, 122, 253–272.
- Luo, Y.; Fu, Q.; Xie, J.; Qin, Y.; Wu, G.; Liu, J.; Jiang, F.; Cao, Y.; Ding, X. EEG-based emotion classification using spiking neural networks. IEEE Access 2020, 8, 46007–46016.
- Roy, Y.; Banville, H.; Albuquerque, I.; Gramfort, A.; Falk, T.H.; Faubert, J. Deep learning-based electroencephalography analysis: A systematic review. J. Neural Eng. 2019, 16, 051001.
- Lin, Y.-P.; Wang, C.-H.; Jung, T.-P.; Wu, T.-L.; Jeng, S.-K.; Duann, J.-R.; Chen, J.-H. EEG-based emotion recognition in music listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806.
- Mohammadi, Z.; Frounchi, J.; Amiri, M. Wavelet-based emotion recognition system using EEG signal. Neural Comput. Appl. 2016, 28, 1985–1990.
- Zhang, Y.; Ji, X.; Zhang, S. An approach to EEG-based emotion recognition using combined feature extraction method. Neurosci. Lett. 2016, 633, 152–157.
- Al-Nafjan, A.; Hosny, M.; Al-Wabil, A.; Al-Ohali, Y. Classification of human emotions from electroencephalogram (EEG) signal using deep neural network. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 419–425.
- Chen, J.X.; Zhang, P.W.; Mao, Z.J.; Huang, Y.F.; Jiang, D.M.; Zhang, Y.N. Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks. IEEE Access 2019, 7, 44317–44328.
- Chen, J.; Hu, B.; Moore, P.; Zhang, X.; Ma, X. Electroencephalogram-based emotion assessment system using ontology and data mining techniques. Appl. Soft Comput. 2015, 30, 663–674.
- Li, X.; Yan, J.-Z.; Chen, J.-H. Channel division based multiple classifiers fusion for emotion recognition using EEG signals. ITM Web Conf. 2017, 11, 07006.
- Goel, P.; Liu, H.; Brown, D.; Datta, A. On the use of spiking neural network for EEG classification. Int. J. Knowl.-Based Intell. Eng. Syst. 2008, 12, 295–304.
- Salazar-Varas, R.; Vazquez, R.A. Evaluating spiking neural models in the classification of motor imagery EEG signals using short calibration sessions. Appl. Soft Comput. 2018, 67, 232–244.
- Doborjeh, M.G.; Wang, G.Y.; Kasabov, N.K.; Kydd, R.; Russell, B. A spiking neural network methodology and system for learning and comparative analysis of EEG data from healthy versus addiction treated versus addiction not treated subjects. IEEE Trans. Biomed. Eng. 2015, 63, 1830–1841.
- Doborjeh, Z.; Doborjeh, M.; Taylor, T.; Kasabov, N.; Wang, G.Y.; Siegert, R.; Sumich, A. Spiking neural network modelling approach reveals how mindfulness training rewires the brain. Sci. Rep. 2019, 9, 1–15.
- Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.-S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31.
| Ref. | Dataset | No. of Feature Extraction Methods | Classifier | Accuracy |
|---|---|---|---|---|
| [33] | Own dataset | 4 | Multilayer perceptron (MLP); support vector machine (SVM) | 82.29% |
| [10] | DEAP dataset | 2 | Deep learning network (DLN)-100; DLN-50; DLN-50 + PCA; DLN-50 + PCA + covariate shift adaptation (CSA) | 53.42% (valence), 52.05% (arousal) |
| [34] | DEAP dataset | 2 | k-nearest neighbors (KNN) | 86.75% (valence), 84.05% (arousal) |
| [35] | DEAP dataset | 1 | SVM | 94.98% |
| [36] | DEAP dataset | 2 | Deep neural network (DNN); random forest (RF) | 82.0% (DNN), 48.5% (RF) |
| [37] | DEAP dataset | 1 | Deep convolutional neural network (CNN) | 88.76% (valence), 85.75% (arousal) |
| Dataset/Dimension | Participants | Samples per Participant (Trial × Channel × Data) | Labels |
|---|---|---|---|
| Original DEAP | 32 | 40 × 32 × 8064 | 1280 × 4 |
| 60-samples-Exp1 | 6 | 40 × 32 × 8064 | 60 × 2 |
| 40-samples-Exp2 | 4 | 40 × 32 × 8064 | 40 × 2 |
| 60-samples-Exp3 | 6 | 40 × 32 × 8064 | 60 × 2 |
| 40-samples-Exp4 | 4 | 40 × 32 × 8064 | 40 × 2 |
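To illustrate how reduced label matrices such as 60 × 2 can be derived from DEAP-style ratings, here is a hedged sketch. The 1-9 rating scale and the four rating dimensions (valence, arousal, dominance, liking) are DEAP's, but the binarization threshold of 5 and the 10-trials-per-participant split are assumptions for illustration, not the paper's exact protocol:

```python
import numpy as np

rng = np.random.default_rng(42)

def binarize_va(labels, threshold=5.0):
    """Keep only the valence and arousal ratings (first two columns of a
    DEAP-style label matrix, rated 1-9) and binarize them: high = 1, low = 0.
    The threshold of 5.0 is a common convention, assumed here."""
    return (labels[:, :2] >= threshold).astype(int)

# Hypothetical stand-in for 6 participants x 10 trials of DEAP-style
# ratings (valence, arousal, dominance, liking) -> a 60 x 2 label matrix
per_participant = [rng.uniform(1, 9, size=(10, 4)) for _ in range(6)]
labels_60x2 = np.vstack([binarize_va(p) for p in per_participant])
```

Stacking 10 binarized trials from each of 6 participants yields a 60 × 2 matrix of binary valence/arousal classes, matching the shape of the reduced label sets in the table above.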
| Sample Size/Emotion Dimension | Valence | Arousal |
|---|---|---|
| 60-samples-Exp1 | 66.67% | 69.23% |
| 40-samples-Exp2 | 55.56% | 66.67% |
| 60-samples-Exp3 | 84.62% | 61.54% |
| 40-samples-Exp4 | 66.67% | 55.56% |
| Study | Subjects (Trial × Channel × Data) | No. of Feature Extraction Methods | Valence Accuracy | Arousal Accuracy |
|---|---|---|---|---|
| Jirayucharoensak et al. [10] | 32 (40 × 32 × 8064) | 2 | 53.42% | 52.05% |
| Luo et al. [31] | 32 (40 × 32 × 8064) | 1 | 78% | 74% |
| Mohammadi et al. [34] | 32 (40 × 10 × 8064) | 2 | 86.75% | 84.05% |
| Chen et al. [37] | 32 (40 × 32 × 128) | 5 | 88.76% | 85.75% |
| Proposed method (60-Exp1) | 6 (10 × 4 × 1152) | 0 | 66.67% | 69.67% |
| Proposed method (40-Exp2) | 4 (10 × 4 × 1152) | 0 | 55.56% | 66.67% |
| Proposed method (60-Exp3) | 6 (10 × 4 × 1152) | 0 | 84.62% | 61.54% |
| Proposed method (40-Exp4) | 4 (10 × 4 × 1152) | 0 | 66.67% | 55.56% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Al-Nafjan, A.; Alharthi, K.; Kurdi, H. Lightweight Building of an Electroencephalogram-Based Emotion Detection System. Brain Sci. 2020, 10, 781. https://doi.org/10.3390/brainsci10110781