Affective State Assistant for Helping Users with Cognition Disabilities Using Neural Networks
Abstract
1. Introduction
2. Materials and Methods
2.1. Physiological Signals
2.1.1. Brain’s Electrical Activity
2.1.2. Heart’s Electrical Activity
2.1.3. Muscles’ Electrical Activity
2.1.4. Dermal Electrical Activity
2.2. Dataset
2.2.1. ASCERTAIN
2.2.2. DREAMER
2.2.3. Summary
- Sensors: only EEG, ECG, and GSR data are used. The facial EMG from ASCERTAIN is discarded and, since DREAMER does not provide GSR data, only the GSR information from ASCERTAIN is used.
- Data frequency: 32 Hz for EEG, 128 Hz for GSR, and 256 Hz for ECG. These are the lowest frequencies for each sensor across both datasets; signals recorded at higher frequencies are down-sampled to match them.
- Data amount: to perform balanced training that gives equal weight to every dataset and avoids bias, a similar amount of information must be taken from each one. The number of samples used from each dataset is therefore limited by the dataset with the fewest samples. For GSR, all of the data provided by ASCERTAIN are used.
- Labels: both datasets share the two main labels (“Valence” and “Arousal”). According to previous studies, the information from these two affective states is enough to characterize the emotional state of the user.
- EEG: eight channels are used, so that our own device can be designed in the near future with the OpenBCI platform. These channels are (according to the 10–20 international placement system) Fp1, Fp2, C3, C4, T5, T6, O1, and O2; the information from the remaining channels is discarded.
- Most previous works divide each affective state into two classes (as can be seen in the final comparison in the “Results and Discussion” section). We think this is not enough, since more classes are needed to discretize the different states correctly.
- Among the works that use three classes, the central zone (‘neutral’) is given greater weight than the extremes (‘low’ and ‘high’); e.g., one of those works uses a division of 25% (low), 50% (medium), and 25% (high). We did not consider this division realistic and, although these values depend on the subject, we made a more equitable split between the classes: the lower 30% of the values for ‘low’, the middle 40% for ‘medium’, and the upper 30% for ‘high’ (a minimal sketch of this pre-processing is given after this list).
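As a rough illustration of the down-sampling and label-discretization choices above, the following Python sketch shows one possible implementation. The helper names, the scipy-based polyphase resampling, and the use of percentiles are our own assumptions; the paper does not prescribe a specific implementation.

```python
import numpy as np
from scipy.signal import resample_poly

def downsample(signal, fs_in, fs_out):
    """Down-sample a 1-D signal from fs_in to fs_out Hz (e.g., 128 Hz EEG -> 32 Hz)."""
    # The target rates (32/128/256 Hz) are integer divisors of the original
    # dataset rates, so polyphase decimation with up=1 is sufficient here.
    return resample_poly(np.asarray(signal, dtype=float), up=1, down=fs_in // fs_out)

def discretize_labels(ratings):
    """Map continuous Valence/Arousal ratings to 'low'/'medium'/'high' classes
    using the lower 30%, middle 40%, and upper 30% of the values."""
    ratings = np.asarray(ratings, dtype=float)
    low_thr, high_thr = np.percentile(ratings, [30, 70])
    classes = np.full(ratings.shape, "medium", dtype=object)
    classes[ratings <= low_thr] = "low"
    classes[ratings > high_thr] = "high"
    return classes
```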
2.3. Affective State Classifier
2.3.1. Pre-Processing
2.3.2. Feature Extraction
2.3.3. Single Neural Networks
- EEG: input layer (6 features × 8 channels = 48 nodes), hidden layer 1 (96 nodes), hidden layer 2 (24 nodes), output layer (three nodes).
- ECG: input layer (6 features × 2 channels = 12 nodes), hidden layer 1 (24 nodes), hidden layer 2 (six nodes), output layer (three nodes).
- GSR: input layer (6 features × 1 channel = 6 nodes), hidden layer 1 (12 nodes), hidden layer 2 (six nodes), output layer (three nodes).
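A minimal sketch of these three topologies is shown below, assuming a Keras implementation. The layer sizes follow the listing above; the ReLU/softmax activations, the Adam optimizer, and the categorical cross-entropy loss are our own assumptions, as the text does not state them.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_mlp(n_inputs, hidden1, hidden2, n_classes=3):
    # Layer sizes follow the listing above; activations and optimizer are assumptions.
    return tf.keras.Sequential([
        layers.Input(shape=(n_inputs,)),
        layers.Dense(hidden1, activation="relu"),
        layers.Dense(hidden2, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

eeg_net = build_mlp(48, 96, 24)  # 6 features x 8 EEG channels
ecg_net = build_mlp(12, 24, 6)   # 6 features x 2 ECG channels
gsr_net = build_mlp(6, 12, 6)    # 6 features x 1 GSR channel

eeg_net.compile(optimizer="adam", loss="categorical_crossentropy",
                metrics=["accuracy"])
# e.g., the 500-epoch training mentioned in Section 3.3:
# eeg_net.fit(x_train, y_train, epochs=500, validation_data=(x_test, y_test))
```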
2.3.4. Full Classification System
3. Results and Discussion
3.1. Single Neural Networks Results
- The time window needed to obtain such results is much longer than the one used for GSR and ECG. Thus, if the full system includes EEG, its data rate will be reduced significantly.
- The difference between the training-set and testing-set accuracy (and loss) is too high. This may mean the system is overtrained and does not generalize to new data, so testing it with other users could yield poor results.
3.2. Full Classification System
- F1-score: for the training sets, the extreme classes (LOW and HIGH) obtain better results than the MIDDLE one; for the testing sets, this is not the case.
- Sensitivity: the MIDDLE class obtains better results than the others in all cases.
- Specificity: the MIDDLE class obtains worse results than the others in all cases.
- Precision: the MIDDLE class obtains worse results than the others in all cases (the sketch after this list shows how these four metrics follow from the confusion-matrix counts reported in the tables).
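For reference, the per-class figures follow directly from the standard definitions applied to the confusion-matrix counts in the tables. A minimal sketch (the helper below is ours, not part of the published code):

```python
def per_class_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f1

# The LOW-class training counts from the first per-class table
# (TP=3957, FP=57, TN=9117, FN=237) reproduce the tabulated
# 0.943 / 0.994 / 0.986 / 0.964.
print([round(m, 3) for m in per_class_metrics(3957, 57, 9117, 237)])
```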
3.3. Comparison
- Data resolution: it uses only two levels for Arousal and Valence (negative and neutral). We use three (low, medium, and high).
- Time window: it uses a 30-s time window, giving a maximum processing rate of 0.033 samples per second. We use a 4-s time window, so our system has roughly 7.5 times the data rate (0.25 samples per second).
- Training epochs: the results are provided after a 5000-epoch training. We use only 500 epochs.
- Architecture complexity: it uses a Recurrent Neural Network (RNN) with four hidden layers. We use a classical MLP with four hidden layers, which is simpler and therefore easier to implement in embedded systems.
4. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Argyle, M. Non-verbal communication in human social interaction. In Non-Verbal Communication; Cambridge University Press: Cambridge, UK, 1972.
- Knapp, M.L.; Hall, J.A.; Horgan, T.G. Nonverbal Communication in Human Interaction; Cengage Learning: Boston, MA, USA, 2013.
- Isbister, K.; Nass, C. Consistency of personality in interactive characters: Verbal cues, non-verbal cues, and user characteristics. Int. J. Hum.-Comput. Stud. 2000, 53, 251–267.
- Tirapu-Ustárroz, J.; Pérez-Sayes, G.; Erekatxo-Bilbao, M.; Pelegrín-Valero, C. ¿Qué es la teoría de la mente? Revista de Neurología 2007, 44, 479–489.
- Volkmar, F.R.; Sparrow, S.S.; Rende, R.D.; Cohen, D.J. Facial perception in autism. J. Child Psychol. Psychiatry 1989, 30, 591–598.
- Celani, G.; Battacchi, M.W.; Arcidiacono, L. The understanding of the emotional meaning of facial expressions in people with autism. J. Autism Dev. Disord. 1999, 29, 57–66.
- Hatfield, E.; Cacioppo, J.T.; Rapson, R.L. Emotional contagion. Curr. Dir. Psychol. Sci. 1993, 2, 96–100.
- James, W. William James writings 1878–1899, chapter on emotion. Libr. Am. 1992, 350–365.
- Lange, C. Über Gemüthsbewegungen. Leipzig: Thomas. In The Emotions: A Psychophysiological Study; Hafner Publishing: New York, NY, USA, 1885; pp. 33–90.
- Cannon, W.B. The James-Lange theory of emotions: A critical examination and an alternative theory. Am. J. Psychol. 1927, 39, 106–124.
- LeDoux, J.E. Emotion circuits in the brain. Ann. Rev. Neurosci. 2000, 23, 155–184.
- Lang, P.; Bradley, M.M. The International Affective Picture System (IAPS) in the study of emotion and attention. Handb. Emot. Elicitation Assess. 2007, 29, 70–73.
- Wiens, S.; Öhman, A. Probing unconscious emotional processes: On becoming a successful masketeer. In Handbook of Emotion Elicitation and Assessment; Oxford University Press: Oxford, UK, 2007.
- Ekman, P. The directed facial action task. Handb. Emot. Elicitation Assess. 2007, 47, 53.
- Laird, J.D.; Strout, S. Emotional behaviors as emotional stimuli. In Handbook of Emotion Elicitation and Assessment; Oxford University Press: Oxford, UK, 2007; pp. 54–64.
- Amodio, D.M.; Zinner, L.R.; Harmon-Jones, E. Social psychological methods of emotion elicitation. Handb. Emot. Elicitation Assess. 2007, 91, 91–105.
- Roberts, N.A.; Tsai, J.L.; Coan, J.A. Emotion elicitation using dyadic interaction tasks. Handb. Emot. Elicitation Assess. 2007, 106–123.
- Eich, E.; Ng, J.T.; Macaulay, D.; Percy, A.D.; Grebneva, I. Combining music with thought to change mood. In Handbook of Emotion Elicitation and Assessment; Oxford University Press: Oxford, UK, 2007; pp. 124–136.
- Rottenberg, J.; Ray, R.; Gross, J. Emotion elicitation using films. In Handbook of Emotion Elicitation and Assessment; Oxford University Press: Oxford, UK, 2007.
- Tooby, J.; Cosmides, L. The past explains the present: Emotional adaptations and the structure of ancestral environments. Ethol. Sociobiol. 1990, 11, 375–424.
- Coan, J.A.; Allen, J.J. Frontal EEG asymmetry as a moderator and mediator of emotion. Biol. Psychol. 2004, 67, 7–50.
- Posner, J.; Russell, J.A.; Peterson, B.S. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev. Psychopathol. 2005, 17, 715.
- Berridge, K.C. Pleasures of the brain. Brain Cogn. 2003, 52, 106–128.
- Berkowitz, L.; Jaffee, S.; Jo, E.; Troccoli, B.T. On the correction of feeling-induced judgmental biases. In Feeling and Thinking: The Role of Affect in Social Cognition; Cambridge University Press: Cambridge, UK, 2000; pp. 131–152.
- Al-Qazzaz, N.K.; Hamid Bin Mohd Ali, S.; Ahmad, S.A.; Islam, M.S.; Escudero, J. Selection of mother wavelet functions for multi-channel EEG signal analysis during a working memory task. Sensors 2015, 15, 29015–29035.
- Mjahad, A.; Rosado-Muñoz, A.; Guerrero-Martínez, J.F.; Bataller-Mompeán, M.; Francés-Villora, J.V.; Dutta, M.K. Detection of ventricular fibrillation using the image from time-frequency representation and combined classifiers without feature extraction. Appl. Sci. 2018, 8, 2057.
- Ji, N.; Ma, L.; Dong, H.; Zhang, X. EEG signals feature extraction based on DWT and EMD combined with approximate entropy. Brain Sci. 2019, 9, 201.
- Ji, Y.; Zhang, S.; Xiao, W. Electrocardiogram classification based on faster regions with convolutional neural network. Sensors 2019, 19, 2558.
- Oh, S.L.; Vicnesh, J.; Ciaccio, E.J.; Yuvaraj, R.; Acharya, U.R. Deep convolutional neural network model for automated diagnosis of schizophrenia using EEG signals. Appl. Sci. 2019, 9, 2870.
- Civit-Masot, J.; Domínguez-Morales, M.J.; Vicente-Díaz, S.; Civit, A. Dual machine-learning system to aid glaucoma diagnosis using disc and cup feature extraction. IEEE Access 2020, 8, 127519–127529.
- Civit-Masot, J.; Luna-Perejón, F.; Domínguez Morales, M.; Civit, A. Deep learning system for COVID-19 diagnosis aid using X-ray pulmonary images. Appl. Sci. 2020, 10, 4640.
- Gao, C.; Neil, D.; Ceolini, E.; Liu, S.C.; Delbruck, T. DeltaRNN: A power-efficient recurrent neural network accelerator. In Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Monterey, CA, USA, 25–27 February 2018; pp. 21–30.
- Luna-Perejón, F.; Domínguez-Morales, M.J.; Civit-Balcells, A. Wearable fall detector using recurrent neural networks. Sensors 2019, 19, 4885.
- Crites, S.L., Jr.; Cacioppo, J.T. Electrocortical differentiation of evaluative and nonevaluative categorizations. Psychol. Sci. 1996, 7, 318–321.
- Cuthbert, B.N.; Schupp, H.T.; Bradley, M.M.; Birbaumer, N.; Lang, P.J. Brain potentials in affective picture processing: Covariation with autonomic arousal and affective report. Biol. Psychol. 2000, 52, 95–111.
- Cacioppo, J.T.; Berntson, G.G.; Larsen, J.T.; Poehlmann, K.M.; Ito, T.A. The psychophysiology of emotion. Handb. Emotions 2000, 2, 173–191.
- Graham, F.K.; Clifton, R.K. Heart-rate change as a component of the orienting response. Psychol. Bull. 1966, 65, 305.
- Prkachin, K.M.; Williams-Avery, R.M.; Zwaal, C.; Mills, D.E. Cardiovascular changes during induced emotion: An application of Lang’s theory of emotional imagery. J. Psychosom. Res. 1999, 47, 255–267.
- Cacioppo, J.T.; Berntson, G.G.; Klein, D.J.; Poehlmann, K.M. Psychophysiology of emotion across the life span. Ann. Rev. Gerontol. Geriatr. 1997, 17, 27–74.
- Codispoti, M.; Bradley, M.M.; Lang, P.J. Affective reactions to briefly presented pictures. Psychophysiology 2001, 38, 474–478.
- Bradley, M.M.; Lang, P.J.; Cuthbert, B.N. Emotion, novelty, and the startle reflex: Habituation in humans. Behav. Neurosci. 1993, 107, 970.
- Cacioppo, J.T.; Tassinary, L.G.; Fridlund, A.J. The Skeletomotor System; Cambridge University Press: Cambridge, UK, 1990.
- Schwartz, G.E.; Fair, P.L.; Salt, P.; Mandel, M.R.; Klerman, G.L. Facial muscle patterning to affective imagery in depressed and nondepressed subjects. Science 1976, 192, 489–491.
- Lang, P.J.; Greenwald, M.K.; Bradley, M.M.; Hamm, A.O. Looking at pictures: Affective, facial, visceral, and behavioral reactions. Psychophysiology 1993, 30, 261–273.
- Greenwald, M.K.; Cook, E.W.; Lang, P.J. Affective judgment and psychophysiological response: Dimensional covariation in the evaluation of pictorial stimuli. J. Psychophysiol. 1989, 3, 51–64.
- Witvliet, C.V.; Vrana, S.R. Psychophysiological responses as indices of affective dimensions. Psychophysiology 1995, 32, 436–443.
- Cacioppo, J.T.; Petty, R.E.; Losch, M.E.; Kim, H.S. Electromyographic activity over facial muscle regions can differentiate the valence and intensity of affective reactions. J. Personal. Soc. Psychol. 1986, 50, 260.
- Ekman, P. Facial expression and emotion. Am. Psychol. 1993, 48, 384.
- Lang, P.J. The emotion probe: Studies of motivation and attention. Am. Psychol. 1995, 50, 372.
- Bradley, M.M.; Codispoti, M.; Cuthbert, B.N.; Lang, P.J. Emotion and motivation I: Defensive and appetitive reactions in picture processing. Emotion 2001, 1, 276.
- Subramanian, R.; Wache, J.; Abadi, M.K.; Vieriu, R.L.; Winkler, S.; Sebe, N. ASCERTAIN: Emotion and personality recognition using commercial sensors. IEEE Trans. Affect. Comput. 2016, 9, 147–160.
- Katsigiannis, S.; Ramzan, N. DREAMER: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. IEEE J. Biomed. Health Inform. 2017, 22, 98–107.
- Lee, J.; Yoo, S.K. Design of user-customized negative emotion classifier based on feature selection using physiological signal sensors. Sensors 2018, 18, 4253.
- Lee, J.; Yoo, S.K. Recognition of negative emotion using long short-term memory with bio-signal feature compression. Sensors 2020, 20, 573.
- García, H.F.; Álvarez, M.A.; Orozco, Á.A. Gaussian process dynamical models for multimodal affect recognition. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 850–853.
- Liu, J.; Meng, H.; Nandi, A.; Li, M. Emotion detection from EEG recordings. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016; pp. 1722–1727.
- Li, X.; Song, D.; Zhang, P.; Yu, G.; Hou, Y.; Hu, B. Emotion recognition from multi-channel EEG data through convolutional recurrent neural network. In Proceedings of the 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, China, 15–18 December 2016; pp. 352–359.
- Zhang, J.; Chen, M.; Hu, S.; Cao, Y.; Kozma, R. PNN for EEG-based emotion recognition. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016.
- Mirmohamadsadeghi, L.; Yazdani, A.; Vesin, J.M. Using cardio-respiratory signals to recognize emotions elicited by watching music video clips. In Proceedings of the 2016 IEEE 18th International Workshop on Multimedia Signal Processing (MMSP), Montreal, QC, Canada, 21–23 September 2016; pp. 1–5.
- Zheng, W.L.; Zhu, J.Y.; Lu, B.L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2017, 10, 417–429.
- Girardi, D.; Lanubile, F.; Novielli, N. Emotion detection using noninvasive low cost sensors. In Proceedings of the 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, USA, 23–26 October 2017; pp. 125–130.
- Lee, M.S.; Lee, Y.K.; Pae, D.S.; Lim, M.T.; Kim, D.W.; Kang, T.K. Fast emotion recognition based on single pulse PPG signal with convolutional neural network. Appl. Sci. 2019, 9, 3355.
- Sonkusare, S.; Ahmedt-Aristizabal, D.; Aburn, M.J.; Nguyen, V.T.; Pang, T.; Frydman, S.; Denman, S.; Fookes, C.; Breakspear, M.; Guo, C.C. Detecting changes in facial temperature induced by a sudden auditory stimulus based on deep learning-assisted face tracking. Sci. Rep. 2019, 9, 4729.
Dataset | #Participants | Physiological Signals | Logging Frequency (Hz) | Tagged Labels
---|---|---|---|---
AMIGOS | 40 | EEG, ECG, GSR | 128 / 256 | Valence, Arousal, Dominance, Liking and Familiarity
ASCERTAIN [51] | 58 | EEG, ECG, GSR and Facial EMG | EEG: 32; GSR: 128; ECG: 256 | Valence, Arousal, Engagement, Liking and Familiarity
DEAP | 32 | EEG, ECG, GSR, EOG and Facial EMG | 128 | Valence, Arousal, Dominance, Liking and Familiarity
DREAMER [52] | 23 | EEG and ECG | EEG: 128; ECG: 256 | Valence, Arousal and Dominance
HR-EEG4EMO | 40 | EEG, ECG, GSR, SpO2, breath and heart rate | 100 / 1000 | Valence
Time Window (s) | Train #Samples | Train Accuracy (%) | Train Loss | Test #Samples | Test Accuracy (%) | Test Loss
---|---|---|---|---|---|---
1 | 54,816 | 79.95 | 0.1080 | 5486 | 78.80 | 0.1083 |
2 | 27,272 | 80.28 | 0.1056 | 2730 | 79.65 | 0.1071 |
3 | 18,040 | 81.34 | 0.1014 | 1805 | 81.20 | 0.1033 |
4 | 13,488 | 83.48 | 0.0964 | 1349 | 82.59 | 0.0987 |
5 | 10,704 | 83.06 | 0.0965 | 1073 | 82.93 | 0.0983 |
6 | 8872 | 82.12 | 0.0988 | 889 | 81.14 | 0.1001 |
7 | 7576 | 81.09 | 0.1035 | 759 | 80.36 | 0.1078 |
8 | 6608 | 80.27 | 0.1083 | 661 | 79.69 | 0.1107 |
9 | 5844 | 79.84 | 0.1091 | 584 | 79.33 | 0.1134 |
10 | 5212 | 79.10 | 0.1110 | 521 | 79.02 | 0.1131 |
Time Window (s) | Train #Samples | Train Accuracy (%) | Train Loss | Test #Samples | Test Accuracy (%) | Test Loss
---|---|---|---|---|---|---
1 | 54,816 | 74.55 | 0.1262 | 5486 | 74.50 | 0.1282 |
2 | 27,272 | 74.83 | 0.1257 | 2730 | 74.68 | 0.1263 |
3 | 18,040 | 75.05 | 0.1249 | 1805 | 74.91 | 0.1255 |
4 | 13,488 | 75.88 | 0.1245 | 1349 | 75.60 | 0.1233 |
5 | 10,704 | 76.22 | 0.1241 | 1073 | 75.83 | 0.1225 |
6 | 8872 | 75.53 | 0.1240 | 889 | 74.94 | 0.1241 |
7 | 7576 | 75.08 | 0.1252 | 759 | 74.93 | 0.1256 |
8 | 6608 | 74.80 | 0.1259 | 661 | 74.29 | 0.1267 |
9 | 5844 | 74.51 | 0.1261 | 584 | 73.06 | 0.1297 |
10 | 5212 | 74.30 | 0.1273 | 521 | 73.02 | 0.1346 |
Time Window (s) | Train #Samples | Train Accuracy (%) | Train Loss | Test #Samples | Test Accuracy (%) | Test Loss
---|---|---|---|---|---|---
1 | 54,820 | 80.20 | 0.1035 | 5487 | 79.80 | 0.1055 |
2 | 27,264 | 80.33 | 0.1033 | 2730 | 80.11 | 0.1041 |
3 | 17,892 | 80.86 | 0.1031 | 1889 | 80.31 | 0.1048 |
4 | 13,368 | 81.72 | 0.1024 | 1411 | 80.69 | 0.1038 |
5 | 10,712 | 80.45 | 0.1026 | 1071 | 80.18 | 0.1057
6 | 8876 | 80.40 | 0.1040 | 890 | 80.04 | 0.1061 |
7 | 7588 | 80.19 | 0.1057 | 759 | 78.53 | 0.1132 |
8 | 6612 | 80.12 | 0.1047 | 660 | 79.59 | 0.1121 |
9 | 5844 | 79.85 | 0.1068 | 584 | 78.92 | 0.1153 |
10 | 5208 | 80.05 | 0.1063 | 519 | 78.24 | 0.1151 |
Time Window (s) | Train #Samples | Train Accuracy (%) | Train Loss | Test #Samples | Test Accuracy (%) | Test Loss
---|---|---|---|---|---|---
1 | 54,820 | 76.92 | 0.1151 | 5487 | 75.65 | 0.1167 |
2 | 27,264 | 76.97 | 0.1136 | 2730 | 76.09 | 0.1166 |
3 | 17,892 | 78.19 | 0.1051 | 1889 | 77.47 | 0.1112 |
4 | 13,368 | 80.89 | 0.0998 | 1411 | 79.29 | 0.1028 |
5 | 10,712 | 76.23 | 0.1166 | 1071 | 76.29 | 0.1189
6 | 8876 | 75.41 | 0.1184 | 890 | 75.64 | 0.1240 |
7 | 7588 | 76.17 | 0.1158 | 759 | 76.16 | 0.1206 |
8 | 6612 | 76.16 | 0.1179 | 660 | 75.19 | 0.1227 |
9 | 5844 | 76.93 | 0.1122 | 584 | 76.30 | 0.1174 |
10 | 5208 | 75.42 | 0.1203 | 519 | 75.69 | 0.1206 |
Time Window (s) | Train #Samples | Train Accuracy (%) | Train Loss | Test #Samples | Test Accuracy (%) | Test Loss
---|---|---|---|---|---|---
1 | 54,637 | 69.37 | 0.1416 | 5755 | 67.32 | 0.1582 |
2 | 27,195 | 69.65 | 0.1411 | 2865 | 67.35 | 0.1541 |
3 | 17,927 | 70.53 | 0.1325 | 1888 | 65.33 | 0.1657 |
4 | 13,374 | 70.97 | 0.1366 | 1409 | 63.72 | 0.1672 |
5 | 10,662 | 71.54 | 0.1358 | 1123 | 62.98 | 0.1683 |
6 | 8841 | 72.26 | 0.1300 | 931 | 65.21 | 0.1744 |
7 | 7559 | 72.27 | 0.1302 | 796 | 63.00 | 0.1799 |
8 | 6581 | 74.06 | 0.1210 | 693 | 63.46 | 0.1749 |
9 | 5813 | 75.63 | 0.1161 | 612 | 62.32 | 0.1913 |
10 | 5195 | 65.34 | 0.1641 | 547 | 65.29 | 0.1652 |
Time Window (s) | Train #Samples | Train Accuracy (%) | Train Loss | Test #Samples | Test Accuracy (%) | Test Loss
---|---|---|---|---|---|---
1 | 54,637 | 74.81 | 0.1216 | 5755 | 75.22 | 0.1215 |
2 | 27,195 | 75.05 | 0.1178 | 2865 | 73.99 | 0.1270 |
3 | 17,927 | 76.36 | 0.1099 | 1888 | 72.09 | 0.1340 |
4 | 13,374 | 76.21 | 0.1096 | 1409 | 72.38 | 0.1416 |
5 | 10,662 | 76.64 | 0.1114 | 1123 | 73.35 | 0.1307 |
6 | 8841 | 77.15 | 0.1043 | 931 | 73.28 | 0.1432 |
7 | 7559 | 77.04 | 0.1063 | 796 | 69.82 | 0.1423 |
8 | 6581 | 77.27 | 0.1046 | 693 | 69.43 | 0.1505 |
9 | 5813 | 79.87 | 0.0956 | 612 | 68.05 | 0.1559 |
10 | 5195 | 80.34 | 0.0915 | 547 | 67.67 | 0.1684 |
Affective State | Train Accuracy (%) | Train Loss | Test Accuracy (%) | Test Loss
---|---|---|---|---
Arousal | 95.87 | 0.0279 | 91.71 | 0.0643 |
Valence | 94.64 | 0.0249 | 90.36 | 0.0589 |
Affective State | Train Accuracy (%) | Train Loss | Test Accuracy (%) | Test Loss
---|---|---|---|---
Arousal | 81.95 | 0.0591 | 75.07 | 0.1351 |
Valence | 83.38 | 0.0576 | 77.88 | 0.1349 |
Class | True Positives | False Positives | True Negatives | False Negatives | Sensitivity | Specificity | Precision | F1-Score |
---|---|---|---|---|---|---|---|---|
LOW | 3957 | 57 | 9117 | 237 | 0.943 | 0.994 | 0.986 | 0.964 |
MED | 5091 | 446 | 7727 | 104 | 0.979 | 0.945 | 0.919 | 0.948 |
HIGH | 3768 | 49 | 9340 | 211 | 0.947 | 0.995 | 0.987 | 0.966 |
Class | True Positives | False Positives | True Negatives | False Negatives | Sensitivity | Specificity | Precision | F1-Score |
---|---|---|---|---|---|---|---|---|
LOW | 379 | 24 | 978 | 30 | 0.926 | 0.976 | 0.941 | 0.933 |
MED | 592 | 74 | 714 | 31 | 0.950 | 0.906 | 0.889 | 0.918 |
HIGH | 323 | 19 | 1013 | 56 | 0.853 | 0.981 | 0.944 | 0.896 |
Class | True Positives | False Positives | True Negatives | False Negatives | Sensitivity | Specificity | Precision | F1-Score |
---|---|---|---|---|---|---|---|---|
LOW | 3773 | 76 | 9214 | 305 | 0.925 | 0.992 | 0.980 | 0.952 |
MED | 5521 | 537 | 7139 | 171 | 0.970 | 0.930 | 0.911 | 0.939 |
HIGH | 3358 | 103 | 9667 | 240 | 0.933 | 0.989 | 0.970 | 0.951 |
Class | True Positives | False Positives | True Negatives | False Negatives | Sensitivity | Specificity | Precision | F1-Score |
---|---|---|---|---|---|---|---|---|
LOW | 331 | 33 | 982 | 65 | 0.836 | 0.967 | 0.909 | 0.871 |
MED | 561 | 89 | 725 | 36 | 0.939 | 0.890 | 0.863 | 0.899 |
HIGH | 383 | 14 | 979 | 35 | 0.916 | 0.986 | 0.965 | 0.940 |
Work | Published | Output Resolution | Sensors | Technology | Accuracy
---|---|---|---|---|---
García, H. et al. [55] | 2016 | 3 levels (Low, Medium, High) | EEG, EMG and EOG | SVM | Valence: 88.3%; Arousal: 90.6%
Liu, J. et al. [56] | 2016 | 2 levels (Low, High) | EEG | KNN and RF | Valence: 69.6%; Arousal: 71.2%
Li, X. et al. [57] | 2016 | 2 levels (Low, High) | EEG | C-RNN | Valence: 72.1%; Arousal: 74.1%
Zhang, J. et al. [58] | 2016 | 2 levels (Low, High) | EEG | PNN | Valence: 81.2%; Arousal: 81.2%
Mirmohamadsadeghi, L. et al. [59] | 2016 | 2 levels (Low, High) | ECG and Respiration | SVM | Valence: 74.0%; Arousal: 74.0%
Zheng, W. et al. [60] | 2017 | 3 levels (Negative, Neutral, Positive) | EEG | KNN, LR and SVM | Mean: 79.3%
Girardi, D. et al. [61] | 2017 | 2 levels (Low, High) | EEG, GSR and EMG | SVM | Valence: 63.9%; Arousal: 58.6%
Lee, J. et al. [53] | 2018 | 2 levels (Neutral, Negative) | ECG, GSR and SKT | NN | Mean: 92.5%
Lee, M. et al. [62] | 2019 | 2 levels (Low, High) | PPG | CNN | Valence: 75.3%; Arousal: 76.2%
Sonkusare, S. et al. [63] | 2019 | 2 levels (Low, High) | ECG, GSR and SKT | CNN | Mean: 92%
Lee, J. et al. [54] | 2020 | 2 levels (Neutral, Negative) | ECG, GSR and SKT | B-RNN | Mean: 98.4%
This work | 2020 | 3 levels (Low, Medium, High) | ECG and GSR | NN | Valence (train): 94.6%; Arousal (train): 95.9%; Valence (test): 90.4%; Arousal (test): 91.7%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Muñoz-Saavedra, L.; Luna-Perejón, F.; Civit-Masot, J.; Miró-Amarante, L.; Civit, A.; Domínguez-Morales, M. Affective State Assistant for Helping Users with Cognition Disabilities Using Neural Networks. Electronics 2020, 9, 1843. https://doi.org/10.3390/electronics9111843