Fear Facial Emotion Recognition Based on Angular Deviation
Abstract
1. Introduction
2. Materials and Methods
2.1. The Proposed Methodology
- Data acquisition: the calibration is computed and the facial emotion patterns are extracted to build the database.
- Emotion recording: motion is captured with the Qualisys Track Manager (QTM), from which the different negative emotions are extracted.
- 3D-2D projection: the captured motion is projected under various angular deviations. Five negative emotions (sad, anger, disgust, surprise, and fear) are presented at seven angular deviations (0°, 5°, 10°, 15°, 20°, 25°, 30°) and four orientations (up, down, right, left).
- Feature classification procedure: principal component analysis is combined with an artificial neural network (PCAN) to divide the emotions into two classes (fear and others).
- The inner and outer corners of the eyes
- The ends of the upper eyelids and the lower eyelids
- The inner and outer corners of the eyebrows
- The upper point of the eyebrows
- The left and right corners of the mouth
- The upper and lower ends of the lips.
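As a sketch, the landmark set above can be gathered into one feature vector per capture frame. The A/B/C marker labels follow the paper's marker table, while the grouping and the helper function are illustrative assumptions:

```python
# Facial landmark set used to build the feature vector.
# Marker labels (A1..A6, B1..B8, C1..C4) follow the paper's marker table;
# the grouping into named regions here is an illustrative assumption.
LANDMARKS = {
    "eyebrows": ["A1", "A2", "A3", "A4", "A5", "A6"],              # inner/upper/outer, left and right
    "eyes":     ["B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8"],  # eye corners and eyelid ends
    "mouth":    ["C1", "C2", "C3", "C4"],                          # mouth corners and lip ends
}

def flatten_markers(frame):
    """Concatenate the (x, y, z) coordinates of every marker in a capture
    frame (a dict: marker label -> (x, y, z)) into one feature vector."""
    vec = []
    for group in ("eyebrows", "eyes", "mouth"):
        for label in LANDMARKS[group]:
            vec.extend(frame[label])
    return vec
```

With 18 markers, each frame yields a 54-dimensional vector of raw coordinates.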
2.1.1. Data Collection Based on the QTM System
2.1.2. Emotion Recording
2.1.3. 3D-2D Projection
- Step 1: feature selection
- Step 2: projection plane selection
- Step 3: matrix cross with angular deviation
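The three steps can be sketched as a rotation followed by an orthographic projection onto the image plane. The rotation matrices below are the standard ones about the head's vertical (left/right) and horizontal (up/down) axes; this is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def rotate_and_project(points3d, angle_deg, axis="y"):
    """Rotate 3D marker positions by `angle_deg` about the given head axis
    (y = left/right turn, x = up/down nod), then orthographically project
    onto the image (x, y) plane by dropping the depth coordinate."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    if axis == "y":        # left/right deviation
        R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    else:                  # up/down deviation
        R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    rotated = points3d @ R.T
    return rotated[:, :2]  # orthographic projection: keep (x, y)

# Seven deviations applied in each of the four orientations, as in the paper:
deviations = [0, 5, 10, 15, 20, 25, 30]
```

Positive and negative angles give the right/left (or up/down) orientations listed in the deviation table.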
2.1.4. Classification Procedure
Algorithm 1: Principal steps of the PCA algorithm.

For a set of data $X = \{x_1, \dots, x_N\}$:
1. Calculate the covariance matrix $C = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})(x_i - \bar{x})^T$.
2. Diagonalize the covariance matrix in order to extract the set of eigenvectors $W$: $C = W D W^T$, where $D$ is the eigenvalue matrix.
3. Determine the principal components by the following linear transformation: $y_i = W^T(x_i - \bar{x})$, where $\bar{x}$ is the mean vector of $X$.
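A minimal NumPy sketch of the three steps of Algorithm 1 (the eigendecomposition uses `numpy.linalg.eigh`, which is valid because the covariance matrix is symmetric):

```python
import numpy as np

def pca(X, n_components):
    """Principal steps of the PCA algorithm: covariance matrix,
    eigendecomposition, linear transformation. X has one observation per row."""
    mu = X.mean(axis=0)                 # mean vector of X
    Xc = X - mu
    C = Xc.T @ Xc / (len(X) - 1)        # 1. covariance matrix
    D, W = np.linalg.eigh(C)            # 2. eigenvalues D, eigenvectors W
    order = np.argsort(D)[::-1]         # sort by decreasing variance
    W = W[:, order[:n_components]]
    return Xc @ W                       # 3. principal components
```

The columns of the returned array are the projections onto the leading eigenvectors, ordered by decreasing explained variance.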
3. Results
3.1. Results of Projection Procedure Based on Angular Deviations and Orientations
3.2. Discriminant Characterization Results Based on Principal Component Analysis
3.3. Fear Emotion Classification Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Shome, A.; Rahman, M.M.; Chellappan, S.; Islam, A.A. A generalized mechanism beyond NLP for real-time detection of cyber abuse through facial expression analytics. In Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Texas, TX, USA, 12 November 2019; pp. 348–357.
2. George, A.; Mostaani, Z.; Geissenbuhler, D.; Nikisins, O.; Anjos, A.; Marcel, S. Biometric face presentation attack detection with multi-channel convolutional neural network. IEEE Trans. Inf. Forensics Secur. 2019, 15, 42–55.
3. Taha, B.; Hatzinakos, D. Emotion Recognition from 2D Facial Expressions. In Proceedings of the IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada, 5 May 2019; pp. 1–4.
4. Balasubramanian, B.; Diwan, P.; Nadar, R.; Bhatia, A. Analysis of Facial Emotion Recognition. In Proceedings of the IEEE 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23 April 2019; pp. 945–949.
5. Nguyen, D.H.; Kim, S.; Lee, G.S.; Yang, H.J.; Na, I.S.; Kim, S.H. Facial Expression Recognition Using a Temporal Ensemble of Multi-level Convolutional Neural Networks. IEEE Trans. Affect. Comput. 2019, 33, 1940015.
6. Melaugh, R.; Siddique, N.; Coleman, S.; Yogarajah, P. Facial Expression Recognition on partial facial sections. In Proceedings of the 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 23–25 September 2019; pp. 193–197.
7. Kaskavalci, H.C.; Gören, S. A Deep Learning Based Distributed Smart Surveillance Architecture using Edge and Cloud Computing. In Proceedings of the IEEE International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML), Istanbul, Turkey, 26–28 August 2019; pp. 1–6.
8. Kollias, D.; Zafeiriou, S. A Multi-component CNN-RNN Approach for Dimensional Emotion Recognition in-the-wild. arXiv 2018, arXiv:1805.01452.
9. Kollias, D.; Zafeiriou, S. Exploiting multi-cnn features in cnn-rnn based dimensional emotion recognition on the omg in-the-wild dataset. arXiv 2019, arXiv:1910.01417.
10. Zhang, F.; Zhang, T.; Mao, Q.; Xu, C. Geometry Guided Pose-Invariant Facial Expression Recognition. IEEE Trans. Image Process. 2020, 29, 4445–4460.
11. Hossain, M.S.; Muhammad, G. Emotion recognition using deep learning approach from audio–visual emotional big data. Inf. Fusion 2019, 49, 69–78.
12. Kaya, H.; Gürpınar, F.; Salah, A.A. Video-based emotion recognition in the wild using deep transfer learning and score fusion. Image Vis. Comput. 2017, 65, 66–75.
13. Ueda, J.; Okajima, K. Face morphing using average face for subtle expression recognition. In Proceedings of the IEEE 11th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 23–25 September 2019; pp. 187–192.
14. Pitaloka, D.A.; Wulandari, A.; Basaruddin, T.; Liliana, D.Y. Enhancing CNN with preprocessing stage in automatic emotion recognition. Procedia Comput. Sci. 2017, 116, 523–529.
15. Hua, C.H.; Huynh-The, T.; Seo, H.; Lee, S. Convolutional Network with Densely Backward Attention for Facial Expression Recognition. In Proceedings of the IEEE 14th International Conference on Ubiquitous Information Management and Communication (IMCOM), Taichung, Taiwan, 3–5 January 2020; pp. 1–6.
16. Singh, S.; Nasoz, F. Facial Expression Recognition with Convolutional Neural Networks. In Proceedings of the IEEE 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020; pp. 0324–0328.
17. Salah, A.A.; Kaya, H.; Gürpınar, F. Video-based emotion recognition in the wild. In Multimodal Behavior Analysis in the Wild; Academic Press: Cambridge, MA, USA, 2019; Volume 1, pp. 369–386.
18. Yang, H.; Han, J.; Min, K. A Multi-Column CNN Model for Emotion Recognition from EEG Signals. Sensors 2019, 19, 4736.
19. Zhang, B.; Quan, C.; Ren, F. Study on CNN in the recognition of emotion in audio and images. In Proceedings of the IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS), Okayama, Japan, 26–29 June 2016; pp. 1–5.
20. Fnaiech, A.; Bouzaiane, S.; Sayadi, M.; Louis, N.; Gorce, P. Real time 3D facial emotion classification using a digital signal PIC microcontroller. In Proceedings of the 2018 IEEE International Conference on Image Processing, Applications and Systems (IPAS), Sophia Antipolis, France, 12–14 December 2018; pp. 285–290.
21. Windolf, M.; Götzen, N.; Morlock, M. Systematic accuracy and precision analysis of video motion capturing systems—Exemplified on the Vicon-460 system. J. Biomech. 2008, 41, 2776–2780.
22. Mouelhi, A.; Sayadi, M.; Fnaiech, F.; Mrad, K.; Ben Romdhane, K. Automatic image segmentation of nuclear stained breast tissue sections using color active contour model and an improved watershed method. Biomed. Signal Process. Control 2013, 8, 421–436.
23. Hemanth, D.J.; Vijila, C.K.S.; Selvakumar, A.; Anitha, J. Performance Improved Iteration-Free Artificial Neural Networks for Abnormal Magnetic Resonance Brain Image Classification. Neurocomputing 2014, 130, 98–107.
24. Arif, S.; Wang, J.; Hussain, F.; Fei, Z. Trajectory-Based 3D Convolutional Descriptors for Human Action Recognition. J. Inf. Sci. Eng. 2019, 35, 851–870.
25. Sahli, H.; Ben Slama, A.; Mouelhi, A.; Soayeh, N.; Rachdi, R.; Sayadi, M. A computer-aided method based on geometrical texture features for a precocious detection of fetal Hydrocephalus in ultrasound images. Technol. Health Care 2020, 28, 643–664.
26. Sahli, H.; Mouelhi, A.; Ben Slama, A.; Sayadi, M.; Rachdi, R. Supervised classification approach of biometric measures for automatic fetal defect screening in head ultrasound images. J. Med. Eng. Technol. 2019, 43, 279–286.
27. Sahli, H.; Ben Slama, A.; Bouzaiane, S.; Marrakchi, J.; Boukriba, S.; Sayadi, M. VNG technique for a convenient vestibular neuritis rating. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2020, 8, 571–580.
28. Sahli, H.; Mouelhi, A.; Diouani, M.F.; Tlig, L.; Refai, A.; Landoulsi, R.B.; Sayadi, M.; Essafi, M. An advanced intelligent ELISA test for bovine tuberculosis diagnosis. Biomed. Signal Process. Control 2018, 46, 59–66.
29. Tang, Y.; Jing, L.; Li, H.; Atkinson, P.M. A multiple-point spatially weighted k-NN method for object-based classification. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 263–274.
30. Ventouras, E.M.; Asvestas, P.; Karanasiou, I.; Matsopoulos, G.K. Classification of Error-Related Negativity (ERN) and Positivity (Pe) potentials using kNN and Support Vector Machines. Comput. Biol. Med. 2011, 41, 98–109.
Human Organ | Left Markers | Right Markers | Inner-Left | Inner-Right | Outer-Left | Outer-Right | Upper-Left | Upper-Right | Lower-Left | Lower-Right
---|---|---|---|---|---|---|---|---|---|---
Eyebrows | A1, A2, A3 | A4, A5, A6 | A1 | A4 | A3 | A6 | A2 | A5 | - | -
Eyes | B1, B2, B3, B4 | B5, B6, B7, B8 | B3 | B5 | B1 | B7 | B2 | B6 | B4 | B8
Mouth | C1, C2, C3, C4 | - | - | - | - | - | C1 | C3 | C2 | C4
(The manually marked images and the corresponding significant emotive images of the original figure are not reproduced here; only the frame indices are kept.)

Patient | Neutral | Sad | Anger | Disgust | Surprise | Fear
---|---|---|---|---|---|---
Patient 1 (significant emotive frame) | Frame 10 | Frame 52 | Frame 77 | Frame 68 | Frame 93 | Frame 72
Patient 2 (significant emotive frame) | Frame 19 | Frame 84 | Frame 92 | Frame 55 | Frame 13 | Frame 81
(F = fear, S = sad, A = anger, D = disgust, SR = surprise)

 | Datasets | C1-Subjects | C2-Subjects | Training Set | Test Set
---|---|---|---|---|---
Sequences | 345 | 69 | 276 | 320 | 25
Emotions | F, S, A, D, SR | F | S, A, D, SR | 60 F / 260 others | 9 F / 16 others
Axis | Direction | Angular Deviation (°)
---|---|---
X | Right | +5, +10, +15, +20, +25, +30
X | Left | −5, −10, −15, −20, −25, −30
Y | Up | +5, +10, +15, +20, +25, +30
Y | Down | −5, −10, −15, −20, −25, −30
- | No deviation | 0
(D = resulting distance, I = manual interpretation)

AD | Neutral (D) | Sad (D) | Anger (D) | Disgust (D) | Surprise (D) | Fear (D) | Neutral (I) | Sad (I) | Anger (I) | Disgust (I) | Surprise (I) | Fear (I)
---|---|---|---|---|---|---|---|---|---|---|---|---
3° | 0.814 | 0.722 | 0.775 | 0.675 | 0.774 | 0.781 | E | E | E | E | E | E |
9° | 0.784 | 0.675 | 0.663 | 0.672 | 0.711 | 0.697 | E | E | E | E | E | E |
14° | 0.685 | 0.641 | 0.584 | 0.597 | 0.677 | 0.662 | E | A | A | E | E | E |
17° | 0.618 | 0.553 | 0.546 | 0.532 | 0.521 | 0.641 | E | A | A | A | A | E |
19° | 0.566 | 0.544 | 0.537 | 0.520 | 0.516 | 0.627 | A | A | A | A | A | E |
20° | 0.541 | 0.529 | 0.519 | 0.507 | 0.508 | 0.503 | A | A | A | A | A | A |
26° | 0.511 | 0.451 | 0.444 | 0.451 | 0.447 | 0.501 | A | B | B | B | B | A |
41° | 0.402 | 0.388 | 0.378 | 0.354 | 0.344 | 0.322 | B | B | B | B | B | B |
60° | 0.311 | 0.244 | 0.265 | 0.245 | 0.278 | 0.298 | B | B | B | B | B | B |
AD° | SVM [27] | PSVM [28] | FSVM [28] | ANN [20] | FAAN [20] | The Proposed PCAN Method |
---|---|---|---|---|---|---|
0 | 6.7 | 6.4 | 5.6 | 6.2 | 5.3 | 0.5 |
5 | 8.2 | 6.5 | 6.2 | 7.1 | 5.5 | 3.6 |
10 | 8.5 | 6.8 | 6.9 | 7.5 | 5.8 | 3.8 |
15 | 9.3 | 7.6 | 7.3 | 8.2 | 6.0 | 4.1 |
20 | 9.7 | 8.4 | 8.0 | 8.7 | 7.8 | 4.4 |
25 | 22.1 | 24.3 | 21.1 | 27.1 | 23.2 | 21.2 |
30 | 36.9 | 36.7 | 35.8 | 37.9 | 34.7 | 30 |
Classifier | Network | Hidden Layer | Epoch | Threshold | Learning Coefficient | Moment Value | Activation Function | Training Function |
---|---|---|---|---|---|---|---|---|
PCAN | (5,12,30,1) | 2 | 150 | 0.7 | 0.001 | 0.9 | HTS | QN |
FANN | (5,12,30,1) | 2 | 150 | 0.7 | 0.001 | 0.9 | HTS | CG |
ANN | (153,100,1) | 1 | 100 | 0.7 | 0.001 | 0.9 | HTS | CG |
FSVM | (5,12,30,1) | 2 | 150 | 0.7 | - | - | G-RBF | RBF |
PSVM | (5,12,30,1) | 2 | 150 | 0.7 | - | - | G-RBF | RBF |
SVM | (153,100,1) | 1 | 100 | 0.7 | - | - | G-RBF | RBF |
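As a hedged sketch, the PCAN/FANN network settings in the table above could be reproduced with scikit-learn's `MLPClassifier`. Using the `sgd` solver with momentum is an assumption on my part: the paper does not name a library, and scikit-learn offers no quasi-Newton (QN) or conjugate-gradient (CG) trainer with a moment term:

```python
from sklearn.neural_network import MLPClassifier

# Two hidden layers of 12 and 30 neurons (the (5,12,30,1) topology:
# 5 inputs, two hidden layers, 1 binary output), tanh activation ("HTS"),
# 150 epochs, learning coefficient 0.001, moment value 0.9.
clf = MLPClassifier(
    hidden_layer_sizes=(12, 30),
    activation="tanh",
    solver="sgd",            # assumption: stand-in for the QN/CG trainers
    learning_rate_init=0.001,
    momentum=0.9,
    max_iter=150,
)
```

The 0.7 threshold column would then be applied to `clf.predict_proba` scores rather than to the default 0.5 decision boundary.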
AC | SE | SP | PPV | NPV | PLr | NLr | Prv | Pto | PtoP | PtoN | PtPo | PtPr | AbD | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Fold 1 | 96.2 | 92.3 | 95.1 | 95.3 | 92.0 | 18.84 | 0.08 | 0.536 | 1.16 | 21.76 | 0.09 | 0.96 | 0.09 | 0.87 |
Fold 2 | 97.5 | 95.5 | 98.5 | 98.7 | 95.3 | 63.67 | 0.05 | 0.536 | 1.16 | 73.55 | 0.05 | 0.99 | 0.05 | 0.94 |
Fold 3 | 96.4 | 98.8 | 92.6 | 93.2 | 98.6 | 13.35 | 0.01 | 0.52 | 1.08 | 14.46 | 0.01 | 0.94 | 0.01 | 0.92 |
Fold 4 | 95.9 | 92.6 | 98.4 | 98.7 | 92.0 | 57.88 | 0.08 | 0.553 | 1.24 | 71.60 | 0.09 | 0.99 | 0.09 | 0.90 |
Fold 5 | 96.0 | 89.5 | 94.8 | 95.3 | 88.6 | 17.21 | 0.11 | 0.553 | 1.24 | 21.29 | 0.14 | 0.96 | 0.12 | 0.83 |
Fold 6 | 96.1 | 95.1 | 92.3 | 92.0 | 95.3 | 12.35 | 0.05 | 0.53 | 1.13 | 13.93 | 0.06 | 0.93 | 0.06 | 0.88 |
Fold 7 | 96.2 | 92.6 | 98.4 | 98.7 | 92.0 | 57.88 | 0.08 | 0.553 | 1.24 | 71.60 | 0.09 | 0.99 | 0.09 | 0.90 |
Fold 8 | 94.9 | 98.7 | 98.7 | 98.7 | 98.7 | 75.92 | 0.01 | 0.5 | 1.00 | 75.92 | 0.01 | 0.99 | 0.01 | 0.97 |
Fold 9 | 98.8 | 95.1 | 92.3 | 92.0 | 95.3 | 12.35 | 0.05 | 0.483 | 0.93 | 11.54 | 0.05 | 0.92 | 0.05 | 0.87 |
Fold 10 | 94.7 | 98.3 | 89.9 | 88.7 | 98.7 | 9.73 | 0.02 | 0.45 | 0.82 | 7.96 | 0.02 | 0.89 | 0.02 | 0.87 |
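The standard per-fold metrics in the table above follow directly from a fold's confusion counts. A sketch covering the common measures (the pre/post-test probability columns are omitted, since their exact definitions are given in the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard evaluation metrics from confusion counts: AC accuracy,
    SE sensitivity, SP specificity, PPV/NPV predictive values,
    PLr/NLr positive/negative likelihood ratios."""
    se = tp / (tp + fn)                  # sensitivity (recall)
    sp = tn / (tn + fp)                  # specificity
    return {
        "AC":  (tp + tn) / (tp + fp + tn + fn),
        "SE":  se,
        "SP":  sp,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "PLr": se / (1 - sp),            # positive likelihood ratio
        "NLr": (1 - se) / sp,            # negative likelihood ratio
    }
```

For example, a fold with 90 true positives, 10 false negatives, 95 true negatives, and 5 false positives yields SE = 0.90, SP = 0.95, and PLr = 18.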
Fnaiech, A.; Sahli, H.; Sayadi, M.; Gorce, P. Fear Facial Emotion Recognition Based on Angular Deviation. Electronics 2021, 10, 358. https://doi.org/10.3390/electronics10030358