IoMT Based Facial Emotion Recognition System Using Deep Convolution Neural Networks
Abstract
1. Introduction
- Design and implementation of an IoMT-based portable edge device for recognizing the facial emotions of an individual in real time.
- A customized wristband and vision mote are designed and developed for recognizing facial emotions by correlating the sensor data with the visual data.
- A pre-trained deep network is deployed on the Raspberry Pi itself using the Intel Movidius Neural Compute Stick co-processor (Mouser Electronics, Mansfield, TX, USA).
- A deep convolutional neural network model delivers outstanding accuracy for FER on the edge device.
- Real-time experiments are performed on distinct individuals across three facial emotion classes: happy, angry, and neutral.
- A t-test is performed to extract the significant difference in the systolic blood pressure, diastolic blood pressure, and heart rate of an individual while watching the three different classes of videos (angry, happy, and neutral).
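The highlighted t-test can be sketched in a few lines of standard-library Python. The readings below are the Subject 1 systolic values reported later in the results table; the paired (dependent-samples) form is appropriate because the happy and neutral readings come from the same subject:

```python
import math
from statistics import mean, stdev

# Subject 1 systolic readings (mm Hg) from the results table
happy   = [130, 130, 132, 135, 139]  # while watching a "happy" video
neutral = [118, 117, 118, 119, 117]  # while watching a "neutral" video

# Paired t-test: t = mean(d) / (stdev(d) / sqrt(n)), with d the paired differences
d = [h - m for h, m in zip(happy, neutral)]
n = len(d)
t = mean(d) / (stdev(d) / math.sqrt(n))
print(f"t = {t:.3f}, df = {n - 1}")  # t = 8.663, df = 4
```

The full study runs the same arithmetic over n = 15 readings per emotion pair; the tables in Section 5.2 report the resulting SPSS-style paired-samples output.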
2. Related Work
3. Proposed Methodology
4. System Development
5. Results and Validation
5.1. Results
5.2. Validation
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Irfan, M.; Ahmad, N. Internet of medical things: Architectural model, motivational factors and impediments. In Proceedings of the 2018 15th Learning and Technology Conference (L&T), Jeddah, Saudi Arabia, 25–26 February 2018; pp. 6–13. [Google Scholar]
- Nayyar, A.; Puri, V.; Nguyen, N.G. BioSenHealth 1.0: A Novel Internet of Medical Things (IoMT)-Based Patient Health Monitoring System. In Lecture Notes in Networks and Systems; Springer Nature: Cham, Switzerland, 2019; Volume 55, pp. 155–164. [Google Scholar]
- Rahman, M.A.; Hossain, M.S. An Internet of medical things-enabled edge computing framework for tackling COVID-19. IEEE Internet Things J. 2021. [Google Scholar] [CrossRef]
- Shan, C.; Gong, S.; McOwan, P.W. Facial expression recognition based on local binary patterns: A comprehensive study. Image Vis. Comput. 2009, 27, 803–816. [Google Scholar] [CrossRef] [Green Version]
- Kwong, J.C.T.; Garcia, F.C.C.; Abu, P.A.R.; Reyes, R.S.J. Emotion recognition via facial expression: Utilization of numerous feature descriptors in different machine learning algorithms. In Proceedings of the TENCON 2018-2018 IEEE Region 10 Conference, Jeju, Korea, 28–31 October 2018; pp. 2045–2049. [Google Scholar]
- Rodríguez-Pulecio, C.G.; Benítez-Restrepo, H.D.; Bovik, A.C. Making long-wave infrared face recognition robust against image quality degradations. Quant. Infrared Thermogr. J. 2019, 16, 218–242. [Google Scholar] [CrossRef]
- Canedo, D.; Neves, A.J.R. Facial expression recognition using computer vision: A systematic review. Appl. Sci. 2019, 9, 4678. [Google Scholar] [CrossRef] [Green Version]
- Ruiz-Garcia, A.; Elshaw, M.; Altahhan, A.; Palade, V. A hybrid deep learning neural approach for emotion recognition from facial expressions for socially assistive robots. Neural Comput. Appl. 2018, 29, 359–373. [Google Scholar] [CrossRef]
- Sajjad, M.; Nasir, M.; Ullah, F.U.M.; Muhammad, K.; Sangaiah, A.K.; Baik, S.W. Raspberry Pi assisted facial expression recognition framework for smart security in law-enforcement services. Inf. Sci. N. Y. 2019, 479, 416–431. [Google Scholar] [CrossRef]
- Srihari, K.; Ramesh, R.; Udayakumar, E.; Dhiman, G. An Innovative Approach for Face Recognition Using Raspberry Pi. Artif. Intell. Evol. 2020, 103–108. [Google Scholar] [CrossRef]
- Gaikwad, P.S.; Kulkarni, V.B. Face Recognition Using Golden Ratio for Door Access Control System; Springer: Singapore, 2021; pp. 209–231. [Google Scholar]
- Lin, H.; Garg, S.; Hu, J.; Wang, X.; Piran, M.J.; Hossain, M.S. Privacy-enhanced data fusion for COVID-19 applications in intelligent Internet of medical Things. IEEE Internet Things J. 2020. [Google Scholar] [CrossRef]
- Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Hasan, M.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. A state-of-the-art survey on deep learning theory and architectures. Electronics 2019, 8, 292. [Google Scholar] [CrossRef] [Green Version]
- Jain, Y.; Gandhi, H.; Burte, A.; Vora, A. Mental and Physical Health Management System Using ML, Computer Vision and IoT Sensor Network. In Proceedings of the 4th International Conference on Electronics, Communication and Aerospace Technology, ICECA 2020, Coimbatore, India, 5–7 November 2020; pp. 786–791. [Google Scholar] [CrossRef]
- Zedan, M.J.M.; Abduljabbar, A.I.; Malallah, F.L.; Saeed, M.G. Controlling Embedded Systems Remotely via Internet-of-Things Based on Emotional Recognition. Adv. Hum. Comput. Interact. 2020, 2020. [Google Scholar] [CrossRef]
- Abbasnejad, I.; Sridharan, S.; Nguyen, D.; Denman, S.; Fookes, C.; Lucey, S. Using synthetic data to improve facial expression analysis with 3d convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 1609–1618. [Google Scholar]
- Tümen, V.; Söylemez, Ö.F.; Ergen, B. Facial emotion recognition on a dataset using Convolutional Neural Network. In Proceedings of the IDAP 2017—International Artificial Intelligence and Data Processing Symposium, Malatya, Turkey, 16–17 September 2017. [Google Scholar] [CrossRef]
- Saran, R.; Haricharan, S.; Praveen, N. Facial emotion recognition using deep convolutional neural networks. Int. J. Adv. Sci. Technol. 2020, 29, 2020–2025. [Google Scholar] [CrossRef] [Green Version]
- Cheng, H.; Su, Z.; Xiong, N.; Xiao, Y. Energy-efficient node scheduling algorithms for wireless sensor networks using Markov Random Field model. Inf. Sci. N. Y. 2016, 329, 461–477. [Google Scholar] [CrossRef]
- Breuer, R.; Kimmel, R. A deep learning perspective on the origin of facial expressions. arXiv 2017, arXiv:1705.01842. [Google Scholar]
- Sajjad, M.; Shah, A.; Jan, Z.; Shah, S.I.; Baik, S.W.; Mehmood, I. Facial appearance and texture feature-based robust facial expression recognition framework for sentiment knowledge discovery. Clust. Comput. 2018, 21, 549–567. [Google Scholar] [CrossRef]
- Zhang, L.; Verma, B.; Tjondronegoro, D.; Chandran, V. Facial expression analysis under partial occlusion: A survey. arXiv 2018, arXiv:1802.08784. [Google Scholar] [CrossRef] [Green Version]
- Zhu, C.; Zheng, Y.; Luu, K.; Savvides, M. CMS-RCNN: Contextual multi-scale region-based cnn for unconstrained face detection. In Deep Learning for Biometrics; Springer Nature: Cham, Switzerland, 2017; pp. 57–79. [Google Scholar]
- Al-Shabi, M.; Cheah, W.P.; Connie, T. Facial Expression Recognition Using a Hybrid CNN-SIFT Aggregator. arXiv 2016, arXiv:1608.02833. [Google Scholar]
- Deng, J.; Pang, G.; Zhang, Z.; Pang, Z.; Yang, H.; Yang, G. cGAN based facial expression recognition for human-robot interaction. IEEE Access 2019, 7, 9848–9859. [Google Scholar] [CrossRef]
- Li, J.; Jin, K.; Zhou, D.; Kubota, N.; Ju, Z. Attention mechanism-based CNN for facial expression recognition. Neurocomputing 2020, 411, 340–350. [Google Scholar] [CrossRef]
- Li, Q.; Liu, Y.Q.; Peng, Y.Q.; Liu, C.; Shi, J.; Yan, F.; Zhang, Q. Real-time facial emotion recognition using lightweight convolution neural network. J. Phys. Conf. Ser. 2021, 1827, 12130. [Google Scholar] [CrossRef]
- Mellouk, W.; Handouzi, W. Facial emotion recognition using deep learning: Review and insights. Procedia Comput. Sci. 2020, 175, 689–694. [Google Scholar] [CrossRef]
- Sadeghi, H.; Raie, A.-A. Human vision inspired feature extraction for facial expression recognition. Multimed. Tools Appl. 2019, 78, 30335–30353. [Google Scholar] [CrossRef]
- Tsai, H.-H.; Chang, Y.-C. Facial expression recognition using a combination of multiple facial features and support vector machine. Soft Comput. 2018, 22, 4389–4405. [Google Scholar] [CrossRef]
- Ji, Y.; Hu, Y.; Yang, Y.; Shen, F.; Shen, H.T. Cross-domain facial expression recognition via an intra-category common feature and inter-category distinction feature fusion network. Neurocomputing 2019, 333, 231–239. [Google Scholar] [CrossRef]
- Zhang, T.; Liu, M.; Yuan, T.; Al-Nabhan, N. Emotion-Aware and Intelligent Internet of Medical Things towards Emotion Recognition during COVID-19 Pandemic. IEEE Internet Things J. 2020. [Google Scholar] [CrossRef]
- Rathour, N.; Gehlot, A.; Singh, R. Spruce-A intelligent surveillance device for monitoring of dustbins using image processing and raspberry PI. Int. J. Recent Technol. Eng. 2019, 8, 1570–1574. [Google Scholar] [CrossRef]
- Rathour, N.; Gehlot, A.; Singh, R. A standalone vision device to recognize facial landmarks and smile in real time using Raspberry Pi and sensor. Int. J. Eng. Adv. Technol. 2019, 8, 4383–4388. [Google Scholar] [CrossRef]
- Rathour, N.; Singh, R.; Gehlot, A. Image and Video Capturing for Proper Hand Sanitation Surveillance in Hospitals Using Euphony—A Raspberry Pi and Arduino-Based Device. In International Conference on Intelligent Computing and Smart Communication 2019. Algorithms for Intelligent Systems; Springer: Singapore, 2020; pp. 1475–1486. [Google Scholar]
- Haider, F.; Pollak, S.; Albert, P.; Luz, S. Emotion recognition in low-resource settings: An evaluation of automatic feature selection methods. Comput. Speech Lang. 2021, 65, 101119. [Google Scholar] [CrossRef]
- Su, Y.-S.; Suen, H.-Y.; Hung, K.-E. Predicting behavioral competencies automatically from facial expressions in real-time video-recorded interviews. J. Real-Time Image Process. 2021, 1–11. [Google Scholar] [CrossRef]
- Uddin, M.Z.; Nilsson, E.G. Emotion recognition using speech and neural structured learning to facilitate edge intelligence. Eng. Appl. Artif. Intell. 2020, 94, 103775. [Google Scholar] [CrossRef]
- Wang, S.; Guo, W. Robust co-clustering via dual local learning and high-order matrix factorization. Knowl. Based Syst. 2017, 138, 176–187. [Google Scholar] [CrossRef]
- Altameem, T.; Altameem, A. Facial expression recognition using human machine interaction and multi-modal visualization analysis for healthcare applications. Image Vis. Comput. 2020, 103, 104044. [Google Scholar] [CrossRef]
- Chen, Y.; Ou, R.; Li, Z.; Wu, K. WiFace: Facial Expression Recognition Using Wi-Fi Signals. IEEE Trans. Mob. Comput. 2020. [Google Scholar] [CrossRef]
- Masud, M.; Muhammad, G.; Alhumyani, H.; Alshamrani, S.S.; Cheikhrouhou, O.; Ibrahim, S.; Hossain, M.S. Deep learning-based intelligent face recognition in IoT-cloud environment. Comput. Commun. 2020, 152, 215–222. [Google Scholar] [CrossRef]
- Medapati, P.K.; Tejo Murthy, P.H.S.; Sridhar, K.P. LAMSTAR: For IoT-based face recognition system to manage the safety factor in smart cities. Trans. Emerg. Telecommun. Technol. 2020, 31, e3843. [Google Scholar] [CrossRef]
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. I. [Google Scholar]
- Arriaga, O.; Valdenegro-Toro, M.; Plöger, P.G. Real-time convolutional neural networks for emotion and gender classification. In Proceedings of the 27th European Symposium on Artificial Neural Networks, ESANN 2019, Computational Intelligence and Machine Learning, Brügge, Belgium, 24–26 April 2019; pp. 221–226. [Google Scholar]
- Blood Pressure Sensor—Serial Output. Available online: https://www.sunrom.com/p/blood-pressure-sensor-serial-output (accessed on 17 May 2021).
Physical State | Systolic (mm Hg) | Diastolic (mm Hg) |
---|---|---|
Hypotension | <90 | <60 |
Desired | 90–119 | 60–79 |
Prehypertension | 120–139 | 80–89 |
Stage 1 Hypertension | 140–159 | 90–99 |
Stage 2 Hypertension | 160–179 | 100–109 |
Hypertensive crisis | ≥180 | ≥110 |
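The bands in this table translate directly into a small classifier. A minimal sketch (the function name `bp_category` and the tie-break rule, reporting the more severe of the two bands when systolic and diastolic disagree, are assumptions rather than details from the paper):

```python
def bp_category(systolic, diastolic):
    """Classify a blood-pressure reading using the thresholds in the table.

    If the systolic and diastolic readings fall in different bands, the
    more severe band is reported (an assumed convention)."""
    def band(value, cuts):
        # cuts are the exclusive upper bounds of each band, in order
        for i, cut in enumerate(cuts):
            if value < cut:
                return i
        return len(cuts)

    labels = ["Hypotension", "Desired", "Prehypertension",
              "Stage 1 Hypertension", "Stage 2 Hypertension",
              "Hypertensive crisis"]
    sys_band = band(systolic,  [90, 120, 140, 160, 180])
    dia_band = band(diastolic, [60, 80, 90, 100, 110])
    return labels[max(sys_band, dia_band)]

print(bp_category(118, 79))   # Desired
print(bp_category(133, 101))  # Stage 2 Hypertension (diastolic dominates)
```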
Model | Accuracy | Learning Rate | Test Accuracy | Optimizer |
---|---|---|---|---|
Mini_Xception | 73% | 0.005 | 69% | Adam |
Densenet161 | 59% | 0.001, 0.001, 0.005 | 43% | SGD |
Resnet38 | 68% | 0.0001 | 60% | SGD |
Mobilenet_V2 | 72.5% | 0.0001, 0.001 | 64% | Adam |
S.No. | Name of Video | Duration of Video (mm:ss) | Video Type |
---|---|---|---|
1 | Tom and Jerry | 25:59 | Happy |
2 | Mr. Bean at the Dentist | 27:20 | Happy |
3 | Contagious Laughter Compilation | 16:00 | Happy |
4 | Sridevi Funeral video | 14:54 | Sad |
5 | Mumbai Terror Attack videos | 16:00 | Sad |
6 | She tells her story on why she fled away from North Korea | 9:38 | Sad |
7 | Best of Angry people | 14:29 | Angry |
8 | Nirbhaya’s Mother’s Interview | 17:25 | Angry |
9 | Pit Bull Terrier Dog Attacks | 9:38 | Angry |
S. No. | Subject Number | Age | Systolic BP (mm Hg) | Diastolic BP (mm Hg) | Heart Rate (BPM) | Expression | Timestamp (hh:mm:ss)
---|---|---|---|---|---|---|---
1 | Subject 1 | 20 | 130 | 101 | 91 | Happy | 4:27:30 |
2 | Subject 1 | 20 | 130 | 100 | 91 | Happy | 4:30:20 |
3 | Subject 1 | 20 | 132 | 102 | 94 | Happy | 4:33:30 |
4 | Subject 1 | 20 | 135 | 114 | 91 | Happy | 4:35:20 |
5 | Subject 1 | 20 | 139 | 112 | 102 | Happy | 4:38:35 |
6 | Subject 2 | 22 | 122 | 108 | 98 | Happy | 4:41:18 |
7 | Subject 2 | 22 | 126 | 96 | 80 | Happy | 4:44:35 |
8 | Subject 2 | 22 | 128 | 93 | 83 | Happy | 4:47:30 |
9 | Subject 2 | 22 | 126 | 96 | 80 | Happy | 4:44:30 |
10 | Subject 2 | 22 | 128 | 93 | 90 | Happy | 4:47:24 |
11 | Subject 3 | 21 | 145 | 97 | 79 | Happy | 4:50:24 |
12 | Subject 3 | 21 | 143 | 99 | 83 | Happy | 4:53:12 |
13 | Subject 3 | 21 | 128 | 93 | 70 | Happy | 4:56:34 |
14 | Subject 3 | 21 | 145 | 97 | 75 | Happy | 4:59:16 |
15 | Subject 3 | 21 | 143 | 90 | 79 | Happy | 5:02:34 |
16 | Subject 1 | 20 | 118 | 79 | 75 | Neutral | 5:30:10 |
17 | Subject 1 | 20 | 117 | 79 | 76 | Neutral | 5:33:40 |
18 | Subject 1 | 20 | 118 | 77 | 76 | Neutral | 5:36:46 |
19 | Subject 1 | 20 | 119 | 77 | 75 | Neutral | 5:39:52 |
20 | Subject 1 | 20 | 117 | 79 | 76 | Neutral | 5:42:23 |
21 | Subject 2 | 22 | 121 | 80 | 78 | Neutral | 5:45:12 |
22 | Subject 2 | 22 | 122 | 80 | 77 | Neutral | 5:48:43 |
23 | Subject 2 | 22 | 122 | 79 | 77 | Neutral | 5:51:16 |
24 | Subject 2 | 22 | 120 | 80 | 76 | Neutral | 5:54:24 |
25 | Subject 2 | 22 | 119 | 79 | 76 | Neutral | 5:57:20 |
26 | Subject 3 | 21 | 103 | 63 | 61 | Neutral | 6:00:25 |
27 | Subject 3 | 21 | 103 | 62 | 61 | Neutral | 6:03:10 |
28 | Subject 3 | 21 | 105 | 66 | 70 | Neutral | 6:06:30 |
29 | Subject 3 | 21 | 105 | 66 | 71 | Neutral | 6:09:50 |
30 | Subject 3 | 21 | 107 | 63 | 74 | Neutral | 6:12:10 |
31 | Subject 1 | 20 | 136 | 109 | 85 | Angry | 6:45:00 |
32 | Subject 1 | 20 | 135 | 108 | 84 | Angry | 6:48:20 |
33 | Subject 1 | 20 | 137 | 109 | 86 | Angry | 6:51:43 |
34 | Subject 1 | 20 | 137 | 109 | 87 | Angry | 6:54:20 |
35 | Subject 1 | 20 | 136 | 109 | 87 | Angry | 6:57:32 |
36 | Subject 2 | 22 | 140 | 113 | 102 | Angry | 7:00:10 |
37 | Subject 2 | 22 | 129 | 110 | 98 | Angry | 7:03:23 |
38 | Subject 2 | 22 | 130 | 110 | 98 | Angry | 7:06:32 |
39 | Subject 2 | 22 | 128 | 109 | 100 | Angry | 7:09:43 |
40 | Subject 2 | 22 | 142 | 112 | 105 | Angry | 7:12:32 |
41 | Subject 3 | 21 | 138 | 110 | 108 | Angry | 7:15:10 |
42 | Subject 3 | 21 | 140 | 112 | 109 | Angry | 7:18:34 |
43 | Subject 3 | 21 | 142 | 111 | 107 | Angry | 7:21:42 |
44 | Subject 3 | 21 | 138 | 110 | 107 | Angry | 7:24:10 |
45 | Subject 3 | 21 | 140 | 112 | 108 | Angry | 7:27:30 |
Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Systolic_BP_Happy | 133.333 | 15 | 7.7429 | 1.9992
 | Systolic_BP_Neutral | 114.400 | 15 | 7.3853 | 1.9069

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Diastolic_BP_Happy | 99.400 | 15 | 7.0791 | 1.8278
 | Diastolic_BP_Neutral | 73.933 | 15 | 7.3918 | 1.9085

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Systolic_BP_Happy | 133.333 | 15 | 7.7429 | 1.9992
 | Systolic_BP_Neutral | 114.400 | 15 | 7.3853 | 1.9069

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Systolic_BP_Neutral | 114.400 | 15 | 7.3853 | 1.9069
 | Systolic_BP_Angry | 136.533 | 15 | 4.4379 | 1.1459

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Diastolic_BP_Neutral | 73.933 | 15 | 7.3918 | 1.9085
 | Diastolic_BP_Angry | 110.200 | 15 | 1.4736 | 0.3805

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Heart_Rate_Neutral | 73.267 | 15 | 5.4178 | 1.3989
 | Heart_Rate_Angry | 98.067 | 15 | 9.6471 | 2.4909

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Systolic_BP_Angry | 136.533 | 15 | 4.4379 | 1.1459
 | Systolic_BP_Sad | 90.200 | 15 | 2.7568 | 0.7118

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Diastolic_BP_Angry | 110.200 | 15 | 1.4736 | 0.3805
 | Diastolic_BP_Sad | 76.53 | 15 | 3.563 | 0.920

Pair | Variable | Mean | N | Std. Deviation | Std. Error Mean
---|---|---|---|---|---
Pair 1 | Heart_Rate_Angry | 98.067 | 15 | 9.6471 | 2.4909
 | Heart_Rate_Sad | 96.067 | 15 | 5.8367 | 1.5070
S. No. | Pair State | Parameter | Mean Difference | Std. Dev | Std. Error | t | df | Sig. (2-Tailed)
---|---|---|---|---|---|---|---|---
1 | Happy to Neutral | Systolic BP | 18.933 | 14.2200 | 3.6716 | 5.157 | 14 | 0.000
 | | Diastolic BP | 25.467 | 8.0699 | 2.0836 | 12.22 | 14 | 0.000
 | | Heart Rate | 18.933 | 14.220 | 3.6716 | 5.157 | 14 | 0.000
2 | Neutral to Angry | Systolic BP | −22.133 | 10.5347 | 2.7201 | −8.137 | 14 | 0.000
 | | Diastolic BP | 36.266 | 8.0664 | 2.0827 | 5.157 | 14 | 0.000
 | | Heart Rate | −24.800 | 13.3962 | 3.4589 | −7.170 | 14 | 0.000
3 | Angry to Sad | Systolic BP | 46.333 | 5.6904 | 1.4693 | 31.535 | 14 | 0.000
 | | Diastolic BP | 33.666 | 3.8668 | 0.9984 | 33.720 | 14 | 0.000
 | | Heart Rate | 2.000 | 13.1909 | 3.4059 | 0.587 | 14 | 0.566
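A quick consistency check on the paired t-test results above: the t-statistic equals the mean paired difference divided by its standard error (standard deviation over √n). A minimal sketch reproducing the Happy-to-Neutral systolic blood pressure row (the helper name `paired_t` is an assumption, not from the paper):

```python
import math

def paired_t(mean_diff, sd_diff, n):
    """t-statistic from summary statistics of paired differences."""
    se = sd_diff / math.sqrt(n)  # standard error of the mean difference
    return mean_diff / se

# Happy-to-Neutral systolic BP: mean difference 18.933, SD 14.2200, n = 15
t = paired_t(18.933, 14.2200, 15)
print(round(t, 3))  # 5.157, matching the tabulated t (df = n - 1 = 14)
```

The standard error column can be recomputed the same way: 14.2200/√15 ≈ 3.6716, as tabulated.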
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Rathour, N.; Alshamrani, S.S.; Singh, R.; Gehlot, A.; Rashid, M.; Akram, S.V.; AlGhamdi, A.S. IoMT Based Facial Emotion Recognition System Using Deep Convolution Neural Networks. Electronics 2021, 10, 1289. https://doi.org/10.3390/electronics10111289