A Multimodal IoT-Based Locomotion Classification System Using Features Engineering and Recursive Neural Network
Abstract
1. Introduction
- Fusion of features from three different sensor modalities (ambient, motion, and vision) for multimodal locomotion prediction.
- Accurate skeleton modeling for the extracted human silhouette, validated through per-point confidence levels and skeleton point recognition accuracies.
- A substantial improvement in locomotion classification accuracy, achieved by combining several feature extraction methods with feature fusion and by raising the confidence levels of the detected human skeleton points; a minimal fusion sketch follows this list.
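To make the fusion step concrete, the sketch below shows one plausible reading of the pipeline: per-window feature vectors from the three modalities are concatenated into a single fused descriptor (see Sections 3.3 and 3.4). The function and array names are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def fuse_features(inertial_feats: np.ndarray,
                  lpcc_feats: np.ndarray,
                  vision_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-window feature vectors from the three modalities.

    Each argument is assumed to be a 1-D array of features already
    extracted for one time window (Sections 3.3.1-3.3.3).
    """
    return np.concatenate([inertial_feats, lpcc_feats, vision_feats])

# Illustrative sizes only: 12 Pearson-selected inertial features,
# 10 LPCC features, 20 vision (skeleton-based) features.
fused = fuse_features(np.random.rand(12), np.random.rand(10), np.random.rand(20))
print(fused.shape)  # (42,) -- one fused descriptor per window
```

Early (feature-level) fusion of this kind keeps the downstream classifier simple, since a single vector per window carries all three modalities.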
2. Literature Review
2.1. Sensor or Vision-Based Systems
2.2. Multimodal Systems
3. Materials and Methods
3.1. System Methodology
3.2. IoT-Based Multimodal Data Pre-Processing
3.3. Features Engineering
3.3.1. Pearson Correlation
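Section 3.3.1 applies Pearson correlation during feature engineering. As a hedged illustration only (the paper's exact channel pairing and thresholds are not reproduced here), Pearson's r between two sensor channels is their covariance normalized by the product of their standard deviations:

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two equal-length signals."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc)))

# Hypothetical example: two nearly in-phase accelerometer axes correlate highly.
t = np.linspace(0.0, 1.0, 200)
ax = np.sin(2 * np.pi * 2 * t)
ay = np.sin(2 * np.pi * 2 * t + 0.3)
print(round(pearson_r(ax, ay), 3))  # close to 1.0
```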
3.3.2. Linear Prediction Cepstral Coefficients (LPCC)
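A minimal sketch of LPCC extraction follows, assuming the standard autocorrelation/Levinson-Durbin recipe with the usual LPC-to-cepstrum recursion; the paper's windowing, pre-emphasis, and model order are not given here, so the choices below are illustrative.

```python
import numpy as np

def lpcc(signal: np.ndarray, order: int = 10) -> np.ndarray:
    """Linear Prediction Cepstral Coefficients for one signal window.

    Textbook recipe: autocorrelation -> Levinson-Durbin LPC -> cepstrum.
    """
    # Autocorrelation up to the model order
    r = np.array([signal[:len(signal) - k] @ signal[k:] for k in range(order + 1)])
    # Levinson-Durbin recursion for LPC coefficients a[1..order]
    a, err = np.zeros(order + 1), r[0]
    for i in range(1, order + 1):
        k = (r[i] - a[1:i] @ r[1:i][::-1]) / err
        a[1:i] = a[1:i] - k * a[1:i][::-1]
        a[i], err = k, err * (1.0 - k * k)
    # Cepstral recursion: c_n = a_n + sum_{k<n} (k/n) * c_k * a_{n-k}
    c = np.zeros(order + 1)
    for n in range(1, order + 1):
        c[n] = a[n] + sum((k / n) * c[k] * a[n - k] for k in range(1, n))
    return c[1:]

print(lpcc(np.random.randn(256)).shape)  # (10,) cepstral coefficients
```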
3.3.3. Spider Local Image Features
3.4. Features Fusion and Optimization
3.4.1. Features Fusion
3.4.2. Features Optimization
3.5. Locomotion Classification via Recursive Neural Network
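As a sketch of the classification stage, the PyTorch snippet below feeds windows of fused feature vectors through a plain recurrent layer. The layer type (`nn.RNN`), the layer sizes, and the 17-class output (matching the Opportunity++ confusion matrix in Section 4.2.2) are assumptions for illustration, not the authors' reported architecture.

```python
import torch
import torch.nn as nn

class LocomotionRNN(nn.Module):
    """Recurrent classifier over sequences of fused feature vectors."""

    def __init__(self, feat_dim: int, hidden: int = 64, n_classes: int = 17):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time, feat_dim)
        _, h = self.rnn(x)              # h: (1, batch, hidden) final state
        return self.head(h.squeeze(0))  # class logits per window

# Hypothetical usage: 42-D fused features, windows of 30 time steps.
model = LocomotionRNN(feat_dim=42)
logits = model(torch.randn(8, 30, 42))
print(logits.shape)  # torch.Size([8, 17])
```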
4. Experimental Setup and Evaluation
4.1. Dataset Descriptions
4.1.1. HWU-USP Dataset
4.1.2. Opportunity++ Dataset
4.2. Experimental Results
4.2.1. Experiment 1: The HWU-USP Dataset
4.2.2. Experiment 2: The Opportunity++ Dataset
4.2.3. Experiment 3: Evaluation Using Other Conventional Systems
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Ahmad, J.; Nadeem, A.; Bobasu, S. Human Body Parts Estimation and Detection for Physical Sports Movements. In Proceedings of the 2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE), Islamabad, Pakistan, 6–7 March 2019; pp. 104–109.
2. Pervaiz, M.; Ahmad, J. Artificial Neural Network for Human Object Interaction System Over Aerial Images. In Proceedings of the 2023 4th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023; pp. 1–6.
3. Quaid, M.A.K.; Ahmad, J. Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm. Multimed. Tools Appl. 2020, 79, 6061–6083.
4. Danyal; Azmat, U. Human Activity Recognition via Smartphone Embedded Sensor using Multi-Class SVM. In Proceedings of the 2022 24th International Multitopic Conference (INMIC), Islamabad, Pakistan, 21–22 October 2022; pp. 1–7.
5. Ahmad, J.; Batool, M.; Kim, K. Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors. Appl. Sci. 2020, 10, 7122.
6. Ahmad, J.; Mahmood, M. Students’ behavior mining in e-learning environment using cognitive processes with information technologies. Educ. Inf. Technol. 2019, 24, 2797–2821.
7. Kang, I.; Molinaro, D.D.; Duggal, S.; Chen, Y.; Kunapuli, P.; Young, A.J. Real-Time Gait Phase Estimation for Robotic Hip Exoskeleton Control During Multimodal Locomotion. IEEE Robot. Autom. Lett. 2021, 6, 3491–3497.
8. Mahmood, M.; Ahmad, J.; Kim, K. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors. Multimed. Tools Appl. 2020, 79, 6919–6950.
9. Batool, M.; Alotaibi, S.S.; Alatiyyah, M.H.; Alnowaiser, K.; Aljuaid, H.; Jalal, A.; Park, J. Depth Sensors-Based Action Recognition Using a Modified K-Ary Entropy Classifier. IEEE Access 2023.
10. Ghadi, Y.Y.; Javeed, M.; Alarfaj, M.; Al Shloul, T.; Alsuhibany, S.A.; Jalal, A.; Kamal, S.; Kim, D.-S. MS-DLD: Multi-Sensors Based Daily Locomotion Detection via Kinematic-Static Energy and Body-Specific HMMs. IEEE Access 2022, 10, 23964–23979.
11. Figueiredo, J.; Carvalho, S.P.; Gonçalves, D.; Moreno, J.C.; Santos, C.P. Daily Locomotion Recognition and Prediction: A Kinematic Data-Based Machine Learning Approach. IEEE Access 2020, 8, 33250–33262.
12. Javeed, M.; Shorfuzzaman, M.; Alsufyani, N.; Chelloug, S.A.; Jalal, A.; Park, J. Physical human locomotion prediction using manifold regularization. PeerJ Comput. Sci. 2022, 8, e1105.
13. Wang, L.; Ciliberto, M.; Gjoreski, H.; Lago, P.; Murao, K.; Okita, T.; Roggen, D. Locomotion and Transportation Mode Recognition from GPS and Radio Signals: Summary of SHL Challenge 2021. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2021 ACM International Symposium on Wearable Computers (UbiComp ’21), New York, NY, USA, 21–26 September 2021; pp. 412–422.
14. Chavarriaga, R.; Sagha, H.; Calatroni, A.; Digumarti, S.T.; Tröster, G.; Millán, J.D.R.; Roggen, D. The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognit. Lett. 2013, 34, 2033–2042.
15. Ordóñez, F.; Roggen, D. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition. Sensors 2016, 16, 115.
16. De, D.; Bharti, P.; Das, S.K.; Chellappan, S. Multimodal Wearable Sensing for Fine-Grained Activity Recognition in Healthcare. IEEE Internet Comput. 2015, 19, 26–35.
17. Chung, S.; Lim, J.; Noh, K.J.; Kim, G.; Jeong, H. Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Sensors 2019, 19, 1716.
18. Ahmad, J.; Kim, Y. Dense depth maps-based human pose tracking and recognition in dynamic scenes using ridge data. In Proceedings of the 2014 11th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Seoul, Republic of Korea, 26–29 August 2014; pp. 119–124.
19. Muneeb, M.; Rustam, H.; Ahmad, J. Automate Appliances via Gestures Recognition for Elderly Living Assistance. In Proceedings of the 2023 4th International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023; pp. 1–6.
20. Javeed, M.; Ahmad, J. Body-worn Hybrid-Sensors based Motion Patterns Detection via Bag-of-features and Fuzzy Logic Optimization. In Proceedings of the 2021 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 9–10 November 2021; pp. 1–7.
21. Shloul, T.A.; Javeed, M.; Gochoo, M.; Alsuhibany, S.A.; Ghadi, Y.Y.; Jalal, A.; Park, J. Student’s health exercise recognition tool for E-learning education. Intell. Autom. Soft Comput. 2023, 35, 149–161.
22. Gochoo, M.; Akhter, I.; Jalal, A.; Kim, K. Stochastic remote sensing event classification over adaptive posture estimation via multifused data and deep belief network. Remote Sens. 2021, 13, 912.
23. Azmat, U.; Ahmad, J. Smartphone Inertial Sensors for Human Locomotion Activity Recognition based on Template Matching and Codebook Generation. In Proceedings of the 2021 International Conference on Communication Technologies (ComTech), Rawalpindi, Pakistan, 21–22 September 2021; pp. 109–114.
24. Ahmad, J.; Quaid, M.A.K.; Hasan, A.S. Wearable Sensor-Based Human Behavior Understanding and Recognition in Daily Life for Smart Environments. In Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018; pp. 105–110.
25. Ahmad, J.; Quaid, M.A.K.; Kim, K. A Wrist Worn Acceleration Based Human Motion Analysis and Classification for Ambient Smart Home System. J. Electr. Eng. Technol. 2019, 14, 1733–1739.
26. Zhuo, S.; Sherlock, L.; Dobbie, G.; Koh, Y.S.; Russello, G.; Lottridge, D. Real-time Smartphone Activity Classification Using Inertial Sensors—Recognition of Scrolling, Typing, and Watching Videos While Sitting or Walking. Sensors 2020, 20, 655.
27. Pazhanirajan, S.; Dhanalakshmi, P. EEG Signal Classification using Linear Predictive Cepstral Coefficient Features. Int. J. Comput. Appl. 2013, 73, 28–31.
28. Fausto, F.; Cuevas, E.; Gonzales, A. A New Descriptor for Image Matching Based on Bionic Principles. Pattern Anal. Appl. 2017, 20, 1245–1259.
29. Javeed, M.; Jalal, A.; Kim, K. Wearable Sensors based Exertion Recognition using Statistical Features and Random Forest for Physical Healthcare Monitoring. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Islamabad, Pakistan, 12–16 January 2021; pp. 512–517.
30. Sen, B.; Hussain, S.A.I.; Gupta, A.D.; Gupta, M.K.; Pimenov, D.Y.; Mikołajczyk, T. Application of Type-2 Fuzzy AHP-ARAS for Selecting Optimal WEDM Parameters. Metals 2021, 11, 42.
31. Zhang, X.; Jiang, R.; Wang, T.; Wang, J. Recursive Neural Network for Video Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3025–3036.
32. Murad, A.; Pyun, J.-Y. Deep Recurrent Neural Networks for Human Activity Recognition. Sensors 2017, 17, 2556.
33. Ranieri, C.M.; MacLeod, S.; Dragone, M.; Vargas, P.A.; Romero, R.F. Activity Recognition for Ambient Assisted Living with Videos, Inertial Units and Ambient Sensors. Sensors 2021, 21, 768.
34. Ciliberto, M.; Rey, V.F.; Calatroni, A.; Lukowicz, P.; Roggen, D. Opportunity++: A Multimodal Dataset for Video- and Wearable, Object and Ambient Sensors-based Human Activity Recognition. Front. Comput. Sci. 2021, 3.
35. Akhter, I.; Jalal, A.; Kim, K. Pose Estimation and Detection for Event Recognition using Sense-Aware Features and Adaboost Classifier. In Proceedings of the 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Islamabad, Pakistan, 12–16 January 2021; pp. 500–505.
36. Javeed, M.; Jalal, A. Deep Activity Recognition based on Patterns Discovery for Healthcare Monitoring. In Proceedings of the 2023 International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan, 20–22 February 2023.
37. Nadeem, A.; Ahmad, J.; Kim, K. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimed. Tools Appl. 2021, 80, 21465–21498.
38. Hajjej, F.; Javeed, M.; Ksibi, A.; Alarfaj, M.; Alnowaiser, K.; Jalal, A.; Alsufyani, N.; Shorfuzzaman, M.; Park, J. Deep Human Motion Detection and Multi-Features Analysis for Smart Healthcare Learning Tools. IEEE Access 2022, 10, 116527–116539.
39. Memmesheimer, R.; Theisen, N.; Paulus, D. Gimme Signals: Discriminative signal encoding for multimodal activity recognition. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 10394–10401.
40. Martínez-Villaseñor, L.; Ponce, H.; Brieva, J.; Moya-Albor, E.; Núñez-Martínez, J.; Peñafort-Asturiano, C. UP-Fall Detection Dataset: A Multimodal Approach. Sensors 2019, 19, 1988.
41. Piechocki, R.J.; Wang, X.; Bocus, M.J. Multimodal sensor fusion in the latent representation space. Sci. Rep. 2023, 13, 2005.
42. Al-Amin, M.; Tao, W.; Doell, D.; Lingard, R.; Yin, Z.; Leu, M.C.; Qin, R. Action Recognition in Manufacturing Assembly using Multimodal Sensor Fusion. Procedia Manuf. 2019, 39, 158–167.
43. Gao, W.; Zhang, L.; Teng, Q.; He, J.; Wu, H. DanHAR: Dual Attention Network for multimodal human activity recognition using wearable sensors. Appl. Soft Comput. 2021, 111, 107728.
44. Ahmad, J.; Batool, M.; Kim, K. Sustainable Wearable System: Human Behavior Modeling for Life-Logging Activities Using K-Ary Tree Hashing Classifier. Sustainability 2020, 12, 10324.
Confusion matrix for locomotion classification over the HWU-USP dataset.

| | ms | tk | mbc | mct | st | up | rn | ul | cd |
|---|---|---|---|---|---|---|---|---|---|
| ms | 0.85 | 0 | 0 | 0 | 0.05 | 0 | 0.1 | 0 | 0 |
| tk | 0 | 0.86 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0.04 |
| mbc | 0.01 | 0 | 0.89 | 0 | 0 | 0.05 | 0 | 0.05 | 0 |
| mct | 0 | 0.1 | 0 | 0.90 | 0 | 0 | 0 | 0 | 0 |
| st | 0 | 0 | 0.12 | 0 | 0.88 | 0 | 0 | 0 | 0 |
| up | 0.04 | 0 | 0 | 0.07 | 0 | 0.89 | 0 | 0 | 0 |
| rn | 0 | 0 | 0 | 0 | 0.14 | 0 | 0.86 | 0 | 0.1 |
| ul | 0 | 0.03 | 0 | 0 | 0.1 | 0 | 0 | 0.87 | 0 |
| cd | 0 | 0 | 0.1 | 0 | 0 | 0.01 | 0 | 0 | 0.89 |
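Since each row of the matrix is (approximately) normalized, the mean of the diagonal recovers the overall classification accuracy. The snippet below is a minimal sanity check rather than the authors' evaluation code; it yields about 0.88 for this matrix and about 0.87 for the Opportunity++ matrix further below, consistent with the comparison table in Section 4.2.3.

```python
import numpy as np

# Diagonal of the HWU-USP confusion matrix above (per-class accuracies)
hwu_usp_diag = [0.85, 0.86, 0.89, 0.90, 0.88, 0.89, 0.86, 0.87, 0.89]
print(round(np.mean(hwu_usp_diag), 4))  # 0.8767 -> ~0.88

# Diagonal of the Opportunity++ confusion matrix (17 classes)
opp_diag = [0.89, 0.85, 0.87, 0.87, 0.85, 0.90, 0.84, 0.85, 0.88,
            0.89, 0.80, 0.87, 0.86, 0.86, 0.88, 0.90, 0.88]
print(round(np.mean(opp_diag), 4))  # 0.8671 -> ~0.87
```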
Skeleton point confidence levels, distances, and recognition accuracies for the HWU-USP dataset.

| Human Skeleton Points | Confidence Level | Distance | Recognition Accuracy |
|---|---|---|---|
| Head | 0.81 | 13.6 | 0.91 |
| Left shoulder | 0.80 | 12.5 | 0.83 |
| Right shoulder | 0.77 | 11.2 | 0.75 |
| Left elbow | 0.69 | 14.5 | 0.97 |
| Right elbow | 0.74 | 13.6 | 0.91 |
| Left wrist | 0.80 | 9.7 | 0.65 |
| Right wrist | 0.78 | 10.8 | 0.72 |
| Torso | 0.80 | 13.1 | 0.87 |
| Left knee | 0.72 | 12.9 | 0.86 |
| Right knee | 0.75 | 11.7 | 0.78 |
| Left ankle | 0.66 | 12.4 | 0.83 |
| Right ankle | 0.68 | 11.9 | 0.79 |
| Mean | 0.75 | | 0.82 |
Confusion matrix for locomotion classification over the Opportunity++ dataset.

| | od2 | cd1 | od1 | cd2 | cf | odw | of | cdw | cdr1 | odr2 | odr1 | cdr2 | odr3 | ct | dc | cdr3 | ts |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| od2 | 0.89 | 0 | 0 | 0.01 | 0 | 0.05 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| cd1 | 0 | 0.85 | 0.01 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04 | 0 | 0 |
| od1 | 0.02 | 0 | 0.87 | 0 | 0 | 0 | 0 | 0.01 | 0 | 0.05 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0 |
| cd2 | 0 | 0.1 | 0 | 0.87 | 0 | 0 | 0 | 0 | 0.01 | 0 | 0.02 | 0 | 0 | 0 | 0 | 0 | 0 |
| cf | 0 | 0 | 0 | 0.05 | 0.85 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| odw | 0 | 0 | 0 | 0 | 0.03 | 0.90 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03 | 0 | 0.04 |
| of | 0 | 0 | 0 | 0 | 0 | 0 | 0.84 | 0 | 0 | 0 | 0 | 0.06 | 0 | 0 | 0 | 0.1 | 0 |
| cdw | 0 | 0 | 0 | 0 | 0 | 0.01 | 0 | 0.85 | 0.04 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 |
| cdr1 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0 | 0 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| odr2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.06 | 0 | 0.89 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 |
| odr1 | 0.1 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.80 | 0 | 0 | 0 | 0 | 0 | 0 |
| cdr2 | 0 | 0.02 | 0 | 0 | 0.01 | 0 | 0 | 0 | 0 | 0 | 0 | 0.87 | 0 | 0 | 0 | 0 | 0.1 |
| odr3 | 0 | 0 | 0 | 0 | 0 | 0.03 | 0 | 0 | 0.01 | 0 | 0 | 0 | 0.86 | 0 | 0.1 | 0 | 0 |
| ct | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0.04 | 0 | 0.86 | 0 | 0 | 0 |
| dc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 0 | 0 | 0 | 0.88 | 0 | 0 |
| cdr3 | 0 | 0 | 0.02 | 0 | 0 | 0 | 0 | 0.03 | 0 | 0 | 0 | 0.05 | 0 | 0 | 0 | 0.90 | 0 |
| ts | 0 | 0 | 0 | 0 | 0 | 0 | 0.02 | 0 | 0 | 0.1 | 0 | 0 | 0 | 0 | 0 | 0 | 0.88 |
Skeleton point confidence levels, distances, and recognition accuracies for the Opportunity++ dataset.

| Human Skeleton Points | Confidence Level | Distance | Recognition Accuracy |
|---|---|---|---|
| Head | 0.85 | 14.2 | 0.95 |
| Left shoulder | 0.86 | 13.7 | 0.91 |
| Right shoulder | 0.85 | 12.9 | 0.86 |
| Left elbow | 0.79 | 11.5 | 0.77 |
| Right elbow | 0.78 | 13.2 | 0.88 |
| Left wrist | 0.74 | 10.9 | 0.73 |
| Right wrist | 0.69 | 12.7 | 0.85 |
| Torso | 0.87 | 11.2 | 0.75 |
| Left knee | 0.77 | 10.1 | 0.67 |
| Right knee | 0.79 | 14.0 | 0.93 |
| Left ankle | 0.60 | 11.1 | 0.74 |
| Right ankle | 0.59 | 12.6 | 0.84 |
| Mean | 0.76 | | 0.83 |
Comparison of the proposed system against conventional IoT-based multimodal systems.

IoT-Based Multimodal System | Modalities | Accuracy
---|---|---
Memmesheimer et al. [39] | Ambient + Motion + Vision | 0.86 |
Martínez-Villaseñor et al. [40] | Ambient + Vision | 0.65 |
Piechocki et al. [41] | Ambient + Vision | 0.74 |
Al-Amin et al. [42] | Motion + Vision | 0.85 |
Gao et al. [43] | Ambient + Motion | 0.83 |
Proposed Multimodal IoT-based Locomotion Prediction System | Ambient + Motion + Vision | 0.87 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Javeed, M.; Mudawi, N.A.; Alabduallah, B.I.; Jalal, A.; Kim, W. A Multimodal IoT-Based Locomotion Classification System Using Features Engineering and Recursive Neural Network. Sensors 2023, 23, 4716. https://doi.org/10.3390/s23104716