A Unified Local–Global Feature Extraction Network for Human Gait Recognition Using Smartphone Sensors
Abstract
1. Introduction
- Inspired by the multi-scale approach, the proposed model leverages multi-scale convolutional neural networks [33], a fusion network, and a weight update sub-network, and it combines them in an end-to-end manner to address the covariate issues.
- In particular, it aims to highlight relevant local features at each scale with respect to label-based gait patterns by incorporating weight update sub-networks (Ws). Furthermore, global features are extracted with the help of a fusion network. Discriminative local and global features handle intra-class variations and inter-class variations, respectively.
- The proposed framework has undergone extensive empirical evaluation on four benchmark inertial gait datasets: OU-ISIR, whuGAIT, Gait-mob-ACC, and IDNet. The results are compared with those of many state-of-the-art gait recognition models, such as IdNet [23], CNN [34], LSTM [30], DeepConv [35], and CNN+LSTM [24], and the proposed model outperforms all of them.
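The pipeline sketched in these bullets (multi-scale views of the inertial signal, per-scale weighting by Ws, and fusion into a global descriptor) can be illustrated with a minimal NumPy sketch. The moving-average downsampling and softmax weighting here are illustrative stand-ins for the learned convolutional scales and weight update sub-networks, not the paper's actual layers, and all names are ours:

```python
import numpy as np

def multi_scale_transform(x, scales=(1, 2, 4)):
    """Build multi-scale views of a 1-D gait signal: moving-average
    smoothing followed by downsampling, as in multi-scale CNNs [33]."""
    views = []
    for s in scales:
        kernel = np.ones(s) / s                   # window of length s
        smoothed = np.convolve(x, kernel, mode="valid")
        views.append(smoothed[::s])               # keep every s-th sample
    return views

def weighted_fusion(features, logits):
    """Stand-in for the weight update sub-networks (Ws): softmax weights
    re-scale each scale's feature vector before global fusion."""
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    # concatenate weighted per-scale features into one global descriptor
    return np.concatenate([wi * f for wi, f in zip(w, features)])
```

In the actual model the per-scale weights are learned end-to-end from the classification loss; here fixed logits merely show where that weighting slots into the pipeline.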
2. Related Work
2.1. Sensor-Based Gait Identification
2.2. Deep Learning Approaches on Gait Analysis
3. System Overview
3.1. Proposed Approach
3.2. Fusion Network
3.3. Training and Classification
4. Experimental Setup and Result Analysis
4.1. Different Sensor-Based Gait Datasets
4.2. Network Architecture
- Experiments on the effect of using the proposed weight update sub-networks (Ws) in various CNN architectures.
- Performance of the proposed methods in handling gait data collected under different covariate conditions.
- Evaluation of the proposed method for identification and authentication.
4.3. Experiments on the Effect of Using the Proposed Weight Update Sub-Networks (Ws) in Various CNN Architectures
4.4. Performance Evaluation of the Proposed Network under Different Covariate Conditions
4.5. Identification and Authentication of the Gait-Based Biometric System
4.5.1. Experimental Results on Identification
4.5.2. Experiments on Authentication
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Tao, W.; Liu, T.; Zheng, R.; Feng, H. Gait analysis using wearable sensors. Sensors 2012, 12, 2255–2283.
- Castermans, T.; Duvinage, M.; Cheron, G.; Dutoit, T. Towards effective non-invasive brain-computer interfaces dedicated to gait rehabilitation systems. Brain Sci. 2013, 4, 1–48.
- Hamid, H.; Naseer, N.; Nazeer, H.; Khan, M.J.; Khan, R.A.; Shahbaz Khan, U. Analyzing Classification Performance of fNIRS-BCI for Gait Rehabilitation Using Deep Neural Networks. Sensors 2022, 22, 1932.
- Andreoni, G.; Mazzola, M.; Zambarbieri, D.; Forzoni, L.; D’Onofrio, S.; Viotti, S.; Santambrogio, G.C.; Baselli, G. Motion analysis and eye tracking technologies applied to portable ultrasound systems user interfaces evaluation. In Proceedings of the 2013 International Conference on Computer Medical Applications (ICCMA), Sousse, Tunisia, 20–22 January 2013; pp. 1–6.
- Katona, J.; Kovari, A. A brain–computer interface project applied in computer engineering. IEEE Trans. Educ. 2016, 59, 319–326.
- Katona, J. A review of human–computer interaction and virtual reality research fields in cognitive InfoCommunications. Appl. Sci. 2021, 11, 2646.
- Hong, K.S.; Khan, M.J. Hybrid brain–computer interface techniques for improved classification accuracy and increased number of commands: A review. Front. Neurorobot. 2017, 11, 35.
- Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Speed control of Festo Robotino mobile robot using NeuroSky MindWave EEG headset based brain-computer interface. In Proceedings of the 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Wrocław, Poland, 16–18 October 2016; pp. 251–256.
- Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Examine the effect of different web-based media on human brain waves. In Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Debrecen, Hungary, 11–14 September 2017; pp. 407–412.
- Costa, Á.; Iáñez, E.; Úbeda, A.; Hortal, E.; Del-Ama, A.J.; Gil-Agudo, A.; Azorín, J.M. Decoding the attentional demands of gait through EEG gamma band features. PLoS ONE 2016, 11, e0154136.
- Katona, J. Examination and comparison of the EEG based Attention Test with CPT and TOVA. In Proceedings of the 2014 IEEE 15th International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 19–21 November 2014; pp. 117–120.
- Katona, J. Clean and dirty code comprehension by eye-tracking based evaluation using GP3 eye tracker. Acta Polytech. Hung. 2021, 18, 79–99.
- Katona, J. Analyse the Readability of LINQ Code using an Eye-Tracking-based Evaluation. Acta Polytech. Hung. 2021, 18, 193–215.
- Chen, X.; Xu, J. Uncooperative gait recognition: Re-ranking based on sparse coding and multi-view hypergraph learning. Pattern Recognit. 2016, 53, 116–129.
- Choudhury, S.D.; Tjahjadi, T. Robust view-invariant multiscale gait recognition. Pattern Recognit. 2015, 48, 798–811.
- Luo, J.; Tang, J.; Tjahjadi, T.; Xiao, X. Robust arbitrary view gait recognition based on parametric 3D human body reconstruction and virtual posture synthesis. Pattern Recognit. 2016, 60, 361–377.
- Das, S.; Purnananda, H.Y.; Meher, S.; Sahoo, U.K. Minimum Spanning Tree Based Clustering for Human Gait Classification. In Proceedings of the IEEE Region 10 Conference (TENCON), Kochi, India, 17–20 October 2019; pp. 982–985.
- Xing, X.; Wang, K.; Yan, T.; Lv, Z. Complete canonical correlation analysis with application to multi-view gait recognition. Pattern Recognit. 2016, 50, 107–117.
- Zeng, W.; Wang, C.; Yang, F. Silhouette-based gait recognition via deterministic learning. Pattern Recognit. 2014, 47, 3568–3584.
- Sprager, S.; Juric, M.B. Inertial sensor-based gait recognition: A review. Sensors 2015, 15, 22089–22127.
- Ngo, T.T.; Makihara, Y.; Nagahara, H.; Mukaigawa, Y.; Yagi, Y. The largest inertial sensor-based gait database and performance evaluation of gait-based personal authentication. Pattern Recognit. 2014, 47, 228–237.
- Zou, Q.; Ni, L.; Wang, Q.; Li, Q.; Wang, S. Robust gait recognition by integrating inertial and RGBD sensors. IEEE Trans. Cybern. 2017, 48, 1136–1150.
- Gadaleta, M.; Rossi, M. IDNet: Smartphone-based gait recognition with convolutional neural networks. Pattern Recognit. 2018, 74, 25–37.
- Zou, Q.; Wang, Y.; Wang, Q.; Zhao, Y.; Li, Q. Deep Learning-Based Gait Recognition Using Smartphones in the Wild. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3197–3212.
- De Marsico, M.; Mecca, A.; Barra, S. Walking in a smart city: Investigating the gait stabilization effect for biometric recognition via wearable sensors. Comput. Electr. Eng. 2019, 80, 106501.
- Bashar, S.K.; Al Fahim, A.; Chon, K.H. Smartphone Based Human Activity Recognition with Feature Selection and Dense Neural Network. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 5888–5891.
- Mekruksavanich, S.; Jitpattanakul, A. Smartwatch-based human activity recognition using hybrid LSTM network. In Proceedings of the 2020 IEEE Sensors, Rotterdam, The Netherlands, 25–28 October 2020; pp. 1–4.
- Mondal, R.; Mukherjee, D.; Singh, P.K.; Bhateja, V.; Sarkar, R. A New Framework for Smartphone Sensor-Based Human Activity Recognition Using Graph Neural Network. IEEE Sens. J. 2020, 21, 11461–11468.
- Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Cell phone-based biometric identification. In Proceedings of the 4th IEEE International Conference on Biometrics, Theory, Applications and Systems (BTAS), Washington, DC, USA, 27–29 September 2010; pp. 1–7.
- Watanabe, Y.; Kimura, M. Gait identification and authentication using LSTM based on 3-axis accelerations of smartphone. Procedia Comput. Sci. 2020, 176, 3873–3880.
- Nemes, S.; Antal, M. Feature learning for accelerometer based gait recognition. In Proceedings of the 2021 IEEE 15th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, Romania, 19–21 May 2021; pp. 479–484.
- Loog, M.; Duin, R.P.W.; Haeb-Umbach, R. Multiclass linear dimension reduction by weighted pairwise Fisher criteria. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 762–766.
- Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. OverFeat: Integrated recognition, localization and detection using convolutional networks. In Proceedings of the 2nd International Conference on Learning Representations (ICLR 2014), Banff, AB, Canada, 14–16 April 2014.
- Delgado-Escaño, R.; Castro, F.M.; Cózar, J.R.; Marín-Jiménez, M.J.; Guil, N. An end-to-end multi-task and fusion CNN for inertial-based gait recognition. IEEE Access 2018, 7, 1897–1908.
- Ordóñez, F.J.; Roggen, D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115.
- Raccagni, C.; Gaßner, H.; Eschlboeck, S.; Boesch, S.; Krismer, F.; Seppi, K.; Poewe, W.; Eskofier, B.M.; Winkler, J.; Wenning, G.; et al. Sensor-based gait analysis in atypical parkinsonian disorders. Brain Behav. 2018, 8, e00977.
- Wei, Z.; Qinghui, W.; Muqing, D.; Yiqi, L. A new inertial sensor-based gait recognition method via deterministic learning. In Proceedings of the 2015 34th Chinese Control Conference (CCC), Hangzhou, China, 28–30 July 2015; pp. 3908–3913.
- Donath, L.; Faude, O.; Lichtenstein, E.; Pagenstert, G.; Nüesch, C.; Mündermann, A. Mobile inertial sensor based gait analysis: Validity and reliability of spatiotemporal gait characteristics in healthy seniors. Gait Posture 2016, 49, 371–374.
- Tran, L.; Choi, D. Data augmentation for inertial sensor-based gait deep neural network. IEEE Access 2020, 8, 12364–12378.
- Khabir, K.M.; Siraj, M.S.; Ahmed, M.; Ahmed, M.U. Prediction of gender and age from inertial sensor-based gait dataset. In Proceedings of the 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Spokane, WA, USA, 30 May–2 June 2019; pp. 371–376.
- Nickel, C.; Brandt, H.; Busch, C. Classification of acceleration data for biometric gait recognition on mobile devices. In Proceedings of the BIOSIG 2011–Proceedings of the Biometrics Special Interest Group, Darmstadt, Germany, 8–9 September 2011.
- Juefei-Xu, F.; Bhagavatula, C.; Jaech, A.; Prasad, U.; Savvides, M. Gait-ID on the move: Pace independent human identification using cell phone accelerometer dynamics. In Proceedings of the 5th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), Arlington, VA, USA, 23–27 September 2012; pp. 8–15.
- Kumar, P.; Mukherjee, S.; Saini, R.; Kaushik, P.; Roy, P.P.; Dogra, D.P. Multimodal gait recognition with inertial sensor data and video using evolutionary algorithm. IEEE Trans. Fuzzy Syst. 2018, 27, 956–965.
- Gong, J.; Goldman, M.D.; Lach, J. DeepMotion: A deep convolutional neural network on inertial body sensors for gait assessment in multiple sclerosis. In Proceedings of the IEEE Wireless Health (WH), Bethesda, MD, USA, 25–27 October 2016; pp. 1–8.
- Sena, J.; Barreto, J.; Caetano, C.; Cramer, G.; Schwartz, W.R. Human activity recognition based on smartphone and wearable sensors using multiscale DCNN ensemble. Neurocomputing 2021, 444, 226–243.
- Yao, S.; Hu, S.; Zhao, Y.; Zhang, A.; Abdelzaher, T. DeepSense: A unified deep learning framework for time-series mobile sensing data processing. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 May 2017; pp. 351–360.
- Hoang, H.V.; Tran, M.T. DeepSense-Inception: Gait identification from inertial sensors with inception-like architecture and recurrent network. In Proceedings of the 13th International Conference on Computational Intelligence and Security (CIS), Hong Kong, China, 15–18 December 2017; pp. 594–598.
- Wu, Z.; Huang, Y.; Wang, L.; Wang, X.; Tan, T. A comprehensive study on cross-view gait based human identification with deep CNNs. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 209–226.
- Chao, H.; He, Y.; Zhang, J.; Feng, J. GaitSet: Regarding gait as a set for cross-view gait recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8126–8133.
- Fu, Y.; Wei, Y.; Zhou, Y.; Shi, H.; Huang, G.; Wang, X.; Yao, Z.; Huang, T. Horizontal pyramid matching for person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8295–8302.
- Zhang, Z.; Tran, L.; Yin, X.; Atoum, Y.; Liu, X.; Wan, J.; Wang, N. Gait recognition via disentangled representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4710–4719.
- Zhang, Z.; Tran, L.; Liu, F.; Liu, X. On learning disentangled representations for gait recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
- Cui, Z.; Chen, W.; Chen, Y. Multi-scale convolutional neural networks for time series classification. arXiv 2016, arXiv:1603.06995.
- Liu, C.; Wechsler, H. Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition. IEEE Trans. Image Process. 2002, 11, 467–476.
- Chen, D.S.; Jain, R.C. A robust backpropagation learning algorithm for function approximation. IEEE Trans. Neural Netw. 1994, 5, 467–479.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
Database Name | No. | Number of Subjects | Sampling Rate | Challenges
---|---|---|---|---
OU-ISIR | #1 / #2 | 745 / 408 | 100 Hz | A large database with few samples per subject; each subject walks on flat and sloped surfaces
whuGAIT | #1 / #2 | 118 / 20 | 50 Hz |
Gait-mob-ACC | #1 / #2 / #3 | 10 / 50 / 50 | | Variation of walking speed (normal and fast) with seven covariates: either/both hands in pocket, either hand holding a book, and either hand carrying loads
IDNet | - | 50 | 100 Hz | Different shoe types and different clothes
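Before such recordings can be fed to a CNN, the variable-length inertial streams are typically segmented into fixed-length windows at the dataset's sampling rate. A generic sliding-window sketch is shown below; the window length and overlap are illustrative choices, not the values used in the paper:

```python
import numpy as np

def sliding_windows(signal, fs, win_sec=2.56, overlap=0.5):
    """Segment a (T, C) inertial recording into fixed-length windows.

    fs:      sampling rate in Hz (e.g. 100 for OU-ISIR/IDNet, 50 for whuGAIT)
    win_sec: window length in seconds (illustrative)
    overlap: fraction of each window shared with the next one
    Returns an array of shape (num_windows, win_samples, C).
    """
    win = int(round(fs * win_sec))
    step = max(1, int(round(win * (1.0 - overlap))))
    starts = range(0, signal.shape[0] - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

For example, a 12.8 s six-channel recording at 50 Hz (640 samples) yields nine half-overlapping 128-sample windows.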
Layer Name | Input | Kernel Size | Number of Kernels | Feature Map | Number of Parameters
---|---|---|---|---|---
Conv1_1 | | | 32 | | 1760
MaxPool1 | | | | | /
Conv2_1 | | | 64 | | 10,304
Conv2_2 | | | 128 | | 41,088
MaxPool2 | | | | | /
Conv3_1 | | | 128 | | 49,280
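The parameter counts in this table can be sanity-checked with the standard 1-D convolution formula. The kernel widths and the 6-channel input (tri-axial accelerometer plus gyroscope) below are our assumptions that happen to reproduce the listed counts, not values stated in the table:

```python
def conv1d_params(in_ch, out_ch, k):
    """Parameters of a 1-D conv layer: one k-tap filter per
    (input, output) channel pair, plus one bias per output channel."""
    return out_ch * (in_ch * k + 1)

# Hypothetical kernel widths (9, 5, 5, 3) with a 6-channel input
# reproduce the table's parameter counts exactly:
assert conv1d_params(6, 32, 9) == 1760      # Conv1_1
assert conv1d_params(32, 64, 5) == 10304    # Conv2_1
assert conv1d_params(64, 128, 5) == 41088   # Conv2_2
assert conv1d_params(128, 128, 3) == 49280  # Conv3_1
```

The max-pooling layers contribute no parameters, which is why their count is shown as "/".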
s | Sub-Network | whuGAIT Rank-1 | whuGAIT Rank-5 | whuGAIT VR (FAR = ) | IDNet Rank-1 | IDNet Rank-5 | IDNet VR (FAR = ) | OU-ISIR Rank-1 | OU-ISIR Rank-5 | OU-ISIR VR (FAR = ) | Gait-Mob-ACC Rank-1 | Gait-Mob-ACC Rank-5 | Gait-Mob-ACC VR (FAR = )
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2 | AlexNet | 78.69 | 82.94 | 0.79 | 81.98 | 83.99 | 0.79 | 55.89 | 61.82 | 0.43 | 75.88 | 80.75 | 0.74
2 | VGG-14 | 83.66 | 86.43 | 0.83 | 86.24 | 90.91 | 0.86 | 57.76 | 61.83 | 0.44 | 75.94 | 79.04 | 0.74
2 | VGG-16 | 84.56 | 87.06 | 0.83 | 86.31 | 91.65 | 0.87 | 57.88 | 62.95 | 0.44 | 76.32 | 79.99 | 0.75
2 | ResNet-50 | 93.06 | 95.83 | 0.88 | 92.95 | 94.09 | 0.91 | 67.47 | 70.54 | 0.49 | 86.21 | 89.77 | 0.91
2 | CWs-AlexNet | 80.98 | 85.91 | 0.82 | 84.24 | 93.46 | 0.81 | 56.91 | 61.98 | 0.43 | 78.88 | 83.18 | 0.77
2 | CWs-VGG14 | 85.87 | 90.05 | 0.85 | 88.06 | 93.32 | 0.88 | 60.78 | 64.21 | 0.46 | 83.19 | 88.06 | 0.79
2 | CWs-VGG16 | 86.33 | 91.87 | 0.86 | 89.87 | 94.54 | 0.89 | 61.43 | 65.32 | 0.47 | 84.67 | 88.42 | 0.80
2 | CWs-ResNet50 | 95.04 | 97.76 | 0.91 | 96.11 | 97.97 | 0.93 | 69.53 | 73.89 | 0.51 | 89.55 | 93.67 | 0.94
2 | Proposed | 95.32 | 98.08 | 0.92 | 96.34 | 98.56 | 0.93 | 70.34 | 75.85 | 0.52 | 91.32 | 94.43 | 0.94
3 | AlexNet | 89.01 | 92.64 | 0.81 | 88.87 | 92.57 | 0.84 | 59.56 | 61.84 | 0.45 | 80.01 | 83.65 | 0.87
3 | VGG14 | 91.32 | 94.54 | 0.82 | 92.01 | 95.36 | 0.87 | 61.19 | 65.92 | 0.47 | 83.88 | 87.73 | 0.88
3 | VGG16 | 91.78 | 95.81 | 0.83 | 92.42 | 95.75 | 0.88 | 61.76 | 64.20 | 0.46 | 84.88 | 88.78 | 0.89
3 | ResNet50 | 93.54 | 97.47 | 0.88 | 96.24 | 97.96 | 0.94 | 68.54 | 72.86 | 0.51 | 89.65 | 93.64 | 0.92
3 | CWs-AlexNet | 90.54 | 93.04 | 0.83 | 90.76 | 94.35 | 0.85 | 61.82 | 65.47 | 0.47 | 83.71 | 85.45 | 0.90
3 | CWs-VGG14 | 93.12 | 96.13 | 0.89 | 94.02 | 97.61 | 0.89 | 64.89 | 68.78 | 0.48 | 87.21 | 91.65 | 0.91
3 | CWs-VGG16 | 93.03 | 96.43 | 0.89 | 94.24 | 97.86 | 0.90 | 65.56 | 69.35 | 0.49 | 88.45 | 93.67 | 0.93
3 | CWs-ResNet50 | 96.32 | 98.53 | 0.93 | 98.24 | 100 | 0.96 | 72.86 | 76.59 | 0.53 | 94.01 | 98.15 | 0.96
3 | Proposed | 97.36 | 99.78 | 0.94 | 99.96 | 100 | 0.96 | 73.38 | 77.51 | 0.54 | 94.05 | 98.64 | 0.96
4 | AlexNet | 87.43 | 92.56 | 0.83 | 86.21 | 90.64 | 0.83 | 53.54 | 58.43 | 0.40 | 72.88 | 77.67 | 0.86
4 | VGG14 | 87.43 | 91.01 | 0.84 | 88.12 | 92.89 | 0.86 | 60.19 | 64.84 | 0.46 | 80.32 | 84.43 | 0.88
4 | VGG16 | 88.34 | 92.43 | 0.84 | 88.51 | 93.30 | 0.86 | 60.48 | 65.48 | 0.46 | 82.98 | 85.13 | 0.89
4 | ResNet50 | 91.89 | 95.94 | 0.88 | 90.89 | 94.14 | 0.89 | 64.97 | 69.99 | 0.50 | 86.16 | 90.98 | 0.92
4 | CWs-AlexNet | 90.34 | 95.89 | 0.86 | 88.13 | 93.03 | 0.85 | 55.76 | 60.20 | 0.41 | 76.71 | 80.51 | 0.87
4 | CWs-VGG14 | 91.03 | 95.96 | 0.86 | 90.07 | 95.46 | 0.87 | 64.98 | 68.78 | 0.48 | 84.23 | 88.89 | 0.92
4 | CWs-VGG16 | 91.89 | 96.65 | 0.87 | 90.45 | 94.55 | 0.88 | 65.16 | 69.98 | 0.49 | 86.89 | 91.78 | 0.94
4 | CWs-ResNet50 | 94.12 | 98.27 | 0.90 | 94.39 | 97.06 | 0.92 | 69.97 | 73.32 | 0.52 | 91.13 | 91.98 | 0.95
4 | Proposed | 97.01 | 99.75 | 0.94 | 99.93 | 100 | 0.96 | 73.56 | 78.84 | 0.55 | 94.88 | 99.14 | 0.97
5 | AlexNet | 79.45 | 83.65 | 0.76 | 79.63 | 84.32 | 0.82 | 52.17 | 56.99 | 0.41 | 77.44 | 80.21 | 0.81
5 | VGG14 | 81.69 | 84.32 | 0.78 | 86.33 | 90.56 | 0.82 | 53.11 | 57.42 | 0.40 | 79.64 | 82.43 | 0.83
5 | VGG16 | 81.94 | 84.87 | 0.79 | 87.44 | 91.21 | 0.83 | 54.98 | 57.96 | 0.40 | 81.01 | 85.78 | 0.85
5 | ResNet50 | 90.15 | 94.47 | 0.89 | 92.31 | 97.09 | 0.90 | 62.33 | 69.08 | 0.50 | 88.56 | 92.11 | 0.90
5 | CWs-AlexNet | 83.17 | 85.54 | 0.79 | 82.54 | 87.54 | 0.83 | 54.12 | 58.73 | 0.42 | 80.32 | 84.04 | 0.82
5 | CWs-VGG14 | 89.12 | 91.35 | 0.85 | 89.32 | 93.76 | 0.87 | 61.41 | 64.56 | 0.45 | 84.36 | 87.55 | 0.89
5 | CWs-VGG16 | 89.54 | 92.98 | 0.85 | 91.07 | 93.86 | 0.88 | 61.59 | 65.32 | 0.46 | 86.14 | 90.76 | 0.91
5 | CWs-ResNet50 | 93.32 | 97.89 | 0.91 | 95.89 | 97.32 | 0.91 | 65.76 | 71.34 | 0.50 | 90.12 | 94.32 | 0.93
5 | Proposed | 96.38 | 98.64 | 0.93 | 98.76 | 99.32 | 0.94 | 72.16 | 76.18 | 0.53 | 93.89 | 98.56 | 0.96
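The rank-k identification accuracy and verification rate (VR) at a fixed false-acceptance rate reported in this table are standard biometric metrics. A generic sketch of how such numbers are computed (not the paper's own evaluation code) is:

```python
import numpy as np

def rank_k_accuracy(scores, labels, k):
    """Identification: scores is a (num_probes, num_gallery) similarity
    matrix; labels[i] is the gallery index of probe i's true identity.
    A probe counts as correct if its identity is among the top-k matches."""
    order = np.argsort(-scores, axis=1)          # best match first
    topk = order[:, :k]
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return float(np.mean(hits))

def verification_rate(genuine, impostor, far):
    """Authentication: pick the acceptance threshold so that the given
    fraction of impostor scores is accepted (the FAR), then report the
    fraction of genuine scores accepted at that threshold."""
    thr = np.quantile(impostor, 1.0 - far)       # accept if score > thr
    return float(np.mean(np.asarray(genuine) > thr))
```

Rank-1 accuracy is the usual closed-set identification rate, and rank-5 counts a probe as correct if the true identity appears anywhere in its five best matches, which is why rank-5 is never lower than rank-1 in the table.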
Methods | whuGAIT Dataset1 (118 Subjects) | whuGAIT Dataset2 (20 Subjects) | IDNet Dataset (50 Subjects) | OU-ISIR Dataset1 (745 Subjects) | OU-ISIR Dataset2 (408 Subjects) | Gait-Mob-ACC Dataset3 (50 Subjects)
---|---|---|---|---|---|---
IdNet [23] | 92.91% | 96.78% | 99.58% | 44.29% | 46.20% | 74.75% |
CNN [34] | 92.89% | 97.02% | 99.71% | 40.60% | 47.14% | 90.2% |
LSTM [30] | 91.88% | 96.98% | 99.46% | 66.36% | 65.32% | 81.65% |
DeepConv [35] | 92.25% | 96.80% | 99.24% | 37.33% | 41.32% | 86.23% |
CNN+LSTM [24] | 92.51% | 96.82% | 99.61% | 34.28% | 53.96% | 89.22% |
+LSTM [24] | 92.94% | 97.04% | 99.64% | - | - | - |
CNN+ [24] | 93.52% | 97.33% | 99.75% | - | - | - |
WMsCNN-Local | 93.36% | 98.28% | 99.81% | 65.74% | 72.13% | 90.49% |
WMsCNN-Local-Global | 95.75% | 98.98% | 99.96% | 73.56% | 76.42% | 94.71% |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Das, S.; Meher, S.; Sahoo, U.K. A Unified Local–Global Feature Extraction Network for Human Gait Recognition Using Smartphone Sensors. Sensors 2022, 22, 3968. https://doi.org/10.3390/s22113968