Neural Networks for Automatic Posture Recognition in Ambient-Assisted Living
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Acquisition
2.2. Data Processing
2.3. Neural Networks
2.3.1. Multi-Layer Perceptron
Feature Selection
Architecture
2.3.2. LSTM Sequence Neural Network
Feature Selection
Architecture
2.4. Database
3. Results
3.1. MLP Neural Network
3.2. LSTM Sequence Neural Network
- Set 1: A_pitch, A_roll, B_pitch, B_roll, µ2, Z_Head, Z_C7;
- Set 2: A_pitch, A_roll, B_pitch, B_roll, ξ, µ2, Z_C7, Z_Hc;
- Set 3: A_pitch, A_roll, B_pitch, B_roll, µ2, δ2, Z_C7, Z_Hc;
- Set 4: A_pitch, A_roll, B_pitch, B_roll, ξ, δ2, Z_Head, Z_Hc.
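A minimal sketch of how these four candidate feature subsets could be assembled into fixed-length input sequences for the LSTM network is given below. The feature names are taken from the sets above (Greek symbols transliterated as xi, mu2 and delta2); the dictionary of per-frame features, the build_sequences helper and the window/stride values are illustrative assumptions rather than details from the paper.

```python
import numpy as np

# Candidate feature subsets evaluated for the LSTM network (names from the lists above,
# with Greek symbols transliterated).
FEATURE_SETS = {
    "set1": ["A_pitch", "A_roll", "B_pitch", "B_roll", "mu2", "Z_Head", "Z_C7"],
    "set2": ["A_pitch", "A_roll", "B_pitch", "B_roll", "xi", "mu2", "Z_C7", "Z_Hc"],
    "set3": ["A_pitch", "A_roll", "B_pitch", "B_roll", "mu2", "delta2", "Z_C7", "Z_Hc"],
    "set4": ["A_pitch", "A_roll", "B_pitch", "B_roll", "xi", "delta2", "Z_Head", "Z_Hc"],
}

def build_sequences(frame_features, feature_set, window=30, step=1):
    """Slice a per-frame feature table into fixed-length sequences for an LSTM.

    frame_features : dict mapping feature name -> 1-D array of per-frame values
    feature_set    : one of the keys of FEATURE_SETS
    window, step   : sequence length and stride (illustrative values, not from the paper)
    """
    names = FEATURE_SETS[feature_set]
    data = np.column_stack([frame_features[name] for name in names])  # (n_frames, n_features)
    starts = range(0, data.shape[0] - window + 1, step)
    return np.stack([data[s:s + window] for s in starts])  # (n_sequences, window, n_features)
```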
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Angles | Description |
|---|---|
|  | Angle between head and shoulder segments. Left and right, respectively |
| ξ | Angle between head and trunk segments |
|  | Angle between trunk and shoulder segments. Left and right, respectively |
|  | Angle between shoulder and arm segments. Left and right, respectively |
|  | Angle between arm and forearm segments. Left and right, respectively |
|  | Angle between trunk and hip segments. Left and right, respectively |
|  | Angle between hip and thigh segments. Left and right, respectively |
|  | Angle between thigh and leg segments. Left and right, respectively |
|  | Angle between leg and foot segments. Left and right, respectively |
|  | Head roll angle |
|  | Head pitch angle |
|  | Trunk roll angle |
|  | Trunk pitch angle |
| Classes | MLP Training | MLP Testing | LSTM Training | LSTM Testing |
|---|---|---|---|---|
| Class 1 | 121,059 | 24,267 | 121,870 | 25,354 |
| Class 2 | 189,668 | 43,990 | 193,446 | 45,258 |
| Class 3 | 76,028 | 27,165 | 76,939 | 27,137 |
| Class 4 | 113,178 | 23,779 | 116,903 | 24,838 |
| Class 5 | 66,252 | 13,933 | 68,072 | 14,513 |
| Total | 566,185 | 133,134 | 577,230 | 137,100 |
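The counts above show a clear class imbalance: Class 2 accounts for roughly a third of the training samples while Class 5 accounts for little more than a tenth. The sketch below only illustrates how per-class proportions and inverse-frequency weights could be derived from the MLP training counts; the paper does not state that class weighting was used, so the heuristic itself is an assumption.

```python
# Training-set counts for the MLP database, taken from the table above.
counts = {"Class 1": 121_059, "Class 2": 189_668, "Class 3": 76_028,
          "Class 4": 113_178, "Class 5": 66_252}

total = sum(counts.values())          # 566,185 samples
n_classes = len(counts)

# Inverse-frequency weight total / (n_classes * count): a common rebalancing heuristic,
# shown here for illustration only.
for name, count in counts.items():
    share = count / total
    weight = total / (n_classes * count)
    print(f"{name}: {share:.1%} of training samples, weight {weight:.2f}")
```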
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Guerra, B.M.V.; Schmid, M.; Beltrami, G.; Ramat, S. Neural Networks for Automatic Posture Recognition in Ambient-Assisted Living. Sensors 2022, 22, 2609. https://doi.org/10.3390/s22072609