Human Posture Transition-Time Detection Based upon Inertial Measurement Unit and Long Short-Term Memory Neural Networks
Abstract
1. Introduction
- The method identifies specific postures precisely from IMU data alone, capturing the body's dynamics without the need for external sensors;
- A central focus of this study is the prediction of transition times between postures. Accurate transition prediction can support smooth and safe posture changes, whether during exoskeleton use or when synchronizing digital twins in future applications;
- A tailored deep learning framework for detecting and predicting human intention opens possibilities in digital twins, exoskeleton technology, and human intention control.
2. Materials and Methods
2.1. The Experimental Structure
2.2. The IMU Equipment
2.3. Participants
2.4. The Experimental Protocol for Clinical Trials with IMU Equipment
2.5. Data Processing
2.6. Posture-Change Detection Algorithm
2.7. Performance Comparison and Transition-Time Analysis
3. Results
3.1. The IMU Signal Characteristics
3.2. Data and the Design of Test Posture-Change Sequence
3.3. The Comparison of Deep Learning Structures between FNN and LSTM Structures
3.4. Effect of the Sampling Rates
3.5. The Comparison of Different LSTM Structures’ Performance and Decisions of Deep Learning Models
3.6. The Identification Performance of Human Posture-Transition Time among Different Subjects
4. Discussion
5. Limitations
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
Parameters | Value |
---|---|
Number of subjects by sex | Male: 16; Female: 14 |
Ratio of male to female | 1.14 |
Age (years), mean ± SD | 26 ± 5.92 (range: 22–50) |
Height (cm), mean ± SD | 165.57 ± 8.94 (range: 142–179) |
Weight (kg), mean ± SD | 61.07 ± 12.34 (range: 39–95) |
Right thigh length (cm), mean ± SD | 41.43 ± 5.43 |
Right calf length (cm), mean ± SD | 40.37 ± 4.35 |
Right foot length (cm), mean ± SD | 24.53 ± 1.94 |
Layer (Order within Each Group) | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 |
---|---|---|---|---|---|
Dropout layer (p = 0.2) | 1 | ||||
Dropout layer (p = 0.4) | 3 | 1 | |||
Dropout layer (p = 0.5) | 5 | ||||
Dropout layer (p = 0.8) | 6 | 2 | 4 | ||
LSTM layer | 2 | 2 | 2 | 1 | 1 |
Fully connected layer 1 | 3 | 3 | 2 |
Fully connected layer 2 | 7 | 4 | 6 | 3 | 5 |
Batch Normalization layer | 4 | 1 | 3 | ||
Leaky ReLU layer | 5 | 4 | 4 | 6 |
Softmax layer | 8 | 5 | 7 | 5 | 7 |
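For readers who want to relate the layer listings above to code, the following is a minimal PyTorch sketch in the spirit of the single-LSTM groups (one LSTM layer followed by fully connected layers, a LeakyReLU activation, dropout, and a softmax output). The hidden size, layer widths, dropout probability, and window length are illustrative assumptions; the table specifies only the layer types and their order, not these values.

```python
# Hypothetical sketch of a single-LSTM posture classifier (sizes assumed).
# Inputs: 12 features (angle, angular velocity, angular acceleration for
# pelvis, hip, knee, ankle); outputs: 7 posture labels.
import torch
import torch.nn as nn

class PostureLSTM(nn.Module):
    def __init__(self, n_features=12, hidden_size=64, n_classes=7, p_drop=0.5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=1, batch_first=True)
        self.fc1 = nn.Linear(hidden_size, 32)   # fully connected layer 1 (width assumed)
        self.act = nn.LeakyReLU()
        self.drop = nn.Dropout(p_drop)           # dropout probability assumed
        self.fc2 = nn.Linear(32, n_classes)      # fully connected layer 2

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.lstm(x)
        out = out[:, -1, :]                      # use the last time step of the window
        out = self.drop(self.act(self.fc1(out)))
        return self.fc2(out)                     # logits; softmax is applied by the loss

model = PostureLSTM()
logits = model(torch.randn(10, 25, 12))          # 10 windows of 25 time steps, 12 features
print(logits.shape)                              # torch.Size([10, 7])
```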
Variable | Pelvis Angle | Pelvis Angular Velocity | Pelvis Angular Acceleration | Hip Angle | Hip Angular Velocity | Hip Angular Acceleration | Knee Angle | Knee Angular Velocity | Knee Angular Acceleration | Ankle Angle | Ankle Angular Velocity | Ankle Angular Acceleration |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Pelvis Angle | 1 | 0.01136 | −0.24293 | 0.578343 | −0.01739 | −0.14509 | 0.391564 | −0.0797 | −0.00427 | 0.176395 | 0.016812 | −0.01739 |
Pelvis Angular Velocity | | 1 | −0.04938 | 0.002561 | 0.549951 | −0.12353 | 0.080105 | 0.074217 | −0.4339 | −0.07015 | 0.118572 | 0.091082 |
Pelvis Angular Acceleration | | | 1 | −0.08809 | 0.054116 | 0.646527 | −0.03164 | 0.517422 | 0.132756 | −0.08723 | −0.18378 | 0.054194 |
Hip Angle | | | | 1 | −0.00791 | −0.17426 | 0.922779 | −0.13341 | −0.02437 | 0.421537 | 0.029337 | −0.00376 |
Hip Angular Velocity | | | | | 1 | −0.04477 | 0.205918 | 0.249638 | −0.50045 | −0.16232 | 0.019327 | 0.137285 |
Hip Angular Acceleration | | | | | | 1 | −0.08519 | 0.629106 | 0.408148 | 0.005521 | −0.28019 | −0.04438 |
Knee Angle | | | | | | | 1 | −0.01249 | −0.2038 | 0.410049 | 0.037825 | 0.022378 |
Knee Angular Velocity | | | | | | | | 1 | −0.06072 | −0.11249 | −0.20555 | 0.194528 |
Knee Angular Acceleration | | | | | | | | | 1 | 0.167935 | −0.26501 | −0.19381 |
Ankle Angle | | | | | | | | | | 1 | −0.04289 | −0.44771 |
Ankle Angular Velocity | | | | | | | | | | | 1 | −0.0941 |
Ankle Angular Acceleration | | | | | | | | | | | | 1 |
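The table above summarizes pairwise correlations among the 12 IMU-derived features; a minimal pandas sketch of how such a matrix can be computed is shown below. The file name and column names are illustrative assumptions, not the paper's actual data layout.

```python
# Hypothetical sketch: Pearson correlation matrix of the 12 IMU-derived features.
# The CSV path and column names are assumptions for illustration.
import pandas as pd

segments = ["pelvis", "hip", "knee", "ankle"]
quantities = ["angle", "angular_velocity", "angular_acceleration"]
feature_cols = [f"{s}_{q}" for s in segments for q in quantities]  # 12 feature columns

df = pd.read_csv("imu_features.csv", usecols=feature_cols)
corr = df[feature_cols].corr(method="pearson")   # 12 x 12 symmetric correlation matrix
print(corr.round(3))
```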
Posture Transition | Time Label from the IMU Sensors (s) (Mean ± SD) | Time Label from the Vicon Device (s) (Mean ± SD) |
---|---|---|
Start walking | 7.4150 ± 0.473 | 7.65826 ± 0.709 |
Stop walking | 11.9350 ± 0.544 | 11.16389 ± 0.359 |
Training Dataset | |
---|---|
Sample rate (Hz) | 25 |
Experiment time (s) | 3630 |

Subject | Deep Learning Type | Sample Rate (Hz) | Experiment Time (s) | Training Time in Python IDE (min) | MATLAB Analysis Time (s) |
---|---|---|---|---|---|
Subject 01 | FNN | 25 | 375 | 24.60 | 3.804 |
Subject 02 | FNN | 25 | 315 | | 3.888 |
Subject 03 | FNN | 25 | 315 | | 3.789 |
Subject 01 | LSTM | 25 | 375 | 88.80 | 3.731 |
Subject 02 | LSTM | 25 | 315 | | 3.856 |
Subject 03 | LSTM | 25 | 315 | | 3.905 |

Note: about 50% of the total operating time, which included the running time of the AI model in the Python IDE and the running time of the MATLAB analysis.
Parameters | Value |
---|---|
Learning rate | 0.001 |
Weight decay | 0.00001 |
Loss function | Cross-entropy |
Epochs | 150 |
Batch size | 10 |
Device | One GPU |
Number of features (training dataset) | 12 |
Number of labels (training dataset) | 7 |
Number of samples (training dataset) | 217,880 samples from one person (3630 s at a 100 Hz sample rate) |
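Below is a minimal training-loop sketch consistent with the settings listed above (learning rate 0.001, weight decay 0.00001, cross-entropy loss, 150 epochs, batch size 10, one GPU). The optimizer choice (Adam), the placeholder model, and the synthetic data are assumptions, since the table does not specify them.

```python
# Hypothetical training loop matching the listed hyperparameters; only the
# learning rate, weight decay, loss, epoch count, and batch size come from
# the table above. Adam, the placeholder model, and the random data are assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

X = torch.randn(500, 25, 12)                  # placeholder windows: 12 features per time step
y = torch.randint(0, 7, (500,))               # placeholder labels: 7 posture classes

loader = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)

model = nn.Sequential(                         # simple placeholder classifier; see the
    nn.Flatten(),                              # earlier LSTM sketch for a recurrent variant
    nn.Linear(25 * 12, 64),
    nn.LeakyReLU(),
    nn.Linear(64, 7),
).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.00001)

for epoch in range(150):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```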
Group Type | Training Loss (Training Stage) | Validation Loss (Training Stage) | Average Prediction Loss (Prediction Stage) | Training Time in Python IDE (min) |
---|---|---|---|---|
Group 1 | 0.3082 | 0.3101 | 0.4109 | 206 |
Group 2 | 1.2410 | 1.2409 | 0.6357 | 229 |
Group 3 | 1.7446 | 1.9525 | 0.4520 | 241 |
Group 4 | 0.3980 | 0.3982 | 0.4362 | 244 |
Group 5 | 0.3112 | 0.3141 | 0.3995 | 249 |
Software Operating Time | Mean ± SD |
---|---|
Python algorithm with the LSTM model for intention-transition-time probability prediction (s) | 6.76667 ± 1.30472 (80.37% of operating time) |
MATLAB analysis for intention-transition-time prediction (s) | 1.65217 ± 0.71406 (19.63% of operating time) |

Statistical Results of the Time Difference When the Human Posture Changed | Mean ± SD |
---|---|
In the standing stage (s) | 0.42557 ± 0.16049 |
In the walking stage (s) | 0.39019 ± 0.14458 |
In the sitting stage (s) | 0.42941 ± 0.15568 |
Stop standing and start walking (s) | 0.47986 ± 0.25106 |
Stop walking and start standing (s) | 0.59732 ± 0.33405 |
Stop standing and start sitting (s) | 0.40483 ± 0.12591 |
Stop sitting and start standing (s) | 0.31880 ± 0.15251 |

Accuracy for Posture-Change Detection | Mean ± SD |
---|---|
Accuracy of intention-classification prediction (%) | 85.87 ± 7.49 |
Accuracy of intention-transition-time prediction (%) | 94.44 ± 1.70 |
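The transition-time statistics above compare the instants at which the predicted posture label changes against reference change times. A minimal sketch of this kind of post-processing is given below; the toy data, the nearest-change matching, and the variable names are assumptions and do not reproduce the paper's MATLAB analysis.

```python
# Hypothetical post-processing: locate the times at which a predicted label
# sequence changes value and compare them with reference change times
# (e.g., from the Vicon device). Toy data only; not the paper's MATLAB code.
import numpy as np

def change_times(labels, fs):
    """Return the times (in seconds) at which the label sequence changes value."""
    labels = np.asarray(labels)
    idx = np.flatnonzero(np.diff(labels) != 0) + 1
    return idx / fs

fs = 25.0                                                   # assumed sample rate (Hz)
pred = np.repeat([0, 1, 0, 2, 0], [50, 120, 60, 90, 55])    # toy predicted posture labels
ref_times = np.array([2.1, 6.7, 9.4, 12.6])                 # toy reference change times (s)

pred_times = change_times(pred, fs)
# Pair each reference change with the nearest predicted change and report the error.
errors = np.array([np.min(np.abs(pred_times - t)) for t in ref_times])
print(f"Transition-time difference: {errors.mean():.3f} ± {errors.std(ddof=1):.3f} s")
```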