Estimation of Lower Limb Joint Angles Using sEMG Signals and RGB-D Camera
Abstract
1. Introduction
- Our work presents a novel dual-branch framework for human joint angle estimation that leverages sEMG signals and RGB-D camera data. The multi-dimensional feature extraction block offers a more comprehensive representation of the human joint model across different scenarios.
- This paper proposes an enhanced sEMG feature extraction method that utilizes multiple channels and scales of sEMG signals through an online adaptive convolutional autoencoder structure, coupled with feature enhancement based on correlation principles.
- We design a feature extraction and fusion pipeline that combines sEMG signals and RGB-D data; with multi-scale inputs and a dedicated data fusion block, the pipeline proves both effective and efficient within convolutional networks.
2. Materials and Methods
2.1. Overview
2.2. sEMG Signal Processing
2.2.1. Preprocessing
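The full preprocessing pipeline is described in the body of the paper; as a minimal sketch of a typical sEMG conditioning step — full-wave rectification followed by a sliding-window RMS envelope, using the 1111 Hz sampling rate quoted in Section 2.5 — one might write (the 150 ms window length is an illustrative assumption, not the paper's parameter):

```python
import numpy as np

def rms_envelope(emg, fs=1111, window_ms=150):
    """Full-wave rectify an sEMG channel and smooth with a sliding RMS window.

    emg: 1-D array of raw sEMG samples; fs: sampling rate in Hz.
    """
    win = max(1, int(fs * window_ms / 1000))       # window length in samples
    rectified = np.abs(emg - np.mean(emg))         # remove DC offset, rectify
    squared = rectified ** 2
    # moving average of the squared signal, then square root -> RMS envelope
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))
```

A window on the order of 100–200 ms is a common trade-off between envelope smoothness and responsiveness for gait-rate movements.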
2.2.2. Multi-Scale Feature Extraction Block
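The multi-scale block itself is defined in the full text; a hedged sketch of the underlying idea — computing windowed time-domain features (here RMS and mean absolute value) over several window lengths and concatenating them into one vector — could look like this, with the scale values purely illustrative:

```python
import numpy as np

def multiscale_features(emg, scales=(64, 128, 256)):
    """Concatenate RMS and MAV features computed at several window scales.

    emg: 1-D sEMG segment; scales: window lengths in samples (illustrative).
    Returns one flat feature vector with 2 features per scale.
    """
    feats = []
    for win in scales:
        tail = emg[-win:]                            # most recent `win` samples
        feats.append(np.sqrt(np.mean(tail ** 2)))    # RMS at this scale
        feats.append(np.mean(np.abs(tail)))          # mean absolute value
    return np.asarray(feats)
```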
2.3. Vision-Based Feature Extraction
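The vision branch is detailed in the full text; as a sketch of how a lower limb joint angle can be read from camera-space keypoints (here a knee angle from hip, knee, and ankle positions — the keypoint names are illustrative, not the paper's notation):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (in degrees) between segments b->a and b->c.

    a, b, c: 3-D keypoint coordinates, e.g. hip, knee, ankle in camera space.
    """
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```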
2.4. Data Fusion and Angle Estimation
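The exact fusion block is specified in the full text; as a minimal late-fusion baseline for comparison, one can concatenate the per-frame feature vectors of the two branches and fit a linear readout to the joint angles. All shapes and data below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature matrices: 200 frames, 12 sEMG features, 8 vision features
emg_feats = rng.normal(size=(200, 12))
vis_feats = rng.normal(size=(200, 8))
fused = np.hstack([emg_feats, vis_feats])        # simple concatenation fusion

# Synthetic targets: hip, knee, ankle angles as a linear map of fused features
true_w = rng.normal(size=(20, 3))
angles = fused @ true_w

# Fit a linear readout on the fused features by least squares
w_hat, *_ = np.linalg.lstsq(fused, angles, rcond=None)
pred = fused @ w_hat
```

A learned fusion block (as in the paper) replaces the fixed concatenation-plus-linear-readout with trainable layers, but this baseline shows where the two modalities meet.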
2.5. Experimental Environments
- We evaluated single-modality capture and processing of sEMG signals and of RGB-D camera data separately. By exposing the limitations of each single-sensor configuration, we demonstrated the improved joint angle estimation accuracy obtained by combining sEMG signals with RGB-D camera data.
- Considering subject diversity, we performed cross-comparisons between different subjects and analyzed the errors by accounting for scenario effects and human behavior effects.
- Noting that ground obstacles may influence visual data capture, we introduced a series of occlusions in our test of knee angle prediction.
- We compared a series of state-of-the-art methods with ours using our dataset and analyzed the performance from different perspectives.
- Computer: dual-boot Windows 10 64-bit and Ubuntu 18.04.6 LTS 64-bit (DELL, Beijing, China), with an Intel® Core™ i7-8700K CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1660 Ti graphics card.
- RGB-D camera: Intel RealSense Depth Camera D435i, with 1920 × 1080 color resolution and 1280 × 720 depth resolution at 30 fps, and a wide-angle lens with a 69° horizontal and 42° vertical field of view.
- sEMG signal capture equipment: Delsys wearable sensors with an analog output range of 11 mV and a sampling rate of 1111 Hz. The average noise across all channels is below 3 μV RMS at 10–850 Hz.
- Motion capture equipment: VICON motion capture system with 1.3 MP resolution @ 250 Hz. The capture system has 98.1° horizontal and 50.1° vertical field of view.
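Given the depth resolution and field of view quoted above for the D435i, a pinhole model can back-project a depth pixel into 3-D camera coordinates. The sketch below is an illustrative approximation that derives focal lengths from the nominal FOV rather than from the camera's factory calibration:

```python
import math

def fov_to_focal(resolution_px, fov_deg):
    """Pinhole focal length in pixels from image size and field of view."""
    return resolution_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def deproject(u, v, depth_m, width=1280, height=720, hfov=69.0, vfov=42.0):
    """Back-project pixel (u, v) with depth (metres) into camera coordinates."""
    fx = fov_to_focal(width, hfov)
    fy = fov_to_focal(height, vfov)
    cx, cy = width / 2.0, height / 2.0   # assume principal point at image centre
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m
```

In practice the RealSense SDK exposes calibrated intrinsics, which should be preferred over FOV-derived values.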
2.5.1. Participants
2.5.2. Data Acquisition
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Full term |
| --- | --- |
| HMI | Human–machine interaction |
| sEMG | Surface electromyography signal |
| FBG | Fiber Bragg grating |
| IMU | Inertial measurement unit |
| CNN | Convolutional neural network |
| RGB-D | Red, green, blue, and depth |
| RAM | Random-access memory |
| Joint | Gait phase | sEMG-Based | Vision-Based | Combined |
| --- | --- | --- | --- | --- |
| Hip | Stance phase | 3.97 ± 1.35 | 2.15 ± 0.87 | 1.47 ± 0.63 |
| Hip | Swing phase | 3.64 ± 1.11 | 2.57 ± 0.86 | 1.89 ± 0.67 |
| Knee | Stance phase | 4.42 ± 1.71 | 1.71 ± 0.99 | 1.17 ± 0.76 |
| Knee | Swing phase | 3.82 ± 1.63 | 2.89 ± 0.93 | 1.26 ± 0.82 |
| Ankle | Stance phase | 4.21 ± 1.45 | 2.23 ± 0.89 | 1.52 ± 0.65 |
| Ankle | Swing phase | 3.97 ± 1.18 | 2.98 ± 0.89 | 2.12 ± 0.55 |
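Averaging the mean errors in the table over joints and gait phases, the combined estimator reduces the average error by roughly 61% relative to the sEMG-only branch and roughly 35% relative to the vision-only branch. A quick check using only the mean values from the table:

```python
import numpy as np

# Mean estimation errors from the table (hip/knee/ankle x stance/swing)
semg     = np.array([3.97, 3.64, 4.42, 3.82, 4.21, 3.97])
vision   = np.array([2.15, 2.57, 1.71, 2.89, 2.23, 2.98])
combined = np.array([1.47, 1.89, 1.17, 1.26, 1.52, 2.12])

# Average error of each approach
print(semg.mean(), vision.mean(), combined.mean())        # ≈ 4.01, 2.42, 1.57

# Relative error reduction of the fused model over each single modality
print(1 - combined.mean() / semg.mean())                  # ≈ 0.61
print(1 - combined.mean() / vision.mean())                # ≈ 0.35
```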
Share and Cite
Du, G.; Ding, Z.; Guo, H.; Song, M.; Jiang, F. Estimation of Lower Limb Joint Angles Using sEMG Signals and RGB-D Camera. Bioengineering 2024, 11, 1026. https://doi.org/10.3390/bioengineering11101026