A Transformer-Based Neural Network for Gait Prediction in Lower Limb Exoskeleton Robots Using Plantar Force
Abstract
1. Introduction
- We propose a gait prediction method based on plantar pressure data and deep learning. The predicted joint angles can compensate for sensor delay and serve as a reference for the controller.
- We introduce a transformer-based model, TFSformer, which effectively integrates features from both the temporal and force-space dimensions. The model performs well in gait prediction, achieving the minimum mean absolute errors among the compared models in the three prediction tasks set in our experiment.
- We construct a dataset containing data from 35 volunteers, providing a data foundation for network training and evaluation.
2. Related Work
2.1. Gait Analysis for Lower Limb Exoskeleton Robots
2.2. Transformer-Based Network for Time Series Issue
3. Preliminary
3.1. Data Acquisition and Preprocessing
3.1.1. Data Acquisition
3.1.2. Data Preprocessing
- For data with small missing portions, typically no more than five samples, cubic spline interpolation was applied to fill the gaps and ensure data continuity.
- Segments with severely missing values or excessive noise were discarded entirely to ensure that only reliable data were retained.
- To further reduce noise and enhance data quality, a filtering operation with a cut-off frequency of 60 Hz was applied (a brief sketch of both preprocessing steps follows this list).
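As a concrete illustration of these two steps, here is a minimal Python sketch that fills short NaN gaps by cubic spline interpolation and then applies a 60 Hz low-pass filter. The Butterworth design, filter order, and the 200 Hz sampling rate are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import butter, filtfilt

def fill_short_gaps(signal, max_gap=5):
    """Fill NaN runs of at most `max_gap` samples with cubic spline interpolation."""
    x = np.arange(len(signal))
    nan_mask = np.isnan(signal)
    if not nan_mask.any():
        return signal
    filled = signal.copy()
    spline = CubicSpline(x[~nan_mask], signal[~nan_mask])
    # Locate contiguous NaN runs (simplified; ignores wrap-around at the array ends).
    run_starts = np.where(nan_mask & ~np.roll(nan_mask, 1))[0]
    run_ends = np.where(nan_mask & ~np.roll(nan_mask, -1))[0]
    for s, e in zip(run_starts, run_ends):
        if e - s + 1 <= max_gap:          # only short gaps are interpolated
            filled[s:e + 1] = spline(x[s:e + 1])
    return filled

def lowpass(signal, cutoff_hz=60.0, fs_hz=200.0, order=4):
    """Zero-phase low-pass filter; the Butterworth design here is an assumption."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0), btype="low")
    return filtfilt(b, a, signal)

# Example: one plantar-force channel with a short missing segment.
force = np.sin(np.linspace(0, 10 * np.pi, 2000)) + 0.05 * np.random.randn(2000)
force[500:503] = np.nan
force = lowpass(fill_short_gaps(force), cutoff_hz=60.0, fs_hz=200.0)
```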
3.2. Variational Mode Decomposition
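Variational mode decomposition (VMD) splits a signal into a small set of band-limited modes. Purely as an illustration of the technique, and not of the settings used in the paper, a sketch using the third-party vmdpy package with generic parameter values might look as follows.

```python
import numpy as np
from vmdpy import VMD  # third-party package: pip install vmdpy

# Synthetic stand-in for one plantar-force channel (even length keeps VMD happy).
t = np.linspace(0, 2, 1000, endpoint=False)
force = (np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
         + 0.05 * np.random.randn(t.size))

# Generic VMD settings (illustrative only, not the paper's choices).
alpha = 2000   # bandwidth constraint
tau = 0.0      # noise tolerance
K = 4          # number of modes
DC = 0         # do not impose a DC mode
init = 1       # uniform initialization of center frequencies
tol = 1e-7     # convergence tolerance

u, u_hat, omega = VMD(force, alpha, tau, K, DC, init, tol)
print(u.shape)  # (K, len(force)): band-limited modes that could be fed to the network
```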
3.3. Basic Transformer Architecture
3.3.1. Multi-Head Attention Mechanism
3.3.2. Feed-Forward Network
3.3.3. Positional Encoding
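For reference, a minimal sketch of the standard sinusoidal positional encoding and multi-head self-attention from the original transformer [22]; the tensor sizes (batch 8, sequence length 128, model dimension 64, 4 heads) are arbitrary examples, not TFSformer's actual hyperparameters.

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    pe = torch.zeros(seq_len, d_model)
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                    * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

# Inject position information, then apply multi-head self-attention over time.
x = torch.randn(8, 128, 64)                       # (batch, time steps, d_model)
x = x + sinusoidal_positional_encoding(128, 64)   # broadcasts over the batch dimension
attn = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
out, weights = attn(x, x, x)                      # self-attention along the time axis
```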
3.4. Convolutional Neural Networks
4. The Gait Prediction Model
4.1. One-Dimensional Convolution-Based Encoder
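The exact encoder configuration of TFSformer is not reproduced in this outline; the following is only a generic sketch of a one-dimensional convolution-based encoder over multi-channel plantar-force sequences, with assumed channel counts and kernel sizes.

```python
import torch
import torch.nn as nn

class Conv1dEncoder(nn.Module):
    """Illustrative 1-D convolutional encoder extracting local temporal features
    from multi-channel plantar-force sequences (hyperparameters are assumptions)."""
    def __init__(self, in_channels: int = 8, d_model: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        z = self.net(x.transpose(1, 2))
        return z.transpose(1, 2)  # back to (batch, time, d_model) for the decoder

x = torch.randn(8, 128, 8)          # 8 force channels, 128 time steps
features = Conv1dEncoder()(x)
print(features.shape)               # torch.Size([8, 128, 64])
```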
4.2. Multi-Channel Attention-Based Decoder
4.2.1. Multi-Channel Attention
4.2.2. Deep Multi-Channel Attention Structure
5. Experiment and Results
5.1. Experimental Setup
5.1.1. Experimental Strategies
- CNN model: The CNN model is obtained by removing the decoder modules from the proposed TFSformer; its output is produced by two linear layers.
- Transformer model: The transformer used in the comparison experiment is derived from the original transformer architecture developed for natural language processing (NLP) in [22]. Unlike in NLP tasks, the input dimensions in our research are fixed, so padding is unnecessary and the padding-mask mechanism is removed. Moreover, all of the input information is visible at every moment, rather than partly hidden as in NLP tasks, so the attention-mask mechanism is not employed either (a minimal sketch of such a mask-free baseline follows this list).
- CNN transformer model: This network is built on the transformer network above by replacing its encoder module with the same convolutional structure used as the encoder in our proposal, in order to demonstrate the efficiency of the decoder mechanism in TFSformer.
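As referenced in the transformer baseline above, a minimal sketch of a mask-free encoder-decoder transformer built with PyTorch's stock nn.Transformer; the layer counts, dimensions, prediction horizon, and number of predicted joint angles are assumptions for illustration, not the settings used in the experiments.

```python
import torch
import torch.nn as nn

# Mask-free transformer baseline: fixed-length force sequences mean no padding mask,
# and the full input is visible at every step, so no causal attention mask is passed.
model = nn.Transformer(
    d_model=64, nhead=4,
    num_encoder_layers=2, num_decoder_layers=2,
    dim_feedforward=128, batch_first=True,
)
head = nn.Linear(64, 4)          # e.g. regress 4 joint angles per step (assumed)

src = torch.randn(8, 128, 64)    # embedded plantar-force sequence
tgt = torch.randn(8, 6, 64)      # decoder inputs for an assumed 6-step horizon
joint_angles = head(model(src, tgt))   # (8, 6, 4); no src/tgt masks supplied
print(joint_angles.shape)
```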
5.1.2. Dataset Partition
5.1.3. Hyperparameter Settings
5.2. Experimental Results
- The transformer model performed worst in single-step prediction, while the CNN model showed a weaker prediction ability than the other three models in the multi-step prediction tasks. These results suggest that each individual approach, whether a convolutional operation or an attention mechanism, has limitations, and that combining the strengths of both yields clear improvements in the network’s ability to capture temporal features.
- The fact that TFSformer outperforms the CNN transformer in all tasks demonstrates the superiority of the multi-channel attention-based decoder proposed in this work.
- TFSformer (the orange line) captures the changes in joint angles around the extreme points of the curves well, whereas the errors of the CNN (the green line) and the transformer (the red line) are slightly larger.
- During intervals where the angle changes gently, the predictions of TFSformer are relatively smooth and consistent with the true trends (a toy example of the MAE and MSE metric computation follows this list).
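For completeness, a toy example of how MAE and MSE can be computed for multi-step joint-angle predictions; the array shapes and the joint count are hypothetical, and the paper's exact per-joint aggregation is not restated here.

```python
import numpy as np

def mae_mse(y_true: np.ndarray, y_pred: np.ndarray):
    """MAE and MSE averaged over all samples, prediction steps and joints
    (a generic definition; per-joint results would average over fewer axes)."""
    err = y_pred - y_true
    return np.mean(np.abs(err)), np.mean(err ** 2)

# Toy data: 100 test windows, k = 3 predicted steps, 4 joint angles (assumed).
rng = np.random.default_rng(0)
y_true = rng.normal(size=(100, 3, 4))
y_pred = y_true + rng.normal(scale=0.05, size=y_true.shape)
mae, mse = mae_mse(y_true, y_pred)
print(f"MAE={mae:.4f}, MSE={mse:.4f}")
```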
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Pais-Vieira, C.; Allahdad, M.; Neves-Amado, J.; Perrotta, A.; Morya, E.; Moioli, R.; Shapkova, E.; Pais-Vieira, M. Method for positioning and rehabilitation training with the ExoAtlet® powered exoskeleton. MethodsX 2020, 7, 100849.
- Tsukahara, A.; Hasegawa, Y.; Eguchi, K.; Sankai, Y. Restoration of Gait for Spinal Cord Injury Patients Using HAL With Intention Estimator for Preferable Swing Speed. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 308–318.
- Hartigan, C.; Kandilakis, C.; Dalley, S.; Clausen, M.; Wilson, E.; Morrison, S.; Etheridge, S.; Farris, R. Mobility Outcomes Following Five Training Sessions with a Powered Exoskeleton. Top. Spinal Cord Inj. Rehabil. 2015, 21, 93–99.
- Stadler, K.; Altenburger, R.; Schmidhauser, E.; Scherly, D.; Ortiz, J.; Toxiri, S.; Mateos, L.; Masood, J. ROBO-MATE an Exoskeleton for Industrial Use—Concept and Mechanical Design. In Advances in Cooperative Robotics; World Scientific: Singapore, 2016.
- Zhang, J.; Fiers, P.; Witte, K.A.; Jackson, R.W.; Poggensee, K.L.; Atkeson, C.G.; Collins, S.H. Human-in-the-loop optimization of exoskeleton assistance during walking. Science 2017, 356, 1280–1284.
- Yu, J.; Zhang, S.; Wang, A.; Li, W.; Song, L. Musculoskeletal modeling and humanoid control of robots based on human gait data. PeerJ Comput. Sci. 2021, 7, e657.
- Yu, J.; Zhang, S.; Wang, A.; Li, W.; Ma, Z.; Yue, X. Humanoid control of lower limb exoskeleton robot based on human gait data with sliding mode neural network. CAAI Trans. Intell. Technol. 2022, 7, 606–616.
- Liu, M.; Peng, B.; Shang, M. Lower limb movement intention recognition for rehabilitation robot aided with projected recurrent neural network. Complex Intell. Syst. 2021, 8, 2813–2824.
- Baud, R.; Manzoori, A.; Ijspeert, A.; Bouri, M. Review of control strategies for lower-limb exoskeletons to assist gait. J. NeuroEng. Rehabil. 2021, 18, 1–34.
- Chen, J.L.; Dai, Y.N.; Grimaldi, N.S.; Lin, J.J.; Hu, B.Y.; Wu, Y.F.; Gao, S. Plantar Pressure-Based Insole Gait Monitoring Techniques for Diseases Monitoring and Analysis: A Review. Adv. Mater. Technol. 2022, 7, 2100566.
- Koenker, R.; Xiao, Z. Quantile autoregression. J. Am. Stat. Assoc. 2006, 101, 980–990.
- Box, G.E.; Pierce, D.A. Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. J. Am. Stat. Assoc. 1970, 65, 1509–1526.
- Taylor, S.J.; Letham, B. Forecasting at scale. Am. Stat. 2018, 72, 37–45.
- Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent neural networks for time series forecasting: Current status and future directions. Int. J. Forecast. 2021, 37, 388–427.
- Ma, R.; Yang, T.; Breaz, E.; Li, Z.; Briois, P.; Gao, F. Data-driven proton exchange membrane fuel cell degradation predication through deep learning method. Appl. Energy 2018, 231, 102–115.
- Chen, Y.; Rao, M.; Feng, K.; Zuo, M.J. Physics-Informed LSTM hyperparameters selection for gearbox fault detection. Mech. Syst. Signal Process. 2022, 171, 108907.
- Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021, 12, 6999–7019.
- Li, H.; Wang, Z.; Yue, X.; Wang, W.; Tomiyama, H.; Meng, L. An Architecture-Level Analysis on Deep Learning Models for Low-Impact Computations. Artif. Intell. Rev. 2022, 55.
- Li, H.; Yue, X.; Zhao, C.; Meng, L. Lightweight Deep Neural Network from Scratch. Appl. Intell. 2023, 53, 18868–18886.
- Dauphin, Y.N.; Fan, A.; Auli, M.; Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 933–941.
- Borovykh, A.; Bohte, S.; Oosterlee, C.W. Conditional time series forecasting with convolutional neural networks. arXiv 2017, arXiv:1703.04691.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 6000–6010.
- Liu, M.D.; Ding, L.; Bai, Y.L. Application of hybrid model based on empirical mode decomposition, novel recurrent neural networks and the ARIMA to wind speed prediction. Energy Convers. Manag. 2021, 233, 113917.
- Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2013, 62, 531–544.
- Chen, Z.; Guo, Q.; Li, T.; Yan, Y.; Jiang, D. Gait prediction and variable admittance control for lower limb exoskeleton with measurement delay and extended-state-observer. IEEE Trans. Neural Netw. Learn. Syst. 2022; early access.
- Wu, X.; Liu, D.X.; Liu, M.; Chen, C.; Guo, H. Individualized gait pattern generation for sharing lower limb exoskeleton robot. IEEE Trans. Autom. Sci. Eng. 2018, 15, 1459–1470.
- Wu, X.; Yuan, Y.; Zhang, X.; Wang, C.; Xu, T.; Tao, D. Gait phase classification for a lower limb exoskeleton system based on a graph convolutional network model. IEEE Trans. Ind. Electron. 2021, 69, 4999–5008.
- Kolaghassi, R.; Marcelli, G.; Sirlantzis, K. Deep Learning Models for Stable Gait Prediction Applied to Exoskeleton Reference Trajectories for Children With Cerebral Palsy. IEEE Access 2023, 11, 31962–31976.
- Huang, Y.; Song, R.; Argha, A.; Celler, B.G.; Savkin, A.V.; Su, S.W. Human motion intent description based on bumpless switching mechanism for rehabilitation robot. IEEE Trans. Neural Syst. Rehabil. Eng. 2021, 29, 673–682.
- Zhang, W.; Ling, Z.; Heinrich, S.; Ding, X.; Feng, Y. Walking Speed Learning and Generalization Using Seq2Seq Gated and Adaptive Continuous-Time Recurrent Neural Network (S2S-GACTRNN) for a Hip Exoskeleton. IEEE/ASME Trans. Mechatron. 2023; early access.
- Wang, C.; Chen, Y.; Zhang, S.; Zhang, Q. Stock market index prediction using deep Transformer model. Expert Syst. Appl. 2022, 208, 118128.
- Chen, Z.; Chen, D.; Zhang, X.; Yuan, Z.; Cheng, X. Learning graph structures with transformer for multivariate time-series anomaly detection in IoT. IEEE Internet Things J. 2021, 9, 9179–9189.
- Kim, J.; Kang, H.; Kang, P. Time-series anomaly detection with stacked Transformer representations and 1D convolutional network. Eng. Appl. Artif. Intell. 2023, 120, 105964.
- Lim, B.; Arık, S.Ö.; Loeff, N.; Pfister, T. Temporal fusion transformers for interpretable multi-horizon time series forecasting. Int. J. Forecast. 2021, 37, 1748–1764.
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; Volume 35, pp. 11106–11115.
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430.
- Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286.
- Li, H.; Yue, X.; Meng, L. Enhanced mechanisms of pooling and channel attention for deep learning feature maps. PeerJ Comput. Sci. 2022, 8, e1161.
Number of samples in each dataset partition:

| | Single-Step Prediction (k = 1) | Multi-Step Prediction (k = 3) | Multi-Step Prediction (k = 6) |
|---|---|---|---|
| Training set | 40,864 | 40,234 | 39,289 |
| Validation set | 8392 | 8362 | 8167 |
| Test set | 6150 | 6058 | 5920 |
MAE and MSE of the four models on the three prediction tasks (M is the mean of the four rows above it):

| | TFSformer MAE | TFSformer MSE | CNN MAE | CNN MSE | Transformer MAE | Transformer MSE | CNN-Transformer MAE | CNN-Transformer MSE |
|---|---|---|---|---|---|---|---|---|
| Task 1 | 0.0525 | 0.0035 | 0.0645 | 0.0070 | 0.0667 | 0.0065 | 0.0590 | 0.0063 |
| | 0.0479 | 0.0043 | 0.0614 | 0.0066 | 0.0664 | 0.0070 | 0.0579 | 0.0062 |
| | 0.0725 | 0.0085 | 0.0818 | 0.0112 | 0.1053 | 0.0196 | 0.0743 | 0.0089 |
| | 0.1004 | 0.0148 | 0.0989 | 0.0146 | 0.0831 | 0.0108 | 0.0956 | 0.0135 |
| M | 0.0683 | 0.0078 | 0.0766 | 0.0098 | 0.0804 | 0.0110 | 0.0717 | 0.0087 |
| Task 2 | 0.0487 | 0.0038 | 0.0602 | 0.0060 | 0.0542 | 0.0046 | 0.0581 | 0.0061 |
| | 0.0523 | 0.0043 | 0.0598 | 0.0061 | 0.0506 | 0.0040 | 0.0617 | 0.0071 |
| | 0.0805 | 0.0112 | 0.0833 | 0.0120 | 0.0873 | 0.0127 | 0.0777 | 0.0101 |
| | 0.0925 | 0.0137 | 0.0989 | 0.0150 | 0.0829 | 0.0103 | 0.1005 | 0.0149 |
| M | 0.0685 | 0.0083 | 0.0756 | 0.0098 | 0.0689 | 0.0079 | 0.0745 | 0.0095 |
| Task 3 | 0.0449 | 0.0048 | 0.0549 | 0.0050 | 0.0520 | 0.0043 | 0.0522 | 0.0050 |
| | 0.0559 | 0.0032 | 0.0643 | 0.0067 | 0.0543 | 0.0044 | 0.0574 | 0.0060 |
| | 0.0841 | 0.0113 | 0.0721 | 0.0088 | 0.0801 | 0.0108 | 0.0790 | 0.0109 |
| | 0.0858 | 0.0106 | 0.1075 | 0.0175 | 0.0889 | 0.0119 | 0.0979 | 0.0141 |
| M | 0.0677 | 0.0075 | 0.0747 | 0.0095 | 0.0688 | 0.0078 | 0.0716 | 0.0090 |
| Subject | Height (cm) | Weight (kg) | Thigh Length (cm) | Shank Length (cm) | Gender | Age |
|---|---|---|---|---|---|---|
| Subject 1 | 178.0 | 68.3 | 40.2 | 40.3 | male | 27 |
| Subject 2 | 164.1 | 58.2 | 36.5 | 36.2 | female | 24 |
| Subject 3 | 176.3 | 74.5 | 38.9 | 40.1 | male | 26 |