Standing-Posture Recognition in Human–Robot Collaboration Based on Deep Learning and the Dempster–Shafer Evidence Theory
Abstract
1. Introduction
2. Related Work
3. Methods
3.1. Selected Standing Postures
3.2. Standing Postures Classification System
3.3. Participants
3.4. Classification Model of Standing Posture
3.4.1. CNN Structure
3.4.2. Data Augmentation
3.5. Other Classifiers
3.6. D–S Evidence Theory and the Multi-Classifier Fusion Algorithm
3.6.1. D–S Evidence Theory
3.6.2. The Multi-Classifier Fusion Algorithm
4. Results
4.1. Dataset
4.2. Experimental Results of CNN
4.3. Comparison with Other Classifiers
5. Discussion
5.1. SPCS
5.2. Standing-Posture Classification Method
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Kinugawa, J.; Kanazawa, A.; Arai, S.; Kosuge, K. Adaptive Task Scheduling for an Assembly Task Co-worker Robot Based on Incremental Learning of a Human’s Motion Pattern. IEEE Robot. Autom. Lett. 2017, 2, 856–863. [Google Scholar] [CrossRef]
- Yao, B.; Zhou, Z.; Wang, L.; Xu, W.; Quan, L.; Liu, A. Sensorless and adaptive admittance control of industrial robot in physical human–robot interaction. Robot. Comput.-Integr. Manuf. 2018, 51, 158–168. [Google Scholar] [CrossRef]
- Zou, F. Standard for Human-Robot Collaboration and its Application Trend. Aeronaut. Manuf. Technol. 2016, 58–63, 76. [Google Scholar]
- Bobka, P.; Germann, T.; Heyn, J.K.; Gerbers, R.; Dietrich, F.; Droder, K. Simulation platform to investigate safe operation of human-robot collaboration systems. In Proceedings of the 6th CIRP Conference on Assembly Technologies and Systems, Gothenburg, Sweden, 16–18 May 2016; Volume 44, pp. 187–192. [Google Scholar]
- Cherubini, A.; Passama, R.; Crosnier, A.; Lasnier, A.; Fraisse, P. Collaborative manufacturing with physical human–robot interaction. Robot. Comput.-Integr. Manuf. 2016, 40, 1–13. [Google Scholar] [CrossRef] [Green Version]
- Khalid, A.; Kirisci, P.; Ghrairi, Z.; Pannek, J.; Thoben, K.D. Safety Requirements in Collaborative Human–Robot Cyber-Physical System. In 5th International Conference on Dynamics in Logistics (LDIC 2016); Springer: Bremen, Germany, 2016. [Google Scholar]
- Michalos, G.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Dimoulas, K.; Koukas, S.; Mparis, K.; Papavasileiou, A.; Makris, S. Seamless human robot collaborative assembly–An automotive case study. Mechatronics 2018, 55, 199–211. [Google Scholar] [CrossRef]
- Wang, X.V.; Kemény, Z.; Váncza, J.; Wang, L. Human–robot collaborative assembly in cyber-physical production: Classification framework and implementation. CIRP Ann. 2017, 66, 5–8. [Google Scholar] [CrossRef] [Green Version]
- Toshev, A.; Szegedy, C. DeepPose: Human Pose Estimation via Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660. [Google Scholar]
- Pereira, F.G.; Vassallo, R.F.; Salles, E.O.T. Human–Robot Interaction and Cooperation Through People Detection and Gesture Recognition. J. Control Autom. Electr. Syst. 2013, 24, 187–198. [Google Scholar] [CrossRef]
- Ghazal, S.; Khan, U.S. Human posture classification using skeleton information. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies, iCoMET 2018, Sukkur, Pakistan, 3–4 March 2018; pp. 1–4. [Google Scholar]
- Le, T.L.; Nguyen, M.Q.; Nguyen, T.T.M. Human posture recognition using human skeleton provided by Kinect. In Proceedings of the 2013 International Conference on Computing, Management and Telecommunications (ComManTel), Ho Chi Minh City, Vietnam, 21–24 January 2013; pp. 340–345. [Google Scholar]
- Du, Y.; Fu, Y.; Wang, L. Representation Learning of Temporal Dynamics for Skeleton-Based Action Recognition. IEEE Trans. Image Process. 2016, 25, 3010–3022. [Google Scholar] [CrossRef]
- Wang, L.; Schmidt, B.; Nee, A.Y.C. Vision-guided active collision avoidance for human-robot collaborations. Manuf. Lett. 2013, 1, 5–8. [Google Scholar] [CrossRef]
- Chen, C.; Jafari, R.; Kehtarnavaz, N. A survey of depth and inertial sensor fusion for human action recognition. Multimed. Tools Appl. 2017, 76, 4405–4425. [Google Scholar] [CrossRef]
- Santaera, G.; Luberto, E.; Serio, A.; Gabiccini, M.; Bicchi, A. Low-cost, fast and accurate reconstruction of robotic and human postures via IMU measurements. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation, Seattle, WA, USA, 26–30 May 2015; pp. 2728–2735. [Google Scholar]
- Ligang, C.; Yingjie, L.; Zhifeng, L.; Guoying, C.; Congbin, Y.; Qianlei, W. Force-free Control Model for Six Axis Industrial Robot Based on Stiffness Control. J. Beijing Univ. Technol. 2017, 43, 1037–1044. [Google Scholar]
- Noohi, E.; Zefran, M.; Patton, J.L. A Model for Human–Human Collaborative Object Manipulation and Its Application to Human–Robot Interaction. IEEE Trans. Robot. 2017, 32, 880–896. [Google Scholar] [CrossRef]
- Kinugawa, J.; Kanazawa, A.; Kosuge, K. Task Scheduling for Assembly Task Co-worker Robot Based on Estimation of Work Progress Using Worker’s Kinetic Information. Trans. Soc. Instrum. Control. Eng. 2017, 53, 178–187. [Google Scholar] [CrossRef] [Green Version]
- Gang, Q.; Bo, P.; Zhang, J. Gesture recognition using video and floor pressure data. In Proceedings of the IEEE International Conference on Image Processing, Lake Buena Vista, FL, USA, 30 September–3 October 2012; pp. 173–176. [Google Scholar]
- Sun, Q.; Gonzalez, E.; Yu, S. On bed posture recognition with pressure sensor array system. In Proceedings of the Sensors IEEE, Orlando, FL, USA, 30 October–2 November 2016; pp. 1–3. [Google Scholar]
- Cruz-Santos, W.; Beltran-Herrera, A.; Vazquez-Santacruz, E.; Gamboa-Zuniga, M. Posture classification of lying down human bodies based on pressure sensors array. In Proceedings of the International Joint Conference on Neural Networks, Beijing, China, 6–11 July 2014; pp. 533–537. [Google Scholar]
- Roh, J.; Park, H.J.; Lee, K.J.; Hyeong, J.; Kim, S.; Lee, B. Sitting Posture Monitoring System Based on a Low-Cost Load Cell Using Machine Learning. Sensors 2018, 18, 208. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Dröder, K.; Bobka, P.; Germann, T.; Gabriel, F.; Dietrich, F. A Machine Learning-Enhanced Digital Twin Approach for Human-Robot-Collaboration. Procedia CIRP 2018, 76, 187–192. [Google Scholar] [CrossRef]
- Cheng, J.; Sundholm, M.; Zhou, B.; Hirsch, M.; Lukowicz, P. Smart-surface: Large scale textile pressure sensors arrays for activity recognition. Pervasive Mob. Comput. 2016, 30, 97–112. [Google Scholar] [CrossRef]
- Agrawal, T.K.; Thomassey, S.; Cochrane, C.; Lemort, G.; Koncar, V. Low Cost Intelligent Carpet System for Footstep Detection. IEEE Sens. J. 2017, 17, 4239–4247. [Google Scholar] [CrossRef]
- Qiang, Y.; Yi, X.; Zhiming, Y. A multi-stage methodology is proposed. Chin. J. Sens. Actuators 2018, 31, 1559–1565. [Google Scholar]
- Xia, Y. Study on Key Technologies for Ambulation Behavior Perception Based on Plantar Pressure Distribution; University of Science and Technology of China: Hefei, China, 2013. [Google Scholar]
- Singh, M.S.; Pondenkandath, V.; Zhou, B.; Lukowicz, P.; Liwicki, M. Transforming Sensor Data to the Image Domain for Deep Learning—An Application to Footstep Detection. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2665–2672. [Google Scholar]
- Jung, J.-W.; Cho, Y.-O.; So, B.-C. Footprint Recognition Using Footprint Energy Image. Sens. Lett. 2012, 10, 1294–1301. [Google Scholar] [CrossRef]
- Dong, L.; Weiwei, G.; Yan, Z.; Wenxia, B. Tactility-based gait recognition by plantar pressure image. J. Huazhong Univ. Sci. Technol. (Nat. Sci. Ed.) 2013, 41, 25–29. [Google Scholar]
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018, 2018, 1–13. [Google Scholar] [CrossRef] [PubMed]
- Längkvist, M.; Karlsson, L.; Loutfi, A. A review of unsupervised feature learning and deep learning for time-series modeling. Pattern Recognit. Lett. 2014, 42, 11–24. [Google Scholar] [CrossRef] [Green Version]
- Yong, K.; Son, Y.; Kim, W.; Jin, B.; Yun, M. Classification of Children’s Sitting Postures Using Machine Learning Algorithms. Appl. Sci. 2018, 8, 1280. [Google Scholar]
- Costilla-Reyes, O.; Vera-Rodriguez, R.; Scully, P.; Ozanyan, K.B. Spatial footstep recognition by convolutional neural networks for biometric applications. In Proceedings of the 15th IEEE Sensors Conference, Orlando, FL, USA, 30 October–2 November 2016. [Google Scholar]
- Bo, Z.; Singh, S.; Doda, S.; Cheng, J.; Lukowicz, P. The Carpet Knows: Identifying People in a Smart Environment from a Single Step. In Proceedings of the IEEE International Conference on Pervasive Computing & Communications Workshops, Kona, HI, USA, 13–17 March 2017. [Google Scholar]
- Zhang, G.; Jia, S.; Li, X.; Zhang, X. Weighted score-level feature fusion based on Dempster–Shafer evidence theory for action recognition. J. Electron. Imaging 2018, 27, 1. [Google Scholar] [CrossRef] [Green Version]
- Yang, D.; Ji, H.B.; Gao, Y.C. A robust D-S fusion algorithm for multi-target multi-sensor with higher reliability. Inf. Fusion 2019, 47, 32–44. [Google Scholar] [CrossRef]
- Zhang, J.; Yusheng, W.; Qingquan, L.; Fang, G.; Xiaoli, M.; Min, L. Research on High-Precision, Low Cost Piezoresistive MEMS-Array Pressure Transmitters Based on Genetic Wavelet Neural Networks for Meteorological Measurements. Micromachines 2015, 6, 554–573. [Google Scholar] [CrossRef] [Green Version]
- Rogez, G.; Schmid, C. MoCap-Guided Data Augmentation for 3D Pose Estimation in the Wild. Available online: http://papers.nips.cc/paper/6563-mocap-guided-data-augmentation-for-3d-pose-estimation-in-the-wild (accessed on 12 February 2020).
- Bousseta, R.; Tayeb, S.; Ouakouak, I.E.; Gharbi, M.; Himmi, M.M. EEG efficient classification of imagined right and left hand movement using RBF kernel SVM and the joint CWT_PCA. AI Soc. 2017, 33, 1–9. [Google Scholar] [CrossRef]
- Liang, Z.; Peiyi, S.; Guangming, Z.; Wei, W.; Houbing, S. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor. Sensors 2015, 15, 19937–19967. [Google Scholar]
- Rahmadani, S.; Dongoran, A.; Zarlis, M.; Zakarias. Comparison of Naive Bayes and Decision Tree on Feature Selection Using Genetic Algorithm for Classification Problem. J. Phys. Conf. Ser. 2018, 978, 012087. [Google Scholar] [CrossRef]
- Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Zhang, J.; Gang, Q.; Kidané, A. Footprint tracking and recognition using a pressure sensing floor. In Proceedings of the International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009. [Google Scholar]
Participant | Sex | Height (cm) | Weight (kg) | Shoe Size (EU) |
---|---|---|---|---|
A | male | 168 | 60 | 43 |
B | male | 172 | 71 | 39 |
C | male | 181 | 80 | 42 |
D | male | 179 | 96 | 44 |
E | male | 178 | 75 | 41 |
F | male | 171 | 55 | 40 |
G | male | 166 | 50 | 39 |
H | male | 178 | 65 | 43 |
I | female | 165 | 45 | 37 |
J | female | 162 | 41 | 36 |
Training Configuration | Recognition Rate (%) | Training Time (s) |
---|---|---|
Without data augmentation | 92.548 | 235 |
Without BN | 94.448 | 6849 |
Adam + BN | 96.126 | 1233 |
SGD + BN | 96.605 | 1154 |
Adam + SGD + BN | 96.412 | 1161 |
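The training configurations compared above combine data augmentation, batch normalization (BN), and the Adam and SGD optimizers. The following is a minimal sketch of such a setup, assuming 32 × 32 single-channel pressure images and nine posture classes; the layer sizes, learning rates, and the Adam-then-SGD schedule are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumptions: 32x32 single-channel pressure images, 9 posture
# classes; layer sizes and learning rates chosen for illustration only).
import torch
import torch.nn as nn

class PostureCNN(nn.Module):
    def __init__(self, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),            # the "BN" ingredient in the table
            nn.ReLU(),
            nn.MaxPool2d(2),               # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),               # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PostureCNN()
criterion = nn.CrossEntropyLoss()

# "Adam + SGD + BN" is read here as Adam training followed by SGD fine-tuning;
# this schedule is an assumption, not the authors' exact recipe.
optimizers = [torch.optim.Adam(model.parameters(), lr=1e-3),
              torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)]

x = torch.randn(8, 1, 32, 32)              # dummy batch of pressure images
y = torch.randint(0, 9, (8,))              # dummy posture labels
for opt in optimizers:
    opt.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    opt.step()
```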
No. | CNN | SVM | KNN | RF | DT | NB | BP | CSK–DS |
---|---|---|---|---|---|---|---|---|
1 | 0.9610 | 0.9524 | 0.9212 | 0.9285 | 0.9089 | 0.8458 | 0.856 | 1.0000 |
2 | 0.9612 | 0.9576 | 0.9054 | 0.9293 | 0.8908 | 0.8165 | 0.825 | 0.9988 |
3 | 0.9604 | 0.9255 | 0.9155 | 0.9192 | 0.8968 | 0.8228 | 0.8266 | 1.0000 |
4 | 0.9636 | 0.9351 | 0.9069 | 0.9086 | 0.8854 | 0.8462 | 0.824 | 1.0000 |
5 | 0.9622 | 0.9298 | 0.9124 | 0.9198 | 0.8862 | 0.8431 | 0.8166 | 1.0000 |
6 | 0.9686 | 0.9466 | 0.9026 | 0.9254 | 0.8788 | 0.8196 | 0.834 | 1.0000 |
7 | 0.9642 | 0.9522 | 0.9266 | 0.9171 | 0.9092 | 0.8347 | 0.8514 | 0.9999 |
8 | 0.9618 | 0.9772 | 0.9198 | 0.9254 | 0.8826 | 0.8412 | 0.8289 | 0.9982 |
Average | 0.9628 | 0.9445 | 0.9138 | 0.9216 | 0.8923 | 0.8337 | 0.8328 | 0.9996 |
No. | CNN | SVM | KNN | RF | DT | NB | BP | CSK–DS |
---|---|---|---|---|---|---|---|---|
1 | 0.9044 | 0.8657 | 0.8246 | 0.8274 | 0.7903 | 0.7397 | 0.7443 | 0.9975 |
2 | 0.9032 | 0.8546 | 0.8341 | 0.8183 | 0.7842 | 0.7596 | 0.7564 | 0.9975 |
3 | 0.9065 | 0.8498 | 0.8167 | 0.8195 | 0.7605 | 0.7483 | 0.7345 | 0.9978 |
4 | 0.9047 | 0.8812 | 0.8054 | 0.7941 | 0.7862 | 0.7455 | 0.7697 | 0.9979 |
5 | 0.9021 | 0.8368 | 0.8132 | 0.8129 | 0.7917 | 0.7631 | 0.7487 | 0.9978 |
6 | 0.9066 | 0.8439 | 0.8055 | 0.8217 | 0.7791 | 0.7368 | 0.7661 | 0.9981 |
7 | 0.9005 | 0.8567 | 0.8371 | 0.8153 | 0.7849 | 0.7459 | 0.7546 | 0.9983 |
8 | 0.9091 | 0.8633 | 0.8174 | 0.8044 | 0.7924 | 0.7682 | 0.7468 | 0.9983 |
Average | 0.9047 | 0.8565 | 0.8195 | 0.8142 | 0.7836 | 0.7508 | 0.7526 | 0.9979 |
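The CSK–DS column in the two tables above fuses the probability outputs of the CNN, SVM, and KNN classifiers with the Dempster–Shafer combination rule. The sketch below shows one way such a fusion can be computed; treating each classifier's per-class probability vector directly as a basic probability assignment over singleton hypotheses is an assumption made for illustration and may differ from the exact mass construction used in the paper.

```python
# Minimal sketch of Dempster's rule for masses assigned to singleton classes only.
# Using each classifier's probability vector as a basic probability assignment is
# an illustrative assumption, not necessarily the paper's exact mass construction.
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass vectors defined over the same singleton hypotheses."""
    joint = m1 * m2                    # agreement on each singleton class
    conflict = 1.0 - joint.sum()       # total conflicting mass K
    if np.isclose(conflict, 1.0):
        raise ValueError("Total conflict: evidence cannot be combined.")
    return joint / (1.0 - conflict)    # normalize by (1 - K)

def fuse(prob_vectors):
    """Fuse per-class probability outputs from several classifiers."""
    fused = prob_vectors[0]
    for m in prob_vectors[1:]:
        fused = dempster_combine(fused, m)
    return fused

# Hypothetical outputs of CNN, SVM, and KNN for one sample over 9 postures.
cnn = np.array([0.70, 0.10, 0.05, 0.03, 0.03, 0.03, 0.02, 0.02, 0.02])
svm = np.array([0.55, 0.20, 0.05, 0.05, 0.05, 0.04, 0.02, 0.02, 0.02])
knn = np.array([0.60, 0.15, 0.05, 0.05, 0.05, 0.04, 0.02, 0.02, 0.02])

fused = fuse([cnn, svm, knn])
print("Fused posture:", fused.argmax(), "confidence:", round(float(fused.max()), 3))
```

Because Dempster's rule multiplies agreeing masses and renormalizes away conflicting mass, the fused distribution is typically sharper than any single classifier's output, which is consistent with the higher CSK–DS averages reported above.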
Author | Sensor Type and Number | Sensor Position | Number of Subjects | Classification Method | Number of Postures | Refresh Rate | Accuracy |
---|---|---|---|---|---|---|---|
Cheng et al. [25] | textile pressure-sensing matrix 80 × 80 | On floor | 11 | KNN | 7 | 40 Hz | 78.7% |
Costilla-Reyes et al. [44] | piezoelectric sensors 88 × 88 | On floor | 127 | CNN + SVM | 3 | 1.6 kHz | 90.60% |
Zhou et al. [35] | fabric sensor mat 120 × 54 | On floor | 13 | RNN (Recurrent Neural Network) | person identification | 25 Hz | 76.9%
Zhang et al. [45] | Force Sensing Resistors: 504 × 384 | On floor | 2 | Mean-Shift Clustering | 7 | 44 Hz | 95.59% |
Proposed method | Pressure Thin Film Sensor 32 × 32 | On floor | 10 | Improved CNN | 9 | 40 Hz | 96.41%
Proposed D–S fusion | Pressure Thin Film Sensor 32 × 32 | On floor | 10 | CNN + SVM + KNN fused by D–S evidence theory | 9 | 40 Hz | 99.96%
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, G.; Liu, Z.; Cai, L.; Yan, J. Standing-Posture Recognition in Human–Robot Collaboration Based on Deep Learning and the Dempster–Shafer Evidence Theory. Sensors 2020, 20, 1158. https://doi.org/10.3390/s20041158