Asymmetric Identification Model for Human-Robot Contacts via Supervised Learning
Abstract
1. Introduction
- We developed a lightweight, high-performance human-robot contact detection model (HRCDM) using different supervised machine-learning methods.
- We compare and evaluate the performance of five supervised ML schemes, namely EBT (ensemble bagged trees), FDT (fine decision trees), kNN (k-nearest neighbors), LRK (logistic regression kernel), and SDC (subspace discriminant), for the HRCDM using the new and inclusive contact detection dataset (CDD 2021).
- We provide extensive experimental results using several quality and computation indicators. In addition, we compare our best outcomes with existing approaches and show that our EBT-based HRCDM is 3–15% more accurate than existing models, with faster inference.
2. Related Works
2.1. Collision Avoidance
2.2. Collision Detection Methods Based on the Dynamic Model of the Robot
2.3. Collision Detection Methods Based on Data
3. HRC Detection Model
3.1. The CDD 2021 Dataset
3.2. The Data Formulation Component
- The first stage, after accumulation of the intended data (i.e., CDD 2021), is to read and host the CDD 2021 records (available in .CSV format) using the MATLAB import tool. Once imported, MATLAB hosts the data records in a temporary table to enable exploratory data analysis (EDA) before the data are stored in the workspace, where the symmetry of the sample features is checked. At this stage, we performed several EDA processes over the imported data, including: (1) combining the datasets from different files into one unified dataset, (2) removing any unwanted or corrupted values, (3) fixing all missing values, and (4) replacing nonimportable values or empty cells with zeros.
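The paper performs these EDA steps with the MATLAB import tool; the following Python sketch is only illustrative of the same pipeline (combine files, drop corrupted records, zero-fill nonimportable or empty cells). The function name, file pattern, and zero-filling policy are our assumptions, not part of the original implementation.

```python
import csv
import glob

def load_and_clean(pattern):
    """Combine all CSV files matched by `pattern` into one unified dataset,
    skipping corrupted records and replacing nonimportable or empty cells
    with zeros, mirroring EDA steps (1)-(4) described above."""
    rows = []
    header = None
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)
            if header is None:
                header = file_header          # keep the first header only
            for row in reader:
                if len(row) != len(header):   # corrupted / truncated record
                    continue
                cleaned = []
                for cell in row:
                    try:
                        cleaned.append(float(cell))
                    except ValueError:        # empty or nonimportable cell
                        cleaned.append(0.0)
                rows.append(cleaned)
    return header, rows
```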
- The second stage of data formulation is class labeling, where the output layer has three classes. The three labels are encoded using integer encoding as follows: “Noncontact: 0”, “Intentional: 1”, and “Collision: 2”.
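The integer encoding above can be sketched as a simple lookup table; the mapping values are taken directly from the text, while the function names are illustrative:

```python
# Integer encoding of the three output classes, as defined in the text.
LABELS = {"Noncontact": 0, "Intentional": 1, "Collision": 2}
INVERSE = {v: k for k, v in LABELS.items()}

def encode(class_name):
    """Map a class name to its integer label."""
    return LABELS[class_name]

def decode(label):
    """Map an integer label back to its class name (e.g., for predictions)."""
    return INVERSE[label]
```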
- The third stage of data formulation is sample randomization (also called shuffling). This process rearranges the positions of the samples in the dataset using a random redistribution process. It is essential to ensure that the data are statistically unpredictable and asymmetric and to avoid any bias of the classifier toward specific data samples.
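A minimal sketch of the shuffling stage, assuming a fixed seed for reproducibility (the seed and function name are our own; the paper does not specify them):

```python
import random

def shuffle_samples(samples, seed=42):
    """Return a randomly reordered copy of the dataset so that sample
    order carries no information the classifier could exploit."""
    rng = random.Random(seed)   # fixed seed: reproducible randomization
    shuffled = samples[:]       # copy, leave the original dataset intact
    rng.shuffle(shuffled)
    return shuffled
```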
- The fourth stage of data formulation is splitting the dataset into two subsets: a training dataset composed of 70% of the samples in the original dataset and a testing (validation) dataset composed of the remaining 30%. To ensure higher efficiency in the learning process (training + testing), we used asymmetric k-fold cross-validation to distribute the samples between the training and testing datasets.
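The 70/30 split above can be sketched as follows, assuming the dataset has already been shuffled in the previous stage (this plain holdout sketch does not reproduce the paper's asymmetric k-fold cross-validation; it only illustrates the split ratio):

```python
def split_dataset(samples, train_fraction=0.7):
    """Split an already-shuffled dataset into a training subset (70% of the
    samples) and a testing/validation subset (the remaining 30%)."""
    cut = int(len(samples) * train_fraction)
    return samples[:cut], samples[cut:]
```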
3.3. The Learning Models Component
3.4. The Model Evaluation Component
3.5. Experimental Environment
4. Results and Analysis
4.1. Performance Evaluation
4.2. Optimal Model Selection
4.3. Experimental Evaluation of the Optimal Model
4.4. Comparison with the State-of-the-Art Models
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Model Type | Description
---|---
EBT | Learning Family: Bagged Trees, Ensemble technique: Bag, Learner method: Decision tree, Number of learners: 30, Maximum number of DT splits: 2204, Learning rate: 0.1, Optimizer: Bayesian optimizer, Number of epochs: 30.
kNN | Learning Family: Weighted kNN, Number of neighbors (k): 10, Learner method: Nearest neighbor, Distance metric: Euclidean distance, Weight metric: Squared inverse, Data standardization: true.
LRK | Learning Family: Logistic Regression Kernel, Learner method: Logistic regression, Number of expansion dimensions: Auto, Regularization strength (lambda): Auto, Kernel scale: Auto, Multiclass method: One-vs-One, Iteration limit: 1000.
FDT | Learning Family: Fine Tree, Learner method: Decision tree, Maximum number of splits: 100, Split criterion: Gini's diversity index, Surrogate decision splits: Off, all features used in the model.
SDC | Learning Family: Subspace Discriminant, Ensemble technique: Subspace, Learner method: Discriminant, Number of learners: 30, Subspace dimension: 393, all features used in the model.
Model | ACC | PPV | TPR | MCE | FDR | FNR | PSD | PRT
---|---|---|---|---|---|---|---|---
EBT | 97.1% | 96.9% | 97.1% | 2.9% | 3.1% | 2.9% | 9800 obs/s | 102 µs
kNN | 96.6% | 94.2% | 95.0% | 3.4% | 5.8% | 5.0% | 3300 obs/s | 303 µs
LRK | 94.2% | 92.0% | 89.4% | 5.8% | 8.0% | 10.6% | 1100 obs/s | 909 µs
FDT | 92.2% | 71.7% | 61.3% | 7.8% | 28.3% | 38.7% | 15,000 obs/s | 67 µs
SDC | 77.1% | 72.3% | 70.7% | 22.9% | 27.7% | 29.3% | 7400 obs/s | 135 µs
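The quality indicators in the table are the standard derivations from a multiclass confusion matrix: ACC is overall accuracy, PPV and TPR are macro-averaged precision and recall, and MCE, FDR, and FNR are their complements. A minimal sketch of that derivation, assuming a confusion matrix indexed as `cm[true][pred]` (the function name and the macro-averaging choice are our assumptions; the paper reports the table values directly):

```python
def metrics_from_confusion(cm):
    """Derive overall accuracy and macro-averaged precision/recall, plus
    their complements (MCE, FDR, FNR), from a square confusion matrix
    cm[true][pred]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    acc = correct / total
    # Macro-averaged precision: per-class TP / column sum (predicted count).
    ppv = sum(cm[i][i] / max(sum(cm[r][i] for r in range(n)), 1)
              for i in range(n)) / n
    # Macro-averaged recall: per-class TP / row sum (true count).
    tpr = sum(cm[i][i] / max(sum(cm[i]), 1) for i in range(n)) / n
    return {"ACC": acc, "PPV": ppv, "TPR": tpr,
            "MCE": 1 - acc, "FDR": 1 - ppv, "FNR": 1 - tpr}
```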
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Abu Al-Haija, Q.; Al-Saraireh, J. Asymmetric Identification Model for Human-Robot Contacts via Supervised Learning. Symmetry 2022, 14, 591. https://doi.org/10.3390/sym14030591