A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography
Abstract
1. Introduction
2. System Architecture
2.1. Vision-Guided Robotic Grasping System
2.2. Electromyography-Based Movement Control System for Robotic Grasping
3. Proposed Method for Grasping
3.1. Grasping Points and Pose Estimation
- First, the robotic hand is moved to a point 10 cm away from the object. This pre-grasp position facilitates the planning of the subsequent steps, and is computed by the vision algorithm described above from the location (position and orientation) of the contact points on the object surface.
- Second, the robotic hand moves forward with its palm facing the object and its fingers open, until it reaches the point at which closing the fingers would place the fingertips on the computed contact points.
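The two-step approach above can be sketched geometrically: back off 10 cm from the contact points along the approach direction to obtain the pre-grasp point, then advance to the grasp point itself. The function name, frames, and example values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pregrasp_point(contact_centroid, approach_dir, offset_m=0.10):
    """Back off along the approach direction to obtain the pre-grasp point.

    contact_centroid: 3-vector, centroid of the computed contact points (m).
    approach_dir: 3-vector pointing from the hand toward the object.
    offset_m: stand-off distance (10 cm in the procedure above).
    """
    a = np.asarray(approach_dir, dtype=float)
    a = a / np.linalg.norm(a)  # normalize to unit length
    return np.asarray(contact_centroid, dtype=float) - offset_m * a

# Illustrative values (not from the paper): contact centroid at the origin,
# hand approaching along +x.
pre = pregrasp_point([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
print(pre)  # 10 cm back along the approach axis: [-0.1  0.   0. ]
```

In the second step the hand would then be driven from `pre` back to the contact centroid along the same axis before closing the fingers.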
3.2. Collaborative System with Both Visual and Electromyography Data
4. Experiments and Discussion
4.1. Test Design
4.2. Results and Evaluation
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Subject | Success | Error | No Detection | sEMG ACC | Grasping ACC |
---|---|---|---|---|---|
A01 | 10 | 0 | 0 | 100% | 100% |
A02 | 10 | 0 | 1 | 91% | 100% |
A03 | 10 | 1 | 2 | 77% | 100% |
A04 | 8 | 1 | 0 | 89% | 100% |
A05 | 10 | 0 | 0 | 100% | 80% |
A06 | 6 | 2 | 1 | 67% | 80% |
Average | 9.00 | 0.67 | 0.67 | 87.23% | 93.33% |
Standard deviation | 1.67 | 0.82 | 0.82 | 13.20% | 10.33% |
Subject | Success | Error | No Detection | sEMG ACC | Grasping ACC |
---|---|---|---|---|---|
A01 | 8 | 1 | 0 | 89% | 100% |
A02 | 10 | 1 | 1 | 83% | 100% |
A03 | 10 | 0 | 1 | 91% | 100% |
A04 | 8 | 1 | 0 | 89% | 100% |
A05 | 10 | 1 | 3 | 71% | 100% |
A06 | 10 | 0 | 2 | 83% | 100% |
Average | 9.33 | 0.67 | 1.17 | 84.46% | 100.00% |
Standard deviation | 1.03 | 0.52 | 1.17 | 7.12% | 0.00% |
Subject | Success | Error | No Detection | sEMG ACC | Grasping ACC |
---|---|---|---|---|---|
A01 | 10 | 0 | 1 | 91% | 80% |
A02 | 10 | 0 | 0 | 100% | 100% |
A03 | 10 | 1 | 0 | 91% | 100% |
A04 | 10 | 0 | 1 | 91% | 100% |
A06 | 8 | 1 | 0 | 89% | 80% |
Average | 9.60 | 0.40 | 0.40 | 92.32% | 92.00% |
Standard deviation | 0.89 | 0.55 | 0.55 | 4.38% | 10.95% |
Subject | Trials | Success | Error | Grasping ACC |
---|---|---|---|---|
with sEMG | 85 | 81 | 4 | 95.29% |
without sEMG (same object) | 15 | 13 | 2 | 86.66% |
without sEMG (other cylindrical objects) | 66 | 53 | 13 | 80.30% |
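The accuracy columns in the tables above appear consistent with the following definitions; these are inferred from the numbers themselves, not stated explicitly in this excerpt:

```python
# Inferred (hypothetical) definitions, checked against the table rows:
#   sEMG ACC     = Success / (Success + Error + No Detection), as a percentage
#   Grasping ACC = Success / Trials (summary table), as a percentage

def semg_acc(success, error, no_detection):
    """Per-subject sEMG classification accuracy, rounded to whole percent."""
    return round(100 * success / (success + error + no_detection))

def grasping_acc(success, trials):
    """Overall grasping success rate, rounded to two decimals."""
    return round(100 * success / trials, 2)

# Examples taken from rows of the tables above:
print(semg_acc(10, 1, 2))    # subject A03, first table -> 77
print(grasping_acc(81, 85))  # "with sEMG" row, summary table -> 95.29
```

Under this reading, "No Detection" trials count against sEMG accuracy (the intended muscle command was not recognized) but are excluded from the trial counts used for grasping accuracy.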
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Úbeda, A.; Zapata-Impata, B.S.; Puente, S.T.; Gil, P.; Candelas, F.; Torres, F. A Vision-Driven Collaborative Robotic Grasping System Tele-Operated by Surface Electromyography. Sensors 2018, 18, 2366. https://doi.org/10.3390/s18072366