Picking Towels in Point Clouds
Abstract
1. Introduction
2. Methods
Algorithm 1: Determining the Grasp Pose
Input: point cloud, C
Output: grasp pose, Pose()
Step 1: Extract the region of interest: R = Get_ROI(C)
Step 2: Compute the normals of the region of interest: G_normal = Get_normal(R)
Step 3: Segment the convex wrinkles W (M wrinkles in total): (W, M) = Graph_based_point_clouds(G_normal)
Step 4: Select the candidate convex wrinkle, i.e., the one with the most convex points: Cd = Most_Convex(W, M)
Step 5: Compute the grasp pose: Pose() = Get_Grasp_Pose(Cd)
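The cropping and wrinkle-selection steps of Algorithm 1 can be sketched in Python. This is an illustrative skeleton, not the paper's implementation: `get_roi` and `most_convex` are hypothetical stand-ins for Get_ROI and Most_Convex, and the graph-based segmentation of step 3 is assumed to be available as a per-point cluster labeling.

```python
import numpy as np

def get_roi(cloud, lo, hi):
    """Step 1: crop the cloud to an axis-aligned region of interest."""
    mask = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[mask]

def most_convex(labels, convex_flags):
    """Step 4: pick the wrinkle whose points are most often convex."""
    ids = np.unique(labels)
    counts = [convex_flags[labels == i].sum() for i in ids]
    return ids[int(np.argmax(counts))]

# Toy example: 6 points forming two wrinkles (labels 0 and 1).
cloud = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                  [1.0, 0, 0], [1.1, 0, 0], [1.2, 0, 0]])
roi = get_roi(cloud, lo=np.array([0, -1, -1]), hi=np.array([2, 1, 1]))
labels = np.array([0, 0, 0, 1, 1, 1])
convex = np.array([True, False, False, True, True, False])
candidate = most_convex(labels, convex)   # wrinkle 1 has 2 convex points
```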
2.1. The Concave and Convex Criterion
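The paper's exact criterion is given in this section; a common local-convexity test in the same spirit as Stein et al.'s object partitioning checks whether the normals of two neighboring points diverge along the line joining them. The sketch below is that generic test, stated as an assumption rather than the paper's formula:

```python
import numpy as np

def is_convex(p_i, n_i, p_j, n_j):
    """Two neighboring points form a convex connection when their normals
    diverge along the displacement between them: (n_i - n_j)·(p_i - p_j) > 0.
    On a convex bump the normals splay apart; in a fold they point inward."""
    return float(np.dot(n_i - n_j, p_i - p_j)) > 0.0

# On a unit sphere the outward normal equals the position vector, so any
# pair of surface points tests convex: (p_i - p_j)·(p_i - p_j) = |d|^2 > 0.
p_i = np.array([1.0, 0.0, 0.0])
p_j = np.array([0.0, 1.0, 0.0])
```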
2.2. The Edge Weights
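A typical choice for the edge weights in a normal-based graph segmentation, assumed here rather than taken from the paper, is the angle between the normals at the two endpoints, so edges crossing a wrinkle boundary carry large weight:

```python
import numpy as np

def edge_weight(n_i, n_j):
    """Angle (radians) between two unit normals, used as edge dissimilarity.
    Clipping guards against round-off pushing the dot product past ±1."""
    c = float(np.clip(np.dot(n_i, n_j), -1.0, 1.0))
    return float(np.arccos(c))

w_same = edge_weight(np.array([0.0, 0, 1]), np.array([0.0, 0, 1]))  # 0.0
w_diff = edge_weight(np.array([0.0, 0, 1]), np.array([1.0, 0, 0]))  # ~pi/2
```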
2.3. Threshold Function
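Since the segmentation is graph-based in the manner of Felzenszwalb and Huttenlocher, the threshold function presumably follows their form τ(C) = k/|C|: small components demand a larger weight gap before they stop merging. A sketch under that assumption:

```python
def tau(component_size, k=1.0):
    """Felzenszwalb-Huttenlocher threshold tau(C) = k / |C|."""
    return k / component_size

def should_merge(int1, size1, int2, size2, boundary_weight, k=1.0):
    """Merge two components when the lightest edge between them is no larger
    than min(Int(C1) + tau(C1), Int(C2) + tau(C2)), where Int is the largest
    weight inside each component's minimum spanning tree."""
    return boundary_weight <= min(int1 + tau(size1, k), int2 + tau(size2, k))
```

The constant k trades over- against under-segmentation: a larger k merges more aggressively, yielding fewer, larger wrinkles.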
3. Picking the Towels
3.1. Grasp Point P
Algorithm 2: Determining the Grasp Point
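Algorithm 2's body is defined in this section; as a hypothetical stand-in (an assumption, not the paper's method), one plausible rule is to grasp the highest point of the candidate wrinkle, which a top-down gripper reaches first:

```python
import numpy as np

def grasp_point(wrinkle_points):
    """Hypothetical stand-in for Algorithm 2: return the point of the
    candidate wrinkle with the largest z coordinate (the wrinkle crest)."""
    return wrinkle_points[int(np.argmax(wrinkle_points[:, 2]))]

pts = np.array([[0.00, 0.0, 0.01],
                [0.02, 0.0, 0.05],
                [0.04, 0.0, 0.02]])
p = grasp_point(pts)   # the crest point [0.02, 0.0, 0.05]
```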
3.2. The Grasp Orientation
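One standard way to derive a grasp orientation from a wrinkle, sketched here as an assumption rather than the paper's formula, is to take the wrinkle's dominant direction via PCA and close the gripper across it:

```python
import numpy as np

def wrinkle_axis(points):
    """Dominant direction of the wrinkle: the eigenvector of the point
    covariance matrix with the largest eigenvalue (PCA first component)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, -1]

# Points spread mostly along x, so the wrinkle axis is roughly (1, 0, 0);
# a two-finger gripper would then close along y, across the wrinkle.
pts = np.array([[0.0, 0.0, 0], [1.0, 0.1, 0], [2.0, -0.1, 0], [3.0, 0.0, 0]])
axis = wrinkle_axis(pts)
```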
4. Results
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
Experiment No. | GN = 0 | GN = 1 | GN = 2 | Total (GN = 0, 1, 2) | Success Rate
---|---|---|---|---|---
1 | 3 | 16 | 2 | 21 | 76.2% |
2 | 3 | 18 | 1 | 22 | 81.8% |
3 | 4 | 18 | 1 | 23 | 78.2% |
4 | 6 | 18 | 1 | 25 | 72% |
5 | 2 | 16 | 2 | 20 | 80% |
6 | 2 | 18 | 1 | 21 | 85.7% |
7 | 4 | 18 | 1 | 23 | 78.2% |
8 | 3 | 18 | 1 | 22 | 81.8% |
9 | 3 | 20 | 0 | 23 | 86.9% |
10 | 5 | 20 | 0 | 25 | 80% |
total | 35 | 180 | 10 | 225 | 80% |
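Across the tables, the reported success rate appears to be the fraction of attempts that grasp exactly one towel (GN = 1) out of all attempts. Checking the totals row of the table above:

```python
gn0, gn1, gn2 = 35, 180, 10    # totals row: 0, 1, or 2 towels grasped
attempts = gn0 + gn1 + gn2      # 225 attempts in total
success_rate = gn1 / attempts   # 0.80, i.e. the reported 80%
```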
Experiment No. | GN = 0 | GN = 1 | GN = 2 | Total (GN = 0, 1, 2) | Success Rate
---|---|---|---|---|---
1 | 4 | 18 | 1 | 23 | 78.3% |
2 | 5 | 18 | 1 | 24 | 75% |
3 | 0 | 14 | 3 | 17 | 82.3% |
4 | 2 | 18 | 1 | 21 | 85.7% |
5 | 5 | 16 | 2 | 23 | 69.5% |
6 | 5 | 20 | 0 | 25 | 80% |
7 | 1 | 20 | 0 | 21 | 95.2% |
8 | 5 | 14 | 3 | 22 | 63.6% |
9 | 6 | 20 | 0 | 26 | 76.9% |
10 | 2 | 18 | 1 | 21 | 85.7% |
total | 35 | 176 | 12 | 223 | 78.9% |
Experiment No. | GN = 0 | GN = 1 | GN = 2 | Total (GN = 0, 1, 2) | Success Rate
---|---|---|---|---|---
1 | 2 | 16 | 2 | 20 | 80% |
2 | 2 | 20 | 0 | 22 | 90.9% |
3 | 4 | 20 | 0 | 24 | 83.3%
4 | 3 | 14 | 3 | 20 | 70% |
5 | 9 | 20 | 0 | 29 | 68.9% |
6 | 3 | 18 | 1 | 22 | 81.8% |
7 | 0 | 16 | 2 | 18 | 88.8% |
8 | 4 | 18 | 1 | 23 | 78.3% |
9 | 1 | 16 | 2 | 19 | 84.2% |
10 | 1 | 14 | 3 | 18 | 77.8% |
total | 29 | 172 | 14 | 215 | 80% |
Experiment No. | GN = 0 | GN = 1 | GN = 2 | Total (GN = 0, 1, 2) | Success Rate
---|---|---|---|---|---
1 | 2 | 18 | 1 | 21 | 85.7% |
2 | 1 | 18 | 1 | 20 | 90% |
3 | 1 | 14 | 3 | 18 | 77.8% |
4 | 1 | 18 | 1 | 20 | 90% |
5 | 5 | 18 | 1 | 24 | 75% |
6 | 1 | 16 | 2 | 19 | 84.2% |
7 | 5 | 16 | 2 | 23 | 69.5% |
8 | 4 | 16 | 2 | 22 | 72.7% |
9 | 1 | 18 | 1 | 20 | 90% |
10 | 2 | 12 | 4 | 18 | 66.7%
total | 23 | 164 | 18 | 205 | 80% |
Experiment No. | GN = 0 | GN = 1 | GN = 2 | Total (GN = 0, 1, 2) | Success Rate
---|---|---|---|---|---
1 | 12 | 8 | 0 | 20 | 40% |
2 | 6 | 13 | 1 | 20 | 65% |
3 | 1 | 17 | 2 | 20 | 85% |
4 | 3 | 15 | 2 | 20 | 75% |
5 | 9 | 11 | 0 | 20 | 55% |
Experiment No. | GN = 0 | GN = 1 | GN = 2 | Total (GN = 0, 1, 2) | Success Rate
---|---|---|---|---|---
1 | 3 | 17 | 0 | 20 | 85% |
2 | 5 | 15 | 0 | 20 | 75% |
3 | 1 | 17 | 2 | 20 | 85% |
4 | 8 | 11 | 1 | 20 | 55% |
5 | 4 | 15 | 1 | 20 | 75% |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Wang, X.; Jiang, X.; Zhao, J.; Wang, S.; Yang, T.; Liu, Y. Picking Towels in Point Clouds. Sensors 2019, 19, 713. https://doi.org/10.3390/s19030713