Development and Experimental Evaluation of a 3D Vision System for Grinding Robot
Abstract
1. Introduction
- We propose a 3D vision system for a grinding robot that detects the machining target automatically.
- To improve the speed and accuracy of registration, we preprocess the sample point cloud and the test point cloud with a voxel grid filter and describe them with fast point feature histograms (FPFH). To improve the registration accuracy of the point cloud pair, we combine coarse registration using the sample consensus initial alignment algorithm (SAC-IA) with fine registration using the iterative closest point (ICP) algorithm.
- We use FLANN to extract the difference point cloud from the precisely registered point cloud pair, and we use the point cloud segmentation method proposed in this paper to extract machining path points from the difference point cloud. The segmentation proceeds as follows: (1) compute the minimum bounding box of the difference point cloud; (2) rasterize the bounding box into three-dimensional grids and find the centroid of the points in each grid; (3) use a nearest-neighbour search to find the point of the test point cloud closest to each centroid.
- We construct a detection error compensation model for the vision system and use a multi-point constraint method together with a high-precision laser tracker to accurately calculate the error compensation amount and the transformation matrix from the scanner frame to the robot base frame.
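To make the registration chain above concrete, here is a minimal NumPy sketch (hypothetical code, not the authors' implementation): a simple voxel grid filter, and the SVD-based rigid alignment that each ICP iteration solves once correspondences are fixed. FPFH computation and the SAC-IA search are omitted for brevity.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Voxel grid filter: replace the points in each voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (SVD / Kabsch), i.e. the alignment step inside every ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

A full ICP loop would alternate this alignment with a nearest-neighbour correspondence search until the mean point-to-point error converges.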
2. The Hardware Architecture of the Grinding Robot 3D Vision System
3. The Mathematical Basis for the Grinding Robot 3D Vision System
3.1. Downsample Point Cloud Using Voxel Grid Filter
3.2. Calculate FPFH
3.3. Coarse Registration of the Point Cloud Pair Based on SAC-IA
3.4. Accurate Registration of Point Cloud Pair Based on ICP
3.5. Search for Difference Point Cloud Using FLANN
3.6. Extract Machining Path Points From Difference Point Cloud
3.7. Accurate Calibration of the Grinding Robot 3D Vision System Based on Error Compensation Model
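Once the calibration of Section 3.7 has produced a compensated scanner-to-base transformation matrix, applying it to scanner-frame points is a single homogeneous-coordinate multiplication. A minimal sketch (the helper name and the 4x4 matrix layout are assumptions; the compensation model itself, Equation (17), is not reproduced here):

```python
import numpy as np

def to_base_frame(T_base_scanner, pts_scanner):
    """Map N x 3 scanner-frame points into the robot base frame using a
    calibrated 4x4 homogeneous transformation matrix."""
    homo = np.hstack([pts_scanner, np.ones((len(pts_scanner), 1))])
    return (homo @ T_base_scanner.T)[:, :3]
```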
4. Data Processing of the Grinding Robot 3D Vision System
- Scan the sample workpiece (a qualified workpiece) to obtain the sample point cloud, and scan the target workpiece (the workpiece to be machined) to obtain the test point cloud.
- Use the voxel grid filter to downsample the sample point cloud and the test point cloud.
- Compute the point feature descriptors of the two downsampled point clouds to obtain their respective FPFHs.
- Use SAC-IA to coarsely register the two downsampled point clouds, obtaining a paired point cloud pair.
- Use the ICP algorithm to accurately register the point cloud pair, yielding the precisely registered point cloud pair.
- Use FLANN to search for the difference point cloud D.
- Divide the difference point cloud D into grids, calculate the centroid of each grid, and search the test point cloud for the point closest to each centroid; these points serve as the machining path points.
- Construct a detection error compensation model for the robot vision system, as shown in Equation (17).
- Obtain the calibration error of the grinding robot 3D vision system by solving Equation (18) with multiple points, and then calculate the transformation matrix between the compensated scanner frame and the robot base frame.
- Convert the machining path points obtained above into machining path points described in the robot base frame, and then send the machining commands to the grinding robot controller.
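The grid-centroid extraction in the steps above can be sketched as follows. This is a simplified, hypothetical version: an axis-aligned bounding box stands in for the paper's minimum bounding box, and a brute-force nearest-neighbour search stands in for the FLANN k-d tree.

```python
import numpy as np

def extract_path_points(diff_cloud, test_cloud, grid_size):
    """Extract machining path points from a difference point cloud.

    1. Take the (axis-aligned, for simplicity) bounding box of the
       difference cloud.
    2. Rasterize it into cubic cells and compute the centroid of the
       points in each occupied cell.
    3. For each centroid, return the nearest point of the test cloud
       (brute-force here, standing in for a FLANN search).
    """
    origin = diff_cloud.min(axis=0)
    cells = np.floor((diff_cloud - origin) / grid_size).astype(np.int64)
    uniq, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    centroids = np.array([diff_cloud[inverse == i].mean(axis=0)
                          for i in range(len(uniq))])
    # Squared distances from every centroid to every test-cloud point.
    d2 = ((centroids[:, None, :] - test_cloud[None, :, :]) ** 2).sum(axis=2)
    return test_cloud[d2.argmin(axis=1)]
```

Returning actual test-cloud points, rather than the centroids themselves, keeps the path points on the measured workpiece surface.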
5. Experimental Results and Discussion
5.1. Measurement Accuracy Experiment of the 3D Vision System
5.2. Calibration and Calibration Accuracy Experiment of the Grinding Robot 3D Vision System
5.3. Machining Target Detection Experiment Using the Grinding Robot 3D Vision System
5.4. The Effect of the Compound Error of the Grinding Robot on the Detection Accuracy of the 3D Vision System
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Position Number | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|
Measured height (mm) | 30.3999 | 30.4026 | 30.4522 | 30.4175 | 30.4662 |
Absolute error (mm) | 0.0001 | 0.0026 | 0.0522 | 0.0175 | 0.0662 |
i | (°) | (°) | (°) | | |
---|---|---|---|---|---|
1 | −0.0088 | −1.0167 | −3.1596 | 0.0052 | 0 |
2 | −0.0001 | 0.0001 | −1.0152 | −0.0007 | 0 |
3 | 0.0022 | −0.0007 | 0.3185 | 0.0074 | −0.0018 |
4 | −0.0027 | 1.0445 | 0.0834 | 0.0026 | 0 |
Machining Path Point Number | PS(x/y/z) (mm) | NS(x/y/z) (rad) |
---|---|---|
1 | 2.9196/−16.9633/439.4346 | 0.6721/−0.2216/−0.7066 |
2 | 5.8379/−14.1931/437.0458 | 0.6734/−0.2014/−0.7113 |
3 | 8.5740/−8.8756/432.9619 | 0.6762/−0.1792/−0.7146 |
4 | 12.8166/−4.4634/428.9439 | 0.6869/−0.1583/−0.7093 |
5 | 16.8304/−0.7351/425.3878 | 0.6862/−0.1520/−0.7114 |
6 | 22.3266/2.7024/421.6943 | 0.6850/−0.1405/−0.7149 |
Machining Path Point Number | PB(x/y/z) (mm) | NB(x/y/z) (rad) |
---|---|---|
1 | 552.5296/−596.9379/−46.6984 | 0.8525/−0.1176/1.7602 |
2 | 548.0370/−596.0266/−45.7583 | 0.8463/−0.0979/1.7351 |
3 | 541.6790/−592.6831/−44.8424 | 0.8415/−0.0759/1.7068 |
4 | 534.7713/−590.8615/−43.2371 | 0.8451/−0.0547/1.6779 |
5 | 528.5730/−589.5172/−41.6772 | 0.8426/−0.0486/1.6705 |
6 | 521.4130/−589.2253/−39.5995 | 0.8383/−0.0373/1.6571 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Diao, S.; Chen, X.; Luo, J. Development and Experimental Evaluation of a 3D Vision System for Grinding Robot. Sensors 2018, 18, 3078. https://doi.org/10.3390/s18093078