Robust Spacecraft Component Detection in Point Clouds
Abstract
1. Introduction
- (1) Cylinders are first detected by PEARL [26]: initial cylinder proposals are generated with RANSAC, and energy-based multi-cylinder fitting and parameter estimation via least-squares fitting are then iteratively executed to detect the desired cylinders.
- (2) Planes are detected using the Hough transform and finally represented as bounded patches by their minimum bounding rectangles (MBRs).
- (3) Cuboids are recognized last, using pair-wise geometry relations among the detected patches.
- (1) The cuboid is a common geometric primitive that nevertheless cannot be parameterized directly, and its detection and recognition remain open problems. In this paper, we propose a new method to recognize cuboids from rectangular patches, using geometry relation criteria to infer opposite and adjacent cuboid faces. Moreover, robust estimation methods for cuboid orientation and dimensions are also proposed.
- (2) We propose a fully automatic spacecraft component detection scheme that provides concise, abstract geometric representations of space objects from unstructured 3D point cloud data. To make the scheme robust, we introduce several dedicated procedures and improvements, such as an adaptive dimensional unit, the use of PEARL for robust cylinder detection, and an improved MBR estimation.
2. The Proposed Scheme
- (1) First, point cloud preprocessing is conducted to remove outliers. Meanwhile, an adaptive dimensional unit is estimated, which is used in the subsequent detection steps.
- (2) Since a local area of a cylinder surface can be approximated by a plane, and a plane can likewise be treated as a local area of a cylinder with a very large radius, plane points and cylinder points interfere with each other's detection. Thus, to avoid the influence of plane points on cylinder detection, prominent patches (in our case, the ten planes with the most points) are removed in advance via the patch detection module. Note that the actual patch detection is performed after cylinder detection, i.e., in Steps (3) and (4) below.
- (3) Cylinders are detected by the cylinder detection module in the fashion of PEARL: initial cylinder proposals are first generated with RANSAC, then energy-based multi-cylinder fitting and parameter estimation via least-squares fitting are iteratively executed to detect the desired cylinder primitives. The surface points of these cylinder primitives are verified last. Note that the plane detection results of Step (2) are revoked after primitive detection and before point verification.
- (4) Planar patches are detected by the patch detection module, which mainly employs the Hough transform. The MBR of each patch is estimated as a further representation.
- (5) Cuboids are recognized by the cuboid detection module from the patch results of Step (4). Geometric relation criteria are first applied to infer opposite and adjacent cuboid faces, through which patches belonging to the same cuboid are grouped. Then, the cuboid orientation and dimensions are robustly estimated. Plane mergence is performed last to append any missed cuboid faces.
2.1. Preprocessing
2.1.1. Removal of Outliers
2.1.2. Estimation of Dimensional Unit
2.2. Detection of Cylinders
2.2.1. Generation of Initial Cylinders
- the distances from the sampled points to the axis are inconsistent;
- the center of the cylinder lies outside the minimum enclosing box of the point cloud;
- the radius of the cylinder exceeds the minimum dimension of the minimum enclosing box;
- the inlier percentage is less than a preset threshold.
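A minimal sketch of how the last three rejection rules might be checked on a candidate cylinder (the first rule applies at sampling time and is omitted); the function and parameter names are ours, not the paper's:

```python
import numpy as np

def reject_cylinder_proposal(points, axis_point, axis_dir, radius,
                             inlier_tol, min_inlier_rate):
    """Return True if a candidate cylinder should be discarded.

    Hypothetical helper illustrating the rejection rules above; the exact
    inlier test and thresholds are illustrative assumptions.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)  # min enclosing box
    # Rule: the cylinder center must lie inside the enclosing box.
    if np.any(axis_point < lo) or np.any(axis_point > hi):
        return True
    # Rule: the radius must not exceed the smallest box dimension.
    if radius > (hi - lo).min():
        return True
    # Rule: the inlier percentage must reach a preset threshold. A point is
    # an inlier if its distance to the axis is close to the radius.
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    d = points - axis_point
    radial = d - np.outer(d @ axis_dir, axis_dir)
    dist = np.linalg.norm(radial, axis=1)
    inlier_rate = np.mean(np.abs(dist - radius) < inlier_tol)
    return inlier_rate < min_inlier_rate
```

Proposals passing all checks are kept as initial labels for the subsequent energy-based multi-cylinder fitting.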
2.2.2. Energy-Based Multi-Cylinder Fitting
- The data term is a geometric error energy that measures the disagreement between a point p and its corresponding cylinder model. It is defined as the sum of a distance proximity and an orientation proximity.
- The smooth term is a discontinuity-preserving smoothness error energy that constrains labeling consistency among neighboring points. In our scheme, the neighborhood system employs K-nearest neighbors (KNN), and the term is described by the Potts model [29].
- The label cost is a label energy that prevents over-fitting; it penalizes the number of distinct labels assigned to data points by the labeling, scaled by a coefficient.
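The original symbols were lost in extraction; the three terms combine into the standard PEARL-style objective of [26], which may be restated as (symbol names are ours):

```latex
E(L) \;=\; \sum_{p} D_p(L_p)
\;+\; \sum_{(p,q)\in\mathcal{N}} w_{pq}\,\delta\!\left(L_p \neq L_q\right)
\;+\; h \cdot \bigl|\mathcal{L}(L)\bigr|
```

Here $D_p(L_p)$ is the data term (distance plus orientation proximity of point $p$ to its assigned cylinder model $L_p$), the second sum is the Potts smooth term over the KNN neighborhood system $\mathcal{N}$ with $\delta(\cdot)$ an indicator, $\mathcal{L}(L)$ is the set of distinct labels used by labeling $L$, and $h$ is the label-cost coefficient.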
2.2.3. Iteration of Model Fitting and Estimation
2.3. Detection of Planar Patches
2.3.1. Plane Detection
2.3.2. MBR Extraction
Algorithm 1: Robust extraction of minimum bounding rectangle.
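A generic minimum-area bounding rectangle can be computed by building the convex hull and testing one rectangle aligned with each hull edge. The sketch below illustrates this baseline only; it is not the paper's exact Algorithm 1, which adds robustness improvements:

```python
import numpy as np

def _cross(o, a, b):
    # z-component of the 2D cross product (OA x OB)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def min_bounding_rectangle(pts2d):
    """Minimum-area bounding rectangle of 2D points; returns (area, corners)."""
    pts = np.unique(np.asarray(pts2d, dtype=float), axis=0)  # lexicographic sort
    lower, upper = [], []
    for p in pts:                       # monotone-chain convex hull
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in pts[::-1]:
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = np.array(lower[:-1] + upper[:-1])
    best_area, best_corners = np.inf, None
    for i in range(len(hull)):
        e = hull[(i + 1) % len(hull)] - hull[i]
        e = e / np.linalg.norm(e)
        R = np.array([[e[0], e[1]], [-e[1], e[0]]])  # rotates e onto the x-axis
        q = hull @ R.T                               # hull in the edge frame
        lo, hi = q.min(axis=0), q.max(axis=0)
        area = (hi[0] - lo[0]) * (hi[1] - lo[1])
        if area < best_area:
            corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                                [hi[0], hi[1]], [lo[0], hi[1]]]) @ R  # back to world
            best_area, best_corners = area, corners
    return best_area, best_corners
```

The optimal rectangle is guaranteed to share an edge direction with the hull, which is why testing each edge frame suffices.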
2.4. Detection of Cuboids
2.4.1. Geometry Relation Criteria
- Criterion for two opposite faces: (i) their plane normals are parallel, and their corresponding edges are parallel as well; (ii) after projecting each patch onto the other and calculating the ratio of intersection over union (IoU), the average IoU should be greater than a preset threshold.
- Criterion for two adjacent faces: (i) their plane normals are perpendicular, and their edges are respectively parallel or perpendicular; (ii) some edge of one patch is adjacent to an edge of the other with a similar length.
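A simplified sketch of the opposite-face test (NumPy). It assumes both MBRs are already expressed as axis-aligned extents in a shared plane frame and skips the edge-parallelism check, whereas the paper projects each MBR onto the other and averages the two IoUs; names and defaults are ours:

```python
import numpy as np

def are_opposite_faces(n1, n2, rect1, rect2, angle_tol_deg=5.0, iou_thresh=0.6):
    """Check the opposite-face criterion for two rectangular patches.

    rect1/rect2 are (min_xy, max_xy) extents in a common plane frame;
    this axis-aligned simplification is an illustrative assumption.
    """
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    # (i) plane normals parallel (up to sign) within an angle tolerance
    cos_angle = np.clip(abs(float(n1 @ n2)), -1.0, 1.0)
    if np.degrees(np.arccos(cos_angle)) > angle_tol_deg:
        return False
    # (ii) IoU of the two rectangles in the shared frame
    (a0, a1), (b0, b1) = rect1, rect2
    lo, hi = np.maximum(a0, b0), np.minimum(a1, b1)
    inter = float(np.prod(np.clip(hi - lo, 0.0, None)))
    union = float(np.prod(a1 - a0)) + float(np.prod(b1 - b0)) - inter
    return inter / union >= iou_thresh
```

Pairs passing this test (and the analogous adjacency test) are grouped as faces of one cuboid candidate.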
2.4.2. Orientation Estimation
2.4.3. Dimension Estimation
2.4.4. Plane Mergence
3. Experiments
3.1. Data Collection and Parameter Configuration
- Synthesized point clouds: The synthesized point clouds are generated by uniformly sampling the surfaces of 8 spacecraft computer-aided design (CAD) models in the BUAA space object image dataset (BUAA-SID) [4]. The original CAD mesh models of the 8 spacecraft are shown in Figure 5. To test our scheme and evaluate its robustness, synthesized data with different point densities, position noise and direction noise are generated. To add position noise, a deviation along a random direction is added to every point; its magnitude is the product of a standard Gaussian sample n and a distance deviation coefficient, which takes 0.1u, 0.2u or 0.4u (u is the dimensional unit described in Section 2.1). To add direction noise, a rotation about a random axis is applied to the normal of every point, with a rotation angle of 5°, 10° or 15°. We use {01U, 02U, 04U} and {05D, 10D, 15D} to denote these levels of position and direction noise, respectively. A synthesized point cloud of each spacecraft containing 50K noise-free points is used for basic testing. For the robustness analysis on density, point clouds with 20K, 10K and 05K noise-free samples are used. For the robustness analysis on position noise, point clouds with 50K samples and distance noise levels of 01U, 02U and 04U are used. For the robustness analysis on direction noise, point clouds with 50K samples and direction noise levels of 05D, 10D and 15D are used. In addition, point clouds with 20K, 10K and 05K samples, a distance noise level of 04U and a direction noise level of 15D are tested for an integrated analysis. For clarity, each synthesized point cloud is referred to as **K_**U_**D, where **K, **U and **D denote the sample number, distance noise level and direction noise level, respectively. For example, a point cloud with 50K samples, a distance noise level of 04U and a direction noise level of 15D is referred to as 50K_04U_15D.
- Reconstructed point clouds: The reconstructed point cloud data are recovered from simulated image sequences of the 8 spacecraft models via the image-based reconstruction method of [14]. In addition, reconstruction results of real scaled models of the spacecraft Shenzhou and Tiangong [14] are also included in the reconstructed point cloud data. The reconstructed point clouds of these 10 models are shown in the first column of Figure 2.
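The noise synthesis for the evaluation data can be sketched as follows. The function and argument names are ours; the model follows the description above (Gaussian-magnitude deviation along a random direction for positions, fixed-angle rotation about a random axis for normals):

```python
import numpy as np

def add_noise(points, normals, dist_coeff, angle_deg, u, rng=None):
    """Perturb a synthesized cloud with position and direction noise.

    points: (N, 3) array; normals: (N, 3) unit normals; u: the dimensional
    unit of Section 2.1. Illustrative helper, not the paper's code.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Position noise: deviation of dist_coeff * n * u (n standard Gaussian)
    # along a random unit direction, added to every point.
    dirs = rng.normal(size=points.shape)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    mags = rng.normal(size=(len(points), 1)) * dist_coeff * u
    noisy_points = points + mags * dirs
    # Direction noise: rotate each normal by angle_deg about a random axis.
    # Forcing the axis perpendicular to the normal reduces Rodrigues'
    # formula to a cosine/sine blend that preserves unit length.
    theta = np.radians(angle_deg)
    axes = rng.normal(size=normals.shape)
    axes -= np.sum(axes * normals, axis=1, keepdims=True) * normals
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    noisy_normals = np.cos(theta) * normals + np.sin(theta) * np.cross(axes, normals)
    return noisy_points, noisy_normals
```

For example, `add_noise(pts, nrm, 0.4, 15.0, u)` would produce a 04U_15D variant of a cloud under these assumptions.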
3.2. Results on Synthesized Point Clouds
3.3. Robustness Analysis
3.4. Results on Reconstructed Point Clouds
4. Conclusions
Supplementary Materials
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Cefola, P.J.; Alfriend, K.T. Sixth US/Russian Space Surveillance Workshop. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Keystone, CO, USA, 21–24 August 2006; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2006; pp. 1677–1693.
- Meng, G.; Jiang, Z.; Liu, Z.; Zhang, H.; Zhao, D. Full-viewpoint 3D Space Object Recognition Based on Kernel Locality Preserving Projections. Chin. J. Aeronaut. 2010, 23, 563–572.
- Zhang, H.; Jiang, Z.; Elgammal, A. Vision-based pose estimation for cooperative space objects. Acta Astronaut. 2013, 91, 115–122.
- Zhang, H.; Jiang, Z. Multi-view space object recognition and pose estimation based on kernel regression. Chin. J. Aeronaut. 2014, 27, 1233–1241.
- Zhang, H.; Jiang, Z.; Elgammal, A. Satellite recognition and pose estimation using Homeomorphic Manifold Analysis. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 785–792.
- Grilli, E.; Menna, F.; Remondino, F. A Review of Point Clouds Segmentation and Classification Algorithms. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 339–344.
- Ruel, S.; Luu, T.; Berube, A. Space shuttle testing of the TriDAR 3D rendezvous and docking sensor. J. Field Robot. 2012, 29, 535–553.
- Benninghoff, H.; Boge, T.; Rems, F. Autonomous navigation for on-orbit servicing. KI-Künstliche Intell. 2014, 28, 77–83.
- Zhang, X.; Zhang, H.; Wei, Q.; Jiang, Z. Pose Estimation of Space Objects Based on Hybrid Feature Matching of Contour Points. In Advances in Image and Graphics Technologies, Proceedings of the 11th Chinese Conference, IGTA 2016, Beijing, China, 8–9 July 2016; Springer: Singapore, 2016; pp. 184–191.
- Opromolla, R.; Fasano, G.; Rufino, G.; Grassi, M. Pose Estimation for Spacecraft Relative Navigation Using Model-Based Algorithms. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 431–447.
- Tan, W.; Liu, H.; Dong, Z.; Zhang, G.; Bao, H. Robust monocular SLAM in dynamic environments. In Proceedings of the 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Adelaide, Australia, 1–4 October 2013; pp. 209–218.
- Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625.
- Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210.
- Zhang, H.; Wei, Q.; Jiang, Z. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor. Sensors 2017, 17, 1689.
- Vosselman, G.; Gorte, B.G.; Sithole, G.; Rabbani, T. Recognising structure in laser scanner point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 46, 33–38.
- Li, Y.; Wu, X.; Chrysathou, Y.; Sharf, A.; Cohen-Or, D.; Mitra, N.J. GlobFit: Consistently fitting primitives by discovering global relations. In ACM SIGGRAPH 2011 Papers, Proceedings of the SIGGRAPH '11, Vancouver, BC, Canada, 7–11 August 2011; ACM: New York, NY, USA, 2011; pp. 1–12.
- Limberger, F.A.; Oliveira, M.M. Real-time detection of planar regions in unorganized point clouds. Pattern Recognit. 2015, 48, 2043–2053.
- Chen, D.; Zhang, L.; Mathiopoulos, P.T.; Huang, X. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4199–4217.
- Cao, R.; Zhang, Y.; Liu, X.; Zhao, Z. Roof plane extraction from airborne lidar point clouds. Int. J. Remote Sens. 2017, 38, 3684–3703.
- Rabbani, T.; Van Den Heuvel, F. Efficient hough transform for automatic detection of cylinders in point clouds. ISPRS WG III/3, III/4 2005, 3, 60–65.
- Qiu, R.; Zhou, Q.Y.; Neumann, U. Pipe-Run Extraction and Reconstruction from Point Clouds. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Cham, Switzerland, 2014; pp. 17–30.
- Pang, G.; Qiu, R.; Huang, J.; You, S.; Neumann, U. Automatic 3D industrial point cloud modeling and recognition. In Proceedings of the 2015 14th IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 18–22 May 2015; pp. 22–25.
- Borrmann, D.; Elseberg, J.; Lingemann, K.; Nüchter, A. The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design. 3D Res. 2011, 2, 3.
- Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2007, 26, 214–226.
- Yang, M.Y.; Förstner, W. Plane detection in point cloud data. In Proceedings of the 2nd International Conference on Machine Control Guidance, Bonn, Germany, 9–11 March 2010; Volume 1, pp. 95–104.
- Isack, H.; Boykov, Y. Energy-Based Geometric Multi-model Fitting. Int. J. Comput. Vis. 2012, 97, 123–147.
- Wang, L.; Shen, C.; Duan, F.; Guo, P. Energy-Based Multi-plane Detection from 3D Point Clouds. In Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science; Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 715–722.
- Delong, A.; Osokin, A.; Isack, H.N.; Boykov, Y. Fast Approximate Energy Minimization with Label Costs. Int. J. Comput. Vis. 2012, 96, 1–27.
- Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239.
- Wei, Q.; Jiang, Z.; Zhang, H.; Nie, S. Spacecraft Component Detection in Point Clouds. In Proceedings of the 12th Chinese Conference on Advances in Image and Graphics Technologies, IGTA 2017, Beijing, China, 30 June–1 July 2017; pp. 210–218.
- Eberly, D.H. Geometric Tools. Available online: https://www.geometrictools.com (accessed on 18 November 2017).
- Knapitsch, A.; Park, J.; Zhou, Q.Y.; Koltun, V. Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction. ACM Trans. Graph. 2017, 36.
- Schöps, T.; Schönberger, J.L.; Galliani, S.; Sattler, T.; Schindler, K.; Pollefeys, M.; Geiger, A. A Multi-View Stereo Benchmark with High-Resolution Images and Multi-Camera Videos. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
Notation | Definition | Value
---|---|---
 | Inlier rate threshold for planes in patch detection. | 0.05
 | Inlier rate threshold for cylinders in cylinder detection. | 0.1
 | IoU threshold for opposite faces in patch-wise geometry relation criteria. | 0.6
 | Angle deviation for parallel and perpendicular patches. |
 | Distance proximity threshold for surface point verification. |
 | Orientation proximity threshold for surface point verification. |
Objects | Precision [30] | Recall [30] | F1 [30] | Precision (ours) | Recall (ours) | F1 (ours)
---|---|---|---|---|---|---
cube | 72.0 | 71.5 | 71.7 | 70.5 | 64.2 | 67.2
DSP | 80.9 | 73.6 | 77.1 | 89.8 | 77.0 | 82.9
GPS | 46.3 | 50.1 | 48.2 | 64.4 | 71.6 | 67.8
Helios | 61.8 | 36.5 | 45.9 | 48.7 | 50.7 | 49.7
minisat | 33.6 | 48.1 | 39.6 | 28.6 | 39.0 | 33.0
Radarsat | 80.0 | 60.9 | 69.1 | 77.3 | 71.3 | 74.2
SCISAT | 27.8 | 42.3 | 33.6 | 51.0 | 42.6 | 46.4
SPOT | 37.1 | 37.9 | 37.5 | 39.5 | 41.5 | 40.5
Shenzhou | 81.3 | 80.3 | 80.8 | 87.3 | 85.7 | 86.5
Tiangong | 86.0 | 84.2 | 85.1 | 95.4 | 86.5 | 90.7
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wei, Q.; Jiang, Z.; Zhang, H. Robust Spacecraft Component Detection in Point Clouds. Sensors 2018, 18, 933. https://doi.org/10.3390/s18040933