Vehicle Recognition Based on Region Growth of Relative Tension and Similarity Measurement of Side Projection Profile of Vehicle Body
Abstract
1. Introduction
- (1)
- According to the physical shape of vehicles, two-dimensional feature extraction is carried out on the vehicle point clouds of three-dimensional regions. The side projection profile of the vehicle body not only describes the two-dimensional characteristics of vehicles but also eliminates the interference caused by a large amount of missing data. At the same time, the dimension reduction also reduces the complexity of the problem.
- (2)
- Based on the principle of least squares, similarity measurement of the side projection profile is used to recognize vehicle point clouds.
2. Methods
2.1. Point Cloud Pre-Removing
2.2. Neighborhood Division
2.2.1. Neighborhood Segmentation
- (1)
- Taking a randomly selected seed point as the center, all points within a cube of fixed radius are captured as its neighboring points.
- (2)
- The relative tension between the seed point and each of its neighboring points is computed by Formula (3), and the several neighboring points with the highest relative tension are selected as new seed points to form a new seed set.
- (3)
- The same neighborhood division is performed on the new seed set to obtain further neighboring points, which are merged with the original neighboring points into a new set of neighboring points. New seed points are then selected again by step (2).
- (4)
- The above three steps are iterated. When the new seed set can no longer capture any neighboring points, the algorithm stops. The region containing the last set of neighboring points and all of the seed sets is a complete neighborhood.
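The four steps above can be sketched as follows. The relative-tension function is only a placeholder, since Formula (3) is not reproduced in this section: inverse Euclidean distance stands in for it, and `radius` and `n_seeds` are illustrative parameters, not values from the paper.

```python
import math

def relative_tension(seed, neighbor):
    # Stand-in for Formula (3), which is not reproduced in this section:
    # here, tension simply decays with Euclidean distance from the seed.
    d = math.dist(seed, neighbor)
    return 1.0 / (d + 1e-9)

def grow_neighborhood(points, start_idx, radius, n_seeds):
    """Steps (1)-(4): cube query around each seed, keep the highest-tension
    neighbors as new seeds, and stop when no new neighbors are caught."""
    seeds = {start_idx}
    region = {start_idx}
    while seeds:
        caught = set()
        for s in seeds:
            # Step (1): all points inside an axis-aligned cube of fixed radius.
            for i, p in enumerate(points):
                if i not in region and all(
                        abs(a - b) <= radius for a, b in zip(p, points[s])):
                    caught.add(i)
        if not caught:  # Step (4): no new neighbors were caught -> stop.
            break
        region |= caught
        new_seeds = set()
        for s in seeds:
            # Step (2): keep the n_seeds caught neighbors with highest tension.
            ranked = sorted(caught,
                            key=lambda i: relative_tension(points[s], points[i]),
                            reverse=True)
            new_seeds |= set(ranked[:n_seeds])
        seeds = new_seeds  # Step (3): grow again from the new seed set.
    return sorted(region)
```

Grown from a seed inside a parked vehicle, the returned region stays within that cluster and never leaks across gaps wider than the cube radius.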
2.2.2. Neighborhood Splitting and Merging
- (1)
- Neighborhood splitting
Algorithm 1: Computing the first derivative and the second derivative.
Input: the point cloud of a neighborhood, a step length, and the parking direction along the axis.
I: count points per interval.
1: Divide the extent of the neighborhood along the parking direction into t = ceil(extent/step length) intervals.
2: For each interval, find the points whose projected coordinates fall inside it and record their number.
II: compute the first derivative.
1: For each pair of adjacent intervals, take the difference of their point counts.
2: Pad the resulting sequence with a leading 0 so that its length is t.
III: compute the second derivative.
1: Difference the first-derivative sequence in the same way.
Output: the first derivative and the second derivative.
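A minimal sketch of Algorithm 1, assuming the counts of points per interval along the parking direction are differenced once for the first derivative and again for the second; the function and variable names are illustrative, not the paper's symbols.

```python
import math

def profile_derivatives(xs, step):
    # Part I: count the points falling in each interval of width `step`
    # along the parking direction (xs are the projected coordinates).
    x_min, x_max = min(xs), max(xs)
    t = math.ceil((x_max - x_min) / step)
    counts = []
    for i in range(1, t + 1):
        lo = x_min + (i - 1) * step
        hi = x_min + i * step
        counts.append(sum(1 for x in xs if lo <= x < hi))
    # Part II: first derivative = differences of adjacent interval counts,
    # padded with a leading 0 so the sequence keeps length t.
    first = [0] + [counts[i + 1] - counts[i] for i in range(t - 1)]
    # Part III: second derivative = differences of the first derivative.
    second = [0] + [first[i + 1] - first[i] for i in range(t - 1)]
    return counts, first, second
```

For a profile with six points spread over three intervals, the sign changes of the second derivative mark where the point density of the profile bends, which is what the splitting step consumes.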
Algorithm 2: Splitting a neighborhood into two sub-regions.
Input: the point cloud P of a neighborhood, the first derivative, and the second derivative.
1: Find the set Ind of positions at which the second derivative is negative, and let l be its size.
2: If l == 0, P is kept as a single region and its index is recorded.
3: Else if the negative positions lie only at the ends of the sequence, P is likewise kept whole and its index is recorded.
4: Else, a splitting position is taken from Ind, P is divided into two sub-regions at that position, and the pair of region indices is recorded.
Output: the unsplit neighborhood P with its index, or the two sub-regions with their pair of indices.
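A minimal sketch of the splitting rule. How the exact position within Ind is chosen is an assumption here (the first interior negative position is used), as is the mapping from a derivative-bin index to a point index.

```python
def split_neighborhood(points_sorted, second):
    """A neighborhood is split where the second derivative of its profile
    turns negative; with no interior negative value it is kept whole."""
    ind = [i for i, v in enumerate(second) if v < 0]
    if not ind:                       # l == 0: no concave position at all
        return [points_sorted]
    interior = [i for i in ind if 0 < i < len(second) - 1]
    if not interior:                  # negatives only at the ends: keep whole
        return [points_sorted]
    cut = interior[0]                 # assumed: first interior negative bin
    k = max(1, int(cut / len(second) * len(points_sorted)))
    return [points_sorted[:k], points_sorted[k:]]
```

A neighborhood spanning two adjacent vehicles thus splits at the density dip between them, while a single-vehicle neighborhood passes through unchanged.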
- (2)
- Neighborhood merging
2.3. Similarity Measurement
- (1)
- Section 2.3.1 explains how to obtain the side projection profile of the point clouds in a neighborhood according to the degree of curvature of the profile curve. Mathematically, the side projection profile is a series of discrete points.
- (2)
- Generally, the point clouds of the side projection profiles obtained in (1) cannot be directly used in similarity measurement. They must be re-sampled to meet the measurement conditions described in Section 2.3.2.
- (3)
- The unique shape of a vehicle is important information. The normal distribution measurement in Section 2.3.3 provides value ranges for vehicle length and width, which are used to remove the point clouds of some non-vehicles.
- (4)
- In Section 2.3.4, the principle of least squares is used to recognize point clouds of vehicles.
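The least-squares measurement in steps (2)–(4) can be illustrated as follows; `profile_similarity`, `is_vehicle`, the template profile, and the threshold are illustrative names and values, not the paper's.

```python
def profile_similarity(profile_a, profile_b):
    # Least-squares residual between two profiles re-sampled at the same
    # abscissae; a smaller value means the profiles are more similar.
    if len(profile_a) != len(profile_b):
        raise ValueError("profiles must be re-sampled to equal length")
    return sum((a - b) ** 2 for a, b in zip(profile_a, profile_b))

def is_vehicle(profile, templates, threshold):
    # A candidate is accepted when its best residual against any vehicle
    # template falls below the threshold (a free parameter here).
    return min(profile_similarity(profile, t) for t in templates) < threshold
```

Re-sampling (Section 2.3.2) is what makes the simple zip over heights valid: both profiles must be evaluated at the same positions along the vehicle body before residuals are summed.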
2.3.1. Obtaining Side Projection Profile
2.3.2. Re-Sampling Side Projection Profile
2.3.3. Normal Distribution Measurement
2.3.4. Similarity Measurement of Side Projection Profile
3. Results and Discussion
3.1. Vehicle Recognition Results
3.2. Selection of Parameters Radius and Seed Number
3.3. Performance of Region Growth Based on Relative Tension
3.4. Performance of Neighborhood Splitting and Merging
4. Comparative Studies
4.1. Recognition Result Indices
4.2. Neighborhood Segmentation Performance
4.3. Computation Time
4.4. Recognition Performance for Specific Situations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Yu, Y.; Zheng, H. Vehicle detection from high resolution satellite imagery based on the morphological neural network. J. Harbin Eng. Univ. 2006, 7, 189–193.
- Moon, H.; Chellappa, R.; Rosenfeld, A. Performance analysis of a simple vehicle detection algorithm. Image Vis. Comput. 2002, 20, 1–13.
- Zhao, T.; Nevatia, R. Car detection in low resolution aerial image. In Proceedings of the 8th IEEE International Conference on Computer Vision, Washington, DC, USA, 7–14 July 2001.
- Ruskone, R.; Guiges, L.; Airault, S.; Jamet, O. Vehicle detection on aerial images: A structural approach. In Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996.
- Cheng, G.; Han, J.; Zhou, P.; Xu, D. Learning rotation-invariant and fisher discriminative convolutional neural networks for object detection. IEEE Trans. Image Process. 2019, 28, 265–278.
- Yang, C.; Li, W.; Lin, Z. Vehicle Object Detection in Remote Sensing Imagery Based on Multi-Perspective Convolutional Neural Network. Int. J. Geo-Inf. 2018, 7, 249.
- Wu, X.; Hong, D.; Tian, J.; Chanussot, J.; Li, W.; Tao, R. ORSIm detector: A novel object detection framework in optical remote sensing imagery using spatial-frequency channel features. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5146–5158.
- Kang, L.; Gellert, M. Fast multiclass vehicle detection on aerial images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1938–1942.
- Cheng, G.; Si, Y.; Hong, H.; Yao, X.; Guo, L. Cross-scale feature fusion for object detection in optical remote sensing images. IEEE Geosci. Remote Sens. Lett. 2020, 18, 431–435.
- Wu, X.; Hong, D.; Chanussot, J.; Xu, Y.; Tao, R.; Wang, Y. Fourier-based rotation-invariant feature boosting: An efficient framework for geospatial object detection. IEEE Geosci. Remote Sens. Lett. 2020, 17, 302–306.
- Yang, B.; Liang, F.; Huang, R. Progress, Challenges and Perspectives of 3D LiDAR Point Cloud Processing. Acta Geod. Cartogr. Sin. 2017, 46, 1509–1516.
- Park, J.; Kim, C.; Jo, K. PCSCNet: Fast 3D Semantic Segmentation of LiDAR Point Cloud for Autonomous Car using Point Convolution and Sparse Convolution Network. arXiv 2022, arXiv:2202.10047.
- Zou, T.; Chen, G.; Li, Z.; He, W.; Qu, S.; Gu, S.; Knoll, A. KAM-Net: Keypoint-Aware and Keypoint-Matching Network for Vehicle Detection from 2D Point Cloud. IEEE Trans. Artif. Intell. 2022, 3, 207–217.
- Yang, B.; Dong, Z.; Zhao, G.; Dai, W. Hierarchical extraction of urban objects from mobile laser scanning data. ISPRS J. Photogramm. Remote Sens. 2015, 99, 45–57.
- Zhang, Y.; Ge, P.; Xu, J.; Zhang, T.; Zhao, Q. Lidar-based Vehicle Target Recognition. In Proceedings of the 4th CAA International Conference on Vehicular Control and Intelligence, Hangzhou, China, 18–20 December 2020.
- Li, B. 3d fully convolutional network for vehicle detection in point cloud. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017.
- Wang, H.; Zhang, X. Real-time vehicle detection and tracking using 3D LiDAR. Asian J. Control 2022, 24, 1459–1469.
- Liang, X.; Fu, Z. MHNet: Multiscale Hierarchical Network for 3D Point Cloud Semantic Segmentation. IEEE Access 2019, 7, 173999–174012.
- Weinmann, M.; Jutzi, B.; Mallet, C. Feature relevance assessment for the semantic interpretation of 3D point cloud data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 5, 313–318.
- Weinmann, M.; Urban, S.; Hinz, S.; Jutzi, B.; Mallet, C. Distinctive 2D and 3D features for automated large-scale scene analysis in urban areas. Comput. Graph. 2015, 49, 47–57.
- Weinmann, M.; Weinmann, M.; Mallet, C.; Brédif, M. A classification-segmentation framework for the detection of individual trees in dense MMS point cloud data acquired in urban areas. Remote Sens. 2017, 9, 277.
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1912–1920.
- Maturana, D.; Scherer, S. Voxnet: A 3d convolutional neural network for real-time object recognition. In Proceedings of the International Conference on Intelligent Robots and Systems, Hamburg, Germany, 28 September–2 October 2015; pp. 922–928.
- Gorte, B. Segmentation of TIN-structured surface models. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 465–469.
- Yu, L. Research on Methods for Ground Objective Classification and Rapid Model Construction Based on Mobile Laser Scanning Data. Ph.D. Thesis, Wuhan University, Wuhan, China, 2011.
- Mohamed, M.; Morsy, S.; El-Shazly, A. Evaluation of data subsampling and neighbourhood selection for mobile LiDAR data classification. Egypt. J. Remote Sens. Space Sci. 2021, 24, 799–804.
- Mohamed, M.; Morsy, S.; El-Shazly, A. Evaluation of machine learning classifiers for 3D mobile LiDAR point cloud classification using different neighborhood search methods. Adv. LiDAR 2022, 2, 1–9.
- Seyfeli, S.; Ok, A.O. Classification of Mobile Laser Scanning Data with Geometric Features and Cylindrical Neighborhood. Baltic J. Mod. Comput. 2022, 10, 209–223.
- Xu, J.; Zhang, R.; Dou, J.; Zhu, Y.; Sun, J.; Pu, S. Rpvnet: A deep and efficient range-point-voxel fusion network for lidar point cloud segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 16024–16033.
- Xu, Y.; Tong, X.; Stilla, U. Voxel-based representation of 3D point clouds: Methods, applications, and its potential use in the construction industry. Autom. Constr. 2021, 126, 103675.
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773.
- Mohamed, M.; Morsy, S.; El-Shazly, A. Improvement of 3D LiDAR point cloud classification of urban road environment based on random forest classifier. Geocarto Int. 2022, 37, 15604–15626.
- Geng, H.; Gao, Z.; Fang, G.; Xie, Y. 3D Object Recognition and Localization with a Dense LiDAR Scanner. Actuators 2022, 11, 13.
- Zhang, T.; Vosselman, G.; Elberink, S.J.O. Vehicle recognition in aerial LiDAR point cloud based on dynamic time warping. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, IV-2/W4, 193–198.
- Zhang, T.; Kan, Y.; Jia, H.; Deng, C.; Xing, T. Urban vehicle extraction from aerial laser scanning point cloud data. Int. J. Remote Sens. 2020, 41, 6664–6697.
- Shi, S.; Jiang, L.; Deng, J.; Wang, Z.; Guo, C.; Shi, J.; Wang, X.; Li, H. PV-RCNN++: Point-voxel feature set abstraction with local vector representation for 3D object detection. arXiv 2021, arXiv:2102.00463.
- Li, J.; Dai, H.; Shao, L.; Ding, Y. From voxel to point: Iou-guided 3d object detection for point cloud with voxel-to-point decoder. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, China, 20–24 October 2021; pp. 4622–4631.
- Zhou, W.; Cao, X.; Zhang, X.; Hao, X.; Wang, D.; He, Y. Multi Point-Voxel Convolution (MPVConv) for Deep Learning on Point Clouds. arXiv 2021, arXiv:2107.13152.
- Roynard, X.; Deschaud, J.; Goulette, F. Classification of point cloud scenes with multiscale voxel deep network. arXiv 2018, arXiv:1804.03583.
- Su, H.; Maji, S.; Kalogerakis, E. Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 945–953.
- Zhang, X. Studying on the information extraction of streetlight and street tree from vehicle-borne LiDAR point cloud. Master’s Thesis, Henan Polytechnic University, Jiaozuo, China, 2017.
- Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501.
- Girardeau-Montaut, D. CloudCompare; Électricité de France (EDF) R&D: Paris, France, 2003.
- Zheng, M.; Wu, H.; Li, Y. An adaptive end-to-end classification approach for mobile laser scanning point clouds based on knowledge in urban scenes. Remote Sens. 2019, 11, 186.
- Kang, C.; Wang, F.; Zong, M.; Cheng, Y.; Lu, T. Research on improved region growing point cloud algorithm. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 42, 153–157.
- Alshawabkeh, Y. Linear feature extraction from point cloud using color information. Herit. Sci. 2020, 8, 1–13.
- Xie, Y.; Tian, J.; Zhu, X. Linking points with labels in 3D: A review of point cloud semantic segmentation. IEEE Geosci. Remote Sens. Mag. 2020, 8, 38–59.
- Huang, R.; Sang, N.; Liu, L.Y.; Luo, D.P.; Tang, J.L. A Method of Clustering Feature Vectors via Incremental Iteration. PR&AI 2010, 23, 320–326.
- Tobler, W.R. A Computer Movie Simulating Urban Growth in the Detroit Region. Econ. Geogr. 1970, 46, 234.
- Liu, Y.; Ge, Q. An indirect algorithm for contour feature extraction. Comput. Eng. Appl. 2004, 10, 51–52, 70.
- Serna, A.; Marcotegui, B.; Goulette, F.; Deschaud, J.E. Paris-rue-Madame database: A 3D mobile laser scanner dataset for benchmarking urban detection, segmentation and classification methods. In Proceedings of the 4th International Conference on Pattern Recognition Applications and Methods (ICPRAM 2014), Angers, France, 6–8 March 2014.
- Vallet, B.; Brédif, M.; Serna, A.; Marcotegui, B.; Paparoditis, N. TerraMobilita/iQmulus Urban Point Cloud Analysis Benchmark. Comput. Graph. 2015, 49, 126–133.
- Li, L.; Han, Y. Building contour matching in airborne LiDAR point clouds. Remote Sens. Inf. 2016, 2, 13–18.
- Wang, H. Vehicle identification and classification system based on laser ranging. Master’s Thesis, Tianjin University, Tianjin, China, 2011.
- Yu, Y.; Li, J.; Guan, H.; Wang, C.; Yu, J. Semiautomated extraction of street light poles from mobile LiDAR point-clouds. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1374–1386.
- Hackel, T.; Wegner, J.D.; Schindler, K. Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 177–184.
- Weinmann, M.; Jutzi, B.; Mallet, C. Semantic 3D scene interpretation: A framework combining optimal neighborhood size selection with relevant features. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-3, 181–188.
- Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 2017, 42, 1–21.
- Hartigan, J.A.; Wong, M.A. A K-Means Clustering Algorithm. Appl. Stat. 1979, 28, 100.
- Comaniciu, D.; Meer, P. Mean shift analysis and applications. In Proceedings of the 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999.
Number | Brand | Model | Body Length (mm) | Body Width (mm) |
---|---|---|---|---|
1 | Volkswagen | Golf | 4262 | 1799 |
2 | Tesla | Model 3 | 4694 | 1850 |
3 | Ford | Focus | 4647 | 1810 |
4 | Renault S.A. | Clio | 4063 | 1732 |
5 | Benz | Class A | 4622 | 1796 |
6 | Skoda | Octavia | 4675 | 1814 |
7 | Nissan | Qashqai | 4401 | 1837 |
8 | Toyota | Yaris | 4160 | 1700 |
9 | Ford | Fiesta | 3980 | 1722 |
10 | Mini | Hatch | 3832 | 1727 |
11 | Opel | Corsa | 3741 | 1608 |
12 | Citroen | C3 | 3850 | 1667 |
13 | Renault S.A. | Captur | 4122 | 1778 |
14 | BMW | Series 1 | 4341 | 1765 |
15 | Volvo | XC60 | 4688 | 1902 |
16 | Peugeot | 208 | 3962 | 1739 |
17 | Toyota | Corolla | 4635 | 1780 |
18 | Volkswagen | Polo | 4053 | 1740 |
19 | Volkswagen | Tiguan | 4486 | 1839 |
20 | F.I.A.T. | 500X | 3547 | 1627 |
21 | Smart | Smart | 2695 | 1663 |
22 | Renault S.A. | Magotan | 4865 | 1832 |
23 | Benz | Class E | 5065 | 1860 |
24 | Benz | Class S | 5259 | 1899 |
25 | Peugeot | 5008 | 4670 | 1855 |
26 | BMW | X5 | 4930 | 2004 |
27 | Audi | Q3 | 4518 | 1843 |
28 | Porsche | 911 | 4519 | 1852 |
29 | Hyundai | Encino | 4195 | 1800 |
30 | Suzuki | SX4 | 4135 | 1755 |
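As an illustration of the normal distribution measurement of Section 2.3.3, the body lengths listed in the table above yield a value range for vehicle length; the choice of k = 3 standard deviations is an assumption, not necessarily the paper's.

```python
import statistics

# Body lengths (mm) of the 30 vehicle models listed in the table above.
lengths = [4262, 4694, 4647, 4063, 4622, 4675, 4401, 4160, 3980, 3832,
           3741, 3850, 4122, 4341, 4688, 3962, 4635, 4053, 4486, 3547,
           2695, 4865, 5065, 5259, 4670, 4930, 4518, 4519, 4195, 4135]

def value_range(samples, k=3.0):
    # Mean +/- k sample standard deviations; candidate regions whose
    # length falls outside the range are removed as non-vehicles.
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mu - k * sigma, mu + k * sigma
```

The same computation applied to the body-width column gives the width gate; together the two ranges filter out candidate regions that are far too small or too large to be a vehicle.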
Data Set | Recall | Precision | F-Score |
---|---|---|---|
Paris-Rue-Madame database | 0.837 | 0.979 | 0.902 |
Data Set | Area | | Right | Left | Erroneous |
---|---|---|---|---|---|
Paris-Rue-Madame database | Area A (left in Figure 11) | Ground truth | 8 | 19 | - |
| | Recognition result | 7 | 16 | 4 |
| Area B (right in Figure 11) | Ground truth | 13 | 11 | - |
| | Recognition result | 12 | 10 | 2 |
Data Set | Point Cloud Pre-Removing | Neighborhood Division | Similarity Measurement | Total |
---|---|---|---|---|
Paris-Rue-Madame data set | 15 | 38 | 2 | 55 |
Data Set | Method | Recall | Precision | F-Score |
---|---|---|---|---|
Paris-Rue-Madame data set | Hackel et al. [56] | 0.9786 | 0.9086 | 0.9423 |
Weinmann et al. [20] | 0.6476 | 0.7948 | 0.7137 | |
Weinmann et al. [57] | 0.603 | 0.768 | 0.676 | |
Proposed | 0.837 | 0.979 | 0.902 |
Share and Cite
Zheng, M.; Wu, H. Vehicle Recognition Based on Region Growth of Relative Tension and Similarity Measurement of Side Projection Profile of Vehicle Body. Remote Sens. 2023, 15, 1493. https://doi.org/10.3390/rs15061493