SG-LPR: Semantic-Guided LiDAR-Based Place Recognition
Abstract
1. Introduction
- We propose a unified semantic-guided LPR framework with a “segmentation-while-describing” structure, which eliminates the need for additional intermediate data-processing and storage steps.
- Building on this framework, we design the SG-LPR model, which combines the strength of the Swin Transformer in capturing global contextual information with that of U-Net in extracting fine-grained features.
- Experiments on the KITTI and NCLT datasets demonstrate the effectiveness of the proposed framework: the model outperforms the comparative baselines in both place recognition performance and generalization ability.
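The loss terms named in Section 4.2 (lazy triplet, cross-entropy, Dice, and their joint combination) can be sketched in pure Python for orientation. This is a minimal illustration under stated assumptions: the weights `alpha`/`beta`/`gamma`, the L2 descriptor distance, and the exact per-term formulations are placeholders, not the paper's implementation.

```python
import math

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def lazy_triplet_loss(q, pos, negs, margin=0.5):
    """Lazy triplet loss: only the hardest (closest) negative contributes."""
    return max(0.0, margin + l2(q, pos) - min(l2(q, n) for n in negs))

def cross_entropy_loss(probs, labels, eps=1e-8):
    """Mean cross-entropy; probs is a list of per-point class-probability
    rows, labels a list of integer class ids."""
    return -sum(math.log(p[y] + eps) for p, y in zip(probs, labels)) / len(labels)

def dice_loss(probs, labels, n_classes, eps=1e-8):
    """Soft Dice loss averaged over classes, with one-hot targets built
    from the integer labels."""
    loss = 0.0
    for c in range(n_classes):
        inter = sum(p[c] for p, y in zip(probs, labels) if y == c)
        denom = sum(p[c] for p in probs) + sum(1 for y in labels if y == c)
        loss += 1.0 - (2 * inter + eps) / (denom + eps)
    return loss / n_classes

def joint_loss(q, pos, negs, probs, labels, n_classes,
               alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the three terms; alpha/beta/gamma are illustrative
    weights, not values from the paper."""
    return (alpha * lazy_triplet_loss(q, pos, negs)
            + beta * cross_entropy_loss(probs, labels)
            + gamma * dice_loss(probs, labels, n_classes))
```

The "lazy" variant takes the maximum over negatives rather than summing them, so a single hard negative drives the gradient; the segmentation terms supervise the auxiliary branch jointly with the retrieval objective.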
2. Related Work
2.1. Handcrafted Feature-Based Methods
2.2. Deep Learning-Based Methods
3. Preliminaries
3.1. Data Representation
3.2. Problem Definition
4. Methodology
4.1. Overall Architecture
4.1.1. Feature Extraction Module
4.1.2. LPR Task Module
4.1.3. Semantic Segmentation Task Module
4.2. Loss Functions
4.2.1. Lazy Triplet Loss
4.2.2. Cross-Entropy Loss
4.2.3. Dice Loss
4.2.4. Joint Loss
5. Experiments and Results
5.1. Dataset and Experimental Settings
5.1.1. Dataset
5.1.2. Implementation Details
5.1.3. Evaluation Metrics
5.2. Comparison with State-of-the-Art
5.2.1. Quantitative Results
5.2.2. Qualitative Results
5.3. Robustness Test
5.4. Generalization Ability
5.5. Ablation Study
5.5.1. Ablation of Key Components in LPR Task Module
5.5.2. Ablation of Semantic Segmentation Auxiliary Task Branch
5.5.3. Ablation of Different Types of Input for the LPR Task Branch
5.5.4. Ablation of Loss Function Terms
5.5.5. Ablation of the Number of Semantic Categories
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
LiDAR | Light Detection and Ranging
LPR | LiDAR-based Place Recognition
BEV | Bird’s Eye View
SG-LPR | Semantic-Guided LiDAR-based Place Recognition
References
- Shi, P.; Zhang, Y.; Li, J. LiDAR-based place recognition for autonomous driving: A survey. arXiv 2023, arXiv:2306.10561. [Google Scholar]
- Yin, P.; Zhao, S.; Cisneros, I.; Abuduweili, A.; Huang, G.; Milford, M.; Liu, C.; Choset, H.; Scherer, S. General place recognition survey: Towards the real-world autonomy age. arXiv 2022, arXiv:2209.04497. [Google Scholar]
- Li, L.; Kong, X.; Zhao, X.; Huang, T.; Li, W.; Wen, F.; Zhang, H.; Liu, Y. SSC: Semantic scan context for large-scale place recognition. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 2092–2099. [Google Scholar]
- Du, J.; Wang, R.; Cremers, D. DH3D: Deep hierarchical 3D descriptors for robust large-scale 6DoF relocalization. In Proceedings of the European Conference on Computer Vision (ECCV), Virtual Venue, 23–28 August 2020; pp. 744–762. [Google Scholar]
- Komorowski, J. MinkLoc3D: Point cloud based large-scale place recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual Venue, 3–8 January 2021; pp. 1790–1799. [Google Scholar]
- Luo, L.; Zheng, S.; Li, Y.; Fan, Y.; Yu, B.; Cao, S.Y.; Li, J.; Shen, H.L. BEVPlace: Learning LiDAR-based place recognition using bird’s eye view images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 4–6 October 2023; pp. 8700–8709. [Google Scholar]
- Uy, M.A.; Lee, G.H. PointNetVLAD: Deep point cloud based retrieval for large-scale place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4470–4479. [Google Scholar]
- Vidanapathirana, K.; Ramezani, M.; Moghadam, P.; Sridharan, S.; Fookes, C. LoGG3D-Net: Locally guided global descriptor learning for 3D place recognition. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 2215–2221. [Google Scholar]
- Arce, J.; Vödisch, N.; Cattaneo, D.; Burgard, W.; Valada, A. PADLoC: LiDAR-based deep loop closure detection and registration using panoptic attention. IEEE Robot. Autom. Lett. 2023, 8, 1319–1326. [Google Scholar] [CrossRef]
- Kong, X.; Yang, X.; Zhai, G.; Zhao, X.; Zeng, X.; Wang, M.; Liu, Y.; Li, W.; Wen, F. Semantic graph based place recognition for 3D point clouds. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 8216–8223. [Google Scholar]
- Yin, P.; Xu, L.; Feng, Z.; Egorov, A.; Li, B. PSE-Match: A viewpoint-free place recognition method with parallel semantic embedding. IEEE Trans. Intell. Transp. Syst. 2021, 23, 11249–11260. [Google Scholar] [CrossRef]
- Kong, D.; Li, X.; Xu, Q.; Hu, Y.; Ni, P. SC_LPR: Semantically consistent LiDAR place recognition based on chained cascade network in long-term dynamic environments. IEEE Trans. Image Process. 2024, 33, 2145–2157. [Google Scholar] [CrossRef] [PubMed]
- Li, L.; Kong, X.; Zhao, X.; Huang, T.; Li, W.; Wen, F.; Zhang, H.; Liu, Y. RINet: Efficient 3D LiDAR-based place recognition using rotation invariant neural network. IEEE Robot. Autom. Lett. 2022, 7, 4321–4328. [Google Scholar] [CrossRef]
- Vidanapathirana, K.; Moghadam, P.; Harwood, B.; Zhao, M.; Sridharan, S.; Fookes, C. Locus: LiDAR-based place recognition using spatiotemporal higher-order pooling. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 5075–5081. [Google Scholar]
- Dai, D.; Wang, J.; Chen, Z.; Bao, P. SC-LPR: Spatiotemporal context based LiDAR place recognition. Pattern Recognit. Lett. 2022, 156, 160–166. [Google Scholar] [CrossRef]
- Chen, X.; Läbe, T.; Milioto, A.; Röhling, T.; Vysotska, O.; Haag, A.; Behley, J.; Stachniss, C. OverlapNet: Loop closing for LiDAR-based SLAM. arXiv 2021, arXiv:2105.11344. [Google Scholar]
- Wu, H.; Zhang, Z.; Lin, S.; Mu, X.; Zhao, Q.; Yang, M.; Qin, T. MapLocNet: Coarse-to-Fine Feature Registration for Visual Re-Localization in Navigation Maps. arXiv 2024, arXiv:2407.08561. [Google Scholar]
- Ming, Y.; Yang, X.; Zhang, G.; Calway, A. CGiS-Net: Aggregating colour, geometry and implicit semantic features for indoor place recognition. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 6991–6997. [Google Scholar]
- Ming, Y.; Ma, J.; Yang, X.; Dai, W.; Peng, Y.; Kong, W. AEGIS-Net: Attention-Guided Multi-Level Feature Aggregation for Indoor Place Recognition. In Proceedings of the ICASSP 2024–2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, 14–19 April 2024; pp. 4030–4034. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual Venue, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Yin, H.; Xu, X.; Lu, S.; Chen, X.; Xiong, R.; Shen, S.; Stachniss, C.; Wang, Y. A survey on global LiDAR localization: Challenges, advances and open problems. Int. J. Comput. Vis. 2024, 132, 3139–3171. [Google Scholar] [CrossRef]
- Kim, G.; Kim, A. Scan Context: Egocentric spatial descriptor for place recognition within 3d point cloud map. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4802–4809. [Google Scholar]
- Wang, Y.; Sun, Z.; Xu, C.Z.; Sarma, S.E.; Yang, J.; Kong, H. LiDAR iris for loop-closure detection. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 5769–5775. [Google Scholar]
- He, L.; Wang, X.; Zhang, H. M2DP: A novel 3D point cloud descriptor and its application in loop closure detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, South Korea, 9–14 October 2016; pp. 231–237. [Google Scholar]
- Magnusson, M.; Andreasson, H.; Nuchter, A.; Lilienthal, A.J. Appearance-based loop detection from 3D laser data using the normal distributions transform. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 23–28. [Google Scholar]
- Bosse, M.; Zlot, R. Place recognition using keypoint voting in large 3D lidar datasets. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 2677–2684. [Google Scholar]
- Dubé, R.; Dugas, D.; Stumm, E.; Nieto, J.; Siegwart, R.; Cadena, C. SegMatch: Segment based place recognition in 3D point clouds. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Marina Bay Sands, Singapore, 29 May–3 June 2017; pp. 5266–5272. [Google Scholar]
- Zou, X.; Li, J.; Wang, Y.; Liang, F.; Wu, W.; Wang, H.; Yang, B.; Dong, Z. PatchAugNet: Patch feature augmentation-based heterogeneous point cloud place recognition in large-scale street scenes. ISPRS J. Photogramm. Remote Sens. 2023, 206, 273–292. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Arandjelovic, R.; Gronat, P.; Torii, A.; Pajdla, T.; Sivic, J. NetVLAD: CNN architecture for weakly supervised place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5297–5307. [Google Scholar]
- Zhang, W.; Xiao, C. PCAN: 3D attention map learning using contextual information for point cloud based retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 12436–12445. [Google Scholar]
- Xia, Y.; Xu, Y.; Li, S.; Wang, R.; Du, J.; Cremers, D.; Stilla, U. SOE-Net: A self-attention and orientation encoding network for point cloud based place recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual Venue, 19–25 June 2021; pp. 11348–11357. [Google Scholar]
- Sun, Q.; Liu, H.; He, J.; Fan, Z.; Du, X. DAGC: Employing dual attention and graph convolution for point cloud based place recognition. In Proceedings of the 2020 International Conference on Multimedia Retrieval (ICMR), New York, NY, USA, 8–11 June 2020; pp. 224–232. [Google Scholar]
- Hui, L.; Yang, H.; Cheng, M.; Xie, J.; Yang, J. Pyramid point cloud transformer for large-scale place recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual Venue, 11–17 October 2021; pp. 6098–6107. [Google Scholar]
- Fan, Z.; Song, Z.; Liu, H.; Lu, Z.; He, J.; Du, X. SVT-Net: Super light-weight sparse voxel transformer for large scale place recognition. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Virtual Venue, 22 February–1 March 2022; Volume 36, pp. 551–560. [Google Scholar]
- Cattaneo, D.; Vaghi, M.; Valada, A. LCDNet: Deep loop closure detection and point cloud registration for LiDAR SLAM. IEEE Trans. Robot. 2022, 38, 2074–2093. [Google Scholar] [CrossRef]
- Xia, Y.; Gladkova, M.; Wang, R.; Li, Q.; Stilla, U.; Henriques, J.F.; Cremers, D. CASSPR: Cross attention single scan place recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 4–6 October 2023; pp. 8461–8472. [Google Scholar]
- Wu, T.; Fu, H.; Liu, B.; Xue, H.; Ren, R.; Tu, Z. Detailed analysis on generating the range image for lidar point cloud processing. Electronics 2021, 10, 1224. [Google Scholar] [CrossRef]
- Ma, J.; Zhang, J.; Xu, J.; Ai, R.; Gu, W.; Chen, X. OverlapTransformer: An efficient and yaw-angle-invariant transformer network for LiDAR-based place recognition. IEEE Robot. Autom. Lett. 2022, 7, 6958–6965. [Google Scholar] [CrossRef]
- Xu, X.; Yin, H.; Chen, Z.; Li, Y.; Wang, Y.; Xiong, R. DiSCO: Differentiable scan context with orientation. IEEE Robot. Autom. Lett. 2021, 6, 2791–2798. [Google Scholar] [CrossRef]
- Luo, L.; Cao, S.; Li, X.; Xu, J.; Ai, R.; Yu, Z.; Chen, X. BEVPlace++: Fast, Robust, and Lightweight LiDAR Global Localization for Unmanned Ground Vehicles. arXiv 2024, arXiv:2408.01841. [Google Scholar]
- Cao, F.; Yan, F.; Wang, S.; Zhuang, Y.; Wang, W. Season-invariant and viewpoint-tolerant LiDAR place recognition in GPS-denied environments. IEEE Trans. Ind. Electron. 2020, 68, 563–574. [Google Scholar] [CrossRef]
- Lu, S.; Xu, X.; Tang, L.; Xiong, R.; Wang, Y. DeepRING: Learning roto-translation invariant representation for LiDAR based place recognition. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 1904–1911. [Google Scholar]
- Ma, J.; Xiong, G.; Xu, J.; Chen, X. CVTNet: A cross-view transformer network for LiDAR-based place recognition in autonomous driving environments. IEEE Trans. Ind. Inform. 2023, 20, 4039–4048. [Google Scholar] [CrossRef]
- Zhang, J.; Zhang, Y.; Rong, L.; Tian, R.; Wang, S. MVSE-Net: A Multi-View Deep Network With Semantic Embedding for LiDAR Place Recognition. IEEE Trans. Intell. Transp. Syst. 2024, 25, 17174–17186. [Google Scholar] [CrossRef]
- Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Stachniss, C.; Gall, J. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, 27 October–2 November 2019; pp. 9297–9307. [Google Scholar]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel, 23–27 October 2022; pp. 205–218. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Carlevaris-Bianco, N.; Ushani, A.K.; Eustice, R.M. University of Michigan North Campus long-term vision and LiDAR dataset. Int. J. Robot. Res. 2016, 35, 1023–1035. [Google Scholar] [CrossRef]
- Cui, Y.; Chen, X.; Zhang, Y.; Dong, J.; Wu, Q.; Zhu, F. BoW3D: Bag of words for real-time loop closing in 3D LiDAR SLAM. IEEE Robot. Autom. Lett. 2022, 8, 2828–2835. [Google Scholar] [CrossRef]
# | Methods | 00 | 02 | 05 | 06 | 07 | 08 | Mean
---|---|---|---|---|---|---|---|---
1 | M2DP [24] | 0.708 | 0.717 | 0.602 | 0.787 | 0.560 | 0.073 | 0.575
 | SC [22] | 0.750 | 0.782 | 0.895 | 0.968 | 0.662 | 0.607 | 0.777
 | LI [23] | 0.668 | 0.762 | 0.768 | 0.913 | 0.629 | 0.478 | 0.703
 | PNV [7] | 0.779 | 0.727 | 0.541 | 0.852 | 0.631 | 0.037 | 0.595
 | OT [39] | 0.952 | 0.853 | 0.909 | 0.987 | 0.330 | 0.256 | 0.715
 | DiSCO [40] | 0.964 | 0.892 | 0.964 | 0.990 | 0.897 | 0.903 | 0.935
 | LoGG3D-Net [8] | 0.953 | 0.888 | 0.976 | 0.977 | 1.000 | 0.843 | 0.939
 | BEVPlace [6] | 0.979 | 0.900 | 0.974 | 0.991 | 0.906 | 0.894 | 0.941
2 | SSC [3] | 0.951 | 0.891 | 0.951 | 0.985 | 0.875 | 0.940 | 0.932
 | SGPR [10] | 0.820 | 0.751 | 0.751 | 0.655 | 0.868 | 0.750 | 0.766
 | Locus [14] | 0.957 | 0.745 | 0.968 | 0.948 | 0.921 | 0.900 | 0.907
 | RINet [13] | 0.978 | 0.947 | 0.917 | 0.978 | 0.967 | 0.869 | 0.943
 | SC_LPR [12] | 0.900 | 0.870 | 0.920 | 0.910 | 0.870 | 0.650 | 0.850
3 | SG-LPR (Ours) | 0.980 | 0.918 | 0.976 | 1.000 | 1.000 | 0.898 | 0.962
# | Methods | 00 | 02 | 05 | 06 | 07 | 08 | Mean | Cmp *
---|---|---|---|---|---|---|---|---|---
1 | M2DP [24] | 0.276 | 0.282 | 0.341 | 0.316 | 0.204 | 0.201 | 0.270 | −0.305
 | SC [22] | 0.719 | 0.734 | 0.844 | 0.898 | 0.606 | 0.546 | 0.725 | −0.052
 | LI [23] | 0.667 | 0.764 | 0.772 | 0.912 | 0.633 | 0.470 | 0.703 | 0.000
 | PNV [7] | 0.083 | 0.090 | 0.490 | 0.094 | 0.064 | 0.086 | 0.151 | −0.444
 | DiSCO [40] | 0.960 | 0.891 | 0.952 | 0.985 | 0.894 | 0.892 | 0.929 | −0.006
 | BEVPlace [6] | 0.979 | 0.900 | 0.974 | 0.991 | 0.906 | 0.894 | 0.941 | 0.000
2 | SSC [3] | 0.955 | 0.889 | 0.952 | 0.986 | 0.876 | 0.943 | 0.934 | +0.002
 | SGPR [10] | 0.772 | 0.716 | 0.723 | 0.640 | 0.748 | 0.678 | 0.713 | −0.053
 | Locus [14] | 0.944 | 0.726 | 0.960 | 0.927 | 0.911 | 0.877 | 0.891 | −0.016
 | RINet [13] | 0.992 | 0.942 | 0.954 | 1.000 | 0.990 | 0.962 | 0.973 | +0.030
 | SC_LPR [12] | 0.900 | 0.870 | 0.920 | 0.910 | 0.870 | 0.650 | 0.850 | 0.000
3 | SG-LPR (Ours) | 0.969 | 0.913 | 0.976 | 0.993 | 1.000 | 0.880 | 0.955 | −0.007
Methods | 2012-02-04 | 2012-03-17 | 2012-06-15 | 2012-09-28 | 2012-11-16 | 2013-02-23 | Mean
---|---|---|---|---|---|---|---
M2DP [24] | 0.632 | 0.580 | 0.424 | 0.406 | 0.493 | 0.279 | 0.469
BoW3D [51] | 0.149 | 0.107 | 0.065 | 0.050 | 0.052 | 0.075 | 0.083
CVTNet [44] | 0.892 | 0.880 | 0.812 | 0.749 | 0.771 | 0.803 | 0.818
LoGG3D-Net [8] | 0.699 | 0.196 | 0.110 | 0.087 | 0.109 | 0.256 | 0.243
LCDNet [36] | 0.605 | 0.542 | 0.442 | 0.349 | 0.317 | 0.109 | 0.394
BEVPlace [6] | 0.935 | 0.927 | 0.874 | 0.878 | 0.889 | 0.862 | 0.894
BEVPlace++ [41] | 0.953 | 0.942 | 0.902 | 0.889 | 0.913 | 0.878 | 0.913
SG-LPR (Ours) | 0.947 | 0.936 | 0.931 | 0.916 | 0.914 | 0.913 | 0.926
# | Convs * | CBAM | NetVLAD | 00 | 02 | 05 | 06 | 07 | 08 | Mean
---|---|---|---|---|---|---|---|---|---|---
1 | ✓ | | | 0.926 | 0.836 | 0.878 | 0.991 | 0.224 | 0.716 | 0.762
2 | ✓ | ✓ | | 0.961 | 0.906 | 0.936 | 0.983 | 0.885 | 0.819 | 0.915
3 | ✓ | ✓ | ✓ | 0.980 | 0.918 | 0.976 | 1.000 | 1.000 | 0.898 | 0.962
# | seg_branch | 00 | 02 | 05 | 06 | 07 | 08 | Mean
---|---|---|---|---|---|---|---|---
1 | | 0.932 | 0.842 | 0.903 | 0.975 | 0.604 | 0.785 | 0.840
2 | ✓ | 0.980 | 0.918 | 0.976 | 1.000 | 1.000 | 0.898 | 0.962
# | RB | RS | HS | 00 | 02 | 05 | 06 | 07 | 08 | Mean
---|---|---|---|---|---|---|---|---|---|---
1 | ✓ | | | 0.934 | 0.845 | 0.872 | 0.942 | 0.465 | 0.738 | 0.799
2 | | ✓ | | 0.955 | 0.859 | 0.926 | 0.975 | 0.936 | 0.721 | 0.895
3 | ✓ | ✓ | | 0.972 | 0.851 | 0.936 | 0.981 | 0.955 | 0.762 | 0.910
4 | | | ✓ | 0.980 | 0.918 | 0.976 | 1.000 | 1.000 | 0.898 | 0.962
# | Lazy Triplet | Cross-Entropy | Dice | 00 | 02 | 05 | 06 | 07 | 08 | Mean
---|---|---|---|---|---|---|---|---|---|---
1 | ✓ | | | 0.932 | 0.842 | 0.903 | 0.975 | 0.604 | 0.785 | 0.840
2 | ✓ | ✓ | | 0.957 | 0.906 | 0.950 | 0.973 | 1.000 | 0.852 | 0.940
3 | ✓ | | ✓ | 0.965 | 0.907 | 0.950 | 0.985 | 1.000 | 0.809 | 0.936
4 | ✓ | ✓ | ✓ | 0.980 | 0.918 | 0.976 | 1.000 | 1.000 | 0.898 | 0.962
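The per-sequence scores in the tables above appear to follow the common LPR evaluation protocol of reporting the maximum F1 over a swept decision threshold on descriptor similarity. As a hedged sketch of that computation (the linear threshold sweep, similarity convention, and pair labeling here are assumptions, not the paper's exact evaluation code):

```python
def max_f1(scores, labels, n_thresholds=100):
    """Return the best F1 over a linear sweep of similarity thresholds.
    scores: per query/candidate pair similarity (higher = more similar);
    labels: 1 if the pair is a true loop closure, else 0."""
    lo, hi = min(scores), max(scores)
    best = 0.0
    for i in range(n_thresholds + 1):
        t = lo + (hi - lo) * i / n_thresholds
        # Pairs at or above the threshold are predicted as loop closures.
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        if tp == 0:
            continue  # precision/recall undefined or zero at this threshold
        p, r = tp / (tp + fp), tp / (tp + fn)
        best = max(best, 2 * p * r / (p + r))
    return best
```

Because the metric takes the best threshold per method, it measures how separable true and false loop closures are in descriptor space rather than performance at a single fixed operating point.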
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jiang, W.; Xue, H.; Si, S.; Min, C.; Xiao, L.; Nie, Y.; Dai, B. SG-LPR: Semantic-Guided LiDAR-Based Place Recognition. Electronics 2024, 13, 4532. https://doi.org/10.3390/electronics13224532