Multilevel Geometric Feature Embedding in Transformer Network for ALS Point Cloud Semantic Segmentation
Abstract
1. Introduction
- (1) The GFE-T module is designed to strengthen the network’s ability to learn and capture local geometric features. By embedding these features into the point transformer, the network learns local geometric structure directly, improving its classification capability (an illustrative sketch of such geometric features follows this list).
- (2) We propose the FR-DKNN method, which addresses the inconsistent neighborhood ranges that KNN produces on unevenly distributed point clouds by using dilated K-nearest-neighbor queries within a fixed radius. This keeps the network’s discriminative capability robust when learning per-point neighborhood features (see the FR-DKNN sketch after this list).
- (3) Building on the GFE-T module and the FR-DKNN method, we design MGFE-T, a transformer-based ALS point cloud semantic segmentation network with multilevel geometric feature embedding, supervised at every level by multilevel loss aggregation (M-Loss; a minimal sketch also follows this list).
- (4) We conducted experiments on the LASDU, DFC2019, and ISPRS datasets, demonstrating the strong performance of the proposed method. Ablation experiments verify the effectiveness of each module, and cross-dataset validation demonstrates the network’s reliable generalization ability.
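This outline does not specify which geometric features GFE-T embeds, so the sketch below is only illustrative: it computes the common covariance-eigenvalue descriptors (linearity, planarity, scattering) per neighborhood, the kind of local structure cue such a module could feed into the transformer. The function name and feature set are assumptions, not the authors’ implementation.

```python
import numpy as np

def local_geometric_features(points, neighbor_idx):
    """Eigenvalue-based geometric descriptors per point (illustrative sketch).

    points:       (N, 3) xyz coordinates.
    neighbor_idx: (N, K) indices of each point's K neighbors.
    Returns:      (N, 3) array of [linearity, planarity, scattering].
    """
    feats = np.zeros((points.shape[0], 3))
    for i, idx in enumerate(neighbor_idx):
        nbrs = points[idx] - points[idx].mean(axis=0)   # center the neighborhood
        cov = nbrs.T @ nbrs / len(idx)                  # 3x3 covariance matrix
        lam = np.linalg.eigvalsh(cov)[::-1]             # eigenvalues, descending
        lam = np.maximum(lam, 1e-12)                    # guard degenerate neighborhoods
        feats[i] = ((lam[0] - lam[1]) / lam[0],         # linearity
                    (lam[1] - lam[2]) / lam[0],         # planarity
                    lam[2] / lam[0])                    # scattering
    return feats
```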
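A brute-force sketch of the fixed-radius dilated KNN query described in contribution (2): rank neighbors by distance, keep only those within a fixed radius, subsample them with a dilation stride, and pad sparse neighborhoods so every point keeps k indices. The parameter defaults and the pad-with-self choice are assumptions; a production query over millions of ALS points would use a spatial index rather than a dense distance matrix.

```python
import numpy as np

def fr_dknn(points, k=8, dilation=2, radius=2.0):
    """Fixed-radius dilated KNN (illustrative sketch). Returns (N, k) indices."""
    n = points.shape[0]
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # (N, N)
    order = np.argsort(dist, axis=1)                   # nearest first; self in column 0
    idx = np.empty((n, k), dtype=np.int64)
    for i in range(n):
        cand = order[i][dist[i, order[i]] <= radius]   # candidates inside the radius
        cand = cand[::dilation][:k]                    # dilated subsampling of neighbors
        if len(cand) < k:                              # sparse region: pad with the point itself
            cand = np.concatenate([cand, np.full(k - len(cand), i, dtype=np.int64)])
        idx[i] = cand
    return idx
```

The radius cap is what bounds the neighborhood’s spatial extent in sparse regions, while the dilation widens the receptive field without increasing k.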
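Contribution (3) names multilevel loss aggregation (M-Loss) supervising every level; one plausible reading, sketched below in PyTorch, is a weighted sum of cross-entropy losses over the per-level predictions. The uniform default weights and the assumption that labels have already been pooled to each level’s resolution are ours, not stated in the outline.

```python
import torch.nn.functional as F

def multilevel_loss(logits_per_level, labels_per_level, weights=None):
    """Aggregate supervision over all decoder levels (illustrative sketch).

    logits_per_level: list of (N_l, C) prediction tensors, one per level.
    labels_per_level: list of (N_l,) long tensors pooled to match each level.
    weights:          optional per-level weights; defaults to uniform.
    """
    weights = weights if weights is not None else [1.0] * len(logits_per_level)
    return sum(w * F.cross_entropy(logits, labels)
               for w, logits, labels in zip(weights, logits_per_level, labels_per_level))
```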
2. Methods
2.1. Overall Architecture
2.2. Geometric Feature Embedding Transformer
2.2.1. Point Transformer
2.2.2. Geometric Feature Embedding
2.3. Fixed-Radius Dilated KNN
2.4. Multilevel Loss Aggregation
3. Results
3.1. Datasets
3.2. Implementation Details
3.3. Evaluation Metrics
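The result tables below report overall accuracy (OA) and mean per-class F1 (mF1). For reference, here is a minimal sketch of the standard definitions computed from a confusion matrix; this is the conventional formulation, not necessarily the authors’ exact evaluation code.

```python
import numpy as np

def oa_and_mf1(conf):
    """conf: (C, C) confusion matrix with rows = ground truth, cols = prediction."""
    oa = np.trace(conf) / conf.sum()                    # overall accuracy
    tp = np.diag(conf).astype(float)                    # true positives per class
    precision = tp / np.maximum(conf.sum(axis=0), 1)
    recall = tp / np.maximum(conf.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return oa, f1.mean()                                # OA and mean F1
```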
3.4. Experimental Results
3.4.1. Results on the LASDU Dataset
3.4.2. Results on the DFC2019 Dataset
3.4.3. Results on the ISPRS Dataset
4. Ablation Study
4.1. Impact of Query Radius on Performance
4.2. Impact of Network Depth on Performance
4.3. Effectiveness of the Proposed Modules
4.4. Complexity and Runtime Analysis
5. Generalization Performance
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Su, H.; Maji, S.; Kalogerakis, E.; Learned-Miller, E. Multi-View Convolutional Neural Networks for 3D Shape Recognition. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 945–953. [Google Scholar]
- Qi, C.R.; Su, H.; Niessner, M.; Dai, A.; Yan, M.; Guibas, L.J. Volumetric and Multi-View CNNs for Object Classification on 3D Data. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5648–5656. [Google Scholar]
- Maturana, D.; Scherer, S. VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 922–928. [Google Scholar]
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A Deep Representation for Volumetric Shapes. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1912–1920. [Google Scholar]
- Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Advances in Neural Information Processing Systems 30, Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2018; Volume 30. [Google Scholar]
- Ma, X.; Qin, C.; You, H.; Ran, H.; Fu, Y. Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework. arXiv 2022, arXiv:2202.07123. [Google Scholar]
- Qian, G.; Li, Y.; Peng, H.; Mai, J.; Hammoud, H.; Elhoseiny, M.; Ghanem, B. PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies. Adv. Neural Inf. Process. Syst. 2022, 35, 23192–23204. [Google Scholar]
- Li, Y.; Bu, R.; Sun, M.; Wu, W.; Di, X.; Chen, B. PointCNN: Convolution On X-Transformed Points. In Advances in Neural Information Processing Systems 31, Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, QC, Canada, 3–8 December 2018; Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 31. [Google Scholar]
- Jiang, M.; Wu, Y.; Zhao, T.; Zhao, Z.; Lu, C. PointSIFT: A SIFT-like Network Module for 3D Point Cloud Semantic Segmentation. arXiv 2018, arXiv:1807.00652. [Google Scholar]
- Wu, W.; Qi, Z.; Fuxin, L. PointConv: Deep Convolutional Networks on 3D Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 9621–9630. [Google Scholar]
- Thomas, H.; Deschaud, J.-E.; Marcotegui, B.; Goulette, F.; Guibas, L.J. KPConv: Flexible and Deformable Convolution for Point Clouds. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6411–6420. [Google Scholar]
- Simonovsky, M.; Komodakis, N. Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 29–38. [Google Scholar]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 146. [Google Scholar] [CrossRef]
- Liu, Y.; Fan, B.; Xiang, S.; Pan, C. Relation-Shape Convolutional Neural Network for Point Cloud Analysis. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8887–8896. [Google Scholar]
- Wu, X.; Lao, Y.; Jiang, L.; Liu, X.; Zhao, H. Point Transformer V2: Grouped Vector Attention and Partition-Based Pooling. In Advances in Neural Information Processing Systems 35, Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 28 November–9 December 2022; Curran Associates, Inc.: Red Hook, NY, USA, 2023. [Google Scholar]
- Guo, M.-H.; Cai, J.-X.; Liu, Z.-N.; Mu, T.-J.; Martin, R.R.; Hu, S.-M. PCT: Point Cloud Transformer. Comput. Vis. Media 2021, 7, 187–199. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Lin, Y.; Vosselman, G.; Cao, Y.; Yang, M.Y. Local and Global Encoder Network for Semantic Segmentation of Airborne Laser Scanning Point Clouds. ISPRS J. Photogramm. Remote Sens. 2021, 176, 151–168. [Google Scholar] [CrossRef]
- Yousefhussien, M.; Kelbe, D.J.; Ientilucci, E.J.; Salvaggio, C. A Multi-Scale Fully Convolutional Network for Semantic Labeling of 3D Point Clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 191–204. [Google Scholar] [CrossRef]
- Zhang, K.; Ye, L.; Xiao, W.; Sheng, Y.; Zhang, S.; Tao, X.; Zhou, Y. A Dual Attention Neural Network for Airborne LiDAR Point Cloud Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5704617. [Google Scholar] [CrossRef]
- Lai, X.; Pian, W.; Bo, L.; He, L. A Building Extraction Method Based on IGA That Fuses Point Cloud and Image Data. J. Infrared Millim. Waves 2023, 43, 116–125. [Google Scholar]
- He, P.; Gao, K.; Liu, W.; Song, W.; Hu, Q.; Cheng, X.; Li, S. OFFS-Net: Optimal Feature Fusion-Based Spectral Information Network for Airborne Point Cloud Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 141–152. [Google Scholar] [CrossRef]
- Yang, Y.; Tang, R.; Wang, J.; Xia, M. A Hierarchical Deep Neural Network with Iterative Features for Semantic Labeling of Airborne LiDAR Point Clouds. Comput. Geosci. 2021, 157, 104932. [Google Scholar] [CrossRef]
- Ma, L.; Li, J.; Guan, H.; Yu, Y.; Chen, Y. STN: Saliency-Guided Transformer Network for Point-Wise Semantic Segmentation of Urban Scenes. IEEE Geosci. Remote Sens. Lett. 2022, 19, 7004405. [Google Scholar] [CrossRef]
- Li, W.; Wang, F.-D.; Xia, G.-S. A Geometry-Attentional Network for ALS Point Cloud Classification. ISPRS J. Photogramm. Remote Sens. 2020, 164, 26–40. [Google Scholar] [CrossRef]
- Jiang, T.; Wang, Y.; Liu, S.; Cong, Y.; Dai, L.; Sun, J. Local and Global Structure for Urban ALS Point Cloud Semantic Segmentation With Ground-Aware Attention. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5702615. [Google Scholar] [CrossRef]
- Jin, S.; Su, Y.; Zhao, X.; Hu, T.; Guo, Q. A Point-Based Fully Convolutional Neural Network for Airborne LiDAR Ground Point Filtering in Forested Environments. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3958–3974. [Google Scholar] [CrossRef]
- Huang, R.; Xu, Y.; Stilla, U. GraNet: Global Relation-Aware Attentional Network for Semantic Segmentation of ALS Point Clouds. ISPRS J. Photogramm. Remote Sens. 2021, 177, 1–20. [Google Scholar] [CrossRef]
- Mao, Y.; Chen, K.; Diao, W.; Sun, X.; Lu, X.; Fu, K.; Weinmann, M. Beyond Single Receptive Field: A Receptive Field Fusion-and-Stratification Network for Airborne Laser Scanning Point Cloud Classification. ISPRS J. Photogramm. Remote Sens. 2022, 188, 45–61. [Google Scholar] [CrossRef]
- Zhao, H.; Jiang, L.; Jia, J.; Torr, P.; Koltun, V. Point Transformer. arXiv 2021, arXiv:2012.09164. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Advances in Neural Information Processing Systems 30, Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
- Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Minneapolis, MN, USA, 2–7 June 2019. [Google Scholar]
- Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.; Le, Q.V.; Salakhutdinov, R. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv 2019, arXiv:1901.02860. [Google Scholar]
- Ramachandran, P.; Parmar, N.; Vaswani, A.; Bello, I.; Levskaya, A.; Shlens, J. Stand-Alone Self-Attention in Vision Models. In Advances in Neural Information Processing Systems 32, Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 32. [Google Scholar]
- Zhao, H.; Jia, J.; Koltun, V. Exploring Self-Attention for Image Recognition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 10073–10082. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Wang, L.; Wu, J.; Liu, X.; Ma, X.; Cheng, J. Semantic Segmentation of Large-Scale Point Clouds Based on Dilated Nearest Neighbors Graph. Complex Intell. Syst. 2022, 8, 3833–3845. [Google Scholar] [CrossRef]
- Ye, Z.; Xu, Y.; Huang, R.; Tong, X.; Li, X.; Liu, X.; Luan, K.; Hoegner, L.; Stilla, U. LASDU: A Large-Scale Aerial LiDAR Dataset for Semantic Labeling in Dense Urban Areas. ISPRS Int. J. Geo-Inf. 2020, 9, 450. [Google Scholar] [CrossRef]
- Le Saux, B.; Yokoya, N.; Haensch, R.; Brown, M. 2019 IEEE GRSS Data Fusion Contest: Large-Scale Semantic 3D Reconstruction [Technical Committees]. IEEE Geosci. Remote Sens. Mag. 2019, 7, 33–36. [Google Scholar] [CrossRef]
- Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual Classification of Lidar Data and Building Object Detection in Urban Areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165. [Google Scholar] [CrossRef]
- Li, J.; Weinmann, M.; Sun, X.; Diao, W.; Feng, Y.; Hinz, S.; Fu, K. VD-LAB: A View-Decoupled Network with Local-Global Aggregation Bridge for Airborne Laser Scanning Point Cloud Classification. ISPRS J. Photogramm. Remote Sens. 2022, 186, 19–33. [Google Scholar] [CrossRef]
- Zeng, T.; Luo, F.; Guo, T.; Gong, X.; Xue, J.; Li, H. Recurrent Residual Dual Attention Network for Airborne Laser Scanning Point Cloud Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5702614. [Google Scholar] [CrossRef]
- Zeng, T.; Luo, F.; Guo, T.; Gong, X.; Xue, J.; Li, H. Multilevel Context Feature Fusion for Semantic Segmentation of ALS Point Cloud. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5506605. [Google Scholar] [CrossRef]
- Zhang, R.; Chen, S.; Wang, X.; Zhang, Y. IPCONV: Convolution with Multiple Different Kernels for Point Cloud Semantic Segmentation. Remote Sens. 2023, 15, 5136. [Google Scholar] [CrossRef]
- Pirotti, F.; Tonion, F. Classification of Aerial Laser Scanning Point Clouds Using Machine Learning: A Comparison Between Random Forest and TensorFlow. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 1105–1111. [Google Scholar] [CrossRef]
- Atik, M.E.; Duran, Z.; Seker, D.Z. Machine Learning-Based Supervised Classification of Point Clouds Using Multiscale Geometric Features. ISPRS Int. J. Geo-Inf. 2021, 10, 187. [Google Scholar] [CrossRef]
- Feng, C.-C.; Guo, Z. A Hierarchical Approach for Point Cloud Classification With 3D Contextual Features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5036–5048. [Google Scholar] [CrossRef]
Class | Training Number | Training Ratio | Test Number | Test Ratio
---|---|---|---|---
Ground | 49,517,558 | 64.63% | 4,701,150 | 66.78%
High_veg | 11,152,546 | 14.56% | 944,234 | 13.41%
Building | 10,136,608 | 13.23% | 915,618 | 13.01%
Water | 1,295,778 | 1.69% | 80,387 | 1.14%
Bridge | 869,547 | 1.13% | 90,032 | 1.28%
Unlabeled | 3,642,710 | 4.75% | 308,205 | 4.38%
Sum | 76,614,747 | | 7,039,626 |
Method | Ground | Building | Trees | Low_veg | Artifacts | OA | mF1 |
---|---|---|---|---|---|---|---|
GraNet [29] | 89.9 | 95.8 | 86.1 | 64.7 | 42.4 | 86.2 | 75.8 |
VD-LAB [42] | 91.2 | 95.5 | 87.2 | 73.5 | 44.6 | 88.0 | 78.4 |
RFFS [30] | 90.9 | 95.4 | 86.8 | 71.0 | 44.4 | 87.1 | 77.7 |
RRDAN [43] | 91.6 | 96.6 | 84.1 | 66.3 | 48.3 | 87.7 | 77.4 |
MCFN [44] | 91.6 | 96.7 | 85.9 | 67.1 | 43.8 | 88.0 | 77.0 |
IPCONV [45] | 90.5 | 96.3 | 85.8 | 59.6 | 46.3 | 86.7 | 75.7 |
Ours | 92.6 | 96.6 | 86.1 | 74.4 | 50.7 | 89.1 | 80.1 |
Method | Ground | High_veg | Building | Water | Bridge | OA | mF1 |
---|---|---|---|---|---|---|---|
LGENet [19] | 99.3 | 98.3 | 92.8 | 47.4 | 79.1 | 98.4 | 83.4 |
DA-Net [21] | 99.3 | 97.6 | 92.7 | 41.6 | 85.1 | 98.3 | 83.3 |
Local and Global [27] | 98.9 | 96.1 | 90.2 | 41.6 | 83.7 | 94.8 | 81.4
RFFS-Net [30] | 96.6 | 96.1 | 88.7 | 77.8 | 81.0 | 94.3 | 88.0 |
RRDAN [43] | 99.1 | 98.1 | 95.8 | 62.8 | 82.3 | 98.1 | 87.6 |
IPCONV [45] | 98.8 | 97.3 | 92.9 | 92.1 | 58.2 | 97.1 | 87.9 |
Ours | 99.6 | 96.6 | 95.0 | 94.0 | 93.3 | 98.5 | 95.7 |
Method | Power | Low_veg | Imp_surf | Car | Fence/Hedge | Roof | Facade | Shrub | Tree | OA | mF1 |
---|---|---|---|---|---|---|---|---|---|---|---|
GraNet [29] | 67.7 | 82.7 | 91.7 | 80.9 | 51.1 | 94.5 | 62.0 | 49.9 | 82.0 | 84.5 | 73.6 |
VD-LAB [42] | 69.3 | 80.5 | 90.4 | 79.4 | 38.3 | 89.5 | 59.7 | 47.5 | 77.2 | 81.4 | 70.2 |
RFFS [30] | 75.5 | 80.0 | 90.5 | 78.5 | 45.5 | 92.7 | 57.9 | 48.3 | 75.7 | 82.1 | 71.6 |
RRDAN [43] | 72.2 | 81.7 | 91.2 | 84.6 | 44.8 | 94.7 | 65.2 | 52.0 | 85.3 | 84.9 | 74.6 |
MCFN [44] | 74.5 | 82.3 | 91.8 | 79.0 | 37.5 | 94.7 | 61.7 | 48.7 | 83.3 | 84.4 | 72.6 |
IPCONV [45] | 66.8 | 82.1 | 91.4 | 74.3 | 36.8 | 94.8 | 65.2 | 42.3 | 82.7 | 84.5 | 70.7 |
Ours | 70.7 | 84.0 | 91.8 | 79.6 | 23.6 | 95.0 | 63.4 | 49.5 | 84.3 | 85.2 | 71.3 |
Category | Method | Imp_surf | Roof | Tree | mF1
---|---|---|---|---|---
Machine Learning | RF (Pirotti et al. [46]) | 92.6 | 96.2 | 84.1 | 91.0
Machine Learning | SVM (Atik et al. [47]) | 87.7 | 74.6 | 67.8 | 76.7
Deep Learning | RRDAN [43] | 91.2 | 94.7 | 85.3 | 90.4
Deep Learning | Ours | 91.8 | 95.0 | 84.3 | 90.4
Deep Learning | OFFS-Net(S) [23] | 90.2 | 94.6 | 82.3 | 89.0
Combined ML and DL | H-MLP [48] | 83.3 | 94.7 | - | -
Combined ML and DL | OFFS-Net [23] | 92.4 | 95.3 | 83.6 | 90.4
Layers | mF1 (%) | OA (%)
---|---|---|
3 | 79.3 | 88.9 |
4 | 80.1 | 89.1 |
5 | 79.0 | 88.8 |
Model | GFE-T | FR-DKNN | M-Loss | Ground | Building | Trees | Low_veg | Artifacts | OA | mF1
---|---|---|---|---|---|---|---|---|---|---
baseline | | | | 91.7 | 96.1 | 86.5 | 70.6 | 43.1 | 88.0 | 77.6
A | √ | √ | √ | 92.6 | 96.6 | 86.1 | 74.4 | 50.7 | 89.1 | 80.1
B | | √ | √ | 92.0 | 96.4 | 86.4 | 73.3 | 49.2 | 88.5 | 79.5
C | √ | | √ | 92.0 | 96.4 | 86.2 | 72.0 | 48.6 | 88.4 | 79.0
D | √ | √ | | 92.2 | 96.6 | 86.3 | 72.1 | 49.7 | 88.7 | 79.4
Model | Params | FLOPs | Training Time
---|---|---|---|
baseline | 7.41 M | 9.39 G | 2 h 48 m |
A (Ours) | 8.49 M | 10.39 G | 3 h 29 m |
B (No GFE-T) | 7.59 M | 9.61 G | 3 h 11 m |
C (No FR-DKNN) | 8.49 M | 10.39 G | 2 h 55 m |
D (No M-Loss) | 8.31 M | 9.84 G | 3 h 27 m |
Experiment | Ground | Building | Trees |
---|---|---|---|
Exp. 1: Train on LASDU, Test on LASDU | 92.6 | 96.2 | 84.1
Exp. 2: Train on DFC2019, Test on LASDU (∆) | 98.8 (+6.2) | 95.6 (−0.6) | 82.2 (−1.9)