PointPainting: 3D Object Detection Aided by Semantic Image Information
Abstract
1. Introduction
- Anchor weight-assignment strategy. We propose a way to assign weights to anchors based on semantic information. The detector becomes more discriminative by paying more attention to problematic anchors that carry inaccurate semantic information.
- Dual-attention module. We adopt a dual-attention module to enhance the voxelized point cloud and suppress the inaccurate semantic information it contains.
- SegIoU-based anchor assigner. We use a SegIoU-based anchor assigner to filter out abnormal positive anchors, avoiding confusion and improving detector performance.
2. Related Works
2.1. Multi-Modal 3D Object-Detection Methods
2.1.1. Raw Data Fusion
2.1.2. Feature-Level Fusion
2.1.3. Decision-Level Fusion
2.2. PointPainting
2.2.1. Semantic Segmentation
2.2.2. Point Cloud Painting
Algorithm 1 Point Cloud Painting().
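The painting step (Section 2.2.2) projects each LiDAR point into the image plane and appends the segmentation scores of the pixel it lands on. The sketch below captures this idea in NumPy; the function name, the (3, 4) projection-matrix convention, and the zero-fill for points outside the image are illustrative assumptions, not the paper's code:

```python
import numpy as np

def paint_points(points, seg_scores, proj):
    """Append per-pixel segmentation scores to each LiDAR point.

    points:     (N, 4) array of x, y, z, reflectance in the LiDAR frame
    seg_scores: (H, W, C) per-pixel class scores from the segmentation net
    proj:       (3, 4) projection matrix from the LiDAR frame to pixels
    Returns (N, 4 + C) painted points; points outside the image get zeros.
    """
    n = points.shape[0]
    h, w, c = seg_scores.shape
    homo = np.hstack([points[:, :3], np.ones((n, 1))])   # homogeneous coords
    uvz = homo @ proj.T                                  # (N, 3)
    z = uvz[:, 2]
    u = np.round(uvz[:, 0] / np.maximum(z, 1e-6)).astype(int)
    v = np.round(uvz[:, 1] / np.maximum(z, 1e-6)).astype(int)
    # keep only points in front of the camera that land inside the image
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    painted = np.zeros((n, c), dtype=seg_scores.dtype)
    painted[valid] = seg_scores[v[valid], u[valid]]
    return np.hstack([points, painted])
```

In the KITTI setting the projection matrix would come from the per-frame calibration files, and the appended channels are the per-class scores produced by the semantic segmentation network.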
2.2.3. Point-Cloud-Based Detector
3. PointPainting++
3.1. PointPainting++ Architecture
3.1.1. Anchor Weight Assignment
Algorithm 2 Mark Points().
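Section 3.1.1 pairs Algorithm 2 (marking points whose painted semantics look unreliable) with per-anchor loss weights. The exact marking criterion and weighting formula are not recoverable from this outline, so the sketch below is only one plausible reading: each anchor's weight grows with the fraction of marked points it contains, so the loss pays more attention to anchors fed inaccurate semantic information. All names and the linear formula are illustrative assumptions:

```python
import numpy as np

def anchor_weights(points_xy, marked, anchors, base=1.0, alpha=0.5):
    """Weight each BEV anchor by its fraction of semantically marked points.

    points_xy: (N, 2) point coordinates in bird's-eye view
    marked:    (N,) boolean, True for points with suspect semantic labels
    anchors:   (M, 4) axis-aligned BEV boxes as (x_min, y_min, x_max, y_max)
    Returns (M,) weights: `base` for clean anchors, up to `base + alpha`
    for anchors whose points are all marked.
    """
    weights = np.full(anchors.shape[0], base)
    for i, (x0, y0, x1, y1) in enumerate(anchors):
        inside = ((points_xy[:, 0] >= x0) & (points_xy[:, 0] <= x1) &
                  (points_xy[:, 1] >= y0) & (points_xy[:, 1] <= y1))
        if inside.any():
            weights[i] = base + alpha * marked[inside].mean()
    return weights
```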
3.1.2. Dual-Attention Module
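The dual-attention module refines the painted voxel features so that channels and locations dominated by inaccurate semantics are suppressed. A toy sketch combining a squeeze-and-excitation-style channel gate with a simple spatial gate follows; the shapes, the ReLU/sigmoid choices, and the spatial branch are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(feat, w1, w2):
    """Re-weight a (C, H, W) voxel feature map with channel and spatial gates.

    Channel gate: global average pool -> two FC layers -> sigmoid (SE-style).
    Spatial gate: per-location sigmoid over the channel-mean response.
    w1: (C//r, C) and w2: (C, C//r) weights of the SE bottleneck (r = reduction).
    """
    # channel attention: squeeze to (C,), excite back to per-channel gates
    squeeze = feat.mean(axis=(1, 2))
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))
    feat = feat * excite[:, None, None]
    # spatial attention: suppress locations with weak average response
    spatial = sigmoid(feat.mean(axis=0))
    return feat * spatial[None, :, :]
```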
3.1.3. SegIoU-Based Anchor Assigner
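SegIoU augments the geometric IoU normally used for anchor assignment with a semantic term, so positive anchors whose semantics disagree with the target class can be filtered out. The convex combination below, with a tunable weight `w_sem` (echoing the "semantic weight" discussed in Section 6.2), is an illustrative guess at the definition, not the paper's formula:

```python
import numpy as np

def seg_iou(iou, anchor_seg_score, w_sem=0.3):
    """Blend geometric IoU with a per-anchor semantic score in [0, 1]."""
    return (1.0 - w_sem) * iou + w_sem * anchor_seg_score

def assign_anchors(ious, seg_scores, pos_thr=0.6, neg_thr=0.45, w_sem=0.3):
    """Label anchors by SegIoU thresholds.

    Returns +1 (positive), 0 (negative), -1 (ignored); anchors with high
    geometric IoU but low semantic agreement fall below pos_thr and are
    kept out of the positive set.
    """
    s = seg_iou(np.asarray(ious), np.asarray(seg_scores), w_sem)
    labels = np.full(s.shape, -1, dtype=int)
    labels[s >= pos_thr] = 1
    labels[s < neg_thr] = 0
    return labels
```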
3.2. An Efficient Acceleration Algorithm
4. Experimental Setup
4.1. Dataset and Evaluation Metrics
4.2. Semantic Segmentation Network
4.3. Point-Cloud-Based Network
5. Experimental Results
5.1. Quantitative Analysis
5.2. Ablation Study
5.3. Qualitative Analysis
6. Discussions
6.1. The Influence of Anchor Weight
6.2. The Influence of Semantic Weight in SegIoU
6.3. The Influence of the Number of Positive Anchors
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Yoo, J.H.; Kim, Y.; Kim, J.; Choi, J.W. 3d-cvf: Generating joint camera and LiDAR features using cross-view spatial feature fusion for 3d object detection. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 720–736.
- Xie, L.; Xiang, C.; Yu, Z.; Xu, G.; Yang, Z.; Cai, D.; He, X. PI-RCNN: An efficient multi-sensor 3D object detector with point-based attentive cont-conv fusion module. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12460–12467.
- Huang, T.; Liu, Z.; Chen, X.; Bai, X. Epnet: Enhancing point features with image semantics for 3d object detection. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 35–52.
- Pang, S.; Morris, D.; Radha, H. CLOCs: Camera-LiDAR object candidates fusion for 3D object detection. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24–29 October 2020; pp. 10386–10393.
- Zheng, W.; Tang, W.; Jiang, L.; Fu, C.W. SE-SSD: Self-ensembling single-stage object detector from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14494–14503.
- Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 10529–10538.
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The kitti vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361.
- Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. Pointpainting: Sequential fusion for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 4604–4612.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28.
- Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927.
- Wang, Z.; Jia, K. Frustum convnet: Sliding frustums to aggregate local point-wise features for amodal 3d object detection. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 1742–1749.
- Shin, K.; Kwon, Y.P.; Tomizuka, M. Roarnet: A robust 3d object detection based on region approximation refinement. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 2510–2515.
- Paigwar, A.; Sierra-Gonzalez, D.; Erkent, Ö.; Laugier, C. Frustum-pointpillars: A multi-stage approach for 3d object detection using rgb camera and lidar. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 2926–2933.
- Du, X.; Ang, M.H.; Karaman, S.; Rus, D. A general pipeline for 3d detection of vehicles. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 3194–3200.
- Xu, S.; Zhou, D.; Fang, J.; Yin, J.; Bin, Z.; Zhang, L. Fusionpainting: Multimodal fusion with adaptive attention for 3d object detection. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 3047–3054.
- Simon, M.; Amende, K.; Kraus, A.; Honer, J.; Samann, T.; Kaulbersch, H.; Milz, S.; Michael Gross, H. Complexer-yolo: Real-time 3d object detection and tracking on semantic point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
- Yin, T.; Zhou, X.; Krähenbühl, P. Multimodal virtual point 3d detection. Adv. Neural Inf. Process. Syst. 2021, 34, 16494–16507.
- Mao, J.; Shi, S.; Wang, X.; Li, H. 3D object detection for autonomous driving: A review and new outlooks. arXiv 2022, arXiv:2206.09474.
- Liang, M.; Yang, B.; Chen, Y.; Hu, R.; Urtasun, R. Multi-task multi-sensor fusion for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7345–7353.
- Sindagi, V.A.; Zhou, Y.; Tuzel, O. MVX-Net: Multimodal VoxelNet for 3D Object Detection. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
- Li, Y.; Yu, A.W.; Meng, T.; Caine, B.; Ngiam, J.; Peng, D.; Shen, J.; Lu, Y.; Zhou, D.; Le, Q.V.; et al. Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17182–17191.
- Zhang, Y.; Chen, J.; Huang, D. Cat-det: Contrastively augmented transformer for multi-modal 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 908–917.
- Chen, X.; Zhang, T.; Wang, Y.; Wang, Y.; Zhao, H. Futr3d: A unified sensor fusion framework for 3d detection. arXiv 2022, arXiv:2203.10642.
- Liu, Z.; Tang, H.; Amini, A.; Yang, X.; Mao, H.; Rus, D.; Han, S. BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation. arXiv 2022, arXiv:2205.13542.
- Li, Y.; Qi, X.; Chen, Y.; Wang, L.; Li, Z.; Sun, J.; Jia, J. Voxel field fusion for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1120–1129.
- Bai, X.; Hu, Z.; Zhu, X.; Huang, Q.; Chen, Y.; Fu, H.; Tai, C.L. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1090–1099.
- Wang, C.; Ma, C.; Zhu, M.; Yang, X. Pointaugmenting: Cross-modal augmentation for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11794–11803.
- Xu, D.; Anguelov, D.; Jain, A. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 244–253.
- Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1907–1915.
- Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3d proposal generation and object detection from view aggregation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Pang, S.; Morris, D.; Radha, H. Fast-CLOCs: Fast camera-LiDAR object candidates fusion for 3D object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 187–196.
- Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705.
- Yin, T.; Zhou, X.; Krahenbuhl, P. Center-based 3d object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11784–11793.
- Zhou, Y.; Tuzel, O. Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223.
- Yan, Y.; Mao, Y.; Li, B. Second: Sparsely embedded convolutional detection. Sensors 2018, 18, 3337.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
Metric | PointPillars [34] | Painted PointPillars [8] | Painted PointPillars++ | Delta
---|---|---|---|---
3D mAP | 66.60 | 66.30 | 67.92 | +1.62
BEV mAP | 71.63 | 73.23 | 74.83 | +1.60

Metric | SECOND [39] | Painted SECOND [8] | Painted SECOND++ | Delta
---|---|---|---|---
3D mAP | 67.53 | 68.24 | 68.88 | +0.64
BEV mAP | 71.99 | 73.62 | 74.01 | +0.39

Metric | CenterPoint [35] | Painted CenterPoint [8] | Painted CenterPoint++ | Delta
---|---|---|---|---
3D mAP | 67.87 | 68.25 | 69.08 | +0.83
BEV mAP | 72.53 | 72.96 | 73.96 | +1.00

Metric | SECOND-IoU [39] | Painted SECOND-IoU [8] | Painted SECOND-IoU++ | Delta
---|---|---|---|---
3D mAP | 70.88 | 70.78 | 71.37 | +0.59
BEV mAP | 75.10 | 75.64 | 76.26 | +0.62
Method | Category | 3D AP (Easy) | 3D AP (Mod.) | 3D AP (Hard) | 3D mAP | BEV AP (Easy) | BEV AP (Mod.) | BEV AP (Hard) | BEV mAP
---|---|---|---|---|---|---|---|---|---
PointPillars [34] | Car | 87.19 | 77.46 | 75.81 | 80.15 | 89.84 | 87.46 | 85.32 | 87.54
 | Pedestrian | 55.13 | 50.09 | 46.71 | 50.65 | 59.56 | 55.29 | 51.74 | 55.53
 | Cyclist | 82.66 | 63.91 | 60.47 | 69.01 | 84.84 | 67.68 | 62.90 | 71.81
Painted PointPillars [8] | Car | 86.83 | 77.30 | 75.87 | 80.00 | 89.61 | 87.73 | 86.05 | 87.80
 | Pedestrian | 59.64 | 55.29 | 50.81 | 55.25 | 66.75 | 60.77 | 57.31 | 61.61
 | Cyclist | 78.52 | 57.87 | 54.57 | 63.65 | 82.59 | 66.25 | 62.06 | 70.30
Painted PointPillars++ | Car | 87.24 | 77.43 | 75.65 | 80.10 | 89.69 | 87.62 | 85.77 | 87.70
 | Pedestrian | 64.87 | 57.41 | 52.73 | 58.33 | 70.56 | 63.53 | 60.04 | 64.71
 | Cyclist | 80.49 | 59.29 | 56.18 | 65.32 | 84.75 | 67.47 | 64.06 | 72.09
Method | Category | 3D AP (Easy) | 3D AP (Mod.) | 3D AP (Hard) | 3D mAP | BEV AP (Easy) | BEV AP (Mod.) | BEV AP (Hard) | BEV mAP
---|---|---|---|---|---|---|---|---|---
SECOND [39] | Car | 88.27 | 78.27 | 77.05 | 81.20 | 89.77 | 87.62 | 86.21 | 87.87
 | Pedestrian | 56.22 | 52.36 | 47.04 | 51.87 | 59.74 | 55.02 | 51.19 | 55.32
 | Cyclist | 80.27 | 66.43 | 61.89 | 69.53 | 83.54 | 69.30 | 65.50 | 78.78
Painted SECOND [8] | Car | 87.87 | 77.91 | 76.63 | 80.80 | 89.66 | 87.57 | 86.36 | 87.86
 | Pedestrian | 58.68 | 53.88 | 50.67 | 54.41 | 62.45 | 56.12 | 54.11 | 57.56
 | Cyclist | 81.21 | 65.47 | 61.87 | 69.49 | 87.59 | 71.12 | 61.57 | 75.43
Painted SECOND++ | Car | 88.29 | 78.46 | 77.23 | 81.33 | 89.64 | 87.92 | 86.74 | 88.10
 | Pedestrian | 58.89 | 54.06 | 50.62 | 54.52 | 62.57 | 56.47 | 54.73 | 57.92
 | Cyclist | 83.42 | 66.62 | 62.30 | 70.78 | 90.05 | 70.69 | 67.28 | 76.01
Method | Category | 3D AP (Easy) | 3D AP (Mod.) | 3D AP (Hard) | 3D mAP | BEV AP (Easy) | BEV AP (Mod.) | BEV AP (Hard) | BEV mAP
---|---|---|---|---|---|---|---|---|---
CenterPoint [35] | Car | 87.16 | 79.16 | 76.95 | 81.09 | 89.03 | 87.22 | 85.91 | 87.39
 | Pedestrian | 55.75 | 52.84 | 50.48 | 53.02 | 60.00 | 58.50 | 55.35 | 57.95
 | Cyclist | 80.63 | 66.13 | 61.71 | 69.49 | 82.71 | 69.46 | 64.59 | 72.25
Painted CenterPoint [8] | Car | 87.38 | 79.48 | 77.19 | 81.35 | 89.17 | 87.57 | 86.56 | 87.76
 | Pedestrian | 57.66 | 54.30 | 50.71 | 54.22 | 61.59 | 58.60 | 55.83 | 58.67
 | Cyclist | 81.99 | 64.80 | 60.73 | 69.17 | 85.86 | 67.66 | 63.81 | 72.44
Painted CenterPoint++ | Car | 87.58 | 79.75 | 77.34 | 81.56 | 89.18 | 87.40 | 86.88 | 87.82
 | Pedestrian | 59.98 | 55.92 | 52.81 | 56.24 | 63.82 | 60.84 | 57.60 | 60.75
 | Cyclist | 81.07 | 65.19 | 62.06 | 69.44 | 87.65 | 68.02 | 64.23 | 73.30
Method | Category | 3D AP (Easy) | 3D AP (Mod.) | 3D AP (Hard) | 3D mAP | BEV AP (Easy) | BEV AP (Mod.) | BEV AP (Hard) | BEV mAP
---|---|---|---|---|---|---|---|---|---
SECOND-IoU [39] | Car | 89.10 | 79.11 | 78.17 | 82.13 | 90.14 | 88.12 | 86.83 | 88.36
 | Pedestrian | 61.45 | 55.31 | 50.26 | 55.67 | 64.84 | 58.26 | 53.98 | 59.03
 | Cyclist | 86.02 | 71.56 | 66.90 | 74.83 | 89.14 | 73.61 | 70.95 | 77.90
Painted SECOND-IoU [8] | Car | 88.63 | 78.90 | 77.88 | 81.80 | 90.13 | 87.91 | 86.91 | 88.32
 | Pedestrian | 62.08 | 55.17 | 49.78 | 55.68 | 66.03 | 58.33 | 55.32 | 59.89
 | Cyclist | 85.81 | 71.19 | 67.53 | 74.85 | 93.74 | 72.75 | 69.67 | 78.72
Painted SECOND-IoU++ | Car | 88.82 | 78.90 | 77.85 | 81.86 | 90.19 | 88.13 | 86.98 | 88.43
 | Pedestrian | 63.83 | 56.43 | 50.64 | 56.97 | 67.91 | 59.96 | 56.18 | 61.35
 | Cyclist | 86.70 | 71.58 | 67.54 | 75.27 | 93.03 | 73.46 | 70.53 | 79.01
Method | Category | 3D AP (Easy) | 3D AP (Mod.) | 3D AP (Hard) | 3D mAP | BEV AP (Easy) | BEV AP (Mod.) | BEV AP (Hard) | BEV mAP
---|---|---|---|---|---|---|---|---|---
Painted PointPillars [8] | Car | 86.83 | 77.30 | 75.87 | 80.00 | 89.61 | 87.73 | 86.05 | 87.80
 | Pedestrian | 59.64 | 55.29 | 50.81 | 55.25 | 66.75 | 60.77 | 57.31 | 61.61
 | Cyclist | 78.52 | 57.87 | 54.57 | 63.65 | 82.59 | 66.25 | 62.06 | 70.30
Painted PointPillars+I | Car | 86.91 | 77.35 | 76.05 | 80.10 | 89.63 | 87.72 | 86.08 | 87.81
 | Pedestrian | 56.83 | 53.96 | 49.02 | 53.27 | 65.24 | 60.20 | 57.07 | 60.84
 | Cyclist | 79.19 | 57.98 | 54.26 | 63.81 | 84.28 | 67.46 | 64.07 | 71.93
Painted PointPillars+II | Car | 86.97 | 77.35 | 75.85 | 80.06 | 89.58 | 87.62 | 85.99 | 87.73
 | Pedestrian | 60.73 | 55.86 | 51.60 | 56.06 | 67.67 | 62.30 | 58.37 | 62.78
 | Cyclist | 78.72 | 59.03 | 55.22 | 64.33 | 85.16 | 67.28 | 63.71 | 72.05
Painted PointPillars+III | Car | 87.24 | 77.43 | 75.65 | 80.10 | 89.69 | 87.62 | 85.77 | 87.70
 | Pedestrian | 64.87 | 57.41 | 52.73 | 58.33 | 70.56 | 63.53 | 60.04 | 64.71
 | Cyclist | 80.49 | 59.29 | 56.18 | 65.32 | 84.75 | 67.47 | 64.06 | 72.09
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gao, Z.; Wang, Q.; Pan, Z.; Zhai, Z.; Long, H. PointPainting: 3D Object Detection Aided by Semantic Image Information. Sensors 2023, 23, 2868. https://doi.org/10.3390/s23052868