Defect Detection and 3D Reconstruction of Complex Urban Underground Pipeline Scenes for Sewer Robots
Abstract
1. Introduction
- A framework for drainage pipe defect detection and 3D reconstruction is proposed to obtain comprehensive pipe condition information. This framework is illustrated in Figure 1.
- A Sewer-YOLO-Slim detection model is proposed for the automatic detection of urban drainage pipe defects, and the proposed model is deployed on pipe robot equipment.
- The YOLOv7-tiny baseline is optimized in three key areas: the backbone and neck networks are enhanced, an attention mechanism (DyHead) is integrated into the detection head, and the resulting model is pruned to achieve a lightweight design.
- The framework implements the positioning of pipeline inspection robots, the reconstruction of realistic 3D sewer scenes, and measurement functionality. This enables drainage pipeline condition data to be collected more comprehensively and presented in a clear and intuitive manner.
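The pruning step in the contributions above is detailed in Section 2.4.4; a common realization of channel pruning, assumed here for illustration rather than taken from the paper, is to rank Batch Normalization (BN) scale factors network-wide and keep only the strongest channels. A minimal numpy sketch (the function name `select_channels` is hypothetical):

```python
import numpy as np

def select_channels(bn_gammas, prune_rate=0.5):
    """Return per-layer boolean masks of channels to keep.

    bn_gammas: list of 1-D arrays of BN scale factors, one array per
    prunable layer. Channels whose |gamma| falls below the global
    prune_rate-quantile are marked for removal.
    """
    all_gammas = np.concatenate([np.abs(g) for g in bn_gammas])
    threshold = np.quantile(all_gammas, prune_rate)  # global cut-off
    return [np.abs(g) > threshold for g in bn_gammas]

# Toy example: two layers, four channels each, 50% pruning rate.
gammas = [np.array([0.9, 0.05, 0.7, 0.01]),
          np.array([0.02, 0.8, 0.03, 0.6])]
masks = select_channels(gammas, prune_rate=0.5)
print([m.tolist() for m in masks])
# → [[True, False, True, False], [False, True, False, True]]
```

After physically removing the masked channels, the slimmed network is fine-tuned to recover accuracy, matching the sparse-training-then-fine-tuning design described in the pruning experiments.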
2. Materials and Methods
2.1. Amphibious Wheeled Robot
2.2. Urban Sewer Defect Image Database
2.3. YOLOv7-Tiny Algorithm
2.4. Sewer-YOLO-Slim Model Construction
2.4.1. FasterNet Algorithm
2.4.2. GSCConv and VoVGSCCSP Modules
2.4.3. DyHead Module
2.4.4. The Design of Network Pruning
2.5. Robot Positioning and 3D Reconstruction
2.5.1. Amphibious Wheeled Inspection Robot Positioning
2.5.2. Three-Dimensional Reconstruction of the Pipe Scene
3. Configuration and Evaluation
3.1. Experimental Configuration
3.2. Evaluation Metrics
4. Experiment and Results
4.1. Sewer Defect Detection
4.1.1. Backbone Network Experiment
4.1.2. Ablation Experiment
4.1.3. Channel Pruning Experiment
4.1.4. Comparison of Different Detection Algorithms
4.1.5. Edge Deployment Experiment
4.2. Amphibious Wheeled Robot Positioning and 3D Reconstruction
4.2.1. Data
4.2.2. Robot Positioning
4.2.3. Three-Dimensional Reconstruction
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
FLOPs | Floating-Point Operations
mAP | mean Average Precision
VO | Visual Odometry
CCTV | Closed-Circuit Television
IMU | Inertial Measurement Unit
USDID | Urban Sewer Defect Image Database
BN | Batch Normalization
SIFT | Scale-Invariant Feature Transform
PnP | Perspective-n-Point
BA | Bundle Adjustment
SFM | Structure from Motion
MVS | Multiview Stereo
AP | Average Precision
TP | True Positive
FP | False Positive
FN | False Negative
References
- Hu, C.; Dong, B.; Shao, H.; Zhang, J.; Wang, Y. Toward purifying defect feature for multilabel sewer defect classification. IEEE Trans. Instrum. Meas. 2023, 72, 5008611. [Google Scholar] [CrossRef]
- Xie, Q.; Li, D.; Xu, J.; Yu, Z.; Wang, J. Automatic detection and classification of sewer defects via hierarchical deep learning. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1836–1847. [Google Scholar] [CrossRef]
- Situ, Z.; Teng, S.; Liao, X.; Chen, G.; Zhou, Q. Real-time sewer defect detection based on YOLO network, transfer learning, and channel pruning algorithm. J. Civ. Struct. Health Monit. 2024, 14, 41–57. [Google Scholar] [CrossRef]
- Hassan, S.I.; Dang, L.M.; Mehmood, I.; Im, S.; Choi, C.; Kang, J.; Park, Y.S.; Moon, H. Underground sewer pipe condition assessment based on convolutional neural networks. Automat. Constr. 2019, 106, 102849. [Google Scholar] [CrossRef]
- Wang, M.; Luo, H.; Cheng, J.C. Towards an automated condition assessment framework of underground sewer pipes based on closed-circuit television (CCTV) images. Tunn. Undergr. Space Technol. 2021, 110, 103840. [Google Scholar] [CrossRef]
- Li, Y.; Wang, H.; Dang, L.M.; Song, H.K.; Moon, H. Vision-based defect inspection and condition assessment for sewer pipes: A comprehensive survey. Sensors 2022, 22, 2722. [Google Scholar] [CrossRef]
- Li, Y.; Wang, H.; Dang, L.M.; Piran, M.J.; Moon, H. A robust instance segmentation framework for underground sewer defect detection. Measurement 2022, 190, 110727. [Google Scholar] [CrossRef]
- Suykens, J.A. Support vector machines: A nonlinear modelling and control perspective. Eur. J. Control. 2001, 7, 311–327. [Google Scholar] [CrossRef]
- Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
- Halfawy, M.R.; Hengmeechai, J. Automated defect detection in sewer closed circuit television images using histograms of oriented gradients and support vector machine. Automat. Constr. 2014, 38, 1–13. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
- Duran, O.; Althoefer, K.; Seneviratne, L.D. Automated pipe defect detection and categorization using camera/laser-based profiler and artificial neural network. IEEE Trans. Autom. Sci. Eng. 2007, 4, 118–126. [Google Scholar] [CrossRef]
- Guo, W.; Soibelman, L.; Garrett, J.H., Jr. Automated defect detection for sewer pipeline inspection and condition assessment. Automat. Constr. 2009, 18, 587–596. [Google Scholar] [CrossRef]
- Cheng, J.C.; Wang, M. Automated detection of sewer pipe defects in closed-circuit television images using deep learning techniques. Autom. Constr. 2018, 95, 155–171. [Google Scholar] [CrossRef]
- Li, D.; Xie, Q.; Yu, Z.; Wu, Q.; Zhou, J.; Wang, J. Sewer pipe defect detection via deep learning with local and global feature fusion. Automat. Constr. 2021, 129, 103823. [Google Scholar] [CrossRef]
- Kumar, S.S.; Abraham, D.M. A deep learning based automated structural defect detection system for sewer pipelines. In Proceedings of the ASCE International Conference on Computing in Civil Engineering 2019, Reston, VA, USA, 17–19 June 2019; pp. 226–233. [Google Scholar] [CrossRef]
- Tan, Y.; Cai, R.; Li, J.; Chen, P.; Wang, M. Automatic detection of sewer defects based on improved you only look once algorithm. Automat. Constr. 2021, 131, 103912. [Google Scholar] [CrossRef]
- Yin, X.; Chen, Y.; Bouferguene, A.; Zaman, H.; Al-Hussein, M.; Kurach, L. A deep learning-based framework for an automated defect detection system for sewer pipes. Automat. Constr. 2020, 109, 102967. [Google Scholar] [CrossRef]
- Oh, C.; Dang, L.M.; Han, D.; Moon, H. Robust sewer defect detection with text analysis based on deep learning. IEEE Access 2022, 10, 46224–46237. [Google Scholar] [CrossRef]
- Kumar, S.S.; Wang, M.; Abraham, D.M.; Jahanshahi, M.R.; Iseley, T.; Cheng, J.C. Deep learning–based automated detection of sewer defects in CCTV videos. J. Comput. Civ. Eng. 2020, 34, 04019047. [Google Scholar] [CrossRef]
- Zhang, P.; Zhong, Y.; Li, X. SlimYOLOv3: Narrower, Faster and Better for Real-Time UAV Applications. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; pp. 37–45. [Google Scholar] [CrossRef]
- Wu, D.; Lv, S.; Jiang, M.; Song, H. Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Comput. Electron. Agr. 2020, 178, 105742. [Google Scholar] [CrossRef]
- Zhang, J.; Zhang, R.; Shu, X.; Yu, L.; Xu, X. Channel Pruning-Based YOLOv7 Deep Learning Algorithm for Identifying Trolley Codes. Appl. Sci. 2023, 13, 10202. [Google Scholar] [CrossRef]
- Zhao, S.; Kang, F.; Li, J. Concrete dam damage detection and localisation based on YOLOv5s-HSC and photogrammetric 3D reconstruction. Automat. Constr. 2022, 143, 104555. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, F.; Li, K.; Feng, X.; Hou, W.; Liu, L.; Chen, L.; He, Y.; Wang, Y. Low-light wheat image enhancement using an explicit inter-channel sparse transformer. Comput. Electron. Agric. 2024, 224, 109169. [Google Scholar] [CrossRef]
- Huang, M.Q.; Ninić, J.; Zhang, Q.B. BIM, machine learning and computer vision techniques in underground construction: Current status and future perspectives. Tunn. Undergr. Space Technol. 2021, 108, 103677. [Google Scholar] [CrossRef]
- Tan, Y.; Deng, T.; Zhou, J.; Zhou, Z. LiDAR-Based Automatic Pavement Distress Detection and Management Using Deep Learning and BIM. J. Constr. Eng. Manag. 2024, 150, 04024069. [Google Scholar] [CrossRef]
- Lepot, M.; Stanić, N.; Clemens, F.H.L.R. A technology for sewer pipe inspection (Part 2): Experimental assessment of a new laser profiler for sewer defect detection and quantification. Automat. Constr. 2017, 73, 1–11. [Google Scholar] [CrossRef]
- Bahnsen, C.H.; Johansen, A.S.; Philipsen, M.P.; Henriksen, J.W.; Nasrollahi, K.; Moeslund, T.B. 3d sensors for sewer inspection: A quantitative review and analysis. Sensors 2021, 21, 2553. [Google Scholar] [CrossRef]
- Ahmed, A.; Ashfaque, M.; Ulhaq, M.U.; Mathavan, S.; Kamal, K.; Rahman, M. Pothole 3D reconstruction with a novel imaging system and structure from motion techniques. IEEE Trans. Intell. Transp. Syst. 2021, 23, 4685–4694. [Google Scholar] [CrossRef]
- Wang, J.; Zhang, L.; Zhang, Y. Mixture 2D convolutions for 3D medical image segmentation. Int. J. Neural. Syst. 2023, 33, 2250059. [Google Scholar] [CrossRef]
- El Madawi, K.; Rashed, H.; El Sallab, A.; Nasr, O.; Kamel, H.; Yogamani, S. RGB and LiDAR fusion based 3D semantic segmentation for autonomous driving. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 7–12. [Google Scholar] [CrossRef]
- Zhang, X.; Zhao, P.; Hu, Q.; Wang, H.; Ai, M.; Li, J. A 3D reconstruction pipeline of urban drainage pipes based on multiview image matching using low-cost panoramic video cameras. Water 2019, 11, 2101. [Google Scholar] [CrossRef]
- Fang, X.; Li, Q.; Zhu, J.; Chen, Z.; Zhang, D.; Wu, K.; Ding, K.; Li, Q. Sewer defect instance segmentation, localization, and 3D reconstruction for sewer floating capsule robots. Automat. Constr. 2022, 142, 104494. [Google Scholar] [CrossRef]
- Ma, D.; Wang, N.; Fang, H.; Chen, W.; Li, B.; Zhai, K. Attention-optimized 3D segmentation and reconstruction system for sewer pipelines employing multi-view images. Comput.-Aided Civ. Infrastruct. Eng. 2024, online version of record. [Google Scholar] [CrossRef]
- Wang, N.; Ma, D.; Du, X.; Li, B.; Di, D.; Pang, G.; Duan, Y. An automatic defect classification and segmentation method on three-dimensional point clouds for sewer pipes. Tunn. Undergr. Space Technol. 2024, 143, 105480. [Google Scholar] [CrossRef]
- Ministry of Housing and Urban-Rural Development of the People’s Republic of China. CJJ 181-2012 Technical Specification for Inspection and Evaluation of Urban Sewer; China Architecture & Building Press: Beijing, China, 2012; pp. 28–30.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475. [Google Scholar] [CrossRef]
- WongKinYiu. YOLOv7. Available online: https://github.com/WongKinYiu/yolov7 (accessed on 6 July 2022).
- Chen, J.; Kao, S.H.; He, H.; Zhuo, W.; Wen, S.; Lee, C.H.; Chan, S.H.G. Run, don’t walk: Chasing higher FLOPS for faster neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 12021–12031. [Google Scholar] [CrossRef]
- Li, H.; Li, J.; Wei, H.; Liu, Z.; Zhan, Z.; Ren, Q. Slim-neck by GSConv: A better design paradigm of detector architectures for autonomous vehicles. arXiv 2022, arXiv:2206.02424. [Google Scholar] [CrossRef]
- Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7373–7382. [Google Scholar] [CrossRef]
- Nistér, D.; Naroditsky, O.; Bergen, J. Visual odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; pp. 652–659. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Li, S.; Xu, C.; Xie, M. A robust O(n) solution to the perspective-n-point problem. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1444–1450. [Google Scholar] [CrossRef]
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A. Bundle adjustment—A modern synthesis. In Proceedings of the Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; pp. 298–372. [Google Scholar] [CrossRef]
- Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar] [CrossRef]
- Geiger, A.; Ziegler, J.; Stiller, C. Stereoscan: Dense 3d reconstruction in real-time. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 963–968. [Google Scholar] [CrossRef]
- Kanazawa, A.; Tulsiani, S.; Efros, A.A.; Malik, J. Learning category-specific mesh reconstruction from image collections. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 386–402. [Google Scholar] [CrossRef]
- Moulon, P.; Monasse, P.; Perrot, R.; Marlet, R. Openmvg: Open multiple view geometry. In Proceedings of the Reproducible Research in Pattern Recognition: First International Workshop, RRPR 2016, Cancún, Mexico, 4 December 2016; pp. 60–74. [Google Scholar] [CrossRef]
- Li, S.; Xiao, X.; Guo, B.; Zhang, L. A novel OpenMVS-based texture reconstruction method based on the fully automatic plane segmentation for 3D mesh models. Remote Sens. 2020, 12, 3908. [Google Scholar] [CrossRef]
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. Ghostnet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Liu, X.; Peng, H.; Zheng, N.; Yang, Y.; Hu, H.; Yuan, Y. EfficientViT: Memory efficient vision transformer with cascaded group attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 14420–14430. [Google Scholar] [CrossRef]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
- Ultralytics. YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 1 November 2021).
- Ultralytics. YOLOv8. Available online: https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/models/v8 (accessed on 12 January 2023).
- WongKinYiu. YOLOv9. Available online: https://github.com/WongKinYiu/yolov9 (accessed on 18 February 2024).
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. Yolov10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458. [Google Scholar] [CrossRef]
- Ultralytics. YOLOv11. Available online: https://github.com/ultralytics/ultralytics (accessed on 27 September 2024).
Dataset | Misplace | Obstacle | Root | Leaky | Fouling | Number of Images
---|---|---|---|---|---|---
Training | 3116 | 750 | 935 | 607 | 354 | 4466
Testing | 1289 | 296 | 374 | 263 | 170 | 1902
Total | 4405 | 1046 | 1309 | 870 | 524 | 6368
Parameter | Value |
---|---|
Input resolution | 416 × 416 |
Learning rate | 0.001 |
Weight decay | 0.0005 |
Epochs | 300 |
Batch size | 16 |
IoU | 0.5 |
Parameter | Value |
---|---|
Input resolution | 416 × 416 |
Sparse learning rate | 0.002 |
Sparse iterations | 200 |
Channel pruning rate | 0.5 |
Fine-tuning iterations | 300 |
Backbone | Precision (%) | Recall (%) | mAP (%) | Model Size (MB) | Total Parameters (Million) | FLOPs (Giga) | Training Time (h)
---|---|---|---|---|---|---|---|
Original | 92.1 | 84.6 | 92.0 | 12.3 | 6.03 | 13.2 | 9.847 |
GhostNet [56] | 89.0 | 79.7 | 88.8 | 11.0 | 5.29 | 11.1 | 11.576 |
ResNet18 [57] | 86.6 | 75.3 | 84.7 | 29.7 | 14.73 | 35.7 | 10.634 |
EfficientViT_M0 [58] | 87.9 | 78.8 | 87.2 | 11.8 | 5.53 | 10.1 | 12.771 |
MobileNetV3 [59] | 83.8 | 74.5 | 83.3 | 9.2 | 4.48 | 6.8 | 11.235 |
FasterNet-T0 | 93.5 | 85.2 | 93.4 | 11.6 | 5.66 | 11.6 | 6.278 |
Plan | FasterNet-T0 | GSCConv + VoVGSCCSP | DyHead
---|---|---|---|
0 | × | × | × |
1 | ✓ | × | × |
2 | ✓ | ✓ | × |
3 | ✓ | × | ✓ |
4 | ✓ | ✓ | ✓ |
Model | Precision (%) | Recall (%) | mAP (%) | Model Size (MB) | Total Parameters (Million) | FLOPs (Giga) |
---|---|---|---|---|---|---|
Plan 0 | 92.1 | 84.6 | 92.0 | 12.3 | 6.03 | 13.2 |
Plan 1 | 93.5 | 85.2 | 93.4 | 11.6 | 5.66 | 11.6 |
Plan 2 | 93.3 | 85.1 | 93.1 | 8.1 | 3.89 | 7.7 |
Plan 3 | 94.3 | 87.3 | 94.2 | 11.5 | 5.61 | 11.3 |
Plan 4 | 93.9 | 87.7 | 93.8 | 10.2 | 4.94 | 9.0 |
Pruning Rate (%) | Precision (%) | Recall (%) | mAP (%) | Model Size (MB) | Total Parameters (Million) | FLOPs (Giga)
---|---|---|---|---|---|---
0 | 93.9 | 87.7 | 93.8 | 10.2 | 4.94 | 9.0
40 | 93.8 | 87.2 | 93.7 | 6.6 | 3.14 | 5.9
50 | 93.6 | 87.4 | 93.5 | 4.9 | 2.41 | 4.5
60 | 90.5 | 82.5 | 89.4 | 3.8 | 1.86 | 3.0
70 | 84.5 | 77.4 | 83.5 | 3.4 | 1.64 | 2.2
80 | 53.9 | 51.0 | 51.9 | 3.2 | 1.52 | 1.8
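The savings at the selected 50% pruning rate can be checked directly against the unpruned (0%) row with simple arithmetic on the table values:

```python
# 0% vs. 50% pruning rate, values from the channel pruning table.
size_before, size_after = 10.2, 4.9        # model size, MB
params_before, params_after = 4.94, 2.41   # parameters, millions
flops_before, flops_after = 9.0, 4.5       # FLOPs, giga

size_cut = 100 * (1 - size_after / size_before)
param_cut = 100 * (1 - params_after / params_before)
flops_cut = 100 * (1 - flops_after / flops_before)
print(f"size -{size_cut:.1f}%, params -{param_cut:.1f}%, FLOPs -{flops_cut:.1f}%")
# → size -52.0%, params -51.2%, FLOPs -50.0%
```

That is, every cost metric is roughly halved while mAP drops only from 93.8% to 93.5%, whereas 60% pruning already costs 4.4 mAP points relative to the unpruned model.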
Model | Precision (%) | Recall (%) | mAP (%) | Model Size (MB) | Total Parameters (Million) | FLOPs (Giga) |
---|---|---|---|---|---|---|
Faster-RCNN [15] | 82.4 | 68.1 | 72.4 | 108.3 | 41.75 | 134.4 |
SSD [11] | 91.7 | 75.2 | 87.1 | 92.6 | 24.15 | 116.2 |
YOLOv3 [60] | 88.4 | 78.6 | 88.2 | 235.1 | 61.54 | 32.8 |
YOLOv4 [61] | 87.8 | 75.4 | 86.1 | 244.5 | 63.95 | 59.9 |
YOLOv5l [62] | 90.2 | 84.8 | 89.8 | 52.0 | 25.79 | 55.0 |
YOLOv7-tiny [61] | 92.1 | 84.6 | 92.0 | 12.3 | 6.03 | 13.2 |
YOLOv7 [60] | 95.3 | 86.8 | 94.4 | 74.8 | 36.5 | 103.2 |
YOLOv8n [63] | 93.0 | 85.5 | 92.8 | 6.2 | 3.00 | 8.2 |
YOLOv9s [64] | 93.6 | 86.6 | 93.4 | 15.2 | 7.17 | 26.7 |
YOLOv10n [65] | 91.4 | 83.9 | 91.1 | 5.8 | 2.27 | 6.5 |
YOLOv11n [66] | 90.7 | 83.4 | 90.5 | 5.5 | 2.58 | 6.3 |
Improved YOLO | 93.9 | 87.7 | 93.8 | 10.2 | 4.94 | 9.0 |
Sewer-YOLO-Slim | 93.6 | 87.4 | 93.5 | 4.9 | 2.41 | 4.5 |
Device | Use TensorRT | mAP (%) | Speed (per Image)
---|---|---|---
RTX 3090 | No | 93.5 | 22.5 ms
RTX 3090 | Yes | 92.7 | 14.0 ms
EA-B400 | Yes | 92.7 | 15.3 ms
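Assuming the reported speeds are per-frame latencies, the corresponding throughputs follow directly:

```python
# Per-frame latency in ms, from the edge deployment table.
latency_ms = {
    "RTX 3090": 22.5,
    "RTX 3090 + TensorRT": 14.0,
    "EA-B400 + TensorRT": 15.3,
}
fps = {device: 1000.0 / ms for device, ms in latency_ms.items()}
for device, rate in fps.items():
    print(f"{device}: {rate:.1f} FPS")
# EA-B400 with TensorRT reaches roughly 65 FPS, comfortably real-time
# for CCTV inspection video on the robot's edge device.
```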
Quantity | Inspection Well Diameter (Known) | Pipe Burial Depth (Verification) | Pipe Diameter (Verification) | Length of the Entire Pipe (Verification)
---|---|---|---|---
Distance on 3D model (unit) | 0.7622 | 2.1807 | 0.6034 | 20.1983
Reasoning distance (m) | - | 2.06 | 0.57 | 19.08
Actual distance (m) | 0.72 | 2.10 | 0.60 | 18.20
Error (m) | - | −0.04 | −0.03 | +0.88
Quantity | Inspection Well Diameter (Known) | Pipe Burial Depth (Verification) | Pipe Diameter (Verification) | Length of the Entire Pipe (Verification)
---|---|---|---|---
Distance on 3D model (unit) | 0.7601 | 2.2132 | 0.6175 | 18.6115
Reasoning distance (m) | - | 2.10 | 0.58 | 17.63
Actual distance (m) | 0.72 | 2.10 | 0.60 | 18.20
Error (m) | - | 0 | −0.02 | −0.57
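In both measurement tables, the "reasoning distance" rows are consistent with a single metric scale recovered from the known inspection well diameter (actual size divided by its model-space measurement) and then applied to the other model-space distances. A sketch of that computation (the helper name `real_distances` is chosen here for illustration):

```python
def real_distances(model_known, actual_known, model_distances):
    """Convert unscaled 3D-model distances to metres using one known dimension."""
    scale = actual_known / model_known  # metres per model unit
    return [round(scale * d, 2) for d in model_distances]

# First measurement table: the 0.72 m well diameter measures 0.7622 model units.
print(real_distances(0.7622, 0.72, [2.1807, 0.6034, 20.1983]))
# → [2.06, 0.57, 19.08]  (matches the table's reasoning distances)
```

The same procedure reproduces the second table's reasoning distances (2.10 m, 0.58 m, 17.63 m) from its own model-space measurements.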
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, R.; Shao, Z.; Sun, Q.; Yu, Z. Defect Detection and 3D Reconstruction of Complex Urban Underground Pipeline Scenes for Sewer Robots. Sensors 2024, 24, 7557. https://doi.org/10.3390/s24237557