A Multi-Information Fusion Method for Repetitive Tunnel Disease Detection
Abstract
1. Introduction
- (1) We propose an approach that recognizes repeated tunnel lining diseases through multi-information fusion, integrating location information, mileage information, and image-similarity information.
- (2) We designed the SuperVO algorithm and devised a technique that mitigates error accumulation by exploiting distinguishable marks. This enables the localization task to be performed with only a monocular camera, even in tunnel videos with limited texture and repetitive patterns.
- (3) We designed the SpSg Network, which enables the comparison of tunnel lining images acquired at different times and under different angles and illumination conditions.
- (4) The effectiveness of the method is demonstrated through experiments in real tunnels, offering a reliable and intelligent approach to tunnel disease detection and long-term monitoring.
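Taken together, the contributions amount to a fusion rule: a current detection is declared a repeat of a historical one only when the localization cues and the image-similarity cue agree. A minimal sketch of such a rule (the function name, threshold names, and values below are our illustrative assumptions, not the paper's):

```python
def is_repeated_disease(mileage_a, mileage_b, similarity,
                        mileage_tol=5.0, sim_thresh=0.5):
    """Hypothetical fusion rule: flag a pair of detections as the same
    (repeated) disease when their estimated mileages agree within a
    tolerance AND their image similarity exceeds a threshold."""
    return abs(mileage_a - mileage_b) <= mileage_tol and similarity >= sim_thresh

# Example: two detections ~1 m apart with high visual similarity
print(is_repeated_disease(120.1, 121.0, similarity=0.82))  # True
```

Either cue alone is insufficient: mileage agreement rules out far-apart defects cheaply, while the similarity check disambiguates nearby but distinct ones.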
2. Related Works
2.1. Defect Detection
2.2. Defect Localization
2.3. Image Matching
3. Materials and Methods
3.1. Video Preprocessing
3.2. YOLOv7 Detector
3.3. SuperVO Algorithm
3.4. SuperPoint–SuperGlue Matching Network
4. Results
4.1. YOLOv7 Detector Experiments
4.1.1. Tunnel Lining Disease Detection Dataset
4.1.2. Evaluation Metrics and Results
4.2. SuperVO Experiments
4.2.1. Datasets and Evaluation Metrics
4.2.2. Optimal Threshold t Determination
4.2.3. Ablation Experiment
4.2.4. Comparative Experiment
4.2.5. Repeatable Localization Accuracy Test
4.3. SuperPoint–SuperGlue Matching Network Experiments
4.3.1. Datasets and Evaluation Metrics
4.3.2. Optimal Threshold l Determination
4.3.3. Comparative Experiment
4.3.4. Defect Repeatability Detection Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhao, L.; Wang, J.; Liu, S.; Yang, X. An Adaptive Multitask Network for Detecting the Region of Water Leakage in Tunnels. Appl. Sci. 2023, 13, 6231.
- Zhou, Z.; Zhang, J.; Gong, C. Automatic Detection Method of Tunnel Lining Multi-defects via an Enhanced You Only Look Once Network. Comput.-Aided Civ. Infrastruct. Eng. 2022, 37, 762–780.
- Li, D.; Xie, Q.; Gong, X.; Yu, Z.; Xu, J.; Sun, Y.; Wang, J. Automatic Defect Detection of Metro Tunnel Surfaces Using a Vision-Based Inspection System. Adv. Eng. Inform. 2021, 47, 101206.
- Gao, X.; Yang, Y.; Xu, Z.; Gan, Z. A New Method for Repeated Localization and Matching of Tunnel Lining Defects. Eng. Appl. Artif. Intell. 2024, 132, 107855.
- Liao, J.; Yue, Y.; Zhang, D.; Tu, W.; Cao, R.; Zou, Q.; Li, Q. Automatic Tunnel Crack Inspection Using an Efficient Mobile Imaging Module and a Lightweight CNN. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15190–15203.
- Qu, Z.; Lin, L.D.; Guo, Y.; Wang, N. An Improved Algorithm for Image Crack Detection Based on Percolation Model. IEEJ Trans. Electr. Electron. Eng. 2015, 10, 214–221.
- Amhaz, R.; Chambon, S.; Idier, J.; Baltazart, V. Automatic Crack Detection on Two-Dimensional Pavement Images: An Algorithm Based on Minimal Path Selection. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2718–2729.
- Su, G.; Chen, Y.; Jiang, Q.; Li, C.; Cai, W. Spalling Failure of Deep Hard Rock Caverns. J. Rock Mech. Geotech. Eng. 2023, 15, 2083–2104.
- Xu, Y.; Li, D.; Xie, Q.; Wu, Q.; Wang, J. Automatic Defect Detection and Segmentation of Tunnel Surface Using Modified Mask R-CNN. Measurement 2021, 178, 109316.
- Zhao, S.; Zhang, D.; Xue, Y.; Zhou, M.; Huang, H. A Deep Learning-Based Approach for Refined Crack Evaluation from Shield Tunnel Lining Images. Autom. Constr. 2021, 132, 103934.
- Yang, B.; Xue, L.; Fan, H.; Yang, X. SINS/Odometer/Doppler Radar High-Precision Integrated Navigation Method for Land Vehicle. IEEE Sens. J. 2021, 21, 15090–15100.
- Schaer, P.; Vallet, J. Trajectory Adjustment of Mobile Laser Scan Data in GPS Denied Environments. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 40, 61–64.
- Du, L.; Zhong, R.; Sun, H.; Zhu, Q.; Zhang, Z. Study of the Integration of the CNU-TS-1 Mobile Tunnel Monitoring System. Sensors 2018, 18, 420.
- Kim, H.; Choi, Y. Comparison of Three Location Estimation Methods of an Autonomous Driving Robot for Underground Mines. Appl. Sci. 2020, 10, 4831.
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description. arXiv 2018, arXiv:1712.07629.
- Ma, J.; Jiang, X.; Fan, A.; Jiang, J.; Yan, J. Image Matching from Handcrafted to Deep Features: A Survey. Int. J. Comput. Vis. 2021, 129, 23–79.
- Fu, Y.; Zhang, P.; Liu, B.; Rong, Z.; Wu, Y. Learning to Reduce Scale Differences for Large-Scale Invariant Image Matching. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1335–1348.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
- Li, J.; Hu, Q.; Ai, M. RIFT: Multi-Modal Image Matching Based on Radiation-Variation Insensitive Feature Transform. IEEE Trans. Image Process. 2020, 29, 3296–3310.
- Korman, S.; Reichman, D.; Tsur, G.; Avidan, S. Fast-Match: Fast Affine Template Matching. Int. J. Comput. Vis. 2017, 121, 111–125.
- Dong, J.; Hu, M.; Lu, J.; Han, S. Affine Template Matching Based on Multi-Scale Dense Structure Principal Direction. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 2125–2132.
- Revaud, J.; Weinzaepfel, P.; Harchaoui, Z.; Schmid, C. DeepMatching: Hierarchical Deformable Dense Matching. Int. J. Comput. Vis. 2016, 120, 300–323.
- Chopra, S.; Hadsell, R.; LeCun, Y. Learning a Similarity Metric Discriminatively, with Application to Face Verification. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 539–546.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv 2018, arXiv:1801.03924.
- Gleize, P.; Wang, W.; Feiszli, M. SiLK—Simple Learned Keypoints. arXiv 2023, arXiv:2304.06194.
- Sarlin, P.E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning Feature Matching with Graph Neural Networks. arXiv 2020, arXiv:1911.11763.
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696.
- Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616.
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast Semi-Direct Monocular Visual Odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22.
- Engel, J.; Koltun, V.; Cremers, D. Direct Sparse Odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 611–625.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2020, arXiv:1905.11946.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
- Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2Net: A New Multi-scale Backbone Architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 652–662.
Name | Data Augmentation | Number of Training Sets | Number of Validation Sets | Number of Test Sets | Total Number |
---|---|---|---|---|---|
D1 | No | 451 | 151 | 151 | 753 |
D1E | Yes | 1715 | 572 | 572 | 2859 |
D1+D2 | Yes | 3184 | 1062 | 1062 | 5308 |
D1E+D2 | Yes | 4448 | 1483 | 1483 | 7414 |
Name | Times | Direction | Occlusion | Mileage (m) | Frames |
---|---|---|---|---|---|
Video1 | First time | Identical | No | 159 | 414 |
Video2 | First time | Identical | Yes | 260 | 659 |
Video3 | Second time | Opposite | No | 159 | 446 |
Threshold t | Average Error (m) | Maximum Error (m) |
---|---|---|
150 | 0.190 | 0.680 |
125 | 0.187 | 0.694 |
100 | 0.212 | 0.831 |
75 | 0.276 | 1.141 |
50 | 0.281 | 1.162 |
Methods | Average Error (m) | Maximum Error (m) |
---|---|---|
ORB+BF (Baseline) | 8.926 | 20.392 |
SP+BF | 0.746 | 3.625 |
SP+SG | 1.621 | 4.058 |
SP+SG+AD | 0.562 | 1.354 |
SP+SG+C | 0.502 | 2.264 |
Ours (SP+SG+AD+C) | 0.212 | 0.751 |
Methods | Average Error (m) | Maximum Error (m) |
---|---|---|
ORB+BF (Baseline) | 4.017 | 11.089 |
SP+BF | 0.980 | 2.431 |
SP+SG | 0.974 | 2.428 |
SP+SG+AD | 1.558 | 2.988 |
SP+SG+C | 0.209 | 1.855 |
Ours (SP+SG+AD+C) | 0.187 | 0.694 |
Real Mileage (m) | Video1 Predicted (m) | Video1 Error (m) | Video3 Predicted (m) | Video3 Error (m) | Error between the Two Predictions (m) |
---|---|---|---|---|---|
20 | 20.00 | 0 | 20.13 | 0.13 | 0.13 |
40 | 40.12 | 0.12 | 39.77 | 0.23 | 0.35 |
60 | 59.79 | 0.21 | 60.40 | 0.40 | 0.61 |
80 | 79.64 | 0.36 | 80.29 | 0.29 | 0.65 |
100 | 99.90 | 0.10 | 99.97 | 0.03 | 0.07 |
120 | 120.12 | 0.12 | 120.33 | 0.33 | 0.21 |
140 | 140.05 | 0.05 | 139.94 | 0.06 | 0.11 |
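The last column of the table is simply the absolute difference between the two runs' predicted mileages at each checkpoint; for example, the 120 m row:

```python
def repeatability_error(pred_run1, pred_run2):
    """Absolute difference between two runs' predicted mileages (m),
    rounded to centimeter precision as in the table."""
    return round(abs(pred_run1 - pred_run2), 2)

# 120 m checkpoint: Video1 predicts 120.12 m, Video3 predicts 120.33 m
print(repeatability_error(120.12, 120.33))  # 0.21
```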
Threshold l (Ring) | P | R | F1 | Time (s) |
---|---|---|---|---|
7 | 97.56% | 64.52% | 77.67% | 4.84 |
6 | 97.56% | 64.52% | 77.67% | 4.80 |
5 | 96.61% | 93.44% | 95.00% | 4.80 |
4 | 100% | 64.52% | 78.43% | 4.80 |
3 | 100% | 64.52% | 78.43% | 4.80 |
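The F1 column is the usual harmonic mean of precision and recall; for instance, the l = 5 row can be reproduced directly:

```python
def f1_score(precision, recall):
    """F1 as the harmonic mean of precision and recall (both in [0, 1])."""
    return 2 * precision * recall / (precision + recall)

# Row l = 5: P = 96.61%, R = 93.44%  ->  F1 = 95.00%
print(round(100 * f1_score(0.9661, 0.9344), 2))  # 95.0
```

This also explains why l = 5 wins despite lower precision than l = 3 or 4: the harmonic mean heavily penalizes the recall drop to 64.52%.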
Category | Methods | P | R | F1 | Time (s) |
---|---|---|---|---|---|
Based on feature points | ORB+BF | 15.71% | 18.33% | 16.92% | 4.60 |
 | SP+BF | 100.00% | 35.00% | 51.85% | 4.94 |
 | Ours | 96.61% | 93.44% | 95.00% | 4.80 |
Based on deep features | VGG* | 63.64% | 11.86% | 20.00% | 0.20 |
 | VGG | 70.73% | 56.86% | 63.04% | 0.17 |
 | DarkNet53 | 44.62% | 90.62% | 59.79% | 0.15 |
 | EfficientNet | 53.85% | 92.11% | 67.96% | 0.17 |
 | ResNet | 50.77% | 91.67% | 65.35% | 0.14 |
 | Res2Net | 60.00% | 92.86% | 72.90% | 0.23 |
Query Numbers | Gallery Numbers | Repeated Diseases (Predicted) | Repeated Diseases (Actual) | New Diseases (Predicted) | New Diseases (Actual) | Vanished Diseases (Predicted) | Vanished Diseases (Actual) |
---|---|---|---|---|---|---|---|
89 | 556 | 59 | 60 | 30 | 29 | 497 | 496 |
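The predicted counts above partition exactly: every query (current) defect is either repeated or new, and every gallery (historical) defect is either repeated or vanished. A sketch of that bookkeeping (variable names are our own, not the paper's):

```python
def partition_counts(n_query, n_gallery, n_repeated):
    """Derive new and vanished defect counts from the matched (repeated)
    count, assuming each defect is matched at most once."""
    n_new = n_query - n_repeated         # current defects with no historical match
    n_vanished = n_gallery - n_repeated  # historical defects no longer observed
    return n_new, n_vanished

# Predicted: 89 query defects, 556 gallery defects, 59 matched pairs
print(partition_counts(89, 556, 59))  # (30, 497)
```

The same identity explains the actual column: 60 + 29 = 89 query defects and 60 + 496 = 556 gallery defects.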
Share and Cite
Gan, Z.; Teng, L.; Chang, Y.; Feng, X.; Gao, M.; Gao, X. A Multi-Information Fusion Method for Repetitive Tunnel Disease Detection. Sustainability 2024, 16, 4285. https://doi.org/10.3390/su16104285