Development of Deep Intelligence for Automatic River Detection (RivDet)
Abstract
1. Introduction
2. Study Area
3. Development of River Detection Artificial Intelligence
3.1. Image Data Preparation
3.2. AI Model (RivDet) Creation and Training
3.3. Model Evaluation
4. Results
4.1. Comparative Analysis with River Levees
4.2. Augmentation
4.2.1. One and Two Augmentations
4.2.2. Three and Four Augmentations
4.3. Results for Each River
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| | mAP | Precision | Recall |
|---|---|---|---|
| With Levees | 0.795 | 0.791 | 0.768 |
| Without Levees | 0.872 | 0.816 | 0.820 |
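The precision and recall reported in the levee comparison follow the standard object-detection definitions. A minimal sketch (not the authors' code; the counts below are purely illustrative, not taken from the paper):

```python
# Precision and recall from detection counts at a fixed IoU threshold:
# TP = correct detections, FP = spurious detections, FN = missed rivers.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts only:
p, r = precision_recall(tp=82, fp=18, fn=18)
print(round(p, 3), round(r, 3))  # 0.82 0.82
```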
| No. | Factor | Augmentation | D-mAP | D-Precision | D-Recall | Sum |
|---|---|---|---|---|---|---|
| S1 | 3× | Flip | 0.005 | 0.022 | −0.030 | −0.003 |
| S2 | 3× | 90° Rotate | 0.012 | −0.009 | 0.021 | 0.024 |
| S3 | 3× | Crop | 0.006 | 0.008 | −0.013 | 0.001 |
| S4 | 3× | Shear | 0.005 | 0.010 | 0.002 | 0.017 |
| D1 | 3× | Flip and 90° Rotate | 0.017 | −0.005 | 0.028 | 0.040 |
| D2 | 3× | Flip and Crop | 0.015 | 0.020 | 0.009 | 0.044 |
| D3 | 3× | Flip and Shear | 0.009 | −0.002 | 0.009 | 0.016 |
| D4 | 3× | 90° Rotate and Crop | 0.017 | −0.005 | 0.028 | 0.040 |
| D5 | 3× | 90° Rotate and Shear | 0.021 | −0.001 | 0.027 | 0.047 † |
| D6 | 3× | Crop and Shear | 0.000 | −0.012 | 0.027 | 0.015 |
| S5 | 5× | Flip | 0.016 | −0.003 | 0.016 | 0.029 |
| S6 | 5× | 90° Rotate | 0.016 | 0.006 | −0.011 | 0.011 |
| S7 | 5× | Crop | −0.003 | −0.006 | 0.011 | 0.002 |
| S8 | 5× | Shear | −0.002 | −0.015 | 0.024 | 0.007 |
| D7 | 5× | Flip and 90° Rotate | 0.010 | −0.015 | 0.007 | 0.002 |
| D8 | 5× | Flip and Crop | 0.006 | −0.018 | 0.017 | 0.005 |
| D9 | 5× | Flip and Shear | 0.022 | 0.010 | 0.013 | 0.045 |
| D10 | 5× | 90° Rotate and Crop | 0.011 | −0.007 | 0.017 | 0.021 |
| D11 | 5× | 90° Rotate and Shear | 0.011 | −0.010 | 0.009 | 0.010 |
| D12 | 5× | Crop and Shear | 0.003 | 0.017 | −0.002 | 0.018 |
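The four basic augmentations compared above (flip, 90° rotation, crop, shear) can be sketched on a NumPy image array; this is an assumed illustration of the operations, not the authors' Roboflow pipeline, and the `frac` and `k` parameters are hypothetical:

```python
# Basic geometric augmentations on an image of shape (H, W, C).
import numpy as np

def flip(img):            # horizontal flip
    return img[:, ::-1]

def rotate90(img):        # 90-degree counterclockwise rotation
    return np.rot90(img)

def crop(img, frac=0.8):  # central crop keeping `frac` of each side
    h, w = img.shape[:2]
    dh, dw = int(h * (1 - frac) / 2), int(w * (1 - frac) / 2)
    return img[dh:h - dh or None, dw:w - dw or None]

def shear(img, k=0.2):    # horizontal shear: shift each row proportionally to y
    h = img.shape[0]
    out = np.zeros_like(img)
    for y in range(h):
        out[y] = np.roll(img[y], int(k * y), axis=0)
    return out

img = np.arange(4 * 6 * 3).reshape(4, 6, 3)
print(flip(img).shape, rotate90(img).shape, shear(img).shape)  # (4, 6, 3) (6, 4, 3) (4, 6, 3)
```

A 3× or 5× factor then simply means generating three or five augmented copies per source image.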
| No. | Factor | Augmentation | mAP | Precision | Recall |
|---|---|---|---|---|---|
| Basic | - | - | 0.872 | 0.816 | 0.820 |
| T1 | 3× | Flip and 90° Rotate and Crop | 0.882 | 0.788 | 0.857 |
| T2 | 3× | Flip and 90° Rotate and Shear | 0.878 | 0.820 | 0.825 |
| T3 | 3× | Flip and Crop and Shear | 0.888 | 0.830 | 0.848 |
| T4 | 3× | 90° Rotate and Crop and Shear | 0.885 | 0.813 | 0.833 |
| T5 | 5× | Flip and 90° Rotate and Crop | 0.886 | 0.818 | 0.825 |
| T6 | 5× | Flip and 90° Rotate and Shear | 0.890 | 0.842 | 0.795 |
| T7 | 5× | Flip and Crop and Shear | 0.885 | 0.788 | 0.856 |
| T8 | 5× | 90° Rotate and Crop and Shear | 0.884 | 0.839 | 0.788 |
| Q1 | 3× | Flip and 90° Rotate and Crop and Shear | 0.884 | 0.802 | 0.842 |
| Q2 | 5× | Flip and 90° Rotate and Crop and Shear | 0.887 | 0.835 | 0.816 |
| No. | Factor | Augmentation | D-mAP | D-Precision | D-Recall | Sum |
|---|---|---|---|---|---|---|
| T1 | 3× | Flip and 90° Rotate and Crop | 0.010 | −0.028 | 0.037 | 0.019 |
| T2 | 3× | Flip and 90° Rotate and Shear | 0.006 | 0.004 | 0.005 | 0.015 |
| T3 | 3× | Flip and Crop and Shear | 0.016 | 0.014 | 0.028 | 0.058 † |
| T4 | 3× | 90° Rotate and Crop and Shear | 0.013 | −0.003 | 0.013 | 0.023 |
| T5 | 5× | Flip and 90° Rotate and Crop | 0.014 | 0.002 | 0.005 | 0.021 |
| T6 | 5× | Flip and 90° Rotate and Shear | 0.018 | 0.026 | −0.025 | 0.019 |
| T7 | 5× | Flip and Crop and Shear | 0.013 | −0.028 | 0.036 | 0.021 |
| T8 | 5× | 90° Rotate and Crop and Shear | 0.012 | 0.023 | −0.032 | 0.003 |
| Q1 | 3× | Flip and 90° Rotate and Crop and Shear | 0.012 | −0.014 | 0.022 | 0.020 |
| Q2 | 5× | Flip and 90° Rotate and Crop and Shear | 0.015 | 0.019 | −0.004 | 0.030 |
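The Sum column appears to be the plain sum of the three deltas against the basic model, which gives a single score for ranking augmentation settings. A sketch under that assumption (values taken from the table; the setting labels are shorthand):

```python
# Rank augmentation settings by the sum of their metric deltas
# (D-mAP + D-Precision + D-Recall) relative to the basic model.

def delta_sum(d_map: float, d_precision: float, d_recall: float) -> float:
    return d_map + d_precision + d_recall

settings = {
    "T3 (3x Flip+Crop+Shear)": delta_sum(0.016, 0.014, 0.028),
    "Q2 (5x all four)": delta_sum(0.015, 0.019, -0.004),
}
best = max(settings, key=settings.get)
print(best, round(settings[best], 3))  # T3 (3x Flip+Crop+Shear) 0.058
```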
| River | No. | Augmentation | Factor | Confidence |
|---|---|---|---|---|
| Chogang | D6 | Crop and Shear | 3× | 0.849 |
| Doya | T8 | 90° Rotate and Crop and Shear | 5× | 0.867 |
| Jinae | S3 | Crop | 3× | 0.847 |
| Migok | Basic | - | - | 0.881 |
| Nabul | S5 | Flip | 5× | 0.878 |
| Pori | T4 | 90° Rotate and Crop and Shear | 3× | 0.841 |
| Sin | T8 | 90° Rotate and Crop and Shear | 5× | 0.741 |
| Suda | S7 | Crop | 5× | 0.812 |
| Wogang | D12 | Crop and Shear | 5× | 0.866 |
| Wosu | T3 | Flip and Crop and Shear | 3× | 0.885 |
| Youngcheon | T7 | Flip and Crop and Shear | 5× | 0.865 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lee, S.; Kong, Y.; Lee, T. Development of Deep Intelligence for Automatic River Detection (RivDet). Remote Sens. 2025, 17, 346. https://doi.org/10.3390/rs17020346