Interchangeability of Cross-Platform Orthophotographic and LiDAR Data in DeepLabV3+-Based Land Cover Classification Method
Abstract
1. Introduction
- Multi-season, multi-platform data collection at the same study site;
- Interchangeability-based comparison between ALB and GLS;
- Application of image fusion technology on remote sensing-based LCC;
- If cross-platform interchangeability can be demonstrated for both platforms, it offers a practical way to mitigate the associated financial and seasonal constraints.
2. Study Site and Methods
2.1. Study Site
2.2. Workflow
- ALB/GLS: data collection with the ALB and GLS platforms;
- Data processing: converting the raw data into a form usable for DL;
- Input data: the data types supplied to the model;
- Processing module: how the input data are processed;
- Data for training/prediction/assessment: splitting the processed input data by purpose (see the sketch after this list);
- DeepLabV3+ module: how the DeepLabV3+ model is trained;
- Trained model: saving the trained model;
- Prediction: applying the trained model to the prediction data;
- Output: assessment using OA and macro-F1.
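The training/prediction/assessment split can be illustrated with a minimal sketch. The `split_tiles` helper and the 70/20/10 ratios below are our assumptions for illustration only; the outline does not state the actual partitioning used in the paper.

```python
import random

def split_tiles(tiles, train_frac=0.7, predict_frac=0.2, seed=0):
    """Partition processed image tiles into training, prediction and
    assessment sets. The 70/20/10 ratios are illustrative only."""
    rng = random.Random(seed)
    shuffled = tiles[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_predict = int(len(shuffled) * predict_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_predict],
            shuffled[n_train + n_predict:])

# e.g., 100 tile filenames -> 70 training / 20 prediction / 10 assessment
train, predict, assess = split_tiles([f"tile_{i:03d}.png" for i in range(100)])
```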
2.3. Data Collection
2.3.1. ALB Part
2.3.2. GLS Part
2.4. Data Processing of Imagery-Based Input
2.4.1. Preprocessing
2.4.2. Transform High-Resolution Imagery to Low-Resolution Imagery (LR-DI)
2.4.3. Transform LiDAR Data to Imagery (LiDAR-I)
2.4.4. Combine LR-DI with LiDAR-I (Image Fusion)
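A minimal sketch of the three processing steps above (downsampling to LR-DI, rasterising LiDAR to LiDAR-I, and image fusion), assuming NumPy/Pillow and co-registered rasters of equal size. The function names and the last-point-per-cell gridding rule are our assumptions; the 50% alpha blend follows the LR-DI-A 50% notation in the Abbreviations.

```python
import numpy as np
from PIL import Image

def to_lr_di(hr_di: Image.Image, factor: int) -> Image.Image:
    """2.4.2: HR-DI -> LR-DI by integer-factor bilinear resampling."""
    w, h = hr_di.size
    return hr_di.resize((w // factor, h // factor), Image.BILINEAR)

def to_lidar_i(points: np.ndarray, cell: float) -> np.ndarray:
    """2.4.3: rasterise (x, y, z) LiDAR points to an 8-bit elevation image.
    Points are binned to a grid (last point per cell wins) and z is
    stretched to 0-255; empty cells are filled with 0."""
    x, y, z = points.T
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y.max() - y) / cell).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    grid[rows, cols] = z
    z0, z1 = np.nanmin(grid), np.nanmax(grid)
    return np.nan_to_num(255 * (grid - z0) / (z1 - z0)).astype(np.uint8)

def fuse(lr_di: np.ndarray, lidar_i: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """2.4.4: image fusion as a 50% alpha blend of an (H, W, 3) LR-DI array
    over an (H, W) LiDAR-I array of the same height and width."""
    return (alpha * lr_di + (1 - alpha) * lidar_i[..., None]).astype(np.uint8)
```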
2.5. DeepLabV3+ Model
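As a minimal sketch of the model stage: torchvision ships DeepLabV3 (without the "+" decoder), which we use here as a close, hedged stand-in; the ResNet-50 backbone, input size, and seven-class setup (matching the confusion-matrix labels C, R, W, BG, G, T, B later in the paper) are our assumptions, since the outline does not state the exact configuration.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Seven output classes matching the confusion-matrix labels.
model = deeplabv3_resnet50(weights=None, num_classes=7)
model.eval()

x = torch.randn(1, 3, 512, 512)      # one 3-channel input tile
with torch.no_grad():
    logits = model(x)["out"]         # (1, 7, 512, 512) per-pixel class scores
label_map = logits.argmax(dim=1)     # (1, 512, 512) predicted label map
```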
3. Results
3.1. Cross-Platform Interchangeability (With Water Area)
3.2. Cross-Platform Interchangeability (Without Water Area)
3.3. Cross-Platform Interchangeability (Individual Land Cover)
- X: training all the GLS LR-DI, predicting ALB March 2017 LR-DI; Y: training all the ALB LR-DI, predicting GLS March 2020 LR-DI;
- X: training all the GLS LR-DI, predicting ALB November 2017 LR-DI; Y: training all the ALB LR-DI, predicting GLS October 2020 LR-DI;
- X: training all the GLS LiDAR-I, predicting ALB March 2017 LiDAR-I; Y: training all the ALB LiDAR-I, predicting GLS March 2020 LiDAR-I;
- X: training all the GLS LiDAR-I, predicting ALB November 2017 LiDAR-I; Y: training all the ALB LiDAR-I, predicting GLS October 2020 LiDAR-I;
- X: training all the GLS image fusion, predicting ALB March 2017 image fusion; Y: training all the ALB image fusion, predicting GLS March 2020 image fusion;
- X: training all the GLS image fusion, predicting ALB November 2017 image fusion; Y: training all the ALB image fusion, predicting GLS October 2020 image fusion.
3.4. Cross-Platform Interchangeability (Individual Data Type)
4. Discussion
4.1. Cross-Platform Interchangeability
4.2. Computational Resources and Processing Times
4.3. Limitations and Potential Applications
- Only a limited number of study sites were examined. More study sites should be considered to confirm that LCC applications derived from cross-platform LiDAR transfer to other sites, e.g., whether LCC accuracy is affected when a study site does not contain all land cover types.
- Only two seasons were included in this research. More seasonal data should be considered, e.g., whether LCC accuracy is affected when land cover features from an untrained season appear in the prediction data.
- Only the DeepLabV3+ model was applied in this research. Other open-source semantic segmentation models (e.g., U-Net, ResNet-based networks and SegFormer) should also be tested to verify the generality of cross-platform LiDAR for LCC, i.e., whether other semantic segmentation models can improve on the current DeepLabV3+-based accuracy.
- More study sites are needed to demonstrate generality and validity;
- Interchangeability-based comparisons need not be limited to ALB and GLS but could also include satellite platforms (e.g., advancing global surface water monitoring with the SWOT satellite, and high-resolution mapping of global surface water and its long-term changes);
- Image fusion technology could be applied to a wider range of remote sensing-based LCC tasks.
- We also consider potential applications of the obtained results in other tasks:
- The LCC results derived from this research can be applied to flood control (setting parameters for flood calculations);
- The LCC results derived from this research can be used as baseline information on the vegetation environment.
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
AI | artificial intelligence
ALB | airborne LiDAR bathymetry
Conv | convolution
CC | CloudCompare
DCNN | deep convolutional neural network
DL | deep learning
DSM | digital surface model
DTM | digital terrain model
GLS | green LiDAR system
GPU | graphics processing unit
HC | high contrast
HR-DI | high-resolution digital image
KP | kilo post
LCC | land cover classification
LiDAR | light detection and ranging
LiDAR-I | light detection and ranging-based image
LR-DI | low-resolution digital image
LR-DI-A 50% | transparency-changed (50%) low-resolution digital image
LR-TL | low-resolution true label
MMS | mobile mapping system
NIR | near-infrared
OA | overall accuracy
RGB | red, green, blue
SLAM | simultaneous localization and mapping
TAPG | training ALB, predicting GLS
TGPA | training GLS, predicting ALB
UAV | unmanned aerial vehicle
References
Items | | ALB, March 2017 | ALB, November 2017 | GLS, March 2020 | GLS, October 2020
---|---|---|---|---|---
Laser wavelength range (nm) | Green | 515 | 515 | 532 | 532
 | NIR | 1064 | 1064 | - | -
Number of laser beams (10³ s⁻¹) | Green | 35 | 35 | 60 | 60
 | NIR | 148 | 148 | - | -
Ground altitude (m) | | 500 | 500 | 50 | 50
Flight speed (km h⁻¹) | | 220 | 110 | 9 | 9
Measurement point density (m⁻²) | Green | 2 | 4 | 100 | 100
 | NIR | 9.0 | 9.0 | - | -
Resolution of raw imagery (cm pixel⁻¹) | | 10 | 10 | 3 | 3
Resolution of raw LiDAR (m pixel⁻¹) * | | 2 | 2 | 1 | 1
FTU ** | | - | - | 0.8 | 3.12
NTU *** | | - | - | - | 3.7
Degree **** | | 2.9 | 3.2 | - | -
Symbol | Formula
---|---
Precision (X) or P.X | TP-X / PR-X
Recall (X) or R.X | TP-X / TL-X
F1 (X) | 2 × Precision (X) × Recall (X) / (Precision (X) + Recall (X))
OA | ∑ TP-X / total number of pixels
Macro-F1 | ∑ F1 (X) / number of labels
Confusion matrix layout (rows: true labels; columns: predictions):

True \ Prediction | C | R | W | BG | G | T | B | Total | Recall | F1
---|---|---|---|---|---|---|---|---|---|---
C | TP-C | E-C/R | E-C/W | E-C/BG | E-C/G | E-C/T | E-C/B | TL-C | R.C | F1-C
R | E-R/C | TP-R | E-R/W | E-R/BG | E-R/G | E-R/T | E-R/B | TL-R | R.R | F1-R
W | E-W/C | E-W/R | TP-W | E-W/BG | E-W/G | E-W/T | E-W/B | TL-W | R.W | F1-W
BG | E-BG/C | E-BG/R | E-BG/W | TP-BG | E-BG/G | E-BG/T | E-BG/B | TL-BG | R.BG | F1-BG
G | E-G/C | E-G/R | E-G/W | E-G/BG | TP-G | E-G/T | E-G/B | TL-G | R.G | F1-G
T | E-T/C | E-T/R | E-T/W | E-T/BG | E-T/G | TP-T | E-T/B | TL-T | R.T | F1-T
B | E-B/C | E-B/R | E-B/W | E-B/BG | E-B/G | E-B/T | TP-B | TL-B | R.B | F1-B
Total | PR-C | PR-R | PR-W | PR-BG | PR-G | PR-T | PR-B | Total | |
Precision | P.C | P.R | P.W | P.BG | P.G | P.T | P.B | | |
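The per-class and aggregate scores defined in the two tables above follow directly from such a confusion matrix. A minimal NumPy sketch (the `metrics_from_cm` name is ours):

```python
import numpy as np

def metrics_from_cm(cm: np.ndarray):
    """cm[i, j] = pixels with true class i predicted as class j,
    matching the table above (rows = true labels, columns = predictions)."""
    tp = np.diag(cm).astype(float)               # TP-X
    pr = cm.sum(axis=0).astype(float)            # PR-X: column totals
    tl = cm.sum(axis=1).astype(float)            # TL-X: row totals
    precision = np.divide(tp, pr, out=np.zeros_like(tp), where=pr > 0)
    recall = np.divide(tp, tl, out=np.zeros_like(tp), where=tl > 0)
    denom = precision + recall
    f1 = np.divide(2 * precision * recall, denom,
                   out=np.zeros_like(tp), where=denom > 0)
    oa = tp.sum() / cm.sum()                     # OA = sum(TP-X) / total pixels
    macro_f1 = f1.mean()                         # Macro-F1 = mean of F1(X)
    return precision, recall, f1, oa, macro_f1
```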
Data Styles | Group | Train | Predict | Average OA (with water area) | Average OA (without water area) | Average Macro-F1 (with water area) | Average Macro-F1 (without water area)
---|---|---|---|---|---|---|---
LR-DI (RGB) | 1 | GLS | ALB | 0.73 | 0.66 | 0.66 | 0.62
 | 2 | ALB | GLS | 0.61 | 0.62 | 0.59 | 0.57
LiDAR-I | 3 | GLS | ALB | 0.72 | 0.61 | 0.63 | 0.59
 | 4 | ALB | GLS | 0.60 | 0.64 | 0.56 | 0.56
Image fusion | 5 | GLS | ALB | 0.73 | 0.63 | 0.65 | 0.61
 | 6 | ALB | GLS | 0.64 | 0.66 | 0.60 | 0.59