Change Detection Based on Existing Vector Polygons and Up-to-Date Images Using an Attention-Based Multi-Scale ConvTransformer Network
Abstract
1. Introduction
- We introduce a framework for detecting changes in vector polygons using single-temporal high-resolution RS images and deep learning. The framework supports end-to-end application, from image preprocessing through change detection, and requires only up-to-date images and the corresponding land cover vector data from the earlier epoch, offering a comprehensive bottom-up solution.
- For sample construction, we propose boundary-preserved masking Simple Linear Iterative Clustering (SLIC) for generating superpixels, which are combined with the land cover vector data in an adaptive sample cropping scheme (a sketch of the masked superpixel step follows this list). To address noise, we introduce an efficient Visual Transformer and class-constrained Density Peak-based (EViTCC-DP) method for removing noisy labels, and then transform the noisy samples into representative ones using k-means clustering, automatically producing a high-quality multi-scale sample set.
- To improve fine-grained scene classification, we employ an improved attention-based multi-scale ConvTransformer network (AMCT-Net) to classify the superpixel cropping units. By integrating a CNN structure and a Transformer, together with an attention mechanism module, the network learns a more discriminative feature representation and thus achieves higher classification accuracy. In addition, we introduce a change decision-maker with several rules that combines and post-processes the sample predictions with the land cover vector data to effectively extract changed vector polygons.
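As a minimal illustration of the boundary-preserved masked SLIC step above (not the authors' exact implementation), the sketch below assumes the up-to-date image has been clipped to one polygon's extent and the polygon rasterized to a boolean mask; scikit-image's `slic` accepts such a mask so that superpixels do not cross the vector boundary. The function name `masked_superpixels`, the variable `polygon_mask`, and the parameter values are placeholders.

```python
import numpy as np
from skimage.segmentation import slic

def masked_superpixels(image: np.ndarray,
                       polygon_mask: np.ndarray,
                       n_segments: int = 200,
                       compactness: float = 10.0) -> np.ndarray:
    """Superpixels confined to one rasterized vector polygon.

    image: (H, W, 3) float image chip in [0, 1]
    polygon_mask: (H, W) boolean raster of the polygon
    Returns a label map; pixels outside the polygon are labeled 0.
    """
    # With a mask, SLIC only places cluster centers inside the polygon,
    # so superpixel boundaries preserve the vector boundary.
    return slic(image,
                n_segments=n_segments,
                compactness=compactness,
                mask=polygon_mask,
                start_label=1)

# Each nonzero superpixel can then drive an adaptive crop: its bounding box,
# clipped to the polygon, becomes one scene sample for classification.
```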
2. Methodology
2.1. Density Peak Clustering
- The density around the cluster center should be relatively high;
- The cluster center should be situated at a considerable distance from points with higher surrounding density.
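These two criteria can be computed directly from pairwise feature distances, following Rodriguez and Laio (2014): a local density rho_i and the distance delta_i to the nearest point of higher density; cluster centers are points where both are large. The sketch below is a minimal illustration assuming a Gaussian kernel and a user-chosen cutoff distance dc; the names are illustrative, not from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dp_rho_delta(X: np.ndarray, dc: float):
    """X: (n, d) feature matrix; dc: cutoff distance. Returns (rho, delta)."""
    d = cdist(X, X)                                   # pairwise distances
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0    # Gaussian-kernel density (self excluded)
    order = np.argsort(-rho)                          # indices by decreasing density
    delta = np.full(len(X), d.max())                  # convention for the densest point
    for rank, i in enumerate(order[1:], start=1):
        # distance to the nearest sample with higher density
        delta[i] = d[i, order[:rank]].min()
    return rho, delta

# Points with large rho * delta are taken as cluster centers; the remaining
# points are assigned to the cluster of their nearest higher-density neighbor.
```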
2.2. Automated Generation of Initial Samples with Vector Boundary Constraints
2.2.1. Automatic Generation of Initial Samples
2.2.2. Source of Noise Samples
2.3. Initial Samples Denoising Based on DP Clustering Algorithm
- Employing boundary constraints of vector polygons and adaptive cropping of RS images to automatically generate initial samples and train ViT models.
- Utilizing the pre-trained ViT to extract features from scene samples and inputting them into the DP clustering algorithm according to class constraints to achieve the purpose of denoising.
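One way to realize the class-constrained denoising step is sketched below, assuming the pre-trained ViT features have already been extracted per class. It reuses the same Gaussian-kernel density term as the DP sketch in Section 2.1 and flags low-density samples within each class as likely label noise; the quantile-based keep rule and all names (`flag_noisy_samples`, `keep_ratio`) are assumptions rather than the exact EViTCC-DP procedure.

```python
import numpy as np
from scipy.spatial.distance import cdist

def flag_noisy_samples(features_by_class, dc=0.5, keep_ratio=0.9):
    """features_by_class: {class id: (n_c, d) ViT features of its scene samples}.
    Returns {class id: boolean mask marking samples kept as clean}."""
    kept = {}
    for cls, feats in features_by_class.items():
        d = cdist(feats, feats)
        rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0   # within-class local density
        threshold = np.quantile(rho, 1.0 - keep_ratio)
        kept[cls] = rho >= threshold                      # low-density samples treated as noisy
    return kept
```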
2.4. Attention-Based Multi-Scale ConvTransformer Network, AMCT-Net
2.4.1. Overview of the Proposed AMCT-Net
2.4.2. Module Details
2.5. Vector Polygons Change Detection Analysis Based on Confidence Rules
3. Experiments and Results
3.1. Description of Data Sources and Research Scheme
3.2. Results
3.2.1. Change Detection and Post-Processing
3.2.2. Evaluation Metrics
3.2.3. Vector Polygons Change Detection Results
- Across both datasets, the baseline model (ViT) shows comparatively weak performance on the five evaluation metrics, while the enhanced model incorporating attention mechanisms and a multi-scale convolution module shows a clear improvement in accuracy. Notably, AMCT-Net outperforms the other architectures in Recall, Specificity, and F1 score (the metric definitions are sketched after this list). On the Nantong dataset, AMCT-Net achieves a Recall of 0.9134, Specificity of 0.9839, and F1 score of 0.9201, a Recall 0.25 percentage points higher than the second-best HCTM model (Recall = 0.9109). On the Guantan dataset, AMCT-Net reaches a Recall of 0.9292, Specificity of 0.9898, and F1 score of 0.9306, a Recall 1.92 percentage points higher than the second-best HCTM model (Recall = 0.9100). This underscores the gain in classification accuracy achieved by AMCT-Net.
- It is noteworthy that the model's performance differs between the two datasets. The proposed AMCT-Net improves accuracy by only 0.66 percentage points over the baseline on the Nantong dataset, but by 1.86 percentage points on the Guantan dataset. The difference is likely attributable to the urban development context of the Nantong dataset, where the change types are inherently more complex than in the Guantan dataset. Because AMCT-Net couples the local feature extraction capability of CNN structures with the global modeling of the Transformer architecture, supplemented by the multi-scale module, these enhancements are particularly beneficial for the multi-scale sample set used in this study and underscore its adaptability to diverse dataset characteristics.
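For reference, the five metrics reported above can be computed from one-vs-rest confusion-matrix counts as follows; this is the standard formulation, not code from the paper.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Per-class metrics from binary (one-vs-rest) confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0            # a.k.a. sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"Accuracy": accuracy, "Precision": precision, "Recall": recall,
            "Specificity": specificity, "F1 Score": f1}
```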
4. Analysis and Discussion
4.1. Analysis of the DP Algorithm Parameters Selections
4.2. Influence of Sample Set Denoising
4.3. Introducing Representative Training Samples
5. Conclusions
- The boundary-constrained segmentation method used in this study accurately delineates the boundaries of ground objects, while the adaptive cropping strategy enables comprehensive sampling within vector polygons and minimizes confusion among ground objects in the generated samples. The proposed sample denoising method, EViTCC-DP, significantly improves model accuracy, raising OA by 2.80 and 2.56 percentage points on the Nantong and Guantan datasets, respectively.
- To improve classification performance, we introduced multi-scale modules and attention mechanisms to construct a new model, AMCT-Net. The network combines the advantages of CNNs and Transformers, enabling the extraction of more discriminative features (an illustrative sketch of such a hybrid block follows this list). Experimental results on the two datasets demonstrate the effectiveness of the proposed method, with AMCT-Net reaching accuracies of 91.34% and 93.51%, respectively, surpassing other advanced models.
- Visual interpretation results demonstrate the value of representative training samples (RTS) in improving detection accuracy: introducing RTS raises change detection precision by 2.11 and 1.09 percentage points on the Nantong and Guantan datasets, respectively. Our approach enables the rapid construction of a high-quality multi-scale scene sample set incorporating RTS with minimal manual intervention. Combined with the designed change decision rules, whose parameters are adjustable and broadly applicable, the change detection method presented in this paper effectively identifies changed vector polygons and offers clear advantages over traditional manual updating of vector polygons.
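As a purely illustrative aid (not the authors' AMCT-Net), the sketch below shows the general shape of such a hybrid classifier: a small convolutional stem for local features, a Transformer encoder over the resulting tokens for global context, and a linear classification head. All layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class ConvTransformerClassifier(nn.Module):
    """Illustrative CNN + Transformer hybrid for scene-sample classification."""
    def __init__(self, num_classes: int = 7, dim: int = 128):
        super().__init__()
        self.stem = nn.Sequential(                       # local feature extraction
            nn.Conv2d(3, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
        )
        encoder_layer = nn.TransformerEncoderLayer(      # global context modeling
            d_model=dim, nhead=4, dim_feedforward=2 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.stem(x)                             # (B, C, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)        # (B, N, C) token sequence
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))             # pooled tokens -> class logits

# Example: logits = ConvTransformerClassifier()(torch.randn(2, 3, 64, 64))
```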
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Number | Land Use Types | Nantong Dataset | Guantan Dataset |
---|---|---|---|
C1 | buildings | 5243 | 4849 |
C2 | cropland | 2647 | 1744 |
C3 | forest | 3201 | 3768 |
C4 | industrial | 5844 | 4160 |
C5 | paddy field | 4848 | 5115 |
C6 | road | 2738 | 1240 |
C7 | water | 5690 | 5139 |
| Model | Metrics | Nantong Dataset | Guantan Dataset |
|---|---|---|---|
| ViT | Accuracy | 0.9068 | 0.9165 |
| | Precision | 0.8938 | 0.8914 |
| | Recall | 0.8903 | 0.9072 |
| | Specificity | 0.9816 | 0.9811 |
| | F1 Score | 0.8921 | 0.8992 |
| MTC-Net | Accuracy | 0.9117 | 0.9164 |
| | Precision | 0.9103 | 0.8969 |
| | Recall | 0.8904 | 0.9203 |
| | Specificity | 0.9824 | 0.9811 |
| | F1 Score | 0.8985 | 0.8996 |
| HCTM | Accuracy | 0.9135 | 0.9292 |
| | Precision | 0.8975 | 0.9230 |
| | Recall | 0.9109 | 0.9100 |
| | Specificity | 0.9817 | 0.9859 |
| | F1 Score | 0.9182 | 0.9215 |
| AMCT-Net (ours) | Accuracy | 0.9134 | 0.9351 |
| | Precision | 0.9179 | 0.9228 |
| | Recall | 0.9134 | 0.9292 |
| | Specificity | 0.9839 | 0.9898 |
| | F1 Score | 0.9201 | 0.9306 |
| Training Set | Metric | Epoch 10 | Epoch 20 | Epoch 30 | Epoch 40 | Epoch 50 | Epoch 82 |
|---|---|---|---|---|---|---|---|
| Initial | OA | 0.8153 | 0.8459 | 0.8644 | 0.8798 | 0.8984 | 0.9068 |
| Denoised by TCCV | OA | 0.8342 | 0.8709 | 0.8804 | 0.8979 | 0.9079 | 0.9288 |
| Denoised by EViTCC-DP | OA | 0.8420 | 0.8727 | 0.8875 | 0.9052 | 0.9129 | 0.9348 |
| Training Set | Metric | Epoch 10 | Epoch 20 | Epoch 30 | Epoch 40 | Epoch 60 | Epoch 84 |
|---|---|---|---|---|---|---|---|
| Initial | OA | 0.8063 | 0.8379 | 0.8595 | 0.8752 | 0.8906 | 0.9165 |
| Denoised by TCCV | OA | 0.8389 | 0.8711 | 0.8888 | 0.9010 | 0.9093 | 0.9333 |
| Denoised by EViTCC-DP | OA | 0.8441 | 0.8771 | 0.8953 | 0.9089 | 0.9197 | 0.9421 |
| Training Set | Metric (%) | Nantong Dataset | Guantan Dataset |
|---|---|---|---|
| Denoised (excluding RTS) | Precision | 88.22 | 89.97 |
| | Recall | 91.26 | 92.13 |
| Denoised (including RTS) | Precision | 90.33 | 91.06 |
| | Recall | 91.41 | 92.38 |