Iterative Optimization-Enhanced Contrastive Learning for Multimodal Change Detection
Abstract
1. Introduction
Background
1. Proposing an unsupervised MCD technique that fuses multimodal image features in a unified space via contrastive learning. The process incorporates iterative optimization with intelligent pseudo-label selection, rapidly distinguishing between changed and unchanged areas and producing precise change maps (CMs).
2. Introducing a contrastive learning-based feature space unification network, which diminishes discrepancies among positive samples and amplifies those between negative and anchor samples. A shared projection layer aligns features across modalities, further improving discrimination.
3. Implementing an iterative learning strategy enhanced with pseudo-labels dynamically chosen by a clustering and adaptive filtering scheme. This accelerates network convergence and bolsters accuracy by accentuating the disparity between changed and unchanged image regions.
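To illustrate contribution (2), the following is a minimal sketch of a cross-modal contrastive objective with a shared projection head. It uses an InfoNCE-style loss as a stand-in; the exact loss, network architecture, and names such as `proj` and `temperature` are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(feat_a, feat_b, proj, temperature=0.1):
    """InfoNCE-style loss over co-located features from two modalities.

    feat_a, feat_b: (N, D) features from the two images' extraction
    branches; row i of each tensor corresponds to the same spatial
    location (a positive pair), and all other rows act as negatives.
    `proj` is a projection head shared by both modalities, mapping
    features into the unified space.
    """
    za = F.normalize(proj(feat_a), dim=1)
    zb = F.normalize(proj(feat_b), dim=1)
    logits = za @ zb.t() / temperature        # (N, N) cross-modal similarities
    targets = torch.arange(za.size(0))        # diagonal entries are positives
    # Symmetric loss: each modality must retrieve its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Minimizing this loss pulls positive (same-location) pairs together and pushes negative pairs apart in the unified feature space, matching the behavior described in contribution (2).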
2. Proposed Method
2.1. Deep Feature Extraction Network
2.2. Multimodal Contrastive Learning
2.3. Iterative Optimization Learning with Pseudo-Labels
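The pseudo-label selection that this section builds on (clustering followed by adaptive filtering, per the contributions above) can be sketched as follows. The simple 2-means clustering, the `ratio` parameter, and the function name are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def select_pseudo_labels(diff, ratio=0.3, n_iter=20):
    """Cluster per-pixel difference magnitudes into two groups with a
    tiny 2-means, then keep only the fraction `ratio` of each cluster
    lying closest to its centroid as reliable pseudo-labels.

    Returns a flat array: 0 = unchanged, 1 = changed, -1 = filtered out.
    """
    x = np.asarray(diff, dtype=float).ravel()
    c = np.array([x.min(), x.max()])  # centroids: low = unchanged, high = changed
    for _ in range(n_iter):
        lab = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for j in (0, 1):
            if np.any(lab == j):
                c[j] = x[lab == j].mean()
    lab = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
    labels = np.full(x.shape, -1, dtype=int)
    for j in (0, 1):
        idx = np.flatnonzero(lab == j)
        if idx.size:
            # Keep the samples nearest their centroid: the most reliable ones.
            order = idx[np.argsort(np.abs(x[idx] - c[j]))]
            labels[order[: max(1, int(ratio * idx.size))]] = j
    return labels
```

Only the retained (non-negative) labels would supervise the next training round, so each iteration trains on the most confidently separated pixels.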
2.4. CM Generation
3. Experiment and Discussion
3.1. Multimodal Datasets and Quantitative Measures
1. Overall accuracy (OA): $\mathrm{OA} = \frac{TP + TN}{TP + TN + FP + FN}$, the proportion of correctly classified pixels.
2. F1 score (F1): $F1 = \frac{2PR}{P + R}$, the harmonic mean of precision $P = \frac{TP}{TP + FP}$ and recall $R = \frac{TP}{TP + FN}$.
3. Kappa coefficient (k): $k = \frac{\mathrm{OA} - P_e}{1 - P_e}$, where $P_e$ is the expected chance agreement computed from the marginal totals of the confusion matrix.
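A minimal sketch of how these three measures are computed from a binary change map and its ground truth (the function name `cd_metrics` is illustrative, not from the paper):

```python
import numpy as np

def cd_metrics(pred, gt):
    """Evaluate a binary change map against ground truth.

    pred, gt: arrays with 1 = changed, 0 = unchanged.
    Returns (OA, precision P, recall R, F1, Kappa k).
    """
    pred = np.asarray(pred).ravel().astype(bool)
    gt = np.asarray(gt).ravel().astype(bool)
    tp = np.sum(pred & gt)      # changed pixels detected as changed
    tn = np.sum(~pred & ~gt)    # unchanged pixels detected as unchanged
    fp = np.sum(pred & ~gt)     # false alarms
    fn = np.sum(~pred & gt)     # missed changes
    n = tp + tn + fp + fn

    oa = (tp + tn) / n
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    # Kappa: observed agreement (OA) corrected by the chance agreement
    # pe, computed from the marginal totals of the confusion matrix.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    k = (oa - pe) / (1 - pe) if pe < 1 else 0.0
    return oa, p, r, f1, k
```

The P, R, OA, F1, and k columns of the result tables below correspond to these quantities.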
3.2. Experimental Parameter Setting and Analysis
3.2.1. Feature Fusion Coefficient
3.2.2. Iteration Coefficient
3.2.3. Sample Ratio Coefficient
3.3. Performance of Proposed IOECL and Comparison Methods
1. Experiments on D1: The CMs and accuracy evaluation for D1 are shown in Figure 13 and Table 2, respectively. Overall, the methods achieved relatively good results in changed-area detection, with most detecting fairly complete flooding areas; this is due to the obvious difference between the flooding areas and the background of image t2. However, because the changed areas in the upper part of the image are relatively small, unchanged fragments are easily mixed in during detection. Compared with CAN, the proposed method retains the detailed outline of the changed areas more completely. In addition, it misclassifies fewer unchanged areas as changed than the remaining eight methods, mainly because the iterative optimization uses reliable pseudo-labels to enhance the distinction between changed and unchanged areas. In the accuracy evaluation, the proposed method holds a clear advantage, with a Kappa coefficient k of 0.5025; its overall accuracy OA of 0.9560 and F1 score of 0.5255 are also significant improvements.
2. Experiments on D2: The CMs of the different methods on dataset D2 are shown in Figure 14. As Table 3 shows, the large number of small, fragmented buildings keeps the accuracy of all nine methods relatively low on this dataset. The accuracy of SCASC is slightly higher than that of the proposed method.
3. Experiments on D3: Figure 15 shows two high-resolution optical images acquired from different satellite sensors. The proposed method produces far fewer false detections than the other eight methods and detects the most complete changed area in the semicircular part at the bottom of the changed building area. In the accuracy evaluation (Table 4), it obtains the highest Kappa coefficient k of 0.5533, far above the comparison methods, and achieves the best overall accuracy OA of 0.9040 and F1 score of 0.6041. Two factors explain this performance: the data's higher spatial resolution supplies detailed information from which more accurate features can be extracted, and the iterative optimization widens the gap between the changed and unchanged regions of the image, significantly improving accuracy over several iterations.
4. Experiments on D4: The geographical features of D4 are simple, and the proposed method again achieves the best F1 score and Kappa coefficient k. As Figure 16 shows, it detects the changed area more completely than the four comparison methods SCCN, INLPG, IRG-McS, and SCASC. The many small fragments in the unchanged area, however, are relatively complex and easily misclassified as changed, which makes this dataset challenging; the proposed method yields fewer false detections in the unchanged areas than the other methods. From Table 5, the proposed method achieves an overall accuracy OA, F1 score, and Kappa coefficient k of 0.9393, 0.7849, and 0.7505, respectively, all better than those of the other methods. With step-by-step iterative optimization, the changed features gradually become more pronounced and the contours of the changed regions clearer, distinctly separating them from the unchanged regions. Compared with the other methods, the iterative optimization of the proposed method shows a clear advantage in strengthening changed-region features and reducing fragmented false detections.
3.4. Ablation Study
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Tang, Y.; Zhang, L. Urban Change Analysis with Multi-Sensor Multispectral Imagery. Remote Sens. 2017, 9, 252.
- Chen, Y.; Tang, Y.; Han, T.; Zhang, Y.; Zou, B.; Feng, H. RAMC: A Rotation Adaptive Tracker with Motion Constraint for Satellite Video Single-Object Tracking. Remote Sens. 2022, 14, 3108.
- Chen, Y.; Tang, Y.; Yin, Z.; Han, T.; Zou, B.; Feng, H. Single Object Tracking in Satellite Videos: A Correlation Filter-Based Dual-Flow Tracker. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6687–6698.
- Han, T.; Tang, Y.; Yang, X.; Lin, Z.; Zou, B.; Feng, H. Change Detection for Heterogeneous Remote Sensing Images with Improved Training of Hierarchical Extreme Learning Machine (HELM). Remote Sens. 2021, 13, 4918.
- Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change Detection from Remotely Sensed Images: From Pixel-Based to Object-Based Approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106.
- Wu, C.; Du, B.; Zhang, L. Fully Convolutional Change Detection Framework with Generative Adversarial Network for Unsupervised, Weakly Supervised and Regional Supervised Change Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 9774–9788.
- Zhang, M.; Zhang, R.; Yang, Y.; Bai, H.; Zhang, J.; Guo, J. ISNet: Shape Matters for Infrared Small Target Detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 867–876.
- Zhang, M.; Bai, H.; Zhang, J.; Zhang, R.; Wang, C.; Guo, J.; Gao, X. RKformer: Runge-Kutta Transformer with Random-Connection Attention for Infrared Small Target Detection. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10 October 2022; pp. 1730–1738.
- Chen, Y.; Yuan, Q.; Tang, Y.; Xiao, Y.; He, J.; Han, T.; Liu, Z.; Zhang, L. SSTtrack: A Unified Hyperspectral Video Tracking Framework via Modeling Spectral-Spatial-Temporal Conditions. Inf. Fusion 2025, 114, 102658.
- Zhang, M.; Yue, K.; Li, B.; Guo, J.; Li, Y.; Gao, X. Single-Frame Infrared Small Target Detection via Gaussian Curvature Inspired Network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5005013.
- Zhang, M.; Wang, Y.; Guo, J.; Li, Y.; Gao, X.; Zhang, J. IRSAM: Advancing Segment Anything Model for Infrared Small Target Detection. arXiv 2024, arXiv:2407.07520.
- Yang, J.; Zhou, Y.; Cao, Y.; Feng, L. Heterogeneous Image Change Detection Using Deep Canonical Correlation Analysis. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2917–2922.
- Zhou, Y.; Liu, H.; Li, D.; Cao, H.; Yang, J.; Li, Z. Cross-Sensor Image Change Detection Based on Deep Canonically Correlated Autoencoders. In Artificial Intelligence for Communications and Networks; Han, S., Ye, L., Meng, W., Eds.; Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering; Springer International Publishing: Cham, Switzerland, 2019; Volume 286, pp. 251–257. ISBN 978-3-030-22967-2.
- Shao, R.; Du, C.; Chen, H.; Li, J. SUNet: Change Detection for Heterogeneous Remote Sensing Images from Satellite and UAV Using a Dual-Channel Fully Convolution Network. Remote Sens. 2021, 13, 3750.
- Zhang, C.; Feng, Y.; Hu, L.; Tapete, D.; Pan, L.; Liang, Z.; Cigna, F.; Yue, P. A Domain Adaptation Neural Network for Change Detection with Heterogeneous Optical and SAR Remote Sensing Images. Int. J. Appl. Earth Obs. Geoinf. 2022, 109, 102769.
- Zhang, M.; Zhang, R.; Zhang, J.; Guo, J.; Li, Y.; Gao, X. Dim2Clear Network for Infrared Small Target Detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4700718.
- Ma, W.; Xiong, Y.; Wu, Y.; Yang, H.; Zhang, X.; Jiao, L. Change Detection in Remote Sensing Images Based on Image Mapping and a Deep Capsule Network. Remote Sens. 2019, 11, 626.
- Liu, H.; Wang, Z.; Shang, F.; Zhang, M.; Gong, M.; Ge, F.; Jiao, L. A Novel Deep Framework for Change Detection of Multi-Source Heterogeneous Images. In Proceedings of the 2019 International Conference on Data Mining Workshops (ICDMW), Beijing, China, 8–11 November 2019; pp. 165–171.
- Jiang, X.; Li, G.; Zhang, X.-P.; He, Y. A Semisupervised Siamese Network for Efficient Change Detection in Heterogeneous Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4700718.
- Shi, J.; Wu, T.; Qin, A.K.; Lei, Y.; Jeon, G. Semisupervised Adaptive Ladder Network for Remote Sensing Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5408220.
- Luppino, L.T.; Bianchi, F.M.; Moser, G.; Anfinsen, S.N. Unsupervised Image Regression for Heterogeneous Change Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9960–9975.
- Gong, M.; Zhang, P.; Su, L.; Liu, J. Coupled Dictionary Learning for Change Detection from Multisource Data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7077–7091.
- Mignotte, M. A Fractal Projection and Markovian Segmentation-Based Approach for Multimodal Change Detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8046–8058.
- Jimenez-Sierra, D.A.; Benítez-Restrepo, H.D.; Vargas-Cardona, H.D.; Chanussot, J. Graph-Based Data Fusion Applied to: Change Detection and Biomass Estimation in Rice Crops. Remote Sens. 2020, 12, 2683.
- Han, T.; Tang, Y.; Zou, B.; Feng, H. Unsupervised Multimodal Change Detection Based on Adaptive Optimization of Structured Graph. Int. J. Appl. Earth Obs. Geoinf. 2024, 126, 103630.
- Han, T.; Tang, Y.; Chen, Y.; Zou, B.; Feng, H. Global Structure Graph Mapping for Multimodal Change Detection. Int. J. Digit. Earth 2024, 17, 2347457.
- Sun, Y.; Lei, L.; Li, X.; Sun, H.; Kuang, G. Nonlocal Patch Similarity Based Heterogeneous Remote Sensing Change Detection. Pattern Recognit. 2021, 109, 107598.
- Zhao, L.; Sun, Y.; Lei, L.; Zhang, S. Auto-Weighted Structured Graph-Based Regression Method for Heterogeneous Change Detection. Remote Sens. 2022, 14, 4570.
- Tang, Y.; Yang, X.; Han, T.; Zhang, F.; Zou, B.; Feng, H. Enhanced Graph Structure Representation for Unsupervised Heterogeneous Change Detection. Remote Sens. 2024, 16, 721.
- Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Structure Consistency-Based Graph for Unsupervised Change Detection with Homogeneous and Heterogeneous Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4700221.
- Sun, Y.; Lei, L.; Guan, D.; Kuang, G. Iterative Robust Graph for Unsupervised Change Detection of Heterogeneous Remote Sensing Images. IEEE Trans. Image Process. 2021, 30, 6277–6291.
- Sun, Y.; Lei, L.; Guan, D.; Li, M.; Kuang, G. Sparse-Constrained Adaptive Structure Consistency-Based Unsupervised Image Regression for Heterogeneous Remote-Sensing Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4405814.
- Han, T.; Tang, Y.; Chen, Y.; Yang, X.; Guo, Y.; Jiang, S. SDC-GAE: Structural Difference Compensation Graph Autoencoder for Unsupervised Multimodal Change Detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5622416.
- Rani, V.; Nabi, S.T.; Kumar, M.; Mittal, A.; Kumar, K. Self-Supervised Learning: A Succinct Review. Arch. Computat. Methods Eng. 2023, 30, 2761–2775.
- Gui, J.; Chen, T.; Zhang, J.; Cao, Q.; Sun, Z.; Luo, H.; Tao, D. A Survey on Self-Supervised Learning: Algorithms, Applications, and Future Trends. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 1–20.
- Liu, X.; Zhang, F.; Hou, Z.; Mian, L.; Wang, Z.; Zhang, J.; Tang, J. Self-Supervised Learning: Generative or Contrastive. IEEE Trans. Knowl. Data Eng. 2021, 35, 857–876.
- Bond-Taylor, S.; Leach, A.; Long, Y.; Willcocks, C.G. Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7327–7347.
- Kingma, D.P.; Welling, M. An Introduction to Variational Autoencoders. FNT Mach. Learn. 2019, 12, 307–392.
- Wang, K.; Gou, C.; Duan, Y.; Lin, Y.; Zheng, X.; Wang, F.-Y. Generative Adversarial Networks: Introduction and Outlook. IEEE/CAA J. Autom. Sinica 2017, 4, 588–598.
- Gui, J.; Sun, Z.; Wen, Y.; Tao, D.; Ye, J. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. IEEE Trans. Knowl. Data Eng. 2023, 35, 3313–3332.
- Han, T.; Tang, Y.; Chen, Y. Heterogeneous Image Change Detection Based on Two-Stage Joint Feature Learning. In Proceedings of the IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 3215–3218.
- Liu, J.; Gong, M.; Qin, K.; Zhang, P. A Deep Convolutional Coupling Network for Change Detection Based on Heterogeneous Optical and Radar Images. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 545–559.
- Zhao, W.; Wang, Z.; Gong, M.; Liu, J. Discriminative Feature Learning for Unsupervised Change Detection in Heterogeneous Images Based on a Coupled Neural Network. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7066–7080.
- Su, L.; Gong, M.; Zhang, P.; Zhang, M.; Liu, J.; Yang, H. Deep Learning and Mapping Based Ternary Change Detection for Information Unbalanced Images. Pattern Recognit. 2017, 66, 213–228.
- Niu, X.; Gong, M.; Zhan, T.; Yang, Y. A Conditional Adversarial Network for Change Detection in Heterogeneous Images. IEEE Geosci. Remote Sens. Lett. 2019, 16, 45–49.
- Zhan, T.; Gong, M.; Jiang, X.; Li, S. Log-Based Transformation Feature Learning for Change Detection in Heterogeneous Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1352–1356.
- Luppino, L.T.; Kampffmeyer, M.; Bianchi, F.M.; Moser, G.; Serpico, S.B.; Jenssen, R.; Anfinsen, S.N. Deep Image Translation with an Affinity-Based Change Prior for Unsupervised Multimodal Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4700422.
- Chen, Y.; Bruzzone, L. Self-Supervised Change Detection in Multiview Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5402812.
- Saha, S.; Ebel, P.; Zhu, X.X. Self-Supervised Multisensor Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4405710.
- Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A K-Means Clustering Algorithm. Appl. Stat. 1979, 28, 100.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Lin, T.-Y.; Dollar, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
- Agarap, A.F. Deep Learning Using Rectified Linear Units (ReLU). arXiv 2019, arXiv:1803.08375.
- Wang, Y.; Albrecht, C.M.; Braham, N.A.A.; Mou, L.; Zhu, X.X. Self-Supervised Learning in Remote Sensing: A Review. IEEE Geosci. Remote Sens. Mag. 2022, 10, 213–247.
- Zhang, L.; Lu, W.; Zhang, J.; Wang, H. A Semisupervised Convolution Neural Network for Partial Unlabeled Remote-Sensing Image Segmentation. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6507305.
- Yang, J.; Kang, Z.; Yang, Z.; Xie, J.; Xue, B.; Yang, J.; Tao, J. A Laboratory Open-Set Martian Rock Classification Method Based on Spectral Signatures. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4601815.
- Rao, W.; Qu, Y.; Gao, L.; Sun, X.; Wu, Y.; Zhang, B. Transferable Network with Siamese Architecture for Anomaly Detection in Hyperspectral Images. Int. J. Appl. Earth Obs. Geoinf. 2022, 106, 102669.
- Jing, H.; Cheng, Y.; Wu, H.; Wang, H. Radar Target Detection with Multi-Task Learning in Heterogeneous Environment. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4021405.
- Zhang, L.; Zhang, S.; Zou, B.; Dong, H. Unsupervised Deep Representation Learning and Few-Shot Classification of PolSAR Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5100316.
- Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 815–823.
Method | Parameter settings
---|---
SCCN | Weighting parameter ; learning rate is ; and the number of epochs is 500.
X-Net | Dropout rate is 20%; loss function for 240 epochs at a learning rate of ; and the weights of the loss functions are three: , , and .
ACE-Net | Dropout rate is 20%; loss function for 240 epochs at a learning rate of ; and the weights of the loss functions are five: , , , , and .
CAN | Weighting parameter ; learning rate is ; and the number of epochs is 500.
NPSG | Patch size ; search window size ; search step size ; target patch step size ; and the selected most similar neighbors .
INLPG | Patch size ; step size ; and the most similar neighbors (N is the total number of patches).
IRG-McS | Superpixels number ; and the maximum number of iterations .
SCASC | Superpixels number ; and weighting parameter .
Table 2. Accuracy evaluation of different methods on dataset D1.

Method | P | R | OA | F1 | k |
---|---|---|---|---|---|
SCCN | 0.2872 | 0.9050 | 0.8977 | 0.4360 | 0.3960 |
X-Net | 0.2735 | 0.7273 | 0.9008 | 0.3975 | 0.3566 |
ACE-Net | 0.2613 | 0.7272 | 0.8982 | 0.3845 | 0.3422 |
CAN | 0.2739 | 0.3590 | 0.9304 | 0.3107 | 0.2748 |
NPSG | 0.4263 | 0.5411 | 0.9481 | 0.4769 | 0.4500 |
INLPG | 0.3002 | 0.7309 | 0.9138 | 0.4256 | 0.3876 |
IRG-McS | 0.4040 | 0.6542 | 0.9427 | 0.4995 | 0.4709 |
SCASC | 0.4117 | 0.6181 | 0.9447 | 0.4942 | 0.4662 |
IOECL | 0.4966 | 0.5579 | 0.9560 | 0.5255 | 0.5025 |
Table 3. Accuracy evaluation of different methods on dataset D2.

Method | P | R | OA | F1 | k |
---|---|---|---|---|---|
SCCN | 0.3767 | 0.5434 | 0.8922 | 0.4449 | 0.3874 |
X-Net | 0.2551 | 0.3527 | 0.8609 | 0.2961 | 0.2211 |
ACE-Net | 0.3244 | 0.4119 | 0.8801 | 0.3630 | 0.2978 |
CAN | 0.0410 | 0.0334 | 0.8610 | 0.0368 | −0.0373 |
NPSG | 0.1974 | 0.3919 | 0.8249 | 0.2625 | 0.1753 |
INLPG | 0.2744 | 0.4036 | 0.8677 | 0.3267 | 0.2562 |
IRG-McS | 0.2678 | 0.2867 | 0.8809 | 0.2769 | 0.2122 |
SCASC | 0.4748 | 0.4072 | 0.9170 | 0.4384 | 0.3939 |
IOECL | 0.2866 | 0.5289 | 0.8578 | 0.3717 | 0.2995 |
Table 4. Accuracy evaluation of different methods on dataset D3.

Method | P | R | OA | F1 | k |
---|---|---|---|---|---|
SCCN | 0.3380 | 0.2604 | 0.8106 | 0.2942 | 0.1869 |
X-Net | 0.5827 | 0.3980 | 0.8583 | 0.4729 | 0.3944 |
ACE-Net | 0.5929 | 0.3137 | 0.8560 | 0.4103 | 0.3370 |
CAN | 0.4606 | 0.2161 | 0.8428 | 0.2942 | 0.2185 |
NPSG | 0.4692 | 0.1218 | 0.8460 | 0.1934 | 0.1397 |
INLPG | 0.4003 | 0.4289 | 0.8160 | 0.4141 | 0.3051 |
IRG-McS | 0.6022 | 0.2829 | 0.8630 | 0.3849 | 0.3189 |
SCASC | 0.8015 | 0.4041 | 0.8945 | 0.5373 | 0.4849 |
IOECL | 0.8054 | 0.4833 | 0.9040 | 0.6041 | 0.5533 |
Table 5. Accuracy evaluation of different methods on dataset D4.

Method | P | R | OA | F1 | k |
---|---|---|---|---|---|
SCCN | 0.1662 | 0.9998 | 0.4016 | 0.2850 | 0.1011 |
X-Net | 0.6041 | 0.7361 | 0.9110 | 0.6636 | 0.6129 |
ACE-Net | 0.2492 | 0.2737 | 0.8150 | 0.2609 | 0.1554 |
CAN | 0.3310 | 0.2891 | 0.8455 | 0.3086 | 0.2221 |
NPSG | 0.3123 | 0.6684 | 0.7849 | 0.4257 | 0.3142 |
INLPG | 0.3886 | 0.9386 | 0.8166 | 0.5497 | 0.4583 |
IRG-McS | 0.6997 | 0.8685 | 0.9399 | 0.7750 | 0.7408 |
SCASC | 0.6763 | 0.8202 | 0.9317 | 0.7413 | 0.7024 |
IOECL | 0.6801 | 0.9278 | 0.9393 | 0.7849 | 0.7505 |
Iteration | D1 OA | D1 F1 | D1 k | D2 OA | D2 F1 | D2 k | D3 OA | D3 F1 | D3 k | D4 OA | D4 F1 | D4 k
---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 0.9029 | 0.3089 | 0.2646 | 0.8626 | 0.3623 | 0.2912 | 0.8464 | 0.5142 | 0.4231 | 0.8783 | 0.5846 | 0.5162 |
6 | 0.9560 | 0.5255 | 0.5025 | 0.8578 | 0.3717 | 0.2995 | 0.9040 | 0.6041 | 0.5533 | 0.9393 | 0.7849 | 0.7505 |
Share and Cite
Tang, Y.; Yang, X.; Han, T.; Sun, K.; Guo, Y.; Hu, J. Iterative Optimization-Enhanced Contrastive Learning for Multimodal Change Detection. Remote Sens. 2024, 16, 3624. https://doi.org/10.3390/rs16193624