The Eyes of the Gods: A Survey of Unsupervised Domain Adaptation Methods Based on Remote Sensing Data
Abstract
1. Introduction
- This paper clarifies and reviews the idea of unsupervised domain adaptation in the remote sensing field. In a nutshell, it provides an overview of unsupervised domain adaptation methods divided into four categories: (1) generative training methods, (2) adversarial training methods, (3) self-training methods, and (4) hybrid training methods. It also explores the benefits and potential drawbacks of the various training methods by comparing the statistics of their experimental results.
- This paper elaborates on domain shift based on a thorough review of the large remote sensing datasets used in unsupervised domain adaptation research and analyzes its influencing factors, which range from data acquisition and imaging factors to task and annotation factors.
- Remote sensing applications, together with case studies that apply unsupervised domain adaptation to remote sensing data, are introduced in this paper. We also focus on unsupervised domain adaptation methods that address practical dilemmas in remote sensing data, such as multi-domain, partial and open-set settings, covering both task definitions and solution approaches.
- The possible challenges and future development directions of unsupervised domain adaptation methods for remote sensing images are also analyzed in depth through a comparison with methods for natural images.
2. Overview
2.1. Notations and Definitions
2.2. Remote Sensing Datasets and Tasks
Tasks | Dataset | Bands | Categories | Device | Image Size | No. of Images | Resolution | Region | Works
---|---|---|---|---|---|---|---|---|---
Regression | SARptical [58] | SAR-2 | - | TerraSAR-X | 112 × 112 | 20,216 | 1 m | Berlin | 
 | | Opt-3 | | UltraCAM | 112 × 112 | | 0.2 m | | 
Classification | NWPU [59] | Opt-3 | 45 | Google Earth | 256 × 256 | 31,500 | 0.2–30 m | - | 
 | PatternNet [64] | Opt-3 | 38 | Google Map | 256 × 256 | 800 | 0.062–4.693 m | - | 
 | AID [65] | Opt-3 | 30 | Google Earth | 600 × 600 | 10,000 | 0.5–8 m | - | [60,61,62,63]
 | Merced [66] | Opt-3 | 21 | Map | 256 × 256 | 2100 | 0.3 m | - | 
 | Eurosat [67,68] | Opt-3 | 10 | Sentinel-2 | 64 × 64 | 27,000 | 0.2–30 m | - | 
 | | MSI-13 | 10 | Sentinel-2 | 64 × 64 | 27,000 | 0.2–30 m | - | 
 | MSTAR [69] | SAR-2 | 10 | SAR Sensors | 128 × 128 | 17,658 | 0.3 m | - | [70]
 | So2Sat LCZ42 [71] | SAR-8 | 17 | Sentinel-1 | 32 × 32 | 400,673 | 10 m | 52 cities | 
 | | MSI-10 | | Sentinel-2 | 32 × 32 | | 10 m | | 
Detection | DOTA [72] | Opt-3 | 15 | Google Earth, GF-2, JL-1 | 800 × 800–4000 × 4050 | 2806 | - | - | [73]
 | DIOR [74] | Opt-3 | 20 | Google Earth | 800 × 800 | 23,463 | 0.5–30 m | - | 
 | Optical-SAR [75] | Opt-1 | 5 | Google Earth | 192 × 192 | 10,000 | 0.3 m | Visakhapatnam | 
 | | SAR-1 | | TerraSAR-X | | | | | 
 | Simulation Data [75] | SAR-2 | 6 | 3D-CAD | - | 10,800 | - | - | 
 | Measured Dataset [75] | SAR-2 | 5 | Sentinel-1, TerraSAR-X | 192 × 192 | 17,500 | 5/1 m | - | 
 | FARADSAR [76] | SAR-1 | 10 | Radar | 1300 × 580–1700 × 1850 | 106 | 0.1 m | The University of New Mexico | [37]
 | miniSAR [77] | SAR-1 | 10 | Radar | 1638 × 2510 | 9 | 0.1 m | Kirtland Air Force Base | 
Segmentation | SkyScapes [78] | Opt-3 | 31 | Camera System | 5616 × 3744 | 16 | 0.13 m | Munich | 
 | LoveDA [79] | Opt-3 | 7 | Spaceborne | 1024 × 1024 | 5987 | 0.3 m | Nanjing, Changzhou, Wuhan | 
 | ISPRS Vaihingen [80] | Opt-3 | 5 | - | 2500 × 2000 | 32 | 0.09 m | Vaihingen | 
 | ISPRS Potsdam [80] | Opt-3 | 5 | - | 6000 × 6000 | 38 | 0.05 m | Potsdam | 
 | Beijing Dataset [81] | Opt-3 | 4 | DigitalGlobe, SpaceView | 1800 × 800 | 202 | 0.3 m | Beijing | [81]
 | Massachusetts [82] | Opt-3 | 3 | - | 1500 × 1500 | 1171 | 1 m | Massachusetts | 
 | SpaceNet [83] | SAR-5 | 2 | Aerial Sensor | 406–439 | 6000 | 0.5 m | Rotterdam | 
 | | Opt-3 | 2 | WorldView-2 | 406–439 | 6000 | 0.5 m | Rotterdam | 
 | DeepGlobe [84] | Opt-3 | 2 | DigitalGlobe, Vivid | 1024 × 1024 | 8970 | 0.5 m | Thailand, Indonesia, India | 
 | CHN6-CUG [85] | Opt-3 | 2 | Google Earth | 512 × 512 | 4511 | 0.5 m | 6 cities | 
 | Indian Pines [86] | HSI-224 | 16 | AVIRIS Sensor | 145 × 145 | 10,249 | 20 m | North-Western Indiana | 
 | Salinas [87] | HSI-224 | 16 | AVIRIS Sensor | 512 × 217 | 54,129 | 3.7 m | Salinas Valley | 
 | | HSI-224 | 16 | AVIRIS Sensor | 86 × 83 | 5348 | - | Salinas Valley | 
 | Botswana [88] | HSI-145 | 14 | NASA EO-1 | 1476 × 256 | 3284 | 30 m | Okavango Delta | [81]
 | | HSI-145 | 14 | EO-1 Satellite | 1476 × 256 | 2494 | 30 m | Okavango Delta | 
 | Kennedy Space Center [89] | HSI-176 | 13 | Spectrometer | 512 × 614 | 1 | 18 m | Kennedy | [90]
 | Washington DC MALL [91] | HSI-191 | 7 | Sensor | 1280 × 307 | 1 | - | Washington | [92]
Generation | RICE [93] | Opt-3 | 2 | Google Earth | 512 × 512 | 1000 | - | - | 
 | | Opt-3 | 4 | Landsat 8 OLI/TIRS | 512 × 512 | 450 sets | - | - | 
 | SEN12MS-CR [94,95] | SAR-2 | - | Sentinel-1 | 256 × 256 | 472,563 | 10 m | - | 
 | | MSI-13 | | Sentinel-1 | 256 × 256 | | 10 m | | 
 | | MSI-13 | | Sentinel-2 | 256 × 256 | | 10 m | | 
2.2.1. Classification Task
2.2.2. Detection Task
2.2.3. Segmentation Task
2.2.4. Generation Task
2.3. The Key Problems: Domain Shift
- Detection Areas. There are large differences between areas or countries due to discrepancies in economic level and human activity. For example, roads in urban and suburban areas have different characteristics, such as color, connectivity and contrast with the background, as noted in LoveDA [79]. From the two images indicated by the purple arrow in Figure 3, we can also get a sense of the diversity between different areas collected over Potsdam, visible in the density of the buildings marked in blue in the label images.
- Illumination. Imagery captured at different times of the day exhibits illumination differences, which may increase the instability of the trained model. In particular, visible bands are unavailable at night in some all-day monitoring missions, which causes discrepancies compared with daytime data.
- Resolution or Ground Sampling Distance (GSD) [15,31]. Take the Potsdam dataset at 5 cm and 9 cm resolution as an example, as shown in the two images indicated by the blue arrow in Figure 3. High resolution and wide coverage may cause images to contain redundant information or noise. Reference [60] uses five diverse remote sensing datasets to study their characteristics through transfer learning and concludes that multi-resolution data help in learning generic representations.
- Devices and detection bands [128]. Sensors differ greatly in wavelength, band type and the number of bands. The two images and details indicated by the green arrow in Figure 3 show appearance differences even though they are selected from the same area and satellite: one consists of RGB bands and the other of IRRG bands. Reference [128] noticed the discrepancy caused by different multi-spectral bands and built domain adaptation methods that transfer from RGB images to other types of multi-spectral images. As shown in Figure 4a, we also display the pixel value statistics of the individual channels across the two datasets. The two datasets' pixel value distributions are clearly dissimilar, and the IR band differs significantly from the visible channels after being scaled to the 0–255 range.
- Inconsistency in class distribution. Even when data from different domains cover the same categories, the proportion of each category may differ, as shown in Figure 4b: the 'Building' class has the highest proportion of pixels in the Vaihingen dataset, while 'Impervious surface' dominates in the Potsdam dataset. (A minimal sketch for computing such channel and class statistics is given after this list.)
- Atmospheric effects. Even images collected by the same satellite sensors can have quite different radiometry, which makes them hard to annotate and recognize.
- Position of the sun and satellite observation direction. The imaging quality of Unmanned Aerial Vehicles (UAVs) is influenced by the position of the sun, as mentioned in [129], causing incorrect exposure and distortion. The relative position between the sun and the satellite also affects satellite image quality [130].
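To make the channel-statistics and class-distribution comparisons above concrete, the following minimal sketch (plain NumPy) computes per-channel pixel-value histograms, class pixel frequencies, and a symmetric KL divergence as a rough shift score between two domains. The variable names in the usage comments (e.g., potsdam_imgs, vaihingen_lbls) are illustrative placeholders, not part of any released toolkit.

```python
import numpy as np

def channel_histograms(images, bins=256):
    """Per-channel pixel-value histograms over a list of (H, W, C) uint8 patches."""
    stacked = np.stack(images)                      # (N, H, W, C)
    return [np.histogram(stacked[..., c], bins=bins, range=(0, 255))[0].astype(float)
            for c in range(stacked.shape[-1])]

def class_frequencies(label_maps, n_classes):
    """Fraction of pixels belonging to each class over a list of integer label maps."""
    counts = np.bincount(np.concatenate([lm.ravel() for lm in label_maps]),
                         minlength=n_classes)
    return counts / counts.sum()

def symmetric_kl(h_src, h_tgt, eps=1e-12):
    """Symmetric KL divergence between two histograms, used as a rough shift score."""
    p = h_src.astype(float) + eps
    q = h_tgt.astype(float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Illustrative usage with placeholder names:
# `potsdam_imgs` / `vaihingen_imgs` are lists of (H, W, 3) uint8 patches,
# `potsdam_lbls` / `vaihingen_lbls` are lists of integer label maps with 6 classes.
# src_hists = channel_histograms(potsdam_imgs)
# tgt_hists = channel_histograms(vaihingen_imgs)
# shift_per_channel = [symmetric_kl(s, t) for s, t in zip(src_hists, tgt_hists)]
# print(class_frequencies(potsdam_lbls, 6), class_frequencies(vaihingen_lbls, 6))
```

Large per-channel divergences or strongly mismatched class frequencies are an indication that adaptation, rather than direct transfer, is likely to be needed.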
3. Approaches of Domain Adaptation in Remote Sensing
3.1. Generative Training Methods
- Target-stylized methods. The main purpose of target-stylized methods is to convert source domain images into fake images with a style similar to the target domain images, so that the semantic consistency between data and labels is preserved to the greatest extent. The general pipeline is to design a generator G_{S→T} that produces target-stylized images X_{S→T} from the source images, and then utilize X_{S→T} together with the source labels Y_S to train or fine-tune a model F. Representative works include ColorMapGAN and Neural Style Transfer (NST) methods [13,14], GAN-based methods [15,16,20,116], and so on. Matching methods only consider color distribution matching and not semantics, but the transfer can be implemented quickly and efficiently when the domain gap is small, e.g., graph matching [136] and histogram matching [137]; a minimal histogram-matching sketch is given after this list.
- Source-stylized or mid-domain transferring methods. In contrast to target-stylized methods, the biggest advantage of these methods is that the data and labels used to train the model remain precisely aligned, which guarantees the accuracy of the semantics. These methods mainly transfer target images into source-styled images or construct an intermediate domain between the source and target domains, such as inverse DA and other multi-domain methods [19,21,22]. The pipeline is similar to the target-stylized one, but the data flow is reversed and the generator is defined as G_{T→S}.
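As a concrete illustration of the matching-based, target-stylized baseline mentioned in the first bullet, the sketch below applies per-channel histogram matching with scikit-image. It is a generic sketch rather than the implementation of any cited method; it assumes source and target patches are available as (H, W, C) uint8 arrays, and the channel_axis keyword requires scikit-image 0.19 or later.

```python
import numpy as np
from skimage.exposure import match_histograms

def target_stylize(source_img, target_img):
    """Per-channel histogram matching: shift the radiometry of a source patch
    toward a target-domain patch while leaving the pixel geometry (and hence
    the source annotations) untouched."""
    # channel_axis=-1 treats the last axis as the band dimension (scikit-image >= 0.19).
    return match_histograms(source_img, target_img, channel_axis=-1)

if __name__ == "__main__":
    # Stand-in patches; in practice these would be source- and target-domain tiles.
    src = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    tgt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    fake_target_style = target_stylize(src, tgt)
    print(fake_target_style.shape)
```

Training or fine-tuning a segmentation model on the stylized source patches paired with the original source labels then follows the target-stylized pipeline described above.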
3.1.1. Target-Stylized Methods
3.1.2. Source-Stylized or Mid-Domain Methods
3.2. Adversarial Training Methods
3.2.1. Feature-Level Adversarial Training Methods
3.2.2. Pixel-Level Adversarial Training Methods
3.2.3. Hierarchical Adversarial Training Methods
3.3. Self-Training Methods
3.4. Hybrid Training Methods
3.4.1. GT-AT Methods
3.4.2. AT-ST Methods
3.4.3. GT-AT-ST Methods
4. Other Concerns of UDA in Remote Sensing
4.1. Scale Divergence Problem
4.2. Partial or Open-Set Unsupervised Domain Adaptation
4.3. Multi-Domains Unsupervised Domain Adaptation
4.4. Domain Generalization in Remote Sensing
5. Discussion
5.1. Comparisons of Different UDA Training Methods
5.2. Comparisons of UDA Methods between Natural and Remote Sensing Data
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, P. A survey of remote-sensing big data. Front. Environ. Sci. 2015, 3, 45. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40. [Google Scholar] [CrossRef]
- Chi, M.; Plaza, A.; Benediktsson, J.A.; Sun, Z.; Shen, J.; Zhu, Y. Big data for remote sensing: Challenges and opportunities. Proc. IEEE 2016, 104, 2207–2219. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Munich, Germany, 2015; pp. 234–241. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Wang, M.; Deng, W. Deep Visual Domain Adaptation: A Survey. Neurocomputing 2018, 312, 135–153. [Google Scholar] [CrossRef]
- Wilson, G.; Cook, D.J. A survey of unsupervised deep domain adaptation. ACM Trans. Intell. Syst. Technol. TIST 2020, 11, 1–46. [Google Scholar] [CrossRef]
- Tuia, D.; Persello, C.; Bruzzone, L. Domain adaptation for the classification of remote sensing data: An overview of recent advances. IEEE Geosci. Remote Sens. Mag. 2016, 4, 41–57. [Google Scholar] [CrossRef]
- Kellenberger, B.; Tasar, O.; Bhushan Damodaran, B.; Courty, N.; Tuia, D. Deep Domain Adaptation in Earth Observation. In Deep Learning for the Earth Sciences: A Comprehensive Approach to Remote Sensing, Climate Science, and Geosciences; Wiley: Hoboken, NJ, USA, 2021; pp. 90–104. [Google Scholar] [CrossRef]
- Tasar, O.; Happy, S.; Tarabalka, Y.; Alliez, P. ColorMapGAN: Unsupervised domain adaptation for semantic segmentation using color mapping generative adversarial networks. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7178–7193. [Google Scholar] [CrossRef]
- Tasar, O.; Happy, S.; Tarabalka, Y.; Alliez, P. SemI2I: Semantically consistent image-to-image translation for domain adaptation of remote sensing data. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1837–1840. [Google Scholar]
- Benjdira, B.; Bazi, Y.; Koubaa, A.; Ouni, K. Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images. Remote Sens. 2019, 11, 1369. [Google Scholar] [CrossRef] [Green Version]
- Li, Y.; Shi, T.; Chen, W.; Zhang, Y.; Wang, Z.; Li, H. Unsupervised Style Transfer via Dualgan for Cross-Domain Aerial Image Classification. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1385–1388. [Google Scholar]
- Fu, H.; Gong, M.; Wang, C.; Batmanghelich, K.; Zhang, K.; Tao, D. Geometry-consistent generative adversarial networks for one-sided unsupervised domain mapping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2427–2436. [Google Scholar]
- Zhao, D.; Yuan, B.; Gao, Y.; Qi, X.; Shi, Z. UGCNet: An Unsupervised Semantic Segmentation Network Embedded with Geometry Consistency for Remote-Sensing Images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Li, Z.; Wang, R.; Pun, M.O.; Wang, Z.; Yu, H. Inverse Domain Adaptation for Remote Sensing Images Using Wasserstein Distance. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2345–2348. [Google Scholar]
- Cai, Y.; Yang, Y.; Zheng, Q.; Shen, Z.; Shang, Y.; Yin, J.; Shi, Z. BiFDANet: Unsupervised Bidirectional Domain Adaptation for Semantic Segmentation of Remote Sensing Images. Remote Sens. 2022, 14, 190. [Google Scholar] [CrossRef]
- Tasar, O.; Tarabalka, Y.; Giros, A.; Alliez, P.; Clerc, S. StandardGAN: Multi-source domain adaptation for semantic segmentation of very high resolution satellite images by data standardization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 192–193. [Google Scholar]
- Tasar, O.; Giros, A.; Tarabalka, Y.; Alliez, P.; Clerc, S. Daugnet: Unsupervised, multisource, multitarget, and life-long domain adaptation for semantic segmentation of satellite images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1067–1081. [Google Scholar] [CrossRef]
- Tzeng, E.; Hoffman, J.; Zhang, N.; Saenko, K.; Darrell, T. Deep domain confusion: Maximizing for domain invariance. arXiv 2014, arXiv:1412.3474. [Google Scholar]
- Sejdinovic, D.; Sriperumbudur, B.; Gretton, A.; Fukumizu, K. Equivalence of distance-based and RKHS-based statistics in hypothesis testing. Ann. Stat. 2013, 41, 2263–2291. [Google Scholar] [CrossRef]
- Sun, B.; Saenko, K. Deep coral: Correlation alignment for deep domain adaptation. In Proceedings of the European Conference on Computer Vision; Springer: Amsterdam, The Netherlands, 2016; pp. 443–450. [Google Scholar]
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, NSW, Australia, 6–11 August 2017; pp. 214–223. [Google Scholar]
- Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 1180–1189. [Google Scholar]
- Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7167–7176. [Google Scholar]
- Tsai, Y.H.; Hung, W.C.; Schulter, S.; Sohn, K.; Yang, M.H.; Chandraker, M. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–20 June 2018; pp. 7472–7481. [Google Scholar]
- Luo, Y.; Zheng, L.; Guan, T.; Yu, J.; Yang, Y. Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2507–2516. [Google Scholar]
- Liu, W.; Su, F. Unsupervised adversarial domain adaptation network for semantic segmentation. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1978–1982. [Google Scholar] [CrossRef]
- Lu, X.; Zhong, Y. A Noval Global-Local Adversarial Network for Unsupervised Cross-Domain Road Detection. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, IEEE, Brussels, Belgium, 11–16 July 2021; pp. 2775–2778. [Google Scholar]
- Chen, J.; Chen, G.; Fang, B.; Wang, J.; Wang, L. Class-Aware Domain Adaptation for Coastal Land Cover Mapping Using Optical Remote Sensing Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11800–11813. [Google Scholar] [CrossRef]
- Kang, G.; Jiang, L.; Yang, Y.; Hauptmann, A.G. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 4893–4902. [Google Scholar]
- Zou, Y.; Yu, Z.; Kumar, B.; Wang, J. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 289–305. [Google Scholar]
- Mei, K.; Zhu, C.; Zou, J.; Zhang, S. Instance adaptive self-training for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 415–430. [Google Scholar]
- Shi, Y.; Du, L.; Guo, Y. Unsupervised Domain Adaptation for SAR Target Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6372–6385. [Google Scholar] [CrossRef]
- Zhao, Y.; Gao, H.; Guo, P.; Sun, Z. ResiDualGAN: Resize-Residual DualGAN for Cross-Domain Remote Sensing Images Semantic Segmentation. arXiv 2022, arXiv:2201.11523. [Google Scholar]
- Hu, J.; Tuo, H.; Wang, C.; Zhong, H.; Pan, H.; Jing, Z. Unsupervised satellite image classification based on partial transfer learning. Aerosp. Syst. 2020, 3, 21–28. [Google Scholar] [CrossRef]
- Cao, Z.; Long, M.; Wang, J.; Jordan, M.I. Partial transfer learning with selective adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2724–2732. [Google Scholar]
- Panareda Busto, P.; Gall, J. Open set domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 754–763. [Google Scholar]
- Saito, K.; Yamamoto, S.; Ushiku, Y.; Harada, T. Open set domain adaptation by backpropagation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 153–168. [Google Scholar]
- Zhao, S.; Li, B.; Yue, X.; Gu, Y.; Xu, P.; Hu, R.; Chai, H.; Keutzer, K. Multi-source domain adaptation for semantic segmentation. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar]
- Lu, X.; Gong, T.; Zheng, X. Multisource compensation network for remote sensing cross-domain scene classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2504–2515. [Google Scholar] [CrossRef]
- Ji, S.; Wang, D.; Luo, M. Generative adversarial network-based full-space domain adaptation for land cover classification from multiple-source remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3816–3828. [Google Scholar] [CrossRef]
- Zheng, J.; Wu, W.; Fu, H.; Li, W.; Dong, R.; Zhang, L.; Yuan, S. Unsupervised mixed multi-target domain adaptation for remote sensing images classification. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1381–1384. [Google Scholar]
- Li, Y.; Shi, T.; Zhang, Y.; Chen, W.; Wang, Z.; Li, H. Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation. ISPRS J. Photogramm. Remote Sens. 2021, 175, 20–33. [Google Scholar] [CrossRef]
- Iqbal, J.; Ali, M. Weakly-supervised domain adaptation for built-up region segmentation in aerial and satellite imagery. ISPRS J. Photogramm. Remote Sens. 2020, 167, 263–275. [Google Scholar] [CrossRef]
- Luo, M.; Ji, S. Cross-spatiotemporal land-cover classification from VHR remote sensing images with deep learning based domain adaptation. ISPRS J. Photogramm. Remote Sens. 2022, 191, 105–128. [Google Scholar] [CrossRef]
- Nyborg, J.; Pelletier, C.; Lefèvre, S.; Assent, I. TimeMatch: Unsupervised cross-region adaptation by temporal shift estimation. ISPRS J. Photogramm. Remote Sens. 2022, 188, 301–313. [Google Scholar] [CrossRef]
- Toldo, M.; Maracani, A.; Michieli, U.; Zanuttigh, P. Unsupervised domain adaptation in semantic segmentation: A review. Technologies 2020, 8, 35. [Google Scholar] [CrossRef]
- Csurka, G. Domain adaptation for visual applications: A comprehensive survey. arXiv 2017, arXiv:1702.05374. [Google Scholar]
- Csurka, G.; Volpi, R.; Chidlovskii, B. Unsupervised Domain Adaptation for Semantic Image Segmentation: A Comprehensive Survey. arXiv 2021, arXiv:2112.03241. [Google Scholar]
- Qin, R.; Liu, T. A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability. Remote Sens. 2022, 14, 646. [Google Scholar] [CrossRef]
- Nagananda, N.; Taufique, A.M.N.; Madappa, R.; Jahan, C.S.; Minnehan, B.; Rovito, T.; Savakis, A. Benchmarking Domain Adaptation Methods on Aerial Datasets. Sensors 2021, 21, 8070. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Deng, B.; Tang, H.; Zhang, L.; Jia, K. Unsupervised multi-class domain adaptation: Theory, algorithms, and practice. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2775–2792. [Google Scholar] [CrossRef] [PubMed]
- Gong, J.; Zhang, M.; Hu, X.; Zhang, Z.; Li, Y.; Jiang, L. The design of deep learning framework and model for intelligent remote sensing. Acta Geod. Cartogr. Sin. 2022, 51, 475–487. [Google Scholar]
- Wang, Y.; Zhu, X.X. The sarptical dataset for joint analysis of sar and optical image in dense urban area. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 6840–6843. [Google Scholar]
- Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
- Neumann, M.; Pinto, A.S.; Zhai, X.; Houlsby, N. In-domain representation learning for remote sensing. arXiv 2019, arXiv:1911.06721. [Google Scholar]
- Risojević, V.; Stojnić, V. The role of pre-training in high-resolution remote sensing scene classification. arXiv 2021, arXiv:2111.03690. [Google Scholar]
- Chan, L.; Hosseini, M.S.; Plataniotis, K.N. A comprehensive analysis of weakly-supervised semantic segmentation in different image domains. Int. J. Comput. Vis. 2021, 129, 361–384. [Google Scholar] [CrossRef]
- Mañas, O.; Lacoste, A.; Giro-i Nieto, X.; Vazquez, D.; Rodriguez, P. Seasonal contrast: Unsupervised pre-training from uncurated remote sensing data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Online, 11–17 October 2021; pp. 9414–9423. [Google Scholar]
- Zhou, W.; Newsam, S.; Li, C.; Shao, Z. PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval. ISPRS J. Photogramm. Remote Sens. 2018, 145, 197–209. [Google Scholar] [CrossRef]
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, New York, NY, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226. [Google Scholar] [CrossRef]
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Introducing eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. In Proceedings of the IGARSS 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 204–207. [Google Scholar]
- Diemunsch, J.R.; Wissinger, J. Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: Search technology for a robust ATR. In Algorithms for Synthetic Aperture Radar Imagery V; International Society for Optics and Photonics: Bellingham, WA, USA, 1998; Volume 3370, pp. 481–492. [Google Scholar]
- Wang, K.; Zhang, G.; Leung, H. SAR target recognition based on cross-domain and cross-task transfer learning. IEEE Access 2019, 7, 153391–153399. [Google Scholar] [CrossRef]
- Zhu, X.X.; Hu, J.; Qiu, C.; Shi, Y.; Kang, J.; Mou, L.; Bagheri, H.; Häberle, M.; Hua, Y.; Huang, R.; et al. So2Sat LCZ42: A benchmark dataset for global local climate zones classification. arXiv 2019, arXiv:1912.12171. [Google Scholar]
- Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3974–3983. [Google Scholar]
- Xu, T.; Sun, X.; Diao, W.; Zhao, L.; Fu, K.; Wang, H. FADA: Feature Aligned Domain Adaptive Object Detection in Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
- Li, K.; Wan, G.; Cheng, G.; Meng, L.; Han, J. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 159, 296–307. [Google Scholar] [CrossRef]
- Zhang, W.; Zhu, Y.; Fu, Q. Adversarial deep domain adaptation for multi-band SAR images classification. IEEE Access 2019, 7, 78571–78583. [Google Scholar] [CrossRef]
- FaradSAR Dataset. Available online: https://www.sandia.gov/radar/complex-data/ (accessed on 24 July 2022).
- miniSAR Dataset. Available online: https://www.sandia.gov/radar/complex-data/index.html (accessed on 24 July 2022).
- Azimi, S.M.; Henry, C.; Sommer, L.; Schumann, A.; Vig, E. Skyscapes fine-grained semantic understanding of aerial scenes. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 7393–7403. [Google Scholar]
- Wang, J.; Zheng, Z.; Ma, A.; Lu, X.; Zhong, Y. LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation. arXiv 2021, arXiv:2110.08733. [Google Scholar]
- Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Baillard, C.; Benitez, S.; Breitkopf, U. The ISPRS benchmark on urban object classification and 3D building reconstruction. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 1, 293–298. [Google Scholar] [CrossRef]
- Yan, L.; Fan, B.; Liu, H.; Huo, C.; Xiang, S.; Pan, C. Triplet adversarial domain adaptation for pixel-level classification of VHR remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3558–3573. [Google Scholar] [CrossRef]
- Mnih, V. Machine Learning for Aerial Image Labeling; University of Toronto: Toronto, ON, Canada, 2013. [Google Scholar]
- Shermeyer, J.; Hogan, D.; Brown, J.; Van Etten, A.; Weir, N.; Pacifici, F.; Hansch, R.; Bastidas, A.; Soenen, S.; Bacastow, T.; et al. SpaceNet 6: Multi-sensor all weather mapping dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 16–18 June 2020; pp. 196–197. [Google Scholar]
- Demir, I.; Koperski, K.; Lindenbaum, D.; Pang, G.; Huang, J.; Basu, S.; Hughes, F.; Tuia, D.; Raskar, R. Deepglobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 172–181. [Google Scholar]
- Zhu, Q.; Zhang, Y.; Wang, L.; Zhong, Y.; Guan, Q.; Lu, X.; Zhang, L.; Li, D. A Global Context-aware and Batch-independent Network for road extraction from VHR satellite imagery. ISPRS J. Photogramm. Remote Sens. 2021, 175, 353–365. [Google Scholar] [CrossRef]
- Baumgardner, M.F.; Biehl, L.L.; Landgrebe, D.A. 220 band AVIRIS hyperspectral image data set: 12 June 1992 Indian Pine Test Site 3. Purdue Univ. Res. Repos. 2015, 10, R7RX991C. [Google Scholar]
- Salinas Dataset. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Salinas (accessed on 9 February 2017).
- Botswana Dataset. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Botswana (accessed on 10 February 2010).
- Kennedy Space Center Dataset. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes#Pavia_Centre_and_University (accessed on 29 November 2016).
- Yang, H.L.; Crawford, M.M. Domain adaptation with preservation of manifold geometry for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 543–555. [Google Scholar] [CrossRef]
- Washington DC Mall Dataset. Available online: https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 24 July 2022).
- Sun, Z.; Wang, C.; Wang, H.; Li, J. Learn multiple-kernel SVMs for domain adaptation in hyperspectral data. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1224–1228. [Google Scholar]
- Lin, D.; Xu, G.; Wang, X.; Wang, Y.; Sun, X.; Fu, K. A remote sensing image dataset for cloud removal. arXiv 2019, arXiv:1901.00600. [Google Scholar]
- Schmitt, M.; Hughes, L.H.; Qiu, C.; Zhu, X.X. SEN12MS–A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion. arXiv 2019, arXiv:1906.07789. [Google Scholar] [CrossRef]
- Meraner, A.; Ebel, P.; Zhu, X.X.; Schmitt, M. Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 333–346. [Google Scholar] [CrossRef] [PubMed]
- Liu, S.; Shi, Q. Local climate zone mapping as remote sensing scene classification using deep learning: A case study of metropolitan China. ISPRS J. Photogramm. Remote Sens. 2020, 164, 229–242. [Google Scholar] [CrossRef]
- Zhao, N.; Zhong, Y.; Ma, A. Mapping Local Climate Zones with Circled Similarity Propagation Based Domain Adaptation. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, IEEE, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1377–1380. [Google Scholar]
- Xu, Y.; Ma, F.; Meng, D.; Ren, C.; Leung, Y. A co-training approach to the classification of local climate zones with multi-source data. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 1209–1212. [Google Scholar]
- Li, J.; Xu, Z.; Fu, L.; Zhou, X.; Yu, H. Domain adaptation from daytime to nighttime: A situation-sensitive vehicle detection and traffic flow parameter estimation framework. Transp. Res. Part Emerg. Technol. 2021, 124, 102946. [Google Scholar] [CrossRef]
- Koga, Y.; Miyazaki, H.; Shibasaki, R. A method for vehicle detection in high-resolution satellite images that uses a region-based object detector and unsupervised domain adaptation. Remote Sens. 2020, 12, 575. [Google Scholar] [CrossRef]
- Koga, Y.; Miyazaki, H.; Shibasaki, R. Adapting Vehicle Detector to Target Domain by Adversarial Prediction Alignment. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2341–2344. [Google Scholar]
- Zhang, R.; Newsam, S.; Shao, Z.; Huang, X.; Wang, J.; Li, D. Multi-scale adversarial network for vehicle detection in UAV imagery. ISPRS J. Photogramm. Remote Sens. 2021, 180, 283–295. [Google Scholar] [CrossRef]
- Wu, W.; Zheng, J.; Fu, H.; Li, W.; Yu, L. Cross-regional oil palm tree detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 56–57. [Google Scholar]
- Wu, W.; Zheng, J.; Li, W.; Fu, H.; Yuan, S.; Yu, L. Domain adversarial neural network-based oil palm detection using high-resolution satellite images. Proc. Autom. Target Recognit XXX SPIE 2020, 11394, 29–37. [Google Scholar]
- Zheng, J.; Wu, W.; Zhao, Y.; Fu, H. Transresnet: Transferable Resnet For Domain Adaptation. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; pp. 764–768. [Google Scholar]
- Zheng, J.; Fu, H.; Li, W.; Wu, W.; Yu, L.; Yuan, S.; Tao, W.Y.W.; Pang, T.K.; Kanniah, K.D. Growing status observation for oil palm trees using Unmanned Aerial Vehicle (UAV) images. ISPRS J. Photogramm. Remote Sens. 2021, 173, 95–121. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision; Springer: Amsterdam, The Netherlands, 2016; pp. 21–37. [Google Scholar]
- Chen, H.; Wu, C.; Du, B.; Zhang, L. DSDANet: Deep Siamese domain adaptation convolutional neural network for cross-domain change detection. arXiv 2020, arXiv:2006.09225. [Google Scholar]
- Zhang, C.; Feng, Y.; Hu, L.; Tapete, D.; Pan, L.; Liang, Z.; Cigna, F.; Yue, P. A domain adaptation neural network for change detection with heterogeneous optical and SAR remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2022, 109, 102769. [Google Scholar] [CrossRef]
- Vega, P.J.S. Deep Learning-Based Domain Adaptation for Change Detection in Tropical Forests. Ph.D. Thesis, PUC-Rio, Rio de Janeiro, Brazil, 2021. [Google Scholar]
- Soto, P.; Costa, G.; Feitosa, R.; Happ, P.; Ortega, M.; Noa, J.; Almeida, C.; Heipke, C. Domain adaptation with cyclegan for change detection in the Amazon forest. ISPRS Arch. 2020, 43, 1635–1643. [Google Scholar] [CrossRef]
- Soto, P.J.; Costa, G.A.; Feitosa, R.Q.; Ortega, M.X.; Bermudez, J.D.; Turnes, J.N. Domain-Adversarial Neural Networks for Deforestation Detection in Tropical Forests. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Vega, P.J.S.; da Costa, G.A.O.P.; Feitosa, R.Q.; Adarme, M.X.O.; de Almeida, C.A.; Heipke, C.; Rottensteiner, F. An unsupervised domain adaptation approach for change detection and its application to deforestation mapping in tropical biomes. ISPRS J. Photogramm. Remote Sens. 2021, 181, 113–128. [Google Scholar] [CrossRef]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Zhang, Y.; Deng, M.; He, F.; Guo, Y.; Sun, G.; Chen, J. FODA: Building change detection in high-resolution remote sensing images based on feature–output space dual-alignment. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8125–8134. [Google Scholar] [CrossRef]
- Schenkel, F.; Middelmann, W. Domain adaptation for semantic segmentation of aerial imagery using cycle-consistent adversarial networks. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1448–1451. [Google Scholar]
- Shi, T.; Li, Y.; Zhang, Y. Rotation Consistency-Preserved Generative Adversarial Networks for Cross-Domain Aerial Image Semantic Segmentation. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 8668–8671. [Google Scholar]
- Fang, B.; Kou, R.; Pan, L.; Chen, P. Category-sensitive domain adaptation for land cover mapping in aerial scenes. Remote Sens. 2019, 11, 2631. [Google Scholar] [CrossRef]
- Lu, X.; Zhong, Y.; Zheng, Z.; Wang, J. Cross-domain road detection based on global-local adversarial learning framework from very high resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2021, 180, 296–312. [Google Scholar] [CrossRef]
- Zhao, D.; Li, J.; Yuan, B.; Shi, Z. V2RNet: An Unsupervised Semantic Segmentation Algorithm for Remote Sensing Images via Cross-Domain Transfer Learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 4676–4679. [Google Scholar]
- Shen, W.; Wang, Q.; Jiang, H.; Li, S.; Yin, J. Unsupervised Domain Adaptation for Semantic Segmentation via Self-Supervision. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2747–2750. [Google Scholar]
- Zhang, L.; Lan, M.; Zhang, J.; Tao, D. Stagewise unsupervised domain adaptation with adversarial self-training for road segmentation of remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–3. [Google Scholar] [CrossRef]
- Wang, J.; Ma, A.; Zhong, Y.; Zheng, Z.; Zhang, L. Cross-sensor domain adaptation for high spatial resolution urban land-cover mapping: From airborne to spaceborne imagery. Remote Sens. Environ. 2022, 277, 113058. [Google Scholar] [CrossRef]
- Shao, Y.; Li, L.; Ren, W.; Gao, C.; Sang, N. Domain adaptation for image dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–18 June 2020; pp. 2808–2817. [Google Scholar]
- Yang, J.; Chen, H.; Xu, Y.; Shi, Z.; Luo, R.; Xie, L.; Su, R. Domain adaptation for degraded remote scene classification. In Proceedings of the 2020 16th International Conference on Control, Automation, Robotics and Vision (ICARCV), Shenzhen, China, 13–14 December 2020; pp. 111–117. [Google Scholar]
- Mehta, A.; Sinha, H.; Mandal, M.; Narang, P. Domain-aware unsupervised hyperspectral reconstruction for aerial image dehazing. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Online, 5–9 January 2021; pp. 413–422. [Google Scholar]
- Ebel, P.; Schmitt, M.; Zhu, X.X. Cloud removal in unpaired Sentinel-2 imagery using cycle-consistent GAN and SAR-optical data fusion. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2065–2068. [Google Scholar]
- Bengana, N.; Heikkilä, J. Improving land cover segmentation across satellites using domain adaptation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1399–1410. [Google Scholar] [CrossRef]
- Sekrecka, A.; Wierzbicki, D.; Kedzierski, M. Influence of the sun position and platform orientation on the quality of imagery obtained from unmanned aerial vehicles. Remote Sens. 2020, 12, 1040. [Google Scholar] [CrossRef]
- Zhang, Y.; Feng, Z.; Shi, D. The influence of satellite observation direction on remote sensing image. J. Remote Sens. 2007, 11, 433. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 580–587. [Google Scholar]
- Ge, W.; Yu, Y. Borrowing treasures from the wealthy: Deep transfer learning through selective joint fine-tuning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1086–1095. [Google Scholar]
- Mazza, A.; Sepe, P.; Poggi, G.; Scarpa, G. Cloud Segmentation of Sentinel-2 Images Using Convolutional Neural Network with Domain Adaptation. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 7236–7239. [Google Scholar]
- Zhang, Y.; Qiu, Z.; Yao, T.; Liu, D.; Mei, T. Fully convolutional adaptation networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–21 June 2018; pp. 6810–6818. [Google Scholar]
- Shi, L.; Wang, Z.; Pan, B.; Shi, Z. An end-to-end network for remote sensing imagery semantic segmentation via joint pixel-and representation-level domain adaptation. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1896–1900. [Google Scholar] [CrossRef]
- Tuia, D.; Munoz-Mari, J.; Gomez-Chova, L.; Malo, J. Graph Matching for Adaptation in Remote Sensing. IEEE Trans. Geosci. Remote Sens. 2013, 51, 329–341. [Google Scholar] [CrossRef]
- Rakwatin, P.; Takeuchi, W.; Yasuoka, Y. Restoration of Aqua MODIS band 6 using histogram matching and local least squares fitting. IEEE Trans. Geosci. Remote Sens. 2008, 47, 613–627. [Google Scholar] [CrossRef]
- Yaras, C.; Huang, B.; Bradbury, K.; Malof, J.M. Randomized Histogram Matching: A Simple Augmentation for Unsupervised Domain Adaptation in Overhead Imagery. arXiv 2021, arXiv:2104.14032. [Google Scholar]
- Agarwal, V.; Abidi, B.R.; Koschan, A.; Abidi, M.A. An overview of color constancy algorithms. J. Pattern Recognit. Res. 2006, 1, 42–54. [Google Scholar] [CrossRef]
- Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar] [CrossRef]
- Huang, X.; Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1501–1510. [Google Scholar]
- Khalel, A.; Tasar, O.; Charpiat, G.; Tarabalka, Y. Multi-task deep learning for satellite image pansharpening and segmentation. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Yokohama, Japan, 28 July–2 August 2019; pp. 4869–4872. [Google Scholar]
- Murez, Z.; Kolouri, S.; Kriegman, D.; Ramamoorthi, R.; Kim, K. Image to image translation for domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4500–4509. [Google Scholar]
- Park, T.; Efros, A.A.; Zhang, R.; Zhu, J.Y. Contrastive learning for unpaired image-to-image translation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 319–345. [Google Scholar]
- Han, J.; Shoeiby, M.; Petersson, L.; Armin, M.A. Dual Contrastive Learning for Unsupervised Image-to-Image Translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 746–755. [Google Scholar]
- Taigman, Y.; Polyak, A.; Wolf, L. Unsupervised cross-domain image generation. arXiv 2016, arXiv:1611.02200. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Yi, Z.; Zhang, H.; Tan, P.; Gong, M. Dualgan: Unsupervised dual learning for image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2849–2857. [Google Scholar]
- Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J.Y.; Isola, P.; Saenko, K.; Efros, A.; Darrell, T. Cycada: Cycle-consistent adversarial domain adaptation. In Proceedings of the International Conference on Machine Learning, PMLR, Stockholm, Sweden, 10–15 June 2018; pp. 1989–1998. [Google Scholar]
- Zhu, C.; Zhao, D.; Qi, J.; Qi, X.; Shi, Z. Cross-Domain Transfer for Ship Instance Segmentation in SAR Images. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 2206–2209. [Google Scholar]
- Benaim, S.; Wolf, L. One-sided unsupervised domain mapping. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
- Sun, B.; Feng, J.; Saenko, K. Return of frustratingly easy domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
- Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A. A kernel two-sample test. J. Mach. Learn. Res. 2012, 13, 723–773. [Google Scholar]
- Pan, S.J.; Tsang, I.W.; Kwok, J.T.; Yang, Q. Domain adaptation via transfer component analysis. IEEE Trans. Neural Netw. 2010, 22, 199–210. [Google Scholar] [CrossRef]
- Wang, Z.; Du, B.; Shi, Q.; Tu, W. Domain adaptation with discriminative distribution and manifold embedding for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1155–1159. [Google Scholar] [CrossRef]
- Li, Z.; Tang, X.; Li, W.; Wang, C.; Liu, C.; He, J. A two-stage deep domain adaptation method for hyperspectral image classification. Remote Sens. 2020, 12, 1054. [Google Scholar] [CrossRef]
- Liu, Z.; Ma, L.; Du, Q. Class-wise distribution adaptation for unsupervised classification of hyperspectral remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 508–521. [Google Scholar] [CrossRef]
- Sun, S.; Gu, Y.; Ren, M. Fine-Grained Ship Recognition from the Horizontal View Based on Domain Adaptation. Sensors 2022, 22, 3243. [Google Scholar] [CrossRef] [PubMed]
- Zhu, Y.; Zhuang, F.; Wang, J.; Ke, G.; Chen, J.; Bian, J.; Xiong, H.; He, Q. Deep subdomain adaptation network for image classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1713–1722. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Ma, L.; Chen, M.; Du, Q. Joint correlation alignment-based graph neural network for domain adaptation of multitemporal hyperspectral remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3170–3184. [Google Scholar] [CrossRef]
- Haeusser, P.; Frerix, T.; Mordvintsev, A.; Cremers, D. Associative domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2765–2773. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
- Chen, M.; Ma, L.; Wang, W.; Du, Q. Augmented associative learning-based domain adaptation for classification of hyperspectral remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6236–6248. [Google Scholar] [CrossRef]
- Damodaran, B.B.; Kellenberger, B.; Flamary, R.; Tuia, D.; Courty, N. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 447–463. [Google Scholar]
- Peng, J.; Sun, W.; Ma, L.; Du, Q. Discriminative transfer joint matching for domain adaptation in hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 972–976. [Google Scholar] [CrossRef]
- Mengqiu, X.; Ming, W.; Jun, G.; Zhang, C.; Yubo, W.; Zhanyu, M. Sea fog detection based on unsupervised domain adaptation. Chin. J. Aeronaut. 2021, 35, 415–425. [Google Scholar]
- Wang, X.; Li, Y.; Cheng, Y. Hyperspectral image classification based on unsupervised heterogeneous domain adaptation cyclegan. Chin. J. Electron. 2020, 29, 608–614. [Google Scholar] [CrossRef]
- Zheng, J.; Fu, H.; Li, W.; Wu, W.; Zhao, Y.; Dong, R.; Yu, L. Cross-regional oil palm tree counting and detection via a multi-level attention domain adaptation network. ISPRS J. Photogramm. Remote Sens. 2020, 167, 154–177. [Google Scholar] [CrossRef]
- Deng, X.; Yang, H.L.; Makkar, N.; Lunga, D. Large scale unsupervised domain adaptation of segmentation networks with adversarial learning. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Yokohama, Japan, 28 July–2 August 2019; pp. 4955–4958. [Google Scholar]
- Zhang, T.; Li, W.; Ping, F.; Zhe, e. Adaptive Object Detection for Multi-source Remote Sensing Images. J. Signal Process. 2020, 36, 1407–1414. (In Chinese) [Google Scholar]
- Vu, T.H.; Jain, H.; Bucher, M.; Cord, M.; Pérez, P. Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2517–2526. [Google Scholar]
- Lian, Q.; Lv, F.; Duan, L.; Gong, B. Constructing self-motivated pyramid curriculums for cross-domain semantic segmentation: A non-adversarial approach. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 6758–6767. [Google Scholar]
- Wittich, D.; Rottensteiner, F. Appearance based deep domain adaptation for the classification of aerial images. ISPRS J. Photogramm. Remote Sens. 2021, 180, 82–102. [Google Scholar] [CrossRef]
- Wu, Z.; Han, X.; Lin, Y.L.; Uzunbas, M.G.; Goldstein, T.; Lim, S.N.; Davis, L.S. Dcan: Dual channel-wise alignment networks for unsupervised scene adaptation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 518–534. [Google Scholar]
- Li, Y.; Wang, N.; Shi, J.; Liu, J.; Hou, X. Revisiting batch normalization for practical domain adaptation. arXiv 2016, arXiv:1603.04779. [Google Scholar]
- Chen, Y.C.; Lin, Y.Y.; Yang, M.H.; Huang, J.B. Crdoco: Pixel-level domain transfer with cross-domain consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1791–1800. [Google Scholar]
- Pan, F.; Shin, I.; Rameau, F.; Lee, S.; Kweon, I.S. Unsupervised intra-domain adaptation for semantic segmentation through self-supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–18 June 2020; pp. 3764–3773. [Google Scholar]
- Deng, X.; Zhu, Y.; Tian, Y.; Newsam, S. Scale Aware Adaptation for Land-Cover Classification in Remote Sensing Imagery. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Online, 5–9 January 2021; pp. 2160–2169. [Google Scholar]
- Zhang, J.; Liu, J.; Shi, L.; Pan, B.; Xu, X. An open set domain adaptation network based on adversarial learning for remote sensing image scene classification. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1365–1368. [Google Scholar]
- Gu, X.; Sun, J.; Xu, Z. Spherical space domain adaptation with robust pseudo-label loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–18 June 2020; pp. 9101–9110. [Google Scholar]
- Zhao, S.; Zhang, Z.; Zhang, T.; Guo, W.; Luo, Y. Transferable SAR Image Classification Crossing Different Satellites under Open Set Condition. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Al Rahhal, M.M.; Bazi, Y.; Abdullah, T.; Mekhalfi, M.L.; AlHichri, H.; Zuair, M. Learning a multi-branch neural network from multiple sources for knowledge adaptation in remote sensing imagery. Remote Sens. 2018, 10, 1890. [Google Scholar] [CrossRef]
- Al Rahhal, M.M.; Bazi, Y.; Al-Hwiti, H.; Alhichri, H.; Alajlan, N. Adversarial learning for knowledge adaptation from multiple remote sensing sources. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1451–1455. [Google Scholar] [CrossRef]
- Xu, R.; Chen, Z.; Zuo, W.; Yan, J.; Lin, L. Deep cocktail network: Multi-source unsupervised domain adaptation with category shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3964–3973. [Google Scholar]
- Elshamli, A.; Taylor, G.W.; Areibi, S. Multisource domain adaptation for remote sensing using deep neural networks. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3328–3340. [Google Scholar] [CrossRef]
- Xia, G.S.; Yang, W.; Delon, J.; Gousseau, Y.; Sun, H.; Maître, H. Structural high-resolution satellite image indexing. In Proceedings of the ISPRS TC VII Symposium—100 Years ISPRS, Vienna, Austria, 5–7 July 2010; pp. 298–303. [Google Scholar]
- Andrychowicz, M.; Denil, M.; Gomez, S.; Hoffman, M.W.; Pfau, D.; Schaul, T.; Shillingford, B.; De Freitas, N. Learning to learn by gradient descent by gradient descent. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar] [CrossRef]
- Zheng, J.; Wu, W.; Yuan, S.; Zhao, Y.; Li, W.; Zhang, L.; Dong, R.; Fu, H. A Two-Stage Adaptation Network (TSAN) for Remote Sensing Scene Classification in Single-Source-Mixed-Multiple-Target Domain Adaptation (S2M2T DA) Scenarios. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13. [Google Scholar] [CrossRef]
- Wei, H.; Ma, L.; Liu, Y.; Du, Q. Combining multiple classifiers for domain adaptation of remote sensing image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1832–1847. [Google Scholar] [CrossRef]
- Ghifary, M.; Kleijn, W.B.; Zhang, M.; Balduzzi, D. Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE International Conference on Computer Vision, Boston, MA, USA, 8–10 June 2015; pp. 2551–2559. [Google Scholar]
- Li, H.; Pan, S.J.; Wang, S.; Kot, A.C. Domain generalization with adversarial feature learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5400–5409. [Google Scholar]
- Zheng, J.; Wu, W.; Yuan, S.; Fu, H.; Li, W.; Yu, L. Multisource-domain generalization-based oil palm tree detection using very-high-resolution (vhr) satellite images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Persello, C.; Bruzzone, L. Relevant and invariant feature selection of hyperspectral images for domain generalization. In Proceedings of the 2014 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Quebec City, QC, Canada, 13–18 July 2014; pp. 3562–3565. [Google Scholar]
- Wang, L.; Li, R.; Duan, C.; Zhang, C.; Meng, X.; Fang, S. A novel transformer based semantic segmentation scheme for fine-resolution remote sensing images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
Type | Method | Ref | Baseline | Epochs | V9→P5 (IRRG→RGB) F1 | V9→P5 (IRRG→RGB) mIoU | P5→V9 (RGB→IRRG) F1 | P5→V9 (RGB→IRRG) mIoU | P5→V9 (IRGB→IRRG) F1 | P5→V9 (IRGB→IRRG) mIoU
---|---|---|---|---|---|---|---|---|---|---
Source Only | - | [15] | BiSeNet | 80 | - | - | - | - | 0.32 | 0.17
Source Only | - | [16] | BiSeNet | - | - | - | 0.287 | 0.167 | 0.438 | 0.245
Source Only | - | [16] | Deeplab v3+ | - | - | - | 0.449 | 0.245 | 0.491 | 0.253
GT | CycleGAN | [118] | Deeplab | - | 0.270 | 0.215 | 0.298 | 0.233 | - | -
GT | Method [15] | - | BiSeNet | 80 | - | - | - | - | 0.49 | 0.30
GT | UST-DG [16] | - | Deeplab v3+ | - | - | - | 0.509 | 0.359 | 0.606 | 0.416
GT | PRC-GAN [117] | - | Deeplab v3+ | 45 | - | - | 0.561 | 0.407 | 0.661 | 0.482
GT | Method [116] | - | SegNet | - | - | - | 0.7740 | 0.6132 | - | -
AT | CyCADA | [118] | Deeplab | - | 0.433 | 0.326 | 0.452 | 0.363 | - | -
AT | Method [31] | - | Deeplab v3+ | - | - | 0.3870 | - | - | - | -
AT | AdaptSegNet | [117] | - | - | - | - | 0.401 | 0.321 | 0.523 | 0.352
AT | CLAN | [118] | Deeplab | - | 0.487 | 0.408 | 0.517 | 0.426 | - | -
AT | CsDA [118] | - | Deeplab | 200 | 0.528 | 0.423 | 0.545 | 0.449 | - | -
ST | CBST | [118] | Deeplab | - | 0.452 | 0.374 | 0.460 | 0.388 | - | -
HT | Method [45] | - | MA-FCN | - | - | - | - | - | - | 0.437
HT | DACST [49] | - | VGG-16 | - | - | - | - | - | - | 0.444
HT | TriADA [81] | - | Deeplab v3 | - | - | - | 0.656 | 0.497 | 0.698 | 0.551
HT | TriADA-CAST [81] | - | Deeplab v3 | - | - | - | 0.665 | 0.514 | 0.712 | 0.568
HT | FCAN [174] | - | - | 50 | - | - | 0.669 | 0.535 | - | -
Other | SEANet | [117] | - | - | - | - | 0.468 | 0.278 | 0.557 | 0.377
Target Only | - | [195] | Deeplab v3+ | - | 0.9212 | 0.8432 | - | - | 0.8957 | 0.8147
Target Only | - | [195] | DC-Swin | - | 0.9325 | 0.8756 | - | - | 0.9071 | 0.8322
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).