Two-Way Generation of High-Resolution EO and SAR Images via Dual Distortion-Adaptive GANs
Abstract
1. Introduction
- We construct a new benchmark high-resolution (0.25 m spatial resolution) EO-SAR dataset, SN6-SAROPT, which comprises over 700 non-overlapping image pairs (Capella Space’s X-band quad-pol SAR and Maxar WorldView 2 EO imagery) covering the port of Rotterdam, the Netherlands;
- We present a GAN-based I2I algorithm for EO-SAR images with a distortion-adaptive module that handles the nonlinear distortions caused by the different imaging characteristics of optical and radar sensors (a sketch of one possible realization is given after this list). To the best of our knowledge, this is the first work that models the nonlinear distortions between two imaging domains via a trainable network for remote sensing applications;
- Extensive experiments on both low-resolution and high-resolution datasets demonstrate the superiority of the proposed method on both S2E and E2S tasks, especially for high-resolution remote sensing data over urban areas.
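The distortion-adaptive (DA) module is only described at a high level in this list, so the following is a minimal, hypothetical PyTorch sketch of one way such a module could be realized: a small network predicts a dense displacement field and warps the translated image, in the spirit of the spatial transformer and deformable-registration networks cited in the references (Jaderberg et al.; Balakrishnan et al.). The layer sizes, input arrangement, and the dense-flow formulation are assumptions, not the authors' implementation.

```python
# Hypothetical distortion-adaptive warping block (PyTorch).
# NOT the paper's exact module; a sketch in the spirit of STN / VoxelMorph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistortionAdaptiveWarp(nn.Module):
    """Predicts a dense displacement field from an image pair and warps the
    translated image so it better matches the geometry of the other domain."""
    def __init__(self, in_channels=6, hidden=32):
        super().__init__()
        self.flow_net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, 3, padding=1), nn.Tanh(),  # (dx, dy) offsets in [-1, 1]
        )

    def forward(self, translated, reference):
        # Estimate per-pixel offsets from the concatenated image pair (assumed 3+3 channels).
        flow = self.flow_net(torch.cat([translated, reference], dim=1))
        b, _, h, w = flow.shape
        # Identity sampling grid in normalized coordinates [-1, 1].
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=flow.device),
            torch.linspace(-1, 1, w, device=flow.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Limit the warp to a fraction of the image extent and resample.
        offset = 0.1 * flow.permute(0, 2, 3, 1)
        return F.grid_sample(translated, grid + offset, align_corners=True)
```

In a full pipeline, a block of this kind would be trained jointly with the generators so that geometric misalignment between the two domains is absorbed by the warp rather than penalized by the pixel-wise losses.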
2. Related Works
2.1. I2I Translation in Remote Sensing
2.2. EO-SAR Datasets
2.2.1. SEN1-2
2.2.2. SAR2Opt
3. Construction of the Novel Dataset SN6-SAROPT
- A full Maxar WorldView 2 optical image of size 22,800 × 16,202 and spatial resolution of 0.5 m, without cropping;
- A total of 202 overlapping SAR image strips, each of size ∼2800 × 40,000 and a spatial resolution of 0.25 m. Each strip contains four channels of SAR intensity information, i.e., HH, HV, VH, and VV, and two channels derived from the Pauli polarimetric decomposition, i.e., Alpha2 and Beta2 (see the Pauli decomposition sketch below).
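For readers unfamiliar with the Pauli channels mentioned above, the NumPy sketch below shows how the first two Pauli components are conventionally obtained from the HH and VV scattering elements. The function name and the squared-magnitude reading of "Alpha2"/"Beta2" are assumptions; the actual dataset preprocessing may include additional calibration and multi-looking steps.

```python
# Minimal sketch of the Pauli decomposition behind the Alpha2/Beta2 channels.
# Assumes calibrated complex single-look HH and VV images; not the exact SpaceNet 6 pipeline.
import numpy as np

def pauli_alpha2_beta2(s_hh: np.ndarray, s_vv: np.ndarray):
    """s_hh, s_vv: complex arrays of identical shape (H, W).
    Returns the squared magnitudes of the first two Pauli components."""
    alpha = (s_hh + s_vv) / np.sqrt(2.0)   # odd-bounce (surface-like) component
    beta = (s_hh - s_vv) / np.sqrt(2.0)    # even-bounce (double-bounce) component
    return np.abs(alpha) ** 2, np.abs(beta) ** 2

# Toy usage with random complex data standing in for calibrated SLC channels.
rng = np.random.default_rng(0)
s_hh = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
s_vv = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
alpha2, beta2 = pauli_alpha2_beta2(s_hh, s_vv)
```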
3.1. SAR Image Processing
3.2. EO-SAR Matching
4. Methodology
4.1. Two-Way Distortion-Adaptive Module
4.2. Overall Training Scheme
5. Experiments
5.1. Experiment Setup
5.2. Evaluation Metrics
- Peak signal-to-noise ratio (PSNR): Based on the average squared pixel-wise distance between two images $x$ and $y$ of size $H \times W$, also known as the mean-squared error (MSE). PSNR is defined as:
$$\mathrm{MSE}(x, y) = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(x_{ij} - y_{ij}\right)^{2}, \qquad \mathrm{PSNR}(x, y) = 10\log_{10}\frac{L^{2}}{\mathrm{MSE}(x, y)},$$
where $L$ is the maximum possible pixel value (255 for 8-bit images); higher is better;
- Structural SIMilarity (SSIM): Image similarity is measured by comparing the contrast, luminance, and structural information of the two images:
$$\mathrm{SSIM}(x, y) = \frac{\left(2\mu_{x}\mu_{y} + c_{1}\right)\left(2\sigma_{xy} + c_{2}\right)}{\left(\mu_{x}^{2} + \mu_{y}^{2} + c_{1}\right)\left(\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2}\right)},$$
where $\mu_{x}$, $\mu_{y}$, $\sigma_{x}^{2}$, $\sigma_{y}^{2}$, and $\sigma_{xy}$ are local means, variances, and covariance, and $c_{1}$, $c_{2}$ are small constants that stabilize the division; higher is better;
- Fréchet Inception Distance (FID): FID is the Wasserstein-2 (Fréchet) distance between Gaussian fits to the feature embeddings of real and generated images, extracted from the last pooling layer of an Inception-V3 model pre-trained on ImageNet:
$$\mathrm{FID} = \left\lVert \mu_{x} - \mu_{y} \right\rVert_{2}^{2} + \mathrm{Tr}\!\left(\Sigma_{x} + \Sigma_{y} - 2\left(\Sigma_{x}\Sigma_{y}\right)^{1/2}\right),$$
where $(\mu_{x}, \Sigma_{x})$ and $(\mu_{y}, \Sigma_{y})$ are the means and covariances of the two feature sets; lower is better;
- Learned perceptual image patch similarity (LPIPS): LPIPS measures the perceptual distance between two images as a weighted distance between their deep feature activations extracted by a pre-trained network; lower values indicate higher perceptual similarity. A short computational sketch of these metrics follows.
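As a reference point, the sketch below computes PSNR and SSIM with scikit-image and evaluates the Fréchet (Wasserstein-2) distance from the FID formula above on precomputed feature statistics. It is a minimal illustration, not the evaluation code used in the paper; in practice FID is computed on Inception-V3 pool features and LPIPS requires a pre-trained perceptual network (e.g., the lpips package).

```python
# Illustrative metric computation; random data stands in for generated/reference images.
import numpy as np
from scipy import linalg
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frechet_distance(feats_x: np.ndarray, feats_y: np.ndarray) -> float:
    """Wasserstein-2 distance between Gaussians fitted to two feature sets of shape (N, D)."""
    mu_x, mu_y = feats_x.mean(axis=0), feats_y.mean(axis=0)
    cov_x = np.cov(feats_x, rowvar=False)
    cov_y = np.cov(feats_y, rowvar=False)
    cov_mean, _ = linalg.sqrtm(cov_x @ cov_y, disp=False)
    cov_mean = cov_mean.real  # discard tiny imaginary parts from numerical error
    return float(np.sum((mu_x - mu_y) ** 2) + np.trace(cov_x + cov_y - 2.0 * cov_mean))

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
img_b = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
psnr = peak_signal_noise_ratio(img_a, img_b, data_range=255)
ssim = structural_similarity(img_a, img_b, channel_axis=-1, data_range=255)
fid_like = frechet_distance(rng.standard_normal((500, 64)), rng.standard_normal((500, 64)))
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.3f}, Frechet distance={fid_like:.2f}")
```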
5.3. Quantitative Evaluations
5.4. Qualitative Evaluations
5.5. Ablation Study
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
EO | Electro-optical |
SAR | Synthetic Aperture Radar |
I2I | Image-to-image translation |
GAN | Generative Adversarial Network |
S2E | SAR to EO |
E2S | EO to SAR |
GSD | Ground Sample Distance |
ESA | European Space Administration |
DLR | German Aerospace Center |
IW | Interferometric Wide |
E-SN6 | Expanded version of the SpaceNet 6 dataset |
DA | Distortion-adaptive |
FR-IQA | Full-reference Image Quality Assessment |
NR-IQA | No-reference Image Quality Assessment |
PSNR | Peak Signal-to-Noise Ratio |
SSIM | Structural SIMilarity |
FID | Fréchet Inception Distance |
LPIPS | Learned Perceptual Image Patch Similarity |
References
- Hansen, M.C.; Defries, R.S.; Townshend, J.R.G.; Sohlberg, R. Global Land Cover Classification at 1 Km Spatial Resolution Using a Classification Tree Approach. Int. J. Remote Sens. 2000, 21, 1331–1364. [Google Scholar] [CrossRef]
- Zhong, Y.; Ma, A.; Ong, Y.S.; Zhu, Z.; Zhang, L. Computational Intelligence in Optical Remote Sensing Image Processing. Appl. Soft Comput. 2018, 64, 75–93. [Google Scholar] [CrossRef]
- Li, K.; Wan, G.; Cheng, G.; Meng, L.; Han, J. Object Detection in Optical Remote Sensing Images: A Survey and a New Benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 159, 296–307. [Google Scholar] [CrossRef]
- He, X.; Zhou, Y.; Zhao, J.; Zhang, D.; Yao, R.; Xue, Y. Swin Transformer Embedding UNet for Remote Sensing Image Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4408715. [Google Scholar] [CrossRef]
- Chen, H.; Qi, Z.; Shi, Z. Remote Sensing Image Change Detection with Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4408715. [Google Scholar] [CrossRef]
- Pan, X.; Xie, F.; Jiang, Z.; Yin, J. Haze Removal for a Single Remote Sensing Image Based on Deformed Haze Imaging Model. IEEE Signal Process. Lett. 2015, 22, 1806–1810. [Google Scholar] [CrossRef]
- Jiang, H.; Lu, N. Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images. Remote Sens. 2018, 10, 945. [Google Scholar] [CrossRef] [Green Version]
- Guo, Q.; Hu, H.M.; Li, B. Haze and Thin Cloud Removal Using Elliptical Boundary Prior for Remote Sensing Image. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9124–9137. [Google Scholar] [CrossRef]
- Darbaghshahi, F.N.; Mohammadi, M.R.; Soryani, M. Cloud Removal in Remote Sensing Images Using Generative Adversarial Networks and SAR-to-Optical Image Translation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4105309. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar] [CrossRef]
- Sumbul, G.; Charfuelan, M.; Demir, B.; Markl, V. Bigearthnet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5901–5904. [Google Scholar] [CrossRef] [Green Version]
- Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very Deep Convolutional Neural Networks for Complex Land Cover Mapping Using Multispectral Remote Sensing Imagery. Remote Sens. 2018, 10, 1119. [Google Scholar] [CrossRef] [Green Version]
- Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep Learning-Enabled Medical Computer Vision. NPJ Digit. Med. 2021, 4, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. UNETR: Transformers for 3D Medical Image Segmentation. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; IEEE: Waikoloa, HI, USA, 2022; pp. 1748–1758. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661. [Google Scholar] [CrossRef]
- Fuentes Reyes, M.; Auer, S.; Merkle, N.; Henry, C.; Schmitt, M. SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks—Optimization, Opportunities and Limits. Remote Sens. 2019, 11, 2067. [Google Scholar] [CrossRef] [Green Version]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. arXiv 2018, arXiv:1611.07004. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. arXiv 2020, arXiv:1703.10593. [Google Scholar]
- Yang, X.; Wang, Z.; Zhao, J.; Yang, D. FG-GAN: A Fine-Grained Generative Adversarial Network for Unsupervised SAR-to-Optical Image Translation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5621211. [Google Scholar] [CrossRef]
- Ao, D.; Dumitru, C.O.; Schwarz, G.; Datcu, M. Dialectical GAN for SAR Image Translation: From Sentinel-1 to TerraSAR-X. Remote Sens. 2018, 10, 1597. [Google Scholar] [CrossRef] [Green Version]
- Shermeyer, J. SpaceNet 6: Expanded Dataset Release. 2020. Available online: https://medium.com/the-downlinq/spacenet-6-expanded-dataset-release-e1a7ddaf030 (accessed on 27 February 2023).
- Wang, L.; Xu, X.; Yu, Y.; Yang, R.; Gui, R.; Xu, Z.; Pu, F. SAR-to-Optical Image Translation Using Supervised Cycle-Consistent Adversarial Networks. IEEE Access 2019, 7, 129136–129149. [Google Scholar] [CrossRef]
- Zhao, Y.; Celik, T.; Liu, N.; Li, H.C. A Comparative Analysis of GAN-Based Methods for SAR-to-Optical Image Translation. IEEE Geosci. Remote Sens. Lett. 2022, 19, 3512605. [Google Scholar] [CrossRef]
- Wang, Z.; Ma, Y.; Zhang, Y. Hybrid cGAN: Coupling Global and Local Features for SAR-to-Optical Image Translation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5236016. [Google Scholar] [CrossRef]
- Tan, D.; Liu, Y.; Li, G.; Yao, L.; Sun, S.; He, Y. Serial GANs: A Feature-Preserving Heterogeneous Remote Sensing Image Transformation Model. Remote Sens. 2021, 13, 3968. [Google Scholar] [CrossRef]
- Dellinger, F.; Delon, J.; Gousseau, Y.; Michel, J.; Tupin, F. SAR-SIFT: A SIFT-like Algorithm for Applications on SAR Images. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 3478–3481. [Google Scholar] [CrossRef] [Green Version]
- Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y.; Liu, L. Remote Sensing Image Registration with Modified SIFT and Enhanced Feature Matching. IEEE Geosci. Remote Sens. Lett. 2017, 14, 3–7. [Google Scholar] [CrossRef]
- Cui, S.; Ma, A.; Zhang, L.; Xu, M.; Zhong, Y. MAP-Net: SAR and Optical Image Matching via Image-Based Convolutional Network with Attention Mechanism and Spatial Pyramid Aggregated Pooling. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1000513. [Google Scholar] [CrossRef]
- Cohen, J.P.; Luck, M.; Honari, S. How to Cure Cancer (in Images) with Unpaired Image Translation. 2018. Available online: https://openreview.net/pdf?id=SJIA3pijM (accessed on 27 February 2023).
- Moriakov, N.; Adler, J.; Teuwen, J. Kernel of CycleGAN as a Principle Homogeneous Space. arXiv 2020, arXiv:2001.09061. [Google Scholar]
- Kong, L.; Lian, C.; Huang, D.; Li, Z.; Hu, Y.; Zhou, Q. Breaking the Dilemma of Medical Image-to-image Translation. arXiv 2021, arXiv:2110.06465. [Google Scholar]
- Sentinel-1-Overview-Sentinel Online-Sentinel Online. Available online: https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/overview (accessed on 27 February 2023).
- Schmitt, M.; Hughes, L.H.; Zhu, X.X. The SEN1-2 dataset for deep learning in sar-optical data fusion. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-1, 141–146. [Google Scholar] [CrossRef] [Green Version]
- DLR—About the Earth-Observation Satellite TerraSAR-X. Available online: https://www.dlr.de/content/en/articles/missions-projects/terrasar-x/terrasar-x-earth-observation-satellite.html (accessed on 27 February 2023).
- Vreugdenhil, M.; Wagner, W.; Bauer-Marschallinger, B.; Pfeil, I.; Teubner, I.; Rüdiger, C.; Strauss, P. Sensitivity of Sentinel-1 Backscatter to Vegetation Dynamics: An Austrian Case Study. Remote Sens. 2018, 10, 1396. [Google Scholar] [CrossRef] [Green Version]
- Mandal, D.; Kumar, V.; Ratha, D.; Dey, S.; Bhattacharya, A.; Lopez-Sanchez, J.M.; McNairn, H.; Rao, Y.S. Dual Polarimetric Radar Vegetation Index for Crop Growth Monitoring Using Sentinel-1 SAR Data. Remote Sens. Environ. 2020, 247, 111954. [Google Scholar] [CrossRef]
- Vreugdenhil, M.; Navacchi, C.; Bauer-Marschallinger, B.; Hahn, S.; Steele-Dunne, S.; Pfeil, I.; Dorigo, W.; Wagner, W. Sentinel-1 Cross Ratio and Vegetation Optical Depth: A Comparison over Europe. Remote Sens. 2020, 12, 3404. [Google Scholar] [CrossRef]
- Malenovský, Z.; Rott, H.; Cihlar, J.; Schaepman, M.E.; García-Santos, G.; Fernandes, R.; Berger, M. Sentinels for Science: Potential of Sentinel-1, -2, and -3 Missions for Scientific Observations of Ocean, Cryosphere, and Land. Remote Sens. Environ. 2012, 120, 91–101. [Google Scholar] [CrossRef]
- Han, H.; Lee, S.; Kim, J.I.; Kim, S.H.; Kim, H.C. Changes in a Giant Iceberg Created from the Collapse of the Larsen C Ice Shelf, Antarctic Peninsula, Derived from Sentinel-1 and CryoSat-2 Data. Remote Sens. 2019, 11, 404. [Google Scholar] [CrossRef] [Green Version]
- Twele, A.; Cao, W.; Plank, S.; Martinis, S. Sentinel-1-Based Flood Mapping: A Fully Automated Processing Chain. Int. J. Remote Sens. 2016, 37, 2990–3004. [Google Scholar] [CrossRef]
- Li, Y.; Martinis, S.; Plank, S.; Ludwig, R. An Automatic Change Detection Approach for Rapid Flood Mapping in Sentinel-1 SAR Data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 123–135. [Google Scholar] [CrossRef]
- Uddin, K.; Matin, M.A.; Meyer, F.J. Operational Flood Mapping Using Multi-Temporal Sentinel-1 SAR Images: A Case Study from Bangladesh. Remote Sens. 2019, 11, 1581. [Google Scholar] [CrossRef] [Green Version]
- Li, X. The First Sentinel-1 SAR Image of a Typhoon. Acta Oceanol. Sin. 2015, 34, 1–2. [Google Scholar] [CrossRef]
- Liu, W.; Fujii, K.; Maruyama, Y.; Yamazaki, F. Inundation Assessment of the 2019 Typhoon Hagibis in Japan Using Multi-Temporal Sentinel-1 Intensity Images. Remote Sens. 2021, 13, 639. [Google Scholar] [CrossRef]
- Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A Large-scale Dataset for Object Detection in Aerial Images. arXiv 2019, arXiv:1711.10398. [Google Scholar]
- Wang, Y.; Zhu, X.X. The SARptical Dataset for Joint Analysis of SAR and Optical Image in Dense Urban Area. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 6840–6843. [Google Scholar] [CrossRef] [Green Version]
- Huang, M.; Xu, Y.; Qian, L.; Shi, W.; Zhang, Y.; Bao, W.; Wang, N.; Liu, X.; Xiang, X. The QXS-SAROPT Dataset for Deep Learning in SAR-Optical Data Fusion. arXiv 2021, arXiv:2103.08259. [Google Scholar]
- Schmitt, M.; Hughes, L.H.; Qiu, C.; Zhu, X.X. SEN12MS—A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning and Data Fusion. arXiv 2019, arXiv:1906.07789. [Google Scholar] [CrossRef] [Green Version]
- Zhu, X.X.; Hu, J.; Qiu, C.; Shi, Y.; Kang, J.; Mou, L.; Bagheri, H.; Häberle, M.; Hua, Y.; Huang, R.; et al. So2Sat LCZ42: A Benchmark Dataset for Global Local Climate Zones Classification. arXiv 2019, arXiv:1912.12171. [Google Scholar]
- Jaderberg, M.; Simonyan, K.; Zisserman, A.; Kavukcuoglu, K. Spatial Transformer Networks. arXiv 2016, arXiv:1506.02025. [Google Scholar]
- Chen, R.; Huang, W.; Huang, B.; Sun, F.; Fang, B. Reusing discriminators for encoding: Towards unsupervised image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8168–8177. [Google Scholar]
- Balakrishnan, G.; Zhao, A.; Sabuncu, M.R.; Guttag, J.; Dalca, A.V. VoxelMorph: A Learning Framework for Deformable Medical Image Registration. IEEE Trans. Med. Imaging 2019, 38, 1788–1800. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
- Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv 2018, arXiv:1706.08500. [Google Scholar]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. arXiv 2018, arXiv:1801.03924. [Google Scholar]
Name | Source of SAR | Resolution of SAR | Coverage | Channels
---|---|---|---|---
SN6-SAROPT | Capella Space’s X-band quad-pol sensor | 0.25 m | Rotterdam (724 pairs, 512 × 512) | 4 |
SAR2Opt [23] | TerraSAR-X | 1 m | Asia cities (2076 pairs, 600 × 600) | 1 |
SARptical [46] | TerraSAR-X | 1 m | Berlin (Over 10,000 pairs, 112 × 112) | 1 |
QXS-SAROPT [47] | Gaofen-3 | 1 m | Port cities (20,000 pairs, 256 × 256) | 1 |
SEN1-2 [33] | Sentinel-1/2 | down to 5 m | Multiple locations (282,384 pairs, 256 × 256) | 1 |
SEN12MS [48] | Sentinel-1/2 | down to 5 m | Multiple locations (282,384 pairs, 256 × 256) | 2
So2Sat-LCZ42 [49] | Sentinel-1/2 | down to 5 m | Multiple locations (400,673 pairs, 32 × 32) | 2 |
Methods | S2E PSNR↑ | S2E SSIM↑ | S2E FID↓ | S2E LPIPS↓ | E2S PSNR↑ | E2S SSIM↑ | E2S FID↓ | E2S LPIPS↓
---|---|---|---|---|---|---|---|---
Ours | 15.72 | 0.240 | 178.76 | 0.491 | 15.22 | 0.203 | 116.29 | 0.380 |
CycleGAN [18] | 13.23 | 0.072 | 164.29 | 0.581 | 12.13 | 0.058 | 188.01 | 0.567 |
Pix2Pix [17] | 14.60 | 0.110 | 237.09 | 0.531 | 13.78 | 0.111 | 168.95 | 0.380 |
NICEGAN(NC) [51] | 11.66 | 0.026 | 308.59 | 0.674 | 11.70 | 0.069 | 366.85 | 0.582 |
NICEGAN(C) [51] | 13.70 | 0.189 | 186.63 | 0.556 | 12.12 | 0.101 | 151.89 | 0.437 |
Methods | S2E PSNR↑ | S2E SSIM↑ | S2E FID↓ | S2E LPIPS↓ | E2S PSNR↑ | E2S SSIM↑ | E2S FID↓ | E2S LPIPS↓
---|---|---|---|---|---|---|---|---
Ours | 15.14 | 0.188 | 255.22 | 0.461 | 18.10 | 0.262 | 130.81 | 0.377 |
CycleGAN [18] | 13.49 | 0.146 | 200.47 | 0.485 | 16.78 | 0.105 | 137.58 | 0.416 |
Pix2Pix [17] | 14.71 | 0.143 | 355.48 | 0.525 | 17.73 | 0.114 | 277.15 | 0.397 |
NICEGAN(NC) [51] | 11.63 | 0.157 | 411.64 | 0.715 | 11.11 | 0.079 | 448.96 | 0.723 |
NICEGAN(C) [51] | 13.75 | 0.143 | 323.57 | 0.522 | 17.14 | 0.217 | 159.77 | 0.418 |
Methods | S2E PSNR↑ | S2E SSIM↑ | S2E FID↓ | S2E LPIPS↓ | E2S PSNR↑ | E2S SSIM↑ | E2S FID↓ | E2S LPIPS↓
---|---|---|---|---|---|---|---|---
Ours | 10.92 | 0.027 | 219.41 | 0.615 | 12.87 | 0.016 | 163.29 | 0.459 |
CycleGAN [18] | 11.41 | 0.005 | 177.85 | 0.610 | 10.14 | 0.035 | 168.18 | 0.450 |
Pix2Pix [17] | 11.53 | 0.061 | 241.02 | 0.621 | 11.82 | 0.014 | 141.73 | 0.388 |
NICEGAN(NC) [51] | 10.23 | 0.059 | 336.32 | 0.749 | 11.37 | 0.001 | 169.08 | 0.449 |
NICEGAN(C) [51] | 10.46 | 0.051 | 199.26 | 0.617 | 10.32 | 0.005 | 116.92 | 0.430 |
Land Type | Methods | S2E PSNR↑ | S2E SSIM↑ | S2E FID↓ | S2E LPIPS↓ | E2S PSNR↑ | E2S SSIM↑ | E2S FID↓ | E2S LPIPS↓
---|---|---|---|---|---|---|---|---|---
Building | Ours | 14.31 | 0.108 | 283.07 | 0.469 | 17.90 | 0.238 | 139.60 | 0.401
Building | CycleGAN [18] | 12.36 | 0.053 | 205.02 | 0.492 | 16.44 | 0.096 | 146.15 | 0.447
Building | Pix2Pix [17] | 13.63 | 0.063 | 374.92 | 0.515 | 17.43 | 0.105 | 330.29 | 0.431
Building | NICEGAN(NC) [51] | 11.11 | 0.079 | 448.96 | 0.723 | 17.52 | 0.129 | 482.44 | 0.724
Building | NICEGAN(C) [51] | 12.94 | 0.057 | 377.93 | 0.529 | 16.75 | 0.188 | 177.67 | 0.458
Forest | Ours | 16.75 | 0.163 | 337.81 | 0.500 | 17.90 | 0.260 | 222.16 | 0.357
Forest | CycleGAN [18] | 15.06 | 0.150 | 328.70 | 0.557 | 16.57 | 0.087 | 210.20 | 0.370
Forest | Pix2Pix [17] | 16.85 | 0.159 | 388.83 | 0.564 | 17.50 | 0.107 | 226.37 | 0.353
Forest | NICEGAN(NC) [51] | 11.42 | 0.151 | 494.87 | 0.759 | 16.77 | 0.133 | 431.06 | 0.751
Forest | NICEGAN(C) [51] | 13.87 | 0.106 | 371.68 | 0.607 | 17.16 | 0.227 | 224.88 | 0.360
River | Ours | 16.28 | 0.446 | 307.21 | 0.393 | 18.75 | 0.320 | 214.14 | 0.324
River | CycleGAN [18] | 15.49 | 0.418 | 296.19 | 0.392 | 17.71 | 0.135 | 239.29 | 0.369
River | Pix2Pix [17] | 16.15 | 0.364 | 353.59 | 0.506 | 18.63 | 0.137 | 308.49 | 0.337
River | NICEGAN(NC) [51] | 13.26 | 0.390 | 389.40 | 0.639 | 18.13 | 0.145 | 432.20 | 0.708
River | NICEGAN(C) [51] | 15.86 | 0.423 | 365.65 | 0.421 | 18.38 | 0.287 | 246.56 | 0.354
Road | Ours | 14.79 | 0.149 | 439.59 | 0.500 | 18.27 | 0.282 | 217.43 | 0.381
Road | CycleGAN [18] | 13.15 | 0.098 | 353.40 | 0.521 | 17.18 | 0.127 | 257.67 | 0.418
Road | Pix2Pix [17] | 14.30 | 0.104 | 488.10 | 0.553 | 17.94 | 0.127 | 396.68 | 0.395
Road | NICEGAN(NC) [51] | 11.48 | 0.119 | 525.64 | 0.759 | 17.99 | 0.146 | 481.53 | 0.736
Road | NICEGAN(C) [51] | 13.81 | 0.110 | 464.75 | 0.552 | 16.73 | 0.218 | 288.81 | 0.428
Methods | Generator Parameters (M) | Discriminator Parameters (M) | DA Modules Parameters (M) | Total Parameters (M) | Test Time (ms/image)
---|---|---|---|---|---
Ours | 22.76 | 5.51 | 4.12 | 32.40 | 41.76 |
CycleGAN [18] | 22.76 | 5.51 | - | 28.29 | 35.55 |
Pix2Pix [17] | 54.41 | 2.77 | - | 57.18 | 8.82 |
NICEGAN(NC) [51] | 9.45 | 11.72 | - | 21.17 | 62.23 |
NICEGAN(C) [51] | 18.57 | 93.75 | - | 112.32 | 74.14 |
Variants | S2E PSNR↑ | S2E SSIM↑ | S2E FID↓ | S2E LPIPS↓ | E2S PSNR↑ | E2S SSIM↑ | E2S FID↓ | E2S LPIPS↓
---|---|---|---|---|---|---|---|---
Ours w/o DA modules | 13.07 | 0.078 | 185.69 | 0.572 | 12.72 | 0.018 | 274.66 | 0.940 |
Ours w/o DA-SAR | 15.26 | 0.220 | 161.95 | 0.483 | 11.39 | 0.084 | 117.24 | 0.516 |
Ours w/o DA-EO | 13.60 | 0.097 | 189.56 | 0.579 | 15.33 | 0.199 | 143.14 | 0.416 |
Ours | 15.72 | 0.240 | 178.76 | 0.491 | 15.22 | 0.203 | 116.29 | 0.380 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).