A Flexible Spatiotemporal Thick Cloud Removal Method with Low Requirements for Reference Images
Abstract
1. Introduction
1.1. Related Works
1.2. Motivation
- (1) Most cloud removal methods rely on a single reference image and place strict requirements on its quality, expecting it to be completely cloud-free and close to the target image. However, when the target image and the reference image contain overlapping missing areas, the recovered image still contains missing areas, as shown in Figure 1a.
- (2) Cloud removal methods that can use information from multiple reference images simultaneously, such as low-rank tensor-completion-based methods, require a long time-series dataset with high temporal correlation. When the auxiliary information is available only in reference images that differ significantly from the target image, the recovered image contains abrupt areas, as shown in Figure 1a.
- (1) In contrast to single-reference-image-based methods, FSTGAN can simultaneously use the information from multiple cloudy reference images to recover the target image. As shown in Step 1 of Figure 1b, FSTGAN effectively fuses temporal features extracted from each reference image, resulting in feature maps that are devoid of any missing areas.
- (2) In contrast to existing multiple-reference-image-based methods, the proposed method can leverage the spatial information of uncontaminated areas as a guide to assimilate the fused features, which eliminates the abrupt areas in the feature maps, as illustrated in Step 2 of Figure 1b. Consequently, FSTGAN is capable of achieving satisfactory results even when the reference images exhibit significant differences, thereby relaxing the strict requirements on reference images.
- (3) A series of experiments were carried out on Landsat 8 OLI and Sentinel-2 MSI data. The visual effects and quantitative evaluation demonstrate the practicality of FSTGAN in both simulated and real experiments.
2. Methodology
2.1. Generator
- Feature Extraction Block: The inputs of this module fall into two groups: the image/features of the target image and the images/features of the three reference images. This module extracts features from the target image and the three reference images; by analyzing the characteristics of each image, it captures valuable information for the subsequent feature fusion. Through skip connections, the features extracted from the target image are also transferred to the reference-image branches.
- Feature Fusion Block: The four inputs of this module are the features extracted from the target image and three reference images. After extracting features from the reference images, the feature fusion block sums these features to generate feature maps without any missing areas. Additionally, to prepare for the subsequent feature assimilation, this fusion process converts the four input features into two outputs. One output represents the features of the target image, while the other represents the features obtained by fusing the reference images.
- Feature Assimilation Block: The inputs of this module consist of the fused features from the three reference images, along with the features extracted from the target image. The feature assimilation block leverages the uncontaminated information of the target image to assimilate the fused features obtained in the previous step. This assimilation yields refined fused feature maps, which aid detail recovery in the decoder.
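The fuse-then-assimilate idea behind these blocks can be illustrated at the pixel level. The sketch below is a simplified analogue (masked averaging and a global gain/offset correction in image space, rather than FSTGAN's learned feature space); the function names and the linear-fit correction are assumptions for illustration, not the paper's actual layers:

```python
import numpy as np

def fuse_references(refs, ref_masks):
    """Analogue of the feature fusion block: combine reference images by
    averaging, ignoring each one's missing areas.

    refs: list of (H, W) arrays; ref_masks: list of boolean (H, W) arrays,
    True where a pixel is contaminated.
    """
    valid = np.stack([~m for m in ref_masks]).astype(float)  # (N, H, W)
    data = np.stack(refs) * valid
    counts = valid.sum(axis=0)
    # Mean over the references that are valid at each pixel; as long as the
    # missing areas do not all coincide, the result has no gaps.
    return data.sum(axis=0) / np.maximum(counts, 1)

def assimilate(target, target_mask, fused):
    """Analogue of the feature assimilation block: keep the target's
    uncontaminated pixels, and pull the fused values toward the target's
    radiometry via a gain/offset fitted on the clear area."""
    clear = ~target_mask
    a, b = np.polyfit(fused[clear], target[clear], 1)  # fit on clear pixels
    adjusted = a * fused + b
    return np.where(target_mask, adjusted, target)
```

Because fusion fills the gaps and assimilation anchors the filled values to the target's own clear pixels, the output contains neither missing nor abrupt areas even when the references differ from the target.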
2.2. Discriminator
2.3. Loss Function
3. Experiments and Results
3.1. Study Areas and Datasets
3.2. Experimental Settings
3.3. Simulated Data Experiments
- (1) Case 1: The target image was contaminated in large areas, and multitemporal images with multiple small missing areas were employed as the complementary information, as shown in rows 1–4 in Figure 5a–d.
- (2) Case 2: The target image was contaminated by multiple small areas, and multitemporal images with multiple small missing areas were employed as the complementary information, as shown in rows 5–8 in Figure 5a–d.
- (3) Case 3: The target image was contaminated by multiple small areas, and a single cloud-free image was employed as the complementary information, as shown in Figures 8 and 9a,b.
3.4. Real-Data Experiments
4. Discussion
4.1. Rationality of the Network
4.2. Ablation Study
4.3. The Influence of the Scale Factors in the Discriminator
4.4. Computational Complexity Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
| | RAM | CPU | GPU |
| --- | --- | --- | --- |
| Hardware | 80 GB | AMD EPYC 7543 | RTX 3090 |

| | Python | CUDA | PyTorch |
| --- | --- | --- | --- |
| Software | 3.8.13 | 11.3 | 1.8.0 |
Case 1: Rows 1–2 in Figure 5 (Landsat OLI data) and Rows 3–4 in Figure 5 (Sentinel MSI data)

| Dataset | Method | RMSE | SSIM | ERGAS | SAM | PSNR | CC | UIQI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Row 1 (Landsat OLI) | AWTC | 0.555 | 0.701 | 4.077 | 21.822 | 25.457 | 0.519 | 0.486 |
| | ST-Tensor | 0.200 | 0.904 | 2.424 | 8.132 | 34.595 | 0.897 | 0.892 |
| | FMTC | 0.228 | 0.907 | 2.570 | 8.756 | 33.575 | 0.885 | 0.879 |
| | STGAN | 0.621 | 0.690 | 4.341 | 26.769 | 24.421 | 0.561 | 0.485 |
| | Ours | 0.109 | 0.960 | 2.306 | 6.606 | 40.509 | 0.941 | 0.934 |
| Row 2 (Landsat OLI) | AWTC | 0.265 | 0.864 | 3.681 | 17.121 | 34.283 | 0.695 | 0.668 |
| | ST-Tensor | 0.125 | 0.960 | 2.738 | 8.782 | 39.905 | 0.920 | 0.915 |
| | FMTC | 0.137 | 0.950 | 2.928 | 9.328 | 38.891 | 0.914 | 0.904 |
| | STGAN | 0.331 | 0.843 | 4.306 | 18.037 | 31.938 | 0.736 | 0.683 |
| | Ours | 0.081 | 0.978 | 2.192 | 6.294 | 43.667 | 0.960 | 0.959 |
| Row 3 (Sentinel MSI) | AWTC | 1.156 | 0.730 | 4.244 | 24.799 | 19.169 | 0.163 | 0.118 |
| | ST-Tensor | 0.126 | 0.973 | 1.488 | 3.135 | 37.979 | 0.925 | 0.921 |
| | FMTC | 0.145 | 0.949 | 1.572 | 3.429 | 36.783 | 0.909 | 0.907 |
| | STGAN | 0.418 | 0.903 | 2.585 | 8.919 | 27.836 | 0.499 | 0.492 |
| | Ours | 0.107 | 0.975 | 1.355 | 2.656 | 39.459 | 0.948 | 0.941 |
| Row 4 (Sentinel MSI) | AWTC | 0.998 | 0.758 | 4.002 | 21.674 | 20.385 | 0.276 | 0.243 |
| | ST-Tensor | 0.131 | 0.942 | 1.495 | 3.144 | 37.721 | 0.960 | 0.960 |
| | FMTC | 0.157 | 0.929 | 1.642 | 3.603 | 36.110 | 0.947 | 0.945 |
| | STGAN | 0.319 | 0.901 | 2.308 | 6.846 | 30.091 | 0.813 | 0.810 |
| | Ours | 0.093 | 0.970 | 1.263 | 2.271 | 40.638 | 0.979 | 0.979 |

Case 2: Rows 5–6 in Figure 5 (Landsat OLI data) and Rows 7–8 in Figure 5 (Sentinel MSI data)

| Dataset | Method | RMSE | SSIM | ERGAS | SAM | PSNR | CC | UIQI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Row 5 (Landsat OLI) | AWTC | 0.104 | 0.969 | 2.048 | 5.350 | 41.264 | 0.970 | 0.970 |
| | ST-Tensor | 0.094 | 0.984 | 1.822 | 4.239 | 43.587 | 0.974 | 0.973 |
| | FMTC | 0.094 | 0.979 | 1.968 | 4.922 | 42.020 | 0.978 | 0.977 |
| | STGAN | 0.124 | 0.968 | 2.506 | 7.552 | 38.408 | 0.957 | 0.955 |
| | Ours | 0.055 | 0.990 | 1.578 | 3.158 | 46.063 | 0.991 | 0.991 |
| Row 6 (Landsat OLI) | AWTC | 0.096 | 0.974 | 2.043 | 5.542 | 42.309 | 0.971 | 0.971 |
| | ST-Tensor | 0.090 | 0.987 | 2.059 | 5.621 | 42.279 | 0.969 | 0.969 |
| | FMTC | 0.077 | 0.983 | 1.804 | 4.307 | 44.611 | 0.983 | 0.982 |
| | STGAN | 0.105 | 0.976 | 2.325 | 6.948 | 40.179 | 0.949 | 0.947 |
| | Ours | 0.037 | 0.996 | 1.308 | 2.280 | 49.995 | 0.995 | 0.995 |
| Row 7 (Sentinel MSI) | AWTC | 0.119 | 0.974 | 1.441 | 2.898 | 38.646 | 0.949 | 0.948 |
| | ST-Tensor | 0.112 | 0.983 | 1.418 | 2.869 | 39.038 | 0.954 | 0.954 |
| | FMTC | 0.105 | 0.978 | 1.373 | 2.719 | 39.612 | 0.959 | 0.958 |
| | STGAN | 0.189 | 0.966 | 1.836 | 4.539 | 34.529 | 0.875 | 0.870 |
| | Ours | 0.075 | 0.989 | 1.144 | 1.888 | 44.664 | 0.978 | 0.978 |
| Row 8 (Sentinel MSI) | AWTC | 0.091 | 0.977 | 1.289 | 2.369 | 41.004 | 0.931 | 0.929 |
| | ST-Tensor | 0.053 | 0.990 | 0.990 | 1.411 | 45.612 | 0.976 | 0.976 |
| | FMTC | 0.056 | 0.987 | 1.017 | 1.503 | 45.162 | 0.959 | 0.958 |
| | STGAN | 0.108 | 0.978 | 1.405 | 2.033 | 39.596 | 0.953 | 0.951 |
| | Ours | 0.041 | 0.994 | 0.890 | 1.068 | 48.001 | 0.986 | 0.986 |
Case 3: Sentinel data with small temporal differences (Figure 8) and large temporal differences (Figure 9)

| Dataset | Method | RMSE | SSIM | ERGAS | SAM | PSNR | CC | UIQI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Figure 8 | MNSPI | 0.074 | 0.980 | 1.204 | 2.086 | 42.804 | 0.981 | 0.981 |
| | WLR | 0.073 | 0.986 | 1.381 | 2.588 | 42.862 | 0.991 | 0.991 |
| | STGAN | 0.119 | 0.968 | 1.543 | 2.976 | 38.634 | 0.965 | 0.962 |
| | ST-Tensor | 0.085 | 0.979 | 1.297 | 2.404 | 41.562 | 0.975 | 0.975 |
| | Ours | 0.048 | 0.991 | 0.969 | 1.350 | 46.607 | 0.992 | 0.992 |
| Figure 9 | MNSPI | 0.153 | 0.964 | 1.949 | 5.275 | 36.505 | 0.944 | 0.943 |
| | WLR | 0.159 | 0.960 | 1.983 | 5.481 | 36.172 | 0.940 | 0.940 |
| | STGAN | 0.239 | 0.952 | 2.344 | 5.745 | 33.159 | 0.938 | 0.928 |
| | ST-Tensor | 0.162 | 0.960 | 2.083 | 5.754 | 35.917 | 0.935 | 0.933 |
| | Ours | 0.109 | 0.974 | 1.648 | 3.797 | 39.417 | 0.971 | 0.970 |
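The metrics in these tables follow standard definitions. For reference, three of them can be computed as below with NumPy; the data range `max_val=1.0` is an assumption (the paper's images may use a different scaling):

```python
import numpy as np

def rmse(x, y):
    """Root mean squared error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images with range [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return float(10 * np.log10(max_val ** 2 / mse))

def sam_degrees(x, y, eps=1e-12):
    """Mean spectral angle in degrees between per-pixel spectra of two
    (H, W, B) multiband images."""
    dot = np.sum(x * y, axis=-1)
    denom = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    angles = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(angles).mean())
```

SSIM, ERGAS, CC, and UIQI are defined analogously in the image-quality literature; libraries such as scikit-image provide tested implementations of SSIM and PSNR.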
| Feature Loss | Spectrum Loss | Vision Loss | Test Loss (MSE) ↓ |
| --- | --- | --- | --- |
| ✓ | | | 0.009213 |
| | ✓ | | 0.069922 |
| | | ✓ | 0.010431 |
| ✓ | ✓ | | 0.009114 |
| ✓ | | ✓ | 0.008979 |
| | ✓ | ✓ | 0.009496 |
| ✓ | ✓ | ✓ | 0.008767 |
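The rows above toggle the three loss terms on and off. A minimal sketch of such a composite objective; the flags mirror the ablation rows, while the weights are hypothetical (the paper's actual coefficients are not reproduced here):

```python
def generator_loss(feature_loss, spectrum_loss, vision_loss,
                   use_feature=True, use_spectrum=True, use_vision=True,
                   weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three generator loss terms.

    Each `use_*` flag corresponds to a row of the ablation table; with all
    three enabled, the full objective is used.
    """
    total = 0.0
    if use_feature:
        total += weights[0] * feature_loss
    if use_spectrum:
        total += weights[1] * spectrum_loss
    if use_vision:
        total += weights[2] * vision_loss
    return total
```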
| Generator (Encoder) | Generator (Decoder) | Discriminator | Training Loss ↓ |
| --- | --- | --- | --- |
| ✗ | | | 0.041925 |
| | ✗ | | 0.041048 |
| | | ✗ | 0.016650 |
| ✗ | ✗ | | 0.043720 |
| ✗ | ✗ | ✗ | 0.043781 |
| | | | 0.015877 |
Scale factors in the discriminator

| Dataset | 1× | 0.5× | 0.25× | Training Loss ↓ |
| --- | --- | --- | --- | --- |
| Landsat | ✓ | | | 0.016792 |
| Landsat | ✓ | ✓ | | 0.016534 |
| Landsat | ✓ | ✓ | ✓ | 0.015877 |
| Sentinel | ✓ | | | 0.020759 |
| Sentinel | ✓ | ✓ | | 0.020710 |
| Sentinel | ✓ | ✓ | ✓ | 0.020551 |
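The 1×/0.5×/0.25× factors correspond to presenting the discriminator with an image pyramid. A minimal sketch of building such a pyramid with 2×2 average pooling; the paper's exact downsampling operator is not specified here, so average pooling is an assumption:

```python
import numpy as np

def avg_pool2(img):
    """Downsample an (H, W, C) image by 2 using 2x2 average pooling
    (H and W must be even)."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def discriminator_pyramid(img):
    """Return the image at scale factors 1x, 0.5x, and 0.25x, matching the
    three scales ablated in the table above."""
    half = avg_pool2(img)
    quarter = avg_pool2(half)
    return [img, half, quarter]

scales = discriminator_pyramid(np.zeros((64, 64, 3)))
# shapes: (64, 64, 3), (32, 32, 3), (16, 16, 3)
```

Judging discriminator decisions at several scales lets coarse scales penalize abrupt areas while the full scale penalizes missing detail, which is consistent with the small but steady loss reduction as scales are added.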
| Dataset | AWTC (Inference Time) | ST-Tensor (Inference Time) | FMTC (Inference Time) | STGAN (Parameters / Training Time / Inference Time) | FSTGAN (Parameters / Training Time / Inference Time) |
| --- | --- | --- | --- | --- | --- |
| Landsat | 772.84 s | 1228.51 s | 335.22 s | 115.835 M / 8.55 h / 0.53 s | 3.678 M / 5.4 h / 0.17 s |
| Sentinel | 390.45 s | 592.46 s | 190.13 s | 115.802 M / 35.7 h / 0.56 s | 3.673 M / 33.7 h / 0.11 s |
Zhang, Y.; Ji, L.; Xu, X.; Zhang, P.; Jiang, K.; Tang, H. A Flexible Spatiotemporal Thick Cloud Removal Method with Low Requirements for Reference Images. Remote Sens. 2023, 15, 4306. https://doi.org/10.3390/rs15174306