Article

Reconstructing Snow-Free Sentinel-2 Satellite Imagery: A Generative Adversarial Network (GAN) Approach

1 Geographic Information and Spatial Analysis Laboratory, Queen’s University, Kingston, ON K7L 3N6, Canada
2 Department of Geography and Planning, University of Toledo, Toledo, OH 43606, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2352; https://doi.org/10.3390/rs16132352
Submission received: 18 April 2024 / Revised: 15 June 2024 / Accepted: 24 June 2024 / Published: 27 June 2024
(This article belongs to the Special Issue Big Earth Data for Climate Studies)

Abstract

The Sentinel-2 satellites are among the major instruments in remote sensing (RS) that have revolutionized Earth observation research; their main goal is to offer high-resolution satellite data for dynamic monitoring of Earth’s surface and for climate change detection, among other applications. However, visual observation of Sentinel-2 satellite data has revealed that most images obtained during the winter season contain snow noise, posing a major challenge and impediment to satellite RS analysis of the land surface. This effect prevents satellite signals from capturing important surface features within the geographical area of interest. Consequently, it leads to information loss, image processing problems due to contamination, and masking effects, all of which can reduce the accuracy of image analysis. In this study, we developed a snow-cover removal (SCR) model based on the Cycle-Consistent Adversarial Network (CycleGAN) architecture. Data augmentation procedures were carried out to mitigate the limited availability of Sentinel-2 image data. Sentinel-2 satellite images were used for model training and the development of a novel SCR model. The SCR model captures snow and other prominent features in a Sentinel-2 satellite image and then generates a new snow-free synthetic optical image that shares the same characteristics as the source satellite image. The generated snow-free synthetic images are evaluated to quantify their visual and semantic similarity to original snow-free Sentinel-2 satellite images using different image quality metrics (IQMs), namely the Structural Similarity Index Measure (SSIM), the universal image quality index (Q), and the peak signal-to-noise ratio (PSNR). The estimated metrics show that Q yields the highest values, approaching 95%, compared with SSIM and PSNR. The methodology presented in this study could benefit RS research in DL model development for environmental mapping and time series modeling. The results also confirm the applicability of DL techniques in RS studies.

Graphical Abstract

1. Introduction

Remote Sensing (RS) technology plays an important role in Earth Observation (EO) and research, and the Sentinel-2 satellite has been one of the major instruments dedicated to EO within the last decade. Sentinel-2 acquires optical imagery over land and coastal waters at high spatial resolution (10–60 m). One of its primary goals is to complement other existing satellite missions while improving data availability for end users. It achieves this by providing high-resolution satellite data for real-time monitoring of Earth’s atmosphere and surface, facilitating the detection of climate change and natural disasters over any given geographical area during satellite passage, among other functions [1]. In a nutshell, leveraging RS information acquired through Sentinel-2 has made EO and timely intervention in natural disaster monitoring more effective [1,2,3]. Researchers worldwide have been employing RS techniques to assess the effectiveness and applications of Sentinel-2. Their findings reveal that optical imagery from Sentinel-2 frequently encounters clarity or obstruction challenges due to either partial or complete cloud cover [4]. However, there is another significant challenge and impediment in satellite RS that has received less attention. In recent years, with the improved study of long-term time series for climate change and hydrology using Sentinel-2, features observed as snow have been reported [5,6] to cover specific geographical areas of the land surface. As a result, the quality and accuracy of Sentinel-2 satellite measurements are reduced. Differentiating between snow and clouds can be challenging when mapping snow coverage from satellite imagery, as they often have remarkably similar appearances and color distributions. The Sentinel Hub Earth Observation browser “https://www.sentinel-hub.com/explore/eobrowser/ (accessed on 24 January 2024)” now incorporates an algorithm that can systematically select satellite passes with cloud-free Sentinel-2 imagery. Hence, this leaves us solely with satellite imagery containing snow-cover (SC) features. Furthermore, several independent RS researchers have utilized machine learning (ML) as a computational tool to eliminate cloud-cover features from satellite imagery [7,8], again resulting in satellite imagery exclusively exhibiting snow-cover (SC) features.
Snow cover is predominantly observed during the winter and early spring seasons. It has been one of the most influential factors affecting the quality of satellite imagery, as it obscures crucial surface features. Satellite imagery with SC presents various limitations and challenges. First, it hinders the acquisition of necessary features due to satellite signal interference and contamination, resulting in information loss [6]. Secondly, it complicates image data processing by causing masking effects that can diminish the accuracy of computer vision systems [9]. Lastly, this effect can systematically reduce or limit the availability of satellite images required for data training in deep learning (DL) model development and climate change time series studies [10]. Addressing SC and its removal in optical satellite images has therefore become imperative in RS studies. It is worth noting that optical RS satellites operating at wavelengths like those of Sentinel-2 have a limited ability to penetrate snow and ice [11], rendering them incapable of revealing the underlying features. Therefore, it is essential to explore alternative methods for reconstructing synthetic RS image data, enabling the prediction of scenes beneath snow-covered areas of geographical interest. Previous reports have outlined three different methods for reconstructing RS image data [7,8,12]: the spatial-based approach, which restores corrupted pixel sections of the image through interpolation techniques such as kriging [13]; the spectral-based method, which utilizes different sensors to obtain additional satellite images for reconstructing the missing information in the defective image data; and the temporal-based method, in which the corrupted or defective pixels in the image are removed and replaced with non-defective pixels obtained from a temporally close image of the same area. These techniques have been widely utilized in RS applications, and it would be interesting to explore their applicability to image data affected by SC. Unfortunately, we cannot make use of them, because in various use cases these methods have failed to recover corrupted image pixels, could not handle large areas, lacked color information, performed poorly under low image resolution, introduced generic image distortion, etc. [12,14,15]. Therefore, we employed Generative Adversarial Networks (GANs).
GANs are generative models based on deep learning (DL). They have been utilized across various application areas since their development by Ian Goodfellow [16], particularly in unsupervised learning tasks such as image-to-image translation. They consist of two sub-models: the generator and the discriminator [4,17]. In simple terms, the generator (G) network takes sample data and generates another sample, which we regard as synthetic data, while the discriminator (D) assesses whether the data is generated or originates from the real sample pool. Detecting clouds and snow in remote sensing images is a significant preprocessing task for EO and remote sensing imagery analysis. It is important to note, however, that the reported applications of GANs in EO studies have largely focused on cloud removal from satellite images. Nevertheless, several works on SC removal from images using GANs of different architectures have been carried out by independent researchers. For instance, Aiwen et al. [18] reported the detection and negative effect of snowfall features on images obtained from digital cameras; they deployed a GAN integrated with an attention mechanism in the generator component. Zhang et al. [9] also proposed a GAN algorithm for effective removal of snow particles and snow streaks from images.
In addition, other researchers have reported removing noise from satellite images using different GAN architectures for pix2pix image translation. For instance, Praveer and Nikos [4] created a Cloud-GAN model capable of extracting thin clouds from Sentinel-2 images. For heavily obscured regions, however, this model requires a longer-wavelength input, such as SAR imagery, which lowers its effectiveness. Enomoto et al. [19] created a Multispectral Conditional Generative Adversarial Networks (McGANs) model using the conditional Generative Adversarial Networks (cGANs) framework. Their model can remove clouds from visible-light RGB satellite images; however, the resulting cloud-free images often contain artifacts. Meanwhile, Toizumi et al. [20] used a GAN with thick-cloud masks to eliminate thin clouds from cloudy images while maintaining thick-cloud areas; their model produces artifact-free cloud-free images. In addition, Sarukkai et al. [21] proposed a trainable spatial–temporal generator network (STGAN) model, also based on the cGAN architecture, which generates realistic cloud-free images while removing the various atmospheric factors that can affect satellite images. However, another major challenge that has received little attention, and that contributes to the limited success rate in RS imagery processing and environmental modeling, is snow-covered scenes in satellite images. Several algorithms have nonetheless been deployed to develop models capable of removing SC from images.
Examples include the DesnowNet model, a CNN-based method proposed to remove snow from images [22]; the Multi-Scale Stacked Densely Connected Convolutional Network (MS-SDN) model, an efficient neural network based on stacked dense networks for snow removal [23]; and the joint size and transparency-aware snow removal (JSTASR) model, based on a modified partial convolution algorithm [24]. In 2021, Kaihao Zhang et al. [25] proposed a Deep Dense Multi-Scale Network (DDMSNet) using semantic and geometric priors for snow removal from images, and recently, Shan et al. [26] proposed an efficient snow removal network with a global windowing transformer (SGNet). Interestingly, the majority of studies on SC removal, such as those listed above, made use of digital camera photos or pictures taken by drones rather than RS satellite images, which are the main interest of this study. Some studies have reported model development using various neural network algorithms for the mapping and detection of SC features in RS satellite images [5,6,27], but not for SC removal. Hence, we implemented the Cycle-Consistent Adversarial Networks (CycleGANs) framework [28], a type of GAN architecture for mapping between two different image domains without the need for paired training image data. CycleGAN has been reported to be an efficient and optimized GAN architecture compared with other GAN architectures [16,29] as far as pix2pix image translation is concerned. Its major advantage over other GAN architectures is that it does not require alignment of the training image sets, as it can perform image translation on unpaired image data. The CycleGAN technique was demonstrated by Jun-Yan et al. [28] with many examples using aerial photos, nature photos, and picture–paintings, translating an image from a source domain (X) to a target domain (Y) in the absence of paired samples. Their results reveal that the translated images captured distinctive qualities from image domain X and translated them into image domain Y. In addition, Praveer and Nikos [4] demonstrated the RS usage of CycleGANs for the prediction of cloud-free images of the same scene reflected in cloudy images; their results further show that paired (cloud/cloud-free) training datasets are not necessary.
In this study, we developed a snow-cover removal (SCR) model based on the CycleGAN framework. The aim of our SCR model is to automatically capture snow and other notable features in Sentinel-2 satellite images and then generate new snow-free synthetic optical images that share the same properties as the original snow-free satellite images. The generated snow-free synthetic images are evaluated to determine their level of visual and semantic similarity to the original snow-free Sentinel-2 satellite images using various image quality metrics (IQMs). This article is structured as follows. Section 2 describes the proposed method, while the materials and implementation are presented in Section 3. The results and the metric quality measurements of the generated images are presented in Section 4, and the discussion of the results is presented in Section 5.

2. Methods

Detecting snow cover features in RS optical images and translating them into snow-free synthetic optical images can improve the quality of the output imagery. In this study, the SCR model is built on the CycleGAN framework, whose schematic representation is shown in Figure 1. It involves training two generator networks [GX2Y, FY2X] and two discriminator networks [DX, DY] concurrently between two domains. GX2Y denotes the mapping function G: X → Y, in which G translates images from domain X to domain Y and produces the generated images (ỹ), while FY2X denotes the reverse mapping function F: Y → X, which produces the generated images (x̃). A mathematical geostatistical interpolation technique such as kriging could, in principle, be used to remove snow cover by reconstructing image pixels [13], but this approach has been shown to fail on, and to be incapable of handling, spatially structured image data [12] such as RS satellite images. A few other GAN architectures, such as cGAN, could potentially work as well, but a major advantage of our framework is that CycleGAN addresses the lack of paired training data, unlike cGAN [19,21], where the baseline is trained only with conditional paired data. Our technique is similar to that of Praveer and Nikos [4], who used the Cloud-GAN model to extract thin clouds from Sentinel-2 images, but differs in its hyperparameter settings and use case. For instance, our use case is snow cover removal, and we trained the model from scratch with a learning rate of 0.0002 for 1000 epochs, whereas Praveer and Nikos [4] used a learning rate of 0.0002 for 200 epochs; the appropriate hyperparameter settings may vary with the task. In our setup, snowy satellite images (SSI) belong to domain X, while snow-free satellite images (SFSI) belong to domain Y, with no direct correspondence between the images. During model training, image data flows in two directions between the domains: the forward direction, indicated by the blue arrows, and the backward direction, indicated by the red arrows in Figure 1. The generators and discriminators compete with each other, which improves performance during model training. The objective of DX is to distinguish between real images in X and the translated images (x̃), while DY aims to differentiate between real images in Y and the translated images (ỹ). The two major components behind the efficient operation of CycleGANs are the adversarial loss (AL) and the cycle consistency loss (CCL) [30].
Adversarial loss (AL) ensures that the translated image closely resembles the real samples in both X and Y domains by extracting significant features and statistical properties of the target domain. The fundamental concept of CCL is that if an image (x) is translated from domain X to Y and then back to domain X, the final output must closely resemble the initial input.
This procedure is known as forward cycle consistency and compares the real image x with the reconstructed output x̂ (i.e., xi → G(xi) = ỹi → F(ỹi) = x̂i ≈ xi: forward direction). Similarly, the same procedure is performed on an image y in the Y domain but in the reverse order, known as backward cycle consistency (i.e., yi → F(yi) = x̃i → G(x̃i) = ŷi ≈ yi: backward direction). The generator network (G) aims to minimize the adversarial objective, while the discriminator network (D) seeks to maximize it (i.e., minG maxDY ℓGAN(G, DY, X, Y)). For the mapping function G: X → Y and its corresponding discriminator DY, the adversarial loss in CycleGAN is given as [28,31]:
ℓGAN(G, DY, X, Y) = 𝔼y∼pdata(y)[log DY(y)] + 𝔼x∼pdata(x)[log(1 − DY(G(x)))]        (1)
where x and y denote SSI and SFSI, respectively, while x ∼ pdata(x) and y ∼ pdata(y) are the data distributions of the SSI and SFSI, and 𝔼 denotes the expectation over the corresponding distribution. A similar procedure is carried out for the mapping function F: Y → X and its corresponding discriminator DX (i.e., minF maxDX ℓGAN(F, DX, Y, X)). Following [4,28], we adopted the least-squares loss to avoid the instability of the negative log-likelihood objective and to optimize the quality of the generated images during model training. Therefore, for the GAN loss ℓGAN(G, DY, X, Y), we train G to minimize 𝔼x∼pdata(x)[(DY(G(x)) − 1)²] and train DY to minimize 𝔼y∼pdata(y)[(DY(y) − 1)²] + 𝔼x∼pdata(x)[DY(G(x))²]. To prevent the discriminator from failing to provide insightful feedback that could enhance the generator output, the CCL ensures that crucial information (i.e., essential visual detail) from the original image is not lost during translation between domains [31]. The mathematical expression of the CCL is given below:
ℓcyc(G, F) = 𝔼x∼pdata(x)[‖F(G(x)) − x‖₁] + 𝔼y∼pdata(y)[‖G(F(y)) − y‖₁]        (2)
We compared the generated SSI with the real SSI and calculated the sum of the absolute pixel-value differences between the original image and the image reconstructed after passing through both generators. By integrating these loss components (i.e., AL and CCL), we obtain the overall objective of the model by combining Equations (1) and (2):
ℓ(G, F, DX, DY) = ℓGAN(G, DY, X, Y) + ℓGAN(F, DX, Y, X) + λ·ℓcyc(G, F)        (3)
where λ is the regularizing factor that controls the relative importance of the two objective functions. Further details about AL and CCL are discussed in [28,31,32].
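To make the loss formulation above concrete, the following is a minimal PyTorch sketch of the least-squares adversarial loss and the cycle-consistency loss of Equations (1)–(3). The generator and discriminator modules passed in as G, F, D_X, and D_Y, as well as the weight λ = 10, are placeholders chosen for illustration; they are not the authors’ exact implementation.

```python
import torch
import torch.nn as nn

# Least-squares adversarial loss and cycle-consistency loss, following Eqs. (1)-(3).
# G: X -> Y (snowy -> snow-free), F: Y -> X; D_X and D_Y are the two discriminators.
mse = nn.MSELoss()  # least-squares form of the adversarial loss
l1 = nn.L1Loss()    # cycle-consistency loss uses the L1 norm

def generator_adv_loss(d_fake):
    # The generator tries to push D(G(x)) toward 1 (the "real" label).
    return mse(d_fake, torch.ones_like(d_fake))

def discriminator_adv_loss(d_real, d_fake):
    # The discriminator pushes real scores toward 1 and generated scores toward 0.
    return mse(d_real, torch.ones_like(d_real)) + mse(d_fake, torch.zeros_like(d_fake))

def cycle_loss(real_x, rec_x, real_y, rec_y):
    # ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1
    return l1(rec_x, real_x) + l1(rec_y, real_y)

def total_generator_loss(G, F, D_X, D_Y, real_x, real_y, lam=10.0):
    fake_y = G(real_x)   # translated snow-free image
    fake_x = F(real_y)   # translated snowy image
    rec_x = F(fake_y)    # forward cycle: x -> G(x) -> F(G(x))
    rec_y = G(fake_x)    # backward cycle: y -> F(y) -> G(F(y))
    adv = generator_adv_loss(D_Y(fake_y)) + generator_adv_loss(D_X(fake_x))
    cyc = cycle_loss(real_x, rec_x, real_y, rec_y)
    return adv + lam * cyc  # Eq. (3): combined objective weighted by the regularizer
```

In practice, the generator parameters are updated with `total_generator_loss`, while each discriminator is updated separately with `discriminator_adv_loss` on real images and on detached generated images.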

Model Output Performance Metrics

Generated synthetic images are prone to loss of visual quality during image acquisition, processing, and reproduction, among other stages [33]. Hence, further quantitative procedures are required to measure the quality of generated synthetic images. Zhou Wang et al. [34] reported the Structural Similarity Index Measure (SSIM) as an effective tool for assessing image quality, among other metrics. In contrast, Jim Nilsson et al. [35] demonstrated in their review study that SSIM can produce incorrect or invalid results in some cases. Therefore, we added two further image quality measures, since no single metric can be relied on as absolutely correct.
(a) 
SSIM examines similarities between luminance, contrast, and structure [33]. Given two N-dimensional images x = (x1, …, xN) and y = (y1, …, yN), where x and y are the original and generated synthetic images, respectively, a summarized and simplified version of SSIM is given as [33,34]:
S(x, y) = S₁(x, y)·S₂(x, y) = [(2x̄ȳ + ε₁)/(x̄² + ȳ² + ε₁)]·[(2Sxy + ε₂)/(Sx² + Sy² + ε₂)]        (4)
where εᵢ are numerical stability coefficients that prevent zero denominators, x̄ and ȳ are the local means, Sx and Sy are the standard deviations, and Sxy is the cross-covariance of images x and y, respectively. The larger the computed value, the higher the similarity between the real image and the generated synthetic image.
(b) 
The universal image quality index, denoted ‘Q’, is often used to determine the average visual correlation between the original and the generated synthetic image [36]. Q is the product of three components (loss of correlation, luminance distortion, and contrast distortion). The closer the generated image is to the original image, the closer the Q value is to one; the value 1 is achieved if and only if yi = xi for all i = 1, 2, …, N.
Let x = {xi : i = 1, 2, …, N} and y = {yi : i = 1, 2, …, N} be the original and generated synthetic images, respectively. Q can be mathematically represented as:
Q = [σxy/(σx·σy)]·[2x̄ȳ/(x̄² + ȳ²)]·[2σx·σy/(σx² + σy²)]        (5)
x̄ = (1/N)·Σ(i=1..N) xi,   ȳ = (1/N)·Σ(i=1..N) yi
σx² = [1/(N − 1)]·Σ(i=1..N) (xi − x̄)²,   σy² = [1/(N − 1)]·Σ(i=1..N) (yi − ȳ)²,   σxy = [1/(N − 1)]·Σ(i=1..N) (xi − x̄)(yi − ȳ)
where x̄ and ȳ are the means of the original and generated synthetic images, respectively, σx² and σy² are the corresponding variances, and σxy is the covariance between x and y.
(c) 
Peak signal-to-noise ratio (PSNR) is a commonly used mean-squared-error-based metric for measuring the quality of a generated image by quantifying the pixel-to-pixel difference between two (original and generated) images. The higher the PSNR, the better the quality of the reconstructed image, and it is expressed in terms of the logarithmic decibel scale [34,37].
PSNR = 10·log₁₀(MAXL²/MSE),   MSE = (1/mn)·Σ(i=0..m−1) Σ(j=0..n−1) (f(i, j) − g(i, j))²        (6)
where MSE is the mean squared error between the two images f and g, MAXL is the maximum possible pixel value of the image, m denotes the number of rows of pixels (indexed by i), and n denotes the number of columns of pixels (indexed by j). Here, f represents the matrix data of the original image, while g represents the matrix data of the generated synthetic image.
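For reference, the following is a minimal NumPy sketch of the three metrics as defined in Equations (4)–(6). The stability constants ε₁ and ε₂ and the dynamic range MAXL = 255 are illustrative assumptions, not values reported in this study.

```python
import numpy as np

def ssim_simple(x, y, eps1=1e-4, eps2=9e-4):
    """Simplified global SSIM of Eq. (4); eps1/eps2 are arbitrary stability constants."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    sxy = ((x - mx) * (y - my)).sum() / (x.size - 1)  # cross-covariance
    luminance = (2 * mx * my + eps1) / (mx**2 + my**2 + eps1)
    structure = (2 * sxy + eps2) / (sx**2 + sy**2 + eps2)
    return luminance * structure

def q_index(x, y):
    """Universal image quality index Q of Eq. (5)."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    sxy = ((x - mx) * (y - my)).sum() / (x.size - 1)
    return (sxy / np.sqrt(vx * vy)) \
        * (2 * mx * my / (mx**2 + my**2)) \
        * (2 * np.sqrt(vx * vy) / (vx + vy))

def psnr(f, g, max_l=255.0):
    """PSNR of Eq. (6); f is the original image, g the generated synthetic image."""
    f, g = f.astype(np.float64), g.astype(np.float64)
    mse = np.mean((f - g) ** 2)
    return 10.0 * np.log10(max_l**2 / mse)
```

SSIM and Q are bounded by 1 (perfect similarity), so multiplying by 100 yields percentage values of the kind reported later; PSNR is a decibel value that grows as the two images become more similar.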

3. Materials and Method

3.1. Dataset

Our dataset consists of high-resolution Level-2 cloud-free Sentinel-2 optical imagery with false-color composition spanning the years 2016–2022. For our investigation, we specifically selected satellite imagery with a spatial resolution of 10 m. The datasets were downloaded from “https://apps.sentinel-hub.com/eo-browser/ (accessed on 24 January 2024)”. Most of our cloud-free satellite images exhibit cloud coverage in the range of 0–2%, enabling clear visibility of scenes with and without snow. All the downloaded images were acquired over the Cariboo region along Williams Lake in British Columbia, Canada, which is the geographical area of interest for our study. We selected 95 samples of SSI and 67 samples of SFSI for training the GAN. Because training a translation from SSI to SFSI requires a large number of images, we performed data augmentation [38] on both the SSI and SFSI sets to generate a sufficient quantity of images suitable for GAN training (a sketch of such an augmentation pipeline is given below). Following data augmentation and data cleaning, we obtained 1000 images each for SSI and SFSI.
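As an illustration of this step, the following is a minimal sketch of a geometric augmentation pipeline (random flips, rotations, and resized crops) that could expand a small set of image tiles to the required count. The torchvision transforms, file paths, and target counts shown here are illustrative assumptions, not a record of the exact operations applied in this study.

```python
import random
from pathlib import Path

from PIL import Image
import torchvision.transforms as T

# Illustrative geometric augmentation pipeline; the exact operations used to grow
# 95 SSI / 67 SFSI tiles to 1000 each are an assumption here, not the authors' recipe.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=90),
    T.RandomResizedCrop(size=256, scale=(0.8, 1.0)),
])

def expand_dataset(src_dir, dst_dir, target_count=1000, seed=42):
    """Write augmented copies of the images in src_dir until target_count files exist."""
    random.seed(seed)
    src = sorted(Path(src_dir).glob("*.png"))
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for i in range(target_count):
        img = Image.open(src[i % len(src)]).convert("RGB")
        augment(img).save(Path(dst_dir) / f"aug_{i:04d}.png")

# Hypothetical usage (paths are placeholders):
# expand_dataset("data/snowy", "data/snowy_aug", target_count=1000)
# expand_dataset("data/snow_free", "data/snow_free_aug", target_count=1000)
```

Purely geometric transformations are generally preferred for this step because they increase scene diversity while leaving the radiometric characteristics of the imagery unchanged.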

3.2. Network Architectures and Training

In constructing our SCR model, we adopted the architecture and naming pattern for the CycleGAN network from Praveer and Nikos [4] and Jun-Yan et al. [28], respectively. Figure 2 shows the overview of the CycleGAN architecture process workflow for the SCR model. Computational cost is one of the primary aspects that was taken into account during the model training process when building our SCR model.
The high-resolution Sentinel-2 satellite images, originally 512 × 512 pixels, were resized to 256 × 256 pixels. This was done to suit our needs and to avoid significantly higher computational costs in terms of computation time and memory usage. The model was developed on a Windows 10 Pro workstation with an Intel(R) Xeon(R) W-2245 processor running at 3.90 GHz, a high-performance NVIDIA Quadro graphics card, and 16.0 GB of RAM. In the generator (G) architecture, we employed 6 residual blocks for training images of size 128 × 128 and 9 residual blocks for images of size 256 × 256. For the discriminator (D) architecture, we used the 70 × 70 PatchGAN approach introduced by Phillip Isola et al. [29]. PatchGAN divides the input image into 70 × 70 patches, which are classified as real or generated, and the patch-level decisions are then aggregated by averaging. To enhance the accuracy of the model output, we added more data, as described in Section 3.1. Furthermore, we changed the learning rate from 0.002 to 0.0002 and trained for 1000 epochs. A sketch of these building blocks is given below.
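The following is a compact PyTorch sketch of the two building blocks just described: a ResNet-style residual block for the generator (nine such blocks at 256 × 256) and a 70 × 70 PatchGAN discriminator. The layer widths, normalization, and activation choices follow the common CycleGAN reference implementation and are assumptions rather than the exact configuration used for the SCR model.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block of the CycleGAN generator (nine such blocks for 256x256 inputs)."""
    def __init__(self, channels=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # skip connection

class PatchGANDiscriminator(nn.Module):
    """70x70 PatchGAN: outputs one real/generated score per overlapping image patch."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        layers = [nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
                  nn.LeakyReLU(0.2, True)]
        ch = base
        for mult in (2, 4, 8):
            stride = 2 if mult != 8 else 1  # last intermediate layer uses stride 1
            layers += [nn.Conv2d(ch, base * mult, 4, stride=stride, padding=1),
                       nn.InstanceNorm2d(base * mult),
                       nn.LeakyReLU(0.2, True)]
            ch = base * mult
        layers += [nn.Conv2d(ch, 1, 4, stride=1, padding=1)]  # one score per patch
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)

# Example optimizer setup using the reported learning rate of 0.0002 (Adam betas are
# the values commonly used with CycleGAN and are an assumption here):
# opt_G = torch.optim.Adam(generator_parameters, lr=2e-4, betas=(0.5, 0.999))
```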

4. Results

SCR Model Prediction Results

Based on the SCR model, we generated a variety of synthetic satellite images (i.e., SSI and SFSI). Figure 3 shows the flowchart of SCR model deployment and possible outputs. Our SCR model successfully removes the dispersed snow from the satellite image and replaces it with the underlying ground textures while preserving other notable features.
Figure 4a,b shows the two-column layout, with the first stand-alone row representing the original SFSI. The second, third, and fourth rows contain the original SSIs and their corresponding synthetic SFSIs, respectively. The date tag appears on the left edge of each image. Since obtaining the original SFSI and original SSI for the same selected day or season of interest is not possible, we chose to use a snow-free satellite image of another day (11 August 2021) as a benchmark or reference image. The chosen benchmark satellite image was relatively clear, cloud-free, fuzzy-free, and cloud-shadow-free.
As observed in Figure 4a,b, the SCR model output retains significant feature details such as fields and rivers (the black-colored, thick, curve-like line across the image). In most cases, the output is aesthetically pleasing and looks natural, almost identical to the original SFSI. The model was notably able to remove highly thick snow cover (e.g., SAT-Img1 and SAT-Img6). Nevertheless, compared with the other generated images, the generated image for the scene with extremely thick snow coverage in Figure 4b (SAT-Img6) is of lower quality.
Three bar charts are displayed in Figure 5, with three bars representing each image in each of the image quality metrics (IQMs). We consider the original SFSI to be the benchmark or reference image. The bar charts show the metric values for original SFSI versus generated SFI and for original SFSI versus original SSI. The color code is described in the legend of the satellite image status in the bar charts.
For convenience, we express the IQM values as percentages. The generated SFI metric values lie in the ranges of 75–95.1%, 88.1–95.7%, and 18.6–29.1% for SSIM, Q, and PSNR, respectively, while the original SSI metric values lie in the ranges of 75.1–83.6%, 73.1–83.4%, and 13.4–18.4%, respectively. The quantitative assessment of the metric scores of image-to-image similarity between the generated and original images is presented in Table 1.
Table 1 shows that the generated SFI metric values are consistently greater than the original SSI values for every index, suggesting that our approach performs well across all evaluated scenes.

5. Discussion

We present an analysis of the results achieved with our SCR model, which efficiently detects and eliminates snow cover scenes from satellite images and replaces them with the underlying ground details, as seen in Figure 4a,b.
As observed in Figure 4a,b, the SCR model can retain significant feature details, such as fields and rivers (the thick, curve-like line across the image). In most cases, the model output is aesthetically pleasing and looks natural, almost identical to the original SFSI. Our model’s output is comparable to that published by Praveer and Nikos [4], who employed Cloud-GAN with the CycleGAN framework to generate cloud-free synthetic images. However, they reported a few failed scenarios in which the model output images were blank or overly smoothed; these unsuccessful cases could be the consequence of insufficient data augmentation or hyperparameter tuning issues. Our model was notably able to remove highly thick snow cover (e.g., SAT-Img1 and SAT-Img6). Nevertheless, the generated image for the scene with very thick snow coverage in Figure 4b (SAT-Img6) is of lower quality than the other generated images. This could be a result of the limited availability of satellite image samples with high snow coverage during SCR model training, which consequently affected the model’s performance. In addition, we observed that the images in Figure 4a,b are slightly fuzzy, and we were unable to improve them further. Since the satellite passes over the region of interest occurred during the winter season, we suspect that atmospheric particulate matter suspended in the air degraded the image resolution and made the satellite images appear fuzzy.
In Figure 5, the metric value for original SFSI versus original SFSI is one by definition, hence the green bar is always 100%. Each metric (SSIM, Q, and PSNR) displays the values for the original SSI and the generated SFI against the original SFSI of a different date. The bar charts present quantitative, comparative evaluations of the model output (i.e., the generated images) by estimating how similar the output is to the original SFSI. As observed, all IQM values of the generated SFI behaved similarly in that they were higher than those of the original SSI. Additionally, the relatively high SSIM values of the model output signify a closer resemblance to the original SFSI in terms of luminance, contrast, and structure compared with the model input (i.e., the original SSI). In the case of SAT-Img6 in Figure 4b, the generated image has a different contrast from the corresponding target image, which can be misleading about the generated image’s quality. This effect could be the reason why the generated SFI for SAT-Img6 has a relatively low SSIM value (only ~2% higher than that of the corresponding original SSI) compared with the other SSIM values. Interestingly, the other generated SFIs also exhibited higher percentage values than SAT-Img6 in the other metrics.
In addition, the higher PSNR percentage values are indications of similarity in the pixel-to-pixel difference between the original SFSI and the generated image, as we can observe the significant increase in signal information after the snow cover scenes have been removed—by as much as ~7%, ~8%, ~3%, ~6%, ~10%, and ~7%, for the satellite images SAT-Img1, SAT-Img2, SAT-Img3, SAT-Img4, SAT-Img5, and SAT-Img6, respectively. In this case, the target image (i.e., original SFSI) was taken as a reference, and the PSNR values of the original SSI and the generated SFI were compared to record the information gained. High values of PSNR indicate a high similarity between the generated SFI and the original SFSI. The complexity and diversity of satellite images under various settings make it difficult to compare our model metric value results to previously reported model metric results [4,7,8].
Q in our case has the highest range of metric values (91–~95%) compared with the other indices (SSIM and PSNR). This is comparable to [7], where Q was likewise shown to have the highest metric value. Q is a universal image quality index, and it reveals the degree of visual and semantic similarity between the original SFSI and the generated SFI. Furthermore, Figure 5 shows high metric values of over 77% for all generated images and also reveals the variability in the IQM values depending on the amount of distortion and contrast in the generated image. Consequently, it is difficult to obtain the same quality rating even when the same distortion and contrast are present in different generated images. Despite this, we found that Q produces results closer to 95% than SSIM and PSNR.

6. Conclusions

In this article, we proposed a snow cover removal (SCR) model that employs the CycleGAN framework. GAN deployment has been successful in many image analysis and vision tasks, as stated in Section 1, but the use of GANs for snow cover removal from RS satellite imagery is still relatively limited. Hence, we used the CycleGAN framework and Sentinel-2 satellite images for model training and the development of a novel SCR model. The SCR model recognizes and eliminates snow-cover scenes from satellite images, replacing them with underlying ground information without using any corresponding snow-free pair for a snowy image. Data augmentation procedures were carried out to mitigate the limited availability of Sentinel-2 image data exhibiting snow cover features. To achieve greater computational efficiency, we resized the 512 × 512 pixel images to 256 × 256 pixels, trading off some spatial detail.
The model was tested on real snowy satellite images; it effectively removed snow cover scenes and reconstructed the underlying ground textures while preserving important features. However, we believe a more refined model output could be obtained with a larger number of original SSIs for model training. The model performance was evaluated using selected image quality metrics (SSIM, Q, PSNR). The IQM bar charts illustrate the model’s efficiency and express the quality of the generated snow-free images in percentage terms, validating the effectiveness of the model’s output.
Finally, we demonstrated that the generated snow-free images are useful for RS research based on the exhibited IQM values. Our metric data show that Q yields the highest values, approaching 95%, compared with SSIM and PSNR. We believe our approach demonstrates the advancement of DL techniques and their applicability in RS studies. Hence, this approach contributes to resolving some of the problems associated with processing RS imagery and can increase the amount of satellite image data available for future studies and applications, particularly for data training in DL model development. Another potential application of this method is cloud shadow removal. Future research directions may involve the development of an optimized snow cover removal model using a multi-source SSI dataset capable of providing high-resolution images for environmental mapping and time series modeling.

Author Contributions

Conceptualization, T.S.O. and D.C.; methodology, T.S.O.; validation and formal analysis, T.S.O., D.C., O.O., M.B., M.H., and O.I.; investigation, T.S.O.; data curation, T.S.O.; writing—original draft preparation, T.S.O.; writing—review and editing, all authors; supervision, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the office of the Vice-Principal Research (VPR) Fund of Queen’s University, ON, Canada, and the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant.

Data Availability Statement

Data are available in a publicly accessible repository at https://apps.sentinel-hub.com/eo-browser/ (accessed on 24 January 2024).

Acknowledgments

The authors thank the Sentinel satellite data hub for making the satellite optical data available and easily accessible through the EO Application Programming Interface (API) web browser. We also thank the reviewers and editors very much for taking the time to thoroughly review our article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Spoto, F.; Sy, O.; Laberinti, P.; Martimort, P.; Fernandez, V.; Colin, O.; Meygret, A. Overview of sentinel-2. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 1707–1710. [Google Scholar] [CrossRef]
  2. Tarpanelli, A.; Mondini, A.C.; Camici, S. Effectiveness of Sentinel-1 and Sentinel-2 for flood detection assessment in Europe. Nat. Hazards Earth Syst. Sci. 2022, 22, 2473–2489. [Google Scholar] [CrossRef]
  3. Moumtzidou, A.; Bakratsas, M.; Andreadis, S.; Karakostas, A.; Gialampoukidis, I.; Vrochidis, S.; Kompatsiaris, I. Flood detection with Sentinel-2 satellite images in crisis management systems. CoRe Paper—Using Artificial Intelligence to exploit Satellite Data in Risk and Crisis Management. In Proceedings of the 17th ISCRAM Conference, Blacksburg, VA, USA, 24–27 May 2020; pp. 1049–1059. Available online: https://idl.iscram.org/files/anastasiamoumtzidou/2020/2296_AnastasiaMoumtzidou_etal2020.pdf (accessed on 16 January 2024).
  4. Singh, P.; Komodakis, N. Cloud-Gan: Cloud Removal for Sentinel-2 Imagery Using a Cyclic Consistent Generative Adversarial Networks. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1772–1775. [Google Scholar] [CrossRef]
  5. Gascoin, S.; Grizonnet, M.; Bouchet, M.; Salgues, G.; Hagolle, O. Theia Snow collection: High-resolution operational snow cover maps from Sentinel-2 and Landsat-8 data. Earth Syst. Sci. Data 2019, 11, 493–514. [Google Scholar] [CrossRef]
  6. Wang, Y.; Su, J.; Zhai, X.; Meng, F.; Liu, C. Snow Coverage Mapping by Learning from Sentinel-2 Satellite Multispectral Images via Machine Learning Algorithms. Remote Sens. 2022, 14, 782. [Google Scholar] [CrossRef]
  7. Darbaghshahi, F.N.; Mohammadi, M.R.; Soryani, M. Cloud Removal in Remote Sensing Images Using Generative Adversarial Networks and SAR-to-Optical Image Translation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–9. [Google Scholar] [CrossRef]
  8. Maniyar, C.; Kumar, A. Generative Adversarial Network for Cloud Removal from Optical Temporal Satellite Imagery. In Advances in Intelligent Systems and Computing Soft Computing for Problem Solving, Proceedings of SocProS Volume 2; Springer: Singapore, 2021; pp. 481–491. [Google Scholar] [CrossRef]
  9. Zhang, Z.; Wu, S.; Wang, S. Single-image snow removal algorithm based on generative adversarial networks. IET Image Process. 2023, 17, 3580–3588. [Google Scholar] [CrossRef]
  10. Zakeri, F.; Mariethoz, G. Synthesizing long-term satellite imagery consistent with climate data: Application to daily snow cover. J. Remote Sens. Environ. 2024, 300, 113877. [Google Scholar] [CrossRef]
  11. Yang, X.; Qin, Q.; Yesou, H.; Ledauphin, T.; Koehl, M.; Grussenmeyer, P.; Zhu, Z. Monthly estimation of the surface water extent in France at a 10-m resolution using Sentinel-2 data. Remote Sens. Environ. 2020, 244, 111803. [Google Scholar] [CrossRef]
  12. Shen, H.; Li, X.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.; Zhang, L. Missing information reconstruction of remote sensing data: A technical review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85. [Google Scholar] [CrossRef]
  13. Deepthi, N.; Catherine Rakkini, D.; Edison Prabhu, K. Image Restoration by Kriging Interpolation Technique. IOSR J. Electr. Electron. Eng. 2016, 11, 25–37. [Google Scholar]
  14. Huang, B.; Li, Y.; Han, X.; Cui, Y.; Li, W.; Li, R. Cloud removal from optical satellite imagery with SAR imagery using sparse representation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1046–1050. [Google Scholar] [CrossRef]
  15. Lin, C.H.; Lai, K.H.; Bin Chen, Z.; Chen, J.Y. Patch-based information reconstruction of cloud-contaminated multitemporal images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 163–174. [Google Scholar] [CrossRef]
  16. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Communications of the ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  17. Jozdani, S.; Chen, D.; Pouliot, D.; Johnson, B.A. A review and meta-analysis of generative adversarial networks and their applications in remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102734. [Google Scholar] [CrossRef]
  18. Jia, A.; Jia, Z.H.; Yang, J.; Kasabov, N.K. Single-Image Snow Removal Based on an Attention Mechanism and a Generative Adversarial Network. IEEE Access 2021, 9, 12852–12860. [Google Scholar] [CrossRef]
  19. Enomoto, K.; Sakurada, K.; Wang, W.; Fukui, H.; Matsuoka, M.; Nakamura, R.; Kawaguchi, N. Filmy cloud removal on satellite imagery with multispectral conditional generative adversarial nets. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1533–1541. [Google Scholar] [CrossRef]
  20. Toizumi, T.; Zini, S.; Sagi, K.; Kaneko, E.; Tsukada, M.; Schettini, R. Artifact-free thin cloud removal using gans. In Proceedings of the International Conference on Image Processing ICIP, Taipei, Taiwan, 22–25 September 2019; pp. 3596–3600. [Google Scholar] [CrossRef]
  21. Sarukkai, V.; Jain, A.; Uzkent, B.; Ermon, S. Cloud removal in satellite images using spatiotemporal generative networks. In Proceedings of the 2020 IEEE Winter Conference on Applied Computer Vision, WACV 2020, Snowmass Village, CO, USA, 1–5 March 2020; pp. 1785–1794. [Google Scholar] [CrossRef]
  22. Liu, Y.F.; Jaw, D.W.; Huang, S.C.; Hwang, J.N. DesnowNet: Context-aware deep network for snow removal. IEEE Trans. Image Process. 2018, 27, 3064–3073. [Google Scholar] [CrossRef] [PubMed]
  23. Li, P.; Yun, M.; Tian, J.; Tang, Y.; Wang, G.; Wu, C. Stacked dense networks for single-image snow removal. Neurocomputing 2019, 367, 152–163. [Google Scholar] [CrossRef]
  24. Chen, W.-T.; Fang, H.-Y.; Ding, J.-J.; Tsai, C.-C.; Kuo, S.-Y. JSTASR: Joint size and transparency aware snow removal algorithm based on modified partial convolution and veiling effect removal. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 754–770. [Google Scholar]
  25. Zhang, K.; Li, R.; Yu, Y.; Luo, W.; Li, C. Deep dense multi-scale network for snow removal using semantic and depth priors. IEEE Trans. Image Process. 2021, 30, 7419–7431. [Google Scholar] [CrossRef] [PubMed]
  26. Shan, L.; Zhang, H.; Cheng, B. SGNet: Efficient Snow Removal Deep Network with a Global Windowing Transformer. Mathematics 2024, 12, 1424. [Google Scholar] [CrossRef]
  27. Yin, M.; Wang, P.; Ni, C. Cloud and snow detection of remote sensing images based on improved Unet3+. Sci. Rep. 2022, 12, 14415. [Google Scholar] [CrossRef] [PubMed]
  28. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar] [CrossRef]
  29. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  30. Zhu, Z.; Lu, J.; Yuan, S.; He, Y.; Zheng, F.; Jiang, H.; Yan, Y.; Sun, Q. Automated Generation and Analysis of Molecular Images Using Generative Artificial Intelligence Models. J. Phys. Chem. Lett. 2024, 15, 1985–1992. [Google Scholar] [CrossRef] [PubMed]
  31. Chandhok, S. CycleGAN: Unpaired Image-to-Image Translation (Part 1). 2022. Available online: https://pyimg.co/7vh0s (accessed on 12 December 2023).
  32. Xu, M.; Sook, Y.; Alvaro, F.; Sun, P.D. A comprehensive survey of image augmentation techniques for deep learning. Pattern Recognit. 2023, 137, 109347–109358. [Google Scholar] [CrossRef]
  33. Sara, U.; Akter, M.; Uddin, M.S. Image Quality Assessment through FSIM, SSIM, MSE and PSNR—A Comparative Study. J. Comput. Commun. 2019, 7, 8–18. [Google Scholar] [CrossRef]
  34. Wang, Z.; Bovik, A.C. A Universal Image Quality Index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  35. Nilsson, J.; Akenine-Möller, T. Understanding SSIM. NVIDIA 2020. Available online: https://arxiv.org/pdf/2006.13846.pdf (accessed on 16 January 2024).
  36. Søgaard, J.; Krasula, L.; Shahid, M.; Temel, D.; Brunnstrom, K.; Razaak, M. Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming. In Proceedings of the Society for Imaging Science and Technology IS&T International Symposium on Electronic Imaging, San Francisco, CA, USA, 14–18 February 2016; pp. 1–7. [Google Scholar] [CrossRef]
  37. Zhou, W.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  38. Abdelhack, M. A Comparison of Data Augmentation Techniques in Training Deep Neural Networks for Satellite Image Classification. 2020. Available online: https://arxiv.org/pdf/2003.13502.pdf (accessed on 22 January 2024).
Figure 1. Schematic diagram of an overview of the CycleGAN framework with mapping functions G and F, and discriminators DX and DY. The forward and backward cycles are denoted with blue and red arrows, respectively. x, x̃, y, and ỹ represent the real SSI, generated SSI, real SFSI, and generated SFSI, respectively.
Figure 2. Picture of the workflow process showing the methodological stages involved in developing the SCR model for generating the snow-free synthetic optical images. Every term/parameter in the process flow chart has been discussed in Section 2.
Figure 3. Snow cover removal (SCR) model for Sentinel-2 satellite imagery. The SCR model takes snowy satellite imagery as input and outputs it as a snow-free image.
Figure 4. (a) Results of snow cover removal from satellite images. The first row consists of the original SFSI, while the second (Sat-Img1), third (Sat-Img2), and fourth (Sat-Img3) rows consist of the original SSIs with their corresponding generated synthetic SFSIs. All original SSIs have their date tag attached to their left margins. (b) Results of snow cover removal from satellite images. The first row consists of the original SFSI, while the second (Sat-Img4), third (Sat-Img5), and fourth (Sat-Img6) rows consist of the original SSIs with their corresponding generated synthetic SFSIs. All original SSIs have their date tags attached to their left margins.
Figure 5. Bar charts showing different image qualitative metric (IQM) value comparisons between the original SSI and generated SFI with respect to the original SFSI. The chart shows variations in IQM values after snow removal.
Table 1. Results of metric evaluations for original SFSI versus original SSI, and original SFSI versus generated SFI.
| Sentinel-2 Satellite Images | SSIM: Ori. SFSI vs. Ori. SSI (%) | SSIM: Ori. SFSI vs. Gen. SFI (%) | Q: Ori. SFSI vs. Ori. SSI (%) | Q: Ori. SFSI vs. Gen. SFI (%) | PSNR: Ori. SFSI vs. Ori. SSI (%) | PSNR: Ori. SFSI vs. Gen. SFI (%) |
|---|---|---|---|---|---|---|
| SAT-Img1 | 75.10 | 84.10 | 76.10 | 92.00 | 16.40 | 23.20 |
| SAT-Img2 | 80.10 | 85.20 | 81.20 | 91.10 | 17.00 | 25.06 |
| SAT-Img3 | 81.40 | 86.20 | 82.20 | 91.10 | 20.80 | 23.40 |
| SAT-Img4 | 81.30 | 85.00 | 83.40 | 88.70 | 13.40 | 19.30 |
| SAT-Img5 | 83.60 | 95.10 | 84.10 | 94.70 | 18.40 | 28.50 |
| SAT-Img6 | 75.10 | 77.30 | 73.10 | 88.10 | 12.10 | 18.60 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation: Oluwadare, T.S.; Chen, D.; Oluwafemi, O.; Babadi, M.; Hossain, M.; Ibukun, O. Reconstructing Snow-Free Sentinel-2 Satellite Imagery: A Generative Adversarial Network (GAN) Approach. Remote Sens. 2024, 16, 2352. https://doi.org/10.3390/rs16132352

