Article

Spatiotemporal Fusion Model of Remote Sensing Images Combining Single-Band and Multi-Band Prediction

1 Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei University of Technology, Hefei 230601, China
2 Anhui Province Key Laboratory of Industry Safety and Emergency Technology, Hefei 230601, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(20), 4936; https://doi.org/10.3390/rs15204936
Submission received: 29 July 2023 / Revised: 25 September 2023 / Accepted: 11 October 2023 / Published: 12 October 2023
(This article belongs to the Special Issue Machine Learning for Spatiotemporal Remote Sensing Data)

Abstract

In recent years, convolutional neural network (CNN)-based spatiotemporal fusion (STF) models for remote sensing images have made significant progress. However, existing STF models may suffer from two main drawbacks. Firstly, multi-band prediction often generates a hybrid feature representation that includes information from all bands. This blending of features can lead to the loss or blurring of high-frequency details, making it challenging to reconstruct multi-spectral remote sensing images with significant spectral differences between bands. Another challenge in many STF models is the limited preservation of spectral information during 2D convolution operations. Combining all input channels’ convolution results into a single-channel output feature map can lead to the degradation of spectral dimension information. To address these issues and to strike a balance between avoiding hybrid features and fully utilizing spectral information, we propose a remote sensing image STF model that combines single-band and multi-band prediction (SMSTFM). The SMSTFM initially performs single-band prediction, generating separate predicted images for each band, which are then stacked together to form a preliminary fused image. Subsequently, the multi-band prediction module leverages the spectral dimension information of the input images to further enhance the preliminary predictions. We employ the modern ConvNeXt convolutional module as the primary feature extraction component. During the multi-band prediction phase, we enhance the capture of spatial and channel information by replacing the 2D convolutions within ConvNeXt with 3D convolutions. In the experimental section, we evaluate our proposed algorithm on two public datasets with 16× resolution differences and one dataset with a 3× resolution difference. The results demonstrate that our SMSTFM achieves state-of-the-art performance on these datasets and is proven effective and reasonable through ablation studies.

Graphical Abstract

1. Introduction

High spatiotemporal resolution remote sensing (RS) images play a pivotal role in various applications, including, but not limited to, crop growth monitoring [1,2], land cover change detection [3,4,5,6,7,8], and land cover classification [9,10,11]. However, due to technical and budgetary constraints, obtaining RS data with high spatial and temporal resolutions is challenging [12], thereby limiting the utilization of advanced RS applications. For instance, the Moderate Resolution Imaging Spectroradiometer (MODIS) provides observations at spatial resolutions ranging from 250 to 1000 m and offers a global revisit time of nearly one day. In comparison, Landsat acquires images at a higher spatial resolution of 30 m but with relatively smaller scene coverage and a revisit time of up to 16 days. Although recent satellite systems, such as Sentinel-2, have made obtaining time series of high-resolution RS images more accessible, challenges persist, such as frequent cloud contamination [13]. To overcome the time and space trade-offs in RS images, spatiotemporal fusion methods (STFMs) are employed to combine satellite images with a low spatial resolution but high frequency (e.g., MODIS, referred to as coarse images) and satellite images with a high spatial resolution but low frequency (e.g., Landsat, referred to as fine images) to create satellite image time series with both high spatial and high temporal resolution [14,15]. These fusion techniques enable researchers and practitioners to access data with enhanced spatiotemporal characteristics, facilitating more accurate and comprehensive analyses for various environmental and land-related studies. To facilitate reader comprehension, we have compiled the primary abbreviations used in this article in the Abbreviations section.

2. Related Works

The current STF methods are primarily categorized into three main approaches: weighted function-based, unmixing-based, and learning-based algorithms [16]. Weighted function-based methods employ linear combinations of input image information to obtain refined pixel values. For example, STARFM [17] employs a moving window to search for pixels similar to the central pixel and assigns weights based on their spatial, spectral, and temporal similarities to reflect their respective contributions. Furthermore, ESTARFM [18] builds on STARFM by introducing variable transformation coefficients and modifying the pixel search method to enhance performance at heterogeneous sites with many mixed similar pixels. On the other hand, OBSTFM [19] focuses on performance in regions with non-shape variations, considering the actual distribution of surface features. It incorporates segmentation methods to generate surface objects with good similarity and uniformity and then searches for and weights similar pixels within each object. Unmixing-based methods posit that a coarse pixel comprises fine pixels of various land-cover types and employ linear spectral mixing theory to decompose the coarse pixels. Maselli [20] utilized a moving window approach to account for spatiotemporal variations in pixel reflectance and incorporated distance weighting within this window, where pixels closer to the target pixel received higher weights. When selecting endmembers, Busetto et al. [21] considered both spatial and spectral differences between pixels and determined the weights of each pixel in the linear unmixing model based on their spatial and spectral similarities. Additionally, there are hybrid methods that integrate the two approaches mentioned above. FSDAF [22] addresses rapidly changing regions by utilizing unmixing principles to obtain residuals between the predicted fine image and the fine image at the reference date. The model also incorporates weight functions to enhance its applicability in scenarios with fast-changing land cover types. Furthermore, the Fit-FC algorithm [23] combines model fitting (Fit), spatial filtering (F), and residual compensation (C) to handle scenes with significant changes while also constraining the impact of the unmixing process on the results, thereby further improving the accuracy of the fusion.
There are two main categories of learning-based methods: those based on sparse representation and those based on deep learning (DL). Sparse-representation-based algorithms typically train dictionaries for high and low spatial resolutions in either the image or frequency domain, facilitating the reconstruction of finely detailed images for specific prediction dates through sparse coding [24,25,26,27]. However, due to the inherent computational complexity of sparse learning and their limited ability to extract sufficient local structural information from large input patches, these methods face constraints in accurately preserving object shapes [28].
DL-based methods can be categorized into three main groups: those based on Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and Vision Transformers (ViTs) [29]. Convolution involves image cross-correlation, allowing the model to learn relative spatial positional information. The CNN-based STFM extracts representative image features by stacking multiple convolutional layers. For example, Song et al. [30] employed a super-resolution convolutional neural network (SRCNN) to reconstruct fine images from coarse counterparts, achieving remarkable improvements in the image quality. To further enhance image details, DCSTFN [31] simultaneously extracts features from fine and coarse images and merges these features using equations that consider temporal ground cover changes. Considering the inherent information loss in the reconstruction process of deconvolution fusion methods, EDCSTFN [32] goes a step further by incorporating residual encoding blocks. Additionally, it employs a composite loss function to enhance the learning capability of the network, thereby improving the fidelity of the fused images. The two-stream convolutional neural network spatiotemporal fusion model (StfNet) incorporates temporal dependence to predict unknown fine difference images. Additionally, it establishes a time constraint that considers the relationship between time series, ensuring the uniqueness and authenticity of the fusion results [33]. GAN-based STFMs achieve the prediction of RS data by leveraging the collaborative efforts of a generator and discriminator, aiming to make the predicted RS data as similar as possible to the actual data distribution. CycleGAN-STF [23] utilizes cycle-GAN to select generated images, enhancing the selected images using the wavelet transformation. GAN-STFM [34] introduces conditional GAN (CGAN) and switchable normalization techniques to address spatiotemporal fusion problems. This approach reduces input data and enhances model flexibility. MLFF-GAN combines multi-level feature fusion with GAN to generate fused images. MLFF-GAN incorporates Adaptive Instance Normalization (AdaIN) blocks to learn the global distribution relationships between multi-temporal images. Additionally, it employs an Attention Module (AM) to learn the local information weights of minor region variations [35]. A crucial component of the ViT-based STFMs is the self-attention mechanism, enabling the capture of global information and compensating for the inherent limitation of CNNs with narrower receptive fields. MSNet is a multi-stream STFM based on ViT and CNN. It combines the global temporal correlation learning capability of the transformer with the feature extraction capability of convolutional neural networks using an average-weighted fusion approach [36]. SwinSTFM [13] is a novel algorithm based on the Swin Transformer [37] and the Linear Spectral Mixing theory. This algorithm fully leverages the advantages of the Swin Transformer in feature extraction, and integrates the unmixing theory into the model based on the self-attention mechanism. Table 1 presents the strengths and weaknesses of the methods from different categories.
However, there are certain limitations to DL-based STFMs. Firstly, in ViT-based STFMs, self-attention often neglects fine-grained pixel-level internal structural features, leading to the loss of shallow-level features [28]. Additionally, the high resolution of RS images causes the computational complexity of self-attention to grow quadratically with the input image size. Secondly, STFMs based on CNNs and GANs often use 2D convolutions for feature extraction, and they typically have two limitations: (1) 2D convolutions may lead to the loss of channel dimension information; (2) hybrid features may be difficult to use for reconstructing multispectral RS images with significant differences in spectral reflectance across bands. The detailed analysis is depicted in Figure 1, where the input multispectral RS images are passed through the 2D convolution-based encoder to generate hybrid features containing information from all bands of the input images. Then, the 2D convolution-based decoder reconstructs the hybrid features into individual output image bands. However, 2D convolution combines the convolution results of all input channels into a single-channel output feature map, leading to the loss of channel dimension information. Furthermore, multi-band prediction generates a comprehensive feature representation that contains information from each band, and such hybrid features can be used for subsequent image analysis and processing tasks, such as feature classification, target detection, and change detection. Nevertheless, the hybrid features may lead to the loss or blurring of high-frequency detail information, which is not conducive to reconstructing multispectral RS images with significant differences in spectral reflectance between the bands.
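The channel-collapsing behavior of 2D convolution versus the band-preserving behavior of 3D convolution can be verified directly from tensor shapes. The following minimal PyTorch snippet is illustrative only; the layer sizes are arbitrary and are not taken from any model in this paper:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 6, 64, 64)                      # a 6-band multispectral patch (B, C, H, W)

conv2d = nn.Conv2d(in_channels=6, out_channels=16, kernel_size=3, padding=1)
y2d = conv2d(x)                                    # (1, 16, 64, 64): every output map sums over all
                                                   # six input bands, so the band axis is collapsed

x3d = x.unsqueeze(1)                               # (1, 1, 6, 64, 64): treat the bands as a depth axis
conv3d = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
y3d = conv3d(x3d)                                  # (1, 16, 6, 64, 64): the band axis is preserved

print(y2d.shape, y3d.shape)
```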
To address the above issues, we propose an RS image STFM that combines single-band and multi-band prediction (SMSTFM). First, we employ a Single-Band Prediction Module (SBPM) to generate an initial fused image, thus avoiding the generation of hybrid features. Next, we employ a Multi-Band Prediction Module (MBPM), which focuses on the information in the input images’ channel dimension to enhance the details of the preliminary fused image. In the SBPM, feature extraction is mainly performed using convolutional modules based on ConvNeXt. ConvNeXt is an advanced convolutional module redesigned according to the ViT architecture [38]. Compared to ViT, it is concise in design, computationally efficient, and offers comparable performance. In the MBPM, we replace the 2D convolutions in ConvNeXt with 3D convolutions to better extract spatial-spectral features. The main contributions of our study are summarized as follows:
  • The paper introduces an STFM to address two challenges within the current STF framework. Our proposed STFM consists of two key modules: the Single-Band Prediction Module (SBPM) and the Multi-Band Prediction Module (MBPM). SBPM is responsible for generating the initial fused image, thus eliminating the need for hybrid feature generation. Subsequently, MBPM is employed to extract channel-wise information that enhances the details of the preliminary fused image.
  • In SBPM, feature extraction primarily relies on ConvNeXt due to its concise architecture, computational efficiency, and outstanding performance compared to ViT. In MBPM, we replace the 2D convolutions in ConvNeXt with 3D convolutions to better extract spatial-spectral features.
  • We evaluated and compared different models on three datasets with distinct characteristics: CIA, LGC, and Nanjing. The resolution difference between coarse and fine images is 16× for the former two and 3× for the latter.
The remainder of this manuscript is organized as follows: Section 3 provides an overview of the SMSTFM, outlining its overall structure and specific internal modules. Section 4 is dedicated to presenting our results, encompassing a description of the dataset, the experimental procedures, and their subsequent analysis. Section 5 constitutes our discussion, while Section 6 offers the conclusion.

3. Materials and Methods

SMSTFM requires a fine image $L_{t_0}$ at the reference date $t_0$ and a coarse image $M_{t_1}$ at the prediction date $t_1$, and finally synthesizes a fine image $\hat{L}_{t_1}$ at the prediction date $t_1$. The general structure of the SMSTFM is shown in Figure 2.
The SMSTFM consists of two main stages: the SBPM (Single-Band Prediction Module) and the MBPM (Multi-Band Prediction Module). In the SBPM stage, each iteration takes a band from $L_{t_0}$ and the corresponding band from $M_{t_1}$ as the input and generates a synthesized fine band $L_p^i$ for the predicted image. The SBPM can be seen as a reference-based super-resolution reconstruction model [39]. It operates by taking a single band from the coarse image and the corresponding band from the reference image to predict the corresponding band of the fused image. These predicted bands are then stacked together to form $L_p$. The MBPM complements this process by extracting channel-wise information from the input multispectral images, enhancing the details of the $L_p$ generated by the SBPM.
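The two-stage forward pass described above can be summarized in the following PyTorch sketch. The sbpm and mbpm sub-modules are placeholders, and their interfaces (a (B, 2, H, W) single-band pair in and a (B, 1, H, W) band out for the SBPM; three multispectral tensors in for the MBPM) are illustrative assumptions rather than the exact implementation:

```python
import torch
import torch.nn as nn

class SMSTFMSketch(nn.Module):
    """Illustrative two-stage flow: per-band SBPM prediction, then MBPM refinement."""
    def __init__(self, sbpm: nn.Module, mbpm: nn.Module, num_bands: int = 6):
        super().__init__()
        self.sbpm, self.mbpm, self.num_bands = sbpm, mbpm, num_bands

    def forward(self, L_t0, M_t1):
        # L_t0: fine image at the reference date t0; M_t1: coarse image at the prediction date t1.
        # Both are (B, num_bands, H, W), the coarse image having been resampled to the fine grid.
        bands = []
        for i in range(self.num_bands):
            pair = torch.stack([L_t0[:, i], M_t1[:, i]], dim=1)  # (B, 2, H, W) single-band input
            bands.append(self.sbpm(pair))                        # (B, 1, H, W) predicted fine band
        L_p = torch.cat(bands, dim=1)                            # preliminary fused image L_p
        L_t = self.mbpm(L_p, L_t0, M_t1)                         # channel-wise refinement
        return L_p, L_t
```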

3.1. Single-Band Prediction Module

The overall structure of the SBPM is depicted in Figure 3. To effectively integrate multi-scale feature information, the SBPM adopts the U-Net architecture [40]. The U-Net architecture can preserve surrounding pixel information at different scales in multispectral images, thereby reducing the impact of sensor registration errors on fusion results [41,42]. The SBPM consists of an encoder and a decoder. The encoder is responsible for extracting the feature information of the input images, while the decoder is responsible for restoring the resolution and details of the input images. To achieve this goal, the SBPM performs three downsampling and three upsampling operations, which increase the receptive field of the network while maintaining a low computational complexity. Moreover, the SBPM also uses skip connections, which connect feature maps of the same size in the encoder and decoder. The skip connections help transfer low-level-detail information from lower layers to higher layers, thereby improving the reconstruction performance of the network. The detailed workflow of the SBPM is as follows: first, the two single-band input images, $L_{t_0}^i$ and $M_{t_1}^i$, are concatenated into a tensor of shape (2, 256, 256) as the input to the module. Then, the feature maps are downsampled using a convolution operation (Conv2) with a kernel size of 2 and a stride of 2, while the number of channels in the feature maps is increased by a factor of four. Next, the PixelShuffle operation is used to upsample the feature maps, simultaneously reducing the number of channels in the feature maps by a factor of four. In both the encoder and decoder, feature maps of the same size are added together (indicated by dashed lines in the diagram). Adding feature maps helps pass low-level-detail information from the lower layers to the upper layers, thus restoring resolution and details. Finally, a convolution operation (Conv1) with a kernel size of 1 adjusts the channel dimension of the output features. This results in the single-band prediction $L_p^i$, which is the sum of the module’s predicted residuals and the coarse image band $M_{t_1}^i$.
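A minimal sketch of this encoder/decoder pattern is given below, assuming one downsampling level instead of three and a plain convolution in place of the stacked 2D-CBlocks; the stride-2 Conv2 downsampling with a four-fold channel increase, the PixelShuffle upsampling with a four-fold channel reduction, the additive skip connection, and the final residual added to the coarse band follow the description above:

```python
import torch
import torch.nn as nn

class SBPMSketch(nn.Module):
    """Simplified single-band predictor: one down/up level and a residual to the coarse band."""
    def __init__(self, base_channels: int = 16):
        super().__init__()
        c = base_channels
        self.stem = nn.Conv2d(2, c, kernel_size=3, padding=1)          # (L_t0^i, M_t1^i) -> features
        self.down = nn.Conv2d(c, 4 * c, kernel_size=2, stride=2)       # Conv2: halve H, W; channels x4
        self.body = nn.Conv2d(4 * c, 4 * c, kernel_size=3, padding=1)  # stand-in for the 2D-CBlocks
        self.up = nn.PixelShuffle(2)                                   # double H, W; channels / 4
        self.head = nn.Conv2d(c, 1, kernel_size=1)                     # Conv1: one-band residual

    def forward(self, x):
        # x: (B, 2, H, W) concatenation of the reference fine band and the coarse band at t1
        f0 = self.stem(x)
        f1 = self.body(self.down(f0))
        f2 = self.up(f1) + f0                                          # additive skip connection
        return self.head(f2) + x[:, 1:2]                               # predicted residual + coarse band

pred_band = SBPMSketch()(torch.randn(1, 2, 256, 256))                  # -> (1, 1, 256, 256)
```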
The structure of the 2D-CBlock is the same as that of ConvNeXt [38], as depicted in Figure 4. Firstly, inspired by the window size in Swin Transformers, the convolutional kernel size of the first convolutional layer in the ConvNeXt block is set to 7 in order to achieve efficient feature extraction. This convolution is a depthwise convolution, which is a special case of grouped convolution where the number of groups is equal to the number of channels. Depthwise convolution is similar to the weighted sum operation in self-attention, as it operates independently on each channel, only mixing information across spatial dimensions. Because of its complexity, Batch Normalization (BN) can potentially have negative effects on the performance of the model [43]; the ConvNeXt module therefore uses Layer Normalization (LN), a simpler normalization technique, instead of BN. Furthermore, the ConvNeXt block employs an inverted bottleneck design, in which the feature maps output by the LN module are first expanded by a convolutional module that increases the number of channels by a factor of four, and the activated features are then reduced by a factor of four. The activation function used in the ConvNeXt block is the Gaussian Error Linear Unit (GELU) [44], which can be seen as a smoother variant of ReLU.
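A sketch of the 2D-CBlock is shown below; it follows the ConvNeXt design described above (depthwise 7 × 7 convolution, LN, four-fold inverted bottleneck, GELU, residual connection) while omitting details such as layer scale and stochastic depth:

```python
import torch
import torch.nn as nn

class ConvNeXtBlock2D(nn.Module):
    """2D-CBlock sketch in the ConvNeXt style (channels-first input)."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # depthwise 7x7
        self.norm = nn.LayerNorm(dim)                  # LN over the channel dimension
        self.pwconv1 = nn.Linear(dim, 4 * dim)         # inverted bottleneck: expand channels x4
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)         # project back to the original width

    def forward(self, x):
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                      # (B, C, H, W) -> (B, H, W, C) for LN / Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)                      # back to channels-first
        return shortcut + x                            # residual connection
```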

3.2. Multi-Band Prediction Module

The overall structure of the MBPM is illustrated in Figure 5. The MBPM also comprises an encoder and a decoder. During the encoding phase, we extract and merge multi-scale feature maps from $L_p$, $L_{t_0}$, and $M_{t_1}$ to complement the channel-wise information of $L_p$. The decoder is responsible for gradually reconstructing the merged feature maps into the image. The skip connections between the encoder and decoder enhance the network’s reconstruction performance by passing low-level-detail information from the lower layers to the upper layers. The specific process of the MBPM is as follows: we first extract multi-scale feature maps from two branches and then merge them. We stack $L_{t_0}$ and $M_{t_1}$ together as one branch and use $L_p$ as the other branch. We perform two downsampling operations on both branches and then add the feature maps of the corresponding scales together. Finally, we perform two upsampling operations in the decoder to recover the fused image. In Figure 5, Conv2 represents downsampling, PixelShuffle stands for upsampling, and the 3D-CBlock is utilized for further feature extraction.
The overall structure of the 3D-CBlock is similar to that of the 2D-ConvBlock, as shown in Figure 6. 3D convolution operates by moving the convolution kernel in three dimensions (width, height, and depth). This process results in output features encompassing both spatial and spectral neighborhood information, effectively extracting spectral dimension details. To address memory consumption issues commonly associated with traditional 3D convolution, we employ a separable 3D convolution (Split-3D) [45]. In Split-3D, the 7 × 7 × 7 convolution kernel is split into two groups: a 7 × 1 × 1 kernel and a 1 × 7 × 7 kernel. The Split-3D achieves similar results as directly using a 7 × 7 × 7 convolution kernel [46]. The workflow of the 3D-CBlock is as follows: first, the input tensor (C, H, W) is expanded using an unsqueeze to have the dimensions (1, C, H, W), where the first dimension represents the number of feature maps. The Split-3D convolution is then applied to further extract spectral features. To perform subsequent 2D convolution operations, the feature dimension is compressed (represented by squeeze) to (C, H, W).
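A sketch of the Split-3D operation inside the 3D-CBlock is given below; the number of 3D feature maps is an illustrative assumption, but the unsqueeze/squeeze handling and the 7 × 1 × 1 plus 1 × 7 × 7 factorization follow the description above:

```python
import torch
import torch.nn as nn

class Split3DBlock(nn.Module):
    """3D-CBlock sketch: separable 3D convolution over the band (depth) axis plus spatial axes."""
    def __init__(self, feat_maps: int = 1):
        super().__init__()
        # 7x1x1 kernel mixes information along the spectral axis only
        self.spectral = nn.Conv3d(feat_maps, feat_maps, kernel_size=(7, 1, 1), padding=(3, 0, 0))
        # 1x7x7 kernel mixes information along the spatial axes only
        self.spatial = nn.Conv3d(feat_maps, feat_maps, kernel_size=(1, 7, 7), padding=(0, 3, 3))

    def forward(self, x):
        # x: (B, C, H, W) feature maps; add a unit dimension so the C bands act as the 3D depth axis
        x = x.unsqueeze(1)                      # (B, 1, C, H, W)
        x = self.spatial(self.spectral(x))      # separable stand-in for a full 7x7x7 kernel
        return x.squeeze(1)                     # (B, C, H, W), ready for subsequent 2D convolutions

out = Split3DBlock()(torch.randn(1, 6, 64, 64))   # -> (1, 6, 64, 64)
```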

3.3. Loss Function

During the training phase, the SMSTFM computes the loss function separately for the single-band prediction result $L_p$ and the final prediction result $L_t$. The total loss of the network can be mathematically represented by the following equation:
$$\mathrm{Loss}_{total} = \lambda \, \mathrm{Loss}(L_p, L_{truth}) + \mathrm{Loss}(L_t, L_{truth})$$
where $L_{truth}$ represents the true fine-resolution image at the prediction time, and the function $\mathrm{Loss}(\hat{L}, L_{truth})$ calculates the reconstruction loss between a predicted image $\hat{L}$ and the true image $L_{truth}$. The weight coefficient $\lambda$ is set to 1 to balance the contributions of the two loss terms.
The function $\mathrm{Loss}(\hat{L}, L_{truth})$ consists of two components: a pixel loss and a structure loss. The Charbonnier loss, a part of LapSRN [47], is utilized as the pixel loss, while the multi-scale structural similarity (MS-SSIM) [48] is employed to measure the overall similarity between the generated image and the ground truth fine image. The two loss functions are represented by the following equations:
$$L_{structure} = 1 - \min\left(\mathrm{MS\text{-}SSIM}\left(\hat{L}, L_{truth}\right) + \varepsilon_s,\; 1\right)$$
$$L_{pixel} = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\left(\hat{L}_i - L_{truth,i}\right)^2 + \varepsilon_p^2}$$
Here, $\varepsilon_p$ is introduced to stabilize the error of back-propagation, and $\varepsilon_s$ is used to reduce the impact of samples with a lower structural loss on the network training process. The values of $\varepsilon_p$ and $\varepsilon_s$ are set to 0.001 and 0.05, respectively. Finally, the function $\mathrm{Loss}(\hat{L}, L_{truth})$ can be expressed by the following equation:
$$\mathrm{Loss}(\hat{L}, L_{truth}) = L_{pixel} + \alpha L_{structure}$$
where $\alpha$ is the weight coefficient, which is set to 1.
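The loss defined above can be implemented along the following lines. This is a hedged sketch: it assumes reflectance values scaled to [0, 1] and relies on the third-party pytorch_msssim package for the MS-SSIM term; the constant names mirror the symbols in the equations:

```python
import torch
from pytorch_msssim import ms_ssim   # third-party package, assumed available (pip install pytorch-msssim)

EPS_P, EPS_S, ALPHA, LAMBDA = 1e-3, 0.05, 1.0, 1.0

def pixel_loss(pred, truth, eps=EPS_P):
    # Charbonnier loss: mean of sqrt((L_hat - L_truth)^2 + eps_p^2)
    return torch.mean(torch.sqrt((pred - truth) ** 2 + eps ** 2))

def structure_loss(pred, truth, eps=EPS_S):
    # 1 - min(MS-SSIM(L_hat, L_truth) + eps_s, 1); assumes inputs in [0, 1]
    return 1.0 - torch.clamp(ms_ssim(pred, truth, data_range=1.0) + eps, max=1.0)

def recon_loss(pred, truth, alpha=ALPHA):
    # Loss(L_hat, L_truth) = L_pixel + alpha * L_structure
    return pixel_loss(pred, truth) + alpha * structure_loss(pred, truth)

def total_loss(L_p, L_t, L_truth, lam=LAMBDA):
    # Loss_total = lambda * Loss(L_p, L_truth) + Loss(L_t, L_truth)
    return lam * recon_loss(L_p, L_truth) + recon_loss(L_t, L_truth)
```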

4. Experiments and Results

4.1. Study Areas and Datasets

To facilitate comparisons with other STFMs, we have utilized well-established public datasets commonly used in spatiotemporal fusion research for RS images. These datasets include the Coleambally irrigation area (CIA) and the lower Gwydir catchment (LGC) [49]. Additionally, we introduced the Nanjing dataset to assess the performance of our STFM in urban scenes with lower resolution disparities. The CIA study site is a rice-based irrigation system in southern New South Wales (NSW), Australia (145.0675°E, 34.0034°S), and the LGC study site is located in northern New South Wales (149.2815°E, 29.0855°S). These two locations represent different types of changes, namely phenological and land cover changes. Located in southern New South Wales, the CIA study area is primarily characterized by the presence of rice crops and a contemporary irrigation infrastructure. The CIA dataset comprises 17 pairs of MODIS-Landsat data acquired from 2001 to 2002, with each pair consisting of cloud-free images sized at 6 × 2040 × 1720. At the time of data collection, the CIA area remained relatively stable in terms of land cover changes. However, the temporal variation in this region can be utilized to evaluate the efficacy of the SMSTFM in predicting phenological changes. Situated in the northern region of New South Wales, the LGC study area encompasses 14 pairs of cloud-free data, each consisting of images sized at 6 × 2720 × 3200 and acquired between 2004 and 2005. Notably, a flood event occurred in this area in mid-December 2004, resulting in significant land cover modifications. This attribute of the LGC dataset renders it highly suitable for assessing the effectiveness of the SMSTFM in predicting land cover changes.
Given that the CIA and LGC study areas are predominantly composed of plains and agricultural land, this study includes additional experiments utilizing the Nanjing dataset (China, 118.803611°E, 32.075833°N) [50]. In this dataset, we used the first four bands of the Sentinel-2 satellite images (10 m resolution) and the first four bands of the Landsat 8 satellite images (30 m resolution) to form an image pair. The surface reflectance products (i.e., Level 2 products) of the Landsat images were acquired from the United States Geological Survey (USGS) (http://earthexplorer.usgs.gov/, accessed on 1 April 2023). The surface reflectance products (i.e., L2A products) of the Sentinel-2 multispectral images were acquired from the European Space Agency (ESA) (https://scihub.copernicus.eu/dhus/#/home, accessed on 1 April 2023). The Nanjing dataset contains 14 pairs of images with acquisition dates between 2017 and 2021, and each image has a size of 10800 × 10800. The resolution difference between fine and coarse images in the CIA and LGC datasets is about 16 times, while the difference between fine and coarse images in the Nanjing dataset is about three times. Therefore, the Nanjing dataset was used to explore the ability of the STF model to recover details.

4.2. Experiment Design and Evaluation

The experiment can be divided into two main parts: one focusing on the experimental results for the CIA and LGC datasets, and the other on the experimental results for the Nanjing dataset. In this paper, we compare our approach with four traditional STFMs (STARFM [17], FSDAF [22], SFSDAF [51], and Fit-FC [23]), as well as four DL-based methods (GANSTFM [34], EDCSTFN [52], SwinSTFM [13], and MLFF-GAN [35]). All three datasets have been divided into training and test sets. During the testing phase, for the LGC dataset, the prediction of the fine image on 12 December 2004 is performed using the image pair captured on 26 November 2004 and the corresponding coarse image taken on 12 December 2004. Likewise, for the CIA dataset, the prediction of the fine image on 17 April 2002 is conducted using the image pair acquired on 10 April 2002 and the corresponding coarse image captured on 17 April 2002. Lastly, for the Nanjing dataset, the prediction of the fine image on 3 October 2021 is carried out using the image pair taken on 22 March 2021, along with the corresponding coarse image captured on 3 October 2021. Additionally, it is worth noting that the images in the LGC, CIA, and Nanjing datasets were cropped to the following dimensions: 2400 × 2400, 1400 × 1400, and 4800 × 4800, respectively. The traditional algorithms utilize a fixed sliding window size of 41 × 41 when searching for similar pixels, and they consider 20 similar pixels within this window during their respective processes. Moreover, the deep learning-based algorithms incorporate appropriate methods for dataset generation and data augmentation. When creating the training set, these algorithms select the image pair closest to the prediction date as the reference images. During each training iteration, the three input images undergo random flipping and rotation as part of the data augmentation process. This data augmentation technique helps enhance the model’s robustness and generalization ability by exposing it to a diverse range of training samples. In the SMSTFM, the number of 2D-CBlocks is set as follows: m1 = 6, m2 = 9, and m3 = 6. Additionally, the number of 3D-CBlocks is set as follows: m1 = 6 and m2 = 9. The other deep learning models use the original code released by the authors.
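The random flipping and rotation applied jointly to the three input images can be sketched as follows (illustrative only; the exact augmentation pipeline may differ):

```python
import random
import torch

def augment(fine_ref, coarse_pred, fine_truth):
    """Apply the same random flips and 90-degree rotation to the reference fine image,
    the coarse image at the prediction date, and the target fine image (each shaped (C, H, W))."""
    tensors = [fine_ref, coarse_pred, fine_truth]
    if random.random() < 0.5:
        tensors = [torch.flip(t, dims=[-1]) for t in tensors]          # horizontal flip
    if random.random() < 0.5:
        tensors = [torch.flip(t, dims=[-2]) for t in tensors]          # vertical flip
    k = random.randint(0, 3)
    tensors = [torch.rot90(t, k, dims=[-2, -1]) for t in tensors]      # rotate by k * 90 degrees
    return tensors
```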
The models’ performance is assessed using six evaluation indices: the root mean square error (RMSE), Structural Similarity Index (SSIM) [53], universal image quality index (UIQI) [54], correlation coefficient (CC), spectral angle mapper (SAM) [55], and average relative global error (ERGAS) [56]. The benchmark values for RMSE, SSIM, UIQI, and CC are 0, 1, 1, and 1, respectively. A fusion result with the SAM and ERGAS values approaching zero indicates a reduction in uncertainty.
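For reference, a NumPy sketch of some of these indices is given below (RMSE, CC, SAM, and ERGAS); SSIM and UIQI are omitted, and the ERGAS convention used here (scale factor 100 divided by the coarse-to-fine resolution ratio) is one common formulation and should be treated as an assumption:

```python
import numpy as np

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def cc(pred, truth):
    # Pearson correlation coefficient over all pixels of one band
    return float(np.corrcoef(pred.ravel(), truth.ravel())[0, 1])

def sam(pred, truth, eps=1e-8):
    # Mean spectral angle in radians; inputs shaped (bands, H, W)
    p = pred.reshape(pred.shape[0], -1)
    t = truth.reshape(truth.shape[0], -1)
    cos = np.sum(p * t, axis=0) / (np.linalg.norm(p, axis=0) * np.linalg.norm(t, axis=0) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(pred, truth, ratio, eps=1e-8):
    # ratio: coarse-to-fine resolution ratio (e.g., 16 for MODIS/Landsat, 3 for the Nanjing pairs)
    terms = [(rmse(pred[b], truth[b]) / (np.mean(truth[b]) + eps)) ** 2 for b in range(pred.shape[0])]
    return float(100.0 / ratio * np.sqrt(np.mean(terms)))
```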

4.3. Experimental Results on LGC and CIA Datasets

Figure 7 and Figure 8 illustrate the predicted images and subregions of different models on the LGC test dataset, using the NIR, Blue, and Green channels as RGB. From the visual comparison, it is evident that the fusion results of the SMSTFM exhibit the highest level of restoration in terms of appearance and land cover types. Moreover, in some heterogeneous regions, the fusion results of traditional fusion methods suffer from severe spectral distortion, likely due to the significant influence of the search window in the pixel prediction process of these methods. Additionally, when the predicted image differs greatly from the reference image, it becomes challenging for traditional methods to extract sufficient meaningful information from the reference image. In contrast, deep learning-based methods generate prediction images with more details, especially in subregions with the water land-cover type, where the prediction image of the SMSTFM closely resembles the ground truth image. This demonstrates the effectiveness of deep learning methods in capturing fine details and improving the fusion performance. Furthermore, Figure 9 displays a comparison of the subregion fusion results on the AAD map. It can be observed that the SMSTFM achieves the best fusion results for pixels with the water land-cover type, even though the reference image contains only a very small number of water pixels. This result highlights the strong generalization ability and capability of learning from limited samples possessed by the SMSTFM. Quantitative comparisons of fusion results are presented in Table 2. The results indicate that deep learning-based methods outperform traditional methods in all metrics across all bands, with the MLFFGAN, SwinSTFM, and SMSTFM showing significant improvements over other deep learning algorithms. Though the SMSTFM and MLFFGAN exhibit similar metrics, it is worth noting that the SMSTFM has fewer parameters, making it a more efficient model.
Figure 10 and Figure 11 depict the predicted images and subregions of different models on the CIA test dataset, with the NIR, Blue, and Green channels used as the RGB. As shown in the figure, the fusion results of the STARFM and FSDAF exhibit severe distortions in spectral details. Additionally, the predicted image of Fit-FC appears to be overall blurry. It may be because these STF methods are highly affected by the search window during the image pixel prediction process, and they perform poorly when the image exhibits high spatial heterogeneity. Furthermore, Figure 12 presents the comparison of subregion fusion results on the AAD map. It is evident that the SMSTFM achieves the highest spectral accuracy among the models, indicating its superiority in preserving spectral information during the fusion process. The quantitative comparisons of fusion results are listed in Table 3. Across all spectral bands, the SMSTFM consistently achieves the best results in almost all metrics, demonstrating its superiority over other spatiotemporal fusion models in terms of spectral accuracy and overall fusion performance. The results on the CIA dataset further reinforce the effectiveness of the SMSTFM in handling various remote sensing data scenarios and highlight its robustness in capturing spectral and spatial details for accurate and reliable fusion results.

4.4. Experimental Results on the Nanjing Dataset

With the difference in resolution between the coarse and fine images being only three times, the spatiotemporal fusion models in this dataset primarily focus on restoring image details. Figure 13 and Figure 14 illustrate the predicted images and subregions of different models on the Nanjing test dataset, with the NIR, Blue, and Green channels used as the RGB. Overall, the predicted images of the STARFM and Fit-FC differ significantly from the real images. In contrast, the predicted images of the FSDAF and SFSDAF share a similar overall structure to the real image. However, the predicted images of the FSDAF and SFSDAF exhibit significant spectral differences and overall blurriness. Among the deep learning-based fusion models, the predicted images are visually similar to the real images in terms of the overall structure and spectrum. However, there are differences in their ability to recover image details. Figure 15 shows a comparison of the subregion fusion results on the AAD map. The predicted images of the EDCSTFN and SwinSTFM contain some erroneous pixels, while the GANSTFM and MLFFGAN exhibit notable discrepancies in the edges of objects compared to the ground truth images. In contrast, our model highly preserves rich spatial detail information in the predicted images, and the spectral information in our model is found to be closer to the real image in the overall scene when compared to other models.
The quantitative metrics for the Nanjing scene are listed in Table 4. The SMSTFM demonstrates a superior performance in almost all metrics across all bands, except for the SAM, where it slightly lags behind the FSDAF. Furthermore, the traditional algorithms are significantly inferior to the deep learning-based algorithms in all metrics except the SAM, suggesting that the traditional algorithms may perform poorly on datasets with only a three-fold difference in resolution. Additionally, the SMSTFM shows a substantial improvement compared to the suboptimal models, particularly an approximately 20% improvement in the RMSE, further highlighting the superiority of the SMSTFM in accurately restoring image details and spectral fidelity.

5. Discussion

In this section, we begin by conducting ablation experiments to explore the roles of various components within the SMSTFM. Following that, we provide a brief analysis of the efficiency of the SMSTFM compared to other STFMs.
Three variant models are designed to demonstrate the advantages of the components of the SMSTFM. We first train the SBPM independently, creating a standalone experiment referred to as the “SBPM”. This allows us to evaluate the performance of the SBPM without the additional components of the SMSTFM. To assess the impact of ConvNeXt on the network structure, we replace the ConvNeXt module with a standard convolution while keeping the rest of the structure unchanged. This sub-method is defined as “No-ConvNeXt”. To examine the influence of Split-3D on the network structure, we remove Split-3D from the 3D-CBlock, defining the sub-method as “No-Split-3D”. The results of the ablation experiments on the LGC dataset are listed in Table 5. The SMSTFM achieves the best results on almost all metrics and shows significant improvement compared to the standalone SBPM (i.e., the variant without the MBPM). This result highlights the importance of the supplementary information predicted by the MBPM. Furthermore, Figure 16 displays the prediction images of the SBPM and SMSTFM. It can be observed that while the SBPM result shares a similar structure with that of the SMSTFM, it noticeably lacks spectral details. This observation underscores the contribution of the MBPM, which enhances the spectral details and improves the overall fusion performance.
Table 6 illustrates the models’ efficiency by examining two key metrics: the number of model parameters and Multiply-Accumulate Operations (MACs). The model parameters reflect a model’s complexity and storage requirements, while the MACs measure its computational complexity and resource utilization. Models with fewer parameters and lower MACs are generally considered more efficient, requiring less storage space and computational resources. Table 6 shows that SwinSTFM, based on ViT, has the largest number of parameters, approximately 11 times that of the SMSTFM. Although the SMSTFM has more parameters than earlier models such as EDCSTFN and GANSTFM, its parameter count is significantly smaller than that of recent models such as SwinSTFM and MLFFGAN. When considering MACs, the SMSTFM still falls within the moderate range and notably consumes fewer resources than SwinSTFM. However, it is worth noting that even though the SMSTFM falls into the mid-range in terms of efficiency, it achieves the highest prediction accuracy. Therefore, the SMSTFM strikes a good balance between model efficiency and accuracy.
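Parameter and MAC counts of the kind reported in Table 6 can be obtained with an off-the-shelf profiler. The sketch below uses the third-party thop package on a stand-in network (not the SMSTFM itself); the model, input size, and band counts are arbitrary assumptions for illustration:

```python
import torch
import torch.nn as nn
from thop import profile   # third-party package, assumed available (pip install thop)

# Stand-in network: any nn.Module can be profiled in the same way.
model = nn.Sequential(
    nn.Conv2d(12, 64, kernel_size=3, padding=1),   # e.g., stacked fine + coarse multispectral inputs
    nn.GELU(),
    nn.Conv2d(64, 6, kernel_size=3, padding=1),    # 6-band output
)
dummy = torch.randn(1, 12, 256, 256)
macs, params = profile(model, inputs=(dummy,))
print(f"Params: {params / 1e6:.2f} M, MACs: {macs / 1e9:.2f} G")
```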

6. Conclusions

This article proposes an innovative spatiotemporal fusion model called the SMSTFM, combining single-band and multi-band predictions to achieve superior fusion results. Existing STFMs that use CNN for feature extraction result in hybrid features and information loss in channel dimensions, while ViT-based STFMs incur significant computational overhead. Compared to these models, the SMSTFM has the following advantages:
  • Our model addresses the issue of hybrid features and information loss in channel dimensions by concatenating the SBPM and MBPM. The SBPM establishes a mapping from low-resolution images to high-resolution images, generating preliminary fusion results without hybrid features. The MBPM efficiently extracts spatial channel-wise information from the preliminary fusion results to enhance fusion details.
  • ConvNeXt, designed based on ViT architecture, and its variants are utilized as the feature extraction modules in our model. Compared to ViT, ConvNeXt maintains a high performance while reducing computational costs.
Furthermore, significant performance improvements were observed on datasets with 16× and 3× resolution differences between coarse and fine images, highlighting the robustness and versatility of our proposed approach.
Our strategy for channel-wise feature extraction may serve as a valuable reference for tasks related to multispectral and hyperspectral remote sensing imagery. However, one limitation of the SMSTFM is that the concatenation of the SBPM and MBPM may result in a slowdown in inference speed, which is an aspect that we aim to improve in the future.

Author Contributions

Conceptualization, Z.W. and S.F.; Formal analysis, Z.W., S.F. and J.Z.; Methodology, Z.W.; Writing—original draft, Z.W.; Writing—review & editing, S.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Collaborative Innovation Project of Colleges and Universities of Anhui Province, grant number PA2023AGXC0006.

Data Availability Statement

Data available upon request.

Acknowledgments

We would like to thank the computing support from the Key Laboratory of Knowledge Engineering with Big Data (Hefei University of Technology).

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

RS: Remote sensing
DL: Deep learning
GAN: Generative adversarial network
STFM: Spatiotemporal fusion method
CNN: Convolutional neural network
ViT: Vision transformer

References

  1. Nduati, E.; Sofue, Y.; Matniyaz, A.; Park, J.G.; Yang, W.; Kondoh, A. Cropland Mapping Using Fusion of Multi-Sensor Data in a Complex Urban/Peri-Urban Area. Remote Sens. 2019, 11, 207. [Google Scholar] [CrossRef]
  2. Hwang, T.; Song, C.; Bolstad, P.V.; Band, L.E. Downscaling real-time vegetation dynamics by fusing multi-temporal MODIS and Landsat NDVI in topographically complex terrain. Remote Sens. Environ. 2011, 115, 2499–2512. [Google Scholar] [CrossRef]
  3. Arévalo, P.; Olofsson, P.; Woodcock, C.E. Continuous monitoring of land change activities and post-disturbance dynamics from Landsat time series: A test methodology for REDD+ reporting. Remote Sens. Environ. 2020, 238, 111051. [Google Scholar] [CrossRef]
  4. Hamunyela, E.; Brandt, P.; Shirima, D.; Do, H.T.T.; Herold, M.; Roman-Cuesta, R.M. Space-time detection of deforestation, forest degradation and regeneration in montane forests of Eastern Tanzania. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102063. [Google Scholar] [CrossRef]
  5. Yin, L.; Wang, L.; Li, T.; Lu, S.; Yin, Z.; Liu, X.; Li, X.; Zheng, W. U-Net-STN: A Novel End-to-End Lake Boundary Prediction Model. Land 2023, 12, 1602. [Google Scholar] [CrossRef]
  6. Zhang, C.; Wang, L.; Cheng, S.; Li, Y. SwinSUNet: Pure Transformer Network for Remote Sensing Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  7. Liu, Z.; Xu, J.; Liu, M.; Yin, Z.; Liu, X.; Yin, L.; Zheng, W. Remote sensing and geostatistics in urban water-resource monitoring: A review. Mar. Freshw. Res. 2023, 74, 747–765. [Google Scholar] [CrossRef]
  8. Liu, X.; Li, Z.; Fu, X.; Yin, Z.; Liu, M.; Yin, L.; Zheng, W. Monitoring house vacancy dynamics in the pearl river delta region: A method based on NPP-viirs night-time light remote sensing images. Land 2023, 12, 831. [Google Scholar] [CrossRef]
  9. Interdonato, R.; Ienco, D.; Gaetano, R.; Ose, K. DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn. ISPRS J. Photogramm. Remote Sens. 2019, 149, 91–104. [Google Scholar] [CrossRef]
  10. Ghrefat, H.A.; Goodell, P.C. Land cover mapping at Alkali Flat and Lake Lucero, White Sands, New Mexico, USA using multi-temporal and multi-spectral remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 616–625. [Google Scholar] [CrossRef]
  11. Jia, D.; Gao, P.; Cheng, C.; Ye, S. Multiple-feature-driven co-training method for crop mapping based on remote sensing time series imagery. Int. J. Remote Sens. 2020, 41, 8096–8120. [Google Scholar] [CrossRef]
  12. Shen, H.; Meng, X.; Zhang, L. An Integrated Framework for the Spatio–Temporal–Spectral Fusion of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7135–7148. [Google Scholar] [CrossRef]
  13. Chen, G.; Jiao, P.; Hu, Q.; Xiao, L.; Ye, Z. SwinSTFM: Remote Sensing Spatiotemporal Fusion Using Swin Transformer. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar] [CrossRef]
  14. Fu, Z.; Sun, Y.; Fan, L.; Han, Y. Multiscale and Multifeature Segmentation of High-Spatial Resolution Remote Sensing Images Using Superpixels with Mutual Optimal Strategy. Remote Sens. 2018, 10, 1289. [Google Scholar] [CrossRef]
  15. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89. [Google Scholar] [CrossRef]
  16. Belgiu, M.; Stein, A. Spatiotemporal Image Fusion in Remote Sensing. Remote Sens. 2019, 11, 818. [Google Scholar] [CrossRef]
  17. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  18. Lu, M.; Chen, J.; Tang, H.; Rao, Y.; Yang, P.; Wu, W. Land cover change detection by integrating object-based data blending model of Landsat and MODIS. Remote Sens. Environ. 2016, 184, 374–386. [Google Scholar] [CrossRef]
  19. Zhang, H.; Sun, Y.; Shi, W.; Guo, D.; Zheng, N. An object-based spatiotemporal fusion model for remote sensing images. Eur. J. Remote Sens. 2021, 54, 86–101. [Google Scholar] [CrossRef]
  20. Maselli, F. Definition of spatially variable spectral endmembers by locally calibrated multivariate regression analyses. Remote Sens. Environ. 2001, 75, 29–38. [Google Scholar] [CrossRef]
  21. Busetto, L.; Meroni, M.; Colombo, R. Combining medium and coarse spatial resolution satellite data to improve the estimation of sub-pixel NDVI time series. Remote Sens. Environ. 2008, 112, 118–131. [Google Scholar] [CrossRef]
  22. Zhu, X.; Helmer, E.H.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M.A. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sens. Environ. 2016, 172, 165–177. [Google Scholar] [CrossRef]
  23. Wang, Q.; Atkinson, P.M. Spatio-temporal fusion for daily Sentinel-2 images. Remote Sens. Environ. 2018, 204, 31–42. [Google Scholar] [CrossRef]
  24. Huang, B.; Song, H. Spatiotemporal Reflectance Fusion via Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3707–3716. [Google Scholar] [CrossRef]
  25. Song, H.; Huang, B. Spatiotemporal satellite image fusion through one-pair image learning. IEEE Trans. Geosci. Remote Sens. 2012, 51, 1883–1896. [Google Scholar] [CrossRef]
  26. Wu, B.; Huang, B.; Zhang, L. An error-bound-regularized sparse coding for spatiotemporal reflectance fusion. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6791–6803. [Google Scholar] [CrossRef]
  27. Peng, Y.; Li, W.; Luo, X.; Du, J.; Zhang, X.; Gan, Y.; Gao, X. Spatiotemporal reflectance fusion via tensor sparse representation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–18. [Google Scholar] [CrossRef]
  28. Xiao, J.; Aggarwal, A.K.; Duc, N.H.; Arya, A.; Rage, U.K.; Avtar, R. A review of remote sensing image spatiotemporal fusion: Challenges, applications and recent trends. Remote Sens. Appl. Soc. Environ. 2023, 32, 101005. [Google Scholar] [CrossRef]
  29. Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926. [Google Scholar] [CrossRef]
  30. Song, H.; Liu, Q.; Wang, G.; Hang, R.; Huang, B. Spatiotemporal Satellite Image Fusion Using Deep Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 821–829. [Google Scholar] [CrossRef]
  31. Tan, Z.; Yue, P.; Di, L.; Tang, J. Deriving High Spatiotemporal Remote Sensing Images Using Deep Convolutional Network. Remote Sens. 2018, 10, 1066. [Google Scholar] [CrossRef]
  32. Tan, Z.; Di, L.; Zhang, M.; Guo, L.; Gao, M. An Enhanced Deep Convolutional Model for Spatiotemporal Image Fusion. Remote Sens. 2019, 11, 2898. [Google Scholar] [CrossRef]
  33. Liu, X.; Deng, C.; Chanussot, J.; Hong, D.; Zhao, B. StfNet: A Two-Stream Convolutional Neural Network for Spatiotemporal Image Fusion. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6552–6564. [Google Scholar] [CrossRef]
  34. Tan, Z.; Gao, M.; Li, X.; Jiang, L. A Flexible Reference-Insensitive Spatiotemporal Fusion Model for Remote Sensing Images Using Conditional Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  35. Song, B.; Liu, P.; Li, J.; Wang, L.; Zhang, L.; He, G.; Chen, L.; Liu, J. MLFF-GAN: A Multilevel Feature Fusion With GAN for Spatiotemporal Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  36. Li, W.; Cao, D.; Peng, Y.; Yang, C. MSNet: A multi-stream fusion network for remote sensing spatiotemporal fusion based on transformer and convolution. Remote Sens. 2021, 13, 3724. [Google Scholar] [CrossRef]
  37. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  38. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
  39. Cao, J.; Liang, J.; Zhang, K.; Li, Y.; Zhang, Y.; Wang, W.; Gool, L.V. Reference-based image super-resolution with deformable attention transformer. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 325–342. [Google Scholar]
  40. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Part III 18. Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  41. Ma, X.; Wang, Q.; Tong, X.; Atkinson, P.M. A deep learning model for incorporating temporal information in haze removal. Remote Sens. Environ. 2022, 274, 113012. [Google Scholar] [CrossRef]
  42. Yu, B.; Xu, C.; Chen, F.; Wang, N.; Wang, L. HADeenNet: A hierarchical-attention multi-scale deconvolution network for landslide detection. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102853. [Google Scholar] [CrossRef]
  43. Bronskill, J.; Gordon, J.; Requeima, J.; Nowozin, S.; Turner, R. Tasknorm: Rethinking batch normalization for meta-learning. In Proceedings of the International Conference on Machine Learning, Virtual Event, 13–18 July 2020; pp. 1153–1164. [Google Scholar]
  44. Hendrycks, D.; Gimpel, K. Gaussian error linear units (GELUs). arXiv 2016, arXiv:1606.08415. [Google Scholar]
  45. Zhu, Z.; Tao, Y.; Luo, X. HCNNet: A Hybrid Convolutional Neural Network for Spatiotemporal Image Fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  46. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  47. Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5835–5843. [Google Scholar]
  48. Wang, Z.; Simoncelli, E.; Bovik, A. Multiscale structural similarity for image quality assessment. In Proceedings of the Thrity-Seventh Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  49. Emelyanova, I.V.; McVicar, T.R.; Van Niel, T.G.; Li, L.T.; van Dijk, A.I. Assessing the accuracy of blending Landsat–MODIS surface reflectances in two landscapes with contrasting spatial and temporal dynamics: A framework for algorithm selection. Remote Sens. Environ. 2013, 133, 193–209. [Google Scholar] [CrossRef]
  50. Chen, Y.; Shi, K.; Ge, Y.; Zhou, Y. Spatiotemporal Remote Sensing Image Fusion Using Multiscale Two-Stream Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  51. Li, X.; Foody, G.M.; Boyd, D.S.; Ge, Y.; Zhang, Y.; Du, Y.; Ling, F. SFSDAF: An enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion. Remote Sens. Environ. 2020, 237, 111537. [Google Scholar] [CrossRef]
  52. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  53. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  54. Wang, Z.; Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  55. Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm. In Summaries of the Third Annual JPL Airborne Geoscience Workshop; JPL: Pasadena, CA, USA, 1992. [Google Scholar]
  56. Khan, M.M.; Alparone, L.; Chanussot, J. Pansharpening Quality Assessment Using the Modulation Transfer Functions of Instruments. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3880–3891. [Google Scholar] [CrossRef]
Figure 1. General framework of STFMs based on CNNs and GANs.
Figure 2. General structure of our proposed SMSTFM. The network consists of two modules: the Single-Band Prediction Module (SBPM) and the Multi-Band Prediction Module (MBPM). $L_{t_x}^y$ and $M_{t_x}^y$ represent the y-th spectral band of the fine and coarse images at time $t_x$, respectively, while $L_{t_x}$ and $M_{t_x}$ represent the multispectral fine and coarse images at time $t_x$, respectively.
Figure 3. Architecture of the SBPM. Convi represents convolution operations with a kernel size of ‘i’ and a stride of ‘i’. ×mi represents concatenating ‘mi’ identical convolution blocks together.
Figure 4. The overall structure of the 2D-ConvBlock.
Figure 5. The architecture of the MBPM. Convi represents convolution operations with a kernel size of ‘i’ and a stride of ‘i’. ×mi represents concatenating ‘mi’ identical convolution blocks together.
Figure 6. The overall structure of the 3D-ConvBlock.
Figure 7. Fusion results for the LGC dataset on 12 December 2004 with different methods. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 8. Enlarged display of the rectangular region in the prediction images of various methods in Figure 7. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 9. Average absolute difference maps between the prediction images and ground truth for each image in Figure 8. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
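As their name indicates, the difference maps in Figure 9 (and in Figures 12 and 15) show the per-pixel absolute error between a prediction and the ground truth, averaged over the spectral bands. A minimal NumPy sketch, with hypothetical array names, is:

```python
import numpy as np

def mean_absolute_difference_map(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """pred and truth have shape (bands, height, width); the result is a
    (height, width) map of the per-pixel absolute error averaged over bands."""
    assert pred.shape == truth.shape
    return np.abs(pred - truth).mean(axis=0)
```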
Figure 10. Fusion results for the CIA dataset on 17 April 2002 with different methods. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 11. Enlarged display of the rectangular region in the prediction images of various methods in Figure 10. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 12. Average absolute difference maps between the prediction images and ground truth for each image in Figure 11. (a) Color bar. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 13. Fusion results for the Nanjing dataset on 3 October 2021 with different methods. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 14. Enlarged display of the rectangular region in the prediction images of various methods in Figure 13. (a) Ground truth. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 15. Average absolute difference maps between the prediction images and ground truth for each image in Figure 14. (a) Color bar. (b) STARFM. (c) FSDAF. (d) SFSDAF. (e) Fit-FC. (f) EDCSTFN. (g) GANSTFM. (h) SwinSTFM. (i) MLFFGAN. (j) SMSTFM.
Figure 16. Fusion results of the SBP and SMSTFM on the LGC dataset. (a) SBP. (b) SMSTFM.
Table 1. Comparison of the strengths and weaknesses of different categories of STFMs.
Model Type | Strengths | Weaknesses
CNN-based Models [30,31,32,33] | Extract features from coarse images and restore fine images through techniques such as upsampling or deconvolution. | Complexity and redundancy increase as the amount of spatial information grows.
GAN-based Models [23,34,35] | Exploit the competition between a generator and a discriminator to produce increasingly lifelike and finely detailed images. | Must balance structure and texture restoration, which can affect the quality and consistency of the generated outputs.
ViT-based Models [13,36] | Focus on crucial areas and details within the image; capture global information. | High computational load; loss of shallow-level features.
Table 2. Quantitative evaluation results for the LGC dataset on 12 December 2004 with different methods. The bold values indicate the best results.
Bands | STARFM | FSDAF | SFSDAF | Fit-FC | EDCSTFN | GANSTFM | SwinSTFM | MLFFGAN | SMSTFM
RMSE (↓)
Blue | 0.0152 | 0.0151 | 0.0153 | 0.0251 | 0.0134 | 0.0133 | 0.0125 | 0.0112 | 0.0111
Green | 0.0207 | 0.0202 | 0.0215 | 0.0256 | 0.0191 | 0.0187 | 0.0172 | 0.0162 | 0.0161
Red | 0.0255 | 0.0249 | 0.0275 | 0.0322 | 0.0235 | 0.0236 | 0.0205 | 0.0199 | 0.0195
NIR | 0.0387 | 0.0388 | 0.0404 | 0.0524 | 0.0371 | 0.0379 | 0.0325 | 0.0311 | 0.0310
SWIR1 | 0.0633 | 0.0625 | 0.0708 | 0.0791 | 0.0525 | 0.0556 | 0.0467 | 0.0464 | 0.0457
SWIR2 | 0.0570 | 0.0537 | 0.0658 | 0.0569 | 0.0397 | 0.0425 | 0.0365 | 0.0346 | 0.0345
Average | 0.0367 | 0.0358 | 0.0402 | 0.0452 | 0.0309 | 0.0319 | 0.0276 | 0.0266 | 0.0263
SSIM (↑)
Blue | 0.9213 | 0.9138 | 0.9161 | 0.9004 | 0.9359 | 0.9365 | 0.9441 | 0.9469 | 0.9486
Green | 0.8905 | 0.8904 | 0.8748 | 0.8725 | 0.9052 | 0.9073 | 0.9110 | 0.9186 | 0.9191
Red | 0.8579 | 0.8555 | 0.8300 | 0.8362 | 0.8787 | 0.8803 | 0.8779 | 0.8950 | 0.8972
NIR | 0.7706 | 0.7631 | 0.7677 | 0.7317 | 0.7888 | 0.7921 | 0.8126 | 0.8079 | 0.8132
SWIR1 | 0.5524 | 0.5683 | 0.5587 | 0.5304 | 0.6481 | 0.6460 | 0.6833 | 0.6841 | 0.6917
SWIR2 | 0.5494 | 0.6027 | 0.5546 | 0.6017 | 0.7058 | 0.6967 | 0.7241 | 0.7391 | 0.7430
Average | 0.7570 | 0.7656 | 0.7503 | 0.7455 | 0.8104 | 0.8098 | 0.8255 | 0.8319 | 0.8355
UIQI (↑)
Blue | 0.7153 | 0.6953 | 0.7362 | 0.4538 | 0.7628 | 0.7771 | 0.6456 | 0.8207 | 0.8360
Green | 0.7075 | 0.6989 | 0.6895 | 0.5837 | 0.7541 | 0.7722 | 0.8252 | 0.8150 | 0.8265
Red | 0.7083 | 0.6960 | 0.6722 | 0.5715 | 0.7587 | 0.7702 | 0.8190 | 0.8217 | 0.8117
NIR | 0.8063 | 0.7952 | 0.7827 | 0.6648 | 0.8194 | 0.8037 | 0.8722 | 0.8626 | 0.8742
SWIR1 | 0.7578 | 0.7513 | 0.7264 | 0.6242 | 0.8314 | 0.8177 | 0.8604 | 0.8681 | 0.8779
SWIR2 | 0.6752 | 0.6456 | 0.6766 | 0.6335 | 0.8314 | 0.8122 | 0.8547 | 0.8669 | 0.8707
Average | 0.7284 | 0.7088 | 0.7189 | 0.5886 | 0.7930 | 0.7922 | 0.8460 | 0.8425 | 0.8495
CC (↑)
Blue | 0.7165 | 0.6990 | 0.7367 | 0.4768 | 0.7699 | 0.7805 | 0.8117 | 0.8276 | 0.8292
Green | 0.7090 | 0.7067 | 0.6915 | 0.5838 | 0.7609 | 0.7740 | 0.8193 | 0.8190 | 0.8290
Red | 0.7096 | 0.7050 | 0.6775 | 0.5717 | 0.7633 | 0.7723 | 0.8213 | 0.8233 | 0.8331
NIR | 0.8255 | 0.8265 | 0.8002 | 0.6674 | 0.8287 | 0.8095 | 0.8785 | 0.8730 | 0.8782
SWIR1 | 0.7855 | 0.7980 | 0.7673 | 0.6266 | 0.8372 | 0.8215 | 0.8618 | 0.8686 | 0.8789
SWIR2 | 0.7378 | 0.7720 | 0.7490 | 0.6488 | 0.8358 | 0.8167 | 0.8559 | 0.8677 | 0.8714
Average | 0.7473 | 0.7511 | 0.7371 | 0.5958 | 0.7993 | 0.7957 | 0.8414 | 0.8465 | 0.8532
SAM (↓) | 0.1959 | 0.1994 | 0.2125 | 0.1954 | 0.1382 | 0.1429 | 0.1247 | 0.1240 | 0.1218
ERGAS (↓) | 4.0329 | 3.9691 | 3.9563 | 4.3234 | 3.2709 | 3.3205 | 3.1502 | 3.1667 | 3.1205
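The metrics reported in Tables 2–5 (RMSE, SSIM, UIQI, CC, SAM, and ERGAS) are standard image-quality measures. As a reference only, a NumPy sketch of three of them is given below; the array layout (bands, height, width), the use of radians for SAM, and the 1/16 default scale ratio in ERGAS are assumptions chosen to match a 16× resolution difference, not values taken from the paper.

```python
import numpy as np

def rmse_per_band(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    # pred, truth: (bands, height, width) reflectance arrays
    return np.sqrt(((pred - truth) ** 2).reshape(pred.shape[0], -1).mean(axis=1))

def sam(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-12) -> float:
    """Mean spectral angle (in radians) between predicted and true spectra at each pixel."""
    p = pred.reshape(pred.shape[0], -1)
    t = truth.reshape(truth.shape[0], -1)
    cos = (p * t).sum(axis=0) / (np.linalg.norm(p, axis=0) * np.linalg.norm(t, axis=0) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)).mean())

def ergas(pred: np.ndarray, truth: np.ndarray, scale_ratio: float = 1.0 / 16.0) -> float:
    """Relative dimensionless global error; scale_ratio is the fine/coarse pixel-size ratio."""
    rmse = rmse_per_band(pred, truth)
    band_means = truth.reshape(truth.shape[0], -1).mean(axis=1)
    return float(100.0 * scale_ratio * np.sqrt(np.mean((rmse / band_means) ** 2)))
```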
Table 3. Quantitative evaluation results for the CIA dataset on 17 April 2002 with different methods. The bold values indicate the best results.
Bands | STARFM | FSDAF | SFSDAF | Fit-FC | EDCSTFN | GANSTFM | SwinSTFM | MLFFGAN | SMSTFM
RMSE (↓)
Blue | 0.0130 | 0.0114 | 0.0112 | 0.0743 | 0.0134 | 0.0139 | 0.0125 | 0.0094 | 0.0097
Green | 0.0149 | 0.0135 | 0.0131 | 0.0767 | 0.0134 | 0.0139 | 0.0131 | 0.0117 | 0.0117
Red | 0.0210 | 0.0191 | 0.0196 | 0.1496 | 0.0191 | 0.0196 | 0.0184 | 0.0173 | 0.0161
NIR | 0.0373 | 0.0340 | 0.0373 | 0.5743 | 0.0284 | 0.0300 | 0.0277 | 0.0279 | 0.0269
SWIR1 | 0.0419 | 0.0392 | 0.0411 | 0.6279 | 0.0344 | 0.0370 | 0.0339 | 0.0340 | 0.0318
SWIR2 | 0.0396 | 0.0364 | 0.0391 | 0.4303 | 0.0314 | 0.0338 | 0.0316 | 0.0312 | 0.0289
Average | 0.0280 | 0.0256 | 0.0269 | 0.3222 | 0.0234 | 0.0247 | 0.0229 | 0.0219 | 0.0208
SSIM (↑)
Blue | 0.9282 | 0.9241 | 0.9237 | 0.8153 | 0.9168 | 0.9191 | 0.9253 | 0.9429 | 0.9408
Green | 0.9278 | 0.9235 | 0.9221 | 0.7872 | 0.9332 | 0.9336 | 0.9363 | 0.9375 | 0.9359
Red | 0.8820 | 0.8802 | 0.8734 | 0.7139 | 0.8914 | 0.8945 | 0.8984 | 0.8952 | 0.9005
NIR | 0.8129 | 0.8128 | 0.7923 | 0.5838 | 0.8444 | 0.8433 | 0.8496 | 0.8406 | 0.8499
SWIR1 | 0.7695 | 0.7631 | 0.7500 | 0.5134 | 0.7895 | 0.7900 | 0.8003 | 0.7887 | 0.7994
SWIR2 | 0.7593 | 0.7591 | 0.7363 | 0.5335 | 0.7975 | 0.7974 | 0.8001 | 0.7903 | 0.8021
Average | 0.8466 | 0.8438 | 0.8330 | 0.6579 | 0.8621 | 0.8630 | 0.8683 | 0.8659 | 0.8714
UIQI (↑)
Blue | 0.6952 | 0.7611 | 0.7795 | 0.0493 | 0.7887 | 0.8004 | 0.8156 | 0.8157 | 0.8344
Green | 0.7299 | 0.7655 | 0.7849 | 0.0704 | 0.8087 | 0.8200 | 0.8304 | 0.8182 | 0.8338
Red | 0.8051 | 0.8301 | 0.8268 | 0.0524 | 0.8408 | 0.8530 | 0.8638 | 0.8556 | 0.8787
NIR | 0.8260 | 0.8469 | 0.8217 | 0.0083 | 0.8709 | 0.8837 | 0.8798 | 0.8765 | 0.8879
SWIR1 | 0.8387 | 0.8544 | 0.8419 | 0.0132 | 0.8694 | 0.8677 | 0.8803 | 0.8704 | 0.8929
SWIR2 | 0.8455 | 0.8647 | 0.8459 | 0.0335 | 0.8805 | 0.8822 | 0.8928 | 0.8885 | 0.9047
Average | 0.7901 | 0.8205 | 0.8168 | 0.0378 | 0.8432 | 0.8512 | 0.8604 | 0.8541 | 0.8720
CC (↑)
Blue | 0.7002 | 0.7676 | 0.7866 | 0.1250 | 0.8204 | 0.8233 | 0.8380 | 0.8300 | 0.8417
Green | 0.7299 | 0.7665 | 0.7852 | 0.1430 | 0.8159 | 0.8233 | 0.8264 | 0.8299 | 0.8358
Red | 0.8054 | 0.8311 | 0.8270 | 0.1237 | 0.8477 | 0.8555 | 0.8674 | 0.8622 | 0.8801
NIR | 0.8287 | 0.8476 | 0.8236 | 0.0407 | 0.8777 | 0.8845 | 0.8915 | 0.8825 | 0.8919
SWIR1 | 0.8398 | 0.8568 | 0.8430 | 0.0595 | 0.8753 | 0.8681 | 0.8818 | 0.8771 | 0.8945
SWIR2 | 0.8472 | 0.8670 | 0.8473 | 0.1088 | 0.8891 | 0.8828 | 0.8938 | 0.8918 | 0.9064
Average | 0.7919 | 0.8228 | 0.8188 | 0.1001 | 0.8543 | 0.8562 | 0.8664 | 0.8622 | 0.8751
SAM (↓) | 0.0943 | 0.0855 | 0.0984 | 0.2160 | 0.0775 | 0.0745 | 0.0689 | 0.0679 | 0.0656
ERGAS (↓) | 2.9046 | 2.7659 | 2.7967 | 8.7648 | 2.6280 | 2.6675 | 2.5728 | 2.5086 | 2.4715
Table 4. Quantitative evaluation results for the Nanjing dataset on 3 October 2021 with different methods. The bold values indicate the best results.
Bands | STARFM | FSDAF | SFSDAF | Fit-FC | EDCSTFN | GANSTFM | SwinSTFM | MLFFGAN | SMSTFM
RMSE (↓)
Blue | 0.04952 | 0.04044 | 0.04214 | 0.56809 | 0.01877 | 0.01855 | 0.01596 | 0.01975 | 0.01475
Green | 0.05220 | 0.04496 | 0.05491 | 0.65046 | 0.01975 | 0.01951 | 0.01636 | 0.02118 | 0.01622
Red | 0.06550 | 0.04769 | 0.06053 | 0.60749 | 0.02086 | 0.02342 | 0.01915 | 0.02055 | 0.01873
NIR | 0.14108 | 0.12801 | 0.23715 | 1.03641 | 0.03635 | 0.03939 | 0.03459 | 0.03458 | 0.03415
Average | 0.07708 | 0.06528 | 0.98688 | 0.71561 | 0.02393 | 0.02522 | 0.02151 | 0.02401 | 0.02096
SSIM (↑)
Blue | 0.31158 | 0.52964 | 0.51456 | 0.02289 | 0.90414 | 0.87968 | 0.92052 | 0.90009 | 0.93977
Green | 0.48018 | 0.59608 | 0.54275 | 0.02465 | 0.89781 | 0.87062 | 0.90973 | 0.89409 | 0.93551
Red | 0.17417 | 0.50905 | 0.41117 | 0.02670 | 0.88418 | 0.83788 | 0.89169 | 0.88290 | 0.91553
NIR | 0.54811 | 0.59174 | 0.34822 | 0.03438 | 0.76216 | 0.76362 | 0.77270 | 0.75838 | 0.85080
Average | 0.37851 | 0.55663 | 0.45417 | 0.02716 | 0.86207 | 0.83795 | 0.87366 | 0.85886 | 0.91040
UIQI (↑)
Blue | 0.52998 | 0.65937 | 0.58238 | 0.00243 | 0.89113 | 0.85877 | 0.91052 | 0.86163 | 0.91857
Green | 0.60971 | 0.68126 | 0.51493 | 0.00196 | 0.89031 | 0.85950 | 0.90973 | 0.86114 | 0.92018
Red | 0.49221 | 0.67155 | 0.49417 | 0.00113 | 0.90371 | 0.87676 | 0.89169 | 0.89558 | 0.92955
NIR | 0.57671 | 0.63068 | 0.34055 | 0.03375 | 0.91624 | 0.91977 | 0.77270 | 0.91489 | 0.94521
Average | 0.55215 | 0.66072 | 0.48301 | 0.00860 | 0.90035 | 0.87870 | 0.87116 | 0.88331 | 0.92838
CC (↑)
Blue | 0.81336 | 0.83992 | 0.69035 | 0.04680 | 0.90305 | 0.86550 | 0.91455 | 0.90962 | 0.92301
Green | 0.78497 | 0.80331 | 0.64405 | 0.03784 | 0.89801 | 0.86998 | 0.91406 | 0.90522 | 0.92570
Red | 0.76028 | 0.78203 | 0.58443 | 0.01651 | 0.90810 | 0.88590 | 0.91688 | 0.92119 | 0.93904
NIR | 0.73047 | 0.74072 | 0.53996 | 0.32645 | 0.91727 | 0.92324 | 0.93000 | 0.91975 | 0.94934
Average | 0.77227 | 0.79149 | 0.61470 | 0.12728 | 0.90661 | 0.88615 | 0.91887 | 0.91394 | 0.93427
SAM (↓) | 0.29878 | 0.28475 | 0.31374 | 0.49515 | 0.09007 | 0.07389 | 0.07791 | 0.09514 | 0.06548
ERGAS (↓) | 7.42311 | 4.82085 | 5.50971 | 16.91114 | 3.07252 | 3.13715 | 2.84120 | 3.10399 | 2.83762
Table 5. Quantitative evaluation results for the LGC dataset on 13 January 2005 with different methods. The bold values indicate the best results.
Metric | SBPM | No-ConvNeXt | No-Spli-3d | SMSTFM
RMSE | 0.03201 | 0.03484 | 0.03089 | 0.03032
SSIM | 0.80107 | 0.80820 | 0.81294 | 0.81792
UIQI | 0.80135 | 0.81482 | 0.83382 | 0.83482
CC | 0.90258 | 0.90122 | 0.91574 | 0.91752
SAM | 0.15297 | 0.13859 | 0.12775 | 0.12496
ERGAS | 3.03965 | 2.93864 | 2.92264 | 2.92014
Table 6. Efficiency comparison of various models in spatiotemporal fusion.
Method | Parameters (M) | MACs (G)
EDCSTFN | 0.284 | 18.585
GANSTFM | 0.585 | 37.770
SMSTFM | 3.339 | 18.938
MLFFGAN | 8.701 | 17.369
SwinSTFM | 37.466 | 28.180
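Parameter counts such as those in Table 6 can be reproduced directly in PyTorch, whereas MACs are usually measured with a profiling tool (e.g., ptflops or thop). The snippet below is a generic sketch with a stand-in module; it does not reproduce the SMSTFM network or the exact measurement setup used here.

```python
import torch.nn as nn

def count_parameters_in_millions(model: nn.Module) -> float:
    """Trainable parameters of a model, expressed in millions (the unit used in Table 6)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Stand-in module for demonstration only; the real fusion networks are not reproduced here.
toy = nn.Sequential(
    nn.Conv2d(6, 64, kernel_size=3, padding=1),
    nn.GELU(),
    nn.Conv2d(64, 6, kernel_size=3, padding=1),
)
print(f"{count_parameters_in_millions(toy):.3f} M parameters")
```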