Article

An Enhanced Single-Pair Learning-Based Reflectance Fusion Algorithm with Spatiotemporally Extended Training Samples

1 College of Mining Engineering, Taiyuan University of Technology, Taiyuan 030024, China
2 School of Land Science and Technology, China University of Geosciences, Beijing 100083, China
3 Shanxi Coal Geology Geophysical Surveying Exploration Institute, Jinzhong 030600, China
4 China Centre for Resources Satellite Data and Application, Beijing 100094, China
5 Academy of Opto-Electronics, Chinese Academy of Sciences, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(8), 1207; https://doi.org/10.3390/rs10081207
Submission received: 28 May 2018 / Revised: 11 July 2018 / Accepted: 19 July 2018 / Published: 1 August 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Spatiotemporal fusion methods are considered a useful tool for generating multi-temporal reflectance data from a limited number of high-resolution images and the necessary low-resolution images. In particular, the superiority of the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM) in capturing phenology and land cover type changes has been preliminarily demonstrated. However, the dictionary training process, which is a key step in sparse-learning-based fusion algorithms, and its effect on fusion quality remain unclear. In this paper, an enhanced spatiotemporal fusion scheme based on the single-pair SPSTFM algorithm is proposed by improving the dictionary learning process, and is then evaluated on two actual datasets, one representing a rural area with phenology changes and the other an urban area with land cover type changes. The strategy for enhancing the dictionary learning process enlarges the training datasets in two modes, using spatially or temporally extended samples. Compared to the original learning-based algorithm and other typical single-pair fusion models, experimental results from the proposed method with the two extension modes show improved performance in modeling reflectance for both datasets. Furthermore, the strategy with temporally extended training samples is more effective than the strategy with spatially extended training samples for the area with phenology changes, whereas the opposite holds for the area with land cover type changes.

1. Introduction

Given the growing application requirements of a variety of refined and high-frequency thematic studies, such as land use and cover change [1], ecological environment monitoring [2], forest and pasture management [3], oceanographic surveys [4], and disaster monitoring [5], possible solutions for the frequent acquisition of high-spatial-resolution remotely sensed data have been widely proposed. One significant line of work offers a direct solution: the steadily increasing launch of high-quality remote sensors, some of which provide high spatial resolution (e.g., WorldView-3/4 and Gaojing-1/2 with 0.31 and 0.5 m resolution, respectively), high temporal resolution (e.g., the Moderate Resolution Imaging Spectroradiometer (MODIS) and other meteorological satellites), high spectral resolution (e.g., EO-1 Hyperion and Gaofen-3, both with 30 m resolution), or even large constellations (e.g., the Gaojing project, which will send 16 similar optical satellites into space no later than 2020). However, no single sensor can possess all of these attributes because of the inherent conflicts among the spatial, temporal, and spectral characteristics of imaging systems. Given the constrained orbits of the carrying platforms, severe climate conditions, and the economic cost of massive spatiotemporal data, this challenging problem is unlikely to be resolved in the foreseeable future despite the increasing number of satellites.
Other attempts, which rely on spatiotemporal reconstruction techniques for remotely sensed data, are rapidly being developed to create new images with high spatial, temporal, and spectral quality; coupling the spatial and temporal dimensions is particularly urgent. In a broad sense, analogous techniques such as image restoration [6,7], super-resolution [8], and gap filling [9,10] can be regarded as different patterns of image reconstruction. Although acceptable results can be expected from these methods under proper application conditions, restrictions in model universality, reconstruction precision, and the physical principles of remote sensing significantly limit the depth and scope of their applications. The image fusion strategy, especially in the spatial and temporal dimensions, provides another effective way to synthesize an optimized image by combining spatial and temporal information from multi-source remote sensors that individually possess different spatiotemporal characteristics (e.g., a high-spatial, low-temporal resolution image and a low-spatial, high-temporal resolution image).
Early image fusion frameworks rely on transformation models that retrieve a high-resolution multispectral image from a high-resolution panchromatic image and a low-resolution multispectral image, such as principal component analysis [11], intensity-hue-saturation transforms [12], and wavelet transforms [13]; these operate only on digital numbers and a few mathematical models inherited from the digital image processing field. The visual quality of the generated images is remarkably enhanced by these traditional approaches, whereas the physical meaning of the fused image itself and application-oriented analysis and validation are generally absent [14]. Thus, this category of fusion strategies is inadequate for enhancing or further parsing the image information of interest to users. Spatiotemporal fusion, which aims to predict high-resolution images at a high temporal frequency by blending high-resolution images at observed dates with low-resolution images at the corresponding dates, has emerged in this research field as a promising way to resolve the aforementioned problem. Unlike early fusion methods, spatiotemporal fusion models establish spatiotemporal correlations between the input high- and low-resolution images based on physical parameters of remote sensing, such as radiance and apparent or surface reflectance. From another perspective, however, spatiotemporal fusion is not so much a single novel technique as a way of thinking about fusion strategies, within which spectral unmixing, spatiotemporal filtering, and sparse learning are currently utilized for an accurate description of the radiometric spectrum changes of surface features.
The unmixing-based fusion methodology, which is considered effective for cases without significant seasonal change, was first presented by Fortin et al. [15] and Zhukov et al. [16] and then validated and improved by Minghelli-Roman et al. [17], Zurita-Milla et al. [18], and Gevaert and García-Haro [19]. The difference among these methods is that neighborhood spectral information was not introduced by Fortin et al. [15] and Maselli [20] but was embedded in the works of Zhukov et al. [16] and Cherchali et al. [21]. In addition, the linear unmixing model resolved by least squares or multiple linear regression is preferred due to its simplicity and efficiency. Recently, a flexible spatiotemporal data fusion (FSDAF) method was proposed by combining spectral unmixing analysis with a thin-plate spline interpolator [22] and compared with the algorithm of Zurita-Milla et al. [18]; FSDAF demonstrates superior performance in capturing reflectance changes caused by land cover conversions.
As a popular spatiotemporal fusion strategy, models based on spatiotemporal filtering assign additional temporal and spectral information to a high-resolution image with the help of ancillary low-spatial, high-temporal resolution images. Typically, the spatial and temporal adaptive reflectance fusion model (STARFM) [23] provides accurate, efficient, and stable predictions under various input data conditions. Two improved versions of STARFM have been proposed, one accounting for sensor observation differences between cover types when calculating their weight contributions to the target pixel [24] and the other focusing on data optimization [25]. To capture short-lived surface changes in the image, the enhanced STARFM (ESTARFM) [26] algorithm retains the weight function and its contribution rules from STARFM and concentrates on improving fusion quality for land covers with significant temporal spectrum variation (e.g., vegetation). Although additional detailed spatial change features can be obtained by ESTARFM, the temporal characteristics to be simulated should be similar, or even very close, to the observed data. Thus, blending high- and low-resolution images at the observed date(s) with a low-resolution image at the predicted date is theoretically unreasonable when a substantial temporal discrepancy exists between these images. Apart from models that share similar theoretical principles [27] or offer confined improvements [28] with respect to STARFM and ESTARFM, a reflectance fusion algorithm based on a semi-physical model [29] provides another novel path for building spatiotemporal correlations between multi-source images; this algorithm has been preliminarily validated in a regional application [30].
Another fusion approach derived from sparse learning theory was recently developed by combining super-resolution reconstruction with sparse representation achieved through dictionary learning. Learning-based models, which currently include a single-pair method [31] and a two-pair method [32] according to the number of input training image pairs, hold promise for solving fundamental problems in spatiotemporal fusion [33,34]; however, their performance has been shown to be less stable than that of reconstruction-based models such as STARFM (single-pair) and ESTARFM (two-pair). Because these sparse-learning fusion strategies are built upon a prior learning process that trains on insufficient image samples, the dictionary training step has difficulty providing a redundant expression of the input high- and low-resolution images. Consequently, the derived "overcomplete" dictionary is not representative of the data acquired at both the observed and modeled dates, and accurate retrieval of the transition images, and hence of the two-layered fusion results, is difficult. To this end, an enhanced single-pair learning-based fusion scheme with an improved dictionary learning step, together with an evaluation method for selecting the spatiotemporal extension mode of the dictionary training samples, is proposed in Section 2. Experimental results are shown in Section 3, the discussion is presented in Section 4, and conclusions are drawn in Section 5.

2. Methodology

Although the two main existing spatiotemporal fusion methods based on sparse learning theory vary in model construction, fusion pattern, and complexity, their primary theoretical basis and its contribution to the fusion process are nearly the same. Considering the universality and simplicity of the algorithm with a single image pair, an improved fusion scheme derived from the single-image-pair method is first proposed on the basis of an enhanced dictionary training strategy and then evaluated with two remotely sensed datasets.

2.1. Proposed Fusion Scheme with Enhanced Dictionary-Training Process

In the sparse-learning fusion method, remotely sensed images from the same sensor and channel are treated as different sparse "versions" of an invariable overcomplete dictionary D on different acquisition dates. Here, the sparse "version" is generally called the sparse coefficient α and is considered an indicator of the seasonal component of an acquired image, relative to D (which mainly encodes spatial and texture features). When no significant discrepancy in texture context occurs, the dictionary D derived from the observed image pair can stand in for the one that would be derived from the modeled image pair. The key step of the sparse-learning fusion algorithm is therefore to retrieve a high-precision overcomplete dictionary D by training on the high- and low-resolution image pair at the observed date. If a stably performing dictionary training algorithm is applied (e.g., the coupled K-SVD algorithm), the accuracy of the sparse-learning fusion results depends significantly on the sufficiency of the input training samples, which is clearly not satisfied in the original single-pair-based fusion algorithm.
For the retrieval of a high-precision D, an improved sparse-learning fusion method with an enhanced dictionary training process is proposed in this study; the overall processing flow is shown in Figure 1. In this method, spatiotemporally extended training samples are utilized to improve the sufficiency of the dictionary training operations in both fusion layers of the single-pair learning-based algorithm. Two modes are designed to increase the employed training samples: the spatially extended mode and the temporally extended mode. Specifically, the spatially extended mode increases only the image size (from $S_0$ to $S_1$ in Figure 1) of all the input training samples (including the low-resolution image and the high-low resolution image pair) at the observed date ($t_1$ in Figure 1). By contrast, the temporally extended mode increases the number of input training samples, which are obtained from additional acquisition dates ($t_3$, $t_4$, ..., $t_n$ in Figure 1) and have the same image size as the original input images.
Consider the case where a single image pair is employed as input. Assume that $H_1$ and $L_1$ denote the high- and low-resolution images at $t_1$ (the observed date), $L_2$ denotes the low-resolution image at $t_2$ (the modeled date), $H_2$ denotes the high-resolution image at $t_2$ that is to be predicted, and the image size of $H_1$, $L_1$, $L_2$, and $H_2$ is $S_0 \times S_0$. The high-resolution dictionary $D_h$ and the low-resolution dictionary $D_l$ are then derived by minimizing the following improved objective functions:
$\{ D_l, \alpha_1 \} = \arg\min_{D_l, \alpha_1} \left\{ \left\| X_1^{new} - D_l \alpha_1 \right\|_F^2 \right\}$ (1)
$D_h = \arg\min_{D_h} \left\| Y_1^{new} - D_h \alpha_1 \right\|_F^2$ (2)
where $\alpha_1$ denotes the sparse coefficients of $D_l$ and $D_h$ at $t_1$, and $X_1^{new}$ and $Y_1^{new}$, respectively, denote the spatially or temporally extended training sample matrices that replace the original training sample matrices $X_1$ and $Y_1$ extracted from the difference image $(H_1 - L_1)$ and the low-resolution image $L_1$.
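For concreteness, the following Python sketch illustrates one way Equations (1) and (2) could be solved; it is not the authors' implementation. The coupled K-SVD step is replaced by scikit-learn's generic dictionary learner, and the names X_low, Y_high, n_atoms, and sparsity are illustrative assumptions (columns are vectorized training patches).

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def train_coupled_dictionaries(X_low, Y_high, n_atoms=256, sparsity=5):
    """Sketch of Eqs. (1)-(2): jointly learn D_l and alpha_1 from the
    low-resolution samples, then fit D_h to the high-resolution samples."""
    # Eq. (1): minimize ||X_new - D_l * alpha_1||_F^2 over D_l and alpha_1.
    learner = DictionaryLearning(n_components=n_atoms,
                                 transform_algorithm='omp',
                                 transform_n_nonzero_coefs=sparsity)
    alpha_1 = learner.fit_transform(X_low.T).T      # (n_atoms, n_samples)
    D_l = learner.components_.T                     # (patch_dim_low, n_atoms)

    # Eq. (2): with alpha_1 fixed, the minimizer over D_h is a least-squares
    # fit, D_h = Y_new * pinv(alpha_1).
    D_h = Y_high @ np.linalg.pinv(alpha_1)          # (patch_dim_high, n_atoms)
    return D_l, D_h, alpha_1
```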
In the dictionary training strategy with the spatially extended mode, the training sample matrices $X_1^{new}$ and $Y_1^{new}$ in Equations (1) and (2) are extracted from the enlarged difference image $(H_1^{enl} - L_1^{enl})$ and the enlarged low-resolution image $L_1^{enl}$, both with a spatially extended image size of $S_1 \times S_1$ rather than the original size $S_0 \times S_0$. When no significant seasonal change occurs between $t_1$ and $t_2$ (type changes), the completeness of the dictionary D is mainly limited by the spatial heterogeneity and diversity of surface features. This mode therefore addresses the completeness of spatial features by learning from a larger image area in which more samples of surface features can be found.
When the temporally extended mode is selected, the training sample matrices $X_1^{new}$ and $Y_1^{new}$ are instead expressed as feature matrices derived from the dataset $\{L_1, L_3^{add}, L_4^{add}, \ldots, L_n^{add}\}$ and the dataset $\{(H_1 - L_1), (H_3^{add} - L_3^{add}), (H_4^{add} - L_4^{add}), \ldots, (H_n^{add} - L_n^{add})\}$. Here, $H_3^{add}, H_4^{add}, \ldots, H_n^{add}$ and $L_3^{add}, L_4^{add}, \ldots, L_n^{add}$ are additional high- and low-resolution training images (with the same image size as $H_1$ and $L_1$) observed at $t_3$, $t_4$, ..., $t_n$. When seasonal change occurs between $t_1$ and $t_2$, the temporally extended training samples improve the description of phenological features extracted from data observed at different dates. Since a single dictionary D can hardly provide a complete and precise expression of all seasonal features, it is reasonable to construct an approximately "overcomplete", phenology-aware dictionary in this way. A sketch of how the training matrices could be assembled for both modes is given below.
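The following minimal sketch shows how the training sample matrices of the two extension modes could be assembled; the patch size, sampling step, and helper names (extract_patches, build_training_matrices) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def extract_patches(img, patch=7, step=3):
    """Vectorize overlapping patches of a single-band image into the columns
    of a training-sample matrix (illustrative helper)."""
    rows, cols = img.shape
    columns = []
    for r in range(0, rows - patch + 1, step):
        for c in range(0, cols - patch + 1, step):
            columns.append(img[r:r + patch, c:c + patch].ravel())
    return np.stack(columns, axis=1)

def build_training_matrices(H1, L1, mode, enlarged=None, extra_pairs=()):
    """Assemble X1_new and Y1_new for the spatially or temporally extended mode."""
    if mode == 'spatial':
        # Spatially extended mode: same date t1, larger co-registered images.
        H1_enl, L1_enl = enlarged
        X1_new = extract_patches(L1_enl)
        Y1_new = extract_patches(H1_enl - L1_enl)
    else:
        # Temporally extended mode: original size, samples accumulated from
        # additional dates t3 ... tn given as (H_add, L_add) pairs.
        X_parts = [extract_patches(L1)] + [extract_patches(L) for _, L in extra_pairs]
        Y_parts = [extract_patches(H1 - L1)] + [extract_patches(H - L) for H, L in extra_pairs]
        X1_new = np.concatenate(X_parts, axis=1)
        Y1_new = np.concatenate(Y_parts, axis=1)
    return X1_new, Y1_new
```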
Moreover, to assess the effectiveness of the two modes described above, an evaluation strategy is presented in Section 3 (Results) to provide a well-founded selection recommendation for cases in which both spatially and temporally extended samples are available and execution efficiency is required.

2.2. Assessment Indices of the Proposed Fusion Scheme

To obtain an accurate description of the fusion results, four types of indices are used, covering per-band spectral errors, similarity of the overall structure, spectral distortion, and overall spectral errors; they are applied to the modeled and actual reflectance for a comprehensive quality evaluation. Five quantitative indices, namely, the average absolute difference (AAD), root-mean-square error (RMSE), structure similarity (SSIM) [35], spectral angle mapper (SAM) [36], and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) [37], which correspond to the aforementioned aspects, are gathered to validate the quality of the predicted images from different assessment views. SAM, SSIM, and ERGAS are computed with the following equations:
$SAM = \cos^{-1}\left( \dfrac{\sum_{i=1}^{B} \rho_{P_i}\, \rho_{R_i}}{\sqrt{\sum_{i=1}^{B} \rho_{P_i}^{2}}\, \sqrt{\sum_{i=1}^{B} \rho_{R_i}^{2}}} \right)$ (3)
$SSIM_i = \dfrac{(2 \mu_{P_i} \mu_{R_i} + C_1)(2 \sigma_{P_i R_i} + C_2)}{(\mu_{P_i}^{2} + \mu_{R_i}^{2} + C_1)(\sigma_{P_i}^{2} + \sigma_{R_i}^{2} + C_2)}$ (4)
$ERGAS = 100\, \dfrac{p}{r} \sqrt{\dfrac{\sum_{i=1}^{B} (RMSE_i)^{2}}{B}}$ (5)
where $\rho_{P_i}$ and $\rho_{R_i}$ are the reflectance in band $i \in [1, B]$ of the modeled image $P$ and the actual image $R$; $(\mu_{P_i}, \mu_{R_i})$, $(\sigma_{P_i}, \sigma_{R_i})$, and $\sigma_{P_i R_i}$ are, respectively, the mean values, standard deviations, and covariance in band $i$ of $P$ and $R$; $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$, where $k_1$ and $k_2$ are generally set to 0.01 and 0.03 and $L$ is the grayscale range of the reflectance images; $RMSE_i$ is the RMSE in band $i$ between $P$ and $R$; and $p$ and $r$ are the spatial resolutions of $P$ and $R$. Small values of AAD, RMSE, SAM, and ERGAS and a high value of SSIM between the modeled and actual reflectance images indicate a good fusion result.
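As a reference, a minimal Python sketch of these indices is given below; it assumes both images are arrays shaped (bands, rows, cols), computes SAM per pixel and averages it, and treats the reflectance range L and the resolutions p and r as user-supplied parameters. It is an illustration of Equations (3)-(5), not the authors' implementation.

```python
import numpy as np

def fusion_metrics(P, R, p_res, r_res, k1=0.01, k2=0.03, L=1.0):
    """Quality indices between a modeled image P and an actual image R,
    both shaped (bands, rows, cols); follows Eqs. (3)-(5) above."""
    B = P.shape[0]
    aad = [float(np.mean(np.abs(P[i] - R[i]))) for i in range(B)]
    rmse = [float(np.sqrt(np.mean((P[i] - R[i]) ** 2))) for i in range(B)]

    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    ssim = []
    for i in range(B):
        mu_p, mu_r = P[i].mean(), R[i].mean()
        var_p, var_r = P[i].var(), R[i].var()
        cov = np.mean((P[i] - mu_p) * (R[i] - mu_r))
        ssim.append((2 * mu_p * mu_r + C1) * (2 * cov + C2) /
                    ((mu_p ** 2 + mu_r ** 2 + C1) * (var_p + var_r + C2)))

    # SAM: spectral angle of each pixel, averaged over the scene (an assumption;
    # Eq. (3) itself gives the angle for one spectrum pair).
    num = np.sum(P * R, axis=0)
    den = np.sqrt(np.sum(P ** 2, axis=0)) * np.sqrt(np.sum(R ** 2, axis=0))
    sam = float(np.mean(np.arccos(np.clip(num / (den + 1e-12), -1.0, 1.0))))

    # ERGAS as written in Eq. (5), with p and r the resolutions of P and R.
    ergas = 100.0 * (p_res / r_res) * np.sqrt(np.mean(np.square(rmse)))
    return aad, rmse, ssim, sam, ergas
```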
Scatter plots of the channel-specific reflectance of the modeled data against the actual data are provided to supplement the quantitative indices with a visual pattern for an intuitive quality assessment of the fusion results. In addition, the total processing time over the employed channels is recorded for the fusion strategies with spatiotemporally extended training samples to give a general indication of their efficiency.

3. Results

Because the proposed learning-based algorithm requires only one acquired image pair (a high-resolution and a low-resolution image), two reconstruction-based spatiotemporal fusion models, STARFM and the semi-physical reflectance fusion model, are employed for comparison with the original and improved learning-based fusion algorithms. Specifically, the single-pair version of STARFM with default parameters and the improved algorithm based on the semi-physical fusion model (SPFM) [30] are adopted in the experiments.

3.1. Datasets

In this paper, two datasets, a rural one and an urban one, are employed to test the fusion strategy that utilizes spatiotemporally extended training samples. The rural dataset uses the same experimental data as in [23], which has been characterized as a study area with phenology changes [32] and comprises Landsat ETM+ images with 30 m spatial resolution and the MODIS daily 500 m surface reflectance product (MOD09GHK) acquired on 24 May, 11 July, and 12 August 2001 (Figure 2). Beijing, a typical urban area in China, is selected as the urban dataset to validate the fusion quality with extended spatiotemporal training samples, because the sparse-learning fusion method is more sensitive than other methods to the texture and structural features of the fused images. For the urban dataset listed in Table 1, the reflectance products comprise 20 Landsat-8 OLI scenes (30 m spatial resolution) and the corresponding MODIS 8-day MOD09A1 (500 m spatial resolution) and MOD09Q1 (250 m spatial resolution) products acquired from 2013 to 2017, which are used to perform the fusion strategy described in Figure 1. The Landsat-8 surface reflectance product, whose performance is considered close to or better than that of the Landsat TM/ETM+ reflectance products from the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) [38], is generated with the Landsat Surface Reflectance Code (LaSRC). The MODIS reflectance input is obtained by combining the green channel of MOD09A1 with the red and NIR channels of MOD09Q1, which are directly downloaded from the Land Processes Distributed Active Archive Center (LP DAAC). Notably, only centered sub-images with 500 × 500 Landsat pixels (covering an area of 15 km × 15 km) of the two datasets are used in the fusion procedure; the spatially or temporally extended training samples are used only in the dictionary training process.
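The paper does not state how the 500 m green band is brought onto the 250 m grid before it is combined with the MOD09Q1 red and NIR channels; the sketch below assumes simple nearest-neighbour replication, purely for illustration.

```python
import numpy as np

def stack_modis_bands(green_500m, red_250m, nir_250m):
    """Assemble a three-band MODIS input: duplicate each 500 m green pixel
    onto the 250 m MOD09Q1 grid (2 x 2 replication, an assumed resampling
    choice), then stack it with the 250 m red and NIR bands."""
    green_250m = np.kron(green_500m, np.ones((2, 2), dtype=green_500m.dtype))
    # Crop in case the replicated grid overshoots the 250 m band extent.
    green_250m = green_250m[:red_250m.shape[0], :red_250m.shape[1]]
    return np.stack([green_250m, red_250m, nir_250m], axis=0)
```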
Figure 2 and Figure 3 show that the training samples initially cover an area of 15 km × 15 km (500 × 500 Landsat pixels) for both datasets and reach maximum sizes of 36 km × 36 km (1200 × 1200 Landsat pixels) for the rural dataset and 60 km × 60 km (2000 × 2000 Landsat pixels) for the urban dataset in the spatially extended fusion experiment. Between the original and the maximum sizes, a series of training samples with different image sizes is clipped with a size step of 3 km × 3 km (100 × 100 Landsat pixels). As a result, the training samples are sized 500 × 500, 600 × 600, ..., 1200 × 1200 Landsat pixels for the rural dataset and 500 × 500, 600 × 600, ..., 2000 × 2000 for the urban dataset. Given the heterogeneity and diversity of surface features in different spatial directions, all the employed training samples share the same central position as their original training images, which are also input as the observed reflectance for Landsat and MODIS (500 × 500 Landsat pixels); for instance, the urban study area is positioned at the center of Beijing City proper, which comprises two main parts, Dongcheng District and Xicheng District. In this way, the fusion strategy with spatial extension of the training samples tends to be less sensitive to the variety and texture of land cover in different study areas, and a fair and authentic comparison between the original fusion algorithm and its spatially extended modification can therefore be expected. (A sketch of how these concentric crops could be generated is given below.)
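A minimal sketch of the concentric clipping described above (same center, sizes increasing in 100-pixel steps); the function name and arguments are illustrative assumptions.

```python
import numpy as np

def concentric_crops(image, sizes=range(500, 1201, 100)):
    """Clip a series of square training images of increasing size (in Landsat
    pixels), all sharing the same center as the original 500 x 500 study area.
    The source image must be at least as large as the largest requested size."""
    rows, cols = image.shape[:2]
    cr, cc = rows // 2, cols // 2
    crops = {}
    for s in sizes:
        half = s // 2
        crops[s] = image[cr - half:cr + half, cc - half:cc + half]
    return crops

# Example: crops for the urban dataset go up to 2000 x 2000 pixels.
# urban_crops = concentric_crops(landsat_scene, sizes=range(500, 2001, 100))
```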
To ensure a consistent comparison across temporal directions, a bi-directional fusion scheme is adopted for both datasets. The dates 24 May and 11 July 2001 are chosen as the bi-directional observed or modeled dates for the rural dataset, while 10 July and 12 September 2017 are selected for the urban dataset. Bi-directional fusion means that one of the two dates acts as the observed time while the other serves as the modeled time (to be predicted). The original input reflectance images of each dataset at the observed date are either spatially replaced by, or temporally supplemented with, the extended training samples to build a new, enhanced training set.

3.2. Experimental Results with the Rural Dataset

3.2.1. Experiments with Spatially Extended Training Samples

In this experiment, the rural dataset is employed to implement the bi-directional fusion scheme for modeling the reflectance images on 24 May (Table 2) or 11 July (Table 3) 2001 with different sizes of spatially extended training samples used in the dictionary learning process. The quality assessment of the fusion results from this bi-directional scheme is given in Table 2 and Table 3, with Figure 4 providing a graphical summary of the overall statistics. Several modeled images are selected from the predicted results and validated by scatter plots against the actual reflectance (Figure 5).

3.2.2. Experiments with Temporally Extended Training Samples

Considering that only three pairs of temporal reflectance images are held in the rural dataset, the image pair acquired on 12 August 2001 is always taken as an additional training sample to model either 24 May or 11 July 2001. The resulting fused images from the bi-directional fusion scheme with temporally extended training samples and their reflectance scatter plots are shown in Figure 6, and the assessment indices are listed in Table 4.

3.3. Experimental Results with the Urban Dataset

3.3.1. Experiments with Spatially Extended Training Samples

Along with the temporally corresponding MODIS reflectance products MOD09A1 and MOD09Q1 (Table 1), only the Landsat-8 surface reflectance products acquired on 10 July and 12 September 2017 (Figure 3a,c) are used as the basic experimental data to apply the bi-directional fusion scheme with spatially extended training samples covering the Beijing urban area. Assessment indices for both temporal directions are summarized in Table 5 and Table 6, with a graphical presentation in Figure 7. Several typical modeled results and their scatter plots against the actual reflectance from this spatially extended fusion, with training image sizes of 500 × 500 and 1500 × 1500 pixels, are displayed in Figure 8.

3.3.2. Experiments with Temporally Extended Training Data

With regard to the temporal extension of the training samples, we first defined an optimized selection mode for the temporal training samples by analyzing the fusion quality obtained with the Beijing urban data acquired in 2017, and then selected eligible acquisition dates from 2013 to 2016 from the Landsat-8 reflectance data; these are accumulated as additional training samples in the dictionary learning process. This bi-directional fusion strategy with temporally extended training samples is shown in Figure 9.
All 12 Landsat-8 reflectance images acquired in 2017, from 31 January to 17 December (Table 1), except the two acquired on 10 July and 12 September 2017, were taken in turn as additional training samples for the dictionary learning process. The assessment indices are listed in Table 7 and Table 8. Although only a small discrepancy exists among the fusion results obtained with additional training samples from different acquisition dates, slightly higher fusion accuracy can be expected when the added reflectance data lie between, or close to, the observed and modeled dates (23 May and 28 September 2017 in this experiment). Two reflectance images satisfying this condition were then selected from each year from 2013 to 2016 and used to perform and validate the fusion with temporally accumulated training samples (Table 9 and Table 10 and Figure 10); a sketch of such a date-selection rule is given below.
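A minimal sketch of the date-selection rule just described, which keeps, for each earlier year, the two acquisitions whose day of year lies inside or closest to the observed-modeled window; the exact criterion and function names are assumptions for illustration.

```python
from datetime import date

def pick_yearly_samples(candidates, t_obs=date(2017, 7, 10), t_mod=date(2017, 9, 12)):
    """For each year, keep the two acquisition dates closest to (or inside)
    the observed-modeled window, measured by day of year."""
    doy = lambda d: d.timetuple().tm_yday
    lo, hi = sorted((doy(t_obs), doy(t_mod)))

    def distance(d):
        x = doy(d)
        return 0 if lo <= x <= hi else min(abs(x - lo), abs(x - hi))

    by_year = {}
    for d in candidates:
        by_year.setdefault(d.year, []).append(d)
    return {year: sorted(dates, key=distance)[:2] for year, dates in by_year.items()}

# Example with two of the Table 1 dates from 2014:
# pick_yearly_samples([date(2014, 8, 19), date(2014, 9, 4)])
```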

4. Discussion

4.1. Fusion Quality with Spatially Extended Training Samples

The assessment indices from the two datasets indicate good agreement between both temporal directions of the fusion strategy with spatially extended training samples, whose size varies from 500 × 500 up to 1200 × 1200 (rural) and 2000 × 2000 (urban) pixels (Table 2, Table 3, Table 5 and Table 6 and Figure 5 and Figure 8). On the one hand, the overall fusion quality increases with larger training image sizes, and only small improvements can be expected once the size of the training images reaches a threshold, approximately two and three times the original image size for the rural and urban datasets, respectively. Moreover, the proposed fusion algorithm generally performs better than the STARFM and SPFM models near this "size threshold" for both the rural dataset (phenology changes) and the urban dataset (type changes). On the other hand, the AAD, RMSE, and SSIM indices show increasing errors from the green band through the red band to the NIR band for all training image sizes (Figure 4 and Figure 7). A reasonable explanation for the threshold size of the training samples is reduced spatial similarity: the image structure features become less relevant as the image size of the training samples increases.
The different levels of fusion error across bands are consistent with the different standard deviations of each band. For the rural dataset with phenology changes, the standard deviations of the reflectance images acquired on 24 May and 11 July 2001, which quantify the spectral variation or dispersion of the images, are 0.0102, 0.013, and 0.0476 and 0.0108, 0.0165, and 0.0306 for the green, red, and NIR bands, respectively. A more integrative description of the fusion results is given by the SSIM, SAM, and ERGAS indices than by the AAD and RMSE indices; the ERGAS index in particular reveals significant differences in cases where the AAD and RMSE values differ only slightly in one or more channels. Similarly, the increasing fusion errors from the green and red bands to the NIR band correspond to standard deviations of 0.0334, 0.0413, and 0.0611 and 0.031, 0.0374, and 0.0565 for the Beijing urban data acquired on 10 July and 12 September 2017, respectively. The change in the threshold size of the training images from two times (rural area) to three times (urban area) the original size is mainly ascribed to differences in surface features and in the employed reflectance products (Landsat-7 ETM+ and MOD09GHK for the rural dataset, and Landsat-8 OLI and MOD09A1/Q1 for the urban dataset). At the threshold size of the urban training images (approximately 1500 × 1500 pixels), the modeled images in both temporal directions, especially the reflectance on 12 September 2017, show less noise disturbance than at other image sizes (Figure 8). Regarding running time, the fusion with spatially extended training samples becomes increasingly time consuming as the training image grows, and the cost rises not linearly but roughly exponentially with image size.

4.2. Fusion Quality with Temporally Extended Training Samples

Relative to the original bi-directional fusion results, the temporally extended fusion strategy improves fusion quality, and it performs better than the spatially extended strategy when an equal amount of training data is handled. The training data used in the temporally extended fusion scheme with the rural dataset (Table 4) correspond to a training image size of 700 × 700 pixels in the spatially extended scheme (Table 2 and Table 3); the temporal extension is therefore more efficient than the spatial extension in training the rural dataset with phenology changes. Moreover, the assessment indices from the urban dataset generally show decreasing fusion error as temporal training samples are added (Figure 11), with a disagreement occurring when the two reflectance images acquired in 2016 participate in the training sample set; this may be attributed to the large seasonal difference between the 2016 acquisition dates and the observed-modeled period. In addition, for the urban dataset (the Beijing area), the more effective strategy is to bring the spatially extended training samples, rather than the temporally extended ones, into the training set (Figure 12).
The results from the rural dataset, especially for the NIR channel, are more sensitive to the added temporal training images than those from the urban dataset (Figure 6d,h and Figure 10d,h). Beyond the seasonal characteristics of the employed temporal training images, the discrepancy in spatial features, such as texture and structure, plays the leading role in the comparison of the fused results from the two datasets with different change types (primarily phenology change in the rural dataset and texture and structural change in the urban dataset). The time consumption of the fusion with temporally extended training samples tends to be similar to that of the spatially extended strategy because the total volume of training data is comparable in the two modes.

5. Conclusions

An enhanced fusion scheme based on the single-pair sparse-learning fusion model is proposed in this paper by improving the dictionary training process, and an evaluation strategy is designed that employs spatially and temporally extended training samples. The results of the bi-directional fusion scheme show high agreement in the assessment indices of fusion quality, indicating a decrease in prediction errors and an increase in image similarity as the spatial or temporal training samples are extended. This fusion scheme remains effective until the spatial threshold size of the training images (approximately two to three times the original image size used here) is reached, or until one or more temporal training samples with dissimilar acquisition seasons are added. Compared with the STARFM and SPFM models, better fusion quality can also be obtained by the proposed method at the "threshold" training size. In detail, the fusion strategy with spatially extended training samples performs better than the strategy with temporally extended training samples for the urban dataset, whereas the opposite inference can be drawn from the rural dataset. Considering the land cover characteristics of the two datasets, with phenology changes in the rural dataset and type changes in the urban dataset, a reliable approach is to adaptively choose between spatially and temporally extended training samples according to the data acquisition conditions and the land cover change type of the study area. The results of the temporally extended fusion scheme are significantly affected by additional training samples with different seasonal features; the proposed sparse-learning fusion scheme is therefore more sensitive to temporal changes than to spatial changes in surface features. To improve the efficiency of sparse-learning fusion methods, a spatial and temporal similarity measure should be designed for filtering the spatiotemporal training samples and then integrated into the fusion procedure once its usefulness has been validated on typical areas with various land cover changes.
The whole process of the original sparse-learning algorithm takes 3.7 min for an image with 500 × 500 pixels, which is faster than STARFM (about 4 min) but slower than SPFM (2.3 min); the proposed method with spatiotemporally extended training samples becomes far more time consuming when a better fusion result is required. This issue can be effectively addressed by updated sparse coding techniques. For instance, online dictionary learning methods [39,40] can reduce the time consumption of the entire training process to about 1 min for 500 × 500 pixels and are expected to be even more beneficial as the image size grows. In this way, the proposed method has high potential for processing large scenes (spatially or temporally extended acquisitions), usually with multiple channels, and for taking them into consideration for reflectance reconstruction.
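As an illustration of this direction, the batch training step could be swapped for scikit-learn's mini-batch (online) dictionary learner in the spirit of [39,40]; the parameter values below are assumptions, and this is a sketch rather than the configuration used by the authors.

```python
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_dictionary_online(X_low, n_atoms=256, sparsity=5, batch_size=256):
    """Online (mini-batch) variant of the dictionary training step, which
    scales better to spatiotemporally extended sample sets."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_algorithm='omp',
                                          transform_n_nonzero_coefs=sparsity,
                                          batch_size=batch_size)
    alpha = learner.fit_transform(X_low.T).T   # sparse coefficients (n_atoms, n_samples)
    D_l = learner.components_.T                # dictionary (patch_dim, n_atoms)
    return D_l, alpha
```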

Author Contributions

D.L. and Y.L. conceived and designed the work of this paper. D.L. wrote the manuscript. W.Y. and Y.G. designed the experiment and analyzed the results. Q.H. and L.M. revised the manuscript and reorganized the Methodology section. Y.L. and W.Y. approved the final version. Y.C. and X.L. collected and preprocessed the experimental data.

Funding

This study was supported by National Natural Science Foundation of China: 41501372, National High Technology Research and Development Program of China: 2014AA123202, National Key R&D Program of China: 2018YFB0504800 (2018YFB0504804), Scientific and Technological Innovation Projects of Shanxi, China: 2016144.

Acknowledgments

The authors would like to thank F. Gao, B. Huang, and H. Song for sharing their experimental data and source codes on the Internet.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dewan, A.M.; Yamaguchi, Y. Land use and land cover change in Greater Dhaka, Bangladesh: Using remote sensing to promote sustainable urbanization. Appl. Geogr. 2009, 29, 390–401. [Google Scholar] [CrossRef]
  2. Cohen, W.B.; Goward, S.N. Landsat’s role in ecological applications of remote sensing. AIBS Bull. 2004, 54, 535–545. [Google Scholar] [CrossRef]
  3. Kennedy, R.E.; Yang, Z.; Cohen, W.B. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr-Temporal segmentation algorithms. Remote Sens. Environ. 2010, 114, 2897–2910. [Google Scholar] [CrossRef]
  4. Steinberg, D.K.; Carlson, C.A.; Bates, N.R.; Johnson, R.J.; Michaels, A.F.; Knap, A.H. Overview of the US JGOFS Bermuda Atlantic Time-series Study (BATS): A decade-scale look at ocean biology and biogeochemistry. Deep Sea Res. Part II Top. Stud. Oceanogr. 2001, 48, 1405–1447. [Google Scholar] [CrossRef]
  5. Joyce, K.E.; Belliss, S.E.; Samsonov, S.V.; McNeill, S.J.; Glassey, P.J. A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters. Prog. Phys. Geogr. 2009, 33, 183–207. [Google Scholar] [CrossRef]
  6. Richardson, W.H. Bayesian-based iterative method of image restoration. JOSA 1972, 62, 55–59. [Google Scholar] [CrossRef]
  7. Elad, M.; Feuer, A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Process. 1997, 6, 1646–1658. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Park, S.C.; Park, M.K.; Kang, M.G. Super-resolution image reconstruction: A technical overview. IEEE Signal Process. Mag. 2003, 20, 21–36. [Google Scholar] [CrossRef]
  9. Tseng, D.C.; Tseng, H.T.; Chien, C.L. Automatic cloud removal from multi-temporal SPOT images. Appl. Math. Comput. 2008, 205, 584–600. [Google Scholar] [CrossRef]
  10. Chen, J.; Zhu, X.; Vogelmann, J.E.; Gao, F.; Jin, S. A simple and effective method for filling gaps in Landsat ETM+ SLC-off images. Remote Sens. Environ. 2011, 115, 1053–1064. [Google Scholar] [CrossRef]
  11. Kwarteng, P.S.; Chavez, A.Y. Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348. [Google Scholar]
  12. Carper, W.; Lillesand, T.; Kiefer, R. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467. [Google Scholar]
  13. Yocky, D.A. Multiresolution wavelet decomposition image merger of Landsat Thematic Mapper and SPOT panchromatic data. Photogramm. Eng. Remote Sens. 1996, 62, 1067–1074. [Google Scholar]
  14. Sun, H.; Dou, W.; Yi, W. Discussion of status, predicament and development tendency in the remotely sensed image fusion. Remote Sens. Inf. 2011, 1, 104–108. [Google Scholar]
  15. Fortin, J.P.; Bernier, M.; Lapointe, S.; Gauthier, Y.; De Sève, D.; Beaudoin, S. Estimation of Surface Variables at the Sub-Pixel Level for Use as Input to Climate and Hydrological Models; INRS-Eau: Sainte-Foy, QC, Canada, 1998. [Google Scholar]
  16. Zhukov, B.; Oertel, D. Multi-sensor multi-resolution technique and its simulation. Zeitschrift für Photogrammetrie und Fernerkundung 1996, 1, 11–21. [Google Scholar]
  17. Minghelli-Roman, A.; Mangolini, M.; Petit, M.; Polidori, L. Spatial resolution improvement of MeRIS images by fusion with TM images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1533–1536. [Google Scholar] [CrossRef]
  18. Zurita-Milla, R.; Clevers, J.G.P.W.; Schaepman, M.E. Unmixing-based Landsat TM and MERIS FR data fusion. IEEE Geosci. Remote Sens. Lett. 2008, 5, 453–457. [Google Scholar] [CrossRef] [Green Version]
  19. Gevaert, C.M.; García-Haro, F.J. A comparison of STARFM and an unmixing-based algorithm for Landsat and MODIS data fusion. Remote Sens. Environ. 2015, 156, 34–44. [Google Scholar] [CrossRef]
  20. Maselli, F. Definition of spatially variable spectral endmembers by locally calibrated multivariate regression analyses. Remote Sens. Environ. 2001, 75, 29–38. [Google Scholar] [CrossRef]
  21. Cherchali, S.; Flouzat, G. Linear mixture modelling applied to AVHRR data for monitoring vegetation. In Proceedings of IGARSS '94—Surface and Atmospheric Remote Sensing: Technologies, Data Analysis and Interpretation, 1994; Volume 2, pp. 1242–1244. [Google Scholar]
  22. Zhu, X.; Helmer, E.H.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M.A. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sens. Environ. 2016, 172, 165–177. [Google Scholar] [CrossRef]
  23. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  24. Shen, H.; Wu, P.; Liu, Y.; Ai, T.; Wang, Y.; Liu, X. A spatial and temporal reflectance fusion model considering sensor observation differences. Int. J. Remote Sens. 2013, 34, 4367–4383. [Google Scholar] [CrossRef]
  25. Hilker, T.; Wulder, M.A.; Coops, N.C.; Linke, J.; McDermid, G.; Masek, J.G.; Gao, F.; White, J.C. A new data fusion model for high spatial-and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sens. Environ. 2009, 113, 1613–1627. [Google Scholar] [CrossRef]
  26. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  27. Weng, Q.; Fu, P.; Gao, F. Generating daily land surface temperature at Landsat resolution by fusing Landsat and MODIS data. Remote Sens. Environ. 2014, 145, 55–67. [Google Scholar] [CrossRef]
  28. Michishita, R.; Jiang, Z.; Gong, P.; Xu, B. Bi-scale analysis of multitemporal land cover fractions for wetland vegetation mapping. ISPRS J. Photogramm. Remote Sens. 2012, 72, 1–15. [Google Scholar] [CrossRef]
  29. Roy, D.P.; Ju, J.; Lewis, P.; Schaaf, C.; Gao, F.; Hansen, M.; Lindquist, E. Multi-temporal MODIS–Landsat data fusion for relative radiometric normalization, gap filling, and prediction of Landsat data. Remote Sens. Environ. 2008, 112, 3112–3130. [Google Scholar] [CrossRef]
  30. Dacheng, L.; Ping, T.; Changmiao, H.; Ke, Z. Spatial-temporal fusion algorithm based on an extended semi-physical model and its preliminary application. J. Remote Sens. 2014, 18, 307–319. [Google Scholar]
  31. Song, H.; Huang, B. Spatiotemporal satellite image fusion through one-pair image learning. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1883–1896. [Google Scholar] [CrossRef]
  32. Huang, B.; Song, H. Spatiotemporal reflectance fusion via sparse representation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3707–3716. [Google Scholar] [CrossRef]
  33. Chen, B.; Huang, B.; Xu, B. Comparison of spatiotemporal fusion models: A review. Remote Sens. 2015, 7, 1798–1835. [Google Scholar] [CrossRef]
  34. Huang, B.; Song, H.; Cui, H.; Peng, J.; Xu, Z. Spatial and spectral image fusion using sparse matrix factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1693–1704. [Google Scholar] [CrossRef]
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  36. Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among Semi-Arid Landscape Endmembers Using the Spectral Angle Mapper (SAM) Algorithm. In JPL, Summaries of the Third Annual JPL Airborne Geoscience Workshop; NASA: Pasadena, CA, USA, 1992; Volume 1, pp. 147–149. [Google Scholar]
  37. Renza, D.; Martinez, E.; Arquero, A. A new approach to change detection in multispectral images by means of ERGAS index. IEEE Geosci. Remote Sens. Lett. 2013, 10, 76–80. [Google Scholar] [CrossRef] [Green Version]
  38. Vermote, E.; Justice, C.; Claverie, M.; Franch, B. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sens. Environ. 2016, 185, 46–56. [Google Scholar] [CrossRef]
  39. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 2010, 11, 19–60. [Google Scholar]
  40. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; ACM: New York, NY, USA, 2009; pp. 689–696. [Google Scholar] [Green Version]
Figure 1. The proposed fusion scheme with spatially or temporally extended samples in dictionary training.
Figure 2. Employed Landsat and MODIS reflectance data (NIR/red/green) of the rural dataset: (af) Landsat ETM+ images with a size of 500 × 500 and 2000 × 2000 pixels on 24 May, 11 July, and 12 August 2001, respectively; (gl) MODIS images correspond to (af).
Figure 3. Landsat and MODIS reflectance data (NIR/red/green) that cover the Beijing urban area: (ad) Landsat-8 images observed on 10 July and 12 September 2017 with a size of 500 × 500 and 2000 × 2000 pixels; (eh) MODIS reflectance products that correspond to (ad).
Figure 4. Graphical assessment indices of the proposed bi-directional fusion with spatially extended training samples from the rural dataset, (a) and (b) are respectively for modeling the reflectance on 24 May and 11 July 2001.
Figure 5. Proposed bi-directional fusion results with spatially extended training samples from the rural dataset: (ad) the composited fusion results (NIR/red/green) modeled on 24 May and 11 July 2001 with training image sizes of 500 × 500 pixels and 1200 × 1200 pixels, respectively; and (ep) comparisons among green, red, and NIR bands of the modeled reflectance and the actual reflectance that correspond to (ad).
Figure 6. Proposed bi-directional fusion results with temporally extended training samples from the rural dataset: (ad) the composited fusion results modeled on 24 May 2001 and the comparison (green, red, and NIR) with the actual reflectance; (eh) the composited fusion results modeled on 11 July 2001 and the comparison (green, red, and NIR) with the actual reflectance.
Figure 7. Graphical assessment indices of the proposed bi-directional fusion with spatially extended training samples from the urban dataset, (a) and (b) are respectively for modeling the reflectance on 10 July and 12 September 2017.
Figure 8. Proposed bi-directional fusion results with spatially extended training samples from the urban dataset: (ad) the composited fusion results (NIR/red/green) modeled on 10 July and 12 September 2017 with training image sizes of 500 × 500 pixels and 1500 × 1500 pixels, respectively; and (ep) the scatter plots of (ad), which indicate the comparison among the green, red, and NIR bands of the modeled and actual reflectance.
Figure 9. Proposed bi-directional fusion strategy with temporally extended training samples.
Figure 10. Proposed bi-directional fusion results with temporally extended training samples (from 2013 to 2016) using the urban dataset: (ad) the composited fusion results of modeled reflectance on 10 July 2017 and the comparison with actual reflectance; (eh) the composited fusion results of modeled reflectance on 12 September 2017 and the comparison with actual reflectance.
Figure 11. Fusion quality of the proposed bi-directional fusion with temporally extended training samples from the urban dataset, (a) and (b) are respectively for modeling the reflectance on 10 July and 12 September 2017.
Figure 12. Quality and efficiency of proposed bi-directional fusion with the spatiotemporally extended training samples from the urban dataset, (a) and (b) are respectively for the spatially extended mode and the temporally extended mode.
Table 1. Employed Landsat-8 OLI and MODIS reflectance products of the urban dataset.
Landsat-8 OLI data info: Orbit: 123–32; Band: 3–5; Resolution: 30 m.
MODIS MOD09A1/Q1 data info: Orbit: 26–04,05; Band: 1, 2 (MOD09Q1) and 4 (MOD09A1); Resolution: 250 m (MOD09Q1) and 500 m (MOD09A1).
Landsat-8 OLI Date | Landsat-8 OLI Date | MODIS Date | MODIS Date
31 July 2013 | 21 April 2017 | 28 July 2013 | 23 April 2017
1 September 2013 | 7 May 2017 | 29 August 2013 | 9 May 2017
19 August 2014 | 23 May 2017 | 21 August 2014 | 25 May 2017
4 September 2014 | 10 July 2017 | 6 September 2014 | 12 July 2017
22 August 2015 | 12 September 2017 | 21 August 2015 | 14 September 2017
7 September 2015 | 28 September 2017 | 6 September 2015 | 30 September 2017
20 May 2016 | 30 October 2017 | 16 May 2016 | 1 November 2017
11 October 2016 | 15 November 2017 | 7 October 2016 | 17 November 2017
31 January 2017 | 1 December 2017 | 2 February 2017 | 3 December 2017
4 March 2017 | 17 December 2017 | 6 March 2017 | 19 December 2017
Table 2. Assessment indices of the spatially extended fusion for modeling reflectance on 24 May 2001 of the rural dataset.
Methods | Training Image Size | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
Original algorithm | 500 × 500 | 0.46, 0.79, 1.74 | 0.66, 1.16, 2.63 | 96.16, 90.79, 79.73 | 1.8065 | 18.8364
Proposed algorithm | 600 × 600 | 0.43, 0.75, 1.55 | 0.63, 1.11, 2.33 | 96.53, 91.51, 83.17 | 1.8116 | 17.7550
Proposed algorithm | 700 × 700 | 0.42, 0.72, 1.37 | 0.60, 1.08, 2.07 | 96.83, 91.89, 86.52 | 1.8151 | 16.9564
Proposed algorithm | 800 × 800 | 0.41, 0.70, 1.26 | 0.59, 1.05, 1.84 | 96.95, 92.26, 89.18 | 1.8175 | 16.3198
Proposed algorithm | 900 × 900 | 0.39, 0.68, 1.19 | 0.58, 1.02, 1.75 | 97.10, 92.66, 90.16 | 1.8198 | 15.8187
Proposed algorithm | 1000 × 1000 | 0.39, 0.67, 1.16 | 0.57, 1.01, 1.69 | 97.13, 92.82, 90.80 | 1.8206 | 15.6352
Proposed algorithm | 1100 × 1100 | 0.38, 0.66, 1.13 | 0.56, 0.99, 1.63 | 97.2, 92.95, 91.32 | 1.8215 | 15.3832
Proposed algorithm | 1200 × 1200 | 0.38, 0.64, 1.11 | 0.56, 0.98, 1.63 | 97.21, 93.03, 91.37 | 1.8219 | 15.2739
STARFM | - | 0.42, 0.69, 1.78 | 0.60, 1.08, 2.65 | 97.01, 92.11, 88.31 | 1.8123 | 17.0671
SPFM | - | 0.41, 0.71, 1.68 | 0.59, 1.10, 2.47 | 96.49, 91.99, 88.52 | 1.8163 | 16.5105
Table 3. Assessment indices of the spatially extended fusion for modeling reflectance on 11 July 2001 of the rural dataset.
Methods | Training Image Size | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
Original algorithm | 500 × 500 | 0.45, 0.67, 2.26 | 0.60, 1.00, 3.23 | 96.43, 92.29, 79.17 | 1.8050 | 20.4154
Proposed algorithm | 600 × 600 | 0.43, 0.62, 2.17 | 0.60, 0.88, 3.15 | 96.61, 93.93, 79.75 | 1.8102 | 18.7794
Proposed algorithm | 700 × 700 | 0.43, 0.61, 2.13 | 0.60, 0.87, 3.06 | 96.86, 94.06, 80.90 | 1.8130 | 18.2613
Proposed algorithm | 800 × 800 | 0.40, 0.56, 1.89 | 0.55, 0.80, 2.71 | 97.36, 94.97, 84.44 | 1.8190 | 16.5935
Proposed algorithm | 900 × 900 | 0.38, 0.55, 1.82 | 0.54, 0.78, 2.61 | 97.49, 95.18, 85.52 | 1.8206 | 16.1860
Proposed algorithm | 1000 × 1000 | 0.38, 0.54, 1.79 | 0.53, 0.77, 2.58 | 97.59, 95.34, 85.78 | 1.8215 | 15.9259
Proposed algorithm | 1100 × 1100 | 0.37, 0.53, 1.77 | 0.52, 0.75, 2.52 | 97.65, 95.45, 86.29 | 1.8223 | 15.6691
Proposed algorithm | 1200 × 1200 | 0.37, 0.52, 1.75 | 0.52, 0.75, 2.51 | 97.66, 95.52, 86.33 | 1.8226 | 15.5429
STARFM | - | 0.50, 0.68, 2.03 | 0.70, 1.06, 2.83 | 96.74, 92.54, 84.02 | 1.8172 | 16.4957
SPFM | - | 0.41, 0.74, 1.91 | 0.59, 1.08, 2.79 | 97.10, 92.19, 84.81 | 1.8171 | 16.5169
Table 4. Assessment indices of the proposed bi-directional fusion with temporally extended training samples from the rural dataset.
Modeled Dates | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
24 May | 0.38, 0.65, 1.13 | 0.57, 1.00, 1.66 | 97.15, 92.99, 91.11 | 1.8213 | 15.4358
11 July | 0.38, 0.53, 1.77 | 0.53, 0.76, 2.55 | 97.59, 95.38, 85.83 | 1.8217 | 15.7507
Table 5. Assessment indices of the spatially extended fusion with the urban dataset for modeling reflectance on 10 July 2017.
Methods | Training Image Size | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
Original algorithm | 500 × 500 | 1.76, 2.11, 3.54 | 2.41, 2.96, 4.71 | 84.48, 81.45, 77.53 | 1.7854 | 26.2220
Proposed algorithm | 600 × 600 | 1.71, 2.02, 3.42 | 2.29, 2.79, 4.58 | 86.66, 84.08, 79.49 | 1.7946 | 24.9515
Proposed algorithm | 700 × 700 | 1.70, 1.99, 3.40 | 2.27, 2.73, 4.56 | 87.34, 84.8, 79.87 | 1.7973 | 24.6278
Proposed algorithm | 800 × 800 | 1.69, 1.98, 3.39 | 2.26, 2.70, 4.54 | 87.56, 85.64, 80.35 | 1.7986 | 24.3276
Proposed algorithm | 900 × 900 | 1.68, 1.97, 3.38 | 2.25, 2.69, 4.53 | 87.75, 85.72, 80.78 | 1.8002 | 24.2388
Proposed algorithm | 1000 × 1000 | 1.67, 1.96, 3.38 | 2.24, 2.68, 4.51 | 87.81, 85.88, 80.94 | 1.8010 | 24.1858
Proposed algorithm | 1100 × 1100 | 1.67, 1.96, 3.37 | 2.23, 2.68, 4.50 | 87.89, 85.91, 81.09 | 1.8009 | 24.1359
Proposed algorithm | 1200 × 1200 | 1.66, 1.95, 3.36 | 2.22, 2.66, 4.47 | 87.97, 85.94, 81.27 | 1.8025 | 24.0686
Proposed algorithm | 1300 × 1300 | 1.66, 1.94, 3.35 | 2.22, 2.66, 4.46 | 87.97, 85.99, 81.35 | 1.8024 | 24.0527
Proposed algorithm | 1400 × 1400 | 1.65, 1.93, 3.35 | 2.21, 2.64, 4.46 | 88.02, 86.03, 81.42 | 1.8028 | 24.0454
Proposed algorithm | 1500 × 1500 | 1.65, 1.92, 3.35 | 2.21, 2.64, 4.44 | 88.02, 86.05, 81.47 | 1.8031 | 23.9964
Proposed algorithm | 1600 × 1600 | 1.65, 1.92, 3.35 | 2.21, 2.64, 4.46 | 88.03, 86.05, 81.46 | 1.8030 | 24.0518
Proposed algorithm | 1700 × 1700 | 1.66, 1.94, 3.37 | 2.22, 2.65, 4.51 | 88.03, 86.03, 81.44 | 1.8027 | 24.0764
Proposed algorithm | 1800 × 1800 | 1.65, 1.92, 3.35 | 2.22, 2.64, 4.43 | 88.03, 86.04, 81.45 | 1.8030 | 24.0452
Proposed algorithm | 1900 × 1900 | 1.65, 1.92, 3.34 | 2.21, 2.63, 4.44 | 88.04, 86.07, 81.47 | 1.8032 | 23.9916
Proposed algorithm | 2000 × 2000 | 1.65, 1.92, 3.35 | 2.22, 2.64, 4.45 | 88.03, 86.06, 81.45 | 1.8028 | 24.0490
STARFM | - | 1.66, 1.95, 3.50 | 2.22, 2.67, 4.63 | 87.96, 85.95, 78.39 | 1.8016 | 24.7963
SPFM | - | 1.65, 2.00, 3.61 | 2.21, 2.95, 4.86 | 88.01, 85.81, 77.28 | 1.7953 | 25.1976
Table 6. Assessment indices of the spatially extended fusion with the urban dataset for modeling reflectance on 12 September 2017.
Methods | Training Image Size | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
Original algorithm | 500 × 500 | 1.73, 2.00, 3.45 | 2.18, 2.64, 4.64 | 87.55, 85.01, 79.01 | 1.7850 | 29.3382
Proposed algorithm | 600 × 600 | 1.70, 1.97, 3.40 | 2.14, 2.59, 4.47 | 88.61, 86.35, 82.89 | 1.7922 | 28.3268
Proposed algorithm | 700 × 700 | 1.70, 1.96, 3.40 | 2.14, 2.56, 4.35 | 88.63, 86.47, 82.88 | 1.7924 | 28.2852
Proposed algorithm | 800 × 800 | 1.68, 1.94, 3.38 | 2.12, 2.53, 4.31 | 88.95, 87.12, 83.30 | 1.7943 | 27.8460
Proposed algorithm | 900 × 900 | 1.69, 1.95, 3.40 | 2.13, 2.54, 4.33 | 88.83, 86.79, 82.89 | 1.7886 | 27.8857
Proposed algorithm | 1000 × 1000 | 1.68, 1.93, 3.39 | 2.12, 2.53, 4.31 | 89.02, 87.21, 83.49 | 1.7947 | 27.7921
Proposed algorithm | 1100 × 1100 | 1.68, 1.92, 3.37 | 2.12, 2.53, 4.31 | 89.03, 87.44, 83.54 | 1.7947 | 27.7726
Proposed algorithm | 1200 × 1200 | 1.68, 1.91, 3.37 | 2.12, 2.52, 4.30 | 89.06, 87.56, 83.62 | 1.7949 | 27.6851
Proposed algorithm | 1300 × 1300 | 1.68, 1.90, 3.35 | 2.12, 2.52, 4.29 | 89.07, 87.61, 83.63 | 1.7952 | 27.5976
Proposed algorithm | 1400 × 1400 | 1.68, 1.90, 3.34 | 2.12, 2.51, 4.28 | 89.09, 87.63, 83.67 | 1.7964 | 27.5520
Proposed algorithm | 1500 × 1500 | 1.68, 1.90, 3.34 | 2.12, 2.51, 4.28 | 89.10, 87.66, 83.70 | 1.7985 | 27.5435
Proposed algorithm | 1600 × 1600 | 1.68, 1.90, 3.34 | 2.12, 2.51, 4.29 | 89.09, 87.64, 83.69 | 1.7980 | 27.5481
Proposed algorithm | 1700 × 1700 | 1.68, 1.89, 3.32 | 2.12, 2.49, 4.21 | 89.12, 87.71, 83.78 | 1.8000 | 27.4747
Proposed algorithm | 1800 × 1800 | 1.69, 1.91, 3.35 | 2.12, 2.53, 4.31 | 89.10, 87.65, 83.71 | 1.7975 | 27.5554
Proposed algorithm | 1900 × 1900 | 1.68, 1.90, 3.34 | 2.12, 2.51, 4.29 | 89.11, 87.66, 83.69 | 1.7989 | 27.5174
Proposed algorithm | 2000 × 2000 | 1.68, 1.90, 3.33 | 2.12, 2.51, 4.25 | 89.13, 87.69, 83.75 | 1.7991 | 27.4951
STARFM | - | 1.70, 1.95, 3.43 | 2.16, 2.56, 4.51 | 88.51, 86.68, 82.79 | 1.7939 | 28.5247
SPFM | - | 1.68, 2.01, 3.44 | 2.17, 2.63, 4.50 | 87.68, 84.83, 82.45 | 1.7901 | 29.2313
Table 7. Assessment indices from the proposed fusion with the urban data acquired in 2017 for modeling reflectance on 10 July 2017.
Added Training Dates | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
31 January 2017 | 1.76, 2.08, 3.57 | 2.40, 2.90, 4.83 | 84.55, 82.25, 76.36 | 1.7857 | 26.1273
4 March 2017 | 1.74, 2.03, 3.48 | 2.37, 2.82, 4.63 | 85.23, 83.41, 78.58 | 1.7918 | 25.4679
21 April 2017 | 1.75, 2.01, 3.49 | 2.37, 2.79, 4.59 | 85.2, 83.97, 79.05 | 1.7930 | 25.2690
7 May 2017 | 1.75, 2.03, 3.52 | 2.37, 2.83, 4.65 | 85.04, 83.24, 78.44 | 1.7910 | 25.5186
23 May 2017 | 1.73, 1.96, 3.46 | 2.35, 2.72, 4.74 | 85.63, 84.96, 77.24 | 1.7947 | 25.1566
28 September 2017 | 1.73, 1.97, 3.46 | 2.36, 2.73, 4.58 | 85.35, 84.73, 78.92 | 1.7953 | 25.0129
30 October 2017 | 1.72, 2.00, 3.47 | 2.36, 2.78, 4.67 | 85.38, 84.10, 78.06 | 1.7935 | 25.3112
15 November 2017 | 1.76, 2.06, 3.46 | 2.41, 2.88, 4.61 | 84.59, 82.61, 78.75 | 1.7901 | 25.7875
1 December 2017 | 1.73, 2.07, 3.51 | 2.37, 2.89, 4.69 | 85.05, 82.45, 77.85 | 1.7903 | 25.8093
17 December 2017 | 1.75, 2.05, 3.63 | 2.38, 2.84, 5.00 | 85.19, 83.22, 72.14 | 1.7888 | 26.0933
Table 8. Assessment indices from the proposed fusion with the urban data acquired in 2017 for modeling reflectance on 12 September 2017.
Added Training Dates | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
31 January 2017 | 1.72, 2.02, 3.48 | 2.17, 2.65, 4.70 | 87.22, 84.73, 78.46 | 1.7836 | 29.4916
4 March 2017 | 1.72, 1.98, 3.69 | 2.17, 2.59, 4.86 | 87.38, 85.26, 73.24 | 1.7813 | 29.4935
21 April 2017 | 1.71, 2.00, 3.4 | 2.15, 2.60, 4.45 | 87.95, 85.46, 81.61 | 1.7885 | 28.7471
7 May 2017 | 1.71, 1.97, 3.39 | 2.15, 2.56, 4.35 | 87.98, 85.95, 82.39 | 1.7900 | 28.3728
23 May 2017 | 1.72, 1.95, 3.40 | 2.15, 2.53, 4.35 | 87.96, 86.15, 82.38 | 1.7902 | 28.2556
28 September 2017 | 1.70, 1.92, 3.35 | 2.12, 2.48, 4.34 | 88.30, 86.80, 82.27 | 1.7916 | 27.8873
30 October 2017 | 1.72, 1.98, 3.39 | 2.18, 2.59, 4.53 | 87.49, 85.58, 80.27 | 1.7869 | 28.9585
15 November 2017 | 1.71, 2.00, 3.46 | 2.16, 2.64, 4.69 | 87.68, 84.99, 77.91 | 1.7844 | 29.3909
1 December 2017 | 1.73, 2.01, 4.63 | 2.19, 2.62, 6.42 | 87.32, 85.25, 38.30 | 1.7599 | 32.9495
17 December 2017 | 1.72, 1.99, 3.40 | 2.18, 2.61, 4.37 | 87.40, 85.15, 82.36 | 1.7878 | 28.7952
Table 9. Assessment indices from the proposed fusion with the urban data acquired from 2013 to 2016 for modeling reflectance on 10 July 2017.
Added Training Years | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
2013 | 1.73, 2.01, 3.47 | 2.36, 2.78, 4.62 | 85.43, 84.17, 78.66 | 1.7927 | 25.2327
2013 and 2014 | 1.69, 1.99, 3.45 | 2.29, 2.75, 4.62 | 86.58, 84.69, 78.72 | 1.7960 | 24.8839
2013 to 2015 | 1.66, 1.93, 3.38 | 2.25, 2.66, 4.45 | 87.53, 85.81, 80.64 | 1.8020 | 24.1563
2013 to 2016 | 1.66, 1.93, 3.37 | 2.25, 2.66, 4.47 | 87.58, 85.81, 80.37 | 1.8018 | 24.2050
Table 10. Assessment indices from the proposed fusion with the urban data acquired from 2013 to 2016 for modeling reflectance on 12 September 2017.
Added Training Years | AAD × 10² (G, R, NIR) | RMSE × 10² (G, R, NIR) | SSIM × 10² (G, R, NIR) | SAM | ERGAS
2013 | 1.69, 1.95, 3.38 | 2.13, 2.52, 4.4 | 88.55, 86.89, 81.97 | 1.7921 | 28.1927
2013 and 2014 | 1.69, 1.94, 3.36 | 2.13, 2.5, 4.3 | 88.76, 87.18, 83.32 | 1.7941 | 27.8998
2013 to 2015 | 1.68, 1.91, 3.34 | 2.12, 2.46, 4.25 | 88.98, 87.58, 83.86 | 1.7955 | 27.6131
2013 to 2016 | 1.69, 1.92, 3.36 | 2.13, 2.48, 4.41 | 88.87, 87.29, 82.07 | 1.7934 | 28.0341
