Article

A Hybrid Color Mapping Approach to Fusing MODIS and Landsat Images for Forward Prediction

Chiman Kwan 1,*, Bence Budavari 1, Feng Gao 2 and Xiaolin Zhu 3
1 Signal Processing, Inc., Rockville, MD 20850, USA
2 Hydrology & Remote Sensing Lab, USDA ARS, Beltsville, MD 20704, USA
3 Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Kowloon 999077, Hong Kong, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(4), 520; https://doi.org/10.3390/rs10040520
Submission received: 6 March 2018 / Revised: 22 March 2018 / Accepted: 24 March 2018 / Published: 26 March 2018
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: We present a simple and efficient approach to fusing MODIS and Landsat images. It is well known that MODIS images have high temporal resolution and low spatial resolution, whereas Landsat images are just the opposite. Similar to earlier approaches, our goal is to fuse MODIS and Landsat images to yield images with both high spatial and high temporal resolution. Our approach consists of two steps. First, a mapping is established between two MODIS images, one collected at an earlier time, t1, and the other at the time of prediction, tp. Second, this mapping is applied to a known Landsat image at t1 to generate a predicted Landsat image at tp. Similar to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SpatioTemporal Image-Fusion Model (STI-FM), and the Flexible Spatiotemporal DAta Fusion (FSDAF) approaches, only one pair of MODIS and Landsat images is needed for prediction. Using seven performance metrics, experiments involving actual Landsat and MODIS images demonstrate that the proposed approach achieves fusion performance comparable to or better than that of STARFM, STI-FM, and FSDAF.

1. Introduction

Fusing high spatial resolution/low temporal resolution Landsat images with low spatial resolution/high temporal resolution MODIS images has many applications, such as drought monitoring, fire damage assessment, and flood damage monitoring. In [1], a fusion approach known as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) was proposed and demonstrated. STARFM has been used to generate daily time-series vegetation indices and evapotranspiration for crop condition and drought monitoring [1,2]. Several alternative algorithms [3,4] were published to further improve fusion performance. The Bayesian prediction approach [5,6,7], which was also proposed for fusing satellite images with complementary characteristics, can serve as an alternative fusion method for Landsat and MODIS. According to the survey paper [8], the STAARCH approach [3] can handle abrupt changes, but requires two pairs of MODIS and Landsat images. The ESTARFM algorithm [4] focuses on improving performance for mixed pixels. Similar to STAARCH, ESTARFM also requires two pairs of images. As a result, neither STAARCH nor ESTARFM is well suited for forward prediction.
More recently, the Flexible Spatiotemporal DAta Fusion (FSDAF) algorithm was proposed, which uses one pair of MODIS and Landsat images for forward prediction [9]. The method has six steps, including land cover classification, grouping of neighboring pixels, etc. Its computational load is heavy: one prediction may take more than one hour for an image of size 1200 × 1200. Moreover, since there are multiple steps in the prediction process, it is hard to determine which step contributes the most to the prediction performance. A relatively simple and more efficient algorithm, the SpatioTemporal Image-Fusion Model (STI-FM) [10], first applies clustering to the images and then performs a separate prediction for each cluster. In addition to the above algorithms, several alternative fusion ideas [11,12,13,14,15,16] have been proposed and evaluated; concise reviews of these methods can be found in [10]. In this paper, we present a simple approach to fusing MODIS and Landsat images for forward prediction. Our approach was motivated by our recent pansharpening work, in which a high resolution hyperspectral image cube is synthesized by fusing a high resolution color image with a low resolution hyperspectral image cube [17]. The approach is called hybrid color mapping (HCM) and has also been applied to enhance Mastcam images [18] onboard the Curiosity rover and images from the Mars orbital imagers THEMIS and TES [19]. HCM is simple, intuitive, and computationally efficient compared to other algorithms in the literature, and several recent comparative studies [18,19,20] have demonstrated its high performance. Similar to STARFM, STI-FM, and FSDAF, only one pair of MODIS and Landsat images is needed for prediction.
Our paper is organized as follows. Section 2 introduces our approach. Section 3 presents the experimental results, in which two types of real images are used: relatively homogeneous scenes and the more challenging heterogeneous scenes. Discussions are included in Section 4. Finally, conclusions and future directions are given in Section 5.

2. Materials and Methods

Our approach is simple and intuitive, and consists of two steps. First, a mapping is established between two MODIS images, one collected at an earlier time, $t_1$, and the other at the time of prediction, $t_p$. To achieve better prediction results, it is necessary to divide the images into small patches, which can be either overlapping or non-overlapping; the mapping is then obtained for each patch. It should be noted that the mapping is between a pixel vector in one image and the corresponding pixel vector in the other image. Second, this mapping is applied to a known Landsat image at $t_1$ to generate a predicted Landsat image at $t_p$.

2.1. A Simple Motivating Example of Our Approach

We use one band of MODIS and Landsat to illustrate our approach. Figure 1 shows two pairs of MODIS (M1 and M2) and Landsat (L1 and L2) images. The pairs (M1, L1) and (M2, L2) were each collected on the same day. From Figure 1, we make two observations. First, the MODIS images can be treated as blurred versions of their Landsat counterparts. Second, the intensity relationships among the MODIS pixels are similar to those among the Landsat pixels. For example, the middle of L1 is darker than the rest of the scene, and this can also be seen in M1. Similarly, the middle part of L2 is slightly brighter than the rest, and the same can be seen in M2. These observations indicate that the heterogeneous landscape information is captured in the MODIS images. If we can capture the intensity mapping between the MODIS images at two different times, then we can use that mapping to predict the Landsat image at time $t_p$ from the Landsat image at $t_k$. Although this idea is simple and intuitive, the prediction results based on it were quite accurate in previous experiments [19,20].
To further substantiate the above observations, we include some statistics (Table 1) from four pairs of MODIS and Landsat images. Days 128 and 144 belong to homogeneous regions and Days 214 and 246 are heterogeneous regions. It can be easily seen that the means and the standard deviations of MODIS and Landsat images are indeed very close, corroborating the visual observations in Figure 1.

2.2. Proposed Approach Based on Hybrid Color Mapping (HCM)

Figure 2 illustrates our proposed prediction approach. Based on the available MODIS images collected at $t_k$ and $t_p$, we learn the pixel-by-pixel mapping between the two images. The learned matrix $F$ is then applied in the prediction step. Using notation similar to that of related work [1], the prediction of the Landsat image at $t_p$ can be written as
$$L(x, y, t_p) = F \times L(x, y, t_k) \quad (1)$$
where $L(\cdot,\cdot,\cdot)$ denotes a pixel vector (of length up to $Q$, with $Q$ being the number of bands) and $F$ is a pixel-to-pixel mapping/transformation matrix of appropriate dimensions. $F$ can be determined using the relationship
$$M(x, y, t_p) = F \times M(x, y, t_k) \quad (2)$$
where $M(\cdot,\cdot,\cdot)$ denotes a MODIS pixel vector ($Q$ bands). To account for intensity differences between the two images, a variant of (2) can be written as
$$M(x, y, t_p) = F_1 \times M(x, y, t_k) + F_2 \quad (3)$$
where $F_2$ is a vector of constants.
Without loss of generality, let us focus on the determination of $F$ in (2). Denote by $Y_{t_p}$ the matrix collecting all of the multispectral pixels $M(x, y, t_p) \in \mathbb{R}^Q$ of the image at $t_p$, and by $Y_{t_k}$ the matrix collecting all of the multispectral pixels $M(x, y, t_k) \in \mathbb{R}^Q$ of the image at $t_k$, where $Q$ is the number of bands. Since $M(x, y, t_p)$ and $M(x, y, t_k)$ are vectors, $Y_{t_p}$ and $Y_{t_k}$ can be expressed as
$$Y_{t_p} = [\, M(1, 1, t_p) \;\cdots\; M(N_R, N_C, t_p) \,] \quad (4)$$
$$Y_{t_k} = [\, M(1, 1, t_k) \;\cdots\; M(N_R, N_C, t_k) \,] \quad (5)$$
where $N_R$ and $N_C$ are the numbers of rows and columns, respectively.
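For readers who prefer code, the following is a minimal sketch (Python with NumPy, not the authors' released implementation) of how the matrices $Y_{t_k}$ and $Y_{t_p}$ in (4) and (5) can be assembled from co-registered MODIS images; the variable names modis_tk and modis_tp are illustrative.

```python
import numpy as np

def stack_pixels(img):
    """Reshape an (N_R, N_C, Q) multispectral image into a Q x (N_R * N_C)
    matrix whose columns are the pixel vectors M(x, y, .) of (4)-(5)."""
    n_rows, n_cols, q = img.shape
    return img.reshape(n_rows * n_cols, q).T

# Illustrative usage (modis_tk and modis_tp are hypothetical arrays holding
# the co-registered MODIS images at t_k and t_p on a common grid):
# Y_tk = stack_pixels(modis_tk)
# Y_tp = stack_pixels(modis_tp)
```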
We call the mapping $F$ in (2) the global version, since all of the pixels in $Y_{t_k}$ and $Y_{t_p}$ are used in estimating $F$. To estimate $F$, we use the least squares approach, which minimizes the error
$$E = \sum_{x=1}^{N_R} \sum_{y=1}^{N_C} P(x, y)^{T} P(x, y) \quad (6)$$
where $P(x, y) = M(x, y, t_p) - F\, M(x, y, t_k)$.
Following the definition of the Frobenius norm [21], (6) is equivalent to
$$E = \| Y_{t_p} - F\, Y_{t_k} \|_F^2 \quad (7)$$
Solving for $F$ in (7) proceeds as follows. Expanding the norm gives
$$E = \| Y_{t_p} - F Y_{t_k} \|_F^2 = \mathrm{tr}\big( (Y_{t_p} - F Y_{t_k}) (Y_{t_p} - F Y_{t_k})^T \big) = \mathrm{tr}[\, Y_{t_p} Y_{t_p}^T \,] - \mathrm{tr}[\, Y_{t_p} Y_{t_k}^T F^T \,] - \mathrm{tr}[\, F Y_{t_k} Y_{t_p}^T \,] + \mathrm{tr}[\, F Y_{t_k} Y_{t_k}^T F^T \,]. \quad (8)$$
Differentiating (8) with respect to F yields [22]
$$\frac{\partial E}{\partial F} = -2\, Y_{t_p} Y_{t_k}^T + 2\, F\, Y_{t_k} Y_{t_k}^T \quad (9)$$
Setting the above to zero will yield
$$F = Y_{t_p} Y_{t_k}^T \big( Y_{t_k} Y_{t_k}^T \big)^{-1} \quad (10)$$
It should be noted that, unlike conventional geometric image mappings such as 3 × 3 rotation or perspective transformation matrices, which map between spatial locations, $F$ is a pixel-to-pixel mapping between two multispectral pixel vectors.
To avoid instability, we can add a regularization term to (7). That is,
$$F = \arg\min_F \; \| Y_{t_p} - F\, Y_{t_k} \|_F + \lambda \, \| F \|_F \quad (11)$$
where λ is a regularization parameter and the optimal F becomes
$$F = Y_{t_p} Y_{t_k}^T \big( Y_{t_k} Y_{t_k}^T + \lambda I \big)^{-1} \quad (12)$$
where $I$ is an identity matrix with the same dimensions as $Y_{t_k} Y_{t_k}^T$.
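A minimal sketch of the closed-form solution (12) and the prediction step (1) is given below, assuming NumPy and the stack_pixels helper sketched above; it is an illustration rather than the authors' released code.

```python
import numpy as np

def estimate_mapping(Y_tp, Y_tk, lam=0.001):
    """Regularized least-squares estimate of F in (12):
    F = Y_tp Y_tk^T (Y_tk Y_tk^T + lam * I)^(-1)."""
    q = Y_tk.shape[0]
    gram = Y_tk @ Y_tk.T + lam * np.eye(q)           # Q x Q, symmetric
    # Solve F @ gram = Y_tp @ Y_tk^T without forming an explicit inverse.
    return np.linalg.solve(gram, (Y_tp @ Y_tk.T).T).T

def predict_landsat(F, landsat_tk):
    """Apply F to every Landsat pixel vector at t_k, as in (1)."""
    n_rows, n_cols, q = landsat_tk.shape
    pred = F @ landsat_tk.reshape(-1, q).T           # Q x (N_R * N_C)
    return pred.T.reshape(n_rows, n_cols, q)
```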
Remark 1.
Difference between the HCM in this paper and the HCM in [17].
Although the mathematical derivations are the same, the proposed HCM in this paper is different from that in [17]. The key difference is the implementation. In [17], the mapping is between a downsampled high resolution image and a low resolution hyperspectral image, whereas here the mapping is between two low resolution MODIS images.
Remark 2.
Addition of a bias term.
If the means of the two MODIS images are different, we can easily extend the above derivation to obtain a new expression for $F_1$ and $F_2$ in (3). This can be done by rewriting (3) as
$$M(x, y, t_p) = [\, F_1 \;\; F_2 \,] \begin{bmatrix} M(x, y, t_k) \\ 1 \end{bmatrix} = F_a \times M_a \quad (13)$$
That is, we simply augment the vector $M$ with one additional row containing 1. After that, the derivation proceeds exactly as before.
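As a sketch of Remark 2 (again assuming NumPy; an illustration under the same least-squares setup as the sketch above, not the authors' implementation), the bias term can be handled by augmenting each pixel vector at $t_k$ with a constant 1:

```python
import numpy as np

def estimate_mapping_with_bias(Y_tp, Y_tk, lam=0.001):
    """Estimate F1 and F2 of (3)/(13) from the augmented system."""
    n = Y_tk.shape[1]
    Y_aug = np.vstack([Y_tk, np.ones((1, n))])        # (Q + 1) x N
    gram = Y_aug @ Y_aug.T + lam * np.eye(Y_aug.shape[0])
    Fa = np.linalg.solve(gram, (Y_tp @ Y_aug.T).T).T  # Q x (Q + 1)
    return Fa[:, :-1], Fa[:, -1:]                     # F1 (Q x Q), F2 (Q x 1)
```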
Remark 3.
Local Mapping.
Based on our observations, prediction results are more accurate in some cases if we divide the images into patches, with each patch having its own mapping matrix. Figure 3 illustrates the local prediction approach. The patches can be overlapping or non-overlapping. In this paper, overlapping patches were used for homogeneous areas and non-overlapping patches were used for heterogeneous areas. A simplified sketch of this patch-wise prediction is given below.
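The following sketch shows the non-overlapping case only and reuses the stack_pixels, estimate_mapping, and predict_landsat helpers sketched above; it assumes the MODIS images have already been resampled to the Landsat grid, and the overlapping-patch blending used for the homogeneous data set is omitted.

```python
import numpy as np

def local_hcm_predict(modis_tk, modis_tp, landsat_tk, patch=80, lam=0.001):
    """Estimate one mapping matrix F per patch and apply it to the
    corresponding Landsat patch (Remark 3), non-overlapping case."""
    pred = np.zeros_like(landsat_tk)
    n_rows, n_cols, _ = landsat_tk.shape
    for r in range(0, n_rows, patch):
        for c in range(0, n_cols, patch):
            win = (slice(r, min(r + patch, n_rows)),
                   slice(c, min(c + patch, n_cols)))
            F = estimate_mapping(stack_pixels(modis_tp[win]),
                                 stack_pixels(modis_tk[win]), lam)
            pred[win] = predict_landsat(F, landsat_tk[win])
    return pred
```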
Remark 4.
Band by band mapping.
The above derivation performs the forward prediction jointly for all of the bands. Alternatively, the mapping can also be carried out band by band.
Remark 5.
Only one pair of MODIS and Landsat images.
Unlike STAARCH and ESTARFM, our approach does not require two pairs of MODIS and Landsat images for prediction. This will be ideal for forward prediction, where only past measurements of Landsat images are available.
Remark 6.
One mapping per cluster.
One can also perform image clustering/segmentation first and then apply HCM to each individual cluster. We implemented this approach and the results are similar to those of the non-clustering approach. Discussions on this clustering approach are summarized in Section 4.

2.3. Evaluation Metrics

We evaluate the performance of the algorithms using the following seven objective metrics; computational times are also reported in our comparative studies. A short sketch of the simplest metrics follows the list.
  • Absolute Difference (AD). The AD of two vectorized images S (ground truth) and S ^ (prediction) is defined as
    $$\mathrm{AD}(S, \hat{S}) = \frac{1}{Z} \sum_{j=1}^{Z} | s_j - \hat{s}_j |$$
    where Z is the number of pixels in each image. The ideal value of AD is 0 if the prediction is perfect.
  • RMSE (Root Mean Squared Error). The RMSE of two vectorized images S (ground truth) and S ^ (prediction) is defined as
    $$\mathrm{RMSE}(S, \hat{S}) = \sqrt{ \frac{1}{Z} \sum_{j=1}^{Z} ( s_j - \hat{s}_j )^2 }$$
    where Z is the number of pixels in each image. The ideal value of RMSE is 0 if the prediction is perfect.
  • CC (Cross-Correlation). We used the codes from Open Remote Sensing website (https://openremotesensing.net/). The ideal value of CC is 1 if the prediction is perfect.
  • ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse). We used the codes from [23]. The ERGAS is defined as
    $$\mathrm{ERGAS}(S, \hat{S}) = 100\, d\, \frac{\mathrm{RMSE}}{\mu}$$
    where $d$ is a constant depending on the resolution and $\mu$ is the mean of the ground truth image. The ideal value of ERGAS is 0 if a prediction algorithm flawlessly reconstructs the Landsat bands.
  • SSIM (Structural Similarity). It is a metric to reflect the similarity between two images. An equation of SSIM can be found in [9]. The ideal value of SSIM is 1 for perfect prediction.
  • SAM (Spectral Angle Mapper) [23]. The spectral angle mapper measures the angle between two vectors. The ideal value of SAM is 0 for perfect reconstruction.
  • Q2n: A definition for Q2n can be found in [24,25]. The ideal value of Q2n is 1. The codes can be downloaded from Open Remote Sensing website.
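For reference, a small sketch (Python/NumPy, illustrative rather than the exact evaluation code used here) of the three simplest metrics is given below; the CC, SSIM, SAM, and Q2n implementations mentioned above were taken from the cited packages.

```python
import numpy as np

def ad(s, s_hat):
    """Absolute Difference between ground truth s and prediction s_hat."""
    return np.mean(np.abs(s - s_hat))

def rmse(s, s_hat):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((s - s_hat) ** 2))

def ergas_band(s, s_hat, d):
    """Single-band ERGAS as defined above: 100 * d * RMSE / mean(s),
    with d a resolution-dependent constant."""
    return 100.0 * d * rmse(s, s_hat) / np.mean(s)
```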

3. Results

In this section, we present extensive experimental results. Since our main interest is in forward prediction, we only compare with STARFM, STI-FM, and FSDAF.

3.1. Data Set 1: Scene Contents Are Homogeneous

Data set 1 is the Boreal Ecosystem–Atmosphere Study (BOREAS) southern study area (54.6°N, 105.8°W), which was used by Gao et al. [26] in their data fusion test. This is a relatively homogeneous area. The major land cover type is forest, with subsidiary fen and sparse vegetation, and the land cover patches are large. Landsat and MODIS data were reprocessed using the latest collections available when we started this study. Landsat surface reflectance images (L1T) were ordered from the U.S. Geological Survey (USGS). MODIS daily surface reflectance products (MOD09GA, Collection 6) [27] were corrected to nadir BRDF-adjusted reflectance (NBAR) using MODIS BRDF products (MCD43A1, Collection 5) [26]. Note that Collection 6 of the daily MODIS NBAR at 500 m resolution is now available and can be used directly. Co-registration between Landsat and MODIS was applied to all of the Landsat-MODIS image pairs using a maximum correlation approach [27]. Four Landsat-MODIS image pairs (days 128, 144, 192, and 224) in 2001 were re-processed for the study. Each image has six bands. Six forward prediction experiments can therefore be performed.
The parameters of our prediction algorithm for the first data set are as follows. Band-by-band prediction was used and no bias term was introduced. The images were divided into patches with a patch size of 80 Landsat pixels, and the overlap between adjacent patches was 40 pixels. The regularization parameter was set to 0.001.
Table 2, Table 3 and Table 4 summarize the forward prediction results of using Landsat image 128 to predict Landsat image 144, Landsat image 144 to predict Landsat image 192, and Landsat image 192 to predict Landsat image 224, respectively. The tables are arranged in a manner similar to that of [9]. The first block of each table compares the source Landsat image and the ground truth Landsat image to be predicted. Cloud masks were applied when calculating the values in the tables. In Table 2, one can see that our prediction results in terms of the AD, RMSE, CC, ERGAS, SSIM, and Q2n metrics are better than those of STARFM, STI-FM, and FSDAF in almost all of the bands. In the false color image (formed by treating NIR, Red, and Green as Red, Green, and Blue, respectively) shown in Figure 4, the zoomed-in area inside the red circles shows that our results recover more fine details than STARFM and are comparable to FSDAF and STI-FM. Table 3 shows trends similar to those in Table 2. This is reflected in Figure 5, where, inside the red circles, our results preserve the color information slightly better than STARFM and STI-FM and are comparable to FSDAF. From Table 4, our results again give better values in almost all of the metrics compared to those of STARFM, STI-FM, and FSDAF. In Figure 6, one can see that the STARFM image has an area in the center/bottom half (inside the white circle) that appears to have some spectral distortion, while our predicted image looks much more like the actual image. Table 4 also shows that our prediction, especially for the NIR band, is much closer to the ground truth image.

3.2. Data Set 2: Scene Contents Are Heterogeneous

Data set 2 is a rain-fed agricultural area in central Iowa (42.4°N, 93.4°W). The major crops were corn and soybean, which were planted at different times and had different crop phenology. This is a relatively heterogeneous area. The same data processing procedures used for Data set 1 were applied. Three Landsat-MODIS image pairs (days 182, 214, and 246) in 2002 were processed.
The MODIS daily NBAR images were resampled to match the Landsat 30 m resolution using a bilinear interpolation approach. Both data sets have an image size of 1200 × 1200 pixels. For each experiment, one image pair was used to predict the Landsat observation at another pair date, and the prediction was then compared to the actual Landsat reflectance. There are three pairs of MODIS and Landsat images in this data set, each with six bands, so two forward prediction experiments can be performed.
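The resampling step can be sketched as follows (assuming SciPy; the 16x factor is illustrative of the nominal MODIS-to-Landsat grid ratio used in this paper, and the exact factor depends on the product and projection):

```python
from scipy.ndimage import zoom

def resample_modis_to_landsat(modis_band, factor=16):
    """Bilinear (order=1) upsampling of one MODIS band onto the 30 m
    Landsat grid, as done in the pre-processing step described above."""
    return zoom(modis_band, factor, order=1)
```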
The parameters of our prediction algorithm for the second data set are as follows. The learning and prediction were performed band by band. For heterogeneous contents, we observed that small patch sizes are needed to achieve good prediction performance; patches with a size of two pixels were therefore used, with no overlap between patches. The regularization parameter was set to 0.001.
Table 5 summarizes the forward prediction results from Landsat image 182 to Landsat image 214. It can be seen that FSDAF has the best performance, followed by STI-FM, HCM, and STARFM. In terms of subjective comparisons, the false color images in Figure 7, created by using the near infrared (NIR), red, and green bands as the RGB bands, show that our results are slightly closer to the ground truth image in terms of color. Table 6 summarizes the prediction results from Landsat image 214 to Landsat image 246. Although our results are comparable to those of STARFM and STI-FM, and slightly inferior to those of FSDAF, the numbers are close. This is also reflected in Figure 8, where the prediction results of STARFM, STI-FM, FSDAF, and our approach are all visually very close.

4. Discussions

The STARFM algorithm is the pioneering work on the fusion of MODIS and Landsat images. FSDAF incorporates land cover classification and additional processing steps and yields better performance than STARFM in heterogeneous areas. Compared with FSDAF, our proposed HCM is much simpler and more efficient; in fact, HCM is comparable to STI-FM in terms of computational complexity. One may think that, since the resolution ratio between Landsat and MODIS is 16 to 1, the proposed HCM-based mapping may not be effective. However, we have applied HCM to enhance images with even greater resolution differences: WorldView-3 (25 to 1) [20] and THEMIS and TES (30 to 1) [19]. The results in [19,20], as well as the results in this paper, demonstrate that the HCM algorithm is a very competitive method for fusing different types of satellite images. More supporting arguments are presented in the following paragraphs.

4.1. Additional Simulation Studies Using Synthetic Data to Address 16:1 Resolution Concern for HCM

To further validate the performance of HCM, we carried out some additional studies. Similar to the FSDAF paper, we used a synthetic data set, shown in Figure 9. There are three areas besides the background, which has a magnitude of 0.5. Small Gaussian noise (0.001) was added to the images. The pixel magnitudes in the Landsat image at t1 are as follows: 0.01 in the circle (radius: 56 pixels) and 0.3 in the rectangle and the line. The pixel magnitudes in the Landsat image at t2 are 0.05 in the circle (radius: 72 pixels) and 0.2 in the rectangle and the line. The circles in the two Landsat images emulate gradual (phenological) changes, and the line and rectangle emulate land cover type changes. The MODIS images were generated by averaging 16 × 16 blocks of the Landsat images.
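A sketch of how such a synthetic pair can be generated is given below (Python/NumPy). The shape positions are illustrative; only the background value, circle radius, intensity levels, noise level, and the 16 × 16 block averaging follow the description above.

```python
import numpy as np

def make_synthetic_pair(size=480, radius=56, circle_val=0.01, other_val=0.3,
                        background=0.5, noise_std=0.001, seed=0):
    """Synthetic Landsat image (circle + rectangle + line on a 0.5
    background) and its MODIS counterpart from 16 x 16 block averaging."""
    rng = np.random.default_rng(seed)
    img = np.full((size, size), background)
    yy, xx = np.mgrid[:size, :size]
    img[(yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2] = circle_val
    img[40:80, 300:440] = other_val      # rectangle (position illustrative)
    img[360:366, 40:440] = other_val     # line (position illustrative)
    img += rng.normal(0.0, noise_std, img.shape)
    modis = img.reshape(size // 16, 16, size // 16, 16).mean(axis=(1, 3))
    return img, modis
```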
The prediction results are summarized in Figure 10, and Table 7 summarizes four performance metrics. It can be seen that the FSDAF results have smaller prediction errors than those of HCM; the HCM results have somewhat blurry peripheries. Because this synthetic data set resembles a heterogeneous landscape, the FSDAF method performed well. However, this simple example also substantiates that HCM does not have any serious issues; it just performs slightly worse than FSDAF in the heterogeneous cases. We did not include the STI-FM results here, as thorough comparisons were already carried out in Section 3.

4.2. Performance of HCM for Applications with 25:1 Resolution Difference

To further alleviate the concern that the proposed HCM method may not work when the spatial resolution difference is large, we mention some fusion results where the resolution difference between the two images is 25:1. This application is the enhancement of the eight SWIR bands (7.5 m resolution) using the pan band (0.31 m resolution) in WorldView-3 images; a paper has been published on this work [20]. As shown in Table 5 of [20], our HCM results are slightly better than other state-of-the-art methods in the literature in terms of three performance metrics, even though the resolution difference is 25:1.

4.3. Performance of HCM for Applications with 30:1 Resolution Difference

Here, we mention one more application of HCM to image fusion in which the spatial resolution difference between the two images is 30:1: the fusion of THEMIS (100 m resolution) and TES (3 km resolution) images for NASA's Mars exploration. A paper was presented at the 2017 IGARSS [19]. The performance of HCM was also excellent for this application; see Figure 4 and Table 1 in [19] for details. Those results clearly demonstrate that HCM can still perform well in a 30:1 resolution difference application, and Table 1 in [19] indicates that HCM actually has the best performance in this case.

4.4. Necessity and Importance of Having Diverse Methods for Image Fusion

We would like to argue that no single method can perform well under all conditions in many remote sensing applications. For example, recent research by our team focused on the fusion of Planet and Worldview images. In that application, the effective resolution of the Planet images is uncertain because it is considerably worse than the declared resolution of 3.125 m, and the images contain both homogeneous and heterogeneous regions. We carried out three case studies in that work and observed that none of the three methods (STARFM, FSDAF, and HCM) outperformed the others in all cases. This shows that it is important to have diversity in fusion methods. HCM is simple and efficient, and hence may be more suitable for near real-time applications; FSDAF and STARFM require more computation time and may be suitable for batch, off-line processing applications. The bottom line is that the remote sensing community needs some freedom in choosing the most appropriate fusion method for each individual's specific needs. In this respect, HCM offers a reasonable alternative.

4.5. Combining HCM with Image Clustering

We have implemented a general clustering approach, which does not require the cluster maps at different times to be the same. To perform the clustering, we apply k-means to the MODIS images at $t_k$ and $t_p$; the clusters can be dramatically different at the two times. Figure 11 illustrates the idea: at $t_k$ there are three clusters and at $t_p$ there are four clusters, and when we overlay the cluster maps we obtain six combinations of clusters. In general, if there are N clusters at $t_k$ and M clusters at $t_p$, then there will be at most NM combined clusters in total. In each combined cluster, we estimate a matrix F. A sketch of this cluster-combination step is given below. We applied this clustering approach to two cases, and the results are reported in the following subsections.
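The sketch below assumes scikit-learn's KMeans and illustrative variable names; one mapping matrix F would then be estimated per combined cluster using the least-squares solution of Section 2.2.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_combination_labels(modis_tk, modis_tp, n_tk=5, n_tp=5):
    """Cluster the MODIS images at t_k and t_p independently and combine
    the label maps: a pixel falls in combined cluster (i, j) if it belongs
    to cluster i at t_k and cluster j at t_p (at most n_tk * n_tp groups)."""
    n_rows, n_cols, q = modis_tk.shape
    lab_tk = KMeans(n_clusters=n_tk, n_init=10).fit_predict(modis_tk.reshape(-1, q))
    lab_tp = KMeans(n_clusters=n_tp, n_init=10).fit_predict(modis_tp.reshape(-1, q))
    combined = lab_tk * n_tp + lab_tp          # unique id for each (i, j) pair
    return combined.reshape(n_rows, n_cols)
```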

4.5.1. Example in Homogeneous Area

We used Day 128 to predict Day 144. We tried three cluster combinations: 50 × 50, 20 × 20, and 5 × 5, where the first number indicates the number of clusters at time $t_k$ and the second number is the number of clusters at $t_p$. From Table 8, it can be seen that the local HCM approach is slightly better than the cluster-based approach.

4.5.2. Example in Heterogeneous Area

We used Day 214 to predict Day 246 and again tried three cluster combinations: 50 × 50, 20 × 20, and 5 × 5. From Table 9, it can be seen that the two approaches have very close performance, with the local-based method having a very slight edge over the cluster-based approach.
When comparing the cluster-based HCM with non-cluster/local based HCM, we see that the results are mixed. We do see some cases where the cluster-based approach is slightly better, but in other cases, the non-cluster based HCM performs well. A simple explanation is that our non-cluster based HCM is local in nature. Since we normally set the patch size to be small for heterogeneous areas, the pixels within those small patches generally do not have much variation.
We would also like to mention that there is another clustering based method (STI-FM) [10]. The idea is to use the ratio of MODIS pixels at two different times to perform clustering. We implemented that algorithm and included the results in this paper. See Table 2, Table 3, Table 4, Table 5 and Table 6 in Section 3.
In short, the use of clustering based approach could be a good future research direction, especially in the case of heterogeneous images.

4.6. General Comments and Observations

For homogeneous regions, our proposed HCM algorithm performed the best in terms of objective evaluations using seven performance metrics. See Table 2, Table 3 and Table 4 for details. Those seven performance metrics are widely used in the literature to compare image fusion and pansharpening algorithms.
In terms of subjective visualization for homogeneous regions, one can see that our results are comparable to or better than the others. Some details in Figure 4, Figure 5 and Figure 6 clearly show more closeness to the ground truth for the proposed method.
For heterogeneous regions, the proposed HCM method is slightly inferior to FSDAF, which explicitly incorporates land cover classification. However, in some bands, such as SW1 and SW2, our results are comparable to or better than those of FSDAF in terms of objective metrics. In terms of subjective comparisons, our prediction performance is comparable to that of the other methods.
In terms of computational complexity, STARFM completes the prediction of one image in less than 3 min, while FSDAF takes about 1.5 h for both datasets. There is a fast variant of FSDAF in that package, but its prediction performance is not as good as the slow version. STI-FM completes a forward prediction in about 2 s for both datasets. The computational time for HCM is comparable to that of STI-FM, but varies depending on the dataset, since different parameters were used for each. For the homogeneous dataset, the computational time for a single prediction is roughly 10 s, while for the heterogeneous dataset it is about 8 min.
Since none of the prediction methods can work well under all situations, it might be better to adopt a hybrid approach for Landsat and MODIS image fusion. That is, we propose to use HCM for homogeneous areas and FSDAF or STI-FM for heterogeneous areas.
One future research direction is to incorporate the high temporal resolution fused images into a remote sensing data product, such as fire damage assessment. Another potential direction is to apply a deep learning approach to learn the mapping between the MODIS images and then use that mapping for Landsat image prediction. A third direction is to utilize the high temporal and high spatial resolution images for anomaly detection, border monitoring, and target detection.

5. Conclusions

In this paper, we presented a simple, high-performance forward prediction approach to generating Landsat images with high temporal resolution. The idea is based on learning a mapping between MODIS images; once the mapping is learned, it is applied to a Landsat image collected at an earlier time in order to predict a future Landsat image. Compared to fusion approaches such as STAARCH and ESTARFM, our approach does not require two pairs of MODIS and Landsat images, and hence is more appropriate for forward prediction. Experiments using actual MODIS and Landsat images demonstrated that the proposed approach achieves performance comparable to that of STARFM, STI-FM, and FSDAF; the performance metrics were compared to at least three decimal places, and the outcomes were similar across the methods. In addition, the proposed HCM and STI-FM are computationally much simpler than the other two methods.

Author Contributions

Chiman Kwan conceived the overall approach. Bence Budavari and Chiman Kwan generated all the figures in this paper. Feng Gao provided the data sets in the experiments and the STARFM results. Xiaolin Zhu provided the synthetic data set as well as the FSDAF results.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  2. Gao, F.; Anderson, M.; Zhang, X.; Yang, Z.; Alfieri, J.; Kustas, B.; Mueller, R.; Johnson, D.; Prueger, J. Mapping crop progress at field scales using Landsat and MODIS imagery. Remote Sens. Environ. 2017, 188, 9–25. [Google Scholar] [CrossRef]
  3. Hilker, T.; Wulder, A.M.; Coops, N.C.; Linke, J.; McDermid, G.; Masek, J.G.; Gao, F.; White, J.C. A new data fusion model for high spatial- and temporal-resolution mapping of forest disturbance based on Landsat and MODIS. Remote Sens. Environ. 2009, 113, 1613–1627. [Google Scholar] [CrossRef]
  4. Zhu, X.; Chen, J.; Gao, F.; Chen, X.; Masek, J.G. An enhanced spatial and temporal adaptive reflectance fusion model for complex heterogeneous regions. Remote Sens. Environ. 2010, 114, 2610–2623. [Google Scholar] [CrossRef]
  5. Addesso, P.; Conte, R.; Longo, M.; Restaino, R.; Vivone, G. Sequential Bayesian methods for resolution enhancement of TIR image sequences. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 233–243. [Google Scholar] [CrossRef]
  6. Addesso, P.; Conte, R.; Longo, M.; Restaino, R.; Vivone, G. A sequential Bayesian procedure for integrating heterogeneous remotely sensed data for irrigation management. In Proceedings of the Remote Sensing for Agriculture, Ecosystems, and Hydrology XIV, Edinburgh, UK, 19 October 2012; Volume 8531, p. 85310C. [Google Scholar]
  7. Agam, N.; Kustas, W.P.; Anderson, M.C.; Li, F.; Neale, C.M. A vegetation index based technique for spatial sharpening of thermal imagery. Remote Sens. Environ. 2007, 107, 545–558. [Google Scholar] [CrossRef]
  8. Gao, F.; Hilker, T.; Zhu, X.; Anderson, M.; Masek, J.; Wang, P.; Yang, Y. Fusing Landsat and MODIS Data for Vegetation Monitoring. IEEE Geosci. Remote Sens. Mag. 2015, 3, 47–60. [Google Scholar] [CrossRef]
  9. Zhu, X.; Helmer, E.; Gao, F.; Liu, D.; Chen, J.; Lefsky, M. A flexible spatiotemporal method for fusing satellite images with different resolutions. Remote Sens. Environ. 2016, 172, 165–177. [Google Scholar] [CrossRef]
  10. Hazaymeh, K.; Hassan, Q.K. Spatiotemporal image-fusion model for enhancing temporal resolution of Landsat-8 surface reflectance images using MODIS images. J. Appl. Remote Sens. 2015, 9, 096095. [Google Scholar] [CrossRef]
  11. Roy, D.P.; Ju, J.; Lewis, P.; Schaaf, C.; Gao, F.; Hansen, M.; Lindquist, E. Multi-temporal MODIS–Landsat data fusion for relative radiometric normalization, gap filling, and prediction of Landsat data. Remote Sens. Environ. 2008, 112, 3112–3130. [Google Scholar] [CrossRef]
  12. Fu, D.; Chen, B.; Wang, J.; Zhu, X.; Hilker, T. An improved image fusion approach based on enhanced spatial and temporal the adaptive reflectance fusion model. Remote Sens. 2013, 5, 6346–6360. [Google Scholar] [CrossRef]
  13. Wu, M.; Niu, Z.; Wang, C.; Wu, C.; Wang, L. Use of MODIS and Landsat time series data to generate high-resolution temporal synthetic Landsat data using a spatial and temporal reflectance fusion model. J. Appl. Remote Sens. 2012, 6, 063507. [Google Scholar]
  14. Zhang, W.; Li, A.; Jin, H.; Bian, J.; Zhang, Z.; Lei, G.; Qin, Z.; Huang, C. An enhanced spatial and temporal data fusion model for fusing Landsat and MODIS surface reflectance to generate high temporal Landsat-like data. Remote Sens. 2013, 5, 5346–5368. [Google Scholar] [CrossRef]
  15. Huang, B.; Song, H. Spatiotemporal reflectance fusion via sparse representation. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3707–3716. [Google Scholar] [CrossRef]
  16. Song, H.; Huang, B. Spatiotemporal satellite image fusion through one-pair image learning. IEEE Trans. Geosci. Remote Sens. 2013, 51, 1883–1896. [Google Scholar] [CrossRef]
  17. Zhou, J.; Kwan, C.; Budavari, B. Hyperspectral image super-resolution: A hybrid color mapping approach. J. Appl. Remote Sens. 2016, 10, 035024. [Google Scholar] [CrossRef]
  18. Kwan, C.; Budavari, B.; Dao, M.; Ayhan, B.; Bell, J.F. Pansharpening of Mastcam images. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017. [Google Scholar]
  19. Kwan, C.; Ayhan, B.; Budavari, B. Fusion of THEMIS and TES for Accurate Mars Surface Characterization. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017. [Google Scholar]
  20. Kwan, C.; Budavari, B.; Bovik, A.C.; Marchisio, G. Blind Quality Assessment of Fused WorldView-3 Images by Using the Combinations of Pansharpening and Hypersharpening Paradigms. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1835–1839. [Google Scholar] [CrossRef]
  21. Balgopal, R. Applications of the Frobenius Norm Criterion in Multivariate Analysis; University of Alabama: Tuscaloosa, AL, USA, 1996. [Google Scholar]
  22. Matrix Calculus. Available online: http://www.psi.toronto.edu/matrix/calculus.html (accessed on 20 November 2017).
  23. Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A Critical Comparison among Pansharpening Algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586. [Google Scholar] [CrossRef]
  24. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
  25. Garzelli, A.; Nencini, F. Hypercomplex quality assessment of multi-/hyper-spectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665. [Google Scholar] [CrossRef]
  26. Gao, F.; Masek, J.; Wolfe, R.; Huang, C. Building consistent medium resolution satellite data set using moderate resolution imaging spectroradiometer products as reference. J. Appl. Remote Sens. 2010, 4, 043526. [Google Scholar]
  27. Vermote, E.F.; El Saleous, N.Z.; Justice, C.O. Atmospheric correction of MODIS data in the visible to middle infrared: First results. Remote Sens. Environ. 2002, 83, 97–111. [Google Scholar] [CrossRef]
Figure 1. Relationship between two pairs of MODIS and Landsat images. One band is shown here. (a,b) are MODIS images collected at two different times; (c,d) are Landsat images collected at two different times.
Figure 2. Proposed prediction approach.
Figure 3. Proposed prediction approach based on local mapping.
Figure 4. Comparison of forward prediction results of using Landsat image 128 to predict Landsat image 144. (a) Actual image; (b) Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM); (c) Flexible Spatiotemporal Data Fusion (FSDAF); (d) SpatioTemporal Image-Fusion Model (STI-FM); (e) hybrid color mapping (HCM).
Figure 5. Comparison of forward prediction results of using Landsat image 144 to predict Landsat image 192. (a) Actual image; (b) STARFM; (c) FSDAF; (d) STI-FM; (e) HCM.
Figure 6. Comparison of forward prediction results of using Landsat image 192 to predict Landsat image 224. (a) Actual image; (b) STARFM; (c) FSDAF; (d) STI-FM; (e) HCM.
Figure 7. Comparison of forward prediction results of using Landsat image 182 to predict Landsat image 214. (a) Actual image; (b) STARFM; (c) FSDAF; (d) STI-FM; and, (e) HCM.
Figure 8. Comparison of forward prediction results of using Landsat image 214 to predict Landsat image 246. (a) Actual image; (b) STARFM; (c) FSDAF; (d) STI-FM; and, (e) HCM.
Figure 9. Synthetic data for Landsat and MODIS at two different times. The image size is 480 × 480. (a,b) are synthetic Landsat images at two different times; (c,d) are synthetic MODIS images at two different times.
Figure 10. Comparison of FSDAF and HCM prediction results. (a) is the Landsat image at t2; (b) and (c) are the prediction images using FSDAF and HCM, respectively.
Figure 11. Illustration of clustering combinations.
Table 1. Mean and standard deviation of MODIS and Landsat images at different dates.
| Day | Image | Mean | Standard Deviation |
|---|---|---|---|
| 128 | MODIS | 0.085 | 0.057 |
| 128 | Landsat | 0.089 | 0.058 |
| 144 | MODIS | 0.090 | 0.064 |
| 144 | Landsat | 0.090 | 0.066 |
| 214 | MODIS | 0.137 | 0.149 |
| 214 | Landsat | 0.139 | 0.155 |
| 246 | MODIS | 0.135 | 0.124 |
| 246 | Landsat | 0.133 | 0.125 |
Table 2. Comparison of forward prediction results of using Landsat image 128 to predict Landsat image 144.
| Method | Band | AD | RMSE | CC | SAM | SSIM | ERGAS | Q2N | Overall Q2N | Overall ERGAS |
|---|---|---|---|---|---|---|---|---|---|---|
| HCM | NIR | 0.0076 | 0.01 | 0.9699 | 9.71 × 10^−8 | 0.9679 | 0.4638 | 0.888 | 0.836 | 0.9174 |
| HCM | Red | 0.0043 | 0.006 | 0.9475 | 1.07 × 10^−7 | 0.9815 | 1.0247 | 0.8498 | | |
| HCM | Green | 0.0045 | 0.006 | 0.9114 | 1.11 × 10^−7 | 0.9858 | 0.9485 | 0.751 | | |
| HCM | Blue | 0.004 | 0.0052 | 0.8524 | 1.06 × 10^−7 | 0.9827 | 1.1754 | 0.6183 | | |
| HCM | SW1 | 0.0068 | 0.01 | 0.9814 | 3.77 × 10^−3 | 0.9712 | 0.5571 | 0.9251 | | |
| HCM | SW2 | 0.0065 | 0.0094 | 0.9705 | 1.48 × 10^−1 | 0.9674 | 0.9163 | 0.8817 | | |
| STARFM | NIR | 0.0089 | 0.0316 | 0.7645 | 1.55 × 10^−1 | 0.9593 | 1.4716 | 0.8642 | 0.8152 | 3.3102 |
| STARFM | Red | 0.0047 | 0.0172 | 0.6703 | 4.49 × 10^−2 | 0.9802 | 2.9705 | 0.8474 | | |
| STARFM | Green | 0.0048 | 0.0188 | 0.479 | 5.43 × 10^−2 | 0.9849 | 2.9963 | 0.7551 | | |
| STARFM | Blue | 0.0044 | 0.0185 | 0.385 | 5.27 × 10^−2 | 0.9807 | 4.1505 | 0.6119 | | |
| STARFM | SW1 | 0.0083 | 0.0293 | 0.8614 | 1.35 × 10^−1 | 0.9635 | 1.6291 | 0.9039 | | |
| STARFM | SW2 | 0.0069 | 0.0253 | 0.8279 | 2.25 × 10^−1 | 0.9602 | 2.4753 | 0.8641 | | |
| FSDAF | NIR | 0.0086 | 0.015 | 0.9266 | 7.99 × 10^−8 | 0.9599 | 0.6919 | 0.8722 | 0.8197 | 1.382 |
| FSDAF | Red | 0.0048 | 0.0086 | 0.8825 | 8.78 × 10^−8 | 0.9782 | 1.4573 | 0.8385 | | |
| FSDAF | Green | 0.0047 | 0.0084 | 0.7775 | 9.21 × 10^−8 | 0.984 | 1.3224 | 0.7515 | | |
| FSDAF | Blue | 0.0047 | 0.0083 | 0.6743 | 8.78 × 10^−8 | 0.9791 | 1.8207 | 0.5995 | | |
| FSDAF | SW1 | 0.0082 | 0.015 | 0.9565 | 3.64 × 10^−3 | 0.9622 | 0.8283 | 0.9114 | | |
| FSDAF | SW2 | 0.007 | 0.0122 | 0.949 | 1.31 × 10^−1 | 0.9613 | 1.178 | 0.8758 | | |
| STI-FM | NIR | 0.008 | 0.0113 | 0.9607 | 1.02 × 10^−7 | 0.9632 | 0.5206 | 0.8806 | 0.8224 | 1.1434 |
| STI-FM | Red | 0.0045 | 0.0071 | 0.9198 | 1.03 × 10^−7 | 0.9793 | 1.2185 | 0.8425 | | |
| STI-FM | Green | 0.0044 | 0.0066 | 0.8715 | 1.20 × 10^−7 | 0.9853 | 1.0468 | 0.7574 | | |
| STI-FM | Blue | 0.004 | 0.0061 | 0.7568 | 1.08 × 10^−7 | 0.9799 | 1.3997 | 0.5827 | | |
| STI-FM | SW1 | 0.0079 | 0.0117 | 0.9741 | 7.21 × 10^−2 | 0.9641 | 0.6524 | 0.9164 | | |
| STI-FM | SW2 | 0.0072 | 0.0107 | 0.9622 | 1.95 × 10^−1 | 0.9606 | 1.0359 | 0.8716 | | |
Table 3. Comparison of forward prediction results of using Landsat image 144 to predict Landsat image 192.
| Method | Band | AD | RMSE | CC | SAM | SSIM | ERGAS | Q2N | Overall Q2N | Overall ERGAS |
|---|---|---|---|---|---|---|---|---|---|---|
| HCM | NIR | 0.0149 | 0.0229 | 0.9331 | 1.07 × 10^−7 | 0.908 | 0.8809 | 0.7828 | 0.7238 | 1.392 |
| HCM | Red | 0.0053 | 0.0083 | 0.8489 | 1.32 × 10^−4 | 0.9667 | 1.8552 | 0.6499 | | |
| HCM | Green | 0.0051 | 0.0074 | 0.8872 | 1.08 × 10^−7 | 0.979 | 1.2187 | 0.6578 | | |
| HCM | Blue | 0.0041 | 0.0057 | 0.7791 | 1.14 × 10^−7 | 0.9785 | 1.5306 | 0.4733 | | |
| HCM | SW1 | 0.0082 | 0.0135 | 0.9604 | 5.51 × 10^−2 | 0.959 | 0.8261 | 0.861 | | |
| HCM | SW2 | 0.0077 | 0.0116 | 0.9299 | 3.80 × 10^−1 | 0.956 | 1.4902 | 0.744 | | |
| STARFM | NIR | 0.0155 | 0.0243 | 0.9249 | 5.42 × 10^−3 | 0.8963 | 0.9374 | 0.7507 | 0.6566 | 4.9979 |
| STARFM | Red | 0.0056 | 0.0168 | 0.627 | 3.69 × 10^−2 | 0.9606 | 3.7372 | 0.6232 | | |
| STARFM | Green | 0.005 | 0.0069 | 0.9026 | 1.11 × 10^−7 | 0.9811 | 1.1404 | 0.6538 | | |
| STARFM | Blue | 0.0042 | 0.0061 | 0.785 | 0.0001 | 0.977 | 1.6074 | 0.4669 | | |
| STARFM | SW1 | 0.0088 | 0.0315 | 0.8387 | 0.1979 | 0.9526 | 1.9218 | 0.8555 | | |
| STARFM | SW2 | 0.014 | 0.0813 | 0.4725 | 1.2764 | 0.9185 | 11.3554 | 0.6756 | | |
| FSDAF | NIR | 0.0161 | 0.024 | 0.9290 | 8.82 × 10^−8 | 0.8891 | 0.9228 | 0.7460 | 0.7119 | 1.3660 |
| FSDAF | Red | 0.0055 | 0.0085 | 0.8465 | 1.32 × 10^−4 | 0.9645 | 1.8914 | 0.6331 | | |
| FSDAF | Green | 0.0052 | 0.0071 | 0.8920 | 9.04 × 10^−8 | 0.9808 | 1.1767 | 0.6516 | | |
| FSDAF | Blue | 0.0049 | 0.0062 | 0.7824 | 9.33 × 10^−8 | 0.9764 | 1.6191 | 0.4542 | | |
| FSDAF | SW1 | 0.0100 | 0.0125 | 0.9652 | 5.12 × 10^−2 | 0.9558 | 0.7622 | 0.8761 | | |
| FSDAF | SW2 | 0.0083 | 0.0111 | 0.9356 | 2.35 × 10^−1 | 0.9511 | 0.7622 | 0.7733 | | |
| STI-FM | NIR | 0.0219 | 0.0403 | 0.7629 | 1.15 × 10^−7 | 0.8777 | 1.5255 | 0.6898 | 0.6672 | 1.9131 |
| STI-FM | Red | 0.0059 | 0.0095 | 0.7765 | 1.85 × 10^−3 | 0.9525 | 2.1092 | 0.6087 | | |
| STI-FM | Green | 0.0059 | 0.0096 | 0.7337 | 1.17 × 10^−7 | 0.9644 | 1.5609 | 0.6051 | | |
| STI-FM | Blue | 0.0044 | 0.0078 | 0.5346 | 0 | 0.9622 | 2.1301 | 0.4146 | | |
| STI-FM | SW1 | 0.0112 | 0.0230 | 0.8793 | 0.0972 | 0.9362 | 1.3756 | 0.8102 | | |
| STI-FM | SW2 | 0.0089 | 0.0147 | 0.8825 | 0.3051 | 0.9379 | 1.8564 | 0.7184 | | |
Table 4. Comparison of forward prediction results of using Landsat image 192 to predict Landsat image 224.
| Method | Band | AD | RMSE | CC | SAM | SSIM | ERGAS | Q2N | Overall Q2N | Overall ERGAS |
|---|---|---|---|---|---|---|---|---|---|---|
| HCM | NIR | 0.0078 | 0.0113 | 0.9821 | 1.50 × 10^−1 | 0.9718 | 0.4632 | 0.9403 | 0.8291 | 0.9943 |
| HCM | Red | 0.0030 | 0.0042 | 0.9395 | 2.60 × 10^−2 | 0.9848 | 1.2648 | 0.8047 | | |
| HCM | Green | 0.0026 | 0.0035 | 0.9471 | 7.93 × 10^−4 | 0.9899 | 0.7675 | 0.7970 | | |
| HCM | Blue | 0.0030 | 0.0040 | 0.8326 | 1.09 × 10^−7 | 0.9839 | 1.4608 | 0.5031 | | |
| HCM | SW1 | 0.0055 | 0.0083 | 0.9828 | 1.11 × 10^0 | 0.9769 | 0.5793 | 0.9240 | | |
| HCM | SW2 | 0.0045 | 0.0065 | 0.9748 | 2.21 × 10^0 | 0.9766 | 0.9337 | 0.8708 | | |
| STARFM | NIR | 0.0105 | 0.0414 | 0.8313 | 4.24 × 10^−1 | 0.9482 | 1.7106 | 0.8982 | 0.7376 | 6.9867 |
| STARFM | Red | 0.0038 | 0.0280 | 0.4243 | 1.44 × 10^−1 | 0.9781 | 8.5351 | 0.7662 | | |
| STARFM | Green | 0.0031 | 0.0164 | 0.5508 | 4.63 × 10^−2 | 0.9863 | 3.6126 | 0.7692 | | |
| STARFM | Blue | 0.0033 | 0.0062 | 0.7186 | 3.44 × 10^−3 | 0.9824 | 2.1397 | 0.5094 | | |
| STARFM | SW1 | 0.0165 | 0.1021 | 0.5782 | 2.41 × 10^0 | 0.9408 | 7.8226 | 0.8298 | | |
| STARFM | SW2 | 0.0102 | 0.0743 | 0.458 | 2.52 × 10^0 | 0.9437 | 11.5312 | 0.7689 | | |
| FSDAF | NIR | 0.0083 | 0.0123 | 0.9787 | 1.49 × 10^−1 | 0.9651 | 0.5048 | 0.9314 | 0.8027 | 1.0761 |
| FSDAF | Red | 0.0032 | 0.0045 | 0.9347 | 2.08 × 10^−2 | 0.9835 | 1.3653 | 0.7770 | | |
| FSDAF | Green | 0.0028 | 0.0040 | 0.9314 | 7.93 × 10^−4 | 0.9886 | 0.8838 | 0.7665 | | |
| FSDAF | Blue | 0.0035 | 0.0046 | 0.8392 | 8.98 × 10^−8 | 0.9808 | 1.5633 | 0.4958 | | |
| FSDAF | SW1 | 0.0061 | 0.0093 | 0.9782 | 9.56 × 10^−1 | 0.9723 | 0.6613 | 0.9089 | | |
| FSDAF | SW2 | 0.0050 | 0.0071 | 0.9703 | 2.06 × 10^0 | 0.972 | 0.9996 | 0.8414 | | |
| STI-FM | NIR | 0.0097 | 0.0139 | 0.9728 | 1.77 × 10^−1 | 0.9576 | 0.5706 | 0.9184 | 0.7565 | 1.4495 |
| STI-FM | Red | 0.0042 | 0.0056 | 0.9006 | 2.60 × 10^−2 | 0.9699 | 1.6338 | 0.6923 | | |
| STI-FM | Green | 0.0040 | 0.0053 | 0.9097 | 7.93 × 10^−4 | 0.9767 | 1.1121 | 0.6672 | | |
| STI-FM | Blue | 0.0053 | 0.0067 | 0.5534 | 1.09 × 10^−7 | 0.9572 | 2.7313 | 0.2374 | | |
| STI-FM | SW1 | 0.0065 | 0.0096 | 0.9767 | 1.18 × 10^0 | 0.9696 | 0.6715 | 0.9088 | | |
| STI-FM | SW2 | 0.0055 | 0.0081 | 0.9614 | 2.22 × 10^0 | 0.9656 | 1.1400 | 0.8340 | | |
Table 5. Comparison of forward prediction results of using Landsat image 182 to predict Landsat image 214.
| Method | Band | AD | RMSE | CC | SAM | SSIM | ERGAS | Q2N | Overall Q2N | Overall ERGAS |
|---|---|---|---|---|---|---|---|---|---|---|
| HCM | NIR | 0.0740 | 0.0945 | 0.2210 | 9.98 × 10^−8 | 0.4816 | 1.8595 | 0.2341 | 0.5901 | 3.4322 |
| HCM | Red | 0.0119 | 0.0175 | 0.6700 | 1.04 × 10^−7 | 0.8732 | 5.1146 | 0.4439 | | |
| HCM | Green | 0.0093 | 0.0116 | 0.7712 | 1.04 × 10^−7 | 0.9441 | 2.3015 | 0.4824 | | |
| HCM | Blue | 0.0095 | 0.0119 | 0.7846 | 1.24 × 10^−7 | 0.8984 | 4.9286 | 0.2943 | | |
| HCM | SW1 | 0.0212 | 0.0283 | 0.7789 | 2.73 × 10^−3 | 0.8565 | 1.2631 | 0.6754 | | |
| HCM | SW2 | 0.0221 | 0.0288 | 0.6564 | 5.73 × 10^−3 | 0.8198 | 3.1785 | 0.4630 | | |
| STARFM | NIR | 0.0650 | 0.0857 | 0.2737 | 5.39 × 10^−2 | 0.5057 | 1.6615 | 0.1906 | 0.5031 | 11.5752 |
| STARFM | Red | 0.0182 | 0.0803 | 0.1716 | 1.03 × 10^0 | 0.8331 | 24.7442 | 0.4145 | | |
| STARFM | Green | 0.0092 | 0.0232 | 0.4432 | 5.62 × 10^−2 | 0.9426 | 4.3045 | 0.5394 | | |
| STARFM | Blue | 0.0085 | 0.0226 | 0.3576 | 5.74 × 10^−2 | 0.9261 | 7.2917 | 0.4620 | | |
| STARFM | SW1 | 0.0236 | 0.0434 | 0.6450 | 1.35 × 10^−1 | 0.8358 | 1.9414 | 0.6305 | | |
| STARFM | SW2 | 0.0312 | 0.0794 | 0.3244 | 7.93 × 10^−1 | 0.7151 | 8.6564 | 0.3562 | | |
| FSDAF | NIR | 0.0572 | 0.0732 | 0.4771 | 8.12 × 10^−8 | 0.5774 | 1.4157 | 0.261 | 0.6253 | 3.0218 |
| FSDAF | Red | 0.0129 | 0.0190 | 0.6737 | 8.61 × 10^−8 | 0.8503 | 5.1000 | 0.4873 | | |
| FSDAF | Green | 0.0090 | 0.0113 | 0.7889 | 8.36 × 10^−8 | 0.9444 | 2.1736 | 0.5351 | | |
| FSDAF | Blue | 0.0068 | 0.0089 | 0.7962 | 1.05 × 10^−7 | 0.9475 | 2.6787 | 0.5365 | | |
| FSDAF | SW1 | 0.0220 | 0.0291 | 0.7705 | 1.82 × 10^−3 | 0.8482 | 1.3084 | 0.6457 | | |
| FSDAF | SW2 | 0.0274 | 0.0342 | 0.5922 | 2.60 × 10^−3 | 0.7486 | 3.6338 | 0.4224 | | |
| STI-FM | NIR | 0.0623 | 0.0791 | 0.2939 | 9.51 × 10^−8 | 0.5474 | 1.5191 | 0.1750 | 0.4610 | 3.1275 |
| STI-FM | Red | 0.0110 | 0.0199 | 0.4640 | 1.10 × 10^−7 | 0.8718 | 5.2349 | 0.2323 | | |
| STI-FM | Green | 0.0084 | 0.0131 | 0.6456 | 1.27 × 10^−7 | 0.9370 | 2.2967 | 0.3297 | | |
| STI-FM | Blue | 0.0091 | 0.0134 | 0.6248 | 1.15 × 10^−7 | 0.9031 | 5.4819 | 0.1646 | | |
| STI-FM | SW1 | 0.0174 | 0.0245 | 0.7495 | 2.60 × 10^−3 | 0.8791 | 1.0360 | 0.6897 | | |
| STI-FM | SW2 | 0.0145 | 0.0277 | 0.5489 | 5.21 × 10^−3 | 0.8400 | 2.9161 | 0.3632 | | |
Table 6. Comparison of forward prediction results of using Landsat image 214 to predict Landsat image 246.
| Method | Band | AD | RMSE | CC | SAM | SSIM | ERGAS | Q2N | Overall Q2N | Overall ERGAS |
|---|---|---|---|---|---|---|---|---|---|---|
| HCM | NIR | 0.0257 | 0.0353 | 0.8700 | 1.06 × 10^−7 | 0.8437 | 0.7735 | 0.7937 | 0.7732 | 1.9607 |
| HCM | Red | 0.0107 | 0.0167 | 0.7977 | 1.05 × 10^−7 | 0.9140 | 3.4840 | 0.6125 | | |
| HCM | Green | 0.0087 | 0.0126 | 0.7876 | 1.13 × 10^−7 | 0.9476 | 1.9139 | 0.5692 | | |
| HCM | Blue | 0.0046 | 0.0070 | 0.8804 | 1.12 × 10^−7 | 0.9743 | 1.8618 | 0.7254 | | |
| HCM | SW1 | 0.0113 | 0.0170 | 0.8793 | 3.64 × 10^−3 | 0.9364 | 0.7327 | 0.8441 | | |
| HCM | SW2 | 0.0112 | 0.0175 | 0.8456 | 9.11 × 10^−3 | 0.9246 | 1.6148 | 0.7769 | | |
| STARFM | NIR | 0.0306 | 0.0539 | 0.7734 | 1.44 × 10^−1 | 0.8176 | 1.1639 | 0.7522 | 0.7701 | 4.3193 |
| STARFM | Red | 0.0091 | 0.0306 | 0.4349 | 1.16 × 10^−1 | 0.9419 | 5.4824 | 0.6861 | | |
| STARFM | Green | 0.0087 | 0.0296 | 0.3512 | 1.15 × 10^−1 | 0.9606 | 4.0248 | 0.6216 | | |
| STARFM | Blue | 0.0049 | 0.0279 | 0.3113 | 1.15 × 10^−1 | 0.9772 | 6.7902 | 0.7572 | | |
| STARFM | SW1 | 0.0117 | 0.0375 | 0.6222 | 1.45 × 10^−1 | 0.9353 | 1.5759 | 0.8376 | | |
| STARFM | SW2 | 0.0111 | 0.0357 | 0.5597 | 1.46 × 10^−1 | 0.9271 | 3.1603 | 0.7723 | | |
| FSDAF | NIR | 0.0261 | 0.0358 | 0.8756 | 8.70 × 10^−8 | 0.848 | 0.7723 | 0.7903 | 0.7823 | 1.3975 |
| FSDAF | Red | 0.0080 | 0.0119 | 0.8325 | 8.64 × 10^−8 | 0.9455 | 2.1292 | 0.6969 | | |
| FSDAF | Green | 0.0073 | 0.0096 | 0.8285 | 9.36 × 10^−8 | 0.9635 | 1.3114 | 0.6521 | | |
| FSDAF | Blue | 0.0048 | 0.0068 | 0.8908 | 9.50 × 10^−8 | 0.9758 | 1.5796 | 0.7368 | | |
| FSDAF | SW1 | 0.0105 | 0.0160 | 0.8771 | 1.17 × 10^−3 | 0.9383 | 0.6711 | 0.8502 | | |
| FSDAF | SW2 | 0.0101 | 0.0158 | 0.8450 | 4.69 × 10^−3 | 0.9315 | 1.3832 | 0.7875 | | |
| STI-FM | NIR | 0.0268 | 0.0364 | 0.8540 | 1.08 × 10^−7 | 0.8398 | 0.7923 | 0.7766 | 0.7489 | 1.5995 |
| STI-FM | Red | 0.0095 | 0.0136 | 0.7710 | 1.18 × 10^−7 | 0.9366 | 2.4225 | 0.6135 | | |
| STI-FM | Green | 0.0095 | 0.0118 | 0.7414 | 1.07 × 10^−7 | 0.9567 | 1.5699 | 0.5094 | | |
| STI-FM | Blue | 0.0043 | 0.0064 | 0.8785 | 1.13 × 10^−7 | 0.9784 | 1.5804 | 0.7420 | | |
| STI-FM | SW1 | 0.0133 | 0.0180 | 0.8651 | 1.30 × 10^−3 | 0.9396 | 0.7362 | 0.7986 | | |
| STI-FM | SW2 | 0.0106 | 0.0168 | 0.8252 | 5.47 × 10^−3 | 0.9275 | 1.4820 | 0.7692 | | |
Table 7. Comparison of FSDAF and HCM for the synthetic data.
| | RMSE | CC | SAM | SSIM |
|---|---|---|---|---|
| Landsat 1 | 0.0850 | 0.8370 | 0.0290 | 0.9190 |
| FSDAF | 0.0240 | 0.9860 | 0.0050 | 0.9110 |
| HCM | 0.0320 | 0.9770 | 0.0110 | 0.9460 |
Table 8. Comparison of cluster based HCM with different cluster sizes and local based HCM for homogeneous areas. Bold numbers indicate high performing methods.
| Cluster Combination | Band | Cluster-Based HCM RMSE | Cluster-Based HCM CC | Local-Based HCM RMSE | Local-Based HCM CC |
|---|---|---|---|---|---|
| 50 × 50 | NIR | 0.0111 | 0.9573 | 0.0100 | 0.9699 |
| 50 × 50 | Red | 0.0060 | 0.9348 | 0.0060 | 0.9475 |
| 50 × 50 | Green | 0.0050 | 0.8999 | 0.0060 | 0.9114 |
| 20 × 20 | NIR | 0.0110 | 0.9580 | 0.0100 | 0.9699 |
| 20 × 20 | Red | 0.0059 | 0.9353 | 0.0060 | 0.9475 |
| 20 × 20 | Green | 0.0050 | 0.9023 | 0.0060 | 0.9114 |
| 5 × 5 | NIR | 0.0105 | 0.9615 | 0.0100 | 0.9699 |
| 5 × 5 | Red | 0.0059 | 0.9372 | 0.0060 | 0.9475 |
| 5 × 5 | Green | 0.0048 | 0.9086 | 0.0060 | 0.9114 |
Table 9. Comparison of cluster based HCM with different cluster sizes and local based HCM for heterogeneous areas. Bold numbers indicate high performing methods.
| Cluster Combination | Band | Cluster-Based HCM RMSE | Cluster-Based HCM CC | Local-Based HCM RMSE | Local-Based HCM CC |
|---|---|---|---|---|---|
| 50 × 50 | NIR | 0.0359 | 0.8695 | 0.0353 | 0.8700 |
| 50 × 50 | Red | 0.0171 | 0.7948 | 0.0167 | 0.7977 |
| 50 × 50 | Green | 0.0124 | 0.7857 | 0.0126 | 0.7876 |
| 50 × 50 | Blue | 0.0082 | 0.8724 | 0.0070 | 0.8804 |
| 50 × 50 | SW1 | 0.0171 | 0.8789 | 0.0170 | 0.8793 |
| 50 × 50 | SW2 | 0.0178 | 0.8444 | 0.0175 | 0.8456 |
| 20 × 20 | NIR | 0.0360 | 0.8687 | 0.0353 | 0.8700 |
| 20 × 20 | Red | 0.0170 | 0.7959 | 0.0167 | 0.7977 |
| 20 × 20 | Green | 0.0124 | 0.7858 | 0.0126 | 0.7876 |
| 20 × 20 | Blue | 0.0081 | 0.8729 | 0.0070 | 0.8804 |
| 20 × 20 | SW1 | 0.0171 | 0.8785 | 0.0170 | 0.8793 |
| 20 × 20 | SW2 | 0.0178 | 0.8443 | 0.0175 | 0.8456 |
| 5 × 5 | NIR | 0.0369 | 0.8603 | 0.0353 | 0.8700 |
| 5 × 5 | Red | 0.0173 | 0.7885 | 0.0167 | 0.7977 |
| 5 × 5 | Green | 0.0125 | 0.7791 | 0.0126 | 0.7876 |
| 5 × 5 | Blue | 0.0080 | 0.8725 | 0.0070 | 0.8804 |
| 5 × 5 | SW1 | 0.0173 | 0.8757 | 0.0170 | 0.8793 |
| 5 × 5 | SW2 | 0.0179 | 0.8403 | 0.0175 | 0.8456 |
