Landsat Super-Resolution Enhancement Using Convolution Neural Networks and Sentinel-2 for Training

1 Environment and Climate Change Canada, Landscape Science and Technology, Ottawa, ON K1A 0H3, Canada
2 Natural Resources Canada, Earth Sciences Sector, Canada Center for Remote Sensing, Ottawa, ON K1A 0E4, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(3), 394; https://doi.org/10.3390/rs10030394
Submission received: 24 December 2017 / Revised: 27 February 2018 / Accepted: 27 February 2018 / Published: 3 March 2018

Abstract

Landsat is a fundamental data source for understanding historical change and its effect on environmental processes. In this research we test shallow and deep convolution neural networks (CNNs) for Landsat image super-resolution enhancement, trained using Sentinel-2, in three study sites representing boreal forest, tundra, and cropland/woodland environments. The analysis sought to assess baseline performance and determine the capacity for spatial and temporal extension of the trained CNNs. This is not a data fusion approach; a high-resolution image is needed only to train the CNN. Results show improvement over the input Landsat imagery, with the deeper network generally achieving better results. For spatial and temporal extension, the deep CNN performed the same as or better than the shallow CNN, but at greater computational cost. Results for temporal extension were influenced by the potential for land cover change, which reduced the performance difference between the shallow and deep CNNs. Visual examination revealed sharper depiction of land cover boundaries, linear features, and within-cover textures. The results suggest that spatial enhancement of the Landsat archive is feasible, with optimal performance where CNNs can be trained and applied within the same spatial domain. Future research will assess the enhancement on time series and associated land cover applications.

1. Introduction

High spatial and temporal resolution earth observation (EO) images are desirable for many remote sensing applications, providing a finer depiction of spatial boundaries or timing of environmental change. Landsat provides the longest record of moderate spatial resolution (30 m) data of the earth from 1984 to present. It is currently a fundamental data source for understanding historical change and its relation to carbon dynamics, hydrology, climate, air quality, biodiversity, wildlife demography, etc. Landsat temporal coverage is sparse due to the 16-day repeat visit and cloud contamination. Several studies have addressed this through time series modeling approaches [1,2,3]. Temporal enhancement is a key requirement, but spatial enhancement is another aspect of Landsat that could be improved for time series applications. Enhancement of spatial resolution has been carried out mostly based on data fusion methods [4,5,6,7]. Studies have also shown that data fusion can lead to improvements in quantitative remote sensing applications such as land cover [4,8,9]. Although effective, data fusion techniques are limited by the requirement for coinstantaneous high-resolution observations. For more recent sensors such as Landsat-8 and Sentinel-2 this requirement is met with the panchromatic band and provides the greatest potential for spatial enhancement. However, for a consistent Landsat time series from 1985 to present, a method that will provide the same level of enhancement across sensors is needed. For Landsat-5, a suitable high-resolution source is generally inadequate in space or time to facilitate generation of an extensive spatially enhanced Landsat archive.
Numerous spatial resolution enhancement methods have been developed. Recently, however, deep learning convolution neural networks (CNNs) have been shown to outperform these, with large improvements over bicubic interpolation and smaller gains over more advanced anchored neighborhood regression approaches [10]. CNNs are a special form of neural network. The basic neural network is made up of a collection of connected neurons with learnable weights and biases that are optimized through error backpropagation [11]. Its input is a vector, whereas the input to a convolution neural network is an array or image. For each convolution layer, a set of weights is learned for a filter of size m × n × c that is convolved over the image, where m and n are the vertical and horizontal dimensions and c is the number of input features to the convolution layer. Essentially, a convolution neural network can learn the optimal set of filters to apply to an image for a specific image recognition task. Thus, one strategy has been to use CNNs as feature extractors in remote sensing classification applications [12].
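As a minimal illustration of the filter dimensions described above (an illustrative sketch, not code from this study), the following Keras snippet builds one convolution layer and shows that it learns one m × n × c filter per output feature:

```python
import tensorflow as tf

m, n, c = 3, 3, 64  # filter height, width, and number of input features
layer = tf.keras.layers.Conv2D(filters=64, kernel_size=(m, n), padding="same",
                               activation="relu")
x = tf.zeros((1, 33, 33, c))      # one 33 x 33 patch with c input features
y = layer(x)
print(layer.kernel.shape)         # (3, 3, 64, 64): one m x n x c filter per output feature
print(y.shape)                    # (1, 33, 33, 64)
```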
There has been significant development of CNNs for super-resolution enhancement with non-remote sensing image benchmark databases such as CIFAR-100 [13] or ImageNet [14]. Dong et al. [10] developed the Super-Resolution Convolutional Neural Network (SRCNN), which used small 2- and 4-layer CNNs to show that the learned model performed better than other state-of-the-art methods. Kim et al. [15,16] developed two deep convolutional networks for super-resolution enhancement. The first was the Deeply-Recursive Convolutional Network for Image Super-Resolution (DRCN), which used recursive or shared weights to reduce model parameters in a deep 20-layer network. The second was also a deep 20-layer network (Very Deep Super Resolution, VDSR), but introduced the concept of the residual learning objective. In this approach, instead of learning the fine resolution image, the differences between the fine and coarse resolution images are learned. This led to significant performance gains over SRCNN. The mean squared error loss is widely used for CNN super-resolution training. An interesting alternative was tested by Svoboda et al. [17], who used a gradient-based learning objective that minimized the mean squared error between spatial image gradients computed using the Sobel operator. Performance by standard measures, however, was not improved. Mao et al. [18] developed a deep encoder-decoder CNN with skip connections between associated encode and decode layers. It achieved improved accuracy relative to SRCNN for both 20- and 30-layer versions. An ensemble-based approach was tested by Wang et al. [19] and was found to provide an improvement in accuracy. Other methods have focused on maintaining or improving accuracy while reducing the total model parameters. The Efficient Sub-Pixel Convolutional Neural Network (ESPCN) reduces computational and memory complexity by increasing the resolution from low to high only at the end of the network [20]. The DRCN approach [15] was extended to include residual and dense connections by Tai et al. [21]. This provided a deep network with recursive layers, reducing the model parameters and achieving the best results for the assessment undertaken.
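The residual learning objective can be summarized in a few lines; the sketch below is purely illustrative (random placeholder arrays, not data or code from any of the cited studies):

```python
import numpy as np

# Coarse patch (e.g., a low-resolution image resampled to the target grid) and fine patch.
coarse = np.random.rand(33, 33).astype("float32")
fine = np.random.rand(33, 33).astype("float32")

residual_target = fine - coarse          # what a residual-learning network is trained to predict
predicted_residual = residual_target     # placeholder standing in for the network output
enhanced = coarse + predicted_residual   # the coarse input is added back to form the fine image
assert np.allclose(enhanced, fine)
```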
Residual connections in CNNs were introduced by He et al. [22] for image object recognition. Residual connections force the next layer in the network to learn something different from the previous layers and have been shown to alleviate the problem of deep learning models not improving performance with depth. In addition to going deep, Zagoruyko and Komodakis [23] showed that going wide can increase network performance for image recognition. More recently, Xie et al. [24] developed wide residual blocks, which add another dimension, referred to as cardinality, in addition to network depth and width. The rate of new developments in network architectures is rapid, with incremental improvements in accuracy or reductions in model complexity and memory requirements.
For spatial enhancement of remote sensing imagery, much less research has been carried out regarding the potential of CNNs. Only recently have results been presented by Collins et al. [25], who applied networks similar to those of Dong et al. [10] for enhancement of the Advanced Wide Field Sensor (AWiFS) using the Linear Imaging Self Scanner (LISS-III). Their study provides a good benchmark for CNN performance because the two sensors have the same spectral bands and are temporally coincident. Results showed similar performance to other CNN-based super-resolution studies for the scaling ratio of 2.3 (56 m/24 m spatial resolution).
Advances in deep learning CNNs and the global availability of Sentinel-2 data provide a potential option to generate an extensive spatially enhanced historical Landsat archive. Conceivably, a relatively cloud-free Landsat and Sentinel-2 image pair can be obtained within a suitable temporal window for most locations across the globe. Thus, a consistent image pair suitable for training a Landsat super-resolution transform may be obtained and could be locally optimized for this purpose following the approach applied in Latifovic et al. [26]. However, for large area implementation, CNN performance across a variety of landscapes needs to be evaluated in addition to temporal and spatial extension capacity. Therefore, the specific objectives of this research were to:
  • Assess the effectiveness of a shallow and a deep CNN for super-resolution enhancement of Landsat trained from Sentinel-2 data for characteristic landscape environments in Canada, including boreal forest, tundra, and cropland/woodland.
  • Evaluate the potential for spatial extension over short distances of less than 100 km and temporal extension of a trained CNN model.

2. Materials and Methods

2.1. Landsat and Sentinel-2 Datasets

For model development, Landsat-5, Landsat-8, and Sentinel-2 image pairs were acquired for three study areas in Canada. The study areas are shown in Figure 1 and included boreal forest, tundra, and cropland/woodland ecosystems. These represent a range of ecosystem conditions found in Canada. If performance is acceptable across these three, it is likely that similar performance can be obtained across the range of ecosystems found in Canada in non-complex terrain.
The date ranges of the training image pairs are given in Table 1. Landsat level 2 surface reflectance collection 1 data were acquired from the USGS. Sentinel-2 level-1C data were also acquired from the USGS and converted to surface reflectance using the sen2cor algorithm version 2.3.1 (European Space Agency) [27]. Landsat-8 and Sentinel-2 have a spatial misalignment that varies regionally depending on ground control point quality [28]. This has been improved for collection 1 data in global priority areas. More recent analysis shows that collection 1 Landsat-8 data within Canada (approximately below 70 degrees latitude) have a horizontal root mean square error (RMSE) of less than 14 m [29]. The geolocation quality of all input images was checked by collecting control points and computing the RMSE. For all scenes the RMSE was less than 10 m. The largest errors were in the northern tundra study site and in areas within or close to cloud cover. This error was considered reasonable given the expected operational geolocation accuracy and the effective resolution of the spatial enhancement being tested. Landsat data were resampled to 10 m resolution using the nearest neighbor approach. This was selected to maintain spectral quality, allow the CNN to determine the optimal spatial weighting, and speed local resampling for application to large images. All Landsat and Sentinel-2 scenes were mosaicked and stacked together for analysis in each study site.
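A nearest-neighbor resampling step of this kind could be carried out, for example, with rasterio; the following is a hedged sketch with hypothetical file names, not the processing chain used in the study:

```python
import rasterio
from rasterio.enums import Resampling

with rasterio.open("landsat_sr_30m.tif") as src:   # hypothetical input file
    scale = 3                                      # 30 m -> 10 m
    data = src.read(
        out_shape=(src.count, src.height * scale, src.width * scale),
        resampling=Resampling.nearest,             # nearest neighbor keeps the original reflectances
    )
    # Rescale the affine transform so the georeferencing matches the 10 m grid.
    transform = src.transform * src.transform.scale(
        src.width / data.shape[-1], src.height / data.shape[-2]
    )
```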
As identified in Latifovic et al. [30] and specified in the USGS documentation [31], atmosphere correction can be problematic in the north above 64 degrees latitude. Our northern study area was at approximately 64 degrees. However, this was not considered to be a problem as errors due to atmosphere correction would be similar between the datasets and would not affect the relative comparison of the methods.

2.2. Sampling and Assessment

For each study site a mask was manually developed for sampling to avoid clouds, shadows, and land cover changes between the image mosaic pairs. However, in cropland environments the 26-day difference between the Landsat-8 and Sentinel-2 images made it impractical to manually define areas suitable for training. Thus, for this study site an initial mask was developed but refined by calculating the change vector between images [32] and selecting a conservative threshold to avoid including cropland change in the training. The local variance within sample windows of 33 by 33 pixels was computed and used to define three levels of low, moderate, and high spatial complexity. These ranged from homogeneous areas at the low level to areas containing significant structure related to roads, shorelines, or other boundaries at the high level. This was used in a stratified systematic sampling scheme to ensure a range of spatial variability was selected. For each stratum, every sixth pixel not contaminated by clouds or land cover change was selected. To assess performance, we compute the mean error, mean absolute error (MAE), error standard deviation, and the mean and standard deviation of the spatial correlation within a sample window of 33 by 33 pixels between the predicted image and Sentinel-2. This window size was selected to be consistent with the CNNs used. We also compute the mean and standard deviation of the Structural Similarity Index Measure (SSIM) [33]. This was included as it is a common measure applied to assess image quality relative to a reference image. To provide context for the improvement obtained, we also compute these metrics directly between Landsat and Sentinel-2 without applying the CNN-based transform.
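The per-window metrics can be expressed compactly; the following is a hedged sketch (illustrative function and names, not the authors' code) of how they could be computed for one 33 by 33 window with NumPy and scikit-image:

```python
import numpy as np
from skimage.metrics import structural_similarity

def window_metrics(pred, ref):
    """Quality metrics for one 33 x 33 window: predicted image vs. Sentinel-2 reference."""
    err = pred - ref
    return {
        "ME": float(err.mean()),                                     # mean error
        "MAE": float(np.abs(err).mean()),                            # mean absolute error
        "STDE": float(err.std()),                                    # error standard deviation
        "COR": float(np.corrcoef(pred.ravel(), ref.ravel())[0, 1]),  # spatial correlation
        "SSIM": float(structural_similarity(pred, ref,
                                            data_range=float(ref.max() - ref.min()))),
    }
```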

2.2.1. Hold-Out and Spatial Extension

For sampling, the westernmost 75% of the study area was used for training, and the remaining 25% in the east was used for validation as a spatially independent extension test. Of the 75% sampled for training, 30% was held out to assess the ideal situation where spatial extension is not required and high sampling rates are possible. For each study site this amounted to samples in the range of 400,000–500,000 for training and 180,000–240,000 for testing. Samples for spatial extension were more variable due to land cover change, clouds, and cloud shadows. Total samples were 64,000, 179,000, and 330,000 for the boreal, tundra, and cropland/woodland study sites, respectively.

2.2.2. Temporal Extension

For assessment of temporal performance, we apply the CNNs to Landsat-5 (Table 1) for different years for each study site. The least cloud-contaminated image was selected for each of the periods 1984–1990, 1990–2005, and 2005–2011. We computed the same set of metrics between Landsat-5 and Sentinel-2 for areas identified as no change. No-change areas were identified based on the maximum change vector across all years for a study site. Before detecting change, the Sentinel-2 bands were normalized to Landsat using robust regression [34]. We also applied a band-average minimum correlation threshold of 0.55 between images for the window size of 33 by 33 pixels. The initial CNNs were trained between Landsat-8 and Sentinel-2. To adjust these for Landsat-5, we applied a transfer learning approach where samples of no change were split for training and testing. Similar to the initial model development, we sampled 30% of the study area for training, starting in the west of the image, and used the remaining 70% in the east for validation. As the models had already been trained, only 3 epochs were used for rapid development. Only the most recent Landsat-5 image was used for training. The refined model was used in the assessment of the independent samples (70%) for evaluation of all image dates. The retraining was needed as image quality differs between the two sensors, with Landsat-8 being sharper. Total samples used for each study site ranged from 30,000–80,000 for training. For testing, the total samples were 40,000, 134,000, and 84,000 for the boreal, tundra, and cropland/woodland study sites, respectively.
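In Keras terms, this transfer learning step amounts to briefly re-fitting the already-trained network on the Landsat-5 no-change samples; the sketch below is an assumed workflow with a hypothetical file name, placeholder arrays, and an illustrative learning rate, not the authors' implementation:

```python
import numpy as np
import tensorflow as tf

# Hypothetical saved model and placeholder arrays standing in for the real samples.
model = tf.keras.models.load_model("dcr_srcnn_landsat8_nir.h5")
x_l5 = np.zeros((30000, 33, 33, 3), dtype="float32")   # Landsat-5 no-change patches (red, NIR, SWIR)
y_s2 = np.zeros((30000, 33, 33, 1), dtype="float32")   # normalized Sentinel-2 target band

# Learning rate is illustrative; only 3 epochs are used, as described above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
model.fit(x_l5, y_s2, epochs=3, batch_size=125)
```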

2.3. CNN Super-Resolution Models

There are countless network architecture configurations that could be employed, and architecture design will likely remain an area of significant future research. Although network design is important, for the purpose of this study we only tested two configurations. We tested the SRCNN of Dong et al. [10] because it is efficient, with only 41,089 parameters, and has shown good results (Figure 2A). We also apply a deeper architecture using residual learning, deep connectivity, and residual connections in an attempt to integrate some of the latest improvements in the field (DCR_SRCNN). In initial exploratory analysis we tested numerous configurations, of which the best was kept. We settled on the 20-layer configuration shown in Figure 2B, inspired by Tai et al. [21]. This is a large network with 993,373 total parameters. The rectified linear unit was used for all activations. To improve generalization, least squares (L2) weight regularization was added to the third- and second-last layers of the network with a weight of 0.0001. Regularization was only applied to the last layers to avoid reducing the learning potential in the lower network layers. Input image size was 33 by 33 pixels. This size was selected to capture the spatial variation in the image while keeping the size small for computational efficiency. Filter sizes were 3 by 3, except for the first convolution layer, where a 7 by 7 filter was used. The number of output features from each convolution layer was 64, except for the first layer, which output 96. The convolution layer for the residual learning objective output a single feature, which essentially converted the input three-band image to a single band.
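The listing below is a condensed Keras sketch of our reading of this description (a DCR_SRCNN-like network with far fewer residual blocks than the full 20-layer model); the layer counts and block structure are assumptions for illustration, not the published architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def residual_block(x):
    """Two 3 x 3 convolutions with a skip connection (the residual blocks of Figure 2B)."""
    y = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(64, 3, padding="same", activation="relu")(y)
    return layers.Add()([x, y])

inp = layers.Input(shape=(33, 33, 3))                        # red, NIR, SWIR patch
x = layers.Conv2D(96, 7, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
for _ in range(3):                                           # far fewer blocks than the 20-layer network
    x = residual_block(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4))(x)
residual = layers.Conv2D(1, 3, padding="same",
                         kernel_regularizer=regularizers.l2(1e-4))(x)
skip = layers.Conv2D(1, 3, padding="same")(inp)              # 3-band input collapsed to one band
out = layers.Add()([skip, residual])                         # residual learning objective
model = tf.keras.Model(inp, out)
```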
We trained a model for each study site to allow for regional optimization. For training, the mean squared error loss function for the pixelwise comparison of the predicted and Sentinel-2 images was used with the Adam optimization method. This optimization method has been shown to provide an efficient and stable solution [35] and has been used in other CNN-based super-resolution studies [18,25]. An early stopping criterion was applied: if the loss did not improve within 10 epochs, training stopped. The total number of epochs was set at 80 with a batch size of 125. For all networks the input was the red, near-infrared (NIR), and short-wave infrared (SWIR, 1.55–1.75 µm) bands. The output was a single band: red, NIR, or SWIR. All bands were input because spatial properties between bands were expected to provide useful information for determining specific spatial transforms. To allow for the greatest possible learning potential, a model was developed for each band. To focus the learning on the spatial properties between the samples, the mean of the Sentinel-2 image was adjusted to match the Landsat image.
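These training settings map onto standard Keras calls; the sketch below uses the `model` from the previous listing and placeholder training arrays, so it is an assumed configuration rather than the exact training script:

```python
import numpy as np
import tensorflow as tf

# Placeholder arrays with the patch shapes described above; `model` is the network
# sketched in the previous listing.
x_train = np.zeros((1000, 33, 33, 3), dtype="float32")   # Landsat red, NIR, SWIR patches
y_train = np.zeros((1000, 33, 33, 1), dtype="float32")   # mean-adjusted Sentinel-2 target band

model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
early_stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=10)
model.fit(x_train, y_train, epochs=80, batch_size=125, callbacks=[early_stop])
```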
All models were trained on an NVIDIA GeForce 1080 Ti GPU. Training took approximately 2 days per study site for the deep network and less than half that time for the shallow network.

3. Results

3.1. Hold-Out Accuracy

The results for the hold-out samples show that the DCR_SRCNN provided the best results across all study areas (Table 2). However, both methods showed marked improvement relative to applying no transformation for all the key metrics (MAE, SSIM, and spatial correlation). The MAE is an informative measure as it is in standard reflectance units, but it is related to the mean of the sample, with a larger mean reflectance resulting in a larger MAE. Further, the MAE can produce the same value for very different image qualities [33]. Spatial correlation is not related to the mean reflectance and gives a good indication of the spatial agreement and thus the spatial enhancement. However, it is influenced by the data range, with a reduced range producing a lower correlation [36]. SSIM essentially incorporates the MAE and spatial correlation measures in addition to image contrast. It is related to the sample mean reflectance, but to a much lesser degree than the MAE. It is important to recognize these limitations when interpreting the results and comparing bands.
Of the bands, the NIR consistently had higher MAE and lower SSIM values regardless of the transformation. This is related to the high reflectance of the NIR band and its associated larger variance. The SWIR also had high reflectance for the tundra study site, but in contrast had low MAE and higher SSIM values. This was related to the native 20 m spatial resolution of the SWIR band in Sentinel-2, which results in greater initial similarity with Landsat compared to the 10 m bands. The spatial correlation showed that the red band had consistently lower values, which was caused by the smaller reflectance range and atmospheric noise.
Of the study areas, the cropland/woodland showed the lowest performance due to change between the images despite efforts to reduce it. Change was also a potential factor in the boreal forest study site, but to a much lesser degree. The best results were found for the northern tundra study area and were attributed to little change between images and less overall complexity of the land surface relative to the 10 m target resolution. Figure 3 provides an example image result for a residential area surrounded by mixed boreal forest conditions. It provides a good indication of the improvement that can be obtained. Figure 4 shows the enhancement by band for a mixed forest area with some industrial development. As evident from Figure 4, the NIR and red bands are more enhanced compared to the SWIR, as expected. The coarse texture within the cover types is of interest and could prove useful for improving land cover discrimination or for biophysical retrieval, as canopy variability or structure appears to be enhanced.

3.2. Spatial Extension Accuracy

The spatial extension accuracy shows similar or slightly reduced performance relative to the hold-out results (Table 3). Comparing the results for the two CNNs, it is not evident that the deeper model made a sufficiently large improvement to warrant its greater computational complexity. This is likely caused by some overtraining and by errors due to temporal change in the training and validation data, which can reduce the sensitivity of the analysis.
The cropland study site showed the greatest difference relative to the hold-out results. The difference is in part related to the greater amount of agriculture in the extension sample. Due to the changes in crops over the 14- to 21-day difference between images, sampling opportunities were limited and thus the extension results did not perform as well. Croplands present a particular challenge for the approach, as training data are limited by the highly dynamic nature of cropland environments, with dramatic reflectance changes over a few days. With the Sentinel-2 constellation, more temporally coincident imagery will potentially be captured to ensure suitable training. Otherwise, spatial extension over greater distances may be required for large-extent enhancements. The performance by band suggests the same conclusions as the hold-out sample results.

3.3. Temporal Extension Accuracy

Temporal extension accuracy is an important aspect of the approach to determine if a trained network can be applied to enhance Landsat time series. Table 4 provides the temporal extension results. At first glance these appear to be low, particularly the spatial correlation, but assessment of temporal extension is fraught with difficulties. The main challenge is that no-change areas do not exist in terms of image reflectances. For the purposes of land cover, no change can be identified, but in comparing imagery between dates there are always changes due to canopy dynamics, annual changes in canopy configuration, moisture content, residual atmosphere effects, etc. that do not change the land cover but alter the reflectances for a cover type. Thus, in interpreting these results it is important to note that the sensitivity and accuracy are influenced by this effect. Regardless, in all cases the metrics were improved by either CNN.
The temporal effect is clearly seen in the boreal forest and cropland/woodland study sites, where the performance metrics all improve as the image date gets closer to the Sentinel-2 image used as a reference. The tundra study area is likely the most informative, as changes are subtler and less frequent and the vegetation structure is small relative to the image spatial resolution. Thus, these results are more indicative of the temporal extension capacity. The shallow network performed similarly to the deep network, but with values approximately 1% lower across all metrics. The small difference is in part a result of temporal changes in the test data, which reduce the sensitivity of the analysis. This is similar to the spatial extension results, but the effect is expected to be more significant here.

3.4. Visual Assessment

Visual assessment provides more convincing evidence, as the nature of the enhancement can be clearly recognized and artifacts readily identified. Here, we provide several examples of enhanced images for the different landscape environments and over multiple years in Figure 5, Figure 6, Figure 7 and Figure 8. In all examples, boundaries are clearer between cover types, linear features are more apparent, within-cover textures are enhanced, and the spatial structure overall is clearer. No major artifacts were created within images or between image dates, although some speckle was introduced in a few cases. In Figure 5, the spatial structure of forest gaps or leaf area gradients appears to be enhanced. This is most evident in the 2011 imagery, as fire damage has resulted in greater canopy variability. This could possibly lead to improvements in biophysical retrievals or habitat analysis, but requires further study. Figure 6 shows the northern tundra example, where drainage patterns and water bodies are more clearly defined. Figure 7 shows an area where trails and roads have become much more apparent in the broadleaf forest. This highlights the potential of the approach to better characterize edges, which could enhance land cover-based landscape metrics. The final example, Figure 8, shows an area of cropland where the boundaries between crop areas and roads have been improved.

4. Discussion

In this research we show that CNN super-resolution can spatially enhance Landsat imagery and can be applied to historical peak growing season time series, which could improve land cover and land cover change applications or possibly biophysical parameter retrievals. However, future research needs to specifically evaluate the improvement that this type of enhancement provides for a given application. This is important to determine if the approach is suitable only for visual enhancement or also for some types of quantitative analysis. Here, we show that boundaries between land covers and linear features are improved and would likely influence landscape metrics derived from land cover data. There are also textural enhancements that need to be explored as a means to improve information extraction applications.
The SSIM values obtained compare well with other studies, achieving values in the range of 0.86 to 0.97, similar to what is achieved on benchmark image databases for an upscaling factor of three [21]. However, in most studies, images are degraded from an initial high-resolution image and thus the only difference between the fine and coarse images used for training is resolution. This was not the case in this research, as there were several additional factors other than resolution, including changes or differences in land cover, canopy structure, phenology, moisture, residual atmosphere effects, sun-sensor geometry, sensor spectral response functions, and residual geolocation error. These factors need to be considered in examining the results, as they reduce the sensitivity of the analysis. More importantly, they can cause models to learn these differences, resulting in reduced spatial and temporal generalization performance. For remote sensing data, Collins et al. [25] report SSIM values greater than 0.98. This is the result of using coinstantaneous images, avoiding many of the factors listed above. It is also due to the upscaling factor of 2. In this study the upscaling factors were 3 for the red and near-infrared and 1.5 for the shortwave infrared. The inferior result obtained here for the SWIR suggests that the difference is largely related to temporal variation. However, Collins et al. do not report band-specific values.
It is also of interest to compare the effects of using a more complex network. In Collins et al., going from a shallow network to a wider and deeper one improved the SSIM by 0.0035. In this research we also see only a small increase with the more complex deeper network, improving by about 0.006 on average for both the spatial and temporal extension. Thus, as with other super-resolution research, finding the optimal balance between model complexity and performance will be an important aspect of future research. In this regard, the effective resolution of the spatial enhancement needs to be determined. That is, we do not propose that the CNN learns a true 10 m resolution result. Future efforts need to quantify the effective resolution to avoid storage and processing redundancy.
Other approaches to assess performance were considered, such as comparison with other, more temporally coincident high spatial resolution images. However, finding such images is challenging for many regions, such as the north, where suitable cloud-free pairs are difficult to obtain within a few days of acquisition. This may be possible for the visible and NIR bands, but there are few higher spatial resolution sensors that capture the SWIR band for comparison and development with Landsat-5. SPOT-5 imagery is a suitable option, but it was not available for this analysis and establishing an extensive database would be costly. Despite this, the visual assessment shows that the trained networks were able to enhance the spatial properties of the images through time without introducing any strong artifacts.
For large regional implementation, spatial extension over greater distances may be required. For distances less than 100 km, the CNNs appeared to generalize well. However, to effectively train models for Landsat-5, larger distances may be required, as suitable training locations would be limited to areas with little change or with suitable SPOT-5 images. There are several mechanisms to further enhance the generalization of the CNNs that need to be explored in future research, such as optimizing network size, increasing weight regularization, batch normalization, and data augmentation. The deep network employed in this research was selected to provide an indication of the upper bound on the enhancement potential. The expectation was that a compromise between the deep and shallow networks would be more effective for implementation. In this work we included weight regularization only for the last layers in the network, and thus better generalization may be obtained by applying regularization to additional layers and increasing its weight. In this research we did not use batch normalization, as it was found to slightly reduce results in Liang et al. [37]. However, for spatial and temporal extension this may not be the case and requires further investigation. We also did not use data augmentation, as our sample sizes were large, except for retraining of Landsat-5 for the identified no-change areas. In this case, data augmentation could provide an advantage. Data augmentation also provides an alternative training strategy, where more stringent criteria for selecting samples could be applied and data augmentation used to offset the reduced sample size. Improvements in the training data are expected to improve performance.
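As a simple illustration of the kind of data augmentation discussed above (an assumed, typical scheme, not something used in this study), flips and 90-degree rotations of the 33 by 33 training patches multiply the effective sample size by eight:

```python
import numpy as np

def augment(patch):
    """Yield 8 geometric variants (rotations and flips) of a (33, 33, bands) training patch."""
    for k in range(4):
        rotated = np.rot90(patch, k)   # rotate in the two spatial axes
        yield rotated
        yield np.fliplr(rotated)       # horizontal flip of each rotation
```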

5. Conclusions

In this research we tested a shallow and a deep CNN for the purpose of super-resolution enhancement of the Landsat archive trained from Sentinel-2 images. Results show improvement in the spatial properties of the enhanced imagery and good potential for spatial and temporal extension of the CNNs developed in all the study areas. The deep CNN showed better performance, but it is not clear if it is worth the additional computational complexity and memory requirements. As research in CNN super-resolution for other applications has shown, it is possible to achieve similar performance with simpler configurations. Significant advancement of this approach is expected with progression in network design, training data sources, sampling strategies, and improved regularization. Despite this, the models developed here were effective at enhancing image spatial structure, which is expected to improve land cover and land cover change applications.

Acknowledgments

This research was supported through a Canadian Space Agency grant for the project “Integrated Earth Observation Monitoring for Essential Ecosystem Information: Resilience to Ecosystem Stress and Climate Change”.

Author Contributions

Darren Pouliot and Rasim Latifovic conceived the concept and methodology. Darren Pouliot carried out the analysis and prepared the manuscript with input from all authors. Jon Pasher and Jason Duffe provided insight into development and applications and assisted in manuscript preparation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, Z.; Woodcock, C.E.; Olofsson, P. Continuous monitoring of forest disturbance using all available Landsat imagery. Remote Sens. Environ. 2012, 122, 75–91. [Google Scholar] [CrossRef]
  2. Zhu, Z.; Woodcock, C.E. Continuous change detection and classification of land cover using all available Landsat data. Remote Sens. Environ. 2014, 144, 152–171. [Google Scholar] [CrossRef]
  3. Pouliot, D.; Latifovic, R. Reconstruction of Landsat time series in the presence of irregular and sparse observations: Development and assessment in north-eastern Alberta, Canada. Remote Sens. Environ. 2017. [Google Scholar] [CrossRef]
  4. Song, H.; Huang, B.; Liu, Q.; Zhang, K. Improving the Spatial Resolution of Landsat TM/ETM+ Through Fusion with SPOT5. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1195–1204. [Google Scholar] [CrossRef]
  5. Grochala, A.; Kedzierski, M. A method of panchromatic image modification for satellite imagery data fusion. Remote Sens. 2017, 9, 639. [Google Scholar] [CrossRef]
  6. Li, Z.; Zhang, K.H.; Roy, P.D.; Yan, L.; Huang, H.; Li, J. Landsat 15-m Panchromatic-Assisted Downscaling (LPAD) of the 30-m Reflective Wavelength Bands to Sentinel-2 20-m Resolution. Remote Sens. 2017, 9, 755. [Google Scholar] [CrossRef]
  7. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8. [Google Scholar] [CrossRef]
  8. Gilbertson, J.K.; Kemp, J.; van Niekerk, A. Effect of pan-sharpening multi-temporal Landsat 8 imagery for crop type differentiation using different classification techniques. Comput. Electron. Agric. 2017, 134, 151–159. [Google Scholar] [CrossRef]
  9. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.R.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.T.A.; et al. A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring. Remote Sens. 2016, 8, 70. [Google Scholar] [CrossRef]
  10. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning deep convolutional networks for image super resolution. In Proceedings of the European Conference on Computer Vision, Athens, Greece, 11–13 November 2015. [Google Scholar]
  11. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 1–9. [Google Scholar] [CrossRef]
  12. Hu, F.; Xia, G.-S.; Hu, J.; Zhang, L. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar] [CrossRef]
  13. Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, Canada, 2009; pp. 1–60. [Google Scholar]
  14. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  15. Kim, J.; Lee, J.K.; Lee, K.M. Deeply-Recursive Convolutional Network for Image Super-Resolution. arXiv, 2015; arXiv:1511.04491. [Google Scholar]
  16. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. arXiv, 2016; arXiv:1511.04587. [Google Scholar]
  17. Svoboda, P.; Hradis, M.; Barina, D.; Zemcik, P. Compression Artifacts Removal Using Convolutional Neural Networks. arXiv, 2016; arXiv:1605.00366. [Google Scholar]
  18. Mao, X.-J.; Shen, C.; Yang, Y.-B. Image Restoration using very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections. arXiv, 2016; arXiv:1603.09056. [Google Scholar]
  19. Wang, L.; Huang, Z.; Gong, Y.; Pan, C. Ensemble based deep networks for image super-resolution. Pattern Recognit. 2017, 68, 191–198. [Google Scholar] [CrossRef]
  20. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. arXiv, 2016; arXiv:1609.05158. [Google Scholar]
  21. Tai, Y.; Yang, J.; Liu, X. Image Super-Resolution via Deep Recursive Residual Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3147–3155. [Google Scholar] [CrossRef]
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv, 2015; arXiv:1512.03385. [Google Scholar]
  23. Zagoruyko, S.; Komodakis, N. Wide Residual Networks. arXiv, 2016; arXiv:1605.07146. [Google Scholar]
  24. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. arXiv, 2017; arXiv:1611.05431. [Google Scholar]
  25. Collins, C.B.; Beck, J.M.; Bridges, S.M.; Rushing, J.A.; Graves, S.J. Deep Learning for Multisensor Image Resolution Enhancement. In Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery, Redondo Beach, CA, USA, 7 November 2017; pp. 37–44. [Google Scholar] [CrossRef]
  26. Latifovic, R.; Pouliot, D.; Olthof, I. Circa 2010 Land Cover of Canada: Local Optimization Methodology and Product Development. Remote Sens. 2017, 9, 1098. [Google Scholar] [CrossRef]
  27. Mueller-Wilm, U.M. S2 MPC Sen2Cor Configuration and User Manual; European Space Agency: Paris, France, 2017. [Google Scholar]
  28. Storey, J.; Roy, D.P.; Masek, J.; Gascon, F.; Dwyer, J.; Choate, M. A note on the temporary misregistration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi Spectral Instrument (MSI) imagery. Remote Sens. Environ. 2016, 186, 121–122. [Google Scholar] [CrossRef]
  29. Storey, J.; Choate, M.; Rengarajan, R.; Lubke, M. Landsat-8/Sentinel-2 Registration Accuracy and Improvement Status; United States Geological Survey: Reston, VA, USA, 2017.
  30. Latifovic, R.; Pouliot, D.; Sun, L.; Schwarz, J.; Parkinson, W. Moderate Resolution Time Series Data Management and Analysis: Automated Large Area Mosaicking and Quality Control; Natural Resources Canada: Ottawa, ON, Canada, 2015. [Google Scholar] [CrossRef]
  31. United States Geological Survey. USGS Product Guide Landsat 8 Surface Reflectance Code (LaSRC) Product; United States Geological Survey: Reston, VA, USA, 2017; pp. 1–38.
  32. Lambin, E.F.; Strahlers, A.H. Change-vector analysis in multitemporal space: A tool to detect and categorize land-cover change processes using high temporal-resolution satellite data. Remote Sens. Environ. 1994, 48, 231–244. [Google Scholar] [CrossRef]
  33. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  34. Fernandes, R.; Leblanc, S.G. Parametric (modified least squares) and non-parametric (Theil-Sen) linear regressions for predicting biophysical parameters in the presence of measurement errors. Remote Sens. Environ. 2005, 95, 303–316. [Google Scholar] [CrossRef]
  35. Ruder, S. An overview of gradient descent optimization algorithms. arXiv, 2016; arXiv:1609.04747. [Google Scholar]
  36. Goodwin, L.D.; Leech, N.L. Understanding correlation: Factors that affect the size of r. J. Exp. Educ. 2006, 74, 249–266. [Google Scholar] [CrossRef]
  37. Liang, Y.; Yang, Z.; Zhang, K.; He, Y.; Wang, J.; Zheng, N. Single Image Super-resolution with a Parameter Economic Residual-like Convolutional Neural Network. In Proceedings of the International Conference on Multimedia Modeling, Reykjavik, Iceland, 4–6 January 2017. [Google Scholar]
Figure 1. Study site locations and Landsat scene footprints.
Figure 2. Configuration of the CNNs tested. (a) The SRCNN of Dong et al. [10]. (b) The deep residual and connected CNN developed. Red lines are residual blocks and blue lines are connections between residual blocks. Black line is the connection for the residual learning objective, which is put through a single convolution layer to convert the input three band image to a single band. The ⊕ symbol represents summation of the output activation layer elements.
Figure 3. Example results for a residential area surrounded by mixed boreal forest conditions: (a) Landsat image, (b) resolution-enhanced image, and (c) Sentinel-2 image. Displayed as red = NIR, green = SWIR, blue = red.
Figure 4. Example results for each band for a mixed boreal forest with some industrial development: (a,d,g,j) Landsat image, (b,e,h,k) resolution-enhanced image, and (c,f,i,l) Sentinel-2 image. (a–c) Multi-band image displayed as red = NIR, green = SWIR, blue = red; (d–f) NIR; (g–i) SWIR; (j–l) red.
Figure 5. Example temporal extension result for a boreal mixedwood forest: (a–c) the spatially enhanced result and (d–f) the original Landsat.
Figure 6. Example temporal extension result for a northern tundra area: (a–c) the spatially enhanced result and (d–f) the original Landsat.
Figure 7. Example temporal extension result for a broadleaf forest area: (a–c) the spatially enhanced result and (d–f) the original Landsat.
Figure 8. Example temporal extension result for a cropland area: (a–c) the spatially enhanced result and (d–f) the original Landsat.
Table 1. Landsat and Sentinel-2 images used for training and testing.
| Region | Sentinel-2 Tile | Sentinel-2 Date | Landsat-8 Path_Row | Landsat-8 Date | Landsat-5 Path_Row | Landsat-5 Date |
|---|---|---|---|---|---|---|
| Boreal | T12VWH 8 | 7/19/2016 | 40_20 | 7/17/2016 | 42_20 | 7/31/1984 |
| | T12VVH 8 | 8/30/2016 | 42_20 | 7/15/2016 | 42_20 | 8/28/1997 |
| | T12VVJ 5 | 7/9/2017 | 42_21 | 8/30/2015 | 42_20 | 7/20/2011 |
| Tundra | T17VML 5,8 | 8/14/2016 | 26_15 | 8/16/2016 | 26_15 | 8/1/1987 |
| | T17WMM 5,8 | 8/13/2016 | | | 26_15 | 8/4/1994 |
| | T17WMM 5,8 | 8/13/2016 | | | 26_15 | 7/15/2010 |
| Cropland/Woodland | T18TVR 5,8 | 7/20/2016 | 15_29 | 8/19/2016 | 16_28 | 8/11/1987 |
| | T18TWR 5,8 | 7/20/2016 | 16_28 | 8/26/2016 | 16_28 | 7/3/2002 |
| | | | 16_28 | 8/26/2016 | 16_28 | 8/2/2007 |

Tile references for Landsat are in Path_Row format. Superscripts on Sentinel-2 tiles denote which Landsat sensor the tile was paired with: 8 for Landsat-8 and 5 for Landsat-5.
Table 2. Hold-out performance metrics.
RegionMethodBandMEANSTDMEMAESTDEP5EP95ESSIMmSSIMsCORmCORs
Boreal forestNo TransformRed323166−0.8836.5063.14−77.0076.000.9650.0370.7460.139
Boreal forestNo TransformNIR2050720−1.40201.79292.76−450.00469.000.8270.0850.8030.110
Boreal forestNo TransformSWIR1247428−0.6696.45152.28−225.00219.000.9150.0750.8520.122
Boreal forestSRCNNRed323166−0.8431.4554.02−62.0367.860.9730.0290.7940.139
Boreal forestSRCNNNIR20507204.76167.19240.25−365.91384.170.8750.0780.8530.104
Boreal forestSRCNNSWIR1247428−3.7372.95107.91−160.51166.310.9420.0630.8950.118
Boreal forestDCR_SRCNNRed3231663.9528.3642.50−54.5065.810.9780.0170.8110.131
Boreal forestDCR_SRCNNNIR2050720−8.66141.01195.48−322.54301.100.9040.0590.8810.085
Boreal forestDCR_SRCNNSWIR12474285.7249.0069.94−98.04116.910.9670.0330.9310.084
TundraNo TransformRed10137200.49110.46190.91−240.00263.000.8750.0690.7840.098
TundraNo TransformNIR1948909−0.66163.53258.55−379.00373.000.8450.0650.8040.090
TundraNo TransformSWIR236314492.58184.42314.71−445.00447.000.8930.0560.8640.082
TundraSRCNNRed1013718−2.1872.42116.81−161.39160.700.9430.0370.8850.078
TundraSRCNNNIR19489088.23103.97151.04−222.45242.110.9340.0410.9120.066
TundraSRCNNSWIR23631446−0.3499.10154.68−229.38234.230.9610.0420.9480.075
TundraDCR_SRCNNRed10137189.2362.93593.99−123.38158.160.9540.0270.9000.072
TundraDCR_SRCNNNIR1948908−5.2485.713118.71−196.30185.540.9500.0320.9310.058
TundraDCR_SRCNNSWIR236314476.4559.61985.33−129.24142.420.9800.0220.9690.060
Cropland/WoodlandNo TransformRed318253−1.1661.43123.19−149.00170.000.8990.1070.6440.177
Cropland/WoodlandNo TransformNIR33251256−12.73319.87441.58−714.00684.000.7410.1260.7550.173
Cropland/WoodlandNo TransformSWIR1582605−4.43123.88187.29−291.00278.000.8690.1120.7730.118
Cropland/WoodlandSRCNNRed318253−0.4152.65101.73−129.22142.180.9280.0830.7190.181
Cropland/WoodlandSRCNNNIR332512560.24258.85351.04−563.76569.600.8300.1120.8190.160
Cropland/WoodlandSRCNNSWIR1582605−0.7989.03134.10−194.72207.900.9130.0990.8740.172
Cropland/WoodlandDCR_SRCNNRed3182534.0945.46582.76−103.66123.950.9450.0570.7480.173
Cropland/WoodlandDCR_SRCNNNIR3325125628.70232.460312.05−475.67541.730.8600.0890.8450.142
Cropland/WoodlandDCR_SRCNNSWIR158260513.5866.38893.63−127.05164.150.9480.0520.9090.134
MEAN—observed sample mean, STD—observed sample standard deviation, ME—mean error, MAE—mean absolute error, STDE—error standard deviation, P5E—5th percentile error, P95E—95th percentile error, SSIMm—mean SSIM, SSIMs—standard deviation SSIM, CORm—mean spatial correlation, CORs—standard deviation of spatial correlation.
Table 3. Spatial extension performance metrics.
RegionMethodBandMEANSTDMEMAESTDEP5EP95ESSIMmSSIMsCORmCORs
Boreal forestNo TransformRed336114−0.5326.1741.53−54.0055.000.9770.0240.7750.142
Boreal forestNo TransformNIR2084921−0.69143.96211.10−317.00332.000.8760.0590.8200.161
Boreal forestNo TransformSWIR1420524−1.1049.9477.52−116.00114.000.9550.0240.8800.166
Boreal forestSRCNNRed3361144.4824.7339.58−46.1056.610.9790.0250.7910.162
Boreal forestSRCNNNIR2084921−3.57126.70182.72−277.60289.220.9030.0530.8470.173
Boreal forestSRCNNSWIR1420524−12.3150.1473.00−111.35110.590.9690.0250.9020.177
Boreal forestDCR_SRCNNRed336114−0.3724.2337.80−42.1757.030.9810.0200.8070.152
Boreal forestDCR_SRCNNNIR20849211.20122.82176.15−280.86268.820.9100.0510.8530.166
Boreal forestDCR_SRCNNSWIR1420524−0.4751.3582.06−116.80129.430.9610.0290.8850.176
TundraNo TransformRed15009621.74148.04256.00−346.00353.000.8650.1120.8200.120
TundraNo TransformNIR227511321.06170.06283.40−389.00380.000.8700.0750.8400.101
TundraNo TransformSWIR310117010.37223.58413.35−554.00566.000.8960.0990.8720.104
TundraSRCNNRed1500962−3.1797.18180.10−205.83208.110.9360.1060.9090.110
TundraSRCNNNIR22751132−3.57101.92158.59−228.87215.300.9490.0570.9340.076
TundraSRCNNSWIR31011701−8.22129.31251.22−291.85297.050.9510.0940.9410.100
TundraDCR_SRCNNRed150096212.7892.82176.46−170.22220.730.9410.1060.9140.110
TundraDCR_SRCNNNIR22751132−3.4797.39154.86−218.60209.260.9520.0560.9370.073
TundraDCR_SRCNNSWIR310117015.69138.25271.00−311.06347.590.9450.0940.9330.099
Cropland/WoodlandNo TransformRed3283000.4379.90159.20−184.00231.000.8640.1090.6400.176
Cropland/WoodlandNo TransformNIR36321250−10.51354.13490.97−810.00772.000.7210.1250.7200.181
Cropland/WoodlandNo TransformSWIR1628543−1.61132.12203.57−299.00303.000.8480.1180.7700.200
Cropland/WoodlandSRCNNRed328300−2.4469.22135.09−169.95186.180.9020.0860.7210.176
Cropland/WoodlandSRCNNNIR36321250−6.12304.61419.01−678.00672.290.8070.1110.7890.166
Cropland/WoodlandSRCNNSWIR1628543−1.43104.11157.00−222.85242.840.8960.1040.8440.193
Cropland/WoodlandDCR_SRCNNRed3283004.4169.81139.20−163.35195.460.9020.0840.7200.170
Cropland/WoodlandDCR_SRCNNNIR3632125026.46300.29412.67−630.31695.180.8160.1100.7970.161
Cropland/WoodlandDCR_SRCNNSWIR162854314.4289.49141.21−174.24227.410.9090.1010.8560.191
MEAN—observed sample mean, STD—observed sample standard deviation, ME—mean error, MAE—mean absolute error, STDE—error standard deviation, P5E—5th percentile error, P95E—95th percentile error, SSIMm—mean SSIM, SSIMs—standard deviation SSIM, CORm—mean spatial correlation, CORs—standard deviation of spatial correlation.
Table 4. Temporal extension performance metrics.
RegionDateBandNo TransformSRCNNDCDR_SRCNNNo TransformSRCNNDCDR_SRCNNNo TransformSRCNNDCDR_SRCNN
MAESTDEMAESTDEMAESTDECORmCORsCORmCORsCORmCORsSSIMmSSIMsSSIMmSSIMsSSIMmSSIMs
Boreal forest1984_07_23Red4365396136590.4060.2670.4430.2790.4540.2770.9250.0960.9340.0980.9360.097
Boreal forest1984_07_23NIR2753742723732733740.5150.2510.5480.2470.5590.2460.5750.1990.6090.2050.6200.206
Boreal forest1984_07_23SWIR1361861341831291790.4880.2880.5270.2950.5380.2980.7020.1790.7210.1840.7260.184
Boreal forest1997_08_28Red4265365934580.5070.2230.5630.2380.5760.2390.9320.0870.9420.0900.9450.089
Boreal forest1997_08_28NIR2693702613632593600.5720.2210.6280.2120.6430.2070.6180.1880.6690.1850.6820.180
Boreal forest1997_08_28SWIR1201691161641111600.6440.2330.6860.2400.6950.2410.7790.1560.8010.1600.8070.160
Boreal forest2011_07_20Red3352284727460.6740.1220.7380.1140.7520.1070.9590.0420.9660.0460.9680.044
Boreal forest2011_07_20NIR1992731742411692350.7980.0800.8390.0720.8480.0710.7840.0770.8450.0660.8600.064
Boreal forest2011_07_20SWIR801147510764920.8550.0610.8990.0570.9130.0570.9070.0490.9320.0480.9400.048
Cropland/Woodland1987_08_11Red6512164126691270.5270.2290.5760.2360.5820.2430.8660.1250.8700.1240.8700.122
Cropland/Woodland1987_08_11NIR4596064155564005410.6790.2330.7220.2140.7320.2170.6320.2030.7240.1980.7370.202
Cropland/Woodland1987_08_11SWIR1762411672251462070.7500.2220.7810.2240.7870.2280.7850.1490.8200.1570.8280.162
Cropland/Woodland2002_07_30Red5710755109581070.6110.1920.6550.1980.6630.2000.8910.1020.8970.1030.9000.098
Cropland/Woodland2002_07_30NIR4145513745053644990.7630.1660.7960.1600.8050.1590.7250.1520.7990.1500.8070.152
Cropland/Woodland2002_07_30SWIR1492061431891191840.8190.1580.8460.1710.8530.1720.8410.1050.8720.1210.8800.123
Cropland/Woodland2007_08_20Red57105478846860.6380.1490.7230.1320.7450.1280.8970.0940.9260.0650.9310.061
Cropland/Woodland2007_08_20NIR4335743174203004040.8050.1150.8480.1000.8570.1000.6840.1100.8350.0980.8530.100
Cropland/Woodland2007_08_20SWIR135188105148931310.8750.0770.9150.0690.9240.0690.8800.0500.9250.0550.9360.054
Tundra1987_08_01Red9113679114781110.7490.1420.8040.1340.8060.1350.8770.0670.9060.0610.9080.061
Tundra1987_08_01NIR1552301261801271830.8130.1000.8780.0880.8780.0900.8530.0670.9070.0570.9070.058
Tundra1987_08_01SWIR2073121542261311900.8510.0920.9070.0830.9150.0800.8700.0750.9180.0660.9250.067
Tundra1994_08_04Red9113578111771100.7570.1420.8100.1350.8110.1370.8750.0670.9070.0600.9070.060
Tundra1994_08_04NIR1572301211721191710.8160.1070.8810.0970.8820.0950.8460.0780.9060.0670.9080.065
Tundra1994_08_04SWIR1902941422111191760.8470.1170.9070.1030.9160.0990.8720.0920.9210.0810.9280.078
Tundra2010_07_15Red83139659864940.8010.1030.8760.0800.8810.0780.8950.0760.9390.0450.9420.041
Tundra2010_07_15NIR1502261061501031480.8290.0850.9100.0630.9130.0610.8590.0630.9300.0420.9340.040
Tundra2010_07_15SWIR1902921331951041520.8610.0820.9290.0590.9410.0540.8830.0630.9390.0460.9500.043
MAE—mean absolute error, STDE—error standard deviation, SSIMm—mean SSIM, SSIMs—standard deviation SSIM, CORm—mean spatial correlation, CORs—standard deviation of spatial correlation.
