Correction published on 7 December 2019, see Remote Sens. 2019, 11(24), 2933.
Article

Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation

1 School of Computer Science and Technology, Xi’an University of Technology, Xi’an 710048, China
2 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(10), 1229; https://doi.org/10.3390/rs11101229
Submission received: 11 April 2019 / Revised: 9 May 2019 / Accepted: 16 May 2019 / Published: 23 May 2019

Abstract:
Limited by existing imagery sensors, hyperspectral images are characterized by high spectral resolution but low spatial resolution. The super-resolution (SR) technique, which aims at enhancing the spatial resolution of the input image, is a hot topic in computer vision. In this paper, we present a hyperspectral image (HSI) SR method based on a deep information distillation network (IDN) and an intra-fusion operation. Specifically, bands are first selected at a certain interval and super-resolved by an IDN. The IDN employs distillation blocks to gradually extract abundant and efficient features for reconstructing the selected bands. Second, the unselected bands are obtained via spectral correlation, yielding a coarse high-resolution (HR) HSI. Finally, the spectral-interpolated coarse HR HSI is intra-fused with the input HSI to achieve a finer HR HSI, making further use of the spatial-spectral information the unselected bands convey. Different from most existing fusion-based HSI SR methods, the proposed intra-fusion operation does not require any auxiliary co-registered image as the input, which makes the method more practical. Moreover, contrary to most single-based HSI SR methods, whose performance decreases significantly as the image quality gets worse, the proposal deeply utilizes the spatial-spectral information and the mapping knowledge provided by the IDN, achieving more robust performance. Experimental data and comparative analysis have demonstrated the effectiveness of this method.

Graphical Abstract

1. Introduction

Hyperspectral imagery sensors usually collect reflectance information of objects in hundreds of contiguous bands over a certain electromagnetic spectrum [1], and the hyperspectral image (HSI) can simultaneously obtain a set of two-dimensional images (or bands) [2]. These rich bands play an important role in discriminating different objects by their spectral signatures [3], making them widely applicable in classification [4] and anomaly detection [5]. However, limited by existing imaging sensor technologies, HSIs are characterized by low spatial resolution, which limits the performance of these applications. As a signal post-processing technique, HSI super-resolution (SR) can improve the spatial resolution of the HSI without modifying the imagery hardware, and it is a hot issue in computer vision.
HSI SR has been studied for a long time in remote sensing and many methods have been proposed to improve the spatial resolution of the HSIs. According to the number of input images, these methods can be roughly classified into two types: the fusion-based HSI SR methods and the single-based ones.
The fusion-based approaches are based on the assumption that multiple fully-registered observations of the same scene are accessible. Dong et al. [6] proposed a nonnegative structured sparse representation approach, which jointly estimates the dictionary and sparse code of the high-resolution (HR) HSI based on the input low-resolution (LR) HSI and an HR panchromatic (PAN) image. By utilizing the similarities between pixels in a super-pixel, Fang et al. [7] proposed a super-pixel based sparse representation method. Dian et al. [8] presented a non-local sparse tensor factorization HSI SR method, which achieves a fuller exploitation of the spatial-spectral structures in the HSI. Because matrixing the three-dimensional HSI and multispectral image (MSI) is prone to inducing loss of structural information, Kanatsoulis et al. [9] addressed the problem from a tensor perspective and established a coupled tensor factorization framework. Zhang et al. [10] discovered that the clustering manifold structure of the latent HSI is well preserved in the spatial domain of the input conventional image, and proposed to super-resolve the HSI based on this discovery. Considering that sparse-based methods tackle each pixel independently, Han et al. [11] utilized a self-similarity prior as the constraint for sparse representation of the HSI and MSI. With the auxiliary HR image being an additional input, more information is available, and this type of method often achieves a better spatial enhancement. However, in reality, an auxiliary fully-registered HR description of the same scene is often difficult or impossible to obtain, which restricts the practicability of this type of method.
The single-based HSI SR methods can be further divided into sub-pixel mapping methods and direct single-based ones. Sub-pixel mapping methods aim at estimating the fractional abundance of pure ground objects within a mixed pixel, and obtain the probabilities of sub-pixels belonging to different land cover classes [12]. Irmak [13] first utilized the virtual dimensionality to determine the number of endmembers in the scene and computed the abundance maps. The corresponding HR abundance maps are first obtained by a maximum a posteriori method and then utilized to reconstruct the HR HSI. This kind of method tackles the SR problem through endmember extraction and fractional abundance estimation. However, the noise generated by the unmixing operation inevitably propagates into the mapping operation, which negatively influences the SR process. Additionally, sub-pixel mapping methods are usually applied to certain applications, such as classification and target detection, to overcome the limitation in spatial resolution. Arun et al. [14] explored a convolutional neural network to jointly optimize the unmixing and mapping operations in a supervised manner. Xu et al. [15] presented a joint spectral-spatial mapping model to obtain the probabilities of sub-pixels belonging to different land cover classes, and obtained a resolution-enhanced image.
The direct single-based HSI SR methods aim at reconstructing an HR HSI with only one LR HSI. Inspired by the achievements in deep learning based RGB image SR, Yuan et al. [16] and He et al. [17] proposed to super-resolve each band individually by transfer learning. Considering the three-dimensional data characteristics of HSIs, Mei et al. [18] proposed a 3D-CNN to exploit both the spatial context and the spectral correlation. However, as there are usually hundreds of bands in the HSI, super-resolving the bands individually incurs high computational complexity. Moreover, as the image quality decreases, super-resolving each single band becomes much more difficult, which induces severe performance degradation.
In this paper, we propose an HSI SR method that combines an information distillation network (IDN) [19] with an intra-fusion operation to make a deep exploitation of the spatial-spectral information. During the implementation process, bands are first selected at a certain interval and super-resolved by taking advantage of their spatial information and the spatial mapping learnt by the IDN model. The IDN was trained on 91 images from Yang et al. [20] and 200 images from the Berkeley Segmentation dataset [21]. Three data augmentation methods were applied to make full use of the training data, yielding 2619 images. The IDN was designed to learn the spatial mapping between the Y channels of low-resolution RGB images and those of the corresponding high-resolution RGB images. Each single band in an HSI can likewise be treated as its Y channel at the current wavelength, so it is reasonable to transfer the IDN to HSI SR. Second, spectral correlation is utilized to achieve a complete but coarse HR HSI. In addition, the information the unselected bands convey is further exploited by intra-fusing them with the coarse HR HSI, resulting in a finer HR HSI. In this way, both spatial and spectral information of the input LR HSI is fully utilized, which contributes to the robust and acceptable performance. The main contributions of this work are summarized as follows:
1. We adopt a scalable SR strategy for super-resolving the HSI. First, an IDN is used to super-resolve the interval-selected bands individually, exploiting their spatial information and the mapping learned by the IDN. Second, the unselected bands are quickly interpolated via the cubic Hermite spline method, which uses the high spectral correlation in the HSI to obtain a coarse HR HSI. Both spatial and spectral information is utilized. Meanwhile, contrary to super-resolving the HSI band by band via certain methods, this scalable SR strategy achieves a tradeoff between high quality and high efficiency.
2. Most existing single-based methods super-resolve bands in the HSI individually, which neglects the spectral information. In this way, their performance is highly correlated to the images’ spatial quality. The proposed method deeply utilizes both the spatial and spectral information in the HSI, and its performance is more robust.
3. To deeply use the information the input LR HSI conveys, intra-fusion is made between the spectral-interpolated coarse HR HSI and the input LR HSI. Different from most fusion methods, which require another co-registered image as the input, the other input of the proposed intra-fusion is an intermediate outcome of the SR processing, which fully exploits the information the LR HSI conveys in a subtle way.
The remainder of the paper is organized as follows: Section 2 describes the proposed method. We present the experimental results and data analysis in Section 3. Conclusions are drawn in Section 4.

2. Proposed Method

In this section, we present the four main parts of the proposed method: framework overview, bands’ selection and super-resolution by IDN, unselected bands’ super-resolution, and intra-fusion. Detailed descriptions of these four parts are presented in the following subsections.

2.1. Framework Overview

Figure 1 illustrates the workflow of the proposed HSI SR method. The input is a single LR HSI. Bands are first selected at a certain interval and super-resolved via the IDN with respect to their spatial information. Then, unselected bands are super-resolved by utilizing their spectral correlation to obtain a coarse but complete HR HSI. Furthermore, this coarse HR HSI is intra-fused with the input LR HSI to further use the information the unselected bands convey.
To facilitate discussion, we clarify the notation for frequently used terms. The input LR HSI and the desired HR HSI are represented as $L \in \mathbb{R}^{w \times h \times n}$ and $H \in \mathbb{R}^{sw \times sh \times n}$, where $w$ and $h$ denote the width and height of the input LR HSI, and $s$ and $n$ denote the scaling factor and the number of bands, respectively. $\bar{H}$ and $\hat{H}$ denote the coarse HR HSI and the intra-fused fine HR HSI, respectively.

2.2. Bands’ Selection and Super-Resolution by IDN

This part firstly analyzes the correlation between the bands in the HSI and depicts the rationality of interval setting, and then super-resolves the selected bands via IDN.

2.2.1. Correlation Analysis

For HSIs, their neighboring bands are highly correlated in the spectral domain [22]. Figure 2 plots the correlation coefficient curves of the Pavia university scene HSI, a remote-sensing HSI widely applied in classification [23].
The value on the x-axis is the index of the current band $H_i$. The legend denotes the interval between $H_i$ and the other neighboring band $H_{i+d}$. The value on the y-axis is the correlation coefficient between $H_i$ and $H_{i+d}$. According to the three curves in Figure 2, although the correlation decreases as a band gets further from the current one, most bands are highly correlated to their neighboring bands, with correlation coefficients larger than 0.95. Moreover, as the image quality decreases, most high-frequency details in the images are damaged, so the correlation coefficient between neighboring bands is supposed to be even higher (related experiments are described in Section 3). Hence, to achieve high efficiency without performance loss, it is rational to first select some bands at a certain interval and super-resolve only the selected bands by utilizing their spatial information. Contrary to super-resolving every band in the HSI, this operation greatly reduces the computational complexity with negligible performance degradation.
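Curves of this kind can be reproduced with a few lines of code. The following is a minimal sketch, assuming the HSI is stored as a (height, width, bands) NumPy array; `band_correlations` is a hypothetical helper name, not part of the paper's implementation.

```python
import numpy as np

def band_correlations(hsi, d=1):
    """Correlation coefficient between each band H_i and H_{i+d}.

    hsi: array of shape (height, width, n_bands).
    Returns an array of n_bands - d coefficients, one per band index i.
    """
    n_bands = hsi.shape[2]
    flat = hsi.reshape(-1, n_bands)          # pixels x bands
    return np.array([
        np.corrcoef(flat[:, i], flat[:, i + d])[0, 1]
        for i in range(n_bands - d)
    ])
```

Plotting these values for d = 1, 2, 3 yields curves in the style of Figure 2.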

2.2.2. Super-Resolution via IDN

Figure 3 shows the general architecture of the IDN, which consists of three parts, i.e., a feature extraction block, multiple stacked information distillation blocks, and a reconstruction block.
The feature extraction block applies two 3 × 3 convolution layers to extract feature maps from the original LR images. The extracted feature maps act as the input of the information distillation blocks. Several information distillation blocks are composed in a chained mode, in which each block contains an enhancement unit and a compression unit in a stacked style. Finally, a transposed convolution layer without an activation function acts as the reconstruction block to obtain the HR image. Compared with other networks, the IDN extracts feature maps directly from the LR images and utilizes multiple DBlocks to generate the residual representations in HR space. The enhancement unit in each DBlock gathers as much information as possible, and the following compression unit distills the more useful information, which achieves competitive results with a concise structure.
It should be noted that the network considered two loss functions during the training process. The first one is the widely used mean square error (MSE):
$$loss_{MSE} = \frac{1}{N}\sum_{i=1}^{N} \|I_i - \hat{I}_i\|_2^2$$
in which $N$, $I_i$, and $\hat{I}_i$ represent the number of training samples, the $i$th input image patch, and the label of the $i$th input image patch, respectively. Meanwhile, the mean absolute error (MAE) is also applied to train the IDN model. The MAE is formulated as:
$$loss_{MAE} = \frac{1}{N}\sum_{i=1}^{N} \|I_i - \hat{I}_i\|_1$$
Specifically, the IDN is first trained with $loss_{MAE}$, and then fine-tuned with $loss_{MSE}$.
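For concreteness, the two losses can be sketched as plain NumPy functions operating on lists of patch arrays. This is a toy illustration only; the actual IDN is trained in a deep learning framework, and the function names here are illustrative.

```python
import numpy as np

def loss_mse(preds, labels):
    """Mean over samples of the squared l2 norm of the residual."""
    return np.mean([np.sum((p - l) ** 2) for p, l in zip(preds, labels)])

def loss_mae(preds, labels):
    """Mean over samples of the l1 norm of the residual."""
    return np.mean([np.sum(np.abs(p - l)) for p, l in zip(preds, labels)])
```

Training first with `loss_mae` and fine-tuning with `loss_mse` mirrors the schedule described above.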
Having the trained IDN, the spatial resolution of the selected bands in the LR HSI can be enhanced in a fast and efficient way, and the IDN super-resolved HR bands can be denoted as
$$\bar{H} = \{\bar{H}_1,\ [\ ],\ \ldots,\ [\ ],\ \bar{H}_{1+d},\ [\ ],\ \ldots,\ [\ ],\ \bar{H}_{2d+1},\ [\ ],\ \ldots\}$$
where the unselected bands are temporarily missing.

2.3. Spectral-Interpolation for the Unselected Bands

Given the super-resolved interval-selected bands, the proposal applies a cubic Hermite spline method $f(x)$ to achieve a continuous and smooth entire HR HSI. Cubic Hermite splines are typically used for interpolation of numeric data specified at given argument values, to obtain a smooth continuous function. Compared with linear interpolation, it can better preserve the mean of the dependent variables and capture the nature of their relationships [24].
Suppose that the following nodes in the cubic Hermite spline function and their values are given:
$$x:\ a = x_0 < x_1 < \ldots < x_n = b \qquad y:\ y_0,\ y_1,\ \ldots,\ y_n$$
in which $n$ denotes the number of given nodes minus 1, as the index starts from 0. The proposed cubic Hermite spline function $f(x)$ describes the mapping between $x$ and $y$, and is a piecewise-defined formula.
With these IDN-super-resolved HR bands acting as the given nodes, the cubic Hermite spline function f ( x ) can be applied to obtain one coarse but integrated HR HSI, which can be denoted as:
$$\bar{H} = [\bar{H}_1,\ f(L_2),\ f(L_3),\ \ldots,\ \bar{H}_{1+d},\ \ldots]$$
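The spectral interpolation step can be sketched as follows. The tangent rule (simple finite differences) is an assumption made for illustration; the actual cubic Hermite spline variant may choose tangents differently (e.g., the shape-preserving tangents of MATLAB's pchip). `hermite_fill_bands` is a hypothetical helper name.

```python
import numpy as np

def hermite_fill_bands(selected, d):
    """Fill the unselected bands between interval-selected, already
    super-resolved bands via a cubic Hermite spline along the spectral
    axis. selected: (k, H, W) bands at spectral positions 0, d, 2d, ...
    Returns a cube of (k - 1) * d + 1 bands."""
    k = selected.shape[0]
    m = np.empty_like(selected)            # tangents via finite differences
    m[1:-1] = (selected[2:] - selected[:-2]) / 2.0
    m[0] = selected[1] - selected[0]
    m[-1] = selected[-1] - selected[-2]
    out = np.empty(((k - 1) * d + 1,) + selected.shape[1:])
    for i in range(k - 1):
        p0, p1, m0, m1 = selected[i], selected[i + 1], m[i], m[i + 1]
        for j in range(d):
            t = j / d                      # position inside the interval
            h00 = 2 * t**3 - 3 * t**2 + 1  # Hermite basis polynomials
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out[i * d + j] = h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
    out[-1] = selected[-1]
    return out
```

Note that the selected bands are reproduced exactly at their own spectral positions; only the gaps between them are synthesized.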

2.4. Intra-Fusion

According to the above descriptions, the HR HSI $\bar{H}$ is reconstructed by utilizing the spatial information of the selected bands, the mapping learned by the IDN model, and the spectral correlation. The information the unselected bands convey is directly neglected. If this information is further utilized, an additional spatial enhancement can be expected. Therefore, we propose to obtain an HR HSI $\hat{H}$ by intra-fusing the spectral-interpolated $\bar{H}$ with the input LR HSI $L$ via the non-negative matrix factorization (NMF) method. Different from most existing fusion methods, one input of the proposed fusion is an intermediate output of the super-resolution process, which is why it is named intra-fusion. This intra-fusion is more flexible and more practical.
Because of the coarse spatial resolution, pixels in the HSI are usually mixtures of different materials. Spectral curves of the HSI are usually mixtures of different pure materials’ reflectance, and these pure materials are called endmembers. Considering the mathematical simplicity and physical effectiveness, each spectral curve can be modeled by a linear mixture model. Let matrix $H_\alpha \in \mathbb{R}^{n \times s^2wh}$ represent the desired HR HSI obtained by concatenating the pixels of the HSI $H \in \mathbb{R}^{sw \times sh \times n}$. The same operation is implemented on the spectral-interpolated HR HSI $\bar{H} \in \mathbb{R}^{sw \times sh \times n}$ and the input LR HSI $L \in \mathbb{R}^{w \times h \times n}$ to obtain $\bar{H}_\alpha \in \mathbb{R}^{n \times s^2wh}$ and $L_\alpha \in \mathbb{R}^{n \times wh}$. In this way, the desired HR HSI $H_\alpha$ can be denoted as
$$H_\alpha = WC + N$$
where $W \in \mathbb{R}^{n \times D}$ is the endmember matrix, and $C \in \mathbb{R}^{D \times s^2wh}$ is the abundance matrix. $N$ denotes the residual. $D$ is the number of endmembers, and each column in $W$ represents the spectrum of an endmember. Here, $D$ is obtained by the Neyman–Pearson lemma [25]. Given an HSI with $n$ bands, the eigenvalues of its correlation matrix $R_{n \times n}$ and covariance matrix $K_{n \times n}$ are computed and sorted as $\{\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n\}$ and $\{\xi_1 \geq \xi_2 \geq \ldots \geq \xi_n\}$, respectively. According to the binary hypothesis test, assume $H_0: \lambda_i - \xi_i = 0$ and $H_1: \lambda_i - \xi_i > 0$. For a preset false alarm rate, a threshold $\alpha$ can be computed to maximize the detection rate. In this way, when $\lambda_i - \xi_i$ is greater than $\alpha$, a signal signature is considered to exist. The number of endmembers is obtained by counting the eigenvalue pairs that satisfy $\lambda_i - \xi_i > \alpha$. In the proposal, the given false alarm rate is set to $5 \times 10^{-2}$. Meanwhile, all elements in the matrices $W$ and $C$ are nonnegative.
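This eigenvalue-comparison test can be sketched as below. For simplicity, the Neyman–Pearson threshold derived from the false alarm rate is replaced by a fixed `thresh` argument — an assumption of this sketch, not the paper's exact procedure.

```python
import numpy as np

def estimate_endmember_count(X, thresh):
    """Eigenvalue-comparison estimate of the number of endmembers D.

    X: (n_bands, n_pixels) matrixed HSI.
    Compares sorted eigenvalues of the (non-centered) correlation
    matrix R and the covariance matrix K; a fixed `thresh` stands in
    for the Neyman-Pearson threshold computed from the preset false
    alarm rate.
    """
    R = X @ X.T / X.shape[1]               # correlation matrix R
    K = np.cov(X, bias=True)               # covariance matrix K
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]
    xi = np.sort(np.linalg.eigvalsh(K))[::-1]
    return int(np.sum(lam - xi > thresh))
```

Intuitively, signal components contribute mean energy that inflates the correlation eigenvalues relative to the covariance eigenvalues, while pure zero-mean noise leaves the two spectra nearly identical.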
The spectral-interpolated HR HSI $\bar{H}_\alpha$ and the input LR HSI $L_\alpha$ can be denoted as:
$$\bar{H}_\alpha = QH_\alpha + E_q$$
$$L_\alpha = H_\alpha P + E_p$$
in which $Q$ is the spectral transform matrix, and $P \in \mathbb{R}^{s^2wh \times wh}$ is the spatial spread transform matrix. $E_q$ and $E_p$ are the residual matrices. Substituting Equation (6) into Equations (7) and (8), $\bar{H}_\alpha$ and $L_\alpha$ can be reformulated as
$$\bar{H}_\alpha \approx W_T C$$
$$L_\alpha \approx W C_T$$
where $W_T = QW$ and $C_T = CP$ denote the spectrally degraded endmember matrix and the spatially degraded abundance matrix, respectively.
During the HSI unmixing procedure, it is expected that the HSI reconstructed from the endmember and coefficient matrices should be close to the input image. In this way, the cost functions for the spectral-interpolated $\bar{H}_\alpha$ and the input LR HSI $L_\alpha$ are formulated as $\|\bar{H}_\alpha - W_T C\|_F^2$ and $\|L_\alpha - W C_T\|_F^2$, respectively. NMF was developed to decompose a nonnegative matrix into a product of nonnegative matrices [26]. When applied to $\bar{H}_\alpha$, the cost function is formulated as
$$\arg\min_{W_T,\ C} \|\bar{H}_\alpha - W_T C\|_F^2 \quad \mathrm{s.t.}\ W_T(i,j) \geq 0,\ C(i,j) \geq 0$$
During the solving process, both $W_T$ and $C$ are first initialized as nonnegative matrices. If $W_T$ is smaller than the desired matrix $W_{Td}$, a variable $k$ whose value is larger than 1 is multiplied by $W_T$ to move it toward $W_{Td}$. On the other hand, if $W_T$ is larger than $W_{Td}$, $k$ should be larger than 0 but smaller than 1 to keep all the elements of $W_T$ nonnegative. In this way, $k$ is defined as
$$k = (\bar{H}_\alpha C^T)\ ./\ (W_T C C^T)$$
where $(\cdot)^T$ denotes the transposition of a matrix and “./” denotes element-wise division. According to Equation (7), it is noted that the relation between $k$ and the constant 1 changes with that between $W_T$ and $W_{Td}$. Hence, $W_T$ can be updated by the following expression:
$$W_T = W_T \circ \frac{\bar{H}_\alpha C^T}{W_T C C^T}$$
The same update strategy is applied to $C$, $W$, and $C_T$:
$$C = C \circ \frac{W_T^T \bar{H}_\alpha}{W_T^T W_T C}$$
$$W = W \circ \frac{L_\alpha C_T^T}{W C_T C_T^T}$$
$$C_T = C_T \circ \frac{W^T L_\alpha}{W^T W C_T}$$
where $\circ$ and the fraction bar denote element-wise multiplication and division, respectively.
When $W$ and $C$ are obtained, the HR HSI $\hat{H}$ intra-fused from $\bar{H}$ and $L$ is also achieved, which contains both the spatial information conveyed by $L$ and the spatial mapping correlation learned by the IDN. Moreover, this fusion operation does not require any auxiliary HR image as the input, which is more practical. The complete algorithm is summarized in Algorithm 1.
Algorithm 1: Pseudocode of the proposal.
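A compact NumPy sketch of the NMF machinery is given below. The two-stage coupling (estimate $W$ from $L$ first, then $C$ from $\bar{H}_\alpha$ with $W$ fixed) is a simplifying assumption of this sketch; the method itself alternates the four multiplicative updates above, and `intra_fuse` is a hypothetical helper name.

```python
import numpy as np

def intra_fuse(H_bar, L, D, iters=300, eps=1e-9):
    """NMF-based intra-fusion sketch.

    H_bar: (n, s^2*w*h) spectral-interpolated coarse HR HSI (matrixed).
    L:     (n, w*h)     input LR HSI (matrixed).
    D:     number of endmembers.
    Endmembers W are estimated from L (reliable spectra); abundances C
    are then estimated from H_bar (reliable spatial detail). The fused
    HR HSI is W @ C.
    """
    rng = np.random.default_rng(0)
    n = L.shape[0]
    W = rng.random((n, D)) + eps
    C_T = rng.random((D, L.shape[1])) + eps
    for _ in range(iters):                 # unmix the LR HSI
        W *= (L @ C_T.T) / (W @ C_T @ C_T.T + eps)
        C_T *= (W.T @ L) / (W.T @ W @ C_T + eps)
    C = rng.random((D, H_bar.shape[1])) + eps
    for _ in range(iters):                 # abundances on the coarse HR HSI
        C *= (W.T @ H_bar) / (W.T @ W @ C + eps)
    return W @ C
```

The multiplicative updates preserve nonnegativity by construction, so no explicit projection step is needed.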

3. Experimental Setup and Data Analysis

3.1. Experimental Setup

The performance of the proposed method was mainly evaluated on outdoor HSIs obtained via airborne, spaceborne and ground-based platforms. The outdoor HSIs utilized in the experiments are the Pavia University, Washington DC Mall, Salinas, Botswana and Scene02 datasets. They were acquired by the Reflective Optics Imaging Spectrometer (ROSIS), the Hyperspectral Digital Image Collection Experiment (HYDICE), the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the Hyperion sensor on the NASA EO-1 satellite and the Spectral Imagery (SPECIM) hyperspectral camera, respectively. The geometric resolutions of ROSIS and HYDICE are 1.3 m and 1 m, respectively. There are 610 × 610 × 103 pixels in the original Pavia University image, but we selected a 200 × 200 × 100 region with rich detail to validate the performance. The Washington DC Mall acquisition measured the pixel response in the 400 nm to 2400 nm region. Bands in the 900 nm and 1400 nm regions, where the atmosphere is opaque, were omitted from the dataset, leaving 191 bands. The Salinas dataset was collected by the 224-band AVIRIS sensor, and 20 water absorption bands were discarded. The Botswana dataset was acquired at 30 m pixel resolution over a 7.7 km strip in 242 bands covering the 400–2500 nm portion of the spectrum in 10 nm windows. The Scene02 HSI consists of 1312 × 1174 pixels and 396 spectral reflectance bands in the wavelength range from 378 nm to 1003 nm. It is noted that these outdoor HSIs cover three different types of surface: the Pavia University and Washington DC Mall datasets refer to building typologies, the Salinas and Botswana datasets refer to land cover typologies, and the Scene02 dataset refers to man-made material typologies.
Moreover, to validate the proposal’s robustness to image quality, we also validated the performance on ground-based HSIs with finer spatial detail. We randomly selected two HSIs from the CAVE database: flowers and fake_and_real_food. Their spectral range is from 400 nm to 700 nm in 10 nm steps (31 bands in total).
Figure 4 depicts the high-resolution RGB images of the HSIs from the CAVE database and the gray images of the other HSIs, in which all the gray images are visual exhibitions of the 15th band of the corresponding HSIs. The RGB images of the HSIs from the CAVE database are provided with the database. To make the exhibition more appealing, we rotated the Washington DC Mall and Botswana HSIs by 90° counterclockwise.
The low-resolution HSIs were simulated by bicubic down-sampling the original HSIs with scaling factors $s = 2$, 4 and 8. To comprehensively evaluate the performance of the proposed method, comparisons were made with several state-of-the-art deep learning-based single SR methods, including SRCNN [27], VDSR [28], LapSRN [29], and IDN [19]. The parameters of the competing methods were chosen as described in their corresponding references. The single-based methods provide a solid baseline with no auxiliary images.
In [30], Lim et al. experimentally demonstrated that training with $loss_{MSE}$ is not a good choice. Thus, the IDNs were first trained with $loss_{MAE}$, and then fine-tuned with $loss_{MSE}$. During the training process, the learning rate was initially set to $10^{-4}$ and decreased by a factor of 10 during the fine-tuning phase.
The following six quantitative measurements were employed to evaluate the quality of the reconstructed HR HSIs: Correlation Coefficient (CC), Root Mean Square Error (RMSE), Erreur Relative Globale Adimensionnelle de Synthese (ERGAS) [31], Peak Signal-to-Noise Ratio (PSNR), Structure Similarity Index Measurement (SSIM), and Spectral Angle Mapper (SAM) [32]. CC, RMSE, PSNR and SSIM are universal metrics for image quality assessment, so their detailed descriptions are omitted in this paper. ERGAS is a global statistical measure of super-resolution quality with the best value at 0. The ERGAS of $H$ and $\hat{H}$ is calculated by
$$ERGAS(H, \hat{H}) = \frac{100}{s}\sqrt{\frac{1}{b}\sum_{k=1}^{b}\left(\frac{RMSE_k}{\mu_k}\right)^2}$$
in which $RMSE_k = \|\hat{H}_k - H_k\|_F / \sqrt{n}$. Here, $n$ is the number of pixels in any band of the HR HSI $H$, and $\mu_k$ is the sample mean of the $k$th band of $H$. SAM evaluates the spectral information preservation at each pixel. The SAM at the $k$th pixel is determined by
$$SAM(a, b) = \arccos\left(\frac{\langle a, b\rangle}{\|a\|\,\|b\|}\right)$$
in which $\langle a, b\rangle = a^T b$ is the inner product of $a$ and $b$, and $\|\cdot\|$ is the $\ell_2$ norm. The optimal value of SAM is 0. The values of SAM reported in our experiments were obtained by averaging the values over all the image pixels [33]. Among these six indices, larger CC, PSNR and SSIM indicate better quality, while for SAM, RMSE and ERGAS smaller is better. All experiments were run on a desktop with an Intel Core i5 2.8 GHz CPU and 16.0 GB RAM using MATLAB (R2015b). It is noted that the computation time of the bicubic method was excluded from the comparison because of its poor performance; thus, all times of the bicubic method are marked with *.
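The two less common metrics follow directly from the formulas above. The sketch below assumes HSIs stored as (height, width, bands) arrays with strictly positive band means; the function names are illustrative.

```python
import numpy as np

def ergas(H, H_hat, s):
    """ERGAS between reference H and reconstruction H_hat; s: scale factor."""
    b = H.shape[2]
    acc = 0.0
    for k in range(b):
        diff = H_hat[:, :, k] - H[:, :, k]
        rmse_k = np.sqrt(np.mean(diff ** 2))   # ||.||_F / sqrt(n)
        acc += (rmse_k / H[:, :, k].mean()) ** 2
    return (100.0 / s) * np.sqrt(acc / b)

def sam(H, H_hat):
    """Mean spectral angle (radians) over all pixels."""
    A = H.reshape(-1, H.shape[2])
    B = H_hat.reshape(-1, H.shape[2])
    cos = np.sum(A * B, axis=1) / (np.linalg.norm(A, axis=1)
                                   * np.linalg.norm(B, axis=1))
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Both metrics attain their optimum, 0, when the reconstruction equals the reference.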

3.2. Data Analysis

Correlation coefficients between neighboring bands in the LR HSIs were first measured to select the most proper interval. Figure 5a describes the variation of CC with the interval for the two-times down-sampled Pavia. It is noticed that the CC decreased as the interval increased. This validates that neighboring bands are highly correlated, but as their interval becomes wider, their correlation coefficients decrease. Meanwhile, as the scaling factor increased, more high frequency details in the input LR HSIs were damaged, and the differences between bands narrowed. Figure 5b plots the difference between the CC of the four-times down-sampled Pavia and that of the two-times down-sampled one, with the interval set to 1 and 2. It is noticed that most of the differences were larger than 0. This validates that the bands in the four-times down-sampled HSIs were more correlated than those of the two-times down-sampled ones at the same interval. Thus, a larger interval can be set for a larger down-sampling factor, which reduces the computational complexity with negligible performance loss.
Table 1 is obtained by comparing the quality indices of the spectral-interpolated coarse HR HSIs with those of the IDN band-by-band super-resolved HR HSIs at different intervals. It is noticed that, at the same scaling factor, as the interval grew, the quality indices’ gap became bigger, which is consistent with what Figure 5a describes. Table 2 depicts the tendency of the quality indices’ gap between the spectral-interpolated HR HSI and the IDN band-by-band super-resolved HR HSI at the same interval but different scaling factors. It is noticed that, with the same interval, the performance of the coarse HR HSI at a scaling factor of 8 was closer to that of the IDN super-resolved one than at a scaling factor of 4, and that at a scaling factor of 4 was closer than at a scaling factor of 2, a phenomenon consistent with what Figure 5b shows, which further validates the rationality of a larger-interval setting for a larger scale from a vertical perspective. In this way, the setting of “d” follows two principles: (1) for HSIs with coarser spectral resolution, it is better to set a smaller “d”; and (2) for HSIs with coarser spatial resolution, most high frequency differences between neighboring bands are lost and their spectral correlation is higher than for HSIs with finer spatial detail, so a larger “d” can be set for these HSIs. Figure 6 shows the variation of PSNR and computation time with the band selection interval “d” for the Pavia University at a scaling factor of 2. It is noticed that, as “d” became larger, the computation time became shorter, but the PSNR of the reconstructed HSI became smaller, which validates our proposal’s tradeoff between high quality and high efficiency.
For the Pavia University HSI, there are 115 bands in the 430–860 nm range, of which 12 bands were neglected and the remaining 103 bands were used for the experiment. There are 31 bands in the spectral range of 400–700 nm for the HSIs from the CAVE database, and 210 spectral bands cover the 400–2400 nm range for the original Washington DC Mall HSI. In this way, from a horizontal perspective, under the same scaling factor, a larger interval was set for the spectrally more correlated Pavia University, and a smaller interval was set for the Washington DC Mall and the HSIs from the CAVE database. Detailed interval settings for the experimental HSIs are displayed in Table 3.

3.2.1. Pavia University

In Figure 7, we present a visual exhibition of the fourth band of the reconstructed Pavia University HSIs via various single based methods, with the scaling factor of 4.
As shown in Figure 7, the proposed method achieved a better spatial enhancement. Table 4 shows the averaged performance on the Pavia University HSI in comparison with the single-based methods. Compared with SRCNN, VDSR and LapSRN, the proposed method achieved a better spatial enhancement of nearly 0.3–1 dB at the scaling factor of 2. In addition, the proposal only required about half the time of the IDN to achieve a comparable or better performance.
The proposed method achieved the best performance when the scaling factors were 4 and 8. The reason is that, as the scaling factor grew, less spatial information was contained in the input HSI. The performance of methods that super-resolve the LR HSI band by band would be severely damaged. The proposed method fully exploits the spatial-spectral information, achieving more acceptable performance.
Figure 8a depicts the PSNRs of different bands in the 8× reconstructed Pavia University HSIs via different single based methods. It can be seen that the performance of the proposed method was superior to the other methods for each band.

3.2.2. Washington DC Mall

In Figure 9, we present a visual exhibition of the 90th band of the reconstructed Washington DC Mall via various single based methods, with the scaling factor of 4.
As shown in Figure 9, the proposed method achieved an HR HSI whose features are more distinct than those of the HSIs reconstructed by the other methods. Meanwhile, we randomly selected one point from the boundary of a region in the image and plotted its spectral curves (Figure 10). In Figure 10, “ref” represents the ground truth spectral curve. Because the spectral curve reconstructed by the IDN method seriously deviated from the reference one, we eliminated it from Figure 10 for a better comparison among the other methods. The proposed method well preserved the spectral information. Contrary to the spectral curves reconstructed by the other methods, the locations of almost all the reflectance peaks and troughs in the spectral curve generated by the proposal are consistent with those of the reference curve, which ensures the correctness of future applications.
Figure 8b depicts the PSNRs of different bands in the 8× reconstructed Washington DC Mall HSIs via different single-based methods. It can also be seen that the proposed method was superior to the other methods for every band.
Table 5 shows the averaged performance on the reconstructed Washington DC Mall in comparison with the other single-based methods. Compared with SRCNN, VDSR and LapSRN, the proposal achieved a better spatial enhancement of nearly 0–0.5 dB at the scaling factor of 2.
At the scaling factors of 4 and 8, the proposal also achieved the best performance. Because the band-selection interval for Washington DC Mall is smaller than that for Pavia University, the computational advantage over the IDN was smaller than for Pavia University, but the running time was still reduced by about 40% at the scaling factor of 8.
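The computational saving stems from sending only every d-th band through the IDN and recovering the rest via spectral correlation. The sketch below illustrates one plausible selection rule; the helper name, the 0-based indexing, and forcing the last band into the selection are our assumptions for illustration, not the paper's exact implementation.

```python
def selected_bands(num_bands, d):
    """Indices of bands super-resolved by the IDN when sampling every
    d-th band; the last band is forced in so that spectral interpolation
    of the unselected bands never has to extrapolate (assumed rule)."""
    idx = list(range(0, num_bands, d))
    if idx[-1] != num_bands - 1:
        idx.append(num_bands - 1)
    return idx

# A Washington DC Mall-style cube with 191 bands and interval d = 3:
sel = selected_bands(191, 3)
print(len(sel), len(sel) / 191)  # roughly a third of the bands hit the IDN
```

With a larger interval (e.g. d = 9, as used for Pavia University at 4× and 8×), the fraction of bands passed through the network drops further, which matches the larger time savings reported for that dataset.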

3.2.3. Salinas

In Figure 11, a visual exhibition of the 99th band of the reconstructed Salinas HSIs is presented, with the scaling factor set to 4.
As the land cover in Salinas is quite smooth, no large visual difference in the main body can be noticed between the methods. However, at the junctions of different land covers, the geometric shapes reconstructed by the proposal are much more distinct than those of the other methods. Figure 12a depicts the PSNRs of each band in the 4× reconstructed Salinas via different methods. According to the data in Figure 12a, the proposal achieved better spatial enhancement on most bands when super-resolving the 4× down-sampled Salinas HSI.
Table 6 shows the averaged performance of the Salinas HSI at different scaling factors.
Compared with the SRCNN, VDSR and LapSRN methods, the proposal achieved a better spatial quality of nearly 0.06–0.74 dB at the scaling factor of 2. At the scaling factors of 4 and 8, the proposal achieved the best performance in the least time. This demonstrates that the proposal also achieves acceptable performance on remote-sensing HSIs of land-cover typology, in addition to building typology.

3.2.4. Botswana

Figure 12b depicts the PSNRs of each band in the 8× reconstructed Botswana via different methods. The proposal achieved the best PSNRs for most bands. Table 7 presents the performance comparison with the alternative methods on the spaceborne Botswana. According to the data in Table 7, the IDN’s performance superiority over the proposal gradually diminished, and at the scaling factor of 8 the proposal outperformed the IDN in less time. Combining Figure 12b with the data in Table 7, it is convincing that the proposal achieved high performance with high efficiency. In addition, the data in Table 7 show that the computational cost of VDSR and SRCNN remained stable across the scaling factors of 2, 4 and 8, whereas for the IDN and the proposal the cost decreased with the size of the input image, which is more practical. The experimental data of both Salinas and Botswana demonstrate the proposal’s effectiveness on HSIs of land-cover typology.

3.2.5. Scene02

Although the spectral resolution of Scene02 is finer than 10 nm per band, we still set a small and conservative band interval for this HSI to obtain a better spatial enhancement. Figure 13 presents a visual exhibition of the 198th band of the 8× reconstructed Scene02. We magnified a square region containing both the letter and the grid to compare the HSIs reconstructed via different methods. Both the contour of the letter and the boundary of the grid in the HSI reconstructed by the proposal are sharper than those of the other methods. Figure 14 depicts the PSNRs of each band in the 8× reconstructed Scene02. The proposal achieved the best performance for most of the single bands, especially for the bands located at the middle wavelengths.
Table 8 shows the averaged performance on the reconstructed Scene02 via different methods. Limited by memory, the running time on this HSI was much longer than on the others, and the proposal achieved a significant computational reduction. This demonstrates that, as the image size grows, the proposal gains a notable computational advantage over the IDN.

3.2.6. CAVE Database

For the ground-based remote sensing HSIs in the CAVE database, their spatial quality is much finer than that of the other HSIs. In this case, according to the data in Table 9, the performance of super-resolving the HSI band by band via the IDN method was slightly better than that of the proposed method.
However, considering that the input HSIs were generated by down-sampling the original HSIs via bicubic interpolation, the spatial quality of the input HSIs decreased as the scaling factor grew. Hence, it is rational that the IDN’s superiority to the proposed method gradually diminished with the scaling factor. Figure 15 plots the IDN’s superiority to the proposal on fake_and_real_food; as the scaling factor grew, the gap gradually diminished and approached 0, validating once again the proposal’s efficiency on remote-sensing HSIs with poor spatial information. According to Kwan et al. [34], the resolutions of most HSIs are limited to tens of meters, for which the proposed method would achieve a more appealing performance.
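The LR inputs in these experiments are simulated by spatially shrinking each band while leaving the band count untouched. The sketch below uses simple block averaging as a stand-in for the bicubic kernel; the helper name and toy sizes are ours, for illustration only.

```python
import numpy as np

def downsample_hsi(cube, scale):
    """Shrink each band of an HSI cube (rows, cols, bands) spatially by
    `scale` via block averaging -- a simple stand-in for the bicubic
    down-sampling used to simulate LR inputs; bands are left untouched."""
    r, c, b = cube.shape
    assert r % scale == 0 and c % scale == 0, "spatial size must divide evenly"
    return cube.reshape(r // scale, scale, c // scale, scale, b).mean(axis=(1, 3))

rng = np.random.default_rng(1)
hr = rng.random((64, 64, 16))   # toy HR cube: 64x64 pixels, 16 bands
lr = downsample_hsi(hr, 4)
print(lr.shape)                 # (16, 16, 16): spatial size shrinks 4x
```

At a scaling factor of 8, a 64 × 64 band would shrink to 8 × 8 pixels, which makes concrete how little spatial information survives for band-by-band methods to work with.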

4. Conclusions

In this paper, an HSI super-resolution method is proposed that deeply exploits the spatial-spectral information to reconstruct a high-resolution HSI from a low-resolution one. The method first applies different strategies to super-resolve the selected bands and the unselected ones. Compared with most existing single-based super-resolution methods, this scalable super-resolution operation greatly reduces the computational complexity with little performance degradation. Moreover, an intra-fusion step further improves the performance, which is more practical because it does not require any auxiliary images. Through the proposed method, spatial-spectral information is fully exploited and spatial details are well recovered. Experimental results and data analyses on both ground-based and remote-sensing HSIs demonstrate that the proposed method is especially suitable for remote-sensing HSIs with poor spatial information.

Author Contributions

All authors made significant contributions to the manuscript. J.H. designed the research framework, analyzed the results and wrote the manuscript. M.Z. assisted in the preparation work and formal analysis. Y.L. supervised the framework design. All authors contributed to the editing and review of the manuscript.

Funding

This work was supported by the Natural Science Key Research Plan in Shaanxi Province of China (Program No.2016JZ001), PhD research startup foundation of Xi’an University of Technology (Program No.112/256081809), and Science and Technology Project of Xi’an City (No. 2017080CG/RC043(XALG011)).

Acknowledgments

The authors would like to take this opportunity to thank the editors and the reviewers for their detailed comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, F.; Du, B.; Zhang, L. Scene Classification via a Gradient Boosting Random Convolutional Network Framework. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1793–1802.
2. Sun, W.; Du, Q. Graph-Regularized Fast and Robust Principal Component Analysis for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3185–3195.
3. Du, B.; Zhang, L. A Discriminative Metric Learning Based Anomaly Detection Method. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6844–6857.
4. Andrejchenko, V.; Liao, W.; Philips, W.; Scheunders, P. Decision Fusion Framework for Hyperspectral Image Classification Based on Markov and Conditional Random Fields. Remote Sens. 2019, 11, 624.
5. Sun, W.; Tian, L.; Xu, Y.; Du, B.; Du, Q. A Randomized Subspace Learning Based Anomaly Detector for Hyperspectral Imagery. Remote Sens. 2018, 10, 417.
6. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral Image Super-resolution via Non-negative Structured Sparse Representation. IEEE Trans. Image Process. 2016, 25, 2337–2351.
7. Fang, L.; Zhuo, H.; Li, S. Super-Resolution of Hyperspectral Image via Superpixel-Based Sparse Representation. Neurocomputing 2018, 273, 171–177.
8. Dian, R.; Fang, L.; Li, S. Hyperspectral Image Super-Resolution via Non-local Sparse Tensor Factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
9. Kanatsoulis, C.I.; Fu, X.; Sidiropoulos, N.D.; Ma, W.K. Hyperspectral Super-Resolution: A Coupled Tensor Factorization Approach. IEEE Trans. Signal Process. 2018, 66, 6503–6517.
10. Zhang, L.; Wei, W.; Bai, C.; Gao, Y.; Zhang, Y. Exploiting Clustering Manifold Structure for Hyperspectral Imagery Super-Resolution. IEEE Trans. Image Process. 2018, 27, 5969–5982.
11. Han, X.H.; Shi, B.; Zheng, Y. Self-Similarity Constrained Sparse Representation for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2018, 27, 5625–5637.
12. Atkinson, P.M. Mapping sub-pixel vector boundaries from remotely sensed images. In Proceedings of the GISRUK ’96, Canterbury, UK, 10–12 April 1996; pp. 29–41.
13. Irmak, H.; Akar, G.B.; Yuksel, S.E. A MAP-Based Approach for Hyperspectral Imagery Super-resolution. IEEE Trans. Image Process. 2018, 27, 2942–2951.
14. Arun, P.V.; Buddhiraju, K.M.; Porwal, A. CNN based sub-pixel mapping for hyperspectral images. Neurocomputing 2018, 311, 51–64.
15. Xu, X.; Tong, X.; Plaza, A.; Li, J.; Zhong, Y.; Xie, H.; Zhang, L. A New Spectral-Spatial Sub-Pixel Mapping Model for Remotely Sensed Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6763–6778.
16. Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral Image Superresolution by Transfer Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1963–1974.
17. He, Z.; Liu, L. Hyperspectral Image Super-Resolution Inspired by Deep Laplacian Pyramid Network. Remote Sens. 2018, 10, 1939.
18. Mei, S.; Yuan, X.; Ji, J.; Zhang, Y.; Wan, S.; Du, Q. Hyperspectral Image Spatial Super-Resolution via 3D Full Convolutional Neural Network. Remote Sens. 2017, 9, 1139.
19. Zheng, H.; Wang, X.; Gao, X. Fast and Accurate Single Image Super-Resolution via Information Distillation Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Beijing, China, 20 August 2018; pp. 723–731.
20. Yang, J.; Wright, J.; Huang, T.S.; Ma, Y. Image Super-Resolution Via Sparse Representation. IEEE Trans. Image Process. 2010, 19, 2861–2873.
21. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In Proceedings of the IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 416–423.
22. Li, Y.; Xie, W.; Li, H. Hyperspectral image reconstruction by deep convolutional neural network for classification. Pattern Recognit. 2017, 63, 371–383.
23. Xu, Y.; Bo, D.; Fan, Z.; Zhang, L. Hyperspectral image classification via a random patches network. ISPRS J. Photogramm. Remote Sens. 2018, 142, 344–357.
24. Marrie, R.A.; Dawson, N.V.; Garland, A. Quantile regression and restricted cubic splines are useful for exploring relationships between continuous variables. J. Clin. Epidemiol. 2009, 62, 511–517.
25. Chang, C.I.; Du, Q. Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2004, 42, 608–619.
26. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled Nonnegative Matrix Factorization Unmixing for Hyperspectral and Multispectral Data Fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537.
27. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image Super-Resolution Using Deep Convolutional Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
28. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
29. Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5835–5843.
30. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
31. Yokoya, N. Texture-guided multisensor superresolution for remotely sensed images. Remote Sens. 2017, 9, 316.
32. Yuhas, R.; Goetz, A.F.H.; Boardman, J.W. Descrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm. In Proceedings of the Summaries of the Third Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; pp. 147–149.
33. Loncan, L.; Almeida, L.B.; Bioucasdias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
34. Kwan, C.; Choi, J.H.; Chan, S.H.; Zhou, J.; Budavari, B. A Super-Resolution and Fusion Approach to Enhancing Hyperspectral Images. Remote Sens. 2018, 10, 1416.
Figure 1. Workflow of the proposed method.
Figure 2. Correlation between neighboring bands in the Pavia University scene.
Figure 3. General architecture of the deep IDN.
Figure 4. Visual exhibition of the HSIs used for validating the performance, in which all the gray images are generated from the 15th band of the corresponding HSIs.
Figure 5. (a) CCs of two-times down-sampled Pavia with different intervals; and (b) CCs of four-times down-sampled Pavia minus those of two-times down-sampled Pavia with different intervals.
Figure 6. Variation of PSNR and computation time with “d” for the Pavia University at the scaling factor of 2.
Figure 7. Visual exhibition of the fourth band of the Pavia University HSIs reconstructed by different single-based methods at the scaling factor of 4.
Figure 8. PSNRs of different bands in: (a) the 8× reconstructed Pavia University HSIs; and (b) the 8× reconstructed Washington DC Mall HSIs via different single-based methods.
Figure 9. Visual exhibition of the 90th band of the Washington DC Mall HSIs reconstructed by different single-based methods at the scaling factor of 4.
Figure 10. Spectral curves of the randomly selected point in the 8× reconstructed HR HSIs, without the IDN method.
Figure 11. Visual exhibition of the 99th band of the Salinas HSIs reconstructed by different single-based methods at the scaling factor of 4.
Figure 12. PSNRs of different bands in: (a) the 4× reconstructed Salinas HSIs; and (b) the 8× reconstructed Botswana via different single-based methods.
Figure 13. Visual exhibition of the 198th band of the Scene02 HSIs reconstructed by different single-based methods at the scaling factor of 8.
Figure 14. PSNRs of different bands in the 8× reconstructed Scene02 HSIs via different single-based methods.
Figure 15. The performance gap between the IDN and the proposal on the “fake_and_real_food” HSI at scaling factors of 2, 4 and 8.
Table 1. Performance gap between H_IDN and H̄ with different intervals.

Interval d | Scaling Factor | CC | SAM | RMSE | ERGAS | PSNR | SSIM
d = 2 | 2× | −0.005% | 0.507% | 0.100% | 0.052% | −0.026% | −0.037%
d = 2 | 4× | 0.003% | −0.017% | −0.003% | −0.048% | 0.001% | −0.017%
d = 2 | 8× | 0.009% | −0.090% | −0.011% | −0.053% | 0.004% | −0.010%
d = 3 | 2× | −0.014% | 1.239% | 0.223% | 0.283% | −0.058% | −0.071%
d = 3 | 4× | −0.005% | 0.195% | 0.018% | 0.075% | −0.006% | −0.058%
d = 3 | 8× | −0.005% | −0.060% | −0.006% | 0.054% | 0.002% | −0.056%
d = 4 | 2× | −0.026% | 2.461% | 0.549% | 0.227% | −0.143% | −0.110%
d = 4 | 4× | −0.008% | 0.599% | 0.083% | −0.228% | −0.027% | −0.073%
d = 4 | 8× | 0.004% | 0.160% | 0.030% | −0.260% | −0.011% | −0.087%
d = 9 | 2× | −0.134% | 14.244% | 3.408% | 3.390% | −0.879% | −0.648%
d = 9 | 4× | −0.080% | 4.823% | 0.611% | 0.192% | −0.197% | −0.559%
d = 9 | 8× | −0.047% | 1.254% | 0.155% | −0.283% | −0.057% | −0.499%
Table 2. Performance variation at the same intervals but with different scaling factors.

Interval d | Scaling Factor | CC | SAM | RMSE | ERGAS | PSNR | SSIM
d = 2 | 4× − 2× | 0.008% | −0.525% | −0.104% | −0.100% | 0.027% | 0.020%
d = 2 | 8× − 4× | 0.006% | −0.072% | −0.007% | −0.006% | 0.003% | 0.007%
d = 3 | 4× − 2× | 0.009% | −1.043% | −0.204% | −0.209% | 0.052% | 0.013%
d = 3 | 8× − 4× | 0.000% | −0.255% | −0.024% | −0.020% | 0.008% | 0.002%
d = 4 | 4× − 2× | 0.017% | −1.862% | −0.466% | −0.455% | 0.117% | 0.037%
d = 4 | 8× − 4× | 0.013% | −0.439% | −0.053% | −0.032% | 0.016% | −0.013%
d = 9 | 4× − 2× | 0.053% | −9.421% | −2.797% | −3.198% | 0.682% | 0.089%
d = 9 | 8× − 4× | 0.034% | −3.568% | −0.456% | −0.475% | 0.140% | 0.060%
Table 3. Interval d set for each HSI at different scaling factors.

Scaling Factor | Pavia University | CAVE | Washington DC Mall | Salinas | Botswana | Scene02
2× | 3 | 2 | 2 | 2 | 2 | 2
4× | 9 | 3 | 3 | 3 | 3 | 3
8× | 9 | 3 | 3 | 3 | 3 | 3
Table 4. Comparison with single-based methods on all the bands of the Pavia University.

Scaling Factor | Algorithm | CC | SAM | RMSE | ERGAS | PSNR | SSIM | Time (s)
2× | Bicubic | 0.9478 | 4.1099 | 0.0312 | 10.2627 | 30.1239 | 0.9024 | 0.0113 *
2× | SRCNN | 0.9670 | 3.8253 | 0.0246 | 8.0368 | 32.1771 | 0.9359 | 146.4056
2× | VDSR | 0.9715 | 3.6195 | 0.0229 | 7.4459 | 32.8153 | 0.9449 | 53.2367
2× | LapSRN | 0.9710 | 3.4385 | 0.0231 | 7.4874 | 32.7146 | 0.9457 | 81.1535
2× | IDN | 0.9737 | 3.4503 | 0.0219 | 7.1422 | 33.1886 | 0.9503 | 47.5715
2× | Proposal | 0.9731 | 3.5608 | 0.0221 | 7.2108 | 33.1264 | 0.9482 | 22.9895
4× | Bicubic | 0.8632 | 6.2994 | 0.0508 | 8.0125 | 25.8874 | 0.7182 | 0.0045 *
4× | SRCNN | 0.8802 | 6.1240 | 0.0473 | 7.5101 | 26.5115 | 0.7551 | 135.9020
4× | VDSR | 0.8814 | 6.3009 | 0.0470 | 7.4961 | 26.5508 | 0.7613 | 50.8616
4× | LapSRN | 0.8890 | 5.7116 | 0.0458 | 7.2154 | 26.7848 | 0.7776 | 24.0806
4× | IDN | 0.8906 | 5.7168 | 0.0452 | 7.1654 | 26.8934 | 0.7831 | 11.8475
4× | Proposal | 0.8907 | 5.6563 | 0.0452 | 7.1565 | 26.9029 | 0.7837 | 6.6167
8× | Bicubic | 0.7272 | 9.2785 | 0.0711 | 5.4737 | 22.9664 | 0.5081 | 0.0105 *
8× | SRCNN | 0.7292 | 9.2148 | 0.0707 | 5.4644 | 23.0061 | 0.5107 | 152.1849
8× | VDSR | 0.7232 | 9.3343 | 0.0713 | 5.4955 | 22.9329 | 0.5030 | 50.6967
8× | LapSRN | 0.7489 | 9.0115 | 0.0682 | 5.3306 | 23.3192 | 0.5289 | 19.3687
8× | IDN | 0.7709 | 8.6351 | 0.0652 | 5.0774 | 23.7182 | 0.5710 | 13.9152
8× | Proposal | 0.7739 | 8.2311 | 0.0646 | 5.0448 | 23.7915 | 0.5759 | 5.4009

The asterisk (*) denotes the shortest running time among all the methods. However, limited by its poor reconstruction performance, bicubic interpolation is excluded from the time comparison; among the remaining methods, the shortest time is emphasized in bold.
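For reference, the ERGAS column reported in the tables aggregates the per-band RMSE relative to each band's mean intensity. The sketch below follows one common convention; definitions differ on whether the 100 factor is multiplied or divided by the resolution ratio, so treat the exact scaling as an assumption.

```python
import numpy as np

def ergas(reference, reconstructed, scale):
    """ERGAS (relative dimensionless global error in synthesis) for two
    HSI cubes of shape (rows, cols, bands); lower is better, 0 for a
    perfect reconstruction. `scale` is the HR/LR resolution ratio."""
    rmse_b = np.sqrt(((reference - reconstructed) ** 2).mean(axis=(0, 1)))
    mean_b = reference.mean(axis=(0, 1))
    return (100.0 / scale) * np.sqrt(np.mean((rmse_b / mean_b) ** 2))

rng = np.random.default_rng(2)
ref = rng.random((16, 16, 4)) + 0.5   # keep band means away from zero
print(ergas(ref, ref, 4))             # perfect reconstruction gives 0
print(ergas(ref, ref + 0.01, 4))      # a small constant error gives a small value
```

Because the RMSE of each band is normalized by that band's mean, ERGAS stays comparable across datasets with very different intensity ranges, unlike raw RMSE.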
Table 5. Comparison with single-based methods on all the bands of the Washington DC Mall.

Scaling Factor | Algorithm | CC | SAM | RMSE | ERGAS | PSNR | SSIM | Time (s)
2× | Bicubic | 0.9146 | 3.9147 | 0.0067 | 132.1512 | 43.4877 | 0.9755 | 0.1141 *
2× | SRCNN | 0.9278 | 4.3988 | 0.0068 | 37.7941 | 43.3780 | 0.9741 | 1368.6865
2× | VDSR | 0.9248 | 4.5313 | 0.0064 | 60.8111 | 43.8360 | 0.9779 | 467.5891
2× | LapSRN | 0.9064 | 3.4886 | 0.0056 | 137.4491 | 45.0408 | 0.9828 | 187.9704
2× | IDN | 0.9472 | 3.4280 | 0.0054 | 95.1557 | 45.2791 | 0.9837 | 438.5749
2× | Proposal | 0.9190 | 3.4311 | 0.0056 | 79.2654 | 45.0717 | 0.9833 | 286.9364
4× | Bicubic | 0.8255 | 6.5297 | 0.0107 | 72.3078 | 39.4441 | 0.9374 | 0.0991 *
4× | SRCNN | 0.8186 | 7.1202 | 0.0108 | 239.5115 | 39.3414 | 0.9329 | 1268.5417
4× | VDSR | 0.8138 | 7.7250 | 0.0116 | 47.0651 | 38.7003 | 0.9316 | 468.6285
4× | LapSRN | 0.7216 | 7.2031 | 0.0106 | 2533.9007 | 39.4647 | 0.9289 | 225.3079
4× | IDN | 0.8514 | 6.4612 | 0.0101 | 46.0912 | 39.8993 | 0.9443 | 64.0762
4× | Proposal | 0.8310 | 6.2754 | 0.0101 | 60.8714 | 39.9268 | 0.9447 | 69.6417
8× | Bicubic | 0.7181 | 9.0515 | 0.0144 | 35.0310 | 36.8363 | 0.9011 | 0.0807 *
8× | SRCNN | 0.6821 | 9.8476 | 0.0149 | 681.0136 | 36.5157 | 0.8959 | 1365.1717
8× | VDSR | 0.7095 | 9.1466 | 0.0144 | 24.4473 | 36.8106 | 0.9004 | 469.7174
8× | LapSRN | 0.5078 | 11.9678 | 0.0152 | 7.9641 | 36.3660 | 0.8064 | 169.2427
8× | IDN | 0.7213 | 9.0498 | 0.0140 | 29.7419 | 37.0500 | 0.9054 | 137.0598
8× | Proposal | 0.7219 | 8.7892 | 0.0138 | 34.7061 | 37.1833 | 0.9069 | 85.9309
Table 6. Comparison with single-based methods on all the bands of the Salinas.

Scaling Factor | Algorithm | CC | SAM | RMSE | ERGAS | PSNR | SSIM | Time (s)
2× | Bicubic | 0.9833 | 0.8145 | 0.0073 | 2.8216 | 42.6932 | 0.9881 | 0.0875 *
2× | SRCNN | 0.9845 | 0.7838 | 0.0060 | 2.9319 | 44.3735 | 0.9910 | 478.2165
2× | VDSR | 0.9855 | 0.7296 | 0.0056 | 2.6816 | 45.0141 | 0.9923 | 285.4522
2× | LapSRN | 0.9851 | 0.7027 | 0.0056 | 2.7610 | 45.0584 | 0.9927 | 93.4594
2× | IDN | 0.9899 | 0.6862 | 0.0054 | 2.1774 | 45.2783 | 0.9930 | 153.7260
2× | Proposal | 0.9831 | 0.7937 | 0.0055 | 2.7174 | 45.1159 | 0.9927 | 142.5217
4× | Bicubic | 0.9618 | 1.3393 | 0.0126 | 2.1649 | 38.0198 | 0.9671 | 0.0656 *
4× | SRCNN | 0.9647 | 1.2653 | 0.0105 | 2.3292 | 39.5892 | 0.9733 | 474.2595
4× | VDSR | 0.9669 | 1.2002 | 0.0098 | 2.0414 | 40.2124 | 0.9773 | 288.5284
4× | LapSRN | 0.9595 | 1.1927 | 0.0095 | 4.7102 | 40.4442 | 0.9774 | 112.7037
4× | IDN | 0.9709 | 1.1292 | 0.0095 | 1.9040 | 40.4857 | 0.9796 | 40.9700
4× | Proposal | 0.9690 | 1.2063 | 0.0095 | 1.9272 | 40.4898 | 0.9799 | 28.2351
8× | Bicubic | 0.9325 | 1.9881 | 0.0177 | 1.4398 | 35.0562 | 0.9469 | 0.0624 *
8× | SRCNN | 0.9376 | 1.8607 | 0.0155 | 1.4626 | 36.2152 | 0.9509 | 493.3159
8× | VDSR | 0.9328 | 1.9848 | 0.0176 | 1.4311 | 35.0677 | 0.9470 | 278.9407
8× | LapSRN | 0.9164 | 1.8910 | 0.0146 | 2.2318 | 36.7126 | 0.9457 | 75.7862
8× | IDN | 0.9457 | 1.5874 | 0.0139 | 1.2972 | 37.1265 | 0.9600 | 47.2434
8× | Proposal | 0.9466 | 1.6081 | 0.0139 | 1.2685 | 37.1643 | 0.9608 | 23.1352
Table 7. Comparison with single-based methods on all the bands of the spaceborne Botswana.

Scaling Factor | Algorithm | CC | SAM | RMSE | ERGAS | PSNR | SSIM | Time (s)
2× | Bicubic | 0.9749 | 1.5638 | 0.0027 | 3.2346 | 51.4286 | 0.9936 | 0.1697 *
2× | SRCNN | 0.9726 | 1.7866 | 0.0027 | 4.7684 | 51.3234 | 0.9930 | 1841.7661
2× | VDSR | 0.9733 | 1.7494 | 0.0026 | 3.6777 | 51.5523 | 0.9938 | 665.7049
2× | LapSRN | 0.9613 | 1.5565 | 0.0024 | 4.8883 | 52.2804 | 0.9947 | 215.5832
2× | IDN | 0.9790 | 1.5057 | 0.0025 | 2.9631 | 52.1939 | 0.9946 | 382.3559
2× | Proposal | 0.9750 | 1.6567 | 0.0025 | 3.3550 | 52.1000 | 0.9945 | 194.3229
4× | Bicubic | 0.9360 | 2.2719 | 0.0041 | 2.5785 | 47.6567 | 0.9852 | 0.1545 *
4× | SRCNN | 0.9264 | 2.7466 | 0.0044 | 4.0511 | 47.1981 | 0.9830 | 1837.1380
4× | VDSR | 0.9333 | 2.4853 | 0.0043 | 2.6621 | 47.3913 | 0.9845 | 664.6281
4× | LapSRN | 0.8693 | 2.8898 | 0.0044 | 17.7117 | 47.0433 | 0.9784 | 260.7957
4× | IDN | 0.9359 | 2.3089 | 0.0041 | 2.5866 | 47.7809 | 0.9857 | 100.8897
4× | Proposal | 0.9348 | 2.3693 | 0.0041 | 2.6304 | 47.7727 | 0.9857 | 110.1546
8× | Bicubic | 0.8849 | 2.8682 | 0.0054 | 1.7057 | 45.3479 | 0.9776 | 0.1050 *
8× | SRCNN | 0.8768 | 3.3473 | 0.0056 | 1.8735 | 45.0566 | 0.9743 | 1937.2458
8× | VDSR | 0.8849 | 2.8933 | 0.0054 | 1.7005 | 45.3436 | 0.9774 | 649.2971
8× | LapSRN | 0.7188 | 5.6478 | 0.0068 | 4.4614 | 43.3178 | 0.9264 | 181.4325
8× | IDN | 0.8849 | 2.8926 | 0.0053 | 1.7069 | 45.4540 | 0.9780 | 111.4147
8× | Proposal | 0.8842 | 2.9210 | 0.0053 | 1.7239 | 45.4730 | 0.9780 | 95.1863
Table 8. Comparison with single-based methods on all the bands of the ground-based Scene02.

Scaling Factor | Algorithm | CC | SAM | RMSE | ERGAS | PSNR | SSIM | Time (s)
2× | Bicubic | 0.9893 | 1.3497 | 0.0038 | 2.5991 | 48.3610 | 0.9893 | 1.7652 *
2× | SRCNN | 0.9883 | 1.3526 | 0.0034 | 2.5906 | 49.3194 | 0.9902 | 22,196.9520
2× | VDSR | 0.9894 | 1.8098 | 0.0033 | 2.4903 | 49.6012 | 0.9906 | 7087.1100
2× | LapSRN | 0.9894 | 1.2918 | 0.0033 | 2.5009 | 49.6579 | 0.9907 | 2453.7132
2× | IDN | 0.9891 | 1.5624 | 0.0032 | 3.3076 | 49.8186 | 0.9872 | 31,600.1253
2× | Proposal | 0.9864 | 1.6264 | 0.0041 | 3.3196 | 47.7878 | 0.9871 | 16,385.2456
4× | Bicubic | 0.9882 | 1.6586 | 0.0085 | 1.6540 | 41.4224 | 0.9758 | 2.3729 *
4× | SRCNN | 0.9884 | 1.6882 | 0.0066 | 1.5514 | 43.6492 | 0.9797 | 19,569.4504
4× | VDSR | 0.9887 | 1.6641 | 0.0070 | 1.5190 | 43.0552 | 0.9801 | 7088.5478
4× | LapSRN | 0.9879 | 1.6623 | 0.0065 | 1.6303 | 43.7951 | 0.9806 | 2967.9555
4× | IDN | 0.9890 | 1.6544 | 0.0067 | 1.4828 | 43.5267 | 0.9810 | 1043.8702
4× | Proposal | 0.9853 | 1.8455 | 0.0069 | 1.7098 | 43.2549 | 0.9800 | 837.5474
8× | Bicubic | 0.9813 | 1.9210 | 0.0170 | 1.2834 | 35.4108 | 0.9522 | 1.3983 *
8× | SRCNN | 0.9840 | 1.9561 | 0.0123 | 2.1736 | 38.1769 | 0.9631 | 20,230.4446
8× | VDSR | 0.9823 | 1.9240 | 0.0159 | 2.4345 | 35.9543 | 0.9543 | 7078.2086
8× | LapSRN | 0.9827 | 1.9827 | 0.0110 | 2.2127 | 39.2082 | 0.9661 | 1995.0681
8× | IDN | 0.9855 | 1.9431 | 0.0112 | 1.9610 | 39.0254 | 0.9682 | 1263.6725
8× | Proposal | 0.9825 | 2.0155 | 0.0108 | 2.0875 | 39.3101 | 0.9687 | 794.5077
Table 9. Comparison with single-based methods on all the bands of the two HSIs from the CAVE database.

HSI | Scaling Factor | Algorithm | CC | SAM | RMSE | ERGAS | PSNR | SSIM | Time (s)
flowers | 2× | Bicubic | 0.9984 | 2.2374 | 0.0077 | 4.7471 | 42.2459 | 0.9903 | 0.0336 *
flowers | 2× | SRCNN | 0.9989 | 3.4207 | 0.0063 | 4.1371 | 44.0429 | 0.9909 | 291.4331
flowers | 2× | VDSR | 0.9947 | 3.3941 | 0.0147 | 8.8671 | 36.6603 | 0.9667 | 104.7910
flowers | 2× | LapSRN | 0.9994 | 2.5744 | 0.0048 | 3.0139 | 46.4618 | 0.9951 | 41.9354
flowers | 2× | IDN | 0.9994 | 1.8481 | 0.0045 | 2.8960 | 46.8784 | 0.9957 | 99.0016
flowers | 2× | Proposal | 0.9992 | 2.5642 | 0.0051 | 3.4598 | 45.9336 | 0.9939 | 66.4343
flowers | 4× | Bicubic | 0.9927 | 3.7917 | 0.0172 | 5.1599 | 35.2872 | 0.9563 | 0.0304 *
flowers | 4× | SRCNN | 0.9939 | 5.1345 | 0.0152 | 4.7506 | 36.3380 | 0.9598 | 294.0630
flowers | 4× | VDSR | 0.9923 | 4.1526 | 0.0170 | 5.3323 | 35.3768 | 0.9507 | 105.5385
flowers | 4× | LapSRN | 0.9955 | 6.5766 | 0.0133 | 4.1492 | 37.5383 | 0.9631 | 49.7823
flowers | 4× | IDN | 0.9959 | 2.9697 | 0.0128 | 3.8981 | 37.8402 | 0.9733 | 24.3753
flowers | 4× | Proposal | 0.9957 | 4.3377 | 0.0129 | 3.9524 | 37.7836 | 0.9711 | 21.0441
flowers | 8× | Bicubic | 0.9765 | 6.3186 | 0.0308 | 4.5832 | 30.2423 | 0.8855 | 0.0282 *
flowers | 8× | SRCNN | 0.9766 | 7.6651 | 0.0300 | 4.6077 | 30.4565 | 0.8838 | 304.0369
flowers | 8× | VDSR | 0.9765 | 6.2937 | 0.0305 | 4.5635 | 30.3116 | 0.8876 | 104.5376
flowers | 8× | LapSRN | 0.9833 | 12.2898 | 0.0259 | 3.7601 | 31.7479 | 0.8482 | 37.3913
flowers | 8× | IDN | 0.9851 | 4.6558 | 0.0247 | 3.6355 | 32.1417 | 0.9249 | 28.4277
flowers | 8× | Proposal | 0.9849 | 6.6257 | 0.0248 | 3.6574 | 32.1266 | 0.9221 | 20.1004
fake_and_real_food | 2× | Bicubic | 0.9974 | 2.2996 | 0.0076 | 7.4718 | 42.4225 | 0.9905 | 0.0334 *
fake_and_real_food | 2× | SRCNN | 0.9979 | 2.7596 | 0.0068 | 6.6207 | 43.3160 | 0.9907 | 277.1336
fake_and_real_food | 2× | VDSR | 0.9982 | 2.5201 | 0.0063 | 6.1294 | 44.0514 | 0.9919 | 105.6183
fake_and_real_food | 2× | LapSRN | 0.9988 | 2.1301 | 0.0052 | 5.0164 | 45.7448 | 0.9944 | 41.4651
fake_and_real_food | 2× | IDN | 0.9989 | 1.8804 | 0.0050 | 4.7845 | 46.0583 | 0.9950 | 98.3872
fake_and_real_food | 2× | Proposal | 0.9986 | 2.6043 | 0.0056 | 5.3368 | 45.1141 | 0.9927 | 68.0450
fake_and_real_food | 4× | Bicubic | 0.9898 | 3.5765 | 0.0153 | 7.3000 | 36.3087 | 0.9641 | 0.0230 *
fake_and_real_food | 4× | SRCNN | 0.9931 | 4.0173 | 0.0127 | 5.8726 | 37.8964 | 0.9701 | 296.6172
fake_and_real_food | 4× | VDSR | 0.9915 | 4.0963 | 0.0140 | 6.5949 | 37.0649 | 0.9666 | 104.8076
fake_and_real_food | 4× | LapSRN | 0.9953 | 3.9114 | 0.0105 | 5.0602 | 39.5662 | 0.9761 | 49.7072
fake_and_real_food | 4× | IDN | 0.9954 | 2.8206 | 0.0104 | 4.8477 | 39.6913 | 0.9816 | 24.5117
fake_and_real_food | 4× | Proposal | 0.9951 | 3.8955 | 0.0106 | 4.9266 | 39.4917 | 0.9781 | 22.3028
fake_and_real_food | 8× | Bicubic | 0.9704 | 5.4984 | 0.0270 | 5.9880 | 31.3585 | 0.9111 | 0.0305 *
fake_and_real_food | 8× | SRCNN | 0.9746 | 6.5774 | 0.0254 | 5.4663 | 31.8919 | 0.9150 | 298.1715
fake_and_real_food | 8× | VDSR | 0.9717 | 5.5900 | 0.0263 | 5.8597 | 31.6012 | 0.9130 | 104.9534
fake_and_real_food | 8× | LapSRN | 0.9843 | 8.2559 | 0.0200 | 4.1520 | 33.9588 | 0.9114 | 37.2533
fake_and_real_food | 8× | IDN | 0.9864 | 4.0106 | 0.0184 | 4.0935 | 34.6995 | 0.9534 | 28.1883
fake_and_real_food | 8× | Proposal | 0.9860 | 4.7878 | 0.0185 | 4.1123 | 34.6518 | 0.9498 | 20.3372

Share and Cite

MDPI and ACS Style

Hu, J.; Zhao, M.; Li, Y. Hyperspectral Image Super-Resolution by Deep Spatial-Spectral Exploitation. Remote Sens. 2019, 11, 1229. https://doi.org/10.3390/rs11101229
