Article

Joint Local Abundance Sparse Unmixing for Hyperspectral Images

by
Mia Rizkinia
1,2,*,† and
Masahiro Okuda
1,†
1
Faculty of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan
2
Faculty of Engineering, Universitas Indonesia, Depok, Jawa Barat 16424, Indonesia
*
Author to whom correspondence should be addressed.
This paper is partially based on the authors’ conference paper, which was presented at the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016.
Remote Sens. 2017, 9(12), 1224; https://doi.org/10.3390/rs9121224
Submission received: 20 October 2017 / Revised: 14 November 2017 / Accepted: 22 November 2017 / Published: 27 November 2017
(This article belongs to the Special Issue Hyperspectral Imaging and Applications)

Abstract
Sparse unmixing is widely used for hyperspectral imagery to estimate the optimal fractions (abundances) of the materials (endmembers) contained in the mixed pixels of a hyperspectral scene by exploiting abundance sparsity. The abundance has a unique property, i.e., high spatial correlation in local regions, because the endmembers existing in such regions are highly correlated. This implies the low-rankness of the abundance in terms of the endmembers. From this prior knowledge, incorporating the low-rank local abundance into the sparse unmixing problem is expected to improve estimation performance. In this study, we propose an algorithm that exploits the low-rank local abundance by applying the nuclear norm to the abundance matrix for local regions of the spatial and abundance domains. In our optimization problem, the local abundance regularizer is combined with the $L_{2,1}$ norm and the total variation, which account for sparsity and spatial information, respectively. We conducted experiments on real and simulated hyperspectral data sets, both with and without the presence of pure pixels. The experiments showed that our algorithm yields competitive results and performs better than the conventional algorithms.

1. Introduction

The need to extract more detailed information from remote-sensing imagery has driven an expansion from multispectral to hyperspectral images, which enable pixel-constituent-level analysis. Hyperspectral images have better spectral resolution than multispectral images due to their large number of narrow and contiguous spectral bands [1]. This detailed information comes with a trade-off: the sensors capture distinct materials on the Earth’s surface mixed in one pixel. Mixing is caused by one of the following factors [2,3,4]. The first is the low spatial resolution of the sensors, so that two or more separate materials occupy the same pixel. The other occurs when the sensors capture distinct substances that have merged into a homogeneous mixture on the Earth’s surface. This condition calls for a compelling solution, i.e., spectral unmixing.
The procedure of spectral unmixing works by decomposing the measured hyperspectral data into a collection of spectral signatures (spectral library) and a set of corresponding fractions (abundances) that represent the proportion of each spectral signature contained in the pixels [2,5,6,7]. The spectral signatures that exist in the mixed pixels are called endmembers. In general, endmembers correspond to familiar macroscopic objects in a scene, such as water, metal, and vegetation, as well as constituents of intimate mixtures in microscopic scale. Hyperspectral unmixing can be reconstructed from the linear mixture model (LMM) and nonlinear mixture model [2,8,9,10]. With the LMM, it is assumed that the spectra of each mixed pixel are linear combinations of the endmembers contained in the pixel. Despite the fact that it holds only for macroscopic mixture conditions [8,11], it is widely used due to its computational tractability and flexibility in various applications.
With the LMM, several unmixing techniques have been introduced based on geometry [12,13], statistics [12,14], nonnegative matrix factorization (NMF) [4,15,16,17], or sparse regression [12,18,19,20,21]. Although the geometric and statistical techniques are unsupervised and require only a little prior information about the data, they assume that at least one pure pixel (a pixel containing only one endmember) exists for each endmember [22]. The NMF techniques do not require this assumption; however, they can produce virtual endmembers with no physical meaning [22,23]. On the other hand, in sparse regression techniques, additional information is introduced as prior knowledge through terms, called regularizers, added to the objective functions of the optimization problems, e.g., the abundance sparsity [24,25,26], information on endmembers known to exist in the data [22], or total local spatial differences [27]. An abundance sparsity regularizer algorithm, called sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL), was introduced by Iordache et al. [26]. They applied the $L_1$ norm (the sum of the absolute values of the matrix entries) to the abundance matrix, substituting for the $L_0$ norm (the number of nonzero elements of the matrix), to impose sparsity. With the algorithm known as collaborative SUnSAL (CLSUnSAL), it is assumed that all pixels of a hyperspectral scene share the same active set of endmembers [28]. This assumption does not hold when an endmember is contained in only some pixels rather than all pixels in the scene, for example, when the hyperspectral scene captures a location that contains locally homogeneous regions. Zhang et al. [29] proposed a local variant of CLSUnSAL based on the fact that endmembers tend to be distributed uniformly in local spatial regions. Qu et al. [30] adopted joint sparsity combined with a low-rank model under the bilinear mixture model (BMM). The low-rank term corresponds to a low number of linearly independent columns of a matrix. They applied a local sliding window to the abundance matrix, as neighboring pixels tend to be homogeneous and composed of the same materials.
Iordache et al. [27] proposed a spatial regularizer algorithm called sparse unmixing with the total variation regularizer (SUnSAL-TV), an unmixing technique that is more powerful than the conventional ones. Nevertheless, this semi-supervised algorithm may produce over-smoothed results and blurred edges. Spatial information has also been imposed on the sparse unmixing task in a nonlocal procedure [11]. Tang et al. [22] introduced an algorithm called sparse unmixing using a priori information (SUnSPI). The required prior knowledge is that some spectral signatures (endmembers) in the hyperspectral scene are known in advance. Although its performance is superior to that of conventional unmixing algorithms, it is difficult to guarantee that this assumption always holds. Field investigation or prior hyperspectral-data analysis may be needed to provide such information.
In a region with high spatial similarity, e.g., a local spatial region, the correlation among the pixels’ spectral signatures is reflected as linear dependence among their corresponding abundance vectors. The abundance matrix composed of these vectors should be low rank. This low-rankness has recently been applied to hyperspectral image denoising and recovery tasks [31,32,33], resulting in superior performance. Furthermore, the low-rankness of the data also indicates high correlation among the abundance vectors corresponding to the pixels in such regions [30]. Giampouras et al. [34] proposed the ADSpLRU algorithm, which exploits the low-rankness of the abundance in the sparse unmixing problem to consider the spatial correlation of the abundance. However, they considered the low-rankness in a nonlocal fashion in the abundance dimension. To exploit the local low-rankness of an image, Ono et al. [35] proposed the local color nuclear norm (LCNN). However, they locally applied the nuclear norm (the sum of the matrix singular values) only to the spatial dimension of RGB images. Yang et al. [36] also imposed a low-rank constraint for coupled sparse denoising and unmixing problems. However, their use of the nuclear norm is not local, and the superior performance is more dominant in the denoising task than in the unmixing one. To the best of our knowledge, no sparse unmixing algorithm takes into account the low-rankness of local spectral signatures (endmembers) in the abundance dimension, even though high correlation between spectral signatures can be guaranteed by the spectral angle (SA), a spectral similarity assessment defined as the angle between two spectral vectors. In turn, one can observe the linearity of the data distribution in local regions in the spatial as well as the abundance dimension. This prior may lead to a novel approach for the sparse unmixing algorithm.
In this study, we developed an algorithm called joint local abundance sparse unmixing (J-LASU), in which we propose a local abundance (LA) regularizer, incorporate it into the sparse unmixing problem by applying the nuclear norm to 3D local regions, and evaluate its effect. We slide a 3D local block through the three dimensions of the abundance maps and impose the nuclear norm to promote the low-rank structure of the local abundance cube. We retain the total variation (TV) regularizer for spatial consideration. The proposed algorithm was tested on simulated as well as real hyperspectral data and compared with other sparse unmixing algorithms, i.e., CLSUnSAL, SUnSAL-TV, and ADSpLRU. The major contribution of this study is imposing our local abundance regularizer on a hybrid of state-of-the-art unmixing techniques that take into account collaborative sparsity and spatial differences. We also applied the proposed J-LASU to several scenes with and without pure pixels.
In Section 2, we discuss the problem formulation of hyperspectral unmixing as an introduction to the problem formulation of our proposed algorithm. In Section 3, we describe the proposed J-LASU algorithm starting with convincing evidence of the proposed concept. In Section 4, we describe the experiment and analysis. In Section 5, we discuss the results and findings. Finally, we conclude the paper in Section 6.
Variables and notation: Column vectors are represented as boldface lowercase letters, e.g., $\mathbf{y}$, whereas matrices are represented as boldface uppercase letters, e.g., $\mathbf{Y}$. The following variables are frequently used in this paper:
  • $\mathbf{Y}$ is the hyperspectral data,
  • $\mathbf{A}$ is the spectral library,
  • $\mathbf{X}$ is the abundance matrix,
  • $\hat{\mathbf{X}}$ is the 3D abundance data,
  • m is the number of spectral signatures,
  • l is the number of spectral bands,
  • n is the number of pixels in $\hat{\mathbf{X}}$,
  • $n_c$ is the number of columns in $\hat{\mathbf{X}}$,
  • $n_r$ is the number of rows in $\hat{\mathbf{X}}$,
  • B is the number of all local blocks in $\hat{\mathbf{X}}$,
  • N is the number of pixels in each local abundance matrix,
  • $\hat{\mathbf{X}}_b$ is the b-th local block,
  • $\mathbf{H}_{\hat{\mathbf{x}}_b}$ is the b-th local abundance matrix.

2. Hyperspectral Unmixing

2.1. Sparse Unmixing

Let $\mathbf{Y} \in \mathbb{R}^{l \times n}$ be the observed hyperspectral data, where l is the number of bands, and n is the number of pixels. The LMM for a hyperspectral image is based on the assumption that each observed pixel spectrum $\mathbf{y} \in \mathbb{R}^{l}$ is a linear combination of the m spectral signatures in the spectral library $\mathbf{A} \in \mathbb{R}^{l \times m}$, that is,
$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e} \qquad (1)$$
where $\mathbf{x} \in \mathbb{R}^{m}$ is the abundance vector, and $\mathbf{e} \in \mathbb{R}^{l}$ is the vector of noise and model error.
With sparse unmixing, it is assumed that the abundance vector $\mathbf{x}$ is sparse because the number of endmembers contained in a pixel is much lower than the number of spectral signatures in the library, which implies that $\mathbf{x}$ contains many zero entries. Figure 1 illustrates the LMM and sparse unmixing. Considering the ground truth, a constraint needs to be imposed on the sparse unmixing model: the values of $\mathbf{x}$ can never be negative, which is called the abundance nonnegativity constraint (ANC). The sparse unmixing problem based on the LMM for each mixed pixel can be formulated as
$$\min_{\mathbf{x}} \|\mathbf{x}\|_0 \quad \mathrm{s.t.} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \le \delta, \;\; \mathbf{x} \ge \mathbf{0} \qquad (2)$$
where $\|\mathbf{x}\|_0$ denotes the number of nonzero elements in $\mathbf{x} \in \mathbb{R}^{m}$, and δ is the error tolerance determined by the noise and model error. The nonconvexity of the $L_0$ term makes the problem NP-hard; however, it has been proven that the nonconvex optimization problem can be relaxed to a convex one by replacing $L_0$ with $L_1$ [11,37]. Thus, the problem can be written as
$$\min_{\mathbf{x}} \|\mathbf{x}\|_1 \quad \mathrm{s.t.} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2 \le \delta, \;\; \mathbf{x} \ge \mathbf{0} \qquad (3)$$
Applying this formula to the whole image, we estimate the abundance matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$ for all the pixels in the hyperspectral data $\mathbf{Y}$ using the corresponding Lagrangian function
$$\min_{\mathbf{X}} \frac{1}{2}\|\mathbf{A}\mathbf{X} - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{X}\|_1 \quad \mathrm{s.t.} \quad \mathbf{X} \ge \mathbf{0} \qquad (4)$$
where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and λ is the sparsity regularization parameter. This problem can be solved by using the alternating direction method of multipliers (ADMM).
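As a concrete illustration (not the authors' implementation), the LMM of Equation (1) and the sparse-unmixing cost of Equation (4) can be sketched in NumPy; the sizes and the random library used here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: l bands, m library signatures, n pixels.
l, m, n = 50, 20, 30
A = rng.random((l, m))                  # hypothetical spectral library
X_true = np.zeros((m, n))
active = rng.choice(m, size=3, replace=False)
X_true[active, :] = rng.random((3, n))  # only 3 active endmembers (sparse rows)
Y = A @ X_true                          # noiseless LMM: Y = AX

def objective(X, A, Y, lam):
    """Sparse-unmixing cost of Eq. (4): 0.5*||AX - Y||_F^2 + lam*||X||_1."""
    fidelity = 0.5 * np.linalg.norm(A @ X - Y, "fro") ** 2
    return fidelity + lam * np.abs(X).sum()
```

At the true abundances of a noiseless scene, the data-fidelity term vanishes and only the sparsity penalty remains, which is what ADMM trades off against reconstruction error.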

2.2. Spatial Regularization

Despite taking into account sparsity, SUnSAL ignores spatial correlation. In SUnSAL-TV, the relationship between each pixel vector and its adjacent pixel vectors is taken into account. The regularizer is defined in [27] as
$$\mathrm{TV}(\mathbf{X}) = \sum_{\{i,j\} \in \kappa} \|\mathbf{x}_i - \mathbf{x}_j\|_1 \qquad (5)$$
which is the anisotropic TV with κ denoting the set of horizontal and vertical neighbors in X .
Adding the TV regularizer to the problem in Equation (4) gives the optimization problem
$$\min_{\mathbf{X}} \frac{1}{2}\|\mathbf{A}\mathbf{X} - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{X}\|_1 + \lambda_{TV}\,\mathrm{TV}(\mathbf{X}) \quad \mathrm{s.t.} \quad \mathbf{X} \ge \mathbf{0}. \qquad (6)$$
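A minimal sketch of the anisotropic TV of Equation (5) with the cyclic boundary assumption, assuming each row of X is reshaped into an n_r × n_c abundance map (illustrative only, not the paper's code):

```python
import numpy as np

def anisotropic_tv(X, n_r, n_c):
    """TV(X) of Eq. (5): sum of absolute horizontal and vertical differences
    of each abundance map, with cyclic boundaries as in SUnSAL-TV."""
    maps = X.reshape(X.shape[0], n_r, n_c)  # one n_r x n_c map per row of X
    dh = maps - np.roll(maps, -1, axis=2)   # right-neighbor differences
    dv = maps - np.roll(maps, -1, axis=1)   # down-neighbor differences
    return np.abs(dh).sum() + np.abs(dv).sum()
```

A constant map has zero TV; a single isolated nonzero pixel contributes two horizontal and two vertical unit differences, illustrating why TV penalizes isolated outliers while tolerating piecewise-constant regions.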

3. Proposed Algorithm

3.1. Local Abundance Correlation

Hyperspectral data $\mathbf{Y} \in \mathbb{R}^{l \times n}$ exhibit linearity in their spectral [38] and spatial [30] domains. Qu et al. [30] provided the prior knowledge that the high spatial correlation of hyperspectral data implies linearly dependent abundance vectors in the abundance matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$. The high correlation also holds among the pixel members of a local region due to spatial similarity. In a physical sense, the pixels in such regions contain the same materials, either in the same or different fractions. Hence, the abundance matrix of the region can be estimated using the low-rank property [30,34].
However, the success of sparse regression techniques is affected by the sparsity level as well as the correlation between spectral signatures in the library [27]. The former is represented by the number of endmembers existing in the scene, namely, the degree of sparsity [26]. The latter can be measured by an indicator of the difficulty of accurately solving a linear system of equations, i.e., the mutual coherence, defined as the largest cosine between endmembers in the library. In the hyperspectral case, the degree of sparsity is often low, but the mutual coherence is close to one. In fact, higher mutual coherence decreases the quality of the solution [28].
To overcome the high mutual coherence as well as to exploit the low-rank property of the abundance, we exploit the high correlation of the library’s spectral signatures using our local abundance (LA) regularizer. In our experiment with simulated data, we confirmed the idea by observing the linearity of the data distribution in the abundance domain, taking the local maximum singular value of the true abundance matrix for each local block (a block refers to a 3D region whose third dimension has local coverage in the endmember direction). We found that, in each local block, one singular value dominates the others (the ratio is close to one). On the other hand, this value becomes less dominant as the region grows to the whole matrix (nonlocal). This implies that linearity in the abundance domain is satisfied for the abundance matrix from the local point of view. Thus, we introduce our LA regularizer using the nuclear norm for the local blocks. Instead of applying the nuclear norm to the image, our algorithm applies it to the abundance matrix that constitutes the image. Another difference is that our local block slides through all dimensions, i.e., the two spatial dimensions and the endmember direction in the abundance dimension. Figure 2 illustrates the endmember direction. The block moves within the abundance maps of the 3D abundance cube.
In addition, we guarantee high correlation by selecting endmembers from the United States Geological Survey (USGS) library to form the spectral library $\mathbf{A}$ based on the SA. The USGS library is a collection of measured spectral signatures of hundreds of materials and is used as a reference for material identification in hyperspectral images. Besides the mutual coherence, we can find the signatures most similar to each endmember of the simulated data by calculating the SA. This parameter represents the absolute value of spectral correlation [39] and ranges from 0 to 90 degrees. The lower the SA value, the more similar the compared signature vectors are. In the simulated-data experiment, we adjusted the SA as one of our parameter settings.
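The SA between two signatures is the angle between the two spectral vectors; a minimal sketch of this standard formula:

```python
import numpy as np

def spectral_angle_deg(u, v):
    """Spectral angle (in degrees) between two spectral signature vectors:
    arccos of the cosine similarity, ranging from 0 (identical direction)
    to 90 degrees (orthogonal)."""
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against round-off pushing the cosine slightly out of [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```

Note that the SA is scale-invariant: a signature and a brighter copy of it have an angle of zero, which is why it is a useful similarity measure under varying illumination.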

3.2. Collaborative Sparsity Regularization

In practice, the abundance matrix $\mathbf{X}$ has only a few endmembers (rows) with nonzero entries. Simultaneously, all the columns of $\mathbf{X}$ share the same active set of endmembers. In other words, $\mathbf{X}$ is sparse among the rows while dense among the columns. To impose this prior, the $L_{2,1}$ norm is used instead of $L_1$. It takes the sum of the $L_2$ norms of the rows of the abundance matrix to promote its collaborative sparsity:
$$\|\mathbf{X}\|_{2,1} = \sum_{i=1}^{m} \|\mathbf{x}^i\|_2 \qquad (7)$$
where $\mathbf{x}^i$ represents the i-th row of $\mathbf{X}$.
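The $L_{2,1}$ term of Equation (7) is a one-liner in NumPy (shown as an illustrative sketch):

```python
import numpy as np

def l21_norm(X):
    """||X||_{2,1} of Eq. (7): sum of the L2 norms of the rows of X.
    Zero rows contribute nothing, so the norm rewards row-sparsity
    (few active endmembers shared by all pixels)."""
    return np.linalg.norm(X, axis=1).sum()
```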

3.3. Local Abundance Regularizer

First, let $\hat{\mathbf{X}} \in \mathbb{R}^{n_r \times n_c \times m}$ be the abundance data in 3D form, where m is the number of abundance maps (one per endmember), and $n_c$ and $n_r$ are the numbers of columns and rows, respectively, satisfying $n = n_c \times n_r$, where n is the number of pixels in each abundance map. Then, for each abundance map $\hat{\mathbf{X}}_i \in \mathbb{R}^{n_r \times n_c}$ ($i = 1, \ldots, m$), stacking the columns on top of one another gives $\hat{\mathbf{x}}_i \in \mathbb{R}^{n}$, the vectorized form of the map.
In local regions, let $\hat{\mathbf{X}}_b \in \mathbb{R}^{n_b \times n_b \times m_b}$ denote the b-th local block, where $b = 1, \ldots, B$, and B is the number of local blocks in $\hat{\mathbf{X}}$. Then, we vectorize the abundance of each local block $\hat{\mathbf{X}}_{j,b} \in \mathbb{R}^{n_b \times n_b}$ ($j = 1, \ldots, m_b$) into $\hat{\mathbf{x}}_{j,b} \in \mathbb{R}^{N}$, where $N = n_b \times n_b$ is the number of pixels in each local abundance matrix, and j is the index of local abundance matrices. Figure 3 illustrates the procedure. With this in mind, we introduce the local abundance matrix w.r.t. the b-th block
$$\mathbf{H}_{\hat{\mathbf{x}}_b} = (\hat{\mathbf{x}}_{1,b}, \ldots, \hat{\mathbf{x}}_{m_b,b}) \in \mathbb{R}^{N \times m_b}. \qquad (8)$$
Finally, our proposed LA regularization function is defined as
$$\|\mathbf{X}\|_{LA*} = \sum_{b=1}^{B} \|\mathbf{H}_{\hat{\mathbf{x}}_b}\|_* \qquad (9)$$
where $\|\cdot\|_*$ denotes the nuclear norm, $\|\mathbf{X}\|_* = \sum_{i=1}^{\mathrm{rank}(\mathbf{X})} \sigma_i(\mathbf{X})$, with $\sigma_i$ denoting the i-th singular value.
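The LA value of Equation (9) can be sketched as follows, assuming cubic non-overlapping blocks of equal side (the experiments later use size [5 5 5] with no overlap); this is an illustration, not the authors' code:

```python
import numpy as np

def la_norm(X_hat, block=5):
    """||X||_{LA*} of Eq. (9) for non-overlapping block x block x block cubes
    of the 3D abundance data X_hat (n_r x n_c x m): each cube is unfolded into
    a local abundance matrix H (N x m_b, with N = block*block), and the nuclear
    norms (sums of singular values) of all H are accumulated."""
    n_r, n_c, m = X_hat.shape
    total = 0.0
    for r in range(0, n_r - block + 1, block):
        for c in range(0, n_c - block + 1, block):
            for k in range(0, m - block + 1, block):
                cube = X_hat[r:r + block, c:c + block, k:k + block]
                H = cube.reshape(block * block, block)  # local abundance matrix
                total += np.linalg.svd(H, compute_uv=False).sum()
    return total
```

For a perfectly homogeneous block (rank one), only a single singular value is nonzero, so the penalty is small; mixing unrelated abundance patterns into a block raises its rank and hence the penalty.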

3.4. J-LASU

We formulate the new problem by adding the LA term and the TV term to the collaborative sparse regression problem based on Equation (7). Thus, the problem of the proposed J-LASU algorithm in convex form becomes
$$\min_{\mathbf{X}} g(\mathbf{X}) = \frac{1}{2}\|\mathbf{A}\mathbf{X} - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{X}\|_{2,1} + \gamma \|\mathbf{X}\|_{TV} + \rho \|\mathbf{X}\|_{LA*} \quad \mathrm{s.t.} \quad \mathbf{X} \ge \mathbf{0} \qquad (10)$$
where λ, γ, and ρ are the regularization parameters for the collaborative sparsity, TV, and LA terms, respectively. We use the anisotropic TV, as in SUnSAL-TV [27], defined as
$$\|\mathbf{X}\|_{TV} = \|\mathbf{D}\mathbf{X}\|_1 \qquad (11)$$
where $\mathbf{D} = [\mathbf{D}_h; \mathbf{D}_v]$, and $\mathbf{D}_h: \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$ and $\mathbf{D}_v: \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$ are horizontal and vertical differential operators, respectively. $\mathbf{D}_h\mathbf{X}$ computes the differences between the components of $\mathbf{X}$ and the corresponding right-side adjacent pixels under a cyclic boundary assumption; likewise, $\mathbf{D}_v\mathbf{X}$ corresponds to the differences with the up-side adjacent pixels [27].
We estimate the abundance matrix $\mathbf{X}$ by solving the problem in Equation (10) using ADMM. The cost function in Equation (10) written in ADMM form becomes
$$\min_{\mathbf{X}, \mathbf{V}}\; f_1(\mathbf{X}) + f_2(\mathbf{V}) \quad \mathrm{s.t.} \quad \mathbf{V} = \mathbf{G}\mathbf{X} \qquad (12)$$
where
$$f_1(\mathbf{X}) = \frac{1}{2}\|\mathbf{A}\mathbf{X} - \mathbf{Y}\|_F^2 \qquad (13)$$
$$f_2(\mathbf{V}) = \lambda \|\mathbf{V}_1\|_{2,1} + \gamma \|\mathbf{V}_2\|_1 + \rho \|\mathbf{V}_3\|_{LA*} + \iota_{\mathbb{R}_+}(\mathbf{V}_4) \qquad (14)$$
$$\mathbf{V} = \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \mathbf{V}_3 \\ \mathbf{V}_4 \end{bmatrix}, \quad \mathbf{G} = \begin{bmatrix} \mathbf{I} \\ \mathbf{D} \\ \mathbf{I} \\ \mathbf{I} \end{bmatrix}. \qquad (15)$$
Here, the $\iota_{\mathbb{R}_+}$ term projects the solution onto the nonnegative orthant ($\iota_{\mathbb{R}_+}(x) = 0$ if $x \ge 0$ and $\iota_{\mathbb{R}_+}(x) = +\infty$ otherwise), and $\mathbf{I}$ is an identity matrix of appropriate size. The constraint in Equation (12) implies the relations
$$\mathbf{V}_1 = \mathbf{X}; \quad \mathbf{V}_2 = \mathbf{D}\mathbf{X}; \quad \mathbf{V}_3 = \mathbf{X}; \quad \mathbf{V}_4 = \mathbf{X}. \qquad (16)$$
Using a positive constant μ and the Lagrange multipliers $\mathbf{B}/\mu$ corresponding to the constraint $\mathbf{V} = \mathbf{G}\mathbf{X}$, the cost function is minimized using ADMM. The steps of the proposed algorithm are as follows:
$$\mathbf{X}^{(k+1)} = \arg\min_{\mathbf{X}}\; f_1(\mathbf{X}) + \frac{\mu}{2}\|\mathbf{G}\mathbf{X} - \mathbf{V}^{(k)} - \mathbf{B}^{(k)}\|_F^2 \qquad (17)$$
$$\mathbf{V}^{(k+1)} = \arg\min_{\mathbf{V}}\; f_2(\mathbf{V}) + \frac{\mu}{2}\|\mathbf{G}\mathbf{X}^{(k)} - \mathbf{V} - \mathbf{B}^{(k)}\|_F^2 \qquad (18)$$
$$\mathbf{B}^{(k+1)} = \mathbf{B}^{(k)} - (\mathbf{G}\mathbf{X}^{(k+1)} - \mathbf{V}^{(k+1)}). \qquad (19)$$
To find the solution for $\mathbf{X}$ of the augmented Lagrangian formula, we solve Equation (17) by setting its partial derivative to zero:
$$\mathbf{X}^{(k+1)} = \arg\min_{\mathbf{X}} \frac{1}{2}\|\mathbf{A}\mathbf{X} - \mathbf{Y}\|_F^2 + \frac{\mu}{2}\|\mathbf{G}\mathbf{X} - \mathbf{V}^{(k)} - \mathbf{B}^{(k)}\|_F^2 = \left(\mathbf{A}^T\mathbf{A} + \mu \mathbf{G}^T\mathbf{G}\right)^{-1}\left(\mathbf{A}^T\mathbf{Y} + \mu \mathbf{G}^T(\mathbf{V}^{(k)} + \mathbf{B}^{(k)})\right) \qquad (20)$$
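The closed-form X-update of Equation (20) can be sketched in NumPy; for brevity this sketch assumes a single splitting variable with G = I (the full algorithm stacks G = [I; D; I; I]):

```python
import numpy as np

def x_update(A, Y, V, B, mu):
    """Closed-form X-update of Eq. (20), specialized to G = I:
    X = (A^T A + mu I)^{-1} (A^T Y + mu (V + B)).
    This is the unique minimizer of
    0.5*||AX - Y||_F^2 + (mu/2)*||X - V - B||_F^2."""
    m = A.shape[1]
    lhs = A.T @ A + mu * np.eye(m)       # normal-equation matrix
    rhs = A.T @ Y + mu * (V + B)
    return np.linalg.solve(lhs, rhs)     # solve rather than invert explicitly
```

Since the normal-equation matrix does not change across iterations, a practical implementation would factor it once (e.g., a Cholesky decomposition) and reuse the factorization in every X-update.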
The detailed steps for computing the variables $\mathbf{V}_1$, $\mathbf{V}_2$, $\mathbf{V}_3$, and $\mathbf{V}_4$ at each iteration are written in the general form of the proximal operator, $\mathrm{prox}_{\gamma h}(\bar{\mathbf{x}}) = \arg\min_{\mathbf{v} \in \mathbb{R}^N} h(\mathbf{v}) + \frac{1}{2\gamma}\|\mathbf{v} - \bar{\mathbf{x}}\|_2^2$ [40,41].
$$\mathbf{V}_1^{(k+1)} = \mathrm{prox}_{\frac{\lambda}{\mu}\|\cdot\|_{2,1}}(\mathbf{R}_1) = \arg\min_{\mathbf{V}_1} \lambda \|\mathbf{V}_1\|_{2,1} + \frac{\mu}{2}\|\mathbf{V}_1 - \mathbf{R}_1\|_F^2 \qquad (21)$$
$$\mathbf{V}_2^{(k+1)} = \mathrm{prox}_{\frac{\gamma}{\mu}\|\cdot\|_{1}}(\mathbf{R}_2) = \arg\min_{\mathbf{V}_2} \gamma \|\mathbf{V}_2\|_1 + \frac{\mu}{2}\|\mathbf{V}_2 - \mathbf{R}_2\|_F^2 \qquad (22)$$
$$\mathbf{V}_3^{(k+1)} = \mathrm{prox}_{\frac{\rho}{\mu}\|\cdot\|_{LA*}}(\mathbf{R}_3) = \arg\min_{\mathbf{V}_3} \rho \|\mathbf{V}_3\|_{LA*} + \frac{\mu}{2}\|\mathbf{V}_3 - \mathbf{R}_3\|_F^2 \qquad (23)$$
$$\mathbf{V}_4^{(k+1)} = \mathrm{prox}_{\frac{1}{\mu}\iota_{\mathbb{R}_+}}(\mathbf{R}_4) = \arg\min_{\mathbf{V}_4} \iota_{\mathbb{R}_+}(\mathbf{V}_4) + \frac{\mu}{2}\|\mathbf{V}_4 - \mathbf{R}_4\|_F^2 \qquad (24)$$
where $\mathbf{R}_1 = \mathbf{X}^{(k)} - \mathbf{B}_1^{(k)}$, $\mathbf{R}_2 = \mathbf{D}\mathbf{X}^{(k)} - \mathbf{B}_2^{(k)}$, $\mathbf{R}_3 = \mathbf{X}^{(k)} - \mathbf{B}_3^{(k)}$, $\mathbf{R}_4 = \mathbf{X}^{(k)} - \mathbf{B}_4^{(k)}$, and $\mathbf{B} = [\mathbf{B}_1; \mathbf{B}_2; \mathbf{B}_3; \mathbf{B}_4]$.
For $\mathbf{V}_1^{(k+1)}$, since the $L_{2,1}$ norm is not differentiable, the solution is obtained by the shrinkage for the group lasso as follows:
$$\mathbf{v}_1^{(i)(k+1)} = \begin{cases} \left(\|\mathbf{r}_1^{(i)}\|_2 - \frac{\lambda}{\mu}\right) \dfrac{\mathbf{r}_1^{(i)}}{\|\mathbf{r}_1^{(i)}\|_2} & \text{if } \|\mathbf{r}_1^{(i)}\|_2 > \frac{\lambda}{\mu} \\ \mathbf{0} & \text{otherwise} \end{cases} \qquad (25)$$
where $\mathbf{v}_1^{(i)(k+1)}$ and $\mathbf{r}_1^{(i)}$ denote the i-th rows of $\mathbf{V}_1^{(k+1)}$ and $\mathbf{R}_1$, respectively. This operation is denoted as $\mathrm{grouplasso}(\cdot, \tau)$, where τ is the threshold.
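A minimal sketch of the group-lasso shrinkage of Equation (25), applied row-wise (illustrative, not the authors' code):

```python
import numpy as np

def grouplasso(R, tau):
    """Row-wise group-lasso shrinkage of Eq. (25): each row of R is scaled
    toward zero by tau in L2 norm; rows with norm <= tau become zero,
    which is how entire endmembers are deactivated collaboratively."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    # max(1 - tau/||r||, 0) reproduces the two cases of Eq. (25);
    # the epsilon guards against division by zero for all-zero rows.
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * R
```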
The TV term in Equation (22) is solved by soft-thresholding on each element of V 2 ( k + 1 ) .
$$v_2^{(i,j)(k+1)} = \begin{cases} r_2^{(i,j)} - \frac{\gamma}{\mu} & \text{if } r_2^{(i,j)} > \frac{\gamma}{\mu} \\ r_2^{(i,j)} + \frac{\gamma}{\mu} & \text{if } r_2^{(i,j)} < -\frac{\gamma}{\mu} \\ 0 & \text{if } -\frac{\gamma}{\mu} \le r_2^{(i,j)} \le \frac{\gamma}{\mu} \end{cases} \qquad (26)$$
where $v_2^{(i,j)(k+1)}$ and $r_2^{(i,j)}$ denote the (i,j)-elements of $\mathbf{V}_2^{(k+1)}$ and $\mathbf{R}_2$, respectively. This operation is denoted as $\mathrm{soft}(\cdot, \tau)$, where τ is the threshold.
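The three cases of Equation (26) collapse into the standard one-line soft-thresholding formula (an illustrative sketch):

```python
import numpy as np

def soft(R, tau):
    """Element-wise soft thresholding of Eq. (26): shrink each entry
    toward zero by tau, setting entries with |r| <= tau to zero."""
    return np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
```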
The solution of $\mathbf{V}_3^{(k+1)}$ in Equation (23) is acquired by constructing the LA matrices, applying singular value shrinkage to each matrix, and reconstructing the output abundance matrix, which is denoted as
$$\mathbf{V}_3^{(k+1)} = \mathrm{shr}\!\left(\mathbf{X}^{(k)} - \mathbf{B}_3^{(k)},\; \frac{\rho}{\mu}\right) \qquad (27)$$
where $\mathrm{shr}(\cdot, \tau)$ denotes the singular value shrinkage of the LA matrices $\mathbf{H}_{\hat{\mathbf{x}}_b}$, i.e., $\mathbf{y} \mapsto \mathrm{diag}(\max\{\mathrm{SVD}(\mathbf{y}) - \tau, 0\})$, where $\mathrm{SVD}(\cdot)$ produces a vector containing the singular values in decreasing order and τ is the threshold.
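Singular value shrinkage of a single LA matrix can be sketched as follows (illustrative; the full operator applies this to every local block and reassembles the abundance matrix):

```python
import numpy as np

def shr(H, tau):
    """Singular value shrinkage for one LA matrix: soft-threshold the
    singular values of H and reconstruct. This is the proximal operator
    of the nuclear norm, which drives small singular values to zero and
    thereby promotes low rank."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```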
Let $v_4^{(i,j)(k+1)}$ denote the (i,j)-element of $\mathbf{V}_4^{(k+1)}$; finally, the solution of $\mathbf{V}_4^{(k+1)}$ is obtained by
$$v_4^{(i,j)(k+1)} = \max(r_4^{(i,j)}, 0) \qquad (28)$$
where $r_4^{(i,j)}$ denotes the (i,j)-element of $\mathbf{R}_4$.
The whole procedure of ADMM is summarized in Algorithm 1.
Algorithm 1: ADMM in pseudocode for solving problem in Equation (10)

4. Experiment and Analysis

We tested the proposed algorithm on several simulated data sets at three signal-to-noise ratio (SNR) levels, i.e., 10, 20, and 30 dB, and on two real data sets. We evaluated the results through a fair comparison with CLSUnSAL [28] and SUnSAL-TV [27]. A state-of-the-art low-rank algorithm, sparse and low-rank unmixing using ADMM (ADSpLRU) [34], was also compared.

4.1. Simulated Data Sets

To simulate hyperspectral data with and without the presence of pure pixels, we used two types of data distribution for data generation. Both use the same library, generated from 240 types of minerals selected randomly from the splib06 USGS library [42], which consists of 224 spectral bands ranging from 0.4 to 2.5 μm. The mutual coherence among the spectral signatures is very close to one, but we set the SA to be larger than 4.4 degrees to make the sparse regression problem easier.
The first data set, DS, represents data with pure pixels and is adopted from Iordache et al. [27], consisting of 224 bands for 75 × 75 pixels. The data generation follows the LMM with the abundance sum-to-one constraint imposed on each pixel. Five spectral signatures are randomly selected from the library as the endmembers and distributed spatially in the form of distinct square regions. In some pixels the endmembers stay pure, and in others they are mixed in groups of two to five endmembers. In Figure 4, the red squares in each abundance map represent 100% intensity, i.e., the pure-pixel regions of each endmember. The background consists of mixed pixels with randomly fixed fractional abundance values of 0.1149, 0.0741, 0.2003, 0.2055, and 0.4051 for the five endmembers.
To test the proposed algorithm under the condition without pure pixels, a distribution with a distinct spatial pattern and mixture was selected. We used the fractal database (FR) [39], consisting of five data sets, namely FR1, FR2, FR3, FR4, and FR5. Each is composed of 100 × 100 pixels with 224 spectral bands per pixel and contains no completely pure pixels, which is close to real conditions, in which completely pure pixels are rarely found. The distribution is generated such that pixels near the edges of regions are more highly mixed than those in the centers of the regions. These center pixels have a purity index between 0.95 and 0.99, directly proportional to the broadness of the regions. In this experiment, we set the number of endmembers to 9. Figure 5 shows FR1, FR2, FR3, FR4, and FR5 in pseudocolor.

4.2. Real Data Sets

For the real-data experiment, we used two real data sets from different sensors. The first hyperspectral scene is the widely used data set of the Cuprite mining district, Nevada, collected in 1997 [43]. We used a subscene of 150 × 130 pixels, whose area is shown in Figure 6a. The data are composed of 224 spectral bands with 3.7 m spatial resolution from the AVIRIS sensor. Prior to analysis, several bands were removed due to low SNR, leaving 188 bands. In this experiment, we used the USGS library of 498 spectral signatures as the standard spectral library for the data, with the corresponding bands removed. Figure 6b shows the USGS mineral distribution map of the Cuprite area [44]. From the figure, the area of interest contains at least three types of minerals: alunite, chalcedony, and kaolinite. The mineral map was produced using Tricorder 3.3 software in 1995, while the AVIRIS Cuprite data were collected in 1997. Hence, in our experiment, the mineral map was used only for qualitative visual evaluation against the abundance maps of the different sparse unmixing algorithms.
The second hyperspectral scene is the Urban data set captured by the HYDICE sensor over an area located at Copperas Cove near Fort Hood, TX, USA, in October 1995. It consists of 307 × 307 pixels with 2 m pixel resolution. The wavelengths range from 0.4 to 2.5 μm, divided into 210 spectral bands. After some bands with low SNR due to dense water vapor and atmospheric effects are discarded, 162 bands remain. We used a subscene of 100 × 100 pixels. Figure 7a shows the subscene used in the experiment. The ground truth of the Urban data set is not available; however, we used the reference abundance maps obtained from [45]. The maps were obtained via the method provided in [46,47,48] and consist of four endmembers, i.e., asphalt, grass, tree, and roof. Figure 7b shows the spectral signatures of the four endmembers.

4.3. Parameter Settings and Evaluation Metrics

In the simulated-data experiment, to build the spectral library $\mathbf{A}$, the spectral signatures in the USGS spectral library were selected and sorted such that the SAs between the spectral signatures were not less than 4.4 degrees, in increasing order. The parameters of J-LASU are the collaborative sparsity (λ), TV (γ), and LA nuclear norm (ρ) regularizers. For the compared algorithms, $\lambda_{SP}$ is the sparsity term of CLSUnSAL, SUnSAL-TV, and ADSpLRU [34]. For SUnSAL-TV, the TV term is controlled by $\lambda_{TV}$. The low-rank regularizer parameter of ADSpLRU is denoted as $\lambda_{LR}$. These parameters were adjusted for every data set under different SNR levels. However, we used the same parameter settings for the five fractal data sets, since the characteristics of the scenes tend to be similar. Table 1 summarizes these settings. The values of $\lambda_{LR}$ were the optimal ones found by experimenting with the recommended values. For the LA regularization, the block size is another parameter, set to control the coverage of adjacent pixels in the spatial and endmember directions. After several trials in this experiment, the optimum size was found to be [5 5 5] with no overlap.
We evaluated the performance of the algorithms using root mean square error (RMSE) [39,51] and signal-to-reconstruction error (SRE) [26]. The RMSE measures the error between the original and reconstructed abundance matrices. The lower the RMSE, the more accurate the estimation is. The RMSE formula for the i-th endmember is defined as
$$\mathrm{RMSE}_i = \sqrt{\frac{1}{n} \sum_{h=1}^{n} \left(X_{i,h} - \bar{X}_{i,h}\right)^2} \qquad (29)$$
where n, $\mathbf{X}$, and $\bar{\mathbf{X}}$ represent the number of pixels, the true abundance matrix, and the estimated abundance matrix, respectively. We then compute the mean of all endmembers’ RMSEs.
The SRE represents the ratio between the power of the true abundance matrix and that of the reconstruction error, and is defined as
$$\mathrm{SRE} = 10 \log_{10}\left(\|\mathbf{X}\|_F^2 \,/\, \|\mathbf{X} - \bar{\mathbf{X}}\|_F^2\right). \qquad (30)$$
For the simulated data, the original abundance matrix was generated for each data set. We compared the visual appearance of the estimated abundance maps in addition to the RMSE and SRE. For the first real data set, Cuprite, the comparison was between the estimated abundance maps of the sparse unmixing algorithms and the mineral map of each expected endmember. For the second real data set, Urban, the RMSE and SRE of each method were calculated with the ground-truth abundance maps as the reference.
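Both metrics of Equations (29) and (30) are straightforward to compute; a minimal sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def rmse(X_true, X_est):
    """Mean over endmembers of the per-endmember RMSE of Eq. (29).
    Rows index endmembers, columns index pixels."""
    per_endmember = np.sqrt(np.mean((X_true - X_est) ** 2, axis=1))
    return per_endmember.mean()

def sre_db(X_true, X_est):
    """Signal-to-reconstruction error of Eq. (30), in dB:
    higher values mean a more accurate abundance estimate."""
    num = np.linalg.norm(X_true, "fro") ** 2
    den = np.linalg.norm(X_true - X_est, "fro") ** 2
    return 10.0 * np.log10(num / den)
```

For instance, an estimate that is uniformly 10% below the truth has an SRE of exactly 20 dB, since the error power is 1/100 of the signal power.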

4.4. Simulated-Data Experiment

Table 2 and Table 3 show the RMSE and SRE values, respectively, of the abundances estimated by the compared algorithms. The proposed J-LASU algorithm achieved better RMSE for all the simulated data. At the same SNR level, J-LASU performed better than CLSUnSAL, SUnSAL-TV, and ADSpLRU. The improvement can also be clearly seen in the DS data set in Figure 8: J-LASU preserved the square regions better than the others. Compared with the TV results, a difference can be recognized in the small square regions, where J-LASU reconstructed the squares better. For the FR data sets, visually, the ADSpLRU abundance maps showed the intensity most similar to the corresponding true abundance maps. However, J-LASU is superior in preserving the gradation of intensity from the edge to the center of an abundance region, which is the drawback of ADSpLRU. Compared with SUnSAL-TV, J-LASU was more accurate in determining whether an abundance is an outlier or just a low-intensity edge abundance. In addition, SUnSAL-TV produced stronger smoothing effects than J-LASU. In this case, the J-LASU results are more similar to the true abundance maps, which can easily be seen in the FR2 data set.

4.5. Real-Data Experiment

The visual comparisons among the sparse unmixing algorithms and the mineral maps for the Cuprite data can be observed in Figure 9. The images in the first column show the comparison for the alunite abundance maps. Among the results of the compared algorithms, the proposed J-LASU produced the map most similar to the mineral map, with fewer outliers in the lower-left side of the map. The same superiority was also observed for the chalcedony and kaolinite abundance maps in the second and third columns, respectively. Compared with SUnSAL-TV, J-LASU had fewer outliers, or outliers of lower intensity, most of which were found in the left-side region of the maps.
It should be noted that the abundance maps estimated by any sparse unmixing algorithm do not exactly match the mineral maps generated by the Tricorder software in terms of intensity: the software produces pixel-level classification maps, while the sparse unmixing algorithms perform subpixel-level classification. The comparison of outliers in this paper therefore refers to abundances that do not appear in the mineral map. Overall, the J-LASU abundance maps showed a smooth gradation of intensity from the edge of a detected region to its center, and the tiny regions found in the other algorithms' maps, which appear to be outliers, were removed by J-LASU.
For the Urban data, Figure 10 shows the ground truth and the abundance maps of the four endmembers estimated by the compared algorithms. The J-LASU algorithm produced the maps most similar to the ground truth, especially for the asphalt abundance map, which is the easiest to compare across the algorithms. The quantitative comparison also shows that J-LASU yielded the best performance, with the highest SRE and the lowest RMSE, as shown in Table 4. Compared with the simulated data, the Urban data exhibited relatively high RMSEs for all the compared algorithms. This is because the ground-truth abundance maps used for the Urban data were not obtained from ground measurements but derived by an estimation method, which itself may introduce error.

5. Discussion

5.1. Sensitivity Test

We evaluated the performance of the proposed J-LASU algorithm when λ , γ , and ρ were not set to their optimal values. In the experiment, one parameter was varied from 0 to 10 while the others were fixed at their optimal values. As a parameter increases from 0 to its optimal value, the RMSE decreases and the SRE increases gradually; beyond the optimal value, the results worsen. Hence, each parameter influences the performance of J-LASU.
To evaluate the contribution of the LA regularizer clearly, we ran our optimization problem with ρ = 0 , i.e., with no contribution from the LA regularizer. Figure 11 shows the RMSE in this condition at the three levels of SNR, compared with that of J-LASU, where ρ > 0 . For each simulated data set, the other parameters were set to their optimal values. For all data, the RMSE with ρ = 0 was higher than when the LA regularizer was used. In other words, adding our LA regularizer with an optimal regularization parameter improves the RMSE.
We also found that the additional low-rank regularization improves visual quality. Figure 12 shows the visual improvement due to the abundance regularizer. The abundance maps in the figure belong to endmember 5 of the DS data set and endmember 7 of the FR2 data set. For the FR abundance maps, one can see that after applying our LA regularizer with an optimal ρ , the active abundances have higher intensities: the active abundances at the left edge of the map and around the speckles appear clearly, although at lower intensities than in the true abundance map. For the DS data set, when ρ is set to the optimal value, the small squares are preserved better than when ρ = 0 .

5.2. Effect of Block Size

The coverage of the local region affects the optimization results, since within this region the highly correlated abundances of the endmembers are captured by the local abundance nuclear norm. We therefore conducted experiments to find the optimal size of the sliding block and to observe the effect of the block size. Figure 13 shows the RMSE and SRE as the block size is varied for the DS data set. From the curves, we determined that the radius of spatial similarity in the abundance map governs the optimal size of the sliding block. The spatial similarity in the DS data set, as shown in Figure 8, has a distinct pattern in which every 5 × 5 pixel region has the same abundance value, which in turn gives the optimal block size. However, this correspondence does not hold for data in which the spatial similarity is not confined to square regions, e.g., the FR and Cuprite data sets. In those cases, we ran some trials prior to the experiment and found that the optimal size is 5 pixels. Hence, we selected [5 5 5] as the optimal block size for all data.
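The low-rankness that the block is meant to capture can be checked numerically: in a spatially smooth region, a vectorized local abundance matrix concentrates almost all of its energy in the leading singular value. The following toy sketch (synthetic abundances and our own names, not the paper's data or implementation) illustrates this for a [5 5 5] block:

```python
import numpy as np

# Toy abundance cube: m = 8 endmembers on a 20 x 20 pixel grid, built so
# that each abundance map is spatially flat -> local blocks are low rank.
rng = np.random.default_rng(0)
m, rows, cols = 8, 20, 20
base = rng.random((m, 1, 1))                       # one value per endmember
A = np.repeat(np.repeat(base, rows, 1), cols, 2)   # perfectly correlated maps
A += 0.01 * rng.standard_normal(A.shape)           # mild perturbation

def local_abundance_matrix(A, i, j, k, b=5):
    """Place a b x b x b block at (endmember k, pixel (i, j)) and
    vectorize each of the b abundance slices into one row."""
    block = A[k:k + b, i:i + b, j:j + b]           # (b, b, b)
    return block.reshape(b, -1)                    # (m_b, N) local matrix

L = local_abundance_matrix(A, 0, 0, 0, b=5)
s = np.linalg.svd(L, compute_uv=False)
# For a smooth region the first singular value dominates, so the
# nuclear norm penalty is small for spatially plausible abundances.
print(s[0] / s.sum())   # close to 1 here
```

This is why the nuclear norm of the local abundance matrix acts as a useful prior: noisy or implausible abundances spread energy across many singular values and are penalized more heavily.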

5.3. Computational Complexity

The running times of the algorithms are summarized in Table 5. The experiment was conducted on the DS simulated data, which has 75 × 75 pixels, 224 bands, and 240 spectral signatures in the library. The algorithms ran on a desktop computer with a 3.50-GHz Intel Core i5 processor and 8 GB of RAM. As the table shows, J-LASU was the slowest, owing to its higher computational complexity.
For the complexity analysis, recall that n, m, N, and m b are the numbers of pixels, spectral signatures in the library, pixels in each LA band, and local endmembers, respectively. In each iteration of J-LASU, the computation of X and the SVD step in the computation of V 3 incur the most cost. The cost of computing X stems from the conjugate gradient solver, which costs O ( m ) per iteration; the conjugate gradient method is a popular iterative technique for solving systems of linear equations Ax = b in which the matrix A is symmetric positive definite (SPD) and typically large and sparse. The SVD step costs O ( m b 2 N ) ; however, it is repeated once per block as the local block slides, i.e., B times. Since B = m n / ( m b N ) , the total cost of the V 3 update is O ( m b m n ) , which dominates the computation of X . Hence, the overall complexity is O ( m b m n ) .
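To make the block count concrete, the V 3 update essentially performs one singular value thresholding (the proximal operator of the nuclear norm) per local block. The toy loop below (our own simplification with non-overlapping blocks and invented shapes, not the actual J-LASU implementation) shows how the number of SVDs matches B = m n / ( m b N ):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def blockwise_svt(A, b=5, tau=0.1):
    """Apply SVT to each non-overlapping b x b x b block of the
    (m, rows, cols) abundance cube and count the SVD calls."""
    m, rows, cols = A.shape
    out = A.copy()
    n_svd = 0
    for k in range(0, m - b + 1, b):
        for i in range(0, rows - b + 1, b):
            for j in range(0, cols - b + 1, b):
                blk = out[k:k + b, i:i + b, j:j + b].reshape(b, -1)
                out[k:k + b, i:i + b, j:j + b] = svt(blk, tau).reshape(b, b, b)
                n_svd += 1
    return out, n_svd

A = np.random.default_rng(1).random((10, 20, 20))
out, B = blockwise_svt(A, b=5)
# With m = 10, n = 20*20, m_b = 5, N = 25: B = mn / (m_b N) = 32 blocks
print(B)  # 32
```

Each SVD here acts on an m b × N matrix and costs O ( m b 2 N ), so the loop totals O ( m b 2 N · B ) = O ( m b m n ).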

6. Conclusions

We proposed a local abundance regularizer for the sparse unmixing problem to improve the accuracy of abundance estimation. By adding this term to a state-of-the-art unmixing formulation, our algorithm incorporates both spatial and abundance correlation through the low-rankness of the abundance. We applied the nuclear norm to the local abundance matrix, whose local region is defined not only in the spatial dimension but also in the abundance dimension. The algorithm was run at several SNR levels on simulated data sets representing conditions with and without pure pixels, and on two real data sets. The experimental results indicate that our proposed algorithm outperforms SUnSAL-TV and the other state-of-the-art algorithms compared. Future research will explore exploiting the low-rankness of the abundance for overlapping local regions.

Acknowledgments

This work was supported in part by JSPS Grants-in-Aid (24560473), and MIC SCOPE (172310003). The first author acknowledges support from the Indonesia Endowment Fund for Education (LPDP), Ministry of Finance, The Republic of Indonesia.

Author Contributions

All the authors significantly contributed to different phases of this manuscript including the preparation, analysis, review and revision. The research plan and mathematical formulation were developed together. Mia Rizkinia implemented the algorithm, prepared the data, and executed the experiments. Masahiro Okuda supervised the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rizkinia, M.; Okuda, M. Local abundance regularization for hyperspectral sparse unmixing. In Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016; pp. 1–6.
2. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57.
3. Zhang, B.; Zhuang, L.; Gao, L.; Luo, W.; Ran, Q.; Du, Q. PSO-EM: A hyperspectral unmixing algorithm based on normal compositional model. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7782–7792.
4. Song, M.; Ma, Q.; An, J.; Chang, C.I. An improved NMF algorithm based on spatial and abundance constraints. In Proceedings of the 2016 Progress in Electromagnetic Research Symposium (PIERS), Shanghai, China, 8–11 August 2016; pp. 4532–4537.
5. Shippert, P. Why use hyperspectral imagery. Photogramm. Eng. Remote Sens. 2004, 70, 377–396.
6. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28.
7. Heiden, U.; Iwasaki, A.; Müller, A.; Schlerf, M.; Udelhoven, T.; Uto, K.; Yokoya, N.; Chanussot, J. Foreword to the special issue on hyperspectral remote sensing and imaging spectroscopy. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3904–3908.
8. Ahmed, A.M.; Duran, O.; Zweiri, Y.; Smith, M. Hybrid spectral unmixing: Using artificial neural networks for linear/non-linear switching. Remote Sens. 2017, 9, 775.
9. Licciardi, G.A.; Sellitto, P.; Piscini, A.; Chanussot, J. Nonlinear spectral unmixing for the characterisation of volcanic surface deposit and airborne plumes from remote sensing imagery. Geosciences 2017, 7, 46.
10. Yokoya, N.; Chanussot, J.; Iwasaki, A. Nonlinear unmixing of hyperspectral data using semi-nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1430–1437.
11. Zhong, Y.; Feng, R.; Zhang, L. Non-local sparse unmixing for hyperspectral remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1889–1909.
12. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
13. Nascimento, J.M.P.; Dias, J.M.B. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910.
14. Berman, M.; Kiiveri, H.; Lagerstrom, R.; Ernst, A.; Dunne, R.; Huntington, J.F. ICE: A statistical approach to identifying endmembers in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2004, 42, 2085–2095.
15. Lee, D.D.; Seung, H.S. Learning the parts of objects by nonnegative matrix factorization. Nature 1999, 401, 788–791.
16. Pauca, V.P.; Piper, J.; Plemmons, R.J. Nonnegative matrix factorization for spectral data analysis. Linear Algebra Appl. 2006, 416, 29–47.
17. Liu, R.; Du, B.; Zhang, L. Hyperspectral unmixing via double abundance characteristics constraints based NMF. Remote Sens. 2016, 8, 464.
18. Iordache, M.D.; Plaza, A.J. A Sparse Regression Approach to Hyperspectral Unmixing. Ph.D. Thesis, Instituto Superior Tecnico, Lisbon, Portugal, 2011.
19. Li, C.; Ma, Y.; Mei, X.; Liu, C.; Ma, J. Hyperspectral unmixing with robust collaborative sparse regression. Remote Sens. 2016, 8, 588.
20. Ghasrodashti, E.K.; Karami, A.; Heylen, R.; Scheunders, P. Spatial resolution enhancement of hyperspectral images using spectral unmixing and Bayesian sparse representation. Remote Sens. 2017, 9, 541.
21. Esmaeili Salehani, Y.; Gazor, S.; Kim, I.M.; Yousefi, S. ℓ0-norm sparse hyperspectral unmixing using arctan smoothing. Remote Sens. 2016, 8, 187.
22. Tang, W.; Shi, Z.; Wu, Y.; Zhang, C. Sparse unmixing of hyperspectral data using spectral a priori information. IEEE Trans. Geosci. Remote Sens. 2015, 53, 770–783.
23. Chen, X.; Chen, J.; Jia, X.; Somers, B.; Wu, J.; Coppin, P. A quantitative analysis of virtual endmembers' increased impact on the collinearity effect in spectral unmixing. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2945–2956.
24. Themelis, K.E.; Rontogiannis, A.A.; Koutroumbas, K. Semi-supervised hyperspectral unmixing via the weighted Lasso. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010.
25. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010; pp. 1–4.
26. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse unmixing of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039.
27. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502.
28. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Collaborative sparse regression for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2014, 52, 341–354.
29. Zhang, S.; Li, J.; Liu, K.; Deng, C.; Liu, L.; Plaza, A. Hyperspectral unmixing based on local collaborative sparse regression. IEEE Geosci. Remote Sens. Lett. 2016, 13, 631–635.
30. Qu, Q.; Nasrabadi, N.M.; Tran, T.D. Abundance estimation for bilinear mixture models via joint sparse and low-rank representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4404–4423.
31. Zhao, Y.; Yang, J. Hyperspectral image denoising via sparsity and low rank. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 1091–1094.
32. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
33. Zhao, Y.Q.; Yang, J. Hyperspectral image denoising via sparse representation and low-rank constraint. IEEE Trans. Geosci. Remote Sens. 2015, 53, 296–308.
34. Giampouras, P.V.; Themelis, K.E.; Rontogiannis, A.A.; Koutroumbas, K.D. Simultaneously sparse and low-rank abundance matrix estimation for hyperspectral image unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4775–4789.
35. Ono, S.; Yamada, I. Color-line regularization for color artifact removal. IEEE Trans. Comput. Imaging 2016, 2, 204–217.
36. Yang, J.; Zhao, Y.Q.; Chan, J.C.W.; Kong, S.G. Coupled sparse denoising and unmixing with low-rank constraint for hyperspectral image. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1818–1833.
37. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215.
38. Rizkinia, M.; Baba, T.; Shirai, K.; Okuda, M. Local spectral component decomposition for multi-channel image denoising. IEEE Trans. Image Process. 2016, 25, 3208–3218.
39. Plaza, J.; Hendrix, E.M.T.; García, I.; Martín, G.; Plaza, A. On endmember identification in hyperspectral images without pure pixels: A comparison of algorithms. J. Math. Imaging Vis. 2012, 42, 163–175.
40. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899.
41. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: New York, NY, USA, 2011; pp. 185–212.
42. USGS Digital Spectral Library 06. Available online: https://speclab.cr.usgs.gov/spectral.lib06/ (accessed on 10 June 2016).
43. AVIRIS Data. Available online: https://aviris.jpl.nasa.gov/data/free_data.html (accessed on 10 June 2016).
44. Cuprite, Nevada AVIRIS 1995 Data USGS. Available online: https://speclab.cr.usgs.gov/cuprite95.tgif.2.2um_map.gif (accessed on 11 January 2017).
45. Datasets and Ground Truths. Available online: http://www.escience.cn/people/feiyunZHU/Dataset_GT.html (accessed on 10 November 2017).
46. Jia, S.; Qian, Y. Spectral and spatial complexity-based hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3867–3879.
47. Jia, S.; Qian, Y. Constrained nonnegative matrix factorization for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2009, 47, 161–173.
48. Zhu, F.; Wang, Y.; Xiang, S.; Fan, B.; Pan, C. Structured sparse method for hyperspectral unmixing. ISPRS J. Photogramm. Remote Sens. 2014, 88, 101–118.
49. Zhu, F.; Wang, Y.; Fan, B.; Meng, G.; Xiang, S.; Pan, C. Spectral unmixing via data-guided sparsity. IEEE Trans. Image Process. 2014, 23, 5412–5427.
50. Zhu, F.; Wang, Y.; Fan, B.; Meng, G.; Pan, C. Effective spectral unmixing via robust representation and learning-based sparsity. arXiv 2014, arXiv:1409.0685.
51. Zortea, M.; Plaza, A. Spatial preprocessing for endmember extraction. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2679–2693.
Figure 1. Illustration of hyperspectral image and sparse unmixing for pixel (top) and image (bottom).
Figure 2. Illustration of endmember (m) direction in abundance dimension. 3D local block moves through pixels (n) as well as m direction of abundance maps.
Figure 3. Process of vectorizing and arranging local abundance matrix of hyperspectral image.
Figure 4. True abundance matrix of simulated data set 1 (DS). (a) Endmember 1; (b) Endmember 2; (c) Endmember 3; (d) Endmember 4; (e) Endmember 5.
Figure 5. Fractal data sets represented in pseudocolor. (a) FR1; (b) FR2; (c) FR3; (d) FR4; (e) FR5.
Figure 6. (a) Cuprite data generated in pseudocolor. Black rectangle shows area of our experiment; (b) USGS mineral distribution map of Cuprite mining district in Nevada [44].
Figure 7. (a) A subscene of Urban data used in our experiment, generated in pseudocolor; (b) Spectral signatures of the endmembers [48,49,50], x-axis and y-axis represent the band number and reflectance unit (0–1), respectively.
Figure 8. Estimated abundance maps for simulated data sets DS and FR1–5 at SNR = 30 dB (rows a–f, respectively) using CLSUnSAL, SUnSAL-TV, ADSpLRU, and J-LASU (columns 1–4, respectively), compared with the true abundance (column 5).
Figure 9. Estimated abundance maps of the Cuprite data subscene for the endmembers alunite, chalcedony, and kaolinite (columns 1–3) using CLSUnSAL, SUnSAL-TV, ADSpLRU, and J-LASU (rows b–e). The first row (a) shows the classification maps of the endmembers from the USGS Tetracorder.
Figure 10. Estimated abundance maps of the Urban data subscene for the endmembers asphalt, grass, tree, and roof (columns 1–4) using CLSUnSAL, SUnSAL-TV, ADSpLRU, and J-LASU (rows b–e). The first row (a) shows the ground-truth abundance maps.
Figure 11. Effect of the LA regularizer represented by improvement in RMSE when ρ > 0 .
Figure 12. Effect of LA regularizer represented by improvement in RMSE when ρ > 0 for (a) DS data set and (b) FR 2 data set. (a1) and (b1) Before, (a2) and (b2) after, (a3) and (b3) true abundance.
Figure 13. RMSE and SRE in relation to block size.
Table 1. Parameter Settings.
| Data | SNR | CLSUnSAL λ_SP | SUnSAL-TV λ_SP | SUnSAL-TV λ_TV | ADSpLRU λ_SP | ADSpLRU λ_LR | J-LASU λ | J-LASU γ | J-LASU ρ |
|------|-----|---------------|----------------|----------------|--------------|--------------|----------|----------|----------|
| DS | 10 dB | 1 × 10⁻¹ | 1 × 10⁻¹ | 1 × 10⁻¹ | 1 × 10⁻¹ | 1 × 10⁰ | 5 × 10⁻¹ | 5 × 10⁻¹ | 5 × 10⁻¹ |
| DS | 20 dB | 1 × 10⁰ | 5 × 10⁻⁴ | 5 × 10⁻² | 1 × 10⁻¹ | 1 × 10⁰ | 2.5 × 10⁻¹ | 5 × 10⁻² | 3 × 10⁻¹ |
| DS | 30 dB | 1 × 10⁰ | 5 × 10⁻⁴ | 1 × 10⁻² | 1 × 10⁰ | 1 × 10⁻¹ | 5 × 10⁻² | 1 × 10⁻² | 8 × 10⁻² |
| FR | 10 dB | 1 × 10⁻¹ | 5 × 10⁻² | 1 × 10⁻¹ | 1 × 10⁻¹ | 5 × 10⁰ | 5 × 10⁻¹ | 1 × 10⁻¹ | 2.5 × 10⁻¹ |
| FR | 20 dB | 1 × 10⁰ | 5 × 10⁻³ | 5 × 10⁻² | 1 × 10⁻¹ | 3 × 10⁰ | 2.5 × 10⁻¹ | 1 × 10⁻² | 1 × 10⁻¹ |
| FR | 30 dB | 1 × 10⁰ | 5 × 10⁻³ | 2.5 × 10⁻² | 1 × 10⁻¹ | 1 × 10⁻² | 5 × 10⁻² | 5 × 10⁻³ | 5 × 10⁻² |
| Cuprite | - | 5 × 10⁻³ | 5 × 10⁻⁴ | 1 × 10⁻² | 1 × 10⁻¹ | 1 × 10⁰ | 5 × 10⁻² | 5 × 10⁻⁴ | 1 × 10⁻³ |
| Urban | - | 1 × 10⁻⁴ | 1 × 10⁻⁴ | 1 × 10⁻⁴ | 1 × 10⁻² | 1 × 10⁻² | 1 × 10⁻⁴ | 1 × 10⁻⁴ | 1 × 10⁻⁴ |
Table 2. RMSE Comparison Result.
| Data | SNR (dB) | CLSUnSAL | SUnSAL-TV | ADSpLRU | J-LASU |
|------|----------|----------|-----------|---------|--------|
| DS | 10 | 0.0084 | 0.0078 | 0.0097 | 0.0035 |
| DS | 20 | 0.0102 | 0.0046 | 0.0053 | 0.0013 |
| DS | 30 | 0.0039 | 0.0023 | 0.0038 | 0.0008 |
| FR1 | 10 | 0.0130 | 0.0119 | 0.0140 | 0.0103 |
| FR1 | 20 | 0.0129 | 0.0087 | 0.0107 | 0.0075 |
| FR1 | 30 | 0.0062 | 0.0068 | 0.0073 | 0.0050 |
| FR2 | 10 | 0.0140 | 0.0119 | 0.0149 | 0.0104 |
| FR2 | 20 | 0.0138 | 0.0083 | 0.0115 | 0.0076 |
| FR2 | 30 | 0.0062 | 0.0061 | 0.0066 | 0.0052 |
| FR3 | 10 | 0.0136 | 0.0118 | 0.0130 | 0.0099 |
| FR3 | 20 | 0.0128 | 0.0077 | 0.0107 | 0.0069 |
| FR3 | 30 | 0.0056 | 0.0058 | 0.0057 | 0.0044 |
| FR4 | 10 | 0.0123 | 0.0120 | 0.0135 | 0.0103 |
| FR4 | 20 | 0.0126 | 0.0089 | 0.0090 | 0.0074 |
| FR4 | 30 | 0.0057 | 0.0075 | 0.0058 | 0.0049 |
| FR5 | 10 | 0.0118 | 0.0112 | 0.0139 | 0.0092 |
| FR5 | 20 | 0.0119 | 0.0080 | 0.0106 | 0.0065 |
| FR5 | 30 | 0.0049 | 0.0062 | 0.0061 | 0.0043 |
Table 3. SRE Comparison Result.
| Data | SNR (dB) | CLSUnSAL | SUnSAL-TV | ADSpLRU | J-LASU |
|------|----------|----------|-----------|---------|--------|
| DS | 10 | 2.5467 | 5.1021 | 0.3110 | 7.2571 |
| DS | 20 | 2.1617 | 6.3470 | 4.5515 | 15.2631 |
| DS | 30 | 6.3299 | 10.5770 | 6.1799 | 20.0581 |
| FR1 | 10 | 0.6435 | 2.018 | 0.851 | 2.3625 |
| FR1 | 20 | 1.3116 | 3.5071 | 2.1257 | 4.2158 |
| FR1 | 30 | 4.2204 | 4.8625 | 4.0937 | 6.0185 |
| FR2 | 10 | 0.3457 | 2.2395 | 0.2493 | 2.4491 |
| FR2 | 20 | 1.1915 | 3.8690 | 1.0974 | 4.3822 |
| FR2 | 30 | 4.4628 | 5.604 | 4.5908 | 6.3273 |
| FR3 | 10 | 1.6928 | 4.0113 | 2.1009 | 4.3074 |
| FR3 | 20 | 3.1706 | 5.8611 | 2.3815 | 6.5586 |
| FR3 | 30 | 6.8354 | 6.9782 | 7.0605 | 8.7567 |
| FR4 | 10 | 0.3417 | 1.3213 | 0.2092 | 1.6307 |
| FR4 | 20 | 1.0942 | 2.5735 | 0.3275 | 3.5269 |
| FR4 | 30 | 4.1734 | 3.263 | 3.5545 | 5.4870 |
| FR5 | 10 | 1.005 | 2.4054 | 0.2591 | 3.0209 |
| FR5 | 20 | 1.5711 | 4.1026 | 1.228 | 5.4771 |
| FR5 | 30 | 6.3324 | 5.6279 | 6.0702 | 7.7098 |
Table 4. RMSE and SRE Comparison Result for Urban data.
| Metric | CLSUnSAL | SUnSAL-TV | ADSpLRU | J-LASU |
|--------|----------|-----------|---------|--------|
| RMSE | 0.2135 | 0.2003 | 0.2077 | 0.1948 |
| SRE (dB) | 4.6831 | 5.4738 | 5.0805 | 5.8719 |
Table 5. Comparison of running times for DS-data experiment.
| Algorithm | CLSUnSAL | SUnSAL-TV | ADSpLRU | J-LASU |
|-----------|----------|-----------|---------|--------|
| Time/iteration (s) | 0.92 | 0.54 | 0.24 | 2.77 |

