Article

Variational Low-Rank Matrix Factorization with Multi-Patch Collaborative Learning for Hyperspectral Imagery Mixed Denoising

1 School of Software Engineering, Xi’an Jiaotong University, No.28, Xianning West Road, Xi’an 710049, China
2 Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, 266 Xinglong Section of Xifeng Road, Xi’an 710126, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(6), 1101; https://doi.org/10.3390/rs13061101
Submission received: 1 February 2021 / Revised: 11 March 2021 / Accepted: 12 March 2021 / Published: 14 March 2021
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

In this study, multi-patch collaborative learning is introduced into variational low-rank matrix factorization to suppress mixed noise in hyperspectral images (HSIs). Firstly, based on spatial consistency and nonlocal self-similarity, the HSI is partitioned into overlapping full-band patches. A similarity metric built on fused features is exploited to select the most similar patches and construct the corresponding collaborative patches. Secondly, considering that the latent clean HSI holds the low-rank property across the spectra whereas the noise component does not, variational low-rank matrix factorization is proposed in the Bayesian framework for each collaborative patch. Using a Gaussian distribution adaptively adjusted by a gamma distribution, the noise-free data can be learned by exploring the low-rank properties of collaborative patches in the spatial/spectral domain. Additionally, a Dirichlet process Gaussian mixture model, constructed from the Gaussian distribution, the inverse Wishart distribution, and the Dirichlet process, is utilized to approximate the statistical characteristics of the mixed noise. Finally, variational inference is utilized to estimate all variables and solve the proposed model with closed-form equations. Widely used datasets with different settings are adopted to conduct experiments. The quantitative and qualitative results indicate the effectiveness and superiority of the proposed method in reducing mixed noise in HSIs.

Graphical Abstract

1. Introduction

Hyperspectral images (HSIs) are acquired by hyperspectral sensors, represented as a 3D data-cube containing both rich spectral and spatial information. Due to the limitations of the acquisition and transmission process, HSIs unavoidably suffer from various degradations, such as noise contamination, stripe corruption, missing data relating to the voxels in the data-cube or entire spectral bands [1,2,3,4,5]. These degradations severely limit the quality of the images and influence the precision of the subsequent processing, including unmixing, target detection, and classification [6,7,8,9]. Therefore, image restoration is of critical importance and challenging in the preprocessing stage of HSI analysis.
Previously, traditional 2D or 1D denoising models were applied to reduce noise in HSIs pixel-by-pixel [10] or band-by-band [11]; however, these methods ignore the correlations between different spectral bands or adjacent pixels and often produce relatively low-quality results. To further enhance the denoising performance, more efficient methods have been proposed, whose key point is to elaborately encode prior knowledge about the structure underlying a natural HSI, especially its characteristics across the spatial and spectral dimensions.
Othman and Qian [12] made an initial attempt to resolve this issue by designing a hybrid spatial–spectral derivative-domain wavelet shrinkage model, which was constructed by exploring the dissimilarity of the signal regularity existing along the space and spectrum of a natural HSI. Fu et al. [13] proposed an effective restoration model by considering the underlying sparsity across the spatial–spectral domain, the high correlation across spectra, and the non-local self-similarity over space. Meanwhile, a series of methods expanding the wavelet-based approach from 2D to 3D has been proposed; for example, the so-called “non-local means” filtering approach has become popular in image processing, and extensions have been developed to denoise structural 3D images [14]. Letexier et al. [15] proposed a generalized multi-dimensional Wiener filter for denoising hyperspectral images. Similarly, Chen et al. [16] extended Sendur and Selesnick’s bivariate wavelet thresholding from 2D image denoising to 3D data-cube denoising. To obtain better denoising results, Maggioni et al. presented the BM4D model [17] as an extension of BM3D [11]. Utilizing highly correlated spectral information and highly similar spatial information, a spectral–spatial adaptive sparse representation model was proposed for reducing the noise in HSIs [18]. By explicitly treating HSI data as a 3D cube, denoising models based on tensor decomposition have appeared. In Reference [19], a novel coupled spectral–spatial tensor representation framework was proposed for noise reduction of hyperspectral images, and Chen et al. proposed a low-rank tensor decomposition model for HSI restoration [20]. However, most of the above-mentioned approaches are limited by their insufficient usage of the correlations in the spectral domain, which results in suboptimal performance when suppressing mixed noises.
By efficiently exploring the latent knowledge across spectral bands for HSIs, low-rank models have been proposed and widely utilized to restore the pure datasets from the degraded images, with competitive performances [21,22,23,24,25,26]. The classical low-rank matrix factorization (LRMF) model was presented by K. Mitra et al. and T. Okatani et al. [24]. Subsequently, using the low-rank matrix recovery (LRMR) framework, an HSI restoration technique was explored to simultaneously remove various noises in an HSI [25]. The global and non-local low-rank factorization (GLF) was proposed to suppress the noises in HSIs by utilizing the low dimensional sub-spaces and the self-similarity of the real HSI [26]. These approaches obtained satisfactory results by effectively exploiting the spectral information.
To sufficiently enhance the denoising performance, it is necessary to integrate the spatial characteristics of HSIs into low-rank-based models [27,28,29,30,31,32]. Wang et al. [29] proposed a novel low-rank constraint and spatial–spectral total variation regularization model by jointly utilizing the global low-rank and local spatial–spectral smooth properties of HSIs. Wang et al. [30] developed a total variation regularized low-rank tensor decomposition (LRTDTV) method, in which the HSI was regarded as a third-order tensor rather than being divided into patches or unfolded into a matrix. In [31], a novel robust principal component analysis approach was introduced into the spatial–spectral low-rank model for mixed noise removal by fully identifying the intrinsic structures of the mixed noise and clean HSIs. Based on the global correlation along the spectra and the nonlocal self-similarity across space, a low-rank tensor dictionary learning (LTDL) approach was explored with satisfactory performance in [32]. In the spatial domain, HSIs exhibit latent consistency; exploiting this, patch learning has been widely applied to depict spatial information and has achieved good performance [33,34,35,36,37]. When an HSI is heavily polluted by noise, however, patches containing little effective information cannot be directly used to recover the noise-free data. Consequently, these methods cannot efficiently learn and represent the intrinsic spatial consistency and nonlocal similarities of HSIs, which limits their denoising performance.
Deep learning has also been widely used for HSIs [38,39,40,41,42,43], and the success of these approaches suggests its effectiveness for learning and depicting latent features when denoising an HSI. Additionally, hyperspectral images are usually polluted by various noises with different statistical features [44,45], such as signal-dependent noise, noise depending on the spatial or spectral domain, and mixed noise. Therefore, it is necessary to construct a model that suppresses complex mixed noise in order to deal with real HSI scenarios.
To alleviate the above limitations, a variational low-rank matrix factorization model, combined with multi-patch collaborative learning (VLRMFmcl), is proposed in the Bayesian framework to suppress various noises in HSIs. The main contributions of this work are summarized as follows.
(1) Multi-patch collaborative learning is exploited to effectively depict and learn the spatial consistency and the nonlocal self-similarity in the HSI. The pixels within the same collaborative patches share similar spatial–spectral characteristics, which are utilized to effectively improve the performance of denoising the patches polluted by heavy noises.
(2) Variational low-rank matrix factorization is proposed to learn and characterize the collaborative patch data by exploring latent characteristics across the spatial–spectral domain, in which the latent clean image in degraded HSI has the property of low rank and the mixed noises do not. The Gaussian distribution with zero mean and the variance adjusted by gamma distribution is explored to represent the latent clean image. The Dirichlet process Gaussian mixture is exploited to depict the inherent statistical features of different noises in the HSI, which are adapted and learned by exploring the Gaussian distribution, the inverse Wishart distribution, and the Dirichlet process. Through this process, the underlying mixed noise of the HSI can be fit adaptively without needing to know specific noise types or intensity.
(3) Considering the uncertainty of the latent variables, the posteriors of the latent clean image and the mixed noises are both explicitly parameterized and updated in closed form by utilizing variational inference. The feasibility and validity of the VLRMFmcl method are evaluated under different experimental conditions. Compared with several popular denoising methods, VLRMFmcl can reduce the noise in hyperspectral images while preserving structural information.
The paper is organized as follows. Section 2 gives a detailed description of the proposed restoration model, which is performed using variational inference. In Section 3, several experimental results are presented by utilizing the real-world HSI datasets. The conclusions are given in Section 4.

2. Proposed Model

To effectively suppress the various noises in HSIs, multi-patch collaborative learning is explored to represent the intrinsic spatial consistency and non-local self-similarity of HSIs. Then, the learned patches are input into the variational low-rank matrix factorization model, which is developed to suppress the mixed noises of each patch in the Bayesian framework. Figure 1 presents the framework of the proposed VLRMFmcl.

2.1. Multi-Patch Collaborative Learning

In HSIs, the adjacent pixels have high consistency in the space domain [26,27,28,29]. Based on this fact, they are often divided into overlapping three-dimensional patches for the HSI analysis. The effective information of one patch is very scarce when most of the pixels within this patch are polluted by a large amount of noise. Therefore, it is very difficult to recover the noiseless data by directly exploiting this image patch. To solve these problems, it becomes important to effectively utilize the patches in HSIs.
Figure 2 presents some patches from the Pavia Centre data (presented in Section 3.2), in which the area marked by the red box represents a test patch, and the ones marked by three green boxes represent its neighboring patches. The similarities between the patch marked by the red box and those marked by the green boxes are very different. Inspired by this basic characteristic of the HSI, heavily polluted patches can be restored with the help of patches that have high similarity to them. Additionally, it has been argued that the “collaborative” nature of the approximation can improve classification accuracy [46]. Considering that HSI denoising aims to facilitate subsequent applications (e.g., classification), a similar “collaborative” [46] nature is introduced here. Based on these observations, a multi-patch collaborative learning strategy is proposed that explores the similarities between different image patches to effectively learn the HSI.
Given a hyperspectral image X, it is segmented into overlapping three-dimensional patches of size $d \times d \times \lambda$, where $d$ represents the spatial size of the patches and $\lambda$ is the total number of bands in the HSI. For a pixel $x_i$ from X, the collection $N(x_i)$, $1 \le i \le d^2$, consists of all pixels of a patch centered at the sample $x_i$. The pixels in $N(x_i)$ can be considered to share similar characteristics. The vector $y_i$ is formed by stacking all pixels of $N(x_i)$, and can be regarded as the fused feature of $x_i$. The similarity between different patch data can be formulated as
$$\mathrm{SimilarIndex}(y_i, y_j) = \exp\!\left(-\frac{1}{d^2\lambda}\left|I_{1\times d^2\lambda}\,y_i - I_{1\times d^2\lambda}\,y_j\right|\right), \quad (1)$$
where $I_{1\times d^2\lambda}$ is the row vector of dimension $d^2\lambda$ whose elements all equal one. Obviously, the larger the value of $\mathrm{SimilarIndex}(y_i, y_j)$, the higher the similarity between $N(x_i)$ and $N(x_j)$. According to Equation (1), we can select the $(P-1)$ patches most similar to the patch centered at $x_i$ and construct the non-local patch data $Y_i$. When $d$ is large enough, the search can be regarded as covering all the most similar data in the whole hyperspectral image.
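As a concrete illustration, the similarity of Equation (1) and the construction of a collaborative group can be sketched as follows. This is a minimal NumPy sketch on random toy data; the function names and sizes are illustrative, not the authors' implementation.

```python
import numpy as np

def similar_index(y_i, y_j):
    # Equation (1): the row vector of ones times y reduces to the sum of the
    # fused feature vector, so the similarity is the exponential of the
    # negative mean absolute difference of those sums.
    m = y_i.size                              # m = d^2 * lambda
    return np.exp(-np.abs(y_i.sum() - y_j.sum()) / m)

def collaborative_group(patches, i, P):
    """Indices of patch i together with its (P-1) most similar patches.
    `patches` is an (N, d*d*lam) array of vectorized full-band patches."""
    scores = np.array([similar_index(patches[i], p) for p in patches])
    scores[i] = -np.inf                       # exclude the patch itself
    nearest = np.argsort(scores)[::-1][:P - 1]
    return np.concatenate(([i], nearest))

rng = np.random.default_rng(0)
patches = rng.random((20, 4 * 4 * 8))         # 20 patches, d = 4, lambda = 8
group = collaborative_group(patches, 0, P=5)  # patch 0 plus its 4 most similar
```

Identical fused features give a similarity of exactly one, and the score decays toward zero as the summed brightness of two patches diverges.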

2.2. Variational Low-Rank Matrix Decomposition

In the existing literature, many computer vision, machine learning, and statistical problems can be approached by solving and learning a low-dimensional linear model. Accordingly, low-rank matrix decomposition has attracted wide attention and been applied in many fields [3,4,5,6], as it can effectively explore the low-dimensional properties of the observed data. Assuming $X = [y_1,\ldots,y_H] \in \mathbb{R}^{M\times H}$ represents the observed data, where $M$ and $H$ represent the size of X, the general low-rank matrix decomposition model is formulated as
$$X = UV^{T} + n, \quad (2)$$
where $U = [u_1,\ldots,u_L] \in \mathbb{R}^{M\times L}$ and $V = [v_1,\ldots,v_H]^T \in \mathbb{R}^{H\times L}$ represent the decomposition matrices, with $L \ll \min(M, H)$; $n$ is the noise, which is depicted by a Gaussian distribution, Laplace distribution, or polynomial distribution.
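To make the rank-$L$ structure of Equation (2) concrete, the following sketch recovers a low-rank signal from noisy observations with a truncated SVD, which gives the best rank-$L$ approximation in the Frobenius norm. This illustrates the low-rank principle only and is not the Bayesian estimator proposed in this paper; the sizes and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
M, H, L = 60, 40, 3
U_true = rng.standard_normal((M, L))
V_true = rng.standard_normal((H, L))
X = U_true @ V_true.T + 0.01 * rng.standard_normal((M, H))  # X = UV^T + n

# Truncated SVD: keep only the L dominant singular triplets.
u, s, vt = np.linalg.svd(X, full_matrices=False)
X_hat = u[:, :L] @ np.diag(s[:L]) @ vt[:L]

# Most of the small dense noise lives outside the rank-L subspace,
# so the relative reconstruction error is small.
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```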
Obviously, the pixels of the same collaborative patch have similar characteristics in the spatial and spectral domains. In other words, these pixels have the low-rank property and can be effectively learned and expressed by low-rank matrix decomposition. Additionally, hyperspectral images are usually polluted by various noises with different statistical properties. The Gaussian mixture model can effectively learn and depict these different noises, including Gaussian noise, sparse noise, and so on. Accordingly, a noise model is explored to depict the complex noises in real HSIs, in which the Dirichlet process is utilized to adaptively select the Gaussian components and determine their number. The symbol $Y = \{y_i\}_{i=1}^{P}$ represents the collaborative patch data, and $M = d^2\lambda$ represents the dimension of the sample $y_i$. According to Equation (2), the proposed Bayesian low-rank matrix decomposition model for denoising the collaborative patch data can be written as follows:
$$Y = UV^{T} + n. \quad (3)$$
The first term is the low-rank decomposition term, in which $u_i \in \mathbb{R}^{M}$ and $v_j \in \mathbb{R}^{L}$ are defined as $u_i \sim \mathcal{N}(0, \tau_{u_i}^{-1}I)$ and $v_j \sim \mathcal{N}(0, \tau_{v_j}^{-1}I)$, respectively. That is, $u_i$ and $v_j$ are drawn from Gaussian distributions with zero mean and precisions $\tau_{u_i}$ and $\tau_{v_j}$, individually. In order to improve the model robustness and reduce the sensitivity to parameters, the gamma distribution is introduced to adaptively adjust the parameters $\tau_{u_i}$ and $\tau_{v_j}$. The first term can be formulated as:
$$u_i \sim \mathcal{N}(0, \tau_{u_i}^{-1}I), \quad v_j \sim \mathcal{N}(0, \tau_{v_j}^{-1}I), \qquad \tau_{u_i} \sim \Gamma(a_0, b_0), \quad \tau_{v_j} \sim \Gamma(c_0, d_0), \quad (4)$$
where $I$ is the identity matrix, and $a_0$, $b_0$, $c_0$, and $d_0$ represent the hyper-parameters of the gamma distributions.
The second term in Equation (3) represents the mixed noises in a real HSI. Considering the complex statistical properties of these noises, the Gaussian mixture model is utilized to depict them, which is formulated as
$$n_i \sim \prod_{k=1}^{K} \mathcal{N}(\mu_k, \Sigma_k)^{z_{ijk}}, \quad \mu_k \sim \mathcal{N}(\mu_0, \Sigma_0), \quad \Sigma_k \sim i\mathrm{Wishart}(e_0, f_0), \quad z_{ij} \sim \mathrm{Mult}(\pi), \quad \pi_t = v_t \prod_{j=1}^{t-1}(1 - v_j), \quad v_t \sim \mathrm{Beta}(1, \beta). \quad (5)$$
In Equation (5), $\mu_k$ and $\Sigma_k$ are the mean and covariance of the $k$-th Gaussian distribution, which are learned and represented by the Gaussian distribution and the inverse Wishart distribution; these two distributions are conjugate. $\mu_0$ and $\Sigma_0$ represent the mean and covariance of the parameter $\mu_k$, and $e_0$ and $f_0$ are the degrees of freedom and scale matrix of the inverse Wishart distribution. To effectively depict the various noises in a data-driven way, the indicator variable $z_{ij} \in \{0,1\}^K$, with $\sum_k z_{ijk} = 1$, is introduced to determine and learn the number and mode of the Gaussian components. $z_{ij}$ is drawn from the multinomial distribution with parameter $\pi$, which is learned through the Dirichlet process. Figure 3 shows the graphical representation of the Bayesian low-rank matrix decomposition model.
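The stick-breaking construction of the mixing weights $\pi$ in Equation (5) can be sketched as follows. The truncation level `T` is illustrative, and this sketch only samples the prior, not the variational posterior used in the paper.

```python
import numpy as np

def stick_breaking(beta, T, rng):
    """Truncated stick-breaking weights: pi_t = v_t * prod_{j<t}(1 - v_j),
    with v_t ~ Beta(1, beta), as in the Dirichlet-process prior of Eq. (5)."""
    v = rng.beta(1.0, beta, size=T)
    # Length of stick remaining before each break: 1, (1-v_1), (1-v_1)(1-v_2), ...
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(2)
pi = stick_breaking(beta=2.0, T=50, rng=rng)
```

Smaller `beta` concentrates mass on the first few components, which is how the process adaptively keeps only as many Gaussian components as the noise actually needs.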
Additionally, $Y = \Delta \odot Y$ is introduced into Equation (3) when recovering missing pixels, where $\Delta \in \{0,1\}^{M\times P}$ is the sampling matrix whose elements are equal to 0 or 1. Here, $\Delta_{fi} = 0$ $(f = 1,\ldots,M)$ indicates that the $f$-th element of $y_i$ was lost during acquisition, while $\Delta_{fi} = 1$ means that the $f$-th element of $y_i$ was effectively collected.

2.3. Variational Bayesian Inference

According to Equations (3) and (4) and Figure 3, it can be observed that all the variables in the proposed Bayesian low-rank matrix decomposition model satisfy conjugacy. Therefore, variational Bayesian inference can be used to solve the model. Let $\Psi = \{u_i, v_j, \mu_k, \Sigma_k, z_{ij}, \tau_{u_i}, \tau_{v_j}, v_t\}$ represent the variables of the proposed model, and let $\Theta = \{a_0, b_0, c_0, d_0, \mu_0, \Sigma_0, e_0, f_0, \beta\}$ be the corresponding hyperparameters. Variational Bayesian inference estimates the posterior distribution of the latent variables $\Psi$ given the observed data $Y$ and the hyperparameters $\Theta$. To solve the proposed model, the true posterior distribution $p(\Psi \mid Y, \Theta)$ of the latent variables $\Psi$ is approximated by the distribution $q(\Psi)$. Then, we can obtain
$$\ln p(Y\mid\Theta) = \ln \int p(Y,\Psi\mid\Theta)\,d\Psi = \ln \int q(\Psi)\frac{p(Y,\Psi\mid\Theta)}{q(\Psi)}\,d\Psi \ge \int q(\Psi)\ln\frac{p(Y,\Psi\mid\Theta)}{q(\Psi)}\,d\Psi = \ln p(Y\mid\Theta) - \mathrm{KL}\big(q(\Psi)\,\|\,p(\Psi\mid Y,\Theta)\big), \quad (6)$$
where $\mathrm{KL}(q(\Psi)\,\|\,p(\Psi \mid Y, \Theta))$ denotes the KL divergence between the variational approximation $q(\Psi)$ and the true posterior distribution $p(\Psi \mid Y, \Theta)$. It can easily be seen that $\ln p(Y \mid \Theta)$ has a strict lower bound because $\mathrm{KL}(q(\Psi)\,\|\,p(\Psi \mid Y, \Theta)) \ge 0$. Therefore, the optimal solution of the proposed model can be calculated by minimizing $\mathrm{KL}(q(\Psi)\,\|\,p(\Psi \mid Y, \Theta))$. Algorithm 1 presents the pseudocode of the VLRMFmcl method.
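The identity above (log-evidence equals the variational lower bound plus a nonnegative KL term) can be verified numerically on a toy discrete model; the probabilities below are arbitrary illustrative numbers.

```python
import numpy as np

# Toy model: a discrete latent variable with four states and a fixed
# observation Y, so the joint p(Y, psi) is just four numbers.
p_joint = np.array([0.10, 0.30, 0.05, 0.15])   # p(Y, psi)
p_evidence = p_joint.sum()                     # p(Y)
p_post = p_joint / p_evidence                  # p(psi | Y)

q = np.array([0.25, 0.25, 0.25, 0.25])         # variational approximation

elbo = np.sum(q * np.log(p_joint / q))         # E_q[ln p(Y, psi) - ln q(psi)]
kl = np.sum(q * np.log(q / p_post))            # KL(q || p(psi | Y))

# ln p(Y) = ELBO + KL and KL >= 0, so minimizing the KL term is the same
# as tightening the lower bound on the evidence.
```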
Algorithm 1. The VLRMFmcl Method
Input: the noisy HSI X; the spatial size $d$ of patches; the total number $\lambda$ of bands; the hyperparameters $\Theta = \{a_0, b_0, c_0, d_0, \mu_0, \Sigma_0, e_0, f_0, \beta\}$;
Output: the denoised image $Y$;
Multi-patch collaborative learning:
  Divide X into overlapping patches of size $d \times d \times \lambda$;
  for each pixel $x_i$ in X do
    Obtain the collection $N(x_i)$ and $y_i$, where $1 \le i \le d^2\lambda$;
    Calculate the similarities $\mathrm{SimilarIndex}(y_i, y_j)$ between $N(x_i)$ and $N(x_j)$;
    Select the $(P-1)$ patches most similar to the patch centered at $x_i$, and construct the collaborative patch data $Y_i$;
Variational low-rank matrix factorization:
  Initialize the variables $\Psi = \{u_i, v_j, \mu_k, \Sigma_k, z_{ij}, \tau_{u_i}, \tau_{v_j}, v_t\}$;
  Approximate $p(\Psi \mid Y, \Theta)$ by $q(\Psi)$; update $\Psi$ by minimizing $\mathrm{KL}(q(\Psi)\,\|\,p(\Psi \mid Y, \Theta))$;
return the denoised image $Y$;
The updating equations of the model variables are listed as follows.
(1) Updating $z_{ij}$ and $v_t$:
The posterior of $v_t$ is still a beta distribution. Supposing $v_t \sim \mathrm{Beta}(g_t, h_t)$, the parameters $g_t$ and $h_t$ can be calculated by
$$g_t^k = 1 + \sum_i \sum_j q_{ijk}(t), \qquad h_t^k = \beta + \sum_{l=t+1}^{T} \sum_i \sum_j q_{ijk}(l). \quad (7)$$
For the variable $z_{ij}$,
$$q(z_{ij}) = \prod_k q_{ijk}^{\,z_{ijk}}, \quad (8)$$
where
$$q_{ijk}(t) = \frac{\rho_{ijk}(t)}{\sum_{l=1}^{T} \rho_{ijk}(l)}, \qquad \rho_{ijk}(l) = \exp\!\left(\gamma_{l,1}^{ik} + \gamma_{l,2}^{ik}\right). \quad (9)$$
Supposing $\Phi$ is the digamma function, $\gamma_{l,1}^{ik}$ and $\gamma_{l,2}^{ik}$ are expressed as:
$$\gamma_{l,1}^{ik} = \Phi(g_l^k) + \Phi(h_l^k), \qquad \gamma_{l,2}^{ik} = -0.5\left\{\mathrm{tr}\!\left[\Sigma_l^{-1}\left(\mu_l\mu_l^T + y_{ij}y_{ij}^T + v_iv_i^T U^T U - 2Uv_i(y_{ij}-\mu_l)^T - 2y_{ij}\mu_l^T\right)\right] + \ln|\Sigma_l|\right\}. \quad (10)$$
(2) Updating $\mu_k$:
The posterior of $\mu_k$ is still a Gaussian distribution, which satisfies the following conditions:
$$\mu_k = \left[\Sigma_0^{-1} + \sum_{i,j} q_{ijk}(t)\,\Sigma_k^{-1}\right]^{-1}\left[\Sigma_0^{-1}\mu_0 + \sum_{i,j} q_{ijk}(t)\,\Sigma_k^{-1}\left(y_{ij} - Uv_i\right)\right], \qquad \left\langle\mu_k\mu_k^T\right\rangle = \left[\Sigma_0^{-1} + \sum_{i,j} q_{ijk}(t)\,\Sigma_k^{-1}\right]^{-1} + \mu_k\mu_k^T. \quad (11)$$
(3) Updating $\Sigma_k$:
The posterior of $\Sigma_k$ is still an inverse Wishart distribution, which satisfies the following conditions:
$$e_t = e_0 + \varsigma_t, \qquad f_t = f_0 + 0.5\sum_{i,j} q_{ijk}(t)\left[\mu_t\mu_t^T + y_{ij}y_{ij}^T + u_iu_i^Tv_j^Tv_j - 2(y_{ij}-\mu_t)(Uv_j)^T - 2y_{ij}\mu_t^T\right]. \quad (12)$$
From Equation (12), we can obtain the following expressions:
$$\left\langle\Sigma_t^{-1}\right\rangle = e_t f_t^{-1}, \qquad \left\langle\ln|\Sigma_t|\right\rangle = -\Phi(0.5\,e_t) + d^2\lambda\ln 2 + \ln\left|f_t^{-1}\right|. \quad (13)$$
(4) Updating $u_i$:
The posterior of $u_i$ is still a Gaussian distribution, which is formulated as
$$u_i \sim \mathcal{N}\left(\mu_{u_i}, \Omega_{u_i}^{-1}\right), \quad (14)$$
where the mean $\mu_{u_i}$ and the precision $\Omega_{u_i}$ are given by
$$\Omega_{u_i} = \tau_{u_i} I + \sum_k \Sigma_k^{-1} \sum_j z_{ijk}\, v_j^T v_j, \qquad \mu_{u_i} = \Omega_{u_i}^{-1} \sum_k \Sigma_k^{-1} \sum_j z_{ijk}\,\left(y_{ij} - \mu_k\right) v_j. \quad (15)$$
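To make the Gaussian update for $u_i$ concrete, the sketch below specializes it to a single zero-mean Gaussian noise component, where the posterior over each row of U given V reduces to a ridge-regularized least-squares solve with precision $\tau I + \sigma^{-2}V^TV$. The dimensions, $\sigma$, and $\tau$ are illustrative, and the full model's mixture responsibilities are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
M, P, L = 30, 25, 4
U_true = rng.standard_normal((M, L))
V = rng.standard_normal((P, L))                 # rows are v_j
sigma, tau = 0.1, 1.0
Y = U_true @ V.T + sigma * rng.standard_normal((M, P))

# Posterior precision and mean for every row of U at once, for a single
# zero-mean Gaussian noise component with covariance sigma^2 I.
Omega = tau * np.eye(L) + (V.T @ V) / sigma**2
U_post = np.linalg.solve(Omega, V.T @ Y.T / sigma**2).T   # shape (M, L)

rel_err = (np.linalg.norm(U_post @ V.T - U_true @ V.T)
           / np.linalg.norm(U_true @ V.T))
```

The prior precision `tau` acts as the shrinkage term: with little data it pulls the posterior mean toward zero, while abundant observations let the data term dominate.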
(5) Updating $\tau_{u_i}$:
The posterior of $\tau_{u_i}$ is still a gamma distribution, which satisfies the following conditions:
$$\tau_{u_i} \sim \Gamma(a, b_i), \quad (16)$$
where the parameters $a$ and $b_i$ are given by
$$a = a_0 + 0.5\,LM, \qquad b_i = b_0 + 0.5\,u_i^T u_i. \quad (17)$$
(6) Updating $v_j$:
The posterior of $v_j$ is still a Gaussian distribution, which can be written as follows:
$$v_j \sim \mathcal{N}\left(\mu_{v_j}, \Omega_{v_j}^{-1}\right), \quad (18)$$
where the mean $\mu_{v_j}$ and the precision $\Omega_{v_j}$ are:
$$\Omega_{v_j} = \tau_{v_j} I + \sum_k \Sigma_k^{-1} \sum_i z_{ijk}\, u_i u_i^T, \qquad \mu_{v_j} = \Omega_{v_j}^{-1} \sum_k \Sigma_k^{-1} \sum_i z_{ijk}\,\left(y_{ij} - \mu_k\right) u_i. \quad (19)$$
(7) Updating $\tau_{v_j}$:
The posterior of $\tau_{v_j}$ is still a gamma distribution, which satisfies the following conditions:
$$\tau_{v_j} \sim \Gamma(c, d_j), \quad (20)$$
where the parameters $c$ and $d_j$ are given by
$$c = c_0 + 0.5\,LP, \qquad d_j = d_0 + 0.5\,v_j^T v_j. \quad (21)$$

3. Experiments

To validate the effectiveness of the proposed VLRMFmcl model, three popular hyperspectral images were chosen as the experimental datasets: the Beads, Pavia Centre, and Urban datasets. In addition, BM3D [11], ANLM3D [14], BM4D [17], LRMR [25], GLF [26], LRTDTV [30], LTDL [32], DnCNN [42], and HSID-CNN [43] were chosen as the compared methods. The necessary parameters of the BM3D, ANLM3D, and LRTDTV methods were automatically or manually adjusted to generate the optimal denoising results, as their references suggest. In BM4D, the noise variance was selected from the set {0.01, 0.03, 0.04, 0.05, 0.07, 0.09, 1.1}. In LRMR, the rank of the noiseless matrix was chosen from {3, 5, 6, 7, 9, 11}, and the cardinality of the sparse term was chosen from the set {0, 500, 1000, 1500, 2000, 3000, 4000, 5000}. In GLF, the number of subspaces was chosen from the set {5, 8, 9, 11, 13}. In LTDL, the noise variance was selected from the set {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.4}. In the DnCNN and HSID-CNN methods, the pre-trained weights and the related settings were utilized to conduct the experiments as their references suggest.
In addition, five metrics were chosen to numerically evaluate the denoising performance of the different algorithms: the peak signal-to-noise ratio (PSNR), feature similarity (FSIM) [47], the mean spectral angle (MSA), noise reduction (NR) [48,49], and the mean relative deviation (MRD) [48,49]. At the same time, the visual effect was used as an intuitive way to judge the denoising performance. Suppose $I_{den}$ and $I_{ref}$ represent the denoised and reference images, respectively, and $I_1$ and $I_2$ represent the spatial dimensions of the image.
(a) The greater the value of PSNR, the better the quality of the denoised image. PSNR (in dB) is formulated as:
$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{255^2\, I_1 I_2}{\left\|I_{ref} - I_{den}\right\|^2}\right). \quad (22)$$
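A direct NumPy transcription of the PSNR formula above (assuming an 8-bit dynamic range, as the 255 factor implies):

```python
import numpy as np

def psnr(ref, den):
    """PSNR in dB between a reference and a denoised band (8-bit range)."""
    mse = np.mean((ref.astype(np.float64) - den.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

ref = np.full((8, 8), 100.0)
den = ref + 5.0                 # constant error of 5 gray levels, MSE = 25
value = psnr(ref, den)          # 10 * log10(255^2 / 25) ≈ 34.15 dB
```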
(b) The greater the value of FSIM, the better the quality of the denoised image. FSIM is formulated as:
$$\mathrm{FSIM} = \frac{\sum_{x\in\Omega} S_l(x)\, PC_m(x)}{\sum_{x\in\Omega} PC_m(x)}, \quad (23)$$
where $S_l(x)$ is derived from the phase congruency and the image gradient magnitude of $I_{den}$ and $I_{ref}$; $PC_m(x)$ is the maximum of the phase congruencies $PC_{den}$ (for $I_{den}$) and $PC_{ref}$ (for $I_{ref}$); and $\Omega$ represents the entire spatial domain of the image.
(c) MSA was used to estimate the spectral fidelity between the denoised images and the reference images in the spectral domain. The smaller the value of MSA, the better the spectral fidelity of the restoration algorithm. MSA is calculated by:
$$\mathrm{MSA} = \frac{1}{I_1 I_2}\sum_{i=1}^{I_1}\sum_{j=1}^{I_2}\cos^{-1}\!\left(\frac{I_{ij}^{den}\cdot I_{ij}^{ref}}{\left\|I_{ij}^{den}\right\|\left\|I_{ij}^{ref}\right\|}\right). \quad (24)$$
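The MSA definition above maps directly to NumPy; the toy cubes below are illustrative, and a small epsilon guards against zero norms:

```python
import numpy as np

def msa(ref, den, eps=1e-12):
    """Mean spectral angle (radians) over all pixels.
    `ref` and `den` are (rows, cols, bands) cubes."""
    dots = np.sum(ref * den, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(den, axis=-1)
    angles = np.arccos(np.clip(dots / (norms + eps), -1.0, 1.0))
    return angles.mean()

ref = np.ones((4, 4, 10))
identical = msa(ref, ref.copy())     # identical spectra  -> angle ~ 0
scaled = msa(ref, 3.0 * ref)         # uniform scaling preserves the angle
```

Because the angle is scale-invariant per pixel, MSA isolates spectral-shape distortion from brightness changes, which is exactly why it complements PSNR.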
(d) NR was used to evaluate the noise reduction of the different restoration methods in the frequency domain. The greater the value of NR, the better the performance of the denoising algorithm. NR is formulated as
$$\mathrm{NR} = N_0 / N_1, \quad (25)$$
where $N_1$ is the power of the frequency components generated by stripes in the restored image and $N_0$ is the corresponding power for the reference image. $N_1$ and $N_0$ can be obtained by
$$N_c = \sum_{D\in S} P_c(D), \quad (26)$$
where $P_c(D)$ is the averaged power spectrum down the columns of an image, $D$ is the distance from its reference image in Fourier space, and $S$ is the stripe-noise region of the spectrum.
(e) MRD was utilized to compare the degree of distortion between selected noiseless regions of the restored images and the reference images. The smaller the value of MRD, the smaller the image distortion. In the experiment, a 10 × 10 window was selected to calculate the MRD value. MRD is formulated as:
$$\mathrm{MRD} = \frac{1}{I_1 I_2}\sum_{i=1}^{I_1}\sum_{j=1}^{I_2}\frac{\left|I_{ij}^{den} - I_{ij}^{ref}\right|}{I_{ij}^{ref}} \times 100\%. \quad (27)$$
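The MRD computation over a noiseless window can be sketched as follows (toy values; the reference window is assumed to contain no zeros):

```python
import numpy as np

def mrd(ref, den):
    """Mean relative deviation (percent) over a selected window."""
    return np.mean(np.abs(den - ref) / ref) * 100.0

ref = np.full((10, 10), 200.0)       # a 10x10 noiseless reference window
den = ref * 1.02                     # uniform 2% deviation
value = mrd(ref, den)                # -> 2.0 (percent)
```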

3.1. Experiment on the Beads Data Set

The Beads data set, acquired from Columbia University, has a spectral resolution of 10 nm and a spectral range from 400 nm to 700 nm. The data set has a total of 31 consecutive spectral bands, and each band has a spatial size of 512 × 512 pixels.
Three kinds of noises were considered in the simulation experiment. The detailed descriptions are listed as follows.
(1) Gaussian white noise with the mean 0 and fixed variance.
(2) Poisson noise was added by adjusting the ratio between the maximum brightness and the initial image brightness, written as $X_{poisson} = X \cdot peak$, where $X_{poisson}$ represents the image polluted by Poisson noise, $X$ is the initial image data, and $peak$ refers to the intensity of the Poisson noise. To reduce the Poisson noise, we utilized settings similar to those in [50]: the variance-stabilizing transformation (VST) was utilized to convert the Poisson noise into Gaussian noise before applying each denoising approach, and the final denoised images were obtained by the inverse variance-stabilizing transformation.
(3) Sparse noise is added to the randomly selected pixels by utilizing uniform distribution with the interval [−10, 10].
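The three degradations can be simulated on a toy cube as follows. The Poisson scaling via `peak` follows the description above, but the exact normalization in the paper's experiments may differ, and all sizes and rates here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
clean = rng.random((32, 32, 16))          # toy HSI cube in [0, 1]
noisy = clean.copy()

# (1) zero-mean Gaussian noise with a fixed variance
noisy += rng.normal(0.0, 0.1, clean.shape)

# (2) Poisson noise whose strength is governed by `peak`
#     (a lower peak means stronger noise); this scaling is illustrative
peak = 30
noisy += rng.poisson(clean * peak) / peak - clean

# (3) sparse noise: uniform values on [-10, 10] at randomly chosen voxels
mask = rng.random(clean.shape) < 0.05     # about 5% of voxels
noisy[mask] += rng.uniform(-10.0, 10.0, mask.sum())
```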
Mixed noise, consisting of zero-mean Gaussian noise with variance σ = 0.1 and Poisson noise with $peak = \{5, 10, 20, 30, 50, 70, 100, 130, 160\}$, was added to the Beads data. Then, the nine compared methods and the proposed one were utilized to restore the noisy Beads data. The performance curves of the simulation experiments are shown in Figure 4, where the vertical coordinates represent the values of PSNR, FSIM, and MSA, respectively, and the horizontal coordinates represent the value of the parameter $peak$. Comparing the curves of PSNR, FSIM, and MSA, it can clearly be observed that both the PSNR and FSIM values of the VLRMFmcl method were higher than those of the BM3D, ANLM3D, BM4D, LRMR, GLF, LRTDTV, DnCNN, and HSID-CNN methods. At the same time, the MSA of VLRMFmcl was lower than that of these eight compared algorithms. Compared with LTDL, the proposed model is superior in PSNR; for FSIM and MSA, it showed better values than LTDL in most cases. These facts indicate that VLRMFmcl can effectively improve the quality of a noisy HSI by better maintaining the image feature information and restoring the spectral information. In addition, the performance curves of the VLRMFmcl method are smoother than those of the nine comparison algorithms, which means VLRMFmcl is more stable when denoising the HSI.
Figure 5 shows the restored images of band 27 obtained by different models, which are polluted by Gaussian noise, sparse noise, and missing pixels. Compared with the noisy image in Figure 5b, the quality of Figure 5c–l is significantly improved. According to Figure 5, it can be seen that VLRMFmcl can effectively reduce the various forms of noise in the HSI with a large difference in brightness. The denoising results can preserve the structural information and the edges of the homogeneity region. Using patch-matching three-dimensional filtering, BM3D smoothed out some feature structures and blurred the visual effect while suppressing the different noises and restoring the missing pixels. ANLM3D showed better visual performance than BM3D and DnCNN; however, ANLM3D had a weaker ability to recover the detailed information in the HSI. The restored images obtained by BM4D were too smooth and lost some information. As shown in Figure 5f, the results of the LRMR method were significantly better than BM3D, ANLM3D, and DnCNN, but the results of LRMR still showed obvious sparse noise. Utilizing low-rank factorization of tensors constructed by nonlocal similar 3D patches, GLF was able to recover the basic shapes of the Beads dataset, but its result lacked sharpness. As shown in Figure 5, the results of LRTDTV, LTDL, and VLRMFmcl were much better than those of BM3D, ANLM3D, BM4D, LRMR, GLF, DnCNN, and HSID-CNN. In general, the VLRMFmcl method can remove the mixed noises and restore the missing pixels of the Beads data.
To make a more intuitive comparison of the different algorithms, Figure 6 shows the pseudo-color images of the restored images (R: 3, G: 12, B: 25). As can be seen in Figure 6, the denoising results of VLRMFmcl were better than those of BM3D, ANLM3D, BM4D, LRMR, GLF, LRTDTV, LTDL, DnCNN, and HSID-CNN. Additionally, the restored results of VLRMFmcl were very similar to the reference images, which can be easily observed by comparing Figure 5a,l.

3.2. Experiment on the Pavia Centre Dataset

The Pavia Centre dataset was acquired by the Reflective Optics System Imaging Spectrometer. It contains 115 bands, and each band consists of 1096 × 715 pixels. The spectral range of the Pavia Centre dataset is from 0.43 μm to 0.86 μm. After removing 13 noisy bands, the remaining 102 bands were used in the following analysis. In the experiment, a subset of the Pavia Centre with the size of 400 × 400 × 102 was used.
Three kinds of noise were considered for the Pavia Centre dataset: (1) zero-mean Gaussian white noise with noise level σ = 0.1; (2) sparse noise added to randomly selected pixels, drawn from the uniform distribution on the interval [−5, 5]; and (3) deadlines added at the same positions in the selected bands of the HSI, with widths varying from one to three lines.
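The three corruption settings above can be reproduced with a short NumPy sketch. The sparse-noise ratio and the number of deadline-affected bands below are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_mixed_noise(clean, sigma=0.1, sparse_ratio=0.1, n_deadline_bands=5):
    """Corrupt an H x W x B cube with the three noise types described above."""
    noisy = clean + rng.normal(0.0, sigma, clean.shape)     # (1) Gaussian noise
    mask = rng.random(clean.shape) < sparse_ratio           # (2) sparse noise in [-5, 5]
    noisy[mask] += rng.uniform(-5.0, 5.0, int(mask.sum()))
    bands = rng.choice(clean.shape[2], n_deadline_bands, replace=False)
    col = int(rng.integers(0, clean.shape[1] - 3))          # (3) deadlines at the same
    width = int(rng.integers(1, 4))                         #     position, 1-3 lines wide
    noisy[:, col:col + width, bands] = 0.0
    return noisy

clean = rng.random((50, 50, 102))  # toy stand-in for the Pavia Centre subset
noisy = add_mixed_noise(clean)
```

Setting the deadlines at one shared column position across the selected bands mirrors the description above.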
Table 1 shows the PSNR, FSIM, and MSA evaluation results calculated for the different approaches; the bold numbers indicate the best results. By utilizing the nonlocal self-similarity and adaptively learning the noise in the HSI, the VLRMFmcl method achieved the best PSNR and MSA values and the second-best FSIM value among the compared methods. Compared with the noisy image, the PSNR and FSIM values obtained by VLRMFmcl increased by 21.66 and 0.1443, respectively, and the MSA was reduced by 0.264. HSID-CNN simultaneously feeds the spatial information and adjacent correlated bands to the network, where multiscale feature extraction is employed to capture both multiscale spatial and spectral features; its FSIM value was optimal. Instead of learning the noise variance, BM3D and DnCNN denoised the HSI band by band with a predefined fixed noise variance, which cannot efficiently exploit the spectral correlations of the HSI. As shown in Table 1, the PSNR, FSIM, and MSA values of BM3D and DnCNN were significantly worse than those of the other methods when reducing the mixed noises. Note that ANLM3D denoised the HSI by using strong nonlocal self-similarity and balancing smoothing against detail preservation; BM4D adopted a 3D nonlocal self-similarity data cube to exploit the local correlations between neighboring bands; and GLF reduced the mixed noise by utilizing low-rank factorization of tensors constructed from nonlocal similar 3D patches. In Table 1, the ANLM3D, BM4D, and GLF methods produced relatively good results by exploring the spatial–spectral information of the HSI. LRMR, LRTDTV, and LTDL took advantage of the low-rank property of the HSI, and their PSNR and FSIM values were better than those of BM3D, ANLM3D, GLF, and DnCNN.
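For reference, the PSNR and MSA indices used above can be computed as follows. This is a common formulation (mean PSNR over bands, mean spectral angle over pixels) and is an assumption on my part, not a quotation of the paper's exact implementation:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Mean PSNR (dB) over the bands of two H x W x B cubes; higher is better."""
    mse = np.mean((ref - est) ** 2, axis=(0, 1))
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def msa(ref, est, eps=1e-12):
    """Mean spectral angle (radians) over pixels; lower is better."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = np.sum(r * e, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

rng = np.random.default_rng(0)
ref = rng.random((32, 32, 10))
est = ref + rng.normal(0.0, 0.01, ref.shape)  # lightly perturbed copy
```

With a perturbation of standard deviation 0.01 on data in [0, 1], the PSNR is close to 40 dB and the MSA is near zero, as expected.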
Figure 7 shows the results of band 90 obtained by the different denoising approaches. To facilitate visual evaluation, Figure 8 compares the pseudo-color images (R: 60, G: 30, B: 2). The image quality of the BM3D, ANLM3D, BM4D, LRMR, GLF, LRTDTV, LTDL, DnCNN, HSID-CNN, and VLRMFmcl results is significantly improved compared with the noisy images shown in Figure 7b and Figure 8b. As can be seen in Figure 7c and Figure 8c, the denoising results obtained by the BM3D method were relatively fuzzy, and the method could not effectively suppress the stripe noise. ANLM3D, BM4D, LRMR, GLF, LRTDTV, LTDL, DnCNN, and HSID-CNN could only suppress part of the noise. It can be easily seen in Figure 7l and Figure 8l that the proposed VLRMFmcl model effectively suppressed the Gaussian noise, sparse noise, and deadlines, and its results were better than those of the compared methods.

3.3. Experiment on the Urban Dataset

The Urban data, with a size of 307 × 307 × 210, were acquired by the HYDICE sensor; the dataset contains 210 bands of 307 × 307 pixels each, with a spectral range from 0.4 to 2.5 micrometers. Owing to detector-to-detector differences, it exhibits stripes and mixed noise that vary across bands. Table 2 gives the NR and MRD values of band 109 for the Urban dataset, in which the bold numbers indicate the best results. As Table 2 shows, the proposed approach effectively reduced the noise of the Urban data while retaining the detailed information well, which indicates that VLRMFmcl remains effective on real data with heavy degradation.
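Because the Urban data have no clean reference, no-reference indices such as NR and MRD are used. As one illustration, a common definition of MRD compares the restored band with the original one; the `mrd` helper below is a sketch under that assumed definition, not a quotation of the paper's implementation (NR requires a frequency-domain analysis of the stripes and is not sketched here):

```python
import numpy as np

def mrd(original, restored, eps=1e-12):
    """Mean relative deviation (percent) between a restored band and the
    original band; lower values mean better fidelity to uncorrupted detail."""
    return float(100.0 * np.mean(np.abs(restored - original)
                                 / (np.abs(original) + eps)))

band = np.ones((4, 4))
print(mrd(band, 1.05 * band))  # ≈ 5.0 for a uniform 5% deviation
```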
Figure 9 and Figure 10 show the denoising results of band 109 and band 151 of the Urban dataset, respectively. As shown in Figure 9a and Figure 10a, these two bands were heavily polluted with stripes and mixed noise. In the blue rectangles in Figure 9, obvious stripes can be observed in the results obtained by BM3D, ANLM3D, BM4D, LRMR, LRTDTV, and LTDL, and their structure and edge information are also blurred; these methods thus performed worse on the severely polluted bands of the Urban dataset. The LRMR method performed better in target and detail recovery, but its denoising results still showed obvious stripes and mixed noise. As shown in Figure 9i, DnCNN smoothed out some structures and blurred the visual effect. As shown in Figure 9 and Figure 10, GLF, HSID-CNN, and VLRMFmcl could effectively restore the edges and textures of the image while suppressing the mixed noise.
To facilitate the visual comparison, Figure 11 presents the pseudo-color images of the restored results calculated by the different approaches (R: 55, G: 103, B: 207). Comparing the white oval regions, it can be easily seen that the proposed VLRMFmcl method effectively suppressed the noise in the smooth areas while restoring the edge and structure information. Therefore, VLRMFmcl was superior to BM3D, ANLM3D, BM4D, LRMR, GLF, LRTDTV, LTDL, DnCNN, and HSID-CNN in denoising the Urban data.

4. Conclusions

By introducing multi-patch collaborative learning into low-rank matrix factorization, a variational model was proposed under the Bayesian framework to reduce mixed noise in HSIs. The nonlocal self-similarity of HSIs was explored through multi-patch collaborative learning, so that pixels from edges and heterogeneous regions could be effectively depicted. A variational low-rank matrix factorization model was then constructed to separate the latent noise-free data and the mixed noise for each collaborative patch. A zero-mean Gaussian distribution, with its variance adaptively regulated by a gamma distribution, was exploited to learn the low-rank property of collaborative patches in the spatial–spectral domain and recover the corresponding clean data. To sufficiently suppress the mixed noise, its statistical characteristics were depicted by the Dirichlet process Gaussian mixture model, constructed from the Gaussian distribution, the inverse Wishart distribution, and the Dirichlet process. Variational Bayesian inference was used to solve the model, offering simple closed-form updates and high stability. Simulation experiments with different combinations of Gaussian noise, Poisson noise, deadlines, and stripe noise demonstrated the effectiveness of the proposed method. Compared with BM3D, ANLM3D, BM4D, LRMR, GLF, LRTDTV, LTDL, DnCNN, and HSID-CNN, the proposed VLRMFmcl method showed superior performance in both quantitative and qualitative evaluations.
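As a self-contained illustration of the low-rank property that the model exploits (a plain truncated SVD on synthetic data, not the paper's variational algorithm), projecting a band-unfolded cube onto a few singular vectors already suppresses much of an i.i.d. Gaussian noise component:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy low-rank spectral data: each pixel's spectrum mixes 3 endmembers,
# so the pixels-by-bands matrix has rank 3.
endmembers = rng.random((3, 50))        # 3 spectral signatures, 50 bands
abundances = rng.random((1000, 3))      # 1000 pixels
clean = abundances @ endmembers
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

# Rank-3 truncated SVD as a crude stand-in for low-rank factorization.
u, s, vt = np.linalg.svd(noisy, full_matrices=False)
denoised = (u[:, :3] * s[:3]) @ vt[:3, :]

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(err_after < err_before)  # True: the low-rank projection removes most noise
```

The noise, having full rank, is largely discarded by the rank-3 projection, while the clean signal lies in the retained subspace; this is the intuition behind separating the low-rank clean component from the noise component.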

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, S.L.; validation, S.L. and Z.T.; formal analysis, Z.T. and J.F.; writing—original draft preparation, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61703328, 61876148), the China Postdoctoral Science Foundation funded project (No. 2018M631165), Shaanxi Province Postdoctoral Science Foundation (No. 2018BSHYDZZ23) and the Fundamental Research Funds for the Central Universities (No. XJJ2018253).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets presented in this study are available through: https://rslab.ut.ac.ir/data, https://www.cs.columbia.edu/CAVE/databases/multispectral/.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The framework of the proposed variational low-rank matrix factorization with multi-patch collaborative learning (VLRMFmcl) method. HSI, hyperspectral image.
Figure 2. Some patch structures of the Pavia Centre data. Four areas are enlarged to better show the similarities of different patches. The area marked by the red box represents a test patch, and the ones marked by three green boxes are the neighboring patches of the test patch.
Figure 3. Graphical representation of the variational low-rank matrix decomposition model.
Figure 4. Quantitative evaluation results for Beads data versus different noises: (a) peak signal-to-noise ratio (PSNR); (b) feature similarity (FSIM); (c) mean spectral angle (MSA).
Figure 5. Restored images of band 27 corrupted with Gaussian noise, sparse noise and missing pixels: (a) clean HSI; (b) noisy HSI; (c) BM3D; (d) ANLM3D; (e) BM4D; (f) LRMR; (g) GLF; (h) LRTDTV; (i) LTDL; (j) DnCNN; (k) HSID-CNN; (l) VLRMFmcl.
Figure 6. Restored images of Beads corrupted with Gaussian noise, sparse noise and missing pixels: (a) clean HSI; (b) noisy HSI; (c) BM3D; (d) ANLM3D; (e) BM4D; (f) LRMR; (g) GLF; (h) LRTDTV; (i) LTDL; (j) DnCNN; (k) HSID-CNN; (l) VLRMFmcl.
Figure 7. Restored results of band 90 corrupted with mixed noises: (a) clean HSI; (b) noisy HSI; (c) BM3D; (d) ANLM3D; (e) BM4D; (f) LRMR; (g) GLF; (h) LRTDTV; (i) LTDL; (j) DnCNN; (k) HSID-CNN; (l) VLRMFmcl.
Figure 8. Restored results of the Pavia Centre data corrupted with mixed noises: (a) clean HSI; (b) noisy HSI; (c) BM3D; (d) ANLM3D; (e) BM4D; (f) LRMR; (g) GLF; (h) LRTDTV; (i) LTDL; (j) DnCNN; (k) HSID-CNN; (l) VLRMFmcl.
Figure 9. Restored results of the Urban image: (a) original band 109; (b) BM3D; (c) ANLM3D; (d) BM4D; (e) LRMR; (f) GLF; (g) LRTDTV; (h) LTDL; (i) DnCNN; (j) HSID-CNN; (k) VLRMFmcl. The blue boxes are used to facilitate the comparison by marking the differences from the results obtained by different algorithms.
Figure 10. Restored results of the Urban image: (a) original band 151; (b) BM3D; (c) ANLM3D; (d) BM4D; (e) LRMR; (f) GLF; (g) LRTDTV; (h) LTDL; (i) DnCNN; (j) HSID-CNN; (k) VLRMFmcl.
Figure 11. Restored results of the Urban image: (a) original pseudo-color image; (b) BM3D; (c) ANLM3D; (d) BM4D; (e) LRMR; (f) GLF; (g) LRTDTV; (h) LTDL; (i) DnCNN; (j) HSID-CNN; (k) VLRMFmcl. The white ovals are used to facilitate the comparison by marking the differences from the results obtained by different algorithms.
Table 1. Quantitative evaluation results for the Pavia Centre data set.
| Index | Noisy | BM3D | ANLM3D | BM4D | LRMR | GLF | LRTDTV | LTDL | DnCNN | HSID-CNN | VLRMFmcl |
|---|---|---|---|---|---|---|---|---|---|---|---|
| PSNR (dB) | 13.97 | 17.92 | 30.25 | 33.65 | 34.57 | 34.12 | 34.91 | 35.06 | 22.79 | 35.57 | **35.63** |
| FSIM | 0.8458 | 0.7903 | 0.9117 | 0.9739 | 0.9835 | 0.9702 | 0.9832 | 0.9551 | 0.8125 | **0.9907** | 0.9901 |
| MSA | 0.3616 | 0.3019 | 0.1101 | 0.1014 | 0.1181 | 0.1067 | 0.1025 | 0.0991 | 0.2216 | 0.0983 | **0.0976** |
Table 2. Quantitative evaluation of noise reduction (NR) and mean relative deviation (MRD) for band 109.
| Band 109 | Original | BM3D | ANLM3D | BM4D | LRMR | GLF | LRTDTV | LTDL | DnCNN | HSID-CNN | VLRMFmcl |
|---|---|---|---|---|---|---|---|---|---|---|---|
| NR | 1 | 1.8614 | 2.1051 | 2.4648 | 2.5173 | 2.7051 | 2.5936 | 2.6759 | 2.4973 | 2.6993 | **2.8386** |
| MRD | 0 | 3.2201 | 3.9113 | 3.5576 | 4.2165 | 3.3927 | 3.5261 | 3.6017 | 3.5162 | 3.3965 | **3.1976** |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
