Article

Hyperspectral Image Mixed Noise Removal via Double Factor Total Variation Nonlocal Low-Rank Tensor Regularization

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(10), 1686; https://doi.org/10.3390/rs16101686
Submission received: 10 April 2024 / Accepted: 2 May 2024 / Published: 9 May 2024
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)

Abstract

A hyperspectral image (HSI) is often corrupted by various types of noise during acquisition, e.g., Gaussian noise, impulse noise, stripes, and deadlines. As a preprocessing step, HSI denoising therefore plays a vital role in many subsequent tasks. Recently, a variety of mixed noise removal approaches have been developed for HSI, and methods based on spatial–spectral double factor and total variation (DFTV) regularization have achieved competitive performance. Additionally, the nonlocal low-rank tensor model (NLR) is often employed to characterize spatial nonlocal self-similarity (NSS). Generally, exploring prior knowledge more fully improves the denoising performance, but employing the NSS prior significantly increases the computational cost. To solve this problem, this article proposes a novel DFTV-based NLR regularization (DFTVNLR) model for HSI mixed noise removal. The proposed model employs low-rank tensor factorization (LRTF) to characterize the spectral global low-rankness (LR) and introduces 2-D and 1-D TV constraints on the two factors to characterize the spatial and spectral local smoothness (LS), respectively. Meanwhile, the NLR is applied to the spatial factor to characterize the NSS. We then develop an algorithm based on proximal alternating minimization (PAM) to solve the proposed model effectively. In particular, we control the computational cost in two ways: taking the small-sized double factors as the regularization objects, and solving the time-consuming NLR model independently before the main loop with fewer iterations. Finally, extensive experiments on simulated and real noisy HSIs substantiate that the proposed method is superior to the related state-of-the-art methods in balancing the denoising effect and speed.

1. Introduction

A hyperspectral image (HSI) consists of contiguous spectral band images from ultraviolet to infrared wavelengths [1,2]. An HSI therefore contains abundant spatial and spectral information and plays an important role in various applications [3,4,5,6,7], such as military surveillance, natural disaster monitoring, terrain detection, mineral exploration, and food safety detection. Nevertheless, owing to limited detector sensitivity, platform vibration, and atmospheric turbulence, a real HSI is inevitably affected by various types of noise, such as Gaussian noise, impulse noise, stripes, and deadlines [8,9]. These noises greatly degrade the quality of the HSI and limit subsequent tasks, such as classification [10], unmixing [11], fusion [12], feature learning [13], super-resolution [14], and target detection [15]. Hence, HSI denoising is a fundamental preprocessing step for further applications. In the field of remote sensing (RS), it is generally assumed that the mixed noise in optical remote sensing images is mainly additive, whereas multiplicative noise often appears in synthetic aperture radar (SAR) images. Many recent denoising studies rest on this assumption, which meets most practical denoising needs. Therefore, this article focuses only on removing hyperspectral mixed noise composed of additive noise.
Currently, HSI denoising approaches can be generally divided into two categories, namely model-based methods and data-based methods [2]. The main challenge for data-based methods is the scarcity of real HSIs that can serve as training data [16,17]. In addition, although data-based methods are fast at inference, the preceding model training stage is expensive, and the universality of the trained model is limited. In contrast, model-based methods built on prior knowledge generalize well [8,16]. Denoising performance improves as prior knowledge is mined more fully, but additional prior terms inevitably increase the computational cost. The HSI denoising methods discussed below are model-based methods, unless otherwise specified. Recently, many advanced HSI denoising methods have been proposed, among which the works exploring and modeling intrinsic prior knowledge have made the most outstanding achievements. The most important priors include spectral global low-rankness (LR) [18], spatial–spectral local smoothness (LS) [19], spatial nonlocal self-similarity (NSS) [20], and the sparsity of sparse noise (S).
In recent years, mixed noise removal has attracted increasing attention. With the development of optimization theory, methods based on low-rankness have flourished. For instance, many methods utilize tensor decomposition to exploit LR by treating the HSI as a third-order tensor [21,22,23]. Moreover, various total variation (TV) regularizations have been introduced to remove noise and preserve image edges. For instance, Chang et al. [24] proposed a model based on anisotropic spatial–spectral TV (SSTV) to promote LS. Fan et al. [25] employed SSTV and low-rank tensor factorization to remove the mixed noise of HSIs without changing the spatial structures. To further exploit NSS, models based on nonlocal low-rank tensor decomposition and TV regularization were proposed in [26,27,28]. However, using either spatial LS or spatial–spectral LS alone unavoidably results in oversmoothing or artifacts [29].
Usually, the methods based on low-rank TV (LRTV) are good at characterizing the LR and LS of HSIs, but at the expense of a higher computational cost [30]. There are two main reasons for this high cost. First, directly imposing constraints on an HSI with a large data volume is expensive. Second, a high-performance denoising model that fully mines prior knowledge is itself expensive. Compared to panchromatic and infrared RS images, an HSI $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$ has a large spectral dimension $B$ (usually several hundred). Therefore, this article follows the convention of [2,31] and distinguishes data by the size of the spectral dimension: the HSI is "large-sized", while the other RS images and the factors $\mathcal{U}$ and $\mathbf{V}$ mentioned below, whose spectral dimensions are relatively small, are "small-sized". Is it possible that the original HSI has a smaller-sized factor that inherits the prior knowledge of $\mathcal{X}$? If so, we can hope to reduce the computational cost while guaranteeing the denoising performance. Inspired by this aim, we turned our attention to low-rank matrix factorization (LRMF) and low-rank tensor factorization (LRTF). For an HSI $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$, its unfolded matrix along the spectral dimension can be expressed as $\mathbf{X} \in \mathbb{R}^{MN \times B}$, and the rank of $\mathbf{X}$ is $R$ ($R \ll B$). Then, $\mathcal{X}$ and $\mathbf{X}$ can be factorized as $\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$ and $\mathbf{X} = \mathbf{U}\mathbf{V}^T$, respectively, where $\mathbf{V} \in \mathbb{R}^{B \times R}$ denotes the spectral factor and $\mathcal{U} \in \mathbb{R}^{M \times N \times R}$ / $\mathbf{U} \in \mathbb{R}^{MN \times R}$ corresponds to the spatial factor [32]. The LRTF/LRMF model can reflect LR and has the advantage of not requiring large-scale singular value decomposition (SVD) calculations [2]. Furthermore, the image corresponding to $\mathcal{U}$ / $\mathbf{U}$ better preserves the structure of the original image [2].
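To make the factorization concrete, the following minimal sketch (an assumed NumPy illustration, not the authors' released code) builds the small-sized factors from a rank-$R$ truncated SVD of the spectrally unfolded HSI:

```python
import numpy as np

def lrmf_factors(hsi: np.ndarray, rank: int):
    """Factorize an (M, N, B) HSI so that its unfolding satisfies X ~= U @ V.T with rank R."""
    M, N, B = hsi.shape
    X = hsi.reshape(M * N, B)                          # unfolding along the spectral mode
    Uf, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = (Uf[:, :rank] * s[:rank]).reshape(M, N, rank)  # small-sized spatial factor
    V = Vt[:rank].T                                    # small-sized spectral factor, B x R
    return U, V
```

Each of the $R$ slices of `U` is an eigen-image in the sense of [30], and the columns of `V` span the spectral subspace.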
Therefore, under the framework of LRMF/LRTF, several HSI denoising methods based on regularizing the spatial single factor $\mathcal{U}$ or the spatial–spectral double factors $\mathcal{U}$ and $\mathbf{V}$ have been proposed [2,26,30,31,33,34]. For example, the LRTFDFR method proposed by Zheng et al. [31] applied the weighted TV $\ell_{2,1}$ norm and a data fidelity term to $\mathcal{U}$ and $\mathbf{V}$ to describe the spatial and spectral LS, respectively, but it did not consider NSS. The RCTV method proposed by Peng et al. [2] applied the TV $\ell_1$ norm to $\mathcal{U}$ to describe the spatial LS, but it considered neither the spectral LS nor NSS. The SNLRSF method proposed by Cao et al. [30] applied nonlocal low-rank decomposition to $\mathcal{U}$ to describe NSS, but it did not consider LS. The NLSSR method proposed by Zha et al. [35] characterized the NSS and LR of HSIs with nonlocal structured sparsity regularization, but it ignored LS and S. The LRTRDGS method proposed by Wang et al. [36] characterized LR, LS, and S with weighted group sparsity-regularized low-rank tensor ring decomposition, but it did not consider NSS. In short, almost all methods fail to fully explore the prior knowledge of HSIs and noise. Furthermore, the DFTCR method proposed by Han et al. [37] fully explored the priors of HSIs and noise, but it did not offset the high computational cost caused by introducing the NSS prior. The related conclusion in [2] also shows that the repeated search-and-matching strategy for similar blocks often used to characterize NSS greatly increases the computational cost. In summary, there is still room to improve the denoising performance by exploiting the fact that HSIs live in a low-dimensional subspace.
Based on the above discussion, this article aims to recover the clean HSI $\mathcal{X}$ from the observation $\mathcal{Y}$ polluted by additive mixed noise. To this end, we propose a spatial–spectral double factor TV-based nonlocal low-rank tensor regularization method (DFTVNLR) to remove HSI mixed noise. We use the LRTF framework $\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$ to characterize LR. Additionally, TV regularization is added to the spatial factor $\mathcal{U}$ and the spectral factor $\mathbf{V}$ to characterize the spatial bidirectional LS and the spectral LS. Moreover, we introduce a weighting strategy [18] to promote LS. Furthermore, we introduce the nonlocal low-rank tensor model (NLR) to characterize NSS and pre-estimate the initial value of $\mathcal{U}$. The contributions of this article are summarized as follows:
  • First, under the framework of LRTF, applying the weighted TV regularization to the small-sized double factors ($\mathcal{U}$ and $\mathbf{V}$), namely $\|\mathcal{W}_h \odot (D_h\mathcal{U})\|_1 + \|\mathcal{W}_v \odot (D_v\mathcal{U})\|_1$ and $\|D_s\mathbf{V}\|_F^2$, not only helps to reduce the computational cost but also accurately characterizes the spatial–spectral LS of HSIs.
  • Second, we fully explore all major priors of HSIs, namely LR, spatial–spectral LS, and NSS. The NLR model is introduced before the main loop to estimate the initial value of $\mathcal{U}$, which indirectly characterizes NSS and avoids the high computational cost otherwise caused by introducing NLR.
  • Third, the alternating direction method of multipliers (ADMM) and the augmented Lagrangian method (ALM) are used to solve the proposed DFTVNLR model. The experimental results show that the proposed DFTVNLR is superior to several other excellent methods in mixed noise removal, spatial texture structure restoration, and spectral feature preservation.

2. Notations and Preliminaries

Throughout this article, we denote scalars, vectors, matrices, and tensors by lowercase letters, bold lowercase letters, bold uppercase letters, and calligraphic uppercase letters, respectively, e.g., $x$, $\mathbf{x}$, $\mathbf{X}$, and $\mathcal{X}$. An $N$th-order tensor can be expressed as $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_N}$, whose elements are denoted as $\mathcal{X}(i_1, i_2, \ldots, i_N)$ or $x_{i_1, i_2, \ldots, i_N}$. An HSI can be viewed as a 3-D tensor $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$. For the other notations used in this article, see Table 1 in [31]. Additionally, the definitions of the mode-$k$ unfolding ($\mathbf{X}_{(k)} = \mathrm{unfold}_k(\mathcal{X})$, $\mathcal{X} = \mathrm{Fold}_k(\mathbf{X}_{(k)})$) and the mode-$k$ tensor–matrix product ($\mathcal{X} \times_k \mathbf{A}$), together with a detailed introduction to tensors, can be found in [31,38]. The inner product of two matrices $\mathbf{X}$ and $\mathbf{Y}$ of the same size is defined as $\langle \mathbf{X}, \mathbf{Y} \rangle := \sum_{i,j} x_{i,j} y_{i,j}$, where $x_{i,j}$ is the $(i,j)$th element of $\mathbf{X}$ [2]. Furthermore, $\|\mathbf{X}\|_F := \sqrt{\langle \mathbf{X}, \mathbf{X} \rangle}$ and $\|\mathbf{X}\|_1 := \sum_{i,j} |x_{i,j}|$ denote the Frobenius norm and the $\ell_1$ norm, respectively [2]. The TV norm, which characterizes LS, is introduced below. Each band of $\mathcal{X}$ can be regarded as a gray image $\mathbf{X} \in \mathbb{R}^{M \times N}$. Therefore, the anisotropic TV norm [39] or the $\ell_0$ gradient [40] of $\mathbf{X}$ is defined as follows:
$$\ell_0\mathrm{TV}(\mathbf{X}) = \|\mathbf{X}\|_{\mathrm{TV}} := \sum_{i=1}^{M}\sum_{j=1}^{N} C\big(|x_{i+1,j} - x_{i,j}| + |x_{i,j+1} - x_{i,j}|\big) = \sum^{MN} C\big(\big|\mathrm{vec}(D_h\mathbf{X})\big| + \big|\mathrm{vec}(D_v\mathbf{X})\big|\big)$$
where $D_h$ and $D_v$ represent the first-order forward finite difference operators along the spatial horizontal and vertical modes, respectively. $C(\cdot)$ is a binary function whose specific definition can be found in [29], and $\mathrm{vec}(\cdot)$ stands for the matrix vectorization operator. For an HSI $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$, the simplest and most direct TV model is $\sum_{k=1}^{B}\|\mathcal{X}(:,:,k)\|_{\mathrm{TV}}$, in which the spectral LS of the HSI is neglected. The TV norm or $\ell_0$ gradient considering the spatial–spectral LS of the HSI is defined as follows [29]:
$$\ell_0\mathrm{HTV}(\mathcal{X}) = \|\mathcal{X}\|_{\mathrm{TV}} := \sum_{i=1}^{M}\sum_{j=1}^{N} C\Big(\sum_{k=1}^{B}\big(|\mathcal{X}(i{+}1,j,k) - \mathcal{X}(i,j,k)| + |\mathcal{X}(i,j{+}1,k) - \mathcal{X}(i,j,k)|\big)\Big)$$
$\ell_0\mathrm{HTV}(\mathcal{X})$ has the obvious advantage of capturing the spatial edges of the HSI [29]. Additionally, another TV norm considering the spatial–spectral LS of the HSI is SSTV, which can be used to restore most structural details of the HSI. It incorporates the 2-D spatial TV and the 1-D spectral TV and is defined as follows [41]:
$$\|\mathcal{X}\|_{\mathrm{SSTV}} := \|D\mathcal{X}\|_1 = \|D_h\mathcal{X}\|_1 + \|D_v\mathcal{X}\|_1 + \|D_s\mathcal{X}\|_1$$
where $D_s$ represents the first-order forward finite difference operator along the spectral mode, and $D = (D_h, D_v, D_s)$ stands for the 3-D forward finite difference operator; they are defined as follows [41]:
$$(D_h\mathcal{X})_{i,j,b} = \mathcal{X}(i,j{+}1,b) - \mathcal{X}(i,j,b),\quad (D_v\mathcal{X})_{i,j,b} = \mathcal{X}(i{+}1,j,b) - \mathcal{X}(i,j,b),\quad (D_s\mathcal{X})_{i,j,b} = \mathcal{X}(i,j,b{+}1) - \mathcal{X}(i,j,b)$$
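As a concrete reference, the following minimal sketch (an assumption on our part: periodic boundaries are used so that the operators match the FFT diagonalizations of Section 3.3) implements $D_h$, $D_v$, $D_s$ and the anisotropic SSTV norm of Equation (3):

```python
import numpy as np

def Dh(X): return np.roll(X, -1, axis=1) - X   # horizontal forward difference
def Dv(X): return np.roll(X, -1, axis=0) - X   # vertical forward difference
def Ds(X): return np.roll(X, -1, axis=2) - X   # spectral forward difference

def sstv_norm(X: np.ndarray) -> float:
    """Anisotropic SSTV: sum of the l1 norms of the three directional gradients."""
    return float(sum(np.abs(D(X)).sum() for D in (Dh, Dv, Ds)))
```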

3. HSI Mixed Noise Removal via DFTVNLR

3.1. Problem Formulation

For simplicity, it is assumed that a real observed noisy HSI is corrupted only by additive mixed noise, which typically includes additive Gaussian noise and additive sparse noise (impulse noise, deadlines, stripes, etc.) [42]. Then, the HSI degradation process can be approximately formulated as an additive mixed system [2]:
$$\mathcal{Y} = \mathcal{X} + \mathcal{N} + \mathcal{S}$$
where $\mathcal{Y}, \mathcal{X}, \mathcal{N}, \mathcal{S} \in \mathbb{R}^{M \times N \times B}$ denote the observed noisy HSI, the underlying clean HSI, the additive Gaussian noise, and the additive sparse noise, respectively. $M$, $N$, and $B$ are the height of the spatial image, the width of the spatial image, and the number of spectral bands, respectively. The key task is to recover $\mathcal{X}$ from $\mathcal{Y}$. The general HSI denoising model is expressed as follows [31]:
$$\min_{\mathcal{X},\mathcal{S}} \frac{1}{2}\|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2 + \lambda_1 R(\mathcal{X}) + \lambda_2 R(\mathcal{S})$$
where $\frac{1}{2}\|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2$ is a data fidelity term that indirectly characterizes the Gaussian noise. $R(\mathcal{X})$ is the regularization term associated with the clean HSI, which can be used to explore the LR, LS, and NSS of the HSI. $R(\mathcal{S})$ is designed to suppress sparse noise and is usually characterized by the $\ell_0$-norm, $\ell_1$-norm, or $\ell_{2,1}$-norm. The parameters $\lambda_1, \lambda_2 \geq 0$ combine the three terms and balance their contributions to the final result. The performance of a denoising model depends heavily on the choice of regularization, which encodes the priors of the HSI and noise. Nevertheless, adding regularization to the large-sized HSI unavoidably causes a high computational cost. Fortunately, due to the high correlation between the spectral bands of the HSI, it is reasonable to use LRTF (usually expressed as a mode-3 tensor–matrix product) to characterize the LR prior of the large-sized clean HSI $\mathcal{X}$ [43]:
$$\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$$
where $\mathcal{U} \in \mathbb{R}^{M \times N \times R}$ and $\mathbf{V} \in \mathbb{R}^{B \times R}$ ($R \ll B$) represent the small-sized spatial factor and the small-sized spectral factor, respectively. The symbol $\times_3$ represents the mode-3 tensor–matrix product. Specifically, each element of $\mathcal{U}$ is a representation coefficient of $\mathcal{X}$ with respect to $\mathbf{V}$. In particular, $\mathcal{U}$ is often called the representation coefficient image (RCI), and its mode-3 slices are termed eigen-images [30].
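For clarity, a minimal sketch (assumed) of the mode-3 tensor–matrix product in Equation (7), which rebuilds the HSI from the two small-sized factors:

```python
import numpy as np

def mode3_product(U: np.ndarray, V: np.ndarray) -> np.ndarray:
    """X = U x_3 V: each mode-3 fiber U[i, j, :] (length R) is mapped to V @ U[i, j, :] (length B)."""
    return np.einsum('mnr,br->mnb', U, V)
```

Together with the `lrmf_factors` sketch above, `mode3_product(U, V)` reconstructs the rank-$R$ approximation of the HSI.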
Beyond LR, most existing works adopt other spatial priors of the HSI to further improve the denoising performance, such as LS [9] and NSS [44]. Related research [2] has proven that the spatial information of the original large-sized $\mathcal{X}$ is reflected in the small-sized $\mathcal{U}$. Enlightened by this, we can explore the spatial priors on $\mathcal{U}$ instead of $\mathcal{X}$. Additionally, $\mathcal{U}$ is robust to noise, as analyzed in E3DTV [19]. Furthermore, according to [20,45], $\mathcal{U}$ can inherit two important properties of $\mathcal{X}$, namely LR and NSS. Therefore, many methods [2,30,31,46] based on the subspace and LRMF/LRTF have been proposed, in which various regularizations $R(\mathcal{U})$ are employed. The general denoising model based on LRTF is thus established as follows [31]:
$$\min_{\mathbf{V},\mathcal{U},\mathcal{S}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V} - \mathcal{S}\|_F^2 + \lambda_1 R(\mathcal{U}) + \lambda_2 R(\mathcal{S})$$
It is generally assumed that $\mathbf{V}$ is orthogonal, i.e., $\mathbf{V}^T\mathbf{V} = \mathbf{I}$. However, existing research [32] shows that even if $\mathbf{V}$ is not orthogonal, $\mathcal{U}$ can still inherit the spatial prior information of $\mathcal{X}$ to some extent. $\mathbf{V}$ can usually be learned from $\mathcal{Y}$ through the HySime algorithm [47]. In recent years, the priors for HSI denoising have been mined extensively; however, to control the computational cost, only some of them have been utilized at the same time. Therefore, the overall denoising performance for HSIs still has room for improvement. The key issue is how to balance the contradiction between fully representing the priors and controlling the computational cost.

3.2. Proposed Denoising Model

3.2.1. Spectral Global Low-Rankness (LR) Characterized by LRTF

Comparing the denoising performance of recent LR-based HSI methods [2,30,31,41,46] shows that LRTF can fully explore LR and has a certain robustness to noise. Hence, we use Equation (7) to characterize LR.

3.2.2. Spatial–Spectral Local Smoothness (LS) Characterized by DFTV

The SSTV model in Equation (3) smooths equally in all directions. However, in an actual scene, the gradient distributions of each band along the two spatial directions and the spectral direction differ [9,19]. To solve this issue, some adaptive SSTV (ASSTV) models based on a weighting strategy have been proposed [9,41]. The ASSTV model [41] selected in this article is defined as follows:
$$\|\mathcal{X}\|_{\mathrm{ASSTV}} = \|\mathcal{W} \odot (D\mathcal{X})\|_1 = \|\mathcal{W}_h \odot (D_h\mathcal{X})\|_1 + \|\mathcal{W}_v \odot (D_v\mathcal{X})\|_1 + \|\mathcal{W}_s \odot (D_s\mathcal{X})\|_1$$
where $\mathcal{W} = (\mathcal{W}_h, \mathcal{W}_v, \mathcal{W}_s) \in \mathbb{R}^{M \times N \times 3B}$ is the weight tensor used to adjust the smoothing intensity at different pixel positions in different directions, and $\odot$ denotes element-wise multiplication.
Additionally, Remarks 1 and 2 in [31] have proven that an LS constraint on the small-sized $\mathcal{U}$ and $\mathbf{V}$ can promote LS in $\mathcal{X}$. The simulation results of [31] also show that the strategy of regularizing $\mathcal{U}$ and $\mathbf{V}$ can enrich the spatial texture structure and smooth the spectral features. In particular, the differential operations $D_h\mathcal{U}$ and $D_v\mathcal{U}$ on the third-order tensor $\mathcal{U}$ are equivalent to the tensor–matrix products $\mathcal{U} \times_1 D_h$ and $\mathcal{U} \times_2 D_v$, respectively [31], where $\times_1$/$\times_2$ represents the mode-1/mode-2 tensor–matrix product. Therefore, following the ideas of [31,41], we propose a double-factor TV model (DFTV) based on the weighting strategy, defined as follows:
$$\|\mathcal{X}\|_{\mathrm{DFTV}} = \lambda_1\|\mathcal{W}_h \odot (D_h\mathcal{U})\|_1 + \lambda_1\|\mathcal{W}_v \odot (D_v\mathcal{U})\|_1 + \lambda_2\|D_s\mathbf{V}\|_F^2 = \lambda_1\|\mathcal{W}_h \odot (\mathcal{U}\times_1 D_h)\|_1 + \lambda_1\|\mathcal{W}_v \odot (\mathcal{U}\times_2 D_v)\|_1 + \lambda_2\|D_s\mathbf{V}\|_F^2$$
For the determination of the weight tensor $\mathcal{W}$, please refer to the calculation process in [41]. However, unlike [41], we calculate the weight tensor during the iterations from the denoised spatial factor $\hat{\mathcal{U}}$ instead of the denoised HSI $\hat{\mathcal{X}}$; the corresponding weight tensors are therefore revised as follows:
$$\mathcal{U}_0 = \mathrm{LRA}_1(\hat{\mathcal{U}})$$
$$\mathcal{W}_h = \frac{\max(D_h\mathcal{U}_0)}{D_h\mathcal{U}_0 + \delta\max(D_h\mathcal{U}_0)} = \frac{\max(\mathcal{U}_0\times_1 D_h)}{\mathcal{U}_0\times_1 D_h + \delta\max(\mathcal{U}_0\times_1 D_h)}, \qquad \mathcal{W}_v = \frac{\max(D_v\mathcal{U}_0)}{D_v\mathcal{U}_0 + \delta\max(D_v\mathcal{U}_0)} = \frac{\max(\mathcal{U}_0\times_2 D_v)}{\mathcal{U}_0\times_2 D_v + \delta\max(\mathcal{U}_0\times_2 D_v)}$$
where $\mathrm{LRA}_1(\cdot)$ stands for the rank-1 low-rank approximation operator, and $\delta$ is a threshold set to avoid unexpectedly large values in the weight tensor [41].
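A minimal sketch (assumed; taking absolute gradients is our own choice to keep the weights positive, which the paper's Equation (11) leaves implicit) of the horizontal weight computation:

```python
import numpy as np

def weight_h(U_hat: np.ndarray, delta: float = 0.1) -> np.ndarray:
    """Horizontal weight tensor from a rank-1 approximation of the spatial factor."""
    M, N, R = U_hat.shape
    u, s, vt = np.linalg.svd(U_hat.reshape(M * N, R), full_matrices=False)
    U0 = (s[0] * np.outer(u[:, 0], vt[0])).reshape(M, N, R)   # LRA_1(U_hat)
    G = np.abs(np.roll(U0, -1, axis=1) - U0)                  # |D_h U0| (assumed abs)
    return G.max() / (G + delta * G.max())
```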

3.2.3. Spatial Nonlocal Self-Similarity (NSS) Characterized by NLR

At present, there are two main ways to characterize the NSS prior of HSIs: methods based on low-rank tensor decomposition [9,21] and methods based on patch low-rank approximation [28,48]. The literature [41] points out that the latter characterizes the NSS of HSIs better. However, directly performing full-band group matching on a large-sized $\mathcal{X}$ is extremely time-consuming, which is an important reason why many researchers choose to ignore NSS. According to [20,45], the spatial factor $\mathcal{U}$ (whose frontal slices are called feature images) can inherit two important properties of $\mathcal{X}$, namely the NSS of the feature images and the LR between them.
Therefore, we adopt the patch-based nonlocal low-rank tensor model (NLR) proposed in [41], which takes the small-sized $\mathcal{U}$ as the operation object instead of the large-sized $\mathcal{X}$. This model first learns the spectral factor $\mathbf{V}$ through the HySime algorithm and then applies the NLR model to the spatial factor $\mathcal{U}$, as expressed below. Note that we use $P_i$ to represent the similar-block searching, matching, denoising, stacking, and other processes involved in the NLR model; the specific execution process can be found in the open-source code provided in [41].
$$\|\mathcal{X}\|_{\mathrm{NLR}} = \sum_{i=1}^{P} \|P_i(\mathcal{U})\|_{w,*}$$
This method reduces the computational cost by using the small-sized $\mathcal{U}$ to characterize the NSS of the HSI. However, the repeated 3-D similar-patch search and matching of the NLR model in the main loop still hurts computational efficiency. To reduce the cost further, we solve the NLR model outside the main loop with fewer iterations. This both characterizes NSS and roughly estimates the initial value of $\mathcal{U}$ for the main loop, while greatly reducing the cost.
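The following heavily simplified sketch (our assumption of the patch pipeline; the full implementation in [41] also handles weighted singular-value thresholds and patch aggregation) illustrates one NLR group operation $P_i(\mathcal{U})$: search a window for the patches most similar to a reference patch, stack them, and shrink the singular values of the stacked matrix:

```python
import numpy as np

def nlr_group_denoise(U, ref=(0, 0), pat=5, num=200, win=55, tau=0.1):
    """One nonlocal group: match patches similar to U[ref], then low-rank shrink them."""
    M, N, R = U.shape
    i0, j0 = ref
    rows = range(max(0, i0 - win // 2), min(M - pat, i0 + win // 2) + 1)
    cols = range(max(0, j0 - win // 2), min(N - pat, j0 + win // 2) + 1)
    ref_patch = U[i0:i0 + pat, j0:j0 + pat, :].ravel()
    cand = sorted(
        (np.sum((U[i:i + pat, j:j + pat, :].ravel() - ref_patch) ** 2), i, j)
        for i in rows for j in cols)
    group = cand[:num]                                   # the num most similar patches
    Z = np.stack([U[i:i + pat, j:j + pat, :].ravel() for _, i, j in group], axis=1)
    u, s, vt = np.linalg.svd(Z, full_matrices=False)
    Z_hat = (u * np.maximum(s - tau, 0.0)) @ vt          # singular-value shrinkage
    return Z_hat, [(i, j) for _, i, j in group]          # denoised patches + locations
```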

3.2.4. DFTVNLR-Based HSI Denoising Model

Based on Section 3.2.1, Section 3.2.2 and Section 3.2.3, we selected LRTF, DFTV, and NLR as relatively advanced strategies for characterizing the LR, LS, and NSS priors of the HSI, respectively. These three models come from previous studies, but no work has yet coupled and improved them jointly to remove HSI mixed noise. Therefore, in order to balance the denoising performance and the computational cost, we propose the following DFTV-based NLR regularization (DFTVNLR) model for HSI mixed noise removal:
$$(\tilde{\mathcal{U}}, \tilde{\mathbf{V}}, \tilde{\mathcal{S}}) = \arg\min_{\mathcal{U},\mathbf{V},\mathcal{S}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V} - \mathcal{S}\|_F^2 + \lambda_1\big(\|\mathcal{W}_h \odot (D_h\mathcal{U})\|_1 + \|\mathcal{W}_v \odot (D_v\mathcal{U})\|_1\big) + \lambda_2\|D_s\mathbf{V}\|_F^2 + \lambda_3\sum_{i=1}^{P}\|P_i(\mathcal{U})\|_{w,*} + \lambda_4\|\mathcal{W}_s \odot \mathcal{S}\|_1$$
where $\lambda_i > 0$ ($i = 1, 2, 3, 4$) are regularization parameters that balance the weights of the terms in the objective function. The proposed model contains five components. The first term $\frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V} - \mathcal{S}\|_F^2$ uses the Frobenius norm to characterize the data fidelity under the normal distribution of $\mathcal{N}$, and the $\mathcal{U}\times_3\mathbf{V}$ contained in it characterizes the LR of $\mathcal{X}$. The second term $\|\mathcal{W}_h \odot (D_h\mathcal{U})\|_1 + \|\mathcal{W}_v \odot (D_v\mathcal{U})\|_1$ adds adaptive spatial bidirectional TV to $\mathcal{U}$ and uses the $\ell_1$-norm to characterize the spatial LS of $\mathcal{X}$; $\mathcal{W}_h$ and $\mathcal{W}_v$ are the weight tensors [49] that promote LS. The third term $\|D_s\mathbf{V}\|_F^2$ adds TV to $\mathbf{V}$ and uses the Frobenius norm to characterize the spectral LS of $\mathcal{X}$. The fourth term $\sum_{i=1}^{P}\|P_i(\mathcal{U})\|_{w,*}$ adds a nonlocal low-rank constraint to $\mathcal{U}$ to characterize the NSS of $\mathcal{X}$; it is solved outside the main loop of the algorithm to provide the initial value of $\mathcal{U}$. The fifth term $\|\mathcal{W}_s \odot \mathcal{S}\|_1$ characterizes the sparsity of $\mathcal{S}$ with the $\ell_1$-norm combined with a weighting strategy and is used to remove impulse noise, deadlines, and stripes; $\mathcal{W}_s$ is the weight tensor that promotes the sparsity of $\mathcal{S}$. The flowchart of the proposed DFTVNLR method is shown in Figure 1.
The key to the improved denoising performance of the proposed DFTVNLR model is that it fully characterizes the priors of the HSI and the mixed noise. At the same time, since fully exploiting prior knowledge increases the computational cost, the proposed model controls this cost by using the small-sized double factors rather than the large-sized HSI as the constraint objects, and by placing the time-consuming NLR model outside the main loop to characterize the NSS prior indirectly. Hence, the main advantages of the proposed DFTVNLR model are as follows.
  • Fuller exploration of the prior knowledge of the clean HSI and the mixed noise. Compared to HSI mixed noise removal methods [2,29,30,31,41,46] that only consider some of the priors, the proposed DFTVNLR comprehensively considers the LR, spatial–spectral LS, and NSS of $\mathcal{X}$, the data fidelity of $\mathcal{N}$, and the sparsity of $\mathcal{S}$.
  • A better balance between comprehensively representing the priors and controlling the computational cost. Compared to directly applying regularization to the large-sized $\mathcal{X}$ [29,41] or only to $\mathcal{U}$ [2], we apply regularization to the small-sized $\mathcal{U}$ and $\mathbf{V}$, which both fully explores the prior knowledge and reduces the cost. Furthermore, we characterize NSS in the assisted loop instead of the main loop [30,41], another important measure for controlling the cost.

3.3. Solving the Optimization Algorithm

3.3.1. Estimate the Initial Values of U and V

In this article, the HySime algorithm is used to learn $\mathbf{V}^{l+(1/2)}$ from $\mathcal{Y}$, which is taken as the initial value of $\mathbf{V}$ in the main loop of the algorithm. In addition, to reduce the computational cost, the term $\sum_{i=1}^{P}\|P_i(\mathcal{U})\|_{w,*}$ in model (13) is placed outside the main loop, and $\mathcal{U}^{l+(1/2)}$ is solved independently with a few iterations, serving as a rough initial value of $\mathcal{U}$ in the subsequent main loop. Following the solution strategy for $\mathcal{U}$ in [41], the initial value of $\mathcal{U}$ is estimated as follows:
$$\mathcal{U}^{l+(1/2)} = \arg\min_{\mathcal{U}}\ \lambda_3\sum_{i=1}^{P}\|P_i(\mathcal{U})\|_{w,*} + \|\mathcal{U} - \mathcal{U}^l\|_F^2$$
Furthermore, we use the well-known ADMM [50] to solve model (13) by alternately updating the following subproblems:
$$\text{Step 1: } \mathbf{V}^{l+1} = \arg\min_{\mathbf{V}} f(\mathbf{V}, \mathcal{U}^l, \mathcal{S}^l) + \frac{\rho}{2}\|\mathbf{V} - \mathbf{V}^l\|_F^2$$
$$\text{Step 2: } \mathcal{U}^{l+1} = \arg\min_{\mathcal{U}} f(\mathbf{V}^{l+1}, \mathcal{U}, \mathcal{S}^l) + \frac{\rho}{2}\|\mathcal{U} - \mathcal{U}^l\|_F^2$$
$$\text{Step 3: } \mathcal{S}^{l+1} = \arg\min_{\mathcal{S}} f(\mathbf{V}^{l+1}, \mathcal{U}^{l+1}, \mathcal{S}) + \frac{\rho}{2}\|\mathcal{S} - \mathcal{S}^l\|_F^2$$
where $f(\mathbf{V}, \mathcal{U}, \mathcal{S})$ is the objective function of model (13) and $\rho > 0$ is the proximal parameter.

3.3.2. Update V

The $\mathbf{V}$-subproblem is detailed as follows:
$$\mathbf{V}^{l+1} = \arg\min_{\mathbf{V}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}^l\times_3\mathbf{V} - \mathcal{S}^l\|_F^2 + \lambda_2\|D_s\mathbf{V}\|_F^2 + \frac{\rho}{2}\|\mathbf{V} - \mathbf{V}^l\|_F^2$$
The above equation can be transformed into the following Sylvester matrix equation and then solved:
$$2\lambda_2 D_s^T D_s\mathbf{V} + \mathbf{V}\,\mathbf{U}_{(3)}^l(\mathbf{U}_{(3)}^l)^T + \rho\mathbf{V} = (\mathbf{Y}_{(3)} - \mathbf{S}_{(3)}^l)(\mathbf{U}_{(3)}^l)^T + \rho\mathbf{V}^l$$
For a fast solution of the Sylvester matrix equation, please refer to Theorem 1 in [31]. The typical Sylvester matrix equation has the form $\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{B} = \mathbf{Y}$, where $\mathbf{A} \in \mathbb{R}^{M\times M}$, $\mathbf{B} \in \mathbb{R}^{N\times N}$, and $\mathbf{X}, \mathbf{Y} \in \mathbb{R}^{M\times N}$. In the above $\mathbf{V}$-subproblem, the terms corresponding to $\mathbf{X}$, $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{Y}$ are $\mathbf{X} = \mathbf{V}$, $\mathbf{A} = 2\lambda_2 D_s^T D_s$, $\mathbf{B} = \mathbf{U}_{(3)}^l(\mathbf{U}_{(3)}^l)^T$, and $\mathbf{Y} = (\mathbf{Y}_{(3)} - \mathbf{S}_{(3)}^l)(\mathbf{U}_{(3)}^l)^T + \rho\mathbf{V}^l$, respectively. The diagonalizations of $D_s^T D_s$ and $\mathbf{U}_{(3)}^l(\mathbf{U}_{(3)}^l)^T$ are as follows:
$$D_s^T D_s = \mathbf{F}_1^T\Psi_1\mathbf{F}_1, \qquad \mathbf{U}_{(3)}^l(\mathbf{U}_{(3)}^l)^T = \mathbf{U}_1\Sigma_1\mathbf{U}_1^T$$
where $\mathbf{F}_1$ represents the 1-D discrete fast Fourier transform (FFT) matrix. Therefore, the $\mathbf{V}$-subproblem can be solved as follows:
$$\mathbf{V}^{l+1} = \mathbf{F}_1^T\big((\mathbf{F}_1\mathbf{G}\mathbf{U}_1)\,/\,\mathbf{T}_1\big)\mathbf{U}_1^T$$
$$\mathbf{G} = (\mathbf{Y}_{(3)} - \mathbf{S}_{(3)}^l)(\mathbf{U}_{(3)}^l)^T + \rho\mathbf{V}^l, \qquad \mathbf{T}_1 = 2\lambda_2\big[\mathrm{diag}(\Psi_1), \ldots, \mathrm{diag}(\Psi_1)\big] + \rho\,\mathrm{ones}(B, R) + \big[\mathrm{diag}(\Sigma_1), \ldots, \mathrm{diag}(\Sigma_1)\big]^T$$
where $/$ denotes element-wise division, $\mathrm{diag}(\Lambda)$ is a column vector whose elements are the diagonal elements of $\Lambda$, and $\mathrm{ones}(B, R)$ is a $B\times R$ matrix whose elements are all 1.
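A minimal sketch (assuming a periodic-boundary $D_s$, so that $D_s^T D_s$ is diagonalized by the 1-D FFT) of the closed-form $\mathbf{V}$ update in Equations (19) and (20):

```python
import numpy as np

def solve_V(G: np.ndarray, U3: np.ndarray, lam2: float, rho: float) -> np.ndarray:
    """V-update: solves 2*lam2*Ds^T Ds V + V (U3 U3^T) + rho V = G for V (B x R)."""
    B, _ = G.shape
    # eigenvalues of Ds^T Ds for a circulant forward difference
    psi = np.abs(1.0 - np.exp(-2j * np.pi * np.arange(B) / B)) ** 2
    sigma, Q = np.linalg.eigh(U3 @ U3.T)       # U3 U3^T = Q diag(sigma) Q^T
    G_hat = np.fft.fft(G @ Q, axis=0)          # move both sides into the eigenbases
    T1 = 2.0 * lam2 * psi[:, None] + sigma[None, :] + rho
    return np.fft.ifft(G_hat / T1, axis=0).real @ Q.T
```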
In particular, the initial value $\mathbf{V}^{l+(1/2)}$ is estimated in advance by the HySime algorithm in the assisted loop of Section 3.3.1, which helps accelerate the convergence of the algorithm.

3.3.3. Update U

The $\mathcal{U}$-subproblem is detailed as follows:
$$\mathcal{U}^{l+1} = \arg\min_{\mathcal{U}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V}^{l+1} - \mathcal{S}^l\|_F^2 + \lambda_1\sum_{k=1}^{2}\|\mathcal{W}_k \odot (\mathcal{U}\times_k D_k)\|_1 + \frac{\rho}{2}\|\mathcal{U} - \mathcal{U}^l\|_F^2$$
By introducing the auxiliary variables $\mathcal{A}_k$ ($k = 1, 2$), (21) is rewritten as:
$$\mathcal{U}^{l+1} = \arg\min_{\mathcal{U}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V}^{l+1} - \mathcal{S}^l\|_F^2 + \lambda_1\sum_{k=1}^{2}\|\mathcal{W}_k \odot \mathcal{A}_k\|_1 + \frac{\rho}{2}\|\mathcal{U} - \mathcal{U}^l\|_F^2 \quad \mathrm{s.t.}\ \mathcal{A}_k = \mathcal{U}\times_k D_k,\ k = 1, 2.$$
We use the ALM [51] to solve (22); its augmented Lagrangian function is as follows:
$$L_{\mu}(\mathcal{U}, \mathcal{A}_k, \mathcal{M}_k) = \frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V}^{l+1} - \mathcal{S}^l\|_F^2 + \sum_{k=1}^{2}\Big(\lambda_1\|\mathcal{W}_k \odot \mathcal{A}_k\|_1 + \frac{\mu}{2}\Big\|\mathcal{U}\times_k D_k - \mathcal{A}_k + \frac{\mathcal{M}_k}{\mu}\Big\|_F^2\Big) + \frac{\rho}{2}\|\mathcal{U} - \mathcal{U}^l\|_F^2$$
where $\mathcal{M}_k$ ($k = 1, 2$) are Lagrange multipliers and $\mu > 0$ is the penalty parameter. Under the ADMM framework, we alternately optimize (23) to solve for $\mathcal{U}$, $\mathcal{A}_k$, and $\mathcal{M}_k$.
(1) Update $\mathcal{U}^{l+1,p+1}$
The subproblem for updating $\mathcal{U}$ can be expressed as follows:
$$\mathcal{U}^{l+1,p+1} = \arg\min_{\mathcal{U}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}\times_3\mathbf{V}^{l+1} - \mathcal{S}^l\|_F^2 + \frac{\rho}{2}\|\mathcal{U} - \mathcal{U}^l\|_F^2 + \frac{\mu}{2}\sum_{k=1}^{2}\Big\|\mathcal{U}\times_k D_k - \mathcal{A}_k^p + \frac{\mathcal{M}_k^p}{\mu}\Big\|_F^2$$
Differentiating (24) with respect to $\mathcal{U}$ and setting the derivative to zero, we obtain the following equation:
$$\mu\sum_{k=1}^{2}\mathcal{U}\times_k (D_k^T D_k) + \mathcal{U}\times_3\big((\mathbf{V}^{l+1})^T\mathbf{V}^{l+1}\big) + \rho\,\mathcal{U} = \mathcal{K}, \qquad \mathcal{K} = (\mathcal{Y} - \mathcal{S}^l)\times_3(\mathbf{V}^{l+1})^T + \rho\,\mathcal{U}^l + \mu\sum_{k=1}^{2}\Big(\mathcal{A}_k^p - \frac{\mathcal{M}_k^p}{\mu}\Big)\times_k D_k^T$$
The solution of (25) can be obtained via the following Sylvester matrix equation:
$$\mu\big(\mathbf{I}_N \otimes D_1^T D_1 + D_2^T D_2 \otimes \mathbf{I}_M\big)\mathbf{U}_{(3)}^T + \mathbf{U}_{(3)}^T\big((\mathbf{V}^{l+1})^T\mathbf{V}^{l+1}\big) + \rho\,\mathbf{U}_{(3)}^T = \mathbf{K}_{(3)}^T$$
The diagonalization process is as follows:
$$\mu\big(\mathbf{I}_N \otimes D_1^T D_1 + D_2^T D_2 \otimes \mathbf{I}_M\big) = \mathbf{F}_2^T\Psi_2\mathbf{F}_2, \qquad (\mathbf{V}^{l+1})^T\mathbf{V}^{l+1} = \mathbf{U}_2\Sigma_2\mathbf{U}_2^T$$
where $\mathbf{F}_2$ represents the 2-D discrete FFT matrix.
Therefore, the $\mathcal{U}^{l+1,p+1}$-subproblem can be solved as follows:
$$\mathbf{U}_{(3)} = \Big(\mathbf{F}_2^T\big((\mathbf{F}_2\mathbf{K}_{(3)}^T\mathbf{U}_2)\,/\,\mathbf{T}_2\big)\mathbf{U}_2^T\Big)^T, \qquad \mathcal{U}^{l+1,p+1} = \mathrm{Fold}_3(\mathbf{U}_{(3)})$$
where
$$\mathbf{T}_2 = \big[\mathrm{diag}(\Psi_2), \ldots, \mathrm{diag}(\Psi_2)\big] + \rho\,\mathrm{ones}(MN, R) + \big[\mathrm{diag}(\Sigma_2), \ldots, \mathrm{diag}(\Sigma_2)\big]^T$$
In particular, the initial value $\mathcal{U}^{l+(1/2)}$ is estimated in advance by the NLR model in the assisted loop of Section 3.3.1, which helps accelerate the convergence of the algorithm.
(2) Update $\mathcal{A}_k^{p+1}$
The subproblem for updating $\mathcal{A}_k$ can be expressed as follows:
$$\mathcal{A}_k^{p+1} = \arg\min_{\mathcal{A}_k}\ \lambda_1\|\mathcal{W}_k \odot \mathcal{A}_k\|_1 + \frac{\mu}{2}\Big\|\mathcal{U}^{l+1,p+1}\times_k D_k - \mathcal{A}_k + \frac{\mathcal{M}_k^p}{\mu}\Big\|_F^2 \quad (k = 1, 2)$$
The closed-form solution of $\mathcal{A}_k$ can be obtained with the well-known soft-thresholding operator [52] as follows:
$$\mathcal{A}_k^{p+1}(i, j, :) = \mathcal{S}_{\mathcal{W}_k(i,j)\lambda_1/\mu}\Big(\mathcal{U}^{l+1,p+1}\times_k D_k + \frac{\mathcal{M}_k^p}{\mu}\Big) \quad (k = 1, 2)$$
where $\mathcal{S}_{\lambda}$ represents the soft-thresholding operator with parameter $\lambda$, i.e., $\mathcal{S}_{\lambda}(x) = \mathrm{sgn}(x)\max(|x| - \lambda, 0)$. Additionally, the selection method and calculation process of $\mathcal{W}_k(i, j)$ ($k = 1, 2$) are the same as those in Section 3.2.2 of this article.
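A minimal sketch (assumed) of this operator; `lam` may be a scalar or an array of per-element weighted thresholds, which covers both Equation (31) and Equation (34):

```python
import numpy as np

def soft_threshold(X: np.ndarray, lam) -> np.ndarray:
    """S_lam(x) = sgn(x) * max(|x| - lam, 0), applied element-wise."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)
```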
(3) Update $\mathcal{M}_k^{p+1}$
The multipliers $\mathcal{M}_k$ are updated by the following equation:
$$\mathcal{M}_k^{p+1} = \mathcal{M}_k^p + \mu\big(\mathcal{U}^{l+1,p+1}\times_k D_k - \mathcal{A}_k^{p+1}\big) \quad (k = 1, 2)$$

3.3.4. Update S

The $\mathcal{S}$-subproblem is detailed as follows:
$$\mathcal{S}^{l+1} = \arg\min_{\mathcal{S}} \frac{1}{2}\|\mathcal{Y} - \mathcal{U}^{l+1}\times_3\mathbf{V}^{l+1} - \mathcal{S}\|_F^2 + \lambda_4\|\mathcal{W}_s \odot \mathcal{S}\|_1 + \frac{\rho}{2}\|\mathcal{S} - \mathcal{S}^l\|_F^2$$
The solution of $\mathcal{S}$ can be obtained by the soft-thresholding operator as follows:
$$\mathcal{S}^{l+1} = \mathcal{S}_{\frac{\mathcal{W}_s\lambda_4}{1+\rho}}\Big(\frac{\mathcal{Y} - \mathcal{U}^{l+1}\times_3\mathbf{V}^{l+1} + \rho\,\mathcal{S}^l}{1+\rho}\Big)$$
where $\hat{\mathcal{S}} = (\mathcal{Y} - \mathcal{U}^{l+1}\times_3\mathbf{V}^{l+1} + \rho\,\mathcal{S}^l)/(1+\rho)$ and $\mathcal{W}_s = 1/(|\hat{\mathcal{S}}| + \varepsilon)$, with $\varepsilon$ a small constant that avoids singularities.
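A minimal sketch (assumed) of the closed-form $\mathcal{S}$ update in Equation (34), with the reweighting folded in:

```python
import numpy as np

def update_S(Y, X_hat, S_prev, lam4: float, rho: float, eps: float = 1e-6):
    """Weighted soft-thresholding S update of Equation (34); X_hat = U x_3 V."""
    S_tilde = (Y - X_hat + rho * S_prev) / (1.0 + rho)
    W_s = 1.0 / (np.abs(S_tilde) + eps)              # reweighting promotes sparsity
    thr = W_s * lam4 / (1.0 + rho)                   # per-element threshold
    return np.sign(S_tilde) * np.maximum(np.abs(S_tilde) - thr, 0.0)
```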
To sum up, we describe the pseudocode of the developed algorithm for solving the DFTVNLR-based HSI denoising model in Algorithm 1.
Algorithm 1. The pseudocode of the algorithm for solving the DFTVNLR model
Input: The observed noisy HSI $\mathcal{Y} \in \mathbb{R}^{M\times N\times B}$; the rank $R$ (the dimension of the subspace); the regularization parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$; the penalty parameter of the ALM function $\mu = 15{,}000$; and the proximal parameter $\rho = 0.1$.
Initialization: In the assisted loop: $n = 0$, $n_{\max} = 10$; $\mathcal{U}^0$ and $\mathbf{V}^0$ are estimated by SVD (under the condition of rank $R$); $\mathcal{S}^0 = 0$, $\mathcal{A}_1^0 = 0$, $\mathcal{A}_2^0 = 0$, $\mathcal{M}_1^0 = 0$, $\mathcal{M}_2^0 = 0$; PatSize = 5, PatNum = 200, SearchWin = 55, Step = 5. In the main loop: $l = 0$, $l_{\max} = 50$; in the alternate iteration of the AL function: $p = 0$, $p_{\max} = 10$; the maximum tolerance of the iterative results $\varepsilon = 10^{-4}$.
Assisted loop: pre-estimation of the initial values of $\mathcal{U}$ and $\mathbf{V}$.
1: $\mathbf{V}^{l+(1/2)}$ is obtained from $\mathcal{Y}$ by the HySime algorithm and taken as the initial value $\mathbf{V}^0$ in the main loop.
2: The NLR regularization term $\sum_{i=1}^{P}\|P_i(\mathcal{U})\|_{w,*}$ is used to solve $\mathcal{U}^{l+(1/2)}$ independently, which is taken as the initial value $\mathcal{U}^0$ in the main loop.
3:    while $n < n_{\max}$ do
4:        Update $\mathcal{U}^{l+(1/2)}$ by Equation (14).
5:        $n = n + 1$.
6:    end while
7:    $\mathbf{V}^0 = \mathbf{V}^{l+(1/2)}$, $\mathcal{U}^0 = \mathcal{U}^{l+(1/2)}$.
Main loop: solution of the $\mathbf{V}$, $\mathcal{U}$, and $\mathcal{S}$ subproblems.
1: while not converged and $l < l_{\max}$ do
2:    Update $\mathbf{V}^{l+1}$ by Equations (19) and (20).
3:    while $p < p_{\max}$ do
4:        Update $\mathcal{U}^{l+1}$ by Equations (28) and (29).
5:        Update the auxiliary variables $\mathcal{A}_k^{p+1}$ ($k = 1, 2$) by Equation (31).
6:        Update the Lagrange multipliers $\mathcal{M}_k^{p+1}$ ($k = 1, 2$) by Equation (32).
7:        $p = p + 1$.
8:        Check the convergence condition: $\|\mathcal{U}^{l+1,p+1} - \mathcal{U}^{l+1,p}\|_F / \|\mathcal{U}^{l+1,p}\|_F \leq \varepsilon$.
9:    end while
10:   $\mathcal{U}^{l+1} = \mathcal{U}^{l+1,p+1}$.
11:   Update $\mathcal{S}^{l+1}$ by Equation (34).
12:   $l = l + 1$.
13:   $\mathcal{X}^{l+1} = \mathcal{U}^{l+1} \times_3 \mathbf{V}^{l+1}$.
14:   Check the convergence condition: $\|\mathcal{X}^{l+1} - \mathcal{X}^l\|_F / \|\mathcal{X}^l\|_F \leq \varepsilon$.
15: end while
Output: The denoised HSI $\mathcal{X}^{l+1} \in \mathbb{R}^{M\times N\times B}$.

3.3.5. Computational Complexity Analysis

According to Algorithm 1, the computational cost of the proposed DFTVNLR mainly arises from operations such as the FFT, SVD, similar block search (SBS), soft-thresholding operation (SoftThreshold), and matrix or tensor products. Based on the relevant conclusions in [2,31], we summarize the computational complexity of the solving operators involved in the common prior terms in Table 1.
In this article, the cost accounting of [2] is adopted; that is, the cheap matrix or tensor product operations are ignored, and only operators with a high computational cost, such as the FFT, SVD, SBS, and SoftThreshold, are considered. First, updating $\mathcal{U}^{l+(1/2)}$ via (14) mainly involves SBS and SVD, which leads to a cost of approximately $O(M^2N^2R\,\mathrm{PatSize}^2/\mathrm{StepSize}^4 + \mathrm{GroupNum}\cdot R^2\,\mathrm{PatNum}\,\mathrm{PatSize}^2)$. Second, updating $\mathbf{V}$ via (19) and (20) mainly involves SVD and the 1-D FFT, which leads to a cost of approximately $O(BR^2 + BR\log B)$. Third, updating $\mathcal{U}$ via (28) and (29) needs about $O(BR^2 + MNR\log(MN))$, since it mainly involves SVD and the 2-D FFT. Finally, updating $\mathcal{A}_1$ and $\mathcal{A}_2$ via (31) and updating $\mathcal{S}$ via (34) mainly involve SoftThreshold, with a cost of about $O(MNB)$. In addition, updating the initial value of $\mathbf{V}$ by HySime and updating $\mathcal{M}_1$ and $\mathcal{M}_2$ by (32) involve only products between matrices or tensors, whose cost can be ignored. To sum up, the cumulative computational cost of the assisted and main loop iterations can be roughly estimated by the following Equation (35):
$$O\Big(n\big(M^2N^2R\,\mathrm{PatSize}^2/\mathrm{StepSize}^4 + \mathrm{GroupNum}\cdot R^2\,\mathrm{PatNum}\,\mathrm{PatSize}^2\big) + l\big(BR^2 + BR\log B + MNR\log(MN) + p\,MNB\big)\Big)$$

4. Experimental Results and Discussion

In this section, to demonstrate the performance of the proposed DFTVNLR in terms of visual quality and quantitative evaluation, we conducted a series of experiments on both simulated and real HSI data. We adopted nine state-of-the-art model-based HSI denoising methods for comparison, which are the most relevant to our method: LRTFDFR [31], WNLRATV [41], RCTV [2], SNLRSF [30], FastHyMix [46], L0L1HTV [29], LRTDTV [9], LRMR [53], and FGSLR [54]. Their implementation codes can be obtained directly from the authors' websites. These methods can be divided into four categories. Specifically, the methods based on noise modeling include WNLRATV and FastHyMix. The methods based on a factor or representation coefficient include LRTFDFR, WNLRATV, RCTV, and SNLRSF. The methods considering the NSS prior include WNLRATV and SNLRSF. The methods based on TV include LRTFDFR, WNLRATV, RCTV, and L0L1HTV.
The differences between our method and the selected comparison methods are summarized in Table 2. All relevant parameters of the competing methods are set to the default values suggested by their authors. For our DFTVNLR solver, a detailed discussion of the parameter settings is presented in Section 4.3. Before the simulated and real experiments, the gray values of the HSI are normalized into the interval [0, 1] band by band. All experiments were implemented on a PC with Windows 10 in the environment of an Intel Core [email protected] processor and 32 GB RAM.

4.1. Simulated Data Experiments

We conducted the simulated experiments on two public HSIs in this section. One was the simulated Indian Pines dataset (http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes, accessed on 1 March 2023), with a size of 145 (rows) × 145 (columns) × 224 (bands), which has obvious LS characteristics and large edge gradients and is also used by the compared methods LRTFDFR, WNLRATV, L0L1HTV, and LRTDTV. The other was the Washington DC (WDC) Mall dataset (http://engineering.purdue.edu/biehl/MultiSpec/hyperspectral.html, accessed on 5 March 2023), with a size of 256 (rows) × 256 (columns) × 191 (bands), which is rich in local texture details and is also used by the compared methods LRTFDFR, WNLRATV, RCTV, SNLRSF, FastHyMix, L0L1HTV, LRMR, and FGSLR.
Because the noise in real HSIs usually appears as a mixture of several types, we considered the following five noise mixing cases, which are basically consistent with [31]; a minimal simulation sketch for case 2 is given after the list. It should be noted that salt-and-pepper noise is a typical and common type of impulse noise.
Case 1 (Gaussian Noise): Add zero-mean Gaussian noise, with a standard deviation randomly sampled from [0.1, 0.2], to all bands.
Case 2 (Gaussian Noise + Salt and Pepper Noise): First, add Gaussian noise as in case 1. Then, add salt-and-pepper noise, with a noise proportion randomly sampled from [0.1, 0.2], to all bands.
Case 3 (Gaussian Noise + Salt and Pepper Noise + Deadlines): First, add Gaussian noise and salt-and-pepper noise as in case 2. Then, add deadlines to a randomly selected 20% of the bands. The width of the deadlines is sampled from the set {1, 2, 3}, and the number of deadlines in each band is sampled from the set {6, 7, 8, 9, 10}.
Case 4 (Gaussian Noise + Salt and Pepper Noise + Stripes): First, add Gaussian noise and salt-and-pepper noise as in case 2. Furthermore, add stripes to a randomly selected 40% of the bands. The number of stripes in each band is sampled from the set {6, 7, 8, …, 15}.
Case 5 (Gaussian Noise + Salt and Pepper Noise + Stripes + Deadlines): First, add Gaussian noise and salt-and-pepper noise as in case 2. Then, 20% of the bands are randomly selected to add deadlines as in case 3. Finally, 40% of the bands are randomly selected to add stripes as in case 4.
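As referenced above, a minimal sketch (assumed; the paper's exact sampling code is not reproduced here) of the case 2 corruption on a [0, 1]-normalized HSI:

```python
import numpy as np

def add_case2_noise(X: np.ndarray, rng=np.random.default_rng(0)) -> np.ndarray:
    """Band-wise Gaussian noise (sigma in [0.1, 0.2]) plus salt-and-pepper noise (p in [0.1, 0.2])."""
    M, N, B = X.shape
    Y = X.copy()
    for b in range(B):
        sigma = rng.uniform(0.1, 0.2)
        Y[:, :, b] += rng.normal(0.0, sigma, size=(M, N))
        p = rng.uniform(0.1, 0.2)
        mask = rng.random((M, N))
        Y[:, :, b][mask < p / 2] = 0.0          # pepper
        Y[:, :, b][mask > 1 - p / 2] = 1.0      # salt
    return Y
```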
In the simulated experiments, six frequently used evaluation metrics were employed to quantitatively evaluate the overall quality of the denoised results: the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity (MSSIM), the mean feature similarity (MFSIM), the mean spectral angle mapping (MSAM), the Erreur Relative Globale Adimensionnelle de Synthese (ERGAS), and the running time of the algorithm. Among them, MPSNR, MSSIM, and MFSIM are the means of PSNR [53], SSIM [55], and FSIM [56] over the bands, respectively. Larger MPSNR, MSSIM, and MFSIM values and smaller MSAM, ERGAS, and Time values indicate a better denoising effect and algorithm performance.
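For reference, a minimal sketch (assumed) of the band-wise MPSNR for images normalized to [0, 1]:

```python
import numpy as np

def mpsnr(X_clean: np.ndarray, X_denoised: np.ndarray) -> float:
    """Mean over bands of the per-band PSNR, with peak value 1."""
    psnrs = []
    for b in range(X_clean.shape[2]):
        mse = np.mean((X_clean[:, :, b] - X_denoised[:, :, b]) ** 2)
        psnrs.append(10.0 * np.log10(1.0 / max(mse, 1e-12)))
    return float(np.mean(psnrs))
```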

4.1.1. Visual Quality Comparison

To visually compare the denoising results, Figure 2, Figure 3, Figure 4 and Figure 5, respectively, show some pseudocolor synthetic bands of the Indian Pines dataset and the WDC Mall dataset denoised under noise cases 1 and 5 and cases 4 and 5. For a better visual comparison, the common region of each figure is marked by a red box and enlarged by 2.5 times.
From Figure 2, Figure 3, Figure 4 and Figure 5, several observations can easily be made. First, all methods can remove the mixed noise from the two simulated datasets to some extent. Second, the proposed DFTVNLR can both effectively remove mixed noise and preserve the global structure and local details, as shown in the enlarged subregions of Figure 2l, Figure 3l, Figure 4l and Figure 5l, indicating that our method is reasonable, effective, and robust. Third, the denoising performances of RCTV, FastHyMix, L0L1HTV, LRTDTV, LRMR, and FGSLR are not as good as that of DFTVNLR, as shown in Figure 3e,g–l, Figure 4e,g–l and Figure 5e,g–l, which shows that the denoising ability of our proposed method is stronger, thanks to the adoption of LRTF, DFTV, and NLR. Finally, the proposed DFTVNLR is better than LRTFDFR, WNLRATV, and SNLRSF, as shown in Figure 3c,d,f,l, Figure 4c,d,f,l and Figure 5c,d,f,l, which shows that the strategies of exploring prior knowledge more fully, adopting small-sized factor regularization, and predicting the initial values of the double factors by NLR and HySime can improve the denoising performance. Additionally, RCTV, L0L1HTV, and FGSLR lack universality and robustness, as their denoising effects depend highly on the image type.

4.1.2. Quantitative Comparison

In terms of the aforementioned six quality metrics, Table 3 and Table 4 present the quantitative denoising results on the Indian Pines and WDC Mall datasets, respectively. For each quality metric, we highlight the top three results using bold, underlining, and italics, respectively. According to the quantitative results, the following four aspects can be observed:
  • The proposed DFTVNLR achieves the best results under most noise cases, but its MSSIM, MFSIM, or MSAM is slightly lower than that of LRTFDFR or WNLRATV under some noise cases. This may be because our model still cannot completely match the real degradation model, resulting in a slight over-"enhancement"; nevertheless, it improves the visual effect to a certain extent.
  • The principle of DFTVNLR is closest to that of LRTFDFR, but our results are better than those of LRTFDFR on the whole, which shows that it is reasonable and effective to introduce the NSS prior. Additionally, the NLR term of DFTVNLR is derived from WNLRATV, but our results are better than those of WNLRATV on the whole, which proves that the double-factor strategy is more effective and robust.
  • Compared to the methods (LRTDTV, FGSLR, and L0L1HTV) that do not characterize the time-consuming NSS prior but impose constraints on the large-sized HSI, our algorithm is 1–2 times faster, thanks to our strategies of imposing constraints on the small-sized factors and characterizing NSS outside the main loop. Compared to the methods (WNLRATV and SNLRSF) that also consider the NSS prior and regularize the small-sized spatial factor, our computational cost is reduced by about 4 times and 10 times, respectively, owing to our strategy of independently solving the NLR model with fewer iterations outside the main loop.
  • Although our approach strives to reduce or control the computational cost, it is currently difficult to achieve the optimal cost because we explore almost all of the major priors. Therefore, our cost is still higher than those of RCTV, FastHyMix, LRMR, and LRTFDFR, but our denoising performance is better. The faster execution of FastHyMix is due to the introduction of a deep image prior. The cost of RCTV is lower because it imposes only a few regularization terms on the small-sized spatial factor. LRMR is fast because only a few priors are mined. But the denoising performance of the above approaches is clearly insufficient. The cost of LRTFDFR is about half of ours because it does not consider the NSS prior.
To compare the performance of the various methods in each band, we show the per-band PSNR and SSIM of the two simulated datasets in Figure 6 and Figure 7, respectively. The proposed DFTVNLR achieves the highest PSNR and SSIM values in most bands and noise cases. To further compare the performance of the various methods in restoring and maintaining the spectral characteristics, we present the spectral curves of the two simulated datasets under noise case 5 in Figure 8 and Figure 9. In addition, we also present the spectral curves of these two simulated datasets at other spatial locations and under other noise cases in the Supplementary Materials. The spectral curves obtained by the proposed DFTVNLR approximate the original ones better than those produced by the compared methods.

4.2. Real Data Experiments

In this section, we conducted further experiments on two real HSI datasets. One was the Earth Observing-1 (EO-1) Hyperion dataset (http://datamirror.csdb.cn/admin/dataEO1Main.jsp, accessed on 10 April 2023), with the size of 200 (rows) × 200 (columns) × 166 (bands), which has a rich texture and is also used in SNLRSF and LRMR. The other was the AVIRIS Indian Pines dataset (http://www.ehu.eus/ccwintco/index.php, accessed on 15 April 2023), with the size of 145 (rows) × 145 (columns) × 220 (bands), which is rich in edges and is also used in LRTFDFR, WNLRATV, SNLRSF, L0L1HTV, LRTDTV, and FGSLR. In both datasets, some bands are heavily polluted by Gaussian noise, salt and pepper noise, stripes, deadlines, or other unknown types of noise.
For the real HSIs, noiseless reference images are unavailable, so the quantitative evaluation metrics cannot be computed. Instead, we randomly selected three bands and synthesized pseudocolor images for display. Furthermore, we present the column DN (digital number) mean curves of some bands of the denoised images, which should exhibit a certain degree of smoothness.

4.2.1. EO-1 Hyperion Dataset

Figure 10 shows the denoising results for bands 166, 18, and 1 of the real EO-1 Hyperion dataset, including the pseudocolor visual images and the column DN mean curves of the corresponding bands. From the pseudocolor images, the following aspects can be observed. First, the selected bands of EO-1 Hyperion are seriously polluted by Gaussian noise, stripes, and deadlines. Second, the denoising results of L0L1HTV and LRTDTV are oversmoothed, which may be related to an improper setting of the TV terms. Third, LRTFDFR and FastHyMix are excellent at keeping image details, but their denoising results tend to decline in pixel DN and image contrast. Fourth, the other compared methods remove the mixed noise to varying degrees without image distortion, but the proposed DFTVNLR has greater advantages in maintaining the global structure and local details and in fully restoring the contrast.
Furthermore, we evaluated the actual denoising effect in terms of the column DN mean curves. The curves show rapid fluctuations, as presented in Figure 10a-1,a-3, which imply the existence of noise; meanwhile, it can be observed that bands 1 and 18 are less affected by noise. After denoising, the fluctuations in band 166 are suppressed to varying degrees. Specifically, the curves of RCTV, SNLRSF, FastHyMix, LRTDTV, LRMR, and FGSLR still fluctuate slightly in places, indicating that some noise remains. Although the curves of LRTFDFR and L0L1HTV are relatively smooth, the image contrast of LRTFDFR is low and L0L1HTV is oversmoothed. By contrast, the curves of WNLRATV and DFTVNLR are smooth, and their DN curves are closest to the actual image.

4.2.2. AVIRIS Indian Pines Dataset

Figure 11 shows the denoising results for bands 220, 161, and 1 of the real AVIRIS Indian Pines dataset, including the pseudocolor visual images and the column DN mean curves of the corresponding bands. From the pseudocolor images, the following aspects can be observed. First, the selected bands of AVIRIS Indian Pines are seriously polluted by Gaussian noise and salt-and-pepper noise. Second, some noise remains in the results of RCTV, SNLRSF, LRMR, and FGSLR, which is related to their insufficient exploration of the priors. Third, the image contrast is seriously changed by FastHyMix, L0L1HTV, LRTDTV, and LRMR, which may be related to the neglected LS or LR prior. Fourth, LRTFDFR, WNLRATV, and the proposed DFTVNLR are relatively excellent in removing mixed noise and retaining image contrast, but the proposed DFTVNLR has greater advantages in restoring the global structure, local details, and clarity.
Moreover, based on the column DN mean curves, we evaluated the actual denoising effect to some extent. Due to the existence of noise, the curves show rapid fluctuations, as presented in Figure 11a-1–a-3. After denoising, the fluctuations are suppressed to varying degrees. The column DN mean curves of FastHyMix, L0L1HTV, LRTDTV, and LRMR tend to be oversmoothed and closer to a constant, which is consistent with the blurring and contrast reduction shown in Figure 11f–i. In contrast, LRTFDFR, WNLRATV, and the proposed DFTVNLR produce relatively smooth column DN mean curves with a certain fluctuation range, which indirectly shows that they not only remove the noise but also retain some local texture information. For the proposed DFTVNLR in particular, the overall fluctuation is more natural, the local fluctuation is smoother, and the DN is closest to the actual image.

4.3. Discussion

As shown in Algorithm 1, seven crucial parameters in the proposed DFTVNLR need to be set in advance: the rank $R$, the regularization parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$, the penalty parameter $\mu$, and the proximal parameter $\rho$. Among them, $\rho$ ensures the theoretical convergence of the PAM solver; references [31,54] suggest setting $\rho$ to 0.1. Next, we investigate the sensitivity of the other parameters.

4.3.1. Parameter Analysis

The rank $R$ mainly characterizes the LR of the HSI; its sensitivity analysis is presented in Figure 12. When $R$ is between 12 and 34, the results exhibit stable and superior performance for the various datasets and noise cases. Considering that a larger $R$ leads to a higher computational cost, $R$ should be set to 12 or estimated by HySime.
The parameters $\lambda_1$ and $\lambda_2$ determine the weights of the regularization terms on $\mathcal{U}$ and $\mathbf{V}$, while $\lambda_3$ and $\lambda_4$ determine the weights of the NLR term and the sparse term on $\mathcal{S}$, respectively. The sensitivity analyses of $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are presented in Figure 13, Figure 14, Figure 15 and Figure 16, respectively. It can be seen that $\lambda_1$ and $\lambda_4$ are sensitive to both the dataset and the noise case, but the sensitivity trends are consistent. Additionally, $\lambda_3$ is sensitive to the dataset but robust to the noise case. The proposed DFTVNLR is relatively robust to $\lambda_2$. When $\lambda_1 = 0.2$, $\lambda_2 \in [0.005, 0.03]$, $\lambda_3 \in [0.2, 10]$, and $\lambda_4 \in [0.03, 0.04]$, the denoising performance is stable and superior under the different noise cases and datasets.
The sensitivity analysis of $\mu$ is presented in Figure 17. The sensitivity trend of $\mu$ is consistent across datasets and noise cases. When $\mu \in [10{,}000, 20{,}000]$, higher MPSNR and MSSIM values are obtained under the different noise cases and datasets.

4.3.2. Convergence Analysis

In this section, we indirectly verify the convergence of Algorithm 1 from the perspective of numerical analysis. Figure 18 shows the MPSNR and MSSIM of the denoising results for the different simulated datasets under noise cases 1 and 5, together with the relative changes between adjacent iteration results. As the number of iterations increases, the MPSNR and MSSIM increase monotonically, and the relative changes decrease monotonically. When $l \geq 35$, the relative variation errors stabilize at 0, which verifies the convergence of Algorithm 1 numerically. Based on the above analysis, we list the suggested parameter settings of DFTVNLR in Table 5.

5. Conclusions

In this article, we propose a novel DFTVNLR method for HSI mixed noise removal, which fully considers the intrinsic priors of the HSI and the mixed noise. First, the LRTF framework is used to characterize LR. Then, DFTV with a weighting strategy is applied to describe and promote the spatial–spectral LS. Additionally, we employ the NLR model to characterize NSS. To solve the proposed DFTVNLR model, we developed a PAM algorithm based on ADMM and ALM, which is theoretically convergent. In particular, we control the computational cost in two ways, namely regularizing the small-sized double factors and solving the NLR outside the main loop. In extensive numerical experiments, the proposed DFTVNLR exhibits superior performance in mixed noise removal, spatial information recovery, and spectral signature preservation.
In the future, we will focus on and try to solve the following problems. First, we will incorporate the noise modeling idea [41,46] into our DFTVNLR method to further enhance its capability in real-life scenarios. Second, there may be many kinds of degradations in actual HSIs that are difficult to decouple, such as noise and blur [57]. Therefore, it is of great practical value to study the HSI restoration method that simultaneously removes multiple degradations. Third, the model-based and the data-based methods will be combined to reduce the computational cost. Fourth, we assumed that the mixed noise in actual HSIs is mainly additive noise, which has certain limitations [58]. Studying a reasonable noise model that is more in line with the actual situation will help to further improve the denoising performance.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs16101686/s1, Figure S1. The spectral curves at spatial location (30, 30) of the Indian Pines dataset under noise Case 3 by different compared methods. (a) Original, (b) Noisy, (c) LRTFDFR, (d) WNLRATV, (e) RCTV, (f) SNLRSF, (g) FastHyMix, (h) L0L1HTV, (i) LRTDTV, (j) LRMR, (k) FGSLR, (l) DFTVNLR (proposed). Figure S2. The spectral curves at spatial location (151, 151) of the WDC Mall dataset under noise Case 4 by different compared methods. (a) Original, (b) Noisy, (c) LRTFDFR, (d) WNLRATV, (e) RCTV, (f) SNLRSF, (g) FastHyMix, (h) L0L1HTV, (i) LRTDTV, (j) LRMR, (k) FGSLR, (l) DFTVNLR (proposed).

Author Contributions

Conceptualization, Y.W., W.X. and L.Z.; methodology, Y.W. and W.X.; validation, Y.W.; formal analysis, Y.W.; investigation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., L.Z. and W.X.; visualization, Y.W.; supervision, W.X.; funding acquisition, W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (62075219).

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to thank the Editors and the anonymous reviewers for their constructive comments, which significantly improved the quality of the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

For concise expression, the technical terms frequently used in this article are abbreviated as follows:
HSI: hyperspectral image
RS: remote sensing
SAR: synthetic aperture radar
RCI: representation coefficient image
LR: spectral global low-rankness
LS: local smoothness
NSS: spatial nonlocal self-similarity
S: sparsity of sparse noise
TV: total variation
SSTV: spatial–spectral TV
ASSTV: adaptive SSTV
DFTV: spatial–spectral double factor and total variation
LRTF: low-rank tensor factorization
LRMF: low-rank matrix factorization
LRTV: low-rank TV
LRMR: low-rank matrix recovery
NLR: nonlocal low-rank tensor model
PAM: proximal alternating minimization
ADMM: alternating direction method of multipliers
ALM: augmented Lagrangian method
SVD: singular value decomposition
FFT: fast Fourier transformation
SBS: similar block search
SoftThreshold: soft-thresholding operation
MPSNR: the mean of peak signal-to-noise ratio
MSSIM: the mean of structural similarity
MFSIM: the mean of feature similarity
MSAM: the mean of spectral angle mapping
ERGAS: the Erreur Relative Globale Adimensionnelle de Synthèse
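For reference, three of the quality indices listed above can be computed as in the following sketch (our own helper code using the standard definitions; function names and the peak/ratio parameters are ours) for an M x N x B reference X and estimate Xh:

```python
import numpy as np

def mpsnr(X, Xh, peak=1.0):
    # Mean over bands of the per-band peak signal-to-noise ratio (dB).
    mse = ((X - Xh) ** 2).mean(axis=(0, 1))
    return float(np.mean(10 * np.log10(peak ** 2 / mse)))

def msam(X, Xh, eps=1e-12):
    # Mean over pixels of the spectral angle (radians) between spectra.
    num = (X * Xh).sum(axis=2)
    den = np.linalg.norm(X, axis=2) * np.linalg.norm(Xh, axis=2) + eps
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def ergas(X, Xh, ratio=1.0):
    # Erreur Relative Globale Adimensionnelle de Synthese (lower is better).
    band_rmse = np.sqrt(((X - Xh) ** 2).mean(axis=(0, 1)))
    band_mean = X.mean(axis=(0, 1))
    return float(100 * ratio * np.sqrt(np.mean((band_rmse / band_mean) ** 2)))
```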

References

  1. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M. Imaging Spectroscopy and the Airborne Visible Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248.
  2. Peng, J.J.; Wang, H.L.; Cao, X.Y.; Liu, X.L.; Rui, X.Y.; Meng, D.Y. Fast Noise Removal in Hyperspectral Images Via Representative Coefficient Total Variation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5546017.
  3. Zhao, X.; Tao, R.; Li, W.; Li, H.-C.; Du, Q.; Liao, W.; Philips, W. Joint Classification of Hyperspectral and Lidar Data Using Hierarchical Random Walk and Deep CNN Architecture. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7355–7370.
  4. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in Hyperspectral Image and Signal Processing: A Comprehensive Overview of the State of the Art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78.
  5. Dian, R.W.; Li, S.T.; Guo, A.J.; Fang, L.Y. Deep Hyperspectral Image Sharpening. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5345–5355.
  6. Zhang, L.; Nie, J.T.; Wei, W.; Li, Y.; Zhang, Y.N. Deep Blind Hyperspectral Image Super-Resolution. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2388–2400.
  7. Lin, C.-H.; Bioucas-Dias, J.M. Nonnegative Blind Source Separation for Ill-Conditioned Mixtures Via John Ellipsoid. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2209–2223.
  8. He, W.; Zhang, H.Y.; Zhang, L.P.; Shen, H.F. Total-Variation-Regularized Low-Rank Matrix Factorization for Hyperspectral Image Restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 176–188.
  9. Wang, Y.; Peng, J.J.; Zhao, Q.; Leung, Y.; Zhao, X.L.; Meng, D.Y. Hyperspectral Image Restoration Via Total Variation Regularized Low-Rank Tensor Decomposition. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2018, 11, 1227–1243.
  10. Zhou, L.; Ma, X.; Wang, X.; Hao, S.; Ye, Y.; Zhao, K. Shallow-to-Deep Spatial-Spectral Feature Enhancement for Hyperspectral Image Classification. Remote Sens. 2023, 15, 261.
  11. Deng, C.; Chen, Y.; Zhang, S.; Li, F.; Lai, P.; Su, D.; Hu, M.; Wang, S. Robust Dual Spatial Weighted Sparse Unmixing for Remotely Sensed Hyperspectral Imagery. Remote Sens. 2023, 15, 4056.
  12. Pan, H.; Jing, Z.; Leung, H.; Peng, P.; Zhang, H. Multiband Image Fusion Via Regularization on a Riemannian Submanifold. Remote Sens. 2023, 15, 4370.
  13. Deng, Y.-J.; Li, H.-C.; Tan, S.-Q.; Hou, J.; Du, Q.; Plaza, A. T-Linear Tensor Subspace Learning for Robust Feature Extraction of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5501015.
  14. Sun, S.; Bao, W.; Qu, K.; Feng, W.; Zhang, X.; Ma, X. Hyperspectral Image Super-Resolution Algorithm Based on Graph Regular Tensor Ring Decomposition. Remote Sens. 2023, 15, 4983.
  15. Ji, L.; Geng, X. Hyperspectral Target Detection Methods Based on Statistical Information: The Key Problems and the Corresponding Strategies. Remote Sens. 2023, 15, 3835.
  16. Chang, Y.; Yan, L.; Fang, H.; Zhong, S.; Liao, W. HSI-DeNet: Hyperspectral Image Restoration Via Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 667–682.
  17. Cao, X.-Y.; Fu, X.-Y.; Xu, C.; Meng, D.-Y. Deep Spatial-Spectral Global Reasoning Network for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5504714.
  18. Chen, Y.; Cao, X.; Zhao, Q.; Meng, D.; Xu, Z. Denoising Hyperspectral Image with Non-i.i.d. Noise Structure. IEEE Trans. Cybern. 2018, 48, 1054–1066.
  19. Peng, J.; Xie, Q.; Zhao, Q.; Wang, Y.; Yee, L.; Meng, D. Enhanced 3DTV Regularization and Its Applications on HSI Denoising and Compressed Sensing. IEEE Trans. Image Process. 2020, 29, 7889–7903.
  20. Zhuang, L.; Bioucas-Dias, J.M. Fast Hyperspectral Image Denoising and Inpainting Based on Low-Rank and Sparse Representations. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2018, 11, 730–742.
  21. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z.; Gu, S. Multispectral Images Denoising by Intrinsic Tensor Sparsity Regularization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1692–1700.
  22. Zheng, Y.-B.; Huang, T.-Z.; Zhao, X.-L.; Jiang, T.-X.; Ma, T.-H.; Ji, T.-Y. Mixed Noise Removal in Hyperspectral Image Via Low-Fibered-Rank Regularization. IEEE Trans. Geosci. Remote Sens. 2020, 58, 734–749.
  23. Lin, J.; Huang, T.-Z.; Zhao, X.-L.; Ma, T.-H.; Jiang, T.-X.; Zheng, Y.-B. A Novel Non-Convex Low-Rank Tensor Approximation Model for Hyperspectral Image Restoration. Appl. Math. Comput. 2021, 408, 126342.
  24. Chang, Y.; Yan, L.-X.; Fang, H.-Z.; Liu, H. Simultaneous Destriping and Denoising for Remote Sensing Images with Unidirectional Total Variation and Sparse Representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1051–1055.
  25. Fan, H.-Y.; Li, C.; Guo, Y.-L.; Kuang, G.-Y.; Ma, J.-Y. Spatial-Spectral Total Variation Regularized Low-Rank Tensor Decomposition for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6196–6213.
  26. Jiang, T.-X.; Zhuang, L.-N.; Huang, T.-Z.; Zhao, X.-L.; Bioucas-Dias, J.M. Adaptive Hyperspectral Mixed Noise Removal. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5511413.
  27. Zhao, B.; Ulfarsson, M.O.; Sveinsson, J.R.; Chanussot, J. Hyperspectral Image Denoising Using Spectral-Spatial Transform-Based Sparse and Low-Rank Representations. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522125.
  28. Chang, Y.; Yan, L.-X.; Zhao, X.-L.; Fang, H.-Z.; Zhang, Z.-J.; Zhong, S. Weighted Low-Rank Tensor Recovery for Hyperspectral Image Restoration. IEEE Trans. Cybern. 2020, 50, 4558–4572.
  29. Wang, M.; Wang, Q.; Chanussot, J.; Hong, D. L0-L1 Hybrid Total Variation Regularization and Its Applications on Hyperspectral Image Mixed Noise Removal and Compressed Sensing. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7695–7710.
  30. Cao, C.-H.; Yu, J.; Zhou, C.-Y.; Hu, K.; Xiao, F.; Gao, X.-P. Hyperspectral Image Denoising Via Subspace-Based Nonlocal Low-Rank and Sparse Factorization. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2019, 12, 973–988.
  31. Zheng, Y.-B.; Huang, T.-Z.; Zhao, X.-L.; Chen, Y.; He, W. Double-Factor-Regularized Low-Rank Tensor Factorization for Mixed Noise Removal in Hyperspectral Image. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8450–8464.
  32. Yao, J.; Meng, D.; Zhao, Q.; Cao, W.; Xu, Z. Nonconvex-Sparsity and Nonlocal-Smoothness-Based Blind Hyperspectral Unmixing. IEEE Trans. Image Process. 2019, 28, 2991–3006.
  33. Dian, R.-W.; Li, S.-T.; Sun, B.; Guo, A.-J. Recent Advances and New Guidelines on Hyperspectral and Multispectral Image Fusion. Inf. Fusion 2021, 69, 40–51.
  34. Miao, Y.-C.; Zhao, X.-L.; Fu, X.; Wang, J.-L.; Zheng, Y.-B. Hyperspectral Denoising Using Unsupervised Disentangled Spatiospectral Deep Priors. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5513916.
  35. Zha, Z.-Y.; Wen, B.-H.; Yuan, X.; Zhang, J.-C.; Zhou, J.-T.; Lu, Y.-L.; Zhu, C. Nonlocal Structured Sparsity Regularization Modeling for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5510316.
  36. Wang, S.; Zhu, Z.-B.; Liu, Y.-F.; Zhang, B.-X. Weighted Group Sparse Regularized Tensor Decomposition for Hyperspectral Image Denoising. Appl. Sci. 2023, 13, 10363.
  37. Han, J.; Pan, C.; Ding, H.-Y.; Zhang, Z.-C. Double-Factor Tensor Cascaded-Rank Decomposition for Hyperspectral Image Denoising. Remote Sens. 2024, 16, 109.
  38. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500.
  39. Beck, A.; Teboulle, M. Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
  40. Xu, L.; Zheng, S.-C.; Jia, J.-Y. Unnatural L0 Sparse Representation for Natural Image Deblurring. In Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 1107–1114.
  41. Chen, Y.; Cao, W.-F.; Pang, L.; Cao, X.-Y. Hyperspectral Image Denoising with Weighted Nonlocal Low-Rank Model and Adaptive Total Variation Regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 15.
  42. Xie, Y.; Qu, Y.-Y.; Tao, D.-C.; Wu, W.-W.; Yuan, Q.-Q.; Zhang, W.-S. Hyperspectral Image Restoration Via Iteratively Regularized Weighted Schatten P-Norm Minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659.
  43. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2012, 5, 354–379.
  44. He, W.; Yao, Q.-M.; Li, C.; Yokoya, N.; Zhao, Q.-B. Non-Local Meets Global: An Iterative Paradigm for Hyperspectral Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 2089–2107.
  45. Zhuang, L.-N.; Bioucas-Dias, J.M. Hyperspectral Image Denoising Based on Global and Non-Local Low-Rank Factorizations. In Proceedings of the 24th IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1900–1904.
  46. Zhuang, L.-N.; Ng, M.K. FastHyMix: Fast and Parameter-Free Hyperspectral Image Mixed Noise Removal. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 4702–4716.
  47. Bioucas-Dias, J.M.; Nascimento, J.M.P. Hyperspectral Subspace Identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445.
  48. He, W.; Yao, Q.-M.; Li, C.; Yokoya, N.; Zhao, Q.-B. Non-Local Meets Global: An Integrated Paradigm for Hyperspectral Denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 6861–6870.
  49. Chen, Y.; He, W.; Yokoya, N.; Huang, T.-Z. Hyperspectral Image Restoration Using Weighted Group Sparsity-Regularized Low-Rank Tensor Decomposition. IEEE Trans. Cybern. 2020, 50, 3556–3570.
  50. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122.
  51. Wu, C.-L.; Tai, X.-C. Augmented Lagrangian Method, Dual Methods, and Split Bregman Iteration for ROF, Vectorial TV, and High Order Models. SIAM J. Imag. Sci. 2010, 3, 300–339.
  52. Donoho, D.L. De-Noising by Soft-Thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
  53. Zhang, H.-Y.; He, W.; Zhang, L.-P.; Shen, H.-F.; Yuan, Q.-Q. Hyperspectral Image Restoration Using Low-Rank Matrix Recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
  54. Chen, Y.; Huang, T.-Z.; He, W.; Zhao, X.-L.; Zhang, H. Hyperspectral Image Denoising Using Factor Group Sparsity-Regularized Nonconvex Low-Rank Approximation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 3110769.
  55. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  56. Zhang, L.; Zhang, L.; Mou, X.-Q.; Zhang, D. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
  57. Wang, P.; Wang, Y.-L.; Huang, B.; Wang, L.-G.; Zhang, X.-W.; Leung, H.; Chanussot, J. Poissonian Blurred Hyperspectral Imagery Denoising Based on Variable Splitting and Penalty Technique. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5505414.
  58. Palud, P.; Thouvenin, P.-A.; Chainais, P.; Bron, E.; Le Petit, F. Efficient Sampling of Non Log-Concave Posterior Distributions with Mixture of Noises. IEEE Trans. Signal Process. 2023, 71, 2491–2501.
Figure 1. The flowchart of the proposed DFTVNLR method. It consists of two main segments. First, in the assisted loop, the NLR model and the HySime algorithm are used to estimate the initial values of U and V. Second, in the main loop, the estimated initial values of U and V and the other prior knowledge are used iteratively to obtain the denoised HSI.
Figure 2. Denoising results of the Indian Pines dataset under noise Case 1. The pseudocolor images are composed of three bands (R: 224, G: 112, and B: 10).
Figure 3. Denoising results of the Indian Pines dataset under noise Case 5. The pseudocolor images are composed of three bands (R: 173, G: 63, and B: 2).
Figure 4. Denoising results of the WDC Mall dataset under noise Case 4. The pseudocolor images are composed of three bands (R: 101, G: 13, and B: 188).
Figure 5. Denoising results of the WDC Mall dataset under noise Case 5. The pseudocolor images are composed of three bands (R: 124, G: 24, and B: 19).
Figure 6. The PSNR and SSIM values of all bands of the Indian Pines dataset obtained by the different compared methods under the different noise cases.
Figure 7. The PSNR and SSIM values of all bands of the WDC Mall dataset obtained by the different compared methods under the different noise cases.
Figure 8. The spectral curves at spatial location (70, 70) of the Indian Pines dataset under noise Case 5 obtained by the different compared methods.
Figure 9. The spectral curves at spatial location (188, 188) of the WDC Mall dataset under noise Case 5 obtained by the different compared methods.
Figure 10. Denoising results of pseudocolor synthetic bands (R: 166, G: 18, and B: 1) selected from the real EO-1 Hyperion dataset. The rows, from top to bottom, show the noisy/denoised pseudocolor images and the column DN mean curves of the corresponding bands: (a) Noisy Hyperion, (b) LRTFDFR, (c) WNLRATV, (d) RCTV, (e) SNLRSF, (f) FastHyMix, (g) L0L1HTV, (h) LRTDTV, (i) LRMR, (j) FGSLR, (k) DFTVNLR (proposed).
Figure 11. Denoising results of pseudocolor synthetic bands (R: 220, G: 161, and B: 1) selected from the real AVIRIS Indian Pines dataset. The rows, from top to bottom, show the noisy/denoised pseudocolor images and the column DN mean curves of the corresponding bands: (a) Noisy Indian Pines, (b) LRTFDFR, (c) WNLRATV, (d) RCTV, (e) SNLRSF, (f) FastHyMix, (g) L0L1HTV, (h) LRTDTV, (i) LRMR, (j) FGSLR, (k) DFTVNLR (proposed).
Figure 12. Sensitivity analysis of the DFTVNLR model to rank R.
Figure 13. Sensitivity analysis of the DFTVNLR model to λ1.
Figure 14. Sensitivity analysis of the DFTVNLR model to λ2.
Figure 15. Sensitivity analysis of the DFTVNLR model to λ3.
Figure 16. Sensitivity analysis of the DFTVNLR model to λ4.
Figure 17. Sensitivity analysis of the DFTVNLR model to μ.
Figure 18. Convergence analysis of the proposed DFTVNLR.
Table 1. Computational complexity of solving the operators involved in common prior terms.

Prior Term | Subject | Algorithm | Computational Complexity
Spectral global low-rankness (LR) | $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$ | SVD | $O(MNB^2)$
Spectral global low-rankness (LR) | $\mathbf{V} \in \mathbb{R}^{B \times R}$ | SVD | $O(BR^2)$
Spatial–spectral local smoothness (LS) | $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$ | 2-D FFT | $O(MNB\log(MN))$
Spatial–spectral local smoothness (LS) | $\mathcal{U} \in \mathbb{R}^{M \times N \times R}$ | 2-D FFT | $O(MNR\log(MN))$
Spatial–spectral local smoothness (LS) | $\mathbf{V} \in \mathbb{R}^{B \times R}$ | 1-D FFT | $O(BR\log(B))$
Spatial nonlocal self-similarity (NSS) | $\mathcal{X} \in \mathbb{R}^{M \times N \times B}$ | SBS + SVD | $O(M^2N^2B\,\mathrm{PatSize}^2/\mathrm{StepSize}^4 + \mathrm{GroupNum} \cdot B^2 \cdot \mathrm{PatNum} \cdot \mathrm{PatSize}^2)$
Spatial nonlocal self-similarity (NSS) | $\mathcal{U} \in \mathbb{R}^{M \times N \times R}$ | SBS + SVD | $O(M^2N^2R\,\mathrm{PatSize}^2/\mathrm{StepSize}^4 + \mathrm{GroupNum} \cdot R^2 \cdot \mathrm{PatNum} \cdot \mathrm{PatSize}^2)$
Sparsity of sparse noise (S) | $\mathcal{S} \in \mathbb{R}^{M \times N \times B}$ | SoftThreshold | $O(MNB)$
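A rough back-of-the-envelope comparison makes the gap in Table 1 concrete; the sizes below are our own illustration (a WDC-Mall-like image with the rank of Table 5), and the counts are leading-order terms only:

```python
import math

M, N, B, R = 256, 256, 191, 12          # illustrative sizes; R as in Table 5

svd_X = M * N * B ** 2                   # O(MNB^2): SVD of the unfolded X
svd_V = B * R ** 2                       # O(BR^2): SVD of the spectral factor V
fft_X = M * N * B * math.log(M * N)      # O(MNB log MN): FFT-based TV solve on X
fft_U = M * N * R * math.log(M * N)      # O(MNR log MN): same solve on U

print(f"SVD cost ratio, X vs. V: {svd_X / svd_V:.1e}")  # ~8.7e4 times cheaper
print(f"FFT cost ratio, X vs. U: {fft_X / fft_U:.1f}")  # = B/R, ~15.9 times
```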
Table 2. The detailed relations and distinctions between the proposed method and the compared methods.

Method | Spectral Global Low-Rankness (LR) | Spatial–Spectral Local Smoothness (LS) | Spatial Nonlocal Self-Similarity (NSS) | Gaussian Noise (G) | Sparse Noise (S)
LRTFDFR | $\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$ | $\tau \sum_{k=1}^{2} \|\mathcal{W}_k \odot (\mathcal{U} \times_k D_k)\|_{2,1} + \lambda \|D_3 \mathbf{A}\|_F^2$ | ___ | $\frac{1}{2}\|\mathcal{Y} - \mathcal{U} \times_3 \mathbf{V} - \mathcal{S}\|_F^2$ | $\mu \|\mathcal{W}_s \odot \mathcal{S}\|_1$
WNLRATV | $\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$ | $\lambda_1 \|\mathcal{W} \odot (D\mathcal{X})\|_1$ | $\lambda_2 \sum_{i=1}^{P} \|\mathcal{P}_i \mathcal{U}\|_{w,*}$ | $\|\mathcal{W} \odot (\mathcal{Y} - \mathcal{X})\|_F^2$ | ___
RCTV | $\mathbf{X} = \mathbf{U}\mathbf{V}^{T}$ | $\sum_{k=1}^{2} \tau_k \|\nabla_k \mathbf{U}\|_1$ | ___ | $\beta \|\mathbf{N}\|_F^2$ | $\lambda \|\mathbf{S}\|_1$
SNLRSF | $\mathbf{X} = \mathbf{U}\mathbf{V}^{T}$ | ___ | $\lambda_1 \sum_i \big( \frac{1}{\delta_i^2} \|R_i \mathbf{U} - L_i\|_F^2 + K(L_i) \big)$ | $\frac{1}{2}\|\mathbf{Y} - \mathbf{U}\mathbf{V}^{T} - \mathbf{S}\|_F^2$ | $\lambda_2 \|\mathbf{S}\|_1$
L0L1HTV | ___ | $\lambda_1 \big( \|D_h \mathcal{X} D_z\|_1 + \|D_v \mathcal{X} D_z\|_1 \big) + I_{\{\|\mathcal{B}\|_{1,0} \le \theta\}}(\mathcal{B} = D\mathcal{X})$ | ___ | $\|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2$ | $\lambda_2 \|\mathcal{S}\|_1$
LRTDTV | $\mathcal{X} = \mathcal{C} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \mathbf{U}_3$ | $\tau \|\mathcal{X}\|_{\mathrm{SSTV}}$ | ___ | $\beta \|\mathcal{N}\|_F^2$ | $\lambda \|\mathcal{S}\|_1$
LRMR | minimize the rank of the matrix | ___ | ___ | $\|\mathbf{Y} - \mathbf{X} - \mathbf{S}_{t-1}\|_F^2$ when $\mathrm{rank}(\mathbf{X}) \le r$ | $\|\mathbf{Y} - \mathbf{X}_t - \mathbf{S}\|_F^2$ when $\mathrm{card}(\mathbf{S}) \le r$
FGSLR | $\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$, $\|\mathbf{V}\|_{2,1} + \|\mathbf{U}\|_{2,1}$ | ___ | ___ | $\frac{\beta}{2}\|\mathcal{Y} - \mathcal{U} \times_3 \mathbf{V} - \mathcal{S}\|_F^2$ | $\tau \|\mathcal{S}\|_1$
DFTVNLR | $\mathcal{X} = \mathcal{U} \times_3 \mathbf{V}$ | $\lambda_1 [\|\mathcal{W}_h \odot (D_h \mathcal{U})\|_1 + \|\mathcal{W}_v \odot (D_v \mathcal{U})\|_1] + \lambda_2 \|D_s \mathbf{V}\|_F^2$ | $\lambda_3 \sum_{i=1}^{P} \|\mathcal{P}_i \mathcal{U}\|_{w,*}$ | $\frac{1}{2}\|\mathcal{Y} - \mathcal{U} \times_3 \mathbf{V} - \mathcal{S}\|_F^2$ | $\lambda_4 \|\mathcal{W}_s \odot \mathcal{S}\|_1$
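The sparse-noise terms in the last column all reduce to (weighted) soft-thresholding, the SoftThreshold operator of Table 1. Below is a minimal sketch of the generic closed form, assuming the subproblem min_S λ‖W ⊙ S‖₁ + ½‖S − T‖_F² with a nonnegative weight tensor W and residual T (for DFTVNLR, T = Y − U ×₃ V); the function name is ours:

```python
import numpy as np

def weighted_soft_threshold(T, W, lam):
    # Closed-form solution of min_S lam*||W (.) S||_1 + 0.5*||S - T||_F^2:
    # elementwise shrinkage with a per-entry threshold lam * W (W >= 0).
    thresh = lam * W
    return np.sign(T) * np.maximum(np.abs(T) - thresh, 0.0)

# Example: shrink a random residual with uniform weights.
T = np.random.randn(4, 4, 3)
S = weighted_soft_threshold(T, np.ones_like(T), lam=0.1)
```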
Table 3. Quantitative comparison of all compared methods for the Indian Pines dataset.

Metrics | Noisy | LRTFDFR | WNLRATV | RCTV | SNLRSF | FastHyMix | L0L1HTV | LRTDTV | LRMR | FGSLR | DFTVNLR

Case 1: Gaussian Noise
MPSNR | 16.312 | 37.702 | 28.352 | 32.682 | 27.496 | 31.397 | 31.303 | 32.801 | 27.614 | 30.330 | 38.139
MSSIM | 0.3592 | 0.9823 | 0.9694 | 0.9680 | 0.9447 | 0.9642 | 0.9665 | 0.9647 | 0.8845 | 0.9486 | 0.9839
MFSIM | 0.4863 | 0.9960 | 0.9830 | 0.9812 | 0.9730 | 0.9782 | 0.9920 | 0.9763 | 0.9066 | 0.9656 | 0.9963
MSAM | 0.2763 | 0.0304 | 0.0797 | 0.0526 | 0.0872 | 0.0569 | 0.0654 | 0.0483 | 0.0859 | 0.0640 | 0.0284
ERGAS | 367.81 | 37.587 | 101.56 | 67.008 | 110.15 | 71.166 | 88.168 | 61.708 | 113.16 | 82.106 | 36.447
Time (s) | 0.0000 | 26.958 | 196.86 | 4.7124 | 560.02 | 5.0000 | 178.21 | 73.618 | 35.845 | 98.515 | 50.770

Case 2: Gaussian Noise + Salt-and-Pepper Noise
MPSNR | 13.747 | 37.687 | 27.659 | 32.593 | 26.015 | 27.813 | 31.980 | 32.034 | 26.439 | 29.582 | 38.197
MSSIM | 0.2181 | 0.9816 | 0.9622 | 0.9615 | 0.9107 | 0.9351 | 0.9637 | 0.9556 | 0.8537 | 0.9372 | 0.9840
MFSIM | 0.3870 | 0.9957 | 0.9769 | 0.9747 | 0.9451 | 0.9432 | 0.9900 | 0.9654 | 0.8807 | 0.9537 | 0.9958
MSAM | 0.3689 | 0.0289 | 0.0821 | 0.0525 | 0.0986 | 0.0820 | 0.0559 | 0.0473 | 0.1004 | 0.0710 | 0.0268
ERGAS | 486.39 | 37.496 | 109.02 | 67.773 | 125.93 | 106.98 | 81.603 | 65.330 | 134.78 | 92.077 | 35.186
Time (s) | 0.0000 | 28.341 | 226.70 | 4.6043 | 557.89 | 4.2800 | 175.70 | 75.821 | 38.530 | 159.56 | 53.312

Case 3: Gaussian Noise + Salt-and-Pepper Noise + Deadlines
MPSNR | 13.521 | 36.658 | 27.701 | 29.690 | 25.837 | 26.278 | 29.969 | 29.754 | 25.971 | 29.512 | 37.007
MSSIM | 0.2119 | 0.9825 | 0.9611 | 0.9312 | 0.9077 | 0.8757 | 0.9589 | 0.9423 | 0.8460 | 0.9243 | 0.9830
MFSIM | 0.3835 | 0.9953 | 0.9761 | 0.9509 | 0.9380 | 0.8873 | 0.9845 | 0.9559 | 0.8727 | 0.9415 | 0.9954
MSAM | 0.3803 | 0.0339 | 0.0814 | 0.0835 | 0.0932 | 0.1067 | 0.0701 | 0.0794 | 0.1030 | 0.0753 | 0.0323
ERGAS | 500.19 | 42.8395 | 106.77 | 103.35 | 126.83 | 138.39 | 95.602 | 99.151 | 137.78 | 94.809 | 41.345
Time (s) | 0.0000 | 28.868 | 220.43 | 4.5730 | 562.29 | 4.1600 | 176.19 | 73.441 | 35.975 | 437.95 | 53.962

Case 4: Gaussian Noise + Salt-and-Pepper Noise + Stripes
MPSNR | 13.630 | 35.671 | 27.573 | 30.489 | 25.749 | 26.742 | 29.473 | 30.814 | 26.129 | 29.270 | 35.954
MSSIM | 0.2137 | 0.9797 | 0.9586 | 0.9529 | 0.9086 | 0.9165 | 0.9589 | 0.9501 | 0.8479 | 0.9328 | 0.9805
MFSIM | 0.3831 | 0.9951 | 0.9753 | 0.9691 | 0.9400 | 0.9206 | 0.9882 | 0.9606 | 0.8744 | 0.9502 | 0.9951
MSAM | 0.3730 | 0.0362 | 0.0839 | 0.0712 | 0.1007 | 0.0957 | 0.0730 | 0.0560 | 0.1053 | 0.0749 | 0.0355
ERGAS | 494.81 | 47.767 | 116.31 | 94.314 | 132.61 | 121.42 | 99.085 | 75.942 | 138.41 | 99.220 | 46.485
Time (s) | 0.0000 | 25.833 | 228.73 | 4.6481 | 562.17 | 3.9400 | 175.14 | 73.202 | 37.221 | 171.56 | 55.544

Case 5: Gaussian Noise + Salt-and-Pepper Noise + Stripes + Deadlines
MPSNR | 13.379 | 34.921 | 27.497 | 28.394 | 25.577 | 24.792 | 29.784 | 29.921 | 25.364 | 28.118 | 35.269
MSSIM | 0.2069 | 0.9795 | 0.9582 | 0.9194 | 0.9005 | 0.8395 | 0.9560 | 0.9400 | 0.8375 | 0.9128 | 0.9790
MFSIM | 0.3801 | 0.9948 | 0.9749 | 0.9393 | 0.9336 | 0.8567 | 0.9819 | 0.9521 | 0.8638 | 0.9303 | 0.9943
MSAM | 0.3856 | 0.0410 | 0.0841 | 0.0932 | 0.1019 | 0.1268 | 0.0652 | 0.0713 | 0.1149 | 0.0867 | 0.0370
ERGAS | 509.64 | 52.076 | 115.46 | 120.25 | 133.71 | 162.50 | 91.164 | 92.999 | 150.12 | 112.97 | 47.555
Time (s) | 0.0000 | 28.918 | 199.38 | 3.7623 | 569.10 | 5.8200 | 126.32 | 75.541 | 36.245 | 502.65 | 53.756

The results marked in bold, underlined, and italicized represent the top three results, respectively.
Table 4. Quantitative comparison of all compared methods for the WDC Mall dataset.

Metrics | Noisy | LRTFDFR | WNLRATV | RCTV | SNLRSF | FastHyMix | L0L1HTV | LRTDTV | LRMR | FGSLR | DFTVNLR

Case 1: Gaussian Noise
MPSNR | 13.168 | 29.001 | 28.513 | 27.960 | 30.601 | 29.935 | 22.821 | 26.430 | 26.998 | 27.282 | 30.725
MSSIM | 0.2864 | 0.9240 | 0.8878 | 0.8973 | 0.9426 | 0.9339 | 0.6131 | 0.8231 | 0.8643 | 0.8873 | 0.9463
MFSIM | 0.7176 | 0.9709 | 0.9304 | 0.9500 | 0.9745 | 0.9718 | 0.7426 | 0.9203 | 0.9440 | 0.9605 | 0.9760
MSAM | 0.5006 | 0.1954 | 0.1732 | 0.1968 | 0.1551 | 0.1644 | 0.3787 | 0.3222 | 0.2114 | 0.2047 | 0.1488
ERGAS | 1054.5 | 162.38 | 153.17 | 187.61 | 134.84 | 140.30 | 286.22 | 216.23 | 201.65 | 207.81 | 125.91
Time (s) | 0.0000 | 90.134 | 626.39 | 13.174 | 1692.8 | 10.130 | 478.94 | 156.18 | 105.53 | 1276.1 | 188.87

Case 2: Gaussian Noise + Salt-and-Pepper Noise
MPSNR | 12.731 | 27.871 | 28.248 | 25.751 | 26.228 | 24.490 | 22.696 | 26.112 | 26.429 | 24.987 | 29.338
MSSIM | 0.1659 | 0.9145 | 0.8642 | 0.8675 | 0.8591 | 0.8343 | 0.6074 | 0.8014 | 0.8426 | 0.8073 | 0.9252
MFSIM | 0.6305 | 0.9667 | 0.9179 | 0.9414 | 0.9382 | 0.9350 | 0.7360 | 0.9079 | 0.9327 | 0.9279 | 0.9615
MSAM | 0.5345 | 0.2320 | 0.1562 | 0.2921 | 0.2312 | 0.3102 | 0.3818 | 0.3391 | 0.2180 | 0.3757 | 0.1784
ERGAS | 1028.6 | 168.98 | 154.60 | 242.27 | 224.57 | 312.79 | 287.84 | 228.78 | 204.37 | 447.56 | 138.97
Time (s) | 0.0000 | 89.041 | 629.39 | 12.377 | 1691.5 | 16.210 | 462.36 | 157.67 | 118.98 | 6651.2 | 190.96

Case 3: Gaussian Noise + Salt-and-Pepper Noise + Deadlines
MPSNR | 12.764 | 27.965 | 28.311 | 25.801 | 24.283 | 23.878 | 22.721 | 25.537 | 26.130 | 24.054 | 28.716
MSSIM | 0.1649 | 0.9136 | 0.8638 | 0.8674 | 0.8377 | 0.8282 | 0.6017 | 0.7948 | 0.8401 | 0.7846 | 0.9170
MFSIM | 0.6294 | 0.9658 | 0.9188 | 0.9373 | 0.9353 | 0.9318 | 0.7339 | 0.9039 | 0.9293 | 0.9133 | 0.9628
MSAM | 0.5367 | 0.2449 | 0.1584 | 0.2806 | 0.3165 | 0.3416 | 0.3800 | 0.3743 | 0.2462 | 0.4275 | 0.2088
ERGAS | 1029.5 | 171.05 | 154.25 | 219.39 | 267.26 | 313.00 | 290.93 | 243.99 | 208.08 | 473.78 | 149.87
Time (s) | 0.0000 | 91.121 | 629.16 | 12.824 | 1687.8 | 17.130 | 472.53 | 156.49 | 112.65 | 7140.0 | 187.42

Case 4: Gaussian Noise + Salt-and-Pepper Noise + Stripes
MPSNR | 12.578 | 27.517 | 27.366 | 24.414 | 24.442 | 22.031 | 21.785 | 25.400 | 25.150 | 21.781 | 28.450
MSSIM | 0.1615 | 0.9085 | 0.8679 | 0.8443 | 0.8330 | 0.7566 | 0.5908 | 0.7899 | 0.8322 | 0.7502 | 0.9163
MFSIM | 0.6260 | 0.9659 | 0.9250 | 0.9372 | 0.9331 | 0.9103 | 0.7354 | 0.9053 | 0.9278 | 0.9040 | 0.9615
MSAM | 0.5378 | 0.2714 | 0.2072 | 0.3638 | 0.3155 | 0.4391 | 0.4011 | 0.3980 | 0.2908 | 0.5032 | 0.2274
ERGAS | 1048.0 | 182.88 | 178.34 | 290.17 | 288.67 | 461.74 | 323.23 | 264.59 | 223.34 | 538.91 | 166.86
Time (s) | 0.0000 | 89.871 | 638.22 | 12.990 | 1699.2 | 15.440 | 471.36 | 156.19 | 128.12 | 4645.8 | 189.11

Case 5: Gaussian Noise + Salt-and-Pepper Noise + Stripes + Deadlines
MPSNR | 12.657 | 28.652 | 28.094 | 24.615 | 24.104 | 21.210 | 23.119 | 24.907 | 25.639 | 24.320 | 29.631
MSSIM | 0.1610 | 0.9165 | 0.8648 | 0.8422 | 0.8255 | 0.7309 | 0.6072 | 0.7892 | 0.8312 | 0.7812 | 0.9235
MFSIM | 0.6253 | 0.9649 | 0.9185 | 0.9320 | 0.9307 | 0.8991 | 0.7339 | 0.9041 | 0.9258 | 0.9133 | 0.9594
MSAM | 0.5374 | 0.2162 | 0.1671 | 0.3042 | 0.3300 | 0.4835 | 0.2878 | 0.3907 | 0.2652 | 0.4081 | 0.1789
ERGAS | 1037.7 | 160.21 | 156.70 | 280.78 | 306.97 | 548.08 | 270.20 | 253.70 | 228.40 | 482.66 | 138.54
Time (s) | 0.0000 | 90.389 | 718.10 | 13.266 | 2640.6 | 14.440 | 369.64 | 157.34 | 123.19 | 7038.9 | 189.68

The results marked in bold, underlined, and italicized represent the top three results, respectively.
Table 5. Suggestions on the parameter settings of DFTVNLR when it is used on the simulated and real noisy HSI datasets.

Dataset | R | λ1 | λ2 | λ3 | λ4 | μ | L
Indian Pines (simulated) | 12 | 0.2 | 0.005–0.03 | 0.2–10 | 0.03–0.04 | 10,000–20,000 | 35
WDC Mall (simulated) | 9 or 12 | 0.2 | 0.005–0.03 | 0.2–10 | 0.03–0.04 | 10,000–20,000 | 35
Hyperion (real noisy) | 12 or estimated by HySime | 0.2 | 0.005–0.03 | 0.2–10 | 0.03–0.04 | 10,000–20,000 | 35
Indian Pines (real noisy) | 12 or estimated by HySime | 0.2 | 0.005–0.03 | 0.2–10 | 0.03–0.04 | 10,000–20,000 | 35
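As a starting point, the suggestions of Table 5 can be collected into a single configuration. The dictionary below is our own illustrative packaging (key names are ours, and the suggested ranges are replaced by midpoints; tune per dataset as the sensitivity analyses in Figures 12–17 indicate):

```python
# Illustrative DFTVNLR starting configuration distilled from Table 5.
dftvnlr_params = {
    "R": 12,           # subspace rank; 9 or 12 for WDC Mall,
                       # 12 or a HySime estimate for real data
    "lambda1": 0.2,    # weighted 2-D TV on the spatial factor U
    "lambda2": 0.02,   # 1-D TV on the spectral factor V (range 0.005-0.03)
    "lambda3": 5.0,    # NLR term (range 0.2-10)
    "lambda4": 0.035,  # weighted sparsity on S (range 0.03-0.04)
    "mu": 15000,       # parameter mu (range 10,000-20,000)
    "L": 35,           # parameter L (fixed at 35 in Table 5)
}
```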
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
