Article

Correntropy-Induced Discriminative Nonnegative Sparse Coding for Robust Palmprint Recognition

1. School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, MOE Key Lab for Intelligent Networks and Network Security, Xi’an Jiaotong University, Xi’an 710049, China
2. Sichuan Gas Turbine Research Institute of AVIC, No. 6 Xinjun Road, Xindu District, Chengdu 610500, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(15), 4250; https://doi.org/10.3390/s20154250
Submission received: 18 June 2020 / Revised: 27 July 2020 / Accepted: 27 July 2020 / Published: 30 July 2020

Abstract: Palmprint recognition has been widely studied for security applications. However, there is a lack of in-depth investigation into robust palmprint recognition. The intuitive interpretability of regression analysis for robustness design inspires us to propose a correntropy-induced discriminative nonnegative sparse coding method for robust palmprint recognition. Specifically, we combine the correntropy metric and the l1-norm to present a powerful error estimator that gains flexibility and robustness to various contaminations by cooperatively detecting and correcting errors. Furthermore, we equip the error estimator with a tailored discriminative nonnegative sparse regularizer to extract significant nonnegative features. We derive an analytical optimization approach for this unified scheme and develop a novel efficient method to address the challenging nonnegative constraint. Finally, the proposed coding method is extended to robust multispectral palmprint recognition. Namely, we develop a constrained particle swarm optimizer to search for the feasible parameters that fuse the extracted robust features of different spectra. Extensive experimental results on both contactless and contact-based multispectral palmprint databases verify the flexibility and robustness of our methods.

1. Introduction

Biometrics, such as face, fingerprint, and iris images, have been exhaustively investigated for identity verification [1]. Compared with face, fingerprint, and iris images, palmprints offer a lower risk of forgery, richer texture, and a more comfortable acquisition mode, and have therefore gradually drawn significant attention [2]. Palmprint recognition methods can be roughly divided into categories [3] such as texture modeling-based [4,5,6,7,8,9], subspace learning-based [10,11,12,13], and local descriptor-based [14,15,16,17,18]. These three categories of methods attempt to extract critical features by ideally defined transformations, principal directions, or descriptors. However, on the one hand, their feature extraction approaches rely on fine prior knowledge of texture location and do not apply to diverse scenarios. On the other hand, some feeble but valuable wrinkles are abandoned. Moreover, apart from a small amount of work that only considers palmprint image degeneration caused by rotation and illumination variation [15,19], most methods neglect robust palmprint recognition in the presence of the occlusion and corruption that may arise in real-world applications.

1.1. Research Actuality

Recent decades have witnessed fruitful findings in robust recognition of other biometrics, among which regression analysis has attracted the most attention for its intuitive interpretability in robustness design [20]. Compared with the mainstream palmprint recognition methods, the regression-based methods extract features without relying on prior knowledge of texture location, and all valuable pixels are used in their vector-wise operations. We can therefore draw inspiration from the regression-based methods to realize robust palmprint recognition.
The linear regression classifier (LRC) may be one of the foremost methods in regression-based biometric recognition; it seeks suitable representation coefficients of a query sample and classifies the sample by examining which class leads to the minimal reconstruction residual [21]. With the l1-norm regularization, the sparse representation classifier (SRC) showed impressive performance on biometric recognition [22]. Zhang et al. claimed that it was the collaboration mechanism of the l1-norm that made SRC effective and replaced the l1-norm with the l2-norm to put forward a collaborative representation classifier (CRC) [23]. Huang et al. introduced the l2,1-norm to achieve both flat and structured sparse coding [24]. Moreover, Xu et al. created a novel regularization to propose a discriminative SRC (DSRC) [25]. The regularization-based methods utilize the l2-norm or l1-norm to measure the representation errors under the assumption that the errors follow a Gaussian or Laplacian distribution [20]. Such a simplified treatment is capable of handling some simple corruptions, but it can be unreasonable when facing more complicated contaminations such as dense corruption and gross occlusion.
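To make the residual-based decision rule above concrete, the following is a minimal sketch (not taken from the paper) of an LRC-style classifier: each class keeps its own column-stacked training matrix, the query is regressed onto each class dictionary by least squares, and the class with the minimal reconstruction residual wins. The variable names and shapes are illustrative assumptions.

```python
import numpy as np

def lrc_classify(class_dicts, y):
    """class_dicts: list of (D, n_c) arrays, one per class; y: (D,) query vector.
    Returns the index of the class with the minimal reconstruction residual."""
    residuals = []
    for A_c in class_dicts:
        # Least-squares representation coefficients of y over the c-th class dictionary
        x_c, *_ = np.linalg.lstsq(A_c, y, rcond=None)
        residuals.append(np.linalg.norm(y - A_c @ x_c))
    return int(np.argmin(residuals))
```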
To alleviate the impact caused by contaminations, Wright et al. introduced an augmented dictionary into SRC to create a robust SRC (RSRC) [22]. By extracting the centroids and variations of the training samples, Deng et al. proposed a superposed SRC (SSRC) [26]. Although these ideas improved the representation ability of the dictionaries, they cannot overcome the drawback of the regularization-based methods, which leads to their limited robustness. To characterize the representation errors, Yang et al. [27,28] proposed robust sparse coding (RSC) and regularized robust coding (RRC), respectively. Drawing on ideas from information theory, He et al. measured the errors by correntropy-based sparse representation (CESR) [29]. These error detection-based methods yielded promising results for continuous occlusion, but they can easily be trapped by undetected errors when the occlusion is heavy [30]. The nuclear norm-based matrix regression (NMR) method models the low-rank structure of the representation errors [30,31]. However, low-rank modeling is unrealistic in practice when samples are subject to dispersed corruption. Recently, the half-quadratic (HQ) method and the Laplacian-uniform mixture-driven iterative robust coding (LUMIRC) method were proposed for error detection and correction [32,33]. However, both of them neglect the fact that the robustness of the regression-based methods relies not only on the error estimator but also on the sparsity regularizer.
All the work analyzed above shares a common intention of getting rid of the flawed entries in the contaminated sample and obtaining promising recognition performance with the partial pure entries [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]. However, when the features of different classes are similar, partial information is insufficient to correctly distinguish one class from the others. Fortunately, multimodal biometrics acquired from multiple views can provide more useful features to address this problem. Take multispectral palmprint images as an example: samples acquired under different spectra can provide more information against the pixel loss caused by contaminations [35]. Up to now, many efforts have been made toward multimodal biometric recognition by exploiting summation, wavelet transform, and competitive coding to fuse the features of different modes [36,37,38]. However, insightful explanations of why these strategies are effective were missing, and the potential contaminations were not considered.

1.2. Motivations and Contributions

In view of the merits and demerits of all the aforementioned studies, both the regularization-based and the error detection-based work can only handle a specific contamination case, i.e., corruption or occlusion. We aim to obtain a flexible robust scheme against the various contaminations encountered in real-world applications.
Correntropy has been demonstrated to be particularly robust to non-Gaussian noise and large outliers and has been successfully applied to feature selection and signal processing [39,40]. Compared with the methods in [28,31,32,33] that detect errors in a heuristic way, correntropy provides a realistic metric approach that is theoretically guaranteed to have desirable measure properties and an approximate solution by information theory and HQ optimization theory [40].
As demonstrated in [22], the sparsest representation prefers to express a query sample with its homologous samples. If the representation coefficients are not sparse enough, elements corresponding to samples inhomogeneous with the query sample will emerge. The coding errors will then contain the differences among diverse classes and can no longer reflect the real contamination, which greatly degrades the error estimator. Since exploring the discriminability among the training samples encourages the sparsity of the representation coefficients [25], we argue that discriminative sparse coding is conducive to precise error estimation (see the verifications in Section 4).
In addition, the conventional sparse representation expresses the query sample with a combination of the dictionary samples, which involves both additive and subtractive operations. In the sparse coefficients, the emerging negative elements are not only trivial and meaningless, but can also lead the extracted features to ‘cancel each other out’. This is contrary to the intuitive notion of combining samples into a whole and to the intention of extracting significant intra-class features for reliable classification [41]. Other arguments for nonnegative coding arise from biological modeling and hyperspectral image decomposition, where the sparse representation coefficients are required to be nonnegative [42,43].
Inspired by the above analyses, a cooperative error estimator (CEE) composed of a correntropy-induced error detector and a sparse error corrector is designed. Moreover, we combine CEE with a tailored discriminative nonnegative sparse regularizer (DNSR) to propose a joint scheme, named correntropy-induced discriminative nonnegative sparse coding (CDNSC), to cope with corruptions, occlusions, and the mixture of them. We also explore a feasible feature fusion strategy to extend CDNSC to robust multispectral palmprint recognition. Figure 1 illustrates the core idea of CDNSC.
Given a query sample with mixed-contaminations, the correntropy metric detects the errors via a weighted image, while the l1-norm corrects the undetected ones. Meanwhile, DNSR produces discriminative nonnegative sparse coding to stimulate CEE to precisely estimate errors. Thus, we obtain significant features (corresponding to the red line in Figure 1) of the query sample. The extensive experimental results in Section 5 show that our algorithm outperforms all the selected state-of-the-art methods in all challenging cases, where the variation of illumination and posture, corruptions, and two types of occlusion are all considered. Our contributions are summarized as follows:
  • The correntropy metric and l1-norm are combined to compose an error estimator for cooperative error detection and correction. We further equip the estimator with a discriminative nonnegative sparse regularizer to propose CDNSC to address various contaminations, like dense corruption, gross occlusion, and the mixture of them.
  • To obtain the analytical solution of the unified scheme, we propose an efficient method to address the nonnegative constraint, namely, converting it into a nontrivial equality constraint. Then, with some self-developed skills, the new nondifferentiable equality constraint problem is expressed with a continuous formulation. Thus, combined with half-quadratic optimization, a reweighted alternating direction method of multipliers (ADMM) can be derived to obtain the closed-form solution of the reformulated problem.
  • The proposed CDNSC is extended for robust multispectral palmprint recognition. We develop a constrained particle swarm optimizer to search for the feasible parameters to fuse the extracted robust features of different spectrums. This provides a new idea for extending the single-mode biometric recognition methods to multimodal biometric recognition.
The remainder of this paper is organized as follows: Section 2 reviews research on coding regularization, nonnegative sparse representation, and error estimation. Section 3 introduces CEE, DNSR, the optimization of CDNSC, and its extension to multispectral palmprint recognition. Section 4 analyzes the effectiveness of CDNSC. Section 5 carries out experimental verifications. Section 6 concludes this paper.

2. Related Work

In the following content, we use bold symbols to signify matrix or vector variables and normal symbols to signify their elements. Given a dictionary $\mathbf{A} \in \mathbb{R}^{D \times L}$ containing $L$ vectorized $D$-dimensional training samples of diverse classes, the regression-based methods explore appropriate coefficients $\mathbf{x} \in \mathbb{R}^{L}$ to facilitate the subsequent classification by representing a vectorized query sample $\mathbf{y} \in \mathbb{R}^{D}$ with a linear reconstruction $\mathbf{A}\mathbf{x}$.

2.1. Coding Regularization

SRC obtains a sparse coding $\mathbf{x}$ over dictionary $\mathbf{A}$ by employing the $l_0$-norm. The $l_0$ minimization is NP-hard and is equivalent to the $l_1$-regularized minimization as long as $\mathbf{x}$ is sparse enough [44]. To deal with contaminations, the constraint $\mathbf{y} = \mathbf{A}\mathbf{x}$ is relaxed to:

$$\min \|\mathbf{x}\|_1 \quad \mathrm{s.t.} \quad \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 \le e \qquad (1)$$

where $\|\mathbf{x}\|_1 = \sum_{i=1}^{L}|x_i|$ is the $l_1$-norm, $x_i$ denotes the $i$-th element of variable $\mathbf{x}$, and $e$ denotes the coding errors. Problem (1) is the classical Lasso [45], which can be solved by leveraging the least angle regression (LAR) [46]. To better deal with contaminations, Wright et al. further introduced an augmented dictionary into SRC to propose RSRC [22]:

$$\min \|\mathbf{x}\|_1 \quad \mathrm{s.t.} \quad \mathbf{y} = \bar{\mathbf{A}}\mathbf{x} \qquad (2)$$

where $\bar{\mathbf{A}} = [\mathbf{A}, \mathbf{I}]$, and $\mathbf{I}$ is an identity matrix to fit the corruption. With a novel regularizer, DSRC presented an efficient discriminative sparse coding method [25]:

$$\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \gamma \sum_{i=1}^{L}\sum_{j=1}^{L}\|\mathbf{A}_{:i}x_i + \mathbf{A}_{:j}x_j\|_2^2 \qquad (3)$$

where $\mathbf{A}_{:i}$ is the $i$-th column of dictionary $\mathbf{A}$, and $\gamma$ is a tunable parameter.
The regularizers in Equations (1)–(3) help to defend robustness by selectively extracting sparse features. However, when complicated corruptions occur, the $l_2$-norm is no longer a proper measure of the coding errors.
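For reference, problem (1) in its Lagrangian (Lasso) form can be solved with a simple iterative soft-thresholding (ISTA) loop; the sketch below is a generic illustration, not the LAR solver used in [46], and the step size and iteration count are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise soft thresholding operator
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, y, gamma=0.1, n_iter=500):
    """Minimize 0.5*||y - A x||_2^2 + gamma*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)         # gradient of the quadratic fidelity term
        x = soft_threshold(x - grad / L, gamma / L)
    return x
```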

2.2. Nonnegative Sparse Representation

An essential issue of SRC is to explore an interpretable nonnegative sparse coding $\mathbf{x}$, with which a query sample is reconstructed by addition only [43]. Nonnegative matrix factorization (NMF) is an important technique for finding such coefficients. Given dictionary $\mathbf{A}$, NMF aims to find two nonnegative matrices $\mathbf{U} = [u_{ik}] \in \mathbb{R}^{D \times R}$ and $\mathbf{V} = [v_{jk}] \in \mathbb{R}^{L \times R}$ such that:

$$\min_{\mathbf{U},\mathbf{V}} \|\mathbf{A} - \mathbf{U}\mathbf{V}^T\|_2^2 = \min_{u_{ik},v_{jk}} \sum_{i=1}^{D}\sum_{j=1}^{L}\Big(a_{ij} - \sum_{k=1}^{R}u_{ik}v_{jk}\Big)^2 \qquad (4)$$

where $a_{ij}$ denotes the element at the $i$-th row and the $j$-th column of dictionary $\mathbf{A}$, $R$ denotes the number of chosen principal components, and $\mathbf{V}_{j:}$ denotes the $j$-th row of matrix $\mathbf{V}$. The details about NMF can be found in [47]. Owing to the admirable nonnegativity property of NMF, Zhang et al. and Cai et al. proposed a topology-preserving nonnegative matrix factorization (TPNMF) method and a graph regularized nonnegative matrix factorization (GNMF) method, respectively [47,48].
Since the solution of NMF is not unique, Liu et al. and Zhang et al. proposed its surrogate, called the nonnegative garrote (NNG), for nonnegative sparse representation [49,50]:

$$\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \gamma\sum_{i=1}^{L}x_i \quad \mathrm{s.t.} \quad x_i \ge 0 \qquad (5)$$

where Equation (5) can be solved by referring to [51]. Since NNG replaces the $l_1$-norm with a summation term, it relaxes the sparsity constraint on the coding $\mathbf{x}$.
Ji et al. proposed a genuine nonnegative sparse coding method by directly imposing a nonnegative constraint on the sparse representation [52]. However, they adopted numerical methods to solve it, and such a compromised solving approach is inefficient and imprecise.

2.3. Error Estimation

To better measure the coding errors, some novel fidelity terms have been proposed to replace the $l_1$- or $l_2$-norm. CESR measures the similarity between the query sample $\mathbf{y}$ and its reconstruction $\mathbf{A}\mathbf{x}$ by utilizing the correntropy-induced metric [29]:

$$\max_{\mathbf{x}} \sum_{i=1}^{D}I(y_i - \mathbf{A}_{i:}\mathbf{x}) - \gamma\sum_{i=1}^{L}x_i \quad \mathrm{s.t.} \quad x_i \ge 0 \qquad (6)$$

where $I(e_i) = \exp(-e_i^2/2\sigma^2)$ is a metric function, and $\sigma$ is a kernel parameter. Meanwhile, $e_i = y_i - \mathbf{A}_{i:}\mathbf{x}$ is the $i$-th element of the error $\mathbf{e}$, and $\mathbf{A}_{i:}$ and $y_i$ denote the $i$-th row of dictionary $\mathbf{A}$ and the $i$-th element of the query sample $\mathbf{y}$, respectively.
RRC assumes the elements in the error $\mathbf{e}$ and the coding $\mathbf{x}$ are i.i.d. with probability density functions (PDF) $m(e_i)$ and $n(x_i)$, respectively. Let $\rho(e_i) = -\ln m(e_i)$ and $\theta(x_i) = -\ln n(x_i)$. The local quadratic approximation of $\rho(e_i)$ produces a weight function $w_i^t = \dot{\rho}(e_i^t)/e_i^t$ to minimize $\sum_{i=1}^{D}\rho(e_i) + \sum_{i=1}^{L}\theta(x_i)$ in an iteratively reweighted way, where $\dot{\rho}$ is the first-order derivative of function $\rho$. Empirically, the logistic function was selected as the weight function [28]:

$$w_i = \exp(-\mu e_i^2 + \mu\delta)/\big(1 + \exp(-\mu e_i^2 + \mu\delta)\big) \qquad (7)$$

where the parameters $\mu$ and $\delta$ control the decreasing rate and the demarcation point, respectively. Assuming the coding $\mathbf{x}$ follows a Gaussian distribution [28], the minimization problem can finally be reduced to:

$$\min_{\mathbf{x}} \|\mathbf{W}(\mathbf{y} - \mathbf{A}\mathbf{x})\|_2^2 + \gamma\|\mathbf{x}\|_1 \qquad (8)$$

where $\mathbf{W} = \mathrm{diag}(\mathbf{w})$ is an error detector, and the elements of vector $\mathbf{w}$ can be obtained according to Equation (7).
LUMIRC employs a Laplacian-uniform mixture function $m(e_i) = \alpha(\exp(-|e_i|/b) + c)$ to fit the empirical errors [31]. The corresponding weight function is obtained by:

$$w_i = \exp(-|e_i|/b)/\big(\exp(-|e_i|/b) + c\big) \qquad (9)$$

where $b$ controls the decreasing rate, and $c$ is a constant. Thus, LUMIRC can be reformulated as:

$$\min_{\mathbf{x}} \|\mathbf{W}\mathbf{e}\|_1 + \gamma\|\mathbf{x}\|_1 \quad \mathrm{s.t.} \quad \mathbf{e} = \mathbf{y} - \mathbf{A}\mathbf{x} \qquad (10)$$

where $\mathbf{W} = \mathrm{diag}(\mathbf{w})$, and the elements of vector $\mathbf{w}$ can be obtained according to Equation (9).
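A compact sketch of the two heuristic weight functions reviewed above, the logistic weight of RRC in (7) and the Laplacian-uniform weight of LUMIRC in (9), is given below. The parameter values are placeholders for illustration only.

```python
import numpy as np

def rrc_weight(e, mu=1.0, delta=1.0):
    # Logistic weight of Eq. (7): decreases as the squared error grows
    z = mu * delta - mu * e ** 2
    return np.exp(z) / (1.0 + np.exp(z))

def lumirc_weight(e, b=1.0, c=0.1):
    # Laplacian-uniform weight of Eq. (9): decreases with the absolute error
    t = np.exp(-np.abs(e) / b)
    return t / (t + c)
```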
It can be found that both RRC and LUMIRC chose their weight functions in a heuristic or empirical way, so their underlying ideas deserve in-depth analysis. The correntropy metric, which shows admirable properties in measuring coding errors, has been proved to be robust to non-Gaussian noise and large outliers [40]. It also has the flexibility of adaptively adjusting fewer parameters compared with RRC and LUMIRC (see Equations (7), (9), and (21)). Owing to these advantages, Lu et al. and Zhou et al. utilized the correntropy metric for robust subspace clustering and feature selection [35,48]. Wang et al. introduced it into the matching pursuit algorithm to propose a correntropy matching pursuit (CMP) method [34]. Unlike these works, which achieve their goals with a simple introduction of the correntropy metric, we equip the correntropy metric with a tailored regularizer to pursue stronger robustness.

3. Correntropy-Induced Discriminative Nonnegative Sparse Coding

In the coding process, CEE removes the contaminated pixels in the query sample, while DNSR extracts significant correct features for the subsequent classification. Accordingly, the framework of CDNSC is defined as follows:
$$\min_{\mathbf{x}} \sum_{i=1}^{D}\nu(e_i) + \sum_{i=1}^{L}\upsilon(x_i) \qquad (11)$$
where $\nu(e_i)$ refers to CEE, and $\upsilon(x_i)$ refers to DNSR. The specific formulation of CDNSC (Formula (27)) is obtained by substituting the formulated CEE (Formula (19)) and DNSR (Formula (26)), discussed in the subsequent subsections, into (11). For the specific implementation of CDNSC, one can refer to the operating steps listed in Algorithm 1, where the detailed calculations of all the involved variables are also given.
For this purpose, we introduce CDNSC from the following aspects: cooperative error estimator, discriminative nonnegative sparse regularizer, the optimization of CDNSC, and the extended CDNSC.

3.1. Cooperative Error Estimator

From the perspective of information learning [40], Liu et al. defined the correntropy between the query sample $\mathbf{y}$ and its reconstruction duplicate $\mathbf{y}'$ as:

$$V_\sigma(\mathbf{y}, \mathbf{y}') = \int I_\sigma(y - y')\, p_{yy'}(y, y')\, dy\, dy' \qquad (12)$$

where the joint PDF $p_{yy'}(y, y')$ between $\mathbf{y}$ and $\mathbf{y}'$ is unknown in practice, which leads to a reduced estimator for the correntropy:

$$\hat{V}_\sigma(\mathbf{y}, \mathbf{y}') = \frac{1}{D}\sum_{i=1}^{D}I_\sigma(y_i - y_i'). \qquad (13)$$

Based on (13), the correntropy was extended to a general similarity metric between two arbitrary variables $\mathbf{y}$ and $\mathbf{y}'$, which is called the correntropy-induced metric (CIM):

$$\mathrm{CIM}_\sigma(\mathbf{y}, \mathbf{y}') = \Big(I_\sigma(0) - \frac{1}{D}\sum_{i=1}^{D}I_\sigma(e_i)\Big)^{\frac{1}{2}} \qquad (14)$$

where $e_i$ is the $i$-th element of the variable $\mathbf{e}$, and $\mathbf{e} = \mathbf{y} - \mathbf{y}'$. Formula (14) has been verified to be a well-defined metric satisfying the properties of nonnegativity, symmetry, etc. [53].
Figure 2 shows a comparison among the absolute error metric, the mean squared error (MSE) metric, and the CIM. It is clear that the absolute error metric is a real expression of the errors, while the squared error metric expresses errors quadratically. As global metrics, both of them are sensitive to large errors. Interestingly, the CIM is close to the absolute error metric and the MSE metric when errors are small, and it tends to 1 when errors get larger. Note that large errors are usually caused by non-Gaussian corruption and continuous occlusion [54]. Hence, the CIM is robust to them.
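The behavior summarized in Figure 2 can be reproduced with a few lines; the snippet below (with an assumed kernel width) evaluates the absolute error, the squared error, and the per-element CIM of (14) over a range of errors.

```python
import numpy as np

e = np.linspace(-5.0, 5.0, 201)    # synthetic error values
sigma = 1.0                        # assumed kernel width
abs_err = np.abs(e)
sq_err = e ** 2
# Per-element CIM from (14) with I_sigma(0) = 1
cim = np.sqrt(1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2)))
# For small |e| the three metrics are close; as |e| grows, cim saturates near 1
# while abs_err and sq_err keep increasing.
```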
In the regression-based palmprint recognition procedure, we naturally hope that the representation of the query sample $\mathbf{y}$ is unaffected by the contaminations and that $\mathbf{y}$ is well reflected by the extracted features. Fortunately, the CIM allows us to find such a representation by:

$$\min_{\mathbf{x}} \mathrm{CIM}_\sigma(\mathbf{y}, \mathbf{A}\mathbf{x}) = \min_{\mathbf{x}} \frac{1}{D}\sum_{i=1}^{D}\big(1 - I_\sigma(e_i)\big) \quad \mathrm{s.t.} \quad \mathbf{y} - \mathbf{A}\mathbf{x} = \mathbf{e}. \qquad (15)$$
Although the gradient descent algorithm can be utilized to solve (15), we prefer to leverage the HQ method, as it is more effective and can provide an adaptive weight variable for error detection. To solve problem (15) well, Proposition 1 is introduced as follows (the proof of Proposition 1 is provided in Appendix A).
Proposition 1.
For (15), there exists a dual function $\psi$ such that:

$$1 - I_\sigma(e_i) = \inf_{w_i}\Big\{\frac{1}{2}w_i e_i^2 + \psi(w_i)\Big\}, \qquad (16)$$

and its minimum is reached at:

$$w_i = \frac{1}{\sigma^2}\exp(-e_i^2/2\sigma^2). \qquad (17)$$
Equation (17) indicates that the CIM can adaptively learn small weights to suppress the large errors and assign significant weights to the relatively pure pixels to manifest their importance. Compared with RRC and LUMIRC, it is easier to apply the CIM to various contaminations by adaptively adjusting the parameter $\sigma$:

$$\sigma^2 = \frac{1}{2D}\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2. \qquad (18)$$

Assuming the undetected contaminations are sparse, based on (15) and Proposition 1, CEE can be formulated as:

$$\sum_{i=1}^{D}\nu(e_i) = \|\mathbf{W}\mathbf{e}\|_1 \quad \mathrm{s.t.} \quad \mathbf{y} - \mathbf{A}\mathbf{x} = \mathbf{e} \qquad (19)$$

where $\mathbf{W} = \mathrm{diag}(\mathbf{w})$ is an error detector, and the elements of vector $\mathbf{w}$ can be obtained according to (17). Meanwhile, the $l_1$-norm acts as an error corrector.
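A minimal sketch of the correntropy-induced error detector follows: it computes the adaptive kernel width of (18) and the per-pixel weights of (17) from the current coding. The small constant added to the kernel width is an implementation assumption to avoid division by zero.

```python
import numpy as np

def correntropy_weights(y, A, x):
    """Return the weight vector w of Eq. (17) for the current coding x."""
    e = y - A @ x
    sigma2 = np.dot(e, e) / (2.0 * len(y)) + 1e-12    # Eq. (18), guarded against zero
    return np.exp(-e ** 2 / (2.0 * sigma2)) / sigma2  # Eq. (17); used as W = diag(w)
```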

3.2. Discriminative Nonnegative Sparse Regularizer

As an important part of DNSR, the discriminative constraint term is designed as:
$$\sum_{i=1}^{L}\upsilon_1(x_i) = \sum_{i=1}^{L}\sum_{j=1}^{L}(\mathbf{A}_{:i}x_i)^T\mathbf{W}(\mathbf{A}_{:j}x_j) \qquad (20)$$
where the superscript $T$ denotes the matrix transpose. Minimizing (20) means that the representations of the $i$-th and the $j$-th classes have the lowest correlation, which makes the representations of diverse classes discriminative. Thus, the method prefers to select the most relevant samples to represent the query sample, which encourages the coefficients to be intrinsically sparse. Note that the matrix $\mathbf{W}$ is obtained by (17), which suppresses the errors from affecting the discriminative coding. Hence, minimizing (20) encourages $\mathbf{x}$ to be robustly sparse.
In light of the drawbacks of NMF and NNG, we directly impose a nonnegative constraint on the sparse representation:
$$\sum_{i=1}^{L}\upsilon_2(x_i) = \sum_{i=1}^{L}|x_i| \quad \mathrm{s.t.} \quad x_i \ge 0. \qquad (21)$$
Different from [50], we aim to develop an efficient solving method to explore the analytical solution of (21). To the best of our knowledge, no existing method can be directly exploited. Fortunately, we can refer to the Lagrange multiplier theorem to convert the inequality constraint problem (ICP) (21) into an equality constraint problem (ECP). We first consider a general ICP:

$$\min_{t} f(t) \quad \mathrm{s.t.} \quad g(t) \ge 0 \qquad (22)$$

where $t \in \mathbb{R}^1$. Then, the corresponding ECP of (22) reads:

$$\min_{t} f(t) \quad \mathrm{s.t.} \quad h(t, z) = g(t) - z^2 = 0 \qquad (23)$$
where $z$ is an auxiliary variable describing the nonnegativity of the value of function $g(t)$. We prove that (23) has the same Karush-Kuhn-Tucker (KKT) conditions as (22), which promises that (23) is an equivalent transformation of (22) under the Lagrange multiplier theorem-based optimization method. Consequently, Lemma 1 is introduced as follows (the proof of Lemma 1 is provided in Appendix B).
Lemma 1.
Assuming $t$ is a local minimum of (22), and $f(t)$ and $g(t)$ are continuously differentiable, there exists a unique $\varphi$ for (23) such that:

$$\begin{cases}\nabla_t L(t, z, \varphi) = \nabla_t f(t) + \varphi\nabla_t h(t, z) = 0 \\ \varphi \le 0\end{cases} \qquad (24)$$

where $\nabla$ denotes the first-order differential operator.
Because (23) and (22) have the same KKT conditions (refer to Proposition 3.3.1 in [55] for the KKT conditions of (22)), we conclude that solving (23) is equivalent to solving (22) under the Lagrange multiplier method. Thus, (21) can be rewritten as:

$$\sum_{i=1}^{L}\bar{\upsilon}_2(x_i) = \sum_{i=1}^{L}|x_i| \quad \mathrm{s.t.} \quad x_i = z_i^2. \qquad (25)$$
Combining (20) and (25), DNSR can be formulated as:

$$\sum_{i=1}^{L}\upsilon(x_i) = \sum_{i=1}^{L}\sum_{j=1}^{L}(\mathbf{A}_{:i}x_i)^T\mathbf{W}(\mathbf{A}_{:j}x_j) + \sum_{i=1}^{L}|x_i| \quad \mathrm{s.t.} \quad x_i = z_i^2. \qquad (26)$$

3.3. Optimization of CDNSC

We obtain the unified CDNSC by substituting (19) and (26) into (11):

$$J(\mathbf{W}, \mathbf{e}, \mathbf{x}, \mathbf{z}) = \min_{\mathbf{W}, \mathbf{e}, \mathbf{x}, \mathbf{z}} \|\mathbf{W}\mathbf{e}\|_1 + \alpha\sum_{i=1}^{L}\sum_{j=1}^{L}(\mathbf{A}_{:i}x_i)^T\mathbf{W}(\mathbf{A}_{:j}x_j) + \beta\|\mathbf{x}\|_1 \quad \mathrm{s.t.} \quad \mathbf{y} - \mathbf{A}\mathbf{x} = \mathbf{e},\ \mathbf{x} = \mathbf{z}^2 \qquad (27)$$

where $\alpha$ and $\beta$ are two tunable parameters, and the vector $\mathbf{z}^2$ is composed of the elements $z_i^2$, $i = 1, \ldots, L$. Note that (27) can be rewritten as:

$$J(\tilde{\mathbf{e}}, \mathbf{u}, \mathbf{x}, \mathbf{z}) = \min_{\tilde{\mathbf{e}}, \mathbf{u}, \mathbf{x}, \mathbf{z}} \|\tilde{\mathbf{e}}\|_1 + \alpha\sum_{i=1}^{L}\sum_{j=1}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j) + \beta\|\mathbf{u}\|_1 \quad \mathrm{s.t.} \quad \tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x} = \tilde{\mathbf{e}},\ \mathbf{x} = \mathbf{z}^2,\ \mathbf{x} = \mathbf{u} \qquad (28)$$

where $\tilde{\mathbf{e}} = \mathbf{W}\mathbf{e}$, $\tilde{\mathbf{y}} = \mathbf{W}\mathbf{y}$, and $\tilde{\mathbf{A}} = \mathbf{W}\mathbf{A}$. Let $\boldsymbol{\phi}_1$, $\boldsymbol{\phi}_2$, and $\boldsymbol{\phi}_3$ be three vectors of Lagrange multipliers, and let $\rho$ be the penalty parameter; the augmented Lagrangian function of (28) reads:

$$\mathcal{L}(\tilde{\mathbf{e}}, \mathbf{u}, \mathbf{x}, \mathbf{z}, \boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \boldsymbol{\phi}_3, \rho) = \|\tilde{\mathbf{e}}\|_1 + \alpha\sum_{i=1}^{L}\sum_{j=1}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j) + \beta\|\mathbf{u}\|_1 + \boldsymbol{\phi}_1^T(\tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x} - \tilde{\mathbf{e}}) + \boldsymbol{\phi}_2^T(\mathbf{x} - \mathbf{z}^2) + \boldsymbol{\phi}_3^T(\mathbf{x} - \mathbf{u}) + \frac{\rho}{2}\big(\|\tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x} - \tilde{\mathbf{e}}\|_2^2 + \|\mathbf{x} - \mathbf{z}^2\|_2^2 + \|\mathbf{x} - \mathbf{u}\|_2^2\big). \qquad (29)$$
Before solving (29), the introduced auxiliary variable $\mathbf{z}$ should be eliminated. Letting $\partial\mathcal{L}(\tilde{\mathbf{e}}, \mathbf{u}, \mathbf{x}, \mathbf{z}, \boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \boldsymbol{\phi}_3, \rho)/\partial z_i = 0$, we have:

$$x_i - z_i^2 = \begin{cases}-\phi_{2,i}/\rho, & \rho x_i + \phi_{2,i} > 0 \\ x_i, & \rho x_i + \phi_{2,i} \le 0\end{cases} \qquad (30)$$

where $\phi_{2,i}$ is the $i$-th element of the Lagrange multiplier $\boldsymbol{\phi}_2$. Note that the selection function (30) renders (29) nondifferentiable. To obtain the analytical solution of (29), we rewrite (30) as:

$$\phi_{2,i} + \rho(x_i - z_i^2) = b_i(\rho x_i + \phi_{2,i}) \qquad (31)$$

where the element $b_i$ is determined by:

$$b_i = \begin{cases}0, & \rho x_i + \phi_{2,i} > 0 \\ 1, & \rho x_i + \phi_{2,i} \le 0.\end{cases} \qquad (32)$$

Accordingly, problem (29) can be further rewritten as:

$$\bar{\mathcal{L}}(\tilde{\mathbf{e}}, \mathbf{B}, \mathbf{u}, \mathbf{x}, \boldsymbol{\phi}_1, \boldsymbol{\phi}_2, \boldsymbol{\phi}_3, \rho) = \|\tilde{\mathbf{e}}\|_1 + \alpha\sum_{i=1}^{L}\sum_{j=1}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j) + \beta\|\mathbf{u}\|_1 + \frac{\rho}{2}\Big\|\tilde{\mathbf{e}} - \Big(\tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x} + \frac{\boldsymbol{\phi}_1}{\rho}\Big)\Big\|_2^2 + \frac{1}{2\rho}\|\mathbf{B}(\rho\mathbf{x} + \boldsymbol{\phi}_2)\|_2^2 + \frac{\rho}{2}\Big\|\mathbf{u} - \Big(\mathbf{x} + \frac{\boldsymbol{\phi}_3}{\rho}\Big)\Big\|_2^2 \qquad (33)$$

where $\mathbf{B} = \mathrm{diag}(\mathbf{b})$, and the element $b_i$ of vector $\mathbf{b}$ is determined by (32).
Once the matrix $\mathbf{W}$ is updated by (17) and fixed, the variables $\tilde{\mathbf{y}}$ and $\tilde{\mathbf{A}}$ are also fixed. In the $l$-th inner iteration, ADMM [56] updates each undetermined variable in (33) as follows:
$$\tilde{\mathbf{e}}^{l+1} = \arg\min_{\tilde{\mathbf{e}}} \bar{\mathcal{L}}(\tilde{\mathbf{e}}, \mathbf{x}^l, \boldsymbol{\phi}_1^l, \rho) \qquad (34)$$

$$\mathbf{B}^{l+1} = \arg\min_{\mathbf{B}} \bar{\mathcal{L}}(\mathbf{B}, \mathbf{x}^l, \boldsymbol{\phi}_2^l, \rho) \qquad (35)$$

$$\mathbf{u}^{l+1} = \arg\min_{\mathbf{u}} \bar{\mathcal{L}}(\mathbf{u}, \mathbf{x}^l, \boldsymbol{\phi}_3^l, \rho) \qquad (36)$$

$$\mathbf{x}^{l+1} = \arg\min_{\mathbf{x}} \bar{\mathcal{L}}(\tilde{\mathbf{e}}^{l+1}, \mathbf{B}^{l+1}, \mathbf{u}^{l+1}, \mathbf{x}, \boldsymbol{\phi}_1^l, \boldsymbol{\phi}_2^l, \boldsymbol{\phi}_3^l, \rho) \qquad (37)$$

$$\boldsymbol{\phi}_1^{l+1} = \boldsymbol{\phi}_1^l + \rho^l(\tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x}^{l+1} - \tilde{\mathbf{e}}^{l+1}) \qquad (38)$$

$$\boldsymbol{\phi}_2^{l+1} = \mathbf{B}^{l+1}(\rho^l\mathbf{x}^{l+1} + \boldsymbol{\phi}_2^l) \qquad (39)$$

$$\boldsymbol{\phi}_3^{l+1} = \boldsymbol{\phi}_3^l + \rho^l(\mathbf{x}^{l+1} - \mathbf{u}^{l+1}) \qquad (40)$$

$$\rho^{l+1} = \min(\mu\rho^l, \rho_{\max}) \qquad (41)$$

where the parameter $\mu > 1$, and (39) is obtained by substituting (31) into the formula $\boldsymbol{\phi}_2^{l+1} = \boldsymbol{\phi}_2^l + \rho^l(\mathbf{x}^{l+1} - (\mathbf{z}^{l+1})^2)$. Note that (39) reveals that the Lagrange multiplier $\boldsymbol{\phi}_2^{l+1} \le \mathbf{0}$ always holds, which is consistent with Lemma 1. For (34), we have:
$$\tilde{\mathbf{e}}^{l+1} = \arg\min_{\tilde{\mathbf{e}}} \|\tilde{\mathbf{e}}\|_1 + \frac{\rho^l}{2}\|\tilde{\mathbf{e}} - \mathbf{d}_1^l\|_2^2 \qquad (42)$$

where the variable $\mathbf{d}_1^l = \tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x}^l + \boldsymbol{\phi}_1^l/\rho^l$. The subproblem (42) can be explicitly solved by the soft thresholding function:

$$\tilde{\mathbf{e}}^{l+1} = \mathrm{sign}(\mathbf{d}_1^l)\max\{|\mathbf{d}_1^l| - 1/\rho^l, 0\}. \qquad (43)$$

The variable $\mathbf{B}^{l+1}$ in the subproblem (35) is updated by formula (32), and the subproblem (36) can be expressed as:

$$\mathbf{u}^{l+1} = \arg\min_{\mathbf{u}} \beta\|\mathbf{u}\|_1 + \frac{\rho^l}{2}\|\mathbf{u} - \mathbf{d}_2^l\|_2^2 \qquad (44)$$

where the variable $\mathbf{d}_2^l = \mathbf{x}^l + \boldsymbol{\phi}_3^l/\rho^l$. Similar to (42), (44) is solved by:

$$\mathbf{u}^{l+1} = \mathrm{sign}(\mathbf{d}_2^l)\max\{|\mathbf{d}_2^l| - \beta/\rho^l, 0\}. \qquad (45)$$
For the subproblem (37), we have:

$$\mathbf{x}^{l+1} = \arg\min_{\mathbf{x}} \alpha\sum_{i=1}^{L}\sum_{j=1}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j) + \frac{\rho^l}{2}\Big\|\tilde{\mathbf{e}}^{l+1} - \Big(\tilde{\mathbf{y}} - \tilde{\mathbf{A}}\mathbf{x} + \frac{\boldsymbol{\phi}_1^l}{\rho^l}\Big)\Big\|_2^2 + \frac{1}{2\rho^l}\|\mathbf{B}^{l+1}(\rho^l\mathbf{x} + \boldsymbol{\phi}_2^l)\|_2^2 + \frac{\rho^l}{2}\Big\|\mathbf{u}^{l+1} - \Big(\mathbf{x} + \frac{\boldsymbol{\phi}_3^l}{\rho^l}\Big)\Big\|_2^2. \qquad (46)$$

Before solving problem (46), we specifically consider the derivative of the discriminative term with respect to the variable $x_n$:

$$\frac{\partial}{\partial x_n}\Big[\sum_{i=1}^{L}\sum_{j=1}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j)\Big] = \frac{\partial}{\partial x_n}\Big[\sum_{i=1, i\ne n}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:n}x_n) + \sum_{j=1, j\ne n}^{L}(\tilde{\mathbf{A}}_{:n}x_n)^T(\tilde{\mathbf{A}}_{:j}x_j) + \sum_{i=1, i\ne n}^{L}\sum_{j=1, j\ne n}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j) + (\tilde{\mathbf{A}}_{:n}x_n)^T(\tilde{\mathbf{A}}_{:n}x_n)\Big] = 2\Big[\sum_{i=1, i\ne n}^{L}\tilde{\mathbf{A}}_{:n}^T(\tilde{\mathbf{A}}_{:i}x_i) + \tilde{\mathbf{A}}_{:n}^T(\tilde{\mathbf{A}}_{:n}x_n)\Big] = 2\tilde{\mathbf{A}}_{:n}^T\tilde{\mathbf{A}}\mathbf{x}. \qquad (47)$$

Accordingly, we have $\partial\big(\sum_{i=1}^{L}\sum_{j=1}^{L}(\tilde{\mathbf{A}}_{:i}x_i)^T(\tilde{\mathbf{A}}_{:j}x_j)\big)/\partial\mathbf{x} = 2\tilde{\mathbf{A}}^T\tilde{\mathbf{A}}\mathbf{x}$. Hence, a closed-form solution of (46) is obtained:

$$\mathbf{x}^{l+1} = \big(2\alpha\tilde{\mathbf{A}}^T\tilde{\mathbf{A}} + \rho^l\tilde{\mathbf{A}}^T\tilde{\mathbf{A}} + \rho^l\mathbf{B}^{l+1} + \rho^l\mathbf{I}\big)^{-1}\big(\rho^l\mathbf{u}^{l+1} - \boldsymbol{\phi}_3^l - \mathbf{B}^{l+1}\boldsymbol{\phi}_2^l - \rho^l\tilde{\mathbf{A}}^T(\tilde{\mathbf{e}}^{l+1} - \tilde{\mathbf{y}} - \boldsymbol{\phi}_1^l/\rho^l)\big). \qquad (48)$$
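The closed-form update (48) translates directly into a linear solve; the sketch below assumes B is stored as a 0/1 vector from (32) and uses illustrative variable names rather than the authors' implementation.

```python
import numpy as np

def update_x(A_t, y_t, e_t, u, b, phi1, phi2, phi3, alpha, rho):
    """Closed-form x-update of Eq. (48). b is the 0/1 diagonal of B from (32)."""
    L = A_t.shape[1]
    G = A_t.T @ A_t
    lhs = (2.0 * alpha + rho) * G + rho * np.diag(b) + rho * np.eye(L)
    rhs = rho * u - phi3 - b * phi2 - rho * A_t.T @ (e_t - y_t - phi1 / rho)
    return np.linalg.solve(lhs, rhs)
```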
The overall optimization procedure of CDNSC is summarized in Algorithm 1. A termination criterion is enforced to verify whether Algorithm 1 converges:

$$\|\mathbf{x}^{l+1} - \mathbf{x}^l\|_2 / \|\mathbf{x}^l\|_2 < \varepsilon \qquad (49)$$

where $\varepsilon > 0$ is a small stopping value.
Algorithm 1. Optimization of CDNSC via ADMM
Input: $\mathbf{A}$, $\mathbf{y}$, $\alpha$, $\beta$, $\mu$, $\rho_{\max}$, $k_{\max}$, and $\varepsilon$.
Output: The optimal $\tilde{\mathbf{y}}$, $\tilde{\mathbf{A}}$, $\tilde{\mathbf{e}}$, and $\mathbf{x}$.
Initialization: $k = 0$, $\mathbf{x}^k = \mathbf{1}/L$.
Repeat
1: $k = k + 1$;
2: Estimate the weight matrix $\mathbf{W}^k$ by (17) and (18);
 Update: $\tilde{\mathbf{A}} = \mathbf{W}^k\mathbf{A}$ and $\tilde{\mathbf{y}} = \mathbf{W}^k\mathbf{y}$.
 Initialization: $l = 0$, $\mathbf{x}^l = \mathbf{x}^k$, $\boldsymbol{\phi}_1^l$, $\boldsymbol{\phi}_2^l$, $\boldsymbol{\phi}_3^l$, and $\rho^l$.
Repeat
 3: $l = l + 1$;
 4: Estimate $\tilde{\mathbf{e}}^l$ by (43);
 5: Update $\mathbf{B}^l$ by (32);
 6: Estimate $\mathbf{u}^l$ by (45);
 7: Estimate $\mathbf{x}^l$ by (48);
 8: Update $\boldsymbol{\phi}_1^l$, $\boldsymbol{\phi}_2^l$, $\boldsymbol{\phi}_3^l$, and $\rho^l$ by (38), (39), (40), and (41);
 9: Check the termination criterion by (49);
Until convergence
10: $\mathbf{x}^k = \mathbf{x}^l$
Until $k > k_{\max}$
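For orientation, one inner pass of Algorithm 1 (steps 3 to 8) can be sketched as below; it reuses the hypothetical soft_threshold and update_x helpers from the earlier sketches, and the penalty update (41) is left to the caller. This is a rough illustration, not the reference implementation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inner_admm_step(A_t, y_t, x, u, phi1, phi2, phi3, rho, alpha, beta):
    """One inner iteration over (43), (32), (45), (48), and (38)-(40)."""
    e_t = soft_threshold(y_t - A_t @ x + phi1 / rho, 1.0 / rho)      # (43)
    b = (rho * x + phi2 <= 0).astype(float)                          # (32)
    u = soft_threshold(x + phi3 / rho, beta / rho)                   # (45)
    x = update_x(A_t, y_t, e_t, u, b, phi1, phi2, phi3, alpha, rho)  # (48)
    phi1 = phi1 + rho * (y_t - A_t @ x - e_t)                        # (38)
    phi2 = b * (rho * x + phi2)                                      # (39)
    phi3 = phi3 + rho * (x - u)                                      # (40)
    return e_t, b, u, x, phi1, phi2, phi3
```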
We classify $\mathbf{y}$ by finding the class that yields the least reconstruction error among all classes. Therefore, the CDNSC-driven classifier is formulated as follows:

$$\mathrm{ID} = \arg\min_{c}\|\tilde{\mathbf{y}}^\ast - \tilde{\mathbf{A}}^\ast\delta_c(\mathbf{x}^\ast) - \tilde{\mathbf{e}}^\ast\|_2 \qquad (50)$$

where the superscript $\ast$ indicates the converged values, and the function $\delta_c$ selects the entries affiliated with the $c$-th class.
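The decision rule (50) amounts to a class-wise residual comparison; a minimal sketch follows, where class_index (mapping dictionary columns to class labels) and the converged inputs are assumptions about how the outputs of Algorithm 1 are stored.

```python
import numpy as np

def cdnsc_classify(A_t, y_t, e_t, x, class_index):
    """Assign y to the class with the smallest residual in Eq. (50)."""
    classes = np.unique(class_index)
    residuals = []
    for c in classes:
        delta_x = np.where(class_index == c, x, 0.0)   # delta_c(x): keep c-th class entries
        residuals.append(np.linalg.norm(y_t - A_t @ delta_x - e_t))
    return classes[int(np.argmin(residuals))]
```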

3.4. Extended CDNSC

Before presenting the extended CDNSC (E-CDNSC), we first establish the objective function for learning the feasible parameters that fuse the features of different spectra. Let $\lambda_s$ be the fusion parameter corresponding to the features of the $s$-th spectrum; the E-CDNSC-driven classifier is given by:

$$\mathrm{ID}(\boldsymbol{\lambda}) = \arg\min_{c}\sum_{s=1}^{S}\lambda_s\|\tilde{\mathbf{y}}_s - \tilde{\mathbf{A}}_s\delta_c(\mathbf{x}_s) - \tilde{\mathbf{e}}_s\|_2 \qquad (51)$$

where the variables $\tilde{\mathbf{y}}_s$, $\tilde{\mathbf{A}}_s$, and $\tilde{\mathbf{e}}_s$ are the outputs of Algorithm 1. Since the parameter $\lambda_s$ should be nonnegative, and its summation should be equal to 1, we impose two constraints to define the feasible region of the vector $\boldsymbol{\lambda}$ and take the recognition rate as the objective to establish the objective function regarding $\boldsymbol{\lambda}$:

$$\max_{\boldsymbol{\lambda}} R(\boldsymbol{\lambda}) = \max_{\boldsymbol{\lambda}} \frac{\sum_{i=1}^{N}\Theta_i(\mathrm{ID}(\boldsymbol{\lambda}))}{N}\times 100 \quad \mathrm{s.t.} \quad \lambda_s \ge 0,\ \sum_{s=1}^{S}\lambda_s = 1 \qquad (52)$$

where the variable $N$ denotes the number of test samples, and the function $\Theta(\mathrm{ID}(\boldsymbol{\lambda}))$ counts the correctly recognized samples.
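Analogously, the fused rule (51) weights the class-wise residuals of the S spectra with lambda; the sketch below assumes each spectrum provides the converged outputs of Algorithm 1 as a tuple and reuses the hypothetical class_index convention from above.

```python
import numpy as np

def ecdnsc_classify(per_spectrum, lam, class_index):
    """per_spectrum: list of (A_t, y_t, e_t, x) tuples, one per spectrum; lam: fusion weights."""
    classes = np.unique(class_index)
    fused = np.zeros(len(classes))
    for (A_t, y_t, e_t, x), l_s in zip(per_spectrum, lam):
        for k, c in enumerate(classes):
            delta_x = np.where(class_index == c, x, 0.0)
            fused[k] += l_s * np.linalg.norm(y_t - A_t @ delta_x - e_t)
    return classes[int(np.argmin(fused))]
```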
Since Equation (52) is nondifferentiable, we propose a modified intelligent optimizer, named constrained PSO (CPSO), to solve it. Note that the first constraint can be addressed by setting a nonnegative flying region for the particle swarm. Then, inspired by the Lagrange method, the second constraint is addressed by:

$$\min_{\boldsymbol{\lambda}} \tilde{R}(\boldsymbol{\lambda}) = \min_{\boldsymbol{\lambda}}\Big(-R(\boldsymbol{\lambda}) + \eta\Big\|\sum_{s=1}^{S}\lambda_s - 1\Big\|_2^2\Big) \qquad (53)$$

where $\eta > 0$ is a penalty parameter.
In the optimizing process, CPSO ceaselessly produces the particle swarm $\mathbf{P}^m \in \mathbb{R}^{Q\times S}$ to randomly fly in the defined region, where the variable $\mathbf{P}^m$ denotes the particle swarm in the $m$-th generation, and the variables $Q$ and $S$ denote the number of individuals and the particle dimension, respectively. Note that each row of $\mathbf{P}^m$ signifies a potential solution for minimizing (53). Specifically, CPSO finds the best individual $\mathbf{p}$ from $\mathbf{P}^1$ to minimize (53) in the first generation; then $\mathbf{p}$ reproduces $\mathbf{P}^2$ in the second generation. The above processes repeat until the following termination criterion is met:

$$\Big\|\sum_{s=1}^{S}\lambda_s^{m+1} - 1\Big\|_2^2 < \xi \qquad (54)$$

where $\xi$ is a small positive value. Referring to [57], we update the penalty parameter in each generation by:

$$\eta^{m+1} = \min(\varsigma\eta^m, \eta_{\max}) \qquad (55)$$

where $\eta_{\max}$ is a large positive value, and $\varsigma$ is a small positive value. The procedure for optimizing (53) is outlined in Algorithm 2. (In each generation, CPSO reproduces the new particle swarm in the same way as PSO. For the sake of space, we omit the details here; they can be found in [57].)
Algorithm 2. Optimization of (53) via CPSO
Input: $\tilde{\mathbf{y}}_s$, $\tilde{\mathbf{A}}_s$, $\mathbf{x}_s$, $\tilde{\mathbf{e}}_s$, $\eta^0$, $\varsigma$, $\eta_{\max}$, $\xi$, $Q$, and $S$
Output: The optimal $\boldsymbol{\lambda}$
Initialization: $m = 0$, particle swarm $\mathbf{P}^m$
Repeat
 1: Calculate the fitness value of each individual in $\mathbf{P}^m$ on (53);
 2: Find the individual $\mathbf{p}$ in $\mathbf{P}^m$ with the least fitness value;
 3: $\boldsymbol{\lambda} = \mathbf{p}$;
 4: $m = m + 1$;
 5: Reproduce the particle swarm $\mathbf{P}^m$ around $\mathbf{p}$;
 6: Update $\eta^m$ by (55);
 7: Check the termination criterion by (54);
Until convergence
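A rough sketch of Algorithm 2 is given below; it uses a generic PSO velocity/position update with illustrative coefficients rather than the exact reproduction rule of [57], and recognition_rate(lam) stands in for R(lambda) in (52) and must be supplied by the caller.

```python
import numpy as np

def cpso(recognition_rate, S, Q=20, eta0=0.01, varsigma=1.2, eta_max=100.0,
         xi=1e-4, max_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.0, 1.0, size=(Q, S))        # nonnegative flying region
    V = np.zeros((Q, S))
    pbest, pbest_val = P.copy(), np.full(Q, np.inf)
    gbest, gbest_val = P[0].copy(), np.inf
    eta = eta0
    for _ in range(max_gen):
        for q in range(Q):
            lam = P[q]
            # Penalized objective (53): negated recognition rate plus sum-to-one penalty
            fit = -recognition_rate(lam) + eta * (np.sum(lam) - 1.0) ** 2
            if fit < pbest_val[q]:
                pbest_val[q], pbest[q] = fit, lam.copy()
            if fit < gbest_val:
                gbest_val, gbest = fit, lam.copy()
        # Generic PSO update (illustrative inertia and acceleration coefficients)
        r1, r2 = rng.random((Q, S)), rng.random((Q, S))
        V = 0.7 * V + 1.5 * r1 * (pbest - P) + 1.5 * r2 * (gbest - P)
        P = np.clip(P + V, 0.0, None)             # keep particles in the nonnegative region
        eta = min(varsigma * eta, eta_max)        # penalty update (55)
        if (np.sum(gbest) - 1.0) ** 2 < xi:       # termination criterion (54)
            break
    return gbest
```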

4. Analysis of CDNSC

This section discusses the effectiveness of CDNSC by analyzing its complexity and convergence and demonstrating the positive effect of DNSR on the performance of CEE.

4.1. Complexity and Convergence of CDNSC

Although the mathematical derivation for optimizing CDNSC seems complicated due to the nonnegative constraint, the resulting extra computation is only the construction of a simple matrix $\mathbf{B}$, which has a low computational complexity of $O(L)$. The subproblems regarding the variables $\tilde{\mathbf{e}}$ and $\mathbf{u}$ can be explicitly solved by the simple soft thresholding method, so the computational complexity of solving $\tilde{\mathbf{e}}$ and $\mathbf{u}$ is $O(L)$. When solving the variable $\mathbf{x}$, the most time-consuming step is the matrix inversion, which has a complexity of $O(L^2)$. Let $k$ and $l$ signify the iteration indices of the outer and inner loops in Algorithm 1, respectively. Ignoring basic operations like matrix addition and subtraction, the computational complexity of Algorithm 1 is $O(kl(3L + L^2))$. Unlike the Lasso problem, which must be solved iteratively, all the $l_1$ minimization problems in CDNSC have closed-form solutions, so CDNSC is relatively efficient.
The convergence of CDNSC is illustrated in Proposition 2 (the rough proof of proposition 2 is provided in Appendix C).
Proposition 2.
The sequence $\bar{\mathcal{L}}(\mathbf{W}^k, \mathbf{x}^k, \mathbf{e}^l, \mathbf{B}^l, \mathbf{u}^l, \boldsymbol{\phi}_1^l, \boldsymbol{\phi}_2^l, \boldsymbol{\phi}_3^l, \rho^l)$ generated by Algorithm 1 converges.

4.2. Positive Effect of DNSR to CEE

To intuitively illustrate the positive effect of DNSR on CEE, state-of-the-art methods for error correction or detection are selected for comparison. The experiments are performed on the blue spectrum samples in the PolyU palmprint database (all samples are resized to 80 × 80 pixels and vector-normalized). The first three samples of each subject are used for training, and a randomly selected sample of the first subject is chosen for testing. We consider robust palmprint recognition under mixed contaminations and simulate it by imposing a combination of 40% block-wise scar occlusion and 40% pixel-wise corruption on the query sample. Figure 3 displays the performance of all the competing methods, where the coefficients and reconstruction residuals corresponding to the congeneric samples of the query sample are marked in red, while the reconstruction residual closest to the congeneric reconstruction residual is marked in black (‘N/A’ indicates that the corresponding method lacks an error corrector or detector).
Without an error detector, RSRC can only correct a portion of the corruption, which leads to misclassification and a terrible recovery of the query image. By contrast, CESR and RRC lack an error corrector. This puts great pressure on their error detectors, so the undetected errors affect the sparse coding and result in indistinguishable inter-class reconstruction residuals, which may mislead the classifier. Since LUMIRC neglects to learn a proper regularizer, its representation coefficients are not sparse enough, and its error estimator appears underpowered. Benefitting from DNSR, which encourages the sparsity and nonnegativity of the coefficients, CDNSC presents sparser coefficients than the other competitors, in which the elements corresponding to the real class are significantly large and physically meaningful. Consequently, CEE presents more precise error estimation results, and the inter-class reconstruction residuals are more distinguishable for classification.

5. Experiments

This section verifies the flexibility and robustness of CDNSC with respect to various contaminations. Meanwhile, to facilitate an intuitive comparison of the recognition accuracy between single-spectrum and multispectral palmprint recognition, we choose two public multispectral palmprint databases, the CASIA database and the PolyU database, as benchmarks.

5.1. Experimental Settings

5.1.1. CASIA Database

This database [58] was built by using a contactless device to capture palmprints. There are no pegs to restrict hand posture and position, so variations of illumination and palm posture exist extensively in the samples. Images of 200 palms were collected in two sessions with an interval of more than one month. In a session, each palm was captured three times, respectively, under the 460 nm, 630 nm, 700 nm, 850 nm, 940 nm, and white spectra. There were six images acquired from one palm. The samples are all uncropped original palm images. We utilize the method in [1] to crop each sample to a size of 180 × 180 to obtain the ROI images. In the experiments, the samples of each subject are randomly divided in the proportion of 3:1:2 to compose a dictionary set, a feature fusion training set, and a test set, respectively. Figure 4a–f show some typical multispectral samples in the CASIA database.

5.1.2. PolyU Database

Samples in this database were captured by a contact-based device, where pegs are set to restrict hand posture and position. Hence, the acquired samples are rather regular. Palmprint images of 500 palms were collected in two sessions with an interval of nine days. In a session, each palm was captured six times, respectively under red, blue, green, and NIR spectra, so there were 12 images acquired from one palm. The ROI images were already cropped with a size of 128 × 128 by using the method in [59]. In the experiments, samples of each subject are randomly divided with the proportion of 1:1:1 to compose a dictionary set, a feature fusion training set, and a test set, respectively. Figure 4g–j show some typical multispectral palmprints in the PolyU database.

5.1.3. Compared Methods

Few classical methods have been proposed for robust palmprint recognition. Since CDNSC derives from analyzing the merits and demerits of the robust regression-based methods, the optional competing methods are all based on robust regression analysis. To present convincing comparisons, the state-of-the-art methods on coding regularization, nonnegative representation, error correction, and error detection are all preferred. Specifically, LRC and the regularization-based SRC and CRC are selected. For the methods of nonnegative coding, the classical NNG and GNMF [60] are picked. As dictionary learning-based methods, DSRC and SSRC are chosen. Meanwhile, the state-of-the-art error correction and detection-based methods, including RSRC, CESR [61], l1-regularized RRC [61], and LUMIRC [61], are chosen. Finally, as a successful application of the correntropy, CMP is selected.

5.1.4. Parameter Settings and Experimental Platform

Parameters in Algorithm 1 are set as $\alpha = \beta \in \{0.01, 0.1, 1, 10, 50\}$, $\boldsymbol{\phi}_1^l = \boldsymbol{\phi}_2^l = \boldsymbol{\phi}_3^l = 0.1$, $\rho^0 = 1$, $\mu = 1.5$, $\rho_{\max} = 1\mathrm{e}{+}8$, $k_{\max} = 3$, and $\varepsilon = 1\mathrm{e}{-}3$. On this basis, the other parameters in Algorithm 2 are set as $\eta^0 = 0.01$, $\varsigma = 1.2$, $\eta_{\max} = 100$, and $\xi = 1\mathrm{e}{-}4$. All the experiments are performed in MATLAB R2019a on a laptop with a 2.6-GHz CPU and 4-GB RAM.

5.2. Robust Contactless Palmprint Recognition

Experiments in this part are all implemented on the 460 nm spectrum samples in the CASIA database. Without pegs to restrict hand posture and position, variations of illumination and palm posture exist extensively, as shown in Figure 5, which brings some challenges to ROI segmentation and palmprint recognition. Moreover, in real-world applications, dense corruption and gross occlusion may emerge in the query samples. Hence, we verify the robustness of CDNSC from the following aspects.

5.2.1. Dimension and Number of Training Samples

As we know, the dimension and number of training samples often affect the performance of the biometric recognition methods. Here, we first consider the impact of sample dimension by fixing the training sample number of each subject as 3, where each sample is downsampled to the size of 20 × 20, 40 × 40, 80 × 80, and 120 × 120, respectively. When considering the impact of training sample number, we fix the sample dimension as 40 × 40 and respectively select the first sample and the first three samples of each subject in the dictionary set to compose the dictionary A. The recognition rates of all the methods under the two cases are displayed in Figure 6a,b, respectively.
From Figure 6a, although both CDNSC and CESR adopt the correntropy metric as the error detector, CDNSC is more robust than CESR, which owes to the regularizer DNSR. As a strong competitor, RRC is more sensitive than CDNSC to the variation of sample dimension. It can be observed that CDNSC outperforms all the compared methods in each dimension case. When the sample dimension increases, the recognition rates of most compared methods present a slight downward trend. This is because a proper downsampling ratio contributes to getting rid of redundant pixels and extracting distinct features. Figure 6b indicates that CDNSC achieves better results than the others no matter whether one or three training samples per class are used. Note that we set rigorous parameters to solve all the Lasso problems to pursue the sparsity of coefficients, so the augmented dictionary in RSRC plays only a small role in enhancing the robustness of SRC.

5.2.2. Continuous Scar Occlusion

We consider the possible occlusion caused by palm scar and design an experiment to investigate the robustness of CDNSC in handling the scar occlusion, a kind of continuous contamination. The sample dimension is fixed as 40 × 40, and the first three samples of each subject in the dictionary set of the CASIA database are all recruited to compose the dictionary A. When performing the experiments, we randomly impose a scar image on the query samples to simulate the real scar. The percentage of scar occlusion varies from 10% to 40%. The experimental results are shown in Figure 7.
It is evident that CDNSC outperforms the other methods, except CESR, at different occlusion levels. However, CDNSC is less sensitive to the variation of occlusion level than CESR. Although CMP also adopts the correntropy metric, its performance is greatly degraded by the continuous occlusion in comparison with its considerable performance on the original database (see Figure 6a).

5.2.3. Dense Corruption and the Mixed-Contaminations

Finally, we consider the remaining cases: dense corruption and the mixture of corruption and scar occlusion. The sample dimension and the assembling process of the dictionary A are similar to the above experiment. Since CDNSC is quite robust to corruption, we directly evaluate its robustness regarding dense corruption at the level of 50%. Besides, scar occlusion and corruption are combined to simulate the mixture case (the level varies from 10% to 40%). The two kinds of contaminations are exhibited in Figure 8. Table 1 displays the experimental results of all the methods, where the best recognition rate of each case is shown in bold.
Table 1 manifests that CDNSC and LUMIRC are particularly robust against dense corruption due to their appendant error correctors. However, CDNSC achieves a higher recognition rate of 94.75%, which is 4.75% ahead of LUMIRC. Facing the mixed contaminations, the compared methods seem fragile to the extra added corruption and present a great degeneration when the level of the mixed contaminations increases, compared with their performance regarding occlusion (see Figure 7). Because DNSR makes CEE powerful, CDNSC is less sensitive to the increasing mixed contaminations. This indicates that the proposed joint scheme is more flexible and robust to various challenging cases.

5.3. Robust Contact-Based Palmprint Recognition

Experiments in this part are all implemented on the blue spectrum samples in the PolyU database. Benefitting from the well-defined acquisition restriction, samples in the PolyU database are quite regular, and the recognition rate of CDNSC can reach 100% on them. Therefore, we do not conduct further experiments on the original database and directly verify the flexibility and robustness of CDNSC from the following aspects.

5.3.1. Continuous Camera Lens Occlusion

Now, we consider another probable occlusion, continuous camera lens pollution, which often appears in contact-based acquisition. In this experiment, we fix the sample dimension as 40 × 40, and the first three samples of each subject in the dictionary set of the PolyU database are all recruited to compose the dictionary A. The recognition rates of all the methods are displayed in Figure 9.
Obviously, CDNSC outperforms the other compared methods in all occlusion cases. CESR and LUMIRC continue to perform considerably well. However, when the occlusion level increases, RRC begins to surpass them. The presence of occlusion misleads CMP in selecting correct dictionary atoms, which leads to its poor performance. CDNSC showed its robustness to scar occlusion on the irregular CASIA database. We believe that it is capable of handling the same case on the more regular PolyU database, so we do not consider scar occlusion in this part.

5.3.2. Training Sample Number

Figure 6a reveals that the variation of sample dimension has little effect on the palmprint recognition rate. So, we merely pay attention to the impact of the training sample number here. Different from the experiments performed on the CASIA database, we impose 40% camera lens occlusion on the query samples when the training sample number varies. By fixing the sample dimension as 40 × 40, we select the first sample and the first three samples of each subject in the dictionary set to compose the dictionary A, respectively. The results are shown in Figure 10.
The pixel values of the simulated camera lens pollution are quite close to the palmprint pixel values, which brings extra difficulty to the error detector and corrector in contrast with the scar occlusion (see Figure 8 and Figure 11). As shown in Figure 10, nearly all the methods lose the good performance they presented with respect to the scar occlusion and become sensitive to the camera lens occlusion. To our relief, CDNSC is more robust than the other methods whether with one or three training samples.

5.3.3. Dense Corruption and the Mixed-Contaminations

Finally, we discuss the robustness of CDNSC regarding dense corruption and the mixture of corruption and camera lens occlusion. The sample dimension and the assembling process of the dictionary A are the same as those in Section 5.3.1. We directly consider 50% corruption and simulate the mixture case by combining the camera lens occlusion and corruption (the level varies from 10% to 40%). The two kinds of contaminations are exhibited in Figure 11. Table 2 displays the recognition rates of all the methods, where the best recognition rate of each case is shown in bold.
From Table 2, CESR is considerably robust against dense corruption, while CDNSC and LUMIRC also show promising performance due to their error correctors. However, CDNSC achieves a higher recognition rate of 97.6% than CESR and LUMIRC. Since the mixed contaminations do not follow a Laplacian or Gaussian distribution, both SRC and RSRC lose their robustness. Although DSRC and SSRC can handle slight mixed contaminations, they have limited capacity to handle more severe cases. NNG and GNMF are unable to extract robust features, so their performance is rather poor. CESR, RRC, CMP, and LUMIRC show relatively satisfactory results due to their error detectors. However, they are sensitive to the gradually deteriorating mixed contaminations. Benefitting from the cooperation between CEE and DNSR, CDNSC achieves better results at all contamination levels.

5.4. Comparison of Running Times

Apart from the recognition rate, computational consumption is another important indicator for evaluating palmprint recognition methods. This subsection investigates the efficiency of CDNSC and the other competing methods. For the experiments performed in Section 5.2 and Section 5.3, we specifically consider 40% occlusion and 40% mixed contamination and give the average running time of recognizing a query sample in the two cases. The experimental settings, including sample number, sample dimension, and parameters, follow those given in the previous cases. The comparison among all the competing methods regarding the two cases on the CASIA database and the PolyU database is listed in Table 3.
On the whole, the traditional methods, including LRC, CRC, SRC, RSRC, DSRC, SSRC, NNG, and GNMF, take less computation time than the state-of-the-art robust methods, including CESR, RRC, CMP, LUMIRC, and CDNSC. This is because the traditional methods can achieve batch-wise recognition by matrix-based computation. By contrast, the state-of-the-art robust methods have an additional stage to respectively learn a tailored weighted image for each query sample, thus they have to recognize the query samples one by one. Moreover, since the robust methods usually have more than one variable due to their complicated robust models, their optimization processes are consequently more time-consuming, based on the iteratively reweighted optimization strategy. However, the state-of-the-art robust methods present significantly higher accuracy than the traditional methods. From Table 3, we can conclude that CDNSC and CESR achieve a better tradeoff between accuracy and efficiency than the other methods, but CDNSC has a higher accuracy than CESR in all cases (see Figure 7, Figure 9, Table 1 and Table 2).

5.5. Multispectral Contactless and Contact-Based Palmprint Recognitions

This subsection investigates the effectiveness of E-CDNSC. Based on the well-designed objective function (53), CPSO searches for a group of fusion coefficients that manifest the informative spectral features and suppress the less useful spectral features. Figure 3 shows that CDNSC is capable of extracting significant stable features from seriously contaminated samples. So, it is reasonable to suppose that the fusion coefficients learned on the original samples are appropriate for fusing the robust features extracted from the multispectral samples.
In the experiments, the sample dimension is fixed as 40 × 40. The first three spectral samples of each subject in the dictionary set of the CASIA database are used to compose the spectrum-dependent dictionary A. The spectral samples in the feature fusion training set are employed to extract spectral features, which are used to train CPSO to obtain the feasible fusion parameters. Given Q = 20 (individual number of the particle swarm) and S = 6 (particle swarm dimension or spectrum number), the fusion parameter searching processes are shown in Figure 12.
We set the origin of the coordinate system as the initial positions of the fusion parameters and zero as the initial value of the penalty parameter. Figure 12a indicates that, under the punishment of $\eta$, CPSO constantly produces a new particle swarm $\mathbf{P}^m \in \mathbb{R}^{Q \times S}$ to randomly fly in the defined region until the termination criterion is reached. Obviously, each row of $\mathbf{P}^m$ is a potential solution to (53). It can be observed that CPSO converges within only 20 iterations. Note that minimizing the constrained objective function (53) is equivalent to maximizing the recognition rate function (52). To intuitively present the comparison between multispectral palmprint recognition and single-spectrum palmprint recognition, we define the best single-spectrum recognition rate, 96.25%, as the initial value of the function (52) and its opposite value, −96.25%, as the initial value of the function (53). Figure 12b reveals that multispectral palmprint recognition achieves a more admirable result, with a 98.75% recognition rate.
Fixing all the above experimental settings, we now conduct multispectral palmprint recognition based on the learned fusion parameters (see Figure 12a). Four kinds of cases, including illumination and pose variation, 40% scar occlusion, 50% corruption, and the mixed contaminations with 40% corruption and 40% scar occlusion, are all considered. Similarly, we perform multispectral palmprint recognition on the PolyU database, where the mixed contaminations are simulated with 40% corruption plus 40% camera lens occlusion. The results on the two databases are displayed in Table 4 and Table 5, respectively, where all the single-spectrum palmprint recognition results are also listed for intuitive comparison.
The experimental results in Table 4 and Table 5 reveal that E-CDNSC can further improve the recognition rate based on the robustness of CDNSC. We also conclude that the fusion parameters learned on the original samples are applicable to fuse the robust features extracted from the contaminated samples. This owes to the flexibility and robustness of CDNSC.

6. Conclusions

Considering robust palmprint recognition, the coding errors caused by contaminations such as gross occlusion, dense corruption, and a mixture of them are studied in depth in this paper. We combine a correntropy-induced error detector and a sparse error corrector to propose the cooperative error estimator CEE. Moreover, DNSR is designed to encourage the nonnegativity and sparsity of the coefficients. By combining CEE and DNSR, a joint CDNSC is proposed to flexibly handle various contaminations. On this basis, we propose E-CDNSC for multimodal palmprint recognition by introducing a novel CPSO. The correntropy metric function is approximated with a weighted least squares formula, while the nonnegative constraint problem is converted into a promising equality constraint. With some skillful techniques, the reformulated problem is effectively optimized via a reweighted ADMM. Extensive experimental results on two public benchmarks reflect the flexibility and robustness of the proposed methods.
Our research reveals the importance of handling the coding errors and the importance of a proper regularizer for precise error estimation. These factors are all vital to preserving the flexibility and robustness of recognition methods when facing various complicated scenarios. This paper focuses only on robust palmprint recognition; however, the ideas in CDNSC and E-CDNSC can be applied to other single-mode biometric recognition or multimodal biometric recognition tasks.

Author Contributions

Conceptualization, K.J.; Data curation, K.J.; Formal analysis, K.J.; Funding acquisition, X.Z.; Investigation, X.Z.; Methodology, K.J.; Project administration, X.Z.; Resources, G.S.; Software, K.J. and G.S.; Validation, K.J.; Writing—original draft, K.J.; Writing—review & editing, K.J. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by the National Natural Science Foundation of China (No. 61673316) and Project commissioned by the Sichuan Gas Turbine Research Institute of AVIC.

Conflicts of Interest

The authors declare no conflict of interest.

Acronym Definitions

Acronym      Definition
ADMM         Alternating direction method of multipliers
CDNSC        Correntropy-induced discriminative nonnegative sparse coding
CEE          Cooperative error estimator
CESR         Correntropy-based sparse representation
CIM          Correntropy-induced metric
CMP          Correntropy matching pursuit
CPSO         Constrained PSO
CRC          Collaborative representation classifier
DNSR         Discriminative nonnegative sparse regularizer
DSRC         Discriminative SRC
E-CDNSC      Extended CDNSC
ECP          Equality constraint problem
GNMF         Graph regularized nonnegative matrix factorization
HQ           Half-quadratic
ICP          Inequality constraint problem
KKT          Karush-Kuhn-Tucker
LAR          Least angle regression
LRC          Linear regression classifier
LUMIRC       Laplacian-uniform mixture driven iterative robust coding
MSE          Mean squared error
NMF          Nonnegative matrix factorization
NMR          Nuclear norm-based matrix regression
NNG          Nonnegative garrote
PDF          Probability density function
RRC          Regularized robust coding
RSC          Robust sparse coding
RSRC         Robust SRC
SRC          Sparse representation classifier
SSRC         Superposed SRC
TPNMF        Topology-preserving nonnegative matrix factorization

Appendix A

Proof of Proposition 1

Proof. 
It can be easily verified that the function ϕ(e_i) = 1 − I_σ(e_i) satisfies all the conditions of (2.1) in [54]. According to (2.3) in [54], there exists a convex dual function ψ(w_i) such that

    \phi(e_i) = \inf_{w_i}\Big\{ \frac{1}{2} w_i e_i^{2} + \psi(w_i) \Big\}    (A1)

When e_i ≠ 0, the infimum of (16) is reached at

    w_i = \frac{\dot{\phi}_{\sigma}(e_i)}{e_i} = \frac{I_{\sigma}(e_i)}{\sigma^{2}}    (A2)

where I_σ(e_i) = exp(−e_i²/2σ²). □
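To make the role of this dual variable concrete, the snippet below computes the half-quadratic weights w_i = I_σ(e_i)/σ² from a residual vector. It is a minimal illustration of the reweighting implied by Proposition 1, not code from the paper; the helper name and the choice of σ in the usage line are ours.

    import numpy as np

    def hq_weights(e, sigma):
        # Half-quadratic weights w_i = I_sigma(e_i) / sigma^2, with I_sigma(e) = exp(-e^2 / (2 sigma^2)).
        # Entries with large coding errors (e.g., occluded or corrupted pixels) receive weights near zero,
        # so they barely influence the subsequent weighted least-squares step.
        e = np.asarray(e, dtype=float)
        return np.exp(-(e ** 2) / (2.0 * sigma ** 2)) / sigma ** 2

    # example (hypothetical names): w = hq_weights(y - A @ x, sigma=0.5)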

Appendix B

Proof of Lemma 1

Proof. 
Since t is a local minimum of (22), (t, z) is a local minimum of (23). The functions f(t) and h(t, z) in (23) are continuously differentiable, and once the gradient of g(t) is linearly independent, so is the gradient of h(t, z). For (23), according to the first-order Lagrange necessary condition of the ECP (Proposition 3.1.1 in [55]), there exists a unique multiplier φ such that

    \begin{cases} \nabla_{t} L(t,z,\varphi) = \nabla_{t} f(t) + \varphi\, \nabla_{t} h(t,z) = 0 \\ \nabla_{z} L(t,z,\varphi) = 2 \varphi z = 0 \end{cases}    (A3)

If g(t) ≠ 0, the equality constraint h(t, z) = 0 forces z ≠ 0. From (A3), we then get

    \varphi = 0 \quad (g(t) \neq 0)    (A4)

For (23), according to the second-order Lagrange necessary condition of the ECP (Proposition 3.1.1 in [55]), we have

    \begin{bmatrix} \omega^{\top} & \varpi \end{bmatrix} \begin{bmatrix} \nabla_{tt}^{2} L(t,z,\varphi) & 0 \\ 0 & 2\varphi \end{bmatrix} \begin{bmatrix} \omega \\ \varpi \end{bmatrix} \geq 0    (A5)

which holds for all ω and ϖ ∈ ℝ satisfying

    \nabla_{t} h(t,z)^{\top}\omega + \nabla_{z} h(t,z)\,\varpi = \nabla_{t} g(t)^{\top}\omega + 2 z \varpi = 0    (A6)

The Lagrange multiplier theorem of the ECP guarantees that φ exists and is unique, so the conclusions drawn below for the two cases hold. We obtain the desired results by substituting differently defined ϖ into (A5), as long as (ω, ϖ) satisfies (A6). We first define

    \varpi = \begin{cases} 0, & g(t) = 0 \\ -\dfrac{\nabla_{t} g(t)^{\top}\omega}{2 z}, & g(t) \neq 0 \end{cases}    (A7)

With (A4) and (A7), whether g(t) = 0 or not, φϖ² = 0 always holds. For ω satisfying ∇_t g(t)ᵀω = 0 (g(t) = 0), we thus obtain the second-order necessary condition of (23):

    \omega^{\top} \nabla_{tt}^{2} L(t,z,\varphi)\, \omega \geq 0    (A8)

By redefining ω = 0 and ϖ ≠ 0 (g(t) = 0), we obtain 2φϖ² ≥ 0 from (A5), that is,

    \varphi \geq 0 \quad (g(t) = 0)    (A9)

Finally, we obtain (24) by combining (A3), (A4), and (A9). □
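For readability, the following LaTeX fragment restates what the lemma recovers, under the standard slack-variable construction h(t, z) = g(t) + z², which is an assumption on our part but is consistent with the 2φz term in (A3); the combined conditions below are what (A3), (A4), and (A9) yield and should correspond to (24) in the main text.

    \min_{t}\ f(t)\ \ \text{s.t.}\ g(t) \le 0
    \quad\Longleftrightarrow\quad
    \min_{t,z}\ f(t)\ \ \text{s.t.}\ h(t,z) = g(t) + z^{2} = 0,

    \nabla_{t} f(t) + \varphi\, \nabla_{t} g(t) = 0, \qquad
    \varphi\, g(t) = 0, \qquad
    \varphi \ge 0 .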

Appendix C

Proof of Proposition 2

Proof. 
Note that \bar{\mathcal{L}}(\tilde{e}, B, u, x, \phi_1, \phi_2, \phi_3, \rho) in (33) can be rewritten as \bar{\mathcal{L}}(W, e, B, u, x, \phi_1, \phi_2, \phi_3, \rho). Since the convergence properties of ADMM have been studied in depth in [56], when W^k is fixed, the sequence updated by Algorithm 1 always satisfies \bar{\mathcal{L}}(W^k, x^{k-1}, e^l, B^l, u^l, \phi_1^l, \phi_2^l, \phi_3^l, \rho^l) \geq \bar{\mathcal{L}}(W^k, x^k, e^l, B^l, u^l, \phi_1^l, \phi_2^l, \phi_3^l, \rho^l).
When x^k is fixed, we have … according to Proposition 1. Hence, the sequence \bar{\mathcal{L}}(W^k, x^k, e^l, B^l, u^l, \phi_1^l, \phi_2^l, \phi_3^l, \rho^l) generated by CDNSC is non-increasing. By the properties of the correntropy [40] and formula (27), it is clear that J(W, e, x, z) has a lower bound. In conclusion, CDNSC converges. □
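The monotonicity argument above is easiest to see on a stripped-down example. The toy solver below alternates the half-quadratic reweighting of Proposition 1 with a closed-form weighted least-squares step; it deliberately replaces DNSR with a plain ridge penalty and omits the nonnegativity constraint and the inner ADMM, so it only illustrates the outer reweighting mechanism, not CDNSC itself.

    import numpy as np

    def reweighted_ls(y, A, sigma, n_outer=30, lam=1e-3):
        # Outer half-quadratic loop: fix the weights from the current residual,
        # then solve the weighted, ridge-regularized least-squares problem in closed form.
        x = np.zeros(A.shape[1])
        for _ in range(n_outer):
            e = y - A @ x
            w = np.exp(-(e ** 2) / (2.0 * sigma ** 2)) / sigma ** 2   # Proposition 1 weights
            AW = A * w[:, None]                                       # rows of A scaled by their weights
            x = np.linalg.solve(A.T @ AW + lam * np.eye(A.shape[1]), AW.T @ y)
        return x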

References

1. Zhang, L.; Li, L.; Yang, A.; Shen, Y.; Yang, M. Towards contactless palmprint recognition: A novel device, a new benchmark, and a collaborative representation based identification approach. Pattern Recognit. 2017, 69, 199–212.
2. Fei, L.; Lu, G.; Jia, W.; Teng, S.; Zhang, D. Feature Extraction Methods for Palmprint Recognition: A Survey and Evaluation. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 346–363.
3. Zhao, S.; Zhang, B. Learning Salient and Discriminative Descriptor for Palmprint Feature Extraction and Identification. Available online: https://ieeexplore.ieee.org/document/8976291 (accessed on 3 January 2020).
4. Jing, X.; Zhang, D. A face and palmprint recognition approach based on discriminant DCT feature extraction. IEEE Trans. Syst. Man Cybern. B Cybern. 2004, 34, 2405–2415.
5. Huang, D.; Jia, W.; Zhang, D. Palmprint verification based on principal lines. Pattern Recognit. 2008, 41, 1316–1328.
6. Hennings-Yeomans, P.; Kumar, B.; Savvides, M. Palmprint classification using multiple advanced correlation filters and palm-specific segmentation. IEEE Trans. Inf. Forensics Secur. 2007, 2, 613–622.
7. Sun, Z.; Wang, L.; Tan, T. Ordinal feature selection for iris and palmprint recognition. IEEE Trans. Image Process. 2014, 23, 3922–3934.
8. Fei, L.; Zhang, B.; Zhang, W.; Teng, S. Local apparent and latent direction extraction for palmprint recognition. Inf. Sci. 2019, 473, 59–72.
9. Fei, L.; Zhang, B.; Xu, Y.; Yan, L. Palmprint Recognition Using Neighboring Direction Indicator. IEEE Trans. Hum. Mach. Syst. 2016, 46, 787–798.
10. Lu, G.; Zhang, D.; Wang, K. Palmprint recognition using eigenpalms features. Pattern Recognit. Lett. 2003, 24, 1463–1467.
11. Yang, J.; Frangi, A.; Yang, J.; Zhang, D.; Jin, Z. KPCA plus LDA: A complete kernel Fisher discriminant framework for feature extraction and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 230–244.
12. Hu, D.; Feng, G.; Zhou, Z. Two-dimensional locality preserving projections (2DLPP) with its application to palmprint recognition. Pattern Recognit. 2007, 40, 339–342.
13. Zhao, S.; Zhang, B. Deep discriminative representation for generic palmprint recognition. Pattern Recognit. 2020, 98, 107071.
14. Wu, X.; Zhao, Q.; Bu, W. A SIFT-based contactless palmprint verification approach using iterative RANSAC and local palmprint descriptors. Pattern Recognit. 2014, 47, 3314–3326.
15. Jia, W.; Hu, R.; Lei, Y.; Zhao, Y.; Gui, J. Histogram of oriented lines for palmprint recognition. IEEE Trans. Syst. Man Cybern. Syst. 2014, 44, 385–395.
16. Zheng, Q.; Kumar, A.; Pan, G. A 3D feature descriptor recovered from a single 2D palmprint image. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1272–1279.
17. Fei, L.; Zhang, B.; Xu, Y.; Guo, Z.; Wen, J.; Jia, W. Learning discriminant direction binary palmprint descriptor. IEEE Trans. Image Process. 2019, 28, 3808–3820.
18. Fei, L.; Zhang, B.; Xu, Y.; Huang, D.; Jia, W.; Wen, J. Local Discriminant Direction Binary Pattern for Palmprint Representation and Recognition. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 468–481.
19. Hong, D.; Liu, W.; Wu, X.; Pan, Z.; Su, J. Robust Palmprint Recognition based on the Fast Variation Vese-Osher Model. Neurocomputing 2016, 174, 999–1012.
20. Zheng, J.; Lou, K.; Yang, X.; Bai, C.; Tang, J. Weighted Mixed-Norm Regularized Regression for Robust Face Identification. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3788–3802.
21. Naseem, I.; Togneri, R.; Bennamoun, M. Linear regression for face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2106–2112.
22. Wright, J.; Yang, A.; Ganesh, A.; Sastry, S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
23. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478.
24. Huang, J.; Nie, F.; Huang, H.; Ding, C. Supervised and projected sparse coding for image classification. In Proceedings of the AAAI Conference on Artificial Intelligence, Bellevue, WA, USA, 14–18 July 2013; pp. 438–444.
25. Xu, Y.; Zhong, Z.; Yang, J.; You, J.; Zhang, D. A New Discriminative Sparse Representation Method for Robust Face Recognition via l2 Regularization. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2233–2242.
26. Deng, W.; Hu, J.; Guo, J. In defense of sparsity based face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 399–406.
27. Yang, M.; Zhang, L.; Yang, J.; Zhang, D. Robust sparse coding for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 625–632.
28. Yang, M.; Zhang, L.; Yang, J.; Zhang, D. Regularized robust coding for face recognition. IEEE Trans. Image Process. 2013, 22, 1753–1766.
29. He, R.; Zheng, W.; Hu, B. Maximum correntropy criterion for robust face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1561–1576.
30. Wang, Y.; Tang, Y.; Li, L. Correntropy Matching Pursuit with Application to Robust Digit and Face Recognition. IEEE Trans. Cybern. 2016, 47, 1354–1366.
31. Jing, K.; Zhang, X.; Xu, X. An overview of multimode biometric recognition technology. In Proceedings of the IEEE Conference on Information Technology, Hong Kong, China, 29–31 December 2018; pp. 168–172.
32. He, R.; Zheng, W.; Tan, T.; Sun, Z. Half-quadratic-based iterative minimization for robust sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 261–275.
33. Zheng, H.; Lin, D.; Lian, L.; Dong, J.; Zhang, P. Laplacian-Uniform Mixture-Driven Iterative Robust Coding with Applications to Face Recognition Against Dense Errors. Available online: https://ieeexplore.ieee.org/document/8891717 (accessed on 6 January 2020).
34. Zhang, D.; Guo, Z.; Lu, G.; Zhang, L. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas. 2010, 59, 480–490.
35. Raghavendra, R.; Busch, C. Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition. Pattern Recognit. 2014, 47, 2205–2221.
36. Bounneche, M.; Boubchir, L.; Bouridane, A.; Nekhoul, B.; Alicherif, A. Multi-spectral palmprint recognition based on oriented multiscale log-Gabor filters. Neurocomputing 2016, 205, 274–286.
37. Zhou, N.; Xu, Y.; Cheng, H.; Yuan, Z.; Chen, B. Maximum Correntropy Criterion-Based Sparse Subspace Learning for Unsupervised Feature Selection. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 404–416.
38. Liu, W.; Pokharel, P.; Principe, J. Correntropy: Properties and Applications in Non-Gaussian Signal Processing. IEEE Trans. Signal Process. 2007, 55, 5286–5299.
39. Charbonnier, P.; Blanc-Feraud, L.; Aubert, G.; Barlaud, M. Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Image Process. 1997, 6, 298–311.
40. Shastri, B.; Levine, M. Face recognition using localized features based on nonnegative sparse coding. IEEE Trans. Image Process. 2006, 18, 107–122.
41. Kawakami, R.; Wright, J.; Tai, Y.; Matsushita, Y.; Ben-Ezra, M.; Ikeuchi, K. High-resolution hyperspectral imaging via matrix factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 2329–2336.
42. Donoho, D. For Most Large Underdetermined Systems of Linear Equations the Minimal l1-Norm Solution Is Also the Sparsest Solution. Comm. Pure Appl. Math. 2006, 59, 797–829.
43. Tibshirani, R. Regression Shrinkage and Selection via the LASSO. J. R. Stat. Soc. B 1996, 58, 267–288.
44. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least Angle Regression. Ann. Stat. 2004, 32, 407–499.
45. Liu, Y.; Wu, F.; Zhang, Z.; Zhang, Y.; Yan, S. Sparse representation using nonnegative curds and whey. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 3578–3585.
46. Zhang, T.; Fang, B.; Tang, Y.; He, G.; Wen, J. Topology Preserving Nonnegative Matrix Factorization for Face Recognition. IEEE Trans. Image Process. 2008, 17, 574–586.
47. Zhang, B.; Mu, Z.; Li, C.; Zeng, H. Robust classification for occluded ear via Gabor scale feature-based nonnegative sparse representation. Opt. Eng. 2014, 53, 1548–1561.
48. Ji, Y.; Lin, F.; Zha, H. Mahalanobis Distance Based Nonnegative Sparse Representation for Face Recognition. In Proceedings of the IEEE Conference on Machine Learning Application, Miami Beach, FL, USA, 13–15 December 2009; pp. 41–46.
49. Breiman, L. Better subset regression using the nonnegative garrote. Technometrics 1995, 37, 373–384.
50. Nikolova, M. Analysis of half-quadratic minimization methods for signal and image recovery. SIAM J. Sci. Comput. 2005, 27, 937–966.
51. Lu, C.; Tang, J.; Lin, M.; Lin, L.; Yan, S.; Lin, Z. Correntropy Induced L2 Graph for Robust Subspace Clustering. In Proceedings of the IEEE Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1801–1808.
52. Cai, D.; He, X.; Han, J.; Thomas, S. Graph Regularized Nonnegative Matrix Factorization for Data Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 30, 1548–1561.
53. Bertsekas, D. Nonlinear Programming. Athena Sci. 1999, 277–315.
54. Wimalajeewa, T.; Jayaweera, S. Optimal Power Scheduling for Correlated Data Fusion in Wireless Sensor Networks via Constrained PSO. IEEE Trans. Wirel. Commun. 2008, 7, 3608–3618.
55. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
56. Hao, Y.; Sun, Z.; Tan, T. Comparative Studies on Multispectral Palm Image Fusion for Biometrics. In Proceedings of the Asian Conference on Computer Vision, Tokyo, Japan, 18–22 November 2007; pp. 12–21.
57. Zhang, D.; Kong, W.; You, J.; Wong, M. Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1041–1050.
58. The Matlab Source Code. Available online: http://www.openpr.org.cn/index/php/All/69-CESR/View-detials.html (accessed on 29 July 2020).
59. Non-Negative Matrix Factorization on Manifold (Graph). Available online: http://www.cad.zju.edu.cn/home/dengcai/Data/GNMF.html (accessed on 29 July 2020).
60. The Matlab Source Code. Available online: http://www.comp.polyu.edu.hk/~cslzhang/code.html (accessed on 29 July 2020).
61. Laplacian-Uniform Mixture-Driven Iterative Robust Coding With Applications to Face Recognition Against Dense Errors. Available online: https://github.com/sysuzhc/LUMIRC (accessed on 29 July 2020).
Figure 1. Regression-based CDNSC. The mixedly contaminated query sample can be expressed as a linear combination of the weighted discriminative dictionary samples plus the corrected errors.
Figure 2. Comparison among the absolute error metric, MSE metric, and CIM.
Figure 3. Comprehensive comparisons between CDNSC and the state-of-the-art methods.
Figure 4. Some typical multispectral palmprint images in the PolyU database and CASIA database. (a–f) Samples under the 460 nm, 630 nm, 700 nm, 850 nm, 940 nm, and white spectrums in the CASIA database; (g–j) Samples under the Blue, Green, NIR, and Red spectrums in the PolyU database.
Figure 5. Variation of illumination and palm posture in the CASIA database. (a–f) respectively show the variation of illumination and palm posture among the palmprint ROI images.
Figure 6. Recognition rate versus the dimension and number of training samples in the CASIA database. (a) Sample dimension, (b) Sample number.
Figure 7. Recognition rate versus the level of scar occlusion.
Figure 8. Query samples with dense corruption or mixed contamination in the test set of the CASIA database. (a) The original sample; (b) Sample with 50% corruption; (c–f) Samples with 10%–40% mixed contamination.
Figure 9. Recognition rate versus the level of camera lens occlusion.
Figure 10. Recognition rate versus the training sample number.
Figure 11. Query samples with dense corruption and mixed contamination in the test set of the PolyU database. (a) The original sample; (b) Sample with 50% corruption; (c–f) Samples with 10%–40% mixed contamination.
Figure 12. The fusion parameter searching processes. (a) Curves of the fusion parameters and penalty parameter, (b) Curves of the objective function value and recognition rate.
Table 1. Recognition rates (%) of all the methods with respect to the two kinds of contaminations (CASIA database).

Method    Corruption (50%)    Mixture (10%)    Mixture (20%)    Mixture (30%)    Mixture (40%)
LRC       34.5                79.75            59               27               8.5
CRC       3.5                 27               11               4.25             3
SRC       43.75               85.25            67.5             37               12
RSRC      48.25               86               68.75            40.5             19.75
DSRC      43                  81.75            65.75            47.25            24
SSRC      53.25               83.25            69.5             52.75            28
NNG       6.25                55.75            22               8.25             3.75
GNMF      18                  72.5             43.25            16.75            7.25
CESR      83                  92.75            88.75            78.25            55.75
RRC       62.5                74.25            63.25            58               50.75
CMP       38.5                91.75            85.5             76.75            34
LUMIRC    90                  92               87.5             80.5             68.5
CDNSC     94.75               94.5             90.5             85               75.25
Table 2. Recognition rates (%) of all the methods with respect to the two kinds of contaminations (PolyU database).

Method    Corruption (50%)    Mixture (10%)    Mixture (20%)    Mixture (30%)    Mixture (40%)
LRC       27                  85.2             55.8             19.6             5.8
CRC       3.8                 16.8             6.8              2.6              1.4
SRC       51.8                92.2             79.2             43               13.4
RSRC      56.4                92.8             80.4             46.2             20.2
DSRC      46.4                91.8             77.6             47.6             14.4
SSRC      59.6                93               81.6             56.6             23.8
NNG       5.6                 50.6             21.6             6.4              2.8
GNMF      11.8                79.8             39.6             13.4             3.6
CESR      90.2                97.8             93.2             80               52.2
RRC       58.6                73.2             55.6             44.6             37
CMP       43                  97               92.8             82               38.6
LUMIRC    94.2                94.2             88.6             76.2             63
CDNSC     97.6                97.8             95.2             86.5             75.4
Table 3. Average running time (seconds) of all the methods regarding the two contaminations.

Method    CASIA: Palm Scar Occlusion (40%)    CASIA: Mixture (40%)    PolyU: Camera Lens Occlusion (40%)    PolyU: Mixture (40%)
LRC       0.0002885                           0.0002950               0.001936                              0.002198
CRC       0.0001210                           0.0001185               0.0007160                             0.0007155
SRC       0.0498                              0.0702                  0.1723                                0.1932
RSRC      0.1963                              0.1997                  0.2881                                0.2983
DSRC      0.0005825                           0.0005855               0.004967                              0.005208
SSRC      0.07915                             0.0894                  0.2135                                0.2192
NNG       0.7897                              0.9998                  12.9199                               16.7413
GNMF      0.02232                             0.02551                 0.1554                                0.1582
CESR      0.1939                              0.1953                  0.5578                                0.4518
RRC       0.5808                              1.2743                  4.1873                                8.1859
CMP       0.9250                              0.9514                  3.1124                                3.2647
LUMIRC    0.6391                              0.5990                  2.6096                                2.6455
CDNSC     0.4452                              0.4484                  2.3214                                2.5753
Table 4. Recognition rates (%) of multispectral and single-spectrum palmprint recognition on the CASIA database.

Spectrum          Pure     Occlusion (40%)    Corruption (50%)    Mixture (40%)
460               95.5     77                 94.75               75.25
630               95       73.75              94.5                71.25
700               93.25    68.5               92.25               68.25
850               93.5     71.25              92                  70.75
940               96.25    77.25              94.75               74.5
WHT               93.75    75                 93.25               72
Multi-spectrum    98.5     89.25              97.75               85.75
Table 5. Recognition rates (%) of multispectral and single-spectrum palmprint recognition on the PolyU database.

Spectrum          Pure     Occlusion (40%)    Corruption (50%)    Mixture (40%)
Blue              99.25    77.4               97.6                75.4
Green             97.8     76                 95.8                73.2
NIR               98.8     78.8               96.8                73.8
Red               97.6     75.4               96.2                72.2
Multi-spectrum    99.8     91.2               99.2                87.8
