Article

Multispectral Palmprint Recognition Using a Quaternion Matrix

1 Bio-Computing Research Center, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen 518055, China
2 Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China
3 The Institute of Automation of Heilongjiang Academy of Sciences, Harbin 150090, China
4 Department of Computer Science, Baoji University of Arts and Science, Xi’an 721013, China
* Author to whom correspondence should be addressed.
Sensors 2012, 12(4), 4633-4647; https://doi.org/10.3390/s120404633
Submission received: 20 February 2012 / Revised: 21 March 2012 / Accepted: 21 March 2012 / Published: 10 April 2012
(This article belongs to the Special Issue Hand-Based Biometrics Sensors and Systems)

Abstract

Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, and thus can achieve better recognition accuracy. Previously, multispectral palmprint images were treated as a kind of multi-modal biometrics, and fusion schemes on the image level or matching score level were used. However, some spectral information is lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which fully utilizes the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations are represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) are applied respectively to the matrix to extract palmprint features. After that, the Euclidean distance is used to measure the dissimilarity between different features. Finally, the sum of the two distances and the nearest neighbor classifier are employed for the recognition decision. Experimental results show that using the quaternion matrix achieves a higher recognition rate. Given 3,000 test samples from 500 palms, the recognition rate can be as high as 98.83%.

1. Introduction

There are two main categories of traditional automatic personal identification: token-based methods, which rely on identification documents such as driver’s licenses, passports, and other IDs; and knowledge-based methods, which rely on signatures or passwords [1]. However, tokens such as keys, ID cards, and passports are easily lost or stolen, and passwords can be forgotten. To avoid these disadvantages, biometrics has become increasingly popular for personal identification. Compared with traditional authentication methods, biometric features are difficult to steal, lose, copy, or alter [1,2]. This makes biometric authentication reliable and efficient in situations where high security is needed. Many biometric features have been studied and used in recent decades, such as fingerprints [3,4], palmprints [5–8], iris images [9–12], human faces [13–16] and finger-knuckle-prints [17].

Among these features, palmprints have been widely studied [5–8], as they have many merits: they are user-friendly, low cost, robust, and highly reliable [5]. Traditional palmprint recognition systems use images acquired under a single illumination, which may cause different palms to appear similar due to the limited information.

To address the issue of limited information, multispectral imaging, which provides several images of the same scene under different illuminations, has been applied to biometrics such as face recognition [14] and iris recognition [12]. Some pioneering work on multispectral palmprints has also been reported. Rowe et al. [18] designed a prototype whole-hand multispectral imaging system. Zhang et al. [19] developed a fast multispectral palmprint prototype system. Hao et al. [20] proposed a touchless multispectral palmprint system which differs from those in [18] and [19] but is similar to a typical iris recognition system, in that the user does not need to touch the sensor. These works regarded multispectral palmprint images as a kind of multi-modal biometrics and used fusion schemes on different levels, such as the image level [20] and the matching score level [18,19]. However, some useful information is lost in image fusion [21], while the correlation between different spectra is neglected in matching score level fusion, as each spectrum is compared individually. To fully utilize multispectral information, this study proposes a new multispectral palmprint recognition method based on a quaternion matrix. Our previous work [22] showed the effectiveness of a global quaternion model for palmprint recognition, but local quaternion features were not explored. To this end, this study proposes a method based on quaternion representation that incorporates both local and global features.

The concept of the quaternion was first proposed in 1843 by the Irish mathematician William Rowan Hamilton [23]. Because a quaternion model can represent several bands or matrices in one complex matrix without losing information, it has been used in color image processing [24,25] and multi-feature processing [26]. In this work, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations [14] are first represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) are applied to the matrix to extract palmprint features individually. After that, the Euclidean distance is used to measure the dissimilarity between features. Finally, matching score fusion and the nearest neighbor classifier are employed for recognition. Because the quaternion matrix fully utilizes the information of the multispectral images, higher recognition accuracy is expected in comparison with traditional fusion schemes.

The rest of the paper is organized as follows: a multispectral imaging device is briefly described in Section 2. The proposed method is outlined in Section 3. Experimental results are reported in Section 4. Finally, the conclusions are given in Section 5.

2. Multispectral Imaging Sensor

Figure 1 shows the structure of the designed multispectral palmprint imaging sensor and how the palm is situated.

The key components of the multispectral palmprint imaging device include a CCD camera, a lens, and an A/D converter. To ensure a semi-closed environment, the box containing the camera is made of opaque plastic and the central part of the device panel is hollow. The multispectral images are captured under four different wavelengths: NIR (880 nm), red (660 nm), green (525 nm) and blue (470 nm) [27]. These wavelengths were chosen because different lights penetrate different skin layers and enhance different features [28]. The system captures palmprint images at a resolution of 352 × 288. During image capture, users are asked to place the palm on a platform. The four palmprint images, with a resolution lower than 100 DPI (dots per inch), can be captured in a short time (less than one second) [19]. Figure 2 shows a typical multispectral palmprint sample.

3. The Framework of the Proposed Method

Figure 3 shows the whole framework of the proposed method. It includes four key steps: preprocessing, quaternion representation, feature extraction and matching.

3.1. Preprocessing

Before extracting palmprint features, a region of interest (ROI) is selected from each palmprint image. A coordinate system is built from the given image to reduce the effects of rotation and translation, and then a 128 × 128 ROI [5] is cropped from the whole image. Figure 4 illustrates the ROI extracted from Figure 2 by the method proposed in [5]. After that, histogram equalization is used to remove global intensity influence.
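The preprocessing step can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: it assumes the ROI corner coordinates have already been located by the coordinate-system method of [5], and the function names are our own.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of an 8-bit grayscale ROI to remove
    global intensity influence."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # map CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # lookup table for pixel values
    return lut[img]

def extract_roi(image, top, left, size=128):
    """Crop a size x size ROI at the given corner, then equalize it."""
    roi = image[top:top + size, left:left + size]
    return equalize_hist(roi)
```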

3.2. Quaternion Representation

To fully utilize the information in the multispectral palmprint images, they are represented by a quaternion matrix. In mathematics, quaternions form a noncommutative number system. A quaternion is a linear combination of a real scalar and three imaginary units:

q = a + bi + cj + dk    (1)
where i, j, k are the three imaginary units, satisfying i² = j² = k² = ijk = −1.

The conjugate of a quaternion is q* = a − bi − cj − dk, and its norm is:

|q| = √(qq*) = √(a² + b² + c² + d²)    (2)

I(x, y), R(x, y), G(x, y), and B(x, y) are used to represent four ROI images by NIR, red, green and blue illuminations, respectively. After preprocessing, a quaternion matrix, Q(x, y), is constructed by these four ROI images:

Q(x, y) = I(x, y) + R(x, y)i + G(x, y)j + B(x, y)k    (3)

In Equation (3), each pixel of Q(x, y) is represented by a 4D point. If there are less than four illuminations, we can still construct a quaternion matrix by replacing the missing illumination with a zero-matrix. This can also give good results, as shown in Section 4.
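As a concrete sketch (not the authors' implementation), the quaternion matrix and its basic operations can be modeled in NumPy by stacking the four bands along a leading component axis:

```python
import numpy as np

def quaternion_matrix(nir, red, green, blue):
    """Q(x, y) = I + R*i + G*j + B*k; axis 0 holds the (1, i, j, k) parts.
    A missing illumination can be passed as a zero matrix."""
    return np.stack([nir, red, green, blue]).astype(float)

def conjugate(q):
    """q* = a - b*i - c*j - d*k, applied pixel-wise."""
    out = q.copy()
    out[1:] *= -1
    return out

def qnorm(q):
    """|q| = sqrt(a^2 + b^2 + c^2 + d^2), pixel-wise."""
    return np.sqrt((q ** 2).sum(axis=0))
```

For example, a pixel with band values I = 1, R = 2, G = 3, B = 4 has norm √30.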

3.3. Feature Extraction

In this study, two kinds of features are extracted from a quaternion matrix, quaternion PCA (QPCA) [24] which captures global appearance of multispectral palmprint images, and quaternion DWT (QDWT) [29] which represents local texture information.

3.3.1. QPCA Feature Extraction

To speed up computation and reduce memory cost [8], the ROIs are downsampled to 64 × 64, as shown in Figure 5. In the future, instead of downsampling, we will investigate advanced methods such as incremental multilinear PCA [30]. The quaternion matrix is first converted to a quaternion vector, row by row. Figure 6 illustrates a quaternion vector built from Figure 5. Then a matrix for learning the projection matrix is built from n training samples: Sm×n = [Q1 Q2 … Qn]. Each column is the quaternion vector of one multispectral palmprint sample, and its length is m (64 × 64).

Given the matrix Sm×n, its covariance matrix is Cm×m. However, computing the eigenvectors and the eigenvalues of the matrix Cm×m is an intractable task for such a high dimension. In fact, the number of samples n is relatively small, thus it is easy to calculate eigenvectors and eigenvalues of matrix Cn×n:

Cn×n = (1/(n − 1)) E^T* E    (4)

E = S − S̄    (5)

S̄(x, y) = (1/n) Σ_{l=1}^{n} S(x, l),  x = 1, …, m, y = 1, …, n    (6)

where T* is the conjugate-transpose operator for a quaternion matrix. Cn×n is a Hermitian matrix, so through Householder transformations [24,31] it can be tridiagonalized to obtain B, a real tridiagonal symmetric matrix, and P, the product of the Householder matrices used in the tridiagonalization:

Cn×n = P^T* B P    (7)

Matrix B is real, so its eigenvectors are easily calculated. VB denotes the eigenvector matrix of B; each column of VB is an eigenvector of B. The eigenvectors of Cn×n can then be calculated as:

VC = P^T* VB    (8)

The eigenvalues of Cn×n are the eigenvalues of B as well; DC denotes the eigenvalues of Cn×n. With the eigenvectors and eigenvalues of Cn×n available, the eigenvectors and eigenvalues of Cm×m are computed by:

V = E VC    (9)

D = ((n − 1)/(m − 1)) DC    (10)
where V consists of eigenvectors of Cm×m, and D consists of eigenvalues of Cm×m.

The eigenvalues D are sorted in descending order, and an energy ratio is defined as:

ratio = (Σ_{x=1}^{p} Dx / Σ_{x=1}^{n} Dx) × 100%    (11)

where Dx is the xth eigenvalue. Given an energy ratio, the number of retained eigenvalues p can be calculated using Equation (11). The projection matrix P̂ is constructed from the first p eigenvectors in V. In this work, the energy ratio is set to 90% as a balance between accuracy and feature dimension.

For an input quaternion sample s, the QPCA feature fQPCA is computed as:

fQPCA = P̂^T* s    (12)
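A real-valued sketch of the same small-sample trick (eigendecomposing the n × n covariance instead of the m × m one, then lifting the eigenvectors back) may make the procedure concrete. Note this is the real-valued analogue only; the quaternion case additionally requires the Householder tridiagonalization described above.

```python
import numpy as np

def pca_small_sample(S, energy=0.90):
    """PCA via the small n x n covariance, for S of shape m x n with m >> n.
    Each column of S is one vectorized sample."""
    m, n = S.shape
    E = S - S.mean(axis=1, keepdims=True)      # centered data, as in Equation (5)
    C_small = E.T @ E / (n - 1)                # n x n covariance, cheap for n << m
    vals, vecs = np.linalg.eigh(C_small)
    order = np.argsort(vals)[::-1]             # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]
    ratio = np.cumsum(vals) / vals.sum()
    p = int(np.argmax(ratio >= energy)) + 1    # smallest p reaching the energy ratio
    V = E @ vecs[:, :p]                        # lift to eigenvectors of the m x m covariance
    V /= np.linalg.norm(V, axis=0)             # normalize columns
    return V

def project(V, s):
    """PCA feature of one vectorized sample s."""
    return V.T @ s
```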

3.3.2. QDWT Feature Extraction

After one scale of decomposition by the quaternion discrete wavelet transform, four groups of coefficients are generated [27,32]: one group of approximation coefficients and three groups of detail coefficients in three directions (horizontal, vertical and diagonal). Here, only the approximation coefficients are used as the texture feature, as the detail coefficients are sensitive to noise.

Given a quaternion sample Q = I + R·i + G·j + B·k, the approximation coefficients coefQ are:

coefQ = coefI + coefR·i + coefG·j + coefB·k

coefI = (I ⊗ g1) ⊗ g1 + (G ⊗ g2) ⊗ g1 + (G ⊗ g1) ⊗ g2 − (I ⊗ g2) ⊗ g2

coefR = (R ⊗ g1) ⊗ g1 + (B ⊗ g2) ⊗ g1 + (B ⊗ g1) ⊗ g2 − (R ⊗ g2) ⊗ g2

coefG = (I ⊗ g1) ⊗ g2 − (G ⊗ g2) ⊗ g2 − (G ⊗ g2) ⊗ g1 + (I ⊗ g1) ⊗ g1

coefB = (R ⊗ g1) ⊗ g2 − (B ⊗ g2) ⊗ g2 − (B ⊗ g2) ⊗ g1 + (R ⊗ g1) ⊗ g1
where ⊗ is the convolution operator, and g1 and g2 are two filters specifically designed for the QDWT by He and Li [29]:

g1 = [0, 0, 0, 0, 0, 1/(4√2), 0, 3/(4√2), 1/2, 0, 0, 0, 0] / 2

g2 = [0, 0, 0, 0, 0, 0, 0, 3/(4√2), 0, 3/(4√2), 0, 0, 0] / 2

To obtain stable features, the quaternion matrix of approximation coefficients is divided into non-overlapping blocks of size z × z. Here z is empirically set to 5 to balance accuracy against feature dimension.

Each block is itself a quaternion matrix:

block_{z×z} = coef^I_{z×z} + coef^R_{z×z}·i + coef^G_{z×z}·j + coef^B_{z×z}·k

The standard deviation of a block is a quaternion constructed from the standard deviation of each band in the block:

σ_l = σ(coef^I_{z×z}) + σ(coef^R_{z×z})·i + σ(coef^G_{z×z})·j + σ(coef^B_{z×z})·k

where σ_l is the standard deviation of the lth block. A quaternion vector is then built by concatenating the standard deviations of all blocks, and this vector is used as the QDWT feature:

fQDWT = [σ_1, σ_2, …, σ_h]

where h is the number of blocks used.
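Assuming the four bands of approximation coefficients are stacked in a 4 × H × W array (as in the quaternion-matrix sketch earlier), the block-wise standard-deviation feature can be sketched as:

```python
import numpy as np

def block_std_features(coef, z=5):
    """Divide each band into non-overlapping z x z blocks and take the
    per-band standard deviation of every block; each row of the result
    is one sigma_l as its four quaternion components."""
    bands, h, w = coef.shape
    feats = []
    for top in range(0, h - h % z, z):
        for left in range(0, w - w % z, z):
            block = coef[:, top:top + z, left:left + z]
            feats.append(block.reshape(bands, -1).std(axis=1))
    return np.stack(feats)
```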

3.3.3. Feature Matching

The Euclidean distance between two quaternions p and q is defined as:

d(p, q) = |p − q|

After feature extraction, each multispectral palmprint sample has two features: a QPCA feature fQPCA and a QDWT feature fQDWT. Accordingly, two distances are obtained. The QPCA distance dQPCA is the Euclidean distance between the QPCA features f^u_QPCA and f^v_QPCA of two given samples su and sv, while the QDWT distance dQDWT is the Euclidean distance between their QDWT features f^u_QDWT and f^v_QDWT:

dQPCA = d(f^u_QPCA, f^v_QPCA)

dQDWT = d(f^u_QDWT, f^v_QDWT)

The final distance between two samples is the weighted fusion of the QPCA distance and the QDWT distance. Before fusion, each distance is normalized by dividing it by the standard deviation of the corresponding distances between the training samples [33]:

d̄QPCA = dQPCA / σ^Training_QPCA

d̄QDWT = dQDWT / σ^Training_QDWT

where σ^Training_QPCA and σ^Training_QDWT are the standard deviations of the distances between the training samples. After normalization, the distance between two given samples is calculated as:

d = d̄QPCA wQPCA + d̄QDWT wQDWT

where wQPCA is the weight of the QPCA distance and wQDWT = 1 − wQPCA is the weight of the QDWT distance. The weights can be adjusted to achieve higher recognition accuracy; in this work, the weights of the QPCA distance and the QDWT distance are set empirically to 0.6 and 0.4.
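Treating each quaternion feature as a real vector of its four components, the normalization and weighted fusion above can be sketched as follows; the sigma values are assumed to be precomputed from training-set distances:

```python
import numpy as np

def fused_distance(fu_pca, fv_pca, fu_dwt, fv_dwt,
                   sigma_pca, sigma_dwt, w_pca=0.6):
    """Normalize each feature distance by its training-set standard
    deviation, then fuse with weights w_pca and (1 - w_pca)."""
    d_pca = np.linalg.norm(fu_pca - fv_pca) / sigma_pca
    d_dwt = np.linalg.norm(fu_dwt - fv_dwt) / sigma_dwt
    return w_pca * d_pca + (1 - w_pca) * d_dwt
```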

4. Experimental Results

The experiments are based on a large multispectral palmprint database [19]. The nearest neighbor classifier is used, and recognition accuracy is computed for performance evaluation.
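The evaluation protocol can be sketched as a nearest-neighbor search over the fused distance matrix; computing the distances themselves is assumed done:

```python
import numpy as np

def nearest_neighbor_accuracy(dist, train_labels, test_labels):
    """dist[i, j] is the fused distance between test sample i and training
    sample j; each test sample takes the label of its nearest neighbor."""
    pred = train_labels[np.argmin(dist, axis=1)]
    return float((pred == test_labels).mean())
```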

4.1. Multispectral Palmprint Database

The database consists of 500 different palms, and each palm was sampled 12 times in two sessions with a time interval of about 5–15 days. In each session, the subjects were asked to provide six groups of images. Each group contained four images under four different illuminations, so there were 6,000 groups of palmprint images [19]. The 3,000 groups of palmprint images captured in the first session were used as the training samples, and the remaining samples were used as the test samples.

4.2. Recognition Accuracy

Recognition accuracy is obtained by matching each palmprint sample in the test set against all the samples in the training set. Table 1 shows the recognition accuracy of different methods. From Table 1, we can see that quaternion representation improves recognition accuracy significantly compared with single illumination. As the quaternion representation keeps all the information during feature extraction, it obtains better results than image level fusion and matching score level fusion. The combination of QPCA and QDWT further improves the recognition accuracy. Figure 7 shows a pair of multispectral palmprint images which were collected from the same palm but classified wrongly. As Figure 7 shows, ROI extraction somewhat influences the recognition performance of the proposed method. ROI extraction will therefore be a focus of our future work to improve recognition accuracy.

4.3. Special Arrangement When the Number of Illuminations Is Less Than 4

To evaluate the performance of our method when fewer than four illuminations are available, we replace some bands with a zero matrix. Table 2 shows the recognition accuracy of different situations using QDWT. A band marked 1 is used in the quaternion matrix, and a band marked 0 is replaced by a zero matrix.

From Table 2, we find that the recognition accuracy is still better than that of single illumination when some of the bands are replaced with zero matrices. The finding is similar to the result using QPCA. This validates that quaternion representation is applicable when fewer than four feature bands are available. However, the proposed method is limited to a maximum of four bands; how to extend the quaternion representation to more bands will be a topic for our future work.

Table 3 illustrates the correlation between different spectra. Several observations can be made from Tables 2 and 3. First, as shown in Table 3, as the wavelength difference between two spectra increases, the correlation of their palmprint images becomes lower. For example, the correlation between Blue and Green is 0.7421, while the correlation between Blue and NIR is only 0.4487. Second, when the correlation between two spectra is low, the palmprint images of these two spectra contain more complementary information, so the accuracy improvement from their fusion is more significant: QDWT of NIR and Blue reaches 98.13%, while QDWT of Green and Blue is only 94.87%. Third, the quaternion model is an efficient way to utilize the information of multispectral palmprint images, as the best accuracy using two or three spectra is 98.13%, which is lower than that using all four spectra (98.50%).

4.4. Speed

The experiments were implemented in Matlab 7.0 on a PC with Windows XP (x64), a Xeon 5160 CPU (3.0 GHz), and 4 GB of RAM. The execution time for preprocessing, feature extraction and feature matching is listed in Table 4. As shown in Table 4, the proposed method is fast enough for real-time applications.

5. Conclusions

In this paper, to fully utilize the information of multispectral palmprint images, a quaternion model is employed for multispectral biometrics, to the best of our knowledge, for the first time. QPCA is proposed to represent global features, while QDWT is designed to extract local features. Their fusion achieves 98.83% recognition accuracy on 500 palms. The experimental results show that the proposed method is good enough for real applications and that the quaternion model is an effective and efficient technique for multispectral biometrics. The special arrangement shows that the quaternion matrix remains effective for multispectral palmprints when fewer than four illuminations are available.

In the future, we will try to apply the proposed method to other multispectral biometrics, such as face and iris recognition. We will also explore advanced feature extraction methods on the quaternion matrix, such as the kernel method [34]. How to represent multispectral palmprint images with more than four bands is another research direction, and finding the similarity between palms as a quantitative measure [35] will also be explored.

Acknowledgments

The work is partially supported by National Science Foundation of China (No. 61101150, 61105011, 60872138), and the Specialized Research Fund for the Doctoral Program of Higher Education (No.20100203120010).

References

  1. Jain, A.; Bolle, R.; Pankanti, S. Biometrics: Personal Identification in Networked Society; Kluwer Academic Publisher: Norwell, MA, USA, 1999. [Google Scholar]
  2. Jain, A.; Kumar, A. Biometrics of Next Generation: An Overview. In Second Generation Biometrics; Mordini, E., Tzovaras, D., Eds.; Springer: Berlin, Germany, 2010. [Google Scholar]
  3. Jain, A.; Hong, L.; Bolle, R. On-line fingerprint verification. IEEE Trans. Patt. Anal. Mach. Intell. 1997, 19, 302–314. [Google Scholar]
  4. Jain, A.; Feng, J.; Nandakumar, K. Fingerprint matching. IEEE Comput 2010, 43, 36–44. [Google Scholar]
  5. Zhang, D.; Kong, W.; You, J.; Wong, M. Online palmprint identification. IEEE Trans. Patt. Anal. Mach. Intell. 2003, 25, 1041–1049. [Google Scholar]
  6. Hu, D.; Feng, G.; Zhou, Z. Two-dimensional locality preserving projections (2DLPP) with its application to palmprint recognition. Pattern Recognit. 2007, 40, 339–342. [Google Scholar]
  7. Jia, W.; Huang, D.; Zhang, D. Palmprint verification based on robust line orientation code. Pattern Recognit. 2008, 41, 1504–1513. [Google Scholar]
  8. Wu, X.; Zhang, D.; Wang, K. Fisherpalms based palmprint recognition. Pattern Recognit. Lett. 2003, 24, 2829–2838. [Google Scholar]
  9. Daugman, J. The importance of being random: Statistical principles of iris recognition. Pattern Recognit. 2003, 36, 279–291. [Google Scholar]
  10. Chen, Y.; Adjouadi, M.; Han, C.; Wang, J.; Barreto, A.; Rishe, N.; Andrian, J. A highly accurate and computationally efficient approach for unconstrained iris segmentation. Image Vis. Comput. 2010, 28, 261–269. [Google Scholar]
  11. Proença, H.; Alexandre, L.A. The NICE.I: Noisy Iris Challenge Evaluation—Part I. Proceedings of the IEEE First International Conference on Biometrics: Theory, Applications and Systems (BTAS), Crystal City, VA, USA, 27–29 September 2007; pp. 1–4.
  12. Boyce, C.; Ross, A.; Monaco, M.; Hornak, L.; Li, X. Multispectral Iris Analysis: A Preliminary Study. Proceedings of the IEEE Computer Society Workshop on Biometrics, New York, NY, USA, 17–18 June 2006; pp. 51–51.
  13. Xu, Y.; Yang, J.; Lu, J.; Yu, D. An efficient renovation on kernel Fisher discriminant analysis and face recognition experiments. Pattern Recognit. 2004, 37, 2091–2094. [Google Scholar]
  14. Chang, H.; Koschan, A.; Abidi, B.; Abidi, M. Physics-based fusion of multispectral data for improved face recognition. Pattern Recognit. 2006, 39, 1083–1086. [Google Scholar]
  15. Xu, Y.; Zhang, D.; Yang, J.; Yang, J. An approach for directly extracting features from matrix data and its application in face recognition. Neurocomputing 2008, 71, 1857–1865. [Google Scholar]
  16. Wang, J.; Barreto, A.; Wang, L.; Chen, Y.; Rishe, N.; Andrian, J.; Adjouadi, M. Multilinear principal component analysis for face recognition with fewer features. Neurocomputing 2010, 73, 1550–1555. [Google Scholar]
  17. Zhang, L.; Zhang, L.; Zhang, D.; Zhu, H. Online finger-knuckle-print verification for personal authentication. Pattern Recognit. 2010, 43, 2560–2571. [Google Scholar]
  18. Rowe, R.; Uludag, U.; Demirkus, M.; Parthasaradhi, S.; Jain, A. A Multispectral Whole-hand Biometric Authentication System. Proceedings of the Biometrics Symposium, Baltimore, MD, USA, 11–13 September 2007; pp. 1–6.
  19. Zhang, D.; Guo, Z.; Lu, G.; Zhang, L.; Zuo, W. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas. 2010, 59, 480–490. [Google Scholar]
  20. Hao, Y.; Sun, Z.; Tan, T.; Ren, C. Multispectral Palm Image Fusion for Accurate Contact-free Palmprint Recognition. Proceedings of the 15th IEEE International Conference on Image Processing (ICIP 2008), San Diego, CA, USA, 12–15 October 2008; pp. 281–284.
  21. Wang, J.; Yao, W.; Suwandy, A.; Sung, E. Person recognition by fusing palmprint and palm vein images based on ‘laplacianpalm’ representation. Pattern Recognit. 2008, 41, 1531–1544. [Google Scholar]
  22. Xu, X.; Guo, Z. Multispectral Palmprint Recognition Using Quaternion Palmprint Component Analysis. Proceedings of the International Workshop on Emerging Techniques and Challenges for Hand-based Biometrics (ETCHB), Istanbul, Turkey, 22 August 2010; pp. 1–5.
  23. Hamilton, W.R. On Quaternions. In Proceedings of the Royal Irish Academy; Royal Irish Academy: Dublin, Ireland, 1844. [Google Scholar]
  24. Bihan, N.; Sangwine, S. Quaternion Principal Component Analysis of Color Images. Proceedings of the International Conference on Image Processing (ICIP 2003), Barcelona, Spain, 14–17 September 2003; pp. 809–812.
  25. Shi, L.; Funt, B. Quaternion color texture segmentation. Comput. Vis. Image Underst. 2007, 107, 88–96. [Google Scholar]
  26. Xie, C.; Savvides, M.; Kumar, B. Quaternion Correlation Filters for Face Recognition in Wavelet Domain. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), Philadelphia, PA, USA, 18–23 March 2005; pp. 85–88.
  27. Han, D.; Guo, Z.; Zhang, D. Multispectral Palmprint Recognition using Wavelet-based Image Fusion. Proceedings of the 9th International Conference on Signal Processing (ICSP 2008), Beijing, China, 26–29 October 2008; pp. 2074–2077.
  28. Zharov, V.; Ferguson, S.; Eidt, J.; Howard, P.; Fink, L.; Waner, M. Infrared imaging of subcutaneous veins. Lasers Surg. Med. 2004, 34, 56–61. [Google Scholar]
  29. He, J.; Li, Y. Construction of quaternion-valued scaling function and wavelets. J. Guangzhou Univ. 2007, 6, 17–22. [Google Scholar]
  30. Wang, J.; Barreto, A.; Rishe, N.; Andrian, J.; Adjouadi, M. A fast incremental multilinear principal component analysis algorithm. Int. J. Innov. Comput. Inf. Control 2011, 7, 6019–6040. [Google Scholar]
  31. Bihan, N.; Sangwine, S. Jacobi method for quaternion matrix singular value decomposition. Appl. Math. Comput. 2007, 187, 1265–1271. [Google Scholar]
  32. Chan, W.; Choi, H.; Baraniuk, R. Quaternion Wavelets for Image Analysis and Processing. Proceedings of the IEEE International Conference on Image Processing (ICIP 2004), Singapore, 24–27 October 2004; pp. 3057–3060.
  33. Ross, A.A.; Nandakumar, K.; Jain, A.K. Handbook of Multibiometrics; Springer: New York, NY, USA, 2006. [Google Scholar]
  34. Xu, Y.; Zhang, D.; Song, F.; Yang, J.; Jing, Z.; Li, M. A method for speeding up feature extraction based on KPCA. Neurocomputing 2007, 70, 1056–1061. [Google Scholar]
  35. Candocia, F.; Adjouadi, M. A similarity measure for stereo feature matching. IEEE Trans. Image Process 1997, 6, 1460–1464. [Google Scholar]
Figure 1. (a) The structure of the multispectral palmprint imaging sensor. (b) Device panel and how the palm of the hand is situated.
Figure 2. A typical multispectral palmprint sample. (a) Blue; (b) Green; (c) Red; (d) NIR. The white square is the region of interest (ROI) of the image.
Figure 3. The framework of the proposed method.
Figure 4. ROI of Figure 2. (a) Blue; (b) Green; (c) Red; (d) NIR.
Figure 5. The ROIs of multispectral palmprint images under 4 different kinds of illuminations after the preprocessing and downsampling. (a) NIR. (b) Red. (c) Green. (d) Blue.
Figure 6. A quaternion vector sample built using the input image in Figure 5.
Figure 7. A pair of multispectral palmprint images from the same palm but falsely recognized. (a–d) are the four images of one instance, and (e–h) are the four images of another instance captured at a different time.
Table 1. Recognition Accuracy.

| Experiments | Recognition Accuracy |
| NIR PCA | 94.60% |
| Red PCA | 96.30% |
| Green PCA | 93.47% |
| Blue PCA | 93.47% |
| Image level fusion by PCA | 95.17% |
| Matching score level fusion by PCA | 98.07% |
| QPCA | 98.13% |
| NIR DWT | 94.60% |
| Red DWT | 95.20% |
| Green DWT | 93.50% |
| Blue DWT | 93.83% |
| Image level fusion by DWT | 96.60% |
| Matching score level fusion by DWT | 98.00% |
| QDWT | 98.50% |
| QPCA+QDWT | 98.83% |
Table 2. Recognition accuracy of different situations using QDWT.

| NIR | Red | Green | Blue | Recognition Accuracy |
| 1 | 1 | 0 | 0 | 97.17% |
| 1 | 0 | 1 | 0 | 98.10% |
| 1 | 0 | 0 | 1 | 98.13% |
| 0 | 1 | 1 | 0 | 97.23% |
| 0 | 1 | 0 | 1 | 97.33% |
| 0 | 0 | 1 | 1 | 94.87% |
| 1 | 1 | 1 | 0 | 98.03% |
| 1 | 1 | 0 | 1 | 97.90% |
| 1 | 0 | 1 | 1 | 98.13% |
| 0 | 1 | 1 | 1 | 97.10% |
Table 3. Image correlation between different spectra.

|  | NIR | Red | Green | Blue |
| NIR | 1 | - | - | - |
| Red | 0.7470 | 1 | - | - |
| Green | 0.3690 | 0.5060 | 1 | - |
| Blue | 0.4487 | 0.6829 | 0.7421 | 1 |
Table 4. Execution Time.

| Step | Average Time (ms) |
| Preprocessing | 20 |
| QPCA feature extraction | 46 |
| QDWT feature extraction | 547 |
| QPCA feature matching | 0.42 |
| QDWT feature matching | 0.43 |

Xu, X.; Guo, Z.; Song, C.; Li, Y. Multispectral Palmprint Recognition Using a Quaternion Matrix. Sensors 2012, 12, 4633-4647. https://doi.org/10.3390/s120404633