Multispectral Palmprint Recognition Using a Quaternion Matrix
Abstract: Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, and thus achieve better recognition accuracy. Previously, multispectral palmprint images were treated as a kind of multi-modal biometrics, and fusion was performed at the image level or matching score level. However, some spectral information is lost during image-level or matching-score-level fusion. In this study, we propose a new method for multispectral images based on a quaternion model, which can fully utilize the multispectral information. First, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations are represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) are applied to the matrix to extract palmprint features. After that, the Euclidean distance is used to measure the dissimilarity between features. Finally, the sum of the two distances and the nearest neighbor classifier are employed for the recognition decision. Experimental results show that using the quaternion matrix can achieve a higher recognition rate. Given 3,000 test samples from 500 palms, the recognition rate can be as high as 98.83%.

1. Introduction
There are two main categories of traditional automatic personal identification: token-based methods, which rely on items such as driver licenses, passports, and other ID documents; and knowledge-based methods, which rely on signatures or password-protected access [1]. However, keys, ID cards, and passports are easily lost or stolen, and passwords can be forgotten. To avoid these disadvantages, biometrics has become more and more popular in personal identification. Compared with traditional authentication methods, biometric features are difficult to steal, lose, copy, or alter [1,2]. This makes biometric authentication reliable and efficient in situations where high security is needed. Many biometric features have been studied and used in recent decades, such as fingerprints [3,4], palmprints [5–8], iris images [9–12], human faces [13–16] and finger-knuckle-prints [17].
Among these features, palmprints are widely studied [5–8], as they have many merits, being user-friendly, low cost, robust, and highly reliable [5]. Traditional palmprint recognition systems use images acquired using a single source of illumination, which may cause different palms to have similar appearance due to the limited information.
To address the issue of limited information, multispectral imaging, which provides several images of the same scene under different illuminations, has been applied to biometrics, including face recognition [14] and iris recognition [12]. Some pioneering work on multispectral palmprints has also been reported. Rowe et al. [18] designed a prototype whole-hand multispectral imaging system. Zhang et al. [19] developed a fast multispectral palmprint prototype system. Hao et al. [20] proposed a touchless multispectral palmprint system which differs from those in [18] and [19] but is similar to a typical iris recognition system, in that the user does not need to touch the sensor. These works regarded multispectral palmprint images as a kind of multi-modal biometrics and used fusion schemes at different levels, such as the image level [20] and the matching score level [18,19]. However, some useful information is lost in image fusion [21], while the correlation between different spectra is neglected in matching score level fusion because each spectrum is compared individually. To fully utilize the multispectral information, a new multispectral palmprint recognition method based on a quaternion matrix is proposed in this study. Our previous work [22] showed the effectiveness of a global quaternion model for palmprint recognition, but local quaternion features were not explored. To this end, this study proposes a method based on the quaternion representation that incorporates both local and global features.
The concept of the quaternion was first proposed in 1843 by the Irish mathematician William Rowan Hamilton [23]. Because a quaternion model can represent several bands or matrices in one hypercomplex matrix without losing information, it has been used in color image processing [24,25] and multi-feature processing [26]. In this work, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illumination [14] are first represented by a quaternion matrix; then principal component analysis (PCA) and discrete wavelet transform (DWT) are applied to the matrix individually to extract palmprint features. After that, the Euclidean distance is used to measure the dissimilarity between features. Finally, matching score fusion and the nearest neighbor classifier are employed for recognition. Because the quaternion matrix fully utilizes the information of the multispectral images, higher recognition accuracy is expected in comparison with the traditional fusion schemes.
The rest of the paper is organized as follows: a multispectral imaging device is briefly described in Section 2. The proposed method is outlined in Section 3. Experimental results are reported in Section 4. Finally, the conclusions are given in Section 5.
2. Multispectral Imaging Sensor
Figure 1 shows the structure of the designed multispectral palmprint imaging sensor and how the palm is situated.
The key components of the multispectral palmprint imaging device include a CCD camera, a lens, and an A/D converter. To ensure a semi-closed environment, the box containing the camera is made of opaque plastic and the central part of the device panel is hollow. The multispectral images were captured at four different wavelengths: NIR (880 nm), red (660 nm), green (525 nm) and blue (470 nm) [27]. These wavelengths were chosen because different lights penetrate different skin layers and enhance different features [28]. The system captures palmprint images at a resolution of 352 × 288. During image capture, users are asked to put the palm on a platform. The four palmprint images, with a resolution lower than 100 DPI (dots per inch), can be captured in a short time (less than one second) [19]. Figure 2 shows a typical multispectral palmprint sample.
3. The Framework of the Proposed Method
Figure 3 shows the whole framework of the proposed method. It includes four key steps: preprocessing, quaternion representation, feature extraction and matching.
3.1. Preprocessing
Before extracting palmprint features, a region of interest (ROI) of a palmprint image is selected first. A coordinate system to reduce rotation and translation effects is built from the given image and then a 128 × 128 ROI [5] is cropped from the whole image. Figure 4 illustrates the extracted ROI from Figure 2 by the method proposed in [5]. After that, histogram equalization is used to remove global intensity influence.
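As a concrete illustration of this step, the following Python sketch applies global histogram equalization to an already-cropped 8-bit ROI; the ROI localization itself follows the coordinate-system method of [5] and is not reproduced here, and the function name and NumPy-based implementation are illustrative choices rather than the authors' code.

```python
import numpy as np

def equalize_histogram(roi: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale ROI (e.g., 128 x 128)."""
    hist, _ = np.histogram(roi.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)              # ignore empty intensity bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255.0 / (cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)   # intensity lookup table
    return lut[roi]                                      # remap every pixel
```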
3.2. Quaternion Representation
To fully utilize the information in the multispectral palmprint images, they are represented by a quaternion matrix. In mathematics, quaternions form a noncommutative number system. A quaternion is a linear combination of a real scalar and three imaginary units:

q = a + bi + cj + dk, where a, b, c and d are real numbers and the imaginary units satisfy i² = j² = k² = ijk = −1.
The conjugate of a quaternion is q* = a − bi − cj − dk, and its norm is |q| = √(a² + b² + c² + d²).
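For readers less familiar with quaternion arithmetic, a minimal sketch of these two definitions is given below, with a quaternion stored as a real 4-vector [a, b, c, d]; this storage layout is an implementation convenience assumed here, not something prescribed by the paper.

```python
import numpy as np

# A quaternion q = a + b*i + c*j + d*k stored as a real 4-vector [a, b, c, d].
def conjugate(q: np.ndarray) -> np.ndarray:
    """q* = a - b*i - c*j - d*k."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def norm(q: np.ndarray) -> float:
    """|q| = sqrt(a^2 + b^2 + c^2 + d^2)."""
    return float(np.sqrt(np.sum(q ** 2)))

q = np.array([1.0, 2.0, -1.0, 0.5])
assert np.isclose(norm(q), norm(conjugate(q)))  # conjugation preserves the norm
```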
Let I(x, y), R(x, y), G(x, y), and B(x, y) denote the four ROI images captured under NIR, red, green and blue illumination, respectively. After preprocessing, a quaternion matrix Q(x, y) is constructed from these four ROI images:

Q(x, y) = I(x, y) + R(x, y)·i + G(x, y)·j + B(x, y)·k (3)
In Equation (3), each pixel of Q(x, y) is represented as a 4D point. If fewer than four illuminations are available, a quaternion matrix can still be constructed by replacing each missing band with a zero matrix. This also gives good results, as shown in Section 4.
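Under the 4-vector-per-pixel convention above, the construction of Q(x, y), including the zero-matrix substitution for missing bands, might be sketched as follows; the helper name and the (rows, cols, 4) array layout are our assumptions.

```python
import numpy as np

def build_quaternion_matrix(nir, red, green, blue):
    """Stack four ROI images into Q(x, y) = I + R*i + G*j + B*k.

    Any argument may be None, in which case that band is replaced by a zero
    matrix, as described above for the fewer-than-four-illumination case.
    """
    bands = [nir, red, green, blue]
    shape = next(b.shape for b in bands if b is not None)
    bands = [np.zeros(shape) if b is None else b.astype(float) for b in bands]
    return np.stack(bands, axis=-1)  # shape (rows, cols, 4): one quaternion per pixel

# Example with only NIR and blue available (cf. Section 4.3):
# Q = build_quaternion_matrix(nir_roi, None, None, blue_roi)
```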
3.3. Feature Extraction
In this study, two kinds of features are extracted from the quaternion matrix: quaternion PCA (QPCA) [24], which captures the global appearance of multispectral palmprint images, and quaternion DWT (QDWT) [29], which represents local texture information.
3.3.1. QPCA Feature Extraction
To speed up computation and reduce memory cost [8], the ROIs are downsampled to 64 × 64, as shown in Figure 5. In the future, instead of downsampling, we will investigate advanced methods such as incremental multilinear PCA [30]. The quaternion matrix is first converted to a quaternion vector by concatenating its rows. Figure 6 illustrates the quaternion vector built from Figure 5. Then a matrix Sm×n = [Q1 Q2 … Qn] is built from the n training samples to learn the projection matrix. Each column is the quaternion vector of one multispectral palmprint sample, and its length is m (64 × 64).
Given the matrix Sm×n, its covariance matrix is Cm×m. However, computing the eigenvectors and eigenvalues of Cm×m directly is intractable at such a high dimension. Since the number of samples n is relatively small, it is much easier to calculate the eigenvectors and eigenvalues of the matrix Cn×n:
Matrix B is a real matrix, so its eigenvectors are easily calculated. Let VB denote the eigenvector matrix of B; each column of VB is an eigenvector of B. The eigenvectors of Cn×n can then be calculated as:
The eigenvalues of Cn×n are also the eigenvalues of matrix B; Dc is used to denote them. With the eigenvectors and eigenvalues of Cn×n available, the eigenvalues and eigenvectors of Cm×m are finally computed by:
Eigenvalues D are sorted in descending order, and an energy ratio is defined as:
For an input quaternion sample s, the QPCA feature fQPCA is computed as:
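The procedure above can be summarized in code. The sketch below mirrors its structure (vectorize the 64 × 64 × 4 samples, diagonalize the small n × n matrix instead of the huge m × m covariance, keep the leading eigenvectors according to an energy ratio, and project a new sample); for simplicity it works on the real 4-vector encoding of the quaternion data rather than performing the true quaternion eigendecomposition of [24], and the 0.95 energy threshold is an assumed value.

```python
import numpy as np

def train_qpca(samples, energy=0.95):
    """samples: list of (64, 64, 4) downsampled quaternion ROIs.

    Uses the small n x n Gram matrix to obtain the eigenvectors of the large
    covariance matrix (the trick described in Section 3.3.1).
    """
    S = np.stack([s.reshape(-1) for s in samples], axis=1)   # m x n, m = 4*64*64
    mean = S.mean(axis=1, keepdims=True)
    A = S - mean
    gram = A.T @ A                                           # n x n, cheap to diagonalize
    vals, vecs = np.linalg.eigh(gram)
    order = np.argsort(vals)[::-1]                           # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    eigvecs = A @ vecs                                       # lift back to the m-dim space
    eigvecs /= np.linalg.norm(eigvecs, axis=0, keepdims=True)
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), energy)) + 1
    return mean, eigvecs[:, :k]

def qpca_feature(sample, mean, eigvecs):
    """Project one sample onto the retained eigenvectors."""
    return eigvecs.T @ (sample.reshape(-1, 1) - mean)
```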
3.3.2. QDWT Feature Extraction
After a one-level decomposition, the quaternion discrete wavelet transform generates four groups of coefficients [27,32]: one group of approximation coefficients and three groups of detail coefficients in three directions (horizontal, vertical and diagonal). Here, only the approximation coefficients are used as texture features, since the detail coefficients are sensitive to noise.
Given a quaternion sample Q = I + R · i + G · j + B · k, the approximation coefficients coefQ are:
To obtain stable features, the quaternion matrix is divided into non-overlapping blocks of size z × z, where z is empirically set to 5 to balance accuracy against feature dimension.
For each block, a standard deviation is computed as a quaternion whose four components are the standard deviations of the four bands within that block; these block-wise quaternions form the QDWT feature.
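A compact sketch of this feature, under stated assumptions (a per-band Haar DWT via PyWavelets standing in for the quaternion wavelet of [29], and z = 5 as in the text), is shown below; the function name and the concatenated output layout are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def qdwt_block_feature(Q, z=5):
    """Q: (rows, cols, 4) quaternion matrix; z: block size (5 in the paper).

    One-level DWT per band, keep only the approximation coefficients, then
    compute one quaternion per z x z block whose components are the per-band
    standard deviations inside that block.
    """
    approx = np.stack([pywt.dwt2(Q[:, :, b], 'haar')[0] for b in range(4)], axis=-1)
    rows, cols, _ = approx.shape
    feats = []
    for r in range(0, rows - rows % z, z):
        for c in range(0, cols - cols % z, z):
            block = approx[r:r + z, c:c + z, :]             # z x z x 4 block
            feats.append(block.reshape(-1, 4).std(axis=0))  # per-band standard deviations
    return np.concatenate(feats)                            # flattened QDWT feature
```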
3.3.3. Feature Matching
The Euclidean distance between two quaternions p and q is defined via the quaternion norm introduced in Section 3.2:

d(p, q) = |p − q| = √((ap − aq)² + (bp − bq)² + (cp − cq)² + (dp − dq)²)
After feature extraction, each multispectral palmprint sample has two different features: one QPCA feature fQPCA and one QDWT feature fQDWT. Thus, two different distances are obtained: the QPCA distance dQPCA is the Euclidean distance between the QPCA features of two given samples su and sv, while the QDWT distance dQDWT is the Euclidean distance between their QDWT features:
The final distance between two samples is a weighted fusion of the QPCA distance and the QDWT distance. Before fusion, the QPCA and QDWT distances are normalized by dividing each by the standard deviation of the corresponding distances between training samples, as expressed below [33]:
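Putting the matching stage together, the sketch below computes the two Euclidean distances, normalizes each by the standard deviation of the corresponding training-set distances, fuses them, and makes the nearest-neighbor decision; the equal weights (a plain sum of the normalized distances, as in the abstract) and the function names are our assumptions.

```python
import numpy as np

def euclidean(f1, f2):
    return float(np.linalg.norm(f1 - f2))

def fused_distance(d_pca, d_dwt, sigma_pca, sigma_dwt):
    """Normalize each distance by the standard deviation of the training-set
    distances [33], then sum the normalized distances."""
    return d_pca / sigma_pca + d_dwt / sigma_dwt

def nearest_neighbor(test_pca, test_dwt, gallery, sigma_pca, sigma_dwt):
    """gallery: list of (label, f_qpca, f_qdwt) tuples from the training set."""
    scored = [(fused_distance(euclidean(test_pca, fp), euclidean(test_dwt, fd),
                              sigma_pca, sigma_dwt), label)
              for label, fp, fd in gallery]
    return min(scored, key=lambda t: t[0])[1]  # label of the closest training sample
```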
4. Experimental Results
The experiments are based on a large multispectral palmprint database [19]. The nearest neighbor classifier is used, and recognition accuracy is computed for performance evaluation.
4.1. Multispectral Palmprint Database
The database consists of 500 different palms, and each palm was sampled 12 times in two sessions with a time interval of about 5–15 days. In each session, the subjects were asked to provide six groups of images. Each group contained four images under four different illuminations, so there were 6,000 groups of palmprint images [19]. The 3,000 groups of palmprint images captured in the first session were used as the training samples, and the remaining samples were used as the test samples.
4.2. Recognition Accuracy
Recognition accuracy is obtained by matching each palmprint sample in the test set against all samples in the training set. Table 1 shows the recognition accuracy of the different methods. From Table 1, we can see that the quaternion representation improves recognition accuracy significantly compared with single illumination. Because the quaternion representation keeps all the information during feature extraction, it achieves better results than image level fusion and matching score level fusion. The combination of QPCA and QDWT further improves the recognition accuracy. Figure 7 shows a pair of multispectral palmprint images that were collected from the same palm but were wrongly classified. As shown in Figure 7, ROI extraction somewhat influences the recognition performance of the proposed method. ROI extraction will therefore be a focus of our future work to improve recognition accuracy.
4.3. Special Arrangement When the Number of Illuminations Is Less Than 4
To evaluate the performance of our method with fewer than four illuminations, we replace some bands with a zero matrix. Table 2 shows the recognition accuracy of the different configurations using QDWT. A band marked 1 is used in the quaternion matrix, while a band marked 0 is replaced by a zero matrix.
From Table 2, we find that the recognition accuracy is still better than single illumination when some of the bands are replaced with zero matrices. The finding is similar to the result using QPCA. This validates that the quaternion representation is applicable when fewer than four bands are available. However, the proposed method is limited to a maximum of four bands. How to extend the quaternion representation to more bands will be a topic for our future work.
Table 3 shows the correlation between different spectra. Several observations can be made from Tables 2 and 3. First, as shown in Table 3, as the wavelength difference between two spectra increases, the correlation between their palmprint images decreases. For example, the correlation between Blue and Green is 0.7421, while the correlation between Blue and NIR is only 0.4487. Second, when the correlation between two spectra is low, the palmprint images of these two spectra contain more complementary information, so the accuracy improvement from their fusion is more significant. Thus, QDWT of NIR and Blue reaches 98.13%, while QDWT of Green and Blue reaches only 94.87%. Third, the quaternion model is an efficient way to utilize the information in multispectral palmprint images, as the best accuracy using two or three spectra is 98.13%, which is lower than that using all four spectra (98.50%).
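The text does not spell out how the correlation values in Table 3 are computed; assuming a Pearson correlation coefficient over co-registered ROI pixels, a value such as the 0.7421 between Blue and Green could be obtained with a one-line helper like the following (this measure is our assumption, not necessarily the authors' exact definition).

```python
import numpy as np

def band_correlation(band_a: np.ndarray, band_b: np.ndarray) -> float:
    """Pearson correlation between two co-registered ROI images (cf. Table 3)."""
    return float(np.corrcoef(band_a.ravel(), band_b.ravel())[0, 1])
```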
4.4. Speed
The experiments were implemented in Matlab 7.0 on a PC with Windows XP (x64), a Xeon 5160 CPU (3.0 GHz), and 4 GB RAM. The execution times for preprocessing, feature extraction and feature matching are listed in Table 4. As shown in Table 4, the proposed method is fast enough for real-time applications.
5. Conclusions
In this paper, to fully utilize the information of multispectral palmprint images, a quaternion model is, to the best of our knowledge, employed for multispectral biometrics for the first time. QPCA is proposed to represent global features, while QDWT is designed to extract local features. Their fusion achieves 98.83% recognition accuracy on 500 palms. The experimental results show that the proposed method is suitable for real applications and that the quaternion model is an effective and efficient technique for multispectral biometrics. The special arrangement shows that the quaternion matrix remains effective for multispectral palmprints when fewer than four illuminations are available.
In the future, we will try to apply the proposed method to other multispectral biometrics, such as face and iris recognition. We will also explore advanced feature extraction methods on the quaternion matrix, such as kernel methods [34]. How to represent multispectral palmprint images with more than four bands is another research direction, and a quantitative similarity measure between palms [35] will also be explored.
Acknowledgments
The work is partially supported by National Science Foundation of China (No. 61101150, 61105011, 60872138), and the Specialized Research Fund for the Doctoral Program of Higher Education (No.20100203120010).
References
1. Jain, A.; Bolle, R.; Pankanti, S. Biometrics: Personal Identification in Networked Society; Kluwer Academic Publishers: Norwell, MA, USA, 1999.
2. Jain, A.; Kumar, A. Biometrics of Next Generation: An Overview. In Second Generation Biometrics; Mordini, E., Tzovaras, D., Eds.; Springer: Berlin, Germany, 2010.
3. Jain, A.; Hong, L.; Bolle, R. On-line fingerprint verification. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 302–314.
4. Jain, A.; Feng, J.; Nandakumar, K. Fingerprint matching. IEEE Comput. 2010, 43, 36–44.
5. Zhang, D.; Kong, W.; You, J.; Wong, M. Online palmprint identification. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1041–1049.
6. Hu, D.; Feng, G.; Zhou, Z. Two-dimensional locality preserving projections (2DLPP) with its application to palmprint recognition. Pattern Recognit. 2007, 40, 339–342.
7. Jia, W.; Huang, D.; Zhang, D. Palmprint verification based on robust line orientation code. Pattern Recognit. 2008, 41, 1504–1513.
8. Wu, X.; Zhang, D.; Wang, K. Fisherpalms based palmprint recognition. Pattern Recognit. Lett. 2003, 24, 2829–2838.
9. Daugman, J. The importance of being random: Statistical principles of iris recognition. Pattern Recognit. 2003, 36, 279–291.
10. Chen, Y.; Adjouadi, M.; Han, C.; Wang, J.; Barreto, A.; Rishe, N.; Andrian, J. A highly accurate and computationally efficient approach for unconstrained iris segmentation. Image Vis. Comput. 2010, 28, 261–269.
11. Proença, H.; Alexandre, L.A. The NICE.I: Noisy Iris Challenge Evaluation - Part I. In Proceedings of the First IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), Crystal City, VA, USA, 27–29 September 2007; pp. 1–4.
12. Boyce, C.; Ross, A.; Monaco, M.; Hornak, L.; Li, X. Multispectral Iris Analysis: A Preliminary Study. In Proceedings of the IEEE Computer Society Workshop on Biometrics, New York, NY, USA, 17–18 June 2006; pp. 51–51.
13. Xu, Y.; Yang, J.; Lu, J.; Yu, D. An efficient renovation on kernel Fisher discriminant analysis and face recognition experiments. Pattern Recognit. 2004, 37, 2091–2094.
14. Chang, H.; Koschan, A.; Abidi, B.; Abidi, M. Physics-based fusion of multispectral data for improved face recognition. Pattern Recognit. 2006, 39, 1083–1086.
15. Xu, Y.; Zhang, D.; Yang, J.; Yang, J. An approach for directly extracting features from matrix data and its application in face recognition. Neurocomputing 2008, 71, 1857–1865.
16. Wang, J.; Barreto, A.; Wang, L.; Chen, Y.; Rishe, N.; Andrian, J.; Adjouadi, M. Multilinear principal component analysis for face recognition with fewer features. Neurocomputing 2010, 73, 1550–1555.
17. Zhang, L.; Zhang, L.; Zhang, D.; Zhu, H. Online finger-knuckle-print verification for personal authentication. Pattern Recognit. 2010, 43, 2560–2571.
18. Rowe, R.; Uludag, U.; Demirkus, M.; Parthasaradhi, S.; Jain, A. A Multispectral Whole-Hand Biometric Authentication System. In Proceedings of the Biometrics Symposium, Baltimore, MD, USA, 11–13 September 2007; pp. 1–6.
19. Zhang, D.; Guo, Z.; Lu, G.; Zhang, L.; Zuo, W. An online system of multispectral palmprint verification. IEEE Trans. Instrum. Meas. 2010, 59, 480–490.
20. Hao, Y.; Sun, Z.; Tan, T.; Ren, C. Multispectral Palm Image Fusion for Accurate Contact-Free Palmprint Recognition. In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP 2008), San Diego, CA, USA, 12–15 October 2008; pp. 281–284.
21. Wang, J.; Yao, W.; Suwandy, A.; Sung, E. Person recognition by fusing palmprint and palm vein images based on 'laplacianpalm' representation. Pattern Recognit. 2008, 41, 1531–1544.
22. Xu, X.; Guo, Z. Multispectral Palmprint Recognition Using Quaternion Principal Component Analysis. In Proceedings of the International Workshop on Emerging Techniques and Challenges for Hand-Based Biometrics (ETCHB), Istanbul, Turkey, 22 August 2010; pp. 1–5.
23. Hamilton, W.R. On quaternions. In Proceedings of the Royal Irish Academy; Royal Irish Academy: Dublin, Ireland, 1844.
24. Bihan, N.; Sangwine, S. Quaternion Principal Component Analysis of Color Images. In Proceedings of the International Conference on Image Processing (ICIP 2003), Barcelona, Spain, 14–17 September 2003; pp. 809–812.
25. Shi, L.; Funt, B. Quaternion color texture segmentation. Comput. Vis. Image Underst. 2007, 107, 88–96.
26. Xie, C.; Savvides, M.; Vijaya Kumar, B.V.K. Quaternion Correlation Filters for Face Recognition in Wavelet Domain. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), Philadelphia, PA, USA, 18–23 March 2005; pp. 85–88.
27. Han, D.; Guo, Z.; Zhang, D. Multispectral Palmprint Recognition Using Wavelet-Based Image Fusion. In Proceedings of the 9th International Conference on Signal Processing (ICSP 2008), Beijing, China, 26–29 October 2008; pp. 2074–2077.
28. Zharov, V.; Ferguson, S.; Eidt, J.; Howard, P.; Fink, L.; Waner, M. Infrared imaging of subcutaneous veins. Lasers Surg. Med. 2004, 34, 56–61.
29. He, J.; Li, Y. Construction of quaternion-valued scaling function and wavelets. J. Guangzhou Univ. 2007, 6, 17–22.
30. Wang, J.; Barreto, A.; Rishe, N.; Andrian, J.; Adjouadi, M. A fast incremental multilinear principal component analysis algorithm. Int. J. Innov. Comput. Inf. Control 2011, 7, 6019–6040.
31. Bihan, N.; Sangwine, S. Jacobi method for quaternion matrix singular value decomposition. Appl. Math. Comput. 2007, 187, 1265–1271.
32. Chan, W.; Choi, H.; Baraniuk, R. Quaternion Wavelets for Image Analysis and Processing. In Proceedings of the IEEE International Conference on Image Processing (ICIP 2004), Singapore, 24–27 October 2004; pp. 3057–3060.
33. Ross, A.A.; Nandakumar, K.; Jain, A.K. Handbook of Multibiometrics; Springer: New York, NY, USA, 2006.
34. Xu, Y.; Zhang, D.; Song, F.; Yang, J.; Jing, Z.; Li, M. A method for speeding up feature extraction based on KPCA. Neurocomputing 2007, 70, 1056–1061.
35. Candocia, F.; Adjouadi, M. A similarity measure for stereo feature matching. IEEE Trans. Image Process. 1997, 6, 1460–1464.
Table 1. Recognition accuracy of different methods.

| Experiments | Recognition Accuracy |
|---|---|
| NIR PCA | 94.60% |
| Red PCA | 96.30% |
| Green PCA | 93.47% |
| Blue PCA | 93.47% |
| Image level fusion by PCA | 95.17% |
| Matching score level fusion by PCA | 98.07% |
| QPCA | 98.13% |
| NIR DWT | 94.60% |
| Red DWT | 95.20% |
| Green DWT | 93.50% |
| Blue DWT | 93.83% |
| Image level fusion by DWT | 96.60% |
| Matching score level fusion by DWT | 98.00% |
| QDWT | 98.50% |
| QPCA + QDWT | 98.83% |
Table 2. Recognition accuracy using QDWT with fewer than four bands (1: band used; 0: band replaced by a zero matrix).

| NIR | Red | Green | Blue | Recognition Accuracy |
|---|---|---|---|---|
| 1 | 1 | 0 | 0 | 97.17% |
| 1 | 0 | 1 | 0 | 98.10% |
| 1 | 0 | 0 | 1 | 98.13% |
| 0 | 1 | 1 | 0 | 97.23% |
| 0 | 1 | 0 | 1 | 97.33% |
| 0 | 0 | 1 | 1 | 94.87% |
| 1 | 1 | 1 | 0 | 98.03% |
| 1 | 1 | 0 | 1 | 97.90% |
| 1 | 0 | 1 | 1 | 98.13% |
| 0 | 1 | 1 | 1 | 97.10% |
Table 3. Correlation between palmprint images of different spectra.

| | NIR | Red | Green | Blue |
|---|---|---|---|---|
| NIR | 1 | - | - | - |
| Red | 0.7470 | 1 | - | - |
| Green | 0.3690 | 0.5060 | 1 | - |
| Blue | 0.4487 | 0.6829 | 0.7421 | 1 |
Table 4. Average execution time of each step.

| Step | Average Time (ms) |
|---|---|
| Preprocessing | 20 |
| QPCA feature extraction | 46 |
| QDWT feature extraction | 547 |
| QPCA feature matching | 0.42 |
| QDWT feature matching | 0.43 |
© 2012 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).