Article

A Novel Texture Feature Description Method Based on the Generalized Gabor Direction Pattern and Weighted Discrepancy Measurement Model

1 School of Information Engineering, Chang’an University, Xi’an 710064, China
2 School of Electrical and Information Engineering, The University of Sydney, Sydney 2006, NSW, Australia
3 School of Electronic and Control Engineering, Chang’an University, Xi’an 710064, China
* Author to whom correspondence should be addressed.
Symmetry 2016, 8(11), 109; https://doi.org/10.3390/sym8110109
Submission received: 31 August 2016 / Revised: 10 October 2016 / Accepted: 19 October 2016 / Published: 25 October 2016
(This article belongs to the Special Issue Symmetry in Systems Design and Analysis)

Abstract

Texture feature description is a significant challenge in the fields of computer vision and pattern recognition. Since the traditional texture descriptor, the local binary pattern (LBP), cannot acquire detailed direction information and is sensitive to noise, we propose a novel method based on the generalized Gabor direction pattern (GGDP) and a weighted discrepancy measurement model (WDMM) to overcome these defects. Firstly, a novel patch-structure direction pattern (PDP) is proposed, which can extract rich feature information and is insensitive to noise. Then, motivated by the search for a descriptor that can explore richer and more discriminant texture features while avoiding the high dimensionality of local Gabor feature vectors, we extend PDP to multi-channel Gabor space to form the GGDP method. Furthermore, WDMM, which can effectively measure the feature distance between two images, is presented for the classification and recognition of image samples. Simulated experiments on the Olivetti Research Laboratory (ORL), Carnegie Mellon University pose, illumination, and expression (CMUPIE) and Yale B face databases under different illumination and facial expression conditions indicate that the proposed method outperforms existing classical methods.

1. Introduction

In recent years, image feature description methods have received significant attention in the fields of computer vision and pattern recognition. Many image feature extraction methods have been proposed, and they can be divided into two categories: holistic and local feature extraction. Holistic methods produce a statistical template from a large number of training sample images; a typical example is principal component analysis (PCA) [1]. Based on the PCA model, several improved methods have been presented, including 2D PCA [2,3], incremental PCA [4], block PCA [5,6], etc. Moreover, many methods using matrix decomposition and linear combination have become very popular, such as linear discriminant analysis (LDA) [7,8,9,10,11], independent component analysis (ICA) [12,13,14,15,16], singular value decomposition (SVD) [17,18,19] and the discrete wavelet transform (DWT) [20,21]. A method called kernel LDA (k-LDA) has also been applied to image classification [22]. Because holistic feature extraction methods do not fully take local detailed information into account, they are sensitive to changes in geometric shape and to some illumination and noise variations. Local image feature extraction methods can effectively overcome those drawbacks. Ojala [23] proposed a texture descriptor called the local binary pattern (LBP), which achieves superior results for image recognition. The original LBP method compares the central pixel value with its neighbor pixel values in a 3 × 3 neighborhood to compute a binary sequence, and then expresses the LBP histogram as a texture description feature. The fixed 3 × 3 neighborhood cannot capture structure at larger scales, which is an obvious disadvantage of the original LBP method. Ojala later improved the original operator and suggested the extension LBP(P, R) [24] with neighborhoods of different sizes, where P is the number of sample points on a circle of radius R. Since LBP is a two-value model that cannot describe more detailed information, Tan [25] extended the two-value model to a three-value model and proposed a novel local feature descriptor, local ternary patterns (LTP). Furthermore, many variants of the basic LBP have been presented, including local phase quantization (LPQ) [26], the local derivative pattern (LDP) [27], local difference binary (LDB) [28], the local line directional pattern (LLDP) [29], local binary patterns in the pyramid transform domain (PLBP) [30], local tetra patterns (LTrPs) [31], the dominant local binary pattern (DLBP) [32], binary robust independent elementary features (BRIEF) [33], local tri-directional patterns [34], the local convex-and-concave pattern (LCP) [35] and multi-scale local binary patterns (MSLBP) [36]. In addition, motivated by image moments and local binary patterns, novel texture descriptors such as local Tchebichef moments (LTMs) [37] and moment-based local binary patterns (MLBP) [38] have been proposed. Nanni [39,40] presented region-based approaches built on the co-occurrence matrix, which achieved promising results on several medical datasets. Gabor wavelet filters provide an excellent feature representation that is insensitive to illumination and expression changes.
Many Gabor feature extraction methods have shown remarkable performance and wide application [41,42,43,44,45,46,47,48], such as local-normalization, entropy-weighted Gabor features [42], local Gabor binary patterns (LGBP) [43], local Gabor XOR patterns (LGXP) [44], Gabor wavelets with local binary patterns [45], Gabor wavelets combined with the volumetric fractal dimension [46], and a combined method that fuses local binary patterns (LBP), local phase quantization (LPQ) and Gabor filters [26]. Since the computational cost of Gabor frames is very high, accelerated Gabor methods have also been studied, such as Gabor frames on GPUs [47] and multi-channel classifier fusion [48].
Motivated by the LBP structure and Gabor filters, we propose a novel texture feature description method based on GGDP and WDMM. The contributions of this paper can be summarized as follows:
(1)
The conventional LBP computes the relationship between the center pixel value and its neighboring pixel values, and only utilizes the direction information at the center pixel. It cannot obtain detailed direction information from the other neighborhood pixels and is therefore sensitive to noise. To overcome these defects, we propose a novel patch-structure direction pattern (PDP), which can extract richer feature information and is insensitive to noise.
(2)
To further improve the effectiveness of PDP, we introduce it into multi-channel Gabor space, obtaining an improved method called GGDP that better describes multi-direction and multi-scale texture information.
(3)
In the traditional classification process, the GGDP features of all Gabor sub-images would be concatenated before being measured. To make the feature distance measurement more accurate, WDMM is proposed: it measures the distance between the GGDP features of each Gabor sub-image separately and computes the final distance as a weighted sum, with weights determined by the information content of each sub-image.
This paper is composed of four sections. The texture feature extraction background and our contributions are introduced in Section 1. Section 2 describes the proposed method and its corresponding algorithms including PDP, GGDP and WDMM. Simulated experiments are conducted in Section 3. Section 4 gives the conclusion and introduces future work.

2. Algorithm Description

2.1. PDP

Suppose the sample image is X and m_0 is the value of the center pixel of a 3 × 3 neighborhood, whose adjacent pixel values are denoted m_i (i = 1, 2, …, 8), as depicted in Figure 1. The patch value at the central pixel m_0 is computed as the average of the nine pixel values in the neighborhood:

m_0 = \frac{1}{9} \sum_{i=0}^{8} m_i    (1)
The other adjacent patch values are computed in the same way according to Equation (1), as shown in Figure 2. We thus obtain the patch structure, denoted X_p, of size 3 × 3.
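As a concrete illustration, the whole-image patch computation of Equation (1) amounts to a 3 × 3 mean filter. The following minimal Python sketch (ours, not the authors' implementation; the function name patch_structure is illustrative) computes it with SciPy:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def patch_structure(image: np.ndarray) -> np.ndarray:
    """Replace each pixel with the mean of its 3 x 3 neighborhood (Equation (1)).

    Sliding a 3 x 3 window over the result yields the patch structure X_p
    around any center pixel, as illustrated in Figure 2.
    """
    # size=3 averages the nine values m_0, ..., m_8 centered at each position
    return uniform_filter(image.astype(np.float64), size=3, mode="nearest")
```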
Next, we use the Kirsch masks to extract information along eight directions. The Kirsch masks, shown in Figure 3, are denoted K_i (i = 1, 2, …, 8) and defined as follows:

\begin{cases} K_1 = \text{Kirsch mask (East)} \\ K_2 = \text{Kirsch mask (Northeast)} \\ K_3 = \text{Kirsch mask (North)} \\ \quad \vdots \\ K_8 = \text{Kirsch mask (Southeast)} \end{cases}    (2)
The direction information X_p^d of the patch structure X_p is defined in Equation (3):

X_p^d = \begin{bmatrix} X_p \times K_4 & X_p \times K_3 & X_p \times K_2 \\ X_p \times K_5 & x & X_p \times K_1 \\ X_p \times K_6 & X_p \times K_7 & X_p \times K_8 \end{bmatrix}    (3)

where the notation "×" denotes the sum of the element-wise products of the corresponding entries of the two matrices, and x marks the center position. The result of X_p^d is written as:

X_p^d = \begin{bmatrix} R_4 & R_3 & R_2 \\ R_5 & x & R_1 \\ R_6 & R_7 & R_8 \end{bmatrix}    (4)

where R_i (i = 1, 2, …, 8) denote the Kirsch responses.
The Kirsch response R_i encodes the information of the ith direction in the neighborhood, and the responses are generally not equal to one another. In this paper, we select the maximum and minimum Kirsch responses, denoted R_max and R_min respectively, as defined in Equation (5):

\begin{cases} R_{\max} = \max_{1 \le i \le 8} R_i \\ R_{\min} = \min_{1 \le i \le 8} R_i \end{cases}    (5)
Thus, the PDP code can be computed as follows:

\text{PDP}(m_0) = \sum_{i=1}^{8} S(R_i) \times 2^{i-1}    (6)
where S(R_i) is defined in Equation (7):

S(R_i) = \begin{cases} 1, & R_i = R_{\min} \text{ or } R_i = R_{\max} \\ 0, & \text{otherwise} \end{cases}    (7)
Based on Equations (6) and (7), the PDP code can be generated for the whole image. To reduce the dimensionality and further condense the PDP feature, a PDP histogram is used to describe the image, defined in Equation (8):

H_{\text{PDP}}(i) = \sum_{x,y} I(\text{PDP}(x,y) = i), \quad i = 0, 1, \dots, 255    (8)
where x and y denote the horizontal and vertical coordinates in the whole image, and the indicator function I(·) is defined in Equation (9):

I(P) = \begin{cases} 1, & P \text{ is true} \\ 0, & P \text{ is false} \end{cases}    (9)
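To make Section 2.1 concrete, the sketch below implements Equations (2)–(9), reusing patch_structure from the sketch above. It is our reconstruction: the Kirsch mask coefficients are the standard ones, and the mapping of the eight directions to bit positions is an assumption that may differ from the authors' ordering.

```python
import numpy as np

# Standard Kirsch masks; row i stands for K_(i+1) in the order
# East, Northeast, North, Northwest, West, Southwest, South, Southeast.
KIRSCH = np.array([
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
], dtype=np.float64)

def pdp_code(xp_patch: np.ndarray) -> int:
    """PDP code of one 3 x 3 patch structure X_p (Equations (3)-(7))."""
    # Kirsch response R_i: sum of element-wise products of X_p and K_i
    responses = np.array([float(np.sum(xp_patch * k)) for k in KIRSCH])
    r_max, r_min = responses.max(), responses.min()   # Equation (5)
    code = 0
    for i, r in enumerate(responses):
        if r == r_max or r == r_min:   # S(R_i) = 1, Equation (7)
            code |= 1 << i             # direction i+1 gets weight 2^i
    return code

def pdp_histogram(image: np.ndarray) -> np.ndarray:
    """256-bin PDP histogram H_PDP of a grayscale image (Equation (8))."""
    xp = patch_structure(image)        # sketch from earlier in Section 2.1
    h, w = xp.shape
    hist = np.zeros(256, dtype=np.float64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[pdp_code(xp[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```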

2.2. GGDP

Gabor wavelet filters can express image direction and scale information owing to their spatial and orientation selectivity. The mathematical model of the 2D Gabor wavelet filters is given in Equation (10):

\begin{cases} \phi(x,y) = \sqrt{\phi_e^2(x,y) + \phi_o^2(x,y)} \\ \phi_e(x,y) = \varphi_e(x,y) * X(x,y) \\ \phi_o(x,y) = \varphi_o(x,y) * X(x,y) \end{cases}    (10)
where X(x,y) is the sample image, x and y denote the horizontal and vertical coordinates, φ_o(x,y) and φ_e(x,y) are the odd- and even-symmetric Gabor filters, respectively, and * denotes 2D convolution.
The isotropic Gabor filters φ_o and φ_e are usually taken in the simplified form:

\begin{cases} \varphi_e(x,y,f,\theta,\sigma) = g(x,y,\sigma)\cos[2\pi f(x\cos\theta + y\sin\theta)] \\ \varphi_o(x,y,f,\theta,\sigma) = g(x,y,\sigma)\sin[2\pi f(x\cos\theta + y\sin\theta)] \end{cases}    (11)
where θ, f and σ represent the orientation, spatial frequency and space constant, respectively, and g(x,y,σ) is a Gaussian function:

g(x,y,\sigma) = \frac{1}{2\pi\sigma^2} \exp\!\left[ -\frac{x^2 + y^2}{2\sigma^2} \right]    (12)
Since f and θ are sampled over multiple channels, let F(i) and θ(j) denote the multi-channel scale and orientation functions. Herein, we use 4 scales (i = 1, 2, 3, 4) and 6 orientations (j = 1, 2, …, 6). The multi-channel scale and orientation outputs of the sample image are denoted \phi_e^{F(i),\theta(j)}(x,y) and \phi_o^{F(i),\theta(j)}(x,y), with i = 1, 2, 3, 4 and j = 1, 2, …, 6.
Let A_{F(i),\theta(j)} denote the amplitude of the filtered image, defined in Equation (13):

A_{F(i),\theta(j)} = \phi(x,y)\big|_{f=F(i),\,\theta=\theta(j)}    (13)
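A minimal sketch of the Gabor filter bank of Equations (10)–(13) follows. The kernel support, the frequency values in FREQS and the σ chosen per channel are our assumptions; the paper fixes only the channel counts (4 scales, 6 orientations).

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(f: float, theta: float, sigma: float, half: int = 15):
    """Even and odd isotropic Gabor kernels of Equations (11)-(12)."""
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    arg = 2 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta))
    return g * np.cos(arg), g * np.sin(arg)

def gabor_amplitude(image: np.ndarray, f: float, theta: float,
                    sigma: float) -> np.ndarray:
    """Amplitude A_{F(i),theta(j)} of Equation (13), via Equation (10)."""
    even, odd = gabor_pair(f, theta, sigma)
    phi_e = fftconvolve(image, even, mode="same")   # phi_e = varphi_e * X
    phi_o = fftconvolve(image, odd, mode="same")    # phi_o = varphi_o * X
    return np.sqrt(phi_e**2 + phi_o**2)

# Assumed 4-scale / 6-orientation channel grid (numeric values not stated
# in the paper): a half-octave frequency ladder, orientations pi/6 apart.
FREQS = [0.25 / (np.sqrt(2) ** i) for i in range(4)]
THETAS = [j * np.pi / 6 for j in range(6)]
```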
Next, we generate the PDP histogram of each Gabor amplitude image A_{F(i),\theta(j)} (i = 1, 2, 3, 4; j = 1, 2, …, 6) by Equation (8), denoted H_{\text{PDP}}(A_{F(i),\theta(j)}). The collection of these histograms is the GGDP feature of the sample image X(x,y):

\text{GGDP}(i,j) = H_{\text{PDP}}(A_{F(i),\theta(j)})    (14)

where i = 1, 2, 3, 4 and j = 1, 2, …, 6. In the typical processing of Gabor features, the GGDP(i,j) histograms would simply be concatenated. However, concatenation treats the Gabor features of every scale and orientation as equally important, which they are not. We therefore design a novel discrepancy measurement model to measure the similarity of two groups of GGDP features.
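Putting the pieces together, the following sketch of Equation (14) reuses pdp_histogram, gabor_amplitude, FREQS and THETAS from the sketches above; the σ-per-frequency rule is again our heuristic, not the paper's.

```python
def ggdp_features(image):
    """GGDP(i, j) of Equation (14): one PDP histogram per Gabor channel."""
    feats = {}
    for i, f in enumerate(FREQS):
        sigma = 0.56 / f               # assumed bandwidth heuristic
        for j, theta in enumerate(THETAS):
            amp = gabor_amplitude(image, f, theta, sigma)
            feats[(i, j)] = pdp_histogram(amp)   # H_PDP(A_{F(i),theta(j)})
    return feats
```

Rather than concatenating the 24 histograms, Section 2.3 compares them channel by channel with entropy-derived weights.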

2.3. WDMM

Suppose the training sample is X_t and the testing sample is X_s. The main objective of classification is to define the distance between X_t and X_s. The weighted discrepancy measurement model is defined in Equation (15):

D_T(X_s, X_t) = \sum_{i=1}^{4} \sum_{j=1}^{6} \omega_{i,j} \, \frac{|f_s^{i,j} - f_t^{i,j}|}{1 + |f_s^{i,j}| + |f_t^{i,j}|}    (15)
where f_s^{i,j} denotes the GGDP feature of X_s at the ith scale and jth orientation, given in Equation (16):

f_s^{i,j} = \text{GGDP}(i,j) = H_{\text{PDP}}(A_s^{F(i),\theta(j)}), \quad i = 1, 2, 3, 4, \; j = 1, 2, \dots, 6    (16)
where A_s^{F(i),\theta(j)} is the amplitude of the corresponding Gabor-filtered image of X_s. Similarly, f_t^{i,j} is the GGDP feature of X_t at the ith scale and jth orientation, given in Equation (17):

f_t^{i,j} = \text{GGDP}(i,j) = H_{\text{PDP}}(A_t^{F(i),\theta(j)}), \quad i = 1, 2, 3, 4, \; j = 1, 2, \dots, 6    (17)
where A_t^{F(i),\theta(j)} is the amplitude of the corresponding Gabor-filtered image of X_t.
Since image entropy can represent image texture information, we adopt it to describe the importance of the Gabor-filtered images. The computation of image entropy is as follows:
Suppose a random variable x takes values (x_1, x_2, …, x_n) with probabilities p(x) = (p_1(x), p_2(x), …, p_n(x)). The entropy H(x) is defined in Equation (18):

H(x) = \sum_{i=1}^{n} p_i(x) \log\frac{1}{p_i(x)} = -\sum_{i=1}^{n} p_i(x) \log p_i(x)    (18)
For a Gabor-filtered image A_{F(i),\theta(j)}, its 2D entropy H(A_{F(i),\theta(j)}) is defined as:

H(A_{F(i),\theta(j)}(x,y)) = \sum_{i=1}^{m} p_i \log\frac{1}{p_i} = -\sum_{i=1}^{m} p_i \log p_i    (19)
where m is the number of gray levels and p_i is the probability of the ith gray level in the whole image.
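A short sketch of Equations (18) and (19), estimating p_i from the gray-level histogram (the bin count and the base-2 logarithm are our choices):

```python
import numpy as np

def image_entropy(image: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy of an image's gray-level distribution (Equation (19))."""
    counts, _ = np.histogram(image, bins=levels)
    p = counts.astype(np.float64) / counts.sum()
    p = p[p > 0]                       # convention: 0 * log(0) = 0
    return float(-(p * np.log2(p)).sum())
```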
The weighted coefficient ω_{i,j} introduced in this paper denotes the importance of the Gabor-filtered images at the ith scale and jth orientation. Based on the above discussion, ω_{i,j} is defined by Equation (20):

\omega_{i,j} = \frac{H(A_s^{F(i),\theta(j)}(x,y)) + H(A_t^{F(i),\theta(j)}(x,y))}{\sum_{i=1}^{4}\sum_{j=1}^{6} H(A_s^{F(i),\theta(j)}(x,y)) + \sum_{i=1}^{4}\sum_{j=1}^{6} H(A_t^{F(i),\theta(j)}(x,y))}    (20)
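Finally, a sketch of the complete WDMM distance of Equations (15)–(20), reusing the helpers above. Treating the per-channel term of Equation (15) as a bin-wise (Canberra-style) sum over the histogram entries is our reading of the formula.

```python
import numpy as np

def wdmm_distance(img_s: np.ndarray, img_t: np.ndarray) -> float:
    """Weighted discrepancy D_T(X_s, X_t) of Equations (15)-(20)."""
    channels, total = {}, 0.0
    for i, f in enumerate(FREQS):
        sigma = 0.56 / f                       # same assumed rule as above
        for j, theta in enumerate(THETAS):
            amp_s = gabor_amplitude(img_s, f, theta, sigma)
            amp_t = gabor_amplitude(img_t, f, theta, sigma)
            h_s, h_t = image_entropy(amp_s), image_entropy(amp_t)
            channels[(i, j)] = (h_s, h_t,
                                pdp_histogram(amp_s), pdp_histogram(amp_t))
            total += h_s + h_t                 # denominator of Equation (20)
    dist = 0.0
    for h_s, h_t, f_s, f_t in channels.values():
        w = (h_s + h_t) / total                # weight omega_{i,j}, Equation (20)
        # bin-wise Canberra-style term of Equation (15)
        dist += w * float(np.sum(np.abs(f_s - f_t)
                                 / (1.0 + np.abs(f_s) + np.abs(f_t))))
    return dist
```

A nearest-template classifier then assigns a test image to the class of the training sample with the smallest wdmm_distance.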

3. Experiments

To verify the effectiveness and stability of the proposed method, simulated experiments were conducted on several public face databases, including the ORL, CMUPIE and YALE B databases, which contain images with different poses, different expressions and various illumination conditions. The proposed method is compared with several state-of-the-art methods, whose abbreviations are listed in Table 1.

3.1. Performance of the Proposed Method

3.1.1. Discussion of Computational Time

Firstly, the computational time of the compared methods is discussed in this section. In our test, the size of the testing image is set to 128 × 128. Table 2 lists the corresponding results, which indicate that LBP costs the least time and has the lowest feature dimension; however, the recognition rate of LBP is also the lowest. In addition, LGBP and GGDP have relatively low feature dimensions. Balancing effectiveness against efficiency, and in light of the following experiments in which GGDP achieves the best results, our proposed method has a considerable advantage over the other methods.

3.1.2. Discussion on Classification

In order to evaluate the effectiveness of the classifier, we used the CMUPIE face database, which contains images of 68 individuals, each with 60 different poses, expressions and illumination conditions. Sample images from CMUPIE are shown in Figure 4.
The results reported in Table 3 show that the nearest neighbor (NN) classifier performs worst owing to its simple decision rule. In contrast, WDMM achieves slightly better results than a support vector machine (SVM) when both are combined with GGDP.

3.2. Experiments and Analysis on CMUPIE Database

To further evaluate the stability of the proposed method under different poses and illumination variations, we conducted experiments on the CMUPIE database. In these experiments, one sub-set of CMUPIE was selected, which contains 60 individuals, each with 13 different poses and 4 different expressions. Moreover, 1, 2, 4, 6, 8 and 10 images were randomly chosen from each person's images as training sets, while the remaining images of the same person were used for testing. The comparison results are tabulated in Table 4 and plotted in Figure 5. Clearly, the recognition rates of all methods increase with the number of training samples. With 10 training samples, the recognition rate of GGDP outperforms LGBP and LLDP by 1.96% and 2.55%, respectively, partly because LLDP mainly targets images with line structure (e.g., palmprints). Again, GGDP demonstrates its superior performance.

3.3. Experiments and Analysis on the ORL Database

The ORL face database contains 400 grayscale images in PNG format of 40 individuals, with 10 images per individual, covering different facial expressions and poses. Sample images from ORL are shown in Figure 6. All face images are normalized to a size of 128 × 128.
To evaluate the effectiveness of the GGDP texture descriptor, experiments were conducted on the ORL database, which covers different poses and facial expressions. In this paper, 1, 2, 3, 4, 5 and 6 images were randomly chosen from each person's set as training sets, while the remaining images of the same person were used for testing. Table 5 and Figure 7 depict the recognition results of the proposed method and the benchmark methods with different training numbers. The recognition rates of all compared methods increase as the number of training samples increases, and GGDP achieves the best results, in short because it extracts richer and more detailed features. With 6 training samples, GGDP outperforms its nearest competitor, LLDP, by 1.5%, and outperforms LBP and LGBP by 13.75% and 6.5%, respectively.

3.4. Experiments and Analysis on YALE B Database

The Yale B database has 10 subjects, and each subject was captured under 73 viewing conditions (9 different poses and 64 different illumination conditions). The extended Yale B dataset adds 16,128 images of 28 individuals. Sample images from the YALE B database are shown in Figure 8.
To validate the effectiveness of the proposed method under various illuminations, we adopted the YALE B database for experiments. The facial images of 50 individuals were selected to form a new sub-database called YALE B SET1, in which each person has 64 images under different illuminations. In these experiments, 1, 2, 4, 8, 16 and 32 images were randomly chosen from each group for training, and the remaining images were used for testing. The recognition rates of the proposed method and the benchmark methods are shown in Table 6 and Figure 9. In general, the recognition rates of all methods increase as the number of training samples increases, and GGDP again achieves the best results, for the same reason as in the former experiments on the ORL database.

4. Conclusions

In this paper, we propose a texture feature description method based on GGDP and WDMM. Firstly, a novel method called PDP is proposed, which can extract rich feature information and is insensitive to noise. Then, motivated by the search for a richer and more discriminant texture descriptor that avoids the high dimensionality of local Gabor feature vectors, we extend PDP to multi-channel Gabor space to form the GGDP method. Furthermore, WDMM, which can effectively measure the feature distance between two images, is presented for image sample classification and recognition. Simulated experiments demonstrate that the proposed recognition system achieves superior results. In future work, we will test the proposed method on other image databases to further validate its effectiveness, such as texture databases (e.g., PhoTex, ALOT and RawFooT) and medical datasets (e.g., histopathology and Pap smear data). It may also be valuable to expand our research from face recognition to other practical applications, such as medical analysis, fingerprint recognition and image retrieval.

Acknowledgments

This project is supported by the National Natural Science Foundation of China (51278058, 61302150), the 111 Project on Information of Vehicle-Infrastructure Sensing and ITS (B14043), the Fundamental Research Funds for the Central Universities of Ministry of Education of China (310814247004, 310824130251 and 310824153103).

Author Contributions

Ting Chen carried out the genetic studies, participated in algorithm optimization research and drafted the manuscript. Xiangmo Zhao carried out the system design. Liang Dai participated in the design of the study and performed the simulation analyses. Licheng Zhang and Jiarui Wang participated in system design and helped with some experiment analyses. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Turk, M.A.; Pentland, A.P. Face recognition using eigenfaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, 3–6 June 1991; IEEE: New York, NY, USA, 1991; pp. 586–591.
2. Yang, J.; Liu, C. Horizontal and vertical 2DPCA-based discriminant analysis for face verification on a large-scale database. IEEE Trans. Inf. Forensics Secur. 2007, 2, 781–792.
3. Meng, J.; Zhang, W. Volume measure in 2DPCA-based face recognition. Pattern Recognit. Lett. 2007, 28, 1203–1208.
4. Huang, D.; Yi, Z.; Pu, X. A new incremental PCA algorithm with application to visual learning and recognition. Neural Process. Lett. 2009, 30, 171–185.
5. Tan, K.; Chen, S. Adaptively weighted sub-pattern PCA for face recognition. Neurocomputing 2005, 64, 505–511.
6. Hsieh, P.C.; Tung, P.C. A novel hybrid approach based on sub-pattern technique and whitened PCA for face recognition. Pattern Recognit. 2009, 42, 978–984.
7. Belhumeur, P.N.; Hespanha, J.P.; Kriegman, D.J. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 711–720.
8. Fàbregas, J.; Faundez-Zanuy, M. Biometric dispersion matcher versus LDA. Pattern Recognit. 2009, 42, 1816–1823.
9. Strecha, C.; Bronstein, A.M.; Bronstein, M.M.; Fua, P. LDAHash: Improved matching with smaller descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 66–78.
10. Oh, J.H.; Kwak, N. Generalization of linear discriminant analysis using Lp-norm. Pattern Recognit. Lett. 2013, 34, 679–685.
11. Zhao, C.; Miao, D.; Lai, Z.; Gao, C.; Liu, C.; Yang, J. Two-dimensional color uncorrelated discriminant analysis for face recognition. Neurocomputing 2013, 113, 251–261.
12. Selvan, S.; Borckmans, P.B.; Chattopadhyay, A.; Absil, P.-A. Spherical mesh adaptive direct search for separating quasi-uncorrelated sources by range-based independent component analysis. Neural Comput. 2013, 25, 2486–2522.
13. Sun, Z.L.; Lam, K.M. Depth estimation of face images based on the constrained ICA model. IEEE Trans. Inf. Forensics Secur. 2011, 6, 360–370.
14. Fernandes, S.L.; Bala, G.J. A comparative study on ICA and LPP based face recognition under varying illuminations and facial expressions. In Proceedings of the 2013 International Conference on Signal Processing, Image Processing & Pattern Recognition, Coimbatore, India, 7–8 February 2013; IEEE: New York, NY, USA, 2013; pp. 122–126.
15. Wu, M.; Zhou, J.; Sun, J. Multi-scale ICA texture pattern for gender recognition. Electron. Lett. 2012, 48, 629–631.
16. Li, S.; Lu, H.C.; Ruan, X.; Chen, Y.-W. Human body segmentation based on independent component analysis with reference at two-scale superpixel. IET Image Process. 2012, 6, 770–777.
17. Shin, K.; Feraday, S.A.; Harris, C.J.; Brennan, M.J.; Oh, J.-E. Optimal autoregressive modeling of a measured noisy deterministic signal using singular-value decomposition. Mech. Syst. Signal Process. 2003, 17, 423–432.
18. Wei, J.J.; Chang, C.J.; Chou, N.K.; Jan, G.J. ECG data compression using truncated singular value decomposition. IEEE Trans. Inf. Technol. Biomed. 2001, 5, 290–299.
19. Walton, J.; Fairley, N. Noise reduction in X-ray photoelectron spectromicroscopy by a singular value decomposition sorting procedure. J. Electron Spectrosc. Relat. Phenom. 2005, 148, 29–40.
20. Do, M.N.; Vetterli, M. Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. Image Process. 2002, 11, 146–158.
21. Ahmadian, A.; Mostafa, A. An efficient texture classification algorithm using Gabor wavelet. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Cancun, Mexico, 17–21 September 2003; IEEE: New York, NY, USA, 2003; Volume 1, pp. 930–933.
22. Ye, F.; Shi, Z.; Shi, Z. A comparative study of PCA, LDA and Kernel LDA for image classification. In Proceedings of the International Symposium on Ubiquitous Virtual Reality, Gwangju, Korea, 8–11 July 2009; IEEE: New York, NY, USA, 2009; pp. 51–54.
23. Ojala, T.; Pietikäinen, M.; Mäenpää, T. A generalized Local Binary Pattern operator for multiresolution gray scale and rotation invariant texture classification. In Proceedings of the International Conference on Advances in Pattern Recognition, Rio de Janeiro, Brazil, 11–14 March 2001; Volume 1, pp. 397–406.
24. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
25. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650.
26. Zhou, S.R.; Yin, J.P.; Zhang, J.M. Local binary pattern (LBP) and local phase quantization (LPQ) based on Gabor filter for face representation. Neurocomputing 2013, 116, 260–264.
27. Zhang, B.; Gao, Y.; Zhao, S.; Liu, J. Local derivative pattern versus local binary pattern: Face recognition with high-order local pattern descriptor. IEEE Trans. Image Process. 2010, 19, 533–544.
28. Yang, X.; Cheng, K.T. Local difference binary for ultrafast and distinctive feature description. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 188–194.
29. Luo, Y.T.; Zhao, L.Y.; Zhang, B.; Jia, W.; Xue, F.; Lu, J.-T.; Zhu, Y.-H.; Xu, B.-Q. Local line directional pattern for palmprint recognition. Pattern Recognit. 2016, 50, 26–44.
30. Qian, X.; Hua, X.S.; Chen, P.; Ke, L. PLBP: An effective local binary patterns texture descriptor with pyramid representation. Pattern Recognit. 2011, 44, 2502–2515.
31. Murala, S.; Maheshwari, R.P.; Balasubramanian, R. Local tetra patterns: A new feature descriptor for content-based image retrieval. IEEE Trans. Image Process. 2012, 21, 2874–2886.
32. Liao, S.; Law, M.W.K.; Chung, A. Dominant local binary patterns for texture classification. IEEE Trans. Image Process. 2009, 18, 1107–1118.
33. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In Proceedings of the European Conference on Computer Vision (ECCV), Heraklion, Greece, 5–11 September 2010; pp. 778–792.
34. Verma, M.; Raman, B. Local tri-directional patterns: A new texture feature descriptor for image retrieval. Digit. Signal Process. 2016, 51, 62–72.
35. Chen, X.; Zhou, Z.; Zhang, J.; Liu, Z.; Huang, Q. Local convex-and-concave pattern: An effective texture descriptor. Inf. Sci. 2016, 363, 120–139.
36. Zhang, H.; He, P.; Yang, X. Fault detection based on multi-scale local binary patterns operator and improved teaching-learning-based optimization algorithm. Symmetry 2015, 7, 1734–1750.
37. Mukundan, R. Local Tchebichef Moments for Texture Analysis. In Moments and Moment Invariants—Theory and Applications; Science Gate Publishing: Thrace, Greece, 2014; Volume 1, pp. 127–142.
38. Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.; Tourassis, V.D. Moment-based local binary patterns: A novel descriptor for invariant pattern recognition applications. Neurocomputing 2013, 99, 358–371.
39. Nanni, L.; Brahnam, S.; Ghidoni, S.; Emanuele, M.; Tonya, B. Different approaches for extracting information from the co-occurrence matrix. PLoS ONE 2013, 8, e83554.
40. Nanni, L.; Brahnam, S.; Ghidoni, S.; Menegatti, E. Region based approaches and descriptors extracted from the cooccurrence matrix. Int. J. Latest Res. Sci. Technol. 2014, 3, 192–200.
41. Kanan, H.R.; Faez, K. Recognizing faces using Adaptively Weighted Sub-Gabor Array from a single sample image per enrolled subject. Image Vis. Comput. 2010, 28, 438–448.
42. Cament, L.A.; Castillo, L.E.; Perez, J.P.; Galdames, F.J.; Perez, C.A. Fusion of local normalization and Gabor entropy weighted features for face identification. Pattern Recognit. 2014, 47, 568–577.
43. Gao, T.; He, M. A novel face description by local multi-channel Gabor histogram sequence binary pattern. In Proceedings of the International Conference on Audio, Language and Image Processing, Shanghai, China, 7–9 July 2008; IEEE: New York, NY, USA, 2008; pp. 1240–1244.
44. Xie, S.; Shan, S.; Chen, X.; Chen, J. Fusing local patterns of Gabor magnitude and phase for face recognition. IEEE Trans. Image Process. 2010, 19, 1349–1361.
45. Sharma, P.; Arya, K.V.; Yadav, R.N. Efficient face recognition using wavelet-based generalized neural network. Signal Process. 2013, 93, 1557–1565.
46. Zuñiga, A.G.; Florindo, J.B.; Bruno, O.M. Gabor wavelets combined with volumetric fractal dimension applied to texture analysis. Pattern Recognit. Lett. 2014, 36, 135–143.
47. Găianu, M.; Onchiş, D.M. Face and marker detection using Gabor frames on GPUs. Signal Process. 2014, 96, 90–93.
48. Zhao, Z.S.; Zhang, L.; Zhao, M.; Hou, Z.-G.; Zhang, C.-S. Gabor face recognition by multi-channel classifier fusion of supervised kernel manifold learning. Neurocomputing 2012, 97, 398–404.
Figure 1. Diagram of patch computation.
Figure 2. Diagram of the PDP (patch-structure direction pattern) descriptor.
Figure 3. Kirsch masks with 8 directions.
Figure 4. Part images of CMUPIE (Carnegie Mellon University pose, illumination, and expression).
Figure 5. Recognition rates of methods on CMUPIE with different training sample numbers.
Figure 6. Part images of ORL (Olivetti Research Laboratory).
Figure 7. Recognition rates of methods on ORL with different training sample numbers.
Figure 8. Part images of YALE B.
Figure 9. Recognition rates of methods on YALE B SET1 with different training sample numbers.
Table 1. Method abbreviations and their explanations.

Method Abbreviation | Method Explanation
--------------------|---------------------------------------
LBP [23]            | Basic LBP features
LBP1 [24]           | LBP (8, 1) features
LBP2 [24]           | LBP (8, 2) features
LTP [25]            | LTP features
LG [40]             | Local Gabor
LGBP [42]           | Local Gabor binary pattern
LLDP [29]           | Local line directional pattern
GGDP                | Generalized Gabor direction pattern
Table 2. Time cost for different feature extraction methods.

Descriptor | Feature Dimension | Feature Extraction Time (ms)
-----------|-------------------|------------------------------
LBP [23]   | 256               | 97.4
LG [40]    | 393,216           | 294.2
LGBP [42]  | 6144              | 326.4
GGDP       | 6144              | 386.5
Table 3. Recognition rates of different classification methods.

Recognition Method | Training Sample Number
                   | 1      | 2      | 4      | 6      | 8      | 10
-------------------|--------|--------|--------|--------|--------|-------
GGDP + NN          | 44.71% | 56.08% | 62.65% | 67.94% | 73.14% | 81.08%
GGDP + SVM         | 45.98% | 76.76% | 81.76% | 87.35% | 88.33% | 91.08%
GGDP + WDMM        | 56.76% | 75.39% | 83.73% | 86.18% | 90.98% | 93.82%
Table 4. Recognition rates of methods on CMUPIE (Carnegie Mellon University pose, illumination, and expression) with different training sample numbers.

Recognition Method | Training Sample Number
                   | 1      | 2      | 4      | 6      | 8      | 10
-------------------|--------|--------|--------|--------|--------|-------
LBP                | 40.29% | 44.41% | 48.92% | 55.98% | 62.45% | 68.14%
LBP1               | 46.18% | 47.84% | 49.71% | 58.73% | 64.02% | 73.92%
LBP2               | 47.35% | 48.92% | 50.20% | 61.57% | 65.10% | 75.10%
LTP                | 49.31% | 50.59% | 51.76% | 63.53% | 66.86% | 76.67%
LG                 | 50.20% | 55.59% | 60.10% | 73.14% | 79.12% | 87.45%
LGBP               | 54.02% | 67.35% | 72.06% | 80.29% | 83.43% | 91.86%
LLDP               | 55.49% | 72.45% | 85.20% | 85.78% | 89.31% | 91.27%
GGDP               | 56.76% | 75.39% | 83.73% | 86.18% | 90.98% | 93.82%
Table 5. Recognition rates of methods on ORL with different training sample numbers.

Recognition Method | Training Sample Number
                   | 1      | 2      | 3      | 4      | 5      | 6
-------------------|--------|--------|--------|--------|--------|-------
LBP                | 53.75% | 55.50% | 64.00% | 76.50% | 83.75% | 84.25%
LBP1               | 55.75% | 56.00% | 65.50% | 81.00% | 85.25% | 85.50%
LBP2               | 58.50% | 59.00% | 65.25% | 81.50% | 88.00% | 88.50%
LTP                | 60.25% | 61.50% | 69.00% | 83.50% | 87.25% | 88.25%
LG                 | 64.25% | 69.75% | 72.25% | 83.00% | 88.00% | 91.50%
LGBP               | 65.75% | 70.75% | 75.50% | 87.00% | 90.75% | 95.25%
LLDP               | 63.25% | 72.25% | 75.75% | 86.50% | 92.25% | 96.50%
GGDP               | 70.25% | 74.75% | 78.00% | 90.25% | 93.25% | 98.00%
Table 6. Recognition rates of methods on YALE B SET1 with different training sample numbers.

Recognition Method | Training Sample Number
                   | 1      | 2      | 4      | 8      | 16     | 32
-------------------|--------|--------|--------|--------|--------|-------
LBP                | 37.91% | 38.22% | 48.94% | 51.72% | 63.19% | 63.53%
LBP1               | 42.31% | 42.69% | 52.44% | 53.09% | 69.25% | 66.06%
LBP2               | 43.63% | 44.44% | 52.78% | 53.84% | 64.25% | 67.91%
LTP                | 44.53% | 48.84% | 53.22% | 55.09% | 65.72% | 69.88%
LG                 | 49.59% | 53.01% | 55.48% | 68.77% | 75.21% | 78.36%
LGBP               | 53.56% | 58.36% | 66.99% | 73.56% | 77.95% | 82.60%
LLDP               | 59.72% | 66.59% | 69.66% | 70.34% | 79.34% | 85.78%
GGDP               | 58.63% | 64.52% | 72.60% | 78.22% | 81.37% | 86.44%
