Article

Unsupervised Local Binary Pattern Histogram Selection Scores for Color Texture Classification

1 Faculty of Economics and Business Administration (First Branch), Lebanese University, Hadath, Beirut 21219, Lebanon
2 LISIC Laboratory, University of the Littoral Opal Coast, 62228 Calais, France
* Author to whom correspondence should be addressed.
J. Imaging 2018, 4(10), 112; https://doi.org/10.3390/jimaging4100112
Submission received: 11 July 2018 / Revised: 7 September 2018 / Accepted: 25 September 2018 / Published: 28 September 2018
(This article belongs to the Special Issue Computational Colour Imaging)

Abstract

In recent years, several supervised scores have been proposed in the literature to select histograms. Applied to color texture classification problems, these scores improve the accuracy by selecting the most discriminant histograms among a set of available ones computed from a color image. In this paper, two new scores are proposed to select histograms: the adapted Variance score and the adapted Laplacian score. Unlike previous scores, these new scores are computed without considering the class labels of the images. Experiments on the OuTex, USPTex, and BarkTex sets show that these unsupervised scores give results as good as the supervised ones for LBP histogram selection.

1. Introduction

Texture classification is an active research topic in image processing and computer vision. It has received significant attention in many applications such as content-based image retrieval, medical image analysis, face recognition, and biometrics. Texture classification approaches can typically be divided into two subproblems [1,2]: the representation, which aims to characterize an image with a set of texture features, and the decision, which assigns this image to one of the available texture classes. This paper focuses on the first subproblem and particularly on feature space dimensionality reduction techniques. Many approaches perform a reduction of the feature space to transform high-dimensional data into a meaningful representation of reduced dimensionality [3,4,5]. By only retaining the most discriminant features, these approaches aim to improve the classification accuracy while decreasing the processing time.
Dimensionality reduction techniques can be divided into two categories [6]. (1) Feature extraction builds a low-dimensional subspace where the new features are usually combinations of the original ones. The main drawback of this strategy is that all candidate features must be computed during the classification stage in order to build the new feature space, which can be time-consuming. (2) Feature selection strategies select the most relevant original features, so only a reduced number of selected features has to be computed during the classification stage. Among the feature selection techniques, we are particularly interested in those based on individual ranking. These algorithms rank the candidate features with respect to a score which measures their relevance. They are relatively inexpensive in computation time since no subspace generation procedure is used.
In the supervised context, information about the class distribution is available. Supervised feature selection scores, such as the Fisher and the Supervised Laplacian scores, use the class labels to determine the relevance of each feature. However, it is worth examining whether a soft way of measuring the similarity between images is also relevant. A soft value does not use any information about the class labels of the images but measures the similarity gradually, instead of being binary with just two values (same class or not). This may yield powerful discriminating information since it should better reflect the geometric structure of the different classes. The Variance and Laplacian scores measure the ability of a feature to preserve the intrinsic data structure without considering any information about the class labels of the images [7]; they can therefore be considered unsupervised. These two scores, originally designed for feature selection, have been successfully used in the context of image classification to select relevant features and improve the classification accuracy [8]. In this paper, we examine whether the soft similarity measure used in these two unsupervised scores is relevant for selecting histograms.
To describe a texture, the local patterns contained in an image are usually represented by histograms, like sum and difference histograms [9], histograms of equivalent patterns [10], or bag-of-words histograms [11]. A set of cross-channel histograms is then computed to represent a color texture. The Local Binary Pattern (LBP) is a texture descriptor belonging to this scheme [12]. The LBP operator transforms an image by thresholding the levels of the P neighboring pixels around each pixel of the image against the central level, and coding the result as a binary number. Usually, the histogram of this LBP image is then used for texture analysis. Many authors have taken an interest in the reduction of this $(2^P = Q)$-dimensional LBP histogram in order to improve texture classification performances [13]. Ojala et al. propose the uniform LBP operator, where 59 discriminant pattern types (or bins) are chosen a priori among the $2^8 = 256$ available ones [14]. Mäenpää et al. consider a method based on beam search to select a reduced number of discriminant bins [15]. Boosting has become a very popular approach for feature selection and has been widely adopted for LBP feature selection in various tasks [13]. Liao et al. introduce the Dominant Local Binary Pattern (DLBP), which considers the most frequently occurring patterns to improve the recognition accuracy [16]. Because DLBP is based only on the pattern frequency, information about the type (label) of the selected patterns is lost. That is why this texture descriptor has later been improved by labelling the most frequent patterns [17], as in the Labelled Dominant Local Binary Pattern (L-DLBP) [18], the Highest-Variance Dominant Local Binary Pattern (HV-DLBP) [19], or more recently the Highest-Rank Dominant Local Binary Pattern (HR-DLBP) [20]. Guo et al. also propose a labelled model of the DLBP based on the Fisher separation criterion [21,22]. The most reliable and robust dominant bins are thus determined by considering intra-class similarity and inter-class dissimilarity.
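The basic LBP operator described above can be sketched in a few lines of NumPy. This is a minimal illustration (fixed 3 × 3 neighborhood, P = 8, borders ignored), not the implementation used by any of the cited works; the neighbor ordering is an arbitrary but fixed choice.

```python
import numpy as np

# Offsets of the P = 8 neighbors in a 3x3 neighborhood, in a fixed clockwise order.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_image(gray):
    """Basic LBP: each neighbor >= center contributes one bit of an 8-bit code.
    Border pixels are skipped, so the output is 2 pixels smaller per dimension."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(OFFSETS):
        neighbor = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code

def lbp_histogram(gray):
    """Normalized (2^P = 256)-bin histogram of the LBP image."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Normalizing by the number of coded pixels makes the histograms comparable across images of different sizes, which the similarity and distance measures of Section 3 rely on.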
Many other extensions or variants of the LBP operator have been proposed in recent decades for gray level images [12]. However, the extensions of this operator to color images have remained relatively limited since 2002, when the Extended Opponent Color LBP (EOCLBP) was proposed by Pietikäinen et al. [13]. In EOCLBP, the LBP operator is applied on each color component of a given color space independently, and also on pairs of color components according to a cross-channel strategy. This leads to the extraction of nine different histograms, three within-component and six between-component LBP histograms, and one may wonder whether all the information contained in these histograms is relevant to discriminate the textures. Paradoxically, reducing the dimensionality of LBP histograms is much less frequent in the framework of color texture analysis, even though the dimension of the feature space is higher. A first solution, proposed by Chan et al., uses linear discriminant analysis to project high-dimensional color LBP bins into a discriminant space [23]. A second solution is proposed by Hussain et al., who exploit the complementarity of Histograms of Oriented Gradients [24], Local Binary Patterns, and Local Ternary Patterns [25] and apply partial least squares to solve their visual object detection problem [26]. More recently, Porebski et al. propose a different approach which selects, out of the nine LBP histograms extracted from a color texture, those which are the most discriminant [27]. This strategy, which selects histograms in their entirety, fundamentally differs from all the previous LBP selection approaches, which select the bins of the LBP histograms or project them into a discriminant space. To evaluate the relevance of the LBP histograms, Porebski et al. propose a supervised approach where an Intra-Class Similarity score (ICS-score) is computed for each histogram. This score measures the ability of a histogram to characterize the similarity of the textures within each class. Inspired by this approach, Kalakech et al. propose another score (the ASL-score) based on the supervised Laplacian score designed for feature ranking and selection [28]. In [29], histogram selection and bin selection schemes have been extended to the multi-color space domain and compared with each other in the framework of color texture classification. It has been shown that the classification accuracy reached thanks to histogram selection is slightly higher than the accuracy provided by bin selection, with a similar classification computation time. The encouraging results obtained with the two supervised ICS- and ASL-scores lead us to propose in this paper two new histogram selection scores: the adapted Variance (AV) score and the adapted Laplacian (AL) score. As their names suggest, these scores are respectively adapted from the unsupervised Variance and Laplacian scores, which were originally designed for feature selection and which use a soft way of measuring the similarity between images. In this paper, we propose to extend these scores in order to rank and select LBP histograms extracted from a color image.
First, the traditional unsupervised feature selection scores are presented in Section 2. The corresponding adapted histogram selection scores are then detailed in Section 3, and the LBP histogram selection approach is described in Section 4. In order to compare these two new scores with each other and with the state of the art, experiments are performed on widely used benchmark databases in Section 5.

2. Feature Selection Scores

In the feature selection context, we have a dataset of N color texture images represented in a D-dimensional feature space. We denote by X the associated data matrix, where $x_i^r$ is the rth feature value ($r = 1, \ldots, D$) of the ith color image $I_i$ ($i = 1, \ldots, N$):

$$X = \begin{bmatrix} x_1^1 & \cdots & x_1^r & \cdots & x_1^D \\ \vdots & & \vdots & & \vdots \\ x_i^1 & \cdots & x_i^r & \cdots & x_i^D \\ \vdots & & \vdots & & \vdots \\ x_N^1 & \cdots & x_N^r & \cdots & x_N^D \end{bmatrix} = \begin{bmatrix} \mathbf{x}_1 \\ \vdots \\ \mathbf{x}_i \\ \vdots \\ \mathbf{x}_N \end{bmatrix} = \begin{bmatrix} \mathbf{f}^1 & \cdots & \mathbf{f}^r & \cdots & \mathbf{f}^D \end{bmatrix}.$$

Each of the N rows of the matrix X represents a color texture $\mathbf{x}_i = \left( x_i^1, \ldots, x_i^D \right) \in \mathbb{R}^D$, while each of the D columns of X defines a feature $\mathbf{f}^r = \left( x_1^r, \ldots, x_N^r \right)^T \in \mathbb{R}^N$.

2.1. Unsupervised Feature Selection Scores

In the unsupervised context, the Variance and the Laplacian scores are usually used to select features [7].

2.1.1. Variance Score

The Variance score $V^r$ used to evaluate the relevance of each feature $\mathbf{f}^r$ is defined by:

$$V^r = \frac{1}{N} \sum_{i=1}^{N} \left( x_i^r - \mu^r \right)^2, \tag{1}$$

where $\mu^r$ is the mean of the feature $\mathbf{f}^r$ over all images: $\mu^r = \frac{1}{N} \sum_{i=1}^{N} x_i^r$.

The features are sorted according to the decreasing order of $V^r$ in order to select the most relevant ones, assuming that the feature with the highest variance is the most discriminant one.
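Equation (1) and the decreasing-order ranking can be written compactly with NumPy. This is a minimal sketch for a data matrix `X` of shape (N, D), following the notation of this section; the function names are illustrative.

```python
import numpy as np

def variance_scores(X):
    """Variance score V^r of Equation (1) for each of the D features
    (columns of X); a higher score means a more relevant feature."""
    return np.mean((X - X.mean(axis=0)) ** 2, axis=0)

def rank_by_variance(X):
    """Feature indices sorted by decreasing Variance score."""
    return np.argsort(variance_scores(X))[::-1]
```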

2.1.2. Laplacian Score

Rather than measuring the data dispersion along a feature axis, the Laplacian score examines the local properties of the data. He et al. propose to compute the Laplacian score $L^r$ of a feature $\mathbf{f}^r$ as [7]:

$$L^r = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} \left( x_i^r - x_j^r \right)^2 s_{ij}}{\sum_{i=1}^{N} \left( x_i^r - \bar{f}^r \right)^2 d_i}, \tag{2}$$

where:
  • $\left( x_i^r - x_j^r \right)^2$ is the squared Euclidean distance between the rth feature of two images $I_i$ and $I_j$,
  • $s_{ij}$ is the similarity measure between $I_i$ and $I_j$ using the whole input feature space composed of the D features. It is defined by $s_{ij} = \exp\left( -\frac{\| \mathbf{x}_i - \mathbf{x}_j \|^2}{2 t^2} \right)$, where $\| \mathbf{x}_i - \mathbf{x}_j \|^2$ is the squared Euclidean distance between $\mathbf{x}_i$ and $\mathbf{x}_j$ in the D-dimensional initial feature space [30,31]. The parameter t has to be tuned to represent the local dispersion of the data [32],
  • $d_i$ is a local density measure defined by $d_i = \sum_{j=1}^{N} s_{ij}$,
  • and $\bar{f}^r$ is the weighted feature average: $\bar{f}^r = \frac{\sum_{i=1}^{N} x_i^r d_i}{\sum_{i=1}^{N} d_i}$.

The features are sorted according to the ascending order of $L^r$ in order to select the most relevant ones: the features which respect the pre-defined graph structure minimize the numerator of $L^r$, and those with a large variance maximize the denominator.
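The terms of Equation (2) can be sketched as follows. This is an illustrative NumPy implementation (dense N × N similarity matrix, naive loops), not optimized code; `t` is the heat-kernel bandwidth that, as noted above, has to be tuned to the local dispersion of the data.

```python
import numpy as np

def laplacian_scores(X, t=1.0):
    """Laplacian score L^r of Equation (2) for each column of X (N, D);
    a lower score means a more relevant feature. Assumes no feature is
    constant (otherwise the denominator vanishes)."""
    # Heat-kernel similarities s_ij computed in the full D-dimensional space.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    S = np.exp(-sq_dists / (2.0 * t ** 2))
    d = S.sum(axis=1)                       # local densities d_i
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        num = ((f[:, None] - f[None, :]) ** 2 * S).sum()
        f_bar = (f * d).sum() / d.sum()     # weighted feature average
        den = ((f - f_bar) ** 2 * d).sum()
        scores[r] = num / den
    return scores
```

Ranking then amounts to `np.argsort(laplacian_scores(X))`, keeping the first entries (ascending order).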
These unsupervised scores were originally designed for feature selection. However, in the framework of color texture characterization, histograms are widely used as texture descriptors, like the LBP histograms [13]. Because the number of these histograms can be high and problematic for classification purposes, it is interesting to select the most discriminant ones to improve the classification performances. For this purpose, we propose to adapt the traditional scores used for feature ranking and selection, in order to rank and select histograms.

3. Histogram Selection Scores

In the histogram selection context, we have a dataset of N color texture images. Each image $I_i$ ($i = 1, \ldots, N$) is characterized by D histograms. The whole dataset is summarized by the matrix H:

$$H = \begin{bmatrix} h_1^1 & \cdots & h_1^r & \cdots & h_1^D \\ \vdots & & \vdots & & \vdots \\ h_i^1 & \cdots & h_i^r & \cdots & h_i^D \\ \vdots & & \vdots & & \vdots \\ h_N^1 & \cdots & h_N^r & \cdots & h_N^D \end{bmatrix} = \begin{bmatrix} \mathbf{h}_1 \\ \vdots \\ \mathbf{h}_i \\ \vdots \\ \mathbf{h}_N \end{bmatrix} = \begin{bmatrix} \mathbf{h}^1 & \cdots & \mathbf{h}^r & \cdots & \mathbf{h}^D \end{bmatrix},$$

where $h_i^r$ is the rth histogram computed from the ith color texture image $I_i$. It is defined by $h_i^r = \left[ h_i^r(1), \ldots, h_i^r(k), \ldots, h_i^r(Q) \right]$, where Q is the number of histogram bins.

The ith row of H represents the set of D histograms $\mathbf{h}_i$ corresponding to the image $I_i$, whose dimension is $(D \times Q)$. Each column $\mathbf{h}^r = \left[ h_1^r \cdots h_i^r \cdots h_N^r \right]^T$ groups the values of the rth histogram across the N images.
The histogram selection scheme evaluates each histogram $\mathbf{h}^r$ in order to select the most discriminant ones among the D candidate histograms. For this purpose, we propose to adapt the feature selection scores presented in Section 2 in order to define histogram selection scores. Distance and similarity measures are two critical notions in feature selection: distance measures are low when the images are close to each other, whereas similarity measures are highest when the considered images are similar. To adapt the traditional feature selection scores to rank and select histograms, it is necessary to consider either a distance measure or a similarity measure between histograms, depending on whether the term to adapt has to be maximized or minimized.
Several measures of similarity and distance between histograms have been used in computer vision and pattern recognition [33]. Since the objective of this paper is to show the interest of the proposed scores, we retain two simple measures, the histogram intersection as similarity measure and the Jeffrey distance as distance measure: the histogram intersection is considered to adapt the similarity term s i j which has to be maximized (the kernel is maximized when the images are similar) and the Jeffrey distance is used to extend the Euclidean distance which has to be minimized for similar images.
The intersection between the histograms extracted from two images I i and I j is defined as follows:
$$S(\mathbf{h}_i, \mathbf{h}_j) = \sum_{k=1}^{Q \times D} \min\left( h_i(k), h_j(k) \right). \tag{3}$$

The result of the intersection is the number of pixels of the first image that have a corresponding pixel with the same characteristic (the same pattern, in the case of LBP histograms) in the second image. So the more similar the considered images are, the higher the histogram intersection is. Since the histograms are normalized by the number of pixels in the image, the value of this measure varies between 0 and 1.
The Jeffrey distance between the histograms of two images I i and I j is defined as follows:
$$J(\mathbf{h}_i, \mathbf{h}_j) = \sum_{k=1}^{Q \times D} \left[ h_i(k) \log \frac{h_i(k)}{\frac{h_i(k) + h_j(k)}{2}} + h_j(k) \log \frac{h_j(k)}{\frac{h_i(k) + h_j(k)}{2}} \right]. \tag{4}$$

As with all distance measures, the value of the Jeffrey distance is low when the images are close to each other in the histogram space.
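Equations (3) and (4) can be sketched directly for normalized histograms. This is a minimal NumPy illustration; the small `eps` added inside the logarithm to avoid log(0) on empty bins is an implementation choice, not part of the definitions above.

```python
import numpy as np

def intersection(h1, h2):
    """Histogram intersection of Equation (3): in [0, 1] for normalized
    histograms; 1 when the histograms are identical."""
    return np.minimum(h1, h2).sum()

def jeffrey(h1, h2, eps=1e-12):
    """Jeffrey distance of Equation (4); low when the histograms are close.
    eps avoids log(0) on empty bins (assumed smoothing, not in the formula)."""
    m = (h1 + h2) / 2.0
    return np.sum(h1 * np.log((h1 + eps) / (m + eps))
                  + h2 * np.log((h2 + eps) / (m + eps)))
```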
In order to clarify the adaptation of the different scores to histogram selection, we summarize the terms and the scores used in this section in Table 1 where formulas are applied to evaluate the score of the rth histogram. The left column groups feature selection terms while the right one summarizes the corresponding histogram selection adaptation. Readers can refer to this table while reading the next section.

3.1. Adapted Variance Score

Using the Jeffrey distance defined in Equation (4), we extend the Variance score of Equation (1) in order to select histograms rather than features. The Adapted Variance score A V r of the histogram h r is defined as follows:
$$AV^r = \frac{1}{N} \sum_{i=1}^{N} J^2\left( h_i^r, \bar{h}^r \right), \tag{5}$$

where $\bar{h}^r$ is the mean histogram, evaluated by averaging each bin of the histogram $\mathbf{h}^r$ across the N images: $\bar{h}^r = \left[ \bar{h}^r(1), \ldots, \bar{h}^r(k), \ldots, \bar{h}^r(Q) \right]$, with $\bar{h}^r(k) = \frac{1}{N} \sum_{i=1}^{N} h_i^r(k)$.

The histograms are sorted according to the decreasing order of $AV^r$ in order to select the most relevant ones.
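The AV-score can be sketched as follows for one candidate histogram family. This is an illustrative NumPy version: `H_r` is an (N, Q) array holding the rth histogram of each of the N training images, and the Jeffrey distance is redefined locally (with the same assumed `eps` smoothing) to keep the sketch self-contained.

```python
import numpy as np

def jeffrey(h1, h2, eps=1e-12):
    """Jeffrey distance between two normalized histograms."""
    m = (h1 + h2) / 2.0
    return np.sum(h1 * np.log((h1 + eps) / (m + eps))
                  + h2 * np.log((h2 + eps) / (m + eps)))

def adapted_variance_score(H_r):
    """AV-score: mean squared Jeffrey distance of the N histograms (rows of
    H_r) to their bin-wise mean histogram; higher = more relevant."""
    h_bar = H_r.mean(axis=0)                # mean histogram, bin by bin
    return np.mean([jeffrey(h, h_bar) ** 2 for h in H_r])
```

Ranking the D candidate histograms then amounts to computing this score for each of them and sorting in decreasing order.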

3.2. Adapted Laplacian Score

Using the intersection similarity measure and the Jeffrey distance defined in Equations (3) and (4), we extend the Laplacian score of Equation (2) in order to select the most discriminant histograms. The Adapted Laplacian score $AL^r$ of the histogram $\mathbf{h}^r$ is defined as follows:

$$AL^r = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} J^2\left( h_i^r, h_j^r \right) S(\mathbf{h}_i, \mathbf{h}_j)}{\sum_{i=1}^{N} J^2\left( h_i^r, \bar{a}^r \right) D_i}. \tag{6}$$

The degree $D_i$ of the image $I_i$ is defined by $D_i = \sum_{j=1}^{N} S(\mathbf{h}_i, \mathbf{h}_j)$, and $\bar{a}^r$ is the weighted histogram average: $\bar{a}^r = \left[ \bar{a}^r(1), \ldots, \bar{a}^r(k), \ldots, \bar{a}^r(Q) \right]$, with $\bar{a}^r(k) = \frac{\sum_{i=1}^{N} h_i^r(k) D_i}{\sum_{i=1}^{N} D_i}$.

As for feature selection using the Laplacian score, the histograms are sorted according to the ascending order of $AL^r$ in order to select the most relevant ones.
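The AL-score combines both measures: the intersection on the concatenated histograms plays the role of the heat kernel, and the Jeffrey distance replaces the per-feature Euclidean distance. The sketch below is an illustrative, unoptimized NumPy version; `H_r` is the (N, Q) array of rth histograms and `H_full` the (N, Q × D) concatenation of all D histograms per image, and the Jeffrey distance is redefined locally (with an assumed `eps` smoothing) for self-containment.

```python
import numpy as np

def jeffrey(h1, h2, eps=1e-12):
    """Jeffrey distance between two normalized histograms."""
    m = (h1 + h2) / 2.0
    return np.sum(h1 * np.log((h1 + eps) / (m + eps))
                  + h2 * np.log((h2 + eps) / (m + eps)))

def adapted_laplacian_score(H_r, H_full):
    """AL-score (lower = more relevant). The similarity S(h_i, h_j) is the
    histogram intersection computed on the full concatenated histograms."""
    N = H_r.shape[0]
    S = np.array([[np.minimum(H_full[i], H_full[j]).sum() for j in range(N)]
                  for i in range(N)])
    D_deg = S.sum(axis=1)                                      # degrees D_i
    a_bar = (H_r * D_deg[:, None]).sum(axis=0) / D_deg.sum()   # weighted average
    num = sum(jeffrey(H_r[i], H_r[j]) ** 2 * S[i, j]
              for i in range(N) for j in range(N))
    den = sum(jeffrey(H_r[i], a_bar) ** 2 * D_deg[i] for i in range(N))
    return num / den
```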

4. LBP Histogram Selection for Color Texture Classification

The adapted scores previously presented are used in an LBP histogram selection approach described in this section (see Section 4.2). The candidate color LBP histograms are first presented (Section 4.1).

4.1. Candidate Color Texture Descriptors

The LBP operator is one of the most successful descriptors used to characterize texture images, due to its ease of implementation, its invariance to monotonic illumination changes, and its low computational complexity. Many variants of the original LBP operator have been proposed in the literature since Ojala's original definition [12]. Since the goal of this paper is to reveal the relevance of the proposed histogram selection scores, no more sophisticated texture descriptors are needed. That is why the color textures are characterized here by the EOCLBP histograms, which are a simple extension to color of the original LBP operator. Obviously, the classification results could be improved using more elaborate descriptors, such as the Improved Opponent Color LBP [34] or the Median Robust Extended LBP [35], a gray level descriptor that has obtained the best overall performance on thirteen texture image sets and could be extended to color.
To compute the EOCLBP histograms, each image is first coded in a 3-dimensional color space, denoted here $C_1 C_2 C_3$. The D = 9 LBP histograms are then computed from the so-coded images: three within-component LBP histograms ($(C_1, C_1)$, $(C_2, C_2)$, and $(C_3, C_3)$) and six between-component LBP histograms ($(C_1, C_2)$, $(C_2, C_1)$, $(C_1, C_3)$, $(C_3, C_1)$, $(C_2, C_3)$, and $(C_3, C_2)$) are extracted from each image. As Ojala et al. did when introducing the original LBP operator, the 3 × 3 pixel neighborhood (P = 8 neighbors) is considered here. A color texture is thus represented by a (9 × 256)-dimensional feature space.
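The nine component pairs can be enumerated as in the sketch below. This is an illustrative NumPy version: the convention assumed here takes the thresholding center in the first component of the pair and the neighbors in the second, which is one possible reading of the cross-channel strategy rather than a definitive reproduction of EOCLBP.

```python
import numpy as np

# Offsets of the P = 8 neighbors in a 3x3 neighborhood, in a fixed order.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def pair_lbp_histogram(center_chan, neighbor_chan):
    """Normalized 256-bin LBP histogram where the center pixel is read in
    `center_chan` and the 8 neighbors in `neighbor_chan` (identical channels
    give the within-component case)."""
    c = np.asarray(center_chan, dtype=np.int32)[1:-1, 1:-1]
    t = np.asarray(neighbor_chan, dtype=np.int32)
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(OFFSETS):
        nb = t[1 + dy : t.shape[0] - 1 + dy, 1 + dx : t.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def eoclbp_histograms(image):
    """The D = 9 candidate histograms of an (H, W, 3) color image:
    3 within-component pairs followed by the 6 between-component pairs."""
    pairs = [(0, 0), (1, 1), (2, 2),
             (0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]
    return [pair_lbp_histogram(image[:, :, a], image[:, :, b])
            for a, b in pairs]
```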
It is well-known that the performance of a classifier is generally dependent on the dimension of the feature subspace due to the curse of dimensionality [36]. To reach a satisfying classification accuracy while decreasing the computation time, we propose to reduce the number of candidate LBP histograms by selecting the most discriminating ones thanks to the histogram selection scores previously presented.

4.2. Histogram Selection

To evaluate a supervised color texture classification scheme, it is usual to divide the considered database into a learning and a testing image subset. The learning subset is used to train the classifier during the learning stage, whereas the testing subset is used during the classification stage to evaluate the performances of the proposed method. In the histogram selection framework, the learning stage aims to build a low dimensional discriminating subspace thanks to labelled or unlabelled training data.
Different models have been proposed to evaluate the relevance of the candidate subspaces [37]. The wrapper model uses the classification accuracy as the discriminating power of the candidate subspaces. When a classifier such as the nearest neighbor is considered, it involves decomposing the learning subset into training and validation subsets. Although this model is time-consuming and classifier-dependent, it gives good results and easily determines the dimension of the selected subspace by searching for the best classification accuracy. On the contrary, filter models evaluate the relevance of the candidate subspaces without classifying the images. They are less time-consuming, but determining the dimension of the subspace to be selected is not as easy. To obtain a good compromise between dimension selection, computation time, and classification results, embedded models are preferred [38]. These approaches combine a filter model, to determine the most discriminating subspaces at different dimensions, and a wrapper model, to determine the dimension of the selected subspace [6].
The approach used in this paper is an embedded histogram selection scheme which requires splitting up the initial image database into training, validation, and testing image subsets, according to a holdout decomposition. During the learning stage, candidate histograms are generated from the training images and ranked with respect to a score which measures the efficiency of each candidate histogram. This score can be computed without considering the class labels of the images, like the unsupervised AV-score and AL-score, or by taking the information about the class distribution into account, like the ASL-score and the ICS-score do.
Once the score has been computed for each of the D candidate histograms, a ranking is performed. The candidate subspaces—composed, at the first step, of the histogram with the best score, at the second step, of the first two ranked histograms, and so on—are then evaluated to determine the relevant histogram subspace. For this purpose, a classifier operates in each candidate subspace in order to classify the validation images. For each subspace dimension d, the classification accuracy is estimated as the percentage of the validation images that have been correctly classified. This rate of well-classified validation images is denoted $R_d$.
The dimension $\hat{d}$ of the selected subspace is the one for which the value of $R_d$ is the highest:

$$\hat{d} = \underset{1 \le d \le D}{\arg\max}\ R_d. \tag{7}$$
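The embedded rank-then-evaluate loop can be sketched as follows. This is an illustrative skeleton: `validation_accuracy` stands for any user-supplied function that classifies the validation images in a candidate subspace and returns the rate R_d; its name and signature are assumptions of this sketch.

```python
import numpy as np

def select_histogram_subspace(scores, ascending, validation_accuracy):
    """Embedded selection: rank the D candidate histograms by score, then
    evaluate the nested subspaces (best 1, best 2, ..., best D) with a
    classifier on the validation images. Returns (selected indices, best R_d)."""
    order = np.argsort(scores)
    if not ascending:                  # AV: decreasing order; AL: ascending
        order = order[::-1]
    rates = [validation_accuracy(order[:d]) for d in range(1, len(order) + 1)]
    d_hat = int(np.argmax(rates)) + 1  # smallest d reaching the best rate
    return order[:d_hat], rates[d_hat - 1]
```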
During the classification stage, the previously selected relevant histograms are computed for each testing image and compared to the training images in the selected histogram subspace to determine the testing image label. Since the purpose of this paper is to show the contribution of the two new histogram selection scores independently of the considered classifier, its parameters, and its metric, the nearest neighbor classifier associated with the histogram intersection as a similarity measure is considered here.
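The nearest neighbor decision rule with the histogram intersection similarity reduces to a few lines. This is a minimal sketch: `train_hists` is assumed to hold, for each training image, the concatenation of its selected histograms.

```python
import numpy as np

def nn_classify(train_hists, train_labels, test_hist):
    """1-NN in the selected histogram subspace: the test image takes the
    label of the training image with the largest histogram intersection."""
    sims = np.minimum(train_hists, test_hist).sum(axis=1)
    return train_labels[int(np.argmax(sims))]
```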

5. Experiments

In this section, the proposed histogram selection scores are compared thanks to three benchmark color texture image sets: Outex-TC-00013, USPTex, and NewBarkTex.
Outex-TC-00013 is composed of 68 color texture images, acquired under controlled conditions by a 3-CCD digital color camera, whose size is 746 × 538 pixels [39]. Each of these 68 textures is split up into 20 disjoint 128 × 128 sub-images. Among these 1360 sub-images, 680 are used for the training subset and the remaining 680 are considered as testing images. The Outex-TC-00013 image test suite can be downloaded at http://www.outex.oulu.fi/index.php?page=classification.
The USPTex set is a more recent database [40]. It contains 191 natural color textures acquired under an unknown but fixed light source. As for Outex-TC-00013, these images are split up into disjoint 128 × 128 sub-images. Since the original image size is here 512 × 384 pixels, this makes a total of 12 sub-images per texture. For our experiments, this initial dataset of 2292 sub-images is split up in order to build a training and a testing image subset: 6 images per texture are considered for the training and the 6 others are used as testing images. This decomposition is available at https://www-lisic.univ-littoral.fr/~porebski/USPtex.zip.
The BarkTex database includes six tree bark classes, with 68 images per class [41]. Even if the number of classes of this database is limited to six, the textures of the different classes are close to each other and their discrimination is not easy. To build the NewBarkTex set, a region of interest, centered on the bark and whose size is 128 × 128 pixels, is first defined. Then, four sub-images of size 64 × 64 pixels are extracted from each region. We thus obtain a set of 68 × 4 = 272 sub-images per class. To ensure that the color texture images used for training and testing are as uncorrelated as possible, the four sub-images extracted from a same original image all belong either to the training subset or to the testing one [42]: 816 images are thus used as training images and the remaining 816 as testing images. The NewBarkTex image test suite can be downloaded at https://www-lisic.univ-littoral.fr/~porebski/NewBarkTex.zip.
These sets do not require considering specific illuminant- or rotation-invariant texture descriptors, since the goal of this paper is to reveal the contribution of the proposed histogram selection scores independently of the texture descriptor invariance to the observation conditions.
Let us note that the considered texture benchmark databases are composed of only two image subsets according to a holdout evaluation method, whereas the considered histogram selection scheme needs three subsets as explained in Section 4.2. We thus propose to use one subset as the training subset and the second both as the validation and testing subset to evaluate the performances of the proposed scores. Therefore, the dimensionality of the selected feature space will be ideally determined, and the classification results can be interpreted as optimistic. This solution was nevertheless chosen in order to achieve the comparison with other works using the same split into training and testing subsets.
Moreover, in order to evaluate the impact of the used color space, four color spaces are considered for the experiments: RGB, YUV, $I_1 I_2 I_3$, and HSV. These color spaces are respectively representative of the four color space families (the primary spaces, the luminance-chrominance spaces, the independent color component spaces, and the perceptual spaces) and do not require knowledge of the illumination conditions, unlike the $L^*a^*b^*$ color space for example [4].
Section 5.1 presents a comparison of the performances achieved by the proposed histogram selection scores. An analysis of the histogram rank is then done (cf. Section 5.2). Finally, in Section 5.3, the classification results obtained by the proposed approach are compared with the state of the art.

5.1. Comparison of the Histogram Selection Scores

In this section, four histogram selection scores are compared on Outex-TC-00013, USPTex, and NewBarkTex sets:
  • the unsupervised Adapted Variance score (AV-score),
  • the unsupervised Adapted Laplacian score (AL-score),
  • the Adapted Supervised Laplacian score (ASL-score) proposed by Kalakech et al. [28],
  • and the supervised Intra-Class Similarity score (ICS-score) proposed by Porebski et al. [27].
Figure 1, Figure 2 and Figure 3 show the rate $R_d$ of well-classified validation images according to the number d of ranked histograms on the Outex-TC-00013, USPTex, and NewBarkTex sets, respectively, for each considered color space.
These figures show that the accuracy obtained thanks to the unsupervised AL-score globally outperforms that obtained by the AV-score, for the three databases and whatever the considered color space. In the same way, the ASL-score outperforms the ICS-score in the supervised context. These results confirm the high performances obtained thanks to the Laplacian scores in the context of feature selection [8]. For histogram selection, the interest of the similarity term in capturing the intrinsic properties of the data is also demonstrated.
These figures also show that the ASL-score globally gives the highest accuracy, followed very closely by the unsupervised AL-score. These scores reach a high accuracy with a lower-dimensional histogram subspace. The unsupervised AL-score, which is computed without considering the class labels of the images, globally outperforms the supervised ICS-score, which takes the information about the class distribution into account. This again confirms the relevance of the similarity matrix used in the Laplacian scores to perform the selection.
Table 2, Table 3 and Table 4 show the accuracies $R_{\hat{d}}$ obtained with the $\hat{d}$-dimensional selected LBP histogram subspaces, by using the different supervised and unsupervised scores on the Outex-TC-00013, USPTex, and NewBarkTex sets, respectively. The accuracy reached without performing any color LBP histogram selection is also presented. The bold values represent the best rates obtained with each color space and the boxed values indicate the best rate obtained for each color texture set.
These tables confirm the interest of selecting LBP histograms: the selection improves the classification accuracy by on average 0.52% on OuTex, 7.70% on USPTex, and 6.32% on BarkTex, while reducing the number of considered histograms. We can also see that the performances reached by the different scores are very close to each other, especially for the OuTex and USPTex databases. For the color space that gives the best rates (RGB for Outex-TC-00013 and NewBarkTex, and YUV for USPTex), several scores give the highest performances, and the ASL- and AL-scores always appear among the best ones.
For NewBarkTex, which is a more challenging set, the AL-, ASL-, and ICS-scores give exactly the same best accuracy with the same optimal dimension. The difference between these three scores appears more for low-dimensional subspaces: from Figure 3, we can notice that the ASL- and AL-scores find the best histograms faster, especially for the YUV and $I_1 I_2 I_3$ color spaces.
It is also interesting to notice that the unsupervised AL-score appears among the best scores 10 times out of 12. It outperforms the other unsupervised score (AV) and even the supervised ICS-score. Its performances are remarkable since they are similar or very close to those reached by the ASL-score, even though it does not consider the class labels of the images.

5.2. Comparison of the Histogram Ranks

In this section, an analysis of the histogram ranking is presented. Table 5 shows the histogram ranking obtained with the considered scores on the Outex-TC-00013, USPTex, and NewBarkTex sets. The numbers 1, 2, and 3 represent the three within-component LBP histograms ((C1,C1), (C2,C2), and (C3,C3)), and the six between-component LBP histograms ((C1,C2), (C2,C1), (C1,C3), (C3,C1), (C2,C3), and (C3,C2)) are numbered 4, 5, 6, 7, 8, and 9, respectively.
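The numbering convention above can be captured directly as a small mapping, which is convenient when reading the ranking rows of Table 5 (a minimal sketch; the names `PAIRS` and `NUMBER_TO_PAIR` are introduced here for illustration):

```python
# Histogram numbering used in Table 5: numbers 1-3 are the three
# within-component LBP histograms, numbers 4-9 the six
# between-component ones, in the order listed in the text.
PAIRS = [("C1", "C1"), ("C2", "C2"), ("C3", "C3"),
         ("C1", "C2"), ("C2", "C1"), ("C1", "C3"),
         ("C3", "C1"), ("C2", "C3"), ("C3", "C2")]
NUMBER_TO_PAIR = {i + 1: pair for i, pair in enumerate(PAIRS)}
```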
Each row of this table shows the histogram ranking in the considered color space using the specified histogram selection score, for the three image sets. For example, the first row shows that, in the RGB color space, using the AV-score and the OuTex database, the first selected histogram is histogram 2 ((C2,C2)), followed by histogram 4 ((C1,C2)), and so on, with histogram 9 ((C3,C2)) selected last. The bold values correspond to the selected histogram subspace for which the best accuracy is achieved for each of the three color texture sets.
This table shows that the histogram ranking varies considerably with the considered color space and score. This clearly demonstrates the interest of performing a histogram selection, since the most relevant histogram subspace cannot be determined a priori, even for a given database.

5.3. Comparison with the State of the Art

In this section, we compare the accuracy obtained using the proposed unsupervised AL-score with the results reached in the state of the art on the three considered sets. For a fair comparison, these sets follow the same experimental protocol (number of classes, image size, number of images per class, total number of images, and accuracy evaluation method), and only the works that apply a single color space strategy are mentioned. In addition to the nearest neighbor classifier, we also propose to use the SVM classifier during the classification stage of our approach, since the best accuracies reached in the state of the art on Outex-TC-00013 and NewBarkTex with a single color space strategy have been obtained with this classifier. A one-versus-one SVM classifier with a linear kernel is considered here. The results are summarized in Table 6, Table 7 and Table 8.
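The classification stage described above can be sketched with scikit-learn (an illustrative assumption, since the article does not specify an implementation): `SVC` with a linear kernel applies a one-versus-one scheme to multiclass problems, and `KNeighborsClassifier` with one neighbor gives the nearest neighbor baseline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def classify(train_X, train_y, test_X, use_svm=True):
    # train_X/test_X: one row per image, e.g., the concatenated
    # selected LBP histograms; train_y: class labels.
    if use_svm:
        clf = SVC(kernel="linear")  # multiclass handled one-versus-one
    else:
        clf = KNeighborsClassifier(n_neighbors=1)  # 1-NN baseline
    clf.fit(train_X, train_y)
    return clf.predict(test_X)
```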
From these tables, we can see that the second best accuracy obtained on the Outex-TC-00013 set (95.4%) is achieved with a simple 3D color histogram, although it only characterizes the color distribution within the HSV color space and does not take into account the spatial relationships between neighboring pixels, as a color texture feature should. This inconsistency is due to the fact that the Outex-TC-00013 and USPTex sets present a major drawback: The partitioning used to build these two sets consists of extracting the training and testing subimages from the same original image. Such a partitioning, when combined with a classifier such as the nearest neighbor classifier, leads to biased classification results [42]. Indeed, testing images are spatially close to training images; they are thus correlated, and a simple 3D color histogram reaches a high classification accuracy [43]. For the NewBarkTex set, the training and testing subimages come from different original images, to ensure that the color texture images are as little correlated as possible. The analysis of the results is thus more meaningful and interpretable on this image set. The best accuracy rate (89.6%) is obtained with the dominant and minor sum and difference histograms [57]. Selecting LBP histograms with our proposed AL-score comes close to this highest rate, particularly when a SVM classifier is used to classify the testing images: In this case, the classification accuracy reaches the promising result of 84.9%. This additional experiment highlights the merit of the unsupervised AL-score when it is associated with the SVM classifier.

6. Conclusions

We have proposed to adapt the traditional unsupervised feature selection scores in order to rank and select LBP histograms for color texture classification: The Adapted Variance ( A V -score) and the Adapted Laplacian ( A L -score) scores have thus been presented.
For each of the nine LBP histograms extracted from a color texture, a score is assigned using one of the proposed adapted scores. The histograms are then ranked in order to select the most discriminant ones, and thus to build a low dimensional relevant subspace in which a classifier operates.
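The rank-and-select step just described can be sketched as follows (a minimal illustration; `score_fn` would be one of the AV, AL, ASL, or ICS scores, and whether higher or lower scores are better depends on the chosen score, hence the flag):

```python
import numpy as np

def select_histograms(histograms, score_fn, d, larger_is_better=True):
    # histograms: list of per-image histogram stacks, one per LBP
    # component pair; score_fn assigns a relevance score to each stack.
    scores = [score_fn(H) for H in histograms]
    order = np.argsort(scores)
    if larger_is_better:
        order = order[::-1]
    ranking = list(order)        # full ranking of the candidate histograms
    return ranking, ranking[:d]  # ranked indices and the d-best subset
```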
Experiments on the Outex-TC-00013, USPTex, and NewBarkTex sets have shown the interest of performing a LBP histogram selection before classifying the different images. This selection improves the classification accuracy while reducing the dimension of the histogram subspace. The AL-score outperforms the AV-score and gives performances comparable to, or even better than, those of the supervised ASL-score and ICS-score.
For future research directions, we propose to associate the AL-score with a multi color space approach [29]. Moreover, an additional experiment could be carried out in the short term: A similarity can be derived from a given distance by kernelization (an exponential kernel applied to the Euclidean distance in the conventional approach of the Laplacian score). As the Jeffrey divergence can also be kernelized, it would be interesting to study how the results evolve when a kernelized Jeffrey measure is used as the similarity measure and, more generally, to study the impact of the distance and similarity measures on the classification performances.
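The kernelized Jeffrey similarity suggested above could take the following form (a sketch under the assumption of a Gaussian-style kernel with bandwidth t, by analogy with the exponential/Euclidean construction of the conventional Laplacian score; the function names are introduced here for illustration):

```python
import numpy as np

def jeffrey(h1, h2, eps=1e-12):
    # Jeffrey divergence between two normalized histograms,
    # each bin compared to the average of the two bins.
    m = (h1 + h2) / 2.0
    return np.sum(h1 * np.log((h1 + eps) / (m + eps))
                  + h2 * np.log((h2 + eps) / (m + eps)))

def kernelized_jeffrey_similarity(h1, h2, t=1.0):
    # Hypothetical similarity obtained by kernelizing the Jeffrey
    # divergence, mirroring s_ij = exp(-||x_i - x_j||^2 / (2 t^2)).
    return np.exp(-jeffrey(h1, h2) ** 2 / (2.0 * t ** 2))
```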

Author Contributions

M.K. and A.P. conducted the research presented in this study, performed the experiments, and wrote the paper. N.V. and D.H. contributed to the development of the overall research design, provided guidance along the way, and aided in writing of the paper.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tuceryan, M.; Jain, A.K. Texture analysis. In Handbook of Pattern Recognition and Computer Vision; Chen, C.H., Pau, L.F., Wang, P.S.P., Eds.; World Scientific Publishing Co.: Singapore, 1998; pp. 207–248. [Google Scholar]
  2. Bianconi, F.; Harvey, R.; Southam, P.; Fernandez, A. Theoretical and experimental comparison of different approaches for color texture classification. J. Electron. Imaging 2011, 20, 043006. [Google Scholar] [CrossRef]
  3. De Wouwer, G.V.; Scheunders, P.; Livens, S.; van Dyck, D. Wavelet correlation signatures for color texture characterization. Pattern Recognit. 1999, 32, 443–451. [Google Scholar] [CrossRef] [Green Version]
  4. Porebski, A.; Vandenbroucke, N.; Macaire, L. Supervised texture classification: Color space or texture feature selection? Pattern Anal. Appl. J. 2013, 16, 1–18. [Google Scholar] [CrossRef]
  5. Arvis, V.; Debain, C.; Berducat, M.; Benassi, A. Generalization of the cooccurrence matrix for colour images: Application to colour texture classification. Image Anal. Stereol. 2004, 23, 63–72. [Google Scholar] [CrossRef]
  6. Tang, J.; Alelyani, S.; Liu, H. Feature selection for classification: A review. In Data Classification Algorithms and Applications; Aggarwal, C., Ed.; CRC Press: Boca Raton, FL, USA, 2014; pp. 37–64. [Google Scholar]
  7. He, X.; Cai, D.; Niyogi, P. Laplacian Score for Feature Selection. In Advances in Neural Information Processing Systems; MIT Press: Vancouver, Canada, December 2005; pp. 507–514. [Google Scholar]
  8. Kalakech, M.; Biela, P.; Macaire, L.; Hamad, D. Constraint scores for semi-supervised feature selection: A comparative study. Pattern Recognit. Lett. 2011, 32, 656–665. [Google Scholar] [CrossRef]
  9. Sandid, F.; Douik, A. Robust color texture descriptor for material recognition. Pattern Recognit. Lett. 2016, 80, 15–23. [Google Scholar] [CrossRef]
  10. Fernandez, A.; Alvarez, M.X.; Bianconi, F. Texture Description Through Histograms of Equivalent Patterns. J. Math. Imaging Vis. 2012, 45, 76–102. [Google Scholar] [CrossRef] [Green Version]
  11. Alvarez, S.; Vanrell, M. Texton theory revisited: A bag-of-words approach to combine textons. Pattern Recognit. 2012, 45, 4312–4325. [Google Scholar] [CrossRef]
  12. Liu, L.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikäinen, M. Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognit. 2017, 62, 135–160. [Google Scholar] [CrossRef]
  13. Pietikäinen, M.; Hadid, A.; Zhao, G.; Ahonen, T. Computer Vision Using Local Binary Patterns; Springer: Berlin, Germany; London, UK, 2011. [Google Scholar]
  14. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 7, 971–987. [Google Scholar] [CrossRef]
  15. Mäenpää, T.; Ojala, T.; Pietikäinen, M.; Soriano, M. Robust texture classification by subsets of local binary patterns. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, 3–7 September 2000; pp. 947–950. [Google Scholar]
  16. Liao, S.; Law, M.; Chung, C. Dominant local binary patterns for texture classification. IEEE Trans. Image Process. 2009, 18, 1107–1118. [Google Scholar] [CrossRef] [PubMed]
  17. Bianconi, F.; González, E.; Fernández, A. Dominant local binary patterns for texture classification: Labelled or unlabelled? Pattern Recognit. Lett. 2015, 65, 8–14. [Google Scholar] [CrossRef]
  18. Fu, X.; Shi, M.; Wei, H.; Chen, H. Fabric defect detection based on adaptive local binary patterns. In Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO2009), Guilin, China, 19–23 December 2009; pp. 1336–1340. [Google Scholar]
  19. Nanni, L.; Brahnam, S.; Lumini, A. Selecting the best performing rotation invariant patterns in local binary/ternary patterns. In Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition, Las Vegas, NV, USA, 12–15 July 2010; pp. 369–375. [Google Scholar]
  20. Doshi, N.P.; Schaefer, G. Dominant multi-dimensional local binary patterns. In Proceedings of the IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC2013), Kunming, China, 5–8 August 2013. [Google Scholar]
  21. Guo, Y.; Zhao, G.; Pietikäinen, M.; Xu, Z. Descriptor learning based on fisher separation criterion for texture classification. In Asian Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 1491–1500. [Google Scholar]
  22. Guo, Y.; Zhao, G.; Pietikäinen, M. Discriminative features for texture description. Pattern Recognit. 2012, 45, 3834–3843. [Google Scholar] [CrossRef]
  23. Chan, C.; Kittler, J.; Messer, K. Multispectral local binary pattern histogram for component-based color face verification. In Proceedings of the IEEE Conference on Biometrics: Theory, Applications and Systems, Crystal City, VA, USA, 27–29 September 2007; pp. 1–7. [Google Scholar]
  24. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
  25. Tan, X.; Triggs, B. Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650. [Google Scholar] [PubMed]
  26. Hussain, S.; Triggs, B. Feature sets and dimensionality reduction for visual object detection. In British Machine Vision Conference; BMVA Press: London, UK, 2010; pp. 112.1–112.10. [Google Scholar]
  27. Porebski, A.; Vandenbroucke, N.; Hamad, D. LBP histogram selection for supervised color texture classification. In Proceedings of the 20th IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 3239–3243. [Google Scholar]
  28. Kalakech, M.; Porebski, A.; Vandenbroucke, N.; Hamad, D. A new LBP histogram selection score for color texture classification. In Proceedings of the 5th IEEE international Workshops on Image Processing Theory, Tools and Applications, Orleans, France, 10–13 November 2015. [Google Scholar]
  29. Porebski, A.; Hoang, V.T.; Vandenbroucke, N.; Hamad, D. Multi-color space local binary pattern-based feature selection for texture classification. J. Electron. Imaging 2018, 27, 011010. [Google Scholar]
  30. Luxburg, U.V. A tutorial on spectral clustering. Stat. Comput. 2007, 17, 395–416. [Google Scholar] [CrossRef]
  31. Ng, A.Y.; Jordan, M.; Weiss, Y. On spectral clustering: analysis and an algorithm. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, Canada, 3–8 December 2001; pp. 849–856. [Google Scholar]
  32. Zelink-Manor, L.; Perona, P. Self-tuning spectral clustering. In Proceedings of the Advances in Neural Information Processing Systems, Cambridge, MA, USA, 5 May 2005; pp. 1601–1608. [Google Scholar]
  33. Rubner, Y.; Puzich, J.; Tomasi, C.; Buhmann, J.M. Empirical evaluation of dissimilarity measures for color and texture. Comput. Vis. Image Underst. 2001, 84, 25–43. [Google Scholar] [CrossRef]
  34. Bianconi, F.; Bello-Cerezo, R.; Napoletano, P. Improved opponent color local binary patterns: An effective local image descriptor for color texture classification. J. Electron. Imaging 2017, 27, 011002. [Google Scholar] [CrossRef]
  35. Liu, L.; Lao, S.; Fieguth, P.; Guo, Y.; Wang, X.; Pietikainen, M. Median robust extended local binary pattern for texture classification. IEEE Trans. Image Process. 2016, 25, 1368–1381. [Google Scholar] [CrossRef] [PubMed]
  36. Jain, A.; Zongker, D. Feature selection: Evaluation, application and small sample performance. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 153–158. [Google Scholar] [CrossRef]
  37. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef] [Green Version]
  38. Liu, H.; Yu, L. Toward integrating feature selection algorithms for classification and clustering. IEEE Trans. Knowl. Data Eng. 2005, 17, 491–502. [Google Scholar] [Green Version]
  39. Ojala, T.; Mäenpää, T.; Pietikäinen, M.; Viertola, J.; Kyllönen, J.; Huovinen, S. Outex new framework for empirical evaluation of texture analysis algorithms. In Proceedings of the 16th International Conference on Pattern Recognition, Quebec City, QC, Canada, 11–15 August 2002; pp. 701–706. [Google Scholar]
  40. Backes, A.R.; Casanova, D.; Bruno, O.M. Color texture analysis based on fractal descriptors. Pattern Recognit. 2012, 45, 1984–1992. [Google Scholar] [CrossRef] [Green Version]
  41. Lakmann, R. Barktex Benchmark Database of Color Textured Images. Koblenz-Landau University. Available online: ftp://ftphost.uni-koblenz.de/outgoing/vision/Lakmann/BarkTex (accessed on 28 September 2018).
  42. Porebski, A.; Vandenbroucke, N.; Macaire, L.; Hamad, D. A new benchmark image test suite for evaluating color texture classification schemes. Multimed. Tools Appl. J. 2013, 70, 543–556. [Google Scholar] [CrossRef]
  43. Mäenpää, T.; Pietikäinen, M. Classification with color and texture: Jointly or separately? Pattern Recognit. 2004, 37, 1629–1640. [Google Scholar] [CrossRef] [Green Version]
  44. Casanova, D.; Florindo, J.; Falvo, M.; Bruno, O.M. Texture analysis using fractal descriptors estimated by the mutual interference of color channels. Inf. Sci. 2016, 346, 58–72. [Google Scholar] [CrossRef]
  45. Pietikäinen, M.; Mäenpää, T.; Viertola, J. Color texture classification with color histograms and local binary patterns. In Proceedings of the 2nd International Workshop on Texture Analysis and Synthesis, Copenhagen, Denmark, 1 June 2002; pp. 109–112. [Google Scholar]
  46. Qazi, I.; Alata, O.; Burie, J.C.; Moussa, A.; Fernandez, C. Choice of a pertinent color space for color texture characterization using parametric spectral analysis. Pattern Recognit. 2011, 44, 16–31. [Google Scholar] [CrossRef]
  47. Iakovidis, D.; Maroulis, D.; Karkanis, S. A comparative study of color-texture image features. In Proceedings of the 12th International Workshop on Systems, Signals & Image Processing (IWSSIP’05), Chalkida, Greece, 22–24 September 2005; pp. 203–207. [Google Scholar]
  48. Liu, P.; Guo, J.; Chamnongthai, K.; Prasetyo, H. Fusion of color histogram and lbp-based features for texture image retrieval and classification. Inf. Sci. 2017, 390, 95–111. [Google Scholar] [CrossRef]
  49. Maliani, A.D.E.; Hassouni, M.E.; Berthoumieu, Y.; Aboutajdine, D. Color texture classification method based on a statistical multi-model and geodesic distance. J. Vis. Commun. Image Represent. 2014, 25, 1717–1725. [Google Scholar] [CrossRef]
  50. Guo, J.-M.; Prasetyo, H.; Lee, H.; Yao, C.C. Image retrieval using indexed histogram of void-and-cluster block truncation coding. Signal Process. 2016, 123, 143–156. [Google Scholar] [CrossRef]
  51. Ledoux, A.; Losson, O.; Macaire, L. Color local binary patterns: compact descriptors for texture classification. J. Electron. Imaging 2016, 25, 1–12. [Google Scholar] [CrossRef]
  52. Xu, Q.; Yang, J.; Ding, S. Color texture analysis using the wavelet-based hidden markov model. Pattern Recognit. Lett. 2005, 26, 1710–1719. [Google Scholar] [CrossRef]
  53. Martínez, R.A.; Richard, N.; Fernandez, C. Alternative to colour feature classification using colour contrast ocurrence matrix. In Proceedings of the 12th International Conference on Quality Control by Artificial Vision SPIE, Le Creusot, France, 3–5 June 2015; pp. 1–9. [Google Scholar]
  54. Hammouche, K.; Losson, O.; Macaire, L. Fuzzy aura matrices for texture classification. Pattern Recognit. 2016, 53, 212–228. [Google Scholar] [CrossRef] [Green Version]
  55. Oliveira, M.W.D.; da Silva, N.R.; Manzanera, A.; Bruno, O.M. Feature extraction on local jet space for texture classification. Phys. A Stat. Mech. Appl. 2015, 439, 160–170. [Google Scholar] [CrossRef]
  56. Florindo, J.; Bruno, O. Texture analysis by fractal descriptors over the wavelet domain using a best basis decomposition. Phys. A Stat. Mech. Appl. 2016, 444, 415–427. [Google Scholar] [CrossRef]
  57. Sandid, F.; Douik, A. Dominant and minor sum and difference histograms for texture description. In Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS), Hammamet, Tunisia, 5–7 November 2016; pp. 1–5. [Google Scholar]
  58. Wang, J.; Fan, Y.; Li, N. Combining fine texture and coarse color features for color texture classification. J. Electron. Imaging 2017, 26, 9. [Google Scholar]
Figure 1. Classification accuracy R_d according to the number d of ranked histograms on Outex-TC-00013.
Figure 2. Classification accuracy R_d according to the number d of ranked histograms on USPTex.
Figure 3. Classification accuracy R_d according to the number d of ranked histograms on NewBarkTex.
Table 1. Summary of the terms and the scores used in feature selection and their corresponding histogram selection adaptation.
Dataset:
  Feature selection: dataset of N color texture images defined in a D-dimensional feature space.
  Histogram selection: dataset of N color texture images defined in a (Q × D)-dimensional histogram space.
Data matrix:
  Feature selection: $X = [x_{ir}]$, $i = 1, \ldots, N$, $r = 1, \ldots, D$, where $x_{ir}$ is the $r$th feature value of the $i$th image $I_i$.
  Histogram selection: $H = [h_i^r]$, $i = 1, \ldots, N$, $r = 1, \ldots, D$, where $h_i^r$ is the $r$th histogram extracted from the $i$th image $I_i$.
Row:
  Feature selection: $x_i = (x_{i1}, \ldots, x_{iD})$.
  Histogram selection: $h_i = [h_i^1 \cdots h_i^r \cdots h_i^D]$ with $h_i^r = (h_i^r(1), \ldots, h_i^r(k), \ldots, h_i^r(Q))$.
Column:
  Feature selection: $f_r = (x_{1r}, \ldots, x_{Nr})^T$.
  Histogram selection: $h^r = [h_1^r \cdots h_i^r \cdots h_N^r]^T$.
Selection:
  Feature selection: the most discriminant features $f_r$ among the D available ones.
  Histogram selection: the most discriminant histograms $h^r$ among the D available ones.
Distance:
  Feature selection: $(x_{ir} - x_{jr})^2$ is the squared Euclidean distance between the two images $I_i$ and $I_j$ using the considered feature $f_r$.
  Histogram selection: $J^2(h_i^r, h_j^r)$ is the squared Jeffrey distance between the two images $I_i$ and $I_j$ using the considered histogram $h^r$, with
  $J(h_i^r, h_j^r) = \sum_{k=1}^{Q} \left[ h_i^r(k) \log \frac{h_i^r(k)}{\frac{h_i^r(k) + h_j^r(k)}{2}} + h_j^r(k) \log \frac{h_j^r(k)}{\frac{h_i^r(k) + h_j^r(k)}{2}} \right]$
Similarity:
  Feature selection: $s_{ij} = \exp\left(-\frac{\|x_i - x_j\|^2}{2 t^2}\right)$ evaluates the similarity between the images $I_i$ and $I_j$ in the D-dimensional input space.
  Histogram selection: $S(h_i, h_j) = \sum_{k=1}^{Q \times D} \min(h_i(k), h_j(k))$ evaluates the similarity between the images $I_i$ and $I_j$ in the (Q × D)-dimensional input space using the histogram intersection.
Mean:
  Feature selection: $\mu_r = \frac{1}{N} \sum_{i=1}^{N} x_{ir}$.
  Histogram selection: $\bar{h}^r = (\bar{h}^r(1), \ldots, \bar{h}^r(k), \ldots, \bar{h}^r(Q))$ with $\bar{h}^r(k) = \frac{1}{N} \sum_{i=1}^{N} h_i^r(k)$.
Variance score:
  Feature selection: $V_r = \frac{1}{N} \sum_{i=1}^{N} (x_{ir} - \mu_r)^2$.
  Histogram selection: $AV_r = \frac{1}{N} \sum_{i=1}^{N} J^2(h_i^r, \bar{h}^r)$.
Degree:
  Feature selection: $d_i = \sum_{j=1}^{N} s_{ij}$.
  Histogram selection: $D_i = \sum_{j=1}^{N} S(h_i, h_j)$.
Weighted average:
  Feature selection: $\bar{f}_r = \frac{\sum_{i=1}^{N} x_{ir} d_i}{\sum_{i=1}^{N} d_i}$.
  Histogram selection: $\bar{a}^r = (\bar{a}^r(1), \ldots, \bar{a}^r(k), \ldots, \bar{a}^r(Q))$ with $\bar{a}^r(k) = \frac{\sum_{i=1}^{N} h_i^r(k) D_i}{\sum_{i=1}^{N} D_i}$.
Laplacian score:
  Feature selection: $L_r = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} (x_{ir} - x_{jr})^2 s_{ij}}{\sum_{i=1}^{N} (x_{ir} - \bar{f}_r)^2 d_i}$.
  Histogram selection: $AL_r = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} J^2(h_i^r, h_j^r) S(h_i, h_j)}{\sum_{i=1}^{N} J^2(h_i^r, \bar{a}^r) D_i}$.
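The adapted scores summarized in Table 1 can be sketched in numpy as follows (illustrative helper names; histograms are assumed L1-normalized, and `H_full` stands for the concatenated (Q × D)-bin histograms used by the intersection similarity):

```python
import numpy as np

def jeffrey(h1, h2, eps=1e-12):
    # Jeffrey divergence between two normalized histograms (Table 1).
    m = (h1 + h2) / 2.0
    return np.sum(h1 * np.log((h1 + eps) / (m + eps))
                  + h2 * np.log((h2 + eps) / (m + eps)))

def av_score(H_r):
    # Adapted Variance score: mean squared Jeffrey distance to the
    # mean histogram. H_r: (N, Q) array, one histogram per image.
    h_bar = H_r.mean(axis=0)
    return np.mean([jeffrey(h, h_bar) ** 2 for h in H_r])

def al_score(H_r, H_full):
    # Adapted Laplacian score: pairwise Jeffrey distances weighted by
    # the histogram-intersection similarity, normalized by the
    # degree-weighted spread around the weighted mean histogram.
    N = H_full.shape[0]
    S = np.array([[np.minimum(H_full[i], H_full[j]).sum()
                   for j in range(N)] for i in range(N)])
    Dg = S.sum(axis=1)                                   # degrees D_i
    a_bar = (H_r * Dg[:, None]).sum(axis=0) / Dg.sum()   # weighted mean
    num = sum(jeffrey(H_r[i], H_r[j]) ** 2 * S[i, j]
              for i in range(N) for j in range(N))
    den = sum(jeffrey(H_r[i], a_bar) ** 2 * Dg[i] for i in range(N))
    return num / den
```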
Table 2. Accuracy R d ^ (%) reached with the d ^ -dimensional selected local binary pattern (LBP) histogram subspace, according to the different supervised and unsupervised scores on the Outex-TC-00013 set (the dimension of the histogram space is D × Q = 9 × 256 without selection).
              AV-score       AL-score       ASL-score      ICS-score      Without selection
Color space   R_d̂       d̂    R_d̂       d̂    R_d̂       d̂    R_d̂       d̂    R
RGB           93.25%    8    93.38%    8    93.38%    8    92.94%    9    92.94%
YUV           89.56%    9    91.03%    7    91.03%    7    89.56%    9    89.56%
I1I2I3        88.67%    8    88.82%    8    88.97%    6    88.97%    8    88.68%
HSV           90.44%    9    91.91%    5    91.91%    5    91.03%    8    90.44%
Table 3. Accuracy R d ^ (%) reached with the d ^ -dimensional selected LBP histogram subspace, according to the different supervised and unsupervised scores on the USPTex set (the dimension of the histogram space is D × Q = 9 × 256 without selection).
              AV-score       AL-score       ASL-score      ICS-score      Without selection
Color space   R_d̂       d̂    R_d̂       d̂    R_d̂       d̂    R_d̂       d̂    R
RGB           89.53%    9    90.92%    5    91.27%    4    90.58%    7    89.53%
YUV           76.79%    9    93.19%    3    93.19%    3    93.19%    3    76.79%
I1I2I3        75.31%    9    92.06%    3    92.06%    3    92.06%    3    75.31%
HSV           83.25%    9    90.40%    3    90.40%    3    88.92%    5    83.35%
Table 4. Accuracy R d ^ (%) reached with the d ^ -dimensional selected LBP histogram subspace, according to the different supervised and unsupervised scores on the NewBarkTex set (the dimension of the histogram space is D × Q = 9 × 256 without selection).
              AV-score       AL-score       ASL-score      ICS-score      Without selection
Color space   R_d̂       d̂    R_d̂       d̂    R_d̂       d̂    R_d̂       d̂    R
RGB           73.16%    9    81.37%    4    81.37%    4    81.37%    4    73.16%
YUV           71.81%    9    79.17%    7    79.17%    7    79.17%    7    71.81%
I1I2I3        71.68%    9    79.41%    7    79.41%    7    79.41%    7    71.69%
HSV           70.59%    9    81.00%    3    81.00%    3    81.00%    3    70.59%
Table 5. Histogram ranks using the proposed scores with the different color spaces and for the three databases.
Color space   Score       OuTex                USPTex               BarkTex
RGB           AV-score    2 4 3 6 8 7 1 5 9    5 4 6 8 7 9 2 3 1    3 7 6 8 4 2 5 1 9
              AL-score    9 1 5 8 7 6 3 4 2    1 3 2 9 7 8 6 4 5    9 1 5 2 4 8 6 7 3
              ASL-score   9 1 5 8 7 6 4 3 2    1 2 3 7 4 9 6 8 5    9 5 1 2 4 8 6 7 3
              ICS-score   8 7 1 9 5 3 4 2 6    3 1 2 8 7 9 4 5 6    9 1 5 2 8 4 6 7 3
YUV           AV-score    8 4 6 2 7 3 1 9 5    8 7 9 4 5 6 1 3 2    8 6 4 7 2 3 9 5 1
              AL-score    5 9 1 3 7 6 2 4 8    3 2 1 4 5 6 7 9 8    3 1 2 5 9 7 4 6 8
              ASL-score   1 9 5 6 8 3 7 2 4    3 2 1 4 5 6 9 7 8    3 2 7 4 1 5 9 6 8
              ICS-score   3 6 7 8 2 1 4 9 5    3 2 1 5 4 6 9 7 8    3 2 7 4 1 5 9 6 8
I1I2I3        AV-score    8 6 7 4 3 2 1 5 9    8 7 9 5 6 4 1 2 3    8 6 4 7 2 5 3 9 1
              AL-score    9 5 1 2 4 3 7 6 8    3 1 2 5 4 6 9 7 8    3 2 1 5 9 7 4 6 8
              ASL-score   1 9 5 6 8 2 3 4 7    2 3 1 6 4 5 9 7 8    1 3 2 5 9 7 4 6 8
              ICS-score   2 4 3 6 7 8 1 9 5    3 2 1 5 4 6 9 8 7    3 2 7 4 1 5 9 6 8
HSV           AV-score    3 2 6 8 7 4 1 5 9    6 4 7 9 5 8 1 2 3    8 7 2 4 6 3 1 9 5
              AL-score    9 5 1 7 4 3 2 8 6    3 2 1 8 7 4 5 9 6    5 9 1 4 2 3 6 7 8
              ASL-score   1 5 9 8 6 7 4 3 2    2 3 1 7 4 9 8 6 5    5 1 9 4 2 3 6 7 8
              ICS-score   7 8 6 1 3 4 9 5 2    3 2 7 4 1 8 5 9 6    5 1 9 6 2 4 3 8 7
Table 6. Comparison between the classification accuracies reached with the Outex-TC-00013 set.
Features                                                      Color space   Classifier   R (%)
3D-adaptive sum and difference histograms [9]                 ISH           SVM          95.8
3D color histogram [43]                                       HSV           1-NN         95.4
Fractal descriptors [44]                                      RGB           LDA          95.0
EOCLBP with selection thanks to the AL-score                  RGB           SVM          94.9
Haralick features [5]                                         RGB           5-NN         94.9
3D color histogram [45]                                       RGB           3-NN         94.7
3D color histogram [46]                                       I-HLS         1-NN         94.5
Haralick features [11]                                        RGB           1-NN         94.1
EOCLBP/C [47]                                                 HSV           SVM          93.5
EOCLBP with selection thanks to the AL-score                  RGB           1-NN         93.4
EOCLBP with selection thanks to the ASL-score [28]            RGB           1-NN         93.4
EOCLBP [27]                                                   RGB           1-NN         92.9
Reduced size chromatic co-occurrence matrices [4]             HLS           1-NN         92.5
Between color component LBP histogram [43]                    RGB           1-NN         92.5
Color histogram + LBP-based features [48]                     RGB           1-NN         90.3
Wavelet coefficients [49]                                     L*a*b*        BDC          89.7
Autoregressive models + 3D color histogram [46]               I-HLS         1-NN         88.9
Halftoning local derivative pattern + color histogram [50]    RGB           1-NN         88.2
Autoregressive models [46]                                    L*a*b*        1-NN         88.0
Within color component LBP histogram [43]                     RGB           1-NN         87.8
Mixed color order LBP [51]                                    RGB           1-NN         87.1
Features from wavelet transform [52]                          RGB           7-NN         85.2
Color contrast occurrence matrix [53]                         RGB           1-NN         82.6
Fuzzy aura matrices [54]                                      RGB           1-NN         80.2
SVM: Support Vector Machine, LDA: Linear Discriminant Analysis, BDC: Bayes Decision Classifier.
Table 7. Comparison between the classification accuracies reached with the USPTex set.
Features                                                      Color space   Classifier   R (%)
Color histogram + LBP-based features [48]                     RGB           1-NN         95.9
Local jet + LBP [55]                                          Luminance     LDA          94.3
Halftoning local derivative pattern + color histogram [50]    RGB           1-NN         93.9
EOCLBP with selection thanks to the AL-score                  YUV           1-NN         93.2
EOCLBP with selection thanks to the AL-score                  YUV           SVM          87.9
Fractal descriptors [56]                                      Luminance     LDA          85.6
Mixed color order LBP [51]                                    RGB           1-NN         84.2
Table 8. Comparison between the classification accuracies reached with the NewBarktex set.
Features                                                      Color space   Classifier   R (%)
Dominant and minor sum and difference histograms [57]         RGB           SVM          89.6
EOCLBP with selection thanks to the AL-score                  RGB           SVM          84.9
Fine texture and coarse color features [58]                   HSV           NSC          84.3
3D-adaptive sum and difference histograms [9]                 RGB           SVM          82.1
EOCLBP with selection thanks to the AL-score                  RGB           1-NN         81.4
EOCLBP with selection thanks to the ICS-score [27]            RGB           1-NN         81.4
EOCLBP with selection thanks to the ASL-score [28]            RGB           1-NN         81.4
Mixed color order LBP [51]                                    RGB           1-NN         77.7
NSC: Nearest Subspace Classifier.
