Article

Partial Order Rank Features in Colour Space

by Fabrizio Smeraldi 1,2,*, Francesco Bianconi 2, Antonio Fernández 3 and Elena González 3

1 School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London E1 4NS, UK
2 Department of Engineering, Università degli Studi di Perugia, Via G. Duranti 93, 06125 Perugia, Italy
3 Department of Engineering Design, Universidade de Vigo, Rúa Maxwell s/n, 36310 Vigo, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(2), 499; https://doi.org/10.3390/app10020499
Submission received: 26 November 2019 / Revised: 20 December 2019 / Accepted: 31 December 2019 / Published: 10 January 2020
(This article belongs to the Special Issue Texture and Colour in Image Analysis)

Abstract: Partial orders are the natural mathematical structure for comparing multivariate data that, like colours, lack a natural order. We introduce a novel, general approach to defining rank features in colour spaces based on partial orders, and show that it is possible to generalise existing rank-based descriptors by replacing the order relation over intensity values by suitable partial orders in colour space. In particular, we extend a classical descriptor (the Texture Spectrum) to work with partial orders. The effectiveness of the generalised descriptor is demonstrated through a set of image classification experiments on 10 datasets of colour texture images. The results show that the partial-order version in colour space outperforms the grey-scale classic descriptor while maintaining the same number of features.

1. Introduction

It is, at first sight, peculiar that one of the most robust tools for image description, namely rank features, has seen only limited application to colour images. The problem is, of course, that while they are very effective at dealing with noise, rank features run afoul of the main theoretical difficulty associated with colour spaces—namely, the absence of a natural order.
In recent years there has been a revival of interest in the ranking of colour pixels. Notably, Ledoux et al. [1] published an extensive comparative study of the use of total orders as rank features for texture recognition. However, interest has been keenest in the field of colour morphology, where several solutions have been proposed—from adaptive orders that work around the ‘false colour problem’ to the natural mathematical structure for ordering higher-dimensional sets, that is, partial orders [2,3,4,5,6].
In partially-ordered sets we simply admit that there will be pairs of elements that are incomparable to each other. Partial orders are therefore particularly suitable for dealing with colour spaces, where statements like “yellow is greater than green” make little or no sense at all.
The objective of this work is to introduce a novel category of rank features based on partial orders. In the remainder, after providing some background on partial orders (Section 2), we detail the ways in which rank features can be defined (Section 2.5) and extend a classical descriptor (the Texture Spectrum) to work with partial orders (Section 3.1). We demonstrate the feasibility of the method through a set of experiments on 10 datasets of colour texture images (Section 3.2) and show that partial orders in colour space can outperform grey-scale total ordering (Section 4).

2. Background

2.1. Rank Features

Rank features are a well established technique for dealing with noise in images, enforcing invariance to all sorts of contrast or illumination variations and sensor nonlinearities [7]. Because of their robustness, they were first developed in the context of wide-baseline stereo matching—see for instance the census and rank transforms [8]. More recently, descriptors in the popular Local Binary Pattern (LBP) family, including the Texture Spectrum, Binary Gradient Contours, etc. [9,10], have turned rank features into a general-purpose tool, with applications—among others—in texture classification, face recognition, surface inspection and content-based image retrieval [11]. The descriptive power of rank features has also been expanded to explicitly capture orientation (Ranklets [12]) and second-order stimuli (Variance Ranklets [13])—types of information that were previously seen as the preserve of linear filters or ad hoc algorithms.
Common to all rank features is the fact that they are defined in terms of ordinal information between pixels only, with the actual pixel values being discarded. This can be done in terms of pairwise pixel comparisons (rank and census transform, LBP), pixel ranks (Ranklets) or a permutation of ranks (Variance Ranklets), but it is easy to see that these formulations are equivalent [12] and that all such descriptors rely on the natural order relation (≤) between pixel values.
Before proceeding to definitions it is worth noting that, notwithstanding the trend towards the use of convolutional neural networks as feature extractors [14], rank features are still competitive in texture applications [15]. In the following section we recall the axioms for an order relation.

2.2. Order Relations

An order relation is an abstraction of the common notion of “greater than” used to compare numerical values, in our case pixel values in P (typically the set of 8-bit intensity values). In order to be called a (total) order, a binary relation ≤ needs to satisfy the following four conditions:
Definition 1 (Order axioms). For all (x, y, z) ∈ P³:
1. x ≤ x (reflexivity).
2. if x ≤ y and y ≤ x, then y = x (antisymmetry).
3. if x ≤ y and y ≤ z, then x ≤ z (transitivity).
4. either x ≤ y or y ≤ x (totality).
The last condition guarantees that we know how to compare any pair of pixel values.

2.3. Ordering High-Dimensional Data

The application of rank features to multi-channel images or higher-dimensional data is hindered by the fact that there is no natural way of ordering multivariate data. It is certainly possible to provide a total order for a colour space; for instance, one could order RGB data lexicographically using the R channel as the primary sorting key, followed by G and finally by B. However, like other similar options, this has a disadvantage: colours that are very close to each other in colour space may be very far apart in the order, which makes it of limited practical interest (a sub-relation of the lexicographical order, the product order, is indeed of practical interest and will be discussed in detail in this paper; see Section 2.5.1). In general, it is best to resort to some sort of sub-ordering principle. These can be broadly divided into four categories [16]:
  • Marginal ordering (M-ordering).
  • Reduced (aggregate) ordering (R-ordering).
  • Conditional (sequential) ordering (C-ordering).
  • Partial ordering (P-ordering).
In marginal ordering, ranking is carried out on one or more components (marginals) of the multivariate data. Ranking colour data in the RGB space by the value of red is an example of M-ordering; lexicographical ordering is another one. Reduced (aggregate) ordering relies on converting multivariate data to univariate through suitable transformations. A common way to do this consists of establishing a reference point in the data space and using the distance from that point to rank the data. Conditional ordering occurs when we sort a random multivariate sample based on the corresponding (usually marginally-sorted) values of another sample. C-ordering is closely related to the concept of concomitants in Statistics [17]. Partial ordering will be discussed in detail in Section 2.5.
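As an illustration of the first two sub-ordering principles, the minimal sketch below (ours, not taken from the paper) ranks a handful of RGB pixels by a marginal key (the red channel) and by a reduced key (the distance from a reference colour); the mid-grey reference is purely an illustrative choice.

```python
import numpy as np

# A few RGB pixels (8-bit values) to be ranked.
pixels = np.array([[200, 30, 60], [10, 220, 90], [120, 120, 120]], dtype=float)

# M-ordering: rank by a single marginal, here the red channel.
m_rank = np.argsort(pixels[:, 0])

# R-ordering: reduce each pixel to a scalar, here the Euclidean
# distance from an arbitrary reference colour (mid-grey).
reference = np.array([128.0, 128.0, 128.0])
r_rank = np.argsort(np.linalg.norm(pixels - reference, axis=1))

print("M-ordering (by red):", m_rank)
print("R-ordering (distance from grey):", r_rank)
```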
Interestingly, many of the common ways of dealing with order in colour space fall under the first two categories, i.e., marginal and reduced (aggregate) ordering. For instance, ranking based on intensity can be seen as a marginal ordering of the HSV space along the V axis; or as a reduced or aggregate ordering over the RGB space, where the aggregating function is the grey-level intensity. Other examples of aggregating orders will be given in Section 3.1.
In this paper, we will focus on rank features based on partial orders. Before introducing these, we review recent approaches to using multivariate orders on colour images.

2.4. Rank-Based Approaches to Colour Processing

Previous approaches to rank-based colour features typically extend grey-scale rank-based methods to the colour domain by considering the colour channels separately (intra-channel features) and/or in pairwise combination (inter-channel features). Mäenpää and Pietikäinen [18], for instance, extended classic LBP by applying it to each of the R, G and B channels separately and to each of the R–G, R–B and G–B channel pairs. Bianconi et al. [19] adopted the same approach for extending grey-scale ranklets [12] to the colour domain. Lee et al. [20] defined Local Colour Vector Binary Patterns (LCVBP) by decomposing the colour triplets into a norm and an angular component and by computing LBP on each of them. More recently, Cusano et al. [21] introduced Local Angular Patterns (LAP), which consider the angular component only and discard the norm part altogether.
Another possible strategy consists of establishing some sort of a priori total ordering on the colour data. This approach is not uncommon in colour morphology—see for instance Angulo [4] and van De Gronde and Roerdink [6]—and has been advocated for extending LBP to colour images by Barra [22]. Of late, this family of methods has been extensively investigated by Ledoux et al. [1] and Bello-Cerezo et al. [23]. The problem is that imposing a total ordering on the colour data inevitably entails a certain degree of arbitrariness, with the consequence that the results tend to be dataset-dependent. On the other hand, morphology for tensor-valued images (which arise from certain magnetic resonance techniques) has relied on the Loewner order, which is in fact a partial order (see for instance Burgeth et al. [24]; more on this in Section 2.5.2). More recently, this approach has been extended to morphology for colour images by Burgeth and Kleefeld [5]. Partial ordering circumvents the problem of ordering multivariate data totally, at the expense of not allowing comparisons between some colour values.

2.5. Partial Orders

A partial order differs from a total order in that the fourth axiom in Definition 1 is waived, i.e., there are pairs of elements in the set that are incomparable. In order to distinguish it from a total order we use the notation x ⪯ y. If the elements x and y are incomparable, we shall write x ∥ y.
In the following, we describe two types of partial orders that are applicable to colour spaces with Cartesian and polar coordinates respectively.

2.5.1. Product Order

By product order we mean the relation obtained from the component-wise comparison of colour values. Given two triplets $u = (c_1^u, c_2^u, c_3^u)$ and $v = (c_1^v, c_2^v, c_3^v)$ representing colours in a generic space, we write:

$$u \preceq_{\times} v \ \text{ if } \ c_1^u \le c_1^v,\ c_2^u \le c_2^v,\ c_3^u \le c_3^v; \qquad u \parallel_{\times} v \ \text{ if neither } u \preceq_{\times} v \text{ nor } v \preceq_{\times} u. \tag{1}$$
Note that this is a subset of the lexicographical order introduced in Section 2.3; it is, however, of higher practical interest as it treats all three channels symmetrically. In the RGB space, for instance, a given colour u weakly dominates the rectangular parallelepiped C(u) with three edges along the axes and a vertex in the colour itself (see Figure 1). Any colour v that does not dominate all of C(u) does not dominate u itself, i.e., u ⋠ v.
The product order can, of course, be applied to any colour space, giving relations of varying degrees of interpretability and effectiveness for pattern recognition (see Section 3.1 and Section 4).
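As a concrete illustration, here is a minimal sketch (ours, not the authors' implementation) of a product-order comparator following Equation (1); the −1/0/+1/None return convention is just a convenience adopted for the code examples in this article.

```python
import numpy as np

def product_order(u, v):
    """Compare two colour triplets under the product order (Equation (1)).

    Returns -1 if u strictly precedes v, +1 if u strictly succeeds v,
    0 if the two colours are equal, and None if they are incomparable.
    """
    u, v = np.asarray(u), np.asarray(v)
    if np.array_equal(u, v):
        return 0
    if np.all(u <= v):
        return -1
    if np.all(v <= u):
        return 1
    return None  # incomparable: u || v

# The second colour dominates the first; the third is incomparable with the first.
print(product_order((10, 20, 30), (40, 50, 60)))  # -1
print(product_order((10, 20, 30), (5, 50, 60)))   # None
```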

2.5.2. Loewner Order

The Loewner (partial) order is defined on symmetric matrices. Given two symmetric matrices A and B, we write:

$$A \preceq_{L} B \ \text{ if } \ B - A \in S^{+}; \qquad A \parallel_{L} B \ \text{ if neither } A \preceq_{L} B \text{ nor } B \preceq_{L} A \tag{2}$$

where $S^{+}$ indicates the set of positive semi-definite matrices. Applying this to a colour space requires mapping colour values to symmetric matrices. Following [5], we start from a modified colour space HC$\tilde{L}$ obtained from HSL ([25], Section 4.6) by setting $\tilde{L} = 2L - 1$ for the (modified) luminance and replacing saturation with the chroma $C = \max\{R, G, B\} - \min\{R, G, B\}$. The resulting colour gamut fills a bicone with axis $\tilde{L}$ and opening angle 90°. We isometrically map colours to the space Sym(2) of symmetric 2 × 2 matrices by setting [5]:

$$M\left(h, c, \tilde{l}\right) = \frac{1}{2}\begin{pmatrix} \tilde{l} - c & h \\ h & \tilde{l} + c \end{pmatrix}. \tag{3}$$
For two colours $u = (h_u, c_u, \tilde{l}_u)$ and $v = (h_v, c_v, \tilde{l}_v)$ in the HC$\tilde{L}$ space we therefore write:

$$u \preceq_{L} v \ \text{ if } \ M(h_u, c_u, \tilde{l}_u) \preceq_{L} M(h_v, c_v, \tilde{l}_v); \qquad u \parallel_{L} v \ \text{ if neither } u \preceq_{L} v \text{ nor } v \preceq_{L} u \tag{4}$$

where $\preceq_{L}$ is defined in Equation (2). Geometrically (Figure 2), a given colour v weakly dominates all colours of lower luminance that fall in a cone with its vertex in v and its axis parallel to the $\tilde{L}$ axis.
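A minimal sketch (ours) of the Loewner comparator follows; it uses the matrix embedding of Equation (3) as reconstructed above and tests positive semi-definiteness through the eigenvalues of the difference matrix. The numerical tolerance and the toy input triplets are illustrative assumptions, not values from the paper, and no actual HSL-to-HC$\tilde{L}$ conversion is performed here.

```python
import numpy as np

def to_sym2(h, c, l_tilde):
    """Map an HC~L colour to a symmetric 2x2 matrix (Equation (3))."""
    return 0.5 * np.array([[l_tilde - c, h],
                           [h, l_tilde + c]])

def loewner_leq(A, B, tol=1e-12):
    """A precedes B in the Loewner order iff B - A is positive semi-definite."""
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

def loewner_order(u, v):
    """Compare two (h, c, l_tilde) colours (Equation (4)).
    Same -1/0/+1/None convention as the product-order sketch above."""
    Mu, Mv = to_sym2(*u), to_sym2(*v)
    le, ge = loewner_leq(Mu, Mv), loewner_leq(Mv, Mu)
    if le and ge:
        return 0
    if le:
        return -1
    if ge:
        return 1
    return None  # incomparable

# A much darker colour with similar hue and chroma is dominated;
# a colour at a very different hue is incomparable.
print(loewner_order((0.1, 0.2, -0.5), (0.1, 0.2, 0.5)))  # -1
print(loewner_order((0.9, 0.2, 0.4), (0.1, 0.2, 0.5)))   # None
```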

3. Materials and Methods

3.1. Rank Features on Partial Orders

In this section we show how to generalise existing rank-based descriptors by replacing total order in grey-scale with suitable partial orders in colour space. In the remainder we shall use the Texture Spectrum [26] as our reference model—though other descriptors such as Local Binary Patterns and Local Ternary Patterns are amenable to the same procedure with virtually no effort.
In Texture Spectrum, a local image pattern $P = (p_0, p_1, \ldots, p_n)$ is assigned a unique decimal code as follows:

$$f_{\mathrm{TS}}(P) = \sum_{i=1}^{n} 3^{i}\, \tau\!\left(g(p_0), g(p_i)\right) \tag{5}$$

where $p_0$ represents the central pixel and $p_i$, $i \in \{1, \ldots, n\}$, the peripheral pixels, which we assume to be arranged on a circle around the central pixel. We also assume that p represents a point in a 3D colour space, though again the extension to multi-spectral data is straightforward. In Equation (5) the function $g(\cdot)$ stands for a generic conversion from colour into grey-scale, whereas $\tau(w, z)$ indicates the ternary thresholding function:

$$\tau(w, z) = \begin{cases} 0 & \text{if } w < z \\ 1 & \text{if } w = z \\ 2 & \text{if } w > z. \end{cases} \tag{6}$$
An image is represented by the dense, orderless statistical distribution over the set of possible codes. For the Texture Spectrum, the number of (directional) features generated by the method is clearly $3^n$. Invariance under rotations and/or reflections can be obtained by grouping together all those codes that represent patterns which can be transformed into one another by such transforms. The corresponding mathematical structures are necklaces and bracelets, respectively for invariance under rotations (i.e., the cyclic group $C_n$) and under rotations plus reflections (i.e., the dihedral group $D_n$). For general formulas giving the number of resulting $C_n$- and $D_n$-invariant features and for other mathematical details please refer to González et al. [27] and Zelenyuk and Zelenyuk [28]. Specifically, for n = 8 (which is the case considered herein—see below) the number of features is respectively 834 and 498.
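To make the construction concrete, the sketch below (ours, not the authors' code) computes the Texture Spectrum code of a single 3 × 3 grey-scale neighbourhood following Equations (5)–(6), and maps a code to a rotation-invariant (necklace, $C_n$) representative by taking the smallest of its cyclic rotations. The neighbour ordering and the power convention (powers $3^0$ to $3^7$) are illustrative choices; any consistent convention yields unique codes.

```python
import numpy as np

def ts_code(grey_patch):
    """Texture Spectrum code of a 3x3 grey-scale patch (Equations (5)-(6)).

    The centre pixel is compared with its 8 neighbours, listed clockwise
    from the top-left corner; each comparison yields a ternary digit
    (0: centre smaller, 1: equal, 2: centre greater).
    """
    g = np.asarray(grey_patch, dtype=float)
    centre = g[1, 1]
    ring = [g[0, 0], g[0, 1], g[0, 2], g[1, 2],
            g[2, 2], g[2, 1], g[2, 0], g[1, 0]]
    digits = [0 if centre < p else 1 if centre == p else 2 for p in ring]
    return sum(d * 3 ** i for i, d in enumerate(digits))

def necklace_code(code, n=8):
    """C_n-invariant representative: the smallest code among the n rotations
    of the ternary digit string (a necklace). For D_n invariance one would
    also consider the reversed string (a bracelet)."""
    digits = [(code // 3 ** i) % 3 for i in range(n)]
    rotations = (digits[k:] + digits[:k] for k in range(n))
    return min(sum(d * 3 ** i for i, d in enumerate(r)) for r in rotations)

patch = [[12, 40, 40], [12, 25, 40], [12, 12, 25]]
print(ts_code(patch), necklace_code(ts_code(patch)))
```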
A ternary rank feature for partially ordered data analogous to Texture Spectrum—the Partial Order Texture Spectrum (POTS)—can easily be defined in the following way:
$$f_{\mathrm{POTS}}(P) = \sum_{i=1}^{n} 3^{i}\, \varphi(p_0, p_i) \tag{7}$$

$$\varphi(u, v) = \begin{cases} 0 & \text{if } u \prec v \\ 1 & \text{if } u \parallel v \\ 2 & \text{if } u \succ v \end{cases} \tag{8}$$

where ⪯ indicates a generic partial order relation in the colour space (see Section 2.5). Notably, the number of features generated by this formulation is the same as that generated by the Texture Spectrum.
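The same neighbourhood scan can be reused for POTS by plugging a partial-order comparator in place of the scalar comparison, as in the following sketch (ours). One detail left implicit here is how exact equality of two colours is encoded; in the sketch we lump it together with the incomparable case (digit 1), by analogy with the 'equal' case of Equation (6).

```python
import numpy as np

def pots_code(colour_patch, compare):
    """Partial Order Texture Spectrum code of a 3x3 colour patch
    (Equations (7)-(8)).

    `colour_patch` is a 3x3x3 array of colour triplets and `compare(u, v)`
    is any partial-order comparator returning -1, 0, +1 or None, e.g. the
    `product_order` or `loewner_order` sketches given earlier.
    """
    p = np.asarray(colour_patch, dtype=float)
    centre = p[1, 1]
    ring = [p[0, 0], p[0, 1], p[0, 2], p[1, 2],
            p[2, 2], p[2, 1], p[2, 0], p[1, 0]]
    digits = []
    for q in ring:
        c = compare(centre, q)
        # 0: centre strictly precedes the neighbour, 2: strictly succeeds,
        # 1: incomparable (or equal, by our convention).
        digits.append(0 if c == -1 else 2 if c == 1 else 1)
    return sum(d * 3 ** i for i, d in enumerate(digits))

# Usage (with the product-order comparator defined above):
# code = pots_code(rgb_patch, product_order)
```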
In the experiments we considered the following partial order/colour space combinations: product order (Section 2.5.1) in the RGB, Ohta’s and opponent spaces [25]; Loewner order (Section 2.5.2) in the HC$\tilde{L}$ space. When reporting experimental results we use subscripts ‘RGB’, ‘ohta’ and ‘opp’ to indicate the colour spaces, and superscripts × and L respectively for the product and Loewner orders (see Equations (1) and (4)). No superscript was used to indicate the natural total order on grey-scale values.
Conversion from RGB to grey-scale was also performed in three different ways: (1) through the standard PAL/NTSC formula ([25], Section 4.3.1); (2) by computing the average of the three channels; and (3) by determining, for each image, the principal axes of the colour distribution in the RGB space and projecting each (r, g, b) triplet onto the first axis. In the remainder we denote the corresponding variations of the Texture Spectrum respectively as TS_grey, TS_μ and TS_p1.
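The three conversions can be sketched as follows (our illustration; the 0.299/0.587/0.114 weights are the usual PAL/NTSC luma coefficients, assumed here to be the ones intended by the cited reference).

```python
import numpy as np

def grey_pal_ntsc(img):
    """Standard PAL/NTSC luma: Y = 0.299 R + 0.587 G + 0.114 B."""
    return img @ np.array([0.299, 0.587, 0.114])

def grey_mean(img):
    """Average of the three channels."""
    return img.mean(axis=-1)

def grey_pca(img):
    """Projection of each (r, g, b) triplet onto the first principal axis
    of the image's own colour distribution."""
    pixels = img.reshape(-1, 3).astype(float)
    centred = pixels - pixels.mean(axis=0)
    # Eigenvector of the colour covariance matrix with the largest eigenvalue.
    _, vecs = np.linalg.eigh(np.cov(centred, rowvar=False))
    return (centred @ vecs[:, -1]).reshape(img.shape[:2])
```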
Finally, we computed $C_n$- and $D_n$-invariant features over 3 × 3, non-interpolated, square neighbourhoods of radius 1 px and 2 px. The overall feature vector was obtained by concatenating the feature vectors computed at each resolution—see also González et al. [27] for details. These settings respectively generate 834 × 2 = 1668 and 498 × 2 = 996 features.

3.2. Experiments

To test the effectiveness of the partial-order rank features described in Section 3.1 we ran a set of supervised image classification experiments. Datasets, classification strategy and accuracy estimation are described in the following subsections.

3.3. Datasets

We used ten datasets of colour texture images from different sources as described below. The main properties of each dataset are summarised in Table 1.

3.3.1. Epistroma

Contains 1376 histopathological images from colorectal cancer representing either epithelium (825 images) or stroma (551 images). The image size ranges from 93 px to 2372 px in width and from 94 px to 2373 px in height. Further details about tissue preparation and digitisation procedure are available in Linder et al. [29].

3.3.2. KTH-TIPS

Includes 10 classes of common materials (e.g., aluminum foil, bread, corduroy, etc.) with 81 image samples for each class [30,31]. Each material was acquired under nine scales, three rotation angles and three lighting directions.

3.3.3. KTH-TIPS2b

Features 11 classes of materials (432 sample images per class) and is actually an extension of KTH-TIPS. The image acquisition settings were the same as in KTH-TIPS, but four rather than three illumination conditions were used in this case [32].

3.3.4. Kylberg–Sintorn

Is composed of 25 classes of heterogeneous materials, such as food (e.g., lentils, oatmeal and sugar), fabric (e.g., knitwear and towels) and tiles [33,34]. For each class one sample image was acquired under invariable illumination conditions and nine different rotation angles, of which only the images at 0° were included in our experiments. Each image was further subdivided into six non-overlapping sub-images of dimension 1728 × 1728 px.

3.3.5. MondialMarmi

Comprises 25 classes of marble and granite products identified by their commercial denominations, e.g., Azul Platino, Bianco Sardo, Rosa Porriño and Verde Bahía [35]. Each class is represented by four tiles; ten images for each tile were acquired under steady illumination conditions and at rotation angles from 0° to 90° in steps of 10°. In the experiments we only used the images at 0°; moreover, we subdivided each image into four non-overlapping sub-images, therefore obtaining 16 image samples for each class.

3.3.6. OUTEX-13 and OUTEX-14

Are based on the same sets of images that respectively make up the OUTEX_TC_00013 and OUTEX_TC_00014 test suites—see Ojala et al. [36] for details. Specifically, OUTEX-13 features 68 classes of materials with 20 images per class acquired under invariable illumination conditions; OUTEX-14 contains the same classes—but in this case the image samples were acquired under three different illumination conditions—therefore there are 60 samples per class. Note, however, that in order to maintain the same evaluation protocol for all the datasets considered here (see Section 3.4), the subdivisions into train and test sets used in our experiments were not the same as in the OUTEX_TC_00013 and OUTEX_TC_00014 test suites.

3.3.7. Pap Smear

Consists of 917 PAP-stained images of variable dimension representing cells from the cervix [37]. The images represent either abnormal cases (675 samples) or normal cases (242 samples). The dataset also comes with a further subdivision into seven sub-classes, which was not considered in our experiments. The image size ranges from 84 × 88 px to 392 × 262 px. In our experiments we considered a balanced sub-set containing 204 samples for each of the two classes.

3.3.8. Plant Leaves

Includes a total of 1200 samples of plant leaves from 20 different classes with 60 samples per class [38]. The images were acquired using a planar scanner and have a dimension of 128 × 128 px.

3.3.9. RawFooT

Comprises 68 classes of raw food and grains such as corn, chicken breast, pomegranate, salmon and tuna [39,40]. The materials were acquired under 46 different illumination conditions, resulting in as many image samples for each class. We further subdivided the images into four non-overlapping sub-images, thus obtaining 184 samples for each class. The dimension of the resulting image tiles was 400 × 400 px.

3.4. Classification and Accuracy Estimation

For each dataset described in Section 3.3 we performed supervised classification using a nearest-neighbour classifier (1-NN) with the $L_1$ (‘Manhattan’) distance. In detail, after extracting a feature vector from each image according to one of the descriptors tested, we computed the distance between such vectors as the sum of the absolute differences between their components. We then assigned each test vector to the class of the closest training vector. The absence of tuning parameters, the ease of implementation and other desirable asymptotic properties make the 1-NN particularly appealing for comparison purposes. Its use in related works is indeed customary: see for instance Cusano et al. [39], Kandaswamy et al. [41] and Liu et al. [42].
Accuracy estimation was based on split-half validation with stratified sampling: for each dataset we used half of the samples of each class to train the classifier (train set) and the other half (test set) to compute the accuracy. This was defined as the ratio between the number of samples of the test set correctly classified ($N_c$) and the total number of samples in the test set (N):

$$a = \frac{N_c}{N}. \tag{9}$$

For a stable estimate we averaged the above value over one hundred different subdivisions into train and test set:

$$\hat{a} = \frac{1}{100}\sum_{i=1}^{100} a_i \tag{10}$$

where $a_i$ indicates the accuracy achieved in the i-th subdivision into train and test set. In Table 2 we report the 95% confidence intervals for $\hat{a}$ (computed under the simplifying assumption of a normal distribution).
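A minimal sketch of this evaluation protocol, using scikit-learn for the stratified split-half sampling and the 1-NN classifier, is shown below. It is our illustration under the stated assumptions (100 random splits, normal approximation for the confidence interval); the authors' actual implementation is not specified in the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.neighbors import KNeighborsClassifier

def estimate_accuracy(features, labels, n_splits=100, seed=0):
    """Split-half validation with stratified sampling, averaged over
    n_splits random subdivisions (Equations (9)-(10)), with a 1-NN
    classifier and the L1 ('Manhattan') distance."""
    splitter = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.5,
                                      random_state=seed)
    accuracies = []
    for train_idx, test_idx in splitter.split(features, labels):
        clf = KNeighborsClassifier(n_neighbors=1, metric='manhattan')
        clf.fit(features[train_idx], labels[train_idx])
        accuracies.append(clf.score(features[test_idx], labels[test_idx]))
    a = np.asarray(accuracies)
    # 95% confidence interval for the mean, normal approximation.
    half_width = 1.96 * a.std(ddof=1) / np.sqrt(n_splits)
    return a.mean(), (a.mean() - half_width, a.mean() + half_width)
```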

4. Results and Discussion

Table 2 reports the confidence intervals for the means of the overall classification accuracy (see Section 3.4). For each dataset we highlighted in orange the best result obtained by total-order grey-scale rank features, and in blue the best result obtained by partial-order rank features (POTS) in colour space. When there was a statistically significant difference between the two, the best figure was indicated in boldface. As can be seen, partial-order rank features in colour space performed significantly better in five datasets out of 10, whereas the reverse occurred in one dataset only (dataset six). In the remaining four datasets there was no significant difference between the two methods.
As for grey-scale rank features, the results show that in most cases (i.e., 8 datasets out of 10) the best performance was obtained using the standard PAL/NTSC grey-scale conversion. By contrast, partial-order rank features showed a higher dependence on the colour space used.
The computational cost of all the descriptors considered is roughly equivalent, as the number of features is the same and the complexity of computing a partial or total order comparison in colour space is comparable to the cost of a colour space transformation. Indeed, as we have just described, even the traditional TS requires a grey-scale conversion, the choice of which can be seen as an integral part of the descriptor.
In Table 3 we compare our results to published results obtained using rank-based descriptors in conjunction with other ordering methods in colour spaces. As can be seen, in most cases our partial-order based approach improves significantly over previous results. We should here emphasise that the computational requirements of our partial-order descriptors are not higher than those of the other ordering methods cited.

5. Conclusions and Future Work

The lack of a natural order among colours represents an intrinsic impediment to the definition of rank features in colour space. In this paper we have introduced a novel and general approach based on partial orders. Partial orders overcome the problems inherent to ordering multivariate data at the expense of admitting that not all pairs of colours can be compared to each other. We showed that this scheme fits in well with existing grey-scale local image descriptors, that are amenable to extension to the colour domain with little effort. Taking the Texture Spectrum as a model, we showed that its partial-order version in colour space (POTS) can outperform the grey-scale classic descriptor while maintaining the same number of features and with comparable computational complexity. Previous studies have also demonstrated that the use of colour can improve texture discrimination, but at the expense of employing a higher number of features [44,45,46]. Notably, our approach improves on published results that use descriptors based specifically on (total) colour space ordering (see Table 3).
To the best of our knowledge this is the first time that partial orders have been used to define rank features for pattern recognition. The method is conceptually simple, fairly general and shows potential for application in a wide number of computer vision tasks. Future studies will be focussed on extending the approach to the broader class of descriptors known as Histograms of Equivalent Patterns [9]. The effect of the colour space on the performance of rank features based on partial orders is also an important topic for further investigation. Finally, the insertion of partial order based algorithms in more involved image processing pipelines (e.g., convolutional neural networks) also represents an interesting opportunity for future research; integration at the level of matching [47] has so far been successful.

Author Contributions

Conceptualization, F.S. and F.B.; Formal analysis, F.S., F.B. and A.F.; Methodology, F.S., F.B., A.F. and E.G.; Software, F.S. and F.B.; Validation, F.S., F.B., A.F. and E.G.; Visualization, F.B. and A.F.; Writing—original draft, F.S. and F.B.; Writing—review & editing, A.F. and E.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Spanish Government under projects AGL2014-56017-R and TIN2014-56919-C3-2-R, and by the Department of Engineering at the Università degli Studi di Perugia (UniPG Eng), Italy, under the project Machine learning algorithms for the control of autonomous mobile systems and the automatic classification of industrial products and biomedical images (Fundamental research grants 2017). F.S. performed part of this work as a Visiting Researcher at UniPG Eng. He gratefully acknowledges the support of UniPG under international mobility grant ‘D.R. n.2270/2015’.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ledoux, A.; Losson, O.; Macaire, L. Color local binary patterns: Compact descriptors for texture classification. J. Electron. Imaging 2016, 25, 061404. [Google Scholar] [CrossRef]
  2. Hanbury, A.; Serra, J. Mathematical morphology in the CIELAB space. Image Anal. Stereol. 2002, 21, 201–206. [Google Scholar] [CrossRef]
  3. Aptoula, E.; Lefèvre, S. A comparative study on multivariate mathematical morphology. Pattern Recognit. 2007, 40, 2914–2929. [Google Scholar] [CrossRef] [Green Version]
  4. Angulo, J. Morphological colour operators in totally ordered lattices based on distances: Application to image filtering, enhancement and analysis. Comput. Vis. Image Underst. 2007, 107, 56–73. [Google Scholar] [CrossRef]
  5. Burgeth, B.; Kleefeld, A. Morphology for color images via Loewner order for matrix fields. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2013; Volume 7883 LNCS, pp. 243–254. [Google Scholar]
  6. van De Gronde, J.; Roerdink, J. Group-invariant colour morphology based on frames. IEEE Trans. Image Process. 2014, 23, 1276–1288. [Google Scholar] [CrossRef] [PubMed]
  7. Hodgson, R.; Bailey, D.; Naylor, M.; Ng, A.; McNeill, S. Properties, implementations and applications of rank filters. Image Vis. Comput. 1985, 3, 3–14. [Google Scholar] [CrossRef]
  8. Zabih, R.; Woodfill, J. Non-parametric Local Transforms for Computing Visual Correspondence. In European Conference on Computer Vision; Springer: Stockholm, Sweden, 1994; pp. 151–158. [Google Scholar]
  9. Fernández, A.; Álvarez, M.X.; Bianconi, F. Texture description through histograms of equivalent patterns. J. Math. Imaging Vis. 2013, 45, 76–102. [Google Scholar] [CrossRef] [Green Version]
  10. Liu, L.; Fieguth, P.; Wang, X.; Pietikäinen, M.; Hu, D. Evaluation of LBP and deep texture descriptors with a new robustness benchmark. In Proceedings of the 14th European Conference on Computer Vision (ECCV 2016), Amsterdam, The Netherlands, 11–14 October 2016; Springer: Amsterdam, The Netherlands, 2016; Volume 9907, pp. 69–86. [Google Scholar]
  11. Brahnam, S.; Jain, L.; Nanni, L.; Lumini, A. (Eds.) Local Binary Patterns: New Variants and Applications; Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2014; Volume 506. [Google Scholar]
  12. Smeraldi, F. Ranklets: Orientation selective non-parametric features applied to face detection. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR’02), Quebec City, QC, Canada, 11–15 August 2002; Volume 3, pp. 379–382. [Google Scholar]
  13. Azzopardi, G.; Smeraldi, F. Variance Ranklets: Orientation-selective rank features for contrast modulations. In Proceedings of the British Machine Vision Conference, BMVC 2009, London, UK, 7–10 September 2009. [Google Scholar]
  14. Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BoW to CNN: Two decades of texture representation for texture classification. Int. J. Comput. Vis. 2019, 127, 74–109. [Google Scholar] [CrossRef] [Green Version]
  15. Bello-Cerezo, R.; Bianconi, F.; Di Maria, F.; Napoletano, P.; Smeraldi, F. Comparative Evaluation of Hand-Crafted Image Descriptors vs. Off-the-Shelf CNN-Based Features for Colour Texture Classification under Ideal and Realistic Conditions. Appl. Sci. 2019, 9, 738. [Google Scholar] [CrossRef] [Green Version]
  16. Barnett, V. The ordering of multivariate data. J. R. Stat. Soc. Ser. A (Gen.) 1976, 139, 318–355. [Google Scholar] [CrossRef]
  17. Yang, S. Distribution Theory of the Concomitants of Order Statistics. Ann. Stat. 1977, 5, 996–1002. [Google Scholar] [CrossRef]
  18. Mäenpää, T.; Pietikäinen, M. Texture analysis with local binary patterns. In Handbook of Pattern Recognition and Computer Vision, 3rd ed.; Chen, C., Wang, P., Eds.; World Scientific: Singapore, 2005; pp. 197–216. [Google Scholar]
  19. Bianconi, F.; Fernández, A.; González, E.; Armesto, J. Robust color texture features based on ranklets and discrete Fourier transform. J. Electron. Imaging 2009, 18, 043012. [Google Scholar]
  20. Lee, S.; Choi, J.; Ro, Y.; Plataniotis, K. Local color vector binary patterns from multichannel face images for face recognition. IEEE Trans. Image Process. 2012, 21, 2347–2353. [Google Scholar] [CrossRef] [PubMed]
  21. Cusano, C.; Napoletano, P.; Schettini, R. Local angular patterns for color texture classification. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2015; Volume 9281, pp. 111–118. [Google Scholar]
  22. Barra, V. Expanding the local binary pattern to multispectral images using total orderings. Commun. Comput. Inf. Sci. 2011, 229 CCIS, 67–80. [Google Scholar]
  23. Bello-Cerezo, R.; Fieguth, P.; Bianconi, F. LBP-Motivated Colour Texture Classification. In Proceedings of the 2nd International Workshop on Compact and Efficient Feature Representation and Learning in Computer Vision (in Conjunction with ECCV 2018), Munich, Germany, 9 September 2018; Volume 11132, pp. 517–533. [Google Scholar]
  24. Burgeth, B.; Welk, M.; Feddern, C.; Weickert, J. Mathematical morphology on tensor data using the Loewner ordering. In Visualization and Processing of Tensor Fields; Mathematics and Visualization; Springer: Berlin/Heidelberg, Germany, 2006; pp. 357–368. [Google Scholar]
  25. Palus, H. Representations of colour images in different colour spaces. In The Colour Image Processing Handbook; Sangwine, S.J., Horne, R.E.N., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; pp. 67–90. [Google Scholar]
  26. He, D.C.; Wang, L. Texture Unit, Texture Spectrum, And Texture Analysis. IEEE Trans. Geosci. Remote. Sens. 1990, 28, 509–512. [Google Scholar]
  27. González, E.; Bianconi, F.; Fernández, A. An investigation on the use of local multi-resolution patterns for image classification. Inf. Sci. 2016, 361–362, 1–13. [Google Scholar] [CrossRef]
  28. Zelenyuk, Y.; Zelenyuk, Y. Counting symmetric bracelets. Bull. Aust. Math. Soc. 2014, 89, 431–436. [Google Scholar] [CrossRef] [Green Version]
  29. Linder, N.; Konsti, J.; Turkki, R.; Rahtu, E.; Lundin, M.; Nordling, S.; Haglund, C.; Ahonen, T.; Pietikäinen, M.; Lundin, J. Identification of tumor epithelium and stroma in tissue microarrays using texture analysis. Diagn. Pathol. 2012, 7, 22. [Google Scholar] [CrossRef] [Green Version]
  30. Hayman, E.; Caputo, B.; Fritz, M.; Eklundh, J.O. On the Significance of Real-World Conditions for Material Classification. In Proceedings of the 8th European Conference on Computer Vision (ECCV 2004), Prague, Czech Republic, 11–14 May 2004; Springer: Prague, Czech Republic, 2004; Volume 3024, pp. 253–266. [Google Scholar]
  31. The KTH-TIPS and KTH-TIPS2 Image Databases. 2004. Available online: http://www.nada.kth.se/cvap/databases/kth-tips/ (accessed on 21 September 2016).
  32. Caputo, B.; Hayman, E.; Mallikarjuna, P. Class-specific material categorisation. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–20 October 2005; Volume II, pp. 1597–1604. [Google Scholar]
  33. Kylberg, G. Automatic Virus Identification Using TEM. Image Segmentation and Texture Analysis. Ph.D. Thesis, Faculty of Science and Technology, University of Uppsala, Uppsala, Sweden, 2014. [Google Scholar]
  34. Kylberg Sintorn Rotation Dataset. 2013. Available online: http://www.cb.uu.se/gustaf/KylbergSintornRotation/ (accessed on 6 January 2016).
  35. Bello-Cerezo, R.; Bianconi, F.; Fernández, A.; González, E.; Di Maria, F. Experimental comparison of color spaces for material classification. J. Electron. Imaging 2016, 25, 061406. [Google Scholar] [CrossRef]
  36. Ojala, T.; Pietikäinen, M.; Mäenpää, T.; Viertola, J.; Kyllönen, J.; Huovinen, S. Outex—New Framework for Empirical Evaluation of Texture Analysis Algorithms. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR’02), Quebec, QC, Canada, 11–15 August 2002; Volume 1, pp. 701–706. [Google Scholar]
  37. Jantzen, J.; Noras, J.; Dounias, G.; Bjerregaard, B. Pap-smear Benchmark Data For Pattern Classification. In Nature Inspired Smart Information Systems (NiSIS 2005); NiSIS: Albufeira, Portugal, 2005. [Google Scholar]
  38. Casanova, D.; de Mesquita Sá Junior, J.J.; Bruno, O.M. Plant leaf identification using Gabor wavelets. Int. J. Imaging Syst. Technol. 2009, 19, 236–243. [Google Scholar] [CrossRef]
  39. Cusano, C.; Napoletano, P.; Schettini, R. Evaluating color texture descriptors under large variations of controlled lighting conditions. J. Opt. Soc. Am. A 2016, 33, 17–30. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. RawFooT DB: Raw Food Texture Database. 2015. Available online: http://projects.ivl.disco.unimib.it/rawfoot/ (accessed on 22 September 2016).
  41. Kandaswamy, U.; Schuckers, S.A.; Adjeroh, D. Comparison of Texture Analysis Schemes Under Nonideal Conditions. IEEE Trans. Image Process. 2011, 20, 2260–2275. [Google Scholar] [CrossRef] [PubMed]
  42. Liu, L.; Zhao, L.; Long, Y.; Kuang, G.; Fieguth, P.W. Extended local binary patterns for texture classification. Image Vis. Comput. 2012, 30, 86–99. [Google Scholar] [CrossRef]
  43. Fernández, A.; Lima, D.; Bianconi, F.; Smeraldi, F. Compact Color Texture Descriptor Based on Rank Transform and Product Ordering in the RGB Color Space. In Proceedings of the IEEE International Conference on Computer Vision Workshops, ICCVW 2017, Venice, Italy, 22–29 October 2017; Institute of Electrical and Electronics Engineers Inc.: Venice, Italy, 2017; pp. 1032–1040. [Google Scholar]
  44. Drimbarean, A.; Whelan, P.F. Experiments in colour texture analysis. Pattern Recognit. Lett. 2001, 22, 1161–1167. [Google Scholar] [CrossRef] [Green Version]
  45. Mäenpää, T.; Pietikäinen, M. Classification with color and texture: Jointly or separately? Pattern Recognit. 2004, 37, 1629–1640. [Google Scholar] [CrossRef] [Green Version]
  46. Bianconi, F.; Harvey, R.; Southam, P.; Fernández, A. Theoretical and experimental comparison of different approaches for color texture classification. J. Electron. Imaging 2011, 20, 043006. [Google Scholar] [CrossRef]
  47. Abdollahyan, M.; Cascianelli, S.; Bellocchio, E.; Costante, G.; Ciarfuglia, T.A.; Bianconi, F.; Smeraldi, F.; Fravolini, M.L. Visual Localization in the Presence of Appearance Changes Using the Partial Order Kernel. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Roma, Italy, 3–7 September 2018; pp. 697–701. [Google Scholar]
Figure 1. Product order in the RGB space: A generic colour $(r_0, g_0, b_0)$ dominates all the colours in the blue volume and is dominated by all the colours in the red volume.
Figure 2. Loewner order in the HC$\tilde{L}$ space: A generic colour $(h_0, c_0, \tilde{l}_0)$ dominates all the colours in the blue volume and is dominated by all the colours in the red volume.
Table 1. Datasets used in the experiments: round-up table.

ID | Name | No. of Classes | No. of Samples per Class | Variations in Imaging Conditions
1 | Epistroma | 2 | 825/551 | Unspecified
2 | KTH-TIPS | 10 | 81 | Illumination, rotation, scale
3 | KTH-TIPS2b | 11 | 432 | Illumination, rotation, scale
4 | Kylberg–Sintorn | 25 | 6 | None
5 | MondialMarmi | 25 | 16 | None
6 | Outex-13 | 68 | 20 | None
7 | Outex-14 | 68 | 60 | Illumination
8 | Pap smear | 2 | 204 | Unspecified
9 | PlantLeaves | 20 | 60 | None
10 | RawFooT | 68 | 184 | Illumination
Table 2. Overall classification accuracy: confidence intervals for the cross-validated accuracy $\hat{a}$. Best results highlighted for grey-level (orange) and colour space features (blue). Boldface figures indicate statistically significant differences.

Descriptor | Inv. | Dataset 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
LBP_grey | C_n | 91.3–91.7 | 93.2–93.8 | 92.6–92.8 | 93.4–94.6 | 79.5–79.9 | 82.5–82.7 | 97.4–97.7 | 81.5–82.2 | 72.9–73.5 | 94.3–94.5
LBP_grey | D_n | 91.5–91.9 | 93.3–93.9 | 92.5–92.7 | 93.4–94.6 | 79.7–80.1 | 82.4–82.7 | 97.3–97.6 | 81.1–81.9 | 73.0–73.6 | 94.3–94.5
TS_grey | C_n | 91.3–91.8 | 92.4–93.0 | 94.3–94.5 | 95.2–96.2 | 79.5–79.9 | 82.5–82.7 | 97.7–98.1 | 83.5–84.3 | 74.5–75.1 | 96.1–96.2
TS_grey | D_n | 91.8–92.2 | 92.3–92.9 | 94.2–94.3 | 95.3–96.3 | 79.3–79.8 | 82.4–82.6 | 97.8–98.2 | 84.0–84.9 | 74.8–75.4 | 96.1–96.2
TS_μ | C_n | 91.6–92.0 | 92.7–93.3 | 93.7–93.9 | 92.8–94.2 | 79.2–79.6 | 79.5–79.7 | 97.1–97.5 | 83.1–83.8 | 73.9–74.4 | 95.8–96.0
TS_μ | D_n | 91.9–92.2 | 92.8–93.4 | 93.5–93.7 | 92.8–94.2 | 78.8–79.3 | 79.5–79.7 | 97.0–97.4 | 83.7–84.5 | 74.3–74.8 | 95.9–96.0
TS_p1 | C_n | 91.3–91.6 | 92.9–93.4 | 93.0–93.2 | 94.4–95.6 | 78.6–79.0 | 80.0–80.2 | 96.8–97.3 | 80.6–81.4 | 73.2–73.8 | 95.1–95.2
TS_p1 | D_n | 91.5–91.8 | 92.9–93.5 | 92.8–93.0 | 94.6–95.8 | 78.4–78.9 | 80.0–80.3 | 96.8–97.3 | 80.9–81.7 | 73.0–73.6 | 95.1–95.3
POTS_ohta^× | C_n | 86.9–87.3 | 89.3–89.8 | 94.7–94.9 | 93.3–94.4 | 82.2–82.6 | 82.0–82.2 | 97.6–98.0 | 77.6–78.6 | 69.3–69.9 | 94.5–94.6
POTS_ohta^× | D_n | 87.0–87.3 | 89.4–89.9 | 94.9–95.1 | 93.3–94.4 | 82.4–82.8 | 82.3–82.5 | 97.7–98.1 | 78.3–79.3 | 69.3–69.9 | 94.5–94.6
POTS_opp^× | C_n | 90.3–90.6 | 89.9–90.5 | 95.1–95.3 | 95.3–96.5 | 80.1–80.5 | 78.9–79.2 | 97.5–97.9 | 73.7–74.7 | 64.1–64.7 | 93.1–93.3
POTS_opp^× | D_n | 90.2–90.6 | 89.7–90.3 | 95.2–95.3 | 94.9–96.0 | 80.4–80.9 | 79.0–79.2 | 97.5–97.9 | 74.6–75.6 | 64.4–65.0 | 93.2–93.4
POTS_HCL̃^L | C_n | 91.4–91.8 | 93.6–94.1 | 95.3–95.5 | 90.7–92.5 | 73.8–74.3 | 73.4–73.7 | 96.1–96.7 | 82.9–83.7 | 67.7–68.3 | 95.3–95.5
POTS_HCL̃^L | D_n | 92.0–92.4 | 93.5–94.0 | 95.2–95.4 | 90.7–92.5 | 74.3–74.7 | 74.0–74.3 | 96.2–96.7 | 83.9–84.7 | 68.1–68.6 | 95.4–95.5
POTS_RGB^× | C_n | 90.4–90.8 | 94.1–94.6 | 94.9–95.1 | 87.9–90.0 | 72.6–73.0 | 74.5–74.7 | 96.0–96.6 | 82.6–83.4 | 75.2–75.8 | 96.4–96.5
POTS_RGB^× | D_n | 91.3–91.7 | 94.0–94.5 | 94.8–95.0 | 87.9–90.0 | 73.0–73.5 | 74.8–75.0 | 96.2–96.8 | 83.6–84.5 | 75.9–76.4 | 96.3–96.5
Table 3. Comparison with the results obtained by other ordering methods as reported in the references indicated. Key to symbols: ‘cvn’ = colour vector norm, ‘lex’ = lexicographic ordering, ‘rcl’ = preorder based on white as reference colour. Please refer to the cited works for further details.

Dataset | LBP_cvn (lit.) | LBP_lex (lit.) | LBP_rcl (lit.) | Best Result (This Paper)
KTH-TIPS | 94.3 [23] | 94.3 [23] | 94.0 [23] | 94.1–94.6 (POTS_RGB^× / C_n)
KTH-TIPS2b | 92.3 [23] | 92.3 [23] | 92.1 [23] | 95.3–95.5 (POTS_HCL̃^L / C_n)
Kylberg–Sintorn | N/A | 99.1 [43] | N/A | 95.3–96.5 (POTS_RGB^× / C_n)
Outex-13 | 85.3 [1] | 86.3 [1] | 85.9 [1] | 82.3–82.5 (POTS_opp^× / D_n)
Outex-14 | 74.3 [1] | 73.4 [1] | 72.3 [1] | 97.7–98.1 (POTS_ohta^× / D_n)
PapSmear | N/A | N/A | 66.2 [43] | 83.6–84.5 (POTS_RGB^× / D_n)
PlantLeaves | 69.9 [23] | 65.2 [23] | 71.9 [23] | 75.9–76.4 (POTS_RGB^× / D_n)
RawFooT | N/A | N/A | 80.5 [43] | 96.4–96.5 (POTS_RGB^× / C_n)
