Article

A Theoretical Approach to Ordinal Classification: Feature Space-Based Definition and Classifier-Independent Detection of Ordinal Class Structures

1 Institute of Neural Information Processing, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
2 Institute of Medical Systems Biology, Ulm University, Albert-Einstein-Allee 11, 89081 Ulm, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(4), 1815; https://doi.org/10.3390/app12041815
Submission received: 23 December 2021 / Revised: 28 January 2022 / Accepted: 1 February 2022 / Published: 10 February 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract: Ordinal classification (OC) is a sub-discipline of multi-class classification (i.e., classification with at least three classes) in which the classes constitute an ordinal structure. Applications of ordinal classification can be found, for instance, in the medical field, e.g., with the class label order early stage ≺ intermediate stage ≺ final stage, corresponding to the task of classifying different stages of a certain disease. While the field of OC has been continuously enhanced, e.g., by designing and adapting appropriate classification models as well as performance metrics, a common mathematical definition of OC tasks is still lacking. More precisely, in general, a classification task is defined as an OC task solely based on the corresponding class label names. However, an ordinal class structure that is identified based on the class labels is not necessarily reflected in the corresponding feature space. Conversely, any multi-class classification task with an arbitrary set of class labels can form an ordinal structure that is observable in the current feature space. Based on this simple observation, in this work, we present our generalised approach towards an intuitive working definition for OC tasks, which is based on the corresponding feature space and allows a classifier-independent detection of ordinal class structures. To this end, we introduce and discuss novel, OC-specific theoretical concepts. Moreover, we validate our proposed working definition on a set of traditionally ordinal and traditionally non-ordinal data sets, and provide the results of the corresponding detection algorithm. Additionally, we motivate our theoretical concepts with an illustrative evaluation of one of the oldest and most popular machine learning data sets, i.e., the traditionally non-ordinal Fisher's Iris data set.

1. Introduction

In the traditional sense, a multi-class classification task is denoted as an ordinal classification (OC) task if the corresponding class label names can be sorted, e.g., small ≺ medium ≺ large, or short ≺ medium ≺ long, etc. In this case, one can implement OC-specific classifiers to improve classification performance [1]. However, in general, a classification model can only benefit from a specific label order if that order is reflected in the corresponding feature space [2]. Otherwise, it can lead to a severely decreased classification performance [3].
Following the latest trend of (deep) artificial neural networks (ANNs) [4], less than a decade ago, researchers started to adapt ANNs to the field of ordinal regression (OR); see, e.g., [5,6]. For instance, different ANN architectures have been proposed for the task of age estimation [7,8]. Note that ordinal classification constitutes a special case of ordinal regression. Furthermore, these terms are regularly used interchangeably, as, for instance, in one of the recent survey papers on OR [9].
Moreover, the evaluation of OC tasks requires a specific choice of performance metrics. For a broad overview on OC-specific performance measures, we refer the reader to the works in [10,11]. For instance, let us consider the misclassification of a person’s early stage of a severe disease as an intermediate stage, and the misclassification of the early stage as a final stage. In both cases, we obtain a classification error. However, obviously, with respect to a real-world application (e.g., the detection of a certain cancer stage), each of the errors should be associated with a different cost value.
Instead of restricting the field of OC to multi-class classification tasks in which the class label names constitute an ordinal structure, Lattke et al. proposed detecting ordinal class structures independently from the label names [3], based on classifier cascades [12], in combination with different classification models such as nearest neighbour classifiers [13], decision trees [14], or support vector machines (SVMs) [15,16]. Note that prior to the adaptation of (deep) ANNs, first the basic SVM model was adapted for the task of OC/OR [17,18,19].
While the field of ordinal classification has been steadily enriched by the introduction of OC-specific classification models, metrics, and modified ANN architectures, a common definition of OC tasks is still lacking. Therefore, inspired by the cascades-based approach for the detection of ordinal class structures [3,20], in [21], we recently proposed a working definition for OC tasks that is based on the relation between the pairwise performances of binary SVM models. Further investigation of our initially proposed working definition led us to a new theoretical concept, which allows us to provide a novel and generalised working definition that is not based on any classification/regression model.
Therefore, the current work constitutes a generalisation of our previous work [21]. In addition to the work in [21], we provide the following outcomes: (i) We introduce the concepts of level of separability measures and ordinal arrangements; (ii) We provide a generalised working definition, based on the aforementioned concepts, which is independent of any classification model; (iii) We discuss additionally observed limitations of our initial working definition and extend Theorem 1 from our previous work [21] by Corollary 1; (iv) We introduce a classifier-independent measure, which allows us to find ordinal class structures based solely on the corresponding feature space; (v) We evaluate our proposed generalised working definition on a set of traditionally ordinal, as well as traditionally non-ordinal, data sets; and (vi) We discuss and illustrate the usefulness of our proposed working definition, based on the obtained outcomes, as well as on an additional motivational example.
Finally, note that by the term OC, throughout this work, we refer to multi-class classification tasks (i.e., classification tasks with at least three classes) with an ordinal class structure. More precisely, the term OC does not imply that the corresponding feature space is ordinal-scaled [22,23]. Moreover, even though one can apply different classification models to detect ordinal class structures, the detection of ordinal class structures itself is not part of the corresponding classification process. Figure 1 provides an exemplary pipeline of the processing steps of an arbitrary classification task, to emphasise the research area of the current contribution.
The remainder of this work is organised as follows. In Section 2, we first provide the formalisation of our approach, followed by our proposed feature space-based definition for ordinal classification tasks, which is based on specific mappings that we denote as level of separability measures (LSMs). Subsequently, in Section 3, we discuss the main differences to our recently provided working definition and present additional characteristics of our current theoretical concepts. We introduce a specific LSM mapping in Section 4, including a numerical example as well as a simple illustration. In Section 5, we provide an experimental validation of our proposed feature space-based working definition for ordinal classification tasks, based on traditionally ordinal as well as traditionally non-ordinal data sets, including a running time evaluation. A detailed discussion of the complexity, limitations, and usefulness of our proposed working definition follows in Section 6. Finally, in Section 7, we conclude the current work.

2. Formalisation and Generalised Working Definition for Ordinal Class Structures

In this section, we first provide the formalisation for our current work. Subsequently, we introduce our proposed novel working definition for ordinal classification tasks, which is independent of the meaning of the corresponding class labels.

2.1. Formalisation

Let $X_\Omega$ be a $c$-class classification task, defined by the $d$-dimensional data set $X \subset \mathbb{R}^d$, $d \in \mathbb{N}$, and the corresponding set of class labels $\Omega = \{\omega_1, \ldots, \omega_c\}$, with $c > 2$. We denote the resulting index set by $I$, i.e., $I = \{1, \ldots, c\}$. Each element of $X_\Omega$ is a task-related object, i.e., a pair consisting of a data sample and its true class label: $X_\Omega = \{(x_i, y_i)\}_{i=1}^{N}$, $y_i \in \Omega$, $i = 1, \ldots, N$, where $N = |X_\Omega|$ denotes the number of elements in the set $X_\Omega$. In particular, it holds that $X_\Omega \subset X \times \Omega$. Moreover, by $X_\Omega^{i,j}$, we denote the binary subtask that is restricted to the classes $\omega_i$ and $\omega_j$, i.e., for all $i, j \in I$, with $i \neq j$, we define

$$X_\Omega^{i,j} := \{(x, y) \in X_\Omega \mid y = \omega_i \vee y = \omega_j\}. \quad (1)$$

Therefore, it holds that $X_\Omega^{i,j} = X_\Omega^{j,i}$, for all $i, j \in I$, $i \neq j$. For the definition of ordinal classification tasks, in this work, we introduce the term level of separability measure, which we define as follows, in Definition 1.
Definition 1
(Level of Separability Measures). Let $X_Y$, $X \subset \mathbb{R}^d$, $d \in \mathbb{N}$, $Y = \{0, 1\}$, constitute a binary classification task in a $d$-dimensional feature space. Furthermore, let $\overline{X_Y}$ be the corresponding binary classification task in which the class labels of all samples from the task $X_Y$ are interchanged, i.e., $\overline{X_Y} := \{(x, 1-y) \mid (x, y) \in X_Y\}$. Additionally, for each $X \subset \mathbb{R}^d$, we define the set $X_Y^*$ as $X_Y^* = X \times Y$. We denote a non-constant and non-random mapping $\mu$ as a level of separability measure (LSM) if $\mu$ fulfils the following properties:

$$\begin{aligned}
(\mathrm{P0})\quad & \mu(X_Y) \ge 0, && \forall\, X_Y \subset \mathbb{R}^d \times \{0,1\}, && \text{(non-negative)},\\
(\mathrm{P1})\quad & \mu(X_Y^*) \le \mu(Z_Y), && \forall\, Z_Y \subset \mathbb{R}^d \times \{0,1\},\ \forall\, X \subset \mathbb{R}^d, && \text{(point-separating)},\\
(\mathrm{P2})\quad & \mu(X_Y) = \mu(\overline{X_Y}), && \forall\, X_Y \subset \mathbb{R}^d \times \{0,1\}, && \text{(label invariant)}.
\end{aligned} \quad (2)$$
Note that a higher value for μ implies a higher level of separability. The properties defined in Equation (2) are further discussed in the following remark, i.e., in Remark 1.
Remark 1
(Properties of LSM mappings). Let $\mu$ be an LSM mapping by Definition 1. Furthermore, let $X_{\{0,1\}} \subset \mathbb{R}^d \times \{0,1\}$ constitute a binary classification task. Property (P1) of Equation (2) implies that if we set $Z = X_{\{0,1\}}^* = X \times \{0,1\}$, then the value of $\mu(Z)$ is equal to the minimum value of $\mu$ across all $X_{\{0,1\}} \subset \mathbb{R}^d \times \{0,1\}$. Therefore, we say that $\mu$ is point-separating. More precisely, the set $X_{\{0,1\}}^*$ is defined as $X_{\{0,1\}}^* = X \times \{0\} \cup X \times \{1\}$, i.e., $X_{\{0,1\}}^* = \{(x_1, 0), (x_1, 1), \ldots, (x_N, 0), (x_N, 1)\}$. In such a set, each data point is assigned both class labels, leading to the lowest level of separability under any LSM mapping $\mu$. Property (P2) of Equation (2) simply states that interchanging the labels of all data points $x \in X$ does not change the value of $\mu$ evaluated on the set $X$. Therefore, we say that $\mu$ is label invariant, or simply symmetric.
By $\mathcal{M}_d$, we denote the set of all mappings that measure the level of separability of any binary classification task in the $d$-dimensional feature space, i.e.,

$$\mathcal{M}_d := \{\mu \mid \mu \text{ is an LSM mapping in } \mathbb{R}^d \times \{0,1\} \text{ by Definition 1}\}. \quad (3)$$

Note that by Definition 1, each element of the set $\mathcal{M}_d$, $d \in \mathbb{N}$, not only fulfils properties (P0), (P1), and (P2) of Equation (2), but is also a non-constant and non-random mapping. Moreover, note that the set $\mathcal{M}_d$ is non-empty for all $d \in \mathbb{N}$, as we briefly discuss in the following example, i.e., Example 1.
Example 1
(Existence of LSM mappings). Note that the set $\mathcal{M}_d$ is non-empty for all $d \in \mathbb{N}$. Let CM be a deterministic classification model, e.g., a support vector machine, which is trained on the set $X_{\{0,1\}} \subset \mathbb{R}^d \times \{0,1\}$. Let $err_{CM} \in [0, 1]$ be the resubstitution error (training error) of classifier CM, i.e., $err_{CM}$ is the fraction of data points in $X$ that are misclassified by classifier CM. Then, $\mu: X_{\{0,1\}} \mapsto C - err_{CM}$ fulfils the properties of Equation (2), for each constant $C \ge 1$ and any deterministic classification model CM.
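The construction from Example 1 can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: for self-containedness, a nearest-centroid classifier stands in for the SVM (any deterministic model works), and all function names are hypothetical.

```python
# Sketch of Example 1: mu = C - err_CM, where err_CM is the resubstitution
# (training) error of a deterministic classifier. Here: nearest centroid.
import numpy as np

def nearest_centroid_error(X, y):
    """Resubstitution error of a nearest-centroid classifier on (X, y), y in {0, 1}."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return float(np.mean(pred != y))

def lsm(X, y, C=1.0):
    # Non-negative for C >= 1, satisfying property (P0).
    return C - nearest_centroid_error(X, y)

X = np.array([[0.0], [0.2], [5.0], [5.2]])
y = np.array([0, 0, 1, 1])
assert lsm(X, y) == 1.0            # perfectly separable training set
assert lsm(X, y) == lsm(X, 1 - y)  # label invariance, property (P2)
```

A higher value again means a higher level of separability; overlapping classes yield a training error greater than zero and thus a smaller value of the measure.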
Let $\mu \in \mathcal{M}_d$ be an LSM mapping. For all $i, j \in I$, we define $\mu_{i,j} \in \mathbb{R}$ as follows:

$$\mu_{i,j} := \begin{cases} \mu\left(X_\Omega^{i,j}\right), & \text{if } i \neq j,\\[2pt] 0, & \text{if } i = j, \end{cases} \quad (4)$$

which measures the level of separability between the samples from class $\omega_i$ and the samples from class $\omega_j$. Therefore, for $i, j, k, l \in I$, the statement $\mu_{i,j} > \mu_{k,l}$ implies that it is easier to separate the classes $\omega_i$ and $\omega_j$ from each other than to separate the classes $\omega_k$ and $\omega_l$ from each other. Thus, if it holds that $\mu_{i,j} > \mu_{k,l}$, we simply say that the binary classification task $X_\Omega^{i,j}$ has a higher level of separability than the binary classification task $X_\Omega^{k,l}$. Note that from Equations (1) and (4), it directly follows that $\mu_{i,j} = \mu_{j,i}$, for all $i, j \in I$. In addition, note that setting $\mu_{i,i}$ to zero, for all $i \in I$, is a logical consequence, as it is not possible to separate two identical data sets from each other.
Furthermore, let $T_c$ be the set of all permutations of the set $I$. More precisely, each $\tau \in T_c$, $\tau: \{1, \ldots, c\} \to \{1, \ldots, c\}$, is a bijective function. In addition, by $-\tau \in T_c$, we denote the reversed permutation of $\tau \in T_c$. For instance, for the identity permutation $id: (1, \ldots, c) \mapsto (1, \ldots, c)$, it holds that $-id: (1, \ldots, c) \mapsto (c, c-1, \ldots, 1)$.
By $M(X_\Omega, \mu) \in \mathbb{R}_{\ge 0}^{c \times c}$, we denote the pairwise separability matrix (PSM), consisting of the elements $\mu_{i,j}$, i.e., $M(X_\Omega, \mu) := (\mu_{i,j})_{i,j=1}^{c}$. Moreover, by $M(X_\Omega, \mu)(\tau)$, we denote the PSM whose rows and columns are rearranged according to permutation $\tau \in T_c$, i.e., $M(X_\Omega, \mu)(\tau) := (\mu_{\tau(i),\tau(j)})_{i,j=1}^{c}$. For reasons of simplicity, we will denote $M(X_\Omega, \mu)(\tau)$ simply by $M(\tau)$, with $M = M(id)$, where $id$ denotes the identity permutation. More precisely, $M(\tau)$ can be depicted as follows:

$$M(\tau) = \begin{pmatrix} 0 & \mu_{\tau(1),\tau(2)} & \cdots & \mu_{\tau(1),\tau(c)} \\ \mu_{\tau(2),\tau(1)} & 0 & \cdots & \mu_{\tau(2),\tau(c)} \\ \vdots & \vdots & \ddots & \vdots \\ \mu_{\tau(c),\tau(1)} & \mu_{\tau(c),\tau(2)} & \cdots & 0 \end{pmatrix}. \quad (5)$$
Note that by definition, matrix M ( τ ) is symmetric for all τ T c . Finally, we summarise all of the required ingredients for our proposed working definition of ordinal classification tasks in Table 1.
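Assembling the PSM of Equation (5) (for the identity permutation) from an arbitrary LSM can be sketched as follows. The helper names are hypothetical, and the placeholder measure is the squared distance between class centroids, which, as discussed in Section 4.3, is itself a valid (if crude) LSM.

```python
# Sketch: build the pairwise separability matrix M = (mu_{i,j}) for a
# multi-class task, given any LSM taking a binary task (X, y), y in {0, 1}.
import numpy as np
from itertools import combinations

def pairwise_separability_matrix(X, labels, lsm):
    classes = np.unique(labels)
    c = len(classes)
    M = np.zeros((c, c))                            # zero diagonal: mu_{i,i} = 0
    for i, j in combinations(range(c), 2):          # all class pairs, i < j
        mask = (labels == classes[i]) | (labels == classes[j])
        Xij = X[mask]
        yij = (labels[mask] == classes[j]).astype(int)
        M[i, j] = M[j, i] = lsm(Xij, yij)           # symmetry: mu_{i,j} = mu_{j,i}
    return M

# Placeholder LSM: squared distance between the two class centroids.
centroid_gap = lambda X, y: float(np.sum((X[y == 0].mean(0) - X[y == 1].mean(0)) ** 2))

X = np.array([[0.0], [0.1], [1.0], [1.1], [3.0], [3.1]])
labels = np.array([1, 1, 2, 2, 3, 3])
M = pairwise_separability_matrix(X, labels, centroid_gap)
```

By construction, the resulting matrix is symmetric with a zero diagonal, exactly as required of a PSM.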
As briefly introduced in Section 1, an ordinal class structure is generally denoted by the ≺-sign, e.g., $\omega_1 \prec \cdots \prec \omega_c$. In an OC task, we denote the first and the last class of such an ordered class structure as edge classes, or simply edges. In this particular example, i.e., $\omega_1 \prec \cdots \prec \omega_c$, the classes $\omega_1$ and $\omega_c$ would be the corresponding edges. Moreover, in the common understanding of ordinal (class) structures, the reversed order of an ordinal arrangement also constitutes an ordinal structure. More precisely, the relations $\omega_1 \prec \cdots \prec \omega_c$ and $\omega_c \prec \cdots \prec \omega_1$ are equivalent. Therefore, the uniqueness of an ordinal class structure is defined by exactly two class orders (or permutations), originating from the two edges. Based on this observation and the definitions introduced above, in the next subsection, we propose a working definition for OC tasks that is independent of the meaning of the corresponding class labels.

2.2. Feature Space-Based Working Definition for Ordinal Classification Tasks

As a preliminary step towards our proposed working definition for ordinal classification tasks, we first introduce the term ordinal arrangement, which we present in Definition 2.
Definition 2
(Ordinal Arrangements). Let $S \in \mathbb{R}^{c \times c}$ be a symmetric matrix with elements $(s_{i,j})_{i,j=1}^{c}$, with $c > 2$. Matrix $S$ represents an ordinal arrangement if and only if, for all $i, j, k \in \{1, \ldots, c\}$, it holds that

$$s_{i,j} \ge s_{i,k},\ \forall\, j < k \le i, \qquad\qquad s_{i,j} \le s_{i,k},\ \forall\, i \le j < k. \quad (6)$$
Note that the properties of Equation (6) can be summarised as follows. Let $S \in \mathbb{R}^{c \times c}$, $c > 2$, constitute an ordinal arrangement by Definition 2. Then, the relations between the elements of $S$ can be symbolically depicted as

$$S = S^T \;\hat{=}\; \begin{pmatrix} 0 & * & \cdots & * \\ * & 0 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ * & * & \cdots & 0 \end{pmatrix}, \quad (7)$$

where, within each row, the entries are non-increasing when moving towards the zero diagonal and non-decreasing when moving away from it, and $S^T$ denotes the transpose of $S$. Note that for each symmetric matrix $S \in \mathbb{R}^{c \times c}$, $c > 2$, the property $s_{i,j} \ge s_{i,k},\ \forall j < k \le i$, is equivalent to $s_{j,i} \ge s_{k,i},\ \forall j < k \le i$, and the statement $s_{i,j} \le s_{i,k},\ \forall i \le j < k$, is equivalent to $s_{j,i} \le s_{k,i},\ \forall i \le j < k$ (which is implied in Equation (7) by including $S^T$). Based on the concept of ordinal arrangements introduced above, we provide the working definition for ordinal classification tasks as follows.
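The conditions of Definition 2 can be checked mechanically, row by row. The following is a small sketch with a hypothetical function name; it simply tests the monotonicity pattern of Equation (6) for a symmetric matrix.

```python
# Sketch: check whether a symmetric matrix S is an ordinal arrangement
# (Definition 2): in every row i, entries are non-increasing for j < k <= i
# and non-decreasing for i <= j < k.
import numpy as np

def is_ordinal_arrangement(S, tol=0.0):
    S = np.asarray(S, dtype=float)
    c = S.shape[0]
    if c <= 2 or not np.allclose(S, S.T):
        return False
    for i in range(c):
        left = S[i, :i + 1]    # entries up to and including the diagonal
        right = S[i, i:]       # entries from the diagonal onwards
        if np.any(np.diff(left) > tol) or np.any(np.diff(right) < -tol):
            return False
    return True

# Separability growing with class "distance" yields an ordinal arrangement:
assert is_ordinal_arrangement([[0, 1, 2], [1, 0, 1], [2, 1, 0]])
assert not is_ordinal_arrangement([[0, 2, 1], [2, 0, 3], [1, 3, 0]])
```

Since the diagonal is zero and all entries are non-negative, the diagonal itself never violates the row-wise monotonicity.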
Definition 3
(Working Definition for Ordinal Classification Tasks). Let $X_\Omega$, $X \subset \mathbb{R}^d$, $\Omega = \{\omega_1, \ldots, \omega_c\}$, constitute a $c$-class classification task, with $c > 2$. Let $\mu \in \mathcal{M}_d$ be an LSM mapping. Furthermore, let $T_c$ be the set of all permutations of the set $\{1, \ldots, c\}$ and $\nu \in T_c$. We denote $X_\Omega$ as feature space-based ordinal (FS-ordinal), with respect to the order $\omega_{\pm\nu(1)} \prec \cdots \prec \omega_{\pm\nu(c)}$ and mapping $\mu$, if and only if, for all $\tau \in T_c$, the corresponding PSMs $M(\tau)$ satisfy

$$\begin{aligned} & M(\tau) \text{ fulfils the properties of Equation (6)}, && \text{if } \tau = \pm\nu, && \text{(existence)}, \\ & M(\tau) \text{ violates the properties of Equation (6)}, && \text{if } \tau \neq \pm\nu, && \text{(uniqueness)}. \end{aligned} \quad (8)$$

Note that, as defined in Section 2.1, $-\nu \in T_c$ denotes the reversed permutation of $\nu \in T_c$, e.g., $id: (1, \ldots, c) \mapsto (1, \ldots, c)$ and $-id: (1, \ldots, c) \mapsto (c, c-1, \ldots, 1)$.
If the task $X_\Omega$ constitutes an FS-ordinal classification task with respect to the order $\omega_{\pm\nu(1)} \prec \cdots \prec \omega_{\pm\nu(c)}$ and mapping $\mu \in \mathcal{M}_d$, we simply say that the task $X_\Omega$ is FS-ordinal specific to $(\mu, \pm\nu)$.
Figure 2 illustrates the properties of an ordinal classification task, based on a two-dimensional 5-class toy data set. Note that the term closer in the caption of Figure 2 does not refer to the distances between different class pairs, but to the order of the classes, i.e., the arrangement of the columns of the corresponding PSM. More precisely, for the arrangement $\omega_1 \prec \omega_2 \prec \omega_3 \prec \omega_4 \prec \omega_5$, we say that class $\omega_2$ is closer to class $\omega_1$ than to class $\omega_4$. However, for instance, the (averaged) Euclidean distance between the samples from the classes $\omega_2$ and $\omega_1$ could be greater than the (averaged) Euclidean distance between the samples from the classes $\omega_2$ and $\omega_4$.

3. Comparison to Previous Work and Additional Theoretical Outcomes

As we already implied, the current work is an extension, or more precisely, a generalisation, of our previous study [21]. In [21], we provided a working definition for ordinal classification tasks based on the resubstitution accuracy (training accuracy) of linear support vector machines (SVMs), i.e., SVMs with linear kernels, which we referred to as SVM-ordinal class structures. More precisely, we did not consider different options for possible LSM mappings $\mu \in \mathcal{M}_d$, as we propose here. Therefore, the working definition in [21] constitutes a special case of the novel approach to ordinal classification introduced in this work. However, the working definition provided in [21] led to two theoretical outcomes, i.e., a theorem for 3-class classification tasks, as well as a detection algorithm for ordinal class structures that originates from the provided definition. Moreover, with minor changes, the corresponding theorem and detection algorithm also apply to the novel generalised working definition of FS-ordinal class structures, which we briefly discuss in this section, followed by additional theoretical outcomes.

3.1. Special Case for 3-Class Classification Tasks and Detection of FS-Ordinal Structures

For the special case of c = 3 , i.e., for 3-class classification tasks, we obtain the following theorem (Theorem 1), which will be later extended as an additional theoretical outcome, at the end of this section, in Section 3.2.
Theorem 1
(FS-ordinal class structures in 3-class classification tasks). Let $X_\Omega \subset \mathbb{R}^d \times \{\omega_1, \omega_2, \omega_3\}$, $d \in \mathbb{N}$, be a $d$-dimensional labelled data set, which constitutes a 3-class classification task. Moreover, let the corresponding PSM, $M$, be defined as follows, for some LSM $\mu \in \mathcal{M}_d$:

$$M(id) = \begin{pmatrix} 0 & e & f \\ e & 0 & g \\ f & g & 0 \end{pmatrix}, \quad e, f, g > 0. \quad (9)$$

If $e, f, g$ are pairwise distinct, i.e., $e \neq f$, $e \neq g$, and $f \neq g$, then there exists a permutation $\tau \in T_3$ such that $X_\Omega$ constitutes an FS-ordinal classification task specific to $(\mu, \pm\tau)$.
The proof of Theorem 1 is provided in the appendix of our previous study [21] for $e, f, g \in (0, 1]$, but works analogously for $e, f, g > 0$, as is the case in the current work.
Based on our proposed working definition which is introduced in Definition 3, there exists a simple way to check for FS-ordinal class structures. A corresponding pseudo code is presented in Figure 3. Note that a numerical example for the detection algorithm depicted in Figure 3 is also provided in our previous work [21].
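Since the pseudo code of Figure 3 is not reproduced here, the following is a brute-force sketch of a detection procedure consistent with Definition 3: enumerate all permutations, keep those whose rearranged PSM constitutes an ordinal arrangement, and require that exactly one order and its reverse remain. All names are hypothetical, and enumeration over $c!$ permutations is only feasible for small $c$.

```python
# Sketch: detect an FS-ordinal class structure from a given PSM M
# by brute force over all permutations of the class indices.
import numpy as np
from itertools import permutations

def ordinal_ok(S):
    """Row-wise check of the ordinal arrangement properties of Equation (6)."""
    c = S.shape[0]
    return all(
        np.all(np.diff(S[i, :i + 1]) <= 0) and np.all(np.diff(S[i, i:]) >= 0)
        for i in range(c)
    )

def detect_fs_ordinal(M):
    """Return the two detected class orders (index tuples) or None."""
    c = M.shape[0]
    hits = [tau for tau in permutations(range(c))
            if ordinal_ok(M[np.ix_(tau, tau)])]
    # Uniqueness: exactly one order and its reverse may fulfil Equation (6).
    if len(hits) == 2 and hits[0] == hits[1][::-1]:
        return hits
    return None

# PSM of a task whose separability grows with the class "distance":
M = np.array([[0., 1., 4.], [1., 0., 2.], [4., 2., 0.]])
assert detect_fs_ordinal(M) == [(0, 1, 2), (2, 1, 0)]
```

Applied to the PSM with two maximal, equal entries discussed in Section 3.2 (i.e., $e = f = 1$, $g = \alpha < 1$), the same procedure returns no order, mirroring the uniqueness violation described there.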

3.2. FS-Ordinal versus SVM-Ordinal Structures

As already mentioned above, our first approach to provide a working definition for ordinal class structures was based on pairwise resubstitution accuracies of linear SVM models. In [21], we justified the choice for such a working definition and discussed its potential, in combination with a numerical evaluation. Moreover, we also discussed the limitations of an SVM-based definition of ordinal structures. More precisely, we showed that in the case of a 3-class classification task, in which all of the possible class pairs are linearly separable, the provided detection algorithm (see Figure 3) always fails to find SVM-ordinal class structures.
However, what we did not consider in [21] is that the detection algorithm already fails if two of the three possible class pairs of a 3-class classification task are linearly separable, as we will show shortly. This is due to the fact that the (resubstitution) accuracy is bounded above by 1 (100%). As an illustrative example, let us consider the two-dimensional 3-class toy data set depicted in Figure 4.
From Figure 4, we can observe that the data points from class $\omega_1$ are linearly separable from the data points of both remaining classes, $\omega_2$ and $\omega_3$, respectively. Moreover, the data points from class $\omega_2$ are not linearly separable from the data points of class $\omega_3$ in the provided two-dimensional feature space. However, the toy data set clearly constitutes an ordinal class structure with the order $\omega_1 \prec \omega_2 \prec \omega_3$, and its reversed order $\omega_3 \prec \omega_2 \prec \omega_1$. Note that for the definition of SVM-ordinal class structures, the PSM originates from the resubstitution accuracies between the corresponding class pairs. As the class pairs $(\omega_1, \omega_2)$ and $(\omega_1, \omega_3)$ are linearly separable, it follows that $\mu_{1,2} = \mu_{1,3} = 1$, and due to the symmetry (see Equation (4) for the definition of $\mu_{i,j}$), $\mu_{2,1} = \mu_{3,1} = 1$. Moreover, for the class pair $(\omega_2, \omega_3)$, it clearly holds that $\mu_{2,3} = \mu_{3,2} = \alpha$, with $\alpha \in (0, 1)$.
Now, let us define $\tau \in T_3$ as $\tau: (1, 2, 3) \mapsto (1, 3, 2)$, and thus $-\tau: (1, 2, 3) \mapsto (2, 3, 1)$. Then, for the PSMs, with $\mu \in \mathcal{M}_2$ defined as the resubstitution accuracy of linear SVMs, we obtain the following matrices:

$$M(id) = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & \alpha \\ 1 & \alpha & 0 \end{pmatrix} \ (\text{column order } \omega_1, \omega_2, \omega_3), \qquad M(-id) = \begin{pmatrix} 0 & \alpha & 1 \\ \alpha & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \ (\text{column order } \omega_3, \omega_2, \omega_1), \text{ and}$$

$$M(\tau) = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & \alpha \\ 1 & \alpha & 0 \end{pmatrix} \ (\text{column order } \omega_1, \omega_3, \omega_2), \qquad M(-\tau) = \begin{pmatrix} 0 & \alpha & 1 \\ \alpha & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} \ (\text{column order } \omega_2, \omega_3, \omega_1).$$
Obviously, both matrices $M(id)$ and $M(\tau)$, and therefore their corresponding reversed counterparts, $M(-id)$ and $M(-\tau)$, fulfil the properties of Equation (6), and thus constitute ordinal arrangements by Definition 2. Thus, the uniqueness property of Definition 3 is violated and no (SVM-)ordinal class structure is found.
The main issue preventing the detection of the obvious ordinal structure of the 3-class toy data set depicted in Figure 4 is the choice of the LSM mapping $\mu$. More precisely, as we briefly discussed above, the resubstitution accuracy is bounded above by 1. Therefore, for 3-class classification tasks, the SVM-based working definition fails whenever at least two of the three possible class pairs are linearly separable. In this particular example, the ordinal structure would be found if the statement $\mu_{1,2} < \mu_{1,3}$ were true. Note that, based on an eye test of Figure 4, the relation $\mu_{1,2} < \mu_{1,3}$ is more intuitive than the relation $\mu_{1,2} = \mu_{1,3}$. This simple example shows why it was important to introduce the concept of LSM mappings in combination with the generalised definition of FS-ordinal class structures provided in this work.
Note that the main condition in Theorem 1 requires that the values $\mu_{1,2}$, $\mu_{1,3}$, and $\mu_{2,3}$ are pairwise distinct. The discussion of the current example showed the importance of this condition. In the current example, we had the following relations: $\mu_{1,2} = \mu_{1,3}$ and $\mu_{2,3} < \mu_{1,2}, \mu_{1,3}$. Moreover, this observation leads to a short and simple extension of Theorem 1, which we summarise in Corollary 1. Note that in contrast to the current example, in Corollary 1, we assume that the unique value (e.g., $\mu_{2,3}$) is greater than the two equal values (e.g., $\mu_{1,2}$ and $\mu_{1,3}$).
Corollary 1
(Extension of Theorem 1). Let $X_\Omega \subset \mathbb{R}^d \times \{\omega_1, \omega_2, \omega_3\}$, $d \in \mathbb{N}$, be a $d$-dimensional labelled data set, which constitutes a 3-class classification task. Moreover, let the corresponding PSM, $M$, be defined as follows, for some LSM $\mu \in \mathcal{M}_d$:

$$M(id) = \begin{pmatrix} 0 & e & f \\ e & 0 & g \\ f & g & 0 \end{pmatrix}, \quad e, f, g > 0. \quad (10)$$

Furthermore, let two of the three values $e, f, g$ be equal and smaller than the remaining one. More precisely, let one of the following three statements be true:

$$(i)\ (e = f) \wedge (g > e = f) \qquad (ii)\ (e = g) \wedge (f > e = g) \qquad (iii)\ (f = g) \wedge (e > f = g) \quad (11)$$

Then, there exists a permutation $\tau \in T_3$ such that $X_\Omega$ constitutes an FS-ordinal classification task specific to $(\mu, \pm\tau)$.
The proof of Corollary 1 is provided in Appendix A. Note that it can be shown analogously that obtaining two equal values that are greater than the third one, e.g., $(e = f) \wedge (g < e = f)$, in combination with some LSM $\mu$, always leads to a violation of Definition 3, i.e., to the observation that the current task is not FS-ordinal specific to mapping $\mu$.

4. Classifier-Independent Level of Separability Measures

In the current section, we first provide an example of an LSM mapping, which we later apply in our validation experiments. Subsequently, based on the introduced LSM mapping, we discuss a possible way to proceed in the case of ordinal-scaled and categorical features. Finally, we close this section with an interpretation of the concepts provided in this work.

4.1. Discriminant Ratio

Similar to Equation (1), by $X_\Omega^i$, we denote the subset of $X_\Omega$ that consists solely of the samples from class $\omega_i$, i.e.,

$$X_\Omega^i = \{x \in X \mid (x, y) \in X_\Omega \wedge y = \omega_i\}.$$

Moreover, by $\bar{x}^{(i)}$, we denote the centroid of the set $X_\Omega^i$. More precisely, with $N_i = |X_\Omega^i|$ denoting the number of samples in $X_\Omega^i$,

$$\bar{x}^{(i)} = \frac{1}{N_i} \sum_{x \in X_\Omega^i} x.$$

Furthermore, by $\sigma_i^2 \in \mathbb{R}^d$, we denote the $d$-dimensional ($X \subset \mathbb{R}^d$, $d \in \mathbb{N}$) variance in $X_\Omega^i$, which we define as follows:

$$\sigma_i^2 = \frac{1}{N_i - 1} \sum_{x \in X_\Omega^i} \left(x - \bar{x}^{(i)}\right) \circ \left(x - \bar{x}^{(i)}\right).$$
Note that the $\circ$-symbol denotes the Hadamard product, also known as the Schur product, i.e., the element-wise product. More precisely, for $u, v \in \mathbb{R}^d$ with $w = u \circ v$, it holds that $w \in \mathbb{R}^d$ with $w_i = u_i \cdot v_i$, $i = 1, \ldots, d$.
Inspired by Fisher's discriminant analysis [24], in this work, we define the discriminant ratio (DR) between the classes $\omega_i$ and $\omega_j$ as follows:

$$DR_{i,j} := \frac{\left\|\bar{x}^{(i)} - \bar{x}^{(j)}\right\|^2}{\left\|\sigma_i^2 + \sigma_j^2\right\|}. \quad (12)$$
Obviously, it holds that $DR_{i,j} \in \mathbb{R}_{\ge 0}$, and $DR_{i,j}$ is undefined if and only if $\sigma_i^2 = \sigma_j^2 = 0$. Therefore, $DR_{i,j}$ is undefined if and only if each of the sets $X_\Omega^i$ and $X_\Omega^j$ consists of solely one data point or of arbitrarily many identical data points, respectively. However, this is an unrealistic classification task scenario. Therefore, in this work, we will always assume that $\sigma_i^2 + \sigma_j^2 > 0$ holds.
Note that the DR measure constitutes an LSM mapping by Definition 1. More precisely, changing the class labels of the samples from class $\omega_i$ to $\omega_j$, and vice versa, obviously leads to the same $DR_{i,j}$ value as for the initial class labels. Therefore, the DR mapping is label invariant and fulfils property (P2) of Equation (2). Thus, we only have to check property (P1) of Equation (2), since the validity of property (P0), i.e., the non-negativity, follows directly from Equation (12).
To this end, let $Y = \{0, 1\}$ be a binary label set, and $X \subset \mathbb{R}^d$, $d \in \mathbb{N}$, a $d$-dimensional data set with at least two unequal elements, i.e., $|X| > 1$. For $X_Y^* = X \times Y$, it holds that $\bar{x}^{(0)} = \bar{x}^{(1)}$, and thus $\|\bar{x}^{(0)} - \bar{x}^{(1)}\|^2 = 0$. Therefore, for $X_Y^* = X \times Y$, it holds that $DR_{0,1} = 0$, which is the lower bound of the defined measure. Thus, the DR mapping fulfils property (P1) of Equation (2), and therefore constitutes an LSM mapping by Definition 1.
An exemplary illustration of the discriminant ratio-specific components is provided in Figure 5, for a 2-dimensional class pair, based on a toy data set.
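The discriminant ratio can be sketched as follows. This is a minimal illustration with hypothetical names, under the assumption that the vector-valued variance sum in the denominator is scalarised by its Euclidean norm.

```python
# Sketch of the discriminant ratio (DR): squared centroid distance divided by
# a norm of the summed per-dimension sample variances.
import numpy as np

def discriminant_ratio(Xi, Xj):
    """DR between two classes, given their samples as (n_i, d) arrays."""
    diff = Xi.mean(axis=0) - Xj.mean(axis=0)               # centroid difference
    var_sum = Xi.var(axis=0, ddof=1) + Xj.var(axis=0, ddof=1)
    return float(diff @ diff) / float(np.linalg.norm(var_sum))

# Identical point clouds yield DR = 0 (the lower bound, cf. property (P1));
# well-separated clouds yield larger values than overlapping ones.
A = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]])
B = A + 10.0                                               # far-away shifted copy
assert discriminant_ratio(A, A.copy()) == 0.0
assert discriminant_ratio(A, B) > discriminant_ratio(A, A + 0.1)
```

Note that, unlike the bounded resubstitution accuracy, this measure is unbounded above, which avoids the tie problem discussed for the SVM-based definition.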

4.2. Ordinal-Scaled and Categorical Features

For ordinal-scaled features, we propose mapping the feature values to the set $\{1, \ldots, n_i\}$, where $n_i$ denotes the number of possible values of feature $i$. For instance, let us assume that feature $i$ takes the three feature values low < medium < high. Then, the values of feature $i$ are mapped to the set $\{1, 2, 3\}$, i.e., low to 1, medium to 2, and high to 3.
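The proposed encoding can be sketched in two lines; the value order used here is just the example from the text.

```python
# Sketch: encode an ordinal-scaled feature onto {1, ..., n_i},
# preserving the given value order (low < medium < high).
ORDER = ["low", "medium", "high"]                  # example value order
encode = {v: k + 1 for k, v in enumerate(ORDER)}   # low -> 1, medium -> 2, high -> 3

assert [encode[v] for v in ["medium", "low", "high"]] == [2, 1, 3]
```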
For categorical features, we propose to replace the mean value by the mode, i.e., the most frequent value. Let $D$ be a $d$-dimensional categorical feature space. By $\Delta: D \times D \to \{0,1\}^d$, we denote the $d$-dimensional difference vector, which we define as follows. Let $x, z \in D$ be two data points from $D$. Furthermore, let $\delta \in \{0,1\}^d$ be the difference between $x$ and $z$, i.e., $\delta = \Delta(x, z)$. Then, for all $i = 1, \ldots, d$, the components of $\delta$ are defined as follows:

$$\delta_i = \begin{cases} 0, & x_i = z_i,\\ 1, & x_i \neq z_i. \end{cases} \quad (13)$$
Note that in general, the mode is not unique. Thus, by $\bar{M}_i$, we denote the set of all $d$-dimensional mode values specific to $X_\Omega^i$, i.e.,

$$\bar{M}_i = \mathrm{mode}\left(X_\Omega^i\right). \quad (14)$$

Therefore, for the categorical case, with $N_i = |X_\Omega^i|$, we define the $d$-dimensional variance vector as follows:

$$\sigma_i^{\mathrm{cat}} = \frac{1}{(N_i - 1) \cdot |\bar{M}_i|} \sum_{x \in X_\Omega^i} \sum_{\bar{x}^{\mathrm{cat}} \in \bar{M}_i} \Delta\left(x, \bar{x}^{\mathrm{cat}}\right). \quad (15)$$
Note that it is not necessary to apply the Hadamard product in the case of categorical features for the following reason: The elements of the difference vector are always equal to zero or one. Thus, squaring does not change the vector elements.
Analogously to the non-categorical case, we define the categorical discriminant ratio, denoted by $DR^{\mathrm{cat}}$, as follows:

$$DR_{i,j}^{\mathrm{cat}} := \frac{1}{|\bar{M}_i| + |\bar{M}_j|} \cdot \frac{\displaystyle\sum_{\bar{x}^{\mathrm{cat}} \in \bar{M}_i} \sum_{\bar{z}^{\mathrm{cat}} \in \bar{M}_j} \left\|\Delta\left(\bar{x}^{\mathrm{cat}}, \bar{z}^{\mathrm{cat}}\right)\right\|^2}{\left\|\sigma_i^{\mathrm{cat}} + \sigma_j^{\mathrm{cat}}\right\|}. \quad (16)$$
As an example, we constructed a non-numerical data set that is depicted in Table 2, based on the authors’ meta information.
From Table 2, we obtain the following unique 3-dimensional mode value, based on the features Middle Name, Institute, and ORCID:

$$\bar{x}^{\mathrm{cat}} = (\mathrm{No}, \mathrm{MSB}, \mathrm{Yes}).$$
In combination with the obtained mode value, $\bar{x}^{\mathrm{cat}}$, we can compute the categorical variance based on Equation (15) as follows:

$$\sigma^{\mathrm{cat}} = \frac{1}{3 - 1} \sum_{i=1}^{3} \Delta\left(x_i, \bar{x}^{\mathrm{cat}}\right) = \frac{1}{2}\left[\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\right] = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
If we add the first author of this work to the current data set, we obtain the additional feature vector $x_4 = (\mathrm{No}, \mathrm{NIP}, \mathrm{Yes})$. This then leads to two different $\bar{x}^{\mathrm{cat}}$ values, i.e., $\bar{x}^{\mathrm{cat}} \in \{(\mathrm{No}, \mathrm{MSB}, \mathrm{Yes}), (\mathrm{No}, \mathrm{NIP}, \mathrm{Yes})\}$. Note that in this example, all three (four) data points belong to one class; therefore, we omitted the index in $\sigma^{\mathrm{cat}}$.
Alternatively, in the case of more than one mode, one could analyse whether it is useful to randomly choose one of the corresponding mode values, which would reduce the computational complexity. However, this kind of evaluation is not part of the current contribution.
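The computations above (componentwise mode with ties, difference vector, and categorical variance) can be sketched as follows. Since Table 2 is not reproduced in this excerpt, the three data points below are hypothetical stand-ins, chosen so that they reproduce the difference vectors of the worked example above:

```python
from collections import Counter
from itertools import product

def modes(X):
    """Componentwise modes of a categorical data set; ties yield
    several mode vectors (the Cartesian product of the tied values)."""
    per_feature = []
    for col in zip(*X):
        counts = Counter(col)
        best = max(counts.values())
        per_feature.append([v for v in counts if counts[v] == best])
    return [tuple(m) for m in product(*per_feature)]

def delta(x, z):
    """d-dimensional difference vector: 1 where the entries differ."""
    return [0 if xi == zi else 1 for xi, zi in zip(x, z)]

def cat_variance(X, mode_set):
    """Categorical variance vector, averaged over all mode vectors."""
    n, d = len(X), len(X[0])
    sigma = [0.0] * d
    for x in X:
        for m in mode_set:
            sigma = [s + v for s, v in zip(sigma, delta(x, m))]
    return [s / ((n - 1) * len(mode_set)) for s in sigma]

# Hypothetical rows over (Middle Name, Institute, ORCID); each differs
# from the mode (No, MSB, Yes) in exactly one feature.
X = [("No", "MSB", "No"), ("Yes", "MSB", "Yes"), ("No", "NIP", "Yes")]
M = modes(X)                # [("No", "MSB", "Yes")]  (unique mode)
sigma = cat_variance(X, M)  # [0.5, 0.5, 0.5]
```

Adding a fourth row that ties the Institute feature (as in the example above) makes `modes` return two mode vectors.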

4.3. Interpretation

Note that one can find many mappings that fulfil the properties (P0), (P1), and (P2) of Equation (2), and can thus be identified as LSM mappings by Definition 1. For instance, removing the denominator from the definition of the discriminant ratio in Equation (12) also leads to an LSM mapping. More precisely, the function that solely focuses on the difference between the two centres, i.e., $\| \bar{x}^{(i)} - \bar{x}^{(j)} \|^2$, fulfils the properties (P0), (P1), and (P2) of Equation (2).
Therefore, in practice, it is important to choose a reasonable and task-appropriate LSM mapping. To stay with the current example, focusing solely on the differences between the class centres does not provide adequate, or even any, information about how easy it is to separate the corresponding classes. On the other hand, intuitively speaking, the discriminant ratio defined in Equation (12) is, in many cases, an appropriate measure for ranking the separability between different class pairs.
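To illustrate this ranking behaviour, the sketch below uses a simplified scalar reading of the discriminant ratio (squared distance between class centres over the sum of the class variances); this scalar form is our assumption, since Equation (12) is not reproduced in this excerpt and operates on full feature vectors:

```python
def discriminant_ratio(a, b):
    """Scalar discriminant ratio between two 1-D samples: squared
    mean difference over the sum of the unbiased class variances."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)
    return (mean(a) - mean(b)) ** 2 / (var(a) + var(b))

# Three classes whose centres are ordered on the real line.
w1, w2, w3 = [1.0, 1.2, 0.8], [3.0, 3.1, 2.9], [5.0, 5.2, 4.8]

# Adjacent class pairs are less separable than the outer pair,
# which is exactly the pattern an ordinal arrangement requires.
assert discriminant_ratio(w1, w3) > discriminant_ratio(w1, w2)
assert discriminant_ratio(w1, w3) > discriminant_ratio(w2, w3)
```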
While the motivation for finding ordinal structures in multi-class tasks is clear for ordinal- and numerical-scaled features, one can argue whether it is useful to search for ordinal structures in categorical feature spaces. To keep this discussion short, we think that, depending on the categorical task at hand, it might be interesting to determine and rank the level of separability between different class pairs, and even to find an overall class structure.

5. Evaluation

In this section, we will briefly describe a set of traditionally ordinal data sets as well as a set of traditionally non-ordinal data sets. Subsequently, we will provide the outcomes for the detection of ordinal class structures based on the resubstitution accuracy of linear SVM models and the discriminant ratio defined in Equation (12). All results were obtained using Matlab (https://www.mathworks.com/products/matlab.html, last access on 20 December 2021), applying the default parameters for the SVMs, in combination with the SMO (Sequential Minimal Optimization) solver [25,26]. At the end of the current section, we will provide a running time comparison between both LSM mappings, i.e., SVM resubstitution accuracy (SVM-Acc) and the DR measure.
Note that we apply the algorithm presented in Figure 3 for the detection of FS-ordinal structures. More precisely, a data set is identified as FS-ordinal if there exist exactly two class label permutations such that the corresponding pairwise separability matrices fulfil the properties of Equation (6). The two permutations represent the detected (bidirectional) class order.
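Since Figure 3 and Equation (6) are not reproduced in this excerpt, the following is only a rough sketch of the detection criterion, based on the description in Appendix A (the rows of a permuted PSM must be monotonously decreasing towards the diagonal). It checks FS-ordinality via an exhaustive search over all class permutations; the actual algorithm of Figure 3 avoids this O(c!) enumeration:

```python
from itertools import permutations

def is_ordinal_arrangement(M):
    """Check that every row of the PSM decreases towards the diagonal,
    i.e., separability grows with the distance between class indices."""
    c = len(M)
    for i in range(c):
        left = [M[i][j] for j in range(i + 1)]   # ends at the diagonal (0)
        right = [M[i][j] for j in range(i, c)]   # starts at the diagonal (0)
        if any(left[k] < left[k + 1] for k in range(len(left) - 1)):
            return False
        if any(right[k] > right[k + 1] for k in range(len(right) - 1)):
            return False
    return True

def fs_ordinal_orders(M):
    """All class permutations whose rearranged PSM is an ordinal
    arrangement; the task is FS-ordinal iff exactly two are found
    (one class order and its reverse)."""
    c = len(M)
    valid = []
    for p in permutations(range(c)):
        Mp = [[M[p[i]][p[j]] for j in range(c)] for i in range(c)]
        if is_ordinal_arrangement(Mp):
            valid.append(p)
    return valid

# A PSM with a clear ordinal structure: exactly one bidirectional order.
M = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
assert fs_ordinal_orders(M) == [(0, 1, 2), (2, 1, 0)]
```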

5.1. Traditionally Ordinal Data Sets

We will evaluate the following eight publicly available, traditionally ordinal data sets. The data sets Social Workers Decisions (SWD), Lecturers Evaluation (LEV), Employee Selection (ESL), and Employee Rejection/Acceptance (ERA) are publicly available on Weka (https://waikato.github.io/weka-wiki/datasets/, last access on 20 December 2021), and can be extracted from the file datasets_arie_ben_david.tar.gz. The data sets Contraceptive Method Choice (CMC), Car Evaluation (Cars), and Nursery are all part of the UCI machine learning repository [27]. Moreover, the BioVid Heat Pain Database (BVDB) can be obtained by request (http://www.iikt.ovgu.de/BioVid.print, last access on 20 December 2021). Note that a detailed description of the BVDB is provided in Appendix B.

5.2. Additional Data Set Information

As one of the five classes of the Nursery data set consists of solely two data points, we will evaluate this data set as a 4-class classification task, omitting the corresponding two samples.
Due to the strong class imbalance (see Table 3), the classes 4 and 5 of the LEV data set are often fused to one class. In this work, we will analyse both the initial and the modified LEV data set. We will refer to the modified 4-class data set as LEV-4.
In addition, due to the present class imbalance, the classes 1, 2, 3 and the classes 7, 8, 9 of the ESL data set are generally each combined to one class. We will evaluate both variants of the ESL data set, denoting the modified 5-class data set by ESL-5.
Similar to the data sets LEV and ESL, due to the present class imbalance, we will analyse two variants of the ERA data set, including the ERA-7 data set. We obtain the ERA-7 data set by fusing the classes 7, 8, 9 to one corresponding class.
The properties of all traditionally ordinal data sets that we evaluate in this work are listed in Table 3.

5.3. Results for Traditionally Ordinal Data Sets

The evaluation of the detection of ordinal structures in combination with the traditionally ordinal data sets is provided in Table 4. From Table 4, we can make the following observations. First, based on the LSM mapping DR, eight out of the eleven data sets are identified as FS-ordinal, whereas based on the SVM-Acc measure, seven data sets are identified as FS-ordinal. Both approaches found the correct structures, i.e., the structures corresponding to the data sets' natural class order. Second, six out of the eleven data sets are identified as FS-ordinal by both mappings simultaneously, i.e., the data sets CMC, LEV-4, SWD, ESL-5, LEV, and BVDB. Third, the data set Cars is identified as FS-ordinal in combination with the SVM-Acc measure, while not being identified as FS-ordinal based on the DR measure. On the other hand, the data sets ERA-7 and ERA are identified as FS-ordinal in combination with the DR measure, while not being identified as FS-ordinal based on the SVM-Acc measure.
The data sets Nursery and ESL were not identified as FS-ordinal by either of the two measures: DR and SVM-Acc. Obviously, FS-ordinal structures and traditionally ordinal structures constitute two different categories. Traditionally ordinal structures are defined simply based on class label names, i.e., on a semantic level. However, the corresponding order is not necessarily reflected in the feature space. As a consequence, it is not always possible to detect any structure in combination with traditionally ordinal data sets. This outcome has also been observed and discussed in [3,20,21].

5.4. Results for Traditionally Non-Ordinal Data Sets

We evaluated a set of six traditionally non-ordinal data sets that are all publicly available in the UCI machine learning repository [27]: Iris, Seeds, Forest Type Mapping (Forests), Statlog Vehicle Silhouettes (Vehicles), Statlog Image Segmentation (Segment), and Multiple Features (Mfeat). The data set properties are summarised in Table 5.
The evaluation of the detection of ordinal structures in combination with the traditionally non-ordinal data sets is provided in Table 6. From Table 6, we can make the following observations. First, only the Seeds data set is identified as FS-ordinal simultaneously by both LSM mappings, DR and SVM-Acc, in each case specific to the class order Rosa ≺ Kama ≺ Canadian. Second, the data set Forests is identified as FS-ordinal in combination with the SVM-Acc measure, but not specific to the DR mapping. The identified class order is equal to Hinoki ≺ Sugi ≺ Mixed Deciduous ≺ Non-Forest. Third, the data sets Iris and Vehicles are identified as FS-ordinal in combination with the DR measure, while not being identified as FS-ordinal based on the SVM-Acc measure. The detected class order for the Iris data set is equal to Setosa ≺ Versicolor ≺ Virginica, whereas the detected class order for the Vehicles data set is equal to Van ≺ Bus ≺ Saab ≺ Opel.

5.5. Running Time Comparison

In Table 7, we provide the averaged running time and standard deviation (std) values, in ms, for both LSM mappings, DR and SVM-Acc. Note that the values were obtained by repeating the detection algorithm depicted in Figure 3 for ten iterations (each including the entire data set). For the experiments, we used Matlab, version R2019b, on an Intel Core i7-6700K @ 4 GHz, running Windows 7, 64-bit. From Table 7, we can make the following observations.
Applying the DR measure leads to a much faster check for (FS-)ordinal structures, in comparison to applying the SVM-Acc mapping. The difference is statistically significant, according to a two-sided Wilcoxon signed-rank test [28], with a p-value of $2.93 \cdot 10^{-4}$. The largest difference can be observed for the BVDB. Using the DR mapping leads to an averaged detection time of approximately 44 ms, whereas applying the SVM-Acc measure leads to an averaged detection time of approximately 469,839 ms, i.e., approximately 7.8 min, with a standard deviation of 2.6 s.
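A significance check of this kind can be reproduced along the following lines; the timing values below are placeholders (only the two BVDB values are taken from the text), not the measurements of Table 7:

```python
# Paired running times (ms) per data set for the two LSM mappings.
# Placeholder values for illustration; only the BVDB pair (44 ms vs.
# 469,839 ms) is taken from the running-time discussion above.
from scipy.stats import wilcoxon

dr_times      = [2.1, 3.4, 1.8, 44.0, 5.2, 2.9, 3.1, 6.0, 4.4, 2.5, 3.8]
svm_acc_times = [9.7, 15.2, 8.1, 469839.0, 30.4, 12.6,
                 14.0, 25.3, 19.9, 11.2, 16.5]

# Two-sided Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(dr_times, svm_acc_times, alternative="two-sided")
print(f"two-sided Wilcoxon signed-rank p-value: {p_value:.4g}")
```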

6. Discussion

In this section, we will first discuss the operational complexity of the detection of FS-ordinal class structures, based on the obtained outcomes in Section 5, including the limitations of our proposed working definition. Subsequently, we will use the simple 4-dimensional Iris data set to provide an illustrative example for the usefulness of our introduced concept of FS-ordinal class structures.

6.1. Operational Complexity and Detection Limitations

The operational cost depends on many factors, e.g., the number of classes, the number of features, the number of samples, the choice of the corresponding LSM mapping, and the complexity of the current classification task. For instance, applying the SVM-Acc measure in combination with the BVDB led to the highest averaged operational time (AOT) by far (see Table 7). The AOT for the BVDB in combination with the SVM-Acc mapping is approximately equal to 470 s, followed by the second longest AOT of approximately 20 s for the Mfeat data set. Note that the BVDB has fewer features than the Mfeat data set (194 vs. 649), and also consists of fewer classes (5 vs. 10). Obviously, the BVDB is composed of more data points than the Mfeat data set (8700 vs. 2000). On the other hand, the BVDB consists of fewer data points than the Nursery data set (8700 vs. 12,598), for which the AOT is only about 2 s in combination with the SVM-Acc measure. These observations emphasise that the operational cost indeed depends on the combination of all aforementioned factors, including the complexity of the corresponding classification task.
Intuitively, the task of classifying different pain levels based on the participants' physiological signals, such as the heart rate (BVDB), seems to be more complex than differentiating between ten digits based on features such as pixel averages (Mfeat). For instance, in one of our recent studies [29], we obtained the following accuracy values. For the Mfeat data set, we obtained an averaged cross-validation (CV) accuracy of ~96%, including all 10 classes, in combination with bagging [30] and the early fusion approach [31]. In contrast, for the BVDB, including only the two best separable classes (no pain vs. the highest pain level), we merely obtained a mean CV accuracy of about 82% (with the same size for the test folds). Training SVM classifiers on the BVDB therefore very likely leads to more support vectors than training SVM classifiers on the Mfeat data set. An increased number of support vectors leads to an increased number of updating steps, and thus to a longer training duration.
As we already discussed in our previous work [21], our proposed detection algorithm reduces the detection complexity from $O(c!)$ to $O(c^2)$, where $c$ denotes the number of classes. Note that the complexity $O(c!)$ corresponds to an exhaustive search, in which one has to explicitly check all possible class permutations, or at least half of them. Moreover, the sorting complexity for the rearrangement of the rows and columns of the corresponding PSMs, which is generally equal to $O(c^2 \log(c))$, can be neglected. Note that, in general, the number of classes $c$ for which we try to provide an ordinal structure analysis is usually low and mostly not (significantly) greater than 10.
As briefly discussed in Section 3.2, choosing an LSM mapping that is bounded, e.g., the resubstitution accuracy, which is limited to 1, i.e., 100%, can lead to failures in detecting FS-ordinal structures, even in cases where the data are linearly separable and obviously ordered, as depicted, for instance, in Figure 4. However, one can overcome this issue by choosing an LSM mapping that can take arbitrary values, such as the DR measure introduced in Equation (12). Moreover, as discussed in [32], using an accuracy-based LSM mapping might suffer from the curse of dimensionality, according to Cover's theorem [33]. Note that we already discussed the influence of the feature dimension, based on the BVDB, for which the averaged detection time increased from roughly 44 ms for the DR measure to approximately 7.8 min for the SVM-Acc mapping. Again, one might overcome this issue by choosing an appropriate LSM mapping, such as the DR measure, which also identified the correct order of the BVDB, in less than one second on average.

6.2. Iris Data Set—A Motivational Example for the Detection of FS-Ordinal Structures

The Iris data set is one of the most frequently used traditional machine learning data sets, consisting of three types of Iris flowers. The classes, i.e., Iris types, are Setosa, Versicolor, and Virginica. The data are characterised by four features, i.e., sepal length, sepal width, petal length, and petal width.
In combination with the SVM-Acc mapping, we were not able to detect an (FS-)ordinal structure for the Iris data set. Note that the Iris data set constitutes a 3-class classification task. This observation implies that the SVM-Acc measure-based PSM fulfils neither the conditions of Theorem 1 nor those of Corollary 1. In fact, calculating the corresponding SVM-Acc mapping-based PSM leads to
$$M(\mathrm{id}) = \begin{pmatrix} 0 & 1.00 & 1.00 \\ 1.00 & 0 & 0.99 \\ 1.00 & 0.99 & 0 \end{pmatrix},$$
whereby the rows and columns correspond to $\omega_1$, $\omega_2$, and $\omega_3$, which denote the classes Setosa, Versicolor, and Virginica, respectively. Note that the matrix $M(\mathrm{id})$ from Equation (17) constitutes exactly the same example that we discussed in Section 3.2 (with $\alpha = 0.99$), in combination with Figure 4. More precisely, the class orders $\omega_1 \prec \omega_2 \prec \omega_3$ (with its reversed order $\omega_3 \prec \omega_2 \prec \omega_1$) and $\omega_1 \prec \omega_3 \prec \omega_2$ (with its reversed order $\omega_2 \prec \omega_3 \prec \omega_1$) constitute ordinal arrangements by Definition 2. Therefore, the uniqueness property of Equation (8) is violated. Thus, by Definition 3, the Iris data set is not FS-ordinal with respect to the mapping SVM-Acc.
On the other hand, using the DR mapping, the Iris data set is identified as FS-ordinal specific to the order $\omega_1 \prec \omega_2 \prec \omega_3$, i.e., Setosa ≺ Versicolor ≺ Virginica (with its reversed order Virginica ≺ Versicolor ≺ Setosa). The question that arises here is the following: does this specific structure make sense? Based solely on the label names (i.e., flower types), one would probably not try to find an ordinal structure in the Iris data set. However, as we already discussed in this work, one can benefit from ordinal class structures that are present in the feature space, from a machine learning point of view. Note that the feature space of the Iris data set consists of only four features (sepal length (SL), sepal width (SW), petal length (PL), and petal width (PW)), which are easy to interpret. This small number of features allows us to proceed with the following eye test.
From the total of four features, we can form six distinct binary feature combinations, i.e., (SL, SW), (SL, PL), (SL, PW), (SW, PL), (SW, PW), and (PL, PW). In Figure 6, we plotted all six binary feature combinations, from which we can make the following observations. Except for the top left plot (sepal length vs. sepal width), the detected class structure (Setosa ≺ Versicolor ≺ Virginica, with its reversed order Virginica ≺ Versicolor ≺ Setosa) is reflected in each binary subspace. Therefore, it is to be expected that the same class order is present in the complete 4-dimensional feature space. To answer the question stated above: the class order Setosa ≺ Versicolor ≺ Virginica makes sense in combination with the provided feature space. Thus, applying the proposed DR measure helped us to identify the correct class order of the Iris data set.
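The eye test can also be automated: for the detected order, the per-class means of each feature, except sepal width, should be monotone. A quick check with scikit-learn's copy of the Iris data (assuming scikit-learn is available; the class encoding 0, 1, 2 for Setosa, Versicolor, Virginica is the library's standard one) might look like:

```python
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
order = [0, 1, 2]  # Setosa, Versicolor, Virginica (detected class order)

# Per-class mean of every feature, rows ordered by the detected order.
means = np.array([iris.data[iris.target == c].mean(axis=0) for c in order])

for name, col in zip(iris.feature_names, means.T):
    monotone = np.all(np.diff(col) > 0) or np.all(np.diff(col) < 0)
    print(f"{name}: class means {np.round(col, 2)} -> monotone: {monotone}")
```

Running this confirms the observation from Figure 6: all features except sepal width are monotone along the detected class order.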

7. Conclusions

In this work, we provided a generalised working definition for ordinal classification (OC) tasks. To this end, we introduced the concepts of ordinal arrangements and level of separability measures (LSMs). The resulting definition of OC tasks, which is presented in Definition 3, is based on the task-specific feature space. Thus, we denote OC tasks that are identified as ordinal based on our proposed working definition, as feature space-based ordinal, i.e., FS-ordinal. Note that the definition of FS-ordinal class structures, including the corresponding concepts, is a generalisation of the recent definition approach from our previous work [21]. In the current study, we discussed one of the main limitations of our former definition and extended the corresponding theoretical outcomes. More precisely, here, we completed Theorem 1 by Corollary 1. Moreover, we presented, illustrated, and interpreted the discriminant ratio (DR), which constitutes a classifier-independent LSM mapping. Additionally, we discussed its potential for the case of categorical feature spaces, which might be an interesting research question for future studies.
We provided an exhaustive evaluation of our proposed working definition and detection algorithm, based on a set of traditionally ordinal and traditionally non-ordinal data sets, including the pain-related BioVid Heat Pain Database (BVDB). Note that the naturally occurring ordinal class structure of the BVDB, i.e., no pain ≺ ⋯ ≺ unbearable pain, was correctly identified based on both our former and our current working definition. Moreover, we were able to provide an additional motivational example for the effectiveness of our presented concepts, based on one of the oldest and most popular pattern recognition data sets, the Iris data set. Note that the Iris data set is a 4-dimensional data set consisting of three types of Iris flowers and four easily interpretable features.
Based on the outcomes of our numerical experiments, which included a short evaluation of the corresponding detection-specific operational times, we can conclude this work as follows. We believe that we provided a non-complex working definition of ordinal class structures, i.e., FS-ordinal class structures, which benefits from the following characteristics: (i) the definition is intuitively interpretable and easy to apply; (ii) the definition focuses on the corresponding feature space; (iii) the definition allows a classifier-independent detection of (FS-)ordinal class structures; (iv) the definition can be enhanced by theoretical outcomes; and (v) the definition can be applied appropriately to classification tasks with different characteristics, e.g., class imbalance or high-dimensional data.
Finally, as discussed above, the requirements of the proposed working definition do not describe a unique definition of class ordinality. This allows for a plethora of different instantiations that can imply different ordinal class structures with different characteristics. Therefore, one should be aware that not all of these class structures might be useful for specific classification tasks [34]. Thus, providing additional domain-specific definition extensions might be beneficial.

Author Contributions

Conceptualisation, P.B. and F.S.; Methodology, P.B.; Software, P.B.; Validation, P.B.; Formal Analysis, P.B.; Investigation, P.B. and F.S.; Writing—Original Draft Preparation, P.B.; Writing—Review and Editing, P.B., L.L., H.A.K. and F.S.; Visualisation, P.B.; Supervision, H.A.K. and F.S.; Project Administration, H.A.K. and F.S.; Funding Acquisition, H.A.K. and F.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Peter Bellmann and Friedhelm Schwenker is supported by the project Multimodal recognition of affect over the course of a tutorial learning experiment (SCHW623/7-1) funded by the German Research Foundation (DFG). Hans A. Kestler acknowledges funding from the German Science Foundation (DFG, 217328187 (SFB 1074) and 288342734 (GRK HEIST)). Hans A. Kestler also acknowledges funding from the German Federal Ministry of Education and Research (BMBF) e:MED confirm (id 01ZX1708C) and TRAN-SCAN VI - PMTR-pNET (id 01KT1901B).

Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANN: Artificial Neural Network
AOT: Averaged Operational Time
BVDB: BioVid Heat Pain Database
CM: Classification Model
CMC: Contraceptive Method Choice (Data Set)
CV: Cross Validation
DR: Discriminant Ratio
ECG: Electrocardiogram
EDA: Electrodermal Activity
EMG: Electromyogram
ERA: Employee Rejection/Acceptance (Data Set)
ESL: Employee Selection (Data Set)
FS-ordinal: Feature Space-Based Ordinal
LEV: Lecturers Evaluation (Data Set)
LSM: Level of Separability Measure
Mfeat: Multiple Features (Data Set)
OC: Ordinal Classification
OR: Ordinal Regression
PSM: Pairwise Separability Matrix
SMO: Sequential Minimal Optimisation
std: Standard Deviation
SVM: Support Vector Machine
SVM-Acc: Support Vector Machine Resubstitution Accuracy
SWD: Social Workers Decisions (Data Set)

Appendix A. Proof of Corollary 1

Let $X_\Omega \subset \mathbb{R}^d \times \{\omega_1, \omega_2, \omega_3\}$, $d \in \mathbb{N}$, constitute a 3-class classification task. Furthermore, let the corresponding PSM, $M$, for some LSM, $\mu \in \mathcal{M}_d$, be defined as follows:
$$M(\mathrm{id}) = \begin{pmatrix} 0 & e & f \\ e & 0 & g \\ f & g & 0 \end{pmatrix}, \quad \text{with } e, f, g > 0,$$
with rows and columns labelled by $\omega_1$, $\omega_2$, $\omega_3$.
Let two of the three values, $e, f, g$, be equal and smaller than the remaining one. Without loss of generality, we assume that
$$(e = f) \wedge (g > e = f).$$
As it holds that $f < g$, it follows that $M(\mathrm{id})$ does not constitute an ordinal arrangement, because the last row of the matrix $M(\mathrm{id})$ is not monotonously decreasing. Therefore, the PSM $M(\mathrm{id})$ violates the properties of Equation (6). Thus, it directly follows that $M(-\mathrm{id})$ also violates the properties of Equation (6).
Let us now define the permutation $\nu \in T_3$ as $\nu : (1, 2, 3) \mapsto (1, 3, 2)$. The resulting PSM is equal to
$$M(\nu) = \begin{pmatrix} 0 & f & e \\ f & 0 & g \\ e & g & 0 \end{pmatrix},$$
with rows and columns labelled by $\omega_1$, $\omega_3$, $\omega_2$.
As it holds that $e < g$, it follows that $M(\nu)$ does not constitute an ordinal arrangement, because the last row of the matrix $M(\nu)$ violates the properties of Equation (6). Therefore, it directly follows that $M(-\nu)$ also violates the properties of Equation (6).
Let us now define the permutation $\tau \in T_3$ as $\tau : (1, 2, 3) \mapsto (2, 1, 3)$, i.e., $\tau \neq \pm\mathrm{id}$ and $\tau \neq \pm\nu$. With $-\tau : (1, 2, 3) \mapsto (3, 1, 2)$, the resulting PSMs are equal to
$$M(\tau) = \begin{pmatrix} 0 & e & g \\ e & 0 & f \\ g & f & 0 \end{pmatrix}, \quad M(-\tau) = \begin{pmatrix} 0 & f & g \\ f & 0 & e \\ g & e & 0 \end{pmatrix},$$
with rows and columns labelled by $\omega_2$, $\omega_1$, $\omega_3$ and by $\omega_3$, $\omega_1$, $\omega_2$, respectively.
Thus, both matrices $M(\tau)$ and $M(-\tau)$ fulfil the properties of Equation (6). Note that the number of elements in $T_3$ is equal to 6, i.e., $T_3 = \{\pm\mathrm{id}, \pm\nu, \pm\tau\}$. We showed that $M(\pm\tau)$ fulfils the properties of Equation (6) (existence), whereas $M(\pm\mathrm{id})$ and $M(\pm\nu)$ violate the properties of Equation (6) (uniqueness). Therefore, we showed that the task $X_\Omega$ is FS-ordinal specific to $(\mu, \pm\tau)$ by Definition 3.
Analogously, based on Equations (A1)–(A3), we can observe that for the case $(e = g) \wedge (f > e = g)$, the task $X_\Omega$ is FS-ordinal specific to $(\mu, \pm\mathrm{id})$, whereas for the case $(f = g) \wedge (e > f = g)$, the task $X_\Omega$ is FS-ordinal specific to $(\mu, \pm\nu)$ by Definition 3. (Note that this proof works analogously to the proof in [21].) □

Appendix B. BioVid Heat Pain Database Part A

The BioVid Heat Pain Database (BVDB) [35] was collected at Ulm University to enhance the research in the field of machine learning-based emotion and pain (intensity) recognition. The publicly available BVDB (strictly restricted to research purposes) consists of five parts (http://www.iikt.ovgu.de/BioVid.print, last access on 20 December 2021). In the current study, we focus on Part A of the BVDB, i.e., by the abbreviation BVDB we will always refer to Part A of the database.
A total of 87 healthy test subjects participated in strictly controlled pain elicitation experiments, conducted with a Medoc heat thermode (https://www.medoc-web.com/, last access on 20 December 2021) attached to one of the participant's forearms. Each participant underwent an individual calibration phase, which led to four equidistant temperature values, i.e., pain levels. To avoid skin burns, it was strictly forbidden to exceed a temperature of 50.5 °C.
Each participant was stimulated 20 times with each of the four pain levels, in randomised order. Each pain level was held for 4 s. Between two pain level-specific stimuli, the temperature was linearly decreased to 32 °C, denoted as the baseline, and held for a random duration of 8–12 s. During the experiments, the participants were recorded from three different angles, leading to three video signals. Additionally, the experimenters recorded the following three physiological signals: electrocardiogram (ECG), electromyogram (EMG), and electrodermal activity (EDA).
In the current work, we focus on the physiological modalities. The ECG signals measure a person's heart activity. The EMG signals measure a person's muscle activity; the EMG sensor was attached to the trapezius muscle (in Part A of the BVDB), which is located at the back of the human torso, in the shoulder area. The EDA signals measure a person's skin conductance; to this end, the sensors were attached to the participant's ring and index fingers, respectively.
Note that each of the physiological signals constitutes a time series. For manual feature extraction, windows of length 5.5 s were defined and applied to each of the pain-related and baseline stimuli. Different statistical descriptors, including mean, min, and max, among others, were extracted from both the frequency domain and the temporal domain. Moreover, for the ECG modality, different signal-specific features were extracted, based on the so-called P, Q, R, S, and T wavelets. As the process of feature extraction is not part of the current contribution, we refer the reader to [36] or [37] for a detailed feature extraction analysis; we use exactly the same features in the current work.
The feature extraction process led to a total of 194 features: 56, 68, and 70 features for the modalities EMG, ECG, and EDA, respectively. Following the feature extraction step, the participant-specific feature subsets were normalised to a mean of 0 and a standard deviation of 1.
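The participant-specific normalisation step (our reading: column-wise z-score standardisation to mean 0 and standard deviation 1) can be sketched as follows:

```python
import numpy as np

def zscore(features):
    """Standardise each feature column to mean 0 and std 1
    (population standard deviation, ddof=0)."""
    X = np.asarray(features, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Toy feature matrix: 3 samples, 2 features on very different scales.
X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
Z = zscore(X)
# Each column of Z now has mean 0 and standard deviation 1.
```

In the BVDB setting, this would be applied separately to each participant's feature subset.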
Note that the BVDB constitutes an ordinal data set in the traditional way, specific to the class label order no pain ≺ low pain ≺ intermediate pain ≺ strong pain ≺ unbearable pain. While this data set is usually not evaluated in combination with its corresponding class order, we recently showed the effectiveness of focusing on the ordinal structure in [1].

References

  1. Bellmann, P.; Lausser, L.; Kestler, H.A.; Schwenker, F. Introducing Bidirectional Ordinal Classifier Cascades Based on a Pain Intensity Recognition Scenario; ICPR Workshops (6); Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12666, pp. 773–787.
  2. Hühn, J.C.; Hüllermeier, E. Is an ordinal class structure useful in classifier learning? IJDMMM 2008, 1, 45–67.
  3. Lattke, R.; Lausser, L.; Müssel, C.; Kestler, H.A. Detecting Ordinal Class Structures. In MCS; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9132, pp. 100–111.
  4. LeCun, Y.; Bengio, Y.; Hinton, G.E. Deep learning. Nature 2015, 521, 436–444.
  5. Liu, Y.; Kong, A.W.; Goh, C.K. Deep Ordinal Regression Based on Data Relationship for Small Datasets. In Proceedings of the Twenty-Sixth Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, 19–25 August 2017; pp. 2372–2378.
  6. Lin, Z.; Gao, Z.; Ji, H.; Zhai, R.; Shen, X.; Mei, T. Classification of cervical cells leveraging simultaneous super-resolution and ordinal regression. Appl. Soft Comput. 2022, 115, 108208.
  7. Niu, Z.; Zhou, M.; Wang, L.; Gao, X.; Hua, G. Ordinal Regression with Multiple Output CNN for Age Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Washington, DC, USA, 2016; pp. 4920–4928.
  8. Chen, S.; Zhang, C.; Dong, M.; Le, J.; Rao, M. Using Ranking-CNN for Age Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Washington, DC, USA, 2017; pp. 742–751.
  9. Gutiérrez, P.A.; Pérez-Ortiz, M.; Sánchez-Monedero, J.; Fernández-Navarro, F.; Hervás-Martínez, C. Ordinal Regression Methods: Survey and Experimental Study. IEEE Trans. Knowl. Data Eng. 2016, 28, 127–146.
  10. Cruz-Ramírez, M.; Hervás-Martínez, C.; Sánchez-Monedero, J.; Gutiérrez, P.A. Metrics to guide a multi-objective evolutionary algorithm for ordinal classification. Neurocomputing 2014, 135, 21–31.
  11. Cardoso, J.S.; Sousa, R.G. Measuring the Performance of Ordinal Classification. IJPRAI 2011, 25, 1173–1195.
  12. Frank, E.; Hall, M.A. A Simple Approach to Ordinal Classification. In ECML; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2167, pp. 145–156.
  13. Cover, T.M.; Hart, P.E. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
  14. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Wiley: Wadsworth, OH, USA, 1984.
  15. Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013.
  16. Abe, S. Support Vector Machines for Pattern Classification; Advances in Pattern Recognition; Springer: London, UK, 2005.
  17. Chu, W.; Keerthi, S.S. New approaches to support vector ordinal regression. In ICML; ACM International Conference Proceeding Series; ACM: New York, NY, USA, 2005; Volume 119, pp. 145–152.
  18. Cardoso, J.S.; da Costa, J.F.P.; Cardoso, M.J. Modelling ordinal relations with SVMs: An application to objective aesthetic evaluation of breast cancer conservative treatment. Neural Netw. 2005, 18, 808–817.
  19. Chu, W.; Keerthi, S.S. Support Vector Ordinal Regression. Neural Comput. 2007, 19, 792–815.
  20. Lausser, L.; Schäfer, L.M.; Schirra, L.R.; Szekely, R.; Schmid, F.; Kestler, H.A. Assessing phenotype order in molecular data. Sci. Rep. 2019, 9, 1–10.
  21. Bellmann, P.; Schwenker, F. Ordinal Classification: Working Definition and Detection of Ordinal Structures. IEEE Access 2020, 8, 164380–164391.
  22. McCullagh, P. Regression models for ordinal data. J. R. Stat. Soc. Ser. B Methodol. 1980, 42, 109–127.
  23. Agresti, A. Analysis of Ordinal Categorical Data; John Wiley & Sons: Hoboken, NJ, USA, 2010; Volume 656.
  24. Fisher, R.A. The use of multiple measurements in taxonomic problems. Ann. Eugen. 1936, 7, 179–188.
  25. Kächele, M.; Palm, G.; Schwenker, F. SMO Lattices for the Parallel Training of Support Vector Machines. In Proceedings of the ESANN, Bruges, Belgium, 22–24 April 2015.
  26. Fan, R.; Chen, P.; Lin, C. Working Set Selection Using Second Order Information for Training Support Vector Machines. J. Mach. Learn. Res. 2005, 6, 1889–1918.
  27. Dua, D.; Graff, C. UCI Machine Learning Repository; University of California: Irvine, CA, USA, 2017.
  28. Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80–83.
  29. Bellmann, P.; Thiam, P.; Schwenker, F. Using Meta Labels for the Training of Weighting Models in a Sample-Specific Late Fusion Classification Architecture. In Proceedings of the ICPR, Milan, Italy, 10–15 January 2021; IEEE: Washington, DC, USA, 2020; pp. 2604–2611.
  30. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140.
  31. Snoek, C.; Worring, M.; Smeulders, A.W.M. Early versus late fusion in semantic video analysis. In Proceedings of the ACM Multimedia, Singapore, 6–11 November 2005; ACM: New York, NY, USA, 2005; pp. 399–402.
  32. Schäfer, L.M. Systems Biology of Tumour Evolution: Estimating Orders from Omics Data. Ph.D. Thesis, Universität Ulm, Ulm, Germany, 2021. [Google Scholar]
  33. Cover, T.M. Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition. IEEE Trans. Electron. Comput. 1965, EC-14, 326–334. [Google Scholar] [CrossRef] [Green Version]
  34. Lausser, L.; Schäfer, L.M.; Kestler, H.A. Ordinal Classifiers Can Fail on Repetitive Class Structures. Arch. Data Sci. Ser. A 2018, 4, 25. [Google Scholar]
  35. Walter, S.; Gruss, S.; Ehleiter, H.; Tan, J.; Traue, H.C.; Crawcour, S.C.; Werner, P.; Al-Hamadi, A.; Andrade, A.O. The biovid heat pain database data for the advancement and systematic validation of an automated pain recognition system. In Proceedings of the CYBCONF, Lausanne, Switzerland, 13–15 June 2013; IEEE: Washington, DC, USA, 2013; pp. 128–131. [Google Scholar]
  36. Kächele, M.; Amirian, M.; Thiam, P.; Werner, P.; Walter, S.; Palm, G.; Schwenker, F. Adaptive confidence learning for the personalization of pain intensity estimation systems. Evol. Syst. 2017, 8, 71–83. [Google Scholar] [CrossRef]
  37. Kächele, M.; Thiam, P.; Amirian, M.; Schwenker, F.; Palm, G. Methods for Person-Centered Continuous Pain Intensity Assessment From Bio-Physiological Channels. J. Sel. Top. Signal Process. 2016, 10, 854–864. [Google Scholar] [CrossRef]
Figure 1. General classification task processing steps. Left: Sequential processing steps. Right: Step-specific processing examples. The detection of ordinal class structures is included in the Data Analysis step (highlighted in green colour, in the online version of the manuscript).
Figure 2. Example of an ordinal-structured 2-dimensional 5-class toy data set with class order ω_1 ≺ ω_2 ≺ ω_3 ≺ ω_4 ≺ ω_5. The relationship between μ_{2,3} and μ_{3,4} could be either ≤ or ≥, because class ω_2 is closer to edge class ω_1, whereas class ω_4 is closer to edge class ω_5. For μ_{3,5} and μ_{3,4}, it holds that μ_{3,5} ≥ μ_{3,4}.
Figure 3. Detection of FS-ordinal structures. If the given task X Ω constitutes an FS-ordinal classification task, then the output includes exactly two permutations, which represent the ordinal structure of the current task. (This figure is adapted from our previous work [21].)
Figure 4. Example of an ordinal-structured 3-class toy data set with class order ω_1 ≺ ω_2 ≺ ω_3. (This figure is adapted from our previous work [21].)
Figure 5. Visualisation of the discriminant ratio components for two 2-dimensional classes.
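To give a rough feeling for what such a pairwise separability ratio measures, the sketch below uses a Fisher-style stand-in (not the authors' exact component definitions, which accompany Figure 5): both classes are projected onto the direction connecting their means, and the between-class distance is related to the within-class scatter.

```python
import numpy as np

def discriminant_ratio(A, B):
    """Fisher-style stand-in for a pairwise separability measure:
    squared distance of the projected class means, divided by the
    summed within-class variances of the projections."""
    w = np.mean(A, axis=0) - np.mean(B, axis=0)   # direction connecting the class means
    w /= np.linalg.norm(w)
    pa, pb = A @ w, B @ w                          # 1-D projections of both classes
    between = (np.mean(pa) - np.mean(pb)) ** 2     # between-class component
    within = np.var(pa) + np.var(pb)               # within-class component
    return between / within

# Two well-separated and two strongly overlapping 2-D Gaussian classes.
rng = np.random.default_rng(1)
A = rng.normal((0.0, 0.0), 1.0, (100, 2))
B = rng.normal((5.0, 5.0), 1.0, (100, 2))
C = rng.normal((0.5, 0.5), 1.0, (100, 2))
r_far, r_near = discriminant_ratio(A, B), discriminant_ratio(A, C)
```

As expected, the ratio is large for the well-separated pair (A, B) and small for the overlapping pair (A, C), which is exactly the behaviour a separability mapping μ needs for the detection procedure.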
Figure 6. Iris data set. Depicted are all pairwise combinations of the features Sepal Length, Sepal Width, Petal Length, and Petal Width, in cm. The legend is provided in the bottom right plot.
Table 1. Summary of applied notations.
Variable | Description
X ⊂ R^d | d-dimensional data set, d ∈ N
Ω = {ω_1, …, ω_c} | set of class labels, with c > 2, c ∈ N
I = {1, …, c} | index set
T_c | set of all permutations τ of the index set I
μ ∈ M_d | mapping for measuring the level of separability
μ_{i,j} ∈ R_{≥0} | level of separability between classes ω_i and ω_j
M(τ) = (μ_{τ(i),τ(j)})_{i,j=1}^{c} | symmetric pairwise separability matrix (PSM)
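The notation above can be made concrete with a small sketch. Here, a toy separability measure μ (simply the distance between class means, standing in for the paper's discriminant-ratio and SVM-accuracy measures) is used to build the PSM M(τ) for every permutation τ ∈ T_c, and a permutation is kept if, in every row of M(τ), separability never decreases when moving away from the diagonal. This acceptance rule is only illustrative; the precise, classifier-independent criterion is defined in the main text. For an FS-ordinal task, exactly two permutations survive: the class order and its reverse (cf. Figure 3).

```python
import itertools
import numpy as np

def separability(a, b):
    # Toy stand-in for mu_{i,j}: distance between the class means.
    return abs(np.mean(a) - np.mean(b))

def psm(classes, tau):
    """Pairwise separability matrix M(tau) = (mu_{tau(i),tau(j)})."""
    c = len(classes)
    M = np.zeros((c, c))
    for i in range(c):
        for j in range(c):
            M[i, j] = separability(classes[tau[i]], classes[tau[j]])
    return M

def is_ordinal(M):
    # Illustrative rule: in each row, separability is non-decreasing
    # when moving away from the diagonal, in both directions.
    c = M.shape[0]
    for i in range(c):
        left = M[i, :i + 1][::-1]   # mu_{i,i} back towards mu_{i,1}
        right = M[i, i:]            # mu_{i,i} up towards mu_{i,c}
        if np.any(np.diff(left) < 0) or np.any(np.diff(right) < 0):
            return False
    return True

def detect_orders(classes):
    """Return all permutations tau whose PSM passes the rule."""
    c = len(classes)
    return [tau for tau in itertools.permutations(range(c))
            if is_ordinal(psm(classes, tau))]

# Toy 1-D data set with three classes centred at 0, 1, and 2.
rng = np.random.default_rng(0)
classes = [rng.normal(m, 0.1, 50) for m in (0.0, 1.0, 2.0)]
orders = detect_orders(classes)
```

For this toy task, the search returns exactly the identity order and its reverse, i.e., the class structure ω_1 ≺ ω_2 ≺ ω_3 is recovered without training any classifier.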
Table 2. Example Data Set. MSB: Medical Systems Biology. NIP: Neural Information Processing.
Author | Middle Name | Institute | ORCID | Notation
Ludwig Lausser | No | MSB | No | x_1
Hans A. Kestler | Yes | MSB | Yes | x_2
Friedhelm Schwenker | No | NIP | Yes | x_3
Table 3. Data Set Properties (Traditionally Ordinal Data Sets). Cl: Number of Classes. Fea: Number of Features. Sam: Number of Samples. # ω i : Number of samples in class ω i .
Data Set | Cl | Fea | Sam | #ω_1 | #ω_2 | #ω_3 | #ω_4 | #ω_5 | #ω_6 | #ω_7 | #ω_8 | #ω_9
CMC | 3 | 9 | 1473 | 629 | 511 | 333
LEV-4 | 4 | 4 | 1000 | 93 | 280 | 403 | 224
SWD | 4 | 10 | 1000 | 32 | 352 | 399 | 217
Cars | 4 | 6 | 1728 | 1210 | 384 | 69 | 65
Nursery | 4 | 8 | 12,958 | 4320 | 328 | 4266 | 4044
ESL-5 | 5 | 4 | 488 | 52 | 100 | 116 | 135 | 85
LEV | 5 | 4 | 1000 | 93 | 280 | 403 | 197 | 27
BVDB | 5 | 194 | 8700 | 1740 | 1740 | 1740 | 1740 | 1740
ERA-7 | 7 | 4 | 1000 | 92 | 142 | 181 | 172 | 158 | 118 | 137
ESL | 9 | 4 | 488 | 2 | 12 | 38 | 100 | 116 | 135 | 62 | 19 | 4
ERA | 9 | 4 | 1000 | 92 | 142 | 181 | 172 | 158 | 118 | 88 | 31 | 18
Table 4. Ordinal Structure Detection (Traditionally Ordinal Data Sets). DR: Detection based on the discriminant ratio. SVM-Acc: Detection based on the linear SVM resubstitution accuracy. ✓: Ordinal class structure found. ×: No ordinal class structure found.
TypeCMCLEV-4SWDCarsNurseryESL-5LEVBVDBERA-7ESLERA
DR×××
SVM-Acc××××
Table 5. Data Set Properties (Traditionally Non-Ordinal Data Sets). Cl: Number of Classes. Fea: Number of Features. Sam: Number of Samples.
Data Set | Cl | Fea | Sam | Class Distribution
Iris | 3 | 4 | 150 | 50 per class
Seeds | 3 | 7 | 210 | 70 per class
Forests | 4 | 27 | 523 | 83—86—159—195
Vehicles | 4 | 18 | 846 | 199—212—217—218
Segment | 7 | 19 | 2310 | 330 per class
Mfeat | 10 | 649 | 2000 | 200 per class
Table 6. Ordinal Structure Detection (Traditionally Non-Ordinal Data Sets). DR: Detection based on the discriminant ratio. SVM-Acc: Detection based on the linear SVM resubstitution accuracy. ✓: Ordinal class structure found. ×: No ordinal class structure found.
TypeIrisSeedsForestsVehiclesSegmentMfeat
DR×××
SVM-Acc××××
Table 7. Running Time Comparison. Cl: Number of Classes. Fea: Number of Features. Sam: Number of Samples. DR: Detection based on the discriminant ratio. SVM-Acc: Detection based on the linear SVM resubstitution accuracy. Depicted are the mean and standard deviation (std) of the running time in ms, averaged over ten repetitions. For the SVM-Acc approach, for the BVDB data set, we removed the decimal digits from the std value, for the sake of readability.
Data Set | Cl | Fea | Sam | DR | SVM-Acc
Iris | 3 | 4 | 150 | 0.25 ± 0.14 | 19.97 ± 3.02
Seeds | 3 | 7 | 210 | 0.16 ± 0.01 | 17.36 ± 1.61
CMC | 3 | 9 | 1473 | 0.34 ± 0.11 | 1987.39 ± 2.25
Forests | 4 | 27 | 523 | 0.42 ± 0.15 | 2267.25 ± 4.47
Vehicles | 4 | 18 | 846 | 0.43 ± 0.04 | 9069.24 ± 13.29
LEV-4 | 4 | 4 | 1000 | 0.32 ± 0.07 | 70.16 ± 1.88
SWD | 4 | 10 | 1000 | 0.34 ± 0.02 | 110.61 ± 1.71
Cars | 4 | 6 | 1728 | 0.31 ± 0.02 | 92.20 ± 4.33
Nursery | 4 | 8 | 12,958 | 1.76 ± 0.14 | 2079.20 ± 16.89
ESL-5 | 5 | 4 | 488 | 0.29 ± 0.02 | 52.81 ± 1.91
LEV | 5 | 4 | 1000 | 0.40 ± 0.04 | 90.75 ± 2.12
BVDB | 5 | 194 | 8700 | 44.26 ± 4.91 | 469,839.44 ± 2608
ERA-7 | 7 | 4 | 1000 | 1.09 ± 0.09 | 539.96 ± 5.43
Segment | 7 | 19 | 2310 | 1.77 ± 0.12 | 13,851.62 ± 58.83
ESL | 9 | 4 | 488 | 34.01 ± 0.43 | 200.59 ± 1.94
ERA | 9 | 4 | 1000 | 77.50 ± 2.56 | 616.12 ± 1.61
Mfeat | 10 | 649 | 2000 | 392.36 ± 1.13 | 19,661.20 ± 27.87
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
