Article

On Complexity of Deterministic and Nondeterministic Decision Trees for Conventional Decision Tables from Closed Classes

Computer, Electrical and Mathematical Sciences & Engineering Division and Computational Bioscience Research Center, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(10), 1411; https://doi.org/10.3390/e25101411
Submission received: 8 August 2023 / Revised: 5 September 2023 / Accepted: 10 September 2023 / Published: 3 October 2023

Abstract
In this paper, we consider classes of conventional decision tables closed relative to the removal of attributes (columns) and changing decisions assigned to rows. For tables from an arbitrary closed class, we study the dependence of the minimum complexity of deterministic and nondeterministic decision trees on the complexity of the set of attributes attached to columns. We also study the dependence of the minimum complexity of deterministic decision trees on the minimum complexity of nondeterministic decision trees. Note that a nondeterministic decision tree can be interpreted as a set of true decision rules that covers all rows of the table.

1. Introduction

Decision tables (sometimes represented as datasets or finite information systems with a distinguished decision attribute) appear in data analysis [1,2,3,4,5,6] and in such areas as combinatorial optimization, computational geometry, and fault diagnosis, where they are used to represent and explore problems [7,8].
Decision trees [1,5,6,7,9] and decision rule systems [2,3,4,8,10,11,12] are widely used as classifiers, as a means for knowledge representation, and as algorithms for solving various problems of combinatorial optimization, fault diagnosis, etc. Decision trees and rules are among the most interpretable models in data analysis [13].
In this paper, we consider classes of conventional decision tables closed under the removal of columns and changing of decisions. The most natural examples of such classes are closed classes of decision tables generated by information systems (see Section 3.4 for an explanation). We study the dependence of the minimum complexity of deterministic and nondeterministic decision trees on the complexity of the set of attributes attached to columns of the decision table. We also study the dependence of the minimum complexity of deterministic decision trees on the minimum complexity of nondeterministic decision trees. Note that a nondeterministic decision tree can be considered as a representation of a system of true decision rules that covers all rows of the decision table. Note also that the depth of deterministic and nondeterministic decision trees for computing Boolean functions has been studied quite intensively [14,15,16,17].
This paper continues the study of closed classes of decision tables that began in [18] and continued in [19,20]. To the best of our knowledge, there are no other papers that study closed classes of decision tables.
Various classes of objects that are closed under different operations are intensively studied. Among them, in particular, are classes of Boolean functions closed under the operation of superposition [21] and minor-closed classes of graphs [22]. Decision tables represent an interesting and important mathematical object deserving of mathematical research, in particular, the study of closed classes of decision tables.
In [18], we studied the dependence of the minimum depth of deterministic decision trees and the depth of deterministic decision trees constructed by a greedy algorithm on the number of attributes (columns) for conventional decision tables from classes closed under operations of removal of columns and changing of decisions.
In [19], we considered classes of decision tables with many-valued decisions closed under operations of removal of columns, changing of decisions, permutation of columns, and duplication of columns. We studied relationships among three parameters of these tables: the complexity of a decision table (if we consider the depth of decision trees, then the complexity of a decision table is the number of columns in it), the minimum complexity of a deterministic decision tree, and the minimum complexity of a nondeterministic decision tree. We considered a rough classification of the functions characterizing these relationships and enumerated all seven possible types of relationships.
In [20], we considered classes of decision tables with 0–1 decisions (each row is labeled with the decision 0 or the decision 1) closed relative to the removal of attributes (columns) and changing decisions assigned to rows. For tables from an arbitrary closed class, we studied the dependence of the minimum complexity of deterministic decision trees on various parameters of the tables: the minimum complexity of a test, the complexity of the set of attributes attached to columns, and the minimum complexity of a strongly nondeterministic decision tree. We also studied the dependence of the minimum complexity of strongly nondeterministic decision trees on the complexity of the set of attributes attached to columns. Note that a strongly nondeterministic decision tree can be interpreted as a set of true decision rules that covers all rows labeled with the decision 1.
In the previous papers, we did not consider in detail conventional decision tables in which rows are labeled with arbitrary decisions. These tables differ significantly from tables with many-valued decisions and from tables with 0–1 decisions considered previously. We now describe the results obtained in the present paper. Let A be a class of conventional decision tables closed under the removal of columns and changing of decisions, and let ψ be a bounded complexity measure. In this paper, we study three functions: F ψ , A ( n ) , G ψ , A ( n ) and H ψ , A ( n ) .
The function F ψ , A ( n ) characterizes the growth in the worst case of the minimum complexity of a deterministic decision tree for a decision table from A with the growth of the complexity of the set of attributes attached to columns of the table. We prove that the function F ψ , A ( n ) is either bounded from above by a constant, grows as a logarithm of n, or grows almost linearly in n (it is bounded from above by n and is equal to n for infinitely many n). These results generalize the results obtained in [20] for closed classes of decision tables with 0–1 decisions.
The function G ψ , A ( n ) characterizes the growth in the worst case of the minimum complexity of a nondeterministic decision tree for a decision table from A with the growth of the complexity of the set of attributes attached to columns of the table. We prove that the function G ψ , A ( n ) is either bounded from above by a constant or grows almost linearly in n (it is bounded from above by n and is equal to n for infinitely many n).
The function H ψ , A ( n ) characterizes the growth in the worst case of the minimum complexity of a deterministic decision tree for a decision table from A with the growth of the minimum complexity of a nondeterministic decision tree for the table. This function is either not everywhere defined or everywhere defined. Let H ψ , A ( n ) be everywhere defined. We prove that this function is either bounded from above by a constant or is greater than or equal to n for infinitely many n.
The novelty of the work is as follows:
  • For the function F ψ , A , which characterizes the complexity of deterministic decision trees, we have obtained an exhaustive description of the types of its behavior.
  • For the function G ψ , A , which characterizes the complexity of nondeterministic decision trees, we have obtained an exhaustive description of the types of its behavior.
  • For the function H ψ , A , which characterizes relationships between the complexity of deterministic and nondeterministic decision trees, we have obtained a preliminary description of the types of its behavior that requires additional study.
The obtained results allow us to point out the cases when the complexity of deterministic and nondeterministic decision trees is essentially less than the complexity of the set of attributes attached to columns of the table. This may be useful in applications.
The paper consists of six sections. In Section 2, main definitions and notation are considered. In Section 3, we provide the main results. Section 4 contains auxiliary statements. In Section 5, we prove the main results. Section 6 contains short conclusions.

2. Main Definitions and Notation

Denote ω = { 0 , 1 , 2 , … } and, for any k ∈ ω \ { 0 , 1 } , denote E k = { 0 , 1 , … , k − 1 } . Let P = { f i : i ∈ ω } be the set of attributes (really, names of attributes). Two attributes f i , f j ∈ P are considered different if i ≠ j .

2.1. Decision Tables

First, we define the notion of a decision table.
Definition 1.
Let k ∈ ω \ { 0 , 1 } . Denote by M k the set of rectangular tables filled with numbers from E k in each of which rows are pairwise different, each row is labeled with a number from ω (decision), and columns are labeled with pairwise different attributes from P. Rows are interpreted as tuples of values of these attributes. Empty tables without rows also belong to the set M k ; we will use the same notation Λ for each of these tables. Tables from M k will be called decision tables.
Example 1.
Figure 1 shows a decision table from M 2 .
Denote by M k C the set of tables from M k in each of which all rows are labeled with the same decision. Let Λ ∈ M k C .
Let T be a nonempty table from M k . Denote by P ( T ) the set of attributes attached to columns of the table T. Let f i 1 , … , f i m ∈ P ( T ) and δ 1 , … , δ m ∈ E k . We denote by T ( f i 1 , δ 1 ) ⋯ ( f i m , δ m ) the table obtained from T by the removal of all rows that do not satisfy the following condition: in the columns labeled with attributes f i 1 , … , f i m , the row has numbers δ 1 , … , δ m , respectively.
We now define two operations on decision tables: the removal of columns and the changing of decisions. Let T ∈ M k .
Definition 2.
Removal of columns. Let D ⊆ P ( T ) . We remove from T all columns labeled with the attributes from the set D. In each group of rows that are equal on the remaining columns, we keep one with the minimum decision. Denote the obtained table by I ( D , T ) . In particular, I ( ∅ , T ) = T and I ( P ( T ) , T ) = Λ . It is obvious that I ( D , T ) ∈ M k .
Definition 3.
Changing of decisions. Let ν : E k P ( T ) → ω (by definition, E k 0 = ∅ ). For each row δ ¯ of the table T, we replace the decision attached to this row with ν ( δ ¯ ) . We denote the obtained table by J ( ν , T ) . It is obvious that J ( ν , T ) ∈ M k .
Definition 4.
Denote [ T ] = { J ( ν , I ( D , T ) ) : D ⊆ P ( T ) , ν : E k P ( T ) \ D → ω } . The set [ T ] is the closure of the table T under the operations of removal of columns and changing of decisions.
Example 2.
Figure 2 shows the table J ( ν , I ( D , T 0 ) ) , where T 0 is the table shown in Figure 1, D = { f 4 } and ν ( x 1 , x 2 ) = x 1 + x 2 .
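The two closure operations can be sketched as follows. This is our illustration, not code from the paper: a decision table is represented as a list of attribute names together with (row, decision) pairs with pairwise different rows, and the table and attribute names used below are hypothetical.

```python
# A minimal sketch of the two operations I(D, T) and J(nu, T) on a
# conventional decision table (illustrative representation, not from the paper).

def remove_columns(attrs, rows, removed):
    """I(D, T): drop the columns in `removed`; in each group of rows that are
    equal on the remaining columns, keep one with the minimum decision."""
    keep = [i for i, a in enumerate(attrs) if a not in removed]
    best = {}
    for row, dec in rows:
        key = tuple(row[i] for i in keep)
        if key not in best or dec < best[key]:
            best[key] = dec
    return [attrs[i] for i in keep], list(best.items())

def change_decisions(rows, nu):
    """J(nu, T): relabel each row with nu(row); the rows are unchanged."""
    return [(row, nu(row)) for row, dec in rows]

# Hypothetical table: three columns f1, f2, f4 over E_2 with decisions.
attrs = ["f1", "f2", "f4"]
rows = [((0, 0, 0), 5), ((0, 0, 1), 2), ((1, 1, 0), 3)]
attrs2, rows2 = remove_columns(attrs, rows, {"f4"})
rows3 = change_decisions(rows2, lambda row: row[0] + row[1])
```

After removing the column f4, the two rows that become equal collapse into one row carrying the minimum decision, as Definition 2 requires.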
Definition 5.
Let A ⊆ M k and A ≠ ∅ . Denote [ A ] = ⋃ T ∈ A [ T ] . The set [ A ] is the closure of the set A under the considered two operations. The class (the set) of decision tables A will be called a closed class if A = [ A ] .
Let A 1 and A 2 be closed classes of decision tables from M k . Then, A 1 ∪ A 2 is a closed class of decision tables from M k .

2.2. Deterministic and Nondeterministic Decision Trees

A finite tree with the root is a finite directed tree in which exactly one node, called the root, has no entering edges. The nodes without leaving edges are called terminal nodes.
Definition 6.
A k-decision tree is a finite tree with the root, which has at least two nodes and in which
  • The root and edges leaving the root are not labeled.
  • Each terminal node is labeled with a decision from the set ω.
  • Each node, which is neither the root nor a terminal node, is labeled with an attribute from the set P. Each edge leaving such a node is labeled with a number from the set E k .
Example 3.
Figure 3 and Figure 4 show 2-decision trees.
We denote by T k the set of all k-decision trees. Let Γ ∈ T k . We denote by P ( Γ ) the set of attributes attached to nodes of Γ that are neither the root nor terminal nodes. A complete path of Γ is a sequence τ = v 1 , d 1 , … , v m , d m , v m + 1 of nodes and edges of Γ in which v 1 is the root of Γ , v m + 1 is a terminal node of Γ and, for j = 1 , … , m , the edge d j leaves the node v j and enters the node v j + 1 . Let T ∈ M k . If P ( Γ ) ⊆ P ( T ) , then we correspond to the table T and the complete path τ a decision table T ( τ ) . If m = 1 , then T ( τ ) = T . If m > 1 and, for j = 2 , … , m , the node v j is labeled with the attribute f i j and the edge d j is labeled with the number δ j , then T ( τ ) = T ( f i 2 , δ 2 ) ⋯ ( f i m , δ m ) .
Definition 7.
Let T ∈ M k \ { Λ } . A deterministic decision tree for the table T is a k-decision tree Γ satisfying the following conditions:
  • Only one edge leaves the root of Γ.
  • For any node, which is neither the root nor a terminal node, edges leaving this node are labeled with pairwise different numbers.
  • P ( Γ ) ⊆ P ( T ) .
  • For any row of T, there exists a complete path τ of Γ such that the considered row belongs to the table T ( τ ) .
  • For any complete path τ of Γ, either T ( τ ) = Λ or all rows of T ( τ ) are labeled with the decision attached to the terminal node of τ.
Example 4.
The 2-decision tree shown in Figure 3 is a deterministic decision tree for the decision table shown in Figure 1.
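The way a deterministic decision tree processes a row can be sketched as follows. The nested-tuple encoding of trees, the function name, and the example tree are our illustrative assumptions, not notation from the paper: at each internal node, exactly one attribute of the row is queried and the edge labeled with its value is followed.

```python
# A minimal sketch of applying a deterministic k-decision tree to a row
# (illustrative encoding, not from the paper): a leaf is ('leaf', decision),
# an internal node is ('node', attr_name, {value: subtree}).

def evaluate(tree, row, attr_index):
    """Follow the unique path of `tree` determined by `row`;
    attr_index maps attribute names to column positions of the row."""
    kind = tree[0]
    while kind == "node":
        _, attr, children = tree
        tree = children[row[attr_index[attr]]]  # edge labeled with the value
        kind = tree[0]
    return tree[1]                              # decision at the terminal node

# A hypothetical 2-decision tree over attributes f1, f2.
tree = ("node", "f1", {0: ("leaf", 0),
                       1: ("node", "f2", {0: ("leaf", 1), 1: ("leaf", 2)})})
idx = {"f1": 0, "f2": 1}
```

In contrast, a nondeterministic decision tree (Definition 8 below) may offer several complete paths for the same row; determinism corresponds to the requirement that edges leaving a node carry pairwise different numbers.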
Definition 8.
Let T ∈ M k \ { Λ } . A nondeterministic decision tree for the table T is a k-decision tree Γ satisfying the following conditions:
  • P ( Γ ) ⊆ P ( T ) .
  • For any row of T, there exists a complete path τ of Γ such that the considered row belongs to the table T ( τ ) .
  • For any complete path τ of Γ, either T ( τ ) = Λ or all rows of T ( τ ) are labeled with the decision attached to the terminal node of τ.
Example 5.
The 2-decision tree shown in Figure 4 is a nondeterministic decision tree for the decision table shown in Figure 1.
Figure 4. A nondeterministic decision tree for the decision table shown in Figure 1.

2.3. Complexity Measures

Denote by B the set of all finite words over the alphabet P = { f i : i ∈ ω } , which contains the empty word λ and on which the word concatenation operation is defined.
Definition 9.
A complexity measure is an arbitrary function ψ : B → ω that has the following properties: for any words α 1 , α 2 ∈ B ,
  • ψ ( α 1 ) = 0 if and only if α 1 = λ—positivity property.
  • ψ ( α 1 ) = ψ ( α 1 ′ ) for any word α 1 ′ obtained from α 1 by permutation of letters—commutativity property.
  • ψ ( α 1 ) ≤ ψ ( α 1 α 2 ) —nondecreasing property.
  • ψ ( α 1 α 2 ) ≤ ψ ( α 1 ) + ψ ( α 2 ) —boundedness from above property.
The following functions are complexity measures:
  • Function h for which, for any word α ∈ B , h ( α ) = | α | , where | α | is the length of the word α .
  • An arbitrary function φ : B → ω such that φ ( λ ) = 0 , for any f i ∈ P , φ ( f i ) > 0 and, for any nonempty word f i 1 ⋯ f i m ∈ B ,
    φ ( f i 1 ⋯ f i m ) = ∑ j = 1 m φ ( f i j ) . (1)
  • An arbitrary function ρ : B → ω such that ρ ( λ ) = 0 , for any f i ∈ P , ρ ( f i ) > 0 , and, for any nonempty word f i 1 ⋯ f i m ∈ B , ρ ( f i 1 ⋯ f i m ) = max { ρ ( f i j ) : j = 1 , … , m } .
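The three kinds of complexity measures above can be sketched as follows. This is our illustration with assumed per-attribute weights (the weights and the encoding of a word as a list of attribute indices are hypothetical, not from the paper); the weight of f i plays the role of ψ ( f i ).

```python
# Sketches of the three example complexity measures (illustrative only):
# a word over P is encoded as a list of attribute indices i (for f_i),
# and weight[i] > 0 stands for the value of the measure on the one-letter word f_i.

def h(word):
    """The depth measure: h(alpha) = |alpha|, the length of the word."""
    return len(word)

def additive(weight):
    """phi(f_i1 ... f_im) = sum of phi(f_ij); phi(lambda) = 0."""
    return lambda word: sum(weight[i] for i in word)

def max_based(weight):
    """rho(f_i1 ... f_im) = max of rho(f_ij); rho(lambda) = 0."""
    return lambda word: max((weight[i] for i in word), default=0)
```

All three are commutative and nondecreasing; h and every additive measure are bounded complexity measures in the sense of Definition 10, while a max-based measure generally is not.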
Definition 10.
A bounded complexity measure is a complexity measure ψ, which has the boundedness from below property: for any word α ∈ B , ψ ( α ) ≥ | α | .
Any complexity measure satisfying the equality (1), in particular the function h, is a bounded complexity measure. One can show that if functions ψ 1 and ψ 2 are complexity measures, then the functions ψ 3 and ψ 4 are complexity measures, where, for any α ∈ B , ψ 3 ( α ) = ψ 1 ( α ) + ψ 2 ( α ) and ψ 4 ( α ) = max ( ψ 1 ( α ) , ψ 2 ( α ) ) . If the function ψ 1 is a bounded complexity measure, then the functions ψ 3 and ψ 4 are bounded complexity measures.
Definition 11.
Let ψ be a complexity measure. We extend it to the set of all finite subsets of the set P. Let D be a finite subset of the set P. If D = ∅ , then ψ ( D ) = 0 . Let D = { f i 1 , … , f i m } and m ≥ 1 . Then, ψ ( D ) = ψ ( f i 1 ⋯ f i m ) .

2.4. Parameters of Decision Trees and Tables

Let Γ ∈ T k and τ = v 1 , d 1 , … , v m , d m , v m + 1 be a complete path of Γ . We correspond to the path τ a word F ( τ ) ∈ B : if m = 1 , then F ( τ ) = λ , and if m > 1 and, for j = 2 , … , m , the node v j is labeled with the attribute f i j , then F ( τ ) = f i 2 ⋯ f i m .
Definition 12.
Let ψ be a complexity measure. We extend the function ψ to the set T k . Let Γ ∈ T k . Then, ψ ( Γ ) = max { ψ ( F ( τ ) ) } , where the maximum is taken over all complete paths τ of the decision tree Γ. For a given complexity measure ψ, the value ψ ( Γ ) will be called the complexity of the decision tree Γ. The value h ( Γ ) will be called the depth of the decision tree Γ.
Let ψ be a complexity measure. We now describe the functions ψ d , ψ a , S e p , W ψ , S ψ , S ^ ψ , M ψ , and N defined on the set M k and taking values from the set ω . By definition, the value of each of these functions for Λ is equal to 0. Let T ∈ M k \ { Λ } .
  • ψ d ( T ) = min { ψ ( Γ ) } , where the minimum is taken over all deterministic decision trees Γ for the table T.
  • ψ a ( T ) = min { ψ ( Γ ) } , where the minimum is taken over all nondeterministic decision trees Γ for the table T.
  • A set D ⊆ P ( T ) is called a separating set for the table T if, in the set of columns labeled with attributes from D, the rows of the table T are pairwise different. Then, S e p ( T ) is the minimum cardinality of a separating set for the table T.
  • W ψ ( T ) = ψ ( P ( T ) ) . This is the complexity of the set of attributes attached to columns of the table T.
  • Let δ ¯ be a row of the table T. Denote S ψ ( T , δ ¯ ) = min { ψ ( D ) } , where the minimum is taken over all subsets D of the set P ( T ) such that in the set of columns of T labeled with attributes from D, the row δ ¯ is different from all other rows of the table T. Then, S ψ ( T ) = max { S ψ ( T , δ ¯ ) } , where the maximum is taken over all rows δ ¯ of the table T.
  • S ^ ψ ( T ) = max { S ψ ( T * ) : T * ∈ [ T ] } .
  • If T ∈ M k C , then M ψ ( T ) = 0 . Let T ∉ M k C , | P ( T ) | = n , and columns of the table T be labeled with the attributes f t 1 , … , f t n . Let δ ¯ = ( δ 1 , … , δ n ) ∈ E k n . Denote by M ψ ( T , δ ¯ ) the minimum number p ∈ ω for which there exist attributes f t i 1 , … , f t i m ∈ P ( T ) such that T ( f t i 1 , δ i 1 ) ⋯ ( f t i m , δ i m ) ∈ M k C and ψ ( f t i 1 ⋯ f t i m ) = p . Then, M ψ ( T ) = max { M ψ ( T , δ ¯ ) : δ ¯ ∈ E k n } .
  • N ( T ) is the number of rows in the table T.
For the complexity measure h, we denote W ( T ) = W h ( T ) , S ( T ) = S h ( T ) , S ^ ( T ) = S ^ h ( T ) , and M ( T ) = M h ( T ) . Note that W ( T ) is the number of columns in the table T.
Example 6.
We denote by T 0 the decision table shown in Figure 1. One can show that h d ( T 0 ) = 2 , h a ( T 0 ) = 2 , S e p ( T 0 ) = 3 , W ( T 0 ) = 3 , N ( T 0 ) = 7 , S ( T 0 ) = 3 , S ^ ( T 0 ) = 3 , and M ( T 0 ) = 2 .
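Some of these parameters can be computed by brute force on small tables. The sketch below is our illustration (the table in Figure 1 is not reproduced here, so a hypothetical table over E 2 is used): it computes N ( T ), W ( T ), and S e p ( T ), the minimum cardinality of a separating set, by trying column subsets of increasing size.

```python
# A brute-force sketch (illustrative, not from the paper) of three table
# parameters: N(T), W(T), and Sep(T) for a small table over E_k.
from itertools import combinations

def table_params(rows):
    """rows: list of pairwise different tuples; decisions are irrelevant
    for these three parameters."""
    n_rows, n_cols = len(rows), len(rows[0])
    for size in range(n_cols + 1):
        for cols in combinations(range(n_cols), size):
            projected = {tuple(r[c] for c in cols) for r in rows}
            if len(projected) == n_rows:      # rows pairwise different on cols
                return n_rows, n_cols, size   # N(T), W(T), Sep(T)
    return n_rows, n_cols, n_cols

# Hypothetical table with four rows and three columns.
rows = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

For this hypothetical table no two columns separate all four rows, so S e p ( T ) = 3 = W ( T ); by Lemma 6 below, S e p ( T ) never exceeds N ( T ) − 1.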

3. Main Results

In this section, we consider results obtained for the functions F ψ , A , G ψ , A , and H ψ , A and discuss closed classes of decision tables generated by information systems.

3.1. Function F ψ , A

Let ψ be a bounded complexity measure and A be a nonempty closed class of decision tables from M k . We now define a function F ψ , A : ω → ω . Let n ∈ ω . Then
F ψ , A ( n ) = max { ψ d ( T ) : T ∈ A , W ψ ( T ) ≤ n } .
The function F ψ , A characterizes the growth in the worst case of the minimum complexity of a deterministic decision tree for a decision table from A with the growth of the complexity of the set of attributes attached to columns of this table.
Let D = { n i : i ∈ ω } be an infinite subset of the set ω in which, for any i ∈ ω , n i < n i + 1 . Let us define a function H D : ω → ω . Let n ∈ ω . If n < n 0 , then H D ( n ) = 0 . If, for some i ∈ ω , n i ≤ n < n i + 1 , then H D ( n ) = n i .
Theorem 1.
Let ψ be a bounded complexity measure and A be a nonempty closed class of decision tables from M k . Then, F ψ , A is an everywhere defined nondecreasing function such that F ψ , A ( n ) ≤ n for any n ∈ ω and F ψ , A ( 0 ) = 0 . For this function, one of the following statements holds:
(a) 
If the functions S ψ and N are bounded from above on the class A, then there exists a positive constant c 0 such that F ψ , A ( n ) ≤ c 0 for any n ∈ ω .
(b) 
If the function S ψ is bounded from above on the class A and the function N is not bounded from above on the class A, then there exist positive constants c 1 , c 2 , c 3 , c 4 such that c 1 log 2 n − c 2 ≤ F ψ , A ( n ) ≤ c 3 log 2 n + c 4 for any n ∈ ω \ { 0 } .
(c) 
If the function S ψ is not bounded from above on the class A, then there exists an infinite subset D of the set ω such that H D ( n ) ≤ F ψ , A ( n ) for any n ∈ ω .
Thus, for the function F ψ , A , we have obtained an exhaustive description of the types of its behavior. Type (a) is degenerate: the number of rows in decision tables from the closed class is bounded from above by a constant. Type (b) is of most interest to us: the complexity of deterministic decision trees behaves in the worst case as the logarithm of the complexity of the set of attributes in the table. Type (c) is not of particular interest: the complexity of deterministic decision trees in the worst case is the same as the complexity of the set of attributes in the table.

3.2. Function G ψ , A

Let ψ be a bounded complexity measure and A be a nonempty closed class of decision tables from M k . We now define a function G ψ , A : ω → ω . Let n ∈ ω . Then
G ψ , A ( n ) = max { ψ a ( T ) : T ∈ A , W ψ ( T ) ≤ n } .
The function G ψ , A characterizes the growth in the worst case of the minimum complexity of a nondeterministic decision tree for a decision table from A with the growth of the complexity of the set of attributes attached to columns of this table.
Theorem 2.
Let ψ be a bounded complexity measure and A be a nonempty closed class of decision tables from M k . Then, G ψ , A is an everywhere defined nondecreasing function such that G ψ , A ( n ) ≤ n for any n ∈ ω and G ψ , A ( 0 ) = 0 . For this function, one of the following statements holds:
(a) 
If the function S ψ is bounded from above on the class A, then there exists a positive constant c such that G ψ , A ( n ) ≤ c for any n ∈ ω .
(b) 
If the function S ψ is not bounded from above on the class A, then there exists an infinite subset D of the set ω such that H D ( n ) ≤ G ψ , A ( n ) for any n ∈ ω .
Thus, for the function G ψ , A , we have obtained an exhaustive description of the types of its behavior. Type (a) is of most interest to us: the complexity of nondeterministic decision trees is bounded from above by a constant. Type (b) is not of particular interest: the complexity of nondeterministic decision trees in the worst case is the same as the complexity of the set of attributes in the table.

3.3. Function H ψ , A

Let ψ be a bounded complexity measure and A be a nonempty closed class of decision tables from M k . We now define a possibly partial function H ψ , A : ω → ω . Let n ∈ ω . If the set { ψ d ( T ) : T ∈ A , ψ a ( T ) ≤ n } is infinite, then the value H ψ , A ( n ) is undefined. Otherwise, H ψ , A ( n ) = max { ψ d ( T ) : T ∈ A , ψ a ( T ) ≤ n } .
The function H ψ , A characterizes the growth in the worst case of the minimum complexity of a deterministic decision tree for a decision table from A with the growth of the minimum complexity of a nondeterministic decision tree for this table.
Theorem 3.
Let ψ be a bounded complexity measure and A be a nonempty closed class of decision tables from M k . Then, H ψ , A ( 0 ) = 0 and H ψ , A is a nondecreasing function in its domain.
If H ψ , A is not an everywhere defined function, then its domain coincides with the set { n : n ∈ ω , n < n 0 } for some n 0 ∈ ω .
If the function H ψ , A is everywhere defined, then one of the following statements holds:
(a) 
If the function ψ d is bounded from above on the class A, then there is a nonnegative constant c such that H ψ , A ( n ) ≤ c for any n ∈ ω .
(b) 
If the function ψ d is not bounded from above on the class A, then there exists an infinite subset D of the set ω such that H ψ , A ( n ) ≥ H D ( n ) for any n ∈ ω .
Remark 1.
From Theorem 1, it follows that the function ψ d is bounded from above on class A if and only if the functions S ψ and N are bounded from above on class A.
For the function H ψ , A , we have obtained a preliminary description of the types of its behavior. Type (a) is degenerate: the number of rows in decision tables from the closed class is bounded from above by a constant. Type (b) is of most interest to us. However, more research is needed to understand how the function can behave within this type.

3.4. Family of Closed Classes of Decision Tables

Let U be a set and Φ = { f 0 , f 1 , … } be a finite or countable set of functions (attributes) defined on U and taking values from E k . The pair ( U , Φ ) is called a k-information system. A problem over ( U , Φ ) is an arbitrary tuple z = ( U , ν , f i 1 , … , f i n ) , where n ∈ ω \ { 0 } , ν : E k n → ω , and f i 1 , … , f i n are functions from Φ with pairwise different indices i 1 , … , i n . The problem z is to determine the value ν ( f i 1 ( u ) , … , f i n ( u ) ) for a given u ∈ U . Various examples of k-information systems and problems over these systems can be found in [7].
We denote by T ( z ) a decision table from M k with n columns labeled with the attributes f i 1 , … , f i n . A row ( δ 1 , … , δ n ) ∈ E k n belongs to the table T ( z ) if and only if the system of equations { f i 1 ( x ) = δ 1 , … , f i n ( x ) = δ n } has a solution from the set U. This row is labeled with the decision ν ( δ 1 , … , δ n ) .
Let the algorithms for solving the problem z be algorithms in which each elementary operation consists of calculating the value of some attribute from the set { f i 1 , … , f i n } on a given element u ∈ U . Then, as a model of the problem z, we can use the decision table T ( z ) , and as models of algorithms for solving the problem z, we can use deterministic and nondeterministic decision trees for the table T ( z ) .
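For a finite universe, the construction of T ( z ) can be sketched directly: enumerate the elements of U, collect the realizable attribute vectors as rows, and label each row with ν. The universe, attributes, and ν below are hypothetical; this is our illustration of the definition, not code from the paper.

```python
# A sketch (illustrative, not from the paper) of building the decision table
# T(z) for a problem z = (U, nu, f_i1, ..., f_in) over a finite information
# system: a row belongs to T(z) iff some u in U realizes it.

def build_table(universe, attrs, nu):
    """Return {row tuple: decision}: the pairwise different realizable
    attribute vectors, each labeled with nu(row)."""
    rows = {}
    for u in universe:
        row = tuple(f(u) for f in attrs)
        rows.setdefault(row, nu(*row))   # keep each realizable row once
    return rows

# Hypothetical 2-information system: U = {0, ..., 4}, two threshold attributes.
universe = range(5)
attrs = [lambda a: 1 if a >= 1 else 0,
         lambda a: 1 if a >= 3 else 0]
table = build_table(universe, attrs, lambda x1, x2: x1 + x2)
```

Note that the number of rows can be much smaller than k to the power n: only the attribute vectors realized by some element of U appear in T ( z ).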
Denote by Z ( U , Φ ) the set of problems over ( U , Φ ) and A ( U , Φ ) = { T ( z ) : z ∈ Z ( U , Φ ) } . One can show that A ( U , Φ ) = [ A ( U , Φ ) ] ; i.e., A ( U , Φ ) is a closed class of decision tables from M k  generated by the information system ( U , Φ ) .
Closed classes of decision tables generated by k-information systems are the most natural examples of closed classes. However, the notion of a closed class is essentially wider. In particular, the union A ( U 1 , Φ 1 ) ∪ A ( U 2 , Φ 2 ) , where ( U 1 , Φ 1 ) and ( U 2 , Φ 2 ) are k-information systems, is a closed class, but generally, we cannot find an information system ( U , Φ ) such that A ( U , Φ ) = A ( U 1 , Φ 1 ) ∪ A ( U 2 , Φ 2 ) .

3.5. Example of Information System

Let R be the set of real numbers and F = { f i : i ∈ ω } be the set of functions defined on R and taking values from the set E 2 such that, for any i ∈ ω and a ∈ R ,
f i ( a ) = 0 if a < i , and f i ( a ) = 1 if a ≥ i .
Let ψ be a bounded complexity measure and A = A ( R , F ) . One can prove the following statements:
  • The function N is not bounded from above on the set A.
  • The function S ψ is bounded from above on the set A if and only if there exists a constant c 0 > 0 such that ψ ( f i ) ≤ c 0 for any i ∈ ω .
  • The function ψ d is not bounded from above on the set A.
  • The function H ψ , A is everywhere defined if and only if, for any n ∈ ω , the set { f i : i ∈ ω , ψ ( f i ) ≤ n } is finite.
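The first of these statements can be illustrated concretely. In the sketch below (our illustration, not from the paper), the threshold attributes f 1 , … , f n cut the real line into n + 1 intervals, so the table with these n columns has n + 1 pairwise different rows of the form 1 … 1 0 … 0, which is why the function N is not bounded from above on A.

```python
# A sketch (illustrative, not from the paper) of the threshold attributes
# f_i(a) = 1 if a >= i, else 0, and of the rows of the table whose columns
# are f_1, ..., f_n over the universe of real numbers.

def f(i):
    """The attribute f_i from the example information system."""
    return lambda a: 1 if a >= i else 0

def rows_of(n):
    """Realizable attribute vectors for columns f_1, ..., f_n: one point is
    sampled from each of the n + 1 intervals cut out by the thresholds."""
    attrs = [f(i) for i in range(1, n + 1)]
    points = [i + 0.5 for i in range(n + 1)]
    return {tuple(g(a) for g in attrs) for a in points}
```

Each realizable row consists of a (possibly empty) block of 1s followed by a block of 0s, and there are exactly n + 1 such rows, one per interval.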

4. Auxiliary Statements

This section contains auxiliary statements.
It is not difficult to prove the following upper bound on the minimum complexity of deterministic decision trees for a table.
Lemma 1.
For any complexity measure ψ and any table T from M k ,
ψ d ( T ) ≤ W ψ ( T ) .
The notions of a decision table and a deterministic decision tree used in this paper are somewhat different from the corresponding notions used in [23]. Taking into account these differences, it is easy to prove the following statement, which follows almost directly from Lemma 1.3 and Theorem 2.2 from [23].
Lemma 2.
For any complexity measure ψ and any table T from M k ,
ψ d ( T ) ≤ 0 if M ψ ( T ) = 0 , and ψ d ( T ) ≤ M ψ ( T ) log 2 N ( T ) if M ψ ( T ) ≥ 1 .
The following two statements are simple generalizations of similar results obtained in [20] for decision tables with 0–1 decisions. For the sake of completeness, we present their proofs.
Lemma 3.
For any complexity measure ψ and any table T from M k ,
M ψ ( T ) ≤ 2 S ^ ψ ( T ) .
Proof. 
Let T ∈ M k C . Then, M ψ ( T ) = 0 . Therefore, M ψ ( T ) ≤ 2 S ^ ψ ( T ) .
Let T ∉ M k C , W ( T ) = n , and f t 1 , … , f t n be the attributes attached to columns of the table T. Denote D = { f i : f i ∈ P ( T ) , ψ ( f i ) ≤ S ψ ( T ) } and T * = I ( P ( T ) \ D , T ) . Evidently, T * ∈ [ T ] . Taking into account that the function ψ has the nondecreasing property, we obtain that any two rows of T are different in columns labeled with attributes from the set D. Let, for definiteness, D = { f t 1 , … , f t m } .
Let δ ¯ = ( δ 1 , … , δ n ) ∈ E k n . Let ( δ 1 , … , δ m ) be a row of T * . Since T * ∈ [ T ] , there exist attributes f t j 1 , … , f t j s of the table T * such that the row ( δ 1 , … , δ m ) is different from all other rows of T * in columns labeled with these attributes and ψ ( f t j 1 ⋯ f t j s ) ≤ S ^ ψ ( T ) . It is clear that T ( f t j 1 , δ j 1 ) ⋯ ( f t j s , δ j s ) ∈ M k C . Therefore, M ψ ( T , δ ¯ ) ≤ S ^ ψ ( T ) .
Let ( δ 1 , … , δ m ) be not a row of T * . We consider m tables T * ( f t 1 , δ 1 ) , T * ( f t 1 , δ 1 ) ( f t 2 , δ 2 ) , ⋯ , T * ( f t 1 , δ 1 ) ⋯ ( f t m , δ m ) . If T * ( f t 1 , δ 1 ) = Λ , then T ( f t 1 , δ 1 ) = Λ . Since f t 1 ∈ D , ψ ( f t 1 ) ≤ S ψ ( T ) . Taking into account that S ψ ( T ) ≤ S ^ ψ ( T ) , we obtain M ψ ( T , δ ¯ ) ≤ S ^ ψ ( T ) . Let T * ( f t 1 , δ 1 ) ≠ Λ . Then, there exists p ∈ { 1 , … , m − 1 } such that T * ( f t 1 , δ 1 ) ⋯ ( f t p , δ p ) ≠ Λ and T * ( f t 1 , δ 1 ) ⋯ ( f t p + 1 , δ p + 1 ) = Λ . Denote C = { f t p + 1 , f t p + 2 , … , f t n } and T 0 = I ( C , T ) . Evidently, T 0 ∈ [ T ] . Therefore, in T 0 , there are attributes f t i 1 , … , f t i l such that the row ( δ 1 , … , δ p ) is different from all other rows of the table T 0 in columns labeled with these attributes and ψ ( f t i 1 ⋯ f t i l ) ≤ S ^ ψ ( T ) . One can show that
T * ( f t i 1 , δ i 1 ) ⋯ ( f t i l , δ i l ) = T * ( f t 1 , δ 1 ) ⋯ ( f t p , δ p ) .
Therefore, T * ( f t i 1 , δ i 1 ) ⋯ ( f t i l , δ i l ) ( f t p + 1 , δ p + 1 ) = Λ . Hence,
T ( f t i 1 , δ i 1 ) ⋯ ( f t i l , δ i l ) ( f t p + 1 , δ p + 1 ) = Λ .
Since f t p + 1 ∈ D , ψ ( f t p + 1 ) ≤ S ψ ( T ) ≤ S ^ ψ ( T ) . Using the boundedness from above property of the function ψ , we obtain ψ ( f t i 1 ⋯ f t i l f t p + 1 ) ≤ 2 S ^ ψ ( T ) . Therefore, M ψ ( T , δ ¯ ) ≤ 2 S ^ ψ ( T ) .
Thus, for any δ ¯ ∈ E k n , M ψ ( T , δ ¯ ) ≤ 2 S ^ ψ ( T ) . As a result, we obtain M ψ ( T ) ≤ 2 S ^ ψ ( T ) .    □
Lemma 4.
For any table T from M k \ { Λ } ,
N ( T ) ≤ ( k W ( T ) ) S ( T ) .
Proof. 
If N ( T ) = 1 , then S ( T ) = 0 and the considered inequality holds. Let N ( T ) > 1 . Then, S ( T ) > 0 . Denote m = S ( T ) . Evidently, for any row δ ¯ of the table T, there exist attributes f i 1 , … , f i m ∈ P ( T ) and numbers σ 1 , … , σ m ∈ E k such that the table T ( f i 1 , σ 1 ) ⋯ ( f i m , σ m ) contains only the row δ ¯ . Therefore, there is a one-to-one mapping of rows of the table T onto some set G of pairs of tuples of the kind ( ( f i 1 , … , f i m ) , ( σ 1 , … , σ m ) ) , where f i 1 , … , f i m ∈ P ( T ) and σ 1 , … , σ m ∈ E k . Evidently, | G | ≤ W ( T ) m k m . Therefore, N ( T ) ≤ ( k W ( T ) ) S ( T ) .    □
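The bound of Lemma 4 can be checked numerically on a small table. The sketch below is our illustration, not from the paper: for ψ = h it brute-forces S ( T ), the maximum over rows of the minimum size of a column set on which the row differs from every other row, and compares N ( T ) with ( k W ( T ) ) to the power S ( T ).

```python
# A numeric sanity check (illustrative, not a proof) of the Lemma 4 bound
# N(T) <= (k * W(T))^{S(T)} for the depth measure h, on a hypothetical table.
from itertools import combinations

def S(rows):
    """S_h(T): max over rows of the minimum number of columns on which
    the row differs from all other rows of the table."""
    n = len(rows[0])
    def s_row(r):
        others = [q for q in rows if q != r]
        for size in range(n + 1):
            for cols in combinations(range(n), size):
                if all(any(q[c] != r[c] for c in cols) for q in others):
                    return size
        return n
    return max(s_row(r) for r in rows)

# Hypothetical table over E_2: all four rows on two columns.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
k, width = 2, 2
```

For this table every row needs both columns to be singled out, so S ( T ) = 2 and the bound gives N ( T ) = 4 ≤ ( 2 · 2 ) to the power 2 = 16.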
Lemma 5.
For any table $T$ from $M_k\setminus\{\Lambda\}$, there exists a mapping $\nu: E_k^{W(T)}\to\omega$ such that
$$h^d(J(\nu,T))\ge \log_k N(T).$$
Proof. 
Let $T\in M_k\setminus\{\Lambda\}$ and let $\nu: E_k^{W(T)}\to\omega$ be a mapping for which $\nu(\bar{\delta})\ne\nu(\bar{\sigma})$ for any $\bar{\delta},\bar{\sigma}\in E_k^{W(T)}$ such that $\bar{\delta}\ne\bar{\sigma}$. Denote $T^*=J(\nu,T)$. Let $\Gamma$ be a deterministic decision tree for the table $T^*$ such that $h(\Gamma)=h^d(T^*)$. Denote by $L_t(\Gamma)$ the number of terminal nodes of $\Gamma$. Evidently, $N(T)\le L_t(\Gamma)$. One can show that $L_t(\Gamma)\le k^{h(\Gamma)}$. Therefore, $N(T)\le k^{h(\Gamma)}$. Since $T\ne\Lambda$, $N(T)>0$. Hence, $h(\Gamma)\ge\log_k N(T)$. Taking into account that $h(\Gamma)=h^d(T^*)$, we obtain $h^d(T^*)\ge\log_k N(T)$.    □
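For the depth measure, the bound can be illustrated by exhaustively computing the minimum depth of a deterministic decision tree that separates all rows of a small table, which is what $h^d(J(\nu,T))$ requires when $\nu$ assigns pairwise distinct decisions. A minimal sketch; `min_depth` is a hypothetical helper, not from the paper, and it assumes the rows are pairwise distinct.

```python
def min_depth(rows):
    """Minimum depth of a deterministic decision tree separating all rows of a
    table with pairwise distinct rows (each row carries its own decision)."""
    if len(rows) <= 1:
        return 0
    w = len(rows[0])
    best = None
    for c in range(w):
        # Branch on the value of column c.
        groups = {}
        for r in rows:
            groups.setdefault(r[c], []).append(r)
        if len(groups) < 2:
            continue  # querying column c gives no information here
        depth = 1 + max(min_depth(g) for g in groups.values())
        best = depth if best is None else min(best, depth)
    return best

rows = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
# N(T) = 4 and k = 2, so any such tree needs depth >= log_2 4 = 2.
assert min_depth(rows) == 2
```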
It is not difficult to prove the following upper bound on the minimum cardinality of a separating set for a table by induction on the number of rows in the table.
Lemma 6.
For any table $T$ from $M_k\setminus\{\Lambda\}$,
$$\mathrm{Sep}(T)\le N(T)-1.$$
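The induction behind Lemma 6 can be read as a greedy procedure: while two rows still agree on all chosen columns, add a column on which they differ. Each added column strictly increases the number of distinct projections, which starts at 1 and is capped by $N(T)$, so at most $N(T)-1$ columns are chosen. A hedged sketch of this reading (`greedy_separating_set` is an illustrative name; rows are assumed pairwise distinct):

```python
def greedy_separating_set(rows):
    """Build a separating set of columns of size at most N(T) - 1: while two
    rows agree on all chosen columns, add a column on which they differ."""
    chosen = []
    while True:
        seen, clash = {}, None
        for r in rows:
            key = tuple(r[c] for c in chosen)  # projection onto chosen columns
            if key in seen:
                clash = (seen[key], r)         # two rows not yet separated
                break
            seen[key] = r
        if clash is None:
            return chosen                      # all projections are distinct
        a, b = clash
        chosen.append(next(c for c in range(len(a)) if a[c] != b[c]))

rows = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
sep = greedy_separating_set(rows)
assert len(sep) <= len(rows) - 1
```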
Lemma 7.
For any complexity measure $\psi$ and any table $T$ from $M_k$,
$$\psi^a(T)\le \psi^d(T).$$
Proof. 
Let $T\in M_k$. If $T=\Lambda$, then $\psi^a(T)=\psi^d(T)=0$. Let $T\in M_k\setminus\{\Lambda\}$. It is clear that each deterministic decision tree for the table $T$ is also a nondeterministic decision tree for the table $T$. Therefore, $\psi^a(T)\le\psi^d(T)$.    □
Lemma 8.
For any complexity measure $\psi$ and any table $T$ from $M_k$ that contains at least two rows, there exists a table $T^*\in[T]$ such that
$$\psi^a(T^*)=\psi^d(T^*)=W_\psi(T^*)=S_\psi(T^*)=S_\psi(T).$$
Proof. 
Let $\bar{\delta}$ be a row of the table $T$ such that $S_\psi(T,\bar{\delta})=S_\psi(T)$. Let $D$ be a subset of the set $P(T)$ with the minimum cardinality such that $\psi(D)=S_\psi(T,\bar{\delta})$ and, in the set of columns labeled with attributes from $D$, the row $\bar{\delta}$ is different from all other rows of the table $T$. Let $\bar{\sigma}$ be the tuple obtained from the row $\bar{\delta}$ by the removal of all numbers at the intersections with columns labeled with attributes from the set $P(T)\setminus D$. Let $\nu: E_k^{|D|}\to E_2$ be the mapping such that, for any $\bar{\gamma}\in E_k^{|D|}$, $\nu(\bar{\gamma})=1$ if $\bar{\gamma}=\bar{\sigma}$ and $\nu(\bar{\gamma})=0$ if $\bar{\gamma}\ne\bar{\sigma}$. Denote $T^*=J(\nu,I(P(T)\setminus D,T))$.
From the fact that $D$ has the minimum cardinality and from the properties of the function $\psi$, it follows that, for any attribute from the set $D$, there exists a row of $T$ that, among the attributes from $D$, differs from the row $\bar{\delta}$ only in the column labeled with the considered attribute. Therefore, for any attribute of the table $T^*$, there exists a row of $T^*$ that differs from the row $\bar{\sigma}$ only in the column labeled with this attribute. Thus,
$$S_\psi(T^*,\bar{\sigma})=W_\psi(T^*). \qquad (2)$$
Using properties of the function $\psi$, we obtain $S_\psi(T^*)\le W_\psi(T^*)$. From this inequality and from (2), it follows that
$$S_\psi(T^*)=W_\psi(T^*). \qquad (3)$$
Let $\Gamma$ be a nondeterministic decision tree for the table $T^*$ such that $\psi(\Gamma)=\psi^a(T^*)$, let $\tau$ be a complete path of $\Gamma$ such that the row $\bar{\sigma}$ belongs to the table $T(\tau)$, and let $F(\tau)=f_{i_1}\cdots f_{i_t}$. It is clear that $\bar{\sigma}$ is the only row of the table $T(\tau)$. Therefore, in the columns labeled with the attributes $f_{i_1},\ldots,f_{i_t}$, the row $\bar{\sigma}$ is different from all other rows of the table $T^*$. Thus, $\psi(F(\tau))\ge S_\psi(T^*,\bar{\sigma})=W_\psi(T^*)$. Therefore, $\psi(\Gamma)\ge W_\psi(T^*)$ and $\psi^a(T^*)\ge W_\psi(T^*)$. Using Lemmas 1 and 7, we obtain $\psi^a(T^*)\le\psi^d(T^*)\le W_\psi(T^*)$. Therefore,
$$\psi^a(T^*)=\psi^d(T^*)=W_\psi(T^*). \qquad (4)$$
By the choice of the set $D$, $W_\psi(T^*)=S_\psi(T)$. From this equality and from (3) and (4), it follows that $\psi^a(T^*)=\psi^d(T^*)=W_\psi(T^*)=S_\psi(T^*)=S_\psi(T)$.    □
It is not difficult to prove the following upper bounds on the minimum complexity of nondeterministic decision trees for a table.
Lemma 9.
For any complexity measure $\psi$ and any table $T$ from $M_k$,
$$\psi^a(T)\le S_\psi(T)\le W_\psi(T).$$
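As the abstract notes, a nondeterministic decision tree can be read as a system of true decision rules covering all rows of the table. For the depth measure, the bound $\psi^a(T)\le S_\psi(T)$ corresponds to taking, for each row, a shortest rule that isolates it; the longest such rule has length $S(T)$. A small illustrative sketch (`row_rule` is a hypothetical helper, not from the paper):

```python
from itertools import combinations

def row_rule(rows, r):
    """Shortest rule (a set of column = value conditions taken from r) that is
    satisfied by r and by no other row of the table."""
    others = [q for q in rows if q != r]
    w = len(r)
    for m in range(w + 1):
        for cols in combinations(range(w), m):
            if all(any(q[c] != r[c] for c in cols) for q in others):
                return [(c, r[c]) for c in cols]

rows = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
rules = [row_rule(rows, r) for r in rows]
# Each rule is true and isolates its row, so the system covers all rows.
for r, rule in zip(rows, rules):
    assert [q for q in rows if all(q[c] == v for c, v in rule)] == [r]
```

For this table, every rule has two conditions, so a nondeterministic tree of depth 2 suffices.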

5. Proofs of Theorems 1, 2 and 3

Proof of Theorem 1.
Since $A$ is a closed class, $\Lambda\in A$. By definition, $W_\psi(\Lambda)=0$. Using this fact and Lemma 1, we obtain that $F_{\psi,A}$ is an everywhere defined function and $F_{\psi,A}(n)\le n$ for any $n\in\omega$. Evidently, $F_{\psi,A}$ is a nondecreasing function. Let $T\in A$ and $W_\psi(T)\le 0$. Using the positivity property of the function $\psi$, we obtain $T=\Lambda$. Therefore, $F_{\psi,A}(0)=0$.
(a) Let the functions $S_\psi$ and $N$ be bounded from above on the class $A$. Then, there are constants $a\ge 1$ and $b\ge 2$ such that $S_\psi(T)\le a$ and $N(T)\le b$ for any table $T\in A$. Let $T\in A$. Taking into account that $A$ is a closed class, we obtain $\hat{S}_\psi(T)\le a$. By Lemma 3, $M_\psi(T)\le 2a$. From this inequality, the inequality $N(T)\le b$, and Lemma 2, it follows that $\psi^d(T)\le 2a\log_2 b$. Denote $c_0=2a\log_2 b$. Taking into account that $T$ is an arbitrary table from the class $A$, we obtain that $F_{\psi,A}(n)\le c_0$ for any $n\in\omega$.
(b) Let the function $S_\psi$ be bounded from above on the class $A$ and the function $N$ not be bounded from above on $A$. Then, there exists a constant $a\ge 2$ such that, for any table $Q\in A$,
$$S_\psi(Q)\le a. \qquad (5)$$
Let $n\in\omega\setminus\{0\}$ and let $T$ be an arbitrary table from $A$ such that $W_\psi(T)\le n$. If $T\in M_k^C$, then, evidently, $\psi^d(T)=0$. Let $T\notin M_k^C$. Using the boundedness from below property of the function $\psi$, we obtain $W(T)\le n$ and $S(T)\le a$. From these inequalities and Lemma 4, it follows that $N(T)\le (kn)^a$. From (5) and Lemma 3, it follows that $M_\psi(T)\le 2a$. Using the last two inequalities and Lemma 2, we obtain
$$\psi^d(T)\le 2a^2\log_2 n + 2a^2\log_2 k.$$
Denote $c_3=2a^2$ and $c_4=2a^2\log_2 k$. Then, $\psi^d(T)\le c_3\log_2 n + c_4$. Taking into account that $n$ is an arbitrary number from $\omega\setminus\{0\}$ and $T$ is an arbitrary table from $A$ such that $W_\psi(T)\le n$, we obtain that, for any $n\in\omega\setminus\{0\}$,
$$F_{\psi,A}(n)\le c_3\log_2 n + c_4. \qquad (7)$$
Let $n\in\omega\setminus\{0\}$. Denote $c_1=1/\log_2 k$ and $c_2=\log_k a$. We now show that
$$F_{\psi,A}(n)\ge c_1\log_2 n - c_2. \qquad (8)$$
Denote $m=\lfloor n/a\rfloor$. Taking into account that the function $N$ is not bounded from above on the set $A$, we obtain that there exists a table $T\in A$ such that $N(T)\ge k^m$. Let $C$ be a separating set for the table $T$ with the minimum cardinality such that $\psi(f_i)\le a$ for any $f_i\in C$. The existence of such a set follows from the inequality (5) and from the commutativity and nondecreasing properties of the function $\psi$. Evidently, $|C|\ge m$. Let $D$ be a subset of the set $C$ such that $|D|=m$. Denote $T_0=I(P(T)\setminus D,T)$. One can show that, for any attribute of $T_0$, there are two rows of the table $T_0$ that differ only in the column labeled with this attribute. Therefore, $D$ is a separating set for the table $T_0$ with the minimum cardinality. By Lemma 6, $N(T_0)\ge m+1\ge n/a$.
Using Lemma 5, we obtain that there exists a mapping $\nu: E_k^m\to\omega$ such that
$$h^d(J(\nu,T_0))\ge \log_k(n/a).$$
Denote $T^*=J(\nu,T_0)$. Since $\psi$ is a bounded complexity measure, $\psi^d(T^*)\ge \log_k(n/a)$. Using the boundedness from above property of the function $\psi$, we obtain $W_\psi(T^*)\le \lfloor n/a\rfloor a\le n$. Hence, the inequality (8) holds. From (7) and (8), it follows that $c_1\log_2 n - c_2\le F_{\psi,A}(n)\le c_3\log_2 n + c_4$ for any $n\in\omega\setminus\{0\}$.
(c) Let the function $S_\psi$ not be bounded from above on the class $A$. Using Lemma 8, we obtain that the set $D=\{W_\psi(T): T\in A,\ \psi^d(T)=W_\psi(T)\}$ is infinite. Since the class $A$ is closed, $\Lambda\in A$, and since $\psi^d(\Lambda)=W_\psi(\Lambda)=0$, $0\in D$. Evidently, for any $n\in D$, $F_{\psi,A}(n)\ge n$. Taking into account that $F_{\psi,A}$ is a nondecreasing function, we obtain that $F_{\psi,A}(n)\ge H_D(n)$ for any $n\in\omega\setminus\{0\}$.    □
Proof of Theorem 2.
Since $A$ is a closed class, $\Lambda\in A$. By definition, $W_\psi(\Lambda)=0$. Using this fact and Lemma 9, we obtain that $G_{\psi,A}$ is an everywhere defined function and $G_{\psi,A}(n)\le n$ for any $n\in\omega$. Evidently, $G_{\psi,A}$ is a nondecreasing function. Let $T\in A$ and $W_\psi(T)\le 0$. Using the positivity property of the function $\psi$, we obtain $T=\Lambda$. Therefore, $G_{\psi,A}(0)=0$.
(a) Let the function $S_\psi$ be bounded from above on the class $A$. Then, there is a constant $c>0$ such that $S_\psi(T)\le c$ for any $T\in A$. By Lemma 9, $\psi^a(T)\le S_\psi(T)\le c$ for any $T\in A$. Therefore, for any $n\in\omega$, $G_{\psi,A}(n)\le c$.
(b) Let the function $S_\psi$ not be bounded from above on the class $A$. Using Lemma 8, we obtain that the set $D=\{W_\psi(T): T\in A,\ \psi^a(T)=W_\psi(T)\}$ is infinite. Since the class $A$ is closed, $\Lambda\in A$, and since $\psi^a(\Lambda)=W_\psi(\Lambda)=0$, $0\in D$. Evidently, for any $n\in D$, $G_{\psi,A}(n)\ge n$. Taking into account that $G_{\psi,A}$ is a nondecreasing function, we obtain that $G_{\psi,A}(n)\ge H_D(n)$ for any $n\in\omega\setminus\{0\}$.    □
Proof of Theorem 3.
Let $n\in\omega$ and let both $n$ and $n+1$ belong to the domain of the function $H_{\psi,A}$. Immediately from the definition of this function, it follows that $H_{\psi,A}(n)\le H_{\psi,A}(n+1)$. Let $T\in A$ and $\psi^a(T)\le 0$. Using the positivity property of the function $\psi$, one can show that $T\in M_k^C$. From Lemma 2, it follows that $\psi^d(T)=0$. Therefore, $H_{\psi,A}(0)=0$.
Let $H_{\psi,A}$ not be an everywhere defined function and let $m$ be the minimum number from $\omega$ such that the value $H_{\psi,A}(m)$ is not defined. It is clear that $m>0$ and the value $H_{\psi,A}(n)$ is not defined for each $n\in\omega$ such that $n\ge m$. Denote $n_0=m-1$. Then, the domain of the function $H_{\psi,A}$ is equal to $\{n: n\in\omega,\ n\le n_0\}$.
We now consider the case when $H_{\psi,A}$ is an everywhere defined function.
(a) Let there exist a nonnegative constant $c$ such that $\psi^d(T)\le c$ for any table $T\in A$. Then, evidently, $H_{\psi,A}(n)\le c$ for any $n\in\omega$.
(b) Let there be no nonnegative constant $c$ such that $\psi^d(T)\le c$ for any table $T\in A$. Let us assume that there exists a nonnegative constant $d$ such that $\psi^a(T)\le d$ for any table $T\in A$. Then, the value $H_{\psi,A}(d)$ is not defined, but this is impossible. Therefore, the set $D=\{\psi^a(T): T\in A\}$ is infinite. Since $\Lambda\in A$, $0\in D$. By Lemma 7, $\psi^a(T)\le\psi^d(T)$ for any table $T\in A$. Hence, $H_{\psi,A}(n)\ge n$ for any $n\in D$. Taking into account that $H_{\psi,A}$ is a nondecreasing function, we obtain that $H_{\psi,A}(n)\ge H_D(n)$ for any $n\in\omega$.    □

6. Conclusions

In this paper, we studied the complexity of deterministic and nondeterministic decision trees for tables from closed classes of conventional decision tables. The obtained results allow us to point out the cases when the complexity of deterministic and nondeterministic decision trees is essentially less than the complexity of the set of attributes attached to columns of the table. This may be useful in applications. Future research will be devoted to a more in-depth study of relationships between the complexity of deterministic and nondeterministic decision trees for conventional decision tables from closed classes.

Author Contributions

Conceptualization, A.O. and M.M.; methodology, A.O. and M.M.; validation, A.O.; formal analysis, A.O. and M.M.; investigation, A.O.; resources, A.O. and M.M.; writing—original draft preparation, A.O. and M.M.; writing—review and editing, A.O. and M.M.; visualization, A.O.; supervision, M.M.; funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

Research funded by King Abdullah University of Science and Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Research reported in this publication was supported by King Abdullah University of Science and Technology (KAUST). The authors are grateful to the anonymous reviewers for useful remarks and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Wadsworth and Brooks: Monterey, CA, USA, 1984. [Google Scholar]
  2. Chikalov, I.; Lozin, V.V.; Lozina, I.; Moshkov, M.; Nguyen, H.S.; Skowron, A.; Zielosko, B. Three Approaches to Data Analysis—Test Theory, Rough Sets and Logical Analysis of Data; Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2013; Volume 41. [Google Scholar]
  3. Fürnkranz, J.; Gamberger, D.; Lavrac, N. Foundations of Rule Learning; Cognitive Technologies; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  4. Pawlak, Z. Rough Sets—Theoretical Aspects of Reasoning about Data; Theory and Decision Library: Series D; Kluwer: Alphen aan den Rijn, The Netherlands, 1991; Volume 9. [Google Scholar]
  5. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann: Burlington, MA, USA, 1993. [Google Scholar]
  6. Rokach, L.; Maimon, O. Data Mining with Decision Trees—Theory and Applications; Series in Machine Perception and Artificial Intelligence; World Scientific: Singapore, 2007; Volume 69. [Google Scholar]
  7. Moshkov, M. Time Complexity of Decision Trees. Trans. Rough Sets 2005, 3, 244–459. [Google Scholar]
  8. Moshkov, M.; Zielosko, B. Combinatorial Machine Learning—A Rough Set Approach; Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; Volume 360. [Google Scholar]
  9. Moshkov, M. Comparative Analysis of Deterministic and Nondeterministic Decision Trees; Intelligent Systems Reference Library; Springer: Cham, Switzerland, 2020; Volume 179. [Google Scholar]
  10. Boros, E.; Hammer, P.L.; Ibaraki, T.; Kogan, A. Logical analysis of numerical data. Math. Program. 1997, 79, 163–190. [Google Scholar] [CrossRef]
  11. Boros, E.; Hammer, P.L.; Ibaraki, T.; Kogan, A.; Mayoraz, E.; Muchnik, I.B. An Implementation of Logical Analysis of Data. IEEE Trans. Knowl. Data Eng. 2000, 12, 292–306. [Google Scholar] [CrossRef]
  12. Pawlak, Z.; Skowron, A. Rudiments of rough sets. Inf. Sci. 2007, 177, 3–27. [Google Scholar] [CrossRef]
  13. Molnar, C. Interpretable Machine Learning. A Guide for Making Black Box Models Explainable, 2nd ed.; Independent Publishers: Chicago, IL, USA, 2022. [Google Scholar]
  14. Blum, M.; Impagliazzo, R. Generic Oracles and Oracle Classes (Extended Abstract). In Proceedings of the 28th Annual Symposium on Foundations of Computer Science, Los Angeles, CA, USA, 27–29 October 1987; IEEE Computer Society: Washington, DC, USA, 1987; pp. 118–126. [Google Scholar]
  15. Buhrman, H.; de Wolf, R. Complexity measures and decision tree complexity: A survey. Theor. Comput. Sci. 2002, 288, 21–43. [Google Scholar] [CrossRef]
  16. Hartmanis, J.; Hemachandra, L.A. One-way functions, robustness, and the non-isomorphism of NP-complete sets. In Proceedings of the Second Annual Conference on Structure in Complexity Theory, Cornell University, Ithaca, NY, USA, 16–19 June 1987; IEEE Computer Society: Washington, DC, USA, 1987. [Google Scholar]
  17. Tardos, G. Query complexity, or why is it difficult to separate NP^A ∩ coNP^A from P^A by random oracles A? Combinatorica 1989, 9, 385–392. [Google Scholar] [CrossRef]
  18. Moshkov, M. On depth of conditional tests for tables from closed classes. In Combinatorial-Algebraic and Probabilistic Methods of Discrete Analysis; Markov, A.A., Ed.; Gorky University Press: Gorky, Russia, 1989; pp. 78–86. (In Russian) [Google Scholar]
  19. Ostonov, A.; Moshkov, M. Comparative analysis of deterministic and nondeterministic decision trees for decision tables from closed classes. arXiv 2023, arXiv:2304.10594. [Google Scholar]
  20. Ostonov, A.; Moshkov, M. Deterministic and strongly nondeterministic decision trees for decision tables from closed classes. arXiv 2023, arXiv:2305.06093. [Google Scholar]
  21. Post, E. Two-Valued Iterative Systems of Mathematical Logic; Annals of Mathematics Studies; Princeton University Press: Princeton, NJ, USA, 1941; Volume 5. [Google Scholar]
  22. Robertson, N.; Seymour, P.D. Graph Minors. XX. Wagner’s conjecture. J. Comb. Theory, Ser. B 2004, 92, 325–357. [Google Scholar] [CrossRef]
  23. Moshkov, M. Conditional tests. In Problemy Kibernetiki; Yablonskii, S.V., Ed.; Nauka Publishers: Moscow, Russia, 1983; Volume 40, pp. 131–170. (In Russian) [Google Scholar]
Figure 1. Decision table from $M_2$.
Figure 2. Decision table obtained from the decision table shown in Figure 1 by the removal of a column and changing of decisions.
Figure 3. A deterministic decision tree for the decision table shown in Figure 1.
