Article

Three-Way Co-Training with Pseudo Labels for Semi-Supervised Learning

1
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
2
Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China
3
SZU Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518060, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(15), 3348; https://doi.org/10.3390/math11153348
Submission received: 20 June 2023 / Revised: 22 July 2023 / Accepted: 23 July 2023 / Published: 31 July 2023
(This article belongs to the Special Issue Data Mining: Analysis and Applications)

Abstract:
The theory of three-way decision has been widely utilized across various disciplines and fields as an efficient method for both knowledge reasoning and decision making. However, the application of the three-way decision theory to partially labeled data has received relatively less attention. In this study, we propose a semi-supervised co-training model based on the three-way decision and pseudo labels. We first present a simple yet effective method for producing two views by assigning pseudo labels to unlabeled data, based on which a heuristic attribute reduction algorithm is developed. The three-way decision is then combined with the concept of entropy to form co-decision rules for classifying unlabeled data into useful, uncertain, or useless samples. Finally, some useful samples are iteratively selected to improve the performance of the co-decision model. The experimental results on UCI datasets demonstrate that the proposed model outperforms other semi-supervised models, exhibiting its potential for partially labeled data.

1. Introduction

In the era of information and data, there is an increasing need to process data described by a large number of attributes, such as genetic analysis, medical image classification, and text mining, which poses a huge challenge to traditional machine learning algorithms. Attribute reduction [1,2] has played an important role in removing irrelevant or redundant attributes, while retaining the most informative ones, which can improve computational efficiency and performance, and mitigate the risk of overfitting and the curse of dimensionality.
Rough set theory [3] has been proven to be a powerful mathematical tool for handling incomplete, uncertain, or imprecise data. Since Pawlak’s groundbreaking research [4], the theory has undergone extensive development and application [5,6,7]. Presently, numerous attribute reduction methods based on rough set theory have been proposed, including methods based on the discernibility matrix [3], the positive region, and information entropy. Information entropy is one of the basic concepts of information theory. Pawlak et al. [8] introduced conditional entropy to assess the significance of attributes. Sun et al. [9] devised a novel neighborhood joint entropy from the algebraic and information perspectives of neighborhood rough sets. Jiang et al. [10] utilized the notion of relative decision entropy for the selection of informative attributes. Gao et al. [11,12] defined granularity-based maximum decision entropy to evaluate the significance of attributes. Yang et al. [13] proposed a pseudo-label neighborhood relation and re-defined both the neighborhood rough set and some corresponding measures. Yuan et al. [14] defined an uncertainty measure based on fuzzy complementary entropy and, based on it, developed attribute evaluation criteria of maximal information, minimal redundancy, and maximal interactivity. Xu et al. [15] expanded the concept of information entropy to handle fuzzy incomplete systems.
As a generalization of rough set theory, the theory of three-way decision [16,17,18,19,20,21,22,23,24,25] is a decision-making methodology that introduces three options—acceptance, non-commitment, and rejection—by both considering the decision itself and the decision costs, rather than solely being a deterministic or binary decision. The three-way decision has now become one of the research hotspots in rough set theory, and there have been many related studies. Li et al. [26] proposed an axiomatic approach for describing three-way concepts using multiple granularities, and utilized the concept of set approximation to simulate cognitive processes. Wang et al. [27] proposed a solution to the problem of attribute reduction based on decision region preservation in rough sets. Ren et al. [28] introduced the three-way decision to the concept lattice and systematically studied the methods for the three-way concept lattice. Huang et al. [29] combined the distance function with the three-way decision to compute the neighborhood for mixed data. Zhang et al. [30] applied the three-way decision to the concept of the neighborhood information system and decomposed it into a three-level structure. Kong et al. [31] divided all attributes in the information table into three disjoint sets and developed a new granular structure for granular computing. Fang et al. [32] presented a framework that utilizes the three-way decision and discernibility matrix to address cost-sensitive approximation attribute reduction.
Typically, rough set-based approaches are employed for either fully labeled or unlabeled data. However, many real-world datasets consist of both labeled and unlabeled data. To address the challenge of handling partially labeled data, several rough set-based methods have been developed. Miao et al. [33] defined a semi-supervised discernibility matrix and developed a novel rough co-training model that harnesses unlabeled data to improve the accuracy of classifiers trained exclusively on labeled data. Wang et al. [34] applied a Gaussian kernel-based similarity relation to evaluating the samples’ inconsistency and developed an active learning model. To effectively deal with big data, Li et al. [35] used condition neighborhood granularity and neighborhood granularity to represent the importance of attributes, and provided an attribute reduction method for numerical attributes. Liu et al. [36] constructed multiple attribute fitness functions by the class local neighborhood decision error rate and used them to evaluate the importance of attributes. Xu et al. [6] developed a model and mechanism for a two-way learning system in fuzzy datasets based on information granules and designed an algorithm to implement different types of information granules. Pan et al. [37] defined a measure of semi-supervised neighborhood mutual information to generate the optimal semi-supervised reduct of partially labeled data by heuristic search. However, most of the aforementioned works focus on semi-supervised attribute reduction; less consideration is given to directly learning semi-supervised models from partially labeled data. In fact, unlabeled data contain valuable information that can help a model better capture the data distribution and improve its generalization ability. However, noise and useless samples are also present in unlabeled data, which pose a significant threat to learning from partially labeled data. Therefore, it is important to develop a strategy that allows the model to efficiently select samples that are beneficial to itself. In this study, we propose a co-training model based on the three-way decision, and the main contributions are as follows:
(1)
To perform attribute reduction for partially labeled data, we propose a simple but effective labeling strategy for unlabeled data and develop a granular conditional entropy-based heuristic attribute reduction algorithm for partially labeled data with pseudo labels.
(2)
To learn from unlabeled data effectively, we combine the three-way decision with information entropy into a co-training model and design three-way co-decision rules to classify unlabeled data into three sets of samples—useful, uncertain, and useless—which allows the model to learn from useful samples, thus improving performance.
(3)
To test the effectiveness of the proposed model, a large number of comparative experiments are conducted. The results demonstrate the superiority of the model and show its potential to handle partially labeled data.
The rest of this paper is ordered as follows. Section 2 briefly introduces the basic concepts of semi-supervised learning and three-way decision. Section 3 mainly describes the co-decision model based on the three-way decision. The experimental results and analysis are given in Section 4. Finally, Section 5 concludes the paper.

2. Preliminaries

This section briefly describes some related concepts in semi-supervised learning and three-way decision theory. More information can be found in [3,8,22,38,39,40,41,42,43].

2.1. Semi-Supervised Learning

In semi-supervised learning, a partially labeled dataset U containing $l+n$ samples is divided into two parts: labeled data $L=\{(x_i,y_i)\}_{i=1}^{l}$ and unlabeled data $N=\{(x_j,?)\}_{j=l+1}^{l+n}$, where $U=L\cup N$ and $l\ll n$. Any sample in U belongs to either L or N depending on whether it is labeled, that is, $L\cap N=\emptyset$. Semi-supervised learning performs well in various machine learning tasks, such as semi-supervised clustering, semi-supervised classification, and semi-supervised regression. This paper focuses on the semi-supervised classification task [43].
Semi-supervised classification is a method that can use effective information in unlabeled data to improve the performance of supervised classifiers trained only on labeled data. It can generally be divided into four methods: generative methods, low-density separation methods, graph-based methods, and divergence-based methods [43]. Among them, co-training [40,41], as one of the more popular multi-view models in the divergence-based methods, has performed well on a large number of practical problems. It assumes that there are two views (attribute sets) to describe data, and two base classifiers trained separately on initially labeled data learn from each other using unlabeled data. Unfortunately, two independent and redundant views are difficult to guarantee in practical data. Therefore, it is essential to design a method to decompose the attribute set into two independent subsets to make the co-training model work well.

2.2. Three-Way Decision

In rough sets, an information system [3] represents the data that need to be processed, denoted as $IS=(U, A, V, f)$, where U is a non-empty set containing all samples; A is the set of attributes; V is the value domain of all attributes, so $V_a$ represents the value domain of an attribute $a\in A$ and $V=\bigcup_{a\in A}V_a$; and f denotes a mapping function. For any sample $x_i\in U$, there exists a mapping relationship $f(x_i,a)\in V_a$ for any attribute $a\in A$.
For any subset B of A, U is divided into a set of equivalence classes  U / B , and the equivalence class containing sample x is denoted as  [ x ] B . Let X be a subset of U and then the upper and lower approximations of X given B are defined as [3]
$$\overline{B}(X)=\{x\in U \mid [x]_B\cap X\neq\emptyset\},\quad \underline{B}(X)=\{x\in U \mid [x]_B\subseteq X\}.$$
$\overline{B}(X)$ and $\underline{B}(X)$ represent the set of samples that may belong to X and the set of samples that must belong to X, respectively. In particular, the lower approximation $\underline{B}(X)$ is also called the positive region $POS_B(X)$ of X on U; the difference between the upper and lower approximations of X is called the boundary region $BND_B(X)$, that is, $BND_B(X)=\overline{B}(X)-\underline{B}(X)$; the set of samples outside the upper approximation of X is called the negative region, denoted as $NEG_B(X)=U-\overline{B}(X)$.
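To make these definitions concrete, the following Python sketch computes the equivalence classes induced by an attribute subset and the resulting lower and upper approximations for a toy table. The data layout (rows as attribute-value dictionaries) and all names are illustrative assumptions, not part of the paper.

```python
from collections import defaultdict

def equivalence_classes(samples, attrs):
    """Group sample indices by their values on the given attributes (the partition U/B)."""
    blocks = defaultdict(set)
    for idx, row in enumerate(samples):
        key = tuple(row[a] for a in attrs)
        blocks[key].add(idx)
    return list(blocks.values())

def approximations(samples, attrs, target):
    """Lower/upper approximations of a target set of sample indices under attrs."""
    lower, upper = set(), set()
    for block in equivalence_classes(samples, attrs):
        if block <= target:       # [x]_B entirely inside X
            lower |= block
        if block & target:        # [x]_B overlaps X
            upper |= block
    return lower, upper

# Toy usage: samples are dicts of attribute values, target is a set of row indices.
data = [{"a": 0, "b": 1}, {"a": 0, "b": 1}, {"a": 1, "b": 0}]
low, up = approximations(data, ["a"], {0, 2})
print(low, up)   # lower = {2}, upper = {0, 1, 2}
```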
When the set of attributes A is further divided into the set of condition attributes C and the set of decision attributes D, the information system is called a decision information system, denoted as $DS=(U, A=C\cup D, V, f)$. Let $U/D=\{Y_1, Y_2, \ldots, Y_{|U/D|}\}$ be the set of equivalence classes of the decision attribute set D, where $Y_i$ is the set of samples with decision i; then the positive region, the boundary region, and the negative region of D given C are defined as [3]
$$POS_C(D)=\bigcup_{Y_i\in U/D}\underline{C}(Y_i),\quad BND_C(D)=\bigcup_{Y_i\in U/D}\left(\overline{C}(Y_i)-\underline{C}(Y_i)\right),\quad NEG_C(D)=U-\bigcup_{Y_i\in U/D}\overline{C}(Y_i).$$
Let $\Lambda=\{a_P, a_B, a_N\}$ be the set of actions that classify a sample into the positive region $POS(X)$, the boundary region $BND(X)$, or the negative region $NEG(X)$. Given a sample $x\in U$, the cost of taking different actions on x can be defined as [22]
$$R(a_P\mid [x])=\lambda_{PP}P(X\mid [x])+\lambda_{PN}\left(1-P(X\mid [x])\right),$$
$$R(a_B\mid [x])=\lambda_{BP}P(X\mid [x])+\lambda_{BN}\left(1-P(X\mid [x])\right),$$
$$R(a_N\mid [x])=\lambda_{NP}P(X\mid [x])+\lambda_{NN}\left(1-P(X\mid [x])\right),$$
where $P(X\mid [x])$ represents the probability that sample x belongs to X; $\lambda_{PP}$, $\lambda_{BP}$, and $\lambda_{NP}$ are the costs of taking the action $a_P$, $a_B$, or $a_N$ when sample x is in X, respectively. Conversely, when the sample x is not in X, the costs of taking $a_P$, $a_B$, or $a_N$ are represented as $\lambda_{PN}$, $\lambda_{BN}$, and $\lambda_{NN}$, respectively.
According to Bayesian minimal risk decision theory [39], the following rules can be obtained using the above decision costs [22]:
(P) 
When $R(a_P\mid [x])\le R(a_B\mid [x])$ and $R(a_P\mid [x])\le R(a_N\mid [x])$, classify sample x into the positive region;
(B) 
When $R(a_B\mid [x])\le R(a_P\mid [x])$ and $R(a_B\mid [x])\le R(a_N\mid [x])$, classify sample x into the boundary region;
(N) 
When $R(a_N\mid [x])\le R(a_P\mid [x])$ and $R(a_N\mid [x])\le R(a_B\mid [x])$, classify sample x into the negative region.
If we assume that the inequality $(\lambda_{PN}-\lambda_{BN})(\lambda_{NP}-\lambda_{BP})>(\lambda_{BP}-\lambda_{PP})(\lambda_{BN}-\lambda_{NN})$ holds, then the above rules can be further rewritten as [22]
(P) 
When $P(X\mid [x])\ge\alpha$, classify sample x into the positive region;
(B) 
When $\beta<P(X\mid [x])<\alpha$, classify sample x into the boundary region;
(N) 
When $P(X\mid [x])\le\beta$, classify sample x into the negative region,
where $\alpha=\dfrac{\lambda_{PN}-\lambda_{BN}}{(\lambda_{PN}-\lambda_{BN})+(\lambda_{BP}-\lambda_{PP})}$ and $\beta=\dfrac{\lambda_{BN}-\lambda_{NN}}{(\lambda_{BN}-\lambda_{NN})+(\lambda_{NP}-\lambda_{BP})}$.
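As a small numerical illustration of how the thresholds follow from the costs, the sketch below computes α and β for an assumed cost matrix (the numeric costs are illustrative, not taken from the paper) and applies rules (P), (B), and (N) to a given probability.

```python
def thresholds(l_pp, l_bp, l_np, l_pn, l_bn, l_nn):
    """Compute (alpha, beta) from the six decision costs, as in Section 2.2."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_way_rule(p, alpha, beta):
    """Map P(X | [x]) to one of the three regions."""
    if p >= alpha:
        return "P"   # positive region
    if p <= beta:
        return "N"   # negative region
    return "B"       # boundary region

# Illustrative costs (assumed): correct decisions cost nothing.
alpha, beta = thresholds(l_pp=0, l_bp=2, l_np=6, l_pn=6, l_bn=2, l_nn=0)
print(round(alpha, 2), round(beta, 2))   # 0.67 0.33
print(three_way_rule(0.8, alpha, beta))  # 'P'
```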

3. Three-Way Decision-Based Co-Training with Pseudo Labels

In this section, we first present the overall framework of the model proposed in this study. Then, we introduce a pseudo-labeling strategy to generate labels for partially labeled data and provide a heuristic attribute reduction algorithm. Finally, we propose a co-decision model to learn from unlabeled data.

3.1. Overall Framework

Co-training [40,41] is a divergence-based multi-classifier model in which two base classifiers are trained to learn from each other under two mutually independent views. However, practical data rarely come with two naturally divided views, which limits the application of co-training. Moreover, the base classifiers may learn from mislabeled or noisy samples, which worsens their performance. To address these problems, a three-way decision-based co-decision model is proposed in this study, and its overall framework is shown in Figure 1.
Firstly, the unlabeled data are all tagged with the pseudo labels 0 or 1 and combined with the labeled data, respectively, which preserves the discriminative information of both the labeled and unlabeled data. Then, an attribute reduction algorithm is performed to obtain the optimal reduct on the two pseudo-labeled datasets, and two base classifiers are trained on the two reducts, respectively. Subsequently, the two classifiers are retrained iteratively on useful unlabeled data selected by using the three-way decision to make the classifiers only learn beneficial data and improve their performance until the stopping conditions are met. Finally, the two classifiers are combined to obtain the final classifier.

3.2. Semi-Supervised Attribute Reduction Based on Pseudo Labels

Traditional co-training has shown effectiveness in dealing with partially labeled data, but how to obtain two views from data whose attribute set has no natural division remains an open question. In this study, we propose a strategy that uses labeled samples and unlabeled samples with pseudo labels to form two views.
Assume a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ has l samples in the labeled data L and n samples in the unlabeled data N, and $|U|=u$, $u=l+n$. Without loss of generality, assume that there are only two classes in the partially labeled data. For the unlabeled data N, we adopt a simple strategy of labeling all samples in N with the pseudo labels 0 or 1, respectively, and generate two sets of pseudo-labeled data: $N_0$ with pseudo label 0 and $N_1$ with pseudo label 1. The two pseudo-labeled datasets are combined with the labeled data L to form two views $L_0=L\cup N_0$ and $L_1=L\cup N_1$. In fact, the generated pseudo-labeled data reflect the original partially labeled data from different perspectives, providing data diversity for the learning model. Formally, the partially labeled data with pseudo labels are represented as $DS=(U=L\cup N_k, A=C\cup D, V, f)$ $(k=0,1)$, where $N_k$ is the pseudo-labeled data after the labeling strategy.
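A minimal sketch of this labeling strategy is given below: every unlabeled sample is tagged once with pseudo label 0 and once with pseudo label 1, and each copy is appended to the labeled data to form the two views $L_0$ and $L_1$. The list-of-pairs data layout is an assumption made for illustration.

```python
def build_views(labeled, unlabeled):
    """Form the two views L0 = L ∪ N0 and L1 = L ∪ N1 by assigning pseudo labels.

    `labeled` is a list of (features, label) pairs with labels in {0, 1};
    `unlabeled` is a list of feature vectors without labels.
    """
    n0 = [(x, 0) for x in unlabeled]   # every unlabeled sample tagged with pseudo label 0
    n1 = [(x, 1) for x in unlabeled]   # every unlabeled sample tagged with pseudo label 1
    return labeled + n0, labeled + n1

L = [([0.2, 1.1], 0), ([0.9, 0.3], 1)]
N = [[0.5, 0.7], [0.1, 0.4]]
L0, L1 = build_views(L, N)
print(len(L0), len(L1))   # 4 4
```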
In information theory [38], information entropy is represented by the expectation of the information content of all possible events. In fact, it can also be used to quantify the uncertainty of attributes in a given dataset.
Definition 1. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the partition $U/B=\{X_1, X_2, \ldots, X_{|U/B|}\}$ induced by a subset of condition attributes $B\subseteq C$, the information entropy of B over U is defined as [8]
$$H(B)=-\sum_{i=1}^{|U/B|}P(X_i)\log P(X_i),$$
where $|\cdot|$ is the number of elements in a finite set and $P(X_i)=\frac{|X_i|}{|U|}$.
Definition 2. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the partitions $U/B=\{X_1, X_2, \ldots, X_{|U/B|}\}$ and $U/D=\{Y_1, Y_2, \ldots, Y_{|U/D|}\}$ induced by a subset of condition attributes $B\subseteq C$ and the decision attribute set D, respectively, the joint entropy between B and D is defined as [8]
$$H(B,D)=-\sum_{i=1}^{|U/B|}\sum_{j=1}^{|U/D|}P(X_i,Y_j)\log P(X_i,Y_j).$$
Definition 3. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the partitions $U/B=\{X_1, X_2, \ldots, X_{|U/B|}\}$ and $U/D=\{Y_1, Y_2, \ldots, Y_{|U/D|}\}$ induced by a subset of condition attributes $B\subseteq C$ and the decision attribute set D, respectively, the conditional entropy of D given B is defined as [42]
$$H(D\mid B)=-\sum_{i=1}^{|U/B|}\sum_{j=1}^{|U/D|}P(X_i,Y_j)\log P(Y_j\mid X_i),$$
where $P(Y_j\mid X_i)=\frac{|X_i\cap Y_j|}{|X_i|}$.
In rough sets, the universe under a subset of attributes can be divided into a set of information granules, each of which consists of indiscernible samples. In fact, the size of the information granules reflects the discriminating power of the attribute subset. The finer the information granularity, the better the quality of the attribute subset.
Definition 4. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the partition $U/B=\{X_1, X_2, \ldots, X_{|U/B|}\}$ induced by a subset of condition attributes $B\subseteq C$, the granularity on U given B is defined as [44]
$$G(B)=\sum_{i=1}^{|U/B|}P(X_i)^2.$$
Definition 5. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the partitions $U/B=\{X_1, X_2, \ldots, X_{|U/B|}\}$ and $U/D=\{Y_1, Y_2, \ldots, Y_{|U/D|}\}$ induced by a subset of condition attributes $B\subseteq C$ and the decision attribute set D, respectively, the granular conditional entropy of D given B is defined as [45]
$$GH(D\mid B)=-\sum_{i=1}^{|U/B|}P(X_i)^2\sum_{j=1}^{|U/D|}P(X_i,Y_j)\log P(Y_j\mid X_i).$$
For any condition attribute subset B, its granular conditional entropy to D integrates the granularity and conditional entropy, which not only evaluates the quality of partitions induced by the condition attribute subset but also accumulates the uncertainty of each condition class under different decisions, providing a better measure for the importance of an attribute subset.
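For concreteness, the following sketch computes the granular conditional entropy of Definition 5 from a toy decision table. The use of base-2 logarithms and dictionary-encoded rows are implementation assumptions rather than choices made in the paper.

```python
import math
from collections import defaultdict

def partition(rows, attrs):
    """Partition row indices by their values on attrs (the partition U/B)."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def granular_conditional_entropy(rows, cond_attrs, decision_attr):
    """GH(D | B): each block's conditional entropy term weighted by P(X_i)^2."""
    u = len(rows)
    gh = 0.0
    for block in partition(rows, cond_attrs):
        p_x = len(block) / u
        counts = defaultdict(int)          # decision distribution inside the block
        for i in block:
            counts[rows[i][decision_attr]] += 1
        for c in counts.values():
            p_xy = c / u                   # P(X_i, Y_j)
            p_y_given_x = c / len(block)   # P(Y_j | X_i)
            gh -= p_x ** 2 * p_xy * math.log2(p_y_given_x)
    return gh

rows = [{"a": 0, "d": 0}, {"a": 0, "d": 1}, {"a": 1, "d": 1}]
print(granular_conditional_entropy(rows, ["a"], "d"))   # ~0.296
```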
Definition 6. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the partition $U/B=\{X_1, X_2, \ldots, X_{|U/B|}\}$ induced by a subset of condition attributes $B\subseteq C$, the importance of an attribute $a\in C-B$ is defined as
$$sig(a,B,D)=H(D\mid B)-H(D\mid B\cup\{a\}).$$
Based on the pseudo-labeling strategy and granular conditional entropy, a semi-supervised attribute reduction algorithm is developed using the forward heuristic search. The procedure is described in Algorithm 1.
Algorithm 1 Semi-supervised attribute reduction based on granular conditional entropy.
Input: Partially labeled data $PS=(U=L\cup N, A=C\cup D, V, f)$.
Output: Semi-supervised reducts $RED_0$ and $RED_1$.
1: Generate two pseudo-labeled datasets $N_0$ and $N_1$ from the unlabeled data using the pseudo labels 0 and 1, respectively;
2: for $k\in\{0,1\}$ do
3:   Compute the granular conditional entropy $GH(D\mid C)$ on the data $L\cup N_k$;
4:   Calculate the granular conditional entropy $GH(D\mid\{a_i\})$ $(a_i\in C)$ to evaluate each attribute and add the attribute $a_{opt}$ that has the lowest granular conditional entropy to $RED_k$;
5:   while $GH(D\mid RED_k)\neq GH(D\mid C)$ do
6:     Calculate the importance of each attribute $sig(a_i, RED_k, D)$;
7:     Select the attribute $a_{opt}$ with the highest importance;
8:     $RED_k=RED_k\cup\{a_{opt}\}$;
9:   end while
10: end for
11: Return the two semi-supervised reducts $RED_0$ and $RED_1$.
In Algorithm 1, two pseudo-labeled datasets are first generated. Then, on each dataset, the overall granular conditional entropy under all condition attributes is calculated, and attributes are iteratively added to the reduct, starting from the attribute with the lowest granular conditional entropy and then selecting, in each round, the attribute with the highest importance. The algorithm terminates when the granular conditional entropy of the obtained reduct equals that of all condition attributes. Finally, two optimal semi-supervised reducts are returned.
Assume that the partially labeled data have $|U|$ samples and $|C|$ condition attributes. The time cost of determining the optimal attribute is $O(|C||U|^2)$ in each round of iteration. In the worst case, $|C|$ rounds are needed, so the worst-case time cost is $O(|C|^2|U|^2)$. The space cost is $O(|C||U|)$ because it is necessary to store all partially labeled data.
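A compact sketch of the forward heuristic search of Algorithm 1 for a single pseudo-labeled view is given below; it reuses granular_conditional_entropy from the earlier sketch and would be run once on $L\cup N_0$ and once on $L\cup N_1$. Measuring attribute importance by the drop in granular conditional entropy, the numerical tolerance, and the tie-breaking are assumptions of this illustration.

```python
def semi_supervised_reduct(rows, cond_attrs, decision_attr, tol=1e-12):
    """Forward heuristic search of Algorithm 1 on one pseudo-labeled view (sketch)."""
    target = granular_conditional_entropy(rows, cond_attrs, decision_attr)  # GH(D | C)
    remaining = list(cond_attrs)
    # start from the single attribute with the lowest granular conditional entropy
    first = min(remaining, key=lambda a: granular_conditional_entropy(rows, [a], decision_attr))
    red = [first]
    remaining.remove(first)
    while remaining and abs(granular_conditional_entropy(rows, red, decision_attr) - target) > tol:
        base = granular_conditional_entropy(rows, red, decision_attr)
        # importance measured here by the drop in granular conditional entropy (assumption)
        best = max(remaining,
                   key=lambda a: base - granular_conditional_entropy(rows, red + [a], decision_attr))
        red.append(best)
        remaining.remove(best)
    return red
```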

3.3. Three-Way Co-Training Model for Partially Labeled Data

Co-training is a semi-supervised learning model that leverages unlabeled samples to enhance the performance of two classifiers. Therefore, the selection of unlabeled samples is crucial. Typically, each unlabeled sample can be classified as useful, uncertain, or useless. When the classifiers select useful unlabeled samples for learning, their classification performance will be improved; when the classifiers select useless samples, there may be negative effects on their performance; and uncertain samples are those that the classifiers cannot yet classify with confidence, but they may become useful unlabeled samples in later iterations. Therefore, our primary goal is to select useful samples and exclude useless ones as much as possible to improve the classifiers’ performance. To accurately assess the confidence level of a classifier’s prediction probability, we introduce the concept of normalized entropy.
Definition 7. 
Given a partially labeled dataset $PS=(U=L\cup N, A=C\cup D, V, f)$ and the probability distribution $P^k(x)=\left(p_1^k(x), p_2^k(x), \ldots, p_{|U/D|}^k(x)\right)$, $k\in\{0,1\}$, predicted by the classifier for $x\in N$ in view k, the normalized entropy of the classifier on x is defined as
$$NE^k(x)=-\frac{1}{\log|U/D|}\sum_{i=1}^{|U/D|}p_i^k(x)\log p_i^k(x).$$
Normalized entropy uses the prediction probability distribution in a given view to reflect the degree of certainty of the classifier on its prediction. The higher the normalized entropy, the lower the credibility of the classifier in making the prediction on an unlabeled sample. Conversely, the lower the normalized entropy, the higher the credibility of the classifier.
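A minimal implementation of Definition 7 is shown below; the logarithm base does not matter because the normalization by log|U/D| cancels it.

```python
import math

def normalized_entropy(probs):
    """Normalized entropy of a predicted class distribution, scaled to [0, 1]."""
    k = len(probs)
    if k < 2:
        return 0.0
    return -sum(p * math.log(p) for p in probs if p > 0) / math.log(k)

print(normalized_entropy([0.5, 0.5]))     # 1.0 (maximally uncertain)
print(normalized_entropy([0.95, 0.05]))   # ~0.29 (confident)
```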
The three-way decision is an effective method for making three optional decisions under uncertainty—acceptance, wait-and-see, and rejection—which coincides with our goal of classifying unlabeled samples as useful, uncertain, and useless. However, the three-way decision is usually applied to a single classifier, so we propose a three-way co-decision model to classify unlabeled samples. In the model, each classifier divides a sample into useful, uncertain, or useless by using Bayesian minimum risk theory. When the two classifiers both confidently classify an unlabeled sample as useful, uncertain, or useless, a co-decision P, B, or N is made. When one of the classifiers classifies the sample as uncertain and the other classifies it as useless, we consider the sample useless and make an N decision; when one of the classifiers classifies the sample as uncertain and the other classifies it as useful, we make a P decision. However, when the predictions of the two classifiers conflict, that is, one classifier considers the sample useful and the other classifies it as useless, further consideration is needed to find as many useful samples as possible, even though this may degrade the performance of one classifier while improving that of the combined classifier. Specifically, we use two threshold parameters $\delta$ and $\varepsilon$ to evaluate the average normalized entropy of the two classifiers. If the average normalized entropy is less than or equal to $\delta$, the sample is considered helpful for improving the performance of the combined classifier, so it is classified as P. If the average normalized entropy is greater than $\delta$ and less than or equal to $\varepsilon$, the two classifiers generally do not have enough confidence in the sample and need further learning to judge it, so the sample is classified as B. If the average normalized entropy is greater than $\varepsilon$, the sample has only a negative impact on the performance of the combined classifier, so it should be classified as N. Bearing this in mind, we further define the following three-way co-decision rules for the case in which the two classifiers highly contradict each other:
(P) 
If $\frac{1}{2}\left(NE^0(x)+NE^1(x)\right)\le\delta$, then assign sample x to P.
(B) 
If $\delta<\frac{1}{2}\left(NE^0(x)+NE^1(x)\right)\le\varepsilon$, then assign sample x to B.
(N) 
If $\frac{1}{2}\left(NE^0(x)+NE^1(x)\right)>\varepsilon$, then assign sample x to N.
By using the normalized entropy and three-way decision, the co-decision results for each unlabeled sample can be expressed in Table 1.
In Table 1, $a_w^k$ indicates that classifier k makes the decision w, where $k\in\{0,1\}$ and $w\in\{P,B,N\}$. The entries P, B, and N denote the co-decision result of the two classifiers for an unlabeled sample, while $CoTWD$ represents the co-decision result further determined by the average normalized entropy and the three-way decision when the two classifiers make different decisions.
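The co-decision logic of Table 1 can be sketched as follows; it reuses normalized_entropy from the previous sketch. How a single classifier maps its predicted probabilities to P, B, or N is an assumption here (the top-class probability is compared with α and β), and conflicting P/N votes fall back to the average normalized entropy rules (P), (B), and (N) above.

```python
def single_decision(probs, alpha, beta):
    """Three-way decision of one classifier from its top-class probability (assumption)."""
    p = max(probs)
    if p >= alpha:
        return "P"
    if p <= beta:
        return "N"
    return "B"

def co_decision(probs0, probs1, alpha, beta, delta, eps):
    """Co-decision of Table 1; a P/N conflict falls back to the average NE rules."""
    d0, d1 = single_decision(probs0, alpha, beta), single_decision(probs1, alpha, beta)
    if d0 == d1:
        return d0
    if {d0, d1} == {"P", "B"}:
        return "P"
    if {d0, d1} == {"N", "B"}:
        return "N"
    # highly contradictory case (one says P, the other N): use the average normalized entropy
    avg_ne = 0.5 * (normalized_entropy(probs0) + normalized_entropy(probs1))
    if avg_ne <= delta:
        return "P"
    if avg_ne <= eps:
        return "B"
    return "N"

print(co_decision([0.9, 0.1], [0.46, 0.54],
                  alpha=0.75, beta=0.55, delta=0.80, eps=0.95))  # 'P' (conflict resolved)
```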
Using these rules, unlabeled samples can be classified as useful, uncertain, and useless, from which only the useful samples are selected to retrain the classifiers. Algorithm 2 describes the procedure of the co-training model.
Algorithm 2 begins with Algorithm 1 to generate two semi-supervised reducts using pseudo labels. Two base classifiers are then trained separately on the two reducts. After initializing all parameters required for the co-training process, the unlabeled samples are iteratively classified into useful, uncertain, or useless by using co-decision rules. The classifiers can only be updated when useful samples exist. If the inequality constraint is satisfied, i.e., the classifier’s performance does not deteriorate after adding unlabeled samples, some useful samples are selected for updating the classifier in descending order of the average normalized entropy. The algorithm stops when neither classifier can be further updated, and the final classifier is obtained by combining the two classifiers.
Algorithm 2 Co-training model for partially labeled data based on the three-way decision.
Input: Partially labeled data $PS=(U=L\cup N, A=C\cup D, V, f)$.
Output: A combined classifier $H_{combined}=H_0^t\cup H_1^t$.
1: Use Algorithm 1 to generate two pseudo-labeled datasets $N_0$ and $N_1$ and use them to obtain the semi-supervised reducts $RED_0$ and $RED_1$;
2: Train base classifiers $H_0$ and $H_1$ using $RED_0$ and $RED_1$, respectively;
3: Set the error rates, unlabeled samples, useful samples, and update flags for each classifier: $Err_k^t=0.5$, $N^t=N$, $N_{P,k}^t=\emptyset$, $Update_k^t=True$, $t=0$, $k\in\{0,1\}$;
4: while $Update_0^t=True$ or $Update_1^t=True$ do
5:   $Update_0^t=Update_1^t=False$;
6:   Divide the unlabeled data $N^t$ into useful samples $N_P^{t+1}$, uncertain samples $N_B^{t+1}$, and useless samples $N_N^{t+1}$ by using the three-way decision;
7:   if $N_P^{t+1}\neq\emptyset$ then
8:     Sort the samples in $N_P^{t+1}$ in descending order of the average normalized entropy of the two classifiers $H_0^t$ and $H_1^t$;
9:     for $k\in\{0,1\}$ do
10:      Pick a certain number of samples $N_{P*,k}^{t+1}$ from $N_P^{t+1}$ to ensure that the inequality $Err_k^{t+1}\left|N_{P,k}^t\cup N_{P*,k}^{t+1}\right|<Err_k^t\left|N_{P,k}^t\right|$ holds;
11:      $N_{P,k}^t=N_{P,k}^t\cup N_{P*,k}^{t+1}$; $Update_k^t=True$;
12:    end for
13:    $N^t=N^t-N_N^{t+1}-N_{P*,0}^{t+1}-N_{P*,1}^{t+1}$;
14:  end if
15:  for $k\in\{0,1\}$ do
16:    if $Update_k^t=True$ then
17:      Retrain classifier $H_k^t$ on $L\cup N_{P,k}^t$;
18:    end if
19:  end for
20:  $t=t+1$;
21: end while
22: Return the combined classifier $H_{combined}=H_0^t\cup H_1^t$.
Assume a partially labeled dataset has $|U|$ samples and $|C|$ condition attributes, where $|L|$ samples are labeled and $|N|$ samples are unlabeled. The time complexity of training each base classifier is $O(|C||U|)$. In the worst-case scenario, if only one useful unlabeled sample is learned by one classifier in each iteration, $|N|$ iterations are required. Therefore, the time complexity of Algorithm 2 is $O(|C||U|^2)$, and the space complexity is $O(|C||U|)$.
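The outer loop of Algorithm 2 can be sketched as follows for the binary case. This condensed illustration reuses co_decision from the earlier sketch, omits the error-rate constraint of step 10 and the per-classifier sample budget, and simply pseudo-labels each useful sample with the peer classifier's prediction; the classifier choice (GaussianNB), the assumption that labels are 0/1 integers, and all names are illustrative, not the authors' exact procedure.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_lab, y_lab, X_unlab, red0, red1, alpha, beta, delta, eps, max_iter=20):
    """Condensed sketch of Algorithm 2 for binary labels in {0, 1}."""
    views = [list(red0), list(red1)]                     # column indices of the two reducts
    extra_x = [np.empty((0, len(v))) for v in views]     # accumulated useful samples per view
    extra_y = [np.empty(0, dtype=int) for _ in views]
    pool = X_unlab.copy()
    clfs = [GaussianNB().fit(X_lab[:, views[k]], y_lab) for k in range(2)]
    for _ in range(max_iter):
        if len(pool) == 0:
            break
        probs = [clfs[k].predict_proba(pool[:, views[k]]) for k in range(2)]
        decisions = [co_decision(probs[0][i], probs[1][i], alpha, beta, delta, eps)
                     for i in range(len(pool))]
        useful = [i for i, d in enumerate(decisions) if d == "P"]
        useless = [i for i, d in enumerate(decisions) if d == "N"]
        if not useful:
            break                                        # neither classifier can be updated
        for k in range(2):
            peer = 1 - k
            labels = probs[peer][useful].argmax(axis=1)  # pseudo-label by the peer classifier
            extra_x[k] = np.vstack([extra_x[k], pool[useful][:, views[k]]])
            extra_y[k] = np.concatenate([extra_y[k], labels])
            clfs[k].fit(np.vstack([X_lab[:, views[k]], extra_x[k]]),
                        np.concatenate([y_lab, extra_y[k]]))
        pool = np.delete(pool, useful + useless, axis=0) # keep only the uncertain samples
    return clfs
```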

4. Empirical Analysis

In this section, we first test the effectiveness of the semi-supervised attribute reduction, and then the proposed model is compared with other semi-supervised learning methods. All experiments are conducted on a computer running the Windows 10 operating system with an Intel(R) Core(TM) i7-7700K CPU @ 4.20 GHz and 16 GB RAM.

4.1. Investigated Data Sets and Experiment Design

Sixteen UCI datasets are used in the experiments, and their details are presented in Table 2. The second column in Table 2 indicates the number of condition attributes, with the number of continuous attributes shown in brackets. The third and fourth columns display the sample size and the number of classes for each dataset, respectively. The fifth column indicates whether the dataset has missing values.
In the experiments, missing values in each dataset are replaced by the mean or mode of their corresponding attributes. Continuous attribute values are first normalized to the range of [0, 1], and then an equal-frequency discretization technique with five bins is used [46]. To accurately evaluate the performance of the selected methods, we use a 10-fold cross-validation technique. For example, suppose there are 1000 samples in the partially labeled data, with a class distribution of 30% positive samples and 70% negative samples. In each fold of cross validation, 90% of the samples are randomly selected as the training set, while the remaining 10% are used as the test set, i.e., 900 training samples and 100 test samples are generated, and the original class distribution (30%, 70%) is retained. Given a label rate of 10%, only 90 of the 900 training samples are selected as labeled samples, and the remaining 810 are treated as unlabeled samples. Finally, the average performance over the 10 folds is used as the final performance of the method on the given dataset.
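The splitting protocol described above can be sketched with scikit-learn as follows; the particular seeding and the use of train_test_split to carve out the labeled portion of each training fold are assumptions of this illustration.

```python
from sklearn.model_selection import StratifiedKFold, train_test_split

def semi_supervised_splits(X, y, label_rate=0.1, seed=0):
    """Yield (labeled, unlabeled, test) index arrays for stratified 10-fold CV."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        # keep `label_rate` of each training fold as labeled data, stratified by class
        lab_idx, unlab_idx = train_test_split(
            train_idx, train_size=label_rate, stratify=y[train_idx], random_state=seed)
        yield lab_idx, unlab_idx, test_idx
```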

4.2. Attribute Reduction for Partially Labeled Data with Pseudo Labels

The semi-supervised attribute reduction based on the granular conditional entropy is used in the experiments. Specifically, the method uses a heuristic algorithm to generate the optimal reduct of partially labeled data by combining the information granularity with conditional entropy. The results of the attribute reduction at a label rate of 10% are shown in Table 3, where the second column shows the number of attributes of the dataset before attribute reduction, and the third to fifth columns are the maximum, minimum, and average number of remaining attributes after attribute reduction in 10 cross validations, respectively. The sixth column displays the number of attributes obtained after attribute reduction at a label rate of 100%, that is, all data are labeled.
By observing Table 3, it can be found that, after attribute reduction, the completely irrelevant and some redundant attributes are excluded from the obtained attribute subsets, thus reducing the redundant information as much as possible while preserving the inherent information of the dataset. At the same time, on the datasets “biode”, “hepatitis”, and “ttt”, the minimum number of attributes in the reduct is nearly equivalent to that of the GT (ground truth), which implies that the proposed method can achieve the same attribute reduction performance as the fully supervised method. The proposed method achieves an average attribute reduction rate of 48.95% across all selected datasets, demonstrating its potential in reducing the number of attributes required for classification.

4.3. Effectiveness of the Proposed Co-Training Model

To demonstrate the performance of the proposed model, it is compared with classical semi-supervised methods, including self-training, co-training, and their extensions.
Classic self-training is a self-learning model. It first trains a base classifier on labeled data and then iteratively selects confident samples from the unlabeled data for learning until the stopping condition is met. Co-training is a multi-view model in which two classifiers learn from each other on unlabeled data, but it requires two views that are sufficient and independent. Nevertheless, such a condition is usually difficult to satisfy in practical problems. Fortunately, the work of Nigam et al. [47] demonstrated that even if the raw data are randomly split into two attribute subsets, the co-training classifier can still learn from unlabeled samples. Therefore, we divide the condition attribute set of each dataset into two disjoint subsets by randomly splitting the attributes in half. In addition, for a more comprehensive comparison, we set up self-training in two cases: self-training with a single view and self-training with two randomly divided views. Moreover, we evaluate single-view self-training in two cases: data after attribute reduction and data without attribute reduction. The settings of these models are shown in Table 4.
In Table 4, ST-1V and ST-2V denote the single-view self-training and two-view self-training, respectively, and ST-1VR denotes the single-view self-training after attribute reduction. In order to comprehensively compare the performance of the proposed model, we adopt a semi-supervised neighborhood discriminant index, which is a filter method that combines the supervised neighborhood discriminant index with unsupervised Laplacian information. CT-2V represents the classical co-training, while CT-TWD denotes the proposed three-way co-training model. To learn useful unlabeled samples, a threshold parameter needs to be set for ST-1V, ST-1VR, ST-2V, and CT-2V, and the model proposed in this study requires two pairs of parameters, where the first pair can be obtained based on the Bayesian minimum risk decision, while the second pair is calculated by the defined normalized entropy. For a simple and fair comparison, these parameters are all empirically set to  α  = 0.75,  β  = 0.55,  δ  = 0.80, and  ε  = 0.95. For ST-1V and ST-2V, the unlabeled sample with a prediction probability greater than  α  is selected for learning. For CT-2V, the unlabeled sample is used for learning when its prediction probability of one classifier is greater than  α , and the probability predicted by the other classifier is less than  β . For the CT-TWD in this study, the thresholds  α  and  β  are used to classify whether the unlabeled sample is useful, uncertain, or useless. It should be noted that when the average normalized entropy of the two classifiers for an unlabeled sample is less than  δ , the unlabeled sample is considered useful; when the average normalized entropy is greater than  δ  and less than  ε , the sample is determined to be uncertain; when the average normalized entropy is greater than  ε , the sample is considered useless. In the experiments, two types of classifiers, i.e., the K-nearest neighbor classifier with  K = 3  and the naive Bayes classifier, are used to evaluate the performance of the selected methods. Given a label rate  θ  = 10%, the results of the different methods are shown in Table 5 and Table 6.
In Table 5 and Table 6, the symbols “initial” and “final” denote the error rates of each model trained from labeled data and then improved by unlabeled data, respectively. All results in “initial” and “final” are obtained after averaging over 10-fold cross validation. In addition, for the convenience of comparison, the results with the lowest error rates are marked in bold. Table 7 and Table 8 provide the computation time of different comparison methods on KNN and naive Bayes classifiers. The row “avg.” represents the average error rates of the selected models computed from all the datasets.
By observing Table 5, Table 6, Table 7 and Table 8, it can be found that when the label rate is 10%, the initial performance of the ST-1VR model is better than that of ST-1V, and on some datasets, such as “ttt” (33.26%) and “vowel” (32.85%) in Table 5 and “cmc” (36.14%) in Table 6, it is even better than that of the proposed model CT-TWD, which shows the effectiveness of attribute reduction for semi-supervised learning. However, the improvement of both ST-1VR and ST-1V after learning unlabeled samples is not significant, and even worse performance is observed on many datasets. The two-view self-training (ST-2V) can learn useful information from unlabeled samples and outperforms the first two models. Combining Table 7 and Table 8, it can be found that the computation time of ST-2V is greater than that of ST-1VR and ST-1V, which indicates that two views yield better performance than a single view but require additional computation time. For most datasets, the classifier retrained on unlabeled samples performs better than the classifier trained on labeled data only, while the co-training model with two views (CT-2V) achieves better performance, improving by 2.50% with the KNN classifier and 2.20% with the naive Bayes classifier because of the mutual learning between the two classifiers. Its average error rates on the KNN classifier and the naive Bayes classifier are lower than those of ST-1V, ST-1VR, and ST-2V, which demonstrates the stability of CT-2V. In addition, CT-2V requires training two classifiers simultaneously, resulting in a slightly longer computation time. However, the results in Table 5 and Table 6 show that CT-2V attains average error rates of 33.42% on the KNN classifier and 31.65% on the naive Bayes classifier, which still leaves a large gap compared to the proposed model CT-TWD, with 30.60% on the KNN classifier and 29.61% on the naive Bayes classifier. In terms of computation time, although the average computation time (avg.) of CT-TWD in Table 7 and Table 8 is relatively large, at 4.0440 s on the KNN classifier and 2.5763 s on the naive Bayes classifier, considering the good performance of CT-TWD, the additional time cost is clearly acceptable.
To compare the differences among the methods more comprehensively, we also conduct experiments at different label rates, and the results are shown in Figure 2 and Figure 3.
As can be seen in Figure 2 and Figure 3, the proposed model CT-TWD can learn from unlabeled samples and achieve impressive performance against different models. ST-1V is a single-view semi-supervised learning model, and it can be found in the experiments that ST-1V performs poorly on most datasets; even worse performance occurs at higher label rates, such as “lymph” with the KNN classifier and “frogs” with the naive Bayes classifier. This may be because the initially labeled data are not representative, so the classifiers will mislabel unlabeled samples in the training process. Therefore, the classifier will learn the wrong classification information, which results in poor generalization of the final performance. ST-1VR is also a single-view semi-supervised self-learning model but performs attribute reduction on the dataset. Although its overall performance is poor, it outperforms ST-1V, which shows the effectiveness of the semi-supervised neighborhood discriminant index-based attribute reduction method. However, ST-1VR still has poor final performance with the limitations of the single-view model, such as “cmc” and “lymph” with the KNN classifier. ST-2V is a multi-view self-training model that uses randomly split subsets of attributes from the raw dataset to train the base classifiers, and a threshold is used to select useful samples to help the classifiers retrain themselves, but its final performance is not good. On the one hand, the two classifiers of ST-2V are self-taught. On the other hand, the poor quality of the randomly partitioned subsets of attributes also leads to the disappointing performance of ST-2V. Although CT-2V can make two base classifiers learn from each other through unlabeled samples to improve the performance, the two subspaces of CT-2V are randomly divided from the dataset. Therefore, the performance of the classifiers is not stable, resulting in CT-2V only performing better on some datasets.
Different from the selected comparison models, the CT-TWD uses the three-way co-decision model in the training process to classify unlabeled samples into useful, uncertain, and useless. The training set of each classifier is updated only when the unlabeled samples are useful and have a positive impact on the model performance. Such a sample selection mechanism ensures that CT-TWD can effectively use unlabeled samples to improve performance on most datasets at different label rates. For example, the proposed model achieves an improvement of 22.03% at a 30% label rate on the “vowel” dataset with the KNN classifier and an improvement of 13.55% at a 40% label rate on the “wine” dataset with the naive Bayes classifier, illustrating the potential of the proposed model for partially labeled data.
It should be noted that for some datasets, such as “cmc” with the naive Bayes classifier and “lymph” with the KNN classifier, the performance of the methods tends to decrease as the label rate increases. This is likely because the labeled data are not representative enough, thereby limiting the performance of the classifier as the data scale increases. Compared with the other models, the CT-TWD proposed in this study assigns pseudo labels 0 and 1 to the unlabeled samples to form two views of the data, so that both views retain the discriminative ability of the raw dataset. Therefore, the base classifiers obtained by CT-TWD are robust, which allows it to achieve better performance across all the datasets.

5. Conclusions

In real-world applications, annotating large amounts of data is often challenging, but collecting unlabeled data is relatively easy, which results in semi-supervised data with a small amount of labeled data and a large amount of unlabeled data. In this study, we proposed a simple yet effective strategy for generating pseudo labels for partially labeled data and developed a heuristic semi-supervised attribute reduction algorithm using a granular conditional entropy measure. To exploit useful unlabeled samples for learning, we combined the three-way decision with the normalized entropy and proposed a three-way co-decision model for partially labeled data. However, because the proposed model partitions the dataset using the pseudo labels 0 and 1, its advantages are significant mainly in binary classification problems, and it still has limitations in multi-class problems. Therefore, extending the model to multi-class problems will be future work. Also, exploring the semi-supervised model for discrete and continuous data is worthy of further investigation.

Author Contributions

Conceptualization, C.G.; Methodology, L.W. and C.G.; Software, L.W., J.Z. and J.W.; Data analysis, L.W.; Writing the original draft, L.W.; Review and editing, C.G., J.Z. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Shenzhen Science and Technology Program (No. JCYJ20210324094601005), the Natural Science Foundation of Guangdong Province, China (No. 2021A1515011861), the National Natural Science Foundation of China (Nos. 62076164 and 61806127), and Shenzhen Institute of Artificial Intelligence and Robotics for Society.

Data Availability Statement

Data are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, J.D.; Cheng, K.W.; Wang, S.H.; Morstatter, F.; Trevino, R.P.; Tang, J.L.; Liu, H. Feature Selection: A Data Perspective. ACM Comput. Surv. 2017, 50, 94. [Google Scholar]
  2. Thangavel, K.; Pethalakshmi, A. Dimensionality reduction based on rough set theory: A review. Appl. Soft Comput. 2009, 9, 1–12. [Google Scholar] [CrossRef]
  3. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Springer Science and Business Media: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  4. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356. [Google Scholar] [CrossRef]
  5. Hu, M.; Tsang, E.C.; Guo, Y.; Chen, D.; Xu, W. A novel approach to attribute reduction based on weighted neighborhood rough sets. Knowl.-Based Syst. 2021, 220, 106908. [Google Scholar] [CrossRef]
  6. Xu, W.H.; Li, W.T. Granular computing approach to two-way learning based on formal concept analysis in fuzzy datasets. IEEE Trans. Cybern. 2014, 46, 366–379. [Google Scholar] [CrossRef]
  7. Zhang, P.; Li, T.; Wang, G.; Luo, C.; Chen, H.; Zhang, J.; Wang, D.; Yu, Z. Multi-source information fusion based on rough set theory: A review. Inf. Fusion. 2021, 68, 85–117. [Google Scholar] [CrossRef]
  8. Pawlak, Z.; Wong, S.K.M.; Ziarko, W. Rough sets: Probabilistic versus deterministic approach. Int. J. Man-Mach. Stud. 1988, 29, 81–95. [Google Scholar] [CrossRef]
  9. Sun, L.; Wang, L.Y.; Ding, W.P.; Qian, Y.H.; Xu, J.C. Neighborhood multi-granulation rough sets-based attribute reduction using Lebesgue and entropy measures in incomplete neighborhood decision systems. Knowl.-Based Syst. 2020, 192, 105373. [Google Scholar] [CrossRef]
  10. Jiang, F.; Sui, Y.S.; Zhou, L. A relative decision entropy-based feature selection approach. Pattern Recognit. 2015, 48, 2151–2163. [Google Scholar] [CrossRef]
  11. Gao, C.; Lai, Z.H.; Zhou, J.; Wen, J.J.; Wong, W.K. Granular maximum decision entropy-based monotonic uncertainty measure for attribute reduction. Int. J. Approx. Reason. 2019, 104, 9–24. [Google Scholar] [CrossRef]
  12. Gao, C.; Lai, Z.H.; Zhou, J.; Zhao, C.R.; Miao, D.Q. Maximum decision entropy-based attribute reduction in decision-theoretic rough set model. Knowl.-Based Syst. 2018, 143, 179–191. [Google Scholar] [CrossRef]
  13. Yang, X.; Liang, S.; Yu, H.; Gao, S.; Qian, Y. Pseudo-label neighborhood rough set: Measures and attribute reductions. Int. J. Approx. Reason. 2019, 105, 112–129. [Google Scholar] [CrossRef]
  14. Yuan, Z.; Chen, H.M.; Li, T.R. Exploring interactive attribute reduction via fuzzy complementary entropy for unlabeled mixed data. Pattern Recognit. 2022, 127, 108651. [Google Scholar] [CrossRef]
  15. Xu, W.H.; Li, M.M.; Wang, X.Z. Information fusion based on information entropy in fuzzy multi-source incomplete information system. Int. J. Fuzzy Syst. 2017, 19, 1200–1216. [Google Scholar] [CrossRef]
  16. Liang, D.C.; Cao, W.; Xu, Z.S.; Wang, M.W. A novel approach of two-stage three-way co-opetition decision for crowdsourcing task allocation scheme. Inf. Sci. 2021, 559, 191–211. [Google Scholar] [CrossRef]
  17. Qian, J.; Liu, C.H.; Miao, D.Q.; Yue, X.D. Sequential three-way decisions via multi-granularity. Inf. Sci. 2020, 507, 606–629. [Google Scholar] [CrossRef]
  18. Xu, W.H.; Guo, Y.T. Generalized multigranulation double-quantitative decision-theoretic rough set. Knowl.-Based Syst. 2016, 105, 190–205. [Google Scholar] [CrossRef]
  19. Yang, J.L.; Yao, Y.Y. A three-way decision based construction of shadowed sets from Atanassov intuitionistic fuzzy sets. Inf. Sci. 2021, 577, 1–21. [Google Scholar] [CrossRef]
  20. Yao, Y.Y. Three-way granular computing, rough sets, and formal concept analysis. Int. J. Approx. Reason. 2020, 116, 106–125. [Google Scholar] [CrossRef]
  21. Yao, Y.Y. Tri-level thinking: Models of three-way decision. Int. J. Mach. Learn. Cybern. 2020, 11, 947–959. [Google Scholar] [CrossRef]
  22. Yao, Y.Y. Three-way decisions with probabilistic rough sets. Inf. Sci. 2010, 180, 341–353. [Google Scholar] [CrossRef] [Green Version]
  23. Yao, Y.Y. The superiority of three-way decisions in probabilistic rough set models. Inf. Sci. 2011, 181, 1080–1096. [Google Scholar] [CrossRef]
  24. Yue, X.D.; Chen, Y.F.; Miao, D.Q.; Fujita, H. Fuzzy neighborhood covering for three-way classification. Inf. Sci. 2020, 507, 795–808. [Google Scholar] [CrossRef]
  25. Yao, Y.Y. Three-way decision and granular computing. Int. J. Approx. Reason. 2018, 103, 107–123. [Google Scholar] [CrossRef]
  26. Li, J.H.; Huang, C.C.; Qi, J.J.; Qian, J.H.; Liu, W.Q. Three-way cognitive concept learning via multi-granularity. Inf. Sci. 2017, 378, 244–263. [Google Scholar] [CrossRef]
  27. Wang, G.Y.; Yu, H.; Li, T.R. Decision region distribution preservation reduction in decision-theoretic rough set model. Inf. Sci. 2014, 278, 614–640. [Google Scholar]
  28. Ren, R.S.; Wei, L. The attribute reductions of three-way concept lattices. Knowl.-Based Syst. 2016, 99, 92–102. [Google Scholar] [CrossRef]
  29. Huang, Q.Q.; Li, T.R.; Huang, Y.Y.; Yang, X. Incremental three-way neighborhood approach for dynamic incomplete hybrid data. Inf. Sci. 2020, 541, 98–122. [Google Scholar] [CrossRef]
  30. Zhang, X.Y.; Zhou, Y.H.; Tang, Y.; Fan, Y.R. Three-way improved neighborhood entropies based on three-level granular structures. Int. J. Mach. Learn. Cybern. 2022, 13, 1861–1890. [Google Scholar] [CrossRef]
  31. Kong, Q.Z.; Zhang, X.W.; Xu, W.H.; Long, B.H. A novel granular computing model based on three-way decision. Int. J. Approx. Reason. 2022, 144, 92–112. [Google Scholar] [CrossRef]
  32. Fang, Y.; Gao, L.; Liu, Z.H.; Yang, X. Generalized cost-sensitive approximate attribute reduction based on three-way decisions. J. Nanjing Univ. Sci. Technol. 2019, 43, 481–488. [Google Scholar]
  33. Miao, D.Q.; Gao, C.; Zhang, N.; Zhang, Z.F. Diverse reduct subspaces based co-training for partially labeled data. Int. J. Approx. Reason. 2011, 52, 1103–1117. [Google Scholar] [CrossRef] [Green Version]
  34. Wang, R.; Chen, D.G.; Kwong, S. Fuzzy-rough-set-based active learning. IEEE Trans. Fuzzy Syst. 2013, 22, 1699–1704. [Google Scholar] [CrossRef]
  35. Li, B.Y.; Xiao, J.M.; Wang, X.H. Feature selection for partially labeled data based on neighborhood granulation measures. IEEE Access 2019, 7, 37238–37250. [Google Scholar] [CrossRef]
  36. Liu, K.Y.; Yang, X.B.; Yu, H.L.; Mi, J.S.; Wang, P.X.; Chen, X.J. Rough set based semi-supervised feature selection via ensemble selector. Knowl.-Based Syst. 2019, 165, 282–296. [Google Scholar] [CrossRef]
  37. Pan, L.C.; Gao, C.; Zhou, J. Three-way decision-based tri-training with entropy minimization. Inf. Sci. 2022, 610, 33–51. [Google Scholar] [CrossRef]
  38. Ash, R.B. Information Theory; Courier Corporation: Chelmsford, MA, USA, 2012. [Google Scholar]
  39. Ashby, D.; Smith, A.F.M. Evidence-based medicine as Bayesian decision-making. Stat. Med. 2000, 19, 3291–3305. [Google Scholar] [CrossRef]
  40. Blum, A.; Mitchell, T. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998; pp. 92–100. [Google Scholar]
  41. Dai, D.; Li, H.X.; Jia, X.Y.; Zhou, X.Z.; Huang, B.; Liang, S.N. A co-training approach for sequential three-way decisions. Int. J. Mach. Learn. Cybern. 2020, 11, 1129–1139. [Google Scholar] [CrossRef]
  42. Wang, G.Y.; Yu, H.; Yang, D.C. Decision table reduction based on conditional information entropy. Chin. J. Comput. 2002, 25, 759–766. [Google Scholar]
  43. Zhu, X.J.; Goldberg, A.B. Introduction to Semi-Supervised Learning; Morgan and Claypool Publishers: Cambridge, MA, USA, 2009. [Google Scholar]
  44. Liang, J.Y.; Shi, Z.Z. The information entropy, rough entropy and knowledge granulation in rough set theory. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 2004, 12, 37–46. [Google Scholar] [CrossRef]
  45. Gao, C.; Zhou, J.; Miao, D.; Wen, J.J.; Yue, X.D. Three-way decision with co-training for partially labeled data. Inf. Sci. 2021, 544, 500–518. [Google Scholar] [CrossRef]
  46. Witten, I.H.; Frank, E. Data mining: Practical machine learning tools and techniques with Java implementations. ACM Sigm. Rec. 2002, 31, 76–77. [Google Scholar] [CrossRef]
  47. Nigam, K.; Ghani, R. Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management, McLean, VA, USA, 6–11 November 2000; pp. 86–93. [Google Scholar]
Figure 1. Framework of the three-way decision-based co-training with pseudo labels.
Figure 2. Error rates of comparison methods under different label rates when using KNN.
Figure 3. Error rates of comparison methods under different label rates when using Naive Bayes.
Table 1. Co-decision rules by the proposed model.

          | $a_P^1$  | $a_B^1$ | $a_N^1$
$a_P^0$   | P        | P       | $CoTWD$
$a_B^0$   | P        | B       | N
$a_N^0$   | $CoTWD$  | N       | N
Table 2. Investigated datasets.

Dataset Name | Number of Attributes | Number of Samples | Number of Classes | Missing Data
biodegradation (biode) | 41 (41) | 1055 | 2 | N
cardiotocography (cardio) | 21 (21) | 2126 | 10 | N
cmc (cmc) | 9 (2) | 1473 | 3 | N
frogs (frogs) | 22 (22) | 7195 | 10 | N
hepatitis (hepatitis) | 19 (6) | 155 | 2 | Y
hungarian (hungarian) | 13 (6) | 294 | 2 | Y
kr-vs-kp (krvskp) | 36 (0) | 3196 | 2 | N
lymph (lymph) | 18 (3) | 148 | 4 | N
newcylinder-bands (newcylinder) | 37 (18) | 540 | 2 | Y
pima (pima) | 8 (8) | 768 | 2 | N
quality-assessment-green (green) | 62 (62) | 98 | 2 | N
spectf (spectf) | 44 (44) | 269 | 2 | N
tic-tac-toe (ttt) | 9 (0) | 958 | 2 | N
vehicle (vehicle) | 18 (18) | 846 | 4 | N
vowel (vowel) | 13 (10) | 990 | 11 | N
wine (wine) | 13 (13) | 178 | 3 | N
Table 3. Results of semi-supervised attribute reduction based on granular conditional entropy.

Dataset Name | Raw | Reducts (Max) | Reducts (Min) | Reducts (Avg) | Ground Truth
biode | 41 | 15 | 12 | 13.8 | 12
cardio | 21 | 8 | 5 | 6.3 | 4
cmc | 9 | 9 | 8 | 8.3 | 7
frogs | 22 | 15 | 14 | 14.4 | 10
hepatitis | 19 | 11 | 9 | 10.1 | 9
hungarian | 13 | 4 | 3 | 3.2 | 3
krvskp | 36 | 32 | 30 | 31.2 | 29
lymph | 18 | 8 | 7 | 7.5 | 6
newcylinder | 37 | 17 | 16 | 16.7 | 15
pima | 8 | 6 | 5 | 5.6 | 4
green | 62 | 29 | 28 | 28.8 | 26
spectf | 44 | 14 | 13 | 13.2 | 10
ttt | 9 | 9 | 8 | 8.2 | 8
vehicle | 18 | 14 | 12 | 13.2 | 10
vowel | 13 | 11 | 10 | 10.9 | 10
wine | 13 | 5 | 4 | 4.4 | 4
avg. | 23.9 | 12.9 | 11.5 | 12.2 | 10.4
Table 4. Settings of comparison methods.

Methods | Generated Views
ST-1V | Original attribute set
ST-1VR | Attribute reduction
ST-2V | Random split attribute subsets
CT-2V | Random split attribute subsets
CT-TWD | Attribute reduction with pseudo-labeled data
Table 5. Error rates of comparison methods on KNN classifier.

Dataset Name | ST-1V (Initial/Final) | ST-1VR (Initial/Final) | ST-2V (Initial/Final) | CT-2V (Initial/Final) | CT-TWD (Initial/Final)
biode | 0.3233/0.3357 | 0.3482/0.3459 | 0.3747/0.3705 | 0.3176/0.3067 | 0.2800/0.2486
cardio | 0.3642/0.3722 | 0.3695/0.3588 | 0.3821/0.3722 | 0.2802/0.2889 | 0.2986/0.2858
cmc | 0.3651/0.3830 | 0.3535/0.3671 | 0.4509/0.4510 | 0.3959/0.3731 | 0.3667/0.3619
frogs | 0.4907/0.4936 | 0.4935/0.4913 | 0.5015/0.4996 | 0.4932/0.4932 | 0.4907/0.4882
hepatitis | 0.2733/0.2667 | 0.2667/0.2667 | 0.3104/0.3000 | 0.2800/0.2600 | 0.3067/0.2333
hungarian | 0.4493/0.4345 | 0.4441/0.4308 | 0.4204/0.4138 | 0.4276/0.4279 | 0.3759/0.3448
krvskp | 0.3564/0.3395 | 0.3402/0.3561 | 0.3914/0.3843 | 0.3401/0.3226 | 0.3276/0.3082
lymph | 0.3629/0.3500 | 0.3143/0.3288 | 0.3661/0.3557 | 0.3143/0.3143 | 0.2643/0.2429
newcylinder | 0.4426/0.4463 | 0.4459/0.4422 | 0.4705/0.4622 | 0.4444/0.4630 | 0.4389/0.4222
pima | 0.3237/0.3658 | 0.3278/0.3158 | 0.3846/0.3647 | 0.2974/0.2566 | 0.2408/0.2395
green | 0.4333/0.4474 | 0.3852/0.3885 | 0.4725/0.4667 | 0.3702/0.3556 | 0.3486/0.3345
spectf | 0.3374/0.3462 | 0.3061/0.3116 | 0.3812/0.3615 | 0.2862/0.2615 | 0.2538/0.2346
ttt | 0.3326/0.3395 | 0.3421/0.3326 | 0.3468/0.3411 | 0.3642/0.3602 | 0.3539/0.3400
vehicle | 0.3876/0.3977 | 0.3562/0.3482 | 0.3408/0.3369 | 0.2781/0.2764 | 0.2861/0.2607
vowel | 0.3715/0.3485 | 0.3129/0.3285 | 0.3674/0.3581 | 0.3455/0.3512 | 0.3464/0.3452
wine | 0.2624/0.2529 | 0.2354/0.2414 | 0.3248/0.2941 | 0.2476/0.2364 | 0.2353/0.2059
avg. | 0.3673/0.3700 | 0.3526/0.3534 | 0.3929/0.3833 | 0.3427/0.3342 | 0.3259/0.3060
Table 6. Error rates of comparison methods on Naive Bayes classifier.

Dataset Name | ST-1V (Initial/Final) | ST-1VR (Initial/Final) | ST-2V (Initial/Final) | CT-2V (Initial/Final) | CT-TWD (Initial/Final)
biode | 0.3833/0.3895 | 0.3494/0.3401 | 0.3638/0.3529 | 0.3376/0.3386 | 0.3257/0.3048
cardio | 0.2346/0.2369 | 0.2806/0.2792 | 0.2347/0.2311 | 0.2802/0.2797 | 0.2594/0.2264
cmc | 0.3960/0.3953 | 0.3621/0.3614 | 0.4275/0.4252 | 0.3959/0.4007 | 0.3809/0.3741
frogs | 0.4962/0.5022 | 0.4951/0.4912 | 0.4931/0.4925 | 0.4932/0.4844 | 0.4826/0.4826
hepatitis | 0.3740/0.3767 | 0.3769/0.3627 | 0.3862/0.3767 | 0.3815/0.3684 | 0.3725/0.3584
hungarian | 0.4910/0.4993 | 0.4655/0.4582 | 0.4884/0.4876 | 0.4676/0.4566 | 0.4585/0.4483
krvskp | 0.2831/0.2897 | 0.2884/0.2859 | 0.2944/0.2884 | 0.3001/0.2801 | 0.2727/0.2705
lymph | 0.2843/0.2986 | 0.3142/0.3075 | 0.3143/0.3129 | 0.3143/0.2943 | 0.2857/0.2762
newcylinder | 0.4941/0.4952 | 0.4918/0.4992 | 0.5074/0.4970 | 0.4644/0.4778 | 0.4506/0.4424
pima | 0.3266/0.3263 | 0.2672/0.2506 | 0.3332/0.3355 | 0.2674/0.2539 | 0.2408/0.2368
green | 0.3109/0.3111 | 0.2783/0.2670 | 0.3389/0.3222 | 0.2402/0.2267 | 0.2283/0.2186
spectf | 0.3246/0.3230 | 0.3195/0.3195 | 0.2992/0.2808 | 0.2862/0.2969 | 0.2476/0.2308
ttt | 0.3322/0.3368 | 0.3200/0.3284 | 0.3474/0.3339 | 0.3342/0.3216 | 0.3206/0.3163
vehicle | 0.1886/0.1810 | 0.1875/0.1857 | 0.1860/0.1831 | 0.1981/0.1852 | 0.1766/0.1667
vowel | 0.1131/0.0939 | 0.1028/0.0912 | 0.1129/0.1051 | 0.1112/0.1020 | 0.1075/0.0909
wine | 0.3018/0.3118 | 0.3056/0.3021 | 0.3053/0.3041 | 0.3076/0.2976 | 0.3042/0.2941
avg. | 0.3334/0.3355 | 0.3253/0.3206 | 0.3395/0.3331 | 0.3237/0.3165 | 0.3071/0.2961
Table 7. Computation time of comparison methods on KNN classifier (in seconds).

Dataset Name | ST-1V | ST-1VR | ST-2V | CT-2V | CT-TWD
biode | 0.7241 | 0.8591 | 0.8357 | 1.2159 | 1.7486
cardio | 1.4974 | 1.7692 | 2.7157 | 3.2531 | 4.2960
cmc | 1.1913 | 1.0838 | 1.2039 | 1.1448 | 1.9738
frogs | 8.5527 | 15.1085 | 18.9194 | 20.2269 | 20.6095
hepatitis | 0.0837 | 0.0947 | 0.1478 | 0.1771 | 0.2135
hungarian | 0.1721 | 0.1678 | 0.2480 | 0.3563 | 0.4245
krvskp | 2.4541 | 6.8275 | 2.7247 | 4.5600 | 5.8669
lymph | 0.0764 | 0.0973 | 0.1131 | 0.1320 | 0.1882
newcylinder | 0.2567 | 0.2988 | 0.3335 | 0.4090 | 0.4505
pima | 0.5108 | 0.6147 | 0.6824 | 0.6968 | 0.6937
green | 0.0522 | 0.0566 | 0.0717 | 0.0811 | 0.0859
spectf | 0.1430 | 0.2100 | 0.1855 | 0.2539 | 0.2629
ttt | 0.7839 | 0.7737 | 0.9943 | 0.9106 | 0.9787
vehicle | 0.5464 | 0.6936 | 1.1177 | 1.1366 | 1.1564
vowel | 1.0919 | 1.0220 | 1.4024 | 1.1234 | 1.3427
wine | 0.1033 | 0.1125 | 0.1553 | 0.1220 | 0.1485
avg. | 1.1400 | 1.8619 | 3.1851 | 3.5800 | 4.0440
Table 8. Computation time of comparison methods on Naive Bayes classifier (in seconds).

Dataset Name | ST-1V | ST-1VR | ST-2V | CT-2V | CT-TWD
biode | 0.6475 | 0.6839 | 0.8281 | 0.9782 | 1.0773
cardio | 1.3194 | 1.3785 | 1.5004 | 1.6889 | 1.7402
cmc | 0.7274 | 0.7704 | 0.9232 | 0.9902 | 1.0686
frogs | 6.1380 | 9.8430 | 11.2920 | 14.1749 | 14.9361
hepatitis | 0.0685 | 0.0748 | 0.0864 | 0.1038 | 0.1090
hungarian | 0.1281 | 0.1442 | 0.1454 | 0.2429 | 0.2635
krvskp | 2.9206 | 2.8730 | 2.4105 | 2.4737 | 2.6089
lymph | 0.0804 | 0.0714 | 0.0742 | 0.0844 | 0.0954
newcylinder | 0.2323 | 0.2806 | 0.3145 | 0.3498 | 0.3535
pima | 0.3683 | 0.3794 | 0.4796 | 0.6669 | 0.6978
green | 0.0397 | 0.0492 | 0.0618 | 0.0694 | 0.0752
spectf | 0.1349 | 0.1382 | 0.1986 | 0.2465 | 0.2790
ttt | 0.4538 | 0.4345 | 0.7001 | 0.6771 | 0.7709
vehicle | 0.4709 | 0.4555 | 0.7014 | 0.7113 | 0.8108
vowel | 0.5423 | 0.5481 | 0.5310 | 0.6838 | 0.7396
wine | 0.0760 | 0.0830 | 0.1128 | 0.1166 | 0.1374
avg. | 0.8967 | 1.1380 | 1.2725 | 2.4258 | 2.5763