Article

A Novel Ensemble Strategy Based on Determinantal Point Processes for Transfer Learning

by Ying Lv, Bofeng Zhang, Xiaodong Yue and Zhikang Xu
1 School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
2 School of Computer and Information Engineering, Shanghai Polytechnic University, Shanghai 201209, China
3 School of Computer Science and Technology, Kashi University, Kashi 844000, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4409; https://doi.org/10.3390/math10234409
Submission received: 17 October 2022 / Revised: 7 November 2022 / Accepted: 21 November 2022 / Published: 23 November 2022
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)

Abstract: Transfer learning (TL) aims to train a model for target domain tasks by using knowledge from different but related source domains. Most TL methods focus on improving the predictive performance of a single model across domains. Since domain differences cannot be avoided, the knowledge that can be transferred from the source domain to the target domain is limited, and the transfer model therefore has to predict out-of-distribution (OOD) data in the target domain. However, the prediction of a single model is unstable when dealing with OOD data, which can easily cause negative transfer. To solve this problem, we propose a parallel ensemble strategy based on Determinantal Point Processes (DPP) for transfer learning. In this strategy, we first propose an improved DPP sampling to generate training subsets with high transferability and diversity. Second, we use the subsets to train the base models. Finally, the base models are fused according to the adaptability of the subsets. To validate the effectiveness of the ensemble strategy, we couple it into traditional TL models and deep TL models and evaluate the transfer performance of the models on text and image data sets. The experimental results show that our proposed ensemble strategy can significantly improve the performance of the transfer model.

1. Introduction

Although traditional machine learning has a performance advantage in applications with rich annotation data and has been successfully applied in many fields [1,2], its performance is limited in applications with little annotated data. In addition, traditional machine learning relies on the assumption that the training and test data are independent and identically distributed. This assumption does not hold in many real-world applications [3,4], such as medical image analysis and autonomous driving.
A better solution to the above problem is transfer learning (TL), which aims to borrow knowledge from different but related source domains to train a transfer model for the target task [3,5]. It tries to equip models with the ability to learn by analogy, e.g., the skill of riding a bicycle can be used to learn to ride a motorcycle, and the ability to play the piano can be used to learn the guitar. Existing TL methods can be divided into four types, namely instance-based [6,7], feature-based [8,9], model-based [10,11], and deep learning-based [12,13].
Most TL approaches focus on reducing inter-domain differences to extract more useful shared knowledge for the target domain task and thus increase the transferability of the model. However, they ignore the limitations of using a single transfer model. Because the differences between the source and target domains cannot be completely avoided, the shared knowledge extracted from the source domain is limited. A single transfer model trained on this limited knowledge is often unstable when dealing with out-of-distribution (OOD) data, which may lead to negative transfer. In addition, ensemble learning has been successfully applied to traditional machine learning models to improve their robustness, but it has received less attention in transfer learning.
For transfer learning tasks, in addition to the diversity of instances, the transferability of instances is even more important because of the distribution difference between the source and target domains. The existing submodular function methods [14,15] do not guarantee that the selected subsets are suitable for the target domain, which can lead to negative transfer of the trained model.
To solve these problems, we propose a novel parallel ensemble strategy based on improved Determinantal Point Processes (DPP). DPP provides a class of precise probabilistic models for sample selection problems [16,17]. In this strategy, we first measure the transferability of instances in the source domain based on evidence theory and use the transferability to rewrite the correlation matrix of the DPP. This makes the training subsets obtained by DPP sampling highly adaptable to the target domain. Second, the base transfer models are trained on these subsets. Finally, we calculate weights from the adaptability of the subsets to combine the base models. In addition, the proposed ensemble strategy is independent of the transfer algorithm, so it can be used as a general ensemble strategy. The contributions are summarized as follows.
  • Designing an ensemble strategy with Determinantal Point Processes, which enhances the transfer performance and stability of models across domains.
  • Extending the DPP with transferability and diversity to make it suitable for transfer learning.
  • The ensemble strategy can be seen as a generic technique, which can be applied to different transfer algorithms.
The paper is organized as follows. Section 2 introduces related work. Section 3 introduces the ensemble strategy based on DPP. Section 4 presents the experimental results to validate that the proposed ensemble strategy is effective to improve the performances of multiple kinds of transfer learning methods. The conclusion is given in Section 5.

2. Related Work

According to a literature survey [2,3,18], most previous transfer learning (TL) methods can be organized into instance-based methods, feature-based methods, classifier-based methods, and deep learning-based methods.
In instance-based methods, most approaches aim to estimate instance weights by matching feature distributions across domains. Jiang and Zhai [19] proposed an intuitive instance-weighting method, which calculates the distribution difference between source and target instances with four parameters. Dai et al. [6] proposed TrAdaBoost, which tunes instance weights based on a boosting algorithm. In [20,21], the authors utilize kernel mean matching (KMM) to calculate the weights for reducing the difference between the source and target domains. Long et al. [22] proposed the Transfer Joint Matching (TJM) method by minimizing the maximum mean discrepancy (MMD). Yan et al. [23] proposed a weighted maximum mean discrepancy (WMMD) for transfer learning.
In feature-based methods, a feature transformation strategy is often adopted: each original feature is transformed into a new representation for transfer learning. The objective is to learn a new feature representation with some distribution matching metric between the source and target domains. Pan et al. [18] first introduced MMD into a transfer method called transfer component analysis (TCA). Further improving on MMD, Long et al. [24] designed joint distribution adaptation (JDA), which measures the difference of the joint distributions between domains. Gong et al. [25] proposed a geodesic flow kernel (GFK) based on manifold learning. Sun et al. [26] proposed correlation alignment (CORAL) for transfer learning.
The classifier-based methods focus on classifier adaptation. Yang et al. [27] modified the support vector machine so that it can be adapted to transfer learning tasks. Duan et al. [28] proposed multiple kernel learning (MKL) for transfer learning and, based on MKL, further proposed DTSVM [29] and DTMKL [10]. Long et al. and Cao et al. designed ARTL [30] and DMM [31], respectively, by introducing manifold regularization into model training.
In deep learning-based methods, fine-tuning a pre-trained model has become a common strategy in transfer learning [12,32]. It takes models well trained on a large dataset (e.g., ImageNet), such as CNNs, Transformer models, and BERT, as the base and uses target domain data to fine-tune the weights. In addition, some deep transfer models are designed by adding constraints on domain differences to the loss function, such as the coupled approximation neural network [33], joint adaptation network [34], manifold embedded distribution alignment [35], normalized squares maximization [36], and multi-representation adaptation network [37].

3. Ensemble Strategy Based on DPP Sampling

In this section, we first improve DPP sampling and apply it to generate subsets from the source domain. Second, we train the base models for the target domain on the selected subsets. Finally, we combine the base models with weights, where the weight of each base model is computed from the adaptability of its subset. For clarity, the notations are summarized in Table 1.

3.1. Subset Generation Based on DPP Sampling

As shown in Figure 1, the key step of the ensemble strategy is to generate subsets with high transferability and diversity from the source domain using DPP sampling. DPP provides a class of precise probabilistic models for sample selection problems, in which the sampling probabilities are computed from a correlation matrix of the items. In transfer learning, the performance of the ensemble model is determined by the transferability and diversity of the subsets. To ensure that DPP sampling increases the transferability and diversity of the subsets, we reformulate the correlation matrix with measures of sample transferability and diversity based on evidence theory.
To achieve this, we first revisit subset selection as a stochastic sampling process according to DPP and reformulate the selection of the source domain subset as DPP sampling based on transferability and diversity.
Given a source domain $D_s$, we randomly select a subset $C_S \subseteq D_s$ with a probability $P(C_S)$ that is determined by the transferability and diversity of the source domain instances $x^s$ in $C_S$:
$P(C_S) \propto P(\mathrm{transferability}(C_S), \mathrm{diversity}(C_S))$,  (1)
Definition 1. 
DPP Sampling. Given the source domain $D_s = \{x_1^s, x_2^s, \ldots, x_N^s\}$, a sampling possibility for each instance in $D_s$ can be defined by a point process $\mathcal{P}$. The probability of a subset $C_S$ being selected is
$P(C_S) = \det(K_{C_S})$,  (2)
where $K$ is referred to as the marginal kernel, $K_{C_S} = [K_{ij}]_{x_i^s, x_j^s \in C_S}$ denotes the sub-matrix of $K$ indexed by the source domain instances in $C_S$, and $\det(K_{C_S})$ is the determinant of this sub-matrix. For the empty set, $\det(K_{\emptyset}) = 1$.
In fact, the marginal kernel $K$ is difficult to construct. Following [38], an L-ensemble can be used to construct the DPP for sampling source domain instances.
Definition 2. 
DPP Sampling with L-ensemble. A sampling possibility for each instance in the source domain $D_s$ can be calculated by a point process $\mathcal{P}$ with an L-ensemble. The probability of a subset $C_S$ being selected is
$P(C_S) = \dfrac{\det(L_{C_S})}{\det(L + I)}$,  (3)
where $L$ is an $N \times N$ positive semidefinite matrix indexed by the instances of $D_s$, $I$ is the $N \times N$ identity matrix, and $\det(L + I) = \sum_{C \subseteq D_s} \det(L_C)$ normalizes the matrix determinants to probabilities.
Definition 3. 
k-DPP Sampling with L-ensemble. Suppose the sampled subset $C_S$ consists of $k$ source domain instances; the sampling probability of subset $C_S$ can be written as
$P^k(C_S) = \dfrac{\det(L_{C_S})}{\sum_{C \subseteq D_s, |C| = k} \det(L_C)}$,  (4)
where $|C_S| = k$ and $L_{C_S}$ is the $k \times k$ submatrix of $L$ indexed by $C_S$.
Suppose the correlation matrix has the eigendecomposition $L = \sum_{i=1}^{N} \lambda_i v_i v_i^{T}$, where $\lambda_i$ is the eigenvalue corresponding to the eigenvector $v_i$; then the probability of selecting a $k$-size subset $C_S$ is
$P^k(C_S) = \dfrac{\det(L_{C_S})}{\sum_{|C| = k} \det(L_C)} = \dfrac{\prod_{c_i \in C_S} \lambda_i}{\sum_{|C| = k} \prod_{c_j \in C} \lambda_j}$.  (5)
In summary, the k-DPP sampling probability is determined by the correlation matrix $L$, and different correlation matrices yield different sampling properties. To ensure that the selected subsets of the source domain have high transferability and diversity, we redefine the matrix $L$ in Section 3.2.
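To make the sampling probability in Definition 3 concrete, the following minimal sketch (assuming NumPy; the function names are ours, not from the paper) evaluates $P^k(C_S)$ for a candidate subset. It uses the fact that the normalizer $\sum_{|C|=k}\det(L_C)$ equals the $k$-th elementary symmetric polynomial of the eigenvalues of $L$.

```python
import numpy as np

def elementary_symmetric(eigvals, k):
    """e_k(lambda_1, ..., lambda_N): equals sum_{|C|=k} det(L_C),
    the k-DPP normalizer in Formula (4)."""
    e = np.zeros(k + 1)
    e[0] = 1.0
    for lam in eigvals:
        # update from high order to low so each eigenvalue is used at most once
        for j in range(k, 0, -1):
            e[j] += lam * e[j - 1]
    return e[k]

def kdpp_subset_probability(L, subset):
    """P^k(C_S) = det(L_{C_S}) / sum_{|C|=k} det(L_C), Formula (4)."""
    subset = np.asarray(subset)
    L_sub = L[np.ix_(subset, subset)]
    eigvals = np.linalg.eigvalsh(L)      # L is symmetric positive semidefinite
    return np.linalg.det(L_sub) / elementary_symmetric(eigvals, len(subset))
```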

3.2. Correlation Matrix L Construction

In this section, we define the transferability $m(\Omega \mid x_i^s; \Phi)$ and the diversity $s(x_i^s, x_j^s)$ of source domain instances and use them to rewrite the L-correlation matrix. Decomposing the matrix $L$ as a Gram matrix $L = B^{T} B$, the probability of k-DPP sampling can be calculated with the transferability $m(\Omega \mid x_i^s; \Phi)$ and the diversity $s(x_i^s, x_j^s)$ [38]. Suppose each column $B_i$ of $B$ has the form $B_i = m(\Omega \mid x_i^s; \Phi) \cdot \phi_i$, in which $\phi_i$ is the normalized feature vector; then the matrix $L$ can be defined as
$L = [L_{ij}]_{1 \le i, j \le N}$,  (6)
where $L_{ij} = m(\Omega \mid x_i^s; \Phi) \, \phi_i^{T} \phi_j \, m(\Omega \mid x_j^s; \Phi)$ and the inner product $\phi_i^{T} \phi_j \in [-1, +1]$ indicates the similarity between the source domain instances $x_i^s$ and $x_j^s$. We rewrite this as
$L = [L_{ij}] = \big[\, m(\Omega \mid x_i^s; \Phi) \, s(x_i^s, x_j^s) \, m(\Omega \mid x_j^s; \Phi) \,\big]_{1 \le i \le N, \, 1 \le j \le N}$,  (7)
where $s(x_i^s, x_j^s) = \phi_i^{T} \phi_j$.
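Given the per-instance transferability scores and a pairwise similarity matrix (defined in Sections 3.2.1 and 3.2.2 below), building $L$ according to Formula (7) amounts to rescaling the similarity matrix by the transferability on both sides. A minimal sketch, assuming NumPy (the function name is ours):

```python
import numpy as np

def build_correlation_matrix(transferability, similarity):
    """L_ij = m(Omega|x_i^s) * s(x_i^s, x_j^s) * m(Omega|x_j^s), Formula (7).
    transferability: vector of shape (N,) with the ignorance masses m(Omega|x_i^s).
    similarity: symmetric matrix of shape (N, N) with s(x_i^s, x_j^s)."""
    m = np.asarray(transferability, dtype=float).reshape(-1)
    S = np.asarray(similarity, dtype=float)
    L = m[:, None] * S * m[None, :]
    # symmetrize against numerical noise; L must remain positive semidefinite
    return 0.5 * (L + L.T)
```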

3.2.1. Transferability Measure of Instance with Evidence Theory

Evidence theory is a generalization of Bayesian theory to subjective probabilities [39,40,41]. In evidence theory, a mass function $m(\cdot)$ assigns masses to the elements of the power set of a frame of discernment $\Omega$. For classification tasks, $\Omega$ is the class label space and the mass function assigns masses to subsets of class labels. The mass assigned to the whole label space, $m(\Omega)$, means that all class labels are equally possible; it represents the probability of knowing nothing (ignorance) [42].
In transferred classification with domain adaptation, when the data come from the source domain and the label space comes from the target domain, the mass assigned to the whole label space represents the transferability of the source domain data with respect to the classification on the target domain. Based on this, we define the transferability measure for transfer learning as follows.
Suppose that $\Omega$ is the label space of the target domain, a data instance $x^s$ comes from the source domain, and $\Phi^t$ is an evidence set from the target domain for $x^s$. The transferability of the source domain instance $x^s$ with respect to the classification on the target domain is
$\mathrm{transferability} = m(\Omega \mid x^s; \Phi^t)$,  (8)
where $m(\Omega \mid x^s; \Phi^t)$ denotes the ignorance probability of $x^s$ about the classes of the target domain under a given evidence set $\Phi^t$.
Next, we introduce how to formulate $m(\Omega \mid x^s; \Phi^t)$. First, we construct the evidence set $\Phi^t$ according to the target domain $D_t$. Given a source domain instance $x^s$, its evidence set $\Phi^t$ can be written as a neighborhood surrounding $x^s$,
$\Phi^t = \{x_1^t, x_2^t, \ldots, x_n^t\}$,  (9)
where each $x_i^t$ comes from the target domain.
To achieve this, we design the objective function for obtaining an evidence set as follows:
$\Phi^t = \arg\min_{\Phi} \left\| \phi(x^s) - \frac{1}{|\Phi|} \sum_{x \in \Phi} \phi(x) \right\|_{\mathcal{H}}^2$,  (10)
where $\phi$ is the feature mapping. The optimal evidence set $\Phi^t$ can be found by a greedy algorithm over the labeled target domain.
Second, we further decompose $\Phi^t$ and refine the mass function to implement the uncertainty measure. Given $k$ classes, the evidence set $\Phi^t$ can be divided into class-wise subsets,
$\Phi^t = \{\Phi_1^t, \Phi_2^t, \ldots, \Phi_k^t\}$,  (11)
where $\Phi_k^t = \{x_{k1}^t, \ldots, x_{kl}^t\}$ is the evidence subset in which all target domain instances have the class label $y_k$, and $x_{kl}^t$ is the $l$th element in this evidence subset.
Through decomposing the evidence set, we can adopt Dempster’s rule to refine the ignorance mass $m(\Omega \mid x^s; \Phi^t)$ with multilevel evidence as
$m(\Omega \mid x^s; \Phi^t) = \bigoplus_{\Phi_k^t \in \Phi^t} m(\Omega \mid x^s; \Phi_k^t) = \bigoplus_{\Phi_k^t \in \Phi^t} \bigoplus_{x^t \in \Phi_k^t} m(\Omega \mid x^s; x^t)$,  (12)
in which the orthogonal sum $\oplus$ denotes the combination operator of Dempster’s rule [43]. The ignorance mass $m(\Omega \mid x^s; x_i^t)$ is calculated by
$m(\Omega \mid x^s; x_i^t) = 1 - \exp\!\big(-d(x^s, x_i^t)\big)$,  (13)
where the distance $d(\cdot,\cdot)$ is defined as
$d(x^s, x_i^t) = K(x^s, x^s) - 2K(x^s, x_i^t) + K(x_i^t, x_i^t)$,  (14)
where $K(\cdot,\cdot)$ is the radial basis function kernel.
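The computation in Formulas (12)–(14) can be sketched as follows, assuming NumPy, a simple RBF kernel with an illustrative width gamma, and a dictionary mapping each target class to its evidence instances; the function names and this grouping are our own illustration, and the construction of the evidence set via Formula (10) is omitted.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Radial basis function kernel K(x, y)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def ignorance_mass(x_s, evidence_by_class, gamma=1.0):
    """m(Omega | x^s; Phi^t): combined ignorance of x^s about the target label
    space, following Formulas (12)-(14). evidence_by_class maps each target
    class label to the list of its evidence instances in Phi^t."""
    class_ign, class_sup = [], []
    for _, instances in evidence_by_class.items():
        # within one class all evidence supports the same singleton, so
        # Dempster's rule simply multiplies the individual ignorance masses
        ign = 1.0
        for x_t in instances:
            d = rbf(x_s, x_s, gamma) - 2.0 * rbf(x_s, x_t, gamma) + rbf(x_t, x_t, gamma)
            ign *= 1.0 - np.exp(-d)          # Formula (13)
        class_ign.append(ign)
        class_sup.append(1.0 - ign)          # mass left on the class singleton
    # combine the class-level masses across classes (Dempster's rule with
    # focal elements {y_k} and Omega): conflict is removed by normalization
    total_ign = float(np.prod(class_ign))
    singletons = [class_sup[i] * np.prod([class_ign[j] for j in range(len(class_ign)) if j != i])
                  for i in range(len(class_ign))]
    normaliser = total_ign + float(np.sum(singletons))
    return total_ign / normaliser
```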

3.2.2. Diversity Measure of Instance

According to the characteristics of k-DPP sampling, if $s(x_i^s, x_j^s)$ measures the similarity of two instances, then similar source domain instances are less likely to be selected at the same time, which ensures the diversity of the selected subset. To achieve this, we adopt Normalized Mutual Information (NMI) to measure the similarity between two instances $x_i^s$ and $x_j^s$:
$s(x_i^s, x_j^s) = \mathrm{NMI}(x_i^s, x_j^s)$.  (15)
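NMI is defined for discrete variables, so computing Formula (15) on continuous feature vectors requires discretizing them first; the binning below is an illustrative choice of ours, assuming NumPy and scikit-learn, and is not prescribed by the paper.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi_similarity(x_i, x_j, n_bins=10):
    """s(x_i^s, x_j^s) via NMI, Formula (15), on binned feature vectors."""
    lo = min(x_i.min(), x_j.min())
    hi = max(x_i.max(), x_j.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    a = np.digitize(x_i, edges[1:-1])   # discretize both vectors on a shared grid
    b = np.digitize(x_j, edges[1:-1])
    return normalized_mutual_info_score(a, b)
```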

3.3. Model Ensemble

Algorithm 1 lists the main steps of the ensemble strategy. Using the improved DPP sampling, we obtain $m$ subsets $T = \{T_1, T_2, \ldots, T_m\}$ from the source domain $D_s$. For each subset $T_i$, we train a base model $f_i$ for the target domain. By repeating this process, we obtain a set of base models $F = \{f_1, f_2, \ldots, f_m\}$. We combine the base models by
$f(x) = \sum_{i=1}^{|T|} w_i f_i(x)$,  (16)
where $w_i$ is the weight that represents the transferability of the base model with respect to the target domain. We calculate the weight $w_i$ from the adaptability of the corresponding subset with respect to the target domain,
$w_i = \dfrac{\mathrm{Ent}(T_i)}{\sum_{i=1}^{|T|} \mathrm{Ent}(T_i)}$,  (17)
where $\mathrm{Ent}(T_i)$ is the adaptability of the subset with respect to the target domain task, defined as
$\mathrm{Ent}(T_i) = \dfrac{1}{|T_i|} \sum_{x \in T_i} m(\Omega \mid x)$.  (18)
In addition, the proposed ensemble strategy is a general technique, and existing transfer learning (TL) methods can be embedded into it.
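As a rough illustration of Formulas (16)–(18), the following sketch combines scikit-learn-style base models (any estimator exposing predict_proba) by weighted soft voting; the helper names are ours, and subset_masses is assumed to hold the per-instance ignorance masses $m(\Omega \mid x)$ of each subset.

```python
import numpy as np

def subset_weights(subset_masses):
    """w_i = Ent(T_i) / sum_j Ent(T_j), Formula (17), with Ent(T_i) taken as the
    average ignorance mass over the instances of subset T_i, Formula (18)."""
    ent = np.array([np.mean(masses) for masses in subset_masses])
    return ent / ent.sum()

def ensemble_predict(models, weights, X):
    """f(x) = sum_i w_i f_i(x), Formula (16), as weighted soft voting over
    the base models' class-probability outputs."""
    probs = sum(w * model.predict_proba(X) for model, w in zip(models, weights))
    return np.argmax(probs, axis=1)
```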
Algorithm 1 Ensemble strategy based on DPP sampling for transfer learning.
Input: Source domain $D_s$, target domain $D_t$;
Output: $f(x) = \sum_{i=1}^{|T|} w_i f_i(x)$;
1: for each $x^s$ in $D_s$ do
2:    Construct an evidence set $\Phi$ for $x^s$ with labeled data instances in the target domain $D_t$;
3:    Decompose the evidence set $\Phi$ and formulate the mass function $m(\cdot \mid x^s; \Phi)$ with multilevel evidence;
4:    Hierarchically compute $m(\Omega \mid x^s; \Phi)$ to measure the transferability of $x^s$ for transfer learning according to Formula (12);
5: end for
6: Construct the similarity matrix of the source domain instances using Formula (15);
7: Construct the correlation matrix $L$ of the DPP based on transferability and diversity;
8: for each $i$ in $|T|$ do
9:    Decompose the matrix $L$ into eigenvalues and eigenvectors;
10:   Calculate the probability $P(T_i)$ of selecting a subset $T_i$;
11:   Generate the subset $T_i$ according to the DPP sampling probability $P(T_i)$;
12:   Calculate the transferability of subset $T_i$ as its weight $w_i$ according to Formula (17);
13:   Train the transfer model $f_i$ based on $T_i$;
14: end for
15: return $f(x) = \sum_{i=1}^{|T|} w_i f_i(x)$.

4. Experiments

To verify the effectiveness of the ensemble strategy, we couple the ensemble strategy into traditional TL methods and deep transfer methods, and test the transfer performance of the TL model on various kinds of data, including Amazon product reviews, Office+Caltech data sets and Office-Home data sets. The descriptions of the data sets are listed below.

4.1. Data Sets

Amazon product reviews is a cross-domain text data set for transfer learning evaluation [44]. The data set includes four domains: books (denoted B), DVDs (denoted D), electronics (denoted E), and kitchen appliances (denoted K). Each domain contains 1000 positive reviews and 1000 negative ones. From this data set, we construct 12 cross-domain sentiment classification tasks: B−D, B−E, B−K, D−B, D−E, D−K, E−B, E−D, E−K, K−B, K−D, and K−E, where the word before ‘−’ corresponds to the source domain and the word after ‘−’ corresponds to the target domain.
The Office+Caltech data set, introduced by Gong et al. [25], is generated from Office and Caltech-256, two benchmark data sets widely adopted for visual domain adaptation evaluation. Office consists of 4563 images in 31 categories. Caltech-256 is a standard database for object recognition consisting of 30,607 images in 256 categories. In the experiments, we use the smaller Office+Caltech data set, which includes four domains, Amazon (denoted A), Webcam (denoted W), DSLR (denoted D), and Caltech-256 (denoted C), and covers 10 classes. There are 8 to 151 samples per category per domain and 2533 images in total. From this data set, we construct 9 cross-domain classification tasks: A−C, A−W, C−A, C−W, D−A, D−C, D−W, W−A, and W−C.
The Office-Home data set was created to evaluate domain adaptation algorithms for object recognition using deep learning [13]. It consists of images from four different domains: Artistic images (denoted A; paintings, sketches, and/or artistic depictions), Clip Art (denoted C; clipart images), Product images (denoted P; images without background), and Real-World images (denoted R; regular images captured with a camera). The data set has a total of 15,500 images, and each domain contains images of 65 object categories typically found in office and home settings. From this data set, we construct 12 cross-domain classification tasks: A−C, A−P, A−R, C−A, C−P, C−R, P−A, P−C, P−R, R−A, R−C, and R−P.

4.2. Experimental Study on Traditional Transfer Learning Methods

In this experiment, to verify the effectiveness of our proposed ensemble strategy, we couple it with 9 traditional Transfer Learning (TL) methods and compare the classification results with and without the ensemble strategy. In addition, we compare different ensemble strategies in which the base models are trained on subsets of the source domain generated, respectively, by the improved DPP sampling, by information gain, and by random sampling. The traditional TL methods include: transfer component analysis (TCA) [18], correlation alignment (CORAL) [26], geodesic flow kernel (GFK) [25], joint distribution adaptation (JDA) [24], kernel mean matching (KMM) [20], scatter component analysis (SCA) [45], Balanced Distribution Adaptation (BDA) [46], Manifold Embedded Distribution Alignment (MEDA) [35], and practically easy transfer learning (EasyTL) [47]. For any TL method ‘∗’, we denote the ensemble strategy based on the improved DPP sampling as ‘E-∗’, the one based on information gain (IG) as ‘I-∗’, and the one based on random sampling as ‘R-∗’. The abbreviations are summarized in Table 2.

4.2.1. Experimental Setting

In each cross-domain text sentiment classification task, we apply the BERT model to extract the features of the review texts [48]. In each cross-domain image classification task, we utilize deep convolutional activation features (DeCAF6 features) to represent all the images, in which the outputs of the 6th layer of the deep convolutional neural network are transformed into 4096-dimensional features [35]. In DPP sampling and random sampling, the number of subsets is set to 10, and the size of each subset is 70% of the size of the source domain.

4.2.2. Test on Text Data

In this test, we construct 12 cross-domain sentiment classification tasks from Amazon product reviews, i.e., B−D, B−E, B−K, D−B, D−E, D−K, E−B, E−D, E−K, K−B, K−D, and K−E.
As shown in Table 3, we run the TL methods with and without the ensemble strategy to generate the sentiment classification results. It is clear that the TL methods with our proposed ensemble strategy achieve the best performance on all cross-domain text classification tasks. Specifically, using the ensemble strategy improves the average accuracies of the TL methods by 4.45%, 4.39%, 3.42%, 4.07%, 6.46%, 4.47%, 2.65%, and 3.18%, respectively. In addition, the KMM method achieves the largest performance improvement, 6.46%, by embedding the ensemble strategy. The reason is that KMM is sensitive to domain differences and is unstable when predicting out-of-distribution (OOD) data; using the ensemble strategy enhances the performance of KMM on OOD data. These results clearly demonstrate that our ensemble strategy can improve the performance of the single model on the text data set.
To further validate the effectiveness of our proposed method on the text data set, we compare different ensemble strategies in which the base models are trained on subsets of the source domain generated, respectively, by the improved DPP sampling, by information gain, and by random sampling. The classification accuracies are listed in Table 4. In random sampling, we randomly select 10 subsets to train the base models, and the size of each subset is 70% of the size of the source domain.
As shown in Table 4, the average classification accuracies of the TL methods with the proposed ensemble strategy on the 12 tasks are 81.09%, 74.42%, 76.94%, 80.81%, 84.07%, 78.36%, 79.45%, and 81.55%, respectively. Compared with the ensemble strategy based on random sampling, the improved DPP sampling improves the accuracies of the different cross-domain sentiment classification tasks by 4.98%, 4.41%, 3.77%, 4.28%, 7.17%, 5.41%, 3.59%, and 3.22%. Compared with the ensemble strategy based on information gain, the improved DPP sampling improves the accuracies by 3.69%, 3.34%, 2.60%, 3.07%, 4.65%, 3.48%, 2.84%, and 2.83%; in particular, TCA, JDA, and BDA, which adopt the maximum mean discrepancy metric to minimize the differences between the source and target domains, achieve the largest improvements. Based on the reported experimental results, our proposed ensemble strategy with DPP sampling is effective and can significantly enhance the transfer performance of TL methods on text data.

4.2.3. Test on Image Data

In this test, we further validate the proposed ensemble strategy on image data. The image cross-domain classification tasks are constructed from the Office+Caltech data set, including A−C, A−W, C−A, C−W, D−A, D−C, D−W, W−A, and W−C.
We run the TL methods with and without the ensemble strategy to generate the image classification results. The results are shown in Table 5: in all the cross-domain image classification tasks, our proposed ensemble strategy achieves better performance than the single TL methods. The TL methods with the ensemble strategy gain significant performance improvements of 4.96%, 2.34%, 4.79%, 5.16%, 3.52%, 2.28%, 2.06%, and 3.65% compared with the single TL methods.
To further validate the effectiveness of our proposed ensemble strategy on the image data set, we compare different ensemble strategies in which the base models are trained on subsets of the source domain generated, respectively, by the improved DPP sampling, by information gain, and by random sampling. As shown in Table 6, using the ensemble strategy with DPP sampling, the TL methods achieve average classification accuracies of 86.18%, 84.96%, 85.12%, 86.20%, 83.85%, 86.39%, 91.87%, and 82.68% on the cross-domain image data sets, respectively. Compared with the ensemble strategy based on random sampling, the ensemble strategy with DPP sampling gains significant performance improvements of 6.24%, 2.94%, 5.27%, 5.78%, 4.10%, 2.97%, 2.99%, and 4.19%. Compared with the ensemble strategy based on information gain, the improved DPP sampling improves the accuracies of the different cross-domain image classification tasks by 4.78%, 2.13%, 3.23%, 4.01%, 2.83%, 2.03%, 2.47%, and 3.53%. The experimental results reveal that our proposed ensemble strategy with DPP sampling can improve the transfer performance of TL methods on image data sets.

4.3. Experimental Study on Deep Transfer Model

Besides the traditional transfer learning methods, we also verify that our proposed ensemble strategy is effective for improving deep transfer models. In the experiment, we integrate the ensemble strategy into 5 deep transfer models, including the deep adaptation network (DAN) [49], the domain-adversarial neural network (DANN) [50], the joint adaptation network (JAN) [34], the multi-representation adaptation network (MRAN) [37], and the deep subdomain adaptation network (DSAN) [51]. We construct 12 cross-domain classification tasks from the Office-Home data set.
The experiment consists of two parts: (1) comparing the performance of the deep transfer models with and without the ensemble strategy; (2) comparing different ensemble strategies in which the base models are trained on subsets of the source domain generated, respectively, by the improved DPP sampling, by information gain, and by random sampling. The results are listed in Table 7 and Table 8. As before, for any TL method ‘∗’, we denote the ensemble strategy based on the improved DPP sampling as ‘E-∗’, the one based on information gain (IG) as ‘I-∗’, and the one based on random sampling as ‘R-∗’; the abbreviations are summarized in Table 2.

4.3.1. Experimental Setting

We implement the experiments using PyTorch (version 1.3 or higher) on a cluster of NVIDIA A100 GPUs. For model training, we use stochastic gradient descent (SGD) to optimize the networks. In DPP sampling and random sampling, the number of subsets is set to 10, and the size of each subset is 70% of the size of the source domain.

4.3.2. Experimental Results

As shown in Table 7, using the ensemble strategy improves the classification accuracies of the deep transfer models by 2.01%, 1.75%, 2.35%, 1.69%, and 1.74%, respectively, compared with the single models. The performance of our proposed ensemble strategy is better than that of the compared methods on most of the cross-domain classification tasks, which indicates that our proposed ensemble strategy can be integrated into deep transfer models and further improve their performance.
To further validate the effectiveness of our ensemble strategy on the deep transfer models, we compare different ensemble strategies in which the base models are trained on subsets of the source domain generated, respectively, by the improved DPP sampling, by information gain, and by random sampling. Table 8 lists the results. We can observe that our proposed ensemble strategy with DPP sampling outperforms random sampling: the average accuracies of the ensemble strategy with DPP sampling are higher by 2.17%, 2.58%, 2.98%, 3.22%, and 2.40% than those of the ensemble strategy with random sampling, respectively. Compared with the ensemble strategy based on information gain, the improved DPP sampling improves the accuracies of the different cross-domain image classification tasks by 1.81%, 2.19%, 2.11%, 2.12%, and 1.74%. These results further validate the effectiveness of our proposed strategy in improving the performance of deep transfer models.

5. Conclusions

In this article, we proposed a novel ensemble strategy based on improved DPP sampling. Specifically, we first rewrote the correlation matrix of the DPP with transferability and diversity. Second, we used the improved DPP sampling to select k subsets from the source domain. Finally, we trained the base models with the selected subsets and used the transferability of the subsets to combine the base models. The proposed strategy is a general preprocessing technique: by coupling the ensemble strategy into a transfer learning model, we can improve the robustness and generalization of the transfer model. Experiments on text and image data sets validate that our proposed ensemble strategy improves the performances of various kinds of transfer learning methods. Our ensemble strategy based on the improved DPP is limited by instance transferability, which can affect its performance if there are huge differences between the source and target domains. Moving forward, we plan to extend the ensemble strategy to handle transfer learning with multiple source domains and domain adaptation on open sets.

Author Contributions

Conceptualization, Y.L. and B.Z.; methodology, Y.L.; data curation, Z.X.; writing—original draft preparation, Y.L.; writing—review and editing, B.Z. and X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
  2. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76. [Google Scholar] [CrossRef]
  3. Zhang, J.; Li, W.; Ogunbona, P.; Xu, D. Recent advances in transfer learning for cross-dataset visual recognition: A problem-oriented perspective. ACM Comput. Surv. (CSUR) 2019, 52, 1–38. [Google Scholar] [CrossRef] [Green Version]
  4. Jiang, J.; Shu, Y.; Wang, J.; Long, M. Transferability in Deep Learning: A Survey. arXiv 2022, arXiv:2201.05867. [Google Scholar]
  5. Iman, M.; Rasheed, K.; Arabnia, H.R. A Review of Deep Transfer Learning and Recent Advancements. arXiv 2022, arXiv:2201.09679. [Google Scholar]
  6. Dai, W.; Yang, Q.; Xue, G.R.; Yu, Y. Boosting for transfer learning. In Proceedings of the 24th International Conference on Machine Learning, Corvalis, OR, USA, 20–24 June 2007; pp. 193–200. [Google Scholar]
  7. Chen, M.; Weinberger, K.Q.; Blitzer, J. Co-training for domain adaptation. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; pp. 2456–2464. [Google Scholar]
  8. Courty, N.; Flamary, R.; Tuia, D.; Rakotomamonjy, A. Optimal transport for domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1853–1865. [Google Scholar] [CrossRef]
  9. Fernando, B.; Habrard, A.; Sebban, M.; Tuytelaars, T. Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 2–8 December 2013; pp. 2960–2967. [Google Scholar]
  10. Duan, L.; Tsang, I.W.; Xu, D. Domain transfer multiple kernel learning. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 465–479. [Google Scholar] [CrossRef]
  11. Karbalayghareh, A.; Qian, X.; Dougherty, E.R. Optimal Bayesian transfer learning. IEEE Trans. Signal Process. 2018, 66, 3724–3739. [Google Scholar] [CrossRef] [Green Version]
  12. Bengio, Y. Deep learning of representations for unsupervised and transfer learning. In Proceedings of the ICML Workshop on Unsupervised and Transfer Learning, Bellevue, WA, USA, 2 July 2012; pp. 17–36. [Google Scholar]
  13. Venkateswara, H.; Chakraborty, S.; Panchanathan, S. Deep-learning systems for domain adaptation in computer vision: Learning transferable feature representations. IEEE Signal Process. Mag. 2017, 34, 117–129. [Google Scholar] [CrossRef]
  14. Wei, K.; Iyer, R.K.; Wang, S.; Bai, W.; Bilmes, J.A. Mixed robust/average submodular partitioning: Fast algorithms, guarantees, and applications. Adv. Neural Inf. Process. Syst. 2015, 28, 2233–2241. [Google Scholar]
  15. Qi, J.; Tejedor, J. Robust submodular data partitioning for distributed speech recognition. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2254–2258. [Google Scholar]
  16. Liu, W.; Yue, X.; Zhong, C.; Zhou, J. Clustering Ensemble Selection with Determinantal Point Processes. In Proceedings of the International Conference on Neural Information Processing, Sydney, NSW, Australia, 12–15 December 2019; pp. 621–633. [Google Scholar]
  17. Yue, X.; Xiao, X.; Chen, Y. Robust neighborhood covering reduction with determinantal point process sampling. Knowl.-Based Syst. 2020, 188, 105063. [Google Scholar] [CrossRef]
  18. Pan, S.J.; Tsang, I.W.; Kwok, J.T.; Yang, Q. Domain adaptation via transfer component analysis. IEEE Trans. Neural Netw. 2010, 22, 199–210. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Jiang, J.; Zhai, C. Instance weighting for domain adaptation in NLP. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, 23–30 June 2007; pp. 264–271. [Google Scholar]
  20. Huang, J.; Gretton, A.; Borgwardt, K.M.; Scholkopf, B.; Smola, A.J. Correcting Sample Selection Bias by Unlabeled Data. Adv. Neural Inf. Process. Syst. 2006, 19, 601–608. [Google Scholar]
  21. Chu, W.S.; De la Torre, F.; Cohn, J.F. Selective transfer machine for personalized facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 3515–3522. [Google Scholar]
  22. Long, M.; Wang, J.; Ding, G.; Sun, J.; Yu, P.S. Transfer joint matching for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1410–1417. [Google Scholar]
  23. Yan, H.; Ding, Y.; Li, P.; Wang, Q.; Xu, Y.; Zuo, W. Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2272–2281. [Google Scholar]
  24. Long, M.; Wang, J.; Ding, G.; Sun, J.; Yu, P.S. Transfer feature learning with joint distribution adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 2–8 December 2013; pp. 2200–2207. [Google Scholar]
  25. Gong, B.; Shi, Y.; Sha, F.; Grauman, K. Geodesic flow kernel for unsupervised domain adaptation. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2066–2073. [Google Scholar]
  26. Sun, B.; Feng, J.; Saenko, K. Return of frustratingly easy domain adaptation. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  27. Yang, J.; Yan, R.; Hauptmann, A.G. Cross-domain video concept detection using adaptive svms. In Proceedings of the 15th ACM international conference on Multimedia, Augsburg, Germany, 25–29 September 2007; pp. 188–197. [Google Scholar]
  28. Duan, L.; Tsang, I.W.; Xu, D.; Maybank, S.J. Domain transfer svm for video concept detection. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1375–1381. [Google Scholar]
  29. Duan, L.; Xu, D.; Tsang, I.W.H.; Luo, J. Visual event recognition in videos by learning from web data. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1667–1680. [Google Scholar] [CrossRef]
  30. Long, M.; Wang, J.; Ding, G.; Pan, S.J.; Philip, S.Y. Adaptation regularization: A general framework for transfer learning. IEEE Trans. Knowl. Data Eng. 2013, 26, 1076–1089. [Google Scholar] [CrossRef]
  31. Cao, Y.; Long, M.; Wang, J. Unsupervised domain adaptation with distribution matching machines. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  32. Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Zhang, N.; Tzeng, E.; Darrell, T. Decaf: A deep convolutional activation feature for generic visual recognition. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 647–655. [Google Scholar]
  33. Feng, C.; Zhong, C.; Wang, J.; Sun, J.; Yokota, Y. CANN: Coupled Approximation Neural Network for Partial Domain Adaptation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Gold Coast, QLD, Australia, 1–5 November 2021; pp. 464–473. [Google Scholar]
  34. Long, M.; Zhu, H.; Wang, J.; Jordan, M.I. Deep transfer learning with joint adaptation networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 2208–2217. [Google Scholar]
  35. Wang, J.; Feng, W.; Chen, Y.; Yu, H.; Huang, M.; Yu, P.S. Visual domain adaptation with manifold embedded distribution alignment. In Proceedings of the 26th ACM international conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 402–410. [Google Scholar]
  36. Zhang, W.; Zhang, X.; Liao, Q.; Yang, W.; Lan, L.; Luo, Z. Robust normalized squares maximization for unsupervised domain adaptation. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual, 19–23 October 2020; pp. 2317–2320. [Google Scholar]
  37. Zhu, Y.; Zhuang, F.; Wang, J.; Chen, J.; Shi, Z.; Wu, W.; He, Q. Multi-representation adaptation network for cross-domain image classification. Neural Netw. 2019, 119, 214–221. [Google Scholar] [CrossRef]
  38. Kulesza, A.; Taskar, B. Learning determinantal point processes. arXiv 2011, arXiv:1202.3738. [Google Scholar]
  39. Dempster, A.P. A generalization of Bayesian inference. J. R. Stat. Soc. Ser. B (Methodol.) 1968, 30, 205–232. [Google Scholar] [CrossRef]
  40. Denoeux, T. 40 years of Dempster-Shafer theory. Int. J. Approx. Reason. 2016, 79, 1–6. [Google Scholar] [CrossRef]
  41. Yager, R.R.; Liu, L. Classic Works of the Dempster-Shafer Theory of Belief Functions; Springer: Berlin/Heidelberg, Germany, 2008; Volume 219. [Google Scholar]
  42. Liu, W.; Yue, X.; Chen, Y. Trusted Multi-View Deep Learning with Opinion Aggregation. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 22 February–1 March 2022; pp. 7585–7593. [Google Scholar]
  43. Shafer, G. A mathematical theory of evidence turns 40. Int. J. Approx. Reason. 2016, 79, 7–25. [Google Scholar] [CrossRef]
  44. Blitzer, J.; Dredze, M.; Pereira, F. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, 23–30 June 2007; pp. 440–447. [Google Scholar]
  45. Ghifary, M.; Balduzzi, D.; Kleijn, W.B.; Zhang, M. Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1414–1430. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Wang, J.; Chen, Y.; Hao, S.; Feng, W.; Shen, Z. Balanced distribution adaptation for transfer learning. In Proceedings of the 2017 IEEE International Conference on Data Mining (ICDM), New Orleans, LA, USA, 18–21 November 2017; pp. 1129–1134. [Google Scholar]
  47. Wang, J.; Chen, Y.; Yu, H.; Huang, M.; Yang, Q. Easy Transfer Learning By Exploiting Intra-Domain Structures. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 1210–1215. [Google Scholar]
  48. Bose, T.; Illina, I.; Fohr, D. Unsupervised domain adaptation in cross-corpora abusive language detection. In Proceedings of the SocialNLP 2021-The 9th International Workshop on Natural Language Processing for Social Media, Virtual, 10 June 2021. [Google Scholar]
  49. Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 97–105. [Google Scholar]
  50. Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; pp. 1180–1189. [Google Scholar]
  51. Zhu, Y.; Zhuang, F.; Wang, J.; Ke, G.; Chen, J.; Bian, J.; Xiong, H.; He, Q. Deep subdomain adaptation network for image classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1713–1722. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Ensemble strategy based on DPP sampling for transfer learning.
Table 1. Notations used in this paper.
Notation  Description
$D_s$, $D_t$  source domain, target domain
$x^s$, $x^t$  instance of the source/target domain
$\Phi^t$  evidence set
$\Phi_k^t$  evidence subset
$\Omega$  label space
$\oplus$  Dempster's combination rule
$K(\cdot)$  kernel function
$\phi(\cdot)$  feature mapping function
$m(\cdot)$  mass function
$d(\cdot)$  distance function
$L$  correlation matrix
Table 2. Abbreviations and descriptions used in the experiments.
Abbreviation  Description
E-∗  The ensemble strategy with improved DPP sampling
I-∗  The ensemble strategy with information gain
R-∗  The ensemble strategy with random sampling
Table 3. Cross-domain sentiment classification accuracies on the text data set generated by the traditional TL methods with and without the ensemble strategy.
Methods B−D B−E B−K D−B D−E D−K E−B E−D E−K K−B K−D K−E Ave acc
TCA 77.76 75.54 78.74 76.05 76.38 79.34 73.35 73.66 79.74 73.05 77.26 78.74 76.63
E-TCA 81.21 83.31 84.79 81.95 82.41 82.13 76.60 79.89 82.57 78.10 79.28 80.80 81.09
CORAL 70.76 66.21 70.00 73.05 68.70 71.96 69.90 65.71 72.35 67.45 68.61 75.68 70.03
E-CORAL 75.79 73.48 74.77 76.57 73.51 74.25 76.90 71.15 75.36 73.52 71.35 76.44 74.42
GFK 75.76 72.00 73.50 71.85 68.96 75.70 72.60 71.11 76.20 73.75 74.21 76.58 73.52
E-GFK 78.00 77.39 76.42 79.13 74.98 79.02 77.23 73.41 78.05 75.69 75.08 78.91 76.94
JDA 77.26 75.93 78.09 77.65 76.03 78.29 72.65 72.16 80.14 75.05 77.56 80.32 76.76
E-JDA 80.09 82.14 84.23 80.31 81.71 81.99 77.38 77.71 82.88 80.05 79.81 81.39 80.81
KMM 83.76 79.02 75.90 80.50 68.51 76.45 73.70 77.86 80.39 74.25 75.96 85.00 77.61
E-KMM 85.98 81.44 85.47 85.22 82.79 85.82 79.00 84.08 84.52 82.59 84.16 87.78 84.07
BDA 75.01 73.04 75.75 74.55 71.80 75.30 71.40 71.61 78.49 71.15 71.66 76.93 73.89
E-BDA 79.22 77.51 82.41 80.01 78.06 78.26 75.70 74.93 81.76 77.11 74.88 80.44 78.36
SCA 79.41 78.82 78.14 74.95 77.53 76.00 73.15 71.86 79.79 73.70 75.71 82.41 76.79
E-SCA 83.52 79.88 79.80 82.01 78.30 78.15 76.45 74.80 82.22 74.89 79.00 84.33 79.45
EasyTL 76.76 76.08 82.14 83.90 80.91 85.22 76.40 72.61 86.37 73.25 70.86 75.93 78.37
E-EasyTL 79.44 83.06 83.21 85.09 85.66 85.00 78.13 77.77 86.93 77.49 77.81 79.03 81.55
Table 4. Cross-domain sentiment classification accuracies on the text data set generated by the ensemble strategy with improved DPP sampling (“E-∗”), information gain (“I-∗”), and random sampling (“R-∗”).
Methods B−D B−E B−K D−B D−E D−K E−B E−D E−K K−B K−D K−E Ave acc
R-TCA 75.29 79.14 78.06 75.10 74.89 77.61 71.25 74.19 77.20 74.33 77.00 79.20 76.11
I-TCA 77.12 80.45 81.11 77.09 76.54 77.00 73.62 75.04 77.68 75.76 78.4 78.93 77.40
E-TCA 81.21 83.31 84.79 81.95 82.41 82.13 76.60 79.89 82.57 78.10 79.28 80.80 81.09
R-CORAL 69.12 63.44 71.93 76.25 70.98 70.67 71.18 65.45 73.14 65.71 65.47 76.84 70.01
I-CORAL 69.89 64.38 72.05 77.22 71.14 72.53 71.97 68.39 74.26 66.83 67.92 76.42 71.08
E-CORAL 75.79 73.48 74.77 76.57 73.51 74.25 76.90 71.15 75.36 73.52 71.35 76.44 74.42
R-GFK 75.92 73.17 71.29 70.22 68.91 73.49 71.89 72.94 75.82 72.97 75.04 76.40 73.17
I-GFK 77.04 73.9 73.81 74.62 70.46 75.18 73.55 72.87 76.34 74.11 74.23 75.99 74.34
E-GFK 78.00 77.39 76.42 79.13 74.98 79.02 77.23 73.41 78.05 75.69 75.08 78.91 76.94
R-JDA 77.80 76.47 79.14 75.03 76.45 77.36 71.08 71.78 79.94 76.17 77.14 80.02 76.53
I-JDA 77.95 78.76 81.89 77.64 76.09 79.09 73.4 73.03 80.16 77.38 77.69 79.79 77.74
E-JDA 80.09 82.14 84.23 80.31 81.71 81.99 77.38 77.71 82.88 80.05 79.81 81.39 80.81
R-KMM 80.74 78.77 74.12 80.06 69.21 75.82 72.69 77.37 81.02 75.48 75.11 82.45 76.90
I-KMM 82 79.82 79.67 82.61 74.76 79.37 72.7 79.63 81.97 79.58 78.18 82.77 79.42
E-KMM 85.98 81.44 85.47 85.22 82.79 85.82 79.00 84.08 84.52 82.59 84.16 87.78 84.07
R-BDA 73.52 72.13 74.21 72.78 70.71 73.48 72.05 70.40 78.25 70.74 72.09 75.00 72.95
I-BDA 74.97 74.44 76.51 74.39 74.16 75.44 74.19 72.41 78.97 73.96 72.95 76.13 74.88
E-BDA 79.22 77.51 82.41 80.01 78.06 78.26 75.70 74.93 81.76 77.11 74.88 80.44 78.36
R-SCA 80.20 76.24 75.33 71.27 78.14 75.91 72.41 72.19 78.80 73.55 75.39 80.83 75.86
I-SCA 79.88 77.41 76.03 75.19 77.23 76.07 74.08 71.98 80.8 73.26 75.9 81.51 76.61
E-SCA 83.52 79.88 79.80 82.01 78.30 78.15 76.45 74.80 82.22 74.89 79.00 84.33 79.45
R-EasyTL 76.80 76.11 81.74 83.47 81.20 84.31 75.24 73.37 86.57 72.50 72.11 76.54 78.33
I-EasyTL 75.91 79.38 80.15 84.11 82.35 83.79 76.04 72.54 85.15 74.28 74.05 76.87 78.72
E-EasyTL 79.44 83.06 83.21 85.09 85.66 85.00 78.13 77.77 86.93 77.49 77.81 79.03 81.55
Table 5. Cross-domain classification accuracies on the Office+Caltech image data sets generated by the traditional TL methods with and without the ensemble strategy.
Methods A−C A−W C−A C−W D−A D−C D−W W−A W−C Ave acc
TCA 75.69 75.59 89.77 74.92 89.24 73.46 98.30 80.38 73.64 81.22
E-TCA 83.41 81.17 90.14 89.78 90.28 79.55 98.41 84.51 78.37 86.18
CORAL 83.7 74.58 89.98 78.64 85.70 79.16 99.66 77.14 74.98 82.62
E-CORAL 84.75 81.79 91.25 81.43 87.88 81.52 97.10 82.33 76.57 84.96
GFK 76.85 68.47 88.41 80.68 85.80 74.09 98.64 75.26 74.8 80.33
E-GFK 81.76 80.25 90.02 86.66 88.78 80.04 98.87 82.37 77.35 85.12
JDA 75.07 70.85 89.67 80.00 88.31 73.91 98.31 80.27 72.93 81.04
E-JDA 82.72 81.17 92.45 87.33 89.91 78.51 98.39 86.85 78.43 86.20
KMM 83.08 74.24 91.23 80.34 84.34 71.86 98.98 71.81 67.14 80.34
E-KMM 84.81 78.88 92.35 83.47 85.52 78.16 98.00 80.70 72.77 83.85
BDA 83.79 74.92 89.46 82.03 88.83 81.30 99.31 80.85 76.49 84.11
E-BDA 86.51 76.68 91.94 86.59 88.78 83.22 97.02 86.70 80.08 86.39
MEDA 87.71 85.76 91.07 84.07 92.90 87.89 98.98 93.21 86.73 89.81
E-MEDA 87.73 89.63 92.61 92.71 93.81 89.97 98.80 93.67 87.93 91.87
EasyTL 81.30 72.88 90.50 74.91 83.00 73.64 93.22 74.53 67.31 79.03
E-EasyTL 83.09 80.54 90.58 81.25 85.79 79.11 91.66 79.58 72.54 82.68
Table 6. Cross-domain classification accuracies on the Office+Caltech image data sets generated by the ensemble strategy with improved DPP sampling (“E-∗”), information gain (“I-∗”), and random sampling (“R-∗”).
Methods A−C A−W C−A C−W D−A D−C D−W W−A W−C Ave acc
R-TCA 72.87 74.82 86.38 75.10 88.17 70.05 97.28 81.19 73.61 79.94
I-TCA 75.94 76.33 86.91 76.71 88.53 73.42 97.11 82.4 75.27 81.40
E-TCA 83.41 81.17 90.14 89.78 90.28 79.55 98.41 84.51 78.37 86.18
R-CORAL 83.80 74.22 87.29 78.37 84.96 80.01 97.21 78.00 74.31 82.02
I-CORAL 83.86 76.14 87.94 79.15 85.29 79.88 97.72 80.34 75.1 82.82
E-CORAL 84.75 81.79 91.25 81.43 87.88 81.52 97.10 82.33 76.57 84.96
R-GFK 77.02 70.21 86.39 80.22 84.07 74.34 97.21 75.52 73.69 79.85
I-GFK 78.34 75.75 87.56 82.44 86.59 76.64 97.19 77.78 74.7 81.89
E-GFK 81.76 80.25 90.02 86.66 88.78 80.04 98.87 82.37 77.35 85.12
R-JDA 75.43 71.43 88.53 79.29 87.19 72.46 96.17 80.04 73.15 80.41
I-JDA 77.95 73 89.39 81.48 87.47 74.86 97.09 82.66 75.81 82.19
E-JDA 82.72 81.17 92.45 87.33 89.91 78.51 98.39 86.85 78.43 86.20
R-KMM 83.22 74.63 90.17 78.78 83.79 71.11 96.74 70.04 69.26 79.75
I-KMM 83.46 75 89.61 80.35 83.91 73.71 97.56 75.52 70.06 81.02
E-KMM 84.81 78.88 92.35 83.47 85.52 78.16 98.00 80.70 72.77 83.85
R-BDA 83.33 73.81 87.35 81.97 88.91 80.05 96.95 81.04 77.38 83.42
I-BDA 84.08 74.87 88.23 84.66 87.52 81.45 96.07 83.52 78.81 84.36
E-BDA 86.51 76.68 91.94 86.59 88.78 83.22 97.02 86.70 80.08 86.39
R-MEDA 87.77 85.21 90.55 84.83 91.10 86.64 95.80 92.68 85.36 88.88
I-MEDA 87.14 84.24 91.32 87.49 91.93 87.26 96.9 92 86.37 89.41
E-MEDA 87.73 89.63 92.61 92.71 93.81 89.97 98.80 93.67 87.93 91.87
R-EasyTL 80.74 71.44 89.77 73.82 83.79 71.55 92.56 74.18 68.56 78.49
I-EasyTL 81.49 73.57 89.91 74.01 83.17 74.37 90.09 75.52 70.23 79.15
E-EasyTL 83.09 80.54 90.58 81.25 85.79 79.11 91.66 79.58 72.54 82.68
Table 7. Cross-domain classification accuracies on the Office-Home image data sets generated by the deep transfer methods with and without the ensemble strategy.
Methods A−C A−P A−R C−A C−P C−R P−A P−C P−R R−A R−C R−P Ave acc
DAN 43.60 57.00 67.90 45.80 56.50 60.40 44.00 43.60 67.70 63.10 51.50 74.30 56.28
E-DAN 45.96 60.05 69.72 48.51 58.91 62.04 46.33 44.86 68.19 64.51 53.60 76.77 58.29
DANN 45.60 59.30 70.10 47.00 58.50 60.90 46.10 43.70 68.50 63.20 51.80 76.80 57.63
E-DANN 45.60 61.13 72.15 47.90 60.51 61.84 48.88 46.09 70.19 66.65 52.09 79.41 59.37
JAN 45.90 61.20 68.90 50.40 59.70 61.00 45.80 43.40 70.30 63.90 52.40 76.80 58.31
E-JAN 46.61 64.08 70.53 52.94 62.66 61.99 48.85 47.39 73.64 65.55 54.10 79.56 60.66
MRAN 53.80 68.60 75.00 57.30 68.50 68.30 58.50 54.60 77.50 70.40 60.00 82.20 66.23
E-MRAN 56.66 70.14 77.63 59.46 69.78 70.04 59.14 55.07 77.93 73.58 62.21 83.31 67.91
DSAN 54.40 70.80 75.40 60.40 67.80 68.00 62.60 55.90 78.50 73.80 60.60 83.10 67.61
E-DSAN 56.17 72.68 75.96 62.26 69.77 69.89 64.44 58.83 79.84 74.09 62.34 85.92 69.35
Table 8. Cross-domain image classification accuracies on the Office-Home image data sets generated by the ensemble strategy with improved DPP sampling (“E-∗”), information gain (“I-∗”), and random sampling (“R-∗”).
Methods A−C A−P A−R C−A C−P C−R P−A P−C P−R R−A R−C R−P Ave acc
R-DAN 43.67 55.20 66.38 47.78 56.31 61.85 43.48 43.84 65.50 61.49 52.71 75.19 56.12
I-DAN 43.74 56.22 67.42 46.94 56.79 61.7 44.51 43.26 66.62 62.36 52.42 75.8 56.48
E-DAN 45.96 60.05 69.72 48.51 58.91 62.04 46.33 44.86 68.19 64.51 53.60 76.77 58.29
R-DANN 45.69 57.11 67.44 45.83 58.61 58.37 46.73 42.19 69.46 62.74 52.69 74.57 56.79
I-DANN 45.38 57.78 68.74 46.22 58.89 59.61 46.96 44.46 69.07 62.85 51.14 75.1 57.18
E-DANN 45.60 61.13 72.15 47.90 60.51 61.84 48.88 46.09 70.19 66.65 52.09 79.41 59.37
R-JAN 43.89 60.01 66.31 49.4 59.35 61.88 44.67 45.19 70.45 61.19 51.40 78.37 57.68
I-JAN 44.1 62.36 67.58 50.29 60.79 61.46 45.58 46.42 70.86 62.28 52.25 78.66 58.55
E-JAN 46.61 64.08 70.53 52.94 62.66 61.99 48.85 47.39 73.64 65.55 54.10 79.56 60.66
R-MRAN 53.33 68.10 73.96 55.37 66.43 67.17 56.39 52.25 75.41 68.88 58.04 81 64.69
I-MRAN 54.92 68.8 74.52 56.38 67.92 68.36 57.47 53.39 76.18 70.08 59.53 81.97 65.79
E-MRAN 56.66 70.14 77.63 59.46 69.78 70.04 59.14 55.07 77.93 73.58 62.21 83.31 67.91
R-DSAN 54.51 70.17 74.78 61.28 66.2 66.68 62.90 54.35 78.77 72.35 59.91 81.55 66.95
I-DSAN 55.28 70.64 74.82 61.68 68.22 67.19 63.82 55.46 79.04 73.14 60.01 82.05 67.61
E-DSAN 56.17 72.68 75.96 62.26 69.77 69.89 64.44 58.83 79.84 74.09 62.34 85.92 69.35
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
