Domain Adaptation Based on Semi-Supervised Cross-Domain Mean Discriminative Analysis and Kernel Transfer Extreme Learning Machine
Abstract
1. Introduction
- We introduce CDMA into SDA and propose SCDMDA. It extracts shared discriminative features across domains by using CDMA to minimize the marginal and conditional distribution discrepancies between domains, while applying SDA to exploit the label information and the original structure information (a minimal sketch of the two discrepancy measures follows this list).
- We present KTELM, which incorporates a cross-domain mean approximation constraint into KELM for classification under domain shift.
- We obtain a classifier with knowledge-transfer ability by combining SCDMDA and KTELM and evaluate it on public image datasets. The results show the superiority of our approach.
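To make the two discrepancy measures concrete, the following is a minimal numerical sketch (not the authors' implementation): an MMD-style term compares only the two domain means, whereas the CDMA-style term is assumed here to pull every individual sample toward the mean of the opposite domain, so per-sample deviations contribute. The function names and the exact CDMA form are illustrative assumptions.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """MMD-style discrepancy: squared distance between the two domain means."""
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def cdma_like(Xs, Xt):
    """CDMA-style discrepancy (assumed form): average squared distance of each
    sample to the mean of the opposite domain, so individual differences count."""
    mu_s, mu_t = Xs.mean(axis=0), Xt.mean(axis=0)
    return (np.sum((Xs - mu_t) ** 2) / len(Xs)
            + np.sum((Xt - mu_s) ** 2) / len(Xt))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, size=(100, 5))   # toy source features
    Xt = rng.normal(0.5, 1.5, size=(120, 5))   # toy target features with a shift
    print("MMD-style :", mmd_linear(Xs, Xt))
    print("CDMA-style:", cdma_like(Xs, Xt))
```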
2. Preliminary
2.1. Semi-Supervised Discriminant Analysis (SDA)
2.2. Extreme Learning Machine (ELM) and Kernel Extreme Learning Machine (KELM)
2.3. Domain Adaptation (DA)
3. Proposed Method
3.1. Feature-Based Adaptation of SCDMDA
3.1.1. Cross-Domain Mean Approximation (CDMA)
3.1.2. Semi-Supervised Cross-Domain Mean Discriminative Analysis (SCDMDA)
Algorithm 1. SCDMDA

Input: Source and target datasets, the subspace dimension, the trade-off parameters, the classifier KTELM, and the maximum number of iterations.
Output: The projection matrix and the target prediction.
Step 1: Construct the matrices defined in Equations (1) and (9), and initialize the CDMA terms to 0.
Step 2: Initialize the iteration counter.
Step 3: Solve Equation (12) (or Equation (14) in the nonlinear case) to obtain the projection matrix.
Step 4: Project the source and target data into the low-dimensional subspace with the projection matrix.
Step 5: Learn a KTELM on the projected source data and classify the projected target data to obtain the target pseudo-labels.
Step 6: Using the target data and its pseudo-labels, update the CDMA terms (and their kernelized counterparts in the nonlinear case) according to Equations (2) and (3).
Step 7: Increment the iteration counter.
Step 8: If the maximum number of iterations is reached or the pseudo-labels no longer change, output the projection matrix and the target prediction; otherwise, return to Step 3.
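The loop structure of Algorithm 1 can be summarized with the sketch below. It illustrates only the iterative projection and pseudo-labelling scheme: the projection step and the classifier are simple stand-ins (plain PCA and a nearest-class-mean rule), not the SCDMDA eigenproblem of Equations (12)/(14) or KTELM, and all function names are hypothetical.

```python
import numpy as np

def fit_projection(X, d):
    """Stand-in for Step 3: a plain PCA basis of dimension d (NOT the SCDMDA eigenproblem)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                                  # D x d projection matrix

def nearest_class_mean(Zs, ys, Zt):
    """Stand-in for Step 5: classify target samples by the nearest source class mean."""
    classes = np.unique(ys)
    means = np.stack([Zs[ys == c].mean(axis=0) for c in classes])
    dists = ((Zt[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

def scdmda_loop(Xs, ys, Xt, d=10, max_iter=10):
    yt_pred = None
    for _ in range(max_iter):                        # Steps 2-8
        A = fit_projection(np.vstack([Xs, Xt]), d)   # Step 3 (stand-in)
        Zs, Zt = Xs @ A, Xt @ A                      # Step 4: project both domains
        new_pred = nearest_class_mean(Zs, ys, Zt)    # Step 5 (stand-in for KTELM)
        if yt_pred is not None and np.array_equal(new_pred, yt_pred):
            break                                    # Step 8: pseudo-labels stopped changing
        yt_pred = new_pred
        # Step 6 would rebuild the CDMA terms from (Xt, yt_pred) here.
    return A, yt_pred

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Xs = rng.normal(size=(60, 20)); ys = rng.integers(0, 3, 60)   # toy source data
    Xt = rng.normal(0.3, 1.0, size=(80, 20))                      # toy shifted target data
    A, yt = scdmda_loop(Xs, ys, Xt, d=5)
    print(A.shape, yt[:10])
```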
3.2. Classifier-Based Adaptation of KTELM
3.3. Discussion
- From Equations (11) and (18), it can be seen that, compared with SDA, SCDMDA adopts CDMA to reduce the distribution discrepancy between domains, which makes it better suited to domain adaptation. Moreover, as a semi-supervised feature extraction method, SCDMDA pays more attention to individual sample information, aided by the category separability of LDA and the original structure information preserved by the graph regularizers.
- Compared with MMD, CDMA reflects individual differences. In our method, CDMA mines individual sample information, which makes it a more effective measure of the interdomain distribution difference than MMD. In addition, we verify the improved method with the k-NN, KELM, and KTELM classifiers.
- In the classical ELM, the key is to solve the output weights that connect the hidden layer and the output layer, and the optimal solution is obtained from Equation (5). However, for samples whose distributions differ across domains, the solution of Equation (5) is no longer optimal. Adding domain adaptation to the ELM yields Equation (16), from which the optimal output weights can be obtained. By imposing the cross-domain mean constraint on the source domain to achieve cross-domain knowledge transfer, the interdomain distribution difference is reduced, which is why KTELM attains higher domain adaptation accuracy. A minimal sketch of the closed-form KELM step that KTELM builds on is given below.
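For reference, the classical KELM solution can be written in closed form: with the kernel matrix K over the training samples, one-hot targets T, and regularization parameter C, the output weights are beta = (I/C + K)^(-1) T, and prediction uses f(x) = k(x, X_train) beta. The sketch below implements only this standard KELM step, not Equation (16); the cross-domain mean constraint of KTELM is indicated only by a comment, and the RBF kernel choice and function names are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=10.0, gamma=0.5):
    """Closed-form KELM output weights: beta = (I/C + K)^(-1) T."""
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)   # one-hot target matrix
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(np.eye(len(X)) / C + K, T)
    # A KTELM-style extension would add a cross-domain mean term to this linear system.
    return beta, classes

def kelm_predict(X_train, beta, classes, X_test, gamma=0.5):
    """Decision function f(x) = k(x, X_train) beta; pick the class with the largest output."""
    scores = rbf_kernel(X_test, X_train, gamma) @ beta
    return classes[scores.argmax(axis=1)]
```

In Step 5 of Algorithm 1, kelm_fit would be trained on the projected source features and kelm_predict applied to the projected target features to produce the pseudo-labels.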
4. Experiments and Analysis
4.1. Dataset Description
4.2. Experiment Setting
4.3. Results and Analysis
- Table 2 summarizes the accuracies of all methods on the Office+Caltech, PIE, and USPS+MNIST datasets with shallow feature representations; the best result in each row is shown in bold. Our methods SCDMDA (0–2) outperform all other compared methods. The total average accuracy of SCDMDA2 over the 34 tasks is 74.9%, a 28.75 percentage point improvement over the baseline SDA (46.15%). This verifies that CDMA is effective in reducing the between-domain distribution discrepancy and improving the knowledge transfer ability of SDA.
- SCDMDA2 outperforms SCDMDA0 and SCDMDA1, indicating that KTELM and KELM have higher accuracy than 1-NN, and that the cross-domain mean constraint on the source domain is effective for domain adaptation.
- Similarly, the semi-supervised feature extraction method SCDMDA0 works better than STDA-CMC. One explanation is that CDMA is a better distribution discrepancy criterion than MMD, so our method is particularly effective at reducing domain bias. SCDMDA0 and STDA-CMC are both better than JDA, JDA-CDMAW, and W-JDA, which illustrates that the category separability of LDA and the original structure information of the graph regularizers are important for the classification task.
- SDA, GFK, JDA, STDA-CMC, W-JDA, JDA-CDMAW, and SCDMDA (0–2) outperform 1-NN in most cases, showing the importance of feature extraction in classification tasks. Most domain adaptation techniques, such as JDA, STDA-CMC, W-JDA, JDA-CDMAW, and SCDMDA (0–2), achieve higher accuracy than SDA because they extract cross-domain shared features even when few or no target samples follow the source distribution. JDA, STDA-CMC, W-JDA, JDA-CDMAW, and SCDMDA (0–2) further improve shared feature extraction through joint distribution adaptation.
4.4. Sensitivity Analysis
4.5. Visualization Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wang, X.Z.; Xing, H.J.; Li, Y.; Hua, Y.L.; Dong, C.R.; Pedrycz, W. A study on relationship between generalization abilities and fuzziness of base classifiers in ensemble learning. IEEE Trans. Fuzzy Syst. 2014, 23, 1638–1654.
- Li, S.; Huang, J.Q.; Hua, X.S.; Zhang, L. Category dictionary guided unsupervised domain adaptation for object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 2–9 February 2021; Volume 35, pp. 1949–1957.
- Zhang, B.Y.; Liu, Y.Y.; Yuan, H.W.; Sun, L.J.; Ma, Z. A joint unsupervised cross-domain model via scalable discriminative extreme learning machine. Cogn. Comput. 2018, 10, 577–590.
- Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A. A kernel two-sample test. J. Mach. Learn. Res. 2012, 13, 723–773.
- Zhuang, F.; Cheng, X.; Luo, P. Supervised representation learning with double encoding-layer autoencoder for transfer learning. ACM Trans. Intell. Syst. Technol. (TIST) 2017, 9, 1–17.
- Shi, Q.; Zhang, Y.; Liu, X. Regularised transfer learning for hyperspectral image classification. IET Comput. Vis. 2019, 13, 188–193.
- Lee, C.Y.; Batra, T.; Baig, M.H. Sliced Wasserstein discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 10285–10295.
- Pan, S.J.; Kwok, J.T.; Yang, Q. Transfer learning via dimensionality reduction. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, Chicago, IL, USA, 13–17 July 2008; Volume 8, pp. 677–682.
- Pan, S.J.; Tsang, I.W.; Kwok, J.T.; Yang, Q. Domain adaptation via transfer component analysis. IEEE Trans. Neural Netw. 2010, 22, 199–210.
- Gong, B.Q.; Shi, Y.; Sha, F.; Grauman, K. Geodesic flow kernel for unsupervised domain adaptation. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 2066–2073.
- Long, M.S.; Wang, J.M.; Ding, G.G.; Sun, J.G.; Yu, P. Transfer feature learning with joint distribution adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 3–6 December 2013; pp. 2200–2207.
- Lu, N.N.; Chu, F.; Qi, H.R.; Xia, S. A new domain adaption algorithm based on weights adaption from the source domain. IEEJ Trans. Electr. Electron. Eng. 2018, 13, 1769–1776.
- Jia, S.; Deng, Y.F.; Lv, J.; Du, S.C.; Xie, Z.Y. Joint distribution adaptation with diverse feature aggregation: A new transfer learning framework for bearing diagnosis across different machines. Measurement 2022, 187, 110332.
- Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
- Huang, G.; Song, S.J.; Gupta, J.N.D.; Wu, C. Semi-supervised and unsupervised extreme learning machines. IEEE Trans. Cybern. 2014, 44, 2405–2417.
- Sharifmoghadam, M.; Jazayeriy, H. Breast cancer classification using AdaBoost-extreme learning machine. In Proceedings of the 5th Iranian Conference on Signal Processing and Intelligent Systems (ICSPIS), Shahrood, Iran, 18–19 December 2019; pp. 1–5.
- Zhang, J.; Li, Y.J.; Xiao, W.D.; Zhang, Z.Q. Non-iterative and fast deep learning: Multilayer extreme learning machines. J. Frankl. Inst. 2020, 357, 8925–8955.
- Min, M.C.; Chen, X.F.; Xie, Y.F. Constrained voting extreme learning machine and its application. J. Syst. Eng. Electron. 2021, 32, 209–219.
- Li, D.Z.; Li, S.; Zhang, S.B.; Sun, J.R.; Wang, L.C.; Wang, K. Aging state prediction for supercapacitors based on heuristic Kalman filter optimization extreme learning machine. Energy 2022, 250, 123773.
- Zang, S.F.; Li, X.H.; Ma, J.W.; Yan, Y.Y.; Gao, J.W.; Wei, Y. TSTELM: Two-stage transfer extreme learning machine for unsupervised domain adaptation. Comput. Intell. Neurosci. 2022, 2022, 1582624.
- Chen, Y.M.; Song, S.J.; Li, S.; Yang, L.; Wu, C. Domain space transfer extreme learning machine for domain adaptation. IEEE Trans. Cybern. 2018, 49, 1909–1922.
- Li, X.; Zhang, W.; Ding, Q.; Sun, J.Q. Multi-layer domain adaptation method for rolling bearing fault diagnosis. Signal Process. 2019, 157, 180–197.
- Cheng, M.; You, X. Adaptive matching of kernel means. In Proceedings of the 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 13–18 September 2021; pp. 2498–2505.
- Wang, W.; Li, H.J.; Ding, Z.M.; Nie, F.P.; Chen, J.Y.; Dong, X.; Wang, Z.H. Rethinking maximum mean discrepancy for visual domain adaptation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 264–277.
- Yan, L.; Zhu, R.X.; Liu, Y.; Mo, N. TrAdaBoost based on improved particle swarm optimization for cross-domain scene classification with limited samples. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3235–3251.
- Li, S.; Song, S.; Huang, G. Prediction reweighting for domain adaptation. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 1682–1695.
- Li, J.Q.; Sun, T.; Lin, Q.Z.; Jiang, M.; Tan, K.C. Reducing negative transfer learning via clustering for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2022, 26, 1102–1116.
- Farahani, A.; Voghoei, S.; Rasheed, K.; Arabnia, H.R. A brief review of domain adaptation. In Advances in Data Science and Information Engineering, Transactions on Science and Computational Intelligence; Springer: Cham, Switzerland, 2021; pp. 877–894.
- Wang, Q.; Breckon, T.P. Cross-domain structure preserving projection for heterogeneous domain adaptation. Pattern Recognit. 2022, 123, 108362.
- Wei, D.D.; Han, T.; Chu, F.L.; Zuo, M.J. Weighted domain adaptation networks for machinery fault diagnosis. Mech. Syst. Signal Process. 2021, 158, 107744.
- Li, J.J.; Jing, M.M.; Su, H.Z.; Lu, K.; Zhu, L.; Shen, H.T. Faster domain adaptation networks. IEEE Trans. Knowl. Data Eng. 2021, 34, 5770–5783.
- Fang, X.H.; Bai, H.L.; Guo, Z.Y.; Shen, B.; Hoi, S.; Xu, Z.L. DART: Domain-adversarial residual-transfer networks for unsupervised cross-domain image classification. Neural Netw. 2020, 127, 182–192.
- Sicilia, A.; Zhao, X.C.; Hwang, S.J. Domain adversarial neural networks for domain generalization: When it works and how to improve. Mach. Learn. 2023, 1–37.
- Zhang, W.C.; Ouyang, W.L.; Li, W.; Xu, D. Collaborative and adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3801–3809.
- Yang, J.; Yan, R.; Hauptmann, A.G. Adapting SVM classifiers to data with shifted distributions. In Proceedings of the Seventh IEEE International Conference on Data Mining Workshops (ICDMW 2007), Omaha, NE, USA, 28–31 October 2007; pp. 69–76.
- Tommasi, T.; Orabona, F.; Caputo, B. Learning categories from few examples with multi model knowledge transfer. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 928–941.
- Wu, Y.P.; Lv, W.J.; Li, Z.R.; Chang, J.; Li, X.C.; Liu, S. Unsupervised domain adaptation for vibration-based robotic ground classification in dynamic environments. Mech. Syst. Signal Process. 2022, 169, 108648.
- Li, S.; Song, S.J.; Huang, G.; Wu, C. Cross-domain extreme learning machines for domain adaptation. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 1194–1207.
- Liu, Y.; Yang, C.; Liu, K.X.; Chen, B.C.; Yao, Y. Domain adaptation transfer learning soft sensor for product quality prediction. Chemom. Intell. Lab. Syst. 2019, 192, 103813.
- Tang, H.; Jia, K. Discriminative adversarial domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 5940–5947.
- Li, S.; Liu, C.H.; Xie, B.H.; Su, L.M.; Ding, Z.M.; Huang, G. Joint adversarial domain adaptation. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 729–737.
- Xu, M.H.; Zhang, J.; Ni, B.B.; Li, T.; Wang, C.J.; Tian, Q.; Zhang, W.J. Adversarial domain adaptation with domain mixup. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6502–6509.
- Cai, D.; He, X.; Han, J. Speed up kernel discriminant analysis. VLDB J. 2011, 20, 21–33.
- Saenko, K.; Kulis, B.; Fritz, M.; Darrell, T. Adapting visual category models to new domains. In Proceedings of the Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 213–226.
- Gentile, C. A new approximate maximal margin classification algorithm. In Advances in Neural Information Processing Systems 13, Proceedings of the Neural Information Processing Systems (NIPS) 2000, Denver, USA, 1 January 2000; MIT Press: Cambridge, MA, USA, 2001; pp. 479–485.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Gross, R.; Matthews, I.; Cohn, J.; Kanade, T.; Baker, S. Multi-PIE. Image Vis. Comput. 2010, 28, 807–813.
- Guo, G.; Wang, H.; Bell, D.; Bi, Y.; Greer, K. KNN model-based approach in classification. In Proceedings of the On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE: OTM Confederated International Conferences, Catania, Italy, 3–7 November 2003; pp. 986–996.
- Zang, S.F.; Cheng, Y.H.; Wang, X.S.; Yu, Q. Semi-supervised transfer discriminant analysis based on cross-domain mean constraint. Artif. Intell. Rev. 2018, 49, 581–595.
- Zang, S.F.; Cheng, Y.H.; Wang, X.; Yu, Q.; Xie, G.S. Cross domain mean approximation for unsupervised domain adaptation. IEEE Access 2020, 8, 139052–139069.

Dataset | Type | No. of Samples | Feature Dimension | No. of Classes | Subsets |
---|---|---|---|---|---|
Office | Object | 1410 | 800 | 10 | A, W, D |
Caltech | Object | 1123 | 800 | 10 | C |
USPS | Digit | 1800 | 256 | 10 | USPS |
MNIST | Digit | 2000 | 256 | 10 | MNIST |
Office-31 | Image | 4652 | 2048 | 31 | amazon, webcam, dslr |
PIE | Face | 11,554 | 1024 | 68 | PIE1,..., PIE5 |

Table 2. Classification accuracy (%) on the USPS+MNIST, PIE, and Office+Caltech datasets with shallow feature representations.

Task/Method | 1-NN | KELM | SDA | GFK | JDA | STDA-CMC | W-JDA | JDA-CDMAW | SCDMDA0 | SCDMDA1 | SCDMDA2 |
---|---|---|---|---|---|---|---|---|---|---|---|
USPS vs. MNIST | 44.70 | 46.70 | 27.50 | 46.45 | 59.65 | 63.90 | 62.35 | 60.35 | 66.75 | 76.80 | 76.85 |
MNIST vs. USPS | 65.94 | 68.28 | 62.83 | 67.22 | 67.28 | 79.22 | 76.11 | 73.06 | 76.06 | 83.44 | 83.89 |
Average | 55.32 | 57.49 | 45.17 | 56.84 | 63.47 | 71.56 | 69.23 | 66.70 | 71.40 | 80.12 | 80.37 |
PIE1 vs. PIE2 (1) | 26.09 | 26.46 | 27.69 | 26.15 | 58.81 | 72.44 | 58.87 | 77.72 | 86.86 | 84.53 | 84.53 |
PIE1 vs. PIE3 (2) | 26.59 | 27.08 | 28.55 | 27.27 | 54.23 | 73.53 | 58.15 | 67.71 | 81.68 | 82.66 | 82.72 |
PIE1 vs. PIE4 (3) | 30.67 | 31.09 | 41.00 | 31.15 | 84.50 | 93.93 | 86.19 | 93.18 | 95.76 | 96.03 | 96.03 |
PIE1 vs. PIE5 (4) | 16.67 | 17.89 | 15.38 | 17.59 | 49.75 | 63.85 | 56.56 | 60.11 | 74.94 | 76.65 | 76.65 |
PIE2 vs. PIE1 (5) | 24.49 | 26.86 | 31.78 | 25.24 | 57.62 | 74.97 | 63.78 | 75.30 | 81.42 | 84.81 | 84.84 |
PIE2 vs. PIE3 (6) | 46.63 | 46.63 | 51.41 | 47.37 | 62.93 | 69.06 | 64.95 | 75.31 | 80.02 | 80.64 | 82.60 |
PIE2 vs. PIE4 (7) | 54.07 | 54.46 | 77.05 | 54.25 | 75.82 | 88.37 | 80.71 | 83.84 | 90.39 | 93.69 | 93.81 |
PIE2 vs. PIE5 (8) | 26.53 | 26.96 | 33.21 | 27.08 | 39.89 | 54.47 | 40.32 | 66.54 | 72.00 | 69.36 | 69.67 |
PIE3 vs. PIE1 (9) | 21.37 | 22.09 | 24.37 | 21.82 | 50.96 | 72.33 | 59.66 | 72.39 | 78.72 | 81.00 | 81.15 |
PIE3 vs. PIE2 (10) | 41.01 | 40.95 | 46.59 | 43.16 | 57.95 | 67.34 | 61.02 | 75.63 | 81.83 | 82.87 | 82.93 |
PIE3 vs. PIE4 (11) | 46.53 | 47.73 | 77.20 | 46.41 | 68.45 | 85.46 | 79.06 | 85.10 | 90.15 | 93.63 | 94.38 |
PIE3 vs. PIE5 (12) | 26.23 | 26.84 | 41.18 | 26.78 | 39.95 | 60.36 | 48.47 | 58.03 | 74.14 | 77.02 | 77.70 |
PIE4 vs. PIE1 (13) | 32.95 | 34.33 | 46.49 | 34.24 | 80.58 | 94.24 | 84.24 | 91.39 | 96.13 | 96.97 | 97.03 |
PIE4 vs. PIE2 (14) | 62.68 | 62.92 | 80.91 | 62.92 | 82.63 | 91.04 | 84.53 | 90.42 | 95.40 | 96.26 | 96.26 |
PIE4 vs. PIE3 (15) | 73.22 | 73.65 | 86.27 | 73.35 | 87.25 | 91.18 | 87.50 | 89.58 | 93.20 | 95.10 | 95.10 |
PIE4 vs. PIE5 (16) | 37.19 | 38.17 | 56.31 | 37.38 | 54.66 | 75.49 | 59.13 | 68.93 | 84.62 | 84.56 | 85.60 |
PIE5 vs. PIE1 (17) | 18.49 | 20.23 | 25.09 | 20.35 | 46.46 | 70.02 | 52.76 | 61.85 | 76.11 | 75.90 | 75.90 |
PIE5 vs. PIE2 (18) | 24.19 | 24.80 | 43.95 | 24.62 | 42.05 | 57.70 | 43.22 | 66.36 | 73.42 | 79.56 | 79.31 |
PIE5 vs. PIE3 (19) | 28.31 | 28.98 | 53.00 | 28.49 | 53.31 | 66.85 | 55.51 | 71.26 | 78.74 | 86.58 | 86.58 |
PIE5 vs. PIE4 (20) | 31.24 | 33.70 | 55.69 | 31.33 | 57.01 | 78.64 | 58.52 | 77.02 | 83.90 | 87.44 | 87.47 |
Average | 34.76 | 35.59 | 47.16 | 35.35 | 60.24 | 75.06 | 64.16 | 75.38 | 83.47 | 85.26 | 85.51 |
C vs. A (1) | 23.70 | 54.49 | 45.72 | 41.02 | 44.78 | 46.03 | 48.12 | 44.68 | 50.21 | 60.75 | 62.32 |
C vs. W (2) | 25.76 | 48.14 | 35.59 | 40.68 | 41.69 | 42.37 | 44.07 | 41.69 | 46.10 | 55.25 | 57.29 |
C vs. D (3) | 25.48 | 43.95 | 43.31 | 38.85 | 45.22 | 49.68 | 47.13 | 45.86 | 50.96 | 54.78 | 55.41 |
A vs. C (4) | 26.00 | 45.06 | 37.76 | 40.25 | 39.36 | 41.41 | 41.41 | 38.56 | 42.83 | 48.26 | 48.44 |
A vs. W (5) | 29.83 | 42.03 | 38.31 | 38.98 | 37.97 | 39.32 | 40.00 | 38.31 | 43.05 | 49.15 | 52.54 |
A vs. D (6) | 25.48 | 44.59 | 31.21 | 36.31 | 39.49 | 38.22 | 38.22 | 43.31 | 38.85 | 45.22 | 51.59 |
W vs. C (7) | 19.86 | 35.71 | 33.66 | 30.72 | 31.17 | 32.24 | 31.88 | 32.32 | 33.30 | 37.31 | 38.47 |
W vs. A (8) | 22.96 | 38.94 | 31.63 | 29.75 | 32.78 | 33.92 | 33.09 | 33.61 | 36.85 | 41.02 | 41.13 |
W vs. D (9) | 59.24 | 76.43 | 87.26 | 80.89 | 89.17 | 87.90 | 89.17 | 89.81 | 89.81 | 92.99 | 92.99 |
D vs. C (10) | 26.27 | 34.73 | 31.88 | 30.28 | 31.52 | 32.15 | 31.34 | 30.45 | 33.57 | 36.95 | 37.49 |
D vs. A (11) | 28.50 | 36.01 | 32.36 | 32.05 | 33.09 | 32.67 | 32.88 | 32.67 | 32.46 | 46.03 | 46.03 |
D vs. W (12) | 63.39 | 66.44 | 87.12 | 75.59 | 89.49 | 90.17 | 91.19 | 91.19 | 89.83 | 91.86 | 91.86 |
Average | 31.37 | 47.21 | 44.65 | 42.95 | 46.31 | 47.17 | 47.38 | 46.87 | 48.99 | 54.97 | 56.30 |
Total Average | 34.77 | 40.98 | 46.15 | 39.29 | 55.51 | 65.01 | 58.53 | 64.81 | 70.59 | 74.27 | 74.90 |

Task/Method | 1-NN | KELM | JDA | DAN | DANN | CAN | STDA-CMC | JDA-CDMAW | SCDMDA0 | SCDMDA1 | SCDMDA2 |
---|---|---|---|---|---|---|---|---|---|---|---|
amazon vs. dslr | 79.12 | 79.12 | 85.54 | 78.60 | 79.70 | 85.50 | 82.73 | 85.54 | 82.13 | 90.56 | 91.16 |
amazon vs. webcam | 75.85 | 75.85 | 82.52 | 80.50 | 82.00 | 81.50 | 79.75 | 82.89 | 82.64 | 86.42 | 88.55 |
dslr vs. amazon | 60.17 | 60.24 | 66.56 | 63.60 | 68.20 | 65.90 | 68.09 | 67.13 | 69.08 | 73.52 | 73.62 |
dslr vs. webcam | 95.97 | 95.97 | 97.36 | 97.10 | 96.90 | 98.20 | 98.62 | 97.36 | 98.49 | 98.99 | 98.99 |
webcam vs. amazon | 59.92 | 59.96 | 68.83 | 62.80 | 67.40 | 63.40 | 66.31 | 69.08 | 68.44 | 73.73 | 73.84 |
webcam vs. dslr | 99.40 | 99.40 | 99.20 | 99.60 | 99.10 | 99.70 | 99.80 | 99.20 | 99.80 | 99.80 | 99.80 |
Average | 78.41 | 78.42 | 83.33 | 80.40 | 82.20 | 82.40 | 82.55 | 83.53 | 83.43 | 87.17 | 87.66 |

Algorithm | 1-NN | KELM | JDA | STDA-CMC | JDA-CDMAW | SCDMDA0 | SCDMDA1 | SCDMDA2 |
---|---|---|---|---|---|---|---|---|
Time (s) | 2.72 | 2.02 | 187.26 | 221.34 | 96.69 | 137.60 | 134.78 | 173.93 |