Sub-Graph Regularization on Kernel Regression for Robust Semi-Supervised Dimensionality Reduction
Abstract
1. Introduction
- (1) We develop a doubly-stochastic S that measures the similarity between data points and anchors. The updated S has a probabilistic meaning and can be viewed as the transition probability between data points and anchors. In addition, the proposed S is a stochastic extension of the one used in AGR.
- (2) We develop a sub-graph regularized framework for SSL. The new sub-graph is constructed from S in an efficient way and preserves the geometry of the data manifold.
- (3) We adopt a linear predictor for inferring the class labels of new incoming data, which handles the out-of-sample problem. The computational complexity of this predictor is linear in the number of anchors, and hence it is efficient.
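The first two contributions can be illustrated with a small numerical sketch. The snippet below is not the authors' implementation: it assumes a Sinkhorn-style alternating normalization for making the n × m data-to-anchor similarity matrix row-stochastic with balanced columns, and the AGR-style product S Λ⁻¹ Sᵀ for the sub-graph adjacency; the function names and toy data are ours.

```python
import numpy as np

def doubly_stochastic(Z, n_iter=200):
    """Sinkhorn-style alternating normalization of a nonnegative n x m
    similarity matrix: columns are rescaled to sum to n/m, then rows to
    sum to 1, so each row reads as a transition probability over anchors."""
    n, m = Z.shape
    S = Z.astype(float).copy()
    for _ in range(n_iter):
        S *= (n / m) / S.sum(axis=0, keepdims=True)  # balance columns
        S /= S.sum(axis=1, keepdims=True)            # rows sum to 1
    return S

def subgraph_adjacency(S):
    """AGR-style sub-graph: W = S diag(S^T 1)^{-1} S^T, a symmetric
    nonnegative n x n adjacency built from the n x m anchor weights."""
    col_sums = S.sum(axis=0)          # the diagonal of Lambda = diag(S^T 1)
    return (S / col_sums) @ S.T

# Toy example: 6 points, 2 anchors, strictly positive similarities.
rng = np.random.default_rng(0)
Z = rng.random((6, 2)) + 0.1
S = doubly_stochastic(Z)
W = subgraph_adjacency(S)
```

Because W is a product of an n × m matrix with its transpose, it never needs to be formed from pairwise distances over all n points, which is the efficiency argument behind anchor graphs.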
2. Notations and Preliminary Work
2.1. Notations
2.2. Review of Graph Based Semi-Supervised Learning
2.3. Anchor Graph Regularization
3. A Sub-Graph Regularized Framework for Efficient Semi-Supervised Learning
3.1. Analysis of Anchor Graph Construction
3.2. Sub-Graph Construction
3.3. Efficient Semi-Supervised Learning via Sub-Graph Construction
Algorithm 1: The proposed SGR.
3.4. Out-of-Sample Extension via Kernel Regression
4. Experiments
4.1. Toy Examples for Synthetic Datasets
4.2. Description of Dataset
4.3. Image Classification
4.4. Parameter Analysis with Different Numbers of Anchors
4.5. Image Visualization
5. Conclusions
- (1) We developed a doubly-stochastic S that measures the similarity between data points and anchors. The updated S has a probabilistic meaning and can be viewed as the transition probability between data points and anchors. In addition, the new sub-graph is constructed from S in an efficient way and preserves the geometry of the data manifold. Simulation results verify the superiority of the proposed SGR;
- (2) We adopted a linear predictor for inferring the labels of new incoming data, which handles the out-of-sample problem. The computational complexity of this predictor is linear in the number of anchors; hence it is efficient, which shows that SGR can handle large-scale datasets in practice.
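As a concrete illustration of the out-of-sample extension, the sketch below (our own, not the paper's code) uses Nadaraya–Watson-style kernel regression over the anchors: a new point's label is a kernel-weighted combination of the m anchor soft labels, so each prediction costs O(md + mc) time, linear in the number of anchors.

```python
import numpy as np

def predict_out_of_sample(x, anchors, anchor_labels, sigma=1.0):
    """Kernel-regression label prediction for a new point x.

    anchors:       (m, d) anchor positions
    anchor_labels: (m, c) soft label matrix learned on the anchors
    Cost is O(m*d + m*c), i.e. linear in the number of anchors m."""
    d2 = np.sum((anchors - x) ** 2, axis=1)   # squared distances to anchors
    z = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    z /= z.sum()                              # normalize to a distribution
    scores = z @ anchor_labels                # (c,) soft class scores
    return int(np.argmax(scores))

# Toy example: two well-separated anchors with one-hot labels.
anchors = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.eye(2)
pred = predict_out_of_sample(np.array([0.5, 0.2]), anchors, labels)
# a point near the first anchor is assigned class 0
```

Since the predictor never revisits the n training points, new samples can be classified without rebuilding the graph, which is what makes the out-of-sample extension practical at scale.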
Author Contributions
Funding
Conflicts of Interest
References
- Zhu, X.; Ghahramani, Z.; Lafferty, J.D. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), Washington, DC, USA, 21–24 August 2003. [Google Scholar]
- Zhou, D.; Bousquet, O.; Lal, T.N.; Weston, J.; Schölkopf, B. Learning with local and global consistency. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2004. [Google Scholar]
- Belkin, M.; Niyogi, P.; Sindhwani, V. Manifold regularization: A geometric framework for learning from labeled and unlabeled samples. J. Mach. Learn. Res. 2006, 7, 2399–2434. [Google Scholar]
- Nie, F.; Xiang, S.; Liu, Y.; Zhang, C. A general graph based semi-supervised learning with novel class discovery. Neural Comput. Appl. 2010, 19, 549–555. [Google Scholar] [CrossRef]
- Cai, D.; He, X.; Han, J. Semi-supervised discriminant analysis. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–7. [Google Scholar]
- Zhao, M.; Zhang, Z.; Chow, T.W.; Li, B. Soft label based linear discriminant analysis for image recognition and retrieval. Comput. Image Underst. 2014, 121, 86–99. [Google Scholar] [CrossRef]
- Zhao, M.; Zhang, Z.; Chow, T.W.; Li, B. A general soft label based linear discriminant analysis for semi-supervised dimensionality reduction. Neural Netw. 2014, 55, 83–97. [Google Scholar] [CrossRef]
- Zhao, M.; Chow, T.W.; Wu, Z.; Zhang, Z.; Li, B. Learning from normalized local and global discriminative information for semi-supervised regression and dimensionality reduction. Inf. Sci. 2015, 324, 286–309. [Google Scholar] [CrossRef]
- Zhao, M.; Chow, T.W.; Zhang, Z.; Li, B. Automatic image annotation via compact graph based semi-supervised learning. Knowl.-Based Syst. 2015, 76, 148–165. [Google Scholar] [CrossRef]
- Zhao, M.; Zhang, Z.; Chow, T.W. Trace ratio criterion based generalized discriminative learning for semi-supervised dimensionality reduction. Pattern Recognit. 2012, 45, 1482–1499. [Google Scholar] [CrossRef]
- Fukunaga, K. Introduction to statistical pattern classification. Pattern Recognit. 1990, 30, 1149. [Google Scholar]
- Gao, Y.; Ma, J.; Yuille, A.L. Semi-supervised sparse representation based classification for face recognition with insufficient labeled samples. arXiv 2016, arXiv:1609.03279. [Google Scholar] [CrossRef]
- Ma, J.; Zhao, J.; Jiang, J.; Zhou, H.; Guo, X. Locality Preserving Matching. Int. J. Comput. Vis. 2019, 127, 512–531. [Google Scholar] [CrossRef]
- Gao, Y.; Yuille, A.L. Estimation of 3D Category-Specific Object Structure: Symmetry, Manhattan and/or Multiple Images. Int. J. Comput. Vis. 2019, 127, 1501–1526. [Google Scholar] [CrossRef]
- Tenenbaum, J.B.; de Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef] [PubMed]
- Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef]
- He, X.; Yan, S.; Hu, Y.; Niyogi, P.; Zhang, H. Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 328–340. [Google Scholar]
- Wang, F.; Zhang, C. Label propagation through linear neighborhoods. IEEE Trans. Knowl. Data Eng. 2008, 20, 55–67. [Google Scholar] [CrossRef]
- Wang, J.; Wang, F.; Zhang, C.; Shen, H.C.; Quan, L. Linear neighborhood propagation and its applications. IEEE Trans. Pattern Anal. Machine Intell. 2009, 31, 1600–1615. [Google Scholar] [CrossRef]
- Yang, Y.; Nie, F.; Xu, D.; Luo, J.; Zhuang, Y.; Pan, Y. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 723–742. [Google Scholar] [CrossRef]
- Xiang, S.; Nie, F.; Zhang, C. Semi-supervised classification via local spline regression. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2039–2053. [Google Scholar] [CrossRef]
- Liu, W.; He, J.; Chang, S.-F. Large graph construction for scalable semi-supervised learning. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 679–686. [Google Scholar]
- Liu, W.; Wang, J.; Chang, S.-F. Robust and scalable graph-based semisupervised learning. Proc. IEEE 2012, 100, 2624–2638. [Google Scholar] [CrossRef]
- Wang, M.; Fu, W.; Hao, S.; Tao, D.; Wu, X. Scalable semi-supervised learning by efficient anchor graph regularization. IEEE Trans. Knowl. Data Eng. 2016, 28, 1864–1877. [Google Scholar] [CrossRef]
- Fu, W.; Wang, M.; Hao, S.; Mu, T. Flag: Faster learning on anchor graph with label predictor optimization. IEEE Trans. Big Data 2017. [Google Scholar] [CrossRef]
- Wang, M.; Fu, W.; Hao, S.; Liu, H.; Wu, X. Learning on big graph: Label inference and regularization with anchor hierarchy. IEEE Trans. Knowl. Data Eng. 2017, 29, 1101–1114. [Google Scholar] [CrossRef]
- Von Neumann, J. Functional Operators: Measures and Integrals; Princeton University Press: Princeton, NJ, USA, 1950; Volume 1. [Google Scholar]
- Liu, W.; Chang, S.-F. Robust multi-class transductive learning with graphs. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
- Zhao, X.; Wang, D.; Zhang, X.; Gu, N.; Ye, X. Semi-supervised learning based on coupled graph laplacian regularization. In Proceedings of the 2018 Chinese Intelligent Systems Conference; Springer: Berlin/Heidelberg, Germany, 2019; pp. 131–142. [Google Scholar]
- Georghiades, A.S.; Belhumeur, P.N.; Kriegman, D.J. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 643–660. [Google Scholar] [CrossRef]
- Baker, S.; Bsat, M. The CMU pose, illumination, and expression database. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1615. [Google Scholar]
- Nene, S.A.; Nayar, S.K.; Murase, H. Columbia Object Image Library (COIL-100); Technical Report CUCS-005-96; Columbia University: New York, NY, USA, 1996. [Google Scholar]
- Leibe, B.; Schiele, B. Analyzing appearance and contour based methods for object categorization. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; p. II-409. [Google Scholar]
- Hull, J.J. A database for handwritten text recognition research. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 550–554. [Google Scholar] [CrossRef]
- Liu, C.-L.; Yin, F.; Wang, D.-H.; Wang, Q.-F. CASIA online and offline Chinese handwriting databases. In Proceedings of the 2011 International Conference on Document Analysis and Recognition, Beijing, China, 18–21 September 2011; pp. 37–41. [Google Scholar]
- Hou, C.; Nie, F.; Wang, F.; Zhang, C.; Wu, Y. Semisupervised learning using negative labels. IEEE Trans. Neural Netw. 2011, 22, 420–432. [Google Scholar]
- Rodriguez, M.Z.; Comin, C.H.; Casanova, D.; Bruno, O.M.; Amancio, D.R.; Costa, L.D.F.; Rodrigues, F.A.; Kestler, H.A. Clustering algorithms: A comparative approach. PLoS ONE 2019, 14, e0210236. [Google Scholar] [CrossRef]
- Tang, J.; Shu, X.; Li, Z.; Jiang, Y.G.; Tian, Q. Social anchor unit graph regularized tensor completion for large scale image retagging. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2027–2034. [Google Scholar] [CrossRef]
- Amancio, D.R.; Silva, F.N.; Costa, L.d.F. Concentric network symmetry grasps authors’ styles in word adjacency networks. EPL (Europhys. Lett.) 2015, 110, 68001. [Google Scholar] [CrossRef]
- Koplenig, A.; Wolfer, S. Studying lexical dynamics and language change via generalized entropies: The problem of sample size. Entropy 2019, 21, 464. [Google Scholar] [CrossRef]
Table: Computational complexity of the proposed method by stage (the first stage: initialization; the second stage: the proposed model; the third stage: SSL), with totals for large-scale data.
Dataset | Database Type | Sample | Dim | Class | Train per Class | Test per Class |
---|---|---|---|---|---|---|
Extended Yale-B [30] | Face | 16,123 | 1024 | 38 | 80% | 20% |
CMU-PIE [31] | Face | 11,000 | 1024 | 68 | 80% | 20% |
COIL100 [32] | Object | 7200 | 1024 | 100 | 58 | 14 |
ETH80 [33] | Object | 3280 | 1024 | 80 | 33 | 8 |
USPS [34] | Hand-written digits | 9298 | 256 | 10 | 800 | remaining |
CASIA-HWDB [35] | Hand-written letters | 12,456 | 256 | 52 | 200 | remaining |
Methods | 5% Labeled (Unlabeled) | 5% Labeled (Test) | 10% Labeled (Unlabeled) | 10% Labeled (Test) | 15% Labeled (Unlabeled) | 15% Labeled (Test) | 20% Labeled (Unlabeled) | 20% Labeled (Test)
---|---|---|---|---|---|---|---|---
SVM | 53.1 ± 1.1 | 52.7 ± 1.0 | 68.8 ± 2.0 | 67.7 ± 0.6 | 75.2 ± 1.1 | 73.7 ± 1.3 | 80.0 ± 1.8 | 78.8 ± 1.2
MR | 59.0 ± 1.2 | 58.5 ± 1.3 | 70.3 ± 1.1 | 69.4 ± 0.5 | 76.4 ± 1.3 | 74.9 ± 1.5 | 80.7 ± 1.3 | 79.0 ± 1.1
LGC | 64.7 ± 1.0 | – | 71.8 ± 1.1 | – | 76.4 ± 4.2 | – | 80.8 ± 1.0 | –
SLP | 65.6 ± 2.3 | – | 73.9 ± 1.0 | – | 78.0 ± 1.8 | – | 81.8 ± 1.0 | –
LNP | 64.9 ± 1.3 | 53.8 ± 2.7 | 72.0 ± 1.2 | 71.2 ± 0.4 | 78.0 ± 2.4 | 76.6 ± 2.1 | 81.6 ± 1.0 | 80.0 ± 1.4
AGR | 66.6 ± 1.5 | 65.8 ± 1.3 | 74.3 ± 1.2 | 72.2 ± 0.4 | 78.1 ± 1.5 | 77.3 ± 1.7 | 83.0 ± 1.2 | 80.0 ± 4.5
EAGR | 66.9 ± 0.8 | 66.5 ± 1.8 | 74.4 ± 1.1 | 73.2 ± 1.5 | 78.0 ± 1.5 | 77.2 ± 1.9 | 84.4 ± 2.4 | 83.6 ± 3.1
SGR | 69.9 ± 0.4 | 67.2 ± 1.0 | 75.7 ± 1.1 | 74.0 ± 3.3 | 79.4 ± 1.0 | 78.3 ± 1.1 | 86.3 ± 2.5 | 82.8 ± 2.4
Methods | 5% Labeled (Unlabeled) | 5% Labeled (Test) | 10% Labeled (Unlabeled) | 10% Labeled (Test) | 15% Labeled (Unlabeled) | 15% Labeled (Test) | 20% Labeled (Unlabeled) | 20% Labeled (Test)
---|---|---|---|---|---|---|---|---
SVM | 42.5 ± 1.3 | 41.5 ± 1.1 | 56.8 ± 2.2 | 55.8 ± 1.5 | 64.6 ± 1.2 | 63.8 ± 1.8 | 69.3 ± 1.7 | 68.9 ± 1.2
MR | 47.8 ± 1.1 | 46.7 ± 1.6 | 59.3 ± 1.8 | 58.8 ± 1.3 | 65.6 ± 1.6 | 64.5 ± 1.6 | 69.9 ± 1.4 | 69.1 ± 1.4
LGC | 53.5 ± 1.6 | – | 60.3 ± 1.7 | – | 66.5 ± 2.8 | – | 70.5 ± 1.3 | –
SLP | 55.3 ± 1.9 | – | 63.4 ± 1.8 | – | 67.2 ± 1.9 | – | 70.9 ± 1.3 | –
LNP | 55.2 ± 1.2 | 54.8 ± 1.9 | 62.9 ± 1.5 | 61.8 ± 0.9 | 68.3 ± 2.7 | 67.3 ± 2.3 | 71.1 ± 1.2 | 71.0 ± 1.6
AGR | 56.4 ± 1.4 | 55.3 ± 1.8 | 64.8 ± 1.3 | 64.7 ± 0.5 | 68.5 ± 2.1 | 66.9 ± 1.8 | 72.8 ± 1.7 | 71.3 ± 3.5
EAGR | 57.2 ± 1.0 | 56.4 ± 1.6 | 64.4 ± 1.2 | 63.7 ± 1.9 | 68.4 ± 1.8 | 67.7 ± 2.3 | 73.1 ± 2.0 | 72.4 ± 2.7
SGR | 59.0 ± 0.7 | 58.4 ± 1.3 | 65.6 ± 1.2 | 64.6 ± 1.9 | 69.8 ± 1.6 | 67.9 ± 1.6 | 75.0 ± 2.4 | 73.9 ± 2.3
Methods | 5% Labeled (Unlabeled) | 5% Labeled (Test) | 10% Labeled (Unlabeled) | 10% Labeled (Test) | 15% Labeled (Unlabeled) | 15% Labeled (Test) | 20% Labeled (Unlabeled) | 20% Labeled (Test)
---|---|---|---|---|---|---|---|---
SVM | 83.6 ± 0.9 | 83.2 ± 0.8 | 88.5 ± 0.8 | 86.6 ± 0.8 | 91.8 ± 0.8 | 91.4 ± 0.7 | 95.3 ± 0.8 | 94.5 ± 1.6
MR | 83.7 ± 1.0 | 83.4 ± 0.9 | 89.0 ± 0.9 | 87.3 ± 0.9 | 92.1 ± 0.8 | 91.6 ± 0.9 | 95.3 ± 0.7 | 94.7 ± 1.3
LGC | 85.5 ± 0.8 | – | 89.3 ± 0.9 | – | 92.4 ± 0.8 | – | 95.5 ± 0.6 | –
SLP | 86.4 ± 0.7 | – | 89.3 ± 0.9 | – | 92.8 ± 0.6 | – | 95.6 ± 0.8 | –
LNP | 86.5 ± 0.7 | 85.6 ± 0.7 | 89.6 ± 0.9 | 88.7 ± 0.7 | 92.9 ± 0.7 | 92.4 ± 0.8 | 95.8 ± 0.7 | 95.1 ± 1.3
AGR | 86.5 ± 0.6 | 85.8 ± 0.9 | 90.9 ± 0.9 | 88.8 ± 0.8 | 93.3 ± 0.6 | 92.7 ± 0.9 | 95.8 ± 0.7 | 95.3 ± 1.4
EAGR | 86.6 ± 0.7 | 85.7 ± 1.3 | 89.9 ± 0.9 | 89.0 ± 1.5 | 93.2 ± 0.6 | 92.7 ± 1.5 | 96.0 ± 0.7 | 95.2 ± 0.9
SGR | 87.0 ± 0.6 | 86.7 ± 1.0 | 91.8 ± 0.9 | 89.7 ± 0.8 | 94.7 ± 0.6 | 93.2 ± 0.8 | 97.0 ± 0.6 | 95.6 ± 0.9
Methods | 5% Labeled (Unlabeled) | 5% Labeled (Test) | 10% Labeled (Unlabeled) | 10% Labeled (Test) | 15% Labeled (Unlabeled) | 15% Labeled (Test) | 20% Labeled (Unlabeled) | 20% Labeled (Test)
---|---|---|---|---|---|---|---|---
SVM | 61.1 ± 1.3 | 59.4 ± 0.3 | 71.1 ± 1.9 | 70.2 ± 2.0 | 75.9 ± 1.5 | 75.3 ± 3.1 | 78.9 ± 2.0 | 77.9 ± 2.5
MR | 62.3 ± 0.8 | 60.0 ± 0.2 | 71.7 ± 2.0 | 71.0 ± 2.7 | 76.2 ± 1.0 | 75.3 ± 2.8 | 78.9 ± 1.9 | 78.3 ± 2.5
LGC | 65.7 ± 1.4 | – | 73.5 ± 1.4 | – | 76.8 ± 1.5 | – | 79.0 ± 1.7 | –
SLP | 65.9 ± 1.5 | – | 73.9 ± 1.2 | – | 76.9 ± 1.6 | – | 79.3 ± 1.8 | –
LNP | 64.9 ± 0.9 | 62.2 ± 0.2 | 73.4 ± 2.0 | 71.4 ± 2.6 | 76.7 ± 1.1 | 76.0 ± 2.6 | 79.0 ± 1.8 | 78.5 ± 2.0
AGR | 66.4 ± 1.6 | 65.1 ± 0.2 | 75.0 ± 1.7 | 72.2 ± 2.2 | 76.9 ± 1.7 | 76.1 ± 2.5 | 79.6 ± 2.0 | 78.9 ± 1.9
EAGR | 68.2 ± 1.7 | 67.7 ± 2.1 | 74.9 ± 1.4 | 74.2 ± 1.9 | 77.3 ± 1.7 | 77.0 ± 1.9 | 80.0 ± 2.2 | 79.4 ± 2.8
SGR | 69.4 ± 1.9 | 67.2 ± 0.1 | 74.0 ± 1.3 | 74.2 ± 2.2 | 77.5 ± 1.9 | 77.3 ± 1.8 | 79.8 ± 2.2 | 79.0 ± 2.2
Methods | 5% Labeled (Unlabeled) | 5% Labeled (Test) | 10% Labeled (Unlabeled) | 10% Labeled (Test) | 15% Labeled (Unlabeled) | 15% Labeled (Test) | 20% Labeled (Unlabeled) | 20% Labeled (Test)
---|---|---|---|---|---|---|---|---
SVM | 71.7 ± 0.7 | 70.6 ± 1.5 | 77.9 ± 0.7 | 77.8 ± 0.2 | 91.9 ± 4.4 | 90.9 ± 4.2 | 96.1 ± 1.9 | 95.7 ± 0.9
MR | 74.1 ± 0.7 | 73.0 ± 1.5 | 80.9 ± 0.8 | 79.8 ± 0.1 | 92.6 ± 3.4 | 91.7 ± 3.4 | 96.1 ± 2.2 | 95.0 ± 1.0
LGC | 74.7 ± 0.7 | – | 87.1 ± 0.8 | – | 94.6 ± 3.3 | – | 96.5 ± 2.3 | –
SLP | 75.0 ± 0.5 | – | 89.7 ± 0.7 | – | 95.4 ± 3.0 | – | 96.5 ± 2.3 | –
LNP | 76.5 ± 0.6 | 74.8 ± 0.8 | 92.0 ± 0.7 | 90.8 ± 0.5 | 95.5 ± 3.4 | 95.0 ± 3.4 | 96.9 ± 2.5 | 96.5 ± 0.9
AGR | 78.7 ± 0.6 | 76.1 ± 0.7 | 93.6 ± 0.7 | 92.6 ± 0.7 | 96.0 ± 2.4 | 95.8 ± 2.4 | 97.1 ± 2.8 | 96.7 ± 0.9
EAGR | 79.9 ± 0.6 | 79.4 ± 1.2 | 93.6 ± 0.7 | 92.9 ± 1.1 | 96.3 ± 3.6 | 95.5 ± 3.5 | 97.2 ± 1.7 | 96.3 ± 2.2
SGR | 80.7 ± 0.5 | 79.7 ± 0.7 | 95.0 ± 0.5 | 93.3 ± 0.8 | 97.2 ± 3.1 | 96.2 ± 3.1 | 97.4 ± 1.5 | 97.3 ± 0.7
Methods | 5% Labeled (Unlabeled) | 5% Labeled (Test) | 10% Labeled (Unlabeled) | 10% Labeled (Test) | 15% Labeled (Unlabeled) | 15% Labeled (Test) | 20% Labeled (Unlabeled) | 20% Labeled (Test)
---|---|---|---|---|---|---|---|---
SVM | 56.8 ± 5.4 | 55.8 ± 0.6 | 65.7 ± 0.6 | 64.0 ± 1.7 | 79.0 ± 0.5 | 78.2 ± 4.0 | 83.4 ± 1.8 | 82.1 ± 1.9
MR | 58.7 ± 3.3 | 57.3 ± 0.5 | 73.0 ± 0.6 | 62.0 ± 1.4 | 79.4 ± 0.6 | 78.4 ± 2.7 | 86.6 ± 1.9 | 85.5 ± 1.5
LGC | 63.1 ± 2.4 | – | 76.1 ± 0.4 | – | 80.7 ± 0.5 | – | 88.1 ± 1.4 | –
SLP | 63.4 ± 1.6 | – | 77.4 ± 0.4 | – | 85.3 ± 0.5 | – | 88.6 ± 1.7 | –
LNP | 66.5 ± 1.4 | 64.8 ± 0.6 | 78.5 ± 0.5 | 77.5 ± 0.7 | 85.9 ± 0.5 | 84.8 ± 1.7 | 89.2 ± 1.7 | 90.6 ± 8.2
AGR | 72.0 ± 0.9 | 71.0 ± 0.6 | 80.9 ± 2.8 | 77.8 ± 0.6 | 87.2 ± 0.5 | 86.4 ± 1.6 | 91.8 ± 1.6 | 90.0 ± 4.1
EAGR | 74.9 ± 0.7 | 74.4 ± 1.2 | 78.6 ± 3.3 | 78.0 ± 3.1 | 87.6 ± 0.4 | 87.2 ± 1.0 | 91.6 ± 1.8 | 91.2 ± 2.2
SGR | 75.3 ± 0.7 | 73.6 ± 0.5 | 83.6 ± 2.2 | 80.3 ± 0.6 | 88.7 ± 0.3 | 86.5 ± 1.6 | 93.2 ± 1.7 | 91.7 ± 3.3
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, J.; Zhao, M.; Kong, W. Sub-Graph Regularization on Kernel Regression for Robust Semi-Supervised Dimensionality Reduction. Entropy 2019, 21, 1125. https://doi.org/10.3390/e21111125