One-Step Clustering with Adaptively Local Kernels and a Neighborhood Kernel
Abstract
1. Introduction
- By considering the correlation between base kernels, a simple strategy for selecting local base kernels is used to produce a consensus kernel; the selection adapts to the data so as to avoid redundancy among the given kernels and to promote diversity (a selection sketch follows this list).
- By choosing a neighborhood kernel of the consensus kernel as the optimal kernel, the expressive capability of the optimal kernel is improved and its search scope is enlarged.
- A soft block diagonal (BD) regularizer is used to encourage the product of the indicator matrix and its transpose to be block diagonal, so that the clustering results are obtained directly from the indicator matrix. One-step clustering is thus realized, which helps ensure the final clustering results are optimal (a sketch of this regularizer appears under Section 3.2).
- A four-step iterative algorithm, incorporating the Riemannian conjugate gradient method of [26], is used to overcome the difficulty of solving the model.
- Extensive experiments on eight benchmark datasets, with comparisons against six clustering methods, indicate that OSC-ALK-ONK is effective.
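To make the kernel-selection contribution concrete, below is a minimal sketch of correlation-aware selection of base kernels. It scores pairwise redundancy with centered kernel alignment and greedily keeps weakly correlated kernels; the function names and the greedy rule are illustrative assumptions, not the paper's Equations (10) and (11).

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix: H K H with H = I - (1/n) * 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K1, K2):
    """Centered kernel alignment, a normalized correlation of two kernels."""
    K1c, K2c = center_kernel(K1), center_kernel(K2)
    num = np.trace(K1c @ K2c)
    return num / (np.linalg.norm(K1c, "fro") * np.linalg.norm(K2c, "fro"))

def select_local_kernels(kernels, num_selected):
    """Greedily pick kernels whose mutual alignment (redundancy) is low."""
    m = len(kernels)
    # pairwise alignment matrix over the base kernel pool
    A = np.array([[cka(kernels[p], kernels[q]) for q in range(m)]
                  for p in range(m)])
    selected = [0]  # start from an arbitrary kernel
    while len(selected) < num_selected:
        rest = [q for q in range(m) if q not in selected]
        # keep the kernel least correlated with those already selected
        scores = [A[q, selected].mean() for q in rest]
        selected.append(rest[int(np.argmin(scores))])
    return sorted(selected)
```

In the paper the selected local kernels are then combined into a consensus kernel; here the function simply returns the selected indices.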
2. Related Work
2.1. Notations
2.2. Kernel k-Means Clustering (KKC)
2.3. Multiple Kernel k-Means Clustering (MKKC)
3. Proposed Method
3.1. Localized Kernel Selection
3.2. Block Diagonal Regularizer
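The full derivation is in the body of the paper; as background, a soft BD penalty can be characterized through the block diagonal representation of Lu et al. [27]: for a nonnegative symmetric affinity matrix, the sum of the k smallest eigenvalues of its Laplacian is zero exactly when the matrix has at least k connected blocks. Below is a minimal sketch under that characterization; applying it to the product of the indicator matrix and its transpose, as the contribution list describes, is an assumption here.

```python
import numpy as np

def soft_block_diagonal_penalty(B, k):
    """Sum of the k smallest Laplacian eigenvalues of an affinity matrix B,
    the k-block-diagonal regularizer of Lu et al. [27]. It vanishes iff the
    graph induced by B has at least k connected components, i.e., iff B is
    (up to permutation) k-block diagonal."""
    B = 0.5 * (B + B.T)                # enforce symmetry
    L = np.diag(B.sum(axis=1)) - B     # unnormalized graph Laplacian
    eigvals = np.linalg.eigvalsh(L)    # eigenvalues in ascending order
    return float(eigvals[:k].sum())
```

Penalizing this quantity softly, rather than constraining it to zero, encourages a k-block structure without forcing it, which is what lets the indicator matrix deliver the clustering in one step.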
3.3. Objective Function
3.4. Optimization
3.4.1. Update W While Fixing G, K and H
3.4.2. Update G While Fixing W, K and H
3.4.3. Update K While Fixing W, G and H
3.4.4. Update H While Fixing G, K and W
4. Experiments
4.1. Data Sets
4.2. Comparison Methods
- KKM integrates integral operator kernel functions into principal component analysis to handle nonlinear data [30].
- MKKM combines fuzzy k-means clustering with multiple kernel learning, where the weights of the base kernels are updated automatically to produce the optimal kernel [31].
- RMKKM extends MKKM, with robustness ensured by an ℓ2,1-norm in kernel space [7].
- MKKM-MR uses a matrix-induced regularization to measure the correlation between all kernel pairs, and on that basis implements MKC [18].
- SimpleMKKM adopts a min-max formulation that minimizes kernel alignment over the kernel coefficients and maximizes it over the clustering matrix, yielding a simple MKC method [12].
- MKKM-RK is an MKC method that selects representative kernels from the base kernel pool to generate the optimal kernel [17] (a sketch of the kernel k-means core shared by these methods follows this list).
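All of the compared methods, like the proposed one, are built on kernel k-means over some combined kernel. For background, this is a minimal sketch of the spectral relaxation most of them solve internally, followed by an ordinary k-means discretization step; the discretization heuristic is a common choice, not necessarily the one each paper uses.

```python
import numpy as np
from sklearn.cluster import KMeans

def kernel_kmeans_relaxed(K, k, random_state=0):
    """Relaxed kernel k-means: max tr(H^T K H) s.t. H^T H = I_k.
    The optimum is spanned by the top-k eigenvectors of K."""
    _, vecs = np.linalg.eigh(K)          # eigenvalues in ascending order
    H = vecs[:, -k:]                     # continuous cluster indicator
    # discretize by clustering the rows of H (a standard heuristic)
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=random_state).fit_predict(H)
    return H, labels
```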
4.3. Multiple Kernels’ Construction
4.4. Experimental Results and Analysis
4.5. Ablation Study
4.6. Parameters’ Sensitivity
4.7. Convergence
Algorithm 1: Pseudocode for solving problem (18).
Input: m base kernels and the three regularization parameters.
Initialize: G, K and H.
While not converged do
(1) Update W by solving (21).
(2) Update G by solving (22).
(3) Update K via (26).
(4) Update H via (27), and update the consensus kernel via (10) and (11).
end while
Obtain the optimal indicator matrix H.
Output: clustering results evaluated by ACC, NMI and Purity.
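As a runnable illustration of the alternate-until-converged structure above (not of the paper's actual subproblems (21)-(27), which require the model-specific closed forms), the sketch below substitutes the classical MKKM updates of [31]: an eigen-decomposition for the indicator and a closed-form weight update. The function name, initialization, and tolerance are assumptions.

```python
import numpy as np

def alternating_mkkm(kernels, k, max_iter=50, tol=1e-8):
    """Alternate between the spectral indicator H and kernel weights w
    until the objective stalls; a simplified stand-in for Algorithm 1."""
    m = len(kernels)
    w = np.full(m, 1.0 / m)                  # uniform initial weights
    prev_obj = np.inf
    for _ in range(max_iter):
        # combined kernel with squared weights (classical MKKM form)
        K_w = sum((wi ** 2) * K for wi, K in zip(w, kernels))
        _, vecs = np.linalg.eigh(K_w)
        H = vecs[:, -k:]                     # relaxed cluster indicator
        # closed-form weight update: w_p proportional to 1 / f_p,
        # where f_p = tr(K_p) - tr(H^T K_p H)
        f = np.array([np.trace(K) - np.trace(H.T @ K @ H) for K in kernels])
        w = (1.0 / f) / np.sum(1.0 / f)
        obj = float(np.trace(K_w) - np.trace(H.T @ K_w @ H))
        if abs(prev_obj - obj) < tol:        # declare convergence
            break
        prev_obj = obj
    return H, w
```

The same pattern, with the four updates of Algorithm 1 in place of the two here, yields the monotone objective decrease examined in Section 4.7.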
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Valizadegan, H.; Jin, R. Generalized maximum margin clustering and unsupervised kernel learning. In Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 4–7 December 2006; pp. 1417–1424.
2. Zeng, H.; Cheung, Y.M. Feature selection and kernel learning for local learning-based clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1532–1547.
3. Cortes, C.; Mohri, M.; Rostamizadeh, A. L2 regularization for learning kernels. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, 18–21 June 2009; pp. 109–116.
4. Zhao, B.; Kwok, J.T.; Zhang, C.S. Multiple kernel clustering. In Proceedings of the SIAM International Conference on Data Mining, Sparks, NV, USA, 30 April–2 May 2009; pp. 638–649.
5. Kloft, M.; Brefeld, U.; Sonnenburg, S.; Laskov, P.; Müller, K.R.; Zien, A. Efficient and accurate lp-norm multiple kernel learning. In Proceedings of the 23rd Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 997–1005.
6. Xu, Z.L.; Jin, R.; Yang, H.Q.; King, I.; Lyu, M.R. Simple and efficient multiple kernel learning by group lasso. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 1175–1182.
7. Du, L.; Zhou, P.; Shi, L.; Wang, H.M.; Fan, M.Y.; Wang, W.J.; Shen, Y.D. Robust multiple kernel k-means using l21-norm. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015; pp. 3476–3482.
8. Kang, Z.; Lu, X.; Yi, J.F.; Xu, Z.L. Self-weighted multiple kernel learning for graph-based clustering and semi-supervised classification. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 2312–2318.
9. Kang, Z.; Peng, C.; Cheng, Q.; Xu, Z.L. Unified spectral clustering with optimal graph. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 3366–3373.
10. Kang, Z.; Wen, L.J.; Chen, W.Y.; Xu, Z.L. Low-rank kernel learning for graph-based clustering. Knowl. Based Syst. 2019, 163, 510–517.
11. Zhou, S.H.; Zhu, E.; Liu, X.W.; Zheng, T.M.; Liu, Q.; Xia, J.Y.; Yin, J.P. Subspace segmentation-based robust multiple kernel clustering. Inf. Fusion 2020, 53, 145–154.
12. Liu, X.W. SimpleMKKM: Simple multiple kernel k-means. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 5174–5186.
13. Liu, X.W.; Zhou, S.H.; Wang, Y.Q.; Li, M.M.; Dou, Y.; Zhu, E.; Yin, J.P. Optimal neighborhood kernel clustering with multiple kernels. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2266–2272.
14. Ou, Q.Y.; Gao, L.; Zhu, E. Multiple kernel k-means with low-rank neighborhood kernel. IEEE Access 2021, 9, 3291–3300.
15. Zhou, S.H.; Liu, X.W.; Li, M.M.; Zhu, E.; Liu, L.; Zhang, C.W.; Yin, J.P. Multiple kernel clustering with neighbor-kernel subspace segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1351–1362.
16. Liu, X.W.; Zhou, S.H.; Liu, L.; Tang, C.; Wang, S.W.; Liu, J.Y.; Zhang, Y. Localized simple multiple kernel k-means. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 9273–9281.
17. Yao, Y.Q.; Li, Y.; Jiang, B.B.; Chen, H.H. Multiple kernel k-means clustering by selecting representative kernels. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4983–4996.
18. Liu, X.W.; Dou, Y.; Yin, J.P.; Wang, L.; Zhu, E. Multiple kernel k-means clustering with matrix-induced regularization. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 1888–1894.
19. Liu, J.Y.; Liu, X.W.; Xiong, J.; Liao, Q.; Zhou, S.H.; Wang, S.W.; Yang, Y.X. Optimal neighborhood multiple kernel clustering with adaptive local kernels. IEEE Trans. Knowl. Data Eng. 2022, 34, 2872–2885.
20. Afkanpour, A.; Szepesvári, C.; Bowling, M. Alignment based kernel learning with a continuous set of base kernels. Mach. Learn. 2013, 91, 305–324.
21. Wang, T.H.; Tian, S.F.; Huang, H.K.; Deng, D.Y. Learning by local kernel polarization. Neurocomputing 2009, 72, 3077–3084.
22. Wang, L. Feature selection with kernel class separability. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1534–1546.
23. Lu, Y.T.; Wang, L.T.; Lu, J.F.; Yang, J.Y.; Shen, C.H. Multiple kernel clustering based on centered kernel alignment. Pattern Recognit. 2014, 47, 3656–3664.
24. Li, M.M.; Liu, X.W.; Wang, L.; Dou, Y.; Yin, J.P.; Zhu, E. Multiple kernel clustering with local kernel alignment maximization. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 1704–1710.
25. Wang, C.L.; Zhu, E.; Liu, X.W.; Gao, L.; Yin, J.P.; Hu, N. Multiple kernel clustering with global and local structure alignment. IEEE Access 2018, 6, 77911–77920.
26. Li, J.F.; Qin, S.J.; Zhang, L.; Hou, W.T. An efficient method for solving a class of matrix trace function minimization problems in multivariate statistics. Math. Numer. Sin. 2021, 43, 70–86.
27. Lu, C.Y.; Feng, J.S.; Lin, Z.C.; Mei, T.; Yan, S.C. Subspace clustering by block diagonal representation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 487–501.
28. Feng, J.S.; Lin, Z.C.; Xu, H.; Yan, S.C. Robust subspace segmentation with block-diagonal prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3818–3825.
29. Dattorro, J. Convex Optimization and Euclidean Distance Geometry. 2016. Available online: http://meboo.convexoptimization.com/Meboo.html (accessed on 10 October 2022).
30. Schölkopf, B.; Smola, A.J.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319.
31. Huang, H.C.; Chuang, Y.Y.; Chen, C.S. Multiple kernel fuzzy clustering. IEEE Trans. Fuzzy Syst. 2012, 20, 120–134.
| Notation | Description |
|---|---|
| $\lVert A \rVert_F$ | Frobenius norm of $A$, i.e., $\lVert A \rVert_F = \sqrt{\operatorname{tr}(A^{\top}A)}$ |
| $A^{\top}$ | transpose of $A$ |
| $\operatorname{tr}(A)$ | trace of $A$ |
| $\operatorname{diag}(A)$ | diagonal matrix formed from the diagonal elements of $A$ |
| $A \succeq 0$ | $A$ is positive semi-definite |
| $I_k$ | identity matrix of order $k$ |
| $\mathbf{1}$ | all-one column vector |
| $\mathbf{1}\mathbf{1}^{\top}$ | all-one matrix |
| Data Set | # Samples | # Features | # Classes |
|---|---|---|---|
| AR | 840 | 768 | 120 |
| BA | 1404 | 320 | 36 |
| CCUDS10 | 1944 | 101 | 10 |
| GLIOMA | 50 | 4434 | 4 |
| ISOLET | 1560 | 617 | 2 |
| LYMPHOMA | 96 | 4026 | 9 |
| ORL | 400 | 1024 | 40 |
| YALE | 165 | 1024 | 15 |
| Dataset | Metric | KKM | MKKM | RMKKM | MKKM-MR | SimpleMKKM | MKKM-RK | Proposed |
|---|---|---|---|---|---|---|---|---|
| AR | ACC | 0.3000 | 0.3167 | 0.3168 | 0.4863 | 0.5150 | 0.5047 | 0.6686 |
|  | NMI | 0.6360 | 0.6350 | 0.6608 | 0.7615 | 0.7644 | 0.7608 | 0.8890 |
|  | Purity | 0.3190 | 0.3437 | 0.3358 | 0.5398 | 0.5304 | 0.5305 | 0.7826 |
| BA | ACC | 0.2863 | 0.3868 | 0.4088 | 0.4177 | 0.4496 | 0.3708 | 0.4211 |
|  | NMI | 0.4365 | 0.5301 | 0.5639 | 0.5882 | 0.5919 | 0.5194 | 0.6716 |
|  | Purity | 0.3226 | 0.4010 | 0.4329 | 0.4619 | 0.4780 | 0.3962 | 0.4773 |
| CCUDS10 | ACC | 0.1280 | 0.1214 | 0.1285 | 0.1345 | 0.1287 | 0.1287 | 0.2031 |
|  | NMI | 0.0093 | 0.0081 | 0.0091 | 0.0083 | 0.0102 | 0.0073 | 0.1054 |
|  | Purity | 0.1318 | 0.1234 | 0.1331 | 0.1357 | 0.1327 | 0.1310 | 0.2182 |
| GLIOMA | ACC | 0.5032 | 0.4880 | 0.5760 | 0.4955 | 0.5120 | 0.5640 | 0.7900 |
|  | NMI | 0.3256 | 0.2943 | 0.4818 | 0.3083 | 0.2957 | 0.4077 | 0.7178 |
|  | Purity | 0.5357 | 0.5400 | 0.6460 | 0.5341 | 0.5320 | 0.5787 | 0.8420 |
| ISOLET | ACC | 0.5659 | 0.5269 | 0.5643 | 0.5282 | 0.5801 | 0.5558 | 0.6489 |
|  | NMI | 0.0224 | 0.0021 | 0.0121 | 0.0023 | 0.0192 | 0.0090 | 0.0862 |
|  | Purity | 0.5659 | 0.5269 | 0.5643 | 0.5282 | 0.5801 | 0.5558 | 0.6500 |
| LYMPHOMA | ACC | 0.4982 | 0.5085 | 0.6135 | 0.5437 | 0.5932 | 0.5639 | 0.6929 |
|  | NMI | 0.5105 | 0.5070 | 0.6172 | 0.6495 | 0.6099 | 0.5963 | 0.7070 |
|  | Purity | 0.7163 | 0.7036 | 0.8031 | 0.7826 | 0.8266 | 0.8125 | 0.8350 |
| ORL | ACC | 0.4308 | 0.3475 | 0.5521 | 0.6357 | 0.6391 | 0.5860 | 0.7127 |
|  | NMI | 0.6383 | 0.5378 | 0.7406 | 0.8163 | 0.8073 | 0.7581 | 0.8898 |
|  | Purity | 0.4797 | 0.3525 | 0.6001 | 0.6908 | 0.6860 | 0.6188 | 0.8107 |
| YALE | ACC | 0.4182 | 0.3515 | 0.5218 | 0.5341 | 0.5512 | 0.5939 | 0.6962 |
|  | NMI | 0.4330 | 0.4152 | 0.5558 | 0.5614 | 0.5826 | 0.5986 | 0.8262 |
|  | Purity | 0.4424 | 0.3636 | 0.5364 | 0.5495 | 0.5555 | 0.5988 | 0.7691 |
| Avg | ACC | 0.3913 | 0.3809 | 0.4602 | 0.4720 | 0.4961 | 0.4835 | 0.6042 |
|  | NMI | 0.3765 | 0.3662 | 0.4552 | 0.4620 | 0.4602 | 0.4572 | 0.6116 |
|  | Purity | 0.4392 | 0.4193 | 0.5065 | 0.5278 | 0.5402 | 0.5278 | 0.6731 |
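For reference, the three metrics reported above can be computed as follows; `clustering_accuracy` uses the standard Hungarian matching between predicted and true labels, and NMI is available directly from scikit-learn. Labels are assumed to be integer-coded from 0.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score  # NMI

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one cluster-to-class matching (Hungarian method)."""
    d = max(int(y_true.max()), int(y_pred.max())) + 1
    count = np.zeros((d, d), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1                         # contingency table
    rows, cols = linear_sum_assignment(-count)   # maximize matched samples
    return count[rows, cols].sum() / len(y_true)

def purity(y_true, y_pred):
    """Purity: every cluster votes for its majority class."""
    total = sum(np.bincount(y_true[y_pred == c]).max()
                for c in np.unique(y_pred))
    return total / len(y_true)

# nmi = normalized_mutual_info_score(y_true, y_pred)
```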