Article

Sparse Feature-Weighted Double Laplacian Rank Constraint Non-Negative Matrix Factorization for Image Clustering

1 School of Mathematics and Information Science, North Minzu University, Yinchuan 750030, China
2 School of Mathematics and Computer Application, Shangluo University, Shangluo 726000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(23), 3656; https://doi.org/10.3390/math12233656
Submission received: 16 October 2024 / Revised: 14 November 2024 / Accepted: 18 November 2024 / Published: 22 November 2024

Abstract

As an extension of non-negative matrix factorization (NMF), graph-regularized non-negative matrix factorization (GNMF) has been widely applied in data mining and machine learning, particularly for tasks such as clustering and feature selection. Traditional GNMF methods typically rely on predefined graph structures to guide the decomposition process, using fixed data graphs and feature graphs to capture relationships between data points and features. However, these fixed graphs may limit the model’s expressiveness. Additionally, many NMF variants face challenges when dealing with complex data distributions and are vulnerable to noise and outliers. To overcome these challenges, we propose a novel method called sparse feature-weighted double Laplacian rank constraint non-negative matrix factorization (SFLRNMF), along with its extended version, SFLRNMTF. These methods adaptively construct more accurate data similarity and feature similarity graphs, while imposing rank constraints on the Laplacian matrices of these graphs. This rank constraint ensures that the resulting matrix ranks reflect the true number of clusters, thereby improving clustering performance. Moreover, we introduce a feature weighting matrix into the original data matrix to reduce the influence of irrelevant features and apply an $L_{2,1/2}$-norm sparsity constraint on the basis matrix to encourage sparse representations. An orthogonal constraint is also enforced on the coefficient matrix to ensure interpretability of the dimensionality reduction results. In the extended model (SFLRNMTF), we introduce a double orthogonal constraint on the basis matrix and coefficient matrix to enhance the uniqueness and interpretability of the decomposition, thereby facilitating clearer clustering results for both rows and columns. However, enforcing double orthogonal constraints can reduce approximation accuracy, especially with low-rank matrices, as it restricts the model’s flexibility. To address this limitation, we introduce an additional factor matrix R, which acts as an adaptive component that balances the trade-off between constraint enforcement and approximation accuracy. This adjustment allows the model to achieve greater representational flexibility, improving reconstruction accuracy while preserving the interpretability and clustering clarity provided by the double orthogonality constraints. Consequently, the SFLRNMTF approach becomes more robust in capturing data patterns and achieving high-quality clustering results in complex datasets. We also propose an efficient alternating iterative update algorithm to optimize the proposed model and provide a theoretical analysis of its performance. Clustering results on four benchmark datasets demonstrate that our method outperforms competing approaches.

1. Introduction

With the rapid development of information technology and artificial intelligence, the scale and dimensionality of data are also increasing significantly. This vast amount of high-dimensional data poses great challenges to traditional machine learning and statistical analysis. In the fields of data mining and machine learning, data representation [1] is a fundamental and crucial task, as it directly influences the performance and interpretability of subsequent models. Effective low-dimensional data representation can handle massive high-dimensional data, reduce redundant features in the original data, and reveal the latent structural information of the data [2]. The primary objective of data representation is to effectively characterize the original data using various techniques, thereby facilitating subsequent tasks like clustering and classification. As a result, data representation plays a crucial role in a range of applications, including information retrieval, classification, hyperspectral image processing [3], and information extraction [4,5,6,7].
Over the past few decades, to effectively handle high-dimensional data, researchers have developed a variety of data representation methods. These methods include Principal Component Analysis (PCA) [8], Manifold Learning [9,10,11], Linear Discriminant Analysis (LDA) [12], Concept Factorization (CF) [13], Sparse Coding (SC) [14], non-negative matrix factorization (NMF) [15,16], Deep Learning (DL) [17], and low-rank representation (LRR) [18]. Among them, non-negative matrix factorization (NMF) has become one of the most widely used data representation techniques [19] due to its excellent interpretability. However, the classic NMF algorithm only imposes non-negativity constraints, which is not sufficient to meet the diverse clustering needs. Therefore, to enhance its clustering performance, researchers have embedded various additional constraints into the NMF algorithm. For example, Gao et al. [20] proposed a sparse non-negative matrix factorization (SNMF) algorithm, which enhances the algorithm’s learning capability by embedding sparsity constraints as a penalty term within the NMF framework. Similarly, Ding et al. [21] introduced an Orthogonal Non-negative Matrix Tri-factorization (ONMTF) algorithm, which imposes orthogonality constraints on both the basis matrix and coefficient matrix, aiming to produce clearer and more interpretable clustering results during NMF decomposition. Although these NMF variants improve clustering performance by imposing different constraints on the classical NMF algorithm, they fail to consider the manifold structure of the data, which is crucial for clustering. To explore the geometric structure of data manifolds and feature manifolds, researchers encode the geometric information of data and feature spaces by constructing similarity graphs and embed graph regularization into the original NMF to reveal the intrinsic geometric structure of the data.
Consequently, various graph-regularized NMF variants have been proposed. For example, Cai et al. [22] introduced graph-regularized NMF (GNMF), which learns the local manifold structure of the data space by constructing a similarity graph. However, GNMF only considers the similarity in the data space and does not address the feature space. For this purpose, Shang et al. [23] proposed dual regularized NMF (DNMF) by constructing two similarity graphs to explore the geometric structure information in both the data and feature spaces simultaneously. Inspired by Shang et al. [23], Sun et al. [24] introduced sparse dual graph-regularized NMF (SDGNMF), which not only incorporates label information into the graph regularization but also imposes sparsity constraints on the basis matrix. To prevent misalignment between the image and basis vectors and further enhance the algorithm’s discriminative ability, Li et al. [25] proposed semi-supervised dual orthogonally constrained dual graph-regularized NMF (SDGNMF-BO). This method integrates dual orthogonality constraints and dual graph regularization into a semi-supervised NMF framework, further enhancing the learning capability in the subspace. To maximize the sparsity of the learned coefficient matrix, Li et al. [26] introduced semi-supervised graph and local coordinate regularized NMF. By embedding local coordinate constraints into a semi-supervised NMF framework with graph regularization, the sparsity of the coefficient matrix is enhanced. Inspired by Li et al. [26] and considering that matrix factorization may have multiple solutions, Wang et al. [27] proposed locally orthogonally constrained semi-supervised dual graph-regularized NMF (LOSDNMF). This method integrates dual graph regularization, local coordinate constraints, and dual orthogonality constraints into a semi-supervised NMF framework, effectively enhancing the sparsity and discriminative power of data representation. Although graph regularization methods are generally superior to many other approaches, these methods often rely on the K-nearest neighbors (KNN) approach to construct the graph structure, and this construction can interfere with the matrix factorization process, potentially leading to suboptimal clustering results. To explore the nonlinear structure of data and construct a similarity graph with an optimal block-diagonal structure, Xu et al. [28] proposed an explicit data-driven kernel learning strategy. This strategy directly learns the kernel through the self-representation of the data, simultaneously enabling adaptive weighting. Based on this kernel, the local manifold structure of the data can be preserved in the nonlinear space via a kernel-based local manifold term, facilitating the construction of a graph structure with an optimal block-diagonal form. Recognizing that multi-view clustering can enhance clustering performance by effectively integrating complementary information from different views, Xu et al. [29] further proposed a novel multi-view clustering algorithm that adaptively constructs a kernel matrix without requiring a predefined kernel function. Inspired by the aforementioned research, constructing adaptive graph structures has also gained increasing attention in non-negative matrix factorization (NMF). For instance, the NMF with Adaptive Neighbors (NMFAN) method [30] introduces an adaptive graph mechanism to achieve simultaneous optimization of matrix factorization and similarity learning.
This method balances the interactions between these two sub-tasks, allowing each sub-task to iteratively optimize based on the results of the other, thereby ensuring a more accurate construction of the similarity matrix in the data graph. Additionally, Shu et al. [31] proposed a new data representation method (RCNMF) by imposing a rank constraint on the Laplacian matrix of the learned graph to ensure that the connected components precisely match the sample categories. On the other hand, with the continuous development of non-negative matrix factorization (NMF), an increasing number of constraints have been introduced. However, excessive constraints may lead to unreliable solutions. Therefore, to address the issue of limited degrees of freedom caused by too many constraints, some researchers have introduced a third decomposition factor, denoted as R, within the NMF framework. This additional factor, serving as a scaling function, not only provides extra degrees of freedom for the factor matrices X, U, and V, but also enhances the flexibility of the decomposition process. For example, Tang et al. [32] proposed a new three-factor matrix decomposition model. By introducing dual graph regularization and dual orthogonality constraints into NMF, this model not only explores the geometric properties of data and feature manifolds but also ensures the orthogonality of the factor matrices.
Inspired by the aforementioned algorithms, we have not fixed the input feature map and input data graph related to the affinity matrix in our model. Instead, we learn new data similarity matrix D and feature similarity matrix A based on the initial data similarity matrix W and the initial feature similarity matrix S. These new optimal similarity graphs are more suitable for clustering tasks. Moreover, we have imposed rank constraints on the Laplacian graphs of these two new similarity matrices to ensure that the number of connected components is consistent with the number of sample categories. Furthermore, inspired by recent research on orthogonal constraints by Ding et al. [21] and sparse constraints by Luo et al. [33], we have introduced sparsity and double orthogonal constraints within the non-negative matrix factorization framework with dual Laplacian rank constraints. These double orthogonal constraints not only address the issues of slow optimization and high computational complexity in existing NMF models but also prevent mismatches between images and base vectors. Thus, this enhancement effectively improves the discriminative and exclusive nature of clustering.
Therefore, we propose a novel non-negative matrix factorization algorithm called sparse feature-weighted dual Laplacian rank-constrained non-negative matrix factorization (SFLRNMF), along with its extended version, sparse feature-weighted dual Laplacian rank-constrained non-negative tri-factor matrix factorization (SFLRNMTF). Our main contributions include the following:
(1)
A novel learning mechanism, dual Laplacian rank-constrained non-negative matrix factorization (SFLRNMF), has been proposed. This mechanism is capable of learning the optimal feature similarity graph and data similarity graph, and rank constraints are applied to the Laplacian matrices of both graphs to construct an optimal dual graph regularizer. Additionally, a weight matrix has been constructed to explore the attributes of the original data and the diversity of the samples. Furthermore, an $L_{2,1/2}$-norm sparsity constraint has been imposed on the basis matrix to promote sparsity, simplify computation, and enhance the model’s local learning capabilities and robustness.
(2)
Based on SFLRNMF, the dual Laplacian rank-constrained non-negative matrix tri-factorization (SFLRNMTF) model is proposed. Specifically, orthogonal constraints are imposed on both the coefficient matrix and the basis matrix to ensure that each data point has a unique basis vector in the feature space, which enhances the discriminative ability of clustering. Additionally, excessive constraints may lead to too few degrees of freedom, which results in unreliable solutions during factor decomposition. Therefore, a factor R has been introduced to ensure the accuracy of the factor decomposition.
(3)
Corresponding iterative update optimization schemes have been proposed, and convergence proofs for the two algorithms have been provided. Furthermore, the effectiveness of these two models has been validated by conducting experiments on benchmark datasets and comparing the results with several of the most advanced clustering methods.
The rest of the paper is structured as follows: In Section 2, we introduce the basic principles of standard NMF and its variants. In Section 3, the novel SFLRNMF algorithm is presented, and its convergence is provided theoretically. Section 4 introduces the extended version of SFLRNMF and provides the convergence of the optimization process. In Section 5, we conduct numerical experiments to demonstrate the efficiency of our two methods. In Section 6, we summarize the paper and discuss future work.

2. Related Works

2.1. NMF

NMF aims to approximate the original high-dimensional data matrix X by decomposing it into two lower-dimensional matrices U and V. Some of the notation used in NMF is shown in Table 1.
Given a non-negative dataset $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$, NMF seeks $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$ such that $X \approx UV^T$. Here, $r \ll \min(m, n)$, where U represents the basis matrix in the data space, and V represents the coefficient matrix in the feature space. The objective function of this problem is as follows:
$J_1 = \|X - UV^T\|_F^2 = \sum_{i=1}^{m}\sum_{j=1}^{n}\left(x_{ij} - \sum_{l=1}^{r} u_{il}v_{jl}\right)^2 \quad \text{s.t.}\ U \geq 0,\ V \geq 0$
In the formula, $\|\cdot\|_F$ represents the Frobenius norm. The objective function $J_1$ in the above Formula (1) is non-convex with respect to the joint variables (U, V). To make $J_1$ a convex function, it is necessary to fix one of the variables (U, V). Therefore, the objective function can be solved iteratively using an alternating approach. To effectively solve the objective function of NMF, Lee et al. [15] proposed the well-known multiplicative update method, which allows the objective function to be solved using a simple multiplicative iterative update approach. The update rules for Formula (1) are given below:
$u_{ij} \leftarrow u_{ij}\dfrac{(XV)_{ij}}{(UV^TV)_{ij}}$
$v_{ij} \leftarrow v_{ij}\dfrac{(X^TU)_{ij}}{(VU^TU)_{ij}}$
Here, i ∈ {1, …, m}, l ∈ {1, …, r} and j ∈ {1, …, n}.
At the beginning of the iterative process, we randomly initialize the basis matrix U and the coefficient matrix V and then update them according to the iterative update rules in Formulas (2) and (3) until the final condition is satisfied.
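For illustration, the following is a minimal NumPy sketch of the multiplicative updates in Formulas (2) and (3). The rank r, the random initialization, and the small constant eps added to the denominators are implementation choices made here for completeness and are not part of the original formulation.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-10, seed=0):
    """Approximate the non-negative X (m x n) as U @ V.T with U (m x r), V (n x r)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r))
    V = rng.random((n, r))
    for _ in range(n_iter):
        # u_ij <- u_ij * (X V)_ij / (U V^T V)_ij
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        # v_ij <- v_ij * (X^T U)_ij / (V U^T U)_ij
        V *= (X.T @ U) / (V @ (U.T @ U) + eps)
    return U, V
```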

2.2. GNMF

Cai et al. [22] developed a graph-regularized non-negative matrix factorization (GNMF) for data representation. This model constructs an affinity graph to describe the manifold structure embedded in a high-dimensional ambient space, aiming to optimize the problem. The objective function of GNMF is as follows:
$O_{GNMF} = \|X - UV^T\|_F^2 + \lambda\,\mathrm{Tr}(V^T L V) \quad \text{s.t.}\ U \geq 0,\ V \geq 0$
Here, L is the Laplacian matrix in the data space. For the above Equation (4), the following update rules are provided:
$u_{ij} \leftarrow u_{ij}\dfrac{(XV)_{ij}}{(UV^TV)_{ij}}$
$v_{ij} \leftarrow v_{ij}\dfrac{(X^TU + \lambda W V)_{ij}}{(VU^TU + \lambda D V)_{ij}}$
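As a rough illustration of how the graph terms enter the updates in Formulas (5) and (6), the sketch below assumes a precomputed data affinity matrix W and derives its degree matrix D inside the function; the eps guard is an added numerical safeguard, not part of the original derivation.

```python
import numpy as np

def gnmf_step(X, U, V, W_graph, lam=1.0, eps=1e-10):
    """One multiplicative GNMF update; W_graph is the (n x n) data affinity matrix."""
    D_graph = np.diag(W_graph.sum(axis=1))            # degree matrix of the affinity graph
    U *= (X @ V) / (U @ (V.T @ V) + eps)              # same as plain NMF for U
    V *= (X.T @ U + lam * (W_graph @ V)) / (V @ (U.T @ U) + lam * (D_graph @ V) + eps)
    return U, V
```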

2.3. NMFAN

Huang et al. [30] pay special attention to the local connectivity of data points in their adaptive graph-regularized non-negative matrix factorization (NMFAN) method. They strive to construct an ideal similarity matrix aimed at achieving effective neighbor allocation. Based on the assumption that data points closer in distance are more likely to become neighbors, this method selects the best neighbors for each data point to build the similarity matrix. By imposing constraints on this matrix, the neighbor allocation process becomes adaptive. The objective function of NMFAN is as follows:
$O_{NMFAN} = \|X - UV^T\|_F^2 + \lambda\,\mathrm{Tr}(V^T L_S V) + \sum_{i,j=1}^{n}\left(\|x_i - x_j\|_2^2 S_{ij} + \gamma S_{ij}^2\right) \quad \text{s.t.}\ s_i^T\mathbf{1} = 1,\ 0 \leq s_i \leq 1,\ U \geq 0,\ V \geq 0$
Based on the above objective function, the iterative update rules for Formula (7) are as follows:
$u_{ij} \leftarrow u_{ij}\dfrac{(XV)_{ij}}{(UV^TV)_{ij}}$
$v_{ij} \leftarrow v_{ij}\dfrac{(X^TU + \lambda W_S V)_{ij}}{(VU^TU + \lambda D_S V)_{ij}}$
For the update of S in Equation (7), the following problem needs to be solved:
$\min_{s_i^T\mathbf{1} = 1,\ 0 \leq s_i \leq 1}\left\|s_i + \dfrac{1}{2\gamma}d_i\right\|_2^2$
Here, $d_i$ denotes the vector whose $j$-th element is $d_{ij}$, defined as follows:
$d_{ij} = \|x_i - x_j\|_2^2 + \dfrac{\lambda}{2\mu}\|v_i - v_j\|_2^2$
For the update of γ in Equation (7), the following problem needs to be solved:
$\gamma = \dfrac{1}{n}\sum_{i=1}^{n}\left(\dfrac{k}{2}\|x_i - x_{k+1}\|_2^2 - \dfrac{1}{2}\sum_{j=1}^{k}\|x_i - x_j\|_2^2\right)$

2.4. DNMF

Considering that both the observed data and features are situated on a low-dimensional manifold, Shang et al. [23] proposed the dual graph-regularized non-negative matrix factorization (DNMF). This model constructs dual affinity graphs to simultaneously mine the geometric structural information contained in the data points and features, aiming to optimize the problem. The objective function of DNMF is as follows:
$O_{DNMF} = \|X - UV^T\|_F^2 + \lambda\,\mathrm{Tr}(V^T L_V V) + \mu\,\mathrm{Tr}(U^T L_U U) \quad \text{s.t.}\ U \geq 0,\ V \geq 0$
Based on the above objective function, the iterative update rules for Formula (13) are as follows:
$u_{ij} \leftarrow u_{ij}\dfrac{(XV + \mu W_U U)_{ij}}{(UV^TV + \mu D_U U)_{ij}}$
$v_{ij} \leftarrow v_{ij}\dfrac{(X^TU + \lambda W_V V)_{ij}}{(VU^TU + \lambda D_V V)_{ij}}$

2.5. SDGNMF-BO

Inspired by co-clustering algorithms, Li et al. [25] introduced dual graph regularization into the semi-supervised non-negative matrix factorization (NMF) framework and proposed the semi-supervised dual orthogonal constrained dual graph-regularized subspace clustering NMF method (SDGNMF-BO). This method aims to more effectively utilize the latent structures and class information in the data. To enhance the model’s recognition capability, they imposed soft orthogonal constraints on the decomposed basis and coefficient matrices. The objective function of SDGNMF-BO is as follows:
$O_{SDGNMF\text{-}BO} = \|X - H A^T C^T\|_F^2 + \alpha\left(\mathrm{Tr}(H^T L_H H) + \mathrm{Tr}(A^T C^T L_{AC} C A)\right) + \beta\left(\mathrm{Tr}(H^T H - I) + \mathrm{Tr}(A^T C^T C A - I)\right)$
$\text{s.t.}\ H^T H = I,\ A^T C^T C A = I,\ h_{ij} \geq 0,\ c_{ij} \geq 0$
Based on the above objective function, the iterative update rules for Formula (16) are as follows:
$h_{ij} \leftarrow h_{ij}\dfrac{(XCA + \alpha W_H H)_{ij}}{(H A^T C^T C A + \alpha D_H H + \beta H)_{ij}}$
$a_{ij} \leftarrow a_{ij}\dfrac{(C^T X^T H + \alpha C^T W_{AC} C A)_{ij}}{(C^T C A H^T H + \alpha C^T D_{AC} C A + \beta C^T C A)_{ij}}$
$c_{ij} \leftarrow c_{ij}\dfrac{(X^T H A^T + \alpha W_{AC} C A)_{ij}}{(C A H^T H A^T + \alpha D_{AC} C A A^T + \beta C A A^T)_{ij}}$

3. The Presented Model

3.1. The Motivation of the Proposed Method

Inspired by Local Linear Embedding (LLE) [10] and Laplacian Eigenmaps (LE) [11], researchers have developed several graph-regularized NMF clustering algorithms. However, the clustering performance of these algorithms highly depends on the quality of the graph model. Therefore, we propose automatically adjusting the weights of a given similarity matrix to learn the optimal similarity matrix. Specifically, our goal is to learn the new data similarity matrix D and feature similarity matrix A based on the initial data similarity matrix W and the initial feature similarity matrix S, thus constructing an optimal graph more suited for clustering tasks. Figure 1 shows the process of constructing this optimal graph. We use traditional weighting methods, such as 0–1 weighting, heat kernel weighting, or probabilistic neighborhood methods, to construct nearest neighbor graphs related to the data similarity matrix W and feature similarity matrix S. Then, we impose rank constraints on the Laplacian matrix of the learned similarity matrices and iteratively update the similarity matrices A and D.
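For concreteness, the sketch below shows one common way to build such an initial nearest-neighbor graph with heat-kernel weighting. The bandwidth sigma and the simple symmetrization step are assumptions made for illustration, and k = 5 matches the neighbor count used later in the experiments.

```python
import numpy as np

def knn_heat_kernel_graph(X_cols, k=5, sigma=1.0):
    """Heat-kernel k-NN affinity for the columns of X (one column per sample)."""
    Z = X_cols.T                                     # samples as rows
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    A = np.zeros_like(d2)
    for i in range(len(Z)):
        nbrs = np.argsort(d2[i])[1:k + 1]            # skip the point itself
        A[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return (A + A.T) / 2                             # symmetrize the affinity matrix
```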
Through the analysis above, in this section, we propose a new NMF model, namely sparse feature-weighted dual graph Laplacian rank-constrained non-negative matrix factorization (SFLRNMF), designed for graph-based clustering.

3.2. Sparse Constraints

Sparse constraints involve using appropriate sparse models to represent sparse data. Introducing sparse constraints into non-negative matrix factorization (NMF) combines the advantages of NMF and sparse representation, thus enhancing the effectiveness of NMF methods. In particular, imposing sparse constraints on the basis matrix U when decomposing the original matrix X has been proven to be a very successful and practical strategy. When each row of the basis matrix is sparse, fewer basis elements are needed to represent the original matrix, which greatly aids in data recovery. Therefore, sparse constraints have received widespread attention in recent years. Xu et al.’s [34] research shows that the sparse effect of the $L_p$-norm is better when $p = 1/2$, making the $L_p$-norm-based sparse constraint an increasingly favored condition among researchers. We first use the $L_p$-norm to impose sparse constraints on the basis matrix U with the specific method as follows:
$\|U\|_{2,1/2} = \left(\sum_{i=1}^{m}\|u_i\|_2^{1/2}\right)^2$
Imposing $L_p$-norm sparse constraints on the basis matrix enhances the algorithm’s robustness, local learning capability, and clustering performance, while making the basis matrix U sparser and simplifying the computational process.
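A small sketch of how the $L_{2,1/2}$ penalty defined in Equation (19) can be evaluated for a given basis matrix; the only assumption is that the rows of U are the vectors $u_i$ in the definition above.

```python
import numpy as np

def l21_half_norm(U):
    """||U||_{2,1/2} = ( sum_i ||u_i||_2^{1/2} )^2, with u_i the i-th row of U."""
    row_norms = np.linalg.norm(U, axis=1)     # ||u_i||_2 for every row
    return np.sum(np.sqrt(row_norms)) ** 2
```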

3.3. Bi-Orthogonal Constraints

Ding et al. [21] observed that in non-negative matrix factorization satisfying $X \approx UV^T$, for a given solution $(U, V)$ there also exists another solution $(UA, VB)$ where $AB^T = I$, $UA \geq 0$, and $VB \geq 0$. To avoid such erroneous solutions, orthogonal constraints should be applied to the basis matrix after decomposition, such that $U^TU = I$. Additionally, to differentiate various features and ensure that each feature vector points in distinct directions, which facilitates clearer and more distinguishable clustering in the sample space, orthogonal constraints should also be applied to the coefficient matrix, such that $V^TV = I$. Imposing orthogonal constraints in both the feature and sample spaces helps to distinguish different features and samples more prominently. This significantly enhances the performance of clustering algorithms, making different data groups more distinct and independent.

3.4. Feature Weighting

In this section, we integrate a feature weighting mechanism into non-negative matrix factorization (NMF) to better differentiate the importance of features in the original matrix, thereby improving the model’s performance and interpretability. By introducing a feature weighting matrix T, the objective function can be summarized as follows:
$O = \|TX - UV^T\|_F^2 \quad \text{s.t.}\ V \geq 0,\ U \geq 0,\ T = \mathrm{diag}(t),\ t_i \geq 0,\ \sum_{i=1}^{m} t_i = 1$

3.5. Models of Proposed Methods

Our strategy is to construct an optimal dual graph regularizer based on the initial data similarity matrix and the initial feature similarity matrix, both of which are block-diagonal matrices. To establish this objective, we begin with the following theorem.
Theorem 1
([35]). If the affinity matrix $W$ is non-negative, then the Laplacian matrix $L_W = D_W - (W^T + W)/2$ is positive semi-definite, where the degree matrix $D_W \in \mathbb{R}^{n \times n}$ is a diagonal matrix whose $i$-th diagonal element is $\sum_j (w_{ij} + w_{ji})/2$. It has the following important property [36,37]: the multiplicity $k$ of the eigenvalue zero of $L_W$ equals the number of connected components of the graph associated with the initial data similarity graph $W$.
Theorem 1 indicates that, given an affinity matrix W, if the rank of its Laplacian matrix is equal to $n-k$, the graph has exactly $k$ connected components and is therefore an ideal graph. Therefore, for a given initial data affinity matrix W and feature affinity matrix S, we can learn the corresponding data similarity matrix D and feature similarity matrix A. The Laplacian matrices corresponding to these two similarity matrices are $L_S = D_S - (S^T + S)/2$ and $L_W = D_W - (W^T + W)/2$, respectively.
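The Laplacian construction used throughout this section can be written compactly as follows; this is a direct transcription of Theorem 1 rather than an additional algorithmic step.

```python
import numpy as np

def graph_laplacian(W):
    """L = D - (W^T + W)/2, with D diagonal and d_ii = sum_j (w_ij + w_ji)/2 (Theorem 1)."""
    W_sym = (W + W.T) / 2.0
    D = np.diag(W_sym.sum(axis=1))
    return D - W_sym
```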
By applying rank constraints to the two Laplacian matrices, namely $\mathrm{rank}(L_S) = n-k$ and $\mathrm{rank}(L_W) = m-k$, and using the Frobenius norm to measure the approximation error between the initial affinity matrices W and S and the learned similarity matrices D and A, respectively, the constrained Laplacian rank for graph-based clustering can be formulated as the following problem:
$O = \|S - A\|_F^2 + \|W - D\|_F^2 \quad \text{s.t.}\ \sum_j s_{ij} = 1,\ s_{ij} \geq 0,\ \sum_j w_{ij} = 1,\ w_{ij} \geq 0,\ \mathrm{rank}(L_S) = n-k,\ \mathrm{rank}(L_W) = m-k$
The above objective function, because $D_S$ depends on $S$ and $D_W$ depends on $W$ and because of the constraints $\mathrm{rank}(L_S) = n-k$ and $\mathrm{rank}(L_W) = m-k$, is evidently a complex nonlinear constrained problem. Next, we will reformulate this problem using Laplacian rank constraints.
Assume $\sigma_i(L_W)$ represents the $i$-th smallest eigenvalue of $L_W$, and $\sigma_i(L_S)$ represents the $i$-th smallest eigenvalue of $L_S$. Additionally, since $L_W$ and $L_S$ are both positive semi-definite matrices, their eigenvalues $\sigma_i(L_S)$ and $\sigma_i(L_W)$ are non-negative. It can be seen that, for a sufficiently large value of $\alpha$, problem (22) is equivalent to the following problem:
$\min_{S,W} J = \|S - A\|_F^2 + \|W - D\|_F^2 + 2\alpha\left(\sum_{i=1}^{k}\sigma_i(L_S) + \sum_{i=1}^{k}\sigma_i(L_W)\right) \quad \text{s.t.}\ \sum_j s_{ij} = 1,\ s_{ij} \geq 0,\ \sum_j w_{ij} = 1,\ w_{ij} \geq 0$
Therefore, when $\alpha$ is sufficiently large, the optimal solutions S and W to Equation (25) will make $\sum_{i=1}^{k}\sigma_i(L_S) + \sum_{i=1}^{k}\sigma_i(L_W)$ equal to zero, thereby satisfying the two rank constraints in problem (22). According to the Ky Fan theorem [38], the following equality can be obtained:
$\sum_{i=1}^{k}\sigma_i(L_S) = \min_{F \in \mathbb{R}^{n \times k},\ F^TF = I}\mathrm{Tr}(F^T L_S F), \qquad \sum_{i=1}^{k}\sigma_i(L_W) = \min_{F \in \mathbb{R}^{m \times k},\ F^TF = I}\mathrm{Tr}(F^T L_W F)$
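The Ky Fan identity above can be checked numerically: the sum of the k smallest eigenvalues of a symmetric Laplacian equals the minimum of $\mathrm{Tr}(F^T L F)$ over orthonormal F, attained at the eigenvectors of the k smallest eigenvalues. A minimal sketch, using NumPy's symmetric eigensolver, is given below.

```python
import numpy as np

def sum_k_smallest_eigvals(L, k):
    """Sum of the k smallest eigenvalues of a symmetric PSD Laplacian L.
    By the Ky Fan theorem this equals min_{F^T F = I} Tr(F^T L F)."""
    eigvals = np.linalg.eigvalsh(L)   # ascending order for symmetric matrices
    return eigvals[:k].sum()
```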
Further, expression (23) can be rewritten in the following form:
$\min_{S,W} J = \|S - A\|_F^2 + \|W - D\|_F^2 + \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) \quad \text{s.t.}\ S\mathbf{1} = \mathbf{1},\ W\mathbf{1} = \mathbf{1},\ S \in \mathbb{R}^{n \times n},\ W \in \mathbb{R}^{m \times m}$
It is worth noting that $\mathbf{1}$ is a vector with all elements equal to 1. Next, we construct the optimal dual graph regularization term within the feature-weighted NMF framework. The objective function is written in the following form:
$\min_{S,W} J = \|S - A\|_F^2 + \|W - D\|_F^2 + \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \|TX - UV^T\|_F^2$
$\text{s.t.}\ S\mathbf{1} = \mathbf{1},\ W\mathbf{1} = \mathbf{1},\ S \geq 0,\ W \geq 0,\ S \in \mathbb{R}^{n \times n},\ W \in \mathbb{R}^{m \times m},\ T = \mathrm{diag}(t),\ t_i \geq 0,\ \sum_{i=1}^{m} t_i = 1,\ U \geq 0,\ V \geq 0$
Considering the sparsity constraints and orthogonality constraints in Equation (26), the objective function of the proposed SFLRNMF framework can be summarized as follows:
$\min_{S,W} J = \|S - A\|_F^2 + \|W - D\|_F^2 + \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \|TX - UV^T\|_F^2 + \theta\|U\|_{2,1/2}^{1/2}$
$\text{s.t.}\ V^TV = I,\ S\mathbf{1} = \mathbf{1},\ W\mathbf{1} = \mathbf{1},\ S \geq 0,\ W \geq 0,\ S \in \mathbb{R}^{n \times n},\ W \in \mathbb{R}^{m \times m},\ T = \mathrm{diag}(t),\ t_i \geq 0,\ \sum_{i=1}^{m} t_i = 1,\ U \geq 0,\ V \geq 0$
Here, $\alpha$ and $\theta$ are non-negative and balance the weight of the first reconstruction error term against the other terms, with $\theta$ being the sparsity parameter. $T$ is a diagonal matrix, $T \in \mathbb{R}^{m \times m}$, which assigns a weight to each feature of the original matrix $X$.

3.6. An Efficient Iterative Update Rule for Solving the Proposed Model

To address this non-convex problem, by alternately optimizing the following variables, we can transform the Lagrangian function of the objective function (27) into the following equations:
$\min_{S,W} J = \|S - A\|_F^2 + \|W - D\|_F^2 + \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \mathrm{Tr}(TXX^TT^T) - 2\,\mathrm{Tr}(X^TT^TUV^T) + \mathrm{Tr}(VU^TUV^T) + \beta\,\mathrm{Tr}(V^TV - I) + 4\theta\,\mathrm{Tr}(U^TQU)$
Here, $Q \in \mathbb{R}^{m \times m}$ is a diagonal matrix. Its $i$-th diagonal element can be computed as follows:
$q_{ii} = \dfrac{1}{4\max\left(\|u_i\|_2^{3/2},\ \varepsilon\right)}$
Here, $\varepsilon$ is a sufficiently small constant used to avoid division by zero in the above equation. To iteratively update the basis matrix U, the coefficient matrix V, and the feature weighting matrix T, we take the partial derivatives of J:
$\dfrac{\partial J}{\partial U} = 2\alpha L_W U - 2TXV + 2UV^TV + 8\theta QU$
$\dfrac{\partial J}{\partial V} = 2\alpha L_S V - 2X^TT^TU + 2VU^TU + 2\beta V$
$\dfrac{\partial J}{\partial T} = 2TXX^T - 2UV^TX^T$
According to the Karush–Kuhn–Tucker (KKT) conditions, the iterative updates for the basis matrix U, the coefficient matrix V, and the feature weighting matrix T are as follows:
$u_{ij} \leftarrow u_{ij}\dfrac{(TXV + \alpha W_W U)_{ij}}{(\alpha D_W U + UV^TV + 4\theta QU)_{ij}}$
$v_{ij} \leftarrow v_{ij}\dfrac{(X^TT^TU + \alpha W_S V)_{ij}}{(VU^TU + \beta V + \alpha D_S V)_{ij}}$
$t_{ij} \leftarrow t_{ij}\dfrac{(UV^TX^T)_{ij}}{(TXX^T)_{ij}}$
To obtain the updates for the new data graph S and the new feature graph W, we use alternating optimization. First, we fix U, V, T, and W, then update S:
$L_1 = \|S - A\|_F^2 + \alpha\,\mathrm{Tr}(V^T L_S V) \quad \text{s.t.}\ S\mathbf{1} = \mathbf{1},\ S \geq 0,\ S \in \mathbb{R}^{n \times n}$
Equation (36) is equivalent to optimizing the following problem:
$\min_{\sum_j s_{ij} = 1,\ s_{ij} \geq 0}\ \sum_{i,j}(s_{ij} - a_{ij})^2 + \alpha\sum_{i,j}\|v_i - v_j\|_2^2\, s_{ij}$
Note that for different i, the above problem is independent, so we can solve the above problem by solving for each i separately.
$\min_{\sum_j s_{ij} = 1,\ s_{ij} \geq 0}\ \sum_{j}(s_{ij} - a_{ij})^2 + \alpha\sum_{j}\|v_i - v_j\|_2^2\, s_{ij}$
where we define $f_{ij} = \|v_i - v_j\|_2^2$ and denote by $f_i$ the vector whose $j$-th element is $f_{ij}$ (similarly for $s_i$ and $a_i$). Problem (35) can be written in vector form as follows:
$\min_{s_i^T\mathbf{1} = 1,\ s_i \geq 0}\left\|s_i - \left(a_i - \dfrac{\lambda}{2}f_i\right)\right\|_2^2$
Equation (39) can be solved using the simplex sparse learning model proposed by Huang et al. [39].
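Since the exact simplex sparse learning procedure of Huang et al. [39] is not reproduced here, the sketch below substitutes a standard Euclidean projection onto the probability simplex, which solves a problem of exactly the form of Equation (39); it is offered only as a generic stand-in, not as the authors' original solver.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto {s : s >= 0, sum(s) = 1}."""
    u = np.sort(y)[::-1]                               # sort in descending order
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(y + theta, 0.0)

def update_similarity_row(a_i, f_i, lam):
    """Row update for S in Equation (39): argmin ||s_i - (a_i - lam/2 * f_i)||^2 on the simplex."""
    return project_to_simplex(a_i - 0.5 * lam * f_i)
```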
Next, update matrix W; similarly, we can update W by fixing matrices U, V, T, and S as follows:
$L_2 = \|W - D\|_F^2 + \alpha\,\mathrm{Tr}(U^T L_W U) \quad \text{s.t.}\ W\mathbf{1} = \mathbf{1},\ W \geq 0,\ W \in \mathbb{R}^{m \times m}$
Similarly, optimizing (40) is equivalent to optimizing the following problem:
$\min_{\sum_j w_{ij} = 1,\ w_{ij} \geq 0}\ \sum_{i,j}(w_{ij} - d_{ij})^2 + \alpha\sum_{i,j}\|u_i - u_j\|_2^2\, w_{ij}$
For each row of w i and d i , we have
$\min_{\sum_j w_{ij} = 1,\ w_{ij} \geq 0}\ \sum_{j}(w_{ij} - d_{ij})^2 + \alpha\sum_{j}\|u_i - u_j\|_2^2\, w_{ij}$
Similarly, we define $g_{ij} = \|u_i - u_j\|_2^2$ and denote by $g_i$ the vector whose $j$-th element is $g_{ij}$ (similarly for $w_i$ and $d_i$). Problem (39) can be written in vector form as follows:
$\min_{w_i^T\mathbf{1} = 1,\ w_i \geq 0}\left\|w_i - \left(d_i - \dfrac{\lambda}{2}g_i\right)\right\|_2^2$
For solving Equation (43), the simplex sparse learning model proposed by Huang et al. [39] is used.
The optimization process of the SFLRNMF algorithm is shown in Algorithm 1.
Algorithm 1 The process of SFLRNMF algorithm
  Input: initial data similarity matrix $S$ and feature similarity matrix $W$; parameters $\alpha$, $\theta$, $\lambda$; data matrix $X = [x_1, x_2, \ldots, x_n]$; the neighbor number $k$; and the maximum iteration number Niter.
  Output: basis matrix $U$, coefficient matrix $V$
  Initialization: $S \leftarrow S^0$, $W \leftarrow W^0$, $L_S = \mathrm{Laplacian}(S)$, $L_W = \mathrm{Laplacian}(W)$
  Repeat: Fixing S and W, update U, V and T
      Update U by Equation (33)
      Update V by Equation (34)
      Update T by Equation (35)
      Fixing U, V and T, update S and W
      Update S by Equation (39)
      Update W by Equation (43)
      Compute $L_S$ by $L_S = D_S - (S^T + S)/2$ and $L_W$ by $L_W = D_W - (W^T + W)/2$
  Until: convergence
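A condensed NumPy sketch of Algorithm 1 is given below. It reuses the project_to_simplex helper sketched earlier; the orthogonality weight beta (which appears in update (34) but is not listed among the algorithm inputs), the eps guards, and the simplex projection standing in for the solver of [39] are assumptions made for illustration.

```python
import numpy as np

def sflrnmf(X, S0, W0, r, alpha, theta, beta, lam, n_iter=200, eps=1e-10, seed=0):
    """Condensed sketch of Algorithm 1 (SFLRNMF).
    X: m x n data (columns are samples); S0: n x n initial data graph;
    W0: m x m initial feature graph. A and D are the fixed initial graphs,
    while S and W are the graphs being learned."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U, V = rng.random((m, r)), rng.random((n, r))
    t = np.full(m, 1.0 / m)                                  # diagonal of the weighting matrix T
    A, D = S0.copy(), W0.copy()                              # fixed targets
    S, W = S0.copy(), W0.copy()                              # graphs to be learned
    for _ in range(n_iter):
        W_S, W_W = (S + S.T) / 2, (W + W.T) / 2
        D_S, D_W = np.diag(W_S.sum(1)), np.diag(W_W.sum(1))
        TX = t[:, None] * X                                  # T X with T = diag(t)
        Q = np.diag(1.0 / (4 * np.maximum(np.linalg.norm(U, axis=1) ** 1.5, eps)))
        # multiplicative updates, Equations (33)-(35)
        U *= (TX @ V + alpha * W_W @ U) / (alpha * D_W @ U + U @ (V.T @ V) + 4 * theta * Q @ U + eps)
        V *= (TX.T @ U + alpha * W_S @ V) / (V @ (U.T @ U) + beta * V + alpha * D_S @ V + eps)
        t *= np.diag(U @ V.T @ X.T) / (np.diag(TX @ X.T) + eps)
        # graph updates, Equations (39) and (43), solved row by row on the simplex
        f = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # f_ij = ||v_i - v_j||^2
        g = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)   # g_ij = ||u_i - u_j||^2
        S = np.vstack([project_to_simplex(A[i] - 0.5 * lam * f[i]) for i in range(n)])
        W = np.vstack([project_to_simplex(D[i] - 0.5 * lam * g[i]) for i in range(m)])
    return U, V, t, S, W
```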

3.7. Convergence Analysis of the SFLRNMF Algorithm

In this section, we analyze the convergence of SFLRNMF and prove that the objective function in Equation (27) is monotonically decreasing under the iterative update rules (33) to (35).
Firstly, we analyze the convergence of the iterative update rule in Equation (34).
Definition 1.
Provided the following conditions are met [40], namely $G(x, x') \geq F(x)$ and $G(x, x) = F(x)$, $G(x, x')$ is an auxiliary function of $F(x)$. Assuming that the $(t+1)$-th iteration update rule is as follows:
$x^{t+1} = \arg\min_x G(x, x^t)$
Therefore, we can prove that $F(x^{t+1}) \leq G(x^{t+1}, x^t) \leq G(x^t, x^t) = F(x^t)$, which implies that $F(x)$ converges.
Lemma 1.
$G(v_{ij}, v_{ij}^t) = F_{ij}(v_{ij}^t) + F'_{ij}(v_{ij}^t)(v_{ij} - v_{ij}^t) + \dfrac{(\alpha D_S V + VU^TU + \beta V)_{ij}}{v_{ij}^t}(v_{ij} - v_{ij}^t)^2$
The above equation is an auxiliary function of $F_{ij}(v_{ij})$, where $F(V) = \|TX - UV^T\|_F^2 + \alpha\,\mathrm{Tr}(V^T L_S V) + \beta\,\mathrm{Tr}(V^TV - I)$.
Proof. 
Given that the first and second derivatives of $F_{ij}(v_{ij})$ are $F'_{ij}(v) = (2\alpha L_S V - 2X^TT^TU + 2VU^TU + 2\beta V)_{ij}$ and $F''_{ij}(v) = 2\alpha(L_S)_{ii} + 2(U^TU)_{jj} + 2\beta$, we can derive the Taylor expansion of $F_{ij}(v_{ij})$ as follows:
$F(v_{ij}) = F_{ij}(v_{ij}^t) + F'_{ij}(v_{ij}^t)(v_{ij} - v_{ij}^t) + \left[(U^TU)_{jj} + \alpha(L_S)_{ii} + \beta\right](v_{ij} - v_{ij}^t)^2$
Since we have
$(VU^TU)_{ij} = \sum_{r=1}^{k} v_{ir}^t(U^TU)_{rj} \geq v_{ij}^t(U^TU)_{jj}, \qquad \alpha(D_S V)_{ij} = \alpha\sum_{r=1}^{n}(D_S)_{ir}v_{rj}^t \geq \alpha(D_S - W_S)_{ii}v_{ij}^t = \alpha(L_S)_{ii}v_{ij}^t$
By simultaneously solving Equations (44) and (45), we obtain $v_{ij}^{t+1}$ as the local minimum point of Equation (45) and $G(v_{ij}^{t+1}, v_{ij}^t)$ as the corresponding local minimum value.
Given the update $v_{ij}^{t+1} = v_{ij}^t - v_{ij}^t\dfrac{F'_{ij}(v_{ij}^t)}{2(\alpha D_S V + VU^TU + \beta V)_{ij}} = v_{ij}^t\dfrac{(X^TT^TU + \alpha W_S V)_{ij}}{(VU^TU + \beta V + \alpha D_S V)_{ij}}$, we have $(\alpha D_S V + VU^TU + \beta V)_{ij} \geq v_{ij}^t\left[\alpha(L_S)_{ii} + (U^TU)_{jj} + \beta\right]$. Therefore, we can derive that $G(v_{ij}, v_{ij}^t) \geq F(v_{ij})$. Since $G(v_{ij}, v_{ij}^t)$ is an auxiliary function of $F_{ij}(v_{ij})$, $F(v_{ij})$ is monotonically decreasing.
By the same method, we can prove that under the iterative update rule (35), $F_{ij}(t_{ij})$ is monotonically decreasing. □
Lemma 2
([41]).
$\sum_{i=1}^{m}\left(\|g_i^{t+1}\|_2^{1/2} - \dfrac{\|g_i^{t+1}\|_2^2}{4\|g_i^t\|_2^{3/2}}\right) \leq \sum_{i=1}^{m}\left(\|g_i^t\|_2^{1/2} - \dfrac{\|g_i^t\|_2^2}{4\|g_i^t\|_2^{3/2}}\right)$
Proof. 
According to Lemma 2, we have the following inequality:
$\sum_{i=1}^{m}\left(\|u_i^{t+1}\|_2^{1/2} - \dfrac{\|u_i^{t+1}\|_2^2}{4\|u_i^t\|_2^{3/2}}\right) \leq \sum_{i=1}^{m}\left(\|u_i^t\|_2^{1/2} - \dfrac{\|u_i^t\|_2^2}{4\|u_i^t\|_2^{3/2}}\right)$
In the $t$-th iteration, we fix $Q$ as $Q^t$ to solve for $U^{t+1}$, $V^{t+1}$, and $T^{t+1}$. We define the following function:
$L(U, V, T) = \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \mathrm{Tr}(TXX^TT^T) - 2\,\mathrm{Tr}(X^TT^TUV^T) + \mathrm{Tr}(VU^TUV^T) + \beta\,\mathrm{Tr}(V^TV - I)$
Given that $\|U\|_{2,1/2}^{1/2} = \sum_{i=1}^{m}\|u_i\|_2^{1/2}$, we obtain the following inequality:
$L^{t+1} + \theta\sum_{i=1}^{m}\dfrac{\|u_i^{t+1}\|_2^2}{4\|u_i^t\|_2^{3/2}} = L^{t+1} + \theta\|U^{t+1}\|_{2,1/2}^{1/2} + \theta\sum_{i=1}^{m}\left(\dfrac{\|u_i^{t+1}\|_2^2}{4\|u_i^t\|_2^{3/2}} - \|u_i^{t+1}\|_2^{1/2}\right) \leq L^t + \theta\sum_{i=1}^{m}\dfrac{\|u_i^t\|_2^2}{4\|u_i^t\|_2^{3/2}} = L^t + \theta\|U^t\|_{2,1/2}^{1/2} + \theta\sum_{i=1}^{m}\left(\dfrac{\|u_i^t\|_2^2}{4\|u_i^t\|_2^{3/2}} - \|u_i^t\|_2^{1/2}\right)$
Combining inequalities (48) and (51), we obtain the following inequality: $L^{t+1} + \theta\|U^{t+1}\|_{2,1/2}^{1/2} \leq L^t + \theta\|U^t\|_{2,1/2}^{1/2}$. Thus, $F_{ij}(u_{ij})$ is monotonically decreasing under the update Equation (33). Based on the above convergence analysis, we can conclude that the objective function (27) is monotonically decreasing under the iterative update rules (33)–(35), (39), and (43). □

4. Sparse Feature-Weighted Double Laplacian Rank-Constrained NMTF Model

In this section, we propose a novel extension of SFLRNMF, called sparse feature-weighted dual Laplacian rank-constrained non-negative matrix tri-factorization (SFLRNMTF). In SFLRNMTF, we introduce an additional factor R and dual orthogonality constraints to SFLRNMF. This not only enhances the accuracy of the low-rank representation but also ensures a more robust model by incorporating different scales of X, U, and V.

4.1. SFLRNMF with Three-Factor (SFLRNMTF)

To apply the idea of dual orthogonality constraints to SFLRNMF, we incorporate an additional factor R and dual orthogonality constraints into Equation (27) for SFLRNMF, as shown below:
$\min_{S,W} J_T = \|S - A\|_F^2 + \|W - D\|_F^2 + \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \|TX - URV^T\|_F^2 + \theta\|U\|_{2,1/2}^{1/2}$
$\text{s.t.}\ U^TU = I,\ V^TV = I,\ S\mathbf{1} = \mathbf{1},\ W\mathbf{1} = \mathbf{1},\ S \geq 0,\ W \geq 0,\ S \in \mathbb{R}^{n \times n},\ W \in \mathbb{R}^{m \times m},\ T = \mathrm{diag}(t),\ t_i \geq 0,\ \sum_{i=1}^{m} t_i = 1,\ U \geq 0,\ V \geq 0$
Here, $R \in \mathbb{R}^{c \times c}$ is a diagonal scaling matrix, and $T$ has the same meaning as in Equation (27).

4.2. An Efficient Iterative Update Rule for Solving Model (SFLRNMTF)

We rewrite the objective function (51) of the SFLRNMTF model as a Lagrangian function, as shown below:
$J_T = \|S - A\|_F^2 + \|W - D\|_F^2 + \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \mathrm{Tr}(TXX^TT^T) - 2\,\mathrm{Tr}(X^TT^TURV^T) + \mathrm{Tr}(VR^TU^TURV^T) + \beta\left(\mathrm{Tr}(U^TU - I) + \mathrm{Tr}(V^TV - I)\right) + 4\theta\,\mathrm{Tr}(U^TQU)$
To update the matrices U, V, R, and T, first, we compute the partial derivatives of the Lagrangian function J T with respect to different variables, as shown below:
$\dfrac{\partial J_T}{\partial U} = 2\alpha L_W U - 2TXVR^T + 2URV^TVR^T + 2\beta U + 8\theta QU$
$\dfrac{\partial J_T}{\partial V} = 2\alpha L_S V - 2X^TT^TUR + 2VR^TU^TUR + 2\beta V$
$\dfrac{\partial J_T}{\partial R} = -2U^TTXV + 2U^TURV^TV$
$\dfrac{\partial J_T}{\partial T} = 2TXX^T - 2URV^TX^T$
Based on the Karush–Kuhn–Tucker (KKT) conditions [39], the iterative updates for matrices U, V, R, and T are as follows:
$u_{ij} \leftarrow u_{ij}\dfrac{(TXVR^T + \alpha W_W U)_{ij}}{(\alpha D_W U + URV^TVR^T + \beta U + 4\theta QU)_{ij}}$
$v_{ij} \leftarrow v_{ij}\dfrac{(X^TT^TUR + \alpha W_S V)_{ij}}{(\alpha D_S V + VR^TU^TUR + \beta V)_{ij}}$
$r_{ij} \leftarrow r_{ij}\dfrac{(U^TTXV)_{ij}}{(U^TURV^TV)_{ij}}$
$t_{ij} \leftarrow t_{ij}\dfrac{(URV^TX^T)_{ij}}{(TXX^T)_{ij}}$
The optimization process for SFLRNMTF is shown in Algorithm 2.
Algorithm 2 The process of SFLRNMTF algorithm
Input: initial data similarity matrix $S$ and feature similarity matrix $W$; parameters $\alpha$, $\theta$, $\lambda$; data matrix $X = [x_1, x_2, \ldots, x_n]$; the neighbor number $k$; and the maximum iteration number Niter.
Output: basis matrix $U$, coefficient matrix $V$
Initialization: $S \leftarrow S^0$, $W \leftarrow W^0$, $L_S = \mathrm{Laplacian}(S)$, $L_W = \mathrm{Laplacian}(W)$
Repeat: Fixing S and W, update U, V and T
    Update U by Equation (58)
    Update V by Equation (59)
    Update R by Equation (60)
    Update T by Equation (61)
    Fixing U, V and T, update S and W
    Update S by Equation (39)
    Update W by Equation (43)
    Compute $L_S$ by $L_S = D_S - (S^T + S)/2$ and $L_W$ by $L_W = D_W - (W^T + W)/2$
Until: convergence
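For comparison with Algorithm 1, the following sketch shows one pass of the SFLRNMTF multiplicative updates (58)-(61). The graph-related matrices are assumed to be precomputed exactly as in the Algorithm 1 sketch, and the eps guards are an added numerical safeguard rather than part of the derivation.

```python
import numpy as np

def sflrnmtf_inner_step(X, U, R, V, t, W_S, D_S, W_W, D_W, Q, alpha, theta, beta, eps=1e-10):
    """One pass of the SFLRNMTF multiplicative updates for U, V, R, and T."""
    TX = t[:, None] * X                              # T X with T = diag(t)
    U *= (TX @ V @ R.T + alpha * W_W @ U) / (
        alpha * D_W @ U + U @ R @ (V.T @ V) @ R.T + beta * U + 4 * theta * Q @ U + eps)
    V *= (TX.T @ U @ R + alpha * W_S @ V) / (
        alpha * D_S @ V + V @ R.T @ (U.T @ U) @ R + beta * V + eps)
    R *= (U.T @ TX @ V) / (U.T @ U @ R @ (V.T @ V) + eps)
    t *= np.diag(U @ R @ V.T @ X.T) / (np.diag(TX @ X.T) + eps)
    return U, R, V, t
```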

4.3. Convergence Analysis of the SFLRNMTF Algorithm

In this section, we theoretically prove the convergence of SFLRNMTF and demonstrate that the objective function in Equation (52) is monotonically decreasing under the iterative update rules (58)–(61).
Definition 2.
If the following conditions proposed by Shang et al. [40] are satisfied, namely $G_T(x, x') \geq F_T(x)$ and $G_T(x, x) = F_T(x)$, then $G_T(x, x')$ is an auxiliary function of $F_T(x)$, and for iteration $t+1$ the update rule is as follows:
$x^{t+1} = \arg\min_x G_T(x, x^t)$
Thus, it can be proven that $F_T(x^{t+1}) \leq G_T(x^{t+1}, x^t) \leq G_T(x^t, x^t) = F_T(x^t)$, which implies that $F_T(x)$ converges.
Lemma 3.
$G_T(v_{ij}, v_{ij}^t) = F_{T,ij}(v_{ij}^t) + F'_{T,ij}(v_{ij}^t)(v_{ij} - v_{ij}^t) + \dfrac{(\alpha D_S V + VR^TU^TUR + \beta V)_{ij}}{v_{ij}^t}(v_{ij} - v_{ij}^t)^2$
The above equation is an auxiliary function for $F_{T,ij}(v_{ij})$, where $F_T(V) = \|TX - URV^T\|_F^2 + \alpha\,\mathrm{Tr}(V^T L_S V) + \beta\,\mathrm{Tr}(V^TV - I)$.
Proof. 
First, the first-order derivative of $F_{T,ij}(v_{ij})$ is $F'_{T,ij}(V) = (2\alpha L_S V - 2X^TT^TUR + 2VR^TU^TUR + 2\beta V)_{ij}$, and the second-order derivative of $F_{T,ij}(v_{ij})$ is $F''_{T,ij}(V) = 2\alpha(L_S)_{ii} + 2(R^TU^TUR)_{jj} + 2\beta$.
Thus, we can rewrite $F_{T,ij}(v_{ij})$ in the following Taylor series form:
$F_T(v_{ij}) = F_{T,ij}(v_{ij}^t) + F'_{T,ij}(v_{ij}^t)(v_{ij} - v_{ij}^t) + \left[\alpha(L_S)_{ii} + (R^TU^TUR)_{jj} + \beta\right](v_{ij} - v_{ij}^t)^2$
Given
$(VR^TU^TUR)_{ij} = \sum_{r=1}^{k} v_{ir}^t(R^TU^TUR)_{rj} \geq v_{ij}^t(R^TU^TUR)_{jj}, \qquad \alpha(D_S V)_{ij} = \alpha\sum_{r=1}^{n}(D_S)_{ir}v_{rj}^t \geq \alpha(D_S - W_S)_{ii}v_{ij}^t = \alpha(L_S)_{ii}v_{ij}^t$
By solving Equations (60) and (61) simultaneously, we find that $G_T(v_{ij}^{t+1}, v_{ij}^t)$ is the local minimum value of Equation (61) and that $v_{ij}^{t+1}$ is the corresponding local minimum point:
$v_{ij}^{t+1} = v_{ij}^t - v_{ij}^t\dfrac{F'_{T,ij}(v_{ij}^t)}{2(\alpha D_S V + VR^TU^TUR + \beta V)_{ij}} = v_{ij}^t\dfrac{(X^TT^TUR + \alpha W_S V)_{ij}}{(\alpha D_S V + VR^TU^TUR + \beta V)_{ij}}$
We can therefore derive that $G_T(v_{ij}, v_{ij}^t) \geq F_T(v_{ij})$. Since $G_T(v_{ij}, v_{ij}^t)$ is an auxiliary function of $F_{T,ij}(v_{ij})$, $F_{T,ij}(v_{ij})$ is monotonically decreasing.
By the same method, we can prove that $F_{T,ij}(t_{ij})$ and $F_{T,ij}(r_{ij})$ are monotonically decreasing under their respective iterative update rules. □
Lemma 4
([41]).
$\sum_{i=1}^{m}\left(\|g_i^{t+1}\|_2^{1/2} - \dfrac{\|g_i^{t+1}\|_2^2}{4\|g_i^t\|_2^{3/2}}\right) \leq \sum_{i=1}^{m}\left(\|g_i^t\|_2^{1/2} - \dfrac{\|g_i^t\|_2^2}{4\|g_i^t\|_2^{3/2}}\right)$
According to Lemma 4, we have
$\sum_{i=1}^{m}\left(\|u_i^{t+1}\|_2^{1/2} - \dfrac{\|u_i^{t+1}\|_2^2}{4\|u_i^t\|_2^{3/2}}\right) \leq \sum_{i=1}^{m}\left(\|u_i^t\|_2^{1/2} - \dfrac{\|u_i^t\|_2^2}{4\|u_i^t\|_2^{3/2}}\right)$
To solve for $U^{t+1}$, $V^{t+1}$, $T^{t+1}$, and $R^{t+1}$, we fix $Q$ as $Q^t$ in the $t$-th iteration. We define the following function:
$L_T(U, V, T, R) = \alpha\left(\mathrm{Tr}(V^T L_S V) + \mathrm{Tr}(U^T L_W U)\right) + \mathrm{Tr}(TXX^TT^T) - 2\,\mathrm{Tr}(X^TT^TURV^T) + \mathrm{Tr}(VR^TU^TURV^T) + \beta\left(\mathrm{Tr}(U^TU - I) + \mathrm{Tr}(V^TV - I)\right)$
Since $\|U\|_{2,1/2}^{1/2} = \sum_{i=1}^{m}\|u_i\|_2^{1/2}$, we obtain the following inequality:
$L_T^{t+1} + \theta\sum_{i=1}^{m}\dfrac{\|u_i^{t+1}\|_2^2}{4\|u_i^t\|_2^{3/2}} = L_T^{t+1} + \theta\|U^{t+1}\|_{2,1/2}^{1/2} + \theta\sum_{i=1}^{m}\left(\dfrac{\|u_i^{t+1}\|_2^2}{4\|u_i^t\|_2^{3/2}} - \|u_i^{t+1}\|_2^{1/2}\right) \leq L_T^t + \theta\sum_{i=1}^{m}\dfrac{\|u_i^t\|_2^2}{4\|u_i^t\|_2^{3/2}} = L_T^t + \theta\|U^t\|_{2,1/2}^{1/2} + \theta\sum_{i=1}^{m}\left(\dfrac{\|u_i^t\|_2^2}{4\|u_i^t\|_2^{3/2}} - \|u_i^t\|_2^{1/2}\right)$
Combining inequalities (67) and (70), we obtain the following inequality:
$L_T^{t+1} + \theta\|U^{t+1}\|_{2,1/2}^{1/2} \leq L_T^t + \theta\|U^t\|_{2,1/2}^{1/2}$
Thus, $F_{T,ij}(u_{ij})$ is monotonically decreasing. Based on the above analysis, we can conclude that the objective function (52) of SFLRNMTF is monotonically decreasing under the update rules in Equations (39), (43), and (58)–(61).

5. Experiments and Analysis

In this subsection, to verify the efficiency of the proposed methods (SFLRNMF and SFLRNMTF), we conducted numerical experiments on four datasets (COIL20, JAFFE, UMIST, and YaleB32) to evaluate their robustness, sensitivity, and convergence. We compared the clustering performance with ten algorithms: PCA [8], NMF [15,16], GNMF [22], DNMF [23], DSNMF-LDC [42], NMFAN [30], LOSDNMF [27], EWNMF [43], SGLNMF [26], and SDGNMF-BO [25].
All numerical experiments were conducted on a PC with Windows 10 operating system, a CPU of 3.40 GHz, and 8GB of memory, using the Matlab 2021a platform. The specific characteristics of these datasets are shown in Table 2.

5.1. Datasets

(1)
COIL-20 Dataset: The COIL-20 dataset was developed by Columbia University and contains images of 20 different real objects. The dataset comprises a total of 1,440 images.
(2)
JAFFE Dataset: This dataset involves Japanese female facial expressions from ten female volunteers, with each category containing seven facial expressions. Each facial emotion of each volunteer was photographed two to four times, resulting in a total of 213 images.
(3)
UMIST Dataset: This dataset was constructed by the University of Manchester Institute of Science and Technology (UMIST) and includes facial images of 20 individuals with varying poses, races, genders, and appearances. The dataset contains a total of 575 images.
(4)
YaleB Dataset: This dataset contains 2414 frontal face images taken under controlled lighting conditions in a laboratory, representing 38 subjects. Each image was manually cropped to a 32 × 32 pixel grayscale image and then converted into a 1024-dimensional vector.

5.2. Parameter Setting

In this section, to ensure fairness, we employ k-means as the clustering method to compare the clustering performance of SFLRNMF, SFLRNMTF, and ten other algorithms across four datasets. For our SFLRNMF and SFLRNMTF models, the primary parameters involve the dual Laplacian graph rank constraint parameter $\alpha$, the sparsity parameter $\theta$, and the orthogonality constraint parameter $\beta$. The range for all regularization parameters is set within $\{10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10^{1}, 10^{2}, 10^{3}, 10^{4}, 10^{5}, 10^{6}\}$. The maximum number of iterations is set to 200, and the best average result from 10 experiments is considered the final clustering result. For all graph-based methods, the number of nearest neighbors k is fixed at five. For all semi-supervised NMF methods, in each dataset, 20% of the data points from each category are randomly selected as labeled samples, and these points are used to construct the label constraint matrix A. Meanwhile, to verify the performance of our proposed algorithms on different classification metrics, we randomly select the number of clusters c within the range of 2 to 10 for clustering experiments on the Jaffe and YaleB datasets. For clustering experiments on the COIL20 and UMIST datasets, the range for the number of clusters c is set from 10 to 20.
Furthermore, due to the multitude of algorithms compared, to enhance the clarity of reading for the readers, in the clustering experiments, the names of all NMF variant algorithms have been abbreviated as follows: GNMF is abbreviated as G, DSNMF-LDC as LDC, NMFAN as AN, DNMF as D, LOSDNMF as LOSD, EWNMF as EW, SGLNMF as SGL, SDGNMF-BO as SDG, SFLRNMF as SFLR, and SFLRNMTF as SFLRT. This treatment helps in making the understanding and comparison of each algorithm clearer to the readers.
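Before turning to the results, the evaluation protocol described above (k-means on the learned coefficient matrix, with ACC and NMI averaged over repeated runs) can be sketched as follows. The metric implementations shown here, Hungarian matching for ACC and scikit-learn's NMI, are standard choices assumed for illustration; the authors' exact evaluation code is not given in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy: align predicted cluster ids to true labels via the Hungarian method."""
    n_cls = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n_cls, n_cls), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)
    return cost[rows, cols].sum() / len(y_true)

def evaluate(V, y_true, c, n_runs=10, seed=0):
    """Cluster the learned coefficient matrix V (n x r) with k-means and report mean ACC/NMI."""
    accs, nmis = [], []
    for run in range(n_runs):
        y_pred = KMeans(n_clusters=c, n_init=10, random_state=seed + run).fit_predict(V)
        accs.append(clustering_accuracy(y_true, y_pred))
        nmis.append(normalized_mutual_info_score(y_true, y_pred))
    return np.mean(accs), np.mean(nmis)
```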

5.3. Clustering Performance

Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 show the clustering results in terms of ACC and NMI values for the four datasets. The results in bold represent the optimal results. The corresponding clustering results are shown in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9.
Table 2 and Table 3, along with Figure 2 and Figure 3, display the clustering results on the JAFFE dataset. As shown, some algorithms achieved 100% in both ACC and NMI when the number of clusters was set to two. However, as the number of clusters increased, our algorithms, SFLRNMF and SFLRNMTF, consistently performed well across most clustering settings, achieving average accuracy rates (ACC) of 97.28% and 95.89% and average Normalized Mutual Information (NMI) of 98.16% and 96.80%, respectively. Compared to the lowest-performing NMF algorithm, SFLR and SFLRT showed an average increase in accuracy of 7.21% and 8.09% and in NMI of 10.89% and 11.8%, respectively. These results demonstrate that our methods have a significant advantage in processing facial recognition images.
Table 4 and Table 5, along with Figure 4 and Figure 5, present the clustering results on the COIL20 dataset. These data clearly show that, compared to ten other benchmark algorithms, our algorithms achieved satisfactory results. Specifically, in terms of average ACC scores, DSNMF-LDC’s scores are second only to ours, and in terms of average NMI scores, DNMF’s scores also rank just below ours. More precisely, our SFLRNMF and SFLRNMTF algorithms exceed the average ACC of DSNMF-LDC by 6.47% and 6.99%, respectively, and surpass DNMF’s average NMI by 5.48% and 5.94%, respectively. Moreover, as observed from Figure 4 and Figure 5, regardless of the number of clusters, the highest ACC and NMI scores consistently appear in our two algorithms. These results amply demonstrate the superiority and stability of our algorithms.
The clustering results on the UMIST dataset are shown in Table 6 and Table 7, as well as Figure 4 and Figure 5. According to Table 6 and Table 7, it can be observed that in terms of ACC, the SFLRNMTF algorithm generally performs the best among the majority of the clustering settings, followed by the SFLRNMF algorithm. In terms of NMI, the SFLRNMTF algorithm also exhibits high performance, with an average score of 84.42%, while the average score for the SFLRNMF algorithm is 84.84%. Compared to other algorithms, such as the poorest performing NMF and the best performing DSNMF-LDC, our SFLRNMTF algorithm’s average ACC is higher by 30.75% and 6.95%, respectively. In terms of NMI, the average NMI of the SFLRNMF algorithm is higher than that of NMF and DSNMF-LDC by 26.54% and 2.48%, respectively. It is noteworthy that although the average ACC score of SFLRNMTF is higher than that of SFLRNMF, its average NMI score is lower than that of SFLRNMF. The reason for this phenomenon is that NMI is actually a measure of the joint probability distribution across different classes, which means it is calculated by comparing the mutual information and entropy between generated labels and true labels. Therefore, the more points that are misclassified into the same class, the higher the NMI score at the same ACC value. This explains why SFLRNMTF has a higher average ACC score than SFLRNMF, yet a slightly lower average NMI score. Furthermore, as depicted in Figure 4 and Figure 5, regardless of the number of clusters, the ACC and NMI curves of SFLRNMF and SFLRNMTF consistently remain above those of the other algorithms compared. These results indicate that the SFLRNMF and SFLRNMTF algorithms proposed by us not only produce significant effects but also demonstrate strong stability when processing facial recognition images in the UMIST dataset.
Similarly, the clustering performance on the YaleB dataset is presented in Table 8 and Table 9, as well as in Figure 8 and Figure 9. In terms of accuracy, the SFLRNMF algorithm was found to perform best in most clustering setups, especially when the number of clusters was set to 2, achieving a high score of 91.64%, significantly surpassing the other algorithms. On average, an ACC score of 65.02% was obtained by SFLRNMF, with SFLRNMTF closely following at an average score of 63.49%. In contrast, the worst performing algorithm was NMF, which only achieved an average ACC score of 27.39%. In terms of Normalized Mutual Information (NMI), high scores were also demonstrated by SFLRNMTF, particularly when the number of clusters was 2, reaching an NMI score of 81.56%. Overall, the average NMI score for SFLRNMTF was 58.85%, slightly higher than that of SFLRNMF, which was 57.09%. Compared to LOSDNMF, which exhibited poorer clustering performance, both SFLRNMF and SFLRNMTF achieved higher scores in ACC and NMI across various numbers of clusters. Furthermore, as observed in Figure 8 and Figure 9, although the ACC and NMI curves for all algorithms showed a downward trend with an increasing number of clusters, the curves for SFLRNMF and SFLRNMTF consistently remained above those of the other comparative algorithms. These results indicate that the proposed SFLRNMF and SFLRNMTF algorithms not only produce significant effects in handling facial recognition images from the YaleB dataset but also exhibit stability across various cluster number settings. Particularly, the SFLRNMF algorithm showed the most stable performance among all compared algorithms.

5.4. Visualization Comparison

Based on the clustering comparison experiments mentioned above, we plan to delve deeper into the superiority of our two algorithms through their subspace learning capabilities. For a more intuitive and effective comparison, we selected 10 categories from four test datasets as the input data for the experiments (since too many data categories might create visual clutter that could hinder analysis, while too few categories might not adequately highlight the subspace learning performance of different algorithms). The visual comparison results are displayed in Figure 10 and Figure 11. Note that the subgraphs containing category labels do not obscure any comparative information.
Figure 10 and Figure 11 demonstrate that, on the COIL-20 and UMIST datasets, our SFLRNMF and SFLRNMTF algorithms perform excellently, being capable of distinguishing data samples more clearly. Particularly in the visual comparison on the COIL-20 dataset, both methods proposed by us are demonstrated to successfully and clearly separate the samples from other categories. This indicates that our methods can efficiently learn low-dimensional subspace representations. Not only does this experiment validate the reliability of the aforementioned clustering experiments, but it also confirms the contributions of our two methods.

5.5. Parameter Sensitivity Analysis

In this section, we explored the sensitivity of parameters for our method on the JAFFE dataset. As seen from the SFLRNMF objective function in Equation (27), three fundamental parameters are involved: the orthogonality constraint parameter $\beta$, the sparsity parameter $\theta$, and the graph regularization parameter $\alpha$. These parameters were selected from the range $[10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, 10^{3}, 10^{4}]$. The comparison results for ACC and NMI are displayed in three-dimensional bar charts in Figure 12a,b, Figure 13a,b and Figure 14a,b.
From Figure 12a,b, it can be observed that with the variation in the values of parameters $\alpha$ and $\beta$, the values of ACC and NMI decrease, indicating that the SFLRNMF is sensitive to the parameters $\alpha$ and $\beta$ on the JAFFE dataset, particularly evident when $\theta$ equals 1000. Specifically, when $\alpha$ is selected from $[10^{-2}, 10^{-1}, 10^{2}, 10^{3}, 10^{4}]$, the scores of these two metrics are slightly higher.
Similarly, we fixed $\beta$ at 1000 and varied $\theta$ and $\alpha$ to compute the scores for ACC and NMI. The corresponding 3D histograms for ACC and NMI are shown in Figure 13a,b. From Figure 13a,b, it is evident that the overall scores for ACC and NMI on the JAFFE dataset are satisfactory, particularly when $\theta$ is chosen from the range $[10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}]$, where the scores are highest. Additionally, when $\theta$ is within the range $[10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, 10^{3}, 10^{4}]$, regardless of how the value of $\alpha$ changes within the given range, the scores for ACC and NMI remain high, indicating that the parameter $\alpha$ has a minimal impact on the clustering results. In this scenario, it can be considered that the SFLRNMF algorithm demonstrates greater robustness when $\beta$ is fixed.
To explore the sensitivity of other parameters in the SFLRNMF algorithm, a similar analysis of parameter sensitivity was conducted with the α parameter fixed, as illustrated in Figure 14.
According to Figure 14a,b, overall, the scores of the ACC and NMI metrics vary with changes in these two parameters. Particularly, the best results are obtained when the values of parameters $\beta$ and $\theta$ lie in $[10^{3}, 10^{4}]$. Therefore, in this case, the SFLRNMF is more sensitive to parameters $\beta$ and $\theta$.
Next, we also conducted a parameter sensitivity analysis for SFLRNMTF with respect to the parameters $\alpha$, $\beta$, and $\theta$, since SFLRNMTF is an improved version of SFLRNMF and thus they share the same parameters. When $\alpha$, $\beta$, and $\theta$ were fixed at 1000, respectively, the corresponding 3D histograms for the ACC and NMI scores on the JAFFE dataset are shown in Figure 15, Figure 16 and Figure 17, while the ranges for the other two parameters remained set at $[10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, 10^{3}, 10^{4}]$.
From these figures, it can be concluded that compared to parameter $\theta$, parameter $\beta$ shows higher sensitivity to the performance of SFLRNMTF. When parameter $\alpha$ is set to 1000, the appropriate range for $\beta$ is $[10^{3}, 10^{4}]$. When $\beta$ is 1000 and $\theta$ is within the range $[10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}]$, better performance is achieved by the algorithm. Additionally, regardless of the values of $\theta$ and $\beta$, the overall variation in ACC and NMI remains almost constant, indicating that when parameter $\alpha$ is fixed at 1000, both $\theta$ and $\beta$ exhibit robustness in the performance of SFLRNMTF. However, in Figure 16 and Figure 17, it can be seen that the variations in ACC and NMI are uneven and unstable, indicating that parameters $\theta$ and $\beta$ exhibit higher sensitivity to the performance of SFLRNMTF. When parameter $\alpha$ is set to 1000, the clustering performance of the SFLRNMTF algorithm is optimal and remains stable under other conditions. Additionally, it is evident that under these circumstances, the optimal ranges for $\theta$ and $\beta$ are $[10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}]$ and $[10^{-2}, 10^{-1}]$, respectively.

5.6. Convergence Analysis

The convergence derivations in Section 3.7 and Section 4.3 verify that the update rules of the two algorithms converge efficiently to a locally optimal solution. To further assess the convergence of SFLRNMF and SFLRNMTF empirically, experiments were conducted on the four datasets used in this paper. The number of clusters was set to the full number of categories in each dataset, and the convergence curves of our models on the four datasets are plotted in Figure 18 and Figure 19, where the y-axis represents the objective function value and the x-axis the number of iterations. The resulting curves confirm the favorable convergence behavior of both algorithms on these datasets.
Figure 18a–d shows that SFLRNMF converges extremely fast on all four datasets (JAFFE, COIL20, UMIST, and YaleB): the objective function value drops rapidly and stabilizes in fewer than 10 iterations. This consistently fast convergence indicates that SFLRNMF efficiently finds a locally optimal solution across diverse datasets, highlighting its robustness and adaptability. Figure 19 shows the convergence curves of SFLRNMTF. As Figure 19b–d indicates, SFLRNMTF also converges quickly on the COIL20, UMIST, and YaleB datasets, with the objective function value stabilizing within 10 iterations, reflecting high optimization efficiency. In Figure 19a, however, convergence on the JAFFE dataset is slower: although the objective function value decreases sharply in the first 10 iterations, it takes approximately 40 iterations to converge fully.
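The curves in Figure 18 and Figure 19 amount to recording the objective value after every round of alternating updates. The generic sketch below illustrates this under the assumption that update_step and objective stand in for the actual SFLRNMF/SFLRNMTF update rules and cost function; iteration stops once the relative decrease of the objective falls below a tolerance.

```python
import numpy as np

def run_with_history(update_step, objective, factors, max_iter=200, tol=1e-6):
    """Alternating-update loop that records the objective value per iteration.

    `update_step` performs one round of multiplicative updates on the factor
    matrices and `objective` evaluates the model cost; both are placeholders
    for the corresponding SFLRNMF / SFLRNMTF routines.
    """
    history = [objective(factors)]
    for _ in range(max_iter):
        factors = update_step(factors)
        history.append(objective(factors))
        # stop when the relative decrease of the objective becomes negligible
        if abs(history[-2] - history[-1]) <= tol * max(abs(history[-2]), 1.0):
            break
    return factors, np.asarray(history)

# Plotting `history` against the iteration index yields curves analogous to
# Figures 18 and 19 (objective value on the y-axis, iterations on the x-axis).
```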
In summary, Figure 18 and Figure 19 show that both algorithms converge within a relatively small number of iterations across the four datasets, confirming their stability and efficiency, with SFLRNMF in particular converging consistently fast on every dataset.

5.7. Calculation Time Analysis

In this section, we analyze computation time by comparing how long the clustering experiments take on the four datasets (JAFFE, COIL20, UMIST, and YaleB), in order to understand how data dimensionality and dataset size affect runtime. The number of clusters was set to the full number of categories in each dataset. To ensure stable and reliable results, ten independent runs were performed on each dataset and the average execution time was recorded. The results are summarized in Table 11, which reports the average computation time of each algorithm on each dataset.
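The timing protocol can be captured by a small harness such as the sketch below, where cluster_fn is a placeholder for any of the compared algorithms; the value reported in Table 11 corresponds to the mean wall-clock time over the ten runs.

```python
import time
import numpy as np

def average_runtime(cluster_fn, X, n_clusters, n_runs=10):
    """Mean and standard deviation of the wall-clock time of `cluster_fn`
    over independent runs, mirroring the protocol behind Table 11."""
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        cluster_fn(X, n_clusters)          # one full clustering experiment
        times.append(time.perf_counter() - start)
    return float(np.mean(times)), float(np.std(times))
```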
Several observations follow from Table 11. First, runtime grows markedly with the dimensionality and size of the data, showing that the models' runtime is strongly influenced by data scale. Second, graph-based algorithms take longer than non-graph-based ones, because a substantial amount of time is spent constructing the graphs; on high-dimensional datasets this gap becomes more pronounced. Among the graph-based methods, algorithms that use dual graph structures take longer than those using a single graph. Furthermore, semi-supervised algorithms (such as LOSDNMF, SGLNMF, and SDGNMF-BO) have longer runtimes than unsupervised ones. Among the unsupervised dual-graph algorithms (DNMF, SFLRNMF, and SFLRNMTF), the latter two perform best on high-dimensional datasets, mainly because they converge faster and require fewer iterations, which significantly improves efficiency on large-scale or high-dimensional data.
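To make the graph-construction overhead concrete, the sketch below builds a k-nearest-neighbor affinity matrix with heat-kernel weights; this is a generic construction for illustration, not necessarily the exact scheme used by each compared method. The pairwise-distance step alone costs on the order of n^2·m operations for the data graph (and m^2·n for the feature graph), which is why dual-graph methods slow down markedly on large, high-dimensional datasets.

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def knn_affinity(X, k=5, sigma=1.0):
    """Symmetric k-NN affinity matrix with heat-kernel weights.

    For a data graph, X has shape (n_samples, n_features); applying the same
    routine to X.T yields a feature graph. Computing the full distance matrix
    is the dominant cost on large or high-dimensional datasets.
    """
    D = pairwise_distances(X)                        # O(n^2 * m) distances
    n = D.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(D[i])[1:k + 1]        # k nearest neighbors, skipping self
        A[i, neighbors] = np.exp(-D[i, neighbors] ** 2 / (2.0 * sigma ** 2))
    return np.maximum(A, A.T)                        # symmetrize
```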

6. Conclusions

In this study, two innovative unsupervised algorithms, SFLRNMF and SFLRNMTF, are proposed, with the latter being a further improvement of the former. The SFLRNMF algorithm effectively addresses the issue of inaccurate low-rank matrix approximation by integrating orthogonality, sparsity, and dual Laplacian rank constraints in non-negative matrix factorization, significantly enhancing the model’s capability in low-dimensional representation. Building on this, SFLRNMTF introduces an additional factor R, which is capable of handling scale differences among the input matrices X, U, and V, further improving the accuracy of factor decomposition. Moreover, both models are designed with a learning mechanism to optimize the graph model, enabling the automatic adjustment of the affinity matrix’s weights to more precisely capture the intrinsic structure of the data. According to the numerical experiment results, SFLRNMF and SFLRNMTF outperform several existing mainstream methods on multiple standard datasets, demonstrating a clear competitive advantage.

Author Contributions

H.M., software, data curation, and writing—original draft preparation; Z.M., conceptualization, methodology, writing—review and editing, and validation; H.L., visualization and investigation; J.W., supervision and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data and code that support the findings of this study are available from the corresponding author [Z. Ma] upon reasonable request.

Conflicts of Interest

The authors declare that they have no competing interests.

Figure 1. Construction of optimal graph.
Figure 2. Clustering ACC on the dataset JAFFE.
Figure 3. Clustering NMI on the dataset JAFFE.
Figure 4. Clustering ACC on the dataset COIL20.
Figure 5. Clustering NMI on the dataset COIL20.
Figure 6. Clustering ACC on the dataset UMIST.
Figure 7. Clustering NMI on the dataset UMIST.
Figure 8. Clustering ACC on the dataset YaleB.
Figure 9. Clustering NMI on the dataset YaleB.
Figure 10. Two-dimensional representations of UMIST dataset using t-SNE on the results of different methods.
Figure 11. Two-dimensional representations of COIL20 dataset using t-SNE on the results of different methods.
Figure 12. The ACC and NMI of SFLRNMF with different α and β on JAFFE.
Figure 13. The ACC and NMI of SFLRNMF with different α and θ on JAFFE.
Figure 14. The ACC and NMI of SFLRNMF with different θ and β on JAFFE.
Figure 15. The ACC and NMI of SFLRNMTF with different θ and β on JAFFE.
Figure 16. The ACC and NMI of SFLRNMTF with different θ and α on JAFFE.
Figure 17. The ACC and NMI of SFLRNMTF with different β and α on JAFFE.
Figure 18. Convergence curves of the SFLRNMF algorithm on four datasets.
Figure 19. Convergence curves of the SFLRNMTF algorithm on four datasets.
Table 1. List of notation.
Notation | Description
X ∈ R^(m×n) | Data matrix
T ∈ R^(m×m) | Feature-weighted matrix
R ∈ R^(c×c) | Diagonal scaling matrix
A ∈ R^(n×n) | Initial data affinity matrix
S ∈ R^(n×n) | Data affinity matrix
D ∈ R^(m×m) | Initial feature affinity matrix
W ∈ R^(m×m) | Feature affinity matrix
U ∈ R^(m×c) | Basis matrix
V ∈ R^(n×c) | Coefficient matrix
I | Unit matrix
1 | Vector with all elements being 1
L_S ∈ R^(n×n) | Data Laplacian matrix
L_W ∈ R^(m×m) | Feature Laplacian matrix
D_S ∈ R^(n×n) | Data graph degree matrix
D_W ∈ R^(m×m) | Feature graph degree matrix
α | Dual graph parameter
β | Orthogonal parameter
θ | Sparse parameter
m | Number of data dimensions
n | Number of data points
c | Number of data clusters
k | Number of nearest data points
Table 2. Benchmark dataset.
Datasets | Samples | Dimensions | Classes | Type
COIL-20 | 1440 | 1024 | 20 | Object
JAFFE | 213 | 1024 | 10 | Face
UMIST | 575 | 1024 | 20 | Face
YaleB | 2414 | 1024 | 38 | Face
Table 3. ACC results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the JAFFE dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
299.1098.5899.0210099.0110010010010099.15100100
397.7893.3298.1510098.1998.0298.2997.1898.8998.9699.0499.65
495.7393.2596.3299.0395.9397.3198.7295.5598.6097.8698.2998.36
593.6692.1896.1496.8597.1796.3996.2295.3897.2097.9397.0898.04
693.2491.2692.4594.4994.7393.4295.6393.3494.6396.8297.1498.15
792.3189.3295.2296.7895.9393.1995.5892.4794.3595.4696.5897.72
890.4285.1590.5492.2392.1891.5894.5989.5493.3694.0096.5597.31
988.3284.5890.2791.9793.0291.4593.8788.9294.1794.7995.5297.26
1084.9383.0290.1990.0590.5290.5592.0588.5093.0592.1695.3196.97
Avg.92.8390.0794.2695.7195.1994.6696.1193.4396.0396.3597.2898.16
Table 4. NMI results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the JAFFE dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
295.6292.0790.1810095.7910010010010096.21100100
393.0289.1494.2197.2595.1294.5695.0291.8996.8297.8597.0199.06
491.5187.5691.3698.1388.6593.7596.6890.1096.4694.7196.1496.11
589.7986.2893.4495.7993.6391.4592.2990.0194.2395.4495.0996.03
689.1384.1986.5692.1594.4689.2592.2390.9090.3494.1694.5596.09
789.4983.3791.0095.1095.4491.4692.1588.4690.5192.1295.4196.05
887.8980.4689.0194.0293.6890.3593.0386.1291.0093.1095.3396.05
986.0981.5590.0691.5395.1192.8991.6986.5192.1293.0694.5996.02
1083.8380.3188.0992.2393.7590.7190.6985.4892.4591.7894.9295.82
Avg.89.6085.0090.4395.1393.9692.7193.7589.9493.7794.2795.8996.80
Table 5. ACC results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the COIL20 dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
1077.1471.2486.2791.9373.9687.8589.5673.4685.6588.7696.0494.57
1174.6868.1383.8088.0668.8987.8886.6271.0785.4385.7795.2995.51
1274.0764.8983.5188.9169.1386.2384.2570.8585.1385.9393.1994.39
1374.0771.9581.0987.9968.0982.5083.7969.6583.4487.0492.7195.09
1473.8269.2382.4583.5470.1582.4181.4969.4182.8983.6294.0294.15
1572.0666.6982.1583.0867.4583.3981.4369.2880.9082.9293.7690.34
1672.9365.5579.5385.5965.7683.7281.0567.0479.9081.8889.0888.15
1771.1665.1578.9182.1264.3980.5780.9065.9578.9981.3288.2488.82
1871.1363.1182.2082.0966.9381.0679.4765.5281.6280.7686.3888.34
1970.9164.7680.1377.0165.2179.5079.3364.9578.4479.3584.6788.11
2068.7864.3279.6977.9865.3581.8378.6164.8779.0979.5186.0487.75
Avg.72.8066.8281.7984.3967.7683.3682.4168.3781.9583.3590.8691.38
Table 6. NMI results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the COIL20 dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
1080.3872.1588.0992.7975.8992.6194.1275.7189.5892.2797.8996.14
1178.7271.3683.9592.3172.2593.7489.9874.5086.9687.6297.7197.52
1278.0168.4583.3491.7973.1691.5588.8873.2487.9990.4096.2598.28
1378.8775.3881.2992.5574.3687.9590.7674.1989.1291.6495.9398.39
1479.6775.0682.1688.7477.4287.9088.0574.1889.8289.3695.0597.28
1578.8173.5882.5989.4075.1589.6789.6174.5488.1788.6995.1294.43
1678.6573.4288.4387.4273.2789.9190.1274.7187.4588.6193.2394.29
1778.6773.1987.3289.8973.1988.9189.6972.5687.2688.6795.3695.71
1878.0373.3189.9589.0974.3289.3088.4073.5987.4388.2495.1595.08
1978.7173.4288.4686.8574.2588.8288.8972.8887.9888.6593.2394.21
2068.7874.5589.7986.9875.6589.7689.0972.9588.1088.5595.4294.15
Avg.77.9473.0885.9489.8074.4590.0189.7873.9188.1789.3495.4995.95
Table 7. ACC results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the UMIST dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
1053.4948.9565.5575.1950.9166.0969.2550.3667.9768.4077.1278.06
1152.3748.4664.5372.2550.8565.6768.3046.9766.4965.9077.1278.19
1254.1047.3163.7171.3447.9965.7864.8845.8962.1963.7675.2877.21
1350.1147.9860.4370.1647.0360.9663.5646.0762.1563.4974.1977.16
1449.1345.7960.5569.0145.5361.5262.8644.9361.3664.1575.2775.63
1548.2644.0159.2968.3644.5759.9161.2545.1461.9661.9374.2975.27
1646.7442.2559.4669.1644.4860.2261.0744.1658.0962.0076.0675.19
1745.8441.4660.3969.0943.1960.4060.0142.7959.6062.1373.0174.46
1846.1241.2959.3466.1443.2459.5259.2342.4657.6963.2472.5374.25
1944.9041.3559.4862.2343.3559.7859.0941.0956.6759.1772.1174.21
2044.2141.5958.9859.3842.7759.1059.7041.2357.4159.6571.1469.09
Avg.48.6644.5961.1668.3945.8161.7262.5644.6461.0563.0774.3775.34
Table 8. NMI results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the UMIST dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
1065.8956.7976.4183.0160.9176.2678.8859.5278.3477.2586.0184.09
1166.1259.1276.4784.7860.9677.3578.4857.7977.6976.6385.2387.27
1267.2958.1875.2382.4657.8978.7075.4357.5676.2175.9685.0485.18
1365.4957.1674.8882.5460.7175.1176.2360.0373.7876.2784.0983.41
1466.3358.5975.9681.7859.0874.2976.3758.6976.8276.3383.1884.05
1566.1458.7875.8982.1359.1474.9076.1359.0076.2675.9586.3385.19
1666.0558.9374.6584.0661.7876.5776.0758.7473.8075.9584.5286.79
1765.4858.9576.4383.2559.1676.7276.0258.6075.0475.6384.4585.14
1866.0258.4675.2982.0361.2574.5375.9659.2974.3277.2485.0483.21
1965.4058.2975.1380.7960.2975.1275.6158.1973.0275.9385.1383.13
2065.5058.0175.5979.1560.3775.0775.9159.2374.4676.0884.1881.16
Avg.65.9758.3075.6382.3660.1475.8876.4658.7975.4376.2984.8484.42
Table 9. ACC results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the YaleB dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
252.5251.6579.1379.5358.5080.1486.3652.8781.7273.8291.6488.39
337.5136.7156.8275.2743.1265.4860.5336.9967.5759.0678.5678.66
429.5427.8054.0857.7034.7654.4258.4429.7861.0557.3669.5672.00
526.2624.0447.6955.0630.4251.8751.6826.5153.5057.6761.0467.88
622.7320.3346.6547.4426.8645.8744.6225.1051.7448.4559.4764.78
721.2718.6242.9344.6724.3837.3041.7425.2346.1545.5259.4248.95
820.9818.4941.7043.6322.5336.0039.5423.4347.3742.3856.2045.87
918.4918.1539.6240.5221.6534.6235.1622.1544.8142.2654.0955.48
1017.2317.5339.8421.7419.9831.2533.6622.0542.5441.6655.2449.43
Avg.27.3925.9249.8351.7331.3648.5550.1929.3555.1652.0265.0263.49
Table 10. NMI results for clustering of SFLRNMF, SFLRNMTF and ten other algorithms on the YaleB dataset.
cPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
20.280.1145.7438.443.6544.0463.500.2845.5541.1981.5681.01
30.840.4727.1244.853.7337.8332.840.9141.3234.3764.4167.11
41.500.6736.8837.594.3823.8132.051.1544.3945.4758.4758.95
54.112.3031.6443.575.0440.5540.123.7643.8148.2250.2457.48
63.801.1539.1327.405.7939.7236.777.6642.1839.2655.0859.01
76.272.6036.4634.786.3129.2836.7111.8042.7341.8558.3845.99
87.383.9336.0341.866.3334.7236.5411.8645.7939.4753.9540.83
97.626.9339.8333.017.1834.1935.6715.8146.2640.0354.0854.91
107.108.4941.0615.597.7831.7235.5017.5942.8941.1453.4548.54
Avg.4.322.9637.1035.235.5835.1038.867.8743.8841.2258.8557.09
Table 11. Comparison of computation times of different algorithms on four datasets (s).
DatasetsPCANMFGLDCANDLOSDEWSGLSDGSFLRSFLRT
JAFFE0.020.110.120.450.922.861.710.842.583.720.874.59
COIL200.230.991.054.5123.9088.7193.626.1271.7360.053.883.61
UMIST0.080.400.421.124.4116.827.612.3913.9012.842.663.40
YaleB0.372.452.8212.5970.89178.45305.2111.66213.96187.898.7920.34