Article

Robust Subspace Clustering with Block Diagonal Representation for Noisy Image Datasets

Qiang Li, Ziqi Xie and Lihong Wang
1 School of Economics and Management, Yantai University, Yantai 264005, China
2 School of Computer and Control Engineering, Yantai University, Yantai 264005, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(5), 1249; https://doi.org/10.3390/electronics12051249
Submission received: 26 January 2023 / Revised: 2 March 2023 / Accepted: 2 March 2023 / Published: 5 March 2023
(This article belongs to the Special Issue Advances in Spatiotemporal Data Management and Analytics)

Abstract: As a relatively advanced method, the subspace clustering algorithm by block diagonal representation (BDR) is competent in performing subspace clustering on a dataset, provided the dataset is noise-free and drawn from a union of independent linear subspaces. Unfortunately, this assumption is far from reality: real data are usually corrupted by various noises and the subspaces of the data overlap with each other, so the performance of linear subspace clustering algorithms, including BDR, degrades on real complex data. To solve this problem, we design a new objective function based on BDR, in which the l2,1 norm of the reconstruction error is introduced to model the noises and improve the robustness of the algorithm. After optimizing the objective function, we present the corresponding subspace clustering algorithm, which pursues a self-expressive coefficient matrix with a block diagonal structure for a noisy dataset. An affinity matrix is constructed from the coefficient matrix and then fed to the spectral clustering algorithm to obtain the final clustering results. Experiments on several artificial noisy image datasets show that the proposed algorithm is robust and achieves better clustering performance than the compared algorithms.

1. Introduction

Clustering [1] is an important analysis tool in the fields of data mining and machine learning, and is widely used in motion segmentation [2,3,4], text clustering [5], image segmentation [6,7], face recognition [8], and other practical applications [9]. In brief, clustering divides unlabeled data into several clusters according to the similarity of data points. Traditional clustering methods, such as K-means, hierarchical clustering [10], and density clustering [11,12], cannot perform well on high-dimensional data. Therefore, subspace clustering algorithms for high-dimensional data have received more attention in many applications. For example, in spatiotemporal data analysis, huge volumes of image and video data are high-dimensional and noisy; subspace clustering can be employed as an effective data preprocessing method, and the clustering results help to uncover the structure of the dataset.
From the viewpoint of spectral graph theory, the key to the success of subspace clustering is to construct a coefficient matrix that correctly reflects the spatial distribution of the data. Ideally, the coefficient matrix has a block diagonal structure, and each block is a self-representative coefficient sub-matrix for one cluster of data. Sparse subspace clustering (SSC) [13,14] and low rank subspace clustering (LRR) [15] are two representative subspace clustering algorithms of the last decade. SSC and LRR construct the coefficient matrix C by the sparse representation and the low rank representation of the data, respectively, then obtain the affinity matrix W from the coefficient matrix, i.e., W = (|C| + |CT|)/2, and finally feed W into spectral clustering to obtain the clustering results. In recent years, block diagonal representation (BDR) for subspace clustering [16,17] has attracted attention due to its good clustering performance. BDR introduces the Laplacian rank constraint into the objective function and obtains a more accurate block diagonal coefficient matrix than SSC and LRR. However, SSC, LRR, and BDR all belong to linear subspace clustering, whose basic assumption is the self-expressiveness property: each data point can be expressed as a linear combination of all the other points in the union of subspaces. Meanwhile, subspace preservation is also a common assumption for linear subspace clustering, where each data point in a subspace can be linearly expressed by other points in the same subspace [18]. Finally, the data should be noise-free and the subspaces should be independent of each other. However, these assumptions are usually far from reality: during data generation, transmission, storage, and even application, data may be damaged, resulting in data containing noises, outliers, and even corruptions [19,20,21,22,23,24,25]. The corruptions make the data deviate from the original data model. Subspaces corrupted by complex noises are beyond the self-expressiveness of independent linear subspaces; consequently, the subspace clustering results will deviate from expectations. Thus, linear subspace clustering algorithms degrade when facing corrupted datasets in reality.
Aiming at the above problems, some efforts impose norm regularizations to handle various noises. Studies show that the Frobenius norm (F norm for short), the l1 norm, and the l2,1 norm can efficiently handle Gaussian noises, sparse noises, and outliers, respectively [19,26]. SSC used the F norm and the l1 norm to deal with noise [13]. Favaro et al. [27] learned a new data dictionary by handling data errors through the F norm. LRR used the l2,1 norm to reduce the influence of data errors [19]. These efforts achieve good results by assuming the type of errors as a prior and removing the errors in the original input space by modeling them in their objective functions [28]. Unfortunately, data in reality may be corrupted by complex noises [29], and the modeling of errors should be combined with more robust methods to handle the noises and uncover the subspace structure of the dataset.
To solve this problem, Chen et al. [30] proposed a robust low-rank subspace clustering algorithm (RLRR), which adopts a probability density function to fit the noise distribution. Liu et al. [31] proposed latent low-rank representation subspace clustering (LatLRR), which reconstructs the representation dictionary to avoid the influence of noise and correctly represent the distribution of the data subspaces. Zhang et al. [32] proposed Robust LatLRR on the basis of LatLRR to impose sparse constraints on the matrix. Similar to LatLRR, references [33,34] also learned a clean dictionary to find the data distribution structure and reduce the influence of data noise. Nie et al. [35] proposed a novel low-rank structural model for segmenting high-dimensional data, in which a new rank constraint is defined to learn a subspace indicator that can capture different clusters directly from the data. These methods based on low-rank representation show good performance in dealing with the linear structure of data. Unfortunately, in practice, many datasets have nonlinear subspaces that overlap with each other. To solve this problem, multiple kernel learning (MKL) has been applied to model nonlinear structural data [36]. Moreover, low-rank kernel space clustering methods have been studied to deal with the nonlinear structure of high-dimensional data [37]. These methods adaptively select or combine kernels suitable for the datasets from a set of kernels. However, if the dataset is corrupted by complex noises, the selected kernel may not be optimal.
Furthermore, there are efforts dealing with noises in various ways for subspace clustering. Qin et al. [28] proposed a method from an energy perspective to eliminate errors in the projected space rather than the input space; they defined an energy function to measure a block in the projected space and found the correct block with the maximal energy to lead the clustering. He et al. [38] proposed the subspace clustering algorithm via half-quadratic (SCHQ) to handle noisy data. SCHQ consists of two parts: the first adopts the l1 norm to obtain a sparse representation of the data, and the second maximizes the correlation between the low-dimensional representation of a given data point and those of the other data points to reduce the damage of noisy data to the coefficient representation. Wang et al. [39] proposed a robust block diagonal representation (RBDR) method, which uses a penalty matrix to adaptively weigh the reconstruction error so as to handle noises without prior knowledge, and thus achieves good clustering performance under noise conditions. The principles of these efforts differ from our study, and the subspace clustering performance of some of them (e.g., RBDR) is compared in the following experiments.
In this study, we focus on robust subspace clustering algorithms based on the Laplacian rank constraint, i.e., the block diagonal property. BDR assumes that the data contain Gaussian noises and fits the reconstruction errors with the F norm. In fact, other types of noises, such as occlusions and outliers, may exist in the data. In this paper, the l2,1 norm is used to fit the possible outliers in the data and is combined with the Laplacian rank constraint to improve the performance of subspace clustering on more complex noisy datasets.
The main contributions of this paper are as follows:
(1)
The robust subspace clustering algorithm with block diagonal representation (OBDR) is proposed to handle noises, in which the noises are modeled using l2,1 norm, and the Laplacian rank constraint is adopted to pursue a block diagonal structure of the subspace representation;
(2)
The objective function of the proposed algorithm is designed and a corresponding optimization process based on ADMM is given. The algorithm is presented and the time complexity is analyzed;
(3)
Experiments on the artificially corrupted handwritten digit dataset MNIST and face datasets (ORL and YaleB) show that OBDR is insensitive to noises.
Notation. We summarize the symbols and norms used in this paper in Table 1. Vectors are denoted by italic lowercase letters, e.g., c, and matrices by italic capital letters, e.g., C.
The rest of this paper is organized as follows: In Section 2, we propose the objective function of robust subspace clustering with block diagonal representation (OBDR) and illustrate the framework of OBDR. In Section 3, we solve the optimization of the objective function by ADMM and present the algorithm in detail. The experimental results are presented in Section 4, and finally Section 5 summarizes the paper.

2. The Proposed Method

BDR adds a Laplacian rank constraint, namely the K-block diagonal regularizer, to its objective function to control the number of connected components of the graph constructed using the matrix W as the affinity matrix. It has been proved theoretically and verified experimentally that the K-block diagonal regularizer can correctly represent the similarity of the data and that the coefficient matrix shows a block diagonal structure, which gives the BDR algorithm good clustering performance [17].
For any matrix B, the K-block diagonal regularizer is defined as the sum of the K smallest eigenvalues of the corresponding Laplacian matrix of B (i.e., $L_B = \operatorname{Diag}(B\mathbf{1}) - B$):
$$ \|B\|_K = \sum_{i=n-K+1}^{n} \lambda_i(L_B), \tag{1} $$
where the eigenvalues $\lambda_i(L_B)$ are sorted in decreasing order.
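For readers who want to inspect the regularizer numerically, the following is a minimal NumPy sketch of Equation (1); the symmetrization of $L_B$ before the eigendecomposition is our own numerical precaution and not part of the definition.

```python
import numpy as np

def k_block_diag_regularizer(B, K):
    """||B||_K: sum of the K smallest eigenvalues of L_B = Diag(B·1) - B."""
    L_B = np.diag(B.sum(axis=1)) - B
    # eigvalsh returns eigenvalues in ascending order for a symmetric matrix
    eigvals = np.linalg.eigvalsh((L_B + L_B.T) / 2.0)
    return eigvals[:K].sum()
```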
Lu [17] defined the BDR model as follows:
$$ \min_{Z,B}\ \frac{1}{2}\|X - XZ\|_F^2 + \frac{\lambda}{2}\|Z - B\|_F^2 + \gamma\|B\|_K, \quad \text{s.t.}\ \operatorname{diag}(B) = 0,\ B \geq 0,\ B = B^T. \tag{2} $$
On the basis of BDR, we remodel the reconstruction error E of data X, which may contain outliers, and propose the objective function of robust subspace clustering with block diagonal representation (OBDR) as follows:
$$ \min_{Z,B,E}\ \|E\| + \frac{\lambda}{2}\|Z - B\|_F^2 + \gamma\|B\|_K, \quad \text{s.t.}\ \operatorname{diag}(B) = 0,\ B \geq 0,\ B = B^T,\ E = X - XZ. \tag{3} $$
$\|\cdot\|$ denotes an interchangeable regularizer for handling different types of noise. In general, the F norm is selected to deal with Gaussian noise and slight sparse noise in the data, while sparse noise is better handled by $\|\cdot\|_1$. In the case of corruptions or outliers, $\|\cdot\|_{2,1}$ is a better choice. For different noises, an appropriate regularizer can effectively reduce the sensitivity of the algorithm to noisy data and improve its robustness.
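For concreteness, the three regularizers can be computed as in the sketch below. Here the l2,1 norm is taken over columns (one column per sample), matching the column-wise solution used in Section 3; Table 1 states it over rows, and the two conventions differ only in whether samples are stored as rows or columns.

```python
import numpy as np

def frobenius_norm(E):
    return np.linalg.norm(E, "fro")      # suited to Gaussian-like noise

def l1_norm(E):
    return np.abs(E).sum()               # suited to sparse noise

def l21_norm(E):
    # sum of the l2 norms of the columns, suited to sample-specific
    # corruptions and outliers
    return np.linalg.norm(E, axis=0).sum()
```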
Aiming at handling outliers, we select $\|\cdot\|_{2,1}$ to replace $\|\cdot\|$ in Equation (3):
$$ \min_{Z,B,E}\ \|E\|_{2,1} + \frac{\lambda}{2}\|Z - B\|_F^2 + \gamma\|B\|_K, \quad \text{s.t.}\ \operatorname{diag}(B) = 0,\ B \geq 0,\ B = B^T,\ E = X - XZ. \tag{4} $$
Next, we use the augmented Lagrange multiplier (ALM) method [40,41] to rewrite Equation (4) and define the objective function F(Z, B, E) as follows:
$$ \min_{Z,B,E}\ F(Z,B,E) = \frac{\mu}{2}\Big\|X - XZ - E + \frac{\Lambda}{\mu}\Big\|_F^2 + \|E\|_{2,1} + \frac{\lambda}{2}\|Z - B\|_F^2 + \gamma\|B\|_K, \quad \text{s.t.}\ \operatorname{diag}(B) = 0,\ B \geq 0,\ B = B^T, \tag{5} $$
where μ and Λ are the penalty parameter and the Lagrange multiplier, respectively. Equation (5) shows that OBDR not only uses the l2,1 norm to handle outliers, but also uses the F norm to deal with Gaussian noise and slight sparse noise.
Following [42,43], we summarize the framework of the OBDR in Figure 1. The objective function of OBDR retains the self-expressiveness and subspace preserving property in the first term of F (Z, B, E), and the Laplacian rank constraint in the last term. Moreover, the noises are separated from the input data X in the first term of F (Z, B, E) and modeled by the l2,1 norm in the second term. The optimized self-expressive matrix Z is used to calculate the affinity matrix W = (|Z| + |ZT|)/2, and then, W is fed to spectral clustering to get the clustering result.
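The final step of the framework can be sketched as follows; this is a minimal illustration assuming scikit-learn is available, and the spectral clustering settings shown are illustrative rather than those used in our experiments.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_coefficients(Z, n_clusters):
    # Affinity from the self-expressive coefficients: W = (|Z| + |Z^T|) / 2
    W = (np.abs(Z) + np.abs(Z.T)) / 2.0
    # Spectral clustering on the precomputed affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              random_state=0).fit_predict(W)
```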

3. Optimization of the Objective Function

Firstly, $\|B\|_K$ is rewritten as a convex optimization problem [17]:
$$ \|B\|_K = \min_{V}\ \langle L_B, V \rangle, \quad \text{s.t.}\ 0 \preceq V \preceq I,\ \operatorname{tr}(V) = K. \tag{6} $$
Then, Equation (5) is equivalent to
$$ \min\ J(Z,B,E) = \frac{\mu}{2}\Big\|X - XZ - E + \frac{\Lambda}{\mu}\Big\|_F^2 + \|E\|_{2,1} + \frac{\lambda}{2}\|Z - B\|_F^2 + \gamma\langle \operatorname{Diag}(B\mathbf{1}) - B,\ V \rangle, \quad \text{s.t.}\ \operatorname{diag}(B) = 0,\ B \geq 0,\ B = B^T,\ 0 \preceq V \preceq I,\ \operatorname{tr}(V) = K. \tag{7} $$
Finally, the ADMM method is adopted to solve the variables E, Z, B, and V. The optimization consists of four sub-problems as follows:
(1) Update E with fixed Z, B, and V
E is updated as:
$$ E^{t+1} = \arg\min_{E}\ \frac{\mu}{2}\Big\|X - XZ - E + \frac{\Lambda}{\mu}\Big\|_F^2 + \|E\|_{2,1}. \tag{8} $$
To solve Equation (8), the following theorem is introduced.
Theorem 1 
([44]). Given a matrix A = [a1, a2, …, ai, …], solve the following problem:
$$ \min_{W}\ \sigma\|W\|_{2,1} + \frac{1}{2}\|W - A\|_F^2; \tag{9} $$
then the optimal solution W* of Equation (9) can be obtained column by column, and the i-th column of W* is given by:
$$ W^{*}(:,i) = \begin{cases} \dfrac{\|a_i\| - \sigma}{\|a_i\|}\, a_i, & \text{if } \sigma < \|a_i\|, \\ 0, & \text{otherwise.} \end{cases} \tag{10} $$
According to Theorem 1, we rewrite Equation (8) as:
$$ E^{t+1} = \arg\min_{E}\ \frac{1}{2}\Big\|E - \Big(X - XZ + \frac{\Lambda}{\mu}\Big)\Big\|_F^2 + \frac{1}{\mu}\|E\|_{2,1}. \tag{11} $$
Let $A = X - XZ + \frac{\Lambda}{\mu}$ with $A = [a_1, a_2, \ldots, a_i, \ldots]$; the optimal solution $E^{*}$ is given by:
$$ E^{*}(:,i) = \begin{cases} \dfrac{\|a_i\| - \frac{1}{\mu}}{\|a_i\|}\, a_i, & \text{if } \frac{1}{\mu} < \|a_i\|, \\ 0, & \text{otherwise.} \end{cases} \tag{12} $$
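A minimal NumPy sketch of this column-wise shrinkage (the proximal operator of the l2,1 norm) is given below; the function name prox_l21 is ours.

```python
import numpy as np

def prox_l21(A, sigma):
    """Solve min_E 0.5*||E - A||_F^2 + sigma*||E||_{2,1} column by column (Theorem 1)."""
    E = np.zeros_like(A)
    col_norms = np.linalg.norm(A, axis=0)
    keep = col_norms > sigma
    E[:, keep] = A[:, keep] * (col_norms[keep] - sigma) / col_norms[keep]
    return E

# E update of Equation (12): E = prox_l21(X - X @ Z + Lam / mu, 1.0 / mu)
```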
(2) Update Z with fixed E, B, and V
Z is updated as:
$$ Z^{t+1} = \arg\min_{Z}\ \frac{\mu}{2}\Big\|X - XZ - E + \frac{\Lambda}{\mu}\Big\|_F^2 + \frac{\lambda}{2}\|Z - B\|_F^2. \tag{13} $$
Since $\|A\|_F^2 = \operatorname{Tr}(A^T A)$, we define the function f and transform it as follows:
$$ f(Z) = \frac{\mu}{2}\Big\|X - XZ - E + \frac{\Lambda}{\mu}\Big\|_F^2 + \frac{\lambda}{2}\|Z - B\|_F^2 = \frac{\mu}{2}\operatorname{Tr}\Big(\big(X - XZ - E + \tfrac{\Lambda}{\mu}\big)^T \big(X - XZ - E + \tfrac{\Lambda}{\mu}\big)\Big) + \frac{\lambda}{2}\operatorname{Tr}\big((Z - B)^T (Z - B)\big). \tag{14} $$
Taking the derivative of Equation (14) with respect to Z, we obtain:
$$ \frac{\partial f}{\partial Z} = \mu\Big(X^T X Z - X^T X + X^T E - \frac{X^T \Lambda}{\mu}\Big) + \lambda(Z - B). \tag{15} $$
Setting Equation (15) to 0 gives
$$ (\mu X^T X + \lambda I) Z = \mu X^T X - \mu X^T E + X^T \Lambda + \lambda B, $$
from which we derive Z:
$$ Z = (\mu X^T X + \lambda I)^{-1} (\mu X^T X - \mu X^T E + X^T \Lambda + \lambda B). \tag{16} $$
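The Z update in Equation (16) amounts to solving a linear system; a minimal sketch follows, where solving the system is preferred over forming the explicit inverse for numerical stability.

```python
import numpy as np

def update_Z(X, E, B, Lam, mu, lam):
    """Z update: (mu*X^T X + lam*I) Z = mu*X^T X - mu*X^T E + X^T Lam + lam*B."""
    n = X.shape[1]
    XtX = X.T @ X
    lhs = mu * XtX + lam * np.eye(n)
    rhs = mu * XtX - mu * (X.T @ E) + X.T @ Lam + lam * B
    return np.linalg.solve(lhs, rhs)
```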
(3) Update B with fixed E, Z, and V
B is updated as:
$$ B^{t+1} = \arg\min_{B}\ \frac{\lambda}{2}\|Z - B\|_F^2 + \gamma\langle \operatorname{Diag}(B\mathbf{1}) - B,\ V \rangle, \quad \text{s.t.}\ \operatorname{diag}(B) = 0,\ B \geq 0,\ B = B^T. $$
According to reference [17], the closed-form solution of B is given as:
$$ B^{t+1} = \Big[\big(\hat{A} + \hat{A}^T\big)/2\Big]_{+}, \tag{17} $$
where $[A]_{+} = \max(A, 0)$, $A = Z - \frac{\gamma}{\lambda}\big(\operatorname{diag}(V)\mathbf{1}^T - V\big)$, and $\hat{A} = A - \operatorname{Diag}(\operatorname{diag}(A))$.
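A minimal sketch of Equation (17) in NumPy is given below for reference.

```python
import numpy as np

def update_B(Z, V, lam, gamma):
    """B update: B = [(A_hat + A_hat^T) / 2]_+ with A and A_hat as defined above."""
    n = Z.shape[0]
    A = Z - (gamma / lam) * (np.outer(np.diag(V), np.ones(n)) - V)
    A_hat = A - np.diag(np.diag(A))      # remove the diagonal so that diag(B) = 0
    return np.maximum((A_hat + A_hat.T) / 2.0, 0.0)
```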
(4) Update V with fixed E, B, and Z
V is updated as:
$$ V^{t+1} = \arg\min_{V}\ \langle \operatorname{Diag}(B\mathbf{1}) - B,\ V \rangle, \quad \text{s.t.}\ 0 \preceq V \preceq I,\ \operatorname{tr}(V) = K. \tag{18} $$
Equation (18) can be solved as follows [17]:
$$ V^{t+1} = U U^T, \tag{19} $$
where $U \in \mathbb{R}^{n \times K}$ consists of the K eigenvectors associated with the K smallest eigenvalues of $\operatorname{Diag}(B\mathbf{1}) - B$.
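Equation (19) can be sketched as follows; again, the symmetrization before the eigendecomposition is only a numerical precaution.

```python
import numpy as np

def update_V(B, K):
    """V = U U^T, where U holds the eigenvectors of the K smallest eigenvalues of Diag(B1) - B."""
    L_B = np.diag(B.sum(axis=1)) - B
    _, vecs = np.linalg.eigh((L_B + L_B.T) / 2.0)   # eigenvalues in ascending order
    U = vecs[:, :K]
    return U @ U.T
```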
The process of OBDR is summarized as Algorithm 1.
Algorithm 1 OBDR
Input: X, λ, γ, ρ
Initialization: V = 0, B = 0, Z = 0, E = 0, Λ = 0, μ = 10^−3, μ_max = 10^8, ε = 10^−6, t = 0, Maxloop = 500.
 1. WHILE t < Maxloop DO
 2.  Update E by Equation (12);
 3.  Update Z by Equation (16);
 4.  Update B by Equation (17);
 5.  Update V by Equation (19);
 6.  Update Λ, Λ = Λ + μ(X − XZ − E);
 7.  Update μ, μ = min(ρμ, μmax);
 8.  if max(‖Z^{t+1} − Z^{t}‖, ‖B^{t+1} − B^{t}‖, ‖X − XZ − E‖) < ε, break;
 9.  t = t + 1;
 10. END.
 Output: Z, B
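For reference, steps 6–8 of Algorithm 1 (the multiplier update, the penalty growth, and the stopping check) may be sketched as below; using the element-wise maximum absolute value in the convergence check is our assumption, since Algorithm 1 does not specify the norm.

```python
import numpy as np

def admm_bookkeeping(X, Z, Z_prev, B, B_prev, E, Lam, mu, rho,
                     mu_max=1e8, eps=1e-6):
    """Steps 6-8 of Algorithm 1: dual update, penalty growth, convergence check."""
    Lam = Lam + mu * (X - X @ Z - E)     # update the Lagrange multiplier
    mu = min(rho * mu, mu_max)           # increase the penalty parameter
    converged = max(np.abs(Z - Z_prev).max(),
                    np.abs(B - B_prev).max(),
                    np.abs(X - X @ Z - E).max()) < eps
    return Lam, mu, converged
```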
μ is the parameter for updating the Lagrange multiplier Λ, and its initial value is usually within (10^−8, 10^−3). The parameter ρ directly affects the number of updates of the algorithm. Generally, the larger ρ is, the more efficient the algorithm will be; however, a large ρ causes the variables to change greatly and may miss the optimal solution, whereas a smaller ρ yields more accurate variables and better results but takes much more time [45]. Considering both the updating efficiency and the clustering results, we set the maximum number of iterations (Maxloop) in Algorithm 1 to 500 and set ρ according to the datasets to ensure an accurate coefficient matrix Z. Then the affinity matrix W = (|Z| + |ZT|)/2 is calculated and fed into spectral clustering to obtain the clustering result.
The computational complexity of OBDR includes two parts, i.e., the process of updating E, Z, B, and V, and spectral clustering. Because the time complexity of eigenvector decomposition and spectral clustering is O(n^3), and the time complexity of updating the variables is O(Tn^3), where T is the number of iterations, the total time complexity of OBDR is O(Tn^3).

4. Experiments and Discussions

4.1. Experimental Datasets

In order to evaluate the robustness of the proposed algorithm, we test it on four artificial datasets corrupted by noises, including outliers, masking noises, Gaussian noises, and a mixture of various noises. We restrict our discussion and experiments to the following corruption processes:
(1)
Outliers: For a given dataset, outliers are data points that lie beyond all subspaces of the dataset, i.e., they come from different data models rather than simply floating between subspaces [19]. We therefore select small samples from different datasets to simulate outliers for a given dataset;
(2)
Masking noises: A fraction of an image is masked by setting the elements at the masked positions to 0 [46];
(3)
Additive Gaussian noises: $\tilde{x} \mid x \sim \mathcal{N}(x, \sigma^2 I)$ for an image x [46] (a small simulation sketch follows this list).
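The masking and additive Gaussian corruptions can be simulated as in the following sketch; the random mask position and reading N(0.1, 0.01) as mean 0.1 and variance 0.01 are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_image(img, mask_size):
    """Masking noise: zero out a mask_size x mask_size patch at a random position."""
    out = img.copy()
    h, w = out.shape
    r = rng.integers(0, h - mask_size + 1)
    c = rng.integers(0, w - mask_size + 1)
    out[r:r + mask_size, c:c + mask_size] = 0.0
    return out

def add_gaussian_noise(img, mean=0.1, var=0.01):
    """Additive Gaussian noise applied pixel-wise to an image."""
    return img + rng.normal(mean, np.sqrt(var), size=img.shape)
```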
The details of noisy data sets are described as follows:
(1)
Dataset1 for outliers (D1)
We conduct experiments on the MNIST dataset for k = {2, 4, 6} digits, respectively. For each k, we randomly select 100 images for each digit, down sample each image to 28 × 28 pixels, and vectorize it as a vector of length 784. Therefore, the size of Dataset1 is 784 × 100k, for k = {2, 4, 6}. After that, we randomly replace a fraction p = {1%, 3%, 5%, 10%} of Dataset1 with images from ORL, cropped to the same size as the digits, i.e., 784-length vectors. Finally, we normalize each column vector to have unit length. Two clusters in Dataset1 are shown in Figure 2. Dataset1 is denoted as D1 for simplicity; D2~D4 are defined similarly.
(2)
Dataset2 for masking noise (D2)
On the ORL dataset, we conduct clustering experiments for k = {10, 20, 30} people. For each k, we randomly pick k people from ORL, each of whom has 10 images, and the images are down sampled to 32 × 32 pixels. Then, we perform masking operations for each person: we randomly corrupt 4 out of the 10 photos with a 5 × 5 mask at random positions, producing a test dataset with 40% corrupted images. We repeat the masking operations with mask sizes 8 × 8 and 10 × 10 to obtain two more test datasets. Furthermore, we test the noise-free dataset as a benchmark. Finally, we vectorize the images and normalize each vector to have unit length. Some images are shown in Figure 3.
(3)
Dataset3 for Gaussian noise (D3)
The extended YaleB dataset contains 38 people, each of whom has 64 frontal face images under different Lambertian conditions. We conduct clustering experiments on YaleB for k = {3, 5, 8} people. For each k, we randomly pick k people, each of whom has 64 images. The images are down sampled to 32 × 32 pixels and vectorized as vectors of length 1024. Next, we randomly add Gaussian noise N(0.1, 0.01) to a fraction p = {10%, 20%, 30%} of the vectors. Similarly, we test the noise-free dataset as a benchmark. Some images from YaleB are shown in Figure 4.
(4)
Dataset4 for mixture noise (D4)
In practice, a dataset may be corrupted by various noises simultaneously, and the performance of data clustering is seriously damaged [26,47]. On the basis of Dataset1~Dataset3, we add three kinds of data noises with a ratio of 1:1:1 to ORL. Specifically, for each person, we randomly pick one image to add masking noise with mask size 10 × 10 and one image to add Gaussian noise N(0.1, 0.01), respectively, and then replace one image with a cropped 32 × 32 MNIST image. Thus, there are three corrupted images for each person. On Dataset4, we also conduct clustering experiments for k = {10, 20, 30} people.

4.2. Experimental Setup

ACC [17] and NMI [48] were used to evaluate the clustering performance of algorithms under noises.
Let U = [u1, u2, …, un], V = [v1, v2, …, vn] represent the real labels and the clustering labels of all data, respectively. ACC is defined as follows:
$$ \mathrm{ACC}(U, V) = \frac{1}{n}\sum_{i=1}^{n} \mathrm{ind}\big(u_i, g(v_i)\big), $$
where ui and vi denote the real label and the clustering label of the i-th data point, respectively, ind(x, y) = 1 if x = y, and 0 otherwise. The function g(.) maximizes the ACC by permuting the labels V to match the labels U.
NMI is defined as follows:
$$ \mathrm{NMI}(U, V) = \frac{\sum_{i=1}^{R}\sum_{j=1}^{T} P(p_i \cap q_j)\,\log \dfrac{P(p_i \cap q_j)}{P(p_i)\,P(q_j)}}{-\Big[\sum_{i=1}^{R} P(p_i)\log P(p_i) + \sum_{j=1}^{T} P(q_j)\log P(q_j)\Big]\big/2}. $$
P(c) = |c|/n denotes the probability that a data point belongs to cluster c, where |c| is the cardinality of c; p_i is the set of data points with the real label u_i, while q_j is the set with the clustering label v_j. R and T represent the numbers of real labels and output labels, respectively.
The value ranges of ACC and NMI are [0, 1], and the higher the ACC and NMI, the better the clustering performance.
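Both metrics can be computed as in the sketch below: ACC finds the optimal permutation g(·) via the Hungarian algorithm, and NMI is available in scikit-learn, whose default arithmetic averaging of the entropies matches the denominator above. The helper name clustering_accuracy is ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(true_labels, pred_labels):
    """ACC with the best one-to-one matching between clustering labels and real labels."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    classes = np.unique(true_labels)
    clusters = np.unique(pred_labels)
    # contingency counts: points assigned to cluster c that carry real label k
    cont = np.array([[np.sum((pred_labels == c) & (true_labels == k))
                      for k in classes] for c in clusters])
    rows, cols = linear_sum_assignment(-cont)   # maximize the matched counts
    return cont[rows, cols].sum() / true_labels.size

# NMI: nmi = normalized_mutual_info_score(true_labels, pred_labels)
```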
We picked five algorithms for comparison, and the algorithms are described as follows:
SSC [13]: The sparse subspace clustering algorithm, which uses sparse representation among data to obtain clustering results;
LRR [19]: The low rank subspace clustering algorithm, which uses low rank representation among data to obtain clustering results;
BDR [17]: Subspace clustering by block diagonal representation, the coefficient matrix obtained by BDR presents a block diagonal structure;
SBDR [16]: Structured block diagonal representation for subspace clustering. On the basis of BDR, SBDR adds an additional subspace structure constraint;
RBDR [39]: Block diagonal representation learning for robust subspace clustering for handling data noise.
All the codes used in the experiments were released by the authors of the respective algorithms. For the parameter(s) of each algorithm, we searched for the optimal values within the range [10^−5, 10^5] to ensure good performance on most test sets. The parameters are shown in Table 2.
All experiments were run on a computer equipped with an Intel Core i3-10100 processor, 8 GB of memory, and the Windows 10 operating system.

4.3. Experimental Results

We run 10 trials on each test set, and the averaged ACC and NMI are shown in Table 3 and Table 4, respectively; the best ACC and NMI on each test set are shown in bold.
We observe that, for each pair of algorithm and dataset, both ACC and NMI generally decrease as the noise level increases, which verifies that noise hurts the performance of every algorithm. For dataset D1, in terms of ACC and NMI, no algorithm achieved the best performance in all test cases. The main reason is that D1 is a dataset with outliers, which come from different data models rather than simply floating between subspaces [19]. All the algorithms try to express the outliers in the subspaces of the main body of the data, but the randomness of the outliers and the distinct differences between them make the expression more uncertain, so the performance of some algorithms drops abruptly or even increases as the noise level grows. For example, the ACC of SBDR decreases from 0.8465 to 0.5115 when the noise level increases from 1% to 3%, and the ACC of RBDR increases from 0.8425 to 0.9230 when the noise level increases from 0 to 1%. Similar behavior appears in the NMI results. In contrast, the ACC and NMI of OBDR decrease gradually and monotonically with increasing noise levels, which shows the stability of OBDR on noisy data with outliers. Additionally, among these algorithms, SBDR and OBDR are the top two in terms of both ACC and NMI. Specifically, in terms of ACC, OBDR wins in six test cases, and the second best, SBDR, wins in four; in terms of NMI, SBDR wins in 11 cases, and the second best, OBDR, wins in 3. The two algorithms work in different ways: SBDR pursues the structure of the subspaces at the expense of a long running time (see Figure 5) and achieves the best NMIs in most cases on D1, whereas OBDR models the outliers and separates the noises from the subspaces, and thus ranks second. On datasets D2 and D3, OBDR has significant advantages over the compared algorithms, including SBDR, with regard to both ACC and NMI. As far as dataset D4 is concerned, when the dataset is corrupted by various noises simultaneously, OBDR wins in all three cases in terms of ACC, while SSC wins in two of the three cases in terms of NMI.
Moreover, we report the wins/ties/losses (W/T/L) and average ranks to record the times of best performance and the overall performance of the algorithms, respectively. From Table 5, we observe that, in terms of W/T/L, OBDR performs significantly better than the other algorithms on the test datasets. Specifically, OBDR wins in 28 and 19 out of the 42 cases in terms of ACC and NMI, respectively, more than any other algorithm. In terms of average rank, lower is better, and OBDR has the lowest average ranks for ACC and NMI (1.50 and 2.19, respectively), with SBDR second (2.73 and 2.42, respectively). Meanwhile, we calculate the p-values of the Friedman test [49] for the statistical comparison of the six algorithms; the results are shown in Table 5. In terms of both ACC and NMI, the Friedman test results are significant at the α = 0.05 significance level. Thus, the six algorithms perform significantly differently from each other on the test datasets.
As shown in Table 3, Table 4 and Table 5, on the noisy datasets, OBDR obtained the most best-performance counts and the best average ranks in terms of both ACC and NMI; thus, OBDR handles various data noise corruptions better than the compared algorithms. Furthermore, we conduct pairwise comparisons using the Nemenyi multiple comparison test [50], as shown in Table 6. We observe that, at the significance level α = 0.05, OBDR is significantly different from all compared algorithms in terms of ACC, and significantly different from all compared algorithms except SBDR in terms of NMI, indicating that SBDR is also robust to data noise. In addition, RBDR does not show good clustering performance, possibly due to its special preprocessing of the datasets [39]. Overall, OBDR outperforms the compared algorithms with regard to the evaluation metrics.
Figure 5 records the computational time of the algorithms on Dataset1~Dataset4 with the maximum number of clusters, where Dataset1(6) denotes Dataset1 with six subjects. It can be seen that SBDR achieves good performance but needs a long time to update the subspace structure matrix.

4.4. Parameter Analysis

With the number of iterations of Algorithm 1 fixed, ρ directly affects the clustering performance of OBDR. In this section, we search for the optimal values of ρ in OBDR on different datasets. We use the noise-free data with the maximum number of clusters in Dataset1~Dataset3 as the test sets, such as Dataset1(6). The maximum number of iterations of OBDR is set to 500, and the values of λ and γ are set as in Table 2. The effect of ρ on OBDR is shown in Figure 6.
Based on Figure 6, ρ is set to 2.0, 1.05, and 1.45 on Dataset1, Dataset2 (and Dataset4), and Dataset3, respectively. After fixing ρ, we also conduct experiments to test the sensitivity of λ and γ; the results are shown in Figure 7. From Figure 7, we can see that OBDR is more sensitive to λ than to γ on all three datasets. OBDR has steady clustering performance when λ is within [1, 100], and the parameter values in Table 2 are sound for the three datasets.

5. Conclusions

In this paper, we proposed the robust subspace clustering with block diagonal representation algorithm (OBDR). OBDR reduces the influence of noise on the construction of the coefficient matrix via the l2,1 norm and the F norm, recovers the block diagonal spatial structure of the data via the Laplacian rank constraint, and thus improves clustering performance. Experiments on several artificial noisy datasets demonstrate the effectiveness of OBDR in dealing with complex noise. In this study, OBDR is evaluated on noisy face datasets and a handwritten digit dataset, and the results show that OBDR has advantages over the compared algorithms with regard to the evaluation metrics. Moreover, OBDR can be applied to other subspace clustering tasks, such as motion segmentation, image segmentation, and object clustering. In the future, we plan to investigate subspace clustering with deep learning on more high-dimensional data. Since a deep learning network can transform the input data into a low-dimensional latent space [51,52,53,54,55,56], combining OBDR with a deep learning network (e.g., an autoencoder) is worth careful study.

Author Contributions

Conceptualization, Q.L.; methodology, Z.X. and L.W.; software, Z.X.; validation, L.W.; investigation, Q.L.; writing—original draft preparation, Z.X.; writing—review and editing, Z.X. and Q.L.; visualization, Z.X.; supervision, Q.L. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “National Natural Science Foundation of China”, grant number 62072391.

Data Availability Statement

The data used in this study are reported in the figures and tables of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. 1999, 31, 264–323. [Google Scholar] [CrossRef]
  2. Gear, C.W. Multibody grouping from motion images. Int. J. Comput. Vis. 1998, 29, 133–150. [Google Scholar] [CrossRef]
  3. Liu, Y.; Wang, K.; Liu, L.; Lan, H.; Lin, L. Tcgl: Temporal contrastive graph for self-supervised video representation learning. IEEE Trans. Image Process. 2022, 31, 1978–1993. [Google Scholar] [CrossRef]
  4. Ban, Y.; Liu, M.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Depth estimation method for monocular camera defocus images in microscopic scenes. Electronics 2022, 11, 2012. [Google Scholar] [CrossRef]
  5. Jing, L.P.; Ng, M.K.; Huang, J.Z. An entropy weighting k-means algorithm for subspace clustering of high-dimensional sparse data. IEEE Trans. Knowl. Data Eng. 2007, 19, 1026–1041. [Google Scholar] [CrossRef]
  6. Hong, W.; Wright, J.; Huang, K.; Ma, Y. Multiscale hybrid linear models for lossy image representation. IEEE Trans. Image Process. 2006, 15, 3655–3671. [Google Scholar] [CrossRef] [PubMed]
  7. Zhou, W.; Lv, Y.; Lei, J.; Yu, L. Global and local-contrast guides content-aware fusion for RGB-D saliency prediction. IEEE Trans. Syst. Man Cybern. 2021, 51, 3641–3649. [Google Scholar] [CrossRef]
  8. Basri, R.; Jacobs, D.W. Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 218–233. [Google Scholar] [CrossRef]
  9. Jiang, S.; Zhao, C.; Zhu, Y.; Wang, C.; Du, Y.; Lei, W.; Wang, L. A practical and economical ultra-wideband base station placement approach for indoor autonomous driving systems. J. Adv. Transp. 2022, 2022, 3815306. [Google Scholar] [CrossRef]
  10. Johnson, S.C. Hierarchical clustering schemes. Psychometrika 1967, 32, 241–254. [Google Scholar] [CrossRef]
  11. Kriegel, H.P.; Kröger, P.; Sander, J.; Zimek, A. Density-based clustering. Wiley Interdiscip. Rev.-Data Min. Knowl. Discov. 2011, 1, 231–240. [Google Scholar] [CrossRef]
  12. Xiong, S.; Li, B.; Zhu, S. DCGNN: A single-stage 3D object detection network based on density clustering and graph neural network. Complex Intell. Syst. 2022. [Google Scholar] [CrossRef]
  13. Elhamifar, E.; Vidal, R. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 2765–2781. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Qin, X.; Ban, Y.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Liu, M.; Zheng, W. Improved image fusion method based on sparse decomposition. Electronics 2022, 11, 2321. [Google Scholar] [CrossRef]
  15. Liu, G.C.; Lin, Z.C.; Yu, Y. Robust subspace segmentation by low-rank representation. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 1–8. [Google Scholar]
  16. Liu, M.S.; Wang, Y.; Sun, J.; Ji, Z.C. Structured block diagonal representation for subspace clustering. Appl. Intell. 2020, 50, 2523–2536. [Google Scholar] [CrossRef]
  17. Lu, C.Y.; Feng, J.S.; Lin, Z.C.; Mei, T.; Yan, S.C. Subspace clustering by block diagonal representation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 487–501. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Vidal, R.; Ma, Y.; Sastry, S. Generalized Principal Component Analysis; Springer: New York, NY, USA, 2016. [Google Scholar]
  19. Liu, G.C.; Lin, Z.C.; Yan, S.C.; Sun, J.; Yu, Y.; Ma, Y. Robust recovery of subspace structures by low-rank representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 171–184. [Google Scholar] [CrossRef] [Green Version]
  20. Hu, J.; Wu, Y.; Li, T.; Ghosh, B.K. Consensus control of general linear multiagent systems with antagonistic interactions and communication noises. IEEE Trans. Autom. Control 2019, 64, 2122–2127. [Google Scholar] [CrossRef]
  21. Zhong, T.; Wang, W.; Lu, S.; Dong, X.; Yang, B. RMCHN: A residual modular cascaded heterogeneous network for noise suppression in DAS-VSP Records. IEEE Geosci. Remote Sens. Lett. 2022, 20, 7500205. [Google Scholar] [CrossRef]
  22. Yang, C.; Zhang, J.; Huang, Z. Numerical study on cavitation-vortex-noise correlation mechanism and dynamic mode decomposition of a hydrofoil. Phys. Fluids 2022, 34, 125105. [Google Scholar] [CrossRef]
  23. Huang, N.; Chen, Q.; Cai, G.; Xu, D.; Zhang, L.; Zhao, W. Fault diagnosis of bearing in wind turbine gearbox under actual operating conditions driven by limited data with noise labels. IEEE Trans. Instrum. Meas. 2021, 70, 3502510. [Google Scholar] [CrossRef]
  24. Li, R.; Zhang, H.; Chen, Z.; Yu, N.; Kong, W.; Li, T.; Liu, Y. Denoising method of ground-penetrating radar signal based on independent component analysis with multifractal spectrum. Measurement 2022, 192, 110886. [Google Scholar] [CrossRef]
  25. Liu, F.; Zhao, X.; Zhu, Z.; Zhai, Z.; Liu, Y. Dual-microphone active noise cancellation paved with doppler assimilation for TADS. Mech. Syst. Signal Process. 2023, 184, 109727. [Google Scholar] [CrossRef]
  26. He, R.; Zheng, W.S.; Tan, T.N.; Sun, Z.N. Half-quadratic-based iterative minimization for robust sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 261–275. [Google Scholar]
  27. Favaro, P.; Vidal, R.; Ravichandran, A. A closed form solution to robust subspace estimation and clustering. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1801–1807. [Google Scholar]
  28. Qin, Y.; Zhang, X.; Shen, L.; Feng, G. Maximum block energy guided robust subspace clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 2652–2659. [Google Scholar] [CrossRef]
  29. Du, H.; Deng, Y.; Xue, J.; Meng, D.; Zhao, Q.; Xu, Z. Robust online CSI estimation in a complex environment. IEEE Trans. Wirel. Commun. 2022, 21, 8322–8336. [Google Scholar]
  30. Chen, J.H.; Yang, J. Robust subspace segmentation via low-rank representation. IEEE Trans. Cybern. 2014, 44, 1432–1445. [Google Scholar] [CrossRef] [PubMed]
  31. Liu, G.C.; Yan, S.C. Latent low-rank representation for subspace segmentation and feature extraction. In Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1615–1622. [Google Scholar]
  32. Zhang, H.Y.; Lin, Z.C.; Zhang, C.; Gao, J.B. Robust latent low rank representation for subspace clustering. Neurocomputing 2014, 145, 369–373. [Google Scholar] [CrossRef]
  33. Ji, P.; Salzmann, M.; Li, H.D. Efficient dense subspace clustering. In Proceedings of the 2014 IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, CO, USA, 24–26 March 2014; pp. 461–468. [Google Scholar]
  34. Jing, L.P.; Ng, M.K.; Zeng, T.Y. Dictionary learning-based subspace structure identification in spectral clustering. IEEE Trans. Neural Netw. Learn. 2013, 24, 1188–1199. [Google Scholar] [CrossRef] [PubMed]
  35. Nie, F.; Chang, W.; Hu, Z.; Li, X. Robust subspace clustering with low-rank structure constraint. IEEE Trans. Knowl. Data Eng. 2022, 34, 1404–1415. [Google Scholar] [CrossRef]
  36. Guo, L.; Zhang, X.; Liu, Z.; Xue, X.; Wang, Q.; Zheng, S. Robust subspace clustering based on automatic weighted multiple kernel learning. Inf. Sci. 2021, 573, 453–474. [Google Scholar] [CrossRef]
  37. Xue, X.; Zhang, X.; Feng, X.; Sun, H.; Chen, W.; Liu, Z. Robust subspace clustering based on non-convex low-rank approximation and adaptive kernel. Inf. Sci. 2019, 513, 190–205. [Google Scholar] [CrossRef]
  38. He, R.; Zhang, Y.Y.; Sun, Z.N.; Yin, Q.Y. Robust subspace clustering with complex noise. IEEE Trans. Image Process. 2015, 24, 4001–4013. [Google Scholar] [PubMed]
  39. Wang, L.J.; Huang, J.W.; Yin, M.; Cai, R.C.; Hao, Z.F. Block diagonal representation learning for robust subspace clustering. Inf. Sci. 2020, 526, 54–67. [Google Scholar] [CrossRef]
  40. Nie, F.P.; Wang, H.; Cai, X.; Huang, H.; Ding, C. Robust matrix completion via joint schatten p-norm and lp-norm minimization. In Proceedings of the 12th IEEE International Conference on Data Mining, Brussels, Belgium, 10–13 December 2012; pp. 566–574. [Google Scholar]
  41. Rockafellar, R.T. Augmented Lagrange multiplier functions and duality in nonconvex programming. SIAM J. Control Optim. 1974, 12, 268–285. [Google Scholar] [CrossRef]
  42. Khan, M.; Imtiaz, S.; Parvaiz, G.; Hussain, A.; Bae, J. Integration of Internet-of-Things with blockchain technology to enhance humanitarian logistics performance. IEEE Access 2021, 9, 25422–25436. [Google Scholar] [CrossRef]
  43. Khan, M.; Parvaiz, G.; Dedahanov, A.; Abdurazzakov, O.; Rakhmonov, D. The Impact of technologies of traceability and transparency in supply chains. Sustainability 2022, 14, 16336. [Google Scholar] [CrossRef]
  44. Huang, J.; Nie, F.P.; Huang, H.; Ding, C. Robust manifold nonnegative matrix factorization. ACM Trans. Knowl. Discov. Data 2014, 8, 1–21. [Google Scholar] [CrossRef]
  45. Lin, Z.C.; Chen, M.M.; Ma, Y. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv 2010, arXiv:1009.5055. [Google Scholar]
  46. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  47. Wang, Y.X.; Xu, H. Noisy sparse subspace clustering. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 89–97. [Google Scholar]
  48. Vinh, N.X.; Epps, J.; Bailey, J. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. J. Mach. Learn. Res. 2010, 11, 2837–2854. [Google Scholar]
  49. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  50. Nemenyi, P.B. Distribution-Free Multiple Comparisons. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 1963. [Google Scholar]
  51. Liu, Y.; Zhang, Z.; Liu, X.; Wang, L.; Xia, X. Efficient image segmentation based on deep learning for mineral image classification. Adv. Powder Technol. 2021, 32, 3885–3903. [Google Scholar] [CrossRef]
  52. Liu, H.; Liu, M.; Li, D.; Zheng, W.; Yin, L.; Wang, R. Recent advances in pulse-coupled neural networks with applications in image processing. Electronics 2022, 11, 3264. [Google Scholar] [CrossRef]
  53. Li, M.; Tian, Z.; Du, X.; Yuan, X.; Shan, C.; Guizani, M. Power normalized cepstral robust features of deep neural networks in a cloud computing data privacy protection scheme. Neurocomputing 2023, 518, 165–173. [Google Scholar] [CrossRef]
  54. Zhou, W.; Wang, H.; Wan, Z. Ore image classification based on improved CNN. Comput. Electr. Eng. 2022, 99, 107819. [Google Scholar] [CrossRef]
  55. Liu, R.; Wang, X.; Lu, H.; Wu, Z.; Fan, Q.; Li, S.; Jin, X. SCCGAN: Style and characters inpainting based on CGAN. Mob. Netw. Appl. 2021, 26, 3–12. [Google Scholar] [CrossRef]
  56. Yang, M.; Wang, H.; Hu, K.; Yin, G.; Wei, Z. IA-Net: An inception–attention-module-based network for classifying underwater images from others. IEEE J. Ocean. Eng. 2022, 47, 704–717. [Google Scholar] [CrossRef]
Figure 1. The framework of OBDR.
Figure 2. MNIST confused with a small number of ORL images.
Figure 3. Masking photos from ORL.
Figure 4. YaleB dataset.
Figure 5. The computational time of algorithms on Dataset1~Dataset4.
Figure 6. The effect of ρ.
Figure 7. The ACCs of OBDR with different λ and γ.
Table 1. The notations used in each algorithm.

Notations | Descriptions
C_{i,:} | the i-th row of C
C_{:,j} | the j-th column of C
C_{ij} | the (i, j)-th element of C
C^T | the transposed matrix of C
Tr(C) | the trace of the matrix C
diag(C) | a vector with its i-th element being the i-th diagonal element of C
Diag(c) | a diagonal matrix with its i-th diagonal element being the i-th element of vector c
‖C‖_F = (Σ_{ij} C_{ij}^2)^{1/2} | the Frobenius norm (or l2 norm) of C
‖C‖_{2,1} = Σ_i ‖C_{i,:}‖_F | the l2,1 norm of C
‖C‖_1 = Σ_{ij} |C_{ij}| | the l1 norm of C
‖C‖_∞ = max_{ij} |C_{ij}| | the l∞ norm of C
Table 2. The parameter values used in each algorithm.

Datasets | SSC | LRR | BDR | SBDR | RBDR | OBDR
D1 | α = 20 | λ = 0.45 | λ = 70, γ = 0.1 | λ = 100, γ = 10, δ = 1 | λ = 1 × 10^−4, γ = 1 × 10^−4 | λ = 70, γ = 0.1, ρ = 2
D2, D4 | α = 100 | λ = 1.45 | λ = 70, γ = 0.1 | λ = 100, γ = 0.1, δ = 1 × 10^−5 | λ = 1 × 10^−3, γ = 1 × 10^−4 | λ = 70, γ = 0.1, ρ = 1.05
D3 | α = 20 | λ = 0.85 | λ = 10, γ = 0.1 | λ = 70, γ = 0.1, δ = 0.1 | λ = 1 × 10^−4, γ = 1 × 10^−5 | λ = 10, γ = 1 × 10^−3, ρ = 1.45
Table 3. ACCs of algorithms on Dataset1~Dataset4.

Datasets | #Subjects | Noise Level | SSC | LRR | BDR | SBDR | RBDR | OBDR
D1 | 2 | Clean | 0.8685 | 0.8955 | 0.9230 | 0.9245 ¹ | 0.8425 | 0.8965
 | | 1% | 0.6355 | 0.8955 | 0.6715 | 0.8465 | 0.9230 | 0.8910
 | | 3% | 0.5566 | 0.7085 | 0.5110 | 0.5115 | 0.6945 | 0.8805
 | | 5% | 0.5090 | 0.5105 | 0.5090 | 0.5120 | 0.6945 | 0.8645
 | | 10% | 0.5735 | 0.5185 | 0.5720 | 0.5180 | 0.6870 | 0.6975
 | 4 | Clean | 0.6420 | 0.7368 | 0.7712 | 0.7548 | 0.6202 | 0.7560
 | | 1% | 0.6515 | 0.7232 | 0.6222 | 0.6352 | 0.6020 | 0.7183
 | | 3% | 0.5735 | 0.6038 | 0.6153 | 0.6295 | 0.5453 | 0.6358
 | | 5% | 0.5623 | 0.5968 | 0.6095 | 0.6213 | 0.4370 | 0.5930
 | | 10% | 0.3640 | 0.5777 | 0.4040 | 0.5590 | 0.2892 | 0.5767
 | 6 | Clean | 0.5712 | 0.6235 | 0.6077 | 0.6437 | 0.6100 | 0.6423
 | | 1% | 0.5495 | 0.5528 | 0.5828 | 0.5460 | 0.5828 | 0.6073
 | | 3% | 0.5113 | 0.5465 | 0.5758 | 0.5570 | 0.5758 | 0.5682
 | | 5% | 0.5183 | 0.5368 | 0.5452 | 0.6042 | 0.3832 | 0.5607
 | | 10% | 0.3023 | 0.4640 | 0.2582 | 0.4525 | 0.1840 | 0.5398
D2 | 10 | Clean | 0.8000 | 0.8760 | 0.8820 | 0.8820 | 0.5470 | 0.8830
 | | 5 × 5 | 0.7440 | 0.8550 | 0.8590 | 0.8640 | 0.7990 | 0.8700
 | | 8 × 8 | 0.6440 | 0.7800 | 0.7460 | 0.7690 | 0.6150 | 0.7830
 | | 10 × 10 | 0.6480 | 0.7490 | 0.6120 | 0.6030 | 0.4980 | 0.6850
 | 20 | Clean | 0.7650 | 0.7940 | 0.8135 | 0.8155 | 0.3975 | 0.8350
 | | 5 × 5 | 0.6660 | 0.7680 | 0.7835 | 0.7950 | 0.7070 | 0.8220
 | | 8 × 8 | 0.6150 | 0.6910 | 0.6375 | 0.6560 | 0.5090 | 0.6905
 | | 10 × 10 | 0.5935 | 0.6020 | 0.5085 | 0.5130 | 0.4205 | 0.5625
 | 30 | Clean | 0.7307 | 0.7903 | 0.8143 | 0.8370 | 0.2423 | 0.8543
 | | 5 × 5 | 0.6370 | 0.7627 | 0.7913 | 0.7970 | 0.6747 | 0.8317
 | | 8 × 8 | 0.5840 | 0.6530 | 0.6233 | 0.6670 | 0.4827 | 0.6923
 | | 10 × 10 | 0.5850 | 0.6107 | 0.5047 | 0.5363 | 0.4120 | 0.5857
D3 | 3 | Clean | 0.6495 | 0.9638 | 0.9664 | 0.9303 | 0.3327 | 0.9824
 | | 10% | 0.6489 | 0.4898 | 0.7495 | 0.8059 | 0.3221 | 0.8729
 | | 20% | 0.6101 | 0.4394 | 0.7330 | 0.7479 | 0.2466 | 0.7947
 | | 30% | 0.5681 | 0.3351 | 0.6628 | 0.6016 | 0.1889 | 0.6686
 | 5 | Clean | 0.5962 | 0.7163 | 0.7987 | 0.7941 | 0.4210 | 0.8687
 | | 10% | 0.5300 | 0.6069 | 0.5966 | 0.7050 | 0.3210 | 0.7622
 | | 20% | 0.4206 | 0.4287 | 0.5572 | 0.5841 | 0.2567 | 0.5962
 | | 30% | 0.3859 | 0.3087 | 0.4563 | 0.5222 | 0.2212 | 0.5269
 | 8 | Clean | 0.5115 | 0.7094 | 0.8687 | 0.9127 | 0.2882 | 0.8468
 | | 10% | 0.5626 | 0.6160 | 0.6409 | 0.7411 | 0.2511 | 0.7568
 | | 20% | 0.4495 | 0.5039 | 0.6149 | 0.7000 | 0.1889 | 0.7080
 | | 30% | 0.4102 | 0.4127 | 0.5564 | 0.6221 | 0.1876 | 0.6268
D4 | 10 | | 0.5880 | 0.2621 | 0.6580 | 0.6511 | 0.5315 | 0.6905
 | 20 | | 0.6003 | 0.3285 | 0.5796 | 0.6121 | 0.4762 | 0.6188
 | 30 | | 0.5682 | 0.3562 | 0.5802 | 0.6000 | 0.4935 | 0.6062

¹ The best ACC on each test set is shown in bold.
Table 4. NMIs of algorithms on Dataset1~Dataset4.

Datasets | #Subjects | Noise Level | SSC | LRR | BDR | SBDR | RBDR | OBDR
D1 | 2 | Clean | 0.4991 | 0.6078 | 0.6707 | 0.7173 ² | 0.5649 | 0.6242
 | | 1% | 0.2302 | 0.5919 | 0.2876 | 0.5840 | 0.5528 | 0.6063
 | | 3% | 0.0858 | 0.3252 | 0.0066 | 0.0066 | 0.3298 | 0.5594
 | | 5% | 0.0047 | 0.0053 | 0.0047 | 0.0047 | 0.3259 | 0.4863
 | | 10% | 0.0649 | 0.0051 | 0.0044 | 0.0048 | 0.2860 | 0.2366
 | 4 | Clean | 0.4154 | 0.5842 | 0.6133 | 0.6821 | 0.4849 | 0.5507
 | | 1% | 0.5702 | 0.5688 | 0.5534 | 0.6065 | 0.4822 | 0.5292
 | | 3% | 0.5128 | 0.4777 | 0.5283 | 0.5803 | 0.3456 | 0.4590
 | | 5% | 0.4768 | 0.4641 | 0.5100 | 0.5631 | 0.1989 | 0.4442
 | | 10% | 0.0947 | 0.4222 | 0.4024 | 0.4744 | 0.0503 | 0.4182
 | 6 | Clean | 0.4255 | 0.5589 | 0.5623 | 0.6533 | 0.5623 | 0.5774
 | | 1% | 0.5468 | 0.5047 | 0.5325 | 0.6008 | 0.5325 | 0.5258
 | | 3% | 0.4859 | 0.4876 | 0.5037 | 0.5662 | 0.5037 | 0.4841
 | | 5% | 0.4844 | 0.4742 | 0.4644 | 0.5883 | 0.2605 | 0.4658
 | | 10% | 0.1254 | 0.3832 | 0.1475 | 0.4651 | 0.0221 | 0.4379
D2 | 10 | Clean | 0.8662 | 0.8976 | 0.8952 | 0.9047 | 0.6324 | 0.9176
 | | 5 × 5 | 0.7726 | 0.8839 | 0.8793 | 0.8812 | 0.8229 | 0.8958
 | | 8 × 8 | 0.6804 | 0.7917 | 0.7819 | 0.7981 | 0.6242 | 0.8008
 | | 10 × 10 | 0.6564 | 0.7493 | 0.6346 | 0.6259 | 0.5225 | 0.6957
 | 20 | Clean | 0.8571 | 0.8732 | 0.8821 | 0.8941 | 0.5541 | 0.9009
 | | 5 × 5 | 0.7643 | 0.8439 | 0.8635 | 0.8761 | 0.7861 | 0.8840
 | | 8 × 8 | 0.7135 | 0.7644 | 0.7282 | 0.7479 | 0.6246 | 0.7646
 | | 10 × 10 | 0.6943 | 0.6954 | 0.6264 | 0.6342 | 0.5572 | 0.6548
 | 30 | Clean | 0.8561 | 0.8829 | 0.9023 | 0.9144 | 0.4277 | 0.9221
 | | 5 × 5 | 0.7745 | 0.8543 | 0.8813 | 0.8895 | 0.7913 | 0.8989
 | | 8 × 8 | 0.7281 | 0.7602 | 0.7459 | 0.7766 | 0.6387 | 0.7900
 | | 10 × 10 | 0.7228 | 0.7295 | 0.6626 | 0.6805 | 0.5974 | 0.7046
D3 | 3 | Clean | 0.4188 | 0.8712 | 0.8130 | 0.4870 | 0.2130 | 0.8851
 | | 10% | 0.4008 | 0.2657 | 0.5454 | 0.3702 | 0.2454 | 0.6236
 | | 20% | 0.3573 | 0.1741 | 0.4949 | 0.3082 | 0.1800 | 0.5637
 | | 30% | 0.3126 | 0.0200 | 0.4427 | 0.3215 | 0.1042 | 0.3843
 | 5 | Clean | 0.4224 | 0.7805 | 0.5714 | 0.6398 | 0.3714 | 0.8190
 | | 10% | 0.4703 | 0.5853 | 0.4749 | 0.4872 | 0.2749 | 0.5723
 | | 20% | 0.3817 | 0.3917 | 0.4856 | 0.4606 | 0.1856 | 0.4908
 | | 30% | 0.3463 | 0.2234 | 0.4470 | 0.4507 | 0.1470 | 0.4109
 | 8 | Clean | 0.4420 | 0.6417 | 0.5438 | 0.7066 | 0.2545 | 0.5757
 | | 10% | 0.5266 | 0.5351 | 0.4895 | 0.5647 | 0.2372 | 0.5979
 | | 20% | 0.4449 | 0.4240 | 0.4523 | 0.5982 | 0.1675 | 0.4959
 | | 30% | 0.3917 | 0.3253 | 0.4420 | 0.4658 | 0.1420 | 0.5457
D4 | 10 | | 0.6362 | 0.3120 | 0.6299 | 0.6413 | 0.5500 | 0.6360
 | 20 | | 0.6810 | 0.5065 | 0.6500 | 0.6488 | 0.5121 | 0.6541
 | 30 | | 0.7030 | 0.5542 | 0.6741 | 0.6760 | 0.5935 | 0.6912

² The best NMI on each test set is shown in bold.
Table 5. W/T/L, average ranks and p-values.

 | | SSC | LRR | BDR | SBDR | RBDR | OBDR | p-Value
ACC | W/T/L | 0/0/42 | 6/0/36 | 1/1/40 | 5/0/37 | 1/1/40 | 28/0/14 ³ |
 | Av. ranks | 4.65 | 3.65 | 3.33 | 2.73 | 5.24 | 1.50 | 1.0 × 10^−21
NMI | W/T/L | 2/0/40 | 4/0/38 | 2/0/40 | 14/0/28 | 1/0/41 | 19/0/23 |
 | Av. ranks | 4.12 | 3.55 | 3.52 | 2.42 | 5.20 | 2.19 | 1.2 × 10^−14

³ The best performance is shown in bold.
Table 6. The p-values of using the Nemenyi multiple comparison test on Dataset1~Dataset4.

 | | SSC | LRR | BDR | SBDR | RBDR
ACC | LRR | 0.0729 | - | - | - | -
 | BDR | 0.0153 | 0.9952 | - | - | -
 | SBDR | 3.4 × 10^−5 | 0.3353 | 0.6725 | - | -
 | RBDR | 0.7094 | 0.0005 | 4.5 × 10^−5 | 1.1 × 10^−8 | -
 | OBDR | 2.0 × 10^−13 | 7.8 × 10^−6 | 0.0001 | 0.0319 | 7.1 × 10^−14
NMI | LRR | 0.7273 | - | - | - | -
 | BDR | 0.6911 | 1.0000 | - | - | -
 | SBDR | 0.0004 | 0.0623 | 0.0729 | - | -
 | RBDR | 0.0849 | 0.0007 | 0.0006 | 1.3 × 10^−10 | -
 | OBDR | 3.4 × 10^−5 | 0.0114 | 0.0139 | 0.9938 | 2.5 × 10^−12
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
