Article

A Useful Criterion on Studying Consistent Estimation in Community Detection

School of Mathematics, China University of Mining and Technology, Xuzhou 221116, China
Entropy 2022, 24(8), 1098; https://doi.org/10.3390/e24081098
Submission received: 10 July 2022 / Revised: 4 August 2022 / Accepted: 8 August 2022 / Published: 9 August 2022
(This article belongs to the Special Issue Signal and Information Processing in Networks)

Abstract

In network analysis, developing a unified theoretical framework that can compare methods under different models is an interesting problem. This paper proposes a partial solution to this problem. We summarize the idea of using the separation condition of a standard network and the sharp threshold of the Erdös–Rényi random graph to study consistent estimation, and of comparing the theoretical error rates and requirements on network sparsity of spectral methods under models that can degenerate to the stochastic blockmodel, as a four-step criterion, SCSTC. Using SCSTC, we find some inconsistencies in the separation conditions and sharp thresholds reported in the community detection literature. In particular, we find that the original theoretical results of the SPACL algorithm, introduced to estimate network memberships under the mixed membership stochastic blockmodel, are sub-optimal. To trace the source of these inconsistencies, we re-establish the theoretical convergence rate of this algorithm by applying recent techniques on row-wise eigenvector deviation. The results are further extended to the degree-corrected mixed membership model. By comparison, our results enjoy smaller error rates, weaker dependence on the number of communities, weaker requirements on network sparsity, and so forth. The separation condition and sharp threshold obtained from our theoretical results match the classical results, which guarantees the usefulness of this criterion for studying consistent estimation. Numerical results for computer-generated networks support our finding that the spectral methods considered in this paper achieve the threshold of the separation condition.

1. Introduction

Networks with latent structure are ubiquitous in our daily life, for example, social networks from social platforms, protein–protein interaction networks, co-citation networks and co-authorship networks [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. Community detection is a powerful tool to learn the latent community structure in networks and graphs in social science, computer science, machine learning, statistical science and complex networks [16,17,18,19,20,21,22]. The goal of community detection is to infer a node’s community information from the network.
Many models have been proposed for networks with latent community structure; see [23] for a survey. The stochastic blockmodel (SBM) [24] stands out for its simplicity, and it has received increasing attention in recent years [25,26,27,28,29,30,31,32,33,34,35]. However, the SBM only models a non-overlapping network in which each node belongs to a single community. Estimating the mixed memberships of a network whose nodes may belong to multiple communities has also received a lot of attention [36,37,38,39,40,41,42,43,44]. To capture the structure of networks with mixed memberships, Ref. [36] proposed the popular mixed membership stochastic blockmodel (MMSB), which extends SBM from non-overlapping networks to overlapping networks. It is well known that the degree-corrected stochastic blockmodel (DCSBM) [45] extends SBM by considering the degree heterogeneity of nodes to fit real-world networks with various node degrees. Similarly, Ref. [41] proposed the degree-corrected mixed membership (DCMM) model as an extension of MMSB that accounts for the degree heterogeneity of nodes. There are alternative models based on MMSB, such as the overlapping continuous community assignment model (OCCAM) of [40] and the stochastic blockmodel with overlap (SBMO) proposed by [46], which can also model networks with mixed memberships. As discussed in Section 5, OCCAM equals DCMM, while SBMO is a special case of DCMM.

1.1. Spectral Clustering Approaches

For the four models SBM, DCSBM, MMSB and DCMM, many researchers focus on designing algorithms with provable consistency guarantees. Spectral clustering [47] is one of the most widely applied methods with consistency guarantees for community detection.
Within the SBM and DCSBM frameworks for a non-overlapping network, spectral clustering has two steps. It first conducts the eigen-decomposition of the adjacency matrix or the Laplacian matrix [26,48,49]. Then it runs a clustering algorithm (typically, k-means) on some leading eigenvectors or their variants to infer the community memberships. For example, Ref. [26] showed the consistency of spectral clustering designed based on the Laplacian matrix under SBM. Ref. [48] proposed a regularized spectral clustering (RSC) algorithm based on the regularized Laplacian matrix and showed its theoretical consistency under DCSBM. Ref. [30] studied the consistency of two spectral clustering algorithms based on the adjacency matrix under SBM and DCSBM. Ref. [50] designed the spectral clustering on ratios-of-eigenvectors (SCORE) algorithm with a theoretical guarantee under DCSBM. Ref. [49] studied the impact of regularization on Laplacian spectral clustering under SBM.
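To make the two-step procedure concrete, the following is a minimal Python sketch (not the implementation of any particular cited method): it computes the leading K eigenvectors of the adjacency matrix and runs k-means on their rows, with an optional row normalization in the spirit of the DCSBM variants. All function and parameter names here are illustrative assumptions.

```python
# Minimal sketch of the two-step spectral clustering described above.
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def spectral_clustering(A, K, normalize_rows=False):
    """A: n x n symmetric 0/1 adjacency matrix, K: number of communities.
    normalize_rows=True mimics the row normalization often used for DCSBM-type data."""
    # leading K eigenvectors (largest magnitude) of the adjacency matrix
    vals, vecs = eigsh(A.astype(float), k=K, which='LM')
    if normalize_rows:
        norms = np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
        vecs = vecs / norms
    # k-means on the rows of the eigenvector matrix gives the estimated labels
    return KMeans(n_clusters=K, n_init=10).fit_predict(vecs)
```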
Within the MMSB and DCMM frameworks for an overlapping network, broadly speaking, spectral clustering has the following three steps. One first conducts an eigen-decomposition of the adjacency matrix or the graph Laplacian, then hunts for corners (also known as vertices) using a convex hull algorithm, and finally reconstructs the memberships by a projection step. The corner-hunting (convex hull) algorithms suggested in [41] differ substantially from the k-means step used in the non-overlapping setting. For example, Ref. [44] designed the sequential projection after cleaning (SPACL) algorithm based on the finding that there exists a simplex structure in the eigen-decomposition of the population adjacency matrix, and studied the theoretical properties of SPACL under MMSB. SPACL uses the successive projection algorithm proposed in [51] to find the corners of its simplex structure. To fit DCMM, Ref. [41] designed the Mixed-SCORE algorithm based on the finding that there exists a simplex structure in the entry-wise ratio matrix obtained from the eigen-decomposition of the population adjacency matrix under DCMM. Ref. [41] also introduced several choices of convex hull algorithms to find the corners of the simplex structure, and showed the estimation consistency of Mixed-SCORE under DCMM. Ref. [43] found a cone structure inherent in the normalized eigenvectors of the population adjacency matrix under DCMM (as well as OCCAM) and developed an algorithm to hunt for corners in the cone structure.

1.2. Separation Condition, Alternative Separation Condition and Sharp Threshold

SBM with n nodes belonging to K equal (or nearly equal) size communities, in which vertices connect with probability $p_{\mathrm{in}}$ within clusters and $p_{\mathrm{out}}$ across clusters, denoted by $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$, has been well studied in recent years, especially for the case when K = 2; see [21] and the references therein. In this paper, we call the network generated from $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ the standard network for convenience. Without causing confusion, we also occasionally call $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ itself the standard network. Let $p_{\mathrm{in}} = \frac{\alpha_{\mathrm{in}}\log(n)}{n}$ and $p_{\mathrm{out}} = \frac{\alpha_{\mathrm{out}}\log(n)}{n}$. Refs. [21,52] found that exact recovery in $SBM(n, 2, \frac{\alpha_{\mathrm{in}}\log(n)}{n}, \frac{\alpha_{\mathrm{out}}\log(n)}{n})$ is solvable, and efficiently so, if $|\sqrt{\alpha_{\mathrm{in}}} - \sqrt{\alpha_{\mathrm{out}}}| > \sqrt{2}$ (i.e., $|\sqrt{p_{\mathrm{in}}} - \sqrt{p_{\mathrm{out}}}| > \sqrt{\frac{2\log(n)}{n}}$), and unsolvable if $|\sqrt{\alpha_{\mathrm{in}}} - \sqrt{\alpha_{\mathrm{out}}}| < \sqrt{2}$, as summarized in Theorem 13 of [53]. This threshold can be achieved by semidefinite relaxations [21,54,55,56] and spectral methods with local refinements [57,58]. Unlike semidefinite relaxations, spectral methods have a different threshold, which was particularly pointed out by [21,52]: one highlight for $SBM(n, 2, p_{\mathrm{in}}, p_{\mathrm{out}})$ is a theorem by [59] which says that, when $p_{\mathrm{in}} > p_{\mathrm{out}}$, if
$$\frac{p_{\mathrm{in}} - p_{\mathrm{out}}}{\sqrt{p_{\mathrm{in}}}} \gtrsim \sqrt{\frac{\log(n)}{n}},$$
then spectral methods can exactly recover node labels with high probability as n goes to infinity (also known as consistent estimation [30,40,41,43,44,48,50]).
Considering a more general case $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1), this paper finds that the above threshold can be extended as
$$\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}} \gtrsim \sqrt{\frac{\log(n)}{n}}, \qquad (1)$$
which can be alternatively written as
$$\frac{|\alpha_{\mathrm{in}} - \alpha_{\mathrm{out}}|}{\sqrt{\max(\alpha_{\mathrm{in}}, \alpha_{\mathrm{out}})}} \gtrsim 1. \qquad (2)$$
In this paper, when K = O(1), the lower bound requirement on $\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}}$ (respectively, on $\frac{|\alpha_{\mathrm{in}} - \alpha_{\mathrm{out}}|}{\sqrt{\max(\alpha_{\mathrm{in}}, \alpha_{\mathrm{out}})}}$) for the consistent estimation of spectral methods is called the separation condition (respectively, the alternative separation condition). The network generated from $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with $p_{\mathrm{in}} > p_{\mathrm{out}}$ is an assortative network, in which nodes within a community have more edges than across communities [60]. The network generated from $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with $p_{\mathrm{in}} < p_{\mathrm{out}}$ is a dis-assortative network, in which nodes within a community have fewer edges than across communities [60]. Therefore, Equation (2) holds for both assortative and dis-assortative networks.
Meanwhile, when K = 1 such that $p = p_{\mathrm{in}} = p_{\mathrm{out}}$, $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}}) = SBM(n, 1, p, p)$ degenerates to the Erdös–Rényi (ER) random graph G(n, p) [53,61,62]. Ref. [61] finds that the ER random graph is connected with high probability if
$$p \gtrsim \frac{\log(n)}{n}. \qquad (3)$$
We call the lower bound requirement on p for generating a connected ER random graph the sharp threshold in this paper.
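The sharp threshold can also be checked numerically. The following small simulation (the values of n, the constants c, and the number of repetitions are illustrative assumptions) samples ER graphs at $p = c\log(n)/n$ and reports how often they are connected; connectivity becomes typical once c exceeds 1.

```python
# Quick numerical illustration of the connectivity threshold of G(n, p).
import numpy as np
import networkx as nx

n, reps = 2000, 50
for c in (0.5, 1.5):
    p = c * np.log(n) / n
    frac = np.mean([nx.is_connected(nx.erdos_renyi_graph(n, p, seed=s))
                    for s in range(reps)])
    print(f"c = {c}: fraction of connected samples = {frac:.2f}")
```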

1.3. Inconsistencies on Separation Condition in Some Previous Works

In this paper, we focus on the consistency of spectral methods in community detection. The study of consistency proceeds by obtaining a theoretical upper bound on the error rate of a spectral method through analyzing the properties of the population adjacency matrix under a statistical model. To compare the consistency of theoretical results under different models, it is meaningful to study whether the separation conditions and sharp thresholds obtained from the upper bounds of the theoretical error rates of different methods under different models are consistent or not. Meanwhile, the separation condition and sharp threshold can also be seen as an alternative unified theoretical framework to compare all methods and model parameters, as called for in the concluding remarks of [30].
Based on the separation condition and sharp threshold, here we describe some inconsistency phenomena in the community detection literature. We find that the separation conditions of $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1) obtained from the error rates developed in [41,43,44] under DCMM or MMSB are not consistent with those obtained from the main results of [30] under SBM, and the sharp thresholds obtained from the main results of [43,44] do not match the classical results. A summary of these inconsistencies is provided in Table 1 and Table 2. Furthermore, after delicate analysis, we find that the requirement on network sparsity of [43,44] is stronger than that of [30,41]; Ref. [63] also finds that the requirement of [44] on network sparsity is sub-optimal.

1.4. Our Findings

Recall that we reviewed several spectral clustering methods under SBM, DCSBM, MMSB and DCMM introduced in [26,30,41,43,44,48,49,50] and DCSBM, MMSB and DCMM are extensions of SBM (i.e., S B M ( n , K , p in , p out ) is a special case of DCSBM, MMSB and DCMM). We have the following question:
Can these spectral clustering methods achieve the threshold in Equation (1) (or Equation (2)) for S B M ( n , K , p in , p out ) with K = O ( 1 ) and the threshold in Equation (3) for the Erdös–Rényi (ER) random graph G ( n , p ) ?
The answer is yes. In fact, spectral methods for networks with mixed memberships still achieve the thresholds in Equations (1) and (2) for $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ defined in Definition 2 when K = O(1), where $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ can be seen as a generalization of $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ in which some nodes may belong to multiple communities. Explanations for why these spectral clustering methods achieve the thresholds in Equations (1)–(3) will be provided in Section 3, Section 4 and Section 5 via re-establishing the theoretical guarantees for SPACL under MMSB and its extension under DCMM, because we find that the main theoretical results of [43,44] are sub-optimal. Meanwhile, the separation condition and sharp threshold can be obtained directly from the theoretical error bounds of the spectral methods analyzed in [30,41,43,44], but not from those of [26,30,48,49,50]. Instead of re-establishing theoretical guarantees for all spectral methods reviewed in this paper to show that they achieve the thresholds in Equations (1) and (3) for $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1), we mainly focus on the SPACL algorithm under MMSB and its extension under DCMM, since MMSB and DCMM are more complex than SBM and DCSBM.
We then summarize the idea of using the separation condition and sharp threshold to study consistency, and of comparing the error rates and requirements on network sparsity of different spectral methods under different models, as a four-step criterion, which we call the separation condition and sharp threshold criterion (SCSTC for short). With an application of this criterion, this paper provides an attempt to answer the questions of how the above inconsistency phenomena occur and how to obtain consistent results with weaker requirements on network sparsity than those of [43,44]. To answer the two questions, we use the recent techniques on row-wise eigenvector deviation developed in [64,65] to obtain consistent theoretical results directly related to the model parameters for SPACL and for the SVM-cone-DCMMSB algorithm of [43]. The two questions are then answered by delicate analysis, applying SCSTC to the theoretical upper bounds of the error rates in this paper and of some previous spectral methods. Using SCSTC on the spectral methods introduced and studied in [26,30,48,49,50] and on some other spectral methods fitting models that can reduce to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1), one can prove that these spectral methods achieve the thresholds in Equations (1)–(3). The main contributions of this paper are as follows:
(i)
We summarize the idea of using the separation condition of a standard network and the sharp threshold of the ER random graph G(n, p) to study the consistent estimation of different spectral methods (methods designed via the eigen-decomposition or singular value decomposition of the adjacency matrix or its variants) under different models that can degenerate to SBM under mild conditions, as a four-step criterion, SCSTC. The separation condition is used to study the consistency of the theoretical upper bound for a spectral method, and the sharp threshold can be used to study the network sparsity. The theoretical upper bounds for different spectral methods can be compared by SCSTC. Using this criterion, a few inconsistent phenomena in some previous works are found.
(ii)
Under MMSB and DCMM, we study the consistency of the SPACL algorithm proposed in [44] and of its extended version, using the recent techniques on row-wise eigenvector deviation developed in [64,65]. Compared with the original results of [43,44], our main theoretical results enjoy smaller error rates with weaker dependence on K and log(n). Meanwhile, our main theoretical results have weaker requirements on the network sparsity and on the lower bound of the smallest nonzero singular value of the population adjacency matrix. For details, see Table 3 and Table 4.
(iii)
Our results for DCMM are consistent with those for MMSB when DCMM degenerates to MMSB under mild conditions. Using SCSTC, under mild conditions, our main theoretical results under DCMM are consistent with those of [41]. This explains why the main results of [43,44] do not match those of [41]: the theoretical error rates in [43,44] are sub-optimal. We also find that our theoretical results (as well as those of [41]) under both MMSB and DCMM match the classical results on the separation condition and sharp threshold, i.e., they achieve the thresholds in Equations (1)–(3). Using the bound on $\|A - \Omega\|$ instead of $\|A_{\mathrm{re}} - \Omega\|$ to establish the upper bound of the error rate under SBM in [30], the two spectral methods studied in [30] achieve the thresholds in Equations (1)–(3), which answers the question of why the separation condition obtained from the error rate of [41] does not match that obtained from the error rate of [30]. Using $\|A_{\mathrm{re}} - \Omega\|$ or $\|A - \Omega\|$ influences the row-wise eigenvector deviations in Theorem 3.1 of [44] and Theorem I.3 of [43], and thus influences the separation conditions and sharp thresholds of [43,44]. For comparison, our bound on the row-wise eigenvector deviation is obtained using the techniques developed in [64,65], and that of [41] is obtained by applying the modified Theorem 2.1 of [66]; therefore, using $\|A_{\mathrm{re}} - \Omega\|$ or $\|A - \Omega\|$ has no influence on the separation conditions and sharp thresholds of ours and of [41]. For details, see Table 1 and Table 2. In a word, using SCSTC, the spectral methods proposed and studied in [26,30,41,43,44,48,49,50,67,68], and some other spectral methods fitting models that can reduce to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$, achieve the thresholds in Equations (1)–(3).
(iv)
We verify our threshold in Equation (2) by some computer-generated networks in Section 6. The numerical results for networks generated under M M S B ( n , K , Π , p in , p out ) when K = 2 and K = 3 show that SPACL and its extended version achieve a threshold in Equation (2), and results for networks generated from S B M ( n , K , p in , p out ) when K = 2 and K = 3 show that the spectral methods considered in [26,30,48,50] achieve the threshold in Equation (2).
The article is organized as follows. In Section 2, we give a formal introduction to the mixed membership stochastic blockmodel and review the SPACL algorithm considered in this paper. The theoretical consistency results for the mixed membership stochastic blockmodel are presented and compared to related works in Section 3. After delicate analysis, the separation condition and sharp threshold criterion is presented in Section 4. Based on an application of this criterion, improved consistent estimation results for the extended version of SPACL under the degree-corrected mixed membership model are provided in Section 5. Several experiments on computer-generated networks under MMSB and SBM are conducted in Section 6 to show that some spectral clustering methods achieve the threshold in Equation (2). The conclusion is given in Section 7.
Notations. We take the following general notations in this paper. Write $[m] := \{1, 2, \ldots, m\}$ for any positive integer m. For a vector x and fixed q > 0, $\|x\|_q$ denotes its $l_q$-norm; we occasionally drop the subscript when q = 2. For a matrix M, $M'$ denotes the transpose of M, $\|M\|$ denotes the spectral norm, $\|M\|_F$ denotes the Frobenius norm, $\|M\|_{2\to\infty}$ denotes the maximum $l_2$-norm of all the rows of M, and $\|M\|_{\infty} := \max_i\sum_j|M(i, j)|$ denotes the maximum absolute row sum of M. Let $\mathrm{rank}(M)$ denote the rank of M. Let $\sigma_i(M)$ be the i-th largest singular value of M, $\lambda_i(M)$ denote the i-th largest eigenvalue of M ordered by magnitude, and $\kappa(M)$ denote the condition number of M. $M(i, :)$ and $M(:, j)$ denote the i-th row and the j-th column of M, respectively. $M(S_r, :)$ and $M(:, S_c)$ denote the rows and columns of M in the index sets $S_r$ and $S_c$, respectively. For any matrix M, we simply use $Y = \max(0, M)$ to represent $Y_{ij} = \max(0, M_{ij})$ for any i, j. For any matrix $M \in \mathbb{R}^{m\times m}$, let $\mathrm{diag}(M)$ be the $m\times m$ diagonal matrix whose i-th diagonal entry is $M(i, i)$. $\mathbf{1}$ and $\mathbf{0}$ are column vectors with all entries being ones and zeros, respectively. $e_i$ is a column vector whose i-th entry is 1 while the other entries are zero. In this paper, C is a positive constant which may vary occasionally. $f(n) = O(g(n))$ means that there exists a constant c > 0 such that $|f(n)| \leq c|g(n)|$ holds for all sufficiently large n. $x \gtrsim y$ means there exists a constant c > 0 such that $|x| \geq c|y|$. $f(n) = o(g(n))$ indicates that $\frac{f(n)}{g(n)} \to 0$ as $n \to \infty$.

2. Mixed Membership Stochastic Blockmodel

Let $A \in \{0, 1\}^{n\times n}$ be a symmetric adjacency matrix such that $A(i, j) = 1$ if there is an edge between node i and node j, and $A(i, j) = 0$ otherwise. The mixed membership stochastic blockmodel (MMSB) [36] for generating A is as follows:
$$\Omega := \rho\,\Pi\tilde{P}\Pi', \qquad A(i, j) \sim \mathrm{Bernoulli}(\Omega(i, j)), \quad i, j \in [n], \qquad (4)$$
where $\Pi \in \mathbb{R}^{n\times K}$ is called the membership matrix, with $\Pi(i, k) \geq 0$ and $\sum_{k=1}^{K}\Pi(i, k) = 1$ for $i \in [n]$ and $k \in [K]$, $\tilde{P} \in \mathbb{R}^{K\times K}$ is a non-negative symmetric matrix with $\max_{k, l\in[K]}\tilde{P}(k, l) = 1$ for model identifiability under MMSB, ρ is called the sparsity parameter, which controls the sparsity of the network, and $\Omega \in \mathbb{R}^{n\times n}$ is called the population adjacency matrix since $\mathbb{E}[A] = \Omega$. As mentioned in [41,44], $\sigma_K(\tilde{P})$ is a measure of the separation between communities, and we call it the separation parameter in this paper. ρ and $\sigma_K(\tilde{P})$ are two important model parameters directly related to the separation condition and sharp threshold, and they will be considered throughout this paper.
Definition 1.
Call model (4) the mixed membership stochastic blockmodel (MMSB), and denote it by M M S B n ( K , P ˜ , Π , ρ ) .
Definition 2.
Let $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ be the special case of $MMSB_n(K, \tilde{P}, \Pi, \rho)$ in which $\rho\tilde{P}$ has diagonal entries $p_{\mathrm{in}}$ and non-diagonal entries $p_{\mathrm{out}}$, and $\kappa(\Pi'\Pi) = O(1)$.
Call node i 'pure' if $\Pi(i, :)$ is degenerate (i.e., one entry is 1 and the other K − 1 entries are 0) and 'mixed' otherwise. When all nodes in Π are pure, we see that $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ exactly reduces to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$. Thus, $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ is a generalization of $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ that allows mixed nodes in each community. In this paper, we show that SPACL [44] fitting MMSB, as well as SVM-cone-DCMMSB [43] and Mixed-SCORE [41] fitting DCMM, also achieve the thresholds in Equations (1)–(3) for $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1). By Theorems 2.1 and 2.2 of [44], the following conditions are sufficient for the identifiability of MMSB when $\rho\tilde{P}(k, l) \in [0, 1]$ for all $k, l \in [K]$:
  • (I1) rank ( P ˜ ) = K .
  • (I2) There is at least one pure node for each of the K communities.
Unless specified, we treat conditions (I1) and (I2) as the default from now on.
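Before turning to the simplex structure behind SPACL, the following short sketch shows how an adjacency matrix can be sampled from model (4) under conditions (I1) and (I2); the particular $\tilde{P}$, the Dirichlet memberships and the sparsity level are illustrative assumptions only.

```python
# Sketch of sampling A from model (4): Omega = rho * Pi * P_tilde * Pi', A ~ Bernoulli(Omega).
import numpy as np

rng = np.random.default_rng(0)
n, K, rho = 500, 3, 0.2
P_tilde = np.array([[1.0, 0.3, 0.3],
                    [0.3, 1.0, 0.3],
                    [0.3, 0.3, 1.0]])                 # max entry 1 and full rank (I1)
Pi = rng.dirichlet(alpha=np.ones(K) * 0.3, size=n)    # rows sum to 1 (mixed nodes)
Pi[:K] = np.eye(K)                                    # at least one pure node per community (I2)

Omega = rho * Pi @ P_tilde @ Pi.T                     # population adjacency matrix
upper = (rng.random((n, n)) < Omega).astype(int)      # Bernoulli draws
A = np.triu(upper, 1)
A = A + A.T                                           # symmetric 0/1 adjacency, zero diagonal
```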
For $k \in [K]$, let $I(k)$ be the set of pure nodes in community k, i.e., $I(k) = \{i \in [n]: \Pi(i, k) = 1\}$. For $k \in [K]$, select one node from $I(k)$ to construct the index set $I$, i.e., $I$ is the set of indices of K pure nodes, one from each community. Without loss of generality, let $\Pi(I, :) = I_K$, where $I_K$ is the $K\times K$ identity matrix. Recall that $\mathrm{rank}(\Omega) = K$. Let $\Omega = U\Lambda U'$ be the compact eigen-decomposition of Ω such that $U \in \mathbb{R}^{n\times K}$, $\Lambda \in \mathbb{R}^{K\times K}$, and $U'U = I_K$. Lemma 2.1 of [44] gives that $U = \Pi U(I, :)$, and such a form is called the ideal simplex (IS for short) [41,44], since all rows of U form a K-simplex in $\mathbb{R}^K$ and the K rows of $U(I, :)$ are the vertices of this K-simplex. Given Ω and K, as long as we know $U(I, :)$, we can exactly recover Π by $\Pi = UU^{-1}(I, :)$, since $U(I, :) \in \mathbb{R}^{K\times K}$ is a full rank matrix. As mentioned in [41,44], for such an IS, the successive projection (SP) algorithm [51] (i.e., Algorithm A1) can be applied to U with K communities to exactly find the corner matrix $U(I, :)$. For convenience, set $Z = UU^{-1}(I, :)$. Since $\Pi = Z$, we have $\Pi(i, :) = \frac{Z(i, :)}{\|Z(i, :)\|_1}$ for $i \in [n]$.
Based on the above analysis, we are now ready to give the ideal SPACL algorithm with input Ω , K and output Π .
  • Let $\Omega = U\Lambda U'$ be the top-K eigen-decomposition of Ω such that $U \in \mathbb{R}^{n\times K}$, $\Lambda \in \mathbb{R}^{K\times K}$, $U'U = I$.
  • Run the SP algorithm on the rows of U, assuming that there are K communities, to obtain $I$.
  • Set $Z = UU^{-1}(I, :)$.
  • Recover Π by setting $\Pi(i, :) = \frac{Z(i, :)}{\|Z(i, :)\|_1}$ for $i \in [n]$.
With the given U and K, since the SP algorithm returns U ( I , : ) , we see that the ideal SPACL exactly (for detail, see Appendix B) returns Π .
Now, we review the SPACL algorithm of [44]. Let $\tilde{A} = \hat{U}\hat{\Lambda}\hat{U}'$ be the top-K eigen-decomposition of A such that $\hat{U} \in \mathbb{R}^{n\times K}$, $\hat{\Lambda} \in \mathbb{R}^{K\times K}$, $\hat{U}'\hat{U} = I_K$, and $\hat{\Lambda}$ contains the top K eigenvalues of A. For the real case, we use $\hat{Z}$ and $\hat{\Pi}$ given in Algorithm 1 to estimate Z and Π, respectively. Algorithm 1 is the SPACL algorithm of [44] in which we only care about the estimation of the membership matrix Π and omit the estimation of P and ρ. Meanwhile, Algorithm 1 is a direct extension of the ideal SPACL algorithm from the oracle case to the real case, and we omit the pruning step of the original SPACL algorithm of [44].
Algorithm 1 SPACL [44]
  • Require: The adjacency matrix A R n × n and the number of communities K.
  • Ensure: The estimated n × K membership matrix Π ^ .
  • 1: Obtain $\tilde{A} = \hat{U}\hat{\Lambda}\hat{U}'$, the top-K eigen-decomposition of A.
  • 2: Apply the SP algorithm (i.e., Algorithm A1) on the rows of $\hat{U}$, assuming there are K communities, to obtain $\hat{I}$, the index set returned by the SP algorithm.
  • 3: Set $\hat{Z} = \hat{U}\hat{U}^{-1}(\hat{I}, :)$. Then set $\hat{Z} = \max(0, \hat{Z})$.
  • 4: Estimate $\Pi(i, :)$ by $\hat{\Pi}(i, :) = \frac{\hat{Z}(i, :)}{\|\hat{Z}(i, :)\|_1}$, $i \in [n]$.
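For concreteness, the following is a rough Python sketch of Algorithm 1, with a basic successive projection routine standing in for Algorithm A1; it is an illustrative implementation under the assumptions above, not the authors' reference code.

```python
# Rough sketch of SPACL (Algorithm 1, without the pruning step).
import numpy as np
from scipy.sparse.linalg import eigsh

def successive_projection(U, K):
    """Return indices of K rows of U that (approximately) span the simplex corners."""
    R = U.astype(float).copy()
    idx = []
    for _ in range(K):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))   # row with largest residual norm
        idx.append(i)
        u = R[i] / (np.linalg.norm(R[i]) + 1e-12)
        R = R - np.outer(R @ u, u)                      # project out the chosen direction
    return idx

def spacl(A, K):
    vals, U_hat = eigsh(A.astype(float), k=K, which='LM')   # top-K eigenvectors of A
    idx = successive_projection(U_hat, K)                    # estimated pure-node indices
    Z_hat = U_hat @ np.linalg.inv(U_hat[idx, :])             # Z_hat = U_hat * U_hat(I_hat,:)^{-1}
    Z_hat = np.maximum(Z_hat, 0)                             # step 3 thresholding
    rows = np.maximum(Z_hat.sum(axis=1, keepdims=True), 1e-12)
    return Z_hat / rows                                      # row-normalize to memberships
```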

3. Consistency under MMSB

Our main result under MMSB provides an upper bound on the estimation error of each node’s membership in terms of several model parameters. Throughout this paper, K is a known positive integer. Assume that
(A1)
$\rho n \geq \log(n)$.
Assumption (A1) provides a requirement on the lower bound of the sparsity parameter ρ: it should be at least $\frac{\log(n)}{n}$. Then we have the following lemma.
Lemma 1.
Under $MMSB_n(K, \tilde{P}, \Pi, \rho)$, when Assumption (A1) holds, with probability at least $1 - o(n^{-\alpha})$ for any α > 0, we have
$$\|A - \Omega\| \leq \frac{\alpha + 1 + \sqrt{(\alpha + 1)(\alpha + 19)}}{3}\sqrt{\rho n\log(n)}.$$
In Lemma 1, instead of simply using a constant $C_{\alpha}$ to denote $\frac{\alpha + 1 + \sqrt{(\alpha + 1)(\alpha + 19)}}{3}$, we keep the explicit form.
Remark 1.
When Assumption (A1) holds, the upper bound of $\|A - \Omega\|$ in Lemma 1 is consistent with Corollary 6.5 in [69], since $\mathrm{Var}(A(i, j)) \leq \rho$ under $MMSB_n(K, \tilde{P}, \Pi, \rho)$.
Lemma 1 is obtained via Theorem 1.4 (Bernstein inequality) in [70]. For comparison, Ref. [44] applies Theorem 5.2 of [30] to bound $\|A - \Omega\|$ (see, for example, Equation (14) of [44]) and obtains a bound of $C\sqrt{\rho n}$ for some C > 0. However, $C\sqrt{\rho n}$ is the bound between a regularization of A and Ω, as stated in the proof of Theorem 5.2 of [30], where such a regularization of A is obtained from A with some constraints in Lemmas 4.1 and 4.2 of the supplemental material of [30]. Meanwhile, Theorem 2 of [71] also gives that the bound between a regularization of A and Ω is $C\sqrt{\rho n}$, where such a regularization of A should also satisfy a few constraints on A; see Theorem 2 of [71] for details. Instead of bounding the difference between a regularization of A and Ω, we are interested in bounding $\|A - \Omega\|$ by the Bernstein inequality, which imposes no constraints on A. For convenience, we use $A_{\mathrm{re}}$ to denote the regularization of A in this paper. Hence, $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$ with high probability, and this bound is model independent, as shown by Theorem 5.2 of [30] and Theorem 2 of [71], as long as $\rho \geq \max_{i, j}\Omega(i, j)$ (here, letting $\Omega = \mathbb{E}[A]$ without considering models, a ρ satisfying $\rho \geq \max_{i, j}\Omega(i, j)$ is also a sparsity parameter which controls the overall sparsity of a network). Note that $A_{\mathrm{re}}$ is not $\tilde{A}$: $\tilde{A} = \hat{U}\hat{\Lambda}\hat{U}'$ is obtained by the top-K eigen-decomposition of A, while $A_{\mathrm{re}}$ is obtained by adding constraints on the degrees of A; see Theorem 2 of [71] for details.
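The behaviour described in Lemma 1 can be illustrated numerically: the following small simulation (with assumed parameter values) compares $\|A - \Omega\|$ with $\sqrt{\rho n\log(n)}$ under MMSB for growing n; the ratio should remain bounded, although the explicit constant of the lemma is not tracked here.

```python
# Small simulation comparing ||A - Omega|| with sqrt(rho * n * log(n)) under MMSB.
import numpy as np

rng = np.random.default_rng(1)
K, rho = 3, 0.3
P_tilde = 0.3 + 0.7 * np.eye(K)
for n in (400, 800, 1600):
    Pi = rng.dirichlet(np.ones(K), size=n)
    Omega = rho * Pi @ P_tilde @ Pi.T
    A = np.triu((rng.random((n, n)) < Omega).astype(float), 1)
    A = A + A.T
    spec = np.linalg.norm(A - Omega, 2)              # spectral norm of the noise matrix
    print(n, spec / np.sqrt(rho * n * np.log(n)))    # ratio should stay bounded as n grows
```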
In [41,43,44], the main theoretical results for the proposed membership estimation methods hinge on a row-wise deviation bound for the eigenvectors of the adjacency matrix, whether under MMSB or DCMM. Different from the theoretical technique applied in Theorem 3.1 of [44], which has sub-optimal dependencies on log(n) and K and needs sub-optimal requirements on the sparsity parameter ρ and on the lower bound of $\sigma_K(\Omega)$, we use Theorem 4.2 of [64] and Theorem 4.2 of [65] to obtain the row-wise deviation bound for the eigenvectors of Ω.
Lemma 2
(Row-wise eigenspace error). Under $MMSB_n(K, \tilde{P}, \Pi, \rho)$, when Assumption (A1) holds, suppose $\sigma_K(\Omega) \geq C\sqrt{\rho n\log(n)}$; then, with probability at least $1 - o(n^{-\alpha})$:
  • When we apply Theorem 4.2 of [64], we have
    $$\|\hat{U}\hat{U}' - UU'\|_{2\to\infty} = O\Bigg(\frac{\sqrt{K}\Big(\kappa(\Omega)\sqrt{\frac{n}{K\lambda_K(\Pi'\Pi)}} + \sqrt{\log(n)}\Big)}{\sigma_K(\tilde{P})\sqrt{\rho}\,\lambda_K(\Pi'\Pi)}\Bigg),$$
  • When we apply Theorem 4.2 of [65], we have
    $$\|\hat{U}\hat{U}' - UU'\|_{2\to\infty} = O\Bigg(\frac{\sqrt{n\log(n)}}{\sigma_K(\tilde{P})\sqrt{\rho}\,\lambda_K^{1.5}(\Pi'\Pi)}\Bigg).$$
For convenience, set $\varpi = \|\hat{U}\hat{U}' - UU'\|_{2\to\infty}$, and let $\varpi_1, \varpi_2$ denote the upper bound in Lemma 2 when applying Theorem 4.2 of [64] and Theorem 4.2 of [65], respectively. Note that when $\lambda_K(\Pi'\Pi) = O(\frac{n}{K})$, we have $\varpi_1 = \varpi_2 = O\big(\frac{K^{1.5}}{\sigma_K(\tilde{P})\sqrt{n}}\sqrt{\frac{\log(n)}{\rho n}}\big)$, and therefore we simply let $\varpi_2$ be the bound, since its form is slightly simpler than that of $\varpi_1$.
Compared with Theorem 3.1 of [44], since we apply Theorem 4.2 of [64] and Theorem 4.2 of [65] to obtain the bound on the row-wise eigenspace error under MMSB, our bounds do not rely on $\min(K^2, \kappa^2(\Omega))$, while Theorem 3.1 of [44] does. Meanwhile, our bound in Lemma 2 is sharper, with weaker dependence on K and log(n), and has weaker requirements on the lower bounds of $\sigma_K(\Omega)$ and $\lambda_K(\Pi'\Pi)$ and on the sparsity parameter ρ. The details are given below:
  • We emphasize that the bound of Theorem 3.1 of [44] should be $\|\hat{U}\hat{U}' - UU'\|_{2\to\infty} = O\big(\frac{\psi(\Omega)\sqrt{Kn}\log^{\xi}(n)}{\sigma_K(\tilde{P})\sqrt{\rho}\,\lambda_K^{1.5}(\Pi'\Pi)}\big)$ instead of $\|\hat{U}\hat{U}' - UU'\|_{2\to\infty} = O\big(\frac{\psi(\Omega)\sqrt{Kn}}{\sigma_K(\tilde{P})\sqrt{\rho}\,\lambda_K^{1.5}(\Pi'\Pi)}\big)$ for ξ > 1, where the function ψ is defined in Equation (7) of [44]; this is also pointed out by Table 2 of [63]. The reason is that, in the proof of Theorem 3.1 of [44], from step (iii) to step (iv), the term $\log^{\xi}(n)$ should be kept, since it is much larger than 1. We can also see directly from Theorem VI.1 of [44] that the bound in Theorem 3.1 of [44] should be multiplied by $\log^{\xi}(n)$. For comparison, the bound $O\big(\frac{\psi(\Omega)\sqrt{Kn}\log^{\xi}(n)}{\sigma_K(\tilde{P})\sqrt{\rho}\,\lambda_K^{1.5}(\Pi'\Pi)}\big)$ is $K^{0.5}\log^{\xi - 0.5}(n)$ times our bound in Lemma 2. Meanwhile, from the proof of the bound in Theorem 3.1 of [44], we see that the bound depends on the upper bound of $\|A - \Omega\|$, and [44] applies Theorem 5.2 of [30] such that $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$ with high probability. Since $C\sqrt{\rho n}$ is the upper bound of the difference between a regularization of A and Ω, if we are only interested in bounding $\|A - \Omega\|$ instead of $\|A_{\mathrm{re}} - \Omega\|$, the upper bound of Theorem 3.1 of [44] should be $O\big(\frac{\psi(\Omega)\sqrt{Kn}\log^{\xi + 0.5}(n)}{\sigma_K(\tilde{P})\sqrt{\rho}\,\lambda_K^{1.5}(\Pi'\Pi)}\big)$, which is at least $K^{0.5}\log^{\xi}(n)$ times our bound in Lemma 2. Furthermore, the upper bound of the row-wise eigenspace error in Lemma 2 does not rely on the upper bound of $\|A - \Omega\|$ as long as $\sigma_K(\Omega) \geq C\sqrt{\rho n\log(n)}$ holds. Therefore, whether we use $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$ or $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$ does not change the bound in Lemma 2.
  • Our Lemma 2 requires $\sigma_K(\Omega) \geq C\sqrt{\rho n\log(n)}$, while Theorem 3.1 of [44] requires $\sigma_K(\Omega) \geq 4\sqrt{\rho n}\log^{\xi}(n)$ by their Assumption 3.1. Therefore, our Lemma 2 has a weaker requirement on the lower bound of $\sigma_K(\Omega)$ than that of Theorem 3.1 of [44]. Meanwhile, Theorem 3.1 of [44] requires $\lambda_K(\Pi'\Pi) \geq \frac{1}{\rho}$, while our Lemma 2 has no lower bound requirement on $\lambda_K(\Pi'\Pi)$ as long as it is positive.
  • Since $\|\Omega\| = \rho\|\Pi\tilde{P}\Pi'\| \leq C\rho n$ by basic algebra, the lower bound requirement on $\sigma_K(\Omega)$ in Assumption 3.1 of [44] gives that $4\sqrt{\rho n}\log^{\xi}(n) \leq \sigma_K(\Omega) \leq \|\Omega\| \leq C\rho n$, which suggests that Theorem 3.1 of [44] requires $\rho n \geq C\log^{2\xi}(n)$; this also matches the requirement on ρn in Theorem VI.1 of [44] (and is also pointed out by Table 1 of [63]). For comparison, our requirement on sparsity given in Assumption (A1) is $\rho n \geq \log(n)$, which is weaker than $\rho n \geq C\log^{2\xi}(n)$. Similarly, in our Lemma 2, the requirement $\sigma_K(\Omega) \geq C\sqrt{\rho n\log(n)}$ gives $C\sqrt{\rho n\log(n)} \leq \sigma_K(\Omega) \leq \|\Omega\| \leq C\rho n$, and thus $\log(n) \leq C\rho n$, which is consistent with Assumption (A1).
If we further assume that K = O(1), $\lambda_K(\Pi'\Pi) = O(\frac{n}{K})$ (i.e., $\kappa(\Pi'\Pi) = O(1)$) and $\sigma_K(\tilde{P}) = O(1)$, the row-wise eigenspace error is of order $\frac{1}{\sqrt{n}}\sqrt{\frac{\log(n)}{\rho n}}$, which is consistent with the row-wise eigenvector deviation in the results of [63], shown in their Table 2. The next theorem gives the theoretical bounds on the estimation of memberships under MMSB.
Theorem 1.
Under $MMSB_n(K, \tilde{P}, \Pi, \rho)$, let $\hat{\Pi}$ be obtained from Algorithm 1, and suppose the conditions in Lemma 2 hold. Then there exists a permutation matrix $\mathcal{P} \in \mathbb{R}^{K\times K}$ such that, with probability at least $1 - o(n^{-\alpha})$, we have
$$\max_{i\in[n]}\|e_i'(\hat{\Pi} - \Pi\mathcal{P})\|_1 = O\Big(\varpi K\sqrt{\kappa(\Pi'\Pi)\lambda_1(\Pi'\Pi)}\Big).$$
Remark 2
(Comparison to Theorem 3.2 of [44]). Consider a special case by setting $\kappa(\Pi'\Pi) = O(1)$, i.e., $\lambda_K(\Pi'\Pi) = O(\frac{n}{K})$ and $\lambda_1(\Pi'\Pi) = O(\frac{n}{K})$. We focus on comparing the dependence on K of the bounds in our Theorem 1 and in Theorem 3.2 of [44]. In this case, the bound of our Theorem 1 is proportional to $K^2$ by basic algebra. Since $\min(K^2, \kappa^2(\Omega)) = \min(K^2, O(1)) = O(1)$, and the bound in Theorem 3.2 of [44] should be multiplied by K because (in the language of [44]) $\|\hat{V}_p^{-1}\|_F \leq \frac{\sqrt{K}}{\sigma_K(\hat{V}_p)}$ instead of $\|\hat{V}_p^{-1}\|_F = \frac{1}{\lambda_K(\hat{V}_p)}$ in Equation (45) of [44], the power of K is 2 when checking the bound of Theorem 3.2 of [44]. Meanwhile, note that our bound in Theorem 1 is an $l_1$ bound, while the bound in Theorem 3.2 of [44] is an $l_2$ bound. When we translate the $l_2$ bound of Theorem 3.2 of [44] into an $l_1$ bound, the power of K is 2.5 for Theorem 3.2 of [44]. Hence, our bound in Theorem 1 has weaker dependence on K than that of Theorem 3.2 of [44], and this is also consistent with the first bullet given after Lemma 2.
Table 3 summarizes the necessary conditions and dependence on the model parameters of the rates in Theorem 1 and Theorem 3.2 [44] for comparison. The following corollary is obtained by adding conditions on the model parameters similar to Corollary 3.1 in [44].
Corollary 1.
Under $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1), when the conditions of Lemma 2 hold, with probability at least $1 - o(n^{-\alpha})$, we have
$$\max_{i\in[n]}\|e_i'(\hat{\Pi} - \Pi\mathcal{P})\|_1 = O\Bigg(\frac{1}{\sigma_K(\tilde{P})}\sqrt{\frac{\log(n)}{\rho n}}\Bigg).$$
Remark 3.
Consider a special case in Corollary 1 by setting $\sigma_K(\tilde{P})$ as a constant; we see that the error bound $O\big(\sqrt{\frac{\log(n)}{\rho n}}\big)$ in Corollary 1 is directly related to Assumption (A1), and for consistent estimation, ρ should shrink slower than $\frac{\log(n)}{n}$.
Remark 4.
Under the setting of Corollary 1, the requirement $\sigma_K(\Omega) \geq C\sqrt{\rho n\log(n)}$ in Lemma 2 holds naturally. By Lemma II.4 of [44], we know that $\sigma_K(\Omega) \geq \rho\sigma_K(\tilde{P})\lambda_K(\Pi'\Pi) = C\rho n\sigma_K(\tilde{P})$. To make the requirement $\sigma_K(\Omega) \geq C\sqrt{\rho n\log(n)}$ always hold, we just need $C\rho n\sigma_K(\tilde{P}) \geq C\sqrt{\rho n\log(n)}$, which gives $\sigma_K(\tilde{P}) \geq C\sqrt{\frac{\log(n)}{\rho n}}$, and this just matches the requirement for consistent estimation of memberships in Corollary 1.
Remark 5
(Comparison to Theorem 3.2 of [44]). When K = O(1) and $\lambda_K(\Pi'\Pi) = O(\frac{n}{K})$, by the first bullet in the analysis given after Lemma 2, the row-wise eigenspace error of Theorem 3.1 of [44] is $O\big(\frac{\log^{\xi}(n)}{\sigma_K(\tilde{P})\sqrt{\rho}\,n}\big)$, which gives that their error bound on membership estimation, given in their Equation (3), is $O\big(\frac{\log^{\xi}(n)}{\sigma_K(\tilde{P})\sqrt{\rho n}}\big)$; this is $\log^{\xi - 0.5}(n)$ times the bound in our Corollary 1.
Remark 6
(Comparison to Theorem 2.2 of [41]). Replacing the Θ in [41] by $\Theta = \sqrt{\rho}I$, their DCMM model degenerates to MMSB. Then their conditions in Theorem 2.2 become our Assumption (A1) and $\lambda_K(\Pi'\Pi) = O(\frac{n}{K})$ for MMSB. When K = O(1), the error bound in Theorem 2.2 of [41] is $O\big(\frac{1}{\sigma_K(\tilde{P})}\sqrt{\frac{\log(n)}{\rho n}}\big)$, which is consistent with ours.

4. Separation Condition and Sharp Threshold Criterion

After obtaining Corollary 1 under MMSB, now we are ready to give our criterion after introducing the separation condition of M M S B ( n , K , Π , p in , p out ) with K = O ( 1 ) and the sharp threshold of ER random graph G ( n , p ) in this section.
Separation condition. Let $P = \rho\tilde{P}$ be the probability matrix for $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ when K = O(1), so P has diagonal entries $p_{\mathrm{in}}$ (and non-diagonal entries $p_{\mathrm{out}}$) and $\sigma_K(P) = \rho\sigma_K(\tilde{P}) = |p_{\mathrm{in}} - p_{\mathrm{out}}|$. Recalling that $\max_{k, l\in[K]}\tilde{P}(k, l) = 1$ under $MMSB_n(K, \tilde{P}, \Pi, \rho)$, we have $\max_{k, l\in[K]}P(k, l) = \rho = \max(p_{\mathrm{in}}, p_{\mathrm{out}})$. So, we have the separation condition $\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}} = \sqrt{\rho}\,\sigma_K(\tilde{P})$ (also known as the relative edge probability gap in [44]) and the alternative separation condition $\frac{|\alpha_{\mathrm{in}} - \alpha_{\mathrm{out}}|}{\sqrt{\max(\alpha_{\mathrm{in}}, \alpha_{\mathrm{out}})}} = \sqrt{\frac{\rho n}{\log(n)}}\,\sigma_K(\tilde{P})$. Now, we are ready to compare the thresholds of the (alternative) separation condition obtained from different theoretical results.
  • (a) By Corollary 1, we know that $\sigma_K(\tilde{P})$ should shrink slower than $\sqrt{\frac{\log(n)}{\rho n}}$ for consistent estimation. Therefore, the separation condition $\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}} = \sqrt{\rho}\,\sigma_K(\tilde{P})$ should shrink slower than $\sqrt{\frac{\log(n)}{n}}$ (i.e., Equation (1)), and this threshold is consistent with Corollary 1 of [59] and Equation (17) of [49]. The alternative separation condition $\frac{|\alpha_{\mathrm{in}} - \alpha_{\mathrm{out}}|}{\sqrt{\max(\alpha_{\mathrm{in}}, \alpha_{\mathrm{out}})}} = \sqrt{\frac{\rho n}{\log(n)}}\,\sigma_K(\tilde{P})$ should shrink slower than 1 (i.e., Equation (2)).
  • (b) Undoubtedly, the (alternative) separation condition in (a) is consistent with that of [41], since Theorem 2.2 of [41] shares the same error rate $O\big(\frac{1}{\sigma_K(\tilde{P})}\sqrt{\frac{\log(n)}{\rho n}}\big)$ for $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1).
  • (c) By Remark 5, using $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$, we know that Equation (3) of [44] is $O\big(\frac{\log^{\xi}(n)}{\sigma_K(\tilde{P})\sqrt{\rho n}}\big)$, so $\sqrt{\rho}\,\sigma_K(\tilde{P})$ should shrink slower than $\frac{\log^{\xi}(n)}{\sqrt{n}}$. Thus, for [44], the separation condition is $\frac{\log^{\xi}(n)}{\sqrt{n}}$ and the alternative separation condition is $\log^{\xi - 0.5}(n)$, which are sub-optimal compared with ours in (a). Using $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$, Equation (3) of [44] becomes $O\big(\frac{\log^{\xi + 0.5}(n)}{\sigma_K(\tilde{P})\sqrt{\rho n}}\big)$, so for [44] the separation condition is then $\frac{\log^{\xi + 0.5}(n)}{\sqrt{n}}$ and the alternative separation condition is $\log^{\xi}(n)$.
  • (d) For comparison, the error bound of Corollary 3.2 of [30], built under SBM for community detection, is $O\big(\frac{1}{\sigma_K^2(\tilde{P})\rho n}\big)$ for $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1), so $\sqrt{\rho}\,\sigma_K(\tilde{P})$ should shrink slower than $\frac{1}{\sqrt{n}}$. Thus, the separation condition for [30] is $\frac{1}{\sqrt{n}}$. However, as we analyzed in the first bullet given after Lemma 2, [30] applied $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$ to build their consistency results. If we instead apply $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$ to build the theoretical results of [30], the error bound of Corollary 3.2 of [30] becomes $O\big(\frac{\log(n)}{\sigma_K^2(\tilde{P})\rho n}\big)$, which now returns the same separation condition as our Corollary 1 and Theorem 2.2 of [41]. Following an analysis similar to (a)–(c), we can obtain the alternative separation condition for [30] immediately, and the results are provided in Table 2. Meanwhile, as analyzed in the first bullet given after Lemma 2, whether we use $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$ or $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$ does not change our error rates. By carefully analyzing the proof of Theorem 2.1 of [41], we see that whether $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$ or $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$ is used also does not change their row-wise large deviation bound; hence, it does not influence the upper bound of the error rate for their Mixed-SCORE.
Sharp threshold. Consider the Erdös–Rényi (ER) random graph G(n, p) [61]. To construct the ER random graph G(n, p), let K = 1 and let Π be an $n\times 1$ vector with all entries being ones. Since K = 1 and the maximum entry of $\tilde{P}$ is assumed to be 1, we have $\tilde{P} = 1$ in G(n, p) and hence $\sigma_K(\tilde{P}) = 1$. Then we have $\Omega = \rho\Pi\tilde{P}\Pi' = \rho\Pi\Pi' = p\Pi\Pi'$, i.e., $p = \rho$. Since the error rate is $O\big(\frac{1}{\sigma_K(\tilde{P})}\sqrt{\frac{\log(n)}{\rho n}}\big) = O\big(\sqrt{\frac{\log(n)}{pn}}\big)$, for consistent estimation we see that p should shrink slower than $\frac{\log(n)}{n}$ (i.e., Equation (3)), which is exactly the sharp threshold in [61], Theorem 4.6 of [62], the strong consistency threshold of [72], and the first bullet in Section 2.5 of [53] (we call the lower bound requirement on p for the ER random graph to enjoy consistent estimation the sharp threshold). Since the sharp threshold is obtained when K = 1, which corresponds to a connected ER random graph G(n, p), this is also consistent with the connectivity regime in Table 2 of [21]. Meanwhile, since our Assumption (A1) requires $\rho n \geq \log(n)$, it gives that p should shrink slower than $\frac{\log(n)}{n}$, since $p = \rho$ under G(n, p), which is consistent with the sharp threshold. Since Theorem 2.2 of [41] enjoys the same error rate as ours under the settings in Corollary 1, [41] also reaches the sharp threshold $\frac{\log(n)}{n}$. Furthermore, Remark 5 says that the bound for the error rate in Equation (3) of [44] should be $O\big(\frac{\log^{\xi}(n)}{\sigma_K(\tilde{P})\sqrt{\rho n}}\big)$ when using $\|A_{\mathrm{re}} - \Omega\| \leq C\sqrt{\rho n}$; following a similar analysis, we see that the sharp threshold for [44] is $\frac{\log^{2\xi}(n)}{n}$, which is sub-optimal compared with ours. When using $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$, the sharp threshold for [44] is $\frac{\log^{2\xi + 1}(n)}{n}$. Similarly, the error bound of Corollary 3.2 of [30] is $O\big(\frac{1}{\sigma_K^2(\tilde{P})\rho n}\big) = O\big(\frac{1}{pn}\big)$ under the ER graph G(n, p), since $p = \rho$, $\sigma_K(\tilde{P}) = 1$ and K = 1. Hence, the sharp threshold obtained from the theoretical upper bound on the error rate of [30] is $\frac{1}{n}$, which does not match the classical result. If we instead apply $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$ with high probability to build the theoretical results of [30], the error bound of Corollary 3.2 of [30] becomes $O\big(\frac{\log(n)}{pn}\big)$, which now returns the classical sharp threshold $\frac{\log(n)}{n}$.
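As a small numerical sanity check of the quantity driving the analyses above, the following snippet verifies that, for a probability matrix P with diagonal entries $p_{\mathrm{in}}$ and non-diagonal entries $p_{\mathrm{out}}$, the smallest singular value equals $|p_{\mathrm{in}} - p_{\mathrm{out}}|$ (the toy values of K, $p_{\mathrm{in}}$ and $p_{\mathrm{out}}$ are assumptions).

```python
# Numerical check that sigma_K(P) = |p_in - p_out| for this structured P.
import numpy as np

K, p_in, p_out = 3, 0.10, 0.04
P = p_out * np.ones((K, K)) + (p_in - p_out) * np.eye(K)
sv = np.linalg.svd(P, compute_uv=False)
print(sv[-1], abs(p_in - p_out))   # smallest singular value equals |p_in - p_out|
```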
Table 1 summarizes the comparisons of the separation conditions and sharp thresholds. Table 2 records the respective alternative separation conditions. The delicate analysis given above supports our statement that the separation condition of a standard network (i.e., $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1) or $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1)) and the sharp threshold of the ER random graph G(n, p) can be seen as unified criteria to compare the theoretical results of spectral methods under different models. To conclude the above analysis, we now summarize the main steps for applying the separation condition and sharp threshold criterion (SCSTC for short) to check the consistency of theoretical results or to compare the results of spectral methods under different models, where spectral methods are methods developed based on the eigenvectors or singular vectors of the adjacency matrix or its variants for community detection. The four-step SCSTC is given below (a compact worked instance of the four steps, applied to Corollary 1, is given right after the list):
step 1
Check whether the theoretical upper bound of the error rate contains $\sigma_K(\tilde{P})$ (note that $P = \rho\tilde{P}$ is the probability matrix and the maximum entry of $\tilde{P}$ should be set as 1), where the separation parameter $\sigma_K(\tilde{P})$ always appears when considering the lower bound of $\sigma_K(\Omega)$. If it contains $\sigma_K(\tilde{P})$, move to the next step. Otherwise, this suggests possible improvements of the consistency result by considering $\sigma_K(\tilde{P})$ in the proofs.
step 2
Let K = O(1) and let the network degenerate to the standard network, whose community sizes are of the same order and can be seen as $O(\frac{n}{K})$ (i.e., an $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1) in the case of a non-overlapping network, or an $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1) in the case of an overlapping network; we mainly focus on $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1) for convenience). Let the model degenerate to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1), and then we obtain the new theoretical upper bound of the error rate. Note that if the model does not consider degree heterogeneity, the sparsity parameter ρ should already appear in the theoretical upper bound of the error rate in step 1; if the model does consider degree heterogeneity, ρ appears at this step when it degenerates to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ with K = O(1). Meanwhile, if ρ is not contained in the error rate of step 1 when the model does not consider degree heterogeneity, this suggests possible improvements by considering ρ.
step 3
Let $P = \rho\tilde{P}$ be the probability matrix when the model degenerates to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$, such that P has diagonal entries $p_{\mathrm{in}}$ and non-diagonal entries $p_{\mathrm{out}}$. Then $\sigma_K(P) = |p_{\mathrm{in}} - p_{\mathrm{out}}| = \rho\sigma_K(\tilde{P})$, and the separation condition is $\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}} = \sqrt{\rho}\,\sigma_K(\tilde{P})$, since the maximum entry of $\tilde{P}$ is assumed to be 1. Compute the lower bound requirement on $\sigma_K(\tilde{P})$ for consistent estimation by analyzing the new bound obtained in step 2, and then compute the separation condition $\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}} = \sqrt{\rho}\,\sigma_K(\tilde{P})$ using this lower bound requirement on $\sigma_K(\tilde{P})$. The sharp threshold for the ER random graph G(n, p) is obtained from the lower bound requirement on ρ for consistent estimation under the setting K = 1, $\sigma_K(\tilde{P}) = 1$ and $p = \rho$.
step 4
Compare the separation condition and the sharp threshold obtained in step 3 with Equations (1) and (3), respectively. If the obtained sharp threshold shrinks slower than $\frac{\log(n)}{n}$ or the obtained separation condition shrinks slower than $\sqrt{\frac{\log(n)}{n}}$, then this leaves room for improvement in the requirement on network sparsity or in the theoretical upper bound of the error rate. If the sharp threshold is $\frac{\log(n)}{n}$ and the separation condition is $\sqrt{\frac{\log(n)}{n}}$, the optimality of the theoretical results, in terms of both the error rates and the requirement on network sparsity, is guaranteed. Finally, if the sharp threshold shrinks faster than $\frac{\log(n)}{n}$ or the separation condition shrinks faster than $\sqrt{\frac{\log(n)}{n}}$, this suggests that the theoretical result was obtained based on $\|A_{\mathrm{re}} - \Omega\|$ instead of $\|A - \Omega\|$.
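As a compact illustration, the following is only a restatement, under the assumptions of Corollary 1, of item (a) and the sharp threshold analysis above in the language of the four steps:
$$
\begin{aligned}
&\text{step 1: the bound } O\Big(\tfrac{1}{\sigma_K(\tilde{P})}\sqrt{\tfrac{\log(n)}{\rho n}}\Big) \text{ contains } \sigma_K(\tilde{P});\\
&\text{step 2: under } MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}}) \text{ with } K = O(1), \text{ the bound keeps this form};\\
&\text{step 3: consistency needs } \sigma_K(\tilde{P}) \gg \sqrt{\tfrac{\log(n)}{\rho n}}, \text{ hence } \tfrac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}} = \sqrt{\rho}\,\sigma_K(\tilde{P}) \gg \sqrt{\tfrac{\log(n)}{n}},\\
&\qquad\ \ \text{and, with } K = 1,\ \sigma_K(\tilde{P}) = 1,\ p = \rho:\ p \gg \tfrac{\log(n)}{n};\\
&\text{step 4: both match Equations (1) and (3), so SCSTC suggests no further improvement here.}
\end{aligned}
$$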
Remark 7.
This remark provides some explanations on the four steps of SCSTC.
  • In step 1, we give a few examples. When applying SCSTC to the main results of [40,48,67], we stop at step 1, as analyzed in Remark 8, suggesting possible improvements by considering $\sigma_K(\tilde{P})$ for these works. Meanwhile, for theoretical results that do not consider $\sigma_K(\tilde{P})$, we can also move to step 2 to obtain the new theoretical upper bound of the error rate, which is related to ρ and n. The discussions on the theoretical upper bounds of the error rates of [50,68] given in Remark 8 are examples of this case.
  • In step 2, letting K = O(1) and letting the model reduce to $SBM(n, K, p_{\mathrm{in}}, p_{\mathrm{out}})$ for a non-overlapping network, or to $MMSB(n, K, \Pi, p_{\mathrm{in}}, p_{\mathrm{out}})$ for an overlapping network, can always simplify the theoretical upper bound of the error rate, as shown by our Corollaries 1 and 2. Here, we provide some examples of how to make a model degenerate to SBM. For $MMSB_n(K, \tilde{P}, \Pi, \rho)$ in this paper, when all nodes are pure, MMSB degenerates to SBM; for the $DCMM_n(K, \tilde{P}, \Pi, \Theta)$ model introduced in Section 5 and for the DCSBM considered in [30,48,50], setting $\Theta = \sqrt{\rho}I$ makes DCMM and DCSBM degenerate to SBM when all nodes are pure. Similar degenerations hold for the ScBM and DCScBM models considered in [67,68,71], the OCCAM model of [40], the stochastic blockmodel with overlap proposed in [46], the extensions of SBM and DCSBM for hypergraph networks considered in [73,74,75], and so forth.
  • In s t e p 3 and s t e p 4 , the separation condition can be replaced by an alternative separation condition.
  • When using SCSTC to build and compare theoretical results for spectral clustering methods, the key point is computing, from the theoretical upper bound of the error rate of a given spectral method, the lower bound on $\frac{|p_{\mathrm{in}} - p_{\mathrm{out}}|}{\sqrt{\max(p_{\mathrm{in}}, p_{\mathrm{out}})}}$ when the probability matrix P has diagonal entries $p_{\mathrm{in}}$ and non-diagonal entries $p_{\mathrm{out}}$. If this lower bound is consistent with that of Equation (1), this suggests theoretical optimality; otherwise, it suggests possible improvements by following the four steps of SCSTC.
The above analysis shows that SCSTC can be used to study the consistent estimation of model-based spectral methods. Using SCSTC, the following remark lists a few works whose main theoretical results leave possible improvements.
Remark 8.
The unknown separation conditions, sub-optimal error rates, or missing requirements on network sparsity of some previous works suggest possible improvements of their theoretical results. Here, we list a few works whose main results can possibly be improved by considering the separation condition.
  • Theorem 4.4 of [48] provides the upper bound of the error rate for their regularized spectral clustering (RSC) algorithm, designed based on a regularized Laplacian matrix under DCSBM. However, since [48] does not study the lower bound of (in the language of [48]) $\lambda_K$ and m, we cannot directly obtain the separation condition from their main theorem. Meanwhile, the main result of [48] does not consider the requirement on network sparsity, which leaves room for improvement. Ref. [48] also does not study the theoretically optimal choice of the RSC regularizer τ. After considering $\sigma_K(\tilde{P})$ and the sparsity parameter ρ, one can obtain the theoretically optimal choice of τ, and this is helpful for explaining and choosing the empirically optimal choice of τ. Therefore, one practical benefit of SCSTC is obtaining theoretically optimal choices of some tuning parameters, such as the regularizer τ of the RSC algorithm. Using SCSTC, we can show that RSC achieves the thresholds in Equations (1)–(3); we omit the proofs in this paper.
  • Refs. [26,49] study two algorithms designed based on the Laplacian matrix and its regularized version under SBM. They obtain meaningful results but do not consider the network sparsity parameter ρ and the separation parameter $\sigma_K(\tilde{P})$. After obtaining improved error bounds consistent with the separation condition $\sqrt{\frac{\log(n)}{n}}$ using SCSTC, one can also obtain the theoretically optimal choice of the regularizer τ of the RSC-τ algorithm considered in [49], and find that the two algorithms considered in [26,49] achieve the thresholds in Equations (1)–(3).
  • Theorem 2.2 of [50] provides an upper bound for their SCORE algorithm under DCSBM. However, since they do not consider the influence of $\sigma_K(\tilde{P})$, we cannot directly obtain the separation condition from their main result. Meanwhile, by setting their $\Theta = \sqrt{\rho}I$, DCSBM degenerates to SBM, which gives that their $\mathrm{err}_n = \frac{1}{\rho^2 n}\big(1 + \frac{\log(n)}{\rho n}\big) = O\big(\frac{1}{\rho^2 n}\big)$ by their assumption in Equation (2.9). Hence, when $\Theta = \sqrt{\rho}I$, the upper bound of Theorem 2.2 in [50] is $O\big(\frac{\log^3(n)}{\rho^2 n}\big)$. The upper bound of the error rate in Corollary 3.2 of [30] is $O\big(\frac{\log(n)}{\rho n}\big)$ when using $\|A - \Omega\| \leq C\sqrt{\rho n\log(n)}$ under the setting that $\kappa(\Pi) = O(1)$, K = O(1) and $\sigma_K(\tilde{P}) = O(1)$. We see that $\frac{\log^3(n)}{\rho^2 n}$ decays more slowly than $\frac{\log(n)}{\rho n}$, which suggests that there is space to improve the main result of [50] in terms of the separation condition and error rates. Furthermore, using SCSTC, we can find that SCORE achieves the thresholds in Equations (1)–(3), because its extension Mixed-SCORE [41] achieves the thresholds in Equations (1)–(3).
  • Ref. [67] proposes two models, ScBM and DCScBM, to model directed networks, and an algorithm, DI-SIM, based on the directed regularized Laplacian matrix, to fit DCScBM. However, similar to [48], their main theoretical result in their Theorem C.1 does not consider the lower bounds of (in the language of [67]) $\sigma_K$, $m_y$, $m_z$ and $\gamma_z$, so we cannot obtain the separation condition when DCScBM degenerates to SBM. Meanwhile, their Theorem C.1 also lacks a lower bound requirement on network sparsity. Hence, there is space to improve the theoretical guarantees of [67]. Similar to [48,49], we can also obtain the theoretically optimal choice of the regularizer τ of the DI-SIM algorithm and prove that DI-SIM achieves the thresholds in Equations (1)–(3), since it is the directed version of RSC [48].
  • Ref. [68] mainly studies the theoretical guarantee of the D-SCORE algorithm proposed by [14] to fit a special case of the DCScBM model for directed networks. By setting their $\theta(i) = \sqrt{\rho}$ and $\delta(j) = \sqrt{\rho}$ for $i, j \in [n]$, their directed-DCBM degenerates to SBM. Meanwhile, since their $\mathrm{err}_n = \frac{1}{\rho}$, their mis-clustering rate is $O\big(\frac{T_n^2\log(n)}{\rho n}\big)$, which matches that of [30] under SBM when $T_n$ is set as a constant. However, if $T_n$ is set as $\log(n)$, then the error rate is $O\big(\frac{\log^3(n)}{\rho n}\big)$, which is sub-optimal compared with that of [30]. Meanwhile, similar to [50], the main result does not consider the influences of K and $\sigma_K(\tilde{P})$, causing a lack of a separation condition. Hence, the main results of [68] can be improved by considering K, $\sigma_K(\tilde{P})$, or a more optimal choice of $T_n$, to make their main results comparable with those of [30] when directed-DCBM degenerates to SBM. Using SCSTC, we can find that D-SCORE also achieves the thresholds in Equations (1)–(3), since it is the directed version of SCORE [50].

5. Degree Corrected Mixed Membership Model

Applying SCSTC to Theorem 3.2 of [43] shows, as summarized in Table 1 and Table 2, that Theorem 3.2 of [43] is sub-optimal. To obtain improved theoretical results, we first give a formal introduction of the degree-corrected mixed membership (DCMM) model proposed in [41], then review the SVM-cone-DCMMSB algorithm of [43] and provide the improved theoretical results. A DCMM model for generating A is as follows:
$$\Omega := \Theta\Pi\tilde{P}\Pi'\Theta, \qquad A(i, j) \sim \mathrm{Bernoulli}(\Omega(i, j)), \quad i, j \in [n], \qquad (5)$$
where $\Theta \in \mathbb{R}^{n\times n}$ is a diagonal matrix whose i-th diagonal entry is the degree heterogeneity of node i for $i \in [n]$. Let $\theta \in \mathbb{R}^{n\times 1}$ with $\theta(i) = \Theta(i, i)$ for $i \in [n]$. Set $\theta_{\max} = \max_{i\in[n]}\theta(i)$, $\theta_{\min} = \min_{i\in[n]}\theta(i)$, $\tilde{P}_{\max} = \max_{k, l\in[K]}\tilde{P}(k, l)$ and $\tilde{P}_{\min} = \min_{k, l\in[K]}\tilde{P}(k, l)$.
Definition 3.
Call model (5) the degree corrected mixed membership (DCMM) model, and denote it by D C M M n ( K , P ˜ , Π , Θ ) .
Note that if we set $\tilde{\Pi} = \Theta\Pi$ and choose Θ such that $\tilde{\Pi} \in \{0, 1\}^{n\times K}$, then we have $\Omega = \tilde{\Pi}\tilde{P}\tilde{\Pi}'$, which means that the stochastic blockmodel with overlap (SBMO) proposed in [46] is just a special case of DCMM. Meanwhile, if we write Θ as $\Theta = \tilde{\Theta}D_o$, where $\tilde{\Theta}$ and $D_o$ are two positive diagonal matrices, and let $\Pi_o = D_o\Pi$, then we can choose $D_o$ such that $\|\Pi_o(i, :)\|_F = 1$. By $\Omega = \Theta\Pi\tilde{P}\Pi'\Theta = \tilde{\Theta}\Pi_o\tilde{P}\Pi_o'\tilde{\Theta}$, we see that the OCCAM model proposed in [40] equals the DCMM model. By Equation (1.3) and Proposition 1.1 of [41], the following conditions are sufficient for the identifiability of DCMM when $\theta_{\max}\tilde{P}_{\max} \leq 1$:
  • (II1) rank ( P ˜ ) = K and P ˜ has unit diagonals.
  • (II2) There is at least one pure node for each of the K communities.
Note that, though the diagonal entries of $\tilde{P}$ are ones, $\tilde{P}_{\max}$ may be larger than 1 as long as $\theta_{\max}\tilde{P}_{\max} \leq 1$ under DCMM, and this is slightly different from the setting $\max_{k, l\in[K]}\tilde{P}(k, l) = 1$ under MMSB.
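Analogously to the MMSB sketch in Section 2, the following snippet samples an adjacency matrix from model (5) with node-wise degree heterogeneity; the range chosen for θ and the connectivity matrix are illustrative assumptions that keep all entries of Ω in [0, 1].

```python
# Sketch of sampling A from model (5): Omega = Theta * Pi * P_tilde * Pi' * Theta.
import numpy as np

rng = np.random.default_rng(2)
n, K = 600, 3
P_tilde = 0.3 + 0.7 * np.eye(K)                 # unit diagonals (condition (II1))
Pi = rng.dirichlet(np.ones(K) * 0.2, size=n)
Pi[:K] = np.eye(K)                              # one pure node per community (II2)
theta = rng.uniform(0.1, 0.5, size=n)           # degree heterogeneity parameters
Omega = (theta[:, None] * theta[None, :]) * (Pi @ P_tilde @ Pi.T)
assert Omega.max() <= 1.0                       # all edge probabilities stay in [0, 1]
A = np.triu((rng.random((n, n)) < Omega).astype(int), 1)
A = A + A.T
```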
Without causing confusion, under $DCMM_n(K, \tilde{P}, \Pi, \Theta)$, we still let $\Omega = U\Lambda U'$ be the top-K eigen-decomposition of Ω such that $U \in \mathbb{R}^{n\times K}$, $\Lambda \in \mathbb{R}^{K\times K}$ and $U'U = I_K$. Define $U_* \in \mathbb{R}^{n\times K}$ by $U_*(i, :) = \frac{U(i, :)}{\|U(i, :)\|_F}$, and let $N_U \in \mathbb{R}^{n\times n}$ be a diagonal matrix such that $N_U(i, i) = \frac{1}{\|U(i, :)\|_F}$ for $i \in [n]$. Then $U_*$ can be rewritten as $U_* = N_UU$. The existence of the ideal cone (IC for short) structure inherent in $U_*$, mentioned in [43], is guaranteed by the following lemma.
Lemma 3.
Under $DCMM_n(K, \tilde{P}, \Pi, \Theta)$, $U_* = YU_*(I, :)$, where $Y = N_M\Pi\Theta^{-1}(I, I)N_U^{-1}(I, I)$ with $N_M$ being an $n\times n$ diagonal matrix whose diagonal entries are positive.
Lemma 3 gives $Y = U_*U_*^{-1}(I, :)$. Since $U_* = N_UU$ and $Y = N_M\Pi\Theta^{-1}(I, I)N_U^{-1}(I, I)$, we have
$$N_U^{-1}N_M\Pi = UU_*^{-1}(I, :)N_U(I, I)\Theta(I, I). \qquad (6)$$
Since $\Omega(I, I) = \Theta(I, I)\Pi(I, :)\tilde{P}\Pi'(I, :)\Theta(I, I) = \Theta(I, I)\tilde{P}\Theta(I, I) = U(I, :)\Lambda U'(I, :)$, we have $\Theta(I, I)\tilde{P}\Theta(I, I) = U(I, :)\Lambda U'(I, :)$. Then we have $\Theta(I, I) = \sqrt{\mathrm{diag}(U(I, :)\Lambda U'(I, :))}$ when Condition (II1) holds, such that $\tilde{P}$ has unit diagonals. Set $J_* = N_U(I, I)\Theta(I, I) = \sqrt{\mathrm{diag}(U_*(I, :)\Lambda U_*'(I, :))}$, $Z_* = N_U^{-1}N_M\Pi$, and $Y_* = UU_*^{-1}(I, :)$. By Equation (6), we have
Z * = Y * J * U U * 1 ( I , : ) diag ( U * ( I , : ) Λ U * ( I , : ) ) .
Meanwhile, since N U 1 N M is an n × n positive diagonal matrix, we have
Π ( i , : ) = Z * ( i , : ) Z * ( i , : ) 1 , i [ n ] .
With given Ω and K, we can obtain U , U * and Λ . The above analysis shows that once U * ( I , : ) is known, we can exactly recover Π by Equations (7) and (8). From Lemma 3, we know that U * = Y U * ( I , : ) forms the IC structure. Ref. [43] proposes the SVM-cone algorithm (i.e., Algorithm A2) which can exactly obtain U * ( I , : ) from the ideal cone U * = Y U * ( I , : ) with inputs U * and K.
Based on the above analysis, we are now ready to give the ideal SVM-cone-DCMMSB algorithm. Input Ω , K . Output: Π .
  • Let Ω = U Λ U be the top-K eigen-decomposition of Ω such that U R n × K , Λ R K × K , U U = I . Let U * = N U U , where N U is a n × n diagonal matrix whose i-th diagonal entry is 1 U ( i , : ) F for i [ n ] .
  • Run SVM-cone algorithm on U * assuming that there are K communities to obtain I .
  • Set J * = diag ( U * ( I , : ) Λ U * ( I , : ) ) , Y * = U U * 1 ( I , : ) , Z * = Y * J * .
  • Recover Π by setting Π ( i , : ) = Z * ( i , : ) Z * ( i , : ) 1 for i [ n ] .
With given U_* and K, since the SVM-cone algorithm returns U_*(I,:), the ideal SVM-cone-DCMMSB exactly returns Π (for details, see Appendix B); a minimal numerical sketch of these ideal steps is given below.
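To make the ideal steps concrete, the following is a minimal Python (NumPy) sketch, not the authors' implementation: it assumes a vertex-hunting routine svm_cone (in the spirit of Algorithm A2 in Appendix B) is available, and it reads J_* as the entrywise square root of diag(U_*(I,:) Λ U_*′(I,:)), which is what the identity J_* = N_U(I,I) Θ(I,I) requires when P̃ has unit diagonal; all names are illustrative.

import numpy as np

def ideal_svm_cone_dcmmsb(Omega, K, svm_cone):
    # Step 1: top-K eigen-decomposition of the population matrix Omega, ordered by |eigenvalue|.
    vals, vecs = np.linalg.eigh(Omega)
    order = np.argsort(-np.abs(vals))[:K]
    Lam, U = np.diag(vals[order]), vecs[:, order]
    U_star = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalized eigenvectors U_*
    # Step 2: vertex hunting returns the index set I (one pure row per community).
    I = svm_cone(U_star, K)
    # Step 3: J_* = N_U(I,I) Theta(I,I), read here as the square root of
    # diag(U_*(I,:) Lam U_*(I,:)') since P-tilde has unit diagonal (assumption noted above).
    J_star = np.diag(np.sqrt(np.diag(U_star[I] @ Lam @ U_star[I].T)))
    Y_star = U @ np.linalg.inv(U_star[I])
    Z_star = Y_star @ J_star
    # Step 4: recover the memberships by l1-normalizing the rows of Z_*.
    return Z_star / np.sum(np.abs(Z_star), axis=1, keepdims=True)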
Now, we review the SVM-cone-DCMMSB algorithm of [43], which can be seen as an extension of SPACL, designed under MMSB, to DCMM. For the real case, Ŷ_*, Ĵ_*, Ẑ_* and Π̂_* given in Algorithm 2 are used to estimate Y_*, J_*, Z_* and Π, respectively; a numerical sketch is given after the algorithm.
Algorithm 2 SVM-cone-DCMMSB [43]
  • Require: The adjacency matrix A R n × n and the number of communities K.
  • Ensure: The estimated n × K membership matrix Π ^ * .
  • 1: Obtain A ˜ = U ^ Λ ^ U ^ , the top K eigen-decomposition of A. Let U ^ * R n × K such that U ^ * ( i , : ) = U ^ ( i , : ) U ^ ( i , : ) F for i [ n ] .
  • 2: Apply SVM-cone algorithm (i.e., Algorithm A2) on the rows of U ^ * assuming there are K communities to obtain I ^ * , the index set returned by SVM-cone algorithm.
  • 3: Set J ^ * = diag ( U ^ * ( I ^ * , : ) Λ ^ U ^ * ( I ^ * , : ) ) , Y ^ * = U ^ U ^ * 1 ( I ^ * , : ) , Z ^ * = Y ^ * J ^ * . Then set Z ^ * = max ( 0 , Z ^ * ) .
  • 4: Estimate Π ( i , : ) by Π ^ * ( i , : ) = Z ^ * ( i , : ) / Z ^ * ( i , : ) 1 , i [ n ] .
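A matching sketch of the empirical Algorithm 2 under the same assumptions (NumPy, a vertex-hunting routine svm_cone as in Algorithm A2, and the square-root reading of Ĵ_* used above); the small numerical safeguards are additions of this sketch, not part of [43]:

import numpy as np

def svm_cone_dcmmsb(A, K, svm_cone):
    # Step 1: top-K eigen-decomposition of A and row-normalization of U-hat.
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(-np.abs(vals))[:K]
    Lam_hat, U_hat = np.diag(vals[order]), vecs[:, order]
    norms = np.maximum(np.linalg.norm(U_hat, axis=1, keepdims=True), 1e-12)  # safeguard
    U_hat_star = U_hat / norms
    # Step 2: vertex hunting on the rows of U-hat-star.
    I_hat = svm_cone(U_hat_star, K)
    # Step 3: J-hat, Y-hat, Z-hat, with negative entries of Z-hat set to zero.
    diag_vals = np.diag(U_hat_star[I_hat] @ Lam_hat @ U_hat_star[I_hat].T)
    J_hat = np.diag(np.sqrt(np.maximum(diag_vals, 0)))   # clipping is a safeguard for the sampled A
    Y_hat = U_hat @ np.linalg.inv(U_hat_star[I_hat])
    Z_hat = np.maximum(0, Y_hat @ J_hat)
    # Step 4: estimate the memberships by l1-normalizing the rows of Z-hat.
    return Z_hat / np.maximum(Z_hat.sum(axis=1, keepdims=True), 1e-12)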

Consistency under DCMM

Assume that
(A2)
P̃_max θ_max ‖θ‖_1 ≥ log(n).
Since we let P̃_max ≤ C, Assumption (A2) amounts to θ_max ‖θ‖_1 ≥ log(n)/C. The following lemma bounds ‖A − Ω‖ under D C M M n ( K , P ˜ , Π , Θ ) when Assumption (A2) holds.
Lemma 4.
Under D C M M n ( K , P ˜ , Π , Θ ) , when Assumption (A2) holds, with probability at least 1 o ( n α ) , we have
‖A − Ω‖ ≤ (α + 1 + √((α + 1)(α + 19)))/3 · √(P̃_max θ_max ‖θ‖_1 log(n)).
Remark 9.
Consider a special case when Θ = ρ I such that DCMM degenerates to MMSB, since P ˜ max is assumed to be 1 under MMSB, Assumption (A2) and the upper bound of A Ω in Lemma 4 are consistent with that of Lemma 1. When all nodes are pure, DCMM degenerates to DCSBM [45], then the upper bound of A Ω in Lemma 4 is also consistent with Lemma 2.2 of [50]. Meanwhile, this bound is also consistent with Equation (6.34) in the first version of [41], which also applies the Bernstein inequality to bound A Ω . However, the bound is C θ max θ 1 in Equation (C.53) of the latest version for [41], which applies Corollary 3.12 and Remark 3.13 of [76] to obtain the bound. Though the bound in Equation (C.53) of the latest version for [41] is sharper by a log ( n ) term, Corollary 3.12 of [76] has constraints on W ( i , j ) (here, W = A Ω ) such that W ( i , j ) can be written as W ( i , j ) = ξ i j b i j , where { ξ i , j : i j } are independent symmetric random variables with unit variance, and { b i , j : i j } are given scalars; see the proof of Corollary 3.12 [76] for detail. Therefore, without causing confusion, we also use A re to denote the constraint A used in [41] such that A re Ω C θ max θ 1 . Furthermore, if we set ρ max i , j Ω ( i , j ) such that ρ θ max 2 , the bound in Lemma 4 also equals A Ω C ρ n log ( n ) and the assumption (A2) reads P ˜ max ρ n log ( n ) . The bound A re Ω C θ max θ 1 in Equation (C.53) of [41] reads A re Ω | | C ρ n .
Lemma 5.
(Row-wise eigenspace error) Under D C M M n ( K , P ˜ , Π , Θ ) , when Assumption (A2) holds, suppose σ K ( Ω ) C θ max P ˜ max n log ( n ) , with probability at least 1 o ( n α ) .
  • When we apply Theorem 4.2 of [64], we have
    U ^ U ^ U U 2 = O ( θ max P ˜ max K ( θ max κ ( Ω ) θ min n K λ K ( Π Π ) + log ( n ) ) θ min 2 σ K ( P ˜ ) λ K ( Π Π ) ) .
  • When we apply Theorem 4.2 of [65], we have
    U ^ U ^ U U 2 = O ( θ max P ˜ max θ max θ 1 log ( n ) θ min 3 σ K ( P ˜ ) λ K 1.5 ( Π Π ) ) .
With a slight abuse of notation, we also use ϖ, ϖ_1 and ϖ_2 under DCMM as in Lemma 2, for notational convenience.
Remark 10.
When Θ = ρ I such that DCMM degenerates to MMSB, bounds in Lemma 5 are consistent with those of Lemma 2.
Remark 11
(Comparison to Theorem I.3 [43]). Note that the ρ in [43] is θ max 2 , which gives that the row-wise eigenspace concentration in Theorem I.3 [43] is O ( θ max K n U 2 log ξ ( n ) σ K ( Ω ) ) when using A re Ω C ρ n and this value is at least O ( θ max θ 1 K U 2 log ξ ( n ) σ K ( Ω ) ) . Since U 2 θ max θ min λ K ( Π Π ) by Lemma II.1 of [43] and σ K ( Ω ) θ min 2 σ K ( P ˜ ) λ K ( Π Π ) by the proof of Lemma 5, we see that the upper bound of Theorem I.3 [43] is O ( θ max K θ max θ 1 log ξ ( n ) θ min 3 σ K ( P ˜ ) λ K 1.5 ( Π Π ) ) , which is K log ξ 0.5 ( n ) (recall that ξ > 1 ) times than our ϖ 2 . Again, Theorem I.3 [43] has stronger requirements on the sparsity of θ max θ 1 and the lower bound of σ K ( Ω ) than our Lemma 5. When using the bound of A Ω in our Lemma 4 to obtain the row-wise eigenspace concentration in Theorem I.3 [43], their upper bound is K log ξ ( n ) times than our ϖ 2 . Similar to the first bullet given after Lemma 2, whether using A Ω C θ max θ 1 log ( n ) or A re Ω C θ max θ 1 does not change our ϖ under DCMM.
Remark 12
(Comparison to Lemma 2.1 [41]). The fourth bullet of Lemma 2.1 [41] is the row-wise deviation bound for the eigenvectors of the adjacency matrix under some assumptions translated to our κ ( Π Π ) = O ( 1 ) , Assumption (A2) and lower bound requirement on σ K ( Ω ) since they apply Lemma C.2 [41]. The row-wise deviation bound in the fourth bullet of Lemma 2.1 [41] reads O ( θ max K 1.5 θ max θ 1 log ( n ) σ K ( P ˜ ) θ F 3 ) , where the denominator is σ K ( P ˜ ) θ F 3 instead of our θ min 3 σ K ( P ˜ ) λ K 1.5 ( Π Π ) due to the fact that [41] uses σ K ( P ˜ ) θ F 2 K to roughly estimate σ K ( Ω ) while we apply θ min 2 σ K ( P ˜ ) λ K ( Π Π ) to strictly control the lower bound of σ K ( Ω ) . Therefore, we see that the row-wise deviation bound in the fourth bullet of Lemma 2.1 [41] is consistent with our bounds in Lemma 5 when κ ( Π Π ) = O ( 1 ) , while our row-wise eigenspace errors in Lemma 5 are more applicable than those of [41] since we do not need to add a constraint on Π Π such that κ ( Π Π ) = O ( 1 ) . The upper bound of A Ω of [41] is C θ max θ 1 given in their Equation (C.53) under D C M M n ( K , P ˜ , Π , Θ ) , while ours is C θ max θ 1 log ( n ) in Lemma 4, since our bound of the row-wise eigenspace error in Lemma 5 is consistent with the fourth bullet of Lemma 2.1 [41], this supports the statement that the row-wise eigenspace error does not rely on A Ω given in the first bullet after Lemma 2.
Let π_min = min_{1 ≤ k ≤ K} 1′ Π e_k, where π_min measures the smallest total membership (column sum of Π) over the K communities. A larger π_min means the network tends to be more balanced, and vice versa. Meanwhile, the term π_min appears when we derive a lower bound for η, defined in Lemma A1, to keep track of the model parameters in our main theorem under D C M M n ( K , P ˜ , Π , Θ ) . The next theorem gives theoretical bounds on the estimation of memberships under DCMM.
Theorem 2.
Under D C M M n ( K , P ˜ , Π , Θ ) , let Π ^ be obtained from Algorithm 2, suppose conditions in Lemma 5 hold, and there exists a permutation matrix P * R K × K such that with probability at least 1 o ( n α ) , we have
max i [ n ] e i ( Π ^ * Π P * ) 1 = O ( θ max 15 K 5 ϖ κ 4.5 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 15 π min ) .
For comparison, Table 4 summarizes the necessary conditions and the dependence on model parameters of the rates for Theorem 2 and Theorem 3.2 [43], where the dependence on K and log ( n ) is analyzed in Remark 13 below.
Remark 13.
(Comparison to Theorem 3.2 [43]) Our bound in Theorem 2 is written in terms of model parameters, and Π can follow any distribution as long as Condition (II2) holds; such a parameter-explicit form of the estimation bound is convenient for further theoretical analysis (see Corollary 2). In contrast, the bound in Theorem 3.2 [43] is built under the assumptions that Π follows a Dirichlet distribution and κ(Π′Θ²Π) = O(1). Meanwhile, since Theorem 3.2 [43] applies Theorem I.3 [43] to obtain the row-wise eigenspace error, the bound in Theorem 3.2 [43] should be multiplied by log^ξ(n) by Remark 11; this is also supported by the fact that, in the proof of Theorem 3.1 [43], the log^ξ(n) term is ignored when computing the bound of ϵ_0 (in the language of [43]).
Consider a special case by setting λ K ( Π Π ) = O ( n K ) , π min = O ( n K ) and θ max θ min = O ( 1 ) with θ max = ρ , where such case matches the setting κ ( Π Θ 2 Π ) = O ( 1 ) in Theorem 3.2 [43]. Now we focus on analyzing the powers of K in our Theorem 2 and Theorem 3.2 [43]. Under this case, the power of K in the estimation bound of Theorem 2 is 6 by basic algebra; since min ( K 2 , κ 2 ( Ω ) ) = min ( K 2 , O ( 1 ) ) = O ( 1 ) , 1 λ K 2 ( Π Θ 2 Π ) = O ( K 2 ρ 2 n 2 ) , 1 η = O ( K ) by Lemma A1 where η in Lemma A1 follows the same definition as that of Theorem 3.2 [43], and the bound in Theorem 3.2 [43] should multiply K because (in the language of Ref. [43]) ( Y ^ C Y ^ C ) 1 F should be no larger than K λ K ( Y ^ C Y ^ C ) instead of 1 λ K ( Y ^ C Y ^ C ) in the proof of Theorem 2.8 [43], the power of K is 6 by checking the bound of Theorem 3.2 [43]. Meanwhile, note that our bound in Theorem 2 is l 1 bound, while the bound in Theorem 3.2 [43] is l 2 bound, and when we translate the l 2 bound of Theorem 3.2 [43] into l 1 bound, the power of K is 6.5 for Theorem 3.2 [43], suggesting that our bound in Theorem 2 has less dependence on K than that of Theorem 3.2 [43].
The following corollary is obtained by adding some conditions on the model parameters.
Corollary 2.
Under D C M M n ( K , P ˜ , Π , Θ ) , when conditions of Lemma 5 hold, suppose λ K ( Π Π ) = O ( n K ) , π min = O ( n K ) and K = O ( 1 ) , with probability at least 1 o ( n α ) , we have
max i [ n ] e i ( Π ^ * Π P * ) 1 = O ( θ max 16 P ˜ max θ max θ 1 log ( n ) θ min 18 σ K ( P ˜ ) n ) .
Meanwhile, when θ max = O ( ρ ) , θ min = O ( ρ ) (i.e., θ min θ max = O ( 1 ) ), we have
max i [ n ] e i ( Π ^ * Π P * ) 1 = O ( 1 σ K ( P ˜ ) P ˜ max log ( n ) ρ n ) .
Remark 14.
When λ K ( Π Π ) = O ( n K ) , K = O ( 1 ) , θ max = O ( ρ ) and θ min = O ( ρ ) , the requirement σ K ( Ω ) C θ max P ˜ max n log ( n ) in Lemma 5 holds naturally. By the proof of Lemma 5, σ K ( Ω ) has a lower bound θ min 2 σ K ( P ˜ ) λ K ( Π Π ) = O ( θ min 2 σ K ( P ) n ) . To make the requirement σ K ( Ω ) C θ max P ˜ max n log ( n ) always hold, we just need θ min 2 σ K ( P ˜ ) n C θ max P ˜ max n log ( n ) , and it gives σ K ( P ˜ ) C P ˜ max log ( n ) ρ n , which matches the requirement of consistent estimation in Corollary 2.
Applying SCSTC to Corollary 2, let Θ = ρ I such that DCMM degenerates to MMSB; it is then easy to see that the bound in Lemma 2 is consistent with that of Lemma 1. Therefore, the separation condition, the alternative separation condition and the sharp threshold obtained from Corollary 2 for the extended version of SPACL under DCMM are consistent with the classical results, as shown in Table 1 and Table 2 (a detailed analysis is provided in the next paragraph). Meanwhile, when θ max = O ( ρ ) , θ min = O ( ρ ) and the settings of Corollary 2 hold, the bound in Theorem 2.2 [41] is of order 1 σ K ( P ˜ ) log ( n ) ρ n , which is consistent with our bound in Corollary 2.
Consider a mixed membership network under the settings of Corollary 2 when Θ = ρ I such that DCMM degenerates to MMSB. By Corollary 2, σ K ( P ˜ ) P ˜ max should shrink more slowly than log ( n ) ρ n . We further assume that P ˜ = ( 2 − β ) I K + ( β − 1 ) 1 1 for β ∈ [ 1 , 2 ) ∪ ( 2 , ∞ ) ; this P ˜ , with unit diagonals and β − 1 as off-diagonal entries, still satisfies Condition (II1). Meanwhile, σ K ( P ˜ ) = | β − 2 | = P ˜ max − P ˜ min and P ˜ max = max ( 1 , β − 1 ) , so σ K ( P ˜ ) P ˜ max = | β − 2 | max ( 1 , β − 1 ) should shrink more slowly than log ( n ) ρ n . Setting P = ρ P ˜ as the probability matrix for such a P ˜ , we have p out = ρ ( β − 1 ) , p in = ρ , and max ( p in , p out ) = ρ max ( 1 , β − 1 ) . Hence, the separation condition | p in − p out | max ( p in , p out ) ρ | β − 2 | max ( 1 , β − 1 ) should shrink more slowly than log ( n ) n , which satisfies Equation (1). For the alternative separation condition and sharp threshold, a similar analysis to that of MMSB yields the results in Table 1 and Table 2.

6. Numerical Results

In this section, we present the experimental results for an overlapping network by plotting the phase transition behaviors for both SPACL and SVM-cone-DCMMSB to show that the two methods achieve the threshold in Equation (2) under M M S B ( n , K , Π , p in , p out ) when K = 2 and K = 3 . We also use some experiments to show that the spectral methods studied in [26,30,48,50] achieve the threshold in Equation (2) under S B M ( n , K , p in , p out ) when K = 2 and K = 3 for the non-overlapping network. To measure the performance of different algorithms, we use the error rate defined below:
min_{P ∈ {K × K permutation matrices}} (1/n) ‖Π̂ − Π P‖_1.
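The following small helper evaluates this error rate by enumerating all K! column permutations (feasible for the small K used here); it assumes that ‖·‖_1 above denotes the entrywise sum of absolute values:

import numpy as np
from itertools import permutations

def error_rate(Pi_hat, Pi):
    # min over K x K permutation matrices P of (1/n) * sum_ij |Pi_hat - Pi P| (entrywise l1).
    n, K = Pi.shape
    best = np.inf
    for perm in permutations(range(K)):
        # Pi[:, perm] is Pi P for the permutation matrix P that reorders the columns by perm.
        best = min(best, np.abs(Pi_hat - Pi[:, list(perm)]).sum() / n)
    return best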
For all simulations, let p_in = α_in log(n)/n and p_out = α_out log(n)/n be the diagonal and non-diagonal entries of P, respectively. Since P is a probability matrix, α_in and α_out should lie in (0, n/log(n)]. After setting P and Π, each simulation experiment has the following steps (a code sketch is given after the list):
(a)
Set Ω = Π P Π .
(b)
Let W be an n × n symmetric matrix such that all diagonal entries of W are 0, and W ( i , j ) are independent centered Bernoulli with parameters Ω ( i , j ) . Let A = Ω diag ( Ω ) + W be the adjacency matrix of a simulated network with mixed memberships under MMSB (so there are no loops).
(c)
Apply spectral clustering method to A with K communities. Record the error rate.
(d)
Repeat (b)–(c) 50 times, and report the mean of the error rates over the 50 times.
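Steps (a)–(d) can be sketched as follows, where error_rate is the helper above and method stands for any of the membership-estimation routines considered in this section; sampling only the upper triangle and symmetrizing is equivalent to setting A = Ω − diag(Ω) + W in step (b):

import numpy as np

rng = np.random.default_rng(0)

def one_replicate(Pi, P, method, K):
    Omega = Pi @ P @ Pi.T                                   # (a) population matrix Omega = Pi P Pi'
    n = Omega.shape[0]
    upper = np.triu((rng.random((n, n)) < Omega).astype(float), 1)   # (b) A(i,j) ~ Bernoulli(Omega(i,j)), i < j
    A = upper + upper.T                                     #     symmetric adjacency matrix, no self-loops
    Pi_hat = method(A, K)                                   # (c) run the spectral method
    return error_rate(Pi_hat, Pi)                           #     and record the error rate

# (d) repeat (b)-(c) 50 times and report the mean error rate:
# mean_err = np.mean([one_replicate(Pi, P, method, K) for _ in range(50)])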
Experiment 1: Set n = 600, K = 2, and n_0 = 250, where n_0 is the number of pure nodes in each community. Let all mixed nodes have mixed membership (1/2, 1/2). Since α_in and α_out should be less than n/log(n) = 600/log(600) ≈ 93.795, we let α_in and α_out range over {5, 10, 15, …, 90}. For each pair (α_in, α_out), we generate P and then run steps (a)–(d). So, this experiment generates the adjacency matrix of a network with mixed memberships under M M S B ( n , 2 , Π , p in , p out ) . The numerical results are displayed in panels (a) and (b) of Figure 1. We can see that our theoretical bounds (red lines) are quite tight, and the threshold regions obtained from the boundaries of the light white areas in panels (a) and (b) are close to our theoretical bounds. Meanwhile, both methods perform better as the separation between α_in and α_out grows relative to max(α_in, α_out), and SVM-cone-DCMMSB outperforms SPACL in this experiment since panel (b) is darker than panel (a). Note that the network generated here is assortative when α_in > α_out and dis-assortative when α_in < α_out. So, the results of this experiment support our finding that SPACL and SVM-cone-DCMMSB achieve the threshold in Equation (2) for both assortative and dis-assortative networks.
Experiment 2: Set n = 600, K = 3, and n_0 = 150. Let all mixed nodes have mixed membership (1/3, 1/3, 1/3). Let α_in and α_out range over {5, 10, 15, …, 90}. Thus, this experiment is under M M S B ( n , 3 , Π , p in , p out ) . The numerical results are displayed in panels (c) and (d) of Figure 1. We see that both methods perform poorly in the region between the two red lines; otherwise, the analysis of the numerical results for this experiment is similar to that of Experiment 1.
For visualization, we plot two networks generated from M M S B ( n , K , Π , p in , p out ) when K = 2 and K = 3 in Figure 2. We also plot two dis-assortative networks generated from M M S B ( n , K , Π , p in , p out ) when K = 2 and K = 3 in Figure A1 in Appendix A. In Experiments 1 and 2, there exist some mixed nodes for the networks generated under MMSB. The following two experiments only focus on networks under SBM, in which all nodes are pure. Meanwhile, we only consider the four spectral algorithms studied in [26,30,48,50] for the non-overlapping network. For convenience, we call the spectral clustering method studied in [26] normalized principal component analysis (nPCA), and Algorithm 1 studied in [30] ordinary principal component analysis (oPCA), where nPCA and oPCA are also considered in [50]. Next, we briefly review nPCA, oPCA, RSC and SCORE.
The nPCA algorithm is as follows, with input A, K and output Π̂ (a short code sketch is given after the steps).
  • Obtain the graph Laplacian L = D 1 / 2 A D 1 / 2 , where D is a diagonal matrix with D ( i , i ) = j = 1 n A ( i , j ) for i [ n ] .
  • Obtain U ^ Λ ^ U ^ , the top K eigen-decomposition of L.
  • Apply k-means algorithm to U ^ to obtain Π ^ .
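A minimal sketch of nPCA (NumPy and scikit-learn's KMeans); converting the hard labels to a one-hot membership matrix and guarding against zero degrees are choices of this sketch:

import numpy as np
from sklearn.cluster import KMeans

def npca(A, K):
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))        # guard against isolated nodes
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]       # L = D^{-1/2} A D^{-1/2}
    vals, vecs = np.linalg.eigh(L)
    U_hat = vecs[:, np.argsort(-np.abs(vals))[:K]]          # top-K eigenvectors of L
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(U_hat)
    return np.eye(K)[labels]                                # one-hot membership matrix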
The oPCA algorithm is as follows, with input A, K and output Π̂ (a short code sketch is given after the steps).
  • Obtain U ^ Λ ^ U ^ , the top K eigen-decomposition of A.
  • Apply k-means algorithm to U ^ to obtain Π ^ .
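A corresponding sketch of oPCA under the same conventions:

import numpy as np
from sklearn.cluster import KMeans

def opca(A, K):
    vals, vecs = np.linalg.eigh(A)
    U_hat = vecs[:, np.argsort(-np.abs(vals))[:K]]          # top-K eigenvectors of A
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(U_hat)
    return np.eye(K)[labels]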
The RSC algorithm is as follows, with input A, K, a regularizer τ, and output Π̂ (a short code sketch is given after the steps).
  • Obtain the regularized graph Laplacian L τ = D τ 1 / 2 A D τ 1 / 2 , where D τ = D + τ I , and the default τ is the average node degree.
  • Obtain U ^ Λ ^ U ^ , the top K eigen-decomposition of L τ . Let U ^ * be the row-normalized version of U ^ .
  • Apply k-means algorithm to U ^ * to obtain Π ^ .
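A sketch of RSC with the same one-hot output convention; the default regularizer is the average node degree, as stated above:

import numpy as np
from sklearn.cluster import KMeans

def rsc(A, K, tau=None):
    d = A.sum(axis=1)
    tau = d.mean() if tau is None else tau                  # default regularizer: average degree
    d_inv_sqrt = 1.0 / np.sqrt(d + tau)                     # D_tau = D + tau I
    L_tau = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]   # L_tau = D_tau^{-1/2} A D_tau^{-1/2}
    vals, vecs = np.linalg.eigh(L_tau)
    U_hat = vecs[:, np.argsort(-np.abs(vals))[:K]]
    U_hat_star = U_hat / np.maximum(np.linalg.norm(U_hat, axis=1, keepdims=True), 1e-12)
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(U_hat_star)
    return np.eye(K)[labels]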
The SCORE algorithm is as follows, with input A, K, a threshold T_n, and output Π̂ (a short code sketch is given after the steps).
  • Obtain the K (unit-norm) leading eigenvectors of A: η ^ 1 , η ^ 2 , , η ^ K .
  • Obtain an n × (K − 1) matrix R̂_* such that for i ∈ [n], k ∈ [K − 1],
    R̂_*(i,k) = R̂(i,k) if |R̂(i,k)| ≤ T_n; R̂_*(i,k) = T_n if R̂(i,k) > T_n; R̂_*(i,k) = −T_n if R̂(i,k) < −T_n,
    where R̂(i,k) = η̂_{k+1}(i)/η̂_1(i), and the default T_n is log(n).
  • Apply k-means algorithm to R ^ * to obtain Π ^ .
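A sketch of SCORE; the truncation step is exactly the thresholding rule above, and it is also the only protection against division by a (near-)zero entry of the leading eigenvector in this sketch:

import numpy as np
from sklearn.cluster import KMeans

def score(A, K, T_n=None):
    n = A.shape[0]
    T_n = np.log(n) if T_n is None else T_n                 # default threshold
    vals, vecs = np.linalg.eigh(A)
    eta = vecs[:, np.argsort(-np.abs(vals))[:K]]            # K leading (unit-norm) eigenvectors
    ratios = eta[:, 1:] / eta[:, [0]]                       # R-hat(i,k) = eta_{k+1}(i) / eta_1(i)
    R_star = np.clip(ratios, -T_n, T_n)                     # truncate entries at +/- T_n
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(R_star)
    return np.eye(K)[labels]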
We now describe Experiments 3 and 4 under S B M ( n , K , p in , p out ) when K = 2 and K = 3 .
Experiment 3: Set n = 600 , K = 2 , and n 0 = 300 , i.e., all nodes are pure and each community has 300 nodes. So this experiment generates the adjacency matrix of the network under S B M ( n , 2 , p in , p out ) . Numerical results are displayed in panels (a)–(d) of Figure 3. We can see that these spectral clustering methods achieve the threshold in Equation (2).
Experiment 4: Set n = 600, K = 3, and n_0 = 200, i.e., all nodes are pure and each community has 200 nodes. So, this experiment is under S B M ( n , 3 , p in , p out ) . The numerical results are displayed in panels (e)–(h) of Figure 3. The results show that these methods achieve the threshold in Equation (2).
For visualization, we plot two assortative networks generated from S B M ( n , K , p in , p out ) when K = 2 and K = 3 in Figure 4. We also plot two dis-assortative networks generated from S B M ( n , K , p in , p out ) when K = 2 and K = 3 in Figure A2 in Appendix A.

7. Conclusions

In this paper, the four-step separation condition and sharp threshold criterion SCSTC is summarized as a unified framework to study consistency and compare the theoretical error rates of spectral methods under models that can degenerate to SBM in the community detection area. With an application of this criterion, we find some inconsistent phenomena in a few previous works. In particular, using SCSTC, we find that the original theoretical upper bounds on the error rates of the SPACL algorithm under MMSB and of its extended version under DCMM are sub-optimal in both the error rates and the requirements on network sparsity. To find how the inconsistent phenomena occur, we re-establish theoretical upper bounds on the error rates for both SPACL and its extended version by using recent techniques on row-wise eigenvector deviation. The resulting error bounds explicitly keep track of seven independent model parameters (K, ρ, σ_K(P̃), λ_K(Π′Π), λ_1(Π′Π), θ_min, θ_max), which allows a more delicate analysis. Compared with the original theoretical results, ours have smaller error rates with less dependence on K and log(n), and weaker requirements on the network sparsity and on the lower bound of the smallest nonzero singular value of the population adjacency matrix under both MMSB and DCMM. For DCMM, we impose no constraint on the distribution of the membership matrix as long as it satisfies the identifiability condition. When considering the separation condition of a standard network and the probability of generating a connected Erdös–Rényi (ER) random graph by using SCSTC, our theoretical results match the classical results. Meanwhile, our theoretical results also match those of Theorem 2.2 [41] under mild conditions, and when DCMM degenerates to MMSB, the theoretical results under DCMM are consistent with those under MMSB. Using the SCSTC criterion, we find that the reasons behind the inconsistent phenomena are the sub-optimality of the original theoretical upper bounds on error rates for SPACL and its extended version, and the use of a regularized version of the adjacency matrix when building theoretical results for spectral methods designed to detect node labels in non-mixed networks. The process of finding these inconsistent phenomena, the sub-optimal theoretical results on error rates, and the formation mechanism of these inconsistencies guarantees the usefulness of the SCSTC criterion. As shown by Remark 8, the theoretical results of some previous works can be improved by applying this criterion. Using SCSTC, we find that the spectral methods considered in [26,41,43,44,48,49,50,67,68] achieve the thresholds in Equations (1)–(3), a conclusion verified by both the theoretical analysis and the numerical results in this paper. A limitation of this criterion is that it is only used for studying the consistency of spectral methods for a standard network with a constant number of communities. It would be interesting to develop a more general criterion that can study the consistency of methods beyond spectral methods, and of models beyond those that can degenerate to SBM, for a non-standard network with large K. Finally, we hope that the SCSTC criterion developed in this paper can be widely applied to build and compare theoretical results for spectral methods in the community detection area, and that the thresholds in Equations (1)–(3) can serve as benchmark thresholds for spectral methods.

Funding

This research was funded by Scientific research start-up fund of China University of Mining and Technology NO. 102520253, the High level personal project of Jiangsu Province NO. JSSCBS20211218.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SCSTC: separation condition and sharp threshold criterion
SBM: stochastic blockmodel
DCSBM: degree corrected stochastic blockmodel
MMSB: mixed membership stochastic blockmodel
DCMM: degree corrected mixed membership model
SBMO: stochastic blockmodel with overlap
OCCAM: overlapping continuous community assignment model
RSC: regularized spectral clustering
SCORE: spectral clustering on ratios-of-eigenvectors
SPACL: sequential projection after cleaning
ER: Erdös–Rényi
IS: ideal simplex
IC: ideal cone
SP: successive projection algorithm
oPCA: ordinary principal component analysis
nPCA: normalized principal component analysis

Appendix A. Additional Experiments

Figure A1. Panel (a): a graph generated from MMSB with n = 600 and K = 2 . Each community has 250 pure nodes. For the 100 mixed nodes, they have mixed membership ( 1 / 2 , 1 / 2 ) . Panel (b): a graph generated from MMSB with n = 600 and K = 3 . Each community has 150 pure nodes. For the 150 mixed nodes, they have mixed membership ( 1 / 3 , 1 / 3 , 1 / 3 ) . Nodes in panels (a,b) connect with probability p in = 1 / 600 and p out = 60 / 600 , so the two networks in both panels are dis-assortative networks. For panel (a), error rates for SPACL and SVM-cone-DCMMSB are 0.0298 and 0.0180, respectively. For panel (b), error rates for SPACL and SVM-cone-DCMMSB are 0.1286 and 0.0896, respectively. For both panels, dots in the same color are pure nodes in the same community, and green square nodes are mixed.
Figure A2. Panel (a): a graph generated from S B M ( 600 , 2 , 2 / 600 , 30 / 600 ) . Panel (b): a graph generated from S B M ( 600 , 3 , 2 / 600 , 30 / 600 ) . Networks in panels (a,b) are dis-assortative networks since p in < p out . For panel (a), error rates for oPCA, nPCA, RSC and SCORE are 0. For panel (b), error rates for oPCA, nPCA, RSC and SCORE are 0.0067. Colors indicate clusters.

Appendix B. Vertex Hunting Algorithms

The SP algorithm is written below; a brief code sketch follows it.
Algorithm A1 Successive projection (SP) [51]
  • Require: Near-separable matrix Y s p = S s p M s p + Z s p R + m × n , where S s p , M s p should satisfy Assumption 1 [51], the number r of columns to be extracted.
  • Ensure: Set of indices K such that Y ( K , : ) S (up to permutation)
  • 1: Let R = Y s p , K = { } , k = 1 .
  • 2: While R 0 and k r  do
  • 3:         k * = argmax k R ( k , : ) F .
  • 4:        u k = R ( k * , : ) .
  • 5:        R ( I u k u k u k F 2 ) R .
  • 6:        K = K { k * } .
  • 7:       k=k+1.
  • 8: end while
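A brief sketch of SP applied to the rows of its input, which is how it is used on U in this paper; the orthogonal-projection update below is the standard successive-projection step, and the names are illustrative:

import numpy as np

def successive_projection(Y, r):
    # Return the indices of r rows of Y selected by successively taking the row of largest residual norm.
    R = np.array(Y, dtype=float, copy=True)
    selected = []
    for _ in range(r):
        k_star = int(np.argmax(np.linalg.norm(R, axis=1)))  # row with the largest norm
        u = R[k_star].copy()
        selected.append(k_star)
        # Project every row onto the orthogonal complement of the selected row:
        # R <- R (I - u u' / ||u||^2).
        R -= np.outer(R @ u, u) / max(float(u @ u), 1e-12)
    return selected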
Based on Algorithm A1, the following theorem is Theorem 1.1 in [51]; it is also Lemma VII.1 in [44]. This theorem bounds the difference between the corner matrix S_sp and its estimate obtained by running the SP algorithm on Y_sp when S_sp M_sp enjoys the ideal simplex structure.
Theorem A1.
Fix m r and n r . Consider a matrix Y s p = S s p M s p + Z s p , where S s p R m × r has a full column rank, M s p R r × n is a non-negative matrix such that the sum of each column is at most 1, and Z s p = [ Z s p , 1 , , Z s p , n ] R m × n . Suppose that M s p has a submatrix equal to I r . Write ϵ max 1 i n Z s p , i F . Suppose ϵ = O ( σ min ( S s p ) r κ 2 ( S s p ) ) , where σ min ( S s p ) and κ ( S s p ) are the minimum singular value and condition number of S s p , respectively. If we apply the SP algorithm to columns of Y s p , then it outputs an index set K { 1 , 2 , , n } such that | K | = r and max 1 k r min j K S s p ( : , k ) Y s p ( : , j ) F = O ( ϵ κ 2 ( S s p ) ) , where S s p ( : , k ) is the k-th column of S s p .
For the ideal SPACL algorithm, since inputs of the ideal SPACL are Ω and K, we see that the inputs of SP algorithm are U and K. Let m = K , r = K , Y s p = U , Z s p = U U 0 , S s p = U ( I , : ) , and M s p = Π . Then, we have max i [ n ] U ( i , : ) U ( i , : ) F = 0 . By Theorem A1, the SP algorithm returns I up to permutation when the input is U, assuming there are K communities. Since U = Π U ( I , : ) under M M S B n ( K , P , Π , ρ ) , we see that U ( i , : ) = U ( j , : ) as long as Π ( i , : ) = Π ( j , : ) . Therefore, though I may be different up to the permutation, U ( I , : ) is unchanged. Therefore, following the four steps of the ideal SPACL algorithm, we see that it exactly returns Π .
Algorithm A2 below is the SVM-cone algorithm provided in [43]; a rough code sketch follows it.
Algorithm A2 SVM-cone [43]
  • Require: S ^ R n × m with rows have unit l 2 norm, number of corners K, estimated distance corners from hyperplane γ .
  • Ensure: The near-corner index set I ^ .
  • 1: Run one-class SVM on S ^ ( i , : ) to get w ^ and b ^
  • 2: Apply the k-means algorithm to the set { S ^ ( i , : ) | S ^ ( i , : ) w ^ ≤ b ^ + γ } of rows close to the hyperplane, grouping them into K clusters
  • 3: Pick one point from each cluster to get the near-corner set I ^
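A rough sketch of Algorithm A2 using scikit-learn's OneClassSVM with a linear kernel; identifying (ŵ, b̂) with the fitted coef_ and intercept_, the choice nu = 0.1, and the fallback when fewer than K points are close to the hyperplane are assumptions of this sketch rather than specifications from [43]. This routine matches the svm_cone callable assumed in the sketches of Section 5.

import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import KMeans

def svm_cone(S_hat, K, gamma=0.0):
    # Step 1: one-class SVM with a linear kernel on the (unit l2 norm) rows of S_hat.
    clf = OneClassSVM(kernel="linear", nu=0.1).fit(S_hat)
    w = clf.coef_.ravel()
    b = -float(clf.intercept_[0])                 # assumed reading of the hyperplane offset b-hat
    # Step 2: cluster the rows close to the hyperplane {x : x.w = b} into K clusters.
    proj = S_hat @ w
    close = np.where(proj <= b + gamma)[0]
    if close.size < K:                            # simple fallback: take the K rows closest to the hyperplane
        close = np.argsort(proj)[:K]
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(S_hat[close])
    # Step 3: pick one near-corner index from each cluster.
    return [int(close[labels == k][0]) for k in range(K)]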
As suggested in [43], we can start γ = 0 and incrementally increase it until K distinct clusters are found. Meanwhile, for the ideal SVM-cone-DCMMSB algorithm, when setting U * and K as the inputs of the SVM-cone algorithms, since U * U * 2 = 0 , Lemma F.1. [43] guarantees that SVM-cone algorithm returns I up to permutation. Since U * = Y U * ( I , : ) by Lemma 3 under D C M M n ( K , P , Π , Θ ) , we have U * ( i , : ) = U * ( j , : ) when Π ( i , : ) = Π ( j , : ) by basic algebra, which gives that U * ( I , : ) is unchanged though I may be different up to permutation. Therefore, the ideal SVM-cone-DCMMSB exactly recovers Π .

Appendix C. Proof of Consistency under MMSB

Appendix C.1. Proof of Lemma 1

Proof. 
We apply Theorem 1.4 (Bernstein inequality) in [70] to bound A Ω , and this theorem is written as shown below.
Theorem A2.
Consider a finite sequence { X k } of independent, random, self-adjoint matrices with dimension d. Assume that each random matrix satisfies
E [ X k ] = 0 , and λ max ( X k ) R almost surely .
Then, for all t 0 ,
P ( λ max ( k X k ) t ) d · exp ( t 2 / 2 σ 2 + R t / 3 ) ,
where σ 2 : = k E [ X k 2 ] .
Let e i be an n × 1 vector, where e i ( i ) = 1 and 0 elsewhere, for i [ n ] . For convenience, set W = A Ω . Then we can write W as W = i = 1 n j = 1 n W ( i , j ) e i e j . Set W ( i , j ) as the n × n matrix such that W ( i , j ) = W ( i , j ) ( e i e j + e j e i ) , which gives W = 1 i < j n W ( i , j ) where E [ W ( i , j ) ] = 0 and
W ( i , j ) =   W ( i , j ) ( e i e j + e j e i )   = | W ( i , j ) | ( e i e j + e j e i )   = | W ( i , j ) |   = | A ( i , j ) Ω ( i , j ) | 1 .
For the variance parameter σ 2 : = 1 i < j n E [ ( W ( i , j ) ) 2 ] . We bound E ( W 2 ( i , j ) ) as shown below:
E ( W 2 ( i , j ) ) = E ( ( A ( i , j ) Ω ( i , j ) ) 2 ) = Var ( A ( i , j ) ) = Ω ( i , j ) ( 1 Ω ( i , j ) ) Ω ( i , j ) = ρ Π ( i , : ) P ˜ Π ( j , : ) ρ .
Next we bound σ 2 as shown below:
σ 2 = 1 i < j n E ( W 2 ( i , j ) ) ( e i e j + e j e i ) ( e i e j + e j e i ) = 1 i < j n E [ W 2 ( i , j ) ( e i e i + e j e j ) ] max 1 i n | j = 1 n E ( W 2 ( i , j ) ) | max 1 i n j = 1 n ρ = ρ n .
Set t = α + 1 + ( α + 1 ) ( α + 19 ) 3 ρ n log ( n ) for any α > 0 , combine Theorem A2 with σ 2 ρ n , R = 1 , d = n , and we have
P ( W t ) = P ( 1 i < j n W ( i , j ) t ) n · exp ( t 2 / 2 σ 2 + R t / 3 ) n · exp ( ( α + 1 ) log ( n ) 18 ( α + 1 + α + 19 ) 2 + 2 α + 1 α + 1 + α + 19 log ( n ) ρ n ) 1 n α ,
where we use Assumption (A1) such that 18 ( α + 1 + α + 19 ) 2 + 2 α + 1 α + 1 + α + 19 log ( n ) ρ n 18 ( α + 1 + α + 19 ) 2 + 2 α + 1 α + 1 + α + 19 = 1 . □

Appendix C.2. Proof of Lemma 2

Proof. 
Let H = U ^ U , and H = U H Σ H V H be the SVD decomposition of H U ^ with U H , V H R n × K , where U H and V H represent respectively the left and right singular matrices of H. Define sgn ( H ) = U H V H . Since E ( A ( i , j ) Ω ( i , j ) ) = 0 , E [ ( A ( i , j ) Ω ( i , j ) ) 2 ] ρ by the proof of Lemma 1, 1 ρ n / ( μ log ( n ) ) O ( 1 ) holds by Assumption (A1) where μ is the incoherence parameter defined as μ = n U 2 2 K . By Theorem 4.2 [64], with high probability, we have
U ^ sgn ( H ) U 2 C K ρ ( κ ( Ω ) μ + log ( n ) ) σ K ( Ω ) ,
provided that c 1 σ K ( Ω ) ρ n log ( n ) for some sufficiently small constant c 1 . By Lemma 3.1 of [44], we know that U 2 2 1 λ K ( Π Π ) , which gives
U ^ sgn ( H ) U 2 C K ρ ( κ ( Ω ) n K λ K ( Π Π ) + log ( n ) ) σ K ( Ω ) .
Remark A1.
By Theorem 4.2 of [65], when σ K ( Ω ) 4 A Ω , we have
U ^ sgn ( H ) U 2 14 A Ω σ K ( Ω ) U 2 .
By Lemma 3.1 [44], we have
U ^ sgn ( H ) U 2 14 A Ω σ K ( Ω ) 1 λ K ( Π Π ) .
Unlike Lemma V.1 [44] which bounds A Ω via the Chernoff bound and obtains A Ω C ρ n with high probability, we bound A Ω by the Bernstein inequality using a similar idea as Equation (C.67) of [41]. Let y = ( y 1 , y 2 , , y n ) be any n × 1 vector; by Equation (C.67) [41], we know that with an application of the Bernstein inequality, for any t 0 and i [ n ] , we have
P ( | j = 1 n ( A ( i , j ) Ω ( i , j ) ) y ( j ) | > t ) 2 exp ( t 2 / 2 j = 1 n Ω ( i , j ) y 2 ( j ) + t y 3 ) .
By the proof of Lemma 1, we have Ω ( i , j ) ρ . Set y ( j ) as 1 or 1 such that ( A ( i , j ) Ω ( i , j ) ) y ( j ) = | A ( i , j ) Ω ( i , j ) | , we have
P ( A Ω > t ) 2 exp ( t 2 / 2 ρ n + t 3 ) .
Set t = α + 1 + ( α + 1 ) ( α + 19 ) 3 ρ n log ( n ) for any α > 0 , by Assumption (A1), we have
P ( A Ω > t ) 2 exp ( t 2 / 2 ρ n + t 3 ) n α .
Hence, when σ K ( Ω ) C 0 ρ n log ( n ) where C 0 = 4 α + 1 + ( α + 1 ) ( α + 19 ) 3 , with probability at least 1 o ( n α ) ,
U ^ sgn ( H U ^ ) U 2 C ρ n log ( n ) σ K ( Ω ) 1 λ K ( Π Π ) .
Note that when λ K ( Π Π ) = O ( n K ) , the above bound turns to be C ρ K log ( n ) σ K ( Ω ) , which is consistent with that of Equation (A1). Also note that this bound ρ n log ( n ) σ K ( Ω ) 1 λ K ( Π Π ) is sharper than the ρ n σ K ( Ω ) 1 λ K ( Π Π ) of Lemma V.1 [44] by Assumption (A1).
Since U ^ and U have orthonormal columns, now we are ready to bound U ^ U ^ U U 2 :
U ^ U ^ U U 2 = max i [ n ] e i ( U U U ^ U ^ ) F = max i [ n ] e i ( U U U ^ sgn ( H ) U + U ^ sgn ( H ) U U ^ U ^ ) F max i [ n ] e i ( U U ^ sgn ( H ) ) U F + max i [ n ] e i U ^ ( sgn ( H ) U U ^ ) F = max i [ n ] e i ( U U ^ sgn ( H ) ) F + max i [ n ] U ^ ( sgn ( H ) U U ^ ) e i F = max i [ n ] e i ( U U ^ sgn ( H ) ) F + max i [ n ] ( sgn ( H ) U U ^ ) e i F = max i [ n ] e i ( U U ^ sgn ( H ) ) F + max i [ n ] e i ( U ( sgn ( H ) ) U ^ ) F = max i [ n ] e i ( U U ^ sgn ( H ) ) F + max i [ n ] e i ( U U ^ sgn ( H ) ) F = 2 max i [ n ] e i ( U U ^ sgn ( H ) ) F = 2 U U ^ sgn ( H ) 2 C K ( κ ( Ω ) n K λ K ( Π Π ) + log ( n ) ) σ K ( P ˜ ) ρ λ K ( Π Π ) ,
where the last inequality holds since σ K ( Ω ) ≥ σ K ( P ˜ ) ρ λ K ( Π Π ) under M M S B n ( K , P ˜ , Π , ρ ) by Lemma II.4 [44]. This bound is C n log ( n ) σ K ( P ˜ ) ρ λ K 1.5 ( Π Π ) if we use Theorem 4.2 of [65].
Remark A2.
By Theorem 4.5 [77], we have U ^ U ^ U U 2 n ( U ^ 2 + U 2 ) U U ^ sgn ( H ) 2 n ( U U ^ sgn ( H ) 2 + 2 U 2 ) U U ^ sgn ( H ) 2 n ( U U ^ sgn ( H ) 2 + 2 λ K ( Π Π ) ) U U ^ sgn ( H ) 2 = O ( 2 n λ K ( Π Π ) U U ^ sgn ( H ) 2 ) . Sure our bound U ^ U ^ U U 2 2 U U ^ sgn ( H ) 2 enjoys concise form. In particular, when λ K ( Π Π ) = O ( n K ) and K = O ( 1 ) , the two bounds give that U ^ U ^ U U 2 = O ( U U ^ sgn ( H ) 2 ) , which provides same error bound of the estimated memberships given in Corollary 1.

Appendix C.3. Proof of Theorem 1

Proof. 
Follow almost the same proof as Equation (3) of [44]. For i [ n ] , there exists a permutation matrix P R K × K such that
e i ( Z ^ Z P ) F = O ( ϖ κ ( Π Π ) K λ 1 ( Π Π ) ) .
Note that the bound in Equation (A2) is K times the bound in Equation (3) of [44], and this is because in Equation (3) of [44], (in the language of Ref. [44]) V ^ p 1 denotes the Frobenius norm of V ^ p 1 instead of the spectral norm. Since V ^ p 1 F K σ K ( V ^ p ) , the bound in Equation (3) [44] should multiply K .
Recall that Z = Π , Π ( i , : ) = Z ( i , : ) Z ( i , : ) 1 , Π ^ ( i , : ) = Z ^ ( j , : ) Z ^ ( j , : ) 1 , for i [ n ] , since
e i ( Π ^ Π P ) 1 = e i Z ^ e i Z ^ 1 e i Z P e i Z P 1 1 = e i Z ^ e i Z 1 e i Z P e i Z ^ 1 e i Z ^ 1 e i Z 1 1 e i Z ^ e i Z 1 e i Z ^ e i Z ^ 1 1 + e i Z ^ e i Z ^ 1 e i Z P e i Z ^ 1 1 e i Z ^ 1 e i Z 1 = | e i Z 1 e i Z ^ 1 | + e i Z ^ e i Z P 1 e i Z 1 2 e i ( Z ^ Z P ) 1 e i Z 1 = 2 e i ( Z ^ Z P ) 1 e i Π 1 = 2 e i ( Z ^ Z P ) 1 2 K e i ( Z ^ Z P ) F = O ( ϖ K κ ( Π Π ) λ 1 ( Π Π ) ) .

Appendix C.4. Proof of Corollary 1

Proof. 
Under the conditions of Corollary 1, we have
max i [ n ] e i ( Π ^ Π P ) 1 = O ( ϖ n ) .
Under the conditions of Corollary 1, Lemma 2 gives ϖ = O ( 1 σ K ( P ˜ ) 1 n log ( n ) ρ n ) , which gives that
max i [ n ] e i ( Π ^ Π P ) 1 = O ( 1 σ K ( P ˜ ) log ( n ) ρ n ) .

Appendix D. Proof of Consistency under DCMM

Appendix D.1. Proof of Lemma 3

Proof. 
Since Ω = U Λ U , we have U = Ω U Λ 1 since U U = I K . Recall that Ω = Θ Π P ˜ Π Θ , we have U = Θ Π P ˜ Π Θ U Λ 1 = Θ Π B , where we set B = P ˜ Π Θ U Λ 1 for convenience. Since U ( I , : ) = Θ ( I , I ) Π ( I , : ) B = Θ ( I , I ) B , we have B = Θ 1 ( I , I ) U ( I , : ) .
Set M = Π B . Then we have U = Θ M , which gives that U ( i , : ) = e i U = Θ ( i , i ) M ( i , : ) for i [ n ] . Therefore, U * ( i , : ) = U ( i , : ) U ( i , : ) F = M ( i , : ) M ( i , : ) F , and combined with the fact that B = Θ 1 ( I , I ) U ( I , : ) Θ 1 ( I , I ) N U 1 ( I , I ) N U ( I , I ) U ( I , : ) Θ 1 ( I , I ) N U 1 ( I , I ) U * ( I , : ) , we have
U * = Π ( 1 , : ) / M ( 1 , : ) F Π ( 2 , : ) / M ( 2 , : ) F Π ( n , : ) / M ( n , : ) F B = Π ( 1 , : ) / M ( 1 , : ) F Π ( 2 , : ) / M ( 2 , : ) F Π ( n , : ) / M ( n , : ) F Θ 1 ( I , I ) N U 1 ( I , I ) U * ( I , : ) .
Therefore, we have
Y = N M Π Θ 1 ( I , I ) N U 1 ( I , I ) ,
where N M is a diagonal matrix whose i-th diagonal entry is 1 M ( i , : ) F for i [ n ] . □

Appendix D.2. Proof of Lemma 4

Proof. 
Similar to the proof of Lemma 1, set W = A Ω and W ( i , j ) = W ( i , j ) ( e i e j + e j e i ) , we have W = 1 i < j n W ( i , j ) , E [ W ( i , j ) ] = 0 and W ( i , j ) 1 . Since
E ( W 2 ( i , j ) ) = E ( ( A ( i , j ) Ω ( i , j ) ) 2 ) = Var ( A ( i , j ) ) = Ω ( i , j ) ( 1 Ω ( i , j ) ) Ω ( i , j ) = θ ( i ) θ ( j ) Π ( i , : ) P ˜ Π ( j , : ) θ ( i ) θ ( j ) P ˜ max ,
we have
σ 2 = 1 i < j n E ( W 2 ( i , j ) ) ( e i e j + e j e i ) ( e i e j + e j e i ) = 1 i < j n E [ W 2 ( i , j ) ( e i e i + e j e j ) ] max 1 i n | j = 1 n E ( W 2 ( i , j ) ) | max 1 i n j = 1 n θ ( i ) θ ( j ) P ˜ max P ˜ max θ max θ 1 .
Set t = α + 1 + ( α + 1 ) ( α + 19 ) 3 P ˜ max θ max θ 1 log ( n ) for any α > 0 , combine Theorem A2 with σ 2 P ˜ max θ max θ 1 , R = 1 , d = n , we have
P ( W t ) = P ( 1 i < j n W ( i , j ) t ) n · exp ( t 2 / 2 σ 2 + R t / 3 ) n · exp ( ( α + 1 ) log ( n ) 18 ( α + 1 + α + 19 ) 2 + 2 α + 1 α + 1 + α + 19 log ( n ) P ˜ max θ max θ 1 ) 1 n α ,
where we use Assumption (A2) such that 18 ( α + 1 + α + 19 ) 2 + 2 α + 1 α + 1 + α + 19 log ( n ) P ˜ max θ max θ 1 18 ( α + 1 + α + 19 ) 2 + 2 α + 1 α + 1 + α + 19 = 1 . □

Appendix D.3. Proof of Lemma 5

Proof. 
The proof is similar to that of Lemma 2, so we omit most details. Since E ( A ( i , j ) Ω ( i , j ) ) = 0 , E [ ( A ( i , j ) Ω ( i , j ) ) 2 ] θ ( i ) θ ( j ) P ˜ max θ max 2 P ˜ max , 1 θ max P ˜ max n / ( μ log ( n ) ) O ( 1 ) holds by Assumption (A2) where μ = n U 2 2 K . By Theorem 4.2 [64], with high probability, we have
U ^ sgn ( H ) U 2 C θ max P ˜ max K ( κ ( Ω ) μ + log ( n ) ) σ K ( Ω ) ,
provided that c * σ K ( Ω ) θ max P ˜ max n log ( n ) for some sufficiently small constant c * . By Lemma H.1 of [43], we know that U 2 2 θ max 2 λ K ( Π Θ 2 Π ) θ max 2 θ min 2 λ K ( Π Π ) under D C M M n ( K , P ˜ , Π , Θ ) , which gives
U ^ sgn ( H ) U 2 C θ max P ˜ max K ( θ max κ ( Ω ) θ min n K λ K ( Π Π ) + log ( n ) ) σ K ( Ω ) ,
Remark A3.
Similar to the proof of Lemma 2, by Theorem 4.2 of [65], when σ K ( Ω ) 4 A Ω , we have
U ^ sgn ( H ) U 2 14 A Ω σ K ( Ω ) U 2 14 θ max A Ω θ min σ K ( Ω ) λ K ( Π Π ) .
Let y = ( y 1 , y 2 , , y n ) be any n × 1 vector, and by the Bernstein inequality, for any t 0 and i [ n ] , we have
P ( | j = 1 n ( A ( i , j ) Ω ( i , j ) ) y ( j ) | > t ) 2 exp ( t 2 / 2 j = 1 n Ω ( i , j ) y 2 ( j ) + t y 3 ) .
By the proof of Lemma 4, we have Ω ( i , j ) θ ( i ) θ ( j ) P ˜ max , which gives j = 1 n Ω ( i , j ) P ˜ max θ max θ 1 . Set y ( j ) as 1 or 1 such that ( A ( i , j ) Ω ( i , j ) ) y ( j ) = | A ( i , j ) Ω ( i , j ) | , we have
P ( A Ω > t ) 2 exp ( t 2 / 2 P ˜ max θ max θ 1 + t 3 ) .
Set t = α + 1 + ( α + 1 ) ( α + 19 ) 3 P ˜ max θ max θ 1 log ( n ) for any α > 0 , by Assumption (A2), we have
P ( A Ω > t ) 2 exp ( t 2 / 2 P ˜ max θ max θ 1 + t 3 ) n α .
Hence, when σ K ( Ω ) C 0 P ˜ max θ max θ 1 log ( n ) where C 0 = 4 α + 1 + ( α + 1 ) ( α + 19 ) 3 , with probability at least 1 o ( n α ) ,
U ^ sgn ( H ) U 2 C θ max P ˜ max θ max θ 1 log ( n ) θ min σ K ( Ω ) λ K ( Π Π ) .
Meanwhile, since P ˜ max θ max θ 1 log ( n ) θ max P ˜ max n log ( n ) , for convenience, we let the lower bound requirement of σ K ( Ω ) be C θ max P ˜ max n log ( n ) .
Similar to the proof of Lemma 2, we have
U ^ U ^ U U 2 = max i [ n ] e i ( U U U ^ U ^ ) F 2 U U ^ sgn ( H ) 2 C θ max P ˜ max K ( θ max κ ( Ω ) θ min n K λ K ( Π Π ) + log ( n ) ) σ K ( Ω ) C θ max P ˜ max K ( θ max κ ( Ω ) θ min n K λ K ( Π Π ) + log ( n ) ) θ min 2 σ K ( P ˜ ) λ K ( Π Π ) ,
where the last inequality holds since σ K ( Ω ) = σ K ( Θ Π P ˜ Π Θ ) θ min 2 σ K ( Π P Π ) = θ min 2 σ K ( Π Π P ˜ ) θ min 2 σ K ( P ˜ ) σ K ( Π Π ) = θ min 2 σ K ( P ˜ ) λ K ( Π Π ) . And this bound is C θ max P ˜ max θ max θ 1 log ( n ) θ min 3 σ K ( P ˜ ) λ K 1.5 ( Π Π ) if we use Theorem 4.2 of [65]. □

Appendix D.4. Proof of Theorem 2

Proof. 
To prove this theorem, we follow similar procedures as Theorem 3.2 of [43]. For i [ n ] , recall that Z * = Y * J * N U 1 N M Π , Z ^ * = Y ^ * J ^ * , Π ( i , : ) = Z ( i , : ) Z ( i , : ) 1 and Π ^ * ( i , : ) = Z ^ * ( i , : ) Z ^ * ( i , : ) 1 , where N M and M are defined in the proof of Lemma 3 such that U = Θ M Θ Π B * and N M ( i , i ) = 1 M ( i , : ) F , we have
e i ( Π ^ * Π P * ) 1 2 e i ( Z ^ * Z * P * ) 1 e i Z * 1 2 K e i ( Z ^ * Z * P * ) F e i Z * 1 .
Now, we provide a lower bound of e i Z * 1 as below
e i Z * 1 = e i N U 1 N M Π 1 = N U 1 ( i , i ) e i N M Π 1 = N U 1 ( i , i ) N M ( i , i ) e i Π 1 = N M ( i , i ) N U ( i , i ) = U ( i , : ) F N M ( i , i ) = U ( i , : ) F 1 M ( i , : ) F = U ( i , : ) F 1 e i M F = U ( i , : ) F 1 e i Θ 1 U F = U ( i , : ) F 1 Θ 1 ( i , i ) e i U F = θ ( i ) θ min .
Therefore, by Lemma A3, we have
e i ( Π ^ * Π P * ) 1 2 K e i ( Z ^ * Z * P * ) F e i Z * 1 2 K e i ( Z ^ * Z * P * ) F θ min = O ( θ max 15 K 5 ϖ κ 4.5 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 15 π min ) .

Appendix D.5. Proof of Corollary 2

Proof. 
Under conditions of Corollary 2, we have
max i [ n ] e i ( Π ^ * Π P * ) 1 = O ( θ max 15 K 5 ϖ κ 4.5 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 15 π min ) = O ( θ max 15 ϖ n θ min 15 ) .
Under the conditions of Corollary 2, Lemma 5 gives ϖ = O ( θ max P ˜ max θ max θ 1 log ( n ) θ min 3 σ K ( P ˜ ) n 1.5 ) , which gives that
max i [ n ] e i ( Π ^ * Π P * ) 1 = O ( θ max 15 ϖ n θ min 15 ) = O ( θ max 16 P ˜ max θ max θ 1 log ( n ) θ min 18 σ K ( P ˜ ) n ) .
By basic algebra, this corollary follows. □

Appendix D.6. Basic Properties of Ω under DCMM

Lemma A1.
Under D C M M n ( K , P ˜ , Π , Θ ) , we have
U ( i , : ) F θ min θ max K λ 1 ( Π Π ) for i [ n ] , and η θ min 4 π min θ max 4 K λ 1 ( Π Π ) ,
where η = min k [ K ] ( ( U * ( I , : ) U * ( I , : ) ) 1 1 ) ( k ) .
Proof. 
Since I = U U = U ( I , : ) Θ 1 ( I , I ) Π Θ 2 Π Θ 1 ( I , I ) U ( I , : ) by the proof of Lemma 3, we have ( ( Θ 1 ( I , I ) U ( I , : ) ) ( ( Θ 1 ( I , I ) U ( I , : ) ) ) 1 = Π Θ 2 Π , which gives that
min k e k ( Θ 1 ( I , I ) U ( I , : ) ) F 2 = min k e k ( Θ 1 ( I , I ) U ( I , : ) ) ( Θ 1 ( I , I ) U ( I , : ) ) e k min x = 1 x ( Θ 1 ( I , I ) U ( I , : ) ) ( Θ 1 ( I , I ) U ( I , : ) ) x = λ K ( ( Θ 1 ( I , I ) U ( I , : ) ) ( Θ 1 ( I , I ) U ( I , : ) ) ) = 1 λ 1 ( Π Θ 2 Π ) ,
where x is a K × 1 vector whose l 2 norm is 1. Then, for i [ n ] , we have
U ( i , : ) F = θ i Π ( i , : ) Θ 1 ( I , I ) U ( I , : ) F = θ i Π ( i , : ) Θ 1 ( I , I ) U ( I , : ) F θ i min i Π ( i , : ) F min i e i ( Θ 1 ( I , I ) U ( I , : ) ) F θ i min i e i ( Θ 1 ( I , I ) U ( I , : ) ) F / K θ i K λ 1 ( Π Θ 2 Π ) θ min θ max K λ 1 ( Π Π ) ,
where we use the fact that min i Π ( i , : ) F 1 K since k = 1 K Π ( i , k ) = 1 and all entries of Π are non-negative.
Since U * = N U U , we have
( U * ( I , : ) U * ( I , : ) ) 1 = N U 1 ( I , I ) Θ 1 ( I , I ) Π Θ 2 Π Θ 1 ( I , I ) N U 1 ( I , I ) θ min 2 θ max 2 N U , max 2 Π Π θ min 4 θ max 4 K λ 1 ( Π Π ) Π Π ,
where we set N U , max = max i [ n ] N U ( i , i ) and we use the fact that N U , Θ are diagonal matrices and N U , max θ max K λ 1 ( Π Π ) θ min . Then we have
η = min k [ K ] ( ( U * ( I , : ) U * ( I , : ) ) 1 1 ) ( k ) θ min 4 θ max 4 K λ 1 ( Π Π ) min k [ K ] e k Π Π 1 = θ min 4 θ max 4 K λ 1 ( Π Π ) min k [ K ] e k Π 1 = θ min 4 π min θ max 4 K λ 1 ( Π Π ) .

Appendix D.7. Bounds between Ideal SVM-cone-DCMMSB and SVM-cone-DCMMSB

The next lemma focuses on the 2nd step of SVM-cone-DCMMSB and is the cornerstone to characterize the behaviors of SVM-cone-DCMMSB.
Lemma A2.
Under D C M M n ( K , P ˜ , Π , Θ ) , when conditions of Lemma 5 hold, there exists a permutation matrix P * R K × K such that with probability at least 1 o ( n α ) , we have
max 1 k K e k ( U ^ * , 2 ( I ^ * , : ) P * U * , 2 ( I , : ) ) F = O ( K 3 θ max 11 ϖ κ 3 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 11 π min ) ,
where U * , 2 = U * U , U ^ * , 2 = U ^ * U ^ , i.e., U * , 2 , U ^ * , 2 are the row-normalized versions of U U and U ^ U ^ , respectively.
Proof. 
Lemma G.1. of [43] says that using U ^ * , 2 as input of the SVM-cone algorithm returns the same result as using U ^ * as the input. By Lemma F.1 of [43], there exists a permutation matrix P * R K × K such that
max k [ K ] e k ( U ^ * , 2 ( I ^ * , : ) P * U * , 2 ( I , : ) ) F = O ( K ζ ϵ * λ K 1.5 ( U * , 2 ( I , : ) ) U * , 2 ( I , : ) ) ,
where ζ 4 K η λ K 1.5 ( U * , 2 ( I , : ) U * , 2 ( I , : ) ) = O ( K η λ K 1.5 ( U * ( I , : ) U * ( I , : ) ) ) , ϵ * = max i [ n ] U ^ * , 2 ( i , : ) U * , 2 ( i , : ) F and η = min 1 k K ( ( U * ( I , : ) U * ( I , : ) ) 1 1 ) ( k ) . Next we give upper bound of ϵ * .
U ^ * , 2 ( i , : ) U * , 2 ( i , : ) F = U ^ 2 ( i , : ) U 2 ( i , : ) F U 2 ( i , : ) U ^ 2 ( i , : ) F U ^ 2 ( i , : ) F U 2 ( i , : ) F F 2 U ^ 2 ( i , : ) U 2 ( i , : ) F U 2 ( i , : ) F 2 U ^ 2 U 2 2 U 2 ( i , : ) F 2 ϖ U 2 ( i , : ) F = 2 ϖ ( U U ) ( i , : ) F = 2 ϖ U ( i , : ) U F = 2 ϖ U ( i , : ) F 2 θ max ϖ K λ 1 ( Π Π ) θ min ,
where the last inequality holds by Lemma A1. Then, we have ϵ * = O ( θ max ϖ K λ 1 ( Π Π ) θ min ) . By Lemma H.2. of [43], λ K ( U * ( I , : ) U * ( I , : ) ) θ min 2 κ 1 ( Π Π ) θ max 2 . By the lower bound of η given in Lemma A1, we have
max k [ K ] e k ( U ^ * , 2 ( I ^ * , : ) P * U * , 2 ( I , : ) ) F = O ( K 3 θ max 11 ϖ κ 3 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 11 π min ) .
The next lemma focuses on the third step of SVM-cone-DCMMSB and bounds max i [ n ] e i ( Z ^ * Z * P * ) F .
Lemma A3.
Under D C M M n ( K , P ˜ , Π , Θ ) , when the conditions of Lemma 5 hold, with a probability of at least 1 o ( n α ) , we have
max i [ n ] e i ( Z ^ * Z * P * ) F = O ( θ max 15 K 4.5 ϖ κ 4.5 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 14 π min ) .
Proof. 
For i [ n ] , since Z * = Y * J * , Z ^ * = Y ^ * J ^ * and J * , J ^ * are diagonal matrices, we have
e i ( Z ^ * Z * P * ) F = e i ( max ( 0 , Y ^ * J ^ * ) Y * J * P * ) F e i ( Y ^ * J ^ * Y * J * P * ) F = e i ( Y ^ * Y * P * ) J ^ * + e i Y * P * ( J ^ * P * J * P * ) F e i ( Y ^ * Y * P * ) F J ^ * + e i Y * P * F J ^ * P * J * P * = e i ( Y ^ * Y * P * ) F J ^ * + e i Y * F J ^ * P * J * P * = e i ( Y ^ * Y * P * ) F J ^ * + e i Y * F J * P * J ^ * P * .
Therefore, the bound of e i ( Z ^ * Z * P * ) F can be obtained as long as we bound e i ( Y ^ * Y * P * ) F , J ^ * , e i Y * F and J * P * J ^ * P * . We bound the four terms as below:
  • We bound e i ( Y ^ * Y * P * ) F first. Set U * ( I , : ) = B * , U ^ * ( I ^ * , : ) = B ^ * , U * , 2 ( I , : ) = B 2 * , U ^ * , 2 ( I ^ * , : ) = B ^ 2 * for convenience. For i [ n ] , we have
    e i ( Y ^ * Y * P * ) F = e i ( U ^ B ^ * ( B ^ * B ^ * ) 1 U B * ( B * B * ) 1 P * ) F = e i ( U ^ U ( U U ^ ) ) B ^ * ( B ^ * B ^ * ) 1 + e i ( U ( U U ^ ) B ^ * ( B ^ * B ^ * ) 1 U ( U U ^ ) ( P * ( B * B * ) ( B * ) 1 ( U U ^ ) ) 1 ) F e i ( U ^ U ( U U ^ ) ) B ^ * ( B ^ * B ^ * ) 1 F + e i U ( U U ^ ) ( B ^ * ( B ^ * B ^ * ) 1 ( P * ( B * B * ) ( B * ) 1 ( U U ^ ) ) 1 ) F e i ( U ^ U ( U U ^ ) ) F B ^ * 1 F + e i U ( U U ^ ) ( B ^ * ( B ^ * B ^ * ) 1 ( P * ( B * B * ) ( B * ) 1 ( U U ^ ) ) 1 ) F K e i ( U ^ U ( U U ^ ) ) F / λ K ( B ^ * B ^ * ) + e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F = ( i ) K e i ( U ^ U ^ U U ) U ^ F O ( θ max κ ( Π Π ) θ min ) + e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F K e i ( U ^ U ^ U U ) F O ( θ max κ ( Π Π ) θ min ) + e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F K ϖ O ( θ max κ ( Π Π ) θ min ) + e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F = O ( ϖ θ max K κ ( Π Π ) θ min ) + e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F ,
    where we have used similar idea in the proof of Lemma VII.3 in [44] such that we apply O ( 1 λ K ( B * B * ) ) to estimate 1 λ K ( B ^ * B ^ * ) .
    Now we aim to bound e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F . For convenience, set T = U U ^ , S = P * B * T . We have
    e i U ( U U ^ ) ( B ^ * 1 ( P * B * ( U U ^ ) ) 1 ) F = e i U T S 1 ( S B ^ * ) B ^ * 1 F e i U T S 1 ( S B ^ * ) F B ^ * 1 F e i U T S 1 ( S B ^ * ) F K | λ K ( B ^ * ) | = e i U T S 1 ( S B ^ * ) F K λ K ( B ^ * B ^ * ) e i U T S 1 ( S B ^ * ) F O ( θ max K κ ( Π Π ) θ min ) = e i U T T 1 B * ( B * B * ) 1 P * ( S B ^ * ) F O ( θ max K κ ( Π Π ) θ min ) = e i U B * ( B * B * ) 1 P * ( S B ^ * ) F O ( θ max K κ ( Π Π ) θ min ) = e i Y * P * ( S B ^ * ) F O ( θ max K λ 1 ( Π Π ) θ min ) e i Y * F S B ^ * F O ( θ max K λ 1 ( Π Π ) θ min ) By Equation ( A 3 ) θ max 2 K λ 1 ( Π Π ) θ min 2 λ K ( Π Π ) max 1 k K e k ( S B ^ * ) F O ( θ max K κ ( Π Π ) θ min ) = max 1 k K e k ( B ^ * P * B * U U ^ ) F O ( θ max 3 K 1.5 κ ( Π Π ) θ min 3 λ K ( Π Π ) ) = max 1 k K e k ( B ^ * U ^ P * B * U ) U ^ F O ( θ max 3 K 1.5 κ ( Π Π ) θ min 3 λ K ( Π Π ) ) max 1 k K e k ( B ^ * U ^ P B * U ) F O ( θ max 3 K 1.5 κ ( Π Π ) θ min 3 λ K ( Π Π ) ) = max 1 k K e k ( B ^ 2 * P * B 2 * ) F O ( θ max 3 K 1.5 κ ( Π Π ) θ min 3 λ K ( Π Π ) ) = By Lemma A 2 O ( K 4.5 θ max 14 ϖ κ 4.5 ( Π Π ) λ 1 ( Π Π ) θ min 14 π min ) .
    Then, we have
    e i ( Y ^ * Y * P * ) F O ( ϖ θ max K κ ( Π Π ) θ min ) + e i U ( U U ^ ) ( B ^ * 1 ( P * B * U U ^ ) ) 1 ) F O ( ϖ θ max K κ ( Π Π ) θ min ) + O ( K 4.5 θ max 14 ϖ κ 4.5 ( Π Π ) λ 1 ( Π Π ) θ min 14 π min ) = O ( K 4.5 θ max 14 ϖ κ 4.5 ( Π Π ) λ 1 ( Π Π ) θ min 14 π min ) .
  • for e i Y * F , since Y * = U U * 1 ( I , : ) , we have
    e i Y * F U ( i , : ) F U * 1 ( I , : ) F K U ( i , : ) F λ K ( U * ( I , : ) U * ( I , : ) ) θ max 2 K λ 1 ( Π Π ) θ min 2 λ K ( Π Π ) .
  • for J ^ * , recall that J ^ * = diag ( U ^ * ( I ^ * , : ) Λ ^ U ^ * ( I ^ * , : ) ) , we have
    J ^ * 2 = max k [ K ] J ^ * 2 ( k , k ) = max k [ K ] e k U ^ * ( I ^ * , : ) Λ ^ U ^ * ( I ^ * , : ) e k = max k [ K ] e k U ^ * ( I ^ * , : ) Λ ^ U ^ * ( I ^ * , : ) e k max k [ K ] e k U ^ * ( I ^ * , : ) 2 Λ ^ max k [ K ] e k U ^ * ( I ^ * , : ) F 2 Λ ^ = Λ ^ ,
    where we have used the fact that U ^ * ( i , : ) F = 1 for i [ n ] in the last equality. Since we need σ K ( Ω ) C θ max P ˜ max n log ( n ) C A Ω in the proof of Lemma 5, we have Λ ^ = A = A Ω + Ω A Ω + Ω σ K ( Ω ) + Ω 2 Ω = 2 Θ Π P ˜ Π Θ 2 C θ max 2 P ˜ max λ 1 ( Π Π ) = O ( θ max 2 P ˜ max λ 1 ( Π Π ) ) . Then we have
    J ^ * = O ( θ max P ˜ max λ 1 ( Π Π ) ) .
  • for J * P * J ^ * P * , we provide some simple facts first: Λ ^ = A , Λ = Ω , Ω = U Λ U , A ˜ = U ^ Λ ^ U ^ , U ^ = 1 , U = 1 , e k P * B ^ 2 * = B ^ 2 * e k = e k B ^ 2 * e k B ^ 2 * F = 1 . Since A ˜ is the best rank K approximation to A in the spectral norm, and therefore A ˜ A Ω A since Ω = U Λ U with rank K and Ω can also be viewed as a rank K approximation to A. This leads to Ω A ˜ = Ω A + A A ˜ 2 A Ω . By Lemma H.2 [43], B * = U * ( I , : ) = λ 1 ( U * ( I , : ) U * ( I , : ) ) κ ( Π Θ 2 Π ) θ max κ 0.5 ( Π Π ) θ min . A = A Ω + Ω A Ω + Ω σ K ( Ω ) + Ω 2 Ω by the lower bound requirement of σ K ( Ω ) in Lemma 5, and we also have A Ω σ K ( Ω ) Ω . For k [ K ] , let τ k = J * ( k , k ) , τ ^ k = ( P * J ^ * P * ) ( k , k ) for convenience. Based on the above facts and Lemma A2, we have
    max k [ K ] | τ k 2 τ ^ k 2 | = max k [ K ] e k U * ( I , : ) Λ U * ( I , : ) e k e k P * U ^ * ( I ^ * , : ) Λ ^ U ^ * ( I ^ * , : ) P * e k = max k [ K ] e k B 2 * U Λ U B 2 * e k e k P * B ^ 2 * U ^ Λ ^ U ^ B ^ 2 * P * e k e k ( B 2 * P * B ^ 2 * ) U Λ U B 2 * e k + e k P * B ^ 2 * ( U Λ U U ^ Λ ^ U ^ ) B 2 * e k + e k P * B ^ 2 * U ^ Λ ^ U ^ ( B 2 * B ^ 2 * P * ) e k e k ( B 2 * P * B ^ 2 * ) U Λ U B 2 * e k + e k P * B ^ 2 * U Λ U U ^ Λ ^ U ^ B 2 * e k + e k P * B ^ 2 * U ^ Λ ^ U ^ ( B 2 * B ^ 2 * P * ) e k e k ( B 2 * P * B ^ 2 * ) Λ B 2 * e k + U Λ U U ^ Λ ^ U ^ B 2 * e k + Λ ^ ( B 2 * B ^ 2 * P * ) e k = e k ( B 2 * P * B ^ 2 * ) ( Ω B 2 * e k + A ) + Ω A ˜ B 2 * e k = e k ( B ^ 2 * P * B 2 * ) ( Ω B 2 * e k + A ) + Ω A ˜ B 2 * e k e k ( B ^ 2 * P * B 2 * ) ( Ω B 2 * e k + A ) + 2 A Ω B 2 * e k = e k ( B ^ 2 * P * B 2 * ) ( Ω U B * e k + A ) + 2 A Ω U B * e k e k ( B ^ 2 * P * B 2 * ) ( Ω B * e k + A ) + 2 A Ω B * e k e k ( B ^ 2 * P * B 2 * ) ( Ω B * e k + 2 Ω ) + 2 Ω B * e k = e k ( B ^ 2 * P * B 2 * ) ( B * e k + 1 ) O ( θ max 2 P ˜ max λ 1 ( Π Π ) ) + B * e k O ( θ max 2 P ˜ max λ 1 ( Π Π ) ) e k ( B ^ 2 * P * B 2 * ) ( B * + 1 ) O ( θ max 2 P ˜ max λ 1 ( Π Π ) ) + B * O ( θ max 2 P ˜ max λ 1 ( Π Π ) ) e k ( B ^ 2 * P * B 2 * ) O ( θ max 3 P ˜ max κ 0.5 ( Π Π ) λ 1 ( Π Π ) / θ min ) + O ( θ max 3 P ˜ max κ 0.5 ( Π Π ) λ 1 ( Π Π ) / θ min ) e k ( B ^ 2 * P * B 2 * ) F O ( θ max 3 P ˜ max κ 0.5 ( Π Π ) λ 1 ( Π Π ) / θ min ) + O ( θ max 3 P ˜ max κ 0.5 ( Π Π ) λ 1 ( Π Π ) / θ min ) = O ( K 3 θ max 11 ϖ κ 3 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 11 π min ) O ( θ max 3 P ˜ max κ 0.5 ( Π Π ) λ 1 ( Π Π ) / θ min ) + O ( θ max 3 P ˜ max κ 0.5 ( Π Π ) λ 1 ( Π Π ) / θ min ) = O ( K 3 θ max 14 P ˜ max ϖ κ 3.5 ( Π Π ) λ 1 2.5 ( Π Π ) θ min 12 π min ) .
    Recall that J * = N U ( I , I ) Θ ( I , I ) , we have J * N U , max θ max θ max 2 K λ 1 ( Π Π ) θ min where the last inequality holds by Lemma A1. Similarly, we have J * ( k , k ) θ min min i [ n ] 1 U ( i , : ) F θ min 2 λ K ( Π Π ) θ max where the last inequality holds by the proof of Lemma 5. Then we have
    J * P * J ^ * P * = max k [ K ] | τ ^ k τ k | = max k [ K ] | τ ^ k 2 τ k 2 | τ ^ k + τ k max k [ K ] | τ ^ k 2 τ k 2 | τ k θ max θ min 2 λ K ( Π Π ) max k [ K ] | τ ^ k 2 τ k 2 | = O ( K 3 θ max 15 P ˜ max ϖ κ 3.5 ( Π Π ) λ 1 2.5 ( Π Π ) θ min 14 π min λ K ( Π Π ) ) .
    Combining the above results, we have
    e i ( Z ^ * Z * P * ) F e i ( Y ^ * Y * P * ) F J ^ * + e i Y * F J * P * J ^ * P * O ( K 4.5 θ max 14 ϖ κ 4.5 ( Π Π ) λ 1 ( Π Π ) θ min 14 π min ) O ( θ max P ˜ max λ 1 ( Π Π ) ) + θ max 2 K λ 1 ( Π Π ) θ min 2 λ K ( Π Π ) O ( K 3 θ max 15 P ˜ max ϖ κ 3.5 ( Π Π ) λ 1 2.5 ( Π Π ) θ min 14 π min λ K ( Π Π ) ) = O ( θ max 15 K 4.5 ϖ κ 4.5 ( Π Π ) λ 1 1.5 ( Π Π ) θ min 14 π min ) .

References

  1. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef] [PubMed]
  2. Newman, M.E. Scientific collaboration networks. II. Shortest paths, weighted networks, and centrality. Phys. Rev. E 2001, 64, 016132. [Google Scholar] [CrossRef] [PubMed]
  3. Dunne, J.A.; Williams, R.J.; Martinez, N.D. Food-web structure and network theory: The role of connectance and size. Proc. Natl. Acad. Sci. USA 2002, 99, 12917–12922. [Google Scholar] [CrossRef]
  4. Newman, M.E.J. Coauthorship networks and patterns of scientific collaboration. Proc. Natl. Acad. Sci. USA 2004, 101, 5200–5205. [Google Scholar] [CrossRef]
  5. Notebaart, R.A.; van Enckevort, F.H.; Francke, C.; Siezen, R.J.; Teusink, B. Accelerating the reconstruction of genome-scale metabolic networks. BMC Bioinform. 2006, 7, 296. [Google Scholar] [CrossRef]
  6. Pizzuti, C. Ga-net: A genetic algorithm for community detection in social networks. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1081–1090. [Google Scholar]
  7. Jackson, M.O. Social and Economic Networks; Princeton University Press: Princeton, NJ, USA, 2010. [Google Scholar]
  8. Gao, J.; Liang, F.; Fan, W.; Wang, C.; Sun, Y.; Han, J. On community outliers and their efficient detection in information networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 24–28 July 2010; pp. 813–822. [Google Scholar]
  9. Rubinov, M.; Sporns, O. Complex network measures of brain connectivity: Uses and interpretations. Neuroimage 2010, 52, 1059–1069. [Google Scholar] [CrossRef]
  10. Su, G.; Kuchinsky, A.; Morris, J.H.; States, D.J.; Meng, F. GLay: Community structure analysis of biological networks. Bioinformatics 2010, 26, 3135–3137. [Google Scholar] [CrossRef]
  11. Lin, W.; Kong, X.; Yu, P.S.; Wu, Q.; Jia, Y.; Li, C. Community detection in incomplete information networks. In Proceedings of the 21st International Conference on World Wide Web, Lyon, France, 16–20 April 2012; pp. 341–350. [Google Scholar]
  12. Scott, J.; Carrington, P.J. The SAGE Handbook of Social Network Analysis; SAGE Publications: London, UK, 2014. [Google Scholar]
  13. Bedi, P.; Sharma, C. Community detection in social networks. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2016, 6, 115–135. [Google Scholar] [CrossRef]
  14. Ji, P.; Jin, J. Coauthorship and citation networks for statisticians. Ann. Appl. Stat. 2016, 10, 1779–1812. [Google Scholar] [CrossRef]
  15. Ji, P.; Jin, J.; Ke, Z.T.; Li, W. Co-citation and Co-authorship Networks of Statisticians. J. Bus. Econ. Stat. 2022, 40, 469–485. [Google Scholar] [CrossRef]
  16. Newman, M.E. The structure and function of complex networks. SIAM Rev. 2003, 45, 167–256. [Google Scholar] [CrossRef]
  17. Newman, M.E.; Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 2004, 69, 026113. [Google Scholar] [CrossRef] [PubMed]
  18. Boccaletti, S.; Latora, V.; Moreno, Y.; Chavez, M.; Hwang, D.U. Complex networks: Structure and dynamics. Phys. Rep. 2006, 424, 175–308. [Google Scholar] [CrossRef]
  19. Fortunato, S. Community detection in graphs. Phys. Rep. 2010, 486, 75–174. [Google Scholar] [CrossRef]
  20. Fortunato, S.; Hric, D. Community detection in networks: A user guide. Phys. Rep. 2016, 659, 1–44. [Google Scholar] [CrossRef]
  21. Abbe, E.; Bandeira, A.S.; Hall, G. Exact Recovery in the Stochastic Block Model. IEEE Trans. Inf. Theory 2016, 62, 471–487. [Google Scholar] [CrossRef]
  22. Fortunato, S.; Newman, M.E. 20 years of network community detection. Nat. Phys. 2022, 1–3. [Google Scholar] [CrossRef]
  23. Goldenberg, A.; Zheng, A.X.; Fienberg, S.E.; Airoldi, E.M. A survey of statistical network models. Found. Trends Mach. Learn. 2010, 2, 129–233. [Google Scholar] [CrossRef]
  24. Holland, P.W.; Laskey, K.B.; Leinhardt, S. Stochastic blockmodels: First steps. Soc. Netw. 1983, 5, 109–137. [Google Scholar] [CrossRef]
  25. Snijders, T.A.; Nowicki, K. Estimation and prediction for stochastic blockmodels for graphs with latent block structure. J. Classif. 1997, 14, 75–100. [Google Scholar] [CrossRef]
  26. Rohe, K.; Chatterjee, S.; Yu, B. Spectral clustering and the high-dimensional stochastic blockmodel. Ann. Stat. 2011, 39, 1878–1915. [Google Scholar] [CrossRef]
  27. Choi, D.S.; Wolfe, P.J.; Airoldi, E.M. Stochastic blockmodels with a growing number of classes. Biometrika 2012, 99, 273–284. [Google Scholar] [CrossRef]
  28. Sussman, D.L.; Tang, M.; Fishkind, D.E.; Priebe, C.E. A consistent adjacency spectral embedding for stochastic blockmodel graphs. J. Am. Stat. Assoc. 2012, 107, 1119–1128. [Google Scholar] [CrossRef]
  29. Latouche, P.; Birmelé, E.; Ambroise, C. Model selection in overlapping stochastic block models. Electron. J. Stat. 2014, 8, 762–794. [Google Scholar] [CrossRef]
  30. Lei, J.; Rinaldo, A. Consistency of spectral clustering in stochastic block models. Ann. Stat. 2015, 43, 215–237. [Google Scholar] [CrossRef]
  31. Sarkar, P.; Bickel, P.J. Role of normalization in spectral clustering for stochastic blockmodels. Ann. Stat. 2015, 43, 962–990. [Google Scholar] [CrossRef]
  32. Lyzinski, V.; Tang, M.; Athreya, A.; Park, Y.; Priebe, C.E. Community detection and classification in hierarchical stochastic blockmodels. IEEE Trans. Netw. Sci. Eng. 2016, 4, 13–26. [Google Scholar] [CrossRef]
  33. Valles-Catala, T.; Massucci, F.A.; Guimera, R.; Sales-Pardo, M. Multilayer stochastic block models reveal the multilayer structure of complex networks. Phys. Rev. X 2016, 6, 011036. [Google Scholar] [CrossRef]
  34. Lei, J. A goodness-of-fit test for stochastic block models. Ann. Stat. 2016, 44, 401–424. [Google Scholar] [CrossRef]
  35. Tabouy, T.; Barbillon, P.; Chiquet, J. Variational inference for stochastic block models from sampled data. J. Am. Stat. Assoc. 2020, 115, 455–466. [Google Scholar] [CrossRef]
  36. Airoldi, E.M.; Blei, D.M.; Fienberg, S.E.; Xing, E.P. Mixed Membership Stochastic Blockmodels. J. Mach. Learn. Res. 2008, 9, 1981–2014. [Google Scholar] [PubMed]
  37. Wang, F.; Li, T.; Wang, X.; Zhu, S.; Ding, C. Community discovery using nonnegative matrix factorization. Data Min. Knowl. Discov. 2011, 22, 493–521. [Google Scholar] [CrossRef]
  38. Airoldi, E.M.; Wang, X.; Lin, X. Multi-way blockmodels for analyzing coordinated high-dimensional responses. Ann. Appl. Stat. 2013, 7, 2431–2457. [Google Scholar] [CrossRef] [PubMed]
  39. Panov, M.; Slavnov, K.; Ushakov, R. Consistent Estimation of Mixed Memberships with Successive Projections. In International Conference on Complex Networks and Their Applications; Springer: Cham, Switzerland, 2017; pp. 53–64. [Google Scholar]
  40. Zhang, Y.; Levina, E.; Zhu, J. Detecting overlapping communities in networks using spectral methods. SIAM J. Math. Data Sci. 2020, 2, 265–283. [Google Scholar] [CrossRef]
  41. Jin, J.; Ke, Z.T.; Luo, S. Estimating network memberships by simplex vertex hunting. arXiv 2017, arXiv:1708.07852. [Google Scholar]
  42. Mao, X.; Sarkar, P.; Chakrabarti, D. On Mixed Memberships and Symmetric Nonnegative Matrix Factorizations. In Proceedings of the 34th International Conference of Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 2324–2333. [Google Scholar]
  43. Mao, X.; Sarkar, P.; Chakrabarti, D. Overlapping Clustering Models, and One (class) SVM to Bind Them All. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; Volume 31, pp. 2126–2136. [Google Scholar]
  44. Mao, X.; Sarkar, P.; Chakrabarti, D. Estimating Mixed Memberships With Sharp Eigenvector Deviations. J. Am. Stat. Assoc. 2020, 116, 1928–1940. [Google Scholar] [CrossRef]
  45. Karrer, B.; Newman, M.E.J. Stochastic blockmodels and community structure in networks. Phys. Rev. E 2011, 83, 16107. [Google Scholar] [CrossRef]
  46. Kaufmann, E.; Bonald, T.; Lelarge, M. A spectral algorithm with additive clustering for the recovery of overlapping communities in networks. Theor. Comput. Sci. 2017, 742, 3–26. [Google Scholar] [CrossRef]
  47. Von Luxburg, U. A tutorial on spectral clustering. Stat. Comput. 2007, 17, 395–416. [Google Scholar] [CrossRef]
  48. Qin, T.; Rohe, K. Regularized spectral clustering under the degree-corrected stochastic blockmodel. Adv. Neural Inf. Process. Syst. 2013, 26, 3120–3128. [Google Scholar]
  49. Joseph, A.; Yu, B. Impact of regularization on spectral clustering. Ann. Stat. 2016, 44, 1765–1791. [Google Scholar] [CrossRef]
  50. Jin, J. Fast community detection by SCORE. Ann. Stat. 2015, 43, 57–89. [Google Scholar] [CrossRef]
  51. Gillis, N.; Vavasis, S.A. Semidefinite Programming Based Preconditioning for More Robust Near-Separable Nonnegative Matrix Factorization. SIAM J. Optim. 2015, 25, 677–698. [Google Scholar] [CrossRef]
  52. Mossel, E.; Neeman, J.; Sly, A. Consistency thresholds for binary symmetric block models. arXiv 2014, arXiv:1407.1591. [Google Scholar]
  53. Abbe, E. Community detection and stochastic block models: Recent developments. J. Mach. Learn. Res. 2017, 18, 6446–6531. [Google Scholar]
  54. Hajek, B.; Wu, Y.; Xu, J. Achieving Exact Cluster Recovery Threshold via Semidefinite Programming: Extensions. IEEE Trans. Inf. Theory 2016, 62, 5918–5937. [Google Scholar] [CrossRef]
  55. Agarwal, N.; Bandeira, A.S.; Koiliaris, K.; Kolla, A. Multisection in the Stochastic Block Model using Semidefinite Programming. arXiv 2017, arXiv:1507.02323. [Google Scholar]
  56. Bandeira, A.S. Random Laplacian Matrices and Convex Relaxations. Found. Comput. Math. 2018, 18, 345–379. [Google Scholar] [CrossRef]
  57. Abbe, E.; Sandon, C. Community Detection in General Stochastic Block models: Fundamental Limits and Efficient Algorithms for Recovery. In Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, Berkeley, CA, USA, 17–20 October 2015; pp. 670–688. [Google Scholar]
  58. Gao, C.; Ma, Z.; Zhang, A.Y.; Zhou, H.H. Achieving Optimal Misclassification Proportion in Stochastic Block Models. J. Mach. Learn. Res. 2017, 18, 1–45. [Google Scholar]
  59. McSherry, F. Spectral partitioning of random graphs. In Proceedings of the 2001 IEEE International Conference on Cluster Computing, Newport Beach, CA, USA, 8–11 October 2001; pp. 529–537. [Google Scholar]
  60. Newman, M.E. Assortative mixing in networks. Phys. Rev. Lett. 2002, 89, 208701. [Google Scholar] [CrossRef]
  61. Erdös, P.; Rényi, A. On the evolution of random graphs. In The Structure and Dynamics of Networks; Princeton University Press: Princeton, NJ, USA, 2011; pp. 38–82. [Google Scholar] [CrossRef]
  62. Blum, A.; Hopcroft, J.; Kannan, R. Foundations of Data Science; Number 1; Cambridge University Press: Cambridge, UK, 2020; pp. 1–465. [Google Scholar]
  63. Lei, L. Unified 2→∞ Eigenspace Perturbation Theory for Symmetric Random Matrices. arXiv 2019, arXiv:1909.04798. [Google Scholar]
  64. Chen, Y.; Chi, Y.; Fan, J.; Ma, C. Spectral methods for data science: A statistical perspective. Found. Trends Mach. Learn. 2021, 14, 566–806. [Google Scholar] [CrossRef]
  65. Cape, J.; Tang, M.; Priebe, C.E. The two-to-infinity norm and singular subspace geometry with applications to high-dimensional statistics. Ann. Stat. 2019, 47, 2405–2439. [Google Scholar] [CrossRef]
  66. Abbe, E.; Fan, J.; Wang, K.; Zhong, Y. Entrywise Eigenvector Analysis of Random Matrices with Low Expected Rank. Ann. Stat. 2020, 48, 1452–1474. [Google Scholar] [CrossRef] [PubMed]
  67. Rohe, K.; Qin, T.; Yu, B. Co-clustering directed graphs to discover asymmetries and directional communities. Proc. Natl. Acad. Sci. USA 2016, 113, 12679–12684. [Google Scholar] [CrossRef] [PubMed]
  68. Wang, Z.; Liang, Y.; Ji, P. Spectral Algorithms for Community Detection in Directed Networks. J. Mach. Learn. Res. 2020, 21, 1–45. [Google Scholar]
  69. Cai, T.T.; Li, X. Robust and computationally feasible community detection in the presence of arbitrary outlier nodes. Ann. Stat. 2015, 43, 1027–1059. [Google Scholar] [CrossRef]
  70. Tropp, J.A. User-Friendly Tail Bounds for Sums of Random Matrices. Found. Comput. Math. 2012, 12, 389–434. [Google Scholar] [CrossRef]
  71. Zhou, Z.; Amini, A.A. Analysis of spectral clustering algorithms for community detection: The general bipartite setting. J. Mach. Learn. Res. 2019, 20, 1–47. [Google Scholar]
  72. Zhao, Y.; Levina, E.; Zhu, J. Consistency of community detection in networks under degree-corrected stochastic block models. Ann. Stat. 2012, 40, 2266–2292. [Google Scholar] [CrossRef]
  73. Ghoshdastidar, D.; Dukkipati, A. Consistency of Spectral Partitioning of Uniform Hypergraphs under Planted Partition Model. In Proceedings of the Advances in Neural Information Processing Systems 27, Montreal, QC, Canada, 8–13 December 2014; Volume 27, pp. 397–405. [Google Scholar]
  74. Ke, Z.T.; Shi, F.; Xia, D. Community Detection for Hypergraph Networks via Regularized Tensor Power Iteration. arXiv 2019, arXiv:1909.06503. [Google Scholar]
  75. Cole, S.; Zhu, Y. Exact recovery in the hypergraph stochastic block model: A spectral algorithm. Linear Algebra Its Appl. 2020, 593, 45–73. [Google Scholar] [CrossRef]
  76. Bandeira, A.S.; van Handel, R. Sharp nonasymptotic bounds on the norm of random matrices with independent entries. Ann. Probab. 2016, 44, 2479–2506. [Google Scholar] [CrossRef]
  77. Cape, J. Orthogonal Procrustes and norm-dependent optimality. Electron. J. Linear Algebra 2020, 36, 158–168. [Google Scholar] [CrossRef]
Figure 1. Phase transition for SPACL and SVM-cone-DCMMSB under MMSB: darker pixels represent lower error rates. The red lines represent $\frac{|\alpha_{in}-\alpha_{out}|}{\sqrt{\max(\alpha_{in},\alpha_{out})}}=1$.
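The red line can be read as the separation-condition boundary from Table 1. The back-of-the-envelope conversion below is a sketch added here for illustration; it assumes (an assumption made here, not a statement from the caption) that the phase-transition grids scale the connection probabilities as $p_{in}=\alpha_{in}\log(n)/n$ and $p_{out}=\alpha_{out}\log(n)/n$, the usual scaling in exact-recovery experiments.

```latex
% Heuristic conversion of the separation condition to the plotted threshold line,
% under the assumed scaling p_in = alpha_in*log(n)/n and p_out = alpha_out*log(n)/n.
\[
|p_{\mathrm{in}}-p_{\mathrm{out}}|\gtrsim\sqrt{p_{\max}}\cdot\sqrt{\tfrac{\log(n)}{n}}
\iff
|\alpha_{\mathrm{in}}-\alpha_{\mathrm{out}}|\tfrac{\log(n)}{n}\gtrsim\sqrt{\alpha_{\max}}\cdot\tfrac{\log(n)}{n}
\iff
\frac{|\alpha_{\mathrm{in}}-\alpha_{\mathrm{out}}|}{\sqrt{\max(\alpha_{\mathrm{in}},\alpha_{\mathrm{out}})}}\gtrsim1 .
\]
```

Under this reading, the region beyond the red line is where the separation condition of Table 1 holds up to constants, and that is where the error rates are expected to drop.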
Figure 2. Panel (a): a graph generated from the mixed membership stochastic blockmodel with n = 600 nodes and 2 communities. Among the 600 nodes, each community has 250 pure nodes, and the 100 mixed nodes have mixed membership (1/2, 1/2). Panel (b): a graph generated from MMSB with n = 600 nodes and 3 communities. Among the 600 nodes, each community has 150 pure nodes, and the 150 mixed nodes have mixed membership (1/3, 1/3, 1/3). Nodes in panels (a,b) connect with probability $p_{in}=60/600$ within communities and $p_{out}=1/600$ across communities, so the networks in both panels are assortative. For panel (a), the error rates of SPACL and SVM-cone-DCMMSB are 0.0285 and 0.0175, respectively, where the error rate is defined in Equation (9). For panel (b), the error rates of SPACL and SVM-cone-DCMMSB are 0.0709 and 0.0436, respectively. For both panels, dots in the same color are pure nodes in the same community and green square nodes are mixed nodes.
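For concreteness, the following Python sketch generates a graph with the panel (a) setup; it is an illustrative reconstruction using numpy, not the simulation code used for the figure.

```python
import numpy as np

# Sketch of the panel (a) setup: n = 600 nodes, K = 2 communities,
# 250 pure nodes per community, 100 mixed nodes with membership (1/2, 1/2),
# within-community probability p_in = 60/600, between-community p_out = 1/600.
rng = np.random.default_rng(2022)
n, K = 600, 2
p_in, p_out = 60 / 600, 1 / 600

Pi = np.zeros((n, K))
Pi[:250, 0] = 1.0                     # pure nodes of community 1
Pi[250:500, 1] = 1.0                  # pure nodes of community 2
Pi[500:, :] = 0.5                     # 100 mixed nodes with membership (1/2, 1/2)

P = np.full((K, K), p_out)
np.fill_diagonal(P, p_in)
Omega = Pi @ P @ Pi.T                 # edge probability Omega(i, j) = Pi(i, :) P Pi(j, :)'

A = (rng.random((n, n)) < Omega).astype(int)
A = np.triu(A, 1)
A = A + A.T                           # symmetric adjacency, no self-loops
print(A.sum() // 2, "edges")
```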
Figure 3. Phase transition for oPCA, nPCA, RSC and SCORE under SBM: darker pixels represent lower error rates. The red lines represent $\frac{|\alpha_{in}-\alpha_{out}|}{\sqrt{\max(\alpha_{in},\alpha_{out})}}=1$.
Figure 4. Panel (a): a graph generated from $SBM(600, 2, 30/600, 2/600)$. Panel (b): a graph generated from $SBM(600, 3, 30/600, 2/600)$. Thus, in panel (a) there are 2 communities and each community has 300 nodes; in panel (b) there are 3 communities and each community has 200 nodes. The networks in panels (a,b) are assortative since $p_{in}>p_{out}$. For both panels, the error rates of oPCA, nPCA, RSC and SCORE are 0. Colors indicate clusters.
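A minimal sketch of the panel (a) experiment is given below. It uses plain spectral clustering (the sign of the second leading eigenvector) as a stand-in for the four methods compared in the figure, so it is an illustration rather than the original simulation code; at this edge density the true partition is typically recovered exactly, in line with the zero error rates reported above.

```python
import numpy as np

# Sketch of the panel (a) experiment: SBM(600, 2, 30/600, 2/600), i.e. two equal
# communities of 300 nodes, p_in = 30/600 within and p_out = 2/600 between.
# Stand-in method: split nodes by the sign of the second leading eigenvector of A,
# the simplest spectral clustering for K = 2.
rng = np.random.default_rng(1)
n, p_in, p_out = 600, 30 / 600, 2 / 600
z = np.repeat([0, 1], n // 2)                      # true community labels
Omega = np.where(z[:, None] == z[None, :], p_in, p_out)
A = (rng.random((n, n)) < Omega).astype(int)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric adjacency

vals, vecs = np.linalg.eigh(A)
u2 = vecs[:, np.argsort(np.abs(vals))[::-1][1]]    # second leading eigenvector
z_hat = (u2 > 0).astype(int)

err = min(np.mean(z_hat != z), np.mean((1 - z_hat) != z))  # error up to label swap
print("clustering error:", err)                    # typically 0 at this density
```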
Table 1. Comparison of separation condition and sharp threshold. Details of this table are given in Section 4. The classical result on the separation condition given in Corollary 1 of [59] is $\sqrt{\log(n)/n}$ (i.e., Equation (1)). The classical result on the sharp threshold is $\log(n)/n$ (i.e., Equation (3)), given in [61], Theorem 4.6 of [62] and the first bullet in Section 2.5 of [53]. In this paper, $n$ is the number of nodes in a network, $A$ is the adjacency matrix, $\Omega$ is the expectation of $A$ under some model, $A_{re}$ is a regularization of $A$, $\rho$ is the sparsity parameter such that $\rho\geq\max_{i,j}\Omega(i,j)$ and it controls the overall sparsity of a network, $\|\cdot\|$ denotes the spectral norm, and $\xi>1$.
 | Model | Separation Condition | Sharp Threshold
Ours using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ | MMSB & DCMM | $\sqrt{\log(n)/n}$ | $\log(n)/n$
Ours using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ | MMSB & DCMM | $\sqrt{\log(n)/n}$ | $\log(n)/n$
Ref. [41] using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ (original) | DCMM | $\sqrt{\log(n)/n}$ | $\log(n)/n$
Ref. [41] using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ | DCMM | $\sqrt{\log(n)/n}$ | $\log(n)/n$
Refs. [43,44] using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ (original) | MMSB & DCMM | $\log^{\xi}(n)/\sqrt{n}$ | $\log^{2\xi}(n)/n$
Refs. [43,44] using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ | MMSB & DCMM | $\log^{\xi+0.5}(n)/\sqrt{n}$ | $\log^{2\xi+1}(n)/n$
Ref. [30] using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ (original) | SBM & DCSBM | $1/\sqrt{n}$ | $1/n$
Ref. [30] using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}\log(n)$ | SBM & DCSBM | $\sqrt{\log(n)/n}$ | $\log(n)/n$
Table 2. Comparison of the alternative separation condition, where the classical result on the alternative separation condition is 1 (i.e., Equation (2)).
 | Model | Alternative Separation Condition
Ours using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ | MMSB & DCMM | 1
Ours using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ | MMSB & DCMM | 1
Ref. [41] using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ (original) | DCMM | 1
Ref. [41] using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ | DCMM | 1
Refs. [43,44] using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ (original) | MMSB & DCMM | $\log^{\xi-0.5}(n)$
Refs. [43,44] using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ | MMSB & DCMM | $\log^{\xi}(n)$
Ref. [30] using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ (original) | SBM & DCSBM | $1/\sqrt{\log(n)}$
Ref. [30] using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}\log(n)$ | SBM & DCSBM | 1
Table 3. Comparison of error rates between our Theorem 1 and Theorem 3.2 of [44] under $MMSB_n(K,\tilde{P},\Pi,\rho)$. The dependence on $K$ is obtained when $\kappa(\Pi'\Pi)=O(1)$. For comparison, we have adjusted the $\ell_2$ error rates of Theorem 3.2 of [44] into $\ell_1$ error rates. Note that, as analyzed in the first bullet given after Lemma 2, whether using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$ or $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$ does not change our $\varpi$, and has no influence on the bound in Theorem 1. For [44], using $\|A_{re}-\Omega\|\leq C\sqrt{\rho n}$, the power of $\log(n)$ in their Theorem 3.2 is $\xi$; using $\|A-\Omega\|\leq C\sqrt{\rho n\log(n)}$, the power of $\log(n)$ in their Theorem 3.2 is $\xi+0.5$.
 | $\rho n$ | $\sigma_K(\Omega)$ | $\lambda_K(\Pi'\Pi)$ | Dependence on $K$ | Dependence on $\log(n)$
Ours | $\log(n)$ | $\sqrt{\rho n\log(n)}$ | $>0$ | $K^2$ | $\log^{0.5}(n)$
[44] | $\log^{2\xi}(n)$ | $\sqrt{\rho n}\log^{\xi}(n)$ | $1/\rho$ | $K^{2.5}$ | $\log^{\xi}(n)$
Table 4. Comparison of error rates between our Theorem 2 and Theorem 3.2 of [43] under $DCMM_n(K,P,\Pi,\Theta)$. The dependence on $K$ is obtained when $\kappa(\Pi'\Pi)=O(1)$. For comparison, we adjusted the $\ell_2$ error rates of Theorem 3.2 of [43] into $\ell_1$ error rates. Since Theorem 2 enjoys the same separation condition and sharp threshold as Theorem 1, and Theorem 3.2 of [43] enjoys the same separation condition and sharp threshold as Theorem 3.2 of [44], we do not report them in this table. Note that, as analyzed in Remark 11, whether using $\|A-\Omega\|\leq C\sqrt{\theta_{\max}\|\theta\|_1\log(n)}$ or $\|A_{re}-\Omega\|\leq C\sqrt{\theta_{\max}\|\theta\|_1}$ does not change our $\varpi$ under DCMM, and has no influence on the results in Theorem 2. For [43], using $\|A_{re}-\Omega\|\leq C\sqrt{\theta_{\max}\|\theta\|_1}$, the power of $\log(n)$ in their Theorem 3.2 is $\xi$; using $\|A-\Omega\|\leq C\sqrt{\theta_{\max}\|\theta\|_1\log(n)}$, the power of $\log(n)$ in their Theorem 3.2 is $\xi+0.5$.
 | $\Pi(i,:)$ | $\theta_{\max}\|\theta\|_1$ | $\sigma_K(\Omega)$ | $\kappa(\Pi'\Theta^2\Pi)$ | Dependence on $K$ | Dependence on $\log(n)$
Ours | arbitrary | $\log(n)$ | $\theta_{\max}\sqrt{n\log(n)}$ | $\geq1$ | $K^6$ | $\log^{0.5}(n)$
[43] | i.i.d. from Dirichlet | $\log^{2\xi}(n)$ | $\theta_{\max}\sqrt{n}\log^{\xi}(n)$ | $=O(1)$ | $K^{6.5}$ | $\log^{\xi}(n)$