Article

Bounding Extremal Degrees of Edge-Independent Random Graphs Using Relative Entropy

Department of Mathematics, Tongji University, Shanghai 200092, China
Entropy 2016, 18(2), 53; https://doi.org/10.3390/e18020053
Submission received: 2 December 2015 / Revised: 1 February 2016 / Accepted: 1 February 2016 / Published: 5 February 2016
(This article belongs to the Section Complexity)

Abstract

Edge-independent random graphs are a model of random graphs in which each potential edge appears independently with an individual probability. Based on the relative entropy method, we determine the upper and lower bounds for the extremal vertex degrees using the edge probability matrix and its largest eigenvalue. Moreover, an application to random graphs with given expected degree sequences is presented.

1. Introduction

Edge-independent random graphs are random graph models with independent but (possibly) heterogeneous edge probabilities, generalizing the model with constant edge probability introduced by Erdős and Rényi [1,2]. Given a real symmetric matrix $A = (p_{ij}) \in \mathbb{R}^{n \times n}$ with $p_{ij} \in [0,1]$, the edge-independent random graph model $G_n(p_{ij})$ [3] is defined as a random graph on the vertex set $[n] = \{1, 2, \ldots, n\}$ that includes each edge $(i,j)$ independently with probability $p_{ij}$. Clearly, the classical Erdős–Rényi random graphs and the Chung–Lu models [4] with given expected degrees are two special cases of $G_n(p_{ij})$.
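The model is straightforward to simulate. The following Python sketch (our own illustrative code; the function name and the toy block-structured matrix are not from the paper, and self-loops are omitted for simplicity) samples one graph from $G_n(p_{ij})$:

```python
import numpy as np

def sample_edge_independent_graph(P, seed=None):
    """Sample a graph from G_n(p_ij): edge (i, j), i < j, appears
    independently with probability P[i, j]; P is symmetric with entries in [0, 1]."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    coins = rng.random((n, n))
    upper = np.triu(coins < P, k=1)       # one independent coin per pair i < j
    return (upper | upper.T).astype(int)  # symmetric 0/1 adjacency matrix

# Toy example: a two-block probability matrix (a small stochastic block model)
n = 6
P = np.full((n, n), 0.1)
P[:3, :3] = P[3:, 3:] = 0.8
G = sample_edge_independent_graph(P, seed=42)
print(G.sum(axis=1))                      # the degree sequence d_1, ..., d_n
```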
Edge-independent random graphs are applicable in a range of areas, such as the modeling of social networks and the detection of community structures [5,6]. The number of interacting nodes is typically large in practical applications, and it is appropriate to investigate the statistical properties of parameters of interest. The Estrada index and the normalized Laplacian Estrada index of $G_n(p_{ij})$ for large $n$ are examined in [7]. The problem of bounding the difference between the eigenvalues of $A$ and those of the adjacency matrix of $G_n(p_{ij})$, together with its Laplacian-spectrum version, has been studied intensively in recent years; see, e.g., [3,8,9]. It is revealed in [9] that large deviations from the expected spectrum are caused by vertices with extremal degrees: abnormally high-degree and low-degree vertices are obstructions to concentration of the adjacency and the Laplacian matrices, respectively. A regularization technique is employed there to address this issue.
Relative entropy [10] is a key notion in quantum information theory, ergodic theory, and statistical mechanics. It measures the difference between two probability distributions; see, e.g., [11,12,13,14,15,16,17] for various applications of relative entropy in the physical, chemical, and engineering sciences.
Inspired by the above considerations, in this paper we study the extremal degrees of the edge-independent random graph $G_n(p_{ij})$ in the thermodynamic limit, namely, as $n$ tends to infinity. Our approach is based on concentration inequalities, in which the notion of relative entropy plays a critical role. We first develop the theory for the maximum and minimum degrees of $G_n(p_{ij})$ in Section 2, and then present an application to the random graph model $G(w)$ with given expected degree sequence $w$, together with a discussion of possible future directions, in Section 3. Various combinatorial and geometric properties of $G(w)$, including hyperbolicity and warmth, have been reported; see, e.g., [18,19,20].

2. Bounds for Maximum and Minimum Degrees

Recall that $A = (p_{ij}) \in \mathbb{R}^{n \times n}$ is a real symmetric matrix. Its eigenvalues can be ordered as $\lambda_1(A) \ge \lambda_2(A) \ge \cdots \ge \lambda_n(A)$. Given a graph $G \in G_n(p_{ij})$, let $\Delta(G)$ and $\delta(G)$ be its maximum and minimum degrees, respectively. The maximum expected degree of $G$ is denoted by $\Delta(A)$, which is equivalent to the maximum row sum of $A$. Let $\bar p = \max_{i,j} p_{ij}$ and $\underline p = \min_{i,j} p_{ij}$ represent the maximum and minimum elements, respectively, in $A$. We say that a graph property $P$ holds in $G_n(p_{ij})$ asymptotically almost surely (a.a.s.) if the probability that a random graph $G \in G_n(p_{ij})$ has $P$ converges to 1 as $n$ goes to infinity.
Theorem 1. 
For an edge-independent random graph $G$, suppose that $\Delta(A) \gg \ln^4 n$. Then

$$\lambda_1(A) - (2+o(1))\sqrt{\Delta(A)} \le \Delta(G) \le (1+o(1))\, n \bar p \quad a.a.s. \qquad (1)$$
Proof. 
The lower bound is straightforward since

$$\Delta(G) \ge \lambda_1(G) \ge \lambda_1(A) - (2+o(1))\sqrt{\Delta(A)} \quad a.a.s. \qquad (2)$$
by employing Theorem 1 in [3].
For the upper bound, we write $d_i = d_i(G)$ for the degree of vertex $i$ in $G$. By construction, $d_i \sim \sum_{j=1}^{n} \mathrm{Ber}(p_{ij})$ is distributed as a sum of $n$ independent Bernoulli random variables. If $\bar p = 1$, the upper bound in Equation (1) holds trivially. Therefore, we assume $\bar p < 1$ in the sequel.
For any non-decreasing function $f(x)$ on the interval $[0,n]$, Markov's inequality [2] implies that for $\bar p < a < 1$,

$$P(d_i \ge an) = P\big(f(d_i) \ge f(an)\big) \le \frac{E\big(f(d_i)\big)}{f(an)} \le \frac{\sum_{k=0}^{n} f(k)\binom{n}{k}\bar p^{\,k}(1-\bar p)^{n-k}}{f(an)}, \qquad (3)$$

where the last step holds because $d_i$ is stochastically dominated by the binomial distribution $\mathrm{Bin}(n, \bar p)$ and $f$ is non-decreasing.
By choosing $f(x) = (a/\bar p)^{x}\big((1-a)/(1-\bar p)\big)^{n-x}$, we obtain from (3) that

$$P(\Delta(G) \ge an) \le \sum_{i=1}^{n} P(d_i \ge an) \le n\, e^{-n\,\mathrm{Ent}(a,\bar p)}, \qquad (4)$$
where $\mathrm{Ent}(a, \bar p) = a \ln(a/\bar p) + (1-a)\ln\big((1-a)/(1-\bar p)\big)$ is the so-called relative entropy [10].
Recall the Taylor expansion $\ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + \cdots$. For any $\varepsilon > 0$, we have $\ln\big((1-\bar p)/(1-(1+\varepsilon)\bar p)\big) \le (1-\bar p)/(1-(1+\varepsilon)\bar p) - 1$ and

$$(1+\varepsilon)\ln(1+\varepsilon) - \varepsilon \ge (1+\varepsilon)\Big(\varepsilon - \frac{\varepsilon^2}{2}\Big) - \varepsilon = \frac{1}{2}\varepsilon^2(1-\varepsilon). \qquad (5)$$
Now, we choose $\varepsilon > 0$ satisfying $a = (1+\varepsilon)\bar p < 1$. Therefore, if $\varepsilon = o(1)$ as $n \to \infty$, the above comments and the inequality (4) yield

$$P\big(\Delta(G) \ge (1+\varepsilon)\, n \bar p\big) \le n\, e^{-\frac{1}{2} n \bar p\, \varepsilon^2 (1+o(1))}. \qquad (6)$$
By assumption, we have $n \bar p \ge \Delta(A) \gg \ln^4 n$. We choose $\varepsilon = \sqrt{(\ln^2 n)/(n \bar p)} = o(1)$. Hence, the estimate (6) implies that $\Delta(G) \le (1+o(1))\, n \bar p$ asymptotically almost surely, which concludes the proof of the upper bound.  ☐
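Numerically, the tail bound (4) is simple to evaluate. The following sketch (with parameter values chosen purely for illustration) computes $\mathrm{Ent}(a, \bar p)$ and the resulting bound on the probability that the maximum degree exceeds $an$:

```python
import numpy as np

def ent(a, p):
    """Relative entropy Ent(a, p) = a ln(a/p) + (1-a) ln((1-a)/(1-p))
    between Bernoulli(a) and Bernoulli(p), for 0 < p < a < 1."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

# Bound (4): P(max degree >= a*n) <= n * exp(-n * Ent(a, p_bar)).
n, p_bar = 10**5, 0.01
a = 1.2 * p_bar                        # threshold a = (1 + eps) * p_bar, eps = 0.2
bound = n * np.exp(-n * ent(a, p_bar))
print(f"Ent = {ent(a, p_bar):.3e}, tail bound = {bound:.3e}")  # already ~1e-3
```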
Remark 1. 
The lower bound in (1) is best possible according to [3]. The upper bound in (1) is also essentially best possible. Indeed, suppose that there exists $0 < q < \bar p$ satisfying $\Delta(G) \le (1+o(1))\, nq$ a.a.s. Given an $i_0 \in [n]$, we define $p_{i_0 j} = \bar p$ for all $j$. Hence, $E(d_{i_0}) > qn$. Using the Chernoff bound, we deduce that $P\big(d_{i_0} > (1+o(1))\, nq\big) = 1 - o(1)$, which contradicts the assumption.
Remark 2. 
The use of Markov's inequality in (3) is of course reminiscent of the Chernoff bound, which is a common tool for bounding tail probabilities [2]. However, we mention that the relative entropy $\mathrm{Ent}(a, \bar p)$ plays an essential role here that cannot simply be replaced by Chernoff-type bounds. Chernoff's inequality (see, e.g., Lem. 1 in [4]) gives

$$P\Big(d_i \ge (1+\varepsilon)\sum_{j=1}^{n} p_{ij}\Big) \le e^{-(\sum_{j=1}^{n} p_{ij})\,\varepsilon^2/3} \le e^{-n \underline p\, \varepsilon^2/3}, \qquad (7)$$

which produces a tight upper bound only if $\underline p = \bar p$. Similar comments apply to Theorem 2 below for the minimum degree of $G_n(p_{ij})$.
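As a quick numerical illustration of this remark (a sketch with arbitrary, illustrative parameter values): for a heterogeneous matrix with $\underline p \ll \bar p$, the exponent $n\,\mathrm{Ent}((1+\varepsilon)\bar p, \bar p)$ in (4) is orders of magnitude larger than the exponent $n \underline p\,\varepsilon^2/3$ in (7).

```python
import numpy as np

# Exponents (per factor n) of the two tail bounds at threshold a = (1+eps)*p_bar.
p_bar, p_min, eps = 0.01, 0.0001, 0.2   # heterogeneous: p_min << p_bar
a = (1 + eps) * p_bar
entropy_exp = a * np.log(a / p_bar) + (1 - a) * np.log((1 - a) / (1 - p_bar))
chernoff_exp = p_min * eps**2 / 3       # exponent in the Chernoff bound (7)
print(f"relative entropy exponent: {entropy_exp:.2e}")  # ~1.9e-04
print(f"Chernoff exponent:         {chernoff_exp:.2e}")  # ~1.3e-06, far weaker
```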
Remark 3. 
Notice that $\Delta(A) \le n \bar p$. It is easy to see that the upper bound $\Delta(G) \le (1+o(1))\, n \bar p$ holds a.a.s. provided $\bar p \gg (\ln n)/n$. In fact, it suffices to take $\varepsilon = \sqrt{2(\ln n)/(n \bar p)}$ in the above proof.
Remark 4. 
If $p_{ij} = p$ for all $i$ and $j$, the edge-independent model $G_n(p_{ij})$ reduces to the Erdős–Rényi random graph $G_n(p)$ (with possible self-loops; however, this is not essential throughout this paper). Since $\Delta(A) = \lambda_1(A) = np$, Theorem 1 implies that for $G \in G_n(p)$, if $np \gg \ln^4 n$, we have $\Delta(G) = (1+o(1))\, np$ a.a.s. However, this result is already known to be true under an even weaker condition, namely, $np \gg \ln n$ (see, e.g., p. 72, Cor. 3.14 in [1], and [21]). It is reasonable to expect that our Theorem 1 holds as long as $\Delta(A) \gg \ln n$; unfortunately, we do not have a proof at present.
This also lends support to the conjecture made in [3] that Theorem 1 therein (regarding the behavior of the adjacency eigenvalues of edge-independent random graphs) holds when $\Delta(A) \gg \ln n$. A partial solution in this direction can be found in [8].
Theorem 2. 
Let $G$ be an edge-independent random graph.
(A) 
If $\underline p \gg (\ln n)/n$, then $(1+o(1))\, n \underline p \le \delta(G) \le (1+o(1))\, n \bar p$ a.a.s.;
(B) 
If $\Delta(A) \gg \ln^4 n$, then $\delta(G) \le n - \lambda_1(A) + (2+o(1))\sqrt{\Delta(A)}$ a.a.s.
Proof. 
The statement (B) follows directly from Theorem 1 by noting that $\delta(G) = n - \Delta(G^c)$, where $G^c$ is the complement of $G$. Since $\bar p \ge \underline p \gg (\ln n)/n$ and $\delta(G) \le \Delta(G)$, the upper bound in statement (A) follows immediately from Remark 3. It remains to prove the lower bound in statement (A).
To show the lower bound, we address three cases separately.
Case 1. $\underline p = 1$. It is clear that $\delta(G) \ge (1+o(1))\, n \underline p$ a.a.s. in this case.
Case 2. $1 - \underline p = O\big((\ln n)^{1/3}/n^{1/3}\big)$.
For any non-decreasing function $g(x)$ on the interval $[0,n]$, Markov's inequality indicates that for $0 < a < \underline p < 1$,

$$P(d_i \le an) = P\big(g(n - d_i) \ge g(n - an)\big) \le \frac{E\big(g(n - d_i)\big)}{g(n - an)} \le \frac{\sum_{k=0}^{n} g(n-k)\binom{n}{k}\underline p^{\,k}(1-\underline p)^{n-k}}{g(n - an)}, \qquad (8)$$

where the last step holds because $n - d_i$ is stochastically dominated by $\mathrm{Bin}(n, 1-\underline p)$ and $g$ is non-decreasing.
By choosing $g(x) = (a/\underline p)^{\,n-x}\big((1-a)/(1-\underline p)\big)^{x}$, we obtain from (8) that

$$P(d_i \le an) \le e^{-n\,\mathrm{Ent}(a,\underline p)}, \qquad (9)$$
where $\mathrm{Ent}(a, \underline p)$ is the relative entropy defined in the proof of Theorem 1.
In the following, we choose $a \to 1$ with $1 - \underline p \ll 1 - a$ as $n \to \infty$. Hence, from (9) we obtain

$$P(\delta(G) \le an) \le n \exp\left(-n\left[a\ln\frac{a}{\underline p} + (1-a)\ln\frac{1-a}{1-\underline p}\right]\right) \le \exp\left(-n\left[(1-a)\ln\frac{1-a}{1-\underline p} - 2(1-a)\right] + \ln n\right), \qquad (10)$$
where in the second inequality we have used the following estimate, which is equivalent to $a \ln(a/\underline p) \ge -2(1-a)$:

$$e^{-2(1-a)/a} \le 1 - \frac{2(1-a)}{a} + \frac{2(1-a)^2}{a^2} = \frac{\frac{1}{1-a} - 3}{\frac{1}{1-a} - 1}\Big(1 + \Theta\big((1-a)^2\big)\Big) \le \frac{\frac{1}{1-a} - 1}{\frac{1}{1-a} - \frac{1-\underline p}{1-a}} = \frac{a}{\underline p}. \qquad (11)$$
By the assumption of Case 2, we have $1 - \underline p \le c\,(\ln n)^{1/3}/n^{1/3}$ for some $c > 0$. By choosing $1 - a = (\ln n)^{2/3}/n^{1/3}$, we obtain

$$(1-a)\ln\frac{1-a}{1-\underline p} - 2(1-a) \ge (1-a)\left(\ln\left(\frac{1-a}{c}\Big(\frac{n}{\ln n}\Big)^{1/3}\right) - 2\right) \ge \frac{2\ln n}{n} \qquad (12)$$

for all sufficiently large $n$.
Combining (10) and (12), we arrive at $P(\delta(G) \le an) \le e^{-\ln n} \to 0$ as $n \to \infty$.
Finally, by our choice of parameters, $a/\underline p = \big(\frac{1}{1-a} - 1\big)\big/\big(\frac{1}{1-a} - \frac{1-\underline p}{1-a}\big) \to 1$ as $n \to \infty$. Hence, $an = (1+o(1))\, n \underline p$, which completes the proof in this case.
Case 3. $1 - \underline p \gg (\ln n)^{1/3}/n^{1/3}$.
Notice that $G^c$ can be viewed as a random graph in $G_n(1 - p_{ij})$. Hence, the same arguments leading to (4) imply that

$$P\big(\Delta(G^c) \ge (1+\varepsilon)(1-\underline p)\, n\big) \le n\, e^{-n\,\mathrm{Ent}\left((1+\varepsilon)(1-\underline p),\; 1-\underline p\right)} \qquad (13)$$
for any $\varepsilon > 0$ satisfying $(1+\varepsilon)(1-\underline p) < 1$.
In the following, we take $\varepsilon$ with $\varepsilon/\underline p = o(1)$. Thus, the relative entropy in (13) can be bounded from below as

$$\mathrm{Ent}\big((1+\varepsilon)(1-\underline p),\, 1-\underline p\big) = (1+\varepsilon)(1-\underline p)\ln(1+\varepsilon) + (\underline p - \varepsilon + \underline p\,\varepsilon)\ln\Big(1 - \frac{\varepsilon(1-\underline p)}{\underline p}\Big)$$
$$\ge (1+\varepsilon)(1-\underline p)\Big(\varepsilon - \frac{\varepsilon^2}{2}\Big) - (\underline p - \varepsilon + \underline p\,\varepsilon)\Big(\frac{\varepsilon(1-\underline p)}{\underline p} + \frac{\varepsilon^2(1-\underline p)^2}{2\underline p^2} + \frac{\varepsilon^3(1-\underline p)^3}{\underline p^3}\Big)$$
$$= \frac{\varepsilon^2(1-\underline p)}{2\underline p}\left(1 - c_1\Big(\varepsilon + \frac{\varepsilon}{\underline p}\Big) + O(\underline p\,\varepsilon^2)\right), \qquad (14)$$
where $c_1 > 0$ is a constant. Combining (13) and (14), we obtain

$$P\big(\Delta(G^c) \ge (1+\varepsilon)(1-\underline p)\, n\big) \le \exp\left(-n\,\frac{\varepsilon^2(1-\underline p)}{2\underline p}\Big(1 - c_1\big(\varepsilon + \tfrac{\varepsilon}{\underline p}\big) + O(\underline p\,\varepsilon^2)\Big) + \ln n\right) \le \exp\left(-c_2\, n\,\frac{\varepsilon^2(1-\underline p)}{2\underline p} + \ln n\right), \qquad (15)$$
where $c_2 > 0$ is a constant. Here, in the second inequality of (15), we have employed the assumptions $1 - \underline p \gg (\ln n)^{1/3}/n^{1/3}$ and $\underline p \gg (\ln n)/n$.
Take $\varepsilon = \sqrt{3\underline p(\ln n)/(c_2(1-\underline p)n)}$ in the inequality (15). It is direct to check that $\varepsilon \to 0$ and $\varepsilon/\underline p \to 0$ as $n \to \infty$ under our assumptions. Hence, we have $P\big(\Delta(G^c) \ge (1+\varepsilon)(1-\underline p)\, n\big) = o(1)$, and

$$\delta(G) = n - \Delta(G^c) \ge n - (1+\varepsilon)(1-\underline p)\, n = (1+o(1))\, n \underline p \quad a.a.s. \qquad (16)$$
The last equality holds since $\varepsilon/\underline p = o(1)$. The proof is then complete.  ☐
Remark 5. 
As in Remark 1, the upper and lower bounds in Theorem 2 are essentially best possible.
Remark 6. 
When $p_{ij} = p$ for all $i$ and $j$, Theorem 2 reduces to the fact for the Erdős–Rényi model that $\delta(G) = (1+o(1))\, np$ a.a.s. provided $p \gg (\ln n)/n$. This result is already known (see, e.g., p. 152 in [2]) and is proved there by a more sophisticated technique, Stein's method. A more or less similar approach appears in [21].

3. An Application to Random Graphs with Given Expected Degrees

The random graph model $G(w)$ with given expected degree sequence $w = (w_1, w_2, \ldots, w_n)$ is defined by including each edge between vertices $i$ and $j$ independently with probability $p_{ij} = w_i w_j/\mathrm{Vol}(G)$, where the volume $\mathrm{Vol}(G) = \sum_{i=1}^{n} w_i$ [4,18]. By definition, we have $\Delta(A) = w_{\max} := \max_{1 \le i \le n} w_i$, $\bar p = w_{\max}^2/\mathrm{Vol}(G)$ and $\underline p = w_{\min}^2/\mathrm{Vol}(G)$, where $w_{\min} := \min_{1 \le i \le n} w_i$. Moreover, let the second-order volume and the expected second-order average degree be $\mathrm{Vol}_2(G) = \sum_{i=1}^{n} w_i^2$ and $\tilde w = \mathrm{Vol}_2(G)/\mathrm{Vol}(G)$, respectively.
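In code, the probability matrix of $G(w)$ is a rank-one outer product, which makes the model easy to set up. A minimal sketch (our own helper name, assuming $w_{\max}^2 \le \mathrm{Vol}(G)$ so that all $p_{ij} \le 1$):

```python
import numpy as np

def chung_lu_matrix(w):
    """Edge probability matrix p_ij = w_i * w_j / Vol(G) of the model G(w).
    Requires w_max^2 <= Vol(G) so that every p_ij lies in [0, 1]."""
    w = np.asarray(w, dtype=float)
    P = np.outer(w, w) / w.sum()
    assert P.max() <= 1.0, "w_max^2 must not exceed Vol(G)"
    return P

w = np.array([2.0] * 8 + [4.0] * 2)   # toy expected degree sequence, Vol = 24
P = chung_lu_matrix(w)
print(np.allclose(P.sum(axis=1), w))  # True: row sums recover the expected degrees
```

Note that the row-sum identity above uses the convention, as in the paper, that self-loops are allowed (the diagonal terms $w_i^2/\mathrm{Vol}(G)$ are kept).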
An application of Theorem 1 to $G(w)$ yields the following corollary on the maximum degree of $G(w)$.
Corollary 1. 
For a random graph $G \in G(w)$, suppose that $w_{\max} \gg \ln^4 n$. Then

$$\tilde w - (2+o(1))\sqrt{w_{\max}} + o(1) \le \Delta(G) \le (1+o(1))\,\frac{n\, w_{\max}^2}{\mathrm{Vol}(G)} \quad a.a.s. \qquad (17)$$
Proof. 
The results follow immediately from Theorem 1 by noting that $\lambda_1(A) \ge \tilde w + o(1)$ (see, e.g., p. 163, Lem. 8.7 in [18]).  ☐
Analogously, the following result holds for the minimum degree of $G(w)$.
Corollary 2. 
Let $G$ be a random graph in $G(w)$.
(A) 
If $w_{\min}^2/\mathrm{Vol}(G) \gg (\ln n)/n$, then

$$(1+o(1))\,\frac{n\, w_{\min}^2}{\mathrm{Vol}(G)} \le \delta(G) \le (1+o(1))\,\frac{n\, w_{\max}^2}{\mathrm{Vol}(G)} \quad a.a.s.; \qquad (18)$$
(B) 
If $w_{\max} \gg \ln^4 n$, then

$$\delta(G) \le n - \tilde w + (2+o(1))\sqrt{w_{\max}} + o(1) \quad a.a.s. \qquad (19)$$
To illustrate the applicability of the above results, we study two numerical examples.
Example 1. 
Consider the random graph model $G(w)$ with $w_1 = \cdots = w_{n/2} = \ln^4 n$ and $w_{n/2+1} = \cdots = w_n = \ln^5 n$. This model is more or less similar to a homogeneous Erdős–Rényi random graph. It is straightforward to check that all conditions in Corollary 1 and Corollary 2 hold. In Table 1, we compare the theoretical bounds on the maximum degree obtained in Corollary 1 with numerical values computed in Matlab. The analogous results for the minimum degree are reported in Table 2. We observe that the simulations are in line with the theory. It turns out that the upper bound for the maximum degree and the lower bound for the minimum degree are more accurate.
Table 1. Maximum degree $\Delta(G)$ of $G \in G(w)$ with $w = (\ln^4 n, \ldots, \ln^4 n, \ln^5 n, \ldots, \ln^5 n)$ (half of the weights equal $\ln^4 n$). The theoretical upper and lower bounds are calculated from Corollary 1. Numerical results are averages over 20 independent runs.
n | Theoretical Lower Bound | Δ(G) | Theoretical Upper Bound
9.8 × 10^5 | 4.683 × 10^5 − (2 + o(1)) · 706.85 + o(1) | 7.260 × 10^5 | (1 + o(1)) · 9.317 × 10^5
9.9 × 10^5 | 4.701 × 10^5 − (2 + o(1)) · 708.15 + o(1) | 7.283 × 10^5 | (1 + o(1)) · 9.352 × 10^5
10.0 × 10^5 | 4.718 × 10^5 − (2 + o(1)) · 709.44 + o(1) | 7.305 × 10^5 | (1 + o(1)) · 9.387 × 10^5
10.1 × 10^5 | 4.735 × 10^5 − (2 + o(1)) · 710.72 + o(1) | 7.327 × 10^5 | (1 + o(1)) · 9.421 × 10^5
10.2 × 10^5 | 4.752 × 10^5 − (2 + o(1)) · 711.99 + o(1) | 7.351 × 10^5 | (1 + o(1)) · 9.455 × 10^5
Table 2. Minimum degree $\delta(G)$ of $G \in G(w)$ with $w = (\ln^4 n, \ldots, \ln^4 n, \ln^5 n, \ldots, \ln^5 n)$ (half of the weights equal $\ln^4 n$). The theoretical upper and lower bounds are calculated from Corollary 2. Numerical results are averages over 20 independent runs.
n | Theoretical Lower Bound | δ(G) | Theoretical Upper Bound
9.8 × 10^5 | (1 + o(1)) · 4.896 × 10^3 | 1.826 × 10^4 | 5.117 × 10^5 + (2 + o(1)) · 706.85 + o(1)
9.9 × 10^5 | (1 + o(1)) · 4.907 × 10^3 | 1.829 × 10^4 | 5.200 × 10^5 + (2 + o(1)) · 708.15 + o(1)
10.0 × 10^5 | (1 + o(1)) · 4.918 × 10^3 | 1.832 × 10^4 | 5.282 × 10^5 + (2 + o(1)) · 709.44 + o(1)
10.1 × 10^5 | (1 + o(1)) · 4.929 × 10^3 | 1.834 × 10^4 | 5.365 × 10^5 + (2 + o(1)) · 710.72 + o(1)
10.2 × 10^5 | (1 + o(1)) · 4.940 × 10^3 | 1.837 × 10^4 | 5.448 × 10^5 + (2 + o(1)) · 711.99 + o(1)
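The simulations above were carried out in Matlab. The following self-contained Python sketch runs the same kind of experiment at a much smaller scale (both $n$ and the weights are scaled down, since the weights $\ln^4 n$, $\ln^5 n$ require $n \approx 10^6$ to keep every $p_{ij} \le 1$; all names and parameter values here are our own and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, runs = 2000, 20                    # scaled-down stand-in for the tables above
w = np.where(np.arange(n) < n // 2, np.log(n)**2, np.log(n)**3)
vol = w.sum()
P = np.outer(w, w) / vol              # p_ij = w_i * w_j / Vol(G), all <= 1 here

avg_max = avg_min = 0.0
for _ in range(runs):
    upper = np.triu(rng.random((n, n)) < P, k=1)
    d = (upper | upper.T).sum(axis=1) # degree sequence of one sampled graph
    avg_max += d.max() / runs
    avg_min += d.min() / runs

print("avg max degree:", avg_max, "| upper bound ~", n * w.max()**2 / vol)
print("avg min degree:", avg_min, "| lower bound ~", n * w.min()**2 / vol)
```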
Example 2. 
Power-law graphs, which are prevalent in real-life networks, can also be constructed within the Chung–Lu model $G(w)$ [18]. Given a scaling exponent $\beta$, an average degree $d := \mathrm{Vol}(G)/n$, and a maximum expected degree $w_{\max}$, a power-law random graph $G(w)$ is defined by taking $w_i = c\, i^{-1/(\beta-1)}$ for $i_0 \le i < i_0 + n$, where

$$c = \frac{\beta-2}{\beta-1}\, d\, n^{\frac{1}{\beta-1}} \quad \text{and} \quad i_0 = n \left(\frac{d(\beta-2)}{w_{\max}(\beta-1)}\right)^{\beta-1}. \qquad (20)$$
We choose $\beta = 2.5$, $w_{\max} = \sqrt{n}$, and $d = (\ln n)^2$. It is direct to check that the conditions in Corollary 1 and Corollary 2 hold.
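For readers who wish to reproduce this setup, here is a short sketch of the construction (20) (our own function name; the default parameters mirror the choices above, and one can verify from (20) that the largest weight equals $w_{\max}$):

```python
import numpy as np

def power_law_weights(n, beta=2.5, d=None, w_max=None):
    """Expected degrees w_i = c * i**(-1/(beta-1)) for i0 <= i < i0 + n,
    with c and i0 as in Equation (20)."""
    d = np.log(n)**2 if d is None else d             # average degree
    w_max = np.sqrt(n) if w_max is None else w_max   # maximum expected degree
    c = (beta - 2) / (beta - 1) * d * n**(1 / (beta - 1))
    i0 = n * (d * (beta - 2) / (w_max * (beta - 1)))**(beta - 1)
    return c * (i0 + np.arange(n))**(-1 / (beta - 1))

w = power_law_weights(10**5)
print(w[0], np.sqrt(10**5))   # the first (largest) weight equals w_max = sqrt(n)
```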
In Figure 1, we show the maximum and minimum degrees as well as the theoretical bounds for $G(w)$ with different numbers of vertices. Note that the upper bound in (19) is worse than that in (18) for this example. We thus use the same upper bound for both $\Delta(G)$ and $\delta(G)$ in Figure 1.
Figure 1. Extremal degrees versus the number of vertices $n$. The theoretical upper and lower bounds are from (17) and (18). Each data point is an ensemble average over 30 independent runs of 10 graphs each, yielding a statistically ample sample.
Interestingly, as in Example 1, we observe that the upper bound for the maximum degree and the lower bound for the minimum degree seem to be more accurate. It is known that large deviation phenomena are normally associated with a global hard constraint fighting against a local soft constraint. We contend that the deviations from the expected degree sequence here are due to a fight between the constrained degree sequence and the imposed edge-independency.
Figure 2. A depiction of the small-world graph $S(n, p, C_{2k})$ with $n = 16$, $p = 0$, and $k = 3$.
As follow-up work inspired by the above examples, it would be of interest to identify all the graphs that are close to the theoretical upper or lower bounds. As an illustrative example, we consider the small-world graph $G = S(n, p, C_{2k})$ ($k \ge 1$) studied in [22,23], which can be viewed as the union of a random graph $G_n(p)$ and a ring on $n$ vertices in which each vertex has edges to precisely its $k$ subsequent and $k$ previous neighbors (see, e.g., Figure 2). In the special case of $p = 0$, $G$ becomes a regular graph, and we know that $\lambda_1(A) = \Delta(G) = 2k$, where $A$ is the adjacency matrix of $G$. If $k \gg \ln^4 n$ holds, it follows from Theorem 1 that
$$2k - (2+o(1))\sqrt{2k} \le \Delta(G) = 2k \le (1+o(1))\, n \quad a.a.s. \qquad (21)$$
Clearly, the upper bound is close if $k$ is large, while the lower bound tends to be more accurate if $k$ is small. In general, when $p \gg (\ln^4 n)/n$ holds, for any $k \ge 1$ it follows from Theorem 1 that

$$\lambda_1(A) - (2+o(1))\sqrt{2k + (n-2k-1)p} \le \Delta(G) \le (1+o(1))\, n \quad a.a.s. \qquad (22)$$
Note that the second largest eigenvalue of the adjacency matrix of $S(n, 0, C_{2k})$, which is a circulant matrix, is

$$\lambda_2\big(A(S(n, 0, C_{2k}))\big) = \frac{\sin\big(\pi(2k+1)/n\big)}{\sin(\pi/n)} - 1 = (2+o(1))\, k, \qquad (23)$$
as $n \to \infty$. Utilizing the edge version of Cauchy's interlacing theorem together with (22) and (23), we derive the following estimate:

$$(2+o(1))\, k - (2+o(1))\sqrt{2k + (n-2k-1)p} \le \Delta(G) \le (1+o(1))\, n \quad a.a.s. \qquad (24)$$
The gap between the upper and lower bounds can be quite small provided $k$ attains its maximum, namely $(n-1)/2$.
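Equation (23) is easy to confirm numerically, since the adjacency matrix of $S(n, 0, C_{2k})$ is circulant. The sketch below (our own helper code) checks both $\lambda_1 = 2k$ and the closed form for $\lambda_2$:

```python
import numpy as np

def ring_adjacency(n, k):
    """Adjacency matrix of S(n, 0, C_2k): each vertex i is joined to its
    k subsequent and k previous neighbours on the ring."""
    A = np.zeros((n, n), dtype=int)
    idx = np.arange(n)
    for shift in range(1, k + 1):
        A[idx, (idx + shift) % n] = 1
        A[idx, (idx - shift) % n] = 1
    return A

n, k = 1000, 3
eigs = np.sort(np.linalg.eigvalsh(ring_adjacency(n, k)))[::-1]
lam2 = np.sin(np.pi * (2 * k + 1) / n) / np.sin(np.pi / n) - 1
print(eigs[0], 2 * k)   # largest eigenvalue equals 2k exactly
print(eigs[1], lam2)    # second largest matches Equation (23), ~ 2k for large n
```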

Acknowledgments

The author would like to thank the anonymous reviewers and Academic Editor for the insightful and constructive suggestions. This work is funded by the National Natural Science Foundation of China (11505127), the Shanghai Pujiang Program (15PJ1408300), and the Program for Young Excellent Talents in Tongji University (2014KJ036).

Conflicts of Interest

The author declares no conflict of interest.

References

1. Bollobás, B. Random Graphs, 2nd ed.; Cambridge University Press: Cambridge, UK, 2001.
2. Janson, S.; Łuczak, T.; Ruciński, A. Random Graphs; Wiley: New York, NY, USA, 2000.
3. Lu, L.; Peng, X. Spectra of edge-independent random graphs. Electron. J. Comb. 2013; arXiv:1204.6207.
4. Chung, F.; Lu, L. Connected components in random graphs with given expected degree sequences. Ann. Comb. 2002, 6, 125–145.
5. Abbe, E.; Sandon, C. Community detection in the general stochastic block model: Fundamental limits and efficient algorithms for recovery. In Proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science, Berkeley, CA, USA, 18–20 October 2015.
6. Chin, P.; Rao, A.; Vu, V. Stochastic block model and community detection in sparse graphs: A spectral algorithm with optimal rate of recovery. In Proceedings of the JMLR Workshop and Conference, Paris, France, 2–6 July 2015.
7. Shang, Y. Estrada and L-Estrada indices of edge-independent random graphs. Symmetry 2015, 7, 1455–1462.
8. Lei, J.; Rinaldo, A. Consistency of spectral clustering in stochastic block models. Ann. Stat. 2015, 43, 215–237.
9. Le, C.M.; Vershynin, R. Concentration and regularization of random graphs. arXiv 2015, arXiv:1506.00669.
10. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 1991.
11. Vedral, V. The role of relative entropy in quantum information theory. Rev. Mod. Phys. 2002, 74.
12. Blanco, D.D.; Casini, H.; Hung, L.-Y.; Myers, R.C. Relative entropy and holography. J. High Energy Phys. 2013, 2013.
13. Lin, S.; Gao, S.; He, Z.; Deng, Y. A pilot directional protection for HVDC transmission line based on relative entropy of wavelet energy. Entropy 2015, 17, 5257–5273.
14. Gaveau, B.; Granger, L.; Moreau, M.; Schulman, L.S. Relative entropy, interaction energy and the nature of dissipation. Entropy 2014, 16, 3173–3206.
15. Prehl, J.; Boldt, F.; Essex, C.; Hoffmann, K.H. Time evolution of relative entropies for anomalous diffusion. Entropy 2013, 15, 2989–3006.
16. Lods, B.; Pistone, G. Information geometry formalism for the spatially homogeneous Boltzmann equation. Entropy 2015, 17, 4323–4363.
17. Mohammad-Djafari, A. Entropy, information theory, information geometry and Bayesian inference in data, signal and image processing and inverse problems. Entropy 2015, 17, 3989–4027.
18. Chung, F.; Lu, L. Complex Graphs and Networks; American Mathematical Society: Providence, RI, USA, 2006.
19. Shang, Y. Non-hyperbolicity of random graphs with given expected degrees. Stoch. Model. 2013, 29, 451–462.
20. Shang, Y. A note on the warmth of random graphs with given expected degrees. Int. J. Math. Math. Sci. 2014, 2014, 749856–749860.
21. Löwe, M.; Vermet, F. Capacity of an associative memory model on random graph architectures. Bernoulli 2015, 21, 1884–1910.
22. Gu, L.; Zhang, X.-D.; Zhou, Q. Consensus and synchronization problems on small-world networks. J. Math. Phys. 2010, 51, 082701.
23. Shang, Y. A sharp threshold for rainbow connection in small-world networks. Miskolc Math. Notes 2012, 13, 493–497.
