Article

Local Laws for Sparse Sample Covariance Matrices

by Alexander N. Tikhomirov * and Dmitry A. Timushev
Institute of Physics and Mathematics, Komi Science Center of Ural Branch of RAS, 167982 Syktyvkar, Russia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2326; https://doi.org/10.3390/math10132326
Submission received: 11 May 2022 / Revised: 16 June 2022 / Accepted: 29 June 2022 / Published: 3 July 2022
(This article belongs to the Special Issue Limit Theorems of Probability Theory)

Abstract: We proved the local Marchenko–Pastur law for sparse sample covariance matrices that corresponded to rectangular observation matrices of order $n \times m$ with $n/m \to y$ (where $y > 0$) and sparse probability $np_n > \log^{\beta} n$ (where $\beta > 0$). The bounds of the distance between the empirical spectral distribution function of the sparse sample covariance matrices and the Marchenko–Pastur law distribution function that was obtained in the complex domain $z \in \mathbb{D}$ with $\operatorname{Im} z \ge v_0 > 0$ (where $v_0$ is of order $n^{-1}\log^4 n$) were of order $\log^4 n/n$ and the domain bounds did not depend on $p_n$ while $np_n > \log^{\beta} n$.

1. Introduction

Random matrix theory (RMT) dates back to the work of Wishart in multivariate statistics [1], which was devoted to the joint distribution of the entries of sample covariance matrices. The next RMT milestone was the work of Wigner [2] in the middle of the last century, in which the modelling of the Hamiltonian of excited heavy nuclei using a large-dimensional random matrix was proposed, thereby replacing the study of the energy levels of nuclei with the study of the distribution of the eigenvalues of a random matrix. Wigner studied the eigenvalues of random Hermitian matrices with centred, independent and identically distributed elements (such matrices were later named Wigner matrices) and proved that the density of the empirical spectral distribution function of the eigenvalues of such matrices converges to the semicircle law as the matrix dimensions increase. Later, this convergence was named Wigner’s semicircle law and Wigner’s results were generalised in various aspects.
The breakthrough work of Marchenko and Pastur [3] gave impetus to new progress in the study of sample covariance matrices. Under quite general conditions, they found an explicit form of the limiting density of the expected empirical spectral distribution function of sample covariance matrices. Later, this convergence was named the Marchenko–Pastur law.
Sample covariance matrices are of great practical importance for the problems of multivariate statistical analysis, particularly for the method of principal component analysis (PCA). In recent years, many studies have appeared that have connected RMT with other rapidly developing areas, such as the theory of wireless communication and deep learning. For example, the spectral density of sample covariance matrices is used in calculations that relate to multiple input multiple output (MIMO) channel capacity [4]. An important object of study for neural networks is the loss surface. The geometry and critical points of this surface can be predicted using the Hessian of the loss function. A number of works that have been devoted to deep networks have suggested the application of various RMT models for Hessian approximation, thereby allowing the use of RMT results to reach specific conclusions about the nature of the critical points of the surface.
Another area of application for sample covariance matrices is graph theory. The adjacency matrix of a directed graph is generally asymmetric, so the study of its singular values leads to sample covariance matrices. An example of such graphs is the bipartite random graph, whose vertices can be divided into two groups such that the vertices within each group are not connected to each other.
If we assume that the probability p n of having graph edges tends to zero as the number of vertices n increases to infinity, we arrive at the concept of sparse random matrices. The behaviour of the eigenvalues and eigenvectors of a sparse random matrix significantly depends on its sparsity and results that are obtained for non-sparse matrices cannot be applied. Sparse sample covariance matrices have applications in random graph models [5] and deep learning problems [6] as well.
Sparse Wigner matrices have been considered in a number of papers (see [7,8,9,10]), in which many results have been obtained. With the symmetrisation of sample covariance matrices, it is possible to apply these results when the observation matrices are square. However, when the sample size is greater than the observation dimension, the limiting spectral distribution has a singularity at zero, which requires a different approach. The limiting spectral distribution of sparse sample covariance matrices with a sparsity of $np_n \sim n^{\epsilon}$ (where $\epsilon > 0$ was arbitrarily small) was studied in [11,12]. In particular, a local law was proven under the assumption that the matrix elements satisfied the moment conditions $\mathbf{E}\,|X_{jk}|^q \le (Cq)^{cq}$. In this paper, we considered a case with a sparsity of $np_n \sim \log^{\alpha} n$ for $\alpha > 1$ and assumed that the matrix element moments satisfied the conditions $\mathbf{E}\,|X_{jk}|^{4+\delta} \le C < \infty$ and $|X_{jk}| \le c_1 (np_n)^{\frac12 - \varkappa}$ for some $\varkappa > 0$.

2. Main Results

We let $m = m(n)$, where $m \ge n$. We considered the independent and identically distributed zero mean random variables $X_{jk}$, $1 \le j \le n$ and $1 \le k \le m$, with $\mathbf{E}\,X_{jk} = 0$ and $\mathbf{E}\,X_{jk}^2 = 1$ and an independent set of the independent Bernoulli random variables $\xi_{jk}$, $1 \le j \le n$ and $1 \le k \le m$, with $\mathbf{E}\,\xi_{jk} = p_n$. In addition, we supposed that $np_n \to \infty$ as $n \to \infty$. In what follows, we omitted the index $n$ from $p_n$ when this would not cause confusion.
We considered a sequence of random matrices:
$$X = \frac{1}{\sqrt{mp_n}}\bigl(\xi_{jk}X_{jk}\bigr)_{1\le j\le n,\ 1\le k\le m}.$$
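As a minimal sketch of this construction (the sizes and sparsity below are illustrative choices, not values from the paper), the matrix $X$ can be generated as follows; each entry then has variance $p/(mp) = 1/m$:

```python
import numpy as np

# Build the sparse observation matrix X = (m*p)^{-1/2} * (xi_jk * X_jk)
# with standard normal X_jk and an independent Bernoulli(p) mask xi_jk.
rng = np.random.default_rng(0)
n, m, p = 200, 400, 0.3            # hypothetical toy parameters

xi = rng.random((n, m)) < p        # Bernoulli(p) sparsity mask
G = rng.standard_normal((n, m))    # i.i.d. entries, mean 0, variance 1
X = xi * G / np.sqrt(m * p)

# Each entry of X has variance 1/m, so the eigenvalues of the sample
# covariance matrix W = X X^T stay of order one as n, m grow.
print(X.shape, X.var())
```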
Denoting by $s_1 \ge \cdots \ge s_n$ the singular values of $X$, the symmetrised empirical spectral distribution function (ESD) of the sample covariance matrix $W = XX^*$ was defined as:
$$F_n(x) = \frac{1}{2n}\sum_{j=1}^{n}\Bigl(\mathbb{I}\{s_j \le x\} + \mathbb{I}\{-s_j \le x\}\Bigr),$$
where $\mathbb{I}\{A\}$ stands for the indicator of the event $A$.
We let $y := y(n,m) = \frac{n}{m}$ and $G_y(x)$ be the symmetrised Marchenko–Pastur distribution function with the density:
$$g_y(x) = \frac{1}{2\pi y|x|}\sqrt{(x^2-a^2)(b^2-x^2)}\,\mathbb{I}\{a^2 \le x^2 \le b^2\},$$
where $a = 1-\sqrt{y}$ and $b = 1+\sqrt{y}$. We assumed that $y \le y_0 < 1$ for $n, m \ge 1$. When the Stieltjes transformation of the distribution function $G_y(x)$ was denoted by $S_y(z)$ and the Stieltjes transformation of the distribution function $F_n(x)$ was denoted by $s_n(z)$, we obtained:
$$S_y(z) = \frac{-z + \frac{1-y}{z} + \sqrt{\left(z - \frac{1-y}{z}\right)^2 - 4y}}{2y}, \qquad s_n(z) = \frac{1}{2n}\sum_{j=1}^{n}\left(\frac{1}{s_j-z} + \frac{1}{-s_j-z}\right) = \frac{1}{n}\sum_{j=1}^{n}\frac{z}{s_j^2-z^2}.$$
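Assuming the equivalent quadratic $yS_y^2(z) + \bigl(z - \frac{1-y}{z}\bigr)S_y(z) + 1 = 0$ (which the formula above solves), $S_y(z)$ can be checked numerically against direct integration of $g_y$; a sketch:

```python
import numpy as np

# Solve the quadratic y*S^2 + (z - (1-y)/z)*S + 1 = 0, keep the root with
# positive imaginary part (the Stieltjes-transform branch for Im z > 0),
# and compare it with the trapezoidal integral of g_y(x)/(x - z).
y = 0.5
a, b = 1 - np.sqrt(y), 1 + np.sqrt(y)
z = 0.8 + 0.5j                              # arbitrary test point, Im z > 0

S = max(np.roots([y, z - (1 - y) / z, 1.0]), key=lambda r: r.imag)

x = np.linspace(-b, b, 200001)
g = np.zeros_like(x)
supp = (np.abs(x) >= a) & (np.abs(x) <= b)
g[supp] = np.sqrt((x[supp]**2 - a**2) * (b**2 - x[supp]**2)) \
          / (2 * np.pi * y * np.abs(x[supp]))

f = g / (x - z)                             # integrand of the Stieltjes transform
S_num = np.sum((f[1:] + f[:-1]) / 2 * np.diff(x))
print(abs(S - S_num))                       # small: the two expressions agree
```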
We also put:
$$b(z) = z - \frac{1-y}{z} + 2yS_y(z) = -\frac{1}{S_y(z)} + yS_y(z).$$
In this paper, we proved the so-called local Marchenko–Pastur law for sparse covariance matrices. We let:
$$\Lambda_n := \Lambda_{n,y}(z) = s_n(z) - S_y(z).$$
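Although the results below control $\Lambda_n$ down to local scales, the global picture is easy to illustrate by simulation. A sketch with hypothetical parameters (the values of $n$, $m$ and $p$ are arbitrary choices, not from the paper) comparing the symmetrised ESD $F_n$ with the symmetrised Marchenko–Pastur distribution $G_y$ in the Kolmogorov distance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 500, 1000, 0.2
y = n / m
a, b = 1 - np.sqrt(y), 1 + np.sqrt(y)

# Sparse observation matrix and its symmetrised singular-value spectrum.
X = (rng.random((n, m)) < p) * rng.standard_normal((n, m)) / np.sqrt(m * p)
s = np.linalg.svd(X, compute_uv=False)
sym = np.sort(np.concatenate([s, -s]))

# Marchenko-Pastur CDF G_y by cumulative trapezoidal integration of g_y.
grid = np.linspace(-b - 0.2, b + 0.2, 4001)
g = np.zeros_like(grid)
supp = (np.abs(grid) >= a) & (np.abs(grid) <= b)
g[supp] = np.sqrt((grid[supp]**2 - a**2) * (b**2 - grid[supp]**2)) \
          / (2 * np.pi * y * np.abs(grid[supp]))
G = np.concatenate([[0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(grid))])

# Empirical symmetrised ESD and the Kolmogorov distance to G_y.
F = np.searchsorted(sym, grid, side='right') / sym.size
print(np.max(np.abs(F - G)))     # small already at these moderate sizes
```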
For a constant $\delta > 0$, we defined the value $\varkappa = \varkappa(\delta) := \frac{\delta}{2(4+\delta)}$. We assumed that the sparsity probability $p_n$ and the moments of the matrix elements $X_{jk}$ satisfied the following conditions:
  • Condition $(C0)$: for some $c_0 > 0$ and all $n \ge 1$, we have $np_n \ge c_0\log^{2/\varkappa} n$;
  • Condition $(C1)$: for $\delta > 0$, we have $\mu_{4+\delta} := \mathbf{E}\,|X_{11}|^{4+\delta} < \infty$;
  • Condition $(C2)$: a constant $c_1 > 0$ exists, such that for all $1 \le j \le n$ and $1 \le k \le m$, we have $|X_{jk}| \le c_1(np_n)^{\frac12-\varkappa}$.
We introduced the quantity $v_0 = v_0(a_0) := a_0 n^{-1}\log^4 n$ with a positive constant $a_0$. We then introduced the region:
$$\mathbb{D}(a_0) := \bigl\{ z = u+iv : (1-\sqrt{y}-v)_+ \le |u| \le 1+\sqrt{y}+v,\ v_0 \le v \le V \bigr\}.$$
For constants $u_0 > 0$ and $V$, we defined the region:
$$\widetilde{\mathbb{D}}(a_0,a_1) = \bigl\{ z = u+iv : |u| \le u_0,\ v_0 \le v \le V,\ |b(z)| \ge a_1\Gamma_n \bigr\}.$$
Next, we introduced some notations. We let:
$$\Gamma_n = 2C_0\log n\left(\frac{1}{nv} + \min\left\{\frac{1}{np|b(z)|},\ \frac{1}{\sqrt{np}}\right\}\right).$$
We introduced the quantity:
$$d(z) = \frac{\operatorname{Im} b(z)}{|b(z)|}$$
and put:
$$d_n(z) := \frac{d(z)}{nv} + \frac{\log n}{nv|b(z)|} + \frac{1}{np|b(z)|}.$$
We stated the improved bounds for $\Lambda_n(z)$ and put:
$$\mathcal{T}_n := \mathbb{I}\{|b(z)| \ge \Gamma_n\}\left(d_n(z) + \frac{d_n^{3/4}(z)}{(nv)^{1/4}} + \frac{d_n^{1/2}(z)}{(nv)^{1/2}}\right) + \mathbb{I}\{|b(z)| \le \Gamma_n\}\left(\left(\frac{\Gamma_n}{nv}\right)^{1/2} + \Gamma_n^{1/2}\left(\frac{\Gamma_n}{nv} + \frac{1}{\sqrt{np}}\right)^{1/2}\right).$$
Theorem 1.
Assume that the conditions $(C0)$–$(C2)$ are satisfied. Then, for any $Q \ge 1$, the positive constants $C = C(Q,\delta,\mu_{4+\delta},c_0,c_1)$, $K = K(Q,\delta,\mu_{4+\delta},c_0,c_1)$ and $a_0 = a_0(Q,\delta,\mu_{4+\delta},c_0,c_1)$ exist, such that for $z \in \mathbb{D}(a_0)$:
$$\Pr\bigl\{|\Lambda_n| \ge K\mathcal{T}_n\bigr\} \le Cn^{-Q}.$$
We also proved the following result.
Theorem 2.
Under the conditions of Theorem 1 and for $Q \ge 1$, the positive constants $C = C(Q,\delta,\mu_{4+\delta},c_0,c_1)$, $K = K(Q,\delta,\mu_{4+\delta},c_0,c_1)$, $a_0 = a_0(Q,\delta,\mu_{4+\delta},c_0,c_1)$ and $a_1 = a_1(Q,\delta,\mu_{4+\delta},c_0,c_1)$ exist, such that for $z \in \widetilde{\mathbb{D}}(a_0,a_1)$:
$$\Pr\bigl\{|\operatorname{Im}\Lambda_n| \ge K\mathcal{T}_n\bigr\} \le Cn^{-Q}.$$

2.1. Organisation

The paper is organised as follows. In Section 3, we state Theorems 3–5 and several corollaries. In Section 4, the delocalisation is considered. In Section 5, we prove the corollaries that were stated in Section 3. Section 6 is devoted to the proofs of the theorems. In Section 7, we state and prove some auxiliary results.

2.2. Notation

We use $C$ for large universal constants, which may be different from line to line. $S_y(z)$ and $s_n(z)$ denote the Stieltjes transformations of the symmetrised Marchenko–Pastur distribution and the spectral distribution function, respectively. $R(z)$ denotes the resolvent matrix. We let $\mathbb{T} = \{1,\ldots,n\}$, $J \subset \mathbb{T}$, $\mathbb{T}^{(1)} = \{1,\ldots,m\}$ and $K \subset \mathbb{T}^{(1)}$. We consider the $\sigma$-algebras $\mathcal{M}^{(J,K)}$, which were generated by the elements of $X$ (with the exception of the rows from $J$ and the columns from $K$). We write $\mathcal{M}_j^{(J,K)}$ instead of $\mathcal{M}^{(J\cup\{j\},K)}$ and $\mathcal{M}_{l+n}^{(J,K)}$ instead of $\mathcal{M}^{(J,K\cup\{l\})}$ for brevity. The symbol $X^{(J,K)}$ denotes the matrix $X$, from which the rows with numbers in $J$ and the columns with numbers in $K$ were deleted. In a similar way, we denote all objects in terms of $X^{(J,K)}$, such that the resolvent matrix is $R^{(J,K)}$, the ESD Stieltjes transformation is $s_n^{(J,K)}$, $\Lambda_n^{(J,K)}$, etc. The symbol $\mathbf{E}_j$ denotes the conditional expectation with respect to the $\sigma$-algebra $\mathcal{M}_j$ and $\mathbf{E}_{l+n}$ denotes the conditional expectation with respect to the $\sigma$-algebra $\mathcal{M}_{l+n}$. We let $J^c = \mathbb{T}\setminus J$ and $K^c = \mathbb{T}^{(1)}\setminus K$.

3. Main Equation and Its Error Term Estimation

Note that $F_n(x)$ is the ESD of the block matrix:
$$V = \begin{pmatrix} O_n & X \\ X^* & O_m \end{pmatrix},$$
where $O_k$ is a $k \times k$ matrix with zero elements.
We let $R = R(z)$ be the resolvent matrix of $V$:
$$R = (V - zI)^{-1}.$$
By applying the Schur complement, we obtained:
$$R = \begin{pmatrix} z(XX^* - z^2I)^{-1} & (XX^* - z^2I)^{-1}X \\ X^*(XX^* - z^2I)^{-1} & z(X^*X - z^2I)^{-1} \end{pmatrix}.$$
This implied:
$$s_n(z) = \frac{1}{n}\sum_{j=1}^{n} R_{jj} = \frac{1}{n}\sum_{l=1}^{m} R_{l+n,l+n} + \frac{m-n}{nz}.$$
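The trace identity above is exact for every realisation (the $m-n$ zero eigenvalues of $X^*X$ contribute $-1/z$ each) and can be checked directly on a small random instance; the sizes below are arbitrary:

```python
import numpy as np

# Verify (1/n) sum_{j<=n} R_jj = (1/n) sum_{l<=m} R_{l+n,l+n} + (m-n)/(n*z)
# for the resolvent R = (V - z I)^{-1} of the symmetrised block matrix V.
rng = np.random.default_rng(2)
n, m = 3, 5
X = rng.standard_normal((n, m))
V = np.block([[np.zeros((n, n)), X], [X.T, np.zeros((m, m))]])

z = 0.7 + 0.4j
R = np.linalg.inv(V - z * np.eye(n + m))

lhs = np.trace(R[:n, :n]) / n
rhs = np.trace(R[n:, n:]) / n + (m - n) / (n * z)
print(abs(lhs - rhs))    # ~ 0 up to rounding
```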
For the diagonal elements of $R$, we could write:
$$R_{jj}^{(J,K)} = S_y(z)\Bigl(1 - \varepsilon_j^{(J,K)}R_{jj}^{(J,K)} + y\Lambda_n^{(J,K)}R_{jj}^{(J,K)}\Bigr)$$
for $j \in J^c$ and:
$$R_{l+n,l+n}^{(J,K)} = -\frac{1}{z + yS_y(z)}\Bigl(1 - \varepsilon_{l+n}^{(J,K)}R_{l+n,l+n}^{(J,K)} + y\Lambda_n^{(J,K)}R_{l+n,l+n}^{(J,K)}\Bigr)$$
for $l \in K^c$. The correction terms $\varepsilon_j^{(J,K)}$ for $j \in J^c$ and $\varepsilon_{l+n}^{(J,K)}$ for $l \in K^c$ were defined as:
$$\varepsilon_j^{(J,K)} = \varepsilon_{j1}^{(J,K)} + \cdots + \varepsilon_{j3}^{(J,K)},$$
$$\varepsilon_{j1}^{(J,K)} = \frac{1}{m}\sum_{l=1}^{m}R_{l+n,l+n}^{(J,K)} - \frac{1}{m}\sum_{l=1}^{m}R_{l+n,l+n}^{(J\cup\{j\},K)},\qquad \varepsilon_{j2}^{(J,K)} = \frac{1}{mp}\sum_{l=1}^{m}\bigl(X_{jl}^2\xi_{jl} - p\bigr)R_{l+n,l+n}^{(J\cup\{j\},K)},$$
$$\varepsilon_{j3}^{(J,K)} = \frac{1}{mp}\sum_{1\le l\ne k\le m}X_{jl}X_{jk}\xi_{jl}\xi_{jk}R_{l+n,k+n}^{(J\cup\{j\},K)};$$
and
$$\varepsilon_{l+n}^{(J,K)} = \varepsilon_{l+n,1}^{(J,K)} + \cdots + \varepsilon_{l+n,3}^{(J,K)},$$
$$\varepsilon_{l+n,1}^{(J,K)} = \frac{1}{m}\sum_{j=1}^{n}R_{jj}^{(J,K)} - \frac{1}{m}\sum_{j=1}^{n}R_{jj}^{(J,K\cup\{l+n\})},\qquad \varepsilon_{l+n,2}^{(J,K)} = \frac{1}{mp}\sum_{j=1}^{n}\bigl(X_{jl}^2\xi_{jl} - p\bigr)R_{jj}^{(J,K\cup\{l+n\})},$$
$$\varepsilon_{l+n,3}^{(J,K)} = \frac{1}{mp}\sum_{1\le j\ne k\le n}X_{jl}X_{kl}\xi_{jl}\xi_{kl}R_{jk}^{(J,K\cup\{l+n\})}.$$
By summing Equation (4) ($J = \varnothing$ and $K = \varnothing$), we obtained the self-consistent equation:
$$s_n(z) = S_y(z)\bigl(1 - T_n + y\Lambda_n s_n(z)\bigr),$$
with the error term:
$$T_n = \frac{1}{n}\sum_{j=1}^{n}\varepsilon_j R_{jj}.$$
We let $s_0 > 1$ and $V$ be positive constants depending on $\delta$. The exact values of these constants were defined below. For $0 < v \le V$, we defined $k_v$ as:
$$k_v = k_v(V) := \min\{l \ge 0 : s_0^l v \ge V\}.$$
Remembering that:
$$\Lambda_n = \Lambda_n(z) := s_n(z) - S_y(z)$$
and:
$$\Gamma_n = 2C_0\log n\left(\frac{1}{nv} + \min\left\{\frac{1}{np|b(z)|},\ \frac{1}{\sqrt{np}}\right\}\right),$$
we defined:
$$a_n(z) = a_n(u,v) = \begin{cases}\operatorname{Im} b(z) + \Gamma_n, & \text{if } |b(z)| \ge \Gamma_n,\\ \Gamma_n, & \text{if } |b(z)| \le \Gamma_n.\end{cases}$$
The function $b(z)$ was defined in (2). For a given $\gamma > 0$, we considered the event:
$$\mathcal{Q}_\gamma(v) := \bigl\{|\Lambda_n(u+iv)| \le \gamma a_n(u,v) \text{ for all } u\bigr\}$$
and the event:
$$\widehat{\mathcal{Q}}_\gamma(v) = \bigcap_{l=0}^{k_v}\mathcal{Q}_\gamma(s_0^l v).$$
For any $\gamma$ value, the constant $V = V(\gamma)$ existed, such that:
$$\Pr\{\widehat{\mathcal{Q}}_\gamma(V)\} = 1.$$
It could be $V = 2/\gamma$, for example. In what follows, we assumed that $\gamma$ and $V$ were chosen so that (6) was satisfied and we wrote:
$$\mathcal{Q} := \widehat{\mathcal{Q}}_\gamma.$$
We defined:
$$\beta_n(z) := \frac{a_n(z)}{nv} + \frac{|A_0(z)|^2}{np},$$
where
$$A_0(z) = yS_y(z) - \frac{1-y}{z}.$$
In this section, we demonstrate the following results.
Theorem 3.
Under the condition $(C0)$, the positive constants $C = C(\delta,\mu_{4+\delta},c_0)$, $a_0 = a_0(\delta,\mu_{4+\delta},c_0)$ and $a_1 = a_1(\delta,\mu_{4+\delta},c_0)$ exist, such that for $z = u+iv \in \widetilde{\mathbb{D}}$:
$$\mathbf{E}\,|T_n|^q\,\mathbb{I}\{\mathcal{Q}\} \le C\bigl(F_1 + \cdots + F_6\bigr),$$
where
$$F_1 = \frac{a_n^q(z)}{n^qv^q},\qquad F_2 = |S_y(z)|^{2q}\beta_n^q(z)\,\mathbb{I}\{|b(z)| \le \Gamma_n\} + |S_y(z)|^{2q}\beta_n^{\frac q2}(z)\Gamma_n^{\frac q2},$$
$$F_3 = |S_y(z)|^{2q}\beta_n^{\frac q2}(z)\Gamma_n^{q}\,\mathbb{I}\{|b(z)| \le \Gamma_n\}\,\mathbb{I}\{z \in \mathbb{D}\} + |S_y(z)|^{3q}\beta_n^{\frac q2}(z)\frac{a_n^{\frac q2}(z)}{(nv)^q}\Bigl(|S_y(z)|^q|A_0(z)|^{\frac q2}\beta_n^{\frac q2}(z) + \frac{|A_0(z)|^{\frac q2}}{(np)^{\frac q2}} + \frac{1}{(nv)^{\frac q2}}\Bigr),$$
$$F_4 = |S_y(z)|^{2q}\beta_n^{\frac q2}(z)\frac{a_n^{\frac q2}(z)}{(nv)^q}\Bigl(|S_y(z)|^q|A_0(z)|^{\frac q2}\beta_n^{\frac q2}(z) + \frac{|A_0(z)|^{\frac q2}}{(np)^{\frac q2}} + \frac{1}{(nv)^{\frac q2}}\Bigr),$$
$$F_5 = q^{\frac q2}|S_y(z)|^{\frac{3q}{2}}\beta_n^{\frac q2}(z)|A_0(z)|^{\frac q4}\frac{a_n^{\frac q4}(z)}{(nv)^{\frac q2}}\bigl(a_n(z)+|b(z)|\bigr)^{\frac q2} + C^qq^{\frac q2}\Bigl(\frac{a_n(z)|S_y(z)|}{nv}\Bigr)^{\frac q4}\bigl(|S_y(z)|^2\beta_n(z)\bigr)^{\frac q4}\bigl(a_n(z)+|b(z)|\bigr)^{\frac q2}\frac{|S_y(z)|^{\frac q4}}{(np)^{\frac q4}}\frac{1}{(nv)^{\frac q4}} + C^qq^{q}\Bigl(\frac{|S_y(z)|^2a_n(z)}{nv}\Bigr)^{\frac q4}\bigl(|S_y(z)|^2\beta_n(z)\bigr)^{\frac q4}\bigl(a_n(z)+|b(z)|\bigr)^{\frac q2}\frac{1}{(nv)^{\frac q2}},$$
$$F_6 = C^qq^{2(q-1)}\bigl(a_n(z)+|b(z)|\bigr)^{q-1}|S_y(z)|\beta_n^{\frac12}(z)\Bigl[q^{q-1}\Bigl(\frac{|S_y(z)|a_n(z)}{nv}\Bigr)^{q-1}\frac{1}{(np)^{2\varkappa(q-1)}} + q^{q}\Bigl(\frac{|S_y(z)|a_n(z)}{nv}\Bigr)^{q-1}|S_y(z)|^{q-1}\beta_n^{\frac{q-1}{2}}(z) + q^{\frac{3(q-1)}{2}}\Bigl(\frac{|S_y(z)|^2a_n(z)}{nv}\Bigr)^{\frac{q-1}{2}}\frac{1}{(nv)^{q-1}} + \frac{q^{2(q-1)}}{(np)^{2(q-1)\varkappa}(nv)^{q-1}} + q^{2(q-1)}\frac{|S_y(z)|^{\frac{q-1}{2}}}{(nv)^{q-1}}\Bigl(\frac{a_n(z)|S_y(z)|}{nv}\Bigr)^{\frac{q-1}{2}} + q^{\frac{5(q-1)}{2}}\frac{1}{n^{q-1}v^{q-1}}\Bigl(\frac{|S_y(z)|a_n(z)}{nv}\Bigr)^{\frac{q-1}{2}} + \frac{q^{3(q-1)}}{(np)^{2(q-1)\varkappa}(nv)^{q-1}}\Bigr].$$
Remark 1.
Theorem 3 was auxiliary. $T_n$ was the perturbation of the main equation for the Stieltjes transformation of the limit distribution. The size of $T_n$ was responsible for the stability of the solution of the perturbed equation. We were interested in the estimates of $T_n$ that were uniform in the domain $\mathbb{D}$ and had an order of $\log n/(nv)$ (such estimates were needed for the proof of the delocalisation in Theorem 6). It was important to know to what extent the estimates depended on both $np_n$ and $nv$. The estimates behaved differently in the bulk and at the ends of the support of the limit distribution (the introduced functions $a_n(z)$ and $b(z)$ were responsible for the behaviour of the estimates, depending on the real part of the argument: in the bulk or at the ends of the support). For the $\Lambda_n$ estimation, there were two regimes: for $|b(z)| \ge \Gamma_n$, we used the inequality (10) and for $|b(z)| \le \Gamma_n$, we used the inequality (18).
Corollary 1.
Under the conditions of Theorem 3, the following inequalities hold:
$$\mathbb{I}\{|b(z)| \ge \Gamma_n\}\,\mathbf{E}\,|T_n|^q\,\mathbb{I}\{\mathcal{Q}\} \le C^q|b(z)|^q\Bigl[\frac{q^2}{(np)^{2\varkappa(q-1)}}d_n^{\frac{2q-1}{2}}(z) + d_n^{\frac{3q}{4}}(z)\Bigl(\frac{q^2}{nv}\Bigr)^{\frac q4} + d_n^{\frac q2}(z)\Bigl(\frac{q^2}{nv}\Bigr)^{\frac q2} + q^{q-1}d_n^{\frac{3q-2}{2}}(z) + q^{2(q-1)}d_n^{q}(z)\frac{1}{(nv)^{q-1}} + q^{3(q-1)}d_n^{\frac12}(z)\frac{1}{(nv)^{q-1}(np)^{2\varkappa(q-1)}}\Bigr]$$
and
$$\mathbb{I}\{\Gamma_n \ge |b(z)|\}\,\mathbf{E}\,|T_n|^q\,\mathbb{I}\{\mathcal{Q}\} \le C^q\Bigl(\frac{\Gamma_n}{nv} + \frac{1}{\sqrt{np}}\Bigr)^{\frac q2}\Gamma_n^{\frac q2}.$$
Corollary 2.
Under the conditions of Theorem 3 and in the domain:
$$\mathbb{D} = \{z = u+iv : 1-\sqrt{y}-v \le |u| \le 1+\sqrt{y}+v,\ v_0 \le v \le V\},$$
for any $Q > 1$, a constant $C$ exists that depends on $Q$, such that:
$$\Pr\Bigl\{|\Lambda_n| > \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le Cn^{-Q}.$$
Moreover, for $z = u+iv$ satisfying $v \ge v_0$ and $|z| \ge C\max\Bigl\{\frac{\log n}{np},\ \frac{\log^4 n}{(np)^{2\varkappa}}\Bigr\}$ and for $Q > 1$, a constant $C$ exists that depends on $Q$, such that:
$$\Pr\Bigl\{|\operatorname{Im}\Lambda_n| > \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le Cn^{-Q}.$$
Corollary 3.
Under the conditions of Theorem 3, for $Q \ge 1$, a constant $C$ that depends on $Q$ exists, such that:
$$\Pr\{\mathcal{Q}\} \ge 1 - Cn^{-Q}.$$
Theorem 4.
Under the conditions of Theorem 1, for $Q \ge 1$, the positive constants $C = C(Q,\delta,\mu_{4+\delta},c_0,c_1)$ and $a_0 = a_0(Q,\delta,\mu_{4+\delta},c_0,c_1)$ exist, such that for $z = u+iv \in \mathbb{D}(a_0)$:
$$\Pr\bigl\{|\Lambda_n| \ge \tfrac12\Gamma_n\bigr\} \le Cn^{-Q}.$$
Moreover, for $Q \ge 1$, the positive constants $C = C(Q,\delta,\mu_{4+\delta},c_0,c_1)$, $C_0 = C_0(Q,\delta,\mu_{4+\delta},c_0,c_1)$ and $a_0 = a_0(Q,\delta,\mu_{4+\delta},c_0,c_1)$ exist, such that for $z = u+iv$ satisfying $v \ge v_0$ and $|z| \ge \Gamma_n$:
$$\Pr\bigl\{|\operatorname{Im}\Lambda_n| > \tfrac12\Gamma_n\bigr\} \le Cn^{-Q},$$
where
$$\Gamma_n = C_0\log n\Bigl(\frac{1}{nv} + \min\Bigl\{\frac{1}{np|b(z)|},\ \frac{1}{\sqrt{np}}\Bigr\}\Bigr).$$
To prove the main result, we needed to estimate the entries of the resolvent matrix.
Theorem 5.
Under the condition $(C0)$ and for $0 < \gamma < \gamma_0$ and $u_0 > 0$, the constants $H = H(\delta,\mu_{4+\delta},c_0,\gamma,u_0)$, $C = C(\delta,\mu_{4+\delta},c_0,\gamma,u_0)$, $c = c(\delta,\mu_{4+\delta},c_0,\gamma,u_0)$, $a_0 = a_0(\delta,\mu_{4+\delta},c_0,\gamma,u_0)$ and $a_1 = a_1(\delta,\mu_{4+\delta},c_0,\gamma,u_0)$ exist, such that for $1 \le j \le n$, $1 \le k \le m$ and $z = u+iv \in \widetilde{\mathbb{D}}$, we have:
$$\Pr\bigl\{|R_{jk}| > H|S_y(z)|;\ \widehat{\mathcal{Q}}_\gamma(v)\bigr\} \le Cn^{-c\log n},$$
$$\Pr\bigl\{\max\{|R_{j,k+n}|,|R_{j+n,k}|\} > H|S_y(z)|;\ \widehat{\mathcal{Q}}_\gamma(v)\bigr\} \le Cn^{-c\log n},$$
$$\Pr\bigl\{|R_{j+n,k+n}| > H|A_0(z)|;\ \widehat{\mathcal{Q}}_\gamma(v)\bigr\} \le Cn^{-c\log n},$$
where
$$A_0(z) = yS_y(z) - \frac{1-y}{z}.$$
Corollary 4.
Under the conditions of Theorem 5, for $v \ge v_0$ and $q \le c\log n$, a constant $H$ exists, such that for $j,k \in \mathbb{T}\cup(\mathbb{T}^{(1)}+n)$:
$$\mathbf{E}\,|R_{jk}|^q\,\mathbb{I}\{\widehat{\mathcal{Q}}_\gamma\} \le H^q|S_y(z)|^q.$$

4. Delocalisation

In this section, we demonstrate some applications of the main result. We let $L = (L_{jk})_{j,k=1}^{n}$ and $K = (K_{jk})_{j,k=1}^{m}$ be the orthogonal matrices from the SVD of the matrix $X$, such that:
$$X = L\widetilde{D}K^*,$$
where $\widetilde{D} = \begin{pmatrix} D & O_{n,m-n} \end{pmatrix}$ and $D = \operatorname{diag}\{s_1,\ldots,s_n\}$. Here and in what follows, $O_{k,n}$ denotes a $k \times n$ matrix with zero entries. The eigenvalues of the matrix $V$ are denoted by $\lambda_j$ ($\lambda_j = s_j$ for $j = 1,\ldots,n$; $\lambda_j = -s_{j-n}$ for $j = n+1,\ldots,2n$; and $\lambda_j = 0$ for $j = 2n+1,\ldots,n+m$). We let $u_j = (u_{j,1},\ldots,u_{j,n+m})$ be the eigenvector of the matrix $V$ corresponding to the eigenvalue $\lambda_j$, where $j = 1,\ldots,n+m$.
We proved the following result.
Theorem 6.
Under the conditions $(C0)$–$(C2)$, for $Q \ge 1$, the positive constants $C_1 = C_1(Q,\delta,\mu_{4+\delta},c_0,c_1)$ and $C_2 = C_2(Q,\delta,\mu_{4+\delta},c_0,c_1)$ exist, such that:
$$\Pr\Bigl\{\max_{1\le j,k\le n}|L_{jk}|^2 \ge C_1\frac{\log^4 n}{n}\Bigr\} \le C_2n^{-Q}.$$
Moreover:
$$\Pr\Bigl\{\max_{1\le j\le n,\,1\le k\le m}|K_{jk}|^2 \ge C_1\frac{\log^4 n}{n}\Bigr\} \le C_2n^{-Q}.$$
Proof. 
First, we noted that according to [13] (which is based on [14]) and Theorem 1, constants $\tilde{c}_1, \tilde{c}_2, C > 0$ exist, such that:
$$\Pr\{\tilde{c}_1 \le s_n \le s_1 \le \tilde{c}_2\} \ge 1 - Cn^{-Q}.$$
Furthermore, by Lemma 11, we obtained:
$$R_{jj} = \sum_{k=1}^{n}\frac{|L_{jk}|^2}{2}\left(\frac{1}{s_k-z} + \frac{1}{-s_k-z}\right) = \int\frac{1}{x-z}\,dF_{nj}(x),$$
where
$$F_{nj}(x) = \frac12\sum_{k=1}^{n}|L_{jk}|^2\bigl(\mathbb{I}\{s_k \le x\} + \mathbb{I}\{-s_k \le x\}\bigr).$$
We noted that:
$$\max_{1\le k\le n}|L_{jk}|^2 \le 2\sup_{u:|u|\ge\tilde{c}_1/2}\bigl(F_{nj}(u+\lambda) - F_{nj}(u)\bigr)$$
and
$$F_{nj}(x+\lambda) - F_{nj}(x) = \int_{x}^{x+\lambda}dF_{nj}(u) \le 2\lambda\int_{-\infty}^{\infty}\frac{\lambda}{(x+\lambda-u)^2+\lambda^2}\,dF_{nj}(u) \le 2\lambda\,\operatorname{Im}R_{jj}(x+\lambda+i\lambda).$$
These implied that:
$$\sup_{x:|x|\ge\frac{\tilde{c}_1}{2}}\bigl|F_{nj}(x+\lambda) - F_{nj}(x)\bigr| \le 2\lambda\sup_{|x|\ge\frac{\tilde{c}_1}{4}}\operatorname{Im}R_{jj}(x+i\lambda).$$
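The bound above rests on the elementary fact that an increment of a distribution function is controlled by the imaginary part of its Stieltjes transform at the matching scale; a quick sanity check on a generic discrete measure (toy data, not the matrix model):

```python
import numpy as np

# For R(z) = sum_k w_k/(a_k - z), check F(x+lam) - F(x) <= 2*lam*Im R(x+lam+i*lam):
# every atom in (x, x+lam] contributes at least w_k/(2*lam) to Im R there.
rng = np.random.default_rng(3)
atoms = rng.uniform(-2, 2, 50)
w = rng.random(50)
w /= w.sum()                       # a probability measure

lam = 0.05
for x in np.linspace(-2, 2, 81):
    inc = w[(atoms > x) & (atoms <= x + lam)].sum()
    im_R = np.sum(w * lam / ((atoms - x - lam)**2 + lam**2))
    assert inc <= 2 * lam * im_R + 1e-12
print("bound holds on all test points")
```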
We chose $\lambda \sim n^{-1}\log^4 n$. Then, by Corollary 4, we obtained:
$$\Pr\Bigl\{\sup_{x:|x|\ge\frac{\tilde{c}_1}{2}}\bigl|F_{nj}(x+\lambda) - F_{nj}(x)\bigr| \le C\frac{\log^4 n}{n}\Bigr\} \ge 1 - Cn^{-Q}.$$
We obtained the bounds for K j k in a similar way. Thus, the theorem was proven. □

5. Proof of the Corollaries

5.1. The Proof of Corollary 4

Proof.
We could write:
$$\mathbf{E}\,|R_{jk}|^q\,\mathbb{I}\{\mathcal{Q}\} \le \mathbf{E}\,|R_{jk}|^q\,\mathbb{I}\{\mathcal{Q}\}\,\mathbb{I}\{\mathcal{A}(v)\} + \mathbf{E}\,|R_{jk}|^q\,\mathbb{I}\{\mathcal{Q}\}\,\mathbb{I}\{\mathcal{A}^c(v)\}.$$
Combining this inequality with $|R_{jk}| \le v^{-1}$, we found that:
$$\mathbf{E}\,|R_{jk}|^q\,\mathbb{I}\{\mathcal{Q}\} \le C^q + v_0^{-q}\,\mathbf{E}\,\mathbb{I}\{\mathcal{Q}\}\,\mathbb{I}\{\mathcal{A}^c(v)\}.$$
By applying Theorem 5, we obtained what was required.
Thus, the corollary was proven. □

5.2. The Proof of Corollary 2

Proof.
We considered the domain $\mathbb{D}$. We noted that for $z \in \mathbb{D}$, with $B := \sup_{z\in\mathbb{D}}|z|$, we obtained:
$$|z|^2 \ge (1-\sqrt{y}-v)^2 + v^2 \ge \frac12(1-\sqrt{y})^2 =: \alpha^2 \quad\text{and}\quad |A_0(z)| \le C,$$
and
$$|b(z)| \le \frac{1-y}{\alpha} + 2\sqrt{y} + B.$$
First, we considered the case $|b(z)| \ge \Gamma_n$. This inequality implied that:
$$|b(z)| \ge \frac{2C_0\log n}{\sqrt{np}} \ge \frac{1}{\sqrt{np}}.$$
From there, it followed that:
$$\min\Bigl\{\frac{1}{np|b(z)|},\ \frac{1}{\sqrt{np}}\Bigr\} = \frac{1}{np|b(z)|}.$$
Furthermore, for the case $|b(z)| \ge \Gamma_n$, we obtained $|b_n(z)|\,\mathbb{I}\{\mathcal{Q}\} \ge (1-\gamma)|b(z)|\,\mathbb{I}\{\mathcal{Q}\}$. We used the inequality:
$$|\Lambda_n|\,\mathbb{I}\{\mathcal{Q}\} \le \frac{C|T_n|}{|b(z)|}.$$
By Chebyshev’s inequality, we obtained:
$$\Pr\Bigl\{|\Lambda_n| \ge \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le \frac{2^q\,\mathbf{E}\,|T_n|^q\,\mathbb{I}\{\mathcal{Q}\}}{\Gamma_n^q|b(z)|^q}.$$
By applying Corollary 1, we obtained:
$$\Pr\Bigl\{|\Lambda_n| \ge \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le \frac{2^qH_n^q}{\Gamma_n^q},$$
where
$$H_n^q := C^q\Bigl[\frac{q^2}{(np)^{2\varkappa(q-1)}}d_n^{\frac{2q-1}{2}}(z) + d_n^{\frac{3q}{4}}(z)\Bigl(\frac{q^2}{nv}\Bigr)^{\frac q4} + d_n^{\frac q2}(z)\Bigl(\frac{q^2}{nv}\Bigr)^{\frac q2} + q^{q-1}d_n^{\frac{3q-2}{2}}(z) + q^{2(q-1)}d_n^{q}(z)\frac{1}{(nv)^{q-1}} + q^{3(q-1)}d_n^{\frac12}(z)\frac{1}{(nv)^{q-1}(np)^{2\varkappa(q-1)}}\Bigr].$$
First, we noted that for $q = K\log n$:
$$d_n(z) \le \frac{C\Gamma_n}{\log n}.$$
Moreover, for $q = C\log n$:
$$\frac{q^2}{nv} \le C\Gamma_n\log n.$$
From there, it followed that:
$$C^qd_n^{\frac{3q}{4}}(z)\Bigl(\frac{q^2}{nv}\Bigr)^{\frac q4}\Gamma_n^{-q} \le \Bigl(\frac{C}{\log n}\Bigr)^{\frac q2}.$$
Furthermore:
$$C^q\Bigl(\frac{d_n(z)}{\Gamma_n}\Bigr)^{\frac q2}\Bigl(\frac{q}{nv\,\Gamma_n}\Bigr)^{\frac q2} \le \Bigl(\frac{C}{\log n}\Bigr)^{\frac q2}.$$
Using these estimations, we could show that:
$$\frac{2^qH_n^q}{\Gamma_n^q} \le \Bigl(\frac{C}{\log n}\Bigr)^{\frac q2}.$$
By choosing $q = K\log n$ and $K > C(Q)$, we obtained:
$$\Pr\Bigl\{|\Lambda_n| \ge \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le Cn^{-Q}.$$
Then, we considered the case $|b(z)| \le \Gamma_n$. In this case:
$$\Gamma_n^{\frac12}\Bigl(\frac{\Gamma_n}{nv} + \frac{1}{\sqrt{np}}\Bigr)^{\frac12}\Big/\Gamma_n = \Bigl(\frac{1}{nv} + \frac{1}{\sqrt{np}\,\Gamma_n}\Bigr)^{\frac12} \le \frac{C}{\sqrt{\log n}}.$$
By applying the inequality $|\Lambda_n(z)| \le C|T_n|$ and Corollary 1, we obtained:
$$\Pr\Bigl\{|\Lambda_n| \ge \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le 2^qC^q\Bigl(\frac{\Gamma_n}{nv} + \frac{1}{\sqrt{np}}\Bigr)^{\frac q2}\Gamma_n^{-\frac q2} = C^q\Bigl(\frac{1}{nv} + \frac{1}{\sqrt{np}\,\Gamma_n}\Bigr)^{\frac q2}.$$
It was then simple to show that:
$$\Pr\Bigl\{|\Lambda_n| \ge \tfrac12\Gamma_n;\ \mathcal{Q}\Bigr\} \le Cn^{-Q}.$$
Thus, the first inequality was proven. The proof of the second inequality was similar to the proof of the first. We had to use the inequality:
$$|\operatorname{Im}\Lambda_n| \le C|T_n|,$$
which was valid on the whole real line, instead of $|\Lambda_n| \le C|T_n|$, which held in the domain $\widehat{\mathbb{D}}$. Moreover, we noted that for any $z$ value, we obtained:
$$|S_y(z)|\,|A_0(z)| \le C.$$
Thus, the corollary was proven. □

5.3. Proof of Corollary 3

Proof. 
According to Theorem 4:
$$\Pr\Bigl\{|\Lambda_n(z)| \le \tfrac12\Gamma_n(z);\ \mathcal{Q}\Bigr\} \ge 1 - Cn^{-Q}.$$
We noted that for $v = V$:
$$\Pr\{\mathcal{Q}(z)\} = 1.$$
Furthermore:
$$\Bigl|\frac{d\Lambda_n(z)}{dz}\Bigr| \le \frac{2}{v^2}.$$
We split the interval $[v_0,V]$ into subintervals by $v_0 < v_1 < \cdots < v_M = V$, such that for $k = 1,\ldots,M$:
$$|\Lambda_n(u+iv_k) - \Lambda_n(u+iv_{k-1})| \le \tfrac12\Gamma_n(z).$$
We noted that the event $\mathcal{Q}_k = \bigl\{|\Lambda_n(u+iv_k)| \le \frac12\Gamma_n(u+iv_k)\bigr\}$ implied the event $\widetilde{\mathcal{Q}}_{k-1} = \bigl\{|\Lambda_n(u+iv_{k-1})| \le \Gamma_n\bigr\}$. From there, for $v_{k-1} \le v \le v_k$ and $k = 1,\ldots,M$, we obtained:
$$\Pr\{\mathcal{Q}(u+iv)\} \ge 1 - \Pr\{\mathcal{Q}^c(u+iv_k)\} - \Pr\{\mathcal{Q}_k^c;\ \mathcal{Q}(u+iv_k)\} \ge 1 - Cn^{-Q}.$$

6. Proof of the Theorems

6.1. Proof of Theorem 1

Proof. 
We obtained:
$$\Pr\{|\Lambda_n(z)| \ge K\mathcal{T}_n\} \le \Pr\{|\Lambda_n(z)| \ge K\mathcal{T}_n;\ \mathcal{Q}\} + \Pr\{\mathcal{Q}^c\}.$$
The second term in the RHS of the last inequality was bounded by Corollary 3. For $z$ such that $|b(z)| \ge C\Gamma_n(z)$, we used the inequality:
$$|\Lambda_n(z)| \le \frac{|T_n|}{|b_n(z)|},$$
the inequality:
$$|b_n(z)| \ge (1-\gamma)|b(z)|$$
and the Markov inequality. We could write:
$$\Pr\{|\Lambda_n(z)| \ge K\mathcal{T}_n\} \le \frac{\mathbf{E}\{|T_n|^q;\ \mathcal{Q}\}}{(K\mathcal{T}_n)^q|b(z)|^q} + Cn^{-c\log\log n}.$$
We recalled that in the case $|b(z)| \ge \Gamma_n$:
$$\mathcal{T}_n := K\Bigl(\hat{d}_n(z) + \frac{\hat{d}_n^{3/4}(z)}{(nv)^{1/4}} + \frac{\hat{d}_n^{1/2}(z)}{(nv)^{1/2}}\Bigr).$$
In the case $|b(z)| \ge \Gamma_n$ and using Corollary 1, we obtained:
$$\Pr\{|\Lambda_n(z)| \ge K\mathcal{T}_n\} \le \Bigl(\frac{H_n}{K\mathcal{T}_n}\Bigr)^q + Cn^{-c\log\log n}.$$
First, we considered the case $|b(z)| \ge \Gamma_n$. By our definition of $\hat{d}_n(z)$, we obtained:
$$\Pr\{|\Lambda_n(z)| \ge K\mathcal{T}_n\} \le \Bigl(\frac{C_1}{K\log^{1/2}n}\Bigr)^q + Cn^{-c\log\log n}.$$
This inequality completed the proof for $|b(z)| \ge \Gamma_n$.
We then considered $|b(z)| \le \Gamma_n$. We used the inequality $|\Lambda_n(z)| \le C|T_n|$ and Corollary 1 to obtain:
$$\Pr\{|\Lambda_n(z)| \ge K\mathcal{T}_n\} \le \Bigl(\frac{C}{K}\Bigr)^q.$$
By choosing a sufficiently large K value, we obtained the proof. Thus, the theorem was proven. □

6.2. Proof of Theorem 2

Proof. 
The proof of Theorem 2 was similar to the proof of Theorem 1. We only noted that the inequality:
$$|\operatorname{Im}\Lambda_n(u+iv)| \le |T_n|$$
held for all $u \in \mathbb{R}$. □

6.3. The Proof of Theorem 5

Proof.
Using the definition of the Stieltjes transformation, we obtained:
$$s_n(z) = \frac{1}{2n}\sum_{j=1}^{n}\left(\frac{1}{s_j-z} + \frac{1}{-s_j-z}\right) = \frac{1}{n}\sum_{j=1}^{n}\frac{z}{s_j^2-z^2},$$
and
$$S_y(z) = \frac{-(z^2-ab) + \sqrt{(z^2-a^2)(z^2-b^2)}}{2yz}.$$
It is also well known that for $z = u+iv$:
$$|S_y(z)| \le \frac{1}{\sqrt{y}}$$
and
$$A_0(z) := -\frac{1}{yS_y(z)+z} = yS_y(z) - \frac{1-y}{z}.$$
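The two expressions for $A_0(z)$ above coincide as a consequence of the quadratic equation for $S_y(z)$; a minimal numerical check, assuming that quadratic and taking the root with positive imaginary part:

```python
import numpy as np

# With S solving y*S^2 + (z - (1-y)/z)*S + 1 = 0, one has the exact identity
# (y*S - (1-y)/z) * (y*S + z) = -1, i.e. -1/(y*S + z) = y*S - (1-y)/z.
y, z = 0.5, 1.2 + 0.7j
S = max(np.roots([y, z - (1 - y) / z, 1.0]), key=lambda r: r.imag)

lhs = -1.0 / (y * S + z)
rhs = y * S - (1 - y) / z
print(abs(lhs - rhs))    # ~ 0: both forms of A_0(z) agree
```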
We considered the following event for $1 \le j \le n$, $1 \le k \le m$ and $C > 0$:
$$\mathcal{A}_{jk}(v,J,K;C) = \bigl\{|R_{jk}^{(J,K)}(u+iv)| \le C\bigr\}.$$
We set:
$$\mathcal{A}^{(1)}(v,J,K) = \bigcap_{j=1}^{n}\bigcap_{k=1}^{m}\mathcal{A}_{j,k}\bigl(v,J,K;C|S_y(z)|\bigr),\qquad \mathcal{A}^{(2)}(v,J,K) = \bigcap_{j=1}^{m}\bigcap_{k=1}^{n}\mathcal{A}_{j+n,k}\bigl(v,J,K;C|S_y(z)|\bigr),$$
$$\mathcal{A}^{(3)}(v,J,K) = \bigcap_{j=1}^{n}\bigcap_{k=1}^{m}\mathcal{A}_{j,k+n}\bigl(v,J,K;C|S_y(z)|\bigr),\qquad \mathcal{A}^{(4)}(v,J,K) = \bigcap_{j=1}^{m}\bigcap_{k=1}^{m}\mathcal{A}_{j+n,k+n}\bigl(v,J,K;C|A_0(z)|\bigr).$$
For $j \in J^c$, $k \in K^c$ and any $u$, we obtained:
$$|R_{jk}^{(J,K)}(z)| \le \frac{1}{v}.$$
We recalled:
$$a_n := a_n(u,v) = \begin{cases}\operatorname{Im}b(z) + \Gamma_n, & \text{if } |b(z)| \ge \Gamma_n,\\ \Gamma_n, & \text{if } |b(z)| \le \Gamma_n,\end{cases}$$
and:
$$\Gamma_n = \Gamma_n(z) = 2C_0\log n\Bigl(\frac{1}{nv} + \min\Bigl\{\frac{1}{np|b(z)|},\ \frac{1}{\sqrt{np}}\Bigr\}\Bigr).$$
We introduced the events:
$$\widehat{\mathcal{Q}}_\gamma^{(J,K)}(v) := \bigcap_{l=0}^{k_v}\Bigl\{|\Lambda_n^{(J,K)}(u+is_0^lv)| \le \gamma a_n(u,s_0^lv) + \frac{|J|+|K|}{ns_0^lv}\Bigr\}.$$
It was easy to see that:
$$\widehat{\mathcal{Q}}_\gamma(v) \subset \widehat{\mathcal{Q}}_\gamma^{(J,K)}(v).$$
In what follows, we used $\mathcal{Q} := \widehat{\mathcal{Q}}_\gamma(v)$.
Equations (4) and (5) and Lemma 10 yielded that for $\gamma \le \gamma_0$ and for $J,K$ that satisfied $(|J|+|K|)/(nv) \le 1/4$, the following inequality held:
$$|R_{jj}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} \le 2|S_y(z)||\varepsilon_j^{(J,K)}||R_{jj}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} + 2|S_y(z)|$$
and, provided $|A_0(z)|(|J|+|K|)/(nv) \le 1/4$:
$$|R_{l+n,l+n}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} \le 2|A_0(z)||\varepsilon_{l+n}^{(J,K)}||R_{l+n,l+n}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} + 2|A_0(z)|.$$
We noted that for $|z| \ge C_1\frac{\log n}{nv}$ and $|J|+|K| \le C_2\log n$ under appropriate $C_1$ and $C_2$, we obtained $|A_0(z)|(|J|+|K|)/(nv) \le 1/4$.
We considered the off-diagonal elements of the resolvent matrix. It could be shown that for $j \ne k \in J^c$:
$$R_{jk}^{(J,K)} = R_{jj}^{(J,K)}\frac{1}{\sqrt{mp}}\sum_{l=1}^{m}X_{jl}\xi_{jl}R_{l+n,k}^{(J\cup\{j\},K)} = R_{jj}^{(J,K)}\zeta_{jk}^{(J,K)},$$
for $l \ne k \in K^c$:
$$R_{l+n,k+n}^{(J,K)} = R_{l+n,l+n}^{(J,K)}\frac{1}{\sqrt{mp}}\sum_{r=1}^{n}X_{rl}\xi_{rl}R_{k+n,r}^{(J,K\cup\{l+n\})} = R_{l+n,l+n}^{(J,K)}\zeta_{l+n,k+n}^{(J,K)},$$
and
$$R_{j,k+n}^{(J,K)} = R_{jj}^{(J,K)}\frac{1}{\sqrt{mp}}\sum_{r=1}^{m}X_{jr}\xi_{jr}R_{r+n,k+n}^{(J\cup\{j\},K)} = R_{jj}^{(J,K)}\zeta_{j,k+n}^{(J,K)},\qquad R_{k+n,j}^{(J,K)} = R_{jj}^{(J,K)}\frac{1}{\sqrt{mp}}\sum_{r=1}^{m}X_{jr}\xi_{jr}R_{r+n,k+n}^{(J\cup\{j\},K)} = R_{jj}^{(J,K)}\zeta_{k+n,j}^{(J,K)},$$
where
$$\zeta_{jk}^{(J,K)} = \frac{1}{\sqrt{mp}}\sum_{l=1}^{m}X_{jl}\xi_{jl}R_{l+n,k}^{(J\cup\{j\},K)},\qquad \zeta_{j+n,k+n}^{(J,K)} = \frac{1}{\sqrt{mp}}\sum_{r=1}^{n}X_{rj}\xi_{rj}R_{r,k+n}^{(J,K\cup\{j+n\})},$$
$$\zeta_{j+n,k}^{(J,K)} = \frac{1}{\sqrt{mp}}\sum_{l=1}^{m}X_{kl}\xi_{kl}R_{l+n,j+n}^{(J\cup\{k\},K)},\qquad \zeta_{j,k+n}^{(J,K)} = \frac{1}{\sqrt{mp}}\sum_{l=1}^{n}X_{lk}\xi_{lk}R_{l+n,k+n}^{(J\cup\{j\},K)}.$$
Inequalities (21) and (22) implied that:
$$\Pr\bigl\{|R_{jj}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\Bigl\{|\varepsilon_j|\,\mathbb{I}\{\mathcal{Q}\} > \tfrac14\Bigr\}$$
for $1 \le j \le n$ and $C > 4y$ and that:
$$\Pr\bigl\{|R_{l+n,l+n}|\,\mathbb{I}\{\mathcal{Q}\} > C|A_0(z)|\bigr\} \le \Pr\Bigl\{|\varepsilon_{l+n}|\,\mathbb{I}\{\mathcal{Q}\} > \tfrac{1}{4|A_0(z)|}\Bigr\}$$
for $1 \le l \le m$ and $C > 2$. Equations (23)–(25) produced:
$$\Pr\bigl\{|R_{jk}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\bigl\{|R_{jj}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} + \Pr\bigl\{|\zeta_{jk}|\,\mathbb{I}\{\mathcal{Q}\} > 1\bigr\}$$
for $1 \le j \ne k \le n$ and:
$$\Pr\bigl\{|R_{l+n,k+n}|\,\mathbb{I}\{\mathcal{Q}\} > C|A_0(z)|\bigr\} \le \Pr\bigl\{|R_{l+n,l+n}|\,\mathbb{I}\{\mathcal{Q}\} > C|A_0(z)|\bigr\} + \Pr\bigl\{|\zeta_{l+n,k+n}|\,\mathbb{I}\{\mathcal{Q}\} > 1\bigr\}$$
for $1 \le l \ne k \le m$. Similarly, we obtained:
$$\Pr\bigl\{|R_{l,k+n}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\bigl\{|R_{l,l}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} + \Pr\bigl\{|\zeta_{l,k+n}|\,\mathbb{I}\{\mathcal{Q}\} > 1\bigr\}$$
and
$$\Pr\bigl\{|R_{l+n,k}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\bigl\{|R_{k,k}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} + \Pr\bigl\{|\zeta_{l+n,k}|\,\mathbb{I}\{\mathcal{Q}\} > 1\bigr\}.$$
We noted that for $|z| \le B$, we obtained:
$$\frac{1}{|A_0(z)|} \le B + \sqrt{y}.$$
Using Rosenthal’s inequality, we found that:
$$\mathbf{E}_j|\zeta_{jk}|^q \le C^q\left(\frac{q^{q/2}}{(nv)^{q/2}}\bigl(\operatorname{Im}R_{kk}^{(j)}\bigr)^{q/2} + \frac{q^q}{(np)^{q\varkappa-1}}\,\frac{1}{n}\sum_{l=1}^{m}|R_{k,l+n}^{(j)}|^q\right)$$
for $1 \le j \ne k \le n$ and that:
$$\mathbf{E}_{j+n}|\zeta_{j+n,k+n}|^q \le C^q\left(\frac{q^{q/2}}{(nv)^{q/2}}\bigl(\operatorname{Im}R_{k+n,k+n}^{(j+n)}\bigr)^{q/2} + \frac{q^q}{(np)^{q\varkappa-1}}\,\frac{1}{n}\sum_{r=1}^{n}|R_{k+n,r}^{(j+n)}|^q\right),$$
$$\mathbf{E}_j|\zeta_{j,k+n}|^q \le C^q\left(\frac{q^{q/2}}{(nv)^{q/2}}\bigl(\operatorname{Im}R_{k+n,k+n}^{(j+n)}\bigr)^{q/2} + \frac{q^q}{(np)^{q\varkappa-1}}\,\frac{1}{n}\sum_{r=1}^{n}|R_{k+n,r+n}^{(j+n)}|^q\right),$$
$$\mathbf{E}_{j+n}|\zeta_{j+n,k}|^q \le C^q\left(\frac{q^{q/2}}{(nv)^{q/2}}\bigl(\operatorname{Im}R_{k+n,k+n}^{(j+n)}\bigr)^{q/2} + \frac{q^q}{(np)^{q\varkappa-1}}\,\frac{1}{n}\sum_{r=1}^{n}|R_{k+n,r+n}^{(j+n)}|^q\right)$$
for $1 \le j \ne k \le m$. We noted that:
$$\Pr\{|\varepsilon_j^{(J,K)}| > \tfrac14;\ \mathcal{Q}\} \le \Pr\{\mathcal{A}^{(4)}(s_0v,J,K)^c;\ \mathcal{Q}\} + \Pr\{|\varepsilon_j^{(J,K)}| > \tfrac14;\ \mathcal{A}^{(4)}(s_0v,J,K);\ \mathcal{Q}\},$$
$$\Pr\{|\varepsilon_{j+n}^{(J,K)}| > \tfrac{1}{4|A_0(z)|};\ \mathcal{Q}\} \le \Pr\{\mathcal{A}^{(1)}(s_0v,J,K)^c;\ \mathcal{Q}\} + \Pr\{|\varepsilon_{j+n}^{(J,K)}| > \tfrac{1}{4|A_0(z)|};\ \mathcal{A}^{(1)}(s_0v,J,K);\ \mathcal{Q}\},$$
$$\Pr\{|\zeta_{jk}^{(J,K)}| > 1;\ \mathcal{Q}\} \le \Pr\{\mathcal{A}^{(2)}(s_0v,J,K)^c;\ \mathcal{Q}\} + \Pr\{|\zeta_{jk}^{(J,K)}| > 1;\ \mathcal{A}^{(2)}(s_0v,J,K);\ \mathcal{Q}\},$$
$$\Pr\{|\zeta_{l+n,k+n}^{(J,K)}| > 1;\ \mathcal{Q}\} \le \Pr\{\mathcal{A}^{(3)}(s_0v,J,K)^c;\ \mathcal{Q}\} + \Pr\{|\zeta_{l+n,k+n}^{(J,K)}| > 1;\ \mathcal{A}^{(3)}(s_0v,J,K);\ \mathcal{Q}\},$$
$$\Pr\{|\zeta_{j+n,k}^{(J,K)}| > 1;\ \mathcal{Q}\} \le \Pr\{\mathcal{A}^{(4)}(s_0v,J,K)^c;\ \mathcal{Q}\} + \Pr\{|\zeta_{j+n,k}^{(J,K)}| > 1;\ \mathcal{Q};\ \mathcal{A}^{(4)}(s_0v,J,K)\},$$
$$\Pr\{|\zeta_{k,j+n}^{(J,K)}| > 1;\ \mathcal{Q}\} \le \Pr\{\mathcal{A}^{(4)}(s_0v,J,K)^c;\ \mathcal{Q}\} + \Pr\{|\zeta_{k,j+n}^{(J,K)}(v)| > 1;\ \mathcal{Q};\ \mathcal{A}^{(4)}(s_0v,J,K)\}.$$
Using Chebyshev’s inequality, we obtained:
$$\Pr\{|\varepsilon_j^{(J,K)}| > \tfrac14;\ \mathcal{Q};\ \mathcal{A}^{(4)}\} \le C^q\,\mathbf{E}\,\mathbf{E}_j|\varepsilon_j|^q\,\mathbb{I}\{\mathcal{Q}^{(J,K)}\}\,\mathbb{I}\{\mathcal{A}^{(4)}\}.$$
By applying the triangle inequality to the results of Lemmas 1–3 (bounds on the quadratic forms in the entries of the resolvent matrix), we arrived at the inequality:
$$\mathbf{E}_j\,\mathbb{I}\{\mathcal{A}^{(4)}(s_0v,J,K)\}|\varepsilon_j|^q \le C^q\Bigl[\frac{1}{(nv)^q} + \Bigl(\frac{qs|A_0(z)|^2}{np}\Bigr)^{\frac q2} + \frac{1}{np}\Bigl(\frac{qs|A_0(z)|}{(np)^{2\varkappa}}\Bigr)^{q} + \Bigl(\frac{q^2s(a_n(z)+|A_0(z)|)}{nv}\Bigr)^{\frac q2} + \frac{1}{np}\Bigl(\frac{qs|A_0(z)|}{nv}\Bigr)^{\frac q2}\Bigl(\frac{q^2}{np}\Bigr)^{\frac q2} + \Bigl(\frac{q^2s|A_0(z)|}{(np)^{2\varkappa}}\Bigr)^{q}\frac{1}{(np)^2}\Bigr].$$
When we set $q \le \log^2 n$, $nv > C\log^4 n$ and $np > C\log^{2/\varkappa} n$ and took into account that $\varkappa < 1/2$ and $|A_0(z)| \le C/|z|$, then we obtained:
$$\mathbf{E}_j|\varepsilon_j|^q\,\mathbb{I}\{\mathcal{A}^{(4)}(s_0v,J\cup\{j\},K)\} \le Cn^{-c\log n}.$$
Moreover, the constant $c$ could be made arbitrarily large. We could obtain similar estimates for the quantities $\varepsilon_{l+n}$, $\zeta_{jk}$, $\zeta_{j+n,k}$, $\zeta_{j,k+n}$ and $\zeta_{j+n,k+n}$. Inequalities (27) and (28) implied:
$$\Pr\bigl\{|R_{jj}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\{\mathcal{A}^{(4)}(s_0v,J\cup\{j\},K)^c\} + Cn^{-c\log n},$$
$$\Pr\bigl\{|R_{l+n,l+n}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} > C|A_0(z)|\bigr\} \le \Pr\{\mathcal{A}^{(1)}(s_0v,J,K\cup\{l\})^c\} + Cn^{-c\log n},$$
$$\Pr\bigl\{|R_{jk}^{(J,K)}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\{\mathcal{A}^{(2)}(s_0v,J,K\cup\{l\})^c\} + Cn^{-c\log n},$$
$$\Pr\bigl\{|R_{j+n,k}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\{\mathcal{A}^{(4)}(s_0v,J,K\cup\{j\})^c\} + Cn^{-c\log n},$$
$$\Pr\bigl\{|R_{k+n,j}|\,\mathbb{I}\{\mathcal{Q}\} > C|S_y(z)|\bigr\} \le \Pr\{\mathcal{A}^{(4)}(s_0v,J,K\cup\{j\})^c\} + Cn^{-c\log n},$$
$$\Pr\bigl\{|R_{k+n,j+n}|\,\mathbb{I}\{\mathcal{Q}\} > C|A_0(z)|\bigr\} \le \Pr\{\mathcal{A}^{(3)}(s_0v,J,K\cup\{j\})^c\} + Cn^{-c\log n}.$$
The last inequalities produced:
max j , k J c K c Pr { | R j , k ( J , K ) | I { Q } > C } C n c log n + max j J c , k K c max { Pr { A c ( s v , J { j } , K ; C A 0 ( z ) ) } , Pr { A c ( s 0 v , J , K { k } ; C A 0 ( z ) ) } } .
We noted that k v C log n for v v 0 = n 1 log 4 n . So, by choosing c large enough, we obtained:
Pr { A c ( v ) Q } C n c log n .
This completed the proof of the theorem. □

6.4. The Proof of Theorem 3

Proof.
First, we noted that for z D , a constant C = C ( y , V ) exists, such that:
| b ( z ) | C .
Without loss of generality, we could assume that Γ n 1 | b ( z ) | . We recalled that:
a : = a n ( z ) : = a n ( u , v ) = Im b ( z ) + Γ n if | b ( z ) | Γ n , Γ n , if | b ( z ) | Γ n .
Then:
Γ n = 2 C 0 log n 1 n v + min 1 n p | b ( z ) | , 1 n p .
We considered the smoothing of the indicator h γ ( x ) :
h γ ( x , v ) = 1 , for | x | γ a , 1 | x | γ a γ a , for γ a | x | 2 γ a , 0 , for | x | > 2 γ a .
We noted that:
I Q ^ γ ( v ) h γ ( | Λ n ( u + i v ) | , v ) I Q ^ 2 γ ( v ) ,
where, as before:
Q ^ γ ( v ) = ν = 0 k v { | Λ n ( u + i s 0 ν v ) | γ a n ( u , s 0 ν v ) } .
We estimated the value:
D n : = E   | T n | q h γ q ( | Λ n | , v ) .
It was easy to see that:
E   | T n | q I { Q } D n .
To estimate D n , we used the approach developed in [15], which goes back to Stein’s method. We let:
φ ( z ) : = z ¯ | z | q 2 .
We set:
T ^ n : = T n h γ ( | Λ n | , v ) .
Then, we could write:
D n : = E   T ^ n φ ( T ^ n ) .
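The Stein-type identity behind this approach can be illustrated in the Gaussian case: for Z ∼ N ( 0 , 1 ) and a polynomial f, one has E Z f ( Z ) = E f ′ ( Z ) . The following self-contained sketch (purely illustrative and not part of the proof) verifies this identity exactly through the Gaussian moment formula E Z^k = ( k − 1 ) !! for even k:

```python
def gaussian_moment(k):
    """E Z^k for Z ~ N(0,1): (k-1)!! for even k, 0 for odd k."""
    if k % 2 == 1:
        return 0
    r = 1
    for j in range(1, k, 2):
        r *= j
    return r

def stein_residual(coeffs):
    """For f(x) = sum_k c_k x^k, compute E[Z f(Z)] - E[f'(Z)] exactly.

    Stein's identity for the standard Gaussian says this residual is 0
    for every polynomial f."""
    lhs = sum(c * gaussian_moment(k + 1) for k, c in enumerate(coeffs))
    rhs = sum(k * c * gaussian_moment(k - 1) for k, c in enumerate(coeffs) if k >= 1)
    return lhs - rhs
```

For instance, f ( x ) = x³ gives E Z⁴ = 3 on the left and 3 E Z² = 3 on the right.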
The equality:
T n = 1 + z 1 y z s n ( z ) + y s n 2 ( z ) = b ( z ) Λ n ( z ) + y Λ n 2 ( z )
implied that a constant C exists that depends on γ in the definition of Q , such that:
| T n | I { Q } ( | b ( z ) | | Λ n ( z ) | + y | Λ n ( z ) | 2 ) I { Q } C ( a n 2 ( z ) + | b ( z ) | | a n ( z ) | ) I { Q } C .
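The two expressions for T n above agree identically once S y ( z ) solves the Marchenko–Pastur-type equation 1 + ( z − ( 1 − y ) / z ) S y ( z ) + y S y² ( z ) = 0 . A quick numerical sanity check of this decomposition (a sketch; the complex number s below is an arbitrary stand-in for s n ( z ) ):

```python
import cmath

def S_y(z, y):
    """Root of 1 + (z - (1 - y)/z)*S + y*S**2 = 0 with positive imaginary part."""
    w = z - (1 - y) / z
    r = cmath.sqrt(w * w - 4 * y)
    s = (-w + r) / (2 * y)
    return s if s.imag > 0 else (-w - r) / (2 * y)

def T_direct(s, z, y):
    """T_n written through s_n(z) directly."""
    return 1 + (z - (1 - y) / z) * s + y * s * s

def T_via_Lambda(s, z, y):
    """T_n written as b(z)*Lambda_n + y*Lambda_n**2 with Lambda_n = s_n - S_y."""
    S = S_y(z, y)
    b = z - (1 - y) / z + 2 * y * S   # b(z) as in the text
    lam = s - S                        # Lambda_n(z)
    return b * lam + y * lam * lam
```

Both forms differ only by the quadratic that S y ( z ) annihilates, so they coincide for any s.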
We considered:
B : = A ( 1 ) A ( 2 ) A ( 3 ) A ( 4 ) .
Then:
D n E   | T n | q I { Q } I { B } + C n c log n .
By the definition of T n , we could rewrite the last inequality as:
D n : = 1 n j = 1 n E   ε j R j j h γ ( | Λ n | , v ) φ ( T ^ n ) I { B } + C n c log n .
We set:
D n = D n ( 1 ) + D n ( 2 ) + C n c log n ,
where
D n ( 1 ) : = 1 n j = 1 n E   ε j 1 R j j h γ ( | Λ n | , v ) φ ( T ^ n ) I { B } , D n ( 2 ) : = 1 n j = 1 n E   ε ^ j R j j h γ ( | Λ n | , v ) φ ( T ^ n ) I { B } , ε ^ j : = ε j 2 + ε j 3 .
We obtained:
1 n j = 1 n ε j 1 R j j = 1 2 n s n ( z ) + s n ( z ) 2 n z
and this yielded:
| 1 n j = 1 n ε j 1 R j j | C n v Im s n ( z ) + C n + C | Λ n | n | z | .
Then, we used:
| S y ( z ) | | z | 1 1 y ( y | S y ( z ) | 2 + | z | | S y ( z ) | + 1 ) C .
Inequality (30) implied that for z D :
| D n ( 1 ) | J 1 D n q 1 q ,
where
J 1 = C a n ( z ) n v .
Further, we considered:
T ^ n ( j ) = E   j T ^ n , T n ( j ) = E   j T n , Λ n ( j ) = E   j Λ n .
We noted that by the Jensen inequality, for q 1 :
E   | T ^ n ( j ) | q E   | T ^ n | q .
We represented D n ( 2 ) in the form:
D n ( 2 ) = D n ( 21 ) + ⋯ + D n ( 24 ) ,
where
D n ( 21 ) : = S y ( z ) n j = 1 n E   ε ^ j h γ ( | Λ n ( j ) | , v ) φ ( T ^ n ( j ) ) I { B } , D n ( 22 ) : = 1 n j = 1 n E   ε ^ j ( R j j S y ( z ) ) h γ ( | Λ n ( j ) | , v ) φ ( T ^ n ( j ) ) I { B } , D n ( 23 ) : = 1 n j = 1 n E   ε ^ j R j j ( h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) ) φ ( T ^ n ( j ) ) I { B } , D n ( 24 ) : = 1 n j = 1 n E   ε ^ j R j j h γ ( | Λ n | , v ) ( φ ( T ^ n ) φ ( T ^ n ( j ) ) ) I { B } .
Since E j ε ^ j = 0 , we found:
D n ( 21 ) = S y ( z ) n j = 1 n E   ε ^ j h γ ( | Λ n ( j ) | , v ) φ ( T ^ n ( j ) ) I { B c } .
From there, it was easy to obtain:
| D n ( 21 ) | C n c log n .

6.4.1. Estimation of D n ( 22 )

Using the representation of R j j , we could write:
D n ( 22 ) = D ˜ n ( 22 ) + D ^ n ( 22 ) + D ˘ n ( 22 ) ,
where
D ˜ n ( 22 ) : = S y ( z ) n j = 1 n E   ε ^ j 2 R j j h γ ( | Λ n ( j ) | , v ) φ ( T ^ n ( j ) ) I { B } , D ^ n ( 22 ) : = y S y ( z ) n j = 1 n E   ε ^ j Λ n R j j h γ ( | Λ n ( j ) | , v ) φ ( T ^ n ( j ) ) I { B } , D ˘ n ( 22 ) : = y S y ( z ) n j = 1 n E   ε ^ j ε j 1 R j j h γ ( | Λ n ( j ) | , v ) φ ( T ^ n ( j ) ) I { B }
By Hölder’s inequality:
| D ^ n ( 22 ) | C | S y ( z ) | n j = 1 n E   1 q E   j | ε ^ j | | Λ n | | R j j | h γ ( | Λ n ( j ) | , v ) I { B } q D n q 1 q .
Further:
E   j | ε ^ j | | Λ n | | R j j | h γ ( | Λ n ( j ) | , v ) I { B } C | S y ( z ) | E   j | ε ^ j | | Λ n | h γ ( | Λ n ( j ) | , v ) I { B } .
We obtained:
| Λ n | h γ ( | Λ n ( j ) | , v ) I { B } | Λ n | h γ ( | Λ n | , v ) I { B } + | Λ n | | h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) | I { B } .
In the case | b n ( z ) | | T n | , we obtained:
| Λ n | | T n | | b n ( z ) | | T n | .
This implied that:
| Λ n | h γ ( | Λ n | , v ) I { B } I { | T n | | b n ( z ) | } C | T n | h ( | Λ n | , v ) .
Furthermore, in the case | b n ( z ) | | T n | and | b ( z ) | Γ n , we obtained:
| b n ( z ) | I { Q } ( 1 2 γ ) | b ( z ) | I { Q } > c | b ( z ) | I { Q } .
This implied that:
| Λ n | I { Q } C ( Im b ( z ) + Γ n ) I { Q } C | T n | .
For | b ( z ) | Γ n , we could write:
E   j | ε ^ j | | Λ n | | R j j | h γ ( | Λ n ( j ) | , v ) I { B } C | S y ( z ) | E   j | ε ^ j | | Λ n | I { | Λ n ( j ) | C Γ n } I { B } C | S y ( z ) | Γ n E   j | ε ^ j | I { | Λ n ( j ) | C Γ n } I { B } .
Using this, we concluded that:
E   j | ε ^ j | | Λ n | h γ ( | Λ n ( j ) | , v ) I { B } E   j 1 2 | ε ^ j | 2 I { | Λ n ( j ) | C a n ( z ) } I { B } × ( I { | b ( z ) | Γ n } E   j 1 2 | T ^ n | + Γ n I { | b ( z ) | Γ n } I { z D } ) .
By applying Lemmas 2 and 3, we obtained:
E   j | ε ^ j | | Λ n | h γ ( | Λ n ( j ) | , v ) I { B } C β n 1 2 ( z ) ( E   j 1 2 | T ^ n | + Γ n I { | b ( z ) | Γ n } I { z D } ) .
By combining inequalities (34) and (35), | S y ( z ) | | A 0 ( z ) | C and Young’s inequality, we obtained:
| D ^ n ( 22 ) |   H 1 D n 2 q 1 2 q + H 2 D n q 1 q ,
where
H 1 = C | S y ( z ) | 2 β n 1 2 ( z ) I { | b ( z ) | Γ n } , H 2 = | S y ( z ) | 2 Γ n β n 1 2 ( z ) I { | b ( z ) | Γ n } I { z D } .
Hölder’s inequality and (35) produced:
| D ˜ n ( 22 ) | C | S y ( z ) | 2 β n ( z ) D n q 1 q .

6.4.2. Estimation of D n ( 23 )

We noted that:
| h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) | | R j j | I { B } C a n ( z ) | Λ n Λ n ( j ) | I { max { | Λ n | , | Λ n ( j ) | } 2 γ a n ( z ) } I { B } .
Using Hölder’s inequality and Cauchy’s inequality, we obtained:
D n ( 23 ) C | S y ( z ) | a n ( z ) 1 n j = 1 n E   1 q E   j | ε ^ j | 2 I { Q } I ( B ) q 2 E   j | Λ n Λ n ( j ) | 2 I { Q } I ( B ) q 2 D n q 1 q .
By applying Lemmas 2, 3 and 5, we obtained:
D n ( 23 ) C | S y ( z ) | a n 1 ( z ) β n 1 2 ( z ) 1 n j = 1 n E   1 q E   j | Λ n Λ n ( j ) | 2 I { Q } I ( B ) q 2 D n q 1 q .

6.4.3. Estimation of D n ( 24 )

Using Taylor’s formula, we obtained:
D n ( 24 ) = 1 n j = 1 n E   ε ^ j R j j h γ ( | Λ n | , v ) ( T ^ n T ^ n ( j ) ) φ ( T ^ n ( j ) + τ ( T ^ n T ^ n ( j ) ) ) I { B } ,
where τ is uniformly distributed on the interval [ 0 , 1 ] , independently of all other random variables. Since | R j j | C | S y ( z ) | on the event B , we found that:
| D n ( 24 ) | C | S y ( z ) | n j = 1 n E   | ε ^ j | h γ ( | Λ n | , v ) | T ^ n T ^ n ( j ) | | φ ( T ^ n ( j ) + τ ( T ^ n T ^ n ( j ) ) ) | I { B } .
Taking into account the inequality:
| φ ( T ^ n ( j ) + τ ( T ^ n T ^ n ( j ) ) ) | C q | T ^ n ( j ) | q 2 + q q 2 | T ^ n T ^ n ( j ) | q 2 ,
we obtained:
| D n ( 24 ) | C q | S y ( z ) | n j = 1 n E   | ε ^ j | h γ ( | Λ n | , v ) | T ^ n T ^ n ( j ) | | T ^ n ( j ) | q 2 I { B } + C q q 1 | S y ( z ) | n j = 1 n E   | ε ^ j | h γ ( | Λ n | , v ) | T ^ n T ^ n ( j ) | q 1 I { B } = : D ^ n ( 24 ) + D ˜ n ( 24 ) .
By applying Hölder’s inequality, we obtained:
D ^ n ( 24 ) C q | S y ( z ) | n j = 1 n E   2 q E   j { | ε ^ j | h γ ( | Λ n | , v ) | T ^ n T ^ n ( j ) | I { B } } q 2 E   q 2 q | T ^ n ( j ) | q .
Jensen’s inequality produced:
D ^ n ( 24 ) C q | S y ( z ) | n j = 1 n E   2 q E   j { | ε ^ j | h γ ( | Λ n | , v ) | T ^ n T ^ n ( j ) | I { B } } q 2 D n q 2 q .
To estimate D ^ n ( 24 ) , we had to obtain the bounds for:
V j q 2 : = E   [ E   j { | ε ^ j | h γ ( | Λ n | , v ) | T ^ n T ^ n ( j ) | I { B } } ] q 2 .
Using Cauchy’s inequality, we obtained:
V j q 2 E   ( V j ( 1 ) ) q 4 ( V j ( 2 ) ) q 4 E   1 2 ( V j ( 1 ) ) q 2 E   1 2 ( V j ( 2 ) ) q 2
where
V j ( 1 ) : = E   j | ε ^ j | 2 I { Q ^ 2 γ ( v ) } I { B } , V j ( 2 ) : = E   j | T ^ n T ^ n ( j ) | 2 h γ 2 ( | Λ n | , v ) I { B } .

6.4.4. Estimation of V j ( 1 )

Lemma 2 produced:
E   j | ε j 2 | 2 I { Q ^ 2 γ ( v ) } I { B } C | A 0 ( z ) | 2 n p ,
and, in turn, Lemma 3 produced:
E   j | ε j 3 | 2 I { Q ^ 2 γ ( v ) } I { B } C n v a n ( z ) .
By summing the obtained estimates, we arrived at the following inequality:
V j ( 1 ) C a n ( z ) n v + C A 0 2 ( z ) n p = β n ( z ) .

6.4.5. Estimation of V j ( 2 )

We considered T ^ n T ^ n ( j ) . Since T ^ n = T n h γ ( | Λ n | , v ) and T ^ n ( j ) = E   j T ^ n , we obtained:
T ^ n T ^ n ( j ) = ( T n T n ( j ) ) h γ ( | Λ n | , v ) + T n ( j ) [ h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) ] E   j T n h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) .
Further, we noted that:
T n = Λ n b n = Λ n b ( z ) + y Λ n 2 ,
T n ( j ) = Λ n ( j ) b ( z ) + y E   j Λ n 2 .
Then:
T n T n ( j ) = ( Λ n Λ n ( j ) ) ( b ( z ) + 2 y Λ n ( j ) ) + y ( Λ n Λ n ( j ) ) 2 y E   j ( Λ n Λ n ( j ) ) 2 .
We obtained:
T ^ n T ^ n ( j ) = ( b ( z ) + 2 y Λ n ( j ) ) ( Λ n Λ n ( j ) ) h γ ( | Λ n | , v ) ] + y ( Λ n Λ n ( j ) ) 2 E   j ( Λ n Λ n ( j ) ) 2 h γ ( | Λ n | , v ) + T n ( j ) ( h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) E   j ( h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) ) .
Then, we returned to the estimation of V j ( 2 ) . Equality (41) implied:
V j ( 2 ) 4 | b ( z ) | 2 E   j | Λ n Λ n ( j ) | 2 h γ 4 ( | Λ n | , v ) I { B } + 8 y 2 E   j | Λ n ( j ) | 2 | Λ n Λ n ( j ) | 2 h γ 4 ( | Λ n | , v ) I { B } + 4 y 2 E   j | Λ n Λ n ( j ) | 4 h γ 4 ( | Λ n | , v ) I { B } + 4 y 2 E   j ( Λ n Λ n ( j ) ) 2 h γ ( | Λ n | , v ) 2 E   j h γ 2 ( | Λ n | , v ) I { B } + 4 | T n ( j ) | 2 E   j h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) 2 h γ 2 ( | Λ n | , v ) I { B } + 4 | T n ( j ) | 2 E   j h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) 2 E   j h γ 2 ( | Λ n | , v ) I { B } .
We could rewrite this as:
V j ( 2 ) A 1 + A 2 + A 3 + A 4 ,
A 1 = C | b ( z ) | 2 E   j | Λ n Λ n ( j ) | 2 h γ 4 ( | Λ n | , v ) I { B } , A 2 = C E   j | Λ n ( j ) | 2 | Λ n Λ n ( j ) | 2 h γ 4 ( | Λ n | , v ) I { B } , A 3 = C E   j | Λ n Λ n ( j ) | 4 h γ 2 ( | Λ n | , v ) h γ 2 ( | Λ n | , v ) + E   j h γ 2 ( | Λ n | , v ) I { B } , A 4 = C | T n ( j ) | 2 E   j h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) 2 h γ 2 ( | Λ n | , v ) + E   j h γ 2 ( | Λ n | , v ) I { B } .
First, we found that:
A 1 C | b ( z ) | 2 E   j | Λ n Λ n ( j ) | 2 h γ 4 ( | Λ n | , v ) I { B } .
and
A 2 C a n 2 ( z ) E   j | Λ n Λ n ( j ) | 2 h γ 4 ( | Λ n | , v ) I { B } .
We noted that:
A 3 C n 2 v 2 E   j | Λ n Λ n ( j ) | 2 h γ 2 ( | Λ n | , v ) I { B } .
It was straightforward to see that:
| T n ( j ) | 2 ( h γ 2 ( | Λ n ( z ) | , v ) + E   j h γ 2 ( | Λ n ( z ) | , v ) ) C ( | b ( z ) | 2 a n 2 ( z ) + a n 4 ( z ) + 1 n 4 v 4 ) .
This bound implied that:
A 4 C ( | b ( z ) | 2 a n 2 ( z ) + a n 4 ( z ) + 1 n 4 v 4 ) E   j h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) 2 I { B } .
Further, since:
h γ ( | Λ n | , v ) h γ ( | Λ n ( j ) | , v ) C γ a n ( z ) | Λ n Λ n ( j ) | I { max { | Λ n | , | Λ n ( j ) | } ( 1 + γ ) a n ( z ) } ,
we could write:
A 4 C ( | b ( z ) | 2 + a n 2 ( z ) ) E   j | Λ n Λ n ( j ) | 2 I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } I { B } .
By combining the estimates that were obtained for A 1 , , A 4 , we concluded that:
V j ( 2 ) C ( a n 2 ( z ) + | b ( z ) | 2 ) E   j | Λ n Λ n ( j ) | 2 I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } I { B } .
Inequalities (38) and (39) implied the bounds:
V j q 2 C q β n q 4 ( z ) ( a n 2 ( z ) + | b ( z ) | 2 ) q 4 × E   E   j | Λ n Λ n ( j ) | 2 I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } I { B } q 4 .
We noted that:
D ^ n ( 24 ) C q | S y ( z ) | 1 n j = 1 n V j D n q 2 q .
Then, Inequality (42) yielded:
D ^ n ( 24 ) C q | S y ( z ) | β n 1 2 ( z ) ( a n 2 ( z ) + | b ( z ) | 2 ) 1 2 × 1 n j = 1 n E   2 q E   j | Λ n Λ n ( j ) | 2 I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } I { B } q 4 D n q 2 q .
We rewrote this as:
D ^ n ( 24 ) L 1 D n q 2 q ,
where
L 1 = C q | S y ( z ) | β n 1 2 ( z ) ( a n 2 ( z ) + | b ( z ) | 2 ) 1 2 × 1 n j = 1 n E   2 q E   j | Λ n Λ n ( j ) | 2 I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } I { B } q 4 .

6.4.6. Estimation of D ˜ n ( 24 )

We recalled that:
D ˜ n ( 24 ) = C q q q 1 n | S y ( z ) | j = 1 n E   | ε ^ j | | T ^ n T ^ n ( j ) | q 1 h γ ( | Λ n | , v ) I { B } .
Using Inequalities (40) and (41) and a n ( z ) C n v , we obtained:
| T ^ n T ^ n ( j ) | | b ( z ) | + | a n ( z ) | + C a n ( z ) | T n ( j ) | | Λ n Λ n ( j ) | I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } .
By applying:
| T n ( j ) | I { | Λ n ( j ) ( z ) | C a n ( z ) } C ( a n 2 ( z ) + | b ( z ) | a n ( z ) ) ,
we obtained:
| T ^ n T ^ n ( j ) | C ( | b ( z ) | + a n ( z ) ) | Λ n Λ n ( j ) | I { max { | Λ n | , | Λ n ( j ) | } C a n ( z ) } .
The last inequality produced:
D ˜ n ( 24 ) C q q q 1 ( a n ( z ) + | b ( z ) | ) q 1 n | S y ( z ) | j = 1 n E   1 q E   j | ε j | 2 h γ ( | Λ n | , v ) I { B } q 2 × E   q 1 q E   j | Λ n Λ n ( j ) | 2 q I { B } 1 2 C q q q | S y ( z ) | β n 1 2 ( z ) ( a n ( z ) + | b ( z ) | ) q 1 1 n j = 1 n E   E   j | Λ n Λ n ( j ) | 2 q I { Q } I { B } q 1 2 q .
We put:
R n ( q ) : = 1 n j = 1 n E   E   j | Λ n Λ n ( j ) | 2 I { B } I { Q } q 2
and
U n ( q ) : = 1 n j = 1 n E   | Λ n Λ n ( j ) | 2 q I { B } I { Q } .
By applying Lemma 5, we obtained:
R n ( q ) C q | S y ( z ) | q a n q 2 ( z ) ( n v ) q ( | S y ( z ) | q | A 0 ( z ) | q 2 β n q 2 ( z ) + | A 0 ( z ) | q 2 ( n p ) q 2 + 1 ( n v ) q 2 ) .
Finally, using Lemma 6, we obtained:
U n q 1 2 q ( q ) C q q q 1 a n ( z ) n v q 1 | S y ( z ) | 2 ( q 1 ) | A 0 ( z ) | ( n p ) 2 ϰ q 1 + C q a n ( z ) n v q 1 | S y ( z ) | 2 ( q 1 ) β n q 1 2 ( z ) + C q 1 q q 1 2 | S y ( z ) | a n ( z ) n v q 1 2 | S y ( z ) | | A 0 ( z ) | n v n p q 1 2 + C q 1 q q 1 | S y ( z ) n v q 1 | A 0 ( z ) | ( n p ) 2 ϰ ( q 1 ) + C q q q 1 | S y ( z ) | n v q 1 a n ( z ) n v q 1 2 + C q 1 q 3 ( q 1 ) 2 a n ( z ) | S y ( z ) | n v q 1 2 | A 0 ( z ) | | S y ( z ) | ( n p ) 2 ϰ q 1 2 1 n v q 1 + C q 1 q 2 ( q 1 ) | A 0 ( z ) | q 1 | S y ( z ) | q 1 ( n v ) q 1 ( n p ) 2 ϰ ( q 1 ) .
Using:
| S y ( z ) | | A 0 ( z ) | 1 + 2 y ,
we could write:
U n q 1 2 q ( q ) C q 1 q q 1 | S y ( z ) | a n ( z ) n v q 1 1 ( n p ) 2 ϰ ( q 1 ) + C q 1 | S y ( z ) | a n ( z ) n v q 1 | S y ( z ) | q 1 β n q 1 2 ( z ) + C q 1 q q 1 2 | S y ( z ) | 1 2 a n 1 2 ( z ) n v q 1 1 n p q 1 2 + C q q q 1 1 n v q 1 1 n p 2 ϰ ( q 1 ) + C q 1 q q 1 | S y ( z ) | q 1 2 ( n v ) q 1 a n ( z ) | S y ( z ) | ( n v ) q 1 2 + C q q 3 ( q 1 ) 2 1 n q 1 v q 1 | S y ( z ) | a n ( z ) n v q 1 2 1 ( n p ) ( q 1 ) ϰ + C q 1 q 2 ( q 1 1 ( n v ) q 1 ( n p ) 2 ϰ ( q 1 ) .
By combining Inequalities (29), (31), (32), (33), (36), (37) and (43) and applying Young’s inequality, we obtained the proof. □

6.5. The Proof of Theorem 4

Proof.
We considered the case z D , where
D = { z = u + i v : ( 1 y v ) + | u | 1 + y + v , V v v 0 = n 1 log 4 n } .
For z, we obtained:
2 V + ( 1 + y ) | z | 1 2 ( 1 y ) .
This implied that the constant C 1 exists, depending on V , y , such that:
| b ( z ) | C 1 .
First, we considered the case | b ( z ) | Γ n . Without loss of generality, we assumed that C 0 C 1 , where C 0 is the constant in the definition of a n ( z ) . This meant that a n ( z ) = Im b ( z ) + C 0 Γ n . Furthermore:
| b n ( z ) | I { Q } ( 1 2 γ ) | b ( z ) | I { Q }
and
| Λ n ( z ) | I { Q } C | T n | | b ( z ) | .
Using Theorem 3, we obtained:
E   | Λ n ( z ) | q I { Q } C q q q F 1 + ⋯ + F 6 | b ( z ) | q .
We let:
d ( z ) = Im b ( z ) 1 n v | b ( z ) | .
The analysis of F i / | b ( z ) | q for i = 1 , … , 6 .
  • The bound of F 1 / | b ( z ) | q . By the definition of a n ( z ) and F 1 , we obtained:
    F 1 / | b ( z ) | q C q d ( z ) n v + 1 n p | b ( z ) | q .
  • The bound of F 2 / | b ( z ) | q . By the definition of F 2 , we obtained:
    F 2 / | b ( z ) | q C q | S y ( z ) | 2 q d ( z ) ( n v ) + 1 ( n p | b ( z ) | ) q .
    For this, we used | S y ( z ) | | A 0 ( z ) | = | 1 + z S y ( z ) | C .
  • The bound for F 3 / | b ( z ) | q . By the definition of F 3 , we obtained:
    F 3 / | b ( z ) | q | S y ( z ) | 3 q 2 a n q 2 ( z ) ( n v ) q + | S y ( z ) | q 2 ( n v ) q 2 ( n p ) q 2 + | S y ( z ) | q ( n v ) q 1 ( n p ) | b ( z ) | + d ( z ) n v q .
  • The bound of F 4 / | b ( z ) | q . Simple calculations showed that:
    F 4 ( z ) / | b ( z ) | q | S y ( z ) | 3 q 2 ( n v ) q a n q 2 ( z ) + | S y ( z ) | q 2 a n q 2 ( z ) ( n v ) q 2 + | S y ( z ) | q ( a n n v ) q 2 1 ( n p ) | b ( z ) | + d ( z ) n v q .
  • The bound of F 5 / | b ( z ) | q . We noted that:
    ( a n ( z ) + | b ( z ) | ) / | b ( z ) | C .
    From there and from the definition of F 5 , it followed that:
    F 5 ( z ) / | b ( z ) | q C q q q 2 ( ( d ( z ) n v + 1 ( n p ) | b ( z ) | ) 3 q 4 1 n v q 4 + d ( z ) n v + 1 ( n p ) | b ( z ) | q 2 | S y ( z ) | n v q 2 ) .
  • The bound of F 6 / | b ( z ) | q . Simple calculations showed that:
    F 6 / | b ( z ) | q C q q 2 ( q 1 ) ( n p ) 2 ϰ ( q 1 ) β n 1 2 ( z ) | b ( z ) | d ( z ) n v + 1 n p | b ( z ) | q 1 + C q q 2 ( q 1 ) β n 1 2 ( z ) | b ( z ) | 1 d ( z ) n v + 1 n p | b ( z ) | 3 ( q 1 ) 2 + C q q 5 ( q 1 ) 2 ( n p ) q 1 2 β n 1 2 ( z ) | b ( z ) | 1 d ( z ) n v + 1 n p | b ( z ) | ( q 1 ) 2 1 ( n v ) q 1 2 + C q q 3 q ( n p ) 2 ϰ ( q 1 ) 1 ( n v ) q 1 + q q | S y ( z ) | q 1 2 ( n v ) q 1 β n 1 2 ( z ) | b ( z ) | d ( z ) n v + 1 n p | b ( z ) | q 1 2 + C q q 3 q ( n v ) q 1 2 β n 1 2 ( z ) | b ( z ) | d ( z ) n v + 1 n p | b ( z ) | q 1 2 1 ( n p ) ϰ ( q 1 ) + C q q 4 ( q 1 ) ( n p ) 2 ϰ ( q 1 ) 1 ( n v ) q 1 β n 1 2 ( z ) | b ( z ) | .
We defined:
d n ( z ) : = d ( z ) n v + 1 ( n p ) | b ( z ) | .
By combining all of these estimations and using:
d n ( z ) | b ( z ) | 1 n p ,
we obtained:
I { Γ n | b ( z ) | } E   | Λ n | q I { Q } C q q q ( q q 2 ( n v ) q 2 d n q 2 ( z ) + d n q ( z ) ) .
For z D (such that Γ n | b ( z ) | ), we could write:
E   | Λ n ( z ) | q I { Q } C q q q ( q q 2 ( n v ) q 2 d n q 2 ( z ) + d n q ( z ) ) δ q Γ n q .
Then, we considered | b ( z ) | Γ n . In this case, we used the inequality:
| Λ n | | T n | .
In what follows, we assumed that q log n .
The bound of E   | T n | q for | b ( z ) | Γ n .
  • By the definition of a n ( z ) , we obtained:
    a n ( z ) n v = Γ n n v .
    We could obtain from this that, for sufficiently small values of δ > 0 :
    F 1 C q Γ n q / ( n v ) q δ q Γ n 2 q .
  • We noted that Γ n Im b ( z ) Im A 0 ( z ) . This immediately implied that:
    C q q q F 2 δ q Γ n 2 q .
  • We noted that for Im b ( z ) | b ( z ) | Γ n , we obtained:
    min { 1 n p | b ( z ) | , 1 n p } = 1 n p
    and
    1 n p δ Γ n 2 / log 2 n .
    From there, it followed that:
    C q q q F 3 δ q Γ n 2 q .
  • Simple calculations showed that:
    C q q q F 4 δ q Γ n 2 q .
  • Simple calculations showed that:
    C q q q F 5 C q Γ n 4 q δ q Γ n 2 q .
  • It was straightforward to check that:
    C q q q F 6 C q Γ n 3 q δ q Γ n 2 q .
By applying the Markov inequality for Γ n Im b ( z ) C , we obtained:
Pr { | Λ n | > K d n ( z ) log n ; Q } C n q .
On the other hand, when Im b ( z ) Γ n , we used the inequality:
| Λ n | C | T n | 1 2 .
By applying the Markov inequality, we obtained:
Pr { | Λ n ( z ) | 2 δ Γ n ; Q } C n Q .
This implied that:
Pr { | Λ n ( v ) | 1 2 Γ n ; Q } C n Q .
We noted that Q = Q ( v ) for V v v 0 and that for V v v 0 :
a n ( z ) C log 2 n n .
On the other hand:
sup u | Λ n ( v ) Λ n ( v ) | | v v | v 0 2 n 2 | v v | = n 2 Δ v .
We chose Δ v , such that:
sup u | Λ n ( v ) Λ n ( v ) | 1 2 Γ n .
It was enough to put Δ v : = n 4 . We let K : = V v 0 Δ v . For ν = 0 , , K 1 , we defined:
v ν = v 0 + ν Δ v ,
and v K = V . We noted that v 0 < v 1 < ⋯ < v K = V and that:
sup u | Λ n ( v ν + 1 ) Λ n ( v ν ) | 1 2 Γ n .
We started with v K = V . We noted that:
Pr { Q ( V ) } = 1 .
This implied that:
Pr { | Λ n ( v K ) | 1 2 Γ n } C n Q .
From there, it followed that:
Pr { Q ( v K 1 ) } C n Q .
By repeating this procedure and using the union bound, we obtained the proof.
Thus, Theorem 4 was proven. □

7. Auxiliary Lemmas

Lemma 1.
Under the conditions of the theorem, for j J c and l K c , we have:
max { | ε j 1 ( J , K ) | , | ε l + n , 1 ( J , K ) | } C n v .
Proof. 
For simplicity, we only considered the case J = and K = . We noted that:
ε j 1 = 1 2 m Tr R m n z Tr R ( j ) m n 1 z = 1 2 m Tr R Tr R ( j ) 1 2 m z .
By applying Schur’s formula, we obtained:
| ε j 1 | 1 n v .
The second inequality was proven in a similar way. □
Lemma 2.
Under the conditions of Theorem 5, for all j J c , the following inequalities are valid:
E   j | ε j 2 ( J , K ) | 2 μ 4 n p 1 n l = 1 m | R l + n , l + n ( J { j } , K ) | 2
and
E   l + n | ε l + n , 2 ( J , K ) | 2 μ 4 n p 1 n j = 1 n | R j j ( J , K { l } ) | 2 .
In addition, for q > 2 , we have:
E   j | ε j 2 ( J , K ) | q C q q q 2 ( n p ) q 2 1 n l = 1 m | R l + n , l + n ( J { j } , K ) | 2 q 2 + q q ( n p ) 2 q ϰ + 1 1 n l = 1 m | R l + n , l + n ( J { j } , K ) | q
and for l K c , we have:
E   l + n | ε l + n , 2 ( J , K ) | q C q q q 2 ( n p ) q 2 1 n j = 1 n | R j j ( J , K { l } ) | 2 q 2 + q q ( n p ) 2 q ϰ + 1 1 n j = 1 n | R j j ( J , K { l } ) | q .
Proof. 
For simplicity, we only considered the case J = and K = . The first two inequalities were obvious. We only considered q > 2 . By applying Rosenthal’s inequality, for q > 2 , we obtained:
E   j | ε j 2 | q = 1 ( m p ) q E   j | l = 1 m ( X j l 2 ξ j l p ) R l + n , l + n ( j ) | q C q ( m p ) q [ q q 2 l = 1 m E   j | X j l 2 ξ j l p | 2 | R l + n , l + n ( j ) | 2 q 2 + q q l = 1 m E   j | X j l 2 ξ j l p | q | R l + n , l + n ( j ) | q ] C q ( m p ) q 2 [ ( q μ 4 ) q 2 1 m l = 1 m | R l + n , l + n ( j ) | 2 q 2 + m q q ( m p ) q 2 μ ˜ 2 q 1 m l = 1 m | R l + n , l + n ( j ) | q ] .
We recalled that:
μ ˜ r = E   | X j k ξ j k | r
and under the conditions of the theorem:
μ ˜ 2 q C q p ( n p ) q 2 q ϰ 2 μ 4 + δ .
By substituting the last inequality into Inequality (44), we obtained:
E   j | ε j 2 | q C q q q 2 ( m p ) q 2 1 m l = 1 m | R l + n , l + n ( j ) | 2 q 2 + q q ( m p ) 2 q ϰ + 1 1 m l = 1 m | R l + n , l + n ( j ) | q .
The second inequality could be proven similarly. □
Lemma 3.
Under the conditions of the theorem, for all j T J , the following inequalities are valid:
E   j | ε j 3 ( J , K ) | 2 C l , k = 1 m | R l + n , k + n ( J { j } , K ) ( z ) | 2 n 2
and
E   l + n | ε l + n , 3 ( J , K ) | 2 C i , k = 1 n | R i , k ( J , K { l } ) ( z ) | 2 n 2 .
In addition, for q > 2 , we have:
E   j | ε j 3 ( J , K ) | q C q ( q q ( n v ) q 2 ( Im s n ( j ) ( z ) Im 1 y z ) q 2 + q 3 q 2 ( n v ) q 2 ( n p ) q ϰ 1 1 n l = 1 m ( Im R l + n , l + n ( J { j } , K ) ) q 2 + q 2 q ( n p ) 2 q ϰ 1 n 2 l = 1 m k = 1 m | R l + n , k + n ( J { j } , K ) | q )
and for l T K 1 , we have:
E   l + n | ε l + n , 3 ( J , K ) | q C q ( q q ( n v ) q 2 ( Im s n ( l ) ( z ) ) q 2 + q 3 q 2 ( n v ) q 2 ( n p ) q ϰ 1 1 n j = 1 m ( Im R j j ( J , K { l + n } ) ) q + q 2 q ( n p ) 2 q ϰ n 2 j = 1 n k = 1 n | R k j ( J , K { l + n } ) | q ) .
Proof. 
It sufficed to apply the inequality from Corollary 1 of [16]. □
We recalled the notation:
β n ( z ) = a n ( z ) n v + | A 0 ( z ) | 2 n p .
Lemma 4.
Under the conditions of the theorem, the following bounds are valid:
E   j | R j j E   j R j j | 2 I { Q } I { B } C | S y ( z ) | 4 β n ( z )
and
E   j | R j j E   j R j j | q I { Q } I { B } C q | S y ( z ) | 2 q q q q q | A 0 ( z ) | ( n p ) 2 ϰ q + β n q 2 ( z ) .
Proof. 
We considered the equality:
R j j = 1 z 1 y z + y s n ( j ) ( z ) 1 + ε ^ j R j j .
It implied that:
R j j E   j R j j = 1 z 1 y z + y s n ( j ) ( z ) ε ^ j R j j E   j ε ^ j R j j .
Further, we noted that for a sufficiently small γ value, a constant H existed, such that:
1 z 1 y z + y s n ( j ) ( z ) I { Q } H | S y ( z ) | I { Q } .
Hence:
E   j | R j j E   j R j j | 2 I { Q } I { B } H 2 | S y ( z ) | 2 ( E   j | ε ^ j | 2 | R j j | 2 I { Q } I { B } + E   j I { Q } I { B } E   j | ε ^ j | 2 | R j j | 2 ) .
It was easy to see that:
E   j | ε ^ j | 2 | R j j | 2 I { Q } I { B } C | S y ( z ) | 2 E   j | ε ^ j | 2 I { Q } I { B } C | S y ( z ) | 2 a n ( z ) n v + | A 0 ( z ) | 2 n p .
We introduced the events:
Q ( j ) = { | Λ n ( j ) | 2 γ a n ( z ) + 1 n v } .
It was obvious that:
I { Q } I { Q } I { Q ( j ) } .
Consequently:
E   j I { Q } I { B } E   j | ε ^ j | 2 | R j j | 2 E   j I { Q } I { B } E   j | ε ^ j | 2 | R j j | 2 I { Q ( j ) } .
Further, we considered Q ˜ = { | Λ n | 2 γ a n ( z ) } . We obtained:
I { Q ( j ) } I { Q ˜ } .
Then, it followed that:
E   j I { Q } I { B } E   j | ε ^ j | 2 | R j j | 2 E   j I { Q } I { B } E   j | ε ^ j | 2 | R j j | 2 I { Q ˜ } .
Next, the following inequality held:
E   j | ε ^ j | 2 | R j j | 2 I { Q ˜ } E   j | ε ^ j | 2 | R j j | 2 I { Q ˜ } I { B ˜ } + E   j | ε ^ j | 2 | R j j | 2 I { Q ˜ } I { B c ˜ } .
Under the condition C 0 and the inequality | R j j | v 0 1 , we obtained the bounds:
E   j | ε ^ j | 2 | R j j | 2 I { Q ˜ } I { B c ˜ } C n c log n .
By applying Lemmas 2 and 3, for the first term on the right side of (48), we obtained:
E   j | ε ^ j | 2 | R j j | 2 I { Q ˜ } I { B ˜ } C | S y ( z ) | 2 a n ( z ) n v + | A 0 ( z ) | 2 n p .
This completed the proof of Inequality (45).
Furthermore, by using representation (47), we obtained:
E   j | R j j E   j R j j | q I { Q } I { B } C q | S y ( z ) | q E   | ε ^ j | q | R j j | q I { Q } I { B } C q | S y ( z ) | 2 q E   j | ε ^ j | q I { Q } I { B } .
By applying Lemmas 2 and 3, we obtained:
E   j | R j j E   j R j j | q I { Q } I { B } C q | S y ( z ) | 2 q ( q | A 0 ( z ) | 2 n p q 2 + q | A 0 ( z ) | ( n p ) 2 ϰ q + q 2 a n ( z ) n v q 2 + q 3 | A 0 ( z ) | n v ( n p ) 2 ϰ q 2 + q 2 | A 0 ( z ) | ( n p ) 2 ϰ q ) .
By applying Young’s inequality, we obtained the required proof. Thus, the lemma was proven. □
Lemma 5.
Under the conditions of the theorem, we have:
E   j | Λ n Λ n ( j ) | 2 I { Q } I { B } C | S y ( z ) | 4 | A 0 ( z ) | a n ( z ) ( n v ) 2 β n + C | S y ( z ) | 2 | A 0 ( z ) | a n ( z ) ( n v ) 2 n p + C | S y ( z ) | 2 a n ( z ) ( n v ) 3 .
Proof. 
We set Λ ^ n ( j ) = s n ( j ) ( z ) S y ( z ) . Using Schur’s complement formula:
Λ n Λ ^ n ( j ) = 1 2 n ( 1 + 1 n p l , k = 1 m X j l X j k ξ j l ξ j k [ R ( j ) 2 ] k + n , l + n ) R j j .
Since Λ ^ n ( j ) was measurable with respect to M ( j ) , we could write:
Λ n Λ n ( j ) = ( Λ n Λ ^ n ( j ) ) E   j { Λ n Λ ^ n ( j ) } .
We introduced the notation:
η j 1 = 1 n p l = 1 m ( X j l 2 ξ j l p ) [ R ( j ) 2 ] l + n , l + n , η j 2 = 1 n p l = 1 m k = 1 , k l m X j l X j k ξ j l ξ j k [ R ( j ) 2 ] k + n , l + n .
In this notation:
Λ n Λ n ( j ) = 1 n 1 + 1 n l = 1 m [ R ( j ) 2 ] l + n , l + n ( R j j E   j R j j ) + 1 n ( η j 1 + η j 2 ) R j j 1 n E   j ( η j 1 + η j 2 ) R j j .
We noted that:
E   j | η j 1 | 2 I { Q } I { B } C n 2 p l = 1 m | [ R ( j ) 2 ] l + n , l + n | 2 I { Q ( j ) } I { B ( j ) } .
Since:
| [ R ( j ) 2 ] l + n , l + n | k = 1 m | R ( j ) l + n , k + n | 2 C v Im R l + n , l + n ( j ) ,
Theorem 5 produced:
E   j | η j 1 | 2 I { Q } I { B } C n p v 2 1 n l = 1 m Im R l + n , l + n ( j ) 2 I { Q ( j ) } I { B ( j ) } C | A 0 ( z ) | a n ( z ) n p v 2 .
Similarly, for the moment of η j 2 , we obtained the following estimate:
E   j | η j 2 | 2 I { Q } I { B } C n 2 l , k = 1 m | [ R ( j ) ] 2 l + n , k + n | 2 I { Q ( j ) } I { B ( j ) } C n 2 Tr | R ( j ) | 4 I { Q ( j ) } I { B ( j ) } C n v 3 a n ( z ) .
From the above estimates and Lemma 4, we concluded that:
E   j | Λ n Λ n ( j ) | 2 I { Q } I { B } C | A 0 ( z ) | a n ( z ) ( n v ) 2 | S y ( z ) | 2 n p + E   j | R j j E   j R j j | 2 I { Q } I { B } + C | S y ( z ) | 2 ( n v ) 2 a n ( z ) n v .
Thus, the lemma was proven. □
Lemma 6.
Under the conditions of the theorem, for 2 q c log n , we have:
E   j | Λ n Λ n ( j ) | q I { Q } I { B } C q | S y ( z ) | 2 q a n q ( z ) ( n v ) q q q | A 0 ( z ) | ( n p ) 2 ϰ q + β n q 2 ( z ) + C q q 2 | S y ( z ) | q ( n v ) q ( n p ) q 2 | A 0 ( z ) | q 2 a n q 2 ( z ) + C q q | S y ( z ) | q ( n v ) q ( n p ) 2 q ϰ + 1 | A 0 ( z ) | q + C q q q | S y ( z ) | q ( n v ) 3 q 2 a n q 2 ( z ) + C q q 3 q 2 | S y ( z ) | q ( n v ) 3 q 2 ( n p ) q ϰ + 1 | A 0 ( z ) | q 2 a n q 2 ( z ) + C | S y ( z ) | q q 2 q ( n p ) 2 q ϰ + 2 n q v q | A 0 ( z ) | q .
Proof. 
We used the representation:
Λ n Λ n ( j ) = 1 n 1 + 1 n l = 1 m [ R ( j ) 2 ] l + n , l + n ( R j j E   j R j j ) + 1 n ( η j 1 + η j 2 ) R j j 1 n E   j ( η j 1 + η j 2 ) R j j .
We noted that by using Rosenthal’s inequality:
E   j | η j 1 | q I { Q } I { B } C q q 2 | A 0 ( z ) | q 2 a n q 2 ( z ) v q n q 2 p q 2 + C q q | A 0 ( z ) | q v q ( n p ) 2 q ϰ + 1 .
Similarly, for the second moment of η j 2 , we obtained the following estimate:
E   j | η j 2 | q I { Q } I { B } C q q q n q 2 v 3 q 2 a n q 2 ( z ) + C q q 3 q 2 n q 2 v 3 q 2 ( n p ) q ϰ + 1 | A 0 ( z ) | q 2 a n q 2 ( z ) + C q q 2 q | A 0 ( z ) | q ( n p ) 2 q ϰ + 2 v q .
From the estimates above and Lemma 4, we concluded that:
E   j | Λ n Λ n ( j ) | q I { Q } I { B } C q a n q ( z ) ( n v ) q E   j | R j j E   j R j j | q I { Q } I { B } + C q q q 2 a n q 2 ( z ) | A 0 ( z ) | q 2 | S y ( z ) | q ( n v ) q ( n p ) q 2 + C q q q | S y ( z ) | q A 0 ( z ) | q ( n v ) q ( n p ) 2 q ϰ + 1 + C q q q a n q 2 ( z ) | S y ( z ) | q ( n v ) 3 q 2 + C q q 3 q 2 | S y ( z ) | q | A 0 ( z ) | q 2 a n q 2 ( z ) ( n v ) 3 q 2 ( n p ) q ϰ + 1 + C | S y ( z ) | q q 2 q | A 0 ( z ) | q ( n p ) 2 q ϰ + 2 n q v q .
To finish the proof, we applied Inequalities (45) and (46). Thus, the lemma was proven. □
Lemma 7.
For 1 y v | u | 1 + y + v , the following inequality holds:
| b ( z ) | C a n ( z ) .
Proof. 
We noted that:
b ( z ) = z 1 y z + 2 y S y ( z ) = ( z 1 y z ) 2 4 y
and
a n ( z ) = Im { ( z 1 y z ) 2 4 y } + 1 n v + 1 n p .
It was easy to show that for 1 y | u | 1 + y :
Re { ( z 1 y z ) 2 4 y } 0 .
Indeed:
Re { ( z 1 y z ) 2 4 y } ( u 2 + 1 y ) 2 u 2 2 ( 1 + y ) .
The last expression was not positive for 1 y | u | 1 + y . From the negativity of the real part, it followed that:
Im { ( z 1 y z ) 2 4 y } 1 2 ( z 1 y z ) 2 4 y .
This implied the required proof. Thus, the lemma was proven. □
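The key step of Lemma 7, the non-positivity of Re { ( z − ( 1 − y ) / z )² − 4 y } on the bulk 1 − √y ≤ | u | ≤ 1 + √y, can be checked numerically at the real edge v = 0 (a sketch; the grid size and the value of y are arbitrary choices, and the expression is real for real u):

```python
import math

def re_b_squared(u, y):
    """(u - (1 - y)/u)**2 - 4*y: the real value of b(z)^2 at z = u on the real axis."""
    w = u - (1 - y) / u
    return w * w - 4 * y

# scan the Marchenko-Pastur bulk [1 - sqrt(y), 1 + sqrt(y)]
y = 0.4
lo, hi = 1 - math.sqrt(y), 1 + math.sqrt(y)
vals = [re_b_squared(lo + k * (hi - lo) / 200, y) for k in range(201)]
```

The expression vanishes exactly at the edges u = 1 ± √y and is negative in between, which is the content of the displayed bound.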
Lemma 8.
There is an absolute constant C > 0 , such that for z = u + i v :
| Λ n | C min { | T n | | b ( z ) | , | T n | } ,
and that for z = u + i v to satisfy 1 y v | u | 1 + y + v and v > 0 , the following inequality is valid:
| Im Λ n | C min { | T n | | b ( z ) | , | T n | } .
Proof. 
We changed the variables by setting:
w = 1 y ( z 1 y z ) , z = w y + y w 2 + 4 ( 1 y ) 2 ,
and
S ˜ ( w ) = y S y ( z ) , s ˜ n ( w ) = y s n ( z ) .
In this notation, we could rewrite the main equation in the form:
1 + w s ˜ n ( w ) + s ˜ n 2 ( w ) = T n .
It was easy to see that:
Λ n = 1 y ( s ˜ n ( z ) S ˜ ( w ) ) .
Then, it sufficed to repeat the proof of Lemma B.1 from [17]. We noted that this lemma implied that Inequality (50) held for all w with Im w > 0 (and, therefore, for all z) and that Inequality (49) held for all w satisfying | Re w | 2 + Im w . From this, we concluded that Inequality (49) held for z = u + i v , such that 1 y c v | u | 1 + y + c v for a sufficiently small constant c > 0 .
Thus, the lemma was proven. □
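Numerically, the substitution above does reduce the Marchenko–Pastur-type equation to the semicircle one: with w = ( z − ( 1 − y ) / z ) / √y and S̃ ( w ) = √y S y ( z ) , the residual of 1 + w S̃ + S̃² vanishes. A self-contained sketch of this consistency check (the test point is an arbitrary choice):

```python
import cmath

def S_y(z, y):
    """Root with positive imaginary part of 1 + (z - (1 - y)/z)*S + y*S**2 = 0."""
    w = z - (1 - y) / z
    r = cmath.sqrt(w * w - 4 * y)
    s = (-w + r) / (2 * y)
    return s if s.imag > 0 else (-w - r) / (2 * y)

def residual_semicircle(z, y):
    """Residual of 1 + w*St + St**2 under the change of variables of the lemma."""
    w = (z - (1 - y) / z) / cmath.sqrt(y)   # w = (z - (1-y)/z)/sqrt(y)
    St = cmath.sqrt(y) * S_y(z, y)          # S~(w) = sqrt(y) * S_y(z)
    return 1 + w * St + St * St
```

Multiplying the semicircle equation through by 1 / y recovers the original quadratic, so the residual is identically zero.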
Lemma 9.
For z = u + i v , we have:
| A 0 ( z ) | = 1 | z + y S y ( z ) | 1 + | b ( z ) | ,
and
Im A 0 ( z ) Im b ( z ) ,
where
b ( z ) = z 1 y z + 2 y S y ( z ) .
Proof. 
First, we noted that:
$$ -\frac{1}{z + y\,S_y(z)} = y\,S_y(z) - \frac{1-y}{z}. $$
Using this, we could write:
$$ b(z) = A_0(z) - \frac{1}{A_0(z)}. $$
From there, it followed that:
$$ A_0(z) = \frac{b(z) \pm \sqrt{b^2(z) + 4}}{2}. $$
This implied that:
$$ |A_0(z)| \le 1 + |b(z)|. $$
Equality (51) yielded:
$$ \operatorname{Im} A_0(z) = \frac{|A_0(z)|^2}{1 + |A_0(z)|^2}\,\operatorname{Im} b(z) \le \operatorname{Im} b(z). $$
Thus, the lemma was proven. □
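The identities used in this proof can be checked at a sample point. The sketch below is ours, not the authors'; it assumes the conventions used above, namely that $S_y$ is the root of $y S^2 + (z - (1-y)/z) S + 1 = 0$ with positive imaginary part and that $A_0(z) = -1/(z + y S_y(z))$:

```python
# Numerical illustration of the identities in Lemma 9 (conventions assumed as above).
import cmath

def stieltjes(z, y):
    # root of y*S^2 + (z - (1-y)/z)*S + 1 = 0 with Im S > 0
    A = z - (1 - y)/z
    d = cmath.sqrt(A*A - 4*y)
    r1, r2 = (-A + d)/(2*y), (-A - d)/(2*y)
    return r1 if r1.imag > 0 else r2

y, z = 0.5, 0.9 + 0.05j
S = stieltjes(z, y)
A = z - (1 - y)/z
b = A + 2*y*S                       # b(z) = z - (1-y)/z + 2*y*S_y(z)
A0 = -1/(z + y*S)

assert abs(y*S*S + A*S + 1) < 1e-10            # self-consistent equation
assert abs(b*b - (A*A - 4*y)) < 1e-10          # b(z)^2 = (z - (1-y)/z)^2 - 4y
assert abs(b - (A0 - 1/A0)) < 1e-10            # b = A_0 - 1/A_0
ratio = abs(A0)**2 / (1 + abs(A0)**2)
assert abs(A0.imag - ratio*b.imag) < 1e-10     # Im A_0 = |A_0|^2/(1+|A_0|^2) * Im b
assert abs(A0) <= 1 + abs(b) + 1e-10           # |A_0| <= 1 + |b|
print("Lemma 9 identities verified")
```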
Lemma 10.
A positive absolute constant $B$ exists, such that:
$$ a_n(z)\,|A_0(z)| \le B $$
and
$$ |S_y(z)|\,|A_0(z)| \le C. $$
Proof. 
First, we considered the case $|b(z)| \le \Gamma_n \le 1$. Then, for $|z| \le C\,\Gamma_n$:
$$ a_n(z)\,|A_0(z)| \le \Gamma_n\big(|b(z)| + 1\big) \le C\,\Gamma_n\,|z| \le C. $$
In the case $\Gamma_n \le |b(z)| \le C$, we obtained:
$$ a_n(z)\,|A_0(z)| \le |b(z)|\big(|b(z)| + 1\big) \le C(C+1). $$
We then considered the remaining case $|b(z)| \ge \Gamma_n$:
$$ a_n(z)\,|A_0(z)| \le \Big( y\,|S_y(z)| + \frac{1-y}{|z|} \Big)\,\Gamma_n \le \sqrt{y}\,\Gamma_n + 1 - y \le 1. $$
To prove the second inequality, we considered the equality:
$$ |S_y(z)\,A_0(z)| = \Big| y\,S_y^2(z) - \frac{1-y}{z}\,S_y(z) \Big| = \big| 1 + z\,S_y(z) \big| \le C. $$
Thus, the lemma was proven. □
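The equality used for the second inequality follows from the self-consistent equation for $S_y$ and can be illustrated numerically. The sketch below is ours and assumes the same conventions as before ($S_y$ the upper-half-plane root of the quadratic, $A_0 = -1/(z + y S_y)$):

```python
# Check that S_y(z)*A_0(z) = -(1 + z*S_y(z)), hence |S_y A_0| = |1 + z*S_y|.
import cmath

def stieltjes(z, y):
    # root of y*S^2 + (z - (1-y)/z)*S + 1 = 0 with Im S > 0
    A = z - (1 - y)/z
    d = cmath.sqrt(A*A - 4*y)
    r1, r2 = (-A + d)/(2*y), (-A - d)/(2*y)
    return r1 if r1.imag > 0 else r2

y = 0.7
for z in (0.5 + 0.2j, 1.0 + 0.1j, 1.5 + 0.05j):
    S = stieltjes(z, y)
    A0 = -1/(z + y*S)
    lhs = S*A0
    rhs = y*S*S - (1 - y)/z*S       # = y*S_y^2 - ((1-y)/z)*S_y
    assert abs(lhs - rhs) < 1e-10
    assert abs(abs(lhs) - abs(1 + z*S)) < 1e-10
print("identity verified")
```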
We let $X$ be a rectangular $n \times m$ matrix with $m \ge n$. We let $s_1 \ge \dots \ge s_n$ be the singular values of the matrix $X$. The diagonal matrix with $d_{jj} = s_j$ was denoted by $D_n = (d_{jk})_{n \times n}$. We let $O_{n,k}$ be an $n \times k$ matrix with zero entries. We put $O_n = O_{n,n}$ and $\widetilde{D}_n = \begin{pmatrix} D_n & O_{n,m-n} \end{pmatrix}$. We let $L$ and $K$ be orthogonal (in the complex case, unitary) matrices, such that the singular value decomposition held:
$$ X = L\,\widetilde{D}_n\,K^*. $$
Furthermore, we let $I_n$ be the $n \times n$ identity matrix and $E_n = \begin{pmatrix} I_n & O_{n,m-n} \end{pmatrix}$. We introduced the matrices $L_n = L E_n$ and $K_n = K E_n^*$. We noted that $L_n^* = E_n^* L^*$ and $K_n^* = E_n K^*$. We introduced the matrix:
$$ V = \begin{pmatrix} O & X \\ X^* & O \end{pmatrix}. $$
We considered the matrix:
$$ Z = \frac{1}{\sqrt{2}} \begin{pmatrix} L & L_n \\ K_n & -K \end{pmatrix}. $$
We then obtained the following:
Lemma 11.
$$ Z^* V Z = \begin{pmatrix} D_n & O_n & O_{n,m-n} \\ O_n & -D_n & O_{n,m-n} \\ O_{m-n,n} & O_{m-n,n} & O_{m-n} \end{pmatrix} =: \widehat{D}. $$
Proof. 
The proof followed from direct calculations. It was straightforward to see that:
$$ Z^* V = \frac{1}{\sqrt{2}} \begin{pmatrix} K_n^* X^* & L^* X \\ -K^* X^* & L_n^* X \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} E_n \widetilde{D}_n^* L^* & \widetilde{D}_n K^* \\ -\widetilde{D}_n^* L^* & E_n^* \widetilde{D}_n K^* \end{pmatrix}. $$
Furthermore:
$$ Z^* V Z = \frac{1}{2} \begin{pmatrix} E_n \widetilde{D}_n^* + \widetilde{D}_n E_n^* & E_n \widetilde{D}_n^* E_n - \widetilde{D}_n \\ E_n^* \widetilde{D}_n E_n^* - \widetilde{D}_n^* & -\widetilde{D}_n^* E_n - E_n^* \widetilde{D}_n \end{pmatrix} = \widehat{D}. $$

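Lemma 11 can be illustrated numerically. The following sketch is ours, not the authors'; the sign placement in $Z$ is an assumption (the extracted formula lost the signs), and for simplicity we take $X$ already diagonal, so that $L = I_n$ and $K = I_m$:

```python
# Numerical illustration of Lemma 11 with a trivial singular value decomposition.
from math import sqrt

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

n, m = 2, 3
X = [[3.0, 0.0, 0.0], [0.0, 2.0, 0.0]]            # X = L * Dtilde_n * K^T
L = [[1.0, 0.0], [0.0, 1.0]]                      # left singular vectors
K = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
En = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]           # E_n = [I_n  O_{n,m-n}]
Ln, Kn = matmul(L, En), matmul(K, transpose(En))  # L_n = L E_n, K_n = K E_n^T

c = 1.0/sqrt(2.0)
V = [[0.0]*(n+m) for _ in range(n+m)]             # V = [[O, X], [X^T, O]]
Z = [[0.0]*(n+m) for _ in range(n+m)]             # Z = 2^{-1/2} [[L, L_n], [K_n, -K]]
for i in range(n):
    for j in range(m):
        V[i][n+j] = V[n+j][i] = X[i][j]
for i in range(n):
    for j in range(n): Z[i][j] = c*L[i][j]
    for j in range(m): Z[i][n+j] = c*Ln[i][j]
for i in range(m):
    for j in range(n): Z[n+i][j] = c*Kn[i][j]
    for j in range(m): Z[n+i][n+j] = -c*K[i][j]

D_hat = matmul(transpose(Z), matmul(V, Z))        # expect diag(s_1, s_2, -s_1, -s_2, 0)
print([round(D_hat[i][i], 10) for i in range(n+m)])
```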
8. Conclusions

In this work, we obtained results under the assumption that conditions $(C0)$–$(C2)$ were fulfilled. Condition $(C2)$ was of a technical nature: in our investigation of the asymptotic behaviour of the Stieltjes transform on a ray, this restriction could be eliminated. However, this is a technically cumbersome task that requires separate consideration.

Author Contributions

Writing—original draft, A.N.T. and D.A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank F. Götze for several fruitful discussions on this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wishart, J. The generalised product moment distribution in samples from a normal multivariate population. Biometrika 1928, 20A, 32–52.
2. Wigner, E.P. Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 1955, 62, 548–564.
3. Marchenko, V.A.; Pastur, L.A. Distribution of eigenvalues for some sets of random matrices. Mat. Sb. 1967, 72, 507–536.
4. Telatar, E. Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecomm. 1999, 10, 585–595.
5. Newman, M. Random graphs as models of networks. In Handbook of Graphs and Networks; Bornholdt, S., Schuster, H.G., Eds.; Wiley-VCH: Hoboken, NJ, USA, 2002; pp. 35–68.
6. Granziol, D. Beyond Random Matrix Theory for Deep Networks. arXiv 2021, arXiv:2006.07721v2.
7. Erdős, L.; Knowles, A.; Yau, H.-T.; Yin, J. Spectral statistics of Erdős–Rényi graphs I: Local semicircle law. Ann. Probab. 2013, 41, 2279–2375.
8. Erdős, L.; Knowles, A.; Yau, H.-T.; Yin, J. Spectral statistics of Erdős–Rényi graphs II: Eigenvalue spacing and the extreme eigenvalues. Comm. Math. Phys. 2012, 314, 587–640.
9. Huang, J.; Landon, B.; Yau, H.-T. Bulk universality of sparse random matrices. J. Math. Phys. 2015, 56, 123301.
10. Huang, J.; Yau, H.-T. Edge Universality of Sparse Random Matrices. arXiv 2022, arXiv:2206.06580.
11. Lee, J.O.; Schnelli, K. Tracy–Widom distribution for the largest eigenvalue of real sample covariance matrices with general population. Ann. Appl. Probab. 2016, 26, 3786–3839.
12. Hwang, J.Y.; Lee, J.O.; Schnelli, K. Local law and Tracy–Widom limit for sparse sample covariance matrices. Ann. Appl. Probab. 2019, 29, 3006–3036.
13. Götze, F.; Tikhomirov, A.N. On the largest and smallest singular values of sparse rectangular random matrices. Electron. J. Probab. 2021, submitted.
14. Rudelson, M.; Vershynin, R. Smallest singular value of a random rectangular matrix. Comm. Pure Appl. Math. 2009, 62, 1707–1739.
15. Götze, F.; Naumov, A.A.; Tikhomirov, A.N. On the local semicircular law for Wigner ensembles. Bernoulli 2018, 24, 2358–2400.
16. Götze, F.; Naumov, A.A.; Tikhomirov, A.N. Moment inequalities for linear and nonlinear statistics. Theory Probab. Appl. 2020, 65, 1–16.
17. Götze, F.; Naumov, A.A.; Tikhomirov, A.N. Local Semicircle Law under Moment Conditions. Part I: The Stieltjes Transform. arXiv 2016, arXiv:1510.07350v4.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
