Article

On Consistency of the Nearest Neighbor Estimator of the Density Function for m-AANA Samples

Center of Applied Mathematics, School of Big Data and Artificial Intelligence, Chizhou University, Chizhou 247000, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(20), 4391; https://doi.org/10.3390/math11204391
Submission received: 19 September 2023 / Revised: 13 October 2023 / Accepted: 19 October 2023 / Published: 23 October 2023

Abstract: In this paper, by establishing a Bernstein inequality for m-asymptotically almost negatively associated random variables, some results on consistency for the nearest neighbor estimator of the density function are further established. The results generalize some existing ones in the literature. Some numerical simulations are also provided to support the results.

1. Introduction

Nearest neighbor estimators apply flexibly to a wide range of questions and data types. Let X be a random variable whose density function f(x) is unknown and needs to be estimated, and let X_1, X_2, …, X_n be a sample drawn from the population X. To estimate f(x), Loftsgaarden and Quesenberry [1] proposed the nearest neighbor estimator f_n(x) as follows:
$$ f_n(x) = \frac{k_n}{2n\,a_n(x)}, $$
where 1 ≤ k_n ≤ n and
$$ a_n(x) = \min\{\alpha : \text{the number of } X_i \in [x-\alpha, x+\alpha] \text{ is no less than } k_n\}. $$
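In code, a_n(x) is simply the k_n-th smallest distance |X_i − x|, so the estimator reduces to a few lines. The following is a minimal Python sketch of the definition above; the function name and the sample used in the sanity check are our own illustration:

```python
import numpy as np

def nn_density(x, sample, k_n):
    """Nearest neighbor density estimate f_n(x) = k_n / (2 n a_n(x)).

    a_n(x) is the smallest radius alpha such that [x - alpha, x + alpha]
    contains at least k_n sample points, i.e. the k_n-th smallest |X_i - x|.
    """
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    a_n = np.sort(np.abs(sample - x))[k_n - 1]
    return k_n / (2.0 * n * a_n)

# Sanity check: for the sample {0, 1, 2, 3, 4} with x = 2 and k_n = 3,
# the sorted distances are 0, 1, 1, 2, 2, so a_n(2) = 1 and f_n(2) = 3/10.
print(nn_density(2.0, [0, 1, 2, 3, 4], 3))   # 0.3
```

On a standard normal sample of size 1000 with, say, k_n = n^{3/4}, the estimate at x = 0 lands near the true density value 1/√(2π) ≈ 0.3989.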
Since Loftsgaarden and Quesenberry [1] put forward this method of estimating the density function, many scholars have shown interest in this field. For some recent examples, Liu and Wu [2] established a Bernstein inequality to deal with consistency results under negatively dependent samples; Lu et al. [3] investigated consistency and convergence rates for this estimator based on φ-mixing samples; Liu and Zhang [4] established the consistency and asymptotic normality of the estimator based on α-mixing samples; Yang [5] established various consistency results for the estimator based on negatively associated (NA, in short) samples; Wang and Hu [6] obtained the corresponding results for widely orthant dependent (WOD, in short) samples, which extend and improve those of Yang [5] for NA samples, and further proved rates of strong consistency and uniformly strong consistency; Lan and Wu [7] investigated the rate of uniform strong consistency of the estimator under extended negatively dependent (END, in short) samples; and Wang and Wu [8] extended and improved the results of Lan and Wu [7] from END samples to m-extended negatively dependent (m-END, in short) samples, obtaining the same rates as for END samples.
This paper further studies this topic and extends the aforementioned results to a more general setting. We are now in a position to recall some concepts of dependent random variables, the first of which is that of asymptotically almost negatively associated (AANA, in short) random variables, first introduced by Chandra and Ghosal [9] as follows.
Definition 1.
We call a sequence {Z_n, n ≥ 1} of random variables AANA if there is a nonnegative sequence q(n) satisfying lim_{n→∞} q(n) = 0 such that for all n, l ≥ 1 and for all coordinatewise nondecreasing functions f_1 and f_2,
$$ \mathrm{Cov}\big(f_1(Z_n), f_2(Z_{n+1}, Z_{n+2}, \ldots, Z_{n+l})\big) \le q(n)\,\big[\mathrm{Var}(f_1(Z_n))\,\mathrm{Var}(f_2(Z_{n+1}, Z_{n+2}, \ldots, Z_{n+l}))\big]^{1/2} $$
whenever the variances above exist.
Since the concept of AANA random variables was put forward by Chandra and Ghosal [9], plenty of results have been established for this dependence structure. For instance, Kim and Ko [10] developed the Hájek–Rényi inequality for these dependent random variables; Yuan and An [11] established some moment inequalities for maximal sums; Chandra and Ghosal [12] as well as Shen and Wu [13] proved the strong law of large numbers for weighted sums; Yuan and An [14] investigated laws of large numbers for these dependent random variables under the Cesàro alpha-integrability condition; and Wu and Wang [15] studied the nearest neighbor estimator of the density function under AANA samples.
As an extension of AANA random variables, the concept of m-AANA random variables was raised by Nam et al. [16] as follows.
Definition 2.
Let m be a positive integer. We say that a sequence {Z_n, n ≥ 1} of random variables is m-AANA if there exists a nonnegative sequence q(n) → 0 as n → ∞ such that for all n ≥ 1, l ≥ m, and for all coordinatewise nondecreasing functions f_1 and f_2,
$$ \mathrm{Cov}\big(f_1(Z_n), f_2(Z_{n+m}, \ldots, Z_{n+l})\big) \le q(n)\,\big[\mathrm{Var}(f_1(Z_n))\,\mathrm{Var}(f_2(Z_{n+m}, \ldots, Z_{n+l}))\big]^{1/2} $$
whenever the variances exist.
It is known that many multivariate distributions satisfy the NA property. The concept of AANA random variables degenerates to that of NA random variables by taking q(n) ≡ 0, and an m-AANA sequence with m = 1 is exactly an AANA sequence. Therefore, the class of m-AANA random variables includes AANA random variables, m-NA random variables, NA random variables, moving average processes, and independent random variables as special cases, and is thus a more plausible assumption in realistic applications. Now, we present an example of m-AANA random variables that are not necessarily AANA.
Example 1.
Let {Y_n, n ≥ 1} be independent and identically distributed N(0, 1) random variables and define X_n = (1 + a_n^2)^{−1/2}(Y_n + a_n Y_{n+1}), where a_n > 0 and a_n → 0. It follows from Chandra and Ghosal [9] that {X_n, n ≥ 1} is a sequence of AANA random variables that is not NA. Now, for each n ≥ 1 define Z_{m(n−1)+1} = ⋯ = Z_{mn} = X_n with m ≥ 2. Then, it is easy to check that the sequence {Z_n, n ≥ 1} is m-AANA. However, it is not AANA, since the condition lim_{n→∞} q(n) = 0 fails if we take l = 1, for example.
In this paper, motivated by the literature above, we first establish a Bernstein inequality for m-asymptotically almost negatively associated (m-AANA, in short) random variables, which is of independent interest. Using this inequality, we further investigate the consistency of the nearest neighbor estimator under m-AANA samples. These results generalize the corresponding ones of Wu and Wang [15] from AANA samples to m-AANA samples.
The layout of this paper is as follows. Some preliminary lemmas are stated in Section 2. Section 3 includes the main results, while numerical simulations are given in Section 4 to support the theoretical results. The proofs of our main results are postponed to Section 5. The paper is concluded in Section 6. Throughout this paper, ⌊x⌋ stands for the integer part of x. Let log x = max{1, ln x}. The indicator function I(A) = 1 if the event A occurs and I(A) = 0 otherwise. C(f) = {x : f is continuous at x}. C and c_0 stand for positive constants whose values are not necessarily the same at each appearance. All limits are taken as n → ∞ unless specified otherwise.

2. Preliminary Lemmas

To prove the main results, we first provide several important lemmas in this section.
Lemma 1
(cf. [14]). Suppose that {X_n, n ≥ 1} is a sequence of AANA random variables with mixing coefficients {q(n), n ≥ 1}. If f_n(·), n ≥ 1, are all nondecreasing or all nonincreasing, then {f_n(X_n), n ≥ 1} is still a sequence of AANA random variables with the same mixing coefficients.
A combination of Lemma 1 and Definition 2 yields the following lemma, which is obvious, and thus the proof is omitted.
Lemma 2.
Suppose that {X_n, n ≥ 1} is a sequence of m-AANA random variables with mixing coefficients {q(n), n ≥ 1}. If f_n(·), n ≥ 1, are all nondecreasing or all nonincreasing, then {f_n(X_n), n ≥ 1} is still a sequence of m-AANA random variables with the same mixing coefficients.
Lemma 3
(cf. [15]). Let {X_n, n ≥ 1} be a sequence of AANA random variables with zero means and mixing coefficients {q(n), n ≥ 1}. Assume that |X_n| is bounded by a positive number b for each n ≥ 1. Then, a positive constant C exists such that for all n ≥ 1 and ε > 0,
$$ P\left( \left| \sum_{i=1}^{n} X_i \right| \ge \varepsilon \right) \le C \left( \sum_{k=1}^{n-1} q(k) + 1 \right) \exp\left\{ -\frac{\varepsilon^2}{2\sum_{i=1}^{n} E X_i^2 + \frac{2}{3} b\varepsilon} \right\}. $$
By virtue of Lemma 3, we can further prove the Bernstein inequality for m-AANA random variables. The lemma will play a significant role in the proof of the main results.
Lemma 4.
Let {X_n, n ≥ 1} be a sequence of m-AANA random variables with zero means and mixing coefficients {q(n), n ≥ 1}. Assume that |X_n| is bounded by a positive number b for each n ≥ 1. Then, a positive constant C exists such that for all n ≥ 1 and ε > 0,
$$ P\left( \left| \sum_{i=1}^{n} X_i \right| \ge \varepsilon \right) \le C m \left( \sum_{k=1}^{n-1} q(k) + 1 \right) \exp\left\{ -\frac{\varepsilon^2}{2m^2\sum_{i=1}^{n} E X_i^2 + \frac{2}{3} m b\varepsilon} \right\}. $$
Proof. 
For all sufficiently large n, an integer j ≥ 0 and an integer 1 ≤ l ≤ m always exist satisfying n = mj + l. Without loss of generality, we may set X_i = 0 for all n < i ≤ m(j + 1). Thus, the sum can be decomposed as
$$ \sum_{i=1}^{n} X_i = \sum_{l=1}^{m} \sum_{i=0}^{j} X_{mi+l}, $$
where {X_{mi+l}, 0 ≤ i ≤ j} is AANA for each given l = 1, 2, …, m. Writing B_n^2 = \sum_{i=1}^{n} E X_i^2, we can obtain from Lemma 3 that
$$ \begin{aligned} P\big( |S_n| \ge \varepsilon \big) &= P\left( \left| \sum_{l=1}^{m} \sum_{i=0}^{j} X_{mi+l} \right| \ge \varepsilon \right) \le P\left( \sum_{l=1}^{m} \left| \sum_{i=0}^{j} X_{mi+l} \right| \ge \varepsilon \right) \le \sum_{l=1}^{m} P\left( \left| \sum_{i=0}^{j} X_{mi+l} \right| \ge \frac{\varepsilon}{m} \right) \\ &\le C \sum_{l=1}^{m} \left( \sum_{k=1}^{j-1} q(k) + 1 \right) \exp\left\{ -\frac{\varepsilon^2/m^2}{2\sum_{i=0}^{j} E (X_{mi+l})^2 + \frac{2}{3} b \frac{\varepsilon}{m}} \right\} \\ &\le C m \left( \sum_{k=1}^{n-1} q(k) + 1 \right) \exp\left\{ -\frac{\varepsilon^2}{2m^2 B_n^2 + \frac{2}{3} m b \varepsilon} \right\}. \end{aligned} $$
This completes the proof of the lemma. □
Lemma 5
(cf. [5]). Let Z_1, Z_2, …, Z_n follow a common continuous distribution F(z). For n ≥ 3, assume that z_{ni} satisfies F(z_{ni}) = i/n for each 1 ≤ i ≤ n − 1. Then,
$$ \sup_{-\infty < z < \infty} |F_n(z) - F(z)| \le \max_{1 \le i \le n-1} |F_n(z_{ni}) - F(z_{ni})| + \frac{2}{n}, $$
where F_n(z) = n^{-1} \sum_{j=1}^{n} I(Z_j < z) is the empirical distribution function.
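Lemma 5 replaces a supremum over all z by a maximum over the n − 1 grid points z_{ni}, at the price of an extra 2/n. The inequality is easy to check numerically; the sketch below is our own illustration using a Uniform(0, 1) sample, for which F(z) = z and z_{ni} = i/n:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = np.sort(rng.uniform(size=n))          # Uniform(0,1) sample, so F(z) = z
i = np.arange(1, n + 1)

# Exact sup_z |F_n(z) - F(z)| for a Uniform sample (the Kolmogorov-Smirnov sup),
# evaluated at the jump points of the empirical distribution function
lhs = max(np.max(i / n - z), np.max(z - (i - 1) / n))

# Right-hand side of Lemma 5: maximum over the grid z_ni = i/n plus 2/n,
# with F_n(t) = #{Z_j < t}/n (strict inequality, matching the lemma)
grid = np.arange(1, n) / n
F_n = np.searchsorted(z, grid, side="left") / n
rhs = np.max(np.abs(F_n - grid)) + 2.0 / n

assert lhs <= rhs   # the grid maximum plus 2/n dominates the full supremum
```

This is only a single-sample check, not a proof, but it shows how the grid-based bound is used in practice in the proof of Lemma 6.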
Lemma 6.
Let {Z_n, n ≥ 1} be a sequence of m-AANA random variables with distribution function F(z) and density function f(z). Let {κ_n, n ≥ 1} be a sequence of positive numbers satisfying κ_n → 0 and lim inf_{n→∞} n κ_n^2 / log n ≥ c_0 > 0. Then, for any sufficiently large D_0 > 0,
$$ \sum_{n=1}^{\infty} P\left( \sup_z |F_n(z) - F(z)| > D_0 \kappa_n \right) < \infty. $$
In particular,
$$ \sum_{n=1}^{\infty} P\left( \sup_z |F_n(z) - F(z)| > D_0 (\log n / n)^{1/2} \right) < \infty. $$
Proof. 
Observing that n κ_n → ∞, we have 2/n < D_0 κ_n / 2 for all sufficiently large n and any positive constant D_0, whose value will be specified later. It follows from Lemma 5 that
$$ P\left( \sup_z |F_n(z) - F(z)| > D_0 \kappa_n \right) \le P\left( \max_{1 \le i \le n-1} |F_n(z_{ni}) - F(z_{ni})| > D_0 \kappa_n / 2 \right) \le \sum_{i=1}^{n-1} P\big( |F_n(z_{ni}) - F(z_{ni})| > D_0 \kappa_n / 2 \big). \tag{3} $$
Let Z_j(z_{ni}) = I(Z_j < z_{ni}) − E I(Z_j < z_{ni}). By Lemma 2, {Z_j(z_{ni}), j ≥ 1} is still a sequence of m-AANA random variables with E Z_j(z_{ni}) = 0, |Z_j(z_{ni})| ≤ 1, and E(Z_j(z_{ni}))^2 ≤ 1. Thus, by Lemma 4 we have that for all sufficiently large n,
$$ \begin{aligned} P\big( |F_n(z_{ni}) - F(z_{ni})| > D_0 \kappa_n / 2 \big) &= P\left( \left| \sum_{j=1}^{n} Z_j(z_{ni}) \right| > D_0 n \kappa_n / 2 \right) \le C m \left( \sum_{k=1}^{n-1} q(k) + 1 \right) \exp\left\{ -\frac{D_0^2 n^2 \kappa_n^2}{8 m^2 B_n^2 + \frac{4}{3} m D_0 n \kappa_n} \right\} \\ &\le C n \exp\left\{ -\frac{D_0^2}{9 m^2}\, n \kappa_n^2 \right\} \le C n \exp\left\{ -\frac{c_0 D_0^2}{18 m^2} \log n \right\} = C n^{1 - \frac{c_0 D_0^2}{18 m^2}}. \end{aligned} \tag{4} $$
Taking D_0 sufficiently large such that 1 − c_0 D_0^2/(18 m^2) < −2, we have by (3) and (4) that
$$ \sum_{n=1}^{\infty} P\left( \sup_z |F_n(z) - F(z)| > D_0 \kappa_n \right) \le C \sum_{n=1}^{\infty} \sum_{i=1}^{n-1} n^{1 - \frac{c_0 D_0^2}{18 m^2}} < \infty. $$
This completes the proof of the lemma. □

3. Main Results

Now, we state our results one by one as follows. Denote χ_n = \sum_{k=1}^{n-1} q(k) + 1. The first result concerns the weak consistency of the nearest neighbor density estimator.
Theorem 1.
Suppose that {X_n, n ≥ 1} is a sequence of m-AANA samples with k_n/n → 0 and k_n^2/n → ∞. If
$$ \lim_{n \to \infty} \chi_n \cdot \exp\left\{ -\gamma \frac{k_n^2}{n} \right\} = 0 \tag{5} $$
for all γ > 0, then for all x ∈ C(f),
$$ f_n(x) \stackrel{P}{\longrightarrow} f(x). $$
Remark 1.
We point out that (5) is easy to verify. For example, if \sum_{n=1}^{\infty} q(n) < ∞, which is frequently assumed in the literature, then χ_n ≤ 1 + \sum_{n=1}^{\infty} q(n) < ∞ and (5) follows. Moreover, if k_n^2/(n log n) → ∞, then (5) also holds without any restriction on the mixing coefficients. We state this in the following corollary.
Corollary 1.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples with k_n/n → 0 and k_n^2/(n log n) → ∞. Then, for all x ∈ C(f),
$$ f_n(x) \stackrel{P}{\longrightarrow} f(x). $$
Under some slightly stronger conditions, one can obtain the following results on complete consistency.
Theorem 2.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples with k_n/n → 0 and k_n^2/n → ∞. If
$$ \sum_{n=1}^{\infty} \chi_n \exp\left\{ -\gamma \frac{k_n^2}{n} \right\} < \infty \tag{6} $$
for all γ > 0, then for all x ∈ C(f),
$$ \sum_{n=1}^{\infty} P\big( |f_n(x) - f(x)| > \varepsilon \big) < \infty $$
for all ε > 0, and hence
$$ f_n(x) \to f(x) \quad \text{a.s.} $$
By an argument analogous to that of Corollary 1, the following conclusion can also be obtained.
Corollary 2.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples with k_n/n → 0 and k_n^2/(n log n) → ∞. Then, for all x ∈ C(f),
$$ \sum_{n=1}^{\infty} P\big( |f_n(x) - f(x)| > \varepsilon \big) < \infty $$
for all ε > 0, and hence
$$ f_n(x) \to f(x) \quad \text{a.s.} $$
Moreover, we can further obtain the rate of complete consistency for the nearest neighbor density estimator as follows.
Theorem 3.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples, and let f satisfy a local Lipschitz condition at x with f(x) > 0. If k_n = O(n^{3/4} log^{1/4} n) and τ_n := (n log n)^{1/2}/k_n → 0, then for all sufficiently large D > 0,
$$ \sum_{n=1}^{\infty} P\big( |f_n(x) - f(x)| > D \tau_n \big) < \infty, $$
and hence
$$ |f_n(x) - f(x)| \le D \tau_n \quad \text{a.s.} $$
By choosing k n = n 3 / 4 log 1 / 4 n in Theorem 3, the following result follows immediately.
Corollary 3.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples, and let f satisfy a local Lipschitz condition at x with f(x) > 0. If k_n = n^{3/4} log^{1/4} n, then for all sufficiently large D > 0,
$$ \sum_{n=1}^{\infty} P\left( |f_n(x) - f(x)| > D n^{-1/4} \log^{1/4} n \right) < \infty, $$
and hence
$$ |f_n(x) - f(x)| \le D n^{-1/4} \log^{1/4} n \quad \text{a.s.} $$
Finally, we also obtain results on uniform consistency and the corresponding convergence rate for the estimator as follows.
Theorem 4.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples, and let f(x) be uniformly continuous. If k_n/n → 0 and k_n^2/(n log n) → ∞, then for all ε > 0,
$$ \sum_{n=1}^{\infty} P\left( \sup_x |f_n(x) - f(x)| > \varepsilon \right) < \infty, $$
and hence
$$ \sup_x |f_n(x) - f(x)| \to 0 \quad \text{a.s.} $$
Theorem 5.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples, and let f(x) satisfy the Lipschitz condition on R. If k_n = O(n^{2/3} log^{1/3} n) and τ_n := (n log n)^{1/2}/k_n → 0, then for any sufficiently large D > 0,
$$ \sum_{n=1}^{\infty} P\left( \sup_x |f_n(x) - f(x)| > D \tau_n \right) < \infty, $$
and hence
$$ \sup_x |f_n(x) - f(x)| \le D \tau_n \quad \text{a.s.} $$
By choosing k n = n 2 / 3 log 1 / 3 n in Theorem 5, one can further obtain the corollary as follows.
Corollary 4.
Let {X_n, n ≥ 1} be a sequence of m-AANA samples, and let f(x) satisfy the Lipschitz condition on R. If k_n = n^{2/3} log^{1/3} n, then for any sufficiently large D > 0,
$$ \sum_{n=1}^{\infty} P\left( \sup_x |f_n(x) - f(x)| > D n^{-1/6} \log^{1/6} n \right) < \infty, $$
and hence
$$ \sup_x |f_n(x) - f(x)| \le D n^{-1/6} \log^{1/6} n \quad \text{a.s.} $$
Remark 2.
Yang [5], as well as Wang and Hu [6], obtained the strong consistency rate o(n^{−1/4} log^{1/4} n log log n) a.s. and the uniformly strong consistency rate o(n^{−1/6} log^{1/6} n log log n) a.s. for NA samples and WOD samples, respectively. Wu and Wang [15] extended their results to AANA samples with the same rates as presented in Theorems 3 and 5. Noting that these rates are sharper than those of Yang [5] and Wang and Hu [6], and that AANA implies m-AANA, our results extend or improve the corresponding ones in Yang [5], Wang and Hu [6], and Wu and Wang [15].

4. Numerical Simulation

In this section, some simple numerical simulations are carried out to examine the finite-sample performance of f_n(x). First, we generate AANA, m-dependent, and m-AANA data, all of which are special cases of m-AANA, according to the following three cases.
Case 1.
Let {Y_n, n ≥ 1} be independent and identically distributed standard normal variables, and let X_n = (1 + a_n^2)^{−1/2}(Y_n + a_n Y_{n+1}) for each n ≥ 1, where a_n > 0 and a_n → 0. It is easy to check that X_1, X_2, …, X_n are AANA random variables with X_i ~ N(0, 1) for each i = 1, 2, …, n.
Case 2.
For m ≥ 2, let {Y_n, n ≥ 1} be independent and identically distributed with a common χ²(1) distribution. Let X_n = \sum_{i=1}^{m} Y_{n+i−1} for each n ≥ 1. Obviously, X_1, X_2, …, X_n are m-dependent, and thus m-AANA, random variables with X_n ~ χ²(m).
Case 3.
For m ≥ 2, let {Y_n, n ≥ 1} be independent and identically distributed N(0, 1) random variables and define Z_n = (1 + a_n^2)^{−1/2}(Y_n + a_n Y_{n+1}), where a_n > 0 and a_n → 0. Now, let X_{m(n−1)+1} = ⋯ = X_{mn} = Z_n for each n ≥ 1. From Example 1, {X_n, n ≥ 1} is m-AANA but not AANA.
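For reference, the three data-generating mechanisms can be sketched in a few lines. The following is a Python transcription of Cases 1–3; the concrete choice a_n = 1/n is our own (any positive sequence tending to zero would do), and the function names are ours:

```python
import numpy as np

def gen_case1(n, rng):
    """Case 1: AANA data X_i = (1 + a_i^2)^{-1/2} (Y_i + a_i Y_{i+1}), each X_i ~ N(0, 1)."""
    a = 1.0 / np.arange(1, n + 1)          # a_n > 0 with a_n -> 0 (assumed choice)
    y = rng.standard_normal(n + 1)
    return (y[:n] + a * y[1:]) / np.sqrt(1.0 + a ** 2)

def gen_case2(n, m, rng):
    """Case 2: m-dependent data X_i = Y_i + ... + Y_{i+m-1}, Y_j ~ chi^2(1), so X_i ~ chi^2(m)."""
    y = rng.chisquare(1, size=n + m - 1)
    return np.array([y[i:i + m].sum() for i in range(n)])

def gen_case3(n, m, rng):
    """Case 3: m-AANA (but not AANA) data: each Case 1 variable repeated m times."""
    z = gen_case1(-(-n // m), rng)         # ceil(n / m) underlying blocks
    return np.repeat(z, m)[:n]

rng = np.random.default_rng(42)
x1 = gen_case1(1000, rng)                  # each X_i ~ N(0, 1)
x2 = gen_case2(1000, 3, rng)               # each X_i ~ chi^2(3), mean 3
x3 = gen_case3(1000, 3, rng)               # blocks of 3 identical values
```

The normalization (1 + a_n^2)^{−1/2} keeps each Case 1 variable exactly standard normal, which is what makes the true density available for the bias computations below.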
In this section, we compare the frequency polygon estimator, the Epanechnikov kernel estimator (that is, the kernel K(u) = 0.75(1 − u²)I(|u| ≤ 1)), and the histogram estimator with the nearest neighbor estimator. In the sequel, we take m = 3 and k_n = n^{3/4}(log n)^{1/4} for the nearest neighbor estimator, the bin width b_n = (log n/n)^{1/4} for the frequency polygon and histogram estimators, and the bandwidth chosen by the cross-validation (CV, in short) method for the Epanechnikov kernel estimator. It is worth mentioning that k_n and b_n are chosen to achieve the optimal convergence rates. Under the above three cases, we take n = 100, 200, 500, 1000 and various x-values, located at the peak and in the tails, respectively. For each x and n, we use the R software to compute the four estimators 1000 times and obtain the absolute bias (ABias, in short) and the root mean squared error (RMSE, in short) of each estimator. The results are exhibited in Table 1, Table 2 and Table 3 and Figure 1, Figure 2 and Figure 3.
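Although the original study was carried out in R, the Monte Carlo loop behind each table entry is easy to reproduce. The following simplified Python sketch is our own: i.i.d. N(0, 1) data stand in for the dependent cases, and 200 replications replace the paper's 1000; it estimates the ABias and RMSE of the nearest neighbor estimator at the peak x = 0:

```python
import numpy as np

def nn_density(x, sample, k_n):
    # f_n(x) = k_n / (2 n a_n(x)), with a_n(x) the k_n-th smallest |X_i - x|
    sample = np.asarray(sample, dtype=float)
    return k_n / (2.0 * len(sample) * np.sort(np.abs(sample - x))[k_n - 1])

def abias_rmse(x, n, reps, rng):
    """Monte Carlo absolute bias and RMSE of f_n(x) for i.i.d. N(0, 1) samples."""
    true_f = np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)
    k_n = int(round(n ** 0.75 * np.log(n) ** 0.25))     # k_n = n^{3/4} (log n)^{1/4}
    est = np.array([nn_density(x, rng.standard_normal(n), k_n) for _ in range(reps)])
    return abs(est.mean() - true_f), np.sqrt(np.mean((est - true_f) ** 2))

rng = np.random.default_rng(0)
ab100, rm100 = abias_rmse(0.0, 100, 200, rng)
ab1000, rm1000 = abias_rmse(0.0, 1000, 200, rng)
# as in the tables, both error measures shrink as the sample size grows
```

Swapping in the dependent generators of Cases 1–3, and adding the histogram, frequency polygon, and kernel competitors, reproduces the structure of Tables 1–3.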
In view of Table 1, Table 2 and Table 3 and Figure 1, Figure 2 and Figure 3, the same conclusions hold under all three cases. First, as the sample size increases, the errors of all estimators decrease. The nearest neighbor estimator performs slightly better than the kernel and histogram estimators at most points, while at points in the tails of the distribution it performs worse than the latter ones. In summary, the nearest neighbor estimator performs better than the others near the peak but worse in the tails. These results show that the estimator considered in this paper retains some advantages over other classical estimators in dependent settings.

5. Proof of the Main Results

The proofs are similar to those of Wu and Wang [15]. Therefore, we only present the differences in the sequel.
Proof of Theorem 1.
Similar to the proof of Wu and Wang [15], we have
$$ \{ |f_n(x) - f(x)| > \varepsilon \} \subseteq A_{11}^x \cup A_{12}^x \cup A_{21}^x \cup A_{22}^x, \tag{7} $$
where
$$ A_{11}^x = \left\{ |F_n(x + b_n(x)) - F(x + b_n(x))| \ge \frac{k_n}{n} \delta(x) \right\}, \qquad A_{12}^x = \left\{ |F_n(x - b_n(x)) - F(x - b_n(x))| \ge \frac{k_n}{n} \delta(x) \right\}, $$
$$ A_{21}^x = \left\{ |F_n(x + c_n(x)) - F(x + c_n(x))| \ge \frac{k_n}{n} \delta(x) \right\}, \qquad A_{22}^x = \left\{ |F_n(x - c_n(x)) - F(x - c_n(x))| \ge \frac{k_n}{n} \delta(x) \right\}, $$
with δ(x) = ε/(8(f(x) + ε)).
For given x, define for each 1 ≤ i ≤ n and n ≥ 1 that
$$ \xi_{ni} = I(X_i < x + b_n(x)) - E\,I(X_i < x + b_n(x)). $$
From Lemma 2, it is easy to see that ξ_{n1}, ξ_{n2}, …, ξ_{nn} are still m-AANA random variables with E ξ_{ni} = 0 and |ξ_{ni}| ≤ 1. Observe that k_n ≤ n and δ(x) ≤ 1/8. Using Lemma 4, we have that
$$ \begin{aligned} P(A_{11}^x) &= P\left( |F_n(x + b_n(x)) - F(x + b_n(x))| \ge \frac{k_n}{n} \delta(x) \right) = P\left( \left| \sum_{i=1}^{n} \xi_{ni} \right| \ge k_n \delta(x) \right) \\ &\le C \chi_n \cdot \exp\left\{ -\frac{k_n^2 \delta^2(x)}{2 m^2 B_n^2 + \frac{2}{3} m k_n \delta(x)} \right\} \le C \chi_n \cdot \exp\left\{ -\frac{k_n^2 \delta^2(x)/m^2}{2n + \frac{1}{12m} n} \right\} \\ &= C \chi_n \cdot \exp\left\{ -\frac{12 \delta^2(x)/m}{24m + 1} \cdot \frac{k_n^2}{n} \right\}. \end{aligned} \tag{8} $$
Analogously, the same upper bound as in (8) holds for the probabilities of the events A_{12}^x, A_{21}^x, and A_{22}^x. Therefore, we further obtain by (5) and (7) that
$$ P\big( |f_n(x) - f(x)| > \varepsilon \big) \le P(A_{11}^x) + P(A_{12}^x) + P(A_{21}^x) + P(A_{22}^x) \le 4 C \chi_n \cdot \exp\left\{ -\frac{12 \delta^2(x)/m}{24m + 1} \cdot \frac{k_n^2}{n} \right\} \to 0. $$
The proof is finished. □
Proof of Corollary 1.
In view of Theorem 1, we only need to verify that (5) holds. Since k_n^2/(n log n) → ∞, one can obtain that
$$ \exp\left\{ -\gamma \frac{k_n^2}{n} \right\} \le \exp\{-3 \log n\} = n^{-3} \tag{9} $$
for any γ > 0 and any sufficiently large n. Moreover, since q(n) → 0, there exists n_0 > 0 such that q(n) ≤ 1 for all n > n_0, and thus
$$ \chi_n = \sum_{k=1}^{n-1} q(k) + 1 = O(n). \tag{10} $$
Therefore, we have by (9) that
$$ \chi_n \exp\left\{ -\gamma \frac{k_n^2}{n} \right\} \le C n^{-2} \to 0, $$
which finishes the proof. □
Proof of Theorem 2.
The proof is analogous to that of Theorem 1. In view of (6), one has that
$$ \sum_{n=1}^{\infty} P\big( |f_n(x) - f(x)| > \varepsilon \big) \le 4 C \sum_{n=1}^{\infty} \chi_n \cdot \exp\left\{ -\frac{12 \delta^2(x)/m}{24m + 1} \cdot \frac{k_n^2}{n} \right\} < \infty. $$
Hence, the desired result follows from the Borel–Cantelli lemma and the formula above immediately. □
Proof of Corollary 2.
Similar to the proof of Corollary 1, we have by (10) that
$$ \sum_{n=1}^{\infty} \left( \sum_{k=1}^{n-1} q(k) + 1 \right) \exp\left\{ -\gamma \frac{k_n^2}{n} \right\} \le C \sum_{n=1}^{\infty} n^{-2} < \infty. $$
The proof is thus finished. □
Proof of Theorem 3.
Analogous to the proof of Theorem 2.6 in Wu and Wang [15], we also have that
$$ \{ |f_n(x) - f(x)| > D \tau_n \} \subseteq B_{11}^x \cup B_{12}^x \cup B_{21}^x \cup B_{22}^x, \tag{11} $$
where
$$ B_{11}^x = \left\{ |F_n(x + \mu_n(x)) - F(x + \mu_n(x))| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right\}, \qquad B_{12}^x = \left\{ |F_n(x - \mu_n(x)) - F(x - \mu_n(x))| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right\}, $$
$$ B_{21}^x = \left\{ |F_n(x + \nu_n(x)) - F(x + \nu_n(x))| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right\}, \qquad B_{22}^x = \left\{ |F_n(x - \nu_n(x)) - F(x - \nu_n(x))| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right\}, $$
with T := sup_x f(x) < ∞, D > c_1 = 2L(x)f(x), and L(x) > 0 a constant depending only on x.
For each given x and 1 ≤ i ≤ n, n ≥ 1, we define
$$ \eta_{ni} = I(X_i < x + \mu_n(x)) - E\,I(X_i < x + \mu_n(x)). $$
From Lemma 2, it is easy to see that η_{n1}, η_{n2}, …, η_{nn} are still m-AANA random variables with E η_{ni} = 0 and |η_{ni}| ≤ 1. Applying Lemma 4 and noticing that k_n ≤ n and τ_n → 0, we obtain that for all sufficiently large n,
$$ \begin{aligned} P(B_{11}^x) &= P\left( |F_n(x + \mu_n(x)) - F(x + \mu_n(x))| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right) = P\left( \left| \sum_{i=1}^{n} \eta_{ni} \right| \ge \frac{D k_n \tau_n}{8T} \right) \\ &\le C \chi_n \cdot \exp\left\{ -\frac{k_n^2 \tau_n^2 D^2/(64 T^2)}{2 m^2 B_n^2 + \frac{m D}{12 T} k_n \tau_n} \right\} \le C n \exp\left\{ -\frac{k_n^2 \tau_n^2}{n} \cdot \frac{D^2}{128 m^2 T^2 + \frac{16}{3} D m T} \right\} \\ &= C n \exp\left\{ -\frac{D^2}{128 m^2 T^2 + \frac{16}{3} D m T} \log n \right\} = C n^{1 - \frac{D^2}{128 m^2 T^2 + \frac{16}{3} D m T}}. \end{aligned} \tag{12} $$
Analogously, the probabilities of B_{12}^x, B_{21}^x, and B_{22}^x admit the same upper bound as in (12). Therefore, taking D > c_1 sufficiently large such that 1 − D^2/(128 m^2 T^2 + \frac{16}{3} D m T) < −1, one can obtain by (11) that
$$ \sum_{n=1}^{\infty} P\big( |f_n(x) - f(x)| > D \tau_n \big) \le \sum_{n=1}^{\infty} \big( P(B_{11}^x) + P(B_{12}^x) + P(B_{21}^x) + P(B_{22}^x) \big) \le 4C \sum_{n=1}^{\infty} n^{1 - \frac{D^2}{128 m^2 T^2 + \frac{16}{3} D m T}} < \infty. $$
This completes the proof of the theorem. □
Proof of Theorem 4.
It follows from the proof of Theorem 2.9 in Wu and Wang [15] that
$$ \left\{ \sup_x |f_n(x) - f(x)| > \varepsilon \right\} \subseteq \left\{ \sup_x |F_n(x) - F(x)| \ge \frac{\varepsilon}{8(T + \varepsilon)} \cdot \frac{k_n}{n} \right\}, \tag{13} $$
where T = sup_x f(x) < ∞.
On the other hand, since k_n^2/(n log n) → ∞, we have for all sufficiently large n that (ε/(8(T + ε))) · k_n/n ≥ D_0 (log n/n)^{1/2}. Hence, taking κ_n = (log n/n)^{1/2} in Lemma 6, one has by (13) that
$$ \sum_{n=1}^{\infty} P\left( \sup_x |f_n(x) - f(x)| > \varepsilon \right) \le \sum_{n=1}^{\infty} P\left( \sup_x |F_n(x) - F(x)| \ge \frac{\varepsilon}{8(T + \varepsilon)} \cdot \frac{k_n}{n} \right) \le \sum_{n=1}^{\infty} P\left( \sup_x |F_n(x) - F(x)| \ge D_0 (\log n/n)^{1/2} \right) < \infty. $$
The proof is hence finished. □
Proof of Theorem 5.
It follows from the proof of Theorem 2.10 in Wu and Wang [15] that
$$ \left\{ \sup_x |f_n(x) - f(x)| > D \tau_n \right\} \subseteq \left\{ \sup_x |F_n(x) - F(x)| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right\}, $$
where D > max{ \frac{4c_2}{3} L, 8T D_0 }, T = sup_x f(x) < ∞, and L > 0 is independent of x.
Consequently, one can apply Lemma 6 with κ_n = k_n τ_n/n = (log n/n)^{1/2} to obtain that
$$ \sum_{n=1}^{\infty} P\left( \sup_x |f_n(x) - f(x)| > D \tau_n \right) \le \sum_{n=1}^{\infty} P\left( \sup_x |F_n(x) - F(x)| \ge \frac{k_n \tau_n}{n} \cdot \frac{D}{8T} \right) \le \sum_{n=1}^{\infty} P\left( \sup_x |F_n(x) - F(x)| \ge D_0 (\log n/n)^{1/2} \right) < \infty. $$
This completes the proof of the theorem. □

6. Conclusions

In this paper, a Bernstein inequality for m-asymptotically almost negatively associated random variables is established, based on the corresponding inequality for asymptotically almost negatively associated random variables. By virtue of this inequality, some results on consistency for the nearest neighbor estimator of the density function are further obtained. The results extend existing ones in the literature. From the simulation study, we find that the nearest neighbor estimator performs better than the others near the peak but worse in the tails, which encourages us to consider whether the strengths of these estimators can be combined to construct a better method.

Author Contributions

Validation, W.W.; data curation, W.W.; writing—original draft, X.L.; writing—review & editing, Y.W.; supervision, Y.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Provincial Natural Science Research Project of Anhui Colleges, grant number KJ2018A0579.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are most grateful to the editor and anonymous referees for carefully reading the manuscript and for valuable suggestions that helped in improving an earlier version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Loftsgaarden, D.O.; Quesenberry, C.P. A nonparametric estimate of a multivariate density function. Ann. Math. Stat. 1965, 36, 1049–1051. [Google Scholar] [CrossRef]
  2. Liu, Y.H.; Wu, Q.Y. Consistency of nearest neighbor estimator of density function for negatively dependent samples. J. Jilin Univ. 2012, 50, 1142–1145. [Google Scholar]
  3. Lu, Z.L.; Ding, S.N.; Zhang, F.; Wang, R.; Wang, X.J. The consistency and convergence rate for the nearest neighbor density estimator based on φ-mixing random samples. Commun. Stat.-Theory Methods 2022, 51, 669–684. [Google Scholar] [CrossRef]
  4. Liu, Y.; Zhang, Y. The consistency and asymptotic normality of nearest neighbor density estimator under φ-mixing condition. Acta Math. Sin. 2010, 30, 733–738. [Google Scholar]
  5. Yang, S.C. Consistency of nearest neighbor estimator of density function for negative associated samples. Acta Math. Appl. Sin. 2003, 26, 385–395. [Google Scholar]
  6. Wang, X.J.; Hu, H.S. The consistency of the nearest neighbor estimator of the density function based on WOD samples. J. Math. Anal. Appl. 2015, 429, 497–512. [Google Scholar] [CrossRef]
  7. Lan, C.F.; Wu, Q.Y. Uniform Strong Consistency Rate of Nearest Neighbor Estimator of Density Function for END Samples. J. Jilin Univ. 2014, 52, 495–498. [Google Scholar]
  8. Wang, W.; Wu, Y. Consistency of nearest neighbor estimator of density function for m-END samples. Braz. J. Probab. Stat. 2022, 36, 369–384. [Google Scholar] [CrossRef]
  9. Chandra, T.K.; Ghosal, S. Extensions of the strong law of large numbers of Marcinkiewicz and Zygmund for dependent variables. Acta Math. Hung. 1996, 71, 327–336. [Google Scholar] [CrossRef]
  10. Kim, T.; Ko, M.; Lee, I. On the strong law for asymptotically almost negatively associated random variables. Rocky Mt. J. Math. 2004, 34, 979–989. [Google Scholar] [CrossRef]
  11. Yuan, D.M.; An, J. Rosenthal type inequalities for asymptotically almost negatively associated random variables and applications. Sci. China Ser. A Math. 2009, 52, 1887–1904. [Google Scholar] [CrossRef]
  12. Chandra, T.K.; Ghosal, S. The strong law of large numbers for weighted averages under dependence assumptions. J. Theor. Probab. 1996, 9, 797–809. [Google Scholar] [CrossRef]
  13. Shen, A.T.; Wu, R.C. Strong convergence for sequences of asymptotically almost negatively associated random variables. Stochastics-Int. J. Probab. Stoch. Process. 2014, 86, 291–303. [Google Scholar] [CrossRef]
  14. Yuan, D.M.; An, J. Laws of large numbers for Cesàro alpha-integrable random variables under dependence condition AANA or AQSI. Acta Math. Sin. 2012, 28, 1103–1118. [Google Scholar] [CrossRef]
  15. Wu, Y.; Wang, X.J. On Consistency of the Nearest Neighbor Estimator of the Density Function and Its Applications. Acta Math. Sin. 2019, 35, 703–720. [Google Scholar] [CrossRef]
  16. Nam, T.H.; Thuy, N.T.; Hu, T.C.; Volodin, A. Maximal inequalities and strong law of large numbers for sequences of m-asymptotically almost negatively associated random variables. Commun. Stat.-Theory Methods 2016, 46, 2696–2707. [Google Scholar] [CrossRef]
Figure 1. Comparison of different estimators for n = 100, 200, 500, 1000 under Case 1.
Figure 2. Comparison of different estimators for n = 100, 200, 500, 1000 under Case 2.
Figure 3. Comparison of different estimators for n = 100, 200, 500, 1000 under Case 3.
Table 1. Absolute bias and RMSE of the estimators for different x and n under Case 1.

x    Estimator            n = 100            n = 200            n = 500            n = 1000
                          ABias    RMSE      ABias    RMSE      ABias    RMSE      ABias    RMSE
-3   nearest neighbor     0.07513  0.07521   0.06881  0.06884   0.06062  0.06064   0.05455  0.05456
     frequency            0.00996  0.05338   0.00073  0.00681   0.00049  0.00421   0.00021  0.00345
     kernel               0.00102  0.00827   0.00062  0.00597   0.00055  0.00435   0.00023  0.00306
     histogram            0.00048  0.01018   0.00026  0.00720   0.00170  0.00436   0.00028  0.00377
-2   nearest neighbor     0.06779  0.06823   0.06132  0.06162   0.05254  0.05268   0.04602  0.04612
     frequency            0.00362  0.02606   0.00361  0.01935   0.00326  0.01296   0.00232  0.01059
     kernel               0.00303  0.02625   0.00232  0.02022   0.00177  0.01357   0.00160  0.01109
     histogram            0.07034  0.03137   0.01985  0.02798   -0.01545 0.02166   0.01523  0.01837
-1   nearest neighbor     0.00113  0.02717   0.00081  0.02108   0.00053  0.01543   0.00053  0.01263
     frequency            0.00252  0.04723   0.00081  0.04873   0.00067  0.02424   0.00032  0.02708
     kernel               0.00119  0.05112   0.00238  0.03798   0.00161  0.02682   0.00136  0.02187
     histogram            0.03560  0.07545   0.00353  0.05327   0.04199  0.05295   0.00349  0.02816
 0   nearest neighbor     0.02042  0.05259   0.01371  0.04031   0.00963  0.02860   0.00854  0.02147
     frequency            0.01325  0.05293   0.01086  0.04271   0.00658  0.03040   0.00584  0.02284
     kernel               0.00741  0.06047   0.00504  0.04741   0.00359  0.03413   0.00336  0.02526
     histogram            0.01467  0.08489   0.01040  0.06492   0.00689  0.04507   0.00633  0.03474
 1   nearest neighbor     0.00106  0.02738   0.00042  0.02209   0.00015  0.01542   0.00011  0.01206
     frequency            0.00055  0.04615   0.00045  0.04985   0.00040  0.02470   0.00041  0.02743
     kernel               0.00147  0.05031   0.00066  0.03776   0.00044  0.02680   0.00035  0.02131
     histogram            0.07177  0.10530   0.08878  0.10489   0.03915  0.05692   0.06573  0.07274
 2   nearest neighbor     0.06767  0.06812   0.06132  0.06158   0.05256  0.05270   0.04601  0.04610
     frequency            0.00413  0.02620   0.00444  0.01990   0.00340  0.01307   0.00181  0.01024
     kernel               0.00377  0.02602   0.00269  0.02079   0.00214  0.01401   0.00105  0.01065
     histogram            0.05344  0.07064   0.02396  0.03920   0.02081  0.02901   0.01517  0.02162
 3   nearest neighbor     0.07521  0.07528   0.06886  0.06891   0.06056  0.06058   0.05463  0.05464
     frequency            0.00031  0.00954   0.00037  0.00685   0.00073  0.00400   0.00010  0.00338
     kernel               0.00055  0.00775   0.00052  0.00582   0.00050  0.00418   0.00018  0.00306
     histogram            0.01169  0.02231   0.00880  0.01486   0.00289  0.00709   0.00482  0.00742
Table 2. Absolute bias and RMSE of the estimators for different x and n under Case 2.

x    Estimator            n = 100            n = 200            n = 500            n = 1000
                          ABias    RMSE      ABias    RMSE      ABias    RMSE      ABias    RMSE
0.5  nearest neighbor     0.07937  0.08512   0.07047  0.07557   0.06200  0.06529   0.05413  0.05700
     frequency            0.05327  0.06563   0.04324  0.05447   0.03510  0.04427   0.02271  0.02801
     kernel               0.05503  0.06888   0.04300  0.05439   0.02891  0.03614   0.02245  0.02760
     histogram            0.08357  0.09896   0.07807  0.09017   0.07992  0.08971   0.02819  0.03487
1.5  nearest neighbor     0.03234  0.04047   0.02504  0.03128   0.01759  0.02166   0.01335  0.01664
     frequency            0.04927  0.06188   0.04007  0.05044   0.03334  0.04124   0.01986  0.02491
     kernel               0.04893  0.06156   0.03944  0.04890   0.02663  0.03332   0.02040  0.02552
     histogram            0.06777  0.08469   0.04864  0.06130   0.03455  0.04433   0.02638  0.03333
3.5  nearest neighbor     0.01586  0.01996   0.01234  0.01603   0.00910  0.01165   0.00705  0.00898
     frequency            0.04376  0.05452   0.03050  0.03800   0.02394  0.02996   0.01349  0.01714
     kernel               0.03614  0.04503   0.02806  0.03482   0.01943  0.02416   0.01475  0.01833
     histogram            0.04599  0.05764   0.03562  0.04456   0.02888  0.03656   0.02029  0.02558
5.5  nearest neighbor     0.01592  0.01860   0.01238  0.01427   0.00930  0.01061   0.00733  0.00832
     frequency            0.02479  0.03077   0.02090  0.02603   0.01604  0.02032   0.00909  0.01148
     kernel               0.02559  0.03209   0.01918  0.02377   0.01320  0.01662   0.00985  0.01238
     histogram            0.03294  0.04127   0.02325  0.02916   0.01824  0.02381   0.01302  0.01638
7.5  nearest neighbor     0.02162  0.02211   0.01833  0.01864   0.01465  0.01483   0.01221  0.01235
     frequency            0.01690  0.02115   0.01460  0.01878   0.01023  0.01286   0.00621  0.00784
     kernel               0.01717  0.02170   0.01262  0.01621   0.00861  0.01073   0.00669  0.00840
     histogram            0.02260  0.02991   0.01564  0.02034   0.01209  0.01539   0.00849  0.01072
9.5  nearest neighbor     0.02283  0.02293   0.02012  0.02019   0.01680  0.01684   0.01441  0.01444
     frequency            0.01327  0.01601   0.00923  0.01197   0.00666  0.00838   0.00386  0.00495
     kernel               0.01026  0.01298   0.00823  0.01044   0.00562  0.00708   0.00416  0.00534
     histogram            0.01335  0.01611   0.00962  0.01247   0.00720  0.00915   0.00518  0.00666
Table 3. Absolute bias and RMSE of the estimators for different x and n under Case 3.
| x | Estimator | ABias (n = 100) | RMSE (n = 100) | ABias (n = 200) | RMSE (n = 200) | ABias (n = 500) | RMSE (n = 500) | ABias (n = 1000) | RMSE (n = 1000) |
|---|---|---|---|---|---|---|---|---|---|
| x = −3 | nearest neighbor | 0.07610 | 0.07633 | 0.06950 | 0.06971 | 0.06052 | 0.06056 | 0.05461 | 0.05464 |
| | frequency | 0.00930 | 0.01622 | 0.00652 | 0.01128 | 0.00645 | 0.00749 | 0.00496 | 0.00588 |
| | kernel | 0.00832 | 0.01284 | 0.00710 | 0.01021 | 0.00684 | 0.00807 | 0.00444 | 0.00536 |
| | histogram | 0.00815 | 0.01663 | 0.00757 | 0.01188 | 0.00703 | 0.00814 | 0.00563 | 0.00635 |
| x = −2 | nearest neighbor | 0.06923 | 0.07070 | 0.06118 | 0.06196 | 0.05715 | 0.05762 | 0.04669 | 0.04682 |
| | frequency | 0.03861 | 0.04881 | 0.02607 | 0.03371 | 0.01926 | 0.02511 | 0.01853 | 0.02361 |
| | kernel | 0.03808 | 0.04823 | 0.02820 | 0.03432 | 0.02276 | 0.03059 | 0.01802 | 0.02188 |
| | histogram | 0.04937 | 0.05914 | 0.03343 | 0.03839 | 0.02003 | 0.02545 | 0.01756 | 0.02350 |
| x = −1 | nearest neighbor | 0.03631 | 0.04775 | 0.03392 | 0.04506 | 0.01997 | 0.02303 | 0.00912 | 0.00912 |
| | frequency | 0.06229 | 0.07671 | 0.07247 | 0.08903 | 0.02984 | 0.03644 | 0.03866 | 0.03866 |
| | kernel | 0.06957 | 0.08409 | 0.05274 | 0.06531 | 0.02978 | 0.03729 | 0.01678 | 0.01678 |
| | histogram | 0.09321 | 0.08409 | 0.07434 | 0.09125 | 0.05200 | 0.05696 | 0.03899 | 0.03899 |
| x = 0 | nearest neighbor | 0.07075 | 0.08889 | 0.04916 | 0.06248 | 0.04165 | 0.04875 | 0.03089 | 0.03813 |
| | frequency | 0.07385 | 0.09206 | 0.05638 | 0.06542 | 0.05210 | 0.06610 | 0.03407 | 0.04322 |
| | kernel | 0.08064 | 0.10095 | 0.06331 | 0.07623 | 0.06388 | 0.08090 | 0.03431 | 0.04535 |
| | histogram | 0.11434 | 0.14591 | 0.08077 | 0.09885 | 0.06366 | 0.07957 | 0.04986 | 0.06241 |
| x = 1 | nearest neighbor | 0.03835 | 0.05063 | 0.03149 | 0.04136 | 0.02109 | 0.02732 | 0.01697 | 0.02093 |
| | frequency | 0.06490 | 0.08186 | 0.07414 | 0.08815 | 0.03691 | 0.04265 | 0.03429 | 0.04344 |
| | kernel | 0.07212 | 0.09097 | 0.05512 | 0.06931 | 0.03643 | 0.04574 | 0.02777 | 0.03529 |
| | histogram | 0.11180 | 0.14016 | 0.10962 | 0.13703 | 0.06845 | 0.08603 | 0.06755 | 0.08356 |
| x = 2 | nearest neighbor | 0.06930 | 0.07076 | 0.06486 | 0.06573 | 0.05061 | 0.05079 | 0.04127 | 0.04130 |
| | frequency | 0.03604 | 0.04665 | 0.02133 | 0.02786 | 0.01916 | 0.02315 | 0.01822 | 0.01687 |
| | kernel | 0.03724 | 0.04655 | 0.02060 | 0.02754 | 0.01864 | 0.02148 | 0.01502 | 0.01845 |
| | histogram | 0.08088 | 0.10331 | 0.04690 | 0.06046 | 0.02693 | 0.02977 | 0.02277 | 0.02503 |
| x = 3 | nearest neighbor | 0.07593 | 0.07617 | 0.06940 | 0.06952 | 0.06054 | 0.06056 | 0.05479 | 0.05480 |
| | frequency | 0.00771 | 0.01406 | 0.00754 | 0.01303 | 0.00444 | 0.00444 | 0.00354 | 0.00390 |
| | kernel | 0.00831 | 0.01153 | 0.00781 | 0.01102 | 0.00511 | 0.00576 | 0.00350 | 0.00388 |
| | histogram | 0.02136 | 0.03534 | 0.01559 | 0.02350 | 0.00573 | 0.00655 | 0.01239 | 0.01605 |
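For reference, the nearest neighbor estimator of Equation (1) is straightforward to compute: a_n(x) is simply the k_n-th smallest distance |X_i − x|, so f_n(x) = k_n / (2 n a_n(x)). The following minimal Python sketch illustrates this, together with an ABias/RMSE Monte Carlo loop of the kind summarized in the tables above; the i.i.d. standard normal sample, the choice k_n = ⌊√n⌋, and the function names are illustrative assumptions, not the exact settings of the paper's simulation cases.

```python
import numpy as np

def nn_density(x, sample, k):
    """Nearest neighbor estimate f_n(x) = k_n / (2 n a_n(x)), where
    a_n(x) is the smallest radius whose interval [x - a, x + a]
    contains at least k sample points, i.e. the k-th smallest |X_i - x|."""
    dist = np.sort(np.abs(np.asarray(sample, dtype=float) - x))
    a_n = dist[k - 1]
    return k / (2.0 * len(sample) * a_n)

# Illustrative Monte Carlo evaluation at x = 0 for an i.i.d. N(0, 1) sample
# (the paper's m-AANA dependence structure is not reproduced here).
rng = np.random.default_rng(0)
n, reps = 500, 200
k = int(np.sqrt(n))                      # illustrative choice of k_n
true_val = 1.0 / np.sqrt(2.0 * np.pi)    # standard normal density at 0
estimates = np.array(
    [nn_density(0.0, rng.standard_normal(n), k) for _ in range(reps)]
)
abias = abs(estimates.mean() - true_val)             # absolute bias
rmse = np.sqrt(np.mean((estimates - true_val) ** 2))  # root mean squared error
```

Note that RMSE is never smaller than the absolute bias of the mean estimate, which is a quick sanity check on simulation output of this kind.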

Liu, X.; Wu, Y.; Wang, W.; Zhu, Y. On Consistency of the Nearest Neighbor Estimator of the Density Function for m-AANA Samples. Mathematics 2023, 11, 4391. https://doi.org/10.3390/math11204391
