Review

A Review of Representative Points of Statistical Distributions and Their Applications

1 Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, Beijing Normal University—Hong Kong Baptist University United International College, Zhuhai 519087, China
2 The Key Lab of Random Complex Structures and Data Analysis, The Chinese Academy of Sciences, Beijing 100045, China
3 Research Center for Mathematics, Beijing Normal University, Zhuhai 519087, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2930; https://doi.org/10.3390/math11132930
Submission received: 19 April 2023 / Revised: 20 June 2023 / Accepted: 27 June 2023 / Published: 29 June 2023
(This article belongs to the Special Issue Distribution Theory and Application)

Abstract:
Statistical modeling relies on a diverse range of statistical distributions, encompassing both univariate and multivariate and both discrete and continuous distributions. In the literature, numerous statistical methods have been proposed to approximate continuous distributions. The most commonly used approach is the empirical distribution, obtained from a random sample drawn from the distribution. However, the empirical distribution is likely to suffer from an accuracy problem when used to approximate the underlying distribution, especially if the sample size is not sufficient. In order to improve statistical inferences, various alternative forms of discrete approximation to the distribution have been proposed in the literature. The choice of the support points for the discrete approximation, known as Representative Points (RPs), becomes extremely important for distribution approximation. In this paper, we review the three main methods for constructing RPs, namely those based on the Monte Carlo method, the number-theoretic method (or quasi-Monte Carlo method), and the mean square error method, aiming to introduce these important methods to the statistical or mathematical community. Additional approaches for forming RPs are also briefly discussed. The review focuses on certain critical aspects such as theoretical properties and computational algorithms for constructing RPs. We also address the application of RPs through practical problems and provide evidence of RPs' advantages over random samples in approximating distributions.

1. Introduction

Statistics is the art and science of data collection and data analysis, in which statistical modeling relies on various types of statistical distributions. Such distributions are either discrete or continuous and either univariate or multivariate. For an unknown continuous distribution $F$ in $R^d$, the conventional approach is to approximate $F$ using the empirical distribution of a random sample. The empirical distribution is discrete, consisting of support points from the random sample, with each point contributing equally to the approximation. Because of the accuracy problem of the empirical distribution, we want to construct a discrete distribution $\hat{F}$ in $R^d$ that approximates the distribution $F$ while preserving the distribution information as much as possible. Consider a random vector $X$ following a continuous distribution $F(x)$ in $R^d$, characterized by a probability density function (pdf) $p(x)$. In contrast, a discrete random vector $Y$ is characterized by a probability mass function (pmf):
$$\begin{array}{c|cccc} Y & y_1 & y_2 & \cdots & y_k \\ \hline p & p_1 & p_2 & \cdots & p_k \end{array}$$
where $y_1, \ldots, y_k$ are the support points of $Y$ and $P(Y = y_i) = p_i > 0$, $i = 1, \ldots, k$, with $\sum_{i=1}^{k} p_i = 1$. An approximation distribution $F_Y(y)$ to $F(x)$ should satisfy:
(i)
$F_Y(y)$ is a function of $F$;
(ii)
a pre-decided distance between $F$ and $F_Y$ is small;
(iii)
$F_Y^{(k)} \to F$ in distribution as $k \to \infty$, where $k$ in $F_Y^{(k)}$ is the number of support points of $Y$.
In this case, the support points $y_1, \ldots, y_k$ are called representative points (RPs). There are several ways to choose an approximation distribution $F_Y$.

1.1. Monte Carlo—RPs

Let $X \sim F(x; \theta)$ in $R^d$ be a random vector, where $\theta$ represents the parameters. For instance, for the normal distribution $N(\mu, \sigma^2)$, the parameters are denoted as $\theta = (\mu, \sigma^2)$. In traditional statistics, random samples are utilized to make inferences about the population. Specifically, a collection of independently and identically distributed (iid) random samples, denoted as $x_1, \ldots, x_k$, is drawn from the population distribution $F$. The empirical distribution of the random sample is defined as follows:
$$F_k(x) = \frac{1}{k} \sum_{i=1}^{k} I_{\{x_i \le x\}},$$
where $I_A$ is the indicator function of the set $A$, and the inequality $x_i \le x$ means that $x_{il} \le x_l$ ($1 \le i \le k$, $1 \le l \le d$), where $x_i = (x_{i1}, \ldots, x_{id})^T$ and $x = (x_1, \ldots, x_d)^T$. Many statistical inferences rely on the empirical distribution $F_k(x)$, including methods such as:
(1)
Parameter estimation (point estimation and confidence interval estimation);
(2)
Density estimation;
(3)
Hypothesis testing, and so on.
The empirical distribution is a discrete distribution with support points $x_1, \ldots, x_k$, each having sampling probability $1/k$, and can be considered an approximation of $F(x)$ in the sense of consistency, i.e., $F_k(x) \to F(x)$ in distribution as $k \to \infty$. In statistical simulation, a set of random samples can be generated by computer software under the Monte Carlo (MC) method. Therefore, we denote the corresponding random variable by $Y_{MC} \sim F_k(y)$ or $Y_{MC} \sim F_{MC}^{(k)}(y)$ in this paper. MC methods have been widely used. For instance, in the case of a normal population $N(\mu, \sigma^2)$ with unknown parameters $\mu$ and $\sigma^2$, one can utilize the sample mean $\bar{x}$ and the sample variance $s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$ to estimate $\mu$ and $\sigma^2$, respectively.
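As a minimal illustration (our own sketch, not from the cited sources), the MC method draws an iid sample and reads the estimates off the empirical distribution:

```python
import numpy as np

rng = np.random.default_rng(2023)

# Draw k iid observations from N(mu, sigma^2) and estimate the parameters
# from the empirical distribution F_k, as in the MC method described above.
mu, sigma, k = 1.0, 2.0, 1000
x = rng.normal(mu, sigma, size=k)

F_k = lambda t: np.mean(x <= t)       # empirical cdf F_k(t)
mu_hat = x.mean()                     # sample mean estimates mu
s2_hat = x.var(ddof=1)                # sample variance estimates sigma^2
print(F_k(mu), mu_hat, s2_hat)        # F_k(mu) should be near 1/2
```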
As the empirical distribution can be regarded as an approximation of $F(x)$, one can take a set of random samples from $F_{MC}$ instead of from $F$, as suggested by Efron [1]; this is called the bootstrap method. The bootstrap method is a resampling technique in which the random sample is taken from an approximation distribution $\hat{F}$. Later, Efron gave a comprehensive study of the theory and application of the bootstrap method.
The MC method has proven to be useful in statistical theory and applications. However, its efficiency is not always good due to the convergence rate of $F_k(x) \to F(x)$ in distribution, which is $O_p(k^{-1/2})$ as $k \to \infty$. The slow convergence leads to unsatisfactory approximations when performing numerical integration with the MC method. While the empirical distribution serves as one approximation to the true distribution $F$, alternative approaches have been proposed in the literature to address this issue.

1.2. Number-Theoretic RPs or Quasi-Monte Carlo RPs

Let us consider the numerical calculation of a high-dimensional integral in canonical form
$$I(f) = \int_0^1 \cdots \int_0^1 f(y_1, \ldots, y_d)\,dy_1 \cdots dy_d = \int_{C^d} f(y)\,dy,$$
where $f$ is a continuous function on $C^d = [0,1]^d$. Let $\mathcal{Y} = \{y_1, \ldots, y_k\}$ be a set of $k$ points uniformly scattered on $C^d$. One can use the mean of $\{f(y_i), y_i \in \mathcal{Y}\}$, denoted by $\overline{f(y)} = \frac{1}{k} \sum_{i=1}^{k} f(y_i)$, to approximate $I(f)$. By the MC method, we can employ a random sample from the uniform distribution $U(C^d)$. The rate of convergence of $\overline{f(y)} - I(f)$ is $O_p(1/\sqrt{k})$, which is relatively slow but does not depend on the dimensionality $d$. How to increase the convergence rate is an important subject in applications. The number-theoretic methods (NTM), or quasi-Monte Carlo (QMC) methods, provide many ways to construct $\mathcal{Y}$ such that $\{y_1, \ldots, y_k\}$ is uniformly scattered on $C^d$, increasing the rate of convergence to $O(k^{-1}(\log k)^d)$. For the theory and methodology of NTM/QMC, one can refer to Hua and Wang [2] and Niederreiter [3]. In early studies of NTM, many authors employed the star discrepancy as a measure of the uniformity of $\mathcal{Y}$ in $C^d$. The star discrepancy is defined by
$$D(F, F^{(k)}) = \sup_{x \in R^d} |F(x) - F^{(k)}(x)|,$$
where $F(x)$ is the cdf of $U(C^d)$ and $F^{(k)}(x)$ is the empirical distribution of $\mathcal{Y}$. An optimal $\mathcal{Y}$ has the minimum $D(F, F^{(k)})$. In this case, the points in $\mathcal{Y}$ are called QMC-RPs; they are the support points of $F^{(k)}(y)$, each having equal probability $1/k$. In this paper we denote $F^{(k)}$ by $F_{QMC}^{(k)}$ and the random vector $Y_{QMC}^{(k)} \sim F_{QMC}^{(k)}$. Another popular measure is the $L_p$-distance between $F^{(k)}$ and $F$:
$$D_p(F^{(k)}, F) = \left[\int_{R^d} |F^{(k)}(x) - F(x)|^p\,dx\right]^{1/p}.$$
When $F$ is the uniform distribution on $C^d$, the $L_p$-distance is called the $L_p$-discrepancy; the star discrepancy is the $L_p$-discrepancy as $p \to \infty$. In the literature, a set $\{q_1, \ldots, q_n\}$ under a certain structure is regarded as a set of quasirandom F-numbers if its discrepancy has order $O(n^{-1+\epsilon})$ under the given discrepancy. When $F$ is the uniform distribution on $C^d$ ($U(C^d)$), the quasirandom F-numbers are called quasirandom numbers. The reader can refer to Fang and Wang [4] for details. In contrast to the numerical complexity of the $L_p$-discrepancy for general $p$, the $L_2$-discrepancy has a simple computational formula. There are more uniformity measures in experimental design, such as the centered $L_2$-discrepancy, the wrap-around $L_2$-discrepancy, and the mixture $L_2$-discrepancy (refer to Fang et al. [5]). Fang and Wang [4] and Fang et al. [6] gave a comprehensive study of NTM and its applications in statistical inference, experimental design, geometric probability, and optimization. Pagès [7] gave a detailed study of applications of QMC to financial mathematics. Section 6 will introduce some algorithms for the generation of QMC-RPs.
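A small sketch (ours, using SciPy's Sobol' generator as the quasirandom point set) comparing the MC and QMC integration errors for a test integrand with known integral 1 on $C^4$:

```python
import numpy as np
from scipy.stats import qmc

def f(y):                            # integrand on [0,1]^d, exact integral = 1
    return np.prod(3.0 * y**2, axis=1)

d, m = 4, 12                         # dimension and 2^m points
n = 2**m

mc = np.random.default_rng(0).random((n, d))                 # plain MC points
qp = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m)   # Sobol' points

print("MC  error:", abs(f(mc).mean() - 1.0))
print("QMC error:", abs(f(qp).mean() - 1.0))
```

On a typical run the QMC error is one to two orders of magnitude smaller, consistent with the $O(k^{-1}(\log k)^d)$ versus $O_p(k^{-1/2})$ rates above.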

1.3. Mean Square Error—RPs

Another measure for choosing a discrete approximation distribution to a given continuous distribution is the mean square error (MSE); the corresponding support points are called MSE-RPs.
Definition 1.
Suppose that a random vector $X \sim F(x)$ in $R^d$ has a density function $p(x)$ with finite mean vector and covariance matrix. A set of points $\Xi = \{\xi_j, j = 1, \ldots, k\}$ is called a set of MSE-RPs if it minimizes the mean square error (MSE)
$$\mathrm{MSE}(\Xi) = \int_{R^d} \min_{j=1,\ldots,k} ||x - \xi_j||^2\, p(x)\,dx,$$
where $||a||$ denotes the $l_2$-norm of $a \in R^d$.
Given a set of $k$ points $\{y_1, \ldots, y_k\}$ in $R^d$, a set of regions is defined by
$$S_j = \{x : (x - y_j)^T (x - y_j) \le (x - y_h)^T (x - y_h),\ h \ne j\}, \quad j = 1, \ldots, k,$$
which are called Voronoi regions, where $S_j$ is the attraction domain of $y_j$.
For a univariate distribution $F(x)$ with pdf $p(x)$, mean $\mu$ and variance $\sigma^2$, its MSE-RPs can be sorted as $\xi_1^{(k)} < \xi_2^{(k)} < \cdots < \xi_k^{(k)}$, and its MSE can be expressed as
$$\mathrm{MSE}(Y) = \int_{-\infty}^{+\infty} \min_i \left(x - \xi_i^{(k)}\right)^2 p(x)\,dx = \sum_{j=1}^{k} \int_{I_j} \left(x - \xi_j^{(k)}\right)^2 p(x)\,dx,$$
where
$$I_1 = (a_1, a_2),\ I_2 = (a_2, a_3), \ldots, I_k = (a_k, a_{k+1}), \qquad a_1 = -\infty,\ a_i = \left(\xi_{i-1}^{(k)} + \xi_i^{(k)}\right)/2,\ i = 2, \ldots, k,\ a_{k+1} = \infty.$$
The corresponding $Y_{MSE}$ has support points $\xi_j^{(k)}$, $1 \le j \le k$, with probabilities $P(Y_{MSE} = \xi_i) = p_i$, where
$$p_1 = \int_{-\infty}^{(\xi_1 + \xi_2)/2} p(x)\,dx, \qquad p_i = \int_{(\xi_{i-1} + \xi_i)/2}^{(\xi_i + \xi_{i+1})/2} p(x)\,dx,\ i = 2, \ldots, k-1, \qquad p_k = \int_{(\xi_{k-1} + \xi_k)/2}^{\infty} p(x)\,dx,$$
i.e.,
$$\begin{array}{c|cccc} Y_{MSE} & \xi_1 & \xi_2 & \cdots & \xi_k \\ \hline p & p_1 & p_2 & \cdots & p_k \end{array}$$
Its loss function (LF) is defined by
$$L(Y) = L(\xi_1, \ldots, \xi_k) = \frac{1}{\sigma^2}\,\mathrm{MSE}(Y).$$
It is known that $0 \le L(Y) \le 1$. The loss function shows what percentage of $\mathrm{Var}(X)$ is lost by using $Y_{MSE}$ to replace $X$.
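The following sketch (our own helper, `mse_and_loss`, with names of our choosing) evaluates $\mathrm{MSE}(Y)$ and the loss function for a given set of points by integrating over the Voronoi intervals:

```python
import numpy as np
from scipy import integrate, stats

def mse_and_loss(xi, pdf=stats.norm.pdf, var=1.0):
    """MSE(Y) = int min_j (x - xi_j)^2 p(x) dx and the loss L(Y) = MSE/sigma^2."""
    xi = np.sort(np.asarray(xi, dtype=float))
    # Voronoi boundaries: midpoints between consecutive representative points.
    a = np.concatenate(([-np.inf], (xi[:-1] + xi[1:]) / 2, [np.inf]))
    mse = sum(integrate.quad(lambda x, c=c: (x - c)**2 * pdf(x), a[j], a[j + 1])[0]
              for j, c in enumerate(xi))
    return mse, mse / var

# Two MSE-RPs of N(0,1) are +-sqrt(2/pi); the loss is 1 - 2/pi ~ 0.3634.
print(mse_and_loss([-np.sqrt(2 / np.pi), np.sqrt(2 / np.pi)]))
```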
The concept of MSE-RPs has been motivated by various problems. In the context of the grouping problem, Cox [8] considered the task of condensing observations of a variate into a limited number of groups, where the grouping intervals are selected to retain maximum information. He introduced the concept of mean squared error (MSE) and provided several sets of MSE-RPs for the standard normal distribution. The concept of MSE-RPs is also relevant in data transmission systems, where analog input signals are converted to digital form, transmitted, and then reconstituted as analog signals at the receiver. The problem of optimal quantization of a continuous random variable with a fixed number of levels was precisely defined by Max [9]. In IEEE journals, MSE-RPs are also referred to as "quantizers".
Fang and He [10] proposed the mathematical problem based on the national Chinese garment standard (refer to Fang [11]). Iyengar and Solomon [12] considered the mathematical problems that arise in the theory of representing a distribution by a few optimally chosen points. Flury [13,14] studied a project of the Swiss army to replace existing protection masks with newly designed ones. They used "principal points" for MSE-RPs due to a link between the principal components and MSE-RPs. The MSE-RPs are also applied to select a few "representative" curves from a large collection of curves, which is useful for kernel density estimation (see Flury and Tarpey [15]) and for psychiatric studies by Tarpey and Petkova [16]. Furthermore, MSE-RPs can be applied to problems related to the numerical computation of conditional expectations, stochastic differential equations, and stochastic partial differential equations. These applications are often motivated by challenges encountered in the field of finance [7]. There was a special issue of "IEEE Transactions on Information Theory" on vector quantizers in 1982, as well as a very detailed review on quantization by Gray and Neuhoff [17]. There are several monographs on the theory and applications of RPs, for example, Graf and Luschgy [18], "Foundations of Quantization for Probability Distributions", and Pagès [7], "Numerical Probability, An Introduction with Applications to Finance".
The use of different types of representative points (RPs) allows for the construction of diverse approximation distributions to represent the underlying population distribution. By utilizing these approximation distributions, researchers can make more reliable and precise statistical inferences. The objective of this paper is to provide a comprehensive review of various types of RPs and their associated theory, algorithms, and applications. The focus of this review extends to the examination of recent advancements in the field, highlighting the latest developments and emerging trends. This paper aims to offer valuable insights into the current state of the art and provide researchers and practitioners with a deeper understanding of the potential applications and implications of RPs in statistical science. In Section 2, we present a comprehensive list of properties associated with MSE-RPs for univariate distributions. Section 3 focuses on reviewing various algorithms used for generating MSE-RPs for univariate distributions. In Section 4, we compare various types of RPs in terms of their performance in stochastic simulation and resampling. Additionally, we show the consistency of resampling when MSE-RPs are used. Properties of MSE-RPs for multivariate distributions are reviewed in Section 5, and algorithms for generating QMC-RPs and MSE-RPs for multivariate distributions are introduced in Section 6. QMC-RPs and MSE-RPs have found numerous applications across various domains. In this paper, we focus on selected applications in statistical inference and geometric probability due to space limitations.

2. Properties of MSE-RPs for Univariate Distributions

We collect some properties of MSE-RPs from the literature in this section. These properties can be grouped by topic: some hold only for univariate distributions, and some hold for multivariate ones. The following results are from many articles, including Fei [19], under the notation of the previous section.
Theorem 1.
Let $X$ be a continuous random variable with pdf $p(x)$, finite mean $\mu$ and variance $\sigma^2$. Then we have:
$$\begin{aligned}
&(A)\ E(Y_{MSE}) = E(X) = \mu;\\
&(B)\ E(Y_{MSE}^t) = E(X\,Y_{MSE}^{t-1}),\ t = 2, 3, \ldots, m-1,\ \text{if}\ E(|X|^m) < \infty;\\
&(C)\ \mathrm{Var}(Y_{MSE}) = \mathrm{Var}(X) - \mathrm{MSE}(Y_{MSE}^{(k)});\\
&(D)\ \mathrm{MSE}(Y_{MSE}^{(k)}) \ge \mathrm{MSE}(Y_{MSE}^{(k+1)}),\ k \ge 1;\\
&(E)\ Y_{MSE}^{(k)} \to X\ \text{in distribution as}\ k \to \infty;\\
&(F)\ \mathrm{MSE}(Y_{MSE}^{(k)}) \to 0\ \text{and}\ L(Y_{MSE}^{(k)}) \to 0\ \text{as}\ k \to \infty.
\end{aligned}$$
The property (A) can be regarded as "unbiasedness in the mean". The property (C) gives a decomposition of the variance of $X$ as
$$\mathrm{Var}(X) = \mathrm{Var}(Y_{MSE}) + \mathrm{MSE}(Y_{MSE}).$$
The concept of self-consistency has been used in clustering analysis and has a close relation with MSE-RPs.
Definition 2.
The set of $k$ points $\{y_1, \ldots, y_k\}$ in $R^d$ is called self-consistent with respect to the $d$-variate random vector $X$ and the partition $\{S_j, j = 1, \ldots, k\}$ of $R^d$ if
$$E[X \mid X \in S_j] = y_j, \quad j = 1, \ldots, k,$$
where the region $S_j$ is the domain of attraction of $y_j$.
Tarpey and Flury [20] gave a comprehensive study of self-consistency and pointed out that:
(1)
MSE-RPs are self-consistent with respect to $X$;
(2)
MSE-RPs have the minimum mean square error among all sets of points self-consistent to $X$.

2.1. Existence and Uniqueness of MSE-RPs

The existence of MSE-RPs is not a problem for any continuous distribution with finite first and second moments. For the case $k = 1$, the MSE-RP $\xi_1$ is the mean of $X$. This fact indicates that MSE-RPs can be regarded as an extension of the mean. There is no analytic formula for MSE-RPs in most cases with $k \ge 2$, but there are some results for symmetric distributions. In this paper, the notation $X \overset{d}{=} Y$ means that the two random vectors $X$ and $Y$ have the same distribution.
Definition 3.
A random vector $X \in R^p$ is symmetric about $\mathbf{0}$ if $X \overset{d}{=} -X$, and $X$ is symmetric about its mean vector $\mu$ if $X - \mu \overset{d}{=} -(X - \mu)$.
Theorem 2.
Let $\Xi = \{\xi_j, j = 1, \ldots, k\}$ be a set of MSE-RPs for a distribution $X \sim F(x)$ symmetric about $\mathbf{0}$; then the set $\{-\xi_1, \ldots, -\xi_k\}$ is also a set of MSE-RPs. Furthermore, if the set of MSE-RPs for $F(x)$ in $R$ is unique, and its MSE-RPs are sorted as $\xi_1 < \xi_2 < \cdots < \xi_k$, then
$$\xi_i = -\xi_{k-i+1}, \quad i = 1, \ldots, \lfloor k/2 \rfloor,$$
where $\lfloor a \rfloor$ is the largest integer not exceeding $a$.
The following review is for a univariate distribution $F$ with mean $\mu$ and variance $\sigma^2$. Sharma [21] pointed out that the MSE-RPs of a distribution symmetric about zero need not be symmetric if the set of MSE-RPs is not unique.
Theorem 3.
Let $X$ be a continuous random variable with pdf $p(x)$, finite mean $\mu$ and variance $\sigma^2$, whose distribution is symmetric about $\mu$. Let $Z = X - \mu$. The two MSE-RPs of $X$ are
$$\xi_1 = \mu - E(|Z|), \quad \text{and} \quad \xi_2 = \mu + E(|Z|),$$
and the corresponding MSE is
$$\mathrm{MSE}(\xi_1, \xi_2) = \mathrm{Var}(|Z|) = \mathrm{Var}(|X - \mu|),$$
if and only if
$$p(\mu)\,E(|X - \mu|) \le \frac{1}{2}.$$
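As a concrete instance (a worked example, using only the theorem and standard normal facts): for $N(\mu, \sigma^2)$ we have $E(|Z|) = \sigma\sqrt{2/\pi}$ and $p(\mu)\,E(|X - \mu|) = \frac{1}{\sigma\sqrt{2\pi}} \cdot \sigma\sqrt{2/\pi} = \frac{1}{\pi} < \frac{1}{2}$, so the two MSE-RPs are $\xi_{1,2} = \mu \mp \sigma\sqrt{2/\pi}$ with
$$\mathrm{MSE}(\xi_1, \xi_2) = \mathrm{Var}(|Z|) = \sigma^2\left(1 - \frac{2}{\pi}\right) \approx 0.3634\,\sigma^2,$$
i.e., a loss of about 36.34%.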
This theorem was presented in Flury [13]. If the condition (12) does not hold, Gu and Mathew [22] gave a detailed study on some characterizations of two symmetric MSE-RPs. Their results are listed below.
  • Let $X$ be a random variable with density $p(x)$ symmetric about the mean $\mu$ and continuous at $\mu$; then
    (a)
    if $p(\mu)\,E(|X - \mu|) < \frac{1}{2}$, then $\xi_1 = \mu - E(|Z|)$ and $\xi_2 = \mu + E(|Z|)$ are two MSE-RPs of $X$;
    (b)
    if $p(\mu)\,E(|X - \mu|) > \frac{1}{2}$, the above points do not provide a local minimum of the MSE.
  • They pointed out that Theorem 3 needs to be modified and gave a counterexample: the standard symmetric exponential distribution with pdf $p(x) = \frac{1}{2} e^{-|x|}$ and mean $\mu = 0$. It is easy to find that $p(\mu)\,E(|X - \mu|) = \frac{1}{2}$, and $\xi = \pm 1$ are MSE-RPs.
More examples are discussed in their article. If two random variables $Z$ and $X$ have the relationship $Z = a + bX$ and the MSE-RPs of $X$ are known, then the MSE-RPs of $Z$ can be easily obtained (Fang and He [10] and Zoppè [23]).
Theorem 4.
Let $Y = \{\xi_1 < \cdots < \xi_k\}$ be the MSE-RPs of $X$; then $Z = a + bX$ has MSE-RPs $\{a + bY\} = \{a + b\xi_1, \ldots, a + b\xi_k\}$ with MSE equal to $b^2\,\mathrm{MSE}(Y)$.
There are three special families that satisfy the above relationship: the location-scale family ($b > 0$), the location family ($b = 1$), and the scale family ($b > 0$, $a = 0$).
The study of the uniqueness of MSE-RPs is a challenging problem. Fleischer [24] gave a sufficient condition, "log-concavity", for the uniqueness of the MSE-RPs. In summary, the sufficient conditions are:
  • the density function $p(x)$ must be differentiable;
  • the derivative $(\log p(x))'$ must be strictly decreasing.
Trushkin [25] proved that a log-concave probability density function has a unique set of MSE-RPs.
Definition 4.
A continuous random variable $X$ is said to have a log-concave density $p(x)$ if it satisfies
$$\log(p(\lambda x + (1 - \lambda) y)) \ge \lambda \log(p(x)) + (1 - \lambda) \log(p(y))$$
for all $\lambda \in (0, 1)$ and all $x, y$ in the support of $X$.
Log-concavity of the density is a well-known property satisfied by a large number of remarkable distributions, including the normal distribution. Table 1 lists some log-concave densities, where the kernel of $p(x)$ omits constants in $p(x)$, so that the condition for log-concavity of $p(x)$ remains the same. The exponential distribution is the case of the gamma distribution with $\alpha = 1$, and the uniform distribution $U(0, 1)$ is the special case of the beta distribution with $a = 1$, $b = 1$.
Example 1.
A finite mixture of distributions allows for great flexibility in capturing a variety of density shapes. Research into mixture models has a long history. The most cited early publication is Pearson [26], as he used a two-component normal mixture model for a biometric data set. The density of a mixture of two normal distributions, denoted by $X \sim \mathrm{MixN}(\alpha, \mu_1, \sigma_1^2; \mu_2, \sigma_2^2)$, is
$$p(x) = \alpha\,\varphi(x; \mu_1, \sigma_1^2) + (1 - \alpha)\,\varphi(x; \mu_2, \sigma_2^2) = \alpha\,\frac{1}{\sqrt{2\pi}\sigma_1} \exp\left\{-\frac{(x - \mu_1)^2}{2\sigma_1^2}\right\} + (1 - \alpha)\,\frac{1}{\sqrt{2\pi}\sigma_2} \exp\left\{-\frac{(x - \mu_2)^2}{2\sigma_2^2}\right\}.$$
Li et al. [27] gave a detailed study on several aspects of this distribution: "unimodal or bimodal", "measure of disparity of two normals", and "uniqueness of MSE-RPs". Generally, the uniqueness of MSE-RPs is not always true, but it holds under some conditions. For example, for a location mixture of two normal densities with $\sigma_1 = \sigma_2 = \sigma$, a set of MSE-RPs is unique if $|\mu_1 - \mu_2| \le 2\sigma$ for all $\alpha \in (0, 1)$.

2.2. Asymptotic Behavior of MSE-RPs

There are many studies on the asymptotic behavior of MSE-RPs; for example, see Zador [28], Su [29], Graf and Luschgy [18], and Pagès [7]. It is well known that the distribution tail has a strong influence on statistical inference. According to different standards, there are many kinds of classification methods for statistical distributions. Embrechts et al. [30] classified distributions based on the convergence rate of the pdf $p(x)$ as $x \to \infty$, and they defined the so-called heavy-tailed and light-tailed distributions, in which the exponential distribution is used as a standard for comparison. The following formal definitions are from Foss et al. [31].
Definition 5.
The univariate random variable $X$ with the distribution function $F$ is said to have a heavy tail if
$$\int_{-\infty}^{+\infty} e^{\lambda x}\,dF(x) = \infty, \quad \text{for all}\ \lambda > 0.$$
Otherwise, $F$ is said to have a light tail if
$$\int_{-\infty}^{+\infty} e^{\lambda x}\,dF(x) < \infty, \quad \text{for some}\ \lambda > 0.$$
Obviously, any univariate random variable supported on a bounded interval is light-tailed. In fact, this definition intuitively reflects that the tail of a heavy-tailed distribution is heavier than the tail of the exponential distribution. Moreover, the long-tailed distribution is an important subclass of heavy-tailed distributions and is more commonly used in applications. The formal definition of a long-tailed distribution was given by Foss et al. [31] as follows.
Definition 6.
The univariate random variable $X$ with distribution function $F$ is said to be long-tailed if
$$\lim_{x \to \infty} P(X > x + t \mid X > x) = 1 \quad \text{for all}\ t > 0,$$
or equivalently
$$\lim_{x \to \infty} \frac{\bar{F}(x + t)}{\bar{F}(x)} = 1 \quad \text{for all}\ t > 0,$$
where $\bar{F}(x) = P(X > x)$.
Xu et al. [32] studied the limiting behavior of the gap between the largest two representative points of a statistical distribution and obtained another kind of classification for the most useful univariate distributions. They illustrated the relationship between RPs and the concepts of doubly truncated mean residual life (DMRL) and mean residual life (MRL), which are widely used in survival analysis. Denote
$$\rho_k = \xi_k^{(k)}, \qquad \Delta_i^{(k)} = \xi_i^{(k)} - \xi_{i-1}^{(k)}, \quad i = 2, \ldots, k.$$
They consider three kinds of distributions according to the domain of the distribution: $R$, $R_+$, and a finite interval.
Table 2 shows the limiting value of $\Delta_k^{(k)}$ for the normal, logistic, and t distributions. Their density functions are
$$p(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left\{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2\right\}, \quad \mu \in R,\ \sigma^2 > 0;$$
$$p(x; \alpha, \beta) = \frac{\exp(-(x - \alpha)/\beta)}{\beta\left(1 + \exp(-(x - \alpha)/\beta)\right)^2}, \quad \alpha \in R,\ \beta > 0; \quad \text{and}$$
$$p(x; \nu) = \frac{\Gamma\left(\frac{\nu + 1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)} \left(1 + \frac{x^2}{\nu}\right)^{-\frac{\nu + 1}{2}}, \quad \nu > 2,$$
respectively. It is surprising that the normal and t distributions have such different behavior, although the normal distribution is the limiting distribution of Student's t distribution as $\nu \to \infty$.
Table 3 presents the limiting value of $\Delta_k^{(k)}$ for many useful distributions on $R_+$. These distributions include the Weibull distribution with density
$$p(x; \sigma, m) = \frac{m}{\sigma} x^{m-1} e^{-(x/\sigma)^m}, \quad \sigma > 0, \text{ with the cases } m > 1 \text{ and } 0 < m < 1 \text{ treated separately};$$
the gamma and exponential distributions with respective densities
$$p(x; \alpha, \beta) = \frac{1}{\beta^\alpha \Gamma(\alpha)} x^{\alpha - 1} e^{-x/\beta}, \quad \alpha > 0,\ \beta > 0; \qquad p(x; \beta) = \frac{1}{\beta} e^{-x/\beta}, \quad \beta > 0;$$
the density of the F-distribution with degrees of freedom $d_1$ and $d_2$
$$p(x; d_1, d_2) = \frac{1}{B\left(\frac{d_1}{2}, \frac{d_2}{2}\right)} \left(\frac{d_1}{d_2}\right)^{\frac{d_1}{2}} x^{\frac{d_1}{2} - 1} \left(1 + \frac{d_1}{d_2} x\right)^{-\frac{d_1 + d_2}{2}}, \quad d_1 > 0,\ d_2 > 4;$$
the Beta prime distribution with density
$$p(x; \alpha, \beta) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 + x)^{-\alpha - \beta}, \quad \alpha > 0,\ \beta > 2;$$
the lognormal distribution with density
$$p(x; \mu, \sigma^2) = \frac{1}{x\sigma\sqrt{2\pi}} \exp\left\{-\frac{(\ln x - \mu)^2}{2\sigma^2}\right\}, \quad \mu \in R,\ \sigma > 0;$$
and the inverse Gaussian distribution with density
$$p(x; \mu, \sigma) = \sqrt{\frac{\sigma}{2\pi x^3}} \exp\left\{-\frac{\sigma(x - \mu)^2}{2\mu^2 x}\right\}, \quad \mu > 0,\ \sigma > 0.$$
Based on these results, Xu et al. [32] gave Theorem 5.
Theorem 5.
If the univariate random variable $X$ supported on $R_+$ is long-tailed, then
$$\lim_{n \to \infty} \Delta_n^{(n)} = +\infty.$$
For distributions on a finite interval $[m, M]$, Xu et al. [32] gave a systematic study including the following result.
Theorem 6.
Suppose that a random variable $X$ has a continuous probability density function $p(x)$ on $[m, M]$ and $E(X^2) < +\infty$. Let $m \le \xi_1^{(k)} < \cdots < \xi_k^{(k)} \le M$ be the $k$ MSE-RPs of $X$. If $p(\xi_k^{(k)} - y)$ converges uniformly to $p(M - y)$, $y \in [0, M - m]$, then
$$\lim_{k \to \infty} \Delta_k^{(k)} = 0,$$
provided that the above limit exists.

3. Algorithms for Generation of MSE-RPs of Univariate Distributions

Generation of MSE-RPs is very important for applications. This section reviews algorithms for the generation of MSE-RPs of univariate distributions. Minimizing the mean square error (5) is an optimization problem with several difficulties:
  • the objective function is multivariate on the simplex $\{\xi_1^{(k)} < \xi_2^{(k)} < \cdots < \xi_k^{(k)}\}$;
  • the objective function might not be differentiable on the whole domain;
  • the minimum of the objective function is not unique, and the objective function may have multiple local minima on the domain.
Such problems cannot be directly solved by classical optimization methods (such as the downhill simplex method, quasi-Newton methods, and conjugate gradient methods) for most distributions.
There are three main approaches for the generation of RPs:
(a)
a theoretic approach, or combining the theoretic approach with computational calculation;
(b)
applying the k-means method to find approximate RPs; this approach can be applied to all univariate and multivariate distributions;
(c)
solving a system of nonlinear equations.
Approach (a) can be used for very few distributions, such as the uniform distribution on a finite interval. The authors of [33] proposed a method for finding the MSE-RPs of the exponential and Laplace distributions by combining the theoretic approach with computational calculation.
Approach (b) applies the k-means method to any continuous univariate or multivariate distribution. The traditional k-means algorithm needs a set of n observations from the underlying distribution $F$ in $R^d$, and the user clusters those observations into k groups under a loss function $\phi$. The k-means algorithm begins with k arbitrary centers. Each observation is then assigned to the nearest center, and each center is recomputed as the center of mass of all points assigned to it. These steps (assignment and center calculation) are repeated until the process stabilizes. One can check that the total error $\phi$ is monotonically decreasing, which ensures that no clustering is repeated during the course of the algorithm. The mean square error (MSE), see Definition 1, has been popularly used as the error $\phi$.
It seems to us that Pollard [34] was the first to propose this approach. Along this line, Lloyd [35] proposed two trial-and-error methods. This approach is easy to implement, but it needs a good-quality initial set and a large number of training samples. There are two kinds of k-means algorithms: nonparametric and parametric. If the population distribution is known, the training samples are drawn from the known population distribution and the corresponding k-means algorithm is parametric; otherwise, the underlying distribution is unknown and the corresponding k-means algorithm is nonparametric. Usually, the parametric k-means algorithm is more accurate for most univariate distributions.
The parametric k-means algorithm:
(1)
For a given pdf $p(x)$, the number of RPs $k$, and $t = 0$, input a set of initial points $y_1^{(t)} < y_2^{(t)} < \cdots < y_k^{(t)}$. Determine a partition of $R$ as
$$I_i^{(t)} = \left(a_i^{(t)}, a_{i+1}^{(t)}\right],\ i = 1, \ldots, k-1, \qquad I_k^{(t)} = \left(a_k^{(t)}, a_{k+1}^{(t)}\right),$$
where
$$a_1^{(t)} = -\infty, \qquad a_i^{(t)} = \left(y_{i-1}^{(t)} + y_i^{(t)}\right)/2,\ i = 2, \ldots, k, \qquad a_{k+1}^{(t)} = \infty.$$
(2)
Calculate the probabilities
$$p_j^{(t)} = \int_{I_j^{(t)}} p(x)\,dx, \quad j = 1, \ldots, k,$$
and the conditional means
$$y_j^{(t+1)} = \frac{\int_{I_j^{(t)}} x\,p(x)\,dx}{\int_{I_j^{(t)}} p(x)\,dx} = \frac{\int_{I_j^{(t)}} x\,p(x)\,dx}{p_j^{(t)}}.$$
(3)
If the two sets $\{y_j^{(t)}\}$ and $\{y_j^{(t+1)}\}$ are identical, the process stops and delivers $\{y_j^{(t)}\}$ as the MSE-RPs of the distribution with probabilities $\{p_j^{(t)}\}$; otherwise, let $t := t + 1$ and go back to Step (1).
Stampfer and Stadlober [36] called this algorithm the self-consistency algorithm, as the output set of RPs is self-consistent (but not necessarily a set of MSE-RPs).
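The following Python sketch implements the parametric k-means (self-consistency) iteration for a univariate density; the function name `parametric_k_means` and the stopping tolerance are our own choices, not from the cited papers.

```python
import numpy as np
from scipy import integrate, stats

def parametric_k_means(pdf, k, init, tol=1e-10, max_iter=1000):
    """Self-consistency iteration for univariate MSE-RPs (Steps (1)-(3) above)."""
    y = np.sort(np.asarray(init, dtype=float))
    p = np.full(k, np.nan)
    for _ in range(max_iter):
        # Step (1): partition R by the midpoints a_i = (y_{i-1} + y_i)/2.
        a = np.concatenate(([-np.inf], (y[:-1] + y[1:]) / 2, [np.inf]))
        # Step (2): cell probabilities p_j and conditional means y_j^{(t+1)}.
        p = np.array([integrate.quad(pdf, a[j], a[j + 1])[0] for j in range(k)])
        m = np.array([integrate.quad(lambda x: x * pdf(x), a[j], a[j + 1])[0]
                      for j in range(k)]) / p
        # Step (3): stop when the points are numerically self-consistent.
        if np.max(np.abs(m - y)) < tol:
            return m, p
        y = m
    return y, p

# k = 3 points for N(0,1); the known values are about (-1.224, 0, 1.224).
pts, probs = parametric_k_means(stats.norm.pdf, k=3, init=[-1.0, 0.0, 1.0])
print(pts.round(4), probs.round(4))
```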
Approach (c) was proposed by Max [9] and Fang and He [10], based on traditional optimization for minimizing the mean square error function $Y = f(z_1, \ldots, z_k)$ with respect to $z_1, \ldots, z_k$. Where the objective function (5) is differentiable, one takes partial derivatives of $Y$ with respect to $z_1, \ldots, z_k$ and constructs a system of equations. Its solutions, denoted by $y_1, \ldots, y_k$, might be the global minimum of $Y$, i.e., MSE-RPs. For $k \ge 3$ there are three kinds of equations, given in (13), (14), and (15), respectively. Fang and He [10] gave conditions for the solution to be unique under the normal distribution.
Theorem 7.
Taking partial derivatives of $Y$ with respect to $z_1, \ldots, z_k$, we have three kinds of equations:
1.
For any $z_1 > 0$, for the equation
$$(z_1 - \mu)\,F\!\left(\frac{z_1 + z_2}{2}\right) = \frac{1}{2}\,b\,(z_1 + z_2)\,p\!\left(\frac{z_1 + z_2}{2}\right)$$
there exists a solution $z_2$ if and only if $z_1 < \mu$.
2.
For given $z_{i-1} > 0$, $i = 2, \ldots, k-1$, for the equation
$$(z_i - \mu)\left[F\!\left(\frac{z_i + z_{i+1}}{2}\right) - F\!\left(\frac{z_{i-1} + z_i}{2}\right)\right] = \frac{1}{2}\,b\,(z_{i-1} + z_i)\,p\!\left(\frac{z_{i-1} + z_i}{2}\right) - \frac{1}{2}\,b\,(z_i + z_{i+1})\,p\!\left(\frac{z_i + z_{i+1}}{2}\right)$$
there exists a solution $z_{i+1}$ when $z_{i-1} < z_{i,i-1}$, where $z_{i,i-1}$ is the $(i-1)$th representative point in the set of MSE-RPs with $k = i$.
3.
For any $z_{k-1} > 0$, for the equation
$$(z_k - \mu)\left[1 - F\!\left(\frac{z_{k-1} + z_k}{2}\right)\right] = \frac{1}{2}\,b\,(z_{k-1} + z_k)\,p\!\left(\frac{z_{k-1} + z_k}{2}\right)$$
there exists a solution $z_k$.
The Fang–He algorithm has been applied to many univariate distributions. Max [9] and Fang and He [10] obtained sets of MSE-RPs of $N(0, 1)$ for $k \le 36$ and $k \le 31$, respectively. Fu [37] applied the Fang–He algorithm to the gamma distribution $\Gamma(\frac{3}{2}, 1)$ and obtained MSE-RPs for $k \le 20$. Ke et al. [38] gave a more advanced study on MSE-RPs of the gamma distribution. Zhou and Wang [39] studied the t distribution with 10 degrees of freedom and gave MSE-RPs for $k \le 35$. Fei [40] proposed an algorithm for generating MSE-RPs via a Newton optimization algorithm. Li et al. [27] gave a detailed study on MSE-RPs of mixtures of normal distributions. Fei [41] studied the class of Pearson distributions, where the pdf of $X$ has the form
$$p(x) = c \exp\left\{\int \frac{a_0 + a_1 x}{b_0 + b_1 x + b_2 x^2}\,dx\right\},$$
where $c$ is the normalizing constant and the parameters $a_0, a_1, b_0, b_1, b_2$ satisfy the differential equation
$$\frac{dp(x)}{dx} = \frac{a_0 + a_1 x}{b_0 + b_1 x + b_2 x^2}\,p(x).$$
The class of Pearson distributions includes many useful distributions. For example, type I is the beta distribution; type II is the symmetric U-shaped curve; type III is the shifted gamma distribution; type V is the shifted inverse gamma distribution; type VI is the inverse beta distribution; type VII is the t distribution; type VIII is the power function distribution; type X is the exponential distribution; and type XI is the normal distribution. Fei [41] gave some sufficient conditions for the uniqueness of the solution.
Comparing the three approaches for generating MSE-RPs: approach (a) is obviously the best, but it applies to only a few distributions. Approach (b) can be applied to any continuous univariate or multivariate distribution; for the generation of univariate MSE-RPs, the parametric k-means algorithm does not need a training sample, and many authors have used this algorithm with a good initial set of points. Approach (c) can find the most accurate MSE-RPs of univariate distributions, but it requires heavy computation when k is large.

4. Stochastic Simulation and Resampling

Stochastic simulation has played an important role in statistical research and applications. Traditional simulation employs a set of random samples from the underlying distribution, generated by computational software.

4.1. Estimation of the Mean, Variance, Skewness, and Kurtosis

Fang et al. [42] first proposed a way of simulation that takes random samples from $F_{QMC}$ and $F_{MSE}$ instead of the underlying distribution $F$ and found that statistical simulation and resampling for estimation of the mean, variance, skewness, and kurtosis under $F_{MSE}$ have the best performance, with $F_{QMC}$ second. Xu et al. [32] compared simulation and resampling accuracy for estimation of the mean, variance, skewness, and kurtosis under many distributions: the uniform, normal, logistic, t, F, beta prime, gamma, lognormal, and inverse Gaussian distributions. Yang et al. [43] considered more approximation distributions to $F$ in simulation. All of their results show the advantages of $F_{MSE}$ and $F_{QMC}$.
Example 2.
Li et al. [27] compared estimators (of the mean, variance, skewness, and kurtosis) and density estimators of a mixture of two normal distributions via the four kinds of RPs. Here, the revised MC (RMC) is as follows. Denote by $F_{RMC}$ the distribution with a sorted random sample $x_1 \le \cdots \le x_k$ as support points and the related probabilities
$$p_1^{(RMC)} = \int_{-\infty}^{\frac{x_1 + x_2}{2}} p(x)\,dx, \qquad p_i^{(RMC)} = \int_{\frac{x_{i-1} + x_i}{2}}^{\frac{x_i + x_{i+1}}{2}} p(x)\,dx,\ i = 2, \ldots, k-1, \qquad p_k^{(RMC)} = \int_{\frac{x_{k-1} + x_k}{2}}^{\infty} p(x)\,dx.$$
Consider the mixture of the two normal distributions $N(8, 3^2)$ and $N(-3, 4^2)$ with $\alpha = 0.6$; that is, its pdf is
$$p(x) = \alpha\,\varphi(x; \mu_1, \sigma_1^2) + (1 - \alpha)\,\varphi(x; \mu_2, \sigma_2^2),$$
where $\varphi(x; \mu, \sigma^2)$ is the pdf of $N(\mu, \sigma^2)$. When $k = 15$, the density estimates obtained by the four methods are given in Figure 1, and the $L_2$-distances between $p(x)$ and $p_Y(x)$ for the four methods are given in Table 4. Comparing their performances, we can see that MSE > QMC > RMC > MC.
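A minimal sketch of the RMC construction in Example 2 (our own code, with a hypothetical random seed): the probabilities $p_i^{(RMC)}$ are obtained by integrating the mixture density over the Voronoi intervals of the sorted sample.

```python
import numpy as np
from scipy import integrate, stats

def rmc_probabilities(x_sorted, pdf):
    """Probabilities attached to a sorted MC sample x_1 <= ... <= x_k (F_RMC)."""
    a = np.concatenate(([-np.inf], (x_sorted[:-1] + x_sorted[1:]) / 2, [np.inf]))
    return np.array([integrate.quad(pdf, a[j], a[j + 1])[0]
                     for j in range(len(x_sorted))])

# Mixture 0.6*N(8, 3^2) + 0.4*N(-3, 4^2) from Example 2, with k = 15.
pdf = lambda x: 0.6 * stats.norm.pdf(x, 8, 3) + 0.4 * stats.norm.pdf(x, -3, 4)
rng = np.random.default_rng(1)
comp = rng.random(15) < 0.6
x = np.sort(np.where(comp, rng.normal(8, 3, 15), rng.normal(-3, 4, 15)))
print(rmc_probabilities(x, pdf).sum())   # the probabilities sum to 1
```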

4.2. Consistency

There are many criteria for stochastic simulation/resampling, including unbiasedness, efficiency (minimum variance), and consistency.
Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a population distribution $F$ and let $Z_n$ be a random-sample-based estimator of the parameter of interest $\theta$. Denote the random variable of interest as $T(X_1, \ldots, X_n; F) = n^{1/2}(Z_n - \theta)$, depending upon the underlying distribution $F$. Let
$$H_n(x) = P\left(T(X_1, \ldots, X_n; F) \le x\right)$$
be the cdf of $n^{1/2}(Z_n - \theta)$. Let $\hat{F}$ be an approximation distribution of $F$ and let $Y_1, \ldots, Y_n$ be a random sample of size $n$ from $\hat{F}$. A new plug-in estimate of $T(X_1, \ldots, X_n; F)$ is defined as
$$T(Y_1, \ldots, Y_n; \hat{F}) = n^{1/2}(\tilde{Z}_n - \tilde{\theta}_k),$$
where $\tilde{\theta}_k$ is a plug-in estimate of $\theta$ under $\hat{F}$ and $\tilde{Z}_n$ is the counterpart of $Z_n$ based on the iid sample $Y_1, \ldots, Y_n$. The sampling distribution of $T(Y_1, \ldots, Y_n; \hat{F})$ is denoted as
$$\tilde{H}_n^{(k)}(y) = P\left(T(Y_1, \ldots, Y_n; \hat{F}) \le y\right),$$
which is used to approximate the sampling distribution $H_n(x)$ defined in (18).
In order to show that the proposed resampling method is effective for simulation, we need to demonstrate that $\tilde{H}_n^{(k)}$ converges to $H_n$ and that $\tilde{H}_n^{(k)}$ is close to $H_n$ in a certain sense. There are several ways to define a distance between $H_n$ and $\tilde{H}_n^{(k)}$. Shao and Tu [44] provided a comprehensive review of the theoretical properties of the consistency of bootstrapping. Xu et al. [45] extended this study to the case where the approximation distribution is chosen as $F_{mse,k}$ (the approximate distribution formed by $k$ MSE-RPs of $F$). They employ the Kolmogorov metric (20) and the Mallows–Wasserstein metric (21) to measure the distance between $H_n$ and $\tilde{H}_n^{(k)}$. If one of these distances converges to 0, then we say that $\tilde{H}_n^{(k)}$ is consistent with $H_n$.
Definition 7.
Let $G$ and $K$ be two distribution functions in $\mathcal{F} = \{$all distributions on $R\}$. The Kolmogorov distance between $G$ and $K$ is
$$\rho(G, K) = \sup_{-\infty < x < \infty} |G(x) - K(x)|.$$
Definition 8.
For two distributions $G$ and $K$ in $\mathcal{F}_2$, where
$$\mathcal{F}_2 = \left\{F \in \mathcal{F} : \int |x|^2\,dF(x) < \infty\right\},$$
their Mallows–Wasserstein distance is
$$\rho_2(G, K) = \inf_{\mathcal{T}_{U,V}} \left(E|U - V|^2\right)^{1/2},$$
where $\mathcal{T}_{U,V}$ is the collection of all possible joint distributions of the pairs $(U, V)$ whose marginal distributions are $G$ and $K$, respectively.
The convergence under the Mallows–Wasserstein metric (21) is stronger than convergence in distribution. Xu et al. [45] considered the consistency of the sample mean and sample variance under $F_{mse,k}$. Let $X_1, \ldots, X_n$ be independent observations on $F$; denote the sample mean by $\bar{X}$ and the sample variance by $S^2$. Let $Y_1, \ldots, Y_n$ be independent observations on $F_{mse,k}$; denote the sample mean by $\bar{Y}$ and the sample variance by $\tilde{S}^2$. The following results are from Xu et al. [45].
Theorem 8.
Under the above notation, the distribution $\tilde{H}_n^{(k)}$ of $n^{1/2}(\bar{Y} - \tilde{\mu}_k)$ converges to the distribution $H_n$ of $n^{1/2}(\bar{X} - \mu)$, where $\tilde{\mu}_k$ is the mean of $F_{mse,k}$ and $\mu$ is the mean of $F$, i.e.,
(i)
$\rho_2(\tilde{H}_n^{(k)}, H_n) \to 0$ as $k \to \infty$;
(ii)
$\rho(\tilde{H}_n^{(k)}, H_n) \to 0$ as both $n \to \infty$ and $k \to \infty$.
Corollary 1.
Given a set of $k$ MSE-RPs from $F$, where $k \in N_+$, for any sample size $n$ we have
$$\rho_2(\tilde{H}_n^{(k)}, H_n) \le E^{1/2}\left[(X - Y_k^{mse})^2\right].$$
Corollary 2.
Given a set of $k$ MSE-RPs from $F$, where $k \in N_+$, denote by $\sigma^2$ the variance of $F$ and by $\tilde{\sigma}_k^2$ the variance of $F_{mse,k}$. Then, we have
$$\limsup_{n \to \infty} \rho(\tilde{H}_n^{(k)}, H_n) \le \rho\left(\Phi\left(\frac{x}{\sigma}\right), \Phi\left(\frac{x}{\tilde{\sigma}_k}\right)\right),$$
where $\Phi$ is the cumulative distribution function of the standard normal distribution.
The sampling distribution of $n^{1/2}(\bar{Y} - \tilde{\mu}_k)$ is a valid estimate of the sampling distribution of $n^{1/2}(\bar{X} - \mu)$ according to Theorem 8. As shown in Corollary 1, the distance between the two sampling distributions under the Mallows–Wasserstein metric is always bounded by the square root of the mean squared error of the $k$ MSE-RPs from the underlying distribution $F$, regardless of the sample size $n$. Moreover, Corollary 2 explains that when $n \to \infty$, the Kolmogorov distance between the two sampling distributions is bounded by the distance between two normal distributions with different standard errors. The distance on the right-hand side of (22) is determined by the number of representative points $k$. We should notice that the Kolmogorov distance between the two sampling distributions converges to 0 as both $n$ and $k$ increase. In practice, the number of representative points $k$ should be as large as possible.
Theorem 9.
Let $X \sim F$ and $Y_k^{mse} \sim F_{mse,k}$ on the domain $\mathcal{X}$.
(i)
If a function $g : \mathcal{X} \to R$ is continuous, then
$$g(Y_k^{mse}) \to g(X) \quad \text{in distribution};$$
(ii)
If $g$ is also bounded, then
$$\lim_{k \to \infty} E\left[g(Y_k^{mse})\right] = E\left[g(X)\right].$$
Specifically, part (i) of Theorem 9 proves that the convergence of MSE-RPs is preserved by continuous functions, and part (ii) shows that the MSE-RPs can be used to estimate the expectation of $g(X)$ consistently for any continuous and bounded function $g$.
Theorem 10.
Let $X \sim F$ and $Y_k^{mse} \sim F_{mse,k}$. If there exists some $r \in N_+$ with $r \ge 3$ such that $E(|X|^{r+\epsilon}) < \infty$ for some $\epsilon > 0$, then
(i)
$Y_k^{mse}$ converges to $X$ in the $r$th raw moment, i.e.,
$$\lim_{k \to \infty} E\left[(Y_k^{mse})^r\right] = E(X^r);$$
(ii)
$Y_k^{mse}$ converges to $X$ in the $r$th absolute central moment, i.e.,
$$\lim_{k \to \infty} E\left(|Y_k^{mse} - \mu|^r\right) = E\left(|X - \mu|^r\right),$$
where $\mu = E(Y_k^{mse}) = E(X)$.
Theorem 10 is essential for establishing the consistency of the sample $r$th moment under $F_{mse,k}$, including the sample variance under $F_{mse,k}$.

4.3. Confidence Interval Estimation

Confidence intervals are usually constructed by considering a pivotal quantity $t_n = T(X_1, \ldots, X_n; F)$ whose distribution is known. Suppose $\theta$ is a parameter of interest. Consider a studentized pivot $t_n = (\hat{\theta}_n - \theta)/\hat{\sigma}_n$, where $\hat{\theta}_n$ is an estimator of $\theta$ and $\hat{\sigma}_n$ is an estimator of the standard error of $\hat{\theta}_n$. Denote the distribution of $t_n$ as $H_n$. Assuming the inverse of $H_n$ is unique, an exact $100(1 - \alpha)\%$ confidence interval for $\theta$ is
$$\left[\hat{\theta}_n - \hat{\sigma}_n H_n^{-1}(1 - \alpha/2),\ \hat{\theta}_n - \hat{\sigma}_n H_n^{-1}(\alpha/2)\right].$$
In the traditional asymptotic approach, we replace the unknown $H_n$ by its limit $H$. There are two main disadvantages of the traditional asymptotic approach: it requires the limit $H$ to be derived analytically and explicitly, and it lacks high-order accuracy. The bootstrap method can be applied to obtain an easy-to-use confidence interval with higher accuracy by replacing $H_n$ in (23) with its bootstrap estimator $H_{boot}$. According to numerical studies under some basic experimental settings by Xu et al. [45], the proposed resampling method tends to be more accurate for interval estimation than the bootstrap method.
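For concreteness, here is a minimal bootstrap-t sketch for the mean (our own illustration of replacing $H_n$ in (23) by $H_{boot}$; to resample from $F_{mse,k}$ instead, one would draw the bootstrap samples from the discrete RP distribution with its probabilities):

```python
import numpy as np

def bootstrap_t_ci(x, alpha=0.05, B=2000, seed=0):
    """Bootstrap-t interval for the mean: (23) with H_n replaced by H_boot."""
    rng = np.random.default_rng(seed)
    n = len(x)
    theta, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(B)
    for b in range(B):
        # To resample from F_mse,k instead, draw from the k RPs with
        # probabilities p_1, ..., p_k here (e.g., rng.choice(xi, n, p=probs)).
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = (xb.mean() - theta) / (xb.std(ddof=1) / np.sqrt(n))
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return theta - se * q_hi, theta - se * q_lo

x = np.random.default_rng(42).exponential(scale=2.0, size=30)
print(bootstrap_t_ci(x))
```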

5. Property of MSE-RPs of Multivariate Distributions

Let $X = (X_1, \ldots, X_d)^T$ be a random vector with cdf $F(x_1, \ldots, x_d)$ and pdf $p(x_1, \ldots, x_d)$. Assume that $X$ has finite mean vector $E(X) = \mu$ and covariance matrix $\Sigma = \mathrm{Cov}(X)$. A set of MSE-RPs of $X$ is denoted by $\Xi = \{\xi_j, j = 1, \ldots, k\}$; it minimizes the mean square error (MSE), and the corresponding random vector is $\xi = Y_{MSE}$ with Voronoi regions $S_j, j = 1, \ldots, k$ and probabilities $p_j = P(X \in S_j), j = 1, \ldots, k$ (refer to Definition 1). The following results are from Flury [13,14].
Theorem 11.
Under the above assumptions on $X$ we have:
1.
When $k = 1$, the MSE-RP is given by $\xi = \mu$;
2.
$E(Y_{MSE}) = E(X) = \sum_{j=1}^{k} p_j \xi_j$, i.e., $E(X)$ is in the convex hull of $\xi_1, \ldots, \xi_k$;
3.
MSE-RPs are self-consistent and
$$\mathrm{MSE}(Y_{MSE}) = \mathrm{tr}(\Sigma) - \mathrm{tr}(\Sigma_\xi),$$
where $\Sigma_\xi = \sum_{j=1}^{k} p_j (\xi_j - \mu)(\xi_j - \mu)^T$ is the covariance matrix of $Y_{MSE}$;
4.
The rank of $(\xi_1 - \mu, \ldots, \xi_k - \mu)$ is less than $k$.
Theorems 2 and 4 can easily be extended to the multivariate case, but Theorem 4 needs a change in the linear relation for its extension, as below.
Theorem 12.
Let $X_1$ and $X_2$ be two random vectors in $R^d$ with the relation $X_2 = a + bHX_1$, where $a \in R^d$, $b \in R$ and $H$ is an orthogonal matrix of order $d$. We have:
(a)
If $\{y_1, \ldots, y_k\}$ is a set of self-consistent points of $X_1$, then $\{a + bHy_1, \ldots, a + bHy_k\}$ is a set of self-consistent points of $X_2$;
(b)
If $\{\xi_1, \ldots, \xi_k\}$ is a set of MSE-RPs of $X_1$, then $\{a + bH\xi_1, \ldots, a + bH\xi_k\}$ is a set of MSE-RPs of $X_2$.
There are various kinds of symmetry in multivariate distributions, among which the class of elliptically symmetric distributions is an extension of the multivariate normal distribution and includes many useful distributions. For a comprehensive study, refer to Fang et al. [46].
Definition 9.
Spherically and elliptically symmetric distributions. A d-dimensional random vector $X$ is said to have an elliptically symmetric distribution (ESD), or elliptical distribution for short, if $X$ has the following stochastic representation (SR):
$$X \overset{d}{=} \mu + R\,\Psi^{1/2} U^{(d)},$$
where the random variable $R \ge 0$ is independent of $U^{(d)}$, which is uniformly distributed on the unit sphere in $R^d$, $\mu \in R^d$, $\Psi$ is a positive definite matrix of order $d$ (not necessarily $\mathrm{Cov}(X)$), and $\Psi^{1/2}$ is the positive definite square root of $\Psi$. We write $X \sim ESD_d(\mu, \Psi, g)$ if $X$ has a density of the form
$$p(x) = |\Psi|^{-1/2}\,g\left((x - \mu)^T \Psi^{-1} (x - \mu)\right),$$
where $g$ is called the density generator. When $\mu = 0$ and $\Psi = I$, $X$ has a spherical distribution with the stochastic representation
$$X \overset{d}{=} R\,U^{(d)},$$
and we write $X \sim S(g)$, where $g(x^T x)$ is the density of $X$.
In general, an elliptical/spherical distribution does not necessarily have a density. For example, $U^{(d)}$ does not have a density in $R^d$. If the distribution of $X$ is spherical and $P(X = 0) = 0$, then $\|X\| \overset{d}{=} R$ and $X/\|X\| \overset{d}{=} U^{(d)}$, and they are independent. It is known that $X$ defined in (24) has a density if and only if $R$ has a density $f(\cdot)$. The relationship between $g(\cdot)$ and $f(\cdot)$ is given by
$$f(r) = \frac{2\pi^{d/2}}{\Gamma(d/2)}\,r^{d-1}\,g(r^2).$$
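As a quick check of this relation (a worked example, not from the cited sources): for the standard d-dimensional normal distribution, the density generator is $g(t) = (2\pi)^{-d/2} e^{-t/2}$, so
$$f(r) = \frac{2\pi^{d/2}}{\Gamma(d/2)}\,r^{d-1}\,(2\pi)^{-d/2} e^{-r^2/2} = \frac{1}{2^{d/2-1}\Gamma(d/2)}\,r^{d-1} e^{-r^2/2},$$
which is exactly the density of the chi distribution with $d$ degrees of freedom, as expected for $R = \|X\|$.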
Table 5 lists some useful subclasses of the elliptical distributions.
Flury [13] was the first to find a relationship between the principal components and the MSE-RPs of elliptical distributions, as given in the following theorems.
Theorem 13.
Suppose $X \sim ESD_d(\mu, \Psi, g)$ with mean vector $\mu$, covariance matrix $\Sigma$ proportional to $\Psi$, and density generator $g$. Then, the two MSE-RPs of $X$ have the form
$$\xi_1 = \mu + \lambda_1 \gamma, \qquad \xi_2 = \mu + \lambda_2 \gamma,$$
where $\gamma \in R^d$ is the normalized eigenvector associated with the largest eigenvalue of $\Sigma$, and $\{\lambda_1, \lambda_2\}$ are the $k = 2$ MSE-RPs of the univariate random variable $\gamma^T(X - \mu)$. If the MSE-RPs are not unique, they can be chosen in the given form.
Tarpey et al. [47] established a theorem, called the principal subspace theorem, which shows that k principal points of an elliptically symmetric distribution lie in the linear subspace spanned by the first several principal components.
Theorem 14.
Let $X \sim ESD_d(0, \Sigma, g)$. If a set of $k$ MSE-RPs of $X$ spans a subspace $V$ of dimension $q$ ($q \le d$), then $\Sigma$ has a set of eigenvectors $\beta_1, \ldots, \beta_d$ with associated ordered eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$ such that $V$ is spanned by $\beta_1, \ldots, \beta_q$.
The principal subspace theorem shows that the set of MSE-RPs of an elliptical distribution has a close relationship with its principal components. This is why Flury [13] called MSE-RPs principal points. Tarpey [48] and Yang et al. [43] proposed ways to generate sets of MSE-RPs in several subclasses of elliptical distributions and explored more relationships between the principal components and MSE-RPs. Their studies need algorithms for producing MSE-RPs.
Yang et al. [43] considered numerical simulation for estimation of the mean vector and covariance matrix of elliptical distributions and showed that both QMC-RPs and MSE-RPs perform better than MC-RPs. They also studied the distribution of the MSE of MC-RPs for univariate and elliptical distributions and pointed out that the MSE of MC-RPs can be fitted by an extreme value distribution. A random sample with a poor MSE value cannot be expected to give good results.

6. Algorithms of Generation for RPs of Multivariate Distributions

There are many methods for generating a random sample from a given multivariate distribution $F(x)$. Johnson [49] gave a good introduction to various methods. Two useful methods are conditional decomposition and stochastic representation.

6.1. Conditional Decomposition

The conditional distribution method turns generation from a multivariate distribution into generation from several conditional univariate distributions. Suppose the random vector $X = (X_1, \ldots, X_d)$ has cdf $F(x_1, \ldots, x_d)$. Let $F_1(x)$ be the cdf of $X_1$ and let $F_i(x_i \mid x_1, \ldots, x_{i-1})$ be the conditional distribution of $X_i$ given $X_1 = x_1, \ldots, X_{i-1} = x_{i-1}$. It is known from probability theory that
$$F(x_1, \ldots, x_d) = F_1(x_1)\,F_2(x_2 \mid x_1) \cdots F_d(x_d \mid x_1, \ldots, x_{d-1}).$$
Note that each of $F_1(x_1)$, $F_2(x_2 \mid x_1)$, ..., $F_d(x_d \mid x_1, \ldots, x_{d-1})$ is a univariate (conditional) distribution. We can apply methods including the inverse transformation method to generate a random sample from these distributions. Denote a set of random numbers generated from these univariate (conditional) distributions by $z_1, \ldots, z_d$; then $z = (z_1, \ldots, z_d)$ is a random sample from $X$. In particular, when $X_1, \ldots, X_d$ are independent, $F(x_1, \ldots, x_d) = \prod_{i=1}^{d} F_{X_i}(x_i)$, where $F_{X_i}(x_i)$ is the cdf of $X_i$.
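A short sketch of the conditional decomposition method under an assumed toy model ($X_1 \sim \mathrm{Exp}(1)$ and $X_2 \mid X_1 = x_1 \sim N(x_1, 1)$; both choices are ours, for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def sample_conditional(n):
    """Conditional decomposition: F(x1, x2) = F_1(x1) F_2(x2 | x1)."""
    x1 = stats.expon.ppf(rng.random(n))          # inverse transform for F_1
    x2 = stats.norm.ppf(rng.random(n), loc=x1)   # inverse of F_2(. | x1)
    return np.column_stack([x1, x2])

print(sample_conditional(5))
```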

6.2. Stochastic Representation

Let $X \sim F(x)$, $X \in R^d$. Suppose that $X$ has a stochastic representation
$$X = h(Y), \quad Y \sim U(C^t),\ t \le d,$$
where $h$ is a set of continuous functions on $C^t = [0, 1]^t$ and $Y$ follows the uniform distribution on $C^t$. Monte Carlo simulation can find a random sample $u = (u_1, \ldots, u_t)$ from $C^t$. Then, $h(u)$ is a random sample from $F(x)$.
The SR method can be extended to generate sets of QMC-RPs and MSE-RPs. The QMC method employs a set of quasirandom numbers on $C^t$, denoted by $\{c_i, i = 1, \ldots, n\}$, $c_i \in C^t$. Set $x_i = h(c_i)$, $i = 1, \ldots, n$. Then, the set $\{x_i\}$ is called a set of quasirandom F-numbers, which can be regarded as another kind of RPs of $F(x)$, i.e., NTM-RPs or QMC-RPs.
Generating a set of MSE-RPs is not possible analytically for most multivariate distributions. If we focus on classes of multivariate distributions that are easily generated by MC or QMC, the generation of MSE-RPs becomes much easier. One method is the LBG algorithm, based on the k-means method, proposed by Linde et al. [50]. The LBG algorithm requires a training sequence $\mathcal{X} = \{x_1, \ldots, x_N\}$ from the given distribution $F(x)$ generated by a Monte Carlo method, where $N$ is much larger than $k$ and $k$ is the number of RPs for $F(x)$. The next step chooses a set of initial vectors using the same Monte Carlo method and finds the associated Voronoi partition $\{S_j\}$ by assigning each $x_i$ to the nearest region of the partition. Then one follows the procedure of the k-means algorithm and iterates until reaching the stopping rule.
Although the LBG algorithm can reach a local optimal output with a non-increasing MSE, Fang et al. [51] pointed out two problems when applying this algorithm:
(a)
The algorithm gives a local optimum, and the results depend on the initial points;
(b)
The generation of samples of $F(x)$ and the calculation of the MSE are based on the Monte Carlo method, which is less efficient, with convergence rate $O_p(n^{-1/2})$.
Fang et al. [51] revised the LBG algorithm by the use of quasirandom F-numbers in producing the set of training samples and the set of initial vectors. They proposed the so-called NTLBG algorithm for the generation of QMC-RPs of an elliptical distribution.
Recall that a spherical distribution $S(g)$ has the SR $X \overset{d}{=} R\,U^{(d)}$ in (25). If we can find a set of quasirandom numbers for the uniform distribution on the unit sphere and a set of quasirandom numbers of $R$, their product produces a set of QMC-RPs of $X$. An effective algorithm for generating a set of quasirandom numbers on the unit sphere is given in Fang and Wang [4]; it is called the TFWW algorithm. It is easy to see that if $X \sim S(g)$, then $HX \sim S(g)$ for any orthogonal matrix $H$ of order $d$. Therefore, if $\{\xi_1, \ldots, \xi_k\}$ is a set of MSE-RPs of $X$, then $\{H\xi_1, \ldots, H\xi_k\}$ is also a set of MSE-RPs of $X$. That means the set of MSE-RPs of a spherical distribution is far from unique.

6.3. The NTSR Algorithm for the Generation of a Spherical Distribution

  • Generate a set of quasirandom numbers $\{c_j = (c_{j1}, \ldots, c_{jd}), j = 1, \ldots, n\}$ on $C^d$.
  • Denote the cdf of $R$ by $F_R(r)$ and let $F_R^{-1}$ be its inverse function. Compute $r_j = F_R^{-1}(c_{jd})$, $j = 1, \ldots, n$.
  • Generate a set of quasirandom numbers $\{u_j, j = 1, \ldots, n\}$ of the uniform distribution on the unit sphere in $R^d$ from the first $(d-1)$ components of the $c_j$'s via the TFWW algorithm.
  • Then $\{y_j = r_j u_j, j = 1, \ldots, n\}$ is a set of quasirandom F-numbers, or QMC-RPs, of the given spherical distribution $F(x)$.
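A sketch of the NTSR steps for $d = 2$ and the bivariate standard normal (where $R \sim \chi_2$). A scrambled Sobol' set stands in for a number-theoretic net, and for $d = 2$ the map $c \mapsto (\cos 2\pi c, \sin 2\pi c)$ plays the role of the TFWW construction on the unit circle; both substitutions are our assumptions.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

def ntsr_2d(m, r_ppf, seed=0):
    """NTSR sketch for d = 2: points of a spherical distribution X = R U^(2)."""
    c = qmc.Sobol(d=2, scramble=True, seed=seed).random_base2(m)  # 2^m points
    theta = 2.0 * np.pi * c[:, 0]        # first coordinate -> direction U^(2)
    r = r_ppf(c[:, 1])                   # second coordinate -> radius F_R^{-1}
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Bivariate standard normal: R = ||X|| follows the chi distribution with 2 df.
pts = ntsr_2d(10, r_ppf=stats.chi(2).ppf)
print(pts.shape, pts.mean(axis=0))       # the mean should be close to (0, 0)
```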
This algorithm can easily be extended to the generation of quasirandom F-numbers or QMC-RPs for elliptical distributions. The NTLBG algorithm has the following steps:
  • Step 1. For a given $F(x)$, generate a set of quasirandom F-numbers $x_1, \ldots, x_N$ as a training sequence by the NTSR algorithm with a large $N$.
  • Step 2. Set $t = 0$. For a given $k$, generate a set of quasirandom F-numbers $Y_t = \{y_{t1}, \ldots, y_{tk}\}$ of $F(x)$ as an initial set of output vectors.
  • Step 3. Form a partition $\{S_i^{(t)}\}$ of $\{x_j, j = 1, \ldots, N\}$ such that each $x_i$ is assigned to the nearest region of the partition, i.e., $x_i \in S_m^{(t)}$ if $||x_i - y_{tm}|| \le ||x_i - y_{tj}||$, $j \ne m$.
  • Step 4. Calculate the sample conditional means $y_i = E[x \mid x \in S_i]$ and form a new set of output vectors $Y_{t+1} = \{y_{t+1,1}, \ldots, y_{t+1,k}\}$, where
$$y_{t+1,m} = \frac{1}{n_{t+1,m}} \sum_{x_j \in S_m^{(t)}} x_j$$
and $n_{t+1,m}$ is the number of $x_j$ falling in $S_m^{(t)}$. If $Y_{t+1} = Y_t$, deliver $Y_t$ as the MSE-RPs and $n_{t+1,m}/N$ as the estimated probability of $P(x \in S_m)$, and go to Step 6; otherwise go to the next step.
  • Step 5. Let $t := t + 1$ and go to Step 3.
  • Step 6. Calculate and deliver the MSE
$$\mathrm{MSE}(Y_t) = \int_{R^d} \min_{1 \le j \le k} (x - y_{tj})^T (x - y_{tj})\,dF(x),$$
estimated from the training sequence by
$$\widehat{\mathrm{MSE}}(Y_t) = \frac{1}{N} \sum_{i=1}^{N} \sum_{m=1}^{k} (x_i - y_{t,m})^T (x_i - y_{t,m})\,I(x_i \in S_m^{(t)}).$$
The NTLBG algorithm has been used in the generation of QMC-RPs and MSE-RPs for elliptical distributions [43,52,53] and for the skew-normal distribution in Yang et al. [43].
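An NTLBG-flavored sketch (our own, with hypothetical parameter choices): k-means is run on a quasirandom training set with quasirandom initial centers. For the bivariate standard normal, the coordinatewise inverse-cdf map of a Sobol' set serves as the quasirandom F-numbers, standing in for the NTSR output.

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc
from scipy.cluster.vq import kmeans2

def sobol_normal2(m, seed=0):
    """Quasirandom points for the bivariate standard normal: a Sobol' set
    mapped coordinatewise through the inverse normal cdf (our stand-in for
    the NTSR-generated quasirandom F-numbers)."""
    c = qmc.Sobol(d=2, scramble=True, seed=seed).random_base2(m)
    return stats.norm.ppf(np.clip(c, 1e-12, 1 - 1e-12))

train = sobol_normal2(14)             # Step 1: training sequence, N = 2^14
init = sobol_normal2(3, seed=1)       # Step 2: k = 8 initial output vectors
centers, labels = kmeans2(train, init, iter=100, minit='matrix')  # Steps 3-5
probs = np.bincount(labels, minlength=len(init)) / len(train)     # P(x in S_m)
mse = np.mean(np.sum((train - centers[labels])**2, axis=1))       # Step 6
print(centers.round(3), probs.round(3), round(mse, 4))
```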

7. Applications of QMC-RPs and MSE-RPs

It is known that QMC-RPs have been applied to high-dimensional numerical integration [3], experimental design [5], geometric probability [4], optimization [4], option pricing [7], and big data analysis. Due to limited space, we cite only some applications.

7.1. Statistical Inference

For statistical estimation, Fang et al. [42] employed three kinds of RPs (MC-RPs, QMC-RPs, and MSE-RPs) in the estimation of the mean, variance, skewness, and kurtosis as well as the density function. Their results show a big potential for the application of QMC-RPs and MSE-RPs in statistical estimation and resampling. Their results also show that MSE-RPs perform better than QMC-RPs under the normal distribution, and this observation holds for many other distributions. However, Jiang et al. [54] found that $Y_{QMC}$ of the arcsine distribution has the same mean, variance, skewness, and kurtosis as the population, i.e., QMC-RPs are the best among the considered approximations. These interesting results motivated Zhou and Fang [55] to put more conditions into a new criterion for the construction of RPs, i.e., the FM criterion and FM-RPs. There are more criteria for the construction of RPs; for example, Mak and Joseph [56] proposed so-called support points via the energy distance, and Rover and Friede [57] used the Kullback–Leibler divergence and the symmetrized KL-divergence to define a kind of discrete approximation to a continuous distribution $F(x)$. The mean square error is a special case of
$$\int_{R^d} \min_{j=1,\ldots,k} ||x - \xi_j||^r\,p(x)\,dx$$
with $r = 2$, where $||a||$ denotes the $l_r$-norm of $a \in R^d$; the corresponding set of RPs is called a set of MrE-RPs. There are many studies under this criterion, especially for $r = 1$ (see Graf and Luschgy [18]). Yu [58] gave some sufficient conditions for the uniqueness of MSE-RPs.
Liang et al. [52] and Wang et al. [53] proposed statistics for testing multivariate normality based on MSE-RPs. Their numerical comparisons show that the new goodness-of-fit statistics can significantly improve upon the traditional chi-square test.

7.2. Moment Estimation and Maximum Likelihood Method

Let $x_1, \dots, x_n$ be a sample from the population $F(x; \theta)$, where $\theta$ is a set of unknown parameters, and one needs to find a good estimator $\hat{\theta}$ based on this sample; for example, for the normal distribution $N(\mu, \sigma^2)$, one needs to estimate $\theta = (\mu, \sigma^2)$. Among the many useful estimation methods, the moment estimator (ME) and the maximum-likelihood estimator (MLE) are popular. The ME requires solving a system of nonlinear equations, and the sequential number-theoretic method for optimization (SNTO for short) proposed by Fang and Wang [4] is a powerful method for solving the corresponding equations. Note that QMC-RPs play an important role in the SNTO algorithm.
Finding the MLE of $\theta$ is an optimization problem in which the objective function (the likelihood of the random sample) may have many local maxima, and traditional optimization methods may become trapped in one of them. Fang and Wang [4] and Li and Fang [59] applied SNTO to find the MLE for many univariate distributions. A sketch of the SNTO idea follows.
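The following Python sketch conveys the SNTO idea under simplifying assumptions of ours: a scrambled Sobol net stands in for the good lattice point (glp) sets used by Fang and Wang [4], and the contraction schedule is illustrative.

```python
import numpy as np
from scipy.stats import qmc

def snto(objective, lower, upper, n=512, shrink=0.5, iters=20, seed=0):
    # SNTO-style search: evaluate the objective on a quasirandom net over
    # the current hyper-rectangle, recentre on the best point, contract.
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    sob = qmc.Sobol(d=len(lower), scramble=True, seed=seed)
    best_x, best_f = None, np.inf
    for _ in range(iters):
        pts = qmc.scale(sob.random(n), lower, upper)  # net on the rectangle
        vals = np.array([objective(p) for p in pts])
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            best_x, best_f = pts[i], vals[i]
        half = shrink * (upper - lower) / 2.0         # contract around best_x
        lower = np.maximum(lower, best_x - half)
        upper = np.minimum(upper, best_x + half)
    return best_x, best_f

# E.g., maximize a (hypothetical) likelihood by minimizing its negative log:
# theta_hat, _ = snto(lambda th: -loglik(th), lower=[0, 0.1], upper=[5, 3])
```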
It is well known that the performance of sample moments depends highly on the quality of the random sample: the sample moments will be less reliable if the sample fails to catch the turning points of the population density. One way to revise the ME is to improve the representativeness of the sample used for calculating the sample moments. Li and Fang [59] proposed a new concept, “QMC-data”, via the nonparametric quantile estimator of Harrell and Davis [60].
Let $x = (x_1, \dots, x_n)$ be a random sample of size $n$ from $F(x)$. Denote by $X_{(1)} \le X_{(2)} \le \dots \le X_{(n)}$ the order statistics and by $F^{-1}(q)$ the $q$th population quantile.
  • Step 1. Generate $n$ quasirandom numbers $q_i = (2i - 1)/(2n)$, $i = 1, \dots, n$, which are uniformly scattered on the interval $(0, 1)$. For each $q = q_i$, run the following steps.
  • Step 2. The Harrell–Davis estimator of $F^{-1}(q)$ based on the random sample is
    $$SQ(q) = \sum_{i=1}^{n} W_{n,i} X_{(i)},$$
    where
    $$W_{n,i} = \frac{1}{B\{(n+1)q, (n+1)(1-q)\}} \int_{(i-1)/n}^{i/n} y^{(n+1)q - 1} (1 - y)^{(n+1)(1-q) - 1} \, dy = I_{i/n}\{q(n+1), (1-q)(n+1)\} - I_{(i-1)/n}\{q(n+1), (1-q)(n+1)\},$$
    $B(a, b)$ is the beta function, and $I_x(a, b)$ denotes the regularized incomplete beta function.
  • Step 3. Let $z_i = SQ(q_i)$ for $i = 1, \dots, n$; then $x = (x_1, \dots, x_n)$ is replaced by $z = (z_1, \dots, z_n)$ for moment estimation. The objective function based on the revised sample is
    $$\tilde{Q}(\theta \mid z) = \sum_{i=1}^{5} (M_i(\theta) - m_i(z))^2,$$
    where $M_i(\theta)$ and $m_i(z)$ denote the $i$th population and sample moments, respectively.
Li and Fang [59] call $z$ QMC-data, based on which they obtained a better estimator of $\theta = (\alpha, \mu_1, \sigma_1^2, \mu_2, \sigma_2^2)$ for a mixture of two normal distributions by the ME and/or MLE methods. Chen [61,62] gave a comprehensive study of estimation for this popularly used distribution, including convergence rates and conditions for establishing the consistency of the MLE. Qi et al. [63] considered using density estimation to revise location-biased datasets and thereby reduce their bias; kernel density estimation is a nonparametric method whose spirit is similar to that of QMC-data. Wang et al. [64] proposed a new bias-correction numerical method based on SNTO that can significantly reduce the bias and variance of parameter estimators in three types of extreme value distributions. Many case studies can be found in Qi et al. [63], Li and Fang [59], and Wang et al. [64]. A small sketch of the QMC-data construction is given below.
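As an illustration, the QMC-data construction takes a few lines in Python using SciPy's implementation of the Harrell–Davis estimator (the function name qmc_data is our own):

```python
import numpy as np
from scipy.stats.mstats import hdquantiles

def qmc_data(x):
    # Replace the sample by Harrell-Davis quantile estimates SQ(q_i)
    # at the equally spaced probabilities q_i = (2i - 1) / (2n).
    n = len(x)
    q = (2.0 * np.arange(1, n + 1) - 1.0) / (2.0 * n)
    return np.asarray(hdquantiles(x, prob=q))

x = np.random.default_rng(0).normal(size=100)  # a random sample
z = qmc_data(x)                                # the revised "QMC-data"
```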

7.3. Geometric Probability

Statistical simulation is widely used for problems in statistics with no analytic solution. For example, let $D$ be a domain in $R^d$ and $E$ be the output of a stochastic process working on $D$; one needs to find the distribution of the area or volume of $E$. Such simulations need a set of points that is uniformly scattered on $D$ and on any subarea of $D$, including $E$, and a random sample typically fails to provide such a point set. Fang and Wang [4] suggested applying QMC-RPs in two real-life case studies where $D$ is not a rectangle.

7.3.1. Case Study I: Area of Intersection between a Fixed Circle and Several Random Circles

Let $K$ be the unit circle with center at the origin $o = (0, 0)^T$. Suppose that $K_1, \dots, K_m$ are $m$ random circles centered at $o_1, \dots, o_m$ with given radii $r_1, \dots, r_m$, respectively. The centers $o_1, \dots, o_m$ are independent with $o_j \sim N_2(0, \sigma_j^2 I_2)$, where the $\sigma_j > 0$ are known. Let $S_m = K \cap (K_1 \cup \dots \cup K_m)$ be the overlapping region of $K$ and the union of the $m$ random circles, and denote by $A(S_m)$ the area of $S_m$. The goal is to find the distribution of $A(S_m)$. It is not difficult to find the distribution of the overlapping area with a single random circle; however, there is no analytic solution for $m \ge 2$. A simulation sketch is given below.
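Under our own illustrative choices of $m$, the radii, and the $\sigma_j$, the distribution of $A(S_m)$ can be simulated in Python by covering $K$ with a quasirandom net and counting the covered fraction:

```python
import numpy as np
from scipy.stats import qmc

m, radii, sigmas = 3, np.array([0.5, 0.6, 0.7]), np.array([0.8, 1.0, 1.2])
rng = np.random.default_rng(0)

# A quasirandom net on the unit disk K: area-preserving polar transform
# of a two-dimensional Sobol set.
u = qmc.Sobol(d=2, scramble=True, seed=1).random(2**12)
rad, ang = np.sqrt(u[:, 0]), 2 * np.pi * u[:, 1]
net = np.column_stack((rad * np.cos(ang), rad * np.sin(ang)))

def area_Sm(centers):
    # Estimate A(S_m): pi times the fraction of the net covered by the union.
    covered = np.zeros(len(net), dtype=bool)
    for c, r in zip(centers, radii):
        covered |= ((net - c) ** 2).sum(axis=1) <= r ** 2
    return np.pi * covered.mean()

# Empirical distribution of A(S_m) over random draws of the centers o_j.
draws = [area_Sm(rng.normal(size=(m, 2)) * sigmas[:, None]) for _ in range(1000)]
```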

7.3.2. Case Study II: Area of Random Belts with a Fixed Width Covering a Unit Sphere

Let $S_3$ be the unit sphere in $R^3$ centered at the origin $(0, 0, 0)^T$, which can be covered by random belts with a fixed width. Each belt is symmetric about a unit circle of $S_3$ centered at the origin. Let $n \in R^3$ be a unit normal direction; then the belt on the surface of $S_3$ with thickness $2h$ can be written as
$$G_h(n) = \{w \in S_3 : |w^T n| \le h\},$$
and the area of the belt is $A(G_h(n)) = 4\pi h$. The original problem arises in steel rolling and aims to find the distribution of a roller's lifespan and ways to increase it.
Neither case study has an analytic solution. For a statistical simulation, an NT-net on the unit circle $K$ or on the unit sphere in $R^3$ is needed. Fang and Wang [4] employed statistical simulation over an NT-net on $S_3$; they found that QMC-RPs yield a design for sequential belts with a significantly longer roller life. Yang et al. [43] compared the estimation accuracy of QMC-RPs and MSE-RPs for constructing NT-nets on the unit circle and on $S_3$, respectively, and concluded that both work very well. A quick numerical check of the belt-area formula is sketched below.
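For instance, the belt-area formula $A(G_h(n)) = 4\pi h$ is easy to verify with a quasi-uniform net on the sphere; the Fibonacci net below is our own simple stand-in for a proper NT-net:

```python
import numpy as np

def fibonacci_sphere(N):
    # A quasi-uniform net on the unit sphere (a stand-in for an NT-net).
    i = np.arange(N)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i   # golden-angle longitudes
    z = 1.0 - (2.0 * i + 1.0) / N            # equal-area latitudes
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

h, n = 0.1, np.array([0.0, 0.0, 1.0])
w = fibonacci_sphere(100_000)
frac = np.mean(np.abs(w @ n) <= h)      # fraction of the sphere inside the belt
print(4 * np.pi * frac, 4 * np.pi * h)  # estimated vs exact belt area
```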

8. Concluding Remarks

The bootstrap method, originally proposed by Efron [1], has found wide applications in statistical theory and practice. This method involves drawing random samples from the empirical distribution, which serves as an approximation to the population distribution F ( x ) . However, due to the inherent randomness of these samples, the bootstrap method has certain limitations. To overcome this, a natural solution is to construct support points called RPs that offer a more representative characterization of the distribution F ( x ) compared to random samples.
This paper discusses three types of RPs: MC-RPs, QMC-RPs, and MSE-RPs, along with their respective approximations. Theoretical foundations and practical applications demonstrate that all of these RPs can be effectively and efficiently utilized for statistical inferences, including estimation and hypothesis testing. In many case studies, MSE-RPs and/or QMC-RPs have shown better performance compared to MC-RPs. QMC-RPs have been widely applied in various fields, including numerical integration in high dimensions, financial mathematics, experimental design, and geometric probability. This paper provides a comprehensive review of the theory and applications of MSE-RPs, with particular emphasis on recent developments. MSE-RPs exhibit significant potential for applications in statistics, financial mathematics, and big data analysis.
However, in the theory of MSE- and QMC-RPs, several open questions remain. For instance, although several new RP construction methods have been proposed, these methods still lack solid theoretical justifications and practical applications. Further research is needed to address these gaps and advance the field.
We are creating a website (https://fst.uic.edu.cn/isci_en/index.htm, accessed on 19 June 2023) where readers will be able to access fundamental knowledge about RPs and MSE-RPs for various univariate distributions. Additionally, we are in the process of incorporating R software that generates MSE-RPs into the website, which will be available soon. While there are existing monographs such as "Foundations of Quantization for Probability Distributions" by Graf and Luschgy [18] and "Numerical Probability: An Introduction with Applications to Finance" by Pagès [7], these works do not specifically focus on applications in statistical inference. Therefore, there is a need for a new monograph that covers recent advancements in both theory and applications. This review article can serve as a valuable resource, providing relevant content and establishing connections for a potential new book in this area.

Author Contributions

For this review article, K.-T.F. prepared the draft and J.P. produced the final version. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College [grant number 2022B1212010006], and by Guangdong Higher Education Upgrading Plan (2021–2025) UIC [grant number R0400001-22].

Data Availability Statement

Not applicable.

Acknowledgments

We are grateful for the valuable comments provided by the four referees, which have greatly improved the quality of the paper. We also thank Yinan Li, a doctoral student at UIC, for her great help; she has made contributions to this field and published two papers during her Ph.D. study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26. [Google Scholar] [CrossRef]
  2. Hua, L.K.; Wang, Y. Applications of Number Theory to Numerical Analysis; Springer: Berlin/Heidelberg, Germany; Science Press: Beijing, China, 1981. [Google Scholar]
  3. Niederreiter, H. Random Number Generation and Quasi-Monte Carlo Methods; Society for Industrial and Applied Mathematics (SIAM): Philadelphia, PA, USA, 1992. [Google Scholar]
  4. Fang, K.T.; Wang, Y. Number-Theoretic Methods in Statistics; Chapman and Hall: London, UK, 1994. [Google Scholar]
  5. Fang, K.T.; Liu, M.Q.; Qin, H.; Zhou, Y.D. Theory and Application of Uniform Experimental Designs; Science Press: Beijing, China; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  6. Fang, K.T.; Wang, Y.; Bentler, P.M. Some applications of number-theoretic methods in statistics. Stat. Sci. 1994, 9, 416–428. [Google Scholar] [CrossRef]
  7. Pagès, G. Numerical Probability: An Introduction with Applications to Finance; Universitext; Springer: Cham, Switzerland, 2018. [Google Scholar]
  8. Cox, D.R. Note on grouping. J. Am. Stat. Assoc. 1957, 52, 543–547. [Google Scholar] [CrossRef]
  9. Max, J. Quantizing for minimum distortion. IRE Trans. Inform. Theory 1960, IT-6, 7–12. [Google Scholar] [CrossRef]
  10. Fang, K.T.; He, S. The Problem of Selecting a Given Number of Representative Points in a Normal Population and a Generalized Mill’s Ratio; Technical Report No. 5; Department of Statistics, Stanford University: Stanford, CA, USA, 1982. [Google Scholar]
  11. Fang, K.T. Application of the theory of the conditional distribution for the standardization of clothes. Acta Math. Appl. Sin. 1976, 2, 62–74. (In Chinese) [Google Scholar]
  12. Iyengar, S.; Solomon, H. Selecting representative points in normal populations. In Recent Advances in Statistics: Papers in Honor of Herman Chernoff on His 60th Birthday; Rizvi, M.H., Rustagi, J., Siegmund, D., Eds.; Academic Press: New York, NY, USA, 1983; pp. 579–591. [Google Scholar]
  13. Flury, B.A. Principal points. Biometrika 1990, 77, 33–41. [Google Scholar] [CrossRef]
  14. Flury, B.A. Estimation of principal points. J. R. Stat. Soc. Ser. C Appl. Stat. 1993, 42, 139–151. [Google Scholar] [CrossRef]
  15. Flury, B.A.; Tarpey, T. Representing a large collection of curves: A case for principal points. Am. Stat. 1993, 47, 304–306. [Google Scholar]
  16. Tarpey, T.; Petkova, E. Principal point classification: Applications to differentiating drug and placebo responses in longitudinal studies. J. Stat. Plan. Inference 2010, 140, 539–550. [Google Scholar] [CrossRef]
  17. Gray, R.M.; Neuhoff, D.L. Quantization. IEEE Trans. Inf. Theory 1998, 44, 2325–2383. [Google Scholar] [CrossRef]
  18. Graf, S.; Luschgy, H. Foundations of Quantization for Probability Distributions; Lecture Notes in Math. 1730; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
  19. Fei, R.C. Statistical relationship between the representative point and the population. J. Wuxi Inst. Light Ind. 1991, 10, 78–81. (In Chinese) [Google Scholar]
  20. Tarpey, T.; Flury, B. Self-consistency: A fundamental concept in statistics. Stat. Sci. 1996, 11, 229–243. [Google Scholar]
  21. Sharma, D.K. Design of absolutely optimal quantizers for a wide class of distortion measures. IEEE Trans. Inform. Theory 1978, IT-24, 693–702. [Google Scholar] [CrossRef]
  22. Gu, X.; Mathew, T. Some characterizations of symmetric two-principal points. J. Stat. Plan. Inference 2001, 98, 29–37. [Google Scholar] [CrossRef]
  23. Zoppè, A. Principal points of univariate continuous distributions. Stat. Comput. 1995, 5, 127–132. [Google Scholar] [CrossRef]
  24. Fleischer, P.E. Sufficient conditions for achieving minimum distortion in a quantizer. IEEE Int. Conv. Rec. 1964, 1, 104–111. [Google Scholar]
  25. Trushkin, A.V. Sufficient conditions for uniqueness of a locally optimal quantizer for a class of convex error weighting functions. IEEE Trans. Inform. Theory 1982, IT-28, 187–198. [Google Scholar] [CrossRef]
  26. Pearson, K. Contribution to the theory of mathematical evolution. Philos. Trans. R. Soc. Lond. 1894, 186, 71–110. [Google Scholar]
  27. Li, Y.; Fang, K.T.; He, P.; Peng, H. Representative points from a mixture of two normal distributions. Mathematics 2022, 10, 3952. [Google Scholar] [CrossRef]
  28. Zador, P.L. Asymptotic quantization error of continuous signals and the quantization dimension. IEEE Trans. Inform. Theory 1982, 28, 139–149. [Google Scholar] [CrossRef]
  29. Su, Y. Asymptotically optimal representative points of bivariate random vectors. Stat. Sin. 2000, 10, 559–575. [Google Scholar]
  30. Embrechts, P.; Klüppelberg, C.; Mikosch, T. Modelling Extremal Events for Insurance and Finance; Springer: Berlin/Heidelberg, Germany, 1997. [Google Scholar]
  31. Foss, S.; Korshunov, D.; Zachary, S. An Introduction to Heavy-Tailed and Subexponential Distributions; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  32. Xu, L.H.; Fang, K.T.; Pan, J.X. Limiting behavior of the gap between the largest two representative points of statistical distributions. Commun. Stat. Theory Methods 2021, 52, 3290–3313. [Google Scholar] [CrossRef]
  33. Xu, L.H.; Fang, K.T.; He, P. Representative points of the exponential distribution. Stat. Pap. 2021, 63, 197–223. [Google Scholar] [CrossRef]
  34. Pollard, D. Quantization and the method of k-means. IEEE Trans. Inform. Theory 1982, IT-28, 199–205. [Google Scholar] [CrossRef]
  35. Lloyd, S.P. Least squares quantization in PCM. IEEE Trans. Inform. Theory 1982, IT-28, 129–137. [Google Scholar] [CrossRef]
  36. Stampfer, E.; Stadlober, E. Methods for estimating principal points. Commun. Stat. Simul. Comput. 2002, 31, 261–277. [Google Scholar] [CrossRef]
  37. Fu, H.H. The problem of selecting a specified number of representative points from a gamma population. J. China Univ. Min. Technol. 1985, 4, 107–117. (In Chinese) [Google Scholar]
  38. Ke, X.; Wang, S.; Zhou, M.; Ye, H. New approaches on parameter estimation of the gamma distribution. Mathematics 2023, 11, 927. [Google Scholar] [CrossRef]
  39. Zhou, M.; Wang, W.J. Representative points of Student’s tn-distribution and their applications in statistical simulation. Acta Math. Appl. Sin. 2016, 39, 620–640. (In Chinese) [Google Scholar]
  40. Fei, R.C. The problem of selecting representative points from population. Comm. Appl. Math. Comput. 1992, 16, 94–96. (In Chinese) [Google Scholar]
  41. Fei, R.C. The problem of selecting representative points in pearson distributions population. J. Wuxi Inst. Light Ind. 1990, 9, 74–78. (In Chinese) [Google Scholar]
  42. Fang, K.T.; Zhou, M.; Wang, W.J. Applications of the representative points in statistical simulations. Sci. China Ser. A 2014, 57, 2609–2620. [Google Scholar] [CrossRef]
  43. Yang, J.; He, P.; Fang, K.T. Three kinds of discrete approximations of statistical multivariate distributions and their applications. J. Multivar. Anal. 2021, 188, 104829. [Google Scholar] [CrossRef]
  44. Shao, J.; Tu, D. The Jackknife and Bootstrap; Springer Series in Statistics; Springer: New York, NY, USA, 1995. [Google Scholar]
  45. Xu, L.H.; Li, Y.; Fang, K.T. The resampling method via representative points. Stat. Pap. 2023; submitted. [Google Scholar]
  46. Fang, K.T.; Kotz, S.; Ng, K.W. Symmetric Multivariate and Related Distributions; Chapman and Hall: London, UK; New York, NY, USA, 1990. [Google Scholar]
  47. Tarpey, T.; Li, L.; Flury, B. Principal points and self-consistent points of elliptical distributions. Ann. Stat. 1995, 23, 103–112. [Google Scholar] [CrossRef]
  48. Tarpey, T. Self-consistent patterns for symmetric multivariate distributions. J. Classif. 1998, 15, 57–79. [Google Scholar] [CrossRef]
  49. Johnson, M.E. Multivariate Statistical Simulation; Wiley: New York, NY, USA, 1987. [Google Scholar]
  50. Linde, Y.; Buzo, A.; Gray, R. An algorithm for vector quantizer design. IEEE Trans. Commun. 1980, COM-28, 84–95. [Google Scholar] [CrossRef]
  51. Fang, K.T.; Yuan, K.H.; Bentler, P.M. Applications of number-theoretic methods to quantizers of elliptically contoured distributions. Multivar. Anal. Appl. IMS Lect. Notes-Monogr. Ser. 1994, 24, 211–225. [Google Scholar]
  52. Liang, J.; He, P.; Yang, J. Testing Multivariate Normality Based on t-Representative Points. Axioms 2022, 11, 587. [Google Scholar] [CrossRef]
  53. Wang, S.; Liang, J.; Zhou, M.; Ye, H. Testing Multivariate Normality Based on F-Representative Points. Mathematics 2022, 10, 4300. [Google Scholar] [CrossRef]
  54. Jiang, J.J.; He, P.; Fang, K.T. An interesting property of the arcsine distribution and its applications. Stat. Probab. Lett. 2015, 105, 88–95. [Google Scholar] [CrossRef]
  55. Zhou, Y.D.; Fang, K.T. FM Criterion in representative points. Sci. Sin. Math. 2019, 49, 1009–1020. (In Chinese) [Google Scholar]
  56. Mak, S.; Joseph, R. Support points. Ann. Stat. 2018, 46, 2562–2592. [Google Scholar] [CrossRef]
  57. Röver, C.; Friede, T. Discrete approximation of a mixture distribution via restricted divergence. J. Comput. Graph. Stat. 2017, 26, 217–222. [Google Scholar] [CrossRef]
  58. Yu, F. Uniqueness of principal points with respect to p-order distance for a class of univariate continuous distribution. Stat. Probab. Lett. 2022, 183, 109341. [Google Scholar] [CrossRef]
  59. Li, Y.; Fang, K.T. A New Approach to Parameter Estimation of Mixture of Two Normal Distributions. Commun. Stat. Theory Methods 2022, 1–27. [Google Scholar] [CrossRef]
  60. Harrell, F.; Davis, C.E. A new distribution-free quantile estimator. Biometrika 1982, 69, 635–640. [Google Scholar] [CrossRef]
  61. Chen, J. Optimal rate of convergence for finite mixture models. Ann. Statist. 1995, 23, 221–233. [Google Scholar] [CrossRef]
  62. Chen, J. Consistency of the MLE under Mixture Models. Stat. Sci. 2017, 32, 47–63. [Google Scholar] [CrossRef]
  63. Qi, Z.F.; Zhou, Y.D.; Fang, K.T. Representative points for location-biased datasets. Commun. Stat. Simul. Comput. 2019, 48, 458–471. [Google Scholar] [CrossRef]
  64. Wang, S.R.; Fang, K.T.; Ye, H.J. A new bias-corrected estimator method in extreme value distributions with small sample size. J. Stat. Comput. Simul. 2022, 92, 3862–3884. [Google Scholar] [CrossRef]
Figure 1. Density estimators by the four RPs, k = 31.
Table 1. Some log-concave densities.
Distribution | Kernel of $p(x)$ | Condition
Normal | $\exp\{-(x-\mu)^2 / (2\sigma^2)\}$ | $\sigma > 0$
$t$-distribution | $(1 + x^2/v)^{-(v+1)/2}$ | $v \ge 3$
Gamma | $x^{\alpha-1} e^{-\beta x}$ | $\alpha \ge 1, \beta > 0$
Beta | $x^{a-1} (1-x)^{b-1}$ | $a \ge 1, b \ge 1$
Logistic | $e^{-(x-\mu)/\sigma} / [1 + e^{-(x-\mu)/\sigma}]^2$ | $\sigma > 0$
Gumbel | $e^{-(x-\mu)/\sigma} \exp\{-e^{-(x-\mu)/\sigma}\}$ | $\sigma > 0$
Weibull | $x^{\alpha-1} e^{-\beta x^\alpha}$ | $\alpha \ge 1, \beta > 0$
Table 2. Limit behavior of $\Delta_k$ ($k \to \infty$) for some distributions on $R$.
Distribution | Limiting value of $\Delta_k$
Normal distribution | 0
Logistic distribution | $2\beta$
Student's $t$ distribution | $+\infty$
Table 3. Limit behavior of $\Delta_k$ ($k \to \infty$) for some distributions on $R_+$.
Distribution | Limiting value of $\Delta_k$
Weibull distribution ($m > 1$) | 0
Weibull distribution ($0 < m < 1$) | $+\infty$
Gamma distribution | $2\beta$
Exponential distribution | $2\beta$, for all $n \ge 2$
F-distribution | $+\infty$
Beta prime distribution | $+\infty$
Log-normal distribution | $+\infty$
Inverse Gaussian distribution | $2\mu^2\sigma$
Table 4. $L_2$-distance between the population density and approximate densities by 4 methods.
k | MC | RMC | QMC | MSE
15 | 0.0676 | 0.0453 | 0.0171 | 0.0151
30 | 0.0674 | 0.0316 | 0.0129 | 0.0126
35 | 0.0615 | 0.0284 | 0.0124 | 0.0122
Table 5. Density functions of some useful ECDs.
Type | Density function $g(x)$ in $R^d$
Kotz type | $g(x) = c (x^T x)^{N-1} \exp[-r (x^T x)^s]$, $r, s > 0$, $2N + d > 2$
Multinormal | $g(x) = c \exp(-\frac{1}{2} x^T x)$
Pearson type VII | $g(x) = c (1 + x^T x / s)^{-N}$, $N > d/2$, $s > 0$
Multivariate $t$ | $g(x) = c (1 + x^T x / s)^{-(d+q)/2}$, $q > 0$ an integer, $s > 0$
Pearson type II | $g(x) = c (1 - x^T x)^q$, $q > 0$
Logistic | $g(x) = c \exp(-x^T x) / [1 + \exp(-x^T x)]^2$