Article

Distance-Based Estimation Methods for Models for Discrete and Mixed-Scale Data

1 Department of Biostatistics, University at Buffalo, Buffalo, NY 14214, USA
2 Head of Oncology Data Science, AstraZeneca PLC, Gaithersburg, MD 20878, USA
* Author to whom correspondence should be addressed.
Entropy 2021, 23(1), 107; https://doi.org/10.3390/e23010107
Submission received: 5 December 2020 / Revised: 12 January 2021 / Accepted: 12 January 2021 / Published: 14 January 2021

Abstract

Pearson residuals aid the task of identifying model misspecification because they compare the model estimated from the data with the model assumed under the null hypothesis. We present different formulations of the Pearson residual system that account for the measurement scale of the data and study their properties. We further concentrate on the case of mixed-scale data, that is, data measured on both categorical and interval scales. We study the asymptotic properties and the robustness of minimum disparity estimators obtained in the case of mixed-scale data and exemplify the performance of the methods via simulation.


1. Introduction

Minimum disparity estimation has been studied extensively in models where the scale of the data is either interval or ratio (Beran [1], Basu and Lindsay [2]). It has also been studied in the discrete outcomes case. Specifically, when the response variable is discrete and the explanatory variables are continuous, Pardo et al. [3] introduced a general class of distance estimators based on ϕ-divergence measures, the minimum ϕ-divergence estimators, and studied their asymptotic properties. These estimators can be viewed as an extension/generalization of the Maximum Likelihood Estimator (MLE). Pardo et al. [4] used the minimum ϕ-divergence estimator in a ϕ-divergence statistic to perform goodness-of-fit tests in logistic regression models, while Pardo and Pardo [5] extended these works to testing problems in generalized linear models with binary data.
The case where data are measured on discrete scale (either on ordinal or generally categorical scale) has also attracted the interest of other researchers. For instance, Simpson [6] demonstrated that minimum Hellinger distance estimators fulfill desirable robustness properties and for this reason can be effective in the analysis of count data prone to outliers. Simpson [7] also suggested tests based on the minimum Hellinger distance for parametric inference which are robust as the density of the (parametric) model can be nonparametrically estimated. In contrast, Markatou et al. [8] used weighted likelihood equations to obtain efficient and robust estimators in discrete probability models and applied their methods to logistic regression, whereas Basu and Basu [9] considered robust penalized minimum disparity estimators for multinomial models with good small sample efficiency.
Moreover, Gupta et al. [10], Martín and Pardo [11] and Castilla et al. [12] used the minimum ϕ-divergence estimator to provide solutions to testing problems in polytomous regression models. Working in a similar fashion, Martín and Pardo [13] studied the properties of the family of ϕ-divergence estimators for log-linear models with linear constraints under multinomial sampling in order to identify potential associations between variables in multi-way contingency tables. Pardo and Martín [14] presented an overview of works associated with contingency tables of symmetric structure on the basis of minimum ϕ-divergence estimators and minimum ϕ-divergence test statistics. Additional works include Pardo and Pardo [15] and Pardo et al. [16]. Alternative power divergence measures have been introduced by Basu et al. [17].
The class of f or ϕ divergences was originally introduced by Csiszár [18]. The structural characteristics of this class and their relationship to the concepts of efficiency and robustness were studied, for the case of discrete probability models, by Lindsay [19]. Basu and Lindsay [2] studied the properties of estimators derived by minimizing f divergences between continuous models and presented examples showing the robustness results of these estimates. We also note that Tamura and Boos [20] studied the minimum Hellinger distance estimation for multivariate location and covariance. Additionally, formal robustness results were presented in Markatou et al. [8,21] in connection with the introduction of weighted likelihood estimation.
If $G$ is a real-valued, convex function defined on $[0, \infty)$ such that $G(u)$ converges to 0 as $u \to 1$, $0 \cdot G(0/0) = 0$, and $0 \cdot G(u/0) = u\, G_\infty$, with $G_\infty = \lim_{u \to \infty} G(u)/u$, the class of $\phi$-divergences is defined as
$$\rho(\tau, m_{\beta_0}) = \sum_t G\!\left(\frac{\tau(t)}{m_{\beta_0}(t)}\right) m_{\beta_0}(t),$$
where $\tau(\cdot)$, $m_{\beta_0}(\cdot)$ are two probability models. Notice that we define $\rho(\tau, m_{\beta_0})$ on discrete probability models first, where $\mathcal{T} = \{0, 1, 2, \ldots, T\}$ is a discrete sample space, $T$ possibly infinite, and $m_{\beta_0}(t) \in \mathcal{M} = \{m_\beta(t) : \beta \in B\}$, with $B \subseteq \mathbb{R}^d$ the parameter space. Furthermore, different forms of the function $G(u)$ provide different statistical distances or divergences.
We can change the argument of the function $G$ from $\frac{\tau(t)}{m_{\beta_0}(t)}$ to $\frac{\tau(t)}{m_{\beta_0}(t)} - 1$. Then, $G$ is a function of the Pearson residual, which is defined as $\delta(t) = \frac{\tau(t)}{m_{\beta_0}(t)} - 1$ and takes values in $[-1, \infty)$. If the measurement scale is interval/ratio, then the Pearson residuals are modified to reflect and adjust for the discrepancy of scale between the data, which are always discrete, and the assumed continuous probability model (see Basu and Lindsay [2]).
The Pearson residual is used by Lindsay [19], Basu and Lindsay [2] and Markatou et al. [8,21] in investigating the robustness of the minimum disparity and weighted likelihood estimators, respectively. This residual system allows one to identify distributional errors. If, in the equation of the Pearson residual, we replace $\tau(t)$ with its best nonparametric representative $d(t)$, the proportion of observations in a sample with value $t$, then $\delta(t) = \frac{d(t)}{m_{\beta_0}(t)} - 1$. We note that the Pearson residuals are called so because $n \sum_t \delta^2(t)\, m(t)$ is Pearson's chi-squared statistic. Furthermore, these residuals are not symmetric, since they take values in $[-1, \infty)$, and they are not standardized to have identical variances.
How does robustness fit into this picture? In the robustness literature, there is a denial of the model’s truth. Following this logic, the framework based on disparities starts with goodness-of-fit by identifying a measure that assesses whether the model fits the data adequately. Then, we examine whether this measure of adequacy is robust and in what sense. A fundamental tool that assists in measuring the degree of robustness is the Pearson residual, because it measures model misspecification. That is, Pearson residuals provide information about the degree to which the specified model m β fits the data. In this context, outliers are defined as those data points that have a low probability of occurrence under the hypothesized model. Such probabilistic outliers are called surprising observations (Lindsay [19]). Furthermore, the robustness of estimators obtained via minimization of the divergence measures we discuss here is indicated by the shape of the associated Residual Adjustment Function (RAF), a concept that is reviewed in Section 2. Of note is that in contingency table analysis, the generalized residual system is used for examination of sources of error in models for contingency tables, see, for example, Haberman [22], Haberman and Sinharay [23]. The concept of generalized residuals in the case of generalized linear models is discussed, for example, in Pierce and Schafer [24].
Data sets may comprise data measured on both categorical (ordinal or nominal) and interval/ratio scales. We can think of these data as realizations of discrete and continuous random variables, respectively. Examples of data sets that include mixed-scale data are electronic health records containing diagnostic codes (discrete) and laboratory measurements (e.g., blood pressure and alanine aminotransferase (ALT) measurements on the interval/ratio scale) and marketing data (customer records include income and gender information). Additional examples include data from developmental toxicology (Aerts et al. [25]), where fetal data from laboratory animals include binary, categorical and continuous outcomes. In this context, the joint density of the discrete and continuous random variables is given as $m_\beta(x, y) = f_{\beta_1}(y|x)\, g_{\beta_2}(x)$, where $\beta^T = (\beta_1^T, \beta_2^T)$ collects the parameter vectors indexing the conditional density of $y$ given $x$ and the probability density function of $x$.
Work on the analysis of mixed-scale data is complicated by the fact that it is difficult to identify suitable joint probability distributions to describe both measurement scales of the data, although a number of ad hoc methods for the analysis of mixed-scale data have been used in applications. Olkin and Tate [26] proposed multivariate correlation models for mixed-scale data. Copulas also provide an attractive approach to modeling the joint distribution of mixed-scale data, though copulas are less straightforward to implement, and there are subtle identifiability issues that complicate the specification of a model (Genest and Nešlehová [27]).
To formulate the joint distribution in the mixed-scale variables case, one can specify either the marginal distribution of the discrete variables together with the conditional distribution of the continuous variables given the discrete ones, or the marginal distribution of the continuous variables together with the conditional distribution of the discrete variables given the continuous variables. Of note here is that the direction of factorization generally yields distinct model interpretations and results. The first approach has received much attention in the literature, in the context of the analysis of data with mixtures of categorical and continuous variables. Here, the continuous variables follow different multivariate normal distributions for each possible setting of the categorical variable values; the categorical variables then follow an arbitrary marginal multinomial distribution. This model is known in the literature as the conditional Gaussian distribution model and is central in the discussion of graphical association models with mixed-scale variables (Lauritzen and Wermuth [28]). A very special case of this model is used in our simulations.
In this paper, we develop robust methods for mixed-scale data. Specifically, Section 2 reviews basic concepts in minimum disparity estimation, Section 3 defines Pearson residuals for data measured in discrete, interval/ratio and mixed-scale, and studies their properties. Section 4 establishes the optimization problem for obtaining estimators of the model parameters, while Section 5 and Section 6 establish the robustness and asymptotic properties of these estimators. Finally, Section 7 presents simulations showing the performance of these methods and Section 8 offers discussions. The Appendix A includes proofs of the theoretical results.

2. Concepts in Minimum Disparity Estimation

Beran [1] introduced a robust method to estimate the parameters of a statistical model, called minimum Hellinger distance estimation. The parameter estimator is obtained by minimizing the Hellinger distance between a parametric model density and a nonparametric density estimator. Lindsay [19] extended the aforementioned method to incorporate many other distances, and introduced the concept of the residual adjustment function in the context of minimum disparity estimation. The Minimum Distance Estimators (MDE) of a parameter vector $\beta$ are obtained by minimizing over $\beta$ the distance (or disparity)
$$\rho(d, m_\beta) = \sum_x G(\delta(x))\, m_\beta(x), \qquad (1)$$
where the assumed model $m_\beta$ is a probability mass function. When the model $m_\beta$ is continuous, the MDE of the parameter vector $\beta$ is obtained by minimizing over $\beta$ the quantity
$$\rho(f^*, m_\beta^*) = \int G(\delta(x))\, m_\beta^*(x)\, dx, \qquad (2)$$
where $f^*(x) = \int k(x; t, h)\, d\hat{F}(t)$, $m_\beta^*(x) = \int k(x; t, h)\, m_\beta(t)\, dt$, $\hat{F}$ is the empirical distribution function obtained from the data, and $k$ is a smooth family of kernel functions; one example is the normal density with mean $t$ and standard deviation $h$. Furthermore, $\delta(x)$ is the Pearson residual defined as $\delta(x) = f^*(x)/m_\beta^*(x) - 1$. Lindsay [19] and Basu and Lindsay [2] discuss the efficiency and robustness properties of these estimators.
If $G(\delta) = \frac{(1 + \delta)^{\lambda + 1} - 1}{\lambda(\lambda + 1)}$, we obtain the class of power divergence measures. Notice that we have $G(0) = 0$. Different values of $\lambda$ offer different measures; for example, when $\lambda = -2$ we obtain Neyman's chi-squared divided by 2 measure, while $\lambda = -1, -1/2$ return the Kullback-Leibler and Hellinger distances, respectively.
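To make the family concrete, the following R sketch (the helper name pd_disparity is ours, not from the paper) evaluates the power divergence between an observed and a hypothesized probability vector; the limiting cases $\lambda \to 0$ and $\lambda \to -1$ are handled separately by continuity.

```r
# A minimal sketch (hypothetical helper name): the Cressie-Read power divergence
# between an empirical pmf d and a model pmf m, both strictly positive vectors.
pd_disparity <- function(d, m, lambda) {
  if (abs(lambda) < 1e-10) return(sum(d * log(d / m)))      # likelihood disparity
  if (abs(lambda + 1) < 1e-10) return(sum(m * log(m / d)))  # Kullback-Leibler
  sum(d * ((d / m)^lambda - 1)) / (lambda * (lambda + 1))
}

d <- c(0.25, 0.25, 0.30, 0.20)   # observed proportions
m <- rep(0.25, 4)                # hypothesized uniform model
pd_disparity(d, m, -1/2)         # twice-squared Hellinger distance
```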
Under appropriate conditions, (1) and (2) can be written as
$$\sum_x A(\delta(x))\, \nabla m_\beta(x) = 0,$$
or
$$\int A(\delta(x))\, \nabla m_\beta^*(x)\, dx = 0,$$
where $A(\delta) = (\delta + 1)\, G'(\delta) - G(\delta)$, $\nabla$ denotes the gradient with respect to $\beta$, and the prime denotes differentiation with respect to $\delta$.
Lindsay [19] has shown that the structural characteristics of the function A ( δ ) play an important role in the robustness and efficiency properties of these methods. Furthermore, without loss of generality, we can center and rescale A ( δ ) , and define the RAF as follows.
Definition 1
(Lindsay [19]). Let $A(\delta)$ be an increasing and twice differentiable function on $[-1, \infty)$ defined as
$$A(\delta) = (\delta + 1)\, G'(\delta) - G(\delta), \qquad A(0) = 0, \qquad A'(0) = 1,$$
where $G$ is strictly convex and twice differentiable with respect to $\delta$ on $[-1, \infty)$ with $G(0) = 0$. Then, $A(\delta)$ is called the residual adjustment function.
Remark 1.
Since $A'(\delta) = (1 + \delta)\, G''(\delta)$, the second-order differentiability of $G$, in addition to its strict convexity, implies that $A(\delta)$ is a strictly increasing function of $\delta$ on $[-1, \infty)$. Thus, we can define $A(\delta)$ as above without changing the solutions of the aforementioned estimating equations in the discrete case (see Lindsay [19], p. 1089). In the continuous case, such standardization does not change the estimating properties of the associated disparities (see Basu and Lindsay [2], p. 687).
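As an illustration, for the power divergence family the standardized RAF has the closed form $A(\delta) = \frac{(1+\delta)^{\lambda+1} - 1}{\lambda + 1}$ (Lindsay [19]). The R sketch below (our own helper, not code from the paper) compares the maximum likelihood RAF $A(\delta) = \delta$ ($\lambda = 0$) with the Hellinger RAF ($\lambda = -1/2$), whose concave shape downweights large positive residuals.

```r
# Sketch: standardized RAF of the power divergence family,
# A(delta) = ((1 + delta)^(lambda + 1) - 1) / (lambda + 1); A(0) = 0, A'(0) = 1.
raf_pd <- function(delta, lambda) {
  if (abs(lambda + 1) < 1e-10) return(log(1 + delta))  # lambda = -1 (Kullback-Leibler)
  ((1 + delta)^(lambda + 1) - 1) / (lambda + 1)
}

curve(raf_pd(x, 0), from = -0.99, to = 5, ylab = "A(delta)")       # MLE: A(delta) = delta
curve(raf_pd(x, -1/2), from = -0.99, to = 5, add = TRUE, lty = 2)  # Hellinger: dampened
```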
Two fundamental and at the same time conflicting goals in robust statistics are robustness and efficiency. In the traditional literature on robustness, first-order efficiency is sacrificed and, instead, safety of the estimation or testing method against outliers is guaranteed. Here, one adheres to the notion that information about the robustness of a method is carried by the influence function. In our setting, using the influence function to characterize the robustness properties of the associated estimation procedures is misleading. Instead, the shape of the RAF, $A(\cdot)$, provides information about the extent to which our procedures can be characterized as robust. The interested reader is directed to Lindsay [19] for further discussion on this topic.

3. Pearson Residual Systems

In this section, we define various Pearson residuals, appropriate for the measurement scale of the data. We introduce our notation first.
Let ( y i , x i ) , i = 1 , 2 , , n be realizations from n independent and identically distributed random variables that follow a distribution with density m β ( x , y ) . Recall that we use the word density to denote a general probability function, independently of whether the random variables X , Y are discrete, continuous or mixed. In what follows, we define different Pearson residual systems that account for the measurement scale of the data and study their properties.
Case 1:Both X and Y are discrete.
In this case, the pairs $(y_i, x_i)$ follow a discrete probability mass function $m_\beta(x_i, y_i)$. Define the Pearson residual as
$$\delta(x, y) = \frac{n_{x,y}}{n\, m_\beta(y|x)\, \pi_x} - 1,$$
where $\pi_x = P(X = x) = g(x)$, and $n_{x,y}$ is the number of observations in the cell with $Y = y$ and $X = x$.
Note that this definition of the Pearson residual is nonparametric on the discrete support of X. In the case of regression, one can carry out a semiparametric argument to obtain the estimators of the vector β and π x .
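A small R sketch (helper name and toy numbers are ours) of this residual for a two-way table of counts:

```r
# Pearson residuals delta(x, y) = n_{x,y} / (n * m_beta(y|x) * pi_x) - 1,
# for a table of counts with rows indexing y and columns indexing x.
pearson_resid <- function(counts, m_y_given_x, pi_x) {
  n <- sum(counts)
  sweep(counts, 2, pi_x, "/") / (n * m_y_given_x) - 1
}

counts <- matrix(c(12, 8, 10, 10, 9, 11), nrow = 2)  # toy 2 x 3 table
m_y_given_x <- matrix(0.5, nrow = 2, ncol = 3)       # fair-coin conditional model
pearson_resid(counts, m_y_given_x, pi_x = rep(1/3, 3))
```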
We now establish that, under correct model specification, the residual δ ( x , y ) converges, almost surely, to zero.
Proposition 1.
When the model is correctly specified and as $n \to \infty$,
$$\delta(x, y) \xrightarrow{a.s.} 0.$$
Proof. 
Write
$$\delta(x, y) = \frac{n_{x,y}}{n\, m_\beta(y|x)\, \pi_x} - 1 = \frac{n_{x,y}}{n_x} \cdot \frac{n_x}{n} \cdot \frac{1}{m_\beta(y|x)\, \pi_x} - 1.$$
Then
$$\frac{n_x}{n} = \frac{\#\ \text{of observations in the sample equal to}\ x}{n} = \frac{1}{n} \sum_{i=1}^n I(x_i = x),$$
where I ( · ) is the indicator function. Furthermore,
$$E\left[\frac{1}{n} \sum_{i=1}^n I(X_i = x)\right] = P(X = x) < \infty,$$
and by the strong law of large numbers
$$\frac{n_x}{n} \xrightarrow[n \to \infty]{a.s.} E[I(X = x)] = P(X = x) = \pi_x.$$
Similarly,
$$\frac{n_{x,y}}{n_x} \xrightarrow{a.s.} m_\beta(y|x),$$
therefore
$$\delta(x, y) \xrightarrow[n \to \infty]{a.s.} 0$$
under correct model specification. □
Case 2:Y is continuous and X is discrete.
This is the case in some ANOVA models. We can still define the Pearson residual in this setting as
$$\delta(x, y) = \frac{f_n(y, x)}{m_\beta(y, x)} - 1,$$
where
$$f_n(y, x) = f_n^*(y|x)\, g(x) = \left[\int k(y; t, h)\, d\hat{F}_n(t|x)\right] \frac{n_x}{n}$$
and
$$m_\beta(y, x) = m_\beta^*(y|x)\, g(x) = \left[\int k(y; t, h)\, dM_\beta(t|x)\right] \pi_x.$$
Then,
$$\delta(x, y) = \frac{f_n^*(y|X = x)\, \frac{n_x}{n}}{m_\beta^*(y|X = x)\, \pi_x} - 1.$$
Proposition 2.
Assume the model is correctly specified and $k(y; t, h)$ is a continuous function. Then,
$$\delta(x, y) \xrightarrow[n \to \infty]{a.s.} 0.$$
Proof. 
Under the strong law of large numbers
$$\frac{n_x}{n} \xrightarrow[n \to \infty]{a.s.} \pi_x.$$
Under the correct model specification, continuity of the kernel function and the fact that F ^ n converges completely to F (implication of Glivenko-Cantelli theorem),
$$\lim_{n \to \infty} \int k(y; t, h)\, d\hat{F}_n(t|x) = \int k(y; t, h)\, dF(t|x) = \int k(y; t, h)\, dM_\beta(t|x) = m_\beta^*(y|x)$$
(extension of Helly-Bray lemma). Therefore,
$$\frac{\frac{n_x}{n}\, f_n^*(y|x)}{\pi_x\, m_\beta^*(y|x)} \xrightarrow{a.s.} \frac{\pi_x}{\pi_x} \cdot \frac{m_\beta^*(y|x)}{m_\beta^*(y|x)} = 1$$
and hence
$$\delta(x, y) = \frac{\frac{n_x}{n}\, f_n^*(y|x)}{\pi_x\, m_\beta^*(y|x)} - 1 \xrightarrow{a.s.} 1 - 1 = 0.$$
 □
Case 3:Y is continuous and X is continuous.
In this case, the pairs ( y i , x i ) follow a continuous probability distribution. The Pearson residual is then defined as
$$\delta(x, y) = \frac{f_n^*(y, x)}{m_\beta^*(y, x)} - 1,$$
where
$$f_n^*(x, y) = \int\!\!\int k(x, y; t_1, t_2)\, d\hat{F}_n(t_1, t_2), \qquad m_\beta^*(x, y) = \int\!\!\int k(x, y; t_1, t_2)\, m_\beta(t_1, t_2)\, dt_1\, dt_2.$$
As an example, we take the linear regression model with random carriers $X$ and errors $\epsilon_i \sim N(0, \sigma^2)$. Furthermore, assume that the random carriers follow a normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$. In this case, $y_i = x_i^T \beta + \epsilon_i$ and the quantities $z_i = (y_i - x_i^T \beta)/\sigma$ are independent, identically distributed random variables when $\beta$ represents the vector of true parameters. Hence, the $z_i$'s represent realizations of a random variable $Z$ that has a completely known density $f(z)$. Thus,
$$m_\beta(x, y) = m_\beta(z|x) \cdot g(x), \qquad z = (y - x^T \beta)/\sigma,$$
and hence
$$m_\beta^*(x, y) = m_\beta^*(y - x^T \beta\,|\,X = x)\, g^*(x), \qquad m_\beta^*(y - x^T \beta\,|\,X = x) = m_\beta^*(z|x) = \int k(z; t, h)\, dM_\beta(t|x), \qquad g^*(x) = \int k(x; t, h)\, g(t)\, dt.$$
The kernel $k(z; t, h)$ is selected so that it facilitates easy computation. Kernels that do not entail loss of information when they are used to smooth the assumed parametric model are called transparent kernels (Basu and Lindsay [2]). Basu and Lindsay [2] provide a formal definition of transparent kernels and an insightful discussion of why transparent kernels do not exhibit information loss when convoluted with the hypothesized model (see Section 3.1 of Basu and Lindsay [2]).
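For instance, the normal kernel is transparent for the normal model: convolving $N(\mu, \sigma^2)$ with a $N(\cdot\,; t, h)$ kernel yields $N(\mu, \sigma^2 + h^2)$, so the smoothed model stays within a normal family indexed by the same mean. A quick numerical check of this identity (our own sketch, not code from the paper):

```r
# Check: integral of dnorm(x, t, h) * dnorm(t, mu, sigma) dt should equal
# dnorm(x, mu, sqrt(sigma^2 + h^2)) for every x (convolution of normals).
mu <- 1; sigma <- 2; h <- 0.5
m_star <- function(x) {
  sapply(x, function(xi)
    integrate(function(t) dnorm(xi, t, h) * dnorm(t, mu, sigma), -Inf, Inf)$value)
}

x <- seq(-3, 5, by = 1)
max(abs(m_star(x) - dnorm(x, mu, sqrt(sigma^2 + h^2))))  # ~ 0 up to quadrature error
```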

4. Estimating Equations

In this section, we concentrate on Cases 1 and 2 presented in the previous section. We carefully outline the optimization problems and discuss the associated estimating equations for these two cases. The case where both $X$ and $Y$ are continuous has been discussed in the literature; see, for example, Markatou et al. [21].
Case 1:Both X and Y are discrete.
In this case, the minimum distance estimators of the parameter vector β and π x are obtained by solving the following optimization problem
$$\min_{\beta,\, \pi_x}\ \rho(d, m_\beta) \qquad (3)$$
subject to
$$\sum_x \pi_x = 1.$$
Optimization problem (3) is equivalent to the problem
$$\min \sum_{x,y} G(\delta(x, y))\, m_\beta(x, y)$$
subject to
$$\sum_x \pi_x = 1.$$
The class of $G$ functions that we use creates distances that belong to the family of $\phi$-divergences.
Proposition 3.
The estimating equations for β and π x are given as:
$$\sum_{x,y} w(\delta(x, y))\, n_{x,y}\, u(y|x; \beta) = 0, \qquad \sum_{x,y} w(\delta(x, y))\, n_{x,y} \left[\frac{I(X = x)}{\pi_x} - 1\right] = 0. \qquad (4)$$
The function $w(\delta(x, y))$ is a weight function, such that $0 \le w(\delta(x, y)) \le 1$, and it is defined as
$$w(\delta(x, y)) = \min\left\{\frac{[A(\delta(x, y)) + 1]_+}{\delta(x, y) + 1},\ 1\right\},$$
with $[\cdot]_+$ indicating the positive part of the function $A(\delta(x, y)) + 1$.
Proof. 
The main steps of the proof are provided in the Appendix A.1. □
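In R, the weight function follows directly from a given RAF; the sketch below (helper names ours) uses the Hellinger RAF $A(\delta) = 2(\sqrt{1+\delta} - 1)$ and shows how the weights shrink as the residual grows.

```r
# w(delta) = min{ [A(delta) + 1]_+ / (delta + 1), 1 }, for delta > -1.
weight_fn <- function(delta, A) {
  pmin(pmax(A(delta) + 1, 0) / (delta + 1), 1)
}

A_hell <- function(delta) 2 * (sqrt(1 + delta) - 1)  # Hellinger RAF
weight_fn(c(0, 1, 10, 100), A_hell)  # 1.000 0.914 0.512 0.189: outliers downweighted
```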
Remark 2.
1.
The above two estimating equations can be solved with respect to $\beta$ and $\pi_x$. In an iterative algorithm, we can solve the second equation in (4) explicitly for $\pi_x$ to obtain
$$\pi_x = \frac{\sum_y w(\delta(x, y))\, n_{x,y}}{\sum_{x,y} w(\delta(x, y))\, n_{x,y}}.$$
This means that if the model does not fit well any of the $y$ values observed at a particular $x$, the weight for this $x$ drops as well (see the sketch after this remark).
2.
When $A(\delta(x, y)) = \delta(x, y)$, the corresponding estimating equation for $\beta$ becomes $\sum_{x,y} n_{x,y}\, u(y|x; \beta) = 0$ and the MLE is obtained. This is because the corresponding weight function is $w(\delta(x, y)) = 1$. In this case, the estimating equations for the $\pi_x$'s become $\sum_{x,y} n_{x,y} \left[\frac{I(X = x)}{\pi_x} - 1\right] = 0$, the estimating equations for the MLEs of $\pi_x$.
3.
The Fisher consistency property of the function that introduces the estimates guarantees that the expectation of the corresponding estimating function is 0, under the correct model specification.
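A sketch (our naming) of the explicit $\pi_x$ step from point 1 of this remark, for the table of counts from the earlier sketch and a matching matrix of weights:

```r
# pi_x proportional to sum_y w(delta(x, y)) * n_{x,y}; counts and w share the
# y-by-x layout used earlier.
update_pi <- function(counts, w) {
  num <- colSums(w * counts)
  num / sum(num)
}

update_pi(counts, matrix(1, 2, 3))  # unit weights recover the MLE n_x / n
```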
Case 2:Y is continuous and X is discrete.
In this case, the estimates of the parameters β and π x are obtained by solving the following optimization problem
$$\min_{\beta,\, \pi_x} \sum_x \int G(\delta(x, y))\, m_\beta^*(y, x)\, dy \qquad (5)$$
subject to
$$\sum_x \pi_x = 1.$$
In general, $m_\beta^*(y, x) = m_\beta^*(y|x)\, \pi_x$; in the case where $y, x$ are independent, $m_\beta^*(y, x) = m_\beta^*(y)\, \pi_x$, and the optimization problem stated above is equivalent to
$$\min_{\beta,\, \pi_x} \sum_x \pi_x \int G(\delta(x, y))\, m_\beta^*(y)\, dy$$
subject to
$$\sum_x \pi_x = 1.$$
Proposition 4.
The estimating equations for β and π x in the case of independence of y , x are given as follows:
$$\sum_x \pi_x \int A(\delta(x, y))\, \nabla_\beta\, m_\beta^*(y)\, dy = 0, \qquad \sum_x \pi_x \int A(\delta(x, y)) \left[\frac{I(X = x)}{\pi_x} - 1\right] m_\beta^*(y)\, dy = 0,$$
where $A(\delta)$ is the residual adjustment function (RAF) that corresponds to the function $G$, and $G'(\delta)$ is the derivative of $G$ with respect to $\delta$.
Proof. 
Straightforward, after differentiating the Lagrangian with respect to β and π x . □
Case 3:Y is continuous and X is continuous.
In this case, we refer the reader to Basu and Lindsay [2].

5. Robustness Properties

Hampel et al. [29] and Hampel [30,31] define robust statistics as the “statistics of approximate parametric models”, and introduce one of the fundamental tools of robust statistics, the concept of the influence function, in order to investigate the behavior of a statistic T n expressed as a functional T ( G ) . The influence function is a heuristic tool with the intuitive interpretation of measuring the bias caused by an infinitesimal contamination at a point x on the estimate standardized by the mass of contamination. Its formal definition is as follows:
Definition 2.
The influence function of a functional T at the distribution F is given as
$$IF(x; T, F) = \lim_{t \to 0} \frac{T((1 - t)F + t\Delta_x) - T(F)}{t},$$
in those $x \in \mathcal{X}$ where the limit exists, $0 \le t \le 1$, and $\Delta_x$ is the Dirac measure defined as
$$\Delta_x(u) = \begin{cases} 1, & u = x, \\ 0, & u \neq x. \end{cases}$$
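For example, for the mean functional $T(F) = \int u\, dF(u)$, we have $T((1 - t)F + t\Delta_x) = (1 - t)\, T(F) + t x$, so that $IF(x; T, F) = x - T(F)$; the influence function is unbounded in $x$, reflecting the well-known non-robustness of the sample mean.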
If an estimator has a bounded influence function, it is considered robust to outliers, that is, to data points that lie far from the pattern set by the majority of the data. The effect of bounding the influence function is a sacrifice of efficiency; estimators with bounded influence functions, while not affected by outlying points, are not fully efficient under correct model specification.
Our goal in calculating the influence function is to show the full efficiency of the proposed estimators. That is, the influence function of the proposed estimators, under correct model specification, equals the influence function of the corresponding maximum likelihood estimators. In our context, robustness of the estimators is quantified by the associated RAFs (see Lindsay [19] and Basu and Lindsay [2]).
In what follows, we will derive the influence function of the estimators for the parameter vector β in the case where both y , x are discrete. Similar calculations provide the influence functions of estimators obtained under the remaining scenarios. To do so, we need to resort to the estimators’ functional form, denoted by β ϵ , with corresponding estimating equations
$$\sum_{s,t} w(\delta_\epsilon(s, t))\, u(t|s; \beta_\epsilon)\, d_\epsilon(s, t) = 0,$$
where $d_\epsilon(s, t) = (1 - \epsilon)\, d(s, t) + \epsilon\, \Delta_{x,y}(s, t)$. The influence function is then obtained by differentiating the aforementioned estimating equations with respect to $\epsilon$ and evaluating the derivative at $\epsilon = 0$.
Proposition 5.
The influence function of the β estimator is given by
$$\beta_0' = [A(d)]^{-1}\, B(x, y; d),$$
where
$$A(d) = \sum_{s,t} [\delta_0(s, t) + 1]\, w'(\delta_0(s, t))\, u(t|s; \beta_0)\, u^T(t|s; \beta_0)\, d(s, t) - \sum_{s,t} w(\delta_0(s, t))\, \nabla u(t|s; \beta_0)\, d(s, t),$$
$$B(x, y; d) = \sum_{s,t} \left[\frac{I(s = x, t = y)}{m_{\beta_0}(t|s)\, \pi_s} - \frac{d(s, t)}{m_{\beta_0}(t|s)\, \pi_s}\right] w'(\delta_0(s, t))\, u(t|s; \beta_0)\, d(s, t) - \sum_{s,t} w(\delta_0(s, t))\, u(t|s; \beta_0)\, d(s, t) + w(\delta_0(x, y))\, u(y|x; \beta_0),$$
with $u(t|s; \beta) = \nabla_\beta \ln m_\beta(t|s)$, $w'$ the derivative of $w$ with respect to $\delta$, and the subscript 0 indicating evaluation at the parametric model.
Proof. 
The proof is obtained via straightforward differentiation and its main steps are provided in the Appendix A.2. □
Proposition 6.
Under the assumption that the model is correct, the influence function derived above reduces to the influence function of the MLE of β.
Proof. 
Under the assumption that the adopted model is the correct model, the density $d(s, t)$ is $m_{\beta_0}(s, t)$, so that $\delta_0(s, t) = 0$. Now recall that $w(0) = 1$ and $w'(0) = 0$, so the expression $A(d)$ reduces to
$$A(d) = -\sum_{s,t} \nabla u(t|s; \beta_0)\, m_{\beta_0}(s, t) = i(\beta),$$
where $i(\beta)$ denotes the Fisher information matrix.
Furthermore, the expression B ( x , y ; d ) reduces to u ( y | x ; β 0 ) , where we assume exchangeability of differentiation and integration and use the fact that u ( t | s ; β 0 ) = u ( s , t ; β 0 ) . Hence, the influence function is given as
$$i^{-1}(\beta)\, u(y|x; \beta_0),$$
which is exactly the influence function of the MLE. Therefore, full efficiency is preserved under the model. □

6. Asymptotic Properties

In what follows, we establish asymptotic normality of the estimators in the case of discrete variables. The techniques for obtaining asymptotic normality in the mixed-scale case are similar and not presented here.
Case 1:Both X and Y are discrete.
Recall that the $k$th estimating equation is given as $\sum_{x,y} w(\delta_\beta(x, y))\, n_{x,y}\, u_k(y|x; \beta) = 0$, which can be expanded in a Taylor series in the neighborhood of the true parameter $\beta_0$ to obtain:
$$\frac{1}{n} \sum_{x,y} w(\delta_\beta(x, y))\, n_{x,y}\, u_k(y|x; \beta) \approx A_n + (\beta - \beta_0)^T B_n + \frac{1}{2}\, (\beta - \beta_0)^T C_n\, (\beta - \beta_0),$$
where
$$A_n = \frac{1}{n} \sum_{x,y} w(\delta_{\beta_0}(x, y))\, n_{x,y}\, u_k(y|x; \beta_0), \qquad B_n = \nabla_\beta \left[\frac{1}{n} \sum_{x,y} w(\delta_\beta(x, y))\, n_{x,y}\, u_k(y|x; \beta)\right]\Bigg|_{\beta = \beta_0},$$
$C_n$ is a $p \times p$ Hessian matrix whose $(t, e)$-th element is given as
$$\frac{\partial^2}{\partial \beta_t\, \partial \beta_e} \left[\frac{1}{n} \sum_{x,y} w(\delta_\beta(x, y))\, n_{x,y}\, u_k(y|x; \beta)\right]\Bigg|_{\beta = \beta_0}.$$
Under assumptions 1–8, listed in the Appendix A.3, we have the following theorem.
Theorem 1.
The minimum disparity estimators of the parameter vector $\beta$ are asymptotically normal with asymptotic variance $I^{-1}(\beta_0)$, where $I(\cdot)$ denotes the Fisher information matrix.

7. Simulations

The simulation study presented below has two aims. The first is to indicate the versatility of the disparity methods for different data measurement scales. The second is to exemplify and study the robustness of these methods under different contamination scenarios.
Case 1:Both X and Y are discrete.
The Cressie-Read family of power divergences is given by
$$PWD(d, m_\beta) = \sum_{x,y} m_\beta(x, y) \cdot \frac{[1 + \delta(x, y)]^{\lambda + 1} - 1}{\lambda(\lambda + 1)} = \sum_{x,y} d(x, y) \cdot \frac{[d(x, y)/m_\beta(x, y)]^\lambda - 1}{\lambda(\lambda + 1)},$$
where $d(x, y) = n_{x,y}/n$ is the proportion of observations with value $(x, y)$ and $m_\beta(x, y) = m_\beta(y|x)\, \pi_x$ is the density function of the model of interest.
To evaluate the performance of our algorithmic procedure, we use the following disparity measures, that is,
Likelihood disparity ($\lambda = 0$): $LD(d, m_\beta) = \sum d(x, y) \cdot \log[d(x, y)/m_\beta(x, y)]$;
Twice-squared Hellinger's ($\lambda = -1/2$): $HD(d, m_\beta) = 2 \cdot \sum \left(\sqrt{d(x, y)} - \sqrt{m_\beta(x, y)}\right)^2$;
Pearson's chi-squared divided by 2 ($\lambda = 1$): $PCS(d, m_\beta) = \sum \frac{\left(d(x, y) - m_\beta(x, y)\right)^2}{2 \cdot m_\beta(x, y)}$;
Symmetric chi-squared ($G(\delta(x, y)) = \frac{2[\delta(x, y)]^2}{\delta(x, y) + 2}$): $SCS(d, m_\beta) = \sum \frac{2 \cdot \left(m_\beta(x, y) - d(x, y)\right)^2}{m_\beta(x, y) + d(x, y)}$.
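For reference, these four disparities translate directly into R (helper names are ours), with d and m the vectorized observed proportions and model probabilities:

```r
LD  <- function(d, m) sum(d * log(d / m))              # likelihood disparity
HD  <- function(d, m) 2 * sum((sqrt(d) - sqrt(m))^2)   # twice-squared Hellinger's
PCS <- function(d, m) sum((d - m)^2 / (2 * m))         # Pearson's chi-squared / 2
SCS <- function(d, m) sum(2 * (m - d)^2 / (m + d))     # symmetric chi-squared
```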
The data are generated in four different ways using three different sample sizes $N$: $N = 100$, $N = 1000$ and $N = 10{,}000$. The data format used can be represented in a $5 \times 5$ contingency table, with $n_{ij}$, $i = 1, 2, \ldots, 5$; $j = 1, 2, \ldots, 5$ denoting the counts in the $ij$-th cell, and $n_{i\cdot}$ and $n_{\cdot j}$ representing the row and column totals, respectively. Furthermore, the variable $x$ indicates columns, while $y$ indicates the rows. In each of the aforementioned cases/scenarios, 10,000 tables were generated, and that corresponds to the number of Monte Carlo (MC) replications. Our purpose is to obtain the mean values of the estimates of the parameters $m_\beta(y|x)$'s and $\pi_x$'s along with their corresponding standard deviations (SDs). Notice that, in this setting, the estimation of $\pi_x$ and $m_\beta(y|x)$ is completely nonparametric, that is, no model is assumed for estimating the marginal probabilities of $X$ and $Y$.
The table was generated by using either a fixed total sample size N or fixed marginal probabilities. These two data generating schemes imply two different sampling schemes that could have generated the data with consequences for the probability model one would use. For example, with fixed total sample size the distribution of the counts is multinomial, or if the row margin is fixed in advance the distribution of the counts is a product binomial distribution. In the former case of fixed N, we explored two different scenarios: a balanced and an imbalanced one. The imbalanced scenario allows for the presence of one zero cell in the contingency table, whereas the balanced scenario does not. In the latter case of fixed marginal probabilities, the row marginal probabilities ( m β ( y | x ) ’s) were fixed, while the column marginals ( π x ’s) were randomly chosen and these values were used to obtain the contingency table. In this case, we also explored a balanced and an imbalanced scenario based on whether the row marginal probabilities were chosen so that to be equal to each other or not, respectively.
Specifically, under Scenario Ia, where the total sample size $N$ was fixed and the balanced design was exploited, none of the $n_{ij}$'s ($n_{ij} \neq 0$, $i, j = 1, 2, 3, 4, 5$) was set equal to zero, with equal row and column marginal probabilities. Table 1 presents the mean of 10,000 estimates and the corresponding SDs for all four distances ($PCS$, $HD$, $SCS$, $LD$) when $N$ is fixed under the balanced scenario. Table 1 clearly shows that all distances provide estimates approximately equal to 0.200 regardless of the sample size used. Furthermore, as the sample size increases, the SDs decrease noticeably.
In Scenario IIa, where the total sample size $N$ was fixed and the contingency table was structured using the imbalanced design, the presence of a zero cell ($n_{11} = 0$) was allowed. The results of this scenario are presented in Table 2, where the estimates were calculated exploiting all disparity measures. For the LD, $n_{11}$ was set equal to $10^{-8}$. The presence of zero cells in contingency tables has a long history in the relevant literature on contingency table analysis, where several options are provided for the analysis of these tables (Fienberg [32], Agresti [33], Johnson and May [34], Poon et al. [35]). From Table 2, one can infer that the different distances handle the zero cell differently. This difference is reflected in the estimate of $\hat{m}_\beta(y_1|x) = \hat{m}_{\beta_1}$, because it is affected by the zero value of $n_{11}$. The strongest control is provided by the Hellinger and symmetric chi-squared distances. All distances estimate the parameters $\pi_{x_i}$ similarly, with the bias in their estimation being between 2.7% and 5.2%. The SDs are almost the same for all distances per estimate and improve for $N = 10{,}000$.
A referee suggested that in certain cases interest may be centered on smaller samples. We generated $2 \times 3$ tables with fixed total sample sizes of 50 and 70 observations. Table 3 and Table 4 describe the results when the contingency tables were generated under a balanced and an imbalanced design, with associated respective Scenarios Ib and IIb. More precisely, Table 3 presents the estimators of the marginal row and column probabilities obtained when the PCS, HD, SCS and LD distances are used. We notice that the increase in the sample size provides a decrease in the overall absolute bias in estimation, defined as $\sum_{\ell=1}^{L} |\hat{\theta}_\ell - \theta_{0,\ell}|$, where $\hat{\theta}_\ell$ is the estimate of the $\ell$-th component of an $L \times 1$ vector $\theta$ and $\theta_{0,\ell}$ is the corresponding true value. In our case, $\theta^T = (m_{\beta_1}, m_{\beta_2}, \pi_{x_1}, \pi_{x_2}, \pi_{x_3})$. This observation applies to all distances used in our calculations. Table 4 presents results associated with the imbalanced case. The generated $2 \times 3$ tables contain two empty cells ($n_{12} = n_{21} = 0$). Once again, for calculating the LD, the cells were set to $n_{12} = n_{21} = 10^{-8}$. We notice that the bias associated with the estimates is rather large for all the distances, and an increased sample size does not alleviate the observed bias. Basu and Basu [9] have proposed an empty-cell penalty for the minimum power-divergence estimators. This penalty leads to estimators with improved small sample properties. See also Alin and Kurt [36] for a discussion of the need for penalization in small samples.
Table 5 provides the results obtained under Scenario III. In this case, the parameter estimates were calculated using the P C S , H D , S C S and L D distances when the 5 × 5 contingency table was constructed by fixing the row marginal probabilities so that they were all set at 0.20, that is, ( 0.20 , 0.20 , 0.20 , 0.20 , 0.20 ) . The column marginals were randomly chosen in the interval [ 0 , 1 ] and summed to 1. In this case, the produced column marginal probabilities were ( 0.1472 , 0.2365 , 0.3196 , 0.2370 , 0.0597 ) . The simulation study reveals that the estimates of the parameters m β ( y | x ) ’s and π x ’s do not differ substantially from the respective row and column marginal probabilities for any of the four distances utilized. The SDs are approximately the same and they get lower values for larger N.
Finally, in Table 6 the data generation was done by exploiting Scenario IV, that is, by fixing the row marginal probabilities, which were not equal to each other, while the column marginals were randomly chosen in the interval $[0, 1]$ so that they sum to 1. In particular, the row marginal probabilities were fixed at the values $(0.04, 0.20, 0.20, 0.20, 0.36)$, while the column marginals used were $(0.2171, 0.1676, 0.2347, 0.1178, 0.2628)$. When $N = 100$, the value of $\hat{m}_\beta(y_1|x) = \hat{m}_{\beta_1}$ is approximately 0.07 for all distances, rather than the true value of 0.04. However, when $N = 1000$ or $N = 10{,}000$, we obtain better estimates irrespective of the disparity measure chosen. The SDs are approximately the same and they become smaller as the sample size increases.
We also notice from Table 1, Table 5 and Table 6 that in all cases the standard deviation associated with the estimates obtained using distances other than the likelihood disparity is approximately the same as the standard deviation that corresponds to the likelihood estimates, thereby illustrating the asymptotic efficiency of the disparity estimators.
All calculations were performed using the R language. Given that the problem described in this section can be viewed as a general non-linear optimization problem, the solnp function of the Rsolnp package (Ye [37]) was used to obtain the aforementioned estimates. For our calculations, we tried a variety of different initial values ($\hat{\pi}_x^{(0)}$'s and $\hat{m}_\beta^{(0)}(y|x)$'s); we noticed that no matter how the initial values were chosen, the estimates were always very similar and very close to the observed values ($n_{i\cdot}/N$ and $n_{\cdot j}/N$ for $i, j = 1, 2, \ldots, 5$). Only the number of iterations needed for convergence is slightly affected. Consequently, random numbers from a Uniform distribution on the interval $[0, 1]$ were set as initial values (not necessarily summing to 1). The solnp function has a built-in stopping rule, so there was no need to set our own. We only set the boundary constraints to be in the interval $[0, 1]$ for all estimates, which were also subject to $\sum \pi_x = \sum m_\beta(y|x) = 1$.
Other functions may also be used to obtain the estimates. For example, we used the auglag function of the nloptr package with local solvers "lbfgs" or "SLSQP" (Conn et al. [38], Birgin and Martínez [39]), which implements an augmented Lagrangian method. However, convergence using the solnp function was much faster than using the auglag function (on average about 2 iterations versus approximately 100). For this reason, the results presented in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 were based only on the function solnp.
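As a rough illustration of this set-up (a sketch under the independence model, not the authors' actual code), the following fits $m_\beta(y)\, \pi_x$ to a table by minimizing the Hellinger disparity with Rsolnp::solnp, using the HD helper defined earlier and uniform random starting values as described above:

```r
library(Rsolnp)

fit_hd <- function(counts) {
  n <- sum(counts); d <- as.vector(counts) / n
  I <- nrow(counts); J <- ncol(counts)
  obj <- function(p) {                        # p = (m_beta(y)'s, pi_x's)
    m <- outer(p[1:I], p[(I + 1):(I + J)])    # independence model, y-by-x
    HD(d, pmax(as.vector(m), 1e-12))          # guard against zero cells
  }
  eq <- function(p) c(sum(p[1:I]), sum(p[(I + 1):(I + J)]))
  solnp(pars = runif(I + J), fun = obj, eqfun = eq, eqB = c(1, 1),
        LB = rep(0, I + J), UB = rep(1, I + J))$pars
}
```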
Case 2:X is discrete and Y is continuous
In this section, we are interested in solving the optimization problem (5) when X is discrete, Y is continuous and X , Y are independent of each other. To evaluate the performance of our procedure, we used Hellinger’s distance, which in this case takes on the following form:
$$HD(f^*, m_\beta^*) = \sum_x \int \left(\sqrt{f_N^*(x, y)} - \sqrt{m_\beta^*(x, y)}\right)^2 dy = \sum_x \int \left(\sqrt{f_Y^*(y) \cdot \frac{n_x}{N}} - \sqrt{m_X(x) \cdot m_Y^*(y)}\right)^2 dy.$$
The aim of this simulation is to obtain the minimum Hellinger distance estimators of π x and μ assuming (without loss of generality) that σ 2 is known to be equal to 1. All calculations were performed in R language.
For this purpose, we generated mixed-type data of size $N$ using the package OrdNor (Amatya and Demirtas [40]). More precisely, the data are comprised of one categorical variable $X$ with three levels and probability vector $(1/3, 1/3, 1/3)$, while the continuous part comes from a trivariate normal distribution; symbolically, $Y = (Y_1, Y_2, Y_3) \sim MVN_3(\mu, I_3)$, where $\mu^T = (\mu_1, \mu_2, \mu_3)$. We used two different mean vectors: $\mu^T = (0, 0, 0)$ and $\mu^T = (0, 3, 6)$. The sets of ordinal and normal variables were generated concurrently using an overall correlation matrix $\Sigma$, which consists of three components/sub-matrices: $\Sigma_{OO}$, $\Sigma_{ON}$ and $\Sigma_{NN}$, with $O$ and $N$ corresponding to "Ordinal" and "Normal" variables, respectively. More precisely, the overall correlation matrix $\Sigma$ used is the following
$$\Sigma = \begin{pmatrix} 1 & \rho_{ON} & \rho_{ON} & \rho_{ON} \\ \rho_{ON} & 1 & 0 & 0 \\ \rho_{ON} & 0 & 1 & 0 \\ \rho_{ON} & 0 & 0 & 1 \end{pmatrix},$$
where $\Sigma_{OO} = 1$, $\Sigma_{NN} = I_3$, $\Sigma_{ON} = (\rho_{ON}, \rho_{ON}, \rho_{ON})$ and $\rho_{ON}$ represents the polyserial correlations for the O-N combinations (for more information on polyserial correlations refer to Olsson et al. [41]). Since $X, Y$ were assumed to be independent, we set $\rho_{ON} = 0.0$. However, we also used weak correlations, $\rho_{ON} = 0.1$ and $0.2$, to investigate whether the estimates we receive in these cases remain reasonable.
The kernel function was the multivariate normal density $MVN_3(0, H)$, with $H$ estimated from the data using the kde function of the ks package (Duong [42]); $m_Y^*(y)$ represented the multivariate normal density $MVN_3(\mu, \Sigma + H)$ and $m_X(x)$ was the multinomial mass function. This choice of smoothing parameter stemmed from the fact that we were interested in evaluating the performance, in terms of robustness, of standard bandwidth selection.
To solve the optimization problem, the solnp function of the Rsolnp package (Ye [37]) was used. Specifically, the initial values set for the probabilities $\pi_{x_1}, \pi_{x_2}, \pi_{x_3}$ associated with the $X$ variable were random uniform numbers in the interval $[0, 1]$, while the initial values for the means $\mu_{y_1}, \mu_{y_2}, \mu_{y_3}$ were random numbers in the interval $[Q_1(Y_i), Q_3(Y_i)]$ for $i = 1, 2, 3$, where $Q_1$ and $Q_3$ stand for the respective 25th and 75th quantiles per component of the continuous part. Following the same procedure as Basu and Lindsay [2] in the univariate continuous case, here (in the mixed case) the numerical evaluation of the integrals was done on the basis of Simpson's 1/3 rule using the sintegral function of the Bolstad2 package (Bolstad [43]). Moreover, we calculated the mean values, the SDs, as well as the percentages of bias of the mean and the probability vectors for three different sample sizes, $N = 100$, $N = 1000$ and $N = 1500$, over 1000 MC replications. The bias is defined as the difference of the estimates from their "true" values, that is, $bias(\mu_{y_i}) = \hat{\mu}_{y_i} - \mu_i$ and $bias(\pi_{x_i}) = \hat{\pi}_{x_i} - 1/3$ for $i = 1, 2, 3$. The results are shown in Table 7 and Table 8.
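The Simpson step is generic; a self-contained composite Simpson's 1/3 rule (our own helper, shown in place of Bolstad2::sintegral) looks as follows:

```r
# Composite Simpson's 1/3 rule on an equally spaced grid with an even number
# of intervals (odd number of points).
simpson <- function(x, fx) {
  n <- length(x) - 1
  h <- (x[n + 1] - x[1]) / n
  h / 3 * sum(fx * c(1, rep(c(4, 2), length.out = n - 1), 1))
}

y <- seq(-6, 6, length.out = 201)
simpson(y, dnorm(y))   # ~ 1, the mass of a standard normal over [-6, 6]
```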
In particular, Table 7 illustrates the mean values, the SDs and the bias percentages of the corresponding minimum Hellinger distance estimators, over 1000 MC replications, for the three different sample sizes and polyserial correlations, when $\mu = (0, 0, 0)^T$. The estimates for the $\pi_{x_i}$ are approximately equal to $1/3 = 0.333$, while the $\mu_{y_i}$ estimates are almost zero, even in the cases of weak correlations. When $\rho_{ON} = 0.0$, the sample size choice does not seem to affect the values of the estimates either overall or per component of the $X, Y$ variables. Specifically, we observe that the total absolute biases, computed as the sums of the individual component-wise absolute biases of the vectors $\pi^T = (\pi_1, \pi_2, \pi_3)$ and $\mu^T = (\mu_1, \mu_2, \mu_3)$, are approximately the same, with larger samples providing slightly smaller biases at the expense of a higher computational cost.
In Table 8, analogous results are presented with the difference that the mean vector used was μ = ( 0 , 3 , 6 ) T . The π x i estimates are very close to 1 / 3 ( = 0.333 ) for all X components, no matter which sample size or correlation is used. On the contrary, the interpretation of the μ i estimates slightly differs in this case. We also calculated the overall absolute bias as well as the individual, per parameter, absolute biases. In this case, larger samples clearly provide estimates with smaller bias for both parameter vectors π , μ and for both cases, the case of independence as well as the case of weak correlations. However, the computational time increases.
In what follows, we also present, for illustration purposes, a small simulation example using a mixed-type, contaminated data set of size $N = 1000$, which was generated using the OrdNor package setting $\rho_{ON} = 0.0$. Once again, the data were comprised of one categorical variable $X$ with three levels and probability vector $(1/3, 1/3, 1/3)$, and a trivariate continuous vector $Y = (Y_1, Y_2, Y_3)$. The contamination occurs only in the continuous part, on the basis of $\alpha \in \{1.00, 0.95, 0.90, 0.85, 0.80\}$, as follows: $Y \sim \alpha \cdot MVN_3(0, I_3) + (1 - \alpha) \cdot MVN_3(\mu, I_3)$, where $\mu^T = (3, 3, 3)$. This means that $N_1 = \alpha \times N$ observations were generated with $Y$ coming from a multivariate standard normal, and the remaining $N_2 = N - N_1$ observations followed a multivariate normal distribution with mean vector $\mu^T = (3, 3, 3)$. It goes without saying that when $\alpha = 1.00$ there is no contamination. Here, we still consider the same optimization problem as the one described above and, consequently, we are interested in evaluating the minimum Hellinger distance estimators over 1000 MC replications by examining to what extent the contamination level affects these estimates.
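A sketch of this contamination scheme for the continuous part (our helper, using MASS::mvrnorm):

```r
library(MASS)

# alpha * N observations from MVN_3(0, I_3), the rest from MVN_3((3,3,3)^T, I_3).
gen_contaminated <- function(N, alpha, mu_out = rep(3, 3)) {
  n1 <- round(alpha * N)
  rbind(mvrnorm(n1, mu = rep(0, 3), Sigma = diag(3)),
        mvrnorm(N - n1, mu = mu_out, Sigma = diag(3)))
}

Y <- gen_contaminated(1000, alpha = 0.90)   # 10% contamination
```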
As indicated by Table 9, when there is no contamination in the data ($\alpha = 1.00$), the estimates for the $\pi_{x_i}$'s are almost equal to $1/3$, while the $\mu_{y_i}$ estimates are almost equal to zero. As the data become more contaminated (i.e., as the value of $\alpha$ decreases), the minimum disparity estimators corresponding to the $X$ variable remain quite consistent with their true values. However, this is not the case with the estimates of the $\mu_{y_i}$'s, which deteriorate as the contamination level $\alpha$ shifts away from the target/null value of 1.00.
The mean parameters are estimated with reasonable bias (maximum bias is 9 % for the second component of the mean) when α = 0.95 , that is the contamination is 5 % . When the contamination is 10 % , the bias of the mean components is relatively high but still below 19 % . With higher contamination, the percentage of bias in the mean components is in the interval [ 28.3 % , 47 % ] . This is the result of using standard density estimation to obtain the smoothing parameters for the different mean components. Smaller values of these component smoothing parameters result in substantial bias reduction.
We also looked at the case where the continuous model was contaminated by a trivariate normal with mean μ T = ( 1.5 , 1.5 , 1.5 ) and covariance matrix I . In this case (results not shown), when the contamination is 5 % the maximum bias of the mean components is 6.6 % , while when the contamination is 10 % the maximum bias of the mean components is 13.5 % . Again, in this case the bandwidth parameters were obtained by fitting a unimodal density to the data.
The above results are not surprising. A judicious selection of the smoothing parameter decreases the bias of the component estimates of the mean. Agostinelli and Markatou [44] provide suggestions of how to select the smoothing parameter that can be extended and applied in this context.

8. Discussion and Conclusions

In this paper, we discuss Pearson residual systems that conform to the measurement scale of the data. We place emphasis on the mixed-scale measurement scenario, which is equivalent to having both discrete (categorical or nominal) and continuous type random variables, and obtain robust estimators of the parameters of the joint probability distribution that describes those variables. We show that disparity methods can be used to control against model misspecification and the presence of outliers, and that these methods provide reasonable results.
The scale and nature of measurement of the data imposes additional challenges, both computationally and statistically. Detecting outliers in this multidimensional space is an open research question (Eiras-Franco et al. [45]). The concept of outliers has a long history in the field of statistics and outlier detection methods have broad applications in many scientific fields such as security (Diehl and Hampshire [46], Portnoy et al. [47]), health care (Tran et al. [48]) and insurance (Konijn and Kowalczyk [49]) to mention just a few.
Classical outlier detection methods are largely designed for single measurement scale data. Handling mixed measurement scales is a challenge, with few works coming from the field of statistics (Fraley and Wilkinson [50], Wilkinson [51]) and the fields of engineering and computer science (Do et al. [52], Koufakou et al. [53]). All these works use some version of a probabilistic outlier, either looking for regions in the space of data that have low density (Do et al. [52], Koufakou et al. [53]) or by attaching a probability, under a model, to the suspicious data point (Fraley and Wilkinson [50], Wilkinson [51]).
Our concept of a probabilistic outlier discussed here and expressed via the construction of appropriate Pearson residuals can unify the different measurement scales, and the class of disparity functions discussed above can provide estimators for the model parameters that are not influenced unduly by potential outliers.
One of the important parameters that controls the robustness of these methods is the smoothing parameter(s) used to compute the density estimator of the continuous part of the model. In our computations, we use standard smoothing parameters obtained from utilizing appropriate R functions for density estimation. The results show that, depending on the level of contamination and the type of contaminating probability model, the performance of the methods is satisfactory. Specifically, a small simulation study using the model reported in the caption of Table 9 shows that the overall bias associated with the mean components of the standard multivariate normal model is low when contamination with a multivariate normal model with mean components equal to 3 is less than or equal to 10 % . But even in this case, when the percentage of contamination is greater than 10 % , the bias increases when the smoothing parameter used is the one obtained from the R density function. Here, smaller values of the smoothing parameter guarantee reduction of the bias.
Devising rules for selecting the smoothing parameter(s) in the context of mixed-scale measurements that can guarantee robustness for larger than 5 % levels of contamination may be possible. However, it is the opinion of the authors that greater levels of data inhomogeneity may indicate model failure, a case where assessing model goodness of fit is of importance.

Author Contributions

The authors of this paper have contributed as follows. Conceptualization: M.M.; Methodology: M.M., E.M.S., R.L.; Software: E.M.S., H.W.; Writing-original draft presentation: M.M., E.M.S., R.L., H.W.; Supervision, funding acquisition and project administration: M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Troup Fund, KALEIDA Health Foundation, under award number 82114, to Markatou who supported the work of the first and the third author of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALT: Alanine Aminotransferase
HD: Twice-Squared Hellinger's Disparity
LD: Likelihood Disparity
MC: Monte Carlo Replications
MDE: Minimum Distance Estimators
MLE: Maximum Likelihood Estimator
PCS: Pearson's Chi-Squared Disparity Divided by 2
PWD: Power Divergence Disparity
RAF: Residual Adjustment Function
SCS: Symmetric Chi-Squared Disparity
SD: Standard Deviation

Appendix A

Appendix A.1. Proof of Proposition 3

Proof. 
The equations (4) are obtained from solving optimization problem (3). To solve this problem, we form the corresponding Lagrangian, which is
$$\sum_{x,y} G(\delta(x, y))\, m_\beta(y|x)\, \pi_x - \lambda \left(\sum_x \pi_x - 1\right).$$
(i) Let $\nabla_\beta$ denote the gradient with respect to $\beta$. The estimators of $\beta$ are obtained as solutions of the set of equations:
$$\nabla_\beta \left[\sum_{x,y} G(\delta(x, y))\, m_\beta(y|x)\, \pi_x - \lambda \left(\sum_x \pi_x - 1\right)\right] = 0,$$
which can be equivalently expressed as follows,
$$\sum_{x,y} \pi_x \left[\nabla_\beta\, G(\delta(x, y))\right] m_\beta(y|x) + \sum_{x,y} \pi_x\, G(\delta(x, y))\, \nabla_\beta\, m_\beta(y|x) = 0.$$
Notice that the gradient of $G(\delta(x, y))$ with respect to $\beta$ is given by
$$\nabla_\beta\, G(\delta(x, y)) = -G'(\delta(x, y))\, (\delta(x, y) + 1)\, u(y|x; \beta),$$
where the prime denotes the derivative with respect to $\delta$, $\delta(x, y)$ is the Pearson residual, and
$$u(y|x; \beta) = \frac{\nabla_\beta\, m_\beta(y|x)}{m_\beta(y|x)} = \nabla_\beta \ln[m_\beta(y|x)]$$
is the score for β in the conditional distribution of y given x. Therefore,
$$\sum_{x,y} A(\delta(x, y))\, \pi_x\, u(y|x; \beta)\, m_\beta(y|x) = 0,$$
where
$$A(\delta(x, y)) = G'(\delta(x, y))\, [\delta(x, y) + 1] - G(\delta(x, y)).$$
By making use of the fact that $\sum_{x,y} \pi_x\, \nabla_\beta\, m_\beta(y|x) = 0$, the resulting equations can be represented as
$$\sum_{x,y} \frac{A(\delta(x, y)) + 1}{\delta(x, y) + 1}\, n_{x,y}\, u(y|x; \beta) = 0,$$
or equivalently,
$$\sum_{x,y} w(\delta(x, y))\, n_{x,y}\, u(y|x; \beta) = 0.$$
Without loss of generality, we can take,
$$w(\delta(x, y)) = \min\left\{\frac{[A(\delta(x, y)) + 1]_+}{\delta(x, y) + 1},\ 1\right\}, \qquad 0 \le w(\delta(x, y)) \le 1.$$
(ii) We now obtain $\hat{\pi}_x$ by setting the gradient of the Lagrangian with respect to $\pi_z$ equal to zero, that is, by solving the following equations:
$$\sum_y G'(\delta(z, y)) \left[\frac{\partial}{\partial \pi_z}\, \delta(z, y)\right] m_\beta(y|z)\, \pi_z + \sum_y G(\delta(z, y))\, m_\beta(y|z) - \lambda = 0.$$
Recalling that $A(\delta(z, y)) = G'(\delta(z, y))\, [\delta(z, y) + 1] - G(\delta(z, y))$ and $\delta(z, y) + 1 = \frac{n_{z,y}/n}{m_\beta(y|z)\, \pi_z}$, the above equations reduce to
$$-\sum_y A(\delta(z, y))\, m_\beta(z, y)\, \frac{1}{\pi_z} - \lambda = 0$$
and we readily conclude that,
$$\pi_z = -\frac{1}{\lambda} \sum_y A(\delta(z, y))\, m_\beta(z, y), \qquad \forall\, z.$$
Furthermore, to satisfy the constraint $\sum_x \pi_x = 1$, we obtain
$$\lambda = -\sum_{x,y} A(\delta(x, y))\, m_\beta(x, y).$$
Therefore, we get
$$\sum_{x,y} A(\delta(x, y))\, m_\beta(x, y) \left[\frac{I(X = z)}{\pi_z} - 1\right] = 0$$
and, by making use of the fact that $\sum_{x,y} m_\beta(x, y) \left[\frac{I(X = z)}{\pi_z} - 1\right] = 0$, the above equation can be represented as
$$\sum_{x,y} w(\delta(x, y))\, n_{x,y} \left[\frac{I(X = x)}{\pi_x} - 1\right] = 0$$
for any $x$, where $I(X = x)$ is the indicator function of the event $\{X = x\}$. □

Appendix A.2. Proof of Proposition 5

Recall that $\beta_\epsilon$ is a solution of the set of estimating equations
$$\sum_{s,t} w(\delta_\epsilon(s, t))\, u(t|s; \beta_\epsilon)\, d_\epsilon(s, t) = 0, \qquad (A1)$$
where $d_\epsilon(s, t) = (1 - \epsilon)\, d(s, t) + \epsilon\, \Delta_{x,y}(s, t)$ and $u(t|s; \beta) = \frac{\nabla_\beta\, m_\beta(s, t)}{m_\beta(s, t)} = \nabla_\beta \ln[m_\beta(s, t)]$ is a $p$-dimensional vector.
The influence function of $\beta$ is calculated by differentiating the quantity (A1) with respect to $\epsilon$ and evaluating the derivative at $\epsilon = 0$. Thus, we need
$$\frac{d}{d\epsilon} \left\{\sum_{s,t} w(\delta_\epsilon(s, t))\, u(t|s; \beta_\epsilon)\, d(s, t) - \epsilon \sum_{s,t} w(\delta_\epsilon(s, t))\, u(t|s; \beta_\epsilon)\, d(s, t) + \epsilon \sum_{s,t} w(\delta_\epsilon(s, t))\, u(t|s; \beta_\epsilon)\, \Delta_{x,y}(s, t)\right\}\Bigg|_{\epsilon = 0} = 0.$$
Taking into account that $\delta_\epsilon(s, t) = \frac{d_\epsilon(s, t)}{m_\beta(s, t)} - 1 = \frac{d_\epsilon(s, t)}{m_\beta(t|s)\, \pi_s} - 1$, the aforementioned evaluation implies
$$\left\{\sum_{s,t} (\delta_0(s, t) + 1)\, w'(\delta_0(s, t))\, u(t|s; \beta_0)\, u^T(t|s; \beta_0)\, d(s, t) - \sum_{s,t} w(\delta_0(s, t))\, \nabla u(t|s; \beta_0)\, d(s, t)\right\} \beta_0' = \sum_{s,t} \left[\frac{I(s = x, t = y)}{m_{\beta_0}(t|s)\, \pi_s} - \frac{d(s, t)}{m_{\beta_0}(t|s)\, \pi_s}\right] w'(\delta_0(s, t))\, u(t|s; \beta_0)\, d(s, t) - \sum_{s,t} w(\delta_0(s, t))\, u(t|s; \beta_0)\, d(s, t) + w(\delta_0(x, y))\, u(y|x; \beta_0),$$
which implies that
$$\beta_0' = IF(\beta; F) = [A(d)]^{-1}\, B(x, y; d).$$

Appendix A.3. Assumptions of Theorem 1

The following assumptions are needed to be able to establish asymptotic normality of the estimators.
1.
The weight functions are nonnegative, bounded and differentiable with respect to δ .
2.
The weight function is regular, that is, $w'(\delta)(\delta + 1)$ is bounded, where $w'(\delta)$ is the derivative of $w$ with respect to $\delta$.
3.
$\sum_{x,y} m^{1/2}(x, y)\, E[u_k^2(y|x; \beta_0)] < \infty$.
4.
The elements of the Fisher information matrix are finite and the Fisher information matrix is nonsingular.
5.
$\sum_{x,y} m^{1/2}(x, y)\, E[u_i^2(y|x; \beta_0)\, u_j^2(y|x; \beta_0)] < \infty$, for all $i, j = 1, 2, \ldots, p$.
6.
If $\beta_0$ denotes the true value of $\beta$, there exist functions $M_{ijk}(y|x)$ such that $|u_{ijk}(y|x; \beta)| \le M_{ijk}(y|x)$ for all $\beta$ with $\|\beta - \beta_0\|^2 < r(\beta_0)$, $r(\beta_0) > 0$, and $E_{\beta_0}|M_{ijk}(y|x)| < \infty$, for all $i, j, k$.
7.
If $\beta_0$ denotes the true value of $\beta$, there is a neighborhood $N(\beta_0)$ such that for $\beta \in N(\beta_0)$ the quantities $|u_t(y|x; \beta)\, u_i(y|x; \beta)|$ and $|u_t(y|x; \beta)\, u_i(y|x; \beta)\, u_e(y|x; \beta)|$ are bounded by $M_1(y|x)$ and $M_2(y|x)$, respectively, with corresponding finite expectations.
8.
$A''(\delta)(\delta + 1)$ is bounded, where $A''$ denotes the second derivative of $A$ with respect to $\delta$.

References

1. Beran, R. Minimum Hellinger Distance Estimates for Parametric Models. Ann. Stat. 1977, 5, 445–463.
2. Basu, A.; Lindsay, B.G. Minimum Disparity Estimation for Continuous Models: Efficiency, Distributions and Robustness. Ann. Inst. Stat. Math. 1994, 46, 683–705.
3. Pardo, J.A.; Pardo, L.; Pardo, M.C. Minimum ϕ-Divergence Estimator in Logistic Regression Models. Stat. Pap. 2005, 47, 91–108.
4. Pardo, J.A.; Pardo, L.; Pardo, M.C. Testing in Logistic Regression Models Based on ϕ-Divergence Measures. J. Stat. Plan. Inference 2006, 136, 982–1006.
5. Pardo, J.A.; Pardo, M.C. Minimum ϕ-Divergence Estimator and ϕ-Divergence Statistics in Generalized Linear Models with Binary Data. Methodol. Comput. Appl. Probab. 2008, 10, 357–379.
6. Simpson, D.G. Minimum Hellinger Distance Estimation for the Analysis of Count Data. J. Am. Stat. Assoc. 1987, 82, 802–807.
7. Simpson, D.G. Hellinger Deviance Tests: Efficiency, Breakdown Points, and Examples. J. Am. Stat. Assoc. 1989, 84, 104–113.
8. Markatou, M.; Basu, A.; Lindsay, B.G. Weighted Likelihood Estimating Equations: The Discrete Case with Applications to Logistic Regression. J. Stat. Plan. Inference 1997, 57, 215–232.
9. Basu, A.; Basu, S. Penalized Minimum Disparity Methods for Multinomial Models. Stat. Sin. 1998, 8, 841–860.
10. Gupta, A.K.; Nguyen, T.; Pardo, L. Inference Procedures for Polytomous Logistic Regression Models Based on ϕ-Divergence Measures. Math. Methods Stat. 2006, 15, 269–288.
11. Martín, N.; Pardo, L. New Influence Measures in Polytomous Logistic Regression Models Based on Phi-Divergence Measures. Commun. Stat. Theory Methods 2014, 43, 2311–2321.
12. Castilla, E.; Ghosh, A.; Martín, N.; Pardo, L. New Robust Statistical Procedures for Polytomous Logistic Regression Models. Biometrics 2018, 74, 1282–1291.
13. Martín, N.; Pardo, L. Minimum Phi-Divergence Estimators for Loglinear Models with Linear Constraints and Multinomial Sampling. Stat. Pap. 2008, 49, 2311–2321.
14. Pardo, L.; Martín, N. Minimum Phi-Divergence Estimators and Phi-Divergence Test Statistics in Contingency Tables with Symmetric Structure: An Overview. Symmetry 2010, 2, 1108–1120.
15. Pardo, L.; Pardo, M.C. Minimum Power-Divergence Estimator in Three-Way Contingency Tables. J. Stat. Comput. Simul. 2003, 73, 819–831.
16. Pardo, L.; Pardo, M.C.; Zografos, K. Minimum ϕ-Divergence Estimator for Homogeneity in Multinomial Populations. Sankhyā Indian J. Stat. Ser. A (1961–2002) 2001, 63, 72–92.
17. Basu, A.; Harris, I.A.; Hjort, N.L.; Jones, M.C. Robust and Efficient Estimation by Minimising a Density Power Divergence. Biometrika 1998, 85, 549–559.
18. Csiszár, I. Information-Type Measures of Difference of Probability Distributions and Indirect Observations. Stud. Sci. Math. Hung. 1967, 2, 299–318.
19. Lindsay, B.G. Efficiency Versus Robustness: The Case for Minimum Hellinger Distance and Related Methods. Ann. Stat. 1994, 22, 1081–1114.
20. Tamura, R.N.; Boos, D.D. Minimum Hellinger Distance Estimation for Multivariate Location and Covariance. J. Am. Stat. Assoc. 1986, 81, 223–229.
21. Markatou, M.; Basu, A.; Lindsay, B.G. Weighted Likelihood Equations with Bootstrap Root Search. J. Am. Stat. Assoc. 1998, 93, 740–750.
22. Haberman, S.J. Generalized Residuals for Log-Linear Models. In Proceedings of the 9th International Biometrics Conference, Boston, MA, USA, 22–27 August 1976; pp. 104–122.
23. Haberman, S.J.; Sinharay, S. Generalized Residuals for General Models for Contingency Tables with Application to Item Response Theory. J. Am. Stat. Assoc. 2013, 108, 1435–1444.
24. Pierce, D.A.; Schafer, D.W. Residuals in Generalized Linear Models. J. Am. Stat. Assoc. 1986, 81, 977–986.
25. Aerts, M.; Molenberghs, G.; Geys, H.; Ryan, L. Topics in Modelling of Clustered Data; Monographs on Statistics and Applied Probability; Chapman & Hall/CRC Press: New York, NY, USA, 2002; Volume 96.
26. Olkin, I.; Tate, R.F. Multivariate Correlation Models with Mixed Discrete and Continuous Variables. Ann. Math. Stat. 1961, 32, 448–465, with correction in 1965, 36, 343–344.
27. Genest, C.; Nešlehová, J. A Primer on Copulas for Count Data. ASTIN Bull. 2007, 37, 475–515.
28. Lauritzen, S.; Wermuth, N. Graphical Models for Associations between Variables, Some of Which Are Qualitative and Some Quantitative. Ann. Stat. 1989, 17, 31–57.
29. Hampel, F.R.; Ronchetti, E.M.; Rousseeuw, P.J.; Stahel, W.A. Robust Statistics: The Approach Based on Influence Functions; Wiley Series in Probability and Mathematical Statistics; Wiley: New York, NY, USA, 1986.
30. Hampel, F.R. Contributions to the Theory of Robust Estimation. Ph.D. Thesis, Department of Statistics, University of California, Berkeley, Berkeley, CA, USA, 1968. Unpublished.
31. Hampel, F.R. The Influence Curve and Its Role in Robust Estimation. J. Am. Stat. Assoc. 1974, 69, 383–393.
32. Fienberg, S.E. The Analysis of Incomplete Multi-Way Contingency Tables. Biometrics 1972, 28, 177–202.
33. Agresti, A. Categorical Data Analysis, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2013.
34. Johnson, W.D.; May, W.L. Combining 2 × 2 Tables That Contain Structural Zeros. Stat. Med. 1995, 14, 1901–1911.
35. Poon, W.Y.; Tang, M.L.; Wang, S.J. Influence Measures in Contingency Tables with Application in Sampling Zeros. Sociol. Methods Res. 2003, 31, 439–452.
36. Alin, A.; Kurt, S. Ordinary and Penalized Minimum Power-Divergence Estimators in Two-Way Contingency Tables. Comput. Stat. 2008, 23, 455–468.
37. Ye, Y. Interior Algorithms for Linear, Quadratic, and Linearly Constrained Convex Programming. Ph.D. Thesis, Department of Engineering-Economic Systems, Stanford University, Stanford, CA, USA, 1987. Unpublished.
38. Conn, A.R.; Gould, N.I.M.; Toint, P. A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds. SIAM J. Numer. Anal. 1991, 28, 545–572.
39. Birgin, E.G.; Martínez, J.M. Improving Ultimate Convergence of an Augmented Lagrangian Method. Optim. Methods Softw. 2008, 23, 177–195.
40. Amatya, A.; Demirtas, H. OrdNor: An R Package for Concurrent Generation of Correlated Ordinal and Normal Data. J. Stat. Softw. 2015, 68, 1–14.
41. Olsson, U.; Drasgow, F.; Dorans, N.J. The Polyserial Correlation Coefficient. Psychometrika 1982, 47, 337–347.
42. Duong, T. ks: Kernel Density Estimation and Kernel Discriminant Analysis for Multivariate Data in R. J. Stat. Softw. 2007, 21, 1–16.
43. Bolstad, W.M. Understanding Computational Bayesian Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2010.
44. Agostinelli, C.; Markatou, M. Test of Hypotheses Based on the Weighted Likelihood Methodology. Stat. Sin. 2001, 11, 499–514.
45. Eiras-Franco, C.; Martínez-Rego, D.; Guijarro-Berdiñas, B.; Alonso-Betanzos, A.; Bahamonde, A. Large Scale Anomaly Detection in Mixed Numerical and Categorical Input Spaces. Inf. Sci. 2019, 487, 115–127.
46. Diehl, C.; Hampshire, J. Real-Time Object Classification and Novelty Detection for Collaborative Video Surveillance. In Proceedings of the 2002 International Joint Conference on Neural Networks, IJCNN’02 (Cat. No.02CH37290), Honolulu, HI, USA, 12–17 May 2002; Volume 3, pp. 2620–2625.
47. Portnoy, L.; Eskin, E.; Stolfo, S. Intrusion Detection with Unlabeled Data Using Clustering. In Proceedings of the ACM CSS Workshop on Data Mining Applied to Security (DMSA-2001), Philadelphia, PA, USA, 5–8 November 2001; pp. 5–8.
48. Tran, T.; Phung, D.; Luo, W.; Harvey, R.; Berk, M.; Venkatesh, S. An Integrated Framework for Suicide Risk Prediction. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, 11–14 August 2013; ACM: New York, NY, USA, 2013; pp. 1410–1418.
49. Konijn, R.M.; Kowalczyk, W. Finding Fraud in Health Insurance Data with Two-Layer Outlier Detection Approach. In Data Warehousing and Knowledge Discovery, DaWaK 2011; Cuzzocrea, A., Dayal, U., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 394–405.
50. Fraley, C.; Wilkinson, L. Package ‘HDoutliers’. R Package. 2020. Available online: https://cran.r-project.org/web/packages/HDoutliers/index.html (accessed on 31 December 2020).
51. Wilkinson, L. Visualizing Outliers. 2016. Available online: https://www.cs.uic.edu/~wilkinson/Publications/outliers.pdf (accessed on 31 December 2020).
52. Do, K.; Tran, T.; Phung, D.; Venkatesh, S. Outlier Detection on Mixed-Type Data: An Energy-Based Approach. In Advanced Data Mining and Applications; Li, J., Li, X., Wang, S., Li, J., Sheng, Q.Z., Eds.; Springer: Cham, Switzerland, 2016; pp. 111–125.
53. Koufakou, A.; Georgiopoulos, M.; Anagnostopoulos, G.C. Detecting Outliers in High-Dimensional Datasets with Mixed Attributes. In Proceedings of the 2008 International Conference on Data Mining, DMIN, Las Vegas, NV, USA, 14–17 July 2008; pp. 427–433.
Table 1. Scenario Ia: Means and standard deviations (SDs) of 4 distances ($PCS$, $HD$, $SCS$, $LD$). A $5 \times 5$ contingency table was generated having fixed the total sample size $N$ under a balanced design with $n_{ij} \neq 0$, $i, j = 1, 2, 3, 4, 5$. The number of Monte Carlo (MC) replications used is 10,000.

| N | Statistical Distance | Summary | m̂β1 | m̂β2 | m̂β3 | m̂β4 | m̂β5 | π̂x1 | π̂x2 | π̂x3 | π̂x4 | π̂x5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | PCS | Mean | 0.199 | 0.199 | 0.201 | 0.201 | 0.200 | 0.201 | 0.200 | 0.199 | 0.200 | 0.201 |
| | | SD | 0.038 | 0.041 | 0.039 | 0.039 | 0.039 | 0.038 | 0.038 | 0.037 | 0.038 | 0.038 |
| | HD | Mean | 0.199 | 0.200 | 0.200 | 0.200 | 0.201 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.037 | 0.041 | 0.037 | 0.037 | 0.037 | 0.037 | 0.037 | 0.035 | 0.036 | 0.037 |
| | SCS | Mean | 0.199 | 0.201 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.199 | 0.200 | 0.201 |
| | | SD | 0.037 | 0.041 | 0.038 | 0.038 | 0.038 | 0.032 | 0.033 | 0.030 | 0.031 | 0.032 |
| | LD | Mean | 0.199 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.002 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.035 | 0.039 | 0.036 | 0.036 | 0.036 | 0.035 | 0.036 | 0.036 | 0.034 | 0.035 |
| 1000 | PCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.014 | 0.015 | 0.016 | 0.016 | 0.014 | 0.017 | 0.015 | 0.015 | 0.013 | 0.016 |
| | HD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.013 | 0.015 | 0.013 | 0.013 | 0.013 | 0.013 | 0.012 | 0.012 | 0.012 | 0.013 |
| | SCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.014 | 0.015 | 0.013 | 0.013 | 0.013 | 0.008 | 0.009 | 0.011 | 0.012 | 0.008 |
| | LD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.013 | 0.015 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.012 | 0.012 | 0.013 |
| 10,000 | PCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.008 | 0.007 | 0.006 | 0.006 | 0.009 | 0.010 | 0.010 | 0.007 | 0.008 | 0.006 |
| | HD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 |
| | SCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.007 | 0.005 | 0.008 | 0.008 | 0.004 |
| | LD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 |
| | | SD | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 |
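Scenario Ia is straightforward to reproduce. The sketch below is our reading of the caption (a balanced $5 \times 5$ multinomial with all cell probabilities $0.04$), not the authors' code; in particular, how replicates with empty cells were handled to enforce $n_{ij} \neq 0$ is an assumption, and the final line simply reports how often empty cells occur.

```python
import numpy as np

# Our reading of the Scenario Ia design (a sketch, not the authors' code):
# balanced 5x5 table, all cell probabilities 0.04 (so m_beta(y|x) = pi_x = 0.2),
# fixed total N, many Monte Carlo replications.

def scenario_Ia(N=100, reps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(25, 0.04)
    return rng.multinomial(N, p, size=reps).reshape(reps, 5, 5)

tables = scenario_Ia(N=100, reps=1000)
print((tables == 0).any(axis=(1, 2)).mean())  # fraction of reps with empty cells
```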
Table 2. Scenario IIa: Means and SDs of 4 distances ($PCS$, $HD$, $SCS$, $LD$). A $5 \times 5$ contingency table was generated having fixed the total sample size $N$ under an imbalanced design with $n_{11} = 0$. The number of MC replications used is 10,000.

| N | Statistical Distance | Summary | m̂β1 | m̂β2 | m̂β3 | m̂β4 | m̂β5 | π̂x1 | π̂x2 | π̂x3 | π̂x4 | π̂x5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | PCS | Mean | 0.052 | 0.197 | 0.198 | 0.198 | 0.355 | 0.165 | 0.173 | 0.172 | 0.245 | 0.245 |
| | | SD | 0.028 | 0.045 | 0.044 | 0.044 | 0.053 | 0.041 | 0.039 | 0.044 | 0.044 | 0.047 |
| | HD | Mean | 0.026 | 0.202 | 0.202 | 0.202 | 0.368 | 0.156 | 0.168 | 0.168 | 0.254 | 0.254 |
| | | SD | 0.019 | 0.049 | 0.045 | 0.045 | 0.054 | 0.041 | 0.042 | 0.041 | 0.046 | 0.049 |
| | SCS | Mean | 0.033 | 0.209 | 0.209 | 0.209 | 0.340 | 0.166 | 0.172 | 0.171 | 0.245 | 0.246 |
| | | SD | 0.022 | 0.047 | 0.045 | 0.045 | 0.051 | 0.036 | 0.036 | 0.033 | 0.038 | 0.040 |
| | LD | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.160 | 0.170 | 0.170 | 0.250 | 0.250 |
| | | SD | 0.020 | 0.043 | 0.040 | 0.040 | 0.048 | 0.037 | 0.038 | 0.036 | 0.042 | 0.044 |
| 1000 | PCS | Mean | 0.044 | 0.197 | 0.197 | 0.197 | 0.365 | 0.164 | 0.170 | 0.170 | 0.248 | 0.248 |
| | | SD | 0.011 | 0.017 | 0.014 | 0.014 | 0.018 | 0.013 | 0.014 | 0.013 | 0.015 | 0.015 |
| | HD | Mean | 0.034 | 0.203 | 0.202 | 0.202 | 0.359 | 0.156 | 0.170 | 0.170 | 0.252 | 0.252 |
| | | SD | 0.005 | 0.015 | 0.013 | 0.013 | 0.016 | 0.011 | 0.012 | 0.012 | 0.013 | 0.014 |
| | SCS | Mean | 0.038 | 0.210 | 0.210 | 0.210 | 0.332 | 0.166 | 0.169 | 0.169 | 0.248 | 0.248 |
| | | SD | 0.006 | 0.015 | 0.014 | 0.014 | 0.016 | 0.014 | 0.013 | 0.011 | 0.013 | 0.014 |
| | LD | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.160 | 0.170 | 0.170 | 0.250 | 0.250 |
| | | SD | 0.006 | 0.015 | 0.013 | 0.013 | 0.016 | 0.012 | 0.012 | 0.011 | 0.013 | 0.014 |
| 10,000 | PCS | Mean | 0.044 | 0.197 | 0.196 | 0.196 | 0.367 | 0.164 | 0.170 | 0.170 | 0.248 | 0.248 |
| | | SD | 0.002 | 0.006 | 0.007 | 0.007 | 0.010 | 0.007 | 0.006 | 0.005 | 0.007 | 0.008 |
| | HD | Mean | 0.034 | 0.203 | 0.202 | 0.202 | 0.359 | 0.156 | 0.171 | 0.171 | 0.252 | 0.252 |
| | | SD | 0.002 | 0.005 | 0.004 | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.004 | 0.005 |
| | SCS | Mean | 0.038 | 0.210 | 0.210 | 0.210 | 0.332 | 0.166 | 0.169 | 0.169 | 0.248 | 0.248 |
| | | SD | 0.002 | 0.005 | 0.004 | 0.004 | 0.005 | 0.007 | 0.006 | 0.004 | 0.006 | 0.006 |
| | LD | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.160 | 0.170 | 0.170 | 0.250 | 0.250 |
| | | SD | 0.002 | 0.005 | 0.004 | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 |
Table 3. Scenario Ib: Means and Biases of 4 distances ($PCS$, $HD$, $SCS$, $LD$). A $2 \times 3$ contingency table was generated having fixed the total sample size $N$ under a balanced design with $n_{ij} \neq 0$, $i = 1, 2$, $j = 1, 2, 3$. The number of MC replications used is 10,000.

| N | Statistical Distance | Summary | m̂β1 | m̂β2 | π̂x1 | π̂x2 | π̂x3 |
|---|---|---|---|---|---|---|---|
| 50 | PCS | Mean | 0.5008 | 0.4992 | 0.3339 | 0.3336 | 0.3325 |
| | | Abs. Biases | 0.0008 | 0.0008 | 0.0006 | 0.0003 | 0.0009 |
| | | Overall Bias | 0.0034 | | | | |
| | HD | Mean | 0.5008 | 0.4992 | 0.3339 | 0.3335 | 0.3326 |
| | | Abs. Biases | 0.0008 | 0.0008 | 0.0006 | 0.0002 | 0.0007 |
| | | Overall Bias | 0.0031 | | | | |
| | SCS | Mean | 0.5007 | 0.4993 | 0.3338 | 0.3335 | 0.3326 |
| | | Abs. Biases | 0.0007 | 0.0007 | 0.0005 | 0.0002 | 0.0007 |
| | | Overall Bias | 0.0028 | | | | |
| | LD | Mean | 0.5008 | 0.4992 | 0.3339 | 0.3335 | 0.3326 |
| | | Abs. Biases | 0.0008 | 0.0008 | 0.0006 | 0.0002 | 0.0008 |
| | | Overall Bias | 0.0032 | | | | |
| 70 | PCS | Mean | 0.4998 | 0.5002 | 0.3333 | 0.3331 | 0.3337 |
| | | Abs. Biases | 0.0002 | 0.0002 | 0.0001 | 0.0003 | 0.0003 |
| | | Overall Bias | 0.0011 | | | | |
| | HD | Mean | 0.4998 | 0.5002 | 0.3333 | 0.3330 | 0.3336 |
| | | Abs. Biases | 0.0002 | 0.0002 | 0.0000 | 0.0003 | 0.0003 |
| | | Overall Bias | 0.0009 | | | | |
| | SCS | Mean | 0.4998 | 0.5002 | 0.3334 | 0.3331 | 0.3335 |
| | | Abs. Biases | 0.0002 | 0.0002 | 0.0000 | 0.0002 | 0.0002 |
| | | Overall Bias | 0.0008 | | | | |
| | LD | Mean | 0.4999 | 0.5001 | 0.3333 | 0.3330 | 0.3336 |
| | | Abs. Biases | 0.0001 | 0.0001 | 0.0000 | 0.0003 | 0.0003 |
| | | Overall Bias | 0.0009 | | | | |
Table 4. Scenario IIb: Means and Biases of 4 distances ($PCS$, $HD$, $SCS$, $LD$). A $2 \times 3$ contingency table was generated having fixed the total sample size $N$ under an imbalanced design with $n_{12} = n_{21} = 0$. The number of MC replications used is 10,000.

| N | Statistical Distance | Summary | m̂β1 | m̂β2 | π̂x1 | π̂x2 | π̂x3 |
|---|---|---|---|---|---|---|---|
| 50 | PCS | Mean | 0.6391 | 0.3609 | 0.3489 | 0.2278 | 0.4234 |
| | | Abs. Biases | 0.0276 | 0.0276 | 0.0155 | 0.0611 | 0.0766 |
| | | Overall Bias | 0.2084 | | | | |
| | HD | Mean | 0.7815 | 0.2185 | 0.3346 | 0.0497 | 0.6157 |
| | | Abs. Biases | 0.1149 | 0.1149 | 0.0013 | 0.1170 | 0.1157 |
| | | Overall Bias | 0.4638 | | | | |
| | SCS | Mean | 0.6420 | 0.3580 | 0.3510 | 0.2726 | 0.3765 |
| | | Abs. Biases | 0.0247 | 0.0247 | 0.0176 | 0.1059 | 0.1235 |
| | | Overall Bias | 0.2964 | | | | |
| | LD | Mean | 0.6677 | 0.3323 | 0.3342 | 0.1660 | 0.4998 |
| | | Abs. Biases | 0.0010 | 0.0010 | 0.0009 | 0.0007 | 0.0002 |
| | | Overall Bias | 0.0038 | | | | |
| 70 | PCS | Mean | 0.6377 | 0.3623 | 0.3483 | 0.2297 | 0.4220 |
| | | Abs. Biases | 0.0290 | 0.0290 | 0.0150 | 0.0631 | 0.0780 |
| | | Overall Bias | 0.2141 | | | | |
| | HD | Mean | 0.7812 | 0.2188 | 0.3328 | 0.0491 | 0.6180 |
| | | Abs. Biases | 0.1145 | 0.1145 | 0.0005 | 0.1175 | 0.1180 |
| | | Overall Bias | 0.4650 | | | | |
| | SCS | Mean | 0.6395 | 0.3605 | 0.3505 | 0.2739 | 0.3756 |
| | | Abs. Biases | 0.0271 | 0.0271 | 0.0172 | 0.1072 | 0.1244 |
| | | Overall Bias | 0.3030 | | | | |
| | LD | Mean | 0.6657 | 0.3343 | 0.3331 | 0.1671 | 0.4998 |
| | | Abs. Biases | 0.0010 | 0.0010 | 0.0002 | 0.0004 | 0.0002 |
| | | Overall Bias | 0.0028 | | | | |
Table 5. Scenario III: Means and SDs of 4 distances ($PCS$, $HD$, $SCS$, $LD$). A $5 \times 5$ contingency table was generated having fixed the row marginal probabilities at (0.20, 0.20, 0.20, 0.20, 0.20). The number of MC replications used is 10,000.

| N | Statistical Distance | Summary | m̂β1 | m̂β2 | m̂β3 | m̂β4 | m̂β5 | π̂x1 | π̂x2 | π̂x3 | π̂x4 | π̂x5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | PCS | Mean | 0.199 | 0.200 | 0.200 | 0.200 | 0.201 | 0.153 | 0.230 | 0.302 | 0.229 | 0.086 |
| | | SD | 0.037 | 0.037 | 0.037 | 0.037 | 0.037 | 0.034 | 0.039 | 0.043 | 0.039 | 0.023 |
| | HD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.230 | 0.311 | 0.230 | 0.082 |
| | | SD | 0.039 | 0.040 | 0.039 | 0.039 | 0.040 | 0.033 | 0.043 | 0.037 | 0.042 | 0.019 |
| | SCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.153 | 0.230 | 0.302 | 0.230 | 0.085 |
| | | SD | 0.039 | 0.085 | 0.038 | 0.038 | 0.038 | 0.033 | 0.039 | 0.043 | 0.039 | 0.022 |
| | LD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.150 | 0.230 | 0.307 | 0.230 | 0.083 |
| | | SD | 0.038 | 0.038 | 0.038 | 0.038 | 0.038 | 0.033 | 0.041 | 0.045 | 0.040 | 0.019 |
| 1000 | PCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.148 | 0.236 | 0.319 | 0.236 | 0.061 |
| | | SD | 0.013 | 0.013 | 0.013 | 0.013 | 0.014 | 0.012 | 0.014 | 0.017 | 0.015 | 0.011 |
| | HD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.237 | 0.320 | 0.237 | 0.059 |
| | | SD | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.011 | 0.014 | 0.015 | 0.014 | 0.008 |
| | SCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.148 | 0.236 | 0.319 | 0.237 | 0.060 |
| | | SD | 0.015 | 0.015 | 0.015 | 0.015 | 0.015 | 0.011 | 0.014 | 0.016 | 0.014 | 0.013 |
| | LD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.237 | 0.320 | 0.237 | 0.059 |
| | | SD | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.011 | 0.014 | 0.015 | 0.013 | 0.008 |
| 10,000 | PCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.236 | 0.320 | 0.237 | 0.060 |
| | | SD | 0.006 | 0.006 | 0.006 | 0.006 | 0.006 | 0.008 | 0.006 | 0.011 | 0.006 | 0.008 |
| | HD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.236 | 0.320 | 0.237 | 0.060 |
| | | SD | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.005 | 0.004 | 0.002 |
| | SCS | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.236 | 0.320 | 0.237 | 0.060 |
| | | SD | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.004 | 0.006 | 0.008 | 0.006 | 0.008 |
| | LD | Mean | 0.200 | 0.200 | 0.200 | 0.200 | 0.200 | 0.147 | 0.236 | 0.320 | 0.237 | 0.060 |
| | | SD | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.005 | 0.005 | 0.005 | 0.002 |
Table 6. Scenario IV: Means and SDs of 4 distances ($PCS$, $HD$, $SCS$, $LD$). A $5 \times 5$ contingency table was generated having fixed the row marginal probabilities at (0.04, 0.20, 0.20, 0.20, 0.36). The number of MC replications used is 10,000.

| N | Statistical Distance | Summary | m̂β1 | m̂β2 | m̂β3 | m̂β4 | m̂β5 | π̂x1 | π̂x2 | π̂x3 | π̂x4 | π̂x5 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100 | PCS | Mean | 0.074 | 0.197 | 0.197 | 0.197 | 0.335 | 0.214 | 0.173 | 0.228 | 0.132 | 0.253 |
| | | SD | 0.022 | 0.037 | 0.038 | 0.038 | 0.045 | 0.038 | 0.035 | 0.039 | 0.031 | 0.041 |
| | HD | Mean | 0.070 | 0.194 | 0.195 | 0.195 | 0.346 | 0.215 | 0.170 | 0.231 | 0.126 | 0.258 |
| | | SD | 0.015 | 0.039 | 0.039 | 0.039 | 0.048 | 0.041 | 0.037 | 0.042 | 0.030 | 0.044 |
| | SCS | Mean | 0.074 | 0.194 | 0.195 | 0.195 | 0.342 | 0.214 | 0.173 | 0.229 | 0.131 | 0.253 |
| | | SD | 0.015 | 0.039 | 0.039 | 0.039 | 0.048 | 0.038 | 0.035 | 0.040 | 0.030 | 0.041 |
| | LD | Mean | 0.071 | 0.195 | 0.196 | 0.196 | 0.342 | 0.214 | 0.172 | 0.230 | 0.128 | 0.256 |
| | | SD | 0.015 | 0.037 | 0.038 | 0.038 | 0.046 | 0.040 | 0.036 | 0.041 | 0.030 | 0.042 |
| 1000 | PCS | Mean | 0.042 | 0.200 | 0.200 | 0.200 | 0.358 | 0.217 | 0.168 | 0.234 | 0.119 | 0.262 |
| | | SD | 0.011 | 0.014 | 0.013 | 0.013 | 0.017 | 0.014 | 0.013 | 0.014 | 0.014 | 0.015 |
| | HD | Mean | 0.039 | 0.200 | 0.200 | 0.200 | 0.361 | 0.217 | 0.167 | 0.235 | 0.118 | 0.263 |
| | | SD | 0.006 | 0.013 | 0.013 | 0.013 | 0.015 | 0.013 | 0.012 | 0.013 | 0.010 | 0.014 |
| | SCS | Mean | 0.039 | 0.200 | 0.200 | 0.200 | 0.361 | 0.217 | 0.168 | 0.234 | 0.118 | 0.263 |
| | | SD | 0.007 | 0.013 | 0.013 | 0.013 | 0.016 | 0.016 | 0.013 | 0.014 | 0.010 | 0.015 |
| | LD | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.217 | 0.167 | 0.235 | 0.118 | 0.263 |
| | | SD | 0.006 | 0.013 | 0.013 | 0.013 | 0.015 | 0.013 | 0.012 | 0.013 | 0.010 | 0.014 |
| 10,000 | PCS | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.217 | 0.167 | 0.235 | 0.118 | 0.263 |
| | | SD | 0.008 | 0.005 | 0.007 | 0.007 | 0.009 | 0.006 | 0.005 | 0.005 | 0.007 | 0.006 |
| | HD | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.217 | 0.167 | 0.235 | 0.118 | 0.263 |
| | | SD | 0.002 | 0.004 | 0.004 | 0.004 | 0.005 | 0.004 | 0.004 | 0.004 | 0.003 | 0.004 |
| | SCS | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.217 | 0.167 | 0.235 | 0.118 | 0.263 |
| | | SD | 0.002 | 0.004 | 0.004 | 0.004 | 0.005 | 0.006 | 0.005 | 0.007 | 0.003 | 0.008 |
| | LD | Mean | 0.040 | 0.200 | 0.200 | 0.200 | 0.360 | 0.217 | 0.167 | 0.235 | 0.118 | 0.263 |
| | | SD | 0.002 | 0.004 | 0.004 | 0.004 | 0.005 | 0.004 | 0.004 | 0.005 | 0.003 | 0.005 |
Table 7. Means, Absolute Biases and Overall Absolute Bias of the Hellinger distance ($HD$). The data were concurrently generated with a given correlation structure (an overall correlation matrix $\Sigma$) and consist of a discrete variable $X$ with marginal probability vector $(1/3, 1/3, 1/3)$ and a continuous vector $Y = (Y_1, Y_2, Y_3) \sim MVN_3(\mu, I_3)$, where $\mu^T = (0, 0, 0)$ and $I_3$ is a $(3 \times 3)$ identity matrix. The number of MC replications used is 1000.

| ρ_ON | N | Summary | π̂x1 | π̂x2 | π̂x3 | μ̂y1 | μ̂y2 | μ̂y3 |
|---|---|---|---|---|---|---|---|---|
| 0.0 | 50 | Mean | 0.332 | 0.340 | 0.329 | 0.016 | 0.011 | −0.011 |
| | | Abs. Biases | 0.001 | 0.007 | 0.004 | 0.016 | 0.011 | 0.011 |
| | | Overall Bias | 0.050 | | | | | |
| | 100 | Mean | 0.330 | 0.350 | 0.320 | 0.017 | −0.018 | −0.010 |
| | | Abs. Biases | 0.003 | 0.017 | 0.013 | 0.017 | 0.018 | 0.010 |
| | | Overall Bias | 0.078 | | | | | |
| | 1000 | Mean | 0.324 | 0.337 | 0.339 | 0.001 | −0.008 | 0.007 |
| | | Abs. Biases | 0.009 | 0.004 | 0.006 | 0.001 | 0.008 | 0.007 |
| | | Overall Bias | 0.035 | | | | | |
| 0.1 | 50 | Mean | 0.351 | 0.320 | 0.329 | −0.006 | 0.003 | 0.005 |
| | | Abs. Biases | 0.018 | 0.013 | 0.004 | 0.006 | 0.003 | 0.005 |
| | | Overall Bias | 0.049 | | | | | |
| | 100 | Mean | 0.330 | 0.323 | 0.347 | 0.001 | 0.005 | −0.004 |
| | | Abs. Biases | 0.003 | 0.010 | 0.014 | 0.001 | 0.005 | 0.004 |
| | | Overall Bias | 0.037 | | | | | |
| | 1000 | Mean | 0.327 | 0.343 | 0.330 | −0.021 | 0.008 | 0.003 |
| | | Abs. Biases | 0.006 | 0.010 | 0.003 | 0.021 | 0.008 | 0.003 |
| | | Overall Bias | 0.051 | | | | | |
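For the designs in Tables 7 and 8, correlated ordinal and normal data can be generated concurrently with the OrdNor package [40]. As a self-contained illustration only, the sketch below is a latent-Gaussian stand-in of ours, not OrdNor's algorithm: it thresholds one coordinate of a multivariate normal to obtain $X$, and the achieved ordinal-normal correlation only approximates the nominal $\rho_{ON}$.

```python
import numpy as np

# A latent-Gaussian stand-in (ours, not the OrdNor package [40]) for
# concurrent ordinal/normal generation: X with marginals (1/3, 1/3, 1/3)
# obtained by thresholding a latent normal that is correlated with Y.

def gen_mixed(n, rho_on, mu=(0.0, 0.0, 0.0), rng=None):
    rng = rng or np.random.default_rng()
    cov = np.eye(4)
    cov[0, 1:] = cov[1:, 0] = rho_on          # latent coordinate 0 drives X
    z = rng.multivariate_normal(np.zeros(4), cov, size=n)
    cuts = np.quantile(z[:, 0], [1/3, 2/3])   # equal-probability thresholds
    x = np.digitize(z[:, 0], cuts)            # X in {0, 1, 2}
    y = z[:, 1:] + np.asarray(mu)
    return x, y

x, y = gen_mixed(1000, rho_on=0.1, rng=np.random.default_rng(2))
print(np.bincount(x) / len(x), y.mean(axis=0).round(3))
```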
Table 8. Means, Absolute Biases and Overall Absolute Bias of the Hellinger distance ($HD$). The data were concurrently generated with a given correlation structure (an overall correlation matrix $\Sigma$) and consist of a discrete variable $X$ with marginal probability vector $(1/3, 1/3, 1/3)$ and a continuous vector $Y = (Y_1, Y_2, Y_3) \sim MVN_3(\mu, I_3)$, where $\mu^T = (0, 3, 6)$ and $I_3$ is a $(3 \times 3)$ identity matrix. The number of MC replications used is 1000.

| ρ_ON | N | Summary | π̂x1 | π̂x2 | π̂x3 | μ̂y1 | μ̂y2 | μ̂y3 |
|---|---|---|---|---|---|---|---|---|
| 0.0 | 50 | Mean | 0.340 | 0.328 | 0.332 | −0.004 | 2.606 | 5.227 |
| | | Abs. Biases | 0.007 | 0.005 | 0.001 | 0.004 | 0.394 | 0.773 |
| | | Overall Bias | 1.184 | | | | | |
| | 100 | Mean | 0.313 | 0.350 | 0.337 | −0.004 | 2.777 | 5.593 |
| | | Abs. Biases | 0.020 | 0.017 | 0.004 | 0.004 | 0.223 | 0.407 |
| | | Overall Bias | 0.675 | | | | | |
| | 1000 | Mean | 0.338 | 0.334 | 0.328 | 0.012 | 2.972 | 5.958 |
| | | Abs. Biases | 0.005 | 0.001 | 0.005 | 0.012 | 0.028 | 0.042 |
| | | Overall Bias | 0.093 | | | | | |
| 0.1 | 50 | Mean | 0.347 | 0.323 | 0.330 | −0.021 | 2.628 | 5.249 |
| | | Abs. Biases | 0.014 | 0.010 | 0.003 | 0.021 | 0.372 | 0.751 |
| | | Overall Bias | 1.171 | | | | | |
| | 100 | Mean | 0.317 | 0.343 | 0.340 | 0.017 | 2.817 | 5.615 |
| | | Abs. Biases | 0.016 | 0.010 | 0.007 | 0.017 | 0.183 | 0.385 |
| | | Overall Bias | 0.618 | | | | | |
| | 1000 | Mean | 0.334 | 0.320 | 0.346 | −0.013 | 2.988 | 5.956 |
| | | Abs. Biases | 0.001 | 0.013 | 0.013 | 0.013 | 0.012 | 0.044 |
| | | Overall Bias | 0.096 | | | | | |
| 0.2 | 50 | Mean | 0.324 | 0.333 | 0.343 | −0.004 | 2.589 | 5.240 |
| | | Abs. Biases | 0.009 | 0.000 | 0.010 | 0.004 | 0.411 | 0.760 |
| | | Overall Bias | 1.194 | | | | | |
| | 100 | Mean | 0.329 | 0.350 | 0.321 | 0.024 | 2.763 | 5.549 |
| | | Abs. Biases | 0.004 | 0.017 | 0.012 | 0.024 | 0.237 | 0.451 |
| | | Overall Bias | 0.745 | | | | | |
| | 1000 | Mean | 0.337 | 0.344 | 0.319 | −0.011 | 2.971 | 5.951 |
| | | Abs. Biases | 0.004 | 0.011 | 0.014 | 0.019 | 0.029 | 0.049 |
| | | Overall Bias | 0.118 | | | | | |
Table 9. Means and SDs of the Hellinger distance ($HD$). The data were concurrently generated with a given correlation structure (an overall correlation matrix $\Sigma$) and consist of a discrete variable $X$ with marginal probability vector $(1/3, 1/3, 1/3)$ and a continuous trivariate vector $Y = (Y_1, Y_2, Y_3) \sim \alpha \cdot MVN_3(0, I_3) + (1-\alpha) \cdot MVN_3(\mu, I_3)$, where $\mu^T = (3, 3, 3)$, $I_3$ is a $(3 \times 3)$ identity matrix and $\alpha = 1.00(0.05)0.80$ (that is, from 1.00 down to 0.80 in steps of 0.05) indicates the contamination level. The number of MC replications used is 1000.

| ρ_ON | N | α | Summary | π̂x1 | π̂x2 | π̂x3 | μ̂y1 | μ̂y2 | μ̂y3 |
|---|---|---|---|---|---|---|---|---|---|
| 0.0 | 1000 | 1.00 | Mean | 0.324 | 0.337 | 0.339 | 0.001 | −0.008 | 0.007 |
| | | | SD | 0.293 | 0.293 | 0.298 | 0.378 | 0.378 | 0.386 |
| | | 0.95 | Mean | 0.327 | 0.326 | 0.347 | 0.068 | 0.090 | 0.079 |
| | | | SD | 0.304 | 0.299 | 0.309 | 0.413 | 0.413 | 0.413 |
| | | 0.90 | Mean | 0.318 | 0.331 | 0.351 | 0.188 | 0.170 | 0.189 |
| | | | SD | 0.300 | 0.305 | 0.306 | 0.443 | 0.450 | 0.436 |
| | | 0.85 | Mean | 0.324 | 0.337 | 0.339 | 0.292 | 0.283 | 0.312 |
| | | | SD | 0.293 | 0.293 | 0.297 | 0.484 | 0.487 | 0.491 |
| | | 0.80 | Mean | 0.324 | 0.337 | 0.338 | 0.447 | 0.436 | 0.470 |
| | | | SD | 0.293 | 0.293 | 0.297 | 0.552 | 0.547 | 0.559 |
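The contaminated design of Table 9 can be simulated directly. The sketch below is ours; it assumes (as in the table, where $\rho_{ON} = 0.0$) that $X$ and $Y$ are generated independently, with $X \sim (1/3, 1/3, 1/3)$ and $Y \sim \alpha\, MVN_3(0, I_3) + (1-\alpha)\, MVN_3(\mu, I_3)$, $\mu = (3, 3, 3)$.

```python
import numpy as np

# A sketch (ours) of one Monte Carlo replicate of the Table 9 design,
# assuming X and Y independent (rho_ON = 0 as in the table).

def one_replicate(n=1000, alpha=0.95, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.choice(3, size=n, p=[1/3, 1/3, 1/3])
    y = rng.standard_normal((n, 3))
    y[rng.random(n) >= alpha] += 3.0     # contaminated rows shifted by mu
    return x, y

x, y = one_replicate(alpha=0.90, rng=np.random.default_rng(1))
print(np.bincount(x) / len(x), y.mean(axis=0).round(3))  # means drift toward 0.3
```

At contamination level $1 - \alpha = 0.10$ the unadjusted sample means drift toward $0.1 \times 3 = 0.3$, which is the pull the robust $HD$ estimates in Table 9 partially resist.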