Article

Generalized Maximum Entropy Analysis of the Linear Simultaneous Equations Model

by Thomas L. Marsh 1,*, Ron Mittelhammer 2 and Nicholas Scott Cardell 3

1 School of Economic Sciences and IMPACT, Washington State University, Pullman, WA 99164, USA
2 School of Economic Sciences and Statistics, Washington State University, Pullman, WA 99164, USA
3 Salford Systems, San Diego, CA 92126, USA
* Author to whom correspondence should be addressed.
Entropy 2014, 16(2), 825-853; https://doi.org/10.3390/e16020825
Submission received: 20 November 2013 / Revised: 17 January 2014 / Accepted: 28 January 2014 / Published: 12 February 2014
(This article belongs to the Special Issue Maximum Entropy and Its Application)

Abstract

A generalized maximum entropy estimator is developed for the linear simultaneous equations model. Monte Carlo sampling experiments are used to evaluate the estimator’s performance in small and medium sized samples, suggesting contexts in which the current generalized maximum entropy estimator is superior in mean square error to two and three stage least squares. Analytical results are provided relating to asymptotic properties of the estimator and associated hypothesis testing statistics. Monte Carlo experiments are also used to provide evidence on the power and size of test statistics. An empirical application is included to demonstrate the practical implementation of the estimator.

1. Introduction

The simultaneous equations model (SEM) is applied extensively in econometric-statistical studies. Examples of traditional estimators for the SEM include two stage least squares [1], three stage least squares [2], limited information maximum likelihood [3], and full information maximum likelihood [4,5]. These estimators yield consistent estimates of structural parameters by correcting for simultaneity between the endogenous variables and the disturbance terms of the statistical model. However, in the presence of small samples or ill-posed problems, traditional approaches may provide parameter estimates with high variance and/or bias, or provide no solution at all. As an alternative to traditional estimators, we present a generalized maximum entropy estimator for the linear SEM and rigorously analyze its sampling properties in small and large sample situations including the case of contaminated error models.

Finite sampling properties of the SEM have been discussed in [6–10], where alternative estimation techniques that have potentially superior sampling properties are suggested. Specifically, these studies discussed limitations of asymptotically justified estimators in finite sample situations and the lack of research on estimators that have small sample justification. In a special issue of The Journal of Business and Economic Statistics, the authors of [11,12] examined small sample properties of generalized method of moments estimators for model parameters and covariance matrices. References [13–15] pointed out that even small deviations from model assumptions in parametric econometric-statistical models that are only asymptotically justified can lead to undesirable outcomes. Moreover, Reference [16] singled out the extreme sensitivity of least squares estimators to modest departures from strictly Gaussian conditions as a justification for examining robust methods of estimation. These studies motivate the importance of investigating alternatives to parameter estimation methods for the SEM that are robust in finite samples and lead to improved prediction, forecasting, and policy analysis.

The principle of maximum entropy has been applied in a variety of modeling contexts. Reference [10] proposed estimation of the SEM based on generalized maximum entropy (GME) to deal with small samples or ill-posed problems, and defined a criterion that balances the entropy in both the parameter and residual spaces. The estimator was justified on information theoretic grounds, but the repeated sampling properties of the estimator and its asymptotic properties were not analyzed extensively. Reference [17] suggested an information theoretic estimator based on minimization of the Kullback-Leibler Information Criterion as an alternative to optimally-weighted generalized method of moments estimation that can accommodate weakly dependent data generating mechanisms. Subsequently, [18] investigated an information theoretic estimator based on minimization of the Cressie-Read discrepancy statistic as an alternative approach to inference in models whose data information was cast in terms of moment conditions. Reference [18] identified both exponential empirical likelihood (negative entropy) and empirical likelihood as special cases of the Cressie-Read power divergence statistic. More recently, [19,20] applied the Kullback-Leibler Information Criterion to define empirical moment equations leading to estimators with improved predictive accuracy and mean square error in some small sample estimation contexts. Reference [21] provided an overview of information theoretic estimators for the SEM. Reference [22] demonstrated that maximum entropy estimation of the SEM has relevant application to spatial autoregressive models, wherein autocorrelation parameters are inherently bounded, and in circumstances when traditional spatial estimators become unstable. Reference [23] examined the effect of management factors on enterprise performance using a GME SEM estimator. Finally, [24] estimated spatial structural equation models, also extended to a panel data framework.

In this paper we investigate a GME estimator for the linear SEM that is fundamentally different from traditional approaches and identify classes of problems (e.g., contaminated error models) in which the proposed estimator outperforms traditional estimators. The estimator: (1) is completely consistent with data and other model information constraints on parameters, even in finite samples; (2) has large sample justification in that, under regularity conditions, it retains properties of consistency and asymptotic normality to provide practitioners with means to apply standard hypothesis testing procedures; and (3) has the potential for improved finite sample properties relative to alternative traditional methods of estimation. The proposed estimator is a one-step instrumental variable-type estimator based on a nonlinear-in-parameters SEM model discussed in [1,7,25]. The method does not deal with data information by projecting it in the form of moment constraints but rather, in GME parlance, is based on data constraints that deal with the data in individual sample observation form. Additional information utilized in the GME estimator includes finite support spaces that are imposed on model parameters and disturbances, which allows users to incorporate a priori interval restrictions on the parameters of the model.

Monte Carlo (MC) sampling experiments are used to investigate the finite sample performance of the proposed GME estimator. In the small sample situations analyzed, the GME estimator is superior to two and three stage least squares based on mean square error considerations. Further, we demonstrate the improved robustness of GME relative to 3SLS in the case of contaminated error models. For larger sample sizes, the consistency of the GME estimator results in sampling behavior that emulates that of 2SLS and 3SLS estimators. Observations on power and size of asymptotic test statistics suggest that the GME does not dominate, nor is it dominated by, traditional testing methods. An empirical application is provided to demonstrate practical implementation of the GME estimator and to delineate inherent differences between GME and traditional estimators in finite samples. The empirical analysis also highlights the sensitivity of GME coefficient estimates and predictive fit to specification of error truncation points, underscoring the need for care in specifying the empirical error support.

2. The GME-Parameterized Simultaneous Equations Model

Consider the SEM with G equations, which can be written in matrix form as:

$$Y\Gamma + XB + E = 0 \tag{1}$$

where $Y = (y_1, \ldots, y_G)$ is an $(N \times G)$ matrix of jointly determined endogenous variables, $\Gamma = (\Gamma_1, \ldots, \Gamma_G)$ is an invertible $(G \times G)$ matrix of structural coefficients of the endogenous variables, $X = (x_1, \ldots, x_K)$ is an $(N \times K)$ matrix of exogenous variables that has full column rank, $B = (B_1, \ldots, B_G)$ is a $(K \times G)$ matrix of coefficients of exogenous variables, and $E = (\varepsilon_1, \ldots, \varepsilon_G)$ is an $(N \times G)$ matrix of unobserved random disturbances. The standard stochastic assumptions on the disturbance vectors are that $E[\varepsilon_i] = 0$ for $i = 1,\ldots,G$ and $E[\varepsilon_i \varepsilon_j'] = \sigma_{ij} I_N$ for $i,j = 1,\ldots,G$. Letting $\varepsilon = \mathrm{vec}(\varepsilon_1, \ldots, \varepsilon_G)$ denote the vertical concatenation of the vectors $\varepsilon_1, \ldots, \varepsilon_G$, the covariance matrix is given by $E[\varepsilon \varepsilon'] = \Sigma \otimes I_N$, where the $(G \times G)$ matrix $\Sigma$ contains the unknown $\sigma_{ij}$'s for $i,j = 1,\ldots,G$.

The reduced form model is obtained by post-multiplying Equation (1) by $\Gamma^{-1}$ and solving for $Y$ as:

$$Y = X(-B\Gamma^{-1}) + (-E\Gamma^{-1}) = X\Pi + V \tag{2}$$

where $\Pi = (\pi_1, \ldots, \pi_G)$ is a $(K \times G)$ matrix of reduced form coefficients and $V = (v_1, \ldots, v_G)$ is an $(N \times G)$ matrix of reduced form disturbances. The reduced form for the $i$th endogenous variable is:

$$y_i = X\pi_i + v_i \tag{3}$$

The ith equation in Equation (1) can be rewritten in terms of a nonlinear structural parameter representation of the reduced form model as [1]:

$$y_i = X\Pi_{(-i)}\gamma_i + X_i\beta_i + \mu_i = Z_i\delta_i + \mu_i \tag{4}$$

where $E[Y_{(-i)}] = X\Pi_{(-i)}$, $\mu_i = \varepsilon_i + (Y_{(-i)} - E[Y_{(-i)}])\gamma_i$, $Z_i = (X\Pi_{(-i)} \;\; X_i)$, and $\delta_i = \mathrm{vec}(\gamma_i, \beta_i)$.

In general, the notation $(-i)$ in the subscript of a variable represents the explicit exclusion of the $i$th column vector, such as $y_i$ being excluded from $Y$ to form $Y_{(-i)}$, in addition to the exclusion of any other column vectors implied by the structural restrictions. Then $Y_{(-i)}$ represents an $(N \times G_i)$ matrix of $G_i$ jointly dependent explanatory variables having nonzero coefficients in the $i$th equation, $\gamma_i$ is the corresponding $(G_i \times 1)$ subvector of the structural parameter vector $\Gamma_i$, $X_i$ is an $(N \times K_i)$ matrix that represents the $K_i$ exogenous variables with nonzero coefficients in the $i$th equation, and $\beta_i$ is the corresponding $(K_i \times 1)$ subvector of the parameter vector $B_i$. It is assumed that the linear exclusion restrictions on the structural parameters are sufficient to identify each equation. The $(K \times G_i)$ matrix of reduced form coefficients $\Pi_{(-i)}$ coincides with the endogenous variables in $Y_{(-i)}$.

Historically, Equation (4) has provided motivation for the two stage least squares (2SLS) and three stage least squares (3SLS) estimators. The presence of the right hand side endogenous variables $Y_{(-i)}$ yields biased and inconsistent ordinary least squares estimates [1]. In 2SLS and 3SLS, the first stage is to approximate $E[Y_{(-i)}]$ by applying ordinary least squares (OLS) to the unrestricted reduced form model in Equation (2) and thereby obtain predicted values of $Y_{(-i)}$. Then, using the predicted values to replace $E[Y_{(-i)}]$, the second stage is to estimate the model in Equation (4) with OLS. In the event that the error terms are normally distributed, homoskedastic, and serially independent, the 3SLS estimator is asymptotically equivalent to the asymptotically efficient full-information maximum likelihood (FIML) estimator [21]. Under the same conditions, it is equivalent to apply FIML to either Equation (1) or to Equation (4) under the restriction $\Pi = -B\Gamma^{-1}$.
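To make the two-stage logic concrete, the following is a minimal numerical sketch of the procedure just described (the function and variable names are hypothetical, not from the paper):

```python
import numpy as np

def two_sls(y_i, Y_minus_i, X_i, X):
    """Two stage least squares for the i-th structural equation (sketch).

    y_i:       (N,)      left-hand-side endogenous variable
    Y_minus_i: (N, G_i)  included right-hand-side endogenous variables
    X_i:       (N, K_i)  included exogenous variables
    X:         (N, K)    all exogenous variables in the system
    """
    # Stage 1: OLS of the included endogenous variables on all exogenous
    # variables, i.e., the unrestricted reduced form in Equation (2)
    Pi_hat, *_ = np.linalg.lstsq(X, Y_minus_i, rcond=None)
    Y_hat = X @ Pi_hat                       # predicted values replacing E[Y_(-i)]
    # Stage 2: OLS of y_i on the predicted endogenous values and X_i, Equation (4)
    Z_i = np.column_stack([Y_hat, X_i])
    delta_hat, *_ = np.linalg.lstsq(Z_i, y_i, rcond=None)
    return delta_hat                         # vec(gamma_i, beta_i)
```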

2.1. GME Estimation of the SEM

Following the maximum entropy principle, the entropy of a distribution of probabilities $q = (q_1, \ldots, q_N)$, $\sum_{n=1}^{N} q_n = 1$, is defined by:

$$H(q) = -\sum_{n=1}^{N} q_n \ln q_n$$

in [26]. The value of $H(q)$ reaches a maximum when $q_n = N^{-1}$ for $n = 1,\ldots,N$, which characterizes the uniform distribution. Generalizations of the entropy function that have been examined elsewhere in the econometrics and statistics literature include the Cressie-Read power divergence statistic [18], the Kullback-Leibler Information Criterion [27], and the α-entropy measure [28]. We restrict our analysis to the entropy objective function due to its efficiency and robustness properties [18], and its current universal use within the context of GME applications [9].
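As a quick numerical illustration of this maximum-at-uniformity property (a sketch with arbitrary example weights):

```python
import numpy as np

def entropy(q):
    """Shannon entropy H(q) = -sum_n q_n ln q_n for a probability vector q."""
    q = np.asarray(q, dtype=float)
    return -np.sum(q * np.log(q))

print(entropy([0.25, 0.25, 0.25, 0.25]))  # uniform: ln(4) ~ 1.386, the maximum
print(entropy([0.70, 0.10, 0.10, 0.10]))  # less uniform: ~ 0.940, strictly smaller
```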

GME estimators previously proposed for the SEM include (a) the data constrained estimator for the general linear model, hereafter GME-D, which amounts to applying the GME principle to a vectorized version of the structural model in Equation (1); and (b) a two stage estimator analogous to 2SLS, whereby GME-D is applied to the reduced form model in the first stage and to the structural model in the second stage, hereafter GME-2S. Alternatively, [10] applied the GME principle to the reduced form model in Equation (3) with the restriction $\Pi = -B\Gamma^{-1}$ imposed, hereafter GME-GJM.
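For concreteness, the following is a minimal sketch of a data-constrained estimator in the spirit of GME-D for a single linear equation, using hypothetical toy data, hypothetical support values, and a generic nonlinear programming solver; the authors' own implementation is not shown here and may differ:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical toy data: y = X beta + e with N = 20, K = 3 (not from the paper)
N, K, M = 20, 3, 3
X = rng.normal(size=(N, K))
beta_true = np.array([1.0, -0.5, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=N)

s_beta = np.array([-5.0, 0.0, 5.0])                 # coefficient support (contains beta_true)
s_err = 3.0 * y.std() * np.array([-1.0, 0.0, 1.0])  # three-sigma error support

def unpack(v):
    return v[:K * M].reshape(K, M), v[K * M:].reshape(N, M)

def neg_entropy(v):                    # minimizing sum v ln v maximizes entropy
    return np.sum(v * np.log(v))

def data_constraint(v):                # y = X (S_beta p) + S_w w must hold exactly
    p, w = unpack(v)
    return X @ (p @ s_beta) + w @ s_err - y

def adding_up(v):                      # each weight vector sums to one
    p, w = unpack(v)
    return np.concatenate([p.sum(axis=1) - 1.0, w.sum(axis=1) - 1.0])

v0 = np.full((K + N) * M, 1.0 / M)     # uniform (maximum entropy) starting weights
res = minimize(neg_entropy, v0, method="SLSQP",
               bounds=[(1e-8, 1.0)] * v0.size,
               constraints=[{"type": "eq", "fun": data_constraint},
                            {"type": "eq", "fun": adding_up}])
p_hat, _ = unpack(res.x)
print("GME-D coefficient estimates:", p_hat @ s_beta)
```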

Our approach follows 2SLS and 3SLS in the sense that the restriction $\Pi = -B\Gamma^{-1}$ is not explicitly enforced and that $E[Y_{(-i)}]$ is algebraically replaced by $X\Pi_{(-i)}$. However, unlike 2SLS and 3SLS, our approach is formulated under the GME principle, with Equation (4) retained as a nonlinear constraint and solved concurrently with the unrestricted reduced form model in Equation (3) to identify structural and reduced form coefficient estimates. Reference [7] refers to Equations (3) and (4) as a nonlinear-in-parameters (NLP) form of the SEM model.

To formulate a GME estimator for the NLP model of the SEM, henceforth referred to as GME-NLP, the parameters and disturbance terms of Equations (3) and (4) are reparameterized as convex combinations of reference support points and unknown convexity weights. Support matrices $S^i$ for $i = \pi, \gamma, \beta, z, w$ that identify finite bounded feasible spaces for the individual parameters, and weight vectors $p^{\beta}, p^{\gamma}, p^{\pi}, z, w$ that consist of unknown parameters to be estimated, are explicitly defined below. The parameters are redefined as $\beta = \mathrm{vec}(\beta_1, \ldots, \beta_G) = S^{\beta}p^{\beta}$, $\gamma = \mathrm{vec}(\gamma_1, \ldots, \gamma_G) = S^{\gamma}p^{\gamma}$, and $\pi = \mathrm{vec}(\pi_1, \ldots, \pi_G) = S^{\pi}p^{\pi}$, while the disturbance vectors are defined as $v = \mathrm{vec}(v_1, \ldots, v_G) = S^{z}z$ and $\mu = \mathrm{vec}(\mu_1, \ldots, \mu_G) = S^{w}w$. Using these identities and letting $p = \mathrm{vec}(p^{\beta}, p^{\gamma}, p^{\pi}, z, w)$, the estimates of $\pi$, $\gamma$, and $\beta$ are obtained by solving the constrained GME problem:

$$\max_{p}\{-p'\ln p\} \tag{5}$$

subject to:

$$y = (I_G \otimes X)\big(S^{\pi}_{(-)}p^{\pi}\big)\big(S^{\gamma}p^{\gamma}\big) + X_{\beta}\big(S^{\beta}p^{\beta}\big) + S^{w}w \tag{6}$$

$$y = (I_G \otimes X)\big(S^{\pi}p^{\pi}\big) + S^{z}z \tag{7}$$

$$\big(I_{Q+2NG} \otimes 1_M'\big)p = 1_{Q+2NG} \tag{8}$$

The $S^i$ support matrices (for $i = \pi, \gamma, \beta, z, w$) present in Equations (6) and (7) consist of user-supplied reference support points defining feasible spaces for parameters and disturbances. For example, $S^w$ is given by:

$$S^{w} = \begin{pmatrix} S^{w}_{1} & 0 & \cdots & 0 \\ 0 & S^{w}_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & S^{w}_{G} \end{pmatrix}_{(GN \times GNM)} \qquad S^{w}_{i} = \begin{pmatrix} s^{w\prime}_{1i} & 0 & \cdots & 0 \\ 0 & s^{w\prime}_{2i} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & s^{w\prime}_{Ni} \end{pmatrix}_{(N \times NM)} \qquad s^{w}_{ni} = \begin{pmatrix} s^{w}_{ni1} \\ s^{w}_{ni2} \\ \vdots \\ s^{w}_{niM} \end{pmatrix}_{(M \times 1)}$$

where the $n$th disturbance term of the $g$th equation with $M$ support points is defined, in summation notation, as $\mu_{ng} = \sum_{m=1}^{M} s^{w}_{ngm} w_{ngm}$. Similarly, the $k$th $\beta$ parameter of the $g$th equation is defined by $\beta_{kg} = \sum_{m=1}^{M} s^{\beta}_{kgm} p^{\beta}_{kgm}$. For notational convenience, the number of support points has been defined as $M \ge 2$ for both errors and parameters.
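In code, this convex-combination reparameterization is simply an inner product of support points and convexity weights (illustrative numbers, not values used in the paper):

```python
import numpy as np

s = np.array([-5.0, 0.0, 5.0])   # reference support points for one coefficient (M = 3)
p = np.array([0.2, 0.3, 0.5])    # convexity weights: nonnegative and summing to one
beta_kg = s @ p                  # sum_m s_m p_m = 1.5; uniform weights would give 0.0
```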

In Equation (6), the matrix $S^{\pi}_{(-)}$ defines the reference supports for the block diagonal matrix $\tilde{\Pi} = \mathrm{diag}\big(\Pi_{(-1)}, \ldots, \Pi_{(-G)}\big)$, while $X_{\beta} = \mathrm{diag}(X_1, \ldots, X_G)$ is a $(GN \times \bar{K})$ block diagonal matrix and $y = \mathrm{vec}(y_1, \ldots, y_G)$ is a $(GN \times 1)$ vector of endogenous variables. In Equations (6) and (7), the $(NGM \times 1)$ vectors $w = \mathrm{vec}(w_{11}, \ldots, w_{NG})$ and $z = \mathrm{vec}(z_{11}, \ldots, z_{NG})$ represent vertical concatenations of sets of $(M \times 1)$ subvectors for $n = 1,\ldots,N$ and $g = 1,\ldots,G$, where each subvector $w_{ng} = (w_{ng1}, \ldots, w_{ngM})'$ and $z_{ng} = (z_{ng1}, \ldots, z_{ngM})'$ contains a set of $M$ convex weights. Also, $p^{\pi} = \mathrm{vec}(p^{\pi}_{11}, \ldots, p^{\pi}_{KG})$ is a $(KGM \times 1)$ vector that consists of convex weights $p^{\pi}_{kg} = (p^{\pi}_{kg1}, \ldots, p^{\pi}_{kgM})'$ for $k = 1,\ldots,K$ and $g = 1,\ldots,G$. The $(\bar{G}M \times 1)$ vector $p^{\gamma} = \mathrm{vec}(p^{\gamma}_{11}, \ldots, p^{\gamma}_{GG})$ and the $(\bar{K}M \times 1)$ vector $p^{\beta} = \mathrm{vec}(p^{\beta}_{11}, \ldots, p^{\beta}_{KG})$ are similarly defined. Equation (8) contains the required adding-up conditions for each of the sets of convexity weights used in forming the GME-NLP estimator. Nonnegativity of the weights is an inherent characteristic of the maximum entropy objective and does not need to be explicitly enforced with inequality constraints. Regarding notation in Equation (8), $I_{Q+2NG}$ represents an identity matrix of dimension $Q + 2NG$ and $1_M$ is an $(M \times 1)$ unit vector. Letting $\bar{K} = \sum_{i=1}^{G} K_i$ denote the number of unknown $\beta_{kg}$'s and $\bar{G} = \sum_{i=1}^{G} G_i$ denote the number of unknown $\gamma_{ig}$'s, then together with the $KG$ reduced form parameters, the $\pi_{kg}$'s, the total number of unknown parameters in the structural and reduced form equations is $Q = \bar{K} + \bar{G} + KG$.

Optimizing the objective function defined in Equation (5) optimizes the entropy in the parameter and disturbance spaces for both the structural model in Equation (6) and the reduced form model in Equation (7). The optimized objective function can mitigate the detrimental effects of ill-conditioned explanatory and/or instrumental variables and extreme outliers due to heavy tailed sampling distributions. In these circumstances traditional estimators are unstable and often represent an unsatisfactory basis for estimation and inference [20,25,29].

We emphasize that the proposed GME-NLP is a data-constrained estimator. Equations (5)–(8) constitute a data-constrained model in which the regression models themselves, as opposed to moment conditions based on them, represent constraining functions on the entropy objective function. Reference [16] pointed out that outside the Gaussian error model, estimation based on sample moments can be inefficient relative to other procedures. Reference [9] provided MC evidence that data-constrained GME models, making use of the full set of observations, outperformed moment-constrained GME models in mean square error. In the GME-NLP model, the constraints in Equations (6) and (7) remain completely consistent with the sample data information in Equations (3) and (4).

We also emphasize that the proposed GME-NLP estimator is a one-step approach, simultaneously solving for reduced form and structural parameters. As a result, the nonlinear specification of Equation (6) leads to first order optimization conditions (Equation (A16), derived in the Appendix) that are different from those of other multiple-step or asymptotically justified estimators. The most obvious difference is that the first order conditions do not require orthogonality between right hand side variables and error terms, i.e., GME-NLP relaxes the orthogonality condition between instruments and the structural error term. Perhaps more importantly, multiple-step estimators (e.g., 2SLS or GME-2S) only approximate the NLP model and ignore nonlinear interactions between reduced and structural form coefficients. Thus, the constraints in Equations (6) and (7) are not completely satisfied by multiple-step procedures, yielding an estimator that is not fully consistent with the entire information set underlying the specification of the model. Although this is not a critical issue in large sample estimation, as demonstrated below, estimation inefficiency can be substantial in small samples if multiple-step estimators do not adequately approximate the NLP model.

The proposed GME-NLP estimator has some econometric limitations similar to, and other limitations which set it apart from, 2SLS that are evident when inspecting Equations (5)–(8). Firstly, like 2SLS, the residuals in Equations (4) and (6) are not identical to those of the original structural model, nor are they the same as the reduced form error term, except when evaluated at the true parameter values. Secondly, the GME-NLP estimator does not attempt to correct for contemporaneous correlation among the errors of the structural equations. Although a relevant efficiency issue, contemporaneous correlation is left for future research. Thirdly, and perhaps most importantly, the use of bounded disturbance support spaces in GME estimation introduces a specification issue in empirical analysis that typically does not arise with traditional estimators. These issues are discussed in more detail ahead.

2.2. Parameter Restrictions

In practice, parameter restrictions for coefficients of the SEM have been imposed using constrained maximum likelihood or Bayesian regression [7,30]. Neither approach is necessarily simple to specify analytically or to estimate empirically, and each has its empirical advantages and disadvantages. For example, Bayesian estimation is well-suited for representing uncertainty with respect to model parameters, but it can also require extensive MC sampling when numerical estimation techniques are required, as is often the case in non-normal, non-conjugate prior model contexts. In comparison to constrained maximum likelihood or Bayesian analysis, the GME-NLP estimator also enforces restrictions on parameter values, is arguably no more difficult to specify or estimate, and does not require the use of MC sampling in the estimation phase of the analysis. Moreover, and in contrast to constrained maximum likelihood or the typical parametric Bayesian analysis, GME-NLP does not require explicit specification of the distributions of the disturbance terms or of the parameter values. However, both the coefficient and the disturbance support spaces are compact in the GME-NLP estimation method, which may not apply in some idealized empirical modeling contexts.

Imposing bounded support spaces on coefficients and error terms has several implications for GME estimation. Consider support spaces for coefficients. Selecting bounds and intermediate reference support points provides an effective way to restrict parameters of the model to intervals. If prior knowledge about coefficients is limited, wider truncation points can be used to increase the confidence that the support space contains the true β. If knowledge exists about, say, the sign of a specific coefficient from economic theory, this can be straightforwardly imposed together with a reasonable bound on the coefficient.

Importantly, there is a bias-efficiency tradeoff that arises when parameter support spaces are specified in terms of bounded intervals. A disadvantage of bounded intervals is that they will generally introduce bias into the GME estimator unless the intervals happen to be centered on the true values of the parameters. An advantage of restricting parameters to finite intervals is that doing so can increase efficiency by lowering parameter estimation variability. In the MC analysis ahead, it is demonstrated that the bias introduced by bounded parameter intervals in the GME-NLP estimator can be more than compensated for by substantial decreases in variability, leading to notable increases in overall estimation efficiency.

In practice, support spaces for disturbances can always be chosen in a manner that provides a reasonable approximation to the true disturbance distribution because upper and lower truncation points can always be selected sufficiently wide to contain the true disturbances of regression models [31]. The number, M, of support points for each disturbance can be chosen to account for additional information relating to higher moments (e.g., skewness and kurtosis) of each disturbance term. MC experiments by [9] demonstrated that support points ranging from 2 to 10 are acceptable for empirical applications.

For the GME-NLP estimator, identifying bounds for the disturbance support spaces is complicated by the interaction among truncation points of the parameters and disturbance support points of both the reduced and structural form models. Yet, several informative generalizations can be drawn. First, [32] demonstrated that ordinary least squares-like behavior can be obtained by appropriately selecting truncation points of the GME-D estimator of the general linear model. This has direct implications for SEM estimation in that appropriately selected truncation points of the GME-2S estimator lead to 2SLS-like behavior. However, as demonstrated ahead, given the nonlinear interactions between the structural and reduced form models, adjusting truncation points of the GME-NLP does not necessarily lead to two stage like behavior in finite samples. Second, the reduced form model in Equation (3) and the nonlinear structural parameter representation of the reduced form model in Equation (4) have identical error structure at the true parameter values. Hence, in the empirical applications below, we specify identical support matrices for the error terms of both the structural and reduced form models. Third, in the limiting case where the disturbance boundary points of the GME-NLP structural model expand in absolute value to infinity, the parameter estimates converge to the mean of their support points.

Given ignorance regarding the disturbance distribution, [9,10] suggest using a sample scale parameter and the multiple-sigma truncation rule to determine error bounds. For example, the three sigma rule for random variables states that the probability of a unimodal continuous random variable assuming outcomes distant from its mean by more than three standard deviations is at most 5% [33]. Intuitively, this multiple-sigma truncation rule provides a means of encompassing an arbitrarily large proportion of the disturbance support space. From the empirical evidence presented below, it appears that combining the three sigma rule with a sample scale parameter to estimate the GME-NLP model is a useful approach.
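A sketch of this support construction, combining a sample scale estimate with the multiple-sigma rule and a small feasibility widening constant (cf. condition R3 in Section 3); the helper below is hypothetical:

```python
import numpy as np

def error_support(residuals, j_sigma=3.0, omega=2.5, M=3):
    """Symmetric error support points via the j-sigma truncation rule (sketch).

    residuals: preliminary residuals used to estimate the error scale
    j_sigma:   number of standard deviations (e.g., 3, 4, or 5)
    omega:     widening constant ensuring feasibility of the entropy solution
    """
    sigma = np.std(residuals, ddof=1)               # sample scale parameter
    half_width = omega + j_sigma * sigma
    return np.linspace(-half_width, half_width, M)  # e.g. (-h, 0, h) for M = 3
```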

3. GME-NLP Asymptotic Properties and Inference

To derive consistency and asymptotic normality results for the GME-NLP estimator, we assume the following regularity conditions.

R1. The N rows of the $(N \times G)$ disturbance matrix $E$ are independent random drawings from a G-dimensional population with zero mean vector and unknown finite covariance matrix $\Sigma$.

R2. The $(N \times K)$ matrix $X$ of exogenous variables has rank $K$ and consists of nonstochastic elements, with $\lim_{N \to \infty}\left(\frac{1}{N}X'X\right) = \Omega$, where $\Omega$ is a positive definite matrix.

R3. The elements $\mu_{ng}$ of the vectors $\mu_g$ and $v_{ng}$ of $v_g$ ($n = 1,\ldots,N$, $g = 1,\ldots,G$) are independent and bounded such that $c_{g1} + \omega_g \le \mu_{ng} \le c_{gM} - \omega_g$ for some $\omega_g > 0$ and large enough positive $c_{gM} = -c_{g1}$. The probability density function of $\mu$ is assumed to be symmetric about the origin with a finite covariance matrix.

R4. $\pi_{kg} \in (\pi_{kgL}, \pi_{kgH})$, for finite $\pi_{kgL}$ and $\pi_{kgH}$, $\forall\, k = 1,\ldots,K$ and $g = 1,\ldots,G$;

$\gamma_{jg} \in (\gamma_{jgL}, \gamma_{jgH})$, for finite $\gamma_{jgL}$ and $\gamma_{jgH}$, $\forall\, j \ne g$, $j,g = 1,\ldots,G$; and $\gamma_{gg} = -1$;

$\beta_{kg} \in (\beta_{kgL}, \beta_{kgH})$, for finite $\beta_{kgL}$ and $\beta_{kgH}$, $\forall\, k = 1,\ldots,K$ and $g = 1,\ldots,G$.

R5. For the true $B$ and nonsingular $\Gamma$, there exist positive definite matrices $\Psi_g$ ($g = 1,\ldots,G$) such that $\lim_{N \to \infty}\left(\frac{1}{N}Z_g'Z_g\right) = \Psi_g$, where $\Pi = -B\Gamma^{-1}$.

Condition R1 allows the disturbances to be contemporaneously correlated. It also requires independence of the N rows of the $(N \times G)$ disturbance matrix $E$, which is stronger than the uncorrelated error assumptions introduced immediately following Equation (1). Conditions R1, R2, and R5 are typical assumptions made when deriving asymptotic properties for the 2SLS and 3SLS estimators of the SEM [1]. Condition R3 states that the supports of $\mu_{ng}$ and $v_{ng}$ are symmetric about the origin and can be contained in the interior of the closed and bounded intervals $[c_{g1}, c_{gM}]$. Extending the lower and upper bounds of the interval by (possibly arbitrarily small) $\omega_g > 0$ is a technical and computational convenience ensuring feasibility of the entropic solutions [32]. Condition R4 implies that the true values of the parameters $\pi_{kg}$, $\gamma_{jg}$, $\beta_{kg}$ can be enclosed within bounded intervals.

3.1. Estimator Properties

The regularity conditions (R1)–(R5) provide a basic set of assumptions sufficient to establish asymptotic properties for the GME-NLP estimator of the SEM. For notational convenience let $\theta = \mathrm{vec}(\pi, \delta)$, where we follow the standard convention that $\delta = \mathrm{vec}(\delta_1, \ldots, \delta_G)$. The theorems for consistency and asymptotic normality are stated below, with proofs in the Appendix.

Theorem 1. Under the regularity conditions R1–R5, the GME-NLP estimator, θ̂ =vec(π̂, δ̂), is a consistent estimator of the true coefficient values θ = vec (π, δ).

The intuition behind the proof is that without the reduced form component in Equation (7), the parameters of the structural component in Equation (6) are not identified. As shown in the Appendix, the reduced form component yields estimates that are consistent and contribute to identifying the structural parameters, and the structural component in Equation (6) ties the structural coefficients to the data and draws the GME-NLP estimates toward the true parameter values as the sample size increases.

Theorem 2. Under the conditions of Theorem 1, the GME-NLP estimator, $\hat{\delta} = \mathrm{vec}(\hat{\delta}_1, \ldots, \hat{\delta}_G)$, is asymptotically normally distributed as $\hat{\delta} \overset{a}{\sim} N\!\left(\delta,\; \frac{1}{N}\,\Omega_{\xi}^{-1}\Omega_{\Sigma}\Omega_{\xi}^{-1}\right)$.

The asymptotic covariance matrix consists of $\Omega_{\xi} = \mathrm{diag}(\xi_1\Psi_1, \ldots, \xi_G\Psi_G)$, which follows from R5, and $\xi_g = E[\xi^{w}_{ng}]$ with $\xi^{w}_{ng} = \frac{\partial \lambda^{w}(u_{ng})}{\partial u_{ng}} = \left(\sum_{m=1}^{M}(s^{w}_{ngm})^{2}\,w_{ngm}(\lambda^{w}(u_{ng})) - (u_{ng})^{2}\right)^{-1}$. The elements of $\Omega_{\Sigma}$ are defined by $\frac{1}{N}Z'(\Sigma_{\lambda} \otimes I)Z \to \Omega_{\Sigma}$, where $Z = \mathrm{diag}(Z_1, \ldots, Z_G)$ and $\Sigma_{\lambda}$ is a $(G \times G)$ covariance matrix for the $\lambda^{w}_{ng}$'s.

Estimators of the SEM are generally categorized as “full information” (e.g., 3SLS or FIML) or “limited information” (e.g., 2SLS or LIML) estimators. GME-NLP is not a full information estimator because the estimator neither enforces the restriction $\Pi = -B\Gamma^{-1}$ nor explicitly characterizes the contemporaneous correlation of the disturbance terms. An advantage of GME-NLP is that it is completely consistent with the data constraints in both small and large samples, because we concurrently estimate the parameters of the reduced form and structural models. As a limited information estimator, GME-NLP has several additional attractive characteristics. First, similar to other limited information estimators, it is likely to be more robust to misspecification than a full information alternative, because in the latter case misspecification of any one equation can lead to inconsistent estimation of all the equations in the system [34]. Second, GME-NLP is easily applied in the case of a single equation, G = 1, and it retains the asymptotic properties identified above. Finally, the single equation case is a natural generalization of the data-constrained GME estimator for the general linear model.

3.2. Hypothesis Tests

Because the GME-NLP estimator $\hat{\delta}$ is consistent and asymptotically normally distributed, asymptotically valid normal and chi-square test statistics can be used to test hypotheses about $\delta$. To implement such tests, a consistent estimate of the asymptotic covariance of $\hat{\delta}$, or $\Omega_{\xi}^{-1}\Omega_{\Sigma}\Omega_{\xi}^{-1}$, is required. The matrix $\Omega_{\xi}$ can be estimated using $\xi^{w}_{ng}(\hat{\delta})$ above, or alternatively by:

$$\hat{\xi}_{g}(\hat{\delta}) = \frac{1}{N}\sum_{n=1}^{N}\left(\sum_{m=1}^{M}(s^{w}_{ngm})^{2}\,w_{ngm}\big(\lambda^{w}(u_{ng}(\hat{\delta}))\big) - \big(u_{ng}(\hat{\delta})\big)^{2}\right)^{-1}$$

In the former case, based on the $\xi^{w}_{ng}(\hat{\delta})$, which are the elements of $\Xi^{w}_{g}$ as defined in the Appendix, $\hat{\Omega}_{\xi} = \mathrm{diag}\big(\frac{1}{N}\hat{Z}_{1}'(\hat{\Xi}^{w}_{1} \odot \hat{Z}_{1}), \ldots, \frac{1}{N}\hat{Z}_{G}'(\hat{\Xi}^{w}_{G} \odot \hat{Z}_{G})\big)$. In the latter case, based on $\hat{\xi}_{g}$ and $\hat{\Psi}_{g} = \big(\frac{1}{N}\hat{Z}_{g}'\hat{Z}_{g}\big)$, $\hat{\Omega}_{\xi} = \mathrm{diag}(\hat{\xi}_{1}\hat{\Psi}_{1}, \ldots, \hat{\xi}_{G}\hat{\Psi}_{G})$. A straightforward estimate of $\Omega_{\Sigma}$ can be constructed as $\hat{\Omega}_{\Sigma} = \frac{1}{N}\hat{Z}'(\hat{\Sigma}_{\lambda} \otimes I)\hat{Z}$. The $(G \times G)$ matrix $\Sigma_{\lambda}$ can be estimated by $\hat{\sigma}^{\lambda}_{ij} = \frac{1}{N}\lambda^{w}(u_{i}(\hat{\delta}))'\lambda^{w}(u_{j}(\hat{\delta}))$ for $i,j = 1,\ldots,G$. Combining these elements, the estimated asymptotic covariance matrix of $\hat{\delta}$ is defined as $\widehat{\mathrm{Var}}(\hat{\delta}) = \frac{1}{N}\hat{\Omega}_{\xi}^{-1}\hat{\Omega}_{\Sigma}\hat{\Omega}_{\xi}^{-1}$.
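Assembled in code, the estimate is a standard sandwich form (a sketch; the inputs are assumed to be the estimated matrices defined above):

```python
import numpy as np

def var_delta_hat(Omega_xi_hat, Omega_Sigma_hat, N):
    """Estimated asymptotic covariance (1/N) Omega_xi^{-1} Omega_Sigma Omega_xi^{-1}."""
    O_inv = np.linalg.inv(Omega_xi_hat)
    return O_inv @ Omega_Sigma_hat @ O_inv / N
```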

3.2.1. Asymptotically Normal Tests

Since $Z = \frac{\hat{\delta}_{ij} - \delta^{0}_{ij}}{\sqrt{\widehat{\mathrm{Var}}(\hat{\delta})_{ii}}}$ is asymptotically $N(0,1)$ under the null hypothesis $H_0: \delta_{ij} = \delta^{0}_{ij}$, the statistic $Z$ can be used to test hypotheses about the values of the $\delta_{ij}$'s.
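In code, the test is computed directly from one coefficient estimate and the corresponding diagonal element of the estimated covariance matrix (sketch):

```python
import numpy as np
from scipy.stats import norm

def z_test(delta_hat_ij, delta0_ij, var_ii):
    """Asymptotic normal test of H0: delta_ij = delta0_ij (sketch)."""
    z = (delta_hat_ij - delta0_ij) / np.sqrt(var_ii)
    return z, 2.0 * norm.sf(abs(z))   # statistic and two-sided p-value
```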

3.2.2. Wald Tests

To define Wald tests on the elements of $\delta$, let $H_0: R(\delta) = 0$ be the null hypothesis to be tested. Here $R(\delta)$ is a continuously differentiable $L$-dimensional vector function with rank $\left(\frac{\partial R(\delta)}{\partial \delta'}\right) = L \le K$. In the special case of a linear null hypothesis $H_0: R\delta = r$, $\frac{\partial R(\delta)}{\partial \delta'} = R$. It follows from Theorem 5.37 in [35] that:

$$\sqrt{N}\big(R(\hat{\delta}) - r\big) \xrightarrow{d} N\!\left([0],\; \frac{\partial R(\delta)}{\partial \delta'}\,\Omega_{\xi}^{-1}\Omega_{\Sigma}\Omega_{\xi}^{-1}\,\frac{\partial R(\delta)'}{\partial \delta}\right)$$

The Wald test statistic has a $\chi^2$ limiting distribution with $L$ degrees of freedom, given as:

$$W = \big(R(\hat{\delta}) - r\big)'\left(\frac{\partial R(\hat{\delta})}{\partial \delta'}\,\widehat{\mathrm{Var}}(\hat{\delta})\,\frac{\partial R(\hat{\delta})'}{\partial \delta}\right)^{-1}\big(R(\hat{\delta}) - r\big) \xrightarrow{d} \chi^{2}_{L}$$
under the null hypothesis.
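For the linear case $H_0: R\delta = r$, the Wald statistic can be computed directly from the estimated covariance matrix (sketch):

```python
import numpy as np
from scipy.stats import chi2

def wald_test(delta_hat, R, r, var_delta):
    """Wald test of H0: R delta = r using the estimated covariance of delta_hat."""
    diff = R @ delta_hat - r
    W = diff @ np.linalg.solve(R @ var_delta @ R.T, diff)
    return W, chi2.sf(W, df=len(r))   # W is asymptotically chi-square with L df
```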

4. Monte Carlo Experiments

For the sampling experiments we set up an overdetermined simultaneous system with contemporaneously correlated errors that is similar, but not identical, to empirical models discussed in [10,36,37]. Reference [10] provides empirical evidence of the performance of the GME-GJM estimator for both ill-posed (multicollinearity) and well-posed problems using a sample size of 20 observations. In this study we focus on both smaller and larger sample size performance of the GME-NLP estimator, the size and power of single and joint hypothesis tests, and the relative performance of GME-NLP to 2SLS and 3SLS. In addition, the performance of GME-NLP is compared to Golan, Judge, and Miller’s GME-GJM estimator. The estimation performance measure is the mean square error (MSE) between the empirical coefficient estimates and the true coefficient values.

4.1. Parameters and Support Spaces

The parameters Γ and B and the covariance structure Σ of the structural system in Equation (1) are specified as:

$$\Gamma = \begin{pmatrix} -1 & .267 & .087 \\ .222 & -1 & 0 \\ 0 & .046 & -1 \end{pmatrix} \qquad B = \begin{pmatrix} 6.2 & 4.4 & 4.0 \\ 0 & .74 & 0 \\ .7 & 0 & .53 \\ 0 & 0 & .11 \\ .96 & .13 & 0 \\ 0 & 0 & .56 \\ .06 & 0 & 0 \end{pmatrix} \qquad \Sigma = \begin{pmatrix} 1 & 1 & .125 \\ 1 & 4 & .0625 \\ .125 & .0625 & 8 \end{pmatrix}$$

where the diagonal elements of $\Gamma$ are $-1$ by the normalization $\gamma_{gg} = -1$ in condition R4.

The exogenous variables are drawn from an iid $N(0,1)$ distribution, while the errors for the structural equations are drawn from a multivariate normal distribution with mean zero and covariance $\Sigma \otimes I$ that is truncated at ±3 standard deviations.

To specify the GME models, additional information beyond that traditionally used in 2SLS and 3SLS is required. Upper and lower bounds, as well as intermediate support points for the individual coefficients and disturbance terms, are supplied for the GME-NLP and GME-GJM models, along with starting values for the parameter coefficients. The difference in specification of GME-GJM relative to GME-NLP is that in the former, the restriction $\Pi = -B\Gamma^{-1}$ replaces the structural model in Equation (6) and the GME-GJM objective function excludes any parameters associated with the structural form disturbance term. The upper and lower bounds of the support spaces specified for the structural and reduced form models are identical to [10], except that we use three rather than five support points. The supports are defined as $s^{\beta}_{ik} = s^{\pi}_{ik} = (-5, 0, 5)'$ for $k = 2,\ldots,7$, $s^{\beta}_{i1} = s^{\pi}_{i1} = (-20, 0, 20)'$, and $s^{\gamma}_{ij} = (-2, 0, 2)'$ for $i,j = 1,2,3$. The error supports for the reduced form and structural model were specified as $s^{z}_{in} = s^{w}_{in} = (-\omega_i - 3\sigma_i,\, 0,\, \omega_i + 3\sigma_i)'$, where $\sigma_i$ is the standard deviation of the errors from the $i$th equation, and from R3 we let $\omega_i = 2.5$ to ensure feasibility. See the appendix material for a more complete discussion of computational issues.
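One replication of this design can be generated as follows (a sketch of the data generating process described above; truncation is implemented here by clipping for brevity, whereas a rejection scheme would reproduce the truncated distribution exactly):

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_sample(N, Gamma, B, Sigma, trunc=3.0):
    """Draw one Monte Carlo sample from the SEM Y Gamma + X B + E = 0 (sketch)."""
    G, K = Gamma.shape[0], B.shape[0]
    X = rng.normal(size=(N, K))                  # iid N(0,1) exogenous variables
    sd = np.sqrt(np.diag(Sigma))
    E = rng.multivariate_normal(np.zeros(G), Sigma, size=N)
    E = np.clip(E, -trunc * sd, trunc * sd)      # truncate at +/- trunc std deviations
    Y = -(X @ B + E) @ np.linalg.inv(Gamma)      # solve the structural system for Y
    return Y, X
```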

4.2. Estimation Performance

Table 1 contains the mean values of the estimated Γ parameters based on 1,000 MC repetitions for sample sizes of 5, 25, 100, 400, and 1,600 observations per equation. From this information, we can infer several implications about the performance of the GME estimators. For a sample size of five observations per equation, 2SLS and 3SLS estimators provide no solution due to insufficient degrees of freedom. For five and 25 observations the GME-NLP and GME-GJM estimators have mean values that are similar, although GME-NLP exhibits more bias. When the sample size is 100, the GME-NLP estimator generally exhibits less bias. Like 2SLS and 3SLS, the GME-NLP estimator is converging to the true coefficient values as N increases to 1,600 observations per equation (3SLS estimates are not reported for 1,600 observations).

In Table 2 the standard error (SE) and MSE are reported for 3SLS and GME-NLP. The GME-NLP estimator has uniformly lower standard error and MSE than does 3SLS. For small samples of 25 observations the MSE performance of the GME-NLP estimator is vastly improved relative to the 3SLS estimator, which is consistent with MC results from other studies relating to other GME-type estimators [9,32]. As the sample size increases from 25 to 400 observations, both the standard error and mean squared error of the 3SLS and GME-NLP converge towards each other. Interestingly, even at a sample size of 100 observations the GME-NLP mean squared error remains notably superior to 3SLS.

4.3. Inference Performance

To investigate the size of the asymptotically normal test, the single hypothesis H0: γij = k was tested with k set equal to the true values of the structural parameters. Critical values of the tests were based on a normal distribution with a 0.05 level of significance. An observation on the power of the respective tests was obtained by performing a test of significance whereby k = 0 in the preceding hypothesis. To complement this analysis, we investigated the size and power of a joint hypothesis H0: γ21= k1, γ32 = k2 using the Wald test. The scenarios were analyzed using 1000 MC repetitions for sample sizes of 25, 100, and 400 per equation.

Table 3 contains the rejection probabilities for the true and false hypotheses of both the GME-NLP and 3SLS estimators. The single hypothesis test for the parameter γ21 = 0.222 based on the asymptotically normal test responded well for GME-NLP (3SLS), yielding an estimated test size of 0.066 (0.043) and power of 0.980 (0.964) at 400 observations per equation. In contrast, for the remaining parameters, the size and power of the hypotheses tests were considerably less satisfactory. This is due in part to the second and third equations having substantially larger disturbance variability. For the joint hypothesis test based on the Wald test the size and power perform well for GME-NLP (3SLS) with an estimated test size of 0.047 (0.047) and power of 0.961 (0.934) at 400 observations. Overall, the results indicate that based on asymptotic test statistics GME-NLP does not dominate, nor is it dominated by, 3SLS.

4.4. Further Results: 3-Sigma Rule and Contaminated Errors

Further MC results are presented to demonstrate the sensitivity of the GME-NLP to the sigma truncation rule (Table 4) and to illustrate the robustness of the GME-NLP relative to 3SLS in the presence of contaminated error models (Table 5). Each of these issues plays a critical role in empirical analysis of the SEM, and the latter can compound estimation problems, especially in small sample estimation.

To obtain the results in Table 4, the error supports for the reduced form and structural model were specified as before with $s^{z}_{in} = s^{w}_{in} = (-\omega_i - j\sigma_i,\, 0,\, \omega_i + j\sigma_i)'$, where $\sigma_i$ is the standard deviation of the errors from the $i$th equation, $j = 3, 4, 5$, and from R3 $\omega_i = 2.5$, again for solution feasibility. The results exhibit a tradeoff between bias and MSE specific to the individual coefficient estimates. For $\gamma_{21}$, both the bias and the MSE decrease as the truncation points are shrunk from five to three sigma. In contrast, for the remaining coefficients in Table 4, the MSE increases as the truncation points are decreased. The bias decreases for $\gamma_{32}$ and $\gamma_{13}$ as the truncation points are shrunk, while the direction of bias is ambiguous for $\gamma_{12}$. Predominantly, the empirical standard error of the coefficients decreased with wider truncation points. Overall, these results underscore that the mean and standard error of GME-NLP coefficient values are sensitive to the choice of truncation points.

Results from Table 5 provide the mean and MSE of the distribution of coefficient estimates for 3SLS and GME-NLP when the error term is contaminated by outcomes from an asymmetric distribution [14,15]. For a given percentage level $\varphi$, the errors for the structural equations are drawn from $(1-\varphi)\,N([0], \Sigma \otimes I) + \varphi\,F(2,3)$ and then truncated at ±3 standard deviations. We define $F(2,3) = \mathrm{Beta}(2,3) - 6$ and examine the robustness of 3SLS and GME-NLP with values of $\varphi = 0.1$, $0.5$, and $0.9$. The error supports for the reduced form and structural model were specified with the three sigma rule. As evident in Table 5, when the percent of contamination induced in the error component of the SEM increases, the performance of both estimators is detrimentally impacted. For 25 observations, the 3SLS coefficient estimates are much less robust to the contamination process than are the GME-NLP estimates, as measured by the MSE values. At 100 observations the performance of 3SLS improves, but it still remains less robust than GME-NLP.
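A sketch of the contaminated draw; the per-observation mixture used here is one natural reading of the design, and implementation details may differ from the authors':

```python
import numpy as np

def draw_contaminated(N, Sigma, phi, rng):
    """Errors from (1 - phi) N(0, Sigma) + phi F(2,3), with F(2,3) = Beta(2,3) - 6."""
    G = Sigma.shape[0]
    normal = rng.multivariate_normal(np.zeros(G), Sigma, size=N)
    outlier = rng.beta(2.0, 3.0, size=(N, G)) - 6.0
    contaminate = rng.random((N, 1)) < phi   # contaminate a whole row w.p. phi
    return np.where(contaminate, outlier, normal)
```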

4.5. Discussion

The performance of the GME-NLP estimator was based on a variety of MC experiments. In small and medium sample situations (≤100 observations) the GME-NLP is MSE superior to 3SLS for the defined experiments. Increasing the sample size clearly demonstrated consistency of the GME-NLP estimator for the SEM. Regarding performance in single or joint hypothesis testing contexts, the empirical results indicate that the GME-NLP did not dominate, nor was it dominated by 3SLS.

The MC evidence provided above indicates that applying the multiple-sigma truncation rule with a sample scale parameter to estimate the GME-NLP model is a useful empirical approach. Across the 3, 4, and 5-sigma rule sampling experiments, GME-NLP continued to dominate 3SLS in MSE for 25, 100, and 400 observations per equation. For wider truncation points the empirical SE of the coefficients decreased. However, these results also demonstrate that the GME-NLP coefficients are sensitive to the choice of truncation points with no consensus in choosing narrower (3-sigma) over wider (5-sigma) truncation supports under a Gaussian error structure. We suggest that additional research is needed to optimally identify error truncation points.

Finally, the GME-NLP estimator exhibited more robustness in the presence of contaminated errors relative to 3SLS. The MC analysis illustrates that deviations from normality assumptions in asymptotically justified econometric-statistical models lead to dramatically less robust outcomes in small samples. References [9,16] emphasized that under traditional econometric assumptions, when samples are Gaussian in nature and sample moments are taken as minimal sufficient statistics, no information may be lost. However, they point out that outside the Gaussian setting, reducing data constraints to moment constraints can be a wasteful use of sample information and can result in estimators that are less than fully efficient. The above MC analysis suggests that GME-NLP, which relies on full sample information but does not rely on a full parametric specification such as maximum likelihood, can be more robust to alternative error distributions.

5. Empirical Illustration

In this section, an empirical application is examined to demonstrate implementation of the GME-NLP estimator. It is the well-known three-equation system that comprises the Klein Model I, which further benchmarks the GME-NLP estimator relative to least squares.

5.1. Klein Model

Klein’s Model I was selected as an empirical application because it has been extensively applied in many studies. Klein’s macroeconomic model is highly aggregated with relatively low parameter dimensionality, making it useful for pedagogical purposes. It is a three-equation SEM based on annual data for the United States from 1920 to 1941. All variables are in billions of constant dollars with base year 1934 (for a complete description of the model and data see [1,38]).

The model is comprised of three stochastic equations and five identities. The stochastic equations include demand for consumption, investment, and labor. Klein’s consumption function is given as:

$$CN_t = \beta_{11} + \gamma_{11}(W_{1t} + W_{2t}) + \gamma_{21}P_t + \beta_{21}P_{t-1} + \varepsilon_{1t}$$
where CNt is consumption, W1t is wages earned by workers in the private sector, W2t is wages earned by government workers, Pt is nonwage income (profit), and ε1t is a stochastic error term. This equation describes aggregate consumption as a function of the total wage bill and current and lagged profit. The investment equation is given by:
$$I_t = \beta_{12} + \gamma_{12}P_t + \beta_{22}P_{t-1} + \beta_{32}K_{t-1} + \varepsilon_{2t}$$
where It is net investment, Kt is the stock of capital goods at the end of the year, and ε2t is a stochastic error term. This equation implies that net investment reacts to current and lagged profits, as well as beginning of the year capital stocks. The demand for labor is given by:
$$W_{1t} = \beta_{13} + \gamma_{13}E_t + \beta_{23}E_{t-1} + \beta_{33}(\mathrm{Year} - 1931) + \varepsilon_{3t}$$
where Et is a measure of private product and ε3t is a stochastic error term. It implies that the wage bill paid by private industry varies with the current and lagged total private product and a time trend. A time trend is included to capture institutional changes over the period, primarily the bargaining strength of labor. The identities that complete the structural model include:
Total product: $Y_t + TX_t = CN_t + I_t + G_t + W_{2t}$

Income: $Y_t = P_t + W_t$

Capital: $K_t = I_t + K_{t-1}$

Wage bill: $W_t = W_{1t} + W_{2t}$

Private product: $E_t = Y_t + TX_t - W_{2t}$

The first identity states that national income, $Y_t$, plus business taxes, $TX_t$, equals the sum of goods and services demanded by consumers, $CN_t$, plus investors, $I_t$, plus net government demands, $G_t + W_{2t}$. The second identity holds total income, $Y_t$, as the sum of profit, $P_t$, and wages, $W_t$, while the third implies that end-of-year capital stock, $K_t$, is equal to investment, $I_t$, plus last year's end-of-year capital stock, $K_{t-1}$. In the fourth identity, $W_t$ is the total wage bill, the sum of wages earned from the private sector, $W_{1t}$, and wages earned by the government, $W_{2t}$. The fifth identity states that private product, $E_t$, is equal to income, $Y_t$, plus business taxes, $TX_t$, less government wages, $W_{2t}$.
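The five identities translate directly into code (sketch; the array names are hypothetical):

```python
def klein_identities(CN, I, G, W1, W2, TX, K_lag):
    """Klein Model I identity variables, as listed above (sketch)."""
    Y = CN + I + G + W2 - TX   # total product: Y + TX = CN + I + G + W2
    W = W1 + W2                # wage bill
    P = Y - W                  # income: Y = P + W
    K = I + K_lag              # capital: K_t = I_t + K_{t-1}
    E = Y + TX - W2            # private product
    return Y, W, P, K, E
```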

5.2. Klein Model I Results

Table 6 contains the estimates of the three stochastic equations using ordinary least squares (OLS), two stage least squares (2SLS), three stage least squares (3SLS), and GME-NLP. Parameter restrictions for GME-NLP were specified using the fairly uninformative reference support points (−50,0,50)′ for the intercept, (−5,0,5)′ for the slope parameters of the reduced form models and (−2,0,2)′ for the slope parameters of the structural form models. Truncation points for the error supports of the structural model are specified using both three- and five-sigma rules.

For the given truncation points, the GME-NLP estimates of asymptotic standard errors are greater than those of the other estimators. It is to be expected that if more informative parameter support ranges had been used when representing the feasible space of the parameters, standard errors would have been reduced. In most of the cases, the parameter, standard error, and R2 measures were not particularly sensitive to the choice of error truncation point, although there were a few notable exceptions dispersed throughout the three equation system.

The Klein Model I benchmarks the GME-NLP estimator relative to OLS, 2SLS, and 3SLS. Comparisons are based on the sum of squared difference (SSD) measures between GME-NLP and the OLS, 2SLS, and 3SLS parameter estimates. Turning to the consumption model, the SSD is smallest (largest) between GME-NLP and OLS (3SLS) parameter estimates for both the three- and five-sigma rules (but only marginally). For example, the SSD between OLS (3SLS) and GME-NLP under the 3-sigma rule is 3.35 (4.15). Alternatively, for the labor model, the SSD is smallest (largest) between GME-NLP and 3SLS (OLS) parameter estimates for both the three- and five-sigma rules. The most dramatic differences arise in the investment model. For example, the SSD between OLS (3SLS) and GME-NLP under the 3-sigma rule is 3.00 (391.79). This comparison underscores divergences that exist between GME-NLP and the 2SLS and 3SLS estimators. In addition to the information introduced by the parameter support spaces, another reason for this divergence may be that GME-NLP is a single-step estimator that is completely consistent with the data constraints in Equations (6) and (7), while 2SLS and 3SLS are multiple-step estimators that only approximate the NLP model and ignore nonlinear interactions between reduced and structural form coefficients. The nonlinear specification of GME-NLP leads to first order optimization conditions (Equation (A16), derived in the Appendix) that are different from other multiple-step or asymptotically justified estimators such as 2SLS and 3SLS. Overall, the SSD comparisons characterize finite sample differences between the GME-NLP estimator and more traditional estimators.

6. Conclusions

In this paper a one-step, data-constrained generalized maximum entropy estimator is proposed for the nonlinear-in-parameters model of the SEM (GME-NLP). Under the assumed regularity conditions, it is shown that the estimator is consistent and asymptotically normal in the presence of contemporaneously correlated errors. We define an asymptotically normal test (single scalar hypothesis) and an asymptotically chi-square-distributed Wald test (joint vector hypothesis) that are capable of performing hypothesis tests typically used in empirical work. Moreover, the GME-NLP estimator provides a simple method of introducing prior information into the model by means of informative supports on the parameters that can decrease the mean square error of the coefficient estimates. The reformulated GME-NLP model, which is optimized over the structural and reduced form parameter set, provides a computationally efficient approach for large and small sample sizes.

We evaluated the performance of the GME-NLP estimator in a variety of Monte Carlo experiments and in an illustrative empirical application. In small and medium sample situations (≤100 observations) the GME-NLP is mean square error superior to 3SLS for the defined experiments. Relative to 3SLS, the GME-NLP estimator exhibited dramatically more robustness in the presence of contaminated error problems. These results illustrate advantages of a one-step, data-constrained estimator over multiple-step, moment-constrained estimators. Increasing the sample size clearly demonstrated consistency of the GME-NLP estimator for the SEM. The empirical results indicate that the GME-NLP did not dominate, nor was it dominated by, 3SLS in single or joint asymptotic hypothesis testing.

The three-equation Klein Model I was estimated as an empirical application of the GME-NLP method. Results of the Klein Model I benchmarked parameter estimates of GME-NLP relative to OLS, 2SLS, and 3SLS using the summed squared difference between parameter values of the estimators. GME-NLP was most similar to 2SLS and 3SLS for the consumption and labor demand equations, while it was most similar to OLS for the investment demand equation. In all, the empirical example also demonstrated some disadvantages of GME estimation in that coefficient estimates and predictive fit were somewhat sensitive to specification of error truncation points. This suggests additional research is needed to optimally identify error truncation points.

The analytical results in this study contribute toward establishing a rigorous foundation for GME estimation of the SEM and analogous properties of test statistics. The study also furnishes a starting point for empirical economists desiring to apply maximum entropy to linear simultaneous systems (e.g., normalized quadratic demand systems used extensively in applied research). While the empirical results are intriguing, this approach does not definitively solve the problem of estimating the SEM in small samples or ill-posed problems, and it underscores the need for continued research on a number of problems in small sample estimation based on asymptotically justified estimators.

Acknowledgments

We thank George Judge (Berkeley) for helpful comments and suggestions. All errors remaining are the sole property of the authors.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

A. Theorems and Proofs

To facilitate both the derivation of the asymptotic properties and the computational efficiency of the GME-NLP estimator, we reformulate the maximum entropy model into scalar notation that is completely consistent with Equations (5)–(8) (under the prevailing assumptions and the constraints (A1)–(A8) defined below). The scalar notation exhibits the flexibility to use different numbers of support points for each parameter or error term. However, we simplify the notation by using M support points for each parameter and error term.

Let Δ represent a bounded, convex, and dense parameter space containing the (Q ×1) vector of the reduced form and structural parameters θ = vec(θπ, θγ, θβ). The reformulated constrained maximum entropy model is defined as

$$\max_{\theta,\, p^{\pi},\, p^{\gamma},\, p^{\beta},\, z,\, w} \left\{ -\sum_{k,g,m} p^{\pi}_{kgm}\ln p^{\pi}_{kgm} - \sum_{i,g,m} p^{\gamma}_{igm}\ln p^{\gamma}_{igm} - \sum_{k,g,m} p^{\beta}_{kgm}\ln p^{\beta}_{kgm} - \sum_{n,g,m} w_{ngm}\ln w_{ngm} - \sum_{n,g,m} z_{ngm}\ln z_{ngm} \right\} \tag{A1}$$

subject to:

$$\sum_{m=1}^{M} s^{\pi}_{kgm} p^{\pi}_{kgm} = \theta^{\pi}_{kg}; \quad \pi_{kgL} = s^{\pi}_{kg1} \le \cdots \le s^{\pi}_{kgM} = \pi_{kgH} \tag{A2}$$

$$\sum_{m=1}^{M} s^{\gamma}_{igm} p^{\gamma}_{igm} = \theta^{\gamma}_{ig}; \quad \gamma_{igL} = s^{\gamma}_{ig1} \le \cdots \le s^{\gamma}_{igM} = \gamma_{igH} \tag{A3}$$

$$\sum_{m=1}^{M} s^{\beta}_{kgm} p^{\beta}_{kgm} = \theta^{\beta}_{kg}; \quad \beta_{kgL} = s^{\beta}_{kg1} \le \cdots \le s^{\beta}_{kgM} = \beta_{kgH} \tag{A4}$$

$$\sum_{m=1}^{M} s^{w}_{ngm} w_{ngm} = u_{ng} = y_{ng} - X_n\big(\Pi(\theta^{\pi})_{(-g)}\big)\theta^{\gamma}_{g} - X_{gn}\theta^{\beta}_{g}; \quad c_{g1} = s^{w}_{ng1} \le \cdots \le s^{w}_{ngM} = c_{gM} \tag{A5}$$

$$\sum_{m=1}^{M} s^{z}_{ngm} z_{ngm} = v_{ng} = y_{ng} - X_n\theta^{\pi}_{g}; \quad c_{g1} = s^{z}_{ng1} \le \cdots \le s^{z}_{ngM} = c_{gM} \tag{A6}$$

$$s^{i}_{jgm} = -s^{i}_{jg(M+1-m)} \quad \text{for } m = 1,\ldots,M \quad \left(\text{where for } M \text{ odd } s^{i}_{jg\frac{M+1}{2}} \equiv 0 \text{ and } i = w, z\right) \tag{A7}$$

$$\sum_{m=1}^{M} p^{\pi}_{kgm} = 1, \quad \sum_{m=1}^{M} p^{\gamma}_{igm} = 1, \quad \sum_{m=1}^{M} p^{\beta}_{kgm} = 1, \quad \sum_{m=1}^{M} w_{ngm} = 1, \quad \sum_{m=1}^{M} z_{ngm} = 1 \tag{A8}$$

Constraints (A2)–(A6) define the reparameterized coefficients and errors with supports. In (A5) the term $\Pi(\theta^{\pi})_{(-g)}$ is a $(K \times G_g)$ matrix of elements $\theta^{\pi}_{kg}$ that coincide with the endogenous variables in $Y_{(-g)}$. The constraint (A7) implies symmetry of the error supports about the origin, and (A8) defines the normalization conditions. The nonnegativity restrictions on $p^{\pi}_{kgm}$, $p^{\gamma}_{igm}$, $p^{\beta}_{kgm}$, $w_{ngm}$, and $z_{ngm}$ are inherently satisfied by the optimization problem and are not explicitly incorporated into the constraint set.

Next, we define the conditional entropy function by conditioning on $\theta^{\pi} = \tau^{\pi}$, $\theta^{\gamma} = \tau^{\gamma}$, and $\theta^{\beta} = \tau^{\beta}$, or simply $\theta = \tau$, where $\theta = \mathrm{vec}(\theta^{\pi}, \theta^{\gamma}, \theta^{\beta})$ and $\tau = \mathrm{vec}(\tau^{\pi}, \tau^{\gamma}, \tau^{\beta})$. This yields:

$$F(\tau) = \max_{p^{\pi},\, p^{\gamma},\, p^{\beta},\, z,\, w \,:\, \theta = \tau} \left\{ -\sum_{k,g,m} p^{\pi}_{kgm}\ln p^{\pi}_{kgm} - \sum_{i,g,m} p^{\gamma}_{igm}\ln p^{\gamma}_{igm} - \sum_{k,g,m} p^{\beta}_{kgm}\ln p^{\beta}_{kgm} - \sum_{n,g,m} w_{ngm}\ln w_{ngm} - \sum_{n,g,m} z_{ngm}\ln z_{ngm} \right\} \tag{A9}$$

The optimal value of $z_{ngm}$ in the conditionally-maximized entropy function is the solution to the Lagrangian $L(z_{ng}, \eta^{z}_{ng}, \lambda^{z}_{ng}) = -\sum_{m=1}^{M} z_{ngm}\ln(z_{ngm}) + \eta^{z}_{ng}\left(\sum_{m=1}^{M} z_{ngm} - 1\right) + \lambda^{z}_{ng}\left(\sum_{m=1}^{M} s^{z}_{ngm}z_{ngm} - v_{ng}(\tau^{\pi})\right)$ and is given by:

$$z_{ngm}\big(\lambda^{z}_{ng}(v_{ng}(\tau^{\pi}))\big) = \frac{e^{\lambda^{z}_{ng}(v_{ng}(\tau^{\pi}))\, s^{z}_{ngm}}}{\sum_{j=1}^{M} e^{\lambda^{z}_{ng}(v_{ng}(\tau^{\pi}))\, s^{z}_{ngj}}}, \quad m = 1,\ldots,M \tag{A10}$$

while the optimal value of $w_{ngm}$:

$$w_{ngm}\big(\lambda^{w}_{ng}(u_{ng}(\tau))\big) = \frac{e^{\lambda^{w}_{ng}(u_{ng}(\tau))\, s^{w}_{ngm}}}{\sum_{j=1}^{M} e^{\lambda^{w}_{ng}(u_{ng}(\tau))\, s^{w}_{ngj}}}, \quad m = 1,\ldots,M \tag{A11}$$

solves $L(w_{ng}, \eta^{w}_{ng}, \lambda^{w}_{ng}) = -\sum_{m=1}^{M} w_{ngm}\ln(w_{ngm}) + \eta^{w}_{ng}\left(\sum_{m=1}^{M} w_{ngm} - 1\right) + \lambda^{w}_{ng}\left(\sum_{m=1}^{M} s^{w}_{ngm}w_{ngm} - u_{ng}(\tau)\right)$.
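These exponential solutions have the familiar softmax form: the Lagrange multiplier tilts the weights away from uniformity until the support-weighted mean matches the conditioning value. A small numerical sketch:

```python
import numpy as np

def optimal_weights(lam, s):
    """Entropy-optimal convexity weights exp(lam * s_m) / sum_j exp(lam * s_j)."""
    e = np.exp(lam * s)
    return e / e.sum()

s = np.array([-3.0, 0.0, 3.0])   # symmetric support points
for lam in (0.0, 0.5):
    w = optimal_weights(lam, s)
    print(lam, w.round(3), "implied value:", (s @ w).round(3))
# lam = 0 gives uniform weights (implied value 0); larger lam tilts toward larger s_m
```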

The identities:

$$\sum_{m=1}^{M} s^{z}_{ngm}\, z_{ngm}\big(\lambda^{z}(-v_{ng}(\tau^{\pi}))\big) = -\sum_{m=1}^{M} s^{z}_{ngm}\, z_{ngm}\big(\lambda^{z}(v_{ng}(\tau^{\pi}))\big) \tag{A12}$$

and:

$$\sum_{m=1}^{M} s^{w}_{ngm}\, w_{ngm}\big(\lambda^{w}(-u_{ng}(\tau))\big) = -\sum_{m=1}^{M} s^{w}_{ngm}\, w_{ngm}\big(\lambda^{w}(u_{ng}(\tau))\big) \tag{A13}$$

follow from the symmetry of the support points around zero. Likewise, the optimal values of $p^{\ell}_{kgm}$ (for $\ell = \pi, \gamma, \beta$) are respectively:

$$p^{\ell}_{kgm}\big(\lambda^{\ell}_{kg}(\tau^{\ell}_{kg})\big) = \frac{e^{\lambda^{\ell}_{kg}(\tau^{\ell}_{kg})\, s^{\ell}_{kgm}}}{\sum_{j=1}^{M} e^{\lambda^{\ell}_{kg}(\tau^{\ell}_{kg})\, s^{\ell}_{kgj}}}, \quad m = 1,\ldots,M \tag{A14}$$

which satisfy $L(p^{\ell}_{kg}, \eta^{\ell}_{kg}, \lambda^{\ell}_{kg}) = -\sum_{m=1}^{M} p^{\ell}_{kgm}\ln(p^{\ell}_{kgm}) + \eta^{\ell}_{kg}\left(\sum_{m=1}^{M} p^{\ell}_{kgm} - 1\right) + \lambda^{\ell}_{kg}\left(\sum_{m=1}^{M} s^{\ell}_{kgm}p^{\ell}_{kgm} - \tau^{\ell}_{kg}\right)$. For notational convenience we let $\lambda^{z}_{ng} = \lambda^{z}(v_{ng}(\tau^{\pi}))$, $\lambda^{w}_{ng} = \lambda^{w}(u_{ng}(\tau))$, $\lambda^{\pi}_{kg} = \lambda^{\pi}_{kg}(\tau^{\pi}_{kg})$, $\lambda^{\gamma}_{ig} = \lambda^{\gamma}_{ig}(\tau^{\gamma}_{ig})$, and $\lambda^{\beta}_{kg} = \lambda^{\beta}_{kg}(\tau^{\beta}_{kg})$ represent the optimal values of the Lagrange multipliers. Substituting the solutions defined in Equations (A10), (A11), and (A14) into the conditional objective function yields the conditional maximum value entropy function:
F ( τ ) = kg [ λ kg π τ kg π ln ( m exp ( λ kg π s kgm π ) ) ] jg [ λ jg γ τ jg γ ln ( m exp ( λ jg γ s jg m γ ) ) ] kg [ λ kg β τ kg β ln ( m exp ( λ kg β s kgm β ) ) ] ng [ λ ng w u ng ( τ ) ln ( m exp ( λ ng w s ngm w ) ) ] ng [ λ ng z v ng ( τ π ) ln ( m exp ( λ ng z s ngm z ) ) ]
The gradient of $F(\tau)$ in Equation (A15) is a $(Q \times 1)$ vector $\nabla(\tau) = \mathrm{vec}(\nabla^{\pi}(\tau), \nabla^{\gamma}(\tau), \nabla^{\beta}(\tau))$ defined by:

$$\nabla(\tau) = -\begin{pmatrix} \lambda^{\pi}(\tau^{\pi}) \\ \lambda^{\gamma}(\tau^{\gamma}) \\ \lambda^{\beta}(\tau^{\beta}) \end{pmatrix} + \begin{pmatrix} (I_G \otimes X)' & \big[\big((I_G + \Gamma(\tau^{\gamma})) \otimes X\big)\big]' \\ [0] & \mathrm{diag}\big((\Pi(\tau^{\pi})_{(-1)})'X', \ldots, (\Pi(\tau^{\pi})_{(-G)})'X'\big) \\ [0] & \mathrm{diag}(X_1', \ldots, X_G') \end{pmatrix}\begin{pmatrix} \lambda^{z} \\ \lambda^{w} \end{pmatrix} = -\lambda(\tau) + Z^{*}(\tau)'\begin{pmatrix} \lambda^{z} \\ \lambda^{w} \end{pmatrix} \tag{A16}$$

Above, $\Gamma(\tau^{\gamma})$ is a $(G \times G)$ matrix of elements $\tau^{\gamma}_{ig}$ and $\Pi(\tau^{\pi})_{(-g)}$ is a $(K \times G_g)$ matrix of elements $\tau^{\pi}_{kg}$. The Lagrange multipliers are vertically concatenated into $\lambda^{\pi}, \lambda^{\gamma}, \lambda^{\beta}, \lambda^{w}, \lambda^{z}$, where, for example, the vector $\lambda^{w} = \mathrm{vec}(\lambda^{w}_{1}, \ldots, \lambda^{w}_{G})$ is of dimension $(NG \times 1)$ and is made up of $\lambda^{w}_{g} = (\lambda^{w}_{1g}, \ldots, \lambda^{w}_{Ng})'$ for $g = 1,\ldots,G$.

The (Q × Q) Hessian matrix of the conditional maximum value F(τ) in Equation (A15) is given by:

$$H(\tau) = -\frac{\partial \lambda(\tau)}{\partial \tau} + \left(\frac{\partial Z^{*}(\tau)}{\partial \tau}\right)'(I \otimes \lambda^{w}) - Z^{*}(\tau)\left(\Xi(\tau) \odot Z^{*}(\tau)\right)' \tag{A17}$$

where ⊙ denotes the Hadamard (elementwise) product of two matrices. The (Q × Q) diagonal matrix $\frac{\partial \lambda(\tau)}{\partial \tau} = \operatorname{diag}\left(\frac{\partial \lambda^{\pi}}{\partial \tau^{\pi}}, \frac{\partial \lambda^{\gamma}}{\partial \tau^{\gamma}}, \frac{\partial \lambda^{\beta}}{\partial \tau^{\beta}}\right)$ is defined by:

$$\frac{\partial \lambda^{\ell}(\tau^{\ell}_{kg})}{\partial \tau^{\ell}_{rt}} = \begin{cases} \left(\sum_{m=1}^{M} (s^{\ell}_{kgm})^{2}\, p^{\ell}_{kgm}(\tau^{\ell}_{kg}) - (\tau^{\ell}_{kg})^{2}\right)^{-1} & \text{if } k = r,\ g = t \\ 0 & \text{otherwise} \end{cases} \quad \text{for } \ell = \pi, \gamma, \beta$$
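Because this derivative is the reciprocal of the variance of the support points under $p^{\ell}_{kg}(\tau^{\ell}_{kg})$, the multiplier implied by a given τ can be recovered by a Newton iteration whose slope is exactly that variance. A sketch under our own conventions (function name, starting value, and tolerance are ours):

```python
import numpy as np

def solve_lambda(tau: float, s: np.ndarray, tol: float = 1e-12) -> float:
    """Newton recovery of the multiplier lambda for which the weights in (A14)
    reproduce tau = sum_m s_m p_m; the Newton slope is the variance whose
    reciprocal appears in the display above."""
    lam = 0.0
    for _ in range(100):
        a = lam * s
        p = np.exp(a - a.max())
        p /= p.sum()
        mean = p @ s
        var = p @ s**2 - mean**2
        step = (tau - mean) / var
        lam += step
        if abs(step) < tol:
            break
    return lam

s = np.array([-10.0, 0.0, 10.0])
print(solve_lambda(2.5, s))   # the multiplier whose implied mean over s is 2.5
```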

The components of the (Q × QGN) matrix $\frac{\partial Z^{*}(\tau)}{\partial \tau}$ are given by $\frac{\partial Z^{*}(\tau)}{\partial \tau} = \left(\frac{\partial Z^{*}(\tau)}{\partial \tau^{\pi}}, \frac{\partial Z^{*}(\tau)}{\partial \tau^{\gamma}}, \frac{\partial Z^{*}(\tau)}{\partial \tau^{\beta}}\right)$, where $\frac{\partial Z^{*}(\tau)}{\partial \tau^{\pi}} = \left(\frac{\partial Z^{*}(\tau)}{\partial \tau^{\pi}_{11}}, \ldots, \frac{\partial Z^{*}(\tau)}{\partial \tau^{\pi}_{KG}}\right)$ is a (KG × KGGN) sparse matrix of $x_{nk}$'s, $\frac{\partial Z^{*}(\tau)}{\partial \tau^{\gamma}} = \left(\frac{\partial Z^{*}(\tau)}{\partial \tau^{\gamma}_{11}}, \ldots, \frac{\partial Z^{*}(\tau)}{\partial \tau^{\gamma}_{GG}}\right)$ is a (Ḡ × ḠGN) matrix, and the (K̄ × K̄GN) matrix $\frac{\partial Z^{*}(\tau)}{\partial \tau^{\beta}} = [0]$. Finally, the matrix Ξ(τ) collects the derivatives of the Lagrange multipliers λ^w and λ^z. It is defined as:

$$\Xi(\tau) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} \otimes \operatorname{diag}\left(\Xi^{z}_{1}(\tau), \ldots, \Xi^{z}_{G}(\tau)\right) + \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} \otimes \begin{pmatrix} \Xi^{w}_{1}(\tau) & \cdots & \Xi^{w}_{1}(\tau) \\ \vdots & & \vdots \\ \Xi^{w}_{G}(\tau) & \cdots & \Xi^{w}_{G}(\tau) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix} \otimes \operatorname{diag}\left(\Xi^{w}_{1}(\tau), \ldots, \Xi^{w}_{G}(\tau)\right) + \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix} \otimes \operatorname{diag}\left(\Xi^{w}_{1}(\tau), \ldots, \Xi^{w}_{G}(\tau)\right)$$

with:

$$\Xi^{i}_{g}(\tau) = \begin{pmatrix} \xi^{i}_{1g}(\tau) & \cdots & \xi^{i}_{1g}(\tau) \\ \vdots & & \vdots \\ \xi^{i}_{Ng}(\tau) & \cdots & \xi^{i}_{Ng}(\tau) \end{pmatrix}; \quad g = 1, \ldots, G;\ i = z, w$$

where:

$$\xi^{w}_{ng}(\tau) = \left(\sum_{m=1}^{M} (s^{w}_{ngm})^{2}\, w_{ngm}\left(\lambda^{w}(u_{ng}(\tau))\right) - (u_{ng}(\tau))^{2}\right)^{-1}$$

and:

$$\xi^{z}_{ng}(\tau^{\pi}) = \left(\sum_{m=1}^{M} (s^{z}_{ngm})^{2}\, z_{ngm}\left(\lambda^{z}(v_{ng}(\tau^{\pi}))\right) - (v_{ng}(\tau^{\pi}))^{2}\right)^{-1}$$

By the Cauchy–Schwarz inequality, the symmetry assumption on the supports, and the adding-up conditions, the matrix $-\frac{\partial \lambda(\tau)}{\partial \tau} - Z^{*}(\tau)\left(\Xi(\tau) \odot Z^{*}(\tau)\right)'$ is negative definite. Next, we prove consistency and asymptotic normality of the GME-NLP estimator.

Theorem 1. Under the regularity conditions R1–R5, the GME-NLP estimator, θ̂ = vec(π̂, δ̂), is a consistent estimator of the true coefficient values θ = vec(π, δ).

Proof. Let Δ represent a bounded, convex, and dense parameter space such that the true coefficient values satisfy θ ∈ Δ. Consider the just-identified case. From Equations (5)–(8), the component:

$$\max_{p^{\pi}, p^{\gamma}, p^{\beta}, w, z} \left\{ -w' \ln w \right\}$$

is not a function of p^π or z almost everywhere. Furthermore, it is not a function of the reduced form coefficients satisfying the identification conditions that are discussed after Equation (4). In addition, the nonstochastic terms −p^π′ ln p^π, −p^γ′ ln p^γ, and −p^β′ ln p^β are asymptotically irrelevant: they vanish in the convergence of the scaled Hessian $\frac{1}{N}H$. Accordingly, the GME-NLP estimates of the reduced form parameters, π̂, are asymptotically and uniquely determined by:

$$\hat{\pi} = \arg\max_{\tau^{\pi}} \left\{ -z(\tau^{\pi})' \ln z(\tau^{\pi}) \right\}$$

subject to Equation (7) and the normalization condition in Equation (8). The π̂ are consistent, i.e., $\hat{\pi} \xrightarrow{p} \pi$, which is proved in Proposition 1 below.

Next, define the conditional estimator:

$$\hat{\delta}(\tau^{\pi}) = \left(\hat{\gamma}(\tau^{\pi})', \hat{\beta}(\tau^{\pi})'\right)' = \arg\max_{\tau^{\gamma}, \tau^{\beta}\,|\,\tau^{\pi}} F(\tau)$$

for τ^π in the parameter set that satisfies the identification conditions. By [32]:

$$\hat{\gamma}(\tau^{\pi}) \xrightarrow{p} \gamma(\tau^{\pi}) \quad \text{and} \quad \hat{\beta}(\tau^{\pi}) \xrightarrow{p} \beta(\tau^{\pi})$$

and, at the true reduced form value:

$$\gamma(\pi) = \gamma \quad \text{and} \quad \beta(\pi) = \beta$$

Then by [39]:

$$\left(\hat{\pi}, \hat{\gamma}(\hat{\pi})', \hat{\beta}(\hat{\pi})'\right) \xrightarrow{p} (\pi, \gamma, \beta)$$

which establishes consistency for the just-identified case. Further results pertaining to the overidentified case are available from the authors upon request.

Theorem 2. Under the conditions of Theorem 1, the GME-NLP estimator, δ̂ = vec(δ̂_1, …, δ̂_G), is asymptotically normally distributed as $\hat{\delta} \stackrel{a}{\sim} N\left(\delta, \frac{1}{N}\Omega_{\xi}^{-1}\Omega_{\Sigma}\Omega_{\xi}^{-1}\right)$.

Proof. Let δ̂ be the GME-NLP estimator of δ = vec(δ_1, …, δ_G). Expand the gradient vector in a Taylor series around δ to obtain:

$$\nabla(\hat{\delta}) = \nabla(\delta) + H(\delta^{*})(\hat{\delta} - \delta)$$

where δ* lies between δ̂ and δ. Since δ̂ is a consistent estimator of δ, $\delta^{*} \xrightarrow{p} \delta$. Using this result and the fact that ∇(δ̂) = [0] at the optimum:

$$\sqrt{N}(\hat{\delta} - \delta) \stackrel{d}{=} -\left(\frac{1}{N}H(\delta)\right)^{-1}\left(\frac{1}{\sqrt{N}}\nabla(\delta)\right)$$

where the left- and right-hand side terms have equivalent limiting distributions. Note that $\frac{1}{N}H(\delta) = -\frac{1}{N}Z\left(\Xi^{w} \odot Z\right)' + O_{p}\left(\frac{1}{N}\right)$, where Z is the block diagonal matrix Z = diag(Z_1, …, Z_G). From the regularity conditions, $\frac{1}{N}H(\delta) \xrightarrow{p} -\Omega_{\xi} = -\operatorname{diag}(\xi_{1}\Psi_{1}, \ldots, \xi_{G}\Psi_{G})$, where Ω_ξ is a positive definite matrix. Because $\xi_{ng} = \xi^{w}_{ng}(\theta) = \xi^{z}_{ng}(\theta)$ are iid for n = 1, …, N, we have ξ_g = E[ξ_ng].

The scaled gradient term is asymptotically normally distributed as:

$$\frac{1}{\sqrt{N}}\nabla(\delta) \xrightarrow{d} N\left([0], \Omega_{\Sigma}\right)$$

with covariance matrix $\frac{1}{N}Z\left(\Sigma_{\lambda} \otimes I\right)Z' \rightarrow \Omega_{\Sigma}$, where Σ_λ is the (G × G) covariance matrix of the $\lambda^{w}_{g}$'s (see [40,41]). From the above results and by applying Slutsky's Theorem:

$$\sqrt{N}(\hat{\delta} - \delta) \stackrel{a}{\sim} N\left([0], \Omega_{\xi}^{-1}\Omega_{\Sigma}\Omega_{\xi}^{-1}\right)$$

which yields the asymptotic distribution:

$$\hat{\delta} \stackrel{a}{\sim} N\left(\delta, \frac{1}{N}\Omega_{\xi}^{-1}\Omega_{\Sigma}\Omega_{\xi}^{-1}\right)$$
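In practice, the limiting matrices in Theorem 2 would be replaced by sample analogues. The sketch below is our own plug-in construction, not the authors' GAUSS code: Ψ_g is approximated by Z_gZ_g′/N, and the ξ_g estimates and Σ_λ are assumed given.

```python
import numpy as np
from scipy.linalg import block_diag

def gme_nlp_cov(Z_blocks, xi, Sigma_lam):
    """Plug-in sandwich covariance suggested by Theorem 2 (a sketch under our
    own conventions): Z_blocks[g] is the (q_g x N) data block of equation g,
    xi[g] estimates E[xi_ng], and Sigma_lam is the (G x G) covariance matrix
    of the lambda^w_g's."""
    G = len(Z_blocks)
    N = Z_blocks[0].shape[1]
    # Omega_xi = diag(xi_g * Psi_g), with Psi_g approximated by Z_g Z_g' / N
    Omega_xi = block_diag(*[xi[g] * (Z_blocks[g] @ Z_blocks[g].T) / N for g in range(G)])
    # Omega_Sigma approximated by (1/N) * Z (Sigma_lam kron I_N) Z'
    Z = block_diag(*Z_blocks)
    Omega_Sigma = Z @ np.kron(Sigma_lam, np.eye(N)) @ Z.T / N
    Oi = np.linalg.inv(Omega_xi)
    return Oi @ Omega_Sigma @ Oi / N   # estimated covariance of delta_hat

# toy usage with simulated blocks (all values illustrative)
rng = np.random.default_rng(0)
Zb = [rng.normal(size=(2, 50)) for _ in range(2)]
V = gme_nlp_cov(Zb, xi=[0.4, 0.5], Sigma_lam=np.eye(2) * 0.1)
print(V.shape)   # (4, 4)
```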
Proposition 1. Under the assumptions of Theorem 1, the reduced form estimates of (3) are consistent: $\hat{\pi} = \arg\max_{\tau^{\pi}}\left\{-z(\tau^{\pi})' \ln z(\tau^{\pi})\right\} \xrightarrow{p} \pi$.

Proof. With the exception that we account for contemporaneous correlation in the errors, this parallels the consistency proof for the data-constrained GME estimator of the general linear model [32]. Consider the conditional maximum value function:

$$F_{R}(\tau^{\pi}) = -\sum_{n,g}\left[\lambda^{z}_{ng} v_{ng}(\tau^{\pi}) - \ln\left(\sum_{m}\exp(\lambda^{z}_{ng} s^{z}_{ngm})\right)\right]$$

where $v_{ng}(\tau^{\pi}) = y_{ng} - X_{n\cdot}\tau^{\pi}_{g}$.

We expand F_R(τ^π) about π in a Taylor series:

$$F_{R}(\tau^{\pi}) = F_{R}(\pi) + \nabla_{\pi}(\pi)'(\tau^{\pi} - \pi) + \frac{1}{2}(\tau^{\pi} - \pi)' H_{R}(\pi^{*})(\tau^{\pi} - \pi)$$

where π* lies between τ^π and π. The gradient vector is given by $\nabla_{\pi} = (I \otimes X')\lambda^{z}$ and the Hessian matrix by $H_{R} = -(I \otimes X')\left(\Xi^{z} \odot (I \otimes X')\right)'$. The scaled gradient term is asymptotically normally distributed, $\frac{1}{\sqrt{N}}\nabla_{\pi}(\pi) \xrightarrow{d} N([0], \Omega_{R})$, by a multivariate version of Liapounov's central limit theorem (see [40,41]). The covariance matrix is $\frac{1}{N}Z_{R}(\Sigma_{\lambda} \otimes I)Z_{R}' \rightarrow \Omega_{R}$, where Z_R = (I ⊗ X′) and Σ_λ is the (G × G) covariance matrix of the $\lambda^{z}_{g}$'s. Hence the gradient is bounded in probability. The quadratic term in the Taylor expansion can be bounded above by:

$$\frac{1}{2}(\tau^{\pi} - \pi)' H_{R}(\pi^{*})(\tau^{\pi} - \pi) \leq -\frac{\varphi_{s} N}{2}\left\|\tau^{\pi} - \pi\right\|^{2}$$

where φ_s denotes the smallest eigenvalue of $-\frac{1}{N}H_{R}(\pi^{*})$ for any π* between τ^π and π, and $\|a\| = \left[\sum_{k=1}^{K} a_{k}^{2}\right]^{1/2}$ denotes the standard Euclidean norm.

Combining these elements, for all ε > 0, $P\left(\max_{\tau^{\pi}:\,\|\tau^{\pi} - \pi\| > \varepsilon}\left\{F_{R}(\tau^{\pi}) - F_{R}(\pi)\right\} < 0\right) \rightarrow 1$ as N → ∞.

Thus, $\hat{\pi} = \arg\max_{\tau^{\pi}}\left\{-z(\tau^{\pi})' \ln z(\tau^{\pi})\right\} \xrightarrow{p} \pi$.

B. Model Estimation: Computational Considerations

To estimate the GME-NLP model, the conditional entropy function (Equation (A15)) was maximized. Note that the full constrained maximization problem in Equations (5)–(8) involves (Q + 2GNM) unknown parameters, which becomes computationally impractical as the sample size N grows. For example, consider an empirical application with Q = 36 coefficients, G = 3 equations, and M = 3 support points. Even for a small number of observations, say N = 50, the number of unknowns is 36 + 2(3)(50)(3) = 936. In contrast, maximizing Equation (A15) requires estimating only the Q coefficients, regardless of the value of N.

The GME-NLP estimator uses the reduced and structural form models as data constraints, with a dual objective function as part of its information set. To completely specify the GME-NLP model, the user supplies support points (upper and lower truncation points plus intermediate points) for the individual parameters, support points for each error term, and Q starting values for the parameter coefficients. In the Monte Carlo analysis and the empirical application, the model was estimated using the unconstrained optimizer OPTIMUM in the econometric software GAUSS, with 3 support points for each parameter and error term. To increase the efficiency of the estimation process, the analytical gradient and Hessian were coded in GAUSS and called in the optimization routine; this also provided an opportunity to empirically validate the derivations of the gradient, Hessian, and covariance matrix. Given suitable starting values, the optimization routine generally converged within seconds for the empirical examples discussed above, and solutions were quite robust to alternative starting values.
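The sketch below illustrates the concentration idea in a deliberately simpler setting: a single linear equation estimated by data-constrained GME, with the error and coefficient weights concentrated out as in Appendix A so that the optimizer searches over the Q coefficients only. It is our own minimal Python analogue, not the authors' GAUSS code; the clipping guard, support choices, and scipy usage are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def inner_entropy(value, s):
    """Maximized entropy of weights on the support s subject to the mean
    constraint sum_m s_m p_m = value (the conditioning device used throughout
    this Appendix). Returns ln(sum_m exp(lam*s_m)) - lam*value, with lam
    obtained by Newton steps whose slope is the variance of s under p."""
    r = np.clip(value, 0.99 * s[0], 0.99 * s[-1])   # keep the target reachable
    lam = 0.0
    for _ in range(200):
        a = lam * s
        p = np.exp(a - a.max()); p /= p.sum()
        mean = p @ s
        var = p @ s**2 - mean**2
        if abs(r - mean) < 1e-10:
            break
        lam += (r - mean) / var
    a = lam * s
    return a.max() + np.log(np.exp(a - a.max()).sum()) - lam * r

def neg_dual(beta, y, X, s_err, s_coef):
    """Negative of the concentrated entropy objective: error entropy plus
    coefficient entropy, minimized over the Q coefficients only."""
    resid = y - X @ beta
    total = sum(inner_entropy(r, s_err) for r in resid)
    total += sum(inner_entropy(b, s_coef) for b in beta)
    return -total

rng = np.random.default_rng(1)
N = 50
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=N)
s_err = np.array([-6.0, 0.0, 6.0])      # symmetric 3-point error support
s_coef = np.array([-10.0, 0.0, 10.0])   # wide 3-point coefficient support
fit = minimize(neg_dual, x0=np.zeros(2), args=(y, X, s_err, s_coef),
               method="Nelder-Mead")
print(fit.x)   # GME estimates of the two coefficients
```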

References

  1. Theil, H. Principles of Econometrics; John Wiley & Sons: New York, NY, USA, 1971. [Google Scholar]
  2. Zellner, A.; Theil, H. Three-stage least squares: Simultaneous estimation of simultaneous equations. Econometrica 1962, 30, 54–78. [Google Scholar]
  3. Fuller, W.A. Some properties of a modification of the limited information estimator. Econometrica 1977, 45, 939–953. [Google Scholar]
  4. Koopmans, T.C. Statistical Inference in Dynamic Economic Models; Cowles Commission Monograph 10; Wiley: New York, NY, USA, 1950. [Google Scholar]
  5. Hausman, J.A. Full information instrumental variable estimation of simultaneous equations systems. Ann. Econ. Soc. Meas 1974, 3, 641–652. [Google Scholar]
  6. Zellner, A. Statistical analysis of econometric models. J. Am. Stat. Assoc 1979, 74, 628–643. [Google Scholar]
  7. Zellner, A. The finite sample properties of simultaneous equations' estimates and estimators: Bayesian and non-Bayesian approaches. J. Econom 1998, 83, 185–212. [Google Scholar]
  8. Phillips, P.C.B. Exact Small Sample Theory in the Simultaneous Equations Model. In Handbook of Econometrics; Griliches, Z., Intrilligator, M.D., Eds.; Elsevier: New York, NY, USA, 1983. [Google Scholar]
  9. Golan, A.; Judge, G.; Miller, D. Maximum Entropy Econometrics: Robust Estimation with Limited Data; John Wiley & Sons: New York, NY, USA, 1996. [Google Scholar]
  10. Golan, A.; Judge, G.; Miller, D. Information Recovery in Simultaneous Equations Statistical Models. In Handbook of Applied Economic Statistics; Ullah, A., Giles, D., Eds.; Marcel Dekker: New York, NY, USA, 1997. [Google Scholar]
  11. West, K.D.; Wilcox, D.W. A comparison of alternative instrumental variables estimators of a dynamic linear model. J. Bus. Econ. Stat 1996, 14, 281–293. [Google Scholar]
  12. Hansen, L.P.; Heaton, J.; Yaron, A. Finite-sample properties of some alternative GMM estimators. J. Bus. Econ. Stat 1996, 14, 262–280. [Google Scholar]
  13. Tukey, J.W. A survey of sampling from contaminated distributions. In Contributions to Probability and Statistics; Olkin, I., Ed.; Stanford University Press: Stanford, CA, USA, 1960. [Google Scholar]
  14. Huber, P.J. Robust Statistics; John Wiley & Sons: New York, NY, USA, 1981. [Google Scholar]
  15. Hampel, F.R.; Ronchetti, E.M.; Rousseeuw, P.J.; Stahel, W.A. Robust Statistics: The Approach Based on Influence Functions; John Wiley & Sons: New York, NY, USA, 1986. [Google Scholar]
  16. Koenker, R.; Machado, J.A.F.; Skeels, C.L.; Welsh, A.H. Momentary lapses: Moment expansions and the robustness of minimum distance estimation. Econom. Theory 1994, 10, 172–197. [Google Scholar]
  17. Kitamura, Y.; Stutzer, M. An information-theoretic alternative to generalized method of moments estimation. Econometrica 1997, 65, 861–874. [Google Scholar]
  18. Imbens, G.; Spady, R.; Johnson, P. Information theoretic approaches to inference in moment condition models. Econometrica 1998, 66, 333–357. [Google Scholar]
  19. Van Akkeren, M.; Judge, G.G.; Mittelhammer, R.C. Generalized moment based estimation and inference. J. Econom 2002, 107, 127–148. [Google Scholar]
  20. Mittelhammer, R.; Judge, G. Endogeneity and Moment Based Estimation under Squared Error Loss. In Handbook of Applied Econometrics and Statistics; Wan, A., Ullah, A., Chaturvedi, A., Eds.; Marcel Dekker: New York, NY, USA, 2001. [Google Scholar]
  21. Mittelhammer, R.C.; Judge, G.; Miller, D. Econometric Foundations; Cambridge University Press: New York, NY, USA, 2000. [Google Scholar]
  22. Marsh, T.L.; Mittelhammer, R.C. Generalized Maximum Entropy Estimation of a First Order Spatial Autoregressive Model. In Advances in Econometrics, Spatial and Spatiotemporal Econometrics; LeSage, J.P., Ed.; Elsevier: New York, NY, USA, 2004; Volume 18. [Google Scholar]
  23. Ciavolino, E.; Dahlgaard, J.J. Simultaneous Equation Model based on the generalized maximum entropy for studying the effect of management factors on enterprise performance. J. Appl. Stat 2009, 36, 801–815. [Google Scholar]
  24. Papalia, R.B.; Ciavolino, E. GME estimation of spatial structural equations models. J. Classif 2011, 28, 126–141. [Google Scholar]
  25. Zellner, A. Estimation of regression relationships containing unobservable independent variables. Int. Econ. Rev 1970, 11, 441–454. [Google Scholar]
  26. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J 1948, 27, 379–423. [Google Scholar]
  27. Kullback, S. Information Theory and Statistics; John Wiley & Sons: New York, NY, USA, 1959. [Google Scholar]
  28. Pompe, B. On some entropy measures in data analysis. Chaos Solitons Fractals 1994, 4, 83–96. [Google Scholar]
  29. Zellner, A. Bayesian and Non-Bayesian Estimation Using Balanced Loss Functions. In Statistical Decision Theory and Related Topics; Gupta, S., Berger, J., Eds.; Springer Verlag: New York, NY, USA, 1994. [Google Scholar]
  30. Dreze, J.H.; Richard, J.F. Bayesian Analysis of Simultaneous Equations Systems. In Handbook of Econometrics; Griliches, Z., Intrilligator, M.D., Eds.; Elsevier: New York, NY, USA, 1983. [Google Scholar]
  31. Malinvaud, E. Statistical Methods of Econometrics, 3rd ed.; North-Holland: Amsterdam, The Netherlands, 1980. [Google Scholar]
  32. Mittelhammer, R.C.; Cardell, N.S.; Marsh, T.L. The data-constrained generalized maximum entropy estimator of the GLM: Asymptotic theory and inference. Entropy 2013, 15, 1756–1775. [Google Scholar]
  33. Pukelsheim, F. The three sigma rule. Am. Stat 1994, 48, 88–91. [Google Scholar]
  34. Davidson, R.; MacKinnon, J.G. Estimation and Inference in Econometrics; Oxford University Press: New York, NY, USA, 1993. [Google Scholar]
  35. Mittelhammer, R.C. Mathematical Statistics for Economics and Business; Springer: New York, NY, USA, 1996. [Google Scholar]
  36. Cragg, J.G. On the relative small sample properties of several structural-equation estimators. Econometrica 1967, 35, 89–110. [Google Scholar]
  37. Tsurumi, H. Comparing Bayesian and Non-Bayesian Limited Information Estimators. In Bayesian and Likelihood Methods in Statistics and Econometrics; Geisser, S., Hodges, J.S., Press, S.J., Zellner, A., Eds.; North Holland Publishing: Amsterdam, The Netherlands, 1990. [Google Scholar]
  38. Klein, L.R. Economic Fluctuations in the United States, 1921–1941; John Wiley & Sons: New York, NY, USA, 1950. [Google Scholar]
  39. Rao, C.R. Linear Statistical Inference and Its Applications, 2nd ed; John Wiley & Sons: New York, NY, USA, 1973. [Google Scholar]
  40. Hoadley, B. Asymptotic properties of maximum likelihood estimators for the independent but not identically distributed case. Ann. Math. Stat 1971, 42, 1977–1991. [Google Scholar]
  41. White, H. Asymptotic Theory for Econometricians; Academic Press: New York, NY, USA, 1984. [Google Scholar]
Table 1. Mean value of parameter estimates from 1000 Monte Carlo simulations using 2SLS, 3SLS, GME-GJM, and GME-NLP.

Obs     2SLS    3SLS    GME-GJM  GME-NLP

γ21 = 0.222
5       -       -       0.331    0.353
25      0.165   0.186   0.304    0.311
100     0.207   0.220   0.357    0.259
400     0.219   0.222   0.373    0.234
1600    0.223   -       0.393    0.227

γ12 = 0.267
5       -       -       0.267    0.301
25      0.274   0.241   0.292    0.304
100     0.264   0.278   0.278    0.283
400     0.272   0.276   0.293    0.274
1600    0.268   -       0.319    0.269

γ32 = 0.046
5       -       -       0.144    0.158
25      0.067   0.103   0.107    0.144
100     0.044   0.048   0.101    0.083
400     0.039   0.040   0.095    0.053
1600    0.046   -       0.075    0.048

γ13 = 0.087
5       -       -       0.197    0.223
25      0.115   0.114   0.182    0.208
100     0.084   0.085   0.165    0.139
400     0.083   0.083   0.155    0.100
1600    0.088   -       0.153    0.093
Table 2. Standard error (SE) and mean square error (MSE) of parameter estimates from 1000 Monte Carlo simulations using 3SLS and GME-NLP.

Obs     SE: 3SLS   SE: GME-NLP   MSE: 3SLS   MSE: GME-NLP

γ21 = 0.222
5       -          0.101         -           0.027
25      0.442      0.155         0.197       0.032
100     0.143      0.116         0.021       0.015
400     0.065      0.064         0.004       0.004

γ12 = 0.267
5       -          0.103         -           0.012
25      1.281      0.166         1.641       0.029
100     0.459      0.183         0.211       0.034
400     0.198      0.149         0.039       0.022

γ32 = 0.046
5       -          0.168         -           0.041
25      0.842      0.256         0.711       0.075
100     0.449      0.226         0.201       0.052
400     0.183      0.158         0.033       0.025

γ13 = 0.087
5       -          0.120         -           0.033
25      0.669      0.202         0.448       0.055
100     0.269      0.188         0.073       0.038
400     0.133      0.121         0.018       0.015
Table 3. Rejection probabilities for true and false hypotheses.

Single hypotheses: asymptotic normal test

GME-NLP
Obs    γ21 = 0.222   γ21 = 0   γ12 = 0.267   γ12 = 0   γ32 = 0.046   γ32 = 0   γ13 = 0.087   γ13 = 0
25     0.021         0.230     0.001         0.008     0.021         0.022     0.002         0.005
100    0.046         0.600     0.005         0.051     0.013         0.019     0.009         0.025
400    0.066         0.980     0.012         0.276     0.033         0.042     0.032         0.092

3SLS
Obs    γ21 = 0.222   γ21 = 0   γ12 = 0.267   γ12 = 0   γ32 = 0.046   γ32 = 0   γ13 = 0.087   γ13 = 0
25     0.149         0.197     0.101         0.124     0.100         0.108     0.102         0.104
100    0.064         0.424     0.036         0.135     0.050         0.052     0.051         0.068
400    0.043         0.964     0.031         0.338     0.041         0.045     0.045         0.094

Joint hypotheses: asymptotic chi-square Wald test

       GME-NLP                       3SLS
Obs    γ21 = 0.222,   γ21 = 0,      γ21 = 0.222,   γ21 = 0,
       γ32 = 0.046    γ32 = 0       γ32 = 0.046    γ32 = 0
25     0.014          0.169         0.189          0.256
100    0.029          0.433         0.082          0.357
400    0.047          0.961         0.047          0.934
Table 4. Mean, standard error (SE), and mean square error (MSE) of parameter estimates from 1000 Monte Carlo simulations for GME-NLP with 3, 4, and 5-sigma truncation rules.

       3-Sigma                 4-Sigma                 5-Sigma
Obs    Mean   SE     MSE      Mean   SE     MSE      Mean   SE     MSE

γ21 = 0.222
25     0.311  0.155  0.030    0.336  0.133  0.031    0.345  0.111  0.033
100    0.259  0.116  0.015    0.277  0.111  0.015    0.292  0.108  0.017
400    0.234  0.064  0.004    0.244  0.066  0.005    0.247  0.063  0.005

γ12 = 0.267
25     0.304  0.166  0.029    0.303  0.120  0.016    0.301  0.095  0.010
100    0.283  0.183  0.034    0.283  0.146  0.021    0.285  0.118  0.014
400    0.274  0.149  0.022    0.271  0.130  0.017    0.272  0.115  0.013

γ32 = 0.046
25     0.144  0.256  0.075    0.144  0.203  0.051    0.164  0.152  0.037
100    0.083  0.226  0.052    0.101  0.199  0.042    0.113  0.158  0.029
400    0.053  0.158  0.025    0.063  0.137  0.019    0.068  0.128  0.017

γ13 = 0.087
25     0.208  0.202  0.055    0.210  0.145  0.036    0.217  0.109  0.029
100    0.139  0.188  0.038    0.157  0.157  0.030    0.176  0.139  0.027
400    0.100  0.121  0.015    0.111  0.112  0.013    0.127  0.106  0.013
Table 5. Mean and mean square error (in parentheses) of parameter estimates from 1000 Monte Carlo simulations for 3SLS and GME-NLP with contaminated normal distribution.

       0.90N(0, Σ) + 0.10F(2,3)         0.50N(0, Σ) + 0.50F(2,3)         0.10N(0, Σ) + 0.90F(2,3)
Obs    3SLS            GME-NLP          3SLS            GME-NLP          3SLS            GME-NLP

γ21 = 0.222
25     0.184 (0.159)   0.320 (0.032)    0.278 (0.406)   0.414 (0.064)    0.350 (1.404)   0.451 (0.082)
100    0.226 (0.023)   0.262 (0.016)    0.243 (0.082)   0.329 (0.037)    0.268 (0.204)   0.368 (0.050)

γ12 = 0.267
25     0.262 (1.058)   0.309 (0.029)    0.427 (1.195)   0.385 (0.041)    0.608 (4.578)   0.422 (0.056)
100    0.267 (0.353)   0.282 (0.036)    0.356 (0.551)   0.339 (0.038)    0.374 (0.726)   0.364 (0.44)

γ32 = 0.046
25     0.084 (0.794)   0.111 (0.067)    −0.009 (0.779)  0.105 (0.058)    −0.070 (2.489)  0.097 (0.062)
100    0.061 (0.326)   0.082 (0.049)    0.010 (0.395)   0.067 (0.048)    −0.003 (0.601)  0.075 (0.057)

γ13 = 0.087
25     0.081 (0.330)   0.198 (0.048)    0.094 (0.401)   0.198 (0.056)    0.083 (1.366)   0.219 (0.067)
100    0.093 (0.061)   0.142 (0.036)    0.093 (0.059)   0.144 (0.038)    0.077 (0.124)   0.150 (0.055)
Table 6. Structural parameter estimates and standard errors (in parentheses) of Klein's Model I using OLS, 2SLS, 3SLS, and GME-NLP.

Structural parameter   OLS              2SLS             3SLS             GME-NLP 3-sigma   GME-NLP 5-sigma

Consumption
β11    16.237 (1.303)   16.555 (1.468)   16.441 (12.603)  14.405 (2.788)   14.374 (2.625)
γ11    0.796 (0.040)    0.810 (0.045)    0.790 (0.038)    0.772 (0.073)    0.750 (0.071)
γ21    0.193 (0.091)    0.017 (0.131)    0.125 (0.108)    0.325 (0.372)    0.280 (0.306)
β21    0.090 (0.091)    0.216 (0.119)    0.163 (0.100)    0.120 (0.332)    0.206 (0.274)
R²     0.981            0.929            0.928            0.916            0.922

Investment
β12    10.126 (5.466)   20.278 (8.383)   28.178 (6.79)    8.394 (10.012)   9.511 (10.940)
γ12    0.480 (0.097)    0.150 (0.193)    −0.013 (0.162)   0.440 (0.386)    0.358 (0.362)
β22    0.333 (0.101)    0.616 (0.181)    0.756 (0.153)    0.340 (0.342)    0.350 (0.325)
β32    −0.112 (0.027)   −0.158 (0.040)   −0.195 (0.033)   −0.100 (0.046)   −0.100 (0.051)
R²     0.931            0.837            0.831            0.819            0.811

Labor
β13    1.497 (1.270)    1.500 (1.276)    1.797 (1.12)     2.423 (3.112)    1.859 (3.157)
γ13    0.439 (0.032)    0.439 (0.040)    0.400 (0.032)    0.481 (0.255)    0.381 (0.178)
β23    0.146 (0.037)    0.147 (0.043)    0.181 (0.034)    0.087 (0.272)    0.200 (0.180)
β33    0.130 (0.032)    0.130 (0.032)    0.150 (0.028)    0.112 (0.091)    0.114 (0.085)
R²     0.987            0.942            0.941            0.905            0.907
