Article

Information Theory Estimators for the First-Order Spatial Autoregressive Model

by Evgeniy V. Perevodchikov 1, Thomas L. Marsh 2,* and Ron C. Mittelhammer 2
1 The Institute for Innovation, Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050, Russia
2 School of Economic Sciences, Washington State University, PO Box 646210, Pullman, WA 99164, USA
* Author to whom correspondence should be addressed.
Entropy 2012, 14(7), 1165-1185; https://doi.org/10.3390/e14071165
Submission received: 25 May 2012 / Revised: 29 June 2012 / Accepted: 30 June 2012 / Published: 4 July 2012

Abstract: Information theoretic estimators for the first-order spatial autoregressive model are introduced, small sample properties are investigated, and the estimator is applied empirically. Monte Carlo experiments are used to compare finite sample performance of more traditional spatial estimators to three different information theoretic estimators, including maximum empirical likelihood, maximum empirical exponential likelihood, and maximum log Euclidean likelihood. Information theoretic estimators are found to be robust to selected specifications of spatial autocorrelation and may dominate traditional estimators in the finite sample situations analyzed, except for the quasi-maximum likelihood estimator which competes reasonably well. The information theoretic estimators are illustrated via an application to hedonic housing pricing.

1. Introduction

Information Theoretic (IT) estimators are alternatives to traditional estimators [1,2,3] and have been applied in a variety of modeling contexts [4,5,6,7,8]. IT estimators are semi-nonparametric in nature and lead to a wide and flexible class of estimators that are minimally power divergent from a reference distribution while remaining consistent with both a nonparametric specification of the model and a parametric specification of empirical moment conditions. The estimators are generally asymptotically efficient and can have superior sampling properties in terms of mean squared error relative to traditional estimators [9]. Application of IT estimators to spatial regression models is currently limited [10,11]. For such estimators to be more widely adopted, it is important that analysts be aware of the ways in which they can be implemented empirically, and that they also understand the estimators' finite sample properties for work with the small-to-medium sized samples prevalent in applied work.
Spatial estimators are particularly relevant when variables, related through their location, can influence economic behavior and equilibrium outcomes. For instance, spatial patterns are found in regional adoption of agricultural technology [12] and in a household’s behavior when a household gains utility in consuming bundles similar to those consumed by its neighbors [13]. Spatial spillover effects arise from technical innovations [14] and geographic proximity of a firm’s competitors might directly determine its marketing strategy [15]. Recent empirical studies in agriculture focused on spatial aspects of technology adoption, structure of production, market efficiency, arbitrage, and integration [16,17,18,19,20,21,22,23].
The objectives of this paper are to introduce a generalized information theoretic estimator of the first-order spatial autoregressive model, compare finite sample properties of selected IT estimators with those of traditional ones, and provide an illustration of the estimator's implementation in the context of an empirical application. The IT estimators analyzed include the maximum empirical likelihood, maximum empirical exponential likelihood, and maximum log Euclidean likelihood estimators. Finite sample properties of these estimators are investigated in the context of an extensive Monte Carlo analysis conducted over a range of finite sample sizes, a range of spatial autoregressive coefficients, selected forms of heteroscedasticity, different distributional assumptions on disturbance terms, and alternative forms of spatial weight matrices typically used in applied econometric work. The estimators are compared to each other as well as to traditional estimators on the basis of root mean square error, and response functions are used to summarize the findings of all Monte Carlo experiments. Each of the IT estimators is then applied to the Harrison and Rubinfeld [24] example of hedonic housing prices and demand for clean air [25]. We draw implications and provide concluding remarks in the final section of the paper.

2. First-Order Spatial Autoregressive Model and Traditional Estimators

Consider a first-order spatial autoregressive model:
Y = ρWY + Xβ + ε,  |ρ| < 1    (1)
where Y is an n × 1 dependent variable vector, X is an n × k matrix of exogenous variables with full column rank, W is an n × n spatial proximity matrix of constants, β is a k × 1 vector of parameters, ρ is a scalar spatial autoregressive parameter, and ε is an n × 1 vector of unobserved residuals with E(ε) = 0 and E(εε′) = σ²Ω [26,27,28]. The spatial proximity matrix is row normalized [27], or "row stochastic" [29], so that row sums are unity and all diagonal elements are zero; i.e., the spatial weight matrix is nonsingular and the reduced form of (1) is well defined [30]. The weights are fixed in repeated sampling and are constructed based on the spatial configuration of points making up a spatial sample. LeSage and Pace [31] provide proper limits on the parameter space for the spatial autoregressive parameter and justification for using an interval from −1 to 1 in most applied situations. This specification of disturbances allows for general patterns of autocorrelation and heteroscedasticity.
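To make the specification concrete, the reduced form of (1), Y = (I − ρW)⁻¹(Xβ + ε), can be simulated directly. The following is a minimal numpy sketch; the helper names (`row_normalize`, `simulate_sar`) and the 5-unit line-graph neighbor structure are illustrative, not from the paper:

```python
import numpy as np

def row_normalize(W):
    """Zero the diagonal and scale each row of W to sum to one."""
    W = np.array(W, dtype=float)
    np.fill_diagonal(W, 0.0)
    return W / W.sum(axis=1, keepdims=True)

def simulate_sar(W, X, beta, rho, sigma=1.0, seed=None):
    """Draw Y from the reduced form Y = (I - rho W)^{-1}(X beta + eps)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    eps = sigma * rng.standard_normal(n)
    return np.linalg.solve(np.eye(n) - rho * W, X @ beta + eps)

# toy example: 5 units on a line, each unit's neighbors are the adjacent units
n = 5
raw = np.zeros((n, n))
for i in range(n - 1):
    raw[i, i + 1] = raw[i + 1, i] = 1.0
W = row_normalize(raw)
X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
Y = simulate_sar(W, X, beta=np.array([1.0, 1.0]), rho=0.4, sigma=0.5, seed=0)
```

Row normalization guarantees the unit row sums and zero diagonal assumed above, and solving against (I − ρW) avoids forming the inverse explicitly.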
A relatively wide variety of alternative estimation procedures for the model, with and without spatially correlated disturbances, have been proposed in the literature. Bayesian estimators are not considered in the current study, but can be found in [32]. The ordinary least squares (OLS) estimator of (1) is generally biased and inconsistent due to simultaneity bias [33,34,35]. Under appropriate regularity assumptions, the maximum likelihood (ML) estimator of (1) (with a multivariate normal distribution of disturbances) is consistent and asymptotically efficient [27,36]. The quasi-maximum likelihood (QML) estimator is consistent and asymptotically normal [36]. The maximum likelihood approaches were not practical for large datasets until computational innovations were made (e.g., Barry and Pace [37]). Generalized method of moments (GMM) estimation has been proposed for its computational simplicity, less restrictive distributional assumptions, and good asymptotic properties [38,39,40,41,42]. The generalized spatial two-stage least squares (GS2SLS) estimator is consistent and asymptotically normal [38], and the best spatial two-stage least squares (BS2SLS) estimator is also an asymptotically optimal instrumental variable estimator [43]. However, the two-stage least squares estimators are inefficient relative to the ML estimator and may be inconsistent [39,43]. The best generalized method of moments (BGMM) estimator incorporates additional moment conditions and attains the same limiting distribution as the ML estimator (with normal disturbances) [41]. A computationally simple sequential GMM estimator based on optimization of a concentrated objective function with respect to a single spatial effect parameter may be as efficient as the BGMM estimator [42]. All of the above estimators of the first-order spatial autoregressive model and their asymptotic properties were derived under the assumption that the disturbance term is homoscedastic.
Regarding the computational implementation of the preceding estimators, assuming a normally distributed disturbance term, ε ~ N(0, σ²Ω), as is often done in applications, the ML estimator of the parameter vector in (1) is given by:
[β̂_ML, ρ̂_ML, σ̂²_ML] = argmax_{β, ρ, σ²} [ −(n/2)ln(2π) − (n/2)ln(σ²) − (1/2)ln|Ω| − (1/(2σ²))ε′Ω⁻¹ε + ln|I − ρW| ]    (2)
where I is a conformable identity matrix, ε = (I − ρW)Y − Xβ, and |·| is the determinant operator [27,43]. A difficulty in practice is that there is generally insufficient information with which to specify the parametric form of the likelihood function, so that (2) then represents a quasi-ML approach to estimation. Moreover, the structure of the covariance matrix of the disturbance term is generally unknown. One might contemplate estimation based on less restrictive assumptions about the existence of zero-valued moment conditions.
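Under the homoscedastic special case Ω = I (so ln|Ω| = 0), the log-likelihood in (2) is straightforward to evaluate. Below is a minimal numpy sketch; the function name `sar_loglik` and the simulated circular weight matrix are illustrative, and a full QML routine would of course also maximize this function over the parameters:

```python
import numpy as np

def sar_loglik(theta, Y, X, W):
    """Log-likelihood of (2) for the homoscedastic case Omega = I
    (so the ln|Omega| term vanishes); theta stacks (beta, rho, sigma^2)."""
    n, k = X.shape
    beta, rho, sig2 = theta[:k], theta[k], theta[k + 1]
    A = np.eye(n) - rho * W
    eps = A @ Y - X @ beta
    _, logdet = np.linalg.slogdet(A)          # ln|I - rho W|
    return (-0.5 * n * np.log(2.0 * np.pi) - 0.5 * n * np.log(sig2)
            - (eps @ eps) / (2.0 * sig2) + logdet)

# evaluate on a small simulated sample (circular two-neighbor weight matrix)
rng = np.random.default_rng(2)
n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
theta_true = np.array([1.0, 1.0, 0.4, 1.0])   # beta1, beta2, rho, sigma^2
Y = np.linalg.solve(np.eye(n) - 0.4 * W,
                    X @ theta_true[:2] + rng.standard_normal(n))
ll_true = sar_loglik(theta_true, Y, X, W)
```

Using `slogdet` keeps the Jacobian term ln|I − ρW| numerically stable even when the determinant itself is tiny.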
The model in (1) can be estimated by the computationally less complicated two-stage least squares (2SLS) method. Instrumental variables are generated from exogenous regressors and a spatial weight matrix [38,41]. The 2SLS estimator is specified by:
[ρ̂_2SLS, β̂′_2SLS]′ = [(WY, X)′Z⁽ᵖ⁾(WY, X)]⁻¹(WY, X)′Z⁽ᵖ⁾Y    (3)
where Z is an n × m instrumental variable (IV) matrix with full column rank and m ≥ k + 1, and Z⁽ᵖ⁾ = Z(Z′Z)⁻¹Z′ is the orthogonal projector onto the column space of Z.
The consistent and asymptotically normal generalized spatial 2SLS (GS2SLS) estimator suggested by Kelejian and Prucha [38] defines the IV matrix as Z = [X, WX, WWX]. The asymptotically optimal best spatial 2SLS (BS2SLS) estimator proposed by Lee [43] defines the IV matrix as Z = [X, W(I − ρ̂W)⁻¹X*β̂], where X* has no intercept column, and β̂ and ρ̂ are the estimates from the first stage.
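As an illustration of (3), the following numpy sketch computes the spatial 2SLS estimator with the Kelejian–Prucha-style instruments, dropping the intercept from the spatially lagged instruments because its lag under a row-normalized W is again a constant. The function name `spatial_2sls` and the simulated circular design are illustrative, not from the paper:

```python
import numpy as np

def spatial_2sls(Y, X, W):
    """Spatial 2SLS of (3) with instruments Z = [X, W X*, W^2 X*], where X*
    drops the intercept (assumed to be column 0 of X).
    Returns (rho_hat, beta_hat) for the regressor matrix [WY, X]."""
    Xs = X[:, 1:]
    Z = np.column_stack([X, W @ Xs, W @ W @ Xs])
    D = np.column_stack([W @ Y, X])               # regressors [WY, X]
    PD = Z @ np.linalg.solve(Z.T @ Z, Z.T @ D)    # Z^(p) D, first-stage fit
    theta = np.linalg.solve(PD.T @ D, PD.T @ Y)
    return theta[0], theta[1:]

# demo on simulated data: circular lattice with two equal-weight neighbors
rng = np.random.default_rng(1)
n = 200
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.gamma(2.0, 2.0, n)])
Y = np.linalg.solve(np.eye(n) - 0.4 * W,
                    X @ np.array([1.0, 1.0]) + rng.standard_normal(n))
rho_hat, beta_hat = spatial_2sls(Y, X, W)
```

The projector Z⁽ᵖ⁾ is applied via a linear solve rather than an explicit (Z′Z)⁻¹ for numerical stability.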
The consistent generalized method of moments (GMM) estimator of unknown parameters in (1) can be derived from the empirical moments:
n⁻¹Z′[(I − ρW)Y − Xβ] = 0    (4)
where Z is an (n × m) instrumental variables matrix with full column rank and m ≥ k + 1. The parameters can be estimated as:
[β̂_GMM, ρ̂_GMM] = argmin_{β, ρ} [n⁻¹Z′[(I − ρW)Y − Xβ]]′ Ŵₙ [n⁻¹Z′[(I − ρW)Y − Xβ]]    (5)
where Ŵₙ is traditionally an estimate of the asymptotically optimal weight matrix, equal to the inverse of the estimated covariance matrix of the moment conditions [44].
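Because the moments in (4) are linear in β for any fixed ρ, the GMM estimator in (5) can be computed by concentrating β out in closed form and grid-searching the scalar ρ. A minimal numpy sketch under simplifying assumptions (the first-step weight Ŵₙ = (Z′Z/n)⁻¹, an illustrative simulated design, and the helper name `gmm_sar` are all assumptions of this sketch, not the paper's implementation):

```python
import numpy as np

def gmm_sar(Y, X, W, Z, What=None):
    """GMM estimator of (5): for each candidate rho the moments are linear
    in beta, so beta has a closed form; rho is found by grid search.
    What defaults to (Z'Z/n)^{-1}."""
    n = len(Y)
    if What is None:
        What = np.linalg.inv(Z.T @ Z / n)
    A = X.T @ Z @ What @ Z.T                    # reused for every rho
    best_q, best = np.inf, None
    for rho in np.linspace(-0.95, 0.95, 381):
        y_r = Y - rho * (W @ Y)                 # (I - rho W) Y
        beta = np.linalg.solve(A @ X, A @ y_r)  # FOC of the quadratic in beta
        g = Z.T @ (y_r - X @ beta) / n          # empirical moments (4)
        q = g @ What @ g
        if q < best_q:
            best_q, best = q, (rho, beta)
    return best

# demo on simulated data
rng = np.random.default_rng(4)
n = 200
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.gamma(2.0, 2.0, n)])
Y = np.linalg.solve(np.eye(n) - 0.4 * W,
                    X @ np.array([1.0, 1.0]) + rng.standard_normal(n))
Z = np.column_stack([X, W @ X[:, 1:], W @ W @ X[:, 1:]])
rho_hat, beta_hat = gmm_sar(Y, X, W, Z)
```

A second step could re-estimate with Ŵₙ set to the inverse of the estimated moment covariance, as the text describes.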

3. Information Theoretic Estimators

Previous information theoretic econometric literature relating to spatial autoregressive models of type (1) has focused on generalized maximum entropy. Marsh and Mittelhammer [10] formulate generalized maximum entropy estimators for the general linear model and the censored regression model when there is first-order spatial autoregression in the dependent variable. Monte Carlo experiments were provided to compare the performance of spatial entropy estimators relative to classical spatial estimators. Fernandez-Vazquez, Mayor-Fernandez, and Rodriguez-Valez [11] compare some traditional spatial estimators with generalized maximum entropy and generalized cross entropy estimators by means of Monte Carlo simulations. Bernardini Papalia [45] applied generalized cross entropy to model economic aggregates and estimate their sub-group (sub-area) decomposition when no individual or sub-group data are available.
To extend the literature relating to IT estimators for the spatial autoregressive model, we first modify the moment conditions in (4). The empirical moments for the IT estimator of the parameters in (1) are specified as:
(p ⊙ Z)′[(I − ρW)Y − Xβ] = 0    (6)
where p is an (n × 1) vector of unknown empirical probability weights supported on the sample outcome (Y, X), and ⊙ denotes the extended Hadamard (elementwise) product operator. The concept of empirical probability weights is developed, for example, in the framework of the empirical likelihood function [46]. The value of the empirical likelihood function is the maximum empirical probability ∏ᵢ₌₁ⁿ pᵢ that can be assigned to a random sample outcome of (Y, X) among all probability distributions p supported on the (Xᵢ, Yᵢ)'s that satisfy the empirical moment conditions, the adding-up restriction Σᵢ₌₁ⁿ pᵢ = 1, and the nonnegativity constraints pᵢ ≥ 0, i = 1, …, n [47].
Similarly, in (6) the empirical weights pᵢ are treated as unknown parameters of a multinomial distribution for the n different types of data outcomes presented by the sample, such that Σᵢ₌₁ⁿ pᵢ = 1 and pᵢ ≥ 0, i = 1, …, n. In contrast, the empirical moment conditions of the GMM approach in (4) restrict pᵢ = 1/n for i = 1, …, n.
A generalized IT estimator of the unknown parameters in (1) is specified as the values of the parameters that solve the following optimization problem subject to empirical moments, normalization, and nonnegativity constraints given by:
argmax_{p, β, ρ} [ φ(p) ]  s.t.  (p ⊙ Z)′[(I − ρW)Y − Xβ] = 0,  Σᵢ₌₁ⁿ pᵢ = 1,  pᵢ ≥ 0 ∀i    (7)
The objective function ϕ ( p ) in (7) is the negative of the Cressie–Read power divergence statistic [48] given by:
CR(p, q, γ) = [1/(γ(γ + 1))] Σᵢ₌₁ⁿ pᵢ[(pᵢ/qᵢ)^γ − 1]    (8)
where γ is a given real-valued constant, and p = [p₁, …, pₙ]′ and q = [q₁, …, qₙ]′ are (n × 1) vectors of estimated and empirical probability densities supported on a sample outcome (Y, X). The empirical probability density is taken to be fixed and the estimated probability weights pᵢ are calculated to solve the moment equations in (6) while being as minimally divergent from the empirical probability distribution as possible. The specification in (7) is a general representation of the objective function which defines a class of IT estimators for the model in (1). This estimation approach circumvents the need for estimating a weight matrix in the GMM procedure.
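The Cressie–Read family in (8) can be evaluated directly, with the γ → 0 and γ → −1 members handled as analytic limits. A small numpy sketch (the function name is illustrative):

```python
import numpy as np

def cressie_read(p, q, gamma):
    """Cressie-Read power divergence of (8); the gamma -> 0 and
    gamma -> -1 cases are evaluated as their analytic limits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(gamma, 0.0):
        return float(np.sum(p * np.log(p / q)))      # KL direction
    if np.isclose(gamma, -1.0):
        return float(np.sum(q * np.log(q / p)))      # reverse KL direction
    return float(np.sum(p * ((p / q) ** gamma - 1.0)) / (gamma * (gamma + 1.0)))
```

The divergence is zero when p = q and varies continuously in γ, which is what lets the single family generate the MEL, MEEL, and MLEL objectives discussed below.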
The Lagrange form of the extremum problem in (7) is given by:
L(p, β, ρ, λ, η) = φ(p) − λ′(p ⊙ Z)′[(I − ρW)Y − Xβ] − η(Σᵢ₌₁ⁿ pᵢ − 1)    (9)
where λ = [λ₁, …, λ_m]′ is an (m × 1) vector and η is a scalar of Lagrange multipliers. The first order conditions are:
∂L/∂pᵢ = ∂φ(p)/∂pᵢ − λ′Z[i,.]′(Yᵢ − ρW[i,.]Y − X[i,.]β) − η = 0,  i = 1, …, n    (10)
∂L/∂β = X′(p ⊙ Z)λ = 0    (11)
∂L/∂ρ = λ′(p ⊙ Z)′WY = 0    (12)
∂L/∂λ = (p ⊙ Z)′[(I − ρW)Y − Xβ] = 0    (13)
∂L/∂η = Σᵢ₌₁ⁿ pᵢ − 1 = 0    (14)
and pᵢ ≥ 0, ∀i. The first set of equations in (10) links the empirical probability weights pᵢ to the unknown parameters β, ρ, and λ through the empirical moment conditions. The equations in (11)–(13) modify the traditional orthogonality conditions of GMM estimation. Equation (14) is the standard normalization condition for the empirically estimated probability weights.
In order to derive the IT estimator, let f(p) = (f₁(p), …, fₙ(p))′ where fᵢ(p) = ∂φ(p)/∂pᵢ, ∀i. Then the general solution for p in terms of the parameters and Lagrange multipliers is:
p(β, ρ, λ, η) = f⁻¹( (Zλ) ⊙ [(I − ρW)Y − Xβ] + η1 )    (15)
where f⁻¹(·) is a well-defined vector-valued inverse function and 1 is an (n × 1) vector of ones.
Substituting (15) into (6) leads to a well-defined solution for λ under general regularity conditions (Qin and Lawless [49]) which is an implicit function of parameters β and ρ denoted by:
λ(β, ρ) = arg_λ [ Σᵢ₌₁ⁿ pᵢ(β, ρ, λ, η*) Z[i,.]′(Yᵢ − ρW[i,.]Y − X[i,.]β) = 0 ]    (16)
where η * is the optimal value obtained from the first order conditions.
Substituting p ( β , ρ , λ ( β , ρ ) ) , λ ( β , ρ ) , and η * in (7) produces the concentrated objective function:
φ(β, ρ, λ(β, ρ)) = max_p { φ(p) − λ(β, ρ)′(p ⊙ Z)′[(I − ρW)Y − Xβ] − η*(Σᵢ₌₁ⁿ pᵢ − 1) }    (17)
which assigns the most favorable empirical weights to each value of the parameter vector from within a family of multinomial distributions supported on the sample ( Y , X ) and satisfying the empirical moment equations in (6).
The IT estimators behave like conventional likelihood estimators in the sense that they are obtained by maximizing the objective function in (17) over the parameter space given by β ∈ ℝᵏ and −1 ≤ ρ ≤ 1 as:
[β̂_IT, ρ̂_IT] = argmax_{β, ρ} { φ(β, ρ, λ(β, ρ)) }    (18)
Let the empirical probability density in (8) be fixed at the discrete uniform distribution q = n⁻¹1, where 1 represents an (n × 1) vector of ones. Then the objective function in (7) encompasses: (a) the traditional empirical log-likelihood function φ(p) = Σᵢ₌₁ⁿ ln(pᵢ), since lim_{γ→−1} CR(p, n⁻¹1, γ) = −Σᵢ₌₁ⁿ ln(pᵢ) up to constants that do not affect the optimization; (b) the empirical exponential likelihood (or negative entropy) function φ(p) = −Σᵢ₌₁ⁿ pᵢ ln(pᵢ), since lim_{γ→0} CR(p, n⁻¹1, γ) = Σᵢ₌₁ⁿ pᵢ ln(pᵢ) up to such constants; and (c) the log Euclidean likelihood function φ(p) = −n⁻¹Σᵢ₌₁ⁿ(n²pᵢ² − 1) for γ = 1. The corresponding IT estimators are maximum empirical likelihood (MEL), maximum exponential empirical likelihood (MEEL), and maximum log Euclidean empirical likelihood (MLEL).
The resulting optimal weights pᵢ for the MEL estimator implied by (10) are given by:
pᵢ(β, ρ, λ(β, ρ)) = n⁻¹[1 + λ(β, ρ)′Z[i,.]′(Yᵢ − ρW[i,.]Y − X[i,.]β)]⁻¹    (19)
where η* = 1. The optimal pᵢ for the MEEL estimator can be expressed as:
pᵢ(β, ρ, λ(β, ρ)) = exp[1 + λ(β, ρ)′Z[i,.]′(Yᵢ − ρW[i,.]Y − X[i,.]β)] / Σⱼ₌₁ⁿ exp[1 + λ(β, ρ)′Z[j,.]′(Yⱼ − ρW[j,.]Y − X[j,.]β)]    (20)
The optimal p i for the MLEL estimator can be expressed as:
pᵢ(β, ρ, λ(β, ρ)) = (2n)⁻¹[η* + λ(β, ρ)′Z[i,.]′(Yᵢ − ρW[i,.]Y − X[i,.]β)]    (21)
where η* = 2 − n⁻¹Σᵢ₌₁ⁿ[λ′Z[i,.]′(Yᵢ − ρW[i,.]Y − X[i,.]β)]. The IT estimators β̂_IT and ρ̂_IT for IT ∈ {MEL, MEEL, MLEL} are obtained by maximizing the corresponding concentrated objective function in (17) with respect to the (k + 1) unknown parameters, with the appropriate pᵢ(λ(β, ρ), β, ρ) substituted for pᵢ.
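Of the three weight formulas, (20) is the easiest to compute: the additive constant inside the exponential cancels in the normalization, so the MEEL weights are a softmax of the moment contributions λ′gᵢ. A minimal numpy sketch (the name `meel_weights` and the stacked-moment matrix G are illustrative):

```python
import numpy as np

def meel_weights(lam, G):
    """MEEL weights of (20): row i of G holds the moment contribution
    g_i' = [Z_[i,.]' (Y_i - rho W_[i,.] Y - X_[i,.] beta)]'; the additive
    constant inside exp[.] cancels, leaving a softmax of G @ lam."""
    s = G @ lam
    s = s - s.max()                 # subtract max for numerical stability
    w = np.exp(s)
    return w / w.sum()

# lam = 0 recovers the uniform weights p_i = 1/n
G = np.arange(12.0).reshape(4, 3)   # toy 4-observation, 3-moment example
uniform = meel_weights(np.zeros(3), G)
```

The max-subtraction trick leaves the weights unchanged (another cancelling constant) but prevents overflow when λ′gᵢ is large.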

4. Monte Carlo Experiments

In order to compare the finite sampling properties of the estimators, extensive Monte Carlo sampling experiments were conducted. For a range of small sample sizes, spatial autocorrelation parameters, and commonly used weight matrices, finite sample performances of the IT estimators are compared to the OLS, 2SLS, GMM, and QML estimators noted in Section 2.

4.1. Monte Carlo Design

The Monte Carlo experiments are constructed in the following manner. Three different sample sizes were analyzed, namely n = 25, 60, and 90. For each sample size, three different specifications of spatial weight matrices were used; the weight matrices differ in their degree of sparseness, have equal weights, and the average number of neighbors per unit, J, is chosen to be J = 2, 6, and 10 [50]. Eleven values of the spatial autoregressive coefficient ρ in (1) are chosen, namely −0.9, −0.8, −0.6, −0.4, −0.2, 0, 0.2, 0.4, 0.6, 0.8, and 0.9. The three values of σ² are 1, 2.5, and 5. The remaining elements of the parameter vector are specified as β = (β₁, β₂) = (1, 1). Two regressors, X = (X₁, X₂), are specified as X₁ ~ Gamma(2, 2) and X₂ ~ Uniform(0, 1). The n observations on the independent variables are normalized so that their sample means and variances are, respectively, zero and one. The same regressors are used in all experiments, and 1000 repetitions were conducted for each Monte Carlo experiment. In summary, there are three values of n, eleven values of ρ, three values of σ², and three values of J; all combinations of n, ρ, σ², and J result in 3 × 11 × 3 × 3 = 297 Monte Carlo experiments.
In addition, three different parametric functional forms for the distribution of the disturbance term ε in (1) are analyzed, together with varying specifications of heteroscedasticity. First, normally distributed homoscedastic disturbance terms, εᵢ ~ N(0, σ²) for i = 1, …, n, are considered and referred to as H = 1. Three forms of heteroscedasticity are also specified for the normal distribution such that σᵢ² = [0.5X₂]σ², σᵢ² = [0.5X₂²]σ², and σᵢ² = exp(0.5X₂)σ², referred to as H = 2, H = 3, and H = 4, respectively. In addition, the log-normal distribution (H = 5) is chosen because of its asymmetric nature, with εᵢ = exp(ξᵢ) − exp(0.5σ²), where ξᵢ ~ N(0, σ²). Finally, a mixture of normals is considered (H = 6), where a normally distributed variable is contaminated by another with a larger variance, producing thicker tails than a normal distribution; i.e., εᵢ = ςᵢξᵢ + (1 − ςᵢ)ζᵢ, where the ςᵢ are iid Bernoulli variables with Pr(ςᵢ = 1) = 0.95, ξᵢ ~ N(0, σ²), and ζᵢ ~ N(0, 100). There are 297 experiments for each of the 6 distributional assumptions considered, resulting in a total of 297 × 6 = 1782 Monte Carlo experiments.
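Several of these disturbance designs can be drawn directly. The following numpy sketch covers the unambiguous cases H = 1, 5, and 6; the function name `draw_disturbance` is illustrative, and the heteroscedastic designs H = 2–4 (which scale σ² by functions of X₂) are omitted from the sketch:

```python
import numpy as np

def draw_disturbance(H, n, sigma2, rng):
    """Draws for three of the designs: H = 1 homoscedastic normal,
    H = 5 centered log-normal, H = 6 a 5% contaminated normal mixture.
    The heteroscedastic designs H = 2-4 are not sketched here."""
    if H == 1:
        return np.sqrt(sigma2) * rng.standard_normal(n)
    if H == 5:
        xi = np.sqrt(sigma2) * rng.standard_normal(n)
        return np.exp(xi) - np.exp(0.5 * sigma2)   # mean-zero by construction
    if H == 6:
        keep = rng.random(n) < 0.95                # Bernoulli(0.95) indicator
        xi = np.sqrt(sigma2) * rng.standard_normal(n)
        zeta = 10.0 * rng.standard_normal(n)       # N(0, 100) contaminant
        return np.where(keep, xi, zeta)
    raise ValueError("only H in {1, 5, 6} are sketched here")
```

Subtracting exp(0.5σ²), the mean of a log-normal with log-variance σ², centers the H = 5 draws at zero, as in the design above.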
To keep the Monte Carlo study manageable in terms of reporting results, bias and spread of the estimator distributions are measured by root mean square error (RMSE). Since sample moments for the calculation of the standard RMSE might not exist, the RMSE measure proposed by Kelejian and Prucha [50] is used, which guarantees the existence of the necessary components: RMSE = [bias² + (IQ/1.35)²]^(1/2), where the bias is the difference between the median and the true parameter value, and IQ = c₁ − c₂ is the interquantile range, with c₁ and c₂ the 0.75 and 0.25 quantiles, respectively.
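This median/interquantile RMSE measure is simple to compute; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def robust_rmse(estimates, true_value):
    """Median/quantile-based RMSE: [bias^2 + (IQ/1.35)^2]^0.5, where bias is
    median - truth and IQ = c1 - c2 is the 0.75/0.25 interquantile range.
    Dividing IQ by 1.35 makes the spread term equal sigma for normal data,
    since the interquartile range of N(mu, sigma^2) is about 1.35*sigma."""
    est = np.asarray(estimates, float)
    bias = np.median(est) - true_value
    c1, c2 = np.quantile(est, [0.75, 0.25])
    return float(np.hypot(bias, (c1 - c2) / 1.35))
```

Because it uses only the median and quantiles, the measure is finite even when the estimator's sampling distribution has no moments.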
In addition, response functions are used to summarize the relationship between RMSEs of estimators and model parameters over the set of all considered parameter values. In particular, let ρ i , β 1 i , β 2 i , J i and σ i 2 be the values, respectively, of ρ, β 1 , β 2 , J and σ 2 in the ith experiment, i = 1,…,297, for each distributional assumption, and let n i correspond to the sample size. Then the functional form of the response function [51] is:
RMSEᵢ = (σᵢ/nᵢ) exp[a₁ + a₂(1/Jᵢ) + a₃(ρᵢJᵢ) + a₄ρᵢ + a₅ρᵢ² + a₆(Jᵢ/nᵢ) + a₇(Jᵢ/nᵢ)² + a₈(1/nᵢ) + a₉σᵢ²],  i = 1, …, 297    (22)
where RMSEi is the RMSE of an estimator of a given parameter in the ith experiment and the parameters a 1 , ... , a 9 are different for each estimator of each parameter. For each case considered, the parameters of (22) are estimated using the entire set of 297 Monte Carlo experiments for each distributional assumption by least squares after taking natural logs on both sides.
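Estimating the response function amounts to ordinary least squares on the log scale. A minimal numpy sketch, with the function name and the exact regression design assumed from equation (22) as written here:

```python
import numpy as np

def fit_response_function(rmse, n, J, rho, sigma2):
    """OLS fit of the response function (22) after taking logs:
    ln(RMSE_i) - ln(sigma_i / n_i) = a1 + a2/J + a3 rho J + a4 rho
        + a5 rho^2 + a6 J/n + a7 (J/n)^2 + a8/n + a9 sigma2."""
    rmse, n, J, rho, sigma2 = map(np.asarray, (rmse, n, J, rho, sigma2))
    y = np.log(rmse) - np.log(np.sqrt(sigma2) / n)
    D = np.column_stack([np.ones_like(y), 1.0 / J, rho * J, rho, rho ** 2,
                         J / n, (J / n) ** 2, 1.0 / n, sigma2])
    a, *_ = np.linalg.lstsq(D, y, rcond=None)
    return a
```

Given the 297-experiment grid of (n, J, ρ, σ²) values described above, the design matrix has full column rank and the nine coefficients a₁, …, a₉ are identified.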

4.2. Results

There are three types of results presented. First, tables of root mean square errors of the estimators for a subset of experiments are reported. Second, average RMSEs for the entire Monte Carlo study are provided. Finally, response functions for the RMSEs of estimators are estimated, and graphs of these response functions are presented.
In order to conserve space, Table 1, Table 2 and Table 3 report RMSEs of the estimators for a selected subset of experimental parameter values. These tables report RMSEs of the estimators of the parameters ρ, β₁, and β₂, respectively, for 33 Monte Carlo experiments representing combinations of 11 parameter values for ρ and 3 parameter values for σ², with average number of neighbors J = 6, sample size n = 60, and normally distributed disturbance term (H = 1).
According to Table 1, the RMSEs of the QML estimator of ρ are the lowest and those of the OLS estimator are the largest, since the OLS estimator is inconsistent while the QML estimator with a normally distributed homoscedastic disturbance term is consistent and efficient.
Table 1. Root mean square errors of estimators of ρ, n = 60, J = 6, H = 1.
ρ      σ²    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
0.9    1     0.055   0.048   0.048   0.036   0.051   0.074   0.051
0.9    2.5   0.077   0.061   0.063   0.040   0.065   0.093   0.069
0.9    5     0.110   0.115   0.111   0.046   0.113   0.098   0.127
0.8    1     0.077   0.069   0.070   0.055   0.071   0.120   0.074
0.8    2.5   0.155   0.128   0.138   0.073   0.143   0.121   0.153
0.8    5     0.189   0.178   0.183   0.079   0.165   0.146   0.180
0.6    1     0.173   0.148   0.159   0.097   0.149   0.116   0.155
0.6    2.5   0.237   0.227   0.233   0.113   0.236   0.133   0.243
0.6    5     0.258   0.236   0.260   0.124   0.258   0.146   0.267
0.4    1     0.159   0.166   0.170   0.116   0.160   0.128   0.166
0.4    2.5   0.229   0.241   0.245   0.139   0.247   0.157   0.251
0.4    5     0.271   0.295   0.288   0.150   0.267   0.181   0.274
0.2    1     0.196   0.194   0.204   0.137   0.200   0.190   0.207
0.2    2.5   0.260   0.295   0.294   0.153   0.276   0.222   0.289
0.2    5     0.315   0.400   0.389   0.216   0.354   0.233   0.369
0      1     0.204   0.209   0.214   0.153   0.210   0.164   0.219
0      2.5   0.305   0.315   0.331   0.201   0.302   0.220   0.310
0      5     0.389   0.563   0.593   0.220   0.506   0.255   0.532
−0.2   1     0.231   0.213   0.225   0.166   0.205   0.184   0.207
−0.2   2.5   0.404   0.390   0.414   0.219   0.389   0.238   0.397
−0.2   5     0.424   0.465   0.496   0.216   0.393   0.274   0.410
−0.4   1     0.360   0.282   0.321   0.200   0.284   0.230   0.288
−0.4   2.5   0.476   0.421   0.438   0.214   0.405   0.256   0.415
−0.4   5     0.555   0.556   0.612   0.232   0.533   0.328   0.533
−0.6   1     0.360   0.256   0.282   0.193   0.242   0.221   0.246
−0.6   2.5   0.581   0.434   0.497   0.240   0.397   0.315   0.408
−0.6   5     0.669   0.598   0.627   0.239   0.549   0.350   0.546
−0.8   1     0.532   0.368   0.382   0.208   0.302   0.266   0.307
−0.8   2.5   0.591   0.442   0.463   0.247   0.364   0.307   0.368
−0.8   5     0.789   0.609   0.569   0.240   0.504   0.337   0.498
−0.9   1     0.328   0.224   0.186   0.175   0.178   0.144   0.189
−0.9   2.5   0.656   0.411   0.387   0.217   0.323   0.244   0.329
−0.9   5     0.757   0.576   0.392   0.235   0.396   0.225   0.403
Column Average   0.345   0.307   0.312   0.163   0.280   0.203   0.287
Table 2. Root mean square errors of estimators of β 1 , n = 60, J = 6, H = 1.
ρ      σ²    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
0.9    1     0.141   0.142   0.141   0.139   0.142   0.154   0.146
0.9    2.5   0.217   0.217   0.219   0.222   0.221   0.247   0.224
0.9    5     0.294   0.309   0.313   0.303   0.315   0.334   0.312
0.8    1     0.132   0.131   0.132   0.128   0.137   0.151   0.138
0.8    2.5   0.225   0.228   0.229   0.222   0.221   0.254   0.228
0.8    5     0.301   0.312   0.315   0.293   0.307   0.321   0.304
0.6    1     0.132   0.133   0.135   0.130   0.136   0.175   0.137
0.6    2.5   0.206   0.209   0.209   0.203   0.202   0.249   0.208
0.6    5     0.305   0.299   0.300   0.293   0.305   0.346   0.310
0.4    1     0.129   0.130   0.129   0.129   0.135   0.192   0.134
0.4    2.5   0.204   0.206   0.206   0.202   0.204   0.272   0.209
0.4    5     0.303   0.305   0.316   0.304   0.310   0.351   0.306
0.2    1     0.130   0.134   0.135   0.128   0.135   0.190   0.139
0.2    2.5   0.217   0.219   0.220   0.219   0.223   0.262   0.221
0.2    5     0.295   0.305   0.314   0.302   0.314   0.356   0.319
0      1     0.133   0.136   0.137   0.129   0.144   0.171   0.146
0      2.5   0.211   0.209   0.213   0.211   0.210   0.252   0.210
0      5     0.296   0.309   0.320   0.294   0.315   0.339   0.315
−0.2   1     0.126   0.129   0.127   0.126   0.132   0.165   0.142
−0.2   2.5   0.199   0.202   0.203   0.201   0.199   0.236   0.204
−0.2   5     0.297   0.305   0.302   0.307   0.320   0.341   0.318
−0.4   1     0.139   0.142   0.145   0.137   0.142   0.197   0.144
−0.4   2.5   0.205   0.212   0.222   0.208   0.207   0.259   0.215
−0.4   5     0.295   0.292   0.303   0.282   0.299   0.334   0.299
−0.6   1     0.131   0.130   0.130   0.129   0.131   0.185   0.137
−0.6   2.5   0.240   0.224   0.234   0.220   0.223   0.281   0.223
−0.6   5     0.303   0.317   0.314   0.296   0.317   0.333   0.323
−0.8   1     0.142   0.134   0.139   0.136   0.140   0.180   0.140
−0.8   2.5   0.216   0.219   0.236   0.210   0.219   0.276   0.219
−0.8   5     0.293   0.298   0.312   0.290   0.301   0.317   0.309
−0.9   1     0.142   0.129   0.132   0.126   0.134   0.175   0.137
−0.9   2.5   0.237   0.214   0.225   0.209   0.211   0.274   0.218
−0.9   5     0.285   0.294   0.307   0.284   0.300   0.337   0.295
Column Average   0.216   0.217   0.222   0.213   0.220   0.258   0.222
This is not surprising since, in this case, the QML estimator is actually a correctly specified classical ML estimator. The IT estimators of ρ have the second lowest RMSEs, outperforming the OLS, 2SLS, and GMM estimators. The MEL estimator outperforms the other IT estimators but is, on average, slightly less efficient than the QML estimator. Table 2 and Table 3 report the RMSEs of the QML and IT estimators of β₁ and β₂, which are on average roughly the same. The loss of efficiency of the IT estimators relative to the QML estimator is small and mostly arises from estimation of the spatial autocorrelation coefficient ρ [11]. The same pattern holds for the other distributional assumptions, namely H = 2, …, 5, for which only the averages of the RMSEs are reported.
In order to summarize the results of the total of 1782 Monte Carlo experiments, average RMSEs of the estimators of ρ, β₁, and β₂ are presented in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. Each table entry is the average of the RMSEs of an estimator over the 33 Monte Carlo experiments for a given sample size n, average number of neighbors J, and distributional assumption H.
Table 3. Root mean square errors of estimators of β 2 , n = 60, J = 6, H = 1.
ρ      σ²    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
0.9    1     0.131   0.130   0.131   0.126   0.139   0.149   0.141
0.9    2.5   0.233   0.236   0.243   0.230   0.246   0.249   0.243
0.9    5     0.297   0.307   0.308   0.301   0.314   0.330   0.310
0.8    1     0.140   0.137   0.143   0.136   0.141   0.156   0.146
0.8    2.5   0.225   0.225   0.239   0.221   0.237   0.244   0.237
0.8    5     0.274   0.278   0.277   0.275   0.292   0.306   0.293
0.6    1     0.127   0.125   0.124   0.125   0.128   0.173   0.132
0.6    2.5   0.199   0.198   0.199   0.204   0.205   0.249   0.214
0.6    5     0.301   0.311   0.308   0.302   0.304   0.345   0.307
0.4    1     0.137   0.132   0.131   0.133   0.137   0.186   0.140
0.4    2.5   0.211   0.215   0.219   0.210   0.226   0.265   0.226
0.4    5     0.299   0.303   0.312   0.312   0.311   0.340   0.316
0.2    1     0.135   0.136   0.137   0.134   0.140   0.182   0.144
0.2    2.5   0.219   0.224   0.224   0.221   0.220   0.269   0.226
0.2    5     0.308   0.309   0.314   0.306   0.319   0.334   0.321
0      1     0.131   0.132   0.131   0.133   0.140   0.179   0.141
0      2.5   0.228   0.230   0.227   0.226   0.233   0.259   0.236
0      5     0.290   0.302   0.304   0.292   0.304   0.333   0.309
−0.2   1     0.128   0.128   0.131   0.126   0.141   0.182   0.147
−0.2   2.5   0.205   0.210   0.208   0.206   0.221   0.251   0.221
−0.2   5     0.282   0.302   0.303   0.281   0.296   0.310   0.306
−0.4   1     0.134   0.137   0.135   0.133   0.141   0.178   0.143
−0.4   2.5   0.216   0.217   0.221   0.214   0.222   0.250   0.230
−0.4   5     0.302   0.300   0.312   0.294   0.312   0.331   0.319
−0.6   1     0.133   0.131   0.131   0.126   0.140   0.179   0.142
−0.6   2.5   0.236   0.214   0.212   0.202   0.214   0.257   0.213
−0.6   5     0.316   0.321   0.336   0.298   0.314   0.361   0.318
−0.8   1     0.156   0.149   0.155   0.142   0.147   0.190   0.148
−0.8   2.5   0.227   0.212   0.234   0.212   0.222   0.269   0.218
−0.8   5     0.311   0.324   0.320   0.308   0.324   0.343   0.315
−0.9   1     0.137   0.137   0.139   0.136   0.140   0.192   0.145
−0.9   2.5   0.216   0.199   0.205   0.200   0.213   0.265   0.212
−0.9   5     0.305   0.327   0.342   0.296   0.330   0.360   0.344
Column Average   0.218   0.219   0.223   0.214   0.225   0.257   0.227
The results of the Monte Carlo experiments for other distributional assumptions are consistent with the results for a subset of parameter space for the case of H = 1 reported in Table 1, Table 2 and Table 3. In fact, Table 4 and Table 5 indicate that the RMSEs of the QML estimator of ρ are the lowest, regardless of sample size, average number of neighbors, and distributional assumptions considered. The IT estimators of ρ have the second lowest RMSEs across all Monte Carlo experiments, being less efficient than the QML estimator, but often outperforming OLS, 2SLS, and GMM estimators.
Table 6, Table 7, Table 8 and Table 9 indicate that the RMSEs of QML and the IT estimators of β 1 and β 2 are roughly the same with the exception of the MEL estimator which is, on average, less efficient than the QML estimator for β 1 and β 2 . Indeed, many cases exist where IT estimators (e.g., MLEL) outperform QML. In any case, the loss of efficiency of the IT estimators relative to the QML estimator is, on average, small and mostly arises from estimation of the spatial autocorrelation coefficient ρ.
Table 4. Average root mean square errors of estimators of ρ for H = 1−3.
H    J    n     OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
1    2    25    0.212   0.221   0.247   0.115   0.210   0.191   0.223
1    2    60    0.170   0.140   0.148   0.077   0.139   0.112   0.146
1    2    90    0.156   0.113   0.118   0.060   0.113   0.097   0.117
1    6    25    0.493   0.522   0.540   0.272   0.456   0.402   0.465
1    6    60    0.345   0.307   0.312   0.163   0.280   0.203   0.287
1    6    90    0.301   0.251   0.256   0.132   0.230   0.159   0.236
1    10   25    0.693   0.721   0.698   0.372   0.598   0.524   0.610
1    10   60    0.463   0.440   0.466   0.226   0.413   0.284   0.418
1    10   90    0.400   0.351   0.352   0.180   0.319   0.208   0.325
2    2    25    0.114   0.106   0.114   0.077   0.106   0.113   0.109
2    2    60    0.082   0.066   0.069   0.050   0.065   0.070   0.065
2    2    90    0.080   0.057   0.058   0.038   0.055   0.061   0.056
2    6    25    0.260   0.262   0.276   0.189   0.244   0.240   0.246
2    6    60    0.197   0.170   0.177   0.120   0.154   0.145   0.154
2    6    90    0.162   0.125   0.133   0.095   0.117   0.116   0.117
2    10   25    0.395   0.422   0.435   0.273   0.370   0.360   0.370
2    10   60    0.272   0.244   0.257   0.189   0.225   0.214   0.224
2    10   90    0.225   0.184   0.196   0.133   0.169   0.156   0.169
3    2    25    0.148   0.134   0.140   0.081   0.126   0.144   0.127
3    2    60    0.143   0.117   0.117   0.061   0.106   0.111   0.105
3    2    90    0.132   0.099   0.104   0.046   0.084   0.093   0.083
3    6    25    0.365   0.375   0.386   0.217   0.336   0.341   0.334
3    6    60    0.285   0.240   0.257   0.134   0.207   0.217   0.203
3    6    90    0.269   0.236   0.246   0.115   0.203   0.178   0.197
3    10   25    0.444   0.473   0.510   0.277   0.418   0.425   0.420
3    10   60    0.367   0.336   0.360   0.183   0.300   0.288   0.294
3    10   90    0.325   0.325   0.343   0.146   0.260   0.241   0.253
Table 5. Average root mean square errors of estimators of ρ for H = 4−6.
H    J    n     OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
4    2    25    0.219   0.232   0.250   0.120   0.211   0.204   0.222
4    2    60    0.188   0.167   0.177   0.076   0.157   0.131   0.164
4    2    90    0.185   0.144   0.152   0.064   0.138   0.112   0.142
4    6    25    0.506   0.560   0.579   0.266   0.488   0.448   0.496
4    6    60    0.380   0.380   0.387   0.162   0.333   0.239   0.338
4    6    90    0.338   0.307   0.306   0.132   0.271   0.189   0.274
4    10   25    0.701   0.795   0.805   0.365   0.647   0.589   0.657
4    10   60    0.494   0.515   0.510   0.225   0.446   0.303   0.452
4    10   90    0.452   0.450   0.460   0.186   0.410   0.252   0.412
5    2    25    0.175   0.169   0.182   0.100   0.149   0.151   0.155
5    2    60    0.148   0.119   0.125   0.066   0.102   0.105   0.103
5    2    90    0.142   0.101   0.105   0.050   0.087   0.090   0.088
5    6    25    0.439   0.452   0.474   0.244   0.373   0.368   0.374
5    6    60    0.304   0.275   0.282   0.143   0.228   0.210   0.226
5    6    90    0.283   0.235   0.244   0.120   0.196   0.168   0.195
5    10   25    0.659   0.685   0.716   0.349   0.544   0.532   0.549
5    10   60    0.420   0.408   0.442   0.206   0.362   0.325   0.360
5    10   90    0.357   0.307   0.336   0.163   0.270   0.240   0.266
6    2    25    0.160   0.160   0.169   0.083   0.143   0.148   0.148
6    2    60    0.147   0.120   0.126   0.057   0.096   0.103   0.096
6    2    90    0.142   0.100   0.105   0.046   0.080   0.086   0.080
6    6    25    0.343   0.375   0.386   0.193   0.312   0.311   0.315
6    6    60    0.286   0.269   0.278   0.131   0.213   0.206   0.209
6    6    90    0.261   0.222   0.227   0.108   0.171   0.159   0.168
6    10   25    0.486   0.597   0.580   0.271   0.473   0.451   0.478
6    10   60    0.383   0.401   0.431   0.183   0.330   0.332   0.322
6    10   90    0.343   0.324   0.344   0.152   0.257   0.251   0.250
Column Average  0.304   0.295   0.305   0.153   0.255   0.230   0.257
Table 6. Average root mean square errors of estimators of β1 for H = 1–3.

H   J    n    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
1   2    25   0.349   0.356   0.374   0.337   0.365   0.384   0.372
1   2    60   0.226   0.223   0.231   0.215   0.227   0.254   0.231
1   2    90   0.186   0.179   0.183   0.172   0.181   0.215   0.182
1   6    25   0.345   0.352   0.364   0.338   0.359   0.383   0.366
1   6    60   0.216   0.217   0.222   0.212   0.220   0.258   0.222
1   6    90   0.177   0.177   0.179   0.173   0.178   0.227   0.180
1   10   25   0.343   0.355   0.368   0.334   0.360   0.384   0.368
1   10   60   0.219   0.221   0.227   0.212   0.224   0.266   0.227
1   10   90   0.172   0.172   0.173   0.169   0.174   0.226   0.175
2   2    25   0.327   0.329   0.331   0.329   0.339   0.356   0.353
2   2    60   0.229   0.230   0.231   0.227   0.235   0.249   0.241
2   2    90   0.186   0.184   0.185   0.184   0.181   0.201   0.185
2   6    25   0.335   0.334   0.335   0.331   0.338   0.354   0.349
2   6    60   0.224   0.224   0.225   0.223   0.221   0.244   0.225
2   6    90   0.184   0.183   0.184   0.183   0.181   0.200   0.182
2   10   25   0.317   0.316   0.320   0.314   0.321   0.342   0.330
2   10   60   0.222   0.223   0.225   0.222   0.221   0.243   0.225
2   10   90   0.187   0.186   0.187   0.186   0.184   0.205   0.186
3   2    25   0.758   0.773   0.763   0.781   0.802   0.820   0.833
3   2    60   0.589   0.600   0.599   0.599   0.602   0.616   0.600
3   2    90   0.546   0.561   0.564   0.559   0.554   0.578   0.540
3   6    25   0.883   0.884   0.881   0.888   0.901   0.929   0.916
3   6    60   0.768   0.778   0.781   0.778   0.784   0.794   0.781
3   6    90   0.631   0.635   0.635   0.640   0.623   0.649   0.614
3   10   25   0.782   0.784   0.786   0.781   0.795   0.820   0.801
3   10   60   0.732   0.733   0.731   0.732   0.735   0.742   0.734
3   10   90   0.655   0.664   0.663   0.660   0.649   0.671   0.637
Table 7. Average root mean square errors of estimators of β1 for H = 4–6.

H   J    n    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
4   2    25   0.799   0.828   0.828   0.839   0.844   0.871   0.854
4   2    60   0.751   0.795   0.800   0.804   0.799   0.835   0.796
4   2    90   0.505   0.523   0.521   0.524   0.518   0.543   0.513
4   6    25   0.844   0.857   0.866   0.851   0.870   0.906   0.878
4   6    60   0.616   0.628   0.634   0.627   0.626   0.657   0.624
4   6    90   0.655   0.666   0.665   0.669   0.656   0.696   0.648
4   10   25   0.828   0.850   0.873   0.840   0.870   0.899   0.882
4   10   60   0.706   0.715   0.711   0.716   0.707   0.751   0.703
4   10   90   0.579   0.585   0.586   0.580   0.579   0.611   0.574
5   2    25   0.270   0.271   0.282   0.257   0.258   0.294   0.257
5   2    60   0.190   0.185   0.190   0.178   0.170   0.221   0.165
5   2    90   0.168   0.158   0.162   0.152   0.145   0.194   0.140
5   6    25   0.276   0.274   0.285   0.265   0.262   0.301   0.259
5   6    60   0.182   0.180   0.183   0.177   0.165   0.219   0.161
5   6    90   0.155   0.154   0.155   0.149   0.139   0.194   0.134
5   10   25   0.270   0.268   0.284   0.257   0.259   0.298   0.256
5   10   60   0.187   0.185   0.190   0.181   0.171   0.229   0.167
5   10   90   0.153   0.152   0.153   0.148   0.138   0.196   0.133
6   2    25   0.234   0.236   0.244   0.229   0.231   0.263   0.230
6   2    60   0.181   0.178   0.183   0.170   0.155   0.212   0.147
6   2    90   0.161   0.152   0.156   0.146   0.129   0.193   0.122
6   6    25   0.241   0.240   0.249   0.234   0.232   0.267   0.230
6   6    60   0.178   0.176   0.179   0.172   0.152   0.217   0.145
6   6    90   0.148   0.148   0.150   0.144   0.125   0.196   0.117
6   10   25   0.238   0.243   0.254   0.228   0.233   0.269   0.232
6   10   60   0.173   0.173   0.177   0.168   0.151   0.215   0.143
6   10   90   0.149   0.148   0.151   0.145   0.125   0.194   0.118
Column Average    0.382   0.386   0.390   0.382   0.383   0.418   0.383
Table 8. Average root mean square errors of estimators of β2 for H = 1–3.

H   J    n    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
1   2    25   0.351   0.361   0.376   0.342   0.371   0.386   0.379
1   2    60   0.223   0.221   0.230   0.213   0.228   0.250   0.231
1   2    90   0.185   0.178   0.183   0.172   0.182   0.213   0.184
1   6    25   0.343   0.351   0.367   0.332   0.360   0.385   0.369
1   6    60   0.218   0.219   0.223   0.214   0.225   0.257   0.227
1   6    90   0.178   0.175   0.178   0.172   0.177   0.221   0.181
1   10   25   0.343   0.353   0.364   0.336   0.362   0.387   0.370
1   10   60   0.216   0.217   0.221   0.212   0.220   0.258   0.223
1   10   90   0.175   0.175   0.177   0.172   0.177   0.225   0.179
2   2    25   0.152   0.151   0.155   0.147   0.151   0.183   0.152
2   2    60   0.108   0.102   0.106   0.100   0.099   0.151   0.098
2   2    90   0.092   0.085   0.087   0.083   0.082   0.144   0.081
2   6    25   0.160   0.160   0.161   0.156   0.155   0.191   0.154
2   6    60   0.109   0.107   0.108   0.106   0.103   0.158   0.101
2   6    90   0.086   0.085   0.085   0.084   0.081   0.150   0.080
2   10   25   0.162   0.162   0.165   0.158   0.157   0.193   0.157
2   10   60   0.100   0.098   0.100   0.097   0.096   0.153   0.095
2   10   90   0.085   0.084   0.085   0.084   0.082   0.154   0.081
3   2    25   0.254   0.254   0.255   0.250   0.239   0.282   0.231
3   2    60   0.188   0.187   0.190   0.181   0.169   0.227   0.161
3   2    90   0.182   0.171   0.176   0.166   0.151   0.221   0.140
3   6    25   0.238   0.235   0.243   0.227   0.220   0.263   0.217
3   6    60   0.205   0.204   0.203   0.200   0.183   0.241   0.172
3   6    90   0.174   0.179   0.179   0.179   0.148   0.238   0.134
3   10   25   0.239   0.239   0.250   0.231   0.230   0.272   0.227
3   10   60   0.209   0.208   0.208   0.204   0.185   0.257   0.176
3   10   90   0.185   0.186   0.185   0.185   0.164   0.242   0.153
Table 9. Average root mean square errors of estimators of β2 for H = 4–6.

H   J    n    OLS     2SLS    GMM     QML     MEEL    MEL     MLEL
4   2    25   0.436   0.452   0.473   0.437   0.461   0.486   0.464
4   2    60   0.316   0.321   0.329   0.305   0.318   0.354   0.312
4   2    90   0.257   0.255   0.262   0.247   0.248   0.297   0.243
4   6    25   0.426   0.440   0.459   0.421   0.441   0.474   0.441
4   6    60   0.302   0.306   0.313   0.293   0.297   0.350   0.292
4   6    90   0.255   0.257   0.262   0.254   0.248   0.315   0.242
4   10   25   0.445   0.458   0.480   0.444   0.458   0.495   0.461
4   10   60   0.303   0.310   0.314   0.299   0.304   0.358   0.299
4   10   90   0.243   0.243   0.246   0.240   0.236   0.302   0.233
5   2    25   0.278   0.280   0.289   0.271   0.267   0.304   0.261
5   2    60   0.198   0.192   0.199   0.186   0.173   0.238   0.167
5   2    90   0.172   0.163   0.168   0.158   0.149   0.213   0.143
5   6    25   0.279   0.285   0.291   0.273   0.267   0.310   0.261
5   6    60   0.189   0.189   0.192   0.184   0.171   0.243   0.166
5   6    90   0.159   0.157   0.158   0.153   0.141   0.217   0.136
5   10   25   0.275   0.276   0.287   0.266   0.259   0.304   0.253
5   10   60   0.192   0.192   0.193   0.189   0.172   0.249   0.166
5   10   90   0.159   0.158   0.161   0.156   0.144   0.224   0.137
6   2    25   0.252   0.253   0.262   0.246   0.245   0.276   0.242
6   2    60   0.194   0.191   0.198   0.186   0.165   0.230   0.154
6   2    90   0.172   0.164   0.168   0.159   0.138   0.208   0.128
6   6    25   0.249   0.250   0.256   0.243   0.240   0.275   0.237
6   6    60   0.188   0.187   0.189   0.185   0.160   0.231   0.148
6   6    90   0.160   0.159   0.159   0.156   0.132   0.209   0.124
6   10   25   0.248   0.253   0.260   0.241   0.241   0.276   0.237
6   10   60   0.185   0.184   0.188   0.181   0.157   0.230   0.147
6   10   90   0.160   0.158   0.161   0.156   0.132   0.210   0.123
Column Average    0.220   0.220   0.225   0.214   0.210   0.263   0.207
The response functions (22) are used to depict the relationship between the RMSEs of the estimators over the set of all parameter values considered. Figure 1, Figure 2 and Figure 3 present 18 sets of response functions for the RMSEs of the QML and IT estimators of ρ for an average number of neighbors J = 2 and three sample sizes, n = 25, 60, 90. Each figure shows six response functions of the RMSEs, one for each of the six distributional assumptions.
Figure 1. Response function RMSEs of QML, MEEL, MLEL and MEL of ρ (n = 25 and J = 2).
Figure 2. Response function RMSEs of QML, MEEL, MLEL and MEL of ρ (n = 60 and J = 2).
The RMSEs are related to the spatial autocorrelation parameter, ρ, in a concave fashion. The difference in RMSEs between the QML and IT estimators increases as ρ approaches zero and decreases as ρ approaches ±1. The QML estimator outperforms the IT estimators for all distributional assumptions considered and for J = 2, 6, 10. The response functions for the three IT estimators are related to ρ in a similar fashion and are of comparable magnitude. As the sample size increases, the differences between the response functions of the QML and IT estimators decline.
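One simple way to summarize such a response function, sketched here with hypothetical (ρ, RMSE) grid points rather than the paper's actual Monte Carlo output, is a quadratic least-squares fit; a negative quadratic coefficient reproduces the concave pattern just described:

```python
import numpy as np

# Hypothetical (rho, RMSE) grid points shaped like the concave pattern
# described in the text (peak near rho = 0, falling toward rho = ±1).
rho_grid = np.array([-0.8, -0.5, -0.2, 0.0, 0.2, 0.5, 0.8])
rmse_vals = np.array([0.10, 0.15, 0.19, 0.20, 0.19, 0.15, 0.10])

# One simple response-function summary: a quadratic least-squares fit
# RMSE(rho) = c0 + c1*rho + c2*rho^2; concavity corresponds to c2 < 0.
c2, c1, c0 = np.polyfit(rho_grid, rmse_vals, deg=2)
```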
The results of the Monte Carlo study suggest that in certain sampling situations information theoretic estimators of the first-order spatial autoregressive model have superior sampling properties, outperforming the traditional OLS, 2SLS, and GMM estimators, with the exception of the quasi-maximum likelihood estimator.
Figure 3. Response function RMSEs of QML, MEEL, MLEL and MEL of ρ (n = 90 and J = 2).

5. Illustrative Application: Hedonic Housing Value Model

Harrison and Rubinfeld [24] estimated the demand for clean air, that is, the dollar benefits of air quality improvements. The approach estimates willingness to pay for better air quality on the presumption that consumers pay more for an identical housing unit located in an area with good air quality than for one in an area with poor air quality. The hedonic housing value function translates housing attributes into a price. A household's annual willingness to pay for a small improvement in air quality is the increased cost of purchasing a house with identical attributes except for a marginal improvement in air quality, found by differentiating the hedonic housing price equation with respect to the air pollution attribute.
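Because the hedonic specification below enters pollution as a quadratic NOX term on the log-value scale, the marginal effect follows from the chain rule: d(MV)/d(NOX) = MV · 2β · NOX. The sketch below illustrates this calculation; the numeric values are purely hypothetical, not estimates from the paper:

```python
def marginal_wtp_nox(mv, nox, beta_nox2):
    """Marginal effect of NOX on housing value when the hedonic equation
    contains beta_nox2 * NOX**2 on the log(MV) scale:
    d(MV)/d(NOX) = MV * 2 * beta_nox2 * NOX  (chain rule)."""
    return mv * 2.0 * beta_nox2 * nox

# Hypothetical inputs: a $21,000 median home value, NOX = 5.5 pphm,
# beta_nox2 = -0.0064; the negative sign marks pollution as a disamenity.
effect = marginal_wtp_nox(21000.0, 5.5, -0.0064)   # dollars per unit NOX
```

The negative marginal effect is the implicit price of pollution; its absolute value is the household's marginal willingness to pay for a one-unit NOX reduction.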
The housing data for census tracts in the Boston Standard Metropolitan Statistical Area (SMSA) in 1970 contain 506 observations (one per census tract) on 14 independent variables. The dependent variable is the median value of owner-occupied homes in a census tract (MV). The independent variables include two structural attribute variables, eight neighborhood variables, and two accessibility variables: average number of rooms (RM), proportion of structures built before 1940 (AGE), black population proportion (B), lower status population proportion (LSTAT), crime rate (CRIM), proportion of area zoned with large lots (ZN), proportion of non-retail business area (INDUS), property tax rate (TAX), pupil-teacher ratio (PTRATIO), location contiguous to the Charles River (CHAS), weighted distances to the employment centers (DIS), and an index of accessibility (RAD) [25]. In addition, the pollution variable (NOX), the concentration of nitrogen oxides, proxies air quality.
Gilley and Pace [52] specify the best functional form of the hedonic housing equation given by:
log(MV) = β0 + ρ W log(MV) + β1 CRIM + β2 ZN + β3 INDUS + β4 CHAS
          + β5 NOX² + β6 RM² + β7 AGE + β8 log(DIS) + β9 log(RAD)
          + β10 TAX + β11 PTRATIO + β12 B + β13 LSTAT + ε
where W is a distance matrix, and ρ is a scalar spatial autoregressive parameter.
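For readers who wish to reproduce an estimator of this form, the QML approach can be sketched via the log-likelihood concentrated in ρ, in the spirit of the standard SAR likelihood [34,36]. The function below is an illustrative implementation under simplifying assumptions (dense matrices, row-standardized W, normal disturbances), not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sar_qml(y, X, W):
    """Sketch of quasi-maximum likelihood for the SAR model
    y = rho*W*y + X*beta + eps, maximizing the log-likelihood
    concentrated in rho (W dense and row-standardized)."""
    n = len(y)
    # (X'X)^{-1} X', reused to form beta_hat(rho) at each trial rho
    XtX_inv_Xt = np.linalg.solve(X.T @ X, X.T)

    def neg_conc_loglik(rho):
        A = np.eye(n) - rho * W
        Ay = A @ y
        e = Ay - X @ (XtX_inv_Xt @ Ay)       # residuals at beta_hat(rho)
        sign, logdet = np.linalg.slogdet(A)  # ln|I - rho*W|
        if sign <= 0:
            return np.inf                    # outside the valid rho region
        return -(logdet - 0.5 * n * np.log(e @ e / n))

    res = minimize_scalar(neg_conc_loglik, bounds=(-0.99, 0.99),
                          method="bounded")
    rho_hat = res.x
    beta_hat = XtX_inv_Xt @ ((np.eye(n) - rho_hat * W) @ y)
    return rho_hat, beta_hat
```

For large n, the log-determinant term is the computational bottleneck; sparse or approximate log-determinant methods such as Barry and Pace [37] are the usual remedy.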
The OLS, 2SLS, GMM, QML, MEEL, MEL, and MLEL estimates are reported in Table 10. All coefficients have the expected signs [24]. OLS exhibits bias consistent with the outcomes of the Monte Carlo experiments. The 2SLS, MEEL, and MEL estimates are very comparable across parameters. The information theoretic and QML estimators provide similar estimates of the model parameters, with the exception of the spatial autoregressive parameter ρ.
Table 10. Estimated parameters of the hedonic housing model.

Parameter   OLS      2SLS     GMM      QML      MEEL     MEL      MLEL
ρ           0.441    0.276    0.033    0.393    0.276    0.276    0.432
β0          1.034    1.237    1.850    1.137    1.237    1.237    1.137
β1          −0.004   −0.004   −0.004   −0.004   −0.004   −0.004   −0.004
β2          0.000    0.000    0.000    0.000    0.000    0.000    0.000
β3          0.001    0.001    0.000    0.001    0.001    0.001    0.001
β4          0.009    0.022    0.002    0.012    0.022    0.022    0.012
β5          −0.131   −0.141   0.396    −0.146   −0.143   −0.141   −0.146
β6          0.003    0.004    0.000    0.003    0.004    0.004    0.003
β7          0.000    0.000    0.001    0.000    0.000    0.000    0.000
β8          −0.157   −0.142   0.108    −0.161   −0.141   −0.142   −0.161
β9          0.083    0.083    0.150    0.083    0.084    0.083    0.083
β10         0.000    0.000    0.000    0.000    0.000    0.000    0.000
β11         −0.003   −0.004   −0.008   −0.004   −0.004   −0.004   −0.004
β12         0.000    0.000    0.000    0.000    0.000    0.000    0.000
β13         −0.286   −0.296   −0.623   −0.295   −0.294   −0.296   −0.295

6. Concluding Remarks

A general information theoretic estimator of the first-order spatial autoregressive model was proposed, and the special cases of the Maximum Empirical Likelihood, Maximum Empirical Exponential Likelihood, and Maximum Log Euclidean Likelihood estimators were explored.
In extensive Monte Carlo experiments, the small sample performance of the information theoretic estimators was evaluated and found to be robust across a range of sample sizes, spatial autocorrelation coefficients, distributional assumptions on the disturbance term, and spatial weight matrices. Compared to traditional estimators, it was found that in certain sampling situations information theoretic estimators of the first-order spatial autoregressive model have superior sampling properties and outperform the traditional OLS, 2SLS, and GMM estimators. However, while generally comparable in RMSE, the IT estimators do not consistently outperform the quasi-maximum likelihood estimator. The findings suggest that one can confidently use information theoretic estimators to estimate the first-order spatial autoregressive model in small samples, and that they represent defensible alternatives when maximum likelihood estimators cannot be easily computed. The estimation of the housing value model provided a practical illustration of the implementation and results obtained from the OLS, 2SLS, GMM, QML, and information theoretic estimators. Future work could include comparison of the IT estimators with those of Lin and Lee [53] and Kelejian and Prucha [54], who examined robust GMM estimators incorporating spatial autoregressive structures.

References

  1. Owen, A. Empirical likelihood ratio confidence intervals for a single functional. Biometrika 1988, 75, 237–249.
  2. Owen, A. Empirical likelihood for linear models. Ann. Stat. 1991, 19, 1725–1747.
  3. Kitamura, Y.; Stutzer, M. An information-theoretic alternative to generalized method of moments estimation. Econometrica 1997, 65, 861–874.
  4. Fraser, I. An application of maximum entropy estimation: The demand for meat in the United Kingdom. Appl. Econ. 2000, 32, 45–59.
  5. Golan, A.; Judge, G.; Zen, E. Estimating a demand system with nonnegativity constraints: Mexican meat demand. Rev. Econ. Stat. 2001, 83, 541–550.
  6. Preckel, P.V. Least squares and entropy: A penalty function perspective. Am. J. Agr. Econ. 2001, 83, 366–377.
  7. Miller, D.J. Entropy-based methods of modeling stochastic production efficiency. Am. J. Agr. Econ. 2002, 84, 1264–1270.
  8. Zohrabian, A.; Traxler, G.; Caudill, S.; Smale, M. Valuing pre-commercial genetic resources: A maximum entropy approach. Am. J. Agr. Econ. 2003, 85, 429–436.
  9. Imbens, G.W.; Spady, R.H.; Johnson, P. Information theoretic approaches to inference in moment condition models. Econometrica 1998, 66, 333–357.
  10. Marsh, T.L.; Mittelhammer, R. Spatial and spatiotemporal econometrics. In Advances in Econometrics; Elsevier: Oxford, UK, 2004; Volume 18, Chapter 7; pp. 203–238.
  11. Fernandez-Vazquez, E.; Mayor-Fernandez, M.; Rodriguez-Valez, J. Estimating spatial autoregressive models by GME-GCE techniques. Int. Reg. Sci. Rev. 2009, 32, 148–172.
  12. Case, A.C. Neighborhood influence and technological change. Reg. Sci. Urban Econ. 1992, 22, 491–508.
  13. Case, A.C. Spatial patterns in household demand. Econometrica 1991, 59, 953–965.
  14. Anselin, L.; Varga, A.; Acs, Z. Local geographic spillovers between university research and high technology innovations. J. Urban Econ. 1997, 42, 422–448.
  15. Haining, R.P. Testing a spatial interacting-markets hypothesis. Rev. Econ. Stat. 1984, 66, 576–583.
  16. Wu, J. Environmental amenities and the spatial pattern of urban sprawl. Am. J. Agr. Econ. 2001, 83, 691–697.
  17. Goodwin, B.K.; Piggott, N.E. Spatial market integration in the presence of threshold effects. Am. J. Agr. Econ. 2001, 83, 302–317.
  18. Roe, B.; Irwin, E.; Sharp, J.S. Pigs in space: Modeling the spatial structure of hog production in traditional and nontraditional production regions. Am. J. Agr. Econ. 2002, 84, 259–278.
  19. Thompson, S.R.; Donggyu, S.; Bohl, M.T. Spatial market efficiency and policy regime change: Seemingly unrelated error correction model estimation. Am. J. Agr. Econ. 2002, 84, 1042–1053.
  20. Sephton, P.S. Spatial market arbitrage and threshold cointegration. Am. J. Agr. Econ. 2003, 85, 1041–1046.
  21. Saak, A.E. Spatial and temporal marketing considerations under marketing loan programs. Am. J. Agr. Econ. 2003, 85, 872–887.
  22. Sarmiento, C.; Wilson, W.W. Spatial modeling in technology adoption decisions: The case of shuttle train elevators. Am. J. Agr. Econ. 2005, 87, 1034–1045.
  23. Negassa, A.; Myers, R.J. Estimating policy effects on spatial market efficiency: An extension to the parity bounds model. Am. J. Agr. Econ. 2007, 89, 338–352.
  24. Harrison, D.; Rubinfeld, D. Hedonic housing prices and the demand for clean air. J. Environ. Econ. Manag. 1978, 5, 81–102.
  25. Gilley, O.W.; Pace, R. On the Harrison and Rubinfeld data. J. Environ. Econ. Manag. 1996, 31, 403–405.
  26. Cliff, A.; Ord, K. Spatial Processes, Models, and Applications; Pion: London, UK, 1981.
  27. Anselin, L. Spatial Econometrics: Methods and Models; Kluwer: Boston, MA, USA, 1988.
  28. Cressie, N. Statistics for Spatial Data; Wiley: New York, NY, USA, 1993.
  29. LeSage, J.; Pace, R.K. Spatial and spatiotemporal econometrics. In Advances in Econometrics; Elsevier: Oxford, UK, 2004; Volume 18, Introduction; pp. 1–32.
  30. Horn, R.; Johnson, C. Matrix Analysis; Cambridge University Press: New York, NY, USA, 1985.
  31. LeSage, J.P.; Pace, R.K. Introduction to Spatial Econometrics; Taylor-Francis/CRC Press: Boca Raton, FL, USA, 2009.
  32. LeSage, J. Bayesian estimation of spatial autoregressive models. Int. Reg. Sci. Rev. 1997, 20, 113–129.
  33. Whittle, P. On stationary processes in the plane. Biometrika 1954, 41, 434–449.
  34. Ord, K. Estimation methods for models of spatial interaction. J. Am. Stat. Assoc. 1975, 70, 120–126.
  35. Lee, L. Consistency and efficiency of least squares estimation for mixed regressive, spatial autoregressive models. Economet. Theor. 2002, 18, 252–277.
  36. Lee, L. Asymptotic distributions of quasi-maximum likelihood estimators for spatial autoregressive models. Econometrica 2004, 72, 1899–1925.
  37. Barry, R.P.; Pace, R.K. A Monte Carlo estimator of the log determinant of large sparse matrices. Lin. Algebra Appl. 1999, 289, 41–54.
  38. Kelejian, H.H.; Prucha, I. A generalized spatial two-stage least squares procedure for estimating a spatial autoregressive model with autoregressive disturbances. J. R. Estate Finance Econ. 1998, 17, 99–121.
  39. Kelejian, H.H.; Prucha, I. 2SLS and OLS in a spatial autoregressive model with equal spatial weights. Reg. Sci. Urban Econ. 2002, 32, 691–707.
  40. Lee, L. GMM and 2SLS estimation of mixed regressive, spatial autoregressive models. Department of Economics, Ohio State University: Columbus, OH, USA, Unpublished work, 2001.
  41. Lee, L. GMM and 2SLS estimation of mixed regressive, spatial autoregressive models. J. Econometrics 2007, 137, 489–514.
  42. Lee, L. The method of elimination and substitution in the GMM estimation of mixed regressive, spatial autoregressive models. J. Econometrics 2007, 140, 155–189.
  43. Lee, L. Best spatial two-stage least squares estimators for a spatial autoregressive model with autoregressive disturbances. Economet. Rev. 2003, 22, 307–335.
  44. Hansen, L. Large sample properties of generalized method of moments estimators. Econometrica 1982, 50, 1029–1054.
  45. Bernardini Papalia, R. Incorporating spatial structures in ecological inference: An information theoretic approach. Entropy 2010, 12, 2171–2185.
  46. Thomas, D.; Grunkemeier, G. Confidence interval estimation of survival probabilities for censored data. J. Am. Stat. Assoc. 1975, 70, 865–871.
  47. Owen, A. Empirical likelihood ratio confidence regions. Ann. Stat. 1990, 18, 90–120.
  48. Cressie, N.; Read, T. Multinomial goodness-of-fit tests. J. Roy. Stat. Soc. B Stat. Meth. 1984, 46, 440–464.
  49. Qin, J.; Lawless, J. Empirical likelihood and general estimating equations. Ann. Stat. 1994, 22, 300–325.
  50. Kelejian, H.H.; Prucha, I. A generalized moments estimator for the autoregressive parameter in a spatial model. Int. Econ. Rev. 1999, 40, 509–533.
  51. Das, D.; Kelejian, H.H.; Prucha, I. Finite sample properties of estimators of spatial autoregressive models with autoregressive disturbances. Paper Reg. Sci. 2003, 82, 1–26.
  52. Gilley, O.W.; Pace, R.K. The Harrison and Rubinfeld data revisited. J. Environ. Econ. Manag. 1996, 31, 403–405.
  53. Lin, X.; Lee, L.F. GMM estimation of spatial autoregressive models with unknown heteroskedasticity. J. Econometrics 2010, 157, 34–52.
  54. Kelejian, H.H.; Prucha, I.R. Specification and estimation of spatial autoregressive models with autoregressive and heteroskedastic disturbances. J. Econometrics 2010, 157, 53–67.

Share and Cite

Perevodchikov, E.V.; Marsh, T.L.; Mittelhammer, R.C. Information Theory Estimators for the First-Order Spatial Autoregressive Model. Entropy 2012, 14, 1165-1185. https://doi.org/10.3390/e14071165