Article

A New Perspective on Moran’s Coefficient: Revisited

Hiroshi Yamada
Graduate School of Humanities and Social Sciences, Hiroshima University, 1-2-1 Kagamiyama, Higashihiroshima 739-8525, Japan
Mathematics 2024, 12(2), 253; https://doi.org/10.3390/math12020253
Submission received: 1 December 2023 / Revised: 8 January 2024 / Accepted: 10 January 2024 / Published: 12 January 2024
(This article belongs to the Special Issue Advances in Graph Theory: Algorithms and Applications)

Abstract

Moran’s I (Moran’s coefficient) is one of the most prominent measures of spatial autocorrelation. It is well known that Moran’s I has a representation that is similar to a Fourier series and is therefore useful for characterizing spatial data. However, the representation needs to be modified. This paper contributes to the literature by showing the necessary modification and presenting some further results. In addition, we provide the required MATLAB/GNU Octave and R user-defined functions.

1. Introduction

Spatial autocorrelation, which describes the similarity between signals at adjacent vertices, is central to geographical and spatial analysis ([1]). It is also closely related to Tobler's first law of geography ([2]). As listed in [1], many measures of it have been proposed. Among them, Moran's I is one of the most prominent. In addition, the popular eigenvector spatial filter (ESF), which was developed by Daniel A. Griffith and his co-authors ([3,4,5,6,7,8,9,10]), is based on Moran's I (see also [11,12]). This paper contributes to the literature by providing new insights into Moran's I.
Dray [13] made an important contribution to the understanding of Moran’s I. More specifically, he presented a remarkable representation of it that is similar to a Fourier series. It is an expansion of Moran’s I into a linear combination of variables with different degrees of spatial autocorrelation. However, the representation needs to be modified. In this paper, after reviewing Dray’s representation, we show the necessary modification. We then present some further results. Specifically, we show that Moran’s I is not just a linear combination of variables with different degrees of spatial autocorrelation, but a weighted average of such variables. A way to obtain the matrices needed for the modified representation is also provided. In addition, we provide the required MATLAB/GNU Octave and R user-defined functions.
We make four remarks on Moran's I. First, Cliff and Ord [14,15,16,17] developed the modern form of Moran's I ([18,19]). Second, there exists
$$\text{positive spatial autocorrelation if } I > -\tfrac{1}{n-1}, \quad \text{no spatial autocorrelation if } I = -\tfrac{1}{n-1}, \quad \text{negative spatial autocorrelation if } I < -\tfrac{1}{n-1}. \quad (1)$$
See [20] (Equation (5)). Third, Moran's I can be regarded as a generalization of some autocorrelation measures, such as Anderson's [21] first circular serial correlation coefficient, Orcutt's [22] first serial correlation coefficient, and Moran's [23] $r_{11}$ ([19]). Thus, our results apply to these as well. Fourth, Anselin [24] developed a spatial autocorrelation coefficient called local Moran's I. In contrast, Moran's I, which is the subject of this paper, is sometimes referred to as global Moran's I.
Two more remarks follow. First, we will refer to Moran's I as Moran's coefficient, because that is what Dray [13] called it. In addition, we will use the same notation as [13] as much as possible. Second, Geary's [25] c is another prominent measure of spatial autocorrelation. Yamada [26,27] provided some results on that coefficient. The current paper can be seen as a companion to [27].
The organization of this paper is as follows. In Section 2, we provide some preliminaries. In Section 3, we briefly review some results in [13]. In Section 4, we present the main results of this paper. In Section 5, we add to the results in Section 4. Section 6 concludes the paper. In Appendix A and Appendix B, we provide proofs and MATLAB/GNU Octave and R user-defined functions, respectively.

Some of the Notations

For a matrix $A$, the transpose of $A$ is denoted by $A^{\top}$. Let $x = [x_1, \ldots, x_n]^{\top}$, let $I_n$ be the identity matrix of order $n$, let $\mathbf{1}$ be the $n$-dimensional column vector of ones, i.e., $\mathbf{1} = [1, \ldots, 1]^{\top}$, and let $e_k$ be the $k$-th column of $I_n$, i.e., $I_n = [e_1, \ldots, e_n]$. Let
$$H = I_n - \mathbf{1}(\mathbf{1}^{\top}\mathbf{1})^{-1}\mathbf{1}^{\top} = I_n - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top}. \quad (2)$$
Then, $H$ is a symmetric and idempotent matrix such that $\mathbf{1}$ belongs to its null space, i.e., $H^{\top} = H$, $H^2 = H$, and $H\mathbf{1} = \mathbf{0}$. For an $n \times m$ full column rank matrix $A$, denote the column space of $A$ and its orthogonal complement by $S(A)$ and $S(A)^{\perp}$, respectively. For a column vector $\eta$, $\|\eta\|^2 = \eta^{\top}\eta$. Let $\operatorname{cor}(\eta_1, \eta_2)$ denote the correlation coefficient between $\eta_1$ and $\eta_2$, i.e.,
$$\operatorname{cor}(\eta_1, \eta_2) = \frac{\eta_1^{\top}H\eta_2}{\sqrt{\eta_1^{\top}H\eta_1}\sqrt{\eta_2^{\top}H\eta_2}}, \quad (3)$$
where $\eta_1$ and $\eta_2$ are $n$-dimensional column vectors that do not belong to $S(\mathbf{1})$, i.e., $\eta_1, \eta_2 \in \mathbb{R}^{n} \setminus S(\mathbf{1})$.
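To fix ideas, the following small numerical illustration of $H$ in (2) may help; it is our own sketch (not part of the original paper), written in R with a toy $n = 4$:

n <- 4
H <- diag(n) - matrix(1, n, n)/n       # H = I_n - (1/n) 11'
all.equal(H, t(H))                     # TRUE: H' = H (symmetric)
all.equal(H %*% H, H)                  # TRUE: H^2 = H (idempotent)
round(H %*% rep(1, n), 12)             # = 0, so 1 belongs to the null space of H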

2. Preliminaries

Following [20], we treat the problem of spatial autocorrelation in terms of a graph (see, for example, [28] for details on linear algebraic graph theory). Let $G = (V, E)$ denote a directed/undirected graph, where $V = \{v_1, \ldots, v_n\}$ is a set of vertices and $E \subseteq V \times V$ is a set of ordered pairs of distinct vertices. Here, $n \geq 2$ and $E \neq \emptyset$. For $i, j = 1, \ldots, n$, let
$$w_{i,j} > 0 \text{ if } (v_i, v_j) \in E, \qquad w_{i,j} = 0 \text{ otherwise}, \quad (4)$$
and let $W = [w_{i,j}] \in \mathbb{R}^{n \times n}$. Here, $(v_i, v_j)$ denotes the directed edge from $v_i$ to $v_j$. If both $(v_i, v_j)$ and $(v_j, v_i)$ belong to $E$, then the edge between $v_i$ and $v_j$ is undirected. Given that $(v_i, v_i) \notin E$, $w_{i,i} = 0$ for $i = 1, \ldots, n$; i.e., the diagonal entries of $W$ are all zero. In addition, given that $E \neq \emptyset$, $\sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j} > 0$.

Let $x_i$ denote the realization of a variable on a vertex $v_i$ for $i = 1, \ldots, n$. Here, we exclude the case where $x_1 = \cdots = x_n$; that is, we assume that $x \notin S(\mathbf{1})$. Accordingly, under this assumption, $\sum_{k=1}^{n}(x_k - \bar{x})^2 > 0$, where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$.
Following [14,15,16,17], the Moran's coefficient for $x$, denoted by $MC(x)$, is defined by
$$MC(x) = \frac{n}{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}} \cdot \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}(x_i - \bar{x})(x_j - \bar{x})}{\sum_{k=1}^{n}(x_k - \bar{x})^2}. \quad (5)$$
Incidentally, a local Moran's coefficient, [24] (Equation (7)), is
$$MC_i(x) = (x_i - \bar{x})\sum_{j=1}^{n} w_{i,j}(x_j - \bar{x}), \quad i = 1, \ldots, n. \quad (6)$$
According to this construction, it follows that $\sum_{i=1}^{n} MC_i(x) = \sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}(x_i - \bar{x})(x_j - \bar{x})$, which appears in the numerator of $MC(x)$ in (5).
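As a quick illustration of the identity just stated, the following R lines (ours, with a toy $W$ and $x$ that are not from the paper) confirm that the local coefficients (6) sum to the numerator of (5):

n <- 4
W <- matrix(c(0,1,0,1, 1,0,1,0, 0,1,0,1, 1,0,1,0), n, n)   # toy adjacency matrix
x <- c(1.2, 0.4, -0.7, 2.1)                                # toy variable on the vertices
xc <- x - mean(x)                                          # x_i - xbar
MCi <- xc * as.vector(W %*% xc)                            # local Moran's coefficients (6)
c(sum(MCi), as.numeric(t(xc) %*% W %*% xc))                # the two entries coincide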
Given that $x^{\top}HWHx = \sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}(x_i - \bar{x})(x_j - \bar{x})$, $x^{\top}Hx = \sum_{k=1}^{n}(x_k - \bar{x})^2$, and $\mathbf{1}^{\top}W\mathbf{1} = \sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}$, as shown in, for example, [13,20], $MC(x)$ in (5) can be represented in matrix notation as
$$MC(x) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \cdot \frac{x^{\top}HWHx}{x^{\top}Hx}. \quad (7)$$
By definition, $W$ is not necessarily symmetric. However, since $\eta^{\top}W\eta = (\eta^{\top}W\eta)^{\top} = \eta^{\top}W^{\top}\eta$ for any $\eta \in \mathbb{R}^{n}$, it follows that
$$\eta^{\top}W\eta = \eta^{\top}W^{*}\eta, \quad (8)$$
where $W^{*} = \frac{W + W^{\top}}{2}$. Note that $W^{*}$ is symmetric even if $W$ is not. Accordingly, as is well known, $MC(x)$ in (7) can be represented with the symmetric matrix $W^{*}$ as
$$MC(x) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \cdot \frac{x^{\top}HW^{*}Hx}{x^{\top}Hx}. \quad (9)$$
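The matrix forms (7) and (9) are straightforward to verify numerically. The following R sketch (ours; it redefines the same toy $W$ and $x$ so that it is self-contained) computes $MC(x)$ both ways and confirms that they coincide, as (8) implies:

n <- 4
W <- matrix(c(0,1,0,1, 1,0,1,0, 0,1,0,1, 1,0,1,0), n, n)
x <- c(1.2, 0.4, -0.7, 2.1)
H <- diag(n) - matrix(1, n, n)/n
Ws <- (W + t(W))/2                                 # W* in (8)
MC7 <- (n/sum(W)) * as.numeric(t(x) %*% H %*% W  %*% H %*% x) / as.numeric(t(x) %*% H %*% x)
MC9 <- (n/sum(W)) * as.numeric(t(x) %*% H %*% Ws %*% H %*% x) / as.numeric(t(x) %*% H %*% x)
c(MC7, MC9)                                        # identical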

3. Brief Review of a Closely Related Study

In this section, we briefly review some results in [13].
Let $z = [z_1, \ldots, z_n]^{\top}$, where $z_i = \frac{x_i - \bar{x}}{s}$ for $i = 1, \ldots, n$. Here, $s = \sqrt{\frac{1}{n}\sum_{k=1}^{n}(x_k - \bar{x})^2}$. Then, given that $s = \sqrt{\frac{1}{n}x^{\top}Hx}$, $z$ can be represented in matrix notation as
$$z = \frac{1}{\sqrt{\frac{1}{n}x^{\top}Hx}}\,Hx. \quad (10)$$
Accordingly, it follows that
$$MC(x) = \frac{1}{\mathbf{1}^{\top}W\mathbf{1}} \cdot \frac{x^{\top}H \cdot HW^{*}H \cdot Hx}{\frac{1}{n}x^{\top}Hx} = \frac{z^{\top}HW^{*}Hz}{\mathbf{1}^{\top}W\mathbf{1}}. \quad (11)$$
Here, we note that the first equality in (11) follows because $H$ is a symmetric and idempotent matrix.
Given that $HW^{*}H$ is a real symmetric matrix, it can be spectrally decomposed as
$$HW^{*}H = U\Lambda U^{\top}, \quad (12)$$
where $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ and $U = [u_1, \ldots, u_n]$ is an orthogonal matrix (e.g., [29] (page 342)). Here, $(\lambda_k, u_k)$ denotes an eigenpair of $HW^{*}H$ for $k = 1, \ldots, n$ such that $\lambda_1, \ldots, \lambda_n$ are in descending order, i.e., $\lambda_1 \geq \cdots \geq \lambda_n$. Incidentally, the ESF is a spatial analysis that uses a submatrix of $U$ (for more details, see, for example, [30] (Section 7)).
From (9), (11), and (12), Dray [13] derived
$$MC(x) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \sum_{k=1}^{n} \lambda_k \frac{x^{\top}u_k u_k^{\top}x}{x^{\top}Hx}, \quad (13)$$
$$MC(x) = \frac{\sum_{k=1}^{n} \lambda_k\, z^{\top}u_k u_k^{\top}z}{\mathbf{1}^{\top}W\mathbf{1}}. \quad (14)$$
Equations (13) and (14), respectively, correspond to Equations (3) and (4) of [13]. In addition, Dray [13] demonstrated that
$$MC(x) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \sum_{k=1}^{n} \lambda_k \operatorname{cor}^2(u_k, z), \quad (15)$$
$$\operatorname{cor}^2(u_k, z) = \frac{\beta_k^2}{n}, \quad k = 1, \ldots, n, \quad (16)$$
where $\beta_k = \arg\min_{\phi_k} \|z - \phi_1 u_1 - \cdots - \phi_n u_n\|^2 = \arg\min_{\phi_k} \|z - \phi_k u_k\|^2$. Among them, (15) corresponds to [13] (Equation (5)). Finally, by combining (15) and the result given by
$$MC(u_k) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}}\lambda_k, \quad k = 1, \ldots, n, \quad (17)$$
from [20], [3] (Equation (3)), [31] (Equation (5)), and [6] (page 1200), Dray [13] derived
$$MC(x) = \sum_{k=1}^{n} MC(u_k)\operatorname{cor}^2(u_k, z), \quad (18)$$
which corresponds to [13] (Equation (6)).
Although (18) as well as (17) are remarkable results, they need to be modified. This is because $\mathbf{1}$ belongs to the null space of $HW^{*}H$, i.e., $HW^{*}H\mathbf{1} = \mathbf{0} = 0 \cdot \mathbf{1}$. Accordingly, when the nullity (the dimension of the null space) of $HW^{*}H$ is one, $\frac{1}{\sqrt{n}}\mathbf{1}$ or $-\frac{1}{\sqrt{n}}\mathbf{1}$ must be one of the normalized eigenvectors of $HW^{*}H$. Denote such a normalized eigenvector by $u_{*}$. Then, neither $MC(u_{*})$ nor $\operatorname{cor}^2(u_{*}, z)$ can be defined because $u_{*}^{\top}Hu_{*} = 0$. Even in the case where the nullity is greater than one, care must be taken to ensure that $u_{*}$ is not selected as one of the normalized eigenvectors. In the next two sections, we present a way to avoid this problem.
Example 1. 
We provide a simple $W$ such that the nullity of $HWH$ is one. It is $W = J_n + J_n^{\top}$, where $J_n = [e_2, \ldots, e_n, e_1]$. This $W$ is the binary adjacency matrix of a cycle graph with $n$ vertices. The left side of Figure 1 shows a cycle graph with six vertices. In this case, it follows that
$$\operatorname{rank}(HWH) = n - 1, \quad (19)$$
unless $n$ is a multiple of 4. A proof of (19) is provided in Appendix A.1.
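The null-space issue described above can be inspected directly. The following R check (ours) builds the cycle-graph $W$ of Example 1 with $n = 6$ and shows that the eigenvector of $HWH$ associated with the zero eigenvalue is $\pm\frac{1}{\sqrt{6}}\mathbf{1}$, which is exactly the vector $u_{*}$ for which $MC(u_{*})$ and $\operatorname{cor}^2(u_{*}, z)$ are undefined:

n <- 6
Jn <- diag(n)[, c(2:n, 1)]                     # J_n = [e_2, ..., e_n, e_1]
W <- Jn + t(Jn)                                # cycle-graph adjacency matrix
H <- diag(n) - matrix(1, n, n)/n
ev <- eigen(H %*% W %*% H, symmetric = TRUE)
round(ev$values, 10)                           # exactly one zero eigenvalue (6 is not a multiple of 4)
round(ev$vectors[, abs(ev$values) < 1e-8], 4)  # proportional to (1/sqrt(6)) * 1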

4. Main Results

In this section, we present the main results of this paper.
Let
$$HW^{*}H = PQP^{\top} \quad (20)$$
be another spectral decomposition of $HW^{*}H$, where $Q = \operatorname{diag}(q_1, \ldots, q_n)$ and $P = [p_1, \ldots, p_n]$ is an orthogonal matrix. Here, $(q_k, p_k)$ denotes an eigenpair of $HW^{*}H$ for $k = 1, \ldots, n$. As stated, given that $HW^{*}H\mathbf{1} = \mathbf{0}$, $\left(0, \frac{1}{\sqrt{n}}\mathbf{1}\right)$ is an eigenpair of $HW^{*}H$, and we let $(q_1, p_1) = \left(0, \frac{1}{\sqrt{n}}\mathbf{1}\right)$. For the other eigenvalues, we suppose that $q_2, \ldots, q_n$ are in descending order, i.e., $q_2 \geq \cdots \geq q_n$. Let $Q_2 = \operatorname{diag}(q_2, \ldots, q_n)$ and $P_2 = [p_2, \ldots, p_n]$. We present the way to obtain $P_2$ as well as $Q_2$ from $W$ in Section 5.
Given that $p_1 = \frac{1}{\sqrt{n}}\mathbf{1}$ and $P = [p_1, P_2]$ is an orthogonal matrix, $H$ can be represented by $P_2$ as
$$H = I_n - p_1(p_1^{\top}p_1)^{-1}p_1^{\top} = P_2(P_2^{\top}P_2)^{-1}P_2^{\top}. \quad (21)$$
See Appendix A.2 for details on (21). In addition, $S(P_2)$ is identical to $S(\mathbf{1})^{\perp}$. (In [20], $S(\mathbf{1})$ and $S(\mathbf{1})^{\perp}$ are, respectively, represented as $R_c$ and $R_c^{\perp}$.) That is, $H$ is the orthogonal projection matrix onto $S(\mathbf{1})^{\perp}$. Accordingly, given that $p_1 \in S(\mathbf{1})$ and $p_k \in S(\mathbf{1})^{\perp}$ for $k = 2, \ldots, n$, it follows that
$$Hp_k = \begin{cases} \mathbf{0}, & k = 1, \\ p_k, & k = 2, \ldots, n. \end{cases} \quad (22)$$
Moreover, we note that the smallest eigenvalue is negative, i.e.,
$$q_n < 0. \quad (23)$$
This is because, as shown in, for example, [32] (page 4),
$$\sum_{k=2}^{n} q_k = \sum_{k=1}^{n} q_k = \operatorname{tr}(HW^{*}H) = \operatorname{tr}(W^{*}H) = \operatorname{tr}(W^{*}) - \frac{1}{n}\operatorname{tr}(W^{*}\mathbf{1}\mathbf{1}^{\top}) = -\frac{1}{n}\operatorname{tr}(\mathbf{1}^{\top}W^{*}\mathbf{1}) = -\frac{\mathbf{1}^{\top}W\mathbf{1}}{n} = -\frac{\sum_{i=1}^{n}\sum_{j=1}^{n} w_{i,j}}{n} < 0. \quad (24)$$
Here, note that $\operatorname{tr}(W^{*}) = \operatorname{tr}\left(\frac{W + W^{\top}}{2}\right) = 0$ because $\operatorname{tr}(W) = \operatorname{tr}(W^{\top}) = 0$. If $q_n \geq 0$, then it follows that $\sum_{k=2}^{n} q_k \geq 0$, which contradicts (24). (Although $q_n$ is negative, $q_2$ is not necessarily positive. See Examples 3 and 5.)
Given that $p_k \in S(\mathbf{1})^{\perp}$ for $k = 2, \ldots, n$, we can consider
$$\operatorname{cor}(p_k, z) = \frac{p_k^{\top}Hz}{\sqrt{p_k^{\top}Hp_k}\sqrt{z^{\top}Hz}}, \quad k = 2, \ldots, n. \quad (25)$$
Then, given that $Hz = z$, $p_k^{\top}Hp_k = p_k^{\top}p_k = 1$ for $k = 2, \ldots, n$, and $z^{\top}Hz = z^{\top}z = n$, we obtain
$$\operatorname{cor}(p_k, z) = \frac{p_k^{\top}z}{\sqrt{n}}, \quad k = 2, \ldots, n. \quad (26)$$
Given that $H$ is a symmetric and idempotent matrix and $P$ is an orthogonal matrix, $MC(x)$ in (7) can be represented as
$$MC(x) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \cdot \frac{x^{\top}HPQP^{\top}Hx}{x^{\top}HPP^{\top}Hx}. \quad (27)$$
Let $\alpha = [\alpha_1, \ldots, \alpha_n]^{\top} = P^{\top}Hx$. Then, from (22), $\alpha_k$ for $k = 1, \ldots, n$ are
$$\alpha_k = \begin{cases} p_1^{\top}Hx = 0, & k = 1, \\ p_k^{\top}x, & k = 2, \ldots, n. \end{cases} \quad (28)$$
Accordingly, $MC(x)$ in (27) can be represented as follows:
$$MC(x) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \cdot \frac{\alpha^{\top}Q\alpha}{\alpha^{\top}\alpha} = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \cdot \frac{\sum_{k=2}^{n} q_k \alpha_k^2}{\sum_{l=2}^{n} \alpha_l^2}. \quad (29)$$
Here, we note that $\alpha$ is
$$\alpha = P^{\top}Hx \in S(e_1)^{\perp} \setminus \{\mathbf{0}\} \quad (30)$$
and that $\frac{\alpha^{\top}Q\alpha}{\alpha^{\top}\alpha}$ in (29) is a Rayleigh quotient because $Q$ is symmetric and $\alpha \neq \mathbf{0}$. For a proof of (30), see Appendix A.3.
Now, consider $MC(p_k)$ for $k = 1, \ldots, n$. Among them, $MC(p_1)$ cannot be defined because $p_1^{\top}Hp_1 = 0$. For $k = 2, \ldots, n$, given that $p_k^{\top}HPQP^{\top}Hp_k = p_k^{\top}PQP^{\top}p_k = e_k^{\top}Qe_k = q_k$ and $p_k^{\top}HPP^{\top}Hp_k = p_k^{\top}PP^{\top}p_k = e_k^{\top}e_k = 1$, it follows that
$$MC(p_k) = \frac{n}{\mathbf{1}^{\top}W\mathbf{1}}\, q_k, \quad k = 2, \ldots, n. \quad (31)$$
Hence, from the inequalities given by $q_2 \geq \cdots \geq q_n$, it follows that
$$MC(p_2) \geq \cdots \geq MC(p_n). \quad (32)$$
In addition, from (23), it follows that $MC(p_n) < 0$, and from (24), it follows that
$$\frac{1}{n-1}\sum_{k=2}^{n} MC(p_k) = \frac{1}{n-1} \cdot \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \sum_{k=2}^{n} q_k = \frac{1}{n-1} \cdot \frac{n}{\mathbf{1}^{\top}W\mathbf{1}} \times \left(-\frac{\mathbf{1}^{\top}W\mathbf{1}}{n}\right) = -\frac{1}{n-1}. \quad (33)$$
If $MC(p_2) = \cdots = MC(p_n) = \mu$, then $\mu = -\frac{1}{n-1}$ from (33). In addition, $MC(x) = \sum_{k=2}^{n}\psi_k MC(p_k) = \mu\sum_{k=2}^{n}\psi_k = \mu$. Accordingly, if $MC(p_2) = \cdots = MC(p_n)$, then $MC(x) = -\frac{1}{n-1}$. Hence, if $MC(p_2) = \cdots = MC(p_n)$, then, given (1), there is always no spatial autocorrelation measured by Moran's coefficient.
Combining (29) and (31), we obtain
$$MC(x) = \sum_{k=2}^{n} \frac{n}{\mathbf{1}^{\top}W\mathbf{1}}\, q_k \frac{\alpha_k^2}{\sum_{l=2}^{n}\alpha_l^2} = \sum_{k=2}^{n} MC(p_k)\frac{\alpha_k^2}{\sum_{l=2}^{n}\alpha_l^2} = \sum_{k=2}^{n} \psi_k MC(p_k), \quad (34)$$
where
$$\psi_k = \frac{\alpha_k^2}{\sum_{l=2}^{n}\alpha_l^2}, \quad k = 2, \ldots, n. \quad (35)$$
Here, given that $(p_k^{\top}p_k)^{-1} = 1$ and $p_k^{\top}p_l = 0$ if $k \neq l$, $\alpha_k$ in (35) is such that
$$\alpha_k = p_k^{\top}x = \arg\min_{\phi_k} \|x - \phi_k p_k\|^2 = \arg\min_{\phi_k} \|x - \phi_1 p_1 - \cdots - \phi_n p_n\|^2. \quad (36)$$
In addition, from (35), it immediately follows that $\psi_k \geq 0$ for $k = 2, \ldots, n$ and $\sum_{k=2}^{n}\psi_k = 1$. Moreover, concerning $\psi_k$, we have the following result.
Lemma 1. 
It follows that
$$\operatorname{cor}^2(p_k, z) = \frac{\alpha_k^2}{\sum_{l=2}^{n}\alpha_l^2}, \quad k = 2, \ldots, n. \quad (37)$$
Proof of Lemma 1. 
See Appendix A.4.    □
The next proposition summarizes the abovementioned results.
Proposition 1. 
(a) $MC(x)$ can be represented as $\sum_{k=2}^{n}\psi_k MC(p_k)$, where
$$\psi_k = \frac{\alpha_k^2}{\sum_{l=2}^{n}\alpha_l^2} = \operatorname{cor}^2(p_k, z), \quad k = 2, \ldots, n. \quad (38)$$
Here, $\psi_k$ for $k = 2, \ldots, n$ are nonnegative and sum to one. (b) It follows that $MC(p_2) \geq \cdots \geq MC(p_n)$, $MC(p_n) < 0$, and $\frac{1}{n-1}\sum_{k=2}^{n} MC(p_k) = -\frac{1}{n-1}$. (c) If $MC(p_2) = \cdots = MC(p_n)$, then $MC(x) = -\frac{1}{n-1}$.
Remark 1.
Concerning Proposition 1, we make four remarks.
(i)
Proposition 1(a) implies that Moran's coefficient in (5) can be represented as a weighted average of $MC(p_2), \ldots, MC(p_n)$. Concerning the weights $\psi_k$ for $k = 2, \ldots, n$: the larger $|\alpha_k| = |p_k^{\top}x| = |(p_k^{\top}p_k)^{-1}p_k^{\top}x|$ is, the larger $\psi_k$ is. Likewise, for $k = 2, \ldots, n$, the larger $|\operatorname{cor}(p_k, z)|$ is, the larger $\psi_k$ is. Recall that, from (26), $\operatorname{cor}(p_k, z) = \frac{p_k^{\top}z}{\sqrt{n}}$.
(ii)
Proposition 1(b) implies that the eigenvectors $p_2, \ldots, p_n$ are in descending order of spatial autocorrelation. Accordingly, for example, if $\{\psi_k\}$ is a monotonically decreasing sequence, then $MC(x)$ is likely to be positive. Of course, from Proposition 1(c), if $MC(p_2) = \cdots = MC(p_n)$, then $MC(x) = -\frac{1}{n-1} < 0$, even if $\{\psi_k\}$ is a monotonically decreasing sequence.
(iii)
From Lemma 1, it immediately follows that
$$\sum_{k=2}^{n}\operatorname{cor}^2(p_k, z) = 1. \quad (39)$$
We provide a more direct proof of (39) in Appendix A.5.
(iv)
The MATLAB/GNU Octave and R user-defined functions to compute $\psi_2 = [\psi_2, \ldots, \psi_n]^{\top}$ (psi2) are provided in Appendix B; a usage sketch follows this remark.
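For concreteness, the following usage sketch (ours) verifies Proposition 1 numerically; it assumes that the R functions psi2, Q2P2, and Emat from Appendix B have already been defined in the session, and it uses the cycle graph of Example 1 with a toy $x$:

n <- 6
Jn <- diag(n)[, c(2:n, 1)]
W <- Jn + t(Jn)                                  # cycle graph
x <- c(2.0, 1.5, 0.3, -0.8, -1.1, 0.6)           # toy variable
H <- diag(n) - matrix(1, n, n)/n
Psi2 <- psi2(x, W)                               # weights psi_2, ..., psi_n
MCp <- (n/sum(W)) * diag(Q2P2(W)$Q2)             # MC(p_k) by (31)
MCx <- (n/sum(W)) * as.numeric(t(x) %*% H %*% W %*% H %*% x) / as.numeric(t(x) %*% H %*% x)
c(MCx, sum(Psi2 * MCp))                          # Proposition 1(a): the two agree
c(sum(Psi2), mean(MCp), -1/(n - 1))              # weights sum to one; mean of MC(p_k) is -1/(n-1)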
Example 2. 
Consider two extreme cases. First, suppose that $\psi_2 = 1$ and $\psi_3 = \cdots = \psi_n = 0$. Then, since $\psi_2$ is the coefficient of $MC(p_2)$, which is larger than or equal to $MC(p_k)$ for $k = 3, \ldots, n$, $x$ is considered to be highly positively autocorrelated. Second, suppose that $\psi_2 = \cdots = \psi_{n-1} = 0$ and $\psi_n = 1$. Then, since $\psi_n$ is the coefficient of $MC(p_n)$, which is negative and smaller than or equal to $MC(p_k)$ for $k = 2, \ldots, n-1$, $x$ is considered to be highly negatively autocorrelated. Note that, as mentioned, these do not hold if $MC(p_2) = \cdots = MC(p_n)$.
Example 3. 
We give an example such that $MC(p_2) = \cdots = MC(p_n)$. Consider the case where $W (= W^{*}) = \mathbf{1}\mathbf{1}^{\top} - I_n$, which is the binary adjacency matrix of the complete graph with $n$ vertices. The right side of Figure 1 shows a complete graph with six vertices. We note that this case is also considered in [32]. Then, it follows that
$$HWH = H(\mathbf{1}\mathbf{1}^{\top} - I_n)H = -H. \quad (40)$$
Given that $H$ is a symmetric and idempotent matrix whose rank equals $n-1$, its eigenvalues are 0 with multiplicity 1 and 1 with multiplicity $n-1$ (e.g., [29] (page 167)); hence, the eigenvalues of $-H$ are 0 with multiplicity 1 and $-1$ with multiplicity $n-1$. Then, since $q_2 = \cdots = q_n = -1$, it follows that $MC(p_2) = \cdots = MC(p_n)$. Here, it is noteworthy that, in this case, $q_2$ is not the largest eigenvalue of $HWH$: $q_2 = -1$ is less than $q_1 = 0$.
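A quick R check of Example 3 (ours; it assumes Q2P2 and Emat from Appendix B are defined):

n <- 6
Wc <- matrix(1, n, n) - diag(n)    # complete graph: W = 11' - I_n
diag(Q2P2(Wc)$Q2)                  # q_2 = ... = q_n = -1, so MC(p_2) = ... = MC(p_n)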
From Proposition 1, it immediately follows that
$$MC(x) = \sum_{k=2}^{n}\psi_k MC(p_k) \leq \sum_{k=2}^{n}\psi_k MC(p_2) = MC(p_2)\sum_{k=2}^{n}\psi_k = MC(p_2). \quad (41)$$
Likewise, $MC(p_n) \leq MC(x)$ follows.
The next corollary summarizes the above results.
Corollary 1. 
$MC(x)$ belongs to the interval given by $\left[MC(p_n),\ MC(p_2)\right]$.
Remark 2.
Concerning Corollary 1, we make three remarks.
(i)
If $MC(p_2) = \cdots = MC(p_n)$, then the interval given by $\left[MC(p_n), MC(p_2)\right]$ reduces to a singleton. For example, as stated, if $W = \mathbf{1}\mathbf{1}^{\top} - I_n$, then
$$MC(x) = -\frac{1}{n-1}. \quad (42)$$
(ii)
De Jong et al. [20] and [32] (Theorem 2.1) showed that
$$MC(x) \in \left[\frac{n}{\mathbf{1}^{\top}W\mathbf{1}}\,q_n,\ \frac{n}{\mathbf{1}^{\top}W\mathbf{1}}\,q_2\right]. \quad (43)$$
Given (31), Corollary 1 is equivalent to this result.
(iii)
The MATLAB/GNU Octave and R user-defined functions to compute the bounds of Moran's coefficient (MoranIbounds) are provided in Appendix B; a usage sketch follows this remark.
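A usage sketch (ours; it assumes MoranIbounds and its helper functions from Appendix B are defined) for the bounds in Corollary 1 and (43):

n <- 6
Jn <- diag(n)[, c(2:n, 1)]
W <- Jn + t(Jn)                    # cycle graph
b <- MoranIbounds(W)
c(b$Ilb, b$Iub)                    # [MC(p_n), MC(p_2)]; MC(x) always lies in this interval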
Let $\chi = \chi_1 + \chi_2$, where $\chi_1 \in S(\mathbf{1})$ and $\chi_2 \in S(x) \setminus \{\mathbf{0}\}$. Maruyama [32] showed that $MC(\chi)$ equals $MC(x)$; that is, for all $c_1 \in \mathbb{R}$ and $c_2 \in \mathbb{R} \setminus \{0\}$, $MC(c_1\mathbf{1} + c_2 x)$ and $MC(x)$ are identical. Given this result, from Proposition 1, we immediately obtain the following extended result.
Corollary 2. 
$MC(\chi)$ can be represented as $\sum_{k=2}^{n}\psi_k MC(p_k)$, where $\psi_k = \frac{\alpha_k^2}{\sum_{l=2}^{n}\alpha_l^2} = \operatorname{cor}^2(p_k, z)$ for $k = 2, \ldots, n$.
Example 4. 
Given that $Hx = x - \bar{x}\mathbf{1}$, $Hx$ is an example of $\chi$. Of course, this result is quite reasonable because $H$ in (7) is a symmetric and idempotent matrix. Accordingly, from Corollary 2, $MC(Hx)$ can be represented as $\sum_{k=2}^{n}\psi_k MC(p_k)$.
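The invariance behind Corollary 2 is also easy to confirm numerically. The sketch below (ours, with a toy $x$) checks that $MC(c_1\mathbf{1} + c_2 x) = MC(x)$:

MCfun <- function(v, W){                     # Moran's coefficient via (9)
  n <- length(v)
  H <- diag(n) - matrix(1, n, n)/n
  Ws <- (W + t(W))/2
  (n/sum(W)) * as.numeric(t(v) %*% H %*% Ws %*% H %*% v) / as.numeric(t(v) %*% H %*% v)
}
n <- 6
Jn <- diag(n)[, c(2:n, 1)]
W <- Jn + t(Jn)                              # cycle graph
x <- c(2.0, 1.5, 0.3, -0.8, -1.1, 0.6)
c(MCfun(x, W), MCfun(3*x - 5, W))            # chi = c_1*1 + c_2*x with c_1 = -5, c_2 = 3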

5. Additional Results

In this section, we add to the results of the previous section. More specifically, we document useful results for obtaining $P_2 = [p_2, \ldots, p_n]$ as well as $Q_2 = \operatorname{diag}(q_2, \ldots, q_n)$ from $W$. Recall that $\alpha_k = p_k^{\top}x$ for $k = 2, \ldots, n$ and that the bounds of Moran's coefficient are described by $q_2$ and $q_n$. Although $(q_k, p_k)$ for $k = 2, \ldots, n$ are eigenpairs of $HW^{*}H$, it is not easy to obtain them directly from its eigendecomposition: when the nullity of $HW^{*}H$ is one, the position of the eigenpair $\left(0, \frac{1}{\sqrt{n}}\mathbf{1}\right)$ among the computed eigenpairs is unknown, and when the nullity is greater than one, $\frac{1}{\sqrt{n}}\mathbf{1}$ is not necessarily selected as one of the eigenvectors.
Let $G = [g_1, G_2]$, where $g_1 = \frac{1}{\sqrt{n}}\mathbf{1}$ and $G_2 = [g_2, \ldots, g_n]$ is such that $\{g_2, \ldots, g_n\}$ is an orthonormal basis of $S(\mathbf{1})^{\perp}$. Accordingly, $G$ is an $n \times n$ orthogonal matrix. In addition, let $V = G^{\top}P$ and let $\Xi$ be the $(n-1) \times (n-1)$ submatrix of $V$ such that
$$V = \begin{bmatrix} v_{1,1} & v_{1,2} \\ v_{2,1} & \Xi \end{bmatrix}. \quad (44)$$
Then, we have the following results:
Proposition 2. 
Let $\Xi = [\xi_2, \ldots, \xi_n]$. Then, $(q_k, \xi_k)$ for $k = 2, \ldots, n$ are the eigenpairs of $G_2^{\top}W^{*}G_2$. In addition, $P_2$ is equal to $G_2\Xi$.
Proof. 
See Appendix A.6.    □
Remark 3.
Concerning Proposition 2, we make two remarks.
(i)
The candidates for $G_2$ are numerous. One of them is the following $n \times (n-1)$ matrix:
$$E_2 = \sqrt{\frac{2}{n}}\begin{bmatrix} \cos(2-1)\theta_1 & \cdots & \cos(n-1)\theta_1 \\ \vdots & \ddots & \vdots \\ \cos(2-1)\theta_n & \cdots & \cos(n-1)\theta_n \end{bmatrix}, \quad (45)$$
where $\theta_i = \frac{(i-0.5)\pi}{n}$ for $i = 1, \ldots, n$. Here,
$$E = \left[\tfrac{1}{\sqrt{n}}\mathbf{1},\ E_2\right] \quad (46)$$
is the matrix used for the discrete cosine transform ([33,34]). The following $n \times (n-1)$ matrix is another candidate:
$$F_2 = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ -1 & 1 & \cdots & 1 \\ 0 & -2 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -(n-1) \end{bmatrix}\Gamma^{-1}, \quad (47)$$
where $\Gamma = \operatorname{diag}\left(\sqrt{1 \cdot 2}, \ldots, \sqrt{(n-1) \cdot n}\right)$. (The use of $F_2$ is inspired by [32].) Here,
$$F = \left[\tfrac{1}{\sqrt{n}}\mathbf{1},\ F_2\right] \quad (48)$$
is a Helmert orthogonal matrix ([35]); an illustrative R construction of $F$ is given after this remark.
(ii)
The MATLAB/GNU Octave and R user-defined functions to compute $Q_2$ and $P_2$ (Q2P2) and $E$ (Emat) are provided in Appendix B.
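For item (i) of Remark 3, here is an illustrative R construction of the Helmert matrix $F$ in (48). The function name Fmat mirrors the MATLAB function provided in [27], but this R version is our own sketch:

Fmat <- function(n){
  Fm <- matrix(0, n, n)
  Fm[, 1] <- 1/sqrt(n)                       # first column: (1/sqrt(n)) * 1
  for (j in 2:n){                            # j-th column: normalized Helmert contrast
    Fm[1:(j - 1), j] <- 1/sqrt((j - 1)*j)
    Fm[j, j] <- -(j - 1)/sqrt((j - 1)*j)
  }
  Fm
}
round(crossprod(Fmat(5)), 12)                # = I_5, so F is orthogonal; F_2 = Fmat(n)[, 2:n]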
Example 5. 
We provide an example of the use of the user-defined function Q2P2. Consider the case such that
$$W = J_4 + J_4^{\top} = \begin{bmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \end{bmatrix}, \quad (49)$$
which is the binary adjacency matrix of a cycle graph with four vertices. In this case, as shown in Appendix A.1, the spectrum of $W$ is as follows:
$$\left\{2,\ 2\cos\tfrac{\pi}{2},\ 2\cos\pi,\ 2\cos\tfrac{3\pi}{2}\right\} = \{2, 0, -2, 0\}. \quad (50)$$
In addition, again from Appendix A.1, the spectrum of $HWH$ is $\{0, 0, -2, 0\}$. By applying Q2P2, we obtain the corresponding $P_2$ as
$$P_2 = \begin{bmatrix} 0.3607 & -0.6082 & -0.5000 \\ 0.6082 & 0.3607 & 0.5000 \\ -0.3607 & 0.6082 & -0.5000 \\ -0.6082 & -0.3607 & 0.5000 \end{bmatrix}, \quad (51)$$
from which it is easy to see that all column vectors of $P_2$ belong to $S(\mathbf{1})^{\perp}$ and are orthogonal to each other. (Since eigenvectors are determined only up to sign, the column signs in (51) are one admissible choice and may differ across eigensolvers.)
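The computation in Example 5 can be reproduced as follows (our sketch; it assumes the R functions Q2P2 and Emat from Appendix B are defined; eigenvector signs may differ across eigensolvers):

W4 <- rbind(c(0, 1, 0, 1),
            c(1, 0, 1, 0),
            c(0, 1, 0, 1),
            c(1, 0, 1, 0))        # cycle graph with four vertices, (49)
res <- Q2P2(W4)
diag(res$Q2)                      # q_2, q_3, q_4 = 0, 0, -2
round(res$P2, 4)                  # P_2 as in (51), up to column signs
round(colSums(res$P2), 12)        # = 0: each column is orthogonal to 1
round(crossprod(res$P2), 12)      # = I_3: columns are orthonormal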

6. Concluding Remarks

In this paper, we contributed to the literature by showing that Moran's coefficient can be represented as a weighted average of $MC(p_2), \ldots, MC(p_n)$. Here, $MC(p_2), \ldots, MC(p_n)$ are in descending order, and thus $p_k$ is more positively autocorrelated than, or equally autocorrelated to, $p_{k+1}$ for $k = 2, \ldots, n-1$. Therefore, the representation is somewhat similar to a Fourier series. This is useful because we can characterize spatial data based on it. We then presented the theory of how to obtain the required matrices, such as $P_2 = [p_2, \ldots, p_n]$, and provided MATLAB/GNU Octave and R user-defined functions for analysis. The theoretical results we obtained are summarized in Propositions 1 and 2 and Corollaries 1 and 2.

Funding

The Japan Society for the Promotion of Science supported this work through KAKENHI (grant number: 23K013770A).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The author would like to thank the three anonymous reviewers for their valuable comments and suggestions. An earlier draft of this paper was presented at the Rokko Forum at Kobe University, Japan. The author thanks the participants of the seminar, especially Yuzo Maruyama and Naoya Sueishi. The usual caveat applies.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Proofs

In this section, we provide some proofs.

Appendix A.1. Proof of (19)

Proof of (19). 
The spectrum of $W = J_n + J_n^{\top}$ is $\{\gamma_1, \ldots, \gamma_n\}$, where $\gamma_k = 2\cos\left(\frac{2\pi(k-1)}{n}\right)$ for $k = 1, \ldots, n$ (see, for example, [36,37]). Denote a normalized eigenvector of $W$ associated with $\gamma_k$ by $\nu_k$ for $k = 1, \ldots, n$. Given that $\gamma_1 = 2\cos(0) = 2$ and $W\mathbf{1} = 2 \cdot \mathbf{1}$, it follows that $W\mathbf{1} = \gamma_1\mathbf{1}$, and thus we may let $\nu_1 = \frac{1}{\sqrt{n}}\mathbf{1}$. In addition, given that $W$ is symmetric and $\gamma_k \neq \gamma_1$ for $k = 2, \ldots, n$, it follows that $\nu_k \in S(\mathbf{1})^{\perp}$ for $k = 2, \ldots, n$. Accordingly, the eigenvectors are such that
$$H\nu_k = \begin{cases} \mathbf{0}, & k = 1, \\ \nu_k, & k = 2, \ldots, n. \end{cases} \quad (A1)$$
Consequently, it immediately follows that $HWH\nu_1 = \mathbf{0} = 0 \cdot \nu_1$, from which $(0, \nu_1)$ is an eigenpair of $HWH$. In addition, it also follows that
$$HWH\nu_k = HW\nu_k = H(\gamma_k\nu_k) = \gamma_k\nu_k, \quad k = 2, \ldots, n. \quad (A2)$$
Therefore, $(\gamma_k, \nu_k)$ for $k = 2, \ldots, n$ are also eigenpairs of $HWH$. Finally, $\gamma_k \neq 0$ for $k = 2, \ldots, n$ unless $n$ is a multiple of 4.    □

Appendix A.2. Proof of (21)

Proof of (21). 
The first equality follows from $p_1(p_1^{\top}p_1)^{-1}p_1^{\top} = \frac{1}{\sqrt{n}}\mathbf{1}\left(\frac{1}{\sqrt{n}}\mathbf{1}^{\top}\frac{1}{\sqrt{n}}\mathbf{1}\right)^{-1}\frac{1}{\sqrt{n}}\mathbf{1}^{\top} = \mathbf{1}(\mathbf{1}^{\top}\mathbf{1})^{-1}\mathbf{1}^{\top}$. Next, since $P$ is nonsingular, it follows that $P(P^{\top}P)^{-1}P^{\top} = I_n$. In addition, since $p_1^{\top}P_2 = \mathbf{0}$, $P^{\top}P$ is a block diagonal matrix, and we then have
$$P(P^{\top}P)^{-1}P^{\top} = p_1(p_1^{\top}p_1)^{-1}p_1^{\top} + P_2(P_2^{\top}P_2)^{-1}P_2^{\top}. \quad (A3)$$
Combining these results proves the second equality.    □

Appendix A.3. Proof of (30)

Proof of (30). 
Given that $x \notin S(\mathbf{1})$ by assumption, $x$ can be represented as $p_1\zeta_1 + P_2\zeta_2$, where $\zeta_1 \in \mathbb{R}$ and $\zeta_2 \in \mathbb{R}^{n-1} \setminus \{\mathbf{0}\}$. Then, from (22), it follows that $Hx = H(p_1\zeta_1 + P_2\zeta_2) = P_2\zeta_2$, which results in
$$\alpha = P^{\top}Hx = \begin{bmatrix} p_1^{\top}P_2\zeta_2 \\ P_2^{\top}P_2\zeta_2 \end{bmatrix} = \begin{bmatrix} 0 \\ \zeta_2 \end{bmatrix} \in S(e_1)^{\perp} \setminus \{\mathbf{0}\}. \quad (A4)$$
   □

Appendix A.4. Proof of Lemma 1

Proof of Lemma 1. 
From (10), (22), and (26), it follows that
$$\operatorname{cor}(p_k, z) = \frac{p_k^{\top}z}{\sqrt{n}} = \frac{1}{\sqrt{n}} \cdot \frac{p_k^{\top}Hx}{\sqrt{\frac{1}{n}x^{\top}Hx}} = \frac{p_k^{\top}x}{\sqrt{x^{\top}Hx}}, \quad k = 2, \ldots, n. \quad (A5)$$
Here, $\alpha_k = p_k^{\top}x$ for $k = 2, \ldots, n$ and $x^{\top}Hx = x^{\top}HPP^{\top}Hx = \sum_{l=2}^{n}\alpha_l^2$. Thus, we obtain
$$\operatorname{cor}(p_k, z) = \frac{\alpha_k}{\sqrt{\sum_{l=2}^{n}\alpha_l^2}}, \quad k = 2, \ldots, n, \quad (A6)$$
which results in (37).    □

Appendix A.5. Proof of (39)

Proof of (39). 
From (26), it follows that $\sum_{k=2}^{n}\operatorname{cor}^2(p_k, z) = \frac{1}{n}\sum_{k=2}^{n}(p_k^{\top}z)^2$. In addition, $p_1^{\top}z = \frac{1}{\sqrt{n}}\mathbf{1}^{\top}Hz = 0$ because $Hz = z$ and $\mathbf{1}^{\top}H = \mathbf{0}^{\top}$. Combining these yields
$$\sum_{k=2}^{n}\operatorname{cor}^2(p_k, z) = \frac{1}{n}\sum_{k=1}^{n}(p_k^{\top}z)^2. \quad (A7)$$
Here, we have $\sum_{k=1}^{n}(p_k^{\top}z)^2 = z^{\top}PP^{\top}z = z^{\top}z = n$. Therefore, $\sum_{k=2}^{n}\operatorname{cor}^2(p_k, z)$ is equal to one.    □

Appendix A.6. Proof of Proposition 2

Proof of Proposition 2. 
The following equalities hold:
$$G^{\top}HW^{*}HG\,V = G^{\top}HW^{*}HG\,G^{\top}P = G^{\top}PQP^{\top}G\,G^{\top}P = VQ. \quad (A8)$$
The first equality immediately follows from the definition of $V$, i.e., $V = G^{\top}P$. The second equality follows from (20). The third equality follows from the definition of $V$ and the fact that both $G$ and $P$ are orthogonal matrices.

Given that $g_1 = p_1$, it follows that $v_{1,1} = g_1^{\top}p_1 = 1$, $v_{1,2} = g_1^{\top}P_2 = \mathbf{0}^{\top}$, and $v_{2,1} = G_2^{\top}p_1 = \mathbf{0}$, which results in
$$V = G^{\top}P = \begin{bmatrix} g_1^{\top}p_1 & g_1^{\top}P_2 \\ G_2^{\top}p_1 & G_2^{\top}P_2 \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}^{\top} \\ \mathbf{0} & \Xi \end{bmatrix}. \quad (A9)$$
Here, $\Xi$ is an orthogonal matrix because $V^{\top}V = P^{\top}GG^{\top}P = I_n$. In addition, given that $HG = [\mathbf{0}, G_2]$, it follows that
$$G^{\top}HW^{*}HG = \begin{bmatrix} 0 & \mathbf{0}^{\top} \\ \mathbf{0} & G_2^{\top}W^{*}G_2 \end{bmatrix}. \quad (A10)$$
Then, from (A9) and (A10), (A8) can be represented as
$$\begin{bmatrix} 0 & \mathbf{0}^{\top} \\ \mathbf{0} & G_2^{\top}W^{*}G_2 \end{bmatrix}\begin{bmatrix} 1 & \mathbf{0}^{\top} \\ \mathbf{0} & \Xi \end{bmatrix} = \begin{bmatrix} 1 & \mathbf{0}^{\top} \\ \mathbf{0} & \Xi \end{bmatrix}\begin{bmatrix} q_1 & \mathbf{0}^{\top} \\ \mathbf{0} & Q_2 \end{bmatrix}, \quad (A11)$$
from which we obtain
$$G_2^{\top}W^{*}G_2\,\Xi = \Xi Q_2. \quad (A12)$$
Here, recall that $\Xi$ is an orthogonal matrix and $Q_2$ is a diagonal matrix. In addition, given that $G$ is an orthogonal matrix, premultiplying (A9) by $G = [g_1, G_2]$ yields
$$[p_1, P_2] = P = GV = [g_1, G_2]\begin{bmatrix} 1 & \mathbf{0}^{\top} \\ \mathbf{0} & \Xi \end{bmatrix} = [g_1, G_2\Xi], \quad (A13)$$
which yields $P_2 = G_2\Xi$.    □

Appendix B. User-Defined Functions

In this section, we provide the MATLAB/GNU Octave and R user-defined functions to compute $\psi_2 = [\psi_2, \ldots, \psi_n]^{\top}$ (psi2), the bounds of Moran's coefficient (MoranIbounds), $Q_2$ and $P_2$ (Q2P2), and $E$ (Emat). Of the input arguments, x is an $n$-dimensional column vector and W is an $n \times n$ matrix. Of the output arguments, Psi2 is an $(n-1)$-dimensional column vector.

Appendix B.1. MATLAB/GNU Octave Functions

function [Psi2] = psi2(x, W)
% Computes the weights psi_2, ..., psi_n in Proposition 1(a).
   [Q2, P2] = Q2P2(W);
   c = (P2'*x).^2;
   Psi2 = c/sum(c);
end

function [Ilb, Iub] = MoranIbounds(W)
% Computes the lower and upper bounds of Moran's coefficient (Corollary 1).
   W = (W + W')/2; n = size(W, 1); m = sum(sum(W));
   [Q2, P2] = Q2P2(W);
   Iub = (n/m)*Q2(1, 1); Ilb = (n/m)*Q2(n-1, n-1);
end

function [Q2, P2] = Q2P2(W)
% Computes Q2 = diag(q_2, ..., q_n) and P2 = [p_2, ..., p_n] (Proposition 2).
   W = (W + W')/2; n = size(W, 1);
   G = Emat(n); G2 = G(:, 2:n); A = G2'*W*G2;
   [X, L] = eig((A + A')/2);
   [l, ind] = sort(diag(L), 'descend');
   Q2 = diag(l); Xi = X(:, ind);
   P2 = G2*Xi;
end

function [E] = Emat(n)
% Constructs the discrete cosine transform matrix E in (46).
   E = zeros(n, n);
   E(:, 1) = ones(n, 1)/sqrt(n);
   for i = 1:n
      for j = 2:n
         E(i, j) = sqrt(2/n)*cos((j-1)*(i-0.5)*pi/n);
      end
   end
end
Note that (A+A')/2 in Q2P2 ensures symmetry. In addition, G=Emat(n) in Q2P2 can be replaced by G=Fmat(n), where Fmat is a function that constructs F; it is provided in [27].

Appendix B.2. R Functions

psi2 = function(x, W){
  # Computes the weights psi_2, ..., psi_n in Proposition 1(a).
  Q2P2_result = Q2P2(W)
  c = (t(Q2P2_result$P2) %*% x)^2
  Psi2 = c/sum(c)
  return(Psi2)
}

MoranIbounds = function(W){
  # Computes the lower and upper bounds of Moran's coefficient (Corollary 1).
  W = (W + t(W))/2; n = dim(W)[1]; m = sum(W)
  Q2P2_result = Q2P2(W); Q2 = Q2P2_result$Q2; P2 = Q2P2_result$P2
  Iub = (n/m)*Q2[1, 1]; Ilb = (n/m)*Q2[n-1, n-1]
  return(list(Ilb = Ilb, Iub = Iub))
}

Q2P2 = function(W){
  # Computes Q2 = diag(q_2, ..., q_n) and P2 = [p_2, ..., p_n] (Proposition 2).
  W = (W + t(W))/2; n = nrow(W)
  G = Emat(n); G2 = G[, 2:n]; A = t(G2) %*% W %*% G2
  eig_result = eigen((A + t(A))/2)
  l = eig_result$values; ind = order(l, decreasing = TRUE)
  Q2 = diag(l[ind]); Xi = eig_result$vectors[, ind]
  P2 = G2 %*% Xi
  return(list(Q2 = Q2, P2 = P2))
}

Emat = function(n){
  # Constructs the discrete cosine transform matrix E in (46).
  E = matrix(0, n, n)
  E[, 1] = rep(1/sqrt(n), n)
  for (i in 1:n){
    for (j in 2:n){
      E[i, j] = sqrt(2/n)*cos((j-1)*(i-0.5)*pi/n)
    }
  }
  return(E)
}
Note that (A+t(A))/2 in Q2P2 is to ensure symmetry.

References

  1. Getis, A. A history of the concept of spatial autocorrelation: A geographer’s perspective. Geogr. Anal. 2008, 40, 297–309. [Google Scholar] [CrossRef]
  2. Moraga, P. Spatial Statistics for Data Science: Theory and Practice with R; Chapman and Hall/CRC: Boca Raton, FL, USA, 2023. [Google Scholar]
  3. Griffith, D.A. Spatial autocorrelation and eigenfunctions of the geographic weights matrix accompanying geo-referenced data. Can. Geogr. 1996, 40, 351–367. [Google Scholar] [CrossRef]
  4. Griffith, D.A. Spatial Autocorrelation and Spatial Filtering: Gaining Understanding Through Theory and Scientific Visualization; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  5. Getis, A.; Griffith, D.A. Comparative spatial filtering in regression analysis. Geogr. Anal. 2002, 34, 130–140. [Google Scholar] [CrossRef]
  6. Tiefelsdorf, M.; Griffith, D.A. Semiparametric filtering of spatial autocorrelation: The eigenvector approach. Environ. Plan. 2007, 39, 1017–1274. [Google Scholar] [CrossRef]
  7. Murakami, D.; Griffith, D.A. Random effects specifications in eigenvector spatial filtering: A simulation study. J. Geogr. Syst. 2015, 17, 311–331. [Google Scholar] [CrossRef]
  8. Murakami, D.; Griffith, D.A. Eigenvector spatial filtering for large data sets: Fixed and random effects approaches. Geogr. Anal. 2019, 51, 23–49. [Google Scholar] [CrossRef]
  9. Murakami, D.; Griffith, D.A. Spatially varying coefficient modeling for large datasets: Eliminating N from spatial regressions. Spat. Stat. 2019, 30, 39–64. [Google Scholar] [CrossRef]
  10. Murakami, D.; Yoshida, T.; Seya, H.; Griffith, D.A.; Yamagata, Y. A Moran coefficient-based mixed effects approach to investigate spatially varying relationships. Spat. Stat. 2017, 19, 68–89. [Google Scholar] [CrossRef]
  11. Seya, H.; Murakami, D.; Tsutsumi, M.; Yamagata, Y. Application of LASSO to the eigenvector selection problem in eigenvector-based spatial filtering. Geogr. Anal. 2015, 47, 284–299. [Google Scholar] [CrossRef]
  12. Nishi, H.; Asami, Y.; Baba, H.; Shimizu, C. Scalable spatiotemporal regression model based on Moran’s eigenvectors. Int. J. Geogr. Inf. Sci. 2023, 37, 162–188. [Google Scholar] [CrossRef]
  13. Dray, S. A new perspective about Moran’s coefficient: Spatial autocorrelation as a linear regression problem. Geogr. Anal. 2011, 43, 127–141. [Google Scholar] [CrossRef]
  14. Cliff, A.D.; Ord, J.K. The problem of spatial autocorrelation. In Studies in Regional Science; Scott, A.J., Ed.; Pion: London, UK, 1969; pp. 25–55. [Google Scholar]
  15. Cliff, A.D.; Ord, J.K. Spatial autocorrelation: A review of existing and new measures with applications. Econ. Geogr. 1970, 46, 269–292. [Google Scholar] [CrossRef]
  16. Cliff, A.D.; Ord, J.K. Spatial Autocorrelation; Pion: London, UK, 1973. [Google Scholar]
  17. Cliff, A.D.; Ord, J.K. Spatial Processes: Models and Applications; Pion: London, UK, 1981. [Google Scholar]
  18. Li, H.; Calder, C.A.; Cressie, N. Beyond Moran’s I: Testing for spatial dependence based on the spatial autoregressive model. Geogr. Anal. 2007, 39, 357–375. [Google Scholar] [CrossRef]
  19. Yamada, H. A unified perspective on some autocorrelation measures in different fields: A note. Open Math. 2023, 21, 20220574. [Google Scholar] [CrossRef]
  20. De Jong, P.; Sprenger, C.; van Veen, F. On extreme values of Moran’s I and Geary’s c. Geogr. Anal. 1984, 16, 17–24. [Google Scholar] [CrossRef]
  21. Anderson, R.L. Distribution of the serial correlation coefficient. Ann. Math. Stat. 1942, 13, 1–13. [Google Scholar] [CrossRef]
  22. Orcutt, G.H. A study of the autoregressive nature of the time series used for Tinbergen’s model of the economic system of the United States, 1919–1932. J. R. Stat. Soc. 1948, 10, 1–45. [Google Scholar] [CrossRef]
  23. Moran, P.A.P. Notes on continuous stochastic phenomena. Biometrika 1950, 37, 17–23. [Google Scholar] [CrossRef]
  24. Anselin, L. Local indicators of spatial association—LISA. Geogr. Anal. 1995, 27, 93–115. [Google Scholar] [CrossRef]
  25. Geary, R.C. The contiguity ratio and statistical mapping. Inc. Stat. 1954, 5, 115–145. [Google Scholar] [CrossRef]
  26. Yamada, H. Geary’s c and spectral graph theory. Mathematics 2021, 9, 2465. [Google Scholar] [CrossRef]
  27. Yamada, H. Geary’s c and spectral graph theory: A complement. Mathematics 2023, 11, 4228. [Google Scholar] [CrossRef]
  28. Gallier, J. Spectral theory of unsigned and signed graphs. Applications to graph clustering: A survey. arXiv 2016, arXiv:1601.04692. [Google Scholar]
  29. Seber, G.A.F. A Matrix Handbook for Statisticians; Wiley: Hoboken, NJ, USA, 2008. [Google Scholar]
  30. Yamada, H. Spatial smoothing using graph Laplacian penalized filter. Spat. Stat. 2024, forthcoming. [Google Scholar]
  31. Boots, B.; Tiefelsdorf, M. Global and local spatial autocorrelation in bounded regular tessellations. J. Geogr. Syst. 2000, 2, 319–348. [Google Scholar] [CrossRef]
  32. Maruyama, Y. An alternative to Moran’s I for spatial autocorrelation. arXiv 2015, arXiv:1501.06260. [Google Scholar]
  33. Ahmed, N.; Natarajan, T.; Rao, K.R. Discrete cosine transform. IEEE Trans. Comput. 1974, C-23, 90–93. [Google Scholar] [CrossRef]
  34. Strang, G. The discrete cosine transform. SIAM Rev. 1999, 41, 135–147. [Google Scholar] [CrossRef]
  35. Lancaster, H.O. The Helmert Matrices. Am. Math. Mon. 1965, 72, 4–12. [Google Scholar] [CrossRef]
  36. Brouwer, A.E.; Haemers, W.H. Spectra of Graphs; Springer: New York, NY, USA, 2012. [Google Scholar]
  37. Estrada, E.; Knight, P. A First Course in Network Theory; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
Figure 1. A cycle graph with 6 vertices (left). A complete graph with 6 vertices (right).
