1. Introduction
In papers [1,2,3], we calculated eigenvalues of Ising connection matrices defined on d-dimensional hypercube lattices (d = 1, 2, 3). To provide translation invariance, we imposed periodic boundary conditions. In our calculations, we accounted for interactions not only with the nearest spins but with distant spins as well. In papers [1,2], we analyzed isotropic interactions, while in paper [3] we discussed the general case of anisotropic interactions. We succeeded in obtaining analytical expressions for the eigenvalues of the above-described Ising connection matrices. For the d-dimensional system, the eigenvalues are polynomials of degree d in the eigenvalues of the one-dimensional system with long-range interaction (see [2,3]). The coefficients of these polynomials are the constants of interaction between spins.
In the present paper, we solve an inverse problem formulated as follows. Suppose that we need an Ising connection matrix with a given spectrum of eigenvalues. Two questions arise. First, can any sequence of random numbers be the spectrum of some connection matrix? Second, how do we restore the interaction constants that define the connection matrix whose spectrum matches the given one? In Section 2, we obtain the answers to these questions for the one-dimensional Ising system. In Section 3 and Section 4, we extend the obtained results to the two- and three-dimensional systems, respectively. The discussion and conclusions are in Section 5.
The eigenvalues of Ising connection matrices became of interest since, in recent times, problems have appeared where it is necessary to generate an Ising connection matrix with a given spectrum [4]. Moreover, the eigenvalues of the Ising connection matrix are closely related to the calculation of the partition function. Indeed, let A be an N × N Ising connection matrix and s = (s_1, s_2, …, s_N) be an N-dimensional configuration vector whose coordinates are s_i = ±1. In the absence of a magnetic field, the partition function is:

Z(β) = Σ_s exp(β(As, s)/2),

where β is the inverse temperature and the sum runs over all 2^N spin configurations.
Let us expand each exponential here into a formal Taylor series and rearrange the summands, combining in one sum the terms with the same power of β. Then, we have an expansion of the form:

Z(β) = Σ_{k≥0} b_k β^k.

In [5], we showed that the first three coefficients of this expansion are defined by the traces of powers of the matrix A. Here, Tr A^k = Σ_i λ_i^k denotes the trace of the k-th power of the matrix, where λ_i are the eigenvalues of the matrix A. Note that beginning from a certain order, the expressions for the coefficients become more complex, including not only the traces Tr A^k but also some additional terms. We verified that up to a certain order these additional terms are also defined by the traces of lower powers of A. We hope that the same is also true for larger values of k. These arguments show that the eigenvalues of the Ising connection matrix may be useful when calculating the partition function.
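The relation between traces and eigenvalue sums is easy to verify numerically. The following sketch is our own illustration (not from the cited papers); it checks Tr A^k = Σ_i λ_i^k for a random symmetric matrix with zero diagonal, as in the Ising case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric "connection" matrix with zero diagonal (no self-action).
N = 6
A = rng.normal(size=(N, N))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

eigenvalues = np.linalg.eigvalsh(A)

# Tr A^k equals the sum of the k-th powers of the eigenvalues.
for k in (1, 2, 3):
    trace_k = np.trace(np.linalg.matrix_power(A, k))
    assert np.isclose(trace_k, np.sum(eigenvalues ** k))
```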
In concluding the introduction, we would like to briefly discuss the place of the Ising model in modern science. This model describes a system of interacting particles placed at the nodes of a multidimensional regular lattice. The Ising model appeared almost a hundred years ago. Its purpose was to describe analytically the collective behavior of a large number of interacting binary spins and to define the thermodynamic properties of such a system. W.L. Bragg and E.J. Williams were the first who succeeded in describing a phase transition with the aid of the Ising model. However, they made the unrealistic supposition that all the spins interacted in the same way (see the mean-field model described in [6]). Finally, in the late forties, L. Onsager et al. found an exact solution for the planar Ising system, when the spins were at the nodes of a plane square lattice and only the nearest spins interacted. Sometimes such a short-range interaction describes real systems. With regard to the Ising problem, this result is one of the most significant.
At first, the Ising model described systems of interacting spins. However, its universal formalism makes it possible to use this model in different scientific fields where interacting neurons, agents, and other objects are described by binary variables. Nowadays, scientists use the Ising model when solving problems of the spin glass theory [7] and the neural network theory [8]. They use it in the theory and applications of global minimization [9,10], in socio- and econophysics [11], and in many other problems. The calculation of the partition function is the main and the most difficult part of all these problems.
There is extensive literature on the inverse Ising problem; see, for example, a rather full review published in [12]. When solving inverse Ising problems, the authors examine how, with the aid of statistical inference methods, one can estimate the parameters of the Ising system—interaction constants and external magnetic fields—when the empirical characteristics of a large number of random spin configurations are known. We would like to emphasize that although, as in the papers cited in [12], we also restore the parameters of Ising systems, the setting of the problem and the method of its solution differ significantly. In our approach, we invert the exact formulas that express the connection matrix eigenvalues in terms of its matrix elements. In the statistical inference approach, however, the input data are observables such as magnetizations, correlations, etc. The solution tools are also different: the Boltzmann equilibrium distribution, the principle of maximum likelihood, the Bayes theorem, etc.
2. One-Dimensional Ising Model
(1) A one-dimensional Ising system is a linear chain of L interacting spins. To provide translation invariance, let us close the chain in a ring. Then, the last spin is also the nearest neighbor of the first spin. This means that each spin has two nearest neighbors (on the left and on the right), two next-nearest neighbors (the distance to which is twice as large), two next-next-nearest neighbors, etc. To be specific, we suppose that L is odd: L = 2l + 1. Consequently, each spin has l pairs of neighbors. Since we have in mind to discuss multidimensional lattices, we use the term “coordination spheres” to describe these pairs: the first coordination sphere, the second coordination sphere, …, the l-th coordination sphere. At the beginning of the next Section, we give a general definition of the coordination spheres.
By J^(k), we denote the connection matrix that defines the interaction of each spin only with the spins from the k-th coordination sphere. For example, it is easy to see that J^(k) is a symmetric matrix with ones at the k-th and (L − k)-th diagonals that are parallel to the main diagonal. We use the set of matrices J^(k), k = 1, 2, …, l, to write down the Ising connection matrix A that accounts for interactions with spins belonging to all the coordination spheres. Let w_k be the constant of interaction with the spins from the k-th coordination sphere. Then:

A = w_1 J^(1) + w_2 J^(2) + … + w_l J^(l). (1)

When there is no interaction with the spins from the k-th coordination sphere, the corresponding constant w_k in Equation (1) is equal to zero.
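As a minimal sketch of this construction (the function names are ours, not the paper's), the coordination-sphere matrices and the connection matrix of Equation (1) can be assembled as circulants:

```python
import numpy as np

def coordination_matrix(L, k):
    """Symmetric circulant with ones at the k-th and (L - k)-th diagonals."""
    J = np.zeros((L, L))
    for i in range(L):
        J[i, (i + k) % L] = 1.0
        J[i, (i - k) % L] = 1.0
    return J

def connection_matrix(w):
    """Connection matrix of a ring of L = 2l + 1 spins, w = (w_1, ..., w_l)."""
    l = len(w)
    L = 2 * l + 1
    return sum(w_k * coordination_matrix(L, k + 1) for k, w_k in enumerate(w))

A = connection_matrix([1.0, 0.5, -0.25])  # l = 3 coordination spheres, L = 7
```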
(2) The matrices J^(k) are circulants: each next row of such a matrix is obtained by a cyclic shift of the previous row one position to the right. All circulants have the same set of eigenvectors, which may have complex coordinates [13,14]. In the general case, the eigenvalues of circulant matrices can also be complex. However, since in our problem the matrices J^(k) are symmetric, their eigenvalues are real. By λ_α(k), α = 0, 1, …, L − 1, we denote the eigenvalues of these matrices. It can be shown that [2,3]:

λ_α(k) = 2cos(2πkα/L), α = 0, 1, …, L − 1. (2)

The first eigenvalue of each matrix J^(k) is equal to 2: λ_0(k) = 2, and the other eigenvalues are twice degenerate:

λ_α(k) = λ_{L−α}(k), α = 1, 2, …, l.

Consequently, for each k (if we do not take into account the first eigenvalue), the sequence λ_1(k), λ_2(k), …, λ_{L−1}(k) is mirror-symmetric about its middle (see Figure 1). In what follows, we repeatedly use this symmetry property.
The eigenvector f_0 with equal coordinates corresponds to the first eigenvalue λ_0(k) = 2:

f_0 = (1, 1, …, 1)^T / √L.

We can choose the two eigenvectors corresponding to a degenerate eigenvalue λ_α(k) to be real. It is convenient to write them as follows:

f_α^(1) = √(2/L)·(1, cos(2πα/L), cos(4πα/L), …, cos(2πα(L − 1)/L))^T,
f_α^(2) = √(2/L)·(0, sin(2πα/L), sin(4πα/L), …, sin(2πα(L − 1)/L))^T, α = 1, 2, …, l. (3)

Since the eigenvectors of all the matrices J^(k) are the same, it is easy to write down the eigenvalues of the connection matrix (1):

λ_α = w_1 λ_α(1) + w_2 λ_α(2) + … + w_l λ_α(l) = 2 Σ_{k=1}^{l} w_k cos(2πkα/L), α = 0, 1, …, L − 1. (4)

Expression (4) is a generalization of the formula obtained previously in [15].
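The closed form above can be checked against direct numerical diagonalization. A short sketch under the same conventions (the specific constants are test values of ours):

```python
import numpy as np

l, L = 3, 7                        # L = 2l + 1 spins in the ring
w = np.array([1.0, 0.5, -0.25])    # interaction constants w_1, ..., w_l

# Build the connection matrix: w_k at the k-th and (L - k)-th circulant diagonals.
A = np.zeros((L, L))
for k in range(1, l + 1):
    for i in range(L):
        A[i, (i + k) % L] += w[k - 1]
        A[i, (i - k) % L] += w[k - 1]

# Eigenvalues predicted by the closed form: 2 * sum_k w_k cos(2 pi k alpha / L).
alpha = np.arange(L)
predicted = 2 * sum(w[k - 1] * np.cos(2 * np.pi * k * alpha / L)
                    for k in range(1, l + 1))

assert np.allclose(np.sort(predicted), np.sort(np.linalg.eigvalsh(A)))
```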
The spectrum of the eigenvalues of the connection matrix A cannot be a set of arbitrary numbers. It has a structure defined by the properties of the summands in Equation (4). First, since Equation (2) holds for each k, the spectrum of the eigenvalues λ_α has to be mirror-symmetric about its middle (without accounting for the first eigenvalue). Then, we have the equalities:

λ_α = λ_{L−α}, α = 1, 2, …, l. (5)

Second, due to the zero-valued elements at the main diagonals of all the matrices J^(k), the sum of the eigenvalues of the matrix A has to be equal to zero. This means that:

λ_0 + 2(λ_1 + λ_2 + … + λ_l) = 0. (6)

Consequently, only l numbers λ_0, λ_1, …, λ_{l−1} of the set (4) can be arbitrary. The other eigenvalues are expressed through these numbers with the aid of Equations (5) and (6).
(3) Let us analyze the inverse problem. Suppose we know the spectrum {λ_α} of a connection matrix of a one-dimensional Ising system (for example, obtained experimentally). Of course, this sequence satisfies equalities (5) and (6). What are the connections between the spins that provide this spectrum?
To determine the unknowns w_1, w_2, …, w_l, we have to solve the system (4) with the known left-hand side:

2 Σ_{k=1}^{l} w_k cos(2πkα/L) = λ_α, α = 0, 1, …, L − 1. (7)

We can obtain the answer in an explicit form. Let us generate an L-dimensional vector Λ = (λ_0, λ_1, …, λ_{L−1})^T whose coordinates are the eigenvalues of the experimental spectrum. We also generate L-dimensional vectors λ^(k) whose coordinates are the eigenvalues of the matrices J^(k):

λ^(k) = (λ_0(k), λ_1(k), …, λ_{L−1}(k))^T, k = 1, 2, …, l. (8)

Then, we can rewrite the system of Equations (7) in the vector form:

Λ = w_1 λ^(1) + w_2 λ^(2) + … + w_l λ^(l). (9)

It is evident that the vectors λ^(k) and the eigenvectors f_k^(1) are collinear: λ^(k) = √(2L)·f_k^(1). Consequently, the vectors λ^(k) are mutually orthogonal, and we can calculate the weights w_k as scalar products of the vectors Λ and λ^(k):

w_k = (Λ, λ^(k)) / (λ^(k), λ^(k)) = (Λ, λ^(k)) / (2L), k = 1, 2, …, l.
By doing that, we solve the inverse problem in the one-dimensional case.
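The whole restoration procedure fits in a few lines of numerics. In this sketch (test values of ours), we build a spectrum from known constants and recover them as normalized scalar products; the squared norm of each eigenvalue vector equals 2L:

```python
import numpy as np

l, L = 3, 7
rng = np.random.default_rng(1)
w_true = rng.normal(size=l)          # "unknown" interaction constants to recover

alpha = np.arange(L)
# Rows are the eigenvalue vectors of the coordination-sphere matrices, k = 1..l.
lam = np.array([2 * np.cos(2 * np.pi * k * alpha / L) for k in range(1, l + 1)])

# "Experimental" spectrum, ordered by alpha.
spectrum = lam.T @ w_true

# Inverse problem: scalar products divided by the squared norms (= 2L).
w_restored = lam @ spectrum / (2 * L)

assert np.allclose(w_restored, w_true)
```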
3. Two-Dimensional Ising Model
(1) In this case, the spins are at the nodes of a square lattice of size L × L. As previously, we set L = 2l + 1 and assume periodic boundary conditions. Then, each spin has l pairs of neighbors along both the horizontal and the vertical axes. In addition, there are neighbors that are not on the same horizontal or vertical axis as the given spin.
The set of spins that interact equally with the given spin constitutes a coordination sphere. In the case of an isotropic interaction, the coordination spheres consist of spins equally distant from the given spin. Then, we can enumerate the coordination spheres in ascending order of the distances to the given spin. In the anisotropic case, the interaction constants, and not the distances, define the spins belonging to a given coordination sphere.
When analyzing multidimensional Ising systems, we first have to distribute the spins between the coordination spheres. This step is simple in the one-dimensional case: the pair of spins that are equidistant from the given spin belongs to the same coordination sphere. In the case of a two-dimensional lattice, to describe the interaction between the spins spaced by m steps along the vertical axis and by k steps along the horizontal axis, we introduce the interaction constant w_{mk}. The values of m and k change independently from 0 to l. If the interaction is anisotropic, w_{mk} ≠ w_{km}; in the isotropic case, w_{mk} = w_{km}. The difference between the coordination spheres in the isotropic and anisotropic cases influences the symmetry properties of the spectrum.
Let us make a few necessary comments. Since there is no self-action in the system, we always have w_{00} = 0. It is convenient to introduce the unit L × L matrix E = J^(0). This matrix completes the set of the matrices J^(k). All the eigenvalues of the matrix E are equal to one. With the aid of these eigenvalues, we define the L-dimensional vector:

λ^(0) = (1, 1, …, 1)^T,

which completes the set (8) of the vectors λ^(k): k = 0, 1, …, l.
In the next item, we solve the inverse problem in the case of anisotropic interaction. The isotropic interaction is the subject of the last item of this Section.
(2) In paper [3], we showed that the L² × L² matrix A2 that describes the interactions w_{mk} between the spins has a block-circulant form and that its eigenvectors are the pairwise Kronecker products of the eigenvectors defined by Equation (3). Exactly as in the one-dimensional case, the set of the eigenvectors of the matrix A2 does not depend on the interaction constants, and the eigenvalues of this matrix obtained in [3] are:

λ_{αβ} = Σ_{m,k=0}^{l} w_{mk} λ_α(m) λ_β(k), α, β = 0, 1, …, L − 1. (10)
Let us write Equation (10) in the vector form using the above-introduced L-dimensional vectors λ^(k) (see Equation (8)). With the aid of these vectors, we generate L²-dimensional vectors λ^(mk) that are the Kronecker products of the vectors λ^(m) and λ^(k):

λ^(mk) = λ^(m) ⊗ λ^(k), m, k = 0, 1, …, l. (11)

The vectors λ^(mk) are mutually orthogonal. Let us define an L²-dimensional vector Λ2 whose coordinates are the eigenvalues λ_{αβ} defined by Equation (10):

Λ2 = (λ_{00}, λ_{01}, …, λ_{0,L−1}, λ_{10}, …, λ_{L−1,L−1})^T. (12)

Now, we can rewrite the set of equalities (10) in the vector form:

Λ2 = Σ_{m,k=0}^{l} w_{mk} λ^(mk). (13)

Since w_{00} = 0, the term with λ^(00) is absent in this equation.
Equation (13) allows us to easily solve the two-dimensional inverse problem. Namely, we have to determine the interaction constants w_{mk} that provide a known eigenvalue spectrum {λ_{αβ}}. For example, it might be an experimental spectrum.
Let us write an L²-dimensional column vector Λ2 of the form (12) using the “experimental” spectrum components, and let us take into account the mutual orthogonality of the vectors (11). Then, the desired interaction constants are the scalar products of the L²-dimensional vectors:

w_{mk} = (Λ2, λ^(mk)) / (λ^(mk), λ^(mk)), m, k = 0, 1, …, l. (14)
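A numerical sketch of the two-dimensional restoration (test values of ours): the spectrum vector is assembled from Kronecker products of the one-dimensional eigenvalue vectors, and each constant is recovered as a normalized scalar product:

```python
import numpy as np

l, L = 2, 5
alpha = np.arange(L)
# Row 0 is the all-ones vector (identity matrix); rows 1..l are 2 cos(2 pi k a / L).
lam = np.vstack([np.ones(L)] +
                [2 * np.cos(2 * np.pi * k * alpha / L) for k in range(1, l + 1)])

rng = np.random.default_rng(2)
w_true = rng.normal(size=(l + 1, l + 1))
w_true[0, 0] = 0.0                   # no self-action

# Spectrum vector of the two-dimensional system in the vector form.
Lambda2 = sum(w_true[m, k] * np.kron(lam[m], lam[k])
              for m in range(l + 1) for k in range(l + 1))

# Restore the constants: scalar products with the mutually orthogonal vectors.
w_restored = np.zeros_like(w_true)
for m in range(l + 1):
    for k in range(l + 1):
        v = np.kron(lam[m], lam[k])
        w_restored[m, k] = (v @ Lambda2) / (v @ v)

assert np.allclose(w_restored, w_true)
```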
Now, let us discuss another question. In the same way as in the one-dimensional problem, not any sequence of numbers can be a spectrum of a connection matrix: the symmetry properties of the L²-dimensional vectors λ^(mk) impose rather severe restrictions on the values of these numbers.
Firstly, from Equation (13) it follows that the sum of the numbers λ_{αβ} has to be equal to zero:

Σ_{α,β=0}^{L−1} λ_{αβ} = 0. (15)
Secondly, in the one-dimensional problem the set of the eigenvalues (excluding the first eigenvalue) is mirror-symmetric about its middle for each k: λ_α(k) = λ_{L−α}(k). From Equation (11), which defines the L²-dimensional vectors λ^(mk) as the products of the eigenvalues λ_α(m) by the vectors λ^(k), it is evident that their last l·L coordinates copy the preceding l·L ones in the mirror order. Consequently, the same has to be true for the sequence of the numbers λ_{αβ}. Then, it is necessary that the numbers that constitute the spectrum satisfy the equalities:

λ_{αβ} = λ_{L−α,β}, α = 1, 2, …, l, β = 0, 1, …, L − 1.

In other words, the last l·L terms of the sequence of the numbers λ_{αβ} are not free parameters.
Thirdly, since the last l coordinates of each L-dimensional vector λ^(k) are a mirror image of the preceding l coordinates, not all of the first (l + 1)·L coordinates of any vector λ^(mk) are different. Consequently, the same has to be true for the given sequence {λ_{αβ}}: the last l terms of the first group of its L terms have to be a mirror image of the preceding l terms; the last l terms of the second group of its L terms have to be a mirror image of the preceding l terms, etc. Finally, for the last, (l + 1)-th, group consisting of L terms of the sequence, the equalities λ_{lβ} = λ_{l,L−β}, β = 1, 2, …, l, have to be fulfilled. This means that by symmetry reasons only (l + 1)² numbers

λ_{αβ}, α, β = 0, 1, …, l, (16)

of the sequence {λ_{αβ}} may be independent parameters.
We can rewrite Equation (15) using only the terms of sequence (16):

λ_{00} + 2 Σ_{β=1}^{l} λ_{0β} + 2 Σ_{α=1}^{l} λ_{α0} + 4 Σ_{α,β=1}^{l} λ_{αβ} = 0. (17)

This equation allows us to express one of the terms, for example λ_{00}, through the other independent numbers from sequence (16). Consequently, the number of independent values equals exactly the number of the orthogonal vectors taking part in the expansion (13).
(3) Finally, let us briefly discuss a two-dimensional Ising system with an isotropic interaction. Evidently, we can again use Equations (13), (14) and (17). However, now the number of various interaction constants is not (l + 1)² − 1 but (l + 1)(l + 2)/2 − 1. This means that the same has to be the number of independent terms in the given sequence {λ_{αβ}} that represents the spectrum of an isotropic connection matrix. Without proof, let us write down the formulas that replace Equations (16) and (17) when the interaction between the spins is isotropic.
After removing all the numbers λ_{αβ} that due to the symmetry reasons copy other coordinates of the vector Λ2 (see Equation (12)), in place of (16) we obtain the sequence:

λ_{αβ}, 0 ≤ α ≤ β ≤ l, (18)

that includes only (l + 1)(l + 2)/2 numbers. Next, when the interaction is isotropic we can rewrite the general requirement (15) as follows:

λ_{00} + 4 Σ_{β=1}^{l} λ_{0β} + 4 Σ_{α=1}^{l} λ_{αα} + 8 Σ_{1≤α<β≤l} λ_{αβ} = 0,

and calculate λ_{00} with the aid of this equation. As a result, we obtain the correct answer: in sequence (18), the number of independent values is equal to (l + 1)(l + 2)/2 − 1.
4. Three-Dimensional Ising Model
(1) We consider a system of spins at the nodes of a cubic lattice of size L × L × L, assuming periodic boundary conditions. Then, each spin has l pairs of neighbors situated along each of the three independent coordinate axes. In addition, the spins have neighbors that are not on the same coordinate axes as the given spin. The spins equally interacting with the given spin constitute a coordination sphere.
Let w_{kmn} be the constant of interaction between spins shifted with respect to each other by a distance k along the first axis, by a distance m along the second axis, and by a distance n along the third axis. When the interaction is anisotropic, there are (l + 1)³ − 1 independent interaction constants w_{kmn}, where the −1 appears since there is no self-interaction: w_{000} = 0. In the case of an isotropic interaction, the number of various constants is equal to (l + 1)(l + 2)(l + 3)/6 − 1.
In paper [3], we showed that the L³ × L³ connection matrix A3 defined by the interaction constants w_{kmn} is a block-circulant. Its eigenvectors are the Kronecker products of the eigenvectors defined by Equation (3). These vectors constitute a full set of the eigenvectors of any connection matrix of the three-dimensional Ising system, and they do not depend on the values of the interaction constants w_{kmn}. Let us write down the eigenvalues of the matrix A3 obtained in [3]:

λ_{αβγ} = Σ_{k,m,n=0}^{l} w_{kmn} λ_α(k) λ_β(m) λ_γ(n), α, β, γ = 0, 1, …, L − 1. (19)
We use the above-introduced L²-dimensional vectors λ^(mk) (see Equation (11)) to generate L³-dimensional vectors λ^(kmn) that are the Kronecker products of the vectors λ^(km) and λ^(n):

λ^(kmn) = λ^(k) ⊗ λ^(m) ⊗ λ^(n), k, m, n = 0, 1, …, l. (20)

The vectors λ^(kmn) are mutually orthogonal.
Let us define an L³-dimensional vector Λ3 whose coordinates are the eigenvalues (19):

Λ3 = (λ_{000}, λ_{001}, …, λ_{L−1,L−1,L−1})^T. (21)

Then, we can rewrite the set of Equations (19) in the vector form:

Λ3 = Σ_{k,m,n=0}^{l} w_{kmn} λ^(kmn). (22)
Equation (22) allows us to solve the inverse problem and calculate the interaction constants w_{kmn} that define the given set of eigenvalues {λ_{αβγ}} of the connection matrix. Indeed, let us transform this “experimental” spectrum into an L³-dimensional column vector Λ3 of the form (21) and use the mutual orthogonality of the vectors λ^(kmn). Then, we obtain the required interaction constants as the scalar products of the L³-dimensional vectors:

w_{kmn} = (Λ3, λ^(kmn)) / (λ^(kmn), λ^(kmn)), k, m, n = 0, 1, …, l. (23)

This formula solves the problem of restoring the interaction constants corresponding to the given spectrum.
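The same scheme works in three dimensions. A compact numerical sketch (test values of ours, using the smallest nontrivial ring L = 3):

```python
import numpy as np

l, L = 1, 3                          # L = 2l + 1
alpha = np.arange(L)
lam = np.vstack([np.ones(L), 2 * np.cos(2 * np.pi * alpha / L)])

rng = np.random.default_rng(3)
w_true = rng.normal(size=(l + 1, l + 1, l + 1))
w_true[0, 0, 0] = 0.0                # no self-interaction

def basis(k, m, n):
    """Triple Kronecker product of the one-dimensional eigenvalue vectors."""
    return np.kron(np.kron(lam[k], lam[m]), lam[n])

# Spectrum vector of the three-dimensional system.
Lambda3 = sum(w_true[k, m, n] * basis(k, m, n)
              for k in range(l + 1) for m in range(l + 1) for n in range(l + 1))

# Restore every constant as a normalized scalar product.
w_restored = np.zeros_like(w_true)
for k in range(l + 1):
    for m in range(l + 1):
        for n in range(l + 1):
            v = basis(k, m, n)
            w_restored[k, m, n] = (v @ Lambda3) / (v @ v)

assert np.allclose(w_restored, w_true)
```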
(2) Not any sequence of numbers {λ_{αβγ}} can represent the spectrum of a three-dimensional Ising connection matrix. To start with, the equality

Σ_{α,β,γ=0}^{L−1} λ_{αβγ} = 0 (24)

has to hold. As in the two-dimensional problem, the cases of anisotropic and isotropic interactions differ significantly. When the interaction is anisotropic, it is easy to list the values λ_{αβγ} from which we exclude the numbers repeated due to the symmetry reasons. This list contains (l + 1)³ values:

λ_{αβγ}, α, β, γ = 0, 1, …, l

(compare with Equation (16)). Due to Equation (24), the number of independent values in this list is one less. For example, we can express λ_{000} in terms of the other independent values:

λ_{000} = −Σ 2^ν λ_{αβγ}, (25)

where the sum runs over all (α, β, γ) ≠ (0, 0, 0) with 0 ≤ α, β, γ ≤ l, and ν is the number of nonzero indices among α, β, γ. The symmetry reasons allow us to restore all the other numbers λ_{αβγ}. Consequently, the number of independent values equals exactly the number of the basic vectors λ^(kmn) (see Equation (20)) that enter the sum (22) with nonzero coefficients.
When the interaction is isotropic, due to symmetry restrictions only (l + 1)(l + 2)(l + 3)/6 values λ_{αβγ} may be independent. They are:

λ_{αβγ}, 0 ≤ α ≤ β ≤ γ ≤ l.

In addition, due to Equation (24) this number is reduced by one. In the same way as previously (see Equation (25)), we can define, for example, λ_{000}. Then, using the remaining independent values and the symmetry reasons, we restore all the other numbers λ_{αβγ}. Thus, in the given “experimental” set of eigenvalues there must be (l + 1)(l + 2)(l + 3)/6 − 1 independent values, and this number exactly matches the number of various coefficients w_{kmn} in expansion (22).