2. The Rules Of The Game
The first rule of our game is the principle of reason (POR): No distinction without reason. We should not add or remove something specific (an asymmetry, a concept, a distinction) from our model without a clear and explicit reason. If there is no reason for a specific asymmetry or choice, then all possibilities are considered equivalently.
The second rule is the principle of variation (POV): We postulate that change is immanent to all fundamental quantities in our game. From these two rules we conclude that the mathematical object of our theory is a list (n-tuple) of quantities (variables) ψ, each of which varies at all times.
The third rule is the principle of objectivity (POO): Any law within this game refers to measurements, defined as comparisons of quantities (object properties) with other object properties of the same type (i.e., unit). Measurements require reference standards (rulers). A measurement is objective if it is based on (properties of) the objects of the game. This apparent self-reference is unavoidable, as it models the real situation of physics as an experimental science. Since all fundamental objects (quantities) in our model vary at all times, the only option to construct a constant quantity that might serve as a ruler is given by constants of motion (COM). Hence the principle of objectivity requires that measurement standards are derived from constants of motion.
This third rule implies that the fundamental variables cannot be directly measured; only functions of the fundamental variables with the dimension (unit) of a COM can. Thus the model has two levels: the level of the fundamental variable list ψ, which is experimentally not directly accessible, and a level of observables, which are (as we shall argue) even moments of the fundamental variables ψ.
2.1. Discussion of the Rules
E.T. Jaynes wrote that “Because of their empirical origins, QM and QED are not physical theories at all. In contrast, Newtonian celestial mechanics, Relativity, and Mendelian genetics are physical theories, because their mathematics was developed by reasoning out the consequences of clearly stated physical principles which constrained the possibilities”. And he continues “To this day we have no constraining principle from which one can deduce the mathematics of QM and QED; [...] In other words, the mathematical system of the present quantum theory is [...] unconstrained by any physical principle” [9]. This remarkably harsh criticism of quantum mechanics raises the question of what we consider to be a physical principle. Are the rules of our game physical principles? We believe that they are not substantial physical principles but formal first principles: they are preconditions of a sensible theory. They contain no immediate physical content, but they define the form or the idea of physics.
It is to a large degree immanent to science, and specifically to physics, to presuppose the existence of reason: Apples do not fall down by chance; there is a reason for this tendency. Usually this belief in reason implies the belief in causality, i.e., that we can also (at least in principle) explain why a specific apple falls at a specific time. But practically this latter belief can rarely be confirmed experimentally and therefore remains to some degree metaphysical. Thus, if, as scientists, we postulate that things have reason, then this is not a physical principle but a precondition, a first principle.
The second rule (POV) is specific to the form (or idea) of physics, i.e., that it is the sense of physics to recognize patterns of motion and to predict the future. Therefore the notion of time, in the form of change, is indeed immanent to the physical description of reality.
The principle of objectivity (POO) is immanent to the very idea of physics: A measurement is the comparison of properties of objects with compatible properties of reference objects, i.e., it requires “constant” rulers. Hence the rules of the game are to a large degree unavoidable: They follow from the very form of physics, and therefore certain laws of physics are not substantial results of a physical theory. For instance, a consistent “explanation” of the stability of matter is impossible, as we presumed it already within the idea of measurement. More precisely: if this presumption does not follow within the framework of a physical theory, then the theory is flawed, since it cannot reproduce its own presumptions.
Einstein wrote with respect to relativity that “It is striking that the theory (except for the four-dimensional space) introduces two kinds of things, i.e., (1) measuring rods and clocks; (2) all other things, e.g., the electromagnetic field, the material point, etc. This, in a certain sense, is inconsistent; strictly speaking, measuring rods and clocks should emerge as solutions of the basic equations [...], not, as it were, as theoretically self-sufficient entities” [10]. It may be all the more surprising that the stability of matter cannot be obtained from classical physics, as remarked by Elliott H. Lieb: “A fundamental paradox of classical physics is why matter, which is held together by Coulomb forces, does not collapse” [11]. This single sentence seems to rule out the possibility of a fundamental classical theory and uncovers the uncomfortable situation of theoretical physics today: Despite the overwhelming experimental and technological success, there is a deep-seated confusion concerning the theoretical foundations. Our game is therefore a meta-experiment. The primary goal is not to find “new” laws of nature or new experimental predictions; rather, it is a conceptual “experiment” that aims to further develop our understanding of the consequences of principles: which ones are really required to derive central “results” of contemporary physics. In this short essay final answers cannot be given, but maybe some new insights are possible.
2.2. What about Space-Time?
A theory has to make the choice between postulate and proof. If a 3+1 dimensional space-time is presumed, then it cannot be proven within the same theoretical framework; more precisely, the value of such a proof remains questionable. This is a sufficient reason to avoid postulates concerning the dimensionality of space-time. Another, even stronger, reason to avoid a direct postulate of space-time and its geometry has been given above: The fundamental variables that we postulated cannot be directly measured. This excludes space-time coordinates as primary variables (since these can be directly measured), and with them almost all other a priori assumed concepts like velocity, acceleration, momentum, energy and so on. At some point these concepts certainly have to be introduced, but we suggest an approach to the formation of concepts that differs from the Newtonian axiomatic method. The POR does not allow us to introduce a distinction of the fundamental variables into coordinates and momenta without reason. Therefore we are forced to use an interpretational method, which one might summarize as function follows form: We shall first derive equations, and then we shall interpret the equations according to some formal criteria. This implies that we have to refer to already existing notions if we want to identify quantities according to their appearance within a certain formalism. The consequence for the game is that we have to show how geometrical notions can arise: If we do not postulate space-time, then we have to suggest a method to construct it.
A consequence of our conception is that both objects and fields have to be identified with dynamical structures, as there is simply nothing else available. This fits the framework of structure preserving (symplectic) dynamics that we shall derive from the described principles.
3. Theory of Small Oscillations
In this section we shall derive the theory of coupled oscillators from the rules of our game. According to the POO there exists a function (COM) H(ψ) = const (let us first, for simplicity, assume that H does not explicitly depend on time), such that:

0 = dH/dt = Σ_k (∂H/∂ψ_k) ψ̇_k    (1)

or in vector notation

0 = (∇_ψ H)^T ψ̇    (2)

The simplest solution is given by an arbitrary skew-symmetric matrix X:

ψ̇ = X ∇_ψ H    (3)
Note that it is only the skew-symmetry of X which ensures that it is always a solution to Equation (2) and which ensures that H is constant. If we now consider a state vector ψ of dimension k, then there is a theorem in linear algebra which states that for any skew-symmetric matrix X there exists a non-singular matrix Q such that we can write [12]:
Q^T X Q = diag(η, η, …, η, 0, …, 0)    (4)

where η is the matrix

η = ( 0  1 ; −1  0 )    (5)

If we restrict ourselves to orthogonal matrices Q, then we may still write

Q^T X Q = diag(λ_1 η, λ_2 η, …, λ_m η, 0, …, 0)    (6)

with real λ_i. In both cases we may leave away the zeros, since they correspond to non-varying variables, which would be in conflict with the second rule of our modeling game. Hence the dimension of the state vector must be even (2n), and the square matrix has the dimension 2n × 2n. As we have no specific reason to assume asymmetries between the different degrees of freedom (DOF), we have to choose all λ_i = 1 in Equation (6) and return to Equation (4) without zeros, and define the block-diagonal so-called symplectic unit matrix (SUM) γ_0:

γ_0 = diag(η, η, …, η)    (7)
These few basic rules thus lead us directly to Hamiltonian mechanics: Since the state vector has even dimension and due to the form of γ_0, we can interpret ψ as an ensemble of n classical DOF, each DOF represented by a canonical pair of coordinate and momentum: ψ = (q_1, p_1, q_2, p_2, …, q_n, p_n)^T. In this notation, and after the application of the transformation Q, Equation (3) can be written in form of the Hamiltonian equations of motion (HEQOM):

q̇_i = ∂H/∂p_i
ṗ_i = −∂H/∂q_i    (8)
The validity of the HEQOM is of fundamental importance, as it allows for the use of the results of Hamiltonian mechanics, of statistical mechanics and of thermodynamics, but without the intrinsic presupposition that the q_i have to be understood as positions in real space and the p_i as the corresponding canonical momenta. This is legitimate, as the theory of canonical transformations is independent of any specific physical interpretation of what the coordinates and momenta represent physically. As no other interpretation is at hand, we say that these canonical pairs are coordinates in an abstract phase space; they are canonical coordinates and momenta only due to the form of the HEQOM. The choice of the specific form of γ_0 is for n > 1 DOF not unique. It could for instance be written as

γ_0 = ( 0  1_n ; −1_n  0 )

which corresponds to a state vector of the form ψ = (q_1, …, q_n, p_1, …, p_n)^T, or as γ_0 = diag(η, …, η) as in Equation (7). Therefore we are forced to make an arbitrary choice. (But we should keep in mind that other “systems” with a different choice are possible. If we cannot exclude their existence, then they should exist as well. With respect to the form of the SUM, we suggest that different “particle” types (different types of fermions, for instance) have a different SUM.) But in all cases the SUM must be skew-symmetric and have the following properties:

γ_0^T = γ_0^{−1} = −γ_0,    γ_0² = −1    (9)

which also implies that γ_0 is orthogonal and has unit determinant. Note also that all eigenvalues of γ_0 are purely imaginary. However, once we have chosen a specific form of γ_0, we have specified a set of canonical pairs within the state vector. This choice fixes the set of possible canonical (structure preserving) transformations.
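These properties of the SUM are easy to confirm numerically. The following sketch (plain numpy; the helper name sum_matrix and the choice of 3 DOF are our own illustrative choices) builds the block-diagonal γ_0 and checks skew-symmetry, orthogonality, the square, the determinant and the purely imaginary spectrum:

```python
import numpy as np

def sum_matrix(n):
    """Block-diagonal symplectic unit matrix (SUM): n copies of eta = [[0,1],[-1,0]]."""
    eta = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(n), eta)

g0 = sum_matrix(3)                      # 3 DOF -> a 6x6 matrix
I = np.eye(6)

is_skew       = bool(np.allclose(g0.T, -g0))       # gamma_0^T = -gamma_0
is_orthogonal = bool(np.allclose(g0.T @ g0, I))    # gamma_0^T = gamma_0^{-1}
squares_to_m1 = bool(np.allclose(g0 @ g0, -I))     # gamma_0^2 = -1
unit_det      = bool(np.isclose(np.linalg.det(g0), 1.0))
imag_spectrum = bool(np.allclose(np.linalg.eigvals(g0).real, 0.0))
```

All five checks hold for any number of blocks, in accordance with Equation (9).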
Now we write the Hamiltonian H as a Taylor series, remove the rule-violating constant term and cut it after the second-order term. We do not claim that higher terms may not appear, but we delay the discussion of higher orders to a later stage. All this is well-known in the theory of small oscillations. There is only one difference to the conventional treatment: We have no direct macroscopic interpretation for ψ, and following our first rule we have to write the second-order Hamiltonian in the most general form:

H = ½ ψ^T A ψ    (12)

where the matrix A is only restricted to be symmetric, as all non-symmetric terms would not contribute to H. Since it is not unlikely to find more than a single constant of motion in systems with multiple DOF, we distinguish systems with a singular matrix A from those with a positive or negative definite matrix A. Positive definite matrices are favoured in the sense that they allow us to identify H with the amount of a substance or an amount of energy. (It is immanent to the concept of substance that it is understood as something positive semidefinite.)
Before we try to interpret the elements of A, we will explore some general algebraic properties of the Hamiltonian formalism. If we plug Equation (12) into (3), then the equations of motion can be written in the general form

ψ̇ = γ_0 A ψ ≡ F ψ    (13)

The matrix F is the product of the symmetric (positive semi-definite) matrix A and the skew-symmetric matrix γ_0. As known from linear algebra, the trace of such products is zero:

Tr(F) = Tr(γ_0 A) = 0    (14)

Pure harmonic oscillation of ψ is described by matrices F with purely imaginary eigenvalues, and those are the only stable solutions [12]. Note that Equation (13) may represent a tremendous number of different types of systems: all linearly coupled systems in any dimension, chains or d-dimensional lattices of linearly coupled oscillators, and wave propagation. (However, the linear approximation does not allow for the description of the transport of heat.)
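Both statements, the vanishing trace and the purely imaginary (stable) spectrum for positive definite A, can be checked numerically (a numpy sketch; the matrix names follow the text, the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
eta = np.array([[0.0, 1.0], [-1.0, 0.0]])
g0 = np.kron(np.eye(n), eta)           # the SUM of Equation (7)

M = rng.normal(size=(2 * n, 2 * n))
A = M @ M.T                            # random symmetric, positive (semi-)definite matrix
F = g0 @ A                             # the matrix of Equation (13)

trace_F = np.trace(F)                  # zero: product of a symmetric and a skew matrix
eig_real = np.linalg.eigvals(F).real   # ~zero: purely imaginary spectrum, stable oscillation
```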
One quickly derives from the properties of γ_0 and A that

F^T = −A γ_0 = γ_0 F γ_0    (15)

Since any square matrix can be written as the sum of a symmetric and a skew-symmetric matrix, it is natural to also consider the properties of products of γ_0 with a skew-symmetric real square matrix B. If C = γ_0 B, then

C^T = B γ_0 = −γ_0 C γ_0    (16)

Symmetric 2n × 2n matrices contain n (2n + 1) different matrix elements and skew-symmetric ones n (2n − 1) elements, so that there are ν_s = n (2n + 1) linearly independent matrix elements in matrices of type F and ν_c = n (2n − 1) matrix elements in matrices of type C, with

ν_s + ν_c = (2n)²    (17)
In the theory of linear Hamiltonian dynamics, matrices of the form of F are known as “Hamiltonian” or “infinitesimal symplectic” and those of the form of C as “skew-Hamiltonian” matrices. This convention is a bit odd, as F does not appear in the Hamiltonian and it is in general not symplectic. Furthermore, the term “Hamiltonian matrix” has a different meaning in quantum mechanics, in close analogy to A. But it is known that this type of matrix is closely connected to symplectic matrices, as every symplectic matrix is a matrix exponential of a matrix of type F [12]. We consider the matrices defined by Equations (15) and (16) as too important and fundamental to have no meaningful and unique names: Therefore we speak of a symplex (plural symplices) if a matrix holds Equation (15), and of a cosymplex if it holds Equation (16).
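A minimal numerical check of these definitions (numpy; the predicate names are ours, and we use the criterion S^T γ_0 + γ_0 S = 0 for a symplex and C^T γ_0 − γ_0 C = 0 for a cosymplex, which is equivalent to Equations (15) and (16) since γ_0² = −1):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
eta = np.array([[0.0, 1.0], [-1.0, 0.0]])
g0 = np.kron(np.eye(n), eta)

def is_symplex(S):
    """S = g0 * (symmetric matrix)  <=>  S^T g0 + g0 S = 0."""
    return bool(np.allclose(S.T @ g0 + g0 @ S, 0.0))

def is_cosymplex(C):
    """C = g0 * (skew-symmetric matrix)  <=>  C^T g0 - g0 C = 0."""
    return bool(np.allclose(C.T @ g0 - g0 @ C, 0.0))

M = rng.normal(size=(2 * n, 2 * n))
A = (M + M.T) / 2                       # symmetric part of a random matrix
B = (M - M.T) / 2                       # skew-symmetric part

symplex_ok   = is_symplex(g0 @ A)       # product with a symmetric matrix
cosymplex_ok = is_cosymplex(g0 @ B)     # product with a skew-symmetric matrix
g0_symplex   = is_symplex(g0)           # g0 itself is a symplex
```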
3.1. Symplectic Motion and Second Moments
So what is a symplectic matrix anyway? The concept of symplectic transformations is a specific formulation of the theory of canonical transformations. Suppose we define a new state vector (or new coordinates) φ(ψ), with the additional requirement that the transformation is reversible. Then the Jacobian matrix of the transformation is given by

M_{ij} = ∂φ_i / ∂ψ_j

and the transformation is said to be symplectic if the Jacobian matrix holds [12]

M γ_0 M^T = γ_0    (20)
Let us see what this implies in the linear case φ = M ψ:

φ̇ = M ψ̇ = M F ψ = (M F M^{−1}) φ ≡ F̃ φ

and, by the use of Equation (20), one finds that F̃ = M F M^{−1} is still a symplex:

F̃^T = M^{−T} F^T M^T = γ_0 F̃ γ_0

Hence a symplectic transformation is first of all a similarity transformation, but secondly, it preserves the structure of all involved equations. Therefore the transformation is said to be canonical or structure preserving. The distinction between canonical and non-canonical transformations can therefore be traced back to the skew-symmetry of γ_0 and the symmetry of A, both of them consequences of the rules of our physics modeling game.
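A numerical sketch of this structure preservation (numpy; expm_series is our own plain helper for the matrix exponential, adequate for small matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
eta = np.array([[0.0, 1.0], [-1.0, 0.0]])
g0 = np.kron(np.eye(n), eta)

def expm_series(X, terms=60):
    """Plain Taylor series for the matrix exponential (fine for these small matrices)."""
    out = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

def random_symplex():
    R = rng.normal(size=(2 * n, 2 * n))
    return g0 @ (R + R.T)               # symplex = g0 times a symmetric matrix

F = random_symplex()
M = expm_series(0.1 * F)                # a symplectic transfer matrix M = exp(F t)

is_symplectic = bool(np.allclose(M.T @ g0 @ M, g0))       # cf. Equation (20)
S = random_symplex()
S_new = M @ S @ np.linalg.inv(M)                          # similarity-transformed symplex
stays_symplex = bool(np.allclose(S_new.T @ g0 + g0 @ S_new, 0.0))
```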
Recall that we argued that the matrix A should be symmetric because skew-symmetric terms do not contribute to the Hamiltonian. Let us have a closer look at what this means. Consider the matrix of second moments Σ that can be built from the variables ψ:

Σ ≡ ⟨ψ ψ^T⟩

in which the angles indicate some (yet unspecified) sort of average. The equation of motion of this matrix is given by

Σ̇ = ⟨ψ̇ ψ^T⟩ + ⟨ψ ψ̇^T⟩ = F Σ + Σ F^T

Now, as long as F does not depend on ψ, we obtain

Ṡ = F S − S F    (25)

where we defined the new matrix S ≡ Σ γ_0. For completeness we introduce the “adjunct” spinor ψ̄ ≡ ψ^T γ_0, so that we may write

S = ⟨ψ ψ̄⟩

Note that S is also a symplex. The matrix Σ (i.e., all second moments) is constant iff F and S commute.
Now we define an observable to be an operator O with a (potentially) non-vanishing expectation value, defined by:

⟨O⟩ ≡ ⟨ψ̄ O ψ⟩ = ⟨ψ^T γ_0 O ψ⟩

Thus, if the product γ_0 O is not skew-symmetric, i.e., if O contains a product of γ_0 with a symmetric matrix, then the expectation value is potentially non-zero. This means that only the symplex-part of an operator is “observable”, while cosymplices yield a vanishing expectation value. Hence Equation (25) delivers the blueprint for the general definition of observables. Furthermore, we find in Equation (25) the constituting equation for Lax pairs [13]. Peter Lax has shown that for pairs of operators F and S that obey Equation (25) there are the following constants of motion

Tr(S^k) = const    (29)

for arbitrary integer k. Since S is a symplex and therefore by definition the product of a symmetric matrix and the skew-symmetric γ_0, Equation (29) is always zero and hence trivially true for k = 1. The same is true for any odd power of S, as it can easily be shown that any odd power of a symplex is again a symplex (see Equation (35)), so that the only non-trivial general constants of motion correspond to even powers of S, which implies that all observables are functions of even powers of the fundamental variables.
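This can be illustrated with a small deterministic example (numpy; the diagonal matrix A is an arbitrary choice of ours): the odd-power traces of a symplex vanish identically, while the even powers yield non-trivial invariants:

```python
import numpy as np

eta = np.array([[0.0, 1.0], [-1.0, 0.0]])
g0 = np.kron(np.eye(2), eta)                     # SUM for n = 2 DOF

A = np.diag([1.0, 2.0, 3.0, 4.0])                # a simple symmetric matrix
S = g0 @ A                                       # the corresponding symplex

tr_S1 = np.trace(S)                              # 0: a symplex is traceless
tr_S3 = np.trace(np.linalg.matrix_power(S, 3))   # 0: odd power of a symplex is a symplex
tr_S2 = np.trace(S @ S)                          # -> -28.0, a non-trivial even-power invariant
```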
To see the validity for even k, we have to consider the general algebraic properties of the trace operator. Let λ be an arbitrary real constant and τ be a real parameter; then

Tr(λ A) = λ Tr(A)
Tr(A + B) = Tr(A) + Tr(B)
d/dτ Tr(A(τ)) = Tr(dA/dτ)
Tr(A B) = Tr(B A)    (30)

It follows that

d/dτ Tr(S^k) = k Tr(S^{k−1} Ṡ)    (31)

From Equation (31) it follows with Equation (25) that

d/dτ Tr(S^k) = k Tr(S^{k−1} F S − S^k F) = k (Tr(S^k F) − Tr(S^k F)) = 0    (32)

Remark: This conclusion is not limited to symplices. However, for single spinors ψ and the corresponding second moments S = ψ ψ^T γ_0 we find:

Tr(S^k) = 0    (33)

since each single factor ψ^T γ_0 ψ vanishes due to the skew-symmetry of γ_0. Therefore the constants of motion as derived from Equation (29) are non-zero only for even k, and only after averaging over some kind of distribution, such that Σ has non-zero eigenvalues as in Equation (34) below.
The symmetric matrix Σ (and with it S = Σ γ_0) is positive definite if it can be written as a product

Σ = Ψ Ψ^T    (34)

where Ψ is a non-singular matrix built from a set of state vectors. For a single DOF, the columns of Ψ may be chosen as two “orthogonal” column-vectors ψ, so that the average over them gives a non-zero constant of motion via Lax pairs.
These findings have some consequences for the modeling game. The first is that we have found constants of motion, though some of them are physically meaningful only for a non-vanishing volume in phase space, i.e., by the combination of several spinors ψ. Secondly, a stable state implies that the matrix operators forming the Lax pair have the same eigenvectors: a density distribution in phase space (as described by the matrix of second moments) is stable if it is adapted or matched to the symplex F. The phase space distribution as represented by S and the driving terms (the components of F) must fit to each other in order to obtain a stable “eigenstate”. But we also found a clear reason why generators (of symplectic transformations) are always observables and vice versa: Both the generators and the observables are symplices of the same type. There is a one-to-one correspondence between them, not only as generators of infinitesimal transformations, but also algebraically.
Furthermore, we may conclude that (anti-)commutators are an essential part of “classical” Hamiltonian mechanics, and secondly that the matrix S has the desired properties of observables: Though S is based on continuously varying fundamental variables, it is constant if it commutes with F, and it varies otherwise. (In accelerator physics, Equation (25) describes the envelope of a beam in linear optics. The matrix of second moments Σ is a covariance matrix, and therefore our modeling game is connected to probability theory exactly when observables are introduced.)
Hence it appears sensible to take a closer look at the (anti-)commutation relations of (co-)symplices, and though the definitions of (co-)symplices are quite plain, the (anti-)commutator algebra that emerges from them has a surprisingly rich structure. If we denote symplices by S_i and cosymplices by C_i, then the following rules can quickly be derived:

S_1 S_2 − S_2 S_1 → symplex
S_1 S_2 + S_2 S_1 → cosymplex
C_1 C_2 − C_2 C_1 → symplex
C_1 C_2 + C_2 C_1 → cosymplex
S_1 C_1 − C_1 S_1 → cosymplex
S_1 C_1 + C_1 S_1 → symplex    (35)

This Hamiltonian algebra of (anti-)commutators is of fundamental importance insofar as we derived it in a few steps from first principles (i.e., the rules of the game), and it defines the structure of Hamiltonian dynamics in phase space. The distinction between symplices and cosymplices is also the distinction between observables and non-observables. It is the basis of essential parts of the following considerations.
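These six rules can be confirmed numerically (numpy sketch; the helper names and the random seed are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2
eta = np.array([[0.0, 1.0], [-1.0, 0.0]])
g0 = np.kron(np.eye(n), eta)

def symplex():
    R = rng.normal(size=(2 * n, 2 * n))
    return g0 @ (R + R.T)               # g0 times a symmetric matrix

def cosymplex():
    R = rng.normal(size=(2 * n, 2 * n))
    return g0 @ (R - R.T)               # g0 times a skew-symmetric matrix

def is_symplex(X):
    return bool(np.allclose(X.T @ g0 + g0 @ X, 0.0))

def is_cosymplex(X):
    return bool(np.allclose(X.T @ g0 - g0 @ X, 0.0))

S1, S2 = symplex(), symplex()
C1, C2 = cosymplex(), cosymplex()

rules_hold = all([
    is_symplex(S1 @ S2 - S2 @ S1),      # [S, S] -> symplex
    is_cosymplex(S1 @ S2 + S2 @ S1),    # {S, S} -> cosymplex
    is_symplex(C1 @ C2 - C2 @ C1),      # [C, C] -> symplex
    is_cosymplex(C1 @ C2 + C2 @ C1),    # {C, C} -> cosymplex
    is_cosymplex(S1 @ C1 - C1 @ S1),    # [S, C] -> cosymplex
    is_symplex(S1 @ C1 + C1 @ S1),      # {S, C} -> symplex
])
```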
4. Geometry from Hamiltonian Motion
In the following we will demonstrate the geometrical content of the algebra of (co-)symplices (Equation (35)), which emerges for specific numbers of DOF n. As shown above, pairs of canonical variables (DOF) are a direct consequence of the abstract rules of our game. Though single DOF are poor “objects”, it is remarkable to find physical structures emerging from our abstract rules at all. This suggests that there might be more structure to discover when n DOF are combined, for instance geometrical structures. The following considerations obey the rules of our game, since they are based purely on symmetry considerations like those that guided us towards Hamiltonian dynamics. The objects of interest in our algebraic interpretation of Hamiltonian dynamics are matrices. The first matrix (besides the unit matrix) with a specific form that we found is γ_0. It is a symplex:

γ_0^T = γ_0 γ_0 γ_0 = −γ_0

According to Equation (17) there are ν_s = n (2n + 1) symplices. Hence it is natural to ask whether other symplices with similar properties like γ_0 exist, and if so, what the relations between these matrices are. According to Equation (35), the commutator of two symplices is again a symplex, while the anti-commutator is a cosymplex. As we are primarily interested in observables and components of the Hamiltonian (i.e., in symplices), we look for further symplices that anti-commute with γ_0 and with each other. In this case, the product of two such matrices is also a symplex, i.e., another potential contribution to the general Hamiltonian matrix F.
Assume we had a set of N mutually anti-commuting orthogonal symplices γ_0, γ_1, …, γ_{N−1}, with γ_0 as defined above; then a Hamiltonian matrix might look like

F = Σ_k f_k γ_k

The γ_k (k > 0) are symplices,

γ_k^T = γ_0 γ_k γ_0

and anti-commute with γ_0:

γ_k γ_0 = −γ_0 γ_k

Multiplication from the left with γ_0 gives:

γ_0 γ_k γ_0 = −γ_0 γ_0 γ_k = γ_k = γ_k^T

so that all other possible symplices γ_k, which anticommute with γ_0, are symmetric and square to +1. This is an important finding for what follows, as it can (within our game) be interpreted as a classical proof of the uniqueness of the (observable) time-dimension: Time is one-dimensional, as there is no other skew-symmetric symplex that anti-commutes with γ_0. We can choose different forms for γ_0, but the emerging algebra allows for no second “direction of time”.
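For the Dirac case (n = 2) these statements can be made explicit with real 4 × 4 matrices. The representation below is our own construction from Kronecker products (other, equivalent choices exist); it exhibits exactly one skew-symmetric generator squaring to −1 and three symmetric ones squaring to +1:

```python
import numpy as np

I2  = np.eye(2)
sx  = np.array([[0.0, 1.0], [1.0, 0.0]])    # symmetric, squares to +1
sz  = np.array([[1.0, 0.0], [0.0, -1.0]])   # symmetric, squares to +1
eta = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew,      squares to -1

g0 = np.kron(eta, sx)    # skew-symmetric,  g0^2 = -1  ("time" direction)
g1 = np.kron(eta, eta)   # symmetric,       g1^2 = +1
g2 = np.kron(sx, I2)     # symmetric,       g2^2 = +1
g3 = np.kron(sz, I2)     # symmetric,       g3^2 = +1
gammas = [g0, g1, g2, g3]

# pairwise anti-commutation and the Minkowski type signature (-1, +1, +1, +1)
anticommute = all(np.allclose(gammas[i] @ gammas[j] + gammas[j] @ gammas[i], 0.0)
                  for i in range(4) for j in range(i + 1, 4))
signature = [float((g @ g)[0, 0]) for g in gammas]          # each g^2 = +-1 times identity
only_g0_skew = [bool(np.allclose(g.T, -g)) for g in gammas]  # [True, False, False, False]
```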
The second-order derivative of ψ is (for constant F) given by

ψ̈ = F² ψ

which yields:

F² = Σ_k f_k² γ_k² + Σ_{j<k} f_j f_k (γ_j γ_k + γ_k γ_j)

Since the anti-commutator on the right vanishes by definition, we are left with:

F² = (−f_0² + f_1² + … + f_{N−1}²) 1    (41)

Thus, we find a set of (coupled) oscillators if F² is a negative multiple of the unit matrix, i.e., if

f_0² − f_1² − … − f_{N−1}² > 0

Given such matrix systems exist, they generate a Minkowski type “metric” as in Equation (41). (Indeed, it appears that Dirac derived his system of matrices from exactly this requirement [14].) The appearance of this metric shows how a Minkowski type geometry emerges from the driving terms of oscillatory motion. This is indeed possible, at least for symplices of certain dimensions, as we will show below. The first thing needed is some kind of measure to define the length of a “vector”. Since the length is a measure that is invariant under certain transformations, specifically under rotations, we prefer to use a quantity with certain invariance properties to define a length. The only one we have at hand is given by Equation (29). Accordingly, we define the (squared) length of a matrix representing a “vector” by

‖A‖² ≡ Tr(A²) / Tr(1)    (44)

The division by Tr(1) is required to make the unit matrix have unit norm. Besides the norm we need a scalar product,
i.e., a definition of orthogonality. Consider the Pythagorean theorem, which says that two vectors a and b are orthogonal iff

‖a‖² + ‖b‖² = ‖a + b‖²    (45)

The general expression is

‖a + b‖² = ‖a‖² + ‖b‖² + 2 a · b    (46)

The equations are equal iff a · b = 0. Hence the Pythagorean theorem yields a reasonable definition of orthogonality. However, we had no method yet to define vectors within our game. Using matrices A and B to represent “vectors”, we may then write

‖A + B‖² = ‖A‖² + ‖B‖² + Tr(A B + B A) / Tr(1)

If we compare this to Equations (45) and (46), respectively, then the obvious definition of the inner product is given by the anticommutator:

A · B ≡ ½ (A B + B A)

Since the anticommutator does in general not yield a scalar, we have to distinguish between inner product and scalar product:

(A · B)_S ≡ Tr(A B + B A) / (2 Tr(1))

where we indicate the scalar part by the subscript “S”. Accordingly, we define the exterior product by the commutator:

A ∧ B ≡ ½ (A B − B A)
Now that we have defined the products, we should come back to the unit vectors. The only “unit vector” that we explicitly defined so far is the symplectic unit matrix γ_0. If it represents anything at all, then it must be “the direction” of change, the direction of evolution in time, as it was derived in this context and is the only “dimension” found so far. As we have already shown, all other unit vectors γ_k must be symmetric if they are symplices. And vice versa: If γ_k is symmetric and anti-commutes with γ_0, then it is a symplex. As only symplices represent observables and are generators of symplectic transformations, we can have only a single “time” direction γ_0 and a yet unknown number of symmetric unit vectors. (Thus we found a simple answer to the question why only a single time direction is possible, a question also debated in Reference [15].) However, for n > 1 there might be different equivalent choices of γ_0. Whatever the specific form of γ_0 is, we will show that, in combination with some general requirements like completeness, normalizability and observability, it determines the structure of the complete algebra. Though we do not yet know how many symmetric and pairwise anti-commuting unit vectors γ_k exist, we have to interpret them as unit vectors in “spatial directions”. (The meaning of what a spatial direction is, especially in contrast to the direction of time γ_0, has to be derived from the form of the emerging equations, of course. As meaning follows form, we do not define space-time, but we identify structures that fit to the known concept of space-time.) Of course unit vectors must have unit length, so that we have to demand that

‖γ_k‖² = ±1

Note that (since our norm is not positive definite) we explicitly allow for unit vectors with negative “length”, as we find it for γ_0. Note furthermore that all skew-symmetric unit vectors square to −1, while the symmetric ones square to +1 [16].
Indeed, systems of N = p + q anti-commuting real matrices are known as real representations of Clifford algebras Cl(p, q). The index p is the number of unit elements (“vectors”) that square to +1 and q is the number of unit vectors that square to −1. Clifford algebras are not necessarily connected to Hamiltonian motion; rather, they can be regarded as purely mathematical “objects”. They can be defined without reference to matrices whatsoever. Hence, in mathematics, sets of matrices are merely “representations” of Clifford algebras. But our game is about physics, and due to the proven one-dimensionality of time we concentrate on Clifford algebras Cl(N − 1, 1), which link coupled harmonic oscillators (CHOs) in the described way with the generators of a Minkowski type metric. Further below it will turn out that the representation by matrices is, within the game, indeed helpful, since it leads to an overlap of certain symmetry structures. The unit elements (or unit “vectors”) of a Clifford algebra, γ_0, γ_1, …, γ_{N−1}, are called the generators of the Clifford algebra. They pairwise anticommute and they square to ±1. (The role as generator of the Clifford algebra should not be confused with the role as generator of symplectic transformations, i.e., as symplex. Though we are especially interested in Clifford algebras in which all generators are symplices, not all symplices are generators of the Clifford algebra. Bi-vectors, for instance, are symplices, but not generators of the Clifford algebra.) Since the inverse of the unit elements of a Clifford algebra must be unique, the products of different unit vectors form new elements, and all possible products including the unit matrix form a group. There are (N choose k) possible combinations (products without repetition) of k elements from a set of N generators. We therefore find N (N − 1)/2 bi-vectors, which are products of two generators, (N choose 3) tri-vectors, and so on. The product of all N basic matrices is called the pseudoscalar. The total number of all k-vectors then is (we identify the 0-vector with the unit matrix):

Σ_{k=0}^{N} (N choose k) = 2^N
If we desire to construct a complete system, then the number of variables of the Clifford algebra has to match the number of variables of the used matrix system:

2^N = (2n)²    (53)

Note that the root of this equation gives 2n = 2^{N/2}, an integer only if N is even. Hence all Hamiltonian Clifford algebras have an even dimension. Of course, not all elements of the Clifford algebra may be symplices. The unit matrix (for instance) is a cosymplex. Consider the Clifford algebra Cl(1, 1) with N = 2, which has two generators, say γ_0 with γ_0² = −1 and γ_1 with γ_1² = +1. Since these two anticommute (by definition of the Clifford algebra), we find (besides the unit matrix) a fourth matrix formed by the product γ_0 γ_1.
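The counting argument behind Equation (53) can be checked in a few lines (Python; the names are ours):

```python
import math

def kvector_counts(N):
    """Number of k-vectors (products of k distinct generators) for N generators."""
    return [math.comb(N, k) for k in range(N + 1)]

counts_dirac = kvector_counts(4)      # [1, 4, 6, 4, 1]: scalar, vectors, bi-vectors, ...
total_dirac = sum(counts_dirac)       # 2^4 = 16 = (2n)^2 with 2n = 4

# Equation (53): 2^N = (2n)^2 forces N to be even, with state-vector dimension 2n = 2^(N/2)
dims = {N: int(round(math.sqrt(2 ** N))) for N in (2, 4, 10, 12)}
```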
The completeness of the Clifford algebras as we use them here implies that any 2n × 2n matrix M with (2n)² = 2^N can be written as a linear combination of all elements of the Clifford algebra:

M = Σ_k m_k Γ_k

where the Γ_k run over all elements (the unit matrix, the generators, the bi-vectors, and so on). The coefficients can be computed from the scalar product of the unit vectors with the matrix M:

m_k = s_k (Γ_k · M)_S

Recall that skew-symmetric Γ_k have a negative length, and therefore we included a factor s_k = ‖Γ_k‖² = ±1, which represents the “signature” of Γ_k, in order to get the correct sign of the coefficients m_k.
Can we derive more properties of the constructable space-times? One restriction results from representation theory: A theorem from the theory of Clifford algebras states that Cl(p, q) has a representation by real matrices if (and only if) [17]

(p − q) mod 8 ∈ {0, 2}    (57)

The additional requirement that all generators must be symplices, so that q = 1 and p = N − 1, then restricts N to

N mod 8 ∈ {2, 4}    (58)

Hence the only matrix systems that have the required symmetry properties within our modeling game are those that represent Clifford algebras with the dimensions N = 2, 4, 10, 12, 18, 20, 26, 28 and so on, i.e., Cl(1, 1), Cl(3, 1), Cl(9, 1), Cl(11, 1), Cl(17, 1), Cl(19, 1), Cl(25, 1), Cl(27, 1). These correspond to matrix representations of size 2 × 2, 4 × 4, 32 × 32, 64 × 64, 512 × 512 and so on. The first of them is called the Pauli algebra, the second one is the Dirac algebra. Do these two have special properties that the higher-dimensional algebras do not have? Yes, indeed.
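Assuming the mod-8 criterion quoted above, the list of admissible dimensions is easily generated (a short Python sketch with our own variable names):

```python
# Allowed Clifford algebras Cl(p, q) with a single "time" direction (q = 1):
# a real matrix representation exists iff (p - q) mod 8 is 0 or 2  [17].
allowed_N = [N for N in range(2, 30)
             if (N - 2) % 8 in (0, 2)]          # p = N - 1, q = 1, so p - q = N - 2
matrix_sizes = [2 ** (N // 2) for N in allowed_N]   # 2n = 2^(N/2), all allowed N are even
```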
Firstly, since dynamics is based on canonical pairs, the real Pauli algebra describes the motion of a single DOF, and the Dirac algebra describes the simplest system with interaction between two DOF. This suggests the interpretation that, within our game, objects (Dirac particles) are not located “within space-time”, since we did not define space at all up to this point, but that space-time can be modeled as an emergent phenomenon. Space-time is in between particles.
Secondly, if we equate the number of fundamental variables (2n) of the oscillator phase space with the dimension N of the Clifford space, then Equation (53) leads to

2^N = N²    (59)

which allows for N = 2 and N = 4 only. But why should it be meaningful to assume 2n = N? The reason is quite simple: If 2n > N, as for all higher-dimensional state vectors, there are fewer generators of the algebra than independent variables. This discrepancy increases with n. Hence the described objects cannot be pure vectors anymore, but must contain tensor-type components (k-vectors). (For a deeper discussion of the dimensionality of space-time, see Reference [16] and references therein.)
But before we describe a formal way to interpret Equation (59), let us first investigate the physical and geometrical implications of the game as described so far.
Matrix Exponentials
We said that the unit vectors γ_0 and γ_k are symplices and therefore generators of symplectic transformations. All symplectic matrices are matrix exponentials of symplices [12]. The computation of matrix exponentials is in the general case non-trivial. However, in the special case of matrices γ that square to ±1 (e.g., along the “axes” of the coordinate system), the exponentials are readily evaluated. For γ² = −1 it follows that

exp(γ τ) = 1 cos(τ) + γ sin(τ)

and for γ² = +1:

exp(γ τ) = 1 cosh(τ) + γ sinh(τ)

We can identify skew-symmetric generators with rotations and (as we will show in more detail below) symmetric generators with boosts.
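Both cases are easy to verify numerically (numpy; expm_series is our own plain Taylor-series helper for the matrix exponential):

```python
import numpy as np

def expm_series(X, terms=40):
    """Taylor series for the matrix exponential -- adequate for these small matrices."""
    out = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

eta = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric, eta^2 = -1
sx  = np.array([[0.0, 1.0], [1.0, 0.0]])    # symmetric,      sx^2  = +1
tau = 0.7

rotation = expm_series(eta * tau)           # cos/sin form: a rotation
boost    = expm_series(sx * tau)            # cosh/sinh form: a boost

rot_ok   = bool(np.allclose(rotation, np.cos(tau) * np.eye(2) + np.sin(tau) * eta))
boost_ok = bool(np.allclose(boost, np.cosh(tau) * np.eye(2) + np.sinh(tau) * sx))
```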
The (hyperbolic) sine/cosine structure of symplectic matrices is not limited to the generators but is a general property of the matrix exponentials of the symplex
(These properties are the main motivation to choose the nomenclature of “symplex” and “cosymplex”.):
where the (co-) symplex
(
) is given by:
since any linear combination of odd powers of a symplex is again a symplex, and the sum of all even powers is a cosymplex. The inverse transfer matrix
is given by:
The physical meaning of the matrix exponential results from Equation (13), which states that (for constant symplices
) the solutions are given by the matrix exponential of
:
A symplectic transformation can be regarded as the result of a
possible evolution in time. There is no proof that non-symplectic processes are forbidden by nature, but it holds that
only symplectic transformations are
structure preserving. Non-symplectic transformations are then
structure defining. Both play a fundamental role in the physics of our model reality,
because fundamental particles are-according to our model-represented by dynamical structures. Therefore symplectic transformations describe those processes and interactions, in which structure is preserved,
i.e., in which the type of the particle is not changed. The fundamental variables are just “carriers” of the dynamical structures. Non-symplectic transformations can be used to transform the structure. This could also be described by a rotation of the direction of time. Another interpretation is that of a gauge-transformation [
18].
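The claim that exponentials of symplices are structure preserving can be illustrated numerically: the matrix exponential of any symplex preserves the symplectic form. A NumPy sketch, where the particular real 4 × 4 form of the symplectic unit is an illustrative choice (any skew-symmetric matrix squaring to −1 would do):

```python
import numpy as np

# a real 4x4 "symplectic unit" gamma_0: skew-symmetric and squaring to -1
eps = np.array([[0., 1.], [-1., 0.]])
g0 = np.kron(eps, np.eye(2))

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                    # symmetric: a second-order Hamiltonian
F = g0 @ A                           # a symplex: F.T g0 + g0 F = 0
assert np.allclose(F.T @ g0 + g0 @ F, 0)

# truncated power series for the matrix exponential M = exp(0.3 * F)
M, term = np.eye(4), np.eye(4)
for k in range(1, 60):
    term = term @ (0.3 * F) / k
    M = M + term

# M preserves the symplectic structure: M.T g0 M = g0
assert np.allclose(M.T @ g0 @ M, g0)
```

Since the symplex was built from a random symmetric matrix, the check holds for generic second-order Hamiltonians, not a hand-picked case.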
6. Electromechanical Equivalence (EMEQ)
The number and type of symplices within the Dirac algebra (80) suggests the following vector notation for the coefficients [
32,
33] of the observables:
where the “clustering” of the coefficients into 3-dimensional vectors will be explained in the following. The first four elements
and
are the coefficients of the generators of the Clifford algebra and the remaining symplices are 3 symmetric bi-vectors
and skew-symmetric bi-vectors
. As explained above, the matrix exponentials of pure Clifford elements are readily evaluated (Equations (61) and (62)). The effect of a symplectic similarity transformation on a symplex
can then be computed component-wise as in the following case of a rotation (using Equation (81)):
Since all Clifford elements either commute or anti-commute with each other, we have two possible solutions. The first (
and
commute) yields with
:
but if (
and
anti-commute) we obtain a rotation:
For
(
) for instance we find:
which is formally equivalent to a rotation of
about the “z-axis”. If the generator
of the transformation is symmetric, we obtain:
so that (if
and
commute):
and if
and
anticommute:
which is equivalent to a boost when the following parametrization of “rapidity”
τ is used:
A complete survey of these transformations and the (anti-) commutator tables can be found in Reference [
32] (This formalism corresponds exactly to the relativistic invariance of a Dirac spinor in QED as described for instance in Reference [
34], although the Dirac theory uses complex numbers and a different sign-convention for the metric tensor). The “spatial” rotations are generated by the bi-vectors associated with
and Lorentz boosts by the components associated with
. The remaining 4 generators of symplectic transformations correspond to
and
. They were named
phase-rotation (generated by
) and
phase-boosts (generated by
) and have been used for instance for symplectic decoupling as described in Reference [
33].
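The component-wise effect of these similarity transformations can be verified numerically. The sketch below assumes one possible real representation of the Dirac generators (only the (anti-)commutation relations matter; any set with γ0² = −1, γk² = +1 and mutual anticommutation behaves identically). Note the doubled angle 2θ, characteristic of spinor-type transformations:

```python
import numpy as np

# one possible real representation of the generators (an assumption)
eps = np.array([[0., 1.], [-1., 0.]]); sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]]); I2 = np.eye(2)
g0, g1, g2, g3 = np.kron(eps, I2), np.kron(sx, sx), np.kron(sx, sz), np.kron(sz, I2)

def expm_series(M, terms=40):
    out, term = np.eye(4), np.eye(4)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

E, px, py, pz = 2.0, 0.5, -0.3, 0.8
F = E * g0 + px * g1 + py * g2 + pz * g3

theta = 0.4
G = g1 @ g2                                  # skew-symmetric bi-vector (rotation)
Fp = expm_series(G * theta) @ F @ expm_series(-G * theta)

# project the transformed coefficients back out (Tr(gk gk) = 4, Tr(g0 g0) = -4)
Ep  = -np.trace(Fp @ g0) / 4
pxp =  np.trace(Fp @ g1) / 4
pyp =  np.trace(Fp @ g2) / 4
pzp =  np.trace(Fp @ g3) / 4

c, s = np.cos(2 * theta), np.sin(2 * theta)  # note the doubled angle
assert np.allclose([Ep, pzp], [E, pz])       # components commuting with G: untouched
assert np.allclose([pxp, pyp], [px * c + py * s, py * c - px * s])  # rotated pair
```

Components whose generators commute with G are invariant, while the anticommuting pair is rotated, exactly the two cases distinguished in the text.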
It is natural (and already suggested by our notation) to consider the possibility that the EMEQ (Equation (82)) allows us to model a relativistic particle, represented by energy and momentum, either in an external electromagnetic field given by the electric and magnetic field vectors or, alternatively, in an accelerating and/or rotating reference frame, where these elements correspond to the axes of acceleration and rotation, respectively. We assumed that all components of the state vector ψ are equivalent in meaning and unit. Though we found that the state vector is formally composed of canonical pairs, the units are unchanged and identical for all elements of ψ. From Equation (13) we take that the symplex (and hence also the Hamiltonian) has the unit of a frequency. If the Hamiltonian is supposed to represent energy, then the components of ψ have the unit of the square root of action.
If the coefficients are supposed to represent the electromagnetic field, then we need to express these fields in the unit of frequency. This can be done, but it requires natural conversion factors like ℏ, the charge e, the velocity c and a mass, for instance the electron mass. The magnetic field (for instance) is related to a “cyclotron frequency” by ω = e B/m.
However, according to the rules of the game, the distinction between particle properties and “external” fields requires a reason, an explanation, especially as it is physically meaningless for macroscopic coupled oscillators. In References [
32,
33] this nomenclature was used in a merely
formal way, namely to find a descriptive scheme to order the symplectic generators, so to speak an
equivalent circuit to describe the general possible coupling terms for two-dimensional coupled linear optics as required for the description of charged particle beams.
Here we play the reversed modeling game: Instead of using the EMEQ as an equivalent circuit to describe ensembles of oscillators, we now use ensembles of oscillators as an equivalent circuit to describe point particles. The motivation for Equation (82) is nevertheless similar, i.e., it follows from the formal structure of the Dirac Clifford algebra. The grouping of the coefficients comes along with the number of vector- and bi-vector-elements, 4 and 6, respectively. The second criterion is to distinguish between generators of rotations and boosts, i.e., between symmetric and skew-symmetric symplices, which separates energy from momentum and electric from magnetic elements. Third, we note that even elements (even k-vectors are those with k = 2m, where m is a natural number), i.e., scalars, bi-vectors, 4-vectors etc., of even-dimensional Clifford algebras form a sub-algebra. This means that we can generate the complete Clifford algebra from the vector-elements by matrix multiplication (this is why we call them generators), but we cannot generate vectors from bi-vectors by multiplication. Therefore the vectors are the particles (which are understood as the sources of fields) and the bi-vectors are the fields, which are generated by the objects and influence their motion. The full Dirac symplex-algebra includes the description of a particle (vector) in a field (bi-vector). But why would the field be external? Simply because it is impossible to generate bi-vectors from a single vector-type object, since any single vector-type object squares to a scalar. Therefore, the fields must be the result of interaction with other particles and hence we call them “external”. This is in some way a “first-order” approach, since there might be higher order processes that we did not consider yet. But in the linear approach (i.e., for second-order Hamiltonians), this distinction is reasonable and hence a legitimate move in the game.
Besides the Hamiltonian structure (symplices
vs. co-symplices) and the Clifford algebraic structure (distinguishing vectors, bi-vectors, tri-vectors
etc.) there is a third essential symmetry, which is connected to the real matrix representation of the Dirac algebra and to the fact that it describes the general Hamiltonian motion of coupled oscillators: To distinguish the even from the odd elements with respect to the block-diagonal matrix structure. We used this property in Reference [
33] to develop a general geometrical decoupling algorithm (see also
Section 6.2).
Now it may appear that we are cheating somehow, as relativity is usually “derived” from the constancy of the speed of light, while in our modeling game we introduced neither spatial notions nor light at all. Instead we directly arrive at notions of quantum electrodynamics (QED). How can this be? The definition of “velocity” within wave mechanics usually involves the dispersion relation of waves,
i.e., the velocity of a wave packet is given by the group velocity
defined by
and the so-called phase velocity
defined by
It is then typically mentioned that the product of these two velocities is a constant, the square of the speed of light. By the use of the EMEQ and Equation (29), the eigenvalues of
can be written as:
Since symplectic transformations are similarity transformations, they do not alter the eigenvalues of the matrix
and since all possible evolutions in time (which can be described by the Hamiltonian) are symplectic transformations, the eigenvalues (of closed systems) are conserved. If we consider a “free particle”, then we obtain from Equation (94):
As we mentioned before, both energy and momentum have (within this game) the unit of frequencies. If we take into account that
is fixed, then the dispersion relation for “the energy”
is
which is indeed the correct relativistic dispersion. But how do we make the step from pure oscillations to
waves? (The question of whether quantum theory requires Planck’s constant
ℏ has been answered in the negative by John P. Ralston [
35]).
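The free-particle reasoning above can be checked numerically: the eigenvalues of the free-particle symplex give the relativistic dispersion, and the product of group and phase velocity is indeed constant. A NumPy sketch in units with c = 1; the real representation of the generators is an illustrative assumption:

```python
import numpy as np

# one possible real representation of the generators (an illustrative assumption)
eps = np.array([[0., 1.], [-1., 0.]]); sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]]); I2 = np.eye(2)
g0, g1, g2, g3 = np.kron(eps, I2), np.kron(sx, sx), np.kron(sx, sz), np.kron(sz, I2)

E, p = 1.3, np.array([0.3, -0.4, 0.5])
F = E * g0 + p[0] * g1 + p[1] * g2 + p[2] * g3   # "free particle" symplex

# F^2 = -(E^2 - p^2) 1, so the eigenvalues are +/- i m with m^2 = E^2 - p^2
m2 = E**2 - p @ p
assert np.allclose(F @ F, -m2 * np.eye(4))
eigs = np.linalg.eigvals(F)
assert np.allclose(np.abs(eigs.imag), np.sqrt(m2)) and np.allclose(eigs.real, 0)

# relativistic dispersion E(p) = sqrt(p^2 + m^2), and v_group * v_phase = const:
def E_of(pm): return np.sqrt(pm**2 + m2)
pm, h = np.sqrt(p @ p), 1e-6
v_group = (E_of(pm + h) - E_of(pm - h)) / (2 * h)   # dE/dp
v_phase = E_of(pm) / pm                              # E/p
assert np.isclose(v_group * v_phase, 1.0)
```

The eigenvalues are purely imaginary with modulus m, so they are invariant under all symplectic (similarity) transformations, as the text argues.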
6.1. Moments and The Fourier Transform
In the case of “classical” probability distribution functions (PDFs) ρ(x), we may use the Taylor coefficients of the characteristic function φ(t), which is the Fourier transform of ρ(x), at the origin. The k-th moment is then given by ⟨x^k⟩ = i^(−k) φ^(k)(0), where φ^(k) is the k-th derivative of φ.
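As a concrete illustration of this standard construction, the sketch below recovers the first two moments of a Gaussian from finite-difference derivatives of a grid-based characteristic function (the grid extent and step sizes are arbitrary choices):

```python
import numpy as np

# Gaussian PDF on a grid and its characteristic function phi(t) = E[exp(i t x)]
mu, sigma = 0.7, 1.5
x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 20001)
dx = x[1] - x[0]
rho = np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def phi(t):
    return np.sum(rho * np.exp(1j * t * x)) * dx

# moments from derivatives of phi at the origin (finite differences):
h = 1e-3
m1 = ((phi(h) - phi(-h)) / (2 * h)).imag             # <x>   = phi'(0) / i
m2 = -((phi(h) - 2 * phi(0) + phi(-h)) / h**2).real  # <x^2> = -phi''(0)

assert np.isclose(m1, mu, atol=1e-5)
assert np.isclose(m2, sigma**2 + mu**2, atol=1e-3)
```

For a Gaussian, ⟨x⟩ = μ and ⟨x²⟩ = σ² + μ², which is what the finite-difference derivatives reproduce.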
A similar method would be of interest for our modeling game. Since a (phase space-) density is positive definite, we can always take the square root of the density instead of the density itself: ψ = √ρ. The square root can also be defined to be a complex function, so that the density is ρ = |ψ|² and, if mathematically well-defined (convergent), we can also define the Fourier transform of the complex root, i.e., ψ̃(k) = (2π)^(−1/2) ∫ ψ(x) e^(−i k x) dx, and vice versa: ψ(x) = (2π)^(−1/2) ∫ ψ̃(k) e^(i k x) dk.
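Numerically, the round trip from a density to its complex root, through the Fourier transform and back, is straightforward; a NumPy sketch with a Gaussian density and an arbitrary (assumed) linear phase:

```python
import numpy as np

n = 256
x = np.linspace(-10, 10, n, endpoint=False)
rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # a positive density
S = 0.3 * x                                    # an arbitrary phase (illustrative)
psi = np.sqrt(rho) * np.exp(1j * S)            # complex root of the density

psi_k = np.fft.fft(psi)        # Fourier transform of the complex root
psi_back = np.fft.ifft(psi_k)  # and back again

assert np.allclose(psi_back, psi)
assert np.allclose(np.abs(psi_back)**2, rho)   # the density is recovered
```

Nothing is lost in the transform: the density is recovered exactly from |ψ|², while the phase carries the extra information that the density alone does not.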
In principle, we may
define the density not only by real and imaginary part, but by an arbitrary number of components. Thus, if we consider a four-component spinor, we may of course mathematically define its Fourier transform. But in order to see why this might be more than a mathematical “trick” and indeed
physically meaningful, we need to go back to the notions of classical statistical mechanics. Consider that we replace the single state vector by an “ensemble”, where we leave open the question of whether the ensemble should be understood as a single phase space trajectory, averaged over time, or as some (presumably large) number of different trajectories. It is well known that the phase space density
is stationary if it depends only on constants of motion, for instance if it depends only on the Hamiltonian itself. With the Hamiltonian of Equation (12), the density could for example be an exponentially decaying function of the Hamiltonian,
which corresponds to a multivariate Gaussian. But more important is the insight that the density exclusively depends on the second moments of the phase space variables as given by the Hamiltonian,
i.e., in case of a “free particle” it depends on
and
. And therefore we should be able to use energy and momentum as frequency
ω and wave-vector
k.
But there are more indications in our modeling game that suggest the use of a Fourier transform as we will show in the next section.
6.2. The Geometry of (De-)Coupling
In the following we give a (very) brief summary of Reference [
33]. As already mentioned, decoupling is meant, despite the use of the EMEQ, first of all in a purely technical-mathematical sense. Let us delay the question of whether the notions that we define in the following have any physical relevance. Here we refer first of all to block-diagonalization,
i.e., we treat the symplex
just as a “Hamiltonian” matrix. From the definition of the real Dirac matrices we obtain
in explicit
matrix form:
If we find a (sequence of) symplectic similarity transformations that would allow us to reduce the
-form to a block-diagonal form, then we would obtain two separate systems of size
and we could continue with the transformations of
Section 5.1.
Inspection of Equation (101) unveils that
is block-diagonal if the coefficients
,
,
and
vanish. Obviously this implies that
and
. Or vice versa, if we find a symplectic method that transforms into a system in which
and
, then we only need to apply appropriate rotations to achieve block-diagonal form. As shown in Reference [
33] this can be done in different ways, but in general it requires the use of the “phase rotation”
and “phase boosts”
. Within the conceptional framework of our game, the application of these transformations equals the use of “matter fields”. But furthermore, this shows that block-diagonalization has also geometric significance within the Dirac algebra and, with respect to the Fourier transformation, the requirement
indicates a divergence free magnetic field, as the replacement of
by
yields
. The additional requirement
also fits well with our physical picture of e.m. waves. Note furthermore that there is no analogous requirement to make
equal to zero. Thus (within this analogy) we
can accept .
But this is not everything to be taken from this method. If we analyze in more detail which expressions are
required to vanish and which may remain, then it appears that
is explicitly given by
That means that exactly those products have to vanish which yield
cosymplices. This can be interpreted via the structure preserving properties of symplectic motion. Since within our game, the particle
type can only be represented by the structure of the dynamics, and since electromagnetic processes do not change the type of a particle, these processes are quite obviously
structure preserving, which implies the non-appearance of co-symplices. Or, in other words, electromagnetism is of Hamiltonian nature. We will come back to this point in
Section 6.4.
6.3. The Lorentz Force
In the previous section we constituted the distinction between the “mechanical” elements
of the general matrix
and the electrodynamical elements
. Since the matrix
is a symplex, let us assume to be equal to
and apply Equation (25). We then find (with the appropriate relative scaling between
and
as explained above):
which yields, written with the coefficients of the real Dirac matrices:
where
τ is the proper time. If we convert to the lab frame time
t using
Equation (103) yields (setting
):
which is the Lorentz force. Therefore the Lorentz force acting on a charged particle in 3 spatial dimensions can be modeled by an ensemble of 2-dimensional CHOs. The isomorphism between the observables of the perceived 3-dimensional world and the second moments of density distributions in the phase space of 2-dimensional oscillators is remarkable.
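As a consistency check of the Lorentz force equation above, one can integrate its component form numerically for a pure magnetic field; a hedged NumPy sketch in lab time, with the combination q/(γm) lumped into a single constant (an illustrative simplification):

```python
import numpy as np

# dp/dt = q/(gamma m) * (p x B) for a pure magnetic field B = Bz e_z, q/(gamma m) = 1
Bz = 2.0
omega = 1.0 * Bz                      # cyclotron frequency q B / (gamma m)

def rhs(p):
    return np.cross(p, np.array([0.0, 0.0, Bz]))

p0 = np.array([1.0, 0.0, 0.5])
p = p0.copy()
dt, steps = 1e-4, 10000               # integrate with RK4 up to t = 1
for _ in range(steps):
    k1 = rhs(p); k2 = rhs(p + dt / 2 * k1)
    k3 = rhs(p + dt / 2 * k2); k4 = rhs(p + dt * k3)
    p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t = dt * steps
assert np.isclose(np.linalg.norm(p), np.linalg.norm(p0))   # |p| conserved
assert np.isclose(p[0], p0[0] * np.cos(omega * t) + p0[1] * np.sin(omega * t))
assert np.isclose(p[2], p0[2])                             # motion along B unchanged
```

A pure magnetic field rotates the transverse momentum at the cyclotron frequency without changing |p|, i.e., it acts exactly like the skew-symmetric (rotational) generators discussed above.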
In any case, Equation (103) clarifies several things within the game. Firstly, both energy and momentum have to be interpreted as mechanical energy and momentum (and not canonical); secondly, the relative normalization between fields and mechanical momentum is fixed; and last, but not least, it clarifies the relation between the time related to mass (the proper time) and the time related to energy, which appears to be the laboratory time.
6.4. The Maxwell Equations
As we already pointed out, waves are (within this game) the result of a Fourier transformation (FT). But there are different ways to argue this. In Reference [
16] we argued that Maxwell’s equations can be derived within our framework by (a) the postulate that space-time emerges from interaction,
i.e., that the fields
and
have to be constructed from the 4-vectors, with (b) the requirement that no co-symplices emerge. But we can also argue with the FT of the density (see
Section 6.1).
If we introduce the 4-derivative
The non-abelian nature of matrix multiplication requires us to distinguish differential operators acting to the right from those acting to the left,
i.e., we have
∂ as defined in Equation (106),
and
which is written to the right of the operand (thus indicating the order of the matrix multiplication) so that
Then we find the following general rules (see Equation (35)) that prevent non-zero cosymplices:
Application of these derivatives yields:
The first row of Equation (109) corresponds to the usual definition of the bi-vector fields from a vector potential
and is (written by components) given by
The second row of Equation (109) corresponds to the usual definition of the 4-current
as sources of the fields, and the last three rows just express the impossibility of the appearance of cosymplices. They explicitly represent the homogeneous Maxwell equations
the continuity equation
and the so-called “Lorentz gauge”
The simplest idea about the 4-current within QED is to assume that it is proportional to the “probability current”, which is within our game given by the vector components of
.
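That the homogeneous Maxwell equations hold identically once the fields are derived from potentials can be verified symbolically; a SymPy sketch with arbitrary (symbolic) scalar and vector potentials:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
phi = sp.Function('phi')(t, x, y, z)                                 # scalar potential
A = sp.Matrix([sp.Function(f'A{i}')(t, x, y, z) for i in range(3)])  # vector potential

def grad(f): return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
def div(V): return sum(sp.diff(V[i], v) for i, v in enumerate((x, y, z)))
def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

E = -grad(phi) - sp.diff(A, t)   # electric field from the potentials
B = curl(A)                      # magnetic field from the vector potential

# the homogeneous Maxwell equations hold identically, for ANY potentials:
assert sp.simplify(div(B)) == 0                                # div B = 0
assert sp.simplify(curl(E) + sp.diff(B, t)) == sp.zeros(3, 1)  # curl E + dB/dt = 0
```

No field equation is imposed here; the homogeneous equations are pure identities of the potential representation, mirroring the algebraic statement that they express the non-appearance of cosymplices.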
7. The Phase Space
Up to now, our modeling game referred to the second moments, and the elements of
are second moments such that the observables are given by (averages over) the following quadratic forms:
If we analyze the real Dirac matrix coefficients of
in terms of the EMEQ and evaluate the quadratic relations between those coefficients, then we obtain:
Besides a missing renormalization, these equations describe an object without mass but with the geometric properties of light as described by electrodynamics, e.g., by the electrodynamic description of electromagnetic waves: E · B = 0, E² = B², and so on. Hence single spinors are
light-like and can not represent massive particles.
Consider the spinor as a vector in a four-dimensional Euclidean space. We write the symmetric matrix
(or
Σ, respectively) as a product in the form of a Gramian:
or-componentwise:
The last line can be read such that matrix element
is the conventional 4-dimensional scalar product of column vector
with column vector
.
From linear algebra we know that Equation (116) yields a non-singular matrix
, iff the column-vectors of the matrix
are linearly independent. In the orthonormal case, the matrix
is simply the purest form of a non-singular matrix,
i.e., the unit matrix. Hence, if we want to construct a massive object from spinors, we need several spinors to fill the columns of
. The simplest case is the orthogonal case: the combination of four mutual orthogonal vectors. Given a general 4-component Hamiltonian spinor
, how do we find a spinor that is orthogonal to this one? In 3 (
i.e., odd) space dimensions, we know that there are two further vectors that are perpendicular to any given vector
, but without fixing the first vector, we can’t define the others. In even dimensions this is different: it suffices to find a non-singular skew-symmetric matrix like
to generate a vector that is orthogonal to
ψ, namely
. As in Equation (3), it is the skew-symmetry of the matrix that ensures the orthogonality. A third vector
must then be orthogonal to
ψ and to
. It must be skew-symmetric and it must hold
. This means that the product
must also be skew-symmetric and hence that
must anti-commute with
:
Now let us for a moment return to the question of dimensionality. There are in general N(N − 1)/2 independent non-zero elements in a skew-symmetric square N × N matrix. But how many such matrices are there in the considered phase space dimensions,
i.e., in
,
and
(
etc.) dimensions which anti-commute with
? We need at least
skew-symmetric anti-commuting elements to obtain a diagonal
. However, this implies at least
anticommuting elements of the Clifford algebra that square to
. Hence the ideal case is
, which is only true for the Pauli and Dirac algebra. For the Pauli algebra, there is one skew-symmetric element, namely
. In the Dirac algebra there are 6 skew-symmetric generators that contain two sets of mutually anti-commuting skew-symmetric matrices:
,
and
on the one hand and
,
and
on the other hand. The next considered Clifford algebra with
dimensions requires a representation by
-dimensional real matrices. Hence this algebra may not represent a Clifford algebra with more than 10 unit elements-certainly not
. Hence, we can not use the algebra to generate purely massive objects (e.g., diagonal matrices) without further restrictions (
i.e., projections) of the spinor
ψ.
But what exactly does this mean? Of course we can easily find 32 linearly independent spinors to generate an orthogonal matrix
. So what exactly is special in the Pauli- and Dirac algebra? To see this, we need to understand what it means that we can use the matrix
of mutually orthogonal column-spinors
This form implies that we can define the
mass of the “particle”
algebraically, and since we have
anticommuting skew-symmetric matrices in the Dirac algebra, we can find a multispinor
for
any arbitrary point in phase space. This does not seem to be sensational at first sight, since this appears to be a property of any Euclidean space. The importance comes from the fact that
ψ is a “point” in a very special space: a point in phase space. In fact, we will argue in the following that this possibility to factorize
ψ and the density
ρ is anything but self-evident.
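The factorization can be made concrete: with three mutually anticommuting skew-symmetric bi-vectors, the multispinor built from any single ψ has mutually orthogonal columns of equal norm. A NumPy sketch; the chosen real representation is an assumption, since only the (anti-)commutation relations matter:

```python
import numpy as np

# hedged real representation of the Dirac generators (only the algebra matters)
eps = np.array([[0., 1.], [-1., 0.]]); sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]]); I2 = np.eye(2)
g1, g2, g3 = np.kron(sx, sx), np.kron(sx, sz), np.kron(sz, I2)

# three mutually anticommuting, skew-symmetric bi-vectors
M1, M2, M3 = g1 @ g2, g1 @ g3, g2 @ g3
for Mi in (M1, M2, M3):
    assert np.allclose(Mi, -Mi.T) and np.allclose(Mi @ Mi, -np.eye(4))

rng = np.random.default_rng(7)
psi = rng.normal(size=4)                   # ANY phase space point

# multispinor: psi together with its images under the skew matrices
Psi = np.column_stack([psi, M1 @ psi, M2 @ psi, M3 @ psi])

# the columns are mutually orthogonal and of equal norm, for every psi:
assert np.allclose(Psi.T @ Psi, (psi @ psi) * np.eye(4))
```

The skew-symmetry of each Mi makes ψᵀ Mi ψ vanish identically, so the Gramian is diagonal for an arbitrary ψ, which is exactly the point of the argument above.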
If we want to simulate a phase space distribution, we can either define a phase space density or use the technique of Monte-Carlo simulations and represent the phase space by (a huge number of random) samples. If we generate a random sample and would like to implement a certain exact symmetry of the density in phase space, then we would (for instance) form a symmetric sample by appending not only a column-vector, but also its negative. In this way we obtain a sample with an exact symmetry. In a more general sense: If a phase space symmetry can be represented by a matrix that allows us to associate to an arbitrary phase space point ψ a second point, where the matrix is skew-symmetric, then we have a certain continuous linear rotational symmetry in this phase space. As we have shown, phase spaces are intrinsically structured and insofar much more restricted than Euclidean spaces. This is due to the distinction of symplectic from non-symplectic transformations and due to the intrinsic relation to Clifford algebras: Phase spaces are spaces structured by time. Within our game, the phase space is the only possible fundamental space.
We may imprint the mentioned symmetry to an arbitrary phase space density
ρ by taking all phase space samples that we have so far and adding the same number of samples, each column multiplied by
. Thus, we have a single rotation in the Pauli algebra and two of them in the Dirac algebra:
or:
Note that order and sign of the column-vectors in
are irrelevant, at least with respect to the autocorrelation matrix
. Thus we find that there are two fundamental ways to represent a positive mass in the Dirac algebra and one in the Pauli algebra. The 4-dimensional phase space of the Dirac algebra is self-matched in two independent ways.
Our starting point was the statement that linearly independent vectors are needed to generate mass. If we cannot find vectors in the way described above for the Pauli and Dirac algebras, then this does (of course) not automatically imply that there are no linearly independent vectors.
But what does it mean that the dimension of the Clifford algebra of observables (
N) does not match the dimension of the phase space (
) in higher dimensions? Different physical descriptions can be given. Classically we would say that a positive definite
-component spinor describes a system of
n (potentially) coupled oscillators with
n frequencies. If
is orthogonal, then all oscillators have the same frequency,
i.e., the system is degenerate. But for
we find that not all eigenmodes can involve the complete
-dimensional phase space. This phenomenon is already known in 3 dimensions: The trajectory of the isotropic three-dimensional oscillator always happens in a 2-dimensional plane,
i.e., in a subspace. If it did not, then the angular momentum would not be conserved. In this case the isotropy of space would be broken. Hence one may say in some sense that the
isotropy of space is the reason for a 4-dimensional phase-space and hence the reason for the
-dimensional observable space-time of objects. Or in other words: higher-dimensional spaces are incompatible with isotropy,
i.e., with the conservation of angular momentum. There is an intimate connection of these findings to the impossibility of Clifford algebras
with
to create a homogeneous “Euclidean” space: Let
represent time and
with
the spatial coordinates. The spatial rotators are products of two spatial basis vectors. The generator of rotations in the
-plane is
. Then we have 6 rotators in 4 “spatial” dimensions:
However, we find that some generators commute while others anticommute, and it can be taken from combinatorics that only sets of 3 mutually anti-commuting rotators can be formed from a set of symmetric anti-commuting
. The 3 rotators
mutually anticommute, but
and
commute. Furthermore, in
dimensions, the spinors are either projections into 4-dimensional subspaces or there are non-zero off-diagonal terms in
,
i.e., there is “internal interaction”.
Another way to express the above considerations is the following: Only in 4 phase space dimensions may we construct a massive object from a matrix
that represents a multispinor
Ψ of exactly
single spinors and construct a wave-function according to
where
is the phase space density.
It is easy to prove and has been shown in Reference [
16] that the elements
,
and
represent parity, time reversal and charge conjugation. The combination of these operators to form a multispinor may lead (with normalization) to the construction of symplectic matrices
. Some examples are:
Hence the combination of the identity and CPT-operators can be arranged such that the multispinor
is symplectic with respect to the directions of time
,
and
, but not with respect to
,
or
. As we tried to explain, the specific choice of the skew-symmetric matrix
is determined by a structure defining transformation. Since particles are nothing but dynamical structures in this game, the 6 possible SUMs should stand for 6 different particle types. However, for each direction of time, there are also two choices of the spatial axes. For
we have chosen
,
and
, but we could have used
,
and
as well.
Thus, there should be either 6 or 12 different types of structures (types of fermions) that can be constructed within the Dirac algebra. The above construction allows for three different types corresponding to three different forms of the symplectic unit matrix; three further types are expected to be related to
,
and
:
These matrices describe specific symmetries of the 4-dimensional phase space,
i.e., geometric objects in phase space. Therefore massive multispinors can be described as volumes in phase space. If we deform the figure by stretching parameters
such that
then one obtains with
taken from Equation (114):
This result reproduces the quadratic forms
of Equation (114), but furthermore the phase space radii
a,
b,
c and
d reproduce the structure of the Clifford algebra,
i.e., the classification into the 4 types of observables
,
,
and
. This means that a deformation of the phase space “unit cell” represents momenta and fields,
i.e., the dimensions of the phase space unit cell are related to the appearance of certain symplices:
while for
all vectors but
vanish. Only in this latter case, the matrix
is symplectic for
. These relations confirm the intrinsic connection between a classical 4-dimensional Hamiltonian phase space and Clifford algebras in dimension 3+1.
8. Summary and Discussion
Based on three fundamental principles, which describe the form of physics, we have shown that the algebraic structure of coupled classical degrees of freedom is (depending on the number of DOFs) isomorphic to certain Clifford algebras, which allows us to explain the dimensionality of space-time and to model Lorentz transformations, the relativistic energy-momentum relation and even Maxwell’s equations.
It is usually assumed that we have to define the properties of space-time in the first place: “In Einstein’s theory of gravitation matter and its dynamical interaction are based on the notion of an intrinsic geometric structure of the space-time continuum” [
36]. However, as we have shown within this “game”, it has far more explanatory power to derive and explain space-time from the principles of interaction. Hence we propose to reverse the above statement: The intrinsic geometric structure of the space-time continuum is based on the dynamical interaction of matter. A rigorous consequence of this reversal of perspective is that “space-time” does not need to have a fixed and unique dimensionality at all. It appears that the dimensionality is a property of the type of interaction. However, if supposed higher-dimensional space-times (see Reference [
16]) were to emerge in analogy to the method presented here, for instance in nuclear interaction, then these space-times would not simply be Euclidean spaces of higher dimension. Clifford algebras, especially if they are restricted by symplectic conditions via a Hamiltonian function, have a surprisingly complicated intrinsic structure. As we pointed out, if all generators of a Clifford algebra are symplices, then in
dimensions, we find
k-vectors with
but
k-vectors generated from symplices are themselves symplices only for
. However, if space-time is constrained by Hamiltonian motion, then ensembles of oscillators may also clump together to form “objects” with
or
-dimensional interactions, despite the fact that we gave strong arguments for the fundamentality of the
-dimensional Hamiltonian algebra.
There is no a priori reason to exclude higher order terms, as long as they include constants of motion. However, as the Hamiltonian then involves terms of higher order, we might need to consider higher order moments of the phase space distribution. In this case we would have to introduce an action constant in order to scale ψ.
Our game is based on a few general rules and symmetry considerations. The math used in our derivation-taking the results of representation theory for granted-is simple and can be understood at an undergraduate level. And though we never intended to find a connection to string theory, we found-besides the
-dimensional interactions a list of possible higher-dimensional candidates, two of which are also in the focus of string theories, namely
-dimensional and
-dimensional theories [
37].
We understand this modeling game as a contribution to the demystification (and unification) of our understanding of space-time, relativity, electrodynamics and quantum mechanics. Despite the fact that it has become tradition to write all equations of motion of QED and QM in a way that requires the use of the unit imaginary, our model seems to indicate that it does not have to be that way. Though it is frequently postulated that evolution in time has to be unitary within QM, it appears that symplectic motion does not only suffice, but is superior as it yields the correct number of relevant operators. While in the unitary case one would expect 16 (15) unitary (traceless) operators for a 4-component spinor, the natural number of generators in the corresponding symplectic treatment is 10, as found by Dirac himself in QED [
2,
38]. If a theory contains things which are
not required, then we have added something arbitrary and artificial. The theory as we described it indicates that in momentum space, which is used here, there is no immediate need for the use of the unit imaginary and no need for more than 10 fundamental generators. The use of the unit imaginary however appears unavoidable when we switch via Fourier transform to the “real space”.
There is a dichotomy in physics. On the one hand all causes are considered to inhabit space-time (local causality), but on the other hand physical reasoning mostly happens in energy-momentum space: There are no Feynman graphs, no scattering amplitudes, no fundamental physical relations that do not refer in some way to energy or momentum (conservation). We treat problems in solid state physics as well as in high energy physics mostly in Fourier space (reciprocal lattice).
We are aware that the rules of the game are, due to their rigour, difficult to accept. However, maybe it does not suffice to speculate that the world might be a hologram (as ’t Hooft suggested [
39] and Leonard Susskind sketched in his celebrated paper, Reference [
40]); we really should play modeling games that might help to decide if and
how it could be like that.