1. Introduction
The act of measurement plays a role in quantum mechanics that is unlike any it had in classical theories: physical predictions change after a measurement. We take the point of view that our understanding of quantum mechanics is not complete until a well defined understanding of the measurement process is in place. We want a complete characterization of the process, that is, of the transition from a coherent superposition to a state that can be attributed definite outcomes. Environmental decoherence [1] is a proposal that comes close, identifying instances in which the transition occurs for all practical purposes. There are additional proposals, like quantum Darwinism [2] and coarse-grained measurements [3].
However, solutions based on decoherence are incomplete, since they give rise to a subjective and ill defined notion of event. There are protocols showing that the ‘decoherence solution’ is only apparent. We are not the only ones levying these criticisms, and much work has been devoted to modifications and/or re-interpretations of the theory to clarify the situation. Several involve compromises: for example, giving up the possibility of a single-world description as a way of keeping the formalism of quantum theory intact [4,5,6,7], losing the notion of objectivity [8,9], accepting makeshift modifications to the theory in order to restore a single-world picture [10,11,12,13,14,15,16], or even combinations of all the above [17]. Here, we would like to present a realist description, with minimal premises and a well defined notion of event.
The Montevideo Interpretation of quantum mechanics [18,19,20] is, in reality, an extension of quantum mechanics obtained by taking into account that time must be a physical observable, and not a classical parameter. It takes seriously the notion that everything must be quantum in a quantum universe. It does not invoke a classical world in order to define the quantum theory, as the Copenhagen interpretation does. Alternatively, it could be interpreted as selecting the preferred basis to be used in a Many Worlds interpretation [23]. Treating time as a physical observable is even more natural when one considers gravity, as it is well known that, in generally covariant theories, the time coordinate is just a gauge parameter and clocks must be constructed from physical quantities.
In this paper, we succinctly present the main results that lead to the Montevideo Interpretation, incorporating recent advances in the understanding of problems related to it, pointing, in each case, to references where the analyses are more extensive, and showing its general applicability to quantum covariant systems.
In Section 2, we introduce a notion of time for generally covariant systems. Section 3 discusses fundamental limits to space-time measurements based on general relativity and quantum mechanics. Section 4 discusses how quantum measurements are possible when real clocks are considered. Section 5 discusses the objective notion of event that we consider. We end with conclusions.
2. A Quantum Notion of Time in Generally Covariant Systems
There has been recent progress in understanding, with precision, how to recover the usual quantum descriptions, that is, the Schrödinger or Heisenberg pictures, in totally covariant systems like general relativity [24,25]. It provides a systematic analysis that cannot be ignored in any attempt to advance the understanding of the problem of time. In their main papers regarding the problem of time, Höhn, Smith, and Lock [24,25] have shown how it is possible to consistently treat three historical approaches to this problem. Here, we will review the first two approaches at a classical level in order to focus, in what follows, on the nature of the time parameter used.
One wishes to describe a totally constrained system with a constraint $C$, whose classical dynamics is defined by the gauge transformations generated by such a constraint. A first approach can be made in terms of relational physical observables [26,27,28,29,30,31]. They are Dirac observables, that is, they commute with the constraint(s), and they coincide with the function of the kinematical variables $f$ when the variable $T$ is equal to the parameter $\tau$, which also belongs to the kinematical space. A second approach is to fix a gauge (a choice of time in the classical theory) and deparameterize. For that purpose, a canonical transformation is introduced in the classical system, which gives rise to a time variable $T$ and its canonically conjugate momentum $P_T$, and to relational Dirac observables associated to the remaining original canonical variables $Q_i$, $P_i$. The gauge is fixed, for instance, as $T=\tau$, and one solves for $P_T$, which eliminates the gauge dependence. This can always be done with “good clocks”, as shown in [24,25]. The $Q_i$, $P_i$ now obey canonical equations that describe their evolution in terms of a true Hamiltonian $H$ that results from the deparameterization, and they satisfy initial conditions $Q_i(0)$ and $P_i(0)$ at $\tau=0$. One then recovers the classical evolution in terms of the classical clock variable $T$. The third approach, that of Page and Wootters [32], is purely quantum and cannot be treated classically.
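As a concrete illustration of the second approach, consider the parametrized non-relativistic particle, a standard example (our choice of illustration, not specific to [24,25]). The kinematical variables are $(t,p_t,q,p)$, and the dynamics is generated by the constraint

$$C=p_t+H(q,p)\approx 0,$$

where $t$ plays the role of the clock variable $T$ and $p_t$ of its conjugate momentum $P_T$. Fixing the gauge $t=\tau$ and solving $C=0$ for $p_t$ deparameterizes the system: the remaining variables $(q,p)$ evolve according to $\dot q=\{q,H\}$, $\dot p=\{p,H\}$, with $H$ acting as the true Hamiltonian, and the relational observable $Q(\tau)$ returns the value of $q$ “when the clock $t$ reads $\tau$”.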
In order to quantize these approaches, one needs to introduce, in each case, a time operator. Time operators are not defined in the physical space of states. The latter are annihilated by the constraints and, therefore, by the total Hamiltonian (which is a linear combination of the constraints in a totally constrained theory). Physical operators thus have to commute with the total Hamiltonian and are, therefore, constants of the motion. Even at the kinematical level (where states are not annihilated by the total Hamiltonian and the constraints), it is problematic to quantize a time variable. In fact, as Pauli observed, it is not possible to define a self-adjoint operator conjugate to a Hamiltonian that is bounded from below. Later on, this result was extended by Unruh and Wald [33], who showed that, if the Hamiltonian is bounded from below, there is no self-adjoint operator that only runs forward in time. In other words, any realistic clock that runs forward in time has a non-vanishing probability of running backwards. Although there are no self-adjoint operators associated to a “time” variable that satisfy the conditions of a good clock, one can use generalized measurements in terms of positive operator-valued measures (POVMs) and effect operators in order to describe clock readings. This was the strategy followed by Höhn, Smith, and Lock [24,25]. However, these operators are necessarily defined in the kinematical space and do not correspond to any Dirac observable in the physical space. One may wonder whether operators defined in the kinematical space are observable. After all, any physical system in a background-independent theory should obey the physical laws derived from the constraint. Let us assume that the clock system is an element of our Universe and, therefore, satisfies the quantum and general relativistic laws; for instance, it is an atomic clock. The observable time for this kind of clock cannot be described by kinematical operators that would completely ignore the Einstein equations. Of course, if we come back to the physical Hilbert space, we again have to face the problem of time, given that Dirac observables are constants of the motion.
One could ask whether anything is gained with this construction. The answer is clearly positive. The most important result is that, following any of the mentioned techniques, i.e., introducing relational physical observables, deparameterizing, or using the Page–Wootters formalism, one ends up with a standard formulation of quantum mechanics in totally covariant systems in the Schrödinger or Heisenberg picture. As in non-relativistic quantum mechanics, time is treated differently from any other variable. In ordinary quantum mechanics, it is a classical variable and, in a totally constrained system, a kinematical variable. In a totally constrained and quantum mechanical world, such variables are idealizations that lead to the simplest description of the evolution in terms of unitary evolution operators. However, the measurement of time is always done using a real clock that is subject to gravitational and quantum laws. These clocks are subject to the uncertainties and fluctuations of all quantum systems and, therefore, are never exact. Our observation, which we here re-derive in the language of Höhn, Smith, and Lock, is that the evolution described in terms of a real clock satisfies a modified master equation.
We shall consider a physical situation described by a set of variables belonging to a constrained system that includes the physical system we wish to study and a “clock” that will be used to keep track of the passage of time, which we will assume takes continuum values. A simple example of a clock variable could be the position of a certain free particle. The evolving constant of the motion associated with it is the relational observable $T(\tau)$. As before, $T$ and $O$ are kinematical quantities and $\tau$ a number. $T(\tau)$ corresponds to a relational physical observable that takes the value of $T$ when the parameter $\tau$ takes the value of the auxiliary kinematical time, which will, in the quantum theory, be modeled by a POVM. We also identify the physical variables that we wish to study with the relational physical observable $O(\tau)$ associated to the kinematical variable $O$. We then proceed to quantize the system by promoting the observable and the clock to self-adjoint quantum operators $\hat{O}(\tau)$, $\hat{T}(\tau)$ acting on the kinematical Hilbert space, while $\tau$ is taken as an auxiliary variable belonging to the kinematical space (or represented by a POVM in the quantum theory). We are quantizing the relational physical observables following the procedure discussed in [24,25].
As we did in reference [34], we call $T$ the eigenvalues of the clock relational observable $\hat{T}(\tau)$ and $O$ the eigenvalues of the relational observable $\hat{O}(\tau)$, and we assume that both have continuum spectra. They satisfy

$$\hat{T}(\tau)\,|T,k,\tau\rangle = T\,|T,k,\tau\rangle,\qquad \hat{O}(\tau)\,|O,j,\tau\rangle = O\,|O,j,\tau\rangle,$$

which are eigenvalue equations in the kinematical space. The states of such space, $|T,k,\tau\rangle$, are labeled by the eigenvalues of $\hat{T}(\tau)$ and of other parameterized Dirac observables that we collectively denote as $k$. The observables associated with $T$ and $k$ depend on $\tau$, which is why we include it among the labels of the state. We observe that $\hat{T}(\tau)$ may be written in terms of Dirac observables and $\tau$, and we “normalize” the eigenstates in the physical state space, where $\tau$ becomes a non-observable parameter. The projector in the physical space that corresponds to finding the time variable $T$ in the interval $[T_0-\Delta T/2,\,T_0+\Delta T/2]$ for a given $\tau$ is

$$P_T(\tau)=\int_{T_0-\Delta T/2}^{T_0+\Delta T/2} dT \sum_k |T,k,\tau\rangle\langle T,k,\tau|,$$

where $k$ denotes the eigenvalues of the other Dirac observables that form a complete set with $\hat{T}(\tau)$; we have assumed that the latter has a continuum spectrum, while for $k$ this is not really necessary, which is why we kept a sum over it rather than an integral. Similarly, the projector for $O$ is

$$P_O(\tau)=\int_{O_0-\Delta O/2}^{O_0+\Delta O/2} dO \sum_j |O,j,\tau\rangle\langle O,j,\tau|.$$
In order to determine the simultaneous probability of finding $O$ in an interval $\Delta O$ around $O_0$ and the clock around time $T_0$, we start by assuming that the measurement occurs when the parameter takes the value $\tau$,

$$\mathcal{P}\left(O\in\Delta O,\,T\in\Delta T;\tau\right)=\frac{{\rm Tr}\left(P_O(\tau)\,P_T(\tau)\,\rho\,P_T(\tau)\right)}{{\rm Tr}\left(\rho\right)},$$

where $\rho$ is a density matrix in the physical space of states, and we used the cyclic property of the trace and the fact that $P_T(\tau)$ is a projector and, therefore, equal to its square. We can compute, in an analogous way, the probability of finding $T$ in the given interval. The conditional probability of finding the observable in its interval when the clock is in its own interval is then given by the ratio of the simultaneous probability and the probability of finding $T$ in the given interval. Taking into account that we have complete ignorance of the value of $\tau$, and that the above expressions depend on $\tau$, we need to average these expressions over all possible values of $\tau$. To do that, we consider all of the values of $\tau$ leading to a non-vanishing probability of finding the clock in the given interval; let us call $\tau_1$ the minimum of these values and $\tau_2$ the maximum. Subsequently, by taking the average over the possible values of the unobservable parameter and simplifying the factors ${\rm Tr}(\rho)$ in the numerator and denominator, we get, for the conditional probability,

$$\mathcal{P}\left(O\in\Delta O\,|\,T\in\Delta T\right)=\frac{\int_{\tau_1}^{\tau_2}d\tau\,{\rm Tr}\left(P_O(\tau)\,P_T(\tau)\,\rho\,P_T(\tau)\right)}{\int_{\tau_1}^{\tau_2}d\tau\,{\rm Tr}\left(P_T(\tau)\,\rho\right)}.$$

We integrate the numerator and denominator separately, since the numerator corresponds, up to a factor, to the joint probability of $O\in\Delta O$ and $T\in\Delta T$, and the integral in the denominator, up to the same factor, to the probability of $T\in\Delta T$. The reason for this averaging is that we do not know for which value of the kinematical time $\tau$ the clock took the value $T$. Notice that something similar occurs if one deparameterizes, because, in that case, the parameter would correspond to a classical variable. If we could have a perfect clock, such that to each $\tau$ there corresponded only one $T$, the expression would reduce to the one of ordinary quantum mechanics. This behavior persists in simple theories, like ordinary quantum mechanics; it does not depend on the fact that we are working with a totally constrained theory. It is enough to take into account that the Schrödinger time is an idealized variable about which one does not have perfect information.
Let us assume that we have chosen a physical clock whose interactions with the system of interest are negligible and that behaves semiclassically, with small quantum fluctuations. It will typically be an atomic clock based on a periodic system with small deviations from the ideal non-observable time $\tau$. Therefore, in a first approximation, one expects to recover the ordinary Schrödinger evolution plus small corrections. Let us then assume that we can divide the density matrix of the whole system into a product form between clock and system, $\rho=\rho_{\rm cl}\otimes\rho_{\rm sys}$, and that the evolution is given by a unitary operator that is also of product type, $U(\tau)=U_{\rm cl}(\tau)\otimes U_{\rm sys}(\tau)$.
As shown by Höhn, Smith, and Lock [24,25], the evolution in terms of a density matrix at an ideal time $\tau$ is given by the usual probability of measuring the value of $O$ at a time $\tau$ in a state $\rho$,

$$\mathcal{P}\left(O\in\Delta O;\tau\right)={\rm Tr}\left(P_O(\tau)\,\rho(\tau)\right),\qquad \rho(\tau)=U(\tau)\,\rho\,U(\tau)^\dagger .$$

Because $\tau$ is unknown, we would like to shift to a description where we have density matrices as functions of the observable time $T$. To do this, we start from the conditional probability derived above and make the separation between clock and system explicit. Introducing the probability $\mathcal{P}(\tau|T)$ that the measurement of $T$ has occurred at the (unknown) kinematical time $\tau$, and noticing from the resulting expression that $\int d\tau\,\mathcal{P}(\tau|T)=1$, we may introduce a $T$-dependent density matrix of the system at time $T$,

$$\rho(T)=\int d\tau\,\mathcal{P}(\tau|T)\,U_{\rm sys}(\tau)\,\rho_{\rm sys}\,U_{\rm sys}(\tau)^\dagger .$$

One can then derive the ordinary expression for the probabilities in quantum mechanics, but with the effective density matrix $\rho(T)$. Because one ends up with a superposition of density matrices evolved unitarily for different values of $\tau$, the effective evolution of the physical density matrix is not unitary.
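The loss of unitarity from averaging over the unknown $\tau$ can be made concrete in a toy numerical model. The sketch below is our own construction, not taken from [24,25,34]; it assumes $\hbar=1$ and a Gaussian $\mathcal{P}(\tau|T)$ of width $b$, and evolves a qubit unitarily before averaging over the ideal time:

```python
import numpy as np

# Toy illustration: a qubit evolved unitarily for an ideal time tau, then
# averaged over a Gaussian spread of tau values of width b, mimicking our
# ignorance of the ideal time when the clock reads T.  Units: hbar = 1.
omega = 1.0                                    # energy gap of the qubit
H = np.diag([0.0, omega])                      # Hamiltonian
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho0 = np.outer(plus, plus)                    # pure superposition state

def U(tau):
    return np.diag(np.exp(-1j * np.diag(H) * tau))

def rho_real_clock(T, b, n=4001):
    """Average unitarily evolved states over tau ~ Normal(T, b^2)."""
    taus = np.linspace(T - 5 * b, T + 5 * b, n)
    w = np.exp(-(taus - T) ** 2 / (2 * b ** 2))
    w /= w.sum()
    return sum(wi * U(t) @ rho0 @ U(t).conj().T for wi, t in zip(w, taus))

for b in [0.0, 0.5, 1.0, 2.0]:
    rho = (U(10.0) @ rho0 @ U(10.0).conj().T) if b == 0 else rho_real_clock(10.0, b)
    print(f"clock width b = {b:3.1f}  ->  |rho_01| = {abs(rho[0, 1]):.4f}")
# The off-diagonal element shrinks as the clock uncertainty b grows, even
# though each individual evolution entering the average is unitary.
```

Each term in the average is perfectly unitary; it is only the ignorance of $\tau$ that suppresses the off-diagonal element, exactly as in the effective $\rho(T)$ above.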
In reference [34], we have shown that, if the real clock behaves semi-classically and we assume that $\mathcal{P}(\tau|T)$ is a peaked, symmetric function that can be approximated by a Dirac delta, with a width $b(T)$ that grows with time, one gets (with $\hbar=1$)

$$\frac{\partial\rho}{\partial T}=-i\left[H,\rho\right]-\sigma(T)\left[H,\left[H,\rho\right]\right],$$

where $\sigma(T)$ is the rate of spread of the width of the clock state. This is an equation of the Lindblad [35] type that should be considered the master equation describing the actual evolution of any physical system evolving in terms of a real clock. The correcting term is proportional to $\sigma(T)$, a quantity that depends on the particular clock used and that controls the decoherence induced on the evolution of the states by the use of physical clocks. As we shall see, there are fundamental bounds on how good a clock can be; with the value of $\sigma(T)$ implied by these bounds, we will recover a master equation for optimal physical clocks.
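The decohering character of the extra term can be seen directly in the energy eigenbasis. Writing $\rho_{nm}=\langle n|\rho|m\rangle$ and $\omega_{nm}=E_n-E_m$ (a short sketch, assuming a time-independent Hamiltonian and $\hbar=1$), the master equation decouples,

$$\dot\rho_{nm}=-i\,\omega_{nm}\,\rho_{nm}-\sigma(T)\,\omega_{nm}^2\,\rho_{nm}\quad\Longrightarrow\quad \rho_{nm}(T)=\rho_{nm}(0)\,e^{-i\omega_{nm}T}\exp\!\left(-\omega_{nm}^2\int_0^T\sigma(T')\,dT'\right),$$

so the diagonal elements (the outcome probabilities) are untouched, while the off-diagonal elements (the coherences) decay at a rate set by the clock quality $\sigma$ and the energy separation $\omega_{nm}$.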
Summarizing: Schrödinger’s equation is only approximate; it is a description in terms of a classical time that is not accessible to observers in a quantum and covariant universe. When one takes into account that clocks are physical systems just like any other and that the universe obeys covariant and quantum laws, the evolution needs to be modified. It ends up being described by a master equation that includes an additional loss of coherence. The origin of the lack of unitarity is the fact that definite statistical predictions are only possible by repeating an experiment. If one uses a real clock, which has thermal and quantum fluctuations, each experimental run will correspond to a different value of the evolution parameter. The statistical prediction will therefore correspond to an average over several intervals and, as a consequence, the evolution cannot be unitary.
3. Fundamental Limits to Space-Time Measurements
From the above analysis, it is clear that, if there are fundamental limitations on how good a clock can be, then the use of real clocks will introduce an additional, fundamental source of decoherence. Because we do not have a complete theory of quantum gravity, this is still a contentious issue. Phenomenological arguments have been given by Salecker and Wigner, Karolyhazy, Ng and van Dam, Amelino-Camelia, Ng and Lloyd, and Frenkel [36,37,38,39,40,41,42,43], leading to similar estimates based on two main effects: quantum fluctuations and black hole formation. We have recently given a simple argument leading to a fundamental minimum uncertainty in the determination of time intervals, consistent with the previous estimates. It relies only on the uncertainty principle and on time dilation in a gravitational field [44]. Schematically, the argument is as follows: let us consider a microscopic quantum system playing the role of a clock and a macroscopic observer that interacts with the clock by interchanging signals. Let us start by considering the time-energy uncertainty relation, $\Delta\tau\,\Delta E\geq\hbar$, where $\Delta\tau$ is of the order of the period of oscillation of the system being considered and $E$ is the energy of the quantum oscillator. One can consider that the macroscopic system is at an infinite (macroscopic) distance from the microscopic quantum oscillator. We now consider the relationship between the time measured by the clock locally, $\tau$, and the time measured by an observer at an infinite distance from it, $t$. The gravitational time dilation measures the difference in the passage of proper time at different positions, as described by the metric tensor of space-time. It is given by

$$\tau=t\,\sqrt{1-\frac{R_S}{r}},$$

where $R_S$ is the Schwarzschild radius of the clock, given by $R_S=2GM/c^2$.
We will concentrate on the uncertainty of the observed period of oscillations. Using the standard technique for the propagation of errors of a measurement, and taking into account the definition of the Schwarzschild radius together with the time-energy uncertainty relation (which imply $\Delta R_S=2G\Delta E/c^4\geq 2G\hbar/(c^4\Delta\tau)$), we get

$$\Delta\tau_{\rm grav}=\frac{t\,\Delta R_S}{2r\sqrt{1-\frac{R_S}{r}}}\geq\frac{G\hbar\,t}{c^4\,r\,\Delta\tau\,\sqrt{1-\frac{R_S}{r}}}.$$

Recalling that $\sqrt{1-R_S/r}$ is a positive quantity less than one, since the size of the clock cannot be smaller than its Schwarzschild radius, and translating $\tau$ to $t$, one has that

$$\delta t\geq\Delta\tau+\frac{G\hbar\,t}{c^4\,r\,\Delta\tau}.$$

Assuming that the clock has size $r$ and that the oscillation within it takes place at the mean speed $v$, we have that $\Delta\tau\geq r/v$ (this actually holds in curved space [45]); computing the minimum of $\delta t$ as a function of $\Delta\tau$ while using the expression above and taking into account that $v\leq c$, we get

$$\delta t\gtrsim t_P^{2/3}\,t^{1/3},$$

with $t_P=\sqrt{G\hbar/c^5}$ the Planck time.
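The minimization can be checked numerically. The sketch below is our own reconstruction of the schematic argument, modeling the total uncertainty as $f(\Delta\tau)=\Delta\tau+t\,t_P^2/\Delta\tau^2$ after setting $v=c$ and $r=c\,\Delta\tau$; the numerical minimum reproduces the $t_P^{2/3}\,t^{1/3}$ scaling:

```python
import numpy as np

# Numerical check of the schematic bound: minimize
#   f(dtau) = dtau + t * t_P**2 / dtau**2,
# where the first term is the quantum uncertainty of the clock itself and
# the second is the accumulated gravitational time-dilation uncertainty.
t_P = 5.39e-44                                   # Planck time in seconds

def min_uncertainty(t, n=200001):
    dtau = np.logspace(-40, -15, n)              # candidate clock periods, s
    f = dtau + t * t_P ** 2 / dtau ** 2
    return f.min()

for t, label in [(1.0, "one second"),
                 (3.2e7, "one year"),
                 (4.4e17, "age of universe")]:
    analytic = 1.5 * 2 ** (1 / 3) * t_P ** (2 / 3) * t ** (1 / 3)
    print(f"{label:16s}: numerical minimum = {min_uncertainty(t):.2e} s, "
          f"analytic (3/2) 2^(1/3) t_P^(2/3) t^(1/3) = {analytic:.2e} s")
# Both reproduce the scaling delta_t ~ t_P^(2/3) t^(1/3); even over the age
# of the universe the minimum uncertainty is only ~1e-23 seconds.
```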
This bound agrees with those derived by other means by the authors mentioned above [36,37,38,39,40,41,42,43]. Similar fundamental uncertainties hold for spatial intervals $l$ and for any relativistic invariant interval $s$ [46].
If the best accuracy one can achieve with a clock is the one given above, it will induce, via the master equation, the decay of the off-diagonal terms of the density matrix,

$$\rho_{nm}(T)\sim\rho_{nm}(0)\,e^{-i\omega_{nm}T}\,e^{-\omega_{nm}^{2}\,t_P^{4/3}\,T^{2/3}}$$

($T$ would correspond to the $t$ of this section). Therefore, pure states evolve approaching statistical mixtures, also known as classical mixtures, which suffer an irreversible evolution, and the system presents a fundamental loss of coherence due to this effect. This is a fundamental effect: any physical system will lose unitarity through its evolution.
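To get a feeling for the magnitudes involved, the following sketch (our own choice of numbers; the decay exponent $\omega_{nm}^2\,t_P^{4/3}\,T^{2/3}$ is taken from the expression above, with $\omega_{nm}=\Delta E/\hbar$) evaluates the exponent after one second for energy separations ranging from atomic to macroscopic:

```python
import numpy as np

# Order-of-magnitude illustration of the decay exponent
#   omega_nm^2 * t_P^(4/3) * T^(2/3),   omega_nm = Delta E / hbar.
hbar = 1.054e-34        # J s
t_P = 5.39e-44          # Planck time, s
T = 1.0                 # observation time, s

for label, delta_E in [("atomic transition (~1 eV)", 1.6e-19),
                       ("intermediate (~1e-5 J)", 1e-5),
                       ("macroscopic (~1 J)", 1.0)]:
    omega = delta_E / hbar
    exponent = omega ** 2 * t_P ** (4 / 3) * T ** (2 / 3)
    print(f"{label:28s}: decay exponent = {exponent:.3e}")
# Microscopic superpositions are essentially untouched (exponent ~1e-28),
# while superpositions of macroscopically distinct energies lose their
# coherence completely (exponent ~1e10).
```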
Summarizing: the precision with which time lapses or spatial distances can be measured is limited by quantum and gravitational effects. As a consequence of this limitation, quantum states exhibit small deviations from the Schrödinger evolution. The evolution described by real clocks becomes irreversible and exhibits a new form of loss of coherence, independent of environmental decoherence, implying that pure states approach classical mixtures as time evolves.
4. Quantum Measurements with Real Clocks
According to the orthodox view of the quantum measurement problem, the collapse of the wave-packet during measurements refers to an irreducibly indeterministic change in the state of a quantum system, contravening the deterministic and continuous evolution prescribed by the Schrödinger equation. One needs to address several questions, in particular: under exactly what conditions does the collapse occur? In other words, the problem of measurement in quantum mechanics arises in standard treatments as the requirement of a reduction process when a measurement takes place. Such a process is not contained within the unitary evolution of the quantum theory; it has to be postulated externally, and it is not unitary. The processes that occur during measurements are usually justified through the interaction with a large, classical measuring device and an environment with many degrees of freedom, that is, through environmental decoherence.
Objections have been levied at two aspects of the solution of the problem of measurement through decoherence. (1) Because the evolution of the system plus environment is unitary, the coherence still persists and could potentially be regained. (2) In a picture where evolution is unitary, “nothing ever occurs”. This is Bell’s “and/or” problem: the final reduced density matrix of the system plus the measurement device describes a set of coexistent options, not alternative options with definite probabilities. To put this feature vividly, in terms of Schrödinger’s cat: at the end of the decoherence process, the quantum state still describes two coexistent cats, one alive and one dead. We shall argue that these two problems are related to the issue of time, and we shall then propose a solution for them.
A first approach that we took to the analysis of the first issue is model dependent [47]. It consists of considering a model where the quantum system, the measurement apparatus, and the environment are completely under control, and studying its evolution in terms of real quantum clocks. One can show that, with the modified evolution described by the master equation, the first objection to decoherence does not apply. We will concentrate in this section on that first objection: the quantum coherence is still there. Although a quantum system interacting with an environment with many degrees of freedom will very likely give the appearance that the initial quantum coherence of the system is lost (the density matrix of the measurement device is almost diagonal), the information regarding the original superposition could be recovered, for instance, by carrying out a measurement that includes the environment. The fact that such measurements are hard to carry out in practice does not prevent the issue from existing as a conceptual problem. The persistence of correlations also manifests itself in closed systems in the problem of revivals: after a very long time, the off-diagonal terms in the reduced density matrix of the system plus the measurement device become large again. Whatever definiteness of the observed preferred quantity had been gained by the end of the measurement interaction turns out, in the very long run, to have been but a temporary victory: the superposition of different outcomes reappears in the state of the measuring apparatus. This is called the problem of revivals (or ‘recurrence of coherence’, or ‘recoherence’). The fundamental irreversible decoherence induced by the use of real clocks allows one to show that the modified evolution prevents revivals. While the multiperiodic functions in the coherences of the process induced by the environment tend to return to their original values after a Poincaré recurrence time, the exponential decay of the off-diagonal terms of the density matrix displayed in the previous section, for sufficiently large systems, completely hides the revival under the noise amplitude [47].
Let us discuss the possibility of recovering the information that, after environmental decoherence, lies in the interference terms involving the environment. That is, let us try to establish whether the system remains in a coherent superposition or has become a statistical mixture. Because the information in question is a characteristic of the total system (including the environment) being in a coherent superposition, it can, in principle, be revealed by measuring a suitable quantity of the total system. A typical procedure would be to measure an observable of the total system that takes different values for a coherent superposition and a statistical mixture; for instance, one that vanishes in the latter case. This was proposed, for instance, by d’Espagnat [48]. In reference [47], we showed that, due to the fundamental decoherence induced by quantum clocks, the expectation value of such observables decreases exponentially, and it becomes increasingly difficult to distinguish it from the vanishing value that results from an exact statistical mixture. The analysis was based on a particular model of a spin interacting with a spin-bath environment.
In what follows, we are going to show that, due to the combined effect of fundamental decoherence and the bounds on the precision of measurements of length and time intervals, this analysis may be extended to a wide range of systems [49], and that it is impossible to distinguish between the expectation value for the evolved initial state and a statistical mixture, not only for all practical purposes (FAPP) but fundamentally. The solution may be applied to a general class of global protocols that apply to any decoherence model. In this way, we provide a criterion that works in much more general settings than a particular model. This analysis also permits providing estimates for bounds on the time at which an event occurs.
Let us first consider a system with unitary evolution that interacts with its environment and presents environmental decoherence; then it is, in principle, always possible to distinguish whether the system is in a superposition or a statistical mixture. An obvious way of showing this is the following. Let us assume that we are interested in the measurement of an observable $A$ with eigenvalues $a_i$ and eigenstates $|a_i\rangle$, and that we want to compare the evolution during the measurement process of a pure state,

$$|\Psi\rangle=\sum_i c_i\,|a_i\rangle,$$

and a statistical mixture,

$$\rho_{\rm mix}=\sum_i |c_i|^2\,|a_i\rangle\langle a_i|.$$

Measurements on both states lead to the same set of results with the same probabilities. However, while after a measurement outcome is observed $|\Psi\rangle$ becomes some $|a_i\rangle$, the latter remains invariant. After an event occurs, the system becomes $\rho_{\rm mix}$. However, if one studies the unitary evolution of the systems coupled to the environment, even after the interaction of the measuring device with the environment, the total system resulting from the evolution of the first state differs from the total system resulting from the second. Without projections breaking the unitarity, events would not occur. The distinction between them may become harder after evolution, but it is always possible provided that unitarity is preserved. A measurement on the first system with a definite outcome would always require breaking the unitarity. It is usually argued by decoherentists that, at a certain moment, the evolution becomes “effectively irreversible”.
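A minimal two-level sketch (our own toy model) makes the point explicit: the pure state and the mixture agree on every measurement of $A$ itself, and only an interference-sensitive observable, here played by $\sigma_x$, distinguishes them:

```python
import numpy as np

# Toy "Schroedinger cat" qubit: the pure superposition and the statistical
# mixture give identical probabilities for the measured observable
# A = sigma_z, but differ on sigma_x, which plays the role of the
# d'Espagnat-type observables sensitive to coherence.
c = np.array([1.0, 1.0]) / np.sqrt(2.0)          # amplitudes c_i
rho_pure = np.outer(c, c.conj())                  # |Psi><Psi|
rho_mix = np.diag(np.abs(c) ** 2)                 # sum_i |c_i|^2 |a_i><a_i|

sigma_z = np.diag([1.0, -1.0])                    # the measured observable A
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])      # coherence witness

for name, rho in [("pure", rho_pure), ("mixture", rho_mix)]:
    probs = np.real(np.diag(rho))                 # outcome probabilities of A
    witness = np.real(np.trace(rho @ sigma_x))
    print(f"{name:8s}: P(a_1), P(a_2) = {probs},  <sigma_x> = {witness:.2f}")
# Both states predict P = [0.5, 0.5] for A, yet <sigma_x> is 1 for the
# superposition and 0 for the mixture: only an interference-sensitive
# observable can tell whether an event has occurred.
```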
Aaronson, Atia, and Susskind [50] have shown that observing the interference terms in a state such as $|\Psi\rangle$ is as difficult as reproducing the initial state. In terms of Schrödinger’s cat, measuring the interference terms is as difficult as bringing the dead cat back to life. A trivial example of a protocol allowing one to distinguish $|\Psi\rangle\langle\Psi|$ and $\rho_{\rm mix}$ is time reversal. In fact, we would recover the initial states, and they are easily distinguishable. Implementing this protocol with enough precision to distinguish the two situations in an experiment is, without a doubt, an extremely hard task, which would require control over the huge number of degrees of freedom in the environment (see, however, ref. [51] for some progress in that direction). However, the fact that this possibility exists in principle is already an insurmountable obstacle to constructing an objective notion of event within unitary quantum mechanics at the conceptual level.
In reference [49], we analyzed the effect of the fundamental decoherence induced by the use of real clocks. If the same time reversal is considered in this case, the irreversibility of the evolution described by the master equation does not allow one to recover the initial state. In fact, if we consider the evolution of $\rho$ from the initial time $0$ to $t$ and then back-reverse it to the initial time, we do not re-obtain the initial state, but a state that differs very little from it.
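The following sketch (our own toy integration, assuming the Lindblad-type master equation above with a constant $\sigma$ for simplicity) illustrates the irreversibility: a qubit is evolved forward and then ‘time reversed’ by flipping the sign of the Hamiltonian, and the coherence is not recovered:

```python
import numpy as np

# Evolve a qubit forward for a time T under H plus a small clock-induced
# dephasing sigma, then attempt time reversal by flipping H -> -H.  The
# unitary part rewinds; the dephasing term does not.
omega, sigma, T, steps = 1.0, 0.01, 10.0, 20000
H = np.diag([0.0, omega])
dt = T / steps

def step(rho, H, dt):
    comm = H @ rho - rho @ H                 # [H, rho]
    dcomm = H @ comm - comm @ H              # [H, [H, rho]]
    return rho + dt * (-1j * comm - sigma * dcomm)

plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus)
print(f"initial        |rho_01| = {abs(rho[0, 1]):.4f}")
for _ in range(steps):
    rho = step(rho, H, dt)
print(f"after forward  |rho_01| = {abs(rho[0, 1]):.4f}")
for _ in range(steps):
    rho = step(rho, -H, dt)                  # "time reversal" of the unitary part
print(f"after reversal |rho_01| = {abs(rho[0, 1]):.4f}")
# |rho_01| decays from 0.5 to 0.5*exp(-sigma*omega^2*T) on the way out and
# decays again on the way back: the initial coherence is never recovered.
```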
In order to quantify this difference, let us consider an observable of the form $A=A_S\otimes A_E$, where $S$ refers to the system and $E$ to the environment. If the system starts in the pure superposition, the expectation values of $A$ in the pure state and in the statistical mixture initially differ, the difference being carried by the interference terms (the trace being taken over the whole system, including the environment). After evolving for a time $T$ and time-reversing the system back to the initial time, the difference between the expectation value of $A$ in the resulting state and in the statistical mixture is exponentially suppressed in $T/T_{\rm dec}$, where $T_{\rm dec}$ is the usual time of environmental decoherence of the system coupled to the measuring device, and the trace is now taken over $S$. This shows that using the global protocol to distinguish the state evolved with the master equation from the state resulting when an event occurs becomes increasingly harder when real clocks are used and time uncertainties are taken into account. The state evolved with the master equation becomes increasingly similar to the one in which an event occurs, and it can be interpreted as a classical mixture.
One could consider that, after all, even though for all practical purposes both states are practically identical, they are still fundamentally different. We now show that this is not the case when one takes into account that the uncertainties in time-interval and length measurements put limitations on how the states of a system may be prepared or measured. We illustrate this in the paradigmatic case of a particle in a coherent superposition over two spatial locations (for more details, see [49]). To do that, we take

$$|\psi\rangle=\frac{1}{\sqrt{2}}\left(|\psi_1\rangle+|\psi_2\rangle\right),$$

where $|\psi_1\rangle$ and $|\psi_2\rangle$ are Gaussian wavepackets localized at different points, separated by a distance $L$ that is large compared with their width, and with relative phases chosen appropriately. The state that would remain invariant if an event occurs after the measurement of the position is the statistical mixture,

$$\rho_{\rm mix}=\frac{1}{2}\left(|\psi_1\rangle\langle\psi_1|+|\psi_2\rangle\langle\psi_2|\right).$$

We can distinguish both states by computing the expectation value of the momentum observable $p$. In fact, while the interference terms give the superposition a non-vanishing $\langle p\rangle$, for the statistical mixture $\langle p\rangle=0$. Hence, if the initial state is chosen with the appropriate phases, $p$ discriminates whether an event occurred or not. However, the fundamental uncertainties in the measurement of length intervals forbid a perfect preparation of the wavepackets $|\psi_1\rangle$ and $|\psi_2\rangle$, since they imply errors in the separation and width of the Gaussians, which lead to uncertainties in the expectation value of $p$.
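A small numerical sketch (our own construction, with $\hbar=1$, width $s=1$, and a relative phase $i$ between the packets so that $\langle p\rangle\neq 0$ for the superposition) shows how sensitive the discriminating signal is to the preparation of the separation $L$:

```python
import numpy as np

# <p> for the superposition (psi_1 + i*psi_2)/sqrt(2) of two Gaussians of
# width s separated by L, and its sensitivity to an uncertainty dL in the
# separation.  For the statistical mixture <p> = 0 identically.
def mean_p(L, s=1.0, n=8192, span=40.0):
    x = np.linspace(-span / 2, span / 2, n)
    g1 = np.exp(-(x + L / 2) ** 2 / (4 * s ** 2))
    g2 = np.exp(-(x - L / 2) ** 2 / (4 * s ** 2))
    psi = g1 + 1j * g2
    psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))   # normalize
    dpsi = np.gradient(psi, x)                       # d psi / dx
    return np.real(np.trapz(psi.conj() * (-1j) * dpsi, x))

L, dL = 10.0, 0.1
p0, p1 = mean_p(L), mean_p(L + dL)
print(f"<p>(L)      = {p0:.3e}")
print(f"<p>(L + dL) = {p1:.3e}")
print(f"relative shift from a 1% error in L = {abs(p1 - p0) / abs(p0):.2f}")
# A 1% uncertainty in the separation already shifts <p> by ~20%, and the
# relative sensitivity grows like L/(4 s^2): for macroscopic separations
# any finite dL swamps the signal distinguishing the two alternatives.
```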
We have shown in [49] that a simple propagation of uncertainties gives the error induced on $\langle p\rangle$ by these preparation uncertainties, and that it rapidly exceeds the expectation value itself: the interference contribution to $\langle p\rangle$ is exponentially suppressed in the squared separation of the packets, while the induced error is not, so beyond a critical separation the error swamps the signal. This uncertainty in the determination of $\langle p\rangle$ has to be taken into account when analyzing the global protocol considered above. That is, once this condition is satisfied, the uncertainty in the measurement of the observable prevents one from verifying whether the system was initially in the coherent superposition $|\psi\rangle$ or in the statistical mixture $\rho_{\rm mix}$. Notice that this is a fundamental limitation and cannot be circumvented by making multiple measurements; it is related to the impossibility of preparing exactly the same initial state with infinite precision. One cannot decide whether the system at the end of the evolution was in a mixed state or in the state that results from the evolution with the master equation. Whereas, in the case of environmental decoherence, the final states were indistinguishable only for all practical purposes (FAPP), when the fundamental decoherence due to the bounds on the precision of the determination of time and space intervals is taken into account, the production of events in a system that initially was in a pure superposition is compatible with the evolution equation. This condition is fundamental and not FAPP. It determines when a system produces an event, and it does so in an objective way. In reference [49], we proved that events can occur for times larger than a critical time determined by the decoherence properties of the system and by the fundamental bounds discussed above.
It could be argued that protocols more efficient than the one requiring the system’s time reversal could exist and, therefore, that the analysis presented is insufficient to show that events can be produced without violating the causal evolution described by the master equation. Nevertheless, the studies conducted by Brown and Susskind [52] and Aaronson, Atia, and Susskind [50] regarding quantum complexity suggest that any protocol that allows for distinguishing the final states of a measurement process, when the environment is included, would be equally costly. In particular, it would be as difficult as implementing a procedure for bringing the dead cat back to life, which is what the time-reversal operation would accomplish, and it would therefore require operations with a similar degree of difficulty. On the other hand, the quantum complexity ideas mentioned before suggest that a rigorous demonstration of the indistinguishability between the final states after measurements and statistical mixtures may be feasible.
When the fundamental decoherence effects, due to the use of real quantum and relativistic clocks and to the quantum and gravitational limits on the measurement of time intervals, are taken into account, one can show that, during the measurement process, pure states evolve into states that are fundamentally indistinguishable from statistical mixtures of the various possible outcomes of the measured observable, without any correction or violation of the evolution described by the master equation.