Article

Approximating Time-Dependent Quantum Statistical Properties

Department of Physics and CNISM Unit 1, University of Rome "La Sapienza", P.le A. Moro 5, 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Entropy 2014, 16(1), 86-109; https://doi.org/10.3390/e16010086
Submission received: 11 November 2013 / Revised: 10 December 2013 / Accepted: 19 December 2013 / Published: 27 December 2013
(This article belongs to the Special Issue Molecular Dynamics Simulation)

Abstract

Computing quantum dynamics in condensed matter systems is an open challenge due to the exponential scaling of exact algorithms with the number of degrees of freedom. Current methods try to reduce the cost of the calculation using classical dynamics as the key ingredient of approximations of the quantum time evolution. Two main approaches exist, quantum classical and semi-classical, but they suffer from various difficulties, in particular when trying to go beyond the classical approximation. It may then be useful to reconsider the problem focusing on statistical time-dependent averages rather than directly on the dynamics. In this paper, we discuss a recently developed scheme for calculating symmetrized correlation functions. In this scheme, the full (complex time) evolution is broken into segments alternating thermal and real-time propagation, and the latter is reduced to classical dynamics via a linearization approximation. Increasing the number of segments systematically improves the result with respect to full classical dynamics, but at a cost which is still prohibitive. If only one segment is considered, a cumulant expansion can be used to obtain a computationally efficient algorithm, which has proven accurate for condensed phase systems in moderately quantum regimes. This scheme is summarized in the second part of the paper. We conclude by outlining how the cumulant expansion formally provides a way to improve convergence also for more than one segment. Future work will focus on testing the numerical performance of this extension and, more importantly, on investigating the limit of the approximate expression for the symmetrized correlation function as the number of segments goes to infinity, to assess formally its convergence to the exact result.

1. Introduction

Exact simulation methods to compute either the evolution of the wave function or dynamical statistical averages for quantum systems in the condensed phase are currently restricted to small sizes and short times. The exponential scaling of available algorithms with the number of degrees of freedom, in fact, limits calculations to ten–twenty particles (and this for Hamiltonians of relatively simple form) and to time scales of at most a few picoseconds. This situation is in striking contrast with analogous classical calculations, which, when empirical potentials are adopted, are nowadays routinely used to study high dimensional, complex systems for times reaching, on dedicated machines, microseconds. (Ab initio classical molecular dynamics is considerably more expensive, but, depending on the number of electrons that have to be included, even in this case moderately sized systems of up to a hundred particles can be integrated for hundreds of picoseconds.) Several approximate schemes have thus been proposed attempting to import, with appropriate modifications, methods from classical molecular dynamics to quantum dynamics. Two approaches, in particular, can be identified in which classical trajectories play a crucial role: semi-classical and mixed quantum classical.

In semi-classical schemes, originally developed for approximating wave function propagation, all degrees of freedom are treated on an equal footing. To begin with, the quantum time propagator is expressed, in the path integral formalism [1], as a sum over all possible paths connecting the initial and final states, each path being weighted by a complex exponential, whose argument is the classical action along it. The approximate propagator is then obtained by expanding the action to second order around its stationary points, which are classical trajectories, and performing the remaining quadratic path integral analytically [2,3]. Different forms exist for the semi-classical propagator depending on the specific representation adopted for the path integral (most notably standard coordinates, usually followed by the so-called initial value transformation [4,5], hybrid coordinate momenta [6] or coherent states [7–9]), but the evolved wave function always has the same structure, which we illustrate with the most commonly used Herman–Kluk expression [10–12]:

$$ |\Psi(t)\rangle_{\rm sc} = \int dp\, dq\; |q(t),p(t)\rangle\, W(t)\, e^{\frac{i}{\hbar}S_{\rm cl}(t)}\, \langle p,q|\Psi(0)\rangle \qquad (1) $$

In the expression above, |q, p⟩ indicates a coherent state (in the coordinate representation, $\langle r|q,p\rangle \propto e^{-\gamma(q-r)^2 + ip(q-r)/\hbar}$), $S_{\rm cl}(t)$ is the action computed along a classical trajectory propagated to time t from initial conditions (q, p), (q(t), p(t)) is the endpoint of the trajectory and W(t), a known function, is the result of the integration over the quadratic fluctuations around the stationary paths. All the functions of time in the integrand are calculable using classical evolution algorithms and, once the ket has been saturated (for example, via a scalar product), the integral over the initial conditions can be estimated via Monte Carlo sampling of a probability density based on the absolute value of the wave function at t = 0. Although calculations based on this scheme have been performed, the semi-classical wave function is still remarkably expensive, due mainly to the characteristics of W(t). This function is in fact related to a linear combination of the monodromy matrices of the system (i.e., the matrices of the derivatives of the endpoints of the trajectory with respect to the initial conditions), and, for chaotic dynamics, it can assume values varying over several orders of magnitude, hindering the convergence of the Monte Carlo sampling. Furthermore, the actual evaluation of the wave function requires one to project $|\Psi(t)\rangle_{\rm sc}$ on a basis. While this is, in principle, straightforward (for example, one could choose the continuous coordinate representation and then discretize it on a grid), in practice it reinstates the exponential scaling of the numerical effort with the number of degrees of freedom. This last problem can be avoided focusing on expected values (observables):

$$ {}_{\rm sc}\langle\Psi(t)|\hat A|\Psi(t)\rangle_{\rm sc} = \int d\tilde p\, d\tilde q\, dp\, dq\; \langle\Psi(0)|\tilde q,\tilde p\rangle\, \tilde W^*(t)\, e^{-\frac{i}{\hbar}\tilde S_{\rm cl}(t)}\, \langle \tilde q(t),\tilde p(t)|\hat A|q(t),p(t)\rangle\, W(t)\, e^{\frac{i}{\hbar}S_{\rm cl}(t)}\, \langle p,q|\Psi(0)\rangle \qquad (2) $$
(Â is a Hermitian operator) at the price of doubling the dimension of the Monte Carlo sampling. The expression above, however, requires averaging the product of the two unstable W functions (for common operators, the matrix element in the integrand is known analytically, thus posing no problem), and although some schemes exist to mitigate the problem [13,14], this approach is of limited practical use. Shifting the focus from the wave function to the observables proves considerably more effective when moving to the Heisenberg representation and taking advantage of the presence of two propagators in their exact quantum expression to develop alternative approximations. This strategy is most commonly adopted when calculating time-dependent statistical properties, more specifically, time correlation functions of operators Â and B̂, usually defined as:
$$ C_{A,B}(t;\beta) = \frac{1}{Z}\,{\rm Tr}\big\{ e^{-\beta\hat H}\hat A\, e^{\frac{i}{\hbar}\hat H t}\hat B\, e^{-\frac{i}{\hbar}\hat H t}\big\} = \frac{1}{Z}\int dr\, dr_N\, d\tilde r\, d\tilde r_N\; \langle r|e^{-\beta\hat H}\hat A|\tilde r\rangle\, \langle \tilde r|e^{\frac{i}{\hbar}\hat H t}|\tilde r_N\rangle\, \langle \tilde r_N|\hat B|r_N\rangle\, \langle r_N|e^{-\frac{i}{\hbar}\hat H t}|r\rangle \qquad (3) $$
where Ĥ is the Hamiltonian, Z the partition function, β = 1/k_BT with k_B Boltzmann's constant and T the temperature. (Throughout the paper, we consider distinguishable particles.) In the second line, the trace was expressed, in the coordinate representation, as the product of four matrix elements; from left to right, that of the product of the quantum Boltzmann density times operator Â, the propagator backward in time, the matrix element of operator B̂ and the propagator forward in time. To pave the way for the so-called linearized approximation of the correlation function [13,15–18], the two propagators are expressed as path integrals in the hybrid coordinate-momenta representation, in which the resolutions of the identity in the momentum basis (inserted, as usual, to evaluate the exponential of the kinetic energy contribution in the Trotter break-up of the propagators) are not resolved analytically. The advantage of this representation is a closer analogy to the phase space representation of classical mechanics. The quantum expression of the correlation function thus obtained is then manipulated via a change of variables: the forward and backward paths are changed to semi-sum and difference paths (see the next section for a more precise definition of these paths), and the key approximation of the approach is introduced. A Taylor series expansion of the action to quadratic order in the difference path is performed, allowing all integrals on the difference variables, except the ones over the initial and endpoint in coordinate space, to be performed analytically. The integration results in a product of delta functions constraining the semi-sum path to be a classical (Hamiltonian) trajectory. The remaining integrals over the difference coordinates define Wigner transforms [19] of the operators to give:
$$ C^{l}_{A,B}(t;\beta) = \frac{1}{(2\pi\hbar)\,Z}\int dr\, dp\; \big[e^{-\beta\hat H}\hat A\big]_w(r,p)\; B_w\big(r(t),p(t)\big) \qquad (4) $$
where, for example, $B_w(r(t),p(t)) = \int d\xi\, e^{\frac{i}{\hbar}p(t)\xi}\, \langle r(t)+\xi/2|\hat B|r(t)-\xi/2\rangle$. The superscript, l, indicates the linearization approximation. Compared to the semi-classical expression for the wave function, Equation (4) has the remarkable advantage of not containing unstable factors in the integrand, even though the approximation of the overall dynamics is accurate to the same order in ħ. Indeed, the second order terms in the action expansion, which originated the W(t) in Equation (1), cancel exactly when the difference of the action along the forward and backward paths is considered. The absence of the W's suggests computing the approximate time correlation function by combining Monte Carlo sampling of initial conditions and molecular dynamics. The serious difficulty with this idea lies in the sampling of the initial conditions. The probability density is, in fact, usually defined from the absolute value of $[e^{-\beta\hat H}\hat A]_w(r,p)/(2\pi\hbar)Z$, but computing this Wigner transform is far from trivial, and the available methods introduce further, uncontrolled, approximations. In addition to this practical difficulty, there is also a conceptual problem with linearized calculations: the classical evolution conserves the quantum probability density only for short times. The rapid decay time of correlations for standard condensed phase systems is usually invoked to mitigate the consequences of this pathology, but it is known that in some cases, e.g., low dimensional systems with a long time coherence, it may lead to unphysical results (this is the so-called zero-point energy leakage problem).
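To make the structure of Equation (4) concrete, the following minimal sketch (not taken from the paper) estimates a one-dimensional position autocorrelation function in the "classical Wigner" spirit: sample initial conditions from a thermal Wigner density, propagate them classically, and average. Two additional, commonly used but uncontrolled, simplifications are assumed for illustration: the thermal Wigner density is replaced by that of a harmonic reference (for which it is a Gaussian), and the factorization $[e^{-\beta\hat H}\hat A]_w \approx \rho_w A_w$ is used; the potential and parameters are arbitrary choices.

```python
import numpy as np

# Classical Wigner sketch of Eq. (4) for C_qq(t) in 1D (illustrative only).
rng = np.random.default_rng(0)
hbar, m, beta, omega = 1.0, 1.0, 8.0, 1.0

def dV(q):
    return m * omega**2 * q + 0.1 * q**3        # mildly anharmonic force (assumed model)

def wigner_thermal_sample(n):
    # Gaussian Wigner density of the harmonic reference of frequency omega
    a = np.tanh(0.5 * beta * hbar * omega)
    sig_q = np.sqrt(hbar / (2.0 * m * omega * a))
    sig_p = np.sqrt(m * hbar * omega / (2.0 * a))
    return rng.normal(0.0, sig_q, n), rng.normal(0.0, sig_p, n)

def c_qq(t, ntraj=20000, dt=0.01):
    q, p = wigner_thermal_sample(ntraj)
    q0 = q.copy()
    for _ in range(int(round(t / dt))):          # velocity Verlet propagation
        p -= 0.5 * dt * dV(q)
        q += dt * p / m
        p -= 0.5 * dt * dV(q)
    return np.mean(q0 * q)                       # <q(0) q(t)> in this approximation

print([round(c_qq(t), 4) for t in (0.0, 0.5, 1.0)])
```

The classical trajectories are the only dynamical ingredient; all quantum information enters through the (approximate) sampling of the initial conditions, which is precisely where the difficulties discussed above arise.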

The problems and numerical cost of semi-classical calculations justify the development of the second, alternative approximation scheme mentioned at the beginning of this section: mixed quantum classical dynamics. In this approach, the degrees of freedom of the system are partitioned into two sets, usually based on their mass ratio. The first set (called the subsystem) is composed of a few degrees of freedom and is treated quantum mechanically; the second (called the environment or the bath) is often high dimensional and is treated classically. Existing quantum classical methods differ in the way in which the coupling between the classical evolution of the bath and the quantum propagation of the subsystem is taken into account. The first approach of this kind, still very popular due to its efficiency and ease of implementation, is Tully's surface hopping [20]. In this scheme, electrons and nuclei constitute the subsystem and the bath, respectively, and the coupling, designed to mimic dynamics beyond the Born–Oppenheimer approximation, is defined ad hoc based on heuristic arguments. In more recent, and more rigorous, developments, the coupling is derived starting from a fully quantum representation of the evolution equations for the system and then taking a partial classical limit on the bath's degrees of freedom. Examples of this type are schemes to propagate the full density matrix of the quantum subsystem, such as the Wigner–Liouville mixed quantum dynamics [21,22], or the iterative linearized density propagation methods [23,24]. Both surface hopping and Wigner–Liouville dynamics (with particular reference to its most recent developments aimed at computing correlation functions) are discussed (and/or criticized) in other contributions to this issue. Focusing on the latter, we quickly recall that it adopts a mixed representation in which the operators related to the bath's degrees of freedom are described using the Wigner representation, while for the subsystem an abstract operator representation is retained. The quantum evolution operator in this mixed representation is then expanded to first order in the ratio of the thermal de Broglie wavelength of the bath to that of the subsystem to obtain the generator of the mixed quantum classical dynamics. This generator has the form of a generalized Lie bracket in which both a commutator (linked to the operators for the subsystem) and a Poisson bracket (acting on the bath's phase space) appear. Once a specific basis set is chosen for the subsystem (e.g., adiabatic electronic states [21,25] or, more recently, the so-called mapping representation [26,27]), the evolution equations for the density matrix, or any observable, become explicit, and several different algorithms, sharing the characteristic that the bath motion is obtained via classical evolution (possibly with generalized forces describing the influence of more than one electronic state), have been proposed to solve them. In spite of its merits, it has been shown that this mixed quantum classical dynamics lacks several properties that characterize fully quantum and classical dynamics [28]. In particular, the mixed Lie bracket does not satisfy the Jacobi identity exactly, and, similar to linearized calculations, the quantum thermal density is not stationary under the mixed dynamics. The loss of formal properties with respect to classical and quantum mechanics arises, in different forms, in all current mixed quantum classical schemes (see also [29]).

While application driven calculations might not be paralyzed by the state of affairs described above, in particular if and when it is possible to verify that these well-known pathologies have no uncontrolled effects on the results, it is important to pursue alternative approaches in an effort to derive more general schemes allowing for systematic improvement and/or assessment of the approximations employed. Indeed, a critical stumbling block common to semi-classical and mixed quantum classical methods is that it is essentially impossible to go beyond classical trajectories to approximate the quantum evolution of the full system (semi-classical) or of the bath (mixed). In the semi-classical case, including higher order terms in the expansion of the action along the paths makes it impossible to obtain calculable expressions for the pre-factor in the expression of the wave function (already at third order, the integral corresponds to intractable Airy functions [2]), while in the linearized correlation function, it kills the emergence of the delta functions that univocally determine the semi-sum path. In mixed quantum classical calculations (we refer to the Wigner–Liouville formalism, but analogous problems appear, for example, in the iterative linearized propagation methods), including higher order terms in the mass ratio expansion of the propagator introduces terms in the phase space evolution of the bath that cannot be integrated numerically. In this paper, we summarize (in the spirit of an extended review) a recently developed method [30] attempting to overcome this problem. In this approach, the focus is not directly on the dynamics, but, rather, on statistical time-dependent averages, which are linked (via linear response theory) to experimental observables. In particular, we focus on time correlation functions expressed in the symmetrized form first introduced by Schofield [31]:

$$ G_{A,B}(t;\beta) = \frac{1}{Z}\,{\rm Tr}\big\{\hat A\, e^{\frac{i}{\hbar}\hat H t_c^*}\, \hat B\, e^{-\frac{i}{\hbar}\hat H t_c}\big\} \qquad (5) $$
where $t_c = t - i\hbar\beta/2$. The time Fourier transform of this complex time correlation function is related to the time Fourier transform of Equation (3) by a known multiplicative factor, so both carry equivalent information. Furthermore, the symmetrized function shares some properties with its classical counterpart (e.g., it is a real function of time), which makes it a convenient starting point for developing approximations [32–37]. In the following, we summarize how the path integral formalism can be used to express the full complex time evolution in Equation (5) as a concatenation of segments alternating imaginary (i.e., thermal sampling) and real-time propagation. The real-time propagation is then reduced to classical evolution via a linearization approximation. In our approach, the number of segments, L, plays a role analogous to that of the number of beads in standard thermal path integral calculations. Although the precise nature of the limit for L → ∞ is still under investigation, this analogy and numerical calculations on relatively simple model systems indicate that increasing the number of segments systematically improves the results with respect to classical dynamics or to the previously mentioned linearization schemes. It may be worth stressing that, in this approach, the focus is on computing the correlation function by defining an appropriate stochastic process inspired by the full quantum expression. Adopting this perspective, the dynamics does not have any meaning per se and is viewed simply as part of a sampling mechanism, which is implemented via a generalized Monte Carlo scheme. While this circumvents some of the inconsistencies of standard semi-classical and mixed quantum classical schemes and justifies further investigation of the method, it remains to be seen whether the approach outlined in the following has practical value. In fact, due to the presence of an increasing number of phase factors in the Monte Carlo estimator of the correlation function, the numerical cost of the calculation scales very badly with the number of segments (and of degrees of freedom). In the second part of the paper, we then summarize (again reviewing published material) how, when only one segment is considered, it is possible to improve the situation via a cumulant expansion that tames the phase factor present already in this lowest order approximation of the result [38]. We then present a new formal development of our approach that generalizes the use of cumulants to the case of more than one propagation segment, and we give the explicit formal expression for the case L = 2. Future work will focus on testing the accuracy of this new result. We conclude by stating some of the open questions related to the approach and indicating possible further developments.

2. Theory

Let us begin by expressing the symmetrized correlation function, Equation (5), in the coordinate representation. Inserting resolutions of the identity, we have:

$$ G_{A,B}(t;\beta) = \frac{1}{Z}\int dr_0\, d\tilde r_0\, dr_{t_c}\, d\tilde r_{t_c}\; \langle r_0|\hat A|\tilde r_0\rangle\, \langle \tilde r_0|e^{\frac{i}{\hbar}\hat H t_c^*}|\tilde r_{t_c}\rangle\, \langle \tilde r_{t_c}|\hat B|r_{t_c}\rangle\, \langle r_{t_c}|e^{-\frac{i}{\hbar}\hat H t_c}|r_0\rangle \qquad (6) $$

The structure of the integrand is represented in Figure 1, in which we show the sequence of matrix elements to be evaluated. Reading the figure from the bottom left corner up, we see the matrix element of operator Â, the backward complex time propagator (from 0 to $t_c$), the matrix element of operator B̂ and, finally, the forward propagator that closes the circuit representing the trace operation. The difficult task is the evaluation of the propagators in complex time. To set the stage for the approximation we intend to perform, we use the time composition property to divide the two propagations into L segments of duration $\tau_c = t_c/L$ ($\tau_c$ need not be infinitesimal) and rewrite, for example:

$$ \langle r_{t_c}|e^{-\frac{i}{\hbar}\hat H t_c}|r_0\rangle = \int dr_1\cdots dr_{L-1}\; \prod_{J=0}^{L-1} \langle r_{J+1}|e^{-\frac{i}{\hbar}\hat H \tau_c}|r_J\rangle \qquad (7) $$
with $r_L = r_{t_c}$. Introducing an analogous expression for the backward propagator changes the scheme of the integrand as sketched in Figure 2, where each propagation lag (from J to J + 1) is indicated by the segment with arrows. We can now pair corresponding segments of propagation along the forward and backward paths, as indicated by the red frame in the figure, and define the product $K(r_{J+1}, r_J; \tilde r_{J+1}, \tilde r_J) = \langle \tilde r_{J+1}|e^{\frac{i}{\hbar}\hat H \tau_c^*}|\tilde r_J\rangle\, \langle r_{J+1}|e^{-\frac{i}{\hbar}\hat H \tau_c}|r_J\rangle$ to rewrite the symmetrized correlation function as:
$$ G_{A,B}(t;\beta) = \frac{1}{Z}\int d\tilde r_L\, dr_L\; \langle \tilde r_L|\hat B|r_L\rangle \left\{ \prod_{J=1}^{L-1}\int d\tilde r_J\, dr_J\; K(r_{J+1}, r_J; \tilde r_{J+1}, \tilde r_J) \right\} \times \int dr_0\, d\tilde r_0\; K(r_1, r_0; \tilde r_1, \tilde r_0)\, \langle r_0|\hat A|\tilde r_0\rangle \qquad (8) $$

The expression above is an exact, incalculable, expression of the time correlation function. In the following, we will work on the generic K to obtain an approximate expression for it that has the key advantage of being analytically known as a product of functions calculable via an appropriate combination of Monte Carlo and molecular dynamics. To obtain this result, we first separate the real from the imaginary time part of the propagation by inserting one more resolution of the coordinate identity, thus:

$$ K(r_{J+1}, r_J; \tilde r_{J+1}, \tilde r_J) = \langle \tilde r_{J+1}|e^{\frac{i}{\hbar}\hat H \tau_c^*}|\tilde r_J\rangle\, \langle r_{J+1}|e^{-\frac{i}{\hbar}\hat H \tau_c}|r_J\rangle = \int d\tilde r_J^{\nu}\, dr_J^{\nu}\; \langle \tilde r_{J+1}|e^{\frac{i}{\hbar}\hat H \tau_t}|\tilde r_J^{\nu}\rangle\, \langle \tilde r_J^{\nu}|e^{-\tau_\beta \hat H}|\tilde r_J\rangle\; \langle r_{J+1}|e^{-\frac{i}{\hbar}\hat H \tau_t}|r_J^{\nu}\rangle\, \langle r_J^{\nu}|e^{-\tau_\beta \hat H}|r_J\rangle \qquad (9) $$
The two (thermal) propagators in the integrand above, associated with the inverse temperature, τβ, can be expressed relatively easily in the path integral formalism as positive definite functions and interpreted, via the so-called classical isomorphism, as probability densities associated with systems of polymers. Well established techniques allow one to sample these densities. The real-time propagators, on the other hand, are prohibitive, even in path integral form. Feynman’s prescription requires, in fact, one to generate “all possible paths” connecting the initial and final point of the propagation, but, in contrast to the thermal case in which the probability density provides us with a sampling mechanism for the paths, no rule is given to determine them. Furthermore, even if we had a recipe for generating the paths, we would have to sum a (potentially infinite) set of phase factors, the exponential weighting each path. Capturing accurately the interference among these factors is essentially impossible (this is the well-known dynamical sign problem). As shown in detail in [30], progress can be made by deriving an approximate form for the product of the two real-time propagators in Equation (9). In analogy with the standard linearization methods mentioned in the Introduction, this is done most conveniently using a hybrid coordinate momenta representation of the propagators. The propagation time is divided into n intervals of length δt = τt/n, and appropriate resolutions of the identity are introduced to isolate matrix elements of the propagator for each short time interval. As usual, after a Trotter break up of the exponential of the Hamiltonian is performed, the (diagonal) exponential of the potential can be trivially evaluated. The matrix element of the kinetic energy part of the propagator, on the other hand, is easily evaluated by inserting a resolution of the identity in the momenta. Contrary to what is done in standard path integrals, however, the resulting generalized Gaussian integral in the momenta is not performed analytically, but left in the expression. This sequence of operations results in:
$$ \langle \tilde r_{J+1}|e^{\frac{i}{\hbar}\hat H \tau_t}|\tilde r_J^{\nu}\rangle\, \langle r_{J+1}|e^{-\frac{i}{\hbar}\hat H \tau_t}|r_J^{\nu}\rangle \simeq \int \prod_{l=1}^{n}\frac{d\tilde p_J^{l}}{2\pi\hbar} \prod_{l=1}^{n-1} d\tilde r_J^{\nu+l}\; e^{-\frac{i}{\hbar}\tilde S(\{\tilde r,\tilde p\})} \int \prod_{l=1}^{n}\frac{dp_J^{l}}{2\pi\hbar} \prod_{l=1}^{n-1} dr_J^{\nu+l}\; e^{\frac{i}{\hbar} S(\{r,p\})} \qquad (10) $$
where {r, p} indicates the full set of path variables and $S(\{r,p\}) = \sum_{l=1}^{n}\big\{ p_J^{l}\,(r_J^{\nu+l} - r_J^{\nu+l-1}) - \delta t\,\big[ (p_J^{l})^2/2m + V(r_J^{\nu+l-1}) \big]\big\}$, with analogous definitions for the tilde variables. The expression above becomes exact for n → ∞. At this stage, the forward and backward path integrals above are independent. Proceeding in analogy with standard linearization methods, we combine them by introducing the semi-sum and difference variables:
$$ \bar r_J^{\nu+l} = \frac{r_J^{\nu+l} + \tilde r_J^{\nu+l}}{2}, \qquad \bar p_J^{l} = \frac{p_J^{l} + \tilde p_J^{l}}{2}, \qquad \Delta r_J^{\nu+l} = r_J^{\nu+l} - \tilde r_J^{\nu+l}, \qquad \Delta p_J^{l} = p_J^{l} - \tilde p_J^{l} \qquad (11) $$
with l = 0, …, n. In these variables, the difference of actions in Equation (10) is a linear function of the $\Delta p_J^{l}$. Integrals over the difference momenta can then be performed analytically and result in a set of delta functions (this originates the last set of deltas in Equation (12) below). The dependence of the difference of the actions on the $\Delta r_J^{\nu+l}$ is more complicated: they appear in an explicit, linear term, but also in the argument of the potentials. This dependence can also be linearized via the expansion $V(\bar r_J^{\nu+l} + \Delta r_J^{\nu+l}/2) - V(\bar r_J^{\nu+l} - \Delta r_J^{\nu+l}/2) = \nabla V(\bar r_J^{\nu+l})\,\Delta r_J^{\nu+l} + o\big[(\Delta r_J^{\nu+l})^3\big]$. This is the key approximation that we perform. An appropriate rescaling of the variables shows that the approximation is equivalent to a second order expansion in ħ of the phase, but a more precise analysis of its validity is required and under consideration (see, also, the discussion at the end of this section). Bearing this in mind, we observe that, once the expansion is performed, also the integrals on the $\Delta r_J^{\nu+l}$ variables can be analytically solved, producing a second set of delta functions. Thus:
$$ \langle \bar r_J^{\nu} - \tfrac{\Delta r_J^{\nu}}{2}|e^{\frac{i}{\hbar}\hat H \tau_t}|\bar r_{J+1} - \tfrac{\Delta r_{J+1}}{2}\rangle\, \langle \bar r_{J+1} + \tfrac{\Delta r_{J+1}}{2}|e^{-\frac{i}{\hbar}\hat H \tau_t}|\bar r_J^{\nu} + \tfrac{\Delta r_J^{\nu}}{2}\rangle \simeq \int d\bar r_J^{\nu+1}\cdots d\bar r_J^{\nu+n-1}\, d\bar p_J^{1}\cdots d\bar p_J^{n}\; e^{\frac{i}{\hbar}\bar p_J^{n}\Delta r_{J+1}}\, e^{-\frac{i}{\hbar}\bar p_J^{1}\Delta r_J^{\nu}} \times \prod_{l=1}^{n-1}\delta\big[(\bar p_J^{l+1}-\bar p_J^{l}) + \delta t\,\nabla V(\bar r_J^{\nu+l})\big] \prod_{l=1}^{n}\delta\Big[\bar p_J^{l} - \frac{m}{\delta t}(\bar r_J^{\nu+l}-\bar r_J^{\nu+l-1})\Big] \qquad (12) $$

The linearization approximation then has two crucial consequences: (1) by allowing the integration over the difference paths, it transforms the quantum expression of the correlation function, which, in the beginning, includes two propagators and, therefore, two paths, into a formula where only the semi-sum path appears, thus leading to a structure more similar to classical time correlations in which only one propagation is present; (2) (perhaps more importantly) it forces the semi-sum path to follow a classical, Hamiltonian trajectory, as identified by the arguments of the delta functions.
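To illustrate point (2), the recursion implied by the delta functions in Equation (12) can be written out explicitly. The one-dimensional sketch below (illustrative only, with an arbitrary force routine) shows that the constraints generate nothing but a discrete, Verlet-like classical trajectory for the semi-sum variables.

```python
# Illustrative sketch: the two families of delta functions in Eq. (12) fix the
# semi-sum path recursively.  Given the initial bead r_bar^nu and the initial
# momentum p_bar^1, the constraints
#   p_bar^l     = m (r_bar^{nu+l} - r_bar^{nu+l-1}) / dt
#   p_bar^{l+1} = p_bar^l - dt * dV(r_bar^{nu+l})
# produce a discrete classical trajectory (a symplectic-Euler/Verlet scheme).
def linearized_trajectory(r0, p1, n, dt, dV, m=1.0):
    r, p = [r0], [p1]
    for l in range(1, n + 1):
        r.append(r[-1] + dt * p[-1] / m)       # position delta: coordinate update
        if l < n:
            p.append(p[-1] - dt * dV(r[-1]))   # momentum delta: force update
    return r, p                                 # r[l] = r_bar^{nu+l}, p[l-1] = p_bar^l

r, p = linearized_trajectory(r0=0.5, p1=0.0, n=100, dt=0.01, dV=lambda x: x)
```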

The final step to obtain a suitable expression for Equation (9) does not introduce any further approximation. Let us consider again the product of the thermal propagators in the equation. As mentioned above, these can be written via standard coordinate path integrals. Once this is done, it is convenient to introduce also for these propagators semi-sum and difference paths (this is important, in particular, to ensure that the common boundaries of the thermal and real-time propagations, $r_J^{\nu}$ and $\tilde r_J^{\nu}$, are represented coherently). In the semi-sum and difference variables, the product of the thermal propagators takes the form:

$$ \langle \bar r_J - \tfrac{\Delta r_J}{2}|e^{-\tau_\beta\hat H}|\bar r_J^{\nu} - \tfrac{\Delta r_J^{\nu}}{2}\rangle\, \langle \bar r_J^{\nu} + \tfrac{\Delta r_J^{\nu}}{2}|e^{-\tau_\beta\hat H}|\bar r_J + \tfrac{\Delta r_J}{2}\rangle \simeq \Big[\frac{m}{2\pi\hbar^2\delta_\beta}\Big]^{(\nu-1)} \int d\bar r_J^{1}\cdots d\bar r_J^{\nu-1}\, d\Delta r_J^{1}\cdots d\Delta r_J^{\nu-1}\; e^{-\delta_\beta\sum_{\lambda=1}^{\nu}\big[V(\bar r_J^{\lambda-1}+\Delta r_J^{\lambda-1}/2) + V(\bar r_J^{\lambda-1}-\Delta r_J^{\lambda-1}/2)\big]} \times e^{-\frac{\sigma_p^2}{2\hbar^2}\sum_{\lambda=1}^{\nu}(\Delta r_J^{\lambda} - \Delta r_J^{\lambda-1})^2}\; e^{-\frac{1}{2\sigma_r^2}\sum_{\lambda=1}^{\nu}(\bar r_J^{\lambda} - \bar r_J^{\lambda-1})^2} \qquad (13) $$
with $\sigma_p^2 = m/(2\delta_\beta)$ and $\sigma_r^2 = \hbar^2\delta_\beta/(2m)$. Substituting Equations (12) and (13) in Equation (9), it can be noted that the integral over $\Delta r_J^{\nu}$ is of a Gaussian form and can be performed analytically. Introducing the notation $\mathcal{X}_J = \big(\{\bar r_J^{\lambda}\}_{\lambda=0,\ldots,\nu}, \{\Delta r_J^{\lambda}\}_{\lambda=0,\ldots,\nu-1}, \{\bar r_J^{\nu+l}\}_{l=1,\ldots,n-1}, \{\bar p_J^{l}\}_{l=1,\ldots,n}\big)$, after integration over $\Delta r_J^{\nu}$, the linearized short time propagator can be written as:
$$ K^{l}(\bar r_{J+1}, \Delta r_{J+1}; \bar r_J, \Delta r_J) = \int d\mathcal{X}_J\; e^{\frac{i}{\hbar}\bar p_J^{n}\Delta r_{J+1}}\; \rho(\mathcal{X}_J, \bar r_{J+1})\; e^{-\frac{i}{\hbar}\bar p_J^{1}\Delta r_J^{(\nu-1)}} \qquad (14) $$
with:
$$ \rho(\mathcal{X}_J, \bar r_{J+1}) = \prod_{l=1}^{n-1}\delta\big[(\bar p_J^{l+1}-\bar p_J^{l}) + \delta t\,\nabla V(\bar r_J^{\nu+l})\big] \prod_{l=1}^{n}\delta\Big[\bar p_J^{l} - \frac{m}{\delta t}(\bar r_J^{\nu+l}-\bar r_J^{\nu+l-1})\Big] \times e^{-\delta_\beta\sum_{\lambda=1}^{\nu}\big[V(\bar r_J^{\lambda-1}+\Delta r_J^{\lambda-1}/2)+V(\bar r_J^{\lambda-1}-\Delta r_J^{\lambda-1}/2)\big]} \times e^{-\frac{(\bar p_J^{1})^2}{2\sigma_p^2}}\; e^{-\frac{\sigma_p^2}{2\hbar^2}\sum_{\lambda=1}^{\nu-1}(\Delta r_J^{\lambda}-\Delta r_J^{\lambda-1})^2}\; e^{-\frac{1}{2\sigma_r^2}\sum_{\lambda=1}^{\nu}(\bar r_J^{\lambda}-\bar r_J^{\lambda-1})^2} \qquad (15) $$
(The Gaussian in $\bar p_J^{1}$, the first factor in the last line of the expression above, is the result of the integration over $\Delta r_J^{\nu}$.) Substituting the approximate form of the propagator between complex time slices J and J + 1, we obtain for the symmetrized correlation function:
$$ G^{(L)}_{A,B}(t;\beta) = \frac{1}{Z}\int d\Delta r_{t_c}\, d\bar r_{t_c}\; \langle \bar r_{t_c}+\tfrac{\Delta r_{t_c}}{2}|\hat B|\bar r_{t_c}-\tfrac{\Delta r_{t_c}}{2}\rangle \times \prod_{J=1}^{L-1}\int d\mathcal{X}_J\; e^{\frac{i}{\hbar}\bar p_J^{n}\Delta r_{J+1}}\,\rho(\mathcal{X}_J,\bar r_{J+1})\, e^{-\frac{i}{\hbar}\bar p_J^{1}\Delta r_J^{(\nu-1)}} \times \int d\mathcal{X}_0\; e^{\frac{i}{\hbar}\bar p_0^{n}\Delta r_{1}}\,\rho(\mathcal{X}_0,\bar r_{1})\, e^{-\frac{i}{\hbar}\bar p_0^{1}\Delta r_0^{(\nu-1)}}\; \langle \bar r_0+\tfrac{\Delta r_0}{2}|\hat A|\bar r_0-\tfrac{\Delta r_0}{2}\rangle \qquad (16) $$

The expression above is interesting. First of all, assuming that the linearization approximation of each short time propagator improves when the propagation time goes to zero, there is potential for systematic improvement with increasing L, and indeed, numerical tests [30] indicate that this is the case. However, the limit for large L, and, in particular, the validity of the expansion in the difference path at the intermediate times of the propagation, is delicate. In fact, while it can be argued that the matrix elements of operators Â and B̂ (usually diagonal) force the forward and backward paths (the plain and tilde variables in the upper panel of Figure 1) to start and end close to one another, and, therefore, that only small values of the difference between the paths will be relevant close to the initial and final time, truncating the expansion of the difference of the potentials along the whole pair of paths is considerably more delicate. This issue, and the nature of the dynamics when L → ∞, are currently under investigation. In the meantime, note that the ρ functions are positive definite, so that they can be used to define a probability density for sampling the overall path variables (i.e., the full set of $\{\mathcal{X}_J\}_{J=0,\ldots,L-1}$ variables) as $\Pi = \frac{1}{\Omega}\prod_{J=0}^{L-1}\rho(\mathcal{X}_J, \bar r_{J+1})$, where Ω is the (unknown) normalization factor. The method to deal with this factor is illustrated in the next subsection for the case L = 1 and can be straightforwardly generalized to L > 1. The probability density, Π, corresponds to a stochastic process, which concatenates the thermal and time propagations within each short time propagator, $K^{l}$. The structure of the propagations in real and imaginary time is determined by the definition of ρ in Equation (15) and can be described as follows. For L = 1, there is only one real-time leg of duration $\tau_t = t$, while the imaginary time propagation corresponds to an inverse temperature β/2 for both the semi-sum and difference variables. The upper panel of Figure 3 illustrates these propagations with a sketch. In the figure, the horizontal axis is time and the vertical axis temperature. The vertical plane represents the space of configurations associated with the thermal path integral for both the semi-sum and difference variables; the thermal beads are represented with the red circles. The harmonic interactions in the thermal paths are indicated with zigzagged lines connecting adjacent beads, while the interactions between the two paths due to the potential are drawn as dashed lines. Note that the difference variables path, on the left in the vertical plane in the figure, has one less bead than the semi-sum variables path, due to the integration carried out to isolate a Gaussian probability density for the initial momenta. The propagation in real time is drawn as the curve on the horizontal plane, which represents the phase space of the system. The red and golden circle at t = 0 represents the initial conditions for the time evolution: the initial coordinate coincides with the last bead of the thermal path in the semi-sum variables, while the initial momentum is sampled from the Gaussian mentioned before. A phase factor is associated with the initial point of the classical propagation. The exponent of this phase couples the initial momentum of the trajectory with the last bead of the thermal path in the difference variables.
A phase factor is also associated with the final point of the classical propagation, where, for L = 1, the exponent couples the momentum at time t with the variable $\Delta r_1$. The integrals over $\Delta r_0^{(\nu-1)}$ and $\Delta r_1$ in the expression for $G^{(1)}_{A,B}(t;\beta)$ involve products of these phase factors with the matrix elements of operators Â and B̂. The end-point integral reconstructs the Wigner transform of the operator B̂. To see this, consider Equation (16). For L = 1, the second line of the equation is absent, and boundary conditions impose $\Delta r_1 = \Delta r_{t_c}$ (with similar relationships for the sum variables). With this notation, the integral over $\Delta r_{t_c}$ is recognizable as the Wigner transform of operator B̂. The structure of the sequence of imaginary and real-time propagation for generic values of L can be inferred from the lower part of Figure 3, where we show what happens for L = 2. In this case, there are two segments of classical dynamics, each of duration t/2, and two propagations of semi-sum and difference variables in imaginary time, taking the system from zero inverse temperature to β/4 and from β/4 to β/2, respectively. As before, the first segment of dynamics starts, with a Gaussian initial momentum, from the last bead of the semi-sum variable thermal path at t = 0. The end-point of this leg of propagation is the initial configuration for the semi-sum variable thermal path at t/2, and the second segment of dynamics has as initial conditions the final coordinate of the semi-sum variable thermal path and a new momentum sampled from a Gaussian. The variances of the Gaussians associated with the momentum sampling are doubled with respect to the case L = 1. The integrand now contains four phase factors coupling the momenta at the beginning and end of each classical dynamics segment with the values of the difference path variables at the end and at the beginning of each thermal slice, respectively. The phase factor that depends on $\bar p_1^{n}\Delta r_2$ (i.e., the phase factor computed at time t) can again be combined with the matrix element of operator B̂ to obtain the Wigner transform of this operator at the final time of the propagation, so that only three phases remain to contribute to the result. In general, $G^{(L)}_{A,B}(t;\beta)$ involves L segments of classical propagation, each of duration t/L, interspersed with L pairs of thermal paths in the semi-sum and difference variables, each at an inverse temperature β/(2L). The rules for connecting the coordinates and momenta at the initial and final time of the dynamics with the final and initial points of the thermal paths and for constructing the 2L − 1 phase factors contributing to the integrand (the phase factor at time t can always be absorbed in the Wigner transform of operator B̂) are completely analogous to the L = 2 case.
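The following schematic sketch (purely illustrative, not the algorithm of [30]) traces the flow of one realization of this stochastic process for generic L: a thermal block of ν beads at inverse temperature β/(2L), a Gaussian momentum with variance $\sigma_p^2 = m/(2\delta_\beta)$ as in Equation (15), a classical leg of duration t/L, and so on. The thermal beads are relaxed here with a crude Metropolis pass that ignores the coupling to the difference variables; a production code would instead sample the full density Π.

```python
import numpy as np

# Schematic, one-dimensional sketch of the L-segment process behind G^(L).
rng = np.random.default_rng(1)
hbar, m = 1.0, 1.0

def sample_thermal_path(r_start, nu, delta_beta, V, nsweeps=50):
    # crude Metropolis relaxation of a semi-sum thermal block; r[0] held fixed
    sigma_r2 = hbar**2 * delta_beta / (2.0 * m)
    r = np.full(nu + 1, float(r_start))
    for _ in range(nsweeps):
        for lam in range(1, nu + 1):
            trial = r[lam] + rng.normal(0.0, np.sqrt(sigma_r2))
            dE = (trial - r[lam - 1])**2 - (r[lam] - r[lam - 1])**2
            if lam < nu:
                dE += (r[lam + 1] - trial)**2 - (r[lam + 1] - r[lam])**2
            dE = dE / (2.0 * sigma_r2) + 2.0 * delta_beta * (V(trial) - V(r[lam]))
            if rng.random() < np.exp(-dE):
                r[lam] = trial
    return r

def classical_leg(q, p, tau, dV, nsteps=200):
    dt = tau / nsteps
    for _ in range(nsteps):                       # velocity Verlet
        p -= 0.5 * dt * dV(q); q += dt * p / m; p -= 0.5 * dt * dV(q)
    return q, p

def one_realization(L, t, beta, nu, V, dV, r_init=0.0):
    delta_beta = beta / (2.0 * L * nu)
    sigma_p = np.sqrt(m / (2.0 * delta_beta))
    q = r_init
    for _ in range(L):
        beads = sample_thermal_path(q, nu, delta_beta, V)      # thermal block
        q, p = classical_leg(beads[-1], rng.normal(0.0, sigma_p), t / L, dV)
    return q, p                                    # endpoint entering B_w

q_t, p_t = one_realization(L=2, t=1.0, beta=4.0, nu=8,
                           V=lambda x: 0.5 * x**2, dV=lambda x: x)
```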

A Monte Carlo algorithm to sample Π for different values of L was illustrated in [30]. This Monte Carlo has several non-standard features, most notably the fact that the normalization of the probability density is unknown and that Π contains products of delta functions, i.e., singular distributions. These difficulties can be circumvented as detailed in [30]. The first one is tackled by recasting (without further approximations) Equation (16) in the form of a ratio of expected values. The second is addressed via an appropriate choice of the trial moves and acceptance probabilities. The most serious numerical difficulty, unfortunately, comes from the estimator of the observable. In fact, from Equation (16), it can be seen that, in addition to the matrix elements of the operators, the integrand contains a product of phase factors to be evaluated at the beginning and end of each real-time propagation segment. As mentioned above, the number of these phase factors for the "order L" approximation of the symmetrized correlation function is 2L − 1, and their presence rapidly hinders the convergence of the calculation. Furthermore, for a system of n particles in three dimensions, the phases take the (generic) form $e^{\pm\frac{i}{\hbar}p\cdot\Delta r}$, where p and Δr are 3n-dimensional vectors. The argument of each phase factor thus grows linearly with the number of degrees of freedom, so that, even for small values of L, convergence is problematic. The numerical tests performed so far on simple model systems confirm both the interest and the difficulties of Equation (16). In [30], we computed position autocorrelation functions for a set of one-dimensional systems (e.g., a quartic potential) at temperatures low enough to ensure that the system was in the quantum regime. We observed that increasing the number of complex time slices did systematically improve the length of time for which we were able to obtain accurate results. However, the numerical effort to go beyond L = 3, though not exponential in time, became essentially prohibitive, even for these simple systems. To indicate possible means to reduce the numerical effort involved in these calculations, we now discuss a recent development of the method, designed to address the problem of the phase in the simplest case, L = 1, in which it presents itself. We then illustrate how to formally extend this development, which, in its simplest form, has been successfully applied to realistic models of condensed phase systems, to the case L > 1.

3. L = 1: Fully Linearized Approximation

The expression for the L = 1 approximation of the symmetrized correlation function is given by (see Equation (16)):

$$ G^{(1)}_{A,B}(t;\beta) = \frac{1}{Z}\int d\Delta r_{t_c}\, d\bar r_{t_c}\, d\mathcal{X}_0\; \langle \bar r_{t_c}+\tfrac{\Delta r_{t_c}}{2}|\hat B|\bar r_{t_c}-\tfrac{\Delta r_{t_c}}{2}\rangle\; e^{\frac{i}{\hbar}\bar p_0^{n}\Delta r_{t_c}} \times \rho(\mathcal{X}_0,\bar r_{t_c})\; e^{-\frac{i}{\hbar}\bar p_0^{1}\Delta r_0^{(\nu-1)}}\; \langle \bar r_0+\tfrac{\Delta r_0}{2}|\hat A|\bar r_0-\tfrac{\Delta r_0}{2}\rangle \qquad (17) $$

We are now going to simplify the expression above using four steps: (1) observe that the integral over $\Delta r_{t_c}$ in the first line of the equation above defines the Wigner transform of operator B̂ (see, also, the definition below Equation (4)); (2) note that the product of δ functions in the definition of ρ (Equation (15)) forces $(\bar r_{t_c}, \bar p_{t_c}^{n})$ to be the endpoint of a classical trajectory of length t starting at $(\bar r_0^{\nu}, \bar p_0^{1})$, so that, after integration over the $\bar r_0^{\nu+l}$ (l = 1, ..., n − 1) and the $\bar p_0^{l}$ (l = 1, ..., n), $(\bar r_{t_c}, \bar p_{t_c}^{n}) = (r_t, p_t)$ (where $(r_t, p_t)$ denote the classically evolved variables); (3) choose, for the sake of simplicity, to specialize the discussion to an operator, Â, which is diagonal in the coordinate representation (the case of generic operators is considered in the Appendix of [39]). This choice, producing a $\delta(\Delta r_0)$ in the evaluation of the matrix elements, allows one to integrate also over $\Delta r_0$. The surviving variables (i.e., the semi-sum and difference variables of the thermal path integral and the initial momentum of the classical trajectory) will be indicated collectively as $\Gamma = \{p^{1}, r^{0}, \ldots, r^{\nu}, \Delta r^{1}, \ldots, \Delta r^{(\nu-1)}\}$; (4) simplify the notation by dropping the bar from the semi-sum variables and the subscript which identifies the J = 0 propagation segment in Equation (16), since only one segment is now present. Once these operations are performed, the correlation function can be written as:

$$ G^{(1)}_{A,B}(t;\beta) = \frac{\hat Q}{Z}\int d\Gamma\; B_w(r_t,p_t)\; P(\Gamma)\; e^{-\frac{i}{\hbar}p^{1}\Delta r^{(\nu-1)}}\; A(r^{0}) \qquad (18) $$
with
$$ P(\Gamma) = \frac{1}{\hat Q}\; e^{-\frac{(p^{1})^2}{2\sigma_p^2}}\; e^{-2\delta_\beta V(r^{0})}\; e^{-\frac{1}{2\sigma_r^2}\sum_{\lambda=1}^{\nu}(r^{\lambda}-r^{\lambda-1})^2}\; e^{-\frac{\sigma_p^2}{2\hbar^2}\sum_{\lambda=2}^{\nu-1}(\Delta r^{\lambda}-\Delta r^{\lambda-1})^2}\; e^{-\frac{\sigma_p^2}{2\hbar^2}(\Delta r^{1})^2} \times e^{-\delta_\beta\sum_{\lambda=2}^{\nu}\big[V\big(r^{\lambda-1}+\frac{\Delta r^{\lambda-1}}{2}\big) + V\big(r^{\lambda-1}-\frac{\Delta r^{\lambda-1}}{2}\big)\big]} \qquad (19) $$
and $\hat Q$ is the normalization of P(Γ). Note that the expression above for the probability is quite standard, being an explicit function of the semi-sum and difference variables, which can be sampled via Monte Carlo, multiplied by a Gaussian term for the momenta. As mentioned in the previous section, the ratio of the normalization over the partition function, which appears in Equation (18), is, in general, not known, and we estimate it via the autocorrelation of the identity, $G^{(1)}_{I,I} = 1 = \frac{\hat Q}{Z}\int d\Gamma\, P(\Gamma)\, e^{-\frac{i}{\hbar}p^{1}\Delta r^{(\nu-1)}}$. Using this approach, the L = 1 estimator of the correlation function is given by the following ratio of expectation values over P(Γ):
$$ G^{(1)}_{A,B}(t;\beta) = \frac{\big\langle B_w(r_t,p_t)\, e^{-\frac{i}{\hbar}p^{1}\Delta r^{(\nu-1)}}\, A(r^{0})\big\rangle_P}{\big\langle e^{-\frac{i}{\hbar}p^{1}\Delta r^{(\nu-1)}}\big\rangle_P} \qquad (20) $$
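A minimal sketch of the corresponding estimator is given below (illustrative only; the sampler of P(Γ), the Wigner transform B_w and the classical propagation producing $(r_t, p_t)$ are assumed to be provided elsewhere, and the sample field names are hypothetical).

```python
import numpy as np

# Ratio-of-averages estimator of Eq. (20).  Each sample drawn from P(Gamma)
# supplies the initial bead r0, the last difference bead dr_last = Delta r^(nu-1),
# the initial momentum p1 and the classically evolved endpoint (r_t, p_t).
hbar = 1.0

def g1_estimator(samples, A, B_w):
    num, den = 0.0 + 0.0j, 0.0 + 0.0j
    for s in samples:
        phase = np.exp(-1j * s["p1"] * s["dr_last"] / hbar)
        num  += B_w(s["r_t"], s["p_t"]) * phase * A(s["r0"])
        den  += phase
    return (num / den).real        # the symmetrized correlation function is real
```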

As anticipated, both in the numerator and denominator of Equation (20), a phase factor appears which, for high dimensional systems, hinders an efficient convergence of the calculation. To alleviate this problem, we proposed a method, described in detail in [39], which starts by obtaining an alternative expression for $G^{(1)}_{A,B}$. As will be shown in the following, the new expression does not introduce further (analytical) approximations, but it has the advantage of eliminating the phase factor from the observable. Let us consider in more detail the structure of the probability, P(Γ). This probability is given by the product of a Gaussian for the momenta, $\rho_G(p) \propto e^{-\frac{p^2}{2\sigma_p^2}}$ (note that, with respect to Equation (19), we dropped the superscript, 1, on the momenta to simplify the notation), times a joint probability function for the semi-sum and difference thermal variables, to be indicated in the following as $\tilde\rho(r, \Delta r)$, where we have introduced the notation $r = \{r^{0}, \ldots, r^{\nu}\}$ and $\Delta r = \{\Delta r^{1}, \ldots, \Delta r^{(\nu-1)}\}$. This joint probability (whose form can be inferred from Equation (19) by taking out the momentum Gaussian) is most conveniently expressed as:

$$ \tilde\rho(r,\Delta r) = \rho_c(\Delta r|r)\; \rho_m(r) \qquad (21) $$
where:
$$ \rho_m(r) = \frac{1}{\hat Q}\; e^{-2\delta_\beta V(r^{0})}\; e^{-\frac{1}{2\sigma_r^2}\sum_{\lambda=1}^{\nu}(r^{\lambda}-r^{\lambda-1})^2} \int d\Delta r\; e^{-\frac{\sigma_p^2}{2\hbar^2}\sum_{\lambda=2}^{\nu-1}(\Delta r^{\lambda}-\Delta r^{\lambda-1})^2}\; e^{-\frac{\sigma_p^2}{2\hbar^2}(\Delta r^{1})^2}\; e^{-\delta_\beta\sum_{\lambda=2}^{\nu}\big[V\big(r^{\lambda-1}+\frac{\Delta r^{\lambda-1}}{2}\big) + V\big(r^{\lambda-1}-\frac{\Delta r^{\lambda-1}}{2}\big)\big]} \qquad (22) $$
is the marginal probability for the semi-sum variables and $\rho_c(\Delta r|r) \equiv \tilde\rho(r, \Delta r)/\rho_m(r)$ is the conditional probability for the difference variables given the semi-sum variables. This rewriting of the probability density is convenient because the phase factors in Equation (20) depend only on the momenta and difference variables. We can use this observation to define:
$$ F(p,r) = \int d\Delta r\; e^{-\frac{i}{\hbar}p\,\Delta r^{(\nu-1)}}\; \rho_c(\Delta r|r) \qquad (23) $$
which is the average of the phase with respect to the conditional probability density, and investigate the properties of this function to see if we can use them to improve the convergence of our calculations. To that end, note that F is also, by definition, the cumulant generating function of the variable $\Delta r^{(\nu-1)}$ with respect to the conditional probability, $\rho_c$ (see, for example, [40,41] for previous use of cumulants in this field). This means that the coefficients of the Taylor series expansion (with respect to $-ip/\hbar$):
$$ \ln F(p,r) = \sum_{n=1}^{\infty} \frac{(-ip/\hbar)^n}{n!}\, \big\langle (\Delta r^{(\nu-1)})^n\big\rangle^{c}_{\rho_c(\Delta r|r)} \qquad (24) $$
(these coefficients are indicated above as $\langle(\Delta r^{(\nu-1)})^n\rangle^{c}_{\rho_c(\Delta r|r)}$) are the cumulant moments of $\Delta r^{(\nu-1)}$. Importantly, the conditional probability density is an even function of the difference variables, implying that only even order terms in the series above are non-zero and that the series sums to a real function, which we denote in the following as −E(p, r). We can then express the average of the phase as:
$$ F(p,r) = e^{-E(p,r)} \qquad (25) $$
i.e., a positive definite function of the momenta and the semi-sum variables. We now use the function above to define a new probability density:
$$ \mathcal{P}(p,r) = \frac{\rho_G(p)\, e^{-E(p,r)}\, \rho_m(r)}{\int dp\, dr\; \rho_G(p)\, e^{-E(p,r)}\, \rho_m(r)} \qquad (26) $$
and note that, by direct substitution of this definition in the (explicit) expression of Equation (20), we obtain:
$$ G^{(1)}_{A,B}(t;\beta) = \big\langle B_w(r_t,p_t)\; A(r^{0})\big\rangle_{\mathcal{P}} \qquad (27) $$
The key advantage of the expression above is that the observable does not contain phase factors anymore and is, therefore, well suited for a Monte Carlo estimate. Sampling the distribution, 𝒫, however, is non-trivial, since this probability density contains two factors, $e^{-E(p,r)}$ and $\rho_m(r)$, that do not have an explicit analytic form, but, for each value of r and p, can only be estimated numerically. The numerical estimate of E(p, r), in particular, requires one to truncate the cumulant series at a given order. The convergence of the calculation with respect to the truncation of the series can always be checked numerically, and, although the cost scales up when terms of higher order are included, it does not present any particular difficulty. (In all calculations performed so far, a second order cumulant expansion proved sufficient.) In the following subsection, we briefly describe how to combine two schemes, known as the Kennedy and penalty methods, for Monte Carlo sampling of noisy probability densities and obtain $G^{(1)}_{A,B}(t;\beta)$. Our goal is to highlight the main differences between these schemes and standard Monte Carlo and to indicate where the algorithm is most affected by them. A detailed description of the algorithm can be found in [38,39].
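As an illustration, truncating Equation (24) at second order gives the explicit estimate sketched below (not the production implementation of [38,39]). Since the odd cumulants of $\Delta r^{(\nu-1)}$ vanish by symmetry, the leading term is simply proportional to its variance under the conditional density, which can be estimated from samples of $\rho_c(\Delta r|r)$ (obtained, for instance, with the same kind of auxiliary Monte Carlo described in the next subsection).

```python
import numpy as np

# Second-order cumulant approximation of E(p, r) = -ln F(p, r):
#   E(p, r) ~= p^2 <(Delta r^(nu-1))^2>_c / (2 hbar^2)
# with the second cumulant (here just the variance, since the mean vanishes)
# estimated from samples of the conditional density rho_c(Delta r | r).
hbar = 1.0

def E_second_order(p, dr_last_samples):
    var = np.var(np.asarray(dr_last_samples))   # second cumulant of Delta r^(nu-1)
    return 0.5 * p**2 * var / hbar**2
```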

3.1. Noisy Monte Carlo Algorithm

To simplify the discussion, we introduce some notation. Let us indicate the coordinate-dependent Gaussian terms in Equation (19) as:

$$ e^{-\frac{1}{2\sigma_r^2}\sum_{\lambda=1}^{\nu}(r^{\lambda}-r^{\lambda-1})^2} = e^{-V_r(r)}, \qquad e^{-\frac{\sigma_p^2}{2\hbar^2}\sum_{\lambda=1}^{\nu-1}(\Delta r^{\lambda}-\Delta r^{\lambda-1})^2} = e^{-V_{\Delta}(\Delta r)} \qquad (28) $$
(above, $\Delta r^{0} = 0$) and write the potential term as:
$$ e^{-\delta_\beta\sum_{\lambda=2}^{\nu}\big[V\big(r^{\lambda-1}+\frac{\Delta r^{\lambda-1}}{2}\big) + V\big(r^{\lambda-1}-\frac{\Delta r^{\lambda-1}}{2}\big)\big]}\; e^{-2\delta_\beta V(r^{0})} = e^{-\delta_\beta \bar V(r,\Delta r)} \qquad (29) $$

We also rewrite the marginal probability, ρm(r), defined in Equation (22), isolating the terms that do not depend on Δr and have an explicit analytic expression, thus:

$$ \rho_m(r) = \int d\Delta r\; \tilde\rho(r,\Delta r) = \frac{1}{\hat Q}\, e^{-V_r(r)} \int d\Delta r\; e^{-\delta_\beta\bar V(r,\Delta r)}\, e^{-V_\Delta(\Delta r)} = \frac{1}{\hat Q}\, e^{-V_r(r)}\; \rho_m'(r) \qquad (30) $$
With the definitions above, 𝒫(p, r) takes the form:
$$ \mathcal{P}(p,r) = \frac{1}{\mathcal{Q}}\; \rho_G(p)\; e^{-E(p,r)}\; e^{-V_r(r)}\; \rho_m'(r) \qquad (31) $$
where 𝒬 is the normalization. The scheme that we use to perform the sampling is based on earlier work by Ceperley [42] and Kennedy [43]. Adapting their ideas to our case, we introduce a Monte Carlo algorithm in which the probability to generate a new state of the system, by changing either the coordinates or the momenta of the particles, and/or the probability to accept this new state are modified to guarantee that detailed balance is satisfied also when $\rho_m'(r)$ and E(p, r) are estimated with significant noise. (Unlike what happens for classical canonical densities, in our probability the variables p and r are not independent, so the momenta cannot simply be integrated out despite their Gaussian factor and must be treated numerically.) Both the Ceperley and the Kennedy schemes require the introduction of appropriate numerical estimators of the unknown functions, which will be indicated with calligraphic fonts.

The Monte Carlo scheme to sample Equation (31) is constructed as follows. Choose, with probability 1/2, whether the move will involve r or p.

(1) A move on p has been selected:

choose a new momentum according to p′ = p + δp, where δp is a uniform random number centered on zero (the magnitude of the displacement is chosen so as to optimize the acceptance). Taking into account that the r variables are not being updated, detailed balance for this trial move takes the form:

$$ \rho_G(p)\; e^{-E(r,p)}\; A_p(p\to p') = \rho_G(p')\; e^{-E(r,p')}\; A_p(p'\to p) \qquad (32) $$
where $A_p(p\to p')$ is the acceptance probability. The detailed balance relationship above has the same form as the one discussed by Ceperley et al. within the penalty method [42], a generalized Monte Carlo for sampling a density given by the exponential of a function, in our case E(·, ·), known with statistical errors. According to the penalty method, if a numerical estimate, $\Delta_p(p', p; r)$, of the difference E(r, p′) − E(r, p) has been obtained (for example, by averaging $N_s$ values of a specific estimator) and an estimate of its variance, $\chi_p^2$, is also known, detailed balance can be satisfied by defining the acceptance as:
$$ A_p(p\to p') = \min\Big[1,\; \frac{\rho_G(p')}{\rho_G(p)}\; e^{-\Delta_p(p',p;r) - u_{\chi_p^2}}\Big] \qquad (33) $$
where:
$$ u_{\chi_p^2} = \frac{\chi_p^2}{2} + \frac{\chi_p^4}{4(N_s+1)} + \cdots \qquad (34) $$

The expression for the acceptance probability differs from the standard Metropolis prescription for the presence of $u_{\chi_p^2}$ and is valid when $\chi_p^2/n < 1/4$ [42]. In the limit of an infinitely precise estimate of the difference, $u_{\chi_p^2} \to 0$ and the standard criterion is recovered; when non-zero, this function corrects, on average, for the effect of the noise.
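A minimal sketch of this acceptance step is given below (illustrative only), under the assumption that the $N_s$ values of the estimator of E(r, p′) − E(r, p) are supplied by the auxiliary calculation and that the momentum Gaussian has variance $\sigma_p^2$.

```python
import numpy as np

# Penalty-method acceptance for a momentum move, Eqs. (33)-(34):
# Delta_p is the mean of Ns independent estimates of E(r,p')-E(r,p); the
# noise penalty u corrects, on average, for the fluctuations of this estimate.
rng = np.random.default_rng(2)

def accept_p_move(p, p_new, delta_samples, sigma_p):
    ns    = len(delta_samples)
    delta = np.mean(delta_samples)
    chi2  = np.var(delta_samples, ddof=1) / ns          # variance of the mean
    u     = chi2 / 2.0 + chi2**2 / (4.0 * (ns + 1))     # leading penalty terms
    ratio = np.exp(-(p_new**2 - p**2) / (2.0 * sigma_p**2))   # rho_G(p')/rho_G(p)
    return rng.random() < min(1.0, ratio * np.exp(-delta - u))
```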

(2) A move on r has been selected:

in this case, indicating with $T_r(r\to r')$ and $A_r(r\to r')$ the probability to generate and accept a new configuration, respectively, detailed balance is expressed, after simplifying $\rho_G(p)$, as:

$$ e^{-E(r,p)}\; e^{-V_r(r)}\; \rho_m'(r)\; T_r(r\to r')\; A_r(r\to r') = e^{-E(r',p)}\; e^{-V_r(r')}\; \rho_m'(r')\; T_r(r'\to r)\; A_r(r'\to r) \qquad (35) $$

The structure of this relationship is analogous to the one considered by Kennedy et al. [43], who adapted Monte Carlo sampling to probability densities given by an exponential term times a “noisy” (positive definite) function. They showed that detailed balance is satisfied if states are generated according to the probability:

$$ T_r(r\to r') \propto e^{-E(r',p)}\; e^{-V_r(r')} \qquad (36) $$
(a method to sample $T_r(r\to r')$ is described after the next equation) and accepted with probability:
$$ A_r(r\to r') = \begin{cases} c\; \mathcal{U}(r\to r') & \text{if } e^{-\delta_\beta \bar V(r,0)} > e^{-\delta_\beta \bar V(r',0)} \\ c & \text{if } e^{-\delta_\beta \bar V(r,0)} \le e^{-\delta_\beta \bar V(r',0)} \end{cases} \qquad (37) $$

Above, $\mathcal{U}(r\to r')$ is an unbiased estimator of the ratio $\rho_m'(r')/\rho_m'(r)$, and c < 1 is a constant that ensures $A_r(r\to r') \in [0, 1]$ (for details on the meaning and choice of c, see [43] and the discussion on page 8 of [39]). The conditions on the exponential of the potential enforce an ordering criterion on the states, whose optimal choice depends on the problem (here, we used the one adopted in our previous work [39]). In the usual implementation of the Kennedy method, the exponential part of the probability density is assumed to be known analytically, and the states are generated via a standard Monte Carlo method. In our case, the situation is more complicated, since $e^{-E(r',p)}$ is only known with noise. To solve this problem, we employ the penalty method to obtain configurations distributed according to Equation (36). These configurations are generated using a Monte Carlo with transition probability $t(r\to r') \propto e^{-V_r(r')}$ and acceptance probability $a(r\to r') = \min\big[1, \exp\big(-\Delta_r(r', r; p) - u_{\chi_r^2}\big)\big]$, where $\Delta_r(r', r; p)$ is an unbiased estimator of E(r′, p) − E(r, p), and $u_{\chi_r^2}$ is defined in analogy with Equation (34).
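The noisy acceptance test for an r move can be sketched as follows (illustrative only; it assumes that the estimator 𝒰 of Equation (39) and the ordering potentials V̄(r, 0) and V̄(r′, 0) have already been computed, and it follows the reconstruction of Equation (37) given above).

```python
import numpy as np

# Kennedy-style acceptance for an r move, Eq. (37): the noisy, unbiased
# estimator U_est of rho'_m(r')/rho'_m(r) is used on one side of the ordering
# criterion; the constant c < 1 keeps the acceptance probability in [0, 1]
# with high probability (see [43] and [39] for how to choose it).
rng = np.random.default_rng(3)

def accept_r_move(U_est, Vbar0, Vbar0_new, delta_beta, c=0.5):
    if np.exp(-delta_beta * Vbar0) > np.exp(-delta_beta * Vbar0_new):
        acc = c * U_est      # move toward the nominally less probable state
    else:
        acc = c              # reverse ordering: constant acceptance
    return rng.random() < acc
```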

This concludes the description of our Monte Carlo moves. The practical implementation of this algorithm requires the definition of the numerical estimators $\mathcal{U}(r\to r')$, $\Delta_p(p', p; r)$ and $\Delta_r(r', r; p)$. While this is an important technical point, it only involves a set of calculations, each performed via an auxiliary Monte Carlo calculation, that are quite standard. To provide a typical example, we consider one of the estimators, referring the reader to [39] for a detailed description of the others. Let us then consider $\mathcal{U}(r\to r')$. This quantity, necessary in the Kennedy acceptance test (see Equation (37)), is obtained by writing the ratio of the marginal probabilities as:

$$ \frac{\rho_m'(r')}{\rho_m'(r)} = \frac{\int d\Delta r\; e^{-V_\Delta(\Delta r)}\, e^{-\delta_\beta\bar V(r',\Delta r)}}{\int d\Delta r\; e^{-V_\Delta(\Delta r)}\, e^{-\delta_\beta\bar V(r,\Delta r)}} = \frac{\int d\Delta r\; e^{-V_\Delta(\Delta r)}\, e^{-\delta_\beta\bar V(r,\Delta r)}\, e^{-\delta_\beta[\bar V(r',\Delta r)-\bar V(r,\Delta r)]}}{\int d\Delta r\; e^{-V_\Delta(\Delta r)}\, e^{-\delta_\beta\bar V(r,\Delta r)}} = \Big\langle e^{-\delta_\beta[\bar V(r',\Delta r)-\bar V(r,\Delta r)]}\Big\rangle_{\rho_c(\Delta r|r)} \qquad (38) $$
whose unbiased estimator is:
$$ \mathcal{U}(r\to r') = \frac{1}{N_a}\sum_{i=1}^{N_a} e^{-\delta_\beta[\bar V(r',\Delta r_i)-\bar V(r,\Delta r_i)]} \qquad (39) $$
where $\{\Delta r_i\}$ is a sample distributed according to $\rho_c(\Delta r|r)$. This sample is obtained via an auxiliary (standard) Monte Carlo calculation over the conditional probability, $\rho_c(\Delta r|r)$, in which new configurations are generated according to $T(\Delta r \to \Delta r') \propto \exp[-V_\Delta(\Delta r')]$ (to do this, we use the staging method [44], which allows one to sample exactly a probability density containing Gaussian-like distributions, see Equation (28)), and moves are accepted or rejected based on $A(\Delta r \to \Delta r') = \min\{1, \exp[-\delta_\beta(\bar V(r, \Delta r') - \bar V(r, \Delta r))]\}$. As can be seen from the expression above, calculating the estimator requires $N_a$ steps in the auxiliary Monte Carlo calculation. A similar situation arises when the other estimators introduced above are considered, so that the total number of Monte Carlo moves in our scheme is given by $N_t = N_m \times N_a$, where we indicated with $N_m$ the number of moves in the "main" Monte Carlo cycle (i.e., each choice of a move on r or p) and with $N_a$ the number of auxiliary Monte Carlo steps per "main" move.
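A minimal sketch of this estimator is given below (illustrative only). For simplicity, and at variance with the staging moves of [44] used in our implementation, the auxiliary walk updates one difference bead at a time with a symmetric proposal, which samples the same conditional density; Vbar(r, dr) stands for V̄(r, Δr) and V_Delta for V_Δ(Δr).

```python
import numpy as np

# Auxiliary Monte Carlo estimate of U(r -> r'), Eq. (39): sample Delta r from
# rho_c(Delta r | r) ~ exp[-V_Delta(Delta r) - delta_beta * Vbar(r, Delta r)]
# and average the exponential of the potential difference along the walk.
rng = np.random.default_rng(4)

def estimate_U(r, r_new, dr, Vbar, V_Delta, delta_beta, n_aux=200, step=0.1):
    dr = np.array(dr, dtype=float)
    acc_sum = 0.0
    for _ in range(n_aux):
        lam   = rng.integers(len(dr))          # pick one difference bead
        trial = dr.copy()
        trial[lam] += rng.normal(0.0, step)
        dE = (V_Delta(trial) - V_Delta(dr)) + delta_beta * (Vbar(r, trial) - Vbar(r, dr))
        if rng.random() < np.exp(-dE):
            dr = trial
        acc_sum += np.exp(-delta_beta * (Vbar(r_new, dr) - Vbar(r, dr)))
    return acc_sum / n_aux
```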

The computational overhead introduced by the auxiliary Monte Carlo calculation increases the cost of our scheme, but it is very small compared with the cost of the moves necessary to converge the estimate of Equation (20). The algorithm just described is, in fact, efficient enough to make possible calculations on realistic condensed phase systems with relatively little numerical effort. In particular, the algorithm was used to compute the dynamic structure factor of a model of liquid neon composed of 64 atoms [38]. Details of the calculation can be found in [38]. Here, we show, in Figure 4, our results (red triangles with error bars in the figure) and compare them with experiments (green curve) and with the results of a calculation (with the same empirical potential and simulation parameters) performed by Poulsen et al. [45] using the linearized approximation for quantum time correlation functions described in the Introduction (see Equation (4) and the discussion above it).

The results show a rather pronounced asymmetry around zero frequency, due to detailed balance, that indicates the presence of relevant quantum effects in the system. The agreement between our calculations and experiments is very good, as is the agreement with the standard linearized calculation by Poulsen (a state-of-the-art reference in the field). The numerical cost of the two calculations is very similar (about a million Monte Carlo steps in total for initial condition sampling), showing that the auxiliary steps due to the noisy distributions in our approach are essentially irrelevant. Indeed, other tests indicate that, depending on the system, the overall cost of our method can be less than that of alternative schemes with comparable or better accuracy. The approach described in this section, for example, has also been used to obtain the infrared spectra of simple models of molecules in the gas phase [46]. Although these systems are quite small, the calculations that we performed are known to pose a considerable challenge to alternative, less rigorous methods, such as Centroid Molecular Dynamics [47] and Ring Polymer Molecular Dynamics [48], which fail to capture the spectra and/or introduce spurious features. In contrast, even though obtaining the exact intensities is quite expensive, our method proved remarkably effective in identifying the positions of the peaks, which could be obtained with only about one hundred Monte Carlo moves.

3.2. L > 1

In this subsection, we present a new development of the approach summarized above that extends the use of cumulants to pre-average the phase factors in the expression of the symmetrized correlation function to the case L > 1 (This possibility came out in discussions with M. Monteferrante.). For simplicity of notation, in this subsection, we describe how this can be done for L = 2, but the steps that we shall use can be generalized to a larger number of segments. In the following, we report the formal result, while the construction and test of an algorithm that generalizes the noisy Monte Carlo scheme described in the previous section will be the object of future work. Let us begin by rewriting the L = 2 approximation of the symmetrized correlation function, for diagonal Â, as follows (see Equations (16) and (20) for structure and notation):

$$
G^{(2)}_{A,B}(t;\beta)=\frac{\displaystyle\int d\Gamma_1\,d\Gamma_0\;B_w\!\left(r_t^{(2)},p_t^{(2)}\right)\Big[e^{\,i p_1^{1}\Delta r_1^{(\nu-1)}}\,P\!\left(\Gamma_1;r_0^{\,n}\right)e^{\,i p_0^{\,n}\Delta r_1^{0}}\Big]\Big[e^{\,i p_0^{1}\Delta r_0^{(\nu-1)}}\,P(\Gamma_0)\Big]A(r_0)}{\displaystyle\int d\Gamma_1\,d\Gamma_0\;\Big[e^{\,i p_1^{1}\Delta r_1^{(\nu-1)}}\,P\!\left(\Gamma_1;r_0^{\,n}\right)e^{\,i p_0^{\,n}\Delta r_1^{0}}\Big]\Big[e^{\,i p_0^{1}\Delta r_0^{(\nu-1)}}\,P(\Gamma_0)\Big]}
\tag{40}
$$

In the equation above, $(r_t^{(2)}, p_t^{(2)})$ is the endpoint of the propagation obtained by combining the two segments of classical dynamics described in the lower panel of Figure 3; $\Gamma_0 = \{p_0^{1}, r_0^{0}, \ldots, r_0^{\nu}, \Delta r_0^{1}, \ldots, \Delta r_0^{(\nu-1)}\}$ and $\Gamma_1 = \{p_1^{1}, r_1^{1}, \ldots, r_1^{\nu}, \Delta r_1^{0}, \ldots, \Delta r_1^{(\nu-1)}\}$ indicate the variables associated with the first and second set of thermal path integrals, respectively (the first set does not include $\Delta r_0^{0}$, since this variable can be integrated over for diagonal Â, and the second does not include $r_1^{0}$, since this variable coincides with $r_0^{\,n}$, the endpoint of the deterministic classical propagation from zero to t/2 in Figure 3). $P(\Gamma_0)$ was defined in the previous section (see Equation (19)), and:

$$
P\!\left(\Gamma_1; r_0^{\,n}\right)=\rho_G\!\left(p_1^{1}\right)\rho_m\!\left(r_1; r_0^{\,n}\right)\rho_c\!\left(\Delta r_1 \,\middle|\, r_1\right)
\tag{41}
$$
where $r_1=\{r_1^{1},\ldots,r_1^{\nu}\}$ and $\Delta r_1=\{\Delta r_1^{0},\ldots,\Delta r_1^{(\nu-1)}\}$. The Gaussian probability for the momenta, $\rho_G(p_1^{1})$, and the marginal, $\rho_m(r_1;r_0^{\,n})$, and conditional, $\rho_c(\Delta r_1|r_1)$, probabilities are defined in analogy with the expressions introduced in Section 3, with the caveat that for J = 1 (and, in general, for J > 0) the sum involving the potentials in the second line of Equation (22) runs from one to ν − 1. In the marginal probability, we have also indicated the (parametric) dependence of the density on $r_0^{\,n}$, i.e., the endpoint of the classical propagation of the first segment, which corresponds, due to the boundary condition mentioned above, to the first bead of the concatenated semi-sum thermal path. The square brackets in Equation (40) isolate the terms that play the most significant role in the following. The bracket containing $P(\Gamma_0)$ (corresponding to the J = 0 term in the approximation of the correlation function) is the same as the one we encountered in the previous subsection, while the bracket containing $P(\Gamma_1;r_0^{\,n})$ shows the general structure of the terms involving the phase factors for J > 0. As summarized when discussing Figure 3, in this bracket one phase factor is given by the product of the momentum at the endpoint of the first segment of classical dynamics (i.e., a variable fixed by the classical evolution), $p_0^{\,n}$, and the first difference variable of the second thermal segment, $\Delta r_1^{0}$. The other phase factor is given by the product of the initial momentum of the second leg of classical dynamics (a variable to be sampled in analogy with $p_0^{1}$), $p_1^{1}$, and the final difference variable of the thermal path, $\Delta r_1^{(\nu-1)}$. As in the L = 1 case, these phase factors do not depend on the semi-sum variables and can be pre-averaged with respect to the conditional probability density. Let us indicate this average as:
$$
F(\pi_J, r_J)=\int d\Delta r_J\; e^{\,i\,\pi_J\cdot\delta r_J}\,\rho_c\!\left(\Delta r_J \,\middle|\, r_J\right)
\tag{42}
$$
where, for J = 0, $\pi_0 = p_0^{1}$ and $\delta r_0 = \Delta r_0^{(\nu-1)}$, while for J = 1 (and, more generally, for J > 0), $\pi_J = \{p_{J-1}^{\,n}, p_J^{1}\}$ and $\delta r_J = \{\Delta r_J^{0}, \Delta r_J^{(\nu-1)}\}$. The equation above is formally identical to Equation (23), with the important difference that, when J > 0, the phase is given by the scalar product of two vectors, so that $F(\pi_J, r_J)$ can be recognized as the joint characteristic function of the components of $\delta r_J$, whose logarithm generates their joint cumulants [40]. Although such joint cumulants are formally more complex, they are still given by the coefficients of the expansion:
$$
\ln F(\pi_J, r_J)=\sum_{|\lambda|\geq 1}\frac{(i)^{|\lambda|}}{\lambda!}\,\pi_J^{\lambda}\,C_{\lambda}(r_J)
\tag{43}
$$

For J = 0, the definition above is to be read as identical to Equation (24). For J = 1 (and, in general, J > 0), $\lambda = \{\lambda_1, \lambda_2\}$ is a vector of non-negative integers, $|\lambda| = \lambda_1 + \lambda_2$, $\lambda! = \lambda_1!\,\lambda_2!$, $\pi_J^{\lambda} = (p_{J-1}^{\,n})^{\lambda_1}(p_J^{1})^{\lambda_2}$ and:

$$
C_{\lambda}(r_J)=C_{\lambda_1,\lambda_2}(r_J)=\left(\frac{\partial^{|\lambda|}\ln F(\pi_J, r_J)}{\partial\!\left(p_{J-1}^{\,n}\right)^{\lambda_1}\partial\!\left(p_J^{1}\right)^{\lambda_2}}\right)_{\pi_J=0}
\tag{44}
$$
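
As a purely illustrative check of Equations (42)–(44), the short Python sketch below estimates the lowest non-vanishing joint cumulants, the |λ| = 2 terms, from samples of $\delta r_J$ and compares the resulting second-order truncation of Equation (43) with a direct Monte Carlo estimate of F. The two-dimensional Gaussian used to stand in for $\rho_c(\Delta r_J|r_J)$, as well as the names Sigma, logF_second_order and F_direct, are assumptions introduced only to keep the example self-contained; for a Gaussian the second-order truncation is exact, whereas for the true conditional densities higher even cumulants contribute.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for rho_c(dr_J | r_J): a symmetric two-dimensional density for
# delta_r_J = (dr_J^0, dr_J^(nu-1)).  Chosen Gaussian only to keep the example short.
cov_true = np.array([[1.0, 0.4],
                     [0.4, 0.8]])
samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov_true, size=200_000)

# |lambda| = 2 joint cumulants estimated from the sample: C_{2,0}, C_{1,1}, C_{0,2}
Sigma = np.cov(samples, rowvar=False)

def logF_second_order(pi):
    """Second-order truncation of Equation (43): ln F ~ -(1/2) pi^T Sigma pi
    (odd multi-indices vanish because the density is even)."""
    return -0.5 * pi @ Sigma @ pi

def F_direct(pi):
    """Direct Monte Carlo estimate of Equation (42); for an even density F is real,
    so the phase factor reduces to cos(pi . delta_r)."""
    return np.mean(np.cos(samples @ pi))

for pi in (np.array([0.3, 0.0]), np.array([0.5, -0.4]), np.array([1.0, 1.0])):
    print(pi, np.exp(logF_second_order(pi)), F_direct(pi))
```

Consistent with the observation made below, only even powers of $\pi_J$ survive in the expansion, so the estimated F is real and positive.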

As in the previous subsection, the conditional probability density is even with respect to the difference variables, implying that only even terms are non-zero in Equation (43). The function $F(\pi_J, r_J)$ is then real and positive, so we can set $F(\pi_J, r_J) = e^{E(\pi_J, r_J)}$ and define, in analogy with Equation (26), the probability density:

$$
\mathcal{P}\!\left(\pi_J, r_J; r_0^{\,n}\right)=\frac{\rho_G\!\left(p_J^{1}\right)\rho_m\!\left(r_J; r_0^{\,n}\right)e^{E(\pi_J, r_J)}}{\displaystyle\int dp_J^{1}\,dr_J\;\rho_G\!\left(p_J^{1}\right)\rho_m\!\left(r_J; r_0^{\,n}\right)e^{E(\pi_J, r_J)}}
\tag{45}
$$

Substitution of the definition above in the expression for the symmetrized correlation function shows that we can write the two-segment approximation as the following expectation value:

$$
G^{(2)}_{A,B}(t;\beta)=\left\langle B_w\!\left(r_t^{(2)}, p_t^{(2)}\right)A(r_0)\right\rangle_{\mathcal{P}^{(2)}}
\tag{46}
$$
where $\mathcal{P}^{(2)} = \mathcal{P}(\pi_1, r_1; r_0^{\,n})\,\mathcal{P}(p_0^{1}, r_0)$ (with a straightforward generalization of the notation adopted here, the L-segment approximation of the correlation function can be written as $G^{(L)}_{A,B}(t;\beta) = \langle B_w(r_t^{(L)}, p_t^{(L)})\,A(r_0)\rangle_{\mathcal{P}^{(L)}}$, where $\mathcal{P}^{(L)} = \prod_{J=1}^{L-1}\mathcal{P}(\pi_J, r_J; r_{J-1}^{\,n})\,\mathcal{P}(p_0^{1}, r_0)$). The average above presents the same immediate advantage as the L = 1 case, in that the “observable” does not contain any explicit phase factors. It also presents the same numerical difficulties, given that the probability density contains analytically unknown quantities (the marginal probabilities, $\rho_m(r_J)$, and the cumulants). Although it is possible to construct a generalization of the noisy Monte Carlo scheme described in the previous section, it is not certain that the favorable convergence properties of the auxiliary sampling needed, in particular, to compute the cumulants (the decisive ingredient in the L = 1 case) will be preserved in this more general situation. Developing and testing the most appropriate algorithm for this generalization will be the focus of future work.
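
To illustrate how an L-segment estimate of this expectation value could be organized, the sketch below alternates (placeholder) thermal sampling and classical legs: the endpoint of each classical leg fixes the first bead of the next thermal segment, and the product $B_w(r_t^{(L)}, p_t^{(L)})\,A(r_0)$ is accumulated at the end of the chain. The harmonic classical propagator, the Gaussian sample_segment routine standing in for the densities $\mathcal{P}(\pi_J, r_J; r_{J-1}^{\,n})$, and all parameters are assumptions introduced only to keep the example self-contained; this is a structural sketch of the composition of segments, not the algorithm whose development is left to future work.

```python
import numpy as np

rng = np.random.default_rng(2)

dt, mass, omega = 0.01, 1.0, 1.0            # toy parameters for the classical legs

def classical_leg(r, p, n_steps):
    """Velocity-Verlet propagation of one real-time leg on a toy harmonic potential."""
    for _ in range(n_steps):
        p += 0.5 * dt * (-mass * omega ** 2 * r)
        r += dt * p / mass
        p += 0.5 * dt * (-mass * omega ** 2 * r)
    return r, p

def sample_segment(r_boundary):
    """Placeholder for drawing the thermal variables of a segment conditioned on the
    classical endpoint r_boundary (the first bead of the new semi-sum path).
    Gaussians are used only to keep the sketch self-contained."""
    r0 = r_boundary + 0.1 * rng.normal()     # position carried into the next classical leg
    p0 = rng.normal()                        # freshly sampled initial momentum
    return r0, p0

def G_estimate(L, n_samples=5_000, total_steps=60,
               A=lambda r: r, B_w=lambda r, p: r):
    """Monte Carlo sketch of  < B_w(r_t^(L), p_t^(L)) A(r_0) >  with L alternating segments."""
    acc = 0.0
    for _ in range(n_samples):
        r, p = sample_segment(0.0)           # J = 0 thermal variables (toy stand-in)
        r0 = r                               # argument of A
        for J in range(1, L + 1):
            r, p = classical_leg(r, p, total_steps // L)   # leg from (J-1) t/L to J t/L
            if J < L:
                r, p = sample_segment(r)     # next thermal segment, conditioned on the endpoint
        acc += B_w(r, p) * A(r0)
    return acc / n_samples

if __name__ == "__main__":
    print("L = 1:", G_estimate(L=1), "   L = 2:", G_estimate(L=2))
```

In the actual method, sample_segment would be replaced by (noisy) sampling of Equation (45), which is precisely the step whose convergence properties remain to be tested.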

4. Conclusions

In this paper, we summarized a recently developed method to approximate symmetrized quantum time correlation functions. The method recasts the problem as the calculation of averages over a stochastic process based on a linearized approximation of the complex time propagators in the correlation function. This approximation can be enforced either on the full length of the evolution (fully linearized approach) or in an iterative form obtained via the (complex) time composition property of the evolution operators. Thanks to the use of a cumulant expansion, which tames the phase factors present in the observable, the fully linearized approach has proven efficient and accurate in calculations on moderately quantum systems in the condensed phase. The iterative form offers, in principle, a way to improve the accuracy of the results with respect to the fully linearized case and may be useful when higher order quantum effects must be taken into account. While the potential for systematic improvement with respect to the fully classical limit for the dynamics is indeed the most interesting feature of the approach (and the one that distinguishes it from other available methods, for which there is no way to improve upon the classical or semi-classical approximation), the practical use of the approach for L > 1 is currently hindered by numerical instabilities. In the final section of the paper, we have shown how to extend the use of the cumulant expansion to obtain a formal expression for this case that does not require one to average phase factors in the observable. This expression may be a promising starting point for a considerable improvement of the algorithm for more than one segment, and future work will focus on developing and testing an appropriate implementation. Importantly, however, while numerical evidence on model systems supports the claim that systematic improvements can be obtained with higher order iterations, an exact statement of the convergence properties of the method is still lacking, and further investigation is needed to formally assess the features of the scheme.

Acknowledgments

The authors are grateful to C. Pierleoni and M. Monteferrante for their substantial contributions to the earlier methods for symmetrized correlation functions summarized in this work. Funding from IIT-SEED grant No 259 “SIMBEDD” is also acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Feynman, R. Space-time approach to non-relativistic quantum mechanics. Rev. Mod. Phys 1948, 20, 367–387. [Google Scholar]
  2. Schulman, L.S. Techniques and Applications of Path Integration; Dover Publications: Dover, UK, 2005; p. 448. [Google Scholar]
  3. Van Vleck, J.H. The correspondence principle in the statistical interpretation of quantum mechanics. Proc. Natl. Acad. Sci. USA 1928, 14, 178–188. [Google Scholar]
  4. Miller, W.H. Classical S matrix: Numerical application to inelastic collisions. J. Chem. Phys 1970, 53, 3578. [Google Scholar]
  5. Miller, W.H. The semi-classical initial value representation: a potentially practical way for adding quantum effects to classical molecular dynamics simulations. J. Phys. Chem. A 2001, 105, 2942–2955. [Google Scholar]
  6. Kleinert, H. Path Integrals in Quantum Mechanics, Statistics, and Polymer Physics, and Financial Markets; World Scientific: River Edge, New Jersey, USA, 2004. [Google Scholar]
  7. Glauber, R. Quantum theory of optical coherence. Phys. Rev 1963, 130, 2529–2539. [Google Scholar]
  8. Kay, K.G. Integral expressions for the semiclassical time-dependent propagator. J. Chem. Phys 1994, 100, 4377. [Google Scholar]
  9. Caratzoulas, S.; Pechukas, P. Phase space path integrals in Monte Carlo quantum dynamics. J. Chem. Phys 1996, 104, 6265. [Google Scholar]
  10. Herman, M.F.; Kluk, E. A semiclassical justification for the use of non-spreading wavepackets in dynamics calculations. Chem. Phys 1984, 91, 27–34. [Google Scholar]
  11. Kluk, E.; Herman, M.F.; Davis, H.L. Comparison of the propagation of semiclassical frozen Gaussian wave functions with quantum propagation for a highly excited anharmonic oscillator. J. Chem. Phys 1986, 84, 326. [Google Scholar]
  12. Ankerhold, J.; Pollak, E.; Kay, K.G. The Herman Kluk approximation: Derivation and semiclassical corrections. Chem. Phys 2006, 322, 3–12. [Google Scholar]
  13. Shao, J.; Makri, N. Forward-Backward semiclassical dynamics with linear scaling. J. Phys. Chem. A 1999, 103, 9479. [Google Scholar]
  14. Herman, M.F.; Coker, D.F. Classical mechanics and the spreading of localized wave packets in condensed phase molecular systems. J. Chem. Phys 1999, 111. [Google Scholar]
  15. Hernandez, R.; Voth, G.A. Quantum time correlation functions and classical coherence. Chem. Phys 1998, 233, 243–255. [Google Scholar]
  16. Poulsen, J.A.; Nyman, G.; Rossky, P.J. Practical evaluation of condensed phase quantum correlation functions: A Feynman-Kleinert variational linearized path integral method. J. Chem. Phys 2003, 119, 12179. [Google Scholar]
  17. Shi, Q.; Geva, E. A relationship between semiclassical and centroid correlation functions. J. Chem. Phys 2003, 118, 8173. [Google Scholar]
  18. Poulsen, J.A.; Nyman, G.; Rossky, P.J. Static and dynamic quantum effects in molecular liquids: A linearized path integral description of water. Proc. Natl. Acad. Sci. USA 2005, 102, 6709–6714. [Google Scholar]
  19. Wigner, E. On the quantum correction for thermodynamic equilibrium. Phys. Rev 1932, 40, 749–759. [Google Scholar]
  20. Tully, J.C. Trajectory surface hopping approach to nonadiabatic molecular collisions: The reaction of H+ with D2. J. Chem. Phys 1971, 55, 562. [Google Scholar]
  21. Kapral, R.; Ciccotti, G. Mixed quantum classical dynamics. J. Chem. Phys 1999, 110, 8919. [Google Scholar]
  22. Kapral, R. Progress in the theory of mixed quantum classical dynamics. Annu. Rev. Phys. Chem 2006, 57, 129–157. [Google Scholar]
  23. Bonella, S.; Coker, D.F. LAND-map, a linearized approach to nonadiabatic dynamics using the mapping formalism. J. Chem. Phys 2005, 122, 194102. [Google Scholar]
  24. Bonella, S.; Montemayor, D.; Coker, D.F. Linearized path integral approach for calculating nonadiabatic time correlation functions. Proc. Natl. Acad. Sci. USA 2005, 102, 6715–6719. [Google Scholar]
  25. MacKernan, D.; Ciccotti, G.; Kapral, R. Trotter-based simulation of quantum classical dynamics. J. Phys. Chem. B 2008, 112, 424–432. [Google Scholar]
  26. Kim, H.; Nassimi, A.; Kapral, R. Quantum classical Liouville dynamics in the mapping basis. J. Chem. Phys 2008, 129, 084102. [Google Scholar]
  27. Nassimi, A.; Kapral, R. Mapping Approach for Quantum Classical Time Correlation Functions. Can. J. Chem 2009, 87, 880–890. [Google Scholar]
  28. Nielsen, S.; Kapral, R.; Ciccotti, G. Statistical mechanics of quantum classical systems. J. Chem. Phys 2001, 115, 5805. [Google Scholar]
  29. Agostini, F.; Caprara, S.; Ciccotti, G. Do we have a consistent non adiabatic quantum classical mechanics? Europhys. Lett 2007, 78, 30001. [Google Scholar]
  30. Bonella, S.; Monteferrante, M.; Pierleoni, C.; Ciccotti, G. Path integral based calculations of symmetrized time correlation functions. II. J. Chem. Phys 2010, 133, 164105. [Google Scholar]
  31. Schofield, P. Space-time correlation function formalism for slow neutron scattering. Phys. Rev. Lett 1960, 4, 239–240. [Google Scholar]
  32. Filinov, V. Wigner approach to quantum statistical mechanics and quantum generalization molecular dynamics methods. Part 1. Mol. Phys 1996, 88, 1517. [Google Scholar]
  33. Filinov, V. Wigner approach to quantum statistical mechanics and quantum generalization molecular dynamics method. Part 2. Mol. Phys 1996, 88, 1529. [Google Scholar]
  34. Miller, W.H.; Schwartz, S.D.; Tromp, J.W. Quantum mechanical rate constants for bimolecular reactions. J. Chem. Phys 1983, 79, 4889. [Google Scholar]
  35. Frantsuzov, P.A.; Mandelshtam, V.A. Quantum statistical mechanics with Gaussians: Equilibrium properties of van der Waals clusters. J. Chem. Phys 2004, 121, 9247–9256. [Google Scholar]
  36. Jadhao, V.; Makri, N. Iterative Monte Carlo for quantum dynamics. J. Chem. Phys 2008, 129, 161102. [Google Scholar]
  37. Bonella, S.; Monteferrante, M.; Pierleoni, C.; Ciccotti, G. Path integral based calculations of symmetrized time correlation functions. I. J. Chem. Phys 2010, 133, 164104. [Google Scholar]
  38. Monteferrante, M.; Bonella, S.; Ciccotti, G. Quantum dynamical structure factor of liquid neon via a quasiclassical symmetrized method. J. Chem. Phys 2013, 138, 054118. [Google Scholar]
  39. Monteferrante, M.; Bonella, S.; Ciccotti, G. Linearized symmetrized quantum time correlation functions calculation via phase pre-averaging. Mol. Phys 2011, 109, 3015–3027. [Google Scholar]
  40. Lindley, D.V. Introduction to Probability and Statistics from a Bayesian Viewpoint, Part 1, Probability; Cambridge University Press: Cambridge, UK, 1965. [Google Scholar]
  41. Causo, M.S.; Ciccotti, G.; Bonella, S.; Vuilleumier, R. An adiabatic linearized path integral approach for quantum time-correlation functions II: A cumulant expansion method for improving convergence. J. Phys. Chem. B 2006, 110, 16026–16034. [Google Scholar]
  42. Ceperley, D.M.; Dewing, M. The penalty method for random walks with uncertain energies. J. Chem. Phys 1999, 110, 9812. [Google Scholar]
  43. Kennedy, A.; Kuti, J. Noise without noise: A new Monte Carlo method. Phys. Rev. Lett 1985, 54, 2473–2476. [Google Scholar]
  44. Sprik, M.; Klein, M.; Chandler, D. Staging: A sampling technique for the Monte Carlo evaluation of path integrals. Phys. Rev. B 1985, 31, 4234. [Google Scholar]
  45. Poulsen, J.; Scheers, J.; Nyman, G.; Rossky, P. Quantum density fluctuations in liquid neon from linearized path-integral calculations. Phys. Rev. B 2007, 75, 1–5. [Google Scholar]
  46. Beutier, J.; Bonella, S.; Monteferrante, M.; Vuilleumier, R.; Ciccotti, G. Gas phase infrared spectra via the phase integration quasi classical method. Mol. Phys 2013, accepted. [Google Scholar]
  47. Cao, J.; Voth, G.A. A new perspective on quantum time correlation functions. J. Chem. Phys 1993, 99, 10070. [Google Scholar]
  48. Craig, I.R.; Manolopoulos, D.E. Quantum statistics and classical mechanics: Real time correlation functions from ring polymer molecular dynamics. J. Chem. Phys 2004, 121, 3368–3373. [Google Scholar]
Figure 1. Schematic representation of the integrand in the coordinate representation of the symmetrized time correlation function; see the text.
Figure 2. Schematic representation of the break up of the propagators in complex time: the short complex time propagators are represented as the segments with arrows along the forward and backward path, and the pairing mentioned in the text to obtain the K propagators is indicated by the red frame.
Figure 3. Graphic representation of the propagators in real and imaginary times contributing to the approximate Schofield function for the case L = 1 (upper panel) and L = 2 (lower panel). The horizontal axis is real time, while the vertical axis is inverse temperature. The mean and difference coordinates in the thermal paths are represented as red dots on the vertical planes (in the upper panel, for example, ν = 6, i.e., we use six beads to represent the thermal path integrals at inverse temperature β/2). Segments of classical propagation in phase space are represented as continuous red curves in the horizontal planes. The golden circles indicate the connection between the coordinate-momentum representation of the dynamics in real time (horizontal planes) and the representation of the dynamics in imaginary time that takes place in coordinate space (vertical planes).
Figure 4. Dynamic structure factor for liquid neon (see the text). The solid green line shows the experimental curve; our results (with error bars) are the red triangles. We also report for comparison results obtained with the linearized IVR method by Poulsen et al. (see the Introduction); blue circles.
