Article

On Extracting Probability Distribution Information from Time Series

by Andres M. Kowalski 1,2, Maria Teresa Martin 2, Angelo Plastino 2,3,* and George Judge 4,†
1 CIC Buenos Aires, C. C. 67, 1900 La Plata, Argentina
2 Instituto de Física La Plata – CCT-CONICET, C.C. 727, 1900 La Plata, Argentina
3 IFISC (CSIC-UIB), Campus Universitat Illes Balears, E-07122 Palma de Mallorca, Spain
4 207 Giannini Hall, University of California, Berkeley, Berkeley, CA 94720, USA
* Author to whom correspondence should be addressed.
† Member of the Giannini Foundation.
Entropy 2012, 14(10), 1829-1841; https://doi.org/10.3390/e14101829
Submission received: 15 August 2012 / Revised: 8 September 2012 / Accepted: 21 September 2012 / Published: 28 September 2012

Abstract: Time series (TS) are employed in a variety of academic disciplines. In this paper we focus on extracting probability density functions (PDFs) from TS to gain insight into the underlying dynamic processes. In discussing this “extraction” problem, we consider two popular approaches that we identify as histograms and Bandt–Pompe. We use an information-theoretic method to objectively compare the information content of the concomitant PDFs.

1. Introduction

Time series (TS) data originating from different physical/natural systems/processes usually contain extremely valuable information. Traditionally, such information is conveyed in the form of probability distribution functions (PDFs) that, in some sense, represent the TS. The problem we discuss is how best to extract that information from the time series. This, of course, is tantamount to asking for the best PDF that represents it. The indirect data always possess a stochastic component due to the omnipresent dynamical noise [1,2]. Consequently, the specific procedure one employs to extract a PDF from a given TS seriously affects the quality of the information one may gain.
Studying this extraction issue is the purpose of this paper. To do so, we consider two popular PDF-extraction methodologies and analyze their information content. We then introduce a basis for quantitatively assessing the amount of information a PDF contains.
The extraction procedures are considered in Section 2. Section 3 is devoted to the CR-divergence family of entropic functionals. Two scenarios that serve as applications are described in Section 4, and results are presented in Section 5. Finally, some conclusions are drawn in Section 6.

2. Two Popular Extraction Procedures

In this Section we generate an appropriate time series (TS) and proceed to extract from it a suitable PDF. The crucial issue is getting the “best” PDF, one that will properly “capture” the physics at hand, the nature of the associated underlying natural process, and/or the features of the TS-generating nonlinear dynamical system. Two methodologies have become popular in this respect; we describe them in the subsections below.

2.1. PDF Based on Histograms

In order to extract a PDF via amplitude statistics, the interval [a, b] (with a and b the minimum and maximum of the time series S(t) = {x_t; t = 1, …, M}) is first divided into a finite number N_bin of non-overlapping subintervals A_i: [a, b] = ⋃_{i=1}^{N_bin} A_i, with A_i ∩ A_j = ∅ for i ≠ j. One then employs the usual histogram method, which is based on counting the relative frequencies of the time-series values within each subinterval.
It should be clear that the resulting PDF lacks any information regarding temporal ordering (temporal causality). The only pieces of information used are the x_t-values themselves, which determine inclusion within a given bin; the temporal order (that is, the subindex t) is ignored. In addition, it is necessary to consider a judiciously chosen, optimal value for N_bin (see De Micco et al. [3]).
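As a concrete illustration, a minimal Python sketch of this amplitude-statistics (histogram) extraction is given below. The function name, the synthetic series and the choice N_bin = 50 are ours and purely illustrative; selecting an optimal N_bin, as discussed by De Micco et al. [3], is a separate issue not addressed here.

```python
import numpy as np

def histogram_pdf(x, n_bins):
    """Amplitude-statistics PDF: relative frequencies of the series values in
    n_bins non-overlapping subintervals of [min(x), max(x)].
    The temporal order of the series is ignored by construction."""
    x = np.asarray(x, dtype=float)
    counts, _ = np.histogram(x, bins=n_bins, range=(x.min(), x.max()))
    return counts / counts.sum()

# Illustrative use on a synthetic series
rng = np.random.default_rng(0)
series = rng.uniform(size=10_000)
p_hist = histogram_pdf(series, n_bins=50)
print(p_hist.sum())   # 1.0: the PDF is normalized
```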

2.2. PDF Based on Bandt and Pompe’s Methodology

To use the Bandt and Pompe [4] methodology for evaluating the probability distribution P associated with the time series (dynamical system), one starts by considering partitions of the pertinent D-dimensional space that will hopefully “reveal” relevant details of the ordinal structure of a given one-dimensional time series S(t) = {x_t; t = 1, …, M} with embedding dimension D > 1 and time delay τ. In the following we take τ = 1 as the time delay [4]. We are interested in the “ordinal patterns” of order D [4,5] generated by
\[ (s) \mapsto \left( x_{s-(D-1)},\, x_{s-(D-2)},\, \ldots,\, x_{s-1},\, x_{s} \right) \tag{1} \]
which assigns to each time s the D-dimensional vector of values at times s, s−1, …, s−(D−1). Clearly, the greater the D-value, the more information on the past is incorporated into our vectors. By the “ordinal pattern” related to the time (s), we mean the permutation π = (r_0, r_1, …, r_{D−1}) of (0, 1, …, D−1) defined by
\[ x_{s-r_{D-1}} \le x_{s-r_{D-2}} \le \cdots \le x_{s-r_{1}} \le x_{s-r_{0}} \tag{2} \]
In order to get a unique result we set r_i < r_{i−1} if x_{s−r_i} = x_{s−r_{i−1}}. This is justified if the values of x_t have a continuous distribution, so that equal values are very unusual. Otherwise, it is possible to break these equalities by adding small random perturbations.
Thus, for the D! possible permutations π of order D, the associated relative frequencies can be naturally computed as the number of times a particular order sequence is found in the time series, divided by the total number of sequences. The probability distribution P = {p(π)} is defined by
\[ p(\pi) = \frac{\#\{\, s \mid s \le M-D+1;\ (s)\ \text{has type}\ \pi \,\}}{M-D+1} \tag{3} \]
In this expression, the symbol # stands for “number”.
The procedure can be illustrated with a simple example; let us assume that we start with the time series {1, 3, 5, 4, 2, 5, …} and set the embedding dimension D = 4. In this case the state space is divided into 4! partitions and 24 mutually exclusive permutation symbols are considered. The first 4-dimensional vector is (1, 3, 5, 4). According to Equation (1), this vector corresponds to (x_{s−3}, x_{s−2}, x_{s−1}, x_s). Following Equation (2), we find that x_{s−3} ≤ x_{s−2} ≤ x_s ≤ x_{s−1}. Then, the ordinal pattern that fulfills Equation (2) is [3, 2, 0, 1]. The second 4-dimensional vector is (3, 5, 4, 2), and [0, 3, 1, 2] is its associated permutation, and so on. For the computation of the Bandt–Pompe PDF, we follow the very fast algorithm described by Keller and Sinn [6], in which the different ordinal patterns are generated in lexicographic order.
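To make the counting concrete, the following Python sketch mirrors the procedure just described and reproduces the patterns of the worked example. It is only an illustrative, naive implementation (the computations reported here use the much faster Keller–Sinn algorithm [6]); the function name and the way ties are broken in code are our own choices.

```python
import numpy as np
from collections import Counter

def bandt_pompe_pdf(x, D, tau=1):
    """Bandt-Pompe PDF of ordinal patterns of order D (time delay tau).
    A pattern lists the lags (from D-1 down to 0) in ascending order of the
    corresponding values; ties are broken in favor of the smaller lag."""
    x = np.asarray(x, dtype=float)
    n_windows = len(x) - (D - 1) * tau
    counts = Counter()
    for s in range(n_windows):
        w = x[s:s + D * tau:tau]            # window read as lags D-1, ..., 1, 0
        order = sorted(range(D), key=lambda j: (w[j], D - 1 - j))
        counts[tuple(D - 1 - j for j in order)] += 1
    return {pattern: c / n_windows for pattern, c in counts.items()}

# Worked example from the text: D = 4, three windows, each pattern seen once.
print(bandt_pompe_pdf([1, 3, 5, 4, 2, 5], D=4))
# {(3, 2, 0, 1): 0.333..., (0, 3, 1, 2): 0.333..., (1, 2, 0, 3): 0.333...}
```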
The Bandt–Pompe methodology is not restricted to time series representative of low-dimensional dynamical systems. It can be applied to any type of TS (regular, chaotic, noisy, or reality-based), with a very weak stationarity assumption: the probability of finding x_t < x_{t+k} should not depend on t [4]. It is assumed, of course, that enough data are available for a correct embedding procedure. The embedding dimension D plays an important role in the evaluation of the appropriate probability distribution, because D determines the number of accessible states, D!. The minimum acceptable length of the time series that one needs in order to extract reliable statistics is M ≫ D!.

3. Quantifier of a PDF’s Information Content: the CR Divergence Measure

Our goal is to assess the informational content of the two methodologies reviewed above. To this end we use, following Judge and Mittelhammer [7], the Cressie–Read (CR) family of divergence measures [8]. This family provides an objective assessment of how much information a given PDF contains relative to a second PDF; in this sense, CR divergences are extensions of the celebrated Kullback–Leibler divergence. For two normalized, discrete probability distribution functions (PDFs) p and q, one has [8]
\[ I(p, q, \gamma) = \frac{1}{\gamma(\gamma+1)} \sum_{i=1}^{n} p_i \left[ \left( \frac{p_i}{q_i} \right)^{\gamma} - 1 \right] \tag{4} \]
where γ is a parameter that indexes members of the CR family and the p_i represent the subject probabilities. The q_i are interpreted as reference probabilities. Being probabilities, the usual PDF characteristics p_i, q_i ∈ [0, 1] for all i are assumed. The CR family of power divergences is defined through a class of additive convex functions, and the CR power divergence measure encompasses a broad family of test statistics that leads to a broad family of likelihood functions within a moments-based estimation context. In an extremum-metrics scenario, the general Cressie–Read family of power divergence statistics represents a flexible family of pseudo-distance measures from which to derive empirical probabilities associated with indirect noisy micro and macro data [8].
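A minimal Python implementation of Equation (4) could read as follows. The two Kullback–Leibler limits discussed in Section 3.2 are included for completeness; strictly positive components are assumed in those limits, and the function name is our own.

```python
import numpy as np

def cressie_read(p, q, gamma):
    """Cressie-Read divergence I(p, q, gamma) of Equation (4).
    The limits gamma -> 0 and gamma -> -1 are the Kullback-Leibler
    divergences KL(p||q) and KL(q||p); positive entries assumed there."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(gamma, 0.0):
        return np.sum(p * np.log(p / q))      # lim gamma -> 0
    if np.isclose(gamma, -1.0):
        return np.sum(q * np.log(q / p))      # lim gamma -> -1
    return np.sum(p * ((p / q) ** gamma - 1.0)) / (gamma * (gamma + 1.0))

# gamma = 1 (the value used in this paper), against a uniform reference
p = np.array([0.5, 0.3, 0.2])
q = np.full(3, 1.0 / 3.0)
print(cressie_read(p, q, 1.0))   # 0.5 * (sum p_i^2 / q_i - 1) = 0.07
```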
As γ varies, the resulting estimators that minimize power divergence exhibit qualitatively different sampling behavior. As an illustration, a solution to the stochastic inverse problem, for any given choice of the parameter, may be formulated by recourse to empirical sample moments (as constraints) [9].
To place the CR family of power divergence statistics in an entropic perspective, we note that there are corresponding Rényi and Tsallis families of entropy functionals and divergence measures [10]. As demonstrated by Gorban et al. [9], over defined ranges of the divergence measures the CR and entropy families are equivalent. Relative to the Rényi and Tsallis families, the CR family has a more convenient normalization factor, 1/[γ(γ+1)], and has proper convexity for all powers, both positive and negative. The CR family also exhibits separation of variables for independent subsystems [9] over the range of γ; this separation of variables permits the partitioning of the state space and is valid for divergences in the form of a convex function. In this preliminary, illustrative effort we use γ = 1.

3.1. The Information We Seek to Gain

The family of CR divergences gives us an essential piece of information: given two PDFs p and q, I(p, q, γ) permits us to determine what new information is contained in p relative to that contained in q. If q is a uniform PDF, it conveys no information, and I then measures the information content of p. In the present analysis, p will be associated with the Bandt–Pompe PDF and q with the histogram procedure.

3.2. Three Main Variants of I(p, q, γ)

Three discrete CR alternatives for I(p, q, γ), with γ ∈ {−1, 0, 1}, have received the most attention, empirically and in the literature, to date. In reviewing these, we adopt the notation CR(γ) = I(p, q, γ), where the arguments p and q are tacitly understood to be evaluated at the relevant vector values. In the two special cases γ = 0 and γ = −1, the notations CR(0) and CR(−1) are to be interpreted as the continuous limits lim_{γ→0} and lim_{γ→−1}, respectively.
In the discrete instance in which the PDFs have n components, we let q be the uniform distribution q_i = 1/n; that is, the reference distribution q is the empirical distribution function (EDF) associated with the observed sample data.
The Kullback–Leibler p/q divergence is easily seen to emerge if we set γ = 0, while setting γ = −1 yields the Kullback–Leibler q/p divergence.
Minimizing CR(−1) is then equivalent to maximizing ∑_{i=1}^{n} ln p_i and leads to the traditional maximum empirical log-likelihood (MEL) objective function. Minimizing CR(0) is equivalent to maximizing −∑_{i=1}^{n} p_i ln p_i, which is Shannon’s entropy, and leads to the maximum empirical exponential likelihood (MEEL) objective function. Finally, minimizing CR(1) is equivalent to maximizing −(n/2) ∑_{i=1}^{n} (p_i² − n⁻¹) and leads to the maximum log-Euclidean likelihood (MLEL) objective function. Note that the latter objective function is also equivalent to minimizing the sum of squares (p − n⁻¹1)′(p − n⁻¹1), in self-explanatory notation.
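The correspondence between the limiting cases and their objective functions can be checked numerically with a few lines of Python. The sketch below is ours and assumes a uniform reference q_i = 1/n; it verifies, in particular, that CR(1) coincides with (n/2) times the squared Euclidean distance of p from the uniform vector, so that minimizing CR(1) and minimizing the sum of squares are indeed the same problem.

```python
import numpy as np

def mel_objective(p):      # maximize sum ln p_i        <-> minimize CR(-1)
    return np.sum(np.log(p))

def meel_objective(p):     # maximize Shannon entropy   <-> minimize CR(0)
    return -np.sum(p * np.log(p))

def sum_of_squares(p):     # (p - n^{-1} 1)'(p - n^{-1} 1), minimized by MLEL
    n = len(p)
    return np.sum((p - 1.0 / n) ** 2)

# Check: with q uniform, CR(1) = (n/2) * sum_of_squares(p)
p = np.array([0.5, 0.3, 0.2])
n = len(p)
q = np.full(n, 1.0 / n)
cr1 = np.sum(p * (p / q - 1.0)) / 2.0
print(np.isclose(cr1, (n / 2.0) * sum_of_squares(p)))   # True
```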

4. Two Applications

We apply the preceding framework to time series generated by
  • a celebrated nonlinear map,
  • a physical model.

4.1. Essentials of the Logistic Map: A Short Review

The celebrated logistic map yields perhaps the simplest example of chaos and is thus a useful tool for illustrating new concepts in the treatment of dynamical systems. We focus on the quadratic map F: x_n → x_{n+1} [11,12,13], described by the ecologically motivated, dissipative system given by the first-order difference equation
\[ x_{n+1} = r\, x_n (1 - x_n) \tag{5} \]
where 0 ≤ x_n ≤ 1 and r (0 ≤ r ≤ 4) represents the growth rate. The corresponding Lyapunov exponent is evaluated numerically [11] via
\[ \Lambda(r) = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} \ln \left| r\,(1 - 2x_n) \right| \tag{6} \]
where N is the number of iterations. Figure 1a,b display the well-known bifurcation diagram and the corresponding Lyapunov exponent (Λ), respectively, as a function of the parameter r in the range 3.5 ≤ r ≤ 4.0, with Δr = 0.0001. We evaluate the logistic map numerically, starting from a random initial condition. The first 10^4 iterations are disregarded (transient states), and the next N = 10^6 are used for the Lyapunov evaluation via Equation (6) and for the information-theory quantifiers. The resulting plot is exceedingly well known and we repeat it below, together with its analysis, so that our discourse becomes self-contained.
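Before describing the figure, we give a compact Python sketch of the numerical procedure just outlined; it is ours and, for brevity, uses a much shorter run than the N = 10^6 iterations employed in the paper.

```python
import numpy as np

def logistic_series(r, n, n_transient=10_000, seed=0):
    """Iterate x_{n+1} = r x_n (1 - x_n) from a random initial condition,
    discarding the transient and returning the next n values."""
    rng = np.random.default_rng(seed)
    x = rng.uniform()
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def lyapunov(series, r):
    """Numerical Lyapunov exponent of Equation (6)."""
    return np.mean(np.log(np.abs(r * (1.0 - 2.0 * series))))

x = logistic_series(r=4.0, n=10_000)
print(lyapunov(x, r=4.0))   # close to ln 2 ~ 0.693 at r = 4
```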
Figure 1a depicts the orbit diagram for the logistic map as a function of the parameter r, while Figure 1b displays the Lyapunov exponent vs. r. In the bifurcation diagram (Figure 1a), for fixed r, a periodic orbit consists of a countable set of points, while a chaotic attractor fills out dense bands within the unit interval. For r ∈ [0, 1) the fixed point x_n = 0 is stable. For r ∈ [1, 3) there exists only a single steady-state solution, given by x_n = 1 − 1/r. Increasing the control parameter into r ∈ [3, r_∞] forces the system to undergo period-doubling bifurcations. Cycles of period 2, 4, 8, 16, 32, … occur, and if r_n denotes the value of r at which a 2^n-cycle first appears, the successive r_n converge to the limiting value r_∞ ≈ 3.5699456 [11,12,13]. At r_∞ the final-state diagram splits into two distinct parts: the period-doubling tree on the left and the area governed by chaos on the right. Here the attractor becomes chaotic and eventually completely fills up the interval via pairwise merging of bands.
Figure 1. (a) Orbit diagram for the logistic map as a function of the parameter r; (b) Lyapunov exponent for the logistic map as a function of the parameter r.

4.2. The Classical-Quantum Transition and a Semi-Classical Model

We test our PDF-extracting approaches with reference to an important physical problem. The classical limit of quantum mechanics (CLQM) is a subject that continues to attract much attention and can be regarded as one of the frontiers of physics research. Certainly, it is the source of much exciting discussion (see, for instance, [14,15] and references therein). A particularly exciting sub-theme is that of “quantum” chaotic motion with regard to the classical limit. Recent efforts provide many examples, although an adequate definition of the underlying phenomena is understood in diverse fashion by different authors (see [16] and references therein).
Since the introduction of the decoherence concept in the early 1980s by Zeh, Zurek, and Habib [17,18,19], among others, the emergence of the classical world from quantum mechanics has been a subject of much interest. It is clear that much quantum insight is to be gained from semiclassical perspectives, and several methodologies, such as the WKB or the Born–Oppenheimer approach, are available. The two interacting systems considered by Bonilla and Guinea [20], Cooper et al. [21], and Kowalski et al. [22,23] constitute composite models in which one system is classical and the other is quantal. This makes sense whenever the quantum effects of one of the two systems are negligible in comparison with those of the other. Examples include Bloch equations, two-level systems interacting with an electromagnetic field within a cavity, and collective nuclear motion. Here we deal with a special bipartite system that represents the zeroth-mode contribution of a strong external field to the production of charged meson pairs [21,23]. The corresponding Hamiltonian is
\[ \hat{H} = \frac{1}{2} \left( \frac{\hat{p}^2}{m_q} + \frac{P_A^2}{m_{cl}} + m_q \omega^2 \hat{x}^2 \right) \tag{7} \]
where (i) x̂ and p̂ are quantum operators, (ii) A and P_A are classical canonical conjugate variables, and (iii) ω² = ω_q² + e²A² is an interaction term that introduces nonlinearity, ω_q being a frequency. The quantities m_q and m_cl are the masses corresponding to the quantum and classical systems, respectively. As shown in [22], in dealing with Equation (7) one faces the following autonomous system of nonlinear coupled equations:
\[ \frac{d\langle \hat{x}^2 \rangle}{dt} = \frac{\langle \hat{L} \rangle}{m_q}, \qquad \frac{d\langle \hat{p}^2 \rangle}{dt} = -m_q \omega^2 \langle \hat{L} \rangle, \]
\[ \frac{d\langle \hat{L} \rangle}{dt} = 2 \left( \frac{\langle \hat{p}^2 \rangle}{m_q} - m_q \omega^2 \langle \hat{x}^2 \rangle \right), \tag{8} \]
\[ \frac{dA}{dt} = \frac{P_A}{m_{cl}}, \qquad \frac{dP_A}{dt} = -e^2 m_q A \langle \hat{x}^2 \rangle, \]
\[ \hat{L} = \hat{x}\hat{p} + \hat{p}\hat{x} \]
The system of Equations (8) follows immediately from Ehrenfest's relations [22] and is solved numerically. The analysis of the system is done by plotting quantities of interest against a special variable, called E_r in [22], that ranges in [1, ∞). For E_r = 1 the system is purely quantal. As E_r grows, quantum features are gradually lost and one enters the so-called semiclassical region. For very large E_r-values the system is classical. Time series for the system are extracted from a “signal” given by the E_r-evolution of appropriate expectation values of the dynamical variables.
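For concreteness, a rough integration sketch of Equations (8) in Python (using SciPy) is given below. The parameter values follow the figures (m_q = m_cl = ω_q = e = 1), but the initial condition is hypothetical, and the mapping from initial data to the parameter E_r used in [22] is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless parameters as in the figures
m_q = m_cl = omega_q = e = 1.0

def rhs(t, y):
    """Right-hand side of the semiclassical system, Equations (8).
    State vector y = (<x^2>, <p^2>, <L>, A, P_A)."""
    x2, p2, L, A, PA = y
    w2 = omega_q**2 + e**2 * A**2          # omega^2 = omega_q^2 + e^2 A^2
    return [L / m_q,
            -m_q * w2 * L,
            2.0 * (p2 / m_q - m_q * w2 * x2),
            PA / m_cl,
            -e**2 * m_q * A * x2]

# Hypothetical initial condition, for illustration only
y0 = [1.0, 1.0, 0.0, 0.05, 0.0]
sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=0.01, dense_output=True)
signal = sol.y[0]   # e.g. the <x^2>(t) "signal" from which a TS would be built
```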

5. Results

We present in this section our main results regarding the two systems under scrutiny here.

5.1. CR Results for the Logistic Map

In Figure 2 we compare three different PDFs: one obtained using the histogram technique, another pertaining to the BP methodology, and, for the sake of easy reference, the uniform PDF, which conveys no information at all.
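For orientation, the blue curve of Figure 2, I(histogram, uniform, 1), can be sketched by combining the pieces introduced above, as in the Python snippet below; the series length, bin count and r-grid are illustrative rather than the values used in the paper. Reproducing the black curve, I(B-P, histogram, 1), additionally requires placing the two PDFs on a common support, which this sketch does not attempt.

```python
import numpy as np

# Sweep r and compute I(histogram, uniform, 1): the information content of the
# histogram PDF relative to a non-informative reference (blue curve of Figure 2).
# Uses logistic_series(), histogram_pdf() and cressie_read() sketched earlier.
r_values = np.arange(3.5, 4.0, 0.005)
n_bins = 64
q_uniform = np.full(n_bins, 1.0 / n_bins)
curve = [cressie_read(histogram_pdf(logistic_series(r, n=20_000), n_bins),
                      q_uniform, 1.0)
         for r in r_values]
print(len(curve), min(curve), max(curve))
```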
(1) At r = 3.5, cycles of period 2, 4, 8, 16, 32, … appear, and we have chaos at r ≈ 3.57, which is clearly displayed in the graph. The histogram results differ greatly from the uniform PDF (blue line), which suggests that their information content is significant. Notice that the BP line (black) is zero at the periodic windows. This is easily understood, since temporal causality (the key BP feature) is irrelevant for periodic motion.
(2) In the r-interval [3.57, 3.63] chaos reigns: now causality and ordinal patterns become of the essence. The time-series values for these r-values exhibit an almost uniform distribution. This means that the histogram PDF does not yield any information not already possessed by the uniform PDF (blue line at zero)! This kind of behavior repeats itself as chaotic and periodic windows alternate, and the graph clearly depicts the effect. Intermittent behavior near the beginning and the end of a periodic window is also detected, where the histogram and BP lines differ from zero. The histogram technique “acquires” extra information over the uniform PDF whenever order prevails; conversely, when chaos prevails, BP gains information over histograms. With the above results in mind, we now turn to a physical scenario.
Figure 2. I(B-P,histogram,1) and I(histogram,uniform,1) for the logistic map as a function of the parameter r.

5.2. CR Results for the Physical Model

Signal vs. time graphs pertaining to the physical model are presented in Figure 3. Subplots 1–5 refer to solutions of the system of Equations (8) (semi-quantum signal) for representative fixed values of E_r. Subplots 6–10 refer to solutions of the classical counterpart of Equations (8) (classical, I = 0). We use (a) m_q = m_cl = ω_q = e = 1 and (b) E = 0.6, ⟨L̂⟩(0) = L(0) = 0, and A(0) = 0 as the initial conditions. The uppermost-left plot corresponds to the “pure quantum” signal; at the bottom right we plot the classical signal vs. time. The remaining plots refer to intermediate situations. All quantities are dimensionless. The results for our three PDFs are compared in Figure 4, as was done earlier for the logistic map.
Figure 3. Signal vs. time graphs. Subplots 1–5: solutions of the system of Equations (8) (semi-quantum signal) for representative fixed values of E_r. Subplots 6–10: solutions of the classical counterpart of Equations (8) (classical, I = 0). We took m_q = m_cl = ω_q = e = 1. Initial conditions: E = 0.6, ⟨L̂⟩(0) = L(0) = 0, and A(0) = 0. The uppermost-left plot corresponds to the “pure quantum” signal; at the bottom right we plot the classical signal vs. time. The remaining plots are intermediate situations. All quantities are dimensionless.
Figure 4. Normalized Cressie–Read divergence (γ = 1). I(B-P, histogram, 1) and I(histogram, uniform, 1) are plotted vs. E_r.
The CR results indicate that the performance of BP is so good that it renders consideration of the histogram PDF irrelevant. Notice that the horizontal coordinate E_r in our graphs is, essentially, an effective Planck constant: at E_r = 1 we have the actual Planck value, and it vanishes as E_r tends to infinity. We have a signal that represents the system's state at a given E_r. Sampling that signal, we extracted several PDFs: one via histograms, another via the B-P approach. What the CR-divergence method suggests is that the histogram PDF does not provide any information about the physics at hand; this occurs because a uniform PDF contains as much information about the physics as the histogram one does!
It seems important to emphasize the following point. Periodicity in the quantal zone is seen in this work to be quite different from that of the logistic map, as indicated by the behavior of the CR-divergence measure. In the logistic case we are concerned with just 2, 4, 8, … states, while in the physical example there exists an infinite number of periodic states for which causality turns out to be an important factor, because the BP values are significantly larger than the histogram values (the black line touches the unity value!).
Finally, in Figure 5 we compare I(B-P, histogram, 1), in black, with I(histogram, B-P, 1), in red. This comparison indicates that the histogram approach does not add any information to that provided by the BP PDF.
Figure 5. Normalized Cressie–Read divergence (γ = 1). I(B-P, histogram, 1) and I(histogram, B-P, 1) are plotted vs. E_r.

6. Conclusions

The study and characterization of time series by recourse to conventional parametric statistical tools assumes that the underlying probability distribution function (PDF) is given. However, for extracting an unknown PDF from the data, no universally superior nonparametric procedure exists. Our present work involves the use of Bandt and Pompe's methodology for evaluating the PDF associated with scalar time-series data, using a symbolization technique. The symbolic data are created by ranking the values of the series and are defined by reordering the embedded data in ascending order, which amounts to a phase-space reconstruction with embedding dimension (pattern length) D and time lag τ. In this way it is possible to quantify the diversity of the ordering symbols (patterns) derived from a scalar time series using the Bandt–Pompe PDF. It is important to note that the appropriate symbol sequence arises naturally from the time series, without any model-based assumptions.
We have employed the CR-divergence measures in order to obtain a definite, quantitative assessment of the superior performance of the BP-approach.
To this end, we have compared the BP approach, via CR-divergence measures, with two other PDFs obtained in different fashions, with reference to the logistic map and to a well-known physical problem. We have confirmed the superiority of BP relative to its counterparts and have also gained some insight into details of the physical problem.
Previous papers have examined the relative merits of the histogram vs. the BP approach (see, for instance, the excellent review [5] and the references therein). However, those comparisons rely on visual inspection of the concomitant curves so as to compare signals with their PDF representatives (see, for instance, [24]). Our work adds an objective assessment, via CR divergences, that goes beyond mere visualization.

Acknowledgments

This work was partially supported by project PIP1177 of CONICET (Argentina) and by project FIS2008-00781/FIS (MICINN)–FEDER (EU) (Spain, EU).

References

  1. Wold, H. A Study in the Analysis of Stationary Time Series; Almqvist and Wiksell: Uppsala, Sweden, 1938. [Google Scholar]
  2. Kurths, J.; Herzel, H. Probability theory and related fields. Phys. D 1987, 25, 165. [Google Scholar] [CrossRef]
  3. De Micco, L.; González, C.M.; Larrondo, H.A.; Martín, M.T.; Plastino, A.; Rosso, O.A. Randomizing nonlinear maps via symbolic dynamics. Physica A 2008, 387, 3373–3383. [Google Scholar] [CrossRef]
  4. Bandt, C.; Pompe, B. Permutation entropy: A natural complexity measure for time series. Phys. Rev. Lett. 2002, 88, 174102:1–174102:4. [Google Scholar] [CrossRef]
  5. Zanin, M.; Zunino, L.; Rosso, O.A.; Papo, D. Permutation entropy and its main biomedical and econophysics applications: A review. Entropy 2012, 14, 1553–1577. [Google Scholar] [CrossRef]
  6. Keller, K.; Sinn, M. Ordinal analysis of time series. Physica A 2005, 356, 114. [Google Scholar] [CrossRef]
  7. Judge, G.; Mittelhammer, R. An Information Theoretic Approach to Econometrics; Cambridge University Press: New York, NY, USA, 2012. [Google Scholar]
  8. Cressie, N.; Read, T. Goodness of fit statistics for discrete multivariate data. J. R. Stat. Soc. Ser. B 1984, 46, 440. [Google Scholar]
  9. Gorban, A.N.; Gorban, P.A.; Judge, G. The Markov Ordering Approach. Entropy 2010, 12, 1145–1193. [Google Scholar] [CrossRef]
  10. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  11. Sprott, J.C. Chaos and Time Series Analysis; Oxford University Press: Oxford, UK, 2004. [Google Scholar]
  12. Peitgen, H.O.; Jürgens, H.; Saupe, D. Chaos and Fractals, New Frontiers of Science; Springer-Verlag: New York, NY, USA, 1992. [Google Scholar]
  13. Crutchfield, J.P.; Farmer, J.D.; Huberman, B.A. Fluctuation and simple chaotic dynamics. Phys. Rep. 1982, 92, 45–82. [Google Scholar] [CrossRef]
  14. Halliwell, J.J.; Yearsley, J.M. Arrival times, complex potentials, and decoherent histories. Phys. Rev. A 2009, 79, 062101:1–062101:17. [Google Scholar] [CrossRef]
  15. Everitt, M.J.; Munro, W.J.; Spiller, T.P. Quantum-classical crossover of a field mode. Phys. Rev. A 2009, 79, 032328:1–032328:6. [Google Scholar] [CrossRef]
  16. Kowalski, A.M.; Martín, M.T.; Plastino, A.; Proto, A.N. Classical limit and chaotic regime in a semi-quantum Hamiltonian. Int. J. Bifurc. Chaos 2003, 13, 2315–2325. [Google Scholar] [CrossRef]
  17. Zeh, H.D. Why Bohm's quantum theory? Found. Phys. Lett. 1999, 12, 197–200. [Google Scholar] [CrossRef]
  18. Zurek, W.H. Pointer basis of quantum apparatus: Into what mixture does the wave packet collapse? Phys. Rev. D 1981, 24, 1516–1525. [Google Scholar] [CrossRef]
  19. Zurek, W.H. Decoherence, einselection, and the quantum origins of the classical. Rev. Mod. Phys. 2003, 75, 715–775. [Google Scholar] [CrossRef]
  20. Bonilla, L.L.; Guinea, F. Collapse of the wave packet and chaos in a model with classical and quantum degrees of freedom. Phys. Rev. A 1992, 45, 7718–7728. [Google Scholar] [CrossRef] [PubMed]
  21. Cooper, F.; Dawson, J.; Habib, S.; Ryne, R.D. Chaos in time-dependent variational approximations to quantum dynamics. Phys. Rev. E 1998, 57, 1489–1498. [Google Scholar] [CrossRef]
  22. Kowalski, A.M.; Martín, M.T.; Nuñez, J.; Plastino, A.; Proto, A.N. Quantitative indicator for semiquantum chaos. Phys. Rev. A 1998, 58, 2596–2599. [Google Scholar] [CrossRef]
  23. Kowalski, A.M.; Plastino, A.; Proto, A.N. Classical limits. Phys. Lett. A 2002, 297, 162–172. [Google Scholar] [CrossRef]
  24. Kowalski, A.M.; Martín, M.T.; Plastino, A.; Rosso, O.A. Bandt-Pompe approach to the classical-quantum transition. Phys. D 2007, 233, 21–31. [Google Scholar] [CrossRef]
