Opinion

Entropy, Economics, and Criticality

by Michael S. Harré
Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Sydney 2006, Australia
Entropy 2022, 24(2), 210; https://doi.org/10.3390/e24020210
Submission received: 3 December 2021 / Revised: 21 January 2022 / Accepted: 25 January 2022 / Published: 28 January 2022

Abstract

Information theory is a well-established method for the study of many phenomena, and more than 70 years after Claude Shannon first described it in A Mathematical Theory of Communication it has been extended well beyond his initial vision. It is now an interdisciplinary tool used for everything from measuring ‘causal’ information flow to inferring complex computational processes, and it plays an important role in fields as diverse as neuroscience, artificial intelligence, quantum mechanics, and astrophysics. In this article, I provide a selective review of an aspect of information theory that has received less attention than many others: its use as a tool for understanding, modelling, and detecting non-linear phenomena in finance and economics. Although some progress has been made, this area remains under-developed and, I argue, has considerable scope for further development.

1. Introduction

Information theory as a tool for economics has a long history and at this point is well-established in many sub-fields. This foundational work was carried out by authors such as Kolmogorov [1], Theil [2], Wilson [3], Georgescu-Roegen [4], and Aoki [5], each contributing significantly to quite different fields within economics. A recent review of information theory at the agent level in economics can be found in [6], and a broader review of entropy in economics over the last 150 years can be found in [7]. Recently, many others have contributed key results to this body of work. At the market level, Vogel et al. [8] used data compression and information theory to study different regimes of a financial market. Sornette has also written an earlier but highly informative review of work on markets as exhibiting critical phenomena in their non-linear dynamics [9]. At a similar scale, but for housing markets, Crosato et al. [10,11] have studied the criticality of city dynamics using maximum entropy techniques. At the individual agent level, Dinis et al. [12] studied phase transitions in optimal betting strategies using the Kelly criterion.
This opinion piece argues for an alternative use of information theory that has been applied before but has yet to make a significant impact in either economics or information theory: as a tool for the analysis of “critical phenomena” in economics. It follows on from earlier work I have completed applying the notion of critical phenomena to abrupt breaks in time series data, such as the 1987 market crash [13], the Asian crisis of 1997 [14], the build-up to the housing crisis of 2007 [15], and the COVID-19 crisis of 2020–2021 [16], all of which made use of information theory in its various forms. With collaborators, I have also explored the occurrence of bifurcations in micro-economics [17,18], as well as in housing markets [19,20], each using maximum entropy techniques. In order to make the case for information theory as a tool in the analysis of criticality in economics, I will argue for two important elements. The first is that critical phenomena, i.e., bifurcations, catastrophes, tipping points, etc., can be analysed most effectively using information theory due to its intrinsic sensitivity to non-linear behaviour. The second is that market dynamics exhibit behaviour that is very much like what we should expect of critical phenomena. These two points are covered in the next two sections, and some final points are discussed at the end.

2. Criticality and Statistical Measures

I begin by outlining and connecting some well-established results that, to the best of my knowledge, have not previously been discussed in combination with each other. There is an early result due to Theil [21] that establishes how much information is shared between the dependent and the independent variables in a multiple regression analysis, and shows that it can be derived directly from correlations and partial correlations. The result is easy to state. Given a multiple regression analysis between a dependent variable $X_0$ and $N$ independent variables $X_i \in \{X_1, X_2, \ldots, X_N\} = \mathbf{X}$, the question arises as to how much each $X_i$ contributes to the behaviour of $X_0$. Given the total correlation $R$ between $X_0$ and $\mathbf{X}$, the correlation $r_{0,1}$ between $X_0$ and $X_1$, the partial correlation $r_{0,2|1}$ between $X_0$ and $X_2$ conditional on $X_1$, etc., the total amount of information contributed from each of the $X_i$ to $X_0$ is the sum of their individual information contributions:
$$I(R^2) = I(r_{0,1}^2) + I(r_{0,2|1}^2) + \cdots + I(r_{0,N|1,2,\ldots,(N-1)}^2) \quad (1)$$
where $I(x) = -\log_2(1-x)$ and, as $x$ is bounded on the interval $[0,1]$, this is a non-negative value corresponding to the information content; see the paper and references therein for the details. Contrast this approach with Scheffer et al. [22] (Box 3), in which the non-stationary behaviour of the auto-correlation coefficient of an AR(1) process is analysed as it approaches a tipping point, and we see that, to first order in Equation (1), i.e., $I(r_{0,1}^2)$, there is an informational analogue to Scheffer et al.’s analysis of the precursory signals of an impending tipping point.
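To make Theil's decomposition concrete, here is a minimal sketch in Python (synthetic data and hypothetical variable names, two regressors only) that computes $I(R^2)$ directly from a regression and checks that it equals the sum of the stepwise contributions $I(r_{0,1}^2) + I(r_{0,2|1}^2)$; it illustrates the identity in Equation (1) rather than reproducing the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def info(x):
    """Theil's information measure I(x) = -log2(1 - x); non-negative for x in [0, 1)."""
    return -np.log2(1.0 - x)

def residuals(y, Z):
    """Residuals of an ordinary least-squares regression of y on the columns of Z (plus a constant)."""
    Z = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

# Hypothetical data: X0 depends linearly on two correlated regressors plus noise.
n = 5000
X1 = rng.normal(size=n)
X2 = 0.5 * X1 + rng.normal(size=n)
X0 = 1.0 * X1 + 0.7 * X2 + rng.normal(size=n)

# Total information: squared multiple correlation R^2 of X0 on (X1, X2).
R2 = 1.0 - np.var(residuals(X0, np.column_stack([X1, X2]))) / np.var(X0)

# Stepwise terms: simple correlation r_{0,1}, then partial correlation r_{0,2|1}.
r01 = np.corrcoef(X0, X1)[0, 1]
r02_1 = np.corrcoef(residuals(X0, X1[:, None]), residuals(X2, X1[:, None]))[0, 1]

print(info(R2))                        # total information I(R^2)
print(info(r01**2) + info(r02_1**2))   # sum of stepwise contributions; equal up to rounding
```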
There has already been progress in developing this direction, as Barnett and colleagues [23] studied the relationship between “Granger causality” (G-causality), first developed by Granger in econometrics [24], and transfer entropy (TE). According to G-causality, given a vector $\mathbf{X}$ of stochastic variables that evolve in time, it is said that “$X_i$ G-causes $X_j$” if, by including $X_i$ in the predictive information set of $X_j$, the subsequent prediction of $X_j$ is improved beyond the extent to which $X_j$ is already able to predict its own future. The central insight of the work of Barnett et al. is that, for Gaussian variables, G-causality is equivalent to TE, making a direct connection between the predictive analysis of vector auto-regression and information-theoretical approaches to causal inference. This becomes relevant to critical phenomena not only because of the relationship with Scheffer et al.’s work but also because TE peaks before the phase transition in the two-dimensional Ising model [25], where the Gaussian assumption no longer holds, i.e., TE becomes a candidate for the analysis of phase transitions at precisely the point where the relationship between TE and G-causality is expected to break down.
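The Gaussian equivalence can be checked numerically. The sketch below assumes a simple bivariate Gaussian VAR(1) with illustrative coefficients: the Granger statistic $F$ (the log-ratio of restricted to full residual variances) and a transfer entropy computed from Gaussian entropies should agree, with $TE = F/2$ in nats, up to estimation error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bivariate VAR(1): Y drives X with coupling c (parameter values are illustrative only).
n, a, b, c = 20000, 0.4, 0.5, 0.3
X, Y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    Y[t] = b * Y[t - 1] + rng.normal()
    X[t] = a * X[t - 1] + c * Y[t - 1] + rng.normal()

def resid_var(target, predictors):
    """Residual variance of an OLS regression of `target` on `predictors` (plus a constant)."""
    Z = np.column_stack([np.ones(len(target))] + predictors)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return np.var(target - Z @ beta)

def gauss_entropy(*cols):
    """Differential entropy (nats) of a jointly Gaussian vector, from its sample covariance."""
    S = np.atleast_2d(np.cov(np.column_stack(cols), rowvar=False))
    k = S.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(S))

# Granger causality Y -> X: log-ratio of residual variances (restricted vs full regression).
restricted = resid_var(X[1:], [X[:-1]])
full = resid_var(X[1:], [X[:-1], Y[:-1]])
F = np.log(restricted / full)

# Transfer entropy Y -> X for Gaussian variables, from conditional entropies.
TE = (gauss_entropy(X[1:], X[:-1]) - gauss_entropy(X[:-1])
      - gauss_entropy(X[1:], X[:-1], Y[:-1]) + gauss_entropy(X[:-1], Y[:-1]))

print(F, TE, F / 2)  # for Gaussian variables TE equals F/2, up to estimation error
```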
There is a similar correspondence between Pearson correlations and mutual information (MI) in the Ising model. Away from the critical temperature there is an exponential decay in the correlations between the individual spins. However, as the temperature approaches the critical temperature, the relationship between correlations and MI becomes strongly non-linear, although it remains expressible in closed form [26]. A distinguishing characteristic between MI and TE is that TE peaks before the phase transition (on the disordered side of the transition) whereas the MI peaks (diverges) exactly at the phase transition.
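For intuition, contrast this with the jointly Gaussian case, where the closed-form link is $\mathrm{MI} = -\tfrac{1}{2}\ln(1-\rho^2)$; the short sketch below simply tabulates this monotone but strongly non-linear map (it is an illustrative Gaussian analogue, not the Ising-model expression of Matsuda et al. [26]).

```python
import numpy as np

# Closed-form Gaussian relation MI(rho) = -0.5 * ln(1 - rho^2) (nats): a monotone but strongly
# non-linear map from correlation to mutual information.
for rho in np.linspace(0.0, 0.99, 10):
    mi = -0.5 * np.log(1.0 - rho**2)
    print(f"rho = {rho:4.2f}  ->  MI = {mi:6.3f} nats")  # MI grows slowly, then steeply as |rho| -> 1
```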
A final example is the use of the Kullback–Leibler divergence (KL-divergence) to measure the statistical separation between probability distributions. Both MI and TE are specific examples of KL-divergence, but the more general form is useful in its own right. It is defined for two discrete probability distributions $P(X)$ and $Q(X)$ over $X = \{X_1, X_2, \ldots, X_n\}$ as:
$$D_{KL}(P;Q) = \sum_i P(X_i) \log \frac{P(X_i)}{Q(X_i)} \quad (2)$$
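As a minimal illustration of this definition, the sketch below computes the discrete KL-divergence (in bits) for two hypothetical distributions over the same outcomes and shows that the measure is asymmetric.

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL-divergence D_KL(P;Q) in bits; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Two hypothetical distributions over the same three outcomes (e.g., down / flat / up returns).
P = [0.7, 0.2, 0.1]
Q = [0.5, 0.3, 0.2]
print(kl_divergence(P, Q), kl_divergence(Q, P))  # asymmetric: D_KL(P;Q) != D_KL(Q;P)
```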
Although the KL-divergence is central to MI and TE, it is also central to other information measures, specifically the Fisher information (FI), which has also been used in the study of critical phenomena. To see the relationship, start with a $\theta$-parameterised family of distributions $P(X|\theta)$; the KL-divergence between two members of this family is $D_{KL}(P(X|\theta); P(X|\theta'))$ and, as the divergence is minimised (zero) when $\theta' = \theta$, we can expand it around $\theta$ to second order in $\theta'$ [27] (Section 3.3: Information Measures):
$$D_{KL}(P(X|\theta);P(X|\theta')) = \frac{1}{2}(\theta'-\theta)^{T} \left[ \frac{\partial^{2}}{\partial \theta'_{i}\, \partial \theta'_{j}} D_{KL}(P(X|\theta);P(X|\theta')) \right]_{\theta'=\theta} (\theta'-\theta) + \mathrm{h.o.t.} \quad (3)$$
where the matrix of second derivatives (its elements are denoted $\partial_{i,j} D(\theta;\theta')$) in this equation is the FI matrix, i.e., the FI is the first non-zero term in the expansion of the KL-divergence about $\theta' = \theta$. We note that the FI is known to measure the gain in transient sensitivity of a distribution [28]. In that work, Prokopenko et al. were able to relate the $\partial_{i,j} D(\theta;\theta')$ terms to the rate of change in the corresponding order parameters $\theta$. Of relevance to the current article is that these relationships allow for the identification of second-order phase transitions via the divergence of individual $\partial_{i,j} D(\theta;\theta')$ terms of the FI matrix. This work was later generalised to the Fisher TE [29], which was used to capture both the transient and contextual aspects of the second-order phase transition of the two-dimensional Ising model.
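The identification of the FI with the curvature of the KL-divergence can be verified numerically. The sketch below uses a Bernoulli family, chosen purely for illustration, and compares a finite-difference second derivative of $D_{KL}$ at $\theta' = \theta$ with the analytic Fisher information $1/(\theta(1-\theta))$.

```python
import numpy as np

def kl_bernoulli(theta, theta_p):
    """KL-divergence (nats) between Bernoulli(theta) and Bernoulli(theta_p)."""
    return (theta * np.log(theta / theta_p)
            + (1 - theta) * np.log((1 - theta) / (1 - theta_p)))

theta, h = 0.3, 1e-4

# Curvature of the KL-divergence in theta' at theta' = theta, by central finite differences.
curvature = (kl_bernoulli(theta, theta + h) - 2 * kl_bernoulli(theta, theta)
             + kl_bernoulli(theta, theta - h)) / h**2

fisher = 1.0 / (theta * (1.0 - theta))  # analytic Fisher information of the Bernoulli family

print(curvature, fisher)  # both ~4.762: the FI is the second-order coefficient of the KL expansion
```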

3. Critical Transitions Are a Phenomenon of Markets

One of the first approaches to using statistical measures to understand sudden behavioural changes in financial markets is the work of Onnela et al. [30] on the “Black Monday” crash of 19 October 1987. In that study, they used a modified form of the Pearson correlation coefficient to measure the dyadic relationships between pairs of equities. This measure begins with the correlations between all pairs of equities, $\rho_{i,j}$, and transforms them into a distance measure $d_{i,j} = \sqrt{2(1-\rho_{i,j})}$, which results in a distance matrix that can be thought of as a network of distances between equities. The underlying correlations were based on a window of time $[t_a, t_b]$ that was a subset of the complete time series, and by sliding this window over the whole time series a sequence of equity trees could be built up and the dynamical properties of the market correlations could be studied as Black Monday approached. What was observed is that the equity trees collapsed to a star network at the point of the market crash, very similar to the abrupt transitions observed in the topologies of networked systems going through a phase transition [31]. This network approach has been applied in theoretical and empirical studies of networked equity markets in order to test the robustness of the phase transition idea; see, for example, the work of Kostanjčar et al. [32].
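A minimal sketch of the Onnela-style construction, using synthetic returns in place of real equity data: correlations are mapped to distances $d_{i,j} = \sqrt{2(1-\rho_{i,j})}$ and a minimum spanning tree, the "asset tree" for a single window, is extracted.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)

# Hypothetical log-return matrix (n_days x n_assets) with a shared "market mode"; illustrative only.
n_days, n_assets = 250, 20
market = rng.normal(size=(n_days, 1))
returns = 0.5 * market + rng.normal(size=(n_days, n_assets))

rho = np.corrcoef(returns, rowvar=False)                # pairwise correlations rho_ij
d = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))      # distances d_ij = sqrt(2(1 - rho_ij))

# Minimum spanning tree of the distance matrix: the "asset tree" for this window.
mst = minimum_spanning_tree(d).toarray()
edges = [(i, j, d[i, j]) for i, j in zip(*np.nonzero(mst))]
print(len(edges), "edges in the asset tree; expected n_assets - 1 =", n_assets - 1)
```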
This approach was extended to information-theoretical measures in two distinct ways in order to further understand market crises as critical phenomena. In the first instance, the $d_{i,j}$ measures were replaced with MI and the same analysis as that of Onnela et al. was carried out for the resulting MI network [15]. In that study, it was shown that there are peaks in the MI at crisis points, as predicted by the Matsuda et al. [26] study of the Ising model, and, furthermore, that the MI could be broken up into its entropy and joint entropy components in a diagnostically informative fashion: for example, distinguishing between the market disruption of the 11 September 2001 attacks, which had no discernible increase in joint entropies, only an increase in the entropy terms, and the 1987 crisis, which had a significant increase in the joint entropies. It was also noted that there was a peak in MI away from any known critical points, suggesting that MI may be identifying other non-linear transients indicative of the market restructuring in more subtle ways than market crashes. In a second extension to the Onnela work, a modified version of TE that accounts for the continuous flow of information through the market (rather than artificially discretising the data) was applied to the Asian financial crisis of 1997 [14] in order to build a network of information flows around a market crisis point. The key finding was that Pearson correlations and continuous TE capture qualitatively distinct aspects of the dynamics, and that continuous information flows are a more sensitive measure of dynamics during a crisis.
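In the same spirit (though not the estimator used in [15], and with hypothetical data), the sketch below estimates MI with a simple histogram estimator over successive windows of two synthetic return series and shows the estimate rising once the coupling between them strengthens.

```python
import numpy as np

def mutual_info_binned(x, y, bins=8):
    """Plug-in mutual information estimate (bits) from a 2-D histogram; a deliberately simple estimator."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask]))

# Hypothetical pair of return series whose coupling strengthens half-way through the sample.
rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
coupling = np.where(np.arange(n) < n // 2, 0.1, 0.8)
y = coupling * x + rng.normal(size=n)

# Windowed estimates over the series, in the spirit of the sliding-window network studies above.
window = 250
for start in range(0, n - window + 1, window):
    mi = mutual_info_binned(x[start:start + window], y[start:start + window])
    print(f"window starting at t = {start:4d}: MI = {mi:.3f} bits")
```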
Other approaches to modelling criticality in economic markets have focused on the application of potential functions to market dynamics in order to test for the statistical significance of tipping points in uni-variate time series. In several recent papers [33,34,35], researchers have used the stochastic form of Thom’s catastrophe theory put forward by Cobb [36] and Wagenmakers et al. [37] to examine the empirical evidence for critical transitions in housing and equity markets. This follows on from a recent re-evaluation of catastrophe theory in economics, as argued for in the review by Rosser, Jr. [38]. In principle, if a system has a well-defined potential function (sometimes a “potential landscape”), a necessary element for catastrophe theory, then the system should also be susceptible to the methods proposed in a variety of fields [39,40,41,42] for the detection of nearby critical transitions, which brings us back to the study mentioned above by Scheffer et al. [22]: nearby critical points can (sometimes) be detected using statistical methods that measure the progressive deformation of the probability distributions caused by the deformation of the potential function near a critical point.
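A rough sketch of the kind of stochastic cusp dynamics referred to above, assuming the canonical potential $V(x) = \tfrac{1}{4}x^4 - \tfrac{1}{2}\beta x^2 - \alpha x$ and purely illustrative parameter values: as the asymmetry parameter $\alpha$ is swept, the state jumps abruptly between wells, and rising variance in a trailing window is the type of precursor discussed by Scheffer et al. [22].

```python
import numpy as np

rng = np.random.default_rng(4)

def cusp_drift(x, alpha, beta):
    """Negative gradient of the canonical cusp potential V(x) = x^4/4 - beta*x^2/2 - alpha*x."""
    return -(x**3 - beta * x - alpha)

# Euler-Maruyama simulation of Cobb-style stochastic cusp dynamics: dx = -V'(x) dt + sigma dW.
# The asymmetry parameter alpha is swept slowly so that the lower well eventually disappears
# and the state jumps to the upper branch (a stylised "crash"); all values are illustrative.
T, dt, sigma, beta = 20000, 0.01, 0.3, 1.0
alphas = np.linspace(-1.0, 1.0, T)
x = np.empty(T)
x[0] = -1.0
for t in range(1, T):
    x[t] = (x[t - 1] + cusp_drift(x[t - 1], alphas[t], beta) * dt
            + sigma * np.sqrt(dt) * rng.normal())

# Rising variance (and lag-1 autocorrelation) in a trailing window is the kind of early-warning
# signal discussed by Scheffer et al. [22]; variance is typically larger just before the jump.
w = 2000
jump = int(np.argmax(x > 1.0))           # first time the state flips to the upper branch
print("variance early in the sweep:   ", x[w:2 * w].var())
print("variance just before the jump: ", x[jump - w:jump].var())
```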

4. Limitations and Future Directions

The points laid out above are not without their issues and there are reasonable arguments for why this approach is less attractive than traditional statistical methods. However, I believe most of these can be addressed and here I divide them into two broad classes: issues of practice and issues of principle.
Some practical issues are the same for every discipline that uses information theory: the computations are expensive, there are fewer out-of-the-box software packages available, and more data are needed to obtain statistically reliable results. The first point is simply one that we may need to accept: even as computers become faster, it will likely remain the case that, for example, the Pearson correlation will be faster to compute than the corresponding MI. However, the computations will get faster in absolute terms as computers get faster, and other fields with large datasets, such as neuroscience, have seen the benefits of these new methods [43,44], with efficiency gains being made as well [45,46]. On the other hand, software packages are becoming more readily available, and economics can benefit from the software advances that have been made in other fields. Two popular packages that have come from neuroscience are JIDT [47] and TRENTOOL [48], along with its successor IDTxl [49]. As the methods become more widely used, no doubt more implementations will become available. The data problem is a constant issue in economics, independent of the arguments made above: aside from financial and industrial economics, where data are rich and progress was initially quite rapid as computer scientists and physicists worked on market dynamics [50], data tend to be sparse. However, while infrequently sampled time series and non-stationary data can make long-term temporal analysis and prediction problematic, there is often considerable high-resolution geospatial data, for example tax revenue or house prices indexed by postcode. From this point of view, long-term prediction may still be difficult, but the analysis of temporally localised dynamics is achievable if the information theory tools can be adapted to suit the task at hand. It is also hoped that, as data limitations more clearly become the bottleneck to better analyses, private and government agencies will gather more data and this will become less problematic.
Another issue arises, though, and it is a matter of principle rather than practice: does information theory offer more than simply a new tool? In neuroscience and artificial intelligence there is a good “in principle” argument for why information theory is useful: it measures the amount of information being stored, processed, and transmitted within a complex adaptive system. For example, Zhu et al. [51] studied neuromorphic nanowire networks using TE and active information storage, finding that the information-theoretical values peak when the networks transition from a quiescent state to an active state, illustrating the relationship between information theory as a measure of computational capacity and criticality in an artificial system. Likewise, other studies have shown that biological brains may be poised at or near a critical state [52], where it has been argued the brain is at a point of “self-organised criticality”, a term introduced by Bak [53]; see also the recent critical review by Girardi-Schappo [54]. Others have argued that this may be a widespread property of many other systems as well; see, for example, the recent article by Tadić and Melnik [55]. However, the case has yet to be made that, at this more conceptual level, information theory and criticality add to economic discourse, so I would like to discuss one, rather speculative, path through which this is relevant to economics. The point is fairly straightforward: Mirowski and Somefun [56] and Axtell [57], amongst others, have argued that markets and economies are computational processes in their own right. As Axtell frames it:
There is a close connection between agent computing in the positive social sciences and distributed computation in computer science, in which individual processors have heterogeneous information that they compute with and then communicate to other processors.
This is very much in the same vein as how neuroscientists might describe the processing of information in the human brain at the neuronal level [58]. The analogy does not map across in a trivial way, though, and care is needed. In a financial market, for example, instead of electrical signals between neurons, price movements are the primary means of communicating and coordinating economic activity, and this might, at a suitably high level, justify the view that a market or an economy is indeed a computational process. However, market traders do not form long-term price-signalling relationships with each other in the way that neurons form connections with one another, so we need to be careful about the precise specification that comes from this analogy. One way in which we can keep the brain–market analogy but make it more pertinent is to take as the analogy the recent work of Solé and colleagues on “liquid brains” [59], which have been used as a computational model of ant communities, rather than the “solid brains” of neural networks with their more rigid connections. This brings us to the point of self-organised criticality in economics and why it might be a relevant lens through which to see market dynamics. In this view, which has been espoused several times in recent work [60,61,62], markets need to be able to sensitively adapt to informational changes in a way that allows prices to reflect news, and, like liquid brains, a critical or near-critical state may be the most effective regime for a market to do so.

Funding

This research received no external funding.

Data Availability Statement

No data were used in this article.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Kolmogorov, A.N. Three approaches to the definition of the concept “quantity of information”. Probl. Peredachi Informatsii 1965, 1, 3–11.
2. Theil, H. On the use of information theory concepts in the analysis of financial statements. Manag. Sci. 1969, 15, 459–480.
3. Wilson, A.G. The use of entropy maximising models, in the theory of trip distribution, mode split and route split. J. Transp. Econ. Policy 1969, 3, 108–126.
4. Georgescu-Roegen, N. The entropy law and the economic problem. In Valuing the Earth: Economics, Ecology, Ethics; MIT Press: Cambridge, MA, USA, 1993; pp. 75–88.
5. Aoki, M. New Approaches to Macroeconomic Modeling. In Cambridge Books; Cambridge University Press: Cambridge, UK, 1998.
6. Harré, M.S. Information Theory for Agents in Artificial Intelligence, Psychology, and Economics. Entropy 2021, 23, 310.
7. Jakimowicz, A. The role of entropy in the development of economics. Entropy 2020, 22, 452.
8. Vogel, E.E.; Saravia, G. Information theory applied to econophysics: Stock market behaviors. Eur. Phys. J. B 2014, 87, 177.
9. Sornette, D. Critical market crashes. Phys. Rep. 2003, 378, 1–98.
10. Crosato, E.; Nigmatullin, R.; Prokopenko, M. On critical dynamics and thermodynamic efficiency of urban transformations. R. Soc. Open Sci. 2018, 5, 180863.
11. Crosato, E.; Prokopenko, M.; Harré, M.S. The polycentric dynamics of Melbourne and Sydney: Suburb attractiveness divides a city at the home ownership level. Proc. R. Soc. A 2021, 477, 20200514.
12. Dinis, L.; Unterberger, J.; Lacoste, D. Phase transitions in optimal betting strategies. Europhys. Lett. 2020, 131, 60005.
13. Bossomaier, T.; Barnett, L.; Steen, A.; Harré, M.; d’Alessandro, S.; Duncan, R. Information flow around stock market collapse. Account. Financ. 2018, 58, 45–58.
14. Harré, M. Entropy and Transfer Entropy: The Dow Jones and the Build Up to the 1997 Asian Crisis. In Proceedings of the International Conference on Social Modeling and Simulation, Plus Econophysics Colloquium 2014; Springer: Cham, Switzerland, 2015; pp. 15–25.
15. Harré, M.; Bossomaier, T. Phase-transition–like behaviour of information measures in financial markets. Europhys. Lett. 2009, 87, 18009.
16. Harré, M.S.; Eremenko, A.; Glavatskiy, K.; Hopmere, M.; Pinheiro, L.; Watson, S.; Crawford, L. Complexity Economics in a Time of Crisis: Heterogeneous Agents, Interconnections, and Contagion. Systems 2021, 9, 73.
17. Wolpert, D.H.; Harré, M.; Olbrich, E.; Bertschinger, N.; Jost, J. Hysteresis effects of changing the parameters of noncooperative games. Phys. Rev. E 2012, 85, 036102.
18. Harré, M.S.; Atkinson, S.R.; Hossain, L. Simple nonlinear systems and navigating catastrophes. Eur. Phys. J. B 2013, 86, 289.
19. Glavatskiy, K.S.; Prokopenko, M.; Carro, A.; Ormerod, P.; Harré, M. Explaining herding and volatility in the cyclical price dynamics of urban housing markets using a large scale agent-based model. arXiv 2020, arXiv:2004.07571.
20. Evans, B.P.; Glavatskiy, K.; Harré, M.S.; Prokopenko, M. The impact of social influence in Australian real estate: Market forecasting with a spatial agent-based model. J. Econ. Interact. Coord. 2021, 22, 1–53.
21. Theil, H.; Chung, C. Information-theoretic measures of fit for univariate and multivariate linear regressions. Am. Stat. 1988, 42, 249–252.
22. Scheffer, M.; Bascompte, J.; Brock, W.A.; Brovkin, V.; Carpenter, S.R.; Dakos, V.; Held, H.; Van Nes, E.H.; Rietkerk, M.; Sugihara, G. Early-warning signals for critical transitions. Nature 2009, 461, 53–59.
23. Barnett, L.; Barrett, A.B.; Seth, A.K. Granger causality and transfer entropy are equivalent for Gaussian variables. Phys. Rev. Lett. 2009, 103, 238701.
24. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econom. J. Econom. Soc. 1969, 37, 424–438.
25. Barnett, L.; Lizier, J.T.; Harré, M.; Seth, A.K.; Bossomaier, T. Information flow in a kinetic Ising model peaks in the disordered phase. Phys. Rev. Lett. 2013, 111, 177203.
26. Matsuda, H.; Kudo, K.; Nakamura, R.; Yamakawa, O.; Murata, T. Mutual information of Ising systems. Int. J. Theor. Phys. 1996, 35, 839–845.
27. Gourieroux, C.; Monfort, A. Statistics and Econometric Models; Cambridge University Press: Cambridge, UK, 1995; Volume 1.
28. Prokopenko, M.; Lizier, J.T.; Obst, O.; Wang, X.R. Relating Fisher information to order parameters. Phys. Rev. E 2011, 84, 041116.
29. Prokopenko, M.; Barnett, L.; Harré, M.; Lizier, J.T.; Obst, O.; Wang, X.R. Fisher transfer entropy: Quantifying the gain in transient sensitivity. Proc. R. Soc. A Math. Phys. Eng. Sci. 2015, 471, 20150610.
30. Onnela, J.P.; Chakraborti, A.; Kaski, K.; Kertesz, J. Dynamic asset trees and Black Monday. Phys. A Stat. Mech. Its Appl. 2003, 324, 247–252.
31. Aleksiejuk, A.; Hołyst, J.A.; Stauffer, D. Ferromagnetic phase transition in Barabási–Albert networks. Phys. A Stat. Mech. Its Appl. 2002, 310, 260–266.
32. Kostanjčar, Z.; Begušić, S.; Stanley, H.E.; Podobnik, B. Estimating tipping points in feedback-driven financial networks. IEEE J. Sel. Top. Signal Process. 2016, 10, 1040–1052.
33. Baruník, J.; Vosvrda, M. Can a stochastic cusp catastrophe model explain stock market crashes? J. Econ. Dyn. Control 2009, 33, 1824–1836.
34. Barunik, J.; Kukacka, J. Realizing stock market crashes: Stochastic cusp catastrophe model of returns under time-varying volatility. Quant. Financ. 2015, 15, 959–973.
35. Diks, C.; Wang, J. Can a stochastic cusp catastrophe model explain housing market crashes? J. Econ. Dyn. Control 2016, 69, 68–88.
36. Cobb, L. Stochastic catastrophe models and multimodal distributions. Syst. Res. Behav. Sci. 1978, 23, 360–374.
37. Wagenmakers, E.J.; Molenaar, P.C.; Grasman, R.P.; Hartelman, P.A.; van der Maas, H.L. Transformation invariant stochastic catastrophe theory. Phys. D Nonlinear Phenom. 2005, 211, 263–276.
38. Rosser, J.B. The rise and fall of catastrophe theory applications in economics: Was the baby thrown out with the bathwater? J. Econ. Dyn. Control 2007, 31, 3255–3280.
39. Scheffer, M.; Hirota, M.; Holmgren, M.; Van Nes, E.H.; Chapin, F.S. Thresholds for boreal biome transitions. Proc. Natl. Acad. Sci. USA 2012, 109, 21384–21389.
40. Scheffer, M.; Carpenter, S.R.; Lenton, T.M.; Bascompte, J.; Brock, W.; Dakos, V.; Van de Koppel, J.; Van de Leemput, I.A.; Levin, S.A.; Van Nes, E.H.; et al. Anticipating critical transitions. Science 2012, 338, 344–348.
41. Bury, T.M.; Sujith, R.; Pavithran, I.; Scheffer, M.; Lenton, T.M.; Anand, M.; Bauch, C.T. Deep learning for early warning signals of tipping points. Proc. Natl. Acad. Sci. USA 2021, 118, e2106140118.
42. Boers, N.; Rypdal, M. Critical slowing down suggests that the western Greenland Ice Sheet is close to a tipping point. Proc. Natl. Acad. Sci. USA 2021, 118, e2024192118.
43. Rozell, C.J.; Johnson, D.H. Examining methods for estimating mutual information in spiking neural systems. Neurocomputing 2005, 65, 429–434.
44. Vicente, R.; Wibral, M.; Lindner, M.; Pipa, G. Transfer entropy—A model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci. 2011, 30, 45–67.
45. Wollstadt, P.; Martínez-Zarzuela, M.; Vicente, R.; Díaz-Pernas, F.J.; Wibral, M. Efficient transfer entropy analysis of non-stationary neural time series. PLoS ONE 2014, 9, e102833.
46. Zbili, M.; Rama, S. A quick and easy way to estimate entropy and mutual information for neuroscience. bioRxiv 2021.
47. Lizier, J.T. JIDT: An information-theoretic toolkit for studying the dynamics of complex systems. Front. Robot. AI 2014, 1, 11.
48. Lindner, M.; Vicente, R.; Priesemann, V.; Wibral, M. TRENTOOL: A Matlab open source toolbox to analyse information flow in time series data with transfer entropy. BMC Neurosci. 2011, 12, 119.
49. Wollstadt, P.; Lizier, J.T.; Vicente, R.; Finn, C.; Martinez-Zarzuela, M.; Mediano, P.; Novelli, L.; Wibral, M. IDTxl: The Information Dynamics Toolkit xl: A Python package for the efficient analysis of multivariate information dynamics in networks. arXiv 2018, arXiv:1807.10459.
50. Gallegati, M.; Keen, S.; Lux, T.; Ormerod, P. Worrying trends in econophysics. Phys. A Stat. Mech. Its Appl. 2006, 370, 1–6.
51. Zhu, R.; Hochstetter, J.; Loeffler, A.; Diaz-Alvarez, A.; Nakayama, T.; Lizier, J.T.; Kuncic, Z. Information dynamics in neuromorphic nanowire networks. Sci. Rep. 2021, 11, 13047.
52. Lotfi, N.; Feliciano, T.; Aguiar, L.A.; Silva, T.P.L.; Carvalho, T.T.; Rosso, O.A.; Copelli, M.; Matias, F.S.; Carelli, P.V. Statistical complexity is maximized close to criticality in cortical dynamics. Phys. Rev. E 2021, 103, 012415.
53. Bak, P.; Tang, C.; Wiesenfeld, K. Self-organized criticality. Phys. Rev. A 1988, 38, 364.
54. Girardi-Schappo, M. Brain criticality beyond avalanches: Open problems and how to approach them. J. Phys. Complex. 2021, 2, 031003.
55. Tadić, B.; Melnik, R. Self-Organised Critical Dynamics as a Key to Fundamental Features of Complexity in Physical, Biological, and Social Networks. Dynamics 2021, 1, 181–197.
56. Mirowski, P.; Somefun, K. Markets as evolving computational entities. J. Evol. Econ. 1998, 8, 329–356.
57. Axtell, R.L. Economics as distributed computation. In Meeting the Challenge of Social Problems via Agent-Based Simulation; Springer: Berlin/Heidelberg, Germany, 2003; pp. 3–23.
58. Hertz, J.; Krogh, A.; Palmer, R.G. Introduction to the Theory of Neural Computation; CRC Press: Boca Raton, FL, USA, 2018.
59. Solé, R.; Moses, M.; Forrest, S. Liquid brains, solid brains. Philos. Trans. R. Soc. B 2019, 374, 20190040.
60. Biondo, A.E.; Pluchino, A.; Rapisarda, A. Micro and macro benefits of random investments in financial markets. Contemp. Phys. 2014, 55, 318–334.
61. Bocher, R. Causal Entropic Forces, Narratives and Self-organisation of Capital Markets. J. Interdiscip. Econ. 2021, 30, 02601079211039321.
62. Turiel, J.D.; Aste, T. Self-organised criticality in high frequency finance: The case of flash crashes. arXiv 2021, arXiv:2110.13718.
