Article

Cascades Towards Noise-Induced Transitions on Networks Revealed Using Information Flows

by Casper van Elteren 1,2,*, Rick Quax 1,2 and Peter M. A. Sloot 1,2,3

1 Institute of Informatics, University of Amsterdam, 1098 XH Amsterdam, The Netherlands
2 Institute for Advanced Study, 1012 GC Amsterdam, The Netherlands
3 Complexity Science Hub Vienna, 1080 Vienna, Austria
* Author to whom correspondence should be addressed.
Entropy 2024, 26(12), 1050; https://doi.org/10.3390/e26121050
Submission received: 25 September 2024 / Revised: 28 November 2024 / Accepted: 29 November 2024 / Published: 4 December 2024
(This article belongs to the Special Issue 180th Anniversary of Ludwig Boltzmann)

Abstract
Complex networks, from neuronal assemblies to social systems, can exhibit abrupt, system-wide transitions without external forcing. These endogenously generated “noise-induced transitions” emerge from the intricate interplay between network structure and local dynamics, yet their underlying mechanisms remain elusive. Our study unveils two critical roles that nodes play in catalyzing these transitions within dynamical networks governed by the Boltzmann–Gibbs distribution. We introduce the concept of “initiator nodes”, which absorb and propagate short-lived fluctuations, temporarily destabilizing their neighbors. This process initiates a domino effect, where the stability of a node inversely correlates with the number of destabilized neighbors required to tip it. As the system approaches a tipping point, we identify “stabilizer nodes” that encode the system’s long-term memory, ultimately reversing the domino effect and settling the network into a new stable attractor. Through targeted interventions, we demonstrate how these roles can be manipulated to either promote or inhibit systemic transitions. Our findings provide a novel framework for understanding and potentially controlling endogenously generated metastable behavior in complex networks. This approach opens new avenues for predicting and managing critical transitions in diverse fields, from neuroscience to social dynamics and beyond.

1. Introduction

Multistability, a fundamental characteristic of complex systems [1,2], describes the capacity of a system to occupy multiple stable states and transition between them. This phenomenon is ubiquitous, manifesting in diverse domains from neural networks [3,4] to opinion dynamics [5] and ecosystems [6]. While state transitions are often attributed to external perturbations, we propose a novel perspective: in networked systems, noise-induced transitions can occur endogenously. These transitions emerge from local interactions that cascade through the network, triggering large-scale regime shifts in a process we term the “domino effect”. This mechanism offers a new understanding of how complex systems can dramatically reconfigure without external forcing, challenging traditional views on system stability and change.
In nonlinear systems, such as interconnected neurons, noise plays a fundamental role in facilitating transitions between attractor states [7,8,9]. It enables the exploration of larger state spaces, allowing systems to escape local minima [10,11]. While multistability has historically been studied from an equilibrium perspective [10,12,13], recent research has revealed how network structure fundamentally affects the stability and transitions of complex systems [14,15,16,17].
Recent work has approached network control through algorithmic information theory, which measures the computational complexity of producing network states through controlled interventions [18,19,20]. While this provides powerful tools for steering networks through external manipulation, fundamental questions remain about how networks spontaneously transition between states through their internal dynamics. Our approach uses Shannon information theory to quantify the temporal correlations that emerge naturally as networks evolve, revealing how noise propagates through network structure to generate endogenous transitions. This complements algorithmic approaches by focusing on the statistical mechanisms underlying spontaneous state changes rather than the computational complexity of producing specific states.
Our study addresses a critical gap in understanding noise-induced transitions in networked dynamical systems out of equilibrium. We focus on systems where each node’s state evolves according to the Boltzmann–Gibbs distribution, a framework applicable to various phenomena including neural dynamics [21], opinion formation, and ferromagnetic spins [22]. An example of a noise-induced transition in this model executed on a network is shown in Figure 1.
We introduce two novel concepts: initiator nodes that propagate noise and destabilize the system, and stabilizing nodes that maintain metastable states. To quantify the impact of short-term and long-term correlations in these transitions, we propose two information-theoretic measures: integrated mutual information and asymptotic information. These metrics, computable from observational data, provide powerful tools for analyzing metastable dynamics across different time scales.
Integrated mutual information captures the transient destabilization of the system, revealing the role of initiator nodes in triggering systemic transitions. Asymptotic information, on the other hand, quantifies the long-term memory encoded by stabilizer nodes, which ultimately reverses the domino effect and settles the network into a new stable attractor. By manipulating these roles, we demonstrate how targeted interventions can either promote or inhibit systemic transitions, offering a new approach to controlling critical transitions in complex networks.
Our computational method uncovers a network percolation process that facilitates noise-induced transitions without external parameter changes, offering a fresh perspective on tipping points in complex networks [23,24,25,26]. This approach bridges the gap between local equilibrium dynamics and global system behavior, providing insights into how network structure influences systemic transitions [14,15,27,28,29].
By revealing the domino-like mechanisms of endogenous state transitions, our work has broad implications for predicting and potentially controlling critical transitions in diverse, complex systems. From enhancing brain plasticity to anticipating ecosystem shifts, this framework provides a foundation for understanding and managing multistability in an interconnected world.

2. Methods

Our study focuses on dynamical systems where the state transitions of individual nodes are governed by the Boltzmann–Gibbs distribution. This distribution, fundamental in statistical mechanics, provides a probabilistic framework for describing the behavior of systems in thermal equilibrium. In our context, it determines the likelihood of a node transitioning from one state to another based on the energy difference between states and a global noise parameter. Specifically, the probability of a node transitioning from state $s_i$ to state $s_i'$ is given by:
$$P(s_i \to s_i') = \frac{1}{1 + \exp\left(\beta\, \Delta E(s_i, s_i')\right)},$$
where $\Delta E(s_i, s_i')$ represents the energy difference for the state transition, and $\beta$ is the inverse temperature or noise parameter. This formulation captures the essence of how local interactions and global noise influence state changes in our networked system. Higher values of $\beta$ correspond to lower noise levels, leading to more deterministic behavior, while lower $\beta$ values introduce more randomness into the system's dynamics. This framework allows us to model a wide range of phenomena, from neural activity to opinion dynamics, within a consistent mathematical structure.
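For illustration, the transition probability above can be evaluated directly; the following minimal Python sketch (not taken from our codebase) shows how $\beta$ modulates the acceptance of energetically costly transitions:

```python
import numpy as np

def transition_probability(delta_E: float, beta: float) -> float:
    """Probability of a node accepting a state change with energy
    difference delta_E at inverse temperature (noise parameter) beta."""
    return 1.0 / (1.0 + np.exp(beta * delta_E))

# Costly transitions (delta_E > 0) become rare as beta grows (less noise):
print(transition_probability(2.0, beta=0.1))  # ~0.45: noisy, nearly random
print(transition_probability(2.0, beta=5.0))  # ~5e-5: near-deterministic
```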
Fluctuations and their correlations at time $\tau$ are captured using Shannon's mutual information [30] shared between a node's state $s_i^\tau$ at time $\tau$ and the entire future system state $S^{\tau+t}$, written $I(s_i^\tau : S^{\tau+t})$. The time lag $t$ is used to analyze two key features of the information flows of a system: the area under the curve (AUC) of short-term information and the sustained level of long-term information.
The contribution of a node to the system dynamics differs depending on its network connectivity (Figure A3) [31,32]. The total amount of fluctuations shared between the node's current state and the system's short-term future trajectory is computed as the integrated mutual information:
$$\mu(s_i) = \sum_{t=0}^{\infty} \left( I(s_i^\tau : S^{\tau+t}) - \omega(s_i) \right) \Delta t .$$
Intuitively, $\mu(s_i)$ represents a combination of the intensity and duration of the short-term fluctuations on the (transient) system dynamics [31]. It reflects how much of the node state is in the “working memory” of the system.
The term $\omega(s_i) \in \mathbb{R}_{\geq 0}$ represents the system's long-term memory. As the system transitions between stable points, short-lived correlations evolve into longer-lasting ones, particularly among less dynamic nodes. When $\omega(s_i)$ is positive, it indicates a separation of time scales: ephemeral correlations dissipate, giving way to slower, more persistent fluctuations. These enduring fluctuations reflect the multiple attractor states accessible to the system, with fewer dynamic nodes becoming more aligned with future system states.
Near a stable attractor, the system primarily generates short-lived fluctuations. However, as it approaches a tipping point, longer-lasting correlations emerge. These persistent correlations facilitate the system's transition from one stable attractor to another, much like repeated nudges eventually push a ball over a hill. The asymptotic information, $\omega(s_i)$, quantifies this transition potential. Higher values of $\omega(s_i)$ indicate a greater likelihood of state transition, with the exact value reflecting each node's contribution to the tipping behavior.
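As an illustrative sketch, both quantities can be estimated from an empirical information decay curve; here `info_curve` is a hypothetical array holding $I(s_i^\tau : S^{\tau+t})$ for increasing lags $t$, and the tail-plateau estimate is an assumption of this sketch rather than the estimator used in the paper (see Appendix A.5 for the exact procedure):

```python
import numpy as np

def decompose_information(info_curve: np.ndarray, dt: float = 1.0,
                          tail: int = 50) -> tuple[float, float]:
    """Estimate the asymptotic information omega as the tail plateau of
    the decay curve, then integrate the short-lived excess above it to
    obtain the integrated mutual information mu."""
    omega = float(info_curve[-tail:].mean())     # long-term memory
    mu = float(np.sum(info_curve - omega) * dt)  # area above the plateau
    return mu, omega
```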
Asymptotic information distinguishes itself from other early warning signals—such as increased autocorrelation, critical slowing down captured by Fisher information, changes in skewness or kurtosis, and increased variance—by specifically measuring the system’s long-term memory and temporal correlation structure. While entropy captures the overall uncertainty or disorder in a system at a given moment, and mutual information quantifies the shared information between components at a particular time, asymptotic information focuses on the persistence of correlations over extended time periods. It reveals how past states influence future configurations, capturing aspects of the system’s dynamics that are not explained by instantaneous or short-term pairwise measures.
Using these information features, each node can be assigned a role based on its contribution to the metastable transition. We denote nodes with short-lived correlations as initiators, pushing the system towards a tipping point. In contrast, nodes with longer-lived correlations are referred to as stabilizers. The dynamics of these nodes are less affected by short-lived correlations, and they require a more mixed system state to transition from one state to another. The role assignment is discussed further in Section 3.5.
We compute information flows using exact calculations on a randomly generated connected graph of $n = 10$ nodes. The states are grouped based on their distance to the tipping point, defined as the energy barrier between two locally stable states. For the Ising model, this corresponds to the collection of states where $\langle S \rangle = 0.5$. We evaluate the conditional distribution up to $\tau = 300$ time steps.
This computational process scales exponentially with the number of nodes, $O(2^n)$, which limits its applicability to large-scale systems without employing variable reduction techniques such as coarse-graining. Extending this analysis to larger systems will be the focus of future research.
For detailed replication instructions, please refer to Appendix A.

3. Results

Our analysis reveals several key insights into the dynamics of metastable transitions and tipping points in complex networks. We observe a distinct domino effect where low-degree nodes initiate system destabilization. As the system approaches a tipping point, information flows shift from low-degree to high-degree nodes. We identify a rise in asymptotic information as a potential early warning signal for an impending tipping point. Finally, we uncover a division of roles among nodes, with some acting as initiators that propagate perturbations and others as stabilizers that influence the system’s transition between attractor states.
In Figure 2, we visualize the information flows at different stages as the system approaches the tipping point. While we present a detailed analysis using the kite graph for simplicity, these findings generalize to other network structures, as demonstrated in Figure 3 and further elaborated in Appendix A.

3.1. Information Flow Dynamics and the Domino Effect

To decompose the metastable transition, we consider local information flows in a given system partition, $S_\gamma = \{ S \in \mathcal{S} \mid \langle S \rangle = \gamma \}$, where $\gamma \in [0, 1]$ represents the fraction of nodes in state 1. This yields the conditional integrated mutual information:
$$\mu(s_i \mid S) = \sum_{t=0}^{\infty} \left( I(s_i^\tau : S^{\tau+t} \mid \langle S^\tau \rangle) - \omega_{s_i} \right) \Delta t .$$
Details about the estimation procedure can be found in Appendix A.5.
Two key observations emerge from Figure 2:
First, the tipping point is reached through a domino effect, with low-degree nodes acting as initiators early in the process. These nodes, being more susceptible to noise (see Figure A3), are more likely to pass fluctuations to neighbors—akin to pushing a ball up a hill. Far from the tipping point (Figure 2a), lower-degree nodes show higher integrated mutual information, $\mu(s_i \mid S)$, than higher-degree nodes. This noise injection by lower-degree nodes increases the likelihood of a metastable transition.
Second, an increase in asymptotic behavior corresponds to the system transitioning between attractor states. As shown in Figure 2b,c, asymptotic information remains low far from the tipping point and steadily increases as the system approaches it. Nodes with higher asymptotic information possess greater predictive power regarding which side of the tipping point the system will settle on.

3.2. Path Analysis and Tipping Point Trajectories

To illustrate the information encoded in these flows, we computed trajectories from the attractor state $S = \{0, \ldots, 0\}$, simulated for $t = 5$ steps. Figure 4 shows a trajectory that maximizes:
$$\log p\left(S^{t+1} \mid S^t,\; S^0 = \{0, \ldots, 0\},\; \langle S^5 \rangle = 0.5\right).$$
These trajectories reveal how the information flows measured in Figure 2c are generated by the sequence of flips originating from the tail of the kite graph. Tail nodes are uniquely positioned to pass on fluctuations to their neighbors, eventually causing a cascade of flips that reach the tipping point. This simple example illustrates how the network structure can influence the system's dynamics and the information flows that precede a metastable transition: noise pushes the system towards a tipping point, originating first in low-degree nodes for dynamics governed by the Boltzmann–Gibbs distribution.

3.3. Network Structure and Node Roles in Metastable Transitions

The domino effect is not solely determined by node degree. As the system nears the tipping point, network effects become significant. For instance, in the kite graph, node 8 (degree 2) exhibits the highest integrated mutual information when 2 bits are flipped (Figure 2b). In contrast, node 3 (degree 6) shows low shared information prior to the tipping point but high shared information at the tipping point.
This transition highlights how the network structure as a whole contributes to a system’s behavior. Local structural measures, such as degree centrality, may undervalue a node’s contribution towards a tipping point and the eventual settlement in a new attractor.

3.4. Tipping Point Dynamics and Information Flow

At the tipping point, the system is most likely to either move to a new attractor state or relax back to its original state (Figure 4). Path analysis reveals that the most likely paths to the tipping point result in a configuration where a high-degree cluster of nodes must flip. This trajectory is less likely than reversing the path shown in Figure 4, explaining why most tipping points “fail” and relax back to the original attractor state (Figure 5b).
The increased information of node 8 around the tipping point can be understood by considering its predictive power about the system’s future. As shown in Figure 5a, both node 3 and node 8 have low uncertainty about the future system state, but the nature of this certainty differs. Node 3 is more certain that the average system state will equal its state at the tipping point, while node 8 is more certain that the future system state will have the opposite sign to its state at the tipping point.

3.5. Role Division and Interventions in Tipping Behavior

We approximate the role of a node $i$ using the difference between integrated mutual information and asymptotic information:
$$r_i = \max_{S} \mu^*(s_i \mid S) - \max_{S} \omega^*(s_i) \;\in\; [-1, 1],$$
where $\mu^*$ and $\omega^*$ are normalized versions of $\mu$ and $\omega$, respectively.
Nodes with role values close to 1 are classified as “initiators” with high predictive information about short-lived system trajectories. Nodes with values close to $-1$ are “stabilizers” with high long-term predictive information about future system states.
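A minimal sketch of this role assignment, assuming $\mu$ and $\omega$ have already been computed per node and per partition (the array layout is a hypothetical choice of this sketch):

```python
import numpy as np

def node_roles(mu: np.ndarray, omega: np.ndarray) -> np.ndarray:
    """Role r_i in [-1, 1]: values near +1 mark initiators, values near
    -1 mark stabilizers. mu[i, p] and omega[i, p] hold the integrated
    and asymptotic information of node i under state partition p."""
    mu_star = mu / mu.max()          # normalize to [0, 1]
    omega_star = omega / omega.max()
    return mu_star.max(axis=1) - omega_star.max(axis=1)
```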
We validated these roles using simulated interventions (Figure 3). Pinning initiator nodes to the 0 state promotes tipping points, while pinning stabilizer nodes is essential for stabilizing transitions between attractor states.

4. Discussion

Understanding how metastable transitions occur may help in understanding how, for example, a pandemic occurs or a system undergoes critical failure. In this paper, dynamical networks governed by the Boltzmann–Gibbs distribution were used to study how endogenously generated metastable transitions occur. The external noise parameter (temperature) was fixed such that the statistical complexity of the system behavior was maximized (see Appendix A.2).
The results show that in the network, two distinct node types could be identified: initiator and stabilizer nodes. Initiator nodes are essential early in the metastable transition. Due to their high degree of freedom, these nodes are more affected by external noise. They are instigators and propagate noise in the system, destabilizing more stable nodes. In contrast, stabilizer nodes have a low degree of freedom and require more energy to change state. These nodes are essential for the metastable behavior as they stabilize the system macrostate. During the metastable transition, a domino sequence of node state changes is propagated in an ordered sequence toward the tipping point.
This domino effect was revealed through two information features, unveiling an information cascade underpinning the trajectories toward the tipping point.
Integrated mutual information captured how short-lived correlations are passed on from the initiator nodes. In the stable regime (close to the ground state), low-degree nodes drive the system dynamics. Low-degree nodes destabilize the system, pushing the system closer to the tipping point. In most cases, the initiator nodes will fail to propagate the noise to their neighbors. On rare occasions, however, the cascade is propagated progressively from low degree to higher and higher degree. A similar domino mechanism was recently found in climate science [6,27]. Wunderling and colleagues provided a simplified model of the climate system, analyzing how various components contribute to the stability of the climate. They found that interactions generally stabilize the system dynamics. If, however, a metastable transition was initialized, the noise was propagated through a similar mechanism as found here, i.e., an initializer node propagated noise through the system, which created a domino effect that percolated through the system.
An increase in asymptotic information forms an indicator of how close the system is to a tipping point. Close to the ground state, the asymptotic information is low, reflecting how transient noise perturbations are not amplified, and the system macrostate relaxes back to the ground state. As the system approaches the tipping point, the asymptotic information increases. As the distance to the ground state increases, the system is more likely to transition between metastable states. After the transition, there remains a longer-term correlation. Asymptotic information reflects the long(er) timescale dynamics of the system. This “rest” information peaks at the tipping point as the system chooses its next state.
The information viewpoint offers an alternative way to understand how metastable transitions are generated by dynamical networks. Two information features were introduced that decompose the metastable transition into sources of high information processing (integrated mutual information) and the distance of the system to the tipping point (asymptotic information). A domino effect was revealed, whereby low-degree nodes initiate the tipping point, making it more likely for higher-degree nodes to tip. At the tipping point, long-term correlations stabilize the system inside the new metastable state. Importantly, the information perspective allows for estimating integrated mutual information directly from data without knowing the mechanisms that drive the tipping behavior. The results highlight how short-lived correlations are essential to initiate the information cascade for crossing a tipping point.

5. Conclusions

Our information-theoretic approach offers an alternative view for understanding how metastable transitions are generated by dynamical networks. Two information features were introduced that decompose the metastable transition into sources of high information processing (integrated mutual information) and the distance of the system to the tipping point (asymptotic information). A domino effect was revealed, whereby low-degree nodes initiate the tipping point, making it more likely for higher-degree nodes to tip. At the tipping point, long-term correlations stabilize the system inside the new metastable state. Importantly, the information perspective allows for estimating integrated mutual information directly from data without knowing the mechanisms that drive the tipping behavior. The results highlight how short-lived correlations are essential to initiate the information cascade for crossing a tipping point.

6. Limitations

Integrated mutual information was computed based on exact information flows. For binary systems, this means computing a transfer matrix on the order of $2^{|S|} \times 2^{|S|}$, which restricted the present analysis to smaller graphs. It would be possible to use Monte Carlo methods to estimate the information flows. However, $I(s_i^\tau : S^{\tau+t})$ remains expensive to compute. When using computational models, it is necessary to compute the conditional and marginal distributions, which are of order $O(2^{|S|})$ and $O(2^{t|S|})$, respectively. In Appendix A.11, we give a proof of principle of how the results presented here would generalize to larger systems.
In addition, the decomposition of the metastable transition depends on the partition of the state space. Information flows are, in essence, statistical dependencies among random variables. Here, the effect of how the tipping point was reached was studied by partitioning the average system state in terms of the number of bits flipped. This partitioning assumes that the majority of states prior to the tipping point are reached by having a fraction $c \in [0, 1]$ of bits flipped. The contribution of each system state over time, however, reflects a distribution of different states; reaching the tipping point from the ground state $\{0, \ldots, 0\}$ can be done at two steps prior to tipping by either remaining at 0.4 bits flipped or by transitioning from 0.3 bits flipped to 0.4 and eventually to 0.5 in two time steps. The effect of these additional paths showed marginal effects on the integrated mutual information and asymptotic information.
Information flows conditioned on a partition are a form of conditional mutual information [33]. Prior results showed that conditioning can produce synergy, i.e., information that is only present in the joint of all variables but cannot be found in any subset of the variables. Unfortunately, there is no generally agreed-upon definition of how to measure synergy [34,35], and different estimators exist that may over- or underestimate the synergistic effects. By partitioning, one can create synergy, as for a given partition each spin has some additional information about the other spins. For example, by taking the states such that $\langle S \rangle = 0.1$, each spin “knows” that the average of the system equals 0.1. This creates shared information among the spins. Analyses were performed to estimate synergy using the redundancy estimator $I_{\min}$ [36]. Using this approach, no synergy was measured that affected the outcome of this study. However, it should be emphasized that synergistic effects may influence the causal interpretation of the approach presented here.
A general class of systems governed by the Boltzmann–Gibbs distribution was studied. For practical purposes, only the kinetic Ising model was tested, but we speculate that the results should hold (in principle) for other systems dictated by the Boltzmann–Gibbs distribution. We leave the extension to other Hamiltonian systems for future work.
The practical implementation of interventions based on our theoretical framework faces several real-world challenges. First, in actual complex systems, measuring and monitoring the complete state space in real time may be technically infeasible or prohibitively expensive. Second, the ability to perform precise, targeted interventions on specific components of the system may be limited by physical constraints or technological capabilities. Third, the assumption of perfect knowledge about system parameters and state transitions may not hold in real-world scenarios where noise, measurement errors, and external perturbations are present. Furthermore, the time scales at which interventions need to be implemented may be too rapid for practical human or automated response systems. These practical limitations suggest that while our framework provides valuable theoretical insights, its application may require significant adaptation and simplification for real-world implementation, potentially trading off optimal control for practical feasibility.

Author Contributions

Conceptualization, C.v.E.; Methodology, C.v.E.; Software, C.v.E.; Validation, C.v.E.; Formal analysis, C.v.E.; Investigation, C.v.E.; Resources, C.v.E.; Data curation, C.v.E.; Writing—original draft preparation, C.v.E.; Writing—review and editing, C.v.E., R.Q. and P.M.A.S.; Visualization, C.v.E.; Supervision, C.v.E., R.Q. and P.M.A.S.; Project administration, C.v.E.; Funding acquisition, C.v.E., R.Q. and P.M.A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by grant Hyperion 2454972 of the Dutch National Police.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The datasets generated and/or analyzed during the current study are available in the https://github.com/cvanelteren/metastability repository (accessed on 28 November 2024).

Conflicts of Interest

The authors declare no competing interests.

Appendix A

Appendix A.1. Background, Scope and Innovation

Noise-induced transitions produce metastable behavior that is fundamental for the functioning of complex dynamical systems. For example, in neural systems, the presence of noise enhances information processing through several mechanisms: stochastic resonance, where moderate levels of noise can amplify weak signals and improve signal detection; prevention of neural networks becoming stuck in local minima, thereby maintaining system flexibility; and enabling more efficient exploration of different neural states during computation [37,38,39,40]. These effects have been demonstrated both in experimental studies of neural circuits and theoretical models of neural computation.
Similarly, glacial ice ages have been shown to correlate strongly with Earth's orbital eccentricity. Metastability manifests itself through noise, which can be of two kinds [9]. External noise originates from events outside the internal system dynamics [10,41]. Examples include the influence of climate effects, population growth, or random noise sources on transmission lines. External noise is commonly modeled by replacing an external control or order parameter with a stochastic process. Internal noise, in contrast, is inherent to the system itself and is caused by random interactions of elements within the system, e.g., individuals in a population or molecules in chemical processes. Both types of noise can generate transitions between metastable states. In this paper, we study the metastable behavior of internal noise in complex dynamical networks governed by kinetic Ising dynamics.
The ubiquity of multistability in complex systems calls for a general framework to understand how metastable transitions occur. The diversity of complex systems can be captured by interaction networks that dynamically evolve over time. These dynamics can be seen as a distributive network of computational units, where each unit or element of the interaction network changes its state based on input from its local neighborhood. Lizier demonstrated that the dynamic interaction of complex systems can be understood through their local information processing [42,43,44]. Instead of describing the dynamics of the system in terms of domain knowledge such as voltage over distance, disease spreading rate, or climate conditions, one can understand the dynamics in terms of information dynamics. In particular, the field of information dynamics is concerned with describing system behavior through its capacity to store, transmit, and modify information. By abstracting away the domain details of a system and recasting the dynamics in terms of how the system computes its next state, one can capture the intrinsic computations a system performs. The system behavior is encoded in terms of probability, and the relationships among these variables are explored using the language of information theory [45].
Information theory offers profound benefits over traditional methods used in metastability analysis, as the methods developed are model-free, can capture nonlinear relationships, can be used for both discrete and continuous variables, and can be estimated directly from data [30]. Shannon information measures, such as mutual information and Fisher information, can be used to study how much information the system dynamics shares with the control parameter [11,46].
Past research on information flows and metastable transitions focuses on methods to detect the onset of a tipping point [47,48,49]. It often centers around an observation that the system’s ability to absorb noise reduces prior to the system going through a critical point. This critical slowing down can be captured as a statistical signature where the Fisher information peaks [50]. However, these methods traditionally use some form of control parameter driving the system towards or away from a critical point. Most real-world systems lack such an explicit control parameter and require different methods. Furthermore, detecting a tipping point does not necessarily lead to a further understanding of how the tipping point was created. For example, in a finite-size Ising model, the system produces bistable behavior. As one increases the noise parameter, the bistable behavior disappears. The increase in noise effectively changes the energy landscape, but little information is gained about how the metastable behavior initially emerged.
In this work, a novel approach using information theory is explored to study metastable behavior. The statistical coherence between parts of the system is quantified by the capability of individual nodes to predict the future behavior of the system [43]. Two information features are introduced: integrated mutual information measures the predictive information of a node on the future of the system, and asymptotic information measures the long-timescale memory capacity of a node. These measures differ from previous information methods such as transfer entropy [51], conditional mutual information under causal intervention [52], causation entropy [53], and time-delayed variants [54] in that those methods are used to infer the transfer of information between sets of nodes, possibly correcting for a third variable. Here, instead, we aim to understand how the elements in the system contribute to the macroscopic properties of the system. It is important to emphasize that information flows are not directly comparable to causal flows [33]. A rule of thumb is that causal flows focus on micro-level dynamics (X causes Y), whereas information flows focus on the predictive aspects, providing a holistic view of emergent structures [43]. In this sense, this work is similar to predictive information [55], where the predictive information of some system $S$ is projected onto its constituent elements $s_i \in S$ and computed as a function of time $t$.

Appendix A.2. Methods and Definitions

Appendix A.2.1. Model

To study metastable behavior, we consider a system as a collection of random variables $S = \{s_1, \ldots, s_n\}$ governed by the Boltzmann–Gibbs distribution:
$$p(S) = \frac{1}{Z} \exp(-\beta H(S)),$$
where $\beta = \frac{1}{T}$ is the inverse temperature, which controls the noise in the system, and $H(S)$ is the system Hamiltonian, which encodes the node–node dynamics. The choice of the energy function dictates what kind of system behavior we observe. Here, we focus on arguably the simplest models that show metastable behavior: the kinetic Ising model and the Susceptible–Infected–Susceptible model.
Temporal dynamics are simulated using Glauber dynamics sampling. In each discrete time step, a spin is randomly chosen, and a new state $X' \in S$ is accepted with probability
$$p(\text{accept } X') = \frac{1}{1 + \exp(\beta \Delta E)},$$
where $\Delta E = H(X') - H(X)$ is the energy difference between the current state $X$ and the proposed state $X'$.
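A self-contained sketch of this sampling scheme is given below, with 0/1 node states mapped to ±1 spins and unit couplings on the edges; this is a plain reimplementation under those assumptions, not our production code:

```python
import numpy as np
import networkx as nx

def ising_energy(graph: nx.Graph, state: np.ndarray) -> float:
    """Ising Hamiltonian with J_ij = 1 on edges and zero external field;
    0/1 states are mapped to -1/+1 spins."""
    spins = 2 * state - 1
    return -float(sum(spins[i] * spins[j] for i, j in graph.edges))

def glauber_step(graph: nx.Graph, state: np.ndarray, beta: float,
                 rng: np.random.Generator) -> np.ndarray:
    """One update: choose a random node and accept its flip with the
    Boltzmann-Gibbs acceptance probability."""
    i = rng.integers(len(state))
    proposal = state.copy()
    proposal[i] = 1 - proposal[i]
    delta_E = ising_energy(graph, proposal) - ising_energy(graph, state)
    if rng.random() < 1.0 / (1.0 + np.exp(beta * delta_E)):
        return proposal
    return state

rng = np.random.default_rng(0)
g = nx.krackhardt_kite_graph()
state = np.zeros(g.number_of_nodes(), dtype=int)  # ground state: all 0
for _ in range(1000):
    state = glauber_step(g, state, beta=0.5, rng=rng)
```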

Appendix A.2.2. Kinetic Ising Model

The traditional Ising model, originally developed to study ferromagnetism, is considered one of the simplest models that generate complex behavior. It consists of a set of binary distributed spins $S = \{s_1, \ldots, s_n\}$. The spins contribute energy given by the Hamiltonian:
$$H(S) = -\sum_{\langle i,j \rangle} J_{ij} s_i s_j - h \sum_i s_i,$$
where $J_{ij}$ is the interaction energy of the spins $s_i, s_j$.
The interaction energy effectively encodes the underlying network structure of the system. Different network structures are used in this study to provide a comprehensive numerical overview of the relation between network structure and information flows (see Appendix A.2). The interaction energy J i j is set to 1 if a connection exists in the network.
For sufficiently low noise (temperature), the Ising model shows metastable behavior (Figure 1c). Here, we aim to study how the system goes through a tipping point by tracking the information flow per node with the entire system state.

Appendix A.3. Information Flow on Complex Networks

Informally, information flows measure the statistical coherence between two random variables $X$ and $Y$ over time, such that the present information in $Y$ cannot be explained by the past of $Y$ but rather by the past of $X$. Estimating information flow is inherently difficult due to the presence of confounding factors, which potentially trap the interpretation in the “correlation does not equal causation” paradigm. In some contexts, however, information flow can be interpreted as causal [31]. Let $S = \{s_1, \ldots, s_n\}$ be a random process, and let $S^t$ represent the state of the random process at some time $t$. The information present in $S$ is given as the Shannon entropy:
$$H(S) = -\sum_{x \in S} p(x) \log p(x),$$
where $\log$ is base 2 unless otherwise stated, and $p(x)$ is used as shorthand for $p(S = x)$. Shannon entropy captures the uncertainty of a random variable; it can be understood as the number of yes/no questions needed to determine the state of $S$. This measure of uncertainty naturally extends to two variables with Shannon mutual information. Let $s_i$ be an element of the state of $S$; then the Shannon mutual information $I(S; s_i)$ is given as:
$$I(S; s_i) = \sum_{x \in S,\, s \in s_i} p(x, s) \log \frac{p(x, s)}{p(x)\, p(s)} = H(S) - H(S \mid s_i).$$
Shannon mutual information can be interpreted as the uncertainty reduction of $S$ after knowing the state of $s_i$. Consequently, it encodes how much statistical coherence $s_i$ and $S$ share. Shannon mutual information can be measured over time to encode how much information (in bits) flows from state $s_i^\tau$ to $S^{\tau+t}$:
$$I(S^{\tau+t}; s_i^\tau) = H(S^{\tau+t}) - H(S^{\tau+t} \mid s_i^\tau).$$
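For a discrete system, this quantity reduces to a generic mutual-information computation over a joint distribution; a sketch (not specific to our model) is:

```python
import numpy as np

def mutual_information(p_joint: np.ndarray) -> float:
    """I(X; Y) in bits from a joint distribution p_joint[x, y]."""
    px = p_joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = p_joint.sum(axis=0, keepdims=True)  # marginal p(y)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask]
                        * np.log2((p_joint / (px * py))[mask])))

print(mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])))      # 1.0 bit
print(mutual_information(np.array([[0.25, 0.25], [0.25, 0.25]])))  # 0.0 bits
```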
Prior results showed that the nodes with the highest causal importance are those that have the highest information flow (i.e., maximize Equation (A5)) [31]. Intuitively, the nodes whose past information the future system “remembers” are the ones that “drive” the system dynamics. Formally, these driver nodes can be identified by computing the total information flow between $S^t$ and $s_i$, which can be captured with the integrated mutual information [31]:
$$\mu(s_i) = \sum_{\tau=0}^{\infty} I(s_i^{t-\tau}; S^t).$$
In some contexts, the nodes that maximize (A6) are those that have the highest causal influence in the system [31]. However, in general, information flows are difficult to equate to causal flows [33,43]. Here, the local information flows are computed by considering the integrated mutual information conditioned on part of the entire state space. This allows for mapping the local information flows between nodes and the system over time but does not guarantee that the measured information flows are directly causal. The main reason is that having predictive power about the future could be completely caused by the partitioning. In [31], the measured correlations considered all possible states, and the measures were directly related to a causal effect.
In addition, in [31], the shared information between the system and a node shifted over time, $I(S^\tau : s_i^{\tau+t})$, was considered. Applying this approach under a state partition, $I(S^\tau : s_i^{\tau+t} \mid \langle S \rangle)$, causes a violation of the data processing inequality, as information may flow from a node at a particular time $t = t_1$ and then flow back to the node at $t = t_2$, where $t_2 > t_1$. To simplify the interpretation of the information flows and maintain the data processing inequality, the reverse, $I(S^{\tau+t} : s_i^\tau \mid \langle S \rangle)$, was computed in the present study.

Appendix A.4. Noise Matching Procedure

The Boltzmann–Gibbs distribution is parameterized by the noise factor $\beta = \frac{1}{kT}$, where $T$ is the temperature and $k$ is the Boltzmann constant. For high $\beta$ values, metastable behavior occurs in the kinetic Ising model. The temperature was chosen such that the statistical complexity [56] was maximized. The statistical complexity $C$ is computed as:
$$C = \bar{H}(S) \cdot D(S),$$
where $\bar{H}(S) = \frac{H(S)}{\log_2(|S|)}$ is the normalized system entropy, and $D(S)$ measures the distance to disequilibrium:
$$D(S) = \sum_i \left( p(S_i) - \frac{1}{|S|} \right)^2 .$$
A typical statistical complexity curve is shown in Figure A1. The noise parameter β is set such that it maximizes the statistical complexity using numerical optimization (COBYLA method in Scipy’s optimize.minimize module) [57].
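The following sketch reproduces this procedure for a small system under stated assumptions (exact enumeration of the equilibrium Boltzmann–Gibbs distribution with $J_{ij} = 1$ and zero field); the helper names are ours, not those of the original codebase:

```python
import itertools
import numpy as np
import networkx as nx
from scipy.optimize import minimize

def boltzmann(graph: nx.Graph, beta: float) -> np.ndarray:
    """Exact p(S) over all 2^n states for the Ising Hamiltonian."""
    n = graph.number_of_nodes()
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    A = nx.to_numpy_array(graph)
    E = -0.5 * np.einsum("si,ij,sj->s", states, A, states)
    w = np.exp(-beta * E)
    return w / w.sum()

def statistical_complexity(p: np.ndarray) -> float:
    """C = normalized entropy times disequilibrium."""
    n = len(p)
    H = -np.sum(p[p > 0] * np.log2(p[p > 0])) / np.log2(n)
    D = np.sum((p - 1.0 / n) ** 2)
    return float(H * D)

g = nx.krackhardt_kite_graph()
res = minimize(lambda b: -statistical_complexity(boltzmann(g, b[0])),
               x0=[0.5], method="COBYLA")
```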
Figure A1. (a) Statistical complexity ($C$), normalized system entropy ($\bar{H}(S)$), and disequilibrium ($D(S)$) as a function of the temperature ($T = \frac{1}{\beta}$) for the Krackhardt kite graph. The noise parameter was set such that it maximizes the statistical complexity (vertical black line). The values are normalized between [0, 1] for aesthetic purposes. (b) State distribution $p(S)$ for the temperature that maximizes the statistical complexity in (a), as a function of the number of nodes in state 1.

Appendix A.5. Exact Information Flows $I(s_i^\tau; S^{\tau+t})$

In order to compute $I(s_i^\tau : S^{\tau+t})$, the conditional distribution $p(S^{\tau+t} \mid s_i^\tau)$ and the marginal $p(S^{\tau+t})$ need to be computed. This is achieved through direct computation on partitions of the state space defined by the number of bits flipped, effectively encoding the distance towards or away from a tipping point.
We partition the state space based on the average magnetization $\langle S \rangle$, allowing us to track states that are $n$ bit flips away from the tipping point.
For Glauber dynamics, the system $S$ transitions into $S'$ by randomly choosing a node $s_i$ to potentially flip. The transition matrix $P$ can be constructed by computing each entry $p_{ij}$ as:
$$p_{ij} = \frac{1}{|S|} \cdot \frac{1}{1 + \exp(\beta \Delta E)} \quad (i \neq j), \qquad p_{ii} = 1 - \sum_{j \neq i} p_{ij},$$
where $\Delta E = H(S_j) - H(S_i)$ encodes the energy difference of moving from $S_i$ to $S_j$.
For each partition $S_\gamma = \{ S \in \mathcal{S} \mid \langle S \rangle = \gamma \}$, we:
  • Compute transition probabilities between states within the partition
  • Renormalize probabilities to ensure conservation within the partition
  • Evaluate p ( S t | s i ) for all possible node states s i in that partition
The marginal distribution $p(S^{\tau+t})$ is then computed as:
$$p(S^{\tau+t}) = \sum_{s_i} p(S^{\tau+t} \mid s_i^\tau)\, p(s_i^\tau).$$
This procedure provides exact information flows for states at specific distances from the tipping point, allowing us to track how correlations evolve as the system approaches and moves away from the transition.
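A sketch of this iteration, assuming the (partition-restricted and renormalized) transition matrix `P` has already been built and that both node states have support; the variable names are ours:

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def information_flow(P: np.ndarray, p0: np.ndarray,
                     node_state: np.ndarray, t_max: int) -> np.ndarray:
    """I(s_i^tau : S^{tau+t}) for t = 0..t_max-1. P[x, y] is the
    state-space transition matrix, p0 the state distribution at tau,
    and node_state[x] in {0, 1} the value of node i in global state x."""
    # condition the state distribution on the node's current value
    p_cond = {s: np.where(node_state == s, p0, 0.0) for s in (0, 1)}
    p_node = {s: p_cond[s].sum() for s in (0, 1)}
    for s in (0, 1):
        p_cond[s] = p_cond[s] / p_node[s]   # p(S^tau | s_i^tau = s)
    curve = np.zeros(t_max)
    for t in range(t_max):
        p_marg = sum(p_node[s] * p_cond[s] for s in (0, 1))
        curve[t] = entropy(p_marg) - sum(p_node[s] * entropy(p_cond[s])
                                         for s in (0, 1))
        for s in (0, 1):
            p_cond[s] = p_cond[s] @ P       # propagate one time step
    return curve
```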

Extrapolation with Regressions

Exact information flows were computed per graph for $t = 500$ time steps. Using ordinary least squares, a double exponential was fit to extrapolate the information flows to longer $t$ and to estimate the integrated mutual information and asymptotic information.
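A sketch of this fit with SciPy, using synthetic data in place of the exact curves:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, k1, a2, k2, omega):
    """Fast and slow decay modes plus the asymptotic offset omega."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + omega

t = np.arange(500.0)
# synthetic stand-in for an exact decay curve I(s_i^tau : S^{tau+t})
info = 0.8 * np.exp(-0.05 * t) + 0.2 * np.exp(-0.005 * t) + 0.05

(a1, k1, a2, k2, omega), _ = curve_fit(double_exp, t, info,
                                       p0=[1.0, 0.1, 0.5, 0.01, 0.0])
mu = a1 / k1 + a2 / k2  # closed-form area above the asymptote omega
```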

Appendix A.6. Noise Estimation Procedure

Tipping point behavior under intervention was quantified by evaluating the level of noise on both sides of the tipping point. Let $T_1$ represent the ground state where all spins are 0, $T_2$ the state where all spins are 1, and let the tipping point $T_P$ be where the instantaneous macrostate $M(S^t) = 0.5$. Fluctuations of the system macrostate were evaluated by analyzing the second moment above and below the tipping point. This was achieved by numerically simulating the system trajectories under 6 different seeds for $t = 10^6$ time steps. The data were split between two sets (above and below the tipping point), and the noise $\eta$ was computed as:
$$\eta = \frac{1}{\alpha^2 \, |S_w|} \sum_{w} \left( S_w^t \right)^2,$$
where $w \in \{\langle S \rangle < 0.5,\; \langle S \rangle > 0.5\}$ indexes the set above or below the tipping point, and
$$S_w^t = \begin{cases} \langle S^t \rangle & \text{if } \langle S^t \rangle < 0.5, \\ 1 - \langle S^t \rangle & \text{if } \langle S^t \rangle > 0.5 \end{cases}$$
is the instantaneous system trajectory for the system macrostate above or below the tipping point value. The factor $\alpha$ corrects for the reduced range the system macrostate has under interventions. For example, pinning a node $s_i$ to state 0 reduces the maximum possible macrostate to $1 - \frac{1}{n}$, where $n$ is the size of the system. The correction factor is then set for the range $\langle S \rangle > 0.5$ to $\alpha = \frac{n/2 - 1}{n}$.
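A sketch of this estimator given a simulated macrostate trajectory, with the correction factor $\alpha$ supplied per intervention:

```python
import numpy as np

def noise_second_moment(macro: np.ndarray, alpha: float = 1.0) -> dict:
    """Second moment of the folded macrostate on each side of the
    tipping point at 0.5."""
    eta = {}
    for label, mask in (("below", macro < 0.5), ("above", macro > 0.5)):
        folded = np.where(macro[mask] < 0.5, macro[mask], 1 - macro[mask])
        eta[label] = float((folded ** 2).sum() / (alpha ** 2 * mask.sum()))
    return eta
```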

Appendix A.7. Switch Susceptibility as a Function of Degree

First, we investigate the susceptibility of a spin as a function of its degree. The susceptibility of a spin switching its state is a function of both the system temperature T and the system dynamics. The system dynamics contribute to the susceptibility through the underlying network structure either directly or indirectly. The network structure produces local correlations, which affect the switching probability for a given spin.
As an initial approximation, we consider the susceptibility of a target spin $s_i$ to flip from a majority state to a minority state, given the state of its neighbors, where the neighbors are not connected among themselves. Furthermore, we assume that for the instantaneous update of $s_i$, the configuration of the neighborhood of $s_i$ can be considered the outcome of a binomial trial. Let $N$ be a random variable with state space $\{0, 1\}^{|N|}$, and let $n_j \in N$ represent a neighbor of $s_i$. We assume that all neighbors of $s_i$ are i.i.d. given the instantaneous system magnetization:
$$M(S^t) = \frac{1}{|S^t|} \sum_i s_i^t .$$
Let the minority state be 1 and the majority state be 0. The expectation of $s_i$ flipping from the majority state to the minority state is given as:
$$\mathbb{E}_{p(N)}\left[ p(s_i = 1 \mid N) \right] = \sum_{N_i \in N} p(N_i)\, p(s_i = 1 \mid N_i) = \sum_{N_i \in N} \prod_{j}^{|N_i|} p(n_j)\, p(s_i = 1 \mid N_i) = \sum_{N_i \in N} \binom{n}{k} f^k (1 - f)^{n - k}\, p(s_i = 1 \mid f),$$
where $f$ is the fraction of nodes in the majority state, $n$ is the number of neighbors, and $k$ is the number of neighbors in state 0. As shown in Figure A3, this is computed as a function of the degree of spin $s_i$. As the degree increases, the susceptibility of a spin decreases relative to the same spin with a lower degree. This implies that changes due to random fluctuations are more likely to occur in nodes with fewer external constraints, as measured by degree.
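A sketch of this expectation for ±1 spins with unit couplings and zero field, where the flip probability follows the Glauber rule (the mapping of 0/1 states to ∓1 spins is an assumption of this sketch):

```python
import numpy as np
from scipy.stats import binom

def flip_susceptibility(degree: int, f: float, beta: float) -> float:
    """Expected probability that a node in the majority state 0 (spin -1)
    flips to the minority state 1, when each neighbor is independently
    in state 0 with probability f."""
    expectation = 0.0
    for k in range(degree + 1):       # k neighbors in state 0 (spin -1)
        field = (degree - k) - k      # net local field from +/-1 spins
        delta_E = -2.0 * field        # energy cost of flipping -1 -> +1
        p_flip = 1.0 / (1.0 + np.exp(beta * delta_E))
        expectation += binom.pmf(k, degree, f) * p_flip
    return expectation

# Susceptibility drops with degree when neighbors mostly hold the majority:
for d in (2, 4, 8):
    print(d, round(flip_susceptibility(d, f=0.9, beta=0.5), 4))
```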

Appendix A.8. Additional Networks

The kite graph was chosen as it allows exact information flows to be computed while retaining a wide variety of degrees given its small size. Other networks were also tested. In Figure A2, different network structures were used. Each node is governed by kinetic Ising spin dynamics.
Figure A2. Adjusted mutual information for a random tree (top) and the Leder–Coxeter–Frucht graphs (middle, bottom). Each node is governed by kinetic Ising spin dynamics. Far from the tipping point (fraction of nodes in state 1 ≠ 0.5), most information flows are concentrated on non-hub nodes. As the system approaches the tipping point (fraction = 0.5), the information flows move inwards, generating higher adjusted integrated mutual information for nodes with higher degrees.

Appendix A.9. Flip Probability per Degree

In Figure A3, the tendency of a node to flip from the majority to the minority state is computed as a function of the fraction of nodes possessing the majority state in the system, denoted as $N$. Two things are observed. First, nodes with lower degrees are more susceptible to noise than nodes with higher degrees. For a given system stability, nodes with lower degrees tend to have a higher tendency to flip. This holds true for all distances of the system from the tipping point. In contrast, the higher the degree of the node, the closer the system must be to a tipping point for the node to change its state. This can be explained by the fact that lower-degree nodes have fewer constraints compared to nodes with higher degrees. For Ising spin kinetics, the nodes with higher degrees tend to be more “frozen” in their dynamics than nodes with lower degrees. Second, for a node to flip with similar probability mass (i.e., $\mathbb{E}[p(s_i) \mid N] = 0.2$), a node with a higher degree needs to be closer to the tipping point than nodes with lower degrees. In fact, the order of susceptibility correlates with degree: at a fixed fraction of nodes in state 1, the susceptibility decreases with increasing degree.
Figure A3. Susceptibility of a node with degree $k$ switching from the minority state 0 to the majority state 1 as a function of the neighborhood entropy for $\beta = 0.5$. The neighborhood entropy encodes how stable the environment of a spin is. As the system approaches the tipping point, the propensity of a node to flip from the minority state increases faster for low-degree nodes than for high-degree nodes. Higher-degree nodes require more change in their local environment to flip to the majority state. See Appendix A.7 for details.
Figure A4. Shortest-path analysis of the system ending up in the tipping point from the state where all nodes have state 0. The node size is proportional to the expectation value of a node having state 1, $\mathbb{E}[s_i = 1 \mid S^t, M(S^5)]$, as a function of the fraction of nodes that have state 1. The expectation values are computed based on 30,240 trajectories; an example trajectory can be seen in Figure 4.

Appendix A.10. Synthetic Networks

For the synthetic graphs, 100 non-isomorphic connected Erdős–Rényi networks were generated with $p = 0.2$. Graphs were generated randomly and rejected if they did not contain a giant component or were isomorphic to an already generated graph. For each graph, information curves were computed as a function of the macrostate $\langle S \rangle$.
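A sketch of this rejection-sampling procedure with NetworkX, using connectivity as the stand-in for the giant-component check:

```python
import networkx as nx

def sample_graphs(n_nodes: int, n_graphs: int, p: float = 0.2,
                  seed: int = 0) -> list[nx.Graph]:
    """Rejection-sample connected, pairwise non-isomorphic
    Erdos-Renyi graphs."""
    graphs, trial = [], seed
    while len(graphs) < n_graphs:
        g = nx.erdos_renyi_graph(n_nodes, p, seed=trial)
        trial += 1
        if not nx.is_connected(g):
            continue
        if any(nx.is_isomorphic(g, h) for h in graphs):
            continue
        graphs.append(g)
    return graphs

graphs = sample_graphs(n_nodes=10, n_graphs=100)
```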

Noise and Time Spent

Various network structures are generated among the synthetic networks, and this variety has nonlinear effects on the information flows. The effect of the interventions in Figure 3 is therefore reported relative to the control values for each graph and seed. The second moment (Appendix A.6) and the time spent below the tipping point are normalized with respect to the graph (Figure A5) and the seed. In total, 6 seeds were used (0, 12, 123, 1234, 123456, 1234567).
Figure A5. Erdős–Rényi graphs generated from seed = 0 to produce non-isomorphic connected graphs.

Appendix A.11. Case Study of a Larger System

In this section, we extend our analysis to a 15-node network to demonstrate the applicability of our findings to larger systems (see Figure A6). This case study serves to validate our theoretical insights derived from smaller networks and to illustrate how the fundamental mechanisms of metastable transitions are preserved as network size increases. Despite the increased computational complexity, our results indicate that the structural features driving these transitions in smaller networks are also evident in larger ones.
Figure A6. Example of tipping behavior in a system consisting of $N = 15$ nodes. The colors of the curves correspond to the nodes in the network. The information decay curves are bundled per degree. The transition from left to right increases the number of bits flipped until the tipping point. A wave can be seen where the integrated information flows from lower-degree nodes to higher ones as the number of bits flipped increases. The size of the nodes is proportional to the integrated mutual information.
As highlighted in Section 6, the state space of a network grows exponentially ($2^n$) with the number of nodes, making simulations of larger systems computationally demanding. Nevertheless, our analysis of the 15-node network supports our assertion that the foundational processes identified in our primary study can be extrapolated to more complex networks. Detailed results and discussion of this 15-node network analysis are provided to substantiate our approach and highlight the consistency of our findings across different network sizes.
Figure A7. (A) Probability distribution of trajectories reaching the tipping point within 5 time steps, starting from an initial state where all nodes are set to 0. Out of all possible trajectories, 30,240 paths reached the tipping point. (B) Visualization of the highest-probability trajectories leading to system collapse. These equiprobable paths demonstrate the cascading failure mechanism, where specific initiator nodes trigger a domino effect throughout the network. The color gradient indicates the temporal progression of state changes, illustrating the sequential nature of the collapse process.

References

  1. Ladyman, J.; Lambert, J.; Wiesner, K. What Is a Complex System? Eur. J. Philos. Sci. 2013, 3, 33–67. [Google Scholar] [CrossRef]
  2. van Nes, E.H.; Arani, B.M.; Staal, A.; van der Bolt, B.; Flores, B.M.; Bathiany, S.; Scheffer, M. What Do You Mean, ‘Tipping Point’? Trends Ecol. Evol. 2016, 31, 902–904. [Google Scholar] [CrossRef] [PubMed]
  3. Kandel, E.R.; Schwartz, J.H.; Jessell, T.M. Principles of Neural Science, 4th ed.; McGraw-Hill Medical: New York, NY, USA, 2000. [Google Scholar]
  4. Fries, P. Rhythms for Cognition: Communication through Coherence. Neuron 2015, 88, 220–235. [Google Scholar] [CrossRef] [PubMed]
  5. Galam, S.; Cheon, T. Tipping Points in Opinion Dynamics: A Universal Formula in Five Dimensions. Front. Phys. 2020, 8, 566580. [Google Scholar] [CrossRef]
  6. Wunderling, N.; Donges, J.F.; Kurths, J.; Winkelmann, R. Interacting Tipping Elements Increase Risk of Climate Domino Effects under Global Warming. Earth Syst. Dyn. 2021, 12, 601–619. [Google Scholar] [CrossRef]
  7. Beggs, J.M.; Timme, N. Being Critical of Criticality in the Brain. Front. Physiol. 2012, 3. [Google Scholar] [CrossRef]
  8. Mitchell, M.; Hraber, P.; Crutchfield, J.P. Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations. arXiv 1993. [Google Scholar] [CrossRef]
  9. Forgoston, E.; Moore, R.O. A Primer on Noise-Induced Transitions in Applied Dynamical Systems. SIAM Rev. 2018, 60, 969–1009. [Google Scholar] [CrossRef]
  10. Czaplicka, A.; Holyst, J.A.; Sloot, P.M.A. Noise Enhances Information Transfer in Hierarchical Networks. Sci. Rep. 2013, 3, 1223. [Google Scholar] [CrossRef]
  11. Nicolis, G.; Nicolis, C. Stochastic Resonance, Self-Organization and Information Dynamics in Multistable Systems. Entropy 2016, 18, 172. [Google Scholar] [CrossRef]
  12. McNamara, B.; Wiesenfeld, K. Theory of Stochastic Resonance. Phys. Rev. A 1989, 39, 4854–4869. [Google Scholar] [CrossRef] [PubMed]
  13. Kramers, H.A. Brownian Motion in a Field of Force and the Diffusion Model of Chemical Reactions. Physica 1940, 7, 284–304. [Google Scholar] [CrossRef]
  14. Harush, U.; Barzel, B. Dynamic Patterns of Information Flow in Complex Networks. Nat. Commun. 2017, 8, 2181. [Google Scholar] [CrossRef] [PubMed]
  15. Gao, J.; Barzel, B.; Barabási, A.L. Universal Resilience Patterns in Complex Networks. Nature 2016, 536, 238. [Google Scholar] [CrossRef]
  16. Dong, G.; Wang, F.; Shekhtman, L.M.; Danziger, M.M.; Fan, J.; Du, R.; Liu, J.; Tian, L.; Stanley, H.E.; Havlin, S. Optimal Resilience of Modular Interacting Networks. Proc. Natl. Acad. Sci. USA 2021, 118, e1922831118. [Google Scholar] [CrossRef]
  17. Liu, Y.; Sanhedrai, H.; Dong, G.; Shekhtman, L.M.; Wang, F.; Buldyrev, S.V.; Havlin, S. Efficient Network Immunization under Limited Knowledge. Natl. Sci. Rev. 2021, 8, nwaa229. [Google Scholar] [CrossRef]
  18. Zenil, H.; Kiani, N.A.; Marabita, F.; Deng, Y.; Elias, S.; Schmidt, A.; Ball, G.; Tegnér, J. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems. iScience 2019, 19, 1160–1172. [Google Scholar] [CrossRef]
  19. Zenil, H.; Kiani, N.A.; Zea, A.A.; Tegnér, J. Causal Deconvolution by Algorithmic Generative Models. Nat. Mach. Intell. 2019, 1, 58–66. [Google Scholar] [CrossRef]
  20. Guo, C.; Yang, L.; Chen, X.; Chen, D.; Gao, H.; Ma, J. Influential Nodes Identification in Complex Networks via Information Entropy. Entropy 2020, 22, 242. [Google Scholar] [CrossRef]
  21. Hopfield, J.J. Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  22. Glauber, R.J. Time-Dependent Statistics of the Ising Model. J. Math. Phys. 1963, 4, 294–307. [Google Scholar] [CrossRef]
  23. Lenton, T.M.; Abrams, J.F.; Bartsch, A.; Bathiany, S.; Boulton, C.A.; Buxton, J.E.; Conversi, A.; Cunliffe, A.M.; Hebden, S.; Lavergne, T.; et al. Remotely Sensing Potential Climate Change Tipping Points across Scales. Nat. Commun. 2024, 15, 343. [Google Scholar] [CrossRef] [PubMed]
  24. Peng, X.; Small, M.; Zhao, Y.; Moore, J.M. Detecting and Predicting Tipping Points. Int. J. Bifurc. Chaos 2019, 29, 1930022. [Google Scholar] [CrossRef]
  25. Bury, T.M.; Sujith, R.I.; Pavithran, I.; Scheffer, M.; Lenton, T.M.; Anand, M.; Bauch, C.T. Deep Learning for Early Warning Signals of Tipping Points. Proc. Natl. Acad. Sci. USA 2021, 118, e2106140118. [Google Scholar] [CrossRef]
  26. D’Orsogna, M.R.; Perc, M. Statistical Physics of Crime: A Review. Phys. Life Rev. 2015, 12, 1–21. [Google Scholar] [CrossRef]
  27. Wunderling, N.; Stumpf, B.; Krönke, J.; Staal, A.; Tuinenburg, O.A.; Winkelmann, R.; Donges, J.F. How Motifs Condition Critical Thresholds for Tipping Cascades in Complex Networks: Linking Micro- to Macro-Scales. Chaos Interdiscip. J. Nonlinear Sci. 2020, 30, 043129. [Google Scholar] [CrossRef]
  28. Yang, Y.; Motter, A.E. Cascading Failures as Continuous Phase-Space Transitions. Phys. Rev. Lett. 2017, 119, 248302. [Google Scholar] [CrossRef]
  29. Yang, Y.; Nishikawa, T.; Motter, A.E. Small Vulnerable Sets Determine Large Network Cascades in Power Grids. Science 2017, 358, eaan3184. [Google Scholar] [CrossRef]
  30. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: New York, NY, USA, 2005. [Google Scholar] [CrossRef]
  31. van Elteren, C.; Quax, R.; Sloot, P. Dynamic Importance of Network Nodes Is Poorly Predicted by Static Structural Features. Phys. A Stat. Mech. Its Appl. 2022, 593, 126889. [Google Scholar] [CrossRef]
  32. Quax, R.; Apolloni, A.; Sloot, P.M.A. The Diminishing Role of Hubs in Dynamical Processes on Complex Networks. J. R. Soc. Interface 2013, 10, 20130568. [Google Scholar] [CrossRef]
  33. James, R.G.; Barnett, N.; Crutchfield, J.P. Information Flows? A Critique of Transfer Entropies. Phys. Rev. Lett. 2016, 116, 238701. [Google Scholar] [CrossRef] [PubMed]
  34. Beer, R.D.; Williams, P.L. Information Processing and Dynamics in Minimally Cognitive Agents. Cogn. Sci. 2015, 39, 1–38. [Google Scholar] [CrossRef] [PubMed]
  35. Kolchinsky, A. A Novel Approach to the Partial Information Decomposition. Entropy 2022, 24, 403. [Google Scholar] [CrossRef] [PubMed]
  36. Williams, P.L.; Beer, R.D. Nonnegative Decomposition of Multivariate Information. arXiv 2010, arXiv:1004.2515. [Google Scholar]
  37. McDonnell, M.D.; Ward, L.M. The Benefits of Noise in Neural Systems: Bridging Theory and Experiment. Nat. Rev. Neurosci. 2011, 12, 415–425. [Google Scholar] [CrossRef]
  38. Vázquez-Rodríguez, B.; Avena-Koenigsberger, A.; Sporns, O.; Griffa, A.; Hagmann, P.; Larralde, H. Stochastic Resonance at Criticality in a Network Model of the Human Cortex. Sci. Rep. 2017, 7, 13020. [Google Scholar] [CrossRef]
  39. Roy, S.; Majumdar, S. The Role of Noise in Brain Function. In Noise and Randomness in Living System; Roy, S., Majumdar, S., Eds.; Springer: Singapore, 2022; pp. 99–110. [Google Scholar] [CrossRef]
  40. Faisal, A.A.; Selen, L.P.J.; Wolpert, D.M. Noise in the Nervous System. Nat. Rev. Neurosci. 2008, 9, 292–303. [Google Scholar] [CrossRef]
  41. Calim, A.; Palabas, T.; Uzuntarla, M. Stochastic and Vibrational Resonance in Complex Networks of Neurons. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2021, 379, 20200236. [Google Scholar] [CrossRef]
  42. Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. The Information Dynamics of Phase Transitions in Random Boolean Networks. In Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems (ALife XI), Winchester, UK, 5–8 August 2008; pp. 374–381. [Google Scholar]
  43. Lizier, J.T.; Flecker, B.; Williams, P.L. Towards a Synergy-Based Approach to Measuring Information Modification. In Proceedings of the IEEE Symposium on Artificial Life (ALIFE), Singapore, 16–19 April 2013; pp. 43–51. [Google Scholar] [CrossRef]
  44. Lizier, J.T.; Bertschinger, N.; Jost, J.; Wibral, M. Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work. Entropy 2018, 20, 307. [Google Scholar] [CrossRef]
  45. Quax, R.; Har-Shemesh, O.; Sloot, P.M. Quantifying Synergistic Information Using Intermediate Stochastic Variables. Entropy 2017, 19, 85. [Google Scholar] [CrossRef]
  46. Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Information Modification and Particle Collisions in Distributed Computation. Chaos Interdiscip. J. Nonlinear Sci. 2010, 20, 037109. [Google Scholar] [CrossRef] [PubMed]
  47. Scheffer, M.; Bascompte, J.; Brock, W.A.; Brovkin, V.; Carpenter, S.R.; Dakos, V.; Held, H.; van Nes, E.H.; Rietkerk, M.; Sugihara, G. Early-Warning Signals for Critical Transitions. Nature 2009, 461, 53–59. [Google Scholar] [CrossRef] [PubMed]
  48. Prokopenko, M.; Lizier, J.T.; Obst, O.; Wang, X.R. Relating Fisher Information to Order Parameters. Phys. Rev. E 2011, 84, 041116. [Google Scholar] [CrossRef] [PubMed]
  49. Scheffer, M.; Carpenter, S.; Foley, J.A.; Folke, C.; Walker, B. Catastrophic Shifts in Ecosystems. Nature 2001, 413, 591–596. [Google Scholar] [CrossRef]
  50. Eason, T.; Garmestani, A.S.; Cabezas, H. Managing for Resilience: Early Detection of Regime Shifts in Complex Systems. Clean Technol. Environ. Policy 2014, 16, 773–783. [Google Scholar] [CrossRef]
  51. Schreiber, T. Measuring Information Transfer. Phys. Rev. Lett. 2000, 85, 461–464. [Google Scholar] [CrossRef]
  52. Ay, N.; Polani, D. Information Flows in Causal Networks. Adv. Complex Syst. 2008, 11, 17–41. [Google Scholar] [CrossRef]
  53. Runge, J.; Bathiany, S.; Bollt, E.; Camps-Valls, G.; Coumou, D.; Deyle, E.; Glymour, C.; Kretschmer, M.; Mahecha, M.D.; Muñoz-Marí, J.; et al. Inferring Causation from Time Series in Earth System Sciences. Nat. Commun. 2019, 10, 1–13. [Google Scholar] [CrossRef]
  54. Li, C. Functions of Neuronal Network Motifs. Phys. Rev. E 2008, 78, 037101. [Google Scholar] [CrossRef]
  55. Bialek, W.; Tishby, N. Predictive Information. arXiv 1999. [Google Scholar] [CrossRef]
  56. López-Ruiz, R.; Mancini, H.L.; Calbet, X. A Statistical Measure of Complexity. Phys. Lett. A 1995, 209, 321–326. [Google Scholar] [CrossRef]
  57. Virtanen, P.; Gommers, R.; Oliphant, T.E.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nat. Methods 2020, 17, 261–272. [Google Scholar] [CrossRef]
Figure 1. A dynamical network governed by kinetic Ising dynamics produces multistable behavior. (a) A typical trajectory is shown for a kite network in which each node is governed by Ising dynamics with β ≈ 0.534. The panels show system configurations S_i ∈ S as the system approaches the tipping point (orange to purple to red). For the system to transition between attractor states, it must cross an energy barrier (c). (b) The dynamics of the system can be represented as a graph. Each node represents a system configuration S_i ∈ S such as those depicted in (a). The probability of a particular system configuration p(S) is indicated by color; some states are more likely than others. The trajectory from (a) is visualized. Dynamics that move towards the tipping point (midline) destabilize the system, whereas dynamics that move away from it are stabilizing. (c) The stationary distribution of the system is bistable. Crossing the tipping point requires passing through a high-energy state (dashed line). Transitions between the attractor states are rare. For more information on the numerical simulations, see Appendix A.2.
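For readers who want to reproduce the qualitative behavior of Figure 1, a minimal Monte Carlo sketch is given below. It assumes asynchronous single-node Glauber updates with ±1 spins on the kite graph; it illustrates the dynamics described in the caption and is not the exact code behind the figure.

```python
# Sketch: sample a Glauber trajectory and track the mean state (macrostate).
# The macrostate dwells near one attractor, with rare noise-induced crossings
# of the tipping point at zero magnetization.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.krackhardt_kite_graph()
A = nx.to_numpy_array(G)
beta, T = 0.534, 50_000
s = -np.ones(G.number_of_nodes())        # start in the all-down attractor
magnetization = np.empty(T)
for t in range(T):
    j = rng.integers(len(s))             # asynchronous update: one node per step
    p_flip = 1.0 / (1.0 + np.exp(2 * beta * s[j] * (A[j] @ s)))
    if rng.random() < p_flip:
        s[j] *= -1
    magnetization[t] = s.mean()
print("fraction of time above the tipping point:", (magnetization > 0).mean())
```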
Figure 2. (a–e) Information flows as a function of distance to the tipping point, where each line color corresponds to the matching-colored node in the kite graph inset. Far from the tipping point, most information processing occurs in low-degree nodes (blue/purple, tail of the kite). As the system moves towards the tipping point, the information flows increase and shift towards higher-degree nodes (red/orange, core of the kite). (f) Integrated mutual information as a function of distance to the tipping point. The inset plots show how noise is introduced far from the tipping point in the tail of the kite graph (blue/purple nodes). As the system approaches the tipping point, the local information dynamics move from the tail to the core of the kite (red/orange nodes). (g) A rise in asymptotic information indicates that the system is close to a tipping point. At the tipping point, the decay is maximal as trajectories stabilize into one of the two attractor states. The color of each line consistently matches its corresponding node in the kite graph visualization.
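One way to approximate the quantities in Figure 2 from sampled data is sketched below: a plug-in estimate of the mutual information between a node's state at a reference time and the macrostate at increasing lags, integrated over the lag. The estimator, the unit-step integration, and the function names are assumptions of this sketch; the paper's exact estimator may differ.

```python
# Sketch: information decay curve and its integral from sampled trajectories.
import numpy as np

def mutual_information(x, y):
    # plug-in mutual information (in bits) for two discrete sample vectors
    xs, ys = np.unique(x), np.unique(y)
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def integrated_mi(node_at_t0, macro_series, max_lag=20):
    # node_at_t0: (runs,) node states at the reference time
    # macro_series: (runs, time) macrostates of the same runs
    curve = [mutual_information(node_at_t0, macro_series[:, t]) for t in range(max_lag)]
    return float(np.sum(curve)), np.array(curve)  # unit-step Riemann sum and the decay curve
```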
Figure 3. For a system to cross a tipping point, two distinct types of nodes are essential: stabilizers, which contain information about the system's next attractor state and facilitate transitions between states; and initiators, which propagate noise through the system. (a) The effect of causal pinning interventions, fixing node states to 0, is shown for Erdős–Rényi graphs (N = 100, 10 nodes each, p = 0.2, 6 seeds). Normalized system fluctuations (second moment) and time spent below the tipping point relative to the control are presented per network to indicate the effect of the pinning interventions. Pinning initiators increases the number of tipping events, while pinning stabilizers prevents tipping and increases noise above the tipping point. For more details on role approximation, see Section 3.5. (b) To exemplify the effect of the causal interventions in (a), typical system trajectories under pinning interventions on a node of the kite graph are shown. Colors reflect interventions on the corresponding nodes in the inset kite graph. Initiator-based interventions remove fluctuations below the tipping point (<0.5) and increase fluctuations above it, whereas stabilizer-based interventions stabilize tipping points while increasing noise.
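The pinning interventions of Figure 3 can be mimicked with a small modification of the Glauber sampler: the pinned node is simply excluded from updates. The sketch below compares time spent below the tipping point against an unpinned control; the function name, the pinned value, and the choice of node are illustrative.

```python
# Sketch: pinning intervention versus control.
import numpy as np
import networkx as nx

def simulate(G, beta, T, pin=None, pin_value=-1.0, seed=0):
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(G)
    s = -np.ones(G.number_of_nodes())
    if pin is not None:
        s[pin] = pin_value                 # clamp the pinned node's state
    m = np.empty(T)
    for t in range(T):
        j = rng.integers(len(s))
        if j != pin:                       # the pinned node never updates
            p = 1.0 / (1.0 + np.exp(2 * beta * s[j] * (A[j] @ s)))
            if rng.random() < p:
                s[j] *= -1
        m[t] = s.mean()
    return m

G = nx.krackhardt_kite_graph()
control = simulate(G, 0.534, 50_000)
pinned = simulate(G, 0.534, 50_000, pin=3)  # pin a core (hub) node
print("time below tipping point, control vs. pinned:",
      (control < 0).mean(), (pinned < 0).mean())
```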
Figure 4. The tipping point is initiated from the bottom up. Each node is colored according to its state: 0 (black) or 1 (yellow). Shown is a trajectory towards the tipping point that maximizes ∑_{t=1}^{5} log p(S_{t+1} | S_t, S_0 = {0}, ⟨S_5⟩ = 0.5). As the system approaches the tipping point, low-degree nodes flip first and recruit higher-degree nodes to further destabilize the system and push it towards the tipping point. In total, 30,240 trajectories reach the tipping point in 5 steps, and 10 trajectories attain the same maximal value as the trajectory shown in this figure (see Figure A7 for the remaining trajectories and probabilities).
Figure 5. (a) Shown are the conditional probabilities at time t = 10 relative to the tipping point. The shared information between the hub node 3 and the tail node 8 is similar in magnitude but, importantly, arises from different sources. The hub (node 3) has high certainty that the system macrostate will have the same sign as its own state. In contrast, node 8 has high certainty that the system macrostate at the tipping point will be opposite to its state. This is caused by the interaction between the network structure and the system dynamics, whereby the most likely trajectories from the stable regime to the tipping point are mediated by noise-induced dynamics from the tail to the core of the kite graph (see main text). (b) Successful metastable transitions are affected by network structure. Successful metastable transitions are those for which the sign of the macrostate differs before and after the tipping point, e.g., the system moving from the 0 macrostate to the +1 macrostate or vice versa. Shown are the numbers of successful metastable transitions for Figure 3 under control and under pinning interventions on the nodes of the kite graph.
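The conditional probabilities of Figure 5a can be estimated empirically by locating tipping events in a long trajectory and checking whether a node's state a fixed number of steps earlier agrees with the sign of the new attractor. The sketch below is one such estimator; the lead time, the sign convention, and the function name are assumptions, and exact zero crossings of the macrostate are assumed to be negligible.

```python
# Sketch: does a node's earlier state predict the sign of the next attractor?
import numpy as np

def attractor_agreement(node_states, macro, lead=10):
    # node_states, macro: aligned 1-D time series from one long simulation;
    # a tipping event is a sign change of the macrostate
    crossings = np.flatnonzero(np.sign(macro[:-1]) != np.sign(macro[1:]))
    crossings = crossings[crossings >= lead]
    node_before = node_states[crossings - lead]       # node state `lead` steps earlier
    attractor_after = np.sign(macro[crossings + 1])   # sign of the new attractor
    return float(np.mean(np.sign(node_before) == attractor_after))
```

Applied to the kite graph, this estimate would be expected to be high for the hub (node 3) and low for the tail node 8, consistent with the caption above.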