Systematic Review

Entropy and Complexity Tools Across Scales in Neuroscience: A Review

Centre National de la Recherche Scientifique (CNRS), Institute of Neuroscience (NeuroPSI), Paris-Saclay University, 91400 Saclay, France
* Author to whom correspondence should be addressed.
Entropy 2025, 27(2), 115; https://doi.org/10.3390/e27020115
Submission received: 28 November 2024 / Revised: 22 January 2025 / Accepted: 23 January 2025 / Published: 24 January 2025

Abstract

Understanding the brain’s intricate dynamics across multiple scales—from cellular interactions to large-scale brain behavior—remains one of the most significant challenges in modern neuroscience. Two key concepts, entropy and complexity, have been increasingly employed by neuroscientists as powerful tools for characterizing the interplay between structure and function in the brain across scales. The flexibility of these two concepts enables researchers to explore quantitatively how the brain processes information, adapts to changing environments, and maintains a delicate balance between order and disorder. This review illustrates the main tools and ideas used to study neural phenomena through these concepts. Rather than delving into the specific methods or analyses of each study, it aims to offer a broad overview of how these tools are applied within the neuroscientific community and how they are transforming our understanding of the brain. We focus on their applications across scales, discuss the strengths and limitations of different metrics, and examine their practical applications and theoretical significance.

1. Introduction

Our understanding of brain function has largely been shaped by the analysis of experimental data, from neuronal spikes and local field potentials (LFPs) to electrocorticography (ECoG), electroencephalography (EEG), and advanced imaging methods like voltage-sensitive dye imaging (VSDI) and functional magnetic resonance imaging (fMRI). In the absence of a unified theoretical framework in neuroscience, researchers have traditionally relied on statistical methods to analyze data [1]. As a result, neuroscientific discovery has often been data-driven, primarily focusing on characterizing variability or identifying correlations between recorded neural signals and the subject’s behavior, cognitive states, or sensory experiences. This approach has revealed exciting avenues for understanding the brain, even as we continue to search for unifying theories to tie everything together.
To move beyond traditional correlation analysis, entropy and complexity measures have emerged as powerful tools for quantifying the dynamic, nonlinear, and multiscale nature of brain activity across different levels of organization [2,3].
The concept of entropy, first introduced by Rudolf Clausius in the 19th century as a fundamental principle of thermodynamics [4], was transformed by Claude Shannon in 1948, who redefined it as a measure of information uncertainty in communication systems [5]. Shannon’s information theory extended entropy beyond its physical roots, providing a mathematical framework for quantifying unpredictability in any signal or probability distribution, laying the groundwork for its application in neuroscience. In this field, entropy has been used to analyze neural signals at different scales, for example, to quantify the unpredictability of spike trains [6,7], to quantify the variability in EEG rhythms [8,9], and to study the dynamic transitions between brain networks using fMRI [10,11], providing insights into the information-processing capabilities underlying sensory, motor, and cognitive functions.
Complexity encompasses a range of definitions across scientific fields, often linked to the amount of information or computational effort required to describe a system [12]. In neuroscience, complexity measures such as Lempel–Ziv complexity [13], neural complexity [14], and the Perturbational Complexity Index (PCI) [15] have been employed to capture the capacity of the brain for functional integration and segregation. Synergistic information further expands complexity measures in neuroscience, highlighting the importance of higher-order interdependencies that provide insights beyond simple pairwise interactions [16,17].
Despite the progress in applying entropy and complexity measures to study brain signals, there is still no comprehensive framework that integrates these concepts across different scales of brain organization. Current studies often focus on specific datasets or isolated levels of analysis, lacking a unified approach that connects single-neuron dynamics with macroscopic brain behavior. This fragmentation limits our ability to understand how the brain processes information, adapts to changing conditions, and generates conscious experiences.
This review aims to synthesize the current research on the applications of entropy and complexity in neuroscience, highlighting how these measures can bridge gaps in our understanding of the brain’s multiscale organization. By reviewing the types of data used, as well as the variety of entropy and complexity indexes applied, we seek to provide a coherent picture of how these tools contribute to our current understanding of brain function. The insights gained from entropy and complexity measures offer the potential to unify findings across different experimental modalities and levels of analysis, thereby advancing the theoretical foundations of this field.
The review is organized as follows: we begin by summarizing the main types of recording techniques and neural signals used in neuroscience. We then introduce various entropy and complexity measures, detailing their mathematical formulations and applications in neuroscience. Finally, we discuss how these measures can be integrated to study brain function across multiple scales and propose future directions for research.

2. Types of Signals in Neuroscience

Since entropy and complexity measures in neuroscience are typically derived from experimental data or computational models and simulations, it is important to understand the types of signals used to capture brain activity. These signals range from discrete to continuous, each offering distinct perspectives on neural dynamics across different spatial and temporal scales. Neuroscientists utilize a variety of recording modalities to probe brain function at multiple levels, from the activity of single neurons to the behavior of large-scale networks including the whole brain [18,19,20,21,22,23]. Below, we provide an overview of some of the key signals employed in the field (see Figure 1).

2.1. Discrete Signals

Action Potentials (Spikes): Neurons transmit information through action potentials, which are all-or-none events representing rapid depolarization followed by repolarization of the neuronal membrane potential. These events can be encoded as binary sequences, with a value of 1 indicating the occurrence of a spike and 0 indicating the absence of a spike, providing a discrete representation of neuronal firing [24].
Spike Trains: Temporal binary sequences that represent the occurrence of action potentials over time. Spike trains are usually recorded from individual neurons or groups of neurons, serving as a fundamental data type for analyzing neuronal activity patterns. They provide valuable insights into how single neurons and populations of neurons encode information [25].
Raster Plots: A graphical representation used to visualize spike trains across multiple neurons or experimental trials. In this type of plot, each row represents a neuron or trial, while each point indicates the timing of an action potential. This representation of spiking activity enables researchers to identify patterns of neuronal firing across different time intervals and experimental conditions [26].
Symbolic Sequences of States: A series of discrete symbols or states that represent a sequence of events or values over discrete time. Each symbol corresponds to a different state or category, and transitions between symbols capture changes in state over time. This approach is often employed to characterize transitions between discrete states of neurons or the entire brain and to simplify and analyze complex continuous data by converting them into a finite set of discrete states. In neuroscience, symbolic sequences of states are utilized to represent neuronal activity patterns [27] or brain states [10,28], which evolve over time.
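To make these discrete representations concrete, the following minimal sketch (in Python with NumPy; the spike times, bin width, and recording duration are hypothetical) converts a list of spike times into the kind of binary sequence on which the entropy and complexity measures of Section 3 can operate.
```python
import numpy as np

# Hypothetical spike times (in seconds) recorded from a single neuron
spike_times = np.array([0.012, 0.045, 0.046, 0.113, 0.240, 0.241, 0.305])

bin_width = 0.01                       # 10 ms bins (arbitrary choice for illustration)
duration = 0.4                         # total recording length in seconds
edges = np.arange(0.0, duration + bin_width, bin_width)

# Binary sequence: 1 if at least one spike falls in a bin, 0 otherwise
counts, _ = np.histogram(spike_times, bins=edges)
binary_sequence = (counts > 0).astype(int)
print(binary_sequence)
```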

2.2. Continuous Signals

Continuous signals provide analog data that capture neuronal or brain activity—either directly or indirectly—over time, offering insights into neural dynamics across various spatial and temporal scales. These signals reflect a wide range of neural processes, enabling researchers to investigate the continuous flow of information within the nervous system. By analyzing these signals through the lenses of entropy and complexity, we can gain a deeper understanding of the underlying mechanisms operating at multiple levels of organization (see Figure 2).
Intracellular Recordings: These recordings not only provide exceptional resolution in voltage (sub-millivolt), but also in time (sub-millisecond). By capturing the membrane potential (Vm), particularly subthreshold fluctuations that represent the synaptic inputs from thousands of synapses, intracellular recordings offer a rich continuous signal. This makes them an ideal source for applying computational methods to extract patterns and insights regarding the underlying neuronal and network activity. Recent results using entropy over these signals include the study of sensory processing by mechanoreceptors [29,30]. An insightful review on information theory and its use in neural coding at the single-neuron scale can be found in [31].
Local Field Potentials (LFPs): Continuous electrical signals that reflect the integrated synaptic activity of neuronal populations within a localized area surrounding the recording electrode [32]. LFPs capture slow oscillatory dynamics, including theta, delta, alpha, and gamma rhythms, which are essential for understanding the coordination of neural activity across regions. Recent developments show how to compute LFPs from integrate-and-fire network models, linking the micro- and mesoscopic scales [33,34].
Intracranial Recordings (ECoG): A highly invasive method involving electrodes placed directly on the cortical surface. ECoG captures continuous electrical signals representing the aggregate activity of neuronal populations with greater spatial and temporal resolution than non-invasive methods (like EEG or MEG), enabling precise mapping of neural processes [35].
Electroencephalography (EEG): A non-invasive technique that measures the brain’s electrical activity via electrodes placed on the scalp. EEG recordings provide a continuous signal that represents the collective activity of large populations of neurons. This method is particularly valuable for analyzing brain rhythms, such as alpha, beta, theta, and gamma waves, which are associated with various cognitive and physiological states [36,37].
Magnetoencephalography (MEG): A non-invasive technique that detects the magnetic fields generated by neuronal currents. MEG provides continuous neural signals, similar to EEG, but offers superior spatial resolution for localized brain regions, making it highly useful for studying the temporal and spatial dynamics of brain activity [38,39].

2.3. Imaging-Based Signals

Imaging-based signals refer to data acquired through various imaging techniques that visualize and measure biological processes associated with neuronal activity, brain function, metabolic processes, and structural connectivity. These signals provide insights into both the dynamic and static properties of the brain (see Figure 2).
Calcium Imaging Signals: Fluorescent imaging techniques that monitor calcium influx as an indirect measure of neuronal activity. Since calcium ions are involved in the activation of neurons, these signals provide continuous temporal data on neuronal dynamics in response to stimuli, enabling the visualization of activity across large populations of neurons [40,41].
Voltage-Sensitive Dye Imaging (VSDI): An optical technique for real-time monitoring of electrical activity across large populations of neurons. It relies on applying voltage-sensitive dyes to neural tissue, which fluoresce in response to changes in membrane potential. This technique provides continuous signals representing population activity across large areas of the brain [42].
Functional Ultrasound (fUS): A technique for imaging transient changes in blood volume across the entire brain, offering superior spatiotemporal resolution compared to other functional brain imaging methods [43]. Its ability to capture high-resolution, real-time data makes it particularly suitable for data-driven approaches in computational neuroscience.
Blood-Oxygen-Level-Dependent (BOLD) Signal (fMRI): A hemodynamic signal that reflects changes in blood oxygenation levels, serving as an indirect marker of neuronal activity. The BOLD signal is temporally coarse, capturing fluctuations on the order of seconds, and is widely used to investigate large-scale brain networks and functional connectivity during diverse cognitive states and sensory processes [44,45].
Diffusion Tensor Imaging (DTI): An MRI technique that measures the diffusion of water molecules in tissues, particularly in the brain. It is based on the principle that water diffuses more freely along the direction of white matter fibers than perpendicular to them. By tracking this movement, DTI provides detailed maps of the brain’s white matter pathways, enabling the study of brain connectivity and structural integrity [46,47].
Positron Emission Tomography (PET): A nuclear imaging technique that measures metabolic processes in the body by detecting gamma rays emitted from a radioactive tracer. The tracer, typically a molecule like glucose labeled with a positron-emitting isotope (e.g., fluorine-18), is injected into the body and accumulates in areas with high metabolic activity, such as tumors or active brain regions. When the tracer decays, it emits positrons that collide with electrons, producing gamma rays that are detected by the PET scanner [48].

2.4. Computational Models and Simulations of the Brain

Computational models and simulations of the brain, spanning multiple spatial and temporal scales, serve as powerful generative frameworks for understanding and interpreting data in neuroscience. These models, often inspired by biophysical principles, aim to replicate neural dynamics by incorporating realistic structural and mechanistic features of neurons [49]. They are constructed at different levels of complexity, from the microscopic scale of individual neurons to the macroscopic scale of entire brain regions [49,50].
One prominent approach involves mean-field models, which provide a simplified yet effective representation of collective neuronal dynamics by averaging the behavior of large populations of neurons [51,52]. Such models capture the emergent properties of brain activity, including adaptation and responsiveness to external stimuli [53], enabling the study of large-scale neural interactions without the computational burden of simulating individual neurons in detail. When coupled with the structural connectome—the detailed map of anatomical connections between brain regions—these mean-field models become highly informative tools for investigating the brain’s global dynamics, including network-level interactions and functional connectivity patterns in different brain states [54].
These models have the potential to simulate how disruptions in connectivity or neuronal dynamics might lead to pathological states, offering valuable mechanistic perspectives for understanding neurological disorders and developing therapeutic interventions from the perspective of entropy measures [55,56] or complexity [57,58].

3. Types of Entropy and Complexity Indexes in Neuroscience

Different types of entropy provide diverse insights into neural processes, from quantifying uncertainty to assessing regularity in neural data. In neuroscience, the primary focus is on entropy within the framework of information theory. Entropy, as a concept, spans various fields, each with its own interpretations and applications.

3.1. Entropy

Without attempting to be exhaustive, we provide a taxonomy of the various types of entropy used in neuroscience research as follows:
Shannon Entropy: A measure of the uncertainty or unpredictability associated with a random variable [5]. It quantifies the information contained in a probability distribution and is widely used in information theory [59,60,61]. The more unpredictable or uncertain the outcome of a random variable is, the higher the entropy. The two extremes are the delta function with zero entropy and the uniform distribution with entropy $\log N$, where $N$ is the number of possible outcomes of the random variable.
Given a discrete random variable $X$ with possible outcomes $x_1, x_2, \ldots, x_n$ and corresponding probabilities $P(X = x_i) = p_i$, the Shannon entropy $H(X)$ is defined as
$H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i ,$
where $H(X)$ is measured in bits when $\log_2$ is used and in nats when $\log_e$ (the natural logarithm) is used.
If any $p_i = 0$, the corresponding term in the summation is taken to be zero, as $0 \log_2 0$ is defined to be 0 (using $\lim_{p \to 0^+} p \log p = 0$).
Shannon entropy is particularly useful in neuroscience. Neural activity, such as action potentials (spikes) or brain rhythms (EEG), often carries information regarding sensory inputs, motor commands, and cognitive or consciousness states that can be quantified. Shannon entropy has found wide-ranging applications as an inference procedure in neuroscience, particularly through the lens of the Jaynes maximum entropy principle [60]. The core idea of the maximum entropy principle is to select the probability distribution that maximizes Shannon entropy, subject to known constraints—such as empirical averages or moments of observables—without making any additional assumptions about the system. Maximum entropy models have been used to infer the most likely distribution of neural firing patterns based on constraints such as average firing rates and pairwise correlations between neurons [62], triplet and higher-order correlations [63], and time-dependent correlations [64,65]. At the macroscopic level, the maximum entropy principle has been used to characterize resting-state human brain networks using fMRI data [66], to explore the energy landscape in brain network structure [67], and to study collective brain activity during wakefulness and anesthesia [68].
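As a minimal numerical sketch of the definition above (Python with NumPy; the probability distributions over firing patterns are hypothetical), Shannon entropy in bits can be computed as follows.
```python
import numpy as np

def shannon_entropy(p, base=2.0):
    """Shannon entropy of a discrete distribution p (terms with p_i = 0 contribute 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                   # 0 * log(0) is taken to be 0
    return -np.sum(p * np.log(p)) / np.log(base)

# Hypothetical distribution over four firing patterns
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits

# The two extremes mentioned above: delta distribution and uniform distribution
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))       # 0 bits
print(shannon_entropy([0.25] * 4))                 # log2(4) = 2 bits
```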
Sample Entropy (SE): A measure of the complexity of a time series [69]. It assesses the probability that two sequences of data points that are similar over a given number of points remain similar when one more point is added.
Given a time series $\{x_1, x_2, \ldots, x_N\}$, the SE is defined as
$SE(m, r, N) = -\ln \frac{A(m, r, N)}{B(m, r, N)} ,$
where m is the embedding dimension (length of the sequences to be compared), r is the tolerance for accepting matches, typically a percentage of the standard deviation of the time series, N is the length of the time series, A is the number of pairs of sequences of length m + 1 that are within the tolerance r, and B is the number of pairs of sequences of length m that are within the tolerance r.
In neuroscience, sample entropy has proven to be a highly effective metric for quantifying the complexity of neural signals, including spikes, local field potentials (LFPs), and electroencephalography (EEG) recordings, particularly in the analysis of anesthesia depth [70,71]. Furthermore, it has been applied to MRI data to investigate neurodegenerative processes such as Alzheimer’s disease and the effects of aging [72] and to characterize functional complexity of fMRI human brain signals under propofol anesthesia [73].
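The following is a brute-force sketch of the definition above (Python with NumPy; the Chebyshev distance between templates and the default tolerance of 0.2 times the standard deviation are common conventions, and the test signals are synthetic).
```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SE(m, r, N) of a 1-D time series x (unoptimized, O(N^2))."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)                       # common convention for the tolerance

    def count_matches(length):
        # Overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all templates, excluding the self-match
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1
        return count

    B = count_matches(m)                          # matching template pairs of length m
    A = count_matches(m + 1)                      # matching template pairs of length m + 1
    return -np.log(A / B)

rng = np.random.default_rng(0)
print(sample_entropy(rng.standard_normal(1000)))                  # irregular signal: higher SE
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))   # regular signal: lower SE
```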
Multiscale Entropy (MSE): A method to quantify the complexity of time-series data across multiple temporal scales. It extends the concept of entropy by examining how signal complexity changes when the data are viewed at different scales, which is important in understanding the dynamics of neural signals [74].
Given a time series $\{x_1, x_2, \ldots, x_N\}$, for each scale factor $\tau$, the coarse-grained time series $y^{(\tau)}$ is obtained by averaging the data points over non-overlapping windows of length $\tau$:
$y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau + 1}^{j\tau} x_i , \quad j = 1, 2, \ldots, N/\tau ,$
where $N/\tau$ is the length of the new coarse-grained time series.
MSE is computed as follows: for each coarse-grained time series $y^{(\tau)}$, compute the SE, or another entropy measure, to quantify the regularity or complexity at that scale,
$S(\tau) = SE\big(y^{(\tau)}, m, r\big).$
The process is repeated for multiple scales $\tau = 1, 2, \ldots, \tau_{\max}$, resulting in a set of entropy values $S(\tau)$ for different temporal scales.
Recently, multiscale entropy has been successfully employed to investigate retinal data in a mouse model of Alzheimer’s disease [75], as well as to analyze the complexity of brain activity in individuals with attention-deficit/hyperactivity disorder (ADHD) [76], and to study the human sleep cycle [77].
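Reusing the sample_entropy function from the sketch above, the coarse-graining step of MSE can be illustrated as follows (the scale range and parameters are arbitrary choices, and the tolerance r is kept fixed across scales, which is one common convention).
```python
import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau (the coarse-graining step of MSE)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def multiscale_entropy(x, m=2, r=None, max_scale=10):
    """Sample entropy of the coarse-grained series for tau = 1, ..., max_scale."""
    if r is None:
        r = 0.2 * np.std(x)                       # fix the tolerance on the original series
    # sample_entropy is the function defined in the SE sketch above
    return [sample_entropy(coarse_grain(x, tau), m=m, r=r)
            for tau in range(1, max_scale + 1)]

rng = np.random.default_rng(1)
print(multiscale_entropy(rng.standard_normal(2000), max_scale=5))
```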
$\epsilon$-entropy: A generalization of the Kolmogorov–Sinai entropy rate [78], which is defined for a finite scale $\epsilon$ and time delay $\tau$ by
$h(\epsilon, \tau) = \lim_{m \to \infty} h_m(\epsilon, \tau) = \frac{1}{\tau} \lim_{m \to \infty} \frac{1}{m} H_m(\epsilon, \tau) ,$
with $h_m(\epsilon, \tau) = \frac{1}{\tau} \left[ H_{m+1}(\epsilon, \tau) - H_m(\epsilon, \tau) \right]$. Here, $H_m(\epsilon, \tau)$ represents the entropy calculated using a box partition of the phase space, where the box size is specified by $\epsilon$, and the attractor is reconstructed with a time delay $\tau$ and embedding dimension $m$. For deterministic low-dimensional dynamics, this measure tends to plateau at specific scales. As such, it serves as a useful tool for characterizing large-scale dynamics while filtering out small-scale noise, and it can also aid in distinguishing chaotic behavior from stochastic noise under certain conditions [79]. In the context of neuroscience, $\epsilon$-entropy has been used to study brain dynamics at multiple scales, and to show how one can reconcile the low-dimensional chaos of macroscopic variables (EEG) with the stochastic behavior of single neurons [80].
Transfer Entropy (TE): An information-theoretic measure used to quantify the directional transfer of information between two time series. It provides a way to capture the dynamic dependencies and causality between time-series signals, making it particularly useful for analyzing neural communication in the brain [81]. TE is often used as an alternative to correlation or mutual information as it specifically accounts for the influence of the past state of one variable on the future state of another variable, thus identifying causal relationships.
Given two time series $X_t$ and $Y_t$, the TE from $X$ to $Y$ (denoted $T_{X \to Y}$) is defined as
$T_{X \to Y} = \sum_{y_{t+1}, y_t, x_t} p(y_{t+1}, y_t, x_t) \log \frac{p(y_{t+1} \mid y_t, x_t)}{p(y_{t+1} \mid y_t)} ,$
where $p(y_{t+1}, y_t, x_t)$ is the joint probability distribution of the future state $y_{t+1}$, the current state $y_t$, and the current state $x_t$ of $X_t$; $p(y_{t+1} \mid y_t, x_t)$ is the conditional probability of the future state of $Y_t$ given both its past state and the current state of $X_t$; and $p(y_{t+1} \mid y_t)$ is the conditional probability of the future state of $Y_t$ given only its own past state. TE quantifies the reduction in uncertainty regarding $Y$'s future obtained by knowing the past of $X$, beyond what is already known from $Y$'s past.
In neuroscience, TE can be used to map and analyze the directional flow of information between different brain regions, helping to understand how neural networks process and transmit information, to identify effective connectivity between brain regions, which is essential for understanding the underlying neural mechanisms [82,83], and to perform directed network inference [84].
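A minimal plug-in sketch of the formula above for discrete (e.g., binarized) signals: the probabilities of the $(y_{t+1}, y_t, x_t)$ triplets are estimated by counting, and the coupled test sequences are synthetic.
```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y), in bits, for two discrete-valued sequences."""
    x, y = np.asarray(x), np.asarray(y)
    triplets = list(zip(y[1:], y[:-1], x[:-1]))      # (y_{t+1}, y_t, x_t)
    n = len(triplets)
    n_abc = Counter(triplets)                        # counts of (y_{t+1}, y_t, x_t)
    n_ab = Counter((a, b) for a, b, _ in triplets)   # counts of (y_{t+1}, y_t)
    n_bc = Counter((b, c) for _, b, c in triplets)   # counts of (y_t, x_t)
    n_b = Counter(b for _, b, _ in triplets)         # counts of y_t

    te = 0.0
    for (a, b, c), k in n_abc.items():
        p_joint = k / n                              # p(y_{t+1}, y_t, x_t)
        p_given_both = k / n_bc[(b, c)]              # p(y_{t+1} | y_t, x_t)
        p_given_own = n_ab[(a, b)] / n_b[b]          # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_given_both / p_given_own)
    return te

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
y[1:] = x[:-1]                                       # Y copies X with a one-step delay
print(transfer_entropy(x, y))                        # close to 1 bit: X drives Y
print(transfer_entropy(y, x))                        # close to 0 bits: no flow from Y to X
```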
Entropy Production: A quantity linked to temporal irreversibility in a system. In a system exhibiting temporal irreversibility, the forward trajectory is distinct from the reversed trajectory, leading to positive entropy production. This asymmetry indicates the inherent directionality of natural processes (arrow of time). The greater the entropy production, the more pronounced the temporal irreversibility of the process [85,86,87].
The entropy production, denoted $\Phi$, can be defined in the context of information theory and statistical mechanics. For discrete Markov processes, it is often computed as follows:
$\Phi = \frac{1}{2} \sum_{i,j} \left( p_i P_{ij} - p_j P_{ji} \right) \ln \frac{p_i P_{ij}}{p_j P_{ji}} ,$
where $p_i$ and $p_j$ represent the invariant probability of the system being in states $i$ and $j$, respectively, and $P_{ij}$, $P_{ji}$ the Markov transition rates from state $i$ to state $j$ and from state $j$ to state $i$. The term $p_i P_{ij} - p_j P_{ji}$ represents the detailed balance difference, quantifying the net flow between states $i$ and $j$.
In neuroscience, entropy production has been employed to investigate brain dynamics and to gain insights into how neural systems transition between different states under varying conditions and at different scales. In spiking neuronal networks, entropy production has been quantified using maximum-entropy Markov chains [88] and applied to characterize non-equilibrium steady states [89]. At the whole-brain level, entropy production has been used to characterize brain dynamics across different cognitive tasks [90] and to distinguish levels of consciousness in the human brain [91,92].
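A minimal sketch of the formula above for a discrete Markov chain (Python with NumPy; the transition matrices are illustrative, with a reversible chain giving essentially zero entropy production and a cyclic, irreversible chain giving a positive value).
```python
import numpy as np

def entropy_production(P):
    """Entropy production of a discrete Markov chain with transition matrix P."""
    # Invariant distribution: left eigenvector of P associated with eigenvalue 1
    w, v = np.linalg.eig(P.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1))])
    p = p / p.sum()

    phi = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            flux_ij = p[i] * P[i, j]
            flux_ji = p[j] * P[j, i]
            if flux_ij > 0 and flux_ji > 0:
                phi += 0.5 * (flux_ij - flux_ji) * np.log(flux_ij / flux_ji)
    return phi

P_reversible = np.array([[0.9, 0.1],
                         [0.1, 0.9]])               # detailed balance holds
P_cycle = np.array([[0.1, 0.8, 0.1],
                    [0.1, 0.1, 0.8],
                    [0.8, 0.1, 0.1]])               # preferred cycle 1 -> 2 -> 3 -> 1
print(entropy_production(P_reversible))             # approximately 0
print(entropy_production(P_cycle))                  # positive: irreversible dynamics
```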

3.2. Complexity in Brain Dynamics

Complexity metrics, beyond entropies, help to capture emergent behavior. Here, we describe various complexity measures and their relevance in neuroscience.
Lempel–Ziv Complexity (LZC): A measure used to quantify the complexity or unpredictability of a sequence. It is based on the principles of the Lempel–Ziv compression algorithms, which are designed for lossless data compression. LZC evaluates how compressible a sequence is by counting the number of distinct substrings or phrases required to represent the sequence [93,94].
Given a sequence $S$ of length $N$, the Lempel–Ziv complexity $C_{LZ}(S)$ is computed by first decomposing the sequence $S$ into a set of phrases or substrings such that each substring is either new or a repeat of an already observed substring. Then, the complexity is quantified by counting the number of distinct phrases or substrings needed to reconstruct the sequence.
Mathematically, if $D(S)$ represents the number of distinct phrases required to represent the sequence $S$, then the Lempel–Ziv complexity is provided by
$C_{LZ}(S) = \frac{D(S)}{N} .$
This formula provides a normalized measure of complexity, reflecting the proportion of distinct phrases relative to the total length of the sequence. Sequences with low LZC values are more repetitive and compressible. For example, sequences with many repeating patterns or regular structures will exhibit low complexity. Sequences with high LZC values are less compressible and exhibit a higher degree of unpredictability.
LZC is particularly useful for analyzing various types of neuroscientific data, providing insights into their underlying complexity and structure. Among the many applications in neuroscience, it has been used as an alternative entropy estimator for binned spike trains [95], for fMRI data of propofol anesthesia of spontaneous brain activity in rats [96], in EEG recordings to discriminate sleep and wakefulness [97], and in MEG data to analyze the effects of external stimulation on psychedelic state neurodynamics [98].
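A minimal sketch of the parsing idea behind $C_{LZ}(S)$ (Python only; this is a simplified dictionary-based parsing in the spirit of the Lempel–Ziv algorithms rather than the exact production count of [93], and the binary sequences are synthetic).
```python
import numpy as np

def lempel_ziv_phrases(sequence):
    """Count distinct phrases in a simple dictionary-based parsing of a string."""
    phrases = set()
    phrase = ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:                   # a new phrase is completed
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)      # count any trailing incomplete phrase

rng = np.random.default_rng(3)
regular = "01" * 500                                        # highly repetitive sequence
irregular = "".join(map(str, rng.integers(0, 2, 1000)))     # random binary sequence
for s in (regular, irregular):
    d = lempel_ziv_phrases(s)
    print(d, d / len(s))            # phrase count D(S) and normalized C_LZ(S) = D(S)/N
```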
Perturbational Complexity Index (PCI): A measure used to quantify the complexity of brain responses to external perturbations, such as Transcranial Magnetic Stimulation (TMS), and it is often applied in the study of consciousness [15,99,100], as well as in models [57]. The index reflects the diversity and complexity of spatiotemporal patterns that result from brain activity. Low PCI indicates a stereotypical simple brain response, often observed in unconscious states such as general anesthesia, coma, or deep sleep. In contrast, high PCI indicates a complex, varied brain response, generally observed in conscious and awake states (see Figure 3 for illustration).

3.3. Steps to Compute PCI

  • Perturbation: Apply external stimulation to a brain region.
  • Recording: Measure the resulting brain activity.
  • Spatiotemporal Analysis: Analyze the recorded data to extract binary spatiotemporal patterns.
  • Compression: Apply a compression algorithm (such as Lempel–Ziv complexity) to the spatiotemporal patterns.
  • Normalization: Normalize the compressibility score to obtain the PCI.
Let B represent the binary spatiotemporal pattern derived from the recordings. The Perturbational Complexity Index (PCI) can be defined as
$\mathrm{PCI} = \frac{C(B)}{\max(C)} ,$
where $C(B)$ is the Lempel–Ziv complexity (or another compression measure) of the spatiotemporal pattern $B$, and $\max(C)$ is a normalization factor representing the maximum possible complexity of the system’s response, ensuring PCI values range between 0 and 1.
One of the most impactful clinical applications of PCI is in the evaluation of patients with disorders of consciousness (DOCs). Patients in a coma typically exhibit low PCI values, reflecting the brain’s inability to generate complex, integrated responses. PCI has been used to distinguish between Unresponsive Wakefulness Syndrome (UWS) and the minimally conscious state (MCS). Patients with UWS usually have low PCI values, while minimally conscious patients often show slightly higher complexity, indicating some preserved capacity for integrated brain function. PCI can also help to detect locked-in patients (who are fully conscious but unable to move), as they typically have PCI values similar to those of conscious individuals.
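A deliberately simplified sketch of the compression and normalization steps above, reusing the lempel_ziv_phrases function from the LZC sketch. It assumes a binarized channels-by-time response matrix has already been obtained and approximates $\max(C)$ by the complexity of a shuffled surrogate; the published PCI uses source-level statistics and a different asymptotic normalization [15], so this is illustrative only.
```python
import numpy as np

def pci_sketch(binary_pattern, rng):
    """Toy PCI: LZ phrase count of a binary channels x time matrix, normalized by a shuffled surrogate."""
    flat = "".join(map(str, binary_pattern.astype(int).ravel()))
    c_response = lempel_ziv_phrases(flat)               # from the LZC sketch above
    surrogate = "".join(rng.permutation(list(flat)))    # shuffling destroys spatiotemporal structure
    return c_response / lempel_ziv_phrases(surrogate)

rng = np.random.default_rng(4)
# Hypothetical binarized significant responses: 20 channels x 300 time points
stereotyped = np.zeros((20, 300), dtype=int)
stereotyped[:, 50:80] = 1                                # identical simple response on every channel
differentiated = (rng.random((20, 300)) < 0.2).astype(int)   # spatially and temporally varied response
print(pci_sketch(stereotyped, rng))                      # low: stereotyped response
print(pci_sketch(differentiated, rng))                   # closer to 1: differentiated response
```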
Neural Complexity (Tononi’s Complexity): Introduced by Giulio Tononi and colleagues [14], it is a measure that quantifies the balance between integration and differentiation in a neural system, but, unlike PCI, this measure does not require any external perturbation of the brain. Neural complexity reflects how well the components of a system can interact while maintaining their functional specialization [101].
Consider a bipartition of the system $X$ into a $j$-th subset $X_j^k$, composed of $k$ components, and its complement $X - X_j^k$. The mutual information (MI) between $X_j^k$ and $X - X_j^k$ is
$MI(X_j^k; X - X_j^k) = H(X_j^k) + H(X - X_j^k) - H(X) ,$
where $H(X_j^k)$ and $H(X - X_j^k)$ are the entropies of $X_j^k$ and $X - X_j^k$ considered independently, and $H(X)$ is the entropy of the system considered as a whole. Mutual information $MI$ is 0 if $X_j^k$ and $X - X_j^k$ are statistically independent and $MI > 0$ otherwise.
The neural complexity $NC(X)$ is defined as the average of mutual information over all bipartitions of the system,
$NC(X) = \sum_{k=1}^{n/2} \left\langle MI(X_j^k; X - X_j^k) \right\rangle_j ,$
where $\langle \cdot \rangle_j$ denotes the average over all subsets $X_j^k$ of size $k$.
Neural complexity is high when a system shows rich interactions between its parts and diverse specialized activities within those parts. This measure is particularly relevant to studies of consciousness and brain organization, where both high integration (coordinated activity across the brain) and high differentiation (specialized processing in different regions) are important features [102].
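Evaluating $NC(X)$ requires the entropies of all subsets; for continuous signals a common shortcut is to assume the variables are jointly Gaussian, so every entropy follows from a covariance log-determinant. The sketch below (Python with NumPy; the covariance matrix is synthetic) implements the bipartition average under that assumption, which is only an approximation for real neural data.
```python
import numpy as np
from itertools import combinations

def gaussian_entropy(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance cov."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def neural_complexity(cov):
    """Sum over subset sizes of the average mutual information across bipartitions."""
    n = cov.shape[0]
    h_total = gaussian_entropy(cov)
    nc = 0.0
    for k in range(1, n // 2 + 1):
        mi_values = []
        for subset in combinations(range(n), k):
            rest = [i for i in range(n) if i not in subset]
            mi = (gaussian_entropy(cov[np.ix_(subset, subset)])
                  + gaussian_entropy(cov[np.ix_(rest, rest)])
                  - h_total)
            mi_values.append(mi)
        nc += np.mean(mi_values)
    return nc

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
cov = A @ A.T + 6 * np.eye(6)              # synthetic covariance of six correlated signals
print(neural_complexity(cov))              # positive: integrated yet differentiated system
print(neural_complexity(np.eye(6)))        # zero: fully independent components
```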
High-order Interdependencies: Measures that capture complex multi-variable relationships extending beyond pairwise interactions [17]. Traditional graph representations, where edges represent pairwise interactions between nodes, are insufficient to fully describe high-order interactions. These more intricate dependencies, involving three or more variables at once, are better modeled by hypergraphs or simplicial complexes, which are mathematical tools capable of representing multi-variable interactions [103].
Let $X_1, X_2, \ldots, X_n$ be a set of random variables with joint probability distribution $P(X_1, X_2, \ldots, X_n)$. The Total Correlation (TC) [104] is provided by
$TC(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} H(X_i) - H(X_1, X_2, \ldots, X_n) ,$
where $H(X_i)$ is the Shannon entropy of the $i$-th random variable, and $H(X_1, X_2, \ldots, X_n)$ is the joint entropy of all the variables. Alternatively, it can be written as the Kullback–Leibler divergence $D_{KL}$ between the joint distribution and the product of the marginal distributions,
$TC(X_1, X_2, \ldots, X_n) = D_{KL}\big( P(X_1, X_2, \ldots, X_n) \,\|\, P(X_1) P(X_2) \cdots P(X_n) \big) .$
The Dual Total Correlation (DTC) [105] is provided by
$DTC(X_1, X_2, \ldots, X_n) = H(X_1, X_2, \ldots, X_n) - \sum_{i=1}^{n} H(X_i \mid X_{-i}) ,$
where $H(X_i \mid X_{-i})$ is the conditional entropy of $X_i$ given all other variables $X_{-i}$, defined as
$H(X_i \mid X_{-i}) = H(X_1, X_2, \ldots, X_n) - H(X_{-i}) .$
The O-Information [106] (denoted $\Omega$) is defined as the difference between these two quantities,
$\Omega(X_1, X_2, \ldots, X_n) = TC(X_1, X_2, \ldots, X_n) - DTC(X_1, X_2, \ldots, X_n) = (n-2)\, H(X_1, X_2, \ldots, X_n) + \sum_{i=1}^{n} \left[ H(X_i) - H(X_{-i}) \right] .$
If $\Omega > 0$, the system is redundancy-dominated; otherwise, if $\Omega < 0$, the system is synergy-dominated. Recent applications of these concepts in the context of neuroscience include a study of high-order interdependencies in the aging brain [107], neurodegeneration [108], cognitive state and behavior in the macaque cerebral cortex [109], and a study of the synergistic workspace for human consciousness [110].
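Under the same joint-Gaussian assumption used in the neural complexity sketch, TC, DTC, and the O-information can be estimated directly from a covariance matrix. The sketch below uses synthetic, redundancy-dominated signals (four noisy copies of one shared source), for which $\Omega$ is expected to be positive.
```python
import numpy as np

def h_gauss(cov):
    """Differential entropy (nats) of a multivariate Gaussian with covariance cov."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def o_information(cov):
    """Gaussian estimates of TC, DTC, and O-information from a covariance matrix."""
    n = cov.shape[0]
    h_joint = h_gauss(cov)
    h_marginals = sum(h_gauss(cov[i, i]) for i in range(n))
    # H(X_i | X_-i) = H(X) - H(X_-i)
    h_conditionals = sum(h_joint - h_gauss(np.delete(np.delete(cov, i, 0), i, 1))
                         for i in range(n))
    tc = h_marginals - h_joint
    dtc = h_joint - h_conditionals
    return tc, dtc, tc - dtc                   # Omega = TC - DTC

rng = np.random.default_rng(6)
shared = rng.standard_normal(5000)             # one common source
X = np.vstack([shared + 0.3 * rng.standard_normal(5000) for _ in range(4)])
tc, dtc, omega = o_information(np.cov(X))
print(tc, dtc, omega)                          # omega > 0: redundancy-dominated system
```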

4. Challenges and Limitations

While entropy and complexity approaches offer powerful insights into brain function, they also present several challenges and limitations in neuroscience research. These issues are methodological and related to data quality and quantity, interpretation, and the generalization of findings.

4.1. Data Quality and Arbitrary Preprocessing

Neuronal recordings are often noisy and prone to artifacts (e.g., head movement, electrical interference, environmental disturbances, etc.). Because entropy and complexity measures are directly computed from data, they are sensitive to signal quality. Artifacts or noise in data can lead to spurious entropy or complexity values. In particular, entropy measures may incorrectly increase in the presence of noise, leading to an overestimation of disorder or unpredictability that does not reflect an intrinsic property of the neuronal populations. A common remedy is to filter the data to remove noise, but filtering introduces arbitrary choices of its own. Signal filtering and detrending can significantly influence the outcomes of entropy and complexity analyses, sometimes leading to non-replicable results [111].
Effective preprocessing steps, such as artifact removal, normalization, filtering, and signal alignment, are essential for extracting meaningful information. However, improper handling of these steps can introduce biases, distort the underlying neural signals, and lead to spurious conclusions when computing entropy and complexity measures. For example, in electrophysiological experiments, spike sorting and baseline correction can significantly impact the results of data analysis [112]; in neuroimaging, choices related to motion correction or spatial smoothing can drastically affect brain activity patterns [113,114].

4.2. Choosing Appropriate Parameters

Some entropy and complexity measures require specific tuning of parameters. Sample entropy and multiscale entropy are highly dependent on user-defined parameters (embedding dimension, tolerance, time lag, or scale factor), and choosing these parameters can be non-trivial. Entropy computed from symbolic sequences depends on the choice of thresholds for each symbol, and the number of symbols is usually arbitrary [115]. Arbitrary or improper selection of parameters can lead to misleading results, making cross-study comparisons difficult. There is often no clear or universally accepted method for selecting these parameters, and optimal values can vary depending on the nature of the data (e.g., different recording modalities, recording sites, and recording lengths).

4.3. Interpretation of Results

While entropy and complexity measures provide quantitative insights, their biological interpretation remains challenging. High entropy may suggest randomness or flexibility; understanding what this means in terms of brain function or behavior requires careful examination [116]. The interpretation of increased or decreased entropy/complexity depends heavily on the scale and neurobiological context. For example, increased entropy might reflect pathological dysfunction (e.g., in epilepsy [117]) or healthy adaptive variability (e.g., during cognitive tasks [118] or psychedelic states [119,120]). Without a clear framework, it is difficult to assign biological meaning to these changes in entropy.

4.4. Experimental and Computational Implementation

Entropy and complexity measures, especially when applied at multiple scales, can be computationally expensive and time-consuming, requiring elaborate experimental setups and sophisticated offline computations, often necessitating parallel computing or advanced optimization techniques [15,62]. This drawback makes these measures difficult to apply in the clinical context, where rapid and accurate results are needed. Another example is calculating higher-order interdependencies, which presents a significant challenge due to the combinatorial explosion that occurs as the number of variables increases. This exponential growth in the number of possible interactions makes it increasingly difficult to evaluate every potential interdependency, requiring substantial computational resources or alternative computations that bypass the complete exploration of interactions.

4.5. Non-Stationarity of Brain Signals

Neural data are often non-stationary, meaning that the statistical properties of the signals (e.g., mean and variance) change over time. This presents a significant challenge for entropy and complexity approaches, which often assume stationarity, making it difficult to determine whether changes in these metrics reflect genuine neural variability or artifacts of non-stationarity [121,122]. While techniques such as windowing or adaptive measures exist to address non-stationarity, they add another layer of computation and introduce potential biases in the results.

4.6. Comparability Across Studies and Modalities

Different brain imaging modalities (e.g., EEG, fMRI, and MEG) operate at distinct temporal and spatial scales, and the interpretation of entropy and complexity measures may differ depending on the recording modalities. Comparing results across different studies or modalities is challenging. For example, entropy or complexity measures calculated from EEG (which has high temporal but low spatial resolution) may not be directly comparable to those derived from fMRI (which has high spatial but low temporal resolution). Another issue mentioned before that impacts comparability across studies and modalities is the absence of standardized approaches to calculate and report entropy and complexity measures. Even subtle differences in preprocessing or parameter choices can result in divergent conclusions, complicating meta-analyses or replication efforts [113].

4.7. Pathological vs. Healthy Brain States

Although entropy and complexity measures are useful for distinguishing between healthy and pathological states, such as in epilepsy [117] or neurodegenerative diseases [123], the distinction is not always rigorously defined. Reduced entropy is often associated with neurodegenerative disorders, but this relationship may not be linear or consistent across all brain regions or stages of disease. The challenge is understanding when entropy changes are pathological and when they reflect adaptive processes. In some pathological states, increased entropy may represent compensatory neural mechanisms rather than dysfunction. Entropy also increases during psychedelic experiences in both healthy and pathological populations, reflecting heightened neural complexity and variability, which adds further complexity to the interpretation [124].

4.8. Statistical Challenges and False Positives

Some entropy measures may not perform well with small datasets or short time series. Most complexity measures require long and continuous recordings to provide stable results, limiting their use in some experimental paradigms with limited time resolutions like fMRI [125]. When applying entropy measures across multiple brain regions, time points, or scales, the risk of false positives increases due to multiple comparisons. Proper statistical corrections (e.g., Bonferroni or FDR) are necessary but can sometimes obscure meaningful findings [126,127].

5. Discussion

In recent years, significant advancements have been achieved in neuroimaging techniques and electrophysiological recordings, enhancing both their quantity and quality across diverse spatial and temporal scales [128]. Despite this important progress, neuroscience remains without a unified theoretical framework, leaving the field in a “data-rich, theory-poor” state [129]. This disparity between abundant data and the lack of comprehensive theories underscores the need for data-driven approaches to extract meaningful insights. Such approaches enable the discovery of patterns and relationships that might otherwise remain hidden.
In this review, we have highlighted examples of the practical applications of entropy and complexity metrics in neuroscience. The clinical implications of these approaches are extensive and promising. Recent studies demonstrate that the entropy of brain signals can differentiate states of consciousness in neurological and psychiatric disorders [28,130,131,132]. These findings suggest that entropy-based metrics could serve as sensitive biomarkers for detecting consciousness, supporting the hypothesis that neural complexity may be a fundamental aspect of human consciousness [73,102]. One of the most widely validated complexity measures with clinical utility is the Perturbational Complexity Index (PCI) [15,100], which has emerged as an objective marker of consciousness (see [57] for models). The PCI provides clinicians with a valuable tool for assessing awareness levels in non-communicative patients, especially when behavioral assessments are unreliable or restricted. Entropy has also been used to model how psychedelics induce altered consciousness. The Entropic Brain Hypothesis, proposed by Robin Carhart-Harris and colleagues [119,120], suggests that different states of consciousness correspond to varying levels of entropy, or disorder, in brain activity. Psychedelics like psilocybin and LSD are believed to increase brain entropy. This entropy shift may underlie the profound subjective experiences reported during psychedelic states, such as expanded consciousness and a sense of interconnectedness, offering insights into how psychedelics affect brain function at a fundamental level. This high-entropy brain activity may enable altered perceptions, enhanced creativity, and novel cognitive insights. Therapeutically, such high-entropy states may help individuals to break free from rigid thought patterns, offering alternatives for treating conditions like treatment-resistant depression or PTSD [133].
Additionally, we provide the reader with a critical perspective on using complexity metrics. While entropy and complexity metrics are valuable tools for exploring brain dynamics, their application requires careful consideration as their effectiveness depends on the methodological rigor of analytical workflows. These metrics, while flexible, are sensitive to factors such as signal noise, data quality, and preprocessing choices, which can significantly impact the results. As such, careful experimental design and rigorous statistical controls are essential to ensure that entropy and complexity indexes reflect genuine aspects of brain activity rather than artifacts. Furthermore, standardizing methodologies and creating transparent, reproducible data-processing pipelines are critical as neuroscience increasingly embraces large-scale datasets and sophisticated computational methods. Such rigor is fundamental for facilitating meaningful comparisons across studies and building a cumulative understanding of neural complexity.
The future of entropy and complexity metrics in neuroscience holds immense potential. As recording technologies continue to improve and computational power becomes even more accessible, these tools will likely play an increasingly central role in decoding the brain’s intricate dynamics. The convergence of vast data resources (“data-rich”) with increased computational power offers an unprecedented opportunity to advance neuroscience, as evidenced by some recently awarded Nobel prizes [134,135].
This novel scenario may open new avenues for clinical applications, including personalized diagnostics and treatments for neurological and psychiatric disorders driven by better experimental datasets and data generated by sophisticated brain simulations [136,137]. Additionally, the development of more sophisticated models for understanding brain function and consciousness, informed by these metrics, may lead to groundbreaking discoveries that bridge the gap between experimental data and theoretical developments. We hope that, with continued progress, these tools will not only deepen our understanding of brain function but also revolutionize the way we diagnose, monitor, and treat brain-related conditions.

Funding

Research supported by CNRS, the European Union (Human Brain Project H2020-945539; Virtual Brain Twin project 101137289), and the ANR FLAG-ERA program (BrainAct project).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

We thank our lab colleagues for stimulating discussions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LFP     Local field potential
ECoG    Electrocorticography
EEG     Electroencephalography
fMRI    Functional magnetic resonance imaging
fUS     Functional ultrasound
PCI     Perturbational Complexity Index
MSE     Multiscale Entropy
LZC     Lempel–Ziv Complexity
KL      Kullback–Leibler
MEP     Maximum Entropy Principle
MEA     Multi-Electrode Array
SE      Sample Entropy
TE      Transfer Entropy
DTC     Dual Total Correlation
TC      Total Correlation
UWS     Unresponsive Wakefulness Syndrome
MCS     Minimally Conscious State
Symbol List
$H(X)$: entropy of the random variable $X$;
$p_i$: probability of outcome $x_i$;
$T_{X \to Y}$: Transfer Entropy from $X$ to $Y$;
$\Phi$: entropy production;
$NC(X)$: neural complexity;
$C_{LZ}(S)$: Lempel–Ziv complexity of the sequence $S$;
$\Omega$: O-Information.

References

  1. Bialek, W. Biophysics: Searching for Principles; Princeton University Press: Princeton, NJ, USA, 2012. [Google Scholar]
  2. Fagerholm, E.D.; Dezhina, Z.; Moran, R.J.; Turkheimer, F.E.; Leech, R. A primer on entropy in neuroscience. Neurosci. Biobehav. Rev. 2023, 146, 105070. [Google Scholar] [CrossRef] [PubMed]
  3. Keshmiri, S. Entropy and the brain: An overview. Entropy 2020, 22, 917. [Google Scholar] [CrossRef] [PubMed]
  4. Müller, I. A History of Thermodynamics: The Doctrine of Energy and Entropy; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  5. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  6. Strong, S.P.; Koberle, R.; Van Steveninck, R.R.D.R.; Bialek, W. Entropy and information in neural spike trains. Phys. Rev. Lett. 1998, 80, 197. [Google Scholar] [CrossRef]
  7. Nemenman, I.; Bialek, W.; de Ruyter van Steveninck, R. Entropy and information in neural spike trains: Progress on the sampling problem. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2004, 69, 056111. [Google Scholar] [CrossRef]
  8. Inouye, T.; Shinosaki, K.; Sakamoto, H.; Toi, S.; Ukai, S.; Iyama, A.; Katsuda, Y.; Hirano, M. Quantification of EEG irregularity by use of the entropy of the power spectrum. Electroencephalogr. Clin. Neurophysiol. 1991, 79, 204–210. [Google Scholar] [CrossRef]
  9. Abásolo, D.; Hornero, R.; Espino, P.; Alvarez, D.; Poza, J. Entropy analysis of the EEG background activity in Alzheimer’s disease patients. Physiol. Meas. 2006, 27, 241. [Google Scholar] [CrossRef]
  10. Demertzi, A.; Tagliazucchi, E.; Dehaene, S.; Deco, G.; Barttfeld, P.; Raimondo, F.; Martial, C.; Fernández-Espejo, D.; Rohaut, B.; Voss, H.; et al. Human consciousness is supported by dynamic complex patterns of brain signal coordination. Sci. Adv. 2019, 5, eaat7603. [Google Scholar] [CrossRef]
  11. Barttfeld, P.; Uhrig, L.; Sitt, J.D.; Sigman, M.; Jarraya, B.; Dehaene, S. Signature of consciousness in the dynamics of resting-state brain activity. Proc. Natl. Acad. Sci. USA 2015, 112, 887–892. [Google Scholar] [CrossRef]
  12. Mitchell, M. Complexity: A Guided Tour; Oxford University Press: New York, NY, USA, 2009. [Google Scholar]
  13. Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory 1978, 24, 530–536. [Google Scholar] [CrossRef]
  14. Tononi, G.; Sporns, O.; Edelman, G.M. A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA 1994, 91, 5033–5037. [Google Scholar] [CrossRef] [PubMed]
  15. Casali, A.G.; Gosseries, O.; Rosanova, M.; Boly, M.; Sarasso, S.; Casali, K.R.; Casarotto, S.; Bruno, M.A.; Laureys, S.; Tononi, G.; et al. A theoretically based index of consciousness independent of sensory processing and behavior. Sci. Transl. Med. 2013, 5, 198ra105. [Google Scholar] [CrossRef]
  16. Mediano, P.A.; Rosas, F.E.; Luppi, A.I.; Jensen, H.J.; Seth, A.K.; Barrett, A.B.; Carhart-Harris, R.L.; Bor, D. Greater than the parts: A review of the information decomposition approach to causal emergence. Philos. Trans. R. Soc. A 2022, 380, 20210246. [Google Scholar] [CrossRef]
  17. Battiston, F.; Cencetti, G.; Iacopini, I.; Latora, V.; Lucas, M.; Patania, A.; Young, J.G.; Petri, G. Networks beyond pairwise interactions: Structure and dynamics. Phys. Rep. 2020, 874, 1–92. [Google Scholar]
  18. Brette, R.; Destexhe, A. Handbook of Neural Activity Measurement; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  19. Einevoll, G.T.; Kayser, C.; Logothetis, N.K.; Panzeri, S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat. Rev. Neurosci. 2013, 14, 770–785. [Google Scholar] [CrossRef]
  20. Heeger, D.J.; Ress, D. What does fMRI tell us about neuronal activity? Nat. Rev. Neurosci. 2002, 3, 142–151. [Google Scholar] [CrossRef] [PubMed]
  21. Gross, J. Magnetoencephalography in cognitive neuroscience: A primer. Neuron 2019, 104, 189–204. [Google Scholar] [CrossRef]
  22. Uddin, L.Q. Bring the noise: Reconceptualizing spontaneous neural activity. Trends Cogn. Sci. 2020, 24, 734–746. [Google Scholar] [CrossRef]
  23. Halnes, G.; Ness, T.V.; Næss, S.; Hagen, E.; Pettersen, K.H.; Einevoll, G.T. Electric Brain Signals: Foundations and Applications of Biophysical Modeling; Cambridge University Press: Cambridge, UK, 2024. [Google Scholar]
  24. Rieke, F.; Warland, D.; Van Steveninck, R.d.R.; Bialek, W. Spikes: Exploring the Neural Code; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  25. Gabbiani, F.; Koch, C. Principles of spike train analysis. Methods Neuronal Model. 1998, 12, 313–360. [Google Scholar]
  26. Grün, S.; Rotter, S. Analysis of Parallel Spike Trains; Springer: Berlin/Heidelberg, Germany, 2010; Volume 7. [Google Scholar]
  27. Cessac, B. A discrete time neural network model with spiking neurons: Rigorous results on the spontaneous dynamics. J. Math. Biol. 2008, 56, 311–345. [Google Scholar] [CrossRef]
  28. Castro, P.; Luppi, A.; Tagliazucchi, E.; Perl, Y.S.; Naci, L.; Owen, A.M.; Sitt, J.D.; Destexhe, A.; Cofré, R. Dynamical structure-function correlations provide robust and generalizable signatures of consciousness in humans. Commun. Biol. 2024, 7, 1224. [Google Scholar] [CrossRef] [PubMed]
  29. French, A.S.; Pfeiffer, K. Measuring entropy in continuous and digitally filtered neural signals. J. Neurosci. Methods 2011, 196, 81–87. [Google Scholar] [CrossRef] [PubMed]
  30. Pfeiffer, K.; French, A.S. GABAergic excitation of spider mechanoreceptors increases information capacity by increasing entropy rather than decreasing jitter. J. Neurosci. 2009, 29, 10989–10994. [Google Scholar] [CrossRef] [PubMed]
  31. Borst, A.; Theunissen, F.E. Information theory and neural coding. Nat. Neurosci. 1999, 2, 947–957. [Google Scholar] [CrossRef]
  32. Kajikawa, Y.; Schroeder, C.E. How local is the local field potential? Neuron 2011, 72, 847–858. [Google Scholar] [CrossRef]
  33. Mazzoni, A.; Lindén, H.; Cuntz, H.; Lansner, A.; Panzeri, S.; Einevoll, G.T. Computing the local field potential (LFP) from integrate-and-fire network models. PLoS Comput. Biol. 2015, 11, e1004584. [Google Scholar] [CrossRef]
  34. Telenczuk, B.; Telenczuk, M.; Destexhe, A. A kernel-based method to calculate local field potentials from networks of spiking neurons. J. Neurosci. Methods 2020, 344, 108871. [Google Scholar] [CrossRef]
  35. Vakani, R.; Nair, D.R. Electrocorticography and functional mapping. Handb. Clin. Neurol. 2019, 160, 313–327. [Google Scholar]
  36. Teplan, M. Fundamentals of EEG measurement. Meas. Sci. Rev. 2002, 2, 1–11. [Google Scholar]
  37. Cohen, M.X. Where does EEG come from and what does it mean? Trends Neurosci. 2017, 40, 208–218. [Google Scholar] [CrossRef]
  38. Wheless, J.W.; Castillo, E.; Maggio, V.; Kim, H.L.; Breier, J.I.; Simos, P.G.; Papanicolaou, A.C. Magnetoencephalography (MEG) and magnetic source imaging (MSI). Neurologist 2004, 10, 138–153. [Google Scholar] [CrossRef] [PubMed]
  39. Supek, S.; Aine, C.J. Magnetoencephalography; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  40. Grienberger, C.; Konnerth, A. Imaging calcium in neurons. Neuron 2012, 73, 862–885. [Google Scholar] [CrossRef] [PubMed]
  41. Mukamel, E.A.; Nimmerjahn, A.; Schnitzer, M.J. Automated analysis of cellular signals from large-scale calcium imaging data. Neuron 2009, 63, 747–760. [Google Scholar] [CrossRef]
  42. Chemla, S.; Chavane, F. Voltage-sensitive dye imaging: Technique review and models. J. Physiol. 2010, 104, 40–50. [Google Scholar] [CrossRef]
  43. Deffieux, T.; Demené, C.; Tanter, M. Functional Ultrasound Imaging: A New Imaging Modality for Neuroscience. Neuroscience 2021, 474, 110–121. [Google Scholar] [CrossRef]
  44. Logothetis, N.K.; Wandell, B.A. Interpreting the BOLD signal. Annu. Rev. Physiol. 2004, 66, 735–769. [Google Scholar] [CrossRef]
  45. Arthurs, O.J.; Boniface, S. How well do we understand the neural origins of the fMRI BOLD signal? Trends Neurosci. 2002, 25, 27–31. [Google Scholar] [CrossRef]
  46. Le Bihan, D.; Mangin, J.F.; Poupon, C.; Clark, C.A.; Pappata, S.; Molko, N.; Chabriat, H. Diffusion tensor imaging: Concepts and applications. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 2001, 13, 534–546. [Google Scholar] [CrossRef]
  47. Assaf, Y.; Pasternak, O. Diffusion tensor imaging (DTI)-based white matter mapping in brain research: A review. J. Mol. Neurosci. 2008, 34, 51–61. [Google Scholar] [CrossRef]
  48. Bailey, D.L.; Maisey, M.N.; Townsend, D.W.; Valk, P.E. Positron Emission Tomography; Springer: Berlin/Heidelberg, Germany, 2005; Volume 2. [Google Scholar]
  49. Gerstner, W.; Kistler, W.M.; Naud, R.; Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  50. Luppi, A.I.; Cabral, J.; Cofre, R.; Destexhe, A.; Deco, G.; Kringelbach, M.L. Dynamical models to evaluate structure–function relationships in network neuroscience. Nat. Rev. Neurosci. 2022, 23, 767–768. [Google Scholar] [CrossRef]
  51. Di Volo, M.; Romagnoni, A.; Capone, C.; Destexhe, A. Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation. Neural Comput. 2019, 31, 653–680. [Google Scholar] [CrossRef] [PubMed]
  52. Deco, G.; Ponce-Alvarez, A.; Hagmann, P.; Romani, G.L.; Mantini, D.; Corbetta, M. How local excitation–inhibition ratio impacts the whole brain dynamics. J. Neurosci. 2014, 34, 7886–7898. [Google Scholar] [CrossRef]
  53. Capone, C.; Di Volo, M.; Romagnoni, A.; Mattia, M.; Destexhe, A. State-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons. Phys. Rev. E 2019, 100, 062413. [Google Scholar] [CrossRef]
  54. Herzog, R.; Mediano, P.A.; Rosas, F.E.; Luppi, A.I.; Sanz-Perl, Y.; Tagliazucchi, E.; Kringelbach, M.L.; Cofré, R.; Deco, G. Neural mass modeling for the masses: Democratizing access to whole-brain biophysical modeling with FastDMF. Netw. Neurosci. 2024, 8, 1590–1612. [Google Scholar] [CrossRef]
  55. Herzog, R.; Mediano, P.A.; Rosas, F.E.; Lodder, P.; Carhart-Harris, R.; Perl, Y.S.; Tagliazucchi, E.; Cofre, R. A whole-brain model of the neural entropy increase elicited by psychedelic drugs. Sci. Rep. 2023, 13, 6244. [Google Scholar] [CrossRef]
  56. Cofré, R.; Herzog, R.; Mediano, P.A.; Piccinini, J.; Rosas, F.E.; Sanz Perl, Y.; Tagliazucchi, E. Whole-brain models to explore altered states of consciousness from the bottom up. Brain Sci. 2020, 10, 626. [Google Scholar] [CrossRef]
  57. Goldman, J.S.; Kusch, L.; Aquilue, D.; Yalçınkaya, B.H.; Depannemaecker, D.; Ancourt, K.; Nghiem, T.A.E.; Jirsa, V.; Destexhe, A. A comprehensive neural simulation of slow-wave sleep and highly responsive wakefulness dynamics. Front. Comput. Neurosci. 2023, 16, 1058957. [Google Scholar] [CrossRef]
  58. Destexhe, A.; Sacha, M.; Tesler, F.; Cofre, R. A Computational Approach to Evaluate How Molecular Mechanisms Impact Large-Scale Brain Activity. 2024. Available online: https://www.researchsquare.com/article/rs-4610184/v1 (accessed on 27 November 2024).
  59. Cover, T.M.; Thomas, J.A. Information theory and statistics. Elem. Inf. Theory 1991, 1, 279–335. [Google Scholar]
  60. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620. [Google Scholar] [CrossRef]
  61. Lesne, A. Shannon entropy: A rigorous notion at the crossroads between probability, information theory, dynamical systems and statistical physics. Math. Struct. Comput. Sci. 2014, 24, e240311. [Google Scholar] [CrossRef]
  62. Schneidman, E.; Berry, M.J.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar] [CrossRef]
  63. Ganmor, E.; Segev, R.; Schneidman, E. Sparse low-order interaction network underlies a highly correlated and learnable neural population code. Proc. Natl. Acad. Sci. USA 2011, 108, 9679–9684. [Google Scholar] [CrossRef]
  64. Marre, O.; El Boustani, S.; Frégnac, Y.; Destexhe, A. Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Phys. Rev. Lett. 2009, 102, 138101. [Google Scholar] [CrossRef] [PubMed]
  65. Vasquez, J.C.; Marre, O.; Palacios, A.G.; Berry, M.J., II; Cessac, B. Gibbs distribution analysis of temporal correlations structure in retina ganglion cells. J. Physiol. 2012, 106, 120–127. [Google Scholar]
  66. Watanabe, T.; Hirose, S.; Wada, H.; Imai, Y.; Machida, T.; Shirouzu, I.; Konishi, S.; Miyashita, Y.; Masuda, N. A pairwise maximum entropy model accurately describes resting-state human brain networks. Nat. Commun. 2013, 4, 1370. [Google Scholar] [CrossRef]
  67. Gu, S.; Cieslak, M.; Baird, B.; Muldoon, S.F.; Grafton, S.T.; Pasqualetti, F.; Bassett, D.S. The energy landscape of neurophysiological activity implicit in brain network structure. Sci. Rep. 2018, 8, 2507. [Google Scholar] [CrossRef] [PubMed]
  68. Ponce-Alvarez, A.; Uhrig, L.; Deco, N.; Signorelli, C.M.; Kringelbach, M.L.; Jarraya, B.; Deco, G. Macroscopic quantities of collective brain activity during wakefulness and anesthesia. Cereb. Cortex 2022, 32, 298–311. [Google Scholar] [CrossRef]
  69. Delgado-Bonal, A.; Marshak, A. Approximate entropy and sample entropy: A comprehensive tutorial. Entropy 2019, 21, 541. [Google Scholar] [CrossRef]
  70. Wei, Q.; Liu, Q.; Fan, S.Z.; Lu, C.W.; Lin, T.Y.; Abbod, M.F.; Shieh, J.S. Analysis of EEG via multivariate empirical mode decomposition for depth of anesthesia based on sample entropy. Entropy 2013, 15, 3458–3470. [Google Scholar] [CrossRef]
  71. Jiang, G.J.; Fan, S.Z.; Abbod, M.F.; Huang, H.H.; Lan, J.Y.; Tsai, F.F.; Chang, H.C.; Yang, Y.W.; Chuang, F.L.; Chiu, Y.F.; et al. Sample entropy analysis of EEG signals via artificial neural networks to model patients’ consciousness level based on anesthesiologists experience. BioMed Res. Int. 2015, 2015, 343478. [Google Scholar] [CrossRef]
  72. Chen, Y.; Pham, T.D. Sample entropy and regularity dimension in complexity analysis of cortical surface structure in early Alzheimer’s disease and aging. J. Neurosci. Methods 2013, 215, 210–217. [Google Scholar] [CrossRef] [PubMed]
  73. Varley, T.F.; Luppi, A.I.; Pappas, I.; Naci, L.; Adapa, R.; Owen, A.M.; Menon, D.K.; Stamatakis, E.A. Consciousness & brain functional complexity in propofol anaesthesia. Sci. Rep. 2020, 10, 1018. [Google Scholar]
  74. Courtiol, J.; Perdikis, D.; Petkoski, S.; Müller, V.; Huys, R.; Sleimen-Malkoun, R.; Jirsa, V.K. The multiscale entropy: Guidelines for use and interpretation in brain signal analysis. J. Neurosci. Methods 2016, 273, 175–190. [Google Scholar] [CrossRef]
  75. Araya-Arriagada, J.; Garay, S.; Rojas, C.; Duran-Aniotz, C.; Palacios, A.G.; Chacón, M.; Medina, L.E. Multiscale entropy analysis of retinal signals reveals reduced complexity in a mouse model of Alzheimer’s disease. Sci. Rep. 2022, 12, 8900. [Google Scholar] [CrossRef]
  76. Chenxi, L.; Chen, Y.; Li, Y.; Wang, J.; Liu, T. Complexity analysis of brain activity in attention-deficit/hyperactivity disorder: A multiscale entropy analysis. Brain Res. Bull. 2016, 124, 12–20. [Google Scholar] [CrossRef]
  77. Miskovic, V.; MacDonald, K.J.; Rhodes, L.J.; Cote, K.A. Changes in EEG multiscale entropy and power-law frequency scaling during the human sleep cycle. Hum. Brain Mapp. 2019, 40, 538–551. [Google Scholar] [CrossRef]
  78. Gaspard, P.; Wang, X.J. Noise, chaos, and (ε, τ)-entropy per unit time. Phys. Rep. 1993, 235, 291–343. [Google Scholar] [CrossRef]
  79. Cencini, M.; Falcioni, M.; Olbrich, E.; Kantz, H.; Vulpiani, A. Chaos or noise: Difficulties of a distinction. Phys. Rev. E 2000, 62, 427. [Google Scholar] [CrossRef]
  80. El Boustani, S.; Destexhe, A. Brain dynamics at multiple scales: Can one reconcile the apparent low-dimensional chaos of macroscopic variables with the seemingly stochastic behavior of single neurons? Int. J. Bifurc. Chaos 2010, 20, 1687–1702. [Google Scholar] [CrossRef]
  81. Bossomaier, T.; Barnett, L.; Harré, M.; Lizier, J.T.; Bossomaier, T.; Barnett, L.; Harré, M.; Lizier, J.T. Transfer Entropy; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  82. Wibral, M.; Vicente, R.; Lindner, M. Transfer entropy in neuroscience. In Directed Information Measures in Neuroscience; Springer: Berlin/Heidelberg, Germany, 2014; pp. 3–36. [Google Scholar]
  83. Vicente, R.; Wibral, M.; Lindner, M.; Pipa, G. Transfer entropy—A model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci. 2011, 30, 45–67. [Google Scholar] [CrossRef]
  84. Novelli, L.; Wollstadt, P.; Mediano, P.; Wibral, M.; Lizier, J.T. Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing. Netw. Neurosci. 2019, 3, 827–847. [Google Scholar] [CrossRef]
  85. Maes, C.; Redig, F.; Moffaert, A.V. On the definition of entropy production, via examples. J. Math. Phys. 2000, 41, 1528–1554. [Google Scholar] [CrossRef]
  86. Jiang, D.Q.; Jiang, D. Mathematical Theory of Nonequilibrium Steady States: On the Frontier of Probability and Dynamical Systems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  87. Maes, C.; Netočnỳ, K. Time-reversal and entropy. J. Stat. Phys. 2003, 110, 269–310. [Google Scholar] [CrossRef]
  88. Cofré, R.; Maldonado, C. Information entropy production of maximum entropy Markov chains from spike trains. Entropy 2018, 20, 34. [Google Scholar] [CrossRef]
  89. Cofré, R.; Videla, L.; Rosas, F. An introduction to the non-equilibrium steady states of maximum entropy spike trains. Entropy 2019, 21, 884. [Google Scholar] [CrossRef]
  90. Lynn, C.W.; Cornblath, E.J.; Papadopoulos, L.; Bertolero, M.A.; Bassett, D.S. Broken detailed balance and entropy production in the human brain. Proc. Natl. Acad. Sci. USA 2021, 118, e2109889118. [Google Scholar] [CrossRef]
  91. Gilson, M.; Tagliazucchi, E.; Cofré, R. Entropy production of multivariate Ornstein-Uhlenbeck processes correlates with consciousness levels in the human brain. Phys. Rev. E 2023, 107, 024121. [Google Scholar] [CrossRef]
  92. de la Fuente, L.A.; Zamberlan, F.; Bocaccio, H.; Kringelbach, M.; Deco, G.; Perl, Y.S.; Pallavicini, C.; Tagliazucchi, E. Temporal irreversibility of neural dynamics as a signature of consciousness. Cereb. Cortex 2023, 33, 1856–1865. [Google Scholar] [CrossRef]
  93. Lempel, A.; Ziv, J. On the complexity of finite sequences. IEEE Trans. Inf. Theory 1976, 22, 75–81. [Google Scholar] [CrossRef]
  94. Aboy, M.; Hornero, R.; Abásolo, D.; Álvarez, D. Interpretation of the Lempel-Ziv complexity measure in the context of biomedical signal analysis. IEEE Trans. Biomed. Eng. 2006, 53, 2282–2288. [Google Scholar] [CrossRef]
  95. Amigó, J.M.; Szczepański, J.; Wajnryb, E.; Sanchez-Vives, M.V. Estimating the entropy rate of spike trains via Lempel-Ziv complexity. Neural Comput. 2004, 16, 717–736. [Google Scholar] [CrossRef] [PubMed]
  96. Hudetz, A.G.; Liu, X.; Pillay, S.; Boly, M.; Tononi, G. Propofol anesthesia reduces Lempel-Ziv complexity of spontaneous brain activity in rats. Neurosci. Lett. 2016, 628, 132–135. [Google Scholar] [CrossRef] [PubMed]
  97. Höhn, C.; Hahn, M.A.; Lendner, J.D.; Hoedlmoser, K. Spectral Slope and Lempel–Ziv Complexity as Robust Markers of Brain States during Sleep and Wakefulness. Eneuro 2024, 11, 1–17. [Google Scholar] [CrossRef] [PubMed]
  98. Mediano, P.A.; Rosas, F.E.; Timmermann, C.; Roseman, L.; Nutt, D.J.; Feilding, A.; Kaelen, M.; Kringelbach, M.L.; Barrett, A.B.; Seth, A.K.; et al. Effects of external stimulation on psychedelic state neurodynamics. ACS Chem. Neurosci. 2024, 15, 462–471. [Google Scholar] [CrossRef] [PubMed]
  99. Barbero-Castillo, A.; Mateos-Aparicio, P.; Dalla Porta, L.; Camassa, A.; Perez-Mendez, L.; Sanchez-Vives, M.V. Impact of GABAA and GABAB inhibition on cortical dynamics and perturbational complexity during synchronous and desynchronized states. J. Neurosci. 2021, 41, 5029–5044. [Google Scholar] [CrossRef]
  100. Comolatti, R.; Pigorini, A.; Casarotto, S.; Fecchio, M.; Faria, G.; Sarasso, S.; Rosanova, M.; Gosseries, O.; Boly, M.; Bodart, O.; et al. A fast and general method to empirically estimate the complexity of brain responses to transcranial and intracranial stimulations. Brain Stimul. 2019, 12, 1280–1289. [Google Scholar] [CrossRef]
  101. Barnett, L.; Buckley, C.L.; Bullock, S. Neural complexity and structural connectivity. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2009, 79, 051914. [Google Scholar] [CrossRef]
  102. Frohlich, J.; Chiang, J.N.; Mediano, P.A.; Nespeca, M.; Saravanapandian, V.; Toker, D.; Dell’Italia, J.; Hipp, J.F.; Jeste, S.S.; Chu, C.J.; et al. Neural complexity is a common denominator of human consciousness across diverse regimes of cortical dynamics. Commun. Biol. 2022, 5, 1374. [Google Scholar] [CrossRef]
  103. Zhang, Y.; Lucas, M.; Battiston, F. Higher-order interactions shape collective dynamics differently in hypergraphs and simplicial complexes. Nat. Commun. 2023, 14, 1605. [Google Scholar] [CrossRef]
  104. Watanabe, S. Information theoretical analysis of multivariate correlation. IBM J. Res. Dev. 1960, 4, 66–82. [Google Scholar] [CrossRef]
  105. Han, T.S. Nonnegative entropy measures of multivariate symmetric correlations. Inf. Control 1978, 36, 133–156. [Google Scholar]
  106. Rosas, F.E.; Mediano, P.A.; Gastpar, M.; Jensen, H.J. Quantifying high-order interdependencies via multivariate extensions of the mutual information. Phys. Rev. E 2019, 100, 032305. [Google Scholar] [CrossRef] [PubMed]
  107. Gatica, M.; Cofré, R.; Mediano, P.A.; Rosas, F.E.; Orio, P.; Diez, I.; Swinnen, S.P.; Cortes, J.M. High-order interdependencies in the aging brain. Brain Connect. 2021, 11, 734–744. [Google Scholar] [CrossRef] [PubMed]
  108. Herzog, R.; Rosas, F.E.; Whelan, R.; Fittipaldi, S.; Santamaria-Garcia, H.; Cruzat, J.; Birba, A.; Moguilner, S.; Tagliazucchi, E.; Prado, P.; et al. Genuine high-order interactions in brain networks and neurodegeneration. Neurobiol. Dis. 2022, 175, 105918. [Google Scholar] [CrossRef]
  109. Varley, T.F.; Sporns, O.; Schaffelhofer, S.; Scherberger, H.; Dann, B. Information-processing dynamics in neural networks of macaque cerebral cortex reflect cognitive state and behavior. Proc. Natl. Acad. Sci. USA 2023, 120, e2207677120. [Google Scholar] [CrossRef]
  110. Luppi, A.I.; Mediano, P.A.; Rosas, F.E.; Allanson, J.; Pickard, J.; Carhart-Harris, R.L.; Williams, G.B.; Craig, M.M.; Finoia, P.; Owen, A.M.; et al. A synergistic workspace for human consciousness revealed by integrated information decomposition. eLife 2024, 12, RP88173. [Google Scholar] [CrossRef]
  111. Valencia, M.; Artieda, J.; Alegre, M.; Maza, D. Influence of filters in the detrended fluctuation analysis of digital electroencephalographic data. J. Neurosci. Methods 2008, 170, 310–316. [Google Scholar] [CrossRef]
  112. Todorova, S.; Sadtler, P.; Batista, A.; Chase, S.; Ventura, V. To sort or not to sort: The impact of spike-sorting on neural decoding performance. J. Neural Eng. 2014, 11, 056005. [Google Scholar] [CrossRef]
  113. Gavrilescu, M.; Stuart, G.W.; Rossell, S.; Henshall, K.; McKay, C.; Sergejew, A.A.; Copolov, D.; Egan, G.F. Functional connectivity estimation in fMRI data: Influence of preprocessing and time course selection. Hum. Brain Mapp. 2008, 29, 1040–1052. [Google Scholar] [CrossRef]
  114. Lindquist, M.A.; Geuter, S.; Wager, T.D.; Caffo, B.S. Modular preprocessing pipelines can reintroduce artifacts into fMRI data. Hum. Brain Mapp. 2019, 40, 2358–2376. [Google Scholar] [CrossRef]
  115. Humeau-Heurtier, A. The multiscale entropy algorithm and its variants: A review. Entropy 2015, 17, 3110–3123. [Google Scholar] [CrossRef]
  116. Beggs, J.M.; Timme, N. Being critical of criticality in the brain. Front. Physiol. 2012, 3, 163. [Google Scholar] [CrossRef] [PubMed]
  117. Kannathal, N.; Choo, M.L.; Acharya, U.R.; Sadasivan, P. Entropies for detection of epilepsy in EEG. Comput. Methods Programs Biomed. 2005, 80, 187–194. [Google Scholar] [CrossRef]
  118. Camargo, A.; Del Mauro, G.; Wang, Z. Task-induced changes in brain entropy. J. Neurosci. Res. 2024, 102, e25310. [Google Scholar] [CrossRef]
  119. Carhart-Harris, R.L.; Leech, R.; Hellyer, P.J.; Shanahan, M.; Feilding, A.; Tagliazucchi, E.; Chialvo, D.R.; Nutt, D. The entropic brain: A theory of conscious states informed by neuroimaging research with psychedelic drugs. Front. Hum. Neurosci. 2014, 8, 20. [Google Scholar] [CrossRef]
  120. Carhart-Harris, R.L. The entropic brain-revisited. Neuropharmacology 2018, 142, 167–178. [Google Scholar] [CrossRef]
  121. Tyrcha, J.; Roudi, Y.; Marsili, M.; Hertz, J. The effect of nonstationarity on models inferred from neural data. J. Stat. Mech. Theory Exp. 2013, 2013, P03005. [Google Scholar] [CrossRef]
  122. Grün, S.; Diesmann, M.; Aertsen, A. Unitary events in multiple single-neuron spiking activity: II. Nonstationary data. Neural Comput. 2002, 14, 81–119. [Google Scholar] [CrossRef]
  123. Drachman, D.A. Aging of the brain, entropy, and Alzheimer disease. Neurology 2006, 67, 1340–1352. [Google Scholar] [CrossRef]
  124. Hong, S.L.; Barton, S.J.; Rebec, G.V. Altered neural and behavioral dynamics in Huntington’s disease: An entropy conservation approach. PLoS ONE 2012, 7, e30879. [Google Scholar] [CrossRef]
  125. Button, K.S.; Ioannidis, J.P.; Mokrysz, C.; Nosek, B.A.; Flint, J.; Robinson, E.S.; Munafò, M.R. Power failure: Why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 2013, 14, 365–376. [Google Scholar] [CrossRef] [PubMed]
  126. Nieuwenhuis, S.; Forstmann, B.U.; Wagenmakers, E.J. Erroneous analyses of interactions in neuroscience: A problem of significance. Nat. Neurosci. 2011, 14, 1105–1107. [Google Scholar] [CrossRef] [PubMed]
  127. Colquhoun, D. An investigation of the false discovery rate and the misinterpretation of p-values. R. Soc. Open Sci. 2014, 1, 140216. [Google Scholar] [CrossRef] [PubMed]
  128. Vázquez-Guardado, A.; Yang, Y.; Bandodkar, A.J.; Rogers, J.A. Recent advances in neurotechnologies with broad potential for neuroscience research. Nat. Neurosci. 2020, 23, 1522–1536. [Google Scholar] [CrossRef]
  129. Churchland, P.S.; Sejnowski, T.J. The Computational Brain; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  130. Guan, S.; Wan, D.; Zhao, R.; Canario, E.; Meng, C.; Biswal, B.B. The complexity of spontaneous brain activity changes in schizophrenia, bipolar disorder, and ADHD was examined using different variations of entropy. Hum. Brain Mapp. 2023, 44, 94–118. [Google Scholar] [CrossRef]
  131. Zhang, R.; Murray, S.B.; Duval, C.J.; Wang, D.J.; Jann, K. Functional connectivity and complexity analyses of resting-state fMRI in pre-adolescents demonstrating the behavioral symptoms of ADHD. Psychiatry Res. 2024, 334, 115794. [Google Scholar] [CrossRef]
  132. Gu, Y.; Miao, S.; Han, J.; Zeng, K.; Ouyang, G.; Yang, J.; Li, X. Complexity analysis of fNIRS signals in ADHD children during working memory task. Sci. Rep. 2017, 7, 829. [Google Scholar] [CrossRef]
  133. Muttoni, S.; Ardissino, M.; John, C. Classical psychedelics for the treatment of depression and anxiety: A systematic review. J. Affect. Disord. 2019, 258, 11–24. [Google Scholar] [CrossRef]
  134. Wang, J.Z.; Wyble, B. Hopfield and Hinton’s neural network revolution and the future of AI. Patterns 2024, 5, 101094. [Google Scholar] [CrossRef]
  135. Ball, P. Chemistry Nobel Awarded for an AI System That Predicts Protein Structures. Physics 2024, 17, 149. [Google Scholar] [CrossRef]
  136. Deco, G.; Kringelbach, M.L. Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron 2014, 84, 892–905. [Google Scholar] [CrossRef] [PubMed]
  137. Amunts, K.; Axer, M.; Banerjee, S.; Bitsch, L.; Bjaalie, J.G.; Brauner, P.; Brovelli, A.; Calarco, N.; Carrere, M.; Caspers, S.; et al. The coming decade of digital brain research: A vision for neuroscience at the intersection of technology and computing. Imaging Neurosci. 2024, 2, 1–35. [Google Scholar] [CrossRef]
Figure 1. Discrete signals. (A) Single-cell spike recordings can be transformed into binary sequences of zeros and ones. (B) For simultaneous recordings from multiple neurons, such as those obtained with multi-electrode arrays (MEAs) from retinal ganglion cells responding to light stimuli, spike sorting is required to assign each spike to its neuron of origin. After choosing a bin size, a multidimensional binary signal is generated. (C) Continuous fMRI BOLD signals from a given parcellation can be discretized (e.g., assigning 1 to samples exceeding 1 standard deviation and 0 otherwise) to create a multidimensional binary signal representing the whole brain.
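As a concrete illustration of the discretization sketched in Figure 1, the following minimal Python snippet (not taken from the article; the function names, bin size, and threshold are illustrative assumptions) bins a spike train into a 0/1 sequence and thresholds a parcellated BOLD matrix at 1 standard deviation:

```python
import numpy as np

def bin_spike_train(spike_times, t_start, t_stop, bin_size):
    # 1 if at least one spike falls in a bin, 0 otherwise
    edges = np.arange(t_start, t_stop + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    return (counts > 0).astype(int)

def binarize_bold(bold, threshold_sd=1.0):
    # z-score each region's time course and mark samples above threshold_sd
    z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    return (z > threshold_sd).astype(int)

rng = np.random.default_rng(0)
# three toy spike trains (in seconds) and a toy 90-parcel, 200-volume BOLD matrix
trains = [np.sort(rng.uniform(0, 10, size=40)) for _ in range(3)]
raster = np.vstack([bin_spike_train(t, 0.0, 10.0, 0.02) for t in trains])
bold = rng.normal(size=(90, 200))
binary_bold = binarize_bold(bold)
print(raster.shape, binary_bold.shape)  # binary raster and binarized BOLD
```

The 20 ms bin and the 1-standard-deviation threshold are common choices but not prescribed by the review; in practice, both should be checked for robustness, since the resulting entropy and complexity estimates depend on them.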
Figure 2. Continuous and imaging-based signals. (A) EEG signals are recorded with sensors attached to the scalp, which detect the brain’s electrical activity. Electrocorticography (ECoG), a type of intracranial EEG, uses electrodes placed directly on the exposed surface of the brain to capture electrical activity from the cerebral cortex. Implantable intracortical microelectrodes are surgically inserted into the cortex to record neural activity with high spatial precision or to stimulate specific groups of neurons. (B) Imaging-based signals, such as those from fMRI and DTI, are acquired with MRI scanners, while PET scans reveal the metabolic and biochemical activity of tissues and organs.
Figure 3. From brain signals to entropy and complexity. (A) Two groups are typically analyzed: either a population of healthy controls versus patients with a particular pathology, or a single population under varying conditions, such as placebo versus drug treatment. (B) Brain activity data—such as EEG, MEG, or fMRI—are collected from these groups under different experimental conditions, for example, before and after external stimulation. (C) Entropy and complexity measures are then computed as functions of the acquired data (denoted here as functions f and g). These measures enable the comparison of brain signal characteristics, revealing potential differences between groups (e.g., Condition A (CA) versus Condition B (CB) or healthy controls (HCs) versus patients (Ps)).
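To make the pipeline in Figure 3 concrete, here is a minimal Python sketch using synthetic data (purely illustrative and not the authors’ method; the complexity measure shown is an LZ78-style phrase count rather than the exact estimators used in the cited studies). It plays the role of one of the functions f or g: a complexity value is computed per subject in two conditions and the two groups are compared:

```python
import numpy as np
from scipy import stats

def lz_complexity(binary_seq):
    # LZ78-style parsing: count the distinct phrases needed to cover the sequence
    s = "".join(map(str, binary_seq))
    phrases, current = set(), ""
    for symbol in s:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

rng = np.random.default_rng(1)
n_subjects, n_samples = 20, 1000
# Condition A: unstructured binary signals; Condition B: highly repetitive (more predictable)
cond_a = rng.integers(0, 2, size=(n_subjects, n_samples))
cond_b = np.tile(rng.integers(0, 2, size=(n_subjects, 50)), 20)

lz_a = [lz_complexity(x) for x in cond_a]
lz_b = [lz_complexity(x) for x in cond_b]
t_stat, p_val = stats.ttest_ind(lz_a, lz_b)
print(f"mean LZ: A = {np.mean(lz_a):.1f}, B = {np.mean(lz_b):.1f}, p = {p_val:.2g}")
```

In a real analysis, f and g would be validated estimators (e.g., sample entropy, multiscale entropy, or perturbational complexity), the binarization step of Figure 1 would precede the complexity computation, and the group comparison would need to account for multiple comparisons and confounds such as preprocessing choices.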
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
