Review

Whole-Brain Models to Explore Altered States of Consciousness from the Bottom Up

1 CIMFAV-Ingemat, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2340000, Chile
2 Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, Valparaíso 2360103, Chile
3 Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
4 National Scientific and Technical Research Council, Buenos Aires C1033AAJ, Argentina
5 Buenos Aires Physics Institute and Physics Department, University of Buenos Aires, Buenos Aires C1428EGA, Argentina
6 Centre for Psychedelic Research, Department of Brain Science, Imperial College London, London SW7 2DD, UK
7 Data Science Institute, Imperial College London, London SW7 2AZ, UK
8 Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
9 Departamento de Matemáticas y Ciencias, Universidad de San Andrés, Buenos Aires B1644BID, Argentina
* Author to whom correspondence should be addressed.
Brain Sci. 2020, 10(9), 626; https://doi.org/10.3390/brainsci10090626
Submission received: 26 July 2020 / Revised: 3 September 2020 / Accepted: 7 September 2020 / Published: 10 September 2020
(This article belongs to the Special Issue Recent Advances in the Study of Altered State of Consciousness)

Abstract

The scope of human consciousness includes states departing from what most of us experience as ordinary wakefulness. These altered states of consciousness constitute a prime opportunity to study how global changes in brain activity relate to different varieties of subjective experience. We consider the problem of explaining how global signatures of altered consciousness arise from the interplay between large-scale connectivity and local dynamical rules that can be traced to known properties of neural tissue. For this purpose, we advocate a research program aimed at bridging the gap between bottom-up generative models of whole-brain activity and the top-down signatures proposed by theories of consciousness. Throughout this paper, we define altered states of consciousness, discuss relevant signatures of consciousness observed in brain activity, and introduce whole-brain models to explore the biophysics of altered consciousness from the bottom-up. We discuss the potential of our proposal in view of the current state of the art, give specific examples of how this research agenda might play out, and emphasize how a systematic investigation of altered states of consciousness via bottom-up modeling may help us better understand the biophysical, informational, and dynamical underpinnings of consciousness.

1. Introduction

Consciousness has been a puzzle beyond the scope of natural science for centuries; however, the significant progress seen during the last 30 years of research suggests that a rigorous scientific understanding of consciousness is possible [1,2,3]. The dawn of the modern neuroscientific approach to consciousness can be traced back to Crick and Koch’s proposal for identifying the neural correlates of consciousness (NCC) [4,5], understood as the minimal set of neural events associated with a certain subjective experience. The key intuition that fuels this proposal is that careful experimentation should suffice to reveal brain events that are systematically associated with conscious (as opposed to unconscious or subliminal) perception. Needless to say, the methodological challenges associated with this idea are vast—particularly concerning the determination of what constitutes conscious content (e.g., must content be explicitly reported, or are other less direct forms of inference equally valid [6,7]?). Despite these problems, which are still actively debated, the program put forward by Crick and Koch succeeded in jump-starting contemporary consciousness research. For recent reviews on the empirical search for NCC, see Ref. [8]; for a theoretical examination of the concept of NCC, see Ref. [9]; and for criticism of the concept of NCC, see Refs. [10,11].
While the quest for the NCC aims to provide answers to where and when consciousness occurs in the brain, subsequent theoretical efforts have attempted to discover systematic signatures within those NCC that could reflect key mechanisms underlying the emergence of consciousness. In other words, these efforts try to answer how consciousness emerges from the processes that give rise to the NCC [12,13]. Hence, theoretical models of consciousness strive to “compress” our empirical knowledge of the NCC, i.e., to provide rules that can predict when and where from how. The nature of those rules, in turn, determines the kind of explanation offered by a theoretical model of consciousness. Here, we consider two possible approaches: top-down and bottom-up [14]. On the one hand, top-down approaches start by identifying high-level signatures of consciousness, and then try to narrow down low-level biophysical mechanisms compatible with those signatures. On the other hand, bottom-up approaches build from dynamical rules of elementary units (such as neurons or groups of neurons [15]) and attempt to provide quantitative predictions by exploring the aggregated consequences of these rules across various temporal and spatial scales. We further subdivide explanations into those addressing conscious information access (e.g., perception in different sensory modalities) and those concerning consciousness as a temporally extended state, such as wakefulness, sleep, anaesthesia, and the altered states that can be elicited by pharmacological manipulation [16,17,18,19,20,21,22].
Our objective is to put forward a research program for the development of bottom-up explanations for the relationship between brain activity and states of consciousness, an approach which we claim is underrepresented in both past and current research. Theories that rely heavily on a top-down perspective risk being under-determined in the reductive sense, i.e., they could be compatible with multiple and potentially divergent lower-level biological and physical mechanisms [23]. While we do not know whether consciousness may be instantiated in other physical systems, we certainly do know that it is instantiated in the human brain, and therefore all theoretical models of consciousness should be consistent with the low-level biophysical details of the brain to be considered acceptable. In light of this potential under-determination, it is difficult to decide whether the different theories currently dominating the field are competing (in the sense of predicting mutually contradictory empirical findings) or convergent (in spite of being formulated from disparate perspectives). Without investigating theories of consciousness from the bottom-up, it could simply be too early for proposals of an experimentum crucis to decide between candidates [24].
In this paper, we posit that computational models can play a crucial role in determining the low-level physical and biological mechanisms fulfilling the high-level phenomenological and computational constraints of theoretical models of consciousness. The idea that consciousness is intrinsically dependent on the dynamics of neural activity is not new, and, in this sense, we follow the trail of pioneers, such as Walter J. Freeman [25], Francisco Varela [26], and Gerald Edelman [27], among others. However, our proposal reaches further than these previous attempts by building upon the technological and conceptual advances accumulated over the last decades. In particular, the widespread availability of non-invasive neuroimaging methods (functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), magnetoencephalography (MEG)) has expanded our knowledge of the functional and structural aspects of the brain, while the development of connectomics has revealed the intricate meso- and macroscopic connectivity patterns that wire cortical and subcortical structures together [28]. Moreover, for the first time, there is sufficient empirical data and computational power available to construct whole-brain models with real predictive power [15,29,30], which represents a radical improvement over past research efforts. We expect that these advances will enable us to compare the predictions of theories of consciousness by means of whole-brain computational models that can be directly contrasted with empirical results.
In the following, we adopt and explore the consequences of this perspective. Our proposal and its justification are structured as follows. First, Section 2 describes several examples of altered states of consciousness and briefly discusses some proposed general definitions. Next, Section 3 introduces top-down approaches for quantifying and classifying states of consciousness solely from functional data. Then, Section 4 introduces the main technical ideas underlying the development of whole-brain computational models, highlighting novel results with special emphasis on those informing research on altered states of consciousness. Section 5 discusses how computational models can contribute to overcome open challenges and conceptual difficulties, thus providing new insights into the study of altered states of consciousness. Finally, Section 6 elaborates on possible future directions of research stemming from our proposal.

2. What Is an Altered State of Consciousness? Examples and Defining Features

A basic distinction is commonly drawn between phenomenal and access consciousness [31]. The first represents the subjective experience of sensory perception, emotion, thoughts, etc.; in other words, what it feels like to perceive something, undergo a certain emotion, or engage in a certain thought process. The second represents the global availability of conscious content for cognitive functions, such as speech, reasoning, and decision-making, enabling the capacity to issue first-person reports.
The term “consciousness” is also used in reference to a third concept in which the definition is comparatively more elusive: that of temporally extended and qualitatively distinct modes or states of consciousness [16,17,18,19,20,21,22]. This concept is perhaps best introduced by listing examples, such as our ordinary state of conscious wakefulness, the different phases of the wake-sleep cycle, dreaming during rapid eye movement (REM) sleep, sedation and general anaesthesia, post-comatose disorders, such as the unresponsive wakefulness syndrome, the acute effects of certain drugs (mainly serotonergic psychedelics and glutamatergic dissociatives), the state achieved in some contemplative traditions by means of meditation, hypnosis, and shamanic trance, among others. Following Ludwig [20] and Tart [32], we refer to these as “altered states of consciousness”, adopting this term to emphasize their dissimilarity to ordinary conscious wakefulness. Altered states of consciousness have been studied for decades from different perspectives [33], emphasizing the individual differences in conscious experiencing. Basic processes, such as sensory perception, reveal consistent and substantial inter-individual differences [34]. Other inter-individual differences in experiences of imagery and thought in the waking state, dreaming, hypnosis, and other phenomena have been reported [35]. Furthermore, the same phenomenal event may be interpreted in different ways, evidencing cultural variations [36]. For a complete account of altered states of consciousness following a multidisciplinary perspective, we refer the reader to Reference [37].
Let us describe commonalities shared by altered states of consciousness, which point towards a potential general operational definition. First, altered states of consciousness are temporally extended and typically (but not always) reversible. Second, they are not defined by the presence of specific subjective experiences, but instead by general and qualitative modifications to the contents of consciousness, including their experienced intensity [17]. Third, at least some states can be ordered along a hierarchy of levels, from states of “reduced” consciousness (e.g., general anaesthesia, sleep) to others considered “richer” (e.g., certain states achieved during meditation or induced by pharmacological means) [38].
A proper definition of what constitutes an altered state of consciousness is, unfortunately, more elusive than suggested by the examination of these examples. If states of consciousness are transient, then what is their minimum accepted length? Do qualitative modifications of conscious content apply only to the sensory domain, or encompass other forms of subjective experience, as well? Does a déjà-vu (a brief episode of eerie familiarity with an unknown past event) qualify as an altered state of consciousness? What about an orgasm, or the state of pain caused by hitting one’s finger with a hammer? Without doubt, these examples modify in one way or another the general contents of consciousness, but they are not commonly considered as altered states of consciousness.
The intuitive notion of “levels” of consciousness is also problematic [39]. We are familiar with the fact that some states appear to be “more conscious” than others; for instance, ordinary wakefulness would have a higher conscious level than deep sleep or an absence seizure. But in what sense is deep sleep more or less conscious than an absence seizure? Following this logic, how should dreaming, the acute effects of psychedelic drugs, and the state achieved by expert meditators be ordered along a hypothetical uni-dimensional hierarchy of levels of consciousness? It seems that altered states of consciousness can only be subject to partial ordering, with comparisons between certain pairs of states being questionable or outright meaningless.
These difficulties relate to two main problems. The first problem is granularity: how long is long enough to qualify as an altered state of consciousness? The second is compositeness: instead of a single level of intensity, multiple dimensions are likely required for an unambiguous characterization; however, it is unclear how many dimensions are needed and how they should be determined [39,40]. A subsidiary issue related to the granularity problem is whether altered states of consciousness represent discrete states with sharply defined boundaries, or are more adequately understood as continuous transitions.
Several proposals have been put forward to circumvent these issues and define altered states of consciousness [16,17,18,19,20,21,22]. Here, we adopt perforce a more pragmatic stance: we are interested in altered states of consciousness lasting long enough to be investigated by modern neuroimaging techniques (>10 min). At the same time, we strive to show that whole-brain models can be sufficiently rich to transcend the unidimensional characterization of consciousness in terms of “levels”.
For the purposes of this article, we divide altered states of consciousness into the following (neither exhaustive nor mutually exclusive) categories: natural or endogenous (e.g., the states within the sleep cycle), induced by pharmacological means (e.g., general anaesthesia, the psychedelic state), induced by other means (e.g., meditation, hypnosis), caused by pathological processes, either neurological or psychiatric (e.g., disorders of consciousness, epilepsy, psychotic episodes), and transitory versus permanent. Examples can be found in Table 1.

3. Top-Down Signatures of Consciousness from Brain Signals

A major challenge in the study of altered states of consciousness has been to establish empirical signatures in brain signals that are characteristic of different states, thus allowing us to identify them “from the outside”, i.e., not depending on self-report or behavioral tasks [13]. Establishing and validating these signatures also carries significance from a clinical perspective, since they could lead to efficient and specific biomarkers for certain neuropsychiatric conditions [41,42]. Furthermore, when interpreted within a broader theory, some of these signatures may also provide new insights about the nature of the corresponding conscious states, advancing our fundamental understanding of consciousness itself.
In the following, we first provide a broad overview of general aspects of theories of consciousness and then illustrate what a signature of consciousness is by reviewing two well-known examples.

3.1. Functionalist and Non-Functionalist Positions on the Mind-Brain Problem

When we consider the most prominent contemporary theories of consciousness, we find that they mainly differ in what they take as valid empirical data to be explained by the theory. There are essentially two positions on this matter, which can be related to the influential division between functionalist and non-functionalist positions on the mind-brain problem. For a functionalist, the subjective quality of conscious experience is rejected as a valid target of scientific explanation. According to this view, most famously articulated by Daniel Dennett in Consciousness Explained [43], only third-person objective measurements fall into the scope of a science of consciousness. This data is limited to observable behavior and neural activity recordings; for instance, whenever an experimental subject claims to be experiencing a certain shade of blue, the neuroscientist is not tasked with finding how a physical process in the brain can cause a subjective feeling of blue, but with determining the mechanisms leading the subject to declare such experience [44]. Non-functionalists, on the other hand, reject this position as a sophisticated form of behaviorism [45]. According to this view, introspection plays a crucial role in the scientific explanation of consciousness, because it reveals the very nature of the explanandum itself; any other kind of data represents, at best, an indirect approximation [46,47,48]. It is one of the defining features of consciousness, argue the defenders of this position, that it cannot be illusory [49] since being conscious of something is precisely what bears that conscious experience into existence [50,51].
When translated into the domain of neuroscience, these positions inform the two most influential contemporary models of consciousness. The global neuronal workspace theory (GNW) [52,53] links consciousness with the widespread and sustained propagation of activity in the cortex, serving the computational function of broadcasting information to be processed by specialized modules [54]. This theory was developed to explain the neural signatures of consciousness seen in cognitive neuroscience experiments—in other words, to explain third-person objective data. On the contrary, the integrated information theory (IIT) [55,56,57] is based on certain first-person qualities of subjective experience, which are accessed by introspection and can be taken as “postulates” or “axioms” for the theory [57]. This theory strives to provide a quantitative characterization of consciousness, as well as to determine the neural correlates of conscious contents from first principles only (even though concrete predictions may be computationally intractable [58]). Both theories have been the target of intense criticism [6,59,60,61,62,63], which can be taken as a sign that the scientific problem of consciousness remains unsolved.
While the GNW and the IIT are frequently pitted against each other, their predictions for human brains may still be mutually compatible [64,65]. For our purpose, what these two theories have in common is that they follow a top-down approach, in the sense that they both focus on abstract computational or information-theoretical principles, without necessarily specifying how these principles arise as a consequence of local dynamics within the underlying neural substrate. We argue that it is via detailed whole-brain modeling that the points of agreement and divergence between theories, and how they relate to the neurophysiology of the human brain, can and should be studied ahead of possible experiments.

3.2. Examples of Signatures of Consciousness

Since the conception of NCC, neuroscientists have turned to every available neuroimaging technology in the search for signatures of consciousness [4,5]. Although many kinds of signatures have been explored (including some related to metabolic consumption [66] or cortical connectivity [67]), for the purposes of this article we will focus on signatures measurable with functional neuroimaging tools, like MEG, electroencephalography (EEG), and fMRI (which can be simulated with the models described in Section 4). In the sequel, we illustrate the nature and application of signatures of consciousness by elaborating on two well-known examples.

3.2.1. The Entropic Brain Hypothesis

One of the simplest, yet remarkably powerful, theoretical frameworks to furnish signatures of consciousness is Carhart-Harris’ entropic brain hypothesis (EBH) [38,68]. According to the EBH, the richness of conscious experience depends on the complexity of the underlying population-level neuronal activity, which determines the repertoire of states available for the brain to explore. Put simply, conscious states that involve richer experiences might require a more diverse set of brain configurations, which should leave a traceable footprint to be observed in the entropy, or in the entropy rate, of the corresponding brain signals (while the entropy estimates the average uncertainty in a signal, the entropy rate estimates how hard it is to predict the next time-point given its history). Following this rationale, the level of consciousness should be proportional (at least within a reasonable range) to the entropy of brain signals.
An effective tool to estimate the entropy rate of a signal is the Lempel-Ziv complexity (LZc) [68,69,70], originally conceived as a lossless compression algorithm. The LZc of brain signals has proven to be an extremely robust signature of consciousness, and has been tested in a breadth of scenarios including anaesthesia [71], coma [72], sleep [73], epilepsy [74], meditation [75], and the psychedelic state [76,77]. More recently, it has also been used to assess fluctuations of consciousness during normal wakefulness due to cognitive tasks [78], stress [79], fatigue [80], and music performance or listening [81].
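To make this concrete, the sketch below shows one common way of estimating LZc from a single neural signal: binarize the signal around its mean and count the number of distinct phrases in an LZ76-style parsing, normalized by the value expected for a random sequence of the same length. This is a minimal illustration in Python; binarization choices (mean, median, Hilbert amplitude) and normalization conventions vary across the studies cited above, and the function names are ours.

```python
import numpy as np

def lz76_phrases(binary_string: str) -> int:
    """Count the distinct phrases produced by an LZ76-style parsing of a binary string."""
    i, n, phrases = 0, len(binary_string), 0
    while i < n:
        length = 1
        # extend the current phrase while it already appears in the preceding text
        while i + length <= n and binary_string[i:i + length] in binary_string[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

def lzc(signal: np.ndarray) -> float:
    """Normalized Lempel-Ziv complexity of a 1-D signal binarized around its mean."""
    binary = "".join("1" if x > signal.mean() else "0" for x in signal)
    n = len(binary)
    # a random binary string of length n has roughly n / log2(n) phrases
    return lz76_phrases(binary) * np.log2(n) / n

# toy comparison: a regular oscillation vs. white noise
t = np.arange(5000)
print(lzc(np.sin(2 * np.pi * t / 100)))                  # low complexity
print(lzc(np.random.default_rng(0).normal(size=5000)))   # close to 1
```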
With its impressive track record and wide applicability, LZc stands as a prominent signature of consciousness to compare across biological and simulated brains. Furthermore, LZc can be used in tandem with transcranial magnetic stimulation to compute the perturbational complexity index [82], a clinically-tested marker of consciousness, which can also be used as a test measure for whole-brain models.

3.2.2. Integrated Information Theory

A strong limitation of standard brain entropy analyses is that they consider only the entropy of individual signals, without acknowledging the multivariate structure of brain dynamics. An attractive way of studying interdependencies between brain signals is with tools drawn from the integrated information theory (IIT) [83]. The IIT proposes an intimate relationship between consciousness and the ability of a physical system to be integrated in such a way that it is “more than the sum of its parts”, i.e., to display dynamical properties in the whole that are not observed in any of its parts.
The IIT builds on key information-theoretic ideas first presented in the seminal early work of Tononi, Sporns, and Edelman [84] and has been the subject of continuous development since [55,56,57,85]. Following Mediano et al. [86], we distinguish between empirical IIT and fundamentalist IIT as two separate branches of the theory. While fundamentalist IIT has been highly controversial and the subject of extensive criticism [58,87,88,89], multiple efforts in empirical IIT have been made to overcome the computational challenges of the theory [90,91,92].
At the core of empirical applications of the IIT is a quantitative measure of integrated information, typically denoted by Φ. There is currently no agreed-upon Φ measure, although multiple proposals have been put forward [86] and can be used to understand and compare the dynamical structure of systems of interest. Detailed procedures describing how to compute different versions of Φ can be found in Ref. [86]. Although the evidence supporting the IIT as a fundamental theory of consciousness has been contested [93], measures inspired by empirical IIT have proven useful in analyzing both empirical [94,95], as well as simulated [92,96], neural data. Altogether, the family of information-theoretic measures inspired by empirical IIT provides a valuable toolkit to study the multivariate dynamics of whole-brain models.
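As an illustration of how such measures can be applied to (simulated or empirical) regional time series, the sketch below computes one of the simpler variants discussed in this literature: a whole-minus-sum integrated information across a fixed bipartition, under a Gaussian approximation of the signals. It is a simplified sketch only: practical Φ estimators differ in the choice of partition (typically the minimum-information partition), normalization, and estimator, and the details should be taken from Ref. [86] rather than from this snippet.

```python
import numpy as np

def gaussian_mi(x: np.ndarray, y: np.ndarray) -> float:
    """Mutual information (nats) between two jointly Gaussian variables, estimated
    from sample covariances: I = 0.5 * log(det(Sx) * det(Sy) / det(Sxy))."""
    det = lambda m: np.linalg.det(np.atleast_2d(m))
    sx, sy = np.cov(x.T), np.cov(y.T)
    sxy = np.cov(np.concatenate([x, y], axis=1).T)
    return 0.5 * np.log(det(sx) * det(sy) / det(sxy))

def phi_whole_minus_sum(data: np.ndarray, parts: list, tau: int = 1) -> float:
    """Whole-minus-sum integrated information across a given bipartition.

    data  : (time, regions) array of stationary signals
    parts : index lists defining the partition, e.g., [[0, 1], [2, 3]]
    tau   : time lag used for the past/future mutual information
    """
    past, future = data[:-tau], data[tau:]
    whole = gaussian_mi(past, future)
    parts_sum = sum(gaussian_mi(past[:, p], future[:, p]) for p in parts)
    return whole - parts_sum

# toy example: two cross-coupled AR(1) nodes vs. two independent ones
rng = np.random.default_rng(1)
T = 20000
coupled, independent = np.zeros((T, 2)), np.zeros((T, 2))
for t in range(1, T):
    noise = rng.normal(size=2)
    coupled[t] = 0.4 * coupled[t - 1] + 0.4 * coupled[t - 1][::-1] + noise
    independent[t] = 0.8 * independent[t - 1] + noise
print(phi_whole_minus_sum(coupled, [[0], [1]]))      # positive: integration beyond the parts
print(phi_whole_minus_sum(independent, [[0], [1]]))  # approximately zero
```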

4. Bottom-Up Whole-Brain Models

While human neuroscience research has been increasingly dominated by imaging experiments, an important complement to this research is provided by computational neuroscience [97]. Indeed, neuroimaging data alone is usually insufficient to reveal the underlying mechanisms at play behind neural phenomena unfolding at different spatial and temporal scales [98]. In addition, since ethical considerations severely limit direct causal manipulation of human brain activity, most of the neuroimaging literature is limited to correlational studies.
The application of computational models to neuroimaging data with the purpose of making causal and mechanistic assertions has been proposed and developed in parallel with different objectives. For instance, deep neural networks can be used to model information-processing in the brain [99] by comparing their representational content via second-order isomorphisms (e.g., representational similarity analysis) [100]. These models can be used to investigate the plausibility of different computational architectures within cognitive neuroscience [101]. Another example is dynamic causal modeling (DCM), which was developed to make model-based causal inferences from neuroimaging experiments [102]. DCM is based on simulating brain signals under the assumption of different causal interactions and then performing model comparison and selection. Finally, whole-brain models are based on dynamical systems coupled by large-scale anatomical connectivity networks and are developed to reproduce the statistics of empirical brain signals at multiple scales [103]. We also distinguish whole-brain models from attempts to produce extremely detailed reproductions of large neural circuits (e.g., cortical columns) [104], mainly due to differences in model complexity.
Whole-brain models provide a practical, ethical, and inexpensive “digital scalpel”, which allows researchers to explore the counterfactual consequences of modifying structural or dynamical aspects of the brain. More generally, whole-brain models build a bridge from local networked dynamics to the large-scale patterns of activity that are addressed by theoretical signatures of consciousness. As such, they represent a valuable tool to narrow the space of mechanistic explanations compatible with the observed neuroimaging data, including data acquired from subjects undergoing different altered states of consciousness.
In this section, we provide a brief introduction to whole-brain models to the unfamiliar reader, discussing their various types and the principles behind their tuning to empirical data. Additionally, we review recent articles where these models have been used to shed light on the neurobiological mechanisms underlying different altered states of consciousness.

4.1. What Are Whole-Brain Models?

Whole-brain models are sets of equations that describe the dynamics and interactions between neural populations in different brain regions. These models typically focus on the joint evolution of a set of key biophysical variables using systems of coupled differential equations (although discrete time step models can also be used, as will be discussed below). These equations can be built from knowledge concerning the biophysical mechanisms underlying different forms of brain activity, or as phenomenological models chosen for the kind of dynamics they produce. The local dynamics are then coupled through in vivo estimates of anatomical connectivity networks. In particular, fMRI, EEG, and MEG signals can be used to define the statistical observables, diffusion tensor imaging (DTI) can provide information about the structural connectivity between brain regions by means of whole-brain tractography, and positron emission tomography (PET) imaging can inform on metabolism and produce receptor density maps for a given neuromodulator.
Most whole-brain models are structured around three basic elements:
  • Brain parcellation: A brain parcellation determines the number of regions and the spatial resolution at which the brain dynamics take place. The parcellation may include cortical, sub-cortical, and cerebellar regions. Examples of well-known parcellations are the Hagmann parcellation [105], and the automated anatomical labeling (AAL) atlas [106].
  • Anatomical connectivity matrix: This matrix defines the network of connections between brain regions. Most studies are based on the human connectome, obtained by estimating the number of white-matter fibers connecting brain areas from DTI data combined with probabilistic tractography [28]. For control purposes, randomized versions of the connectome (null hypothesis networks) may also be employed.
  • Local dynamics: The activity of each brain region is typically determined by the chosen local dynamics plus interaction terms with other regions. A variety of approaches have been proposed to model whole-brain dynamics, including cellular automata [107,108], the Ising spin model [109,110,111], autoregressive models [112], stochastic linear models [113], non-linear oscillators [114,115], neural field theory [116,117], neural mass models [118,119], and dynamic mean-field models [120,121,122]. A detailed review of the different models that can be explored within this context can be found in References [15,29].
The first two items are guided by available experimental data. In contrast, the choice of local dynamics is usually driven by the phenomena under study and the epistemological context in which the modeling effort takes place. Because of this hybrid nature, whole-brain models constructed following this process are sometimes called semi-empirical models; the workflow describing their construction is illustrated in Figure 1. Whole-brain models can be constructed from in-house code or, more easily, using platforms such as The Virtual Brain (https://www.thevirtualbrain.org/tvb/zwei) [30].

4.2. Examples

We showcase two models that have been frequently used to assess mechanistic hypotheses behind both pharmacologically- and physiologically-induced altered states of consciousness: the dynamic mean field model [120,121,123] and the model composed of coupled Stuart-Landau non-linear oscillators [114,115,124]. These examples are chosen to represent a biologically realistic model (dynamic mean field) and a phenomenological model (Stuart-Landau oscillators); moreover, these models have been applied to different states of consciousness, making them pertinent in the context of the present discussion.

4.2.1. Dynamic Mean-Field (DMF) Model

In this approach, the neuronal activity in a given brain region is represented by a set of differential equations describing the interaction between inhibitory and excitatory pools of neurons [125]. The DMF model includes three variables for each population: the synaptic current, the firing rate, and the synaptic gating, where excitatory coupling is mediated by NMDA receptors and inhibitory coupling by gamma-aminobutyric acid (GABA)-A receptors. The interregional coupling is considered excitatory-to-excitatory only, and a feedback inhibition control term is included in the excitatory current equation [120]. The output variable of the model is the firing rate of the excitatory population, which is then fed into a nonlinear hemodynamic model [126] to simulate the regional BOLD signals.
The key idea behind the mean-field approximation is to reduce a high-dimensional system of randomly interacting elements to a set of elements treated as independent, with an average external field effectively replacing the interaction with all other elements. Thus, this approach represents the average activity of a homogeneous population of neurons by the activity of a single unit of this class, reducing the dimensionality of the system. In spite of these approximations, the dynamic mean field model incorporates a detailed biophysical description of the local dynamics, which increases the interpretability of the model parameters.
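The sketch below illustrates the structure of such a model: per-region excitatory and inhibitory gating variables, sigmoidal transfer functions converting currents into firing rates, and long-range excitatory coupling scaled by a global parameter. The parameter values are indicative of those commonly reported in the literature and should be checked against Ref. [120]; in particular, the feedback inhibition weight is kept fixed here, whereas Ref. [120] tunes it per region (feedback inhibition control), and the excitatory firing rate would subsequently be passed through a hemodynamic model to obtain simulated BOLD signals.

```python
import numpy as np

def dmf_step(S_E, S_I, C, G, dt=1e-3, rng=None):
    """One Euler-Maruyama step of a dynamic mean-field (reduced Wong-Wang) model.

    S_E, S_I : excitatory / inhibitory synaptic gating per region
    C        : structural connectivity matrix (regions x regions)
    G        : global coupling parameter
    Parameter values below are indicative only (cf. Ref. [120]).
    """
    rng = rng if rng is not None else np.random.default_rng()
    I0, w_plus, J_NMDA, J_i = 0.382, 1.4, 0.15, 1.0        # baseline input, recurrence, couplings
    a_E, b_E, d_E = 310.0, 125.0, 0.16                      # excitatory transfer function
    a_I, b_I, d_I = 615.0, 177.0, 0.087                     # inhibitory transfer function
    tau_E, tau_I, gamma, sigma = 0.1, 0.01, 0.641, 0.01     # time constants (s), kinetics, noise

    # input currents: recurrent excitation, long-range coupling, local feedback inhibition
    I_E = I0 + w_plus * J_NMDA * S_E + G * J_NMDA * (C @ S_E) - J_i * S_I
    I_I = 0.7 * I0 + J_NMDA * S_E - S_I

    # sigmoidal transfer functions giving population firing rates
    r_E = (a_E * I_E - b_E) / (1.0 - np.exp(-d_E * (a_E * I_E - b_E)))
    r_I = (a_I * I_I - b_I) / (1.0 - np.exp(-d_I * (a_I * I_I - b_I)))

    # synaptic gating dynamics with additive noise
    noise = sigma * np.sqrt(dt) * rng.normal(size=(2, len(S_E)))
    S_E = S_E + dt * (-S_E / tau_E + gamma * (1.0 - S_E) * r_E) + noise[0]
    S_I = S_I + dt * (-S_I / tau_I + r_I) + noise[1]
    return np.clip(S_E, 0, 1), np.clip(S_I, 0, 1), r_E

# usage: simulate a toy 4-region network for 10 s of model time
rng = np.random.default_rng(0)
C = rng.uniform(0, 1, size=(4, 4)); np.fill_diagonal(C, 0)
S_E, S_I, rates = np.full(4, 0.1), np.full(4, 0.1), []
for _ in range(10000):
    S_E, S_I, r_E = dmf_step(S_E, S_I, C, G=1.5, rng=rng)
    rates.append(r_E)
rates = np.array(rates)   # excitatory firing rates, the model's output variable
```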

4.2.2. Stuart-Landau Non-Linear Oscillator Model

This approach builds on the idea that neural activity can exhibit—under suitable conditions—self-sustained oscillations at the population level [107,114,115,124,127]. In this model, the dynamical behavior is represented by a non-linear oscillator with the addition of Gaussian noise in the proximity of a Hopf bifurcation [128]. By changing a single model parameter (i.e., the bifurcation parameter) across a critical value, the model gives rise to three qualitatively different asymptotic behaviors: harmonic oscillations, fixed point dynamics governed by noise, and intermittent complex oscillations when the bifurcation parameter is close to the bifurcation (i.e., at dynamical criticality). Correspondingly, the model is determined by two parameters: the bifurcation parameter of the Hopf bifurcation in the local dynamics, and the coupling strength factor that scales the anatomical connectivity matrix. In contrast to the DMF model, coupled Stuart-Landau non-linear oscillators constitute a phenomenological model, i.e., the model parameters do not map onto biophysically relevant variables. In this case, the model is attractive due to its conceptual simplicity and its capacity to produce three qualitatively different behaviors of interest by changing a single parameter.
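The sketch below shows what such a model looks like in practice: each region is a noisy Stuart-Landau oscillator (the normal form of a Hopf bifurcation) coupled diffusively through the structural connectivity matrix, and the real part of each oscillator is taken as the simulated regional signal. Parameter values, the integration scheme, and variable names are illustrative rather than taken from any particular study.

```python
import numpy as np

def simulate_hopf(C, a, omega, G=0.5, sigma=0.02, dt=0.1, n_steps=20000, seed=0):
    """Coupled Stuart-Landau (Hopf normal form) whole-brain model.

    C     : structural connectivity matrix (regions x regions)
    a     : bifurcation parameter per region (a < 0: noisy fixed point; a > 0: oscillations)
    omega : angular frequency per region (e.g., 2*pi*f with f an empirical peak frequency)
    G     : global coupling strength
    Returns the real part x of each oscillator, taken as the simulated regional signal.
    """
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    x, y = 0.1 * rng.normal(size=n), 0.1 * rng.normal(size=n)
    out = np.empty((n_steps, n))
    for t in range(n_steps):
        r2 = x**2 + y**2
        # diffusive coupling through the connectome: G * sum_j C_ij * (x_j - x_i)
        cx = G * (C @ x - C.sum(axis=1) * x)
        cy = G * (C @ y - C.sum(axis=1) * y)
        dx = (a - r2) * x - omega * y + cx
        dy = (a - r2) * y + omega * x + cy
        noise = sigma * np.sqrt(dt) * rng.normal(size=(2, n))
        x = x + dt * dx + noise[0]
        y = y + dt * dy + noise[1]
        out[t] = x
    return out

# usage: 10 regions slightly below the bifurcation (a < 0), oscillating at ~0.05 Hz
rng = np.random.default_rng(1)
C = rng.uniform(0, 1, (10, 10)); np.fill_diagonal(C, 0)
sig = simulate_hopf(C, a=np.full(10, -0.02), omega=2 * np.pi * 0.05 * np.ones(10))
fc = np.corrcoef(sig.T)   # simulated "static" functional connectivity
```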

4.3. How to Fit Whole-Brain Models to Neuroimaging Data?

Whole-brain models are tuned to reproduce specific features of brain activity. This is achieved by optimizing the free parameters of the local dynamics plus the coupling strength. Parameter values are usually selected so that the model matches a certain statistical observable computed from the experimental data.
For example, the DMF whole-brain model introduces one parameter to scale the strength of the connectivity matrix, usually known as the global coupling parameter. During model training, an exhaustive exploration of this parameter is conducted over a wide range of values. The parameter value is chosen to maximize the similarity between the observable computed from simulated and experimental data. For instance, the parameter can be chosen to minimize the Kolmogorov-Smirnov distance between the functional connectivity dynamics (FCD) distributions of the simulated and real data [120].
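A minimal version of this brute-force exploration is sketched below, using the correlation between simulated and empirical static functional connectivity as the target function; the KS distance between FCD distributions, as in Ref. [120], would be substituted in exactly the same place. The `simulate` argument stands for any whole-brain simulator (for example, the Hopf sketch above), and the function and variable names are ours.

```python
import numpy as np

def fc_upper(ts):
    """Upper triangle of the static functional connectivity matrix of a (time, regions) array."""
    fc = np.corrcoef(ts.T)
    return fc[np.triu_indices_from(fc, k=1)]

def fit_global_coupling(C, empirical_ts, candidate_G, simulate):
    """Brute-force exploration of the global coupling parameter G.

    simulate(C, G) must return simulated (time, regions) signals. The score is the
    Pearson correlation between simulated and empirical static FC; a KS distance
    between FCD distributions can be used as the target function instead.
    """
    target = fc_upper(empirical_ts)
    scores = []
    for G in candidate_G:
        scores.append(np.corrcoef(fc_upper(simulate(C, G)), target)[0, 1])
    scores = np.array(scores)
    return candidate_G[int(np.argmax(scores))], scores

# usage (relying on the simulate_hopf sketch above):
# G_grid = np.linspace(0, 3, 31)
# hopf = lambda C, G: simulate_hopf(C, a=np.full(len(C), -0.02),
#                                   omega=2 * np.pi * 0.05 * np.ones(len(C)), G=G)
# G_opt, scores = fit_global_coupling(C, empirical_ts, G_grid, hopf)
```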
This kind of brute-force optimization is employed when the number of free parameters is low (i.e., two or three). However, it is also possible to separately optimize the parameters governing the local dynamics of each node, which dramatically increases the dimensionality of the search space, and thus requires more elaborate optimization techniques, such as gradient descent [129] or genetic algorithms [124]. The advantage of considering a small set of global parameters lies in its simplicity and scalability, but it misses the dynamical heterogeneity among brain regions. These heterogeneities can be modeled at the expense of increasing the parameter space. Essentially, the choice of model complexity (i.e., the number of free parameters) depends on the scientific question and its associated hypotheses.
Since adding more free parameters increases the computational cost of the optimization procedure, it becomes critical to choose parameters reflecting variables that are considered relevant, either from a general neurobiological perspective or in the specific context of the altered state under investigation. Depending on the latter, the parameters could be divided into groups that are allowed to change independently based on different criteria, including structural lesion maps, receptor densities, local gene expression profiles, and parcellations that reflect the neural substrate of certain cognitive functions, among others.
After choosing the parcellation, the equations governing the local dynamics and their interaction terms, the interregional coupling given by the structural connectivity matrix, and a criterion to constrain the dimensionality of the parameter space, the last critical step is to define the observable which will be used to construct the target function for the optimization procedure. As mentioned above, one possibility is to optimize the model to reproduce the statistics of functional connectivity dynamics (FCD). Perhaps a more straightforward option is to optimize the "static" functional connectivity matrix computed over the duration of the complete experiment, an approach followed by Refs. [115,124], among others. Other observables related to the collective dynamics can be obtained from the synchrony and metastability, as defined in the context of the Kuramoto model [115,130]. In general, any meaningful computation summarizing the spatiotemporal structure of a neuroimaging dataset constitutes a valid observable, with the adequate choice depending on the scientific question and the altered state of consciousness under study.
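As an example of the latter pair of observables, the snippet below computes synchrony and metastability as the mean and standard deviation over time of the Kuramoto order parameter, with instantaneous phases extracted via the Hilbert transform; in practice the signals are usually band-pass filtered beforehand, a step omitted here for brevity.

```python
import numpy as np
from scipy.signal import hilbert

def synchrony_metastability(ts):
    """Kuramoto order parameter statistics of a (time, regions) array of signals.

    Returns (synchrony, metastability): the temporal mean and standard deviation of
    R(t) = | mean_j exp(i * phase_j(t)) |.
    """
    phases = np.angle(hilbert(ts, axis=0))           # instantaneous phase of each region
    R = np.abs(np.exp(1j * phases).mean(axis=1))     # order parameter over time
    return R.mean(), R.std()

# usage on simulated or empirical regional signals (here, white noise as a placeholder)
ts = np.random.default_rng(0).normal(size=(1000, 10))
print(synchrony_metastability(ts))
```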
Since different observables can be defined, reflecting both stationary and dynamic aspects of brain activity, a natural question arises: is a given whole-brain model capable of simultaneously reproducing multiple observables within reasonable accuracy? We consider this question to be very relevant, yet at the same time it has been comparatively understudied. For instance, a review of articles using coupled Stuart-Landau oscillators shows that dynamical observables are reproduced when the oscillators operate at dynamical criticality (i.e., near the Hopf bifurcation), yet stationary observables (such as the “static” functional connectivity) are best reproduced for other parameter combinations [115,124,129]. This suggests that exploring bifurcations with higher co-dimensions or even chaotic dynamics unfolding in the proximity of strange attractors could enable the simultaneous optimization of several observables, a possibility that is discussed later in this article.
Finally, some natural candidates for observables to be fitted by whole-brain models are precisely the high-level signatures of consciousness put forward by theoretical predictions, such as the different measures of information integration, complexity, and entropy that were reviewed in the previous section. The objective is to fit whole-brain models using these signatures as target functions and then assess the biological plausibility of the optimal model parameters, which allows for the testing of the consistency of these signatures from a bottom-up perspective. Alternatively, signatures of consciousness can be computed from the model—initially fitted to other observables—and compared to the empirical results. Again, this highlights the need to understand which kind of local dynamics allow the simultaneous reproduction of multiple observables derived from experimental data.

4.4. Whole-Brain Models Applied to the Study of Consciousness

The available evidence suggests that states of consciousness are not determined by activity in individual brain areas, but emerge as a global property of the brain, which in turn is shaped by its large-scale structural and functional organization [53,131,132]. According to this view, whole-brain models provide a fertile ground to explore how global signatures of different states of consciousness emerge from local dynamics. This promise is already being met, as shown by several recent articles [38,107,114,115,123,124,127,133].
For example, transitions from wakefulness into other states, such as the different stages of human sleep or the state induced by general anaesthetics, have been interpreted as phase transitions in neural mass models and in terms of the collective dynamics of coupled Stuart-Landau oscillators [114,115,124]. Noise-driven systems at dynamical criticality result in dynamics compatible with neuroimaging recordings obtained during conscious wakefulness, and departures from these dynamics better reflect different states of unconsciousness [38,107,127,133,134,135]. As will be discussed in the following section, the stochastic switching between different attractors results in the kind of metastable behavior that is characteristic of conscious wakefulness [136]. These results are consistent with the hypothesis of statistical criticality (e.g., proximity to a second-order phase transition) as a fundamental principle of brain organization [137]. Even though parallels can be drawn between statistical and dynamical criticality, we limit our discussion to the former since the relationship between both concepts is complicated and beyond the scope of this article.
Following the example of the perturbational complexity index (PCI), which is obtained by perturbing the cortex with transcranial magnetic stimulation (TMS) and measuring the complexity of the elicited response [82], whole-brain models can be systematically “perturbed” by incorporating changes into the dynamical equations. The in silico rehearsal of perturbations is useful for testing hypotheses concerning which parts of the model are essential to produce different signatures of consciousness. A prominent example of this perturbational analysis applied to whole-brain models can be found in a recent article [123], where a whole-brain model based on coupled Stuart-Landau oscillators was fitted to empirical fMRI data acquired from subjects during deep sleep. The model was then modified by changing local bifurcation parameters with a greedy optimization algorithm, which unveiled the optimal perturbation profile to increase the similarity to a target brain state (in this case, conscious wakefulness).
Another relevant example of this perturbational approach is found in Ref. [121], where a transition was shaped by the effects of neuromodulation. The authors investigated the transition from resting state activity acquired under a placebo condition towards the altered state of consciousness induced by the serotonin 2A receptor agonist lysergic acid diethylamide (LSD). A dynamic mean-field model was fitted to minimize the difference between the FCD of the simulated activity and the empirical data of subjects in the placebo condition, which allowed the authors to determine the optimal value of the global coupling parameter. Then, an empirical map of 5-HT2A receptor density was used to modulate the synaptic gain, effectively simulating the heterogeneous effects of LSD across the whole brain. As a control, the authors showed that using maps for the density of other serotonin receptor sub-types decreased the goodness of fit, thus corroborating the well-known association between LSD and the 5-HT2A receptor.
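To illustrate the logic of this kind of in silico perturbation, the sketch below implements a deliberately simplified greedy search over local bifurcation parameters that accepts any single-node perturbation improving the similarity between simulated and target functional connectivity. It is a schematic rendering of the strategy, not the procedure of Ref. [123] (which, among other differences, targets FCD-based similarity); the `simulate` argument again stands for any whole-brain simulator, and all names are ours.

```python
import numpy as np

def greedy_perturbation(C, a_init, target_fc, simulate, deltas=(-0.05, 0.05), n_rounds=3):
    """Greedy search for local bifurcation-parameter changes that move a fitted model
    towards a target brain state.

    C         : structural connectivity matrix
    a_init    : initial per-region bifurcation parameters (fitted to the source state)
    target_fc : upper-triangular static FC of the target condition (e.g., wakefulness)
    simulate  : callable simulate(C, a) returning (time, regions) signals
    """
    iu = np.triu_indices(C.shape[0], k=1)

    def fit(a):
        fc = np.corrcoef(simulate(C, a).T)[iu]
        return np.corrcoef(fc, target_fc)[0, 1]

    a = np.asarray(a_init, dtype=float).copy()
    best = fit(a)
    for _ in range(n_rounds):
        for node in range(len(a)):
            for delta in deltas:
                trial = a.copy()
                trial[node] += delta
                score = fit(trial)
                if score > best:          # accept the perturbation if it improves the fit
                    a, best = trial, score
    return a, best
```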
Another interesting possibility is to assess the consequences of stimulation protocols that are impossible to apply in vivo. An example is the Perturbative Integration Latency Index (PILI) [127], which measures the latency of the return to baseline after a strong perturbation that generates dynamical changes detectable over long temporal scales (on the order of tens of seconds). This in silico perturbative approach allows researchers to systematically investigate how the response of brain activity to external perturbations is indicative of the state of consciousness, providing new mechanistic insights into the capacity of the human brain to integrate and segregate information over different time scales.
In Ref. [124], the authors used a model of coupled Stuart-Landau oscillators to model the regional changes in dynamical stability that occur during the wake-sleep cycle. Brain regions belonging to different resting state networks (RSN) [138] were considered as independent sources of variation for the local model parameters. Using a stochastic optimization algorithm, the authors represented the transition from wakefulness into deep sleep as a sequence of changes in the stability of brain activity within canonical RSN. A follow-up paper extended this analysis to other states of reduced consciousness (including anaesthesia and patients suffering from disorders of consciousness) and investigated the possibility of inducing transitions to conscious wakefulness by means of simulated periodic stimulation at the resonant frequency of each node in the model [139].

5. Proposed Research Agenda

5.1. Motivation

Consciousness research is in need of mechanistic accounts to explain why brain signals recorded during different states of consciousness can be consistently characterized by the presence of certain global signatures. Our goal is not to replace the explanations of these signatures provided by theories such as the GNW or the IIT. Instead, we aim to put forward a framework for their investigation from a bottom-up perspective. Eventually, we expect to converge on the high-level explanations furnished by some of these theories. Our inspiration is partially drawn from statistical thermodynamics, which provides a clear example of how the bottom-up and top-down perspectives can converge into a consistent picture of physical reality. Importantly, in this case, the resulting theory remained useful both as a set of phenomenological principles and computational rules (i.e., classical thermodynamics) and as a framework to establish connections between those principles and the rules governing the microscopic properties of matter.
Following this concept, we strive to use our current knowledge about neural dynamics to produce models in which behavior agrees with the constraints of some theories formulated from a top-down perspective, while weakening the support for others as a result of inconsistent predictions. Here, it becomes important to clarify our intended meaning of the word “prediction”. When it comes to complex systems, such as the brain, predictions are considered possible only in a statistical sense [137]. Accordingly, we do not expect that the time series generated by computational models directly correspond to their empirical counterparts; however, we can expect a match for statistical observables.
This motivates our study of altered states of consciousness, since their extended temporal duration guarantees the possibility of extracting robust statistical characterizations from multivariate neuroimaging recordings. An example of this characterization is the matrix derived from computing all pairwise correlations between regional time series, which is considered a marker of inter-areal functional connectivity (sometimes referred to as the “functional connectome”) [140]. We consider that whole-brain computational models have been developed to a point where they contain sufficient empirical ingredients to predict the second-order statistics of brain activity. Thus, the field is ripe to welcome a framework which may provide solid ground to investigate signatures of consciousness from a mechanistic perspective.
The following example is intended to motivate the proposal we put forward in the next section. We know that activity within a network of brain regions including the fronto-parietal cortex is correlated with conscious experience [8,67,141,142,143]. On the other hand, conscious experience is also characterized by signatures, such as information integration, entropy, and neural complexity. Is it possible to determine the causal role that these anatomical regions play in the generation of these signatures of consciousness by means of computational models?

5.2. Proposal

The principal idea behind our proposal is that whole-brain models can be used to test hypotheses concerning the mechanistic and causal underpinnings of different states of consciousness. We do not expect that whole-brain models are sufficiently advanced to identify those precise mechanisms; however, we propose that they can contribute to narrow the space of possible mechanistic explanations, therefore complementing current theories of consciousness from a bottom-up perspective.
The fundamental objective of this research program is to foster the development of this novel approach to study altered states of consciousness. Our framework rests upon the complementary nature of three key ingredients: experimental data obtained through neuroimaging experiments, theoretical approaches to characterize signatures of consciousness, and bottom-up whole-brain computational models. The application of modern neuroimaging techniques to the study of signatures of consciousness has provided very effective tools to predict the brain activity patterns that are associated with different states of consciousness. However, as René Thom famously stated, “to predict is not to explain” [144]. Hence, we now turn to the discussion of how models could bridge the gap between prediction and explanation.
The proposed framework to model altered states of consciousness is based on adjusting three independent variables (see Figure 2):
  • Connectome: Is the state of consciousness associated with local or diffuse structural abnormalities? This is frequently the case for neurological conditions, such as coma and post-comatose disorders of consciousness (e.g., unresponsive wakefulness syndrome, minimally conscious state) [145]. In addition, subtler structural modifications can be implicated in certain psychiatric conditions presenting episodes of altered consciousness, such as different forms of schizophrenia [146]. While several papers have investigated localized (e.g., stroke, tumors, focal epilepsy) structural damage from this perspective [147,148,149,150,151,152,153,154], the literature on whole-brain models applied to patients suffering from neurological impairments and from disorders of consciousness is very limited. The project of modeling pathological brain states necessarily requires incorporating individualized structural connectomes and lesion maps, thus moving towards simulation at the single patient level [155,156].
  • Modulation: Is the state of consciousness a consequence of neuromodulatory changes, either endogenous or induced externally by means of pharmacological manipulation? Two typical examples are the altered states of consciousness induced by psychedelics/dissociatives, which are linked to agonism/antagonism at serotonin/glutamate receptors [157]. Certain psychiatric conditions are believed to arise as a consequence of neuromodulatory imbalances, e.g., dopaminergic imbalances are believed to play an important role in the pathophysiology of schizophrenia [158]. Most anaesthetic drugs reduce the complexity of the brain activity by targeting specific neuromodulatory sites, such as those activated by gamma-aminobutyric acid (GABA) [159]. Finally, sleep is a state of reduced consciousness triggered by activity in monoaminergic neurons with diffuse projections throughout the brain [160].
  • Dynamics: Is the altered state of consciousness captured by well-understood dynamical mechanisms? Does the model include parametrically controlled external perturbations? While changes in the local excitation/inhibition balance are ultimately caused by neurochemical processes, they are best understood in terms of their dynamical consequences. States such as epilepsy, deep sleep and general anaesthesia are believed to involve unbalanced excitation/inhibition [161]. In some cases, dynamics may be sufficiently idiosyncratic to be captured by low dimensional phenomenological models, as in the case of certain forms of epileptic activity [162]. Finally, local dynamics could be modified to simulate the effects of external neurostimulation [123,139].
Depending on the answers to these questions, the whole-brain model should incorporate changes to anatomical connectivity, local dynamics, or include empirical receptor density maps to add a new layer of neurobiological detail.

5.3. What Can We Learn?

The dynamics of whole-brain models can be perturbed arbitrarily. This is significant since it allows for the exploration of different mechanisms leading to the observed empirical dynamics (as described in a previous paragraph), as well as of how external stimulation can force transitions between states of consciousness, including the clinically relevant case of displacing whole-brain models from unconscious states towards wakefulness [123,139]. Therapeutic alternatives to accelerate the recovery of patients with disorders of consciousness (DOC) are scarce, and while some studies support the therapeutic role of external electrical stimulation [163], very little is known about the optimal choice of stimulation sites and parameters. Whole-brain models could be useful for the optimization of stimulation protocols, as well as for assisting in clinical decision making. Localized stimulation and/or resection of neural tissue are surgical alternatives to treat certain severe forms of epilepsy, and whole-brain models have been explored with success to predict the outcome of these interventions [164]. The same concept could apply to the development and in silico testing of new pharmaceuticals to treat psychiatric conditions, where whole-brain models could be used to reverse-engineer the optimal receptor affinity profiles required to restore statistical signatures of healthy brain dynamics. Finally, the combination of data produced by whole-brain models and machine learning classifiers could be useful for data augmentation in the context of automated diagnosis of rare neurological diseases [165] and to generate input for deep learning architectures (e.g., variational autoencoders) capable of representing altered states of consciousness as trajectories within a low dimensionality latent space [166].

5.4. Case Study: Modeling Neural Entropy Increases Induced by Psychedelics

To further highlight what we can learn from whole-brain models, we discuss an illustrative example of a bottom-up model that successfully matches a global signature of altered consciousness [167]. Using the DMF model optimized to fit the FCD of placebo and LSD conditions [121], a significant entropy increase of brain signals was found in LSD versus placebo as a consequence of simulated 5-HT2A receptor activation. Thus, the model was capable of identifying a low-level (i.e., molecular scale) mechanism leading to increased neural entropy, which is a robust signature of the psychedelic state [38,68].
Since activation of the 5-HT2A receptor is causally implicated in the conscious state induced by serotonergic psychedelics [157,168,169], the effect of the drug was modeled as a local change in the non-linearity of the regional firing rate. This change was proportional to the local density of 5-HT2A receptors as determined by PET imaging. Brain entropy increases during the psychedelic state were the result of heterogeneous changes in the entropy of the regional firing rates (i.e., some regions increased while others decreased their entropy). These changes in firing rate entropy depended both on the local anatomical connectivity and the 5-HT2A receptor density.
Thus, starting from local dynamics describing the behavior of coupled excitatory and inhibitory pools of neurons, and introducing a perturbation which reflects serotonergic activation, the model provided a bottom-up confirmation of 5-HT2A activation as the source of increased neural entropy during the psychedelic state. In the context of Figure 2, the model adopted changes in local dynamics (bottom left) informed by empirical maps of 5-HT2A receptor density (bottom right).
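The two computational ingredients of this case study, a receptor-density-scaled gain on the excitatory transfer function and an entropy estimate applied to the resulting regional signals, are sketched below in isolation. The exact functional form of the gain modulation and the entropy estimator used in Refs. [121,167] may differ from this simplified rendering; here the gain is simply scaled in proportion to a stand-in receptor density map, whereas in the actual study the modulated transfer function enters the full coupled DMF dynamics (as in the sketch of Section 4.2.1) before entropies are compared between conditions.

```python
import numpy as np

def transfer(I, gain=1.0, a=310.0, b=125.0, d=0.16):
    """Excitatory transfer function with a multiplicative gain on its nonlinearity."""
    x = gain * (a * I - b)
    return x / (1.0 - np.exp(-d * x))

def signal_entropy(x, bins=32):
    """Shannon entropy (bits) of the amplitude distribution of a 1-D signal."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

# Illustrative ingredients only: in the actual model the density-scaled gain modulates
# the excitatory transfer function inside the coupled DMF dynamics, and regional
# entropies are then compared between the simulated placebo and LSD conditions.
rng = np.random.default_rng(0)
density = rng.uniform(0, 1, size=90)                      # stand-in for a PET-derived 5-HT2A map
currents = 0.38 + 0.02 * rng.normal(size=(5000, 90))      # stand-in for regional input currents
rates_placebo = transfer(currents)                        # baseline gain
rates_lsd = transfer(currents, gain=1.0 + 0.2 * density)  # gain scaled by receptor density
entropy_placebo = [signal_entropy(rates_placebo[:, i]) for i in range(90)]
entropy_lsd = [signal_entropy(rates_lsd[:, i]) for i in range(90)]
```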

6. Future Directions

6.1. What Should Be the “Bottom” of Bottom-Up Models?

The question of the ultimate substrate of consciousness is part of a long-standing philosophical debate, with positions including functionalism (the substrate is irrelevant insofar as it instantiates the adequate set of causal relationships) [43], biological naturalism (the view that consciousness arises as a consequence of biochemical processes in the brain) [170], and proposals of consciousness as a manifestation of quantum mechanics [171]. Even though we choose to sidestep this complicated discussion, our modest aim of building bottom-up models of brain activity still requires the specification of some physical or biological substrate, which in turn determines the level of realism displayed by the equations that govern local dynamics.
Many signatures of consciousness are directly related to the global complexity of brain dynamics, reflecting the widespread hypothesis that consciousness plays an integrative role in the brain [132]. According to this hypothesis, consciousness could be considered a dynamical process “gluing” together the output of specialized neural circuits. While tampering with these circuits could modify some specific contents of consciousness, only the disruption of large-scale neural communication would result in a state of altered or reduced consciousness. Since this view disregards the contribution of specific computations implemented in local neural circuitry, we could expect that bottom-up models capable of reproducing an adequate set of canonical dynamics would suffice to span the spectrum of signatures of altered consciousness (here, “canonical dynamics” refers to dynamics in the proximity of a class of topologically equivalent attractors; the reader should think of the result of simplifying the equations into the normal forms corresponding to the bifurcations present in the system [172]). Conversely, it could be that the large-scale dynamics that support inter-areal communication also interact with and shape local information processing, and vice versa. In this case, we expect that increasingly complex and biologically realistic models will be needed to advance our proposal.
This crucial point introduces a fork within our proposal to investigate altered states of consciousness using whole-brain models. On one hand, models could be enriched by increasingly detailed and sophisticated sources of empirical information with the purpose of linking signatures of consciousness to the biophysical details of neural activity. This direction is already suggested by studies modeling the effects of 5-HT2A activation using receptor density maps produced by PET imaging [121,167]. Following this direction, future models could be expanded to include fine-grained details of local wiring patterns, different cell types and their projections, as well as their interaction with diffuse neuromodulatory systems. However, as complexity increases, the conceptual interpretation of models becomes less clear. On the other hand, it is known that dynamical systems may exhibit canonical behaviors when their solutions undergo changes in their qualitative behavior (i.e., bifurcations) [172]. Recent work fitting whole-brain models to the results of fMRI experiments suggests that bifurcations play a key role in the reproduction of the second-order statistics of empirical data [107,114,115,124,127]. This occurs because noisy dynamics close to a bifurcation point switch between different attractors, producing the rich and complex dynamics typical of brain signals. This observation raises the question of whether more complex models reproduce the statistics of empirical observables by virtue of their universal behavior near bifurcation points, or as a consequence of their stationary solutions away from dynamical criticality.
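To make the notion of canonical dynamics concrete, consider the supercritical Hopf bifurcation, which is central to the models discussed below. Written in a complex variable z, its normal form is

\[ \dot{z} = (\mu + i\omega)\,z - |z|^{2}z, \]

where the fixed point z = 0 is stable for \( \mu < 0 \), while for \( \mu > 0 \) it loses stability and a stable limit cycle of radius \( \sqrt{\mu} \) appears. Close to the bifurcation, any system undergoing a supercritical Hopf bifurcation can be smoothly mapped onto this equation, which is why models differing widely in their biological detail can nevertheless share the same canonical behavior near the bifurcation point.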

6.2. Transitions between Canonical Dynamics as Primitives to Construct Whole-Brain Models

Contrary to the dictum by Norbert Wiener (“The best material model of a cat is another, or preferably the same, cat”), we propose that even if vast sources of biological information can be incorporated into whole-brain models, striving for such a level of detail defeats the purpose of unveiling concrete and interpretable mechanisms underlying signatures of consciousness. Thus, we suggest that models could be classified by the kind of large-scale activity patterns they are capable of generating. In other words, we propose that the “bottom” of bottom-up models should not be related to the scale of the biological substrate, but to the minimal set of simple dynamical behaviors necessary to reproduce a certain signature of consciousness. Paralleling the definition of NCC given by Crick and Koch [4,5], we could introduce the “dynamical correlates of consciousness” (DCC), but we opt not to introduce yet another acronym in an already crowded field.
We note that this approach could be especially useful to model brain activity during altered states of consciousness, which are frequently characterized by changes in dynamics measurable at a macroscopic scale. Changes in large-scale neural dynamics signal the onset of different physiological (e.g., sleep) [173], pharmacologically-induced [174], and pathological brain states [175]. In general, during states of reduced consciousness dynamics tend to become slower and less complex, while the opposite is reported for the psychedelic state [76,176]. In the following, we propose that relatively simple local dynamics (i.e., noise-driven multistability) can yield simulated brain activity and connectivity matching these empirical observations. Phenomenological models can be used to determine the minimal set of modifications giving rise to the measured dynamics. This conceptual simplicity has the merit of facilitating model interpretability and can pave the way towards the development of more realistic biophysical simulations. In addition, since the collective behavior of brain activity presents emergent properties that are not easy to infer from local rules [137], phenomenological models are potentially useful and informative by themselves.
Interestingly, Batterman has suggested that multiple realizability, the “metaphysical mystery” that troubled Jerry Fodor, among other great philosophers of the mind, is as mysterious as the observation that physical matter behaves in ways which are entirely independent of the vast majority of its details [177]. For a typical example, consider a pendulum, whose behavior is described by the same differential equation regardless of the color of the swinging bob. Furthermore, in the small amplitude regime all systems with a U-shaped energy landscape can be approximated by a harmonic solution, with examples ranging from electrical circuits to orbital mechanics. Northoff and colleagues have argued that spatiotemporal dynamics constitute the fundamental substrate underlying human consciousness [178], which resonates with Batterman’s proposal, as well as with our suggestion that the “bottom” (i.e., the maximum necessary level of detail) is best understood as a comprehensive list of the dynamical behaviors that the system can display. We postpone taking a stance on these metaphysical speculations, and proceed to develop these ideas in the context of building useful bottom-up models in the future.
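To spell out the pendulum example: the equation of motion is \( \ddot{\theta} + (g/L)\sin\theta = 0 \), where g is the gravitational acceleration and L the length of the pendulum. For small amplitudes \( \sin\theta \approx \theta \), so the equation reduces to the harmonic oscillator \( \ddot{\theta} + (g/L)\theta = 0 \), with solution \( \theta(t) = \theta_{0}\cos\!\left(\sqrt{g/L}\,t\right) \) for zero initial angular velocity. Every property of the bob other than g and L drops out of the description, which is precisely the sense in which macroscopic behavior can be independent of the vast majority of microscopic details.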
A set of qualitatively different dynamics is provided in Figure 3, illustrating a Bogdanov-Takens bifurcation diagram [179]. A bifurcation is a combination of parameter values at which a qualitative change in the dynamics takes place. Whole-brain models can be constructed by coupling the local dynamics given by the equation in the inset (left panel) either through the variables x, y, or both. The equation and its solutions depend on two parameters, α and β. Under the weak coupling assumption, modifying these two parameters will result in qualitative changes in the local dynamics (where these changes occur in the diagram could be modified by the coupling strength). For uncoupled dynamics, parameter combinations at points a, c, e result in a stable constant level of activity (i.e., fixed point dynamics). Parameter combinations at points b, d, f give rise to oscillations of different spectral content (i.e., limit cycles).
In the right panel of Figure 3, the solutions can be visualized either as time series or as two-dimensional diagrams known as phase portraits, where each axis corresponds to a variable (in this case, x and y) and the arrows stand for the vector field (in this case, ẋ and ẏ). At each point of the phase space, the vector field indicates the derivative of the solution passing through that point. Insofar as the bifurcations in the left panel of Figure 3 are not crossed, changes in the parameters α and β only result in deformations of the phase portrait, representing solutions that are equivalent in a qualitative sense (more formally, the phase portraits are topologically equivalent). Crossing a bifurcation results in an abrupt change that cannot be understood as a small deformation of the phase portrait, implying a qualitatively different behavior of the system.
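As a minimal sketch of the dynamics summarized in Figure 3, the code below integrates one common writing of the Bogdanov-Takens normal form at two parameter combinations lying on either side of the Hopf curve and classifies the resulting behavior numerically. The sign conventions, parameter values, and initial conditions are illustrative assumptions and need not coincide with those used to draw Figure 3 or with the points a-f.

```python
import numpy as np

def bt_normal_form(alpha, beta, sigma=-1.0, x0=(0.0, 0.0), T=20000, dt=0.01):
    """Euler integration of one common Bogdanov-Takens normal form:
        x' = y
        y' = alpha + beta*x + x**2 + sigma*x*y
    (sign conventions vary across textbooks; Figure 3 may use an equivalent variant)."""
    x, y = x0
    xs = np.empty(T)
    for t in range(T):
        dx = y
        dy = alpha + beta * x + x ** 2 + sigma * x * y
        x, y = x + dt * dx, y + dt * dy
        xs[t] = x
    return xs

# Two illustrative parameter combinations on either side of the Hopf curve (alpha = 0)
for alpha in (+0.1, -0.1):
    tail = bt_normal_form(alpha, beta=-1.0)[-2000:]   # keep only the final part of the run
    kind = "oscillatory (limit cycle)" if tail.std() > 1e-2 else "constant (fixed point)"
    print(f"alpha = {alpha:+.2f}, beta = -1.0 -> {kind}")
```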
The richness obtained by coupling this kind of simple dynamical model stems from the possibility of inducing stochastic transitions across bifurcations by incorporating an additive noise term. In this way, the dynamics switch intermittently between two qualitatively different solutions. In the case of the Hopf bifurcation, for instance, noise-driven dynamics at the bifurcation point are neither stable nor oscillatory, but present complex amplitude fluctuations [129]. The noise-driven exploration of a system’s attractor space is a mainstay of computational neuroscience [181] and could represent a useful methodological resource to build whole-brain models to explore altered states of consciousness.
Following the pioneering work of Deco and colleagues [129], the most frequently explored transition is between stable noise-driven dynamics and self-sustained harmonic oscillations, corresponding to the Hopf bifurcation (vertical red line in Figure 3), which appears in Stuart-Landau nonlinear oscillators. At the bifurcation point, dynamics show the kind of complexity that is compatible with certain signatures of consciousness, with departures from this point being reported for states of reduced consciousness, such as sleep and anaesthesia [115,123,124,127] (as is clear from Figure 3, however, this bifurcation is only one among multiple possibilities). The upper panel of Figure 4 illustrates this situation by presenting the phase space and temporal evolution of a noise-driven Stuart-Landau nonlinear oscillator near dynamical criticality. The signal evolves with complex amplitude fluctuations as noise drives the dynamics across the bifurcation. In addition, at dynamical criticality, small fluctuations tend to be amplified [115,129]; thus, whole-brain models far from criticality reproduce the lack of sustained and complex responses to external perturbations seen in states of reduced consciousness [82].
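A minimal sketch of this mechanism for a single region is given below: a noise-driven Stuart-Landau oscillator is integrated below, at, and above the Hopf bifurcation, and the fluctuations of its oscillation amplitude are compared. The parameter values (noise strength, frequency, integration settings) are illustrative; whole-brain applications [115,123,124,127,129] couple many such oscillators through an empirical structural connectome and fit the simulated signals to fMRI observables.

```python
import numpy as np

def stuart_landau(a, omega=1.0, sigma=0.05, T=50000, dt=0.01, seed=0):
    """Euler-Maruyama integration of a single noise-driven Stuart-Landau oscillator,
        dz = [(a + i*omega) z - |z|^2 z] dt + sigma dW,   z = x + i y,
    which undergoes a supercritical Hopf bifurcation at a = 0."""
    rng = np.random.default_rng(seed)
    z = 0.1 + 0.0j
    traj = np.empty(T, dtype=complex)
    for t in range(T):
        drift = (a + 1j * omega) * z - (abs(z) ** 2) * z
        noise = sigma * (rng.standard_normal() + 1j * rng.standard_normal())
        z = z + dt * drift + np.sqrt(dt) * noise
        traj[t] = z
    return traj

# Amplitude fluctuations are expected to be largest close to the bifurcation (a ~ 0),
# the regime associated with conscious wakefulness in refs. [115,124,129].
for a in (-0.5, 0.0, 0.5):
    z = stuart_landau(a)[25000:]                        # discard the transient
    print(f"a = {a:+.1f}: mean amplitude = {np.abs(z).mean():.3f}, "
          f"amplitude std = {np.abs(z).std():.3f}")
```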
Noise-driven dynamics near a bifurcation generate the kind of complex behavior characteristic of conscious wakefulness. Thus, we expect that states associated with unconsciousness are best reproduced for parameters far from bifurcation points, as already shown by studies fitting coupled Stuart-Landau oscillators to data acquired during deep sleep [115,124]. Low dimensional models can also be used to capture more specific dynamics that are representative of other brain states. An important example is that of epileptic seizures, where bifurcations in a phenomenological model represent transitions between dynamics characteristic of different epileptic syndromes, both in ictal [162] and inter-ictal activity [182]. The same principle could be extended to model the changes in complexity, oscillatory power and transient events seen during other brain states.
The inclusion of noise in whole-brain models raises questions concerning the mechanisms that endow biological systems with stochastic dynamics [181]. Again, we postpone these difficult questions in favor of more practical considerations, and propose that noise-driven equilibrium dynamics increase interpretability at the expense of two main shortcomings. First, parameter fine-tuning is required to poise dynamics near dynamical criticality. As discussed above, optimization procedures can be applied to obtain the parameters which best reproduce certain empirical observables. However, the biological variables captured by the optimal combination of parameters could change upon small perturbations, leading to models that always predict intrinsically unstable states of consciousness. The second problem is that once parameters are optimized to reproduce a certain observable, other observables could be poorly captured by the model, calling into question the extent to which the model adequately describes the empirical data. We propose that both problems could be simultaneously addressed by exploring non-stochastic models of chaotic coupled oscillators, such as Rossler oscillators, in which dynamics unfold near a strange attractor with a positive Lyapunov exponent over a comparatively ample range of parameters [183]. Thus, complex dynamics do not depend on a bifurcation parameter taking a precise value, but instead arise over an extended range of parameter values. This kind of phenomenological model of whole-brain activity is comparatively understudied, and could represent a valuable target for future developments.
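As a proof of concept of this alternative, the sketch below estimates the largest Lyapunov exponent of a single Rossler oscillator at its textbook parameter values by evolving two nearby trajectories and rescaling their separation after every step (a Benettin-style estimate). The integration settings are illustrative, and a whole-brain application would additionally couple many such units through a structural connectome, which is not attempted here.

```python
import numpy as np

def rossler_step(s, a=0.2, b=0.2, c=5.7, dt=0.01):
    """One Euler step of the Rossler system: x' = -y - z, y' = x + a*y, z' = b + z*(x - c)."""
    x, y, z = s
    return s + dt * np.array([-y - z, x + a * y, b + z * (x - c)])

def largest_lyapunov(T=100000, dt=0.01, d0=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent: evolve two nearby
    trajectories and rescale their separation back to d0 after every step."""
    ref = np.array([1.0, 1.0, 1.0])
    pert = ref + np.array([d0, 0.0, 0.0])
    acc = 0.0
    for _ in range(T):
        ref, pert = rossler_step(ref, dt=dt), rossler_step(pert, dt=dt)
        d = np.linalg.norm(pert - ref)
        acc += np.log(d / d0)
        pert = ref + (pert - ref) * (d0 / d)   # rescale the separation
    return acc / (T * dt)

# A positive exponent indicates deterministic chaos; for the textbook parameters
# a = b = 0.2, c = 5.7 the value reported in the literature is roughly 0.07.
print("largest Lyapunov exponent ~", round(largest_lyapunov(), 3))
```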

7. Final Remarks

The history of science shows an intense and ongoing debate about the place of consciousness within the scope of scientific inquiry. As a matter of fact, until recently most of the scientific community did not consider consciousness a suitable topic for investigation. While the ultimate nature of consciousness is still full of mysteries, it is evident that deepening our knowledge of the mechanistic, statistical, and dynamical relationships within the brain in its different possible states of consciousness can only increase our understanding of the relationship between mind and body.
A key factor supporting the modern discipline of consciousness research is the extraordinary development of neuroimaging technologies over the last decades, which plays a role analogous to that of telescopes in revealing the nature of the solar system. However, progress in the problem of consciousness depends not only on technological advances, but also on our capacity to explore and chart the contents and boundaries of consciousness itself. Consciousness research needs neuroimaging as much as any other branch of human neuroscience, but it also needs to devise and explore new methods to induce altered states of consciousness, and to break through arbitrary regulatory restrictions preventing the exploration of certain older but very powerful research tools [184,185]. That being said, we note that materialist-reductionist accounts are not the only road to approach consciousness, and alternative rigorous frameworks are also being developed (see, e.g., References [186,187]).
These technological advances, together with increases in computational capability and a renewed appreciation of the role that altered states of consciousness play in scientific research (especially in the recent focus on the NCC), have prepared a fertile ground for whole-brain models to open a new window of research possibilities. Indeed, while much progress has been made during the last decades in the problem of identifying top-down signatures of consciousness, most of these tools have not yet reached a stage of maturity that allows clinical applications. We expect that pursuing the problem from a different perspective will be invigorating for the field as a whole, increasing the appreciation for the role that low-level biological mechanisms play in the emergence of high-level signatures of consciousness.
Consciousness research is not alone in its need for low-level mechanistic explanations. The project of formulating psychiatric diagnoses in biological terms [188] will require a systematic exploration of the low-level mechanisms giving rise to the behavioral manifestations of mental disorders [189,190]. We expect that many of the ideas and methods proposed here will seamlessly translate into the field of computational psychiatry, even for the study of disorders which do not include altered consciousness as a defining feature (e.g., depression).
In the same way that scientific inquiry has eventually succeeded in explaining seemingly mysterious phenomena, such as heat (in terms of kinetic considerations), combustion (in terms of chemical reactions), and genes (in terms of molecular replication), it is reasonable to expect that consciousness will also be explainable someday in mechanistic terms. If this is to happen, the perspective of bottom-up modeling is likely to play a crucial role, as was the case for the three aforementioned examples. It is our hope that the present proposal will serve both as an encouragement and as a roadmap to invest future research efforts in the computational modeling of altered states of consciousness.

Author Contributions

Conceptualization, R.C., R.H., P.A.M.M., F.E.R., Y.S.P. and E.T.; methodology, R.C., R.H., P.A.M.M., F.E.R., Y.S.P. and E.T.; writing—original draft preparation, R.C., R.H., P.A.M.M., F.E.R., J.P., Y.S.P. and E.T.; writing—review and editing R.C., R.H., P.A.M.M., F.E.R., J.P., Y.S.P., and E.T. All authors have read and agreed to the published version of the manuscript.

Funding

R.C. was supported by Fondecyt Iniciación 2018 Proyecto 11181072. R.H. was funded by CONICYT scholarship CONICYT-PFCHA/Doctorado Nacional/2018-21180428. P.M. was funded by the Wellcome Trust (grant no. 210920/Z/18/Z). F.R. was supported by the Ad Astra Chandaria Foundation. E.T. and Y.S.P. were supported by ANPCyT (Argentina), grant PICT-2018-03103.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NCC: Neural correlates of consciousness
DMF: Dynamic mean-field
fMRI: Functional magnetic resonance imaging
BOLD: Blood oxygen level-dependent
PET: Positron emission tomography
DTI: Diffusion tensor imaging
EEG: Electroencephalography
MEG: Magnetoencephalography
IIT: Integrated Information Theory
GNW: Global neuronal workspace
EBH: Entropic brain hypothesis
LZc: Lempel-Ziv complexity
FCD: Functional connectivity dynamics
PCI: Perturbational complexity index
PILI: Perturbative Integration Latency Index
LSD: Lysergic acid diethylamide
AAL: Automated anatomical labeling
DOC: Disorder of consciousness
GABA: Gamma-aminobutyric acid
RSN: Resting-state networks
Glossary of Technical Terms
5-HT2A receptor: Serotonin receptor whose acute activation by serotonergic psychedelics produces a transient altered state of consciousness.
Attractor: Set of points in the phase space of a dynamical system towards which the system converges during its temporal evolution.
Bifurcation: Phenomenon in the field of dynamical systems that occurs when a small change in parameter values causes a sharp qualitative change in the behavior of the system.
Bottom-up approach: Defines the local dynamics of interacting units (such as neurons or groups of neurons) in order to generate features as similar as possible to those observed in the brain during different experimental conditions.
Entropic brain hypothesis: An example of a top-down approach. Postulates that the richness of conscious experience depends on the complexity of the underlying population-level neuronal activity, which determines the repertoire of states available for the brain to explore.
Functional connectivity (FC) and functional connectivity dynamics (FCD): Second-order statistics summarizing the pairwise dependence between the activity of brain regions. The FCD is obtained by computing the similarity between the FCs associated with different time windows.
Integrated information theory (IIT): An example of a top-down approach. Based on certain first-person qualities of subjective experience, which are accessed by introspection and can be taken as “postulates” or “axioms” for the theory. The theory strives to provide a quantitative characterization of consciousness by analyzing the causal relationships of brain activity using multivariate information theory.
Hopf bifurcation: Bifurcation of a nonlinear dynamical system in which a steady state changes its stability and a limit cycle emerges, giving rise to periodic solutions.
Lempel-Ziv complexity: Lossless compression algorithm that provides an effective tool to estimate the entropy rate of a signal.
Lyapunov exponent: Exponent indicating how two trajectories with similar initial conditions diverge in their temporal evolution along each dimension. A positive largest Lyapunov exponent is indicative of deterministic chaos.
Mind-brain problem: Dualistic perspective addressing the relationship between mental and embodied brain processes.
Neural correlates of consciousness (NCC): Minimal set of neural events associated with a certain subjective experience.
Perturbational complexity index: Measure of the complexity of the cortical activity evoked by transcranial magnetic stimulation.
Phenomenal and access consciousness: The first refers to the subjective experience of sensory perception, emotion, thoughts, etc. The second refers to the global availability of conscious content for cognitive functions, such as speech, reasoning, and decision-making, enabling the capacity to issue first-person reports.
Psychedelic drugs: Psychoactive drugs whose primary effect is to produce profound changes in perception, mood, and cognitive processes, triggering non-ordinary states of consciousness. There are two major types: serotonergic psychedelics (e.g., LSD, DMT), which activate the serotonin 2A receptor (5-HT2A), and glutamatergic dissociatives (e.g., ketamine, PCP), whose action blocks the PCP site of NMDA glutamate receptors.
Resting state networks: Specific patterns of synchronous activity between brain regions in whole-brain recordings. They are consistently found in fMRI data from healthy subjects when no explicit task is being performed.
Stuart-Landau oscillators: Non-linear oscillating systems near a Hopf bifurcation.
Top-down approach: Focuses on the use of subjective signatures of consciousness as guiding principles to analyze brain signals, in order to narrow down the possible biophysical mechanisms compatible with those signatures.
Whole-brain computational models: An implementation of the bottom-up approach. Defines a set of differential equations ruling the dynamics and interactions between simulated brain regions in order to reproduce observables from neuroimaging data.

References

  1. LeDoux, J.E.; Michel, M.; Lau, H. A little history goes a long way toward understanding why we study consciousness the way we do today. Proc. Natl. Acad. Sci. USA 2020, 117, 6976–6984. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Seth, A.K. Consciousness: The last 50 years (and the next). Brain Neurosci. Adv. 2018, 2, 2398212818816019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Overgaard, M. The status and future of consciousness research. Front. Psychol. 2017. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Crick, F.; Koch, C. Towards a neurobiological theory of consciousness. In Seminars in the Neurosciences; Salk Institute: La Jolla, California, USA, 1990. [Google Scholar]
  5. Crick, F.; Koch, C. A framework for consciousness. Nat. Neurosci. 2003. [Google Scholar] [CrossRef] [PubMed]
  6. Tsuchiya, N.; Wilke, M.; Frässle, S.; Lamme, V.A. No-report paradigms: Extracting the true neural correlates of consciousness. Trends Cogn. Sci. 2015, 19, 757–770. [Google Scholar] [CrossRef] [PubMed]
  7. Cohen, M.A.; Dennett, D.C. Consciousness cannot be separated from function. Trends Cogn. Sci. 2011, 15, 358–364. [Google Scholar] [CrossRef]
  8. Koch, C.; Massimini, M.; Boly, M.; Tononi, G. Neural correlates of consciousness: Progress and problems. Nat. Rev. Neurosci. 2016. [Google Scholar] [CrossRef]
  9. Chalmers, D.J. What is a neural correlate of consciousness? In Neural Correlates of Consciousness: Empirical and Conceptual Questions; Metzinger, T., Ed.; MIT Press: Cambridge, MA, USA, 2000; pp. 17–39. [Google Scholar]
  10. Noë, A.; Thompson, E. Are there neural correlates of consciousness? J. Conscious. Stud. 2004, 11, 3–28. [Google Scholar]
  11. De Graaf, T.A.; Hsieh, P.J.; Sack, A.T. The ‘correlates’ in neural correlates of consciousness. Neurosci. Biobehav. Rev. 2012, 36, 191–197. [Google Scholar] [CrossRef]
  12. Seth, A. Models of consciousness. Scholarpedia 2007, 2, 1328. [Google Scholar] [CrossRef]
  13. Sergent, C.; Naccache, L. Imaging neural signatures of consciousness:‘What’,‘When’,‘Where’and ‘How’does it work? Arch. Ital. Biol. 2012, 150, 91–106. [Google Scholar] [PubMed]
  14. Stinson, C.; Sullivan, J. Mechanistic explanation in neuroscience. In The Routledge Handbook of Mechanisms and Mechanical Philosophy; Routledge Books, Taylor and Francis Group: Oxfordshire, UK, 2018; pp. 375–387. [Google Scholar]
  15. Deco, G.; Jirsa, V.K.; Robinson, P.A.; Breakspear, M.; Friston, K. The dynamic brain: From spiking neurons to neural masses and cortical fields. PLoS Comput. Biol. 2008, 4, e1000092. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Vaitl, D.; Birbaumer, N.; Gruzelier, J.; Jamieson, G.A.; Kotchoubey, B.; Kübler, A.; Lehmann, D.; Miltner, W.H.; Ott, U.; Pütz, P.; et al. Psychobiology of altered states of consciousness. Psychol. Bull. 2005, 131, 98. [Google Scholar] [CrossRef] [PubMed]
  17. Revonsuo, A.; Kallio, S.; Sikka, P. What is an altered state of consciousness? Philos. Psychol. 2009, 22, 187–204. [Google Scholar] [CrossRef]
  18. Overgaard, M.; Overgaard, R. Neural correlates of contents and levels of consciousness. Front. Psychol. 2010, 1, 164. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Tassi, P.; Muzet, A. Defining the states of consciousness. Neurosci. Biobehav. Rev. 2001, 25, 175–191. [Google Scholar] [CrossRef]
  20. Ludwig, A.M. Altered states of consciousness. Arch. Gen. Psychiatry 1966, 15, 225–234. [Google Scholar] [CrossRef]
  21. Tart, C. The basic nature of altered states of consciousness, a system approach. J. Transpers. Psychol. 1976, 8, 45–64. [Google Scholar]
  22. Bayne, T. Conscious states and conscious creatures: Explanation in the scientific study of consciousness. Philos. Perspect. 2007, 21, 1–22. [Google Scholar] [CrossRef]
  23. Michel, M. Consciousness science underdetermined: A short history of endless debates. Ergo Open Access J. Philos. 2019, 6, 2019–2020. [Google Scholar] [CrossRef] [Green Version]
  24. Reardon, S. Rival Theories Face off over Brain’s Source of Consciousness. Science 2019, 366, 293. [Google Scholar] [CrossRef]
  25. Freeman, W.J. Indirect biological measures of consciousness from field studies of brains as dynamical systems. Neural Netw. 2007, 20, 1021–1031. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Thompson, E.; Varela, F.J. Radical embodiment: Neural dynamics and consciousness. Trends Cogn. Sci. 2001, 5, 418–425. [Google Scholar] [CrossRef]
  27. Edelman, G.M.; Tononi, G. Reentry and the dynamic core: Neural correlates of conscious experience. In Neural Correlates of Consciousness: Empirical and Conceptual Questions; Metzinger, T., Ed.; MIT Press: Cambridge, MA, USA, 2000; pp. 139–151. [Google Scholar]
  28. Sporns, O.; Tononi, G.; Kötter, R. The human connectome: A structural description of the human brain. PLoS Comput. Biol. 2005, 1, e42. [Google Scholar] [CrossRef] [PubMed]
  29. Breakspear, M. Dynamic models of large-scale brain activity. Nat. Neurosci. 2017, 20, 340–352. [Google Scholar] [CrossRef] [PubMed]
  30. Ritter, P.; Schirner, M.; McIntosh, A.R.; Jirsa, V.K. The virtual brain integrates computational modeling and multimodal neuroimaging. Brain Connect. 2013, 3, 121–145. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Block, N. On a confusion about a function of consciousness. Behav. Brain Sci. 1995, 18, 227–247. [Google Scholar] [CrossRef]
  32. Tart, C.T. Altered States of Consciousness; Doubleday: New York, NY, USA, 1972. [Google Scholar]
  33. Natsoulas, T. Basic problems of consciousness. J. Personal. Soc. Psychol. 1981. [Google Scholar] [CrossRef]
  34. Deutsch, D. A Musical Paradox. Music. Percept. 1986. [Google Scholar] [CrossRef]
  35. Kunzendorf, R.G.; Wallace, B.E. Individual Differences in Conscious Experience; John Benjamins: Amsterdam, The Netherlands; Philadelphia, PA, USA, 2000. [Google Scholar]
  36. Pasricha, S.; Stevenson, I. Near-death experiences in india: A preliminary report. J. Nerv. Ment. Dis. 1986. [Google Scholar] [CrossRef]
  37. Cardeña, E.; Winkelman, M.J.E. Altering Consciousness: Multidisciplinary Perspectives; Praeger: Santa Barbara, CA, USA, 2011. [Google Scholar]
  38. Carhart-Harris, R.L.; Leech, R.; Hellyer, P.J.; Shanahan, M.; Feilding, A.; Tagliazucchi, E.; Chialvo, D.R.; Nutt, D. The entropic brain: A theory of conscious states informed by neuroimaging research with psychedelic drugs. Front. Hum. Neurosci. 2014, 8, 20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Bayne, T.; Hohwy, J.; Owen, A.M. Are there levels of consciousness? Trends Cogn. Sci. 2016, 20, 405–413. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Bayne, T.; Carter, O. Dimensions of consciousness and the psychedelic state. Neurosci. Conscious. 2018, 2018, niy008. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Sitt, J.D.; King, J.R.; El Karoui, I.; Rohaut, B.; Faugeras, F.; Gramfort, A.; Cohen, L.; Sigman, M.; Dehaene, S.; Naccache, L. Large scale screening of neural signatures of consciousness in patients in a vegetative or minimally conscious state. Brain 2014, 137, 2258–2270. [Google Scholar] [CrossRef] [PubMed]
  42. Watt, D.F.; Pincus, D.I. Neural substrates of consciousness: Implications for clinical psychiatry. In Textbook of Biological Psychiatry; Panksepp, J., Ed.; John Wiley & Sons: Hoboken, NJ, USA, 2004; p. 75. [Google Scholar]
  43. Dennet, D. Consciousness Explained; Penguin Science, Theory & Psychology: London, UK, 1997. [Google Scholar]
  44. Dennett, D. Who’s on first? Heterophenomenology explained. J. Conscious. Stud. 2003, 10, 19–30. [Google Scholar]
  45. Block, N. Troubles with Functionalism; University of Minnesota Press: Minneapolis, MN, USA, 1978. [Google Scholar]
  46. Lutz, A.; Lachaux, J.P.; Martinerie, J.; Varela, F.J. Guiding the study of brain dynamics by using first-person data: Synchrony patterns correlate with ongoing conscious states during a simple visual task. Proc. Natl. Acad. Sci. USA 2002, 99, 1586–1591. [Google Scholar] [CrossRef] [Green Version]
  47. Shear, J.; Varela, F.J. The View from within: First-Person Approaches to the Study of Consciousness; Imprint Academic: Thorverton, UK, 1999. [Google Scholar]
  48. Chalmers, D.J. First-person methods in the science of consciousness. Conscious. Bull. 1999. Available online: http://consc.net/papers/firstperson.html (accessed on 1 September 2020).
  49. Frankish, K. Illusionism as a theory of consciousness. J. Conscious. Stud. 2016, 23, 11–39. [Google Scholar]
  50. Nida-Rümelin, M. The illusion of illusionism. J. Conscious. Stud. 2016, 23, 160–171. [Google Scholar]
  51. Seager, W. Could consciousness be an illusion? Mind Matter 2017, 15, 7–28. [Google Scholar]
  52. Baars, B.J. Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Prog. Brain Res. 2005. [Google Scholar] [CrossRef]
  53. Dehaene, S.; Naccache, L. Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 2001. [Google Scholar] [CrossRef]
  54. Mashour, G.A.; Roelfsema, P.; Changeux, J.P.; Dehaene, S. Conscious processing and the global neuronal workspace hypothesis. Neuron 2020, 105, 776–798. [Google Scholar] [CrossRef] [PubMed]
  55. Tononi, G. An information integration theory of consciousness. BMC Neurosci. 2004. [Google Scholar] [CrossRef] [Green Version]
  56. Balduzzi, D.; Tononi, G. Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Comput. Biol. 2008. [Google Scholar] [CrossRef] [Green Version]
  57. Oizumi, M.; Albantakis, L.; Tononi, G. From the phenomenology to the mechanisms of consciousness: Integrated information theory 3.0. PLoS Comput. Biol. 2014, 10, e1003588. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Barrett, A.B.; Mediano, P.A. The Phi measure of integrated information is not well-defined for general physical systems. J. Conscious. Stud. 2019, 26, 11–20. [Google Scholar]
  59. Block, N. Perceptual consciousness overflows cognitive access. Trends Cogn. Sci. 2011, 15, 567–575. [Google Scholar] [CrossRef] [Green Version]
  60. Aru, J.; Bachmann, T.; Singer, W.; Melloni, L. Distilling the neural correlates of consciousness. Neurosci. Biobehav. Rev. 2012, 36, 737–746. [Google Scholar] [CrossRef] [Green Version]
  61. Lamme, V.A. Towards a true neural stance on consciousness. Trends Cogn. Sci. 2006, 10, 494–501. [Google Scholar] [CrossRef]
  62. Doerig, A.; Schurger, A.; Hess, K.; Herzog, M.H. The unfolding argument: Why IIT and other causal structure theories cannot explain consciousness. Conscious. Cogn. 2019, 72, 49–59. [Google Scholar] [CrossRef] [PubMed]
  63. Tsuchiya, N.; Andrillon, T.; Haun, A. A reply to “the unfolding argument”: Beyond functionalism/behaviorism and towards a truer science of causal structural theories of consciousness. Conscious. Cogn. 2020, 79, 102877. [Google Scholar] [CrossRef] [PubMed]
  64. Seth, A.K.; Izhikevich, E.; Reeke, G.N.; Edelman, G.M. Theories and measures of consciousness: An extended framework. Proc. Natl. Acad. Sci. USA 2006, 103, 10799–10804. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Tagliazucchi, E. The signatures of conscious access and its phenomenology are consistent with large-scale brain communication at criticality. Conscious. Cogn. 2017, 55, 136–147. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Laureys, S.; Owen, A.M.; Schiff, N.D. Brain function in coma, vegetative state, and related disorders. Lancet Neurol. 2004. [Google Scholar] [CrossRef] [Green Version]
  67. Laureys, S.; Goldman, S.; Phillips, C.; Van Bogaert, P.; Aerts, J.; Luxen, A.; Franck, G.; Maquet, P. Impaired effective cortical connectivity in vegetative state: Preliminary investigation using PET. NeuroImage 1999. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Carhart-Harris, R.L. The entropic brain-revisited. Neuropharmacology 2018, 142, 167–178. [Google Scholar] [CrossRef]
  69. Lempel, A.; Ziv, J. On the complexity of finite sequences. IEEE Trans. Inf. Theory 1976. [Google Scholar] [CrossRef]
  70. Ziv, J. Coding theorems for individual sequences. IEEE Trans. Inf. Theory 1978. [Google Scholar] [CrossRef]
  71. Zhang, X.S.; Roy, R.J.; Jensen, E.W. EEG complexity as a measure of depth of anesthesia for patients. IEEE Trans. Biomed. Eng. 2001, 48, 1424–1433. [Google Scholar] [CrossRef]
  72. Nenadovic, V.; Perez Velazquez, J.L.; Hutchison, J.S. Phase synchronization in electroencephalographic recordings prognosticates outcome in paediatric coma. PLoS ONE 2014. [Google Scholar] [CrossRef] [PubMed]
  73. Schartner, M.M.; Pigorini, A.; Gibbs, S.A.; Arnulfo, G.; Sarasso, S.; Barnett, L.; Nobili, L.; Massimini, M.; Seth, A.K.; Barrett, A.B. Global and local complexity of intracranial EEG decreases during NREM sleep. Neurosci. Conscious. 2017, 2017, niw022. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Dominguez, L.G.; Wennberg, R.A.; Gaetz, W.; Cheyne, D.; Snead, O.C.; Perez Velazquez, J.L. Enhanced synchrony in epileptiform activity? Local versus distant phase synchronization in generalized seizures. J. Neurosci. 2005. [Google Scholar] [CrossRef] [PubMed]
  75. Vivot, R.M.; Pallavicini, C.; Zamberlan, F.; Vigo, D.; Tagliazucchi, E. Meditation increases the entropy of brain oscillatory activity. Neuroscience 2020. [Google Scholar] [CrossRef]
  76. Schartner, M.M.; Carhart-Harris, R.L.; Barrett, A.B.; Seth, A.K.; Muthukumaraswamy, S.D. Increased spontaneous MEG signal diversity for psychoactive doses of ketamine, LSD and psilocybin. Sci. Rep. 2017, 7, 46421. [Google Scholar] [CrossRef] [PubMed]
  77. Timmermann, C.; Roseman, L.; Schartner, M.; Milliere, R.; Williams, L.T.; Erritzoe, D.; Muthukumaraswamy, S.; Ashton, M.; Bendrioua, A.; Kaur, O.; et al. Neural correlates of the DMT experience assessed with multivariate EEG. Sci. Rep. 2019, 9, 1–13. [Google Scholar] [CrossRef]
  78. Stam, C.J. Nonlinear dynamical analysis of EEG and MEG: Review of an emerging field. Clin. Neurophysiol. 2005, 116, 2266–2301. [Google Scholar] [CrossRef]
  79. Peng, H.; Hu, B.; Zheng, F.; Fan, D.; Zhao, W.; Chen, X.; Yang, Y.; Cai, Q. A method of identifying chronic stress by EEG. Pers. Ubiquitous Comput. 2013, 17, 1341–1347. [Google Scholar] [CrossRef]
  80. Xu, R.; Zhang, C.; He, F.; Zhao, X.; Qi, H.; Zhou, P.; Zhang, L.; Ming, D. How physical activities affect mental fatigue based on EEG energy, connectivity, and complexity. Front. Neurol. 2018, 9, 915. [Google Scholar] [CrossRef]
  81. Dolan, D.; Jensen, H.J.; Mediano, P.; Molina-Solana, M.; Rajpal, H.; Rosas, F.; Sloboda, J.A. The improvisational state of mind: A multidisciplinary study of an improvisatory approach to classical music repertoire performance. Front. Psychol. 2018, 9, 1341. [Google Scholar] [CrossRef] [Green Version]
  82. Casali, A.G.; Gosseries, O.; Rosanova, M.; Boly, M.; Sarasso, S.; Casali, K.R.; Casarotto, S.; Bruno, M.A.; Laureys, S.; Tononi, G.; et al. A theoretically based index of consciousness independent of sensory processing and behavior. Sci. Transl. Med. 2013. [Google Scholar] [CrossRef] [PubMed]
  83. Tononi, G. Consciousness as integrated information: A provisional manifesto. Biol. Bull. 2008. [Google Scholar] [CrossRef] [PubMed]
  84. Tononi, G.; Sporns, O.; Edelman, G.M. A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA 1994. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Mediano, P.A.; Rosas, F.; Carhart-Harris, R.L.; Seth, A.K.; Barrett, A.B. Beyond integrated information: A taxonomy of information dynamics phenomena. arXiv 2019, arXiv:1909.02297. [Google Scholar]
  86. Mediano, P.A.; Seth, A.K.; Barrett, A.B. Measuring integrated information: Comparison of candidate measures in theory and simulation. Entropy 2019. [Google Scholar] [CrossRef] [Green Version]
  87. Mindt, G. The problem with the ’information’ in integrated information theory. J. Conscious. Stud. 2017, 24, 130–154. [Google Scholar]
  88. Morch, H.H. Is consciousness intrinsic?: A problem for the integrated information theory. J. Conscious. Stud. 2019, 26, 133–162. [Google Scholar]
  89. Bayne, T. On the axiomatic foundations of the integrated information theory of consciousness. Neurosci. Conscious. 2018, 2018, niy007. [Google Scholar] [CrossRef] [Green Version]
  90. Krohn, S.; Ostwald, D. Computing integrated information. Neurosci. Conscious. 2017. [Google Scholar] [CrossRef] [Green Version]
  91. Kitazono, J.; Kanai, R.; Oizumi, M. Efficient algorithms for searching the minimum information partition in integrated information theory. Entropy 2018, 20, 173. [Google Scholar] [CrossRef] [Green Version]
  92. Toker, D.; Sommer, F.T. Information integration in large brain networks. PLoS Comput. Biol. 2019, 15, e1006807. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  93. Mediano, P. Integrated Information in Complex Neural Systems. PhD Thesis, Imperial College London, London, UK, 2020. [Google Scholar]
  94. Chang, J.Y.; Pigorini, A.; Massimini, M.; Tononi, G.; Nobili, L.; Van Veen, B.D. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain. Front. Hum. Neurosci. 2012. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  95. Kim, H.; Hudetz, A.G.; Lee, J.; Mashour, G.A.; Lee, U.C.; Avidan, M.S.; Bel-Bahar, T.; Blain-Moraes, S.; Golmirzaie, G.; Janke, E.; et al. Estimating the integrated information measure Phi from high-density electroencephalography during states of consciousness in humans. Front. Hum. Neurosci. 2018. [Google Scholar] [CrossRef] [PubMed]
  96. Mediano, P.A.; Farah, J.C.; Shanahan, M. Integrated information and metastability in systems of coupled oscillators. arXiv 2016, arXiv:1606.08313. [Google Scholar]
  97. Gerstner, W.; Sprekeler, H.; Deco, G. Theory and simulation in neuroscience. Science 2012, 338, 60–65. [Google Scholar] [CrossRef] [Green Version]
  98. Ramsey, J.D.; Hanson, S.J.; Hanson, C.; Halchenko, Y.O.; Poldrack, R.A.; Glymour, C. Six problems for causal inference from fMRI. NeuroImage 2010, 49, 1545–1558. [Google Scholar] [CrossRef] [PubMed]
  99. Kriegeskorte, N.; Douglas, P.K. Cognitive computational neuroscience. Nat. Neurosci. 2018, 21, 1148–1160. [Google Scholar] [CrossRef] [PubMed]
  100. Kriegeskorte, N.; Mur, M.; Bandettini, P.A. Representational similarity analysis-connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2008, 2, 4. [Google Scholar] [CrossRef] [Green Version]
  101. Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cogn. Sci. 2013, 17, 401–412. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Friston, K.J.; Harrison, L.; Penny, W. Dynamic causal modelling. Neuroimage 2003, 19, 1273–1302. [Google Scholar] [CrossRef]
  103. Schirner, M.; McIntosh, A.R.; Jirsa, V.; Deco, G.; Ritter, P. Inferring multi-scale neural mechanisms with brain network modelling. Elife 2018, 7, e28927. [Google Scholar] [CrossRef] [PubMed]
  104. Markram, H. The blue brain project. Nat. Rev. Neurosci. 2006, 7, 153–160. [Google Scholar] [CrossRef] [PubMed]
  105. Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Van Wedeen, J.; Sporns, O. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008. [Google Scholar] [CrossRef] [PubMed]
  106. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, O.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 2002. [Google Scholar] [CrossRef] [PubMed]
  107. Tagliazucchi, E.; Chialvo, D.R.; Siniatchkin, M.; Amico, E.; Brichant, J.F.; Bonhomme, V.; Noirhomme, Q.; Laufs, H.; Laureys, S. Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics. J. R. Soc. Interface 2016. [Google Scholar] [CrossRef] [PubMed]
  108. Haimovici, A.; Tagliazucchi, E.; Balenzuela, P.; Chialvo, D.R. Brain organization into resting state networks emerges at criticality on a model of the human connectome. Phys. Rev. Lett. 2013. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  109. Deco, G.; Jirsa, V.K. Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. J. Neurosci. 2012. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  110. Marinazzo, D.; Pellicoro, M.; Wu, G.; Angelini, L.; Cortés, J.M.; Stramaglia, S. Information transfer and criticality in the Ising model on the human connectome. PLoS ONE 2014, 9, e93616. [Google Scholar] [CrossRef]
  111. Abeyasinghe, P.M.; Aiello, M.; Nichols, E.S.; Cavaliere, C.; Fiorenza, S.; Masotta, O.; Borrelli, P.; Owen, A.M.; Estraneo, A.; Soddu, A. Consciousness and the Dimensionality of DOC Patients via the Generalized Ising Model. J. Clin. Med. 2020, 9, 1342. [Google Scholar] [CrossRef]
  112. Messé, A.; Rudrauf, D.; Benali, H.; Marrelec, G. Relating structure and function in the human brain: Relative contributions of anatomy, stationary dynamics, and non-stationarities. PLoS Comput. Biol. 2014. [Google Scholar] [CrossRef] [Green Version]
  113. Saggio, M.L.; Ritter, P.; Jirsa, V.K. Analytical operations relate structural and functional connectivity in the brain. PLoS ONE 2016. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Cabral, J.; Kringelbach, M.L.; Deco, G. Exploring the network dynamics underlying brain activity during rest. Prog. Neurobiol. 2014. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Jobst, B.M.; Hindriks, R.; Laufs, H.; Tagliazucchi, E.; Hahn, G.; Ponce-Alvarez, A.; Stevner, A.B.; Kringelbach, M.L.; Deco, G. Increased stability and breakdown of brain effective connectivity during slow-wave sleep: Mechanistic insights from whole-brain computational modelling. Sci. Rep. 2017. [Google Scholar] [CrossRef] [PubMed]
  116. Robinson, P.A.; Roy, N. Neural field theory of nonlinear wave-wave and wave-neuron processes. Phys. Rev. E 2015. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  117. Babaie Janvier, T.; Robinson, P.A. Neural field theory of corticothalamic prediction with control systems analysis. Front. Hum. Neurosci. 2018. [Google Scholar] [CrossRef] [PubMed]
  118. Breakspear, M.; Terry, J.R.; Friston, K.J. Modulation of excitatory synaptic coupling facilitates synchronization and complex dynamics in a biophysical model of neuronal dynamics. Netw. Comput. Neural Syst. 2003. [Google Scholar] [CrossRef]
  119. Honey, C.J.; Sporns, O.; Cammoun, L.; Gigandet, X.; Thiran, J.P.; Meuli, R.; Hagmann, P. Predicting human resting-state functional connectivity from structural connectivity. Proc. Natl. Acad. Sci. USA 2009. [Google Scholar] [CrossRef] [Green Version]
  120. Deco, G.; Ponce-Alvarez, A.; Hagmann, P.; Romani, G.L.; Mantini, D.; Corbetta, M. How local excitation-inhibition ratio impacts the whole brain dynamics. J. Neurosci. 2014. [Google Scholar] [CrossRef] [Green Version]
  121. Deco, G.; Cruzat, J.; Cabral, J.; Knudsen, G.M.; Carhart-Harris, R.L.; Whybrow, P.C.; Logothetis, N.K.; Kringelbach, M.L. Whole-brain multimodal neuroimaging model using serotonin receptor maps explains non-linear functional effects of LSD. Curr. Biol. 2018. [Google Scholar] [CrossRef] [Green Version]
  122. Kringelbach, M.L.; Cruzat, J.; Cabral, J.; Knudsen, G.M.; Carhart-Harris, R.; Whybrow, P.C.; Logothetis, N.K.; Deco, G. Dynamic coupling of whole-brain neuronal and neurotransmitter systems. Proc. Natl. Acad. Sci. USA 2020. [Google Scholar] [CrossRef] [Green Version]
  123. Deco, G.; Cruzat, J.; Cabral, J.; Tagliazucchi, E.; Laufs, H.; Logothetis, N.K.; Kringelbach, M.L. Awakening: Predicting external stimulation to force transitions between different brain states. Proc. Natl. Acad. Sci. USA 2019. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Ipiña, I.P.; Kehoe, P.D.; Kringelbach, M.; Laufs, H.; Ibañez, A.; Deco, G.; Perl, Y.S.; Tagliazucchi, E. Modeling regional changes in dynamic stability during sleep and wakefulness. NeuroImage 2020. [Google Scholar] [CrossRef] [PubMed]
  125. Renart, A.; Brunel, N.; Wang, X.J. Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In Computational Neuroscience: A Comprehensive Approach; Feng, J., Ed.; CRC: Boca Raton, FL, USA, 2004; pp. 431–490. [Google Scholar]
  126. Friston, K.J.; Mechelli, A.; Turner, R.; Price, C.J. Nonlinear responses in fMRI: The Balloon model, Volterra kernels, and other hemodynamics. NeuroImage 2000, 12, 466–477. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  127. Deco, G.; Cabral, J.; Saenger, V.M.; Boly, M.; Tagliazucchi, E.; Laufs, H.; Van Someren, E.; Jobst, B.; Stevner, A.; Kringelbach, M.L. Perturbation of whole-brain dynamics in silico reveals mechanistic differences between brain states. NeuroImage 2018. [Google Scholar] [CrossRef] [PubMed]
  128. Marsden, J.E.; McCracken, M. The Hopf Bifurcation and Its Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 19. [Google Scholar]
  129. Deco, G.; Kringelbach, M.L.; Jirsa, V.K.; Ritter, P. The dynamics of resting fluctuations in the brain: Metastability and its dynamical cortical core. Sci. Rep. 2017. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  130. Shanahan, M. Metastable chimera states in community-structured oscillator networks. Chaos Interdiscip. J. Nonlinear Sci. 2010, 20, 013108. [Google Scholar] [CrossRef] [Green Version]
  131. Dehaene, S.; Changeux, J.P. Experimental and theoretical approaches to conscious processing. Neuron 2011. [Google Scholar] [CrossRef] [Green Version]
  132. Tononi, G.; Edelman, G.M. Consciousness and complexity. Science 1998, 282, 1846–1851. [Google Scholar] [CrossRef]
  133. Bocaccio, H.; Pallavicini, C.; Castro, M.N.; Sánchez, S.M.; De Pino, G.; Laufs, H.; Villarreal, M.F.; Tagliazucchi, E. The avalanche-like behaviour of large-scale haemodynamic activity from wakefulness to deep sleep. J. R. Soc. Interface 2019. [Google Scholar] [CrossRef] [Green Version]
  134. Solovey, G.; Alonso, L.M.; Yanagawa, T.; Fujii, N.; Magnasco, M.O.; Cecchi, G.A.; Proekt, A. Loss of consciousness is associated with stabilization of cortical activity. J. Neurosci. 2015, 35, 10866–10877. [Google Scholar] [CrossRef]
  135. Alonso, L.M.; Proekt, A.; Schwartz, T.H.; Pryor, K.O.; Cecchi, G.A.; Magnasco, M.O. Dynamical criticality during induction of anesthesia in human ECoG recordings. Front. Neural Circuits 2014, 8, 20. [Google Scholar] [CrossRef] [PubMed]
  136. Cavanna, F.; Vilas, M.G.; Palmucci, M.; Tagliazucchi, E. Dynamic functional connectivity and brain metastability during altered states of consciousness. Neuroimage 2018, 180, 383–395. [Google Scholar] [CrossRef] [Green Version]
  137. Chialvo, D.R. Emergent complex neural dynamics: The brain at the edge. Nat. Phys. 2010, 6, 744–750. [Google Scholar] [CrossRef] [Green Version]
  138. Damoiseaux, J.S.; Rombouts, S.; Barkhof, F.; Scheltens, P.; Stam, C.J.; Smith, S.M.; Beckmann, C.F. Consistent resting-state networks across healthy subjects. Proc. Natl. Acad. Sci. USA 2006, 103, 13848–13853. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  139. Perl, Y.S.; Pallavicini, C.; Ipina, I.P.; Demertzi, A.; Bonhomme, V.; Martial, C.; Panda, R.; Annen, J.; Ibanez, A.; Kringelbach, M.; et al. Perturbations in dynamical models of whole-brain activity dissociate between the level and stability of consciousness. bioRxiv 2020. [Google Scholar] [CrossRef]
  140. Smith, S.M.; Vidaurre, D.; Beckmann, C.F.; Glasser, M.F.; Jenkinson, M.; Miller, K.L.; Nichols, T.E.; Robinson, E.C.; Salimi-Khorshidi, G.; Woolrich, M.W.; et al. Functional connectomics from resting-state fMRI. Trends Cogn. Sci. 2013, 17, 666–682. [Google Scholar] [CrossRef] [Green Version]
  141. Cavanna, A.E.; Trimble, M.R. The precuneus: A review of its functional anatomy and behavioural correlates. Brain 2006, 129, 564–583. [Google Scholar] [CrossRef] [Green Version]
  142. Andersen, L.M.; Pedersen, M.N.; Sandberg, K.; Overgaard, M. Occipital MEG activity in the early time range (<300 ms) predicts graded changes in perceptual consciousness. Cereb. Cortex 2016. [Google Scholar] [CrossRef] [Green Version]
  143. Utevsky, A.V.; Smith, D.V.; Huettel, S.A. Precuneus is a functional core of the default-mode network. J. Neurosci. 2014. [Google Scholar] [CrossRef] [Green Version]
  144. Thom, R. Prédire n’est pas Expliquer; Eshel: Paris, France, 1997. [Google Scholar]
  145. Fernández-Espejo, D.; Bekinschtein, T.; Monti, M.M.; Pickard, J.D.; Junque, C.; Coleman, M.R.; Owen, A.M. Diffusion weighted imaging distinguishes the vegetative state from the minimally conscious state. Neuroimage 2011, 54, 103–112. [Google Scholar] [CrossRef]
  146. Kubicki, M.; Park, H.; Westin, C.F.; Nestor, P.G.; Mulkern, R.V.; Maier, S.E.; Niznikiewicz, M.; Connor, E.E.; Levitt, J.J.; Frumin, M.; et al. DTI and MTR abnormalities in schizophrenia: Analysis of white matter integrity. Neuroimage 2005, 26, 1109–1118. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  147. Adhikari, M.H.; Hacker, C.D.; Siegel, J.S.; Griffa, A.; Hagmann, P.; Deco, G.; Corbetta, M. Decreased integration and information capacity in stroke measured by whole brain models of resting state activity. Brain 2017, 140, 1068–1085. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  148. Haimovici, A.; Balenzuela, P.; Tagliazucchi, E. Dynamical signatures of structural connectivity damage to a model of the brain posed at criticality. Brain Connect. 2016, 6, 759–771. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  149. Beuter, A.; Balossier, A.; Vassal, F.; Hemm, S.; Volpert, V. Cortical stimulation in aphasia following ischemic stroke: Toward model-guided electrical neuromodulation. Biol. Cybern. 2020, 114, 5–21. [Google Scholar] [CrossRef] [PubMed]
  150. Váša, F.; Shanahan, M.; Hellyer, P.J.; Scott, G.; Cabral, J.; Leech, R. Effects of lesions on synchrony and metastability in cortical networks. Neuroimage 2015, 118, 456–467. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  151. Hellyer, P.J.; Scott, G.; Shanahan, M.; Sharp, D.J.; Leech, R. Cognitive flexibility through metastable neural dynamics is disrupted by damage to the structural connectome. J. Neurosci. 2015, 35, 9050–9063. [Google Scholar] [CrossRef] [Green Version]
  152. Sinha, N.; Dauwels, J.; Kaiser, M.; Cash, S.S.; Brandon Westover, M.; Wang, Y.; Taylor, P.N. Predicting neurosurgical outcomes in focal epilepsy patients using computational modelling. Brain 2017, 140, 319–332. [Google Scholar] [CrossRef] [Green Version]
  153. Richardson, M.P. Large scale brain models of epilepsy: Dynamics meets connectomics. J. Neurol. Neurosurg. Psychiatry 2012, 83, 1238–1248. [Google Scholar] [CrossRef] [Green Version]
  154. Aerts, H.; Schirner, M.; Dhollander, T.; Jeurissen, B.; Achten, E.; Van Roost, D.; Ritter, P.; Marinazzo, D. Modeling brain dynamics after tumor resection using The Virtual Brain. NeuroImage 2020, 213, 116738. [Google Scholar] [CrossRef]
  155. Bansal, K.; Nakuci, J.; Muldoon, S.F. Personalized brain network models for assessing structure–function relationships. Curr. Opin. Neurobiol. 2018, 52, 42–47. [Google Scholar] [CrossRef] [Green Version]
  156. Jirsa, V.K.; Proix, T.; Perdikis, D.; Woodman, M.M.; Wang, H.; Gonzalez-Martinez, J.; Bernard, C.; Bénar, C.; Guye, M.; Chauvel, P.; et al. The virtual epileptic patient: Individualized whole-brain models of epilepsy spread. Neuroimage 2017, 145, 377–388. [Google Scholar] [CrossRef] [PubMed]
  157. Nichols, D.E. Psychedelics. Pharmacol. Rev. 2016, 68, 264–355. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  158. Howes, O.D.; Kapur, S. The dopamine hypothesis of schizophrenia: Version III—The final common pathway. Schizophr. Bull. 2009, 35, 549–562. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  159. Peduto, V.; Concas, A.; Santoro, G.; Biggio, G.; Gessa, G. Biochemical and electrophysiologic evidence that propofol enhances GABAergic transmission in the rat brain. Anesthesiol. J. Am. Soc. Anesthesiol. 1991, 75, 1000–1009. [Google Scholar] [CrossRef]
  160. Jouvet, M. The role of monoamines and acetylcholine-containing neurons in the regulation of the sleep-waking cycle. In Neurophysiology and Neurochemistry of Sleep and Wakefulness; Springer: Berlin/Heidelberg, Germany, 1972; pp. 166–307. [Google Scholar]
  161. Gao, R.; Peterson, E.J.; Voytek, B. Inferring synaptic excitation/inhibition balance from field potentials. Neuroimage 2017, 158, 70–78. [Google Scholar] [CrossRef]
  162. El Houssaini, K.; Bernard, C.; Jirsa, V.K. The Epileptor model: A systematic mathematical analysis linked to the dynamics of seizures, refractory status epilepticus and depolarization block. Eneuro 2020, 7. [Google Scholar] [CrossRef] [Green Version]
  163. Hermann, B.; Raimondo, F.; Hirsch, L.; Huang, Y.; Denis-Valente, M.; Pérez, P.; Engemann, D.; Faugeras, F.; Weiss, N.; Demeret, S.; et al. Combined behavioral and electrophysiological evidence for a direct cortical effect of prefrontal tDCS on disorders of consciousness. Sci. Rep. 2020, 10, 1–16. [Google Scholar] [CrossRef]
  164. An, S.; Bartolomei, F.; Guye, M.; Jirsa, V. Optimization of surgical intervention outside the epileptogenic zone in the Virtual Epileptic Patient (VEP). PLoS Comput. Biol. 2019, 15, e1007051. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  165. Perl, Y.S.; Pallacivini, C.; Ipina, I.P.; Kringelbach, M.L.; Deco, G.; Laufs, H.; Tagliazucchi, E. Data augmentation based on dynamical systems for the classification of brain states. bioRxiv 2020. [Google Scholar] [CrossRef]
  166. Perl, Y.S.; Boccacio, H.; Pérez-Ipiña, I.; Zamberlán, F.; Laufs, H.; Kringelbach, M.; Deco, G.; Tagliazucchi, E. Generative embeddings of brain collective dynamics using variational autoencoders. arXiv 2020, arXiv:2007.01378. [Google Scholar]
  167. Herzog, R.; Mediano, P.A.; Rosas, F.E.; Carhart-Harris, R.; Sanz, Y.; Tagliazucchi, E.; Cofré, R. A mechanistic model of the neural entropy increase elicited by psychedelic drugs. bioRxiv 2020. [Google Scholar] [CrossRef]
  168. Kraehenmann, R.; Pokorny, D.; Vollenweider, L.; Preller, K.; Pokorny, T.; Seifritz, E.; Vollenweider, F. Dreamlike effects of LSD on waking imagery in humans depend on serotonin 2A receptor activation. Psychopharmacology 2017, 234, 2031–2046. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  169. Preller, K.H.; Burt, J.B.; Ji, J.L.; Schleifer, C.H.; Adkinson, B.D.; Stämpfli, P.; Seifritz, E.; Repovs, G.; Krystal, J.H.; Murray, J.D.; et al. Changes in global and thalamic brain connectivity in LSD-induced altered states of consciousness are attributable to the 5-HT2A receptor. eLife 2018, 7, e35082. [Google Scholar] [CrossRef] [PubMed]
  170. Searle, J.R. Biological naturalism. In The Blackwell Companion to Consciousness; Wiley: Hoboken, NJ, USA, 2007; Chapter 23; pp. 325–334. [Google Scholar]
  171. Hameroff, S. Quantum computation in brain microtubules? The Penrose–Hameroff ‘Orch OR’ model of consciousness. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 356, 1869–1896. [Google Scholar] [CrossRef] [Green Version]
  172. Murdock, J. Normal forms. Scholarpedia 2006, 1, 1902. [Google Scholar] [CrossRef]
  173. Berry, R.B.; Budhiraja, R.; Gottlieb, D.J.; Gozal, D.; Iber, C.; Kapur, V.K.; Marcus, C.L.; Mehra, R.; Parthasarathy, S.; Quan, S.F.; et al. Rules for scoring respiratory events in sleep: Update of the 2007 AASM manual for the scoring of sleep and associated events: Deliberations of the sleep apnea definitions task force of the American Academy of Sleep Medicine. J. Clin. Sleep Med. 2012, 8, 597–619. [Google Scholar] [CrossRef] [Green Version]
  174. Rosow, C.; Manberg, P.J. Bispectral index monitoring. Anesthesiol. Clin. N. Am. 2001, 19, 947–966. [Google Scholar] [CrossRef]
  175. Schiff, N.D.; Nauvel, T.; Victor, J.D. Large-scale brain dynamics in disorders of consciousness. Curr. Opin. Neurobiol. 2014, 25, 7–14. [Google Scholar] [CrossRef] [Green Version]
  176. Schartner, M.; Seth, A.; Noirhomme, Q.; Boly, M.; Bruno, M.A.; Laureys, S.; Barrett, A. Complexity of multi-dimensional spontaneous EEG decreases during propofol induced general anaesthesia. PLoS ONE 2015. [Google Scholar] [CrossRef]
  177. Batterman, R.W. Multiple realizability and universality. Br. J. Philos. Sci. 2000, 51, 115–145. [Google Scholar] [CrossRef]
  178. Northoff, G.; Wainio-Theberge, S.; Evers, K. Is temporo-spatial dynamics the “common currency” of brain and mind? In Quest of “Spatiotemporal Neuroscience”. Phys. Life Rev. 2019, 33, 34–54. [Google Scholar] [CrossRef] [PubMed]
  179. Guckenheimer, J.; Kuznetsov, Y.A. Bogdanov-Takens bifurcation. Scholarpedia 2007, 2, 1854. [Google Scholar] [CrossRef]
  180. Mindlin, G. Dinámica no Lineal; Universidad Nacional de Quilmes: Bernal, Argentina, 2017. [Google Scholar]
  181. Rolls, E.T.; Deco, G. The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function; Oxford University Press: Oxford, UK, 2010; Volume 34. [Google Scholar]
  182. Chizhov, A.V.; Zefirov, A.V.; Amakhin, D.V.; Smirnova, E.Y.; Zaitsev, A.V. Minimal model of interictal and ictal discharges “Epileptor-2”. PLoS Comput. Biol. 2018, 14, e1006186. [Google Scholar] [CrossRef] [PubMed]
  183. Letellier, C.; Rossler, O.E. Rossler attractor. Scholarpedia 2006, 1, 1721. [Google Scholar] [CrossRef]
  184. Shulgin, A.; Shulgin, A. PiHKAL: A Chemical Love Story; Transform Press: Berkeley, CA, USA, 1992. [Google Scholar]
  185. Shulgin, A.; Shulgin, A. TiHKAL: The Continuation; Transform Press: Berkeley, CA, USA, 1997. [Google Scholar]
  186. Velmans, M. Towards a Deeper Understanding of Consciousness: Selected Works of Max Velmans; Routledge: Abingdon, UK, 2016. [Google Scholar]
  187. Kelly, E.F.; Kelly, E.W.; Crabtree, A.; Gauld, A.; Grosso, M. Irreducible Mind: Toward a Psychology for the 21st Century; Rowman and Littlefield: Lanham, MD, USA, 2007. [Google Scholar]
  188. Miller, G. Beyond DSM: Seeking a brain-based classification of mental illness. Science 2010, 327, 1437. [Google Scholar] [CrossRef]
  189. Deco, G.; Kringelbach, M.L. Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron 2014, 84, 892–905. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  190. Murray, J.D.; Demirtaş, M.; Anticevic, A. Biophysical modeling of large-scale brain dynamics and applications for computational psychiatry. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 2018, 3, 777–787. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Workflow describing the construction of whole-brain models. First, model inputs are determined based on anatomical connectivity, a brain parcellation (representing a certain coarse graining), and the local dynamics (left). Each region defined by the parcellation is endowed with a specific connectivity profile and local dynamics. Then, the model can be optimized to generate data as similar as possible to the brain activity observed during conscious wakefulness. Generally, this similarity is determined by certain statistical properties of the empirical brain signals, which constitute the target observable. The same or another observable is obtained from subjects during altered states of consciousness and used again as the target of an optimization algorithm to infer model parameters. Following a given working hypothesis, the model for wakeful consciousness can be perturbed in such a way that optimizes the similarity between the target observable for the altered state of consciousness and the data generated by the model. In this way, a whole-brain model for an altered state of consciousness can be used to test working hypotheses about its mechanistic underpinnings.
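To make this workflow concrete, the sketch below is our own minimal illustration (it is not code from any of the studies cited here, and both the connectome and the "empirical" functional connectivity are synthetic placeholders). It couples Stuart-Landau (Hopf) oscillators through a structural matrix and tunes a single global coupling parameter so that the simulated functional connectivity best matches the target observable:

```python
# Minimal sketch of the Figure 1 workflow: fit a global coupling G of a coupled
# Hopf whole-brain model so that simulated FC matches an empirical FC matrix.
# All inputs below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 20                                    # number of regions in the parcellation
SC = rng.random((N, N)); SC = (SC + SC.T) / 2; np.fill_diagonal(SC, 0)   # placeholder connectome
FC_emp = np.corrcoef(rng.random((N, 500)))                               # placeholder empirical FC
omega = 2 * np.pi * 0.05                  # intrinsic frequency (~0.05 Hz, fMRI band)
a, sigma, dt, T = -0.02, 0.02, 0.1, 2000  # bifurcation parameter, noise, time step, steps

def simulate(G):
    """Integrate the coupled Hopf model (Euler-Maruyama) and return simulated FC."""
    z = 0.1 * (rng.random(N) + 1j * rng.random(N))
    x = np.empty((T, N))
    for t in range(T):
        coupling = G * (SC @ z - SC.sum(1) * z)           # diffusive coupling through SC
        dz = (a + 1j * omega - np.abs(z) ** 2) * z + coupling
        z = z + dt * dz + sigma * np.sqrt(dt) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
        x[t] = z.real                                      # BOLD-like model signal
    return np.corrcoef(x.T)

# Grid search over the global coupling: the "optimization" step of the workflow.
iu = np.triu_indices(N, 1)
best = max(np.linspace(0.0, 0.5, 11),
           key=lambda G: np.corrcoef(simulate(G)[iu], FC_emp[iu])[0, 1])
print("Coupling that best reproduces the empirical FC:", best)
```

In an actual application, the placeholder arrays would be replaced by a diffusion-MRI connectome and by the observable measured during wakefulness or an altered state, and the grid search by a more efficient parameter-inference scheme.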
Figure 2. Representation of the three key variables that can be modified to construct whole-brain models of different altered states of consciousness: the local dynamics, the anatomical connectivity, and priors related to neuromodulatory systems. Together, these variables are needed to accommodate physiological, pathological, and pharmacologically induced altered states of consciousness. Certain states may require the modification of multiple variables; for instance, focal seizures and propofol-induced anaesthesia are both associated with low-complexity patterns of brain activity; yet, in the first case, these dynamics reflect structural abnormalities, while, in the second case, they reflect the activation of certain inhibitory pathways.
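The three variables of Figure 2 can be thought of as fields of a single model configuration, with different altered states obtained by perturbing different fields of the same baseline model. The snippet below is only a schematic illustration: the class, field names, and values are hypothetical and chosen for clarity, not taken from any published model.

```python
# Schematic (hypothetical) configuration holding the three variables of Figure 2.
from dataclasses import dataclass, replace
import numpy as np

@dataclass
class WholeBrainConfig:
    bifurcation_a: np.ndarray   # local dynamics: one bifurcation parameter per region
    connectome: np.ndarray      # anatomical connectivity between regions
    receptor_gain: np.ndarray   # neuromodulatory prior, e.g., a regional receptor density map

N = 20
baseline = WholeBrainConfig(
    bifurcation_a=np.full(N, -0.02),
    connectome=np.ones((N, N)) - np.eye(N),
    receptor_gain=np.ones(N),
)

# Pharmacological state: local dynamics scaled by the receptor map (neuromodulation).
psychedelic = replace(baseline,
                      bifurcation_a=baseline.bifurcation_a * (1.0 + baseline.receptor_gain))

# Pathological state: a structural change, e.g., disconnecting one region.
lesioned_sc = baseline.connectome.copy()
lesioned_sc[0, :] = 0.0
lesioned_sc[:, 0] = 0.0
lesioned = replace(baseline, connectome=lesioned_sc)
```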
Figure 3. Left panel: Takens-Bogdanov bifurcation diagram, which is obtained by changing parameters α and β in the normal form equations (included as an inset). Depending on the combination of parameters, this simple dynamical system can present qualitatively different solutions. The green line marks a saddle-node bifurcation, where two equilibrium points collide and disappear. Crossing the red line results in a Hopf bifurcation, where dynamics switch from a fixed point to stable harmonic oscillations. The dashed line represents a homoclinic bifurcation, where the limit cycle collides with a saddle point, resulting again in steady dynamics. Right panel: The phase portraits (a–f) illustrate the dynamics at different regions of the bifurcation diagram, with individual trajectories highlighted in red and presented both as curves in phase space and as time series. (a) Stable fixed point; (b) Self-sustained harmonic oscillation after the appearance of a stable limit cycle; (c) Three fixed points appear due to a saddle-node bifurcation, resulting in a stable fixed point; (d) One of the stable fixed points loses its stability and dynamics undergo a Hopf bifurcation; (e) The limit cycle undergoes a homoclinic bifurcation; (f) A saddle-node on an invariant circle (SNIC) bifurcation occurs, resulting in periodic dynamics with complex spectral content. For a detailed description of the Takens-Bogdanov bifurcation, see Ref. [179]. Left panel adapted from Ref. [180].
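The regimes shown in Figure 3 can be reproduced numerically from the normal form. The sketch below integrates one standard parameterization of the Bogdanov-Takens normal form (see Ref. [179]); the two parameters play the role of the figure's (α, β) up to convention, and the specific values are our own arbitrary choices intended to fall between the Hopf and homoclinic curves, where a stable limit cycle surrounds an unstable focus. They are not taken from the figure.

```python
# Integrate one standard Bogdanov-Takens normal form (s = -1 convention) in the
# oscillatory regime between the Hopf and homoclinic bifurcation curves.
import numpy as np
from scipy.integrate import solve_ivp

def bogdanov_takens(t, state, b1, b2):
    """dx/dt = y ; dy/dt = b1 + b2*x + x**2 - x*y."""
    x, y = state
    return [y, b1 + b2 * x + x ** 2 - x * y]

# b1 = -0.1, b2 = -1.0 lies between the Hopf curve (b1 = 0, b2 < 0) and the
# homoclinic curve, so trajectories started near the unstable focus spiral out
# onto a stable limit cycle.
sol = solve_ivp(bogdanov_takens, (0, 200), [-0.05, 0.0], args=(-0.1, -1.0),
                dense_output=True, max_step=0.05)
t = np.linspace(100, 200, 2000)            # discard the transient approach to the cycle
x, y = sol.sol(t)
print("Peak-to-peak amplitude on the limit cycle:", np.ptp(x))
```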
Figure 4. Upper panel: Phase space of a single Stuart-Landau nonlinear oscillator near dynamical criticality (Hopf bifurcation) with an additive noise term. The radius of the limit cycle fluctuates unpredictably, resulting in complex signal amplitude modulations. Lower panel: Phase space of a chaotic Rossler oscillator in a regime with a positive Lyapunov exponent, without added noise. Dynamics unfold in the proximity of a strange attractor, which results in complex but deterministic dynamics.
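The two regimes contrasted in Figure 4 are straightforward to simulate. The sketch below (our illustration; parameter values are illustrative rather than fitted to any data, and a simple Euler scheme is used for brevity) integrates a noise-driven Stuart-Landau oscillator just below its Hopf bifurcation and a deterministic Rossler system with the classic chaotic parameters a = 0.2, b = 0.2, c = 5.7:

```python
# Stochastic near-critical Stuart-Landau oscillator vs. deterministic chaotic Rossler system.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.01, 100_000

# Stuart-Landau just below the Hopf point: noise keeps exciting the nearly marginal
# oscillation, producing irregular amplitude fluctuations.
a, omega, sigma = -0.01, 1.0, 0.1
z = 0.1 + 0.0j
sl = np.empty(steps)
for i in range(steps):
    noise = sigma * np.sqrt(dt) * (rng.standard_normal() + 1j * rng.standard_normal())
    z = z + dt * (a + 1j * omega - abs(z) ** 2) * z + noise
    sl[i] = z.real

# Rossler system: no noise, yet the trajectory wanders aperiodically on a strange attractor.
ar, br, cr = 0.2, 0.2, 5.7
state = np.array([1.0, 1.0, 1.0])
ros = np.empty(steps)
for i in range(steps):
    x, y, w = state
    state = state + dt * np.array([-y - w, x + ar * y, br + w * (x - cr)])
    ros[i] = state[0]

print("Stuart-Landau amplitude std:", sl.std(), "| Rossler x range:", np.ptp(ros))
```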
Table 1. Categories of altered states of consciousness.
Category                  Examples                      Reversibility
Natural or endogenous     deep sleep                    transitory
                          dreaming                      transitory
Pharmacological           general anaesthesia           transitory
                          psychedelic state             transitory
Induced by other means    meditation                    transitory
                          hypnosis                      transitory
Pathological              epilepsy                      transitory
                          psychotic episodes            transitory
                          disorders of consciousness    transitory or permanent
                          brain death                   permanent
