Foundations of Biological Computation

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Entropy and Biology".

Deadline for manuscript submissions: closed (1 December 2021) | Viewed by 33671

Special Issue Editors


Prof. Dr. David Wolpert
Guest Editor
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
Interests: nonequilibrium physics and computation; structure and dynamics of social organizations; foundations of physics; statistical inference; modeling microbiome consortia

Prof. Dr. Jessica Flack
Guest Editor
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
Interests: collective computation; collective phenomena; robustness; origins of biological space–time; individuality; micro to macro; emergent engineering; downward causation; emergence

Special Issue Information

Dear Colleagues,

Many researchers view it as a truism that “biological systems compute.” Commonly cited examples include: heart cells expanding and contracting via intracellular nanotube circuits, gene regulatory networks controlling the gene expression that shapes the embryo body plan, “microbial power grids” shuffling electrons to compute metabolic functions, neural circuits computing decisions at the whole-organism level, social insect colonies adaptively adjusting nest architecture in response to environmental changes, primate social circuits computing aspects of social structure such as the distribution of power, and human voting systems designed to identify the consensus view of the best presidential candidate. Even natural selection itself is often said to be a computation.

What unifies these examples of biological systems computing? Which biological systems don’t compute? Can we make progress on understanding computation in biology by looking at biological systems through the lens of computer science theory (either the current kind, designed for digital systems, or some other variant)? Might thermodynamics also provide insights?

We welcome articles that investigate these fundamental issues, taking the broadest view of both computational theory and biological systems, in order to identify new research paths within and across computer science, biology and non-equilibrium statistical physics. Our goal is to lay the groundwork for the development of formal language(s) for biological computation that are mechanistically principled, taking seriously the universal, collective property of biological systems and constraints imposed by thermodynamics.

We specifically welcome contributions that focus on one or more of the four following themes:

1) Identification of the basic elements and mechanics of computation in biological systems to include thus far understudied collective properties of computation;

2) The role of energy, thermodynamics, and information transformation in structuring biological computation;

3) Identification of principles shared with electronic computing systems;

4) Promising directions for future research, including how mechanistic insights might guide development of a formal language for biological computation.

Prof. Dr. David Wolpert
Prof. Dr. Jessica Flack
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computation in nature
  • information processing
  • information theory
  • thermodynamics
  • non-equilibrium statistical physics
  • inference
  • levels of organization
  • collective phenomena
  • biological circuits
  • mechanistic logic

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

18 pages, 1528 KiB  
Article
Compositional Sequence Generation in the Entorhinal–Hippocampal System
by Daniel C. McNamee, Kimberly L. Stachenfeld, Matthew M. Botvinick and Samuel J. Gershman
Entropy 2022, 24(12), 1791; https://doi.org/10.3390/e24121791 - 8 Dec 2022
Cited by 5 | Viewed by 2872
Abstract
Neurons in the medial entorhinal cortex exhibit multiple, periodically organized, firing fields which collectively appear to form an internal representation of space. Neuroimaging data suggest that this grid coding is also present in other cortical areas such as the prefrontal cortex, indicating that it may be a general principle of neural functionality in the brain. In a recent analysis through the lens of dynamical systems theory, we showed how grid coding can lead to the generation of a diversity of empirically observed sequential reactivations of hippocampal place cells corresponding to traversals of cognitive maps. Here, we extend this sequence generation model by describing how the synthesis of multiple dynamical systems can support compositional cognitive computations. To empirically validate the model, we simulate two experiments demonstrating compositionality in space or in time during sequence generation. Finally, we describe several neural network architectures supporting various types of compositionality based on grid coding and highlight connections to recent work in machine learning leveraging analogous techniques. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)

23 pages, 1720 KiB  
Article
Information Fragmentation, Encryption and Information Flow in Complex Biological Networks
by Clifford Bohm, Douglas Kirkpatrick, Victoria Cao and Christoph Adami
Entropy 2022, 24(5), 735; https://doi.org/10.3390/e24050735 - 21 May 2022
Cited by 5 | Viewed by 3591
Abstract
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)
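The fragmentation analysis described in this abstract rests on comparing how much information different subsets of nodes carry about a network's output. As a rough, self-contained illustration of that idea (not the authors' implementation; the toy network and all names below are invented for the example), the sketch scores every one- and two-node subset of a three-node Boolean network against an XOR output, the classic case of "encrypted" information in which each node alone carries zero bits while one pair carries the full bit:

```python
from itertools import combinations, product
from collections import Counter
from math import log2

def mutual_information(samples, idx):
    """I(X_idx ; Y) estimated from joint samples of (node_states, output)."""
    n = len(samples)
    joint = Counter((tuple(x[i] for i in idx), y) for x, y in samples)
    px = Counter(tuple(x[i] for i in idx) for x, _ in samples)
    py = Counter(y for _, y in samples)
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

# Toy "network": three binary nodes; the output is the XOR of nodes 0 and 1.
samples = [((a, b, c), a ^ b) for a, b, c in product([0, 1], repeat=3)]

for k in (1, 2):
    for idx in combinations(range(3), k):
        print(idx, round(mutual_information(samples, idx), 3))
```

Every single node, and every pair omitting node 0 or node 1, scores zero bits; only the pair (0, 1) scores one full bit, information present in the network yet invisible to any single-node probe.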

21 pages, 9215 KiB  
Article
Probabilistic Inference with Polymerizing Biochemical Circuits
by Yarden Katz and Walter Fontana
Entropy 2022, 24(5), 629; https://doi.org/10.3390/e24050629 - 29 Apr 2022
Cited by 4 | Viewed by 2081
Abstract
Probabilistic inference—the process of estimating the values of unobserved variables in probabilistic models—has been used to describe various cognitive phenomena related to learning and memory. While the study of biological realizations of inference has focused on animal nervous systems, single-celled organisms also show complex and potentially “predictive” behaviors in changing environments. Yet, it is unclear how the biochemical machinery found in cells might perform inference. Here, we show how inference in a simple Markov model can be approximately realized, in real-time, using polymerizing biochemical circuits. Our approach relies on assembling linear polymers that record the history of environmental changes, where the polymerization process produces molecular complexes that reflect posterior probabilities. We discuss the implications of realizing inference using biochemistry, and the potential of polymerization as a form of biological information-processing. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)
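The computation that the paper's polymerizing circuits approximate is recursive Bayesian filtering in a Markov model. For readers who want the reference computation in conventional form, here is a minimal discrete forward filter; this is the standard construction, not the paper's biochemical scheme, and the two-state environment with its transition and emission numbers is hypothetical:

```python
def forward_filter(prior, transition, emission, observations):
    """Recursive Bayesian filtering in a discrete hidden Markov model:
    posterior is proportional to P(obs | state) times the one-step prediction."""
    post = list(prior)
    history = []
    for obs in observations:
        # Predict: propagate the previous posterior through the transition matrix.
        pred = [sum(transition[i][j] * post[i] for i in range(len(post)))
                for j in range(len(post))]
        # Update: weight by the observation likelihood, then normalise.
        unnorm = [emission[j][obs] * pred[j] for j in range(len(pred))]
        z = sum(unnorm)
        post = [u / z for u in unnorm]
        history.append(post)
    return history

# Hypothetical two-state environment ("scarce" = 0, "rich" = 1), noisy readouts.
T = [[0.9, 0.1], [0.1, 0.9]]   # state persistence probabilities
E = [[0.8, 0.2], [0.2, 0.8]]   # E[state][observation] = P(obs | state)
posteriors = forward_filter([0.5, 0.5], T, E, observations=[1, 1, 0, 1])
print([round(p[1], 3) for p in posteriors])
```

The paper's contribution is realizing this recursion in molecular hardware: polymer lengths record observation history, and the distribution of molecular complexes stands in for the normalized posterior.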

24 pages, 1574 KiB  
Article
A Family of Fitness Landscapes Modeled through Gene Regulatory Networks
by Chia-Hung Yang and Samuel V. Scarpino
Entropy 2022, 24(5), 622; https://doi.org/10.3390/e24050622 - 29 Apr 2022
Cited by 2 | Viewed by 3868
Abstract
Fitness landscapes are a powerful metaphor for understanding the evolution of biological systems. These landscapes describe how genotypes are connected to each other through mutation and related through fitness. Empirical studies of fitness landscapes have increasingly revealed conserved topographical features across diverse taxa, e.g., the accessibility of genotypes and “ruggedness”. As a result, theoretical studies are needed to investigate how evolution proceeds on fitness landscapes with such conserved features. Here, we develop and study a model of evolution on fitness landscapes using the lens of Gene Regulatory Networks (GRNs), where the regulatory products are computed from multiple genes and collectively treated as phenotypes. With the assumption that regulation is a binary process, we prove the existence of empirically observed, topographical features such as accessibility and connectivity. We further show that these results hold across arbitrary fitness functions and that a trade-off between accessibility and ruggedness need not exist. Then, using graph theory and a coarse-graining approach, we deduce a mesoscopic structure underlying GRN fitness landscapes where the information necessary to predict a population’s evolutionary trajectory is retained with minimal complexity. Using this coarse-graining, we develop a bottom-up algorithm to construct such mesoscopic backbones, which does not require computing the genotype network and is therefore far more efficient than brute-force approaches. Altogether, this work provides mathematical results of high-dimensional fitness landscapes and a path toward connecting theory to empirical studies. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)
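A central property in this abstract is accessibility: the existence of mutational paths to a fitness peak along which fitness never decreases. As a toy illustration of how such a property can be checked directly (a generic sketch, not the authors' model; the three-gene regulatory rule and the fitness table are invented), treat genotypes as bitstrings connected by single mutations:

```python
from itertools import product

def neighbors(g):
    """All single-mutation (Hamming distance 1) neighbours of a genotype."""
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(len(g))]

def accessible(g, fitness, target):
    """True if a mutational path from g to target exists along which
    fitness never decreases (depth-first search over genotypes)."""
    stack, seen = [g], {g}
    while stack:
        cur = stack.pop()
        if cur == target:
            return True
        for nb in neighbors(cur):
            if nb not in seen and fitness(nb) >= fitness(cur):
                seen.add(nb)
                stack.append(nb)
    return False

# Invented three-gene regulatory rule: two binary "regulatory products".
def phenotype(g):
    a, b, c = g
    return (bool(a and not c), bool(b or c))

# Fitness depends only on the phenotype, not on the genotype directly.
def fitness(g):
    return {(True, True): 3, (True, False): 2,
            (False, True): 1, (False, False): 0}[phenotype(g)]

genotypes = list(product((0, 1), repeat=3))
peak = max(genotypes, key=fitness)
reachable = [g for g in genotypes if accessible(g, fitness, peak)]
print(f"{len(reachable)} of {len(genotypes)} genotypes can reach the peak")
```

Brute-force enumeration like this scales exponentially in genome length, which is precisely the motivation for the paper's coarse-grained "mesoscopic backbone" construction.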

22 pages, 6541 KiB  
Article
Minimal Developmental Computation: A Causal Network Approach to Understand Morphogenetic Pattern Formation
by Santosh Manicka and Michael Levin
Entropy 2022, 24(1), 107; https://doi.org/10.3390/e24010107 - 10 Jan 2022
Cited by 12 | Viewed by 6121
Abstract
What information-processing strategies and general principles are sufficient to enable self-organized morphogenesis in embryogenesis and regeneration? We designed and analyzed a minimal model of self-scaling axial patterning consisting of a cellular network that develops activity patterns within implicitly set bounds. The properties of the cells are determined by internal ‘genetic’ networks with an architecture shared across all cells. We used machine-learning to identify models that enable this virtual mini-embryo to pattern a typical axial gradient while simultaneously sensing the set boundaries within which to develop it from homogeneous conditions—a setting that captures the essence of early embryogenesis. Interestingly, the model revealed several features (such as planar polarity and regenerative re-scaling capacity) for which it was not directly selected, showing how these common biological design principles can emerge as a consequence of simple patterning modes. A novel “causal network” analysis of the best model furthermore revealed that the originally symmetric model dynamically integrates into intercellular causal networks characterized by broken-symmetry, long-range influence and modularity, offering an interpretable macroscale-circuit-based explanation for phenotypic patterning. This work shows how computation could occur in biological development and how machine learning approaches can generate hypotheses and deepen our understanding of how featureless tissues might develop sophisticated patterns—an essential step towards predictive control of morphogenesis in regenerative medicine or synthetic bioengineering contexts. The tools developed here also have the potential to benefit machine learning via new forms of backpropagation and by leveraging the novel distributed self-representation mechanisms to improve robustness and generalization. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)

11 pages, 11378 KiB  
Article
Multiscale Information Propagation in Emergent Functional Networks
by Arsham Ghavasieh and Manlio De Domenico
Entropy 2021, 23(10), 1369; https://doi.org/10.3390/e23101369 - 19 Oct 2021
Cited by 5 | Viewed by 2703
Abstract
Complex biological systems consist of large numbers of interconnected units, characterized by emergent properties such as collective computation. In spite of all the progress in the last decade, we still lack a deep understanding of how these properties arise from the coupling between the structure and dynamics. Here, we introduce the multiscale emergent functional state, which can be represented as a network where links encode the flow exchange between the nodes, calculated using diffusion processes on top of the network. We analyze the emergent functional state to study the distribution of the flow among components of 92 fungal networks, identifying their functional modules at different scales and, more importantly, demonstrating the importance of functional modules for the information content of networks, quantified in terms of network spectral entropy. Our results suggest that the topological complexity of fungal networks guarantees the existence of functional modules at different scales keeping the information entropy, and functional diversity, high. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)
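The network spectral entropy used in this abstract is commonly defined as the von Neumann entropy of a density matrix built from the graph Laplacian, rho = exp(-beta L) / Tr[exp(-beta L)], following earlier work by De Domenico and collaborators. The sketch below computes it for two small graphs; the choice of beta = 1 and the example graphs are arbitrary and purely illustrative:

```python
import numpy as np

def spectral_entropy(adjacency, beta=1.0):
    """Von Neumann entropy (in bits) of the network density matrix
    rho = exp(-beta * L) / Tr[exp(-beta * L)], with L the graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eigvals = np.linalg.eigvalsh(L)      # L is symmetric for undirected graphs
    weights = np.exp(-beta * eigvals)
    p = weights / weights.sum()          # eigenvalues of rho
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Two illustrative 4-node graphs: a path and a star.
path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
star = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
print(round(spectral_entropy(path), 3), round(spectral_entropy(star), 3))
```

Because rho is diagonal in the Laplacian eigenbasis, the entropy reduces to a Shannon entropy over the Gibbs-weighted Laplacian spectrum, which is what the function computes.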

15 pages, 2712 KiB  
Article
A Drive towards Thermodynamic Efficiency for Dissipative Structures in Chemical Reaction Networks
by Kai Ueltzhöffer, Lancelot Da Costa, Daniela Cialfi and Karl Friston
Entropy 2021, 23(9), 1115; https://doi.org/10.3390/e23091115 - 27 Aug 2021
Cited by 5 | Viewed by 3475
Abstract
Dissipative accounts of structure formation show that the self-organisation of complex structures is thermodynamically favoured, whenever these structures dissipate free energy that could not be accessed otherwise. These structures therefore open transition channels for the state of the universe to move from a frustrated, metastable state to another metastable state of higher entropy. However, these accounts apply as well to relatively simple, dissipative systems, such as convection cells, hurricanes, candle flames, lightning strikes, or mechanical cracks, as they do to complex biological systems. Conversely, interesting computational properties—that characterize complex biological systems, such as efficient, predictive representations of environmental dynamics—can be linked to the thermodynamic efficiency of underlying physical processes. However, the potential mechanisms that underwrite the selection of dissipative structures with thermodynamically efficient subprocesses are not completely understood. We address these mechanisms by explaining how bifurcation-based, work-harvesting processes—required to sustain complex dissipative structures—might be driven towards thermodynamic efficiency. We first demonstrate a simple mechanism that leads to self-selection of efficient dissipative structures in a stochastic chemical reaction network, when the dissipated driving chemical potential difference is decreased. We then discuss how such a drive can emerge naturally in a hierarchy of self-similar dissipative structures, each feeding on the dissipative structures of a previous level, when moving away from the initial, driving disequilibrium. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)

Other

25 pages, 5408 KiB  
Perspective
Evolution of Brains and Computers: The Roads Not Taken
by Ricard Solé and Luís F. Seoane
Entropy 2022, 24(5), 665; https://doi.org/10.3390/e24050665 - 9 May 2022
Cited by 6 | Viewed by 5765
Abstract
When computers started to become a dominant part of technology around the 1950s, fundamental questions about reliable designs and robustness were of great relevance. Their development gave rise to the exploration of new questions, such as what made brains reliable (since neurons can die) and how computers could get inspiration from neural systems. In parallel, the first artificial neural networks came to life. Since then, the comparative view between brains and computers has been developed in new, sometimes unexpected directions. With the rise of deep learning and the development of connectomics, an evolutionary look at how both hardware and neural complexity have evolved or been designed is required. In this paper, we argue that important similarities have resulted both from convergent evolution (the inevitable outcome of architectural constraints) and inspiration of hardware and software principles guided by toy pictures of neurobiology. Moreover, dissimilarities and gaps originate from the lack of major innovations that have paved the way to biological computing (including brains) that are completely absent within the artificial domain. As it occurs within synthetic biocomputation, we can also ask whether alternative minds can emerge from A.I. designs. Here, we take an evolutionary view of the problem and discuss the remarkable convergences between living and artificial designs, and what the preconditions are for achieving artificial intelligence. Full article
(This article belongs to the Special Issue Foundations of Biological Computation)
