
Information Theoretic Incentives for Cognitive Systems

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (30 June 2015) | Viewed by 67119

Special Issue Editors


Guest Editor
Adaptive Systems Research Group, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
Interests: Intrinsic Motivation (Empowerment); Self-Organization; Guided Self-Organization; Information-Theoretic Incentives for Social Interaction; Information-Theoretic Incentives for Swarms; Information Theory and Computer Game AI

Guest Editor
Cognition and Neurosciences, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Interests: Autonomous Robots; Self-Organization; Guided Self-Organization; Information Theory; Dynamical Systems; Machine Learning; Neuroscience of Learning; Optimal Control

Guest Editor
Information Theory of Cognitive Systems, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Interests: Embodied Artificial Intelligence; Information Theory of the Sensorimotor Loop; Dynamical Systems; Cybernetics; Self-organisation; Synaptic plasticity; Evolutionary Robotics

Guest Editor
Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
Interests: artificial life; artificial intelligence; information theory; minimally cognitive agents; embodiment

Special Issue Information

Dear Colleagues,

In recent years, ideas such as "life is information processing" or "information holds the key to understanding life" have become increasingly common. But how can information, or, more formally, Information Theory, deepen our understanding of life and life-like systems?

Information Theory not only has a profound mathematical basis, but it also typically provides an intuitive understanding of processes such as learning, behavior, and evolution in terms of information processing.

In this special issue, we are interested in both (a) the information-theoretic formalization and quantification of different aspects of life, such as the driving forces of learning and behavior generation, information flows between neurons, swarm members, and social agents, and information-theoretic aspects of evolution and adaptation, and (b) the simulation and creation of life-like systems based on such previously identified principles and incentives.

Topics of interest, in relation to both artificial and natural systems, include:

  • information-theoretic intrinsic motivations
  • information-theoretic quantification of behavior
  • information-theoretic guidance of artificial evolution
  • information-theoretic guidance of self-organization
  • information-theoretic driving forces behind learning
  • information-theoretic driving forces behind behavior
  • information theory in swarms
  • information theory in social behavior
  • information theory in evolution
  • information theory in the brain
  • information theory in the system-environment distinction
  • information theory in the perception-action loop
  • information-theoretic definitions of life

Dr. Christoph Salge
Dr. Georg Martius
Dr. Keyan Ghazi-Zahedi
Dr. Daniel Polani
Guest Editors


Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

Article
Nonparametric Problem-Space Clustering: Learning Efficient Codes for Cognitive Control Tasks
by Domenico Maisto, Francesco Donnarumma and Giovanni Pezzulo
Entropy 2016, 18(2), 61; https://doi.org/10.3390/e18020061 - 19 Feb 2016
Cited by 18 | Viewed by 6116
Abstract
We present an information-theoretic method permitting one to find structure in a problem space (here, in a spatial navigation domain) and cluster it in ways that are convenient to solve different classes of control problems, which include planning a path to a goal from a known or an unknown location, achieving multiple goals and exploring a novel environment. Our generative nonparametric approach, called the generative embedded Chinese restaurant process (geCRP), extends the family of Chinese restaurant process (CRP) models by introducing a parameterizable notion of distance (or kernel) between the states to be clustered together. By using different kernels, such as the conditional probability or joint probability of two states, the same geCRP method clusters the environment in ways that are more sensitive to different control-related information, such as goal, sub-goal and path information. We perform a series of simulations in three scenarios (an open space, a grid world with four rooms and a maze having the same structure as the Tower of Hanoi) in order to illustrate the characteristics of the different clusters (obtained using different kernels) and their relative benefits for solving planning and control problems. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
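
The geCRP itself is too involved for a short example, but its central idea, clustering states under a user-chosen affinity kernel, can be illustrated with the closely related distance-dependent CRP, in which each state links to another state with probability proportional to the kernel (or starts a new cluster with weight alpha) and clusters are the connected groups of the resulting link graph. The sketch below is a minimal illustration under that assumption; it is not the authors' code, and the kernel here is a random stand-in for, e.g., a conditional- or joint-probability kernel.

```python
import numpy as np

def ddcrp_links(kernel, alpha, rng):
    """Sample state-to-state links for a distance-dependent CRP.

    kernel : (n, n) array of nonnegative affinities between states.
    alpha  : self-link weight; larger values favour more clusters.
    """
    n = kernel.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        weights = kernel[i].copy()
        weights[i] = alpha                      # weight for "start a new cluster"
        links[i] = rng.choice(n, p=weights / weights.sum())
    return links

def clusters_from_links(links):
    """Label each state by the group it reaches when links are followed."""
    labels = -np.ones(len(links), dtype=int)
    for i in range(len(links)):
        path, j = [], i
        while labels[j] == -1 and j not in path:
            path.append(j)
            j = links[j]
        label = labels[j] if labels[j] != -1 else min(path)
        for k in path:
            labels[k] = label
    return labels

rng = np.random.default_rng(0)
kernel = rng.random((6, 6))                     # stand-in affinity kernel
print(clusters_from_links(ddcrp_links(kernel, alpha=1.0, rng=rng)))
```

Choosing a different kernel changes which states end up grouped together, which mirrors the paper's observation that conditional- versus joint-probability kernels expose goal-, sub-goal- or path-related structure.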

Article
Information-Theoretic Neuro-Correlates Boost Evolution of Cognitive Systems
by Jory Schossau, Christoph Adami and Arend Hintze
Entropy 2016, 18(1), 6; https://doi.org/10.3390/e18010006 - 25 Dec 2015
Cited by 16 | Viewed by 14027
Abstract
Genetic Algorithms (GA) are a powerful set of tools for search and optimization that mimic the process of natural selection, and have been used successfully in a wide variety of problems, including evolving neural networks to solve cognitive tasks. Despite their success, GAs sometimes fail to locate the highest peaks of the fitness landscape, in particular if the landscape is rugged and contains multiple peaks. Reaching distant and higher peaks is difficult because valleys need to be crossed, in a process that (at least temporarily) runs against the fitness maximization objective. Here we propose and test a number of information-theoretic (as well as network-based) measures that can be used in conjunction with a fitness maximization objective (so-called “neuro-correlates”) to evolve neural controllers for two widely different tasks: a behavioral task that requires information integration, and a cognitive task that requires memory and logic. We find that judiciously chosen neuro-correlates can significantly aid GAs to find the highest peaks. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
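
As a rough illustration of the general recipe, not of the authors' specific measures or GA, a neuro-correlate can simply be folded into the selection score of an otherwise ordinary genetic algorithm. In the sketch below, fitness, neuro_correlate and mutate are hypothetical user-supplied callables.

```python
import random

def next_generation(population, fitness, neuro_correlate, mutate, beta=0.5, seed=0):
    """One generation of a toy GA whose selection score is task fitness plus a
    weighted neuro-correlate; beta = 0 recovers plain fitness maximization."""
    rng = random.Random(seed)
    ranked = sorted(population,
                    key=lambda g: fitness(g) + beta * neuro_correlate(g),
                    reverse=True)
    parents = ranked[: max(1, len(ranked) // 2)]      # truncation selection
    return [mutate(rng.choice(parents), rng) for _ in population]
```

Sweeping beta against a fitness-only baseline is the kind of comparison the paper carries out with its information-theoretic and network-based neuro-correlates.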

Article
Quantifying Emergent Behavior of Autonomous Robots
by Georg Martius and Eckehard Olbrich
Entropy 2015, 17(10), 7266-7297; https://doi.org/10.3390/e17107266 - 23 Oct 2015
Cited by 4 | Viewed by 7460
Abstract
Quantifying behaviors of robots that were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states, one has to deal with the fact that their values will depend on the resolution with which the system's states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into resolution-dependent and resolution-independent parts and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For the practical estimation, we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information based on nearest-neighbor statistics, because the latter allows less control of the scale dependencies. Using our algorithm, we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
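
For orientation, the standard definitions behind these measures (background only, not the paper's new decomposition): for a stationary process $X_t$, the predictive information between a past block of length $T$ and a future block of length $T'$ is the mutual information

$$ I(T, T') \;=\; H(X_{1:T}) + H(X_{T+1:T+T'}) - H(X_{1:T+T'}), $$

and the excess entropy is its limit $E = \lim_{T, T' \to \infty} I(T, T')$. For continuous-valued states these block entropies have to be evaluated at a finite observation resolution, which is why both quantities become resolution dependent and, for deterministic dynamics, diverge as the resolution is refined.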

Article
The Intrinsic Cause-Effect Power of Discrete Dynamical Systems—From Elementary Cellular Automata to Adapting Animats
by Larissa Albantakis and Giulio Tononi
Entropy 2015, 17(8), 5472-5502; https://doi.org/10.3390/e17085472 - 31 Jul 2015
Cited by 40 | Viewed by 13312
Abstract
Current approaches to characterize the complexity of dynamical systems usually rely on state-space trajectories. In this article instead we focus on causal structure, treating discrete dynamical systems as directed causal graphs—systems of elements implementing local update functions. This allows us to characterize the system’s intrinsic cause-effect structure by applying the mathematical and conceptual tools developed within the framework of integrated information theory (IIT). In particular, we assess the number of irreducible mechanisms (concepts) and the total amount of integrated conceptual information Φ specified by a system. We analyze: (i) elementary cellular automata (ECA); and (ii) small, adaptive logic-gate networks (“animats”), similar to ECA in structure but evolving by interacting with an environment. We show that, in general, an integrated cause-effect structure with many concepts and high Φ is likely to have high dynamical complexity. Importantly, while a dynamical analysis describes what is “happening” in a system from the extrinsic perspective of an observer, the analysis of its cause-effect structure reveals what a system “is” from its own intrinsic perspective, exposing its dynamical and evolutionary potential under many different scenarios. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
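
Computing the IIT quantities themselves requires the full cause-effect apparatus, but the class of systems analyzed is easy to set up. The sketch below, an illustration and not the authors' code, builds an elementary cellular automaton on a ring as a set of elements implementing the same local update function, together with its deterministic transition table, which is the raw material for any causal analysis.

```python
import numpy as np
from itertools import product

def eca_step(state, rule):
    """One synchronous update of an elementary cellular automaton on a ring.
    Each cell applies the same local function of (left, self, right),
    encoded in the 8-bit Wolfram rule number."""
    table = [(rule >> b) & 1 for b in range(8)]
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighbourhood = 4 * left + 2 * state + right
    return np.array([table[b] for b in neighbourhood], dtype=int)

def transition_table(n_cells, rule):
    """Deterministic map from every configuration to its successor."""
    states = list(product([0, 1], repeat=n_cells))
    return {s: tuple(eca_step(np.array(s), rule)) for s in states}

print(transition_table(4, rule=110)[(0, 1, 1, 0)])
```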

Article
The Fisher Information as a Neural Guiding Principle for Independent Component Analysis
by Rodrigo Echeveste, Samuel Eckmann and Claudius Gros
Entropy 2015, 17(6), 3838-3856; https://doi.org/10.3390/e17063838 - 9 Jun 2015
Cited by 7 | Viewed by 6512
Abstract
The Fisher information constitutes a natural measure for the sensitivity of a probability distribution with respect to a set of parameters. An implementation of the stationarity principle for synaptic learning in terms of the Fisher information results in a Hebbian self-limiting learning rule for synaptic plasticity. In the present work, we study the dependence of the solutions to this rule in terms of the moments of the input probability distribution and find a preference for non-Gaussian directions, making it a suitable candidate for independent component analysis (ICA). We confirm in a numerical experiment that a neuron trained under these rules is able to find the independent components in the non-linear bars problem. The specific form of the plasticity rule depends on the transfer function used, becoming a simple cubic polynomial of the membrane potential for the case of the rescaled error function. The cubic learning rule is also an excellent approximation for other transfer functions, such as the standard sigmoidal, and can be used to show analytically that the proposed plasticity rules are selective for directions in the space of presynaptic neural activities characterized by a negative excess kurtosis. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
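
For reference, the quantity the learning rule is built on is the standard (scalar-parameter) Fisher information

$$ F(\theta) \;=\; \mathbb{E}_{x \sim p(x;\theta)}\!\left[ \left( \frac{\partial}{\partial \theta} \ln p(x;\theta) \right)^{2} \right], $$

which measures how sensitively the distribution $p(x;\theta)$ responds to changes in the parameter $\theta$; the paper applies a stationarity principle based on this sensitivity to the distribution of the membrane potential, which yields the self-limiting Hebbian rule summarized above.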

Article
Information Geometry on Complexity and Stochastic Interaction
by Nihat Ay
Entropy 2015, 17(4), 2432-2458; https://doi.org/10.3390/e17042432 - 21 Apr 2015
Cited by 51 | Viewed by 10554
Abstract
Interdependencies of stochastically interacting units are usually quantified by the Kullback-Leibler divergence of a stationary joint probability distribution on the set of all configurations from the corresponding factorized distribution. This is a spatial approach which does not describe the intrinsically temporal aspects of interaction. In the present paper, the setting is extended to a dynamical version where temporal interdependencies are also captured by using information geometry of Markov chain manifolds. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
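
The spatial measure described in the abstract is the multi-information, i.e., the Kullback-Leibler divergence of the joint distribution from the product of its marginals,

$$ I(X_1, \ldots, X_n) \;=\; D_{\mathrm{KL}}\!\left( p(x_1, \ldots, x_n) \,\middle\|\, \prod_{i=1}^{n} p_i(x_i) \right) \;=\; \sum_{i=1}^{n} H(X_i) - H(X_1, \ldots, X_n). $$

Roughly speaking, the temporal extension developed in the paper quantifies, by analogous information-geometric means, how far the transition kernel of a Markov chain deviates from a product of independent per-unit transitions, so that interdependencies in the dynamics, and not only in the stationary distribution, are captured.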

Article
A Fundamental Scale of Descriptions for Analyzing Information Content of Communication Systems
by Gerardo Febres and Klaus Jaffe
Entropy 2015, 17(4), 1606-1633; https://doi.org/10.3390/e17041606 - 25 Mar 2015
Cited by 9 | Viewed by 8131
Abstract
The complexity of the description of a system is a function of the entropy of its symbolic description. Prior to computing the entropy of the system's description, an observation scale has to be assumed. In texts written in artificial and natural languages, typical scales are binary digits, characters, and words. However, considering languages as structures built around a certain preconceived set of symbols, like words or characters, limits the level of complexity that can be revealed analytically. This study introduces the notion of the fundamental description scale to analyze the essence of the structure of a language. The concept of the Fundamental Scale is tested for English and musical instrument digital interface (MIDI) music texts using an algorithm developed to split a text into a collection of sets of symbols that minimizes the observed entropy of the system. This Fundamental Scale reflects more details of the complexity of the language than bits, characters or words do. Results show that this Fundamental Scale allows one to compare completely different languages, such as English and MIDI-coded music, with regard to their structural entropy. This comparative power facilitates the study of the complexity of the structure of different communication systems. Full article
(This article belongs to the Special Issue Information Theoretic Incentives for Cognitive Systems)
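
The authors' optimization over symbol sets is not reproduced here, but the quantity it operates on, the entropy of a text observed at a given scale, is straightforward to compute. A minimal sketch, with tokenizers chosen purely for illustration:

```python
import math
from collections import Counter

def scale_entropy(text, tokenize):
    """Shannon entropy (bits per symbol) of a text under a given observation
    scale, i.e. under a tokenizer that splits the text into symbols."""
    counts = Counter(tokenize(text))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
print(scale_entropy(sample, list))        # character scale
print(scale_entropy(sample, str.split))   # word scale
```

The fundamental scale is then, roughly, the segmentation of the text into symbols that minimizes this observed entropy, searched for by the algorithm described in the paper.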
