Entropy, Volume 16, Issue 7 (July 2014) – 32 articles, Pages 3552–4184

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Characterizing the Asymptotic Per-Symbol Redundancy of Memoryless Sources over Countable Alphabets in Terms of Single-Letter Marginals
by Maryam Hosseini and Narayana Santhanam
Entropy 2014, 16(7), 4168-4184; https://doi.org/10.3390/e16074168 - 23 Jul 2014
Cited by 1 | Viewed by 4367
Abstract
The minimum expected number of bits needed to describe a random variable is its entropy, assuming knowledge of the distribution of the random variable. On the other hand, universal compression describes data supposing that the underlying distribution is unknown, but that it belongs to a known set P of distributions. However, since universal descriptions are not matched exactly to the underlying distribution, the number of bits they use on average is higher, and the excess over the entropy is the redundancy. In this paper, we study the redundancy incurred by the universal description of strings of positive integers (Z+), the strings being generated independently and identically distributed (i.i.d.) according to an unknown distribution over Z+ in a known collection P. We first show that if describing a single symbol incurs finite redundancy, then P is tight, but that the converse does not always hold. If a single symbol can be described with finite worst-case regret (a more stringent formulation than redundancy above), then it is known that describing length-n i.i.d. strings incurs only vanishing (to zero) redundancy per symbol as n increases. By contrast, we show that it is possible for the description of a single symbol from an unknown distribution in P to incur finite redundancy, while the description of length-n i.i.d. strings incurs a constant (> 0) redundancy per symbol encoded. We then give a sufficient condition on the single-letter marginals such that length-n i.i.d. samples incur vanishing redundancy per symbol encoded. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
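In standard notation (ours, not necessarily the paper's), with codelengths ℓ_q(x^n) = −log q(x^n), the expected per-symbol redundancy of a universal description q against the class $\mathcal{P}$ is

$$ \hat R_n(q) \;=\; \sup_{p \in \mathcal{P}} \frac{1}{n}\Big( \mathbb{E}_p\, \ell_q(X^n) - H_p(X^n) \Big) \;=\; \sup_{p \in \mathcal{P}} \frac{1}{n} D\!\left(p^n \,\middle\|\, q\right), $$

and the question studied is when $\inf_q \hat R_n(q)$ vanishes as $n \to \infty$.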
Article
Network Decomposition and Complexity Measures: An Information Geometrical Approach
by Masatoshi Funabashi
Entropy 2014, 16(7), 4132-4167; https://doi.org/10.3390/e16074132 - 23 Jul 2014
Cited by 6 | Viewed by 6108
Abstract
We consider the graph representation of a stochastic model with n binary variables and develop an information-theoretical framework to measure the degree of statistical association between subsystems, as well as the associations represented by each edge of the graph. In addition, we introduce novel measures of complexity with respect to the decomposability of the system, based on the geometric product of Kullback–Leibler (KL) divergences. The novel complexity measures satisfy the boundary condition of vanishing in the limits of completely random and completely ordered states, and also vanish in the presence of an independent subsystem of any size. Being based on geometric means, these complexity measures reflect the heterogeneity of dependencies between subsystems and the amount of information propagation shared across the entire system. Full article
(This article belongs to the Special Issue Information Geometry)
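For reference (a standard definition, not specific to this paper), the building block of these measures is the Kullback–Leibler divergence between distributions p and q,

$$ D_{\mathrm{KL}}(p \,\|\, q) \;=\; \sum_{x} p(x) \log \frac{p(x)}{q(x)} \;\ge\; 0, $$

which the paper combines via geometric means to quantify decomposability.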
Article
Numerical Investigation on the Temperature Characteristics of the Voice Coil for a Woofer Using Thermal Equivalent Heat Conduction Models
by Moo-Yeon Lee and Hyung-Jin Kim
Entropy 2014, 16(7), 4121-4131; https://doi.org/10.3390/e16074121 - 21 Jul 2014
Cited by 2 | Viewed by 5931
Abstract
The objective of this study is to numerically investigate the temperature and heat transfer characteristics of the voice coil for a woofer, with and without a bobbin, using thermal equivalent heat conduction models. The temperature and heat transfer characteristics of the main components of the woofer were analyzed at input powers ranging from 5 W to 60 W. The numerical results for the voice coil agreed within ±1% with the data of Odenbach (2003). The temperatures of the voice coil and its units for the woofer without the bobbin were, on average, 6.1% and 5.0% lower, respectively, than those of the woofer with the bobbin. However, at an input power of 30 W for the voice coil, the temperatures of the main components of the woofer without the bobbin were, on average, 40.0% higher than those of the woofer studied by Lee et al. (2013). Full article
Article
Can the Hexagonal Ice-like Model Render the Spectroscopic Fingerprints of Structured Water? Feedback from Quantum-Chemical Computations
by Javier Segarra-Martí, Daniel Roca-Sanjuán and Manuela Merchán
Entropy 2014, 16(7), 4101-4120; https://doi.org/10.3390/e16074101 - 21 Jul 2014
Cited by 11 | Viewed by 9674
Abstract
The spectroscopic features of the multilayer honeycomb model of structured water are analyzed on theoretical grounds, using high-level ab initio quantum-chemical methodologies, through model systems built from two fused hexagons of water molecules: the monomeric system [H19O10], in different oxidation states (anionic and neutral species). The findings do not support anionic species as the origin of the spectroscopic fingerprints observed experimentally for structured water. In this context, hexameric anions can merely be seen as a source of hydrated hydroxyl anions and cationic species. The results for the neutral dimer are, however, fully consistent with the experimental evidence related to both the absorption and fluorescence spectra. The neutral π-stacked dimer [H38O20] can be assigned as the species mainly responsible for the recorded absorption and fluorescence spectra, with computed band maxima at 271 nm (4.58 eV) and 441 nm (2.81 eV), respectively. The important role of triplet excited states is finally discussed. The most intense vertical triplet → triplet transition is predicted to be at 318 nm (3.90 eV). Full article
(This article belongs to the Special Issue Entropy and EZ-Water)
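The quoted wavelength/energy pairs are mutually consistent under the photon-energy relation $E = hc/\lambda$, with $hc \approx 1239.84$ eV·nm:

$$ E(271\,\mathrm{nm}) \approx 4.58\ \mathrm{eV}, \qquad E(441\,\mathrm{nm}) \approx 2.81\ \mathrm{eV}, \qquad E(318\,\mathrm{nm}) \approx 3.90\ \mathrm{eV}. $$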
Article
Using Geometry to Select One Dimensional Exponential Families That Are Monotone Likelihood Ratio in the Sample Space, Are Weakly Unimodal and Can Be Parametrized by a Measure of Central Tendency
by Paul Vos and Karim Anaya-Izquierdo
Entropy 2014, 16(7), 4088-4100; https://doi.org/10.3390/e16074088 - 18 Jul 2014
Cited by 1 | Viewed by 5442
Abstract
One dimensional exponential families on finite sample spaces are studied using the geometry of the simplex Δ°_{n−1} and that of a transformation V_{n−1} of its interior. This transformation is the natural parameter space associated with the family of multinomial distributions. The space V_{n−1} is partitioned into cones that are used to find one dimensional families with desirable properties for modeling and inference. These properties include the availability of uniformly most powerful tests and estimators that exhibit optimal properties in terms of variability and unbiasedness. Full article
(This article belongs to the Special Issue Information Geometry)
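As background (generic notation, not taken from the paper), a one-dimensional exponential family on a finite sample space has probability functions

$$ p_\theta(x) \;=\; p_0(x)\, \exp\big( \theta\, t(x) - \psi(\theta) \big), $$

and when the statistic t is monotone in x, the family is monotone likelihood ratio in the sample space, which is what underlies the uniformly most powerful tests mentioned above.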
Editorial
Biosemiotic Entropy: Concluding the Series
by John W. Oller, Jr.
Entropy 2014, 16(7), 4060-4087; https://doi.org/10.3390/e16074060 - 18 Jul 2014
Cited by 2 | Viewed by 9806
Abstract
This article concludes the special issue on Biosemiotic Entropy by looking toward the future on the basis of current and prior results. It highlights certain aspects of the series concerning factors that damage and degenerate biosignaling systems. As in ordinary linguistic discourse, well-formedness (coherence) in biological signaling systems depends on valid representations correctly construed: a series of proofs is presented and generalized to all meaningful sign systems. The proofs show why infants must (as empirical evidence shows they do) proceed through a strict sequence of formal steps in acquiring any language. Classical and contemporary conceptions of entropy and information are deployed to show why factors that interfere with coherence in biological signaling systems are necessary and sufficient causes of disorders, diseases, and mortality. Known sources of such formal degeneracy in living organisms (here termed biosemiotic entropy) include: (a) toxicants; (b) pathogens; (c) excessive exposures to radiant energy and/or sufficiently powerful electromagnetic fields; (d) traumatic injuries; and (e) interactions between the foregoing factors. Just as Jaynes proved that irreversible changes invariably increase entropy, the theory of true narrative representations (TNR theory) demonstrates that factors disrupting the well-formedness (coherence) of valid representations, all else being held equal, must increase biosemiotic entropy—the kind impacting biosignaling systems. Full article
(This article belongs to the Special Issue Biosemiotic Entropy: Disorder, Disease, and Mortality)
Article
Entropy and Its Discontents: A Note on Definitions
by Nicola Cufaro Petroni
Entropy 2014, 16(7), 4044-4059; https://doi.org/10.3390/e16074044 - 17 Jul 2014
Cited by 4 | Viewed by 4934
Abstract
The routine definitions of Shannon entropy for discrete and continuous probability laws show inconsistencies that make them not mutually coherent. We propose a few possible modifications of these quantities so that: (1) they no longer show incongruities; and (2) they go one into the other in a suitable limit as the result of a renormalization. The properties of the new quantities would differ slightly from those of the usual entropies in a few other respects. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
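A textbook illustration of the incoherence at issue (our example): if a continuous variable X with differential entropy h(X) is quantized into bins of width Δ, then for small Δ

$$ H\big([X]_\Delta\big) \;\approx\; h(X) - \log \Delta, $$

so the discrete Shannon entropy diverges as Δ → 0 instead of converging to h(X); any renormalization reconciling the two definitions must absorb this divergent term.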
Article
Application of a Modified Entropy Computational Method in Assessing the Complexity of Pulse Wave Velocity Signals in Healthy and Diabetic Subjects
by Yi-Chung Chang, Hsien-Tsai Wu, Hong-Ruei Chen, An-Bang Liu, Jung-Jen Yeh, Men-Tzung Lo, Jen-Ho Tsao, Chieh-Ju Tang, I-Ting Tsai and Cheuk-Kwan Sun
Entropy 2014, 16(7), 4032-4043; https://doi.org/10.3390/e16074032 - 17 Jul 2014
Cited by 26 | Viewed by 7089
Abstract
Using 1000 successive points of a pulse wave velocity (PWV) series, we previously distinguished healthy from diabetic subjects with multi-scale entropy (MSE) using a scale factor of 10. One major limitation is the long time required for data acquisition (i.e., 20 min). This study aimed at validating the sensitivity of a novel method, short-time MSE (sMSE), that utilizes a substantially smaller sample size (i.e., 600 consecutive points) in differentiating the complexity of PWV signals, both in simulation and in human subjects divided into four groups: healthy young (Group 1; n = 24) and middle-aged (Group 2; n = 30) subjects without known cardiovascular disease, and middle-aged individuals with well-controlled (Group 3; n = 18) and poorly-controlled (Group 4; n = 22) type 2 diabetes mellitus. The results demonstrated that although conventional MSE could differentiate the subjects using 1000 consecutive PWV series points, sensitivity was lost using only 600 points. A simulation study revealed consistent results. By contrast, the novel sMSE method produced significant differences in entropy in both the simulation and the test subjects. In conclusion, this study demonstrated that using the novel sMSE approach for PWV analysis, the time for data acquisition can be substantially reduced to that required for 600 cardiac cycles (~10 min), with remarkable preservation of sensitivity in differentiating among healthy, aged, and diabetic populations. Full article
(This article belongs to the Special Issue Entropy and Cardiac Physics)
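For orientation, here is a minimal sketch of the conventional MSE pipeline (coarse-graining followed by sample entropy); it is a generic implementation, not the authors' sMSE variant, and the defaults m = 2 and r = 0.15·SD are common conventions rather than values from the paper.

```python
import numpy as np

def coarse_grain(x, tau):
    """MSE coarse-graining: average non-overlapping windows of length tau."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy -ln(A/B): B counts template matches of length m,
    A of length m + 1, under a Chebyshev tolerance of r times the SD."""
    tol = r * np.std(x)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        return sum(np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) < tol)
                   for i in range(len(t) - 1))
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r=0.15):
    """One entropy value per scale factor; classic MSE uses scales 1..10.
    (Here the tolerance is recomputed per scale for simplicity; classic
    MSE fixes it from the SD of the original series.)"""
    return [sample_entropy(coarse_grain(x, tau), m, r)
            for tau in range(1, max_scale + 1)]

print(multiscale_entropy(np.random.default_rng(0).normal(size=1000)))
```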
Article
New Riemannian Priors on the Univariate Normal Model
by Salem Said, Lionel Bombrun and Yannick Berthoumieu
Entropy 2014, 16(7), 4015-4031; https://doi.org/10.3390/e16074015 - 17 Jul 2014
Cited by 18 | Viewed by 5653
Abstract
The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as “Riemannian priors”. Precisely, if {p_θ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d(θ, θ̄) is Rao’s Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors. Full article
(This article belongs to the Special Issue Information Geometry)
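For context, two standard facts about the univariate normal model (our notation): the Fisher–Rao metric underlying Rao's distance d is

$$ ds^2 \;=\; \frac{d\mu^2 + 2\,d\sigma^2}{\sigma^2}, $$

and the prior density with respect to the associated Riemannian volume is $p(\theta) \propto \exp\!\big(-d^2(\theta, \bar\theta)/2\gamma^2\big)$.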
Article
A Note of Caution on Maximizing Entropy
by Richard E. Neapolitan and Xia Jiang
Entropy 2014, 16(7), 4004-4014; https://doi.org/10.3390/e16074004 - 17 Jul 2014
Cited by 2 | Viewed by 6944
Abstract
The Principle of Maximum Entropy is often used to update probabilities in light of evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often yields good results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases and how to identify the situations in which the principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy can sometimes stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
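To fix ideas with a textbook case (not one of the paper's examples): maximizing $H(p) = -\sum_i p_i \log p_i$ subject to normalization and a mean constraint $\sum_i p_i x_i = \mu$ gives, via Lagrange multipliers, the Gibbs form

$$ p_i \;=\; \frac{e^{-\lambda x_i}}{\sum_j e^{-\lambda x_j}}, $$

with λ fixed by the constraint; the paper's examples concern situations where such updates conflict with Bayesian conditioning under reasonable priors.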
Article
Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms
by Wassim M. Haddad, Qing Hui and James M. Bailey
Entropy 2014, 16(7), 3939-4003; https://doi.org/10.3390/e16073939 - 17 Jul 2014
Cited by 15 | Viewed by 11907
Abstract
Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a theoretical foundation for general anesthesia using the network properties of the brain. Finally, we present some key emergent properties from the fields of thermodynamics and electromagnetic field theory to qualitatively explain the underlying neuronal mechanisms of action for anesthesia and consciousness. Full article
(This article belongs to the Special Issue Entropy in Human Brain Networks)
Review
Panel I: Connecting 2nd Law Analysis with Economics, Ecology and Energy Policy
by Richard Gaggioli and Mauro Reini
Entropy 2014, 16(7), 3903-3938; https://doi.org/10.3390/e16073903 - 16 Jul 2014
Cited by 16 | Viewed by 6130
Abstract
The present paper is a review of several papers from the Proceedings of the Joint European Thermodynamics Conference, held in Brescia, Italy, 1–5 July 2013, namely papers introduced by their authors at Panel I of the conference. Panel I was devoted to applications of the Second Law of Thermodynamics to social issues—economics, ecology, sustainability, and energy policy. The concept called Available Energy, which goes back to the mid-nineteenth-century work of Kelvin, Rankine, Maxwell and Gibbs, is relevant to all of the papers. Various names have been applied to the concept when interactions between the system of interest and an environment are involved. Today, the name exergy is generally accepted. The scope of the papers being reviewed is wide, and they complement one another well. Full article
(This article belongs to the Special Issue Advances in Methods and Foundations of Non-Equilibrium Thermodynamics)
Article
Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing
by Ri-Qi Su, Ying-Cheng Lai and Xiao Wang
Entropy 2014, 16(7), 3889-3902; https://doi.org/10.3390/e16073889 - 15 Jul 2014
Cited by 18 | Viewed by 6882
Abstract
We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, the network connections and all coupling functions, as well as their weights. The workings and efficiency of the method are illustrated using networks of non-identical FitzHugh–Nagumo neurons with randomly-distributed coupling weights. Full article
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)
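A toy sketch of the sparse-recovery step at the heart of such methods follows (generic L1-regularized regression over a power basis; the basis, parameters and data are illustrative, not the authors' formulation for FitzHugh–Nagumo networks).

```python
import numpy as np
from sklearn.linear_model import Lasso

# Recover the sparse dynamics dx/dt = x - x^3 from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 1))        # sampled states
library = np.hstack([x**k for k in range(6)])    # candidate terms 1, x, ..., x^5
true_coef = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 0.0])
dxdt = library @ true_coef + 0.01 * rng.normal(size=200)

fit = Lasso(alpha=1e-3, fit_intercept=False).fit(library, dxdt)
print(np.round(fit.coef_, 2))  # nonzero entries identify x and -x^3
```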
Article
The Entropy-Based Quantum Metric
by Roger Balian
Entropy 2014, 16(7), 3878-3888; https://doi.org/10.3390/e16073878 - 15 Jul 2014
Cited by 21 | Viewed by 6173
Abstract
The von Neumann entropy S(D̂) generates in the space of quantum density matrices D̂ the Riemannian metric ds² = −d²S(D̂), which is physically founded and which characterises the amount of quantum information lost by mixing D̂ and D̂ + dD̂. A rich geometric structure is thereby implemented in quantum mechanics. It includes a canonical mapping between the spaces of states and of observables, which involves the Legendre transform of S(D̂). The Kubo scalar product is recovered within the space of observables. Applications are given to equilibrium and non-equilibrium quantum statistical mechanics. There the formalism is specialised to the relevant space of observables and to the associated reduced states issued from the maximum entropy criterion, which result from the exact states through an orthogonal projection. Von Neumann’s entropy specialises into a relevant entropy. Comparison is made with other metrics. The Riemannian properties of the metric ds² = −d²S(D̂) are derived. The curvature arises from the non-Abelian nature of quantum mechanics; its general expression and its explicit form for qubits are given, as well as geodesics. Full article
(This article belongs to the Special Issue Information Geometry)
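In the notation above, the two central objects are the von Neumann entropy and the metric it generates:

$$ S(\hat D) \;=\; -\operatorname{Tr}\big( \hat D \ln \hat D \big), \qquad ds^2 \;=\; -\,d^2 S(\hat D). $$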
Article
Many Can Work Better than the Best: Diagnosing with Medical Images via Crowdsourcing
by Xian-Hong Xiang, Xiao-Yu Huang, Xiao-Ling Zhang, Chun-Fang Cai, Jian-Yong Yang and Lei Li
Entropy 2014, 16(7), 3866-3877; https://doi.org/10.3390/e16073866 - 14 Jul 2014
Cited by 4 | Viewed by 5638
Abstract
We study a crowdsourcing-based diagnosis algorithm, motivated by the fact that what is currently scarce is not medical staff in general, but high-level experts. Our approach makes use of the efforts of general practitioners: every patient whose illness cannot be judged definitely is diagnosed multiple times by different doctors, and we collect all the diagnosis results to derive the final judgement. Our inference model is based on the statistical consistency of the diagnosis data. To evaluate the proposed model, we conduct experiments on both synthetic and real data; the results show that it outperforms the benchmarks. Full article
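As a schematic of consistency-based aggregation (illustrative only; the paper's inference model is more elaborate): weight each doctor by agreement with the patient-wise majority, then take a weighted vote.

```python
import numpy as np

votes = np.array([[1, 1, 0, 1],   # rows: patients, columns: doctors
                  [0, 1, 0, 0],   # 1 = positive diagnosis, 0 = negative
                  [1, 1, 1, 0]])
majority = (votes.mean(axis=1) >= 0.5).astype(int)    # naive first pass
weights = (votes == majority[:, None]).mean(axis=0)   # per-doctor consistency
final = (votes @ weights / weights.sum() >= 0.5).astype(int)
print(final)  # consistency-weighted judgement per patient
```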
Article
Hierarchical Geometry Verification via Maximum Entropy Saliency in Image Retrieval
by Hongwei Zhao, Qingliang Li and Pingping Liu
Entropy 2014, 16(7), 3848-3865; https://doi.org/10.3390/e16073848 - 14 Jul 2014
Cited by 6 | Viewed by 5494
Abstract
We propose a new geometric verification method for image retrieval—Hierarchical Geometry Verification via Maximum Entropy Saliency (HGV)—which aims to filter out redundant matches while retaining information about the retrieval target even when it lies partly outside the salient regions, using hierarchical saliency, and to fully exploit the geometric context of all visual words in images. First, we obtain hierarchical salient regions of a query image based on the maximum entropy principle and label visual features with salient tags. The tags added to the feature descriptors are used to compute a saliency matching score, and these scores serve as weights in the geometry verification step. Second, we define a spatial pattern as a triangle composed of three matched features and evaluate the similarity between every two spatial patterns. Finally, we sum all spatial matching scores with weights to generate the final ranking list. Experimental results show that HGV can not only improve retrieval accuracy, but also reduce the time consumption of the full retrieval. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Article
Variational Bayes for Regime-Switching Log-Normal Models
by Hui Zhao and Paul Marriott
Entropy 2014, 16(7), 3832-3847; https://doi.org/10.3390/e16073832 - 14 Jul 2014
Cited by 3 | Viewed by 5182
Abstract
The power of projection using divergence functions is a major theme in information geometry. One version of this is the variational Bayes (VB) method. This paper looks at VB in the context of other projection-based methods in information geometry. It also describes how to apply VB to the regime-switching log-normal model and how VB provides a computationally fast way to quantify the uncertainty in the model specification. The results show that the method can exactly recover the model structure, gives reasonable point estimates and is very computationally efficient. Potential problems of the method in quantifying parameter uncertainty are discussed. Full article
(This article belongs to the Special Issue Information Geometry)
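The projection referred to can be stated compactly (standard VB, our notation): over a tractable family $\mathcal{Q}$,

$$ q^* \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}} \; \mathrm{KL}\big( q \,\big\|\, p(\theta \mid x) \big), $$

equivalently maximizing the evidence lower bound; this is an information-geometric projection of the posterior onto $\mathcal{Q}$.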
Review
Phase Competitions behind the Giant Magnetic Entropy Variation: Gd5Si2Ge2 and Tb5Si2Ge2 Case Studies
by Ana Lúcia Pires, João Horta Belo, Armandina Maria Lima Lopes, Isabel T. Gomes, Luis Morellón, Cesar Magen, Pedro Antonio Algarabel, Manuel Ricardo Ibarra, André Miguel Pereira and João Pedro Araújo
Entropy 2014, 16(7), 3813-3831; https://doi.org/10.3390/e16073813 - 11 Jul 2014
Cited by 18 | Viewed by 7863
Abstract
Magnetic materials with strong spin-lattice coupling are a powerful set of candidates for multifunctional applications because of their multiferroic, magnetocaloric (MCE), magnetostrictive and magnetoresistive effects. In these materials there is a strong competition between two states (where a state comprises an atomic structure and an associated magnetic structure) that leads to phase transitions under subtle variations of external parameters, such as temperature, magnetic field and hydrostatic pressure. In this review, a general method that combines detailed magnetic measurements/analysis and first-principles calculations to estimate the phase transition temperature is presented with the help of two examples (Gd5Si2Ge2 and Tb5Si2Ge2). It is demonstrated that such a method is an important tool for a deeper understanding of the (de)coupled nature of each phase transition in materials belonging to the R5(Si,Ge)4 family, and it can most likely be applied to other systems. The exotic Griffiths-like phase in the framework of the R5(SixGe1-x)4 compounds is reviewed, and its generalization as a prerequisite for systems with strong phase competition that present large magneto-responsive properties is proposed. Full article
(This article belongs to the Special Issue Entropy in Shape Memory Alloys)
Letter
Entropy Generation in Steady Laminar Boundary Layers with Pressure Gradients
by Donald M. McEligot and Edmond J. Walsh
Entropy 2014, 16(7), 3808-3812; https://doi.org/10.3390/e16073808 - 10 Jul 2014
Cited by 2 | Viewed by 5212
Abstract
In an earlier paper in Entropy [1] we hypothesized that the entropy generation rate is the driving force for boundary layer transition from laminar to turbulent flow. Subsequently, with our colleagues, we examined the prediction of entropy generation during such transitions [2,3]. We found that reasonable predictions for engineering purposes could be obtained for flows with negligible streamwise pressure gradients by adapting the linear combination model of Emmons [4]. A question then arises: will the Emmons approach be useful for boundary layer transition with significant streamwise pressure gradients, as studied by Nolan and Zaki [5]? In our implementation, the intermittency is calculated by comparison to skin friction correlations for laminar and turbulent boundary layers and is then applied with comparable correlations for the energy dissipation coefficient (i.e., the non-dimensional integral entropy generation rate). In the case of negligible pressure gradients, the Blasius theory provides the necessary laminar correlations. Full article
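Schematically, the intermittency-weighted linear combination referred to above reads

$$ C_f = (1-\gamma)\, C_f^{\mathrm{lam}} + \gamma\, C_f^{\mathrm{turb}}, \qquad C_d = (1-\gamma)\, C_d^{\mathrm{lam}} + \gamma\, C_d^{\mathrm{turb}}, $$

where γ is the intermittency, C_f the skin friction coefficient and C_d the non-dimensional energy dissipation coefficient (notation ours).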
Article
Entropy vs. Majorization: What Determines Complexity?
by William Seitz and A. D. Kirwan, Jr.
Entropy 2014, 16(7), 3793-3807; https://doi.org/10.3390/e16073793 - 9 Jul 2014
Cited by 12 | Viewed by 6337
Abstract
The evolution of a microcanonical statistical ensemble of states of isolated systems from order to disorder, as determined by increasing entropy, is compared to an alternative evolution that is determined by mixing character. The fact that the partitions of an integer N are in one-to-one correspondence with macrostates for N distinguishable objects is noted. Orders for integer partitions are given, including the original order by Young and the Boltzmann order by entropy. Mixing character (represented by Young diagrams) is seen to be a partially ordered quality rather than a quantity (see Ruch, 1975). The majorization partial order is reviewed, as is its Hasse diagram representation, known as the Young Diagram Lattice (YDL). Two lattices that show allowed transitions between macrostates are obtained from the YDL: we term these the mixing lattice and the diversity lattice. We study the dynamics (time evolution) on the two lattices, namely the sequence of steps on the lattices (i.e., the path or trajectory) that leads from low entropy, less mixed states to high entropy, highly mixed states. These paths are sequences of macrostates with monotonically increasing entropy. The distributions of path lengths on the two lattices are obtained via Monte Carlo methods, and surprisingly both distributions appear Gaussian. However, the width of the path length distribution for diversity is the square root of that for the mixing case, suggesting a qualitative difference in their temporal evolution. Another surprising result is that some macrostates occur in many paths while others do not. The evolution at low entropy and at high entropy is quite simple, but at intermediate entropies the number of possible evolutionary paths is extremely large (due to the extensive branching of the lattices). A quantitative complexity measure associated with incomparability of macrostates in the mixing partial order is proposed, complementing Kolmogorov complexity and Shannon entropy. Full article
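A minimal sketch of the majorization partial order on integer partitions (our illustration; the second example shows the incomparability that the proposed complexity measure builds on).

```python
def majorizes(p, q):
    """True if partition p majorizes q: every partial sum of p (sorted
    descending, zero-padded) dominates the corresponding sum of q."""
    p, q = sorted(p, reverse=True), sorted(q, reverse=True)
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    sp = sq = 0
    for a, b in zip(p, q):
        sp, sq = sp + a, sq + b
        if sp < sq:
            return False
    return True

print(majorizes([4, 1, 1], [2, 2, 2]))  # True: the less mixed majorizes the more mixed
print(majorizes([4, 1, 1], [3, 3]),
      majorizes([3, 3], [4, 1, 1]))     # False False: incomparable macrostates
```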
Article
On Conservation Equation Combinations and Closure Relations
by William G. Gray and Amanda L. Dye
Entropy 2014, 16(7), 3769-3792; https://doi.org/10.3390/e16073769 - 7 Jul 2014
Cited by 1 | Viewed by 4802
Abstract
Fundamental conservation equations for mass, momentum and energy of chemical species can be combined with thermodynamic relations to obtain secondary forms, such as conservation equations for phases, an internal energy balance and a mechanical energy balance. In fact, the possible secondary forms are infinite and depend on the criteria used in determining which species-based equations to employ and how to combine them. If one uses these secondary forms in developing an entropy inequality for formulating closure relations, care must be taken to ensure that the appropriate equations are used; otherwise, problematic results can develop for multispecies systems. We show here that the use of the fundamental forms minimizes the chance of an erroneous formulation in terms of secondary forms and also provides guidance as to which secondary forms should be used if one uses them as a starting point. Full article
Review
Maximum Entropy in Drug Discovery
by Chih-Yuan Tseng and Jack Tuszynski
Entropy 2014, 16(7), 3754-3768; https://doi.org/10.3390/e16073754 - 7 Jul 2014
Cited by 7 | Viewed by 6891
Abstract
Drug discovery applies multidisciplinary approaches, either experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Article
On the Connections of Generalized Entropies With Shannon and Kolmogorov–Sinai Entropies
by Fryderyk Falniowski
Entropy 2014, 16(7), 3732-3753; https://doi.org/10.3390/e16073732 - 3 Jul 2014
Cited by 12 | Viewed by 5622
Abstract
We consider the concept of generalized Kolmogorov–Sinai entropy, where instead of the Shannon entropy function we consider an arbitrary concave function defined on the unit interval and vanishing at the origin. Under mild assumptions on this function, we show that this isomorphism invariant is linearly dependent on the Kolmogorov–Sinai entropy. Full article
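In symbols (notation ours): for concave $g\colon [0,1] \to \mathbb{R}$ with $g(0) = 0$, the generalized entropy of a partition with cell probabilities $p_i$ is

$$ H_g \;=\; \sum_i g(p_i), $$

with the Shannon case recovered by $g(x) = -x \log x$; the generalized Kolmogorov–Sinai entropy is then built from $H_g$ in the same way the usual KS entropy is built from Shannon entropy.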
Article
The Role of Vegetation on the Ecosystem Radiative Entropy Budget and Trends Along Ecological Succession
by Paul C. Stoy, Hua Lin, Kimberly A. Novick, Mario B. S. Siqueira and Jehn-Yih Juang
Entropy 2014, 16(7), 3710-3731; https://doi.org/10.3390/e16073710 - 3 Jul 2014
Cited by 16 | Viewed by 9872
Abstract
Ecosystem entropy production is predicted to increase along ecological succession and approach a state of maximum entropy production, but few studies have bridged the gap between theory and data. Here, we explore radiative entropy production in terrestrial ecosystems using measurements from 64 Free/Fair-Use sites in the FLUXNET database, including a successional chronosequence in the Duke Forest in the southeastern United States. Ecosystem radiative entropy production increased then decreased as succession progressed in the Duke Forest ecosystems, and did not exceed 95% of the calculated empirical maximum entropy production in the FLUXNET study sites. Forest vegetation, especially evergreen needleleaf forests characterized by low shortwave albedo and close coupling to the atmosphere, had a significantly higher ratio of radiative entropy production to the empirical maximum entropy production than did croplands and grasslands. Our results demonstrate that ecosystems approach, but do not reach, maximum entropy production and that the relationship between succession and entropy production depends on vegetation characteristics. Future studies should investigate how natural disturbances and anthropogenic management—especially the tendency to shift vegetation to an earlier successional state—alter energy flux and entropy production at the surface-atmosphere interface. Full article
(This article belongs to the Special Issue Entropy in Hydrology)
Article
Searching for Conservation Laws in Brain Dynamics—BOLD Flux and Source Imaging
by Henning U. Voss and Nicholas D. Schiff
Entropy 2014, 16(7), 3689-3709; https://doi.org/10.3390/e16073689 - 3 Jul 2014
Cited by 5 | Viewed by 6019
Abstract
Blood-oxygen-level-dependent (BOLD) imaging is the most important noninvasive tool to map human brain function. It relies on local blood-flow changes controlled by neurovascular coupling effects, usually in response to some cognitive or perceptual task. In this contribution we ask if the spatiotemporal dynamics of the BOLD signal can be modeled by a conservation law. In analogy to the description of physical laws, which often can be derived from some underlying conservation law, identification of conservation laws in the brain could lead to new models for the functional organization of the brain. Our model is independent of the nature of the conservation law, but we discuss possible hints and motivations for conservation laws. For example, globally limited blood supply and local competition between brain regions for blood might restrict the large scale BOLD signal in certain ways that could be observable. One proposed selective pressure for the evolution of such conservation laws is the closed volume of the skull, which limits the expansion of brain tissue by increases in blood volume. These ideas are demonstrated on a mental motor imagery fMRI experiment, in which functional brain activation was mapped in a group of volunteers imagining themselves swimming. In order to search for local conservation laws during this complex cognitive process, we derived maps of quantities resulting from spatial interaction of the BOLD amplitudes. Specifically, we mapped fluxes and sources of the BOLD signal, terms that would appear in a description by a continuity equation. Although we cannot present final answers with the particular analysis of this experiment, some results seem non-trivial. For example, we found that during the task the group BOLD flux covered more widespread regions than those identified by conventional BOLD mapping and was always increasing. It is our hope that these results motivate more work towards the search for conservation laws in neuroimaging experiments, or at least towards imaging procedures based on spatial interactions of signals. The payoff could be new models for the dynamics of the healthy brain or more sensitive clinical imaging approaches, respectively. Full article
(This article belongs to the Special Issue Entropy in Human Brain Networks)
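The continuity-equation description mentioned above has the schematic form (notation ours)

$$ \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} \;=\; s, $$

where ρ plays the role of the BOLD amplitude, J its flux and s its local sources or sinks; the derived maps correspond to the J and s terms.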
Article
Extending the Extreme Physical Information to Universal Cognitive Models via a Confident Information First Principle
by Xiaozhao Zhao, Yuexian Hou, Dawei Song and Wenjie Li
Entropy 2014, 16(7), 3670-3688; https://doi.org/10.3390/e16073670 - 1 Jul 2014
Cited by 4 | Viewed by 6459
Abstract
The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J could be excessively larger than I (J ≫ I), due to insufficient observation, which would lead to serious over-fitting problems in the derivation of cognitive models. Moreover, there is a lack of an established exact invariance principle that gives rise to the bound information in universal cognitive systems. This limits the direct application of EPI. To narrow down the gap between I and J, in this paper, we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramér–Rao bound. We then consider the dimensionality reduction in the parameter spaces of binary multivariate distributions. We show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle. An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance. Full article
(This article belongs to the Special Issue Information Geometry)
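Two standard relations invoked above, in our notation: EPI extremizes the information loss $K = I - J$, and parameter confidence ties to Fisher information through the Cramér–Rao bound,

$$ \operatorname{Var}(\hat\theta) \;\ge\; \frac{1}{I(\theta)}, $$

valid for any unbiased estimator $\hat\theta$.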
Article
An Estimation of the Entropy for a Rayleigh Distribution Based on Doubly-Generalized Type-II Hybrid Censored Samples
by Youngseuk Cho, Hokeun Sun and Kyeongjun Lee
Entropy 2014, 16(7), 3655-3669; https://doi.org/10.3390/e16073655 - 1 Jul 2014
Cited by 41 | Viewed by 6376
Abstract
In this paper, based on a doubly generalized Type-II hybrid censored sample, the maximum likelihood estimators (MLEs), the approximate MLE and the Bayes estimator for the entropy of the Rayleigh distribution are derived. We compare the entropy estimators’ root mean squared error (RMSE), bias and Kullback–Leibler divergence values. The simulation procedure is repeated 10,000 times for sample sizes n = 10, 20, 40 and 100 and various doubly generalized Type-II hybrid censoring schemes. Finally, a real data set is analyzed for illustrative purposes. Full article
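For reference, the target quantity has a standard closed form (with Euler–Mascheroni constant γ ≈ 0.5772): for the Rayleigh density $f(x) = (x/\sigma^2)\, e^{-x^2/2\sigma^2}$, $x \ge 0$, the differential entropy is

$$ H \;=\; 1 + \ln\frac{\sigma}{\sqrt{2}} + \frac{\gamma}{2}. $$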
Article
A Maximum Entropy Fixed-Point Route Choice Model for Route Correlation
by Louis De Grange, Sebastián Raveau and Felipe González
Entropy 2014, 16(7), 3635-3654; https://doi.org/10.3390/e16073635 - 30 Jun 2014
Cited by 1 | Viewed by 5212
Abstract
In this paper we present a stochastic route choice model for transit networks that explicitly addresses route correlation due to overlapping alternatives. The model is based on a multi-objective mathematical programming problem, the optimality conditions of which generate an extension to the Multinomial Logit models. The proposed model considers a fixed point problem for treating correlations between routes, which can be solved iteratively. We estimated the new model on the Santiago (Chile) Metro network and compared the results with other route choice models found in the literature. The new model has better explanatory and predictive power than many other alternative models, correctly capturing the correlation factor. Our methodology can be extended to private transport networks. Full article
(This article belongs to the Special Issue Maximum Entropy and Its Application)
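The base model being extended is the Multinomial Logit, under which a route i with systematic utility $V_i$ is chosen with probability

$$ P(i) \;=\; \frac{e^{V_i}}{\sum_j e^{V_j}}, $$

a form whose independence assumptions fail for overlapping routes; this is precisely what the fixed-point correlation treatment addresses.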
Article
Entropy Evolution and Uncertainty Estimation with Dynamical Systems
by X. San Liang
Entropy 2014, 16(7), 3605-3634; https://doi.org/10.3390/e16073605 - 30 Jun 2014
Cited by 13 | Viewed by 6195
Abstract
This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of commonly used computational techniques such as coarse-graining and bin counting. The stochastic flow problem is first reduced to a computationally tractable low-dimensional system, using a reduced model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction; the latter particularly shows that stochasticity may sometimes actually enhance the self-organization process. Full article
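The deterministic connection mentioned above can be written compactly (a standard result, our notation): for $\dot{\mathbf{x}} = \mathbf{F}(\mathbf{x})$ with density ρ evolving under the Liouville equation, the absolute entropy $H = -\int \rho \ln \rho \, d\mathbf{x}$ obeys

$$ \frac{dH}{dt} \;=\; \mathbb{E}\left[ \nabla \cdot \mathbf{F} \right], $$

the expected rate of phase-space expansion.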
Article
Normalized Expected Utility-Entropy Measure of Risk
by Jiping Yang and Wanhua Qiu
Entropy 2014, 16(7), 3590-3604; https://doi.org/10.3390/e16073590 - 30 Jun 2014
Cited by 15 | Viewed by 6322
Abstract
Yang and Qiu proposed an expected utility-entropy (EU-E) measure of risk, which reflects an individual’s intuitive attitude toward risk. Luce et al. derived numerical representations under behavioral axioms about preference orderings among gambles and their joint receipt, which further demonstrates the reasonableness of the EU-E decision model as a normative one. In this paper, combining normalized expected utility and entropy, we improve the EU-E measure of risk and decision model and propose the normalized EU-E measure of risk and decision model. The normalized EU-E measure of risk has some normative properties under certain conditions. Moreover, the normalized EU-E decision model can serve as a proper descriptive model to some extent. Using this model, two cases of the common ratio effect and the common consequence effect, which are examples of certainty effects, can be explained in an intuitive way. Full article