Entropy, Volume 16, Issue 5 (May 2014) – 27 articles, Pages 2384–2903

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Article
Maximum Power of Thermally and Electrically Coupled Thermoelectric Generators
by Pablo Camacho-Medina, Miguel Angel Olivares-Robles, Alexander Vargas-Almeida and Francisco Solorio-Ordaz
Entropy 2014, 16(5), 2890-2903; https://doi.org/10.3390/e16052890 - 23 May 2014
Cited by 14 | Viewed by 7059
Abstract
In a recent work, we reported a study on the figure of merit of a thermoelectric system composed of thermoelectric generators connected electrically and thermally in different configurations. In this work, we analyze the output power delivered by a thermoelectric system for different arrays of thermoelectric materials in each configuration. Our study shows the impact of the array of thermoelectric materials on the output power of the composite system. We evaluate numerically the corresponding maximum output power for each configuration and determine the optimum array and configuration for maximum power. We compare our results with other recently reported studies.
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
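To make the load-matching idea behind maximum-power analysis concrete, here is a minimal Python sketch of the single-generator case; the module parameters are illustrative assumptions, and the paper's series/parallel coupling analysis is not reproduced.

```python
# Minimal sketch: maximum output power of one thermoelectric generator (TEG).
# For a module with Seebeck coefficient alpha, internal resistance R and
# temperature difference dT, P(R_L) = (alpha*dT)^2 * R_L / (R + R_L)^2,
# which is maximized at the matched load R_L = R, giving (alpha*dT)^2 / (4R).

def teg_power(alpha, R, dT, R_load):
    """Electrical power delivered to the load by one TEG."""
    return (alpha * dT) ** 2 * R_load / (R + R_load) ** 2

alpha, R, dT = 200e-6, 1.5, 100.0        # V/K, ohm, K (illustrative values)
P_max = teg_power(alpha, R, dT, R)       # matched load
print(f"P_max = {P_max * 1e3:.3f} mW")   # equals (alpha*dT)^2 / (4R)
```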
Article
A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
by Viviana Meruane, Valentina Del Fierro and Alejandro Ortiz-Bernardin
Entropy 2014, 16(5), 2869-2889; https://doi.org/10.3390/e16052869 - 23 May 2014
Cited by 15 | Viewed by 6969
Abstract
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data are processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data from an aluminum honeycomb panel under different damage scenarios.
(This article belongs to the Special Issue Maximum Entropy and Its Application)
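A rough sketch of the training-free inference idea as the abstract describes it: an unknown pattern is approximated as a maximum-entropy-weighted combination of stored patterns. The zeroth-order (prior-only) weights and all data below are assumptions for illustration; the full method also enforces linear reproducing constraints.

```python
import numpy as np

def maxent_weights(x, X_train, beta=1.0):
    """Entropy-optimal weights under normalization, with a Gaussian prior
    that favours training features close to the query x (simplified form)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    z = np.exp(-beta * d2)
    return z / z.sum()

def predict(x, X_train, Y_train, beta=1.0):
    return maxent_weights(x, X_train, beta) @ Y_train  # weighted combination

# Hypothetical mode-shape-derived features and damage indicators:
X_train = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
Y_train = np.array([0.1, 0.8, 0.5])
print(predict(np.array([0.9, 0.2]), X_train, Y_train, beta=2.0))
```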
Article
A Probabilistic Description of the Configurational Entropy of Mixing
by Jorge Garcés
Entropy 2014, 16(5), 2850-2868; https://doi.org/10.3390/e16052850 - 23 May 2014
Cited by 4 | Viewed by 6380
Abstract
This work presents a formalism to calculate the configurational entropy of mixing based on the identification of non-interacting atomic complexes in the mixture and the calculation of their respective probabilities, instead of computing the number of atomic configurations in a lattice. The methodology is applied to develop a general analytical expression for the configurational entropy of mixing of interstitial solutions. The expression is valid for any interstitial concentration, is suitable for the treatment of interstitial short-range order (SRO) and can be applied to tetrahedral or octahedral interstitial solutions in any crystal lattice. The effect of the SRO of H on the structural properties of the Nb-H and bcc Zr-H solid solutions is studied using an accurate description of the configurational entropy. The methodology can also be applied to systems with no translational symmetry, such as liquids and amorphous materials. An expression for the configurational entropy of a granular system composed of equal-sized hard spheres is deduced.
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
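For orientation, the ideal random-mixing limit that such formalisms generalize fits in a few lines of Python; the SRO corrections derived in the paper are not reproduced here.

```python
import numpy as np

# Baseline the paper generalizes: ideal (random-mixing) configurational
# entropy per site, S/(N k_B) = -[x ln x + (1 - x) ln(1 - x)]. A probabilistic
# formalism recovers this when the atomic complexes are uncorrelated and
# corrects it when short-range order is present.

def ideal_mixing_entropy(x):
    """Dimensionless entropy of mixing per lattice site for composition x."""
    x = np.asarray(x, dtype=float)
    return -(x * np.log(x) + (1 - x) * np.log(1 - x))

print(ideal_mixing_entropy(0.5))   # ln 2 ~ 0.693, the equiatomic maximum
```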
Article
Exact Test of Independence Using Mutual Information
by Shawn D. Pethel and Daniel W. Hahs
Entropy 2014, 16(5), 2839-2849; https://doi.org/10.3390/e16052839 - 23 May 2014
Cited by 24 | Viewed by 8586
Abstract
Using a recently discovered method for producing random symbol sequences with prescribed transition counts, we present an exact null hypothesis significance test (NHST) for mutual information between two random variables, the null hypothesis being that the mutual information is zero (i.e., independence). The exact tests reported in the literature assume that data samples for each variable are sequentially independent and identically distributed (iid). In general, time series data have dependencies (Markov structure) that violate this condition. The algorithm given in this paper is the first exact significance test of mutual information that takes into account the Markov structure. When the Markov order is not known or indefinite, an exact test is used to determine an effective Markov order.
(This article belongs to the Special Issue Information in Dynamical Systems and Complex Systems)
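A hedged sketch of the surrogate-testing logic follows. It uses plain permutations, which is the iid special case the authors relax; the paper's exact test instead draws surrogates with prescribed transition counts to respect Markov structure.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log2(c / n / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def mi_pvalue(x, y, n_surr=1000, rng=np.random.default_rng(0)):
    """Permutation NHST: fraction of surrogates with MI >= observed MI."""
    mi_obs = mutual_information(x, y)
    surr = [mutual_information(rng.permutation(x), y) for _ in range(n_surr)]
    return (1 + sum(s >= mi_obs for s in surr)) / (n_surr + 1)

x = [0, 1, 0, 1, 1, 0, 1, 0] * 20
y = x[:]                      # perfectly dependent copy
print(mi_pvalue(x, y))        # small p-value: reject independence
```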
Article
Randomized Binary Consensus with Faulty Agents
by Alexander Gogolev and Lucio Marcenaro
Entropy 2014, 16(5), 2820-2838; https://doi.org/10.3390/e16052820 - 21 May 2014
Cited by 4 | Viewed by 6029
Abstract
This paper investigates self-organizing binary majority consensus disturbed by faulty nodes with random and persistent failure. We study consensus in ordered and random networks with noise, message loss and delays. Using computer simulations, we show that: (1) explicit randomization by noise, message loss and topology can increase robustness towards faulty nodes; (2) commonly-used faulty nodes with random failure inhibit consensus less than faulty nodes with persistent failure; and (3) in some cases, such randomly failing faulty nodes can even promote agreement.
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
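A minimal simulation in the spirit of the study; the ring topology, failure model and update rule below are simplifying assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(state, faulty, noise=0.0):
    """One synchronous round of local majority voting on a ring."""
    n = len(state)
    new = state.copy()
    for i in range(n):
        if faulty[i]:
            continue                       # persistent failure: stuck value
        neigh = [state[(i - 1) % n], state[i], state[(i + 1) % n]]
        new[i] = int(sum(neigh) >= 2)      # majority of the local 3-window
        if rng.random() < noise:
            new[i] ^= 1                    # explicit randomization by noise
    return new

n = 100
state = rng.integers(0, 2, n)
faulty = rng.random(n) < 0.05              # 5% persistently faulty nodes
for _ in range(200):
    state = step(state, faulty, noise=0.01)
print("fraction of ones:", state.mean())
```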
Article
Changing the Environment Based on Empowerment as Intrinsic Motivation
by Christoph Salge, Cornelius Glackin and Daniel Polani
Entropy 2014, 16(5), 2789-2819; https://doi.org/10.3390/e16052789 - 21 May 2014
Cited by 38 | Viewed by 13094
Abstract
One aspect of intelligence is the ability to restructure your own environment so that the world you live in becomes more beneficial to you. In this paper, we investigate how the information-theoretic measure of agent empowerment can provide a task-independent, intrinsic motivation to restructure the world. We show how changes in embodiment and in the environment change the resulting behaviour of the agent and the artefacts left in the world. For this purpose, we introduce an approximation of the established empowerment formalism based on sparse sampling, which is simpler and significantly faster to compute for deterministic dynamics. Sparse sampling also introduces a degree of randomness into the decision-making process, which turns out to be beneficial in some cases. We then utilize the measure to generate agent behaviour for different agent embodiments in a Minecraft-inspired three-dimensional block world. The paradigmatic results demonstrate that empowerment can be used as a suitable generic intrinsic motivation not only to generate actions in given static environments, as shown in the past, but also to modify existing environmental conditions. In doing so, the emerging strategies for modifying an agent's environment turn out to be meaningful with respect to the specific agent capabilities, i.e., de facto to its embodiment.
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
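For deterministic dynamics, n-step empowerment reduces to the logarithm of the number of distinct reachable states, which makes the sparse-sampling approximation easy to sketch; the toy world below is an assumption for illustration, not the paper's block world.

```python
import numpy as np

def sparse_empowerment(state, transition, actions, n_steps, n_samples, rng):
    """Estimate n-step empowerment (bits) for deterministic dynamics by
    counting distinct end states over randomly sampled action sequences,
    instead of enumerating all |A|^n of them. transition(s, a) -> next state."""
    reached = set()
    for _ in range(n_samples):
        s = state
        for _ in range(n_steps):
            s = transition(s, rng.choice(actions))
        reached.add(s)
    return np.log2(len(reached))   # lower bound that improves with samples

# Toy 1D world: integer states, actions move left/right, clipped to [0, 10].
T = lambda s, a: min(10, max(0, s + a))
print(sparse_empowerment(5, T, [-1, 1], n_steps=3, n_samples=64,
                         rng=np.random.default_rng(0)))
```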
Article
Market Efficiency, Roughness and Long Memory in PSI20 Index Returns: Wavelet and Entropy Analysis
by Rui Pascoal and Ana Margarida Monteiro
Entropy 2014, 16(5), 2768-2788; https://doi.org/10.3390/e16052768 - 19 May 2014
Cited by 11 | Viewed by 5843
Abstract
In this study, features of the financial returns of the PSI20 index, related to market efficiency, are captured using wavelet- and entropy-based techniques. This characterization includes the following points. First, the detection of long memory, associated with low frequencies, through a global measure of the time series: the Hurst exponent, estimated by several methods, including wavelets. Second, the degree of roughness, or regularity variation, associated with the Hölder exponent, the fractal dimension and an estimation based on the multifractal spectrum. Finally, the degree of unpredictability of the series, estimated by approximate entropy. These aspects may also be studied through the concepts of non-extensive entropy and distribution using, for instance, the Tsallis q-triplet. They allow one to study the existence of efficiency in the financial market. In addition, the study of local roughness is performed by considering wavelet leader-based entropy. In fact, the wavelet coefficients are computed from a multiresolution analysis, and the wavelet leaders are defined by the local suprema of these coefficients near the point under consideration. The resulting entropy is more accurate in that detection than the Hölder exponent. These procedures enhance the capacity to identify the occurrence of financial crashes.
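As one concrete example of the long-memory toolkit mentioned above, here is a minimal aggregated-variance Hurst estimator; it is a stand-in for, not a reproduction of, the wavelet estimators used in the paper.

```python
import numpy as np

def hurst_aggvar(x, block_sizes):
    """Aggregated-variance Hurst estimator: for a self-similar process,
    Var(mean over blocks of size m) ~ m^(2H - 2), so H = 1 + slope/2."""
    x = np.asarray(x, float)
    v = []
    for m in block_sizes:
        nb = len(x) // m
        means = x[:nb * m].reshape(nb, m).mean(axis=1)
        v.append(means.var())
    slope = np.polyfit(np.log(block_sizes), np.log(v), 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)          # iid noise: expect H ~ 0.5
print(hurst_aggvar(returns, [8, 16, 32, 64, 128]))
```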
Article
Long-Range Atomic Order and Entropy Change at the Martensitic Transformation in a Ni-Mn-In-Co Metamagnetic Shape Memory Alloy
by Vicente Sánchez-Alarcos, Vicente Recarte, José Ignacio Pérez-Landazábal, Eduard Cesari and José Alberto Rodríguez-Velamazán
Entropy 2014, 16(5), 2756-2767; https://doi.org/10.3390/e16052756 - 19 May 2014
Cited by 32 | Viewed by 6991
Abstract
The influence of the atomic order on the martensitic transformation entropy change has been studied in a Ni-Mn-In-Co metamagnetic shape memory alloy through the evolution of the transformation temperatures under high-temperature quenching and post-quench annealing thermal treatments. It is confirmed that the entropy change evolves as a consequence of the variations in the degree of L21 atomic order brought about by thermal treatments, though, contrary to what occurs in ternary Ni-Mn-In, post-quench aging appears to be the most effective way to modify the transformation entropy in Ni-Mn-In-Co. It is also shown that any entropy change value between around 5 and 40 J/kgK can be achieved in a controllable way for a single alloy under the appropriate aging treatment, thus opening up the possibility of properly tuning the magnetocaloric effect.
(This article belongs to the Special Issue Entropy in Shape Memory Alloys)
Article
Transitional Intermittency Exponents Through Deterministic Boundary-Layer Structures and Empirical Entropic Indices
by LaVar King Isaacson
Entropy 2014, 16(5), 2729-2755; https://doi.org/10.3390/e16052729 - 16 May 2014
Cited by 4 | Viewed by 4592
Abstract
A computational procedure is developed to determine initial instabilities within a three-dimensional laminar boundary layer and to follow these instabilities in the streamwise direction through to the resulting intermittency exponents within a fully developed turbulent flow. The fluctuating velocity wave vector component equations are arranged into a Lorenz-type system of equations. The nonlinear time series solution of these equations at the fifth station downstream of the initial instabilities indicates a sequential outward burst process, while the results for the eleventh station predict a strong sequential inward sweep process. The results for the thirteenth station indicate a return to the original instability autogeneration process. The nonlinear time series solutions indicate regions of order and disorder within the solutions. Empirical entropies are defined from decomposition modes obtained from singular value decomposition techniques applied to the nonlinear time series solutions. Empirical entropic indices are obtained from the empirical entropies for two streamwise stations. The intermittency exponents are then obtained from the entropic indices for these streamwise stations that indicate the burst and autogeneration processes.
Article
Non-Extensive Entropy Econometrics: New Statistical Features of Constant Elasticity of Substitution-Related Models
by Second Bwanakare
Entropy 2014, 16(5), 2713-2728; https://doi.org/10.3390/e16052713 - 16 May 2014
Cited by 11 | Viewed by 6000
Abstract
Power-law (PL) formalism is known to provide an appropriate framework for the canonical modeling of nonlinear systems. We estimated three stochastically distinct models of constant elasticity of substitution (CES) class functions as a non-linear inverse problem and showed that these PL-related functions should have a closed form. The first model is related to an aggregator production function, the second to an aggregator utility function (the Armington) and the third to an aggregator technical transformation function. A q-generalization of the Kullback-Leibler (K-L) information divergence criterion function with a priori consistency constraints is proposed. Related inferential statistical indices are computed. The approach leads to robust estimation and to new findings about the true stochastic nature of this class of nonlinear, until now analytically intractable, functions. Outputs from traditional econometric techniques (Shannon entropy, NLLS, GMM, ML) are also presented.
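For reference, the Tsallis q-entropy underlying such q-generalized divergence criteria can be stated directly (standard definition; the paper's full estimation machinery is not reproduced here).

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q(p) = (1 - sum_i p_i^q) / (q - 1), recovering Shannon as q -> 1."""
    p = np.asarray(p, float)
    if abs(q - 1.0) < 1e-12:
        return -(p * np.log(p)).sum()        # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])
print(tsallis_entropy(p, 1.0), tsallis_entropy(p, 1.5))
```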
Article
Action-Amplitude Approach to Controlled Entropic Self-Organization
by Vladimir Ivancevic, Darryn Reid and Jason Scholz
Entropy 2014, 16(5), 2699-2712; https://doi.org/10.3390/e16052699 - 14 May 2014
Cited by 4 | Viewed by 5930
Abstract
Motivated by the notion of perceptual error, as a core concept of the perceptual control theory, we propose an action-amplitude model for controlled entropic self-organization (CESO). We present several aspects of this development that illustrate its explanatory power: (i) a physical view of partition functions and path integrals, as well as entropy and phase transitions; (ii) a global view of functional compositions and commutative diagrams; (iii) a local geometric view of the Kähler–Ricci flow and time-evolution of entropic action; and (iv) a computational view using various path-integral approximations.
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
Article
Model Selection Criteria Using Divergences
by Aida Toma
Entropy 2014, 16(5), 2686-2698; https://doi.org/10.3390/e16052686 - 14 May 2014
Cited by 31 | Viewed by 4828
Abstract
In this note we introduce some divergence-based model selection criteria. These criteria are defined by estimators of the expected overall discrepancy between the true unknown model and the candidate model, using dual representations of divergences and associated minimum divergence estimators. It is shown that the proposed criteria are asymptotically unbiased. The influence functions of these criteria are also derived and some comments on robustness are provided.
Article
Equivalent Temperature-Enthalpy Diagram for the Study of Ejector Refrigeration Systems
by Mohammed Khennich, Mikhail Sorin and Nicolas Galanis
Entropy 2014, 16(5), 2669-2685; https://doi.org/10.3390/e16052669 - 14 May 2014
Cited by 14 | Viewed by 13680
Abstract
The Carnot factor versus enthalpy variation (heat) diagram has been used extensively for the second law analysis of heat transfer processes. With enthalpy variation (heat) as the abscissa and the Carnot factor as the ordinate, the area between the curves representing the heat exchanging media on this diagram illustrates the exergy losses due to the transfer. It is also possible to draw the paths of working fluids in steady-state, steady-flow thermodynamic cycles on this diagram using the definition of "the equivalent temperature" as the ratio between the variations of enthalpy and entropy in an analyzed process. Despite the usefulness of this approach, two important shortcomings should be emphasized. First, the approach is not applicable to the processes of expansion and compression, particularly the isenthalpic processes taking place in expansion valves. Second, from the point of view of rigorous thermodynamics, the proposed ratio has the dimension of temperature for isobaric processes only. The present paper proposes to overcome these shortcomings by replacing the actual processes of expansion and compression with combinations of two thermodynamic paths: isentropic and isobaric. As a result, actual (not ideal) refrigeration and power cycles can be presented on equivalent temperature versus enthalpy variation diagrams. All the exergy losses taking place in different pieces of equipment, such as pumps, turbines, compressors, expansion valves, condensers and evaporators, are then clearly visualized. Moreover, the exergies consumed and produced in each component of these cycles are also presented, which gives the opportunity to analyze the exergy efficiencies of the components as well. The proposed diagram is finally applied to the second law analysis of an ejector-based refrigeration system.
(This article belongs to the Special Issue Entropy and the Second Law of Thermodynamics)
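The central definition is simple enough to state in code; the numbers below are illustrative, not taken from the paper's case study.

```python
# Minimal sketch of the paper's central definition: the equivalent
# temperature of a steady-flow process is the ratio of its enthalpy change
# to its entropy change, T_eq = dh / ds.

def equivalent_temperature(dh, ds):
    """Equivalent temperature (K) for enthalpy change dh (J/kg) and
    entropy change ds (J/kg.K)."""
    return dh / ds

# e.g., a condenser rejecting 180 kJ/kg while entropy drops 0.6 kJ/kg.K:
print(equivalent_temperature(-180e3, -0.6e3), "K")   # 300 K
```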
Article
The Impact of the Prior Density on a Minimum Relative Entropy Density: A Case Study with SPX Option Data
by Cassio Neri and Lorenz Schneider
Entropy 2014, 16(5), 2642-2668; https://doi.org/10.3390/e16052642 - 14 May 2014
Cited by 6 | Viewed by 6353
Abstract
We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance 2013) to find the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases.
(This article belongs to the Special Issue Maximum Entropy and Its Application)
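A hedged illustration of the underlying mechanism: minimizing relative entropy to a prior under expectation constraints exponentially tilts the prior. The sketch matches a single mean constraint on a discrete grid; the paper handles many call-price constraints and continuous densities, so this is a toy analogue only.

```python
import numpy as np
from scipy.optimize import brentq

# Discrete asset-price grid and an (unnormalized) lognormal-like prior.
x = np.linspace(1.0, 200.0, 2000)
p0 = np.exp(-(np.log(x) - np.log(100.0)) ** 2 / 0.08)
p0 /= p0.sum()

target_mean = 105.0   # hypothetical forward-like constraint

def tilted(lam):
    """Minimum-relative-entropy solution: p*(x) ~ p0(x) * exp(lam * g(x))."""
    w = p0 * np.exp(lam * (x - target_mean))
    return w / w.sum()

lam = brentq(lambda l: tilted(l) @ x - target_mean, -1.0, 1.0)
p = tilted(lam)
print(p @ x)   # ~ 105.0: the constraint is matched
```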
Article
Using Neighbor Diversity to Detect Fraudsters in Online Auctions
by Jun-Lin Lin and Laksamee Khomnotai
Entropy 2014, 16(5), 2629-2641; https://doi.org/10.3390/e16052629 - 14 May 2014
Cited by 9 | Viewed by 5394
Abstract
Online auctions attract not only legitimate businesses trying to sell their products but also fraudsters wishing to commit fraudulent transactions. Consequently, fraudster detection is crucial to ensure the continued success of online auctions. This paper proposes an approach to detect fraudsters based on the concept of neighbor diversity. The neighbor diversity of an auction account quantifies the diversity of all traders that have transactions with this account. Based on four different features of each trader (i.e., the number of received ratings, the number of cancelled transactions, k-core, and the joined date), four measurements of neighbor diversity are proposed to discern fraudsters from legitimate traders. An experiment is conducted using data gathered from a real-world auction website. The results show that, although the use of neighbor diversity on k-core or on the joined date shows little or no improvement in detecting fraudsters, both the neighbor diversity on the number of received ratings and the neighbor diversity on the number of cancelled transactions improve classification accuracy, compared to the state-of-the-art methods that use k-core and center weight.
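One plausible reading of such a diversity measure in code form; the entropy-over-bins formulation, the binning scheme and the example values are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def neighbor_diversity(feature_values, n_bins=10):
    """Shannon entropy (bits) of a binned feature (e.g., number of received
    ratings) over an account's trading partners. Low diversity, i.e. many
    near-identical neighbors, is a suspicious pattern."""
    counts, _ = np.histogram(feature_values, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

# A ring of freshly created accomplice accounts tends to score low:
print(neighbor_diversity([3, 3, 4, 3, 4, 3]))        # low entropy
print(neighbor_diversity([3, 250, 41, 7, 980, 66]))  # higher entropy
```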
Article
Scale-Invariant Divergences for Density Functions
by Takafumi Kanamori
Entropy 2014, 16(5), 2611-2628; https://doi.org/10.3390/e16052611 - 13 May 2014
Cited by 10 | Viewed by 5195
Abstract
Divergence is a discrepancy measure between two objects, such as functions, vectors, matrices, and so forth. In particular, divergences defined on probability distributions are widely employed in probabilistic forecasting. As a dissimilarity measure, the divergence should satisfy some conditions. In this paper, we consider two conditions: the first is the scale-invariance property and the second is that the divergence is approximated by the sample mean of a loss function. The first requirement is an important feature for dissimilarity measures: in general, a divergence depends on the system of measurements used to quantify the objects, and a scale-invariant divergence transforms in a consistent way when the system of measurements is changed. The second requirement is formalized such that the divergence is expressed by using the so-called composite score. We study the relation between composite scores and scale-invariant divergences, and we propose a new class of divergences, called Hölder divergence, that satisfies the two conditions above. We present some theoretical properties of Hölder divergence and show that it unifies existing divergences from the viewpoint of scale-invariance.
Article
Guided Self-Organization in a Dynamic Embodied System Based on Attractor Selection Mechanism
by Surya G. Nurzaman, Xiaoxiang Yu, Yongjae Kim and Fumiya Iida
Entropy 2014, 16(5), 2592-2610; https://doi.org/10.3390/e16052592 - 13 May 2014
Cited by 15 | Viewed by 10412
Abstract
Guided self-organization can be regarded as a paradigm proposed to understand how to guide a self-organizing system towards desirable behaviors, while maintaining its non-deterministic dynamics with emergent features. It is, however, not a trivial problem to guide the self-organizing behavior of physically embodied systems like robots, as the behavioral dynamics result from interactions among the controller, the mechanical dynamics of the body, and the environment. This paper presents a guided self-organization approach for dynamic robots based on a coupling between the system's mechanical dynamics and an internal control structure known as the attractor selection mechanism. The mechanism enables the robot to gracefully shift between random and deterministic behaviors, represented by a number of attractors, depending on internally generated stochastic perturbation and sensory input. The robot used in this paper is a simulated curved beam hopping robot: a system whose mechanical dynamics vary with its actuation frequencies. Despite the simplicity of the approach, it will be shown how the approach regulates the probability that the robot reaches a goal through the interplay among the sensory input, the level of inherent stochastic perturbation, i.e., noise, and the mechanical dynamics.
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
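A minimal one-dimensional sketch of the attractor selection structure, assuming a toy bistable dynamics and a hypothetical activity signal; the paper couples this kind of controller to the hopping robot's mechanics rather than to a scalar toy system.

```python
import numpy as np

# Attractor selection: x' = f(x) * activity + noise. High activity (good
# performance) lets the deterministic attractor dynamics dominate; low
# activity lets noise drive random exploration.

rng = np.random.default_rng(0)

def f(x):
    return x - x ** 3          # toy bistable dynamics: attractors at +/-1

def step(x, activity, dt=0.01, sigma=0.5):
    noise = sigma * np.sqrt(dt) * rng.standard_normal()
    return x + f(x) * activity * dt + noise

x = 0.1
for _ in range(2000):
    activity = 1.0 if x > 0 else 0.05   # hypothetical goal: prefer x > 0
    x = step(x, activity)
print(x)   # typically settles near the attractor at +1
```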
Article
A Relevancy, Hierarchical and Contextual Maximum Entropy Framework for a Data-Driven 3D Scene Generation
by Mesfin Dema and Hamed Sari-Sarraf
Entropy 2014, 16(5), 2568-2591; https://doi.org/10.3390/e16052568 - 9 May 2014
Viewed by 5859
Abstract
We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated by a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations), and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes that are composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.
(This article belongs to the Special Issue Maximum Entropy and Its Application)
Article
Exergy Analysis of Flat Plate Solar Collectors
by Zhong Ge, Huitao Wang, Hua Wang, Songyuan Zhang and Xin Guan
Entropy 2014, 16(5), 2549-2567; https://doi.org/10.3390/e16052549 - 9 May 2014
Cited by 66 | Viewed by 9291
Abstract
This study proposes the concept of the local heat loss coefficient and examines the calculation method for the average heat loss coefficient and the average absorber plate temperature. It also presents an exergy analysis model of flat plate collectors, considering non-uniformity in temperature distribution along the absorber plate. The computation results agree well with experimental data. The effects of ambient temperature, solar irradiance, fluid inlet temperature, and fluid mass flow rate on useful heat rate, useful exergy rate, and exergy loss rate are examined. An optimal fluid inlet temperature exists for obtaining the maximum useful exergy rate. The calculated optimal fluid inlet temperature is 69 °C, and the maximum useful exergy rate is 101.6 W. Exergy rate distribution is analyzed when ambient temperature, solar irradiance, fluid mass flow rate, and fluid inlet temperature are set to 20 °C, 800 W/m2, 0.05 kg/s, and 50 °C, respectively. The exergy efficiency is 5.96%, and the largest exergy loss is caused by the temperature difference between the absorber plate surface and the sun, accounting for 72.86% of the total exergy rate.
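For orientation, a back-of-the-envelope exergy balance using standard textbook expressions (the Petela factor for solar exergy). The ambient temperature and irradiance follow the abstract's operating point; the remaining values are illustrative assumptions, so the resulting efficiency differs from the paper's 5.96%.

```python
# Sketch: exergy rate of useful heat, Ex_u = Q_u * (1 - T_a / T_m), against
# the Petela exergy of incident solar radiation. Formulas are the common
# textbook ones, not necessarily the paper's exact collector model.

T_a, T_sun = 293.15, 5770.0          # ambient and apparent sun temperature, K
G, A = 800.0, 2.0                    # irradiance (W/m2), collector area (m2)
Q_u, T_m = 1100.0, 333.15            # useful heat rate (W), mean fluid T (K)

r = T_a / T_sun
ex_sun = G * A * (1 - (4 / 3) * r + (1 / 3) * r ** 4)   # Petela factor
ex_use = Q_u * (1 - T_a / T_m)
print(f"exergy efficiency ~ {ex_use / ex_sun:.1%}")
```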
Article
Measuring Instantaneous and Spectral Information Entropies by Shannon Entropy of Choi-Williams Distribution in the Context of Electroencephalography
by Umberto Melia, Francesc Claria, Montserrat Vallverdu and Pere Caminal
Entropy 2014, 16(5), 2530-2548; https://doi.org/10.3390/e16052530 - 9 May 2014
Cited by 11 | Viewed by 8295
Abstract
The theory of Shannon entropy was applied to the Choi-Williams time-frequency distribution (CWD) of time series in order to extract entropy information in both the time and frequency domains. In this way, four novel indexes were defined: (1) partial instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function at each time instant taken independently; (2) partial spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of each frequency value taken independently; (3) complete instantaneous entropy, calculated as the entropy of the CWD with respect to time by using the probability mass function of the entire CWD; and (4) complete spectral information entropy, calculated as the entropy of the CWD with respect to frequency by using the probability mass function of the entire CWD. These indexes were tested on synthetic time series with different behaviors (periodic, chaotic and random) and on a dataset of electroencephalographic (EEG) signals recorded in different states (eyes-open, eyes-closed, ictal and non-ictal activity). The results show that the values of these indexes tend to decrease, in different proportions, as the behavior of the synthetic signals evolves from chaos or randomness to periodicity. Statistical differences (p-value < 0.0005) were found between the values of these measures when comparing eyes-open and eyes-closed states and when comparing ictal and non-ictal states in the traditional EEG frequency bands. Finally, this paper demonstrates that the proposed measures can be useful tools for quantifying the different periodic, chaotic and random components of EEG signals.
(This article belongs to the Special Issue Advances in Information Theory)
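A hedged sketch of index (1), using a spectrogram as a stand-in for the Choi-Williams distribution, since the CWD itself requires a dedicated implementation: at each time instant the spectrum is normalized to a probability mass function and its Shannon entropy is taken.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 256.0
t = np.arange(0, 8, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)       # periodic signal: low entropy expected

f, tt, S = spectrogram(sig, fs=fs, nperseg=128)
P = S / S.sum(axis=0, keepdims=True)   # PMF over frequency at each time step
H = np.array([-(p[p > 0] * np.log(p[p > 0])).sum() for p in P.T])
print(H.mean())                        # "partial instantaneous" entropy proxy
```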
Article
Three Methods for Estimating the Entropy Parameter M Based on a Decreasing Number of Velocity Measurements in a River Cross-Section
by Giulia Farina, Stefano Alvisi, Marco Franchini and Tommaso Moramarco
Entropy 2014, 16(5), 2512-2529; https://doi.org/10.3390/e16052512 - 9 May 2014
Cited by 34 | Viewed by 7487
Abstract
The theoretical development and practical application of three new methods for estimating the entropy parameter M, used within the framework of the entropy method proposed by Chiu in the 1980s as a valid alternative to the velocity-area method for measuring the discharge in a river, are illustrated here. The first method is based on reproducing the cumulative velocity distribution function associated with a flood event and requires measurements over the entire cross-section, whereas, in the second and third methods, the estimate of M is based on reproducing the cross-sectional mean velocity by following two different procedures. Both of them rely on the entropy parameter M alone and look for the value of M that brings two different estimates of the mean velocity, obtained by using two different M-dependent approaches, as close as possible. From an operational viewpoint, the acquisition of velocity data becomes increasingly simplified going from the first to the third approach, which uses only one surface velocity measurement. The proposed procedures are applied in a case study based on the Ponte Nuovo hydrometric station on the Tiber River in central Italy.
(This article belongs to the Special Issue Entropy in Hydrology)
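The closed-form relation behind such estimators is that, in Chiu's framework, the ratio of mean to maximum velocity depends on M alone, so M can be recovered numerically from paired observations; the velocities below are illustrative values, not data from the case study.

```python
import numpy as np
from scipy.optimize import brentq

def phi(M):
    """Chiu's relation: u_mean / u_max = exp(M)/(exp(M) - 1) - 1/M."""
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

def estimate_M(u_mean, u_max):
    """Invert phi(M) = u_mean/u_max; the ratio must lie in (0.5, 1)."""
    ratio = u_mean / u_max
    return brentq(lambda M: phi(M) - ratio, 1e-6, 50.0)

print(estimate_M(u_mean=1.32, u_max=2.0))   # illustrative velocities (m/s)
```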
Review
Recent Theoretical Approaches to Minimal Artificial Cells
by Fabio Mavelli, Emiliano Altamura, Luigi Cassidei and Pasquale Stano
Entropy 2014, 16(5), 2488-2511; https://doi.org/10.3390/e16052488 - 8 May 2014
Cited by 16 | Viewed by 6601
Abstract
Minimal artificial cells (MACs) are self-assembled chemical systems able to mimic the behavior of living cells at a minimal level, i.e., to exhibit self-maintenance, self-reproduction and the capability of evolution. The bottom-up approach to the construction of MACs is mainly based on the encapsulation of chemical reacting systems inside lipid vesicles, i.e., chemical systems enclosed (compartmentalized) by a double-layered lipid membrane. Several researchers are currently interested in synthesizing such simple cellular models for biotechnological purposes or for investigating origin-of-life scenarios. Within this context, the properties of lipid vesicles (e.g., their stability, permeability, growth dynamics, potential to host reactions or undergo division processes…) play a central role, in combination with the dynamics of the encapsulated chemical or biochemical networks. Thus, from a theoretical standpoint, it is very important to develop kinetic equations in order to explore first, and specify later, the conditions that allow the robust implementation of these complex chemically reacting systems, as well as their controlled reproduction. Because they are compartmentalized in small volumes, the populations of reacting molecules can be very low in terms of the number of molecules, and their behavior therefore becomes highly affected by stochastic effects, both in the time course of reactions and in the occupancy distribution among the vesicle population. In this short review, we report our mathematical approaches to modeling artificial cell systems in this complex scenario by summarizing three recent simulation studies on the topic of primitive cell (protocell) systems.
(This article belongs to the Section Entropy Reviews)
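To illustrate why a stochastic treatment matters at vesicle scales, here is a minimal Gillespie simulation of a toy A → B reaction; it is a generic stochastic simulation algorithm (SSA) example, not one of the review's protocell models.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_decay(nA, k, t_end):
    """Exact stochastic trajectory of A -> B with rate constant k.
    With ~20 molecules, run-to-run fluctuations are large, which is exactly
    the regime where deterministic rate equations break down."""
    t, traj = 0.0, [(0.0, nA)]
    while nA > 0 and t < t_end:
        a = k * nA                       # total propensity
        t += rng.exponential(1.0 / a)    # waiting time to the next reaction
        nA -= 1                          # fire one A -> B event
        traj.append((t, nA))
    return traj

print(gillespie_decay(nA=20, k=0.5, t_end=10.0)[-1])
```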
Article
F-Geometry and Amari’s α-Geometry on a Statistical Manifold
by Harsha K. V. and Subrahamanian Moosath K. S.
Entropy 2014, 16(5), 2472-2487; https://doi.org/10.3390/e16052472 - 6 May 2014
Cited by 9 | Viewed by 6114
Abstract
In this paper, we introduce a geometry called F-geometry on a statistical manifold S using an embedding F of S into the space RX of random variables. Amari's α-geometry is a special case of F-geometry. Then, using the embedding F and a positive smooth function G, we introduce the (F,G)-metric and (F,G)-connections, which enable one to consider weighted Fisher information metrics and weighted connections. The necessary and sufficient condition for two (F,G)-connections to be dual with respect to the (F,G)-metric is obtained. We then show that Amari's 0-connection is the only self-dual F-connection with respect to the Fisher information metric. Invariance properties of the geometric structures are discussed, and it is proved that Amari's α-connections are the only F-connections that are invariant under smooth one-to-one transformations of the random variables.
(This article belongs to the Special Issue Information Geometry)
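For context, the standard α-embedding that F-geometry generalizes can be stated briefly (notation may differ slightly from the paper):

```latex
% Amari's alpha-representation of a density p; F-geometry replaces
% L_alpha by a general embedding F.
L_\alpha(p) =
\begin{cases}
\dfrac{2}{1-\alpha}\, p^{(1-\alpha)/2}, & \alpha \neq 1,\\[4pt]
\log p, & \alpha = 1.
\end{cases}
```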
Article
Computational Information Geometry in Statistics: Theory and Practice
by Frank Critchley and Paul Marriott
Entropy 2014, 16(5), 2454-2471; https://doi.org/10.3390/e16052454 - 2 May 2014
Cited by 7 | Viewed by 8237
Abstract
A broad view of the nature and potential of computational information geometry in statistics is offered. This new area suitably extends the manifold-based approach of classical information geometry to a simplicial setting, in order to obtain an operational universal model space. Additional underlying theory and illustrative real examples are presented. In the infinite-dimensional case, challenges inherent in this ambitious overall agenda are highlighted and promising new methodologies indicated.
(This article belongs to the Special Issue Information Geometry)
Article
Optimization of Biomass-Fuelled Combined Cooling, Heating and Power (CCHP) Systems Integrated with Subcritical or Transcritical Organic Rankine Cycles (ORCs)
by Daniel Maraver, Sylvain Quoilin and Javier Royo
Entropy 2014, 16(5), 2433-2453; https://doi.org/10.3390/e16052433 - 30 Apr 2014
Cited by 27 | Viewed by 8730
Abstract
This work focuses on the thermodynamic optimization of Organic Rankine Cycles (ORCs), coupled with absorption or adsorption cooling units, for combined cooling, heating and power (CCHP) generation from biomass combustion. Results were obtained by modelling, with the main aim of providing optimization guidelines for the operating conditions of these types of systems, specifically the subcritical or transcritical ORC, when integrated in a CCHP system to supply typical heating and cooling demands in the tertiary sector. The thermodynamic approach was complemented, to avoid its possible limitations, by the technological constraints of the expander, the heat exchangers and the pump of the ORC. The working fluids considered are: n-pentane, n-heptane, octamethyltrisiloxane, toluene and dodecamethylcyclohexasiloxane. In addition, the energy and environmental performance of the different optimal CCHP plants was investigated. The optimal plant from the energy and environmental point of view is the one integrating a toluene recuperative ORC, although it is limited to designs with a turbine-type expander. The trigeneration plant could also be developed in an energy- and environmentally-efficient way with an n-pentane recuperative ORC and a volumetric expander.
(This article belongs to the Special Issue Advances in Applied Thermodynamics)
Article
General H-theorem and Entropies that Violate the Second Law
by Alexander N. Gorban
Entropy 2014, 16(5), 2408-2432; https://doi.org/10.3390/e16052408 - 29 Apr 2014
Cited by 16 | Viewed by 6657
Abstract
The H-theorem states that the entropy production is nonnegative and, therefore, the entropy of a closed system should change monotonically in time. In information processing, the entropy production is positive for random transformations of signals (the information processing lemma). Originally, the H-theorem and the information processing lemma were proved for the classical Boltzmann-Gibbs-Shannon entropy and for the corresponding divergence (the relative entropy). Many new entropies and divergences have been proposed during the last decades, and for all of them the H-theorem is needed. This note proposes a simple and general criterion to check whether the H-theorem is valid for a convex divergence H and demonstrates that some of the popular divergences obey no H-theorem. We consider systems with n states Ai that obey first-order kinetics (the master equation). A convex function H is a Lyapunov function for all master equations with a given equilibrium if and only if its conditional minima properly describe the equilibria of the pair transitions Ai ⇌ Aj. This theorem does not depend on the principle of detailed balance and is valid for general Markov kinetics. Elementary analysis of pair equilibria demonstrates that popular Bregman divergences, like the Euclidean distance or the Itakura-Saito distance in the space of distributions, cannot be universal Lyapunov functions for first-order kinetics and can increase in Markov processes. Therefore, they violate the second law and the information processing lemma. In particular, for these measures of information (divergences), random manipulation of data may add information to the data. The main results are extended to nonlinear generalized mass action law kinetic equations.
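The criterion can also be probed numerically: evolve a master equation and track a candidate divergence to equilibrium along the trajectory. The sketch below checks the classical relative entropy, which must decrease monotonically; other divergences can be substituted to test them the same way. The rate matrix is an arbitrary illustrative choice, not one from the paper.

```python
import numpy as np

# Master equation dp/dt = K p on three states; columns of K sum to zero.
K = np.array([[-1.0, 0.5, 0.2],
              [ 0.7, -1.0, 0.3],
              [ 0.3, 0.5, -0.5]])

# Equilibrium distribution: the null eigenvector of K, normalized.
w, V = np.linalg.eig(K)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

def kl(p, q):
    """Relative entropy (KL divergence) of p from q."""
    return float(np.sum(p * np.log(p / q)))

p, dt, H = np.array([0.9, 0.05, 0.05]), 1e-3, []
for _ in range(5000):
    p = p + dt * K @ p                  # explicit Euler step
    H.append(kl(p, pi))

print("monotone decrease:", all(a >= b for a, b in zip(H, H[1:])))
```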
Article
Measuring the Complexity of Self-Organizing Traffic Lights
by Darío Zubillaga, Geovany Cruz, Luis Daniel Aguilar, Jorge Zapotécatl, Nelson Fernández, José Aguilar, David A. Rosenblueth and Carlos Gershenson
Entropy 2014, 16(5), 2384-2407; https://doi.org/10.3390/e16052384 - 25 Apr 2014
Cited by 45 | Viewed by 24574
Abstract
We apply measures of complexity, emergence, and self-organization to an urban traffic model in order to compare a traditional traffic-light coordination method with a self-organizing method in two scenarios: cyclic boundaries and non-orientable boundaries. We show that the measures are useful for identifying and characterizing different dynamical phases. It becomes clear that different operation regimes are required for different traffic demands. Thus, not only is traffic a non-stationary problem, requiring controllers to adapt constantly; controllers must also drastically change the complexity of their behavior depending on the demand. Based on our measures, and extending Ashby's law of requisite variety, we can say that the self-organizing method achieves an adaptability level comparable to that of a living system.
(This article belongs to the Special Issue Entropy Methods in Guided Self-Organization)
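A minimal sketch of the entropy-based measures used in this line of work, assuming the normalized-entropy formulation (emergence E as normalized Shannon entropy, self-organization S = 1 - E, complexity C = 4ES, which peaks when order and disorder balance); the binning and test data are illustrative.

```python
import numpy as np

def measures(samples, n_bins=10):
    """Return (E, S, C) for a sample of observations."""
    counts, _ = np.histogram(samples, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    E = -(p * np.log2(p)).sum() / np.log2(n_bins)   # normalized entropy
    S = 1.0 - E
    return E, S, 4.0 * E * S

rng = np.random.default_rng(0)
print(measures(rng.random(10_000)))   # disordered: E ~ 1, C ~ 0
print(measures(np.zeros(10_000)))     # ordered:    E = 0, C = 0
```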