Entropy, Volume 15, Issue 7 (July 2013) – 20 articles, Pages 2464-2873

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
289 KiB  
Article
Relative Entropy Derivative Bounds
by Pablo Zegers, Alexis Fuentes and Carlos Alarcón
Entropy 2013, 15(7), 2861-2873; https://doi.org/10.3390/e15072861 - 23 Jul 2013
Cited by 4 | Viewed by 6176
Abstract
We show that the derivative of the relative entropy with respect to its parameters is lower and upper bounded. We characterize the conditions under which this derivative can reach zero. We use these results to explain when the minimum relative entropy and the maximum log likelihood approaches can be valid. We show that these approaches naturally activate in the presence of large data sets and that they are inherent properties of any density estimation process involving large numbers of random variables. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
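As a purely notational sketch (standard definitions, not the paper's statement of the bounds), the quantity whose parameter derivative is bounded can be written as the relative entropy between a fixed density p and a parametric model q_θ:

```latex
% Hedged sketch of the standard definitions; the paper's exact regularity
% assumptions and the resulting bounds are not reproduced here.
D(\theta) \;=\; D\!\left(p \,\|\, q_\theta\right)
          \;=\; \int p(x)\,\ln\frac{p(x)}{q_\theta(x)}\,dx,
\qquad
\frac{\partial D(\theta)}{\partial\theta}
          \;=\; -\int p(x)\,\frac{\partial \ln q_\theta(x)}{\partial\theta}\,dx .
```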
728 KiB  
Article
Maximum Entropy Production and Time Varying Problems: The Seasonal Cycle in a Conceptual Climate Model
by Didier Paillard and Corentin Herbert
Entropy 2013, 15(7), 2846-2860; https://doi.org/10.3390/e15072846 - 19 Jul 2013
Cited by 7 | Viewed by 6510
Abstract
It has been suggested that the maximum entropy production (MEP) principle, or MEP hypothesis, could be an interesting tool to compute climatic variables like temperature. In this climatological context, a major limitation of MEP is that it is generally assumed to be applicable only for stationary systems. It is therefore often anticipated that critical climatic features like the seasonal cycle or climatic change cannot be represented within this framework. We discuss here several possibilities in order to introduce time-varying climatic problems using the MEP formalism. We will show that it is possible to formulate a MEP model which accounts for time evolution in a consistent way. This formulation leads to physically relevant results as long as the internal time scales associated with thermal inertia are small compared to the speed of external changes. We will focus on transient changes as well as on the seasonal cycle in a conceptual climate box-model in order to discuss the physical relevance of such an extension of the MEP framework. Full article
(This article belongs to the Special Issue Maximum Entropy Production)
702 KiB  
Article
Microstructure of Laser Re-Melted AlCoCrCuFeNi High Entropy Alloy Coatings Produced by Plasma Spraying
by Tai M. Yue, Hui Xie, Xin Lin, Haiou Yang and Guanghui Meng
Entropy 2013, 15(7), 2833-2845; https://doi.org/10.3390/e15072833 - 19 Jul 2013
Cited by 90 | Viewed by 8944
Abstract
An AlCoCrCuFeNi high-entropy alloy (HEA) coating was fabricated on a pure magnesium substrate using a two-step method, involving plasma spray processing and laser re-melting. After laser re-melting, the microporosity present in the as-sprayed coating was eliminated, and a dense surface layer was obtained. The microstructure of the laser-remelted layer exhibits an epitaxial growth of columnar dendrites, which originate from the crystals of the spray coating. The presence of a continuous epitaxial growth of columnar HEA dendrites in the laser re-melted layer was analyzed based on the critical stability condition of a planar interface. The solidification of a columnar dendrite structure of the HEA alloy in the laser-remelted layer was analyzed based on the Kurz–Giovanola–Trivedi model and Hunt’s criterion, with modifications for a multi-component alloy. Full article
(This article belongs to the Special Issue High Entropy Alloys)
4192 KiB  
Article
Kinetic Theory Microstructure Modeling in Concentrated Suspensions
by Emmanuelle Abisset-Chavanne, Rabih Mezher, Steven Le Corre, Amine Ammar and Francisco Chinesta
Entropy 2013, 15(7), 2805-2832; https://doi.org/10.3390/e15072805 - 19 Jul 2013
Cited by 20 | Viewed by 6982
Abstract
When suspensions involving rigid rods become too concentrated, standard dilute theories fail to describe their behavior. Rich microstructures involving complex clusters are observed, and no model is available to describe their kinematics and rheological effects. In previous works, the authors proposed a first attempt to describe such clusters from a micromechanical model, but neither its validity nor the rheological effects were addressed. Later, the authors applied this model to fit rheological measurements in concentrated suspensions of carbon nanotubes (CNTs) by assuming a rheo-thinning behavior at the constitutive law level. However, three major issues had not been addressed until now: (i) the validation of the micromechanical model by direct numerical simulation; (ii) the establishment of a sufficiently general multi-scale kinetic theory description, taking into account interaction, diffusion and elastic effects; and (iii) the proposal of a numerical technique able to solve the kinetic theory description. This paper addresses these three issues, proving the validity of the micromechanical model, establishing a multi-scale kinetic theory description and then solving it by using an advanced and efficient separated representation of the cluster distribution function. These three aspects constitute the main originality and major contribution of the present paper. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
344 KiB  
Article
Non-Linear Canonical Correlation Analysis Using Alpha-Beta Divergence
by Abhijit Mandal and Andrzej Cichocki
Entropy 2013, 15(7), 2788-2804; https://doi.org/10.3390/e15072788 - 18 Jul 2013
Cited by 12 | Viewed by 6046
Abstract
We propose a generalized method of canonical correlation analysis using the Alpha-Beta divergence, called AB-canonical analysis (ABCA). From observations of two random variables, x ∈ R^P and y ∈ R^Q, ABCA finds directions, w_x ∈ R^P and w_y ∈ R^Q, such that the AB-divergence between the joint distribution of (w_x^T x, w_y^T y) and the product of its marginal distributions is maximized. The number of significant non-zero canonical coefficients is determined by using a sequential permutation test. The advantage of our method over standard canonical correlation analysis (CCA) is that it can reconstruct the hidden non-linear relationship between w_x^T x and w_y^T y, and it is robust against outliers. We extend ABCA to the case where data are observed in the form of tensors. We further generalize this method by imposing sparseness constraints. An extensive simulation study is performed to justify our approach. Full article
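Restated in display form directly from the abstract (the explicit two-parameter expression of the Alpha-Beta divergence D_AB is given in the paper and not reproduced here), the ABCA objective reads:

```latex
% ABCA objective as described in the abstract: maximize the AB-divergence
% between the joint law of the projections and the product of its marginals.
(w_x^{\ast}, w_y^{\ast}) \;=\;
  \operatorname*{arg\,max}_{w_x \in \mathbb{R}^{P},\; w_y \in \mathbb{R}^{Q}}
  D_{AB}\!\left( p\!\left(w_x^{T}x,\, w_y^{T}y\right)
    \,\middle\|\, p\!\left(w_x^{T}x\right) p\!\left(w_y^{T}y\right) \right).
```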
1429 KiB  
Article
Protection Intensity Evaluation for a Security System Based on Entropy Theory
by Haitao Lv, Ruimin Hu, Jun Chen, Zheng He and Shihong Chen
Entropy 2013, 15(7), 2766-2787; https://doi.org/10.3390/e15072766 - 17 Jul 2013
Cited by 1 | Viewed by 5002
Abstract
Protection effectiveness is an important metric for judging whether a security system is good or not. In this paper, a security system deployed in a guard field is regarded abstractly as a security network. A quantitative protection effectiveness evaluation method based on entropy theory is introduced. We propose the protection intensity model, which can be used to calculate the protection intensity provided by a security system or a security network for a stationary or moving object. Using the protection intensity model, an algorithm for finding the minimal protection intensity paths of a field-deployed multiple security system is also put forward. The minimal protection intensity paths can be considered as an effectiveness measure of security networks. Finally, we present simulations of the methods and models in this paper. Full article
325 KiB  
Article
An Economics-Based Second Law Efficiency
by Karan H. Mistry and John H. Lienhard V
Entropy 2013, 15(7), 2736-2765; https://doi.org/10.3390/e15072736 - 12 Jul 2013
Cited by 30 | Viewed by 7341
Abstract
Second Law efficiency is a useful parameter for characterizing the energy requirements of a system in relation to the limits of performance prescribed by the Laws of Thermodynamics. However, since energy costs typically represent less than 50% of the overall cost of product for many large-scale plants (and, in particular, for desalination plants), it is useful to have a parameter that can characterize both energetic and economic effects. In this paper, an economics-based Second Law efficiency is defined by analogy to the exergetic Second Law efficiency and is applied to several desalination systems. It is defined as the ratio of the minimum cost of producing a product to the actual cost of production. The minimum cost of producing the product is equal to the cost of the primary source of energy times the minimum amount of energy required, as governed by the Second Law. The analogy is used to show that thermodynamic irreversibilities can be assigned costs and compared directly to non-energetic costs, such as capital expenses, labor and other operating costs. The economics-based Second Law efficiency identifies costly sources of irreversibility and places these irreversibilities in context with the overall system costs. These principles are illustrated through three case studies. First, a simple analysis of multistage flash and multiple effect distillation systems is performed using available data. Second, a complete energetic and economic model of a reverse osmosis plant is developed to show how economic costs are influenced by energetics. Third, a complete energetic and economic model of a solar powered direct contact membrane distillation system is developed to illustrate the true costs associated with so-called free energy sources. Full article
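As an illustration of the definition just stated, a minimal sketch in Python (the function name and all numbers are hypothetical placeholders, not taken from the paper's case studies):

```python
# Hedged sketch of the economics-based Second Law efficiency defined above:
# (cost of primary energy) x (Second-Law minimum energy) / (actual production cost).
# All figures are illustrative placeholders, not the paper's data.

def economic_second_law_efficiency(energy_price, minimum_energy, actual_cost):
    """Ratio of the Second-Law minimum cost of a product to its actual cost.

    energy_price   -- cost of the primary energy source, e.g. $/kWh
    minimum_energy -- least energy required by the Second Law, e.g. kWh/m^3
    actual_cost    -- actual total cost of production, e.g. $/m^3, including
                      capital, labor and other operating costs
    """
    minimum_cost = energy_price * minimum_energy
    return minimum_cost / actual_cost

# Hypothetical desalination-style numbers (order of magnitude only).
eta = economic_second_law_efficiency(energy_price=0.10,    # $/kWh
                                     minimum_energy=1.0,   # kWh/m^3, roughly the least work of separation
                                     actual_cost=1.00)     # $/m^3 of product water
print(f"economics-based Second Law efficiency ~ {eta:.1%}")
```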
2244 KiB  
Article
Efficient Approximation of the Conditional Relative Entropy with Applications to Discriminative Learning of Bayesian Network Classifiers
by Alexandra M. Carvalho, Pedro Adão and Paulo Mateus
Entropy 2013, 15(7), 2716-2735; https://doi.org/10.3390/e15072716 - 12 Jul 2013
Cited by 14 | Viewed by 6087
Abstract
We propose a minimum variance unbiased approximation to the conditional relative entropy of the distribution induced by the observed frequency estimates, for multi-classification tasks. This approximation is an extension of a decomposable scoring criterion, named approximate conditional log-likelihood (aCLL), primarily used for discriminative learning of augmented Bayesian network classifiers. Our contribution is twofold: (i) it addresses multi-classification tasks and not only binary-classification ones; and (ii) it covers broader stochastic assumptions than a uniform distribution over the parameters. Specifically, we consider a Dirichlet distribution over the parameters, which is experimentally shown to be a very good approximation to CLL. In addition, for Bayesian network classifiers, a closed-form equation is found for the parameters that maximize the scoring criterion. Full article
(This article belongs to the Special Issue Estimating Information-Theoretic Quantities from Data)
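For reference, the standard definition of the conditional relative entropy being approximated (a notational sketch; the paper's aCLL criterion itself is not reproduced) is:

```latex
% Conditional relative entropy between the empirical conditional distribution
% \hat{P}(C \mid \mathbf{X}) and the one induced by a Bayesian network
% classifier B; a standard definition, sketched for orientation only.
D\!\left(\hat{P}(C\mid\mathbf{X}) \,\big\|\, P_B(C\mid\mathbf{X})\right)
  \;=\; \sum_{\mathbf{x}} \hat{P}(\mathbf{x})
        \sum_{c} \hat{P}(c\mid\mathbf{x})\,
        \log\frac{\hat{P}(c\mid\mathbf{x})}{P_B(c\mid\mathbf{x})} .
```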
1176 KiB  
Article
Non-Linear Fusion of Observations Provided by Two Sensors
by Monir Azmani, Serge Reboul and Mohammed Benjelloun
Entropy 2013, 15(7), 2698-2715; https://doi.org/10.3390/e15072698 - 11 Jul 2013
Cited by 1 | Viewed by 4941
Abstract
When we try to make the best estimate of some quantity, the problem of combining results from different experiments is encountered. In multi-sensor data fusion, the problem is seen as combining observations provided by different sensors. Sensors provide observations and information on an unknown quantity, and these observations can differ in precision. We propose a combined estimate that uses prior information. We consider the simplest setting of the problem, in which two sensors provide observations of the same quantity. The standard errors of the observations are assumed to be known. The prior information is an interval that bounds the parameter to be estimated. We derive the proposed combined estimation methodology, and we show its efficiency in the minimum mean square sense. The proposed combined estimate is assessed using synthetic data, and an application is presented. Full article
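For orientation, the classical inverse-variance combination underlying such two-sensor fusion can be sketched as below; the interval prior is handled here by simple clipping, which is only a crude stand-in for the estimator derived in the paper, and all names and numbers are illustrative:

```python
import numpy as np

# Hedged sketch: inverse-variance-weighted fusion of two sensor observations
# with an interval prior enforced by clipping. This is the textbook baseline
# the article improves upon, not the authors' estimator.

def fuse_two_sensors(z1, sigma1, z2, sigma2, lower=None, upper=None):
    """Combine observations z1, z2 of the same quantity with known standard errors."""
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)    # minimum-variance combination
    variance = 1.0 / (w1 + w2)
    if lower is not None or upper is not None:    # prior: the parameter lies in [lower, upper]
        estimate = np.clip(estimate, lower, upper)
    return estimate, variance

# Example with synthetic data: true value 2.0, sensors of unequal precision.
rng = np.random.default_rng(0)
truth = 2.0
z1 = truth + rng.normal(scale=0.5)   # noisy sensor, sigma = 0.5
z2 = truth + rng.normal(scale=0.1)   # precise sensor, sigma = 0.1
est, var = fuse_two_sensors(z1, 0.5, z2, 0.1, lower=0.0, upper=3.0)
print(f"fused estimate = {est:.3f}, variance = {var:.3f}")
```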
848 KiB  
Article
Urban Dynamics, Fractals and Generalized Entropy
by Sara Encarnação, Marcos Gaudiano, Francisco C. Santos, José A. Tenedório and Jorge M. Pacheco
Entropy 2013, 15(7), 2679-2697; https://doi.org/10.3390/e15072679 - 11 Jul 2013
Cited by 16 | Viewed by 6995
Abstract
We explore the relation between the local fractal dimension and the development of the built-up area of the Northern Margin of the Metropolitan Area of Lisbon (NMAL), for the period between 1960 and 2004. To this end we make use of a Generalized Local Spatial Entropy (GLSE) function based on which urban areas can be classified into five different types. Our analysis of NMAL shows how some of the growth dynamics encountered can be linked to the plethora of social, economic and political changes that have taken place in NMAL (and Portugal), during the last 40 years, allowing for the establishment of urban planning measures to either inhibit or promote sprawl in urban areas. Full article
(This article belongs to the Special Issue Entropy and Urban Sprawl)
250 KiB  
Article
Exploring the Characteristics of Innovation Adoption in Social Networks: Structure, Homophily, and Strategy
by Yongli Li, Chong Wu, Peng Luo and Wei Zhang
Entropy 2013, 15(7), 2662-2678; https://doi.org/10.3390/e15072662 - 10 Jul 2013
Cited by 19 | Viewed by 7913
Abstract
Exploring the characteristics of innovation adoption in the context of social networks adds new insights beyond traditional innovation models. In this paper, we establish a new agent-based model to simulate the behavior of agents in terms of innovation adoption. Specifically, we examine the effects of network structure, homophily and strategy, among which homophily is a new topic in the field of innovation adoption. The experiments illustrate six important findings involving five aspects and their influence on innovation adoption. The five aspects are initial conditions, homophily, network topology, updating rules and strategy. This paper also compares the different cases within one aspect or across the several aspects listed above. Accordingly, some management advice and directions for future work are provided in the last part of this paper. Full article
(This article belongs to the Special Issue Social Networks and Information Diffusion)
3052 KiB  
Article
Simulation Study of Direct Causality Measures in Multivariate Time Series
by Angeliki Papana, Catherine Kyrtsou, Dimitris Kugiumtzis and Cees Diks
Entropy 2013, 15(7), 2635-2661; https://doi.org/10.3390/e15072635 - 4 Jul 2013
Cited by 77 | Viewed by 9640
Abstract
Measures of the direction and strength of the interdependence among time series from multivariate systems are evaluated based on their statistical significance and discrimination ability. The best-known measures estimating direct causal effects, both linear and nonlinear, are considered, i.e., conditional Granger causality index (CGCI), partial Granger causality index (PGCI), partial directed coherence (PDC), partial transfer entropy (PTE), partial symbolic transfer entropy (PSTE) and partial mutual information on mixed embedding (PMIME). The performance of the multivariate coupling measures is assessed on stochastic and chaotic simulated uncoupled and coupled dynamical systems for different settings of embedding dimension and time series length. The CGCI, PGCI and PDC seem to outperform the other causality measures in the case of the linearly coupled systems, while the PGCI is the most effective one when latent and exogenous variables are present. The PMIME outweighs all others in the case of nonlinear simulation systems. Full article
(This article belongs to the Special Issue Transfer Entropy)
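To make the simplest of these measures concrete, here is a hedged sketch of the conditional Granger causality index (CGCI) computed from residual variances of restricted and unrestricted autoregressions via ordinary least squares; data, model order and helper names are illustrative, and the refinements evaluated in the study (order selection, significance testing, the other measures) are omitted:

```python
import numpy as np

# Hedged sketch: CGCI(X -> Y | Z) as the log ratio of residual variances of a
# restricted model (past of Y and Z only) versus an unrestricted model
# (past of Y, Z and X). Illustrative only, on synthetic data.

def lagged_design(series_list, order):
    """Stack 'order' lags of each series into a regression matrix."""
    n = len(series_list[0])
    cols = [s[order - k: n - k] for s in series_list for k in range(1, order + 1)]
    return np.column_stack(cols)

def residual_variance(target, regressors):
    X = np.column_stack([np.ones(len(regressors)), regressors])   # add intercept
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return (target - X @ beta).var()

def cgci(x, y, z, order=2):
    """CGCI(X -> Y | Z): positive values suggest X helps predict Y beyond Y and Z."""
    target = y[order:]
    restricted = lagged_design([y, z], order)
    unrestricted = lagged_design([y, z, x], order)
    return np.log(residual_variance(target, restricted) /
                  residual_variance(target, unrestricted))

# Toy example: X drives Y with a one-step delay; Z is independent noise.
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
z = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.3 * rng.normal()
print(f"CGCI(X->Y|Z) = {cgci(x, y, z):.3f}")   # clearly positive
print(f"CGCI(Z->Y|X) = {cgci(z, y, x):.3f}")   # close to zero
```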
1228 KiB  
Article
Simple Urban Simulation Atop Complicated Models: Multi-Scale Equation-Free Computing of Sprawl Using Geographic Automata
by Paul M. Torrens, Yannis Kevrekidis, Roger Ghanem and Yu Zou
Entropy 2013, 15(7), 2606-2634; https://doi.org/10.3390/e15072606 - 2 Jul 2013
Cited by 11 | Viewed by 7604
Abstract
Reconciling competing desires to build urban models that can be simple and complicated is something of a grand challenge for urban simulation. It also prompts difficulties in many urban policy situations, such as urban sprawl, where simple, actionable ideas may need to be considered in the context of the messily complex and complicated urban processes and phenomena that work within cities. In this paper, we present a novel architecture for achieving both simple and complicated realizations of urban sprawl in simulation. Fine-scale simulations of sprawl geography are run using geographic automata to represent the geographical drivers of sprawl in intricate detail and over fine resolutions of space and time. We use Equation-Free computing to deploy population as a coarse observable of sprawl, which can be leveraged to run automata-based models as short-burst experiments within a meta-simulation framework. Full article
(This article belongs to the Special Issue Entropy and Urban Sprawl)
563 KiB  
Article
Maximum Entropy Distributions Describing Critical Currents in Superconductors
by Nicholas J. Long
Entropy 2013, 15(7), 2585-2605; https://doi.org/10.3390/e15072585 - 2 Jul 2013
Cited by 16 | Viewed by 5486
Abstract
Maximum entropy inference can be used to find equations for the critical currents (Jc) in a type II superconductor as a function of temperature, applied magnetic field, and angle of the applied field, θ or φ. This approach provides an understanding of how the macroscopic critical currents arise from averaging over different sources of vortex pinning. The dependence of critical currents on temperature and magnetic field can be derived with logarithmic constraints and accords with expressions which have been widely used with empirical justification since the first development of technical superconductors. In this paper we provide a physical interpretation of the constraints leading to the distributions for Jc(T) and Jc(B), and discuss the implications for experimental data analysis. We expand the maximum entropy analysis of angular Jc data to encompass samples which have correlated defects at arbitrary angles to the crystal axes, giving both symmetric and asymmetric peaks, and samples which show vortex channeling behavior. The distributions for angular data are derived using combinations of first, second or fourth order constraints on cot θ or cot φ. We discuss why these distributions apply whether or not correlated defects are aligned with the crystal axes and thereby provide a unified description of critical currents in superconductors. For J//B we discuss what the maximum entropy equations imply about the vortex geometry. Full article
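As background (a sketch of the generic maximum-entropy machinery, not the paper's derivation for Jc), expectation constraints on functions f_i yield an exponential-family density, and a logarithmic constraint in particular gives a power-law factor on a suitable support:

```latex
% Generic maximum-entropy form under constraints \langle f_i(x)\rangle = F_i;
% a standard result sketched for orientation, not the paper's J_c distributions.
p(x) \;=\; \frac{1}{Z(\lambda)}\,
           \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big),
\qquad
f(x) = \ln x \;\;\Longrightarrow\;\; p(x) \;\propto\; x^{-\lambda}.
```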
269 KiB  
Article
Fact-Checking Ziegler’s Maximum Entropy Production Principle beyond the Linear Regime and towards Steady States
by Matteo Polettini
Entropy 2013, 15(7), 2570-2584; https://doi.org/10.3390/e15072570 - 28 Jun 2013
Cited by 29 | Viewed by 6609
Abstract
We challenge claims that the principle of maximum entropy production produces physical phenomenological relations between conjugate currents and forces, even beyond the linear regime, and that currents in networks arrange themselves to maximize entropy production as the system approaches the steady state. In particular: (1) we show that Ziegler’s principle of thermodynamic orthogonality leads to stringent reciprocal relations for higher order response coefficients, and in the framework of stochastic thermodynamics, we exhibit a simple explicit model that does not satisfy them; (2) on a network, enforcing Kirchhoff’s current law, we show that maximization of the entropy production prescribes reciprocal relations between coarse-grained observables, but is not responsible for the onset of the steady state, which is, rather, due to the minimum entropy production principle. Full article
(This article belongs to the Special Issue Maximum Entropy Production)
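For context, Ziegler's orthogonality condition in one common textbook formulation (a sketch under that assumption, not necessarily the paper's notation) states that the forces are proportional to the gradient of the entropy production in current space, with the multiplier fixed by the bilinear form of σ:

```latex
% Ziegler's orthogonality principle in a common formulation: forces X_i are
% normal to the iso-surfaces of the entropy production \sigma(J), with the
% proportionality fixed by \sigma = \sum_i X_i J_i. A sketch for orientation.
X_i \;=\; \mu\,\frac{\partial \sigma(J)}{\partial J_i},
\qquad
\mu \;=\; \frac{\sigma(J)}{\sum_k J_k\,\dfrac{\partial \sigma(J)}{\partial J_k}}.
```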
773 KiB  
Article
Relating Deformation and Thermodynamics: An Opportunity for Rethinking Basic Concepts of Continuum Mechanics
by Giuseppe Guzzetta
Entropy 2013, 15(7), 2548-2569; https://doi.org/10.3390/e15072548 - 26 Jun 2013
Cited by 3 | Viewed by 7225
Abstract
In order to treat deformation as one of the processes taking place in an irreversible thermodynamic transformation, two main conditions must be satisfied: (1) strain and stress should be defined in such a way that the modification of the symmetry of these tensorial quantities reflects that of the structure of the actual material of which the deforming ideal continuum is the counterpart; and (2) the unique decomposition of the above tensors into the algebraic sum of an isotropic and an anisotropic part with different physical meanings should be recognized. The first condition allows the distinction of the energy balance in irrotational and rotational deformations; the second allows the description of a thermodynamic transformation involving deformation as a function of both process quantities, whose values depend on the specific transition, or path, between two equilibrium states, and of state quantities, which describe equilibrium states of a system quantitatively. One of the main conclusions that can be drawn is that, dealing with deformable materials, the quantities that must appear in thermodynamic equations cannot be tensorial quantities, such as the stress tensor and the infinitesimal or finite strain tensor usually considered in continuum mechanics (or, even worse, their components). The appropriate quantities should be invariants involved by the strain and stress tensors here defined. Another important conclusion is that, from a thermodynamic point of view, the consideration of the measurable volume change occurring in an isothermal deformation does not itself give any meaningful information. Full article
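The "unique decomposition into an isotropic and an anisotropic part" referred to above is, in standard continuum-mechanics notation (the familiar baseline, not the author's redefined tensors), the spherical/deviatoric split of stress and small strain:

```latex
% Standard isotropic/deviatoric split of the stress and small-strain tensors;
% the article argues for redefined tensors, so this is only the usual baseline.
\boldsymbol{\sigma} \;=\; \tfrac{1}{3}\,(\operatorname{tr}\boldsymbol{\sigma})\,\mathbf{I} \;+\; \mathbf{s},
\qquad
\boldsymbol{\varepsilon} \;=\; \tfrac{1}{3}\,(\operatorname{tr}\boldsymbol{\varepsilon})\,\mathbf{I} \;+\; \mathbf{e},
\qquad
\operatorname{tr}\mathbf{s} = \operatorname{tr}\mathbf{e} = 0.
```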
526 KiB  
Article
A Decentralized Heuristic Approach towards Resource Allocation in Femtocell Networks
by Adnan Shahid, Saleem Aslam and Kyung-Geun Lee
Entropy 2013, 15(7), 2524-2547; https://doi.org/10.3390/e15072524 - 25 Jun 2013
Cited by 16 | Viewed by 6108
Abstract
Femtocells represent a novel configuration for existing cellular communication, contributing towards the improvement of coverage and throughput. The dense deployment of these femtocells causes significant femto-macro and femto-femto interference, consequently deteriorating the throughput of femtocells. In this study, we compare two heuristic approaches, i.e., particle swarm optimization (PSO) and the genetic algorithm (GA), for joint power assignment and resource allocation within the context of the femtocell environment. The assumption made in this joint optimization is that discrete power levels are available for the assignment. Furthermore, we have employed two variants each of PSO and GA: the inertia weight and constriction factor models for PSO, and two-point and uniform crossover for GA. The proposed algorithms operate in a decentralized manner, with no involvement of any centralized entity. The comparison between the two proposed algorithms for the aforementioned joint optimization problem covers the following performance metrics: average objective function, min–max throughput of the femtocells, average throughput of the femto users, outage rate and time complexity. The results demonstrate that the decentralized PSO with constriction factor outperforms the others in terms of these performance metrics. Full article
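For readers unfamiliar with the PSO variants being compared, a minimal sketch of the constriction-factor velocity update (in the standard Clerc–Kennedy form) on a toy continuous problem; the paper's discrete power levels, resource blocks and femtocell objective are not reproduced, and all parameter values below are illustrative:

```python
import numpy as np

# Hedged sketch of the PSO constriction-factor update on a toy problem.
# The paper applies PSO to discrete power assignment and resource allocation
# in femtocells, which this illustration does not attempt to reproduce.

c1 = c2 = 2.05
phi = c1 + c2                                               # must exceed 4
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))    # constriction factor ~ 0.7298

def objective(x):
    return np.sum(x**2, axis=1)                             # toy objective: sphere function

rng = np.random.default_rng(2)
n_particles, dim, iterations = 20, 5, 200
x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iterations):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    x = x + v
    val = objective(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(f"best objective after {iterations} iterations: {pbest_val.min():.2e}")
```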
568 KiB  
Article
Improved Minimum Entropy Filtering for Continuous Nonlinear Non-Gaussian Systems Using a Generalized Density Evolution Equation
by Mifeng Ren, Jianhua Zhang, Fang Fang, Guolian Hou and Jinliang Xu
Entropy 2013, 15(7), 2510-2523; https://doi.org/10.3390/e15072510 - 25 Jun 2013
Cited by 8 | Viewed by 5241
Abstract
This paper investigates the filtering problem for multivariate continuous nonlinear non-Gaussian systems based on an improved minimum error entropy (MEE) criterion. The system is described by a set of nonlinear continuous equations with non-Gaussian system noises and measurement noises. The recently developed generalized density evolution equation is utilized to formulate the joint probability density function (PDF) of the estimation errors. Combining the entropy of the estimation error with the mean squared error, a novel performance index is constructed to ensure that the estimation error not only has small uncertainty but also approaches zero. Using the conjugate gradient method, the optimal filter gain matrix is then obtained by minimizing the improved minimum error entropy criterion. In addition, a condition is proposed to guarantee that the estimation error dynamics is exponentially bounded in the mean square sense. Finally, comparative simulation results are presented to show that the proposed MEE filter is superior to the nonlinear unscented Kalman filter (UKF). Full article
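The "entropy of the estimation error" entering such criteria is commonly approximated by a Parzen-window estimate of Rényi's quadratic entropy; the sketch below shows that generic estimator on synthetic errors and is an assumption about the standard MEE toolbox rather than the paper's exact performance index:

```python
import numpy as np

# Hedged sketch: Parzen-window estimate of Renyi's quadratic entropy of scalar
# estimation errors, H2 = -log( mean Gaussian kernel over all error pairs ).
# This is a common building block of minimum-error-entropy criteria; the
# paper's improved index additionally mixes in the mean squared error.

def renyi_quadratic_entropy(errors, bandwidth=0.5):
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]                      # all pairwise differences
    # Gaussian kernel of variance 2*bandwidth^2 (convolution of two kernels)
    kernel = np.exp(-diff**2 / (4.0 * bandwidth**2)) / np.sqrt(4.0 * np.pi * bandwidth**2)
    return -np.log(kernel.mean())

rng = np.random.default_rng(3)
tight_errors = rng.normal(scale=0.2, size=500)   # concentrated errors -> lower entropy
loose_errors = rng.normal(scale=1.5, size=500)   # dispersed errors   -> higher entropy
print(renyi_quadratic_entropy(tight_errors))
print(renyi_quadratic_entropy(loose_errors))
```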
1462 KiB  
Article
Spatially-Explicit Bayesian Information Entropy Metrics for Calibrating Landscape Transformation Models
by Kostas Alexandridis and Bryan C. Pijanowski
Entropy 2013, 15(7), 2480-2509; https://doi.org/10.3390/e15072480 - 25 Jun 2013
Cited by 5 | Viewed by 6948
Abstract
Assessing spatial model performance often presents challenges related to the choice and suitability of traditional statistical methods in capturing the true validity and dynamics of the predicted outcomes. The stochastic nature of many of our contemporary spatial models of land use change necessitates the testing and development of new and innovative methodologies in statistical spatial assessment. In many cases, spatial model performance depends critically on the spatially-explicit prior distributions, characteristics, availability and prevalence of the variables and factors under study. This study explores the spatial characteristics of statistical model assessment when modeling land use change dynamics in a seven-county study area in South-Eastern Wisconsin during the historical period of 1963–1990. Predictions from the artificial neural network-based Land Transformation Model (LTM) are used to compare simulated with historical land use transformations in urban/suburban landscapes. We introduce a range of Bayesian information entropy statistical spatial metrics for assessing the model performance across multiple simulation testing runs. Bayesian entropic estimates of model performance are compared against information-theoretic stochastic entropy estimates and theoretically-derived accuracy assessments. We argue for the critical role of informational uncertainty across different scales of spatial resolution in informing spatial landscape model assessment. Our analysis reveals how the incorporation of spatial and landscape information asymmetry estimates can improve our stochastic assessments of spatial model predictions. Finally, our study shows how spatially-explicit entropic classification accuracy estimates can work closely with dynamic modeling methodologies in improving our scientific understanding of landscape change as a complex adaptive system and process. Full article
(This article belongs to the Special Issue Entropy and Urban Sprawl)
275 KiB  
Article
The Entropy of Co-Compact Open Covers
by Zheng Wei, Yangeng Wang, Guo Wei, Tonghui Wang and Steven Bourquin
Entropy 2013, 15(7), 2464-2479; https://doi.org/10.3390/e15072464 - 24 Jun 2013
Cited by 4 | Viewed by 5482
Abstract
Co-compact entropy is introduced as an invariant of topological conjugation for perfect mappings defined on any Hausdorff space (compactness and metrizability are not necessarily required). This is achieved through the consideration of co-compact covers of the space. The advantages of co-compact entropy include: (1) it does not require the space to be compact and, thus, generalizes Adler, Konheim and McAndrew's topological entropy of continuous mappings on compact dynamical systems; and (2) it is an invariant of topological conjugation, compared to Bowen's entropy, which is metric-dependent. Other properties of co-compact entropy are investigated, e.g., the co-compact entropy of a subsystem does not exceed that of the whole system. For the linear system (R, f), defined by f(x) = 2x, the co-compact entropy is zero, while Bowen's entropy for this system is at least log 2. More generally, it is found that co-compact entropy is a lower bound of Bowen's entropies, and the proof of this result also generalizes the Lebesgue Covering Theorem to co-compact open covers of non-compact metric spaces. Full article
(This article belongs to the Special Issue Dynamical Systems)
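For reference, the Adler–Konheim–McAndrew open-cover entropy that co-compact entropy generalizes is defined (in its standard form, sketched here for orientation, not the co-compact variant itself) as:

```latex
% Adler-Konheim-McAndrew topological entropy via open covers \mathcal{U} of a
% compact space; N(.) is the minimal cardinality of a finite subcover and
% \vee denotes the common refinement of covers.
h(f) \;=\; \sup_{\mathcal{U}} \;\lim_{n\to\infty}\,
           \frac{1}{n}\,\log N\!\Big(\bigvee_{i=0}^{n-1} f^{-i}\mathcal{U}\Big).
```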