Entropy, Volume 20, Issue 11 (November 2018) – 84 articles

Cover Story: From physics to the social sciences, information is now seen as a fundamental component of reality. However, a form of information seems underestimated, perhaps precisely because it is so pervasive that we take it for granted: the information encoded in the very environment we live in. Bridging information theory, cognitive science, urban studies and social systems theory, we introduce a three-layered model of information in cities, namely, environmental information in physical space and semantic space, and the information enacted by agents. We propose forms of estimating entropy in those layers, and apply these measures to emblematic urban cases and simulated scenarios. Our results suggest that environmental information affects coordination in interaction systems, and that the interplay of minds, cities and societies could only happen through information. Information is the bridge.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF form, click on its "PDF Full-text" link and open it with the free Adobe Reader.
11 pages, 4513 KiB  
Article
Mechanical Properties and Microstructure of a NiCrFeCoMn High-Entropy Alloy Deformed at High Strain Rates
by Bingfeng Wang, Xianrui Yao, Chu Wang, Xiaoyong Zhang and Xiaoxia Huang
Entropy 2018, 20(11), 892; https://doi.org/10.3390/e20110892 - 21 Nov 2018
Cited by 26 | Viewed by 5124
Abstract
The equiatomic NiCrFeCoMn high-entropy alloy prepared by arc melting has a single-phase crystal structure. The mechanical properties and microstructure of the NiCrFeCoMn high-entropy alloy deformed at high strain rates (900 s−1 to 4600 s−1) were investigated. The yield strength of the alloy is sensitive to strain rate, and serration behavior was observed on the flow stress curves at strain rates ranging from 900 s−1 to 4600 s−1. The Zerilli–Armstrong constitutive equation can be used to predict the flow stress curves of the NiCrFeCoMn high-entropy alloy. Large numbers of deformation bands account for the pronounced serration behavior of the alloy under dynamic loading. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
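As background for the constitutive modelling mentioned above: a common form of the Zerilli–Armstrong model for FCC metals (quoted here for orientation; the paper's exact parametrization may differ) expresses the flow stress as

```latex
\sigma = \sigma_a + C_2\,\varepsilon^{1/2}\exp\!\left(-C_3 T + C_4 T \ln\dot{\varepsilon}\right),
```

where \sigma_a is an athermal stress component, \varepsilon the plastic strain, \dot{\varepsilon} the strain rate, T the absolute temperature, and C_2, C_3, C_4 material constants fitted to data.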

15 pages, 1866 KiB  
Article
Improving Entropy Estimates of Complex Network Topology for the Characterization of Coupling in Dynamical Systems
by Teddy Craciunescu, Andrea Murari and Michela Gelfusa
Entropy 2018, 20(11), 891; https://doi.org/10.3390/e20110891 - 20 Nov 2018
Cited by 8 | Viewed by 3935
Abstract
A new measure for characterizing the coupling of interconnected dynamical systems is proposed. The method is based on the representation of time series as weighted cross-visibility networks, with the weights introduced as the metric distance between connected nodes. The structure of the networks, which depends on the coupling strength, is quantified via the entropy of the weighted adjacency matrix. The method has been tested on several coupled model systems with different individual properties. The results show that the proposed measure is able to distinguish the degree of coupling of the studied dynamical systems. The original use of the geodesic distance on Gaussian manifolds as the metric distance, which takes into account the noise inherently superimposed on experimental data, provides significantly better results in the calculation of the entropy and improves the reliability of the coupling estimates. Applications to the interaction between the El Niño Southern Oscillation (ENSO) and the Indian Ocean Dipole, and to the influence of ENSO on influenza pandemic occurrence, illustrate the potential of the method for real-life problems. Full article
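As a rough illustration of the network construction, the sketch below builds a natural visibility graph for a single series, weights each edge by the Euclidean distance between the connected samples, and computes the Shannon entropy of the normalized weight distribution. It is a minimal single-series stand-in: the paper works with cross-visibility networks between two series and, crucially, with geodesic distances on Gaussian manifolds rather than plain Euclidean ones.

```python
import numpy as np

def visibility_edges(y):
    """Brute-force natural visibility graph of a time series."""
    n = len(y)
    edges = []
    for a in range(n):
        for b in range(a + 1, n):
            # (a, b) are mutually visible if every intermediate sample
            # lies strictly below the straight line joining them
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

def weighted_network_entropy(y):
    """Shannon entropy of the normalized edge-weight distribution,
    with weights taken as a metric distance between connected nodes."""
    edges = visibility_edges(y)
    w = np.array([np.hypot(b - a, y[b] - y[a]) for a, b in edges])
    p = w / w.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
print(weighted_network_entropy(rng.standard_normal(200)))
```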

19 pages, 4884 KiB  
Review
Liquid Phase Separation in High-Entropy Alloys—A Review
by Nicholas Derimow and Reza Abbaschian
Entropy 2018, 20(11), 890; https://doi.org/10.3390/e20110890 - 20 Nov 2018
Cited by 40 | Viewed by 8667
Abstract
It has been 14 years since the discovery of high-entropy alloys (HEAs), an alloying concept which has reinvigorated materials scientists to explore unconventional alloy compositions and multicomponent alloy systems. Many authors have referred to these alloys as multi-principal element alloys (MPEAs) or complex concentrated alloys (CCAs) in order to place fewer restrictions on what constitutes an HEA. Regardless of classification, the research is rooted in the exploration of structure–property–processing relations in these multicomponent alloys, with the aim of surpassing the physical properties of conventional materials. More recent studies show that some of these alloys undergo liquid phase separation, a phenomenon largely dictated by a low entropy of mixing and a positive mixing enthalpy. Studies posit that the positive mixing enthalpies of the binary and ternary components contribute substantially to the formation of liquid miscibility gaps. The objective of this review is to bring forth and summarize the findings of experiments which detail liquid phase separation (LPS) in HEAs, MPEAs, and CCAs, and to draw parallels between HEAs and the conventional alloy systems which undergo liquid–liquid separation. A positive mixing enthalpy, if not compensated by the entropy of mixing, will lead to liquid phase separation. It appears that Co, Ni, and Ti promote miscibility in HEAs/CCAs/MPEAs, while Cr, V, and Nb raise the miscibility gap temperature and promote LPS. Moreover, the addition of appropriate amounts of Ni to CoCrCu eliminates immiscibility, as in the cases of dendritically solidifying CoCrCuNi, CoCrCuFeNi, and CoCrCuMnNi. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)
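The entropy-enthalpy competition invoked above is the standard free-energy balance; in the ideal-solution form commonly used in HEA work,

```latex
\Delta G_{\mathrm{mix}} = \Delta H_{\mathrm{mix}} - T\,\Delta S_{\mathrm{mix}},
\qquad
\Delta S_{\mathrm{mix}}^{\mathrm{ideal}} = -R \sum_{i=1}^{n} c_i \ln c_i ,
```

so a sufficiently positive \Delta H_{\mathrm{mix}} that is not offset by the configurational entropy term -T\,\Delta S_{\mathrm{mix}} favors demixing of the liquid.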

6 pages, 3463 KiB  
Article
Small-Scale Plastic Deformation of Nanocrystalline High Entropy Alloy
by Sanghita Mridha, Mageshwari Komarasamy, Sanjit Bhowmick, Rajiv S. Mishra and Sundeep Mukherjee
Entropy 2018, 20(11), 889; https://doi.org/10.3390/e20110889 - 20 Nov 2018
Cited by 8 | Viewed by 4320
Abstract
High entropy alloys (HEAs) have attracted widespread interest due to their unique properties at many different length-scales. Here, we report the fabrication of a nanocrystalline (NC) Al0.1CoCrFeNi high entropy alloy and its small-scale plastic deformation behavior, probed via nano-pillar compression tests. Exceptional strength was realized for the NC HEA compared to pure Ni of similar grain size. Grain-boundary-mediated deformation mechanisms led to a high strain rate sensitivity of the flow stress in the nanocrystalline HEA. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

15 pages, 33920 KiB  
Article
Magnetocaloric Effect in an Antidot: The Effect of the Aharonov-Bohm Flux and Antidot Radius
by Oscar A. Negrete, Francisco J. Peña and Patricio Vargas
Entropy 2018, 20(11), 888; https://doi.org/10.3390/e20110888 - 19 Nov 2018
Cited by 18 | Viewed by 3887
Abstract
In this work, we report the magnetocaloric effect (MCE) for an electron interacting with an antidot, under the effect of an Aharonov–Bohm flux (AB-flux) and subjected to a parabolic confinement potential. We use the Bogachek and Landman model, which additionally allows the study of quantum dots with Fock–Darwin energy levels in the limit of vanishing antidot radius and AB-flux. We find that the AB-flux strongly controls the oscillatory behaviour of the MCE, thus acting as a control parameter that switches the effect between cooling and heating. We propose a way to detect the AB-flux by measuring temperature differences. Full article
(This article belongs to the Section Quantum Information)
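For orientation, in the limit of vanishing antidot radius and AB-flux mentioned above, the spectrum reduces to the standard Fock–Darwin levels (the full Bogachek–Landman spectrum is given in the paper):

```latex
E_{n,m} = \hbar\Omega\,\bigl(2n + |m| + 1\bigr) - \tfrac{1}{2}\hbar\omega_c\, m,
\qquad
\Omega = \sqrt{\omega_0^2 + \omega_c^2/4},
```

with n the radial quantum number, m the angular momentum quantum number, \omega_0 the parabolic confinement frequency, and \omega_c the cyclotron frequency.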

15 pages, 2461 KiB  
Article
A Technology-Based Classification of Firms: Can We Learn Something Looking Beyond Industry Classifications?
by Petros Gkotsis, Emanuele Pugliese and Antonio Vezzani
Entropy 2018, 20(11), 887; https://doi.org/10.3390/e20110887 - 18 Nov 2018
Cited by 8 | Viewed by 5420
Abstract
In this work we use clustering techniques to identify groups of firms competing in similar technological markets. Our clustering properly highlights technological similarities, grouping together firms normally classified in different industrial sectors. Technological development leads to a continuously changing structure of industries and firms. For this reason, we propose a data-driven approach to classifying firms that allows the classification to adapt quickly to the changing technological landscape. In this respect we differ from previous taxonomic exercises on industries and innovation, which are based on more general common features. In our empirical application, we use patent data as a proxy for firms' capabilities to develop new solutions in different technological fields. On this basis, we extract what we call a Technologically Driven Classification (TDC). To validate the result of our exercise, we use information theory to look at the amount of information explained by our clustering and the amount of information shared with an industrial classification. All in all, our approach provides a good grouping of firms on the basis of their technological capabilities and represents an attractive option for comparing firms in the technological space and better characterising competition in technological markets. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
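The information-theoretic validation step can be illustrated with a short sketch: cluster firms by their technology shares and measure the information shared with an industry classification via normalized mutual information. All data below are synthetic stand-ins; the paper's actual pipeline, features and cluster counts differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(1)
# Hypothetical input: rows = firms, columns = patent counts per technology field
X = rng.poisson(2.0, size=(500, 35)).astype(float)
X /= X.sum(axis=1, keepdims=True)          # technology shares per firm
industry = rng.integers(0, 8, size=500)    # stand-in industry codes

tdc = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
# Information shared between the technology-driven clustering and the
# industrial classification, normalized to [0, 1]
print(normalized_mutual_info_score(industry, tdc))
```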

13 pages, 615 KiB  
Article
Advanced Statistical Testing of Quantum Random Number Generators
by Aldo C. Martínez, Aldo Solis, Rafael Díaz Hernández Rojas, Alfred B. U'Ren, Jorge G. Hirsch and Isaac Pérez Castillo
Entropy 2018, 20(11), 886; https://doi.org/10.3390/e20110886 - 17 Nov 2018
Cited by 14 | Viewed by 4637
Abstract
Pseudo-random number generators are widely used in many branches of science, mainly in applications related to Monte Carlo methods, although they are deterministic in design and, therefore, unsuitable for tackling fundamental problems in security and cryptography. The natural laws of the microscopic realm provide a fairly simple method to generate non-deterministic sequences of random numbers, based on measurements of quantum states. In practice, however, the experimental devices on which quantum random number generators are based are often unable to pass some tests of randomness. In this review, we briefly discuss two such tests, point out the challenges that we have encountered in experimental implementations and finally present a fairly simple method that successfully generates non-deterministic maximally random sequences. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness)
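One of the simplest standard randomness tests of the kind discussed here is the NIST SP 800-22 frequency (monobit) test; a minimal sketch (not the specific tests examined in the paper):

```python
import math

def monobit_test(bits):
    """Return the p-value of the NIST frequency (monobit) test: small
    p-values indicate an implausible imbalance of ones and zeros."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

bits = [1, 0, 1, 1, 0, 1, 0, 0] * 125      # toy 1000-bit sequence
print(monobit_test(bits))                  # p >= 0.01 passes at the usual level
```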

21 pages, 1683 KiB  
Article
Blind Image Quality Assessment of Natural Scenes Based on Entropy Differences in the DCT Domain
by Xiaohan Yang, Fan Li, Wei Zhang and Lijun He
Entropy 2018, 20(11), 885; https://doi.org/10.3390/e20110885 - 17 Nov 2018
Cited by 18 | Viewed by 4475
Abstract
Blind/no-reference image quality assessment is performed to accurately evaluate the perceptual quality of a distorted image without prior information from a reference image. In this paper, an effective blind image quality assessment approach for natural images, based on entropy differences in the discrete cosine transform (DCT) domain, is proposed. Information entropy is an effective measure of the amount of information in an image. We find that the DCT coefficient distribution of distorted natural images exhibits a pulse-shape phenomenon, which directly affects the entropy differences. A Weibull model is then used to fit the distributions of natural and distorted images, because it sufficiently approximates the pulse-shape phenomenon as well as the sharp-peak and heavy-tail phenomena of natural scene statistics. Four features related to entropy differences and the human visual system are extracted from the Weibull model at three image scales. Image quality is assessed by support vector regression based on the extracted features. This blind Weibull statistics algorithm is thoroughly evaluated on three widely used databases: LIVE, TID2008, and CSIQ. The experimental results show that the performance of the proposed method is highly consistent with human visual perception and superior to that of state-of-the-art blind and full-reference image quality assessment methods in most cases. Full article
(This article belongs to the Special Issue Entropy in Image Analysis)
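A toy version of the Weibull-fitting step reads as follows; it fits a Weibull distribution to DCT coefficient magnitudes and is only a stand-in for the paper's entropy-difference features across three scales.

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import weibull_min

def weibull_dct_fit(img):
    """Fit a Weibull distribution to the magnitudes of the 2-D DCT
    coefficients of a grayscale image; returns (shape, scale)."""
    coeffs = dctn(img, norm='ortho')
    mags = np.abs(coeffs).ravel()
    mags = mags[mags > 1e-8]               # drop near-zero coefficients
    shape, _, scale = weibull_min.fit(mags, floc=0)
    return shape, scale

img = np.random.default_rng(2).random((64, 64))
print(weibull_dct_fit(img))
```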

24 pages, 2169 KiB  
Article
Hybrid Integration Approach of Entropy with Logistic Regression and Support Vector Machine for Landslide Susceptibility Modeling
by Tingyu Zhang, Ling Han, Wei Chen and Himan Shahabi
Entropy 2018, 20(11), 884; https://doi.org/10.3390/e20110884 - 17 Nov 2018
Cited by 68 | Viewed by 4397
Abstract
The main purpose of the present study is to apply three classification models, namely the index of entropy (IOE) model, the logistic regression (LR) model, and the support vector machine (SVM) model with a radial basis function (RBF) kernel, to produce landslide susceptibility maps for Fugu County of Shaanxi Province, China. Firstly, landslide locations were extracted from field investigation and aerial photographs, and a total of 194 landslide polygons were transformed into points to produce a landslide inventory map. Secondly, the landslide points were randomly split into two groups (70/30) for training and validation purposes, respectively. Then, 10 landslide explanatory variables, namely slope aspect, slope angle, altitude, lithology, mean annual precipitation, distance to roads, distance to rivers, distance to faults, land use, and normalized difference vegetation index (NDVI), were selected, and potential multicollinearity problems between these factors were detected using the Pearson correlation coefficient (PCC), the variance inflation factor (VIF), and tolerance (TOL). Subsequently, landslide susceptibility maps for the study region were obtained using the IOE model, the LR–IOE model, and the SVM–IOE model. Finally, the performance of the three models was verified and compared using the receiver operating characteristic (ROC) curve. The success rate results showed that the LR–IOE model has the highest accuracy (90.11%), followed by the IOE model (87.43%) and the SVM–IOE model (86.53%). Similarly, the AUC values showed the same ranking for prediction accuracy, with the LR–IOE model highest (81.84%), followed by the IOE model (76.86%) and the SVM–IOE model (76.61%). Thus, the landslide susceptibility map (LSM) for the study region can provide an effective reference for the Fugu County government to properly address land planning and mitigate landslide risk. Full article
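The generic shape of the train/validate workflow (70/30 split, logistic regression, ROC-AUC validation) can be sketched as below; the data are synthetic and the IOE weighting that the paper hybridizes into LR and SVM is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.random((1000, 10))                 # stand-in for 10 conditioning factors
y = (X[:, 0] + 0.5 * X[:, 1]               # stand-in landslide / non-landslide labels
     + 0.3 * rng.standard_normal(1000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```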

18 pages, 4065 KiB  
Article
The Role of Complex Analysis in Modelling Economic Growth
by Angelica Sbardella, Emanuele Pugliese, Andrea Zaccaria and Pasquale Scaramozzino
Entropy 2018, 20(11), 883; https://doi.org/10.3390/e20110883 - 16 Nov 2018
Cited by 23 | Viewed by 7143
Abstract
Development and growth are complex and tumultuous processes. Modern economic growth theories identify some key determinants of economic growth. However, the relative importance of the determinants remains unknown, and additional variables may help clarify the directions and dimensions of the interactions. The novel stream of literature on economic complexity goes beyond aggregate measures of productive inputs and considers instead a more granular and structural view of the productive possibilities of countries, i.e., their capabilities. Different endowments of capabilities are crucial ingredients in explaining differences in economic performances. In this paper we employ economic fitness, a measure of productive capabilities obtained through complex network techniques. Focusing on the combined roles of fitness and some more traditional drivers of growth—GDP per capita, capital intensity, employment ratio, life expectancy, human capital and total factor productivity—we build a bridge between economic growth theories and the economic complexity literature. Our findings show that fitness plays a crucial role in fostering economic growth and, when it is included in the analysis, can be either complementary to traditional drivers of growth or can completely overshadow them. Notably, for the most complex countries, which have the most diversified export baskets and the largest endowments of capabilities, fitness is complementary to the chosen growth determinants in enhancing economic growth. The empirical findings are in agreement with neoclassical and endogenous growth theories. By contrast, for countries with intermediate and low capability levels, fitness emerges as the key growth driver. This suggests that economic models should account for capabilities; in fact, describing the technological possibilities of countries solely in terms of their production functions may lead to a misinterpretation of the roles of factors. Full article
(This article belongs to the Special Issue Economic Fitness and Complexity)
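The economic fitness measure used here comes from the standard fitness-complexity iteration on the binary country-product export matrix; a compact sketch (toy data, fixed iteration count rather than a convergence test):

```python
import numpy as np

def fitness_complexity(M, iters=100):
    """Fitness-complexity iteration on a binary country-product matrix M
    (rows = countries, cols = products); assumes no all-zero row/column."""
    F = np.ones(M.shape[0])                # country fitness
    Q = np.ones(M.shape[1])                # product complexity
    for _ in range(iters):
        F_new = M @ Q                      # sum of complexities of exported products
        Q_new = 1.0 / (M.T @ (1.0 / F))    # complexity penalized by low-fitness exporters
        F = F_new / F_new.mean()           # normalize at every step
        Q = Q_new / Q_new.mean()
    return F, Q

rng = np.random.default_rng(4)
M = (rng.random((80, 200)) < 0.15).astype(float)
F, Q = fitness_complexity(M)
print(F.max(), F.min())
```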

12 pages, 844 KiB  
Article
Study in Natural Time of Geoelectric Field and Seismicity Changes Preceding the Mw6.8 Earthquake on 25 October 2018 in Greece
by Nicholas V. Sarlis and Efthimios S. Skordas
Entropy 2018, 20(11), 882; https://doi.org/10.3390/e20110882 - 16 Nov 2018
Cited by 14 | Viewed by 4273
Abstract
A strong earthquake of magnitude Mw 6.8 struck Western Greece on 25 October 2018, with an epicenter at 37.515° N, 20.564° E. It was preceded by an anomalous geoelectric signal recorded on 2 October 2018 at a measuring station 70 km away from the epicenter. Upon analyzing this signal in natural time, we find that it conforms to the conditions suggested for its identification as precursory Seismic Electric Signal (SES) activity. Notably, the observed lead time of 23 days lies within the range of values that has very recently been identified as statistically significant for precursory variations of the electric field of the Earth. Moreover, natural time analysis of the seismicity subsequent to the SES activity, in the area expected to suffer the strong earthquake, reveals that the criticality conditions were obeyed early in the morning of 18 October 2018, i.e., almost a week before the strong earthquake occurred, in agreement with earlier findings. Finally, when employing the recent method of nowcasting earthquakes, which is based on natural time, we find an earthquake potential score of around 80%. Full article
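For readers unfamiliar with natural time analysis: for N events with energies Q_k, the k-th event is assigned natural time \chi_k = k/N, and criticality is identified through the variance

```latex
\kappa_1 = \sum_{k=1}^{N} p_k \chi_k^2 - \left(\sum_{k=1}^{N} p_k \chi_k\right)^{2},
\qquad
p_k = \frac{Q_k}{\sum_{n=1}^{N} Q_n},
```

with the condition \kappa_1 = 0.070 used in this literature to flag the approach to criticality.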

15 pages, 1851 KiB  
Article
Between Waves and Diffusion: Paradoxical Entropy Production in an Exceptional Regime
by Karl Heinz Hoffmann, Kathrin Kulmus, Christopher Essex and Janett Prehl
Entropy 2018, 20(11), 881; https://doi.org/10.3390/e20110881 - 16 Nov 2018
Cited by 5 | Viewed by 3987
Abstract
The entropy production rate is a well-established measure of the extent of irreversibility in a process. For irreversible processes, one thus usually expects the entropy production rate to approach zero in the reversible limit. Fractional diffusion equations provide a fascinating testbed for that intuition in that they build a bridge connecting the fully irreversible diffusion equation with the fully reversible wave equation through a one-parameter family of processes. The entropy production paradox describes the very non-intuitive increase of the entropy production rate as that bridge is traversed from irreversible diffusion to reversible waves. This paradox has been established for time- and space-fractional diffusion equations on one-dimensional continuous space and for the Shannon, Tsallis and Rényi entropies. After a brief review of the known results, we generalize it to time-fractional diffusion on a finite chain of points described by a fractional master equation. Full article
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)
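The one-parameter bridge referred to above is, schematically, the time-fractional diffusion equation

```latex
\frac{\partial^{\gamma} P(x,t)}{\partial t^{\gamma}} = D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}},
\qquad 1 \le \gamma \le 2,
```

which recovers ordinary (fully irreversible) diffusion at \gamma = 1 and the (fully reversible) wave equation at \gamma = 2; the paper's new contribution replaces the continuous Laplacian by a fractional master equation on a finite chain.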

12 pages, 2045 KiB  
Article
Merging of Numerical Intervals in Entropy-Based Discretization
by Jerzy W. Grzymala-Busse and Teresa Mroczek
Entropy 2018, 20(11), 880; https://doi.org/10.3390/e20110880 - 16 Nov 2018
Cited by 2 | Viewed by 3290
Abstract
As previous research indicates, a multiple-scanning methodology for discretization of numerical datasets, based on entropy, is very competitive. Discretization is a process of converting the numerical values of data records into discrete values associated with numerical intervals defined over the domains of the data records. In multiple-scanning discretization, the last step is the merging of neighboring intervals in discretized datasets as a kind of postprocessing. Our objective is to check how the error rate, measured by tenfold cross-validation within the C4.5 system, is affected by such merging. We conducted experiments on 17 numerical datasets, using the same setup of multiple scanning, with three different options for merging: no merging at all, merging based on the smallest entropy, and merging based on the biggest entropy. As a result of the Friedman rank sum test (5% significance level), we concluded that the differences between all three approaches are statistically insignificant: there is no universally best approach. Then, we repeated all experiments 30 times, recording averages and standard deviations. The test of the difference between averages shows that, for a comparison of no merging with merging based on the smallest entropy, there are statistically highly significant differences (1% significance level). In some cases the smaller error rate is associated with no merging, in others with merging based on the smallest entropy. A comparison of no merging with merging based on the biggest entropy showed similar results. Our final conclusion is that there are highly significant differences between no merging and merging, depending on the dataset; the best approach should be chosen by trying all three. Full article
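To make the two merging criteria concrete, here is a toy sketch in which each interval is represented by the class labels of the cases it contains, and the pair of neighbors whose union has the smallest (or biggest) entropy is merged. The paper's actual procedure also checks that merging preserves dataset consistency, which is omitted here.

```python
import numpy as np
from collections import Counter

def interval_entropy(labels):
    """Shannon entropy (bits) of the class distribution in one interval."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def merge_once(intervals, smallest=True):
    """Merge the neighboring pair whose union has the smallest
    (or biggest) entropy; `intervals` is a list of label lists."""
    ents = [interval_entropy(a + b) for a, b in zip(intervals, intervals[1:])]
    i = int(np.argmin(ents)) if smallest else int(np.argmax(ents))
    return intervals[:i] + [intervals[i] + intervals[i + 1]] + intervals[i + 2:]

print(merge_once([['a', 'a'], ['a', 'b'], ['b', 'b', 'b']]))
```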

33 pages, 960 KiB  
Article
Assessing the Relevance of Specific Response Features in the Neural Code
by Hugo Gabriel Eyherabide and Inés Samengo
Entropy 2018, 20(11), 879; https://doi.org/10.3390/e20110879 - 15 Nov 2018
Cited by 1 | Viewed by 3800
Abstract
The study of the neural code aims at deciphering how the nervous system maps external stimuli into neural activity—the encoding phase—and subsequently transforms such activity into adequate responses to the original stimuli—the decoding phase. Several information-theoretical methods have been proposed to assess the relevance of individual response features, such as the spike count of a given neuron or the amount of correlation in the activity of two cells. These methods work under the premise that the relevance of a feature is reflected in the information loss that is induced by eliminating the feature from the response. The alternative methods differ in the procedure by which the tested feature is removed and in the algorithm with which the lost information is calculated. Here we compare these methods and show that, more often than not, each method assigns a different relevance to the tested feature. We demonstrate that the differences are both quantitative and qualitative, and connect them with the method employed to remove the tested feature, as well as the procedure used to calculate the lost information. By studying a collection of carefully designed examples, and working on analytic derivations, we identify the conditions under which the relevance of features diagnosed by different methods can be ranked, or sometimes even equated. The condition for equality involves both the amount and the type of information contributed by the tested feature. We conclude that the quest for relevant response features is more delicate than previously thought, and may yield multiple answers depending on methodological subtleties. Full article
(This article belongs to the Special Issue Information Theory in Neuroscience)

23 pages, 4448 KiB  
Review
Coherent Precipitation and Strengthening in Compositionally Complex Alloys: A Review
by Qing Wang, Zhen Li, Shujie Pang, Xiaona Li, Chuang Dong and Peter K. Liaw
Entropy 2018, 20(11), 878; https://doi.org/10.3390/e20110878 - 15 Nov 2018
Cited by 129 | Viewed by 10162
Abstract
High-performance conventional engineering materials (including Al alloys, Mg alloys, Cu alloys, stainless steels, Ni superalloys, etc.) and newly developed high entropy alloys are all compositionally complex alloys (CCAs). In these CCA systems, second-phase particles are generally precipitated in the solid-solution matrix; the precipitates are diverse and can result in different strengthening effects. The present work aims at comprehensively surveying precipitation behavior and precipitation strengthening in CCAs. First, the morphology evolution of second-phase particles and the associated precipitation strengthening mechanisms are introduced. Then, the precipitation behaviors in diverse CCA systems are illustrated, with particular attention to coherent precipitation, and the relationship between particle morphology and strengthening effectiveness is discussed. The challenge for the future is to design stable coherent microstructures in different solid-solution matrices, which will be the most effective approach to enhancing alloy strength. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

17 pages, 351 KiB  
Article
Closing the Door on Quantum Nonlocality
by Marian Kupczynski
Entropy 2018, 20(11), 877; https://doi.org/10.3390/e20110877 - 15 Nov 2018
Cited by 21 | Viewed by 4559
Abstract
Bell-type inequalities are proven using oversimplified probabilistic models and/or counterfactual definiteness (CFD). If setting-dependent variables describing measuring instruments are correctly introduced, none of these inequalities may be proven. In spite of this, a belief in a mysterious quantum nonlocality is not fading. Computer simulations of Bell tests allow people to study the different ways in which the experimental data might have been created. They also allow for the generation of various counterfactual experiments' outcomes, such as repeated or simultaneous measurements performed in different settings on the same “photon-pair”, and so forth. They allow for the reinforcing or relaxing of CFD compliance and/or for studying the impact of various “photon identification procedures”, mimicking those used in real experiments. Data samples consistent with quantum predictions may be generated by using a specific setting-dependent identification procedure. It reflects the active role of instruments during the measurement process. Each of the setting-dependent data samples is consistent with a specific setting-dependent probabilistic model which may not be deduced using non-contextual local realistic or stochastic hidden variables. In this paper, we discuss the results of these simulations. Since the data samples are generated in a locally causal way, these simulations provide additional strong arguments for closing the door on quantum nonlocality. Full article
(This article belongs to the Special Issue Towards Ultimate Quantum Theory (UQT))
13 pages, 1393 KiB  
Article
A Fractional Single-Phase-Lag Model of Heat Conduction for Describing Propagation of the Maximum Temperature in a Finite Medium
by Stanisław Kukla and Urszula Siedlecka
Entropy 2018, 20(11), 876; https://doi.org/10.3390/e20110876 - 15 Nov 2018
Cited by 6 | Viewed by 2990
Abstract
In this paper, an investigation of maximum temperature propagation in a finite medium is presented. The heat conduction in the medium was modelled using a single-phase-lag equation with fractional Caputo derivatives. The formulation and solution of the problem concern heat conduction in a slab, a hollow cylinder, and a hollow sphere, which are subjected to a heat source represented by the Robotnov function and a harmonically varying ambient temperature. The problem with time-dependent Robin and homogeneous Neumann boundary conditions has been solved using an eigenfunction expansion method and the Laplace transform technique. The solution of the heat conduction problem was used to determine the maximum temperature trajectories. The trajectories and propagation speeds of the temperature maxima in the medium depend on the order of the fractional derivatives occurring in the heat conduction model. These dependencies have been investigated numerically for heat conduction in the hollow cylinder. Full article
(This article belongs to the Section Thermodynamics)
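The Caputo fractional derivative used in the model is defined, for n-1 < \alpha < n, by

```latex
{}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\,\mathrm{d}\tau ,
```

which, unlike the Riemann-Liouville form, admits classical initial conditions and reduces to the ordinary derivative as \alpha \to n.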

10 pages, 1134 KiB  
Article
Efficiency of Harmonic Quantum Otto Engines at Maximal Power
by Sebastian Deffner
Entropy 2018, 20(11), 875; https://doi.org/10.3390/e20110875 - 15 Nov 2018
Cited by 79 | Viewed by 6612
Abstract
Recent experimental breakthroughs produced the first nano heat engines that have the potential to harness quantum resources. An instrumental question is how their performance measures up against the efficiency of classical engines. For single-ion engines undergoing quantum Otto cycles, it has been found that the efficiency at maximal power is given by the Curzon–Ahlborn efficiency. This is rather remarkable, as the Curzon–Ahlborn efficiency was originally derived for endoreversible Carnot cycles. Here, we analyze two examples of endoreversible Otto engines within the same conceptual framework as Curzon and Ahlborn's original treatment. We find that for endoreversible Otto cycles in classical harmonic oscillators the efficiency at maximal power is, indeed, given by the Curzon–Ahlborn efficiency. However, we also find that the efficiency of Otto engines made of quantum harmonic oscillators is significantly larger. Full article
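The Curzon–Ahlborn efficiency referenced throughout is

```latex
\eta_{\mathrm{CA}} = 1 - \sqrt{T_c/T_h},
```

where T_c and T_h are the cold and hot reservoir temperatures; the paper's central finding is that quantum harmonic Otto engines at maximal power can exceed this value.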

24 pages, 3039 KiB  
Article
Information Dynamics in Urban Crime
by Miguel Melgarejo and Nelson Obregon
Entropy 2018, 20(11), 874; https://doi.org/10.3390/e20110874 - 14 Nov 2018
Cited by 2 | Viewed by 4862
Abstract
Information production in both space and time has been highlighted as one of the elements that shapes the footprint of complexity in natural and socio-technical systems. However, information production in urban crime has barely been studied. This work addresses the problem by using multifractal analysis to characterize the spatial information scaling in urban crime reports, and nonlinear processing tools to study the temporal behavior of this scaling. Our results suggest that information scaling in urban crime exhibits dynamics that evolve in low-dimensional chaotic attractors, and that this can be observed at several spatio-temporal scales, although some are more favorable than others. This evidence has practical implications in terms of defining the characteristic scales at which to approach urban crime from available data, and it supports theoretical perspectives on the complexity of urban crime. Full article
(This article belongs to the Special Issue Information Theory in Complex Systems)
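The multifractal characterization rests on the generalized (Rényi) dimensions of the spatial distribution of crime reports; in standard box-counting form,

```latex
D_q = \frac{1}{q-1}\,\lim_{\epsilon \to 0} \frac{\ln \sum_i p_i(\epsilon)^q}{\ln \epsilon},
```

where p_i(\epsilon) is the fraction of reports falling in box i of size \epsilon; a nontrivial dependence of D_q on q is the signature of multifractality.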

22 pages, 13188 KiB  
Article
Early Fault Detection Method for Rotating Machinery Based on Harmonic-Assisted Multivariate Empirical Mode Decomposition and Transfer Entropy
by Zhe Wu, Qiang Zhang, Lixin Wang, Lifeng Cheng and Jingbo Zhou
Entropy 2018, 20(11), 873; https://doi.org/10.3390/e20110873 - 13 Nov 2018
Cited by 10 | Viewed by 3838
Abstract
Analyzing the coupling characteristics of rotating machinery fault signals under complex, nonlinear interference is a difficult task. The difficulty stems from the strong noise background of fault feature extraction and from weaknesses, such as mode mixing, in existing Ensemble Empirical Mode Decomposition (EEMD) time–frequency analysis methods. To quantitatively study the nonlinear synchronous coupling and information transfer between different frequency scales of rotating machinery fault signals under such interference, a new nonlinear signal processing method, the harmonic-assisted multivariate empirical mode decomposition (HA-MEMD) method, is proposed in this paper. By adding high-frequency harmonic-assisted channels and removing them after decomposition, the precision of the Intrinsic Mode Function (IMF) decomposition can be effectively improved and mode aliasing mitigated. Analysis of simulated signals proves the effectiveness of this method. By combining HA-MEMD with the transfer entropy algorithm, a fault detection method for rotating machinery based on high-frequency harmonic-assisted multivariate empirical mode decomposition-transfer entropy (HA-MEMD-TE) was established. The main features of the mechanical transmission system were extracted by HA-MEMD, and the denoised signal was used for the transfer entropy calculation. An HA-MEMD-TE-based evaluation index of the rotating machinery state was established to quantitatively describe the degree of nonlinear coupling between signals and thereby evaluate and diagnose the operating state of the mechanical system. By adding noise at different signal-to-noise ratios, the fault detection ability of the HA-MEMD-TE method against a strong noise background is investigated, demonstrating that the method is reliable and robust. Transfer entropy is thus applied to the fault diagnosis of rotating machinery, providing a new, effective method for early fault diagnosis and performance degradation-state recognition. Full article
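Transfer entropy, the coupling measure adopted here, is defined (in Schreiber's form, with history lengths k and l) as

```latex
T_{X \to Y} = \sum p\bigl(y_{t+1}, y_t^{(k)}, x_t^{(l)}\bigr)\,
\log \frac{p\bigl(y_{t+1} \mid y_t^{(k)}, x_t^{(l)}\bigr)}{p\bigl(y_{t+1} \mid y_t^{(k)}\bigr)},
```

i.e., the information that the past of X adds about the next value of Y beyond what Y's own past already provides.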

11 pages, 4308 KiB  
Article
Magnetic Properties and Microstructure of FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) High-Entropy Alloys
by Zhong Li, Chenxu Wang, Linye Yu, Yong Gu, Minxiang Pan, Xiaohua Tan and Hui Xu
Entropy 2018, 20(11), 872; https://doi.org/10.3390/e20110872 - 13 Nov 2018
Cited by 23 | Viewed by 5287
Abstract
The present work examines the effects of Sn addition on the magnetic properties and microstructure of FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) high-entropy alloys (HEAs). The results show that all samples consist of a mixed structure of face-centered-cubic (FCC) and body-centered-cubic (BCC) phases. The addition of Sn promotes the formation of the BCC phase and also affects the shape of the Cu-rich nano-precipitates in the BCC matrix. The Curie temperature (Tc) of the FCC phase and the saturation magnetization (Ms) of the FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) HEAs increase greatly, while the remanence (Br) decreases, after Sn is added to the FeCoNi(CuAl)0.8 HEA. The thermomagnetic curves indicate that the phases of these HEAs transform from FCC with low Tc to BCC with high Tc at temperatures of 600–700 K. This work suggests FeCoNi(CuAl)0.8Snx (0 ≤ x ≤ 0.10) HEAs as candidate soft magnets for high-temperature applications. Full article
(This article belongs to the Special Issue New Advances in High-Entropy Alloys)

18 pages, 1131 KiB  
Article
Characterization of Artifact Influence on the Classification of Glucose Time Series Using Sample Entropy Statistics
by David Cuesta-Frau, Daniel Novák, Vacláv Burda, Antonio Molina-Picó, Borja Vargas, Milos Mraz, Petra Kavalkova, Marek Benes and Martin Haluzik
Entropy 2018, 20(11), 871; https://doi.org/10.3390/e20110871 - 12 Nov 2018
Cited by 12 | Viewed by 5185
Abstract
This paper analyses the performance of SampEn and one of its derivatives, Fuzzy Entropy (FuzzyEn), in the context of artifacted blood glucose time series classification. This is a difficult and practically unexplored framework, where the availability of more sensitive and reliable measures could be of great clinical impact. Although the advent of new blood glucose monitoring technologies may reduce the incidence of such artifacts, incorrect device or sensor manipulation, patient adherence, sensor detachment, time constraints, adoption barriers or affordability can still result in relatively short and artifacted records, such as the ones analyzed in this paper or in other similar works. This study is aimed at characterizing the changes induced by such artifacts, enabling countermeasures to be arranged in advance when possible. Despite the presence of these disturbances, the results demonstrate that SampEn and FuzzyEn are sufficiently robust to achieve a significant classification performance, using records obtained from patients with duodenal–jejunal exclusion. The classification results, with areas under the ROC curve of up to 0.9, several tests yielding AUC values greater than 0.8, and a leave-one-out average classification accuracy of 80%, confirm the potential of these measures in this context despite the presence of artifacts, with SampEn performing slightly better than FuzzyEn. Full article
(This article belongs to the Special Issue The 20th Anniversary of Entropy - Approximate and Sample Entropy)
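A compact reference implementation of SampEn, the core measure studied here (boundary conventions vary slightly across the literature):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of templates of
    length m, and A of length m+1, matching within r * std(x)
    (Chebyshev distance, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(5)
print(sample_entropy(rng.standard_normal(500)))
```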

62 pages, 1359 KiB  
Article
Robust Signaling for Bursty Interference
by Grace Villacrés, Tobias Koch, Aydin Sezgin and Gonzalo Vazquez-Vilar
Entropy 2018, 20(11), 870; https://doi.org/10.3390/e20110870 - 12 Nov 2018
Cited by 4 | Viewed by 3236
Abstract
This paper studies a bursty interference channel, where the presence/absence of interference is modeled by a block-i.i.d. Bernoulli process that stays constant for a duration of T symbols (referred to as coherence block) and then changes independently to a new state. We consider both a quasi-static setup, where the interference state remains constant during the whole transmission of the codeword, and an ergodic setup, where a codeword spans several coherence blocks. For the quasi-static setup, we study the largest rate of a coding strategy that provides reliable communication at a basic rate and allows an increased (opportunistic) rate when there is no interference. For the ergodic setup, we study the largest achievable rate. We study how non-causal knowledge of the interference state, referred to as channel-state information (CSI), affects the achievable rates. We derive converse and achievability bounds for (i) local CSI at the receiver side only; (ii) local CSI at the transmitter and receiver side; and (iii) global CSI at all nodes. Our bounds allow us to identify when interference burstiness is beneficial and in which scenarios global CSI outperforms local CSI. The joint treatment of the quasi-static and ergodic setup further allows for a thorough comparison of these two setups. Full article
(This article belongs to the Special Issue Multiuser Information Theory II)

15 pages, 259 KiB  
Article
Short-Time Propagators and the Born–Jordan Quantization Rule
by Maurice A. De Gosson
Entropy 2018, 20(11), 869; https://doi.org/10.3390/e20110869 - 10 Nov 2018
Cited by 4 | Viewed by 3379
Abstract
We have shown in previous work that the equivalence of the Heisenberg and Schrödinger pictures of quantum mechanics requires the use of the Born and Jordan quantization rules. In the present work we give further evidence that the Born–Jordan rule is the correct quantization scheme for quantum mechanics. For this purpose we use correct short-time approximations to the action functional, initially due to Makri and Miller, and show that these lead to the desired quantization of the classical Hamiltonian. Full article
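For orientation, the Born–Jordan rule quantizes monomials by the equally weighted average over operator orderings; in the form used in de Gosson's work,

```latex
p^{s} q^{r} \;\longmapsto\; \frac{1}{s+1} \sum_{\ell=0}^{s} \hat{p}^{\,s-\ell}\, \hat{q}^{\,r}\, \hat{p}^{\,\ell},
```

in contrast with the Weyl rule, which weights the orderings binomially.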
22 pages, 4759 KiB  
Article
Quantitative Assessment of Landslide Susceptibility Comparing Statistical Index, Index of Entropy, and Weights of Evidence in the Shangnan Area, China
by Jie Liu and Zhao Duan
Entropy 2018, 20(11), 868; https://doi.org/10.3390/e20110868 - 10 Nov 2018
Cited by 45 | Viewed by 4703
Abstract
In this study, a comparative analysis of the statistical index (SI), index of entropy (IOE) and weights of evidence (WOE) models was introduced to landslide susceptibility mapping, and the performance of the three models was validated and systematically compared. As one of the most landslide-prone areas in Shaanxi Province, China, Shangnan County was selected as the study area. Firstly, a series of reports, remote sensing images and geological maps were collected, and field surveys were carried out to prepare a landslide inventory map. A total of 348 landslides were identified in the study area, and they were randomly divided into a training dataset (70%, 244 landslides) and a testing dataset (30%, 104 landslides). Thirteen conditioning factors were then employed. Corresponding thematic data layers and landslide susceptibility maps were generated based on ArcGIS software. Finally, the area under the curve (AUC) values were calculated for the training dataset and the testing dataset in order to validate and compare the performance of the three models. For the training dataset, the AUC plots showed that the WOE model had the highest accuracy rate of 76.05%, followed by the SI model (74.67%) and the IOE model (71.12%). In the case of the testing dataset, the prediction accuracy rates for the SI, IOE and WOE models were 73.75%, 63.89%, and 75.10%, respectively. It can be concluded that the WOE model had the best prediction capacity for landslide susceptibility mapping in Shangnan County. The landslide susceptibility map produced by the WOE model has profound geological and engineering significance in terms of landslide hazard prevention and control in the study area and other similar areas. Full article
(This article belongs to the Special Issue Applications of Information Theory in the Geosciences II)
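The weights-of-evidence model assigns each factor class a positive and a negative weight from landslide/non-landslide overlap statistics; in the usual notation,

```latex
W^{+} = \ln \frac{P(B \mid L)}{P(B \mid \bar{L})}, \qquad
W^{-} = \ln \frac{P(\bar{B} \mid L)}{P(\bar{B} \mid \bar{L})}, \qquad
C = W^{+} - W^{-},
```

where B denotes presence of the factor class, L presence of a landslide, and the contrast C measures the overall spatial association between the class and landsliding.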

16 pages, 4672 KiB  
Article
Double Quantum Image Encryption Based on Arnold Transform and Qubit Random Rotation
by Xingbin Liu, Di Xiao and Cong Liu
Entropy 2018, 20(11), 867; https://doi.org/10.3390/e20110867 - 10 Nov 2018
Cited by 23 | Viewed by 4768
Abstract
Quantum image encryption offers major advantages over its classical counterpart in terms of key space, computational complexity, and so on. A novel double quantum image encryption approach based on the quantum Arnold transform (QAT) and qubit random rotation is proposed in this paper, in which QAT is used to scramble pixel positions and the gray information is changed by random qubit rotation. The independent random qubit rotation operates once each in the spatial and frequency domains, with the help of the quantum Fourier transform (QFT). The encryption process accomplishes pixel confusion and diffusion, and finally a noise-like cipher image is obtained. Numerical simulation and theoretical analysis verify that the method is valid and shows superior performance in security and computational complexity. Full article
(This article belongs to the Collection Quantum Information)
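The classical Arnold (cat map) scrambling that QAT implements on basis states can be sketched as follows; this is the standard classical map, not the quantum circuit itself:

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Arnold cat map on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied pixel-wise."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

img = np.arange(16).reshape(4, 4)
print(arnold_transform(img, iterations=3))   # the map is periodic in n
```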

17 pages, 872 KiB  
Article
An Entropy-Guided Monte Carlo Tree Search Approach for Generating Optimal Container Loading Layouts
by Richard Cant, Ayodeji Remi-Omosowon, Caroline Langensiepen and Ahmad Lotfi
Entropy 2018, 20(11), 866; https://doi.org/10.3390/e20110866 - 9 Nov 2018
Cited by 3 | Viewed by 3285
Abstract
In this paper, a novel approach to the container loading problem using a spatial entropy measure to bias a Monte Carlo Tree Search is proposed. The proposed algorithm generates layouts that achieve the goals of both fitting a constrained space and also having “consistency” or neatness that enables forklift truck drivers to apply them easily to real shipping containers loaded from one end. Three algorithms are analysed. The first is a basic Monte Carlo Tree Search, driven only by the principle of minimising the length of container that is occupied. The second is an algorithm that uses the proposed entropy measure to drive an otherwise random process. The third algorithm combines these two principles and produces superior results to either. These algorithms are then compared to a classical deterministic algorithm. It is shown that where the classical algorithm fails, the entropy-driven algorithms are still capable of providing good results in a short computational time. Full article
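The paper defines its own spatial entropy measure; purely to illustrate how an entropy score can grade layout "neatness", consider a hypothetical toy measure over the front-face depths of a container's floor cells:

```python
import numpy as np

def profile_entropy(depths):
    """Shannon entropy (bits) of the distribution of front-face depths
    across floor cells; a flat loading front scores zero."""
    _, counts = np.unique(np.asarray(depths), return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

print(profile_entropy([5, 5, 5, 5]))   # 0.0 : perfectly flat front
print(profile_entropy([1, 4, 2, 7]))   # 2.0 : ragged front
```

A low-entropy bias of this kind is what steers the Monte Carlo Tree Search toward layouts a forklift driver can reproduce.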

12 pages, 2042 KiB  
Article
Optimization and Stability of Heat Engines: The Role of Entropy Evolution
by Julian Gonzalez-Ayala, Moises Santillán, Maria Jesus Santos, Antonio Calvo Hernández and José Miguel Mateos Roco
Entropy 2018, 20(11), 865; https://doi.org/10.3390/e20110865 - 9 Nov 2018
Cited by 7 | Viewed by 3435
Abstract
The local stability and dynamic evolution of the maximum-power and maximum-compromise (Omega) operation regimes of a low-dissipation heat engine are analyzed. The thermodynamic behavior of trajectories toward the stationary state, after the operation regime is perturbed, displays a trade-off between stability, entropy production, efficiency and power output. This allows stability and optimization to be considered as connected pieces of a single phenomenon. Trajectories inside the basin of attraction display the smallest entropy drops. Additionally, it was found that time constraints, related to irreversible and endoreversible behaviors, influence the thermodynamic evolution of relaxation trajectories. The evolution was analyzed in terms of the symmetries of the model and the applied thermal gradients. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer)

12 pages, 354 KiB  
Article
Modeling and Fusing the Uncertainty of FMEA Experts Using an Entropy-Like Measure with an Application in Fault Evaluation of Aircraft Turbine Rotor Blades
by Xuelian Zhou and Yongchuan Tang
Entropy 2018, 20(11), 864; https://doi.org/10.3390/e20110864 - 9 Nov 2018
Cited by 21 | Viewed by 3920
Abstract
As a typical tool of risk analysis in practical engineering, failure mode and effects analysis (FMEA) is a well-known method for risk prediction and prevention. However, how to quantify the uncertainty of the subjective assessments from FMEA experts and aggregate the corresponding uncertainty into the classical FMEA approach still needs further study. In this paper, we argue that the subjective assessments of FMEA experts can be adopted to model the weight of each FMEA expert, which can be regarded as a data-driven method for modeling ambiguous information in the FMEA method. Based on this new perspective, a modified FMEA approach is proposed, in which the subjective uncertainty of FMEA experts is handled in the framework of Dempster–Shafer evidence theory (DST). In the improved FMEA approach, the ambiguity measure (AM), an entropy-like uncertainty measure in the DST framework, is applied to quantify the degree of uncertainty of each FMEA expert. Then, the classical risk priority number (RPN) model is improved by aggregating an AM-based weight factor into the RPN function. A case study applying the new RPN model to aircraft turbine rotor blades verifies the applicability and usefulness of the proposed FMEA approach. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
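The ambiguity measure (AM) applied here is the Shannon entropy of the pignistic transform of a basic probability assignment m over the frame \Theta; in the usual DST notation,

```latex
\mathrm{AM}(m) = -\sum_{\theta \in \Theta} \mathrm{BetP}_m(\theta)\,\log_2 \mathrm{BetP}_m(\theta),
\qquad
\mathrm{BetP}_m(\theta) = \sum_{A \subseteq \Theta,\; \theta \in A} \frac{m(A)}{|A|}.
```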

14 pages, 424 KiB  
Article
Sample Entropy of sEMG Signals at Different Stages of Rectal Cancer Treatment
by Paulina Trybek, Michal Nowakowski, Jerzy Salowka, Jakub Spiechowicz and Lukasz Machura
Entropy 2018, 20(11), 863; https://doi.org/10.3390/e20110863 - 9 Nov 2018
Cited by 8 | Viewed by 3104
Abstract
Information theory provides a spectrum of nonlinear methods capable of grasping the internal structure of a signal together with an insight into its complex nature. In this work, we discuss the usefulness of selected entropy techniques for describing the information carried by surface electromyography signals during colorectal cancer treatment. The electrical activity of the external anal sphincter can serve as a potential source of knowledge about the actual state of a patient who underwent a common surgery for rectal cancer in the form of anterior or lower anterior resection. The calculation of Sample Entropy has been extended to multiple time scales in terms of Multiscale Sample Entropy. The specific values of the entropy measures and their dependence on the time scales were analyzed with regard to the time elapsed since the operation, the type of surgical treatment, and the different depths of the rectal canal. The Mann–Whitney U test and the Friedman ANOVA statistics indicate statistically significant differences in the estimated Sample Entropy among all stages of treatment and for all consecutive depths of the rectal area. Further analysis at multiple time scales reveals substantial differences among the compared stages of treatment in the group of patients who underwent lower anterior resection. Full article
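Multiscale Sample Entropy evaluates SampEn on coarse-grained copies of the signal; the standard non-overlapping coarse-graining step is shown below (composable with any SampEn implementation, such as the sketch under article 871 above):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average the series over non-overlapping windows of length `scale`;
    scale 1 returns the original series."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

# Multiscale profile: entropy of coarse_grain(x, s) for s = 1, 2, 3, ...
```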
