Entropy, Volume 23, Issue 12 (December 2021) – 151 articles

Cover Story (view full-size image): Historically, quantifying the complexity of graphs and networks has been of great interest to scientists in different fields. In this paper, we have tackled this problem using Kolmogorov complexity as the metric. Firstly, ‘Kolmogorov basic graphs’ are defined as those with the least possible Kolmogorov complexity. These graphs are then seen as the building blocks for constructing any given graph. Consequently, the complexity of a graph is estimated by decomposing it into a set of Kolmogorov basic graphs. The result is an algorithm, called ‘Kolmogorov graph covering’, which takes a graph as an input and returns an upper bound for its Kolmogorov complexity. View this paper.
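As a loose illustration of the idea that compressibility upper-bounds Kolmogorov complexity (a generic compression proxy, not the paper's 'Kolmogorov graph covering' algorithm), a structured graph should compress far better than a random one:

```python
import zlib

import numpy as np

def complexity_upper_bound(adj: np.ndarray) -> int:
    """Upper bound (in bits) on a graph's Kolmogorov complexity obtained
    by losslessly compressing the upper triangle of its adjacency matrix."""
    n = adj.shape[0]
    bits = adj[np.triu_indices(n, k=1)].astype(np.uint8).tobytes()
    return 8 * len(zlib.compress(bits, 9))

# A highly regular graph (ring) should compress far better than a random one.
n = 64
ring = np.zeros((n, n), dtype=np.uint8)
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1

rng = np.random.default_rng(0)
rand = np.triu(rng.integers(0, 2, (n, n)), 1).astype(np.uint8)
rand = rand + rand.T

print(complexity_upper_bound(ring), "<", complexity_upper_bound(rand))
```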
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
21 pages, 1830 KiB  
Article
Time and Causality: A Thermocontextual Perspective
by Harrison Crecraft
Entropy 2021, 23(12), 1705; https://doi.org/10.3390/e23121705 - 20 Dec 2021
Cited by 1 | Viewed by 4218
Abstract
The thermocontextual interpretation (TCI) is an alternative to the existing interpretations of physical states and time. The prevailing interpretations are based on assumptions rooted in classical mechanics, the logical implications of which include determinism, time symmetry, and a paradox: determinism implies that effects follow causes, establishing an arrow of causality, and this conflicts with time symmetry. The prevailing interpretations also fail to explain the empirical irreversibility of wavefunction collapse without invoking untestable and untenable metaphysical implications. They fail to reconcile nonlocality and relativistic causality without invoking superdeterminism or unexplained superluminal correlations. The TCI defines a system’s state with respect to its actual surroundings at a positive ambient temperature. It recognizes the existing physical interpretations as special cases which define a state either with respect to an absolute zero reference (classical and relativistic states) or with respect to an equilibrium reference (quantum states). Between these special-case extremes is where thermodynamic irreversibility and randomness exist. The TCI distinguishes between a system’s internal time and the reference time of relativity and causality as measured by an external observer’s clock. It defines system time as a complex property of state spanning both reversible mechanical time and irreversible thermodynamic time. Additionally, it provides a physical explanation for nonlocality that is consistent with relativistic causality without hidden variables, superdeterminism, or “spooky action”. Full article
(This article belongs to the Special Issue Time, Causality, and Entropy)

17 pages, 9372 KiB  
Article
Experimental Investigation and Fault Diagnosis for Buckled Wet Clutch Based on Multi-Speed Hilbert Spectrum Entropy
by Jiaqi Xue, Biao Ma, Man Chen, Qianqian Zhang and Liangjie Zheng
Entropy 2021, 23(12), 1704; https://doi.org/10.3390/e23121704 - 20 Dec 2021
Cited by 13 | Viewed by 3168
Abstract
The multi-disc wet clutch is widely used in transmission systems as it transfers the torque and power between the gearbox and the driving engine. During service, the buckling of the friction components in the wet clutch is inevitable, which can shorten the lifetime of the wet clutch and degrade vehicle performance. Therefore, fault diagnosis and online monitoring are required to identify the buckling state of the friction components. However, unlike in other rotating machinery, the time-domain features of the vibration signal lack efficiency in fault diagnosis for the wet clutch. This paper presents a new fault diagnosis method based on multi-speed Hilbert spectrum entropy to classify the buckling state of the wet clutch. Firstly, the wet clutch is classified depending on the buckling degree of the disks, and then a bench test is conducted to obtain vibration signals of each class at varying speeds. A comparison of the accuracy of different classifiers with and without entropy features shows that Hilbert spectrum entropy is more efficient than time-domain features for wet clutch diagnosis, and classification based on multi-speed entropy achieves even better accuracy. Full article
(This article belongs to the Section Signal and Data Analysis)
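The paper's exact multi-speed entropy definition is not reproduced here; as a hedged sketch of one common construction, the Shannon entropy of the normalized instantaneous energy distribution obtained from the analytic (Hilbert-transformed) signal separates a smooth signal from an impulsive, fault-like one:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum_entropy(x: np.ndarray) -> float:
    """Shannon entropy of the normalized instantaneous energy distribution
    derived from the analytic signal of x (one common construction)."""
    amplitude = np.abs(hilbert(x))       # instantaneous amplitude envelope
    energy = amplitude ** 2
    p = energy / energy.sum()            # normalized energy distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# A constant-envelope tone spreads energy uniformly in time (high entropy);
# an impulsive, fault-like signal concentrates it (lower entropy).
t = np.linspace(0, 1, 2048, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)
impulses = (np.abs((t % 0.1) - 0.05) < 0.002).astype(float)
faulty = healthy + 5 * impulses * np.sin(2 * np.pi * 400 * t)
print(hilbert_spectrum_entropy(healthy), hilbert_spectrum_entropy(faulty))
```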

25 pages, 406 KiB  
Article
Exact Learning Augmented Naive Bayes Classifier
by Shouta Sugahara and Maomi Ueno
Entropy 2021, 23(12), 1703; https://doi.org/10.3390/e23121703 - 20 Dec 2021
Cited by 21 | Viewed by 3230
Abstract
Earlier studies have shown that the classification accuracies of Bayesian networks (BNs) obtained by maximizing the conditional log likelihood (CLL) of a class variable, given the feature variables, were higher than those obtained by maximizing the marginal likelihood (ML). However, the differences between the performances of the two scores in the earlier studies may be attributed to their use of approximate learning algorithms, not exact ones. This paper compares the classification accuracies of BNs with approximate learning using CLL to those with exact learning using ML. The results demonstrate that the classification accuracies of BNs obtained by maximizing the ML are higher than those obtained by maximizing the CLL for large datasets. However, the results also demonstrate that the classification accuracies of exactly learned BNs using the ML are much worse than those of other methods when the sample size is small and the class variable has numerous parents. To resolve this problem, we propose an exact learning augmented naive Bayes classifier (ANB), which guarantees a class variable with no parents. The proposed method is guaranteed to asymptotically estimate the identical class posterior to that of the exactly learned BN. Comparison experiments demonstrated the superior performance of the proposed method. Full article
(This article belongs to the Topic Machine and Deep Learning)
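To make the contrast between the two scores concrete, the sketch below computes the discriminative CLL score for a discrete naive Bayes; the paper's generative ML score is Bayesian and is not reproduced here, and fit_naive_bayes with its Laplace smoothing is an illustrative assumption:

```python
import numpy as np

def fit_naive_bayes(X, y, n_vals, n_classes, alpha=1.0):
    """Fit a discrete naive Bayes with Laplace smoothing (illustrative)."""
    prior = np.bincount(y, minlength=n_classes) + alpha
    prior = prior / prior.sum()
    cpts = []
    for j in range(X.shape[1]):
        t = np.full((n_classes, n_vals[j]), alpha)
        for xi, yi in zip(X[:, j], y):
            t[yi, xi] += 1
        cpts.append(t / t.sum(axis=1, keepdims=True))
    return prior, cpts

def conditional_log_likelihood(X, y, prior, cpts):
    """CLL = sum_i log P(y_i | x_i): the discriminative score that the
    paper contrasts with the generative marginal likelihood."""
    log_joint = np.tile(np.log(prior), (len(X), 1))
    for j, t in enumerate(cpts):
        log_joint += np.log(t[:, X[:, j]]).T
    log_post = log_joint - np.logaddexp.reduce(log_joint, axis=1, keepdims=True)
    return float(log_post[np.arange(len(y)), y].sum())

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([(y + rng.integers(0, 2, 200)) % 3, rng.integers(0, 2, 200)])
prior, cpts = fit_naive_bayes(X, y, n_vals=[3, 2], n_classes=2)
print(conditional_log_likelihood(X, y, prior, cpts))
```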

14 pages, 1689 KiB  
Article
Continuous Viewpoint Planning in Conjunction with Dynamic Exploration for Active Object Recognition
by Haibo Sun, Feng Zhu, Yanzi Kong, Jianyu Wang and Pengfei Zhao
Entropy 2021, 23(12), 1702; https://doi.org/10.3390/e23121702 - 20 Dec 2021
Cited by 3 | Viewed by 2463
Abstract
Active object recognition (AOR) aims at collecting additional information to improve recognition performance by purposefully adjusting the viewpoint of an agent. How to determine the next best viewpoint of the agent, i.e., viewpoint planning (VP), is a research focus. Most existing VP methods perform viewpoint exploration in a discrete viewpoint space, which requires sampling the viewpoint space and may introduce significant quantization error. To address this challenge, a continuous VP approach for AOR based on reinforcement learning is proposed. Specifically, we use two separate neural networks to model the VP policy as a parameterized Gaussian distribution and resort to the proximal policy optimization framework to learn the policy. Furthermore, a dynamic exploration scheme based on adaptive entropy regularization is presented to automatically adjust the viewpoint exploration ability during learning. Finally, experimental results on the public dataset GERMS demonstrate the superiority of our proposed VP method. Full article
(This article belongs to the Topic Machine and Deep Learning)

37 pages, 3715 KiB  
Concept Paper
Permutation Entropy as a Universal Disorder Criterion: How Disorders at Different Scale Levels Are Manifestations of the Same Underlying Principle
by Rutger Goekoop and Roy de Kleijn
Entropy 2021, 23(12), 1701; https://doi.org/10.3390/e23121701 - 20 Dec 2021
Cited by 4 | Viewed by 5951
Abstract
What do bacteria, cells, organs, people, and social communities have in common? At first sight, perhaps not much. They involve totally different agents and scale levels of observation. On second thought, however, perhaps they share everything. A growing body of literature suggests that living systems at different scale levels of observation follow the same architectural principles and process information in similar ways. Moreover, such systems appear to respond in similar ways to rising levels of stress, especially when stress levels approach near-lethal levels. To explain such commonalities, we argue that all organisms (including humans) can be modeled as hierarchical Bayesian control systems that are governed by the same biophysical principles. Such systems show generic changes when taxed beyond their ability to correct for environmental disturbances. Without exception, stressed organisms show rising levels of ‘disorder’ (randomness, unpredictability) in internal message passing and overt behavior. We argue that such changes can be explained by a collapse of allostatic (high-level integrative) control, which normally synchronizes activity of the various components of a living system to produce order. The selective overload and cascading failure of highly connected (hub) nodes flattens hierarchical control, producing maladaptive behavior. Thus, we present a theory according to which organic concepts such as stress, a loss of control, disorder, disease, and death can be operationalized in biophysical terms that apply to all scale levels of organization. Given the presumed universality of this mechanism, ‘losing control’ appears to involve the same process anywhere, whether involving bacteria succumbing to an antibiotic agent, people suffering from physical or mental disorders, or social systems slipping into warfare. On a practical note, measures of disorder may serve as early warning signs of system failure even when catastrophic failure is still some distance away. Full article
(This article belongs to the Special Issue Applying the Free-Energy Principle to Complex Adaptive Systems)
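A standard way to quantify the 'disorder' the authors discuss is permutation entropy; a minimal sketch of the Bandt–Pompe definition (not code from the paper) is:

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(x: np.ndarray, order: int = 3, delay: int = 1) -> float:
    """Normalized permutation entropy (Bandt & Pompe): the Shannon entropy
    of the distribution of ordinal patterns in x, scaled to [0, 1]."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[tuple(map(int, np.argsort(window)))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

rng = np.random.default_rng(1)
t = np.arange(5000)
print(permutation_entropy(np.sin(0.05 * t)))        # ordered signal: low value
print(permutation_entropy(rng.normal(size=5000)))   # white noise: close to 1
```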

32 pages, 52957 KiB  
Article
Enhanced Slime Mould Algorithm for Multilevel Thresholding Image Segmentation Using Entropy Measures
by Shanying Lin, Heming Jia, Laith Abualigah and Maryam Altalhi
Entropy 2021, 23(12), 1700; https://doi.org/10.3390/e23121700 - 20 Dec 2021
Cited by 36 | Viewed by 3338
Abstract
Image segmentation is a fundamental but essential step in image processing because it dramatically influences posterior image analysis. Multilevel thresholding image segmentation is one of the most popular image segmentation techniques, and many researchers have used meta-heuristic optimization algorithms (MAs) to determine the threshold values. However, MAs have some defects; for example, they are prone to stagnating in local optima and converge slowly. This paper proposes an enhanced slime mould algorithm (ESMA) for global optimization and multilevel thresholding image segmentation. First, the Levy flight method is used to improve the exploration ability of SMA. Second, quasi opposition-based learning is introduced to enhance the exploitation ability and balance exploration and exploitation. Then, the superiority of the proposed ESMA is confirmed on the 23 benchmark functions. Afterward, the ESMA is applied to multilevel thresholding image segmentation using minimum cross-entropy as the fitness function. We select eight greyscale images as benchmarks and compare ESMA with other classical and state-of-the-art algorithms. The experimental metrics include the average fitness (mean), standard deviation (Std), peak signal to noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM), and the Wilcoxon rank-sum test, which are used to evaluate the quality of segmentation. Experimental results demonstrated that ESMA is superior to other algorithms and can provide higher segmentation accuracy. Full article
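For reference, a sketch of the minimum cross-entropy fitness (Li's criterion, single-threshold case) that ESMA minimizes; the exhaustive search below is only feasible for one threshold, which is why a meta-heuristic is used for the multilevel case:

```python
import numpy as np

def li_cross_entropy(hist: np.ndarray, t: int) -> float:
    """Minimum cross-entropy (Li) criterion for one threshold t: the
    cross-entropy between the image and its two-level reconstruction."""
    i = np.arange(1, len(hist) + 1, dtype=float)  # 1-based to avoid log(0)
    h = hist.astype(float)
    lo, hi = slice(0, t), slice(t, len(hist))
    mu_lo = max((i[lo] * h[lo]).sum() / max(h[lo].sum(), 1e-12), 1e-12)
    mu_hi = max((i[hi] * h[hi]).sum() / max(h[hi].sum(), 1e-12), 1e-12)
    return float((i[lo] * h[lo] * np.log(i[lo] / mu_lo)).sum()
                 + (i[hi] * h[hi] * np.log(i[hi] / mu_hi)).sum())

# Brute force over one threshold on a synthetic bimodal histogram.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(80, 10, 5000), rng.normal(180, 12, 5000)])
hist = np.bincount(pixels.clip(0, 255).astype(int), minlength=256)
print(min(range(1, 256), key=lambda t: li_cross_entropy(hist, t)))
```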

28 pages, 7757 KiB  
Article
Whether the Support Region of Three-Bit Uniform Quantizer Has a Strong Impact on Post-Training Quantization for MNIST Dataset?
by Jelena Nikolić, Zoran Perić, Danijela Aleksić, Stefan Tomić and Aleksandra Jovanović
Entropy 2021, 23(12), 1699; https://doi.org/10.3390/e23121699 - 20 Dec 2021
Cited by 6 | Viewed by 3164
Abstract
Driven by the need for the compression of weights in neural networks (NNs), which is especially beneficial for edge devices with constrained resources, and by the need to utilize the simplest possible quantization model, in this paper, we study the performance of three-bit post-training uniform quantization. The goal is to put various choices of the key parameter of the quantizer in question (the support region threshold) in one place and provide a detailed overview of this choice’s impact on the performance of post-training quantization for the MNIST dataset. Specifically, we analyze whether it is possible to preserve the accuracy of the two NN models (MLP and CNN) to a great extent with the very simple three-bit uniform quantizer, regardless of the choice of the key parameter. Moreover, our goal is to answer the question of whether it is of the utmost importance in post-training three-bit uniform quantization, as it is in conventional quantization, to determine the optimal support region threshold value of the quantizer in order to achieve some predefined accuracy of the quantized neural network (QNN). The results show that the choice of the support region threshold value of the three-bit uniform quantizer does not have such a strong impact on the accuracy of the QNNs, which is not the case with two-bit uniform post-training quantization when applied in the MLP for the same classification task. Accordingly, one can anticipate that due to this special property, the post-training quantization model in question can be widely exploited. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing)
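A minimal sketch of the quantizer under study, assuming a symmetric midrise design with support region [-t_max, t_max] (the parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def uniform_quantize(w: np.ndarray, bits: int = 3, t_max: float = 1.0) -> np.ndarray:
    """Symmetric uniform post-training quantizer. The support region
    [-t_max, t_max] is the key parameter studied in the paper: weights
    are clipped to it and mapped to 2**bits evenly spaced levels."""
    levels = 2 ** bits
    step = 2 * t_max / levels                    # quantization step size
    clipped = np.clip(w, -t_max, t_max - 1e-12)  # granular vs. overload region
    return (np.floor(clipped / step) + 0.5) * step

# Quantize Gaussian-like weights with two different support thresholds.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.25, 10000)
for t in (0.5, 1.5):
    q = uniform_quantize(w, bits=3, t_max=t)
    sqnr = 10 * np.log10((w ** 2).mean() / ((w - q) ** 2).mean())
    print(f"t_max={t}: SQNR = {sqnr:.1f} dB, levels used = {np.unique(q).size}")
```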

12 pages, 572 KiB  
Article
Security Analysis of a Passive Continuous-Variable Quantum Key Distribution by Considering Finite-Size Effect
by Shengjie Xu, Yin Li, Yijun Wang, Yun Mao, Xiaodong Wu and Ying Guo
Entropy 2021, 23(12), 1698; https://doi.org/10.3390/e23121698 - 19 Dec 2021
Cited by 3 | Viewed by 2668
Abstract
We perform a security analysis of a passive continuous-variable quantum key distribution (CV-QKD) protocol by considering the finite-size effect. In the passive CV-QKD scheme, Alice utilizes thermal sources to passively prepare quantum states without Gaussian modulation. With this technique, the quantum states can be prepared precisely to match the high transmission rate. Here, both the asymptotic regime and the finite-size regime are considered for comparison. In the finite-size scenario, we illustrate the passive CV-QKD protocol against collective attacks. Simulation results show that the performance of the passive CV-QKD protocol in the finite-size case is more pessimistic than that achieved in the asymptotic case, which indicates that the finite-size effect has a great influence on the performance of the single-mode passive CV-QKD protocol. However, a reasonable performance can still be obtained in the finite-size regime by enhancing the average photon number of the thermal state. Full article
(This article belongs to the Special Issue Practical Quantum Communication)

21 pages, 682 KiB  
Article
Breaking Data Encryption Standard with a Reduced Number of Rounds Using Metaheuristics Differential Cryptanalysis
by Kamil Dworak and Urszula Boryczka
Entropy 2021, 23(12), 1697; https://doi.org/10.3390/e23121697 - 18 Dec 2021
Cited by 5 | Viewed by 3062
Abstract
This article presents the authors’ own metaheuristic cryptanalytic attack based on the use of differential cryptanalysis (DC) methods and memetic algorithms (MA) that improve the local search process through simulated annealing (SA). The suggested attack is verified on a set of ciphertexts generated with the well-known DES (data encryption standard) reduced to six rounds. The aim of the attack is to guess the last encryption subkey for each of the two characteristics Ω. Knowing the last subkey, it is possible to recreate the complete encryption key and thus decrypt the cryptogram. The suggested approach makes it possible to automatically reject solutions (keys) with the worst fitness function values, owing to which the attack search space can be significantly reduced. The memetic algorithm (MASA) created in this way is compared with other metaheuristic techniques suggested in the literature, in particular with the genetic algorithm (NGA) and the classical differential cryptanalysis attack, in terms of the memory consumption and time needed to guess the key. The article also investigates the entropy of the MASA and NGA attacks. Full article
(This article belongs to the Special Issue Entropy in Real-World Datasets and Its Impact on Machine Learning)

20 pages, 3580 KiB  
Article
Information Flow Network of International Exchange Rates and Influence of Currencies
by Hongduo Cao, Fan Lin, Ying Li and Yiming Wu
Entropy 2021, 23(12), 1696; https://doi.org/10.3390/e23121696 - 18 Dec 2021
Cited by 3 | Viewed by 3126
Abstract
The main purpose of this study is to investigate how price fluctuations of a sovereign currency are transmitted among currencies and what network traits and currency relationships are formed in this process against the background of economic globalization. Although currency, as a universal equivalent, naturally possesses network attributes, these have not received enough attention in traditional exchange rate determination theories because of their overemphasis on currency’s role as a measure of value. Considering the network attribute of currency, the characteristics of the exchange rate information flow network are extracted and analyzed in order to study the impact currencies have on each other. The information flow correlation network between currencies is studied from 2007 to 2019 based on data from 30 currencies. Transfer entropy is used to measure the nonlinear information flow between currencies, and complex network indexes such as the average shortest path and the aggregation coefficient are used to analyze the macroscopic topology characteristics and key nodes of the information flow network. It was found that there may be strong information exchange between currencies when the overall market price fluctuates violently. Commodity currencies and the currencies of major countries have great influence in the network, and local fluctuations may result in increased risks in the overall exchange rate market. Therefore, it is necessary to monitor exchange rate fluctuations of relevant currencies in order to prevent risks in advance. The network characteristics and evolution of major currencies are revealed, and the influence of a currency in the international money market can be evaluated based on the characteristics of the network. The world monetary system is developing towards diversification, and the currencies of developing countries are becoming more and more important. Taking CNY as an example, it was found that the international influence of CNY has increased since 2015, although it holds no advantage over other major international currencies, and this trend has continued despite trade frictions between China and the United States. Full article
(This article belongs to the Special Issue Granger Causality and Transfer Entropy for Financial Networks)
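A hedged sketch of the transfer entropy estimate underlying such a network, assuming quantile binning and lag 1 (the paper's estimator settings may differ); each directed currency pair would contribute one edge weight:

```python
from collections import Counter

import numpy as np

def transfer_entropy(x: np.ndarray, y: np.ndarray, bins: int = 3) -> float:
    """TE(Y -> X) with lag 1 on quantile-binned series: the extra
    information y_t provides about x_{t+1} beyond what x_t provides."""
    edges = lambda s: np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
    xs, ys = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    x1, x0, y0 = xs[1:], xs[:-1], ys[:-1]
    n = len(x1)
    c_xyz = Counter(zip(x1, x0, y0))   # counts of (x_{t+1}, x_t, y_t)
    c_xy = Counter(zip(x0, y0))
    c_xx = Counter(zip(x1, x0))
    c_x = Counter(x0)
    te = 0.0
    for (a, b, c), k in c_xyz.items():
        te += (k / n) * np.log((k / c_xy[(b, c)]) / (c_xx[(a, b)] / c_x[b]))
    return float(te)

rng = np.random.default_rng(0)
driver = rng.normal(size=5000)
follower = np.concatenate([[0.0], driver[:-1]]) + 0.5 * rng.normal(size=5000)
print(transfer_entropy(follower, driver))   # driver leads: clearly positive
print(transfer_entropy(driver, follower))   # reverse direction: near zero
```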

26 pages, 2111 KiB  
Article
Interactive System for Similarity-Based Inspection and Assessment of the Well-Being of mHealth Users
by Subash Prakash, Vishnu Unnikrishnan, Rüdiger Pryss, Robin Kraft, Johannes Schobel, Ronny Hannemann, Berthold Langguth, Winfried Schlee and Myra Spiliopoulou
Entropy 2021, 23(12), 1695; https://doi.org/10.3390/e23121695 - 17 Dec 2021
Cited by 1 | Viewed by 3748
Abstract
Recent digitization technologies empower mHealth users to conveniently record their Ecological Momentary Assessments (EMA) through web applications, smartphones, and wearable devices. These recordings can help clinicians understand how the users’ condition changes, but appropriate learning and visualization mechanisms are required for this purpose. We propose a web-based visual analytics tool, which processes clinical data as well as EMAs that were recorded through an mHealth application. The goals we pursue are (1) to predict the condition of the user in the near and the far future, while also identifying the clinical data that contribute most to EMA predictions, (2) to identify users with outlier EMAs, and (3) to show to what extent the EMAs of a user are in line with or diverge from those of similar users. We report our findings based on a pilot study on patient empowerment, involving tinnitus patients who recorded EMAs with the mHealth app TinnitusTips. To validate our method, we also derived synthetic data from the same pilot study. Based on this setting, results for different use cases are reported. Full article
(This article belongs to the Special Issue Challenges of Health Data Analytics)

20 pages, 387 KiB  
Article
Encoding Individual Source Sequences for the Wiretap Channel
by Neri Merhav
Entropy 2021, 23(12), 1694; https://doi.org/10.3390/e23121694 - 17 Dec 2021
Cited by 4 | Viewed by 2278
Abstract
We consider the problem of encoding a deterministic source sequence (i.e., individual sequence) for the degraded wiretap channel by means of an encoder and decoder that can both be implemented as finite-state machines. Our first main result is a necessary condition for both reliable and secure transmission in terms of the given source sequence, the bandwidth expansion factor, the secrecy capacity, the number of states of the encoder and the number of states of the decoder. Equivalently, this necessary condition can be presented as a converse bound (i.e., a lower bound) on the smallest achievable bandwidth expansion factor. The bound is asymptotically achievable by Lempel–Ziv compression followed by good channel coding for the wiretap channel. Given that the lower bound is saturated, we also derive a lower bound on the minimum necessary rate of purely random bits needed for local randomness at the encoder in order to meet the security constraint. This bound too is achieved by the same achievability scheme. Finally, we extend the main results to the case where the legitimate decoder has access to a side information sequence, which is another individual sequence that may be related to the source sequence, and a noisy version of the side information sequence leaks to the wiretapper. Full article
(This article belongs to the Special Issue Wireless Networks: Information Theoretic Perspectives Ⅱ)

12 pages, 641 KiB  
Article
Hypothetical Control of Fatal Quarrel Variability
by Bruce J. West
Entropy 2021, 23(12), 1693; https://doi.org/10.3390/e23121693 - 17 Dec 2021
Viewed by 2027
Abstract
Wars, terrorist attacks, as well as natural catastrophes typically result in a large number of casualties, whose distributions have been shown to belong to the class of Pareto’s inverse power laws (IPLs). The number of deaths resulting from terrorist attacks are herein fit by a double Pareto probability density function (PDF). We use the fractional probability calculus to frame our arguments and to parameterize a hypothetical control process to temper a Lévy process through a collective-induced potential. Thus, the PDF is shown to be a consequence of the complexity of the underlying social network. The analytic steady-state solution to the fractional Fokker-Planck equation (FFPE) is fit to a forty-year fatal quarrel (FQ) dataset. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks)

14 pages, 1409 KiB  
Article
MFF-Net: Deepfake Detection Network Based on Multi-Feature Fusion
by Lei Zhao, Mingcheng Zhang, Hongwei Ding and Xiaohui Cui
Entropy 2021, 23(12), 1692; https://doi.org/10.3390/e23121692 - 17 Dec 2021
Cited by 22 | Viewed by 4733
Abstract
Significant progress has been made in generating counterfeit images and videos. Forged videos generated by deepfaking have been widely spread and have caused severe societal impacts, which stir up public concern about automatic deepfake detection technology. Recently, many deepfake detection methods based on forged features have been proposed. Among the popular forged features, textural features are widely used. However, most of the current texture-based detection methods extract textures directly from RGB images, ignoring the mature spectral analysis methods. Therefore, this research proposes a deepfake detection network fusing RGB features and textural information extracted by neural networks and signal processing methods, namely, MFF-Net. Specifically, it consists of four key components: (1) a feature extraction module to further extract textural and frequency information using the Gabor convolution and residual attention blocks; (2) a texture enhancement module to zoom into the subtle textural features in shallow layers; (3) an attention module to force the classifier to focus on the forged part; (4) two instances of feature fusion to firstly fuse textural features from the shallow RGB branch and feature extraction module and then to fuse the textural features and semantic information. Moreover, we further introduce a new diversity loss to force the feature extraction module to learn features of different scales and directions. The experimental results show that MFF-Net has excellent generalization and has achieved state-of-the-art performance on various deepfake datasets. Full article
(This article belongs to the Topic Machine and Deep Learning)

11 pages, 463 KiB  
Article
Improving the Performance of Continuous-Variable Measurement-Device-Independent Quantum Key Distribution via a Noiseless Linear Amplifier
by Fan Jing, Weiqi Liu, Lingzhi Kong and Chen He
Entropy 2021, 23(12), 1691; https://doi.org/10.3390/e23121691 - 16 Dec 2021
Cited by 2 | Viewed by 2537
Abstract
In the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol, both Alice and Bob send quantum states to an untrusted third party, Charlie, for detection through the quantum channel. In this paper, we mainly study the performance of the CV-MDI-QKD system using a noiseless linear amplifier (NLA). The NLA is added to the output of the detector at Charlie’s side. The results show that the NLA can increase the communication distance and secret key rate of the CV-MDI-QKD protocol. Moreover, we find that a larger NLA gain yields a greater performance improvement, and we give the optimal gain under different conditions. Full article

18 pages, 3420 KiB  
Article
Perfect Density Models Cannot Guarantee Anomaly Detection
by Charline Le Lan and Laurent Dinh
Entropy 2021, 23(12), 1690; https://doi.org/10.3390/e23121690 - 16 Dec 2021
Cited by 19 | Viewed by 4365
Abstract
Thanks to the tractability of their likelihood, several deep generative models show promise for seemingly straightforward but important applications like anomaly detection, uncertainty estimation, and active learning. However, the likelihood values empirically attributed to anomalies conflict with the expectations these proposed applications suggest. In this paper, we take a closer look at the behavior of distribution densities through the lens of reparametrization and show that these quantities carry less meaningful information than previously thought, beyond estimation issues or the curse of dimensionality. We conclude that the use of these likelihoods for anomaly detection relies on strong and implicit hypotheses, and highlight the necessity of explicitly formulating these assumptions for reliable anomaly detection. Full article
(This article belongs to the Special Issue Probabilistic Methods for Deep Learning)
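The reparametrization argument can be reproduced in a few lines: an invertible change of variables can assign the same density to a typical point and an obvious outlier, so raw density values cannot by themselves certify anomalies. A minimal worked example (not the paper's code):

```python
from scipy.stats import norm

# Under X ~ N(0, 1), density flags x = 3 as far less likely than x = 0.
print(norm.pdf(0.0), norm.pdf(3.0))      # ~0.3989 vs ~0.0044

# Map both points through the invertible reparametrization y = Phi(x),
# where Phi is the Gaussian CDF. By the change-of-variables formula,
# p_Y(y) = p_X(x) / |Phi'(x)| = p_X(x) / p_X(x) = 1 for every point:
# in the new coordinates the outlier and the mode have identical density,
# although the underlying data are unchanged.
for x in (0.0, 3.0):
    y = norm.cdf(x)
    print(y, norm.pdf(x) / norm.pdf(x))  # density is 1.0 at both points
```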

15 pages, 627 KiB  
Article
A Smooth Path between the Classical Realm and the Quantum Realm
by John R. Klauder
Entropy 2021, 23(12), 1689; https://doi.org/10.3390/e23121689 - 16 Dec 2021
Viewed by 2295
Abstract
A simple example of classical physics may be defined as classical variables, p and q, and quantum physics may be defined as quantum operators, P and Q. The classical world of p&q, as it is currently understood, is truly disconnected from the quantum world, as it is currently understood. The process of quantization, for which there are several procedures, aims to promote a classical issue into a related quantum issue. In order to retain their physical connection, it becomes critical as to how to promote specific classical variables to associated specific quantum variables. This paper, which also serves as a review paper, leads the reader toward specific, but natural, procedures that promise to ensure that the classical and quantum choices are guaranteed a proper physical connection. Moreover, parallel procedures for fields, and even gravity, that connect classical and quantum physical regimes, will be introduced. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations II)

12 pages, 282 KiB  
Article
Inequalities for Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal Type f–Divergences
by Paweł A. Kluza
Entropy 2021, 23(12), 1688; https://doi.org/10.3390/e23121688 - 16 Dec 2021
Cited by 2 | Viewed by 2117
Abstract
In this paper, we introduce new divergences, called Jensen–Sharma–Mittal and Jeffreys–Sharma–Mittal divergences, in relation to convex functions. Some theorems, which give the lower and upper bounds for the two newly introduced divergences, are provided. The obtained results imply some new inequalities corresponding to known divergences. Examples, which show that these are generalizations of the Rényi, Tsallis, and Kullback–Leibler types of divergences, are provided in order to illustrate a few applications of the new divergences. Full article
(This article belongs to the Special Issue Distance in Information and Statistical Physics III)
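For orientation, a sketch of the Sharma–Mittal entropy and a Jensen-type divergence built from it; this is an illustrative construction in the spirit of the paper, not its exact definition, which is stated for general convex functions:

```python
import numpy as np

def sharma_mittal(p: np.ndarray, q: float, r: float) -> float:
    """Sharma–Mittal entropy H_{q,r}; recovers Rényi as r -> 1 and
    Tsallis as r -> q (q, r != 1)."""
    s = np.power(p[p > 0], q).sum()
    return float((s ** ((1 - r) / (1 - q)) - 1) / (1 - r))

def jensen_sharma_mittal(p: np.ndarray, q_dist: np.ndarray,
                         q: float, r: float) -> float:
    """Jensen-type divergence built from H_{q,r}: H(mixture) minus the
    mean of the entropies; it vanishes for identical distributions."""
    m = (p + q_dist) / 2
    return sharma_mittal(m, q, r) - (sharma_mittal(p, q, r)
                                     + sharma_mittal(q_dist, q, r)) / 2

p = np.array([0.7, 0.2, 0.1])
qd = np.array([0.1, 0.3, 0.6])
print(jensen_sharma_mittal(p, qd, q=0.8, r=0.5))   # > 0 for these inputs
print(jensen_sharma_mittal(p, p, q=0.8, r=0.5))    # 0 for identical inputs
```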
20 pages, 12115 KiB  
Article
Gravity Observations and Apparent Density Changes before the 2017 Jiuzhaigou Ms7.0 Earthquake and Their Precursory Significance
by Jinling Yang, Shi Chen, Bei Zhang, Jiancang Zhuang, Linhai Wang and Hongyan Lu
Entropy 2021, 23(12), 1687; https://doi.org/10.3390/e23121687 - 16 Dec 2021
Cited by 8 | Viewed by 2632
Abstract
An Ms7.0 earthquake struck Jiuzhaigou (China) on 8 August 2017. The epicenter was in the eastern margin of the Tibetan Plateau, an area covered by a dense time-varying gravity observation network. Data from seven repeated high-precision hybrid gravity surveys (2014–2017) allowed the microGal-level time-varying gravity signal to be obtained at a resolution better than 75 km using the modified Bayesian gravity adjustment method. The “equivalent source” model inversion method in spherical coordinates was adopted to obtain the near-crust apparent density variations before the earthquake. A major gravity change occurred from the southwest to the northeast of the eastern Tibetan Plateau approximately 2 years before the earthquake, and a substantial gravity gradient zone consistent with the tectonic trend gradually appeared within the focal area of the Jiuzhaigou earthquake during 2015–2016. Factors that might cause such regional gravitational changes (e.g., vertical crustal deformation and variations in near-surface water distributions) were studied. The results suggest that the gravity effects contributed by these known factors were insufficient to produce gravity changes as large as those observed, which might instead be related to the process of fluid material redistribution in the crust. Regional change of the gravity field has precursory significance for high-risk earthquake areas, and it could be used as a candidate precursor for annual medium-term earthquake prediction. Full article

17 pages, 1330 KiB  
Article
Multi-Level Fusion Temporal–Spatial Co-Attention for Video-Based Person Re-Identification
by Shengyu Pei and Xiaoping Fan
Entropy 2021, 23(12), 1686; https://doi.org/10.3390/e23121686 - 15 Dec 2021
Cited by 1 | Viewed by 2513
Abstract
A convolutional neural network can easily fall into local minima when data are insufficient, and the training needed is unstable. Many current methods address these problems by adding pedestrian attributes, pedestrian postures, and other auxiliary information, but this requires additional collection, which is time-consuming and laborious. Every video sequence frame has a different degree of similarity. In this paper, multi-level fusion temporal–spatial co-attention is adopted to improve person re-identification (reID). For a small dataset, the improved network can better prevent over-fitting and reduce the limitations of the dataset. Specifically, the concept of knowledge evolution is introduced into video-based person re-identification to improve the backbone residual neural network (ResNet). The global branch, local branch, and attention branch are used in parallel for feature extraction. Three high-level features are embedded in the metric learning network to improve the network’s generalization ability and the accuracy of video-based person re-identification. Simulation experiments are conducted on the small datasets PRID2011 and iLIDS-VID, on which the improved network can better prevent over-fitting. Experiments are also conducted on MARS and DukeMTMC-VideoReID, where the proposed method extracts more feature information and improves the network’s generalization ability. The results show that our method achieves better performance. The model achieves 90.15% Rank-1 and 81.91% mAP on MARS. Full article

21 pages, 3529 KiB  
Article
Study of Nonlinear Models of Oscillatory Systems by Applying an Intelligent Computational Technique
by Naveed Ahmad Khan, Fahad Sameer Alshammari, Carlos Andrés Tavera Romero and Muhammad Sulaiman
Entropy 2021, 23(12), 1685; https://doi.org/10.3390/e23121685 - 15 Dec 2021
Cited by 8 | Viewed by 2963
Abstract
In this paper, we have analyzed the mathematical models of various nonlinear oscillators arising in different fields of engineering. Further, approximate solutions for different variations in oscillators are studied by using feedforward neural networks (NNs) based on the backpropagated Levenberg–Marquardt algorithm (BLMA). A data set for different problem scenarios for the supervised learning of BLMA has been generated by the Runge–Kutta method of order 4 (RK-4) with the “NDSolve” package in Mathematica. The quality of the approximate solutions obtained by NN-BLMA is assessed through the training, testing, and validation of the reference data set. For each model, convergence analysis, error histograms, regression analysis, and curve fitting are considered to study the robustness and accuracy of the design scheme. Full article

24 pages, 1192 KiB  
Article
Categorical Nature of Major Factor Selection via Information Theoretic Measurements
by Ting-Li Chen, Elizabeth P. Chou and Hsieh Fushing
Entropy 2021, 23(12), 1684; https://doi.org/10.3390/e23121684 - 15 Dec 2021
Cited by 9 | Viewed by 2371
Abstract
Without assuming any functional or distributional structure, we select collections of major factors embedded within response-versus-covariate (Re-Co) dynamics via the selection criteria [C1: confirmable] and [C2: irreplaceable], which are based on information theoretic measurements. The two criteria are constructed based on the computing paradigm called Categorical Exploratory Data Analysis (CEDA) and linked to Wiener–Granger causality. All the information theoretical measurements, including conditional mutual information and entropy, are evaluated through the contingency table platform, which primarily rests on the categorical nature of all involved features of any data type, quantitative or qualitative. Our selection task identifies one chief collection, together with several secondary collections of major factors of various orders underlying the targeted Re-Co dynamics. Each selected collection is checked with algorithmically computed reliability against the finite sample phenomenon, as is each member major factor individually. The development of our selection protocol is illustrated in detail through two experimental examples: a simple one and a complex one. We then apply this protocol to two data sets pertaining to two somewhat related but distinct pitching dynamics of two pitch types: slider and fastball. In particular, we refer to a specific Major League Baseball (MLB) pitcher and consider data from multiple seasons. Full article
(This article belongs to the Special Issue Information Complexity in Structured Data)

19 pages, 2013 KiB  
Article
Thermodynamic Definitions of Temperature and Kappa and Introduction of the Entropy Defect
by George Livadiotis and David J. McComas
Entropy 2021, 23(12), 1683; https://doi.org/10.3390/e23121683 - 15 Dec 2021
Cited by 21 | Viewed by 3148
Abstract
This paper develops explicit and consistent definitions of the independent thermodynamic properties of temperature and the kappa index within the framework of nonextensive statistical mechanics and shows their connection with the formalism of kappa distributions. By defining the “entropy defect” in the composition of a system, we show how the nonextensive entropy of systems with correlations differs from the sum of the entropies of their constituents. A system is composed extensively when its elementary subsystems are independent, interacting with no correlations; this leads to an extensive system entropy, which is simply the sum of the subsystem entropies. In contrast, a system is composed nonextensively when its elementary subsystems are connected through long-range interactions that produce correlations. This leads to an entropy defect that quantifies the missing entropy, analogous to the mass defect that quantifies the mass (energy) associated with assembling subatomic particles. We develop thermodynamic definitions of kappa and temperature that connect with the corresponding kinetic definitions originating from kappa distributions. Finally, we show that the entropy of a system composed of a number of subsystems with correlations is determined using both discrete and continuous descriptions, and find: (i) the resulting entropic form expressed in terms of thermodynamic parameters; (ii) an optimal relationship between kappa and temperature; and (iii) the correlation coefficient to be inversely proportional to the temperature logarithm. Full article
(This article belongs to the Collection Foundations of Statistical Mechanics)
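A toy numeric reading of the entropy defect, assuming a Tsallis-style pseudo-additive composition with 1/kappa as the coupling (an illustrative form only; the paper derives its exact expression from thermodynamic definitions):

```python
def composed_entropy(s_a: float, s_b: float, kappa: float) -> float:
    """Pseudo-additive composition: the joint entropy falls short of
    s_a + s_b by an 'entropy defect' proportional to their product."""
    return s_a + s_b - (s_a * s_b) / kappa

# kappa -> infinity recovers extensive (simple additive) composition.
for kappa in (2.0, 10.0, 1e9):
    print(kappa, composed_entropy(1.0, 1.5, kappa))
```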

16 pages, 293 KiB  
Article
Minimum Query Set for Decision Tree Construction
by Wojciech Wieczorek, Jan Kozak, Łukasz Strąk and Arkadiusz Nowakowski
Entropy 2021, 23(12), 1682; https://doi.org/10.3390/e23121682 - 14 Dec 2021
Cited by 5 | Viewed by 2503
Abstract
A new two-stage method for the construction of a decision tree is developed. The first stage is based on the definition of a minimum query set, which is the smallest set of attribute-value pairs for which any two objects can be distinguished. To obtain this set, an appropriate linear programming model is proposed. The queries from this set are building blocks of the second stage in which we try to find an optimal decision tree using a genetic algorithm. In a series of experiments, we show that for some databases, our approach should be considered as an alternative method to classical ones (CART, C4.5) and other heuristic approaches in terms of classification quality. Full article
(This article belongs to the Special Issue Entropy in Real-World Datasets and Its Impact on Machine Learning)
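The first stage can be read as a set-cover program: choose the fewest attribute-value queries such that every pair of objects is distinguished by at least one chosen query. A small sketch using SciPy's MILP solver (the paper proposes its own linear programming model; this formulation is an assumption):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import Bounds, LinearConstraint, milp

def minimum_query_set(X):
    """Pick the fewest attribute-value queries so that every pair of
    objects is separated by at least one chosen query (set cover)."""
    queries = sorted({(a, v) for row in X for a, v in enumerate(row)})
    rows = []  # one constraint row per object pair
    for i, j in combinations(range(len(X)), 2):
        rows.append([1.0 if (X[i][a] == v) != (X[j][a] == v) else 0.0
                     for (a, v) in queries])
    res = milp(c=np.ones(len(queries)),            # minimize query count
               constraints=LinearConstraint(np.array(rows), lb=1),
               integrality=np.ones(len(queries)),  # integer decision variables
               bounds=Bounds(0, 1))                # binary: pick or skip
    return [q for q, z in zip(queries, res.x) if z > 0.5]

X = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
print(minimum_query_set(X))   # e.g. two queries suffice for this toy table
```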

9 pages, 1490 KiB  
Article
On the Role of the Excitation/Inhibition Balance of Homeostatic Artificial Neural Networks
by Maximilian Brütt and Christian Kaernbach
Entropy 2021, 23(12), 1681; https://doi.org/10.3390/e23121681 - 14 Dec 2021
Viewed by 2363
Abstract
Homeostatic models of artificial neural networks have been developed to explain the self-organization of a stable dynamical connectivity between the neurons of the net. These models are typically two-population models, with excitatory and inhibitory cells. In these models, connectivity is a means to regulate cell activity, and in consequence, intracellular calcium levels towards a desired target level. The excitation/inhibition (E/I) balance is usually set to 80:20, a value characteristic for cortical cell distributions. We study the behavior of these homeostatic models outside of the physiological range of the E/I balance, and we find a pronounced bifurcation at about the physiological value of this balance. Lower inhibition values lead to sparsely connected networks. At a certain threshold value, the neurons develop a reasonably connected network that can fulfill the homeostasis criteria in a stable way. Beyond the threshold, the behavior of the artificial neural network changes drastically, with failing homeostasis and in consequence with an exploding number of connections. While the exact value of the balance at the bifurcation point is subject to the parameters of the model, the existence of this bifurcation might explain the stability of a certain E/I balance across a wide range of biological neural networks. Assuming that this class of models describes the self-organization of biological network connectivity reasonably realistically, the omnipresent physiological balance might represent a case of self-organized criticality in order to obtain a good connectivity while allowing for a stable intracellular calcium homeostasis. Full article
(This article belongs to the Section Complexity)

22 pages, 2448 KiB  
Article
Soft Compression for Lossless Image Coding Based on Shape Recognition
by Gangtao Xin and Pingyi Fan
Entropy 2021, 23(12), 1680; https://doi.org/10.3390/e23121680 - 14 Dec 2021
Cited by 8 | Viewed by 2869
Abstract
Soft compression is a lossless image compression method that is committed to eliminating coding redundancy and spatial redundancy simultaneously. To do so, it adopts shapes to encode an image. In this paper, we propose a compressible indicator function with regard to images, which gives a threshold for the average number of bits required to represent a location and can be used to illustrate the working principle. We investigate and analyze soft compression for binary, gray, and multi-component images with specific algorithms and compressible indicator values. In terms of compression ratio, the soft compression algorithm outperforms the popular classical standards PNG and JPEG2000 in lossless image compression. It is expected that the bandwidth and storage space needed when transmitting and storing the same kinds of images (such as medical images) can be greatly reduced by applying soft compression. Full article

25 pages, 1724 KiB  
Article
Zero-Delay Joint Source Channel Coding for a Bivariate Gaussian Source over the Broadcast Channel with One-Bit ADC Front Ends
by Weijie Zhao and Xuechen Chen
Entropy 2021, 23(12), 1679; https://doi.org/10.3390/e23121679 - 14 Dec 2021
Cited by 2 | Viewed by 2233
Abstract
In this work, we consider the zero-delay transmission of bivariate Gaussian sources over a Gaussian broadcast channel with one-bit analog-to-digital converter (ADC) front ends. An outer bound on the conditional distortion region is derived. Focusing on the minimization of the average distortion, two types of methods are proposed to design nonparametric mappings. The first one is based on the joint optimization of the encoder and decoder with the use of an iterative algorithm. In the second method, we derive the necessary conditions to develop the optimal encoder numerically. Using these necessary conditions, an algorithm based on gradient descent search is designed. Subsequently, the characteristics of the optimized encoding mapping structure are discussed and, inspired by these, several parametric mappings are proposed. Numerical results show that the proposed parametric mappings outperform the uncoded scheme and previous parametric mappings for broadcast channels with infinite-resolution ADC front ends. The nonparametric mappings succeed in outperforming the parametric mappings. The causes of the differences between the performances of the two nonparametric mappings are analyzed. The average distortions of the parametric and nonparametric mappings proposed here are close to the bound for the cases with one-bit ADC front ends in low channel signal-to-noise ratio regions. Full article
(This article belongs to the Special Issue Information Theory for Communication Systems)

20 pages, 2671 KiB  
Article
RF Signal-Based UAV Detection and Mode Classification: A Joint Feature Engineering Generator and Multi-Channel Deep Neural Network Approach
by Shubo Yang, Yang Luo, Wang Miao, Changhao Ge, Wenjian Sun and Chunbo Luo
Entropy 2021, 23(12), 1678; https://doi.org/10.3390/e23121678 - 14 Dec 2021
Cited by 14 | Viewed by 4287
Abstract
With the proliferation of Unmanned Aerial Vehicles (UAVs) to provide diverse critical services, such as surveillance, disaster management, and medicine delivery, the accurate detection of these small devices and the efficient classification of their flight modes are of paramount importance to guarantee their safe operation in our sky. Among the existing approaches, Radio Frequency (RF) based methods are less affected by complex environmental factors. However, the similarities between UAV RF signals and the diversity of frequency components make accurate detection and classification a particularly difficult task. To bridge this gap, we propose a joint Feature Engineering Generator (FEG) and Multi-Channel Deep Neural Network (MC-DNN) approach. Specifically, in FEG, data truncation and normalization separate different frequency components, the moving average filter reduces the outliers in the RF signal, and the concatenation fully exploits the details of the dataset. In addition, the multi-channel input in MC-DNN separates multiple frequency components and reduces the interference between them. A novel dataset that contains ten categories of RF signals from three types of UAVs is used to verify the effectiveness of the approach. Experiments show that the proposed method outperforms the state-of-the-art UAV detection and classification approaches, achieving an accuracy of 98.4% and an F1 score of 98.3%. Full article
(This article belongs to the Topic Machine and Deep Learning)
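A hedged sketch of the FEG preprocessing steps named in the abstract (normalization, moving-average filtering, concatenation); the function name, window size, and two-band split are illustrative assumptions, and the truncation step is omitted:

```python
import numpy as np

def feg(low_band: np.ndarray, high_band: np.ndarray, window: int = 5) -> np.ndarray:
    """Per-band normalization, moving-average smoothing of outliers, and
    concatenation of the separated frequency components into one vector."""
    def prep(segment):
        segment = segment / (np.abs(segment).max() + 1e-12)  # normalization
        kernel = np.ones(window) / window
        return np.convolve(segment, kernel, mode="same")     # moving average
    return np.concatenate([prep(low_band), prep(high_band)])

rng = np.random.default_rng(0)
sample = feg(rng.normal(size=1024), rng.normal(size=1024))
print(sample.shape)   # (2048,): one input vector for the multi-channel DNN
```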

12 pages, 3677 KiB  
Article
The Downside of Heterogeneity: How Established Relations Counteract Systemic Adaptivity in Tasks Assignments
by Giona Casiraghi, Christian Zingg and Frank Schweitzer
Entropy 2021, 23(12), 1677; https://doi.org/10.3390/e23121677 - 14 Dec 2021
Cited by 6 | Viewed by 2231
Abstract
We study the lock-in effect in a network of task assignments. Agents have a heterogeneous fitness for solving tasks and can redistribute unfinished tasks to other agents. They learn over time to whom to reassign tasks and preferably choose agents with higher fitness. A lock-in occurs if reassignments can no longer adapt. Agents overwhelmed with tasks then fail, leading to failure cascades. We find that the probability of lock-ins and systemic failures increases with the heterogeneity in fitness values. To study this dependence, we use the Shannon entropy of the network of task assignments. A detailed discussion links our findings to the problem of resilience and to observations in social systems. Full article
(This article belongs to the Special Issue Entropy and Social Physics)
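The diagnostic quantity is easy to state: a minimal sketch of the Shannon entropy of a task-reassignment distribution, where low entropy signals concentration on a few agents (the illustrative counts below are not from the paper):

```python
import numpy as np

def assignment_entropy(counts: np.ndarray) -> float:
    """Shannon entropy of the task-reassignment distribution; low entropy
    means reassignments concentrate on a few agents (lock-in risk)."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Homogeneous redistribution vs. a hub-dominated, lock-in-prone one.
print(assignment_entropy(np.array([25, 25, 25, 25])))  # 2.0 bits (maximal)
print(assignment_entropy(np.array([90, 5, 3, 2])))     # ~0.6 bits
```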

10 pages, 223 KiB  
Perspective
Radical Complexity
by Jean-Philippe Bouchaud
Entropy 2021, 23(12), 1676; https://doi.org/10.3390/e23121676 - 14 Dec 2021
Cited by 5 | Viewed by 3467
Abstract
This is an informal and sketchy review of five topical, somewhat unrelated subjects in quantitative finance and econophysics: (i) models of price changes; (ii) linear correlations and random matrix theory; (iii) non-linear dependence and copulas; (iv) high-frequency trading and market stability; and finally—but perhaps most importantly—(v) “radical complexity”, which prompts a scenario-based approach to macroeconomics relying heavily on Agent-Based Models. Some open questions and future research directions are outlined. Full article
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)