Information Theoretic Signal Processing and Learning

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Signal and Data Analysis".

Deadline for manuscript submissions: closed (31 January 2021) | Viewed by 16801

Special Issue Editors


Prof. Dr. Jerry D. Gibson
Guest Editor
Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106-9560, USA
Interests: information theoretic techniques for signal processing and machine learning; lossy source coding; rate distortion theory

Prof. Khalid Sayood
Guest Editor
Department of Electrical & Computer Engineering, University of Nebraska-Lincoln, 209N Scott Engineering Center, P.O. Box 880511, Lincoln, NE 68588-0511, USA
Interests: data compression; joint source-channel coding; bioinformatics; metagenomics; neuroscience of cognition and memory; biological signal processing

Special Issue Information

Dear Colleagues,

Shannon developed the concepts of entropy, entropy rate, and mutual information to characterize uncertainty and to establish fundamental limits on reliable communication over a channel and on the minimum rate required to represent a source to a desired fidelity. Variations on these quantities have been employed over the decades to derive algorithms for a wide range of signal processing applications, to analyze the performance of different approaches to signal processing, and to develop signal models. In recent years, alternative forms of entropy/entropy rate and mutual information have found utility in the analysis, design, and understanding of agent/machine/deep learning methods and for performance evaluation. Concepts that build on these quantities, such as relevant/maximum/meaningful/directed/Granger/predictive/discriminative/transient/negative information and Kullback–Leibler (KL) divergence/relative entropy, information gain, Kullback causality, redundancy, intrinsic redundancy, and stochastic/model/statistical/algorithmic complexity, have proliferated and been applied to problems in prediction, estimation, feature selection, signal representations, information extraction, signal recognition, and model building. For this Special Issue on Information-Theoretic Signal Processing and Learning, we solicit papers that explore these quantities in more depth for signal processing and learning, examine new applications of information-theoretic methods to open problems in signal processing and learning, and define new information-theoretic quantities and interpretations for such applications.

Prof. Dr. Jerry D. Gibson
Prof. Khalid Sayood
Guest Editors
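
As a point of reference for the quantities named above, the sketch below is an illustrative Python example (not part of the call itself) that computes the discrete Shannon entropy, Kullback–Leibler divergence, and mutual information for small example distributions; the distributions are arbitrary placeholders.

```python
# Illustrative only: discrete Shannon quantities for small example distributions.
# The distributions below are arbitrary placeholders.
import numpy as np

def entropy(p):
    """Shannon entropy H(X) in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p||q) in bits (q must be > 0 wherever p > 0)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def mutual_information(pxy):
    """Mutual information I(X;Y) in bits of a joint distribution given as a 2-D array."""
    pxy = np.asarray(pxy, dtype=float)
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    return kl_divergence(pxy.ravel(), np.outer(px, py).ravel())

if __name__ == "__main__":
    p, q = [0.5, 0.25, 0.25], [1 / 3, 1 / 3, 1 / 3]
    joint = [[0.4, 0.1], [0.1, 0.4]]
    print(entropy(p), kl_divergence(p, q), mutual_information(joint))
```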

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • entropy
  • entropy rate
  • mutual information
  • Kullback–Leibler divergence
  • directed information
  • Kullback causality
  • redundancy
  • model complexity
  • information extraction
  • model building

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (6 papers)


Research

15 pages, 619 KiB  
Article
Use of Average Mutual Information and Derived Measures to Find Coding Regions
by Garin Newcomb and Khalid Sayood
Entropy 2021, 23(10), 1324; https://doi.org/10.3390/e23101324 - 11 Oct 2021
Viewed by 1979
Abstract
An important step in the annotation of genomes is the identification of regions in the genome which code for proteins. A tool used by most annotation approaches is a set of signals extracted from genomic regions that can indicate whether a region is a protein coding region. Motivated by the fact that these regions are information-bearing structures, we propose signals based on measures derived from the average mutual information for use in this task. We show that these signals can be used to identify coding and noncoding sequences with high accuracy. We also show that these signals are robust across species, phyla, and kingdoms and can, therefore, be used in species-agnostic genome annotation algorithms for identifying protein coding regions. These in turn could be used for gene identification.
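
To make the central quantity concrete, here is a small hypothetical Python sketch of an average mutual information profile over base lags (it is not the authors' exact derived measures); coding DNA typically shows elevated values at lags that are multiples of three.

```python
# Hypothetical sketch: average mutual information (AMI) between bases separated by a
# lag k, estimated from empirical counts. Not the paper's exact derived measures.
import numpy as np

def average_mutual_information(seq, lag):
    """AMI in bits between symbols seq[i] and seq[i + lag]."""
    alphabet = sorted(set(seq))
    idx = {a: i for i, a in enumerate(alphabet)}
    joint = np.zeros((len(alphabet), len(alphabet)))
    for a, b in zip(seq, seq[lag:]):        # count co-occurrences at the given lag
        joint[idx[a], idx[b]] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / np.outer(px, py)[mask])))

if __name__ == "__main__":
    # Toy sequence: a short coding-like repeat; real use would scan genomic windows.
    seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 20
    print({k: round(average_mutual_information(seq, k), 4) for k in range(1, 7)})
```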

17 pages, 1498 KiB  
Article
AC2: An Efficient Protein Sequence Compression Tool Using Artificial Neural Networks and Cache-Hash Models
by Milton Silva, Diogo Pratas and Armando J. Pinho
Entropy 2021, 23(5), 530; https://doi.org/10.3390/e23050530 - 26 Apr 2021
Cited by 5 | Viewed by 3569
Abstract
Recently, the scientific community has witnessed a substantial increase in the generation of protein sequence data, triggering emergent challenges of increasing importance, namely efficient storage and improved data analysis. For both applications, data compression is a straightforward solution. However, in the literature, the number of specific protein sequence compressors is relatively low. Moreover, these specialized compressors only marginally improve the compression ratio over the best general-purpose compressors. In this paper, we present AC2, a new lossless data compressor for protein (or amino acid) sequences. AC2 uses a neural network to mix experts with a stacked generalization approach and individual cache-hash memory models for the highest context orders. Compared to the previous compressor (AC), we show gains of 2–9% and 6–7% in reference-free and reference-based modes, respectively. These gains come at the cost of approximately three times slower computation. AC2 also improves memory usage relative to AC, with requirements about seven times lower, independent of the input sequence size. As an analysis application, we use AC2 to measure the similarity between each SARS-CoV-2 protein sequence and each viral protein sequence from the whole UniProt database. The results consistently show higher similarity to the pangolin coronavirus, followed by the bat and human coronaviruses, contributing critical results to a currently controversial subject. AC2 is available for free download under the GPLv3 license.
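
The following is a toy Python sketch of the general mixing-of-experts idea behind such compressors (assumed for illustration; it is not AC2's actual network or cache-hash models): next-symbol distributions from several context models are blended with softmax weights, and the mixed probability determines the coding cost in bits.

```python
# Toy sketch of mixing experts (assumed for illustration; not AC2's actual network
# or cache-hash models): blend next-symbol distributions from several context models
# with softmax weights; the mixed probability determines the coding cost in bits.
import numpy as np

def mix_experts(expert_probs, scores):
    """expert_probs: (n_experts, alphabet) rows summing to 1; scores: raw mixing weights."""
    w = np.exp(scores - np.max(scores))
    w /= w.sum()                              # softmax mixing weights
    mixed = w @ np.asarray(expert_probs, dtype=float)
    return mixed / mixed.sum()

if __name__ == "__main__":
    experts = [[0.7, 0.2, 0.1],               # e.g., a high-order context model
               [0.3, 0.4, 0.3]]               # e.g., a low-order fallback model
    p = mix_experts(experts, np.array([1.0, 0.2]))
    symbol = 0
    print(p, f"{-np.log2(p[symbol]):.3f} bits to code symbol {symbol}")
```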

17 pages, 2030 KiB  
Article
Information Theoretic Metagenome Assembly Allows the Discovery of Disease Biomarkers in Human Microbiome
by O. Ufuk Nalbantoglu
Entropy 2021, 23(2), 187; https://doi.org/10.3390/e23020187 - 2 Feb 2021
Cited by 1 | Viewed by 2248
Abstract
Quantitative metagenomics is an important field that has delivered successful microbiome biomarkers associated with host phenotypes. The current convention mainly depends on unsupervised assembly of metagenomic contigs, with the possibility of leaving interesting genetic material unassembled. Additionally, biomarkers are commonly defined on the differential relative abundance of compositional or functional units. Accumulating evidence indicates that microbial genetic variation is as important as differential abundance, implying the need for novel methods that account for genetic variation in metagenomics studies. We propose an information theoretic metagenome assembly algorithm that discovers genomic fragments with maximal self-information, defined by the empirical distributions of nucleotides across the phenotypes and quantified with the help of statistical tests. Our algorithm infers fragments populating the most informative genetic variants in a single contig, named supervariant fragments. Experiments on simulated metagenomes, as well as on a colorectal cancer and an atherosclerotic cardiovascular disease dataset, consistently discovered sequences strongly associated with the disease phenotypes. Moreover, the discriminatory power of these putative biomarkers was mainly attributed to genetic variation rather than relative abundance. Our results suggest that metagenomics methods which consider microbiome population genetics may be useful in discovering disease biomarkers, with great potential for translation to molecular diagnostics and biotherapeutics applications.
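
As a minimal illustration of the underlying notion (not the paper's full supervised algorithm or its statistical tests), the hypothetical sketch below scores a fragment by its self-information under an empirical background base distribution.

```python
# Minimal illustration of self-information (not the paper's full supervised algorithm):
# score a fragment by -log2 of its probability under an i.i.d. empirical background
# base distribution; rarer fragments carry more information.
import math
from collections import Counter

def self_information(fragment, background):
    """Self-information of a fragment, in bits, under an i.i.d. background model."""
    return -sum(math.log2(background[base]) for base in fragment)

if __name__ == "__main__":
    reads = ["ACGTACGTGG", "ACGGGGTTAC", "TTACGGGACG"]     # toy sequencing reads
    counts = Counter("".join(reads))
    total = sum(counts.values())
    background = {base: counts[base] / total for base in counts}
    for frag in ("GGGG", "ACGT"):
        print(frag, round(self_information(frag, background), 3), "bits")
```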

16 pages, 3115 KiB  
Article
A Method of Constructing Measurement Matrix for Compressed Sensing by Chebyshev Chaotic Sequence
by Renjie Yi, Chen Cui, Yingjie Miao and Biao Wu
Entropy 2020, 22(10), 1085; https://doi.org/10.3390/e22101085 - 26 Sep 2020
Cited by 7 | Viewed by 2495
Abstract
In this paper, the problem of constructing the measurement matrix in compressed sensing is addressed. In compressed sensing, constructing a measurement matrix with good performance and easy hardware implementation is of interest. It has recently been shown that measurement matrices constructed from Logistic or Tent chaotic sequences satisfy the restricted isometry property (RIP) with a certain probability and are easy to implement in physical electronic circuits. However, a large sample distance, which implies large resource consumption, is required to obtain uncorrelated samples from these sequences in the construction. To solve this problem, we propose a method of constructing the measurement matrix from the Chebyshev chaotic sequence. The method effectively reduces the sample distance, and the proposed measurement matrix is proved to satisfy the RIP with high probability under the assumption that the sampled elements are statistically independent. Simulation results show that the proposed measurement matrix has reconstruction performance comparable to that of existing chaotic matrices for compressed sensing.
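
A hypothetical sketch of this style of construction is given below (the scaling and parameters are assumptions, not necessarily the authors' exact choices): iterate the Chebyshev map x_{n+1} = cos(k·arccos(x_n)), downsample by a sample distance, and reshape the kept samples into an M×N measurement matrix.

```python
# Hypothetical construction sketch (scaling and parameters are assumptions): iterate the
# Chebyshev map x_{n+1} = cos(k * arccos(x_n)) on [-1, 1], keep every d-th sample, and
# reshape the kept samples into an M x N measurement matrix.
import numpy as np

def chebyshev_measurement_matrix(m, n, k=4, x0=0.3, sample_distance=2):
    """M x N measurement matrix from a downsampled Chebyshev chaotic sequence."""
    length = m * n * sample_distance
    x = np.empty(length)
    x[0] = x0
    for i in range(1, length):
        x[i] = np.cos(k * np.arccos(x[i - 1]))
    samples = x[::sample_distance][: m * n]
    return np.sqrt(2.0 / m) * samples.reshape(m, n)   # scaling often used in RIP-style analyses

if __name__ == "__main__":
    Phi = chebyshev_measurement_matrix(32, 128)
    print(Phi.shape, round(float(Phi.mean()), 3), round(float(Phi.std()), 3))
```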

16 pages, 1081 KiB  
Article
Newtonian-Type Adaptive Filtering Based on the Maximum Correntropy Criterion
by Pengcheng Yue, Hua Qu, Jihong Zhao and Meng Wang
Entropy 2020, 22(9), 922; https://doi.org/10.3390/e22090922 - 22 Aug 2020
Cited by 5 | Viewed by 2712
Abstract
This paper provides a novel Newtonian-type optimization method for robust adaptive filtering inspired by information theoretic learning. With the traditional minimum mean square error (MMSE) criterion replaced by criteria such as the maximum correntropy criterion (MCC) or generalized maximum correntropy criterion (GMCC), adaptive filters place less emphasis on outlier data and thus become more robust against impulsive noise. The optimization methods adopted in current MCC-based LMS-type and RLS-type adaptive filters are the gradient descent method and fixed-point iteration, respectively. In this paper, a Newtonian-type method is introduced as a novel way to extend MCC-based adaptive filtering and provide a fast convergence rate. Theoretical analysis of the steady-state performance of the algorithm is carried out and verified by simulations. The experimental results show that, compared to the conventional MCC adaptive filter, the MCC-based Newtonian-type method converges faster while still maintaining good steady-state performance under impulsive noise. The practicability of the algorithm is also verified in an acoustic echo cancellation experiment.
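
For intuition about why correntropy-based filters resist impulsive noise, here is a minimal MCC-LMS sketch (gradient ascent on the correntropy cost); the paper's contribution is a Newtonian-type update, which is not reproduced here, and the signal model below is an assumed toy example.

```python
# Minimal MCC-LMS sketch (gradient ascent on the correntropy cost), assumed toy signal
# model; the paper's Newtonian-type update is not reproduced here. The Gaussian kernel
# down-weights large errors, which is what makes MCC robust to impulsive noise.
import numpy as np

def mcc_lms(x, d, order=4, mu=0.05, sigma=1.0):
    """Adaptive FIR filter driven by the maximum correntropy criterion."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # regressor [x[n], x[n-1], ..., x[n-order+1]]
        e = d[n] - w @ u
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u   # kernel-weighted LMS step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    true_w = np.array([0.5, -0.3, 0.2, 0.1])
    d = np.convolve(x, true_w)[: len(x)] + 0.01 * rng.standard_cauchy(len(x))  # impulsive noise
    print(np.round(mcc_lms(x, d), 3))         # should approach true_w
```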

16 pages, 468 KiB  
Article
Mutual Information Gain and Linear/Nonlinear Redundancy for Agent Learning, Sequence Analysis, and Modeling
by Jerry D. Gibson
Entropy 2020, 22(6), 608; https://doi.org/10.3390/e22060608 - 30 May 2020
Cited by 6 | Viewed by 2736
Abstract
In many applications, intelligent agents need to identify any structure or apparent randomness in an environment and respond appropriately. We use the relative entropy to separate and quantify the presence of both linear and nonlinear redundancy in a sequence, and we introduce the new quantities of total mutual information gain and incremental mutual information gain. We illustrate how these new quantities can be used to analyze and characterize the structure and apparent randomness of purely autoregressive sequences and of speech signals with long- and short-term linear redundancies. The mutual information gain is shown to be an important new tool for capturing and quantifying learning for sequence modeling and analysis.
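
One way to see the flavor of these quantities on a Gaussian autoregressive sequence is sketched below (an assumed illustration, not the paper's exact definitions): for Gaussian processes, the mutual information between a sample and its k most recent past samples is 0.5·log2 of the ratio of the signal variance to the order-k linear prediction error variance, so the increase from order k−1 to order k behaves like an incremental information gain.

```python
# Assumed illustration on a Gaussian AR(1) sequence (not the paper's exact definitions):
# for Gaussian processes, I(X_n; past k samples) = 0.5 * log2(var(X) / sigma_k^2), where
# sigma_k^2 is the order-k linear prediction error variance, so successive differences
# behave like incremental mutual information gains.
import numpy as np

def prediction_error_variance(x, order):
    """Residual variance of the best linear predictor of x[n] from its `order` past samples."""
    if order == 0:
        return float(np.var(x))
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.var(y - X @ coef))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, a = 20000, 0.9
    x = np.zeros(n)
    for i in range(1, n):                     # AR(1): x[n] = a * x[n-1] + w[n]
        x[i] = a * x[i - 1] + rng.standard_normal()
    gains = [0.5 * np.log2(prediction_error_variance(x, k - 1) /
                           prediction_error_variance(x, k)) for k in range(1, 5)]
    print(np.round(gains, 3))                 # gain concentrates at order 1 for AR(1)
```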
