Review

Entropy, or Information, Unifies Ecology and Evolution and Beyond

by
William Bruce Sherwin
Evolution & Ecology Research Center, School of Biological Earth and Environmental Science, UNSW Sydney, Sydney 2052, Australia
Entropy 2018, 20(10), 727; https://doi.org/10.3390/e20100727
Submission received: 6 July 2018 / Revised: 18 August 2018 / Accepted: 11 September 2018 / Published: 21 September 2018
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Abstract
This article discusses how entropy/information methods are well-suited to analyzing and forecasting the four processes of innovation, transmission, movement, and adaptation, which are the common basis to ecology and evolution. Macroecologists study assemblages of differing species, whereas micro-evolutionary biologists study variants of heritable information within species, such as DNA and epigenetic modifications. These two different modes of variation are both driven by the same four basic processes, but approaches to these processes sometimes differ considerably. For example, macroecology often documents patterns without modeling underlying processes, with some notable exceptions. On the other hand, evolutionary biologists have a long history of deriving and testing mathematical genetic forecasts, previously focusing on entropies such as heterozygosity. Macroecology calls this Gini–Simpson, and has borrowed the genetic predictions, but sometimes this measure has shortcomings. Therefore it is important to note that predictive equations have now been derived for molecular diversity based on Shannon entropy and mutual information. As a result, we can now forecast all major types of entropy/information, creating a general predictive approach for the four basic processes in ecology and evolution. Additionally, the use of these methods will allow seamless integration with other studies such as the physical environment, and may even extend to assisting with evolutionary algorithms.

1. A Shared Basis for Ecology and Evolution

Ecology and evolution are often studied separately, with researchers focusing only on a single aspect of information or entropy: molecular variation, species variation, etc. All of these aspects of information can be seen in a larger, unified framework with nested levels such as molecules, individuals, populations, species, and ecosystems. Each of these information types manifests four common features [1]:
  • Innovation (e.g., mutation, recombination, divergence and speciation, behavioral innovation)
  • Transmission and replication (e.g., inheritance)
  • Movement (e.g., migration, pollen dispersal, etc.)
  • Adaptation (e.g., selection, behavioral avoidance of harm)
Often the same level of organization will incorporate several competing or cooperating methods of innovation, transmission, movement, and adaptation.
There are many types of information, but for simplicity, this article will focus largely on two analogous types of information: alternative species in ecological assemblages and DNA alternatives in one species (‘alleles’). Additionally, within those two types, discussion will mostly be restricted to binary cases, such as presence or absence of two alternative species in an assemblage, or presence or absence of two alternative ‘nucleotides’ in DNA e.g.,
…ACAGCCT…
vs.
…ACTGCCT…
These alternatives or ‘alleles’ can be characterized by the probabilities P(T) and P(A) in the biological population (usually at any position in the DNA, called a ‘SNP’ or single nucleotide polymorphism, only two of the possible four nucleotides are found). These molecular variants are exactly analogous to alternative species in ecological assemblages, and in most cases, measures or forecasts made in one of these areas have been, or could be, transferred directly to the other.
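For a biallelic SNP characterized by P(T) and P(A), the Shannon entropy of the site follows directly from those two probabilities. A minimal sketch (the function name is illustrative, not from the source):

```python
from math import log2

def snp_entropy(p_t: float) -> float:
    """Shannon entropy (bits) of a biallelic SNP with P(T) = p_t, P(A) = 1 - p_t."""
    p_a = 1.0 - p_t
    # Zero-probability alleles contribute nothing (0 log 0 is taken as 0)
    return -sum(p * log2(p) for p in (p_t, p_a) if p > 0.0)

# Entropy is maximal (1 bit) when both alleles are equally frequent,
# and falls toward 0 as one allele approaches fixation.
print(snp_entropy(0.5))
print(snp_entropy(0.9))
```
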
This article will discuss how entropy/information methods are well suited to analyzing and forecasting the four common processes of innovation, transmission, movement, and adaptation in ecology and evolution. Despite the focus on a few variant types, this will apply broadly to variants of all types: DNA, epigenetics, behavior, species, the physical environment, etc., as well as their interactions [2,3].

2. Background: Measuring Biological Entropy, Information and Diversity

Measurement of ecological or evolutionary variants uses various entropy or information measures (Table 1a). The measures are all part of a ‘q-profile’ derived from a general power-sum of variant proportions, with order q ≥ 0 [4,5], composed of Dq measures on a common scale of the ‘effective’ number of variants: the number of equally frequent variants that would give the same entropy (Hq) as the typically unequal array of variants in the sampled system. Use of the Dq profile has been recommended because each q value emphasizes different aspects of the diversity [6,7]; for example, higher q values emphasize the more common variants [5]. These different sensitivities mean that for different biological cases, different parts of a Dq profile might provide the best discrimination (Figure 1). A comparison of q = 0, 1, 2 shows similar results in 85% of studies, and where one measure is better, there is a clear explanation for this [5]. For example, in a study of invasive populations of the mosquito Aedes j. japonicus, q = 1 was more sensitive than q = 2 for tracing invasion patterns [8], presumably because q = 2 emphasizes common variants, rather than the rare variants that tend to be lost during periods of small population size at successive newly invaded sites.
Of course, integration of ecology and evolution would be easiest if they used the same measures of information/entropy. Table 1 shows that they do use the same measures, but with different emphasis. This article proposes that although the entire q-profile is useful, q = 1, based on Shannon information/entropy, is uniquely informative, combining many important properties for measurement of diversity within and between groups [28,29]. This combination of properties has led to q = 1 becoming very frequently used in ecology (Table 1a). Other measures have some of these properties, but not all [5]. First, sampling bias in Shannon estimates can be adequately corrected by modern methods, accounting for the possibility of missing rare types, whereas the same problem for q = 0 is not completely correctable [9,10] and leads to the wide confidence limits for q = 0 seen in Figure 1. Second, within-group D1 increases linearly with pooling of equally diverse, completely distinct groups, which does not happen with some other measures.
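The q-profile described above can be sketched numerically. Below is a toy implementation of the effective-number (Hill number) Dq, with q = 1 handled as the limit case, the exponential of Shannon entropy (names and the example proportions are illustrative only):

```python
import numpy as np

def hill_number(p, q):
    """Effective number of variants Dq for proportions p at order q.
    q=0: richness; q=1: exp(Shannon entropy); q=2: inverse Simpson."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1:  # limit case: exponential of Shannon entropy
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

# A skewed array of variants: higher q weights the common variants more
# heavily, so the effective number shrinks as q increases.
p = [0.7, 0.2, 0.05, 0.05]
for q in (0, 1, 2):
    print(q, round(hill_number(p, q), 3))
```

For equally frequent variants, Dq equals the plain count of variants at every q, which is what makes the ‘effective number’ scale comparable across orders.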
Third, we must deal not only with the entropy or diversity within a single system (α), but also entropy or diversity due to divergence or differentiation between systems (β). Extensive equations for α- and β-diversity with all levels of q are in the supplements of a past review [5]. For β-differentiation between localities, q = 1 measures show strict monotonicity, always increasing with increasing differentiation between groups of molecules or species, whereas q = 2 measures do not always do this [30]. In particular, there is no q = 2 β-measure that creates complete independence between α (within-group), β (between-group), and γ (total) diversity [5]. This contrasts with q = 1 measures that are based on Shannon’s explicitly hierarchical theory, and thus always ensure complete independence of α and β. There are various fixes for the problems of q = 2 β-measures [21,22], but it is better to realize that such measures have properties that, while useful, do not always reflect differentiation between groups [31].

3. Forecasting Biological Entropy, Information, and Diversity, Based on the Four Processes Common to Ecology and Evolution

Predictions under different hypothetical biological scenarios can be tested by measurement—the key to scientific advancement. Thus we need to make forecasts of the expected value under specified histories of the four processes: innovation, transmission, movement, and adaptation. Testing for agreement with, or departure from, those predictions allows us to infer the likely underlying processes. In this article, there is emphasis on forecasts based on algebraic modeling of the underlying processes, rather than on curve-fitting, because of the understanding of the system that can be achieved from algebraic expressions. The predictive theory for entropy/information at q = 1 is already sufficiently complete to be used, together with predictions for other values of q, to unite analysis of all aspects of ecology and evolution.
Table 1b shows that there is a huge body of predictive theory for q = 2 measures in evolution (some also transferred to ecology), but that as late as 2006, we still had little predictive power for q = 1 (Shannon), despite some early attempts [5,32]. Since then, q = 1 and q = 2 predictions have been derived for a wide range of situations involving the four basic information processes—Innovation, Transmission, Movement, and Adaptation (Table 2). In some cases q = 1 methods outperform those based on other values of q, mainly because the q = 1 methods are completely additive, and robust to a very wide range of population sizes, dispersal rates, and mutation modes ([33] and supplement 2 of [5]). Nevertheless, it can also be seen that there are still areas where further research is needed for q = 1, labelled ‘Not Yet’ in Table 2.
Table 2 shows that innovation of new variants can take various forms, which can be dealt with by entropic methods just as well as by other methods. For variation within species, DNA mutation can take at least three different forms (Table 2), each yielding its own mathematical expressions; often all forms might occur on a single DNA molecule [19,20]. SNP mutation is focused on a single ‘nucleotide’ in the DNA sequence, showing forward (and possibly back) mutation to create variant ‘alleles’. SNP innovation is extremely rare (~10⁻⁹ per generation), but because most genomes contain billions of nucleotides, and many species have persisted for a huge number of generations, SNPs have become ubiquitous in natural populations. The IAM is an innovation mechanism at the opposite extreme, used when we consider a long DNA sequence such as a thousand-nucleotide protein-coding region. In this case, mutations usually create a sequence that has never occurred before, so this is called the ‘infinite alleles model’ (IAM), which has its own mathematical formulation. Finally, the SMM is a ‘stepwise’ mutation model, in which new variants progress through adjacent, functionally similar states, such as proteins mutating by a single unit of net surface charge, yielding variants of 2− ↔ 1− ↔ 0 ↔ 1+ ↔ 2+, etc. Nowadays this model is also used to approximate innovation in repetitive DNA regions (e.g., ‘microsatellite’ fingerprint DNA). Each model—SNP, IAM, and SMM—is only an approximation, and there are other innovation processes such as insertion or deletion of nucleotides, rearrangements, and epigenetic modification.
What about ecological innovation? Of course, this is ultimately highly reliant on genetic innovation; however, for modeling at the ecological level, some of the mutation models (SNP/IAM) have also been employed as approximations for the production of novel species [18]. There is a wide range of speciation types [40], so a wide range of models are needed. For example, speciation that occurs by the alteration of a single character, such as the ‘magic traits’ discussed in the speciation literature [41], could be modeled by SNP, or by SMM if the novel species have an ordered relationship (e.g., gradual addition of more gill-rakers in a series of fish speciation events). On the other hand, the IAM might be more appropriate for speciation occurring via relatively rapid (but not instantaneous) multiple changes, a factor that has recently been added to the “neutral” theory of biodiversity [42]. These multiple changes can occur completely simultaneously by processes such as gross chromosomal alteration affecting many characters at once, due to entire genes being duplicated, deleted, or rearranged into a novel linear order, which affects their expression (called ‘position effect’). On the other hand, the multiple changes might accumulate during a period when two parts of a single species’ range are separated by a barrier that appears then later disappears, such as a sea-level rise inundating the center of the species’ range for 10,000 years, then receding. This can be modeled as a continuous process [42], or might be modeled as IAM where each new species is regarded as a totally novel variant, based on myriad genetic differences, occurring during the relatively short period of separation. Whatever the innovation mechanism assumed, Table 1b and Table 2 show that there are entropic forecasts available.
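The three mutation models can be caricatured as one-step update rules on a single variant. A toy sketch, assuming the simplest possible representation of each model (all names and encodings are illustrative, not from the source):

```python
import random

def snp_mutate(allele: str) -> str:
    """SNP model: flip between the two alleles at a biallelic site."""
    return "A" if allele == "T" else "T"

_next_label = 0
def iam_mutate(_allele):
    """Infinite alleles model: every mutation yields a never-before-seen variant,
    represented here by a fresh integer label."""
    global _next_label
    _next_label += 1
    return _next_label

def smm_mutate(state: int) -> int:
    """Stepwise model: move one unit up or down an ordered scale
    (e.g., net protein surface charge, or microsatellite repeat count)."""
    return state + random.choice((-1, 1))
```

Under the IAM every call returns a novel variant regardless of the parent, which is why long sequences are modeled this way; under the SMM, repeated mutation performs a random walk along the ordered states.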
The other major method of innovation is through the breakage of associations between different variants, such as an association of high dispersal ability with low reproduction. At the molecular level, this is called ‘recombination’: the exchange of information by physical breakage and reunion of the DNA string of information, to unite SNP variants that were previously on separate DNA molecules (or ‘haplotypes’), such as,
…ACAGCCT…     …ACAGCGT…
and    →    and
…ACTGCGT…    …ACTGCCT…
Of course, innovation by recombination is limited by the availability of variants that originate from mutation; however, given that many such variants are available, recombination produces new combinations at a vastly faster rate than the original mutation, giving recombination huge importance in evolutionary biology. A typical pair of SNP locations experiences 50% recombination per generation, in diploid individuals such as most higher organisms. Entropic methods are at the core of many modern methods to assess recombination [5], or rather the effect of low recombination rates to create ‘linkage’ into ‘multi-SNP haplotype’ molecules, which may have great adaptive significance [17]. The ecological parallel to linkage is correlation of phenotypic traits (actually often due to genetic linkage), and innovation occurs when these correlations occur, or break down, due to chance or the adaptive processes discussed below.
Transmission of information is also extremely well-analyzed by entropic methods. The second row of Table 2 shows the modeling of stochastic transmission of several types of variant in finite populations, whose equations have also been applied to transmission of members of different species in ecological assemblages [18]. Simple replication modes, as seen with cells in bodies, or individuals in an ecosystem, have an exponential rate equation (or a ‘logistic’ equation when restricted by resources etc.) which can be expressed in entropic terms [43]. Other replication modes are discussed in the next section.
Movement of variants (e.g., alternative alleles or members of alternative species) can also be assessed very well using entropic analysis [5]. Briefly, for any pair of locations, lower dispersal, smaller population size, or greater elapsed time since separation, will increase divergence between the arrays of variants (types of alleles, species, etc.). This divergence can be characterized as mutual information (I, q = 1) between variant identity and location of origin [5]. In other words, if there is less sharing of variants between locations, then knowing the type of an individual (i.e., what species it is or what allele it possesses) gives better information about that individual’s geographic origin. There is an inverse relationship between mutual information and effective dispersal rate, over a very wide range of population sizes and dispersal rates [5]. For genes, the q = 1 equations apply to a wide assortment of types of genetic variant, seen in the second row of Table 2, and can be used to estimate dispersal from genetic data, a task at which they can outperform other methods [5].
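The divergence described here is the mutual information (I, q = 1) between variant identity and location, which can be estimated from a joint table of counts. A minimal sketch in nats, assuming a simple plug-in estimator with no correction for unsampled rare types:

```python
import numpy as np

def mutual_information(counts):
    """Mutual information (nats) between variant identity (rows) and
    sampling location (columns), from a joint count table."""
    n = np.asarray(counts, dtype=float)
    p = n / n.sum()                      # joint proportions
    pr = p.sum(axis=1, keepdims=True)    # marginal over variants
    pc = p.sum(axis=0, keepdims=True)    # marginal over locations
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p / (pr * pc)), 0.0)
    return float(terms.sum())

# Strongly differentiated demes: each allele is mostly confined to one site,
# so knowing an individual's allele is informative about its origin.
print(mutual_information([[45, 5], [5, 45]]))
# Well-mixed demes share alleles evenly, so I is zero.
print(mutual_information([[25, 25], [25, 25]]))
```

Higher effective dispersal homogenizes the columns of this table, driving I toward zero, which is the inverse relationship with dispersal noted in the text.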
Of course, species can vary widely in their dispersal ability [44,45], and there is also considerable genetic variation of dispersal ability within a single species, such as wing-polymorphism [46]. Nevertheless, some authors have successfully forecast assemblages of species or allelic variants, based on the assumption that any variation in dispersal is purely stochastic and unrelated to species- or allele-identity; such forecasting uses q = 2 for species assemblages [18,42] and q = 0,1,2 for genetic variants within a species [5]. This somewhat surprising result is consistent with findings that individuals, even of very different species, might have their dispersal more affected by gross physical effects such as currents and winds, than by their individual locomotion ability [47]. This also agrees with empirical and modeling results which indicate that geographic connectivity might be less affected by dispersal ability of particular types than by the relative reproductive output of the types [48,49]; relative reproductive output is discussed under adaptation below. Despite the success of forecasting when assuming that all types disperse equally, it is likely that forecasting will sometimes be improved by adding differential dispersal of different species or allelic types. Such forecasting may be developed from the mutual information q = 1 methods above, given their good performance in the simpler case of equal chance of dispersal for all types [5].
Adaptation, central to both ecology and evolution, has been addressed by a variety of entropic methods (Table 2). Note that for both molecular and species variants, there can be processes that eliminate one type in favor of another (“directional selection” in Table 2), or other processes that actively maintain more than one type (“balancing selection” in Table 2). There has been some success in modeling ecological assemblages without assessing adaptive differences between species [18]. However, there are now moves to make models that include adaptive differences between guilds of species [50]. Frank [13] and Day [12] have made a very clear case for assessing biological adaptation by entropic methods, which provide a general way to connect underlying causes—such as adaptive differences of variants—to the resulting macropatterns, such as diversity within and between locations. For example, survival of individuals of a particular type (alleles, species) must often be combined over different life-stages such as:
‘survival from birth to juvenile’ (e.g., 0.4 chance of survival),
then ‘survival from juvenile to breeder’ (e.g., 0.6 survival),
so that multiplication of the successive chances of survival gives overall survival from newborn to breeder (0.4 × 0.6 = 0.24).
This multiplication is equivalent to addition of the logs of the survivals, and thus one often uses log fitnesses, e.g., log(p′/p), where p is the proportion of a particular type before selection and p′ is its proportion after selection. Then the average of the log fitnesses, weighted by the post-selection proportions, is
Average log fitness = Σ p′ log(p′/p) = KL(p′ ‖ p)
where KL is the classic expression for relative entropy (Kullback–Leibler divergence) of the adult array of types relative to the initial newborn array [13]. This calculation provides immediate access to the maximum entropy production approach that is widely used throughout science for exploiting hypotheses about fundamental processes (e.g., inheritance mode and dispersal) to create forecasts of measurable patterns, including ecological adaptation and assemblages [51,52,53,54,55] (although some of those are not based on the four fundamental processes outlined above [51]). Analysis of adaptation might also exploit the similarity of the Kullback–Leibler expression to logit methods already used for analysis of adaptation [5].
Moreover, many tests for traits that are important in adaptation rely upon contrasts between variation within and between localities. For example, if selection is in different directions in two localities, one expects to see different arrays of species or alleles, whereas if there is the same selection in all areas, one expects uniformity. Therefore, many tests for adaptation compare the amount of variation within (α) and between (β) locations [56,57,58]. Such tests can benefit from many of the essential features of Shannon (q = 1) such as the complete independence of within- and between-group measures, which is not easily achieved with the more commonly used q = 2 methods [5]. Finally, functional differences of variants (such as alleles or species) are obviously crucial to adaptation, and there are now methods for incorporating functional divergence for measures based on any q-value, without violating fundamental properties of diversity measures [59].
The unfilled areas in Table 2 mostly involve more than one variant (e.g., multiple species or multiple locations in the genome), AND more than one locality, AND adaptation—a very realistic and important situation! Of course, this quite complex situation is challenging for all values of q. However, for q = 1, we can anticipate that further developments will benefit from the special properties of q = 1 discussed earlier in this subsection, especially those properties that facilitate analysis of adaptation, dispersal, and divergence.

4. Beyond Ecology and Evolution

The whole of biology is fundamental to ecology and evolution. For example, perhaps the single most important common process, adaptation, is underpinned by the cell- and molecular-biology that produce the phenotype (together with ecological influences). Of course, the phenotype is the critical link between inheritance and ecological pressures, thus creating the interactions that result in natural selection and adaptation. Likewise, the nervous system is molded by evolution, and drives behavior, which is crucial to ecology and evolution. This section deals briefly with such aspects of biological information and entropy, then the next section extends this to show links with non-biological aspects of information.
As well as the innovation methods mentioned in previous sections, ecology and evolution are both heavily affected by other types of innovation, such as behavioral innovations, based on either adaptive responses within nervous systems, or remodeling of the nervous system by evolution of molecular information; the connection between these different aspects of biological information has been expressed in entropic terms [60].
Transmission and replication can also be broadened, to include not only inheritance, but other information processes such as nerve transmission and learning. Taking this broader approach, transmission of all types of biological information goes beyond what is explained in Table 2, having three fundamental replication modes, with different entropic implications [43]:
  • The simple type seen with cells within individuals, or individuals within a population or ecological assemblage, having an exponential rate equation,
  • the autocatalytic type seen with some macromolecules, having a hyperbolic rate equation and,
  • the template-dependent type, as seen with nucleic acids, having a parabolic rate equation.
The different rate equations for these processes are further modified by density, competition for space, energy, and resources, etc., as well as showing considerable stochasticity. Some replicators have become dependent upon others; for example, many nucleic acids only replicate as a synchronous part of a cell replication cycle that has a fundamentally different rate equation, which itself is often constrained within replicating individuals [43]. In contrast, other molecules are partly independent of the cell cycle, including viruses, epigenetic modifications, and prions. Nerve impulses might show any of these three replication modes, depending upon the way the nerve network is connected. The same is true for behavioral transmission such as learning in populations with differently configured social networks.
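The three rate laws listed above can be compared numerically. A rough forward-Euler sketch, with arbitrary parameter values chosen only for illustration:

```python
# dx/dt = k * x**e, where the exponent selects the replication mode:
# e = 0.5 parabolic (template-dependent), e = 1 exponential (simple),
# e = 2 hyperbolic (autocatalytic).
def integrate(e, x0=1.0, k=0.1, dt=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x += k * (x ** e) * dt   # forward-Euler step
    return x

for name, e in [("parabolic", 0.5), ("exponential", 1.0), ("hyperbolic", 2.0)]:
    print(name, round(integrate(e), 3))
```

Over the same interval the hyperbolic mode outgrows the exponential, which outgrows the parabolic; the hyperbolic law even has a finite-time singularity (at t = 1/(k·x0) for the exact solution), which is why the integration interval here is kept short.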
Broadly speaking, adaptation includes not only selection, but interaction with all other information processes such as behavioral avoidance of harm [60] or molecular interactions. Thus adaptation requires modeling and assessment of physical and functional networks of heritable information. There is already extensive use of Shannon-based methods for expressing associations within networks of genes that are interacting either by physical linkage, or through expression pathways [5,17,37,61].

5. Extended Ecology and Evolution

The four basic processes are found beyond ecology, likely including prebiotic transmission and prebiotic adaptation to the physical environment or competition [62]. Moreover, biological information has continuously sprouted offshoots such as the nervous system, electronic information systems, etc. Every issue of the journal Entropy attests that information approaches apply well to innovation, transmission, adaptation, and movement in the physical world. Again, these processes can be expressed as probabilities of alternatives, such as SNP alleles or the 0 versus 1 for a binary string in computing. As a result, there is much borrowing of mathematical approaches, not only within biology [5], but also between genetic theory and computer algorithm design [63,64].
Perhaps even more powerful might be to consider one continuous process that encompasses innovation, transmission, adaptation, and movement, from the prebiotic physical environment [43], through biology, to the physical environment including modern information technology applications (Table 3). These different systems interact strongly, often being dependent upon one another, over various time-scales. For example, within nervous systems, rapid innovation of impulses and connections is limited by the broad architecture of the network, which ultimately derives from slow DNA or epigenetic changes taking place over a longer time-scale. Also, information technology is still dependent upon our biological neuronal systems to build and program machines.
Evolutionary algorithms are modeled on the same four processes of biological evolution, and are used to search for potentially improved computer code [63,64]. These algorithms usually mimic only some aspects of biological evolution, such as mutation, recombination, selection, and associative overdominance [68]. In the latter, advantageous or disadvantageous code affects the transmission of nearby code that is selectively neutral. The progress of associative overdominance depends upon the combination of selective advantage/disadvantage, and the rate at which parts of the code are swapped between scripts—the mimic of recombination [69]. There are other areas where biology and evolutionary algorithms converge, such as genetic ‘diploid’ or ‘polyploid’ code, which is a form of what is called parallelism in computing: each biological individual has two or more slightly different versions of the genome, and sometimes individuals with two (or more) versions perform better, which is a type of ‘balancing’ selection that maintains variation. For both biology and evolutionary algorithms, there is an enormous array of possible novelties, called the ‘adaptive landscape’, so exploring these possibilities requires systematic methods, which are highly developed in phylogenetics and other aspects of biology [64,69,70]. The problem of exploring a huge space of molecular interactions has been extensively investigated with q = 1 methods, sometimes with great success in medical genetics and molecular biology [15,16,71].
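A minimal evolutionary algorithm with the mutation, recombination, and selection steps named above can be sketched on a toy bitstring fitness, the classic ‘OneMax’ problem (all parameters and names here are arbitrary choices, not from the source):

```python
import random

def one_max(bits):
    """Toy fitness: the number of 1s in the 'code'."""
    return sum(bits)

def evolve(n_pop=40, length=20, generations=60, mu=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n_pop)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents (with elitism)
        pop.sort(key=one_max, reverse=True)
        parents = pop[: n_pop // 2]
        children = []
        while len(children) < n_pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # recombination (one crossover point)
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with probability mu
            child = [bit ^ (rng.random() < mu) for bit in child]
            children.append(child)
        pop = parents + children
    return max(one_max(ind) for ind in pop)

print(evolve())   # best fitness found (maximum possible is 20)
```

Even this stripped-down version shows the characteristic behavior: recombination assembles good partial solutions far faster than mutation alone could generate them, echoing the relative rates discussed for biological recombination.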
The interaction between evolutionary algorithms and artificial intelligence extends beyond their shared mathematics. First, just as the nervous system’s information arose out of heritable information such as DNA, our nervous systems’ information has given rise to evolutionary algorithms, and one of their manifestations, artificial intelligence (AI). Secondly, the nervous system can lead particular individuals to move to places where their heritable information makes them better adapted, such as moving a cold-sensitive individual to a warmer place, where it might survive and reproduce better. There is no reason why artificial intelligence should not result in such adaptive behavior of both living organisms and nonliving mechanisms. Indeed there is great interest in using AI to understand (and therefore manipulate?) the behavior of neuron networks, as well as group decisions by an ‘intelligent swarm’ of humans [66], so that all the systems in Table 3 interact extensively as part of a continuum of information. Any value of q might help in these applications, but we might see special utility for q = 1 biological theory, because of its good performance at tracking and forecasting each of the four processes, as outlined in Table 1 and Table 2, as well as the utility of q = 1 for exploring a huge space of alternatives.
It is likely that the similarities of biological evolution and evolutionary algorithms will become more noticeable when quantum computing becomes a day-to-day reality [60,66]. This is because the probabilistic and parallel nature of quantum computing mimics biology closely. First, the behavior of qubits is stochastic, collapsing, upon observation, to one state or another with probabilities determined by the prior input of energy to that qubit [63,72]. Second, it is said that massive parallelism will be important for efficient quantum computing [63,72]. The result is that quantum computing displays some close similarities to a process called balancing selection in biology, where two allelic states are maintained in a population (equivalent to the computer parallelism), with their relative frequencies maintained by selective forces that act against individuals that contain only one type of allele. In stochastic genetic systems, this situation has the counterintuitive behavior that if the expected equilibrium proportions are near the absorbing boundaries—0 or 1—then the forces that would be expected to maintain both variants actually increase the chance of losing one of the variants [73,74]. In the future, this behavior may also occur in quantum computing. Again, Shannon’s utility in assessing selection might be useful for quantum computing, just as for evolutionary computing. Figure 2 shows an example of the analogy between DNA nucleotides and qubits, in cases where there is independence within each system, i.e., no linkage of DNA nucleotides and no parallelism of qubits. As described above, there are already extensive methods to deal with the cases where DNA nucleotides are not independent (i.e., “linked”), which can also happen with qubits.

6. Conclusions

Inspired by projects aiming to systematically amass all genomic information throughout life [75], it seems that modeling and understanding of information will be best served by considering a single process encompassing all evolution, from prebiotic evolution, through biological evolution, to evolutionary computing. Throughout this continuum, the common information processes are Innovation, Transmission, Adaptation, and Movement. In arriving at a unified treatment of these processes, there appears to be great promise in using the new theoretical base for Shannon entropy/information, q = 1. However, this theory needs further extension, especially to multiple locations with adaptation.

Funding

This research received no external funding.

Acknowledgments

This article derives from an invited talk given at the conference “Entropy 2018: From Physics to Information Sciences and Geometry 14/05/2018–16/05/2018, Barcelona, Spain”, managed by MDPI. The author is very grateful for comments from Anne Chao, Jordi Piñero, Gabe O’Reilly, and Peter Smouse, as well as anonymous reviewers.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Vellend, M. The Theory of Ecological Communities (MPB-57); Princeton University Press: Princeton, NJ, USA, 2016. [Google Scholar]
  2. Danchin, E. Avatars of information: Towards an inclusive evolutionary synthesis. Trends Ecol. Evol. 2013, 28, 351–358. [Google Scholar] [CrossRef] [PubMed]
  3. Frère, C.H.; Krützen, M.; Mann, J.; Connor, R.C.; Bejder, L.; Sherwin, W.B. Social and genetic interactions drive fitness variation in a free-living dolphin population. Proc. Natl. Acad. Sci. USA 2010, 107, 19949–19954. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Chiu, C.-H.; Chao, A. Distance-based functional diversity measures and their decomposition: A framework based on Hill numbers. PLoS ONE 2014, 9, e100014. [Google Scholar] [CrossRef] [PubMed]
  5. Sherwin, W.B.; Chao, A.; Jost, L.; Smouse, P.E. Information Theory Broadens the Spectrum of Molecular Ecology and Evolution. Trends Ecol. Evol. 2017, 32, 948–963. [Google Scholar] [CrossRef] [PubMed]
  6. Pielou, E.C. The measurement of diversity in different types of biological collections. J. Theor. Biol. 1966, 13, 131–144. [Google Scholar] [CrossRef]
  7. Hill, M.O. Diversity and evenness: A unifying notation and its consequences. Ecology 1973, 54, 427–432. [Google Scholar] [CrossRef]
  8. Egizi, A.; Fonseca, D.M. Ecological limits can obscure expansion history: Patterns of genetic diversity in a temperate mosquito in Hawaii. Biol. Invasions 2015, 17, 123–132. [Google Scholar] [CrossRef]
  9. Chao, A.; Wang, Y.T.; Jost, L. Entropy and the species accumulation curve: A novel entropy estimator via discovery rates of new species. Methods Ecol. Evol. 2013, 4, 1091–1100. [Google Scholar] [CrossRef]
  10. Chao, A.; Jost, L. Estimating diversity and entropy profiles via discovery rates of new species. Methods Ecol. Evol. 2015, 6, 873–882. [Google Scholar] [CrossRef] [Green Version]
  11. Buddle, C.M.; Beguin, J.; Bolduc, E.; Mercardo, A.; Sackett, T.E.; Selby, R.D.; Varady-Szabo, H.; Zeran, R.M. The importance and use of taxon sampling curves for comparative biodiversity research with forest arthropod assemblages. Can. Èntomol. 2004, 137, 120–127. [Google Scholar] [CrossRef]
  12. Day, T. Information entropy as a measure of genetic diversity and evolvability in colonization. Mol. Ecol. 2015, 24, 2073–2083. [Google Scholar] [CrossRef] [PubMed]
  13. Frank, S.A. Universal expressions of population change by the Price equation: Natural selection, information, and maximum entropy production. Ecol. Evol. 2017, 7, 3381–3396. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Hu, T.; Chen, Y.; Kiralis, J.W.; Collins, R.L.; Wejse, C.; Sirugo, G.; Williams, S.M.; Moore, J.H. An information-gain approach to detecting three-way epistatic interactions in genetic association studies. J. Am. Med. Inform. Assoc. 2013, 20, 630–636. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Chanda, P.; Zhang, A.; Brazeau, D.; Sucheston, L.; Freudenheim, J.L.; Ambrosone, C.; Ramanathan, M. Information-theoretic metrics for visualizing gene-environment interactions. Am. J. Hum. Genet. 2007, 81, 939–963. [Google Scholar] [CrossRef] [PubMed]
  16. Chanda, P.; Sucheston, L.; Liu, S.; Zhang, A.; Ramanathan, M. Information-theoretic gene-gene and gene-environment interaction analysis of quantitative traits. BMC Genom. 2009, 10, 509. [Google Scholar] [CrossRef] [PubMed]
  17. Hubbell, S.P. The Unified Neutral Theory of Biodiversity and Biogeography; Princeton University Press: Princeton, NJ, USA, 2001. [Google Scholar]
  18. Moore, J.H.; Hu, T. Epistasis analysis using information theory. Epistasis 2015, 1253, 257–268. [Google Scholar]
  19. Halliburton, R. Introduction to Population Genetics; Pearson: Upper Saddle River, NJ, USA, 2004. [Google Scholar]
  20. Nielsen, R.; Slatkin, M. An Introduction to Population Genetics Theory and Applications; Sinauer: Sunderland, MA, USA, 2013. [Google Scholar]
  21. Meirmans, P.G.; Hedrick, P.W. Assessing population structure: Fst and related measures. Mol. Ecol. Resour. 2011, 11, 5–18. [Google Scholar] [CrossRef] [PubMed]
  22. Jost, L. Gst and its relatives do not measure differentiation. Mol. Ecol. 2008, 17, 4015–4026. [Google Scholar] [CrossRef] [PubMed]
  23. Pritchard, J.K.; Stephens, M.; Donnelly, P. Inference of population structure using multilocus genotype data. Genetics 2000, 155, 945–959. [Google Scholar] [PubMed]
  24. Excoffier, L.; Smouse, P.E.; Quattro, J.M. Analysis of molecular variance inferred from metric distances among DNA haplotypes: Application to human mitochondrial DNA restriction data. Genetics 1992, 131, 479–491. [Google Scholar] [PubMed]
  25. Fisher, R.A.; Corbet, A.S.; Williams, C.B. The relation between the number of species and the number of individuals in a random sample of an animal population. J. Anim. Ecol. 1943, 12, 42–58. [Google Scholar] [CrossRef]
  26. Preston, F.W. The commonness, and rarity, of species. Ecology 1948, 29, 254–283. [Google Scholar] [CrossRef]
  27. Ewens, W. Mathematical Population Genetics; Springer-Verlag: Berlin, Germany, 1979. [Google Scholar]
  28. Leinster, T.; Cobbold, C. Measuring diversity: The importance of species similarity. Ecology 2012, 93, 477–489. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Chao, A.; Chiu, C.-H.; Jost, L. Unifying species diversity, phylogenetic diversity, functional diversity, and related similarity and differentiation measures through Hill numbers. Annual Rev. Ecol. Evol. Syst. 2014, 45, 297–324. [Google Scholar] [CrossRef]
  30. Jost, L.; DeVries, P.; Walla, T.; Greeney, H.; Chao, A.; Ricotta, C. Partitioning diversity for conservation analyses. Divers. Distrib. 2010, 16, 65–76. [Google Scholar] [CrossRef]
  31. Jost, L.; Archer, F.; Flanagan, S.; Gaggiotti, O.; Hoban, S.; Latch, E. Differentiation measures for conservation genetics. Evol. Appl. 2018. [Google Scholar] [CrossRef] [PubMed]
  32. Ewens, W.J. The sampling theory of selectively neutral alleles. Theor. Popul. Biol. 1972, 3, 87–112. [Google Scholar] [CrossRef]
  33. Sherwin, W.B.; Jabot, F.; Rush, R.; Rossetto, M. Measurement of biological information with applications from genes to landscapes. Mol. Ecol. 2006, 15, 2857–2869. [Google Scholar] [CrossRef] [PubMed]
  34. Dewar, R.C.; Sherwin, W.B.; Thomas, E.; Holleley, C.E.; Nichols, R.A. Predictions of single-nucleotide polymorphism differentiation between two populations in terms of mutual information. Mol. Ecol. 2011, 20, 3156–3166. [Google Scholar] [CrossRef] [PubMed]
  35. Chao, A.; Jost, L.; Hsieh, T.C.; Ma, K.H.; Sherwin, W.B.; Rollins, L.A. Expected Shannon entropy and Shannon differentiation between subpopulations for neutral genes under the finite island model. PLoS ONE 2015, 10, e0125471. [Google Scholar] [CrossRef] [PubMed]
  36. O’Reilly, G.D.; Jabot, F.; Gunn, M.R.; Sherwin, W.B. Novel uses for equations: Predicting Shannon’s information for genes in finite populations. Conserv. Genet. Resour. 2018. submitted. [Google Scholar]
  37. Iwasa, Y. Free fitness that always increases in evolution. J. Theor. Biol. 1988, 135, 265–281. [Google Scholar] [CrossRef]
  38. De Vladar, H.P.; Barton, N.H. The contribution of statistical physics to evolutionary biology. Trends Ecol. Evol. 2011, 26, 424–432. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Sherwin, W.B. Review: Entropy and information approaches to genetic diversity and its expression: Genomic geography. Entropy 2010, 12, 1765–1798. [Google Scholar] [CrossRef]
  40. Coyne, J.; Orr, H.A. Speciation; Sinauer: Sunderland, MA, USA, 2004. [Google Scholar]
  41. Thibert-Plante, X.; Gavrilets, S. Evolution of mate choice and the so-called magic traits in ecological speciation. Ecol. Lett. 2013, 16, 1004–1013. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Rosindell, J.; Cornell, S.J.; Hubbell, S.P.; Etienne, R.S. Protracted speciation revitalizes the neutral theory of biodiversity. Ecol. Lett. 2010, 13, 716–727. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Moore, J.H.; Hu, T. Epistasis analysis using information theory. In Epistasis: Methods and Protocols, Methods in Molecular Biology; Moore, J.H., Williams, S.M., Eds.; Springer Science + Business Media: New York, NY, USA, 2015; Volume 1253, pp. 257–268. [Google Scholar]
  44. Piñero, J.; Solé, R. Nonequilibrium entropic bounds for Darwinian replicators. Entropy 2018, 20, 98. [Google Scholar] [CrossRef]
  45. Grzywacz, B.; Lehmann, A.W.; Chobanov, D.; Lehmann, G.U.C. Multiple origin of flightlessness in Phaneropterinae bushcrickets and redefinition of the tribus Odonturini (Orthoptera: Tettigonioidea: Phaneropteridae). Org. Divers. Evol. 2018, 1–3. [Google Scholar] [CrossRef]
  46. Andersen, N.M. The evolution of wing polymorphism in water striders (Gerridae): A phylogenetic approach. Oikos 1993, 67, 433–443. [Google Scholar] [CrossRef]
  47. Crnokrak, P.; Roff, D.A. The genetic basis of the trade-off between calling and wing morph in males of the cricket Gryllus firmus. Evolution 1998, 52, 1111–1118. [Google Scholar] [CrossRef] [PubMed]
  48. James, M.K.; Armsworth, P.R.; Mason, L.B.; Bode, L. The structure of reef fish metapopulations: Modeling larval dispersal and retention patterns. Proc. R. Soc. Lond. B 2002, 269, 2079–2086. [Google Scholar] [CrossRef] [PubMed]
  49. López-Duarte, P.C.; Carson, H.S.; Cook, G.S.; Fodrie, F.J.; Becker, B.J.; Dibacco, C.; Levin, L.A. What controls connectivity? An empirical, multi-species approach. Integr. Comp. Biol. 2012, 52, 511–524. [Google Scholar] [CrossRef] [PubMed]
  50. Castorani, M.C.N.; Reed, D.C.; Raimondi, P.T.; Alberto, F.; Bell, T.W.; Cavanaugh, K.C.; Siegel, D.A.; Simons, R.D. Fluctuations in population fecundity drive variation in demographic connectivity and metapopulation dynamics. Proc. R. Soc. B 2017, 284, 20162086. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Matthews, T.J.; Whittaker, R.J. Neutral theory and the species abundance distribution: Recent developments and prospects for unifying niche and neutral perspectives. Ecol. Evol. 2014, 4, 2263–2277. [Google Scholar] [CrossRef] [PubMed]
  52. Elith, J.; Phillips, S.J.; Hastie, T.; Dudık, M.; Chee, Y.E.; Yates, C.J. A statistical explanation of MaxEnt for ecologists. Divers. Distrib. 2011, 17, 43–57. [Google Scholar] [CrossRef]
  53. Bessa, R.J.; Miranda, V.; Gama, J. Entropy and correntropy against minimum square error in offline and online three-day ahead wind power forecasting. IEEE Trans. Power Syst. 2009, 24, 1657–1666. [Google Scholar] [CrossRef]
  54. Skene, K.R. Life’s a gas: A thermodynamic theory of biological evolution. Entropy 2015, 17, 5522–5548. [Google Scholar] [CrossRef]
  55. Dewar, R.C.; Porte, A. Statistical mechanics unifies different ecological patterns. J. Theor. Biol. 2008, 251, 389–403. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Borlase, S.C.; Loebel, D.A.; Frankham, R.; Nurthen, R.K.; Briscoe, D.A.; Daggard, G.E. Modeling problems in conservation genetics using captive drosophila populations—Consequences of equalization of family sizes. Conserv. Biol. 1993, 7, 122–131. [Google Scholar] [CrossRef]
  57. Schlötterer, C.; Kofler, R.; Versace, E.; Tobler, T.; Franssen, S.U. Combining experimental evolution with next-generation sequencing: A powerful tool to study adaptation from standing genetic variation. Heredity 2015, 114, 431–440. [Google Scholar] [CrossRef] [PubMed]
  58. Vandepitte, K.; De Meyer, T.; Helsen, K.; Van Acker, K.; Roldán-Ruiz, I.; Mergeay, J.; Honnay, O. Rapid genetic adaptation precedes the spread of an exotic plant species. Mol. Ecol. 2014, 23, 2157–2164. [Google Scholar] [CrossRef] [PubMed]
  59. Beaumont, M.; Nichols, R.A. Evaluating loci for use in the genetic analysis of population structure. Proc. R. Soc. Lond. 1996, 263, 1619–1626. [Google Scholar] [CrossRef]
  60. Chao, A.; Chiu, C.-H.; Villéger, S.; Sun, I.-F.; Thorn, S.; Lin, Y.; Chiang, J.-M.; Sherwin, W.B. An attribute-diversity approach to functional diversity, functional beta diversity, and related (dis)similarity measures. Ecol. Monogr. 2018. submitted. [Google Scholar]
  61. Dodig-Crnkovica, G. Nature as a network of morphological infocomputational processes for cognitive agents. Eur. Phys. J. Spec. Top. 2017, 226, 181. [Google Scholar] [CrossRef]
  62. Barton, N.H.; De Vladar, H.P. Statistical mechanics and the evolution of polygenic quantitative traits. Genetics 2009, 181, 997–1011. [Google Scholar] [CrossRef] [PubMed]
  63. Popa, R.; Cimpoiasu, V.M. Prebiotic Competition between Information Variants, With Low Error Catastrophe Risks. Entropy 2015, 17, 5274–5287. [Google Scholar] [CrossRef] [Green Version]
  64. Zhang, G.X. Quantum-inspired evolutionary algorithms: A survey and empirical study. J. Heuristics 2011, 17, 303–351. [Google Scholar] [CrossRef]
  65. Hamblin, S. On the practical usage of genetic algorithms in ecology and evolution. Methods Ecol. Evol. 2013, 4, 184–194. [Google Scholar] [CrossRef]
  66. Miller, S.L.; Urey, H.C. Organic Compound Synthesis on the Primitive Earth. Science 1959, 130, 245–251. [Google Scholar] [CrossRef] [PubMed]
  67. Yukalov, V.I.; Sornette, D. Quantum probability and quantum decision-making. Phil. Trans. R. Soc. A 2016, 374, 20150100. [Google Scholar] [CrossRef] [PubMed]
  68. Paixão, T.; Heredia, J.P.; Sudholt, D.; Trubenová, B. Towards a runtime comparison of natural and artificial evolution. Algorithmica 2017, 78, 681. [Google Scholar] [CrossRef]
  69. Zhao, L.; Charlesworth, B. Resolving the conflict between associative overdominance and background selection. Genetics 2016, 203, 1315–1334. [Google Scholar] [CrossRef] [PubMed]
  70. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization. Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  71. Von Kodolitsch, Y.; Berger, J.; Rogan, P.K. Predicting severity of haemophilia A and B splicing mutations by information analysis. Haemophilia 2006, 12, 258–262. [Google Scholar] [CrossRef] [PubMed]
  72. Narayanan, A. Quantum Computing for beginners. Proc Congr. Evol. Comput. 1999, 1999, 2231–2238. [Google Scholar]
  73. Robertson, A. Selection for heterozygotes in small populations. Genetics 1962, 47, 1291–1300. [Google Scholar] [PubMed]
  74. Sutton, J.T.; Nakagawa, S.; Robertson, B.C.; Jamieson, I.G. Disentangling the roles of natural selection and genetic drift in shaping variation at MHC immunity genes. Mol. Ecol. 2011, 20, 4408–4420. [Google Scholar] [CrossRef] [PubMed]
  75. Lewin, H.A.; Robinson, G.E.; Kress, W.J.; Baker, W.J.; Coddington, J.; Crandall, K.A.; Durbin, R.; Edwards, S.V.; Forest, F.; Gilbert, M.T.P.; et al. Earth BioGenome Project: Sequencing life for the future of life. Proc. Natl. Acad. Sci. USA 2018, 115, 4325–4333. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Confidence limits for Dq values for two hypothetical localities, one locality shown as a pair of solid lines, the other as a pair of dashed lines (the mean curves would lie between the two confidence limits, but are omitted for clarity). The circled areas in each of the three panels show cases where discrimination between the assemblages of species or genes at the two localities is more clearly identified by (a) q = 0, (b) q = 1, or (c) q = 2, respectively.
Figure 2. Similarities of DNA and Quantum Computing. In the DNA in the upper panel, if association between individual SNPs is random (‘linkage equilibrium’), then the proportion of a particular DNA sequence (‘haplotype’) is the product of the proportions at each SNP in the population, over m nucleotide positions. Similarly, for the parallel quantum ‘qbits’ in the lower panel, each will have a probability of being zero or 1, depending upon the input of energy to that part of the quantum computer (which affects the complex amplitude, whose square is the probability). Like the DNA sequence, the expected outcome in a quantum computer would be characterized by the product of the m probabilities, P.
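Figure 2's product rule, and the matching additivity of Shannon entropy across independent sites, can be sketched numerically. In this minimal illustration the allele proportions are made up, and the function names are my own:

```python
from math import log

def haplotype_prob(site_props, haplotype):
    """Probability of a haplotype under linkage equilibrium: the product
    of the allele proportions at each of the m sites (as in Figure 2)."""
    prob = 1.0
    for site, base in zip(site_props, haplotype):
        prob *= site[base]
    return prob

def shannon_entropy(props):
    """Shannon entropy (in nats) of one site's allele proportions."""
    return -sum(p * log(p) for p in props.values() if p > 0)

# three hypothetical independent SNP sites, two alleles each
sites = [{"A": 0.7, "G": 0.3}, {"C": 0.5, "T": 0.5}, {"A": 0.9, "T": 0.1}]
p_hap = haplotype_prob(sites, "ACA")   # 0.7 * 0.5 * 0.9 = 0.315
total_H = sum(shannon_entropy(s) for s in sites)
```

Because the sites are independent, the joint (haplotype-level) entropy equals the sum of the per-site entropies, just as the haplotype proportion is the product of the per-site proportions; the same algebra applies to m unentangled qbits.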
Table 1. Ecological and evolutionary information or entropy, for values q = 0, 1, 2. (a) Measurement and (b) forecasting from underlying processes. Full equations are found in the supplement of a previous review [5].
(a) Measurement

q = 0: H0 = count of types − 1; D0 = count of types.
- Ecology (variant species in an assemblage) and Evolution (variant molecules within species): used, but has very wide confidence limits, even with modern corrections [9,10].

q = 1: H1 = −Σ p ln p; D1 = exp(H1), where the p values are the proportions of the different variants.
- Ecology: the most common frequency-sensitive measure [11].
- Evolution: rarely used until recently [5]. Related measures are proposed as a primary measure of evolvability [12,13]. Commonly used for analyzing networks of physically linked or functionally interacting genes [5,14,15,16,17].

q = 2: H2 = 1 − Σ p²; D2 = 1/(1 − H2).
- Ecology: some use [18].
- Evolution: the most common measure (heterozygosity, nucleotide diversity, STRUCTURE, AMOVA, FST, GST, DEST, etc.) [19,20,21,22,23,24].

(b) Forecasts from Underlying Processes

q = 0 (H0, D0):
- Ecology: no forecasts from underlying processes; some from curve-fitting [25,26].
- Evolution: some forecasts, with underlying transmission and innovation only [27].

q = 1 (H1, D1):
- Ecology: forecasts are available to be transferred from molecular ecology [5].
- Evolution: forecasting ability now close to matching that for q = 2 [5]; further details are in Table 2.

q = 2 (H2, D2):
- Ecology: some forecasts transferred from molecular ecology, but only with underlying transmission and innovation, no adaptation [18].
- Evolution: extensive ability to forecast under a wide range of conditions for all underlying processes (Innovation, Transmission, Movement, and Adaptation); forecasts are often based on gas-diffusion theory, e.g., the Fokker–Planck equation (see summaries in textbooks [19,20]).
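The three measures in Table 1(a) can all be obtained from one diversity-profile function. A minimal Python sketch (the function name and example proportions are illustrative):

```python
from math import exp, log

def hill_number(props, q):
    """Effective number of types Dq from variant proportions (summing to 1).

    q = 0: count of types (richness); q = 1: exp(Shannon entropy H1);
    q = 2: inverse Simpson concentration, 1 / sum(p^2) = 1 / (1 - H2)."""
    props = [p for p in props if p > 0]
    if q == 1:
        # limit case: D1 = exp(H1) with H1 = -sum(p ln p)
        return exp(-sum(p * log(p) for p in props))
    return sum(p ** q for p in props) ** (1.0 / (1 - q))

# four variants (alleles or species) with unequal proportions
p = [0.5, 0.3, 0.1, 0.1]
d0 = hill_number(p, 0)   # count of types
d1 = hill_number(p, 1)   # exp(H1)
d2 = hill_number(p, 2)   # 1 / (1 - H2)
```

For any set of proportions, D0 ≥ D1 ≥ D2, with equality only when all variants are equally common; profiles of Dq against q of this kind underlie Figure 1.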
Table 2. Types of forecasts available for q = 1 (Shannon) entropy/information, showing how they can be used for the common processes: Innovation, Transmission, Movement, and Adaptation. Although much of this modeling has been done for molecular variants, it has often been, or could be, applied to variant species in ecological assemblages, as described in Vellend (2016) [1] and text of Section 3. For forecasts with other values of q, see Table 1b.
INNOVATION: Innovation mechanisms (SNP, IAM, and SMM) are defined and described further in the text, including the relationships between forecasts for molecules within one species (this table) and forecasts for species in assemblages.

TRANSMISSION (neutral variants, i.e., no effect on adaptation, with stochasticity):
- α within-locality, finite size at equilibrium: SNP [34]; IAM [33,35]; SMM [33,35].
- α within-locality, dynamic (non-equilibrium): SNP, IAM, SMM [36]; SNP [34].
- β between-locality, finite size at equilibrium: SNP [34]; IAM [33,35]; SMM [33,35].
- β between-locality, dynamic (non-equilibrium): SNP [34].

MOVEMENT (neutral variants, with dispersal between locations):
- α within-locality: not applicable.
- β between-locality, finite size at equilibrium: SNP [34]; IAM [33,35]; SMM [33,35].
- β between-locality, dynamic (non-equilibrium): SNP [34].

ADAPTATION (continuous heritable variants, e.g., reproductive rate or gene expression patterns):
- α within-locality, equilibrium and non-equilibrium: [5,37,38].
- β between-locality: not yet available.

ADAPTATION (discrete heritable variants, e.g., DNA alleles or haplotypes):
- α within-locality, finite size at equilibrium: 'balancing' selection that maintains more than one variant [39].
- α within-locality, dynamic (non-equilibrium): 'directional' selection that favors a single variant ([12,13] and Supp. S4, S5 of review [5]).
- β between-locality: not yet available.
Table 3. Processes common to all systems of evolution, and their likely timescales.
Common processes for information (Innovation, Transmission, Adaptation, Movement), by system:

Prebiotic (may be continuing slowly in the current physical environment):
- Innovation: many years? [65].
- Transmission: seconds, or longer; rate depends upon the type of interactions [43].
- Adaptation: speed would depend upon relative rates of innovation and competitive interactions [62].
- Movement: probably occurs, at least involuntarily in currents, etc.

Biomolecules, acting individually:
- Innovation: seconds, or longer.
- Transmission: seconds, or longer; rate depends upon the type of interactions [43].
- Adaptation: seconds, or longer.
- Movement: seconds, or longer.

Biomolecules, as the basis of biological evolution:
- Innovation, Transmission, Adaptation, and Movement: generations [19,20].

Neural networks and behavioral responses driven by neurons:
- Innovation: seconds.
- Transmission: seconds.
- Adaptation: seconds (or longer with a group of individuals [66]).
- Movement: seconds, or longer.

Species:
- Innovation: usually 1000s of generations [1,18,40].
- Transmission: usually 1000s of generations [1,18,40].
- Adaptation: usually 1000s of generations [1,40].
- Movement: usually 1000s of generations [1,18,40].

Algorithms and machines:
- Innovation: seconds to hours [63,64,67].
- Transmission: seconds to hours [63,64,67].
- Adaptation: seconds to hours [63,64,67].
- Movement: seconds to hours, e.g., self-driving cars, Mars rovers, computer viruses.
