Article

Entropic Equilibria Selection of Stationary Extrema in Finite Populations

1 Google Inc., Mountain View, CA 94043, USA
2 Department of Mathematics and Statistics, San José State University, San José, CA 95192-0103, USA
* Author to whom correspondence should be addressed.
Entropy 2018, 20(9), 631; https://doi.org/10.3390/e20090631
Submission received: 29 June 2018 / Revised: 16 August 2018 / Accepted: 20 August 2018 / Published: 24 August 2018
(This article belongs to the Special Issue Entropy: From Physics to Information Sciences and Geometry)

Abstract

We propose the entropy of random Markov trajectories originating and terminating at the same state as a measure of the stability of a state of a Markov process. These entropies can be computed in terms of the entropy rates and stationary distributions of Markov processes. We apply this definition of stability to local maxima and minima of the stationary distribution of the Moran process with mutation and show that variations in population size, mutation rate, and strength of selection all affect the stability of the stationary extrema.

1. Introduction

This work is motivated by the stationary stability theorem [1], which characterizes local maxima and minima of the stationary distribution of the Moran process with mutation in terms of evolutionary stability. Specifically, the theorem says that for sufficiently large populations, the local maxima and minima of the stationary distribution satisfy a selective-mutative equilibrium criterion that generalizes the celebrated notion of evolutionary stability [2]. This means that the stationary distribution encodes the usual information about evolutionary stability. Precisely which equilibria are favored (i.e., are maxima or minima) is a natural question, and the answer depends on the choice of various parameters, such as the mutation rate μ, the strength of selection β, and the population size N.
We propose the random trajectory entropy (RTE) of paths originating and terminating at a state as a measure of the stability of that state [3,4]. This is an information-theoretic quantity that is easily computed from the entropy rate and stationary distribution of a process, and it varies continuously with the critical evolutionary parameters (as does the stationary distribution). We will see that the RTE captures the behavior of the Moran process with mutation in an intuitive way, leading to a simple method of equilibrium selection for finite populations, which is generally a significant problem in evolutionary game theory [5,6].

2. Results

2.1. Stationary Distributions, Entropy Rates, and Random Trajectory Entropies

Our first goal is to establish the random trajectory entropy (RTE) of a state as a measure of stability of the state. We are particularly concerned with the local and global extrema of the stationary distribution, shown in [1] to have a close connection with evolutionary stability.
The stationary distribution of a Markov process gives the probability that the process will be in each state in the long run [7]. As such, it is a fundamental convergence concept for Markov processes. We take the weighted graph viewpoint of Markov processes on a finite set of states V. Let the transition probabilities be given by a function T : V × V → [0, 1] (viewed as a matrix or a function), and the stationary distribution by a function s : V → [0, 1] (appropriately normalized to a probability distribution). We assume throughout that all processes are irreducible (there is a path between any two states) and have unique stationary distributions. Let V′ ⊆ V and define a stationary maximum of V′ to be a state v ∈ V′ such that s(v′) < s(v) for all v′ ∈ V′ \ {v}. Then, we have a local maximum v if the set V′ is the set of neighboring states of v and a global maximum if V′ = V (similarly for minima).
Although the stationary distribution of a process is often quite useful, it does not tell the full story of the process. While the stationary distribution gives the long-run occupancy of any particular state, it does not explain how much the process moves among states, and so gives an incomplete description of the dynamic stability of a state. Entropy rate is a generalization of Shannon entropy to Markov processes, and is commonly described as the inherent randomness or information content of a process [3]. The entropy rate encodes both long-term and short-term information about a process; for a process X, it is defined as follows:
H(X) = -\sum_{i,j} s(v_i)\, T(v_i, v_j) \log T(v_i, v_j).   (1)
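As a concrete illustration, the following minimal sketch (our own naming, not the API of the authors' stationary package) computes the entropy rate of Equation (1) from a transition matrix T and stationary distribution s:

import numpy as np

def entropy_rate(T, s):
    # H(X) = -sum_{i,j} s_i T_ij log T_ij, with the convention 0 log 0 = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        log_T = np.where(T > 0, np.log(T), 0.0)
    return float(-np.sum(s[:, None] * T * log_T))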
The entropy rate is a value attached to a process rather than to individual states. To measure the stability of a state, we need a quantity associated to both the process and the individual states that can discriminate between equilibria. Following [3], define the probability of a trajectory V : v_0 → v_1 → ⋯ → v_k with no intermediate state equal to v_k as the product of the transitions along the path
\Pr(V) = T(v_0, v_1)\, T(v_1, v_2) \cdots T(v_{k-1}, v_k).   (2)
Since the process is irreducible, the sum over all such trajectories from v_0 to v_k is one, so they form a probability distribution. Let \mathcal{T}(v_0, v_k) denote the set of all such paths, and define the random trajectory entropy (RTE) from v_0 to v_k to be the entropy of the probability distribution on \mathcal{T}(v_0, v_k); that is,
H_{v_0 \to v_k} = -\sum_{v \in \mathcal{T}(v_0, v_k)} \Pr(v) \log \Pr(v).   (3)
It was shown in [3] (Theorem 1, p. 1419) that when the starting and ending states are the same, the RTE is determined by the entropy rate and the stationary probability:
H_v := H_{v \to v} = \frac{H(X)}{s(v)}.   (4)
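Continuing the sketch above (again with illustrative names of our own), Equation (4) makes the RTE of a state back to itself a one-line computation once the entropy rate and stationary distribution are available:

def random_trajectory_entropy(T, s, v):
    # H_v = H(X) / s(v), Equation (4)
    return entropy_rate(T, s) / s[v]

# Toy two-state chain as a usage example (values are illustrative only).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
s = np.array([2/3, 1/3])                    # stationary distribution of T
print(random_trajectory_entropy(T, s, 0))   # smaller than the RTE of state 1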
From this we immediately have the following theorem characterizing local and global extrema of the stationary distribution.
Theorem 1.
For an irreducible Markov process with stationary distribution s, a state v is a local (resp. global) maximum (resp. minimum) of s if and only if the RTE H_v is a local (resp. global) minimum (resp. maximum).
Furthermore, we now recognize the random trajectory entropy as a measure of the stationary instability of a state, which we can use to compare and select equilibria for the same process and for closely related processes. Intuitively, a smaller RTE means that trajectories tend to stay near a local maximum (i.e., that random walks returning to it tend to be short), which is a way of saying that the state is stable (note that 1/s(v) is the expected number of steps it takes to return to v).

2.2. Applications

We now consider several explicit examples of finite population processes.

2.2.1. Moran Process with Mutation

For the Moran process with mutation, we use a special case of the formulation in [1]; see also [8,9,10]. Let a population of size N be composed of n types A_1, …, A_n, with a_i individuals of type A_i so that N = a_1 + ⋯ + a_n. We denote a population state by the tuple a = (a_1, …, a_n) and the population distribution by ā = a/N. We assume the existence of a fitness landscape f, where f_i(ā) gives the fitness of type A_i. Typically, f(ā) = Gā for some game matrix G (see [11,12,13] for general references on evolutionary games). Define a matrix of mutations M, where 0 ≤ M_ij ≤ 1 may be a function of the population state for our most general results; in examples we will typically assume that, for some constant value μ, the mutation matrix takes the form M_ij = μ/(n − 1) for i ≠ j and M_ii = 1 − μ. A typical mutation rate is μ ≈ 1/N [1,14].
The Moran process with mutation is a Markov process on the population states defined by the following transition probabilities, corresponding to a birth–death process where birth is fitness-proportionate with mutation and death is uniformly random. To define the adjacent population states, let i_{u,w} be the vector that is 1 at index u, −1 at index w, and zero otherwise, with the convention that i_{u,u} is the zero vector of length n. Every adjacent state of a state a for the Moran process is of the form a + i_{u,w} for some 1 ≤ u, w ≤ n. At a population state a, we choose an individual of type A_i to reproduce proportionally to its fitness, allowing for mutation of the new individual as given by the mutation probabilities. The distribution of fitness-proportionate selection probabilities is given by p(ā) = M(ā) φ̄(ā); explicitly, the i-th component is
p_i(\bar{a}) = \frac{\sum_{k=1}^{n} \varphi_k(\bar{a})\, M_{ki}}{\sum_{k=1}^{n} \varphi_k(\bar{a})},   (5)
where φ_i(ā) = ā_i f_i(ā). We also choose an individual uniformly at random to be replaced, just as in the Moran process. This yields the transition probabilities
T_{a \to a + i_{u,w}} = p_u(\bar{a})\, \bar{a}_w \quad \text{for } u \ne w, \qquad T_{a \to a} = 1 - \sum_{b \,\mathrm{adj}\, a,\; b \ne a} T_{a \to b}.   (6)
We will also utilize a variant incorporating a strength of selection term β called Fermi selection [15]:
\varphi_i(\bar{a}) = \bar{a}_i\, e^{\beta f_i(\bar{a})}.   (7)
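The following sketch builds the n = 2 Moran process with mutation and Fermi selection described by Equations (5)–(7), together with a simple power-iteration solver for its stationary distribution. It continues the earlier sketches (numpy as np); the function names and the choice of solver are our own, not the stationary package's API. States a = (a_1, a_2) with a_1 + a_2 = N are indexed by a_1.

def moran_transition_matrix(N, game, beta, mu):
    T = np.zeros((N + 1, N + 1))
    for a1 in range(N + 1):
        abar = np.array([a1, N - a1]) / N            # population distribution
        f = game @ abar                              # fitness landscape f(abar) = G abar
        phi = abar * np.exp(beta * f)                # Fermi selection, Equation (7)
        M = np.array([[1 - mu, mu], [mu, 1 - mu]])   # mutation matrix
        p = (phi @ M) / phi.sum()                    # selection with mutation, Equation (5)
        if a1 < N:
            T[a1, a1 + 1] = p[0] * abar[1]           # type-1 birth replaces a type-2 individual
        if a1 > 0:
            T[a1, a1 - 1] = p[1] * abar[0]           # type-2 birth replaces a type-1 individual
        T[a1, a1] = 1.0 - T[a1].sum()                # Equation (6)
    return T

def stationary_distribution(T, iters=20000):
    # simple dense power iteration; a sparse implementation scales to larger state spaces
    s = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(iters):
        s = s @ T
    return s / s.sum()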
For our examples, we will restrict our attention to processes defined by X = X(N, n, μ, φ). Several explicit examples of stationary distributions for Moran processes with mutation are given in [9,16]. The entropy rate of the Moran process with mutation was computed in [16] (for n = 2) and [17] (for n > 2), along with a number of theoretical results. For our purposes, analytic values of the entropy rate are not needed. Generally, as μ → 0, the entropy rate also goes to zero, and it attains its maximum as N → ∞ for the neutral fitness landscape (e.g., with mutations μ = 1/N). The entropy rate is bounded below by zero and above by (2(n − 1)/n) log n [17]. The RTE is bounded below by the entropy rate, justifying the description of the entropy rate as the inherent randomness of a process.

2.2.2. Comparison of Equilibria of a Single Process

Since the entropy rate is associated to the entire process, for two different states i and j we have H_i = H(X)/s(i) and H_j = H(X)/s(j). Hence, if H(X) ≠ 0, we need only consider the values of the stationary distribution to compare the equilibria, since H_j / H_i = s(i) / s(j).
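For example, if a process has s(i) = 0.2 and s(j) = 0.05 (illustrative values, not taken from any particular process considered below), then H_j = 4 H_i, so state i is the more stable of the two.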

2.2.3. Small Mutation Limit

Consider the special case in which the mutation rate μ → 0 in a population of two types. Then, lim_{μ→0} H(X) = 0 [17]. In this case, the stationary distribution becomes a delta distribution on the corner states. For two population types A and B, we can express the limiting stationary distribution in terms of the fixation probabilities ρ_A and ρ_B of the two types [8]:
\lim_{\mu \to 0} s(0, N) = \frac{\rho_B}{\rho_A + \rho_B} \quad \text{and} \quad \lim_{\mu \to 0} s(N, 0) = \frac{\rho_A}{\rho_A + \rho_B}.   (8)
Hence, we have that
\lim_{\mu \to 0} \frac{H_{(0,N)}}{H_{(N,0)}} = \lim_{\mu \to 0} \frac{s(N, 0)}{s(0, N)} = \frac{\rho_A}{\rho_B}.   (9)
In other words, the state with the type having the greater fixation probability is more stable. For the classical Moran process with game matrix
G = \begin{pmatrix} r & r \\ 1 & 1 \end{pmatrix},
we have (assuming r ≠ 1) that ρ_A = (1 - r^{-1})/(1 - r^{-N}) and ρ_B = (1 - r)/(1 - r^{N}), which gives
\lim_{\mu \to 0} \frac{H_{(0,N)}}{H_{(N,0)}} = \frac{\rho_A}{\rho_B} = r^{N-1}.   (10)
As expected, whether r > 1 determines which equilibrium is favored. If r = 1, then ρ_A = 1/N = ρ_B and the RTEs are equal.
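The following illustrative check (our own sketch, with arbitrary parameter values) evaluates the closed forms quoted above and the resulting small-mutation limit of Equation (8):

r, N = 1.5, 20
rho_A = (1 - r**-1) / (1 - r**-N)     # a single A invading an all-B population
rho_B = (1 - r) / (1 - r**N)          # a single B invading an all-A population
s_all_A = rho_A / (rho_A + rho_B)     # lim_{mu -> 0} s(N, 0), Equation (8)
s_all_B = rho_B / (rho_A + rho_B)     # lim_{mu -> 0} s(0, N)
print(s_all_A / s_all_B, r**(N - 1))  # the two ratios agree; for r > 1 the all-A corner
                                      # carries more weight and, by Equation (4), has the
                                      # smaller RTE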

2.2.4. Large Populations and Neutral Landscapes

For arbitrarily many types, the stationary distribution for the neutral fitness landscape (the matrix of all ones) can be computed analytically for any mutation rate μ. For large N, the neutral landscape attains the maximum entropy rate, so for large populations, a sufficient condition for a state of a non-neutral landscape to be more stable than the same state under the neutral landscape is simply that it have a larger stationary probability [17]. For non-neutral landscapes, the large population limit need not maximize the entropy rate [17].

2.2.5. Comparison of Equilibria for Separate Processes on the Same States

Two instances of the Moran process with mutation can have the same stationary maximum state but different entropy rates. Consider the one-parameter family corresponding to a Hawk–Dove matrix
G = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}   (11)
and transition probabilities defined by Fermi selection (the parameter is the strength of selection β). For convenience, fix an even population size N ≥ 10 and let the mutation rate be μ = 1/N. Then, (N/2, N/2) is the unique stationary maximum [1]. As β increases, the stationary probability at the maximum increases more quickly than the entropy rate (which is not monotonic in this case). The net result is that the random trajectory entropy decreases as a function of β, so the stationary maximum becomes "more stable" (as expected for greater strengths of selection; see Figure 1). For other two-player games, the situation is analogous. For example, for the coordination game, the RTE of the interior equilibrium is decreasing as a function of β. For both games, we have the intuitive result that the stability (measured by the RTE) of the extrema varies monotonically with the strength of selection.
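A rough sketch of the β sweep behind Figure 1 (Hawk–Dove Matrix (11), N = 30, μ = 1/N), assuming the moran_transition_matrix, stationary_distribution, and random_trajectory_entropy helpers sketched earlier; the particular β values are arbitrary:

G = np.array([[1.0, 2.0], [2.0, 1.0]])   # Hawk-Dove payoffs, Matrix (11)
N, mu = 30, 1.0 / 30
for beta in (0.0, 2.0, 4.0, 8.0):
    T = moran_transition_matrix(N, G, beta, mu)
    s = stationary_distribution(T)
    # RTE of the central stationary maximum (15, 15), i.e., the state with a_1 = N/2
    print(beta, random_trajectory_entropy(T, s, N // 2))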
We now consider multiple examples for the landscape derived from the three-type game matrix:
G = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.   (12)
This landscape typically has several local extrema. Let the population size be N = 6N′ for a positive integer N′. Then, we have extrema at the simplex corners (6N′, 0, 0), (0, 6N′, 0), (0, 0, 6N′), at the centers of the boundary simplices (3N′, 3N′, 0), (3N′, 0, 3N′), (0, 3N′, 3N′), and at the center (2N′, 2N′, 2N′). Varying either β (Figure 2) or μ (Figure 3) can change which equilibria have the smallest RTE. As μ increases, stationary probability moves from the corner points of the simplex to the midpoints of the boundary simplices, and also toward the center (similarly for the strength of selection β). In both cases, the rate of change of the stationary extrema dominates because the entropy rate varies slowly.
Though we have focused on equilibria, the stationary distributions of finite population games can exhibit a variety of complex dynamical behaviors, as depicted in Figure 4. Consider the rock-paper-scissors (RPS) landscape given by the matrix
G = \begin{pmatrix} 0 & -1 & 1 \\ 1 & 0 & -1 \\ -1 & 1 & 0 \end{pmatrix}.   (13)
For some parameter choices, RPS landscapes produce an interesting stationary distribution with discretized cycles of constant trajectory entropy, analogous to the concentric cycles of the replicator equation, for which the relative entropy is a constant of motion [12]. Assuming symmetry of the cycle (a large population of size divisible by 3 seems to suffice to yield approximate symmetry), no value on any cycle is a local maximum, and the values on the maximal cycle are all global maxima. Note that in this case the stationary stability theorem as stated in [1] does not apply to the cycles (only to local extrema).

2.2.6. Comparison of Equilibria for Process with Varying Population Size

For the final example, we consider the effect of altering the population size N. In this case, the underlying state spaces are different, even though the equilibria are generally the same for large enough N. For the same number of types n, the entropy rate has the same upper bound (though the entropy rate typically increases with N), so to enable a fair comparison we normalize by the number of states (since the stationary distribution is spread out over a variable number of states). In general, the number of states is (N + n − 1 choose n − 1). As for both β and μ, varying the population size N changes the favored equilibrium (see Figure 5). However, we note that the RTEs are increasing in N, so the issue of normalization is critical to the comparison of equilibria for processes with different population sizes.

3. Discussion

We have proposed random trajectory entropy as a measure of the stability of states of finite Markov processes, and we considered several examples from finite population biology. Variations of fundamental evolutionary parameters alter the stability of equilibria, agreeing with intuitive expectations. In particular, stability is closely tracked by the stationary probability in several example population dynamics. We did not consider RTEs for paths that originate and terminate at different states, but it is reasonable to expect, for example, that a local stationary maximum will have smaller RTE in some neighborhood (and similarly for local minima).
Furthermore, as described by Blume [18], high selection intensity describes a population that is more likely to select best responses. As such, the examples in this paper show that different equilibria are favored depending on the application. In population biology, common assumptions are low mutation and low selection intensity, which may favor single-type populations. For human learning and decision making, higher selection intensity is expected; our examples show that mixed (interior) equilibria are more likely in this scenario.
Finally, we note that since the entropy rate is an invariant of a Markov process, both the entropy rate and our measure of the stability of a state allow comparisons between processes on the same state space across parameter space, and also when the state space varies (due to changing population size). The various examples in this manuscript show that this is a useful way to capture the impact of varying parameters in isolation and when the parameters are related, such as when μ = 1/N and μ → 0. Altogether, we have shown that random trajectory entropies are simple yet powerful invariants for understanding evolutionary processes and the relationships between mutation, fitness, and drift, and that each of the corresponding parameters can determine which equilibrium is most stable.

4. Methods and Materials

All computations were performed with open source code, archived at [19] in a Python package called stationary. This package can compute exact stationary distributions and entropy rates for reversible processes and approximate solutions for all other cases mentioned in this manuscript. All plots were created with matplotlib [20] and python-ternary [21].
The computation of the RTE for every state of a given process back to itself is as follows (a minimal sketch in code appears after this list):
  • Given values for all relevant parameters (e.g., a fitness landscape, β , N, μ ), generate the matrix of transition probabilities;
  • Compute the stationary distribution of the process, which can be done efficiently (in memory and time) with a power iteration and sparse matrix implementation, or simply standard matrix multiplication for processes with small state spaces;
  • With the stationary distribution and the matrix of transition probabilities, compute the entropy rate using Equation (1);
  • For each state, compute the RTE from the entropy rate and the stationary distribution via Equation (4).
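A compact sketch assembling these steps, assuming the helper functions from the earlier sketches (moran_transition_matrix, stationary_distribution, entropy_rate); this is our own illustration rather than the stationary package's API:

def rte_profile(N, game, beta, mu):
    T = moran_transition_matrix(N, game, beta, mu)   # step 1: transition probabilities
    s = stationary_distribution(T)                   # step 2: stationary distribution
    H = entropy_rate(T, s)                           # step 3: entropy rate, Equation (1)
    return H / s                                     # step 4: RTE of every state, Equation (4)

rtes = rte_profile(N=30, game=np.array([[1.0, 2.0], [2.0, 1.0]]), beta=1.0, mu=1/30)
print(int(np.argmin(rtes)))                          # index of the most stable state (smallest RTE)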

Author Contributions

M.H. and D.F. contributed to the conceptualization, code, and writing of this paper. Both authors have read and approved the final manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank the anonymous reviewers for their comments. Some of the work was done at Pomona College.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Harper, M.; Fryer, D. Stationary Stability for Evolutionary Dynamics in Finite Populations. Entropy 2016, 18, 316.
  2. Smith, J.M. Evolution and The Theory of Games; Cambridge University Press: Cambridge, UK, 1982.
  3. Ekroot, L.; Cover, T.M. The Entropy of Markov Trajectories. IEEE Trans. Inf. Theor. 1993, 39, 1418–1421.
  4. Kafsi, M.; Grossglauser, M.; Thiran, P. The Entropy of Conditional Markov Trajectories. IEEE Trans. Inf. Theor. 2013, 59, 5577–5583.
  5. Samuelson, L. Evolutionary Games and Equilibrium Selection; The MIT Press: Cambridge, MA, USA, 1998.
  6. Harsanyi, J.C.; Selten, R. A General Theory of Equilibrium Selection in Games; The MIT Press: Cambridge, MA, USA, 1988.
  7. Hordijk, A.; Ridder, A. Insensitive Bounds for The Stationary Distribution of Non-reversible Markov Chains. J. Appl. Probab. 1988, 25, 9–20.
  8. Fudenberg, D.; Imhof, L.; Nowak, M.A.; Taylor, C. Stochastic Evolution As a Generalized Moran Process. Unpublished work, 2004.
  9. Claussen, J.C.; Traulsen, A. Non-Gaussian Fluctuations Arising from Finite Populations: Exact Results for The Evolutionary Moran Process. Phys. Rev. E 2005, 71, 025101.
  10. Moran, P.A.P. The Statistical Processes of Evolutionary Theory. Am. J. Hum. Genet. 1962, 14, 438–439.
  11. Hofbauer, J.; Sigmund, K. Evolutionary Game Dynamics. Bull. Am. Math. Soc. 2003, 40, 479–519.
  12. Weibull, J.W. Evolutionary Game Theory; The MIT Press: Cambridge, MA, USA, 1997.
  13. Hofbauer, J.; Sigmund, K. Evolutionary Games and Population Dynamics; Cambridge University Press: Cambridge, UK, 1998.
  14. Lynch, M. Evolution of The Mutation Rate. Trends Genet. 2010, 26, 345–352.
  15. Traulsen, A.; Hauert, C. Stochastic Evolutionary Game Dynamics. Rev. Nonlinear Dyn. Complex. 2009, 2, 25–61.
  16. Harper, M. Inherent Randomness of Evolving Populations. Phys. Rev. E 2014, 89, 032709.
  17. Harper, M. Entropy Rates of The Multidimensional Moran Processes and Generalizations. arXiv 2014, arXiv:1401.2713.
  18. Blume, L.E. The Statistical Mechanics of Strategic Interaction. Games Econ. Behav. 1993, 5, 387–424.
  19. Harper, M. Stationary: Release 1.0.0; 2018. Available online: https://zenodo.org/record/1343125#.W3-HtbgRVPY (accessed on 23 May 2018).
  20. Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95.
  21. Harper, M.; Weinstein, B. Python-Ternary: Release 1.0.4; 2018. Available online: https://github.com/marcharper/python-ternary (accessed on 23 May 2018).
Figure 1. More intense selection yields greater stability at the maximum. Right: Stationary distributions for Hawk–Dove landscapes (using Matrix (11)) for varying strength of selection β ∈ [0, 8], N = 30, μ = 1/N (each line is a different value of β). As β increases, the stationary distributions become more concentrated on the central equilibrium. Top-Left: As β increases, so does the stationary probability (blue, lower curve) of the maximum at (15, 15). The entropy rate (green, upper) is not monotonically increasing in β. Lower-Left: Nevertheless, as β increases, the random trajectory entropy decreases monotonically, as expected intuitively. RTE: random trajectory entropy.
Figure 2. This n = 3 example for the landscape defined by Matrix (12), with N = 60 and μ = 1/N, has multiple local stationary extrema: at the center of the simplex, at the centers of the boundary simplices, and at the corners of the simplex. Top-Left: The entropy rate of the process as a function of the strength of selection β. Left-Center: As β increases, the stationary probability of each extremum changes. The curves correspond to the equilibria v_0: (N, 0, 0), v_1: (N/2, N/2, 0), and v_2: (N/3, N/3, N/3) (symmetric permutations of these states are also extrema and have the same probabilities). As the strength of selection increases, more stationary probability is concentrated on the central extremum. Lower-Left: As β increases, the (log) trajectory entropy of the boundary extrema increases while that of the central extremum decreases, showing that the strength of selection affects the stability of the equilibria. Which of the equilibria is most stable depends on the value of β. Right: Stationary distribution for β = 0.35.
Figure 3. (Top) Entropy rate, (Center) stationary probabilities of the extrema, and (Lower) log RTE for a process with N = 42, β = 1, landscape defined by Matrix (12), and varying mutation rate μ. As for the strength of selection β in Figure 2, the value of μ can determine which of the equilibria is most stable. As μ → 0, the corner states are favored. As μ increases, the interior equilibrium becomes more stable. Note that the RTE for v_0 grows very quickly and is not fully depicted.
Figure 4. Stationary distribution for a rock–paper–scissors (RPS) landscape (three population types, one per edge) with population N = 180, β = 1.5, μ = 1/N. There are apparent cycles of constant stationary probability and hence constant RTE. This is analogous to the concentric cycles of the replicator equation [13]. The central state is a local extremum. The boundary states are not shown in order to reveal more interior detail.
Figure 5. (Top) Entropy rate, (Center) stationary probabilities, and (Lower) log RTE for a process with β = 1, landscape defined by Matrix (12), varying population size N (divisible by 6), and μ = 1/N. As for the strength of selection β in Figure 2 and μ in Figure 3, the population size N can determine which of the equilibria is most stable. The trajectory entropies have been scaled by the number of states of the process, (N + 2 choose 2). The plotted equilibria are v_0: (N, 0, 0), v_1: (N/2, N/2, 0), and v_2: (N/3, N/3, N/3).
