Article

Maximizing Diversity in Biology and Beyond

1 School of Mathematics, University of Edinburgh, Edinburgh EH9 3FD, UK
2 Boyd Orr Centre for Population and Ecosystem Health, University of Glasgow, Glasgow G12 8QQ, UK
3 Department of Mathematics, Applied Mathematics, and Statistics, Case Western Reserve University, Cleveland, OH 44106, USA
* Authors to whom correspondence should be addressed.
Entropy 2016, 18(3), 88; https://doi.org/10.3390/e18030088
Submission received: 19 December 2015 / Revised: 22 February 2016 / Accepted: 24 February 2016 / Published: 9 March 2016
(This article belongs to the Special Issue Information and Entropy in Biological Systems)

Abstract

Entropy, under a variety of names, has long been used as a measure of diversity in ecology, as well as in genetics, economics and other fields. There is a spectrum of viewpoints on diversity, indexed by a real parameter q giving greater or lesser importance to rare species. Leinster and Cobbold (2012) proposed a one-parameter family of diversity measures taking into account both this variation and the varying similarities between species. Because of this latter feature, diversity is not maximized by the uniform distribution on species. So it is natural to ask: which distributions maximize diversity, and what is its maximum value? In principle, both answers depend on q, but our main theorem is that neither does. Thus, there is a single distribution that maximizes diversity from all viewpoints simultaneously, and any list of species has an unambiguous maximum diversity value. Furthermore, the maximizing distribution(s) can be computed in finite time, and any distribution maximizing diversity from some particular viewpoint q > 0 actually maximizes diversity for all q. Although we phrase our results in ecological terms, they apply very widely, with applications in graph theory and metric geometry.

1. Introduction

For decades, ecologists have used entropy-like quantities as measures of biological diversity. The basic premise is that given a biological community or ecosystem containing n species in proportions p_1, …, p_n, the entropy of the probability distribution (p_i) indicates the extent to which the community is balanced or “diverse”. Shannon entropy itself is often used; so too are many variants, as we shall see. But almost all of them share the property that for a fixed number n of species, the entropy is maximized by the uniform distribution p_i = 1/n.
However, there is a growing appreciation that this crude model of a biological community is too far from reality, in that it takes no notice of the varying similarities between species. For instance, we would intuitively judge a meadow to be more diverse if it consisted of ten dramatically different plant species than if it consisted of ten species of grass. This has led to the introduction of measures that do take into account inter-species similarities [1,2]. In mathematical terms, making this refinement essentially means extending the classical notion of entropy from probability distributions on a finite set to probability distributions on a finite metric space.
The maximum entropy problem now becomes more interesting. Consider, for instance, a pond community consisting of two very similar species of frog and one species of newt. We would not expect the maximum entropy (or diversity) to be achieved by the uniform distribution (1/3, 1/3, 1/3), since the community would then be 2/3 frog and only 1/3 newt. We might expect the maximizing distribution to be closer to (1/4, 1/4, 1/2); the exact answer should depend on the degrees of similarity of the species involved. We return to this scenario in Example 7.
For the sake of concreteness, this paper is written in terms of an ecological scenario: a community of organisms classified into species. However, nothing that we do is intrinsically ecological, or indeed connected to any specific branch of science. Our results apply equally to any collection of objects classified into types.
It is well understood that Shannon entropy is just one point (albeit a special one) on a continuous spectrum of entropies, indexed by a parameter q ∈ [0, ∞]. This spectrum has been presented in at least two ways: as the Rényi entropies H_q [3] and as the so-called Tsallis entropies S_q (actually introduced as biodiversity measures by Patil and Taillie prior to Tsallis’s work in physics, and earlier still in information theory [4,5,6]):

H_q(p) = (1/(1 − q)) log ∑_{i=1}^n p_i^q,   S_q(p) = (1/(q − 1)) (1 − ∑_{i=1}^n p_i^q).

Both H_q and S_q converge to Shannon entropy as q → 1. Moreover, H_q and S_q can be obtained from one another by an increasing invertible transformation, and in this sense are interchangeable.
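As a concrete illustration (ours, not part of the original analysis), here is a minimal numerical sketch in Python with NumPy; the function names are our own, and the q = 1 branches simply return the Shannon limit:

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy H_q(p) = log(sum_i p_i^q) / (1 - q) for q != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # sums run over the support only
    if q == 1:                        # limiting case: Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1 - q)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1) for q != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1:
        return -np.sum(p * np.log(p))
    return (1 - np.sum(p ** q)) / (q - 1)

# Both converge to the Shannon entropy as q -> 1:
p = [0.5, 0.3, 0.2]
for q in (0.9, 0.99, 0.999):
    print(q, renyi_entropy(p, q), tsallis_entropy(p, q))
print("Shannon:", renyi_entropy(p, 1))
```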
When H_q or S_q is used as a diversity measure, q controls the weight attached to rare species, with q = 0 giving as much importance to rare species as common ones and the limiting case q = ∞ reflecting only the prevalence of the most common species. Different values of q produce genuinely different judgements on which of two distributions is the more diverse. For instance, if over time a community loses some species but becomes more balanced, then the Rényi and Tsallis entropies decrease for q = 0 but increase for q = ∞. Varying q therefore allows us to incorporate a spectrum of viewpoints on the meaning of the word “diversity”.
Here we use the diversity measures introduced by Leinster and Cobbold [1], which both (i) reflect this spectrum of viewpoints by including the variable parameter q, and (ii) take into account the varying similarities between species. We review these measures in Section 2, Section 3 and Section 4. In the extreme case where different species are assumed to have nothing whatsoever in common, they reduce to the exponentials of the Rényi entropies, and in other special cases they reduce to other diversity measures used by ecologists. In practical terms, the measures of [1] have been used to assess a variety of ecological systems, from communities of microbes [7,8] and crustacean zooplankton [9] to alpine plants [10] and arctic predators [11], as well as being applied in non-biological contexts such as computer network security [12].
Mathematically, the set-up is as follows. A biological community is modelled as a probability distribution p = (p_1, …, p_n) (with p_i representing the proportion of the community made up of species i) together with an n × n matrix Z (whose (i, j)-entry represents the similarity between species i and j). From this data, a formula gives a real number ^qD_Z(p) for each q ∈ [0, ∞], called the “diversity of order q” of the community. As for the Rényi entropies, different values of q make different judgements: for instance, it may be that for two distributions p and p′,

^1D_Z(p) < ^1D_Z(p′) but ^2D_Z(p) > ^2D_Z(p′).
Now consider the maximum diversity problem. Fix a list of species whose similarities to one another are known; that is, fix a matrix Z (subject to hypotheses to be discussed). The two basic questions are:
  • Which distribution(s) p maximize the diversity ^qD_Z(p) of order q?
  • What is the value of the maximum diversity sup_p ^qD_Z(p)?
This can be interpreted ecologically as follows: if we have a fixed list of species and complete control over their abundances within our community, how should we choose those abundances in order to maximize the diversity, and how large can we make that diversity?
In principle, both answers depend on q. After all, we have seen that if distributions are ranked by diversity then the ranking varies according to the value of q chosen. But our main theorem is that, in fact, both answers are independent of q:
Theorem 1 
(Main theorem). There exists a probability distribution on {1, …, n} that maximizes ^qD_Z for all q ∈ [0, ∞]. Moreover, the maximum diversity sup_p ^qD_Z(p) is independent of q ∈ [0, ∞].
So, there is a “best of all possible worlds”: a distribution that maximizes diversity no matter what viewpoint one takes on the relative importance of rare and common species.
This theorem merely asserts the existence of a maximizing distribution. However, a second theorem describes how to compute all maximizing distributions, and the maximum diversity, in a finite number of steps (Theorem 2).
Better still, if by some means we have found a distribution p that maximizes the diversity of some order q > 0, then a further result asserts that p maximizes diversity of all orders (Corollary 2). For instance, it is often easiest to find a maximizing distribution for diversity of order ∞ (as in Example 6 and Proposition 2), and it is then automatic that this distribution maximizes diversity of all orders.
Let us put these results into context. First, they belong to the huge body of work on maximum entropy problems. For example, the normal distribution has the maximum entropy among all probability distributions on ℝ with a given mean and variance, a property which is intimately connected with its appearance in the central limit theorem. This alone would be enough motivation to seek maximum entropy distributions in other settings (such as the one at hand), quite apart from the importance of maximum entropy in thermodynamics, machine learning, macroecology, and so on.
Second, we will see that maximum diversity is very closely related to the emerging invariant known as magnitude. This is defined in the extremely wide generality of enriched category theory (Section 1 of [13]) and specializes in interesting ways in a variety of mathematical fields. For instance, it automatically produces a notion of the Euler characteristic of an (ordinary) category, closely related to the topological Euler characteristic [14]; in the context of metric spaces, magnitude encodes geometric information such as volume and dimension [15,16,17]; in graph theory, magnitude is a new invariant that turns out to be related to a graded homology theory for graphs [18,19]; and in algebra, magnitude produces an invariant of associative algebras that can be understood as a homological Euler characteristic [20].
This work is self-contained. To that end, we begin by explaining and defining the diversity measures in [1] (Section 2, Section 3 and Section 4). Then come the results: preparatory lemmas in Section 5, and the main results in Section 6 and Section 7. Examples are given in Section 8, Section 9 and Section 10, including results on special cases such as when the similarity matrix Z is either the adjacency matrix of a graph or positive definite. Perhaps counterintuitively, a distribution that maximizes diversity can eliminate some species entirely. This is addressed in Section 11, where we derive necessary and sufficient conditions on Z for maximization to preserve all species. Finally, we state some open questions (Section 12).
The main results of this paper previously appeared in a preprint of Leinster [21], but the proofs we present here are substantially simpler. Of the new results, Lemma 8 (the key to our results on preservation of species by maximizing distributions) borrows heavily from an argument of Fremlin and Talagrand [22].

Conventions

A vector x = (x_1, …, x_n) ∈ ℝ^n is nonnegative if x_i ≥ 0 for all i, and positive if x_i > 0 for all i. The support of x ∈ ℝ^n is

supp(x) = { i ∈ {1, …, n} : x_i ≠ 0 },

and x has full support if supp(x) = {1, …, n}. A real symmetric n × n matrix Z is positive semidefinite if xᵀZx ≥ 0 for all 0 ≠ x ∈ ℝ^n, and positive definite if this inequality is strict.

2. A Spectrum of Viewpoints on Diversity

Ecologists began to propose quantitative definitions of biological diversity in the mid-twentieth century [23,24], setting in motion more than fifty years of heated debate, dozens of further proposed diversity measures, hundreds of scholarly papers, at least one book devoted to the subject [25], and consequently, for some, despair (already expressed by 1971 in a famously-titled paper of Hurlbert [26]). Meanwhile, parallel discussions were taking place in disciplines such as genetics [27], economists were using the same formulas to measure wealth inequality and industrial concentration [28], and information theorists were developing the mathematical theory of such quantities under the name of entropy rather than diversity.
Obtaining accurate data about an ecosystem is beset with practical and statistical problems, but that is not the reason for the prolonged debate. Even assuming that complete information is available, there are genuine differences of opinion about what the word “diversity” should mean. We focus here on one particular axis of disagreement, illustrated by the examples in Figure 1.
One extreme viewpoint on diversity is that preservation of species is all that matters: “biodiversity” simply means the number of species present (as is common in the ecological literature as well as the media). Since no attention is paid to the abundances of the species present, rare species count for exactly as much as common species. From this viewpoint, community (a) of Figure 1 is more diverse than community (b), simply because it contains more species.
The opposite extreme is to ignore rare species altogether and consider only those that are most common. (This might be motivated by a focus on overall ecosystem function.) From this viewpoint, community (b) is more diverse than community (a), because it is better-balanced: (a) is dominated by a single common species, whereas (b) has three common species in equal proportions.
Between these two extremes, there is a spectrum of intermediate viewpoints, attaching more or less weight to rare species. Different scientists have found it appropriate to adopt different positions on this spectrum for different purposes, as the literature amply attests.
Rather than attempting to impose one particular viewpoint, we will consider all equally. Thus, we use a one-parameter family of diversity measures, with the “viewpoint parameter” q ∈ [0, ∞] controlling one’s position on the spectrum. Taking q = 0 will give rare species as much importance as common species, while taking q = ∞ will give rare species no importance at all.
There is one important dimension missing from the discussion so far. We will consider not only the varying abundances of the species, but also the varying similarities between them. This is addressed in the next section.

3. Distributions on a Set with Similarities

In this section and the next, we give a brief introduction to the diversity measures of Leinster and Cobbold [1]. We have two tasks. We must build a mathematical model of the notion of “biological community” (this section). Then, we must define and explain the diversity measures themselves (next section).
In brief, a biological community will be modelled as a finite set (whose elements are the species) equipped with both a probability distribution (indicating the relative abundances of the species) and, for each pair of elements of the set, a similarity coefficient (reflecting the similarities between species).
Let us now consider each of these aspects in turn. First, we assume a community or system of individuals, partitioned into n ≥ 1 species. The word “species” need not have its standard meaning: it can denote any unit thought meaningful, such as genus, serotype (in the case of viruses), or the class of organisms having a particular type of diet. It need not even be a biological grouping; for instance, in [29] the units are soil types. For concreteness, however, we write in terms of an ecological community divided into species. The division of a system into species or types may be somewhat artificial, but this is mitigated by the introduction of the similarity coefficients (as shown in [1], p. 482).
Second, each species has a relative abundance, the proportion of organisms in the community belonging to that species. Thus, listing the species in order as 1, …, n, the relative abundances determine a vector p = (p_1, …, p_n). This is a probability distribution: p_i ≥ 0 for each species i, and ∑_{i=1}^n p_i = 1. Abundance can be measured in any way thought relevant, e.g., number of individuals, biomass, or (in the case of plants) ground coverage.
Critically, the word “diversity” refers only to the relative, not absolute, abundances. If half of a forest burns down, or if a patient loses 90% of their gut bacteria, then it may be an ecological or medical disaster; but assuming that the system is well-mixed, the diversity does not change. In the language of physics, diversity is an intensive quantity (like density or temperature) rather than an extensive quantity (like mass or heat), meaning that it is independent of the system’s size.
The third and final aspect of the model is inter-species similarity. For each pair (i, j) of species, we specify a real number Z_ij representing the similarity between species i and j. This defines an n × n matrix Z = (Z_ij)_{1≤i,j≤n}. In [1], similarity is taken to be measured on a scale of 0 to 1, with 0 meaning total dissimilarity and 1 that the species are identical. Thus, it is assumed there that

0 ≤ Z_ij ≤ 1 for all i, j,   Z_ii = 1 for all i. (1)

In fact, our maximization theorems will only require the weaker hypotheses

Z_ij ≥ 0 for all i, j,   Z_ii > 0 for all i (2)

together with the requirement that Z is a symmetric matrix. (In the appendix to [1], matrices satisfying conditions (2) were called “relatedness matrices”.)
Just as the meanings of “species” and “abundance” are highly flexible, so too is the meaning of “similarity”:
Example 1. The simplest similarity matrix Z is the identity matrix I. This is called the naive model in [1], since it embodies the assumption that distinct species have nothing in common. Crude though this assumption is, it is implicit in the diversity measures most popular in the ecological literature (Table 1 of [1]).
Example 2. With the rapid fall in the cost of DNA sequencing, it is increasingly common to measure similarity genetically (in any of several ways). Thus, the coefficients Z i j may be chosen to represent percentage genetic similarities between species. This is an effective strategy even when the taxonomic classification is unclear or incomplete [1], as is often the case for microbial communities [7].
Example 3. Given a suitable phylogenetic tree, we may define the similarity between two present-day species as the proportion of evolutionary time before the point at which the species diverged.
Example 4. In the absence of more refined data, we can measure species similarity according to a taxonomic tree. For instance, we might define

Z_ij = 1 if i = j; 0.8 if species i and j are different but of the same genus; 0.5 if species i and j are of different genera but the same family; 0 otherwise.
Example 5. In purely mathematical terms, an important case is where the similarity matrix arises from a metric d on the set {1, …, n} via the formula Z_ij = e^{−d(i,j)}. Thus, the community is modelled as a probability distribution on a finite metric space. (The naive model corresponds to the metric defined by d(i, j) = ∞ for all i ≠ j.) The diversity measures that we will shortly define can be understood as (the exponentials of) Rényi-like entropies for such distributions.

4. The Diversity Measures

Here we state the definition of the diversity measures of [1], which we will later seek to maximize. We then explain the reasons for this particular definition.
As in Section 3, we take a biological community modelled as a finite probability distribution p = (p_1, …, p_n) together with an n × n matrix Z satisfying conditions (2). As explained in Section 2, we define not one diversity measure but a family of them, indexed by a parameter q ∈ [0, ∞] controlling the emphasis placed on rare species. The diversity of order q of the community is

^qD_Z(p) = ( ∑_{i ∈ supp(p)} p_i (Zp)_i^{q−1} )^{1/(1−q)} (3)

(q ≠ 1, ∞). Here supp(p) is the support of p (Conventions, Section 1), Zp is the column vector obtained by multiplying the matrix Z by the column vector p, and (Zp)_i is its i-th entry. Conditions (2) imply that (Zp)_i > 0 whenever i ∈ supp(p), and so ^qD_Z(p) is well-defined.
Although this formula is invalid for q = 1, it converges as q → 1, and ^1D_Z(p) is defined to be the limit. The same is true for q = ∞. Explicitly,

^1D_Z(p) = ∏_{i ∈ supp(p)} (Zp)_i^{−p_i} = exp( −∑_{i ∈ supp(p)} p_i log (Zp)_i ),   ^∞D_Z(p) = 1 / max_{i ∈ supp(p)} (Zp)_i.
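For readers who want to experiment, the following is a small sketch (ours, in Python with NumPy; not from [1]) of Equation (3) together with its limiting cases:

```python
import numpy as np

def diversity(q, Z, p):
    """Diversity of order q, as in Equation (3), with the limits
    q = 1 and q = inf handled by their explicit formulas above."""
    Z = np.asarray(Z, dtype=float)
    p = np.asarray(p, dtype=float)
    supp = p > 0
    Zp = (Z @ p)[supp]          # ordinariness (Zp)_i on the support
    ps = p[supp]
    if q == 1:
        return np.exp(-np.sum(ps * np.log(Zp)))
    if q == np.inf:
        return 1.0 / np.max(Zp)
    return np.sum(ps * Zp ** (q - 1)) ** (1.0 / (1.0 - q))

# In the naive model Z = I, these are the Hill numbers (see below):
p = [0.7, 0.2, 0.1]
I3 = np.eye(3)
print(diversity(0, I3, p))        # species richness: 3.0
print(diversity(np.inf, I3, p))   # Berger-Parker: 1/0.7
```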
The applicability, context and meaning of Equation (3) are discussed at length in [1]. Here we briefly review the principal points.
First, the definition includes as special cases many existing quantities going by the name of diversity or entropy. For instance, in the naive model Z = I, the diversity ^qD_I(p) is the exponential of the Rényi entropy of order q, and is also known in ecology as the Hill number of order q. (References for this and the next two paragraphs are given in Table 1 of [1].)
Continuing in the naive model Z = I and specializing further to particular values of q, we obtain other known quantities: ^0D_I(p) is species richness (the number of species present), ^1D_I(p) is the exponential of Shannon entropy, ^2D_I(p) is the inverse Simpson concentration (the reciprocal of the probability that two randomly-chosen individuals are of the same species), and ^∞D_I(p) = 1/max_i p_i is the Berger–Parker index (a measure of the dominance of the most abundant species).
Now allowing a general Z, the diversity of order 2 is 1/∑_{i,j} p_i Z_ij p_j. Thus, diversity of order 2 is the reciprocal of the expected similarity between a random pair of individuals. (The meaning given to “similarity” will determine the meaning of the diversity measure: taking the coefficients Z_ij to be genetic similarities produces a genetic notion of diversity, and similarly phylogenetic, taxonomic, and so on.) Up to an increasing, invertible transformation, this is the well-studied quantity known as Rao’s quadratic entropy.
Given distributions p and p′ on the same list of species, different values of q may make different judgements on which of p and p′ is the more diverse. For instance, with Z = I and the two distributions shown in Figure 1, taking q = 0 makes community (a) more diverse and embodies the first “extreme viewpoint” described in Section 2, whereas q = ∞ makes (b) more diverse and embodies the opposite extreme.
It is therefore most informative if we calculate the diversity of all orders q ∈ [0, ∞]. The graph of ^qD_Z(p) against q is called the diversity profile of p. Two distributions p and p′ can be compared by plotting their diversity profiles on the same axes. If one curve is wholly above the other then the corresponding distribution is unambiguously more diverse. If they cross then the judgement as to which is the more diverse depends on how much importance is attached to rare species.
The formula for q D Z ( p ) can be understood as follows.
First, for a given species i, the quantity (Zp)_i = ∑_j Z_ij p_j is the expected similarity between species i and an individual chosen at random. Differently put, (Zp)_i measures the ordinariness of the i-th species within the community; in [1], it is called the “relative abundance of species similar to the i-th”. Hence, the mean ordinariness of an individual in the community is ∑_i p_i (Zp)_i. This measures the lack of diversity of the community, so its reciprocal is a measure of diversity. This is exactly ^2D_Z(p).
To explain the diversity of orders q ≠ 2, we recall the classical notion of power mean. Let p = (p_1, …, p_n) be a finite probability distribution and let x = (x_1, …, x_n) ∈ [0, ∞)^n, with x_i > 0 whenever p_i > 0. For real t ≠ 0, the power mean of x of order t, weighted by p, is

M_t(p, x) = ( ∑_{i ∈ supp(p)} p_i x_i^t )^{1/t}

(Chapter II of [30]). This definition is extended to t = 0 and t = ±∞ by taking limits in t, which gives

M_{−∞}(p, x) = min_{i ∈ supp(p)} x_i,   M_0(p, x) = ∏_{i ∈ supp(p)} x_i^{p_i},   M_∞(p, x) = max_{i ∈ supp(p)} x_i.

Now, when we take the “mean ordinariness” in the previous paragraph, we can replace the ordinary arithmetic mean (the case t = 1) by the power mean of order t = q − 1. Again taking the reciprocal, we obtain exactly Equation (3). That is,

^qD_Z(p) = 1 / M_{q−1}(p, Zp) (4)

for all p, Z, and q ∈ [0, ∞]. So in all cases, diversity is the reciprocal mean ordinariness of an individual within the community, for varying interpretations of “mean”.
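Equation (4) is easy to check numerically; here is a quick self-contained sketch (ours), comparing the direct formula with the power-mean form on a random community:

```python
import numpy as np

def power_mean(t, p, x):
    """Weighted power mean M_t(p, x), with the limit cases t = 0, +-inf."""
    p = np.asarray(p, dtype=float)
    x = np.asarray(x, dtype=float)
    x, p = x[p > 0], p[p > 0]
    if t == 0:
        return np.prod(x ** p)
    if t == -np.inf:
        return np.min(x)
    if t == np.inf:
        return np.max(x)
    return np.sum(p * x ** t) ** (1.0 / t)

rng = np.random.default_rng(0)
Z = rng.uniform(0.2, 1.0, (4, 4))
Z = (Z + Z.T) / 2                 # symmetric ...
np.fill_diagonal(Z, 1.0)          # ... with Z_ii = 1, so conditions (2) hold
p = rng.dirichlet(np.ones(4))     # a random distribution of full support
for q in (0.0, 0.5, 2.0, 3.0):    # q = 1 excluded: the direct formula is invalid there
    direct = np.sum(p * (Z @ p) ** (q - 1)) ** (1.0 / (1.0 - q))
    assert np.isclose(direct, 1.0 / power_mean(q - 1, p, Z @ p))
```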
The diversity measures ^qD_Z(p) have many good properties, discussed in [1]. Crucially, they are effective numbers: that is,

^qD_I(1/n, …, 1/n) = n

for all q and n. This gives meaning to the quantities ^qD_Z(p): if ^qD_Z(p) = 32.8, say, then the community is nearly as diverse as a community of 33 completely dissimilar species in equal proportions. With the stronger assumptions (1) on Z, the value of ^qD_Z(p) always lies between 1 and n.
Diversity profiles are decreasing: as less emphasis is given to rare species, perceived diversity drops. More precisely:
Proposition 1. 
Let p be a probability distribution on {1, …, n} and let Z be an n × n matrix satisfying conditions (2). If (Zp)_i has the same value K for all i ∈ supp(p) then ^qD_Z(p) = 1/K for all q ∈ [0, ∞]. Otherwise, ^qD_Z(p) is strictly decreasing in q ∈ [0, ∞].
Proof. 
This is immediate from Equation (4) and a classical result on power means (Theorem 16 of [30]): M_t(p, x) is increasing in t, strictly so unless x_i has the same value K for all i ∈ supp(p), in which case it has constant value K. ☐
So, any diversity profile is either constant or strictly decreasing. The first part of the next lemma states that diversity profiles are also continuous:
Lemma 1. 
Fix an n × n matrix Z satisfying conditions (2). Then:
i.
^qD_Z(p) is continuous in q ∈ [0, ∞] for each distribution p;
ii.
^qD_Z(p) is continuous in p for each q ∈ (0, ∞).
Proof. 
See Propositions A1 and A2 of the appendix of [1]. ☐
Finally, the measures have the sensible property that if some species have zero abundance, then the diversity is the same as if they were not mentioned at all. To express this, we introduce some notation: given a subset B ⊆ {1, …, n}, we denote by Z_B the submatrix (Z_ij)_{i,j∈B} of Z.
Lemma 2 
(Absent species). Let Z be an n × n matrix satisfying conditions (2). Let B ⊆ {1, …, n}, and let p be a probability distribution on {1, …, n} such that p_i = 0 for all i ∉ B. Then, writing p′ for the restriction of p to B,

^qD_{Z_B}(p′) = ^qD_Z(p)

for all q ∈ [0, ∞].
Proof. 
This is trivial, and is also an instance of a more general naturality property (Lemma A13 in the appendix of [1]). ☐

5. Preparatory Lemmas

For the rest of this work, fix an integer n ≥ 1 and an n × n symmetric matrix Z of nonnegative reals whose diagonal entries are positive (that is, strictly greater than zero). Also write

Δ_n = { (p_1, …, p_n) ∈ ℝ^n : p_i ≥ 0, p_1 + ⋯ + p_n = 1 }

for the set of probability distributions on {1, …, n}.
To prove the main theorem, we begin by making two apparent digressions.
Let M be any matrix. A weighting on M is a column vector w such that Mw is the column vector whose every entry is 1. It is trivial to check that if both M and its transpose have at least one weighting, then the quantity ∑_i w_i is independent of the choice of weighting w on M; this quantity is called the magnitude |M| of M (Section 1.1 of [13]).
When M is symmetric (the case of interest here), |M| is defined just as long as M has at least one weighting. When M is invertible, M has exactly one weighting and |M| is the sum of all the entries of M^{−1}.
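In the invertible case, a weighting is found by solving one linear system; a minimal sketch (ours) in Python:

```python
import numpy as np

def magnitude(M):
    """For invertible M: solve M w = (1, ..., 1)^T for the unique
    weighting w and return (|M|, w), where |M| = sum_i w_i equals
    the sum of all entries of the inverse of M."""
    w = np.linalg.solve(M, np.ones(len(M)))
    return w.sum(), w

Z = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(magnitude(Z))   # |Z| = 4/3, with weighting w = (2/3, 2/3)
```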
The second digression concerns the dichotomy expressed in Proposition 1: every diversity profile is either constant or strictly decreasing. We now ask: which distributions have constant diversity profile?
This question turns out to have a clean answer in terms of weightings and magnitude. To state it, we make some further definitions.
Definition 1. 
A probability distribution p on {1, …, n} is invariant if ^qD_Z(p) = ^{q′}D_Z(p) for all q, q′ ∈ [0, ∞].
Let B ⊆ {1, …, n}, and let 0 ≠ w ∈ [0, ∞)^B be a nonnegative vector. Then there is a probability distribution p(w) on {1, …, n} defined by

(p(w))_i = w_i / ∑_{j ∈ B} w_j if i ∈ B, and (p(w))_i = 0 otherwise.

In particular, let B be a nonempty subset of {1, …, n} and w a nonnegative weighting on Z_B = (Z_ij)_{i,j∈B}. Then w ≠ 0, so p(w) is defined, and (p(w))_i = w_i/|Z_B| for all i ∈ B.
Lemma 3. 
The following are equivalent for p ∈ Δ_n:
i.
p is invariant;
ii.
(Zp)_i = (Zp)_j for all i, j ∈ supp(p);
iii.
p = p(w) for some nonnegative weighting w on Z_B and some nonempty subset B ⊆ {1, …, n}.
Moreover, in the situation of (iii), ^qD_Z(p) = |Z_B| for all q ∈ [0, ∞].
Proof. 
(i) ⇔ (ii) is immediate from Proposition 1.
For (ii) ⇒ (iii), assume (ii). Put B = supp(p) and write K = (Zp)_i for any i ∈ B. Then K > 0, so we may define w ∈ ℝ^B by w_i = p_i/K (i ∈ B). Evidently p = p(w) and w is nonnegative. Furthermore, w is a weighting on Z_B, since whenever i ∈ B,

(Z_B w)_i = ∑_{j ∈ B} Z_ij p_j / K = ∑_{j=1}^n Z_ij p_j / K = 1.

Finally, for (iii) ⇒ (ii) and “moreover”, take B and w as in (iii). Then supp(p(w)) ⊆ B, so for all i ∈ supp(p(w)),

(Z p(w))_i = (Z_B w)_i / |Z_B| = 1/|Z_B|.

Hence ^qD_Z(p(w)) = |Z_B| for all q ∈ [0, ∞] by Proposition 1. ☐
We now prove a result that is much weaker than the main theorem, but will act as a stepping stone in the proof.
Lemma 4. 
For each q ∈ (0, 1), there exists an invariant distribution that maximizes ^qD_Z.
Proof. 
Let q ∈ (0, 1). Then ^qD_Z is continuous on the compact space Δ_n (Lemma 1(ii)), so attains a maximum at some point p. Take j, k ∈ supp(p) such that (Zp)_j is least and (Zp)_k is greatest. By Lemma 3, it is enough to prove that (Zp)_j = (Zp)_k.
Define δ_j ∈ Δ_n by taking (δ_j)_i to be the Kronecker delta δ_ji, and δ_k similarly. Then p + t(δ_j − δ_k) ∈ Δ_n for all real t sufficiently close to 0, and

0 = d/dt [ ^qD_Z(p + t(δ_j − δ_k))^{1−q} ] |_{t=0} (5)
  = (q − 1) ( ∑_{i ∈ supp(p)} Z_ij p_i (Zp)_i^{q−2} − ∑_{i ∈ supp(p)} Z_ik p_i (Zp)_i^{q−2} ) + (Zp)_j^{q−1} − (Zp)_k^{q−1} (6)
  ≥ (q − 1) ( ∑_{i=1}^n Z_ij p_i (Zp)_j^{q−2} − ∑_{i=1}^n Z_ik p_i (Zp)_k^{q−2} ) + (Zp)_j^{q−1} − (Zp)_k^{q−1} (7)
  = q ( (Zp)_j^{q−1} − (Zp)_k^{q−1} ) (8)
  ≥ 0, (9)

where Equation (5) holds because ^qD_Z attains its maximum at p, Equation (6) is a routine computation, inequalities (7) and (9) follow from the defining properties of j and k, and Equation (8) uses the symmetry of Z. Equality therefore holds throughout, and in particular in (9). Hence (Zp)_j = (Zp)_k, as required. ☐
An alternative proof uses Lagrange multipliers, but is complicated by the possibility that ^qD_Z attains its maximum on the boundary of Δ_n.
The result we have just proved only concerns the maximization of ^qD_Z for specific values of q, but the following lemma will allow us to deduce results about maximization for all q simultaneously.
Definition 2. 
A probability distribution on {1, …, n} is maximizing if it maximizes ^qD_Z for all q ∈ [0, ∞].
Lemma 5. 
For 0 ≤ q ≤ q′ ≤ ∞, any invariant distribution that maximizes ^qD_Z also maximizes ^{q′}D_Z. In particular, any invariant distribution that maximizes ^0D_Z is maximizing.
Proof. Let 0 ≤ q ≤ q′ ≤ ∞ and let p be an invariant distribution that maximizes ^qD_Z. Then for all r ∈ Δ_n,

^{q′}D_Z(r) ≤ ^qD_Z(r) ≤ ^qD_Z(p) = ^{q′}D_Z(p),

since diversity profiles are decreasing (Proposition 1). ☐

6. The Main Theorem

For convenience, we restate the main theorem:
Theorem 1 
(Main theorem). There exists a probability distribution on {1, …, n} that maximizes ^qD_Z for all q ∈ [0, ∞]. Moreover, the maximum diversity sup_{p ∈ Δ_n} ^qD_Z(p) is independent of q ∈ [0, ∞].
Proof. 
An equivalent statement is that there exists an invariant maximizing distribution. To prove this, choose a decreasing sequence (q_λ)_{λ=1}^∞ in (0, 1) converging to 0. By Lemma 4, we can choose for each λ ≥ 1 an invariant distribution p_λ that maximizes ^{q_λ}D_Z. Since Δ_n is compact, we may assume (by passing to a subsequence if necessary) that the sequence (p_λ) converges to some point p ∈ Δ_n. We will show that p is invariant and maximizing.
We show that p is invariant using Lemma 3. Let i, j ∈ supp(p). Then i, j ∈ supp(p_λ) for all λ ≫ 0, so (Zp_λ)_i = (Zp_λ)_j for all λ ≫ 0, and letting λ → ∞ gives (Zp)_i = (Zp)_j.
To show that p is maximizing, first note that p_{λ′} maximizes ^{q_λ}D_Z whenever λ′ ≥ λ ≥ 1 (by Lemma 5). Fixing λ and letting λ′ → ∞, this implies that p maximizes ^{q_λ}D_Z, since ^{q_λ}D_Z is continuous (Lemma 1(ii)).
Thus, p maximizes ^{q_λ}D_Z for all λ. But q_λ → 0 as λ → ∞, and diversity is continuous in its order (Lemma 1(i)), so p maximizes ^0D_Z. Since p is invariant, Lemma 5 implies that p is maximizing. ☐
The theorem can be understood as follows (Figure 2a). Each particular value of the viewpoint parameter q ranks the set of all distributions p in order of diversity, with p placed above p′ when ^qD_Z(p) > ^qD_Z(p′). Different values of q rank the set of distributions differently. Nevertheless, there is a distribution p_max that is at the top of every ranking. This is the content of the first half of Theorem 1.
Alternatively, we can visualize the theorem in terms of diversity profiles (Figure 2b). Diversity profiles may cross, reflecting the different priorities embodied by different values of q. But there is at least one distribution p_max whose profile is above every other profile; moreover, its profile is constant.
Theorem 1 immediately implies:
Corollary 1. 
Every maximizing distribution is invariant.
This result can be partially understood as follows. For Shannon entropy, and more generally any of the Rényi entropies, the maximizing distribution is obtained by taking the relative abundance p_i to be the same for all species i. This is no longer true when inter-species similarities are taken into account. However, what is approximately true is that diversity is maximized when (Zp)_i, the relative abundance of species similar to the i-th, is the same for all species i. This follows from Corollary 1 together with the characterization of invariant distributions in Lemma 3(ii); but it is only “approximately true” because it is only guaranteed that (Zp)_i = (Zp)_j when i and j both belong to the support of p, not for all i and j. It may in fact be that some or all maximizing distributions do not have full support, a phenomenon we examine in Section 11.
The second half of Theorem 1 tells us that associated with the matrix Z is a numerical invariant, the constant value of the diversity profile of a maximizing distribution:
Definition 3. 
The maximum diversity of Z is D_max(Z) = sup_{p ∈ Δ_n} ^qD_Z(p), for any q ∈ [0, ∞].
We show how to compute D_max(Z) in the next section.
If a distribution p maximizes diversity of order 2, say, must it also maximize diversity of orders 1 and ∞? The answer turns out to be yes:
Corollary 2. 
Let p be a probability distribution on {1, …, n}. If p maximizes ^qD_Z for some q ∈ (0, ∞] then p maximizes ^qD_Z for all q ∈ [0, ∞].
Proof. 
Let q ∈ (0, ∞] and let p be a distribution maximizing ^qD_Z. Then

^qD_Z(p) ≤ ^0D_Z(p) ≤ D_max(Z) = ^qD_Z(p),

where the first inequality holds because diversity profiles are decreasing. So equality holds throughout. Now ^qD_Z(p) = ^0D_Z(p) with q ≠ 0, so Proposition 1 implies that p is invariant. But also ^0D_Z(p) = D_max(Z), so p maximizes ^0D_Z. Hence by Lemma 5, p is maximizing. ☐
The significance of this corollary is that if we wish to find a distribution that maximizes diversity of all orders q, it suffices to find a distribution that maximizes diversity of a single nonzero order.
The hypothesis that q > 0 in Corollary 2 cannot be dropped. Indeed, take Z = I. Then ^0D_I(p) is species richness (the cardinality of supp(p)), which is maximized by any distribution p of full support, whereas ^1D_I(p) is the exponential of Shannon entropy, which is maximized only when p is uniform.

7. The Computation Theorem

The main theorem guarantees the existence of a maximizing distribution p_max, but does not tell us how to find it. It also states that ^qD_Z(p_max) is independent of q, but does not tell us what its value is. The following result repairs both omissions.
Theorem 2 
(Computation theorem). The maximum diversity and maximizing distributions of Z are given as follows:
i.
For all q ∈ [0, ∞],

sup_{p ∈ Δ_n} ^qD_Z(p) = max_B |Z_B| (10)

where the maximum is over all B ⊆ {1, …, n} such that Z_B admits a nonnegative weighting.
ii.
The maximizing distributions are precisely those of the form p(w) where w is a nonnegative weighting on Z_B for some B attaining the maximum in Equation (10).
Proof. 
Let q ∈ [0, ∞]. Then

sup { ^qD_Z(p) : p ∈ Δ_n } = sup { ^qD_Z(p) : p ∈ Δ_n, p is invariant } (11)
 = sup { |Z_B| : ∅ ≠ B ⊆ {1, …, n}, Z_B admits a nonnegative weighting } (12)
 = max { |Z_B| : ∅ ≠ B ⊆ {1, …, n}, Z_B admits a nonnegative weighting }, (13)

where Equation (11) follows from the fact that there is an invariant maximizing distribution (Theorem 1), Equation (12) follows from Lemma 3, and Equation (13) follows because the supremum in Equation (12) is over a finite, nonempty set of values (every singleton B = {i} admits the nonnegative weighting w_i = 1/Z_ii), and is therefore attained.
This proves part (i). Every maximizing distribution is invariant (Corollary 1), so part (ii) follows from Lemma 3. ☐
Remark 1. The computation theorem provides a finite-time algorithm for finding all the maximizing distributions and computing D_max(Z), as follows. For each of the 2^n subsets B of {1, …, n}, perform some simple linear algebra to find the space of nonnegative weightings on Z_B; if this space is nonempty, call B feasible and record the magnitude |Z_B|. Then D_max(Z) is the maximum of all the recorded magnitudes. For each feasible B such that |Z_B| = D_max(Z), and each nonnegative weighting w on Z_B, the distribution p(w) is maximizing. This generates all of the maximizing distributions.
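A minimal sketch of this algorithm (ours, in Python with NumPy; not the Matlab implementation mentioned below) follows. For simplicity it finds weightings only by a linear solve, so subsets B with singular Z_B, which can carry whole families of weightings, are skipped; handling them requires slightly more linear algebra.

```python
import numpy as np
from itertools import combinations

def maximum_diversity(Z, tol=1e-9):
    """Exhaustive algorithm of Remark 1: for each nonempty subset B,
    look for a nonnegative weighting on Z_B; return D_max(Z) together
    with one maximizing distribution p(w) per optimal feasible B.
    Simplification: subsets with singular Z_B are skipped."""
    Z = np.asarray(Z, dtype=float)
    n = len(Z)
    best, maximizers = 0.0, []
    for r in range(1, n + 1):
        for B in combinations(range(n), r):
            ZB = Z[np.ix_(B, B)]
            try:
                w = np.linalg.solve(ZB, np.ones(r))   # weighting: Z_B w = 1
            except np.linalg.LinAlgError:
                continue
            if np.any(w < -tol):
                continue                              # not nonnegative
            mag = w.sum()                             # magnitude |Z_B|
            p = np.zeros(n)
            p[list(B)] = w / mag                      # the distribution p(w)
            if mag > best + tol:
                best, maximizers = mag, [p]
            elif abs(mag - best) <= tol:
                maximizers.append(p)
    return best, maximizers
```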
This algorithm takes exponentially many steps in n, and Remark 3 provides strong evidence that the time taken cannot be reduced to a polynomial in n. But the situation is not as hopeless as it might appear, for two reasons.
First, each step of the algorithm is fast, consisting as it does of solving a system of linear equations. For instance, in an implementation in Matlab on a standard laptop, with no attempt at optimization, the maximizing distributions of 25 × 25 matrices were computed in a few seconds. (We thank Christina Cobbold for carrying out this implementation.) Second, for certain classes of matrices Z , we can make substantial improvements in computing time, as observed in Section 10.

8. Simple Examples

The next three sections give examples of the main results, beginning here with some simple, specific examples.
Example 6. First consider the naive model Z = I, in which different species are deemed to be entirely dissimilar. As noted in Section 4, ^qD_I(p) is the exponential of the Rényi entropy of order q. It is well-known that Rényi entropy of any order q > 0 is maximized uniquely by the uniform distribution. This result also follows trivially from Corollary 2: for clearly ^∞D_I(p) = 1/max_i p_i is uniquely maximized by the uniform distribution, and the corollary implies that the same is true for all values of q > 0. Moreover, D_max(I) = |I| = n.
Example 7. For a general matrix Z satisfying conditions (1), a two-species system is always maximized by the uniform distribution p_1 = p_2 = 1/2. When n = 3, however, nontrivial examples arise. For instance, take the system shown in Figure 3, consisting of one species of newt and two species of frog. Let us first consider intuitively what we expect the maximizing distribution to be, then compare this with the answer given by Theorem 2.
If we ignore the fact that the two frog species are more similar to each other than they are to the newt, then (as in Example 6) the maximizing distribution is (1/3, 1/3, 1/3). At the other extreme, if we regard the two frog species as essentially identical then effectively there are only two species, newts and frogs, so the maximizing distribution gives relative abundance 0.5 to the newt and 0.5 to the frogs. So with this assumption, we expect diversity to be maximized by the distribution (0.5, 0.25, 0.25).
Intuitively, then, the maximizing distribution should lie between these two extremes. And indeed, it does: implementing the algorithm in Remark 1 (or using Proposition 3 below) reveals that the unique maximizing distribution is (0.478, 0.261, 0.261).
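Figure 3 itself is not reproduced here, but taking frog–frog similarity 0.9 and frog–newt similarity 0.4 (hypothetical values, chosen because they reproduce the maximizing distribution quoted above), the matrix is ultrametric (Example 12 below), so the computation reduces to a single linear solve:

```python
import numpy as np

# Assumed similarity matrix: species 1 = newt, species 2 and 3 = frogs.
Z = np.array([[1.0, 0.4, 0.4],
              [0.4, 1.0, 0.9],
              [0.4, 0.9, 1.0]])
w = np.linalg.solve(Z, np.ones(3))   # the unique weighting on Z
p_max = w / w.sum()                  # normalize: the maximizing distribution
print(np.round(p_max, 3))            # [0.478 0.261 0.261]
```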
One of our standing hypotheses on Z is symmetry. The last of our simple examples shows that if Z is no longer assumed to be symmetric, then the main theorem fails in every respect.
Example 8. Let

Z = ( 1 1/2
      0 1 ),

which satisfies all of our standing hypotheses except symmetry. Consider a distribution p = (p_1, p_2) ∈ Δ_2. If p is (1, 0) or (0, 1) then ^qD_Z(p) = 1 for all q. Otherwise,

^0D_Z(p) = 3 − 2/(1 + p_1), (14)
^2D_Z(p) = 2 / ( 3(p_1 − 1/2)² + 5/4 ), (15)
^∞D_Z(p) = 1/(1 − p_1) if p_1 ≤ 1/3, and 2/(1 + p_1) if p_1 ≥ 1/3. (16)

From Equation (14) it follows that sup_{p ∈ Δ_2} ^0D_Z(p) = 2. However, this supremum is not attained; ^0D_Z(p) → 2 as p → (1, 0), but ^0D_Z(1, 0) = 1. Equations (15) and (16) imply that

sup_{p ∈ Δ_2} ^2D_Z(p) = 1.6,   sup_{p ∈ Δ_2} ^∞D_Z(p) = 1.5,

with unique maximizing distributions (1/2, 1/2) and (1/3, 2/3) respectively.
Thus, when Z is not symmetric, the main theorem fails comprehensively: the supremum sup_{p ∈ Δ_n} ^0D_Z(p) may not be attained; there may be no distribution attaining sup_{p ∈ Δ_n} ^qD_Z(p) for all q simultaneously; and that supremum may vary with q.
Perhaps surprisingly, nonsymmetric similarity matrices Z do have practical uses. For example, it is shown in Proposition A7 of [1] that the mean phylogenetic diversity measures of Chao, Chiu and Jost [31] are a special case of the measures ^qD_Z(p), obtained by taking a particular Z depending on the phylogenetic tree concerned. This Z is usually nonsymmetric, reflecting the asymmetry of evolutionary time. More generally, the case for dropping the symmetry axiom for metric spaces was made in [32], and Gromov has argued that symmetry “unpleasantly limits many applications” (p. xv of [33]). So the fact that our maximization theorem fails for nonsymmetric Z is an important restriction.

9. Maximum Diversity on Graphs

Consider those matrices Z for which each similarity coefficient Z_ij is either 0 or 1. A matrix Z of this form amounts to a (finite, undirected) reflexive graph with vertex-set {1, …, n}, with an edge between i and j if and only if Z_ij = 1. (That is, Z is the adjacency matrix of the graph.) Our standing hypotheses on Z then imply that Z_ii = 1 for all i, so every vertex has a loop on it; this is the meaning of reflexive.
What is the maximum diversity of the adjacency matrix of a graph? Before answering this question, we explain why it is worth asking. Mathematically, the question is natural, since such matrices Z are extreme cases. More exactly, the set of symmetric matrices Z satisfying conditions (1) is convex, the adjacency matrices of graphs are the extreme points of this convex set, and the diversity measure ^qD_Z(p) is a convex function of Z for certain values of q (such as q = 2). Computationally, the answer turns out to lead to a lower bound on the difficulty of computing the maximum diversity of a given similarity matrix. Biologically, it is less clear that the question is relevant, but neither is it implausible, given the importance in biology of graphs (food webs, epidemiological contact networks, etc.).
We now recall some terminology. Vertices x and y of a graph are adjacent, written x ∼ y, if there is an edge between them. (In particular, every vertex of a reflexive graph is adjacent to itself.) A set of vertices is independent if no two distinct vertices are adjacent. The independence number α(G) of a graph G is the maximal cardinality of an independent set of vertices of G.
Proposition 2. 
Let G be a reflexive graph with adjacency matrix Z. Then the maximum diversity D_max(Z) is equal to the independence number α(G).
Proof. 
We will maximize the diversity of order ∞ and apply Theorem 1. For any probability distribution p on the vertex-set {1, …, n}, we have

^∞D_Z(p) = 1 / max_{i ∈ supp(p)} ∑_{j : j ∼ i} p_j. (17)

First we show that D_max(Z) ≥ α(G). Choose an independent set B of maximal cardinality, and define p ∈ Δ_n by

p_i = 1/α(G) if i ∈ B, and p_i = 0 otherwise.

For each i ∈ supp(p), the sum on the right-hand side of Equation (17) is 1/α(G). Hence ^∞D_Z(p) = α(G), and so α(G) ≤ D_max(Z).
Now we show that D_max(Z) ≤ α(G). Let p ∈ Δ_n. Choose an independent set B ⊆ supp(p) with maximal cardinality among all independent subsets of supp(p). Then every vertex of supp(p) is adjacent to at least one vertex in B, otherwise we could adjoin it to B to make a larger independent subset. Hence

∑_{i ∈ B} ∑_{j : j ∼ i} p_j = ∑_{i ∈ B} ∑_{j ∈ supp(p) : j ∼ i} p_j ≥ ∑_{j ∈ supp(p)} p_j = 1.

So there exists i ∈ B such that ∑_{j : j ∼ i} p_j ≥ 1/#B, where #B denotes the cardinality of B. But #B ≤ α(G), and therefore

max_{i ∈ supp(p)} ∑_{j : j ∼ i} p_j ≥ 1/α(G),

as required. ☐
Remark 2. The first part of the proof (together with Corollary 2) shows that a maximizing distribution can be constructed by taking the uniform distribution on some independent set of largest cardinality, then extending by zero to the whole vertex-set. Except in the trivial case Z = I, this maximizing distribution never has full support. We return to this point in Section 11.
Example 9. The reflexive graph G = •—•—• (loops not shown) has adjacency matrix

Z = ( 1 1 0
      1 1 1
      0 1 1 ).

The independence number of G is 2; this, then, is the maximum diversity of Z. There is a unique independent set of cardinality 2, and a unique maximizing distribution, (1/2, 0, 1/2).
Example 10. The reflexive graph •—•—•—• again has independence number 2. There are three independent sets of maximal cardinality, so by Remark 2, there are at least three maximizing distributions,

(1/2, 0, 1/2, 0),   (1/2, 0, 0, 1/2),   (0, 1/2, 0, 1/2),

all with different supports. (The possibility of multiple maximizing distributions was also observed in the case q = 2 by Pavoine and Bonsall [34].) In fact, there are further maximizing distributions not constructed in the proof of Proposition 2, namely, (1/2, 0, t, 1/2 − t) and (1/2 − t, t, 0, 1/2) for any t ∈ (0, 1/2).
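These small examples are easy to verify by brute force; here is a sketch (ours) that computes the independence number directly from the adjacency matrix:

```python
import numpy as np
from itertools import combinations

def independence_number(A):
    """Independence number of a reflexive graph with adjacency matrix A
    (1s on the diagonal): the largest set of vertices no two distinct
    members of which are adjacent. Brute force, for small graphs only."""
    n = len(A)
    for r in range(n, 0, -1):
        for B in combinations(range(n), r):
            if all(A[i][j] == 0 for i in B for j in B if i != j):
                return r
    return 0

A3 = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])                         # Example 9
A4 = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])  # Example 10
print(independence_number(A3), independence_number(A4))  # 2 2
```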
Example 11. Let d be a metric on {1, …, n}. For a given ε > 0, the covering number N(d, ε) is the minimum cardinality of a subset A ⊆ {1, …, n} such that

⋃_{i ∈ A} B(i, ε) = {1, …, n},

where B(i, ε) = { j : d(i, j) ≤ ε }. The number log N(d, ε) is known as the ε-entropy of d [35].
Now define a matrix Z^ε by

Z^ε_ij = 1 if d(i, j) ≤ ε, and Z^ε_ij = 0 otherwise.

Then Z^ε is the adjacency matrix of the reflexive graph G with vertices {1, …, n} and i ∼ j if and only if d(i, j) ≤ ε. Thus, a subset B ⊆ {1, …, n} is independent in G if and only if d(i, j) > ε for every pair of distinct elements i, j ∈ B. It is a consequence of the triangle inequality that

N(d, ε) ≤ α(G) ≤ N(d, ε/2),

and so by Proposition 2,

N(d, ε) ≤ D_max(Z^ε) ≤ N(d, ε/2).
Recalling that log ^qD_Z extends the classical notion of Rényi entropy, this thoroughly justifies the name of ε-entropy (which was originally justified by vague analogy).
The moral of the proof of Proposition 2 is that by performing the simple task of maximizing diversity of order ∞, we automatically maximize diversity of all other orders. Here is an example of how this can be exploited.
Recall that every graph G has a complement G̅, with the same vertex-set as G; two vertices are adjacent in G̅ if and only if they are not adjacent in G. Thus, the complement of a reflexive graph is irreflexive (has no loops), and vice versa. A set B of vertices in an irreflexive graph X is a clique if all pairs of distinct elements of B are adjacent in X. The clique number ω(X) of X is the maximal cardinality of a clique in X. Thus, ω(X) = α(X̅).
We now recover a result of Berarducci, Majer and Novaga (Proposition 5.10 of [36]).
Corollary 3. 
Let X be an irreflexive graph. Then

sup_p ∑_{(i,j) : i ∼ j} p_i p_j = 1 − 1/ω(X)

where the supremum is over probability distributions p on the vertex-set of X, and the sum is over pairs of adjacent vertices of X.
Proof. 
Write {1, …, n} for the vertex-set of X, and Z for the adjacency matrix of the reflexive graph X̅. Then for all p ∈ Δ_n,

∑_{(i,j) : i ∼ j in X} p_i p_j = ∑_{i,j=1}^n p_i p_j − ∑_{(i,j) : i ∼ j in X̅} p_i p_j = 1 − ∑_{i,j=1}^n p_i Z_ij p_j = 1 − 1/^2D_Z(p).

Hence by Theorem 1 and Proposition 2,

sup_{p ∈ Δ_n} ∑_{(i,j) : i ∼ j in X} p_i p_j = 1 − 1/D_max(Z) = 1 − 1/α(X̅) = 1 − 1/ω(X). ☐

It follows from this proof and Remark 2 that ∑_{(i,j) : i ∼ j} p_i p_j can be maximized as follows: take the uniform distribution on some clique in X of maximal cardinality, then extend by zero to the whole vertex-set.
Remark 3. Proposition 2 implies that computationally, finding the maximum diversity of an arbitrary Z is at least as hard as finding the independence number of a reflexive graph. This is a very well-studied problem, usually presented in its dual form (find the clique number of an irreflexive graph) and called the maximum clique problem [37]. It is NP-hard, so on the assumption that P ≠ NP, there is no polynomial-time algorithm for computing maximum diversity, nor even for computing the support of a maximizing distribution.

10. Positive Definite Similarity Matrices

The theory of magnitude of metric spaces runs most smoothly when the matrices Z concerned are positive definite [16,38]. We will see that positive (semi)definiteness is also an important condition when maximizing diversity.
Any positive definite matrix is invertible and therefore has a unique weighting. (A positive semidefinite matrix need not have a weighting at all.) Now the crucial fact about magnitude is:
Lemma 6. 
Let M be a positive semidefinite n × n real matrix admitting a weighting. Then

|M| = sup { (∑_{i=1}^n x_i)² / xᵀMx : x ∈ ℝ^n, xᵀMx ≠ 0 } > 0.

If M is positive definite then the supremum is attained by exactly the nonzero scalar multiples x of the unique weighting on M.
Proof. 
This is a small extension of Proposition 2.4.3 of [13]. Choose a weighting w on M. By the Cauchy–Schwarz inequality,

(xᵀMw)² ≤ (xᵀMx)(wᵀMw),

or equivalently

(∑_i x_i)² ≤ (xᵀMx) |M|, (18)

for all x ∈ ℝ^n. Equality holds when x is a scalar multiple of w, and if M is positive definite, it holds only then. Finally, taking x = (1, 0, …, 0)ᵀ in (18) and using positive semidefiniteness gives |M| > 0. ☐
From this, we deduce:
Lemma 7. 
Let B ⊆ {1, …, n}. If Z is positive semidefinite and both Z and Z_B admit a weighting, then |Z_B| ≤ |Z|. Moreover, if B is a proper subset, Z is positive definite, and the unique weighting on Z has full support, then |Z_B| < |Z|.
Proof. 
The first statement follows from Lemma 6 and the fact that Z_B is positive semidefinite. The second is trivial if B = ∅. Assuming not, let y ∈ ℝ^B be the unique weighting on Z_B (which is positive definite), and write x ∈ ℝ^n for the extension of y by zero to {1, …, n}. Then y ≠ 0, x ≠ 0, and

|Z_B| = (∑_{i ∈ B} y_i)² / yᵀZ_By = (∑_{i=1}^n x_i)² / xᵀZx.

But x does not have full support, so by hypothesis, it is not a scalar multiple of the unique weighting on Z. Hence by Lemma 6, (∑_i x_i)² / xᵀZx < |Z|. ☐
We now apply this result on magnitude to the maximization of diversity.
Proposition 3. 
Suppose that Z is positive semidefinite. If Z has a nonnegative weighting w, then D_max(Z) = |Z| and w/|Z| is a maximizing distribution. Moreover, if Z is positive definite and its unique weighting w is positive, then w/|Z| is the unique maximizing distribution.
Proof. 
This follows from Theorem 2 and Lemma 7. ☐
In particular, if Z is positive semidefinite and has a nonnegative weighting, then its maximum diversity can be computed in polynomial time.
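Concretely, the exponential search of Remark 1 collapses to one linear solve; a sketch (ours), restricted for simplicity to invertible Z, with the hypotheses of Proposition 3 checked numerically:

```python
import numpy as np

def maximum_diversity_psd(Z):
    """Fast path of Proposition 3 for invertible positive semidefinite Z
    with a nonnegative weighting: D_max(Z) = |Z|, maximized by w/|Z|."""
    Z = np.asarray(Z, dtype=float)
    assert np.all(np.linalg.eigvalsh(Z) >= -1e-12), "not positive semidefinite"
    w = np.linalg.solve(Z, np.ones(len(Z)))
    assert np.all(w >= 0), "weighting is not nonnegative"
    return w.sum(), w / w.sum()
```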
Corollary 4. 
If Z is positive definite with positive weighting, then its unique maximizing distribution has full support.
In other words, when Z has these properties, its maximizing distribution eliminates no species. Here are three classes of such matrices Z .
Example 12. Call Z ultrametric if Z_ik ≥ min{Z_ij, Z_jk} for all i, j, k, and Z_ii > max_{j≠k} Z_jk for all i. (Under the assumptions (1) on Z, the latter condition just states that distinct species are not completely similar.) If Z is ultrametric then Z is positive definite with positive weighting, by Proposition 2.4.18 of [13].
Such matrices arise in practice: for instance, Z is ultrametric if it is defined from a phylogenetic or taxonomic tree as in Examples 3 and 4.
Example 13. Let r ∈ Δ_n be a probability distribution of full support, and write Z for the diagonal matrix with entries 1/r_1, …, 1/r_n. Then for 0 < q < ∞,

−log ^qD_Z(p) = (1/(q − 1)) log ∑_{i ∈ supp(p)} p_i^q r_i^{1−q} if q ≠ 1, and ∑_{i ∈ supp(p)} p_i log(p_i/r_i) if q = 1.

The right-hand side is the Rényi relative entropy or Rényi divergence I_q(p‖r) (Section 3 of [3]). Evidently Z is positive definite, and its unique weighting r is positive. (In fact, Z is ultrametric.) So Proposition 3 applies; in fact, it gives the classical result that I_q(p‖r) ≥ 0 with equality if and only if p = r.
Example 14. The identity matrix Z = I is certainly positive definite with positive weighting. By topological arguments, there is a neighbourhood U of I in the space of symmetric matrices such that every matrix in U also has these properties. (See the proofs of Propositions 2.2.6 and 2.4.6 of [13].) Quantitative versions of this result are also available. For instance, in Proposition 2.4.17 of [13] it was shown that Z is positive definite with positive weighting if Z_ii = 1 for all i and Z_ij < 1/(n − 1) for all i ≠ j. In fact, this result can be improved:
Proposition 4. 
Suppose that Z_ii = 1 for all i and that Z is strictly diagonally dominant (that is, Z_ii > ∑_{j≠i} Z_ij for all i). Then Z is positive definite with positive weighting.
Proof. 
Since Z is real symmetric, it is diagonalizable with real eigenvalues. By the hypotheses on Z and the Gershgorin disc theorem (Theorem 6.1.1 of [39]), every eigenvalue of Z is in the interval (0, 2). It follows that Z is positive definite and that every eigenvalue of I − Z is in (−1, 1). Hence I − Z is similar to a diagonal matrix with entries in (−1, 1), and so ∑_{k=0}^∞ (I − Z)^k converges to (I − (I − Z))^{−1} = Z^{−1}. Thus,

Z^{−1} = ∑_{k=0}^∞ (I − Z)^k = ∑_{k=0}^∞ (Z − I)^{2k} (2I − Z). (19)

Writing e = (1, …, 1)ᵀ, the unique weighting on Z is w = Z^{−1}e. The hypotheses on Z imply that Z − I has nonnegative entries and (2I − Z)e has positive entries. Hence by (19),

w = Z^{−1}e ≥ (Z − I)^0 (2I − Z)e = (2I − Z)e

entrywise, and so w is positive. ☐
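A numerical spot-check of Proposition 4 (ours): generate a random symmetric matrix with unit diagonal and off-diagonal row sums below 1, and confirm positive definiteness and positivity of the weighting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.uniform(0.0, 1.0, (n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
A *= 0.9 / A.sum(axis=1).max()   # now every off-diagonal row sum is < 1
Z = np.eye(n) + A                # unit diagonal, strictly diagonally dominant
assert all(Z[i, i] > A[i].sum() for i in range(n))
w = np.linalg.solve(Z, np.ones(n))
print(np.all(np.linalg.eigvalsh(Z) > 0), np.all(w > 0))  # True True
```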
Thus, a matrix Z that is ultrametric, or satisfies conditions (1) and is strictly diagonally dominant, has many special properties: the maximum diversity is equal to the magnitude, there is a unique maximizing distribution, the maximizing distribution has full support, and both the maximizing distribution and the maximum diversity can be computed in polynomial time.

11. Preservation of Species

We saw in Examples 9 and 10 that for certain similarity matrices Z , none of the maximizing distributions has full support. Mathematically, this simply means that maximizing distributions sometimes lie on the boundary of Δ n . But ecologically, it may sound shocking: is it reasonable that diversity can be increased by eliminating some species?
We argue that it is. Consider, for instance, a forest consisting of one species of oak and ten species of pine, with each species equally abundant. Suppose that an eleventh species of pine is added, again with equal abundance (Figure 4). This makes the forest even more heavily dominated by pine, so it is intuitively reasonable that the diversity should decrease. But now running time backwards, the conclusion is that if we start with a forest containing the oak and all eleven pine species, eliminating the eleventh should increase diversity.
To clarify further, recall from Section 3 that diversity is defined in terms of the relative abundances only. Thus, eliminating species $i$ causes not only a decrease in $p_i$, but also an increase in the other relative abundances $p_j$. If the $i$-th species is particularly ordinary within the community (like the eleventh species of pine), then eliminating it makes way for less ordinary species, resulting in a more diverse community.
The instinct that maximizing diversity should not eliminate any species is based on the assumption that the distinction between species is of high value. (After all, if two species were very nearly identical—or in the extreme, actually identical—then losing one would be of little importance.) If one wishes to make that assumption, one must build it into the model. This is done by choosing a similarity matrix $Z$ with a low similarity coefficient $Z_{ij}$ for each $i \neq j$. Thus, $Z$ is close to the identity matrix $I$ (assuming that similarity is measured on a scale of 0 to 1). Example 14 guarantees that in this case, there is a unique maximizing distribution and it does not, in fact, eliminate any species.
(The fact that maximizing distributions can eliminate some species has previously been discussed in the ecological literature in the case q = 2 ; see Pavoine and Bonsall [34] and references therein.)
We now derive necessary and sufficient conditions for a similarity matrix $Z$ to admit at least one maximizing distribution of full support, and also necessary and sufficient conditions for every maximizing distribution to have full support. The latter conditions are genuinely more restrictive; for instance, if $Z = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$ then some but not all maximizing distributions have full support.
Lemma 8. 
If at least one maximizing distribution for Z has full support then Z is positive semidefinite and admits a positive weighting. Moreover, if every maximizing distribution for Z has full support then Z is positive definite.
Proof. 
Fix a maximizing distribution $\mathbf{p}$ of full support. Maximizing distributions are invariant (Corollary 1), so by (i) $\Rightarrow$ (iii) of Lemma 3, $|Z|\,\mathbf{p}$ is a weighting of $Z$ and $|Z| > 0$. In particular, $Z$ has a positive weighting, since $\mathbf{p}$ has full support.
Now we imitate the proof of Proposition 3B of [22]. For each $\mathbf{s} \in \mathbb{R}^n$ such that $\sum_{i=1}^n s_i = 0$, define a function $f_{\mathbf{s}} \colon \mathbb{R} \to \mathbb{R}$ by
$$f_{\mathbf{s}}(t) = (\mathbf{p} + t\mathbf{s})^T Z (\mathbf{p} + t\mathbf{s}).$$
Using the symmetry of $Z$ and the fact that $|Z|\,\mathbf{p}$ is a weighting (so that $(Z\mathbf{p})_i = 1/|Z|$ for every $i$, whence $\mathbf{p}^T Z \mathbf{p} = 1/|Z|$ and $\mathbf{s}^T Z \mathbf{p} = \sum_i s_i / |Z| = 0$), we obtain
$$f_{\mathbf{s}}(t) = \mathbf{p}^T Z \mathbf{p} + 2(\mathbf{s}^T Z \mathbf{p})\,t + (\mathbf{s}^T Z \mathbf{s})\,t^2 = 1/|Z| + (\mathbf{s}^T Z \mathbf{s})\,t^2. \tag{20}$$
Now $\sum_i s_i = 0$ and $\mathbf{p}$ has full support, so $\mathbf{p} + t\mathbf{s} \in \Delta_n$ for all real $t$ sufficiently close to zero. But $f_{\mathbf{s}}(t) = 1/\,{}^2D_Z(\mathbf{p} + t\mathbf{s})$ for such $t$, and $\mathbf{p}$ maximizes ${}^2D_Z$, so $f_{\mathbf{s}}$ has a local minimum at 0. Hence $\mathbf{s}^T Z \mathbf{s} \geq 0$, and it follows from (20) that $f_{\mathbf{s}}$ is everywhere positive.
We have shown that $\mathbf{s}^T Z \mathbf{s} \geq 0$ whenever $\mathbf{s} \in \mathbb{R}^n$ with $\sum_i s_i = 0$. Now take $\mathbf{x} \in \mathbb{R}^n$ with $\sum_i x_i \neq 0$. Put $\mathbf{s} = \mathbf{x}/\sum_i x_i - \mathbf{p}$. Then $\sum_i s_i = 0$, and
$$\mathbf{x}^T Z \mathbf{x} = \Bigl(\sum_i x_i\Bigr)^2 f_{\mathbf{s}}(1) > 0. \tag{21}$$
Hence Z is positive semidefinite.
For “moreover”, assume that every maximizing distribution for $Z$ has full support. By (21), we need only show that $\mathbf{s}^T Z \mathbf{s} > 0$ whenever $\mathbf{s} \neq \mathbf{0}$ with $\sum_i s_i = 0$. Given such an $\mathbf{s}$, choose $t \in \mathbb{R}$ such that $\mathbf{p} + t\mathbf{s}$ lies on the boundary of $\Delta_n$; necessarily $t \neq 0$, since $\mathbf{p}$ has full support. Then $\mathbf{p} + t\mathbf{s}$ does not have full support, so is not maximizing, so does not maximize ${}^2D_Z$ (by Corollary 2). Hence $f_{\mathbf{s}}(t) > f_{\mathbf{s}}(0)$, which by (20) implies that $\mathbf{s}^T Z \mathbf{s} > 0$. ☐
We can now prove the two main results of this section.
Proposition 5. 
The following are equivalent:
i.
there exists a maximizing distribution for Z of full support;
ii.
Z is positive semidefinite and admits a positive weighting.
Proof. 
(i) $\Rightarrow$ (ii) is the first part of Lemma 8. For the converse, assume (ii) and choose a positive weighting $\mathbf{w}$. Then $|Z| > 0$, so $\mathbf{p} = \mathbf{w}/|Z|$ is a probability distribution of full support. We have ${}^qD_Z(\mathbf{p}) = |Z|$ for all $q$, by Lemma 3. But the computation theorem implies that $D_{\max}(Z) = |Z_B|$ for some $B \subseteq \{1, \ldots, n\}$ such that $Z_B$ admits a weighting, so $D_{\max}(Z) \leq |Z|$ by Lemma 7. Hence $\mathbf{p}$ is maximizing. ☐
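Numerically, Proposition 5 can be watched in action. For a $Z$ satisfying condition (ii), the distribution $\mathbf{p} = \mathbf{w}/|Z|$ has a completely flat diversity profile at the value $|Z|$, and random competitors never beat it. The sketch below (ours; it uses a diagonally dominant $Z$ so that Proposition 4 supplies the positive weighting) makes this concrete.

```python
import numpy as np

def diversity(Z, p, q):
    s = p > 0                       # restrict to the support of p
    Zp = (Z @ p)[s]
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p[s] * np.log(Zp)))
    return np.sum(p[s] * Zp ** (q - 1)) ** (1.0 / (1.0 - q))

rng = np.random.default_rng(2)
n = 5
Z = rng.uniform(0, 1 / n, size=(n, n)); Z = (Z + Z.T) / 2
np.fill_diagonal(Z, 1.0)            # diagonally dominant: Prop. 4 applies

w = np.linalg.solve(Z, np.ones(n))  # the unique (positive) weighting
mag = w.sum()                       # the magnitude |Z|
p_star = w / mag                    # candidate maximizer, full support

for q in [0.5, 1.0, 2.0, 4.0]:
    assert np.isclose(diversity(Z, p_star, q), mag)   # flat profile at |Z|
    for _ in range(100):                              # random competitors
        p = rng.dirichlet(np.ones(n))
        assert diversity(Z, p, q) <= mag + 1e-9
```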
Proposition 6. 
The following are equivalent:
i.
every maximizing distribution for Z has full support;
ii.
Z has exactly one maximizing distribution, which has full support;
iii.
Z is positive definite with positive weighting;
iv.
$D_{\max}(Z) > D_{\max}(Z_B)$ for every nonempty proper subset $B$ of $\{1, \ldots, n\}$.
(The weak inequality $D_{\max}(Z) \geq D_{\max}(Z_B)$ holds for any $Z$, by the absent species lemma (Lemma 2).)
Proof. 
(i) $\Rightarrow$ (iii) and (iii) $\Rightarrow$ (ii) are immediate from Lemma 8 and Proposition 3 respectively, while (ii) $\Rightarrow$ (i) is trivial.
For (i) $\Rightarrow$ (iv), assume (i). Let $B$ be a nonempty proper subset of $\{1, \ldots, n\}$. Choose a maximizing distribution $\mathbf{p}$ for $Z_B$, and denote by $\mathbf{p}'$ its extension by zero to $\{1, \ldots, n\}$. Then $\mathbf{p}'$ does not have full support, so there is some $q \in [0, \infty]$ such that $\mathbf{p}'$ fails to maximize ${}^qD_Z$. Hence
$$D_{\max}(Z_B) = {}^qD_{Z_B}(\mathbf{p}) = {}^qD_Z(\mathbf{p}') < D_{\max}(Z),$$
where the second equality is by the absent species lemma.
For (iv) $\Rightarrow$ (i), assume (iv). Let $\mathbf{p}$ be a maximizing distribution for $Z$. Write $B = \operatorname{supp}(\mathbf{p})$, and denote by $\mathbf{p}'$ the restriction of $\mathbf{p}$ to $B$. Then for any $q$,
$$D_{\max}(Z_B) \geq {}^qD_{Z_B}(\mathbf{p}') = {}^qD_Z(\mathbf{p}) = D_{\max}(Z),$$
again by the absent species lemma. Hence by (iv), $B = \{1, \ldots, n\}$. ☐

12. Open Questions

The main theorem, the computation theorem and Corollary 2 answer all the principal questions about maximizing the diversities ${}^qD_Z$. Nevertheless, certain questions remain.
First, there are computational questions. We have found two classes of matrix Z for which the maximum diversity and maximizing distributions can be computed in polynomial time: ultrametric matrices (Example 12) and those close to the identity matrix I (Example 14). Both are biologically significant. Are there other classes of similarity matrix for which the computation can be performed in less than exponential time?
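By contrast, here is a brute-force sketch of the exponential-time baseline, under our reading of the computation theorem: $D_{\max}(Z)$ is the largest magnitude $|Z_B|$ over subsets $B$ whose submatrix $Z_B$ admits a nonnegative weighting $\mathbf{w}$, and $\mathbf{w}/|Z_B|$, extended by zero, is then a maximizing distribution. The three-species matrix at the end is invented for illustration.

```python
# Brute-force maximum diversity, as suggested by the computation theorem
# (a sketch under our reading of it; not an optimized implementation).
from itertools import combinations
import numpy as np

def max_diversity(Z):
    n = Z.shape[0]
    best, best_p = -np.inf, None
    for size in range(1, n + 1):
        for B in combinations(range(n), size):
            ZB = Z[np.ix_(B, B)]
            w, *_ = np.linalg.lstsq(ZB, np.ones(size), rcond=None)
            if not np.allclose(ZB @ w, 1.0):   # Z_B admits no weighting
                continue
            if (w < -1e-12).any():             # weighting is not nonnegative
                continue
            if w.sum() > best:                 # |Z_B| = sum of the weighting
                best = w.sum()
                best_p = np.zeros(n)
                best_p[list(B)] = w / w.sum()  # maximizer, extended by zero
    return best, best_p

# Three species, the second "in between" the other two and similar to both.
Z = np.array([[1.0, 0.9, 0.6],
              [0.9, 1.0, 0.9],
              [0.6, 0.9, 1.0]])
print(max_diversity(Z))   # D_max = 1.25, attained by (0.5, 0, 0.5): the
                          # middle species is eliminated, as in Section 11
```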
Second, we may seek results on the maximization of ${}^qD_Z(\mathbf{p})$ under constraints on $\mathbf{p}$. There are certainly some types of constraint under which both parts of Theorem 1 fail, for trivial reasons: if we choose two distributions $\mathbf{p}$ and $\mathbf{p}'$ whose diversity profiles cross (Figure 2b) and constrain our distribution to lie in the set $\{\mathbf{p}, \mathbf{p}'\}$, then there is no distribution that maximizes ${}^qD_Z$ for all $q$ simultaneously, and the maximum value of ${}^qD_Z$ depends on $q$. But are there other types of constraint under which the main theorem still holds?
In particular, the distribution might be constrained to lie close to a given distribution p . The question then becomes: if we start with a distribution p and have the resources to change it by only a given small amount, what should we do in order to maximize the diversity?
Third, there are suggestive resemblances between the theory developed here and the theory of evolutionarily stable strategies (ESSs) for matrix games (Chapter 6 of [40]), taking the payoff matrix for the game to be the dissimilarity matrix $(1 - Z_{ij})$. For instance, the condition in Lemma 3(ii) that $(Z\mathbf{p})_i = (Z\mathbf{p})_j$ for all $i, j \in \operatorname{supp}(\mathbf{p})$ appears as one of the ESS criteria in [41]; the diversity maximization algorithm of Remark 1 closely resembles the method for finding ESSs in [42]; and the positive definiteness conditions in Section 11 are related to negative definiteness conditions in the ESS literature (such as [41]). Can results on evolutionary games be translated to give new results (or improved proofs of existing results) on maximizing diversity? In particular, the evolutionary game literature contains results on local extrema of quadratic forms [43], which (for $q = 2$, at least) may be useful in answering the question of constrained maximization posed in the previous paragraph.
Fourth, we have confined ourselves to considering a single, static population and its diversity. In ecological situations, what is the relationship between diversity maximization and population dynamics? This is a very broad question, but there has been work in ecology on the entropy–dynamics connection. For instance, Zhang and Harte [44] used the principle that Boltzmann entropy should be maximized to predict population dynamics under resource constraints, incorporating into their model a parameter that reflects distinguishability within species relative to distinguishability between species.
Fifth, we have seen that every symmetric matrix $Z$ satisfying conditions (2) (for instance, every symmetric matrix of positive reals) has attached to it a real number, the maximum diversity $D_{\max}(Z)$. What is the significance of this invariant?
We know that it is closely related to the magnitude of matrices. This has been most intensively studied in the context of metric spaces. By definition, the magnitude of a finite metric space $X$ is the magnitude of the matrix $Z = (e^{-d(i,j)})_{i,j \in X}$; see [13,38,45], for instance. In the metric context, the meaning of magnitude becomes clearer after one extends the definition from finite to compact spaces (which is done by approximating them by finite subspaces). Magnitude for compact metric spaces has recognizable geometric content: for example, the magnitude of a 3-dimensional ball is a cubic polynomial in its radius (Theorem 2 of [15]), and the magnitude of a homogeneous Riemannian manifold is closely related to its total scalar curvature (Theorem 11 of [17]).
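In code, that definition is immediate: since a weighting satisfies $Z\mathbf{w} = \mathbf{e}$, the magnitude of a finite metric space is the sum of the entries of $Z^{-1}\mathbf{e}$ for $Z = (e^{-d(i,j)})$. The toy example below (five evenly spaced points on a line, our own choice) also rescales the metric by a factor $t$, showing magnitude interpolating between 1 when the points coalesce and the cardinality 5 when they are far apart.

```python
# Magnitude of the scaled finite metric space tX, for points X on a line.
import numpy as np

pts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
d = np.abs(pts[:, None] - pts[None, :])        # distance matrix of X
for t in [0.01, 0.1, 1.0, 10.0]:
    Z = np.exp(-t * d)                         # similarity matrix of tX
    mag = np.linalg.solve(Z, np.ones(len(pts))).sum()
    print(t, mag)                              # near 1 for tiny t, near 5 for large t
```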
Thus, it is natural to ask: can one extend Theorem 1 to some class of “infinite matrices” $Z$? (For instance, $Z$ might be the kernel $(x, y) \mapsto e^{-d(x,y)}$ arising from a compact metric space. In this case, the maximum diversity of order 2 is a kind of capacity, analogous to classical definitions in potential theory; for a compact subset of $\mathbb{R}^n$, it coincides with the Bessel capacity of an appropriate order [16].) And if so, what is the geometric significance of maximum diversity in that context?
There is already evidence that this is a fruitful line of enquiry. In [16], Meckes gave a definition of the maximum diversity of order 2 of a compact metric space, and used it to prove a purely geometric theorem relating magnitude to fractional dimensions of subsets of R n . If this maximum diversity can be shown to be equal to the maximum diversity of all other orders then further geometric results may come within reach.
The final question concerns interpretation. Throughout, we have interpreted q D Z ( p ) in terms of ecological diversity. However, there is nothing intrinsically biological about any of our results. For example, in an information-theoretic context, the “species” might be the code symbols, with two symbols seen as similar if one is easily mistaken for the other; or if one wishes to transmit an image, the “species” might be the colours, with two colours seen as similar if one is an acceptable substitute for the other (much as in rate distortion theory [46]). Under these or other interpretations, what is the significance of the theorem that the diversities of all orders can be maximized simultaneously?

Acknowledgments

We thank Mark Broom, Christina Cobbold, Ciaran McCreesh, Richard Reeve and the anonymous referees for helpful discussions. This work was supported by the Carnegie Trust for the Universities of Scotland, the Centre de Recerca Matemàtica, an EPSRC Advanced Research Fellowship, the National Institute for Mathematical and Biological Synthesis, the National Science Foundation, and the Simons Foundation.

Author Contributions

Tom Leinster found the original proof of the main theorem and led the writing. The improved proof presented here is joint work of Tom Leinster and Mark W. Meckes, as are the other results and examples. Both authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Leinster, T.; Cobbold, C.A. Measuring diversity: The importance of species similarity. Ecology 2012, 93, 477–489.
  2. Rao, C.R. Diversity and dissimilarity coefficients: A unified approach. Theor. Popul. Biol. 1982, 21, 24–43.
  3. Rényi, A. On Measures of Entropy and Information. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 20 June–30 July 1960; University of California Press: Oakland, CA, USA, 1961; Volume 1, pp. 547–561.
  4. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487.
  5. Patil, G.P.; Taillie, C. Diversity as a concept and its measurement. J. Am. Stat. Assoc. 1982, 77, 548–561.
  6. Havrda, J.; Charvát, F. Quantification method of classification processes: Concept of structural α-entropy. Kybernetika 1967, 3, 30–35.
  7. Veresoglou, S.D.; Powell, J.R.; Davison, J.; Lekberg, Y.; Rillig, M.C. The Leinster and Cobbold indices improve inferences about microbial diversity. Fungal Ecol. 2014, 11, 1–7.
  8. Bakker, M.G.; Chaparro, J.M.; Manter, D.K.; Vivanco, J.M. Impacts of bulk soil microbial community structure on rhizosphere microbiomes of Zea mays. Plant Soil 2015, 392, 115–126.
  9. Jeziorski, A.; Tanentzap, A.J.; Yan, N.D.; Paterson, A.M.; Palmer, M.E.; Korosi, J.B.; Rusak, J.A.; Arts, M.T.; Keller, W.; Ingram, R.; et al. The jellification of north temperate lakes. Proc. R. Soc. B 2015, 282.
  10. Chalmandrier, L.; Münkemüller, T.; Lavergne, S.; Thuiller, W. Effects of species’ similarity and dominance on the functional and phylogenetic structure of a plant meta-community. Ecology 2015, 96, 143–153.
  11. Bromaghin, J.F.; Rode, K.D.; Budge, S.M.; Thiemann, G.W. Distance measures and optimization spaces in quantitative fatty acid signature analysis. Ecol. Evol. 2015, 5, 1249–1262.
  12. Wang, L.; Zhang, M.; Jajodia, S.; Singhal, A.; Albanese, M. Modeling Network Diversity for Evaluating the Robustness of Networks against Zero-Day Attacks. In Proceedings of the 19th European Symposium on Research in Computer Security (ESORICS 2014), Wroclaw, Poland, 7–11 September 2014; pp. 494–511.
  13. Leinster, T. The magnitude of metric spaces. Doc. Math. 2013, 18, 857–905.
  14. Leinster, T. The Euler characteristic of a category. Doc. Math. 2008, 13, 21–49.
  15. Barceló, J.A.; Carbery, A. On the magnitudes of compact sets in Euclidean spaces. 2015; arXiv:1507.02502.
  16. Meckes, M.W. Magnitude, diversity, capacities, and dimensions of metric spaces. Potential Anal. 2015, 42, 549–572.
  17. Willerton, S. On the magnitude of spheres, surfaces and other homogeneous spaces. Geom. Dedicata 2014, 168, 291–310.
  18. Leinster, T. The magnitude of a graph. 2014; arXiv:1401.4623.
  19. Hepworth, R.; Willerton, S. Categorifying the magnitude of a graph. 2015; arXiv:1505.04125.
  20. Chuang, J.; King, A.; Leinster, T. On the magnitude of a finite dimensional algebra. Theory Appl. Categories 2016, 31, 63–72.
  21. Leinster, T. A maximum entropy theorem with applications to the measurement of biodiversity. 2009; arXiv:0910.0906.
  22. Fremlin, D.H.; Talagrand, M. Subgraphs of random graphs. Trans. Am. Math. Soc. 1985, 291, 551–582.
  23. Simpson, E.H. Measurement of diversity. Nature 1949, 163, 688.
  24. Whittaker, R.H. Vegetation of the Siskiyou mountains, Oregon and California. Ecol. Monogr. 1960, 30, 279–338.
  25. Magurran, A.E. Measuring Biological Diversity; Wiley-Blackwell: Hoboken, NJ, USA, 2003.
  26. Hurlbert, S.H. The nonconcept of species diversity: A critique and alternative parameters. Ecology 1971, 52, 577–586.
  27. Kimura, M.; Crow, J.F. The number of alleles that can be maintained in a finite population. Genetics 1964, 49, 725–738.
  28. Hannah, L.; Kay, J.A. Concentration in Modern Industry: Theory, Measurement, and the U.K. Experience; MacMillan: London, UK, 1977.
  29. McBratney, A.; Minasny, B. On measuring pedodiversity. Geoderma 2007, 141, 149–154.
  30. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1952.
  31. Chao, A.; Chiu, C.H.; Jost, L. Phylogenetic diversity measures based on Hill numbers. Philos. Trans. R. Soc. B 2010, 365, 3599–3609.
  32. Lawvere, F.W. Metric spaces, generalized logic and closed categories. Rendiconti del Seminario Matematico e Fisico di Milano 1973, 43, 135–166; reprinted in Repr. Theory Appl. Categories 2002, 1, 1–37.
  33. Gromov, M. Metric Structures for Riemannian and Non-Riemannian Spaces; Birkhäuser: Boston, MA, USA, 2001.
  34. Pavoine, S.; Bonsall, M.B. Biological diversity: Distinct distributions can lead to the maximization of Rao’s quadratic entropy. Theor. Popul. Biol. 2009, 75, 153–163.
  35. Kolmogorov, A.N. On certain asymptotic characteristics of completely bounded metric spaces. Doklady Akademii Nauk SSSR 1956, 108, 385–388.
  36. Berarducci, A.; Majer, P.; Novaga, M. Infinite paths and cliques in random graphs. Fundam. Math. 2012, 216, 163–191.
  37. Karp, R.M. Reducibility among Combinatorial Problems. In Complexity of Computer Computations; Miller, R.E., Thatcher, J.W., Eds.; Plenum Press: New York, NY, USA, 1972; pp. 85–103.
  38. Meckes, M.W. Positive definite metric spaces. Positivity 2013, 17, 733–757.
  39. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2012.
  40. Broom, M.; Rychtář, J. Game-Theoretical Models in Biology; Chapman & Hall/CRC Press: Boca Raton, FL, USA, 2013.
  41. Haigh, J. Game theory and evolution. Adv. Appl. Probab. 1975, 7, 8–11.
  42. Bishop, D.T.; Cannings, C. Models of animal conflict. Adv. Appl. Probab. 1976, 8, 616–621.
  43. Broom, M.; Cannings, C.; Vickers, G.T. On the number of local maxima of a constrained quadratic form. Proc. R. Soc. A 1993, 443, 573–584.
  44. Zhang, Y.J.; Harte, J. Population dynamics and competitive outcome derive from resource allocation statistics: The governing influence of the distinguishability of individuals. Theor. Popul. Biol. 2015, 105, 53–63.
  45. Leinster, T.; Willerton, S. On the asymptotic magnitude of subsets of Euclidean space. Geom. Dedicata 2013, 164, 287–310.
  46. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 1991.
Figure 1. Two bird communities. Heights of stacks indicate species abundances. In (a), there are four species, with the first dominant and the others relatively rare; in (b), the fourth species is absent but the community is otherwise evenly balanced.
Figure 2. Visualizations of the main theorem: (a) in terms of how different values of q rank the set of distributions; and (b) in terms of diversity profiles.
Figure 3. Hypothetical three-species system. Distances between species indicate degrees of dissimilarity between them (not to scale).
Figure 4. Hypothetical community consisting of one species of oak (▪) and ten species of pine (•), to which one further species of pine is then added (◦). Distances between species indicate degrees of dissimilarity (not to scale).
