1. Introduction
Boolean networks are complex dynamical systems that were proposed as models of genetic regulatory networks [1,2] and have since been used to model a range of complex phenomena. Random Boolean networks (RBNs) are ensembles of randomly generated Boolean networks with random topology and random updating functions. It is known that RBNs undergo a phase transition. In the thermodynamic limit, meaning that the number of nodes goes to infinity, the phase transition curve for RBNs is given by $2Kp(1-p) = 1$, where $p$ is the so-called bias of the random Boolean functions, which is the probability that the function takes on the value 1, and $K$ is the connectivity of the network, or equivalently, the number of inputs to each Boolean function [3].
Under a synchronous updating scheme, whereby all Boolean functions are updated simultaneously at each time step, this phase transition curve separates two qualitatively distinct dynamical regimes. Below the curve, when $2Kp(1-p) < 1$, infinitesimally small perturbations to the state of the network decay, while above the curve, when $2Kp(1-p) > 1$, small perturbations spread throughout the network. Thus, the two regimes are often referred to as the ordered and chaotic regimes, respectively. The quantity $\lambda = \log[2Kp(1-p)]$ coincides with the Lyapunov exponent [4,5]. Networks that operate in the dynamically critical regime, meaning that their Lyapunov exponent is 0, are widely known to have many optimal properties and are thought to be a hallmark of living systems [2,6]. In the context of Boolean networks, these properties include: the ability to balance resilience to random mutations with the emergence of new phenotypes for adapting to new environmental challenges [7], maximization of the associative memory of the network [8], maximization of the diversity in structure-dynamics relationships [9], maximization of communication among nodes [10], maximization of the so-called set complexity of the dynamics of the network [11], emergence of diversity in a spatial arrangement of Boolean networks representing cells in a tissue [12], and optimal network learning and generalization when the Boolean network has input and output signals [13,14].
We consider monotone Boolean networks, in which all updating rules belong to the class of monotone Boolean functions. This class is one of the most widely studied classes of Boolean functions [15]. Monotone Boolean networks have primarily been studied under the asynchronous updating scheme, whereby only one node is updated at a time. Some work has focused on long-term dynamics, such as fixed points and limit cycles [16,17,18]. It is also known that certain classes of fully asynchronous Boolean networks can be simulated by monotone Boolean networks [19]. However, in the context of the Lyapunov exponent, asynchronous updating schemes appear to be less relevant than synchronous updating schemes [20]. Our main results are asymptotic formulas, depending on whether $n$ is even or odd, for the so-called expected average sensitivity of a monotone Boolean function, first given in [21]. In light of the results in [5], the logarithm of the expected average sensitivities $E[s^f]$ in Theorems 4 and 5 can be directly interpreted as the Lyapunov exponent $\lambda = \log E[s^f]$.
In the remainder, we will use $n$, rather than the conventional $K$ of the Boolean network literature, to denote the number of variables of the Boolean functions. Our results concerning almost all monotone Boolean networks can be understood in a probabilistic manner, meaning that the asymptotic formulas are valid with probability close to 1 if a monotone Boolean network is chosen at random from the set of all such networks. As the formulas are asymptotic, they should not be interpreted for small $n$. At the same time, they are quite accurate already for moderate values of $n$, which follows from the results originally published in [22]. Absent additional constraints on the Boolean functions, such as the classes of canalizing functions [5,23,24,25] or Post classes [26], random monotone Boolean networks quickly enter the disordered regime as $n$ grows, although more slowly than RBNs. For example, already for small $n$, the expected average sensitivity of a random monotone Boolean network exceeds 1, which places it slightly in the chaotic regime ($\lambda > 0$). The expected average sensitivity then continues to grow with $n$, whereas for an unbiased random Boolean network with the same number of inputs it equals $n/2$, as can be seen by plugging $p = 1/2$ and $K = n$ into $2Kp(1-p)$.
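To make the phase transition criterion concrete, the short Python sketch below (illustrative only; the function names are ours, not the paper's) evaluates the quantity $2Kp(1-p)$ and the associated Lyapunov exponent for a few connectivities.

```python
# Illustrative sketch of the RBN phase transition criterion: the expected
# average sensitivity 2*K*p*(1-p) and the Lyapunov exponent log(2*K*p*(1-p)).
import math

def rbn_average_sensitivity(K: int, p: float) -> float:
    """Expected average sensitivity of a random Boolean function with K inputs and bias p."""
    return 2.0 * K * p * (1.0 - p)

def lyapunov_exponent(avg_sensitivity: float) -> float:
    """Lyapunov exponent as the logarithm of the expected average sensitivity."""
    return math.log(avg_sensitivity)

for K in (1, 2, 3, 4):
    s = rbn_average_sensitivity(K, p=0.5)          # unbiased functions: s = K/2
    lam = lyapunov_exponent(s)
    regime = "critical" if abs(lam) < 1e-12 else ("ordered" if lam < 0 else "chaotic")
    print(f"K={K}: sensitivity={s:.2f}, lambda={lam:+.3f} ({regime})")
```

For unbiased functions ($p = 1/2$) the expression reduces to $K/2$, so $K = 2$ is the critical connectivity, in agreement with the curve $2Kp(1-p) = 1$.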
2. Definitions and Preliminaries
Let $f: \{0,1\}^n \to \{0,1\}$ be a Boolean function of $n$ variables $x_1, \ldots, x_n$. Let
$$\frac{\partial f(x)}{\partial x_j} = f(x^{(j,0)}) \oplus f(x^{(j,1)})$$
be the partial derivative of $f$ with respect to $x_j$, where $\oplus$ is addition modulo 2 (exclusive OR) and $x^{(j,k)} = (x_1, \ldots, x_{j-1}, k, x_{j+1}, \ldots, x_n)$, $k \in \{0, 1\}$. Clearly, the partial derivative is a Boolean function itself that specifies whether a change in the $j$th input causes a change in the original function $f$. The activity of variable $x_j$ in function $f$ can be defined as
$$\alpha_j^f = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} \frac{\partial f(x)}{\partial x_j}.$$
Note that although the vector $x$ consists of $n$ components (variables), the $j$th variable is fictitious in $\partial f(x)/\partial x_j$. A variable $x_j$ is fictitious in $f$ if $f(x^{(j,0)}) = f(x^{(j,1)})$ for all $x^{(j,0)}$ and $x^{(j,1)}$. For an $n$-variable Boolean function $f$, we can form its activity vector $\alpha^f = (\alpha_1^f, \ldots, \alpha_n^f)$. It is easy to see that $0 \le \alpha_j^f \le 1$ for any $j$. In fact, we can consider $\alpha_j^f$ to be the probability that toggling the $j$th input bit changes the function value when the input vectors $x$ are distributed uniformly over $\{0,1\}^n$. Since we are in the binary setting, the activity is also the expectation of the partial derivative with respect to the uniform distribution: $\alpha_j^f = E[\partial f(x)/\partial x_j]$. The activity of a fictitious variable is 0. Under an arbitrary distribution, $\alpha_j^f$ is referred to as the influence of variable $x_j$ on the function $f$ [27]. The influence of variables was used in the context of genetic regulatory network modeling in [28].
Another important quantity is the sensitivity of a Boolean function $f$, which measures how sensitive the output of the function is to changes in the inputs. The sensitivity $s^f(x)$ of $f$ on vector $x$ is defined as the number of Hamming neighbors of $x$ on which the function value is different than on $x$ (two vectors are Hamming neighbors if they differ in only one component). That is,
$$s^f(x) = \sum_{i=1}^{n} \chi\left[f(x \oplus e_i) \ne f(x)\right],$$
where $e_i$ is the unit vector with 1 in the $i$th position and 0s everywhere else and $\chi[A]$ is an indicator function that is equal to 1 if and only if $A$ is true. The average sensitivity $s^f$ is defined by taking the expectation of $s^f(x)$ with respect to the distribution of $x$. It is easy to see that under the uniform distribution, the average sensitivity is equal to the sum of the activities:
$$s^f = E\left[s^f(x)\right] = \sum_{j=1}^{n} \alpha_j^f.$$
Therefore, $s^f$ is a number between 0 and $n$.
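The following Python sketch (ours, for illustration; it is not part of the paper) computes activities and the average sensitivity by exhaustive enumeration and verifies that the average sensitivity equals the sum of the activities for a small example function.

```python
# Compute activities and average sensitivity of a Boolean function by
# enumerating all 2^n inputs; check that s^f equals the sum of the activities.
from itertools import product

def sensitivity_at(f, n, x):
    """s^f(x): number of Hamming neighbors of x on which f differs from f(x)."""
    return sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))

def average_sensitivity(f, n):
    """s^f: expectation of s^f(x) under the uniform distribution on {0,1}^n."""
    return sum(sensitivity_at(f, n, x) for x in product((0, 1), repeat=n)) / 2 ** n

def activity(f, n, j):
    """alpha_j^f: probability that toggling the j-th input bit changes f."""
    return sum(f(x) != f(x[:j] + (1 - x[j],) + x[j + 1:])
               for x in product((0, 1), repeat=n)) / 2 ** n

maj3 = lambda x: int(sum(x) >= 2)                      # 3-variable majority function
print(average_sensitivity(maj3, 3))                    # 1.5
print(sum(activity(maj3, 3, j) for j in range(3)))     # 1.5, the sum of the activities
```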
The average sensitivity has been studied intensively by a number of authors [5,29,30,31,32,33,34,35,36,37]. For example, it was shown by Friedgut [33] that if the average sensitivity of $f$ is $k$, then $f$ can be approximated by a function depending on only $c^k$ variables, where $c$ is a constant depending only on the accuracy of the approximation, but not on $n$. Shi [34] showed that the average sensitivity can serve as a lower bound for quantum query complexity. Average sensitivity was used to characterize the noise sensitivity of monotone Boolean functions by Mossel and O'Donnell [35]. Zhang [36] gives lower and upper bounds on the average sensitivity of a monotone Boolean function. The upper bound is asymptotic to $\sqrt{2n/\pi}$, which has been shown by Bshouty and Tamon [37]. Shmulevich and Kauffman [5] have shown that the average sensitivity determines the critical phase transition curve in random Boolean networks and thus coincides with the Lyapunov exponent, $\lambda = \log s^f$. We now turn to monotone Boolean functions.
2.1. Monotone Boolean Functions
Let $\tilde{\alpha} = (\alpha_1, \ldots, \alpha_n)$ and $\tilde{\beta} = (\beta_1, \ldots, \beta_n)$ be two different $n$-element binary vectors. We say that $\tilde{\alpha}$ precedes $\tilde{\beta}$, denoted as $\tilde{\alpha} \prec \tilde{\beta}$, if $\alpha_i \le \beta_i$ for every $i$. If $\tilde{\alpha} \nprec \tilde{\beta}$ and $\tilde{\beta} \nprec \tilde{\alpha}$, then $\tilde{\alpha}$ and $\tilde{\beta}$ are said to be incomparable. Relative to the predicate $\prec$, the set of all binary vectors of a given length is a partially ordered set. A Boolean function $f$ is called monotone if for any two vectors $\tilde{\alpha}$ and $\tilde{\beta}$ such that $\tilde{\alpha} \prec \tilde{\beta}$, we have $f(\tilde{\alpha}) \le f(\tilde{\beta})$.
We denote by $M_n$ the set of all monotone Boolean functions of $n$ variables. Let $E^n$ denote the Boolean $n$-cube, that is, a graph with $2^n$ vertices, each of which is labeled by an $n$-element binary vector. Two vertices $\tilde{\alpha}$ and $\tilde{\beta}$ are connected by an edge if and only if the Hamming distance $\rho(\tilde{\alpha}, \tilde{\beta}) = 1$. The set of those vectors from $E^n$ in which there are exactly $k$ units, $0 \le k \le n$, is called the $k$th layer of $E^n$ and is denoted by $E^n_k$.
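As a concrete illustration of these definitions (the helper names below are ours and are not taken from the paper), the following Python sketch checks monotonicity via the precedence relation and enumerates the layers of the Boolean $n$-cube.

```python
# Illustrative check of monotonicity and of the layers E^n_k of the n-cube.
from itertools import product

def precedes(a, b):
    """The partial order on binary vectors: a precedes b iff a_i <= b_i for all i."""
    return all(ai <= bi for ai, bi in zip(a, b))

def is_monotone(f, n):
    """f is monotone iff a precedes b implies f(a) <= f(b)."""
    vectors = list(product((0, 1), repeat=n))
    return all(f(a) <= f(b) for a in vectors for b in vectors if precedes(a, b))

def layer(n, k):
    """E^n_k: all binary n-vectors with exactly k ones."""
    return [v for v in product((0, 1), repeat=n) if sum(v) == k]

print(is_monotone(lambda x: int(sum(x) >= 2), 3))   # True: majority is monotone
print(is_monotone(lambda x: x[0] ^ x[1], 2))        # False: XOR is not monotone
print(len(layer(4, 2)))                             # 6, the binomial coefficient C(4, 2)
```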
A vector $\tilde{\alpha}$ is called a minimal one (or minimal unit) of a monotone Boolean function $f$ if $f(\tilde{\alpha}) = 1$ and $f(\tilde{\beta}) = 0$ for any $\tilde{\beta} \prec \tilde{\alpha}$. A vector $\tilde{\alpha}$ is called a maximal zero of a monotone Boolean function $f$ if $f(\tilde{\alpha}) = 0$ and $f(\tilde{\beta}) = 1$ for any $\tilde{\alpha} \prec \tilde{\beta}$. The minimal ones correspond directly to the terms in the minimal disjunctive normal form (DNF) representation of the monotone Boolean function. In [38], asymptotic formulae for the number of monotone Boolean functions of $n$ variables with a most probable number of minimal ones were derived, confirming the conjecture in [39] that the number of monotone Boolean functions relative to the number of minimal ones asymptotically follows a normal distribution, under the assumption that all monotone Boolean functions are equiprobable.
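For illustration, the sketch below (ours; the names are not from the paper) extracts the minimal ones and maximal zeros of a small monotone function, matching the definitions just given; the minimal ones of the majority function are exactly the terms of its minimal DNF.

```python
# Minimal ones and maximal zeros of a monotone Boolean function by brute force.
from itertools import product

def strictly_precedes(a, b):
    """a strictly precedes b in the componentwise order."""
    return a != b and all(ai <= bi for ai, bi in zip(a, b))

def minimal_ones(f, n):
    vectors = list(product((0, 1), repeat=n))
    return [v for v in vectors if f(v) == 1
            and all(f(u) == 0 for u in vectors if strictly_precedes(u, v))]

def maximal_zeros(f, n):
    vectors = list(product((0, 1), repeat=n))
    return [v for v in vectors if f(v) == 0
            and all(f(u) == 1 for u in vectors if strictly_precedes(v, u))]

maj3 = lambda x: int(sum(x) >= 2)      # 3-variable majority
print(minimal_ones(maj3, 3))           # [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(maximal_zeros(maj3, 3))          # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```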
2.2. The Structure of Special Monotone Boolean Functions
We now briefly review some known results concerning the structure of so-called special monotone Boolean functions. Consider the set of functions in $M_n$ possessing the following properties. If $n$ is even, then this set contains only functions $f$ such that all minimal ones of $f$ are situated in $E^n_{n/2-1}$, $E^n_{n/2}$, and $E^n_{n/2+1}$, while $f$ is equal to 1 on all vectors in the layers above $E^n_{n/2+1}$. For odd $n$, the set contains only functions $f$ such that all minimal ones of $f$ are situated either in $E^n_{(n-3)/2}$, $E^n_{(n-1)/2}$, and $E^n_{(n+1)/2}$, or in $E^n_{(n-1)/2}$, $E^n_{(n+1)/2}$, and $E^n_{(n+3)/2}$. In the first case, $f = 1$ for all vectors in the layers above $E^n_{(n+1)/2}$, while in the second case, $f = 1$ for all vectors in the layers above $E^n_{(n+3)/2}$.
Then, as shown in [22], almost all monotone Boolean functions of $n$ variables possess this structure.
In [40], asymptotic formulae for the number of special functions from $M_n$ were established and subsequently used to characterize statistical properties of a popular class of nonlinear digital filters called stack filters [41]. The set of these special functions is defined differently depending on whether $n$ is even or odd. While we shall omit the rather lengthy definitions of special functions, the result from [40] that will be important to us is that the proportion of special functions among all monotone Boolean functions tends to 1 as $n \to \infty$. In other words, almost all monotone Boolean functions are special. We shall also need the following results.
Let us start with the case of even $n$ and the parameters defined in (6). Consider the set of functions $f \in M_n$ such that $f$ has $r$ minimal ones in $E^n_{n/2-1}$, $v$ maximal zeros in $E^n_{n/2+1}$, and $f$ is equal to 1 on $z$ vertices in $E^n_{n/2}$. In [40], the following result was proved.
Theorem 1. Let $n$ be even, with the parameters defined in (6). Then, for any $k$, $t$, and $u$ in the admissible ranges, the number of such functions satisfies the asymptotic formula stated in [40].
For any odd $n$, we use the parameters given by (8) and (9), as well as the parameters given by (10) and (11).
Consider first the set of functions $f \in M_n$ such that $f$ has $r$ minimal ones in $E^n_{(n-3)/2}$, $v$ maximal zeros in $E^n_{(n+1)/2}$, and $f$ is equal to 1 on $z$ vertices in $E^n_{(n-1)/2}$. Similarly, consider the set of functions $f \in M_n$ such that $f$ has $r$ minimal ones in $E^n_{(n-1)/2}$, $v$ maximal zeros in $E^n_{(n+3)/2}$, and $f$ is equal to 1 on $z$ vertices in $E^n_{(n+1)/2}$. Then, in [40], the following two theorems were proved.
Theorem 2. Let $n$ be odd, with the parameters defined in (8) and (9). Then, for any $k$, $t$, and $u$ in the admissible ranges, the number of functions in the first of the two sets above satisfies the asymptotic formula stated in [40].
Theorem 3. Let $n$ be odd, with the parameters defined in (10) and (11). Then, for any $k$, $t$, and $u$ in the admissible ranges, the number of functions in the second of the two sets above satisfies the asymptotic formula stated in [40].
Prior to presenting the main results, we summarize some of the notation in Table 1.
3. Main Results
Since almost all monotone Boolean functions are special, we can focus our attention on special functions and derive the average sensitivity of a typical function from this set. By ‘typical’ we mean the most probable Boolean function relative to the parameters $k$, $t$, and $u$ in Theorems 1–3. It can easily be seen that the most probable special Boolean functions correspond to the central values of these parameters. This implies, to take the $n$-even case as an example, that the most probable function $f$ has $r_0$ minimal ones in $E^n_{n/2-1}$, $v_0$ maximal zeros in $E^n_{n/2+1}$, and is equal to 1 on $z_0$ vertices in $E^n_{n/2}$, where $r_0$, $v_0$, and $z_0$ are given in (6). Our proofs are thus based on the derivation of the average sensitivity of such a function. Whenever we make probabilistic assertions using words such as ‘most probable’ or ‘typical’, or talk about expectations, we are implicitly endowing the set of special functions with a uniform probability distribution for fixed values of the parameters. This should not be confused with the Gaussian-like distribution of the number of monotone Boolean functions relative to the number of minimal ones [38]. We will also omit the floor notation $\lfloor \cdot \rfloor$, as the results are asymptotic.
Theorem 4. Let $n$ be even and let $f$ be a typical special monotone Boolean function. Then, the expected average sensitivity of $f$ is given, asymptotically, by the quantity in (18) multiplied by $n/2^{n-1}$.
Proof. We will proceed by first determining the activity of an arbitrary variable $x_j$ of a typical function $f$. By simple symmetry arguments, if we were to sample randomly from the set $M_n$ of monotone Boolean functions, the expected activities would be equal for all the variables. It will follow by (4) that the expected average sensitivity is equal to $n$ multiplied by the expected activity.
Since the function $f$ is such that its minimal ones are situated in $E^n_{n/2-1}$, $E^n_{n/2}$, and $E^n_{n/2+1}$, while it is equal to 1 on all vectors in the layers above $E^n_{n/2+1}$, the only non-trivial behavior occurs between the layers $E^n_{n/2-2}$ and $E^n_{n/2+2}$.
Let us consider the minimal ones, and hence all of the ones, on $E^n_{n/2-1}$. Since we are considering variable $x_j$, half of these minimal ones will have $\alpha_j = 0$ and the other half will have $\alpha_j = 1$. It is easy to see that if $\tilde{\alpha}$ with $\alpha_j = 0$ is a minimal one, then by monotonicity, $f(\tilde{\alpha} \oplus e_j) = 1$. Consequently, the Hamming neighbors $\tilde{\alpha}$ and $\tilde{\alpha} \oplus e_j$ contribute nothing to the sum in (2). On the other hand, if $\tilde{\alpha}$ with $\alpha_j = 1$ is a minimal one, then $\partial f(\tilde{\alpha})/\partial x_j = 1$, since $f(\tilde{\beta}) = 0$ for all $\tilde{\beta} \prec \tilde{\alpha}$. The most probable number of such minimal ones on $E^n_{n/2-1}$ contributing to the sum in (2) is thus given in (14). The number of zeros on $E^n_{n/2-1}$ is given in (15). As above, half of these will have $\alpha_j = 0$ and half will have $\alpha_j = 1$. We need not consider the zeros with $\alpha_j = 1$, since flipping $x_j$ takes them to $E^n_{n/2-2}$, where $f = 0$ for all vectors, so the partial derivative vanishes. However, we should consider the number of ones situated on the middle layer $E^n_{n/2}$. The middle layer contains equally many ones and zeros. Thus, half of the neighbors $\tilde{\alpha} \oplus e_j$ will be ones and the other half will be zeros. In total, the number of vectors $\tilde{\alpha}$ such that $\tilde{\alpha}$ is a zero on $E^n_{n/2-1}$ with $\alpha_j = 0$ and $f(\tilde{\alpha} \oplus e_j) = 1$ is given in (17).
We have now examined all partial derivatives above and below the layer $E^n_{n/2-1}$. Let us now jump to layer $E^n_{n/2+1}$, as it will be similar by duality considerations. The number of maximal zeros on that layer is equal to the number of minimal ones on $E^n_{n/2-1}$. Half of these will have $\alpha_j = 1$ and thus $f(\tilde{\alpha} \oplus e_j) = 0$ due to monotonicity. The other half will have $\alpha_j = 0$, and since $f = 1$ for all vectors in $E^n_{n/2+2}$, the same total as in (14) will result. Similarly, the number of ones on $E^n_{n/2+1}$ is the same as in (15). We are only concerned with the ones $\tilde{\alpha}$ on $E^n_{n/2+1}$ such that $\alpha_j = 1$ and $f(\tilde{\alpha} \oplus e_j) = 0$. As above, because the middle layer contains the same number of ones and zeros, the total number of such pairs of vectors is the same as in (17). Thus, having accounted for all partial derivatives above and below the layers $E^n_{n/2-1}$ and $E^n_{n/2+1}$, and having convinced ourselves that their contributions to the overall activity of variable $x_j$ are the same as in (14) and (17), we can multiply (14) and (17) by 2 and add them together to obtain (18). Since there are $2^{n-1}$ pairs of Hamming neighbors $\tilde{\alpha}$ and $\tilde{\alpha} \oplus e_j$, and since the average sensitivity is $n$ times the activity, we must multiply (18) by $n/2^{n-1}$, resulting in the statement of the theorem. □
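As a sanity check on the even-$n$ reasoning above, the following brute-force Python sketch (ours; purely illustrative, and not a substitute for the asymptotic formula, which is not expected to be accurate at such a small $n$) enumerates all monotone Boolean functions of $n = 4$ variables and reports their mean average sensitivity.

```python
# Brute force over all Boolean functions of n = 4 variables: keep the monotone
# ones and average their average sensitivities.
from itertools import product

n = 4
points = list(product((0, 1), repeat=n))                 # all 2^n input vectors

def is_monotone(table):
    """table maps each input vector to 0/1; check f(x) <= f(y) whenever x <= y."""
    for x in points:
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if table[x] > table[y]:
                    return False
    return True

def average_sensitivity(table):
    total = 0
    for x in points:
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            total += table[x] != table[y]
    return total / 2 ** n

sensitivities = []
for bits in product((0, 1), repeat=2 ** n):              # all 2^(2^n) truth tables
    table = dict(zip(points, bits))
    if is_monotone(table):
        sensitivities.append(average_sensitivity(table))

print(len(sensitivities))                                # 168 monotone functions of 4 variables
print(sum(sensitivities) / len(sensitivities))           # empirical mean average sensitivity
```

Such exhaustive enumeration is only feasible for very small $n$; the value of the asymptotic formulas in Theorems 4 and 5 is precisely that they describe the regime where enumeration is impossible.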
The case of odd $n$ is somewhat more involved because there is no “middle” layer. Instead, typical functions break up into two sets. In the first case, all minimal ones are situated on the layers $E^n_{(n-3)/2}$, $E^n_{(n-1)/2}$, and $E^n_{(n+1)/2}$, while in the second case, all minimal ones are situated on $E^n_{(n-1)/2}$, $E^n_{(n+1)/2}$, and $E^n_{(n+3)/2}$. Under random sampling, these two cases will occur with equal probabilities. As we shall see, the results will be different for each case. Thus, the expected average sensitivity will be the average of the expected average sensitivities corresponding to these two cases.
Theorem 5. Let $n$ be odd and let $f$ be a typical special monotone Boolean function. Then, the expected average sensitivity of $f$ is the average of the two expressions given in (28) and (37), respectively.
Proof. Let us first address the first of the two sets described above. As in Theorem 4, we only need to concern ourselves with four cases. These are: minimal ones on $E^n_{(n-3)/2}$ with $\alpha_j = 1$; zeros on $E^n_{(n-3)/2}$ with $\alpha_j = 0$ whose Hamming neighbor on $E^n_{(n-1)/2}$ is a one; maximal zeros on $E^n_{(n+1)/2}$ with $\alpha_j = 0$; and ones on $E^n_{(n+1)/2}$ with $\alpha_j = 1$ whose Hamming neighbor on $E^n_{(n-1)/2}$ is a zero. All other situations, such as a minimal one with $\alpha_j = 0$, will result in the partial derivatives being zero due to monotonicity of $f$ and hence make no contribution to the activity of variable $x_j$.
The number of minimal ones on $E^n_{(n-3)/2}$ such that $\alpha_j = 1$ is given in (20). At the same time, the number of zeros on $E^n_{(n-3)/2}$ such that $\alpha_j = 0$ is given in (21). Unlike in the $n$-even case, where the middle layer contains an equal number of ones and zeros, the number of ones and zeros on $E^n_{(n-1)/2}$ is not equal. The number of ones on $E^n_{(n-1)/2}$ is given in (9). Thus, if $\tilde{\alpha}$ is a zero on $E^n_{(n-3)/2}$, then the probability that $f(\tilde{\alpha} \oplus e_j) = 1$ is given by (22), where we have simply divided the number of ones on $E^n_{(n-1)/2}$ by the total number of vectors on that layer. Thus, multiplying (21) by (22) and adding to (20) gives us the total contribution to the activity of variable $x_j$ from cases 1 and 2 above, which is equal to (23).
Cases 3 and 4 are similar. The number of (maximal) zeros on $E^n_{(n+1)/2}$ for which $\alpha_j = 0$ is given in (24), and the number of ones on $E^n_{(n+1)/2}$ for which $\alpha_j = 1$ is given in (25). The proportion of zeros on $E^n_{(n-1)/2}$ is given in (26). Thus, as before, multiplying (25) by (26) and adding to (24), we get the total contribution to the activity from above and below the layer $E^n_{(n+1)/2}$ (cases 3 and 4), resulting in (27). Finally, adding (23) and (27) and then multiplying by $n/2^{n-1}$ as in Theorem 4, we obtain the expected average sensitivity of a typical function from the first set, given in (28).
We now proceed to derive the expected average sensitivity of a typical function from the second set, following the same steps. Again, we only need to concern ourselves with four cases: minimal ones on $E^n_{(n-1)/2}$ with $\alpha_j = 1$; zeros on $E^n_{(n-1)/2}$ with $\alpha_j = 0$ whose Hamming neighbor on $E^n_{(n+1)/2}$ is a one; maximal zeros on $E^n_{(n+3)/2}$ with $\alpha_j = 0$; and ones on $E^n_{(n+3)/2}$ with $\alpha_j = 1$ whose Hamming neighbor on $E^n_{(n+1)/2}$ is a zero.
The number of minimal ones on $E^n_{(n-1)/2}$ such that $\alpha_j = 1$ is given in (29). At the same time, the number of zeros on $E^n_{(n-1)/2}$ such that $\alpha_j = 0$ is given in (30). As for the first set, the number of ones and zeros on $E^n_{(n+1)/2}$ is not equal. The number of ones on $E^n_{(n+1)/2}$ is given in (11). Thus, if $\tilde{\alpha}$ is a zero on $E^n_{(n-1)/2}$, then the probability that $f(\tilde{\alpha} \oplus e_j) = 1$ is given by (31), where we have simply divided the number of ones on $E^n_{(n+1)/2}$ by the total number of vectors on that layer. Thus, multiplying (30) by (31) and adding to (29) gives us the total contribution to the activity of variable $x_j$ from cases 1 and 2 above, which is equal to (32).
Cases 3 and 4 are similar. The number of (maximal) zeros on $E^n_{(n+3)/2}$ for which $\alpha_j = 0$ is given in (33), and the number of ones on $E^n_{(n+3)/2}$ for which $\alpha_j = 1$ is given in (34). The proportion of zeros on $E^n_{(n+1)/2}$ is given in (35). Thus, as before, multiplying (34) by (35) and adding to (33), we get the total contribution to the activity from above and below the layer $E^n_{(n+3)/2}$ (cases 3 and 4), resulting in (36). Finally, adding (32) and (36) and then multiplying by $n/2^{n-1}$ as in Theorem 4, we obtain the expected average sensitivity of a typical function from the second set, given in (37).
Given that a function is equally likely to be picked from the first set as from the second, the expected average sensitivity is the average of Equations (28) and (37). □