Article

Temporal Behavior of Local Characteristics in Complex Networks with Preferential Attachment-Based Growth

by Sergei Sidorov 1,*,†, Sergei Mironov 2,†, Nina Agafonova 1,† and Dmitry Kadomtsev 2,†

1 Faculty of Mathematics and Mechanics, Saratov State University, Saratov 410012, Russia
2 Faculty of Computer Science and Information Technology, Saratov State University, Saratov 410012, Russia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.

Symmetry 2021, 13(9), 1567; https://doi.org/10.3390/sym13091567
Submission received: 23 July 2021 / Revised: 20 August 2021 / Accepted: 21 August 2021 / Published: 25 August 2021

Abstract: Studying the temporal behavior of local characteristics in growing complex networks makes it possible to understand more accurately the processes caused by the development of interconnections and links between parts of a complex system as it grows. The spatial position of an element of the system, determined on the basis of its connections with other elements, changes constantly as a result of these dynamic processes. In this paper, we examine two non-stationary Markov stochastic processes related to the evolution of Barabási–Albert networks: the first describes the dynamics of the degree of a fixed node in the network, and the second describes the dynamics of the total degree of its neighbors. We evaluate the temporal behavior of several characteristics of the distributions of these two random variables that are associated with higher-order moments, including their variation, skewness, and kurtosis. The analysis shows that both distributions have a coefficient of variation of order 1, positive skewness, and a kurtosis greater than 3. This means that both distributions have large standard deviations of the same order of magnitude as the expected values, and that they are asymmetric with fat right tails.

1. Introduction

Many technological, biological, and social systems can be represented by underlying complex networks. Such networks consist of numerous nodes, and if an interaction between a pair of elements in the system takes place, then it is assumed that the corresponding pair of nodes is connected by a link.
An important example of complex systems of this kind is economic systems, whose elements (or nodes) are firms or companies and whose links reflect their economic, informational, or financial interactions. The well-known first-mover advantage is that companies that appear earlier than others usually gain a serious advantage in their development and a larger market segment than firms that enter the market later [1]. The same effect is often observed in innovation propagation, in the production and distribution of patents and technologies, in information interaction, and in social networks. However, numerous examples of new successful technology companies (or new popular social network accounts) show that the temporal behavior of network elements is very diverse: elements that appear much later can take a more dominant position in the complex system than elements that appeared in the early stages of its development.
In this paper, we study complex systems whose growth is based on the preferential attachment mechanism and show that, while the first-mover effect holds on average with respect to the node degree (i.e., the number of its links), the temporal behavior of this quantity has an important feature: its coefficient of variation is close to 1. This means that the standard deviation of the random variable (the node degree) is comparable to its average value, which can explain the appearance of large and important nodes at later stages of the growth of such complex systems. In addition, the paper examines other local node characteristics associated with higher-order moments, which makes it possible to find the skewness and kurtosis of the distributions of these random variables.
Many networks, such as social, economic, citation, or WWW networks, evolve over time by adding new nodes, which at the moment of their appearance join the already existing ones. Numerous studies of real networks have shown that degree distributions in such complex networks follow the power law [2,3,4,5,6,7,8]. Modeling the growth of real complex networks is an important problem, and one of the first successful attempts was the Barabási–Albert model [9]. Using the mechanisms of growth and preferential attachment, the model made it possible to describe the evolution of networks with the power-law degree distribution. In recent years, researchers have proposed many extensions of this model, to approximate the properties of real systems [10,11,12,13,14,15,16,17,18,19,20]. Nevertheless, studying the peculiarities of networks generated by the pioneering Barabási–Albert model is of interest, as it sheds light on the properties of extended models [21].
The dynamics of the degree of an individual node, as the Barabási–Albert network evolves, is a stochastic process. On the one hand, it is a Markov process, since at each iteration the newborn node selects the vertices it joins based only on their current degrees, i.e., independently of their degrees at previous iterations. On the other hand, this stochastic process is not stationary, since the local characteristics change as the network grows.
It is known [22] that the expected value of the degree $d_i(t)$ of node $v_i$ at moment $t$ follows the power law:
$$E(d_i(t)) = m\left(\frac{t}{i}\right)^{1/2}.$$
The degree of a vertex at a particular moment in time is a random variable; however, knowing its mathematical expectation alone is not enough to characterize it. Quantities related to higher-order moments, such as the variation, the coefficient of asymmetry (skewness), and the kurtosis, allow one to understand the dynamic behavior of the degree of a vertex more clearly and to characterize the underlying stochastic process more definitely.
Another local characteristic of a node that is of interest (in addition to its degree) is the sum of the degrees of all its neighbors. Knowledge of its dynamics makes it possible to answer many questions related to the local neighborhood of a given node:
  • How much faster does the total degree of the neighbors grow than the degree of the node itself?
  • Are the variation of the node degree and the variation of the total degree of its neighbors comparable?
  • Do their asymmetry coefficients differ or not?
  • Do their kurtoses differ or not?
In this paper, we answer these questions and find the values of these characteristics for the distributions of both the degree of a node and the total degree of its neighbors. While different methods can be employed to estimate these local characteristics [23], in this paper we use the mean-field approach as a method for assessing these quantities [22,24,25].
The recent paper [26] studies the limit behavior of the degree of an individual node in the Barabási–Albert model and shows that, after some scaling procedures, this stochastic process converges to a Yule process (in distribution). Based on these findings, the paper examines why the limit degree distribution of a node picked uniformly at random (as the network grows to infinity) matches the limit distribution of the number of species chosen randomly in a Yule model (as time goes to infinity).
In contrast with [26], our paper focuses on the time dynamics of the distribution characteristics rather than on their limit behavior. In addition, we expand the study with an analysis of the total degree of the neighbors of a node.

2. Barabási–Albert Model

2.1. Notations and Definitions

Let $G_t = \{V_t, E_t\}$ be a graph, where $V_t = \{v_1, \ldots, v_t\}$ is the set of vertices and $E_t$ is the set of edges. Let $d_i(t)$ denote the degree of node $v_i$ in graph $G_t$. Let $m \in \mathbb{N}$ be a fixed integer.
According to the Barabási–Albert model, graph $G_{t+1}$ is obtained from graph $G_t$ (at each discrete time moment $t+1 = m+1, m+2, \ldots$) in the following way:
  • At the initial time $t = m$, $G_m = \{V_m, E_m\}$ is a graph with $|V_m| = m$ and $|E_m| = m^2$;
  • One vertex $v_{t+1}$ is attached to the graph, i.e., $V_{t+1} = V_t \cup \{v_{t+1}\}$;
  • $m_{t+1}$ edges connecting vertex $v_{t+1}$ with $m_{t+1}$ existing vertices are added; each of these edges appears as the result of the realization of the discrete random variable $\xi_{t+1}$ that takes the value $i$ with probability $P(\xi_{t+1} = i) = \frac{d_i(t)}{2mt}$. If $\xi_{t+1} = i$, then edge $(v_{t+1}, v_i)$ is added to the graph. We conduct $m$ such independent repetitions. If the random variable $\xi_{t+1}$ takes the same value $i$ in two or more repetitions at the iteration, then only one edge is added (there are no multiple edges in the graph). A minimal simulation sketch of this growth step is given below.
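For readers who prefer code, the growth step just described can be sketched as follows (a minimal illustration in Python; the function ba_step and all names are ours, not the authors' implementation, and for simplicity the initial graph is taken to be complete):

```python
import random

def ba_step(degrees, edges, m):
    """One growth iteration of the Barabasi-Albert model.

    degrees[i] holds the current degree d_i(t); edges is a set of pairs
    (i, j) with i < j; m is the number of links brought by a new node.
    Each of the m independent repetitions picks an existing node with
    probability proportional to its current degree; repeated picks of the
    same node produce a single edge, so there are no multiple edges.
    """
    new = len(degrees)  # index of the newborn node v_{t+1}
    targets = {random.choices(range(new), weights=degrees)[0] for _ in range(m)}
    degrees.append(0)
    for i in targets:
        edges.add((i, new))  # i < new always holds
        degrees[i] += 1
        degrees[new] += 1

# grow a network to t = 1000 nodes, starting (for simplicity)
# from the complete graph on m vertices
m = 3
degrees = [m - 1] * m
edges = {(a, b) for a in range(m) for b in range(a + 1, m)}
while len(degrees) < 1000:
    ba_step(degrees, edges, m)
```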
Denote by $\xi_i^{t+1}$ the (cumulative) random variable that takes the value $i$ if $\xi_{t+1}$ takes $i$ in at least one of the $m$ repetitions at iteration $t+1$.
Remark 1.
We are interested in the evolution of the graph for sufficiently large t. In this case, the probability that the random variable $\xi_{t+1}$ takes the value $i$ exactly $k \ge 2$ times in a series of $m$ independent repetitions is proportional to $\left(\frac{d_i(t)}{2mt}\right)^k \left(1 - \frac{d_i(t)}{2mt}\right)^{m-k}$, which is an order of magnitude less than the probability $\frac{d_i(t)}{2mt}$. Therefore, without loss of generality, we will assume that $m_{t+1} = m$ for all $t+1$. Then the probability that an edge from the new vertex $v_{t+1}$ that appears at iteration $t+1$ is linked to vertex $v_i$ is
$$P(\xi_i^{t+1}) = m\,\frac{d_i(t)}{2mt} = \frac{d_i(t)}{2t}.$$
Let $d_i(t)$ be the degree of node $v_i$ in graph $G_t$, and let $s_i(t)$ be the total sum of the degrees of all neighbors of $v_i$ in graph $G_t$.
Note that the trajectories of these quantities over time $t$ are described by non-stationary Markov processes, since their values at each moment $t$ are random variables that depend only on the state of the system at the previous moment. In [27,28], asymptotic estimates of the expected values of these quantities at iteration $t$ were found:
$$E(d_i(t)) = m\left(\frac{t}{i}\right)^{1/2}, \qquad E(s_i(t)) = \frac{m^2}{2}\left(\frac{t}{i}\right)^{1/2}(\log t + C),$$
where $C$ is a constant.
The aim of this work is to further analyze the behavior of these stochastic processes in time. In this article, we focus on estimating their moments, variances, asymmetry coefficients, and kurtoses.

2.2. Temporal Behavior in Simulated Networks

The stationarity of a stochastic process means that the distribution parameters of the random variable remain unchanged over time. Obviously, the processes under consideration are not stationary. This can clearly be seen in Figure 1, Figure 2, Figure 3 and Figure 4, which show empirical histograms of the distributions of the random variables $d_i(t)$ and $s_i(t)$ based on different realizations of their trajectories. The histograms were obtained as follows: we simulated the evolution of BA graphs 200 times and obtained 200 corresponding values of the random variables $d_i(t)$ and $s_i(t)$ for two nodes, $i = 10$ and $i = 50$, at iterations $t = 5000$ and $t = 20{,}000$. To construct the histograms, we used 15 bins. Figure 1 and Figure 2 represent histograms of $d_i$ and were obtained for $m = 3$ and $m = 5$, respectively. Figure 3 and Figure 4 show histograms of $s_i$ and were obtained for $m = 3$ and $m = 5$, respectively. The empirical values of the characteristics of the distributions of the random variables are presented in Table 1, Table 2, Table 3 and Table 4.
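A sketch of the procedure behind these histograms and the empirical characteristics in Tables 1–4 is given next (it reuses the hypothetical ba_step helper from Section 2.1; with 200 runs of 20,000 iterations pure Python is slow, so smaller parameters are used for illustration):

```python
import numpy as np

def node_stats(i, t_final, m, runs=50):
    """Empirical mean, st.dev., skewness and kurtosis of d_i and s_i."""
    d_samples, s_samples = [], []
    for _ in range(runs):
        degrees = [m - 1] * m
        edges = {(a, b) for a in range(m) for b in range(a + 1, m)}
        while len(degrees) < t_final:
            ba_step(degrees, edges, m)
        neighbors = [b if a == i else a for (a, b) in edges if i in (a, b)]
        d_samples.append(degrees[i])
        s_samples.append(sum(degrees[j] for j in neighbors))
    result = {}
    for name, data in (("d_i", d_samples), ("s_i", s_samples)):
        x = np.asarray(data, dtype=float)
        z = (x - x.mean()) / x.std()
        # non-excess kurtosis, so the normal distribution gives 3
        result[name] = (x.mean(), x.std(), (z ** 3).mean(), (z ** 4).mean())
    return result

print(node_stats(i=10, t_final=2000, m=3))
```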
The experimental results show that both distributions have mean values that increase over time. In addition, their standard deviations grow proportionally to their means. The values of the skewness coefficient are positive in all cases, which indicates the asymmetry of the distributions. The kurtosis is greater than 3, which means that their tails are thicker than the tails of the normal distribution.

2.3. The Evolution of the Barabási–Albert Networks

It follows from the definition of the Barabási–Albert network that
  • If $\xi_{t+1} = i$, then $d_i(t+1) = d_i(t) + 1$ and $s_i(t+1) = s_i(t) + m$, as the result of joining node $v_i$ with the newborn node $v_{t+1}$ of degree $m$.
  • If $\xi_{t+1} = j$ and $(v_j, v_i) \in E_t$, i.e., the new node $v_{t+1}$ joins a neighbor $v_j$ of node $v_i$, then $d_i(t+1) = d_i(t)$ and $s_i(t+1) = s_i(t) + 1$.
Let $\xi_i^{t+1} = 1$ if node $v_{t+1}$ links to node $v_i$ at iteration $t+1$, and $\xi_i^{t+1} = 0$ otherwise; i.e.,
$$\xi_i^{t+1} = \begin{cases} 1, & (v_{t+1}, v_i) \in E_{t+1}, \\ 0, & \text{otherwise}. \end{cases}$$
Let $\eta_i^{t+1} = 1$ if node $v_{t+1}$ links to one of the neighbors of node $v_i$ at iteration $t+1$, and $\eta_i^{t+1} = 0$ otherwise; i.e.,
$$\eta_i^{t+1} = \begin{cases} 1, & (v_{t+1}, v_j) \in E_{t+1} \text{ and } (v_j, v_i) \in E_t, \\ 0, & \text{otherwise}. \end{cases}$$
Then the conditional expectations of $\xi_i^{t+1}$ and $\eta_i^{t+1}$ at moment $t+1$ are equal to
$$E(\xi_i^{t+1} \mid G_t) = \frac{d_i(t)}{2t}, \qquad E(\eta_i^{t+1} \mid G_t) = \frac{s_i(t)}{2t}.$$
Let $\mu_n(\xi)$ denote the $n$-th central moment of a random variable $\xi$, defined by
$$\mu_n(\xi) = E(\xi - E(\xi))^n, \quad n \in \mathbb{N}.$$
Due to the linearity of the mathematical expectation, the following formula holds for finding the $n$-th central moment of a random variable:
$$\mu_n(\xi) = \sum_{i=0}^{n} (-1)^i \binom{n}{i}\,\nu_{n-i}(\xi)\,\nu_1^i(\xi),$$
where $\nu_i(\xi)$ is the $i$-th (raw) moment, defined by
$$\nu_i(\xi) = E(\xi^i).$$
Indeed, it follows from Equation (4) that we have
$$\mu_n = E\left(\sum_{i=0}^{n} (-1)^i \binom{n}{i} E^i(\xi)\,\xi^{n-i}\right) = \sum_{i=0}^{n} (-1)^i \binom{n}{i} E^i(\xi)\,E(\xi^{n-i}) = \sum_{i=0}^{n} (-1)^i \binom{n}{i}\,\nu_1^i\,\nu_{n-i}.$$
The variance is the second central moment of the random variable, i.e., $\mu_2(\xi) = \mathrm{Var}(\xi)$.
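Equation (5) is easy to exercise numerically; the helper below (our illustration, with numpy assumed) converts raw moments into the $n$-th central moment and can be checked against a direct computation:

```python
from math import comb
import numpy as np

def central_from_raw(nu):
    """n-th central moment from raw moments nu = [nu_1, ..., nu_n],
    via mu_n = sum_{i=0}^{n} (-1)**i * C(n,i) * nu_{n-i} * nu_1**i,
    with the convention nu_0 = 1."""
    n = len(nu)
    nu0 = [1.0] + list(nu)  # nu0[k] holds nu_k
    return sum((-1) ** i * comb(n, i) * nu0[n - i] * nu0[1] ** i
               for i in range(n + 1))

x = np.random.gamma(2.0, 1.0, 10 ** 6)       # any test distribution
raw = [np.mean(x ** k) for k in range(1, 5)]
print(central_from_raw(raw))                  # ~ E(x - Ex)**4
print(np.mean((x - x.mean()) ** 4))           # direct check, close value
```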

3. Node Degree Dynamics: The Evolution of Its Variation and High-Order Moments in Time

3.1. The Variation of $d_i(t)$

Lemma 1.
The second moment of $d_i(t)$ follows
$$E(d_i^2(t)) = m(m+1)\frac{t}{i} - m\left(\frac{t}{i}\right)^{1/2}.$$
Proof.
We have
$$\Delta d_i^2(t+1) := d_i^2(t+1) - d_i^2(t) = (d_i(t)+1)^2\,\xi_i^{t+1} + d_i^2(t)\left(1 - \xi_i^{t+1}\right) - d_i^2(t) = (2d_i(t)+1)\,\xi_i^{t+1}.$$
It follows from (3) that the conditional expectation of $\Delta d_i^2(t+1)$ at iteration $t+1$ is
$$E(\Delta d_i^2(t+1) \mid G_t) = (2d_i(t)+1)\,\frac{d_i(t)}{2t} = \frac{d_i^2(t)}{t} + \frac{d_i(t)}{2t}.$$
Now let us pass from the difference equation, Equation (8), to its approximate version, a differential equation, denoting $E(d_i^2(t))$ by $f(t)$, replacing $\Delta f(t)$ with $\frac{df(t)}{dt}$, and replacing $d_i(t)$ with its expectation $m\left(\frac{t}{i}\right)^{1/2}$. Then we get
$$\frac{df(t)}{dt} = \frac{f(t)}{t} + \frac{m}{2}\,(ti)^{-1/2},$$
the solution of which is $f(t) = ct - m\left(\frac{t}{i}\right)^{1/2}$. Since $d_i(i) = m$, i.e., $d_i^2(i) = m^2$, we get $c = \frac{m(m+1)}{i}$. Thus, we get (6). □
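The mean-field solution can be verified symbolically. A quick check (our sketch, assuming sympy is available) confirms that the claimed $f(t)$ satisfies both the differential equation and the initial condition $f(i) = m^2$:

```python
import sympy as sp

t, i, m = sp.symbols("t i m", positive=True)
f = m * (m + 1) * t / i - m * sp.sqrt(t / i)   # claimed E(d_i^2(t))

# residual of df/dt = f/t + (m/2) * (t*i)**(-1/2)
residual = sp.diff(f, t) - (f / t + m / (2 * sp.sqrt(t * i)))
print(sp.simplify(residual))        # -> 0
print(sp.simplify(f.subs(t, i)))    # -> m*(m + 1) - m = m**2
```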
To illustrate the result, we carried out $T = 200$ independent runs in which the evolution of a BA graph was simulated, each time for $N = 20{,}000$ iterations, for the values $m = 3$ and $m = 5$. We then computed the means of the empirical values of $d_i^2(t)$. The results are presented in Figure 5.
Theorem 1.
The variation of $d_i(t)$ at iteration $t$ is
$$\mathrm{Var}(d_i(t)) = m\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right).$$
Proof.
The definition of variance implies
$$\mathrm{Var}(d_i(t)) = E(d_i^2(t)) - E^2(d_i(t)).$$
The Theorem then follows from Lemma 1 and the estimate $E(d_i(t)) = m\left(\frac{t}{i}\right)^{1/2}$ (see Equation (2)). □
The standard deviation of $d_i(t)$, defined as $\sqrt{\mathrm{Var}(d_i(t))}$, is of the same order of magnitude as $E(d_i(t))$:
$$\frac{E(d_i^2(t))}{E^2(d_i(t))} \to \frac{m+1}{m} \quad \text{as } t \to \infty,$$
i.e., the normalized second moment of $d_i(t)$ tends to $\frac{m+1}{m} > 1$ as $t$ tends to $\infty$. Thus, the $d_i(t)$-distribution is high-variance.
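For completeness, here is the computation behind this limit, written out from Theorem 1 and Equation (2) (a worked step, ours):

```latex
\frac{\operatorname{Var}(d_i(t))}{E^2(d_i(t))}
  = \frac{m\left(\frac{t}{i}-\left(\frac{t}{i}\right)^{1/2}\right)}{m^2\,\frac{t}{i}}
  = \frac{1-(i/t)^{1/2}}{m}
  \;\xrightarrow{\;t\to\infty\;}\; \frac{1}{m},
\qquad
\frac{E(d_i^2(t))}{E^2(d_i(t))}
  = 1+\frac{\operatorname{Var}(d_i(t))}{E^2(d_i(t))}
  \;\xrightarrow{\;t\to\infty\;}\; \frac{m+1}{m}.
```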

3.2. The High-Order Moments of $d_i(t)$

Theorem 2.
$$E(d_i^n(t)) \approx C(n,m)\left(\frac{t}{i}\right)^{n/2} + O\!\left(t^{\frac{n-1}{2}}\right),$$
where $C(n,m) = m(m+1)\cdots(m+n-1)$ depends on $n$ and $m$ only.
Proof.
We have
$$\Delta d_i^n(t+1) := d_i^n(t+1) - d_i^n(t) = (d_i(t)+1)^n\,\xi_i^{t+1} + d_i^n(t)\left(1 - \xi_i^{t+1}\right) - d_i^n(t).$$
Then it follows from (3) that
$$E(\Delta d_i^n(t+1) \mid G_t) = \frac{d_i(t)}{2t}\left((d_i(t)+1)^n - d_i^n(t)\right) = \frac{d_i(t)}{2t}\sum_{j=0}^{n-1}\binom{n}{j} d_i^j(t) = \frac{1}{2t}\sum_{j=0}^{n-1}\binom{n}{j} d_i^{j+1}(t) = \frac{1}{2t}\left(n\,d_i^n(t) + \sum_{j=0}^{n-2}\binom{n}{j} d_i^{j+1}(t)\right).$$
Assuming that $E(d_i^{k}(t))$ has been obtained for all $k \in \{1, \ldots, n-1\}$, the expectation of $\sum_{j=0}^{n-2}\binom{n}{j} d_i^{j+1}(t)$ can be found using the linearity of expectation. Taking the expectation of both sides and denoting $E(d_i^n(t))$ and $\sum_{j=0}^{n-2}\binom{n}{j} E(d_i^{j+1}(t))$ by $f(t)$ and $g(t)$, respectively, we get the following differential equation:
$$\frac{df}{dt} = \frac{n f}{2t} + \frac{g}{2t}.$$
We get its solution in the following form:
$$f(t) = \frac{t^{n/2}}{2}\int t^{-\frac{n}{2}-1}\,g(t)\,dt + C\,t^{n/2},$$
and we obtain the recurrent formula for finding $E(d_i^n(t))$:
$$E(d_i^n(t)) = \frac{t^{n/2}}{2}\int t^{-\frac{n}{2}-1}\sum_{j=0}^{n-2}\binom{n}{j} E(d_i^{j+1}(t))\,dt + C\,t^{n/2},$$
where the constant $C$ can be found from the initial condition $E(d_i^n(i)) = m^n$.
Let us show by induction that
$$E(d_i^n(t)) = C_1\left(\frac{t}{i}\right)^{1/2} + C_2\left(\frac{t}{i}\right)^{2/2} + \cdots + C_n\left(\frac{t}{i}\right)^{n/2}.$$
Indeed, if $n = 1$, we get the well-known estimate $E(d_i(t)) = m\left(\frac{t}{i}\right)^{1/2}$.
Suppose that (11) is true for all $n' < n$. We will show that (11) is also fulfilled for $n' = n$. We have
$$E(d_i^n(t)) = \frac{t^{n/2}}{2}\int t^{-\frac{n}{2}-1}\sum_{j=0}^{n-2}\binom{n}{j} E(d_i^{j+1}(t))\,dt + C\,t^{n/2}$$
by the induction hypothesis. The sum $\sum_{j=0}^{n-2}\binom{n}{j} E(d_i^{j+1}(t))$ can be presented as
$$C_1'\left(\frac{t}{i}\right)^{1/2} + C_2'\left(\frac{t}{i}\right)^{2/2} + \cdots + C_{n-1}'\left(\frac{t}{i}\right)^{\frac{n-1}{2}}.$$
Then the whole integral is the sum of integrals of the form $\int t^{-\frac{n}{2}-1}\,C_p'\left(\frac{t}{i}\right)^{p/2} dt$, where $p \in \{1, \ldots, n-1\}$, each of which is equal (up to a constant factor) to $C_p'\,t^{\frac{p-n}{2}}\,i^{-\frac{p}{2}}$. Therefore, we get
$$E(d_i^n(t)) = \frac{t^{n/2}}{2}\sum_{p=1}^{n-1} C_p''\,t^{\frac{p-n}{2}}\,i^{-\frac{p}{2}} + C\,t^{n/2} = \sum_{p=1}^{n-1} C_p''' \left(\frac{t}{i}\right)^{p/2} + C\,t^{n/2} = C_1\left(\frac{t}{i}\right)^{1/2} + C_2\left(\frac{t}{i}\right)^{2/2} + \cdots + C_n\left(\frac{t}{i}\right)^{n/2}.$$
□
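The recurrent formula (10) can also be executed mechanically. The sketch below (our illustration, assuming sympy is available) reproduces the closed form of Lemma 1 and anticipates Lemma 2 of the next subsection:

```python
import sympy as sp

t, i, m = sp.symbols("t i m", positive=True)
C = sp.symbols("C")

def moment(n, _cache={}):
    """E(d_i^n(t)) from the recurrence (10) with E(d_i^n(i)) = m**n."""
    if n == 1:
        return m * sp.sqrt(t / i)  # the known first moment
    if n not in _cache:
        g = sum(sp.binomial(n, j) * moment(j + 1) for j in range(n - 1))
        integral = sp.integrate(t ** (-sp.Rational(n, 2) - 1) * g, t)
        sol = t ** sp.Rational(n, 2) * (integral / 2 + C)
        c_val = sp.solve(sp.Eq(sol.subs(t, i), m ** n), C)[0]
        _cache[n] = sp.expand(sol.subs(C, c_val))
    return _cache[n]

print(moment(2))  # m*(m+1)*t/i - m*sqrt(t/i), i.e., Lemma 1
print(moment(3))  # m(m+1)(m+2)(t/i)^(3/2) - 3m(m+1)t/i + m(t/i)^(1/2)
```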

3.3. The Skewness of $d_i(t)$

The asymmetry coefficient $\gamma_1(\xi)$ of a random variable $\xi$ is defined by
$$\gamma_1(\xi) = \frac{\mu_3(\xi)}{\mu_2^{3/2}(\xi)},$$
where $\mu_3(\xi)$ and $\mu_2(\xi)$ are the third and second central moments of the $\xi$-distribution, respectively.
Lemma 2.
The third moment of $d_i(t)$ follows
$$E(d_i^3(t)) = m(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2} - 3m(m+1)\frac{t}{i} + m\left(\frac{t}{i}\right)^{1/2}.$$
Proof.
Using Equation (10), we can find $E(d_i^3(t))$:
$$E(d_i^3(t)) = \frac{t^{3/2}}{2}\int t^{-5/2}\sum_{j=0}^{1}\binom{3}{j} E(d_i^{j+1}(t))\,dt + C\,t^{3/2} = \frac{t^{3/2}}{2}\int t^{-5/2}\left(E(d_i(t)) + 3E(d_i^2(t))\right)dt + C\,t^{3/2} = \frac{t^{3/2}}{2}\int t^{-5/2}\left(m\left(\frac{t}{i}\right)^{1/2} + 3m(m+1)\frac{t}{i} - 3m\left(\frac{t}{i}\right)^{1/2}\right)dt + C\,t^{3/2} = m\left(\frac{t}{i}\right)^{1/2} - 3m(m+1)\frac{t}{i} + C\,t^{3/2}.$$
We have
$$E(d_i^3(i)) = m^3 \;\Longrightarrow\; C = \frac{m^3 + 3m^2 + 2m}{i^{3/2}} = \frac{m(m+1)(m+2)}{i^{3/2}}.$$
Therefore, we get (12). □
To illustrate the result, we carried out $T = 200$ independent runs; in each of them, a BA graph was simulated for $N = 20{,}000$ iterations, for the values $m = 3$ and $m = 5$. Then the empirical values of $\mathrm{mean}(d_i^3(t))$ were obtained. The results are presented in Figure 6.
Theorem 3.
The asymmetry coefficient $\gamma_1(d_i(t))$ of the distribution of $d_i(t)$ follows
$$\gamma_1(d_i(t)) \approx \frac{2\left(\frac{t}{i}\right)^{1/2} - 1}{m^{1/2}\left(\frac{t}{i}\right)^{1/4}\left(\left(\frac{t}{i}\right)^{1/2} - 1\right)^{1/2}}.$$
Proof.
Using Equation (5), we can find the third central moment $\mu_3(d_i(t))$ as follows:
$$\mu_3(d_i(t)) = E(d_i^3) - 3E(d_i^2)E(d_i) + 2E^3(d_i) = m\left(\frac{t}{i}\right)^{1/2} - 3m\,\frac{t}{i} + 2m\left(\frac{t}{i}\right)^{3/2}.$$
Therefore, using Theorem 1, we have
$$\gamma_1(d_i(t)) := \frac{\mu_3(d_i(t))}{\mu_2^{3/2}(d_i(t))} = \frac{m\left(\frac{t}{i}\right)^{1/2} - 3m\,\frac{t}{i} + 2m\left(\frac{t}{i}\right)^{3/2}}{\left(m\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right)\right)^{3/2}} = \frac{m\left(\frac{t}{i}\right)^{1/2}\left(2\left(\frac{t}{i}\right)^{1/2} - 1\right)\left(\left(\frac{t}{i}\right)^{1/2} - 1\right)}{m^{3/2}\left(\frac{t}{i}\right)^{3/4}\left(\left(\frac{t}{i}\right)^{1/2} - 1\right)^{3/2}} = \frac{2\left(\frac{t}{i}\right)^{1/2} - 1}{m^{1/2}\left(\frac{t}{i}\right)^{1/4}\left(\left(\frac{t}{i}\right)^{1/2} - 1\right)^{1/2}}.$$
□
Remark 2.
It follows from Theorem 3 that $\gamma_1(d_i(t)) > 0$ for all $t > i$; therefore, the distribution of $d_i(t)$ is asymmetric, and its right tail is thicker than its left tail. The initial value of the asymmetry coefficient is about $4m^{-1/2}$. However, $\gamma_1(d_i(t)) \to 2m^{-1/2}$ as $t \to \infty$. Therefore, its value decreases as the network grows (see Figure 7).

3.4. The Kurtosis of $d_i(t)$

Using Equation (10), we can find $E(d_i^4(t))$:
$$E(d_i^4(t)) = m(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2} - 6m(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2} + 7m(m+1)\frac{t}{i} - m\left(\frac{t}{i}\right)^{1/2},$$
which in turn can be used to find the kurtosis of $d_i$.
Theorem 4.
The kurtosis of $d_i$ follows
$$\mathrm{Kurt}(d_i(t)) \approx \frac{\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right)\left(3m(m+2)\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right) + m\right)}{m^2\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right)^{2}}.$$
Proof.
By the definition of kurtosis, we have
$$\mathrm{Kurt}(d_i(t)) := \frac{\mu_4(d_i(t))}{\mathrm{Var}^2(d_i(t))},$$
where
$$\mu_4(d_i(t)) = E(d_i(t) - E(d_i(t)))^4 = E(d_i^4(t)) - 4E(d_i(t))E(d_i^3(t)) + 6E^2(d_i(t))E(d_i^2(t)) - 3E^4(d_i(t)) = \left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right)\left(3m(m+2)\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right) + m\right).$$
Thus, we get (14). □
Remark 3.
Equation (14) implies $\mathrm{Kurt}(d_i(t)) > \frac{3(m+2)}{m}$ for all $t$. Moreover, $\mathrm{Kurt}(d_i(t))$ gradually decreases to $\frac{3(m+2)}{m}$ as $t$ tends to infinity (see Figure 8). This means that the distribution of $d_i(t)$ is heavy-tailed for small $t$ and is close to the normal distribution for large $t$ and large $m$.
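Remarks 2 and 3 are easy to illustrate numerically. Note that Equation (14) simplifies algebraically to $\mathrm{Kurt}(d_i(t)) = \frac{3(m+2)}{m} + \frac{1}{m\left(\frac{t}{i} - \left(\frac{t}{i}\right)^{1/2}\right)}$. The sketch below (ours) evaluates the skewness and kurtosis formulas as $t/i$ grows:

```python
import numpy as np

m = 3
for ratio in [2, 10, 100, 10 ** 5]:   # ratio = t / i
    x = np.sqrt(ratio)                # x = (t/i)**(1/2)
    skew = (2 * x - 1) / np.sqrt(m * (x ** 2 - x))
    kurt = 3 * (m + 2) / m + 1 / (m * (x ** 2 - x))
    print(ratio, round(skew, 3), round(kurt, 3))
# skewness decreases towards 2/sqrt(m) ~ 1.155 for m = 3,
# kurtosis decreases towards 3*(m + 2)/m = 5 for m = 3
```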

4. The Dynamics of $s_i(t)$: Its Variation, Asymmetry Coefficient, and Kurtosis

In this section, we consider the random variable $s_i(t)$, which is defined as the sum of the degrees of the neighbors of vertex $v_i$ at time $t$. The expected value $E(s_i(t))$ of this random variable was found in [27] (for $m = 1$) and [28] (for arbitrary $m$):
$$E(s_i(t)) = \frac{m^2}{2}\left(\frac{t}{i}\right)^{1/2}(\log t + C),$$
where $C$ is a constant.
In this section, the dynamics of the stochastic process $s_i(t)$ is investigated more deeply; namely, the dynamics of the second, third, and fourth moments $E(s_i^n(t))$, $n = 2, 3, 4$, are found, which allows us to estimate the variation, the asymmetry coefficient, and the kurtosis of $s_i(t)$.

4.1. The Second Moment and the Variation of $s_i(t)$

We first find the exact value of the constant $C$ in Equation (16).
Let $P(i,j)$ denote the probability that vertex $v_i$ is connected to vertex $v_j$ at the moment of its appearance at time $i$, i.e., $P(i,j) = \frac{d_j(i)}{2i}$. We get
$$E(s_i(i)) = E\left(\sum_{j=1}^{i-1} P(i,j)\,d_j(i)\right) = E\left(\sum_{j=1}^{i-1} \frac{d_j^2(i)}{2i}\right) = \frac{1}{2i}\sum_{j=1}^{i-1} E(d_j^2(i)).$$
Since
$$E(d_j^2(i)) = m(m+1)\frac{i}{j} - m\left(\frac{i}{j}\right)^{1/2},$$
we can continue the equality as follows:
$$E(s_i(i)) = \frac{1}{2i}\sum_{j=1}^{i-1}\left(m(m+1)\frac{i}{j} - m\left(\frac{i}{j}\right)^{1/2}\right) = \frac{1}{2i}\left(m(m+1)\,i\sum_{j=1}^{i-1}\frac{1}{j} - m\,i^{1/2}\sum_{j=1}^{i-1}\frac{1}{j^{1/2}}\right) \approx \frac{1}{2i}\left(m(m+1)\,i\log i - m\,i^{1/2}\cdot 2i^{1/2}\right) = \frac{m}{2}\left((m+1)\log i - 2\right).$$
Therefore,
$$E(s_i(i)) = \frac{m^2}{2}(\log i + C) = \frac{m}{2}\left((m+1)\log i - 2\right) \;\Longrightarrow\; C = \frac{1}{m}(\log i - 2),$$
and we finally get
$$E(s_i(t)) = \frac{m^2}{2}\left(\frac{t}{i}\right)^{1/2}\left(\log t + \frac{1}{m}\log i - \frac{2}{m}\right).$$
This result will be useful to us later.
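The two sum approximations used above ($\sum_{j<i} 1/j \approx \log i$ and $\sum_{j<i} j^{-1/2} \approx 2\sqrt{i}$) can be checked numerically; a quick sketch (ours, with numpy assumed) shows the gap comes only from the dropped lower-order constants:

```python
import numpy as np

m, i = 3, 1000
j = np.arange(1, i)
exact = np.sum(m * (m + 1) * i / j - m * np.sqrt(i / j)) / (2 * i)
approx = m / 2 * ((m + 1) * np.log(i) - 2)
print(exact, approx)  # close values; the gap is the dropped constant terms
```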
To illustrate the result, we carried out $T = 200$ independent runs; in each of them, a BA graph was simulated for $N = 20{,}000$ iterations, for the values $m = 3$ and $m = 5$. Then the empirical means of $s_i(t)$ were obtained. The results are presented in Figure 9.
Lemma 3.
The second moment of $s_i(t)$ is
$$E(s_i^2(t)) = \frac{t}{i}\left(\frac{m^3(m+1)}{4}\log^2 t - m^3\log t + m^2\left(\frac{m+1}{4}\log^2 i - \log i + 1\right)\right).$$
Proof.
Let us consider how the values $s_i(t+1)$ and $s_i(t)$ are related:
  • If the new vertex $v_{t+1}$ joins vertex $v_i$ at time $t+1$, then $s_i(t)$ increases by $m$, since vertex $v_i$ obtains a new neighbor whose degree is $m$;
  • If the new vertex $v_{t+1}$ joins one of the neighbors of vertex $v_i$, then $s_i(t)$ increases by 1, since in this case the contribution of one neighboring vertex to the increase of $s_i(t)$ is 1;
  • If none of these events occurs, then $s_i(t)$ does not change.
Now we can obtain the stochastic difference equation for the random variable $s_i^2(t)$ at the moment of time $t$. We have
$$\Delta s_i^2(t+1) := s_i^2(t+1) - s_i^2(t) = \xi_i^{t+1}(s_i(t)+m)^2 + \eta_i^{t+1}(s_i(t)+1)^2 + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)s_i^2(t) - s_i^2(t) = \xi_i^{t+1}\left(2m\,s_i(t) + m^2\right) + \eta_i^{t+1}\left(2s_i(t) + 1\right).$$
Since
$$E(\xi_i^{t+1} \mid G_t) = \frac{d_i(t)}{2t}, \qquad E(\eta_i^{t+1} \mid G_t) = \frac{s_i(t)}{2t},$$
we get
$$E(\Delta s_i^2(t+1) \mid G_t) = \frac{s_i^2(t)}{t} + \frac{m\,d_i(t)\,s_i(t)}{t} + \frac{s_i(t)}{2t} + \frac{m^2 d_i(t)}{2t}.$$
We cannot assert that $s_i(t)$ and $d_i(t)$ are independent; therefore, we may expect that $E(d_i(t)s_i(t)) \ne E(d_i(t))E(s_i(t))$. Lemma A1 finds $E(d_i(t)s_i(t))$ (see Appendix A).
Using Lemma A1, Equations (2) and (19), passing to the mathematical expectation of both sides, and making the substitution $f = E(s_i^2(t))$ for convenience, we get the approximate differential equation
$$\frac{df}{dt} = \frac{f}{t} + \frac{m^3(m+1)}{2i}\log t - \frac{m^3}{i}.$$
Its solution has the form
$$f(t) = \frac{m^3(m+1)\,t}{4i}\log^2 t - \frac{m^3 t}{i}\log t + Ct,$$
where $C$ is a constant. We have
$$E(s_i^2(i)) = \frac{m^3(m+1)}{4}\log^2 i - m^3\log i + Ci.$$
On the other hand, since $E(s_i(i)) = \frac{m}{2}\left((m+1)\log i - 2\right)$, we have
$$E(s_i^2(i)) \approx E^2(s_i(i)) = \frac{m^2(m+1)^2}{4}\log^2 i - m^2(m+1)\log i + m^2.$$
Equating this result with (24), we find $C$:
$$C = \frac{1}{i}\left(\frac{m^2(m+1)}{4}\log^2 i - m^2\log i + m^2\right).$$
Thus, we get the Lemma. □
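As with Lemma 1, the solution of the approximate differential equation above can be checked symbolically (a sketch, assuming sympy is available):

```python
import sympy as sp

t, i, m, C = sp.symbols("t i m C", positive=True)

# claimed solution f(t) of df/dt = f/t + m**3*(m+1)/(2*i)*log(t) - m**3/i
f = (m ** 3 * (m + 1) * t / (4 * i) * sp.log(t) ** 2
     - m ** 3 * t / i * sp.log(t) + C * t)

residual = sp.diff(f, t) - (f / t + m ** 3 * (m + 1) / (2 * i) * sp.log(t)
                            - m ** 3 / i)
print(sp.simplify(residual))  # -> 0
```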
To confirm the result, we carried out $T = 200$ independent runs; in each of them, a BA graph was simulated for $N = 20{,}000$ iterations, for the values $m = 3$ and $m = 5$. Then the empirical values of $\mathrm{mean}(s_i^2(t))$ were obtained. The results are presented in Figure 10.
Theorem 5.
The variation of $s_i(t)$ follows
$$\mathrm{Var}(s_i(t)) = E(s_i^2(t)) - E^2(s_i(t)) = \frac{m^3}{4}\left(\log^2 t - 2\log i\,\log t + \log^2 i\right)\frac{t}{i} = \frac{m^3}{4}\,\frac{t}{i}\,\log^2\frac{t}{i}.$$
Proof.
Since $\mathrm{Var}(s_i(t)) = E(s_i^2(t)) - E^2(s_i(t))$, the statement is a consequence of Equation (19) and Lemma 3. □

4.2. The Third Moment and the Asymmetry Coefficient of $s_i(t)$

Lemma 4.
The third moment of $s_i(t)$ is
$$E(s_i^3(t)) = \frac{1}{8}m^4(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2}\log^3 t + O\!\left(t^{3/2}\log^2 t\right).$$
Proof.
Let us obtain the stochastic difference equation describing the dynamics of the random variable $s_i^3(t)$ at moment $t$. We have
$$\Delta s_i^3(t+1) := s_i^3(t+1) - s_i^3(t) = \xi_i^{t+1}(s_i(t)+m)^3 + \eta_i^{t+1}(s_i(t)+1)^3 + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)s_i^3(t) - s_i^3(t) = \xi_i^{t+1}\left(3m\,s_i^2(t) + 3m^2 s_i(t) + m^3\right) + \eta_i^{t+1}\left(3s_i^2(t) + 3s_i(t) + 1\right).$$
It follows from (21) that
$$E(\Delta s_i^3(t+1) \mid G_t) = \frac{3s_i^3(t)}{2t} + \frac{3m\,d_i(t)\,s_i^2(t)}{2t} + \frac{3s_i^2(t)}{2t} + \frac{3m^2 s_i(t)\,d_i(t)}{2t} + \frac{s_i(t)}{2t} + \frac{m^3 d_i(t)}{2t}.$$
Note that $s_i(t)$ and $d_i(t)$ may not be independent, and therefore, it is possible that $E(d_i(t)s_i^2(t)) \ne E(d_i(t))E(s_i^2(t))$.
Using Lemma A3, Lemma A1, Lemma 3, and Equations (19) and (2), taking the expectation of both sides, and making the substitution $f = E(s_i^3(t))$, we get
$$\frac{df}{dt} = \frac{3f}{2t} + \frac{3}{8}m^4(m+1)(m+2)\,t^{1/2}\,i^{-3/2}\log^2 t + O\!\left(t^{-1/2}\log t\right),$$
the solution of which is
$$f(t) = \frac{1}{8}m^4(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2}\log^3 t + O\!\left(t^{3/2}\log^2 t\right).$$
□
To confirm the result, we carried out $T = 200$ independent runs; in each of them, a BA graph was simulated for $N = 20{,}000$ iterations, for the values $m = 3$ and $m = 5$. Then the empirical values of $\mathrm{mean}(s_i^3(t))$ were obtained. The results are presented in Figure 11.
Theorem 6.
The asymmetry coefficient of $s_i(t)$ follows
$$\gamma_1(s_i(t)) \approx \frac{2}{m^{1/2}}$$
for sufficiently large $t$.
Proof.
The asymmetry coefficient $\gamma_1(s_i(t))$ is defined by
$$\gamma_1(s_i(t)) = \frac{\mu_3(s_i(t))}{\mu_2^{3/2}(s_i(t))},$$
where $\mu_3(s_i(t))$ and $\mu_2(s_i(t))$ are the third and second central moments of the $s_i(t)$-distribution, respectively. It follows from Lemma 4, Lemma 3, and Equation (19) that
$$\mu_3(s_i(t)) = E(s_i^3) - 3E(s_i^2)E(s_i) + 2E^3(s_i) = \frac{1}{4}m^4\left(\frac{t}{i}\right)^{3/2}\log^3 t + O\!\left(t^{3/2}\log^2 t\right).$$
From Theorem 5, we have
$$\mu_2^{3/2}(s_i(t)) = \mathrm{Var}^{3/2}(s_i(t)) = \frac{m^{9/2}}{8}\left(\frac{t}{i}\right)^{3/2}\log^3\frac{t}{i} = \frac{m^{9/2}}{8}\left(\frac{t}{i}\right)^{3/2}\log^3 t + O\!\left(t^{3/2}\log^2 t\right).$$
The Theorem follows from (29)–(31). □
Remark 4.
The positivity of $\gamma_1(s_i(t))$ implies the asymmetry of the distribution.

4.3. The Kurtosis of $s_i(t)$

Lemma 5.
The fourth moment of $s_i$ at moment $t$ follows
$$E(s_i^4(t)) \approx \frac{1}{16}m^5(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log^4 t + O\!\left(t^2\log^3 t\right).$$
Proof.
The change in the value of $s_i^4(t)$ from $t$ to $t+1$ occurs as follows:
$$\Delta s_i^4(t+1) := s_i^4(t+1) - s_i^4(t) = \xi_i^{t+1}(s_i(t)+m)^4 + \eta_i^{t+1}(s_i(t)+1)^4 + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)s_i^4(t) - s_i^4(t) = \xi_i^{t+1}\left(4m\,s_i^3(t) + 6m^2 s_i^2(t) + 4m^3 s_i(t) + m^4\right) + \eta_i^{t+1}\left(4s_i^3(t) + 6s_i^2(t) + 4s_i(t) + 1\right).$$
Equation (21) implies that
$$E(\Delta s_i^4(t+1) \mid G_t) = \frac{2s_i^4(t)}{t} + \frac{2m\,d_i(t)\,s_i^3(t)}{t} + \frac{3s_i^3(t)}{t} + \frac{3m^2 d_i(t)\,s_i^2(t)}{t} + \frac{2s_i^2(t)}{t} + \frac{2m^3 d_i(t)\,s_i(t)}{t} + \frac{s_i(t)}{2t} + \frac{m^4 d_i(t)}{2t}.$$
Denote $f(t) = E(s_i^4(t))$. Using Equations (2) and (19) and Lemmas 3, 4, A1, A3, A4, and A6, we get the following differential equation:
$$\frac{df(t)}{dt} = \frac{2f(t)}{t} + \frac{1}{4}m^5(m+1)(m+2)(m+3)\,\frac{t}{i^2}\log^3 t + O\!\left(t\log^2 t\right),$$
the solution of which has the form
$$f(t) = \frac{1}{16}m^5(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log^4 t + O\!\left(t^2\log^3 t\right).$$
□
Theorem 7.
The kurtosis of $s_i(t)$ at iteration $t$ follows
$$\mathrm{Kurt}(s_i(t)) = \frac{\mu_4(s_i(t))}{\mathrm{Var}^2(s_i(t))} \approx \frac{3(m+2)}{m}$$
for sufficiently large $t$.
Proof.
By definition, we have
$$\mu_4(s_i(t)) = E(s_i^4(t)) - 4E(s_i(t))E(s_i^3(t)) + 6E^2(s_i(t))E(s_i^2(t)) - 3E^4(s_i(t)) = \frac{1}{16}m^5(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^2\log^4 t - 4\cdot\frac{m^2}{2}\left(\frac{t}{i}\right)^{1/2}\log t\cdot\frac{1}{8}m^4(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2}\log^3 t + 6\cdot\frac{m^4}{4}\,\frac{t}{i}\log^2 t\cdot\frac{m^3(m+1)}{4}\,\frac{t}{i}\log^2 t - 3\cdot\frac{m^8}{16}\left(\frac{t}{i}\right)^2\log^4 t + O\!\left(t^2\log^3 t\right) = \frac{3}{16}m^5(m+2)\left(\frac{t}{i}\right)^2\log^4 t + O\!\left(t^2\log^3 t\right).$$
It follows from Theorem 5 that
$$\mathrm{Var}^2(s_i(t)) = \frac{m^6}{16}\left(\frac{t}{i}\right)^2\log^4 t + O\!\left(t^2\log^3 t\right).$$
Therefore,
$$\mathrm{Kurt}(s_i(t)) \approx \frac{3(m+2)}{m}$$
for large $t$. □

5. Conclusions

In this article, we studied two non-stationary Markov random processes related to the evolution of BA networks: the first describes the dynamics of the degree of one fixed network node, and the second is related to the dynamics of the total degree of the neighbors of that node. We evaluated the dynamic behavior of some characteristics of the distributions of these two random variables, which are associated with higher-order moments, including their variation, skewness, and kurtosis. The analysis showed that both distributions have the following properties:
  • The normalized second moment, i.e., the ratio $E(\xi^2)/E^2(\xi)$, is close to $\frac{m+1}{m}$ at each moment of time. Moreover, as the number of iterations increases, it converges to $\frac{m+1}{m}$, so the standard deviation remains of the same order of magnitude as the mean.
  • The skewness coefficient is positive for both distributions at any moment of the network evolution, which indicates that both distributions are asymmetric (their right tails are heavier than their left ones).
  • The kurtosis is greater than 3 at all iterations. This means that the right tails of both distributions are thicker than the tail of the normal distribution.
  • It is also interesting to note that if the number of added edges $m$ increases, then the normalized second moment approaches 1, the coefficient of asymmetry tends to 0, and the kurtosis converges to 3. This means that the characteristics of the random variables approach those of the normal distribution.
It should also be noted that, although the characteristics of both distributions are close to each other, the mathematical expectation of the total degree of the neighbors of a node grows faster than the expected degree of the node itself by a factor of order $\log t$.

Author Contributions

Conceptualization, S.S. and S.M.; methodology, S.S.; software, S.M. and D.K.; validation, S.S., N.A., and S.M.; formal analysis, S.S., S.M., and D.K.; writing—original draft preparation, S.S. and S.M.; writing—review and editing, S.M.; visualization, D.K.; supervision, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Russian Science Foundation, Project 19-18-00199.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma A1.
$$E(d_i(t)s_i(t)) \approx \frac{m^2}{2}\left((m+1)\log t - 2\right)\frac{t}{i}.$$
Proof.
Let us consider the stochastic difference equation
$$\Delta(d_i s_i)(t+1) := d_i(t+1)s_i(t+1) - d_i(t)s_i(t) = \xi_i^{t+1}(d_i(t)+1)(s_i(t)+m) + \eta_i^{t+1} d_i(t)(s_i(t)+1) + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)d_i(t)s_i(t) - d_i(t)s_i(t) = \xi_i^{t+1}\left(s_i(t) + m\,d_i(t) + m\right) + \eta_i^{t+1} d_i(t).$$
It follows from (21) that
$$E(\Delta(d_i s_i)(t+1) \mid G_t) = \frac{d_i(t)\,s_i(t)}{t} + \frac{m\,d_i^2(t)}{2t} + \frac{m\,d_i(t)}{2t}.$$
We pass to the unconditional expectation of both sides at the moment $t$, make the substitution $f(t) = E(d_i(t)s_i(t))$, and, using the previously obtained relations (Equations (2) and (6))
$$E(d_i^2(t)) = m(m+1)\frac{t}{i} - m\left(\frac{t}{i}\right)^{1/2}, \qquad E(d_i(t)) = m\left(\frac{t}{i}\right)^{1/2},$$
we obtain the following approximate differential equation:
$$\frac{df}{dt} = \frac{f}{t} + \frac{m^2(m+1)}{2i}.$$
Its solution is
$$E(d_i(t)s_i(t)) = f(t) = \frac{m^2(m+1)}{2i}\,t\log t + Ct,$$
where $C$ is a constant, which we now find.
Let us consider $E(d_i(t)s_i(t))$ at moment $i$. Since $E(d_i(i))$ is equal to the constant $m$, using (17) we get
$$E(d_i(i)s_i(i)) = m\,E(s_i(i)) = \frac{m^2}{2}\left((m+1)\log i - 2\right),$$
and consequently,
$$C = -\frac{m^2}{i}.$$
Finally, we get Lemma A1. □
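The same symbolic check used for Lemma 1 applies here (our sketch, assuming sympy):

```python
import sympy as sp

t, i, m = sp.symbols("t i m", positive=True)
f = m ** 2 / 2 * ((m + 1) * sp.log(t) - 2) * t / i  # claimed E(d_i(t) s_i(t))

residual = sp.diff(f, t) - (f / t + m ** 2 * (m + 1) / (2 * i))
print(sp.simplify(residual))  # -> 0
# initial condition at t = i matches m * E(s_i(i)):
print(sp.simplify(f.subs(t, i) - m ** 2 / 2 * ((m + 1) * sp.log(i) - 2)))  # -> 0
```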
Lemma A2.
$$E(d_i^2(t)s_i(t)) \approx \frac{1}{2}m^2(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2}\log t - \left(m^2(m+1)\log i + m^3\right)\left(\frac{t}{i}\right)^{3/2}.$$
Proof.
The stochastic equation is
$$\Delta(d_i^2 s_i)(t+1) := d_i^2(t+1)s_i(t+1) - d_i^2(t)s_i(t) = \xi_i^{t+1}(d_i(t)+1)^2(s_i(t)+m) + \eta_i^{t+1} d_i^2(t)(s_i(t)+1) + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)d_i^2(t)s_i(t) - d_i^2(t)s_i(t) = \xi_i^{t+1}\left(m\,d_i^2(t) + 2d_i(t)s_i(t) + 2m\,d_i(t) + s_i(t) + m\right) + \eta_i^{t+1} d_i^2(t).$$
It follows from (21) that
$$E(\Delta(d_i^2 s_i)(t+1) \mid G_t) = \frac{3d_i^2(t)\,s_i(t)}{2t} + \frac{m\,d_i^3(t)}{2t} + \frac{m\,d_i^2(t)}{t} + \frac{d_i(t)\,s_i(t)}{2t} + \frac{m\,d_i(t)}{2t}.$$
Let us pass to the unconditional expectation of both sides at the moment $t$, make the substitution $f(t) = E(d_i^2(t)s_i(t))$, and, using the relations (2), (6), and (12) obtained earlier and Lemma A1, we get the following approximate differential equation:
$$\frac{df}{dt} = \frac{3f}{2t} + \frac{1}{2}m^2(m+1)(m+2)\,t^{1/2}\,i^{-3/2} + \frac{m^2(m+1)}{4i}\log t - \frac{m^2(m+2)}{2i}.$$
Its solution has the form
$$f(t) \approx \frac{1}{2}m^2(m+1)(m+2)\,\frac{t^{3/2}}{i^{3/2}}\log t + Ct^{3/2},$$
where $C$ is a constant of integration, which we find from the initial condition. To find $E(d_i^2(t)s_i(t))$ at moment $t = i$, we note that $E(d_i^2(i))$ is equal to the constant $m^2$, while $E(s_i(i)) = \frac{m}{2}\left((m+1)\log i - 2\right)$. Then we get
$$E(d_i^2(i)s_i(i)) = m^2 E(s_i(i)) = \frac{m^3}{2}\left((m+1)\log i - 2\right),$$
and consequently,
$$C = -\frac{m^2(m+1)\log i + m^3}{i^{3/2}},$$
and we obtain Equation (A3). □
Lemma A3.
$$E(d_i(t)s_i^2(t)) \approx \frac{1}{4}m^3(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2}\log^2 t + O\!\left(t^{3/2}\log t\right).$$
Proof.
The dynamics of $d_i s_i^2$ can be described by
$$\Delta(d_i s_i^2)(t+1) := d_i(t+1)s_i^2(t+1) - d_i(t)s_i^2(t) = \xi_i^{t+1}(d_i(t)+1)(s_i(t)+m)^2 + \eta_i^{t+1} d_i(t)(s_i(t)+1)^2 + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)d_i(t)s_i^2(t) - d_i(t)s_i^2(t) = \xi_i^{t+1}\left(2m\,d_i(t)s_i(t) + m^2 d_i(t) + s_i^2(t) + 2m\,s_i(t) + m^2\right) + \eta_i^{t+1}\left(2d_i(t)s_i(t) + d_i(t)\right).$$
It follows from (21) that
$$E(\Delta(d_i s_i^2)(t+1) \mid G_t) = \frac{3d_i(t)\,s_i^2(t)}{2t} + \frac{m\,d_i^2(t)\,s_i(t)}{t} + \frac{m^2 d_i^2(t)}{2t} + \left(m + \frac{1}{2}\right)\frac{d_i(t)\,s_i(t)}{t} + \frac{m^2 d_i(t)}{2t}.$$
Using Equations (2) and (12), Lemmas A1 and A2, making the substitution $f(t) = E(d_i(t)s_i^2(t))$, and taking the unconditional expectation of both sides at the moment $t$, we obtain the following approximate differential equation:
$$\frac{df}{dt} = \frac{3f}{2t} + \frac{m^3(m+1)(m+2)\,t^{1/2}\log t}{2i^{3/2}} - \frac{m^3\left((m+1)\log i + m\right)t^{1/2}}{i^{3/2}} + \frac{m^2(m+1/2)(m+1)\log t}{2i},$$
the solution of which is
$$f(t) = \frac{1}{4}m^3(m+1)(m+2)\left(\frac{t}{i}\right)^{3/2}\log^2 t - m^3\left((m+1)\log i + m\right)\left(\frac{t}{i}\right)^{3/2}\log t + Ct^{3/2} + o\!\left(t^{3/2}\right),$$
where $C$ can be found from the initial condition. □
Lemma A4.
$$E(d_i^3(t)s_i(t)) \approx \frac{1}{2}m^2(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log t + O\!\left(t^2\right).$$
Proof.
The evolution of $d_i^3(t)s_i(t)$ can be described by the stochastic difference equation
$$\Delta(d_i^3 s_i)(t+1) := d_i^3(t+1)s_i(t+1) - d_i^3(t)s_i(t) = \xi_i^{t+1}(d_i(t)+1)^3(s_i(t)+m) + \eta_i^{t+1} d_i^3(t)(s_i(t)+1) + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)d_i^3(t)s_i(t) - d_i^3(t)s_i(t) = \xi_i^{t+1}\left(m\,d_i^3(t) + 3d_i^2(t)s_i(t) + 3m\,d_i^2(t) + 3d_i(t)s_i(t) + 3m\,d_i(t) + s_i(t) + m\right) + \eta_i^{t+1} d_i^3(t).$$
Then it follows from (21) that
$$E(\Delta(d_i^3 s_i)(t+1) \mid G_t) = \frac{2d_i^3(t)\,s_i(t)}{t} + \frac{m\,d_i^4(t)}{2t} + \frac{3d_i^2(t)\,s_i(t)}{2t} + \frac{3m\,d_i^3(t)}{2t} + \frac{3m\,d_i^2(t)}{2t} + \frac{d_i(t)\,s_i(t)}{2t} + \frac{m\,d_i(t)}{2t}.$$
Denote $f(t) = E(d_i^3(t)s_i(t))$. Then it follows from Equations (2), (6), (12), and (13) and Lemmas A1 and A2 that
$$\frac{df(t)}{dt} = \frac{2f(t)}{t} + \frac{m^2(m+1)(m+2)(m+3)\,t}{2i^2} + c_0\,t^{1/2}\log t + c_1\,t^{1/2} + c_2\log t + c_3,$$
for some constants $c_0, c_1, c_2, c_3$. The solution of this differential equation is
$$f(t) = \frac{1}{2}m^2(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log t + O\!\left(t^2\right).$$
□
Lemma A5.
$$E(d_i^2(t)s_i^2(t)) = \frac{1}{4}m^3(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log^2 t + O\!\left(t^2\log t\right).$$
Proof.
The value of $d_i^2(t)s_i^2(t)$ evolves according to the following equation:
$$\Delta(d_i^2 s_i^2)(t+1) := d_i^2(t+1)s_i^2(t+1) - d_i^2(t)s_i^2(t) = \xi_i^{t+1}(d_i(t)+1)^2(s_i(t)+m)^2 + \eta_i^{t+1} d_i^2(t)(s_i(t)+1)^2 + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)d_i^2(t)s_i^2(t) - d_i^2(t)s_i^2(t) = \xi_i^{t+1}\left(2m\,d_i^2(t)s_i(t) + m^2 d_i^2(t) + 2d_i(t)s_i^2(t) + 4m\,d_i(t)s_i(t) + 2m^2 d_i(t) + s_i^2(t) + 2m\,s_i(t) + m^2\right) + \eta_i^{t+1}\left(2d_i^2(t)s_i(t) + d_i^2(t)\right).$$
Then it follows from (21) that
$$E(\Delta(d_i^2 s_i^2)(t+1) \mid G_t) = \frac{2d_i^2(t)\,s_i^2(t)}{t} + \frac{m\,d_i^3(t)\,s_i(t)}{t} + \left(2m + \frac{1}{2}\right)\frac{d_i^2(t)\,s_i(t)}{t} + \frac{m^2 d_i^3(t)}{2t} + \frac{d_i(t)\,s_i^2(t)}{2t} + \frac{m\,d_i(t)\,s_i(t)}{t} + \frac{m^2 d_i^2(t)}{t} + \frac{m^2 d_i(t)}{2t}.$$
Denote $f(t) = E(d_i^2(t)s_i^2(t))$. Then it follows from Equations (2), (6), and (12) and Lemmas A1–A4 that
$$\frac{df(t)}{dt} = \frac{2f(t)}{t} + \frac{1}{2}m^3(m+1)(m+2)(m+3)\,\frac{t\log t}{i^2} + c_0\,t^{1/2}\log^2 t + c_1\,t^{1/2}\log t + c_2\,t^{1/2} + c_3\log t + c_4,$$
for some constants $c_0, c_1, c_2, c_3, c_4$. The solution of Equation (A6) has the form
$$f(t) = \frac{1}{4}m^3(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log^2 t + O\!\left(t^2\log t\right).$$
□
Lemma A6.
$$E(d_i(t)s_i^3(t)) \approx \frac{1}{8}m^4(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log^3 t + O\!\left(t^2\log^2 t\right).$$
Proof.
The evolution of $d_i(t)s_i^3(t)$ from $t$ to $t+1$ follows the equation
$$\Delta(d_i s_i^3)(t+1) := d_i(t+1)s_i^3(t+1) - d_i(t)s_i^3(t) = \xi_i^{t+1}(d_i(t)+1)(s_i(t)+m)^3 + \eta_i^{t+1} d_i(t)(s_i(t)+1)^3 + \left(1 - \xi_i^{t+1} - \eta_i^{t+1}\right)d_i(t)s_i^3(t) - d_i(t)s_i^3(t) = \xi_i^{t+1}\left(3m\,d_i(t)s_i^2(t) + 3m^2 d_i(t)s_i(t) + m^3 d_i(t) + s_i^3(t) + 3m\,s_i^2(t) + 3m^2 s_i(t) + m^3\right) + \eta_i^{t+1}\left(3d_i(t)s_i^2(t) + 3d_i(t)s_i(t) + d_i(t)\right).$$
Then it follows from (21) that
$$E(\Delta(d_i s_i^3)(t+1) \mid G_t) = \frac{2d_i(t)\,s_i^3(t)}{t} + \frac{3m\,d_i^2(t)\,s_i^2(t)}{2t} + \frac{3(m+1)\,d_i(t)\,s_i^2(t)}{2t} + \frac{3m^2 d_i^2(t)\,s_i(t)}{2t} + \frac{(3m^2+1)\,d_i(t)\,s_i(t)}{2t} + \frac{m^3 d_i^2(t)}{2t} + \frac{m^3 d_i(t)}{2t}.$$
Denote $f(t) = E(d_i(t)s_i^3(t))$. Using Equations (2) and (6) and Lemmas A1–A3 and A5, we get the following differential equation:
$$\frac{df(t)}{dt} = \frac{2f(t)}{t} + \frac{3}{8i^2}m^4(m+1)(m+2)(m+3)\,t\log^2 t + c_1\,t\log t + \cdots + c_7\,t^{-1/2},$$
for some constants $c_1, \ldots, c_7$. Its solution can be presented as follows:
$$f(t) = \frac{1}{8}m^4(m+1)(m+2)(m+3)\left(\frac{t}{i}\right)^{2}\log^3 t + O\!\left(t^2\log^2 t\right).$$
□

References

  1. Lieberman, M.B.; Montgomery, D.B. First-mover advantages. Strateg. Manag. J. 1988, 9, 41–58.
  2. Faloutsos, M.; Faloutsos, P.; Faloutsos, C. On Power-Law Relationships of the Internet Topology. SIGCOMM Comput. Commun. Rev. 1999, 29, 251–262.
  3. Kleinberg, J.M.; Kumar, R.; Raghavan, P.; Rajagopalan, S.; Tomkins, A.S. The Web as a Graph: Measurements, Models, and Methods. In International Computing and Combinatorics Conference; Asano, T., Imai, H., Lee, D.T., Nakano, S.I., Tokuyama, T., Eds.; Springer: Berlin/Heidelberg, Germany, 1999; pp. 1–17.
  4. Clauset, A.; Shalizi, C.R.; Newman, M.E.J. Power-Law Distributions in Empirical Data. SIAM Rev. 2009, 51, 661–703.
  5. De Solla Price, D. Networks of scientific papers. Science 1965, 149, 510–515.
  6. Klaus, A.; Yu, S.; Plenz, D. Statistical Analyses Support Power Law Distributions Found in Neuronal Avalanches. PLoS ONE 2011, 6, e19779.
  7. Newman, M.E.J. The Structure and Function of Complex Networks. SIAM Rev. 2003, 45, 167–256.
  8. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97.
  9. Barabási, A.L.; Albert, R. Emergence of Scaling in Random Networks. Science 1999, 286, 509–512.
  10. Dorogovtsev, S.N.; Mendes, J.F.F.; Samukhin, A.N. Structure of growing networks with preferential linking. Phys. Rev. Lett. 2000, 85, 4633–4636.
  11. Krapivsky, P.L.; Redner, S. Organization of growing random networks. Phys. Rev. E 2001, 63, 066123.
  12. Krapivsky, P.L.; Redner, S.; Leyvraz, F. Connectivity of growing random networks. Phys. Rev. Lett. 2000, 85, 4629–4632.
  13. Sidorov, S.; Mironov, S. Growth network models with random number of attached links. Phys. A Stat. Mech. Its Appl. 2021, 576, 126041.
  14. Tsiotas, D. Detecting differences in the topology of scale-free networks grown under time-dynamic topological fitness. Sci. Rep. 2020, 10, 10630.
  15. Pal, S.; Makowski, A.M. Asymptotic Degree Distributions in Large (Homogeneous) Random Networks: A Little Theory and a Counterexample. IEEE Trans. Netw. Sci. Eng. 2020, 7, 1531–1544.
  16. Rak, R.; Rak, E. The fractional preferential attachment scale-free network model. Entropy 2020, 22, 509.
  17. Cinardi, N.; Rapisarda, A.; Tsallis, C. A generalised model for asymptotically-scale-free geographical networks. J. Stat. Mech. Theory Exp. 2020, 2020, 043404.
  18. Shang, K.K.; Yang, B.; Moore, J.M.; Ji, Q.; Small, M. Growing networks with communities: A distributive link model. Chaos 2020, 30, 041101.
  19. Bertotti, M.L.; Modanese, G. The configuration model for Barabasi-Albert networks. Appl. Netw. Sci. 2019, 4, 1–13.
  20. Pachon, A.; Sacerdote, L.; Yang, S. Scale-free behavior of networks with the copresence of preferential and uniform attachment rules. Phys. D Nonlinear Phenom. 2018, 371, 1–12.
  21. Van der Hofstad, R. Random Graphs and Complex Networks; Cambridge University Press: Cambridge, UK, 2016; Volume 1.
  22. Barabási, A.; Albert, R.; Jeong, H. Mean-field theory for scale-free random networks. Phys. A Stat. Mech. Its Appl. 1999, 272, 173–187.
  23. Krapivsky, P.L.; Redner, S. Finiteness and fluctuations in growing networks. J. Phys. A Math. Gen. 2002, 35, 9517–9534.
  24. Kadanoff, L. More is the Same; Phase Transitions and Mean Field Theories. J. Stat. Phys. 2009, 137, 777–797.
  25. Parr, T.; Sajid, N.; Friston, K.J. Modules or Mean-Fields? Entropy 2020, 22, 552.
  26. Pachon, A.; Polito, F.; Sacerdote, L. On the continuous-time limit of the Barabási-Albert random graph. Appl. Math. Comput. 2020, 378, 125177.
  27. Sidorov, S.; Mironov, S.; Malinskii, I.; Kadomtsev, D. Local Degree Asymmetry for Preferential Attachment Model. Stud. Comput. Intell. 2021, 944, 450–461.
  28. Sidorov, S.P.; Mironov, S.V.; Grigoriev, A.A. Friendship paradox in growth networks: Analytical and empirical analysis. Appl. Netw. Sci. 2021, 6, 35.
Figure 1. The histograms of the empirical values of $d_i(t)$ obtained by simulating 200 different networks with $m = 3$ at iterations 5000 (blue line) and 20,000 (red line): (a) for node $i = 10$; (b) for node $i = 50$.
Figure 2. The histograms of the empirical values of $d_i(t)$ obtained by simulating 200 different networks with $m = 5$ at iterations 5000 (blue line) and 20,000 (red line): (a) for node $i = 10$; (b) for node $i = 50$.
Figure 3. The histograms of the empirical values of $s_i(t)$ obtained by simulating 200 different networks with $m = 3$ at iterations 5000 (blue line) and 20,000 (red line): (a) for node $i = 10$; (b) for node $i = 50$.
Figure 4. The histograms of the empirical values of $s_i(t)$ obtained by simulating 200 different networks with $m = 5$ at iterations 5000 (blue line) and 20,000 (red line): (a) for node $i = 10$; (b) for node $i = 50$.
Figure 5. Dynamics of the empirical values of $\mathrm{Mean}(d_i^2(t))$ in networks based on the BA model for selected nodes $i = 10, 50$ as $t$ iterates up to 20,000. The network in (a) is modeled with $m = 3$, $i = 10$; (b) with $m = 5$, $i = 10$; (c) with $m = 3$, $i = 50$; (d) with $m = 5$, $i = 50$.
Figure 6. Dynamics of the empirical values of $\mathrm{Mean}(d_i^3(t))$ in networks based on the BA model for selected nodes $i = 10, 50$ as $t$ iterates up to 20,000. The network in (a) is modeled with $m = 3$, $i = 10$; (b) with $m = 5$, $i = 10$; (c) with $m = 3$, $i = 50$; (d) with $m = 5$, $i = 50$.
Figure 7. Evolution of the asymmetry coefficient in BA networks for selected nodes $i = 10$ and $i = 50$ as $t$ iterates up to 20,000.
Figure 8. Evolution of the kurtosis in BA networks for selected nodes $i = 10$ and $i = 50$ as $t$ iterates up to 20,000.
Figure 9. Dynamics of the empirical values of $\mathrm{Mean}(s_i(t))$ in networks based on the BA model for selected nodes $i = 10, 50$ as $t$ iterates up to 20,000. The network in (a) is modeled with $m = 3$, $i = 10$; (b) with $m = 5$, $i = 10$; (c) with $m = 3$, $i = 50$; (d) with $m = 5$, $i = 50$.
Figure 10. Dynamics of the empirical values of $\mathrm{Mean}(s_i^2(t))$ in networks based on the BA model for selected nodes $i = 10, 50$ as $t$ iterates up to 20,000. The network in (a) is modeled with $m = 3$, $i = 10$; (b) with $m = 5$, $i = 10$; (c) with $m = 3$, $i = 50$; (d) with $m = 5$, $i = 50$.
Figure 11. Dynamics of the empirical values of $\mathrm{Mean}(s_i^3(t))$ in networks based on the BA model for selected nodes $i = 10, 50$ as $t$ iterates up to 20,000. The network in (a) is modeled with $m = 3$, $i = 10$; (b) with $m = 5$, $i = 10$; (c) with $m = 3$, $i = 50$; (d) with $m = 5$, $i = 50$.
Table 1. Empirical characteristics of the $d_i$-distribution obtained by simulating 200 different Barabási–Albert networks with $m = 3$, for nodes $i = 10$ and $i = 50$ at iterations 5000 and 20,000.

|          | $d_{10}(5000)$ | $d_{10}(20{,}000)$ | $d_{50}(5000)$ | $d_{50}(20{,}000)$ |
|----------|----------------|--------------------|----------------|--------------------|
| mean     | 70.85          | 141.09             | 31.92          | 62.65              |
| st.dev.  | 38.88          | 79.06              | 17.68          | 35.89              |
| skewness | 0.97           | 1.01               | 0.92           | 0.91               |
| kurtosis | 3.76           | 4.01               | 3.58           | 3.58               |
Table 2. Empirical characteristics of the $d_i$-distribution obtained by simulating 200 different Barabási–Albert networks with $m = 5$, for nodes $i = 10$ and $i = 50$ at iterations 5000 and 20,000.

|          | $d_{10}(5000)$ | $d_{10}(20{,}000)$ | $d_{50}(5000)$ | $d_{50}(20{,}000)$ |
|----------|----------------|--------------------|----------------|--------------------|
| mean     | 127.41         | 253.52             | 52.54          | 105.37             |
| st.dev.  | 46.68          | 93.51              | 21.60          | 44.66              |
| skewness | 0.52           | 0.52               | 0.58           | 0.78               |
| kurtosis | 3.24           | 3.11               | 3.19           | 3.80               |
Table 3. Empirical characteristics of the $s_i$-distribution obtained by simulating 200 different Barabási–Albert networks with $m = 3$, for nodes $i = 10$ and $i = 50$ at iterations 5000 and 20,000.

|          | $s_{10}(5000)$ | $s_{10}(20{,}000)$ | $s_{50}(5000)$ | $s_{50}(20{,}000)$ |
|----------|----------------|--------------------|----------------|--------------------|
| mean     | 1120.17        | 2534.05            | 513.34         | 1155.37            |
| st.dev.  | 398.22         | 957.09             | 165.09         | 387.56             |
| skewness | 0.74           | 0.80               | 0.61           | 0.68               |
| kurtosis | 3.25           | 3.44               | 3.69           | 3.74               |
Table 4. Empirical characteristics of the $s_i$-distribution obtained by simulating 200 different Barabási–Albert networks with $m = 5$, for nodes $i = 10$ and $i = 50$ at iterations 5000 and 20,000.

|          | $s_{10}(5000)$ | $s_{10}(20{,}000)$ | $s_{50}(5000)$ | $s_{50}(20{,}000)$ |
|----------|----------------|--------------------|----------------|--------------------|
| mean     | 3037.39        | 6947.17            | 1277.91        | 2915.58            |
| st.dev.  | 773.23         | 1867.67            | 318.93         | 774.61             |
| skewness | 0.28           | 0.32               | 0.41           | 0.51               |
| kurtosis | 3.14           | 3.09               | 3.32           | 3.23               |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Sidorov, S.; Mironov, S.; Agafonova, N.; Kadomtsev, D. Temporal Behavior of Local Characteristics in Complex Networks with Preferential Attachment-Based Growth. Symmetry 2021, 13, 1567. https://doi.org/10.3390/sym13091567

AMA Style

Sidorov S, Mironov S, Agafonova N, Kadomtsev D. Temporal Behavior of Local Characteristics in Complex Networks with Preferential Attachment-Based Growth. Symmetry. 2021; 13(9):1567. https://doi.org/10.3390/sym13091567

Chicago/Turabian Style

Sidorov, Sergei, Sergei Mironov, Nina Agafonova, and Dmitry Kadomtsev. 2021. "Temporal Behavior of Local Characteristics in Complex Networks with Preferential Attachment-Based Growth" Symmetry 13, no. 9: 1567. https://doi.org/10.3390/sym13091567

APA Style

Sidorov, S., Mironov, S., Agafonova, N., & Kadomtsev, D. (2021). Temporal Behavior of Local Characteristics in Complex Networks with Preferential Attachment-Based Growth. Symmetry, 13(9), 1567. https://doi.org/10.3390/sym13091567

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop