Article

The Structure and First-Passage Properties of Generalized Weighted Koch Networks

1 School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
2 Key Laboratory of High Confidence Software Technologies, Peking University, Beijing 100871, China
3 China Northwest Center of Financial Research, Lanzhou University of Finance and Economics, Lanzhou 730020, China
4 School of Information Engineering, Lanzhou University of Finance and Economics, Lanzhou 730020, China
5 Key Laboratory of E-Business Technology and Application, Lanzhou 730020, China
6 College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2022, 24(3), 409; https://doi.org/10.3390/e24030409
Submission received: 21 February 2022 / Revised: 12 March 2022 / Accepted: 13 March 2022 / Published: 15 March 2022

Abstract

Characterizing the topology and random-walk properties of a random network is difficult because the connections in the network are uncertain. We propose a class of generalized weighted Koch networks in which the triangles of the traditional Koch network are replaced by a graph $R_s$ according to a probability $0 \le p \le 1$, and weights are assigned to the edges. We then determine the range of several indicators that characterize the topological properties of generalized weighted Koch networks by examining the two models obtained under the extreme conditions $p = 0$ and $p = 1$, including the average degree, degree distribution, clustering coefficient, diameter, and average weighted shortest path. In addition, we give a lower bound on the average trapping time (ATT) for the trapping problem in generalized weighted Koch networks and reveal the linear, super-linear, and sub-linear relationships between the ATT and the number of nodes in the network.

1. Introduction

Complex networks are acknowledged as an invaluable framework for describing nature and society [1,2], and many endeavors have been devoted to exploring the structure and properties of complex networks in order to characterize and simulate real-world systems. Among these properties, the scale-free nature, diameter, and clustering coefficient have attracted considerable attention [3,4,5]. In addition, edge weights are of practical importance in systems such as air transportation [6] and biological neural networks [7]. It is therefore necessary to explore the influence of weight on the topological properties and dynamic processes of a network, so we also determine bounds on the average weighted shortest path of the networks designed in this paper.
To better understand the random networks found in complex systems, extensive technical methods have been developed for establishing theoretical models. For example, the well-known ER model was proposed by Erdős and Rényi [8]; it exhibits a low clustering coefficient and low variation in the node degrees, and its degree distribution was shown to be Poisson. Watts and Strogatz put forward the small-world WS model [9], which reflects the statistical properties of networks that are neither completely regular nor entirely random and explains the small-world phenomena observed in various real-world networks through their diameter and clustering coefficient. The BA model was built by Barabási et al. [10] using two rules, growth and preferential attachment; its degree distribution obeys a power-law distribution. In this paper, we introduce a class of generalized weighted Koch networks governed by a probability p. The ranges of their topological parameters are obtained by characterizing the deterministic network models corresponding to the two extreme states.
Random walks serve as a fundamental tool for describing dynamic processes on networks, such as page search on the World Wide Web [11], signal propagation [12], and energy transport [13]. The trapping problem is a kind of random walk that takes place on a network in the presence of a fixed trap, which absorbs all particles that visit it [14,15]. A basic quantity relevant to the trapping problem is the mean first-passage time (MFPT). The MFPT from a node i to the trap is the expected time taken by a walker starting from i to reach the trap for the first time. The average trapping time (ATT) is the average of the MFPTs over all starting nodes other than the trap. The ATT for a given trap is used as an indicator of trapping efficiency and plays a central role in many disciplines, including computer science, biology, and engineering [16,17,18].
The organization of this paper is as follows. In Section 2, we introduce the method for constructing generalized weighted Koch networks according to the probability p. In Section 3, we characterize several parameters that reveal the topological properties and dynamic processes of our networks, including the average degree, degree distribution, clustering coefficient, diameter, average weighted shortest path, and average trapping time. In the last section, we draw conclusions with a concise summary.

2. The Generalized Weighted Koch Network

The generalized weighted Koch network $G_{s,r}(t)$ presented in this paper is controlled by three parameters $s \ge 3$, $r > 0$, and $t \ge 0$, where r is the weight factor, t is the time step, and s and t are integers. We mainly explore the influence of the structural parameter s and the weight r on the topological properties and dynamic characteristics of the network.
Let $C_s$ and $K_s$ denote a cycle on s nodes and a complete graph on s nodes, respectively. We introduce a probability $0 \le p \le 1$; then $G_{s,r}(t)$ is created in the following way. For $t = 0$, the network starts from a graph $R_s$ on s nodes whose edges all carry unit weight; this is $G_{s,r}(0)$. When $p = 0$, $R_s$ is a cycle $C_s$ on s nodes, and when $p = 1$, $R_s$ is a complete graph $K_s$ on s nodes. For $t \ge 1$, $G_{s,r}(t)$ is obtained from $G_{s,r}(t-1)$ by attaching a new node group $R_s$ to every node of every existing cluster $R_s$, where each new node group $R_s$ is a complete graph $K_s$ with probability p or a cycle $C_s$ with the complementary probability $1-p$; this rule is shown in Figure 1. We repeat this growth process until the network reaches the desired size.
Additionally, for the generalized weighted Koch network we proposed, if r = 1 and s = 3 are satisfied, we can obtain the classic Koch network mentioned in the literature [19]; when r = 1 , we can obtain the expanded Koch networks referred to in Ref. [20]. For s = 3 , the network G s , r ( t ) is simplified to the weighted Koch network in Ref. [21]. The above network models are all special cases of the generalized weighted Koch networks constructed in this paper.
That is to say, $G_{s,r}(t)$ can be obtained from $G_{s,r}(t-1)$ recursively: each node of every existing cluster $R_s$ in $G_{s,r}(t-1)$ is joined with $s-1$ new nodes to form a new cluster of s nodes chosen according to the probability $0 \le p \le 1$. We denote the two networks corresponding to the extreme conditions $p = 0$ and $p = 1$ by $G_{s,r}^A(t)$ and $G_{s,r}^B(t)$, respectively. Figure 2 and Figure 3 show the growth of the two deterministic networks at $t = 0, 1, 2$. Some topological properties and the ATT of $G_{s,r}^A(t)$ have been described in the literature [22,23], so we mainly focus on the topological properties and the ATT of $G_{s,r}^B(t)$, and then estimate the range of the parameters of the random network $G_{s,r}(t)$ by studying the characteristics of the two deterministic networks $G_{s,r}^A(t)$ and $G_{s,r}^B(t)$.
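To make the growth rule concrete, here is a minimal Python sketch of the construction (it assumes networkx is available; the function name is ours, and the edge-weight rule — each spawned cluster inheriting r times the edge weight of its parent cluster — is our reading of the self-similar scaling used later in Section 3.4, not an explicit rule stated in this section).

```python
import random
import networkx as nx

def generalized_koch(s, r, p, t, seed=None):
    """Grow G_{s,r}(t): every node of every existing cluster spawns a new
    s-node cluster, K_s with probability p or C_s with probability 1 - p."""
    rng = random.Random(seed)
    G = nx.Graph()

    def make_cluster(members, w):
        if rng.random() < p:                              # complete graph K_s
            for i, u in enumerate(members):
                for v in members[i + 1:]:
                    G.add_edge(u, v, weight=w)
        else:                                             # cycle C_s
            for i, u in enumerate(members):
                G.add_edge(u, members[(i + 1) % s], weight=w)
        return members, w

    # t = 0: the initial cluster R_s on nodes 0, ..., s-1 with unit edge weight
    clusters = [make_cluster(list(range(s)), 1.0)]
    for _ in range(t):
        new_clusters = []
        for members, w in clusters:
            for anchor in members:                        # each node spawns a cluster
                fresh = [G.number_of_nodes() + k for k in range(s - 1)]
                new_clusters.append(make_cluster([anchor] + fresh, w * r))
        clusters.extend(new_clusters)
    return G
```

With this reading, $p = 0$ reproduces $G_{s,r}^A(t)$ and $p = 1$ reproduces $G_{s,r}^B(t)$, and the node count matches Equation (2) below.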
Let $N_t$ and $E_t$ be the numbers of nodes and edges of the network $G_{s,r}(t)$, and let $\Delta N(t)$ and $\Delta E(t)$ be the numbers of new nodes and new edges created at time step t, that is, $N_t = N_{t-1} + \Delta N(t)$ and $E_t = E_{t-1} + \Delta E(t)$. Each $C_s$ or $K_s$ in Figure 1 is regarded as a cluster $R_s$, and the total number of clusters in $G_{s,r}(t)$ at time step t is denoted by $R(t)$. Combining $R(t) = (s+1)R(t-1)$ with the initial value $R(0) = 1$, we have $R(t) = (s+1)^t$. Based on the construction method, we obtain
$$\Delta N(t) = s(s-1)\cdot R(t-1) = s(s-1)(s+1)^{t-1}.$$
The numbers of nodes and edges of the networks $G_{s,r}^A(t)$ and $G_{s,r}^B(t)$ are denoted by $N_t^Z$ and $E_t^Z$, $Z = A, B$. The two networks differ only in the type of cluster $R_s$ selected with probability p: every cluster has s nodes, but the number of its edges differs. Hence $N_t = N_t^A = N_t^B$, and this number can be calculated as
$$N_t = N_0 + \sum_{i=1}^{t}\Delta N(i) = (s-1)(s+1)^t + 1.$$
For the number of new edges created at time step t in the network $G_{s,r}^A(t)$, we obtain $\Delta E^A(t) = s^2\cdot R(t-1) = s^2(s+1)^{t-1}$. According to the iterative construction of $G_{s,r}^A(t)$, the total number of edges in $G_{s,r}^A(t)$ is
$$E_t^A = E^A(0) + \sum_{i=1}^{t}\Delta E^A(i) = s(s+1)^t.$$
Similarly, the number of new edges created at time step t in $G_{s,r}^B(t)$ equals $\Delta E^B(t) = \left[s^2(s-1)/2\right]\cdot R(t-1) = \left[s^2(s-1)/2\right](s+1)^{t-1}$; thus, the total number of edges in $G_{s,r}^B(t)$ is
$$E_t^B = E^B(0) + \sum_{i=1}^{t}\Delta E^B(i) = \frac{s(s-1)}{2}(s+1)^t.$$
Therefore, the number of edges of the generalized weighted Koch network $G_{s,r}(t)$ satisfies $s(s+1)^t \le E_t \le \frac{s(s-1)}{2}(s+1)^t$.
Referring to Ref. [22], the average degree of $G_{s,r}^A(t)$ is $\langle k\rangle_A \approx 2s/(s-1)$ for $t \to \infty$. On the other hand, the average degree of $G_{s,r}^B(t)$ is
$$\langle k\rangle_B = \frac{2E_t^B}{N_t^B} = \frac{s(s-1)(s+1)^t}{(s-1)(s+1)^t+1},$$
which is approximately s for large t. Both $G_{s,r}^A(t)$ and $G_{s,r}^B(t)$ are sparse networks according to the criterion proposed in the literature [24], because the condition $E_t \ll N_t(N_t-1)/2$ clearly holds. Therefore, the average degree of the generalized weighted Koch network $G_{s,r}(t)$ satisfies $2s/(s-1) \le \langle k\rangle \le s$.
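As a quick numerical cross-check of the counts above, the snippet below builds small instances for a few values of p and compares the node count with Equation (2) and the edge count with the bounds just derived. It relies on the `generalized_koch` sketch from Section 2, so the same assumptions apply, and the parameter values are arbitrary.

```python
s, t = 4, 3
for p in (0.0, 0.5, 1.0):
    G = generalized_koch(s=s, r=1.0, p=p, t=t, seed=1)
    N, E = G.number_of_nodes(), G.number_of_edges()
    assert N == (s - 1) * (s + 1) ** t + 1                        # Equation (2)
    assert s * (s + 1) ** t <= E <= s * (s - 1) / 2 * (s + 1) ** t
    # 2E/N approaches the interval [2s/(s-1), s] as t grows
    print(p, N, E, 2 * E / N)
```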

3. Topological Properties and ATT

Next, we discuss some relevant topological characteristics of our network, including degree distribution, clustering coefficient, diameter and average weighted shortest path, and give the lower bound of the ATT for our networks.

3.1. Degree Distribution

The degree distribution $P(k)$ is a quantity that describes the overall characteristics of a network. It is defined as the probability that a randomly selected node has exactly k incident edges. The cumulative degree distribution $P_{cum}(k)$ is defined as the probability that the degree of a node is greater than or equal to k, that is, $P_{cum}(k) = \sum_{k'\ge k} P(k')$. Many networks are regarded as scale-free when their cumulative degree distribution approximately follows a power law $P_{cum}(k)\sim k^{1-\gamma}$ with exponent $\gamma$ between 2 and 3 [25].
Theorem 1.
The cumulative degree distribution of the network G s , r B ( t ) obeys a power law distribution
$$P_{cum}^B(k) = (s-1)^{\frac{\ln(s+1)}{\ln 2}}\,k^{1-\gamma}, \qquad \gamma = 1 + \frac{\ln(s+1)}{\ln 2},$$
and G s , r B ( t ) is a scale-free network if, and only if, s = 3 .
Proof. 
Let $k_i^B(t)$ be the degree of node i in the network $G_{s,r}^B(t)$ at time step t. When node i joins the network at time step $t_i$, there is only one $K_s$ connected to it, so $k_i^B(t_i) = s-1$. The degree of node i depends on the number of clusters $K_s$ containing it. Let $R^B(i,t)$ be the number of clusters $K_s$ containing node i at time step t. We have the relation $R^B(i,t) = 2R^B(i,t-1)$; considering the initial condition $R^B(i,t_i) = 1$, this gives $R^B(i,t) = 2^{t-t_i}$. Further, $k_i^B(t)$ and $R^B(i,t)$ are related by
$$k_i^B(t) = (s-1)R^B(i,t) = (s-1)2^{t-t_i}.$$
We know that the degree of node i satisfies k i B ( t ) = 2 k i B ( t 1 ) , which shows that the degree spectrum of G s , r B ( t ) is discrete. Table 1 lists the degree spectrum of the two networks G s , r A ( t ) and G s , r B ( t ) , and n Z ( k i ) ( Z = A , B ) represents the number of nodes with degree k i .
Analyzing the degree spectrum of G s , r B ( t ) , we can obtain its cumulative degree distribution,
$$P_{cum}^B(k) = \frac{1}{N_t^B}\sum_{k_i^B\ge k} n^B(k_i^B) = \frac{(s-1)(s+1)^{t_i}+1}{(s-1)(s+1)^t+1}.$$
From Equation (7) we can solve $t_i = t - \frac{\ln\left[k/(s-1)\right]}{\ln 2}$ and substitute it into Equation (8), which gives
$$P_{cum}^B(k) = \frac{(s-1)(s+1)^{\,t-\frac{\ln\left[k/(s-1)\right]}{\ln 2}}+1}{(s-1)(s+1)^t+1} \approx (s-1)^{\frac{\ln(s+1)}{\ln 2}}\,k^{-\frac{\ln(s+1)}{\ln 2}}.$$
When t is large enough, the cumulative degree distribution therefore follows a power law with exponent $\gamma = 1 + \ln(s+1)/\ln 2$, and $\gamma\in[2,3]$ if, and only if, $1\le s\le 3$. Since the parameter range is $s\ge 3$, the network $G_{s,r}^B(t)$ is a scale-free network only when $s = 3$. On the other hand, the cumulative degree distribution of $G_{s,r}^A(t)$ can be found in [22]; it is $P_{cum}^A(k) = 2^{\frac{\ln(s+1)}{\ln 2}}\,k^{-\frac{\ln(s+1)}{\ln 2}}$, which shows that $G_{s,r}^A(t)$ also satisfies the scale-free property only when $s = 3$. Therefore, the generalized weighted Koch network $G_{s,r}(t)$ has a remarkable scale-free property when $s = 3$. When $s > 3$, the degree distribution of $G_{s,r}(t)$ still obeys a power-law form $k^{1-\gamma}$, but with exponent $\gamma = 1 + \frac{\ln(s+1)}{\ln 2} > 3$. □
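The degree spectrum of Table 1 can also be checked empirically. The following snippet counts degrees in $G_{s,r}^B(t)$ and compares them with the predicted multiplicities; it again relies on the construction sketch from Section 2 (and its assumptions), and the exponent a plays the role of a node's age $t - t_i$.

```python
from collections import Counter

s, t = 4, 4
G = generalized_koch(s=s, r=1.0, p=1.0, t=t, seed=0)
spectrum = Counter(deg for _, deg in G.degree())
# Nodes of age a (created at step t - a) should have degree (s-1)*2^a and
# appear s(s-1)(s+1)^(t-a-1) times; the s initial nodes have degree (s-1)*2^t.
for a in range(t):
    assert spectrum[(s - 1) * 2 ** a] == s * (s - 1) * (s + 1) ** (t - a - 1)
assert spectrum[(s - 1) * 2 ** t] == s
```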

3.2. Clustering Coefficient

The overall clustering coefficient of a network quantifies its ability to agglomerate, while the local clustering coefficient measures the agglomeration near each node. The local clustering coefficient $c_i$ of node i is defined as the ratio between the number of existing edges $e_i$ among its $k_i$ neighbors and the number of all possible edges $k_i(k_i-1)/2$ between them, that is, $c_i = 2e_i/\left[k_i(k_i-1)\right]$ [26]. The overall clustering coefficient of the network is $C = \frac{1}{N_t}\sum_{i\in V(G)} c_i$, the average of $c_i$ over all nodes.
Theorem 2.
For the network $G_{s,r}^B(t)$, the overall clustering coefficient satisfies
$$C_B \le \frac{2s+1}{2(s+1)}, \qquad \text{for } t \to \infty.$$
Proof. 
According to the structure of the network G s , r B ( t ) , it can be judged that the nodes with the same degree have the same local clustering coefficient. Therefore, Table 2 shows the local clustering coefficient and the corresponding number of nodes. n B ( k i B ) represents the number of nodes with degree k i B , and c ( k i B ) is the local clustering coefficient of each node with degree k i B . Therefore, the overall clustering coefficient C B of G s , r B ( t ) is
$$C_B = \frac{1}{N_t^B}\sum_{k_i^B} c(k_i^B)\cdot n^B(k_i^B) = \frac{1}{(s-1)(s+1)^t+1}\left[s(s-1)(s+1)^{t-1} + \frac{s(s-2)(s-1)(s+1)^{t-2}}{2(s-1)-1} + \frac{s(s-2)(s-1)(s+1)^{t-3}}{2^2(s-1)-1} + \cdots + \frac{s(s-2)}{2^t(s-1)-1}\right] \le \frac{2s+1}{2(s+1)}.$$
In the limit of large t, $C_B$ remains of order 1, which indicates that the network $G_{s,r}^B(t)$ is highly clustered; the clustering coefficient can be tuned by adjusting the parameter s. For the network $G_{s,r}^A(t)$, when $s = 3$ the local clustering coefficient of a node with degree k is approximately $k^{-1}$ for large t; otherwise (for $s > 3$), the clustering coefficient of $G_{s,r}^A(t)$ is always equal to zero because the cycles $C_s$ contain no triangles. Therefore, the generalized weighted Koch networks pass from low clustering to high clustering as the probability p increases over the interval $[0, 1]$. □
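For a concrete, non-rigorous illustration of this contrast, one can compute the overall clustering coefficient of the two extreme networks with networkx; the parameter values below are arbitrary and the construction sketch from Section 2 is assumed.

```python
import networkx as nx

s, t = 4, 3
G_B = generalized_koch(s=s, r=1.0, p=1.0, t=t, seed=0)   # all clusters are K_s
G_A = generalized_koch(s=s, r=1.0, p=0.0, t=t, seed=0)   # all clusters are C_s
print(nx.average_clustering(G_B))   # of order 1: highly clustered
print(nx.average_clustering(G_A))   # exactly 0 for s > 3, since C_s has no triangles
```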

3.3. Diameter

The diameter D m a x ( t ) of a network is used to measure information transmission delays and find vital nodes in the network. It is defined as the largest distance between any pair of nodes. Let D m a x A ( t ) and D m a x B ( t ) denote the diameters of G s , r A ( t ) and G s , r B ( t ) , respectively.
Theorem 3.
The diameter of the network G s , r A ( t ) is equal to
$$D_{max}^A(t) = \begin{cases}\left(\frac{1}{2}+t\right)s, & s \text{ is even};\\[2pt] \left(\frac{1}{2}+t\right)(s-1), & s \text{ is odd}.\end{cases}$$
The diameter of the network G s , r B ( t ) is
$$D_{max}^B(t) = 2t+1.$$
Proof. 
By the topological structure of G s , r A ( t ) , we find that a node on the cycle C s can only reach another node through a path on the cycle, so we discuss the diameter of the network G s , r A ( t ) according to the parity of the number of nodes.
Case 1. When s is even, we assume that the two nodes at the largest distance in the network $G_{s,r}^A(t-1)$ are $x_1$ and $x_2$. In the network $G_{s,r}^A(t)$, let $y_1$ be the farthest node from $x_1$ among the new neighbors of $x_1$, and $y_2$ the farthest node from $x_2$ among the new neighbors of $x_2$; new neighbors are those neighbors generated at time step t. Then the diameter of $G_{s,r}^A(t)$ is the distance between $y_1$ and $y_2$, and we obtain
$$D_{max}^A(t) = D_{max}^A(t-1) + s.$$
With the initial value $D_{max}^A(0) = s/2$, this gives
$$D_{max}^A(t) = \left(\frac{1}{2}+t\right)s.$$
Case 2. When s is odd, the diameters of the network $G_{s,r}^A(t)$ at two consecutive time steps $t-1$ and t satisfy
$$D_{max}^A(t) = D_{max}^A(t-1) + s - 1,$$
and the diameter of the smallest network is clearly $D_{max}^A(0) = (s-1)/2$, so the above recursion yields
$$D_{max}^A(t) = \left(\frac{1}{2}+t\right)(s-1).$$
Considering the construction algorithm of $G_{s,r}^B(t)$, the diameters of $G_{s,r}^B(t)$ and $G_{s,r}^B(t-1)$ at two consecutive time steps obey
$$D_{max}^B(t) = D_{max}^B(t-1) + 2,$$
with the initial condition $D_{max}^B(0) = 1$; then, for any $t\ge 0$, we have
$$D_{max}^B(t) = 2t+1.$$
Theorem 3 shows that information propagates more efficiently in $G_{s,r}^B(t)$ than in $G_{s,r}^A(t)$. Among all generalized weighted Koch networks, $G_{s,r}^A(t)$ has the largest diameter and $G_{s,r}^B(t)$ has the smallest. Thus, the diameter of the generalized weighted Koch network satisfies $2t+1 \le D_{max}(t) \le \left(\frac{1}{2}+t\right)s$. □
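Theorem 3 can be checked on small instances. The snippet below, based on the construction sketch from Section 2 (with arbitrary parameter choices), compares nx.diameter with the closed forms for an even s, an odd s, and the complete-graph case.

```python
import networkx as nx

checks = [
    (6, 0.0, lambda t: (t + 0.5) * 6),        # G^A with s even: (1/2 + t)s
    (5, 0.0, lambda t: (t + 0.5) * 4),        # G^A with s odd: (1/2 + t)(s - 1)
    (5, 1.0, lambda t: 2 * t + 1),            # G^B: 2t + 1
]
for s, p, expected in checks:
    for t in range(3):
        G = generalized_koch(s=s, r=1.0, p=p, t=t, seed=0)
        assert nx.diameter(G) == expected(t)
```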

3.4. Average Weighted Shortest Path

In this subsection, we take the weights into account and examine the shortest paths between pairs of nodes in our networks. The average weighted shortest path is defined as $L_t = \frac{2}{N_t(N_t-1)}D_{tot}(t)$, where $D_{tot}(t) = \sum_{i<j,\ i,j\in G_{s,r}(t)} d_{ij}(t)$ and $d_{ij}(t)$ denotes the weighted shortest distance between nodes i and j in $G_{s,r}(t)$ [27,28]. We determine the lower bound of $L_t$ for our network $G_{s,r}(t)$ by calculating the average weighted shortest path of $G_{s,r}^B(t)$.
Theorem 4.
For the network $G_{s,r}^B(t)$, when $r = 1$, the average weighted shortest path is
$$L_t^B = \frac{2\left[A_1(s+1)^t + A_2(s+1)^{2t} + A_3\,t(s+1)^{2t}\right]}{\left[(s-1)(s+1)^t+1\right](s-1)(s+1)^t},$$
and when $r \neq 1$,
$$L_t^B = \frac{2\left[A_4(sr+1)^t + A_5(s+1)^{2t} + A_6(sr+1)^t(s+1)^t\right]}{\left[(s-1)(s+1)^t+1\right](s-1)(s+1)^t},$$
where the values of $A_1$–$A_6$ are given in Appendix A. They are constants that depend only on the parameter s and the weight r, and do not depend on the time step t.
Proof. 
The recursive construction of the network allows us to calculate $D_{tot}(t)$. The network $G_{s,r}^B(t+1)$ can be divided into $s+1$ branches, which we label $G_{s,r}^{B,n}(t)$ for $n = 1, 2, \ldots, s+1$; the central branch $G_{s,r}^{B,1}(t)$ is a copy of $G_{s,r}^B(t)$, while $G_{s,r}^{B,2}(t), G_{s,r}^{B,3}(t), \ldots, G_{s,r}^{B,s+1}(t)$ have the same structure as $G_{s,r}^B(t)$ but with their edge weights scaled by a factor of r. We denote by $W_1, W_2, \ldots, W_s$ the connecting nodes that join the central copy $G_{s,r}^{B,1}(t)$ to the other copies $G_{s,r}^{B,n}(t)$, $n = 2, 3, \ldots, s+1$. Therefore, the total of the shortest distances $D_{tot}(t+1)$ satisfies the following relation,
$$D_{tot}(t+1) = (sr+1)D_{tot}(t) + \Omega_t,$$
where $\Omega_t$ is the sum over all shortest paths whose endpoints do not lie in the same copy of $G_{s,r}^B(t)$; that is, every path counted in $\Omega_t$ passes through at least one of the s connecting nodes $W_1, W_2, \ldots, W_s$. The first term in Equation (22) is the sum of the weighted shortest paths between pairs of nodes lying in the same branch $G_{s,r}^{B,n}(t)$, $n = 1, 2, \ldots, s+1$. Considering the scaling of the edges, we have
$$\sum_{n=1}^{s+1}\sum_{i,j\in G_{s,r}^{B,n}(t)} d_{ij} = \sum_{i,j\in G_{s,r}^{B,1}(t)} d_{ij} + \sum_{n=2}^{s+1}\sum_{i,j\in G_{s,r}^{B,n}(t)} d_{ij} = \sum_{i,j\in G_{s,r}^{B}(t)} d_{ij} + rs\sum_{i,j\in G_{s,r}^{B}(t)} d_{ij} = (sr+1)D_{tot}(t).$$
Next, the analytical expression for $\Omega_t$ is not difficult to find; two different situations need to be discussed. Let $\Omega_t^{\alpha\beta}$ denote the sum of the shortest distances between the nodes of a peripheral copy $G_{s,r}^{B,n}(t)$, $n = 2, 3, \ldots, s+1$, and the nodes of the central copy $G_{s,r}^{B,1}(t)$. Moreover, let $\Omega_t^{\alpha\gamma}$ denote the sum of the shortest distances between nodes lying in two different peripheral copies; such paths must pass through $G_{s,r}^{B,1}(t)$, although neither endpoint lies in $G_{s,r}^{B,1}(t)$. Thus, we have
$$\Omega_t = s\,\Omega_t^{\alpha\beta} + \frac{s(s-1)}{2}\,\Omega_t^{\alpha\gamma}.$$
It can be seen from the above formula that we need to calculate Ω t α β and Ω t α γ to obtain Ω t . We define a variable
$$\Delta_t = \sum_{i\in G_{s,r}^B(t),\, i\neq W_2} d_{iW_2},$$
which represents the sum of the distances from all nodes in G s , r B ( t ) to node W 2 ; specifically, we can obtain the following result for G s , r B ( 1 ) ,
$$\Delta_1 = (r+1)(s-1)^2 + (s-1)r + (s-1).$$
Considering the self-similar structure at time step t, we know that the quantity Δ t evolves recursively as
$$\Delta_t = r\Delta_{t-1} + (s-1)\left(r\Delta_{t-1} + N_{t-1} - 1\right) + \Delta_{t-1} = (sr+1)\Delta_{t-1} + (s-1)^2(s+1)^{t-1}.$$
We obtained the recursive relation of Δ t , and combined with the initial condition Δ 1 given by Equation (26), we can calculate the expression of Δ t ,
$$\Delta_t = \begin{cases}\left[(s-1)^2\,t + s^2 - 1\right](s+1)^{t-1}, & r = 1;\\[4pt] \dfrac{s-1-(s^2-s)r}{s(1-r)}(sr+1)^t + \dfrac{(s-1)^2}{s(1-r)}(s+1)^t, & r\neq 1.\end{cases}$$
Then, we calculate the two variables Ω t α β and Ω t α γ through the variable Δ t , and the following two cases are discussed.
Case 1. One of $\alpha$ and $\beta$ must equal 1, because only the central copy $G_{s,r}^{B,1}(t)$ shares a node with the other copies. Suppose $G_{s,r}^{B,\alpha}(t)$ and $G_{s,r}^{B,\beta}(t)$ share the connecting node $W_2$ (taken as representative without loss of generality), where $\alpha,\beta\in\{1,\ldots,s+1\}$ and $\alpha\neq\beta$. For two nodes $i\in G_{s,r}^{B,\alpha}(t)$ and $j\in G_{s,r}^{B,\beta}(t)$ with $i,j\neq W_2$, we have
$$\Omega_t^{\alpha\beta} = \sum_{\substack{i\in G_{s,r}^{B,\alpha}(t),\, j\in G_{s,r}^{B,\beta}(t)\\ i,j\neq W_2}} d_{ij} = \sum_{\substack{i\in G_{s,r}^{B,\alpha}(t),\, j\in G_{s,r}^{B,\beta}(t)\\ i,j\neq W_2}}\left(d_{iW_2}+d_{jW_2}\right) = (N_t-1)\sum_{i\neq W_2} d_{iW_2} + (N_t-1)\sum_{j\neq W_2} d_{jW_2} = (r+1)(N_t-1)\Delta_t.$$
Case 2. The two peripheral copies $G_{s,r}^{B,\alpha}(t)$ and $G_{s,r}^{B,\gamma}(t)$ have no common node, so every path from a node i in one copy to a node j in the other must cross the two connecting nodes $W_2$ and $W_m$ of the central copy $G_{s,r}^{B,1}(t)$:
$$\Omega_t^{\alpha\gamma} = \sum_{\substack{i\in G_{s,r}^{B,\alpha}(t),\, j\in G_{s,r}^{B,\gamma}(t)\\ i,j\neq W_2,W_m}} d_{ij} = \sum\left(d_{iW_2} + d_{W_2W_m} + d_{W_mj}\right) = r(N_t-1)\Delta_t + r(N_t-1)\Delta_t + (N_t-1)^2 = 2r(N_t-1)\Delta_t + (N_t-1)^2,$$
where $d_{W_2W_m}=1$ is used and $W_m$ ($\neq W_2$) denotes the connecting node of the other peripheral copy. Substituting Equations (29) and (30) into Equation (24), we have
$$\Omega_t = s\left[r(N_t-1)\Delta_t + (N_t-1)\Delta_t\right] + \frac{s(s-1)}{2}\left[2r(N_t-1)\Delta_t + (N_t-1)^2\right].$$
Considering Equations (2) and (28), then substituting them into Equation (31), we obtain the following.
If r = 1 , then
$$\Omega_t = \left[s(s-1)^3\,t + \frac{(3s^2+s)(s-1)^2}{2}\right](s+1)^{2t}.$$
If r 1 , we have
$$\Omega_t = \frac{(s-1)^2(1-s^2r^2)}{1-r}(sr+1)^t(s+1)^t + \frac{\left[(s^2-s)r+s^2+s-2\right](s-1)^2}{2(1-r)}(s+1)^{2t}.$$
According to Equation (22) and the initial condition $D_{tot}(0)=\frac{s(s-1)}{2}$, the result of $D_{tot}(t)$ can be obtained from the recursive formula. If $r = 1$, we have
$$D_{tot}(t) = \frac{s^2(s^2-1)+2(s-1)^3(s+1)-(3s^2+s)(s-1)^2}{2s(s+1)}(s+1)^t + \frac{(3s^2+s)(s-1)^2-2(s+1)(s-1)^3}{2s(s+1)}(s+1)^{2t} + \frac{(s-1)^3}{s+1}\,t(s+1)^{2t}.$$
When r 1 , we obtain
$$D_{tot}(t) = \left\{\frac{s(s-1)}{2} - \frac{(s-1)^2(1-s^2r^2)}{s(1-r)(sr+1)} - \frac{\left[(s^2-s)r+s^2+s-2\right](s-1)^2}{2(1-r)\left[(s+1)^2-(sr+1)\right]}\right\}(sr+1)^t + \frac{\left[(s^2-s)r+s^2+s-2\right](s-1)^2}{2(1-r)\left[(s+1)^2-(sr+1)\right]}(s+1)^{2t} + \frac{(s-1)^2(1-s^2r^2)}{s(1-r)(sr+1)}(sr+1)^t(s+1)^t.$$
Further, substituting Equations (34) and (35) into $L_t = \frac{2}{N_t(N_t-1)}D_{tot}(t)$, we obtain the average weighted shortest path for $r = 1$,
$$L_t^B = \frac{2\left[A_1(s+1)^t + A_2(s+1)^{2t} + A_3\,t(s+1)^{2t}\right]}{\left[(s-1)(s+1)^t+1\right](s-1)(s+1)^t}.$$
For $r \neq 1$, we can obtain
$$L_t^B = \frac{2\left[A_4(sr+1)^t + A_5(s+1)^{2t} + A_6(sr+1)^t(s+1)^t\right]}{\left[(s-1)(s+1)^t+1\right](s-1)(s+1)^t},$$
where A 1 , A 2 , A 3 , A 4 , A 5 , A 6 are constants that do not depend on the time step t, they are only related to s and r. Appendix A contains the detailed values of A 1 A 6 . Therefore, we can determine that the lower bound of the average weighted shortest path of network G s , r ( t ) is equal to L t B . □
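The closed forms of Theorem 4 can be compared against brute-force Dijkstra computations on small instances. The snippet below, which assumes the construction sketch from Section 2 and uses arbitrary parameter values, prints the numerically computed $L_t^B$ by summing the weighted shortest distances over unordered pairs (for $s = 3$, $r = 1$ the first values come out as 1, 2, and roughly 3.09).

```python
import networkx as nx

s = 3
for r in (1.0, 2.0):
    for t in range(3):
        G = generalized_koch(s=s, r=r, p=1.0, t=t, seed=0)
        N = G.number_of_nodes()
        lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
        D_tot = sum(lengths[i][j] for i in G for j in G if i < j)   # unordered pairs
        print(f"r={r} t={t}  L_t = {2 * D_tot / (N * (N - 1)):.4f}")
```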

3.5. ATT on Random Walk with Weight

Next, we derive analytically the average trapping time for weighted random walks and show how it scales with the network order and the parameters r and s. The strength of a node integrates the information concerning its connectivity and the weights of its edges. Let $s_i = \sum_{j\in N(i)} w_{ij}$ be the strength of node i. A walker starting from a given node i moves to a neighbor j with probability $p_{ij}$ at each step, and the transition probability from node i to j is
$$p_{ij} = \frac{w_{ij}}{s_i} = \frac{w_{ij}}{\sum_{j\in N(i)} w_{ij}},$$
where $N(i)$ is the set of neighbors of node i. For convenience, we denote the nodes in $G_{s,r}(t-1)$ by $1, 2, \ldots, N_{t-1}-1, N_{t-1}$, and the other nodes generated at time step t by $N_{t-1}+1, N_{t-1}+2, \ldots, N_t-1, N_t$. Let $T_i(t)$ be the MFPT from node i to the trap. $T_t$ denotes the ATT, defined as the mean of $T_i(t)$ over all source nodes of the whole network; it is the core quantity considered in this subsection. By definition, $T_t$ is given by
$$T_t = \frac{1}{N_t-1}\sum_{i=2}^{N_t} T_i(t),$$
and, further, we denote by $T_{tot}(t)$ the sum of the MFPTs of all nodes to absorption at the trap located at one of the nodes of $G(0)$, that is,
$$T_{tot}(t) = \sum_{i=2}^{N_t} T_i(t).$$
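Because the networks are small for moderate t, the ATT can also be obtained exactly by solving the linear system behind Equations (38)–(40): the MFPTs of the non-trap nodes satisfy $T = \mathbf{1} + PT$, i.e., $(I-P)T=\mathbf{1}$. The helper below is an illustrative implementation (numpy, networkx, and the construction sketch from Section 2 are assumed; the function name is ours).

```python
import numpy as np
import networkx as nx

def average_trapping_time(G, trap):
    """Exact ATT of a weighted random walk on G with one absorbing trap node."""
    nodes = [v for v in G.nodes() if v != trap]
    index = {v: k for k, v in enumerate(nodes)}
    n = len(nodes)
    P = np.zeros((n, n))
    for k, v in enumerate(nodes):
        strength = sum(d["weight"] for _, _, d in G.edges(v, data=True))
        for _, u, d in G.edges(v, data=True):            # transition rule of Eq. (38)
            if u != trap:
                P[k, index[u]] += d["weight"] / strength
    T = np.linalg.solve(np.eye(n) - P, np.ones(n))       # T_i = 1 + sum_j p_ij T_j
    return T.mean()                                      # average over non-trap nodes

# Example: trap at an initial node (node 0); for s = 3, r = 1, t = 1 this gives 7.5.
print(average_trapping_time(generalized_koch(3, 1.0, 1.0, 1, seed=0), trap=0))
```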
Theorem 5.
Let $r > 0$ be the weight factor. When $r = 1$, the average trapping time of the network $G_{s,r}^B(t)$ is
$$T_t^B = \frac{1}{(s-1)(s+1)^t}\left[B_1(1+s)^t + B_2(1+s)^{2t} + B_3\,t(1+s)^t\right],$$
and when $r \neq 1$,
$$T_t^B = \frac{1}{(s-1)(s+1)^t}\left[B_4(1+sr)^t + B_5(1+sr)^t(1+s)^t - B_6(1+s)^t\right],$$
where the constants $B_1$–$B_6$ are given in Appendix B. The relationship between the average trapping time and the network order $N_t$ can be expressed as
$$T_t \sim \begin{cases} N_t, & r = 1;\\[2pt] N_t^{\log_{s+1}(sr+1)}, & r \neq 1.\end{cases}$$
Proof. 
For a certain time step $t_i$, the nodes generated at time step $t_i$ are called new nodes, and the nodes added to the network before time step $t_i$ are called old nodes. Let X be the MFPT from a node i to any of its $k_i(t-1)$ old neighbors, and let Y be the MFPT from any of the new neighbors of node i to one of its $k_i(t-1)$ old neighbors; these two quantities satisfy
$$X = \frac{1}{r+1} + \frac{r}{r+1}(1+Y), \qquad Y = \frac{1}{s-1}(1+X) + \frac{s-2}{s-1}(1+Y).$$
Solving these equations gives $X = 1 + sr$. Thus, upon the growth of the weighted network from time step t to time step $t+1$, the trapping time of an arbitrary node i increases by a factor of $1+sr$, that is,
T i ( t + 1 ) = ( 1 + s r ) T i ( t ) .
Next, we consider the MFPT of all nodes in light of the classification of new nodes and old nodes, which is written as the following formula:
$$T_{t,tot}(t) = T_{t-1,tot}(t) + \bar{T}_{t,tot}(t) = (1+sr)T_{t-1,tot}(t-1) + \bar{T}_{t,tot}(t),$$
where $\bar{T}_{t,tot}(t)$ is the sum of the MFPTs of all new nodes. Equation (46) shows that the key quantity to calculate is $\bar{T}_{t,tot}(t)$. According to the construction of the network $G_{s,r}^B(t)$, as shown in Figure 4, for a new complete graph $K_s$ attached to an old node v, the first-passage times of its $s-1$ new nodes $Q_1, Q_2, \ldots, Q_{s-1}$ and that of the old node v obey the relations,
$$T(Q_1) = 1 + \frac{1}{s-1}\left[T(v) + T(Q_2) + T(Q_3) + \cdots + T(Q_{s-1})\right],$$
$$T(Q_2) = 1 + \frac{1}{s-1}\left[T(v) + T(Q_1) + T(Q_3) + \cdots + T(Q_{s-1})\right],$$
$$\vdots$$
$$T(Q_{s-1}) = 1 + \frac{1}{s-1}\left[T(v) + T(Q_1) + T(Q_2) + \cdots + T(Q_{s-2})\right].$$
According to the above equations, we obtain
$$T(Q_1) + T(Q_2) + \cdots + T(Q_{s-1}) = (s-1)^2 + (s-1)T(v),$$
Summing Equation (48) over all the new clusters created at step $t+1$ — one for each node of each of the $R(t) = (s+1)^t$ clusters existing at step t — and letting $V(t)$ denote the set of all nodes of $G_{s,r}^B(t)$ (i.e., all old nodes), we obtain
$$\bar{T}_{t+1,tot}(t+1) = s(s-1)^2R(t) + \sum_{i\in V(t)}\left[(s-1)\,R^B(i,t)\,T_i(t+1)\right] = s(s-1)^2(s+1)^t + (s-1)\bar{T}_{t,tot}(t+1) + 2(s-1)\bar{T}_{t-1,tot}(t+1) + \cdots + 2^t(s-1)\bar{T}_{0,tot}(t+1).$$
Similarly, it is not difficult to write T ¯ t , t o t ( t ) as
$$\bar{T}_{t,tot}(t) = s(s-1)^2(s+1)^{t-1} + (s-1)\bar{T}_{t-1,tot}(t) + 2(s-1)\bar{T}_{t-2,tot}(t) + \cdots + 2^{t-1}(s-1)\bar{T}_{0,tot}(t).$$
Multiplying Equation (50) by $2(1+sr)$ and subtracting the result from Equation (49), we obtain
$$\bar{T}_{t+1,tot}(t+1) = (1+sr)(1+s)\bar{T}_{t,tot}(t) + s(s-1)^2(s-2sr-1)(s+1)^{t-1}.$$
Considering the initial value $\bar{T}_{1,tot}(1) = (s-1)^2\left[(s^2-s)r + 2s - 1\right]$ and substituting it into Equation (51), we can solve Equation (51) to yield
$$\bar{T}_{t,tot}(t) = \left[(s-1)^2\left((s^2-s)r+2s-1\right) + \frac{s(s-1)^2(s-2sr-1)}{(1+sr)(1+s)-s-1}\right](1+sr)^{t-1}(1+s)^{t-1} - \frac{s(s-1)^2(s-2sr-1)}{(1+sr)(1+s)-s-1}(s+1)^{t-1},$$
substituting Equation (52) into Equation (46) and taking the initial value $T_{1,tot}(1) = s(s-1)^2(sr+2)$ into consideration. When $r = 1$, we have
$$T_{t,tot}(t) = \left[(s-1)^2 - \frac{(s-1)^3(s+2)}{s(s+1)}\right](1+s)^t + \frac{(s-1)^3(s+2)}{s(s+1)}(1+s)^{2t} + \frac{(s-1)^2}{s+1}\,t(1+s)^t.$$
If $r \neq 1$, we obtain
$$T_{t,tot}(t) = \left[(s-1)^2 - \frac{(s-1)^3(sr+r+1)}{s(s+1)r} + \frac{(s-1)^2(s-2sr-1)}{s(s+1)r(1-r)}\right](1+sr)^t + \frac{(s-1)^3(sr+r+1)}{s(s+1)r}(1+sr)^t(1+s)^t - \frac{(s-1)^2(s-2sr-1)}{s(s+1)r(1-r)}(1+s)^t.$$
Substituting Equations (53) and (54) into Equation (40), if r = 1 , we obtain
$$T_t^B = \frac{1}{(s-1)(s+1)^t}\left[B_1(1+s)^t + B_2(1+s)^{2t} + B_3\,t(1+s)^t\right],$$
where $B_1$, $B_2$, $B_3$ are constants independent of t, so $T_t^B \sim N_t$ for $t \to \infty$. Additionally, if $r \neq 1$, we can obtain
$$T_t^B = \frac{1}{(s-1)(s+1)^t}\left[B_4(1+sr)^t + B_5(1+sr)^t(1+s)^t - B_6(1+s)^t\right],$$
where $B_4$, $B_5$, $B_6$ are also constants that do not depend on t; they are related only to the parameters r and s. The detailed values of $B_1$–$B_6$ are given in Appendix B. Next, we express $T_t^B$ in terms of the network order $N_t$. From $N_t = (s-1)(s+1)^t + 1$ we obtain $t = \log_{s+1}\frac{N_t-1}{s-1}$, so we have
$$T_t^B \sim N_t^{\log_{s+1}(1+sr)}$$
for $t \to \infty$. The above results show that $T_t^B$ grows linearly with the network order when $r = 1$, while $T_t^B$ grows sub-linearly and super-linearly with $N_t$ if $r < 1$ and $r > 1$, respectively. We find that the weight r can modify not only the prefactor of $T_t^B$, but also the scaling of $T_t^B$. This means that the lower bound of the weighted ATT of the generalized weighted Koch network $G_{s,r}(t)$ is $T_t^B$. □
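As an empirical illustration of Equation (57), one can compute the ATT for growing t and divide by $N_t^{\log_{s+1}(1+sr)}$; the ratio should level off. This snippet reuses the two helper functions sketched earlier (their assumptions carry over) with arbitrary parameter choices.

```python
import math

s, r, p = 3, 3.0, 1.0
exponent = math.log(1 + s * r, s + 1)
for t in range(1, 4):
    G = generalized_koch(s=s, r=r, p=p, t=t, seed=0)
    att = average_trapping_time(G, trap=0)
    # the last column should tend to a constant as t grows
    print(t, G.number_of_nodes(), round(att, 2),
          round(att / G.number_of_nodes() ** exponent, 4))
```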

4. Conclusions

In this paper, we construct a class of random networks based on a probability p and the ordinary Koch network, and give necessary and sufficient conditions for such networks to be scale-free. For the two deterministic network models $G_{s,r}^A(t)$ and $G_{s,r}^B(t)$ corresponding to the extreme conditions $p = 0$ and $p = 1$, we give solutions for topological parameters, including the clustering coefficient, diameter, average weighted shortest path, and average trapping time. In view of this, we determine the range of the topological parameters of the random network $G_{s,r}(t)$. Furthermore, we reveal the effect of weights on the ATT: the ATT grows linearly with the network order when the weight $r = 1$, whereas it grows super-linearly and sub-linearly with the network order when $r > 1$ and $r < 1$, respectively. These topological features and dynamic characteristics allow us to better understand some fundamental properties of complex networks.

Author Contributions

Conceptualization, B.Y. and J.S.; writing—original draft preparation, J.S. and M.Z.; writing—review and editing, J.S. and B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Plan under Grant No. 2019YFA0706401 and the National Natural Science Foundation of China under Grants No. 61872166 and No. 61662066. The Technological Innovation Guidance Program of Gansu Province: Soft Science Special Project (21CX1ZA285). The Northwest China Financial Research Center Project of Lanzhou University of Finance and Economics (JYYZ201905).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Values of A1–A6

In Appendix A, we give the analytical values of A 1 , A 2 , A 3 , A 4 , A 5 , A 6 involved in Theorem 4.
$$A_1 = \frac{s^2(s^2-1)+2(s-1)^3(s+1)-(3s^2+s)(s-1)^2}{2s(s+1)}, \qquad A_2 = \frac{(3s^2+s)(s-1)^2-2(s+1)(s-1)^3}{2s(s+1)}, \qquad A_3 = \frac{(s-1)^3}{s+1},$$
$$A_4 = \frac{s(s-1)}{2} - \frac{(s-1)^2(1-s^2r^2)}{s(1-r)(sr+1)} - \frac{\left[(s^2-s)r+s^2+s-2\right](s-1)^2}{2(1-r)\left[(s+1)^2-(sr+1)\right]}, \qquad A_5 = \frac{\left[(s^2-s)r+s^2+s-2\right](s-1)^2}{2(1-r)\left[(s+1)^2-(sr+1)\right]}, \qquad A_6 = \frac{(s-1)^2(1-s^2r^2)}{s(1-r)(sr+1)}.$$

Appendix B. The Values of B1–B6

In Appendix B, we give the values of B 1 , B 2 , B 3 , B 4 , B 5 , B 6 involved in Theorem 5.
$$B_1 = (s-1)^2 - \frac{(s-1)^3(s+2)}{s(s+1)}, \qquad B_2 = \frac{(s-1)^3(s+2)}{s(s+1)}, \qquad B_3 = \frac{(s-1)^2}{s+1},$$
$$B_4 = (s-1)^2 - \frac{(s-1)^3(sr+r+1)}{s(s+1)r} + \frac{(s-1)^2(s-2sr-1)}{s(s+1)r(1-r)}, \qquad B_5 = \frac{(s-1)^3(sr+r+1)}{s(s+1)r}, \qquad B_6 = \frac{(s-1)^2(s-2sr-1)}{s(s+1)r(1-r)}.$$

References

1. Al-Tarawneh, A.; Al-Saraireh, J. Efficient detection of hacker community based on twitter data using complex networks and machine learning algorithm. J. Intell. Fuzzy Syst. 2021, 40, 12321–12337.
2. Fu, Y.; Zhang, Y.; Guo, Y.; Xie, Y. Evolutionary dynamics of cooperation with the celebrity effect in complex networks. Chaos 2021, 31, 013130.
3. Huang, Y.; Zhang, H.; Zeng, C.; Xue, Y. Scale-free and small-world properties of a multiple-hub network with fractal structure. Phys. A Stat. Mech. Its Appl. 2020, 558, 125001.
4. Pi, X.; Tang, L.; Chen, X. A directed weighted scale-free network model with an adaptive evolution mechanism. Phys. A Stat. Mech. Its Appl. 2021, 572, 125897.
5. Rak, R.; Rak, E. The Fractional Preferential Attachment Scale-Free Network Model. Entropy 2020, 22, 509.
6. Zhou, Y.; Wang, J.; Huang, G.Q. Efficiency and robustness of weighted air transport networks. Transp. Res. Part E Logist. Transp. Rev. 2019, 122, 14–26.
7. Zhang, J.; Hu, J.; Liu, J. Neural Network With Multiple Connection Weights. Pattern Recognit. 2020, 107, 107481.
8. Erdős, P.; Rényi, A. On Random Graphs I. Publ. Math. 1959, 4, 3286–3291.
9. Watts, D.J.; Strogatz, S.H. Collective dynamics of small-world networks. Nature 1998, 393, 440–442.
10. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 5439, 509–512.
11. Hwang, S.; Lee, D.S.; Kahng, B. First passage time for random walks in heterogeneous networks. Phys. Rev. Lett. 2012, 109, 088701.
12. Yin, X.H.; Zhao, S.Y.; Chen, X.Y. Community Detection Algorithm Based on Random Walk of Signal Propagation with Bias. Comput. Sci. 2019, 46, 45–55.
13. Defrenne, Y.; Zhdankin, V.; Ramanna, S.; Ramaswamy, S.; Ramarao, B.V. The dual phase moisture conductivity of fibrous materials using random walk techniques in X-ray microcomputed tomographic structures. Chem. Eng. Sci. 2018, 195, 565–577.
14. Zhang, Z.Z.; Julaiti, A.; Hou, B.Y.; Zhang, H.J.; Chen, G.R. Mean first-passage time for random walks on undirected networks. Eur. Phys. J. B—Condens. Matter 2011, 84, 691–697.
15. Lin, Y.; Zhang, Z.Z. Random walks in weighted networks with a perfect trap: An application of laplacian spectra. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2013, 87, 062140.
16. Chelali, M.; Kurtz, C.; Puissant, A.; Vincent, N. From pixels to random walk based segments for image time series deep classification. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Zhongshan, China, 19–23 October 2020.
17. Baum, D.; Weaver, J.C.; Zlotnikov, I.; Knötel, D.; Tomholt, L.; Dean, M.N. High-throughput segmentation of tiled biological structures using random walk distance transforms. Integr. Comp. Biol. 2019, 59, 1700–1712.
18. Zhang, K.; Gui, H.; Luo, Z.; Li, D. Matching for navigation map building for automated guided robot based on laser navigation without a reflector. Ind. Robot 2019, 46, 17–30.
19. Xie, P.C.; Lin, Y.; Zhang, Z.Z. Spectrum of walk matrix for Koch network and its application. J. Chem. Phys. 2015, 142, 175.
20. Zhang, Z.Z.; Gao, S.Y.; Xie, W.L. Impact of degree heterogeneity on the behavior of trapping in Koch networks. Chaos Interdiscip. J. Nonlinear Sci. 2010, 20, 47.
21. Dai, M.F.; Chen, D.D.; Dong, Y.J.; Liu, J. Scaling of average receiving time and average weighted shortest path on weighted Koch networks. Phys. A Stat. Mech. Its Appl. 2012, 391, 6165–6173.
22. Hou, B.Y.; Zhang, H.J.; Liu, L. Expanded Koch networks: Structure and trapping time of random walks. Eur. Phys. J. B 2013, 64, 156.
23. Ye, D.D.; Dai, M.F.; Sun, Y.Q.; Shao, S.; Xie, Q. Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk. Phys. A Stat. Mech. Its Appl. 2016, 458, 1–8.
24. Del Genio, C.I.; Gross, T.; Bassler, K.E. All scale-free networks are sparse. Phys. Rev. Lett. 2011, 107, 178701.
25. Barabási, A.L. Scale-free network. Sci. Am. 2003, 288, 60–69.
26. Kaiser, M. Mean clustering coefficient-on clustering measures for small-world networks. New J. Phys. 2008, 10, 083042.
27. Carletti, T.; Righi, S. Weighted Fractal Networks. Phys. A Stat. Mech. Its Appl. 2009, 389, 2134–2142.
28. Sun, Y.; Dai, M.F.; Xi, L.F. Scaling of average weighted shortest path and average receiving time on weighted hierarchical networks. Phys. A Stat. Mech. Its Appl. 2014, 407, 110–118.
Figure 1. Iterative rule of the generalized weighted Koch network.
Figure 2. The network $G_{s,r}^A(t)$ at the first three time steps when $s = 5$ and $r = 1$.
Figure 3. The network $G_{s,r}^B(t)$ at the first three time steps when $s = 6$ and $r = 1$.
Figure 4. The positional relationship between node v and its neighbors.
Table 1. The degree spectrum of $G_{s,r}^A(t)$ and $G_{s,r}^B(t)$.

| t | $k_i^A$ | $n^A(k_i^A)$ | $k_i^B$ | $n^B(k_i^B)$ |
| 0 | 2 | $s(s-1)(s+1)^{t-1}$ | $s-1$ | $s(s-1)(s+1)^{t-1}$ |
| 1 | $2\times 2$ | $s(s-1)(s+1)^{t-2}$ | $2(s-1)$ | $s(s-1)(s+1)^{t-2}$ |
| $t_i$ | $2\times 2^{t_i}$ | $s(s-1)(s+1)^{t-t_i-1}$ | $2^{t_i}(s-1)$ | $s(s-1)(s+1)^{t-t_i-1}$ |
| $t-1$ | $2\times 2^{t-1}$ | $s(s-1)(s+1)^{0}$ | $2^{t-1}(s-1)$ | $s(s-1)(s+1)^{0}$ |
| $t$ | $2\times 2^{t}$ | $s$ | $2^{t}(s-1)$ | $s$ |

Table 2. The local clustering coefficient of nodes in network $G_{s,r}^B(t)$.

| t | $k_i^B$ | $c(k_i^B)$ | $n^B(k_i^B)$ |
| 0 | $s-1$ | $1$ | $s(s-1)(s+1)^{t-1}$ |
| 1 | $2(s-1)$ | $\frac{s-2}{2(s-1)-1}$ | $s(s-1)(s+1)^{t-2}$ |
| $t_i$ | $2^{t_i}(s-1)$ | $\frac{s-2}{2^{t_i}(s-1)-1}$ | $s(s-1)(s+1)^{t-t_i-1}$ |
| $t-1$ | $2^{t-1}(s-1)$ | $\frac{s-2}{2^{t-1}(s-1)-1}$ | $s(s-1)(s+1)^{0}$ |
| $t$ | $2^{t}(s-1)$ | $\frac{s-2}{2^{t}(s-1)-1}$ | $s$ |

