Article

On the Exiting Patterns of Multivariate Renewal-Reward Processes with an Application to Stochastic Networks

Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA
Symmetry 2022, 14(6), 1167; https://doi.org/10.3390/sym14061167
Submission received: 20 May 2022 / Revised: 1 June 2022 / Accepted: 3 June 2022 / Published: 6 June 2022
(This article belongs to the Special Issue Stochastic Analysis with Applications and Symmetry)

Abstract

This article is a study of vector-valued renewal-reward processes on ℝ^d. The jumps of the process are assumed to be independent and identically distributed nonnegative random vectors with mutually dependent components, each of which may be either discrete or continuous (so a single jump vector may mix discrete and continuous components). Each component of the process has a fixed threshold. Operational calculus techniques and symmetries with respect to permutations are used to find a general result for the probability of an arbitrary weak ordering of threshold crossings. The analytic and numerical tractability of the result is demonstrated by an application to the reliability of stochastic networks and some other special cases. Results are shown to agree with empirical probabilities generated through simulation of the process.

1. Introduction

Analysis of sums of independent random variables falls into the classical theory of fluctuations, which has been widely studied, from fluctuations about thresholds [1,2,3,4] to ruin times to limiting distributions [5,6] and bounds [7], but less frequent are studies of sums of independent random vectors. Most development is in the areas of limit theorems and other asymptotic results [8,9,10,11].
Rarer still are studies of the threshold crossings of sums of independent random vectors, which may be considered in the context of multidimensional renewal processes [12,13] or random walks [14,15]. Most of these tend to focus on exit times [16,17,18,19,20] and overshoots of the boundary [21,22]. These latter ideas are commonly studied in the context of broader Lévy processes as well [23,24,25], but frequently only in one dimension. Another recent and interesting work [26] provides some analysis of fluctuations of the more general idea of spectrally positive additive Lévy fields.
Studies in these areas have broad applications, including works applied to insurance [27,28,29], finance [30,31,32], stochastic games [33,34], queueing theory [35,36,37], and reliability theory [38,39], among other fields.

1.1. Goals and Structure of the Paper

This paper focuses on a different but related question: if there is a threshold in each dimension, sums of i.i.d. nonnegative random vectors (not identically zero) will almost surely cross them all eventually, but what is the probability that the thresholds are crossed in a specific (weak) order?
After giving the mathematical setting for the problem in Section 1.2 and establishing some lemmas in Section 2, we derive a formula for the probabilities through operational calculus techniques and symmetries with respect to permutations in Section 3 to answer this question. In Section 4, the formulas are shown to be analytically tractable in an application to the reliability of stochastic networks and numerically tractable in a three-dimensional problem, and all are shown to agree with simulated results for special cases. In Section 5, we describe how the results herein fall into a wider project and should contribute to some forthcoming results.

1.2. The Mathematical Setting of the Problem

On a probability space (Ω, F(Ω), P), consider a sequence of i.i.d. nonnegative real random vectors X^{[1]}, X^{[2]}, … with P(X^{[j]} ≠ 0) > 0, such that each X^{[j]} = (X_1^{[j]}, …, X_d^{[j]}) has a common Laplace–Stieltjes transform
γ(x) = E[e^{−x·X^{[1]}}],
where x ∈ ℂ^d is such that each component of x has a nonnegative real part. We also assume P(X_k^{[j]} > 1) > 0 for each k.
Note that while the random vectors X^{[1]}, X^{[2]}, … are independent, we make no such assumption on the components of X^{[n]}, so X_1^{[n]}, …, X_d^{[n]} may have some arbitrary dependency structure.
Consider a renewal-reward process
A(t) = Σ_{j=1}^{∞} X^{[j]} 1_{τ_j ≤ t},
where { τ n } forms a renewal process of jump times. Since we are concerned only with the crossing order, studying the renewal-reward process directly is unnecessary, so we can focus on the embedded discrete-time stochastic process
A^{[n]} = Σ_{j=1}^{n} X^{[j]},  n ∈ ℕ,
with the marginal processes
A_k^{[n]} = Σ_{j=1}^{n} X_k^{[j]},  n ∈ ℕ,  k ∈ {1, …, d},
and the approach and crossing of fixed thresholds in each coordinate.
In other words, we are interested in the first n such that A_k^{[n]} > M_k for each k, i.e., the first time each coordinate crosses its fixed threshold M_k ∈ (0, ∞).
Denote the first crossing index of the threshold of the kth coordinate as
ν_k = inf{n : A_k^{[n]} > M_k}.
Each ν_k can be viewed as the first time when the process A^{[n]} exits ℝ^{k−1} × [0, M_k) × ℝ^{d−k}.
We will derive the probability of an event W, a fixed weak ordering of the threshold crossings ν 1 , …, ν d . In other words, W is an element of the following measurable partition of the sample space Ω ,
𝒲 = { {ν_{p(1)} ⋄_1 ν_{p(2)} ⋄_2 ⋯ ⋄_{d−1} ν_{p(d)}} : p is a permutation } ⊆ F(Ω),
where, in each event, each relation ⋄_j is fixed to be < or = for some fixed permutation p of the dimensions {1, …, d}.
We will assume that p is the identity permutation, since the probability P(W) will be shown in the main result of the paper, Theorem 1, to be symmetric with respect to permutations of the threshold crossings. Hence, the results trivially follow for an arbitrary weak ordering.
It should be noted that deriving results about the crossing times, positions, or excess levels of the underlying renewal-reward process would require further information about the jump times { τ n } . However, much prior scholarship by Takács [3,4] classically, and by Dshalalow and his collaborators over the past few decades, demonstrates that results about crossing weak orders of the embedded discrete-time process easily extend to results about the crossing times and position of renewal-reward processes [12], as well as monotone [14,30] and even oscillating [15] marked random walks. However, all such prior works were limited to, at most, four dimensions. The present work generalises these ideas to arbitrary finite dimension.

2. Preliminary Results

We need to establish two lemmas before proving the main result. First, a simple algebraic lemma is established.
Lemma 1.
For a vector y = (y_1, …, y_d) ∈ ℂ^d,
∏_{k=1}^{d} (1 − y_k) = Σ_{k=0}^{d} (−1)^k f_y(d, k),
where
f_y(d, k) = Σ_{N ⊆ {1,…,d}, |N| = k} ∏_{j=1}^{k} y_{n_j},
the sum runs over all k-subsets of {1, …, d}, each denoted N = {n_1, …, n_k}, and we define f_y(d, 0) = 1.
Proof. 
We will first establish some properties of the function f y ( n , k ) and then prove the lemma via induction on d.
Notice that f_y(n−1, k−1) is the sum of all (k−1)-factor products from {y_1, …, y_{n−1}}, so, if this is multiplied by y_n, we have the sum of all k-factor products from {y_1, …, y_n} including y_n as a factor. Note that f_y(n−1, k) is the sum of all k-factor products from {y_1, …, y_{n−1}} or, equivalently, the sum of all k-factor products of terms from {y_1, …, y_n} not including y_n as a factor. Adding them,
f_y(n−1, k) + y_n f_y(n−1, k−1) = f_y(n, k).
Further, since there is exactly one product of n distinct terms from { y 1 , , y n } , we have
f_y(n, n) = y_1 ⋯ y_n.
Let d = 1; then, the left side of (7) is 1 − y_1, while the right side is
Σ_{k=0}^{1} (−1)^k f_y(1, k) = f_y(1, 0) − f_y(1, 1) = 1 − y_1.
Suppose that, for d = n − 1, it is true that
∏_{j=1}^{n−1} (1 − y_j) = Σ_{k=0}^{n−1} (−1)^k f_y(n−1, k);
then, for d = n, we have
∏_{j=1}^{n} (1 − y_j) = (1 − y_n) Σ_{k=0}^{n−1} (−1)^k f_y(n−1, k) = Σ_{k=0}^{n−1} (−1)^k f_y(n−1, k) + Σ_{k=0}^{n−1} (−1)^{k+1} f_y(n−1, k) y_n.
Rearranging the sums,
∏_{j=1}^{n} (1 − y_j) = 1 + Σ_{k=1}^{n−1} (−1)^k f_y(n−1, k) + Σ_{k=1}^{n} (−1)^k y_n f_y(n−1, k−1) = 1 + Σ_{k=1}^{n−1} (−1)^k [f_y(n−1, k) + y_n f_y(n−1, k−1)] + (−1)^n y_n f_y(n−1, n−1).
By (8) and (9) and the fact that f_y(d, 0) = 1,
∏_{j=1}^{n} (1 − y_j) = (−1)^0 f_y(n, 0) + Σ_{k=1}^{n−1} (−1)^k f_y(n, k) + (−1)^n f_y(n, n) = Σ_{k=0}^{n} (−1)^k f_y(n, k).
Thus, the result of the lemma is true by induction. □
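The identity (7) is also easy to spot-check numerically; a minimal Python sketch, for illustration only (the function `elem_sym` computes f_y(d, k) by brute force over all k-subsets):

```python
import itertools
import math

def elem_sym(y, k):
    # f_y(d, k): sum over all k-subsets of the products of the chosen entries
    return sum(math.prod(c) for c in itertools.combinations(y, k))

def product_side(y):
    # left side of (7)
    return math.prod(1 - v for v in y)

def alternating_side(y):
    # right side of (7)
    return sum((-1) ** k * elem_sym(y, k) for k in range(len(y) + 1))

y = [0.3, -1.2, 0.5, 2.0]  # arbitrary test vector
```

The two sides agree to floating-point precision for any choice of y, including the base case d = 1.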
Second, we establish a minor sufficient condition under which |γ(x)| < 1, where γ is the joint Laplace–Stieltjes transform of the i.i.d. random vectors X^{[n]} introduced in (1). This is required later for summing γ(x)^j as a geometric series.
Lemma 2.
If any component of x has a positive real part, then |γ(x)| < 1.
Proof. 
Note that
|γ(x)| = |∫_Ω e^{−x·X^{[1]}} dP| ≤ ∫_Ω |e^{−x_1 X_1^{[1]} − ⋯ − x_d X_d^{[1]}}| dP = ∫_Ω e^{−Re(x_1) X_1^{[1]}} ⋯ e^{−Re(x_d) X_d^{[1]}} dP.
Since each X_j^{[1]} is real and nonnegative and each Re(x_j) ≥ 0 by assumption, each exponential term is in (0, 1], so, for any j ≤ d, this implies
|γ(x)| ≤ ∫_Ω e^{−Re(x_j) X_j^{[1]}} dP.
Suppose that we partition Ω into B = {X_j^{[1]} > 1} and B^C = {X_j^{[1]} ≤ 1}; then,
|γ(x)| ≤ ∫_B e^{−Re(x_j) X_j^{[1]}} dP + ∫_{B^C} e^{−Re(x_j) X_j^{[1]}} dP.
If X_j^{[1]} > 1, then e^{−Re(x_j) X_j^{[1]}} ≤ e^{−Re(x_j)}, so we have
|γ(x)| ≤ e^{−Re(x_j)} ∫_B dP + ∫_{B^C} dP = e^{−Re(x_j)} P(B) + P(B^C).
If Re(x_j) = 0, this result implies merely |γ(x)| ≤ 1, but if Re(x_j) > 0, it implies |γ(x)| < 1, recalling P(B^C) < 1 by assumption. Since j was arbitrary, we have |γ(x)| < 1 if at least one component of x has a positive real part. □
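As a quick numerical illustration of the lemma, take a jump vector with independent exponential components, an assumed special case for which γ(x) = ∏_k μ_k/(μ_k + x_k); a single component of x with positive real part already forces |γ(x)| < 1:

```python
# Illustration of Lemma 2 in the assumed special case of a jump vector with
# independent Exp(mu_k) components: gamma(x) = prod_k mu_k / (mu_k + x_k).
def gamma_exp(x, mus):
    g = complex(1.0)
    for m, xk in zip(mus, x):
        g *= m / (m + xk)
    return g

# Re(x_1) = 0.5 > 0 while Re(x_2) = 0; the lemma still gives |gamma(x)| < 1.
x = (0.5 + 2.0j, 1.0j)
modulus = abs(gamma_exp(x, (1.0, 2.0)))
```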
With these preliminary results established, we continue towards proving the main result of the paper.

3. The Main Result

In this section, we derive P(W) for an arbitrary weak ordering W ∈ 𝒲. Regardless of W, the exit indices ν_1, …, ν_d must occur at some n ≤ d distinct times, since some may occur simultaneously.
Without loss of generality, we can suppose that the groups of simultaneous crossing indices are
ν_1 = ⋯ = ν_{s_1},  ν_{s_1+1} = ⋯ = ν_{s_2},  …,  ν_{s_{n−1}+1} = ⋯ = ν_{s_n},
with ν_1 < ν_{s_1+1} < ⋯ < ν_{s_{n−1}+1}. The s_k values are, then, the numbers of crossing indices in the first k groups, with s_0 = 0 and s_n = d. (Note that the subscripts may actually occur with some permutation p of the given subscripts, so we may easily replace ν_j with ν_{p(j)} given p.)
Further, denote r_k = s_k − s_{k−1}, the number of crossing indices in the kth group. Then, we can represent an arbitrary weak ordering W as
W = {ν_1 = ⋯ = ν_{s_1} < ν_{s_1+1} = ⋯ = ν_{s_2} < ⋯ < ν_{s_{n−1}+1} = ⋯ = ν_d} ∈ F(Ω).
We will partition W into events of the form
W(q, j) = {ν_1(q_1) = ⋯ = ν_{s_1}(q_{s_1}) = j_1 < ν_{s_1+1}(q_{s_1+1}) = ⋯ = ν_{s_2}(q_{s_2}) = j_2 < ⋯ < ν_{s_{n−1}+1}(q_{s_{n−1}+1}) = ⋯ = ν_d(q_d) = j_n},
where j is such that j_1 < ⋯ < j_n, and q = (q_1, …, q_d) ∈ ℝ_{≥0}^d is a vector of thresholds in each dimension to be used during some intermediate steps of the upcoming proofs, with ν_k(q_k) = inf{n : A_k^{[n]} > q_k}. As such, the threshold function evaluated at the true threshold, ν_k(M_k), equals the true ν_k for each k = 1, …, d.
To be precise, an operator to be defined next will be applied to P(W(q, j)) for an arbitrary fixed q vector and summed over all permissible j vectors to derive the image under the operator of P(W(q)), where
W(q) = {ν_1(q_1) = ⋯ = ν_{s_1}(q_{s_1}) < ν_{s_1+1}(q_{s_1+1}) = ⋯ = ν_{s_2}(q_{s_2}) < ⋯ < ν_{s_{n−1}+1}(q_{s_{n−1}+1}) = ⋯ = ν_d(q_d)}.
The inverse operator will then be applied at the true threshold vector q = (M_1, …, M_d) to reconstruct the desired probability P(W).
We will use an operator defined as
H_q(·)(x) = H^1_{q_1} H^2_{q_2} ⋯ H^d_{q_d}(·)(x_1, …, x_d),
which is a composition of d operators, one for each component of the process A^{[n]}. The operators take one of two forms, depending on whether the corresponding component is discrete or continuous,
H^k_{q_k}(·)(x_k) = LC_{q_k}(·)(x_k), if each X_k^{[n]} is continuous and valued in (0, ∞); H^k_{q_k}(·)(x_k) = D_{q_k}(·)(x_k), if each X_k^{[n]} is discrete and valued in ℕ,
where LC is the Laplace–Carson transform,
LC_{q_k}(f(q_k))(x_k) = x_k ∫_{q_k=0}^{∞} e^{−x_k q_k} f(q_k) dq_k,
defined for nonnegative measurable functions f and x_k > 0. D is defined as
D_{q_k}(f(q_k))(x_k) = (1 − x_k) Σ_{q_k=0}^{∞} x_k^{q_k} f(q_k)
for functions f and |x_k| < 1. This operator has the inverse
D^{q_k−1}_{x_k}(·) = lim_{x_k→0} (1/(q_k − 1)!) ∂^{q_k−1}/∂x_k^{q_k−1} [ (1/(1 − x_k)) (·) ].
Since each operator can reconstruct f(q_k), note that using q_k = M_k will allow us to reconstruct f evaluated at M_k. This is important for the main result of the paper, Theorem 1, so that we can derive the probabilities P(W) for the proper threshold values M_1, …, M_d.
We see that D is similar to a z-transform and has been used in much the same way for discrete problems as variants of the Laplace transform have been used for continuous problems in stochastic processes. Such operators are used frequently in, for example, many earlier works of Takács [3,4,35] and of Dshalalow and his collaborators [12,40].
Before deriving P ( W ) , we need to establish one last lemma.
Lemma 3.
For an event W ∈ 𝒲, suppose that ν_i = j_k for each i = s_{k−1}+1, s_{k−1}+2, …, s_k; then,
H_q(1_{W(q,j)})(x) = ∏_{k=1}^{n} ∏_{m=1}^{r_k} ( z_{k_m}^{A_{k_m}^{[j_k−1]}} − z_{k_m}^{A_{k_m}^{[j_k]}} ),
where
k_m = s_{k−1} + m
and
z_k = e^{−x_k} if A_k^{[n]} is continuous, and z_k = x_k if A_k^{[n]} is discrete.
Proof. 
If a component is continuous, note simply that
LC_q(1_{ν_k(q)=j})(x) = x ∫_0^∞ e^{−xq} 1_{ν_k(q)=j} dq = x ∫_{A_k^{[j−1]}}^{A_k^{[j]}} e^{−xq} dq = e^{−x A_k^{[j−1]}} − e^{−x A_k^{[j]}}.
If a component is discrete, similarly, we can sum as two partial geometric series,
D_q(1_{ν_k(q)=j})(x) = (1 − x) Σ_{q=0}^{∞} x^q 1_{ν_k(q)=j} = (1 − x) Σ_{q=A_k^{[j−1]}}^{A_k^{[j]}−1} x^q = (1 − x) [ (1 − x^{A_k^{[j]}})/(1 − x) − (1 − x^{A_k^{[j−1]}})/(1 − x) ] = x^{A_k^{[j−1]}} − x^{A_k^{[j]}}.
Then, notice that if we apply d operators of the appropriate types to 1_W, the result is a product of d terms similar to (21) or (22), which, after carefully handling the indices and notation, leads to (18). □
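The discrete computation (22) can be confirmed by direct summation; a small sketch with a concrete, hypothetical pair of cumulative values A_k^{[j−1]} and A_k^{[j]}, chosen only for illustration:

```python
# nu_k(q) = j exactly when A_k^{[j-1]} <= q < A_k^{[j]} for an integer threshold q,
# so D_q(1{nu_k(q) = j})(x) should equal x^{A[j-1]} - x^{A[j]}.
A_prev, A_next = 3, 7   # illustrative values of A_k^{[j-1]} and A_k^{[j]}
x = 0.4                 # any |x| < 1
direct = (1 - x) * sum(x ** q for q in range(A_prev, A_next))
closed_form = x ** A_prev - x ** A_next
```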
Denote S_j = {k ∈ ℕ : s_{j−1} < k ≤ s_j} for j = 1, 2, …, n, with S_{n+1} = Ø, and T_j = ∪_{m=j}^{n} S_m. Further, for B ⊆ ℕ_d = {1, …, d} and each i ≤ d, define x^B = (x_1^B, …, x_d^B), where x_i^B = 1_B(i) x_i, and denote
γ^B = γ(x^B),
so that we may refer to the joint transform of a specific subset of the random variables making up the random jump vectors X^{[n]}.
Given the lemmas above, we may derive P(W) as an expression in terms of the inverse of the H_q operator and the common joint transform γ of the random vectors X^{[n]}, as follows.
Theorem 1.
If each vector x^{T_j} contains at least one component with a positive real part, then, for each W ∈ 𝒲,
P(W) = E[1_W] = H^{−1}_x [ ∏_{m=1}^{n} (1/(1 − γ^{T_m})) Σ_{k=0}^{r_m} (−1)^k Σ_{N ⊆ S_m, |N|=k} γ^{T_{m+1} ∪ N} ] (M).
Proof. 
Without significant loss of generality, assume that A^{[n]} is continuous in each component. A remark after the proof will show how the result generalises to accommodate processes with some discrete components. To find P(W), we merely need to sum the probabilities of the events W(q, j), which partition W, as follows:
P(W) = E[1_W] = Σ_{j_1=1}^{∞} Σ_{j_2=j_1+1}^{∞} ⋯ Σ_{j_n=j_{n−1}+1}^{∞} E[1_{W(q,j)}].
Next, apply the H_q operator, which bypasses all terms except the q-dependent indicator,
H_q(E[1_{W(q)}])(x) = Σ_{j_1=1}^{∞} Σ_{j_2=j_1+1}^{∞} ⋯ Σ_{j_n=j_{n−1}+1}^{∞} E[ H_q(1_{W(q,j)})(x) ].
Then, by Lemma 3, we have
H_q(E[1_{W(q)}])(x) = Σ_{j_1=1}^{∞} Σ_{j_2=j_1+1}^{∞} ⋯ Σ_{j_n=j_{n−1}+1}^{∞} E[ ∏_{k=1}^{n} ∏_{m=1}^{r_k} e^{−x_{k_m} A_{k_m}^{[j_k−1]}} (1 − e^{−x_{k_m} X_{k_m}^{[j_k]}}) ].
Since the X^{[j]} vectors are i.i.d.,
H_q(E[1_{W(q)}])(x)
= Σ_{j_1=1}^{∞} E[e^{−x·A^{[j_1−1]}}] E[ e^{−x^{T_2}·X^{[j_1]}} ∏_{i=1}^{r_1} (1 − e^{−x_i X_i^{[j_1]}}) ]
× Σ_{j_2=j_1+1}^{∞} E[e^{−x^{T_2}·(X^{[j_1+1]} + ⋯ + X^{[j_2−1]})}] E[ e^{−x^{T_3}·X^{[j_2]}} ∏_{i=1}^{r_2} (1 − e^{−x_{2_i} X_{2_i}^{[j_2]}}) ] × ⋯
× Σ_{j_n=j_{n−1}+1}^{∞} E[e^{−x^{T_n}·(X^{[j_{n−1}+1]} + ⋯ + X^{[j_n−1]})}] E[ ∏_{i=1}^{r_n} (1 − e^{−x_{n_i} X_{n_i}^{[j_n]}}) ].
Notice that the expression above has n nested infinite series, which fall into two types. First, in (28) and (29), the first n − 1 series are identical except for the subscripts. The last series, in (30), is slightly different because T_{n+1} = Ø. We will consider (28) in detail, and the other two cases follow by nearly the same argument.
Notice that (28) has two expectations. Since the X^{[j]} vectors are i.i.d., the first expectation simplifies to (γ^{T_1})^{j_1−1}, yielding
Σ_{j_1=1}^{∞} (γ^{T_1})^{j_1−1} E[ e^{−x^{T_2}·X^{[j_1]}} ∏_{i=1}^{r_1} (1 − e^{−x_i X_i^{[j_1]}}) ].
If we set y_i = e^{−x_i X_i^{[j_1]}}, Lemma 1 implies that the above may be written as
Σ_{j_1=1}^{∞} (γ^{T_1})^{j_1−1} E[ e^{−x^{T_2}·X^{[j_1]}} Σ_{k=0}^{r_1} (−1)^k Σ_{N ⊆ S_1, |N|=k} e^{−x^N·X^{[j_1]}} ],
which may be simplified as
Σ_{j_1=1}^{∞} (γ^{T_1})^{j_1−1} Σ_{k=0}^{r_1} (−1)^k Σ_{N ⊆ S_1, |N|=k} E[e^{−x^{T_2 ∪ N}·X^{[j_1]}}] = Σ_{j_1=1}^{∞} (γ^{T_1})^{j_1−1} Σ_{k=0}^{r_1} (−1)^k Σ_{N ⊆ S_1, |N|=k} γ^{T_2 ∪ N}.
Note that the last sum is constant with respect to j_1. Since x^{T_1} has a component with a positive real part, Lemma 2 implies |γ^{T_1}| < 1, so the infinite series is a convergent geometric series, and the above simplifies to
(1/(1 − γ^{T_1})) Σ_{k=0}^{r_1} (−1)^k Σ_{N ⊆ S_1, |N|=k} γ^{T_2 ∪ N}.
Through a very similar argument but with different indices, the terms of the form (29) and (30) simplify to
(1/(1 − γ^{T_m})) Σ_{k=0}^{r_m} (−1)^k Σ_{N ⊆ S_m, |N|=k} γ^{T_{m+1} ∪ N}.
Plugging (34) and (35) into (28)–(30) gives
H_q(E[1_{W(q)}])(x) = ∏_{m=1}^{n} (1/(1 − γ^{T_m})) Σ_{k=0}^{r_m} (−1)^k Σ_{N ⊆ S_m, |N|=k} γ^{T_{m+1} ∪ N}.
Applying the inverse operator H^{−1}_x evaluated at q = M = (M_1, …, M_d) yields (24), the claim of the theorem. □
Note that, without exploiting the symmetry of the argument above with respect to permutations, the proof would have been infeasible since one would have to consider each permutation of threshold crossing indices and render a separate argument. Since the number of permutations grows factorially with dimension, prior multivariate results were limited to d = 3 [12] or at most d = 4 [30].
Remark 1.
For convenience, the proof above was rendered under the assumption that the components of A^{[n]} are continuous-valued. However, for any discrete component of the process, A_k^{[n]}, replacing the corresponding x_k with −ln x_k and replacing x_k^B with
x_k^B = x_k if k ∈ B, and x_k^B = 1 if k ∉ B,
and maintaining all other steps of the proof yields the same result.
Example 1.
Suppose that W = {ν_1 = ν_2 < ν_3 = ν_4 < ν_5}; then, we have
S_1 = {1, 2},  S_2 = {3, 4},  S_3 = {5},
so r_1 = r_2 = 2 and r_3 = 1. Then, Theorem 1 implies
P(W) = H^{−1}_x [ (γ^{{3,4,5}} − γ^{{1,3,4,5}} − γ^{{2,3,4,5}} + γ)/(1 − γ) · (γ^{{5}} − γ^{{3,5}} − γ^{{4,5}} + γ^{{3,4,5}})/(1 − γ^{{3,4,5}}) ] (M).
If γ is known, the operators may be inverted one-by-one to find a tractable expression for the probability, as we demonstrate with several models in the next section.
Suppose a similar weak ordering W where the permutation (in two-line notation)
p = (1 2 3 4 5 ; 3 5 1 4 2)
has been applied to the threshold numbers. Then, the weak ordering is
W = {ν_3 = ν_5 < ν_1 = ν_4 < ν_2}.
By Theorem 1 and referring to (39), the result is trivially found to be
P(W) = H^{−1}_x [ (γ^{{1,4,2}} − γ^{{3,1,4,2}} − γ^{{5,1,4,2}} + γ)/(1 − γ) · (γ^{{2}} − γ^{{1,2}} − γ^{{4,2}} + γ^{{1,4,2}})/(1 − γ^{{1,4,2}}) ] (M).
The symmetry of the result with respect to permutations makes this the same as (39) but for the application of the permutation p to the dimension numbers.

4. Applications and Special Case Results

In this section, we apply Theorem 1 to find probabilities of weak orderings of threshold crossings in a problem associated with the reliability of stochastic networks, as well as two special cases of random vectors with exponential distributions.
Any exact Laplace transform inversions in this section are computed using sequences of tabulated results from Supplement 5 of [41] or with Wolfram Mathematica. In Section 4.3, some results require numerical inverse Laplace transforms rendered via methods developed from the Talbot algorithm [42]. In particular, we use McClure's MATLAB implementation [43], following the unified framework of Abate and Whitt [44].
Inverse D transforms will be evaluated in exact form using properties derived by Dshalalow and summarized in Appendix B of [45]. Many of these properties have been used to invert the D operator in works from Dshalalow and his collaborators [36,46,47].
In each application and special case below, the A^{[n]} processes are further simulated in MATLAB to find empirical probabilities, which are shown to match the numerical and analytic results derived from inverting the sequences of operators.
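The simulations follow the embedded process directly; a short Python sketch of the d = 2 case, for illustration (the MATLAB code itself is not reproduced here, and `sample_jump` is a user-supplied sampler for the jump vectors):

```python
import random

def empirical_orders(sample_jump, M1, M2, trials=20000, seed=7):
    """Estimate P(nu1 < nu2), P(nu1 = nu2), P(nu1 > nu2) by simulating A^[n]."""
    rng = random.Random(seed)
    lt = eq = gt = 0
    for _ in range(trials):
        a1 = a2 = 0.0
        nu1 = nu2 = None
        n = 0
        while nu1 is None or nu2 is None:
            n += 1
            x1, x2 = sample_jump(rng)
            a1 += x1
            a2 += x2
            if nu1 is None and a1 > M1:
                nu1 = n
            if nu2 is None and a2 > M2:
                nu2 = n
        if nu1 < nu2:
            lt += 1
        elif nu1 == nu2:
            eq += 1
        else:
            gt += 1
    return lt / trials, eq / trials, gt / trials

# e.g., i.i.d. pairs of independent Exp(1) jump components:
probs = empirical_orders(lambda rng: (rng.expovariate(1.0), rng.expovariate(1.0)), 3.0, 3.0)
```

With symmetric parameters and thresholds, the first and third estimates agree up to sampling error, and simultaneous crossings occur with positive probability.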

4.1. Application to Stochastic Networks

Some prior work [38,39,45] studies processes that model stochastic networks under attack, where successive batches of nodes of random size X_1^{[j]} are incapacitated at the points of a Poisson point process on [0, ∞), and where each node has a random weight Y_k measuring its value to the network. In these works, most attention was given to a ruin time at which the cumulative node loss crosses a threshold M_1 or the cumulative weight loss crosses another threshold M_2, whichever comes first. This ruin time represents the network entering a critical state of interest. Probabilistic information about this time, and about the extent of different types of damage that should be anticipated in the near term, can give security professionals earlier warnings of attacks and help them to differentiate malicious attacks from benign failures on the network.
While understanding the timing and extent of critical damage is important, it leaves network administrators with more new questions than answers: What should we do about different attack patterns? Where are our weaknesses?
What the present work can address is the probability of each ordering of failures, which indicates the more vulnerable aspect of the network, whether nodes or weights, and which can suggest strategies for improving reliability.
To model the situation, suppose that each jump in the process is a vector consisting of the batch size X_1^{[j]} and the sum of the weights of the incapacitated nodes, X_2^{[j]}. Since each node weight is Y_k,
X_2^{[j]} = Σ_{k=1}^{X_1^{[j]}} Y_k,
where we assume that the node batch sizes X 1 [ j ] are i.i.d. with common probability-generating function g and the weights Y k are i.i.d. with common Laplace–Stieltjes transform l.
We will seek to apply Theorem 1 to find the probabilities of each weak ordering of threshold crossings, which requires knowledge of γ, which we can find by double expectation,
γ(−ln x_1, x_2) = E[ x_1^{X_1^{[1]}} e^{−x_2 X_2^{[1]}} ] = E[ x_1^{X_1^{[1]}} E[ e^{−x_2 X_2^{[1]}} | X_1^{[1]} ] ] = g(x_1 l(x_2)).
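The double-expectation identity above can be spot-checked by simulation; a sketch under the geometric and exponential assumptions of the upcoming corollaries (parameter values here are arbitrary choices for illustration):

```python
import math
import random

def g(z, p):
    # PGF of a geometric batch size on {1, 2, ...}: P(X = k) = p(1-p)^{k-1}
    return p * z / (1 - (1 - p) * z)

def l(s, mu):
    # Laplace-Stieltjes transform of an Exp(mu) node weight
    return mu / (mu + s)

def mc_joint_transform(x1, x2, p, mu, trials=200000, seed=5):
    # Monte Carlo estimate of E[x1^{X_1} e^{-x2 X_2}], with X_2 a sum of X_1 weights
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        batch = int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1
        weights = rng.gammavariate(batch, 1.0 / mu)  # sum of `batch` Exp(mu) draws
        total += x1 ** batch * math.exp(-x2 * weights)
    return total / trials

estimate = mc_joint_transform(0.7, 0.9, 0.5, 1.0)
exact = g(0.7 * l(0.9, 1.0), 0.5)
```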
Corollary 1.
If node batch sizes are geometrically distributed with parameter p and node weights are exponentially distributed with parameter μ, then
P(ν_1 < ν_2) = P(M_1 − 1, μM_2) − e^{−pμM_2} (1 − p)^{−(M_1 − 1)} P(M_1 − 1, (1 − p)μM_2),
where P(a, x) = (1/Γ(a)) ∫_0^x t^{a−1} e^{−t} dt is the lower regularised gamma function.
Proof. 
By Theorem 1,
P(ν_1 < ν_2) = H^{−1}_x [ (γ((1, x_2)) − γ(x))/(1 − γ(x)) ] (M) = H^{−1}_x [ (g(l(x_2)) − g(x_1 l(x_2)))/(1 − g(x_1 l(x_2))) ] (M).
Here, H^{−1}_x = LC^{−1}_{x_2} ∘ D^{M_1−1}_{x_1}. To compute the inverse, we first apply the inverse D^{M_1−1}_{x_1}, which is possible when g is specified, and then apply the inverse LC^{−1}_{x_2}(·)(M_2), which is possible when l is specified.
Since each node batch size X_1^{[n]} is geometrically distributed with parameter p, g(z) = pz/(1 − qz) with q = 1 − p, and the term within the inverse simplifies to
(g(l(x_2)) − g(x_1 l(x_2)))/(1 − g(x_1 l(x_2))) = [ p l(x_2)/(1 − q l(x_2)) − p x_1 l(x_2)/(1 − q x_1 l(x_2)) ] / [ 1 − p x_1 l(x_2)/(1 − q x_1 l(x_2)) ] = (p l(x_2)/(1 − q l(x_2))) · (1 − x_1)/(1 − l(x_2) x_1).
The operator D^{M_1−1}_{x_1} has the effect of truncating power series at the (M_1 − 1)th term, and it has the well-known property D^{M_1−1}_{x_1}(x_1 f(x_1)) = D^{M_1−2}_{x_1}(f(x_1)), so when we apply it to the x_1-dependent terms, we have
D^{M_1−1}_{x_1}[ (1 − x_1)/(1 − l(x_2) x_1) ] = D^{M_1−1}_{x_1}[ Σ_{k=0}^{∞} l(x_2)^k x_1^k ] − D^{M_1−2}_{x_1}[ Σ_{k=0}^{∞} l(x_2)^k x_1^k ] = Σ_{k=0}^{M_1−1} l(x_2)^k − Σ_{k=0}^{M_1−2} l(x_2)^k = l(x_2)^{M_1−1}.
Since each Y_k is exponentially distributed with parameter μ, l(z) = μ/(μ + z), so we can apply the inverse Laplace–Carson transform to find
P(ν_1 < ν_2) = L^{−1}_{x_2}[ (1/x_2) (pμ/(pμ + x_2)) (μ/(μ + x_2))^{M_1−1} ](M_2) = P(M_1 − 1, μM_2) − e^{−pμM_2} (1 − p)^{−(M_1 − 1)} P(M_1 − 1, (1 − p)μM_2),
where P(a, x) = (1/Γ(a)) ∫_0^x t^{a−1} e^{−t} dt is the lower regularised gamma function, which can be computed to high precision efficiently. □
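The closed form (44) can be checked against a direct simulation of the network process; a pure-Python sketch (here `p_lower` evaluates P(a, x) for integer a, and the simulation takes the node count to become critical once it reaches M_1, the discrete-crossing convention consistent with the D^{M_1−1}_{x_1} inversion above; this reading is an assumption of the sketch):

```python
import math
import random

def p_lower(a, x):
    # lower regularised gamma function P(a, x) for integer a >= 1
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(a))

def p_nodes_first(p, mu, M1, M2):
    # Eq. (44)
    return (p_lower(M1 - 1, mu * M2)
            - math.exp(-p * mu * M2) * (1 - p) ** (-(M1 - 1))
            * p_lower(M1 - 1, (1 - p) * mu * M2))

def mc_nodes_first(p, mu, M1, M2, trials=50000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a1, a2, n = 0, 0.0, 0
        nu1 = nu2 = None
        while nu1 is None or nu2 is None:
            n += 1
            # geometric batch size on {1, 2, ...} with parameter p
            batch = int(math.log(1.0 - rng.random()) / math.log(1.0 - p)) + 1
            a1 += batch
            a2 += rng.gammavariate(batch, 1.0 / mu)  # sum of `batch` Exp(mu) weights
            if nu1 is None and a1 >= M1:   # discrete crossing at the count M1
                nu1 = n
            if nu2 is None and a2 > M2:
                nu2 = n
        hits += nu1 < nu2
    return hits / trials

closed = p_nodes_first(0.5, 1.0, 5, 4.0)
```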
Corollary 2.
If node batch sizes are geometrically distributed with parameter p and node weights are exponentially distributed with parameter μ, then
P(ν_1 > ν_2) = Q(M_1 − 1, μM_2) − (1 − p)^{M_1−1} e^{pμM_2/(1−p)} Q(M_1 − 1, μM_2/(1 − p)),
where Q(a, x) = (1/Γ(a)) ∫_x^∞ t^{a−1} e^{−t} dt = 1 − P(a, x) is the upper regularised gamma function.
Proof. 
Theorem 1 gives
P(ν_1 > ν_2) = H^{−1}_x [ (γ((x_1, 0)) − γ(x))/(1 − γ(x)) ] (M) = H^{−1}_x [ (g(x_1) − g(x_1 l(x_2)))/(1 − g(x_1 l(x_2))) ] (M).
The term within the inverse simplifies to
(g(x_1) − g(x_1 l(x_2)))/(1 − g(x_1 l(x_2))) = [ p x_1/(1 − q x_1) − p x_1 l(x_2)/(1 − q x_1 l(x_2)) ] / [ 1 − p x_1 l(x_2)/(1 − q x_1 l(x_2)) ] = (1 − l(x_2)) · (p x_1/(1 − q x_1)) · 1/(1 − l(x_2) x_1).
Applying D^{M_1−1}_{x_1} to the x_1-dependent terms,
D^{M_1−1}_{x_1}[ (p x_1/(1 − q x_1)) (1/(1 − l(x_2) x_1)) ] = p D^{M_1−2}_{x_1}[ Σ_{k=0}^{∞} l(x_2)^k x_1^k · 1/(1 − q x_1) ] = p Σ_{k=0}^{M_1−2} l(x_2)^k D^{M_1−2−k}_{x_1}[ Σ_{j=0}^{∞} q^j x_1^j ] = p Σ_{k=0}^{M_1−2} l(x_2)^k Σ_{j=0}^{M_1−2−k} q^j = Σ_{k=0}^{M_1−2} l(x_2)^k (1 − q^{M_1−1−k}).
Then, we can apply the inverse Laplace–Carson transform to find
P(ν_1 > ν_2) = Σ_{k=0}^{M_1−2} (1 − q^{M_1−1−k}) L^{−1}_{x_2}[ (1/x_2) (1 − l(x_2)) l(x_2)^k ](M_2) = Σ_{k=0}^{M_1−2} (1 − q^{M_1−1−k}) L^{−1}_{x_2}[ (1/x_2) (x_2/(μ + x_2)) (μ/(μ + x_2))^k ](M_2) = e^{−μM_2} Σ_{k=0}^{M_1−2} ((μM_2)^k/Γ(k + 1)) (1 − q^{M_1−1−k}) = Q(M_1 − 1, μM_2) − (1 − p)^{M_1−1} e^{pμM_2/(1−p)} Q(M_1 − 1, μM_2/(1 − p)),
where Q(a, x) = (1/Γ(a)) ∫_x^∞ t^{a−1} e^{−t} dt = 1 − P(a, x) is the upper regularised gamma function. □
These two results lead trivially to
P(ν_1 = ν_2) = 1 − P(ν_1 < ν_2) − P(ν_1 > ν_2),
which completes the derivations of the probabilities of each weak ordering in 𝒲.
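Collecting (44), (49), and (54), all three probabilities amount to a few lines of code; a sketch with illustrative parameter values:

```python
import math

def reg_gamma_q(a, x):
    # upper regularised gamma function Q(a, x) for integer a >= 1
    return math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(a))

def weak_order_probs(p, mu, M1, M2):
    # Eqs. (44), (49), and (54), using P(a, x) = 1 - Q(a, x)
    lt = ((1.0 - reg_gamma_q(M1 - 1, mu * M2))
          - math.exp(-p * mu * M2) * (1 - p) ** (-(M1 - 1))
          * (1.0 - reg_gamma_q(M1 - 1, (1 - p) * mu * M2)))
    gt = (reg_gamma_q(M1 - 1, mu * M2)
          - (1 - p) ** (M1 - 1) * math.exp(p * mu * M2 / (1 - p))
          * reg_gamma_q(M1 - 1, mu * M2 / (1 - p)))
    return lt, 1.0 - lt - gt, gt

lt, eq, gt = weak_order_probs(0.5, 1.0, 5, 4.0)  # illustrative parameters
```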
The expressions (44), (49), and (54) are numerically very tractable since the regularised gamma functions can be computed to high numerical precision efficiently. These are computed numerically and compared to simulated probabilities in Figure 1.
Note that, in the first three diagrams, μ = 1 and p = 0.5, so we have
E[X_1^{[n]}] = p/(1 − p) = 1,
and, by the independence of X_1^{[n]} and the Y_k's and since the Y_k's are i.i.d.,
E[X_2^{[n]}] = E[Y_1 X_1^{[n]}] = E[Y_1] E[X_1^{[n]}] = (1/μ) · p/(1 − p) = 1,
so both components grow at the same rate in this example, which provides a good opportunity to analyse the effect of the thresholds in isolation.
As M_2 increases, the probability P(ν_1 < ν_2) that M_1 is crossed first increases in Figure 1a, and the probability P(ν_1 > ν_2) that M_2 is crossed first decreases in Figure 1b, which is an intuitive result. Further, the probability P(ν_1 = ν_2) that they are crossed simultaneously peaks when M_1 = M_2 in each case in Figure 1c.
In Figure 1d, increasing μ only increases the average jump in the second component, X_2^{[n]}, which makes the crossing of M_2 likely to occur more quickly, so the probabilities P(ν_1 > ν_2) drop to 0 more quickly than in Figure 1b.
In this subsection, we have applied the result from Section 3 to a problem related to the reliability of networked structures experiencing node failures or attacks that incapacitate geometric batches of nodes, each with exponentially distributed weights. In particular, we have derived the probabilities that node losses enter a critical state before, at the same time, or after the weight loss reaches critical levels given the parameters of the distributions and the thresholds. The formulas are numerically very tractable as the regularised gamma functions can easily be computed to high precision. The formulas are also shown to match simulated results to very high precision in several special cases.
A primary practical benefit of these probabilities is that they indicate where weaknesses lie in the network: with node losses or with weight losses. If node loss is likely to become critical first, one should implement interventions that strengthen this aspect of the network, and vice versa with weight loss. Pairing these results on the order of threshold crossings with the marginal probabilistic results in [38,39,45] about the first crossing times and the extent of losses of each type upon crossing provides a suitably full understanding of the dynamics of such a process as it approaches and enters a critical state.
While the geometric and exponential distributional assumptions on node and weight losses may seem limiting, they were only needed to invert the operators. The underlying results hold with much more minimal assumptions on the losses, and the same inversion process generally works for many other well-known distributions and empirical distributions derived from data, several of which are shown in [39,45].
Next, we demonstrate that Theorem 1 leads to analytically, or at least numerically, tractable results for two other models, both with explicit inversion and with a numerical approximation.

4.2. 2D Exponential Model

Suppose that each X^{[n]} = (X_1^{[n]}, X_2^{[n]}) is a vector of two independent exponential random variables with parameters μ_1 and μ_2 and Laplace–Stieltjes transforms l_1 and l_2; then,
γ(x) = E[e^{−x_1 X_1^{[1]} − x_2 X_2^{[1]}}] = l_1(x_1) l_2(x_2).
Here, d = 2, so 𝒲 = {{ν_1 < ν_2}, {ν_1 = ν_2}, {ν_1 > ν_2}}, and there are only three possible weak orderings of threshold crossings, which we derive in the following corollary to Theorem 1.
Corollary 3.
If X^{[n]} = (X_1^{[n]}, X_2^{[n]}) is a vector of independent exponential random variables with parameters μ_1 and μ_2 for each n, then
P(ν_1 < ν_2) = 1 − e^{−μ_2 M_2} [ 1 + √(μ_1 μ_2 M_2) ∫_0^{M_1} (e^{−μ_1 τ}/√τ) I_1(2√(μ_1 μ_2 M_2 τ)) dτ ],
P(ν_1 > ν_2) = 1 − e^{−μ_1 M_1} [ 1 + √(μ_1 μ_2 M_1) ∫_0^{M_2} (e^{−μ_2 τ}/√τ) I_1(2√(μ_1 μ_2 M_1 τ)) dτ ],
and P(ν_1 = ν_2) = 1 − P(ν_1 < ν_2) − P(ν_1 > ν_2), where I_1(x) is the modified Bessel function of the first kind.
Proof. 
Theorem 1 implies
P(ν_1 < ν_2) = H^{−1}_x [ (γ((0, x_2)) − γ(x))/(1 − γ(x)) · (1 − γ((0, x_2)))/(1 − γ((0, x_2))) ] (M_1, M_2) = L^{−1}_{x_1} [ ((1 − l_1(x_1))/x_1) L^{−1}_{x_2} [ (1/x_2) l_2(x_2)/(1 − l_1(x_1) l_2(x_2)) ] (M_2) ] (M_1).
Since the X_1^{[n]}'s and X_2^{[n]}'s are exponential with parameters μ_1 and μ_2,
l_1(x_1) = μ_1/(μ_1 + x_1),  l_2(x_2) = μ_2/(μ_2 + x_2).
Then, the innermost inverse Laplace transform is
L^{−1}_{x_2} [ (1/x_2) (μ_2/(μ_2 + x_2)) / (1 − (μ_1/(μ_1 + x_1))(μ_2/(μ_2 + x_2))) ] (M_2) = ((μ_1 + x_1)/x_1) (1 − e^{−μ_2 M_2 x_1/(μ_1 + x_1)}).
Therefore, noting
1 − l_1(x_1) = x_1/(μ_1 + x_1),
we have
P(ν_1 < ν_2) = L^{−1}_{x_1} [ (1/x_1) (x_1/(μ_1 + x_1)) ((μ_1 + x_1)/x_1) (1 − e^{−μ_2 M_2 x_1/(μ_1 + x_1)}) ] (M_1) = 1 − ∫_0^{M_1} L^{−1}_{x_1} [ e^{−μ_2 M_2 x_1/(μ_1 + x_1)} ] (τ) dτ = 1 − e^{−μ_2 M_2} [ 1 + √(μ_1 μ_2 M_2) ∫_0^{M_1} (e^{−μ_1 τ}/√τ) I_1(2√(μ_1 μ_2 M_2 τ)) dτ ].
P(ν_2 < ν_1) can be derived by simply interchanging the parameters μ_1 and μ_2 and interchanging the thresholds M_1 and M_2, while P(ν_1 = ν_2) = 1 − P(ν_1 < ν_2) − P(ν_1 > ν_2) trivially, which completes the probabilities for each weak ordering in 𝒲. □
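Equation (58) involves only a one-dimensional integral of smooth functions; a sketch that evaluates it with the power series for I_1 and Simpson's rule (the substitution τ = u² removes the 1/√τ factor) and compares it against a simulation of the process (parameter values chosen arbitrarily for illustration):

```python
import math
import random

def bessel_i1(z, terms=30):
    # power series for the modified Bessel function of the first kind, I_1
    return sum((z / 2) ** (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               for k in range(terms))

def simpson(f, a, b, n=200):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def p_nu1_first(mu1, mu2, M1, M2):
    # Eq. (58), with tau = u^2 so the integrand is smooth at the origin
    c = math.sqrt(mu1 * mu2 * M2)
    integral = 2.0 * simpson(
        lambda u: math.exp(-mu1 * u * u) * bessel_i1(2.0 * c * u),
        0.0, math.sqrt(M1))
    return 1.0 - math.exp(-mu2 * M2) * (1.0 + c * integral)

def mc_nu1_first(mu1, mu2, M1, M2, trials=50000, seed=11):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a1 = a2 = 0.0
        nu1 = nu2 = None
        n = 0
        while nu1 is None or nu2 is None:
            n += 1
            a1 += rng.expovariate(mu1)
            a2 += rng.expovariate(mu2)
            if nu1 is None and a1 > M1:
                nu1 = n
            if nu2 is None and a2 > M2:
                nu2 = n
        hits += nu1 < nu2
    return hits / trials

closed = p_nu1_first(1.0, 1.0, 2.0, 3.0)
```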
The expressions (58) and (59) are numerically very tractable since the modified Bessel function of the first kind can be computed to high numerical precision efficiently. The practicality of the results is demonstrated by computing them numerically and comparing them to simulated probabilities in Figure 2.
Note that in Figure 2, $M_2$ takes several fixed values while $M_1$ ranges over $\{1, 1.1, 1.2, \ldots, 20\}$, the reverse of Figure 1. For example, increasing $M_1$ makes the first threshold less likely to be crossed first, so $P(\nu_1<\nu_2)$ approaches 0 rather than 1.
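As a concrete illustration of Corollary 3, the sketch below (in Python rather than the MATLAB used for the paper's figures; the function names are ours) evaluates $P(\nu_1<\nu_2)$ with scipy's Bessel function and compares it to empirical probabilities from simulated paths of the process:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i1

def p_nu1_before_nu2(mu1, mu2, M1, M2):
    """P(nu1 < nu2) from Corollary 3 for independent exponential jumps."""
    k = mu1 * mu2 * M2
    # integrand e^{-mu1 tau} I_1(2 sqrt(k tau)) / sqrt(tau); finite as tau -> 0
    f = lambda tau: np.exp(-mu1 * tau) / np.sqrt(tau) * i1(2.0 * np.sqrt(k * tau))
    integral, _ = quad(f, 0.0, M1)
    return 1.0 - np.exp(-mu2 * M2) * (1.0 + np.sqrt(k) * integral)

def simulate_2d(mu1, mu2, M1, M2, n_paths=20000, seed=1):
    """Empirical probabilities of {nu1 < nu2}, {nu1 > nu2}, {nu1 = nu2}."""
    rng = np.random.default_rng(seed)
    n_steps = int(6 * max(mu1 * M1, mu2 * M2)) + 60  # enough jumps to cross w.h.p.
    s1 = rng.exponential(1.0 / mu1, (n_paths, n_steps)).cumsum(axis=1)
    s2 = rng.exponential(1.0 / mu2, (n_paths, n_steps)).cumsum(axis=1)
    nu1 = (s1 > M1).argmax(axis=1)  # index of the first crossing of M1
    nu2 = (s2 > M2).argmax(axis=1)
    return (nu1 < nu2).mean(), (nu1 > nu2).mean(), (nu1 == nu2).mean()
```

For instance, `p_nu1_before_nu2(1, 1, 5, 10)` should agree with the first entry returned by `simulate_2d(1, 1, 5, 10)` to within Monte Carlo error.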

4.3. 3D Exponential Model

Suppose that each X [ n ] = X 1 [ n ] , X 2 [ n ] , X 3 [ n ] is a vector of three independent exponential random variables with parameters μ 1 , μ 2 , and μ 3 and Laplace–Stieltjes transforms l 1 , l 2 , and l 3 . Then,
$$\gamma(\mathbf{x}) = E\left[e^{-x_1X_1[1]-x_2X_2[1]-x_3X_3[1]}\right] = \frac{\mu_1}{\mu_1+x_1}\cdot\frac{\mu_2}{\mu_2+x_2}\cdot\frac{\mu_3}{\mu_3+x_3}.$$
We will apply Theorem 1 to derive several results giving the probability of { ν 1 < ν 2 < ν 3 } , { ν 1 = ν 2 < ν 3 } , { ν 1 < ν 2 = ν 3 } , and { ν 1 = ν 2 = ν 3 } , which trivially generate probabilities of every other weak ordering of threshold crossings by interchanging the roles of the x j , μ j , and M j terms and interchanging the operators.
Corollary 4.
If X [ n ] = X 1 [ n ] , X 2 [ n ] , X 3 [ n ] is a vector of independent exponential random variables with parameters μ 1 , μ 2 , and μ 3 for each n,
$$\begin{aligned} P(\nu_1<\nu_2<\nu_3) ={}& 1 - e^{-\mu_2M_2}\left(1+\sqrt{\mu_1\mu_2M_2}\int_0^{M_1}\frac{e^{-\mu_1\tau}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\mu_1\mu_2M_2\,\tau}\right)d\tau\right) \\ &- e^{-\mu_3M_3}\left(1+\sqrt{\mu_2\mu_3M_3}\int_0^{M_2}\frac{e^{-\mu_2\tau}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\mu_2\mu_3M_3\,\tau}\right)d\tau\right) \\ &+ e^{-\mu_2M_2-\mu_3M_3}\,\mathcal{L}_{x_1}^{-1}\!\left[\frac{e^{\frac{\mu_1\mu_2M_2}{\mu_1+x_1}}}{x_1}\left(1+\sqrt{\frac{\mu_1\mu_2\mu_3M_3}{\mu_1+x_1}}\int_0^{M_2}\frac{e^{-\frac{\mu_1\mu_2\tau}{\mu_1+x_1}}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_3\,\tau}{\mu_1+x_1}}\right)d\tau\right)\right]\!(M_1). \end{aligned}$$
Proof. 
By Theorem 1,
$$P(\nu_1<\nu_2<\nu_3) = \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,x_2,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\cdot\frac{\gamma((0,0,x_3))-\gamma((0,x_2,x_3))}{1-\gamma((0,x_2,x_3))}\right]\!(\mathbf{M}) = \mathcal{L}_{x_1}^{-1}\!\left[\mathcal{L}_{x_2}^{-1}\!\left[\mathcal{L}_{x_3}^{-1}\!\left[A\right]\!(M_3)\right]\!(M_2)\right]\!(M_1),$$
where
$$A = \frac{1-l_1(x_1)}{x_1}\cdot\frac{\left(1-l_2(x_2)\right)l_2(x_2)}{x_2}\cdot\frac{l_3(x_3)^2}{x_3\left(1-l_1(x_1)l_2(x_2)l_3(x_3)\right)\left(1-l_2(x_2)l_3(x_3)\right)} = \frac{\mu_2\mu_3^2}{x_3\left[(\mu_1+x_1)(\mu_2+x_2)(\mu_3+x_3)-\mu_1\mu_2\mu_3\right]\left[(\mu_2+x_2)(\mu_3+x_3)-\mu_2\mu_3\right]}.$$
Let B = L x 3 1 ( A ) ( M 3 ) , and then
$$B = \frac{\mu_2}{x_2\left(\mu_2x_1+\mu_1x_2+x_1x_2\right)} - \frac{e^{-\mu_3M_3\frac{x_2}{\mu_2+x_2}}}{x_1x_2} + \frac{(\mu_1+x_1)\,e^{-\mu_3M_3\frac{\mu_2x_1+\mu_1x_2+x_1x_2}{(\mu_1+x_1)(\mu_2+x_2)}}}{x_1\left(\mu_2x_1+\mu_1x_2+x_1x_2\right)}.$$
Let C = L x 2 1 ( B ) ( M 2 ) , and then
$$\begin{aligned} C ={}& \frac{1}{x_1} - \frac{e^{-\mu_2M_2\frac{x_1}{\mu_1+x_1}}}{x_1} - \frac{e^{-\mu_3M_3}}{x_1}\left(1+\sqrt{\mu_2\mu_3M_3}\int_0^{M_2}\frac{e^{-\mu_2\tau}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\mu_2\mu_3M_3\,\tau}\right)d\tau\right) \\ &+ \frac{e^{-\mu_3M_3}\,e^{-\mu_2M_2\frac{x_1}{\mu_1+x_1}}}{x_1}\left(1+\sqrt{\frac{\mu_1\mu_2\mu_3M_3}{\mu_1+x_1}}\int_0^{M_2}\frac{e^{-\frac{\mu_1\mu_2\tau}{\mu_1+x_1}}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_3\,\tau}{\mu_1+x_1}}\right)d\tau\right). \end{aligned}$$
Then, we can find the probability by applying the final inverse Laplace transform with respect to x 1 , i.e., P ( ν 1 < ν 2 < ν 3 ) equals
$$\begin{aligned} & 1 - e^{-\mu_2M_2}\left(1+\sqrt{\mu_1\mu_2M_2}\int_0^{M_1}\frac{e^{-\mu_1\tau}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\mu_1\mu_2M_2\,\tau}\right)d\tau\right) - e^{-\mu_3M_3}\left(1+\sqrt{\mu_2\mu_3M_3}\int_0^{M_2}\frac{e^{-\mu_2\tau}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\mu_2\mu_3M_3\,\tau}\right)d\tau\right) \\ &+ e^{-\mu_2M_2-\mu_3M_3}\,\mathcal{L}_{x_1}^{-1}\!\left[\frac{e^{\frac{\mu_1\mu_2M_2}{\mu_1+x_1}}}{x_1}\left(1+\sqrt{\frac{\mu_1\mu_2\mu_3M_3}{\mu_1+x_1}}\int_0^{M_2}\frac{e^{-\frac{\mu_1\mu_2\tau}{\mu_1+x_1}}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_3\,\tau}{\mu_1+x_1}}\right)d\tau\right)\right]\!(M_1). \end{aligned}$$
As we will show later, this can be approximated to high precision by numerical integration and numerical Laplace inversion.
The probability of any other weak ordering in the form { ν p ( 1 ) < ν p ( 2 ) < ν p ( 3 ) } for some permutation p follows trivially by interchanging the roles of the variables.
Corollary 5.
If X [ n ] = X 1 [ n ] , X 2 [ n ] , X 3 [ n ] is a vector of independent exponential random variables with parameters μ 1 , μ 2 , and μ 3 for each n,
$$\begin{aligned} P(\nu_1=\nu_2<\nu_3) ={}& \left(1-e^{-\mu_3M_3}\right)e^{-\mu_1M_1-\mu_2M_2}\,I_0\!\left(2\sqrt{\mu_1\mu_2M_1M_2}\right) \\ &- e^{-\mu_1M_1-\mu_2M_2-\mu_3M_3}\sqrt{\mu_1\mu_2\mu_3M_3}\;\mathcal{L}_{x_1}^{-1}\!\left[\frac{e^{\frac{\mu_1\mu_2M_2}{x_1}}}{x_1\sqrt{x_1}}\int_0^{M_2}\frac{e^{-\frac{\mu_1\mu_2\tau}{x_1}}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_3\,\tau}{x_1}}\right)d\tau\right]\!(M_1), \end{aligned}$$
where $I_0$ denotes the modified Bessel function of the first kind of order zero.
Proof. 
By Theorem 1,
$$P(\nu_1=\nu_2<\nu_3) = \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,0,x_3))-\gamma((x_1,0,x_3))-\gamma((0,x_2,x_3))+\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\right]\!(\mathbf{M}) = \mathcal{L}_{x_1}^{-1}\!\left[\mathcal{L}_{x_2}^{-1}\!\left[\mathcal{L}_{x_3}^{-1}\!\left[A\right]\!(M_3)\right]\!(M_2)\right]\!(M_1),$$
where
$$A = \frac{1-l_1(x_1)}{x_1}\cdot\frac{1-l_2(x_2)}{x_2}\cdot\frac{l_3(x_3)}{x_3\left(1-l_1(x_1)l_2(x_2)l_3(x_3)\right)} = \frac{\mu_3}{x_3\left[(\mu_1+x_1)(\mu_2+x_2)(\mu_3+x_3)-\mu_1\mu_2\mu_3\right]}.$$
Let B = L x 3 1 ( A ) ( M 3 ) , and then
$$B = \frac{1}{\mu_2x_1+\mu_1x_2+x_1x_2}\left(1 - e^{-\mu_3M_3\frac{\mu_2x_1+\mu_1x_2+x_1x_2}{(\mu_1+x_1)(\mu_2+x_2)}}\right).$$
Let C = L x 2 1 ( B ) ( M 2 ) , and then
$$C = \frac{e^{-\mu_2M_2\frac{x_1}{\mu_1+x_1}}}{\mu_1+x_1} - \frac{e^{-\mu_3M_3}\,e^{-\mu_2M_2\frac{x_1}{\mu_1+x_1}}}{\mu_1+x_1} - \frac{e^{-\mu_3M_3}\,e^{-\mu_2M_2\frac{x_1}{\mu_1+x_1}}}{\mu_1+x_1}\sqrt{\frac{\mu_1\mu_2\mu_3M_3}{\mu_1+x_1}}\int_0^{M_2}\frac{e^{-\frac{\mu_1\mu_2\tau}{\mu_1+x_1}}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_3\,\tau}{\mu_1+x_1}}\right)d\tau.$$
Then, we can find the probability by applying the final inverse Laplace transform with respect to x 1 , so
$$\begin{aligned} P(\nu_1=\nu_2<\nu_3) ={}& \left(1-e^{-\mu_3M_3}\right)e^{-\mu_1M_1-\mu_2M_2}\,I_0\!\left(2\sqrt{\mu_1\mu_2M_1M_2}\right) \\ &- e^{-\mu_1M_1-\mu_2M_2-\mu_3M_3}\sqrt{\mu_1\mu_2\mu_3M_3}\;\mathcal{L}_{x_1}^{-1}\!\left[\frac{e^{\frac{\mu_1\mu_2M_2}{x_1}}}{x_1\sqrt{x_1}}\int_0^{M_2}\frac{e^{-\frac{\mu_1\mu_2\tau}{x_1}}}{\sqrt{\tau}}\,I_1\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_3\,\tau}{x_1}}\right)d\tau\right]\!(M_1). \end{aligned}$$
Note that this formula trivially generates the probabilities of { ν 1 = ν 3 < ν 2 } and { ν 2 = ν 3 < ν 1 } by interchanging the appropriate variables.
Corollary 6.
If X [ n ] = X 1 [ n ] , X 2 [ n ] , X 3 [ n ] is a vector of independent exponential random variables with parameters μ 1 , μ 2 , and μ 3 for each n,
$$P(\nu_1<\nu_2=\nu_3) = e^{-\mu_2M_2-\mu_3M_3}\,I_0\!\left(2\sqrt{\mu_2\mu_3M_2M_3}\right) - e^{-\mu_2M_2-\mu_3M_3}\,\mathcal{L}_{x_1}^{-1}\!\left[\frac{1}{x_1}\,I_0\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_2M_3}{\mu_1+x_1}}\right)\right]\!(M_1).$$
Proof. 
By Theorem 1,
$$P(\nu_1<\nu_2=\nu_3) = \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,x_2,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\cdot\frac{1-\gamma((0,x_2,0))-\gamma((0,0,x_3))+\gamma((0,x_2,x_3))}{1-\gamma((0,x_2,x_3))}\right]\!(\mathbf{M}) = \mathcal{L}_{x_1}^{-1}\!\left[\mathcal{L}_{x_2}^{-1}\!\left[\mathcal{L}_{x_3}^{-1}\!\left[A\right]\!(M_3)\right]\!(M_2)\right]\!(M_1),$$
where
$$A = \frac{1-l_1(x_1)}{x_1}\cdot\frac{\left(1-l_2(x_2)\right)l_2(x_2)}{x_2}\cdot\frac{\left(1-l_3(x_3)\right)l_3(x_3)}{x_3\left(1-l_1(x_1)l_2(x_2)l_3(x_3)\right)\left(1-l_2(x_2)l_3(x_3)\right)} = \frac{\mu_2\mu_3}{\left[(\mu_1+x_1)(\mu_2+x_2)(\mu_3+x_3)-\mu_1\mu_2\mu_3\right]\left[(\mu_2+x_2)(\mu_3+x_3)-\mu_2\mu_3\right]}.$$
Let B = L x 3 1 ( A ) ( M 3 ) , and then
$$B = \frac{e^{-\mu_3M_3\frac{x_2}{\mu_2+x_2}}}{x_1(\mu_2+x_2)} - \frac{e^{-\mu_3M_3\frac{\mu_2x_1+\mu_1x_2+x_1x_2}{(\mu_1+x_1)(\mu_2+x_2)}}}{x_1(\mu_2+x_2)}.$$
Let C = L x 2 1 ( B ) ( M 2 ) , and then
$$C = \frac{e^{-\mu_2M_2-\mu_3M_3}}{x_1}\,I_0\!\left(2\sqrt{\mu_2\mu_3M_2M_3}\right) - \frac{e^{-\mu_2M_2-\mu_3M_3}}{x_1}\,I_0\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_2M_3}{\mu_1+x_1}}\right).$$
Then, we can find the probability by applying the final inverse Laplace transform with respect to x 1 , so
$$P(\nu_1<\nu_2=\nu_3) = e^{-\mu_2M_2-\mu_3M_3}\,I_0\!\left(2\sqrt{\mu_2\mu_3M_2M_3}\right) - e^{-\mu_2M_2-\mu_3M_3}\,\mathcal{L}_{x_1}^{-1}\!\left[\frac{1}{x_1}\,I_0\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_2M_3}{\mu_1+x_1}}\right)\right]\!(M_1).$$
Note that this formula trivially generates the probabilities of { ν 2 < ν 1 = ν 3 } and { ν 3 < ν 1 = ν 2 } by interchanging the appropriate variables.
Corollary 7.
If X [ n ] = X 1 [ n ] , X 2 [ n ] , X 3 [ n ] is a vector of independent exponential random variables with parameters μ 1 , μ 2 , and μ 3 for each n,
$$P(\nu_1=\nu_2=\nu_3) = e^{-\mu_1M_1-\mu_2M_2-\mu_3M_3}\,\mathcal{L}_{x_1}^{-1}\!\left[\frac{1}{x_1}\,I_0\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_2M_3}{x_1}}\right)\right]\!(M_1).$$
Proof. 
By Theorem 1, P ( ν 1 = ν 2 = ν 3 ) is the result of applying H x 1 to
$$\frac{1-\gamma((x_1,0,0))-\gamma((0,x_2,0))-\gamma((0,0,x_3))+\gamma((x_1,x_2,0))+\gamma((x_1,0,x_3))+\gamma((0,x_2,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})},$$
so
$$P(\nu_1=\nu_2=\nu_3) = \mathcal{L}_{x_1}^{-1}\!\left[\mathcal{L}_{x_2}^{-1}\!\left[\mathcal{L}_{x_3}^{-1}\!\left[A\right]\!(M_3)\right]\!(M_2)\right]\!(M_1),$$
where
$$A = \frac{1-l_1(x_1)}{x_1}\cdot\frac{1-l_2(x_2)}{x_2}\cdot\frac{1-l_3(x_3)}{x_3\left(1-l_1(x_1)l_2(x_2)l_3(x_3)\right)} = \frac{1}{(\mu_1+x_1)(\mu_2+x_2)(\mu_3+x_3)-\mu_1\mu_2\mu_3}.$$
Let B = L x 3 1 ( A ) ( M 3 ) , and then
$$B = \frac{e^{-\mu_3M_3\frac{\mu_2x_1+\mu_1x_2+x_1x_2}{(\mu_1+x_1)(\mu_2+x_2)}}}{(\mu_1+x_1)(\mu_2+x_2)}.$$
Let C = L x 2 1 ( B ) ( M 2 ) , and then
$$C = \frac{e^{-\mu_2M_2-\mu_3M_3}}{\mu_1+x_1}\,I_0\!\left(2\sqrt{\frac{\mu_1\mu_2\mu_3M_2M_3}{\mu_1+x_1}}\right).$$
Lastly, the probability is the last inverse Laplace transform with respect to x 1 applied to the formula above. □
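As an independent sanity check on Corollary 7, note that for independent exponential jumps with rates $\mu_j$, $\nu_j - 1$ is Poisson$(\mu_jM_j)$-distributed and the $\nu_j$ are independent, so $P(\nu_1=\nu_2=\nu_3)$ also equals $\sum_{k\geq 0}\prod_{j}P(\mathrm{Pois}(\mu_jM_j)=k)$. The Python sketch below (our own naming, not the author's MATLAB code) evaluates the corollary's formula with a fixed-Talbot inversion, repeated inline so the snippet is self-contained, and can be compared against that series:

```python
import math
import numpy as np
from scipy.special import iv

def talbot_inverse(F, t, M=48):
    """Fixed-Talbot numerical inverse Laplace transform (Abate-Valko contour)."""
    r = 2.0 * M / (5.0 * t)
    theta = np.pi * np.arange(1, M) / M
    cot = 1.0 / np.tan(theta)
    s = r * theta * (cot + 1j)                 # points on the Talbot contour
    sigma = theta + (theta * cot - 1.0) * cot  # contour-derivative factor
    terms = np.exp(t * s) * F(s) * (1.0 + 1j * sigma)
    return (r / M) * (0.5 * math.exp(r * t) * F(r) + np.real(terms).sum())

def p_all_equal(mu1, mu2, mu3, M1, M2, M3):
    """Corollary 7: P(nu1 = nu2 = nu3) for independent exponential jumps."""
    a = mu1 * mu2 * mu3 * M2 * M3
    F = lambda x: iv(0, 2.0 * np.sqrt(a / x)) / x  # I_0 with complex argument
    return math.exp(-mu1 * M1 - mu2 * M2 - mu3 * M3) * talbot_inverse(F, M1)
```

For the symmetric parameter set (1) in Table 1, `p_all_equal(1, 1, 1, 10, 10, 10)` should land near the tabulated 0.009.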

Computational and Simulated Results

The corollaries above establish that these formulas hold, but what may not be clear is that the formulas are actually quite numerically tractable, even with modest computational resources, as we will demonstrate.
The inverse Laplace transforms above that could not be calculated explicitly were computed using a MATLAB implementation by McClure [43] of the fixed Talbot algorithm developed by Talbot [42], which uses trapezoidal integration along a deformed contour in the Bromwich inversion integral. The approach is not exactly the same as Talbot’s original algorithm, but uses some ideas from the framework of Abate and Whitt [44], and the code is optimised for MATLAB by McClure. A similar numerical approach was used to find probabilities in the context of oscillating random walks by Dshalalow and Liew [15], but, in contrast, the results in this subsection are more extensive and involve comparisons with empirical results from simulations.
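The fixed Talbot scheme is compact enough to sketch. The following Python version (a stand-in for the cited MATLAB implementation, using the standard Abate–Valkó parameterization of Talbot's contour) can be checked on transforms with known inverses, e.g. $F(s)=1/(s+1)$ inverting to $e^{-t}$:

```python
import numpy as np

def talbot_inverse(F, t, M=32):
    """Fixed-Talbot numerical inverse Laplace transform of F, evaluated at t > 0."""
    r = 2.0 * M / (5.0 * t)                    # fixed scaling parameter
    theta = np.pi * np.arange(1, M) / M
    cot = 1.0 / np.tan(theta)
    s = r * theta * (cot + 1j)                 # deformed Bromwich contour s(theta)
    sigma = theta + (theta * cot - 1.0) * cot  # derivative factor along the contour
    terms = np.exp(t * s) * F(s) * (1.0 + 1j * sigma)
    # trapezoidal rule: half-weight endpoint at theta = 0 plus interior nodes
    return (r / M) * (0.5 * np.exp(r * t) * F(r) + np.real(terms).sum())
```

For example, `talbot_inverse(lambda s: 1.0 / (s + 1.0), 1.0)` returns a value within roundoff of $e^{-1}$.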
Note that, for each set of parameters, the probabilities of the weak orderings are listed in the third column of Table 1, ordered in the following way:
$$\begin{aligned} &\nu_1<\nu_2<\nu_3,\quad \nu_1<\nu_3<\nu_2,\quad \nu_2<\nu_1<\nu_3,\quad \nu_2<\nu_3<\nu_1,\quad \nu_3<\nu_1<\nu_2,\quad \nu_3<\nu_2<\nu_1, \\ &\nu_1=\nu_2<\nu_3,\quad \nu_1=\nu_3<\nu_2,\quad \nu_2=\nu_3<\nu_1, \\ &\nu_1<\nu_2=\nu_3,\quad \nu_2<\nu_1=\nu_3,\quad \nu_3<\nu_1=\nu_2, \\ &\nu_1=\nu_2=\nu_3. \end{aligned}$$
The testing above and the results in Table 1 reveal some intuitive facts about the probabilities:
  • In parameter sets (1)–(4), we have μ 1 = μ 2 = μ 3 and M 1 = M 2 = M 3 , so, in each line of the results for each set, the probabilities are the same since there is symmetry between the dimensions such that they are indistinguishable.
  • Across parameter sets (1)–(4), the $\mu_j$ are all 1 and the $M_j$ are equal within each set but increase from set to set. Since the process must travel further to cross the thresholds while the jump distribution is fixed, simultaneously crossing multiple thresholds (lines 2–4) becomes less probable.
  • Comparing parameter sets (5)–(7) to (1) shows the effect of changing a single $\mu_j$: increasing $\mu_j$ decreases the mean jump length in coordinate $j$, so $M_j$ is likely to be crossed later than the others. In sets (6) and (7), $\mu_1$ increases, and the probabilities of $\nu_2<\nu_3<\nu_1$, $\nu_3<\nu_2<\nu_1$, and $\nu_2=\nu_3<\nu_1$, precisely the orderings where $M_1$ is crossed last, grow; in set (5), $\mu_1$ decreases, with the opposite effect.
  • Comparing parameter sets (8)–(10) to (1) reveals that increasing a single M j has a similar effect as increasing a single μ j :
    Parameter set (8) doubles $M_1$ (doubling the distance the first component must travel) and parameter set (6) doubles $\mu_1$ (halving the mean jump length in dimension 1); these have precisely the same impact on the probabilities.
    Parameter sets (7) and (9) exhibit an analogous relationship.
While the testing above reveals some details of the relationship between the parameters and the probabilities, its scope is constrained by our choice of only 10 sets of parameters, so further testing took a large random sample of parameters, computed the predicted and empirical probabilities, and compared them. In particular, we sampled sets of the six parameters, with $\mu_1, \mu_2, \mu_3$ drawn from a uniform distribution on $[0.5, 3]$ and $M_1, M_2, M_3$ from a uniform distribution on $[10, 40]$.
Note the boundaries of the intervals are somewhat arbitrarily selected within a region where an existing MATLAB implementation of the fixed Talbot algorithm for numerical inverse Laplace transforms generally converges properly, with a few exceptions (2% of the sampled parameters).
A random sample of 500 sets of parameters from the distributions mentioned above was taken, the process was simulated 100,000 times for each set, and the predicted and empirical probabilities were compared. In 10 samples, the numerical inverse Laplace transform failed due to rounding or overflow errors, but in the remaining 490 cases, the predictions again agreed closely with the empirical results. In the worst case, the sum of errors between the empirical and predicted probabilities reached 0.023, with a maximum individual error of 0.012. This is quite accurate, but even this case is an outlier: in the other 489 cases, the maximum sum of errors is 0.012, with a maximum individual error of 0.004.
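A simulation of the kind used for the empirical columns can be sketched as follows (Python rather than MATLAB; the names are ours): draw the exponential jumps, accumulate them, record the first crossing index in each coordinate, and tally the resulting weak-ordering pattern:

```python
import numpy as np
from collections import Counter

def ordering_pattern(nu):
    """Label the weak ordering of crossing indices, e.g. nu = (3, 3, 5) -> '1=2<3'."""
    order = sorted(range(len(nu)), key=lambda j: nu[j])
    label = str(order[0] + 1)
    for prev, cur in zip(order, order[1:]):
        label += ("=" if nu[cur] == nu[prev] else "<") + str(cur + 1)
    return label

def empirical_orderings(mus, Ms, n_paths=20000, seed=2):
    """Counter of weak-ordering labels over simulated paths of the process."""
    rng = np.random.default_rng(seed)
    mus, Ms = np.asarray(mus, float), np.asarray(Ms, float)
    n_steps = int(6 * (mus * Ms).max()) + 60   # enough jumps to cross w.h.p.
    jumps = rng.exponential(1.0 / mus, size=(n_paths, n_steps, len(mus)))
    sums = jumps.cumsum(axis=1)
    nus = (sums > Ms).argmax(axis=1)           # first crossing index per component
    return Counter(ordering_pattern(tuple(nu)) for nu in nus)
```

With `mus = Ms = (1, 1, 1)` and thresholds all 10, the relative frequency of each strict ordering should hover near the 0.125 of parameter set (1).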

4.4. Further Applications of P ( W )

The tractable formula for P ( W ) found in Theorem 1 has further applications in insurance, finance, reliability theory, and more advanced stochastic network models. Herein, we discuss two of these areas as a motivation for further study and adoption of the present modeling approach in diverse disciplines. Lastly, we will consider the versatility of the result in a more mathematical sense.

4.4.1. Applications to Insurance

Insurance companies receive a random number of claims from their customers per month, each with a random, nonnegative cost to the company. Assuming that the company has a budget of agents to adjudicate claims and a planned budget for costs, then it will be of interest which budget will be exhausted first. If the agent budget is highly likely to exhausted first, the company may optimise its resources by shifting some of the funds pre-allocated for paying the claims to hiring more agents. If the claims budget is highly likely to be exhausted first, the company may optimise resources by hiring fewer agents.
If the distributions of claims per unit time and cost per claim are time-independent, this example is essentially equivalent to the stochastic network application from Section 4.1 above, with X 1 [ j ] equal to the number of claims received in month j and X 2 [ j ] the sum of the costs of those claims in month j. However, the distributions of claims per month or cost per claim need not be geometric and exponential, respectively.
While this two-dimensional problem does not exceed the capabilities inherent in the methods of prior works on stochastic networks [38], the problem becomes far more complex and higher-dimensional if we consider claims budgets in a localised sense. For example, if agents or funding for claims need to be assigned to certain cities, there may be city-specific budgets, and the probabilities of certain orders of these budgets being exhausted can be used to study the effects of allocating agents to different cities. Likewise, a company with diverse offerings may need to assign specialist agents to claims on home, auto, health, or life insurance.

4.4.2. Reliability Theory

The model can also be used to analyse a complex system made up of numerous deteriorating parts. For example, some approaches [48] model the deterioration of a part as a renewal-reward process where the jumps consist of a sum of a continuous deterioration (e.g., linear) plus a random number of “shocks” that introduce damage of random magnitudes approaching a threshold of irrecoverable failure.
The present work can allow such models to be extended to complex systems and determine the most likely order(s) of failures of multiple parts critical to the operation of the system. Such systems frequently require upkeep and repairs to different types of parts, so understanding the most vulnerable parts given these likely orders can allow administrators to design maintenance programs to ensure consistent, reliable operation of the system.

4.4.3. Versatility of Theorem 1

A point that may be missed in the focus on finding the probability of a fixed weak ordering of threshold crossings, $P(W)$, is that these probabilities can trivially be combined into probabilities of weak orderings of subsets of the thresholds.
For example, in a three-dimensional model, we may have a weak ordering $V = \{\nu_1<\nu_3\}$ of two of the threshold crossings. Any weak ordering of a subset of the crossings can be represented as a union of distinct weak orderings of all of the crossings; in this case,
$$V = \{\nu_1<\nu_2<\nu_3\}\cup\{\nu_1<\nu_3<\nu_2\}\cup\{\nu_2<\nu_1<\nu_3\}\cup\{\nu_1=\nu_2<\nu_3\}\cup\{\nu_1<\nu_2=\nu_3\} = W_1\cup W_2\cup W_3\cup W_4\cup W_5.$$
The probability of $V$ is, then, the sum of the probabilities of the disjoint weak orderings on the right side, all of which are given by Theorem 1 as follows:
$$\begin{aligned} P(V) ={}& P(W_1)+P(W_2)+P(W_3)+P(W_4)+P(W_5) \\ ={}& \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,x_2,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\cdot\frac{\gamma((0,0,x_3))-\gamma((0,x_2,x_3))}{1-\gamma((0,x_2,x_3))}\right]\!(\mathbf{M}) \\ &+ \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,x_2,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\cdot\frac{\gamma((0,x_2,0))-\gamma((0,x_2,x_3))}{1-\gamma((0,x_2,x_3))}\right]\!(\mathbf{M}) \\ &+ \mathcal{H}_{x_1}\!\left[\frac{\gamma((x_1,0,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\cdot\frac{\gamma((0,0,x_3))-\gamma((x_1,0,x_3))}{1-\gamma((x_1,0,x_3))}\right]\!(\mathbf{M}) \\ &+ \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,0,x_3))-\gamma((x_1,0,x_3))-\gamma((0,x_2,x_3))+\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\right]\!(\mathbf{M}) \\ &+ \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,x_2,x_3))-\gamma(\mathbf{x})}{1-\gamma(\mathbf{x})}\cdot\frac{1-\gamma((0,x_2,0))-\gamma((0,0,x_3))+\gamma((0,x_2,x_3))}{1-\gamma((0,x_2,x_3))}\right]\!(\mathbf{M}) \\ ={}& \mathcal{H}_{x_1}\!\left[\frac{\gamma((0,0,x_3))-\gamma((x_1,0,x_3))}{1-\gamma((x_1,0,x_3))}\right]\!(\mathbf{M}). \end{aligned}$$
As such, while Theorem 1 focuses on probabilities of weak orderings of all the threshold crossings, these results can be summed to obtain the probability of a weak ordering of a subset of threshold crossings. Moreover, in this case, the sum actually simplifies the expression tremendously.
In addition, identifying the likely first threshold crossing can be important. In the stochastic network problem, this identifies the most vulnerable aspect of the network. In insurance, it locates the first budget likely to be exhausted. In reliability problems, it can find the first part that will likely need maintenance. In each case, identifying the likely vulnerabilities can help to determine appropriate strategies to avoid such problems.
Mathematically, this means that we find P ( ν k < min { ν j : j k } ) for each k = 1 , , d , i.e., the probability that the kth threshold will be crossed first. Clearly, any such probability will be the sum of all probabilities of weak orderings of threshold crossings where ν k comes first—precisely what Theorem 1 expresses.

5. Significance and Future Work

The work above can reasonably be expected to extend in several directions, some of which are the subject of current and future projects.
Closed-form expressions for $\Phi(\mathbf{u}) = E\left[e^{-\mathbf{u}\cdot A[\nu]}\right]$ have been derived by brute force for dimensions $d=2$ and $d=3$ in a similar setting by Dshalalow [12], as well as $d=4$ in the context of a continuous-time marked random walk by Dshalalow and Liew [30]. Although the setting is different in the latter case, the approach and result are quite similar. Their approach was to derive $\Phi_W = E\left[e^{-\mathbf{u}\cdot A[\nu]}\mathbf{1}_W\right]$ separately for each $W \in \mathcal{W}$ and add them together, but the size of $\mathcal{W}$ grows very quickly with $d$; in fact, it grows at a super-factorial rate, since the set of total orderings of the threshold crossings grows factorially but is only a subset of $\mathcal{W}$. The number of weak orderings follows the pattern of the Fubini numbers (also called the ordered Bell numbers), 3, 13, 75, 541, 4683, 47,293, … for $d = 2, 3, \ldots, 7$, so the brute force approach quickly becomes impractical to carry out by hand. The author did some prior work on an algorithmic solution [45], but even listing all the elements of $\mathcal{W}$ poses computational problems for current consumer-grade computers at dimension 7 or 8. More extensive computational power would not be able to handle much beyond this on a practical time scale given the super-factorial rate of growth.
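The Fubini numbers satisfy the recurrence $a_n = \sum_{k=1}^{n}\binom{n}{k}\,a_{n-k}$ with $a_0 = 1$ (choose which $k$ crossings tie for first place, then weakly order the rest), so the growth quoted above is easy to reproduce:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def fubini(n):
    """Number of weak orderings (ordered set partitions) of n threshold crossings."""
    if n == 0:
        return 1
    # choose the k crossings tied for first place, then order the remaining n - k
    return sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))

print([fubini(d) for d in range(2, 8)])  # [3, 13, 75, 541, 4683, 47293]
```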
However, the result of Theorem 1 provides a path to representing an arbitrary Φ W in finite dimension d in a manageable formula. An ongoing project seeks to either confirm a conjecture from Dshalalow and Liew [30] or correct it by deriving a closed-form expression for
$$\Phi(\mathbf{u}) = \sum_{W\in\mathcal{W}} \Phi_W(\mathbf{u})$$
through a recursive approach exploiting patterns in the Φ W expressions.
Further, it is possible to move beyond discrete-time stochastic processes. For example, some prior studies [14,38,39] have considered (continuous-time) marked random walks $\{A(t): t \geq 0\}$ with multidimensional jumps at times forming a renewal process $\{\tau_n\}$ for small $d$, with each random vector $X[n]$ being the $d$-dimensional jump $A(\tau_n)-A(\tau_{n-1})$; these jumps are i.i.d. in this scenario. This work can be adapted to find a joint LST for the position and time of the exit of such a process, $E\left[e^{-\mathbf{u}\cdot A(\tau_\nu)-\theta\tau_\nu}\right]$, in terms of the joint LST of the jump vector and the time between jumps, $E\left[e^{-\mathbf{u}\cdot A(\tau_1)-\theta\tau_1}\right]$.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Wald, A. On cumulative sums of random variables. Ann. Math. Stat. 1944, 15, 283–296. [Google Scholar] [CrossRef]
  2. Andersen, E.S. On sums of symmetrically dependent random variables. Scand. Actuar. J. 1953, 1953, 123–138. [Google Scholar] [CrossRef]
  3. Takács, L. On a problem of fluctuations of sums of independent random variables. J. Appl. Probab. 1975, 12, 29–37. [Google Scholar] [CrossRef]
  4. Takács, L. On fluctuations of sums of random variables. In Sudies in Probability and Ergodic Theory. Advances in Mathematics. Supplementary Studies; Rota, G.C., Ed.; Academic Press: New York, NY, USA, 1978; Volume 2, pp. 45–93. [Google Scholar]
  5. Erdös, P.; Kac, M. On the number of positive sums of independent random variables. Bull. Am. Math. Soc. 1947, 53, 1011–1021. [Google Scholar] [CrossRef] [Green Version]
  6. Feller, W. On the fluctuations of sums of independent random variables. Proc. Natl. Acad. Sci. USA 1969, 63, 637–639. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Chung, K.L. On the maximum partial sum of independent random variables. Proc. Natl. Acad. Sci. USA 1947, 33, 132–136. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Teicher, H. On random sums of random vectors. Ann. Math. Stat. 1965, 36, 1450–1458. [Google Scholar] [CrossRef]
  9. Iglehart, D.L. Weak Convergence of Probability Measures on Product Spaces with Applications to Sums of Random Vectors; Technical Report 109; Department of Operations Research, Stanford University: Stanford, CA, USA, 1968. [Google Scholar]
  10. Osipov, L. On large deviations for sums of random vectors in Rk. J. Multivar. Anal. 1981, 11, 115–126. [Google Scholar] [CrossRef] [Green Version]
  11. Borovkov, A.; Mogulskii, A. Chebyshev-type exponential inequalities for sums of random vectors and for trajectories of random walks. Theory Probab. Its Appl. 2012, 56, 21–43. [Google Scholar] [CrossRef]
  12. Dshalalow, J.H. On the level crossing of multi-dimensional delayed renewal processes. J. Appl. Math. Stoch. Anal. 1997, 10, 355–361. [Google Scholar] [CrossRef] [Green Version]
  13. Dshalalow, J.H. Time dependent analysis of multivariate marked renewal processes. J. Appl. Probab. 2001, 38, 707–721. [Google Scholar] [CrossRef]
  14. Dshalalow, J.H.; Liew, A. On exit times of a multivariate random walk and its embedding in a quasi Poisson process. Stoch. Anal. Appl. 2006, 24, 451–474. [Google Scholar] [CrossRef]
  15. Dshalalow, J.H.; Liew, A. Level crossings of an oscillating marked random walk. Comput. Math. Appl. 2006, 52, 917–932. [Google Scholar] [CrossRef] [Green Version]
  16. Novikov, A.; Melchers, R.E.; Shinjikashvili, E.; Kordzakhhia, N. First pasage time of filtered Poisson process with exponential shape function. Probabilistic Eng. Mech. 2005, 20, 57–65. [Google Scholar] [CrossRef]
  17. Jacobsen, M. The time to ruin for a class of Markov additive risk processes with two-sided jumps. Adv. Appl. Probab. 2005, 37, 963–992. [Google Scholar] [CrossRef] [Green Version]
  18. Crescenzo, A.D.; Martinucci, B. On a first-passage-time problem for the compound power-law process. Stoch. Model. 2009, 25, 420–435. [Google Scholar] [CrossRef]
  19. Bo, L.; Lefebre, M. Mean first passage times of two-dimensional processes with jumps. Stat. Prob. Lett. 2011, 81, 1183–1189. [Google Scholar] [CrossRef]
  20. Coutin, L.; Dorobantu, D. First passage time law for some Lévy processes with compound Poisson: Existence of a density. Bernoulli 2011, 17, 1127–1135. [Google Scholar] [CrossRef]
  21. Kadankov, V.F.; Kadankova, T.V. On the distribution of the first exit time from an interval and the value of the overjump across a boundary for processes with independent increments and random walks. Random Oper. Stoch. Equ. 2005, 13, 219–244. [Google Scholar] [CrossRef]
  22. Kadankova, T.V. Exit, passage, and crossing time and overshoots for a Poisson compound process with an exponential component. Theory Probab. Math. Stat. 2007, 75, 23–29. [Google Scholar] [CrossRef] [Green Version]
  23. Aurzada, F.; Iksanov, A.; Meiners, M. Exponential moments of first passage time and related quantities for Lévy processes. Math. Nachrichten 2015, 288, 1921–1938. [Google Scholar] [CrossRef] [Green Version]
  24. Bernyk, V.; Dalang, R.C.; Peskir, G. The law of the supremum of stable Lévy processes with no negative jumps. Ann. Probab. 2008, 36, 1777–1789. [Google Scholar] [CrossRef] [Green Version]
  25. White, R.T.; Dshalalow, J.H. Characterizations of random walks on random lattices and their ramifications. Stoch. Anal. Appl. 2019, 38, 307–342. [Google Scholar] [CrossRef]
  26. Chaumont, L.; Marolleau, M. Fluctuation theory for spectrally positive additive Lévy fields. arXiv 2019, arXiv:1912.10474. [Google Scholar] [CrossRef]
  27. Borovkov, K.A.; Dickson, D.C.M. On the ruin time distribution for a Sparre Andersen process with exponential claim sizes. Insur. Math. Econ. 2008, 42, 1104–1108. [Google Scholar] [CrossRef] [Green Version]
  28. Yin, C.; Shen, Y.; Wen, Y. Exit problems for jump processes with applications to dividend problems. J. Comput. Appl. Math. 2013, 245, 30–52. [Google Scholar] [CrossRef]
  29. Yin, C.; Wen, Y.; Zong, Z.; Shen, Y. The first passage problem for mixed-exponential jump processes with applications in insurance and finance. Abstr. Appl. Anal. 2014, 2014, 571724. [Google Scholar] [CrossRef] [Green Version]
  30. Dshalalow, J.; Liew, A. On fluctuations of a multivariate random walk with some applications to stock options trading and hedging. Math. Comput. Model. 2006, 44, 931–944. [Google Scholar] [CrossRef]
  31. Kyprianou, A.E.; Pistorius, M.R. Perpetual options and Canadization through fluctuation theory. Ann. Appl. Probab. 2003, 13, 1077–1098. [Google Scholar] [CrossRef] [Green Version]
  32. Mellander, E.; Vredin, A.; Warne, A. Stochastic trends and economic fluctuations in a small open economy. J. Appl. Econom. 1992, 7, 369–394. [Google Scholar] [CrossRef]
  33. Dshalalow, J.H. Random walk analysis in antagonistic stochastic games. Stoch. Anal. Appl. 2008, 26, 738–783. [Google Scholar] [CrossRef]
  34. Dshalalow, J.H.; Iwezulu, K.; White, R.T. Discrete operational calculus in delayed stochastic games. Neural Parallel Sci. Comput. 2016, 24, 55–64. [Google Scholar]
  35. Takács, L. Introduction to the Theory of Queues; Oxford University: Oxford, UK, 1962. [Google Scholar]
  36. Dshalalow, J.H.; Merie, A.; White, R.T. Fluctuation analysis in parallel queues with hysteretic control. Methodol. Comput. Appl. Probab. 2019, 22, 295–327. [Google Scholar] [CrossRef]
  37. Ali, E.; AlObaidi, A. On first excess level analysis of hysteretic bilevel control queue with multiple vacations. Int. J. Nonlinear Anal. Appl. 2021, 12, 2131–2144. [Google Scholar] [CrossRef]
  38. Dshalalow, J.H.; White, R.T. On reliability of stochastic networks. Neural Parallel Sci. Comput. 2013, 21, 141–160. [Google Scholar]
  39. Dshalalow, J.H.; White, R.T. On strategic defense in stochastic networks. Stoch. Anal. Appl. 2014, 32, 365–396. [Google Scholar] [CrossRef]
  40. Abolnikov, L.; Dshalalow, J.H. A first passage problem and its applications to the analysis of a class of stochastic models. Int. J. Stoch. Anal. 1992, 5, 83–97. [Google Scholar] [CrossRef] [Green Version]
  41. Polyanin, A.; Manzhirov, A. Handbook of Integral Equations; Chapman and Hall/CRC: Boca Raton, FL, USA, 2008. [Google Scholar] [CrossRef]
  42. Talbot, A. The accurate numerical inversion of Laplace transforms. IMA J. Appl. Math. 1979, 23, 97–120. [Google Scholar] [CrossRef]
  43. McClure, T. Numerical Inverse Laplace Transform. MATLAB Central File Exchange. 2022. Available online: https://www.mathworks.com/matlabcentral/fileexchange/39035-numerical-inverse-laplace-transform (accessed on 5 June 2022).
  44. Abate, J.; Whitt, W. A unified framework for numerically inverting Laplace transforms. INFORMS J. Comput. 2006, 18, 408–421. [Google Scholar] [CrossRef] [Green Version]
  45. White, R.T. Random Walks on Random Lattices and Their Applications. Ph.D. Thesis, Florida Institute of Technology, Melbourne, FL, USA, 2015. [Google Scholar]
  46. Al-Matar, N.; Dshalalow, J.H. Time sensitive functionals in a queue with sequential maintenance. Stoch. Model. 2011, 27, 687–704. [Google Scholar] [CrossRef]
  47. Dshalalow, J.H.; Merie, A. Fluctuation analysis in queues with several operational modes and priority customers. TOP 2018, 26, 309–333. [Google Scholar] [CrossRef]
  48. Dshalalow, J.H.; White, R.T. Random walk analysis in a reliability system under constant degradation and random shocks. Axioms 2021, 10, 199. [Google Scholar] [CrossRef]
Figure 1. In each diagram predicted results are plotted as solid curves and empirical probabilities from 10,000 simulations are computed and plotted as dots. (a) P ( ν 1 < ν 2 ) with μ = 1 , p = 0.5 ; (b) P ( ν 1 > ν 2 ) with μ = 1 , p = 0.5 ; (c) P ( ν 1 = ν 2 ) with μ = 1 , p = 0.5 ; (d) P ( ν 1 > ν 2 ) with μ = 2 , p = 0.5 . Each diagram shows plots for several M 1 values.
Figure 2. In the diagrams, predicted results from Corollary 3 are plotted as solid curves and empirical probabilities from 10,000 simulations are computed and plotted as dots. (a) P ( ν 1 < ν 2 ) with μ 1 = μ 2 = 1 ; (b) P ( ν 1 < ν 2 ) with μ 1 = 1 , μ 2 = 2 ; (c) P ( ν 1 = ν 2 ) with μ 1 = μ 2 = 1 ; (d) P ( ν 1 = ν 2 ) with μ 1 = 2 , μ 2 = 1 . Each diagram shows plots for several M 2 values.
Table 1. In each row of the table, we set some parameter values and compute the probability for each weak ordering by Corollary 4 (line 1), Corollary 5 (line 2), Corollary 6 (line 3), and Corollary 7 (line 4), rounded to the nearest thousandth. Lastly, we have the sum of absolute errors for all thirteen probabilities and the maximum individual error compared to empirical probabilities of each weak ordering computed from 1,000,000 simulated paths of the process.
Parameters [errors vs. simulation] | Signs : Predicted probabilities

(1) (μ1, μ2, μ3) = (1, 1, 1), M1 = M2 = M3 = 10 [Sum = 0.003, Max < 10^{-3}]
    < < : 0.125 0.125 0.125 0.125 0.125 0.125
    = < : 0.042 0.042 0.042
    < = : 0.039 0.039 0.039
    = = : 0.009
(2) (μ1, μ2, μ3) = (1, 1, 1), M1 = M2 = M3 = 20 [Sum = 0.002, Max < 10^{-3}]
    < < : 0.137 0.137 0.137 0.137 0.137 0.137
    = < : 0.030 0.030 0.030
    < = : 0.029 0.029 0.029
    = = : 0.005
(3) (μ1, μ2, μ3) = (1, 1, 1), M1 = M2 = M3 = 50 [Sum = 0.003, Max < 10^{-3}]
    < < : 0.147 0.147 0.147 0.147 0.147 0.147
    = < : 0.019 0.019 0.019
    < = : 0.019 0.019 0.019
    = = : 0.002
(4) (μ1, μ2, μ3) = (1, 1, 1), M1 = M2 = M3 = 100 [Sum = 0.002, Max < 10^{-3}]
    < < : 0.153 0.153 0.153 0.153 0.153 0.153
    = < : 0.014 0.014 0.014
    < = : 0.014 0.014 0.014
    = = : 0.001
(5) (μ1, μ2, μ3) = (0.5, 1, 1), M1 = M2 = M3 = 10 [Sum = 0.002, Max < 10^{-3}]
    < < : 0.358 0.358 0.048 0.008 0.048 0.008
    = < : 0.035 0.035 0.005
    < = : 0.082 0.007 0.007
    = = : 0.004
(6) (μ1, μ2, μ3) = (2, 1, 1), M1 = M2 = M3 = 10 [Sum = 0.002, Max < 10^{-3}]
    < < : 0.002 0.002 0.018 0.422 0.018 0.422
    = < : 0.002 0.002 0.088
    < = : 0.001 0.011 0.011
    = = : 0.001
(7) (μ1, μ2, μ3) = (5, 1, 1), M1 = M2 = M3 = 10 [Sum = 0.002, Max < 10^{-3}]
    < < : 0.000 0.000 0.000 0.455 0.000 0.455
    = < : 0.000 0.000 0.090
    < = : 0.000 0.000 0.000
    = = : 0.000
(8) (μ1, μ2, μ3) = (1, 1, 1), M1 = 20, M2 = M3 = 10 [Sum = 0.001, Max < 10^{-3}]
    < < : 0.002 0.002 0.018 0.422 0.018 0.422
    = < : 0.002 0.002 0.088
    < = : 0.001 0.011 0.011
    = = : 0.001
(9) (μ1, μ2, μ3) = (1, 1, 1), M1 = 50, M2 = M3 = 10 [Sum = 0.002, Max < 10^{-3}]
    < < : 0.000 0.000 0.000 0.455 0.000 0.455
    = < : 0.000 0.000 0.090
    < = : 0.000 0.000 0.000
    = = : 0.000
(10) (μ1, μ2, μ3) = (1, 1, 1), M1 = 100, M2 = M3 = 10 [Sum < 10^{-4}, Max < 10^{-4}]
    < < : 0.000 0.000 0.000 0.455 0.000 0.455
    = < : 0.000 0.000 0.090
    < = : 0.000 0.000 0.000
    = = : 0.000