Article

Skorokhod Reflection Problem for Delayed Brownian Motion with Applications to Fractional Queues

1
Scuola Superiore Meridionale, Università degli Studi di Napoli Federico II, 80138 Napoli, Italy
2
School of Mathematics, Cardiff University, Cardiff CF10 3AT, UK
3
Dipartimento di Matematica e Applicazioni, Università degli Studi di Napoli Federico II, 80126 Napoli, Italy
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(3), 615; https://doi.org/10.3390/sym14030615
Submission received: 17 February 2022 / Revised: 16 March 2022 / Accepted: 17 March 2022 / Published: 19 March 2022
(This article belongs to the Special Issue Stochastic Analysis with Applications and Symmetry)

Abstract

Several queueing systems in heavy traffic regimes are shown to admit a diffusive approximation in terms of the Reflected Brownian Motion. The latter is defined by solving the Skorokhod reflection problem on the trajectories of a standard Brownian motion. In recent years, fractional queueing systems have been introduced to model a class of queueing systems with heavy-tailed interarrival and service times. In this paper, we consider a subdiffusive approximation for such processes in the heavy traffic regime. To do this, we introduce the Delayed Reflected Brownian Motion by either solving the Skorokhod reflection problem on the trajectories of the delayed Brownian motion or by composing the Reflected Brownian Motion with an inverse stable subordinator. The heavy traffic limit is achieved via the continuous mapping theorem. As a further interesting consequence, we obtain a simulation algorithm for the Delayed Reflected Brownian Motion via a continuous-time random walk approximation.
MSC:
60K25; 60G22; 60J70; 60K15

1. Introduction

Queueing theory is a widely studied branch of mathematics, thanks to its numerous applications. Aside from the classical applications in telecommunications [1] and traffic engineering [2], one can cite those in computing [3] (with particular attention to scheduling) and economics (see, for instance, [4]); moreover, the theory clearly shares several models with population dynamics (see also [5]). The simplest building block of the theory is the M/M/1 queue, which is the prototype of a Markovian queueing model. While, on one hand, the study of this queueing model in its stationary phase can be achieved without too much effort (see, for instance, [6,7]), the determination of its distribution during the transient phase is a quite difficult task that requires much more attention (see, for instance, [8,9]).
The main feature of the M/M/1 queue is the Markov property, which, on one hand, yields a quite tractable model, while, on the other, it precludes any possibility of including memory effects. To overcome this problem, one can extend the study to the class of semi-Markov processes, introduced by Lévy in [10]. A special class of semi-Markov processes can be obtained by means of a time change, i.e., the introduction of a non-Markov stochastic clock in the model. This is done, for instance, by composing a parent Markov process with the inverse of a subordinator. In the specific case of the α-stable subordinator, one refers to the new process as a fractional version of the parent one, due to the link between this time-change procedure and fractional calculus, as one can see from [11,12,13,14,15,16,17], to cite some of the several works on the topic. Together with a purely mathematical interest, let us stress that this procedure also leads to some interesting applications (see, for instance, [18] in economics, [19] in computing and [20] in computational neurosciences). In the context of queueing theory, this was first done in [4], in the case of the M/M/1 queue, and then extended in [21,22] to the case of M/M/1 queues with catastrophes, in [23] to the case of Erlang queues and in [24] to the M/M/∞ queue.
Another quite interesting feature of the classical M/M/1 queue is given by its link with the Brownian motion. It is well known that the behavior of the queue is widely influenced by the traffic intensity ρ. In particular, if ρ < 1, the queue is ergodic (and thus admits a stationary state), while for ρ ≥ 1, this is no longer true. In a certain sense, we can see the critical value ρ = 1 as a bifurcation of the system (seen as some sort of integral curve in the space of probability measures), in which it loses its globally asymptotically stable equilibrium point. Thus, a natural question concerns what happens, at a proper scaling, to the queueing model as ρ is near 1. This is called the heavy traffic regime. In the case of the M/M/1 queue, it is well known that, up to a proper scaling, the queue length process in the heavy traffic regime can be approximated by means of a Reflected Brownian Motion with negative drift (see [25]), i.e., a Brownian motion with negative drift whose velocity is symmetrically regulated as soon as it touches 0. More precisely, the Reflected Brownian Motion is the solution of the Skorokhod reflection problem on the trajectory of a drifted Brownian motion. This process is of independent importance, as it provides the stochastic representation of the solution of the heat equation with Neumann boundary condition (see [26]).
The time-change procedure described before has also been applied to the Brownian motion, obtaining the so-called Delayed Brownian Motion (see [27]). Despite the name, it is interesting to observe that such a process is not necessarily delayed: it is true that it exhibits some intervals of constancy; nevertheless, on short times, it can vary faster than the parent Brownian motion, as evidenced by its expected lifetime in fractal sets (see [28]). Once this procedure has been applied to the Brownian motion, one could ask what happens when we apply a time change also to the Reflected Brownian Motion.
In this paper, we introduce the Delayed Reflected Brownian Motion (DRBM) as the solution of the Skorokhod reflection problem for the trajectories of the Delayed Brownian Motion (DBM). In particular, we prove that the process obtained in this way coincides with the one achieved by time-changing the Reflected Brownian Motion via an inverse stable subordinator. As a consequence, we can obtain some alternative representations of the process as direct counterparts of the ones in the classical case. Among them, it is interesting to cite that we are able to exhibit a representation analogous to the one of [29] in terms of time-changed stochastic differential equations (introduced in [30]). We then use such a process to characterize the heavy traffic regime of a fractional M/M/1 queue. The results that we obtain have a twofold interest. On one hand, we achieve a subdiffusive approximation of the fractional M/M/1 queue in the heavy traffic regime, which is a quite powerful tool to study the properties of such queueing models. On the other hand, if we look at the results from a different point of view, we have a continuous-time random walk approximation of the Delayed Reflected Brownian Motion. The latter can be used to provide simulation algorithms that do not require us to simulate the inverse subordinator or to compute inverse Laplace transforms (both of them being expensive computational tasks in terms of errors and time). The motivation of the paper lies indeed in this twofold result: we want to provide a new tool to study the properties of the fractional M/M/1 queue introduced in [4] in the heavy traffic regime, while, at the same time, exploiting and better investigating the power of the discrete event simulation algorithm (see [31]) via continuous-time random walk approximations of a subdiffusive process.
The paper is organized as follows: in Section 2, we introduce Skorokhod's reflection problem and the Reflected Brownian Motion, while in Section 3, we investigate the Delayed Reflected Brownian Motion, with particular attention to the scaling properties of the two processes. The heavy traffic approximation for both the classical and the fractional M/M/1 queues is discussed in Section 4, together with some scaling properties of the classical M/M/1 queueing system. In this section, we also refine (and, in a certain sense, complete) the analysis of the queueing model introduced in [4], by proving the respective independence of the interarrival and the service times (recall that they are not mutually independent) and providing their distributions (with a strategy different from that of [21], which does not require any further conditioning). Finally, in Section 5, we exhibit the simulation algorithm for the fractional M/M/1 queue and show how to use it to simulate a Delayed Reflected Brownian Motion, together with some heuristic considerations on stopping criteria.

2. The Reflected Brownian Motion

2.1. Skorokhod’s Reflection Problem

Let us denote by $C(\mathbb{R}_0^+)$ (where $\mathbb{R}_0^+ := [0,+\infty)$) the space of continuous functions $f: \mathbb{R}_0^+ \to \mathbb{R}$. In the following, we also need the space of càdlàg functions $D(\mathbb{R}_0^+)$, i.e., the space of functions $f: \mathbb{R}_0^+ \to \mathbb{R}$ that are right-continuous and whose left limits exist and are finite. Let us denote, for simplicity, $D_0(\mathbb{R}_0^+) := \{y \in D(\mathbb{R}_0^+) : y(0) \ge 0\}$ and $C_0(\mathbb{R}_0^+) := \{y \in C(\mathbb{R}_0^+) : y(0) \ge 0\}$, which are convex cones. Let us also recall that any monotone function $a: \mathbb{R}_0^+ \to \mathbb{R}$ is locally of bounded variation and thus admits a distributional derivative $da$, which is a Radon measure (see [32]).
Definition 1.
Let y D 0 ( R 0 + ) . We say that a pair ( z , a ) D ( R 0 + ) × D ( R 0 + ) is a solution of the one-dimensional Skorokhod reflection problem if
(i)
z = y + a ;
(ii)
$z(t) \ge 0$ for any $t \ge 0$;
(iii)
$a$ is nondecreasing, $a(0) = 0$ and $da$ is supported on $\{s \in \mathbb{R}_0^+ : z(s) = 0\}$.
Practically, we are asking whether it is possible to construct a path z starting from y by adding a term a in such a way that, whenever the path touches 0, it is instantaneously symmetrically reflected (by means of the function a), so that the resulting path z is constrained to remain non-negative. For these reasons, the function a is called the regulator, while z is called the regulated path. Let us emphasize that the problem can be extended to the n-dimensional setting for any n ∈ ℕ: for instance, one can ask that the regulated path remain in a certain orthant (so that the path is symmetrically reflected as soon as it touches a coordinate hyperplane) or in a polyhedral region; see [33]. Let us recall that the one-dimensional Skorokhod problem admits a unique solution.
Theorem 1.
Let y D 0 ( R 0 + ) . Then, there exists a unique solution ( z , a ) D ( R 0 + ) × D ( R 0 + ) of the one-dimensional Skorokhod reflection problem (given in Definition 1). In particular,
$a(t) = \sup_{s \le t} y^-(s),$
where $y^-(t) = \max\{-y(t), 0\}$. Moreover, if $y \in C_0(\mathbb{R}_0^+)$, then $(z, a) \in C(\mathbb{R}_0^+) \times C(\mathbb{R}_0^+)$.
With the previous theorem in mind, one can define the reflection map as Ψ : D 0 ( R 0 + ) D ( R 0 + ) such that, for any y D 0 ( R 0 + ) , Ψ ( y ) = z , where ( z , a ) is the unique solution of the Skorokhod problem. We also have that Ψ : C 0 ( R 0 + ) C ( R 0 + ) is well defined.
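As a quick illustration, the explicit formula of Theorem 1 can be applied directly to a sampled path. The following Python sketch (the function name reflection_map is ours, not from the paper) computes the regulated path z and the regulator a on a discrete grid; the regulator increases exactly when the path would cross 0.

```python
import numpy as np

def reflection_map(y):
    """Discrete Skorokhod reflection: given samples y[0..n] with y[0] >= 0,
    return (z, a) with z = y + a, where a(t) = sup_{s<=t} y(s)^- (Theorem 1)."""
    y = np.asarray(y, dtype=float)
    a = np.maximum.accumulate(np.maximum(-y, 0.0))  # running sup of the negative part
    z = y + a
    return z, a

# Example: a path that dips below zero is regulated back onto the half-line.
y = np.array([0.0, 0.5, -0.3, -0.8, 0.1, -0.2])
z, a = reflection_map(y)
```

Note that a is nondecreasing and, in accordance with condition (iii) of Definition 1, it grows only at the points where z sits at 0.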
Concerning the continuity of the reflection map on D 0 ( R 0 + ) , we first have to consider some topology on it (see [25] (Chapter 3)).
Definition 2.
Let us consider the space $D([0,T])$ of the càdlàg functions $f: [0,T] \to \mathbb{R}$. We denote by $U$ the uniform topology, induced on $D([0,T])$ by the metric
$d_U(x_1, x_2) = \sup_{t \in [0,T]} |x_1(t) - x_2(t)|, \quad x_1, x_2 \in D([0,T]).$
Let us denote by ι the identity map on [ 0 , T ] , i.e.,  ι ( t ) = t for any t [ 0 , T ] . Moreover, let
$\Lambda([0,T]) = \{\lambda: [0,T] \to [0,T] : \lambda \text{ is strictly increasing and } \lambda(0) = 0,\ \lambda(T) = T\}.$
We denote by J 1 the topology on D ( [ 0 , T ] ) defined by the metric
$d_{J_1}(x_1, x_2) = \inf_{\lambda \in \Lambda([0,T])} \max\{d_U(x_1 \circ \lambda, x_2),\, d_U(\lambda, \iota)\}, \quad x_1, x_2 \in D([0,T]),$
where $\circ: D([0,T]) \times D([0,T]) \to D([0,T])$ is the composition map, i.e.,
$(x_1 \circ x_2)(t) := x_1(x_2(t)), \quad t \in [0,T],\ x_1, x_2 \in D([0,T]).$
We denote by $x_n \xrightarrow{U} x$ and $x_n \xrightarrow{J_1} x$ the convergence in $D([0,T])$, respectively, in $U$ and $J_1$. It is clear that if $x_n \xrightarrow{U} x$, then $x_n \xrightarrow{J_1} x$. We say that $x_n \xrightarrow{J_1} x$ in $D(\mathbb{R}_0^+)$ if $x_n \xrightarrow{J_1} x$ in $D([0,T_k])$ for a sequence $\{T_k\}_{k \ge 0}$ of continuity points of $x$ such that $T_k \to +\infty$. This topology is metrizable and we still denote the metric by $d_{J_1}$.
In general, if we have a sequence of $D(\mathbb{R}_0^+)$-valued random variables, we write $X_n \Rightarrow X$ whenever $X_n$ converges in distribution to a $D(\mathbb{R}_0^+)$-valued random variable $X$ with respect to the $J_1$ topology.
Now, we can describe the continuity properties of the reflection map (see [25] (Lemma 13.5.1 and Theorem 13.5.1 ) for the compact domain case and extend it with [25] (Theorem 12.9.4 ) to the non-compact domain case).
Theorem 2.
The map $\Psi: D_0([0,T]) \to D([0,T])$ is Lipschitz-continuous in both the $U$ and the $J_1$ topology with Lipschitz constant 2, i.e.,
$d_U(\Psi(x_1), \Psi(x_2)) \le 2\, d_U(x_1, x_2)$
and
$d_{J_1}(\Psi(x_1), \Psi(x_2)) \le 2\, d_{J_1}(x_1, x_2).$ (1)
Moreover, $\Psi: D_0([0,+\infty)) \to D([0,+\infty))$ is Lipschitz-continuous in the $J_1$ topology with Lipschitz constant 2, i.e., Equation (1) still holds in this case.
Let us also denote
$\tilde\Lambda = \{\lambda: \mathbb{R}_0^+ \to \mathbb{R}_0^+ : \lambda \text{ is continuous, nondecreasing and } \lambda(0) = 0\}$
and observe the following elementary properties of the reflection map (which are particular cases of [25] (Lemma 13.5.2)).
Lemma 1.
The following properties are true:
(i)
For any x D 0 ( R 0 + ) and b > 0 , it holds that
Ψ ( b x ) = b Ψ ( x ) ;
(ii)
For any x D 0 ( R 0 + ) and λ Λ ˜ , it holds that
$\Psi(x \circ \lambda) = \Psi(x) \circ \lambda.$
Proof. 
Let us first observe that, clearly, $(bx)^- = b\,x^-$. Then it holds, for any $t \ge 0$, by Theorem 1, that
$\Psi(bx)(t) = b\,x(t) + \sup_{s \le t} (bx)^-(s) = b\big(x(t) + \sup_{s \le t} x^-(s)\big) = b\,\Psi(x)(t),$
proving property (i).
Concerning property (ii), let us observe that $(x \circ \lambda)^- = x^- \circ \lambda$. Define the running supremum $a(x)(t) = \sup_{s \le t} x^-(s)$ for any $t \ge 0$. Then, $a(x) \in D(\mathbb{R}_0^+)$ by Theorem 1. Moreover, we have $a(x) \circ \lambda = a(x \circ \lambda)$. Indeed, let us define the set $F$ of maps $f: \mathbb{R}_0^+ \to 2^{\mathbb{R}}$ and the range map $R: D(\mathbb{R}_0^+) \to F$ as $R(z)(t) = \{y \in \mathbb{R} : \exists s \le t : z(s) = y\}$ for any $t \ge 0$ and any $z \in D(\mathbb{R}_0^+)$. Then, $\lambda$ being increasing and continuous, we have
$R(x^- \circ \lambda)(t) = \{y \in \mathbb{R} : \exists s \le t : x^-(\lambda(s)) = y\} = \{y \in \mathbb{R} : \exists \lambda(s) \le \lambda(t) : x^-(\lambda(s)) = y\} = \{y \in \mathbb{R} : \exists \tau \le \lambda(t) : x^-(\tau) = y\} = R(x^-)(\lambda(t)).$
Hence,
$a(x \circ \lambda)(t) = \sup R(x^- \circ \lambda)(t) = \sup R(x^-)(\lambda(t)) = a(x)(\lambda(t))$
and finally
$\Psi(x \circ \lambda)(t) = x(\lambda(t)) + a(x \circ \lambda)(t) = x(\lambda(t)) + a(x)(\lambda(t)) = \Psi(x)(\lambda(t)).$
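The two properties of Lemma 1 can also be sanity-checked numerically. The sketch below (a discrete version of the reflection map, with our own names) verifies positive homogeneity and commutation with a time change on a random-walk path; discretely, a continuous nondecreasing time change is mimicked by a nondecreasing index map that visits every grid point.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(y):
    """Discrete reflection map of Theorem 1: Psi(y) = y + running sup of y^-."""
    return y + np.maximum.accumulate(np.maximum(-y, 0.0))

# A random-walk path starting at 0 (so it lies in D_0).
x = np.concatenate([[0.0], np.cumsum(rng.normal(size=500))])

# Property (i): positive homogeneity, Psi(b x) = b Psi(x) for b > 0.
b = 2.5
lhs_i, rhs_i = psi(b * x), b * psi(x)

# Property (ii): Psi(x composed with lambda) = Psi(x) composed with lambda,
# where lambda dwells a random number of steps at each index (hence it is
# nondecreasing, starts at 0 and skips no grid point).
lam = np.repeat(np.arange(len(x)), rng.integers(1, 4, size=len(x)))
lhs_ii, rhs_ii = psi(x[lam]), psi(x)[lam]
```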
   □

2.2. The Reflected Brownian Motion

Let us fix a complete filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, \mathbb{P})$, supporting all the following random variables and processes. Let $W = \{W(t),\ t \ge 0\}$ be a standard Brownian motion and $W_{\eta,\sigma^2} = \{W_{\eta,\sigma^2}(t),\ t \ge 0\}$ be the Brownian motion with drift $\eta \in \mathbb{R}$ and diffusion parameter $\sigma^2$ with $\sigma > 0$ (we denote it as $W_{\eta,\sigma^2} \sim \mathrm{BM}(\eta, \sigma^2)$) defined as
$W_{\eta,\sigma^2}(t) = \sigma W(t) + \eta t, \quad t \ge 0.$
If σ = 1 , we denote W η BM ( η ) , and if, additionally, η = 0 , we denote W BM . In any case, we set P ( W η , σ 2 ( 0 ) = 0 ) = 1 . It is well known that P ( W η , σ 2 C ( R 0 + ) ) = 1 ; thus, we can solve the Skorokhod problem on each sample path of W η , σ 2 .
Definition 3.
The Reflected Brownian Motion (RBM) with drift η and diffusion parameter σ 2 is given by the process W ˜ η , σ 2 = { W ˜ η , σ 2 ( t ) , t 0 } defined as
W ˜ η , σ 2 = Ψ ( W η , σ 2 ) ,
and we denote W ˜ η , σ 2 RBM ( η , σ 2 ) . If  σ 2 = 1 , we denote it by W ˜ η RBM ( η ) . If additionally η = 0 , we denote it by W ˜ RBM .
It is clear, by definition, that an RBM is almost surely non-negative. This is achieved thanks to the regulator, which provides a symmetrization of the velocity with which the original BM tries to cross 0. This additive component symmetrizes perfectly such an effort, thus preventing the process from crossing the 0 threshold. However, in this case, the regulator is a singular function; thus, we cannot properly speak about velocity. The symmetrizing effect of the regulator is instead clearer in the processes that are linked to the queueing systems, as will be shown in Section 4. First, we want to prove a simple scaling property. To do this, let us denote by $\iota_b \in \tilde\Lambda$ the map $\iota_b(t) = bt$. Obviously, $\iota_1 \equiv \iota$. Extending the arguments in [25] (Theorem 5.7.9), we have the following result.
Proposition 1.
Let W ˜ η , σ 2 RBM ( η , σ 2 ) . The following properties hold true.
(i)
For any b , c > 0 , it holds that
$c\,\tilde{W}_{\eta,\sigma^2} \circ \iota_b \sim \mathrm{RBM}(\eta b c,\ \sigma^2 c^2 b);$
(ii)
If η 0 , it holds that
$\tilde{W}_{\eta,\sigma^2} \stackrel{d}{=} \frac{\sigma^2}{|\eta|}\, \tilde{W}_{\mathrm{sign}(\eta)} \circ \iota_{\eta^2/\sigma^2};$
(iii)
If η = 0 , it holds that
$\tilde{W}_{0,\sigma^2} \stackrel{d}{=} \tilde{W} \circ \iota_{\sigma^2};$
(iv)
If η = 0 , it holds that
$\tilde{W}_{0,\sigma^2} \stackrel{d}{=} \sigma\,\tilde{W}.$
Proof. 
Let us first prove property (i). Consider $W_{\eta,\sigma^2} \sim \mathrm{BM}(\eta, \sigma^2)$ such that $\tilde{W}_{\eta,\sigma^2} = \Psi(W_{\eta,\sigma^2})$. The self-similarity of the Brownian motion implies that
$c\,W_{\eta,\sigma^2} \circ \iota_b = c\sigma\,(W \circ \iota_b) + c\eta b\,\iota \stackrel{d}{=} c\sigma\sqrt{b}\,W + c\eta b\,\iota = W_{\eta b c,\, \sigma^2 c^2 b}.$
Applying the reflection map on both sides, we conclude the proof by Lemma 1.
Property (ii) follows from property (i) applied to $\tilde{W}_{\mathrm{sign}(\eta)}$ by choosing $b = \eta^2/\sigma^2$ and $c = \sigma^2/|\eta|$. Property (iii) follows from property (i) applied to $\tilde{W}$ by choosing $b = \sigma^2$ and $c = 1$. Finally, property (iv) follows from (i) applied to $\tilde{W}$ by choosing $b = 1$ and $c = \sigma$.    □
From now on, we will focus on the case σ 2 = 1 , as any other case can be deduced from this one. Let us stress that the one-dimensional distribution of W ˜ η is well known (see [25] (Theorem 5.7.7 ) or [34] (Section 3.6 )).
Theorem 3.
For $\tilde{W}_\eta \sim \mathrm{RBM}(\eta)$, $y \ge 0$ and $t > 0$, it holds that
$\mathbb{P}(\tilde{W}_\eta(t) > y) = \Phi\!\left(\frac{-y + \eta t}{\sqrt{t}}\right) + e^{2\eta y}\,\Phi\!\left(\frac{-y - \eta t}{\sqrt{t}}\right),$
where $\Phi$ is the cumulative distribution function (CDF) of a standard normal random variable, i.e.,
$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-\frac{y^2}{2}}\,dy, \quad x \in \mathbb{R}.$
Remark 1.
It is clear that P ( W ˜ η ( t ) > y ) = 1 for any y < 0 by definition. Let us also observe that, if  η = 0 , then the one-dimensional distribution of W ˜ is a folded Gaussian distribution. This will be made clearer in the following subsection.
As a consequence of the previous theorem, we have the following corollary (see [35] (Corollary 1.1.1)).
Corollary 1.
For $\tilde{W}_\eta \sim \mathrm{RBM}(\eta)$ with $\eta < 0$ and $t > 0$, it holds that
$\mathbb{E}[\tilde{W}_\eta(t)] = \frac{1}{2|\eta|} - \frac{\eta^2 t + 1}{|\eta|}\left(1 - \Phi(|\eta|\sqrt{t})\right) + \sqrt{t}\,\varphi(|\eta|\sqrt{t}),$
where $\varphi$ is the probability density function (PDF) of a standard normal random variable, i.e.,
$\varphi(x) = \Phi'(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}, \quad x \in \mathbb{R}.$
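Since the mean of a non-negative random variable is the integral of its survival function, Theorem 3 and Corollary 1 can be cross-checked against each other numerically. A minimal Python sketch (all function names are ours):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal PDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def rbm_survival(y, t, eta):
    """P(RBM_eta(t) > y), as in Theorem 3."""
    st = math.sqrt(t)
    return Phi((-y + eta * t) / st) + math.exp(2.0 * eta * y) * Phi((-y - eta * t) / st)

def rbm_mean(t, eta):
    """E[RBM_eta(t)] for eta < 0, as in Corollary 1."""
    a = abs(eta)
    st = math.sqrt(t)
    return (1.0 / (2.0 * a)
            - (eta * eta * t + 1.0) / a * (1.0 - Phi(a * st))
            + st * phi(a * st))

# Cross-check: the mean equals the integral over y of the survival function
# (composite Simpson rule on [0, 20]; the tail beyond is negligible).
t, eta = 1.0, -1.0
n, upper = 20000, 20.0
h = upper / n
simpson = rbm_survival(0.0, t, eta) + rbm_survival(upper, t, eta)
for k in range(1, n):
    simpson += (4 if k % 2 else 2) * rbm_survival(k * h, t, eta)
simpson *= h / 3.0
```

The two quantities agree to high accuracy, which also confirms that the survival function evaluates to 1 at y = 0.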

2.3. Alternative Construction of the Reflected Brownian Motion

One can provide a different construction of the Reflected Brownian Motion by means of the absolute value of an Itô process. To do this, let us observe, as done in [36], that, whenever y ( 0 ) = 0 , it holds that
$\Psi(y)(t) = \sup_{s \le t}\,(y(t) - y(s)).$
Using this characterization, together with the fact that W η ( 0 ) = 0 , we have
$\tilde{W}_\eta(t) = \sup_{s \le t}\,(\eta t + W(t) - \eta s - W(s)) = \sup_{s \le t}\,\big(-\eta(s - t) - (W(s) - W(t))\big).$
It is clear that $\hat{W} = -W$ is still a Brownian motion and we can write
$\tilde{W}_\eta(t) = \sup_{s \le t}\,\big(-\eta(s - t) + (\hat{W}(s) - \hat{W}(t))\big) = \hat{M}_{-\eta}(t) - \hat{W}_{-\eta}(t),$
where $\hat{W}_{-\eta} \sim \mathrm{BM}(-\eta)$ and $\hat{M}_{-\eta}(t) = \max_{s \le t} \hat{W}_{-\eta}(s)$ is the running maximum of $\hat{W}_{-\eta}$. Combining the previous argument with [29] (Theorem 1), we get the following result.
Theorem 4.
For W ˜ η RBM ( η ) , the following properties are true.
(i)
There exists a process $W_{-\eta} \sim \mathrm{BM}(-\eta)$ such that, denoting by $M_{-\eta}(t) = \sup_{s \le t} W_{-\eta}(s)$ its running maximum, it holds that
$\tilde{W}_\eta = M_{-\eta} - W_{-\eta}.$
(ii)
Let W BM and consider X η the unique strong solution of the stochastic differential equation
$dX_\eta(t) = \eta\,\mathrm{sign}(X_\eta(t))\,dt + dW(t), \quad X_\eta(0) = 0.$ (2)
Then, it holds that
$\tilde{W}_\eta \stackrel{d}{=} |X_\eta|.$
Remark 2.
  • In the case $\eta = 0$, the previous theorem tells us that $\tilde{W} \stackrel{d}{=} |W|$, i.e., it is a folded Gaussian process, as observed in the previous subsection.
  • The equality in law $M_0 - W \stackrel{d}{=} |W|$ was proven by Lévy prior to [29]; see [37] (Chapter VI, Theorem 2.3).
The previous characterization of the RBM can be fruitfully used for stochastic simulation. Indeed, Equation (2) can be discretized by means of the Euler scheme (see [38]), which is weakly convergent of order 1. On the other hand, one could also simulate W ˜ η by means of the reflection map. However, such an algorithm is weakly convergent of order 1 / 2 . In any case, one can provide not only algorithms with improved weak convergence order, but also exact simulation algorithms for W ˜ η ( t ) at some grid points t 0 (see [39]). Other approaches to generate the sample paths of the RBM are based on the Gauss–Markov nature of the Drifted Brownian Motion (see [40,41]). There is also another approximate simulation method that makes use of a continuous-time random walk (CTRW) approximation. The latter is made clearer in Section 5.
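The two simulation routes just mentioned are easy to sketch in a few lines. The Python fragment below (names and parameters are ours; it is a sketch, not the exact schemes of [38,39]) simulates RBM paths both through the discretized reflection map and as the absolute value of the Euler scheme for the bang-bang SDE of Theorem 4; for η = 0 the output can be checked against the folded Gaussian mean √(2/π).

```python
import numpy as np

rng = np.random.default_rng(42)

def rbm_reflection(eta, t, n_steps, n_paths, rng):
    """RBM(eta) on the grid, via the discrete reflection map applied to
    Euler-discretized drifted Brownian paths (weak order 1/2)."""
    dt = t / n_steps
    incr = eta * dt + np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
    w = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)], axis=1)
    return w + np.maximum.accumulate(np.maximum(-w, 0.0), axis=1)

def rbm_euler_abs(eta, t, n_steps, n_paths, rng):
    """RBM(eta) at time t as |X_eta(t)|, with X_eta the Euler scheme for the
    bang-bang SDE dX = eta sign(X) dt + dW of Theorem 4 (weak order 1)."""
    dt = t / n_steps
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += eta * np.sign(x) * dt + np.sqrt(dt) * rng.normal(size=n_paths)
    return np.abs(x)

# For eta = 0, RBM(1) is folded Gaussian, so E[RBM(1)] = sqrt(2/pi) ~ 0.7979.
z = rbm_reflection(0.0, 1.0, 2000, 4000, rng)
m_refl = z[:, -1].mean()
m_abs = rbm_euler_abs(0.0, 1.0, 2000, 4000, rng).mean()
```

The reflection route slightly underestimates the mean (the discrete grid misses part of the running extremum, consistent with its lower weak order), while the bang-bang route is unbiased for η = 0.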

3. The Reflected Brownian Motion Delayed by the Inverse Stable Subordinator

3.1. Inverse Stable Subordinators and Semi-Markov Processes

Let us first introduce the inverse stable subordinator. To do this, we recall the following definitions (we refer to [42,43,44]).
Definition 4.
For any α ( 0 , 1 ) , a standard α-stable subordinator σ α = { σ α ( t ) , t 0 } is an increasing Lévy process with σ α ( 0 ) = 0 almost surely (a.s.) and such that
$\mathbb{E}[e^{-\lambda \sigma_\alpha(t)}] = e^{-t\lambda^\alpha}, \quad t \ge 0,\ \lambda \ge 0.$
We call the inverse α-stable subordinator the process L α = { L α ( t ) , t 0 } defined as
$L_\alpha(t) = \inf\{y \ge 0 : \sigma_\alpha(y) > t\}, \quad t \ge 0.$
We denote by L the Laplace transform operator, i.e., for a function f L loc 1 ( R 0 + ) ,
$\mathcal{L}[f](\lambda) = \int_0^{+\infty} e^{-\lambda t} f(t)\,dt,$
and
abs ( f ) = inf { λ R : L [ f ] ( λ )   is   well - defined   and   finite } ,
the abscissa of convergence of f. It is well known that L [ f ] ( λ ) is well defined for any λ > abs ( f ) (see [45] (Proposition 1.4.1 )).
In the following proposition, we recall some of the main properties of the stable subordinator and its inverse (see [44]).
Proposition 2.
Let α ( 0 , 1 ) and σ α an α-stable subordinator with inverse L α . Then,
(i)
σ α is a strictly increasing process with a.s. pure jump sample paths;
(ii)
L α is an increasing process with a.s. continuous sample paths;
(iii)
For any fixed t > 0 , σ α ( t ) is an absolutely continuous random variable with PDF g α ( s ; t ) satisfying
$g_\alpha(s; t) = t^{-1/\alpha}\, g_\alpha(s\, t^{-1/\alpha}), \quad s > 0,$
where $g_\alpha(\cdot) := g_\alpha(\cdot\,; 1)$. This means that $\sigma_\alpha(t) \stackrel{d}{=} t^{1/\alpha} \sigma_\alpha(1)$;
(iv)
For any fixed t > 0 , L α ( t ) is an absolutely continuous random variable with PDF f α ( s ; t ) satisfying
$f_\alpha(s; t) = \frac{t}{\alpha}\, s^{-1 - \frac{1}{\alpha}}\, g_\alpha(t\, s^{-1/\alpha}), \quad s > 0;$
(v)
For any fixed s > 0 , it holds that abs ( f α ( s ; · ) ) = 0 and
$\mathcal{L}[f_\alpha(s; \cdot)](\lambda) = \lambda^{\alpha - 1}\, e^{-s\lambda^\alpha}, \quad \lambda > 0.$
Remark 3.
Note that, while $\sigma_\alpha$ is a Lévy process, and hence a strong Markov process, $L_\alpha$ is not even a Markov process.
As is clear from the previous proposition, explicit formulae for the one-variable function $g_\alpha$ lead to analogous ones for the density of the stable subordinator and its inverse. However, such explicit formulae are not so easy to handle. First of all, there are several integral representations of $g_\alpha$ (see [46] and references therein). Among them, let us recall the so-called Mikusiński representation:
$g_\alpha(s) = \frac{1}{\pi}\,\frac{\alpha}{1-\alpha}\,\frac{1}{s} \int_0^\pi s^{-\frac{\alpha}{1-\alpha}}\, B(\tau)\, e^{-s^{-\frac{\alpha}{1-\alpha}} B(\tau)}\, d\tau, \qquad B(\tau) = \left(\frac{\sin(\alpha\tau)}{\sin(\tau)}\right)^{\frac{\alpha}{1-\alpha}} \frac{\sin((1-\alpha)\tau)}{\sin(\tau)}.$
This representation is shown to be quite useful both to determine asymptotic properties of g α and to evaluate it numerically (see [47]). The function g α can be also expressed in several ways as a function series (see [48]). Let us refer, among them, to the following formula:
$g_\alpha(s) = \frac{1}{\pi} \sum_{j=1}^{\infty} \frac{(-1)^{j+1}}{j!}\, s^{-1-\alpha j}\, \Gamma(1 + \alpha j)\, \sin(\pi \alpha j).$
The latter leads to a quite interesting representation of f α ( s ; t ) , which can be also used to prove some regularity results (as, for instance, done in [49]). Further discussion on the various representations of g α and f α is given, for instance, in the Appendix of [50].
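The truncated series is straightforward to evaluate numerically, and for α = 1/2 it can be checked against the closed-form Lévy density $g_{1/2}(s) = s^{-3/2} e^{-1/(4s)}/(2\sqrt{\pi})$ (a standard fact, consistent with the Laplace transform $e^{-\lambda^{1/2}}$). A minimal sketch (function names are ours):

```python
import math

def g_half_exact(s):
    """Closed form for alpha = 1/2: the density of the Levy subordinator
    at time 1, g_{1/2}(s) = s^(-3/2) exp(-1/(4s)) / (2 sqrt(pi))."""
    return s ** -1.5 * math.exp(-1.0 / (4.0 * s)) / (2.0 * math.sqrt(math.pi))

def g_series(s, alpha, n_terms=100):
    """Truncated series representation of g_alpha(s) given above."""
    total = 0.0
    for j in range(1, n_terms + 1):
        total += ((-1) ** (j + 1) / math.factorial(j)
                  * s ** (-1.0 - alpha * j)
                  * math.gamma(1.0 + alpha * j)
                  * math.sin(math.pi * alpha * j))
    return total / math.pi
```

As the text notes, the series is mainly useful away from very small s, where the truncation converges slowly; for moderate s a hundred terms already give machine-level accuracy.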
In the following, we want to apply a time-change procedure on W ˜ by using as a stochastic clock the process L α . Thus, it could be useful to recall the notion of the semi-Markov process, as introduced in [10].
Definition 5.
A real-valued càdlàg process $X = \{X(t),\ t \ge 0\}$ is semi-Markov if, denoting by $\gamma = \{\gamma(t),\ t \ge 0\}$ its age process, i.e., $\gamma(t) = \sup\{\varepsilon_0 > 0 : \forall \varepsilon \in (0, \varepsilon_0),\ X(t - \varepsilon) = X(t)\}$, for any $0 \le s < t$ and any $A \in \mathcal{B}(\mathbb{R})$, where $\mathcal{B}(\mathbb{R})$ is the Borel σ-algebra of $\mathbb{R}$, it holds that
P ( X ( t ) A | F s ) = P ( X ( t ) A | X ( s ) , γ ( s ) ) .
In particular, if we consider a strong Markov process $X = \{X(t),\ t \ge 0\}$ and an independent inverse α-stable subordinator $L_\alpha$, we can define the time-changed process $X_\alpha = X \circ L_\alpha$. In [51], it is shown that, if we define the process $V := \{V(t),\ t \ge 0\}$ as $V(t) = t - \sigma_\alpha(L_\alpha(t)-)$ for any $t \ge 0$, the couple $(X_\alpha, V)$ is a strong Markov process. This implies (actually, it is equivalent to the fact) that $X_\alpha$ is a semi-Markov process. The semi-Markov property of $X_\alpha$, in particular, follows from the fact that $(X, \sigma_\alpha)$ is a Markov additive process in the sense of [52] (see [51] for details). Finally, let us recall that such a semi-Markov property actually tells us that the process $X_\alpha$ satisfies a (strong) regenerative property (in the sense of [53]) with the regenerative set given by
$\mathcal{M} := \{s \ge 0 : \exists t \ge 0,\ \sigma_\alpha(t) = s\},$
i.e., the process $X_\alpha$ satisfies the strong Markov property at any random time $T$ such that $T(\omega) \in \mathcal{M}(\omega)$ for almost any $\omega \in \Omega$. Actually, a regenerative property is common to any semi-Markov process and could be informally stated as follows: a semi-Markov process exhibits the strong Markov property at each time at which it changes its state.

3.2. The Reflected Brownian Motion Delayed by the Inverse Stable Subordinator

We are now ready to introduce the Delayed Reflected Brownian Motion.
Definition 6.
Let W ˜ η , σ 2 RBM ( η , σ 2 ) , α ( 0 , 1 ) and L α be an inverse stable subordinator independent of W ˜ η , σ 2 . We define the Reflected Brownian Motion delayed by an inverse α-stable subordinator (DRBM) as the process W ˜ α η , σ 2 = W ˜ η , σ 2 L α , we denote it as W ˜ α η , σ 2 DRBM ( η , σ 2 ; α ) and we call W ˜ η , σ 2 its parent process. As before, if  σ = 1 , we denote it as W ˜ α η DRBM ( η ; α ) , and if furthermore η = 0 , we denote it as W ˜ α DRBM ( α ) .
First of all, let us stress that the arguments in the previous subsection tell us that W ˜ α η , σ 2 is semi-Markov. Together with W ˜ α η , σ 2 , we can define the Delayed Brownian Motion W α η , σ 2 : = W η , σ 2 L α , where W η , σ 2 BM ( η , σ 2 ) and L α is an inverse α -stable subordinator independent of it (see, for instance, [27]). We denote this as W α η , σ 2 DBM ( η , σ 2 ; α ) and we call W η , σ 2 its parent process.
Another natural definition for the DRBM could be obtained by simply considering Ψ ( W α η , σ 2 ) . In the following proposition, we observe that such a definition is equivalent to the one we have given.
Proposition 3.
Let W α η , σ 2 DBM ( η , σ 2 ; α ) with parent process W η , σ 2 . Let W ˜ η , σ 2 = Ψ ( W η , σ 2 ) and W ˜ α η , σ 2 DRBM ( η , σ 2 ; α ) with parent process W ˜ η , σ 2 . Then, W ˜ α η , σ 2 = Ψ ( W α η , σ 2 ) almost surely.
Proof. 
This proposition easily follows from property (ii) of Lemma 1 after observing that P ( L α Λ ˜ ) = 1 .    □
Remark 4.
The couple ( W ˜ α η , σ 2 , a ( ( W α η , σ 2 ) ) ) is the unique solution of the Skorokhod problem in Definition 1 for the sample paths of W α η , σ 2 .
We can also deduce a scaling property for the DRBM in the spirit of Proposition 1.
Proposition 4.
Let $\tilde{W}_\alpha^{\eta,\sigma^2} \sim \mathrm{DRBM}(\eta, \sigma^2; \alpha)$. The following properties hold true.
(i)
For any c > 0 , it holds that
$c\,\tilde{W}_\alpha^{\eta,\sigma^2} \sim \mathrm{DRBM}(\eta c,\ \sigma^2 c^2;\ \alpha);$
(ii)
It holds that
$\tilde{W}_\alpha^{\eta,\sigma^2} \stackrel{d}{=} \sigma\,\tilde{W}_\alpha^{\eta/\sigma}.$
Proof. 
Let us first show property (i). Let $\tilde{W}_{\eta,\sigma^2} \sim \mathrm{RBM}(\eta, \sigma^2)$ be the parent process of $\tilde{W}_\alpha^{\eta,\sigma^2}$, and $L_\alpha$ be the involved inverse α-stable subordinator. By Proposition 1 with $b = 1$, we get $c\,\tilde{W}_{\eta,\sigma^2} \sim \mathrm{RBM}(\eta c, \sigma^2 c^2)$. By the Skorokhod representation theorem (see, for instance, [54]), we can suppose, without loss of generality, that there exists a process $\tilde{W}_{\eta c,\sigma^2 c^2} \sim \mathrm{RBM}(\eta c, \sigma^2 c^2)$ such that $\tilde{W}_{\eta c,\sigma^2 c^2} = c\,\tilde{W}_{\eta,\sigma^2}$ almost surely. This clearly implies that $\tilde{W}_{\eta c,\sigma^2 c^2}$ is also independent of $L_\alpha$. Thus, composing both sides of the equality with $L_\alpha$, we conclude the proof of (i). Property (ii) follows from (i) applied to $\tilde{W}_\alpha^{\eta/\sigma}$ with $c = \sigma$.    □
Remark 5.
Thanks to the previous proposition, we can always reduce to the case $\sigma^2 = 1$. However, we cannot reduce to the case $\eta = \pm 1, 0$. This is due to the fact that the composition operator is not symmetric and $L_\alpha \circ \iota_b \ne \iota_b \circ L_\alpha$ in general.
Thus, from now on, let us consider σ 2 = 1 . Concerning the one-dimensional distribution, we get a subordination principle from a simple conditioning argument.
Proposition 5.
For $\tilde{W}_\alpha^\eta \sim \mathrm{DRBM}(\eta; \alpha)$, $y \ge 0$ and $t > 0$, it holds that
$\mathbb{P}(\tilde{W}_\alpha^\eta(t) > y) = \int_0^{+\infty} \left[\Phi\!\left(\frac{-y + \eta s}{\sqrt{s}}\right) + e^{2\eta y}\,\Phi\!\left(\frac{-y - \eta s}{\sqrt{s}}\right)\right] f_\alpha(s; t)\,ds.$
Proof. 
Let us rewrite $\mathbb{P}(\tilde{W}_\alpha^\eta(t) > y)$ as a conditional expectation. Precisely, if we let $\mathbb{1}_A$ be the indicator function of the set $A \in \mathcal{B}(\mathbb{R})$, that is to say,
$\mathbb{1}_A(x) = \begin{cases} 1 & x \in A \\ 0 & x \notin A, \end{cases}$
it holds that
P ( W ˜ α η ( t ) > y ) = E [ 1 ( y , + ) ( W ˜ α η ( t ) ) ] = E [ 1 ( y , + ) ( W ˜ η ( L α ( t ) ) ) ] ,
where W ˜ η is the parent process of W ˜ α η and L α is independent of W ˜ η . Using the tower property of conditional expectations, we achieve
E [ 1 ( y , + ) ( W ˜ η ( L α ( t ) ) ) ] = E [ E [ 1 ( y , + ) ( W ˜ η ( L α ( t ) ) ) | L α ( t ) ] ] .
We can set E [ 1 ( y , + ) ( W ˜ η ( L α ( t ) ) ) | L α ( t ) ] = F ( L α ( t ) ) with   
$F(s) = \mathbb{E}[\mathbb{1}_{(y,+\infty)}(\tilde{W}_\eta(L_\alpha(t)))\,|\,L_\alpha(t) = s] = \mathbb{E}[\mathbb{1}_{(y,+\infty)}(\tilde{W}_\eta(s))] = \Phi\!\left(\frac{-y + \eta s}{\sqrt{s}}\right) + e^{2\eta y}\,\Phi\!\left(\frac{-y - \eta s}{\sqrt{s}}\right),$
where we also use the fact that L α is independent of W ˜ η and Theorem 3. Finally, we have
E [ 1 ( y , + ) ( W ˜ η ( L α ( t ) ) ) ] = E [ F ( L α ( t ) ) ] = 0 + F ( s ) f α ( s ; t ) d s ,
concluding the proof.    □
With the same exact argument, we can prove the following statement.
Corollary 2.
For $\tilde{W}_\alpha^\eta \sim \mathrm{DRBM}(\eta; \alpha)$ with $\eta < 0$ and $t > 0$, it holds that
$\mathbb{E}[\tilde{W}_\alpha^\eta(t)] = \int_0^{+\infty} \left[\frac{1}{2|\eta|} - \frac{\eta^2 s + 1}{|\eta|}\left(1 - \Phi(|\eta|\sqrt{s})\right) + \sqrt{s}\,\varphi(|\eta|\sqrt{s})\right] f_\alpha(s; t)\,ds.$
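As a worked example of the subordination principle, for α = 1/2 the density f_{1/2}(s; t) obtained by plugging the closed form of g_{1/2} into Proposition 2 (iv) reduces to the half-Gaussian $e^{-s^2/(4t)}/\sqrt{\pi t}$, so the integral of Proposition 5 can be evaluated by direct quadrature. A sketch (function names are ours):

```python
import math

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rbm_survival(y, s, eta):
    """P(RBM_eta(s) > y), Theorem 3."""
    ss = math.sqrt(s)
    return Phi((-y + eta * s) / ss) + math.exp(2.0 * eta * y) * Phi((-y - eta * s) / ss)

def f_half(s, t):
    """f_{1/2}(s; t): half-Gaussian density exp(-s^2/(4t)) / sqrt(pi t)."""
    return math.exp(-s * s / (4.0 * t)) / math.sqrt(math.pi * t)

def drbm_survival(y, t, eta, n=20000):
    """P(DRBM(t) > y) for alpha = 1/2 via the subordination integral of
    Proposition 5, evaluated with the composite Simpson rule."""
    upper = 12.0 * math.sqrt(t)
    h = upper / n

    def integrand(s):
        if s == 0.0:  # the integrand extends continuously to s = 0
            return f_half(0.0, t) if y <= 0.0 else 0.0
        return rbm_survival(y, s, eta) * f_half(s, t)

    total = integrand(0.0) + integrand(upper)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * integrand(k * h)
    return total * h / 3.0
```

At y = 0 the inner survival probability is identically 1, so the quadrature simply integrates the half-Gaussian density and returns 1, which is a convenient sanity check.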
As for the classical case, we could ask if there are some alternative representations of the DRBM in terms of the solution of some particular SDE. This is done by generalizing Theorem 4.
Theorem 5.
For W ˜ α η DRBM ( η ; α ) , the following properties are true.
(i)
There exists a process $W_\alpha^{-\eta} \sim \mathrm{DBM}(-\eta; \alpha)$ such that, denoting by $M_\alpha^{-\eta}(t) = \sup_{s \le t} W_\alpha^{-\eta}(s)$ its running maximum, it holds that
$\tilde{W}_\alpha^\eta = M_\alpha^{-\eta} - W_\alpha^{-\eta}$
almost surely.
(ii)
Let $W_\alpha \sim \mathrm{DBM}(\alpha)$ and consider $X_\alpha^\eta$ the unique strong solution of the time-changed stochastic differential equation (in the sense of [30])
$dX_\alpha^\eta(t) = \eta\,\mathrm{sign}(X_\alpha^\eta(t))\,dL_\alpha(t) + dW_\alpha(t), \quad X_\alpha^\eta(0) = 0.$ (4)
Then, it holds that
W ˜ α η = d | X α η | .
Proof. 
Let us first prove property (i). Let $\widetilde{W}^\eta$ be the parent process of $\widetilde{W}^\eta_\alpha$. By item (i) of Theorem 4, we know that there exists $W^\eta \in \mathrm{BM}(\eta)$ such that, setting $M^\eta(t) = \sup_{s \leq t} W^\eta(s)$, it holds that
$$\widetilde{W}^\eta = M^\eta - W^\eta.$$
Let $W^\eta_\alpha \in \mathrm{DBM}(\eta; \alpha)$ with parent process $W^\eta$ and define $M^\eta_\alpha$ as in the statement. Arguing as in Lemma 1, the fact that $P(L_\alpha \in \widetilde{\Lambda}) = 1$ implies $M^\eta_\alpha = M^\eta \circ L_\alpha$ almost surely. Thus, composing both sides of equality (5) with $L_\alpha$, we conclude the proof of (i). To prove the second item, let us define $X^\eta$ as the unique strong solution of Equation (2), where $W$ is the parent process of $W_\alpha$. Observe that, $L_\alpha$ being almost surely continuous, $W$ is in synchronization with $L_\alpha$ (in the sense of [30] (p. 793)), and then we can use the duality theorem [30] (Theorem 4.2) to state that $X^\eta_\alpha := X^\eta \circ L_\alpha$ is the unique strong solution of
$$dX^\eta_\alpha(t) = \eta\,\mathrm{sign}(X^\eta_\alpha(t))\,dL_\alpha(t) + dW_\alpha(t), \qquad X^\eta_\alpha(0) = 0.$$
Continuity of $X^\eta_\alpha$ leads to Equation (4). Vice versa, if $X^\eta_\alpha$ is the unique strong solution of Equation (4), then the duality theorem tells us that $X^\eta$ is the unique strong solution of Equation (2). Thus, $X^\eta_\alpha$ is the unique strong solution of Equation (4) if and only if $X^\eta_\alpha = X^\eta \circ L_\alpha$, and the statement directly follows from item (ii) of Theorem 4.    □
One could try to use the previous characterization to provide an algorithm for the simulation of the DRBM. Such an approach requires, first of all, the simulation of a stable subordinator, which can be done by means of the Chambers–Mallows–Stuck algorithm [55], a generalization of the Box–Muller algorithm. As underlined in [43] (Section 5), one can simulate a time-changed process $X_\alpha = X \circ L_\alpha$ by setting a grid $\{t_n\}_{n \leq N}$ (with $t_1 < t_2 < \dots < t_N$), simulating $\{\sigma_\alpha(t_n)\}_{n \leq N}$ and then the parent process $X$ up to $t_N$. Finally, an approximate trajectory of the process $X_\alpha$ is obtained by a step plot between the nodes $\{(\sigma_\alpha(t_n), X(t_n))\}_{n \leq N}$. Let us stress that such an algorithm can be generalized to any subordinator, but in this case, Laplace inversion could be required. Moreover, in the case of the DRBM, this involves the simulation of an RBM. We can bypass such a step by using a CTRW approximation of $\widetilde{W}^\eta_\alpha$, at least for $\eta < 0$. This follows from an analogous approach for the RBM, which will be discussed in the following sections.
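A minimal version of this grid-based procedure, assuming a standard Brownian parent process (our choice for illustration; names are not from the paper), could look as follows:

```python
import numpy as np

def cms_positive_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for the positive alpha-stable law
    with Laplace transform exp(-u**alpha)."""
    U = rng.uniform(0.0, np.pi, size)
    E = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))

def time_changed_nodes(alpha, t_grid, seed=0):
    """Nodes of the step plot of X_alpha = X o L_alpha on a grid {t_n}:
    simulate sigma_alpha(t_n) through independent stable increments and
    the parent X(t_n) as a Brownian motion, then return the pairs
    (sigma_alpha(t_n), X(t_n)); X_alpha is constant between the nodes."""
    rng = np.random.default_rng(seed)
    dt = np.diff(t_grid, prepend=0.0)
    # sigma_alpha(t) - sigma_alpha(s) =_d (t - s)**(1/alpha) * S by self-similarity
    sigma = np.cumsum(dt ** (1.0 / alpha) * cms_positive_stable(alpha, len(dt), rng))
    X = np.cumsum(rng.normal(0.0, np.sqrt(dt)))  # parent process on the same grid
    return sigma, X

sigma, X = time_changed_nodes(0.8, np.linspace(0.01, 1.0, 100))
```

The step plot of the pairs `(sigma[n], X[n])` then approximates one trajectory of $X_\alpha$.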

4. Heavy Traffic Approximation of the Fractional M / M / 1 Queue

4.1. The Heavy Traffic Approximation of the Classical M / M / 1 Queue

4.1.1. The Queueing Model

Let us first introduce the M/M/1 queueing model. Consider a system in which jobs (clients) enter at an average rate $\lambda$ to be processed by a single server, whose processing of each job takes a random amount of time with mean $1/\mu$. If a job is currently being served and another job enters the system, the latter waits in line in a queue. Waiting jobs are then processed, as soon as the server is free, according to a First In First Out (FIFO) policy. To model such a system, we assume in any case that the interarrival times (i.e., the time intervals between the arrivals of two consecutive jobs) are i.i.d., and so are the service times.
In this first case, i.e., the M/M/1 queueing system, we also suppose that:
  • service times and interarrival times are independent of each other;
  • the jobs enter the system following a Poisson arrival process;
  • service times are exponentially distributed.
Thus, we consider two sequences U = ( U k ) k 1 and V = ( V k ) k 1 of i.i.d. exponential random variables of parameters, respectively, λ > 0 and μ > 0 that are independent of each other, representing, respectively, the interarrival and the service times. We can define the sequence of the arrival times (i.e., the time instants in which the jobs enter the system) S U = ( S k U ) k 0 as
S k U = j = 1 k U j ,
where we set S 0 U = 0 . Let A = { A ( t ) , t 0 } be the arrival counting process, i.e.,  A ( t ) is the number of arrivals up to time t. It is defined as
$$A(t) = \max\{k \geq 0 : S^U_k \leq t\}.$$
In the case of an M/M/1 system, as we supposed before, $A$ is a Poisson process of parameter $\lambda$ and we denote it by $A \in \mathrm{Pois}(\lambda)$. We can also define the sequence $S^V = (S^V_k)_{k \geq 0}$, where $S^V_k$ is, for any $k \geq 0$, the total amount of time that is necessary to process the first $k$ jobs, as
S k V = j = 1 k V j ,
where we set S 0 V = 0 , and the cumulative input process C = { C ( t ) , t 0 } as   
C ( t ) = j = 1 A ( t ) V j = S A ( t ) V ,
i.e., the necessary service times to process all the tasks that entered the system up to time t. Let us stress that S k V is not the time of completion of the k-th task, as there could have been some idle periods. By using the cumulative input process, we can introduce the net input process X = { X ( t ) , t 0 } as
X ( t ) = C ( t ) t .
The net input process strikes, in a certain sense, a balance between the total service time that is necessary to process all the tasks and the time that has passed since the system was initialized. By definition, $X$ is a càdlàg process that decreases continuously while it increases only by jumping. Moreover, $X(0) = 0$, and thus $X \in D_0(\mathbb{R}^+_0)$ almost surely, and we can define the workload process $W_L = \{W_L(t), t \geq 0\}$ via the Skorokhod reflection map as $W_L = \Psi(X)$. One can visualize how $W_L$ works: when it is positive, the process $W_L$ decreases linearly with slope $-1$ until it reaches $0$; once $0$ has been reached, it remains constant (while the net input $X(t)$ could still be decreasing); each time $X$ jumps, so does $W_L$. Thus, it is clear that $W_L$ is $0$ during idle periods and it is positive while a job is being served. We can state that $W_L$, in a certain sense, measures how much residual time is needed to process all the tasks. Here, the effect of the regulator is much clearer than in the RBM case. Indeed, the process $W_L(t)$ tries to cross $0$ with a fixed slope of $-1$. As soon as $W_L(t)$ touches $0$, the regulator symmetrizes such a slope, thus adding a linear term with slope $+1$ to the process. The sum of the symmetric effects of the proper velocity of $W_L(t)$ (or, in other words, the effort that $X(t)$ puts into driving $W_L(t)$ against $0$) and the velocity of the regulator is the main reason why the process $W_L(t)$ stabilizes at $0$ until it increases by jumping away. Such an example actually shows how the threshold $0$ works as a mirror for the velocity of the process and how the regulator provides a symmetric additional term to such a velocity. Moreover, it is clear that, for any $t \geq 0$, it holds that $W_L(t) \leq C(t)$: they increase together by the same amounts, but $W_L$ decreases continuously in the meantime.
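Between jumps of $X$ the reflected path moves deterministically, so at arrival epochs the workload satisfies the classical Lindley recursion. A short sketch of this computation (our notation, with hypothetical parameter values):

```python
import numpy as np

def workload_at_arrivals(U, V):
    """Workload found by the k-th arriving job, obtained by applying the
    reflection between consecutive arrivals: the net input decreases at
    slope -1 and is kept at 0 by the regulator, which collapses to the
    Lindley recursion W_k = max(W_{k-1} + V_{k-1} - U_k, 0)."""
    W = np.zeros(len(U))
    for k in range(1, len(U)):
        W[k] = max(W[k - 1] + V[k - 1] - U[k], 0.0)
    return W

rng = np.random.default_rng(1)
lam, mu, n = 1.0, 1.25, 10_000          # rho = lam/mu = 0.8 < 1
U = rng.exponential(1.0 / lam, n)        # interarrival times
V = rng.exponential(1.0 / mu, n)         # service times
W = workload_at_arrivals(U, V)
```

For $\rho = \lambda/\mu < 1$ the empirical distribution of `W` stabilizes, in agreement with the ergodicity of the queue.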
As C measures the total amount of necessary service time, up to time t, to process all the jobs, and W L , the remaining amount of time to process all the waiting jobs, up to time t, their difference represents the total amount of time, up to time t, in which the server was actually working. We call such a process the cumulative busy time process B = { B ( t ) , t 0 } , where B ( t ) = C ( t ) W L ( t ) . The cumulative busy time process is increasing and piecewise linear. It remains constant during idle periods; otherwise, it grows with slope 1. Thus, if we consider B as the clock of the process, we are only neglecting the idle periods. Thus, if we count service times only on this clock, we obtain the exact number of processed jobs. Hence, we consider a counting process N = { N ( t ) , t 0 } of the service times, i.e., 
$$N(t) = \max\{k \geq 0 : S^V_k \leq t\},$$
and then we define the departure process $D = \{D(t), t \geq 0\}$ as $D(t) = N(B(t))$. Finally, the balance between the number of arrivals and the number of departures, i.e., the number of jobs that are currently in the system, gives us the queue length process $Q = \{Q(t), t \geq 0\}$, i.e., $Q(t) = A(t) - D(t)$. Since we are mainly interested in the process $Q$, we summarize the full construction with the notation $Q \in M/M/1(\lambda, \mu)$. In any case, we will adopt the full notation that we introduced before, since the previously constructed processes will still play a role. Moreover, if we write $Q^{(c)} \in M/M/1(\lambda, \mu)$, then all the involved processes will carry the same superscript.
It is clear that the traffic of the system can be expressed in terms of the constant $\rho = \lambda/\mu$. If $\rho < 1$, $Q$ admits a stationary distribution and, in particular, is ergodic, while $\rho \geq 1$ implies that $Q$ is not ergodic. The threshold $\rho = 1$ thus separates the ergodic behavior from the non-ergodic one. We are interested in what happens when $\rho < 1$ but $1 - \rho$ is close to $0$. In this situation, we say that the queueing system is in the heavy traffic regime.
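For reference, in the ergodic regime the stationary queue length is geometric, $P(Q = k) = (1-\rho)\rho^k$ (a classical fact about the M/M/1 queue, stated here without proof), so the stationary mean $\rho/(1-\rho)$ diverges as $\rho \to 1$; this blow-up is what makes the heavy traffic rescaling necessary:

```python
# Stationary mean queue length rho / (1 - rho) for traffic intensities
# approaching the heavy traffic threshold rho = 1.
for rho in (0.5, 0.9, 0.99):
    mean_q = rho / (1.0 - rho)
    print(f"rho = {rho}: stationary mean queue length = {mean_q:.2f}")
```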

4.1.2. The Heavy Traffic Approximation

It is well known that, under a suitable rescaling, an $M/M/1$ queueing system in the heavy traffic regime can be approximated (in distribution) by a Reflected Brownian Motion. The following theorem, which is essentially [25] (Section 9.6), summarizes the heavy traffic approximation result.
Theorem 6.
Fix $\lambda > 0$ and let $(\rho_n)_{n \geq 1}$ be a sequence such that $0 < \rho_n \leq 1$ for any $n \geq 1$, $\rho_n \to 1$ and $\sqrt{n}(1 - \rho_n) \to \zeta > 0$. Set $\mu_n = \lambda \rho_n^{-1}$, $\eta = -\lambda^{-1}\zeta$ and $\sigma^2 = 2\lambda^{-2}$. Let $\hat{Q}_n \in M/M/1(\lambda, \mu_n)$ and define $Q^{(n)} = n^{-1/2}\,\hat{Q}_n \circ \iota_n$. Then, there exists a process $\widetilde{Q} \in \mathrm{RBM}(\lambda^2\eta, \lambda^3\sigma^2)$ such that, as $n \to +\infty$,
$$Q^{(n)} \Rightarrow \widetilde{Q}.$$
Proof. 
Let ( U k ( n ) ) k 1 and ( V k ( n ) ) k 1 be the sequences of interarrival and service times of Q ( n ) and ( S k U , ( n ) ) k 1 , and ( S k V , ( n ) ) k 1 be the arrival and cumulative service times. Observe that we can construct the n-th sequence of service times by rescaling the ones of the first model, i.e., we can set ( V k ( n ) ) k 1 = μ n μ 1 ( V k ( 1 ) ) k 1 , while ( U k ( n ) ) k 1 is independent of n and thus can be considered to be equal to ( U k ( 1 ) ) k 1 . Let us define the processes S ˜ U , ( n ) = { S ˜ U , ( n ) ( t ) , t 0 } and S ˜ V , ( n ) = { S ˜ V , ( n ) ( t ) , t 0 } as
$$\widetilde{S}^{U,(n)}(t) = n^{-1/2}\big(S^{U,(n)}_{\lfloor nt \rfloor} - \lambda^{-1}nt\big), \qquad \widetilde{S}^{V,(n)}(t) = n^{-1/2}\big(S^{V,(n)}_{\lfloor nt \rfloor} - \mu_n^{-1}nt\big).$$
Observing that, for any k 1 ,
$$\mathbb{E}[U^{(n)}_k] = \lambda^{-1}, \quad \mathrm{Var}[U^{(n)}_k] = \lambda^{-2}, \quad \mathbb{E}[V^{(n)}_k] = \mu_n^{-1}, \quad \mathrm{Var}[V^{(n)}_k] = \mu_n^{-2},$$
and that S U , ( n ) and S V , ( n ) are independent of each other, we know, by Donsker’s theorem (see [25] (Theorem 4.3.2 )), that there exist two independent Brownian motions W i BM , i = 1 , 2 , such that, as  n + ,
$$\widetilde{S}^{U,(n)} \Rightarrow \lambda^{-1}W_1, \qquad \widetilde{S}^{V,(n)} \Rightarrow \lambda^{-1}W_2,$$
where we also use the fact that μ n λ . Since W 1 and W 2 are almost surely continuous, we can use both [25] (Theorems 9.3.3 and 9.3.4 ) to conclude that Q ( n ) Q ˜ with
$$\widetilde{Q} = \lambda\,(\Psi(W_3) \circ \iota_\lambda),$$
where $W_3 \in \mathrm{BM}(\eta, \sigma^2)$ and $\eta, \sigma^2$ are defined in the statement of the theorem. Denoting $\widetilde{W} = \Psi(W_3)$, we know that $\widetilde{W} \in \mathrm{RBM}(\eta, \sigma^2)$, and then, by property (i) of Proposition 1, we get
$$\widetilde{Q} \in \mathrm{RBM}(\lambda^2\eta, \lambda^3\sigma^2),$$
concluding the proof.    □
In the previous theorem, we considered a space–time scaling $Q^{(n)} = n^{-1/2}\,\hat{Q}_n \circ \iota_n$. However, in place of the time scaling, we could consider an appropriate scaling of the arrival and service rates. From now on, we will write $T \in \mathrm{Exp}(\lambda)$ to denote that a random variable $T$ is exponentially distributed with parameter $\lambda > 0$. We have the following scaling property.
Proposition 6.
Let $Q \in M/M/1(\lambda, \mu)$ and $Q^{(c)} \in M/M/1(c\lambda, c\mu)$. Then, the following equalities hold:
$$U^{(c)} \stackrel{d}{=} c^{-1}U, \quad V^{(c)} \stackrel{d}{=} c^{-1}V, \quad S^{U,(c)} \stackrel{d}{=} c^{-1}S^U, \quad S^{V,(c)} \stackrel{d}{=} c^{-1}S^V, \quad A^{(c)} \stackrel{d}{=} A \circ \iota_c, \quad N^{(c)} \stackrel{d}{=} N \circ \iota_c,$$
$$C^{(c)} \stackrel{d}{=} c^{-1}C \circ \iota_c, \quad W_L^{(c)} \stackrel{d}{=} c^{-1}W_L \circ \iota_c, \quad B^{(c)} \stackrel{d}{=} c^{-1}B \circ \iota_c, \quad D^{(c)} \stackrel{d}{=} D \circ \iota_c, \quad Q^{(c)} \stackrel{d}{=} Q \circ \iota_c.$$
Proof. 
Since $U_k \in \mathrm{Exp}(\lambda)$ and $V_k \in \mathrm{Exp}(\mu)$, it holds that $c^{-1}U_k \in \mathrm{Exp}(c\lambda)$ and $c^{-1}V_k \in \mathrm{Exp}(c\mu)$. This, together with the independence of the involved variables, implies $U^{(c)} \stackrel{d}{=} c^{-1}U$ and $V^{(c)} \stackrel{d}{=} c^{-1}V$. As a direct consequence, it holds that $S^{U,(c)} \stackrel{d}{=} c^{-1}S^U$ and $S^{V,(c)} \stackrel{d}{=} c^{-1}S^V$. By definition, one can easily check that, for the counting processes (which are Poisson processes), it holds that $A^{(c)} \stackrel{d}{=} A \circ \iota_c$ and $N^{(c)} \stackrel{d}{=} N \circ \iota_c$. The independence of $A^{(c)}$ and $V^{(c)}$ leads to
$$C(ct) = \sum_{j=1}^{A(ct)} V_j = c\sum_{j=1}^{A(ct)} c^{-1}V_j \stackrel{d}{=} c\sum_{j=1}^{A^{(c)}(t)} V^{(c)}_j = c\,C^{(c)}(t),$$
and, in particular, $C \circ \iota_c \stackrel{d}{=} c\,C^{(c)}$. Moving to the net input processes, we clearly get $X \circ \iota_c \stackrel{d}{=} c(C^{(c)} - \iota) = c\,X^{(c)}$. Since the Skorokhod reflection mapping is continuous, we have
$$W_L \circ \iota_c = \Psi(X \circ \iota_c) \stackrel{d}{=} \Psi(c\,X^{(c)}) = c\,\Psi(X^{(c)}) = c\,W_L^{(c)}.$$
Consequently, we have
$$B \circ \iota_c = C \circ \iota_c - W_L \circ \iota_c \stackrel{d}{=} c\big(C^{(c)} - W_L^{(c)}\big) = c\,B^{(c)}.$$
Now, let us focus on the queue length processes. To show the equality in law, we need to use a different representation of the queue length, due to [56] and widely exploited in [57]. Precisely, if we consider another sequence V ˜ = ( V ˜ k ) k 1 of i.i.d. random variables with V ˜ k Exp ( μ ) , we can define the sequence S V ˜ = ( S k V ˜ ) k 0 , with  S k V ˜ = j = 1 k V ˜ k and S 0 V ˜ = 0 , and the counting process N ˜ = { N ˜ ( t ) , t 0 } , where
$$\widetilde{N}(t) = \max\{k \geq 0 : S^{\widetilde{V}}_k \leq t\}.$$
In particular, we obtain that, for the process $\widetilde{X}(t) = A(t) - \widetilde{N}(t)$, one has
$$Q \stackrel{d}{=} \Psi(\widetilde{X}).$$
Arguing as before, we get
$$Q \circ \iota_c \stackrel{d}{=} \Psi(\widetilde{X}) \circ \iota_c = \Psi(\widetilde{X} \circ \iota_c) \stackrel{d}{=} \Psi(\widetilde{X}^{(c)}) \stackrel{d}{=} Q^{(c)}.$$
Finally, the property of the departure process follows by taking differences.    □
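The first relations of the proof can be checked numerically; a quick Monte Carlo sketch (ours, with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, c, t, n = 2.0, 5.0, 1.0, 100_000

# c^{-1} Exp(lam) =_d Exp(c*lam): the empirical mean of U/c should be
# close to 1/(c*lam)
U = rng.exponential(1.0 / lam, n)
mean_scaled = (U / c).mean()

# A^{(c)} =_d A o iota_c: the number of arrivals of the sped-up queue at
# time t matches in law the number of arrivals of the original queue at c*t
A_c = rng.poisson(c * lam * t, n)       # Poisson(c * lam * t)
A_ct = rng.poisson(lam * (c * t), n)    # Poisson(lam * (c * t))
```

Both empirical means agree with the claimed distributional identities up to Monte Carlo error.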
Remark 6.
The fact that $\widetilde{V}_k$ is distributed as the service times is typical of the M/M/1 case. If one considers a multi-channel queue, $(\widetilde{V}_k)_{k \geq 1}$ represents the potential service times, i.e., the service times one would observe if the servers were not shut down when idle. This means that, in the multi-channel case, in place of having a single sequence of service times, each one associated with a job, one has several sequences of service times, each associated with one server (as if the system admitted as many queues as servers). It is clear that in M/M/1 queues this makes no difference, and $(\widetilde{V}_k)_{k \geq 1}$ is really the sequence of service times $(V_k)_{k \geq 1}$. In particular, this means that the queue length process $Q$ can be written as the reflection of the process $\widetilde{X}(t) = A(t) - N(t)$. In general, $Q$ can be written as the reflection of the process $\widetilde{X}(t) = A(t) - \widetilde{N}(t)$, which is called the modified net input process.
We can restate Theorem 6 in terms of such a scaling property.
Corollary 3.
Fix $\lambda > 0$ and let $(\rho_n)_{n \geq 1}$ be a sequence such that $0 < \rho_n \leq 1$ for any $n \geq 1$, $\rho_n \to 1$ and $\sqrt{n}(1 - \rho_n) \to \zeta > 0$. Set $\lambda_n = n\lambda$, $\mu_n = n\lambda\rho_n^{-1}$, $\eta = -\lambda^{-1}\zeta$ and $\sigma^2 = 2\lambda^{-2}$. Let $Q^{(n)} \in M/M/1(\lambda_n, \mu_n)$. Then, there exists a process $\widetilde{Q} \in \mathrm{RBM}(\lambda^2\eta, \lambda^3\sigma^2)$ such that, as $n \to +\infty$,
$$n^{-1/2}\,Q^{(n)} \Rightarrow \widetilde{Q}.$$
Proof. 
The statement follows from Theorem 6 after observing from Proposition 6 that Q ( n ) = d Q ^ n ι n , where Q ^ n M / M / 1 ( n 1 λ n , n 1 μ n ) , n 1 λ n = λ and n 1 μ n = λ ρ n 1 .    □

4.2. The Heavy Traffic Approximation of the Fractional M/M/1 Queue

4.2.1. The Queueing Model

Now, we need to introduce the model that we are interested in. Precisely, we refer to the fractional M/M/1 queue, first defined in [4].
Definition 7.
Let Q M / M / 1 ( λ , μ ) , α ( 0 , 1 ) and L α be an inverse stable subordinator independent of Q. We define a fractional M/M/1 queue by means of its queue length process Q α : = Q L α and we denote it as Q α M / M / 1 ( λ , μ ; α ) . We call Q its parent process and we extend the definition to α = 1 , by setting Q α as a classical M/M/1 queue length process.
Since the process is defined by means of a time-change procedure, we know that it is a semi-Markov process. On the other hand, the interpretation of Q α as a queueing model is unclear unless we define some other quantities. In place of proceeding with a forward construction as done for the M/M/1 queue, here, we need to consider a backward construction to define the main quantities of a queueing system. Indeed, we can recognize the arrival times S U α = ( S k U α ) k 0 by setting S 0 U α = 0 and then
$$S^{U_\alpha}_k = \inf\{t \geq S^{U_\alpha}_{k-1} : Q_\alpha(t) - Q_\alpha(t^-) = 1\}, \quad k \geq 1.$$
Hence, the interarrival times U α = ( U k α ) k 1 are defined by setting
$$U^\alpha_k = S^{U_\alpha}_k - S^{U_\alpha}_{k-1}, \quad k \geq 1.$$
Analogously, one can define the departure times D T α = ( D T k α ) k 0 by setting D T 0 α = 0 and
$$DT^\alpha_k = \inf\{t \geq DT^\alpha_{k-1} : Q_\alpha(t) - Q_\alpha(t^-) = -1\}, \quad k \geq 1.$$
To identify the service times, we have to distinguish between two cases. Indeed, if $Q_\alpha(DT^\alpha_{k-1}) \neq 0$, then, as soon as the $(k-1)$-th job leaves the system, the $k$-th one is already being served. Thus, its service time is $DT^\alpha_k - DT^\alpha_{k-1}$. Otherwise, as soon as the $(k-1)$-th job leaves the system, to identify the service time, we have to wait for the $k$-th job to enter the system. Hence, we can define the service times $V^\alpha = (V^\alpha_k)_{k \geq 1}$ as
$$V^\alpha_k = \begin{cases} DT^\alpha_k - DT^\alpha_{k-1} & Q_\alpha(DT^\alpha_{k-1}) \neq 0 \\ DT^\alpha_k - S^{U_\alpha}_k & Q_\alpha(DT^\alpha_{k-1}) = 0. \end{cases}$$
Once the sequences U α and V α are defined, we can reconstruct all the quantities involved in the queueing model, as done in Section 4.1.1, and thus we will use the same notation with the apex α to denote the respective fractional counterpart. We are also interested in the interevent times  T α = ( T k α ) k 1 . To define them, let us first define the event times  S T α = ( S k T α ) k 0 by setting S 0 T α = 0 and
$$S^{T_\alpha}_k = \inf\{t \geq S^{T_\alpha}_{k-1} : Q_\alpha(t) - Q_\alpha(t^-) \neq 0\}, \quad k \geq 1,$$
and then set
$$T^\alpha_k = S^{T_\alpha}_k - S^{T_\alpha}_{k-1}, \quad k \geq 1.$$
In the case $\alpha = 1$, the interevent time $T_k$ is exponentially distributed with parameter $\lambda + \mu$ if $Q(S^T_{k-1}) \neq 0$, while it is exponentially distributed with parameter $\lambda$ if $Q(S^T_{k-1}) = 0$. In particular, if $S^{T_\alpha}_{k-1} = S^{U_\alpha}_{j-1}$ for some $j \in \mathbb{N}$, then $T^\alpha_k$ is the minimum between the interarrival time $U^\alpha_j$ and the residual service time
$$RV^\alpha_k := \min\{DT^\alpha_j : DT^\alpha_j \geq S^{T_\alpha}_k\} - S^{T_\alpha}_k.$$
If $S^{T_\alpha}_{k-1} = DT^\alpha_{j-1}$ for some $j \in \mathbb{N}$ and $Q(S^{V_\alpha}_{j-1}) \neq 0$, then $T^\alpha_k$ is the minimum between the service time $V^\alpha_j$ and the residual interarrival time
$$RU^\alpha_k := \min\{S^{U_\alpha}_j : S^{U_\alpha}_j \geq S^{T_\alpha}_k\} - S^{T_\alpha}_k.$$
Clearly, if $S^{T_\alpha}_{k-1} = DT^\alpha_{j-1}$ for some $j \in \mathbb{N}$ and $Q(S^{V_\alpha}_{j-1}) = 0$, then $T^\alpha_k = RU^\alpha_k$. For $\alpha = 1$, $RV^1_k$ and $RU^1_k$ are still independent and exponentially distributed with parameters $\mu$ and $\lambda$, respectively: this is due to the memorylessness of the exponential distribution. Thus, the fact that, in the first two cases, $T_k$ is exponential with parameter $\lambda + \mu$ can be seen as a consequence of the fact that it is the minimum of two independent exponential random variables.
This is, in general, not true for α ( 0 , 1 ) . To describe the distribution of U k α , V k α and T k α , we need some additional definitions (see [58,59,60]).
Definition 8.
We denote by $E_\alpha$ the Mittag–Leffler function of order $\alpha \in (0, 1]$, defined as
$$E_\alpha(t) = \sum_{j=0}^{+\infty} \frac{t^j}{\Gamma(\alpha j + 1)}, \quad t \in \mathbb{R}.$$
For α = 1 , it holds that E α ( t ) = e t .
We say that $T$ is a Mittag–Leffler random variable of order $\alpha \in (0, 1]$ and parameter $\lambda > 0$ if
$$P(T > t) = \begin{cases} E_\alpha(-\lambda t^\alpha) & t \geq 0 \\ 1 & t < 0 \end{cases}$$
and we denote it by $T \in \mathrm{ML}(\lambda; \alpha)$.
We say that $T$ is a generalized Erlang random variable of order $\alpha \in (0, 1]$, rate $\lambda > 0$ and shape parameter $k = 1, 2, \dots$ if
$$P(T > t) = \begin{cases} \displaystyle\sum_{n=0}^{k-1}\sum_{j=0}^{+\infty} (-1)^j \binom{n+j}{n} \frac{(\lambda t^\alpha)^{j+n}}{\Gamma(\alpha(j+n)+1)} & t \geq 0 \\ 1 & t < 0 \end{cases}$$
and we denote it by $T \in \mathrm{GE}(k, \lambda; \alpha)$. In particular, if $T_i \in \mathrm{ML}(\lambda/k; \alpha)$, $i = 1, \dots, k$, are independent, then $\sum_{i=1}^k T_i \in \mathrm{GE}(k, \lambda; \alpha)$.
Mittag–Leffler random variables naturally arise from the evaluation of an $\alpha$-stable subordinator $\sigma_\alpha$ at an independent exponential random variable.
Lemma 2.
Let $Y \in \mathrm{Exp}(\lambda)$ be independent of $\sigma_\alpha$. Then, $\sigma_\alpha(Y) \in \mathrm{ML}(\lambda; \alpha)$.
Proof. 
It is clear that for $t < 0$, it holds that $P(\sigma_\alpha(Y) > t) = 1$. By using the independence of $Y$ and $\sigma_\alpha$, we get, for $t \geq 0$,
$$P(\sigma_\alpha(Y) > t) = P(Y > L_\alpha(t)) = \mathbb{E}[P(Y > L_\alpha(t) \mid L_\alpha(t))] = \mathbb{E}[e^{-\lambda L_\alpha(t)}] = E_\alpha(-\lambda t^\alpha),$$
where the last equality follows from the Laplace transform of the inverse α -stable subordinator, as obtained in [61].    □
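Lemma 2 can be verified numerically, combining the truncated Mittag–Leffler series with the self-similarity identity $\sigma_\alpha(Y) \stackrel{d}{=} Y^{1/\alpha}\sigma_\alpha(1)$ and the Chambers–Mallows–Stuck sampler. A hedged sketch (ours; parameters are illustrative):

```python
import numpy as np
from math import gamma

def mittag_leffler(x, alpha, terms=80):
    """Truncated series E_alpha(x); adequate for moderate |x|."""
    return sum(x ** j / gamma(alpha * j + 1) for j in range(terms))

def cms_positive_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler, Laplace transform exp(-u**alpha)."""
    U = rng.uniform(0.0, np.pi, size)
    E = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(3)
lam, alpha, t, n = 1.0, 0.7, 0.5, 200_000
Y = rng.exponential(1.0 / lam, n)
sigma_Y = Y ** (1.0 / alpha) * cms_positive_stable(alpha, n, rng)  # sigma_alpha(Y)
empirical = float((sigma_Y > t).mean())
exact = mittag_leffler(-lam * t ** alpha, alpha)
```

Up to Monte Carlo error, `empirical` matches $E_\alpha(-\lambda t^\alpha)$.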
In the following, we will specify some distributional properties of $U^\alpha$, $V^\alpha$ and $T^\alpha$. Let us stress that the distribution of $T^\alpha$ has already been derived in [4]. On the other hand, the distribution of $U^\alpha$ has been obtained in [21] under the condition that there are no departures between two arrivals: the conditioning arises from the fact that the proof is carried out by using the semi-Markov property and a modification of the Kolmogorov equations for the queueing system. A similar argument holds for $V^\alpha$ in [21]. Here, we use different techniques that rely on the fact that $\sigma_\alpha$ is a Lévy process. As a consequence, as will be clear in the following statement, we do not have to consider any conditioning to obtain the distributions of $U^\alpha$ and $V^\alpha$. Moreover, we also obtain the independence, respectively, of the interarrival times and of the service times, while the mutual independence of the two sequences holds only if $\alpha = 1$, as observed in [4].
Proposition 7.
Let $Q_\alpha \in M/M/1(\lambda, \mu; \alpha)$. Then,
1. The variables $T^\alpha_k$ are independent of each other;
2. For any $k \geq 1$, $T^\alpha_k \in \mathrm{ML}(\lambda + \mu; \alpha)$ conditionally on the event $\{Q_\alpha(S^{T_\alpha}_{k-1}) \neq 0\}$;
3. For any $k \geq 1$, $T^\alpha_k \in \mathrm{ML}(\lambda; \alpha)$ conditionally on the event $\{Q_\alpha(S^{T_\alpha}_{k-1}) = 0\}$;
4. For any $k, h \geq 1$, $S^{T_\alpha}_{k+h} - S^{T_\alpha}_k \in \mathrm{GE}(h, h(\lambda + \mu); \alpha)$ conditionally on the event $\{Q_\alpha(S^{T_\alpha}_j) \neq 0,\ j = k, \dots, k+h-1\}$;
5. The variables $U^\alpha_k$ are independent of each other;
6. For any $k \geq 1$, it holds that $U^\alpha_k \in \mathrm{ML}(\lambda; \alpha)$;
7. The variables $V^\alpha_k$ are independent of each other;
8. For any $k \geq 1$, it holds that $V^\alpha_k \in \mathrm{ML}(\mu; \alpha)$;
9. If $\alpha \in (0, 1)$, the sequences $U^\alpha$ and $V^\alpha$ are not independent of each other.
Proof. 
To prove item (1), let us first observe that $S^{T_\alpha}_0 = 0 = \sigma_\alpha(0) = \sigma_\alpha(S^T_0)$ almost surely. Now, let us suppose that $S^{T_\alpha}_{k-1} = \sigma_\alpha(S^T_{k-1})$. Arguing on $S^{T_\alpha}_k$, by definition, we get
$$S^{T_\alpha}_k = \inf\{t \geq S^{T_\alpha}_{k-1} : Q_\alpha(t) \neq Q_\alpha(t^-)\} = \inf\{t \geq \sigma_\alpha(S^T_{k-1}) : Q(L_\alpha(t)) \neq Q(L_\alpha(t^-))\}.$$
Since $L_\alpha(t)$ is constant for $t \in [\sigma_\alpha(y^-), \sigma_\alpha(y))$, there exists a random variable $Y \geq 0$ such that $S^{T_\alpha}_k = \sigma_\alpha(Y)$ almost surely. Thus, we can rewrite
$$S^{T_\alpha}_k = \inf\{\sigma_\alpha(y) : y \geq S^T_{k-1},\ Q(L_\alpha(\sigma_\alpha(y))) \neq Q(L_\alpha(\sigma_\alpha(y))^-)\} = \inf\{\sigma_\alpha(y) : y \geq S^T_{k-1},\ Q(y) \neq Q(y^-)\}.$$
Since $\sigma_\alpha$ is strictly increasing, we have that, almost surely,
$$S^{T_\alpha}_k = \sigma_\alpha(\inf\{y \geq S^T_{k-1} : Q(y) \neq Q(y^-)\}) = \sigma_\alpha(S^T_k).$$
We conclude, by induction, that $S^{T_\alpha}_k = \sigma_\alpha(S^T_k)$ for any $k \geq 0$.
Now, let us consider two interevent times $T^\alpha_k, T^\alpha_j$ with $1 \leq k < j$. By definition, it holds that $T^\alpha_k = S^{T_\alpha}_k - S^{T_\alpha}_{k-1}$ and $T^\alpha_j = S^{T_\alpha}_j - S^{T_\alpha}_{j-1}$. Fix $t_k, t_j \geq 0$ and observe that
$$P(T^\alpha_k \leq t_k,\, T^\alpha_j \leq t_j) = P\big(\sigma_\alpha(S^T_k) - \sigma_\alpha(S^T_{k-1}) \leq t_k,\ \sigma_\alpha(S^T_j) - \sigma_\alpha(S^T_{j-1}) \leq t_j\big) = \mathbb{E}\big[P\big(\sigma_\alpha(S^T_k) - \sigma_\alpha(S^T_{k-1}) \leq t_k,\ \sigma_\alpha(S^T_j) - \sigma_\alpha(S^T_{j-1}) \leq t_j \,\big|\, S^T_k, S^T_{k-1}, S^T_j, S^T_{j-1}\big)\big],$$
where the last equality holds thanks to the tower property, the fact that $\sigma_\alpha$ is stochastically continuous and the fact that $S^T$ is independent of $\sigma_\alpha$. Moreover, by the independence of $S^T$ and $\sigma_\alpha$, if we define $F_4(s_1, s_2, s_3, s_4; t_k, t_j) = P(\sigma_\alpha(s_1) - \sigma_\alpha(s_2) \leq t_k,\ \sigma_\alpha(s_3) - \sigma_\alpha(s_4) \leq t_j)$, it holds that
$$P(T^\alpha_k \leq t_k,\, T^\alpha_j \leq t_j) = \mathbb{E}[F_4(S^T_k, S^T_{k-1}, S^T_j, S^T_{j-1}; t_k, t_j)].$$
Since $\sigma_\alpha$ is a Lévy process, whenever $s_2 < s_1 \leq s_4 < s_3$, it holds that $F_4(s_1, s_2, s_3, s_4; t_k, t_j) = F_1(s_1 - s_2; t_k)\,F_1(s_3 - s_4; t_j)$, where $F_1(s; t) = P(\sigma_\alpha(s) \leq t)$. Moreover, $S^T_{k-1} < S^T_k \leq S^T_{j-1} < S^T_j$ almost surely; thus,
$$P(T^\alpha_k \leq t_k,\, T^\alpha_j \leq t_j) = \mathbb{E}[F_1(S^T_k - S^T_{k-1}; t_k)\,F_1(S^T_j - S^T_{j-1}; t_j)] = \mathbb{E}[F_1(T_k; t_k)]\,\mathbb{E}[F_1(T_j; t_j)],$$
where we also use the fact that $T_k$ and $T_j$ are independent. Now, we need to determine $\mathbb{E}[F_1(T_k; t_k)]$. To do this, we use again the fact that $\sigma_\alpha$ is a Lévy process independent of the sequence $T$ to obtain
$$\mathbb{E}[F_1(T_k; t_k)] = P(\sigma_\alpha(S^T_k) - \sigma_\alpha(S^T_{k-1}) \leq t_k) = P(T^\alpha_k \leq t_k).$$
Combining Equations (7) and (8), we have
$$P(T^\alpha_k \leq t_k,\, T^\alpha_j \leq t_j) = P(T^\alpha_k \leq t_k)\,P(T^\alpha_j \leq t_j).$$
Properties (2) and (3) follow from Equation (8), as
$$P(T^\alpha_k \leq t_k) = \mathbb{E}[F_1(T_k; t_k)] = P(\sigma_\alpha(T_k) \leq t_k),$$
and Lemma 2. Property (4) is a direct consequence of (1) and (2) together with the decomposition property of the generalized Erlang distribution. The proof of item (5) is analogous to the one of item (1) applied to $S^{U_\alpha}_k$. Item (6) follows from the equality $U^\alpha_k \stackrel{d}{=} \sigma_\alpha(U_k)$, which can be proven analogously to the case of the interevent times, and Lemma 2. The proof of item (7) is similar to the one of item (1); however, once $k, j$ are fixed, one has to treat separately
$$P(V^\alpha_k \leq t_k,\, V^\alpha_j \leq t_j,\, Q_\alpha(DT^\alpha_{k-1}) \neq 0,\, Q_\alpha(DT^\alpha_{j-1}) \neq 0), \qquad P(V^\alpha_k \leq t_k,\, V^\alpha_j \leq t_j,\, Q_\alpha(DT^\alpha_{k-1}) \neq 0,\, Q_\alpha(DT^\alpha_{j-1}) = 0),$$
$$P(V^\alpha_k \leq t_k,\, V^\alpha_j \leq t_j,\, Q_\alpha(DT^\alpha_{k-1}) = 0,\, Q_\alpha(DT^\alpha_{j-1}) \neq 0), \qquad P(V^\alpha_k \leq t_k,\, V^\alpha_j \leq t_j,\, Q_\alpha(DT^\alpha_{k-1}) = 0,\, Q_\alpha(DT^\alpha_{j-1}) = 0).$$
Item (8) follows from $V^\alpha_k \stackrel{d}{=} \sigma_\alpha(V_k)$ and Lemma 2. Finally, item (9) follows from the fact that, although $T^\alpha_1 = \min\{V^\alpha_1, U^\alpha_1\}$, $P(T^\alpha_1 > t) \neq P(V^\alpha_1 > t)\,P(U^\alpha_1 > t)$, as a consequence of the lack of the semigroup property of the Mittag–Leffler function for $\alpha < 1$ (see [62]).    □
Remark 7.
It is clear from the previous proposition that the arrival counting process A α is a fractional Poisson process (see [15,16]) of parameter λ and order α.

4.2.2. The Heavy Traffic Approximation

Now, we are ready to prove the heavy traffic limit for the fractional M/M/1 queue. It will be a clear consequence of the definition of the fractional M/M/1 queue, the classic heavy traffic approximation and the continuous mapping theorem.
Theorem 7.
Fix $\lambda > 0$ and let $(\rho_n)_{n \geq 1}$ be a sequence such that $0 < \rho_n \leq 1$ for any $n \geq 1$, $\rho_n \to 1$ and $\sqrt{n}(1 - \rho_n) \to \zeta > 0$. Set $\lambda_n = n\lambda$, $\mu_n = n\lambda\rho_n^{-1}$, $\eta = -\lambda^{-1}\zeta$ and $\sigma^2 = 2\lambda^{-2}$. Let $Q^{\alpha,(n)} \in M/M/1(\lambda_n, \mu_n; \alpha)$. Then, there exists a process $\widetilde{Q}_\alpha \in \mathrm{DRBM}(\lambda^2\eta, \lambda^3\sigma^2; \alpha)$ such that, as $n \to +\infty$,
$$n^{-1/2}\,Q^{\alpha,(n)} \Rightarrow \widetilde{Q}_\alpha.$$
Proof. 
Let Q ( n ) M / M / 1 ( λ n , μ n ) be the parent process of Q α , ( n ) for any n 1 . By Corollary 3, we know that there exists Q ˜ RBM ( λ 2 η , λ 3 σ 2 ) such that
n 1 / 2 Q ( n ) Q ˜ .
Without loss of generality, we can suppose that $L_\alpha$ is independent of each $n^{-1/2}Q^{(n)}$ and of $\widetilde{Q}$. Then, we have that $(n^{-1/2}Q^{(n)}, L_\alpha) \Rightarrow (\widetilde{Q}, L_\alpha)$ in $D(\mathbb{R}^+_0) \times D(\mathbb{R}^+_0)$. Now, let us denote by $\mathrm{Disc}(\circ)$ the set of discontinuity points of the composition map $\circ : D(\mathbb{R}^+_0) \times D(\mathbb{R}^+_0) \to D(\mathbb{R}^+_0)$ with respect to the $J_1$ topology. Denoting by $D_\uparrow(\mathbb{R}^+_0) = \{x \in D(\mathbb{R}^+_0) : x \text{ is nondecreasing}\}$, it holds that $(\widetilde{Q}, L_\alpha) \in C(\mathbb{R}^+_0) \times D_\uparrow(\mathbb{R}^+_0)$ almost surely. Moreover, $(C(\mathbb{R}^+_0) \times D_\uparrow(\mathbb{R}^+_0)) \cap \mathrm{Disc}(\circ) = \emptyset$ (see [25] (13.2.2)); thus, $P((\widetilde{Q}, L_\alpha) \in \mathrm{Disc}(\circ)) = 0$. We get Equation (9) with $\widetilde{Q}_\alpha = \widetilde{Q} \circ L_\alpha$ by the continuous mapping theorem (see [25] (Theorem 3.4.3)).    □

5. Simulating a DRBM ( η ; α ) with η < 0 via CTRW

In this section, we want to derive a simulation algorithm for the $\mathrm{DRBM}(\eta; \alpha)$ with $\eta < 0$ from the heavy traffic approximation of the fractional $M/M/1$ queue. To do this, we first need to investigate how to simulate the fractional $M/M/1$ queue. Let us anticipate that we will adapt the well-known Doob–Gillespie algorithm (see [63]) to this case. This has already been done in [4] and discussed for general discrete event systems in [31] (Chapter II, Section 6); however, for completeness, let us recall the main steps of such a generalization.

5.1. Simulation of the M/M/1 ( λ , μ ; α )

To achieve the simulation algorithm for the fractional M / M / 1 , we need to exploit some distributional properties of the α -stable subordinator. First, let us define stable random variables.
Definition 9.
We say that a real random variable $S$ is stable if there exist $\alpha \in (0, 1)$, $\beta \in [-1, 1]$, $\gamma \geq 0$ and $\delta \in \mathbb{R}$ such that
$$\mathbb{E}[e^{i\theta S}] = \exp\left(i\theta\delta - \gamma^\alpha|\theta|^\alpha\left(1 - i\beta\,\mathrm{sign}(\theta)\tan\frac{\pi\alpha}{2}\right)\right)$$
and we denote it as $S \in \mathcal{S}(\alpha, \beta, \gamma, \delta)$. We refer to [64] for the parametrization.
Comparing (3) with (10), it is clear that $\sigma_\alpha(1) \in \mathcal{S}\big(\alpha, 1, \cos^{1/\alpha}\frac{\pi\alpha}{2}, 0\big)$. The latter observation can be used to state the following lemma (see also [23] (Corollary 7.3)).
Lemma 3.
Let $Y \in \mathrm{Exp}(\lambda)$ and $S \in \mathcal{S}\big(\alpha, 1, \cos^{1/\alpha}\frac{\pi\alpha}{2}, 0\big)$ be independent of each other. Then, $Y^{1/\alpha}S \in \mathrm{ML}(\lambda; \alpha)$.
Proof. 
Let $\sigma_\alpha$ be an $\alpha$-stable subordinator independent of $Y$. Then, it is clear, by simple conditioning arguments and property (iii) of Proposition 2, that $\sigma_\alpha(Y) \stackrel{d}{=} Y^{1/\alpha}\sigma_\alpha(1) \stackrel{d}{=} Y^{1/\alpha}S$. Finally, Lemma 2 concludes the proof.    □
Since we know how to simulate an exponential random variable by means of the inversion method (see [31] (Chapter II, Example 2.3)) and a stable random variable by means of the Chambers–Mallows–Stuck algorithm (see [55]), we are able to simulate Mittag–Leffler random variables. Moreover, let us stress that one can easily simulate discrete random variables (see [31] (Chapter II, Example 2.1)). We only need to understand how to combine all these simulation procedures to obtain the fractional M/M/1 queue for fixed $\lambda, \mu > 0$ and $\alpha \in (0, 1]$.
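Putting Lemma 3 and the inversion method together, a Mittag–Leffler sampler for $\alpha \in (0,1)$ can be sketched as follows (our implementation, using NumPy):

```python
import numpy as np

def sample_mittag_leffler(lam, alpha, size, seed=0):
    """Sample T ~ ML(lam; alpha) as T = Y**(1/alpha) * S (Lemma 3), with
    Y ~ Exp(lam) obtained by the inversion method and S positive stable
    generated by the Chambers-Mallows-Stuck algorithm. Valid for alpha in (0,1)."""
    rng = np.random.default_rng(seed)
    Y = -np.log(rng.uniform(size=size)) / lam   # inversion method for Exp(lam)
    U = rng.uniform(0.0, np.pi, size)
    E = rng.exponential(1.0, size)
    S = (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))
    return Y ** (1.0 / alpha) * S

T = sample_mittag_leffler(lam=1.0, alpha=0.8, size=50_000)
```

Note that for $\alpha < 1$ the resulting variable is heavy-tailed (it has infinite mean), so summary statistics based on moments are uninformative; the survival function should be used instead.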
To do this, we define the functions r λ , μ , p λ , μ : N 0 R (where N 0 is the set of non-negative integers) as
$$r_{\lambda,\mu}(x) = \begin{cases} \lambda & x = 0 \\ \lambda + \mu & x > 0, \end{cases} \qquad p_{\lambda,\mu}(x) = \begin{cases} 1 & x = 0 \\ \dfrac{\lambda}{\lambda + \mu} & x > 0. \end{cases}$$
Let us define the sequence $X^{\lambda,\mu} = (X^{\lambda,\mu}_k)_{k \geq 0}$ by setting $X^{\lambda,\mu}_0 = 0$ and then $X^{\lambda,\mu}_k$, for any $k \geq 1$, via the conditional distribution
$$P(X^{\lambda,\mu}_k - X^{\lambda,\mu}_{k-1} = 1 \mid X^{\lambda,\mu}_{k-1} = x) = p_{\lambda,\mu}(x), \qquad P(X^{\lambda,\mu}_k - X^{\lambda,\mu}_{k-1} = -1 \mid X^{\lambda,\mu}_{k-1} = x) = 1 - p_{\lambda,\mu}(x).$$
It is clear that $X^{\lambda,\mu}$ is the jump chain of the classical M/M/1 queue. Moreover, let us stress that the time-change procedure does not alter the jump chain of a continuous-time random walk; thus, $X^{\lambda,\mu}$ is also the jump chain of the fractional M/M/1 queue. Concerning the interevent times, let us define the sequence $T^{\lambda,\mu,\alpha} = (T^{\lambda,\mu,\alpha}_k)_{k \geq 1}$ of conditionally independent random variables with distribution dictated by the following relation:
$$P(T^{\lambda,\mu,\alpha}_k > t \mid X^{\lambda,\mu}_{k-1} = x) = E_\alpha(-r_{\lambda,\mu}(x)\,t^\alpha).$$
In general, we can define the sequence S λ , μ , α by setting S 0 λ , μ , α = 0 and S k λ , μ , α = S k 1 λ , μ , α + T k λ , μ , α and the counting process N λ , μ , α = { N λ , μ , α ( t ) , t 0 } given by
$$N^{\lambda,\mu,\alpha}(t) = \max\{k \geq 0 : S^{\lambda,\mu,\alpha}_k \leq t\}.$$
Once $N^{\lambda,\mu,\alpha}$ is defined, we can define the process $Q^{\lambda,\mu,\alpha} = \{Q^{\lambda,\mu,\alpha}(t), t \geq 0\}$ as $Q^{\lambda,\mu,\alpha}(t) = X^{\lambda,\mu}_{N^{\lambda,\mu,\alpha}(t)}$. Since, by definition, the process $(X^{\lambda,\mu}, T^{\lambda,\mu,\alpha})$ is a Markov-additive process, $Q^{\lambda,\mu,\alpha}$ is a semi-Markov process (see [51]). For $\alpha = 1$, it is well known that $Q^{\lambda,\mu,1} \in M/M/1(\lambda, \mu)$, and the Markov-additive decomposition that we exploited before is the main tool behind the Doob–Gillespie algorithm [63]. Hence, it is clear that, to generalize the previous algorithm, we only need to show that $Q^{\lambda,\mu,\alpha} \in M/M/1(\lambda, \mu; \alpha)$.
Proposition 8.
Let Q λ , μ , α and Q λ , μ , 1 be two processes as defined before, and let L α be an inverse stable subordinator independent of both X λ , μ and T λ , μ , 1 . Then, Q λ , μ , α = d Q λ , μ , 1 L α . As a consequence, Q λ , μ , α M / M / 1 ( λ , μ ; α ) .
Proof. 
First, let us observe that, if $T^{\lambda,\mu,1}$ is independent of $\sigma_\alpha$, we have $\sigma_\alpha(T^{\lambda,\mu,1}_k) \stackrel{d}{=} T^{\lambda,\mu,\alpha}_k$ for any $k \geq 1$ by Lemma 2. Arguing as in Proposition 7, one can prove by induction that $S^{\lambda,\mu,\alpha}_k \stackrel{d}{=} \sigma_\alpha(S^{\lambda,\mu,1}_k)$ for any $k \geq 0$ and that $(\sigma_\alpha(T^{\lambda,\mu,1}_k))_{k \geq 1}$ is a sequence of independent random variables. Thus, we conclude that $T^{\lambda,\mu,\alpha} \stackrel{d}{=} (\sigma_\alpha(T^{\lambda,\mu,1}_k))_{k \geq 1}$. By Skorokhod's representation theorem (see, for instance, [54]), we can suppose, without loss of generality, that the equality holds almost surely. Hence, we obtain that $S^{\lambda,\mu,\alpha}_k \leq t$ if and only if $S^{\lambda,\mu,1}_k \leq L_\alpha(t)$ for any $k \geq 1$ and any $t \geq 0$; that is to say, $N^{\lambda,\mu,\alpha} = N^{\lambda,\mu,1} \circ L_\alpha$. This concludes the proof.    □
Now, we are ready to express the simulation algorithm. It is clear that we only need to simulate the following arrays:
  • The state array X λ , μ , which contains the states of the queue length process;
  • The calendar array S λ , μ , α , which contains the times in which an event happens.
Indeed, in such a case, one can recover Q^{λ,μ,α}(t) by finding k such that S_k^{λ,μ,α} ≤ t < S_{k+1}^{λ,μ,α} and then setting Q^{λ,μ,α}(t) = X_k^{λ,μ}, as done in Algorithm 1. Let us stress that all the algorithms will be given as procedures, so that each of them can be invoked inside the others.
Algorithm 1 Generation of the queue length process from the state and calendar arrays
procedure GenerateQueue            ▹ Input: X^{λ,μ}, S^{λ,μ,α}
                           ▹ Output: Q^{λ,μ,α}
    function Q^{λ,μ,α}(t)
        ℓ ← length(S^{λ,μ,α})             ▹ Recall that the arrays start with 0
        t_final ← S_{ℓ−1}^{λ,μ,α}
        if t_final < t then
           Error
        else
            k ← 0
            while t > S_{k+1}^{λ,μ,α} do
                k ← k + 1
            end while
            Q^{λ,μ,α}(t) ← X_k^{λ,μ}
        end if
    end function
end procedure
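In Python, Algorithm 1 amounts to a right-continuous lookup in the calendar array; a minimal sketch, assuming the state and calendar arrays are plain lists indexed from 0 (function and variable names are ours):

```python
from bisect import bisect_right

def generate_queue(states, calendar):
    """Return the step function t -> Q(t) encoded by the state array X
    and the calendar array S (both starting at index 0, with S[0] = 0)."""
    if len(states) != len(calendar):
        raise ValueError("state and calendar arrays must have equal length")
    t_final = calendar[-1]

    def q(t):
        if t < 0.0 or t > t_final:
            raise ValueError("t outside the simulated horizon")
        # largest k with S_k <= t, so that Q(t) = X_k on [S_k, S_{k+1})
        k = bisect_right(calendar, t) - 1
        return states[k]

    return q
```

The `bisect_right` call replaces the explicit while loop of the pseudocode with a binary search, which matters when the returned function is evaluated at many time points.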
Once we know how to generate the queue from the arrays, let us state Algorithm 2, which simulates them up to the n_final-th event.
Algorithm 2 Simulation of an M/M/1(λ, μ; α) queue up to event n_final
procedure SimulateArraysEvent         ▹ Input: λ, μ > 0, α ∈ (0,1), n_final ∈ ℕ
                                 ▹ Output: X^{λ,μ}, S^{λ,μ,α}
    X_0^{λ,μ} ← 0
    S_0^{λ,μ,α} ← 0
    for k = 1, …, n_final do
        Simulate U uniform in (0,1)
        if U < p_{λ,μ}(X_{k−1}^{λ,μ}) then
            X_k^{λ,μ} ← X_{k−1}^{λ,μ} + 1
        else
            X_k^{λ,μ} ← X_{k−1}^{λ,μ} − 1
        end if
        Simulate T ∼ Exp(r_{λ,μ}(X_{k−1}^{λ,μ}))
        Simulate S ∼ S(α, 1, cos(πα/2)^{1/α}, 0)
        S_k^{λ,μ,α} ← S_{k−1}^{λ,μ,α} + T^{1/α} S
    end for
end procedure
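A minimal Python sketch of Algorithm 2. We assume here the standard M/M/1 jump-chain quantities, p_{λ,μ}(0) = 1, p_{λ,μ}(x) = λ/(λ+μ) and r_{λ,μ}(0) = λ, r_{λ,μ}(x) = λ+μ for x ≥ 1; the one-sided stable draw uses Kanter's form of the Chambers–Mallows–Stuck method, and all names are ours:

```python
import math
import random

def simulate_arrays_event(lam, mu, alpha, n_final, rng=random):
    """Algorithm 2: state array X and calendar array S of a fractional
    M/M/1(lam, mu; alpha) queue up to the n_final-th event.
    Assumes p(0) = 1, p(x) = lam/(lam+mu), r(0) = lam, r(x) = lam+mu."""
    X, S = [0], [0.0]
    for _ in range(n_final):
        x = X[-1]
        p = 1.0 if x == 0 else lam / (lam + mu)   # up-jump probability
        r = lam if x == 0 else lam + mu           # total event rate
        X.append(x + 1 if rng.random() < p else x - 1)
        t = rng.expovariate(r)                    # T ~ Exp(r(x))
        u, w = rng.uniform(0.0, math.pi), rng.expovariate(1.0)
        s = (math.sin(alpha * u) / math.sin(u) ** (1 / alpha)
             * (math.sin((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
        S.append(S[-1] + t ** (1 / alpha) * s)    # Mittag-Leffler increment
    return X, S
```

Note that no inverse subordinator is ever simulated: the semi-Markov structure is captured entirely by the Mittag–Leffler calendar increments.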
Algorithm 2 can be easily adapted to other stopping conditions. In the following, it will be useful to simulate an M/M/1(λ, μ; α) queue up to time t_stop > 0 (more precisely, up to the first event time S_k^{λ,μ,α} exceeding t_stop). This is done in Algorithm 3, and some simulation results are shown in Figure 1.
Algorithm 3 Simulation of an M/M/1(λ, μ; α) queue up to time t_stop > 0
procedure SimulateArraysTime         ▹ Input: λ, μ > 0, α ∈ (0,1), t_stop > 0
                                 ▹ Output: X^{λ,μ}, S^{λ,μ,α}
    X_0^{λ,μ} ← 0
    S_0^{λ,μ,α} ← 0
    k ← 0
    while S_k^{λ,μ,α} < t_stop do
        k ← k + 1
        Simulate U uniform in (0,1)
        if U < p_{λ,μ}(X_{k−1}^{λ,μ}) then
            X_k^{λ,μ} ← X_{k−1}^{λ,μ} + 1
        else
            X_k^{λ,μ} ← X_{k−1}^{λ,μ} − 1
        end if
        Simulate T ∼ Exp(r_{λ,μ}(X_{k−1}^{λ,μ}))
        Simulate S ∼ S(α, 1, cos(πα/2)^{1/α}, 0)
        S_k^{λ,μ,α} ← S_{k−1}^{λ,μ,α} + T^{1/α} S
    end while
end procedure
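Algorithm 3 differs from Algorithm 2 only in the stopping rule, so in Python the event-count loop simply becomes a while loop on the calendar. The sketch below makes the same assumptions on the jump-chain quantities p_{λ,μ} and r_{λ,μ} (p(0) = 1, p(x) = λ/(λ+μ), r(0) = λ, r(x) = λ+μ for x ≥ 1), with names ours:

```python
import math
import random

def simulate_arrays_time(lam, mu, alpha, t_stop, rng=random):
    """Algorithm 3: simulate the state and calendar arrays of a fractional
    M/M/1(lam, mu; alpha) queue until the first event time >= t_stop."""
    X, S = [0], [0.0]
    while S[-1] < t_stop:
        x = X[-1]
        p = 1.0 if x == 0 else lam / (lam + mu)   # up-jump probability
        r = lam if x == 0 else lam + mu           # total event rate
        X.append(x + 1 if rng.random() < p else x - 1)
        t = rng.expovariate(r)
        u, w = rng.uniform(0.0, math.pi), rng.expovariate(1.0)
        s = (math.sin(alpha * u) / math.sin(u) ** (1 / alpha)
             * (math.sin((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
        S.append(S[-1] + t ** (1 / alpha) * s)    # Mittag-Leffler increment
    return X, S
```

The loop appends one extra event whose time lands at or beyond t_stop, so the resulting arrays always cover the whole interval [0, t_stop].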

5.2. Simulation of the DRBM ( η ; α ) with η < 0

Now, we want to use Algorithm 3 to obtain a simulation algorithm for the Delayed Reflected Brownian Motion with negative η. To do this, let us observe that the limit process in Theorem 7 is Q̃_α ∼ DRBM(λ²η̄, λ³σ̄²; α) for some constants η̄ < 0 and σ̄ > 0. Recalling that σ̄² = 2/λ², we know that, in order to obtain DRBM(η, 1; α), it must hold that 2λ = 1, i.e., λ = 1/2, and then η̄ = 4η. Now, set ρ_n = 1 − ζ/n, where ζ > 0 has to be chosen properly, precisely ζ = −η̄/2 = −2η. However, since we need the service rate μ_n = λ_n ρ_n^{−1} to be positive, it must hold that ρ_n > 0, which is true if and only if ζ < n. Taking this requirement into account, we can develop an algorithm for the simulation of the DRBM with negative η up to time t_stop > 0 and iteration n_it > 4η². Again, we only need to simulate the state and calendar arrays of the approximating queue length process and then generate the DRBM from them. This is done in Algorithms 4 and 5.
Algorithm 4 Simulation of a DRBM(η; α) for η < 0 up to time t_stop > 0 and iteration n_it > 4η²
procedure SimulateDRBM     ▹ Input: η < 0, α ∈ (0,1), t_stop > 0, n_it > 4η²
                    ▹ Output: W̃_α^η
    λ ← n_it²/2
    ζ ← −2η
    ρ ← 1 − ζ/n_it
    μ ← λ/ρ
    X_0^{λ,μ} ← 0
    S_0^{λ,μ,α} ← 0
    k ← 0
    while S_k^{λ,μ,α} < t_stop do
        k ← k + 1
        Simulate U uniform in (0,1)
        if U < p_{λ,μ}(X_{k−1}^{λ,μ}) then
            X_k^{λ,μ} ← X_{k−1}^{λ,μ} + 1
        else
            X_k^{λ,μ} ← X_{k−1}^{λ,μ} − 1
        end if
        Simulate T ∼ Exp(r_{λ,μ}(X_{k−1}^{λ,μ}))
        Simulate S ∼ S(α, 1, cos(πα/2)^{1/α}, 0)
        S_k^{λ,μ,α} ← S_{k−1}^{λ,μ,α} + T^{1/α} S
    end while
    W̃_α^η ← GenerateDRBM(X^{λ,μ}, S^{λ,μ,α}, n_it)
end procedure
Algorithm 5 Generation of the DRBM process from the state and calendar arrays
procedure GenerateDRBM          ▹ Input: X^{λ,μ}, S^{λ,μ,α}, n_it
                               ▹ Output: W̃_α^η
    function W̃_α^η(t)
        ℓ ← length(S^{λ,μ,α})            ▹ Recall that the arrays start with 0
        t_final ← S_{ℓ−1}^{λ,μ,α}
        if t_final < t then
           Error
        else
            k ← 0
            while t > S_{k+1}^{λ,μ,α} do
                k ← k + 1
            end while
            W̃_α^η(t) ← X_k^{λ,μ}/n_it
        end if
    end function
end procedure
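Putting Algorithms 4 and 5 together gives a compact end-to-end sketch in Python. The parameter choices follow the drift/variance matching discussed in the text (λ = n²/2, ζ = −2η, ρ = 1 − ζ/n, μ = λ/ρ); all names are ours, the stable draw is again Kanter's form of the Chambers–Mallows–Stuck method, and the returned object is the step function of Algorithm 5:

```python
import math
import random
from bisect import bisect_right

def simulate_drbm(eta, alpha, t_stop, n_it, rng=random):
    """Algorithms 4-5: approximate a DRBM(eta; alpha), eta < 0, by the
    rescaled fractional M/M/1 queue with lam = n**2/2, zeta = -2*eta,
    rho = 1 - zeta/n, mu = lam/rho (matching as derived in the text)."""
    assert eta < 0 and n_it > 4 * eta ** 2
    lam = n_it ** 2 / 2.0
    zeta = -2.0 * eta
    rho = 1.0 - zeta / n_it
    mu = lam / rho
    X, S = [0], [0.0]
    while S[-1] < t_stop:
        x = X[-1]
        p = 1.0 if x == 0 else lam / (lam + mu)
        r = lam if x == 0 else lam + mu
        X.append(x + 1 if rng.random() < p else x - 1)
        t = rng.expovariate(r)
        u, w = rng.uniform(0.0, math.pi), rng.expovariate(1.0)
        s = (math.sin(alpha * u) / math.sin(u) ** (1 / alpha)
             * (math.sin((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))
        S.append(S[-1] + t ** (1 / alpha) * s)

    def w_tilde(t):
        if not 0.0 <= t <= S[-1]:
            raise ValueError("t outside the simulated horizon")
        return X[bisect_right(S, t) - 1] / n_it   # Algorithm 5: rescale by n_it

    return w_tilde
```

The approximating path takes values in the lattice {0, 1/n_it, 2/n_it, …}, which is also a cheap sanity check on the output.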

5.3. Numerical Results

In this section, we want to show some simulation results. However, how can one visualize the convergence in distribution?
By the Portmanteau Theorem (see [54] (Chapter 2, Section 3)) and Theorem 7, we know that, for any bounded functional F : D([0, t_stop]) → ℝ that is continuous in the J_1 topology, it holds that
lim_{n→+∞} E[F(Q_α^{(n)}/n)] = E[F(W̃_α^η)],
and thus we could study whether
e_F(n) := E[F(Q_α^{(n)}/n)] − E[F(W̃_α^η)]
converges to 0. However, to do this, we should know E[F(W̃_α^η)] a priori, which is not always possible. Moreover, let us observe that even the evaluation map x ∈ D([0, t_stop]) ↦ x(t) ∈ ℝ is not continuous everywhere in the J_1 topology; hence, we need to enlarge the set of admissible functionals F. To do this, we need the following weaker version of the Portmanteau Theorem.
Proposition 9.
Let X_n, X be random variables with values in a metric space M such that X_n → X in distribution. Let also F : M → ℝ be a function such that the following properties hold true:
(i)
It holds that P(X ∈ Disc(F)) = 0;
(ii)
There exist two constants ε, M > 0 such that E[|F(X_n)|^{1+ε}] ≤ M for any n ∈ ℕ.
Then,
lim n + E [ F ( X n ) ] = E [ F ( X ) ] .
Proof. 
By property (i) and the continuous mapping theorem (see [25] (Theorem 3.4.3)), we know that F(X_n) → F(X) in distribution. Moreover, property (ii) implies that the random variables F(X_n) are uniformly integrable. We conclude the proof by [54] (Chapter 2, Theorem 3.5).    □
Now, let us select two particular functionals. The first one is given by
F_1(x) = x(t_stop),    x ∈ D([0, t_stop]),
i.e., it is an evaluation functional. To work with such a functional, we need to prove the following result.
Proposition 10.
The functional F_1 satisfies the hypotheses of Proposition 9 with respect to the sequence (Q_α^{(n)}/n)_{n∈ℕ}.
Proof. 
Let us first observe that, since W̃_α^η is continuous, it is clear that P(W̃_α^η ∈ Disc(F_1)) = 0; see [25] (Proposition 13.2.1). Thus, we only need to prove item (ii). To do this, let us select ε = 1 and remark that we need to estimate n^{−2} E[(Q_α^{(n)}(t_stop))²]. Let m_{n,α}^{(2)}(t) = E[(Q_α^{(n)}(t))²] and observe that, by a simple conditioning argument,
m_{n,α}^{(2)}(t) = E[m_{n,1}^{(2)}(L^α(t))],
where L^α is an inverse α-stable subordinator independent of Q_1^{(n)}. By [65], we know that m_{n,1}^{(2)} is increasing. Since L^α is almost surely increasing, it is clear that m_{n,α}^{(2)} is increasing as well, and then
m_{n,α}^{(2)}(t) ≤ lim_{s→+∞} m_{n,α}^{(2)}(s).
Recalling that lim_{t→+∞} L^α(t) = +∞ almost surely, we have, by the monotone convergence theorem,
m_{n,α}^{(2)}(t) ≤ lim_{s→+∞} m_{n,1}^{(2)}(s).
If ρ_n < 1 for any n ∈ ℕ, we can combine Equations (3.24) and (3.25) in [7] to get
m_{n,α}^{(2)}(t) ≤ ρ_n(1 + ρ_n)/(1 − ρ_n)² ≤ 2n²/ζ²,
where we also use the fact that ρ_n = 1 − ζ/n. Hence, we finally get
E[(F_1(Q_α^{(n)}/n))²] ≤ 2/ζ² < +∞,
concluding the proof.    □
The second functional that we consider is the integral functional
F_2(x) = ∫_0^{t_stop} x(t) dt,    x ∈ D([0, t_stop]).
This time, we will directly prove the limit result.
Proposition 11.
It holds that
lim_{n→+∞} E[F_2(Q_α^{(n)}/n)] = E[F_2(W̃_α^η)].
Proof. 
Let us first rewrite
E[F_2(Q_α^{(n)}/n)] = ∫_0^{t_stop} (1/n) E[Q_α^{(n)}(t)] dt,
where we use Fubini's theorem, which applies since Q_α^{(n)}(t) ≥ 0 for all t ∈ [0, t_stop]. By Propositions 9 and 10, we have that
lim_{n→+∞} (1/n) E[Q_α^{(n)}(t)] = E[W̃_α^η(t)],    t ∈ [0, t_stop].
Furthermore, arguing exactly as in the proof of Proposition 10 and using Equation  ( 3.24 ) in [7], we get
(1/n) E[Q_α^{(n)}(t)] ≤ 1/ζ,
and thus we can use the dominated convergence theorem to conclude the proof.    □
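On the simulated piecewise-constant paths, both functionals can be evaluated exactly: F_1 is a lookup in the calendar array and F_2 is a finite sum of rectangle areas. A minimal Python sketch (names are ours; `states` and `calendar` are the state and calendar arrays):

```python
from bisect import bisect_right

def functional_f1(states, calendar, t_stop):
    """F1(x) = x(t_stop) for the step path Q(t) = X_k on [S_k, S_{k+1})."""
    return states[bisect_right(calendar, t_stop) - 1]

def functional_f2(states, calendar, t_stop):
    """F2(x) = integral of x over [0, t_stop], exact for step paths."""
    total = 0.0
    for x, s0, s1 in zip(states, calendar, calendar[1:] + [float("inf")]):
        if s0 >= t_stop:
            break
        total += x * (min(s1, t_stop) - s0)   # rectangle of height X_k
    return total
```

Averaging these values over independently simulated paths gives the Monte Carlo estimates of E[F_i(Q_α^{(n)}/n)] discussed below, after dividing by the scale factor n.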
In particular, E[F_i(W̃_α^η)], i = 1, 2, could be estimated numerically by means of Corollary 2 and Mikusiński's representation of g_α. However, one still needs to estimate E[F_i(Q_α^{(n)}/n)] for i = 1, 2, which is not an easy task: the distribution (and hence the expected value) of the fractional M/M/1 queue admits some series representations (see [4,21]) that are not easy to evaluate numerically. To overcome this problem, we adopt a Monte Carlo method (with 5000 samples) to estimate E[F_i(Q_α^{(n)}/n)]. This, however, means that the estimated value oscillates randomly around the exact one. For this reason, one cannot use the Cauchy-type error
e_{F,Cauchy}(n) := |E[F(Q_α^{(n+1)}/(n+1))] − E[F(Q_α^{(n)}/n)]|
to estimate the actual approximation error. In any case, a further investigation of the link between e_{F,Cauchy}(n) and e_Cauchy(n) will be carried out in future works. Let us also underline that the oscillation of the Monte Carlo estimate clearly depends on the variance of F_i(Q_α^{(n)}/n) and thus, for fixed n, on α. This is made clear in the boxplots in Figure 2. The plots of E[F_i(Q_α^{(n)}/n)] against n are given in Figure 3. The convergence of this sequence is not so clear, due to the oscillations caused by the Monte Carlo method. Another visualization of the convergence of E[F_i(Q_α^{(n)}/n)] can be obtained by observing that, for any n_0, δ ∈ ℕ, the following Cesàro convergence holds:
lim_{N→+∞} (1/N) Σ_{n=n_0}^{N+n_0} E[F_i(Q_α^{(nδ)}/(nδ))] = E[F_i(W̃_α^η)].
Indeed, the sequence S_N^{(i)} := (1/N) Σ_{n=n_0}^{N+n_0} E[F_i(Q_α^{(nδ)}/(nδ))] provides a form of smoothing of the point plot, and the convergence is much clearer in this case, as shown in Figure 4. As S_N^{(i)} is smoother than the original sequence, one can still use the error
ẽ_{F_i,Cauchy}^N = |S_{N+1}^{(i)} − S_N^{(i)}|
to impose a stopping criterion of the form ẽ_{F_i,Cauchy}^N < ε_abs for the absolute error, or ẽ_{F_i,Cauchy}^N < ε_rel S_{N+1}^{(i)} for the relative error.
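This Cesàro stopping rule is independent of the queueing details: it applies to any noisy estimator n ↦ Ê(n) of a convergent sequence. A self-contained Python sketch, with a mock noisy estimator standing in for the Monte Carlo estimate of E[F_i(Q_α^{(n)}/n)] (all names and the mock are ours):

```python
import random

def cesaro_stop(estimate, n0, delta, eps_abs, eps_rel, n_max):
    """Average estimate(n0), estimate(n0 + delta), ... and stop once the
    running means S_N satisfy |S_{N+1} - S_N| < min(eps_abs, eps_rel*|S_{N+1}|)
    or the iteration budget n_max is exhausted. Returns the last running mean."""
    total, count, prev = 0.0, 0, None
    n = n0
    while count < n_max:
        total += estimate(n)
        count += 1
        mean = total / count
        if prev is not None and abs(mean - prev) < min(eps_abs, eps_rel * abs(mean)):
            break
        prev, n = mean, n + delta
    return mean

rng = random.Random(42)
# mock estimator: limit 2.0, O(1/n) bias, Monte-Carlo-like noise
noisy = lambda n: 2.0 + 3.0 / n + rng.gauss(0.0, 0.05)
```

The running mean damps the Monte Carlo noise at rate 1/N, which is what makes the Cauchy-type criterion meaningful again on the smoothed sequence.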
The procedure that simulates the DRBM and evaluates the Monte Carlo estimate of E[F(Q_α^{(n)}/n)] is given in Algorithm 6, while a simulation algorithm that incorporates the stopping criterion above is given in Algorithm 7. To show some simulated sample paths, we used Algorithm 7 with F_2 (which takes into account the whole simulated trajectory). The results are illustrated in Figure 5, while the sequence ẽ_{F_2,Cauchy}^N is plotted in Figure 6.
Algorithm 6 Simulation of n_traj ∈ ℕ trajectories of a DRBM(η; α) for η < 0 up to time t_stop > 0 with iteration n > 4η² and evaluation of the functional E[F(W̃_α^η)]
procedure SimDRBMwFunc     ▹ Input: η < 0, α ∈ (0,1), t_stop > 0, n, n_traj ∈ ℕ, n > 4η²
                    ▹ Output: (X^{λ,μ})_{k,j}, (S^{λ,μ,α})_{k,j}, F_mean
    λ ← n²/2
    ζ ← −2η
    ρ ← 1 − ζ/n
    μ ← λ/ρ
    F_mean ← 0
    for j = 1, …, n_traj do
        X_{0,j}^{λ,μ} ← 0
        S_{0,j}^{λ,μ,α} ← 0
        k ← 0
        while S_{k,j}^{λ,μ,α} < t_stop do
            k ← k + 1
            Simulate U uniform in (0,1)
            if U < p_{λ,μ}(X_{k−1,j}^{λ,μ}) then
                X_{k,j}^{λ,μ} ← X_{k−1,j}^{λ,μ} + 1
            else
                X_{k,j}^{λ,μ} ← X_{k−1,j}^{λ,μ} − 1
            end if
            Simulate T ∼ Exp(r_{λ,μ}(X_{k−1,j}^{λ,μ}))
            Simulate S ∼ S(α, 1, cos(πα/2)^{1/α}, 0)
            S_{k,j}^{λ,μ,α} ← S_{k−1,j}^{λ,μ,α} + T^{1/α} S
        end while
        F_mean ← F_mean + F_n(X_{·,j}^{λ,μ}, S_{·,j}^{λ,μ,α})
    end for
    F_mean ← F_mean/n_traj
end procedure
Algorithm 7 Simulation of n_traj ∈ ℕ trajectories of a DRBM(η; α) for η < 0 up to time t_stop > 0 with tolerances ε_abs, ε_rel > 0 and maximum number of iterations n_max ∈ ℕ
procedure SimDRBMwTol     ▹ Input: α ∈ (0,1), η < 0, t_stop, ε_abs, ε_rel > 0, n_traj, n_max, n_0, δ ∈ ℕ, n_0 > 4η²
                        ▹ Output: n_traj trajectories of W̃_α^η, e_{F,Cauchy}
    n ← n_0
    n_it ← 0
    (X^{λ,μ})_{k,j}, (S^{λ,μ,α})_{k,j}, F_mean ← SimDRBMwFunc(n, η, α, n_traj)
    Ŝ ← F_mean
    S_0 ← Ŝ
    n ← n + δ
    n_it ← n_it + 1
    (X^{λ,μ})_{k,j}, (S^{λ,μ,α})_{k,j}, F_mean ← SimDRBMwFunc(n, η, α, n_traj)
    Ŝ ← Ŝ + F_mean
    S_1 ← Ŝ/2
    e_{F,Cauchy} ← |S_1 − S_0|
    while e_{F,Cauchy} ≥ min{ε_abs, ε_rel S_1} and n_it ≤ n_max do
        n ← n + δ
        n_it ← n_it + 1
        S_0 ← S_1
        (X^{λ,μ})_{k,j}, (S^{λ,μ,α})_{k,j}, F_mean ← SimDRBMwFunc(n, η, α, n_traj)
        Ŝ ← Ŝ + F_mean
        S_1 ← Ŝ/(n_it + 1)
        e_{F,Cauchy} ← |S_1 − S_0|
    end while
    for j = 1, …, n_traj do
        W_{α,j}^η ← GenerateDRBM(X_{·,j}^{λ,μ}, S_{·,j}^{λ,μ,α}, n)
    end for
end procedure

6. Conclusions

To summarize the results, we introduced the Delayed Reflected Brownian Motion by means of a suitable time change of the Reflected Brownian Motion (or, equivalently, by solving Skorokhod's reflection problem on the paths of the Delayed Brownian Motion) and recalled the main properties of fractional M/M/1 queues as defined in [4]. These two processes are then linked via the heavy traffic approximation result established in Theorem 7. As we also underline in the Introduction, such a theorem can be read in two ways, depending on the process that we want to focus on. If we are interested in the properties of fractional M/M/1 queues, the theorem provides a subdiffusive approximation of them, in terms of the Delayed Reflected Brownian Motion, as the traffic intensity ρ approaches 1. This is quite useful if one needs to investigate (approximately) some distributional property of a fractional M/M/1 queue in the transient state. Indeed, different formulations of the one-dimensional distribution of the queue length process in the fractional case are given in [4,21], but they involve nested series of functions, which can be difficult to evaluate even numerically. However, if ρ is near 1, one can approximate the one-dimensional distribution of the queue length process with that of the Delayed Reflected Brownian Motion given in Proposition 5, which can be numerically evaluated thanks to Mikusiński's representation of the density of the stable subordinator.
Vice versa, if we are interested in the properties of the Delayed Reflected Brownian Motion, Theorem 7 provides a continuous-time random walk approximation of such a process. This is a quite useful property when combined with the discrete event simulation procedure (which is a generalization of the well-known Gillespie algorithm; see [31,63]). Indeed, the simulation of an inverse α-stable subordinator is usually carried out either by Laplace inversion, starting from the Laplace transform of f_α, or by inverting the subordinator itself, which is in turn simulated via the Chambers–Mallows–Stuck method [55]. The algorithm presented in this paper does not rely at all on the simulation of the inverse subordinator, thanks to the fact that we are able to simulate Mittag–Leffler random variables directly by means of the Chambers–Mallows–Stuck method.
This second point of view is investigated in more detail in Section 5. Precisely, Theorem 7 gives us a limit result, but does not tell us how large we should choose n to obtain a suitably good approximation. Moreover, as observed before, estimating the distributional properties of a fractional queue is not an easy task, due to the rather complicated form of the state probabilities. However, some distributional quantities can be obtained via Monte Carlo estimates, whose random nature invalidates the idea of using an error based on the Cauchy property of convergent sequences. This problem can be overcome by using the Cesàro convergence of the sequence, since taking averages smooths, in some sense, the oscillating simulated data, as one can observe by comparing Figure 3 and Figure 4. Thus, one can use the sequence of averages in place of the original one to provide a stopping criterion, as done in Algorithm 7. Such an approach is supported by the sequence of errors given in Figure 6. Clearly, one could also use other smoothing procedures to overcome the oscillations caused by Monte Carlo estimates. In future works, we aim to discuss the properties of time-changed M/M/1 queues and Reflected Brownian Motions with more general inverse subordinators, trying to link them via a heavy traffic approximation result. The simulation of such queueing models will require more sophisticated methods, due to the lack, in general, of both the self-similarity property and a special simulation algorithm for the subordinator.

Author Contributions

All authors equally contributed to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

G. Ascione and E. Pirozzi were partially supported by MIUR-PRIN 2017, project “Stochastic Models for Complex Systems”, grant number 2017JFFHSH. G. Ascione was partially supported by INdAM Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni. E. Pirozzi was partially supported by INdAM Gruppo Nazionale per il Calcolo Scientifico. N. Leonenko was partially supported by LMS grant 42997, ARC grant DP220101680. G. Ascione, N. Leonenko and E. Pirozzi were supported by the Isaac Newton Institute (Cambridge) Program Fractional Differential Equations.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable comments. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Fractional Differential Equations (FDE2), when work on this paper was undertaken.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Erlang, A.K. The theory of probabilities and telephone conversations. Nyt. Tidsskr. Mat. Ser. B 1909, 20, 33–39.
  2. Helbing, D. A section-based queueing-theoretical traffic model for congestion and travel time analysis in networks. J. Phys. A Math. Gen. 2003, 36, L593.
  3. Kleinrock, L. Queueing Systems: Computer Applications; John Wiley: Hoboken, NJ, USA, 1976.
  4. Cahoy, D.O.; Polito, F.; Phoha, V. Transient behavior of fractional queues and related processes. Methodol. Comput. Appl. Probab. 2015, 17, 739–759.
  5. Schoutens, W. Stochastic Processes and Orthogonal Polynomials; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 146.
  6. Ross, S.M. Introduction to Probability Models; Academic Press: Cambridge, MA, USA, 2014.
  7. Kleinrock, L. Queueing Systems: Theory; John Wiley: Hoboken, NJ, USA, 1975.
  8. Conolly, B.; Langaris, C. On a new formula for the transient state probabilities for M/M/1 queues and computational implications. J. Appl. Probab. 1993, 30, 237–246.
  9. Parthasarathy, P. A transient solution to an M/M/1 queue: A simple approach. Adv. Appl. Probab. 1987, 19, 997–998.
  10. Lévy, P. Processus semi-markoviens. In Proceedings of the International Congress of Mathematicians, Amsterdam, The Netherlands, 2–9 September 1954.
  11. Orsingher, E.; Polito, F. Fractional pure birth processes. Bernoulli 2010, 16, 858–881.
  12. Orsingher, E.; Polito, F. On a fractional linear birth–death process. Bernoulli 2011, 17, 114–137.
  13. Leonenko, N.N.; Meerschaert, M.M.; Sikorskii, A. Fractional Pearson diffusions. J. Math. Anal. Appl. 2013, 403, 532–546.
  14. Ascione, G.; Leonenko, N.; Pirozzi, E. Fractional immigration-death processes. J. Math. Anal. Appl. 2021, 495, 124768.
  15. Laskin, N. Fractional Poisson process. Commun. Nonlinear Sci. Numer. Simul. 2003, 8, 201–213.
  16. Meerschaert, M.; Nane, E.; Vellaisamy, P. The fractional Poisson process and the inverse stable subordinator. Electron. J. Probab. 2011, 16, 1600–1620.
  17. Baeumer, B.; Meerschaert, M.M. Stochastic solutions for fractional Cauchy problems. Fract. Calc. Appl. Anal. 2001, 4, 481–500.
  18. Scalas, E.; Toaldo, B. Limit theorems for prices of options written on semi-Markov processes. Theory Probab. Math. Stat. 2021, 105, 3–33.
  19. Ascione, G.; Cuomo, S. A sojourn-based approach to semi-Markov Reinforcement Learning. arXiv 2022, arXiv:2201.06827.
  20. Ascione, G.; Toaldo, B. A semi-Markov leaky integrate-and-fire model. Mathematics 2019, 7, 1022.
  21. Ascione, G.; Leonenko, N.; Pirozzi, E. Fractional queues with catastrophes and their transient behaviour. Mathematics 2018, 6, 159.
  22. De Oliveira Souza, M.; Rodriguez, P.M. On a fractional queueing model with catastrophes. Appl. Math. Comput. 2021, 410, 126468.
  23. Ascione, G.; Leonenko, N.; Pirozzi, E. Fractional Erlang queues. Stoch. Process. Their Appl. 2020, 130, 3249–3276.
  24. Ascione, G.; Leonenko, N.; Pirozzi, E. On the Transient Behaviour of Fractional M/M/∞ Queues. In Nonlocal and Fractional Operators; Springer: Berlin/Heidelberg, Germany, 2021; pp. 1–22.
  25. Whitt, W. Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and their Application to Queues; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2002.
  26. Skorokhod, A.V. Stochastic equations for diffusion processes in a bounded region. II. Theory Probab. Its Appl. 1962, 7, 3–23.
  27. Magdziarz, M.; Schilling, R. Asymptotic properties of Brownian motion delayed by inverse subordinators. Proc. Am. Math. Soc. 2015, 143, 4485–4501.
  28. Capitanelli, R.; D’Ovidio, M. Delayed and rushed motions through time change. Lat. Am. J. Probab. Math. Stat. 2020, 17, 183–204.
  29. Graversen, S.E.; Shiryaev, A.N. An extension of P. Lévy’s distributional properties to the case of a Brownian motion with drift. Bernoulli 2000, 6, 615–620.
  30. Kobayashi, K. Stochastic calculus for a time-changed semimartingale and the associated stochastic differential equations. J. Theor. Probab. 2011, 24, 789–820.
  31. Asmussen, S.; Glynn, P.W. Stochastic Simulation: Algorithms and Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007; Volume 57.
  32. Ambrosio, L.; Fusco, N.; Pallara, D. Functions of Bounded Variation and Free Discontinuity Problems; Courier Corporation: Washington, DC, USA, 2000.
  33. Dupuis, P.; Ramanan, K. Convex duality and the Skorokhod problem. I. Probab. Theory Relat. Fields 1999, 115, 153–195.
  34. Harrison, J.M. Brownian Motion and Stochastic Flow Systems; Wiley: New York, NY, USA, 1985.
  35. Abate, J.; Whitt, W. Transient behavior of regulated Brownian motion, I: Starting at the origin. Adv. Appl. Probab. 1987, 19, 560–598.
  36. Kinkladze, G. A note on the structure of processes the measure of which is absolutely continuous with respect to the Wiener process modulus measure. Stochastics: Int. J. Probab. Stoch. Process. 1982, 8, 39–44.
  37. Revuz, D.; Yor, M. Continuous Martingales and Brownian Motion; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 293.
  38. Göttlich, S.; Lux, K.; Neuenkirch, A. The Euler scheme for stochastic differential equations with discontinuous drift coefficient: A numerical study of the convergence rate. Adv. Differ. Equ. 2019, 2019, 429.
  39. Asmussen, S.; Glynn, P.; Pitman, J. Discretization error in simulation of one-dimensional reflecting Brownian motion. Ann. Appl. Probab. 1995, 5, 875–896.
  40. Buonocore, A.; Nobile, A.G.; Pirozzi, E. Simulation of sample paths for Gauss-Markov processes in the presence of a reflecting boundary. Cogent Math. 2017, 4, 1354469.
  41. Buonocore, A.; Nobile, A.G.; Pirozzi, E. Generating random variates from PDF of Gauss–Markov processes with a reflecting boundary. Comput. Stat. Data Anal. 2018, 118, 40–53.
  42. Bertoin, J. Subordinators: Examples and Applications. In Lectures on Probability Theory and Statistics; Springer: Berlin/Heidelberg, Germany, 1999; pp. 1–91.
  43. Meerschaert, M.M.; Sikorskii, A. Stochastic Models for Fractional Calculus; de Gruyter: Berlin, Germany, 2019.
  44. Meerschaert, M.M.; Straka, P. Inverse stable subordinators. Math. Model. Nat. Phenom. 2013, 8, 1–16.
  45. Arendt, W.; Batty, C.J.; Hieber, M.; Neubrander, F. Vector-Valued Laplace Transforms and Cauchy Problems; Springer: Berlin/Heidelberg, Germany, 2011.
  46. Mikusiński, J. On the function whose Laplace-transform is e^{−s^α}. Stud. Math. 1959, 2, 191–198.
  47. Saa, A.; Venegeroles, R. Alternative numerical computation of one-sided Lévy and Mittag-Leffler distributions. Phys. Rev. E 2011, 84, 026702.
  48. Penson, K.; Górska, K. Exact and explicit probability densities for one-sided Lévy stable distributions. Phys. Rev. Lett. 2010, 105, 210604.
  49. Ascione, G.; Patie, P.; Toaldo, B. Non-local heat equation with moving boundary and curve-crossing of delayed Brownian motion. 2022; in preparation.
  50. Leonenko, N.; Pirozzi, E. First passage times for some classes of fractional time-changed diffusions. Stoch. Anal. Appl. 2021, 1–29.
  51. Meerschaert, M.M.; Straka, P. Semi-Markov approach to continuous time random walk limit processes. Ann. Probab. 2014, 42, 1699–1723.
  52. Çinlar, E. Markov additive processes. II. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1972, 24, 95–121.
  53. Kaspi, H.; Maisonneuve, B. Regenerative systems on the real line. Ann. Probab. 1988, 16, 1306–1332.
  54. Billingsley, P. Convergence of Probability Measures; John Wiley & Sons: Hoboken, NJ, USA, 2013.
  55. Chambers, J.M.; Mallows, C.L.; Stuck, B. A method for simulating stable random variables. J. Am. Stat. Assoc. 1976, 71, 340–344.
  56. Borovkov, A. Some limit theorems in the theory of mass service. Theory Probab. Its Appl. 1964, 9, 550–565.
  57. Iglehart, D.L.; Whitt, W. Multiple channel queues in heavy traffic. I. Adv. Appl. Probab. 1970, 2, 150–177.
  58. Kilbas, A.A.; Srivastava, H.M.; Trujillo, J.J. Theory and Applications of Fractional Differential Equations; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204.
  59. Mainardi, F.; Gorenflo, R.; Scalas, E. A fractional generalization of the Poisson processes. Vietnam J. Math. 2004, 32, 53–64.
  60. Mainardi, F.; Gorenflo, R.; Vivoli, A. Renewal processes of Mittag-Leffler and Wright type. Fract. Calc. Appl. Anal. 2005, 8, 7–38.
  61. Bingham, N.H. Limit theorems for occupation times of Markov processes. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 1971, 17, 1–22.
  62. Peng, J.; Li, K. A note on property of the Mittag-Leffler function. J. Math. Anal. Appl. 2010, 370, 635–638.
  63. Gillespie, D.T. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J. Comput. Phys. 1976, 22, 403–434.
  64. Nolan, J.P. Numerical calculation of stable densities and distribution functions. Commun. Stat. Stoch. Models 1997, 13, 759–774.
  65. Abate, J.; Whitt, W. Transient behavior of the M/M/1 queue: Starting at the origin. Queueing Syst. 1987, 2, 41–65.
Figure 1. Simulated sample paths of Q ∼ M/M/1(λ, μ; α) up to time t_stop = 100 for different values of α, λ and μ. Precisely, on the left ρ < 1, while on the right ρ > 1.
Figure 2. Boxplots of 100 simulations of E[F_i(Q_α^{(n)}/n)] (resp. i = 1 on the right and i = 2 on the left) via the Monte Carlo method for n = 50, η = −1/2, t_stop = 1 and different values of α.
Figure 3. Point plots of E[F_i(Q_α^{(n)}/n)] (resp. i = 1 on the right and i = 2 on the left) against n for η = −1/2, t_stop = 1 and different values of α.
Figure 4. Point plots of S_N^{(i)} (resp. i = 1 on the right and i = 2 on the left) against N for η = −1/2, t_stop = 1 and different values of α.
Figure 5. Sample paths of W̃_α^η for η = −1/2 and different values of α, simulated by means of Algorithm 7 with n_0 = 20, δ = 10, n_max = 5000 and ε_abs = ε_rel = 0.001.
Figure 6. Sequence of errors ẽ_{F_2,Cauchy}^N used to produce the plots of Figure 5.