Article

Asymptotic Duration for Optimal Multiple Stopping Problems

by Hugh N. Entwistle 1, Christopher J. Lustri 2 and Georgy Yu. Sofronov 1,*
1 School of Mathematical and Physical Sciences, Macquarie University, Sydney, NSW 2109, Australia
2 School of Mathematics and Statistics, The University of Sydney, Camperdown, NSW 2006, Australia
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 652; https://doi.org/10.3390/math12050652
Submission received: 17 January 2024 / Revised: 17 February 2024 / Accepted: 21 February 2024 / Published: 23 February 2024
(This article belongs to the Section Probability and Statistics)

Abstract:
We study the asymptotic duration of optimal stopping problems involving a sequence of independent random variables that are drawn from a known continuous distribution. These variables are observed as a sequence, where no recall of previous observations is permitted, and the objective is to form an optimal strategy to maximise the expected reward. In our previous work, we presented a methodology, borrowing techniques from applied mathematics, for obtaining asymptotic expressions for the expectation of the duration of the optimal stopping time where one stop is permitted. In this study, we generalise further to the case where more than one stop is permitted, with an updated objective function of maximising the expected sum of the variables chosen. We formulate a complete generalisation for an exponential family as well as the uniform distribution by utilising an inductive approach in the formulation of the stopping rule. Explicit examples are shown for common probability distributions, as well as simulations to verify the asymptotic calculations.

1. Introduction

Optimal stopping problems are formulated in terms of observing random variables and determining the stopping time(s) in order to maximise an objective function (which can be thought of as some reward). The discrete single-stopping problem involves observing a sequence of random variables $y_1, y_2, \ldots, y_N$ and choosing to stop on a particular observation $y_m$, where $1 \le m \le N$, based only on the variables that have been previously observed. After stopping, the “player” receives some pay-off which is a function of the variables observed $y_1, \ldots, y_m$. In [1], we used the pay-off $y_m$ (i.e., the value of the variable stopped on). To extend this to the multiple optimal stopping problem, we stop on $k > 1$ variables and, after stopping on variables $y_{m_1}, y_{m_2}, \ldots, y_{m_k}$ at times $m_1, m_2, \ldots, m_k$, we receive a gain which is a function of the variables observed. This problem is a subset of a general class of other optimal stopping problems that all aim to find a sequential procedure to maximise the expected reward (see Section 13.4 of [2] for a more extensive discussion of this class of problem). The secretary problem is arguably the most well known (see [3,4]), and it has a wide range of variations (see [5,6]), but there is also a rich literature of other examples (see [7]). By the ‘duration’ (sometimes referred to as ‘time’) of the stopping problem, we refer to how many observations the statistician observes before optimally stopping. It is also useful to know how long, on average, one would be waiting for potentially large sequences of observations (see [8]), which highlights the need for asymptotic analysis to address this question.
Less focus has been placed on understanding the asymptotic behaviour of the stopping duration, with most pre-existing results focusing on secretary-type, so-called “no-information”, problems where the distribution of the observations is unknown. The asymptotic expectation and variance of the stopping time for the secretary problem were studied in [9]. Similar asymptotic analyses for other variants of no-information problems can be found in [10,11,12,13], where the techniques and formulations used differ depending on the particular variation or structure of the problem.
There is substantially less literature describing the asymptotic behaviour for “full-information” problems, when the distribution of the variables is known a priori. A smaller subset addresses the multiple stopping problem, which we focus on in this study.
Gilbert and Mosteller [11] studied the optimal stopping strategy for the full-information problem in which the objective is to maximise the probability of attaining the best observation, known as the full-information best-choice problem [14]. The optimal rule was shown to be a threshold strategy wherein the player stops on $y_m$ if it is the best observation so far and its value exceeds a threshold depending on m. The asymptotic behaviour of this rule was also derived.
In the full-information case, the pay-off can instead be in terms of the actual values of the variables stopped on. A special case of this is the uniform game (see Section 5a of [11]), which is closely related to Cayley’s problem (see [3,15]). In [11], the authors showed an asymptotic expression for the expected reward of a sequence of n independent and identically distributed (iid) random variables having the standard uniform distribution (see also [15]). In [9], Mazalov and Peshkov found the asymptotic behaviour of the expected value and variance of the stopping time, namely $N/3$ and $N^2/18$, respectively, when the variables are from the uniform distribution. However, the techniques applied were specific to the structure of the distribution, and thus, it is difficult to extend or generalise this to other distribution functions. In [16,17], using extreme value theory, Kennedy and Kertz proved limit theorems for threshold-stopped random variables and derived the asymptotic distribution of the reward sequence in the optimal stopping of iid random variables. The asymptotic pay-off for the multiple stopping case is briefly analysed in Section 5c of [11].
In [1], we outlined a novel approach for a general asymptotic technique for calculating the asymptotic behaviour of the pay-off, as well as $E(\tau_N)$ and $\mathrm{Var}(\tau_N)$ in the single-stopping case, where $\tau_N$ is the single-stopping time, as $N \to \infty$, for general classes of probability distributions in the full-information problem where we wish to maximise the expected reward $y_m$. The techniques in our previous paper, which are extended in this study, employ the asymptotic analysis of difference and differential equations in order to establish and solve asymptotic differential equations for the quantities of interest. Differential equations were also used in [18,19]. In this study, we extend some of our results in [1] to the multiple stopping case through inductive arguments and verify these results with simulations. For simplicity, we only analyse continuous distributions and reserve the notation $f(x)$, $F(x)$, and $h(x) = 1 - F(x)$ for the continuous probability density, cumulative distribution, and “survivor” functions, respectively. We use the notation $f(x) \sim g(x)$ to denote the asymptotic relation $\lim_{x \to \infty} f(x)/g(x) = 1$.

2. Formulation of the Multiple Optimal Stopping Problem

As in the single-stopping problem, we sequentially observe the sequence $y_1, y_2, \ldots, y_N$ of independent, identically distributed (iid) random variables from a known distribution, but we must now decide which k of these variables to stop on. After k stoppings ($k \ge 2$) at times $m_1, m_2, \ldots, m_k$, where $1 \le m_1 < m_2 < \cdots < m_k \le N$, we receive the gain $Z_{m_1, m_2, \ldots, m_k} = y_{m_1} + y_{m_2} + \cdots + y_{m_k}$. The random variable $y_m$ can be interpreted as the value of some asset, such as a house, at time m. The problem of selling k identical objects in a finite time (or horizon) N, with one offer per time period and no recall of previous offers, is analogous to the multiple optimal stopping problem described. If we stop at $m_1$ after observations $(y_1, y_2, \ldots, y_{m_1})$, then we proceed to observe another sequence $y_{m_1+1}, y_{m_1+2}, \ldots, y_N$ (whose length depends on $m_1$) and must solve the new optimal stopping problem on this sequence. From [20], we have the following theorem.
Theorem 1.
Let $y_1, y_2, \ldots, y_N$ be a sequence of independent random variables with known cumulative distribution functions (cdfs) $F_1, F_2, \ldots, F_N$. Let $v_{L,l}$ be the value, which is the optimal expected reward, of a game with l stoppings, $l \le k$, and L steps, $L \le N$. If $E(y_1), E(y_2), \ldots, E(y_N)$ exist, then the value v of the ‘game’ is $v_{N,k}$, where
$$v_{n,1} = E(\max\{y_{N-n+1},\, v_{n-1,1}\}), \quad 1 \le n \le N, \qquad v_{0,1} = -\infty,$$
$$v_{n,k-i+1} = E(\max\{y_{N-n+1} + v_{n-1,k-i},\, v_{n-1,k-i+1}\}), \quad k-i+1 \le n \le N, \qquad v_{k-i,k-i+1} = -\infty, \quad i = k-1, \ldots, 1.$$
We put
$$\begin{aligned}
\tau_1^* &= \min\{m_1 : 1 \le m_1 \le N-k+1,\ y_{m_1} \ge v_{N-m_1,k} - v_{N-m_1,k-1}\},\\
\tau_i^* &= \min\{m_i : \tau_{i-1}^* < m_i \le N-k+i,\ y_{m_i} \ge v_{N-m_i,k-i+1} - v_{N-m_i,k-i}\}, \quad i = 2, \ldots, k-1,\\
\tau_k^* &= \min\{m_k : \tau_{k-1}^* < m_k \le N,\ y_{m_k} \ge v_{N-m_k,1}\};
\end{aligned}$$
then $\tau^* = (\tau_1^*, \tau_2^*, \ldots, \tau_k^*)$ is the optimal stopping rule.
If we define $v_{i,j} = v_{N-j,k-i+1} - v_{N-j,k-i}$, then we may notice that the stopping rules now take the form
$$\begin{aligned}
\tau_1^* &= \min\{m_1 : 1 \le m_1 \le N-k+1,\ y_{m_1} \ge v_{1,m_1}\},\\
\tau_i^* &= \min\{m_i : \tau_{i-1}^* < m_i \le N-k+i,\ y_{m_i} \ge v_{i,m_i}\}, \quad i = 2, \ldots, k-1,\\
\tau_k^* &= \min\{m_k : \tau_{k-1}^* < m_k \le N,\ y_{m_k} \ge v_{N-m_k,1}\},
\end{aligned}$$
so we may interpret $v_{i,j}$ as the appropriate threshold value that must be met in order to stop for the ith time at the jth term in the sequence of N observations. We can then define $w_{i,j} = P(y < v_{i,j})$, which can be interpreted as the probability that, in the above situation, we do not stop.
Example 1 (Multiple Stopping on the Uniform (0, 1) Distribution with k = 2 stops).
Let $y_1, y_2, \ldots, y_N$ be a sequence of independent, identically distributed random variables that follow the uniform $U(0,1)$ distribution.
We derived the equation for $v_{n,1}$ in the original paper (for details, see [1]):
$$v_{n,1} = \frac{1 + (v_{n-1,1})^2}{2}, \quad \text{where } v_{0,1} = 0.$$
For $v_{n,2}$ we have that
$$v_{n,2} = E(\max\{v_{n-1,1} + y_{N-n+1},\, v_{n-1,2}\}) = E(\max\{y_{N-n+1},\, v_{n-1,2} - v_{n-1,1}\}) + v_{n-1,1} = \frac{(v_{n-1,2} - v_{n-1,1})^2 + 1}{2} + v_{n-1,1},$$
which can then be used to numerically determine the values of the ‘game’.
For the example of $N = 6$, we may produce the values of $v_{n,1}$ and $v_{n,2}$ for each value of n from $n = 1$ to $n = 6$, as displayed in Table 1. For example, consider this sequence of simulated variables:
$$0.1081, \quad 0.6987, \quad 0.1483, \quad 0.4123, \quad 0.8968, \quad 0.7242.$$
For the first stop, we would stop on $y_2 = 0.6987$, since it is the first variable to satisfy $y_{m_1} \ge v_{N-m_1,2} - v_{N-m_1,1}$, and for the second, we stop on $y_5 = 0.8968$, as this is the first subsequent variable for which $y_{m_2} \ge v_{N-m_2,1}$. This particular example would have resulted in the reward $0.6987 + 0.8968 = 1.5955$.
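These recursions and the resulting stopping rule are straightforward to reproduce. The sketch below (plain Python, assuming the $U(0,1)$, $k = 2$ setup of this example) recomputes the Table 1 values and then applies the two thresholds to the simulated sequence above, recovering the stops at the second and fifth observations and the reward 1.5955.

```python
# Sketch: value recursions and stopping rule for Example 1 (U(0,1), k = 2).
# v1[n], v2[n] hold the game values v_{n,1}, v_{n,2} from the recursions above.
N = 6
v1 = [0.0] * (N + 1)              # v_{0,1} = 0 for nonnegative observations
v2 = [float("-inf")] * (N + 1)    # v_{1,2} is undefined: two stops need two steps
for n in range(1, N + 1):
    v1[n] = (1 + v1[n - 1] ** 2) / 2
for n in range(2, N + 1):
    if n == 2:
        v2[n] = v1[1] + 0.5                      # both remaining stops are forced
    else:
        d = v2[n - 1] - v1[n - 1]
        v2[n] = (d ** 2 + 1) / 2 + v1[n - 1]
print([round(x, 4) for x in v1[1:]])   # 0.5, 0.625, 0.6953, 0.7417, 0.7751, 0.8004
print([round(x, 4) for x in v2[2:]])   # 1.0, 1.1953, 1.3203, 1.4091, 1.4761

ys = [0.1081, 0.6987, 0.1483, 0.4123, 0.8968, 0.7242]
# First stop: y_m >= v_{N-m,2} - v_{N-m,1}; second stop: y_m >= v_{N-m,1}.
m1 = next(m for m in range(1, N) if m == N - 1 or ys[m - 1] >= v2[N - m] - v1[N - m])
m2 = next(m for m in range(m1 + 1, N + 1) if m == N or ys[m - 1] >= v1[N - m])
print(m1, m2, ys[m1 - 1] + ys[m2 - 1])           # 2 5 1.5955
```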

3. Computing $v_{n,k}$ Behaviour

We establish a recurrence result similar to the one for the single-stopping case. We note that this relation for $v_{n,k}$ will now be a second-order relation. For a convenient evaluation of the expectation, we use the fact that, for any continuous integrable random variable X with cdf $F(x)$, the expectation can be given by
$$E(X) = \int_{0}^{\infty} \left(1 - F(x)\right) dx - \int_{-\infty}^{0} F(x)\, dx.$$
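As a quick numerical illustration of this identity (a sketch using a logistic distribution with mean 1.5, an arbitrary test case not taken from the paper):

```python
import math

# Sketch: numerical check of E(X) = ∫_0^∞ (1 - F(x)) dx - ∫_{-∞}^0 F(x) dx for a
# logistic distribution with mean mu = 1.5 (arbitrary test case; its cdf is simple).
mu = 1.5
F = lambda x: 1 / (1 + math.exp(-(x - mu)))

dx = 1e-3
upper = sum((1 - F(i * dx)) * dx for i in range(0, 60_000))   # ∫_0^60 (1 - F(x)) dx
lower = sum(F(-i * dx) * dx for i in range(1, 60_000))        # ∫_{-60}^0 F(x) dx
print(upper - lower)   # ≈ 1.5, the mean of the distribution
```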
Theorem 2.
Let Y be an integrable random variable drawn from a continuous probability density function (pdf) $f(y)$ with survivor function $h(y) = 1 - F(y)$. The value of a sequence with $n+1$ steps and k stops remaining is given by
$$v_{n+1,k} = v_{n,k} + \int_{v_{n,k} - v_{n,k-1}}^{\infty} h(y)\, dy.$$
Proof. 
For ease of notation, we let $v = v_{n,k} - v_{n,k-1}$. Then, by definition, we have
$$v_{n+1,k} = E(\max\{y_{N-n} + v_{n,k-1},\, v_{n,k}\}) = v_{n,k-1} + E(\max\{y_{N-n},\, v_{n,k} - v_{n,k-1}\}) = v_{n,k-1} + E(\max\{y_{N-n},\, v\}),$$
where the last expectation, using (3), is given by
$$-\int_{-\infty}^{0} P(\max(Y,v) \le y)\, dy + \int_{0}^{v} P(\max(Y,v) > y)\, dy + \int_{v}^{\infty} P(\max(Y,v) > y)\, dy = 0 + \int_{0}^{v} 1\, dy + \int_{v}^{\infty} P(Y > y)\, dy = v + \int_{v}^{\infty} h(y)\, dy = v_{n,k} - v_{n,k-1} + \int_{v_{n,k} - v_{n,k-1}}^{\infty} h(y)\, dy.$$
Substituting this result, noting that the $v_{n,k-1}$ terms cancel, we obtain the required result. □
We note that if $f(y)$ has bounded support in the positive direction, such that $f(y) = 0$ for $y > y_{\max}$, it follows from above that
$$v_{n+1,k} = v_{n,k} + \int_{v_{n,k} - v_{n,k-1}}^{y_{\max}} h(y)\, dy.$$
By the controlling factor method, we also have that $v_{n+1,k} - v_{n,k} \sim (v_{n,k})'$, where the prime denotes differentiation with respect to n; this can be combined with the previous integral to establish the following:
$$(v_{n,k})' \sim \int_{v_{n,k} - v_{n,k-1}}^{\infty} h(y)\, dy.$$
This may be differentiated on both sides to obtain the asymptotic relation for the second derivative:
$$(v_{n,k})'' \sim -h(v_{n,k} - v_{n,k-1}) \cdot (v_{n,k} - v_{n,k-1})'.$$
This expression can now be rearranged for $h(v_{n,k} - v_{n,k-1})$ to give
$$h(v_{n,k} - v_{n,k-1}) \sim -\frac{(v_{n,k})''}{(v_{n,k} - v_{n,k-1})'}.$$
Depending on the asymptotic nature of $v_{n,k}$, $v_{n,k-1}$ and their derivatives, this result can be applied by direct substitution. In other scenarios, the derivative expressions may not yield useful expressions, and the behaviour of $h(v_{n,k} - v_{n,k-1})$ can be analysed directly without this result. We will provide variations of this in the subsequent example calculations.
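This relation is easy to probe numerically. The sketch below is a sanity check, specialised to the standard exponential distribution, for which $h(y) = e^{-y}$ and the Theorem 2 recurrence becomes $v_{n+1,k} = v_{n,k} + e^{-(v_{n,k} - v_{n,k-1})}$; finite differences stand in for the derivatives, and both quantities turn out to be close to $2/n$, consistent with the exponential-tail example later in the paper.

```python
import math

# Sanity check of h(v_{n,k} - v_{n,k-1}) ~ -(v_{n,k})'' / (v_{n,k} - v_{n,k-1})'
# for the standard exponential distribution, h(y) = exp(-y), with k = 2.
M = 200_000
v1 = [0.0] * (M + 1)
v2 = [0.0] * (M + 1)
v1[1] = 1.0                          # v_{1,1} = E(y) for Exp(1)
v2[2] = 2.0                          # v_{2,2} = v_{1,1} + E(y): both stops forced
for n in range(1, M):
    v1[n + 1] = v1[n] + math.exp(-v1[n])
    if n >= 2:
        v2[n + 1] = v2[n] + math.exp(-(v2[n] - v1[n]))

n = M - 1
h_exact = math.exp(-(v2[n] - v1[n]))                      # h evaluated directly
second_diff = v2[n + 1] - 2 * v2[n] + v2[n - 1]           # ~ (v_{n,2})''
first_diff = (v2[n + 1] - v1[n + 1]) - (v2[n] - v1[n])    # ~ (v_{n,2} - v_{n,1})'
print(h_exact, -second_diff / first_diff)                 # both close to 2/n
print(2 / n)
```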

Example Calculations

We illustrate the application of these ideas to some common distributions, such as the uniform and exponential distributions. However, the differential equations that arise from the multiple stopping problem are much harder to solve than in the single-stopping case; some of them have no closed-form solution.
Example 2.
The uniform distribution is given by $f(y) = \frac{1}{b-a}$ on $y \in [a,b]$, where $b > a$.
For the differential equation governing the large-n asymptotic behaviour, we have that $h(y) = \frac{b-y}{b-a}$. Defining $v := v_{n,2} - v_{n,1}$ and rearranging the asymptotic relation for $(v_{n,2})'$, we obtain
$$v' \sim \int_{v}^{b} \frac{b-y}{b-a}\, dy - (v_{n,1})'.$$
From [1], we have the asymptotic relation $v_{n,1} \sim b - \frac{2(b-a)}{n}$, so that $(v_{n,1})' \sim \frac{2(b-a)}{n^2}$, which can be directly substituted into the above equation:
$$v' \sim \frac{(b-v)^2}{2(b-a)} - \frac{2(b-a)}{n^2}.$$
We can solve this formal differential equation to obtain
$$v \sim b + (a-b)\,\frac{(1+\sqrt{5})\, n^{\sqrt{5}} + (1-\sqrt{5})\, c}{n\left(n^{\sqrt{5}} + c\right)} \sim b + \frac{(a-b)(1+\sqrt{5})}{n},$$
where c arises as a constant of integration but may be dropped, since it only contributes a sub-dominant term. We now replace v with $v_{n,2} - v_{n,1}$, substitute our asymptotic relation for $v_{n,1}$, and rearrange for $v_{n,2}$ to obtain
$$v_{n,2} \sim 2b - \frac{(3+\sqrt{5})(b-a)}{n} \quad \text{as } n \to \infty.$$
We may notice, in general, that whenever v satisfies a relation of the form
$$v' \sim \frac{(b-v)^2}{2(b-a)} - \frac{\Delta (b-a)}{n^2},$$
where Δ is some positive constant, we obtain the asymptotic relation
$$v \sim b + \frac{(a-b)\left(1 + \sqrt{1+2\Delta}\right)}{n}.$$
This can be used to generalise the asymptotic behaviour of $v_{n,k}$ for the uniform distribution.
Theorem 3.
Consider $X_1, X_2, \ldots, X_N$ to be independent, identically distributed uniform random variables on $[a,b]$, $b > a$. The reward sequence $v_{n,k}$ follows the asymptotic relation
$$v_{n,k} \sim kb - \frac{\Delta_k (b-a)}{n} \quad \text{as } n \to \infty,$$
where $\Delta_{k+1} = 1 + \Delta_k + \sqrt{1+2\Delta_k}$ for $k \ge 0$ and $\Delta_0 = 0$.
Proof. 
We have shown this to be true for $k = 1$ and $k = 2$. Now, assume that
$$v_{n,k} \sim kb - \frac{\Delta_k (b-a)}{n} \quad \text{as } n \to \infty.$$
Defining $v = v_{n,k+1} - v_{n,k}$, we have
$$v' \sim \frac{(b-v)^2}{2(b-a)} - \frac{\Delta_k (b-a)}{n^2},$$
which, from (10) and (11), has the asymptotic solution
$$v \sim b + \frac{(a-b)\left(1 + \sqrt{1+2\Delta_k}\right)}{n}.$$
To conclude the proof, we rearrange for $v_{n,k+1}$ to obtain
$$v_{n,k+1} \sim b(k+1) - \frac{\left(1 + \Delta_k + \sqrt{1+2\Delta_k}\right)(b-a)}{n} = b(k+1) - \frac{\Delta_{k+1}(b-a)}{n}. \qquad \square$$
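Theorem 3 is easy to check numerically. The following sketch assumes $a = 0$, $b = 1$ and uses only the exact Theorem 2 recurrence together with the forced-stop boundary $v_{l,l} = l/2$ (no asymptotic input), comparing $v_{n,k}$ at a large n with $kb - \Delta_k(b-a)/n$ for $k = 1, 2, 3$.

```python
import math

# Sketch: exact uniform(0,1) values v_{n,l} vs the asymptotic l*b - Delta_l*(b-a)/n
# of Theorem 3 (a = 0, b = 1).  Recursion from Theorem 2 with h(y) = 1 - y:
#   v_{n+1,l} = v_{n,l} + (1 - (v_{n,l} - v_{n,l-1}))**2 / 2,
# started from the forced-stop value v_{l,l} = l/2 (every remaining step must stop).
K, M = 3, 100_000
v = [[0.0] * (M + 1) for _ in range(K + 1)]   # v[l][n] = v_{n,l}; v[0][n] = 0
for l in range(1, K + 1):
    v[l][l] = l / 2
    for n in range(l, M):
        v[l][n + 1] = v[l][n] + (1 - (v[l][n] - v[l - 1][n])) ** 2 / 2

delta = [0.0]
for _ in range(K):
    delta.append(1 + delta[-1] + math.sqrt(1 + 2 * delta[-1]))

for l in range(1, K + 1):
    print(l, v[l][M], l - delta[l] / M)       # exact value vs asymptotic prediction
```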
We noted in [1] that the behaviour of $v_{n,1}$ was identical for the family of distributions with exponential tails. The next example seeks to unify such distributions, corresponding to $\alpha = 1$ in [1], in the multiple stopping scenario.
Example 3.
A continuous probability density function $f(y)$ is given with a survivor function that, for sufficiently large y, satisfies
$$\left| h(y) - \gamma e^{-y/\beta} \right| < \frac{e^{-y/\beta}}{y^{\Delta}}$$
for positive Δ, where β and γ are positive constants. Assume that each of the terms in the sequence of reward values $v_{n,1}, \ldots, v_{n,k}$ increases without bound.
We obtain the ordinary differential equation
$$\frac{dv_{n,k}}{dn} \sim \int_{v_{n,k} - v_{n,k-1}}^{\infty} h(y)\, dy = \gamma\beta\,\Gamma\!\left(1, \frac{v_{n,k} - v_{n,k-1}}{\beta}\right) + g^*(n),$$
where Γ represents the upper incomplete gamma function, and
$$|g^*(n)| < \int_{v_{n,k} - v_{n,k-1}}^{\infty} \frac{e^{-y/\beta}}{y^{\Delta}}\, dy < \frac{1}{(v_{n,k} - v_{n,k-1})^{\Delta}} \int_{v_{n,k} - v_{n,k-1}}^{\infty} e^{-y/\beta}\, dy,$$
which is sub-dominant in the asymptotic differential equation. The differential equation is thus approximated by
$$\frac{dv_{n,k}}{dn} \sim \gamma\beta\,\Gamma\!\left(1, \frac{v_{n,k} - v_{n,k-1}}{\beta}\right).$$
This gives
$$\frac{dv_{n,k}}{dn} \sim \gamma\beta\, e^{-(v_{n,k} - v_{n,k-1})/\beta} \quad \text{as } n \to \infty.$$
We have, from [1], $v_{n,1} \sim \beta \log(n)$, and so the general case for $k \ge 2$ may be presented by mathematical induction.
Theorem 4.
Let $X_1, X_2, \ldots, X_N$ be random variables from a distribution $f(y)$ whose survivor function, for sufficiently large y, satisfies
$$\left| h(y) - \gamma e^{-y/\beta} \right| < \frac{e^{-y/\beta}}{y^{\Delta}}$$
for positive Δ, where β and γ are positive constants. Assume that each of the terms in the sequence of reward values $v_{n,1}, \ldots, v_{n,k}$ increases without bound. Then, the asymptotic behaviour of $v_{n,k}$ is given by
$$v_{n,k} \sim \beta \log(n^k) \quad \text{as } n \to \infty.$$
Proof. 
From (15), the behaviour of $v_{n,k}$ satisfies
$$\frac{dv_{n,k}}{dn} \sim \gamma\beta\, e^{-v_{n,k}/\beta}\, e^{v_{n,k-1}/\beta}.$$
We verified the claim for $k = 1$ in [1]. We now assume that $v_{n,k} \sim \beta \log(n^k)$ as $n \to \infty$ and use this to prove the same form for $v_{n,k+1}$. We write the asymptotic differential equation $(v_{n,k+1})' \sim \gamma\beta\, e^{-(v_{n,k+1} - v_{n,k})/\beta}$ and substitute our assumed asymptotic behaviour for $v_{n,k}$ to obtain
$$(v_{n,k+1})' \sim \gamma\beta\, e^{-\left(v_{n,k+1} - \beta\log(n^k)\right)/\beta} = \gamma\beta\, n^k\, e^{-v_{n,k+1}/\beta},$$
which is a separable differential equation with solution
$$v_{n,k+1} \sim \beta \log(n^{k+1}),$$
as required. □
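As with the uniform case, Theorem 4 can be sanity-checked numerically. The sketch below is again specialised to the standard exponential distribution, for which $\beta = \gamma = 1$, the exact recurrence is $v_{n+1,l} = v_{n,l} + e^{-(v_{n,l} - v_{n,l-1})}$, and the forced-stop boundary is $v_{l,l} = l$.

```python
import math

# Sketch: exact Exp(1) values v_{n,l} vs the asymptotic beta*log(n^l) of Theorem 4
# (beta = gamma = 1).  Forced-stop boundary: v_{l,l} = l, since E(y) = 1.
K, M = 3, 100_000
v = [[0.0] * (M + 1) for _ in range(K + 1)]
for l in range(1, K + 1):
    v[l][l] = float(l)
    for n in range(l, M):
        v[l][n + 1] = v[l][n] + math.exp(-(v[l][n] - v[l - 1][n]))

for l in range(1, K + 1):
    print(l, v[l][M], l * math.log(M), v[l][M] / (l * math.log(M)))
# The ratio in the last column drifts towards 1 as M grows; convergence is slow,
# since the neglected terms are only logarithmically smaller than the leading order.
```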

4. Calculating the Optimal Expectation

In this section, we continue the ideas from the single-stopping calculation to calculate the expectations of the multiple stopping rules. We proceed to find the asymptotic behaviour of the expected value of $\tau_1^*$, the first stopping time. We can then find the remaining expectations in an inductive fashion under certain conditions. We now extend some of the previous notation to reflect the multiple reward sequences, as well as the k stopping variables. Let $\tau_1^*, \tau_2^*, \ldots, \tau_k^*$ denote the 1st, 2nd, …, kth stopping times, respectively, and let $w_{i,j} = P(y < v_{i,j})$, $v_{i,j} = v_{N-j,k-i+1} - v_{N-j,k-i}$, $i = 1, \ldots, k$, $j = 1, \ldots, N$.

5. An Asymptotic Equation for $E(\tau_1^*)$

By recalling the notation from the previous section, we obtain $E(\tau_1^*)$ through
$$E(\tau_1^*) = \sum_{n=1}^{N-k+1} n\, P(y_1 < v_{1,1}, \ldots, y_{n-1} < v_{1,n-1},\ y_n \ge v_{1,n}) = (1 - w_{1,1}) + 2\, w_{1,1}(1 - w_{1,2}) + \cdots + (N-k+1)\, w_{1,1} w_{1,2} \cdots w_{1,N-k} = 1 + \sum_{n=1}^{N-k} \prod_{j=n}^{N-k} w_{1,N-k+1-j},$$
We split the summation for $E(\tau_1^*)$ at a value $k^*$, where $0 \le k^* \le N$:
$$E(\tau_1^*) = 1 + \sum_{n=1}^{k^*-1} \prod_{j=n}^{N-k} w_{1,N-1-j} + \sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} w_{1,N-1-j},$$
where we apply the fact that $0 < w_{1,N-1-j} < 1$ to obtain a bound for the first summation term:
$$0 < \sum_{n=1}^{k^*-1} \prod_{j=n}^{N-k} w_{1,N-1-j} < k^* - 1.$$
For the second summation term, as $k^*$ is large in the limit as $N \to \infty$, we may use the asymptotic approximations for $v_{n,k}$ obtained in the previous section:
$$\sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} w_{1,N-1-j} = \sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} \left(1 - h(v_{1,N-1-j})\right) \sim \sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} \left(1 + \frac{(v_{j+1,k})''}{(v_{j+1,k} - v_{j+1,k-1})'}\right).$$
For many distributions, this can be simplified by using the large-n asymptotics for $v_{n,k}$ and its derivatives, or by obtaining an asymptotic expression for $h(v_{n,k} - v_{n,k-1})$ through other means. In the case where $h(v_{1,N-1-j}) \sim \frac{\lambda}{j}$, we have from [1] that
$$E(\tau_1^*) \sim \frac{N}{\lambda + 1} \quad \text{as } N \to \infty.$$
Example 4 (Uniform Distribution for k stops).
For simplicity, we first consider the double stopping problem ($k = 2$). From Example 2, we have that
$$v_{n,1} \sim b - \frac{2(b-a)}{n} \quad \text{and} \quad v_{n,2} \sim 2b - \frac{(3+\sqrt{5})(b-a)}{n} \quad \text{as } n \to \infty.$$
This gives $v_{n,2} - v_{n,1} \sim b - \frac{(b-a)(1+\sqrt{5})}{n}$, and thus $h(v_{n,2} - v_{n,1}) \sim \frac{1+\sqrt{5}}{n}$.
We then have that
$$E(\tau_1^*) \sim \sum_{n=k^*}^{N-2} \prod_{j=n}^{N-2} \left(1 - h(v_{1,N-1-j})\right) \sim \sum_{n=k^*}^{N-2} \prod_{j=n}^{N-2} \left(1 - \frac{1+\sqrt{5}}{j}\right) \sim \frac{N}{2+\sqrt{5}}.$$
For the general result with $k > 2$ stops for the uniform distribution, we apply the asymptotic behaviour of $h(v_{n,k} - v_{n,k-1})$ to obtain
$$E(\tau_1^*) \sim \sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} \left(1 - \frac{1+\sqrt{1+2\Delta_{k-1}}}{j}\right) \sim \frac{N}{2+\sqrt{1+2\Delta_{k-1}}}.$$
For $k = 1$, conveniently $\Delta_0 = 0$, and this retrieves $N/3$. For $k = 2$, we have $\Delta_1 = 2$, and so this retrieves $N/(2+\sqrt{5})$, consistent with our previous results.
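The first-stop expectation can also be evaluated exactly from the thresholds, without any asymptotics, which gives a direct check on $N/(2+\sqrt{5})$. The sketch below assumes $U(0,1)$ with $k = 2$; the recursions are those of Example 1, and the forced stop at observation $N-1$ is handled separately.

```python
import math

# Sketch: exact E(tau_1^*) for double stopping on U(0,1) vs the asymptotic N/(2+sqrt(5)).
# First-stop thresholds are v_{N-m,2} - v_{N-m,1}; for U(0,1), w_{1,m} = P(y < thr) = thr.
def expected_first_stop(N):
    v1 = [0.0] * N
    v2 = [0.0] * N
    for n in range(1, N):
        v1[n] = (1 + v1[n - 1] ** 2) / 2
    for n in range(2, N):
        if n == 2:
            v2[n] = v1[1] + 0.5                      # both remaining stops are forced
        else:
            d = v2[n - 1] - v1[n - 1]
            v2[n] = (d ** 2 + 1) / 2 + v1[n - 1]
    expectation, p_no_stop = 0.0, 1.0
    for m in range(1, N - 1):                        # stop may occur at m = 1, ..., N-2
        w = v2[N - m] - v1[N - m]
        expectation += m * p_no_stop * (1 - w)
        p_no_stop *= w
    expectation += (N - 1) * p_no_stop               # forced first stop at m = N - 1
    return expectation

for N in (100, 1000, 10000):
    print(N, expected_first_stop(N), N / (2 + math.sqrt(5)))
# The ratio of the two columns tends to 1 as N grows.
```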
Example 5 (Distributions with an Exponential Tail (k stops)).
We once again consider a probability density function $f(y)$ with a survivor function that, for sufficiently large y, satisfies
$$\left| h(y) - \gamma e^{-y/\beta} \right| < \frac{e^{-y/\beta}}{y^{\Delta}},$$
where the additional conditions are described in Example 3.
We found that the sequences $v_{n,k}$ satisfy the asymptotic relation $v_{n,k} \sim \beta \log(n^k)$, and so
$$v_{n,k} - v_{n,k-1} \sim \beta \log(n^k) - \beta \log(n^{k-1}) = \beta \log(n).$$
From this we obtain asymptotic relations for the derivatives:
$$(v_{n,k} - v_{n,k-1})' \sim \frac{\beta}{n} \quad \text{and} \quad (v_{n,k})'' \sim -\frac{\beta k}{n^2} \quad \text{as } n \to \infty.$$
Hence, we obtain
$$E(\tau_1^*) \sim \sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} \left(1 + \frac{(v_{j+1,k})''}{(v_{j+1,k} - v_{j+1,k-1})'}\right) \sim \sum_{n=k^*}^{N-k} \prod_{j=n}^{N-k} \left(1 - \frac{k}{j}\right) \sim \frac{N}{k+1}.$$
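The same exact computation as in the previous example can be run for a concrete member of this family. The sketch below takes the standard exponential distribution with $k = 3$ stops, so that the prediction above is $E(\tau_1^*) \sim N/4$; the values $v_{n,l}$ come from the Theorem 2 recurrence, and the first-stop thresholds are $v_{N-m,3} - v_{N-m,2}$.

```python
import math

# Sketch: exact E(tau_1^*) for triple stopping (k = 3) on Exp(1) versus N / (k + 1).
def expected_first_stop(N, k=3):
    v = [[0.0] * N for _ in range(k + 1)]
    for l in range(1, k + 1):
        v[l][l] = float(l)                           # forced stops: l * E(y) = l
        for n in range(l, N - 1):
            v[l][n + 1] = v[l][n] + math.exp(-(v[l][n] - v[l - 1][n]))
    expectation, p_no_stop = 0.0, 1.0
    for m in range(1, N - k + 1):                    # stop may occur at m = 1, ..., N-k
        thr = v[k][N - m] - v[k - 1][N - m]
        w = 1 - math.exp(-thr)                       # w_{1,m} = P(y < threshold)
        expectation += m * p_no_stop * (1 - w)
        p_no_stop *= w
    expectation += (N - k + 1) * p_no_stop           # forced first stop at m = N-k+1
    return expectation

for N in (200, 2000, 20000):
    print(N, expected_first_stop(N), N / 4)
# The ratio of the two columns tends to 1 as N grows.
```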
Provided that $E(\tau_1^*)$ is asymptotically linear in N in this way, we may make use of the linearity of expectation to obtain convenient conditional formulae that are not as complicated as those encountered in the previous section.

6. An Inductive Approach for $E(\tau_j^*)$, $j > 1$

Due to the independence of the observations, it is natural to view the expectation of the $(j+1)$th stopping time as a function of only the previous stopping time. We therefore investigate the properties of the $(j+1)$th stopping time when conditioned on the jth. We expect to only need to add the expected number of additional observations required to stop once more in a ‘reduced’ optimal stopping problem. We now introduce the notation $\tau_{j,N,k}^*$ to allow for more flexible interactions between stopping times.
Definition 1.
Let $\tau_{j,N,k}^*$ denote the jth stopping time (out of k) in the optimal stopping problem with N observations.
Here, $\tau_{j,N,k}^*$ denotes the $\tau_j^*$ used in the previous section, and $\tau_{1,N,1}^*$ corresponds to the $\tau^*$ of the single-stopping problem.
Theorem 5.
Let $X_1, X_2, \ldots, X_N$ be independent and identically distributed random variables for which the expectations of the optimal stopping times $\{\tau_{j,N,k}^*\}$ exist. Suppose further that the first stopping time out of k stops has asymptotic expectation $E(\tau_{1,N,k}^*) \sim \frac{N}{\lambda_{1,k}}$ for some constant $\lambda_{1,k}$. Then, the following relation holds:
$$E(\tau_{j+1,N,k}^*) \sim E(\tau_{j,N,k}^*) + \frac{N - E(\tau_{j,N,k}^*)}{\lambda_{1,k-j}},$$
where $\lambda_{1,k-j}$ is the corresponding positive constant for a problem with $k-j$ stops.
Proof. 
We first show that the conditional expectation $E(\tau_{j+1,N,k}^* \mid \tau_{j,N,k}^* = m)$ is given by
$$E\bigl(\min\{m_{j+1} : m < m_{j+1} \le N-k+j+1,\ y_{m_{j+1}} \ge v_{N-m_{j+1},k-j} - v_{N-m_{j+1},k-j-1}\}\bigr)$$
$$= m + E\bigl(\min\{m_1 : 1 \le m_1 \le (N-m)-(k-j)+1,\ y_{m_1} \ge v_{(N-m)-m_1,k-j} - v_{(N-m)-m_1,k-j-1}\}\bigr) = m + E(\tau_{1,N-m,k-j}^*),$$
where the last equality follows directly from the definition of $\tau_1^*$ in the previous section. By the assumed form of the expectation, we thus have that
$$E(\tau_{j+1,N,k}^* \mid \tau_{j,N,k}^*) \sim \tau_{j,N,k}^* + \frac{N - \tau_{j,N,k}^*}{\lambda_{1,k-j}},$$
and so, through the linearity of expectation, as well as the law of total expectation, we finally obtain
$$E(\tau_{j+1,N,k}^*) \sim E(\tau_{j,N,k}^*) + \frac{N - E(\tau_{j,N,k}^*)}{\lambda_{1,k-j}}. \qquad \square$$
This has a reasonable physical interpretation: we expect the expectation of the $(j+1)$th stopping time to be the expectation of the jth stopping time plus the additional time needed to stop once more in the revised stopping problem. This revised stopping problem has a reduced number of observations, $N - E(\tau_{j,N,k}^*)$ on average, as well as $k - j$ stops remaining, since we have already stopped j times out of the original k. A consequence of this theorem is that the asymptotic expectations may all be considered linear in N.

7. Asymptotic Equations for $E(\tau_{j,N,k}^*)$

We now demonstrate how the ideas discussed in the previous section can be applied to obtain all of the multiple stopping times for some classes of distribution. Simulations were also conducted to support the asymptotic calculations in this study. Figure 1 compares the large-N asymptotic predictions for the expected duration with simulations for the standard uniform distribution under the optimal double stopping rule, and Figure 2 shows a similar comparison under a triple stopping rule for the exponential and gamma distributions. In this section, we adopt the previous notation $\tau_{j,N,k}^*$ for the jth stopping time out of k for N observations.
These results were verified by simulation (see Figure 1), which shows the realised expectations converging to the asymptotic predictions. This procedure can be extended to calculate further results when more stopping times are permitted. Simulation results for the exponential and gamma distributions (see Figure 2) further support these asymptotic calculations.
Example 6 (Double Stopping on the Uniform Distribution).
For the uniform distribution, we obtained an asymptotic expression for the first stopping time out of k stops:
$$E(\tau_{1,N,k}^*) \sim \frac{N}{2+\sqrt{1+2\Delta_{k-1}}},$$
where $\Delta_k$ is defined through the recursive scheme
$$\Delta_{k+1} = 1 + \Delta_k + \sqrt{1+2\Delta_k}, \quad k = 0, 1, 2, 3, \ldots, \qquad \Delta_0 = 0.$$
We can then apply Equation (21), for the example of the double stopping case, recursively to obtain $E(\tau_{1,N,2}^*) \sim \frac{N}{2+\sqrt{5}}$ and $E(\tau_{2,N,2}^*) \sim \frac{(2\sqrt{5}-3)N}{3}$.
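Both predictions are easy to probe by simulation. The sketch below is a minimal Monte Carlo under the same $U(0,1)$, $k = 2$ setup (not the simulation code used for Figure 1): it plays the optimal double-stopping rule repeatedly and compares the sample means of $\tau_1^*$ and $\tau_2^*$ with $N/(2+\sqrt{5}) \approx 0.236N$ and $(2\sqrt{5}-3)N/3 \approx 0.491N$.

```python
import math
import random

# Sketch: Monte Carlo check of E(tau_1^*) and E(tau_2^*) for double stopping on U(0,1).
def thresholds(N):
    v1, v2 = [0.0] * N, [0.0] * N
    for n in range(1, N):
        v1[n] = (1 + v1[n - 1] ** 2) / 2
    for n in range(2, N):
        v2[n] = v1[1] + 0.5 if n == 2 else (v2[n - 1] - v1[n - 1]) ** 2 / 2 + 0.5 + v1[n - 1]
    return v1, v2

def play_once(N, v1, v2):
    ys = [random.random() for _ in range(N)]
    # First stop: y_m >= v_{N-m,2} - v_{N-m,1} (forced at N-1); second: y_m >= v_{N-m,1}.
    m1 = next(m for m in range(1, N) if m == N - 1 or ys[m - 1] >= v2[N - m] - v1[N - m])
    m2 = next(m for m in range(m1 + 1, N + 1) if m == N or ys[m - 1] >= v1[N - m])
    return m1, m2

N, runs = 1000, 5000
v1, v2 = thresholds(N)
random.seed(1)
s1 = s2 = 0
for _ in range(runs):
    m1, m2 = play_once(N, v1, v2)
    s1, s2 = s1 + m1, s2 + m2
print(s1 / runs, N / (2 + math.sqrt(5)))           # sample mean of tau_1^* vs asymptotic
print(s2 / runs, (2 * math.sqrt(5) - 3) * N / 3)   # sample mean of tau_2^* vs asymptotic
```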
Example 7 (Multiple Stopping on Distributions with Exponential Tails).
We saw that the expressions can become somewhat unwieldy for the uniform distribution. However, the structure sometimes behaves nicely enough to lead to a closed-form result that need not be evaluated recursively. This is true for the class of distributions outlined in Example 3. We prove the following theorem for the general asymptotic expectation:
Theorem 6.
Let $X_1, X_2, \ldots, X_N$ be independent and identically distributed random variables. Define $\tau_{j,N,k}^*$ to be the jth stopping time out of k stops for the sequence of N observations. If $E(\tau_{1,N,k}^*) \sim \frac{N}{k+1}$, then we have that
$$E(\tau_{j,N,k}^*) \sim \frac{jN}{k+1} \quad \text{for } j = 1, 2, \ldots, k.$$
Proof. 
We see that the base case $j = 1$ is in agreement; we now proceed with the inductive step:
$$E(\tau_{j+1,N,k}^*) \sim E(\tau_{j,N,k}^*) + \frac{N - E(\tau_{j,N,k}^*)}{k-j+1} \quad (\text{from Theorem 5}) = \frac{jN}{k+1} + \frac{N - \frac{jN}{k+1}}{k-j+1} = \frac{(j+1)N}{k+1}. \qquad \square$$
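As a small illustration, iterating the Theorem 5 recursion with $k = 3$ and $\lambda_{1,l} = l+1$ (the exponential-tail case shown in Figure 2) reproduces the predictions $N/4$, $N/2$ and $3N/4$ for the three stops:

```python
# Sketch: iterate E(tau_{j+1}) ~ E(tau_j) + (N - E(tau_j)) / (k - j + 1)  (Theorem 5)
# for the exponential-tail case, where lambda_{1,k-j} = k - j + 1 and E(tau_1) ~ N/(k+1).
N, k = 1000.0, 3
e = N / (k + 1)
for j in range(1, k):
    print(j, e)                      # 250.0, then 500.0
    e = e + (N - e) / (k - j + 1)
print(k, e)                          # 750.0 = 3N/4, matching Theorem 6
```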

8. Conclusions

In this paper, we have derived asymptotics of the multiple optimal stopping times for sequences of independent, identically distributed continuous random variables by extending the methodology that we developed for the single-stopping case in [1]. It is anticipated that a similar class of results can be established for other classes of distribution, although it is not clear whether the resulting differential equations are easily solvable.
Asymptotic calculations were performed for a number of probability distributions, on both bounded and unbounded domains. The asymptotic properties for $k \ge 2$ stops were then obtained inductively. Numerical simulations were subsequently performed to calculate the expectation of the optimal stopping rule for a range of values of N from 10 to 1000. In each case, the simulated results tended towards the asymptotic prediction in the large-N limit, validating the asymptotic and inductive approach.

Author Contributions

Conceptualisation, H.N.E., C.J.L. and G.Y.S.; methodology, H.N.E., C.J.L. and G.Y.S.; simulation, H.N.E.; writing—original draft preparation, H.N.E.; writing—review and editing, H.N.E. and G.Y.S.; visualisation, H.N.E. and C.J.L.; project administration, G.Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets simulated during the study are available from H.E. on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Entwistle, H.N.; Lustri, C.J.; Sofronov, G.Y. On asymptotics of optimal stopping times. Mathematics 2022, 10, 194. [Google Scholar] [CrossRef]
  2. DeGroot, M.H. Optimal Statistical Decisions; John Wiley & Sons: Hoboken, NJ, USA, 2005; Volume 82. [Google Scholar]
  3. Ferguson, T.S. Who solved the secretary problem? Stat. Sci. 1989, 4, 282–296. [Google Scholar] [CrossRef]
  4. Freeman, P.R. The Secretary Problem and Its Extensions: A Review. Int. Stat. Rev. 1983, 51, 189–206. [Google Scholar] [CrossRef]
  5. Kubicka, E.M.; Kubicki, G.; Kuchta, M.; Morayne, M. Secretary problem with hidden information; searching for a high merit candidate. Adv. Appl. Math. 2023, 144, 102468. [Google Scholar] [CrossRef]
  6. Presman, E.L.; Sonin, I.M. The best choice problem for a random number of objects. Theory Probab. Its Appl. 1973, 17, 657–668. [Google Scholar] [CrossRef]
  7. Chow, Y.S.; Robbins, H.; Siegmund, D. Great Expectations: The Theory of Optimal Stopping; Houghton Mifflin: Boston, MA, USA, 1971. [Google Scholar]
  8. Ernst, M.; Szajowski, K.J. Average number of candidates surveyed by the headhunter in the recruitment. Math. Appl. 2021, 49, 31–53. [Google Scholar] [CrossRef]
  9. Mazalov, V.V.; Peshkov, N.V. On asymptotic properties of optimal stopping time. Theory Probab. Its Appl. 2004, 48, 549–555. [Google Scholar] [CrossRef]
  10. Demers, S. Expected duration of the no-information minimum rank problem. Stat. Probab. Lett. 2021, 168, 108950. [Google Scholar] [CrossRef]
  11. Gilbert, J.P.; Mosteller, F. Recognizing the Maximum of a Sequence. J. Am. Stat. Assoc. 1966, 61, 35–73. [Google Scholar] [CrossRef]
  12. Yasuda, M. Asymptotic Results for the Best-Choice Problem with a Random Number of Objects. Appl. Probab. 1984, 21, 521–536. [Google Scholar] [CrossRef]
  13. Yeo, G.F. Duration of a secretary problem. J. Appl. Probab. 1997, 34, 556–558. [Google Scholar] [CrossRef]
  14. Gnedin, A.V. On the Full Information Best-Choice Problem. J. Appl. Probab. 1996, 33, 678–687. [Google Scholar] [CrossRef]
  15. Moser, L. On a problem of Cayley. Scr. Math. 1956, 22, 289–292. [Google Scholar]
  16. Kennedy, D.P.; Kertz, R.P. Limit Theorems for Threshold-Stopped Random Variables with Applications to Optimal Stopping. Adv. Appl. Probab. 1990, 22, 396–411. [Google Scholar] [CrossRef]
  17. Kennedy, D.P.; Kertz, R.P. The asymptotic behavior of the reward sequence in the optimal stopping of iid random variables. Ann. Probab. 1991, 19, 329–341. [Google Scholar] [CrossRef]
  18. Bayón, L.; Fortuny Ayuso, P.; Grau, J.; Oller-Marcén, A.; Ruiz, M. A new method for computing asymptotic results in optimal stopping problems. Bull. Malays. Math. Sci. Soc. 2023, 46, 46. [Google Scholar] [CrossRef]
  19. Pearce, C.E.; Szajowski, K.; Tamaki, M. Duration problem with multiple exchanges. Numer. Algebr. Control Optim. 2012, 2, 333. [Google Scholar]
  20. Haggstrom, G.W. Optimal sequential procedures when more than one stop is required. Ann. Math. Stat. 1967, 38, 1618–1626. [Google Scholar] [CrossRef]
Figure 1. Comparison of large-N asymptotic predictions for the expectation of the optimal double stopping rule for the standard uniform distribution ($a = 0$, $b = 1$).
Figure 2. Comparison of large-N asymptotic predictions for the expectation of the optimal triple stopping rule ($k = 3$) for the standard exponential distribution ($\beta = 1$) and the gamma distribution ($\alpha = 3$, $\beta = 2$). The rows correspond to the first, second, and third stops, respectively. The first column is for the exponential distribution, and the second for the gamma distribution.
Table 1. Values for the standard uniform distribution.

n                      1        2        3        4        5        6
$v_{n,1}$           0.5000   0.6250   0.6953   0.7417   0.7751   0.8004
$v_{n,2}$              —     1.0000   1.1953   1.3203   1.4091   1.4761
$v_{n,2} - v_{n,1}$    —     0.3750   0.5000   0.5786   0.6340   0.6757
