Article

Estimation of the Optimal Threshold Policy in a Queue with Heterogeneous Servers Using a Heuristic Solution and Artificial Neural Networks

by Dmitry Efrosinin 1,2,* and Natalia Stepanova 3
1 Institute for Stochastics, Johannes Kepler University Linz, 4030 Linz, Austria
2 Department of Information Technologies, Faculty of Mathematics and Natural Sciences, Peoples’ Friendship University of Russia (RUDN University), 117198 Moscow, Russia
3 Laboratory N17, Trapeznikov Institute of Control Sciences of RAS, 117997 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(11), 1267; https://doi.org/10.3390/math9111267
Submission received: 26 March 2021 / Revised: 30 April 2021 / Accepted: 25 May 2021 / Published: 31 May 2021
(This article belongs to the Special Issue Distributed Computer and Communication Networks)

Abstract

This paper deals with heterogeneous queues where servers differ not only in service rates but also in operating costs. The classical optimisation problem in queueing systems with heterogeneous servers consists in the optimal allocation of customers between the servers with the aim of minimising the long-run average cost of the system per unit of time. As is known, under some assumptions the optimal allocation policy for this system is of threshold type, i.e., the policy depends on the queue length and the states of the faster servers. The optimal thresholds can be calculated using a Markov decision process by implementing the policy-iteration algorithm. This algorithm may have certain limitations when obtaining a result for the entire range of system parameter values. However, the available data sets of evaluated optimal threshold levels and values of system parameters can be used to estimate the optimal thresholds through artificial neural networks. The obtained results are accompanied by a simple heuristic solution. Numerical examples illustrate the quality of the estimations.

1. Introduction

Many queueing systems are analysed for their dynamic and optimal control related to system access, resource allocation, changing service area characteristics and so on. Sets of computerised tools and procedures provide large data sets which can be used to expand the potential of classical optimisation methods. The paper deals with a known model of a multi-server queueing system with controllable allocation of customers between heterogeneous servers which are differentiated by their service and cost attributes. For the queueing system with two heterogeneous servers it has been shown in [1], using a dynamic programming approach, that to minimise the mean sojourn time of customers in the system, the faster server must always be used and a customer has to be assigned to the slower server if and only if the number of customers in the queue exceeds a certain threshold level. Furthermore, this result was obtained independently in a simpler form in [2,3]. In [4], the author analysed a multi-server version of such a system and confirmed the threshold nature of the optimal policy as well.
The problem of the optimal allocation of customers between heterogeneous servers in queueing systems with additional costs, with the aim of minimising the long-run average cost per unit of time, is notoriously more difficult. Some progress has been made after the appearance of the review paper [5]. In [6,7], the authors studied a model with set-up costs using a hysteretic control rule, thereby stressing the algorithmic aspects of the optimal control structure. The same system has been discussed in [8], where a direct method that provides a closed-form expression for the stationary occupancy distribution was proposed. In [9,10], the authors have used theoretical study and exhaustive numerical analysis to show that, for the specified servers’ ordering, the optimal allocation policy which minimises the long-run average cost belongs to a set of structural policies. In other words, for the servers’ enumeration (1), the allocation control policy denoted by $f$ can be defined through a sequence of threshold levels $1 = q_1 \le q_2 \le \dots \le q_K < \infty$. With respect to the defined policy, the server operating at the highest rate should remain busy whenever the queueing system is non-empty. The $k$th server ($k \ge 2$) is used only if the first $k-1$ servers are busy and the queue length reaches a threshold level $q_k > 0$. In the general case, the optimal threshold levels can depend on the states of the slower servers, and formally the optimal policy $f$ is not of a pure threshold type. However, since the $k$th threshold value may vary by at most one when the state of a slower server changes, and this has a weak effect on the average cost, such influence can be neglected. Hence the optimal allocation policy for the multi-server heterogeneous queueing system can be treated as a threshold one.
Searching for the optimal values of $q_2, \dots, q_K$ by directly minimising the average cost function can be expensive, especially when $K$ is large. To calculate the optimal threshold levels we can use a policy-iteration algorithm [11,12,13] which constructs a sequence of improved policies converging to the optimal one. This algorithm is a fairly versatile tool for solving various optimisation problems. Unfortunately, as is usually the case in practice, this algorithm is not without limitations, such as the difficulties associated with convergence when the system is heavily loaded, and the limitation on the process dimension and, consequently, on the number of states. Thus, we would like to compensate for some of the weaknesses of this algorithm with other methods for calculating the optimal control policy. The contribution of this paper can be briefly described in two conceptual parts. In the first part, we propose a heuristic solution (HS) to obtain functional relationships for the optimal thresholds based on a simple discrete approximation of the system’s behaviour. The second part is devoted to an alternative machine learning technique, artificial neural networks (NN) [14,15,16], which is used again for the estimation of the optimal threshold levels. The policy-iteration algorithm is used in the paper to generate the data sets needed both to verify the quality of the proposed optimal threshold estimation methods and to train the neural networks. We strongly believe that the trained neural network can be successfully used to calculate the optimal thresholds for those system parameters for which alternative numerical methods are difficult or impossible to use, for example, in the heavy traffic case, or, in general, to reconstruct the areas of optimality without the usage of time-expensive algorithms and procedures. There are a number of papers on prediction of the stochastic behaviour of queueing systems and networks using machine learning algorithms, see e.g., [17,18] and references therein. However, we could not find published works where heuristics and machine learning methods are used to solve a similar optimisation problem for heterogeneous queueing systems, and therefore we consider this paper relevant.
This paper is organised as follows. In Section 2, we briefly discuss the mathematical model. Section 3 introduces some heuristic choices for threshold levels that turn out to be nearly optimal. Section 4 presents the results obtained when the trained neural network was run on verification data of the policy-iteration algorithm.

2. Mathematical Model

We summarise briefly the model under study. The queueing system is of the type $M/M/K$ with an infinite-capacity buffer and $K$ heterogeneous servers. This system is shown schematically in Figure 1. The Poisson arrival stream has rate $\lambda$ and the exponentially distributed service time at server $j$ has rate $\mu_j$. We assume that the service in the system is without preemption, i.e., a customer in service cannot change the server. The random variables of the inter-arrival times and the service times of the servers are assumed to be independent. An additional cost structure is introduced, consisting of the operating cost $c_j > 0$ per unit of time of service on server $j$ and the holding cost $c_0 > 0$ of waiting in the queue. Assume that the servers are enumerated in such a way that
$$\mu_1 \ge \mu_2 \ge \dots \ge \mu_K, \qquad c_1\mu_1^{-1} \le c_2\mu_2^{-1} \le \dots \le c_K\mu_K^{-1}, \qquad (1)$$
where $c_j\mu_j^{-1}$ stands for the mean operating cost per customer for the $j$th server.
The controller has full information about the system’s state and, based on this information, can make control actions on the system at the decision epochs when certain state transitions occur, following the prescription of the policy $f$. In our case, the controller selects the control action at the time when a new customer enters the system and at the service completion times, if the queue is not empty. When a new customer arrives, it joins the queue and at the same time the controller sends another customer from the head of the queue to one of the idle servers or leaves it in the queue. At a service completion, the customer leaves the corresponding server, and at the same time the controller takes the next customer from the head of the queue, if it is not empty, and dispatches it to one of the idle servers or can leave it in the queue as well. A service completion in the system without waiting customers does not require the controller to perform any control action.
The fact that the optimal policy for the problem of minimising the long-run average cost per unit of time belongs to a set of threshold-based policies for multi-server heterogeneous queueing systems with costs was proved first in [10] and further confirmed for systems with heterogeneous groups of servers in [19]. The corresponding optimal thresholds can in the general case depend on the states of the slower servers. However, according to the numerical results obtained in [9], we can neglect the weak influence of the slower servers’ states on the optimal allocation policy for the faster servers. This phenomenon is discussed additionally in Example 2. Therefore, we may assume that the optimal policy belongs to the class of pure threshold policies, where the use of a certain server depends solely on the number of waiting customers in the queue. Specifically, for the system under study, such a policy is defined by the following sequence of threshold levels:
$$1 = q_1 \le q_2 \le \dots \le q_K < \infty. \qquad (2)$$
The policy prescribes the use of the $k$ fastest servers whenever the number of customers waiting in the queue satisfies the condition $q_k \le q \le q_{k+1} - 1$.
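As a quick illustration, the following Python fragment (not from the paper; the function name and the example threshold vector, taken from Example 1 below, are purely illustrative) evaluates how many of the fastest servers such a threshold policy prescribes to use for a given queue length.

```python
def servers_to_use(q, thresholds):
    """Number k of fastest servers used by a threshold policy (q_1, ..., q_K)
    for queue length q: the largest k with q_k <= q."""
    k = 0
    for qk in thresholds:          # thresholds = (q_1, q_2, ..., q_K), q_1 = 1
        if q >= qk:
            k += 1
        else:
            break
    return k

# With the thresholds (q_1, ..., q_5) = (1, 3, 4, 5, 12) of Example 1 below:
# servers_to_use(4, (1, 3, 4, 5, 12)) returns 3, i.e., the three fastest servers are used.
```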
To calculate the optimal thresholds we need to formulate the introduced optimisation problem in terms of a Markov decision process. This process is based on a $(K+1)$-dimensional continuous-time Markov chain
$$\{X(t)\}_{t \ge 0} = \{Q(t), D_1(t), \dots, D_K(t)\}_{t \ge 0} \qquad (3)$$
with an infinitesimal matrix $\Lambda^f$ which depends on the policy $f$. Here the component $Q(t) \in \mathbb{N}_0$ stands for the number of waiting customers at time $t$ and
$$D_j(t) = \begin{cases} 0 & \text{if the } j\text{th server is idle,} \\ 1 & \text{if the } j\text{th server is busy.} \end{cases}$$
The state space of the process $\{X(t)\}_{t \ge 0}$ operating under some policy $f$ is $E^f = \{x = (q(x), d_1(x), \dots, d_K(x))\} \subseteq \mathbb{N}_0 \times \{0,1\}^K$, where the notations $q(x)$ and $d_j(x)$ are used respectively for the queue length and for the state of the $j$th server in state $x \in E^f$.
The possible server states are partitioned as follows:
$$J_0(x) = \{j: d_j(x) = 0\}, \qquad J_1(x) = \{j: d_j(x) = 1\}.$$
The sets $J_0(x)$ and $J_1(x)$ denote the sets of idle and busy servers in state $x \in E^f$, respectively. The set of control actions $a$ is $A = \{0, 1, \dots, K\}$. If $a = 0$, the controller allocates a customer to the queue. Otherwise, if $a \ne 0$, the controller instructs a customer to occupy the server with number $a$. In addition, we can define the subsets $A(x) = J_0(x) \cup \{0\} \subseteq A$ of admissible actions in state $x$. The policy $f$ specifies the choice of a control action at any decision epoch, and the infinitesimal matrix $\Lambda^f = [\lambda_{xy}(a)]$ of the Markov chain (3) then has the following elements,
$$\lambda_{xy}(a) = \begin{cases} \lambda, & y = x + e_a,\ a \in A(x), \\ \mu_j, & y = x - e_j,\ j \in J_1(x),\ q(x) = 0, \\ \mu_j, & y = x - e_j - e_0 + e_a,\ a \in A(x - e_j - e_0),\ q(x) > 0, \end{cases}$$
where $e_j$ is defined as the $(K+1)$-dimensional unit vector with all elements equal to zero except the $j$th position ($j = 0, 1, \dots, K$).
We will search for the optimal control policy among the set of stationary Markov policies $f$ that guarantee ergodicity of the Markov chain $\{X(t)\}_{t \ge 0}$. The corresponding stability condition is obviously defined as $\lambda < \sum_{j=1}^{K}\mu_j$. It follows from the fact that if the number of customers exceeds the threshold $q_K$, then the queueing system behaves like an $M/M/1$ queue with arrival rate $\lambda$ and total service rate $\mu_1 + \dots + \mu_K$. As is known, see e.g., [13], for an ergodic Markov chain with costs the long-run average cost per unit of time under the policy $f$ equals the corresponding ensemble average, which can be written in the form
$$g^f = \limsup_{t \to \infty} \frac{1}{t} V^f(x,t) = \sum_{y \in E^f} c(y)\pi_y^f, \qquad (4)$$
where $c(y) = c_0 q(y) + \sum_{j=1}^{K} c_j d_j(y)$ is the immediate cost in state $y \in E^f$. The cost function $V^f(x,t)$ is given by
$$V^f(x,t) = E^f\left[\int_0^t \Big(c_0 Q(u) + \sum_{j=1}^{K} c_j D_j(u)\Big)\,du \,\Big|\, X(0) = x\right]. \qquad (5)$$
This function describes the total average cost up to time $t$ given that the initial state is $x$, and $\pi_y^f = P^f[X(t) = y]$ is the stationary state distribution for the policy $f$. The policy $f^*$ is said to be optimal when for $g^f$ defined in (4) we evaluate
$$g^* = \inf_f g^f = \min_{q_2, \dots, q_K} g(q_2, \dots, q_K).$$
To evaluate the optimal threshold levels and the optimised value of the mean average cost per unit of time, the policy-iteration Algorithm 1 is used. This algorithm constructs a sequence of improved policies until the average cost optimum is reached. It consists of three main steps: value evaluation, policy improvement and threshold evaluation. The value evaluation is based on solving, for a given policy $f$, the system of linear equations
$$v^f(x) = \frac{1}{\lambda_x(a)}\Big[c(x) + \sum_{y \ne x}\lambda_{xy}(a)\,v^f(y) - g^f\Big]. \qquad (6)$$
Algorithm 1 Policy-iteration algorithm
1: procedure PIA($K, W, \lambda, \mu_j, c_j, j = 1, 2, \dots, K, c_0$)
2:     $f^{(0)}(x) = \operatorname{argmin}_{j \in J_0(x)} c_j\mu_j^{-1}$    ▹ Initial policy
3:     $n \leftarrow 0$
4:     $g^{f^{(n)}} = \lambda v^{f^{(n)}}(e_1)$    ▹ Value evaluation
5:     for $x = (0, 1, 0, \dots, 0)$ to $(W, 1, 1, \dots, 1)$ do
6:         $v^{f^{(n)}}(x) = \dfrac{1}{\lambda + \sum_{j \in J_1(x)}\mu_j}\Big[c(x) - g^{f^{(n)}} + \lambda v^{f^{(n)}}\big(x + e_{f^{(n)}(x)}\big) + \sum_{j \in J_1(x)}\mu_j v^{f^{(n)}}(x - e_j)\,\mathbb{1}_{\{q(x)=0\}} + \sum_{j \in J_1(x)}\mu_j v^{f^{(n)}}\big(x - e_j - e_0 + e_{f^{(n)}(x - e_j - e_0)}\big)\,\mathbb{1}_{\{q(x)>0\}}\Big]$
7:     end for
8:     $f^{(n+1)}(x) = \operatorname{argmin}_{a \in A(x)} v^{f^{(n)}}(x + e_a)$    ▹ Policy improvement
9:     if $f^{(n+1)}(x) = f^{(n)}(x)$ for all $x \in E^f$ then return $f^{(n+1)}(x), v^{f^{(n)}}(x), g^{f^{(n)}}$
10:    else $n \leftarrow n + 1$, go to step 4
11:    end if
12:    $q_k:\ f^{(n+1)}(q, 1, \dots, 1, 0, d_{k+1}, \dots, d_K) = \begin{cases} 0, & q \le q_k - 2 \\ k, & q > q_k - 2 \end{cases}, \quad k = 2, \dots, K$    ▹ Threshold evaluation
13: end procedure
Here $v^f: E^f \to \mathbb{R}$ is the dynamic-programming value function, which indicates the transition effect of an initial state $x$ on the total average cost and satisfies the following asymptotic relation,
$$V^f(x,t) = g^f t + v^f(x) + o(1), \quad t \to \infty,\ x \in E^f. \qquad (7)$$
In order to make the system (6) solvable, one of the values $v(x)$ must be set to zero, e.g., for $x_0 = (0, \dots, 0)$ we set $v(x_0) = 0$. Since in our case $c(x_0) = 0$, the first equation of the system (6) is of the form $g^f = \sum_{y \ne x_0}\lambda_{x_0 y}(a)v^f(y)$. In the policy improvement step a new policy $f'$ is calculated by minimising the value function $v(x + e_a)$ over the admissible control actions $a \in A(x)$ for each state $x \in E^f$. The algorithm converges when the policies $f$ and $f'$ on neighbouring iterations are equal. In the threshold evaluation step we calculate the optimal thresholds $q_k$, $k = 2, \dots, K$, based on the optimal policy $f$. As the initial policy we select the policy which prescribes in any state the usage of an idle server $j$ with the minimal value of the mean operating cost $c_j\mu_j^{-1}$ per customer. More detailed information on deriving the dynamic programming equations for the heterogeneous queueing system and calculating the corresponding optimal allocation control policy can be found in [9]. For the existence of an optimal stationary policy and the convergence of the policy-iteration algorithm we refer to [12,20,21,22].
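To make the above steps concrete, the following Python sketch implements a policy-iteration loop for the truncated model (buffer size $W$) described above. It is an illustrative implementation under our own conventions, not the authors' code: the function names, the dense linear-algebra solution of the Poisson equation and the use of numpy are assumptions of this sketch.

```python
import numpy as np
from itertools import product

def policy_iteration(lam, mu, c0, c, W, max_iter=200):
    """Average-cost policy iteration for the truncated M/M/K heterogeneous queue.
    States are tuples x = (q, d_1, ..., d_K); actions: 0 = queue, a >= 1 = occupy server a."""
    K = len(mu)
    states = [(q, *d) for q in range(W + 1) for d in product((0, 1), repeat=K)]
    idx = {x: i for i, x in enumerate(states)}
    n = len(states)

    def cost(x):                                  # immediate cost c(x)
        return c0 * x[0] + sum(cj * dj for cj, dj in zip(c, x[1:]))

    def actions(x):                               # A(x) = J_0(x) union {0}
        return [0] + [j + 1 for j in range(K) if x[1 + j] == 0]

    def move(x, a):                               # state reached by x + e_a
        if a == 0:
            return (min(x[0] + 1, W), *x[1:])     # arrivals are lost at a full buffer
        d = list(x[1:]); d[a - 1] = 1
        return (x[0], *d)

    def generator(f):                             # infinitesimal matrix for policy f
        Q = np.zeros((n, n))
        for x in states:
            i = idx[x]
            Q[i, idx[move(x, f[x])]] += lam       # arrival, dispatched by f(x)
            for j in range(K):
                if x[1 + j] == 1:                 # service completion at server j
                    d = list(x[1:]); d[j] = 0
                    if x[0] == 0:
                        y = (0, *d)
                    else:                         # next waiting customer, dispatched by f
                        z = (x[0] - 1, *d)
                        y = move(z, f[z])
                    Q[i, idx[y]] += mu[j]
        np.fill_diagonal(Q, Q.diagonal() - Q.sum(axis=1))
        return Q

    # Initial policy: use the idle server with minimal mean operating cost c_j / mu_j.
    f = {x: (min(actions(x)[1:], key=lambda a: c[a - 1] / mu[a - 1])
             if len(actions(x)) > 1 else 0) for x in states}

    for _ in range(max_iter):
        Q = generator(f)
        # Value evaluation: solve the Poisson equation Q v = g*1 - c with v(x0) = 0.
        A = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
        A[:n, :n] = Q; A[:n, n] = -1.0
        b[:n] = -np.array([cost(x) for x in states])
        A[n, idx[(0,) * (K + 1)]] = 1.0           # normalisation v(x0) = 0
        sol = np.linalg.lstsq(A, b, rcond=None)[0]
        v, g = sol[:n], sol[n]
        # Policy improvement: f'(x) = argmin over a in A(x) of v(x + e_a).
        f_new = {x: min(actions(x), key=lambda a: v[idx[move(x, a)]]) for x in states}
        if f_new == f:
            break
        f = f_new
    return f, v, g
```

The returned dictionary `f` can then be inspected to read off the threshold levels, in the spirit of the threshold-evaluation step of Algorithm 1.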
To realise the policy-iteration algorithm we convert the $(K+1)$-dimensional state space $E^f$ of the Markov decision process to a one-dimensional equivalent state space. Let $\Delta: E^f \to \mathbb{N}_0$ be a one-to-one mapping of the vector state $x = (q(x), d_1(x), \dots, d_K(x)) \in E^f$ to a value from $\mathbb{N}_0$ which is of the form
$$\Delta(x) = q(x)2^K + \sum_{i=1}^{K} d_i(x)2^{i-1}.$$
A new state after a transition involving the addition or removal of a customer in some state $x \in E^f$ is calculated in the one-dimensional state space by
$$\Delta(x \pm e_0) = (q(x) \pm 1)2^K + \sum_{i=1}^{K} d_i(x)2^{i-1} = \Delta(x) \pm 2^K, \qquad \Delta(x \pm e_j) = q(x)2^K + \sum_{i=1}^{K} d_i(x)2^{i-1} \pm 2^{j-1} = \Delta(x) \pm 2^{j-1}.$$
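A minimal sketch of this state encoding (the function name is ours, not the paper's):

```python
def delta(x):
    """One-dimensional index Delta(x) = q(x)*2^K + sum_i d_i(x)*2^(i-1) of x = (q, d_1, ..., d_K)."""
    q, d = x[0], x[1:]
    return q * 2 ** len(d) + sum(di * 2 ** i for i, di in enumerate(d))

# Consistent with the update rules above:
# adding/removing a waiting customer shifts the index by +/- 2^K,
# occupying/freeing server j shifts it by +/- 2^(j-1).
# Example: delta((3, 1, 0, 1)) == 3*8 + 1 + 4 == 29.
```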
Further in the algorithm, an infinite buffer system must be approximated by an equivalent system where the number of waiting places is finite but at the same time is sufficiently large. As a truncation criterion, we use the loss probability which should not exceed some small value ε > 0 .
Remark 1.
If the buffer size is $W$, the number of states is
$$|E^f| = 2^K(W + 1).$$
When the number of waiting customers exceeds the level $q_K$, all servers must be occupied and the system dynamics is the same as in a classical $M/M/1$ queue with arrival rate $\lambda$ and service rate $\sum_{j=1}^{K}\mu_j$. The stationary state probabilities for the states $x$ with component $q(x) \ge q_K$ satisfy the following difference equation
$$\lambda\pi(q-1, 1, \dots, 1) - \Big(\lambda + \sum_{j=1}^{K}\mu_j\Big)\pi(q, 1, \dots, 1) + \sum_{j=1}^{K}\mu_j\,\pi(q+1, 1, \dots, 1) = 0,$$
which has a solution in geometric form, $\pi(q, 1, \dots, 1) = \pi(q_K, 1, \dots, 1)\rho^{q - q_K}$, $q \ge q_K$. For details and theoretical substantiation see, e.g., [23]. Note that the value of $q_K$ included in this formula can be estimated by the heuristic solution (9). Then the truncation parameter $W$ of the buffer size can be evaluated from the following constraint for the loss probability
$$\sum_{q=W}^{\infty}\pi(q, 1, \dots, 1) = \pi(q_K, 1, \dots, 1)\sum_{q=W}^{\infty}\rho^{q - q_K} \le \sum_{q=W}^{\infty}\rho^{q - q_K} = \frac{\rho^{W - q_K}}{1 - \rho} < \varepsilon,$$
where $\rho = \frac{\lambda}{\sum_{j=1}^{K}\mu_j}$. After simple algebra, it implies
$$W > \frac{\log\big(\varepsilon(1 - \rho)\big)}{\log(\rho)} + q_K.$$
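For illustration, a one-line computation of this bound (the function name is ours):

```python
import math

def buffer_size(eps, rho, q_K):
    """Smallest integer W satisfying the loss-probability constraint above."""
    return math.ceil(math.log(eps * (1 - rho)) / math.log(rho) + q_K)

# With eps = 0.0001, rho = 14/36 and q_K = 12 as in Example 1, the bound is about 22.27,
# so W = 23 already suffices; Example 1 conservatively uses W = 80.
```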
Example 1.
Consider the system $M/M/5$ with $K = 5$ and $\lambda = 15$. All other parameters take the following values:
j              |  0  |  1   |  2   |  3   |  4   |  5
c_j            |  1  |  5   |  4   |  3   |  2   |  1
μ_j            |  -  |  20  |  8   |  4   |  3   |  1
c_j μ_j^{-1}   |  -  | 0.25 | 0.50 | 0.75 | 0.67 | 1.00
The truncation parameter $W$ of the buffer size is chosen as $80$, which for $\varepsilon = 0.0001$ guarantees that $W > \frac{\log\big(0.0001(1 - 14/36)\big)}{\log(14/36)} + q_5 = 22.2734$. Here $q_5 = 12$ was calculated by (9). In the control table below, we summarise the function $f(x)$ which specifies the control actions at the times of arrivals in a certain state $x$:
System state x: d = (d_1, d_2, d_3, d_4, d_5)   Queue length q(x): 0 1 2 3 4 5 6 7 8 9 10 11 12 ...
(0,*,*,*,*)   1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,0,*,*,*)   0 0 2 2 2 2 2 2 2 2 2 2 2 2
(1,1,0,*,*)   0 0 0 3 3 3 3 3 3 3 3 3 3 3
(1,1,1,0,*)   0 0 0 0 4 4 4 4 4 4 4 4 4 4
(1,1,1,1,0)   0 0 0 0 0 0 0 0 0 0 0 5 5 5
(1,1,1,1,1)   0 0 0 0 0 0 0 0 0 0 0 0 0 0
Threshold levels $q_k$, $k = 1, \dots, K = 5$, can be evaluated by finding where the optimal actions change, i.e., where $f(q, \underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k+1}) < f(q+1, \underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k+1})$ for $q = 0, \dots, W-1$. In this example the optimal policy $f^*$ is defined through the sequence of threshold levels $(q_2, q_3, q_4, q_5) = (3, 4, 5, 12)$ and $g^* = 4.92897$. In the control table, the change of the control action in a certain system state can be seen where an entry switches from 0 to the corresponding server number.
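The threshold-evaluation step can be sketched in a few lines (illustrative code, relying on the policy dictionary produced by the policy-iteration sketch above):

```python
def extract_threshold(f, k, K, W):
    """Smallest queue length at which the policy f starts to use server k,
    read from the states with the first k-1 servers busy and the rest idle."""
    busy = (1,) * (k - 1) + (0,) * (K - k + 1)
    for q in range(W):
        if f[(q, *busy)] == k:
            return q + 1        # cf. Algorithm 1: action k is chosen for q > q_k - 2
    return None

# Reading the control table of Example 1 in this way yields (q_2, q_3, q_4, q_5) = (3, 4, 5, 12).
```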
In the next example we give some arguments that allow us to work further only with the threshold-based control policies.
Example 2.
Consider the system $M/M/3$ with $K = 3$ servers. The aim of this example is the following: with respect to the system states $x = (q, 1, 0, 0)$ and $y = (q, 1, 0, 1)$, the assignment to the second server can in general depend not only on the number of customers in the queue but also on the state of the third server. In this example it is optimal to make an assignment in state $x$ but not in state $y$. We solve the optimisation problem for the following parameters:
  • $\lambda = 0.238$, $\mu_1 = 0.621$, $\mu_2 = 0.071$ and $\mu_3 = 0.070$,
  • $\lambda = 0.477$, $\mu_1 = 0.356$, $\mu_2 = 0.096$ and $\mu_3 = 0.070$.
The optimal solutions for the first and second groups of system parameters are represented in Table 1 and Table 2, respectively.
We notice that for most parameter values the optimal decision can be made independently of the states of the slower servers. However, it is interesting to consider the reasons for such possible dependence. It is evident that in our optimisation problem, the optimal policy assigns a customer to the fastest free server in states for which this would not be optimal if there were no arrivals. This is because the system should be ready for possible arrivals, which, if they occur, will wish to see a less congested system.
Consider now the system with three servers in the states $x + e_0 + e_1$ and $x + e_1 + e_2$, where $x = (0, 0, 0, 0)$. Let us consider the case of a potential service completion at the second server, taking into account a large number $q$ of accompanying arrivals. Because $q$ is large, it is optimal to occupy all accessible idle servers. The states mentioned above become $x + (q-1)e_0 + e_1 + e_2 + e_3$ and $x + (q-2)e_0 + e_1 + e_2 + e_3$. Thus, the difference $v(x + (q-1)e_0 + e_1 + e_2 + e_3) - v(x + (q-2)e_0 + e_1 + e_2 + e_3)$ of value functions measures the advantage that will be obtained in the case of the assignment to the second server, $x + e_0 + e_1 \to x + e_1 + e_2$. The events of service completion on the second server provide the incentive to make an assignment to the second server. However, if the two initial states are $x + e_0 + e_1 + e_3$ and $x + e_1 + e_2 + e_3$, the measure of advantage if a service completion takes place is $v(x + q e_0 + e_1 + e_2 + e_3) - v(x + (q-1)e_0 + e_1 + e_2 + e_3)$. Since we expect that the value function $v(q e_0 + e_1 + e_2 + e_3)$ is convex in $q$, it is plausible that the incentive to make an assignment to the second server is greater in state $x + e_0 + e_1 + e_3$ than in $x + e_0 + e_1$. The numerical examples proposed in Table 3 confirm our expectations.
Further numerical examples show that the threshold levels have a very weak dependence on the slower servers’ states. According to our observations, the optimal threshold may vary by at most 1 when the state of a slower server changes.
The data needed either to verify the heuristic solution or for training and verification of the neural network was generated by the policy-iteration algorithm in the form of the list
$$S = \Big\{(\lambda, \mu_1, \dots, \mu_K, c_0, c_1, \dots, c_K) \to (q_2, \dots, q_K):\ \lambda \in [1, 45],\ \mu_1, \dots, \mu_K \in [1, 40],\ c_0 \in [1, 3],\ c_1, \dots, c_K \in [1, 5],\ \lambda < \textstyle\sum_{j=1}^{K}\mu_j,\ \mu_1 \ge \dots \ge \mu_K,\ c_1\mu_1^{-1} \le \dots \le c_K\mu_K^{-1}\Big\}. \qquad (8)$$
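The following short sketch illustrates how records of such a list could be generated (the sampling scheme, the function names and the reuse of the two sketches above are our illustrative assumptions, not the authors' data-generation code):

```python
import random

def sample_record(K, W):
    """Draw one admissible parameter vector from the ranges in (8) and label it
    with the optimal thresholds computed by policy iteration."""
    while True:
        lam = random.uniform(1, 45)
        mu = sorted((random.uniform(1, 40) for _ in range(K)), reverse=True)
        c0 = random.uniform(1, 3)
        c = [random.uniform(1, 5) for _ in range(K)]
        ratios = [cj / mj for cj, mj in zip(c, mu)]
        stable = lam < sum(mu)
        ordered = all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))
        if stable and ordered:
            break
    f, _, _ = policy_iteration(lam, mu, c0, c, W)              # sketch from Section 2
    q = tuple(extract_threshold(f, k, K, W) for k in range(2, K + 1))
    return (lam, *mu, c0, *c), q
```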
Example 3.
Some elements of the list $S$ for the $M/M/5$ queueing system are
$(1, 20, 8, 4, 2, 1, 1, 1, 1, 1, 1, 1) \to (2, 5, 13, 30)$,
$(10, 20, 8, 4, 2, 1, 1, 1, 1, 1, 1, 1) \to (1, 4, 9, 21)$,
$(1, 20, 8, 4, 2, 1, 1, 5, 4, 3, 2, 1) \to (5, 12, 20, 20)$,
$(10, 20, 8, 4, 2, 1, 1, 5, 4, 3, 2, 1) \to (3, 8, 13, 13)$.

3. Heuristic Solution

In this section, we want to obtain a heuristic solution (HS) that calculates the optimal thresholds $q_k$, $k = 2, \dots, K$, for an arbitrary $K$ in explicit form. For this purpose, we will use a simple deterministic approximation for the dynamic behaviour of the number of customers in the queue, as illustrated in Figure 2.
Let $q_k$ be the optimal threshold used to dispatch a customer to server $k$ in state $(q_k - 1, \underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k+1})$, where the first $k-1$ servers are busy. Now we compare the queues of the system given that the initial state is $x_0 = (q_k, \underbrace{1, \dots, 1}_{k-1}, 0, \underbrace{0, \dots, 0}_{K-k})$, where the $k$th server is not used for a new customer, and $y_0 = (q_k - 1, \underbrace{1, \dots, 1}_{k-1}, 1, \underbrace{0, \dots, 0}_{K-k})$, where the $k$th server is occupied by a waiting customer. It is assumed that the stability condition holds. The initial queue lengths are labelled in Figure 2 by $A = q_k$ and $B = q_k - 1$. The proposed deterministic approximation is based on the assumption that the queue length of the system with the first $k-1$ busy servers decreases at the rate $\sum_{j=1}^{k-1}\mu_j - \lambda$. If this rate is kept until the queue is empty, the queue empties at the time points $D = \frac{q_k}{\sum_{j=1}^{k-1}\mu_j - \lambda}$ and $C = \frac{q_k - 1}{\sum_{j=1}^{k-1}\mu_j - \lambda}$, respectively, for the given initial queue lengths $A$ and $B$. The total (accumulated) holding times of all customers in the queue with lengths $q_k$ and $q_k - 1$ are equal, respectively, to the number of square blocks of dimension $1 \times \frac{1}{\sum_{j=1}^{k-1}\mu_j - \lambda}$ within the areas $AOD$ and $BOC$, multiplied by the mean service time of the approximated model:
$$F_{AOD} = \big(q_k + (q_k - 1) + (q_k - 2) + \dots + 1\big)\frac{1}{\sum_{j=1}^{k-1}\mu_j - \lambda} = \frac{q_k(q_k + 1)}{2}\cdot\frac{1}{\sum_{j=1}^{k-1}\mu_j - \lambda} \quad\text{and}\quad F_{BOC} = \big((q_k - 1) + (q_k - 2) + \dots + 1\big)\frac{1}{\sum_{j=1}^{k-1}\mu_j - \lambda} = \frac{q_k(q_k - 1)}{2}\cdot\frac{1}{\sum_{j=1}^{k-1}\mu_j - \lambda}.$$
The mean operating cost of the first $k-1$ servers during the time period until the queue becomes empty, given that the initial state is $x_0$, can be calculated by
$$q_k\left(\frac{c_1}{\mu_1}\cdot\frac{\mu_1}{\sum_{j=1}^{k-1}\mu_j} + \dots + \frac{c_{k-1}}{\mu_{k-1}}\cdot\frac{\mu_{k-1}}{\sum_{j=1}^{k-1}\mu_j}\right) = q_k\frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j}.$$
Here the expression $\frac{\mu_i}{\sum_{j=1}^{k-1}\mu_j}$ denotes the probability of a service completion at the $i$th server. The mean operating cost given that the initial state is $y_0$ can be defined analogously as $(q_k - 1)\frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j}$.
Now using the deterministic approximation we can formulate the following proposition.
Proposition 1.
The optimal thresholds $q_k$, $k = 2, \dots, K$, are approximated by
$$q_k \approx \hat q_k = \max\left\{1,\ \left\lceil\frac{\sum_{j=1}^{k-1}\mu_j - \lambda}{c_0}\left(\frac{c_k}{\mu_k} - \frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j}\right)\right\rceil\right\}. \qquad (9)$$
Proof. 
Let $V(x)$ be the overall average system cost until the system becomes empty, given that the initial state is $x \in E^f$. This value can be represented as the sum of the total holding cost of the customers waiting in the queue and the mean operating cost of all servers which remain busy in state $x$. Assume that the controller decides to allocate the customer to the $k$th server in state $(q_k - 1, \underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k+1})$. According to the proposed deterministic approximation, this decision leads to a reduction of the overall system cost, i.e.,
$$V(x_0) - V(y_0) > 0, \qquad (10)$$
where
$$V(x_0) = c_0 F_{AOD} + q_k\frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j} + V\big(0, \underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k+1}\big),$$
$$V(y_0) = \frac{c_k}{\mu_k} + V\big(q_k - 1, \underbrace{1, \dots, 1}_{k-1}, 0, \underbrace{0, \dots, 0}_{K-k}\big) = \frac{c_k}{\mu_k} + c_0 F_{BOC} + (q_k - 1)\frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j} + V\big(0, \underbrace{1, \dots, 1}_{k-1}, \underbrace{0, \dots, 0}_{K-k+1}\big). \qquad (11)$$
After substitution of (11) into (10) we get
$$c_0(F_{AOD} - F_{BOC}) + \frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j} - \frac{c_k}{\mu_k} = c_0\frac{q_k}{\sum_{j=1}^{k-1}\mu_j - \lambda} + \frac{\sum_{j=1}^{k-1}c_j}{\sum_{j=1}^{k-1}\mu_j} - \frac{c_k}{\mu_k} > 0.$$
Now, expressing $q_k$ after some simple manipulations, we obtain the heuristic solution for the optimal value of $q_k$ in form (9). □
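A minimal Python sketch of this heuristic (the clamping to 1 and the rounding up reflect our reading of formula (9) and are assumptions of this sketch):

```python
import math

def heuristic_thresholds(lam, mu, c0, c):
    """Heuristic thresholds (q_2, ..., q_K) from formula (9)."""
    K = len(mu)
    q_hat = []
    for k in range(2, K + 1):
        sum_mu = sum(mu[:k - 1])
        sum_c = sum(c[:k - 1])
        bound = (sum_mu - lam) / c0 * (c[k - 1] / mu[k - 1] - sum_c / sum_mu)
        q_hat.append(max(1, math.ceil(bound)))
    return q_hat

# With the parameters of Example 1 (lam = 15, mu = (20, 8, 4, 3, 1), c0 = 1,
# c = (5, 4, 3, 2, 1)) the last threshold evaluates to q_5 = 12, the value quoted there.
```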
Example 4.
Consider the queueing system from the previous example with $K = 5$. We select randomly from the data set $S$ (8) a list of system parameters $\alpha = (\lambda, \mu_1, \dots, \mu_K, c_0, c_1, \dots, c_K)$ and calculate by means of the HS (9) the threshold levels $q_k$, $k = 2, \dots, K$. Figure 3 illustrates the efficiency of the proposed heuristic solution for the threshold levels $(q_2, q_3, q_4, q_5)$ by means of confusion matrices. Each matrix row represents the elements with a given predicted value, while each column represents the elements with a given actual value. As metrics for the closeness of the predictions to the exact value and to the interval allowing a deviation of the threshold by $\pm 1$ from the real value, the overall accuracy and the accuracy $\pm 1$ are used. The results are summarised in Table 4.

4. Artificial Neural Networks

Artificial Neural Networks (NN) belong to the set of supervised machine learning methods. They are popular in many applied problems, including data classification, pattern recognition, regression, clustering and time series forecasting. Here we show that the NN can give even better results compared to the HS, which indicates the possibility of using it for predicting structural control policies.
The data set $S$ (8) is used to explore predictions of the optimal threshold levels through the NN. A multilayer neural network is used for the data classification. It can be formally defined as a function $f: \alpha \to y$, which maps an input vector $\alpha$ of dimension $2m+1$ to an estimated output $y \in \mathbb{R}^{N_c}$ over the class numbers $N = 1, \dots, N_c$. The network is decomposed into 6 layers, as illustrated in Figure 4, each of which represents a different function mapping vectors to vectors. The successive layers are: a linear layer with an output vector of size $k$, a nonlinear elementwise activation layer, three further linear layers with output vectors of size $k$, and a nonlinear normalisation layer.
The first layer is an affine transformation
$$q_1 = W_1\alpha + b_1,$$
where $q_1 \in \mathbb{R}^k$ is the output vector, $W_1 \in \mathbb{R}^{k \times (2m+1)}$ with $k = 30$ is the weight matrix, and $b_1 \in \mathbb{R}^k$ is the bias vector. The rows of $W_1$ are interpreted as features that are relevant for differentiating between the corresponding classes. Consequently, $W_1\alpha$ is a projection of the input $\alpha$ onto these features. The second layer is an elementwise activation layer defined by the nonlinear function $q_2 = \max(0, q_1)$, which sets negative entries of $q_1$ to zero and passes through only the positive entries. The next three layers are further affine transformations,
$$q_i = W_i q_{i-1} + b_i,$$
where $q_i \in \mathbb{R}^k$, $W_i \in \mathbb{R}^{k \times k}$, and $b_i \in \mathbb{R}^k$, $i = 3, 4, 5$. The last layer is the normalisation layer $y = \operatorname{softmax}(q_5)$, whose componentwise form is
$$y_N = \frac{e^{q_{5,N}}}{\sum_{N'=1}^{N_c} e^{q_{5,N'}}}, \quad N = 1, \dots, N_c.$$
The last layer normalises the output vector $y$ so that its entries lie between 0 and 1. The output $y$ can be treated as a probability distribution vector, where the $N$th element $y_N$ represents the likelihood that $\alpha$ belongs to class $N$.
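A compact numpy sketch of this 6-layer forward pass (the parameter shapes and the helper name are illustrative; the trained weights themselves are of course not reproduced here):

```python
import numpy as np

def forward(alpha, W1, b1, layers):
    """Linear -> ReLU -> three further linear layers -> softmax, as described above.
    `layers` is a list of three (W_i, b_i) pairs; the width of the last one is
    assumed to equal the number of classes N_c."""
    q = W1 @ alpha + b1               # layer 1: projection onto k features
    q = np.maximum(0.0, q)            # layer 2: elementwise ReLU activation
    for W, b in layers:               # layers 3-5: further affine transformations
        q = W @ q + b
    e = np.exp(q - q.max())           # layer 6: softmax normalisation (numerically stabilised)
    return e / e.sum()                # probability vector; entry N estimates P(class N)
```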
In the training phase of the NN we use 70% of the data set $S$ (the part of the data that was not used to verify the quality of the HS), and the rest of $S$ as validation data. We train the multilayer (6-layer) NN using an adaptive moment estimation method [24] and the neural network toolbox in Mathematica© of Wolfram Research. Then we verify the approximated function
$$\hat q_k := \hat q_k(\lambda, \mu_1, \dots, \mu_K, c_0, c_1, \dots, c_K),$$
which should be accurate enough to predict new outputs from the verification data. The algorithm was run many times on samples and networks of different sizes. In all cases the results were quite positive and indicate the potential of the machine learning methodology for optimisation problems in queueing theory.
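For readers who prefer an open-source toolchain, an equivalent training setup could look roughly as follows (a hedged PyTorch sketch under our own assumptions about dimensions and hyperparameters; the paper itself uses the Mathematica neural network toolbox and the Adam optimiser [24]):

```python
import torch
from torch import nn

def train_threshold_classifier(X, y, n_classes, k=30, epochs=500, lr=1e-3):
    """X: float tensor of parameter vectors alpha;
    y: long tensor of class labels (threshold values encoded as 0, ..., n_classes-1)."""
    model = nn.Sequential(                      # mirrors the 6-layer architecture above
        nn.Linear(X.shape[1], k), nn.ReLU(),
        nn.Linear(k, k), nn.Linear(k, k), nn.Linear(k, n_classes),
    )                                           # softmax is implicit in the cross-entropy loss
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # adaptive moment estimation
    split = int(0.7 * len(X))                   # 70% for training, 30% for validation
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X[:split]), y[:split])
        loss.backward()
        opt.step()
    with torch.no_grad():
        accuracy = (model(X[split:]).argmax(dim=1) == y[split:]).float().mean().item()
    return model, accuracy
```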
Example 5.
The results of the estimation of the optimal threshold values using the trained NN are summarised again in the form of confusion matrices, as shown in Figure 5. The overall classification accuracy and the accuracies for values with deviations are given in Table 5. We can see that the NN methodology exhibits even more accurate estimations of the optimal thresholds when the results are compared with the corresponding HS.

5. Conclusions

We combine the classic methodology of analysing controllable queues with a heuristic solution and machine learning to study the possibility of estimating the values of the optimal thresholds. Since the results were quite positive, we can make the following general conclusion. With this study we confirm that the analysis of controlled queueing systems and the solution of optimisation problems using classical Markov decision theory can be successfully combined with machine learning techniques. These approaches do not contradict each other; on the contrary, combining them provides new results.

Author Contributions

Conceptualization, D.E.; formal analysis, investigation, methodology, software and writing, D.E. and N.S. Both authors have read and agreed to the published version of the manuscript.

Funding

Open Access Funding by the University of Linz. This research has been supported by the RUDN University Strategic Academic Leadership Program (recipient D. Efrosinin).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This paper has been supported by the RUDN University Strategic Academic Leadership Program (recipient D. Efrosinin), Open Access Funding by the University of Linz.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, W.; Kumar, P.R. Optimal control of a queueing system with two heterogeneous servers. IEEE Trans. Autom. Control 1984, 29, 696–703. [Google Scholar] [CrossRef]
  2. Koole, G. A simple proof of the optimality of a threshold policy in a two-server queueing system. Syst. Control. Lett. 1995, 26, 301–303. [Google Scholar] [CrossRef]
  3. Walrand, J. A note on: “Optimal control of a queuing system with two heterogeneous servers”. Syst. Control Lett. 1984, 4, 131–134. [Google Scholar] [CrossRef]
  4. Rykov, V. Monotone Control of Queueing Systems with Heterogeneous Servers. QUESTA 2001, 37, 391–403. [Google Scholar]
  5. Crabill, T.; Gross, D.; Magazine, M.J. A classified bibliography of research on optimal design and control of queues. Oper. Res. 1977, 25, 219–232. [Google Scholar] [CrossRef]
  6. Nobel, R. Hysteretic and Heuristic Control of Queueing Systems. Ph.D. Thesis, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, November 1998. [Google Scholar]
  7. Nobel, R.; Tijms, H.C. Optimal control of a queueing system with heterogeneous servers and set-up costs. IEEE Trans. Autom. Control 2000, 45, 780–784. [Google Scholar] [CrossRef]
  8. Le Ny, L.-M.; Tuffin, B. A Simple Analysis of Heterogeneous Multi-Server Threshold Queues with Hysteresis; Institut National de Recherche en Informatique: Nancy, France, 2000. [Google Scholar]
  9. Efrosinin, D. Controlled Queueing Systems with Heterogeneous Servers: Dynamic Optimization and Monotonicity Properties of Optimal Control Policies in Multiserver Heterogeneous Queues; VDM Verlag: Saarbrücken, Germany, 2008. [Google Scholar]
  10. Rykov, V.; Efrosinin, D. On the slow server problem. Autom. Remote Control 2010, 70, 2013–2023. [Google Scholar] [CrossRef]
  11. Howard, R. Dynamic Programming and Markov Processes; Wiley Series; Wiley: London, UK, 1960. [Google Scholar]
  12. Puterman, M.L. Markov Decision Process; Wiley Series in Probability and Mathematical Statistics; Wiley: London, UK, 1994. [Google Scholar]
  13. Tijms, H.C. Stochastic Models. An Algorithmic Approach; John Wiley and Sons: New York, NY, USA, 1994. [Google Scholar]
  14. Gershenson, C. Artificial Neural Networks for Beginners; 2003; Available online: http://arxiv.org/abs/cs/0308031 (accessed on 20 August 2003).
  15. Rätsch, G. A Brief Introduction into Machine Learning; Friedrich Miescher Laboratory of the Max Planck Society: Tübingen, Germany, 2004. [Google Scholar]
  16. Russel, S.J.; Norvig, P. Artificial Intelligence. A Modern Approach; Prentice-Hall, Inc.: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  17. Kyritsis, A.I.; Deriaz, M. A machine learning approach to waiting time prediction in queueing scenarios. In Proceedings of the Second International Conference on Artificial Intelligence for Industries, Laguna Hills, CA, USA, 25–27 September 2019; pp. 17–21. [Google Scholar]
  18. Stintzing, J.; Norrman, F. Prediction of Queuing Behaviour through the Use of Artificial Neural Networks. Available online: http://www.diva-portal.se/smash/get/diva2:1111289/FULLTEXT01.pdf (accessed on 18 June 2017).
  19. Xia, L.; Zhang, Z.G.; Li, Q.-L.; Glynn, P.W. A c/μ-Rule for Service Resource Allocation in Group-Server Queues. arXiv 2018, arXiv:1807.05367. [Google Scholar]
  20. Aviv, Y.; Federgruen, A. The value-iteration method for countable state Markov decision processes. Oper. Res. Lett. 1999, 24, 223–234. [Google Scholar] [CrossRef]
  21. Özkan, E.; Kharoufeh, J.P. Optimal control of a two-server queueing system with failures. Probab. Eng. Inform. Sci. 2014, 28, 489–527. [Google Scholar] [CrossRef] [Green Version]
  22. Sennott, L.I. Stochastic Dynamic Programming and the Control of Queueing Systems; Wiley: New York, NY, USA, 1999. [Google Scholar]
  23. Efrosinin, D.; Sztrik, J. An algorithmic approach to analyzing the reliability of a controllable unreliable queue with two heterogeneous servers. Eur. J. Oper. Res. 2018, 271, 934–952. [Google Scholar] [CrossRef]
  24. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980. Available online: https://arxiv.org/abs/1412.6980 (accessed on 30 January 2017). [Google Scholar]
Figure 1. Controllable multi-server queueing system with heterogeneous servers and operating costs.
Figure 2. Queue length approximation.
Figure 3. Confusion matrices (a–d) for prediction of q_2, q_3, q_4 and q_5 using HS.
Figure 4. Architecture of the neural network.
Figure 5. Confusion matrices (a–d) for prediction of q_2, q_3, q_4 and q_5 using NN.
Table 1. Control table.
System state x: (d_1, d_2, d_3)   Queue length q(x): 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ...
(0,0,0)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,0,0)   0 0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2
(0,1,0)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,1,0)   0 0 0 0 0 3 3 3 3 3 3 3 3 3 3 3 3 3
(0,0,1)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,0,1)   0 0 0 0 2 2 2 2 2 2 2 2 2 2 2 2 2 2
(0,1,1)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,1,1)   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Table 2. Control table.
System state x: (d_1, d_2, d_3)   Queue length q(x): 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ...
(0,0,0)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,0,0)   0 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
(0,1,0)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,1,0)   0 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
(0,0,1)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,0,1)   2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
(0,1,1)   1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
(1,1,1)   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Table 3. Value function for system states.
System state x: (q, d_1, d_2, d_3)   Value function v(x): example 1 | example 2
(0,0,0,0)    0        | 0
(0,1,0,0)    2.6034   | 19.4480
(0,0,1,0)    14.0865  | 28.3810
(0,0,0,1)    14.2872  | 33.7009
(1,1,0,0)    7.7979   | 51.3142
(0,1,1,0)    16.6905  | 51.4444
(0,1,0,1)    16.8910  | 55.9981
(0,0,1,1)    28.3747  | 65.9866
(2,1,0,0)    15.5520  | 96.1454
(1,1,1,0)    21.8874  | 90.3521
(1,1,0,1)    22.0873  | 93.2714
(0,1,1,1)    30.9798  | 93.2581
(3,1,0,0)    25.7823  | 154.6580
(2,1,1,0)    29.6487  | 142.7630
(2,1,0,1)    29.8469  | 145.4230
(1,1,1,1)    36.1809  | 140.4050
...          ...      | -
(6,1,0,0)    68.3382  | -
(5,1,1,0)    66.8622  | -
(5,1,0,1)    66.9946  | -
(4,1,1,1)    66.9830  | -
(7,1,0,0)    85.9322  | -
(6,1,1,0)    82.9672  | -
(6,1,0,1)    83.0730  | -
(5,1,1,1)    81.9234  | -
Table 4. Accuracy for prediction with HS.
HS            | q_2    | q_3    | q_4    | q_5
Accuracy      | 0.8430 | 0.8778 | 0.7899 | 0.6282
Accuracy ±1   | 0.9861 | 0.9884 | 0.9871 | 0.9769
Table 5. Accuracy for prediction with NN.
NN            | q_2    | q_3    | q_4    | q_5
Accuracy      | 0.9700 | 0.8785 | 0.8708 | 0.7977
Accuracy ±1   | 0.9991 | 0.9951 | 0.9874 | 0.9962
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
