Article

Theorem on the Structure of the Fractionally Linear Functional Extremal Function

1 HSE Tikhonov Moscow Institute of Electronics and Mathematics (MIEM HSE), Federal State Autonomous Educational Institution for Higher Education, National Research University Higher School of Economics, 101000 Moscow, Russia
2 JSC NIIAS, Research and Design Institute of Information, Automation and Communication on Railway Transport, 109029 Moscow, Russia
3 Moscow Aviation Institute, National Research University, 125993 Moscow, Russia
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2886; https://doi.org/10.3390/math11132886
Submission received: 12 June 2023 / Revised: 22 June 2023 / Accepted: 26 June 2023 / Published: 27 June 2023
(This article belongs to the Section Engineering Mathematics)

Abstract

The paper proves a theorem about the structure of the distribution function on which the extremum of a fractionally linear functional is attained in the presence of an uncountable number of linear constraints. The problem of finding an extremal distribution function arises when determining the optimal control strategy in the class of Markov homogeneous randomized control strategies. The structure of extremal functions is described by a finite number of parameters; hence, the problem is greatly simplified, since it reduces to the search for an extremum of a function of those parameters.

1. Introduction

Currently, there are many practical models of reliability [1,2], mass maintenance, and safety [3] that are adequately described by a semi-Markov controlled process (SMCP) with a finite set of states [4,5].
In these models, the control consists of choosing the moments of intervention in the system (e.g., preventive maintenance, replacement of failed or worn components, frequency of detecting the current system state). When the models under study consider a class of randomized control strategies, we face a problem of functional analysis: finding an extremum of some functional characterizing the quality of control over the set of probability measures (distributions) defining the randomized strategy.
The result presented in the article allows one to simplify the problem (if the probability measures are defined on a line or a half-line) and to reduce it to a problem of mathematical analysis (finding an extremum of a function over a set of parameters), which in principle makes it possible to use software tools and to obtain a numerical result.

2. Statement of the Problem

As noted in the Introduction, for models described by a semi-Markov controlled process with a finite set of states, the search for an optimal Markov homogeneous randomized control strategy reduces to finding the extremum of the fractionally linear functional
$$I(G)=\frac{\int_{u\in U}A(u)\,G(du)}{\int_{u\in U}B(u)\,G(du)}\qquad(1)$$
over some admissible set of probability measures $\Omega$ (the set of Markov homogeneous randomized control strategies), together with the probability measure on which this extremum is reached [6].
For controllable models of mass service, reliability, and safety, when it comes to the optimal choice of the service duration or the periodicity of restoration work, the control space coincides with the half-line $U=\mathbb{R}_{+}=[0,\infty)$, and the probability measure is given by distribution functions $g=G(u)=P\{\zeta<u\}$, $G(0)=0$, where $\zeta$ is the decision to be made.
Let us introduce two distribution functions
$$g=G_1(u),\qquad g=G_2(u),$$
$$0\le G_1(u)\le G_2(u)\le 1,\quad u\ge 0,$$
$$G_1(0)=G_2(0)=0,$$
and define the set of admissible distributions
$$\Omega=\{G:\ G_1(u)\le G(u)\le G_2(u),\ u\ge 0\}.\qquad(2)$$
Now, using the notations introduced above, we can formulate the main mathematical problem: determine the maximum of the fractionally linear functional (1) over the set of admissible distributions (2) and the strategy (distribution) $G^{*}$ on which this maximum is reached,
$$\max_{G\in\Omega}I(G)=I(G^{*}).$$
Note that the constraints (2) can be represented as inequality-type constraints on linear functionals:
$$G_1(u)\le\int_0^{+\infty}\varphi(x,u)\,dG(x)\le G_2(u),\qquad \varphi(x,u)=\begin{cases}1,& x<u,\\ 0,& x\ge u,\end{cases}\qquad u\in[0,+\infty).$$
Consequently, the formulated problem is a fractionally linear programming problem with an uncountable number of linear constraints.
If we use the lemma [7] on the coincidence of the sets of probability measures on which the extremum of a fractionally linear functional and the extremum of a specially chosen linear functional are reached, the problem can be simplified by reducing it to the linear case.
Here is the formulation and proof of the lemma.
Lemma 1. 
If there exists a maximum of the fractionally linear functional $I(G)$ on some set $\Omega$,
$$\max_{G\in\Omega}I(G)=C,$$
then the equality of sets
$$\{G:\ I(G)=C\}=\Big\{G:\ \int_{u\in U}C(u)\,G(du)=0\Big\}$$
holds, where $U$ is the set of possible controls, the integrand function $C(u)$ is defined by the equality
$$C(u)=A(u)-C\,B(u),$$
and for the linear functional $\int_{u\in U}C(u)\,G(du)$ the relation $\max_{G\in\Omega}\int_{u\in U}C(u)\,G(du)=0$ holds.
Proof of Lemma 1. 
It follows from the conditions of the lemma that among the admissible distributions $\Omega$ there are no distributions for which the denominator of (1) is zero,
$$\Omega\cap\Big\{G:\ \int_{u\in U}B(u)\,G(du)=0\Big\}=\varnothing.$$
Denote
$$\Omega^{+}=\Omega\cap\Big\{G:\ \int_{u\in U}B(u)\,G(du)>0\Big\},\qquad \Omega^{-}=\Omega\cap\Big\{G:\ \int_{u\in U}B(u)\,G(du)<0\Big\},$$
$$\Omega^{+*}=\Big\{G:\ \max_{G\in\Omega^{+}}I(G)=C^{+}\Big\},\qquad \Omega^{-*}=\Big\{G:\ \max_{G\in\Omega^{-}}I(G)=C^{-}\Big\};$$
then the relations
$$\Omega^{*}=\{G:\ I(G)=C\}\subseteq\Omega^{+*}\cup\Omega^{-*},\qquad C=\max\{C^{+},C^{-}\}$$
are fulfilled.
Let us prove the equality of the two sets
$$\Omega^{+*}=\Big\{G:\ \max_{G\in\Omega^{+}}I(G)=C^{+}\Big\}=\Big\{G:\ \max_{G\in\Omega^{+}}J^{+}(G)=0\Big\},\qquad(*)$$
where $J^{+}(G)$ denotes the linear functional
$$J^{+}(G)=\int_{u\in U}\big(A(u)-C^{+}B(u)\big)\,dG(u).\qquad(**)$$
For any distribution $G\in\Omega^{+*}$ the linear functional (**) is zero, and for any distribution $\Phi\in\Omega^{+}$ the linear functional (**) is less than or equal to zero,
$$J^{+}(\Phi)=\int_{u\in U}\big(A(u)-C^{+}B(u)\big)\,d\Phi(u)\le 0;$$
from these two properties the inclusion
$$\Omega^{+*}\subseteq\Big\{G:\ \max_{G\in\Omega^{+}}J^{+}(G)=0\Big\}$$
follows.
Conversely, let $G\in\{G:\ \max_{G\in\Omega^{+}}J^{+}(G)=0\}$. Then, by virtue of definition (**), for the fractionally linear functional we have $I(G)=C^{+}$. Since zero is the maximum of the linear functional (**), we obtain the inequality $I(\Phi)\le C^{+}$, $\Phi\in\Omega^{+}$. This yields the reverse inclusion $\Omega^{+*}\supseteq\{G:\ \max_{G\in\Omega^{+}}J^{+}(G)=0\}$ and hence equality (*).
A similar equality for the set $\Omega^{-}$,
$$\Omega^{-*}=\Big\{G:\ \max_{G\in\Omega^{-}}I(G)=C^{-}\Big\}=\Big\{G:\ \max_{G\in\Omega^{-}}J^{-}(G)=0\Big\},\qquad({*}{*}{*})$$
is proved by a complete repetition of the above reasoning for the fractionally linear functional written in the form
$$I(G)=\frac{-\int_{u\in U}A(u)\,G(du)}{-\int_{u\in U}B(u)\,G(du)}$$
and the linear functional
$$J^{-}(G)=-\int_{u\in U}A(u)\,G(du)+C^{-}\int_{u\in U}B(u)\,G(du).$$
If $C^{+}>C^{-}$, the statement of the lemma follows from equality (*); if $C^{+}<C^{-}$, it follows from equality (***); if $C^{+}=C^{-}$, it follows from the obvious equality $\Omega^{*}=\Omega^{+*}\cup\Omega^{-*}$.
This completes the proof of the lemma. □
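The reduction stated by the lemma is easy to check numerically in a discrete setting. The sketch below is purely illustrative and not taken from the paper: the three controls, the values of $A(u)$ and $B(u)$, and the grid over probability vectors are hypothetical. It verifies that the maximum $C$ of the fractional functional turns the maximum of the linear functional with integrand $C(u)=A(u)-C\,B(u)$ into zero, reached on the same distribution.

```python
# Hypothetical discrete illustration of Lemma 1: controls u in {0, 1, 2},
# with integrals replaced by finite sums over a probability vector p.
A = [1.0, 3.0, 2.0]          # values A(u) on the three controls
B = [2.0, 2.5, 1.0]          # values B(u), all positive, so denominators are > 0

def I(p):                    # fractionally linear functional (1), discretized
    num = sum(pi * ai for pi, ai in zip(p, A))
    den = sum(pi * bi for pi, bi in zip(p, B))
    return num / den

# enumerate a grid of probability vectors (the admissible set Omega)
step = 0.05
n = round(1 / step)
grid = []
for i in range(n + 1):
    for j in range(n + 1 - i):
        k = n - i - j
        grid.append((i * step, j * step, k * step))

C = max(I(p) for p in grid)  # maximum of the fractional functional

def J(p):                    # linear functional with integrand C(u) = A(u) - C*B(u)
    return sum(pi * (ai - C * bi) for pi, ai, bi in zip(p, A, B))

maxJ = max(J(p) for p in grid)
# Lemma 1: the maximum of J over Omega is zero, attained on the same distributions
print(round(C, 6), round(maxJ, 10))
```

Here the admissible set is the whole simplex; for the constrained problem (2) the grid would be restricted to the admissible band of distributions.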
Thus, if it is established that the extremum of the linear functional is reached on some probability measure of a certain structure, and all probability measures of this structure belong to the set $\Omega$, then the extremum of the fractionally linear functional can be sought over this narrower set. Moreover, if each probability measure of the selected structure is uniquely defined by parameters (for example, measures defined on a discrete set), then the problem of functional analysis is reduced to the problem of finding an extremum of a function in the parameter space.
Here, it should be noted that the integrand function of a linear functional depends on an unknown parameter, and this creates certain difficulties when studying the structure of extreme probability measures.

3. Theorem on the Structure of the Extremal Function

Let us formulate the requirements on the integrand function $C(u)$ and call these requirements conditions (*):
  • the function is defined on the half-line $[0,+\infty)$;
  • the function is piecewise continuous;
  • it has a finite number of discontinuity points;
  • it has only discontinuities of the first kind;
  • at the discontinuity points the function takes the larger value;
  • the function has a finite number of local maxima.
Let us denote the values of the maxima by
$$\alpha_i,\quad i=1,2,\dots,n,\qquad \alpha_1>\alpha_2>\dots>\alpha_n.$$
Let us define a rule for ordering the maxima on any segment of the set $[0,+\infty)$. Each maximum is characterized by the pair $(\alpha,u)$: the value of the maximum and its argument.
Definition. When comparing two maxima $(\alpha,u_1)$ and $(\beta,u_2)$:
  • the maximum $(\alpha,u_1)$ is assigned the lower number under the condition that $\alpha>\beta$;
  • the maximum $(\alpha,u_1)$ is assigned the lower number under the condition that $\alpha=\beta$ and $u_2>u_1$.
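For illustration (with hypothetical pairs, not from the paper), the ordering rule amounts to sorting by decreasing value of the maximum and, among equal values, by increasing argument:

```python
# A hypothetical sketch of the maximum-ordering rule: each local maximum is a
# pair (alpha, u); larger values come first, and among equal values the
# smaller argument receives the lower number.
maxima = [(3.0, 5.0), (7.0, 2.0), (3.0, 1.0), (7.0, 9.0)]

ordered = sorted(maxima, key=lambda m: (-m[0], m[1]))
print(ordered)   # (7.0, 2.0) receives number 1, then (7.0, 9.0), (3.0, 1.0), (3.0, 5.0)
```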
Let us introduce the notations that we will use hereafter, for any subset of the domain of the integrand function:
  • $\alpha_i$, $i=1,2,\dots,n$, are the values of the maxima, and $n$ is the number of different values of the maxima, $+\infty>\alpha_1>\alpha_2>\dots>\alpha_n>-\infty$;
  • $u_{ij}$, $i=1,2,\dots,n$, $j=1,2,\dots,k_i$, is the argument of a maximum, for which the condition
$$C(u_{ij})=\alpha_i$$
is satisfied; the second index is determined by the equalities
$$u_{i1}=\min\{u:\ C(u)=\alpha_i\},\qquad u_{ik_i}=\max\{u:\ C(u)=\alpha_i\},$$
and for the other arguments the order is determined by the inequalities $u_{ij}<u_{i,j+1}$, $j=1,2,\dots,k_i-1$; that is, this sets the order of the maxima of the fixed level (4).
Note that this ordering process also involves the outermost points of the segment if the maxima are reached in them.
Let us introduce the notation $u=G_i^{-1}(g)$, $i=1,2$, for the function inverse to the function $g=G_i(u)$. Given the properties of distribution functions (the presence of discontinuities), in the areas of uncertainty we will consider the inverse function constant, defining it so that it is nondecreasing. If the distribution function has areas in which it does not increase, then the inverse function is multivalued; in this case, we choose one of the possible values and consider the inverse function to be discontinuous.
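A minimal sketch of such a generalized inverse for a stepped distribution function follows; the representation by jump points and levels, and the helper `make_inverse`, are illustrative assumptions rather than the paper's construction.

```python
# A sketch (not from the paper) of the generalized inverse
# G^{-1}(g) = min{u : G(u) >= g} for a nondecreasing step function G,
# represented by sorted jump points and the levels reached there.
import bisect

def make_inverse(breaks, values):
    """breaks[i] is where G jumps to values[i]; G(u) = values[i] for u >= breaks[i]."""
    def inv(g):
        # smallest index whose level reaches g; constant on areas of uncertainty
        i = bisect.bisect_left(values, g)
        return breaks[i] if i < len(breaks) else float("inf")
    return inv

# G with a jump at u=1 from 0.0 to 0.4 and a jump at u=3 from 0.4 to 1.0
Ginv = make_inverse([1.0, 3.0], [0.4, 1.0])
print(Ginv(0.2), Ginv(0.4), Ginv(0.7))   # 1.0 1.0 3.0
```

Note that $G^{-1}$ is constant (equal to 1.0) over the whole jump $(0,0.4]$, matching the convention adopted above.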
Let us introduce notations for some areas of the independent variable. Denote
$$U(\alpha,\beta]=\{u:\ \alpha<C(u)\le\beta\},\qquad -\infty\le\alpha\le\beta<+\infty,$$
$$U(\alpha)=U(-\infty,\alpha]=\{u:\ -\infty<C(u)\le\alpha\}.$$
The sets $U(\alpha_i,\alpha_{i-1}]$, $i=1,2,\dots,n$, $\alpha_0=+\infty$, do not intersect, and for these sets the relations
$$U(\alpha_n)\subset\dots\subset U(\alpha_2)\subset U(\alpha_1)=[0,+\infty),$$
$$U(\alpha_{i-1})\setminus U(\alpha_i)=U(\alpha_i,\alpha_{i-1}]$$
are valid.
Now let us formulate a basic theorem.
Theorem 1. 
If there exists a maximum of the linear functional
$$L(G)=\int_0^{\infty}C(u)\,dG(u)$$
over the set of distribution functions $\Omega=\{G:\ G_1(u)\le G(u)\le G_2(u),\ u\ge 0\}$ and the integrand function satisfies conditions (*), then among the distribution functions on which the maximum is achieved there exists a distribution function of the following structure:
- the function coincides with one of the boundaries; or
- the function passes from boundary to boundary; or
- the function is piecewise constant (stepped).
Proof of Theorem 1. 
The proof of the theorem is carried out step by step: ordering of maxima, construction of reference functions, investigation of the properties of the reference functions, and the proof of the theorem statement.
The ordering of the maxima is described above, so we will use the previously introduced notations.
Next, we describe the algorithm for constructing reference functions, determine their areas of values and definitions, and examine their structure and properties.
Note that the formulation of the theorem defines distribution functions that have structures of three kinds, which we will hereafter refer to as reference functions.
For each isolated global maximum $(\alpha_1,u_{1j})$, $1\le j<k_1$, of the integrand function we define a reference function in the area $u\in[0,+\infty)$:
$$G^{*}_{1j}(u)=\begin{cases}G_1(u_{1j}),& u\in[0,u_{1j}],\\[2pt] \min\{G_2(u_{1j}),\,G_1(u_{1,j+1})\},& u\in(u_{1j},+\infty).\end{cases}$$
Then, we define the area of influence of each maximum by the relation
$$U_{1,j}=\begin{cases}(u_{1,j-1},u_{1j}],& \text{if } G_2^{-1}\big(G_1(u_{1j})\big)\le u_{1,j-1},\\[2pt] \big(G_2^{-1}\big(G_1(u_{1j})\big),\,u_{1j}\big],& \text{if } G_2^{-1}\big(G_1(u_{1j})\big)>u_{1,j-1},\end{cases}\qquad u_{10}=0,$$
and in the area
$$U_1=\bigcup_{1\le j\le k_1}U_{1,j}$$
let us construct a nondecreasing, stepped, left-continuous function
$$G^{*}(u)=\sum_{j=1}^{k_1}\big(G^{*}_{1j}(u)-G^{*}_{1j}(0)\big)$$
satisfying the conditions
$$G^{*}_{1j}(u_{1j})=G_1(u_{1j}),\qquad \lim_{u\to u_{1j}+0}G^{*}_{1j}(u)=\min\{G_2(u_{1j}),\,G_1(u_{1,j+1})\},\quad j=1,2,\dots,k_1.$$
It is easy to see that function (7) coincides with the reference functions in the area (6).
Recall that if the global maximum $\alpha_1$ of the integrand function $C(u)$ is reached on some segment $[u_1,u_2]$, then the maxima at the boundary points take part in the ordering process. If for these maxima the inequality
$$u_1<G_2^{-1}\big(G_1(u_2)\big)$$
is satisfied, then we add to the set $U_1$ the segment $\big[u_1,\,G_2^{-1}\big(G_1(u_2)\big)\big]$ and define the reference function on this set as a stepped function having a finite number of jumps and belonging to the set of admissible functions.
Let us prove one important inequality that holds for the reference function. To this end, we introduce the function
$$\tilde A(u)=\begin{cases}\alpha_1,& u\in U_1,\\ A(u),& u\in U(\alpha_1)\setminus U_1.\end{cases}$$
For any admissible distribution $G(u)\in\Omega$, by virtue of the properties of the reference function $G^{*}(u)$ and the majorant function $\tilde A(u)$, the following inequalities are true:
$$\int_{U_1}A(u)\,dG(u)\le\int_{U_1}\tilde A(u)\,dG(u)\le\int_{U_1}\tilde A(u)\,dG^{*}(u)=\int_{U_1}A(u)\,dG^{*}(u).$$
In addition to this property of the reference functions, we note another useful inequality relating the reference functions and an arbitrary function $G\in\Omega$. Denote $\gamma_{1j}=G^{*}_{1j}(0)$, $\gamma_{2j}=G^{*}_{1j}(+\infty)$ and define the function
$$G_{1j}(u)=\begin{cases}\gamma_{1j},& u\in[0,G^{-1}(\gamma_{1j})],\\ G(u),& u\in(G^{-1}(\gamma_{1j}),G^{-1}(\gamma_{2j})],\\ \gamma_{2j},& u\in(G^{-1}(\gamma_{2j}),+\infty).\end{cases}$$
From the definition of the reference functions and of the functions $G_{1j}(u)$, the inequalities
$$L(G^{*}_{1j})\ge L(G_{1j})$$
follow.
If $U_1=U(\alpha_1)$, the theorem is proved, since
$$L(G)=\sum_{j=1}^{k_1}L(G_{1j})\le\sum_{j=1}^{k_1}L(G^{*}_{1j})=L(G^{*}_1),$$
and among the optimal distributions there is a stepped function
$$G^{*}_1(u)=\sum_{j=1}^{k_1}\big(G^{*}_{1j}(u)-G^{*}_{1j}(0)\big)$$
with jumps at the points of the maxima.
If $U_1\ne U(\alpha_1)$, we set $U_1'=U(\alpha_1)\setminus U_1$ and pose the problem of determining the reference function in this area. Note that this area is a finite union of intervals, in each of which the integrand function equals $\alpha_1$ at least at one boundary point. Let us describe the process of determining the reference function for one of them, denoting this interval $(u_1,u_2)$.
Then, two variants are possible:
There are no maxima within the interval;
Within the interval there are maxima, with a global maximum of level $s$, $2\le s\le n$; i.e., there are interior points $u_{sj}$, $1\le l_1\le j\le l_2\le k_s$, for which the equality $C(u_{sj})=\alpha_s$ holds.
In the first case, we define the reference function in the area $[u_1,u_2]$ by the equality
$$G^{*}_{10}(u)=\begin{cases}G_2(u_1),& u\in[0,u_1],\\ G_2(u),& u\in(u_1,G_2^{-1}(\gamma^{*})],\\ \gamma^{*},& u\in(G_2^{-1}(\gamma^{*}),G_1^{-1}(\gamma^{*})],\\ G_1(u),& u\in(G_1^{-1}(\gamma^{*}),u_2],\\ G_1(u_2),& u\in(u_2,+\infty),\end{cases}$$
where $\gamma^{*}\in[G_2(u_1),G_1(u_2)]$ and
$$\max_{\gamma\in[G_2(u_1),G_1(u_2)]}\!\left(\int_{(u_1,G_2^{-1}(\gamma)]}\!C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),u_2]}\!C(u)\,dG_1(u)\right)=\int_{(u_1,G_2^{-1}(\gamma^{*})]}\!C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma^{*}),u_2]}\!C(u)\,dG_1(u)=\int_{(u_1,u_2]}C(u)\,dG^{*}_{10}(u).$$
If there are no maxima inside the area $(u_1,u_2)$, then there exists a point $u_0\in(u_1,u_2)$ for which $G_1(u_0)\le G_2(u_0)$, the integrand function does not increase in the area $(u_1,u_0]$, and it does not decrease in the area $(u_0,u_2]$. Let us introduce two functions
$$G_{10}(u)=\begin{cases}\gamma_1,& u\in[0,G^{-1}(\gamma_1)],\\ G(u),& u\in(G^{-1}(\gamma_1),G^{-1}(\gamma_2)],\\ \gamma_2,& u\in(G^{-1}(\gamma_2),+\infty),\end{cases}$$
$$\Psi^{*}_{10}(u)=\begin{cases}\gamma_1,& u\in[0,u_1],\\ G_2(u),& u\in(u_1,G_2^{-1}(G(u_0))],\\ G(u_0),& u\in(G_2^{-1}(G(u_0)),G_1^{-1}(G(u_0))],\\ G_1(u),& u\in(G_1^{-1}(G(u_0)),u_2],\\ \gamma_2,& u\in(u_2,+\infty).\end{cases}$$
From the definition of these functions, for $G\in\Omega$ the inequalities
$$\Psi^{*}_{10}(u)\ge G_{10}(u),\quad u\in[0,u_0],\qquad \Psi^{*}_{10}(u)\le G_{10}(u),\quad u\in(u_0,+\infty)$$
follow. Taking into account the properties of the integrand function, it is easy to obtain, by integration by parts, the estimate
$$\int_0^{+\infty}C(u)\,d\big(\Psi^{*}_{10}(u)-G_{10}(u)\big)\ge 0.$$
From this estimate and the choice of the parameter $\gamma^{*}$ it follows that
$$\int_{G^{-1}(\gamma_1)}^{G^{-1}(\gamma_2)}C(u)\,dG(u)=\int_0^{+\infty}C(u)\,dG_{10}(u)\le\int_0^{+\infty}C(u)\,d\Psi^{*}_{10}(u)\le\int_0^{+\infty}C(u)\,dG^{*}_{10}(u).$$
This completes the construction of the reference functions for the interval in question.
Next, let us consider the case where there are maxima within the interval, with a global maximum of level $s$, $2\le s\le n$; i.e., there are interior points $u_{sj}$, $1\le l_1\le j\le l_2\le k_s$, for which the equality $C(u_{sj})=\alpha_s$ holds. Formally, the maxima must be reordered for the new area; we retain the previous notations to avoid unnecessary cumbersomeness.
Let us define the areas $(u_1,u_{s1}]$ and $(u_{sk_s},u_2]$.
In the first area, the integrand function does not increase and the inequalities
$$\alpha_1\ge C(u)\ge\alpha_s$$
are satisfied, and in the second area the integrand function does not decrease and the same inequalities are satisfied for it.
Note that if the inequalities are not fulfilled in one of the areas, that area is no longer considered.
If $\gamma_1=G_2(u_{s1})\ge G_1(u_{sk_s})=\gamma_2$, then we define the reference function on $[\gamma_2,\gamma_1]$ by the equality (12), where the parameter $\gamma^{*}$ is determined by the relation
$$\max_{\gamma\in[\gamma_2,\gamma_1]}\!\left(\int_{(u_1,G_2^{-1}(\gamma)]}\!C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),u_2]}\!C(u)\,dG_1(u)\right)=\int_{(u_1,G_2^{-1}(\gamma^{*})]}\!C(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma^{*}),u_2]}\!C(u)\,dG_1(u)=\int_{(u_1,u_2]}C(u)\,dG^{*}_{10}(u).$$
Next, we prove the inequality
$$\int_{G^{-1}(\gamma_1)}^{G^{-1}(\gamma_2)}C(u)\,dG(u)\le\int_0^{+\infty}C(u)\,dG^{*}_{10}(u)$$
for any admissible distribution. The proof of this inequality reduces to introducing the majorant function
$$\tilde C(u)=\begin{cases}\alpha_s,& u\in(u_{s1},u_{sk_s}],\\ C(u),& u\notin(u_{s1},u_{sk_s}],\end{cases}$$
and defining the functions $G_{10}(u)$, $\Psi^{*}_{10}(u)$ as before, except that the parameter $u_0$ can be any point of the area $(u_{s1},u_{sk_s})$.
Finally, if $\gamma_1=G_2(u_{s1})<G_1(u_{sk_s})=\gamma_2$, then we define two reference functions
$$G^{1*}_{10}(u)=\begin{cases}G_2(u_1),& u\in[0,u_1],\\ G_2(u),& u\in(u_1,u_{s1}],\\ \gamma_1,& u\in(u_{s1},+\infty),\end{cases}\qquad
G^{2*}_{10}(u)=\begin{cases}\gamma_2,& u\in[0,u_{sk_s}],\\ G_1(u),& u\in(u_{sk_s},u_2],\\ G_1(u_2),& u\in(u_2,+\infty),\end{cases}$$
and two functions associated with an admissible distribution function $G\in\Omega$,
$$G^{1}_{10}(u)=\begin{cases}G_2(u_1),& u\in[0,G^{-1}(G_2(u_1))],\\ G(u),& u\in(G^{-1}(G_2(u_1)),G^{-1}(\gamma_1)],\\ \gamma_1,& u\in(G^{-1}(\gamma_1),+\infty),\end{cases}\qquad
G^{2}_{10}(u)=\begin{cases}\gamma_2,& u\in[0,G^{-1}(\gamma_2)],\\ G(u),& u\in(G^{-1}(\gamma_2),G^{-1}(G_1(u_2))],\\ G_1(u_2),& u\in(G^{-1}(G_1(u_2)),+\infty).\end{cases}$$
Then, two inequalities are easily proved by integration by parts:
$$\int_{G^{-1}(G_2(u_1))}^{G^{-1}(\gamma_1)}C(u)\,dG(u)=\int_0^{+\infty}C(u)\,dG^{1}_{10}(u)\le\int_0^{+\infty}C(u)\,dG^{1*}_{10}(u),$$
$$\int_{G^{-1}(\gamma_2)}^{G^{-1}(G_1(u_2))}C(u)\,dG(u)=\int_0^{+\infty}C(u)\,dG^{2}_{10}(u)\le\int_0^{+\infty}C(u)\,dG^{2*}_{10}(u).$$
It is easy to see that the reference functions have not yet been defined in the area $u\in(u_{s1},u_{sk_s})$. Here, the above procedure is repeated, only for a new, narrower area and a smaller maximum. The procedure for constructing the reference functions terminates owing to the finiteness of the number of maxima of the integrand function.
Thus, a finite set of reference functions is constructed, whose ranges of values do not overlap and whose union coincides with $[0,1]$. Denote by $M$ the set of reference functions and define the function
$$G^{0}(u)=\sum_{G^{*}\in M}\big(G^{*}(u)-G^{*}(0)\big).$$
Since inequalities (11), (14), (16), (19), and (20) for the reference functions are satisfied for any function $G\in\Omega$, we obtain the inequality
$$L(G)\le L(G^{0}),$$
which proves the statement of the theorem. □

4. Discussion

Let us formulate useful corollaries of the proved theorem.
Given the statement of the lemma above, we give formulations for the fractionally linear functional (1), assuming the existence of the maximum $\max_{G\in\Omega}I(G)=I(G^{*})=C$ and the fulfillment of the conditions of the theorem on the integrand functions.
Corollary 1. 
[8,9]. If $\Omega=\{G:\ G(u_i)=\pi_i,\ 0\le i\le n\}$, $0=u_0<u_1<\dots<u_n<u_{n+1}=+\infty$, $0=\pi_0\le\pi_1\le\dots\le\pi_n\le\pi_{n+1}=1$, then
$$\max_{G\in\Omega}I(G)=\max_{x_k\in[u_k,u_{k+1}],\ 0\le k\le n}\frac{\sum_{k=0}^{n}A(x_k)(\pi_{k+1}-\pi_k)}{\sum_{k=0}^{n}B(x_k)(\pi_{k+1}-\pi_k)}.$$
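Corollary 1 turns the problem into a finite-dimensional maximization over the atoms $x_k$, which can be attacked by direct search. The sketch below is purely illustrative: the functions $A$, $B$, the nodes $u_k$, the levels $\pi_k$, and the truncation of the infinite last interval are all assumptions, not the paper's data.

```python
import itertools, math

# Hypothetical data for Corollary 1: a distribution fixed at levels pi_k on
# nodes u_k, with free atoms x_k in [u_k, u_{k+1}]; brute force over a grid.
A = lambda x: 2.0 + math.sin(x)
B = lambda x: 1.0 + x

u = [0.0, 1.0, 2.0, math.inf]        # u_0 < u_1 < u_2 < u_3 = +inf
pi = [0.0, 0.3, 0.8, 1.0]            # 0 = pi_0 <= ... <= pi_{n+1} = 1
dpi = [pi[k + 1] - pi[k] for k in range(3)]

def ratio(xs):                       # value of the fractional functional
    num = sum(A(x) * w for x, w in zip(xs, dpi))
    den = sum(B(x) * w for x, w in zip(xs, dpi))
    return num / den

grids = []
for k in range(3):
    lo, hi = u[k], min(u[k + 1], 4.0)   # truncate the infinite last interval
    grids.append([lo + (hi - lo) * j / 20 for j in range(21)])

best = max(itertools.product(*grids), key=ratio)
print(ratio(best))
```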
Corollary 2. 
[10]. If the function $C(u)=A(u)-C\,B(u)$ has one maximum on $[0,+\infty)$, then
$$\max_{G\in\Omega}I(G)=\max_{x\ge0}\frac{\int_0^{x}A(u)\,dG_1(u)+A(x)\big(G_2(x)-G_1(x)\big)+\int_{x}^{+\infty}A(u)\,dG_2(u)}{\int_0^{x}B(u)\,dG_1(u)+B(x)\big(G_2(x)-G_1(x)\big)+\int_{x}^{+\infty}B(u)\,dG_2(u)}.$$
Corollary 3. 
If the function $C(u)=A(u)-C\,B(u)$ has no maxima in $[0,+\infty)$, then
$$\max_{G\in\Omega}I(G)=\max_{\gamma\in[0,1]}\frac{\int_{[0,G_2^{-1}(\gamma)]}A(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),+\infty)}A(u)\,dG_1(u)}{\int_{[0,G_2^{-1}(\gamma)]}B(u)\,dG_2(u)+\int_{(G_1^{-1}(\gamma),+\infty)}B(u)\,dG_1(u)}.$$
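Corollary 3 likewise reduces to a one-parameter search over the level $\gamma$. In the following illustrative sketch the boundaries $G_1$, $G_2$ are assumed exponential (so their inverses are explicit), $A$ and $B$ are hypothetical integrands without interior maxima, and the integrals are approximated by a midpoint rule.

```python
import math

mu1, mu2 = 0.4, 3.0                       # rates of the assumed exponential bounds
g1 = lambda t: mu1 * math.exp(-mu1 * t)   # density of G_1
g2 = lambda t: mu2 * math.exp(-mu2 * t)   # density of G_2
inv1 = lambda g: -math.log(1 - g) / mu1   # G_1^{-1}(gamma)
inv2 = lambda g: -math.log(1 - g) / mu2   # G_2^{-1}(gamma)

A = lambda t: math.exp(-t)                # illustrative integrands, no interior maximum
B = lambda t: 1.0 + t

def quad(f, a, b, n=500):                 # midpoint rule on [a, b]
    if b <= a:
        return 0.0
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def I_gamma(g, T=40.0):
    """Functional on the strategy that follows G_2 up to level g, then G_1."""
    x2, x1 = inv2(g), inv1(g)
    num = quad(lambda t: A(t) * g2(t), 0.0, x2) + quad(lambda t: A(t) * g1(t), x1, T)
    den = quad(lambda t: B(t) * g2(t), 0.0, x2) + quad(lambda t: B(t) * g1(t), x1, T)
    return num / den

gammas = [j / 50 for j in range(1, 50)]
g_star = max(gammas, key=I_gamma)
print(round(g_star, 2), round(I_gamma(g_star), 4))
```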
The following example of a controlled mass service model in the "classical" statement, when the constraints are formulated in the form of the standard inequalities on probability measures, $\Omega=\{G:\ 0\le G(u)\le 1\}$, is well known [11], so we omit the detailed derivation of the basic relations.
Example 1. 
A mass service system.
We will study a mass service system at whose input a Poisson flow of demands with parameter $\lambda$ arrives. The number of service channels is one, and the number of waiting places is $K$, $1\le K<\infty$; in Kendall's notation, the system is $M|G|1|K$.
In contrast to the classical formulation, we will choose the distribution of the service time depending on the number of requests in the system at the moments when the service of the next request ends and at the moments when a request arrives in the free system. If at the given moment the number of demands in the system equals $i$, then we choose the service time distribution of the next demand equal to $G(i,t)=P\{\xi_i<t\}$, where $\xi_i$ is the random service time. Assigning a random service duration means introducing randomization into the decision-making process: at the moment when a decision has to be made (a Markov moment) and state $i$ is observed, a realization $\tau$ of the random variable $\xi_i$, distributed according to the law $G(i,t)=P\{\xi_i<t\}$, is drawn, and the service duration is set equal to the value $\tau$.
Let us introduce the cost characteristics, which determine the functional that characterizes the quality of functioning and control.
Let:
$c_0$ be the fee (income) per serviced claim;
$c_1$ the payment for one hour of channel operation;
$c_2$ the payment for one hour of free channel idle time;
$c_3$ the payment for the loss of one claim;
$c_4$ the payment for one hour spent by one claim in the queue.
In [11], an expression is given for the fractionally linear functional (1), the specific mathematical expectation of the accumulated income, for an arbitrary number of waiting places $K$, $1\le K<\infty$. We consider the solution of the above optimization problem for $K=1,2$ under the constraints
$$0\le G_1(i,t)\le G(i,t)\le G_2(i,t)\le 1,$$
where $G_1(i,t)$ and $G_2(i,t)$ are distribution functions belonging to the set $\Omega$.
With one waiting place ($K=1$), at the Markov moments (moments when a service ends and moments when a demand arrives at the free system) there are either zero demands or one demand in the system.
Hence, the semi-Markov process has two states, but decisions are made only in one of them, when there is one demand in the system, and the functional (1) depends on one distribution function $G(1,t)=G(t)$.
In [11], the dependence of this functional on the initial characteristics is given:
$$I(G)=\frac{\int_0^{+\infty}\Big(\lambda(c_0+c_1t)+c_2e^{-\lambda t}+\big(\lambda t+e^{-\lambda t}-1\big)(c_3\lambda+c_4)\Big)\,dG(t)}{\int_0^{+\infty}\big(\lambda t+e^{-\lambda t}\big)\,dG(t)}.\qquad(25)$$
Next, we will use the notation
$$A(t)=\lambda(c_0+c_1t)+c_2e^{-\lambda t}+\big(\lambda t+e^{-\lambda t}-1\big)(c_3\lambda+c_4),\qquad B(t)=\lambda t+e^{-\lambda t}.$$
Next, let us set the optimization problem: let there be two distribution functions
$$G_1(1,t),\quad G_2(1,t),\qquad G_1(1,t)\le G_2(1,t),\qquad G_i(1,0)=0,\ i=1,2,$$
and we need to find the maximum of the functional (25) over the set of distributions
$$\Omega=\big\{G:\ 0\le G_1(1,t)\le G(t)\le G_2(1,t)\le 1\big\}.$$
By the conditions of the theorem and the lemma proved above, it is necessary to investigate the function
$$C(t)=A(t)-C\,B(t)=\lambda(c_0+c_1t)+c_2e^{-\lambda t}+\big(\lambda t+e^{-\lambda t}-1\big)(c_3\lambda+c_4)-C\big(\lambda t+e^{-\lambda t}\big),\qquad(27)$$
where $\max_{G\in\Omega}I(G)=C$.
Let us investigate the function C ( t ) and prove that it satisfies the conditions of the above theorem and has one maximum.
Suppose that the inequality C > 0 is true for optimal demand service since the initial data should be such that, under optimal control, the operation of the mass service system should produce a positive effect.
It is important to pay attention to the sign of the initial constants. The coefficients c i ,   i = 1 , 2 , 3 , 4 are negative, since they are losses from the functioning of the system; the parameter c 0 is greater than zero, because it is a profit.
The function under study (27) and all its derivatives are continuous.
The elementary relations below,
$$C(0)=\lambda c_0+c_2-C,\qquad C(t)\to-\infty,\ t\to+\infty,$$
$$\frac{dC(t)}{dt}=\lambda\Big[c_1+c_3\lambda+c_4-C-\big(c_3\lambda+c_4+c_2-C\big)e^{-\lambda t}\Big],\qquad \frac{dC(0)}{dt}=\lambda(c_1-c_2),$$
$$\frac{d^{2}C(t)}{dt^{2}}=\big(c_2+c_3\lambda+c_4-C\big)\lambda^{2}e^{-\lambda t}\le 0,$$
prove that this function has a maximum at some point $0\le t_0<+\infty$: it increases in the area $[0,t_0)$ and decreases in the area $(t_0,+\infty)$. Thus, it is proved that the function $C(t)$ has a maximum at some finite point. Therefore, the maximum of the functional (25) is reached on the function
$$G^{*}(t)=\begin{cases}G_1(1,t),& 0\le t\le\tau_0,\\ G_2(1,t),& \tau_0<t\le+\infty,\end{cases}$$
where the parameter $\tau_0$ is defined as the maximum point in
$$\max_{G_1(x)\le G(x)\le G_2(x)}I(G)=\max_{0\le\tau<+\infty}\frac{\int_0^{\tau}A(t)\,dG_1(1,t)+A(\tau)\big(G_2(1,\tau)-G_1(1,\tau)\big)+\int_{\tau}^{+\infty}A(t)\,dG_2(1,t)}{\int_0^{\tau}B(t)\,dG_1(1,t)+B(\tau)\big(G_2(1,\tau)-G_1(1,\tau)\big)+\int_{\tau}^{+\infty}B(t)\,dG_2(1,t)}=\frac{\int_0^{\tau_0}A(t)\,dG_1(1,t)+A(\tau_0)\big(G_2(1,\tau_0)-G_1(1,\tau_0)\big)+\int_{\tau_0}^{+\infty}A(t)\,dG_2(1,t)}{\int_0^{\tau_0}B(t)\,dG_1(1,t)+B(\tau_0)\big(G_2(1,\tau_0)-G_1(1,\tau_0)\big)+\int_{\tau_0}^{+\infty}B(t)\,dG_2(1,t)},$$
and the functions $A(t)$ and $B(t)$ are defined by the equalities
$$A(t)=\lambda(c_0+c_1t)+c_2e^{-\lambda t}+\big(\lambda t+e^{-\lambda t}-1\big)(c_3\lambda+c_4),\qquad B(t)=\lambda t+e^{-\lambda t}.$$
Let us note the obvious conclusion: for $c_1\le c_2$ we have $G^{*}(t)=G_2(1,t)$, since then $dC(0)/dt=\lambda(c_1-c_2)\le0$ and the maximum of $C(t)$ is attained at $t_0=0$.
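As an illustration of Example 1, the switch point $\tau_0$ can be found by direct search. The parameter values $\lambda$, $c_i$ and the exponential boundary distributions $G_1(1,t)$, $G_2(1,t)$ below are hypothetical, $A(t)$ and $B(t)$ are taken in the form given above, and the integrals are approximated by a midpoint rule.

```python
import math

# Hypothetical data: lambda and the cost coefficients are illustrative only.
lam = 1.0
c0, c1, c2, c3, c4 = 5.0, -0.1, -0.5, -1.0, -0.5   # c_0 > 0, costs negative

A = lambda t: lam * (c0 + c1 * t) + c2 * math.exp(-lam * t) \
    + (lam * t + math.exp(-lam * t) - 1) * (c3 * lam + c4)
B = lambda t: lam * t + math.exp(-lam * t)

# assumed exponential admissible boundaries, G_1 <= G_2
mu1, mu2 = 0.4, 3.0
G1 = lambda t: 1 - math.exp(-mu1 * t)
g1 = lambda t: mu1 * math.exp(-mu1 * t)
G2 = lambda t: 1 - math.exp(-mu2 * t)
g2 = lambda t: mu2 * math.exp(-mu2 * t)

def I_switch(tau, T=30.0, n=1500):
    """Functional (25) on the strategy: follow G_1 up to tau, then G_2."""
    h = T / n
    num = den = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        w = (g1(t) if t <= tau else g2(t)) * h
        num += A(t) * w
        den += B(t) * w
    num += A(tau) * (G2(tau) - G1(tau))   # atom at the switch point
    den += B(tau) * (G2(tau) - G1(tau))
    return num / den

taus = [j * 0.1 for j in range(1, 150)]
tau0 = max(taus, key=I_switch)
print(round(tau0, 1), round(I_switch(tau0), 4))
```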
Example 2. 
Reliability model.
Let a system be given in which the time of failure-free operation $\xi$ is distributed according to the law $F(x)=P\{\xi<x\}$, $\bar F(x)=P\{\xi\ge x\}$. Suppose a failure that occurs during the functioning of the system is detected (manifests itself) instantly.
At the initial moment $t_0=0$, system operation begins and a scheduled preventive update (preventive maintenance) of the system is appointed after a time $\eta\ge0$ distributed according to the law $G(x)=P\{\eta<x\}$, $G(0)=0$. Appointing the scheduled preventive update of the system at a random time means introducing randomization into the decision-making process: at the moment when the decision should be made, a realization $\tau$ of the random variable $\eta$ ($\eta=\tau$), distributed according to the law $G(x)$, is drawn, and the scheduled preventive update of the system is performed after the time $\tau$.
If the system has not failed by the appointed time $\eta$ (the event $\{\eta<\xi\}$ has occurred), then at the time $\eta$ a scheduled preventive update of the system is started, which by assumption completely renews the system. Let us denote the duration of this scheduled preventive (prophylactic) update by $\gamma_1$, with distribution function $F_1(x)=P\{\gamma_1<x\}$, $\bar F_1(x)=P\{\gamma_1\ge x\}$.
If the system has failed by the appointed time $\eta$, an unscheduled emergency update of the system begins at the moment of failure $\xi$. We denote the duration of this recovery operation by $\gamma_2$ and its distribution law by $F_2(x)=P\{\gamma_2<x\}$, $\bar F_2(x)=P\{\gamma_2\ge x\}$.
After possible remedial work, when the system is assumed to be completely renewed, the next precautionary remedial work is rescheduled and the entire maintenance process is repeated all over again.
Let us introduce a random process $\xi(t)$ characterizing the state of the system at an arbitrary time $t$ by setting:
$\xi(t)=e_0$ if at time $t$ the system is working properly;
$\xi(t)=e_1$ if at time $t$ a scheduled preventive update of the system is in progress;
$\xi(t)=e_2$ if at time $t$ an unscheduled emergency update of the system is performed.
The steady-state availability factor $K_{\text{г}}$ is defined as the probability of finding the system in an operable state at an infinitely distant moment of time. In [12], this characteristic, in the notations adopted above, is defined by the equality
$$K_{\text{г}}(G)=\frac{\int_0^{+\infty}\Big(\int_0^{u}\bar F(y)\,dy\Big)\,dG(u)}{\int_0^{+\infty}\Big(\int_0^{u}\bar F(y)\,dy+M\gamma_1\bar F(u)+M\gamma_2F(u)\Big)\,dG(u)}.$$
The mathematical problem is to determine the distribution $G^{*}\in\Omega$ for which
$$\max_{G\in\Omega}K_{\text{г}}(G)=K_{\text{г}}(G^{*})=C.$$
Let us investigate the function
$$C(u)=(1-C)\int_0^{u}\bar F(y)\,dy-C\big(M\gamma_1\bar F(u)+M\gamma_2F(u)\big)$$
and obtain for its derivative the equality
$$\frac{dC(u)}{du}=(1-C)\bar F(u)-C\big(M\gamma_2-M\gamma_1\big)f(u)=\bar F(u)\Big[(1-C)-C\big(M\gamma_2-M\gamma_1\big)\lambda(u)\Big],$$
where $\lambda(u)=\dfrac{f(u)}{\bar F(u)}$, $f(u)=\dfrac{dF(u)}{du}$.
Under the natural conditions $0<C\le1$, $M\gamma_2\ge M\gamma_1$ and the assumption that the system is aging, that is, the failure rate $\lambda(u)$ is increasing, we conclude that the derivative changes sign from plus to minus at most once.
Then
$$\max_{G\in\Omega}K_{\text{г}}(G)=\max_{\tau\ge0}\frac{\int_0^{\tau}\Big(\int_0^{u}\bar F(y)\,dy\Big)dG_1(u)+\Big(\int_0^{\tau}\bar F(y)\,dy\Big)\big(G_2(\tau)-G_1(\tau)\big)+\int_{\tau}^{+\infty}\Big(\int_0^{u}\bar F(y)\,dy\Big)dG_2(u)}{\int_0^{\tau}B(u)\,dG_1(u)+B(\tau)\big(G_2(\tau)-G_1(\tau)\big)+\int_{\tau}^{+\infty}B(u)\,dG_2(u)}=\frac{\int_0^{\tau_0}\Big(\int_0^{u}\bar F(y)\,dy\Big)dG_1(u)+\Big(\int_0^{\tau_0}\bar F(y)\,dy\Big)\big(G_2(\tau_0)-G_1(\tau_0)\big)+\int_{\tau_0}^{+\infty}\Big(\int_0^{u}\bar F(y)\,dy\Big)dG_2(u)}{\int_0^{\tau_0}B(u)\,dG_1(u)+B(\tau_0)\big(G_2(\tau_0)-G_1(\tau_0)\big)+\int_{\tau_0}^{+\infty}B(u)\,dG_2(u)},$$
where $B(u)=\int_0^{u}\bar F(y)\,dy+M\gamma_1\bar F(u)+M\gamma_2F(u)$.
Thus, the distribution
$$G^{*}(u)=\begin{cases}G_1(u),& 0\le u\le\tau_0,\\ G_2(u),& \tau_0<u\le+\infty,\end{cases}$$
at which the maximum of the investigated functional is reached, is determined.
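A numerical illustration of Example 2 under assumed data: the lifetime is Weibull with increasing failure rate (an aging system), the mean durations $M\gamma_1<M\gamma_2$ are hypothetical, and the boundaries are assumed wide enough that a deterministic preventive-maintenance age $\tau$ (a unit-step $G$) is admissible.

```python
import math

k = 2.0                                    # Weibull shape > 1 => increasing failure rate
Fbar = lambda x: math.exp(-x ** k)         # survival function of the lifetime
Mg1, Mg2 = 0.1, 1.0                        # assumed mean durations: preventive < emergency

def availability(tau, n=1000):
    """Steady-state availability for a deterministic preventive age tau."""
    h = tau / n
    work = sum(Fbar((i + 0.5) * h) for i in range(n)) * h   # int_0^tau Fbar(y) dy
    return work / (work + Mg1 * Fbar(tau) + Mg2 * (1.0 - Fbar(tau)))

taus = [j * 0.01 for j in range(5, 300)]
tau0 = max(taus, key=availability)
print(round(tau0, 2), round(availability(tau0), 4))
```

The interior maximizer reflects the usual trade-off: replacing too early wastes preventive repairs, while replacing too late risks the longer emergency repair.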

5. Concluding Remarks

Note that the conditions (*) given in Section 3 are quite natural and simple. The authors believe that these conditions are almost always met. However, if the conditions are not satisfied, further investigation may be necessary. For example, if the condition that the larger value is taken at the discontinuity points is not satisfied, i.e., a maximum is not attained although an upper bound exists, then the extremum of the functional may not exist, while the supremum does. The investigation of this case is beyond the scope of this paper.
The theorem proved has an obvious practical application, since it solves the problem in a formulation close to the real one.
In previous studies, when considering controlled models of mass maintenance, reliability, and safety systems, and when choosing the duration and the moment of intervention in the system, it was assumed possible to intervene in the system after a time equal to 0, or with a service duration equal to 0 (this situation corresponds to optimal random process control).
In this paper, additional constraints are introduced to exclude these cases that are not realized in practice, i.e., a situation that more accurately describes reality is considered.
The solution obtained, as mentioned above, allows us to solve the problem numerically using software tools and methods.

Author Contributions

Conceptualization, V.K. and A.B.; methodology, V.K.; software, O.Z.; validation, V.K., A.B. and O.Z.; formal analysis, A.B.; investigation, O.Z.; data curation, O.Z.; writing—original draft preparation, V.K.; writing—review and editing, A.B.; visualization, O.Z.; supervision, O.Z.; project administration, V.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Frenkel, I.; Lisnianski, A. Non-homogeneous Markov reward model for aging multistate system under minimal repair. Int. J. Perform. Eng. 2009, 5, 303–312. [Google Scholar]
  2. Grabski, F. Semi-Markov Processes: Applications in System Reliability and Maintenance; Elsevier: Amsterdam, The Netherlands, 2014; pp. 1–255. [Google Scholar]
  3. Kaalen, S.; Nyberg, M. Branching Transitions for Semi-Markov Processes with Application to Safety-Critical Systems. In Proceedings of the 7th International Symposium, IMBSA 2020, Lisbon, Portugal, 14–16 September 2020. [Google Scholar] [CrossRef]
  4. Mohammed, A.; Filar, J. Perturbation Theory for Semi-Markov Control Problems. 1992, Volume 1, pp. 489–493. Available online: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=778001a996a1b6af6b1869f0f015e06267354e74 (accessed on 25 June 2023).
  5. Warr, R.; Collins, D. A comprehensive method for solving finite-state semi-Markov processes. Int. J. Simul. Process Model. 2015, 10, 89–99. [Google Scholar] [CrossRef]
  6. Kashtanov, V.A. The Structure of the Functional of Accumulation Defined on a Trajectory of Semi-Markov Process with a Finite Set of States. Theory Probab. Its Appl. 2016, 60, 281–294. [Google Scholar] [CrossRef]
  7. Kashtanov, V.A. Discrete Distributions in Control Problems. In Probabilistic Methods in Discrete Mathematics: Proceedings of the Fourth International Petrozavodsk Conference, Petrozavodsk, Russia, 3–7 June 1996; De Gruyter: Berlin, Germany; Boston, MA, USA, 1997; pp. 267–274. [Google Scholar] [CrossRef]
  8. Barzilovich, Y.Y.; Kashtanov, V.A.; Kovalenko, I.N. On minimax criteria in reliability problems. Proc. Acad. Sci. USSR Tech. Cybern. 1971, 3, 87–98. [Google Scholar]
  9. Barzilovich, Y.Y.; Kashtanov, V.A. Organization of Service with Limited Information about System Reliability; Sovetskoe Radio: Moscow, Russia, 1975; 135p. Available online: https://lib-bkm.ru/13141 (accessed on 25 June 2023).
  10. Kashtanov, V.A.; Zaitseva, O.B.; Efremov, A.A. Controlled Semi-Markov Processes with Constraints on Control Strategies and Construction of Optimal Strategies in Reliability and Safety Models. Math. Notes 2021, 109, 585–592. [Google Scholar] [CrossRef]
  11. Kashtanov, V.A.; Zaitseva, O.B. Research of Operations (Linear Programming and Stochastic Models); Textbook Course; INFRA-M: Moscow, Russia, 2016; 256p. [Google Scholar]
  12. Kashtanov, V.A.; Medvedev, A.I. Reliability Theory of Complex Systems; Fizmatlit: Moscow, Russia, 2010; 608p. [Google Scholar]