2. The Mathematical Model and Stability
In this section, we consider a mathematical model with the four following main compartments:
S is the number of people susceptible to infection, i.e., those not yet infected.
is the number of susceptible people who are temporarily controlled; owing to the limited effect of the control, they cannot move to the removed class.
I is the number of infected people who are capable of spreading the epidemic to those in the susceptible and temporarily controlled categories.
R is the number of removed people from the epidemic.
In our modeling approach, we describe the dynamics of the variables
S,
,
I and
R at time
t by the following differential system
with initial conditions
,
,
and
, and where
, with
as the total population size, gives the number of newborns at time
t;
is the recruitment rate of susceptibles to the controlled class, with
defining the control parameter as a constant between 0 and 1 and “
a” modeling the reduced chance of a susceptible individual being controlled;
with
the infection transmission rate,
the natural death rate,
the recruitment rate of controlled people to the infected class even in the presence of
, and “
b” modeling the reduced chance of a temporarily controlled individual becoming infected; and
is the recovery rate. We note that the population size is constant because
. Hence,
, and then,
.
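Because the displayed system carries the exact terms, the sketch below is only meant to make the compartment flows concrete: it assumes a mass-action formulation consistent with the rates described above (newborn inflow mu*N, control recruitment a*u*S out of the susceptible class, a reduced infection factor b for controlled individuals), and every parameter value is an illustrative assumption rather than a quotation of Equation (1).

```python
# Hedged sketch of an assumed SCIR-type system; the temporarily controlled
# class is written `Ct` here (its symbol belongs to the paper's notation).
# All parameter values are purely illustrative.

def rhs(S, Ct, I, R, beta=0.005, mu=0.02, gamma=0.1, a=0.3, b=0.2, u=0.5, N=100.0):
    Lam = mu * N                      # newborn inflow; keeps the population size constant
    dS = Lam - beta * S * I - a * u * S - mu * S
    dCt = a * u * S - b * beta * Ct * I - mu * Ct
    dI = beta * S * I + b * beta * Ct * I - (mu + gamma) * I
    dR = gamma * I - mu * R
    return dS, dCt, dI, dR

def simulate(S0=90.0, Ct0=0.0, I0=10.0, R0=0.0, t_end=50.0, h=0.01):
    # Plain forward-Euler integration, enough to check qualitative behavior.
    S, Ct, I, R = S0, Ct0, I0, R0
    for _ in range(int(t_end / h)):
        dS, dCt, dI, dR = rhs(S, Ct, I, R)
        S, Ct, I, R = S + h * dS, Ct + h * dCt, I + h * dI, R + h * dR
    return S, Ct, I, R
```

Summing the four assumed equations gives a total that relaxes toward N, so a population starting at N stays at N, matching the constancy noted above.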
For the sake of readability, hereafter, we use S, , I and R as notations of the time functions , , and .
Recalling that
is the basic reproduction number of the standard SIR model (see [
25], where it is concluded that the disease-free equilibrium
is globally asymptotically stable if
, and that there exists a unique, globally asymptotically stable endemic equilibrium
if
).
Since the first three equations are independent of the last equation, we only study the stability of the following differential system
A disease-free equilibrium in our case can be defined as where and are obtained based on the assumptions and when there is no infection.
Explicitly, we have when , which gives . In addition, we have when , which gives .
If , as a consequence of the case when , we define the basic reproduction number for our case by , which is the average number of new infections produced by one infected individual over its life cycle when the population is at .
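As a numerical illustration only, under an assumed mass-action formulation (hypothetical terms and illustrative values, not the paper's exact expressions), the disease-free equilibrium follows from setting I = 0 in the stationarity equations, and the reproduction number is the new-infection rate at that equilibrium divided by the exit rate from I:

```python
def dfe(Lam=2.0, mu=0.02, a=0.3, u=0.5):
    # Assumed disease-free equilibrium: solve Lam - a*u*S - mu*S = 0 and
    # a*u*S - mu*C = 0 with I = 0 (hypothetical model, illustrative values).
    S0 = Lam / (a * u + mu)
    C0 = a * u * S0 / mu
    return S0, C0

def reproduction_number(beta=0.002, gamma=0.1, Lam=2.0, mu=0.02, a=0.3, u=0.5, b=0.2):
    # New infections at rate beta*(S0 + b*C0), over a mean infected
    # lifetime of 1/(mu + gamma).
    S0, C0 = dfe(Lam, mu, a, u)
    return beta * (S0 + b * C0) / (mu + gamma)
```

For b < 1 this quantity decreases as the control u grows, which is the qualitative role the control is meant to play.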
Since
I is the only infected compartment, then
. Thus, we have
We now determine the components of the endemic equilibrium where and are obtained based on the assumptions and when there is an infection.
Explicitly, we have when , which gives . In addition, we have when , which gives .
On the other hand, we have
when
, which gives
Thus, we find that is the root of the function where and are constants.
In the following three theorems, we state and prove stability results on free and endemic equilibria.
Theorem 1. always exists and is locally asymptotically stable if (respectively, is unstable if ).
Proof. The existence of is trivial.
For the stability of
, we define the Jacobian matrix associated with the system in Equation (
2) by
At
, (
4) becomes
whose eigenvalues are
which imply the local asymptotic stability of
when
, and its instability when
. □
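This eigenvalue argument can be checked numerically. The Jacobian below corresponds to an assumed three-dimensional reduced system (S, the controlled class, and I) with hypothetical mass-action terms and illustrative parameters; at the disease-free equilibrium it is block-triangular, so one eigenvalue carries the sign of the threshold while the other two are negative:

```python
import numpy as np

def jacobian_dfe(beta, mu=0.02, gamma=0.1, Lam=2.0, a=0.3, u=0.5, b=0.2):
    # Assumed reduced system (S, C, I); C denotes the controlled class here.
    S0 = Lam / (a * u + mu)
    C0 = a * u * S0 / mu
    return np.array([
        [-(a * u + mu), 0.0, -beta * S0],
        [a * u, -mu, -b * beta * C0],
        [0.0, 0.0, beta * (S0 + b * C0) - (mu + gamma)],
    ])

def locally_stable(beta, **kw):
    # The equilibrium is locally stable iff every eigenvalue has a
    # negative real part.
    return max(ev.real for ev in np.linalg.eigvals(jacobian_dfe(beta, **kw))) < 0.0
```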
Theorem 2. The differential system in Equation (2) admits as its unique positive equilibrium, which is asymptotically stable whenever it exists, if and only if . Proof. For the sufficiency of the existence and uniqueness of
, we have
and, since
if
, then
has two real roots, one positive and the other negative. For the necessity, let us assume that
and prove that
has no positive roots. In this case, the first fraction in Equation (
3) verifies
Thus, we have , and, since , is increasing and , we conclude that the roots are non-positive.
For the stability of
, at
, Equation (
4) is defined as
whose characteristic equation is
and where
Finally, based on the Routh–Hurwitz Criterion, we deduce the local asymptotic stability of . □
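To make the Routh–Hurwitz step concrete: assuming the reduced system is three-dimensional (the R equation decouples), the characteristic equation at the endemic equilibrium is a cubic in the eigenvalue, and the criterion reduces to three sign conditions on the coefficients displayed above. A generic checker, cross-validated against the roots themselves:

```python
import numpy as np

def routh_hurwitz_cubic(a2, a1, a0):
    # All roots of x^3 + a2*x^2 + a1*x + a0 have negative real parts
    # iff a2 > 0, a0 > 0 and a2*a1 > a0 (Routh-Hurwitz, degree 3).
    return a2 > 0 and a0 > 0 and a2 * a1 > a0

def stable_by_roots(a2, a1, a0):
    # Cross-check by computing the roots of the cubic directly.
    return max(r.real for r in np.roots([1.0, a2, a1, a0])) < 0.0
```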
Theorem 3. If , then is globally asymptotically stable. If , then is globally asymptotically stable.
Proof. We suppose that
and we prove that
is globally asymptotically stable. Let us define the Lyapunov function by
Its derivative is then defined by
Since
and
, then this derivative becomes
Now, we have and , due to the fact that the arithmetic mean is greater than or equal to the geometric mean, with the equalities holding if and . Thus, , which implies the global asymptotic stability of based on Lyapunov–LaSalle’s invariance principle.
Similarly, we study the global asymptotic stability of
by considering the following Lyapunov function
The derivative is then defined as
Since
,
and
, this derivative becomes
which is the sought final result, proving that
is globally asymptotically stable. □
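These global-stability statements can also be probed numerically. The sketch below integrates an assumed mass-action formulation of the model (hypothetical terms and illustrative parameters, not the paper's exact system) from an initial state far from equilibrium and checks that, when the sketch's reproduction number is below one, the trajectory approaches the disease-free state:

```python
def run(beta=0.002, mu=0.02, gamma=0.1, a=0.3, b=0.2, u=0.5, N=100.0,
        S=90.0, C=0.0, I=10.0, R=0.0, t_end=600.0, h=0.05):
    # Assumed SCIR-type right-hand side (hypothetical terms); beta = 0.002
    # puts the sketch's reproduction number below one.
    Lam = mu * N
    for _ in range(int(t_end / h)):
        dS = Lam - beta * S * I - a * u * S - mu * S
        dC = a * u * S - b * beta * C * I - mu * C
        dI = beta * S * I + b * beta * C * I - (mu + gamma) * I
        dR = gamma * I - mu * R
        S, C, I, R = S + h * dS, C + h * dC, I + h * dI, R + h * dR
    return S, C, I, R
```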
3. Free Horizon Isoperimetric Optimal Control Approach
Now, we consider the mathematical model in Equation (
1) with
as a control function of time
t.
Motivated by the desire to find the optimal time needed to reduce the number of infected people as much as possible while minimizing the value of the control
over a free (non-fixed) horizon
, our objective is to seek a couple
such that
where
J is the functional defined by
and where the control space
U is defined by the set
where
and
represent constant severity weights associated with the functions
I and
, respectively. Alkama et al. treated three cases of the form of the free horizon
in the final gain function of their objective function when applying a free final time optimal control approach to a cancer model [
9]. Here, we suppose that
takes the quadratic form as formulated in Equation (
6) to obtain a direct formula which characterizes
. In fact, if
is taken linear or the final gain function is zero,
would just be approximated numerically due to the nature of necessary conditions in these two cases (see [
9] for explanation).
Since managers of anti-epidemic resources cannot reliably predict whether their control strategy will reach the entire susceptible population over a fixed horizon
T, we treat here an example where the number of targeted people in the susceptible class equals only a constant
for
months. Hence, we try to find
under the definition of the following isoperimetric restriction
In [
13], the authors defined an isoperimetric constraint on the control variable only, to model the total tolerable dosage of a therapy over the treatment period. In their conference talks [
26,
27], Kornienko et al. and De Pinho et al. introduced state constraints in an optimal control problem subject to an SEIR (Susceptible–Exposed–Infected–Removed) differential system to model the situation of a limited supply of vaccine, based on work in [
14] and where the isoperimetric constraint is defined on the product of the control and state variables.
In our case, to take into account the constraint in Equation (
7) for the resolution of the optimal control problem in Equation (
5), we consider a new variable
Z defined as
Then, we have
. Using the notations of the state variables from the previous section, keeping
as a notation of
and
Z in place of
, we study the differential system defined as follows
instead of the model in Equation (
1). We also note that, when the minimization problem in Equation (
5) is under the constraint in Equation (
5), the application of Pontryagin’s Maximum Principle would not be directly appropriate, but the new variable
Z has the advantage of converting Equations (
5)–(
7) into a classical optimal control problem under a single restriction, namely the system in Equation (
9) only [
28]. If we followed most optimal control approaches in the literature, the objective function in Equation (
6) would be defined over a fixed time interval. However,
is free here and, in seeking the optimal duration needed to control an epidemic, it would be advantageous to managers of medical or health resources to control the epidemic before reaching the fixed time
T, at a lower cost. For this purpose, we need to assume that
to guarantee the sufficient condition for an optimal
in the case of a free horizon. Indeed,
exists for the minimization problem in Equation (
5) when Equation (
6) is defined over
T, based on the sufficient conditions stated in detail in Theorem 4.1, pp. 68–69 of [
29], which can easily be verified for many examples such as ours; this implies, in our case, that the existence of an optimal control
and associated optimal trajectories
and
comes directly from the convexity of the integrand term in Equation (
6) with respect to the control
and the Lipschitz properties of the state system with respect to the state variables
and
Z. It therefore exists for any time in the interval
, including
. As regards the necessary conditions, we state and prove the following theorem.
Theorem 4. If there exist an optimal control and an optimal horizon which minimize Equation (6) along with the optimal solutions , , and associated with the differential system in Equation (9), then there exist adjoint variables , , as notations of and , which satisfy the following adjoint differential system with the transversality conditions , and a constant which should be determined. Furthermore, the sought optimal control is characterized by while the sought optimal horizon is characterized by where defines the Hamiltonian function as the sum of the integrand term of Equation (6) and the term at . Moreover, is positive only when is negative.
Proof. Let
H be a notation of the Hamiltonian function
at any time
t. Then, we have
Using Pontryagin’s maximum principle [
30], we have
while the transversality conditions are defined as minus the derivative of the final gain function with respect to the state variables
S,
,
I and
R. Since the final gain function in Equation (
6) does not contain any term of these variables, then
,
and
is unknown, but it is necessarily a constant since
. The solution of this problem is treated in the next section.
The optimality condition at
implies that
. Then, after setting
, we have
Taking into account the bounds of the control, we obtain,
Now, let us prove the necessary conditions on
. As
reaches its minimum at
and
, we have
with the consideration of the final gain function
, which we deduce is defined in Equation (
6) by
, while setting
and
, we obtain
Finally, the positivity of
under the negativity of
is trivial; however, this is not a condition that must necessarily hold for
, since the Hamiltonian could change sign at any time along the interval of study. □
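Numerically, a characterization of the form of Equation (11) is applied by projecting the stationary point of the Hamiltonian onto the admissible bounds of U. Since the exact stationary-value expression belongs to the displayed formula, the sketch below takes it as a generic argument `u_stat` (a hypothetical name):

```python
def project_control(u_stat, u_min=0.0, u_max=1.0):
    # Clamp the unconstrained stationary point of the Hamiltonian onto
    # the admissible interval [u_min, u_max]: interior values are kept,
    # values outside the bounds saturate at the nearest bound.
    return min(u_max, max(u_min, u_stat))
```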
4. Numerical Simulations
Based on the formulation of Equation (
8), we have
and
. Since the optimal control problem consists in solving the two-point boundary value problem defined by the two systems in Equations (
2) and (
10), the differential system in Equation (
2) will be solved numerically forward in time, owing to its initial conditions (the value of Z(0) does not change), while the differential system in Equation (
10) will be solved numerically backward in time, owing to its final (transversality) conditions, with the condition that
varies depending on the value of
k. Based on the numerical approach in [
13], we also propose here to define a real function
g such that
and where
is the value of
Z at
for various values of
k and
is the value fixed by
C. This leads to combining the Forward–Backward Sweep Method (FBSM), which solves the two-point boundary value problem in Equations (
2)–(
10), with the secant method to find the value of the root
of the function
g [
31]. The necessary condition on
, defined by the characterization in Equation (
12), leads us to seek a fixed point of a real function
F such that
. We choose to solve this numerical problem, differently from the method used in [
8,
9], using the fixed-point method. In brief, the steps of the numerical computation associated with the resolution of our free optimal control problem (
5) under the isoperimetric constraint (
7) are described in Algorithm 1.
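The secant stage mentioned above, which locates the root of g, can be sketched generically; in practice each evaluation of g requires a full forward sweep, so a simple test function stands in for it here:

```python
def secant_root(g, k0, k1, tol=1e-10, max_iter=100):
    # Classical secant iteration: approximate the derivative of g by a
    # finite difference through the two latest iterates.
    f0, f1 = g(k0), g(k1)
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-15:      # flat secant: cannot proceed
            break
        k0, k1, f0 = k1, k1 - f1 * (k1 - k0) / (f1 - f0), f1
        f1 = g(k1)
        if abs(f1) < tol:
            break
    return k1
```

In the setting above, each call g(k) would return the gap between the value of Z at the final time obtained with the candidate constant k and the imposed constant C.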
Algorithm 1: Resolution steps of the two-point boundary value optimal control problem (9) and (10).
Step 0:
Guess an initial estimation of and .
Step 1:
Use the initial conditions , , , and , and the stored values of and .
Find the optimal states , , , and , iterating forward in the two-point boundary value problem (2)–(10).
Step 2:
Use the stored values of and the transversality conditions for , while searching for the constant using the secant method.
Find the adjoint variables for , iterating backward in the two-point boundary value problem (2)–(10).
Step 3:
Update the control using the new S, , I, R, Z and for in the characterization of as presented in (11), while searching for the optimal time characterized by (12) using the fixed-point method.
Step 4:
Test the convergence: if the differences between the values of the sought variables at this iteration and at the previous one are sufficiently small, accept the recent values as solutions; otherwise, go back to Step 1.
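The forward-backward sweep at the heart of Algorithm 1 has a simple generic structure. The sketch below runs it on a scalar toy problem (minimize the integral of x^2 + u^2 over [0, 1] subject to x' = x + u, x(0) = 1) rather than on the epidemic system, so every formula in it is specific to that toy problem; only the sweep, relaxation update, and convergence-test pattern of Steps 1 to 4 carries over:

```python
def fbsm_toy(T=1.0, n=200, x0=1.0, iters=100, relax=0.5, tol=1e-8):
    # Forward-Backward Sweep Method on: min integral of x^2 + u^2,
    # x' = x + u. Adjoint: lam' = -(2x + lam), lam(T) = 0; stationary
    # control u = -lam/2 (all specific to this toy problem).
    h = T / n
    u = [0.0] * (n + 1)
    for _ in range(iters):
        x = [x0] * (n + 1)
        for i in range(n):                        # forward sweep (state)
            x[i + 1] = x[i] + h * (x[i] + u[i])
        lam = [0.0] * (n + 1)
        for i in range(n, 0, -1):                 # backward sweep (adjoint)
            lam[i - 1] = lam[i] + h * (2.0 * x[i] + lam[i])
        u_new = [-l / 2.0 for l in lam]           # optimality condition
        diff = max(abs(a - b) for a, b in zip(u_new, u))
        u = [relax * a + (1.0 - relax) * b for a, b in zip(u_new, u)]
        if diff < tol:                            # convergence test (Step 4)
            break
    return x, lam, u
```

The relaxation update (relax = 0.5 here) is the usual way to stabilize the sweep; without it, the iteration can oscillate.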
Figure 1 depicts the
dynamics in the absence and presence of the control, and we can see that the number of susceptible people increases linearly from its initial condition to a number higher than 92.5 individuals when we choose
, while the optimal state
increases during the first months of the optimal control strategy and then decreases when we work with the characterization of Equation (
Simultaneously, the number of removed people increases to only a value close to eight people, whereas it reaches a higher value, with a maximal peak equal to 17, when
. As regards the number of infected people, it decreases from its initial condition to a still-significant value of about 30 individuals because of natural death and recovery only, whereas it decreases towards a value very close to zero after the introduction of the control
. We can see the relationship between the number of controlled people and the optimal values taken by
so that, when the control increases, the optimal state
also increases. In fact, we can deduce that, with only small values of
, we reach our goal by minimizing
the I function and maximizing the
R function, while the total number of susceptibles who received the control along
T and which is represented by the function
Z has not exceeded the imposed constant
C. The dashed lines in this figure show the highest fixed-point value of the sought final time, and we can see that, at this point, we have already reached our goal, namely the minimization of the number of infected people and the maximization of the number of removed people. The next figure gives more information about the obtained value of
.
In
Figure 2, we present the dynamics of the functions
S,
I and
R, and we can see the fixed points
in the first plot above. The solution of the equation
starts from an initial guess equal to zero and increases to values that are very close to, or sometimes equal to, 26 months (we note that, even if they appear to take the value 26, this is not the case at all iterations; rather, all values are very close to 26, within a small precision of about
). As noted in this figure, for instance, the highest value of
is found at iteration 292 out of 1000. In the same figure, in the plot below, we observe that, at
indicated by the dashed purple line, the number of infected people is already heading towards zero, while the number of removed people has already reached its positive peak and started to decrease because of the decrease of the optimal control function
, as shown in the previous figure. This means that there is no need in this case to extend the optimal control approach for further months since, at
, Equation (
5) has almost been achieved.
In
Figure 3, the fixed points
for different values of the control severity weight
suggest that, as the value of
increases,
increases. In fact, the bigger
is, the less important the optimal control
becomes, as we can deduce from the formulation in Equation (
11), and this is reasonable since, when
is small, we need more time to control the epidemic. The obtained results in this figure can be summarized as follows:
When : with (iteration 278), which implies that of infected people have left the I compartment.
When : with (iteration 295), which implies that of infected people have left the I compartment.
When : with (iteration 332), which implies that of infected people have left the I compartment.
In
Figure 4, we show the impact of the initial condition of
the I function, namely
, on fixed points
, and we can deduce from the obtained optimal horizons that, as
increases,
increases, and this is reasonable since, when the number of infected people is large, the anti-epidemic measures need a longer time to control the situation. The obtained results in this figure can be summarized as follows:
When : with (iteration 278), which implies that of infected people have left the I compartment.
When : with (iteration 338), which implies that of infected people have left the I compartment.
When : with (iteration 310), which implies that of infected people have left the I compartment.