1. Introduction
In this paper, we present and solve, theoretically and numerically, some optimal control problems concerning the extinction of mosquito populations by using moving devices whose main role is to spread insecticide. The controls are the trajectories or paths followed by the devices, and the states are the resulting mosquito population densities. Thus, the unknowns in the considered problems are couples (γ, u), where γ is a curve in the plane (see
Figure 1) and u indicates how many mosquitoes there are and where they stay. The goal is to compute a trajectory leading to a minimal cost, measured in terms of operational costs and total population up to a final time.
The main simplifying assumptions for the model are the following:
Assumption 1. Mosquitoes grow at a known, not necessarily constant, rate and diffuse at a known constant rate α.
Assumption 2. The insecticide immediately kills a fixed fraction of the population at a rate that decreases with the distance to the spreading source.
Assumption 3. There are no obstacles for the admissible trajectories.
These assumptions can be relaxed in several ways.
For the problems considered in this paper, we prove the existence of an optimal solution, that is, a trajectory that leads to the best possible status of the system. We also characterize the optimal control–state pairs by appropriate first order optimality conditions. As usual, this is a coupled system that must be satisfied by any optimal control, its associated state, and an additional variable (the adjoint state) and must be viewed as a necessary condition for optimality.
In this paper, this characterization is obtained by using the so-called Dubovitskii–Milyutin techniques (see, for instance, Girsanov [
1]). This relies on the following basic idea: at a local minimizer, the set of descent directions of the functional to be minimized must be disjoint from the intersection of the cones of admissible directions determined by the restrictions. Accordingly, in view of the Hahn–Banach Theorem, there exist elements in the associated dual cones, not all zero, whose sum vanishes. This algebraic condition is in fact the Euler–Lagrange equation for the problem in question. In order to apply the method in our context, we must carry out the task of identifying all these (primal and dual) cones.
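Schematically, and in notation introduced here only for illustration (the paper's own cones are identified in the sections below), the Dubovitskii–Milyutin condition reads:

```latex
% Schematic Dubovitskii--Milyutin condition (notation ours, for illustration):
% K_0 = cone of descent directions of the cost at the local minimizer,
% K_1, ..., K_n = cones of admissible directions given by the restrictions.
K_0 \cap K_1 \cap \cdots \cap K_n = \emptyset
\quad \Longrightarrow \quad
\exists\, f_i \in K_i^{*} \ (i = 0, \dots, n), \ \text{not all zero:} \quad
\sum_{i = 0}^{n} f_i = 0,
\qquad
K_i^{*} := \{\, f \in X^{*} : \langle f, x \rangle \ge 0 \ \ \forall\, x \in K_i \,\}.
```

The concrete task in each problem is precisely to compute the cones and their duals explicitly.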
This formalism has been applied with success to several optimal control problems for PDEs, including the FitzHugh–Nagumo equation (Brandao et al. [2]), solidification (Boldrini et al. [3]), the evolutionary Boussinesq model (Boldrini et al. [4]), the heat equation (Gayte et al. [5]) and, also, some ecological systems that model the interaction of three species (Coronel et al. [6]).
We remark that part of the results presented here have their origin in the Ph.D. thesis of Araujo [7].
This paper is organized as follows:
Section 2 is devoted to describing the main achievements. The next two sections are devoted to the rigorous proof of the optimality conditions;
Section 3 contains some preparatory material and
Section 4 deals with the main part of the proof. In
Section 5, we deduce optimality conditions for a problem similar to the previous one, including in this case a restriction on the norm of the admissible trajectories. Once the optimality conditions are established, they can be used to devise numerical schemes to effectively compute suitable approximations of optimal trajectories; this will be explained in detail in
Section 6. In
Section 7, we present the results of numerical experiments performed with the numerical scheme described in the previous section. Finally, in
Section 8, we present our conclusions and comment on further possibilities of investigation.
2. Main Achievements
Let Ω be a non-empty bounded connected open set with boundary ∂Ω such that either ∂Ω is sufficiently regular or Ω is a convex polygon, and let T > 0 be a given time. We want to find a curve γ such that (1) holds, where the indicated set is the space of admissible controls and F is the cost functional in (2), with γ′ denoting the time derivative of γ and u the associated state, that is, the unique solution to problem (3).
The weight parameters appearing in (2) are given constants. Concerning the interpretation of the cost functional (2), we observe that it can also be written in the form
The function γ determines the trajectory of the device and γ′ is its corresponding velocity; moreover, the same path (the geometric locus of the trajectory) can be traveled at different speeds, leading to different operational costs. Since the norm of γ is a measure of the size of the trajectory and the norm of γ′ is a measure of the velocity, the corresponding weighted sum can be regarded as a measure of the operational costs associated with the trajectory γ. The first two weight parameters set the relative importance attributed to the size and the velocity of a trajectory in those operational costs. On the other hand, the norm of u is a measure of the size of the mosquito population, and the third weight is a constant parameter that measures the importance attributed to the decrease of such population.
It is expected that an increase of the operational costs implies an increase of the effectiveness of the process and consequently leads to a reduction of the mosquito population, while a decrease of the operational costs leads to an increase of the mosquito population. Therefore, the minimization of F, once the weight parameters are given, leads to a trajectory that balances the reduction of the mosquito population against the operational costs. We remark that the values of these parameters have a large influence on the shape of a possible optimal trajectory.
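As a sketch, a functional with the three ingredients just described has the following form (the weight symbols μ₁, μ₂, μ₃ below are our notation, introduced only for illustration, and the exact norms used in (2) may differ):

```latex
% Illustrative sketch only: the weights \mu_1, \mu_2, \mu_3 are not the
% paper's notation, and the exact norms used in (2) may differ.
F(\gamma) = \frac{\mu_1}{2} \int_0^T |\gamma(t)|^2 \, dt
          + \frac{\mu_2}{2} \int_0^T |\gamma'(t)|^2 \, dt
          + \frac{\mu_3}{2} \int_0^T \!\! \int_\Omega |u(x,t)|^2 \, dx \, dt .
```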
In short, F is thus the sum of three terms, respectively related (but not coincident) to the length of the path traveled by the device, its speed along the path, and the resulting population of insects. A perhaps more realistic cost functional is indicated in
Section 8.
In (
3),
,
,
is a non-negative function and
is a positive
-function. The coefficient
a is related to insect proliferation; it is standard in population dynamics. In fact, in (
3), the mosquito population
u is assumed to have a space-time dependent Malthusian growth; this means that the population growth rate at each position and time is proportional to the number of individuals. The model admits a non-constant
a to cope with the possibility of geographical places and times with different effects; for instance, more favorable growth rates occur in places with the presence of bodies of water or during rainy seasons.
As we mentioned in the Introduction, we assume that the insecticide immediately kills a fraction of the mosquito population present at position x and time t, with a rate that is proportional to the insecticide concentration at that same position and time. At time t, the spreading device is at position γ(t) and spreads a cloud of insecticide around it; the resulting insecticide concentration at any point x then depends on the position of x relative to γ(t) (we expect this concentration to decay with the distance from the device). The spatial distribution of this concentration is then mathematically described through a known, regular function k of x − γ(t), scaled by the maximal spreading concentration capacity of the device. The associated effective killing rate of the mosquito population is proportional to the product of the insecticide concentration and the mosquito population, with some proportionality factor f. By collecting f and the maximal concentration into a single constant b, we get that the killing rate at position x and time t is b k(x − γ(t)) u(x, t).
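Assembling these assumptions (diffusion at rate α, Malthusian growth a(x, t) u, killing rate b k(x − γ(t)) u), the interior equation of the state problem can be sketched as follows; initial and boundary conditions, which are part of (3), are omitted in this sketch:

```latex
% Interior equation of the state problem, assembled from the stated
% assumptions; initial and boundary conditions of (3) are omitted here.
u_t - \alpha \, \Delta u
  = a(x,t)\, u - b \, k\bigl(x - \gamma(t)\bigr)\, u
\qquad \text{in } \Omega \times (0, T).
```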
In our present analysis, it is not strictly needed but is natural to view
k as an approximation of the Dirac distribution. For instance, it is meaningful to take
for some
; this means that the device action in (
3) at time
t is maximal at
and negligible far from
. To this respect, see the choice of
k in the numerical experiments in
Section 7.
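For instance (a concrete illustration on our part; the kernel actually used in Section 7 may differ), a normalized Gaussian bump concentrates at the device position as ε → 0:

```python
import math

def k_eps(x, y, eps=0.1):
    """Smooth kernel concentrated near the origin; as eps -> 0 it
    approximates (a multiple of) the Dirac distribution in the plane.
    Normalized so that its integral over R^2 equals 1."""
    r2 = x * x + y * y
    return math.exp(-r2 / eps ** 2) / (math.pi * eps ** 2)

# The kernel is maximal at the device position and negligible far away:
print(k_eps(0.0, 0.0))   # peak value at the origin
print(k_eps(1.0, 1.0))   # essentially zero far from the origin
```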
Of course, we can consider more elaborate models and include additional (nonlinear) terms in (
3) related to competition; this is not done here just for simplicity of exposition.
Any solution (γ, u) to (1) provides an optimal trajectory and the associated population of mosquitoes. The existence of such a pair is established below; see
Section 4.
As already mentioned, we also provide in this paper a characterization of optimal control–state pairs. It will be seen that, in the particular case of (
1), the main consequence of the Dubovitskii–Milyutin method is the following: if γ is an optimal control and u is the associated state, then there exists an adjoint state p such that the following optimality conditions are satisfied:
and
3. Preliminaries
As usual,
will stand for the space of the functions in
with compact support in
and
and
will stand for the usual Sobolev spaces; see Adams [
8] for their definitions and properties. The gradient and Laplace operators will be respectively denoted by ∇ and Δ.
The constants
,
,
,
and
and the functions
k,
and
a are given, with
In the sequel, C will denote a generic positive constant and the symbol ⟨· , ·⟩ will stand for several duality pairings.
Very frequently, the following spaces will be needed:
and
For the main results concerning these spaces, we refer, for instance, to [
9]. Let us just recall one of them that is sometimes called the Lions–Peetre Embedding Theorem, see ([
10], p. 13):
Lemma 1. The embedding is compact for all .
For any Banach space
B, we will denote by
the corresponding norm. Its topological dual space will be denoted by
. Recall that, if
is a cone, the associated dual cone is the following:
Let
be given and let us assume that
. It will be said that a nonzero linear form
is a support functional for
A at
if
Let Z be a Banach space and let be a mapping. For any , we will denote by the Gâteaux derivative of M at in the direction whenever it exists. For obvious reasons, in the particular case , this quantity will be written in the form .
The following result is known as
Aubin–Lions’ Immersion Theorem. In fact, this version was given by Simon in [
11], see p. 85, Corollary 4:
Lemma 2. Let X, B, and Y be Banach spaces such that with continuous embeddings, the first of them being compact and assume that . Then, the following embeddings are compact:
for all .
for all .
The following result guarantees the existence and uniqueness of a state for each control in the space of admissible controls :
Theorem 1. For each , there exists a unique solution of problem (3) satisfying the estimate The constant C only depends on T, α, , b, and Ω.
For completeness, let us also recall an existence–uniqueness result for the adjoint system:
Theorem 2. Let and be given. The linear system possesses exactly one solution such that with C as in Theorem 1.
We finish this section by recalling the Dubovitskii–Milyutin method. The presentation is similar to that in Boldrini et al. [
3].
Let
X be a Banach space and let
be a given function. Consider the extremal problem
where the
are by definition
the restriction sets. It is assumed that
There are many situations where (
9) holds. In particular, the following holds:
For any
,
is an inequality restriction set of the form
where the
are continuous seminorms and the
.
is the equality restriction set
where
is a differentiable mapping (
Z is another Banach space).
The following theorem is a generalized version of the Dubovitskii–Milyutin principle and will be used in
Section 4 and
Section 5:
Theorem 3. Let be a local minimizer of problem (8). Let be the decreasing cone of the cost functional J at , let be the feasible (or admissible) cone of at for , and let be the tangent cone to at . Suppose that The cones and are non-empty, open, and convex.
The cone is non-empty, closed, and convex.
Then the cones above have empty intersection. Consequently, there exist , and , not all zero, such that In order to identify the decreasing, feasible, and tangent cones, we will use the following well-known definitions and facts:
Assume that
is Fréchet-differentiable. Then, for any
, the decreasing cone of
J at
e is open and convex and is given by
where
stands for the usual duality product associated with
X and
.
Suppose that the set
is convex,
and
. Then, the feasible cone of
at
e is open and convex and is given by
Finally, we have the celebrated
Ljusternik Inverse Function Theorem; see, for instance, ([
12], p. 167). The statement is the following: suppose that
X and
Y are Banach spaces,
is a mapping and the set
is non-empty; in addition, suppose that
,
M is strictly differentiable at
and
; then,
M maps a neighborhood of
onto a neighborhood of 0 and the tangent cone to
at
is the closed subspace:
4. A First Optimal Control Problem
In this section, we will consider the optimal control problem (
1)–(
3). We will introduce an equivalent reformulation as an extremal problem of the kind (
8). Then, we will prove that there exist optimal control–state pairs. Finally, we will use Theorem 3 to deduce appropriate (first order) necessary optimality conditions.
Our problem is the following:
where
J is given by (
2) for any
, and the set of admissible control–state pairs
is defined by
Here,
is the mapping defined by
, with
Theorem 4. The extremal problem (10) possesses at least one solution. Proof. Let us first check that is non-empty.
Let
be given. Then, from Theorem 1, there exists a unique solution
to (
3) that, in particular, satisfies
. Thus, we have
.
Let us now see that
is sequentially weakly closed in
. Thus, assume that
for all
n and
Then,
converges uniformly to
in
. Since
is the unique solution to (
3) with
replaced by
, we necessarily have that
u is the state associated with
, that is,
and
are certainly sequentially weakly closed.
Obviously, is a well defined real number for any . Furthermore, is continuous and (strictly) convex. Consequently, J is sequentially weakly lower semicontinuous in .
Finally,
J is coercive, i.e., it satisfies
Therefore,
J attains its minimum in the weakly closed set
, and the result holds. □
Remark 1. Note that, although J is strictly convex, we cannot guarantee the uniqueness of the solution to (10), since the admissible set is not necessarily convex. See, however, Remark 3 for a discussion on uniqueness. In the next result, we specify the optimality conditions that must be satisfied by any solution to the extremal problem (
10):
Theorem 5. Let be an optimal control–state pair for (10). Then, there exists such that the triplet satisfies (4)–(6). We will provide a proof based on the Dubovitskii–Milyutin formalism, i.e., Theorem 3. First, let us establish some technical results.
First of all, the functional
is obviously
and the derivative of
J at
in the direction
is given by
Consequently, we have the following:
Lemma 3. For each , the cone of decreasing directions of J at isThe associated dual cone is Lemma 4. Let be given.
- (i)
Then, M is G-differentiable at . The G-derivative of M at in the direction is given by - (ii)
The mapping M is in a neighborhood of . Furthermore, is surjective.
Proof. In order to prove (i), we use the definition of the Gâteaux-derivative. First, note that
while
Therefore, in view of the assumptions on
k, (
15) and Lebesgue’s Theorem, it follows that
strongly in
, which proves (i).
We will now prove (ii). It is clear that, for any
,
is a well defined continuous linear operator on
. Let
be given and let us assume that
for all
n and
strongly in
as
. Then,
In the last line, the first norm can be bounded as follows:
Hence, from the hypotheses on
k and the fact that
uniformly in
, we find that
On the other hand, we can deduce the following inequalities:
Consequently, using again the hypotheses on
k and the uniform convergence of
, we can also write that
From (
16) and (
17), we find that
Since
is independent of
, we deduce that
, regarded as a mapping from
into
, is continuous.
Let us finally see that is surjective.
Thus, let
be given in
. We have to find
such that
However, this is easy: it suffices to first choose
arbitrarily and then solve the linear problem:
Obviously, the couple
satisfies (
18). □
From this lemma and Ljusternik’s Theorem, we obtain the following characterization of any tangent cone to :
Lemma 5. Let be given in . Then, the tangent cone to at is the set The associated dual cone is given by We can now present the proof of Theorem 5.
Proof of Theorem 5. Let
be a solution to (
10). The cone of decreasing directions of
J at
is
and
In addition,
, the cone of tangent directions to
at this point is
and
From Theorem 3, we know that
whence there must exist
and also
, not simultaneously zero, such that
Since
and
and
G cannot be simultaneously zero, we necessarily have
, and it can be assumed that
.
Accordingly,
and
for all
. In particular, we see that
Now, let
be an arbitrary admissible control. Let
w be the unique solution to the linear system:
It follows from Lemma 5 that
, whence
Let us introduce the adjoint system (
5) and let us denote by
its unique solution. By multiplying (
21) by
and (
5) by
w, summing, integrating with respect to
x and
t and performing integrations by parts, we easily get that
Taking into account (
22), we thus find that
However, this is just the weak formulation of the boundary problem (
6). Indeed, standard arguments show that
solves (
24) if and only if
, the second-order integro-differential equation
holds in the distributional sense in
,
and
. Thus, the triplet
satisfies (
4)–(
6).
This ends the proof. □
Remark 2. There are other ways to prove Theorem 5. For instance, it is possible to apply an argument relying on Lagrange multipliers, starting from the Lagrangian L. That a couple is an optimal control–state pair with an associated adjoint state (resp. a triplet that satisfies the optimality conditions (4)–(6)) is formally equivalent to saying that the triplet is a saddle point (resp. a stationary point) of L. Remark 3. As already said, in general, there is no reason to expect uniqueness. However, in view of Theorem 5, it is reasonable to believe that, under appropriate assumptions on b and k, the solution is unique. Indeed, taking into account that the state is uniformly bounded, if b is sufficiently small and one considers two optimal pairs, denoting by u, p, and γ the corresponding differences of the states, adjoint states, and controls, the following is not difficult to prove:
The are uniformly bounded (for instance) in by a constant of the form .
u and p are bounded in by a constant of the form .
γ is bounded in by a constant of the form .
In these estimates, C depends on and the other data of the problem but is independent of b. Consequently, if b is small enough, the solution to the optimal control problem is unique.
5. A Second Optimal Control Problem
This section deals with a more realistic second optimal control problem. Specifically, we will analyze the constrained problem
where
and
F is given by (
2). Here,
is a prescribed constant. We can interpret the constraint
as a limitation on the positions and the speed of the device; roughly speaking, a solution to (
25) furnishes a strategy that leads to a minimal insect population (in the
sense) with few resources.
For this problem, we will also prove the existence of optimal controls, and we will also find the optimality conditions furnished by the Dubovitskii–Milyutin formalism.
Thus, let
be an optimal control for this new problem, let
and
be the associated state and adjoint state and let us introduce the notation
Then, it will be shown that the associated optimality conditions of first order are (
4), (
5) and
The problem can be rewritten in the form
where
J is given by (
2) and the set of admissible pairs
is
Recall that
is given by (
12).
One has:
Theorem 6. The extremal problem (29) possesses at least one solution. The proof is similar to the proof of Theorem 4. Indeed, it is easy to check that
is a non-empty weakly closed subset of
. Since
is continuous, (strictly) convex and coercive, it also attains its minimum in
, and this proves that (
29) is solvable.
Theorem 7. Let be an optimal control–state pair for (29). Then, there exists such that the triplet satisfies (4), (5) and (28), where the bilinear form is given by (27). Proof. We will argue as in the proof of Theorem 5.
First, notice that
can be written in the form
Let
be a solution to (
29). Then, the feasible cone of
at
is the set
and
Recall that the latter means that
and
From Theorem 3, we now have
whence there exist
,
and
, not simultaneously zero, such that
As before, we necessarily have
. Indeed, if
, then
However, for any
, we can always construct
w in
such that
. Hence, we would have
and
for all
, which is impossible.
We can thus assume that
and
Let us consider again the adjoint system (
5) and let us denote by
its unique solution. Let
be an arbitrary admissible control and let
w be the unique solution to (
21). Since
and we still have (
23), we get:
Taking into account that
is arbitrary in
and (
31) holds, we deduce that (
28) is satisfied, and the result is proved. □
6. A Numerical Scheme
In this section, we present a numerical scheme to find an approximate solution to (
1)–(
3). To this purpose, we will use the optimality conditions (
4)–(
6).
Obviously, we will have to approximate the equations in time and space. With respect to the time variable, we will incorporate finite differences taking into account the following:
Since the equation (
4) is parabolic, in order to guarantee unconditional stability, we discretize in time by using a
backward Euler method. For the same reason, we discretize M in time by using a
forward Euler method.
Since (
6) is a second-order two-point boundary value problem, we approximate the time derivatives there with a standard (centered) finite difference scheme.
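The stability motivation behind these choices can be illustrated on a one-dimensional model problem, using finite differences in space purely for simplicity (this sketch is ours, not part of the scheme itself): with a time step far beyond the explicit stability limit, the implicit step remains bounded while the explicit one blows up.

```python
import math

def heat_step_backward(u, alpha, dt, dx):
    """One implicit (backward) Euler step for u_t = alpha * u_xx on interior
    grid points with homogeneous Dirichlet ends, via the Thomas algorithm."""
    n = len(u)
    r = alpha * dt / dx ** 2
    # Solve (1 + 2r) v_i - r (v_{i-1} + v_{i+1}) = u_i.
    sub = [-r] * n
    diag = [1.0 + 2.0 * r] * n
    sup = [-r] * n
    rhs = list(u)
    for i in range(1, n):                 # forward elimination
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    v = [0.0] * n
    v[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        v[i] = (rhs[i] - sup[i] * v[i + 1]) / diag[i]
    return v

def heat_step_forward(u, alpha, dt, dx):
    """One explicit (forward) Euler step; stable only if alpha*dt/dx**2 <= 1/2."""
    r = alpha * dt / dx ** 2
    n = len(u)
    return [u[i] + r * ((u[i - 1] if i > 0 else 0.0) - 2.0 * u[i]
                        + (u[i + 1] if i < n - 1 else 0.0)) for i in range(n)]

# Take r = alpha*dt/dx^2 = 5, far beyond the explicit stability limit 1/2.
dx, dt, alpha = 0.1, 0.05, 1.0
u0 = [math.sin(math.pi * (i + 1) * dx) + 0.01 * (-1) ** i for i in range(9)]
ub, uf = list(u0), list(u0)
for _ in range(20):
    ub = heat_step_backward(ub, alpha, dt, dx)
    uf = heat_step_forward(uf, alpha, dt, dx)
print(max(abs(v) for v in ub))  # stays bounded (and decays): implicit step
print(max(abs(v) for v in uf))  # grows without control: explicit step
```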
Let N be a positive (large) integer and let us consider a partition of the time interval in N subintervals. For simplicity, we assume that this partition is uniform, with time step .
On the other hand, the approximation in space will be performed via a finite element method. Thus, let be a triangular mesh of a polygonal approximation of Ω and let us denote by a suitable finite element space associated with . For instance, can be the usual -Lagrange space, formed by the continuous piecewise linear functions on .
Then, given an approximation
of
, we must compute functions
and
and vectors
such that:
In the last line of (
36),
denotes a suitable discrete operator associated with the time derivative. In our code, to preserve second order approximation in time, we took
, where
is an additional time-mesh point. This implies that
and, thus, the required Neumann boundary condition can be imposed just using a reduced form of the finite difference operator at
, that is,
.
6.1. An Iterative Algorithm for Fixed N and
The previous finite dimensional system is nonlinear and, consequently, cannot be solved exactly. Accordingly, an iterative algorithm has been devised to obtain a solution. It is the following:
Base step:
Choose tolerances and ; by starting with , proceed recursively for in an outer iteration scheme as follows:
First step: Find .
Since
is known, advance in time to find
, for
, by successively solving the linear problems
Second step: Find .
Since
and
are known, proceed backwards in time to find
, for
, by solving the problems
Third step: Find .
This is done by applying an inner iteration scheme. Thus, by starting with
, find recursively
by repeating the following for
:
with a meaning similar to above for
. This is a finite-dimensional linear system for
where the coefficient matrix is independent of
m and can thus be inverted only once, at the beginning of the process.
The relative stopping criterion for the iteration process is
where
denotes the usual discrete
-norm. When this stopping criterion is satisfied, take
and proceed to the next step.
Fourth step: Compare and .
When the overall relative stopping criterion
is satisfied, stop the iterative process and take
as the computed approximate solutions. Otherwise, increase
n by one and go back to the First step.
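The outer loop can be summarized by the following skeleton, in which the three solver routines are placeholders for the First, Second, and Third steps above (all names here are ours):

```python
def outer_iteration(gamma0, solve_state, solve_adjoint, solve_trajectory,
                    eps=1e-4, max_outer=100):
    """Skeleton of the outer loop: state -> adjoint -> trajectory, repeated
    until the relative change of the trajectory drops below eps.  The three
    solver arguments stand for the First, Second, and Third steps."""
    gamma = list(gamma0)
    for _ in range(max_outer):
        u = solve_state(gamma)               # First step: forward in time
        p = solve_adjoint(gamma, u)          # Second step: backward in time
        gamma_new = solve_trajectory(u, p)   # Third step: inner iteration
        change = max(abs(a - b) for a, b in zip(gamma_new, gamma))
        size = max(max(abs(a) for a in gamma_new), 1e-30)
        gamma = gamma_new
        if change / size < eps:              # relative stopping criterion
            break
    return gamma, u, p

# Toy usage with dummy solvers; the iterates converge to the fixed
# point gamma* = 4/3 of the contraction gamma -> 0.25 * gamma + 1.
g, u, p = outer_iteration(
    [1.0, 1.0],
    solve_state=lambda g: [0.5 * x for x in g],
    solve_adjoint=lambda g, u: list(u),
    solve_trajectory=lambda u, p: [0.25 * (x + y) + 1.0 for x, y in zip(u, p)],
)
```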
Acceleration of Algorithm
To accelerate the process, can be taken larger than . In addition, to get a better resolution, in the final iterative scheme, we have allowed for a decrease of the time-step by increasing the number of time intervals N.
The resulting iterative process is as follows:
Base step:
Give tolerances , , and an initial number of discrete time intervals .
First step:
Take and apply the previous iterative algorithm to obtain , and . Define .
Second step:
Double the value of N and apply the previous algorithm to obtain new , and .
Third step:
Compare
and
. If the convergence criterion
is satisfied, stop the iterations and take as approximate solutions
Otherwise, go back to the Second step and repeat the process.
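Schematically, the time-step refinement loop reads as follows, with `run_algorithm` a placeholder for the full iterative algorithm of Section 6.1; using the computed cost as the convergence quantity compared between levels is an assumption of this sketch.

```python
def refine_in_time(run_algorithm, n0=16, eps=1e-3, max_levels=10):
    """Doubles the number N of time intervals until the relative change of
    the computed quantity between two consecutive levels falls below eps.
    `run_algorithm(N)` stands for the full iterative algorithm of
    Section 6.1 and must return the value obtained with N time steps."""
    n = n0
    cost_old = run_algorithm(n)
    for _ in range(max_levels):
        n *= 2                            # Second step: double N, re-solve
        cost_new = run_algorithm(n)
        if abs(cost_new - cost_old) <= eps * abs(cost_new):
            return n, cost_new            # Third step: converged in N
        cost_old = cost_new
    return n, cost_old

# Toy usage: a "cost" with an O(1/N) discretization error around the value 2.
n_final, cost = refine_in_time(lambda n: 2.0 + 1.0 / n)
```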
7. Numerical Experiments
In order to illustrate the behavior of the previous algorithm, let us present the results of some experiments with .
Let us denote by
the indicator function of
for any
; we consider an initial mosquito population given by
where
and
are, respectively, the circle of center
and radius
and the circle of center
and radius
. This means that the mosquito population is initially concentrated in two disjoint circles with the same radius and the same amount of population; see
Figure 2.
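For illustration, an initial datum of this type can be assembled as follows (the centers, radius, and amplitude below are placeholders of ours, not the exact values used in the experiments):

```python
def initial_population(x, y, c1=(-0.5, 0.0), c2=(0.5, 0.0), r=0.2, height=10.0):
    """Initial mosquito density: `height` times the sum of the indicator
    functions of two disjoint disks of centers c1, c2 and common radius r.
    Centers, radius, and height are illustrative placeholders."""
    in_b1 = (x - c1[0]) ** 2 + (y - c1[1]) ** 2 <= r ** 2
    in_b2 = (x - c2[0]) ** 2 + (y - c2[1]) ** 2 <= r ** 2
    return height * (float(in_b1) + float(in_b2))

# Concentrated inside either disk, zero elsewhere:
print(initial_population(-0.5, 0.0))   # inside the first disk
print(initial_population(0.0, 0.0))    # between the disks: no population
```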
We also consider the following values for the parameters and functions in (
3):
For the stopping criteria, we have taken
The computations have been performed by using the software FreeFem++ [13], and all figures were made with Octave [14].
7.1. Convergence Behavior
In this section, we test the convergence of the algorithm described in
Section 6.1. We take
in (
3) and
. The numerical solution of the optimality system (
4)–(
6) is computed for several values of the final time
T and the number of subintervals
N:
Moreover, we consider three different regular meshes
, with
lateral nodes.
Table 1 shows the size
, the number of vertices
, and the number of triangles
for each mesh.
With no control, i.e.,
in (
3), the solution evolves to a final state that is displayed in
Figure 3a,c,e for
and 6, respectively. In these figures, we observe that the mosquito population spreads and increases in
. When we apply the control, due to the initial distribution of population, it is expected to obtain a solution that starts traveling in the direction of the nearest herd of mosquitoes (the circle in the present case) and then changes course in the direction of the farthest herd (
). This is what is found in each case, as can be seen respectively in
Figure 3b,d,f), where, besides the trajectories, the respective computed optimal states are also shown at the final time.
Table 2 reports the minimal and maximal values of the state u in all cases depicted in
Figure 3;
Table 3 presents the required number of outer iterations and the obtained values of the cost functional
F for the three considered meshes.
From the results in
Table 3, we can observe that, for
, the change in the cost is lower to 4% for the third mesh and
. For this reason, we select these parameters to perform the simulations in the following sections. To keep the computational cost at a reasonable level, we also use the same parameters for
, where the relative change is about 13%.
7.2. Influence of the Functional Weights
In this section, we study the influence of the functional weights. As in the previous section, we take
.
Table 4 presents the required number of outer iterations, and the values of the cost functional in four cases I, II, III, and IV for three values of the final time:
, and 6. Calculations were made by using
time steps and the third spatial mesh of
Table 1.
From the results in
Table 4, we see that the cost is larger in case I for small
T. This seems to indicate that, when we assign major relevance to the remaining population, short-time operations are not satisfactory. For larger
T, the device cost becomes more important. We see, however, that, as
T grows, the costs have a tendency to equalize (and the particular values of the
seem to lose relevance).
Figure 4,
Figure 5 and
Figure 6 depict the trajectories and states
u, respectively, at
for all cases described in
Table 4. The minimal and maximal values of the mosquito population
u for each case are reported in
Table 5.
7.3. Two Examples with Variable
In this section, we present the calculations obtained by considering the following two different definitions of the function
a:
with
a positive constant associated with the proliferation velocity of the insect population. With these choices, we will study separately the influence of
a on the spatial and temporal behavior of the state u and the control γ.
Calculations were carried out by considering
with
time steps, the third spatial mesh of
Table 1 and the following values for
, and the weights of the functional
F:
Table 6 details the values of the number of outer iterations and the respective values of the cost functional in the case of
defined in (
40) and (
41) by considering three different values of
.
Figure 7 depicts the uncontrolled state, the controlled state, and the respective trajectory
for
in the case of
defined in (
40) by considering three different values of
: 1 (
Figure 7a,b), 2 (
Figure 7c,d), and 4 (
Figure 7e,f). We observe that the optimal control trajectories are qualitatively similar in all cases and stay close to the initial point. This is because the non-zero values of
a (where the insects proliferate) depend on the spatial coordinate and are located in the unit ball centered at this point.
Figure 8 depicts the uncontrolled state, the controlled state, and the respective trajectory
for
in the case of
defined in (
41) by considering three different values of
: 1 (
Figure 8a,b), 2 (
Figure 8c,d), and 4 (
Figure 8e,f). In these cases, we can observe the influence of the time coordinate
t: the length of the trajectory
increases when the value of
does, exhibiting a behavior similar to that of the case
studied in
Section 7.1.
The respective minimal and maximal values of the uncontrolled and controlled states corresponding to
Figure 7 and
Figure 8 are reported in
Table 7. The influence of the control
on the state
u, which decreases its minimal and maximal values, is clearly observed.
8. Conclusions
We have performed a rigorous analysis of an optimal control problem concerning the extinction of mosquito populations by insecticide-spreading devices; the optimality conditions have been used to devise a suitable numerical scheme and compute an optimal trajectory.
The success in completing these tasks with the help of appropriate theoretical and numerical tools seems to indicate that a similar analysis can be performed in other more complex cases. Thus, it can be more natural to consider, instead of (
2), the cost functional
Indeed, in this functional, the three terms can be respectively viewed as measures of the true length of the path traveled by the device, the total fuel needed in the process, and the total mosquito population along . The norms of , and u represent quantities more adequate to the model, although the analysis of the corresponding control problem is more involved.
Other realistic situations can also be taken into account. For instance, a very interesting setup appears when there are obstacles to the admissible trajectories. In addition, instead of the Malthusian growth rate for the mosquito population assumed in the present work, we could assume a Verhulstian or Gompertzian growth rate. The corresponding models and their associated control problems are being investigated at present.