Article

Optimal Control of Insect Populations

by Anderson L. Albuquerque de Araujo 1,†, José L. Boldrini 2,†, Roberto C. Cabrales 3,*,†, Enrique Fernández-Cara 4,† and Milton L. Oliveira 5,†
1 Departamento de Matemática, Universidade Federal de Viçosa, Viçosa 36570-000, Brazil
2 Departamento de Sistemas Integrados, Faculdade de Engenharia Mecânica, Universidade Estadual de Campinas, Campinas 13083-970, Brazil
3 Instituto de Investigación Multidisciplinaria en Ciencia y Tecnología, Universidad de la Serena, La Serena 1720256, Chile
4 Departamento de Ecuaciones Diferenciales y Análisis Numérico e IMUS, Universidad de Sevilla, 41004 Sevilla, Spain
5 Departamento de Matemática, Universidade Federal da Paraíba, João Pessoa 58051-900, Brazil
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(15), 1762; https://doi.org/10.3390/math9151762
Submission received: 3 June 2021 / Revised: 22 July 2021 / Accepted: 23 July 2021 / Published: 26 July 2021
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing)

Abstract:
We consider some optimal control problems for systems governed by linear parabolic PDEs with local controls that can move along a domain Ω of the plane. We prove the existence of optimal paths and also deduce the first order necessary optimality conditions, using the Dubovitskii–Milyutin formalism; this leads to an iterative algorithm of the fixed-point kind. This problem may be considered as a model for the control of a mosquito population in a given region by using moving insecticide-spreading devices. In this situation, an optimal control is a trajectory or path that such a spreading device must follow in order to reduce the population as much as possible with a reasonable, not too expensive strategy. We illustrate our results by presenting some numerical experiments.

1. Introduction

In this paper, we present and solve, theoretically and numerically, some optimal control problems concerning the extinction of mosquito populations by using moving devices whose main role is to spread insecticide. The controls are the trajectories or paths followed by the devices and the states are the resulting mosquito population densities. Thus, the unknowns in the considered problems are pairs ( γ , u ), where γ is a curve in the plane (see Figure 1) and u is the population density, that is, it indicates how many mosquitoes there are and where they are located. The goal is to compute a trajectory leading to a minimal cost, measured in terms of operational costs and total population up to a final time.
The main simplifying assumptions for the model are the following:
Assumption 1.
Mosquitoes grow at a known, not necessarily constant, rate a = a ( x , t ) and diffuse at a known constant rate α.
Assumption 2.
The insecticide immediately kills a fixed fraction of the population at a rate that decreases with the distance to the spreading source.
Assumption 3.
There are no obstacles for the admissible trajectories.
These assumptions can be relaxed in several ways.
For the problems considered in this paper, we prove the existence of an optimal solution, that is, a trajectory that leads to the best possible status of the system. We also characterize the optimal control–state pairs by appropriate first order optimality conditions. As usual, this is a coupled system that must be satisfied by any optimal control, its associated state, and an additional variable (the adjoint state) and must be viewed as a necessary condition for optimality.
In this paper, this characterization is obtained by using the so-called Dubovitskii–Milyutin techniques (see, for instance, Girsanov [1]). This relies on the following basic idea: at a local minimizer, the set of descent directions for the functional to minimize must be disjoint from the intersection of the cones of admissible directions determined by the restrictions. Accordingly, in view of the Hahn–Banach Theorem, there exist elements in the associated dual cones, not all zero, whose sum vanishes. This algebraic condition is in fact the Euler–Lagrange equation for the problem in question. In order to apply this formalism in our context, we must identify all these (primal and dual) cones.
This formalism has been applied with success to several optimal control problems for PDEs, including the FitzHugh–Nagumo equation (Brandão et al. [2]), solidification (Boldrini et al. [3]), the evolutionary Boussinesq model (Boldrini et al. [4]), the heat equation (Gayte et al. [5]) and, also, some ecological systems that model the interaction of three species (Coronel et al. [6]).
We remark that part of the results presented here originate in the Ph.D. thesis of Araujo [7].
This paper is organized as follows: Section 2 is devoted to describing the main achievements. The next two sections are devoted to the rigorous proof of the optimality conditions; Section 3 contains some preparatory material and Section 4 deals with the main part of the proof. In Section 5, we deduce optimality conditions for a problem similar to the previous one, including in this case a restriction on the norm of the admissible trajectories. Once the optimality conditions are established, they can be used to devise numerical schemes to effectively compute suitable approximations of optimal trajectories; this is explained in detail in Section 6. In Section 7, we present the results of numerical experiments performed with the numerical scheme described in the previous section. Finally, in Section 8, we present our conclusions and comment on further possibilities of investigation.

2. Main Achievements

Let Ω ⊂ R² be a non-empty, bounded, connected open set with boundary ∂Ω such that either ∂Ω is of class C² or Ω is a convex polygon, and let T > 0 be a given time. We want to find a curve γ* : [0, T] → R² such that
F(γ*) = min_{γ ∈ V} F(γ),    (1)
where V := { γ ∈ H¹(0, T)² : γ(0) = 0 } is the space of admissible controls, and F is the following cost functional:
F(γ) := μ₀ ∫_0^T |γ|² dt + μ₁ ∫_0^T |γ̇|² dt + μ₂ ∫_Q |u|² dx dt,    (2)
where γ ˙ is the time derivative of γ , and u = u ( x , t ) is the associated state, that is, the unique solution to the problem
u_t − α Δu = a(x, t) u − b k(x − γ(t)) u   in Q := Ω × (0, T),
∂u/∂n = 0   on S := ∂Ω × (0, T),
u(x, 0) = u₀   in Ω.    (3)
Parameters μ₀ ≥ 0, μ₁ > 0, and μ₂ > 0 are given constants. Concerning the interpretation of the cost functional (2), we observe that it can also be written in the form
F(γ) := μ₀ ‖γ‖²_{L²(0,T)²} + μ₁ ‖γ̇‖²_{L²(0,T)²} + μ₂ ‖u‖²_{L²(Q)}.
The function γ determines the trajectory of the device and γ̇ is its corresponding velocity; moreover, the same path (geometric locus of the trajectory) can be traveled with different speeds, leading to different operational costs. Since ‖γ‖_{L²(0,T)} is a measure of the size of γ and ‖γ̇‖_{L²(0,T)} is a measure of the velocity γ̇, the quantity μ₀ ‖γ‖²_{L²(0,T)} + μ₁ ‖γ̇‖²_{L²(0,T)} can be regarded as a measure of the operational costs associated with the trajectory γ. Parameters μ₀ and μ₁ weight the relative importance attributed to the size and the velocity of a trajectory in those operational costs. On the other hand, ‖u‖_{L²(Q)} is a measure of the size of the mosquito population and μ₂ is a constant parameter that weights the importance attributed to the decrease of this population.
It is expected that an increase of the operational costs implies an increase of the effectiveness of the process and consequently leads to a reduction of the mosquito population, while a decrease of the operational costs leads to an increase of the mosquito population. Therefore, the minimization of F, once the parameters μ₀, μ₁, and μ₂ are given, leads to a trajectory that balances the reduction of the mosquito population and the amount of operational costs. We remark that the values of μ₀, μ₁ and μ₂ have a large influence on the shape of a possible optimal trajectory.
In short, F(γ) is thus the sum of three terms, respectively related (but not equal) to the length of the path travelled by the device, its speed along (0, T) and the resulting population of insects. A possibly more realistic cost functional is indicated in Section 8.
In (3), α > 0, b > 0, a = a(x, t) is a non-negative function and k : R² → R is a positive C¹-function. The coefficient a is related to insect proliferation and is standard in population dynamics. In fact, in (3), the mosquito population u is assumed to have a space-time dependent Malthusian growth; this means that the population growth rate at each position and time is proportional to the number of individuals. The model admits a non-constant a to cope with the possibility of geographical places and times with different effects; for instance, more favorable growth rates occur in places with the presence of bodies of water or during rainy seasons.
As we mentioned in the Introduction, we assume that the insecticide immediately kills a fraction of the mosquito population present at a position x and time t, with a rate that is proportional to the insecticide concentration at that same position and time. At time t, the spreading device is at position γ(t) and spreads a cloud of insecticide around it; the resulting insecticide concentration at any point x then depends on the position of x relative to γ(t) (we expect this concentration to decay with the distance from the device). The spatial distribution of this concentration is mathematically described as k(x − γ(t)) c_max, where c_max is the maximal spreading concentration capacity of the device, and k is a known C¹-function such that 0 ≤ k ≤ 1 and k(0) = 1. Then, the associated effective killing rate of the mosquito population is proportional to the product of the insecticide concentration and the mosquito population, that is, f k(x − γ(t)) c_max u(x, t) for some proportionality factor f. By introducing b = f c_max, we get that the killing rate at position x and time t is b k(x − γ(t)) u(x, t).
Although it is not strictly needed in our present analysis, it is natural to view k as an approximation of the Dirac distribution. For instance, it is meaningful to take
k(z) := k₀ e^{−|z|²/σ}
for some k₀, σ > 0; this means that the device action in (3) at time t is maximal at x = γ(t) and negligible far from γ(t). In this respect, see the choice of k in the numerical experiments in Section 7.
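As a quick numerical illustration (not part of the analysis; the names k0 and sigma below simply denote the constants k₀ and σ above), the following Python sketch evaluates a kernel of this form and checks that its total mass over R² equals k₀ π σ, so that, after rescaling by 1/(k₀ π σ), k indeed behaves like an approximation of the Dirac delta centered at the device position as σ → 0.

```python
import numpy as np

def k(z, k0=1.0, sigma=0.25):
    """Gaussian insecticide kernel k(z) = k0 * exp(-|z|^2 / sigma)."""
    z = np.asarray(z, dtype=float)
    return k0 * np.exp(-np.sum(z ** 2, axis=-1) / sigma)

# The exact mass of this kernel over R^2 is k0 * pi * sigma, so
# k / (k0 * pi * sigma) approximates the Dirac delta as sigma -> 0.
xs = np.linspace(-2.5, 2.5, 501)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
Z = np.stack([X, Y], axis=-1)
riemann_mass = k(Z).sum() * dx * dx
print(riemann_mass, np.pi * 0.25)   # both approximately 0.785
```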
Of course, we can consider more elaborate models and include additional (nonlinear) terms in (3) related to competition; this is not done here just for simplicity of exposition.
Any solution ( γ * , u * ) to (1) provides an optimal trajectory and the associated population of mosquitoes. The existence of such a pair ( γ * , u * ) is established below, see Section 4.
As already mentioned, we also provide in this paper a characterization of optimal control–state pairs. It will be seen that, in the particular case of (1), the main consequence of the Dubovitskii–Milyutin method is the following: if γ * is an optimal control and u * is the associated state, there exists p * (the adjoint state), such that the following optimality conditions are satisfied:
u*_t − α Δu* = a(x, t) u* − b k(x − γ*) u*   in Q,
∂u*/∂n = 0   on S,
u*(x, 0) = u₀   in Ω,    (4)
−p*_t − α Δp* = a(x, t) p* − b k(x − γ*) p* − 2μ₂ u*   in Q,
∂p*/∂n = 0   on S,
p*(x, T) = 0   in Ω    (5)
and
−2μ₁ γ̈* + 2μ₀ γ* − b ∫_Ω p* u* ∇k(x − γ*) dx = 0   in (0, T),
γ*(0) = 0,   γ̇*(T) = 0.    (6)

3. Preliminaries

As usual, D(Ω) will stand for the space of the functions in C^∞(Ω) with compact support in Ω, and W^{r,p}(Ω) and H^m(Ω) = W^{m,2}(Ω) will stand for the usual Sobolev spaces; see Adams [8] for their definitions and properties. The gradient and Laplace operators will be respectively denoted by ∇ and Δ.
The constants μ₀ ≥ 0, μ₁ > 0, μ₂ > 0, α > 0 and b > 0 and the functions k, u₀ and a are given, with
k ∈ C¹_b(R²),   0 ≤ k ≤ 1,   u₀ ∈ H¹(Ω)   and   a ∈ L^∞(Q).
In the sequel, C will denote a generic positive constant and the symbol ⟨· , ·⟩ will stand for several duality pairings.
Very frequently, the following spaces will be needed:
W_2^{2,1}(Q) := { u ∈ L²(Q) : D^σ u ∈ L²(Q) for 1 ≤ |σ| ≤ 2, u_t ∈ L²(Q) }
and
W_{2,N}^{2,1}(Q) := { u ∈ W_2^{2,1}(Q) : ∂u/∂n = 0 on S }.
For the main results concerning these spaces, we refer, for instance, to [9]. Let us just recall one of them that is sometimes called the Lions–Peetre Embedding Theorem, see ([10], p. 13):
Lemma 1.
The embedding W_2^{2,1}(Q) ⊂ L^r(Q) is compact for all 1 ≤ r < +∞.
For any Banach space B, we will denote by ‖·‖_B the corresponding norm. Its topological dual space will be denoted by B′. Recall that, if K ⊂ B is a cone, the associated dual cone is the following:
K* = { f ∈ B′ : f(e) ≥ 0 ∀e ∈ K }.
Let A ⊂ B be given and let us assume that e₀ ∈ A. It will be said that a nonzero linear form f ∈ B′ is a support functional for A at e₀ if
f(e) ≥ f(e₀)   ∀e ∈ A.
Let Z be a Banach space and let M : B → Z be a mapping. For any e₀, e₁ ∈ B, we will denote by M′(e₀)e₁ the Gateaux-derivative of M at e₀ in the direction e₁, whenever it exists. For obvious reasons, in the particular case Z = R, this quantity will be written in the form ⟨M′(e₀), e₁⟩.
The following result is known as Aubin–Lions’ Immersion Theorem. In fact, this version was given by Simon in [11], see p. 85, Corollary 4:
Lemma 2.
Let X, B, and Y be Banach spaces such that X ⊂ B ⊂ Y with continuous embeddings, the first of them being compact, and assume that 0 < T < +∞. Then, the following embeddings are compact:
  • L^q(0, T; X) ∩ { ϕ : ϕ_t ∈ L¹(0, T; Y) } ⊂ L^q(0, T; B) for all 1 ≤ q ≤ ∞.
  • L^∞(0, T; X) ∩ { ϕ : ϕ_t ∈ L^r(0, T; Y) } ⊂ C⁰([0, T]; B) for all 1 < r ≤ ∞.
The following result guarantees the existence and uniqueness of a state for each control γ in the space of admissible controls  V :
Theorem 1.
For each γ ∈ V, there exists a unique solution u ∈ W_{2,N}^{2,1}(Q) of problem (3) satisfying the estimate
‖u‖_{W_2^{2,1}(Q)} ≤ C ‖u₀‖_{H¹(Ω)}.
The constant C only depends on T, α, ‖a‖_{L^∞(Q)}, b, ‖k‖_{L^∞(R²)} and Ω.
For completeness, let us also recall an existence–uniqueness result for the adjoint system:
Theorem 2.
Let γ ∈ V and u ∈ L²(Q) be given. The linear system
−p_t − α Δp = a(x, t) p − b k(x − γ) p + 2μ₂ u   in Q,
∂p/∂n = 0   on S,
p(x, T) = 0   in Ω,
possesses exactly one solution p ∈ W_{2,N}^{2,1}(Q) such that
‖p‖_{W_2^{2,1}(Q)} ≤ C ‖u₀‖_{H¹(Ω)},
with C as in Theorem 1.
We finish this section by recalling the Dubovitskii–Milyutin method. The presentation is similar to that in Boldrini et al. [3].
Let X be a Banach space and let J : X R be a given function. Consider the extremal problem
J(e*) = min_{e ∈ Q} J(e),   where Q := ∩_{i=1}^{n+1} Q_i,    (8)
where the Q_i (i = 1, …, n + 1) are by definition the restriction sets. It is assumed that
Int Q_i ≠ Ø for 1 ≤ i ≤ n   and   Int Q_{n+1} = Ø.    (9)
There are many situations where (9) holds. In particular, this is the case in the following situation:
  • For any 1 ≤ i ≤ n, Q_i is an inequality restriction set of the form
    Q_i = { e ∈ X : p_i(e) ≤ a_i },
    where the p_i : X → R are continuous seminorms and the a_i > 0.
  • Q_{n+1} is the equality restriction set
    Q_{n+1} = { e ∈ X : M(e) = 0 },
    where M : X → Z is a differentiable mapping (Z is another Banach space).
The following theorem is a generalized version of the Dubovitskii–Milyutin principle and will be used in Section 4 and Section 5:
Theorem 3.
Let e₀ ∈ ∩_{i=1}^{n+1} Q_i be a local minimizer of problem (8). Let DC₀ be the decreasing cone of the cost functional J at e₀, let FC_i be the feasible (or admissible) cone of Q_i at e₀ for 1 ≤ i ≤ n, and let TC be the tangent cone to Q_{n+1} at e₀. Suppose that
  • The cones DC₀ and FC_i (1 ≤ i ≤ n) are non-empty, open, and convex.
  • The cone TC is non-empty, closed, and convex.
Then
DC₀ ∩ ( ∩_{i=1}^{n} FC_i ) ∩ TC = Ø.
Consequently, there exist G₀ ∈ [DC₀]*, G_i ∈ [FC_i]* (1 ≤ i ≤ n) and G_{n+1} ∈ [TC]*, not all zero, such that
G₀ + Σ_{i=1}^{n} G_i + G_{n+1} = 0.
In order to identify the decreasing, feasible, and tangent cones, we will use the following well known definitions and facts:
  • Assume that J : X → R is Fréchet-differentiable. Then, for any e ∈ X, the decreasing cone of J at e is open and convex and is given by
    DC = { h ∈ X : ⟨J′(e), h⟩ < 0 },
    where ⟨· , ·⟩ stands for the usual duality product associated with X and X′.
  • Suppose that the set Q ⊂ X is convex, Int Q ≠ Ø and e ∈ Q. Then, the feasible cone of Q at e is open and convex and is given by
    FC = { μ(e′ − e) : e′ ∈ Int Q, μ > 0 }.
  • Finally, we have the celebrated Ljusternik Inverse Function Theorem; see, for instance, ([12], p. 167). The statement is the following: suppose that X and Y are Banach spaces, M : X → Y is a mapping and the set
    Q := { e ∈ X : M(e) = 0 }
    is non-empty; in addition, suppose that e₀ ∈ Q, M is strictly differentiable at e₀ and R(M′(e₀)) = Y; then, M maps a neighborhood of e₀ onto a neighborhood of 0 and the tangent cone to Q at e₀ is the closed subspace
    TC = N(M′(e₀)) = { h ∈ X : M′(e₀) h = 0 }.

4. A First Optimal Control Problem

In this section, we will consider the optimal control problem (1)–(3). We will introduce an equivalent reformulation as an extremal problem of the kind (8). Then, we will prove that there exist optimal control–state pairs. Finally, we will use Theorem 3 to deduce appropriate (first order) necessary optimality conditions.
Our problem is the following:
min_{(γ, u) ∈ U_ad} J(γ, u),    (10)
where J is given by (2) for any (γ, u) ∈ V × L²(Q), and the set of admissible control–state pairs U_ad is defined by
U_ad := { (γ, u) ∈ V × W_{2,N}^{2,1}(Q) : M(γ, u) = 0 }.
Here,
M : V × W_{2,N}^{2,1}(Q) → L²(Q) × H¹(Ω)
is the mapping defined by M(γ, u) = (ψ, φ), with
ψ = M₁(γ, u) := u_t − α Δu − a u + b k(x − γ) u,   φ = M₂(γ, u) := u(· , 0) − u₀.    (12)
Theorem 4.
The extremal problem (10) possesses at least one solution.
Proof. 
Let us first check that U a d is non-empty.
Let γ ∈ V be given. Then, from Theorem 1, there exists a unique solution u ∈ W_{2,N}^{2,1}(Q) to (3) that, in particular, satisfies M(γ, u) = 0. Thus, we have (γ, u) ∈ U_ad.
Let us now see that U_ad is sequentially weakly closed in V × L²(Q). Thus, assume that (γ_n, u_n) ∈ U_ad for all n and
γ_n → γ weakly in V   and   u_n → u weakly in L²(Q).
Then, γ_n converges uniformly to γ in [0, T]. Since u_n is the unique solution to (3) with γ replaced by γ_n, we necessarily have that u is the state associated with γ; that is, (γ, u) ∈ U_ad, and U_ad is therefore sequentially weakly closed.
Obviously, J(γ, u) is a well defined real number for any (γ, u) ∈ U_ad. Furthermore, J : V × L²(Q) → R is continuous and (strictly) convex. Consequently, J is sequentially weakly lower semicontinuous in V × L²(Q).
Finally, J is coercive, i.e., it satisfies
J(γ_n, u_n) → +∞   as   ‖(γ_n, u_n)‖_{V × L²(Q)} → +∞.
Therefore, J attains its minimum in the weakly closed set U a d , and the result holds.    □
Remark 1.
Note that, although J is strictly convex, we cannot guarantee the uniqueness of solution to (10), since the admissible set U a d is not necessarily convex. See, however, Remark 3 for a discussion on uniqueness.
In the next result, we specify the optimality conditions that must be satisfied by any solution to the extremal problem (10):
Theorem 5.
Let (γ*, u*) ∈ V × W_{2,N}^{2,1}(Q) be an optimal control–state pair for (10). Then, there exists p* ∈ W_{2,N}^{2,1}(Q) such that the triplet (γ*, u*, p*) satisfies (4)–(6).
We will provide a proof based on the Dubovitskii–Milyutin formalism, i.e., Theorem 3. Before that, let us establish some technical results.
First of all, the functional J : V × L²(Q) → R is obviously C^∞ and the derivative of J at (γ, u) in the direction (β, v) is given by
⟨J′(γ, u), (β, v)⟩ = 2μ₀ ∫_0^T γ·β dt + 2μ₁ ∫_0^T γ̇·β̇ dt + 2μ₂ ∫_Q u v dx dt   ∀(γ, u), (β, v) ∈ V × L²(Q).
Consequently, we have the following:
Lemma 3.
For each (γ, u) ∈ V × L²(Q), the cone of decreasing directions of J at (γ, u) is
DC(γ, u) = { (β, v) ∈ V × L²(Q) : ⟨J′(γ, u), (β, v)⟩ < 0 }.
The associated dual cone is
[DC(γ, u)]* = { −λ J′(γ, u) : λ ≥ 0 }.
Lemma 4.
Let (γ, u) ∈ V × W_{2,N}^{2,1}(Q) be given.
(i)
Then, M is G-differentiable at (γ, u). The G-derivative of M at (γ, u) in the direction (β, v) is given by
M′(γ, u)(β, v) = ( M₁′(γ, u)(β, v), M₂′(γ, u)(β, v) ),
where
M₁′(γ, u)(β, v) := v_t − α Δv − a v + b k(x − γ) v − b (∇k(x − γ)·β) u,   M₂′(γ, u)(β, v) := v(· , 0).
(ii)
The mapping M is C¹ in a neighborhood of (γ, u). Furthermore, M′(γ, u) is surjective.
Proof. 
In order to prove (i), we use the definition of the Gâteaux-derivative. First, note that
(1/h) [ M₁(γ + hβ, u + hv) − M₁(γ, u) ] − M₁′(γ, u)(β, v) = b [ k(x − (γ + hβ)) − k(x − γ) ] v + b { (1/h) [ k(x − (γ + hβ)) − k(x − γ) ] + ∇k(x − γ)·β } u,
while
(1/h) [ M₂(γ + hβ, u + hv) − M₂(γ, u) ] − M₂′(γ, u)(β, v) ≡ 0.
Therefore, in view of the assumptions on k, (15) and Lebesgue's Theorem, it follows that
(1/h) [ M(γ + hβ, u + hv) − M(γ, u) ] − M′(γ, u)(β, v) → 0   as h → 0
strongly in L²(Q) × H¹(Ω), which proves (i).
We will now prove (ii). It is clear that, for any (γ, u) ∈ V × W_{2,N}^{2,1}(Q), M′(γ, u) is a well defined continuous linear operator on V × W_{2,N}^{2,1}(Q). Let (β, v) ∈ V × W_{2,N}^{2,1}(Q) be given and let us assume that (γ_n, u_n) ∈ V × W_{2,N}^{2,1}(Q) for all n and (γ_n, u_n) → (γ, u) strongly in V × W_{2,N}^{2,1}(Q) as n → +∞. Then,
‖M₁′(γ_n, u_n)(β, v) − M₁′(γ, u)(β, v)‖_{L²(Q)} ≤ b ‖(k(· − γ_n) − k(· − γ)) v‖_{L²(Q)} + b ‖(u_n ∇k(· − γ_n) − u ∇k(· − γ))·β‖_{L²(Q)}.
In the last line, the first norm can be bounded as follows:
‖(k(· − γ_n) − k(· − γ)) v‖_{L²(Q)} ≤ sup_Q |k(x − γ_n(t)) − k(x − γ(t))| ‖v‖_{L²(Q)}.
Hence, from the hypotheses on k and the fact that γ n γ uniformly in [ 0 , T ] , we find that
‖(k(· − γ_n) − k(· − γ)) v‖_{L²(Q)} ≤ ε_n ‖v‖_{L²(Q)},   with ε_n → 0.    (16)
On the other hand, we can deduce the following inequalities:
‖(u_n ∇k(· − γ_n) − u ∇k(· − γ))·β‖_{L²(Q)} ≤ ‖((u_n − u) ∇k(· − γ_n))·β‖_{L²(Q)} + ‖(u (∇k(· − γ_n) − ∇k(· − γ)))·β‖_{L²(Q)} ≤ sup_Q |∇k(x − γ_n(t))·β(t)| ‖u_n − u‖_{L²(Q)} + sup_Q |(∇k(x − γ_n(t)) − ∇k(x − γ(t)))·β(t)| ‖u‖_{L²(Q)} ≤ C ( ‖u_n − u‖_{L²(Q)} + sup_Q |∇k(x − γ_n(t)) − ∇k(x − γ(t))| ‖u‖_{L²(Q)} ) ‖β‖_{H¹(0,T)²}.
Consequently, using again the hypotheses on k and the uniform convergence of γ n , we can also write that
‖(u_n ∇k(· − γ_n) − u ∇k(· − γ))·β‖_{L²(Q)} ≤ ε′_n ‖β‖_{H¹(0,T)²},   with ε′_n → 0.    (17)
From (16) and (17), we find that
‖M₁′(γ_n, u_n)(β, v) − M₁′(γ, u)(β, v)‖_{L²(Q)} ≤ C (ε_n + ε′_n) ‖(β, v)‖_{H¹(0,T)² × L²(Q)}.
Since M₂′(γ, u) is independent of (γ, u), we deduce that (γ, u) ↦ M′(γ, u), regarded as a mapping from V × W_{2,N}^{2,1}(Q) into L(V × W_{2,N}^{2,1}(Q); L²(Q) × H¹(Ω)), is continuous.
Let us finally see that M′(γ, u) is surjective.
Thus, let (φ, ψ) be given in L²(Q) × H¹(Ω). We have to find (β, v) such that
M′(γ, u)(β, v) = (φ, ψ),   (β, v) ∈ V × W_{2,N}^{2,1}(Q).    (18)
However, this is easy: it suffices to first choose β ∈ V arbitrarily and then solve the linear problem
v_t − α Δv = a(x, t) v − b k(x − γ) v + b (∇k(x − γ)·β) u + φ   in Q,
∂v/∂n = 0   on S,
v(x, 0) = ψ   in Ω.
Obviously, the couple ( β , v ) satisfies (18).    □
From this lemma and Ljusternik’s Theorem, we obtain the following characterization of any tangent cone to U a d :
Lemma 5.
Let ( γ , u ) be given in U a d . Then, the tangent cone to U a d at ( γ , u ) is the set
TC(γ, u) = N(M′(γ, u)),   that is,   TC(γ, u) := { (β, v) ∈ V × W_{2,N}^{2,1}(Q) : M′(γ, u)(β, v) = 0 }.
The associated dual cone is given by
[TC(γ, u)]* = { (η, ζ) ∈ (V × W_{2,N}^{2,1}(Q))′ : ⟨(η, ζ), (β, v)⟩ = 0 ∀(β, v) ∈ TC(γ, u) }.
We can now present the proof of Theorem 5.
Proof of Theorem 5.
Let (γ*, u*) ∈ V × W_2^{2,1}(Q) be a solution to (10). The cone of decreasing directions of J at (γ*, u*) is
DC := DC(γ*, u*) = { (β, v) ∈ V × W_{2,N}^{2,1}(Q) : ⟨J′(γ*, u*), (β, v)⟩ < 0 }
and
[DC]* = { −λ J′(γ*, u*) : λ ≥ 0 }.
In addition, since (γ*, u*) ∈ U_ad, the cone of tangent directions to U_ad at this point is
TC := TC(γ*, u*) = { (β, v) ∈ V × W_{2,N}^{2,1}(Q) : M′(γ*, u*)(β, v) = 0 }
and
[TC]* = { (η, ζ) ∈ (V × W_{2,N}^{2,1}(Q))′ : ⟨(η, ζ), (β, v)⟩ = 0 ∀(β, v) ∈ TC }.
From Theorem 3, we know that
DC ∩ TC = Ø,
whence there must exist G₀ = −λ₀ J′(γ*, u*) ∈ [DC]* and also G = (η, ζ) ∈ [TC]*, not simultaneously zero, such that
G₀ + G = 0.
Since λ₀ ≥ 0 and G₀ and G cannot be simultaneously zero, we necessarily have λ₀ > 0, and it can be assumed that λ₀ = 1.
Accordingly,
G = −G₀ = J′(γ*, u*)
and
⟨(η, ζ), (β, v)⟩ = 2 ( ∫_0^T [ μ₀ γ*·β + μ₁ γ̇*·β̇ ] dt + μ₂ ∫_Q u* v dx dt )
for all (β, v) ∈ V × W_{2,N}^{2,1}(Q). In particular, we see that
∫_0^T [ μ₀ γ*·β + μ₁ γ̇*·β̇ ] dt + μ₂ ∫_Q u* v dx dt = 0   ∀(β, v) ∈ TC.
Now, let γ V be an arbitrary admissible control. Let w be the unique solution to the linear system:
w_t − α Δw = a w − b k(x − γ*) w + b (∇k(x − γ*)·γ) u*   in Q,
∂w/∂n = 0   on S,
w(x, 0) = 0   in Ω.    (21)
It follows from Lemma 5 that ( γ , w ) T C , whence
∫_0^T [ μ₀ γ*·γ + μ₁ γ̇*·γ̇ ] dt + μ₂ ∫_Q u* w dx dt = 0.    (22)
Let us introduce the adjoint system (5) and let us denote by p * its unique solution. By multiplying (21) by p * and (5) by w, summing, integrating with respect to x and t and performing integrations by parts, we easily get that
b ∫_Q p* u* (∇k(x − γ*)·γ) dx dt + 2μ₂ ∫_Q u* w dx dt = 0.    (23)
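For the reader's convenience, here is a schematic version of this duality computation (a sketch only; the rigorous justification relies on the regularity provided by Theorems 1 and 2). Multiplying (21) by p* and integrating over Q gives
∫_Q p* ( w_t − α Δw − a w + b k(x − γ*) w ) dx dt = b ∫_Q p* u* (∇k(x − γ*)·γ) dx dt.
Integrating by parts in t (using w(· , 0) = 0 and p*(· , T) = 0) and in x (both functions satisfy homogeneous Neumann conditions), the left-hand side equals
∫_Q w ( −p*_t − α Δp* − a p* + b k(x − γ*) p* ) dx dt = −2μ₂ ∫_Q u* w dx dt,
where the last equality follows from the adjoint system (5); this is exactly (23).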
Taking into account (22), we thus find that
∫_0^T [ 2μ₀ γ*·γ + 2μ₁ γ̇*·γ̇ ] dt − b ∫_Q p* u* ∇k(x − γ*)·γ dx dt = 0   ∀γ ∈ V;   γ* ∈ V.    (24)
However, this is just the weak formulation of the boundary value problem (6). Indeed, standard arguments show that γ* solves (24) if and only if γ* ∈ H¹(0, T)², the second-order integro-differential equation
−2μ₁ γ̈* + 2μ₀ γ* − b ∫_Ω p*(x, t) u*(x, t) ∇k(x − γ*(t)) dx = 0
holds in the distributional sense in (0, T), γ*(0) = 0 and γ̇*(T) = 0. Thus, the triplet (γ*, u*, p*) satisfies (4)–(6).
This ends the proof.    □
Remark 2.
There are other ways to prove Theorem 5. For instance, it is possible to apply an argument relying on Lagrange multipliers, starting from the Lagrangian
L(γ, u, p) := J(γ, u) + ⟨p, M(γ, u)⟩.
That (γ*, u*) is an optimal control–state pair and p* is an associated adjoint state (resp. that (γ*, u*, p*) satisfies the optimality conditions (4)–(6)) is formally equivalent to saying that the triplet (γ*, u*, p*) is a saddle point (resp. a stationary point) of L.
Remark 3.
As already said, in general, there is no reason to expect uniqueness. However, in view of Theorem 5, it is reasonable to believe that, under appropriate assumptions on b and k, the solution is unique. Indeed, taking into account that k is uniformly bounded in R², if b is sufficiently small, (γ_i, u_i) is an optimal pair for i = 1, 2 and one sets γ := γ₁ − γ₂, u := u₁ − u₂, the following is not difficult to prove:
  • The u_i are uniformly bounded (for instance) in L²(Q) by a constant of the form e^{C(1+b)}.
  • u and p are bounded in L²(Q) by a constant of the form b e^{C(1+b)} ‖γ‖_{L^∞}.
  • γ is bounded in L^∞(0, T) by a constant of the form b e^{C(1+b)} ‖γ‖_{L^∞}.
In these estimates, C depends on ‖k‖_{W^{1,∞}(R²)} and the other data of the problem but is independent of b. Consequently, if b is small enough, the solution to the optimal control problem is unique.

5. A Second Optimal Control Problem

This section deals with a second, more realistic optimal control problem. Specifically, we will analyze the constrained problem
F(γ*) = min_{γ ∈ B} F(γ),    (25)
where
B := { γ ∈ H¹(0, T)² : γ(0) = 0, ‖γ‖_{H¹(0,T)²} ≤ R₀ }
and F is given by (2). Here, R₀ > 0 is a prescribed constant. We can interpret the constraint γ ∈ B as a limitation on the positions and the speed of the device; roughly speaking, a solution to (25) furnishes a strategy that leads to a minimal insect population (in the L² sense) with few resources.
For this problem, we will also prove the existence of optimal controls, and we will also find the optimality conditions furnished by the Dubovitskii–Milyutin formalism.
Thus, let γ * be an optimal control for this new problem, let u * and p * be the associated state and adjoint state and let us introduce the notation
m(γ, β) := ∫_0^T [ 2μ₀ γ·β + 2μ₁ γ̇·β̇ ] dt.    (27)
Then, it will be shown that the associated optimality conditions of first order are (4), (5) and
m(γ*, γ − γ*) − b ∫_Q u* p* ∇k(x − γ*)·(γ − γ*) dx dt ≥ 0   ∀γ ∈ B;   γ* ∈ B.    (28)
The problem can be rewritten in the form
min_{(γ, u) ∈ Z_ad} J(γ, u),    (29)
where J is given by (2) and the set of admissible pairs Z_ad is
Z_ad := { (γ, u) ∈ V × W_{2,N}^{2,1}(Q) : M(γ, u) = 0, ‖γ‖_{H¹(0,T)²} ≤ R₀ }.
Recall that M = ( M 1 , M 2 ) is given by (12).
One has:
Theorem 6.
The extremal problem (29) possesses at least one solution.
The proof is similar to the proof of Theorem 4. Indeed, it is easy to check that Z a d is a non-empty weakly closed subset of V × L 2 ( Q ) . Since J : V × L 2 ( Q ) R is continuous, (strictly) convex and coercive, it also attains its minimum in Z a d , and this proves that (29) is solvable.
Theorem 7.
Let (γ*, u*) ∈ V × W_2^{2,1}(Q) be an optimal control–state pair for (29). Then, there exists p* ∈ W_2^{2,1}(Q) such that the triplet (γ*, u*, p*) satisfies (4), (5) and (28), where the bilinear form m(· , ·) is given by (27).
Proof. 
We will argue as in the proof of Theorem 5.
First, notice that Z a d can be written in the form
Z_ad = U_ad ∩ (B × L²(Q)).
Let (γ*, u*) ∈ V × W_2^{2,1}(Q) be a solution to (29). Then, the feasible cone of B × L²(Q) at (γ*, u*) is the set
FC := FC(γ*, u*) = { (μ(γ − γ*), v) : γ ∈ Int B, μ > 0, v ∈ L²(Q) }
and
[ F C ] * = { ( h , 0 ) : h is a support functional of B at γ * } .
Recall that the latter means that h ∈ V′ and
⟨h, γ⟩ ≥ ⟨h, γ*⟩   ∀γ ∈ B.
From Theorem 3, we now have
DC ∩ TC ∩ FC = Ø,
whence there exist G₀ = −λ₀ J′(γ*, u*) ∈ [DC]*, G = (η, ζ) ∈ [TC]* and H = (h, 0) ∈ [FC]*, not simultaneously zero, such that
G₀ + G + H = 0.
As before, we necessarily have λ₀ > 0. Indeed, if λ₀ = 0, then
⟨(η, ζ), (β, v)⟩ + ⟨h, β⟩ = 0   ∀(β, v) ∈ V × W_{2,N}^{2,1}(Q).
However, for any γ ∈ V, we can always construct w in W_{2,N}^{2,1}(Q) such that (γ, w) ∈ TC. Hence, we would have ⟨(η, ζ), (γ, w)⟩ = 0 and ⟨h, γ⟩ = 0 for all γ ∈ V; this yields h = 0, then also (η, ζ) = 0, so that all the functionals would vanish, which is impossible.
We can thus assume that λ 0 = 1 and
G + H = −G₀ = J′(γ*, u*).
In particular, we get:
∫_0^T [ μ₀ γ*·β + μ₁ γ̇*·β̇ ] dt + μ₂ ∫_Q u* v dx dt = ⟨h, β⟩   ∀(β, v) ∈ TC.
Let us consider again the adjoint system (5) and let us denote by p * its unique solution. Let γ B be an arbitrary admissible control and let w be the unique solution to (21). Since ( γ , w ) T C and we still have (23), we get:
∫_0^T [ 2μ₀ γ*·γ + 2μ₁ γ̇*·γ̇ ] dt − b ∫_Q p* u* ∇k(x − γ*)·γ dx dt = ⟨h, γ⟩.
Taking into account that γ is arbitrary in B and (31) holds, we deduce that (28) is satisfied, and the result is proved.    □

6. A Numerical Scheme

In this section, we present a numerical scheme to find an approximate solution to (1)–(3). To this purpose, we will use the optimality conditions (4)–(6).
Obviously, we will have to approximate the equations in time and space. With respect to the time variable, we will incorporate finite differences taking into account the following:
  • Since Equation (4) is parabolic, in order to guarantee unconditional stability, we discretize it in time by using a backward Euler method. For the same reason, we discretize the adjoint Equation (5), which is integrated backward in time, by using a forward Euler method.
  • Since (6) is a second-order two-point boundary value problem, we approximate the time derivatives there with a standard (centered) finite difference scheme.
Let N be a positive (large) integer and let us consider a partition Λ_N = { 0 = t₀, t₁, …, t_N = T } of the time interval [0, T] into N subintervals. For simplicity, we assume that this partition is uniform, with time step Δt := T/N.
On the other hand, the approximation in space will be performed via a finite element method. Thus, let T_h be a triangular mesh of a polygonal approximation Ω_h of Ω and let us denote by V_h a suitable finite element space associated with T_h. For instance, V_h can be the usual P₁-Lagrange space, formed by the continuous piecewise linear functions on Ω_h.
Then, given an approximation u_h(· , 0) ∈ V_h of u₀, we must compute functions u_h^N(t_i) and p_h^N(t_i) and vectors γ_h^N(t_i) = (γ_{h,1}^N(t_i), γ_{h,2}^N(t_i)) such that:
∫_Ω [ (u_h^N(t_i) − u_h^N(t_{i−1}))/Δt ] v dx + ∫_Ω α ∇u_h^N(t_i)·∇v dx = ∫_Ω a(x, t_i) u_h^N(t_i) v dx − ∫_Ω b k(x − γ_h^N(t_i)) u_h^N(t_i) v dx   ∀v ∈ V_h,   u_h^N(t₀) = u_h(· , 0),    (34)
−∫_Ω [ (p_h^N(t_i) − p_h^N(t_{i−1}))/Δt ] q dx + ∫_Ω α ∇p_h^N(t_{i−1})·∇q dx = ∫_Ω a(x, t) p_h^N(t_{i−1}) q dx − ∫_Ω b k(x − γ_h^N(t_{i−1})) p_h^N(t_{i−1}) q dx − ∫_Ω 2μ₂ u_h^N(t_{i−1}) q dx   ∀q ∈ V_h,   p_h^N(t_N) = 0,    (35)
−2μ₁ [ γ_h^N(t_{k−1}) − 2γ_h^N(t_k) + γ_h^N(t_{k+1}) ] / (Δt)² + 2μ₀ γ_h^N(t_k) − b ∫_Ω p_h^N(t_k) u_h^N(t_k) ∇k(x − γ_h^N(t_k)) dx = 0   for k = 2, …, N − 1,   γ_h^N(t₀) = 0,   D_h γ_h^N(t_N) = 0.    (36)
In the last line of (36), D_h denotes a suitable discrete operator associated with the time derivative. In our code, to preserve second-order approximation in time, we took D_h γ_h^N(t_N) := [ γ_h^N(t_{N−1}) − γ_h^N(t_{N+1}) ] / (2Δt), where t_{N+1} = t_N + Δt is an additional time-mesh point. This implies that γ_h^N(t_{N−1}) = γ_h^N(t_{N+1}) and, thus, the required Neumann boundary condition can be imposed just by using a reduced form of the finite difference operator at t_N, that is, [ γ_h^N(t_{N−1}) − 2γ_h^N(t_N) + γ_h^N(t_{N+1}) ] / (Δt)² = [ 2γ_h^N(t_{N−1}) − 2γ_h^N(t_N) ] / (Δt)².

6.1. An Iterative Algorithm for Fixed N and T h

The previous finite dimensional system is nonlinear and, consequently, cannot be solved exactly. Accordingly, an iterative algorithm has been devised to obtain a solution. It is the following:
Base step:
Choose tolerances ϵ_outer > 0 and ϵ_inner > 0; starting with γ_{h,0}^N ≡ 0, proceed recursively for n = 1, 2, … in an outer iteration scheme as follows:
First step: Find u h , n N .
Since γ h , n 1 N ( t i ) is known, advance in time to find u h , n N ( t i ) , for i = 1 , , N , by successively solving the linear problems
∫_Ω [ (u_{h,n}^N(t_i) − u_{h,n}^N(t_{i−1}))/Δt ] v dx + ∫_Ω α ∇u_{h,n}^N(t_i)·∇v dx = ∫_Ω a(x, t_i) u_{h,n}^N(t_i) v dx − ∫_Ω b k(x − γ_{h,n−1}^N(t_i)) u_{h,n}^N(t_i) v dx   ∀v ∈ V_h,   u_{h,n}^N(t₀) = u_h(t₀).
Second step: Find p h , n N .
Since γ h , n 1 N ( t i ) and u h , n N are known, proceed backwards in time to find p h , n N ( t i ) , for i = N 1 , , 0 , by solving the problems
−∫_Ω [ (p_{h,n}^N(t_i) − p_{h,n}^N(t_{i−1}))/Δt ] q dx + ∫_Ω α ∇p_{h,n}^N(t_{i−1})·∇q dx = ∫_Ω a(x, t) p_{h,n}^N(t_{i−1}) q dx − ∫_Ω b k(x − γ_{h,n−1}^N(t_{i−1})) p_{h,n}^N(t_{i−1}) q dx − ∫_Ω 2μ₂ u_{h,n}^N(t_{i−1}) q dx   ∀q ∈ V_h,   p_{h,n}^N(t_N) = 0.
Third step: Find γ h , n N .
This is done by applying an inner iteration scheme. Thus, starting with γ̃₀ = γ_{h,n−1}^N, find recursively {γ̃_m}_{m ≥ 1} by repeating the following for m = 1, 2, …:
−2μ₁ [ γ̃_m(t_{k−1}) − 2γ̃_m(t_k) + γ̃_m(t_{k+1}) ] / (Δt)² + 2μ₀ γ̃_m(t_k) − b ∫_Ω p_h^N(t_k) u_h^N(t_k) ∇k(x − γ̃_{m−1}(t_k)) dx = 0   for k = 2, …, N − 1,   γ̃_m(t₀) = 0,   D_h γ̃_m(t_N) = 0,
with a meaning similar to the above for D_h γ̃_m(t_N) = 0. This is a finite linear system for γ̃_m = [γ̃_m(t₁), …, γ̃_m(t_N)] whose coefficient matrix is independent of m and can thus be inverted only once, at the beginning of the process.
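The following Python sketch illustrates one possible assembly of this linear system and the reuse of its factorization along the inner iteration. It is only a schematic translation of (36): we assume here that the interior equations are imposed for k = 1, …, N − 1, with the reduced ghost-point equation at k = N, and the forcing term F (which in practice contains the finite element integrals b ∫_Ω p u ∇k(x − γ̃_{m−1}(t_k)) dx) is left as a placeholder.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def trajectory_matrix(N, dt, mu0, mu1):
    """Matrix of the discrete trajectory equation (36).

    Unknowns are gamma(t_1), ..., gamma(t_N); gamma(t_0) = 0 is eliminated
    and the last row uses the reduced (ghost-point) stencil at t_N.
    """
    A = np.zeros((N, N))
    diag = 4.0 * mu1 / dt**2 + 2.0 * mu0
    off = -2.0 * mu1 / dt**2
    for i in range(N - 1):          # rows for k = 1, ..., N-1
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = off       # gamma(t_0) = 0 drops out when k = 1
        A[i, i + 1] = off
    A[N - 1, N - 1] = diag          # reduced row at k = N
    A[N - 1, N - 2] = 2.0 * off
    return A

# The matrix does not change along the inner iteration, so it is factorized
# once and reused for every new right-hand side F (the integral terms).
N, dt, mu0, mu1 = 200, 2.0 / 200, 1.0, 1.0
lu, piv = lu_factor(trajectory_matrix(N, dt, mu0, mu1))
F = np.zeros((N, 2))                # placeholder forcing term
gamma = lu_solve((lu, piv), F)      # shape (N, 2): both components of gamma
```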
The relative stopping criterion for the iteration process is
max{ ‖γ̃_m − γ̃_{m−1}‖ / ‖γ̃_m‖ , ‖γ̃_m − γ̃_{m−1}‖ / ‖γ̃_{m−1}‖ } ≤ ϵ_inner,
where ‖·‖ denotes the usual discrete L²-norm. When this stopping criterion is satisfied, take γ_{h,n}^N = γ̃_m and proceed to the next step.
Fourth step: Compare γ h , n N and γ h , n 1 N .
When the overall relative stopping criterion
max{ ‖u_{h,n}^N − u_{h,n−1}^N‖ / ‖u_{h,n}^N‖ , ‖p_{h,n}^N − p_{h,n−1}^N‖ / ‖p_{h,n}^N‖ , ‖γ_{h,n}^N − γ_{h,n−1}^N‖ / min( ‖γ_{h,n}^N‖ , ‖γ_{h,n−1}^N‖ ) } ≤ ϵ_outer
is satisfied, stop the iterative process and take
u h N = u h , n N , p h N = p h , n N and γ h N = γ h , n N
as the computed approximated solutions. Otherwise, increase n by one and go back to the First step.
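Schematically, and under the assumption that the three linear solves are available as black boxes (represented below by the hypothetical callables solve_state, solve_adjoint and solve_trajectory; in the paper they are FreeFem++ computations), the outer iteration can be organized as follows; the stopping test shown is a slightly simplified version of the criteria above.

```python
import numpy as np

def rel_change(new, old):
    """Relative change in the discrete L2-norm (simplified stopping test)."""
    return np.linalg.norm(new - old) / max(np.linalg.norm(new), 1e-30)

def outer_iteration(solve_state, solve_adjoint, solve_trajectory,
                    gamma0, eps_outer=0.05, max_outer=100):
    """Fixed-point loop of Section 6.1: state -> adjoint -> trajectory."""
    gamma, u, p = gamma0, None, None
    for n in range(1, max_outer + 1):
        u_new = solve_state(gamma)                         # First step
        p_new = solve_adjoint(gamma, u_new)                # Second step
        gamma_new = solve_trajectory(u_new, p_new, gamma)  # Third step (inner loop)
        if u is not None and max(rel_change(u_new, u),
                                 rel_change(p_new, p),
                                 rel_change(gamma_new, gamma)) <= eps_outer:
            return u_new, p_new, gamma_new, n              # Fourth step: converged
        u, p, gamma = u_new, p_new, gamma_new
    return u, p, gamma, max_outer
```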

Acceleration of Algorithm

To accelerate the process, ϵ i n n e r can be taken larger than ϵ o u t e r . In addition, to get a better resolution, in the final iterative scheme, we have allowed for a decrease of the time-step by increasing the number of time intervals N.
The resulting iterates are as follows:
Base step:
Give tolerances ϵ_γ > 0, ϵ_outer > 0, ϵ_inner > 0 and an initial number of discrete time intervals N₀ ≥ 1.
First step:
Take N = N 0 and apply the previous iterative algorithm, to obtain u h N , p h N and γ h N . Define γ before = γ h N 0 .
Second step:
Double the value of N and apply the previous algorithm to obtain new u h N , p h N and γ h N .
Third step:
Compare γ h N and γ before . If the convergence criterion
‖γ_h^N − γ_before‖ / ‖γ_h^N‖ ≤ ϵ_γ
is satisfied, stop the iterates and take as approximated solutions
u * = u h , n N , p * = p h , n N and γ * = γ h , n N .
Otherwise, go back to the Second step and repeat the process.
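A sketch of this time-refinement driver, assuming a hypothetical helper run_algorithm(N) that performs the whole outer iteration for a given number of time subintervals and returns arrays sampled at t₀, …, t_N (so that every other entry of the fine trajectory matches the coarse time grid):

```python
import numpy as np

def refine_in_time(run_algorithm, N0=50, eps_gamma=0.05, max_doublings=6):
    """Double N until the computed trajectory stops changing (relative test)."""
    N = N0
    u, p, gamma = run_algorithm(N)             # First step
    for _ in range(max_doublings):
        N *= 2                                 # Second step: halve the time step
        u_new, p_new, gamma_new = run_algorithm(N)
        # Third step: compare with the previous trajectory on the coarse grid
        if (np.linalg.norm(gamma_new[::2] - gamma)
                / np.linalg.norm(gamma_new[::2])) <= eps_gamma:
            return u_new, p_new, gamma_new, N
        u, p, gamma = u_new, p_new, gamma_new
    return u, p, gamma, N
```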

7. Numerical Experiments

In order to illustrate the behavior of the previous algorithm, let us present the results of some experiments with Ω = [−2.5, 2.5] × [−2.5, 2.5].
Let us denote by 1_ω the indicator function of ω for any ω ⊂ Ω; we consider an initial mosquito population given by
u₀(x, y) ≡ 10.0 · 1_{B₁}(x, y) + 10.0 · 1_{B₂}(x, y),
where B₁ and B₂ are, respectively, the circle of center (1.5, 1.5) and radius 0.5 and the circle of center (−1.0, −1.0) and radius 0.5. This means that the mosquito population is initially concentrated in two disjoint circles with the same radius and the same amount of population, see Figure 2.
We also consider the following values for the parameters and functions in (3):
α = 1.0,   b = 10.0,   k(x, y) ≡ exp[ −(x² + y²) / (0.5)² ].
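For reference, the following short Python sketch encodes this setup (the function names are ours, the circle centres are those given above, and this only mirrors the stated data for, e.g., plotting; the actual computations in the paper were carried out with FreeFem++).

```python
import numpy as np

# Parameters of the experiments in Section 7.
alpha, b = 1.0, 10.0

def k(x, y):
    """Insecticide kernel k(x, y) = exp[-(x^2 + y^2)/0.5^2]."""
    return np.exp(-(x ** 2 + y ** 2) / 0.5 ** 2)

def u0(x, y):
    """Initial population: 10 on each of the circles B1 and B2, 0 elsewhere."""
    in_B1 = (x - 1.5) ** 2 + (y - 1.5) ** 2 <= 0.5 ** 2
    in_B2 = (x + 1.0) ** 2 + (y + 1.0) ** 2 <= 0.5 ** 2
    return 10.0 * in_B1 + 10.0 * in_B2

# Sample the initial state on Omega = [-2.5, 2.5]^2, e.g. for plotting.
xs = np.linspace(-2.5, 2.5, 81)
X, Y = np.meshgrid(xs, xs)
U0 = u0(X, Y)
print(U0.min(), U0.max())   # 0.0 and 10.0, as in Figure 2
```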
For the stopping criteria, we have taken
ϵ outer = 0.05 and ϵ inner = 0.2 .
The computations have been performed by using the software FreeFem++ [13], and all figures were made with Octave [14].

7.1. Convergence Behavior

In this section, we test the convergence of the algorithm described in Section 6.1. We take a(x, t) ≡ 1.0 in (3) and μ₀ = μ₁ = μ₂ = 1. The numerical solution of the optimality system (4)–(6) is computed for several values of the final time T and the number of subintervals N:
T = 2 , 4 , 6 ; N = 50 , 100 , 200 .
Moreover, we consider three different regular meshes T h k , k = 1 , 2 , 3 , with m × m lateral nodes. Table 1 shows the size h k , the number of vertices n v , and the number of triangles n T for each mesh.
With no control, i.e., b = 0 in (3), the solution evolves to a final state that is displayed in Figure 3a,c,e for T = 2, 4 and 6, respectively. In these figures, we observe that the mosquito population spreads and increases in Ω. When we apply the control, due to the initial distribution of the population, we expect to obtain a solution that starts traveling in the direction of the nearest group of mosquitoes (the circle B₂ in the present case) and then changes course in the direction of the farthest group (B₁). This is what is found in each case, as can be seen respectively in Figure 3b,d,f, where, besides the trajectories, the respective computed optimal states are also shown at the final time.
Table 2 reports the minimal and maximal values of the state u in all cases depicted in Figure 3; Table 3 presents the required number of outer iterations and the obtained values of the cost functional F for the three considered meshes.
From the results in Table 3, we can observe that, for T = 2, 4, the change in the cost is lower than 4% for the third mesh and N = 200. For this reason, we select these parameters to perform the simulations in the following sections. To keep the computational cost at a reasonable level, we also use the same parameters for T = 6, where the relative change is about 13%.

7.2. Influence of the Functional Weights

In this section, we study the influence of the functional weights. As in the previous section, we take a(x, t) ≡ 1.0. Table 4 presents the required number of outer iterations and the values of the cost functional in four cases I, II, III, and IV for three values of the final time: T = 2, 4, and 6. Calculations were made by using N = 200 time steps and the third spatial mesh of Table 1.
From the results in Table 4, we see that the cost is larger in case I for small T. This seems to indicate that, when we assign major relevance to the remaining population, short time operations are not satisfactory. For larger T, the device cost becomes more important. We see, however, that, as T grows, the costs have a tendency to equalize (and the particular values of the μ i seem to lose relevance).
Figure 4, Figure 5 and Figure 6 depict the trajectories and states u, respectively, at T = 2 , 4 , 6 for all cases described in Table 4. The minimal and maximal values of the mosquito population u for each case are reported in Table 5.

7.3. Two Examples with a ( · , · ) Variable

In this section, we present the calculations obtained by considering the following two different definitions of the function a:
a(x, t) = a₀ max{ 1 − x² − y², 0 },    (40)
a(x, t) = a₀ max{ 1 − t, 0 },    (41)
with a 0 a positive constant associated with the proliferation velocity of the insect population. With these choices, we will study separately the influence of a in the spatial and time behavior of the state u and the control γ .
Calculations were carried out by considering T = 2 with N = 200 time steps, the third spatial mesh of Table 1 and the following values for a 0 , and the weights of the functional F:
a 0 = 1 , 2 , 4 , μ 0 = μ 1 = μ 2 = 1 .
Table 6 details the values of the number of outer iterations and the respective values of the cost functional in the case of a ( x , t ) defined in (40) and (41) by considering three different values of a 0 .
Figure 7 depicts the uncontrolled state, the controlled state, and the respective trajectory γ for T = 2 in the case of a ( x , t ) defined in (40) by considering three different values of a 0 : 1 (Figure 7a,b), 2 (Figure 7c,d), and 4 (Figure 7e,f). We observe that the optimal control trajectories are qualitatively similar in all cases and stay close to the initial point. This is because the non-zero values of a (where the insects proliferate) depend on the spatial coordinate and are located in the unit ball centered at this point.
Figure 8 depicts the uncontrolled state, the controlled state, and the respective trajectory γ for T = 2 in the case of a ( x , t ) defined in (41) by considering three different values of a 0 : 1 (Figure 8a,b), 2 (Figure 8c,d), and 4 (Figure 8e,f). In these cases, we can observe the influence of the time coordinate t: the length of the trajectory γ increases when the value of a 0 does, giving a similar behavior for the case a = 1 studied in Section 7.1.
The respective minimal and maximal values of the uncontrolled and controlled states corresponding to Figure 7 and Figure 8 are reported in Table 7. The influence of the control γ on the state u, which decreases both its minimal and maximal values, is clearly observed.

8. Conclusions

We have performed a rigorous analysis of an optimal control problem concerning the control of mosquito populations by moving insecticide-spreading devices; the optimality conditions have been used to devise a suitable numerical scheme and compute an optimal trajectory.
The success in completing these tasks with the help of appropriate theoretical and numerical tools seems to indicate that a similar analysis can be performed in other more complex cases. Thus, it can be more natural to consider, instead of (2), the cost functional
F(γ) := μ₀ ∫_0^T |γ| dt + μ₁ ∫_0^T |γ̇| dt + μ₂ ∫_Q u dx dt.
Indeed, in this functional, the three terms can be respectively viewed as measures of the true length of the path traveled by the device, the total fuel needed in the process, and the total mosquito population along (0, T). The L¹ norms of γ, γ̇ and u represent quantities more adequate to the model, although the analysis of the corresponding control problem is more involved.
Other realistic situations can also be taken into account. For instance, a very interesting setup appears when there are obstacles to the admissible trajectories. In addition, instead of the Malthusian growth rate for the mosquito population assumed in the present work, we could assume a Verhulstian or Gompertzian growth rate. The corresponding models and their associated control problems are being investigated at present.

Author Contributions

Conceptualization, J.L.B. and A.L.A.d.A.; Software, E.F.-C., M.L.O. and R.C.C.; validation, E.F.-C. and R.C.C.; Formal analysis, J.L.B., M.L.O. and A.L.A.d.A.; Investigation, J.L.B. and A.L.A.d.A.; Writing—original draft preparation, J.L.B. and A.L.A.d.A.; Writing—review and editing, J.L.B., E.F.-C. and R.C.C. All authors have read and agreed to the published version of the manuscript.

Funding

Anderson L.A. de Araujo is partially supported by FAPESP, grant 2006/02262-9, Brazil, José L. Boldrini is partially sponsored by CNPq grant 306182/2014-9, Brazil, and also by FAPESP grant 2009/15098-0, Brazil, and Enrique Fernández-Cara is partially supported by DGI-MINECO, grant MTM2013-41286-P.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Girsanov, I.V. Lectures on Mathematical Theory of Extremum Problems; Lecture Notes in Economics and Mathematical Systems; Springer: Berlin, Germany, 1972; Volume 67.
  2. Brandão, A.J.V.; Magalhães, P.M.D.; Fernández-Cara, E.; Rojas-Medar, M.A. Theoretical analysis and control results for the FitzHugh–Nagumo equation. Electron. J. Differ. Equ. 2008, 164, 1–20.
  3. Boldrini, J.L.; Caretta, B.M.C.; Fernández-Cara, E. Some optimal control problems for a two-phase field model of solidification. Rev. Mat. Complut. 2010, 23, 49–75.
  4. Boldrini, J.L.; Fernández-Cara, E.; Rojas-Medar, M.A. An optimal control problem for a generalized Boussinesq model: The time dependent case. Rev. Mat. Complut. 2007, 20, 339–366.
  5. Gayte, I.; Guillén-González, F.; Rojas-Medar, M.A. Dubovitskii–Milyutin formalism applied to optimal control problems with constraints given by the heat equation with final data. IMA J. Math. Control Inform. 2010, 27, 57–76.
  6. Coronel, A.; Huancas, F.; Lozada, E.; Rojas-Medar, M. The Dubovitskii and Milyutin Methodology Applied to an Optimal Control Problem Originating in an Ecological System. Mathematics 2021, 9, 479.
  7. De Araujo, A.L.A. Análise Matemática de um Modelo de Controle de Populações de Mosquitos (Mathematical Analysis of a Model for the Control of Mosquito Populations). Ph.D. Thesis, State University of Campinas, Campinas, Brazil, 2010. (In Portuguese)
  8. Adams, R.A. Sobolev Spaces; Academic Press: New York, NY, USA, 1975.
  9. Ladyzhenskaya, O.A.; Solonnikov, V.A.; Uraltseva, N.N. Linear and Quasilinear Equations of Parabolic Type; American Mathematical Society: Providence, RI, USA, 1968.
  10. Lions, J.-L. Contrôle des Systèmes Distribués Singuliers; Méthodes Mathématiques de l'Informatique; Gauthier-Villars: Paris, France, 1983.
  11. Simon, J. Compact sets in the space L^p(0, T; B). Ann. Mat. Pura Appl. 1987, 146, 65–96.
  12. Alexéev, V.; Fomine, S.; Tikhomirov, V. Commande Optimale; Mir: Moscow, Russia, 1982.
  13. Hecht, F. New developments in FreeFem++. J. Numer. Math. 2012, 20, 251–265.
  14. Eaton, J.W.; Bateman, D.; Hauberg, S.; Wehbring, R. GNU Octave Version 3.8.1 Manual: A High-Level Interactive Language for Numerical Computations; CreateSpace Independent Publishing Platform, 2014; ISBN 1441413006. Available online: http://www.gnu.org/software/octave/doc/interpreter/ (accessed on 3 June 2021).
Figure 1. Scheme of the considered problem in a rectangular domain Ω with dimensions L and H. The trajectory γ with initial and final points O 1 , O 1 , respectively, follows a path connecting the sets B 1 and B 2 where an initial mosquito population density is concentrated.
Figure 2. The initial state u with 0 u 10 .
Figure 3. Uncontrolled state u (a,c,e) and Trajectory γ and controlled state u (b,d,f) at different times T with a minimal and a maximal value detailed in Table 2.
Figure 4. Trajectory γ and state u at T = 2 for the considered four cases with a minimal and a maximal value detailed in Table 5.
Figure 5. Trajectory γ and state u at T = 4 for the considered four cases with a minimal and a maximal value detailed in Table 5.
Figure 6. Trajectory γ and state u at T = 6 for the considered four cases with a minimal and a maximal value detailed in Table 5.
Figure 7. Uncontrolled state u (a,c,e) and Trajectory γ and controlled state u (b,d,f) at T = 2 for a ( x , t ) in (40) with a 0 = 1 , 2 , 4 with minimum and maximal values given in Table 7.
Figure 8. Uncontrolled state u (a,c,e) and Trajectory γ and controlled state u (b,d,f) at T = 2 for a ( x , t ) in (41) with a 0 = 1 , 2 , 4 with minimum and maximal values given in Table 7.
Table 1. Details of the meshes.
| Mesh | Lateral nodes m × m | Size h_k | Vertices n_v | Triangles n_T |
| Mesh 1 | 20 × 20 | 0.3536 | 441 | 800 |
| Mesh 2 | 40 × 40 | 0.1768 | 1681 | 3200 |
| Mesh 3 | 80 × 80 | 0.0884 | 6561 | 12,800 |
Table 2. Minimal and maximal values of the state u at T = 2, 4, 6 for Figure 3.
| T | Uncontrolled u_min | Uncontrolled u_max | Controlled u_min | Controlled u_max |
| T = 2 (Figure 3a,b) | 1.6517 | 8.3723 | 0.8906 | 3.4263 |
| T = 4 (Figure 3c,d) | 24.0696 | 46.0825 | 2.9007 | 13.5890 |
| T = 6 (Figure 3e,f) | 229.9722 | 305.1255 | 16.3627 | 77.1129 |
Table 3. Number of outer iterations (o.i.) and cost for each final time T, number of time subintervals N and mesh.
| N | Mesh | T = 2 o.i. | T = 2 Cost | T = 4 o.i. | T = 4 Cost | T = 6 o.i. | T = 6 Cost |
| 50 | 20 × 20 | 10 | 72.2095 | 8 | 923.973 | 12 | 34,349.7 |
| 50 | 40 × 40 | 11 | 109.81 | 10 | 1603.25 | 11 | 65,883.5 |
| 50 | 80 × 80 | 11 | 117.87 | 10 | 1765.48 | 11 | 73,208.5 |
| 100 | 20 × 20 | 9 | 72.1707 | 8 | 828.703 | 12 | 25,471.3 |
| 100 | 40 × 40 | 11 | 110.059 | 11 | 1426.18 | 11 | 48,688.7 |
| 100 | 80 × 80 | 11 | 118.189 | 11 | 1566.38 | 11 | 54,065 |
| 200 | 20 × 20 | 9 | 72.0812 | 8 | 816.119 | 11 | 22,673.7 |
| 200 | 40 × 40 | 11 | 110.172 | 9 | 1398.03 | 11 | 42,226.1 |
| 200 | 80 × 80 | 11 | 118.333 | 11 | 1516.09 | 11 | 46,869.8 |
Table 4. Number of outer iterations (o.i.) and cost for three final times T and four cases.
| Case | μ₀ | μ₁ | μ₂ | T = 2 o.i. | T = 2 Cost | T = 4 o.i. | T = 4 Cost | T = 6 o.i. | T = 6 Cost |
| I | 1 | 1 | 10 | 12 | 799.843 | 11 | 14,175.4 | 11 | 467,884 |
| II | 10 | 1 | 1 | 11 | 157.0 | 11 | 1589.28 | 11 | 46,938.6 |
| III | 1 | 10 | 1 | 8 | 179.831 | 10 | 1969.46 | 11 | 47,592.8 |
| IV | 10 | 10 | 1 | 5 | 201.137 | 10 | 2043.81 | 11 | 47,662.2 |
Table 5. Minimal and maximal values of the controlled state u at T = 2, 4, 6 for cases I, II, III, and IV depicted in Figure 4, Figure 5 and Figure 6.
| Panel | T = 2 (Figure 4) u_min | u_max | T = 4 (Figure 5) u_min | u_max | T = 6 (Figure 6) u_min | u_max |
| (a) | 0.7468 | 2.9680 | 2.8730 | 13.5205 | 16.3729 | 77.1137 |
| (b) | 0.8214 | 4.9817 | 3.0829 | 13.5843 | 16.5243 | 77.0914 |
| (c) | 1.2411 | 4.7836 | 3.2204 | 14.2664 | 16.2753 | 77.1378 |
| (d) | 1.1635 | 6.6453 | 3.2603 | 14.2821 | 16.3462 | 77.1164 |
Table 6. Number of outer iterations (o.i.) and values of the cost functional.
| a₀ | (40): o.i. | (40): Cost | (41): o.i. | (41): Cost |
| 1 | 4 | 28.2127 | 5 | 50.0751 |
| 2 | 3 | 28.5942 | 13 | 92.2256 |
| 4 | 3 | 29.4916 | 12 | 350.886 |
Table 7. Minimal and maximal values of the state u at T = 2 for Figure 7 and Figure 8.
| Panel | Figure 7 u_min | Figure 7 u_max | Figure 8 u_min | Figure 8 u_max |
| (a) | 0.2341 | 1.1472 | 0.3595 | 1.8495 |
| (b) | 0.1579 | 0.9652 | 0.2517 | 1.4524 |
| (c) | 0.2547 | 1.1726 | 0.5950 | 3.0458 |
| (d) | 0.1678 | 0.9859 | 0.3769 | 1.5651 |
| (e) | 0.3181 | 2.3765 | 1.6465 | 8.3448 |
| (f) | 0.1982 | 1.0071 | 0.8043 | 3.6763 |
