1. Introduction
Distributed-order fractional operators were introduced and studied by Caputo at the end of the previous century [1,2]. They can be seen as a generalization of fractional-order derivatives and integrals, in the sense that these operators are defined by a weighted integral of different orders of differentiation over a certain range. The subject gained further interest at the beginning of the current century, when researchers from different mathematical disciplines attempted to solve differential equations with distributed-order derivatives [3,4,5,6]. At the same time, in applied mathematics, distributed-order fractional operators began to be used successfully to describe complex phenomena modeling real-world problems; see, for instance, works in viscoelasticity [7,8] and in diffusion [9]. Today, the study of distributed-order systems with fractional derivatives is an active research topic; see, e.g., [10,11,12] and references therein.
Fractional optimal control deals with optimization problems involving fractional differential equations together with a performance index functional. One of the most important results in this area is the Pontryagin Maximum Principle, which gives a first-order necessary optimality condition that every solution to the dynamic optimization problem must satisfy. By applying such a result, one can identify candidate solutions to the optimal control problem. For the state of the art on fractional optimal control, we refer the reader to [13,14,15] and references therein. Recently, distributed-order fractional problems of the calculus of variations were introduced and investigated in [16]. Here, our main aim is to extend the distributed-order fractional Euler–Lagrange equation of [16] to the Pontryagin setting (see Remark 2).
Regarding optimal control of problems with distributed-order fractional operators, results are scarce and reduce to the following two papers: [17,18]. Both works develop numerical methods, while here we are interested in analytical results, not in numerical approaches. Moreover, our results are new and bring new insights. Indeed, in [17], the problem is considered with Riemann–Liouville distributed derivatives, while here we consider optimal control problems with Caputo distributed derivatives. We must also note an inconsistency in [17]: as a consequence of integration by parts (cf. Lemma 1), when the control system is defined with a Riemann–Liouville derivative, the adjoint system should involve a Caputo derivative and, conversely, when the control system is defined with Caputo derivatives, the adjoint equation should involve a Riemann–Liouville operator. This inconsistency has been corrected in [18], where optimal control problems with Caputo distributed derivatives (as in this paper) are considered. Unfortunately, there is still an inconsistency in the necessary optimality conditions of both [17,18]: the transversality conditions are written there exactly as in the classical case, with the multiplier vanishing at the end of the interval, while the correct condition, as we prove in our Theorem 1, should involve a distributed integral operator; see condition (3).
The text is organized as follows. We begin by recalling the necessary definitions and results from the literature in Section 2. Our original results are then given in Section 3. More precisely, we consider fractional optimal control problems where the dynamical system constraint depends on distributed-order fractional derivatives. We prove a weak version of Pontryagin’s maximum principle for the considered distributed-order fractional problems (see Theorem 1) and investigate a Mangasarian-type sufficient optimality condition (see Theorem 2). Examples illustrating the usefulness of the obtained results are also given (see Examples 1 and 2). We end with conclusions in Section 4, where some possibilities of future research are also mentioned.
2. Preliminaries
In this section, we recall the necessary results and fix notation. We assume the reader to be familiar with the standard Riemann–Liouville and Caputo fractional calculi [19,20].
Let $\alpha$ be a real number in $[0,1]$ and let $\psi$ be a non-negative continuous function defined on $[0,1]$ such that
$$\int_0^1 \psi(\alpha)\,\mathrm{d}\alpha > 0.$$
This function $\psi$ will act as a distribution of the order of differentiation.
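For orientation, a simple admissible choice of weight (an illustration of ours, not taken from the original text) is
$$\psi(\alpha) = 6\,\alpha\,(1-\alpha), \qquad \int_0^1 6\,\alpha\,(1-\alpha)\,\mathrm{d}\alpha = 6\left(\tfrac{1}{2} - \tfrac{1}{3}\right) = 1 > 0,$$
which is non-negative and continuous on $[0,1]$ and concentrates most of the weight around the order $\alpha = 1/2$.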
Definition 1 (See [1]).
The left and right-sided Riemann–Liouville distributed-order fractional derivatives of a function $x$ are defined, respectively, by
$$\mathbb{D}^{\psi}_{a+} x(t) = \int_0^1 \psi(\alpha)\, {}_aD^{\alpha}_t x(t)\,\mathrm{d}\alpha \quad \text{and} \quad \mathbb{D}^{\psi}_{b-} x(t) = \int_0^1 \psi(\alpha)\, {}_tD^{\alpha}_b x(t)\,\mathrm{d}\alpha,$$
where ${}_aD^{\alpha}_t$ and ${}_tD^{\alpha}_b$ are, respectively, the left and right-sided Riemann–Liouville fractional derivatives of order α.
Definition 2 (See [1]).
The left and right-sided Caputo distributed-order fractional derivatives of a function $x$ are defined, respectively, by
$${}^{C}\mathbb{D}^{\psi}_{a+} x(t) = \int_0^1 \psi(\alpha)\, {}^{C}_{a}D^{\alpha}_t x(t)\,\mathrm{d}\alpha \quad \text{and} \quad {}^{C}\mathbb{D}^{\psi}_{b-} x(t) = \int_0^1 \psi(\alpha)\, {}^{C}_{t}D^{\alpha}_b x(t)\,\mathrm{d}\alpha,$$
where ${}^{C}_{a}D^{\alpha}_t$ and ${}^{C}_{t}D^{\alpha}_b$ are, respectively, the left and right-sided Caputo fractional derivatives of order α.
As noted in [16], there is a relation between the Riemann–Liouville and the Caputo distributed-order fractional derivatives:
$$\mathbb{D}^{\psi}_{a+} x(t) = {}^{C}\mathbb{D}^{\psi}_{a+} x(t) + x(a)\int_0^1 \psi(\alpha)\,\frac{(t-a)^{-\alpha}}{\Gamma(1-\alpha)}\,\mathrm{d}\alpha$$
and
$$\mathbb{D}^{\psi}_{b-} x(t) = {}^{C}\mathbb{D}^{\psi}_{b-} x(t) + x(b)\int_0^1 \psi(\alpha)\,\frac{(b-t)^{-\alpha}}{\Gamma(1-\alpha)}\,\mathrm{d}\alpha.$$
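Although not needed for the analytical development that follows, the next Python sketch may help the reader experiment with Definition 2 numerically. It approximates the left Caputo distributed-order derivative at the endpoint of a uniform grid by combining Gauss–Legendre quadrature in the order variable α with the standard L1 discretization of each Caputo derivative; the function names, the grid, and the discretization choices are ours and serve only as an illustration.

import numpy as np
from math import gamma

def caputo_l1(x_vals, t_grid, alpha):
    # L1 approximation of the left Caputo derivative of order alpha in (0, 1),
    # evaluated at the last point of the uniform grid t_grid.
    h = t_grid[1] - t_grid[0]
    n = len(t_grid) - 1
    weights = [(n - k) ** (1 - alpha) - (n - k - 1) ** (1 - alpha) for k in range(n)]
    increments = [x_vals[k + 1] - x_vals[k] for k in range(n)]
    return sum(w * d for w, d in zip(weights, increments)) / (gamma(2 - alpha) * h ** alpha)

def distributed_caputo(x_vals, t_grid, psi, n_quad=20):
    # Quadrature in alpha over (0, 1) of psi(alpha) times the Caputo derivative.
    nodes, w = np.polynomial.legendre.leggauss(n_quad)
    alphas = 0.5 * (nodes + 1.0)   # map Gauss-Legendre nodes from (-1, 1) to (0, 1)
    quad_w = 0.5 * w
    return sum(wj * psi(a) * caputo_l1(x_vals, t_grid, a) for a, wj in zip(alphas, quad_w))

# Example: x(t) = t on [0, 1] with the weight psi(alpha) = 6*alpha*(1 - alpha).
t = np.linspace(0.0, 1.0, 201)
print(distributed_caputo(t, t, lambda a: 6 * a * (1 - a)))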
Along the text, we use the notation
$$\mathbb{I}^{1-\psi}_{b-} x(t) = \int_0^1 \psi(\alpha)\, {}_tI^{1-\alpha}_{b} x(t)\,\mathrm{d}\alpha,$$
where ${}_tI^{1-\alpha}_{b}$ represents the right Riemann–Liouville fractional integral of order $1-\alpha$.
The next result has an essential role in the proofs of our main results; that is, in the proofs of Theorems 1 and 2.
Lemma 1 (Integration by parts formula [16]).
Let x be a continuous function and y a continuously differentiable function. Then,
$$\int_a^b x(t)\, {}^{C}\mathbb{D}^{\psi}_{a+} y(t)\,\mathrm{d}t = \Big[ y(t)\, \mathbb{I}^{1-\psi}_{b-} x(t) \Big]_{t=a}^{t=b} + \int_a^b y(t)\, \mathbb{D}^{\psi}_{b-} x(t)\,\mathrm{d}t.$$
Next, we recall the standard notion of concave function, which will be used in Section 3.3.
Definition 3 (See [21]).
A function $h:\mathbb{R}^2 \to \mathbb{R}$ is concave if
$$h\big(\beta a_1 + (1-\beta) b_1,\, \beta a_2 + (1-\beta) b_2\big) \geq \beta\, h(a_1, a_2) + (1-\beta)\, h(b_1, b_2)$$
for all $\beta \in [0,1]$ and for all $(a_1, a_2)$, $(b_1, b_2)$ in $\mathbb{R}^2$.
Lemma 2 (See [21]).
Let $h:\mathbb{R}^2 \to \mathbb{R}$ be a continuously differentiable function. Then h is a concave function if and only if it satisfies the so-called gradient inequality:
$$h(b_1, b_2) - h(a_1, a_2) \leq \nabla h(a_1, a_2) \cdot (b_1 - a_1,\, b_2 - a_2)$$
for all $(a_1, a_2)$, $(b_1, b_2)$ in $\mathbb{R}^2$.
Section 3.1.
Lemma 3 (See [
22]).
Let α be a positive real number and let , , and be non-negative continuous functions on with monotonic increasing on . Ifthenfor all .
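Only a particular case of Lemma 3 is used below, namely the one in which both $a(\cdot)$ and $g(\cdot)$ are constant. For the reader's convenience we note, as a standard corollary of [22] stated here in our own wording, that if $a(t) \equiv c_1$ and $g(t) \equiv c_2$ are non-negative constants, then the series in the conclusion of Lemma 3 sums to a Mittag–Leffler function:
$$u(t) \leq c_1 \sum_{n=0}^{\infty} \frac{\big(c_2\,\Gamma(\alpha)\,t^{\alpha}\big)^{n}}{\Gamma(n\alpha+1)} = c_1\, E_{\alpha}\big(c_2\,\Gamma(\alpha)\,t^{\alpha}\big).$$
This is the bound used implicitly at the end of the proof of Lemma 4.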
3. Main Results
The basic problem of optimal control we consider in this work, denoted by (BP), consists in finding a piecewise continuous control $u(\cdot)$ and the corresponding piecewise smooth state trajectory $x(\cdot)$, solution of the distributed-order non-local variational problem
$$J\big[x(\cdot), u(\cdot)\big] = \int_a^b L\big(t, x(t), u(t)\big)\,\mathrm{d}t \longrightarrow \max,$$
$${}^{C}\mathbb{D}^{\psi}_{a+} x(t) = f\big(t, x(t), u(t)\big), \qquad x(a) = x_a,$$
where the functions L and f, both defined on $[a,b] \times \mathbb{R} \times \mathbb{R}$, are assumed to be continuously differentiable in all their three arguments: $L, f \in C^1\big([a,b] \times \mathbb{R}^2; \mathbb{R}\big)$. Our main contribution is to prove necessary (Section 3.2) and sufficient (Section 3.3) optimality conditions.
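The control system in (BP) is non-local, so even simulating the state trajectory for a fixed control requires some care. The following Python sketch, an illustration of ours that assumes a uniform grid, Gauss–Legendre quadrature in α, and an explicit L1-type discretization whose accuracy is not analysed here, indicates one simple way to integrate the distributed-order control system for a given control u.

import numpy as np
from math import gamma

def solve_distributed_system(f, u, psi, a, b, x_a, n_steps=200, n_quad=20):
    # Integrate C_D^psi_{a+} x(t) = f(t, x(t), u(t)), x(a) = x_a, on a uniform grid.
    h = (b - a) / n_steps
    t = a + h * np.arange(n_steps + 1)
    x = np.empty(n_steps + 1)
    x[0] = x_a
    nodes, w = np.polynomial.legendre.leggauss(n_quad)
    alphas = 0.5 * (nodes + 1.0)   # quadrature nodes mapped to (0, 1)
    coef = 0.5 * w * np.array([psi(al) / (gamma(2 - al) * h ** al) for al in alphas])
    A = coef.sum()   # weight multiplying the newest increment x_n - x_{n-1}
    for n in range(1, n_steps + 1):
        # memory part of the L1 sums (all past increments except the newest one)
        R = 0.0
        for j, al in enumerate(alphas):
            k = np.arange(n - 1)
            c = (n - k) ** (1 - al) - (n - k - 1) ** (1 - al)
            R += coef[j] * np.dot(c, np.diff(x[:n]))
        # explicit step: the right-hand side is evaluated at the previous state
        x[n] = x[n - 1] + (f(t[n], x[n - 1], u(t[n])) - R) / A
    return t, x

# Example: f(t, x, u) = -x + u, constant control u = 1, psi(alpha) = 6*alpha*(1 - alpha).
t, x = solve_distributed_system(lambda t, x, u: -x + u, lambda t: 1.0,
                                lambda al: 6 * al * (1 - al), 0.0, 1.0, 0.0)
print(x[-1])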
3.1. Sensitivity Analysis
Before we can prove necessary optimality conditions for problem (BP), we need to establish continuity and differentiability results on the state solutions for any control perturbation (Lemmas 4 and 5), which are then used in Section 3.2. The proof of Lemma 4 makes use of the following mean value theorem for integration, which can be found in any calculus textbook (see Lemma 1 of [23]): if $\varphi$ is a continuous function and $g$ is an integrable function that does not change sign on the interval $[a,b]$, then there exists a number $\xi \in [a,b]$ such that
$$\int_a^b \varphi(t)\, g(t)\,\mathrm{d}t = \varphi(\xi)\int_a^b g(t)\,\mathrm{d}t.$$
Lemma 4 (Continuity of solutions).
Let $u_{\epsilon}$ be a control perturbation around the optimal control $u$, that is, for all $t \in [a,b]$, $u_{\epsilon}(t) = u(t) + \epsilon\, h(t)$, where $h$ is a variation and $\epsilon \in \mathbb{R}$. Denote by $x_{\epsilon}$ its corresponding state trajectory, solution of
$${}^{C}\mathbb{D}^{\psi}_{a+} x_{\epsilon}(t) = f\big(t, x_{\epsilon}(t), u_{\epsilon}(t)\big), \qquad x_{\epsilon}(a) = x_a.$$
Then, we have that $x_{\epsilon}$ converges to the optimal state trajectory $x$ when ϵ tends to zero.
Proof. Starting from the definition, we have, for all $t \in [a,b]$, that
Then, by linearity,
and it follows, by definition of the distributed operator, that
Now, using the mean value theorem for integration, and denoting
, we obtain that there exists an
such that
Clearly, one has
which leads to
Moreover, because
f is Lipschitz-continuous, we have
By setting
, it follows that
for all
. Now, by applying Lemma 3 (the fractional Gronwall inequality), it follows that
The series in the last inequality is a Mittag–Leffler function and thus convergent. Hence, by taking the limit when $\epsilon$ tends to zero, we obtain the desired result: $x_{\epsilon}(t) \to x(t)$ for all $t \in [a,b]$. □
Lemma 5 (Differentiation of the perturbed trajectory).
There exists a function η defined on $[a,b]$ such that
$$x_{\epsilon}(t) = x(t) + \epsilon\,\eta(t) + o(\epsilon).$$
Proof. Since
, we have that
Observe that
and
when
and, by Lemma 4, we have
when
. Thus, the residue term can be expressed in terms of
only, that is, the residue is
. Therefore, we have
which leads to
meaning that
We want to prove the existence of the limit
, that is, to prove that
. This is indeed the case, since
is solution of the distributed order fractional differential equation
The intended result is proven. □
3.2. Pontryagin’s Maximum Principle of Distributed-Order
The following result is a necessary condition of Pontryagin type [
24] for the basic distributed-order non-local optimal control problem (BP).
Theorem 1 (Pontryagin Maximum Principle for (BP)). If $\big(x(\cdot), u(\cdot)\big)$ is an optimal pair for (BP), then there exists a function λ, called the adjoint function variable, such that the following conditions hold for all t in the interval $[a,b]$:
The optimality condition
$$\frac{\partial H}{\partial u}\big(t, x(t), u(t), \lambda(t)\big) = 0; \qquad (1)$$
The adjoint equation
$$\mathbb{D}^{\psi}_{b-} \lambda(t) = \frac{\partial H}{\partial x}\big(t, x(t), u(t), \lambda(t)\big); \qquad (2)$$
The transversality condition
$$\Big[\mathbb{I}^{1-\psi}_{b-} \lambda(t)\Big]_{t=b} = 0; \qquad (3)$$
where the Hamiltonian H is defined by $H(t, x, u, \lambda) = L(t, x, u) + \lambda\, f(t, x, u)$.
Proof. Let
be the solution to problem (BP),
be a variation, and
a real constant. Define
, so that
. Let
be the state corresponding to the control
, that is, the state solution of
Note that
for all
whenever
. Furthermore,
Something similar is also true for
. Because
, it follows from Lemma 4 that, for each fixed
t,
as
. Moreover, by Lemma 5, the derivative
exists for each
t. The objective functional at
is
Next, we introduce the adjoint function
. Let
be in
, to be determined. By the integration by parts formula (see Lemma 1),
and one has
Adding this zero to the expression
gives
which by (
4) is equivalent to
Since the process
is assumed to be a maximizer of problem (BP), the derivative of
with respect to
must vanish at
; that is,
where the partial derivatives of
L and
f, with respect to
x and
u, are evaluated at
. Rearranging the term and using (
5), we obtain that
Setting
, it follows that
where the partial derivatives of
H are evaluated at
. Now, choosing
that is, given the adjoint equation (
2) and the transversality condition (
3), it yields
and, by the fundamental lemma of the calculus of variations [
25], we have the optimality condition (
1):
This concludes the proof. □
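Before turning to remarks and examples, the following small symbolic computation in Python, an illustration of ours with hypothetical data L and f that are not those of Example 1 below, shows how the quantities entering the optimality condition (1) and the adjoint equation (2) are formed in practice, assuming the Hamiltonian $H = L + \lambda f$ as in Theorem 1.

import sympy as sp

t, x, u, lam = sp.symbols("t x u lambda")

# Hypothetical data, chosen only for illustration.
L = x - u**2 / 2      # integrand of the performance index
f = -x + u            # right-hand side of the distributed-order control system
H = L + lam * f       # Hamiltonian, assuming the form H = L + lambda*f

optimality = sp.Eq(sp.diff(H, u), 0)   # condition (1): dH/du = 0
adjoint_rhs = sp.diff(H, x)            # right-hand side of condition (2)

print(sp.solve(optimality, u))   # -> [lambda]: candidate control expressed through lambda
print(adjoint_rhs)               # -> 1 - lambda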
Remark 1. If we change the basic optimal control problem (BP) by replacing the boundary condition given on the state variable at initial time, $x(a) = x_a$, with a terminal condition, then the optimality condition and the adjoint equation of the Pontryagin Maximum Principle (Theorem 1) remain exactly the same. Changes appear only in the transversality condition:
A boundary condition at final/terminal time, that is, fixing the value $x(b)$ with $x(a)$ remaining free, leads to
$$\Big[\mathbb{I}^{1-\psi}_{b-} \lambda(t)\Big]_{t=a} = 0.$$
In the case when no boundary condition is given (i.e., both $x(a)$ and $x(b)$ are free), then we have
$$\Big[\mathbb{I}^{1-\psi}_{b-} \lambda(t)\Big]_{t=a} = 0 \quad \text{and} \quad \Big[\mathbb{I}^{1-\psi}_{b-} \lambda(t)\Big]_{t=b} = 0.$$
Remark 2. If $f(t, x, u) = u$, that is, ${}^{C}\mathbb{D}^{\psi}_{a+} x(t) = u(t)$, then our problem (BP) gives a basic problem of the calculus of variations, in the distributed-order fractional sense of [16]. In this very particular case, we obtain from our Theorem 1 the Euler–Lagrange equation of [16] (cf. Theorem 2 of [16]).
Remark 3. Our distributed-order fractional optimal control problem (BP) can be easily extended to the vector setting. Precisely, let $x(t) \in \mathbb{R}^n$ and $u(t) \in \mathbb{R}^m$ with $n, m \in \mathbb{N}$, and let the functions L and f be continuously differentiable with respect to all their components. If $\big(x(\cdot), u(\cdot)\big)$ is an optimal pair, then the following conditions hold for $t \in [a,b]$:
The optimality conditions
The transversality conditions
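In sketch form, assuming the vector Hamiltonian $H(t, x, u, \lambda) = L(t, x, u) + \lambda^{\mathsf{T}} f(t, x, u)$ with $\lambda(t) \in \mathbb{R}^n$ (our notation, not spelled out in the remark above), these conditions, together with the corresponding adjoint equations, can be written componentwise as
$$\frac{\partial H}{\partial u_j} = 0, \quad j = 1, \ldots, m; \qquad \mathbb{D}^{\psi}_{b-} \lambda_i(t) = \frac{\partial H}{\partial x_i}, \quad i = 1, \ldots, n; \qquad \Big[\mathbb{I}^{1-\psi}_{b-} \lambda_i(t)\Big]_{t=b} = 0, \quad i = 1, \ldots, n.$$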
Definition 4. The candidate solutions of (BP), obtained by the application of our Theorem 1, will be called (Pontryagin) extremals.
We now illustrate the usefulness of our Theorem 1 with an example.
Example 1. The triple given by , , and , for , is an extremal of the following distributed-order fractional optimal control problem: Indeed, by defining the Hamiltonian function as it follows:
From the optimality condition,
From the adjoint equation,
From the transversality condition,
We easily see that (9), (10) and (11) are satisfied for

3.3. Sufficient Condition for Global Optimality
We now prove a Mangasarian-type theorem for the distributed-order fractional optimal control problem (BP).
Theorem 2. Consider the basic distributed-order fractional optimal control problem (BP). If $(x, u) \mapsto L(t, x, u)$ and $(x, u) \mapsto f(t, x, u)$ are concave and $\big(x^{*}(\cdot), u^{*}(\cdot), \lambda(\cdot)\big)$ is a Pontryagin extremal with $\lambda(t) \geq 0$, $t \in [a,b]$, then $J\big[x^{*}(\cdot), u^{*}(\cdot)\big] \geq J\big[x(\cdot), u(\cdot)\big]$ for any admissible pair $\big(x(\cdot), u(\cdot)\big)$.
Proof. Because
L is concave as a function of
x and
u, we have from Lemma 2 that
for any control
u and its associated trajectory
x. This gives
From the adjoint equation (
2), we have
From the optimality condition (
1), we know that
It follows from (
12) that
Using the integration by parts formula of Lemma 1,
meaning that
Substituting (
14) into (
13), we get
Finally, taking into account that $\lambda(t) \geq 0$ and f is concave in both x and u, we conclude that $J\big[x^{*}(\cdot), u^{*}(\cdot)\big] - J\big[x(\cdot), u(\cdot)\big] \geq 0$. □
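In practice, when L and f are twice continuously differentiable, their concavity in $(x, u)$ can be verified through the standard second-order test, recalled here in our own wording: for each fixed t, it suffices that the Hessian with respect to $(x, u)$ be negative semidefinite, which for scalar x and u amounts to
$$L_{xx} \leq 0, \qquad L_{uu} \leq 0, \qquad L_{xx}\,L_{uu} - L_{xu}^{2} \geq 0,$$
and similarly for f. This is the kind of verification carried out in Example 2 below, there applied to the Hamiltonian.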
Example 2. The extremal given in Example 1 is a global minimizer for problem (7). This is easily checked from Theorem 2 since the Hamiltonian defined in (8) is a concave function with respect to both variables x and u and, furthermore, . In Figure 1, we give the plots of the optimal solution to problem (7).