Article

A Straightforward Sufficiency Proof for a Nonparametric Problem of Bolza in the Calculus of Variations

by
Gerardo Sánchez Licea
Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad de Mexico 04510, Mexico
Axioms 2022, 11(2), 55; https://doi.org/10.3390/axioms11020055
Submission received: 24 December 2021 / Revised: 24 January 2022 / Accepted: 25 January 2022 / Published: 29 January 2022
(This article belongs to the Special Issue Calculus of Variations and Nonlinear Partial Differential Equations)

Abstract: We study a variable end-point calculus of variations problem of Bolza containing inequality and equality constraints. The proof of the principal theorem of the paper is direct in nature, since it is independent of classical sufficiency approaches that invoke Hamilton–Jacobi theory, Riccati equations, fields of extremals or the theory of conjugate points. Instead, the argument employed to prove the principal theorem of the article relies on elementary tools of real analysis.

1. Introduction

In this paper, we study a nonparametric calculus of variations problem of Bolza with variable end-points, isoperimetric inequality and equality restrictions, and mixed pointwise inequality and equality restraints. The fundamental sufficiency theorem presented in this article assumes that a proposed optimal trajectory with an essentially bounded derivative is given and that the following conditions hold: the set of active indices of the mixed inequality restrictions is piecewise constant on the underlying time interval; the corresponding multipliers of the mixed inequality restrictions are nonnegative at each point of the basic time interval and vanish whenever the time-dependent index is inactive; the Lagrange multipliers of the inequality isoperimetric constraints are nonnegative and vanish whenever the corresponding index is inactive; a first order sufficiency condition closely related to the Euler–Lagrange equations holds; a generalized transversality condition is satisfied; an inequality hypothesis whose source is the proof of the main result of the paper is satisfied; a condition very similar to the Legendre necessary condition holds; a quadratic integral is positive on the cone of critical directions; and three conditions involving the Weierstrass excess functions delimiting the problem are satisfied. Under these assumptions, the deviation between any admissible cost and the proposed optimal cost can be estimated from below by a quadratic functional whose role is very similar to that of the square of the norm of the Banach space of Lebesgue integrable functions. In particular, the result shows that if the proposed optimal trajectory satisfies the above sufficiency conditions, then it is a strict strong minimum of the problem at hand.
It is worth mentioning that the proof of the main sufficiency theorem of the paper is self-contained, in the sense that it is independent of classical approaches such as those that invoke the theory of Mayer fields through path-independent integrals (commonly called Hilbert integrals), Hamilton–Jacobi theory (which frequently relies on a fundamental inequality), symmetric solutions of certain Riccati equations, generalizations of the conjugate point theory, local convexity arguments, or the embedding of the proposed optimal trajectory in a field of extremals; see for instance [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. On the other hand, it is important to point out that the calculus of variations aims at a generalization of the structure of classical analysis that makes possible the solution of extremum problems having numerous applications in the qualitative analysis of various classes of ordinary and partial differential equations; see, e.g., the papers [17,18] for more details. The technique used in this article to obtain the main theorem corresponds to a generalization of a method originally introduced by Hestenes in [9]. This algorithm has been generalized in [19,20,21] for the case of a parametric problem of the calculus of variations; however, a direct sufficiency proof for the nonparametric problem of Bolza had not been provided. This direct sufficiency proof not only has the advantage that one does not need to invoke a parametric problem, as is done in [19,20,21], but it also provides sufficient conditions for a strict strong minimum, and not only for a strong minimum as is the case in [20,21].
Some of the novelties of the main theorem of the paper, as well as of the technique employed to prove it, can be described as follows. The result has a wide range of applicability, since the functions delimiting the problem only have to be continuous on their domain and to possess first and second partial derivatives with respect to the state and the state-derivative variables; smoothness of these first and second partial derivatives is not imposed. Moreover, the derivative of the proposed optimal trajectory need not be continuous but only essentially bounded, just as the derivatives of the admissible trajectories must only be essentially bounded. In fact, we have already provided concrete examples in which this sufficiency theory gives an answer while the classical sufficiency theories for optimality cannot detect it, since they require smoothness of the optimal trajectory on the basic time interval; see [21]. Finally, the technique used to prove the main theorem allows us to avoid imposing preliminary assumptions that do not appear in the statements of the theorems, in contrast with some classical necessary and sufficient optimality theories. To mention a few, in [12,22] it is indispensable that the gradients arising from the pointwise mixed constraints be linearly independent at each point of the underlying time interval, and in [22,23,24] preliminary assumptions of normality or regularity play a crucial role in obtaining the necessary optimality theory.
The paper is organized as follows. In Section 2, we pose the problem to be studied, introduce some basic definitions and state the main result of the article. In Section 3, we illustrate the sufficiency theorem by means of an example. In Section 4, we state two auxiliary lemmas whose statements and proofs can be found in [21]. Finally, in Section 5, we develop the proof of Theorem 1.

2. The Problem and the Sufficiency Theorem

Suppose that an interval $T:=[t_0,t_1]$ in $\mathbb{R}$ is given, and that we have functions $l,l_\gamma\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ $(\gamma=1,\ldots,K)$, $\Phi_i\colon\mathbb{R}^n\to\mathbb{R}^n$ $(i=0,1)$, $L(t,x,\dot x)\colon T\times\mathbb{R}^{2n}\to\mathbb{R}$, $L_\gamma(t,x,\dot x)\colon T\times\mathbb{R}^{2n}\to\mathbb{R}$ $(\gamma=1,\ldots,K)$ and $\varphi(t,x,\dot x)\colon T\times\mathbb{R}^{2n}\to\mathbb{R}^s$. Let
$$A:=\{(t,x,\dot x)\in T\times\mathbb{R}^{2n}\mid \varphi_\alpha(t,x,\dot x)\le 0\ (\alpha\in R),\ \varphi_\beta(t,x,\dot x)=0\ (\beta\in S)\},$$
where $R:=\{1,\ldots,r\}$ and $S:=\{r+1,\ldots,s\}$ $(r=0,1,\ldots,s)$. If $r=0$ then $R=\emptyset$ and we disregard the assertions concerning $\varphi_\alpha$. Similarly, if $r=s$ then $S=\emptyset$ and we disregard the assertions concerning $\varphi_\beta$.
Throughout the article we assume that $L$, $L_\gamma$ $(\gamma=1,\ldots,K)$ and $\varphi$ have first and second derivatives with respect to $x$ and $\dot x$. Furthermore, if we denote by $g(t,x,\dot x)$ either $L(t,x,\dot x)$, $L_\gamma(t,x,\dot x)$ $(\gamma=1,\ldots,K)$, $\varphi(t,x,\dot x)$ or any of their partial derivatives of order less than or equal to two with respect to $x$ and $\dot x$, we suppose that if $G$ is any bounded subset of $T\times\mathbb{R}^{2n}$, then $|g(G)|$ is a bounded subset of $\mathbb{R}$. Additionally, we suppose that if $((\Lambda_q,\Gamma_q))$ is any sequence in $AC(T;\mathbb{R}^n)\times L^1(T;\mathbb{R}^n)$ such that, for some measurable $\Theta\subset T$ and some $(\Lambda,\Gamma)\in AC(T;\mathbb{R}^n)\times L^\infty(T;\mathbb{R}^n)$, $(\Lambda_q(\cdot),\Gamma_q(\cdot))\xrightarrow{L}(\Lambda(\cdot),\Gamma(\cdot))$ on $\Theta$, then for all $q\in\mathbb{N}$, $g(\cdot,\Lambda_q(\cdot),\Gamma_q(\cdot))$ is measurable on $\Theta$ and
$$g(\cdot,\Lambda_q(\cdot),\Gamma_q(\cdot))\xrightarrow{L} g(\cdot,\Lambda(\cdot),\Gamma(\cdot))\quad\text{on }\Theta.$$
Note that all the conditions given above are satisfied if the functions $L$, $L_\gamma$ $(\gamma=1,\ldots,K)$ and $\varphi$, together with their first and second derivatives with respect to $x$ and $\dot x$, are continuous on $T\times\mathbb{R}^{2n}$. We shall also assume that the functions $l$, $l_\gamma$ $(\gamma=1,\ldots,K)$ are of class $C^2$ on $\mathbb{R}^n\times\mathbb{R}^n$ and that $\Phi_i$ $(i=0,1)$ are of class $C^2$ on $\mathbb{R}^n$.
The calculus of variations problem we shall be concerned with, labeled (P), is that of finding a minimum of the functional
$$I(x):=l(x(t_0),x(t_1))+\int_{t_0}^{t_1}L(t,x(t),\dot x(t))\,dt$$
over all absolutely continuous functions $x\colon T\to\mathbb{R}^n$ satisfying the constraints
$$\begin{cases} g(\cdot,x(\cdot),\dot x(\cdot))\ \text{is integrable on}\ T,\\ x(t_i)=\Phi_i(x(t_{1-i}))\ \ (i=0,1),\\ I_i(x):=l_i(x(t_0),x(t_1))+\int_{t_0}^{t_1}L_i(t,x(t),\dot x(t))\,dt\le 0\ \ (i=1,\ldots,k),\\ I_j(x):=l_j(x(t_0),x(t_1))+\int_{t_0}^{t_1}L_j(t,x(t),\dot x(t))\,dt=0\ \ (j=k+1,\ldots,K),\\ (t,x(t),\dot x(t))\in A\ \ (\text{a.e. in }T).\end{cases}$$
Designate by $X$ the space of absolutely continuous functions mapping $T$ to $\mathbb{R}^n$ and by $U_s$ the Banach space $L^\infty(T;\mathbb{R}^s)$ $(s\in\mathbb{N})$. Elements of $X$ are called arcs or trajectories, and an arc $x$ is admissible or feasible if it satisfies the constraints. A trajectory $x$ solves (P) if it is feasible and $I(x)\le I(y)$ for all feasible arcs $y$. An admissible arc $x$ is called a strong minimum of (P) if it is a minimum of $I$ relative to the norm
$$\|x\|:=\sup_{t\in T}|x(t)|,$$
that is, if there exists $\epsilon>0$ such that $I(x)\le I(y)$ for all feasible trajectories $y$ satisfying $\|y-x\|<\epsilon$. It is a strict strong minimum if, for such trajectories, $I(x)=I(y)$ only when $x=y$.
The following definitions will be useful throughout the paper. The symbol $*$ denotes transpose.
• Given $K$ real numbers $\lambda_\gamma$ $(\gamma=1,\ldots,K)$, consider the functional $I_\lambda\colon X\to\mathbb{R}$ defined by
$$I_\lambda(x):=I(x)+\sum_{\gamma=1}^{K}\lambda_\gamma I_\gamma(x)=l_\lambda(x(t_0),x(t_1))+\int_{t_0}^{t_1}L_\lambda(t,x(t),\dot x(t))\,dt,$$
where $l_\lambda\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ is given by
$$l_\lambda(a_1,a_2):=l(a_1,a_2)+\sum_{\gamma=1}^{K}\lambda_\gamma l_\gamma(a_1,a_2),$$
and $L_\lambda\colon T\times\mathbb{R}^{2n}\to\mathbb{R}$ is defined by
$$L_\lambda(t,x,\dot x):=L(t,x,\dot x)+\sum_{\gamma=1}^{K}\lambda_\gamma L_\gamma(t,x,\dot x).$$
• For all $(t,x,\dot x,\rho,\mu)\in T\times\mathbb{R}^{3n}\times\mathbb{R}^s$, set
$$H(t,x,\dot x,\rho,\mu):=\rho^*\dot x-L_\lambda(t,x,\dot x)-\mu^*\varphi(t,x,\dot x).$$
If $\rho\in X$ and $\mu\in U_s$ are given, set, for all $(t,x,\dot x)\in T\times\mathbb{R}^{2n}$,
$$F_\lambda(t,x,\dot x):=-H(t,x,\dot x,\rho(t),\mu(t))-\dot\rho^*(t)x,$$
and let
$$J_\lambda(x):=\rho^*(t_1)x(t_1)-\rho^*(t_0)x(t_0)+l_\lambda(x(t_0),x(t_1))+\int_{t_0}^{t_1}F_\lambda(t,x(t),\dot x(t))\,dt.$$
• The first variations of $J_\lambda$ and $I_\gamma$ $(\gamma=1,\ldots,K)$ along $x\in X$ with $\dot x\in L^\infty(T;\mathbb{R}^n)$ in the direction $y\in X$ are given, respectively, by
$$J_\lambda'(x,y):=l_\lambda'(x(t_0),x(t_1))\begin{pmatrix}y(t_0)\\ y(t_1)\end{pmatrix}+\int_{t_0}^{t_1}\{F_{\lambda x}(t,x(t),\dot x(t))y(t)+F_{\lambda\dot x}(t,x(t),\dot x(t))\dot y(t)\}\,dt,$$
$$I_\gamma'(x,y):=l_\gamma'(x(t_0),x(t_1))\begin{pmatrix}y(t_0)\\ y(t_1)\end{pmatrix}+\int_{t_0}^{t_1}\{L_{\gamma x}(t,x(t),\dot x(t))y(t)+L_{\gamma\dot x}(t,x(t),\dot x(t))\dot y(t)\}\,dt.$$
The second variation of $J_\lambda$ along $x\in X$ with $\dot x\in L^\infty(T;\mathbb{R}^n)$ in the direction $y\in X$ with $\dot y\in L^2(T;\mathbb{R}^n)$ is given by
$$J_\lambda''(x,y):=(y^*(t_0),y^*(t_1))\,l_\lambda''(x(t_0),x(t_1))\begin{pmatrix}y(t_0)\\ y(t_1)\end{pmatrix}+\int_{t_0}^{t_1}2\omega_\lambda(t,x(t),\dot x(t);t,y(t),\dot y(t))\,dt,$$
where, for all $(t,y,\dot y)\in T\times\mathbb{R}^{2n}$,
$$2\omega_\lambda(t,x(t),\dot x(t);t,y,\dot y):=y^*F_{\lambda xx}(t,x(t),\dot x(t))y+2y^*F_{\lambda x\dot x}(t,x(t),\dot x(t))\dot y+\dot y^*F_{\lambda\dot x\dot x}(t,x(t),\dot x(t))\dot y.$$
• Set
$$E_\lambda(t,x,\dot x,u):=F_\lambda(t,x,u)-F_\lambda(t,x,\dot x)-F_{\lambda\dot x}(t,x,\dot x)(u-\dot x).$$
Similarly, for all $\gamma=1,\ldots,K$, set
$$E_\gamma(t,x,\dot x,u):=L_\gamma(t,x,u)-L_\gamma(t,x,\dot x)-L_{\gamma\dot x}(t,x,\dot x)(u-\dot x).$$
• For all $x\in X$, set
$$D(x):=V(x(t_0))+\int_{t_0}^{t_1}V(\dot x(t))\,dt,$$
where, for all $e\in\mathbb{R}^n$,
$$V(e):=(1+|e|^2)^{1/2}-1.$$
Finally, for all $(t,x,\dot x)\in T\times\mathbb{R}^{2n}$, designate by
$$I_a(t,x,\dot x):=\{\alpha\in R\mid \varphi_\alpha(t,x,\dot x)=0\}$$
the set of active indices of $(t,x,\dot x)$ corresponding to the mixed inequality constraints. Given $x\in X$, designate by
$$i_a(x):=\{i\in\{1,\ldots,k\}\mid I_i(x)=0\}$$
the set of active indices of $x$ corresponding to the isoperimetric inequality restrictions. For all $x\in X$, let $Y(x)$ be the set of all $y\in X$ with $\dot y\in L^2(T;\mathbb{R}^n)$ satisfying
$$\begin{cases} y(t_i)=\Phi_i'(x(t_{1-i}))\,y(t_{1-i})\ \ (i=0,1),\\ I_i'(x,y)\le 0\ (i\in i_a(x)),\qquad I_j'(x,y)=0\ (j=k+1,\ldots,K),\\ \varphi_{\alpha x}(t,x(t),\dot x(t))y(t)+\varphi_{\alpha\dot x}(t,x(t),\dot x(t))\dot y(t)\le 0\ \ (\text{a.e. in }T,\ \alpha\in I_a(t,x(t),\dot x(t))),\\ \varphi_{\beta x}(t,x(t),\dot x(t))y(t)+\varphi_{\beta\dot x}(t,x(t),\dot x(t))\dot y(t)=0\ \ (\text{a.e. in }T,\ \beta\in S).\end{cases}$$
The cone $Y(x)$ is commonly called the cone of critical directions along $x$.
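To build intuition for the functional $D$ just defined, which measures the deviation $D(x-x_0)$ appearing in the conclusion of Theorem 1 below, the following short Python sketch approximates $D(x-x_0)$ for discretized scalar arcs; the grid, the helper names and the two sample trajectories are illustrative choices of ours and are not taken from the paper.

```python
import numpy as np

def V(e):
    # V(e) = (1 + |e|^2)^(1/2) - 1, as defined above
    return np.sqrt(1.0 + np.abs(e) ** 2) - 1.0

def D_functional(x, x0, t):
    # Discretized D(x - x0) = V(x(t0) - x0(t0)) + integral of V(d/dt(x - x0)),
    # using numpy finite differences and the trapezoidal rule.
    diff = x - x0
    ddot = np.gradient(diff, t)
    return V(diff[0]) + np.trapz(V(ddot), t)

# Illustrative data (not from the paper): T = [-1, 1], x0(t) = -t and
# a nearby comparison arc x(t) = -t + 0.1 * (1 - t**2).
t = np.linspace(-1.0, 1.0, 2001)
x0 = -t
x = -t + 0.1 * (1.0 - t ** 2)
print("D(x - x0) ~", D_functional(x, x0, t))
```

Note that $D(x-x_0)$ vanishes exactly when $x=x_0$, which is why an estimate of the form $I(x)\ge I(x_0)+\nu_2 D(x-x_0)$ yields a strict strong minimum.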
Theorem 1.
Let $x_0$ be a feasible arc with $\dot x_0\in L^\infty(T;\mathbb{R}^n)$. Assume that $I_a(\cdot,x_0(\cdot),\dot x_0(\cdot))$ is piecewise constant on $T$, that there exist $\rho\in X$, $\mu\in U_s$ satisfying $\mu_\alpha(t)\ge 0$, $\mu_\alpha(t)\varphi_\alpha(t,x_0(t),\dot x_0(t))=0$ $(\alpha\in R,\ \text{a.e. in }T)$, $\delta,\epsilon>0$, and multipliers $\lambda_i$ $(i=1,\ldots,K)$ satisfying $\lambda_i\ge 0$, $\lambda_i I_i(x_0)=0$ $(i=1,\ldots,k)$ such that
$$\dot\rho(t)=-H_x^*(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))\quad(\text{a.e. in }T),$$
$$H_{\dot x}^*(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=0\quad(\text{a.e. in }T),$$
and the following assumptions hold:
i.
$$l_\lambda'^{\,*}(x_0(t_0),x_0(t_1))+\begin{pmatrix}\Phi_1'^{\,*}(x_0(t_0)) & 0_{n\times n}\\ 0_{n\times n} & \Phi_0'^{\,*}(x_0(t_1))\end{pmatrix}\begin{pmatrix}\rho(t_1)\\ -\rho(t_0)\end{pmatrix}=0.$$
ii.
$$\sum_{i=0}^{1}(-1)^{i+1}\rho^*(t_i)\,\Phi_i''(x_0(t_{1-i});h)\ge 0\quad\text{for all }h\in\mathbb{R}^n.$$
iii.
$$H_{\dot x\dot x}(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))\le 0\quad(\text{a.e. in }T).$$
iv.
$J_\lambda''(x_0,y)>0$ for all $y\ne 0$, $y\in Y(x_0)$.
v.
For all feasible $x$ satisfying $\|x-x_0\|<\epsilon$,
(a)
$$E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\ge 0\quad(\text{a.e. in }T);$$
(b)
$$\int_{t_0}^{t_1}E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\,dt\ge\delta\int_{t_0}^{t_1}V(\dot x(t)-\dot x_0(t))\,dt;$$
(c)
$$\int_{t_0}^{t_1}E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\,dt\ge\delta\left|\int_{t_0}^{t_1}E_\gamma(t,x(t),\dot x_0(t),\dot x(t))\,dt\right|\quad(\gamma=1,\ldots,K).$$
Then, there exist $\nu_1,\nu_2>0$ such that, if $x$ is feasible with $\|x-x_0\|<\nu_1$, we have
$$I(x)\ge I(x_0)+\nu_2 D(x-x_0).$$
In particular, $x_0$ is a strict strong minimum of (P).

3. Example

In this section, we give an illustration of Theorem 1 by means of an example.
Let (P) be the problem of finding a minimum of the functional
$$I(x):=x^2(-1)-2x(-1)+\int_{-1}^{1}2(x(t)+t)^2\,dt$$
over all absolutely continuous functions $x\colon[-1,1]\to\mathbb{R}$ satisfying the constraints
$$\begin{cases} g(\cdot,x(\cdot),\dot x(\cdot))\ \text{is integrable on}\ [-1,1],\\ x(-1)=-x(1),\\ I_1(x):=\int_{-1}^{1}\{(\dot x(t)+1)^2+\dot x(t)(x(t)+t)^2\}\,dt\le 0,\\ (t,x(t),\dot x(t))\in A\ \ (\text{a.e. in }[-1,1]).\end{cases}$$
For this case, $T=[-1,1]$, $n=1$, $K=k=1$, $r=s=1$, $\Phi_0=\Phi_1=-\mathrm{Id}$ where $\mathrm{Id}$ is the identity function, $l(a_1,a_2)=a_1^2-2a_1$, $l_1(a_1,a_2)=0$, $L(t,x,\dot x)=2(x+t)^2$, $L_1(t,x,\dot x)=(\dot x+1)^2+\dot x(x+t)^2$, $\varphi_1(t,x,\dot x)=-\dot x-1$ and
$$A=\{(t,x,\dot x)\in T\times\mathbb{R}^2\mid \varphi_1(t,x,\dot x)\le 0\}.$$
For all $(t,x,\dot x,\rho,\mu)\in T\times\mathbb{R}^4$, we have
$$H(t,x,\dot x,\rho,\mu)=\rho\dot x-2(x+t)^2-\lambda_1(\dot x+1)^2-\lambda_1\dot x(x+t)^2+\mu_1[\dot x+1],$$
$$H_x(t,x,\dot x,\rho,\mu)=-4(x+t)-2\lambda_1\dot x(x+t),$$
$$H_{\dot x}(t,x,\dot x,\rho,\mu)=\rho-2\lambda_1(\dot x+1)-\lambda_1(x+t)^2+\mu_1.$$
Let $x_0(t):=-t$ on $T$ and note that $x_0\in X=AC(T;\mathbb{R})$, $\dot x_0\in L^\infty(T;\mathbb{R})$ and $x_0$ is admissible. Furthermore, note that $I_a(\cdot,x_0(\cdot),\dot x_0(\cdot))\equiv\{1\}$ on $T$, and hence it is constant on $T$. Set $\rho\equiv\mu_1\equiv 0$ on $T$ and note that $\rho\in X$ and $\mu=\mu_1\in U_1=L^\infty(T;\mathbb{R})$. Moreover, observe that $\mu_1(t)\ge 0$ and $\mu_1(t)\varphi_1(t,x_0(t),\dot x_0(t))=0$ $(\alpha\in R=\{1\},\ \text{a.e. in }T)$. Additionally, let $\lambda_1=1$ and note that $\lambda_1\ge 0$ and $\lambda_1 I_1(x_0)=0$. With these concepts in mind, observe that
$$\dot\rho(t)=-H_x(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))\quad(\text{a.e. in }T),$$
$$H_{\dot x}(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=0\quad(\text{a.e. in }T).$$
Now, note that $l_\lambda(a_1,a_2)=l(a_1,a_2)+\lambda_1 l_1(a_1,a_2)=a_1^2-2a_1$ and hence
$$l_\lambda'(a_1,a_2)=(2a_1-2,\,0)$$
and $l_\lambda'(x_0(-1),x_0(1))=(0,0)$. As $\rho\equiv 0$ on $T$, one readily verifies that hypotheses (i) and (ii) of Theorem 1 are satisfied. Furthermore, observe that $H_{\dot x\dot x}(t,x,\dot x,\rho,\mu)=-2\lambda_1$, and so $H_{\dot x\dot x}(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=-2$ (a.e. in $T$); hence hypothesis (iii) of Theorem 1 is also verified. Now, note that since
$$H_{xx}(t,x,\dot x,\rho,\mu)=-4-2\lambda_1\dot x\quad\text{and}\quad H_{x\dot x}(t,x,\dot x,\rho,\mu)=-2\lambda_1(x+t),$$
then $H_{xx}(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=-2$ and $H_{x\dot x}(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=0$ (a.e. in $T$).
Furthermore,
$$l_\lambda''(a_1,a_2)=\begin{pmatrix}2 & 0\\ 0 & 0\end{pmatrix}$$
and so,
$$l_\lambda''(x_0(-1),x_0(1))=\begin{pmatrix}2 & 0\\ 0 & 0\end{pmatrix}.$$
Then, the second variation $J_\lambda''$ is given by
$$J_\lambda''(x_0,y)=2y^2(-1)+\int_{-1}^{1}2\{y^2(t)+\dot y^2(t)\}\,dt,$$
which is greater than zero for all $y\ne 0$, $y\in Y(x_0)$, where $Y(x_0)$ is the set of all $y\in X$ with $\dot y\in L^2(T;\mathbb{R})$ satisfying
$$\begin{cases} y(-1)=-y(1),\\ I_1'(x_0,y)\le 0\ \ (i\in i_a(x_0)=\{1\}),\\ \dot y(t)\ge 0\ \ (\text{a.e. in }T).\end{cases}$$
Thus, hypothesis (iv) of Theorem 1 is satisfied. We also have that
$$F_\lambda(t,x,\dot x)=2(x+t)^2+(\dot x+1)^2+\dot x(x+t)^2.$$
Consequently, if $x$ is admissible, then for almost all $t\in T$,
$$E_\lambda(t,x(t),\dot x_0(t),\dot x(t))=(\dot x(t)+1)^2+\dot x(t)(x(t)+t)^2+(x(t)+t)^2-(x(t)+t)^2(\dot x(t)+1)=(\dot x(t)+1)^2,$$
and so, if $x$ is admissible, then
(a)
$$E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\ge 0\quad(\text{a.e. in }T);$$
(b)
$$\int_{-1}^{1}E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\,dt=\int_{-1}^{1}(\dot x(t)+1)^2\,dt\ge\int_{-1}^{1}V(\dot x(t)-\dot x_0(t))\,dt.$$
Moreover, as one readily verifies, if $x$ is admissible, then for almost all $t\in T$,
$$E_1(t,x(t),\dot x_0(t),\dot x(t))=(\dot x(t)+1)^2+\dot x(t)(x(t)+t)^2+(x(t)+t)^2-(x(t)+t)^2(\dot x(t)+1)=E_\lambda(t,x(t),\dot x_0(t),\dot x(t)),$$
and hence, if $x$ is admissible, then
(c)
$$\int_{-1}^{1}E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\,dt=\left|\int_{-1}^{1}E_1(t,x(t),\dot x_0(t),\dot x(t))\,dt\right|,$$
implying that hypothesis (v) of Theorem 1 is verified with any $\epsilon>0$ and $\delta=1$. Then, there exist $\nu_1,\nu_2>0$ such that, if $x$ is admissible with $\|x-x_0\|<\nu_1$, we have
$$I(x)\ge I(x_0)+\nu_2 D(x-x_0).$$
In particular, $x_0$ is a strict strong minimum of (P).
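As an independent sanity check of the computations in this example, the following sympy sketch symbolically verifies the first-order conditions at $x_0(t)=-t$ with $\rho\equiv\mu_1\equiv 0$ and $\lambda_1=1$, and that the excess function $E_\lambda$ reduces to $(\dot x+1)^2$. The script and its variable names are our own illustration and are not part of the article.

```python
import sympy as sp

t, x, xdot, u, rho, mu1 = sp.symbols('t x xdot u rho mu1', real=True)
lam1 = 1

# Data of the example
L  = 2 * (x + t) ** 2
L1 = (xdot + 1) ** 2 + xdot * (x + t) ** 2
phi1 = -xdot - 1
H = rho * xdot - L - lam1 * L1 - mu1 * phi1

# Proposed minimizer x0(t) = -t (so xdot0 = -1) and multipliers rho = mu1 = 0
at_x0 = {x: -t, xdot: -1, rho: 0, mu1: 0}

print(sp.simplify(sp.diff(H, x).subs(at_x0)))     # 0, so rho' = -H_x holds with rho = 0
print(sp.simplify(sp.diff(H, xdot).subs(at_x0)))  # 0, the condition H_xdot = 0
print(sp.diff(H, xdot, 2).subs(at_x0))            # -2 <= 0, hypothesis (iii)

# Excess function: with rho = mu = 0 we have F_lambda = L + lam1 * L1
F = L + lam1 * L1
E = F.subs(xdot, u) - F.subs(xdot, -1) - sp.diff(F, xdot).subs(xdot, -1) * (u + 1)
print(sp.simplify(E - (u + 1) ** 2))              # 0, i.e. E_lambda = (u + 1)^2 >= 0
```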

4. Auxiliary Lemmas

In this section we state two auxiliary lemmas, used in the proof of Theorem 1, whose statements and proofs are given in Lemmas 4.1 and 4.2 of [21].
In the following we suppose that we are given $x_0\in X$ and a sequence $(x_q)$ in $X$ such that
$$\lim_{q\to\infty}D(x_q-x_0)=0\quad\text{and}\quad d_q:=[2D(x_q-x_0)]^{1/2}>0\quad(q\in\mathbb{N}).$$
For all $q\in\mathbb{N}$, define
$$y_q:=\frac{x_q-x_0}{d_q}.$$
We write $\dot x_q\xrightarrow{au}\dot x_0$ on $T$ if, for any $\epsilon>0$, there exists a measurable set $\Theta_\epsilon\subset T$ with $m(\Theta_\epsilon)<\epsilon$ such that $\dot x_q\xrightarrow{u}\dot x_0$ on $T\setminus\Theta_\epsilon$, that is, if $(\dot x_q)$ converges uniformly to $\dot x_0$ on $T\setminus\Theta_\epsilon$.
We shall not relabel the subsequences of a given sequence, since this does not modify our results.
Lemma 1.
For some subsequence of $(x_q)$ and some $y_0\in X$ with $\dot y_0\in L^2(T;\mathbb{R}^n)$, we have $\dot x_q\xrightarrow{au}\dot x_0$ on $T$, $y_q\xrightarrow{u}y_0$ on $T$ and $\dot y_q\xrightarrow{L^1}\dot y_0$ on $T$.
Lemma 2.
Let $\Theta\subset T$ be measurable, $R_\lambda\in L^\infty(\Theta;\mathbb{R}^{n\times n})$ and let $(R_q)$ be a sequence in $L^\infty(\Theta;\mathbb{R}^{n\times n})$. If $\dot x_q\xrightarrow{u}\dot x_0$ on $\Theta$, $R_q\xrightarrow{u}R_\lambda$ on $\Theta$ and $R_\lambda(t)\ge 0$ $(t\in\Theta)$, then
$$\liminf_{q\to\infty}\int_\Theta\dot y_q^*(t)R_q(t)\dot y_q(t)\,dt\ \ge\ \int_\Theta\dot y_0^*(t)R_\lambda(t)\dot y_0(t)\,dt.$$
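To see why Lemma 2 provides only a lower bound, the following small numerical sketch, with data chosen by us purely for illustration (it is not taken from [21]), takes $n=1$, $\Theta=[0,1]$, $R_q\equiv R_\lambda\equiv 1$ and the oscillating arcs $y_q(t)=(1-\cos(q\pi t))/(q\pi)$, which converge uniformly to $y_0\equiv 0$ while $\int_\Theta\dot y_q^2\,dt$ stays near $1/2$; for a sequence of this oscillatory type the left-hand side need not converge to the right-hand side, which is why only a liminf inequality can be expected.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)
R = 1.0  # R_q = R_lambda = 1 (n = 1), so R_lambda >= 0 holds trivially

for q in (1, 5, 25, 125):
    y_q = (1.0 - np.cos(q * np.pi * t)) / (q * np.pi)  # y_q -> y_0 = 0 uniformly
    ydot_q = np.sin(q * np.pi * t)                     # derivative of y_q
    lhs = np.trapz(ydot_q * R * ydot_q, t)             # integral of ydot_q * R_q * ydot_q
    print(q, float(np.max(np.abs(y_q))), float(lhs))

# The printed integrals stay close to 1/2, while the right-hand side of the
# lemma for the uniform limit y_0 = 0 is 0.
```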

5. Proof of Theorem 1

Proof. 
The proof of Theorem 1 will be made by contradiction; that is, we are going to assume that, for all $\nu_1,\nu_2>0$, there exists an admissible trajectory $x$ such that
$$\|x-x_0\|<\nu_1\quad\text{and}\quad I(x)<I(x_0)+\nu_2 D(x-x_0).\tag{1}$$
We recall also that $I_a(\cdot,x_0(\cdot),\dot x_0(\cdot))$ is piecewise constant on $T$, that $(x_0,\rho,\mu)$ satisfies the first order sufficiency conditions
$$\dot\rho(t)=-H_x^*(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))\quad(\text{a.e. in }T),$$
$$H_{\dot x}^*(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=0\quad(\text{a.e. in }T),$$
and hypotheses (i), (ii), (iii) and (v) of Theorem 1. We are going to obtain the negation of hypothesis (iv) of Theorem 1.
First note that, as
$$\mu_\alpha(t)\ge 0\ (\alpha\in R,\ \text{a.e. in }T)\quad\text{and}\quad\lambda_i\ge 0\ (i=1,\ldots,k),$$
if $x$ is feasible, then $I(x)\ge J_\lambda(x)$. Furthermore, as
$$\mu_\alpha(t)\varphi_\alpha(t,x_0(t),\dot x_0(t))=0\ (\alpha\in R,\ \text{a.e. in }T)\quad\text{and}\quad\lambda_i I_i(x_0)=0\ (i=1,\ldots,k),$$
then $I(x_0)=J_\lambda(x_0)$. Consequently, (1) implies that, for all $\nu_1,\nu_2>0$, there exists $x$ admissible with
$$\|x-x_0\|<\nu_1\quad\text{and}\quad J_\lambda(x)<J_\lambda(x_0)+\nu_2 D(x-x_0).\tag{2}$$
Observe that, by setting
$$[\theta]:=(x_0(t_0)+\theta[x(t_0)-x_0(t_0)],\,x_0(t_1)+\theta[x(t_1)-x_0(t_1)]),$$
for all admissible trajectories $x$,
$$J_\lambda(x)-\int_0^1(1-\theta)(x^*(t_0)-x_0^*(t_0),\,x^*(t_1)-x_0^*(t_1))\,l_\lambda''[\theta]\begin{pmatrix}x(t_0)-x_0(t_0)\\ x(t_1)-x_0(t_1)\end{pmatrix}d\theta=\rho^*(t_1)[x(t_1)-x_0(t_1)]-\rho^*(t_0)[x(t_0)-x_0(t_0)]+J_\lambda(x_0)+J_\lambda'(x_0,x-x_0)+K_\lambda(x)+E_\lambda(x),\tag{3}$$
where
$$E_\lambda(x):=\int_{t_0}^{t_1}E_\lambda(t,x(t),\dot x_0(t),\dot x(t))\,dt,$$
$$K_\lambda(x):=\int_{t_0}^{t_1}\{M_\lambda(t,x(t))+[\dot x^*(t)-\dot x_0^*(t)]N_\lambda(t,x(t))\}\,dt,$$
and the functions $M_\lambda$ and $N_\lambda$ are given by
$$M_\lambda(t,x):=F_\lambda(t,x,\dot x_0(t))-F_\lambda(t,x_0(t),\dot x_0(t))-F_{\lambda x}(t,x_0(t),\dot x_0(t))(x-x_0(t)),$$
$$N_\lambda(t,x):=F_{\lambda\dot x}^*(t,x,\dot x_0(t))-F_{\lambda\dot x}^*(t,x_0(t),\dot x_0(t)).$$
Note that
$$M_\lambda(t,x)=\tfrac12[x^*-x_0^*(t)]P_\lambda(t,x)(x-x_0(t)),\qquad N_\lambda(t,x)=Q_\lambda(t,x)(x-x_0(t)),$$
where
$$P_\lambda(t,x):=2\int_0^1(1-\theta)F_{\lambda xx}(t,x_0(t)+\theta[x-x_0(t)],\dot x_0(t))\,d\theta,$$
$$Q_\lambda(t,x):=\int_0^1 F_{\lambda\dot x x}(t,x_0(t)+\theta[x-x_0(t)],\dot x_0(t))\,d\theta.$$
Now, we claim that there exists $\eta>0$ such that, for all admissible $x$ with $\|x-x_0\|<1$,
$$|K_\lambda(x)|\le\eta\|x-x_0\|\,[1+D(x-x_0)].\tag{4}$$
Indeed, observe that if $x$ is admissible with $\|x-x_0\|<1$, then for some $\alpha_i>0$ $(i=1,2)$ and almost all $t\in T$, we have that
$$\begin{aligned}
|M_\lambda(t,x(t))+[\dot x^*(t)-\dot x_0^*(t)]N_\lambda(t,x(t))|
&=\bigl|\tfrac12[x^*(t)-x_0^*(t)]P_\lambda(t,x(t))(x(t)-x_0(t))+[\dot x^*(t)-\dot x_0^*(t)]Q_\lambda(t,x(t))(x(t)-x_0(t))\bigr|\\
&=\bigl|\{\tfrac12[x^*(t)-x_0^*(t)]P_\lambda(t,x(t))+[\dot x^*(t)-\dot x_0^*(t)]Q_\lambda(t,x(t))\}(x(t)-x_0(t))\bigr|\\
&\le\bigl|\tfrac12[x^*(t)-x_0^*(t)]P_\lambda(t,x(t))+[\dot x^*(t)-\dot x_0^*(t)]Q_\lambda(t,x(t))\bigr|\,|x(t)-x_0(t)|\\
&\le|x(t)-x_0(t)|\bigl(\tfrac12|x(t)-x_0(t)|\,|P_\lambda(t,x(t))|+|\dot x(t)-\dot x_0(t)|\,|Q_\lambda(t,x(t))|\bigr)\\
&\le\alpha_1|x(t)-x_0(t)|\bigl(|x(t)-x_0(t)|+|\dot x(t)-\dot x_0(t)|\bigr)\\
&\le\alpha_1|x(t)-x_0(t)|\bigl(1+|\dot x(t)-\dot x_0(t)|\bigr)
\le\alpha_2|x(t)-x_0(t)|\bigl(1+|\dot x(t)-\dot x_0(t)|^2\bigr)^{1/2}.
\end{aligned}$$
Setting $\eta:=\max\{\alpha_2,(t_1-t_0)\alpha_2\}$, for $x$ admissible with $\|x-x_0\|<1$ this implies that
$$\begin{aligned}
|K_\lambda(x)|&\le\alpha_2\|x-x_0\|\int_{t_0}^{t_1}\bigl(V(\dot x(t)-\dot x_0(t))+1\bigr)\,dt
\le\alpha_2\|x-x_0\|\bigl(D(x-x_0)+t_1-t_0\bigr)\\
&=\alpha_2\|x-x_0\|D(x-x_0)+\alpha_2\|x-x_0\|(t_1-t_0)
\le\eta\|x-x_0\|D(x-x_0)+\eta\|x-x_0\|=\eta\|x-x_0\|[1+D(x-x_0)],
\end{aligned}$$
and then (4) is proved.
Now, by (2), for all $q\in\mathbb{N}$ there exists $x_q$ admissible such that
$$\|x_q-x_0\|<\epsilon,\qquad\|x_q-x_0\|<\frac{1}{q},\qquad J_\lambda(x_q)-J_\lambda(x_0)<\frac{1}{q}D(x_q-x_0).\tag{5}$$
The last inequality of (5) implies that, for all $q\in\mathbb{N}$,
$$d_q:=[2D(x_q-x_0)]^{1/2}>0.$$
Since
$$\dot\rho(t)=-H_x^*(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))\quad(\text{a.e. in }T),$$
$$H_{\dot x}^*(t,x_0(t),\dot x_0(t),\rho(t),\mu(t))=0\quad(\text{a.e. in }T),$$
we have that
$$J_\lambda'(x_0,y)=l_\lambda'(x_0(t_0),x_0(t_1))\begin{pmatrix}y(t_0)\\ y(t_1)\end{pmatrix}$$
for all $y\in X$. Having this in mind, by (3), (v)(b) of Theorem 1, (4) and (5),
$$\begin{aligned}
J_\lambda(x_q)-J_\lambda(x_0)
&=\int_0^1(1-\theta)(x_q^*(t_0)-x_0^*(t_0),\,x_q^*(t_1)-x_0^*(t_1))\,l_\lambda''[\theta]\begin{pmatrix}x_q(t_0)-x_0(t_0)\\ x_q(t_1)-x_0(t_1)\end{pmatrix}d\theta\\
&\quad+\rho^*(t_1)[x_q(t_1)-x_0(t_1)]-\rho^*(t_0)[x_q(t_0)-x_0(t_0)]+l_\lambda'(x_0(t_0),x_0(t_1))\begin{pmatrix}x_q(t_0)-x_0(t_0)\\ x_q(t_1)-x_0(t_1)\end{pmatrix}+K_\lambda(x_q)+E_\lambda(x_q)\\
&\ge\text{(the same first three terms)}-\eta\|x_q-x_0\|-\eta\|x_q-x_0\|\,D(x_q-x_0)+\delta\int_{t_0}^{t_1}V(\dot x_q(t)-\dot x_0(t))\,dt\\
&=\text{(the same first three terms)}-\eta\|x_q-x_0\|-\eta\|x_q-x_0\|\,D(x_q-x_0)+\delta D(x_q-x_0)-\delta V(x_q(t_0)-x_0(t_0)).
\end{aligned}$$
By (5), for all $q\in\mathbb{N}$,
$$D(x_q-x_0)\Bigl[\delta-\frac{\eta}{q}-\frac{1}{q}\Bigr]<\frac{\eta}{q}+\delta V(x_q(t_0)-x_0(t_0))-l_\lambda'(x_0(t_0),x_0(t_1))\begin{pmatrix}x_q(t_0)-x_0(t_0)\\ x_q(t_1)-x_0(t_1)\end{pmatrix}-\rho^*(t_1)[x_q(t_1)-x_0(t_1)]+\rho^*(t_0)[x_q(t_0)-x_0(t_0)]-\int_0^1(1-\theta)(x_q^*(t_0)-x_0^*(t_0),\,x_q^*(t_1)-x_0^*(t_1))\,l_\lambda''[\theta]\begin{pmatrix}x_q(t_0)-x_0(t_0)\\ x_q(t_1)-x_0(t_1)\end{pmatrix}d\theta.$$
Consequently,
$$\lim_{q\to\infty}D(x_q-x_0)=0.$$
For all $q\in\mathbb{N}$, define
$$y_q:=\frac{x_q-x_0}{d_q}.$$
By Lemma 1, there exist $y_0\in X$ with $\dot y_0\in L^2(T;\mathbb{R}^n)$ and some subsequence of $(x_q)$ such that $\dot y_q\xrightarrow{L^1}\dot y_0$ on $T$. Once again, by Lemma 1, there exists some subsequence of $(x_q)$ such that $y_q\xrightarrow{u}y_0$ on $T$.
We claim that
i.
$J_\lambda''(x_0,y_0)\le 0$, $y_0\ne 0$.
ii.
$y_0(t_i)=\Phi_i'(x_0(t_{1-i}))\,y_0(t_{1-i})$ for $i=0,1$.
iii.
$I_i'(x_0,y_0)\le 0$ $(i\in i_a(x_0))$, $I_j'(x_0,y_0)=0$ $(j=k+1,\ldots,K)$.
iv.
$\varphi_{\alpha x}(t,x_0(t),\dot x_0(t))y_0(t)+\varphi_{\alpha\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t)\le 0$ $(\text{a.e. in }T,\ \alpha\in I_a(t,x_0(t),\dot x_0(t)))$.
v.
$\varphi_{\beta x}(t,x_0(t),\dot x_0(t))y_0(t)+\varphi_{\beta\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t)=0$ $(\text{a.e. in }T,\ \beta\in S)$.
For all $q\in\mathbb{N}$,
$$\frac{K_\lambda(x_q)}{d_q^2}=\int_{t_0}^{t_1}\Bigl\{\frac{M_\lambda(t,x_q(t))}{d_q^2}+\dot y_q^*(t)\,\frac{N_\lambda(t,x_q(t))}{d_q}\Bigr\}\,dt.$$
By Lemma 1,
$$\frac{M_\lambda(\cdot,x_q(\cdot))}{d_q^2}\xrightarrow{L}\tfrac12\,y_0^*(\cdot)F_{\lambda xx}(\cdot,x_0(\cdot),\dot x_0(\cdot))\,y_0(\cdot),$$
$$\frac{N_\lambda(\cdot,x_q(\cdot))}{d_q}\xrightarrow{L}F_{\lambda\dot x x}(\cdot,x_0(\cdot),\dot x_0(\cdot))\,y_0(\cdot),$$
both on $T$ and, as $\dot y_q\xrightarrow{L^1}\dot y_0$ on $T$,
$$\tfrac12 J_\lambda''(x_0,y_0)=\tfrac12(y_0^*(t_0),y_0^*(t_1))\,l_\lambda''(x_0(t_0),x_0(t_1))\begin{pmatrix}y_0(t_0)\\ y_0(t_1)\end{pmatrix}+\lim_{q\to\infty}\frac{K_\lambda(x_q)}{d_q^2}+\tfrac12\int_{t_0}^{t_1}\dot y_0^*(t)F_{\lambda\dot x\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t)\,dt.\tag{6}$$
We have
$$\liminf_{q\to\infty}\frac{E_\lambda(x_q)}{d_q^2}\ge\frac12\int_{t_0}^{t_1}\dot y_0^*(t)F_{\lambda\dot x\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t)\,dt.\tag{7}$$
Indeed, by Lemma 1, we can choose a measurable set $\Theta\subset T$ such that $\dot x_q\xrightarrow{u}\dot x_0$ on $\Theta$. Additionally, for all $t\in\Theta$ and $q\in\mathbb{N}$,
$$\frac{1}{d_q^2}E_\lambda(t,x_q(t),\dot x_0(t),\dot x_q(t))=\frac12\dot y_q^*(t)R_q(t)\dot y_q(t),$$
where
$$R_q(t):=2\int_0^1(1-\theta)F_{\lambda\dot x\dot x}(t,x_q(t),\dot x_0(t)+\theta[\dot x_q(t)-\dot x_0(t)])\,d\theta.$$
Clearly,
$$R_q(\cdot)\xrightarrow{u}R_\lambda(\cdot):=F_{\lambda\dot x\dot x}(\cdot,x_0(\cdot),\dot x_0(\cdot))\quad\text{on }\Theta.$$
By hypothesis (iii) of Theorem 1, $R_\lambda(t)\ge 0$ $(t\in\Theta)$. Moreover, by hypothesis (v)(a) of Theorem 1 and by Lemma 2,
$$\liminf_{q\to\infty}\frac{E_\lambda(x_q)}{d_q^2}=\liminf_{q\to\infty}\frac{1}{d_q^2}\int_{t_0}^{t_1}E_\lambda(t,x_q(t),\dot x_0(t),\dot x_q(t))\,dt\ge\liminf_{q\to\infty}\frac{1}{d_q^2}\int_\Theta E_\lambda(t,x_q(t),\dot x_0(t),\dot x_q(t))\,dt=\frac12\liminf_{q\to\infty}\int_\Theta\dot y_q^*(t)R_q(t)\dot y_q(t)\,dt\ge\frac12\int_\Theta\dot y_0^*(t)R_\lambda(t)\dot y_0(t)\,dt.$$
As $\Theta$ can be selected to differ from $T$ by a set of arbitrarily small measure and the function $\dot y_0^*(\cdot)R_\lambda(\cdot)\dot y_0(\cdot)$ is integrable on $T$, this inequality also holds with $\Theta=T$, and hence (7) is satisfied.
By (3), (5), (6), (7) and hypotheses (i) and (ii) of Theorem 1, we have
$$\begin{aligned}
\tfrac12 J_\lambda''(x_0,y_0)
&\le\tfrac12(y_0^*(t_0),y_0^*(t_1))\,l_\lambda''(x_0(t_0),x_0(t_1))\begin{pmatrix}y_0(t_0)\\ y_0(t_1)\end{pmatrix}+\lim_{q\to\infty}\frac{K_\lambda(x_q)}{d_q^2}+\liminf_{q\to\infty}\frac{E_\lambda(x_q)}{d_q^2}\\
&=\liminf_{q\to\infty}\frac{J_\lambda(x_q)-J_\lambda(x_0)}{d_q^2}-\lim_{q\to\infty}\frac{1}{d_q^2}\Bigl\{\rho^*(t_1)[x_q(t_1)-x_0(t_1)]-\rho^*(t_0)[x_q(t_0)-x_0(t_0)]+l_\lambda'(x_0(t_0),x_0(t_1))\begin{pmatrix}x_q(t_0)-x_0(t_0)\\ x_q(t_1)-x_0(t_1)\end{pmatrix}\Bigr\}\\
&\le-\lim_{q\to\infty}\frac{1}{d_q^2}\Bigl\{\rho^*(t_1)\bigl[\Phi_1(x_q(t_0))-\Phi_1(x_0(t_0))-\Phi_1'(x_0(t_0))(x_q(t_0)-x_0(t_0))\bigr]-\rho^*(t_0)\bigl[\Phi_0(x_q(t_1))-\Phi_0(x_0(t_1))-\Phi_0'(x_0(t_1))(x_q(t_1)-x_0(t_1))\bigr]\Bigr\}\\
&=-\lim_{q\to\infty}\frac{1}{d_q^2}\Bigl\{\rho^*(t_1)\int_0^1(1-\theta)\Phi_1''\bigl(x_0(t_0)+\theta[x_q(t_0)-x_0(t_0)];\,x_q(t_0)-x_0(t_0)\bigr)\,d\theta-\rho^*(t_0)\int_0^1(1-\theta)\Phi_0''\bigl(x_0(t_1)+\theta[x_q(t_1)-x_0(t_1)];\,x_q(t_1)-x_0(t_1)\bigr)\,d\theta\Bigr\}\\
&=-\tfrac12\sum_{i=0}^{1}(-1)^{i+1}\rho^*(t_i)\,\Phi_i''\bigl(x_0(t_{1-i});\,y_0(t_{1-i})\bigr)\le 0.
\end{aligned}$$
Now, if $y_0=0$, then
$$\lim_{q\to\infty}\frac{K_\lambda(x_q)}{d_q^2}=0,$$
and hence, by hypothesis (v)(b) of Theorem 1,
$$\begin{aligned}
0&\ge\liminf_{q\to\infty}\frac{E_\lambda(x_q)}{d_q^2}\ge\delta\liminf_{q\to\infty}\frac{1}{d_q^2}\int_{t_0}^{t_1}V(\dot x_q(t)-\dot x_0(t))\,dt
=\delta\liminf_{q\to\infty}\Bigl\{\frac{D(x_q-x_0)}{d_q^2}-\frac{V(x_q(t_0)-x_0(t_0))}{d_q^2}\Bigr\}\\
&=\frac{\delta}{2}-\delta\limsup_{q\to\infty}\frac{V(x_q(t_0)-x_0(t_0))}{d_q^2}
\ge\frac{\delta}{2}-\frac{\delta}{2}\limsup_{q\to\infty}\frac{|x_q(t_0)-x_0(t_0)|^2}{d_q^2}
=\frac{\delta}{2}-\frac{\delta}{2}|y_0(t_0)|^2=\frac{\delta}{2},
\end{aligned}$$
implying that $\delta$ cannot be positive, which is not the case, and in this way we have obtained (i) of our claim.
Now, observe that since $x_q$ is admissible, for $i=0,1$ and all $q\in\mathbb{N}$ we have
$$y_q(t_i)=\Bigl(\int_0^1\Phi_i'\bigl(x_0(t_{1-i})+\theta[x_q(t_{1-i})-x_0(t_{1-i})]\bigr)\,d\theta\Bigr)\,y_q(t_{1-i}).$$
As $y_q\xrightarrow{u}y_0$ on $T$, then for $i=0,1$ we have
$$y_0(t_i)=\Phi_i'(x_0(t_{1-i}))\,y_0(t_{1-i}),$$
and so (ii) of our claim is established.
Now, let us show that
$$I_i'(x_0,y_0)\le 0\quad(i\in i_a(x_0)).\tag{8}$$
Indeed, first observe that, for all $\gamma=1,\ldots,K$,
$$I_\gamma(x)-\int_0^1(1-\theta)(x^*(t_0)-x_0^*(t_0),\,x^*(t_1)-x_0^*(t_1))\,l_\gamma''[\theta]\begin{pmatrix}x(t_0)-x_0(t_0)\\ x(t_1)-x_0(t_1)\end{pmatrix}d\theta=I_\gamma(x_0)+I_\gamma'(x_0,x-x_0)+K_\gamma(x)+E_\gamma(x),\tag{9}$$
where
$$E_\gamma(x):=\int_{t_0}^{t_1}E_\gamma(t,x(t),\dot x_0(t),\dot x(t))\,dt,$$
$$K_\gamma(x):=\int_{t_0}^{t_1}\{M_\gamma(t,x(t))+[\dot x^*(t)-\dot x_0^*(t)]N_\gamma(t,x(t))\}\,dt,$$
and the functions $M_\gamma$ and $N_\gamma$ are defined by
$$M_\gamma(t,x):=L_\gamma(t,x,\dot x_0(t))-L_\gamma(t,x_0(t),\dot x_0(t))-L_{\gamma x}(t,x_0(t),\dot x_0(t))(x-x_0(t)),$$
$$N_\gamma(t,x):=L_{\gamma\dot x}^*(t,x,\dot x_0(t))-L_{\gamma\dot x}^*(t,x_0(t),\dot x_0(t)).$$
We have
$$M_\gamma(t,x)=[x^*-x_0^*(t)]P_\gamma(t,x)(x-x_0(t)),\qquad N_\gamma(t,x)=Q_\gamma(t,x)(x-x_0(t)),$$
where
$$P_\gamma(t,x):=\int_0^1(1-\theta)L_{\gamma xx}(t,x_0(t)+\theta(x-x_0(t)),\dot x_0(t))\,d\theta,$$
$$Q_\gamma(t,x):=\int_0^1 L_{\gamma\dot x x}(t,x_0(t)+\theta(x-x_0(t)),\dot x_0(t))\,d\theta.$$
It is clear that, for all $\gamma=1,\ldots,K$,
$$\frac{M_\gamma(\cdot,x_q(\cdot))}{d_q}=[x_q^*(\cdot)-x_0^*(\cdot)]P_\gamma(\cdot,x_q(\cdot))\,y_q(\cdot)\xrightarrow{L}0,$$
$$N_\gamma(\cdot,x_q(\cdot))=Q_\gamma(\cdot,x_q(\cdot))(x_q(\cdot)-x_0(\cdot))\xrightarrow{L}0,$$
both on $T$ and, since $\dot y_q\xrightarrow{L^1}\dot y_0$ on $T$, then
$$\lim_{q\to\infty}\frac{K_\lambda(x_q)}{d_q}=0\quad\text{and}\quad\lim_{q\to\infty}\frac{K_\gamma(x_q)}{d_q}=0\quad(\gamma=1,\ldots,K).\tag{10}$$
By (5) and (10),
$$0\ge\limsup_{q\to\infty}\frac{J_\lambda(x_q)-J_\lambda(x_0)}{d_q}=\lim_{q\to\infty}\frac{1}{d_q}\sum_{i=0}^{1}(-1)^{i+1}\int_0^1(1-\theta)\,\rho^*(t_i)\,\Phi_i''\bigl(x_0(t_{1-i})+\theta[x_q(t_{1-i})-x_0(t_{1-i})];\,x_q(t_{1-i})-x_0(t_{1-i})\bigr)\,d\theta+\limsup_{q\to\infty}\frac{E_\lambda(x_q)}{d_q}=\limsup_{q\to\infty}\frac{E_\lambda(x_q)}{d_q}.$$
Since $E_\lambda(x_q)\ge 0$ for all $q\in\mathbb{N}$, then
$$\lim_{q\to\infty}\frac{E_\lambda(x_q)}{d_q}=0.$$
Thus, by hypothesis (v)(c) of Theorem 1, for all $\gamma=1,\ldots,K$,
$$\lim_{q\to\infty}\frac{E_\gamma(x_q)}{d_q}=0.\tag{11}$$
Since, for all $q\in\mathbb{N}$ and $i\in i_a(x_0)$,
$$0\ge I_i(x_q)=I_i(x_q)-I_i(x_0)=\int_0^1(1-\theta)(x_q^*(t_0)-x_0^*(t_0),\,x_q^*(t_1)-x_0^*(t_1))\,l_i''[\theta]\begin{pmatrix}x_q(t_0)-x_0(t_0)\\ x_q(t_1)-x_0(t_1)\end{pmatrix}d\theta+I_i'(x_0,x_q-x_0)+K_i(x_q)+E_i(x_q),$$
then, by (10) and (11), for $i\in i_a(x_0)$,
$$0\ge\lim_{q\to\infty}\frac{I_i'(x_0,x_q-x_0)}{d_q}.$$
As $y_q\xrightarrow{u}y_0$ and $\dot y_q\xrightarrow{L^1}\dot y_0$, both on $T$, then for $i\in i_a(x_0)$,
$$0\ge\lim_{q\to\infty}\frac{I_i'(x_0,x_q-x_0)}{d_q}=I_i'(x_0,y_0),$$
establishing (8).
Let us prove that
$$I_j'(x_0,y_0)=0\quad(j=k+1,\ldots,K).\tag{12}$$
Indeed, by (9), (10), (11) and the admissibility of $x_q$, for all $j=k+1,\ldots,K$,
$$0=\lim_{q\to\infty}\frac{I_j'(x_0,x_q-x_0)}{d_q}=I_j'(x_0,y_0),$$
which is precisely (12), and hence we obtain (iii) of our claim.
Now, we claim that
$$\varphi_{\alpha x}(t,x_0(t),\dot x_0(t))y_0(t)+\varphi_{\alpha\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t)\le 0\quad(\text{a.e. in }T,\ \alpha\in I_a(t,x_0(t),\dot x_0(t))).\tag{13}$$
In fact, for all $\alpha\in R$, $q\in\mathbb{N}$, almost all $t\in T$ and $\theta\in[0,1]$, define
$$\Omega_q^\alpha(t;\theta):=\varphi_\alpha\bigl(t,x_0(t)+\theta[x_q(t)-x_0(t)],\,\dot x_0(t)+\theta[\dot x_q(t)-\dot x_0(t)]\bigr),$$
$$G_q^\alpha(t):=\bigl[-\varphi_\alpha(t,x_q(t),\dot x_q(t))\bigr]^{1/2},$$
$$O^\alpha(t):=-\varphi_{\alpha x}(t,x_0(t),\dot x_0(t))y_0(t)-\varphi_{\alpha\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t).$$
If $t\in[t_0,t_1)$ is a point of continuity of $I_a(\cdot,x_0(\cdot),\dot x_0(\cdot))$ and $\alpha\in I_a(t,x_0(t),\dot x_0(t))$, then, as $I_a(\cdot,x_0(\cdot),\dot x_0(\cdot))$ is piecewise constant on $T$, there exists an interval $[t,\bar t]\subset T$ with $t<\bar t$ such that $\varphi_\alpha(\sigma,x_0(\sigma),\dot x_0(\sigma))=0$ for almost all $\sigma\in[t,\bar t]$. Using the notation
$$\{\sigma\}:=\bigl(\sigma,\,x_0(\sigma)+\theta[x_q(\sigma)-x_0(\sigma)],\,\dot x_0(\sigma)+\theta[\dot x_q(\sigma)-\dot x_0(\sigma)]\bigr),$$
we have
$$\begin{aligned}
0&\le\lim_{q\to\infty}\int_{[t,\bar t]\cap\Theta}\frac{(G_q^\alpha(\sigma))^2}{d_q}\,d\sigma
=\lim_{q\to\infty}\frac{1}{d_q}\int_{[t,\bar t]\cap\Theta}\{-\varphi_\alpha(\sigma,x_q(\sigma),\dot x_q(\sigma))+\varphi_\alpha(\sigma,x_0(\sigma),\dot x_0(\sigma))\}\,d\sigma\\
&=-\lim_{q\to\infty}\frac{1}{d_q}\int_{[t,\bar t]\cap\Theta}\{\Omega_q^\alpha(\sigma;1)-\Omega_q^\alpha(\sigma;0)\}\,d\sigma
=-\lim_{q\to\infty}\frac{1}{d_q}\int_{[t,\bar t]\cap\Theta}\int_0^1\frac{\partial}{\partial\theta}\Omega_q^\alpha(\sigma;\theta)\,d\theta\,d\sigma\\
&=-\lim_{q\to\infty}\frac{1}{d_q}\int_{[t,\bar t]\cap\Theta}\int_0^1\{\varphi_{\alpha x}\{\sigma\}(x_q(\sigma)-x_0(\sigma))+\varphi_{\alpha\dot x}\{\sigma\}(\dot x_q(\sigma)-\dot x_0(\sigma))\}\,d\theta\,d\sigma\\
&=-\lim_{q\to\infty}\int_{[t,\bar t]\cap\Theta}\int_0^1\{\varphi_{\alpha x}\{\sigma\}y_q(\sigma)+\varphi_{\alpha\dot x}\{\sigma\}\dot y_q(\sigma)\}\,d\theta\,d\sigma\\
&=\int_{[t,\bar t]\cap\Theta}\{-\varphi_{\alpha x}(\sigma,x_0(\sigma),\dot x_0(\sigma))y_0(\sigma)-\varphi_{\alpha\dot x}(\sigma,x_0(\sigma),\dot x_0(\sigma))\dot y_0(\sigma)\}\,d\sigma
=\int_{[t,\bar t]\cap\Theta}O^\alpha(\sigma)\,d\sigma.
\end{aligned}$$
As $\Theta$ can be chosen to differ from $T$ by a set of arbitrarily small measure, then
$$0\le\int_t^{\bar t}O^\alpha(\sigma)\,d\sigma.$$
If $O^\alpha<0$ on a measurable set $\Sigma\subset[t,\bar t]$ with $m(\Sigma)>0$, then
$$0>\int_{\Sigma\cap\Theta}O^\alpha(\sigma)\,d\sigma=\lim_{q\to\infty}\int_{\Sigma\cap\Theta}\frac{(G_q^\alpha(\sigma))^2}{d_q}\,d\sigma\ge 0,$$
which is not the case. Consequently, $O^\alpha\ge 0$ almost everywhere on $[t,\bar t]$, with $t\in[t_0,t_1)$ an arbitrary point of continuity of $I_a(\cdot,x_0(\cdot),\dot x_0(\cdot))$. Thus, $O^\alpha(t)\ge 0$ for almost all $t\in T$, showing that (13) is verified.
Now, let us prove that, for all $\beta\in S$,
$$\varphi_{\beta x}(t,x_0(t),\dot x_0(t))y_0(t)+\varphi_{\beta\dot x}(t,x_0(t),\dot x_0(t))\dot y_0(t)=0\quad(\text{a.e. in }T).\tag{14}$$
Indeed, for all $\beta\in S$, $q\in\mathbb{N}$, almost all $t\in T$ and $\theta\in[0,1]$, set
$$\Upsilon_q^\beta(t;\theta):=\varphi_\beta\bigl(t,x_0(t)+\theta[x_q(t)-x_0(t)],\,\dot x_0(t)+\theta[\dot x_q(t)-\dot x_0(t)]\bigr).$$
For all $\beta\in S$, $q\in\mathbb{N}$ and almost all $t\in T$, we have
$$0=\Upsilon_q^\beta(t;1)-\Upsilon_q^\beta(t;0)=\int_0^1\frac{\partial}{\partial\theta}\Upsilon_q^\beta(t;\theta)\,d\theta=\int_0^1\bigl[\varphi_{\beta x}\{t\}(x_q(t)-x_0(t))+\varphi_{\beta\dot x}\{t\}(\dot x_q(t)-\dot x_0(t))\bigr]\,d\theta.$$
Then, for all $\beta\in S$, $q\in\mathbb{N}$ and almost all $t\in T$,
$$0=\int_0^1\bigl[\varphi_{\beta x}\{t\}y_q(t)+\varphi_{\beta\dot x}\{t\}\dot y_q(t)\bigr]\,d\theta.\tag{15}$$
By (15), for all $t\in T$ and $\beta\in S$,
$$0=\int_{[t_0,t]\cap\Theta}\{\varphi_{\beta x}(\sigma,x_0(\sigma),\dot x_0(\sigma))y_0(\sigma)+\varphi_{\beta\dot x}(\sigma,x_0(\sigma),\dot x_0(\sigma))\dot y_0(\sigma)\}\,d\sigma.$$
Once again, since $\Theta$ can be chosen to differ from $T$ by a set of arbitrarily small measure, then for all $t\in T$ and $\beta\in S$,
$$0=\int_{t_0}^{t}\{\varphi_{\beta x}(\sigma,x_0(\sigma),\dot x_0(\sigma))y_0(\sigma)+\varphi_{\beta\dot x}(\sigma,x_0(\sigma),\dot x_0(\sigma))\dot y_0(\sigma)\}\,d\sigma,$$
and hence (14) holds. Consequently, (iv) and (v) of our claim are satisfied. □

Funding

This research was funded by the Dirección General de Asuntos del Personal Académico, DGAPA-UNAM, through the project PAPIIT-IN102220.

Data Availability Statement

Not applicable.

Acknowledgments

The author is deeply grateful to the Dirección General de Asuntos del Personal Académico, Universidad Nacional Autónoma de México, for the financial support provided through the project PAPIIT-IN102220. The author also thanks the three anonymous referees whose comments improved the content of the article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bliss, G.A. Lectures on the Calculus of Variations; University of Chicago Press: Chicago, IL, USA, 1946. [Google Scholar]
  2. Bolza, O. Lectures on the Calculus of Variations; Chelsea Press: New York, NY, USA, 1961. [Google Scholar]
  3. Brechtken-Manderscheid, U. Introduction to the Calculus of Variations; Chapman & Hall: London, UK, 1983. [Google Scholar]
  4. Cesari, L. Optimization-Theory and Applications, Problems with Ordinary Differential Equations; Springer: New York, NY, USA, 1983. [Google Scholar]
  5. Ewing, G.M. Calculus of Variations with Applications; Dover: New York, NY, USA, 1985. [Google Scholar]
  6. Gelfand, I.M.; Fomin, S.V. Calculus of Variations; Prentice-Hall: Hoboken, NJ, USA, 1963. [Google Scholar]
  7. Giaquinta, M.; Hildebrandt, S. Calculus of Variations I; Springer: New York, NY, USA, 2004. [Google Scholar]
  8. Giaquinta, M.; Hildebrandt, S. Calculus of Variations II; Springer: New York, NY, USA, 2004. [Google Scholar]
  9. Hestenes, M.R. Calculus of Variations and Optimal Control; John Wiley & Sons: New York, NY, USA, 1966. [Google Scholar]
  10. Leitmann, G. The Calculus of Variations and Optimal Control; Plenum Press: New York, NY, USA, 1981. [Google Scholar]
  11. Loewen, P.D. Second-order sufficiency criteria and local convexity for equivalent problems in the calculus of variations. J. Math. Anal. Appl. 1990, 146, 512–522. [Google Scholar] [CrossRef] [Green Version]
  12. Milyutin, A.A.; Osmolovskii, N.P. Calculus of Variations and Optimal Control; American Mathematical Society: Providence, RI, USA, 1998. [Google Scholar]
  13. Morse, M. Variational Analysis: Critical Extremals and Sturmian Extensions; John Wiley & Sons: New York, NY, USA, 1973. [Google Scholar]
  14. Rindler, F. Calculus of Variations; Springer: Coventry, UK, 2018. [Google Scholar]
  15. Troutman, J.L. Variational Calculus with Elementary Convexity; Springer: New York, NY, USA, 1983. [Google Scholar]
  16. Wan, F.Y.M. Introduction to the Calculus of Variations and Its Applications; Chapman & Hall: New York, NY, USA, 1995. [Google Scholar]
  17. Chiu, K.S.; Li, T. Oscillatory and periodic solutions of differential equations with piecewise constant generalized mixed arguments. Math. Nachr. 2019, 292, 2153–2164. [Google Scholar] [CrossRef]
  18. Li, T.; Viglialoro, G. Boundedness for a nonlocal reaction chemotaxis model even in the attraction-dominated regime. Differ. Integral Equ. 2021, 34, 316–336. [Google Scholar]
  19. Licea, G.S. Sufficiency by a direct method in the variable state problem of calculus of variations: Singular extremals. IMA J. Math. Control. Inf. 2009, 26, 257–279. [Google Scholar] [CrossRef]
  20. Callejas, C.M.; Licea, G.S. Sufficiency for singular arcs in two isoperimetric calculus of variations problems. Appl. Math. Sci. 2015, 9, 7281–7306. [Google Scholar] [CrossRef]
  21. Licea, G.S. Sufficiency for singular trajectories in the calculus of variations. AIMS Math. 2019, 5, 111–139. [Google Scholar] [CrossRef]
  22. Cortez, K.L.; Rosenblueth, J.F. The broken link between normality and regularity in the calculus of variations. Syst. Control. Lett. 2019, 124, 27–32. [Google Scholar] [CrossRef]
  23. Becerril, J.A.; Rosenblueth, J.F. The importance of being normal, regular and proper in the calculus of variations. J. Optim. Theory Appl. 2017, 172, 759–773. [Google Scholar] [CrossRef]
  24. Becerril, J.A.; Rosenblueth, J.F. Necessity for isoperimetric inequality constraints. Discret. Contin. Dyn. Syst. 2017, 37, 1129–1158. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
