Article

Minirobots Moving at Different Partial Speeds

by Constantin Udrişte 1,2,* and Ionel Ţevy 1
1 Faculty of Applied Sciences, Department of Mathematics-Informatics, University Politehnica of Bucharest, Splaiul Independenţei 313, 060042 Bucharest, Romania
2 Academy of Romanian Scientists, Ilfov 3, 050044 Bucharest, Romania
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(6), 1036; https://doi.org/10.3390/math8061036
Submission received: 25 May 2020 / Revised: 20 June 2020 / Accepted: 23 June 2020 / Published: 24 June 2020
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract:
In this paper, we present the mathematical point of view of our research group regarding multi-robot systems evolving in a multi-temporal way. We solve the minimum multi-time volume problem as an optimal control problem for a group of planar micro-robots moving in the same direction at different partial speeds. We are motivated to solve this problem because a similar minimum-time optimal control problem is now in vogue for micro-scale and nano-scale robotic systems. Applying the (weak and strong) multi-time maximum principle, we obtain necessary conditions for optimality, and these are used to guess a candidate control policy. The complexity of finding this policy for arbitrary initial conditions is dominated by the computation of a planar convex hull. We support this point by applying the technique of the multi-time Hamilton-Jacobi-Bellman PDE. Our results can be extended to consider obstacle avoidance by explicit parameterization of all possible optimal control policies.

1. Introduction

Our multi-time model extends the single-time case formulated and solved by T. Bretl [1,2] (see also [3,4,5,6]). We refer to a microrobotic system consisting of n planar robots which evolve in a multi-temporal sense. The control of this system is hard, at least from an algorithmic point of view. We solve the problem via the multi-time maximum principle and via the technique of the multi-time Hamilton-Jacobi-Bellman PDE (see [7,8,9,10,11,12,13]). The problem of multi-temporal evolution has many pitfalls due to the correlation between the dimension of the state variables and that of the evolution variables.
The microrobotic systems are intended for a wide range of applications that include microfabrication, minimally invasive medical diagnosis and treatment, adaptive optics, regenerative electronics, and biosensing for environmental monitoring and toxin detection [2].
The term “multi-time” was used for the first time by Dirac (1932) [14] to introduce the “multi-time wave function” as a candidate for relativistic many-particle quantum mechanics.
Section 2 formulates a multi-time optimal control problem for a system of many robots that move at different partial speeds, but that must all move in the same partial direction. Section 3 shows how we can solve the problem via the weak multi-time maximum principle. Here, the solution of the adjoint PDE system is obtained by geometric techniques. As it is too complicated to continue with this method, Section 4 solves the problem by the strong multi-time maximum principle. Section 5 gives a geometrical solution of our problem. Section 6 proves that the multi-time dynamic programming method permits the design of multi-time optimal controls for the problem in Section 2. Section 7 refers to the originality of the subject and to possibilities for further research.
We consistently use the mathematical language of multi-temporal dynamical systems and differential geometry. In particular, the Einstein summation convention and a short dictionary of notations from differential geometry ($\wedge$ = exterior product or wedge product of two differential forms; $\delta_{\alpha\beta}$, $\delta^{\alpha}_{\beta}$, $\delta^{\alpha\beta}$ = Kronecker symbols; $\lrcorner$ = interior product or inner derivative) are used throughout. Tensor fields are also written via their components, etc.

2. Many Robots That Move at Different Partial Speeds

The evolutive multivariate parameter $t=(t^1,\dots,t^m)\in\mathbb{R}^m_+$ is called multi-time. A multi-temporal evolution is conceived as follows: consider a generic hyper-parallelepiped $\Omega_{0T}\subset\mathbb{R}^m_+$ determined by the diagonally opposite points $0,T\in\mathbb{R}^m_+$. An evolution in $\Omega_{0T}$ is determined by the partial order in $\mathbb{R}^m_+$ and by a positive sense of movement. A $C^1$ curve $\gamma:[0,1]\to\Omega_{0T}$, $t^\alpha=t^\alpha(\tau)$, $\tau\in[0,1]$, joining the points $\gamma(0)=0$ and $\gamma(1)=T$, is called a marker of evolution in $\Omega_{0T}$ if $\frac{dt^\alpha}{d\tau}\ge 0$ (increasing curve). The simplest marker of evolution is the main diagonal $t^\alpha=T^\alpha\tau$, $\tau\in[0,1]$, that joins the points $0$ and $T$.
Now let us consider a $C^1$ function $\varphi:\Omega_{0T}\to\mathbb{R}$. The evolution in $\varphi(\Omega_{0T})$ means that the image of the function $\varphi$ runs from the point $\varphi(0)$ to the point $\varphi(T)$. The graph $(t,\varphi(t))$ can be more suggestive, being a hypersurface in $\Omega_{0T}\times\mathbb{R}$, running from the point $(0,\varphi(0))$ to the point $(T,\varphi(T))$. The normal vector field to this hypersurface is $(\nabla\varphi,-1)$. The marker of evolution in $\Omega_{0T}$ induces a marker of evolution in the image $\varphi(\Omega_{0T})$ if $\left\langle\nabla\varphi,\frac{dt}{d\tau}\right\rangle\ge 0$ (acute angle), and, more suggestively, a marker of evolution on the hypersurface $(t,\varphi(t))$.
To study the multi-temporal evolution of micro-scale and nano-scale robotic systems, we must create a controlled m-flow evolution, an elapsed-volume functional, and a minimum-type problem. We underline that the initial positions of the robots are given, and the goal is to bring them to the origin, minimizing the elapsed multi-time volume. The solution $(x(t),y(t))$ of a controlled completely integrable system takes the place of the evolutionary function $\varphi(t)$.
If we leave the multi-time T free, then for n planar robots the following problem of multi-time optimal control appears: Let $(x,y)=(x^1,y^1,\dots,x^n,y^n)\in(\mathbb{R}^2)^n$ be the state variables (one pair $(x^i,y^i)$ means one robot) and let $u\in\mathbb{R}$, $v=(v^i_\alpha)\in\mathbb{R}^{mn}$, $\alpha=1,\dots,m$; $i=1,\dots,n$, be the controls (inputs). The main goal is to find
$$\min_{u,v} I(u(\cdot),v(\cdot))=\int_{\Omega_{0T}}dt^1\cdots dt^m$$
subject to
$$\frac{\partial}{\partial t^\alpha}\begin{pmatrix}x\\ y\end{pmatrix}(t)=v_\alpha(t)\begin{pmatrix}\cos u(t)\\ \sin u(t)\end{pmatrix},\qquad t\in\Omega_{0T},\qquad |v^i_\alpha(t)|\le 1,$$
$$x(0)=x_0,\quad y(0)=y_0,\quad x(T)=0,\quad y(T)=0.$$
The previous controlled PDEs can be written
$$\frac{\partial x^i}{\partial t^\alpha}(t)=v^i_\alpha(t)\cos u(t),\qquad \frac{\partial y^i}{\partial t^\alpha}(t)=v^i_\alpha(t)\sin u(t).$$
If $\operatorname{rank}(v^i_\alpha(t))=m\le n$, then the $2m$ vector fields $X_\alpha(t)=\left(v^i_\alpha(t)\cos u(t)\right)_{i=1,\dots,n}$, $Y_\alpha(t)=\left(v^i_\alpha(t)\sin u(t)\right)_{i=1,\dots,n}$, $\alpha=1,\dots,m$, are linearly independent.
For each robot $(x^i,y^i)$, the square of the speed appears as
$$\delta^{\alpha\beta}\,\frac{\partial x^i}{\partial t^\alpha}\frac{\partial x^i}{\partial t^\beta}+\delta^{\alpha\beta}\,\frac{\partial y^i}{\partial t^\alpha}\frac{\partial y^i}{\partial t^\beta}=\delta^{\alpha\beta}v^i_\alpha v^i_\beta,\qquad i=1,\dots,n.$$
Consequently, the group of n robots moves in a planar workspace at different (although bounded) speeds, but all robots must move in the same partial direction fixed by the unit vector $(\cos u(t),\sin u(t))$. In fact, the speeds $\delta^{\alpha\beta}v^i_\alpha v^i_\beta$, $i=1,\dots,n$, and the direction $(\cos u(t),\sin u(t))$ are the only physically observable quantities.
The complete integrability conditions of this PDE system are
$$\frac{\partial v^i_\alpha}{\partial t^\beta}(t)-\frac{\partial v^i_\beta}{\partial t^\alpha}(t)=0,\qquad v^i_\alpha(t)\,\frac{\partial u}{\partial t^\beta}(t)-v^i_\beta(t)\,\frac{\partial u}{\partial t^\alpha}(t)=0.$$
The piecewise general solution follows:
$$v^i_\alpha(t)=\frac{\partial\varphi^i}{\partial t^\alpha}(t)=\lambda^i(u(t))\,\frac{\partial u}{\partial t^\alpha}(t).$$
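The gradient structure of these controls can be checked symbolically. The following Python sketch (with an assumed sample heading $u(t^1,t^2)$ and an assumed factor $\lambda(u)$, for $m=2$ and a single robot) verifies that controls of the form $v_\alpha=\lambda(u)\,\partial u/\partial t^\alpha$ satisfy both complete integrability conditions.

```python
# Symbolic check (m = 2, one robot, assumed sample functions u and lambda):
# v_alpha = lambda(u) du/dt^alpha must satisfy
#   dv_1/dt^2 - dv_2/dt^1 = 0   and   v_1 du/dt^2 - v_2 du/dt^1 = 0.
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
u = sp.sin(t1 + 2*t2)            # assumed heading angle u(t^1, t^2)
lam = 1 + u**2                   # assumed factor lambda(u)

v1 = lam * sp.diff(u, t1)
v2 = lam * sp.diff(u, t2)

print(sp.simplify(sp.diff(v1, t2) - sp.diff(v2, t1)))          # 0
print(sp.simplify(v1*sp.diff(u, t2) - v2*sp.diff(u, t1)))      # 0
```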
Remark 1.
(i) The quadruple $(x(t),y(t),u(t),v(t))$ constitutes an admissible m-mapping if it has the following properties: (1) $(u(t),v(t))$ is a measurable function from $\Omega_{0T}$ to $U\times V$; (2) for $t\in\Omega_{0T}$, $x^i(t)=x^i_0+\int_{\Gamma_{0t}}\cos u(s)\,v^i_\alpha(s)\,ds^\alpha$, $y^i(t)=y^i_0+\int_{\Gamma_{0t}}\sin u(s)\,v^i_\alpha(s)\,ds^\alpha$ (path-independent curvilinear integrals); (3) $(x(T),y(T))\in X_T\times Y_T$ (compact set). Please note that the second property implies that $(x(t),y(t))$ is differentiable almost everywhere as a function of the multi-time t, satisfying the previous PDE system for almost all $t\in\Omega_{0T}$.
(ii) If the previous PDE system is not completely integrable, we can formulate and solve a similar problem using the nonholonomic evolution $dx^i=\cos u(t)\,v^i_\alpha(t)\,dt^\alpha$, $dy^i=\sin u(t)\,v^i_\alpha(t)\,dt^\alpha$.
Because of the periodicity of sine and cosine, we can take $u\in[-\pi,\pi]$. Also, we can restrict $u\in[0,\pi)$ without loss of generality.
We solve the foregoing problem using the multi-time maximum principle (see [7,8,9,10,11,12,13]). We introduce the Lagrange multipliers $p(t)=(p^\alpha_i(t))$, $q(t)=(q^\alpha_i(t))$, the Hamiltonian
$$H(x,y,u,v,p,q)=-1+\left(p^\alpha_i\cos u+q^\alpha_i\sin u\right)v^i_\alpha$$
and its anti-trace
$$H^\alpha_\beta(x,y,u,v,p,q)=-\frac{1}{m}\,\delta^\alpha_\beta+\left(p^\alpha_i\cos u+q^\alpha_i\sin u\right)v^i_\beta,$$
called the control Hamiltonian tensor field.

3. Solution via Weak Multi-Time Maximum Principle

According to the weak multi-time maximum principle [7] (coming from variational calculus techniques), along any optimal sheet $(x^*,y^*,u^*,v^*,p^*,q^*)$, we must have
$$\frac{\partial p^\alpha_i}{\partial t^\alpha}=-\frac{\partial H}{\partial x^i}(x,y,u,v,p,q),\qquad \frac{\partial q^\alpha_i}{\partial t^\alpha}=-\frac{\partial H}{\partial y^i}(x,y,u,v,p,q),$$
$$H(x^*,y^*,u^*,v^*,p^*,q^*)=\max_{u,v}H(x^*,y^*,u,v,p^*,q^*).$$
Since this Hamiltonian is a linear function with respect to v, its extremum point cannot be interior. Moreover, we have
$$\max_{u,v}H(x,y,u,v,p,q)=\max_u\max_v H(x,y,u,v,p,q)=\max_v\max_u H(x,y,u,v,p,q).$$

Solving the Adjoint PDEs System

Since this Hamiltonian has no dependence on the state variables $(x,y)$, the adjoint PDEs are of divergence form
$$\frac{\partial p^\alpha_i}{\partial t^\alpha}=0,\qquad \frac{\partial q^\alpha_i}{\partial t^\alpha}=0,\qquad i=1,\dots,n.$$
To find the general solution of this adjoint divergence PDEs system, we recall some facts from differential geometry [15] about closed and exact forms.
An r-form $\omega$ is called closed if $d\omega=0$. We say that $\omega$ is exact if there exists an $(r-1)$-form $\eta$ such that $d\eta=\omega$.
To characterize situations in which closed forms are also exact, we recall a famous result.
Theorem 1
(The Poincaré Lemma). Let U be a contractible domain in $\mathbb{R}^n$. If $\omega$ is a closed r-form, then there exists an $(r-1)$-form $\eta$ such that $d\eta=\omega$. In other words, all closed differential r-forms on contractible domains are exact.
In particular, if $\omega$ is a closed r-form on $\mathbb{R}^n$, then it is exact.
The m-form (volume form) $\omega=dt^1\wedge\cdots\wedge dt^m$ and the vector fields $\frac{\partial}{\partial t^\alpha}$ produce (via the interior derivative) the $(m-1)$-forms $\omega_\alpha=\frac{\partial}{\partial t^\alpha}\lrcorner\,\omega$ and the $(m-2)$-forms $\omega_{\alpha\beta}=\frac{\partial}{\partial t^\beta}\lrcorner\,\omega_\alpha$. These satisfy
$$dt^\gamma\wedge\omega_\alpha=\delta^\gamma_\alpha\,\omega,\qquad dt^\gamma\wedge\omega_{\alpha\beta}=\delta^\gamma_\alpha\,\omega_\beta-\delta^\gamma_\beta\,\omega_\alpha.$$
Now, the Lagrange multipliers p, q are the m-forms
$$p=p^\alpha_i\,\omega_\alpha\wedge dx^i,\qquad q=q^\alpha_i\,\omega_\alpha\wedge dy^i.$$
As solutions of the adjoint PDEs, they are closed, i.e.,
$$dp=\frac{\partial p^\alpha_i}{\partial t^\gamma}\,dt^\gamma\wedge\omega_\alpha\wedge dx^i=\frac{\partial p^\alpha_i}{\partial t^\alpha}\,\omega\wedge dx^i=0,$$
$$dq=\frac{\partial q^\alpha_i}{\partial t^\gamma}\,dt^\gamma\wedge\omega_\alpha\wedge dy^i=\frac{\partial q^\alpha_i}{\partial t^\alpha}\,\omega\wedge dy^i=0.$$
According to the Poincaré Lemma, there exist two $(m-1)$-forms
$$\eta=N^{\alpha\beta}_i\,\omega_{\alpha\beta}\wedge dx^i,\qquad \mu=M^{\alpha\beta}_i\,\omega_{\alpha\beta}\wedge dy^i$$
such that
$$p=d\eta=\frac{\partial N^{\alpha\beta}_i}{\partial t^\gamma}\,dt^\gamma\wedge\omega_{\alpha\beta}\wedge dx^i=\frac{\partial}{\partial t^\alpha}\left(N^{\alpha\beta}_i-N^{\beta\alpha}_i\right)\omega_\beta\wedge dx^i,$$
$$q=d\mu=\frac{\partial M^{\alpha\beta}_i}{\partial t^\gamma}\,dt^\gamma\wedge\omega_{\alpha\beta}\wedge dy^i=\frac{\partial}{\partial t^\alpha}\left(M^{\alpha\beta}_i-M^{\beta\alpha}_i\right)\omega_\beta\wedge dy^i.$$
It follows that the solution of the adjoint system is
$$p^\beta_i(t)=\frac{\partial}{\partial t^\alpha}\left(N^{\alpha\beta}_i-N^{\beta\alpha}_i\right)(t),\qquad q^\beta_i(t)=\frac{\partial}{\partial t^\alpha}\left(M^{\alpha\beta}_i-M^{\beta\alpha}_i\right)(t).$$
On the other hand, the strong multi-time maximum principle actually shows that the particular solution $p^\beta_i(t)=\mathrm{const}$, $q^\beta_i(t)=\mathrm{const}$ is sufficient to obtain the complete solution of our problem.
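This representation can be verified directly. The sketch below (m = 2, one robot, with assumed sample potentials $N^{\alpha\beta}$) checks that $p^\beta=\frac{\partial}{\partial t^\alpha}\left(N^{\alpha\beta}-N^{\beta\alpha}\right)$ automatically satisfies the divergence-form adjoint equation $\frac{\partial p^\beta}{\partial t^\beta}=0$.

```python
# Symbolic check (m = 2, assumed potentials N^{alpha beta}): the anti-symmetrized
# derivative p^beta = d/dt^alpha (N^{alpha beta} - N^{beta alpha}) is divergence-free.
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
t = {1: t1, 2: t2}
N = {(1, 1): t1*t2**2, (1, 2): sp.sin(t1*t2), (2, 1): t2*sp.exp(t1), (2, 2): t1**3}

p = {b: sum(sp.diff(N[(a, b)] - N[(b, a)], t[a]) for a in (1, 2)) for b in (1, 2)}
divergence = sum(sp.diff(p[b], t[b]) for b in (1, 2))
print(sp.simplify(divergence))   # 0
```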

4. Solution via Strong Multi-Time Maximum Principle

According to the strong multi-time maximum principle [11] (coming from m-needle techniques), along any optimal sheet $(x^*,y^*,u^*,v^*,p^*,q^*)$, we must have
$$\frac{\partial p^\alpha_i}{\partial t^\beta}=-\frac{\partial H^\alpha_\beta}{\partial x^i}(x,y,u,v,p,q),\qquad \frac{\partial q^\alpha_i}{\partial t^\beta}=-\frac{\partial H^\alpha_\beta}{\partial y^i}(x,y,u,v,p,q),$$
$$H(x^*,y^*,u^*,v^*,p^*,q^*)=\max_{u,v}H(x^*,y^*,u,v,p^*,q^*).$$
Also, the function $t\mapsto H(x(t),y(t),u(t),v(t),p(t),q(t))$ is constant.
Since $v\mapsto H(x,y,u,v,p,q)$ is a linear function with respect to v, the Hamiltonian $H(x,y,u,v,p,q)$ has no interior extremum point. Also, we have
$$\max_{u,v}H(x,y,u,v,p,q)=\max_u\max_v H(x,y,u,v,p,q)=\max_v\max_u H(x,y,u,v,p,q).$$

4.1. Solving the Adjoint PDEs System

Since the control Hamiltonian tensor field has no dependence on the state $(x,y)$, the adjoint PDEs reduce to
$$\frac{\partial p^\alpha_i}{\partial t^\beta}=0,\qquad \frac{\partial q^\alpha_i}{\partial t^\beta}=0,$$
with the piecewise constant solution
$$p^\alpha_i(t)=p^\alpha_i,\qquad q^\alpha_i(t)=q^\alpha_i.$$

4.2. Finding the Maximum with Respect to v

To prove the existence of a bang-bang control v, we use the following steps.
Lemma 1.
The maximum of the Hamiltonian $H(x,y,u,v,p,q)$ with respect to the control v is
$$\max_v H(x,y,u,v,p,q)=-1+\sum_{\alpha=1}^m\sum_{i=1}^n\left|p^\alpha_i\cos u+q^\alpha_i\sin u\right|.$$
Proof. 
The inputs $v^i_\alpha$ belong to the control set $V=[-1,1]^{mn}\subset\mathbb{R}^{mn}$. The maximum of the linear function $v\mapsto H$ exists since each control variable $v^i_\alpha$ belongs to the interval $[-1,1]$; for a maximum, the control must be at a vertex of V (see linear optimization, simplex method). If $Q^\alpha_i(t)=p^\alpha_i\cos u(t)+q^\alpha_i\sin u(t)$ are the switching functions, then each optimal control $v^{*i}_\alpha$ must be the function
$$v^{*i}_\alpha=\operatorname{sign}Q^\alpha_i(t)=\begin{cases}1 & \text{for } Q^\alpha_i(t)>0:\ \text{bang-bang control}\\ \text{undetermined} & \text{for } Q^\alpha_i(t)=0:\ \text{singular control}\\ -1 & \text{for } Q^\alpha_i(t)<0:\ \text{bang-bang control.}\end{cases}$$
If $p^\alpha_i=0$, $q^\alpha_i=0$, then $Q^\alpha_i(t)=0$ for all $t\in\Omega_{0T}$, and hence $v^i_\alpha$ is undetermined. Otherwise, the function $Q^\alpha_i(t)$ vanishes only for one value of $u(t)$. The singular control is then ruled out and the remaining possibilities are bang-bang controls. This optimal control is discontinuous, since each component jumps from a minimum to a maximum and vice versa in response to each change in the sign of the corresponding switching function. The form of the optimal Hamiltonian follows. □
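The vertex (bang-bang) rule can be illustrated numerically. The sketch below, with assumed random co-states and an assumed fixed heading angle, checks by brute force over all vertices of $[-1,1]^{mn}$ that $v=\operatorname{sign}Q$ maximizes the linear part of the Hamiltonian.

```python
# Numeric illustration (assumed random data): over the box [-1, 1]^{mn}, the linear
# form (p cos u + q sin u) v is maximized at the vertex v = sign(Q).
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
p, q = rng.normal(size=(m, n)), rng.normal(size=(m, n))
u = 0.7                                    # assumed fixed heading angle

Q = p*np.cos(u) + q*np.sin(u)              # switching functions Q_i^alpha
brute = max(np.sum(Q*np.reshape(v, (m, n)))
            for v in itertools.product([-1.0, 1.0], repeat=m*n))
print(np.isclose(brute, np.sum(Q*np.sign(Q))))   # True
```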

4.3. Finding the Maximum with Respect to u

Although we follow the path of finding the maximum with respect to v and then with respect to u, it is useful to keep in mind the reverse procedure. This facilitates the understanding of some formulas in the following text.
According to Formula (13), along any multi-time optimal sheet $v^*$, the Hamiltonian is a function only of the heading angle u. As a continuous function, it attains a maximum on the compact interval $[0,\pi]$. Since $H(0)=H(\pi)$, the same maximum value is attained on the interval $[0,\pi)$. We shall show that at least one and at most mn values of u maximize the Hamiltonian. We conclude that the input $u(t)$ is piecewise constant and takes on at most mn values along any multi-time optimal sheet.
To simplify, we use the mn functions $\phi^\alpha_i(u)=p^\alpha_i\cos u+q^\alpha_i\sin u$, $\alpha=1,\dots,m$; $i=1,\dots,n$.
Lemma 2.
(i) The equality $H(x,y,u,v^*,p,q)=-1$ is true if and only if each term of the sum
$$\sum_{\alpha=1}^m\sum_{i=1}^n\left|\phi^\alpha_i(u)\right|$$
is zero.
(ii) A zero $u_0\in[0,\pi)$ of one of the functions $\phi^\alpha_i(u)$, with $(p^\alpha_i,q^\alpha_i)\neq(0,0)$, is not a maximum point of $H(x,y,u,v^*,p,q)$.
Proof. 
Let $\phi^1_1(u_0)=0$, with $(p^1_1,q^1_1)\neq(0,0)$; for example, $p^1_1>0$. Then the function $H(x,y,u,v^*,p,q)=h(u)$,
$$h(u)=\begin{cases}p^1_1\cos u+q^1_1\sin u+A(u) & \text{for } 0<u_0-\epsilon<u\le u_0\\ -p^1_1\cos u-q^1_1\sin u+A(u) & \text{for } u_0<u<u_0+\epsilon<\pi,\end{cases}$$
has the derivative
$$h'(u)=\begin{cases}-p^1_1\sin u+q^1_1\cos u+A'(u) & \text{for } 0<u_0-\epsilon<u<u_0\\ p^1_1\sin u-q^1_1\cos u+A'(u) & \text{for } u_0<u<u_0+\epsilon<\pi.\end{cases}$$
If $u_0$ were a maximum point, then we should have $h'(u_0-)>0$ and $h'(u_0+)<0$, i.e.,
$$-p^1_1\sin u_0+q^1_1\cos u_0+A'(u_0)>0,$$
$$p^1_1\sin u_0-q^1_1\cos u_0+A'(u_0)<0.$$
Consequently,
$$p^1_1-q^1_1\cot u_0<0.$$
On the other hand, $p^1_1\cos u_0+q^1_1\sin u_0=0$, i.e., $\cot u_0=-\dfrac{q^1_1}{p^1_1}$, whence $(p^1_1)^2+(q^1_1)^2<0$, which is a contradiction. □
Lemma 3.
If $\varphi(u)\neq 0$ in an open interval I, then the first two derivatives of the function $|\varphi(u)|:I\to\mathbb{R}$ are
$$\frac{d}{du}|\varphi(u)|=(\operatorname{sign}\varphi(u))\,\varphi'(u),\qquad \frac{d^2}{du^2}|\varphi(u)|=(\operatorname{sign}\varphi(u))\,\varphi''(u).$$
Each function $\phi^\alpha_i(u)$ which is not identically zero has exactly one zero in the interval $[0,\pi)$. In total, we have a set A consisting of at most mn zeros in $[0,\pi)$.
Lemma 4.
On an interval determined by two consecutive zeros in A, the Hamiltonian (13) has the following properties: (i) it is a $C^\infty$ function, (ii) it is a concave function, (iii) the derivative $\frac{dH}{du}$ has at most one zero.
Proof. 
(i) On such an interval, the function $u\mapsto H(u)+1$ is a sum of absolute values of smooth non-vanishing functions, and consequently it is smooth. (ii) Since
$$\frac{d^2\phi^\alpha_i}{du^2}(u)=-\phi^\alpha_i(u),$$
we find, by Lemma 3,
$$\frac{d^2H}{du^2}(u)<0.$$
(iii) Since H is concave on the interval, $\frac{dH}{du}$ is decreasing, so it has at most one zero. □
Lemma 5.
The maximum of the Hamiltonian $H(x,y,u,v,p,q)$ with respect to the control u, for an optimal value $v^*$, is
$$\max_u H(x,y,u,v^*,p,q)=-1+\sqrt{\left(\sum_{\alpha=1}^m\sum_{i=1}^n v^{*i}_\alpha p^\alpha_i\right)^2+\left(\sum_{\alpha=1}^m\sum_{i=1}^n v^{*i}_\alpha q^\alpha_i\right)^2}.$$
Proof. 
The maximum of the Hamiltonian $H(x,y,u,v,p,q)$ with respect to the control v is given in Lemma 1. On the other hand, the maximum of the function
$$\sum_{\alpha=1}^m\sum_{i=1}^n\left(p^\alpha_i v^{*i}_\alpha\cos u+q^\alpha_i v^{*i}_\alpha\sin u\right)$$
with respect to u is
$$\sqrt{\left(\sum_{\alpha=1}^m\sum_{i=1}^n v^{*i}_\alpha p^\alpha_i\right)^2+\left(\sum_{\alpha=1}^m\sum_{i=1}^n v^{*i}_\alpha q^\alpha_i\right)^2}.$$
The maximum of the Hamiltonian follows. □
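The elementary step used here, $\max_u\left(A\cos u+B\sin u\right)=\sqrt{A^2+B^2}$ over a full period, can be checked numerically with assumed values of the two sums:

```python
# Numeric check of the step used in Lemma 5: max over u of A cos u + B sin u
# equals sqrt(A^2 + B^2) (assumed sample values for the two sums).
import numpy as np

A, B = 1.3, -0.4
u = np.linspace(0.0, 2*np.pi, 400001)
print(np.isclose(np.max(A*np.cos(u) + B*np.sin(u)), np.hypot(A, B), atol=1e-8))   # True
```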
Lemma 6.
For any t, the maximum value is $\max_{u,v}H(x,y,u,v,p,q)=0$.
Proof. 
Suppose w is a maximum value function and
$$w^1(x,y)=\frac12 D_{t^2}w(x,y),\qquad w^2(x,y)=\frac12 D_{t^1}w(x,y)$$
is the generating vector field. The multi-time Hamilton-Jacobi-Bellman PDE (feedback law) is [11]
$$\frac{\partial w^\alpha}{\partial t^\alpha}+\max_{u\in U;\,v\in V}\left[\left(\frac{\partial w^\alpha}{\partial x}\cos u+\frac{\partial w^\alpha}{\partial y}\sin u\right)v_\alpha-1\right]=0.$$
On the other hand, the evolution PDEs and the Lagrangian $L=1$ do not depend on the variable t. Then the generating vector field is independent of t. The multi-time Hamilton-Jacobi-Bellman PDE becomes
$$\max_{u\in U;\,v\in V}\left[\left(\frac{\partial w^\alpha}{\partial x}\cos u+\frac{\partial w^\alpha}{\partial y}\sin u\right)v_\alpha-1\right]=0,$$
equivalent to
$$\max_{u\in U;\,v\in V}H(x,y,u,v,p,q)=0.$$
Consequently, the statement is true. □

Guess Solution for Maximum with Respect to u

In our target problem, the adjoint variables (co-states p, q) have no conditions on the boundary and so they may not be specified initially. However, given extremum points u, we can calculate the optimal co-states p, q. Indeed, for $k\le n$ and the sequence
$$u_k-\pi=u_0<u_1<u_2<\dots<u_k<\pi,$$
we can define
$$p^\alpha_i=\frac{1}{m}\,p_i,\qquad p_i=\begin{cases}\dfrac{\cos u_{i-1}-\cos u_i}{2} & \text{for } i=1,\dots,k\\[1mm] 0 & \text{for } i=k+1,\dots,n,\end{cases}$$
$$q^\alpha_i=\frac{1}{m}\,q_i,\qquad q_i=\begin{cases}\dfrac{\sin u_{i-1}-\sin u_i}{2} & \text{for } i=1,\dots,k\\[1mm] 0 & \text{for } i=k+1,\dots,n,\end{cases}$$
where $\alpha=1,\dots,m$. The function (13) becomes
$$H(u)=-1+\sum_{i=1}^n\sum_{\alpha=1}^m\left|p^\alpha_i\cos u+q^\alpha_i\sin u\right|=-1+\sum_{i=1}^n\left|\sum_{\alpha=1}^m\left(p^\alpha_i\cos u+q^\alpha_i\sin u\right)\right|=-1+\sum_{i=1}^k\left|p_i\cos u+q_i\sin u\right|.$$
This expression has the following properties:
(i) $\operatorname{sign}\left(p_i\cos u_j+q_i\sin u_j\right)=\begin{cases}-1 & \text{for } i\le j\\ \ \ 1 & \text{for } i>j.\end{cases}$
(ii) $H(u_j)=0$, $j=1,\dots,k$.
(iii) The points $u_j$, $j=1,\dots,k$, are the only maximum points of the function $H(u)$. Consequently, $\max_u H(u)=0$.
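A numerical check of this guess (under the reconstruction of the co-states written above, which should be regarded as an assumption) confirms properties (ii) and (iii) for sample maximum points:

```python
# Numeric check of the guessed co-states (assumed reconstruction): with
# p_i = (cos u_{i-1} - cos u_i)/2, q_i = (sin u_{i-1} - sin u_i)/2 and u_0 = u_k - pi,
# H(u) = -1 + sum_i |p_i cos u + q_i sin u| vanishes at each u_j and max_u H = 0.
import numpy as np

uk = np.array([0.5, 1.2, 2.0])                  # assumed u_1 < ... < u_k in (0, pi)
us = np.concatenate(([uk[-1] - np.pi], uk))     # prepend u_0 = u_k - pi
p = 0.5*(np.cos(us[:-1]) - np.cos(us[1:]))
q = 0.5*(np.sin(us[:-1]) - np.sin(us[1:]))

def H(u):
    u = np.atleast_1d(u)
    return -1.0 + np.sum(np.abs(np.outer(np.cos(u), p) + np.outer(np.sin(u), q)), axis=1)

grid = np.linspace(0.0, np.pi, 200001)
print(np.allclose(H(uk), 0.0))                         # True: H(u_j) = 0
print(np.isclose(np.max(H(grid)), 0.0, atol=1e-8))     # True: max_u H = 0
```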

4.4. Finding the Optimal Evolution

The optimal control has the piecewise form
$$v^{*i}_\alpha=\operatorname{sign}Q^\alpha_i(t),\qquad u^*=u^*(t)=\mathrm{const}.$$
In this way, we have transformed the foregoing problem from an infinite-dimensional one, in which we are required to specify the functions $u:\Omega_{0T}\to[0,\pi)$ and $v^i_\alpha:\Omega_{0T}\to[-1,1]$, for $\alpha=1,\dots,m$; $i=1,\dots,n$, into a finite-dimensional one, in which we are required only to specify a double sequence of mn values of v. Then the optimal evolution is a piecewise solution of the Pfaff equations
$$dx^i(t)=v^i_\alpha\,dt^\alpha\cos u,\qquad dy^i(t)=v^i_\alpha\,dt^\alpha\sin u.$$
The general solution is
$$x^i(t)=v^i_\alpha t^\alpha\cos u+a^i,\qquad y^i(t)=v^i_\alpha t^\alpha\sin u+b^i.$$
These formulas generate a piecewise general solution, splitting the domain $\Omega_{0T}$ into sub-domains depending on the optimal values $u^*$. For example, for a single optimal $u^*$ and the boundary conditions
$$x(0)=x_0,\quad y(0)=y_0,\quad x(T)=0,\quad y(T)=0,$$
we obtain the optimal evolution
$$x^i(t)=v^i_\alpha(t^\alpha-T^\alpha)\cos u^*,\qquad y^i(t)=v^i_\alpha(t^\alpha-T^\alpha)\sin u^*,$$
$$x^i_0=-v^i_\alpha T^\alpha\cos u^*,\qquad y^i_0=-v^i_\alpha T^\alpha\sin u^*.$$
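For concreteness, the following sketch evaluates this piecewise evolution for one robot with assumed values of $u^*$, $v$ and $T$; it confirms that the trajectory ends at the origin and that the implied initial position is $x_0=-v_\alpha T^\alpha\cos u^*$, $y_0=-v_\alpha T^\alpha\sin u^*$.

```python
# Illustrative evaluation of the optimal evolution (m = 2, one robot, assumed data):
# x(t) = v_alpha (t^alpha - T^alpha) cos u*,  y(t) = v_alpha (t^alpha - T^alpha) sin u*.
import numpy as np

u_star = 0.6                          # assumed optimal heading angle
v = np.array([1.0, -1.0])             # assumed bang-bang controls (v_1, v_2)
T = np.array([2.0, 3.0])              # assumed terminal multi-time (T^1, T^2)

def state(t):
    s = np.dot(v, t - T)
    return s*np.cos(u_star), s*np.sin(u_star)

print(state(T))                       # (0.0, 0.0): the origin is reached at t = T
print(state(np.zeros(2)))             # the implied initial position (x_0, y_0)
print(-np.dot(v, T)*np.cos(u_star), -np.dot(v, T)*np.sin(u_star))   # same values
```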

5. Geometrical Solution

Suppose the set of robots $\{z_1,\dots,z_n;-z_1,\dots,-z_n\}$ determines, in this order, a convex polygon in $\mathbb{R}^2$. We select the point $z_j$. Applying the Bretl theory [1], the point $z_j$ can reach the origin in n steps $P_1,\dots,P_n$ defined by
$$P_i:\ \text{move along }\ \tfrac12\,(z_{i+1}-z_i)\ \text{ with velocity }\ \begin{cases}v_i=-1, & i<j\\ v_i=+1, & j\le i\le n,\end{cases}$$
with the convention $z_{n+1}=-z_1$. The time spent on each step $P_i$ is $t_i=\frac12\|z_{i+1}-z_i\|$, $i=1,\dots,n$.
For the point $z_j$, the connection between our point of view and the theory of Bretl [1] is
$$v^1_i t^1_i+v^2_i t^2_i=v_i t_i,\qquad |v^1_i|=1,\quad |v^2_i|=1,\qquad i=1,\dots,n.$$
If $T^1=\sum_{i=1}^n t^1_i$ and $T^2=\sum_{j=1}^n t^2_j$, we must solve the first problem:
$$\min_{(t^1,t^2)}\left[(T^1)^2+(T^2)^2\right],\quad\text{subject to (19)}.$$
To solve this problem, we use the Lagrange function
$$L=\left(\sum_{i=1}^n t^1_i\right)^2+\left(\sum_{j=1}^n t^2_j\right)^2+\sum_{i=1}^n 2\lambda_i\left(v^1_i t^1_i+v^2_i t^2_i-v_i t_i\right).$$
From the equations of the critical points, we find
$$\sum_i t^1_i=\lambda_k v^2_k,\qquad \sum_j t^2_j=\lambda_k v^1_k,\qquad\text{for each }k=1,\dots,n.$$
Since $|v^\alpha_k|=1$, it follows that
$$\sum_i t^1_i=\sum_j t^2_j=|\lambda_k|=\frac12\sum_i t_i,$$
and hence $T^1=T^2$ (a square). On the other hand, the product $T^1T^2$ depends on t. According to Bretl, for $\min(T^1T^2)$, we have
$$\min\sum_i|v_i t_i|=\sum_i t_i=\frac14\operatorname{perim}\{z_1,\dots,z_n;-z_1,\dots,-z_n\}.$$
Hence
$$\min\sum_i\left|v^1_i t^1_i+v^2_i t^2_i\right|=\frac14\operatorname{perim}\{z_1,\dots,z_n;-z_1,\dots,-z_n\}.$$
But
$$Q=\frac14\operatorname{perim}\{z_1,\dots,z_n;-z_1,\dots,-z_n\}=\frac12\left(\|z_1+z_n\|+\sum_{i=1}^{n-1}\|z_{i+1}-z_i\|\right).$$
Letting the point $z_j$ run, the relations (19) change into
$$v^{1j}_i t^1_i+v^{2j}_i t^2_i=v^j_i t_i,\qquad j=1,\dots,n.$$
Fixing the index i, we obtain a system of n linear equations with two unknowns $t^1_i$, $t^2_i$. If the rank of the system is two, one obtains the uni-temporal case of Bretl, i.e., either $t^1_i=0$ or $t^2_i=0$. For a significant two-time case, the rank must be one, and we can take the repartition $t^1_i=t^2_i=\frac{t_i}{2}$, $v^{1j}_i=v^{2j}_i=v^j_i$. It follows (a square) that
$$T^1=T^2=\frac{Q}{2},\qquad T^1T^2=\frac{Q^2}{4}.$$
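The quantities above can be computed directly from the robot positions. The sketch below uses assumed sample points $z_1,z_2,z_3$ (with $\{z_i,-z_i\}$ in convex position, as assumed in this section) and checks that Q equals a quarter of the perimeter of the symmetric polygon, so that $T^1=T^2=Q/2$.

```python
# Illustrative computation (assumed sample points): Q = (1/4) perimeter of the convex
# polygon {z_1,...,z_n, -z_1,...,-z_n}, and the optimal square has side T^1 = T^2 = Q/2.
import numpy as np

z = np.array([[3.0, 0.5], [2.0, 2.0], [0.5, 3.0]])   # assumed z_1, z_2, z_3 (convex position)
poly = np.vstack([z, -z])                            # symmetric polygon, in cyclic order
perim = np.sum(np.linalg.norm(np.roll(poly, -1, axis=0) - poly, axis=1))

Q = 0.5*(np.linalg.norm(z[0] + z[-1]) +
         np.sum(np.linalg.norm(np.diff(z, axis=0), axis=1)))
print(np.isclose(Q, perim/4.0))                      # True
print("T1 = T2 =", Q/2.0, "   T1*T2 =", Q**2/4.0)
```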

6. Multi-Time Hamilton-Jacobi-Bellman PDE

To solve the problem formulated in Section 2, let us use the idea that the multi-time dynamic programming method permits the design of multi-time optimal controls.
To simplify, let us take $\alpha=1,2$. Also, to use the multi-time maximum principle, we replace the initial multiple integral functional $I(u(\cdot),v(\cdot))$ by
$$J(u(\cdot),v(\cdot))=-\int_{\Omega_{0T}}dt^1\,dt^2\ \to\ \max$$
(equivalent to the minimum area problem). Let us consider the set $\Omega_{(t^1,t^2)(T^1,T^2)}$, where $t=(t^1,t^2)$. Since
$$J_{t,(x^1,y^1),(x^2,y^2)}(u(\cdot),v(\cdot))=-\int_{t^1}^{T^1}\!\!\int_{t^2}^{T^2}ds^1\,ds^2,$$
we transform the problem of Section 2 into similar problems: find
$$\max_{u(\cdot),v(\cdot)}J_{t,(x^1,y^1),(x^2,y^2)}(u(\cdot),v(\cdot))=(t^1-T^1)(T^2-t^2)$$
subject to
$$\frac{\partial X^i}{\partial s^\alpha}(s^1,s^2)=v^i_\alpha(s^1,s^2)\cos u(s^1,s^2),\qquad \frac{\partial Y^i}{\partial s^\alpha}(s^1,s^2)=v^i_\alpha(s^1,s^2)\sin u(s^1,s^2),$$
$$X(t^1,t^2)=x,\quad Y(t^1,t^2)=y,\qquad (s^1,s^2)\in\Omega_{(t^1,t^2)(T^1,T^2)},$$
$$X(T^1,T^2)=0,\quad Y(T^1,T^2)=0,$$
where $(T^1-t^1,T^2-t^2)$ is selected to have minimum norm.
Remark 2.
For m-volume multi-time optimal problems, the maximum value function w does not depend on the multi-time t.

6.1. One Optimal Value of the Control u

6.1.1. Case α = 1 , 2 , i = 1

Omitting the index “star”, the constraints (boundary value problem) can be rewritten in the form
$$x=v_\alpha(t^\alpha-T^\alpha)\cos u,\qquad y=v_\alpha(t^\alpha-T^\alpha)\sin u,\qquad \alpha=1,2.$$
Generally, $\frac{y}{x}=\tan u$, and the relation $x=v_\alpha(t^\alpha-T^\alpha)\cos u$ connects linearly the differences $t^1-T^1$ and $t^2-T^2$. We need to find
$$\min\left[(T^1-t^1)^2+(T^2-t^2)^2\right]\quad\text{subject to}\quad x=v_\alpha(t^\alpha-T^\alpha)\cos u.$$
Denoting
$$L=(T^1-t^1)^2+(T^2-t^2)^2+\lambda\left(v_\alpha(t^\alpha-T^\alpha)\cos u-x\right),$$
we find the critical point conditions
$$T^1-t^1=\frac{\lambda}{2}\,v_2\cos u,\qquad T^2-t^2=\frac{\lambda}{2}\,v_1\cos u.$$
Because $|v_1|=|v_2|=1$, it follows that
$$T^1-t^1=T^2-t^2=\frac{x}{2|\cos u|}=\frac{OM}{2}$$
and
$$\max_{u(\cdot),v(\cdot)}J_{t,(x,y)}(u(\cdot),v(\cdot))=-\frac{x^2}{4\cos^2 u}=-\frac{OM^2}{4}=-\frac{x^2+y^2}{4}.$$
Let us correlate this result with the Hamilton-Jacobi-Bellman PDE. Since $w(x,y)$ does not depend on t, the generating vector $(w^1(x,y),w^2(x,y))$ does not depend on t.
Suppose w is a maximum value function and
$$w^1(x,y)=\frac12 D_{t^2}w(x,y),\qquad w^2(x,y)=\frac12 D_{t^1}w(x,y)$$
is the generating vector field. The two-time Hamilton-Jacobi-Bellman PDE (feedback law) is [11]
$$\frac{\partial w^\alpha}{\partial t^\alpha}+\max_{u\in U;\,v\in V}\left[\left(\frac{\partial w^\alpha}{\partial x}\cos u+\frac{\partial w^\alpha}{\partial y}\sin u\right)v_\alpha-1\right]=0.$$
The maximum with respect to v is obtained for
$$v_\alpha=\operatorname{sign}\left(\frac{\partial w^1}{\partial x}\cos u+\frac{\partial w^1}{\partial y}\sin u\right).$$
The following PDE results:
$$\max_u\left[\left|\frac{\partial w^1}{\partial x}\cos u+\frac{\partial w^1}{\partial y}\sin u\right|+\left|\frac{\partial w^2}{\partial x}\cos u+\frac{\partial w^2}{\partial y}\sin u\right|-1\right]=0.$$
Taking
$$\frac{\partial w^1}{\partial x}=\frac{\partial w^2}{\partial x}=-\frac{x}{2\sqrt{x^2+y^2}},\qquad \frac{\partial w^1}{\partial y}=\frac{\partial w^2}{\partial y}=-\frac{y}{2\sqrt{x^2+y^2}},$$
the value of $\max_u$ is 1, attained for $\tan u=\frac{y}{x}$. Consequently,
$$w^1(x,y)=w^2(x,y)=-\frac{\sqrt{x^2+y^2}}{2}$$
is a generating vector field.
In this case, using the total derivative operator D, we have
$$2w^1=D_{t^2}w=\frac{\partial w}{\partial x}\frac{\partial x}{\partial t^2}+\frac{\partial w}{\partial y}\frac{\partial y}{\partial t^2},\qquad 2w^2=D_{t^1}w=\frac{\partial w}{\partial x}\frac{\partial x}{\partial t^1}+\frac{\partial w}{\partial y}\frac{\partial y}{\partial t^1}.$$
For $v_1=v_2=1$, one obtains a single PDE,
$$x\frac{\partial w}{\partial x}+y\frac{\partial w}{\partial y}=-(x^2+y^2),\qquad w(0,0)=0,$$
whose solution is
$$w(x,y)=-\frac{x^2+y^2}{2}.$$
On the other hand, according to [11],
$$\max_{u(\cdot),v(\cdot)}J(u(\cdot),v(\cdot))=w\big(x(t^1-T^1,t^2-T^2),\,y(t^1-T^1,t^2-T^2)\big)$$
$$-\,w\big(x(t^1,t^2-T^2),\,y(t^1,t^2-T^2)\big)-w\big(x(t^1-T^1,t^2),\,y(t^1-T^1,t^2)\big).$$
From the evolution (17), it follows that
$$x(t^1-T^1,t^2)=x(t^1,t^2-T^2)=\frac{x}{2},\qquad y(t^1-T^1,t^2)=y(t^1,t^2-T^2)=\frac{y}{2}.$$
The equality
$$-\frac{x^2+y^2}{4}=-\frac{x^2+y^2}{2}+\frac{x^2+y^2}{8}+\frac{x^2+y^2}{8}$$
confirms the previous results.
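A small numerical example (with an assumed robot position, and with the heading taken from atan2 for convenience; an equivalent direction with reversed control signs lies in $[0,\pi)$) illustrates the case $\alpha=1,2$, $i=1$:

```python
# Worked example for alpha = 1, 2, i = 1 (assumed data): with T^alpha - t^alpha =
# sqrt(x^2 + y^2)/2 and |v_1| = |v_2| = 1, the constraints hold and the optimal value
# (t^1 - T^1)(T^2 - t^2) equals -(x^2 + y^2)/4.
import numpy as np

x, y = 1.5, -2.0                        # assumed robot position
u = np.arctan2(y, x)                    # heading pointing toward (x, y)
d = np.hypot(x, y)/2.0                  # common value of T^alpha - t^alpha
v = np.array([-1.0, -1.0])              # control signs consistent with this heading
dt = np.array([-d, -d])                 # t^alpha - T^alpha

print(np.isclose(np.dot(v, dt)*np.cos(u), x),
      np.isclose(np.dot(v, dt)*np.sin(u), y))          # True True: constraints satisfied
print(np.isclose(dt[0]*(-dt[1]), -(x**2 + y**2)/4.0))  # True: optimal value matches
```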

6.1.2. Case α = 1 , 2 , i = 1 , 2

Omitting the index “star”, the constraints (boundary value problem) can be rewritten in the form
$$x^i=v^i_\alpha(t^\alpha-T^\alpha)\cos u,\qquad y^i=v^i_\alpha(t^\alpha-T^\alpha)\sin u,\qquad i=1,2;\ \alpha=1,2.$$
Since $y^i=(\tan u)\,x^i$, $i=1,2$, to find the maximum value $\max_{u(\cdot),v(\cdot)}J$ we need to solve the problem:
$$\min\left[(T^1-t^1)^2+(T^2-t^2)^2\right]\quad\text{subject to}\quad x^i=v^i_\alpha(t^\alpha-T^\alpha)\cos u,\ \ i=1,2.$$
Case $\det v\neq 0$. If $\det v=\det(v^i_\alpha)=\pm 2$, then we find
$$t^1-T^1=\frac{1}{\det v\,\cos u}\left(v^2_2x^1-v^1_2x^2\right),\qquad T^2-t^2=\frac{1}{\det v\,\cos u}\left(v^1_1x^2-v^2_1x^1\right).$$
It follows that
$$\max_{u,v}J\left((x^1,y^1),(x^2,y^2)\right)=\frac{1}{(\det v\,\cos u)^2}\left(v^1_2x^2-v^2_2x^1\right)\left(v^1_1x^2-v^2_1x^1\right)$$
or
$$\max_{u,v}J\left((x^1,y^1),(x^2,y^2)\right)=-\frac14\left|\left(\frac{x^1}{\cos u}\right)^2-\left(\frac{x^2}{\cos u}\right)^2\right|=-\frac14\left|OM_1^{\,2}-OM_2^{\,2}\right|,$$
where $M_1=(x^1,y^1)$, $M_2=(x^2,y^2)$.
The two-time Hamilton-Jacobi-Bellman PDE (feedback law) is [11]
$$\max_{u\in U;\,v\in V}\left[\left(\frac{\partial w^\alpha}{\partial x^i}\cos u+\frac{\partial w^\alpha}{\partial y^i}\sin u\right)v^i_\alpha-1\right]=0.$$
This PDE can be rewritten in the form
$$-1+\max_{u\in U}\sum_{\alpha,i=1}^2\left|\frac{\partial w^\alpha}{\partial x^i}\cos u+\frac{\partial w^\alpha}{\partial y^i}\sin u\right|=0,$$
since each optimal control $v^i_\alpha$ is
$$v^i_\alpha(t)=\operatorname{sign}\left(\frac{\partial w^\alpha}{\partial x^i}\cos u+\frac{\partial w^\alpha}{\partial y^i}\sin u\right).$$
Using a single optimal control $u(t)=\mathrm{const}$, the previous two-time Hamilton-Jacobi-Bellman PDE reduces to
$$-1+\sqrt{\left(\sum_{\alpha,i=1}^2 v^i_\alpha\frac{\partial w^\alpha}{\partial x^i}\right)^2+\left(\sum_{\alpha,i=1}^2 v^i_\alpha\frac{\partial w^\alpha}{\partial y^i}\right)^2}=0.$$
We obtain an eikonal PDE,
$$\left(\sum_{\alpha,i=1}^2 v^i_\alpha\frac{\partial w^\alpha}{\partial x^i}\right)^2+\left(\sum_{\alpha,i=1}^2 v^i_\alpha\frac{\partial w^\alpha}{\partial y^i}\right)^2=1,$$
with the unknown functions $w^1(x,y)$, $w^2(x,y)$. This PDE is equivalent to the system
$$\sum_{\alpha,i=1}^2 v^i_\alpha\frac{\partial w^\alpha}{\partial x^i}=\cos\chi,\qquad \sum_{\alpha,i=1}^2 v^i_\alpha\frac{\partial w^\alpha}{\partial y^i}=\sin\chi.$$
Consequently, for $\sum_{\alpha,i=1}^2 v^i_\alpha=2$, a solution $(w^1,w^2)$ of the Hamilton-Jacobi-Bellman PDE is obtained from
$$w^1\left((x^1,y^1),(x^2,y^2)\right)=\frac12(x^1+x^2)\cos\chi+\frac12(y^1+y^2)\sin\chi+\phi^1\left(v^2_1x^1-v^1_1x^2,\ v^2_1y^1-v^1_1y^2\right),$$
$$w^2\left((x^1,y^1),(x^2,y^2)\right)=\frac12(x^1+x^2)\cos\chi+\frac12(y^1+y^2)\sin\chi+\phi^2\left(v^2_2x^1-v^1_1x^2,\ v^2_2y^1-v^1_1y^2\right),$$
for $(x^1,y^1)\in\mathbb{R}^2$, $(x^2,y^2)\in\mathbb{R}^2$. The solution obtained via the strong multi-time maximum principle is recovered under the conditions
$$\phi^1\left(v^2_1x^1-v^1_1x^2,\ v^2_1y^1-v^1_1y^2\right)=a_1\left(v^2_1x^1-v^1_1x^2\right)+b_1\left(v^2_1y^1-v^1_1y^2\right),$$
$$\phi^2\left(v^2_2x^1-v^1_1x^2,\ v^2_2y^1-v^1_1y^2\right)=a_2\left(v^2_2x^1-v^1_1x^2\right)+b_2\left(v^2_2y^1-v^1_1y^2\right).$$
In this case, the complexity of finding an optimal policy (for arbitrary initial conditions) is dominated by the computation of a planar convex hull.
Case $\det v=0$. In this case, we need to solve the problem
$$\min\left[(T^1-t^1)^2+(T^2-t^2)^2\right]\quad\text{subject to}\quad x^1=v^1_\alpha(t^\alpha-T^\alpha)\cos u.$$
The result is similar to that of the case $\alpha=1,2$, $i=1$.
Remark 3.
Consider the eikonal PDE
$$\|Du(x)\|=1,\quad x\in\Omega\subset\mathbb{R}^n;\qquad u(x)\big|_{\partial\Omega}=0.$$
Show that:
(i) $\|Du(x)\|=1\iff\sup_{\|q\|\le 1}\left(Du(x)\cdot q-1\right)=0$, for $x\in\Omega$;
(ii) the function $u(x)=\operatorname{dist}(x;\partial\Omega)$ solves the eikonal PDE in the viscosity sense.
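A numerical illustration of item (ii), for the assumed domain $\Omega$ = unit disk in $\mathbb{R}^2$, where $\operatorname{dist}(x;\partial\Omega)=1-\|x\|$ and the gradient has unit norm wherever the distance function is differentiable:

```python
# Illustrative check for the unit disk (assumed domain): u(x) = 1 - ||x|| has
# ||Du(x)|| = 1 at points away from the center, where u is differentiable.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-0.7, 0.7, size=(1000, 2))
pts = pts[np.linalg.norm(pts, axis=1) > 1e-3]        # avoid the kink at the origin

u = lambda x: 1.0 - np.linalg.norm(x, axis=-1)
eps = 1e-6
grad = np.stack([(u(pts + eps*e) - u(pts - eps*e))/(2*eps) for e in np.eye(2)], axis=1)
print(np.allclose(np.linalg.norm(grad, axis=1), 1.0, atol=1e-5))   # True
```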

6.2. Two Optimal Values of the Control u

Let us consider the partitions $t^1=t^1_0<t^1_1<\dots<t^1_k=T^1$, $t^2=t^2_0<t^2_1<\dots<t^2_k=T^2$, and the rectangles $\Omega_j=\Omega_{(t^1,t^2)(t^1_j,t^2_j)}$, $j=1,\dots,k$. We order the optimal values $u^i_\alpha$ in an increasing sequence $u_1,\dots,u_k$ and we assign $u_j$ to the multi-time set $\Omega_j\setminus\Omega_{j-1}$. For finding the optimal evolution it is enough to consider the diagonal rectangles $\Omega_{(t^1_{j-1},t^2_{j-1})(t^1_j,t^2_j)}$, $j=1,\dots,k$. The points $t_j=(t^1_j,t^2_j)$ are determined by $u_j$ and are connected to $(T^1,T^2)$.
To simplify, in $\Omega_{(t^1,t^2)(T^1,T^2)}$, let us consider two diagonal rectangles
$$\Omega_1=\Omega_{(t^1,t^2)\left(\frac12(t^1+T^1),\,\frac12(t^2+T^2)\right)},\qquad \Omega_2=\Omega_{\left(\frac12(t^1+T^1),\,\frac12(t^2+T^2)\right)(T^1,T^2)},$$
the first corresponding to the optimal value $u_1$ and the second to $u_2$. Denoting
$$t=(t^1,t^2),\qquad t'=\left(\tfrac12(t^1+T^1),\ \tfrac12(t^2+T^2)\right),\qquad T=(T^1,T^2),$$
and imposing
$$x(t)=x,\quad y(t)=y;\qquad x(t')=x',\quad y(t')=y';\qquad x(T)=0,\quad y(T)=0,$$
the optimal evolution splits as:
$$x^i=x'^i+\tfrac12\,v^i_\alpha(t^\alpha-T^\alpha)\cos u_1,\qquad y^i=y'^i+\tfrac12\,v^i_\alpha(t^\alpha-T^\alpha)\sin u_1,\qquad\text{on }\Omega_1;$$
$$x'^i=v^i_\alpha(t'^\alpha-T^\alpha)\cos u_2,\qquad y'^i=v^i_\alpha(t'^\alpha-T^\alpha)\sin u_2,\qquad\text{on }\Omega_2.$$
We need to solve the problem of finding the maximum cost on $\Omega_1$, then on $\Omega_2$, and then add the two.
Maximum on $\Omega_1$. Using the Lagrangian function
$$L_1=(t'^1-t^1)(t'^2-t^2)+\lambda_i\left(x'^i+\tfrac12\,v^i_\alpha(t^\alpha-T^\alpha)\cos u_1-x^i\right),\qquad \det v=\det(v^i_\alpha)=\pm 2,$$
we find
$$t'^1-t^1=\frac12\,\lambda_i v^i_2\cos u_1,\qquad t'^2-t^2=\frac12\,\lambda_i v^i_1\cos u_1,$$
where
$$\lambda_1=\frac{2}{\det v\,\cos^2 u_1}\,(x^2-x'^2),\qquad \lambda_2=\frac{2}{\det v\,\cos^2 u_1}\,(x^1-x'^1).$$
Denoting
$$A=\left(v^1_2(x^2-x'^2)+v^2_2(x^1-x'^1)\right)\left(v^1_1(x^2-x'^2)-v^2_1(x^1-x'^1)\right),$$
it follows that
$$w_1\left(t,(x^1,y^1),(x^2,y^2)\right)=\frac{A}{(\det v\,\cos u_1)^2}$$
or
$$w_1\left(t,(x^1,y^1),(x^2,y^2)\right)=-\frac14\left|\left(\frac{x^1-x'^1}{\cos u_1}\right)^2-\left(\frac{x^2-x'^2}{\cos u_1}\right)^2\right|=-\frac14\left|M_1M_1'^{\,2}-M_2M_2'^{\,2}\right|$$
on $\Omega_1$, where
$$M_1=(x^1,y^1),\quad M_2=(x^2,y^2),\quad M_1'=(x'^1,y'^1),\quad M_2'=(x'^2,y'^2).$$
Maximum on $\Omega_2$. Since the constraints have the form $x'^i=v^i_\alpha(t'^\alpha-T^\alpha)\cos u_2$, the result is similar to that of the case with one optimal value of the control. Hence
$$w_2\left(t,(x^1,y^1),(x^2,y^2)\right)=-\frac14\left|OM_1'^{\,2}-OM_2'^{\,2}\right|$$
on $\Omega_2$. It follows that
$$w\left(t,(x^1,y^1),(x^2,y^2)\right)=w_1\left(t,(x^1,y^1),(x^2,y^2)\right)+w_2\left(t,(x^1,y^1),(x^2,y^2)\right).$$

6.3. Viscosity Solution

The Hamilton-Jacobi-Bellman PDE has a smooth solution on $\Omega_1$ and on $\Omega_2$. Since at the point $t'$ we have a discontinuity of the partial derivatives, we must refer to the PDE system (18) and to its viscosity solutions. The basic idea is to replace the differentials $D_{(x^1,y^1;x^2,y^2)}\varphi^\alpha\left(t,(x^1,y^1),(x^2,y^2)\right)$ at a point $\left(t,(x^1,y^1),(x^2,y^2)\right)$ where they do not exist (for example, because of a kink in $\varphi$) with the differentials $D_{(x^1,y^1;x^2,y^2)}\psi^\alpha\left(t,(x^1,y^1),(x^2,y^2)\right)$ of a smooth function $\psi$ touching the graph of $\varphi$, from above for the subsolution condition and from below for the supersolution condition, at the point $\left(t,(x^1,y^1),(x^2,y^2)\right)$.
Definition 1.
(i) A continuous function $\varphi=(\varphi^1,\varphi^2)$ is said to be a viscosity subsolution of the PDE system (18) if, for any point $\left(t,(x^1,y^1),(x^2,y^2)\right)$ and for any smooth function $\psi=(\psi^1,\psi^2)$ such that each function $\varphi^\alpha-\psi^\alpha$, $\alpha=1,2$, has a maximum point at $\left(t,(x^1,y^1),(x^2,y^2)\right)$, we have
$$\sum_{\alpha,i=1}^2\frac{\partial\psi^\alpha}{\partial x^i}\left(t,(x^1,y^1),(x^2,y^2)\right)\le\cos\chi,\qquad \sum_{\alpha,i=1}^2\frac{\partial\psi^\alpha}{\partial y^i}\left(t,(x^1,y^1),(x^2,y^2)\right)\le\sin\chi.$$
(ii) A continuous function $\varphi=(\varphi^1,\varphi^2)$ is said to be a viscosity supersolution of (16) if, for any point $\left(t,(x^1,y^1),(x^2,y^2)\right)$ and for any smooth function $\psi$ such that each function $\varphi^\alpha-\psi^\alpha$, $\alpha=1,2$, has a minimum point at $\left(t,(x^1,y^1),(x^2,y^2)\right)$, we have
$$\sum_{\alpha,i=1}^2\frac{\partial\psi^\alpha}{\partial x^i}\left(t,(x^1,y^1),(x^2,y^2)\right)\ge\cos\chi,\qquad \sum_{\alpha,i=1}^2\frac{\partial\psi^\alpha}{\partial y^i}\left(t,(x^1,y^1),(x^2,y^2)\right)\ge\sin\chi.$$
(iii) A continuous function φ = ( φ 1 , φ 2 ) is said to be a viscosity solution of the PDE system (16) if it is a viscosity subsolution and supersolution.
The viscosity solution of the PDE system is $\varphi=(\varphi^1,\varphi^2)=(Q,Q)$, where Q is a quarter of the perimeter of the parallelogram $(x^1,y^1)$, $(x^2,y^2)$, $(-x^1,-y^1)$, $(-x^2,-y^2)$, i.e.,
$$2Q\left((x^1,y^1),(x^2,y^2)\right)=\sqrt{(x^1-x^2)^2+(y^1-y^2)^2}+\sqrt{(x^1+x^2)^2+(y^1+y^2)^2}.$$
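This quarter-perimeter formula is easy to confirm numerically for assumed sample points:

```python
# Illustrative check (assumed sample points): 2Q equals half the perimeter of the
# parallelogram (x1,y1), (x2,y2), (-x1,-y1), (-x2,-y2), i.e., Q is a quarter of it.
import numpy as np

P1, P2 = np.array([1.0, 2.0]), np.array([-0.5, 1.5])
verts = np.array([P1, P2, -P1, -P2])                 # parallelogram vertices, in order
perim = np.sum(np.linalg.norm(np.roll(verts, -1, axis=0) - verts, axis=1))

two_Q = np.linalg.norm(P1 - P2) + np.linalg.norm(P1 + P2)
print(np.isclose(two_Q, perim/2.0))                  # True
```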

7. Conclusions

Our work is the first to introduce and study the theory of minirobots moving at different partial speeds (in a multi-temporal sense), but required to move in the same partial direction. We are motivated to solve this problem because constraints of this sort must be common in the micro-scale and nano-scale robotic systems appearing in the applied fields mentioned above. To understand a multi-temporal evolution, we must think of the dependence on multi-time either as an immersion, or as a diffeomorphism, or as a submersion, and note that the partial order in $\mathbb{R}^m_+$ induces a partial order on the image of such a function.
The phenomenon described by us takes place in spaces with at least four dimensions. That is why graphic representations lose their meaning.
By application of the (weak and strong) multi-time maximum principle, we obtain necessary conditions for optimality and use them to guess a candidate control policy. By the multi-time Hamilton-Jacobi-Bellman PDE, we verify that our guess is optimal. The complexity of finding this policy for arbitrary initial conditions is only quasilinear in the number of robots, and in fact is dominated by the computation of a planar convex hull.
In our view, the previous theory can be extended to the situation of three-dimensional robots, using the versor of the unit sphere; we intend to do this in a future paper. We have tested the theory of multi-time optimal control in relevant applications: multi-time control strategies for skilled movements [13], optimal control of electromagnetic energy [16], multi-time optimal control for quantum systems [10], etc.

Author Contributions

The contributions of both authors are equal. The main results and illustrative examples were developed together. All authors have read and agreed to the published version of the manuscript.

Funding

This research received funding from Balkan Society of Geometers, Bucharest, Romania.

Acknowledgments

Thanks to referees for pertinent remarks.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bretl, T. Minimum-time optimal control of many robots that move in the same direction at different speeds. IEEE Trans. Robot. 2012, 28, 351–363.
2. DeVon, D.A.; Bretl, T. Control of many robots moving in the same direction with different speeds: A decoupling approach. In Proceedings of the 2009 American Control Conference, St. Louis, MO, USA, 10–12 June 2009.
3. Becker, A.; Onyuksel, C.; Bretl, T.; McLurkin, J. Controlling many differential-drive robots with uniform control inputs. Int. J. Robot. Res. 2014, 33, 1626–1644.
4. Bien, Z.; Lee, J. A minimum-time trajectory planning method for two robots. IEEE Trans. Robot. Autom. 1992, 8, 414–418.
5. Bloch, A.M.; Baillieul, J.; Crouch, P.E.; Marsden, J.E. Nonholonomic Mechanics and Control; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2003.
6. Mauder, M. Time-Optimal Control of the Bi-Steerable Robot. Ph.D. Thesis, Fakultät für Mathematik und Informatik der Julius-Maximilians-Universität, Würzburg, Germany, 2012.
7. Udrişte, C. Multitime controllability, observability and bang-bang principle. J. Optim. Theory Appl. 2008, 39, 141–157.
8. Udrişte, C.; Ţevy, I. Multitime dynamic programming for curvilinear integral actions. J. Optim. Theory Appl. 2010, 146, 189–207.
9. Udrişte, C. Equivalence of multitime optimal control problems. Balk. J. Geom. Appl. 2010, 15, 155–162.
10. Udrişte, C. Multitime optimal control for quantum systems. In Proceedings of the Third International Conference on Lie-Admissible Treatments of Irreversible Processes (ICLATIP-3), Kathmandu University, Dhulikhel, Nepal, 3–7 January 2011.
11. Udrişte, C.; Ţevy, I. Multitime dynamic programming for multiple integral actions. J. Glob. Optim. 2011, 51, 345–360.
12. Udrişte, C.; Bejenaru, A. Multitime optimal control with area integral costs on boundary. Balk. J. Geom. Appl. 2011, 16, 138–154.
13. Iliuţă, M.; Udrişte, C.; Ţevy, I. Multitime control strategies for skilled movements. Balk. J. Geom. Appl. 2013, 18, 31–46.
14. Dirac, P.A.M. Relativistic quantum mechanics. Proc. R. Soc. A 1932, 136, 453–464.
15. Taubes, C.H. Differential Geometry: Bundles, Connections, Metrics and Curvature; Oxford University Press: Oxford, UK, 2011.
16. Pîrvan, M.; Udrişte, C. Optimal control of electromagnetic energy. Balk. J. Geom. Appl. 2010, 15, 131–141.
