Article

Univariate Theory of Functional Connections Applied to Component Constraints †

Daniele Mortari 1 and Roberto Furfaro 2
1 Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA
2 Systems and Industrial Engineering, Aerospace and Mechanical Engineering, The University of Arizona, Tucson, AZ 85721, USA
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Proceedings of the 2018 AAS/AIAA Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018.
Math. Comput. Appl. 2021, 26(1), 9; https://doi.org/10.3390/mca26010009
Submission received: 17 December 2020 / Revised: 10 January 2021 / Accepted: 11 January 2021 / Published: 14 January 2021

Abstract

This work presents a methodology to derive analytical functionals, with embedded linear constraints among the components of a vector (e.g., coordinates) that is a function of a single variable (e.g., time). This work prepares the background necessary for the indirect solution of optimal control problems via the application of the Pontryagin Maximum Principle. The methodology presented is part of the univariate Theory of Functional Connections that has been developed to solve constrained optimization problems. To increase the clarity and practical aspects of the proposed method, the work is mostly presented via examples of applications rather than via rigorous mathematical definitions and proofs.

1. Introduction

The Theory of Functional Connections (TFC) is an analytical framework developed to perform functional interpolation, that is, to derive analytical functionals, called constrained expressions, describing all functions satisfying a set of assigned constraints. This framework has been developed for univariate and multivariate rectangular domains and for a wide class of constraints, including point and derivative constraints, integral constraints, linear combinations of constraints, and, partially, component constraints. The TFC theory has been presented in detail in [1,2,3,4,5,6]. For instance, the extension to 2-dimensional space allows TFC to generate all surfaces satisfying assigned Dirichlet and Neumann boundary conditions. Recently, the domain mapping technique has been shown to be the first step toward extending TFC to any (non-rectangular) domain [7].
The first TFC application was to obtain least-squares solutions of linear and nonlinear ordinary (ODEs) and partial (PDEs) differential equations. Specifically, for ODEs the solutions have been obtained with the following features:
  • solutions are approximate and analytical (this allows easier subsequent analysis and further manipulation);
  • the approach solves initial, boundary, or multi-value problems by the same unified procedure;
  • the approach is numerically robust (low condition number);
  • solutions are usually provided at machine error accuracy;
  • solutions are usually obtained at msec level (suitable for real-time applications); and
  • the constraint range is independent of the integration range (solution accuracy is maintained outside the constraint range).
The TFC application to ODEs and PDEs can be found in [6,8,9,10,11,12,13,14,15,16]. The successful application of TFC to the solution of differential equations has generated subsequent applications in various fields, such as optimization and optimal control [17,18,19,20,21,22,23,24], astrodynamics [24,25,26,27,28,29,30], transport theory [31,32], and machine learning, with particular emphasis on physics-informed neural networks [33,34].
TFC applied to component constraints was initially presented in [12] to solve first-order ODEs. However, the solution provided in [12] is restricted to the cases presented there. In this article, the general theory of univariate component constraints is presented. This theory can be further applied to solve more complex differential equations or optimization problems subject to component constraints, such as those generated in indirect optimal control problems. Applications to optimization, based on the theory presented in this work, are not considered here and will be the subject of future works. However, a simple optimal control problem is included as an example.
This study considers a vector, $\boldsymbol{y}(t) \in \mathbb{R}^n$, depending on a single independent variable, $t$ (univariate case), whose components must satisfy a set of $p$ constraints. Constraints can be absolute, relative, or any linear combination of them. The general multivariate case of a vector $\boldsymbol{y}(\boldsymbol{t}) : \mathbb{R}^m \to \mathbb{R}^n$ depending on $m$ independent variables, $\boldsymbol{t} := \{t_1, t_2, \dots, t_m\}$, will be the subject of future studies.
Before presenting the univariate Theory of Functional Connections for component constraints, a brief summary of univariate TFC and a summary of the initial (and incomplete) work on component constraints presented in [12] are provided in the next two sections.

2. Summary of Univariate Theory of Functional Connections

The univariate Theory of Functional Connections derives analytical functionals, called constrained expressions, satisfying $p$ distinct linear constraints. These constraints can be point and derivative constraints (e.g., $y(\pi) = 7$ and $\dot{y}(1) = 1$), integral constraints (e.g., $\int_{-5}^{1} y(\tau)\, d\tau = 2$), and linear combinations of constraints (e.g., $2\,y(\pi) - \dot{y}(1) + 3\int_{-5}^{1} y(\tau)\, d\tau = 0$). These functionals can be analytically obtained using either of the following two equivalent formal expressions,
$$y(t, g(t)) = g(t) + \sum_{k=1}^{p} \eta_k(\boldsymbol{t}, g(\boldsymbol{t}))\, s_k(t) \qquad \text{or} \qquad y(t, g(t)) = g(t) + \sum_{k=1}^{p} \phi_k(t, s_k(t))\, \rho_k(\boldsymbol{t}, g(\boldsymbol{t})), \tag{1}$$
where $g(t)$ is the free function; $s_k(t)$ are $p$ linearly independent support functions (e.g., monomials, Fourier terms, etc.); $\eta_k(\boldsymbol{t}, g(\boldsymbol{t}))$ are functional coefficients to be found by imposing the constraints (these are actually scalars in the univariate case and become functionals in the multivariate case); $\phi_k(t, s_k(t))$ are switching functions, equal to 1 when evaluated at the $k$-th constraint and equal to 0 when evaluated at the $i$-th constraint, with $i \neq k$; $\rho_k(\boldsymbol{t}, g(\boldsymbol{t}))$ are projection functionals; and $\boldsymbol{t}$ is a vector specifying where the $p$ constraints are defined. This means that $\eta_k(\boldsymbol{t}, g(\boldsymbol{t}))$ and $\rho_k(\boldsymbol{t}, g(\boldsymbol{t}))$ are not continuous functions of $t$. A rigorous mathematical definition of $\phi_k(t, s_k(t))$ and $\rho_k(\boldsymbol{t}, g(\boldsymbol{t}))$ is given in [4].

Example

Consider finding the functional (constrained expression) that always satisfies the $p = 4$ constraints,
$$\left.\frac{d^2 y}{dt^2}\right|_{-1} = \ddot{y}_1, \qquad y(0) = y_0, \qquad y(2) = y_2, \qquad \text{and} \qquad \left.\frac{dy}{dt}\right|_{2} = \dot{y}_2.$$
For p = 4 constraints Equation (1) can be written as
$$y(t, g(t)) = g(t) + \eta_1\, s_1(t) + \eta_2\, s_2(t) + \eta_3\, s_3(t) + \eta_4\, s_4(t).$$
By imposing the four constraints a system of four equations in the four η k unknowns
$$\begin{aligned}
\ddot{g}(-1) + \eta_1\, \ddot{s}_1(-1) + \eta_2\, \ddot{s}_2(-1) + \eta_3\, \ddot{s}_3(-1) + \eta_4\, \ddot{s}_4(-1) &= \ddot{y}_1 \\
g(0) + \eta_1\, s_1(0) + \eta_2\, s_2(0) + \eta_3\, s_3(0) + \eta_4\, s_4(0) &= y_0 \\
g(2) + \eta_1\, s_1(2) + \eta_2\, s_2(2) + \eta_3\, s_3(2) + \eta_4\, s_4(2) &= y_2 \\
\dot{g}(2) + \eta_1\, \dot{s}_1(2) + \eta_2\, \dot{s}_2(2) + \eta_3\, \dot{s}_3(2) + \eta_4\, \dot{s}_4(2) &= \dot{y}_2
\end{aligned}$$
is obtained. By selecting the support functions as monomials, $s_1(t) = 1$, $s_2(t) = t$, $s_3(t) = t^2$, and $s_4(t) = t^3$, the previous system becomes
$$\begin{aligned}
\ddot{g}(-1) + 2\eta_3 - 6\eta_4 &= \ddot{y}_1 \\
g(0) + \eta_1 &= y_0 \\
g(2) + \eta_1 + 2\eta_2 + 4\eta_3 + 8\eta_4 &= y_2 \\
\dot{g}(2) + \eta_2 + 4\eta_3 + 12\eta_4 &= \dot{y}_2,
\end{aligned}$$
whose solution
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{Bmatrix} = \begin{bmatrix} 0 & 0 & 2 & -6 \\ 1 & 0 & 0 & 0 \\ 1 & 2 & 4 & 8 \\ 0 & 1 & 4 & 12 \end{bmatrix}^{-1} \begin{Bmatrix} \ddot{y}_1 - \ddot{g}(-1) \\ y_0 - g(0) \\ y_2 - g(2) \\ \dot{y}_2 - \dot{g}(2) \end{Bmatrix}$$
provides the functional (constrained expression) for the four specified constraints,
$$y(t, g(t)) = g(t) + \frac{-4t + 4t^2 - t^3}{14}\left[\ddot{y}_1 - \ddot{g}(-1)\right] + \frac{28 - 24t + 3t^2 + t^3}{28}\left[y_0 - g(0)\right] + \frac{24t - 3t^2 - t^3}{28}\left[y_2 - g(2)\right] + \frac{-10t + 3t^2 + t^3}{14}\left[\dot{y}_2 - \dot{g}(2)\right].$$
This functional represents all functions satisfying simultaneously all four constraints. Furthermore, this equation highlights the switching/projection formal expression given in Equation (1), where the corresponding switching functions and projection functionals are
$$\phi_1(t, \boldsymbol{s}(t)) = \frac{-4t + 4t^2 - t^3}{14}, \qquad \phi_2(t, \boldsymbol{s}(t)) = \frac{28 - 24t + 3t^2 + t^3}{28}, \qquad \phi_3(t, \boldsymbol{s}(t)) = \frac{24t - 3t^2 - t^3}{28}, \qquad \phi_4(t, \boldsymbol{s}(t)) = \frac{-10t + 3t^2 + t^3}{14}$$
and
$$\rho_1(\boldsymbol{t}, g(\boldsymbol{t})) = \ddot{y}_1 - \ddot{g}(-1), \qquad \rho_2(\boldsymbol{t}, g(\boldsymbol{t})) = y_0 - g(0), \qquad \rho_3(\boldsymbol{t}, g(\boldsymbol{t})) = y_2 - g(2), \qquad \rho_4(\boldsymbol{t}, g(\boldsymbol{t})) = \dot{y}_2 - \dot{g}(2).$$
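As a quick numerical sanity check (not part of the original paper), the short Python script below builds the 4 × 4 support-function matrix for the monomial basis used above, solves for the $\eta_k$, and verifies that the resulting constrained expression satisfies all four constraints for an arbitrary free function $g(t)$; the constraint values and the choice of $g(t)$ are illustrative assumptions.

```python
# Numerical check of the Section 2 example (illustrative values, arbitrary g(t)).
import numpy as np

y1dd, y0, y2, y2d = 3.0, -1.0, 2.0, 0.5        # assumed constraint values
g   = lambda t: np.sin(t) + 0.3 * t**2          # arbitrary free function
gd  = lambda t: np.cos(t) + 0.6 * t             # its first derivative
gdd = lambda t: -np.sin(t) + 0.6                # its second derivative

# Rows: d2/dt2 at t = -1, value at t = 0, value at t = 2, d/dt at t = 2
# Columns: support functions 1, t, t^2, t^3
M = np.array([[0, 0, 2, -6],
              [1, 0, 0,  0],
              [1, 2, 4,  8],
              [0, 1, 4, 12]], dtype=float)
rho = np.array([y1dd - gdd(-1), y0 - g(0), y2 - g(2), y2d - gd(2)])
eta = np.linalg.solve(M, rho)

y   = lambda t: g(t)   + eta @ np.array([1, t, t**2, t**3])
yd  = lambda t: gd(t)  + eta @ np.array([0, 1, 2*t, 3*t**2])
ydd = lambda t: gdd(t) + eta @ np.array([0, 0, 2, 6*t])

# All four residuals are ~0 regardless of the chosen g(t)
print(ydd(-1) - y1dd, y(0) - y0, y(2) - y2, yd(2) - y2d)
```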

3. Correct Functionals for the Component Constraints Previously Provided

The TFC functionals for component constraints provided in [12] were specifically derived as the simplest functionals that can be used to solve first-order differential equations. However, while fitting the purpose of [12], those functionals cannot be adopted for the general case of component constraints. In this section, for completeness, we provide the correct expressions of these functionals, also highlighting the constraint requirements affecting the selected support functions. These correct expressions are derived using the general formalism provided in Section 4.

3.1. Two Absolute Constraints

The case of two absolute component constraints of a vector $\boldsymbol{v}(t) \in \mathbb{R}^2$, $\boldsymbol{v}(t) := \{x(t),\, y(t)\}$, generates a single simple case,
$$\begin{cases} x(t_a) = x_a \\ y(t_b) = y_b \end{cases} \qquad\Longrightarrow\qquad \begin{cases} x(t, g_x(t)) = g_x(t) + \eta_1\, s_{11}(t) \\ y(t, g_y(t)) = g_y(t) + \eta_2\, s_{22}(t), \end{cases}$$
where
$$\eta_1 = \frac{x_a - g_x(t_a)}{s_{11}(t_a)} \qquad \text{and} \qquad \eta_2 = \frac{y_b - g_y(t_b)}{s_{22}(t_b)}$$
and the support functions are subject to $s_{11}(t_a) \neq 0$ and $s_{22}(t_b) \neq 0$, respectively.

3.2. One Absolute and One Relative Constraint

The case of one absolute and one relative constraint generates two distinct cases:
 Case (1) 
$$\begin{cases} x(t_a) = x_a \\ y(t_b) = y(t_c) \end{cases} \qquad\Longrightarrow\qquad \begin{cases} x(t, g_x(t)) = g_x(t) + \eta_1\, s_{11}(t) \\ y(t, g_y(t)) = g_y(t) + \eta_2\, s_{22}(t), \end{cases}$$
where
$$\eta_1 = \frac{x_a - g_x(t_a)}{s_{11}(t_a)} \qquad \text{and} \qquad \eta_2 = \frac{g_y(t_b) - g_y(t_c)}{s_{22}(t_c) - s_{22}(t_b)}$$
subject to $s_{11}(t_a) \neq 0$ and $s_{22}(t_c) \neq s_{22}(t_b)$.
 Case (2) 
$$\begin{cases} x(t_a) = x_a \\ y(t_b) = x(t_c) \end{cases} \qquad\Longrightarrow\qquad \begin{cases} x(t, g_x(t)) = g_x(t) + \eta_1\, s_{11}(t) \\ y(t, g_y(t)) = g_y(t) + \eta_1\, s_{21}(t) + \eta_2\, s_{22}(t), \end{cases}$$
where
$$\eta_1 = \frac{x_a - g_x(t_a)}{s_{11}(t_a)} \qquad \text{and} \qquad \eta_2 = \frac{g_x(t_c) - g_y(t_b) + \eta_1\left[s_{11}(t_c) - s_{21}(t_b)\right]}{s_{22}(t_b)}$$
subject to $s_{11}(t_a) \neq 0$ and $s_{22}(t_b) \neq 0$.

3.3. Two Relative Constraints

The general case of two relative constraints generates three distinct cases:
 Case (1) 
$$\begin{cases} x(t_a) = x(t_b) \\ y(t_c) = y(t_d) \end{cases} \qquad\Longrightarrow\qquad \begin{cases} x(t, g_x(t)) = g_x(t) + \eta_1\, s_{11}(t) \\ y(t, g_y(t)) = g_y(t) + \eta_2\, s_{22}(t), \end{cases}$$
where
$$\eta_1 = \frac{g_x(t_b) - g_x(t_a)}{s_{11}(t_a) - s_{11}(t_b)} \qquad \text{and} \qquad \eta_2 = \frac{g_y(t_c) - g_y(t_d)}{s_{22}(t_d) - s_{22}(t_c)}$$
subject to $s_{11}(t_a) \neq s_{11}(t_b)$ and $s_{22}(t_c) \neq s_{22}(t_d)$.
 Case (2) 
$$\begin{cases} x(t_a) = y(t_b) \\ y(t_c) = y(t_d) \end{cases} \qquad\Longrightarrow\qquad \begin{cases} x(t, g_x(t)) = g_x(t) + \eta_1\, s_{11}(t) \\ y(t, g_y(t)) = g_y(t) + \eta_1\, s_{21}(t) + \eta_2\, s_{22}(t), \end{cases}$$
where
$$\begin{aligned}
g_x(t_a) + \eta_1\, s_{11}(t_a) &= g_y(t_b) + \eta_1\, s_{21}(t_b) + \eta_2\, s_{22}(t_b) \\
g_y(t_c) + \eta_1\, s_{21}(t_c) + \eta_2\, s_{22}(t_c) &= g_y(t_d) + \eta_1\, s_{21}(t_d) + \eta_2\, s_{22}(t_d)
\end{aligned}$$
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \begin{bmatrix} s_{11}(t_a) - s_{21}(t_b) & -s_{22}(t_b) \\ s_{21}(t_c) - s_{21}(t_d) & s_{22}(t_c) - s_{22}(t_d) \end{bmatrix}^{-1} \begin{Bmatrix} g_y(t_b) - g_x(t_a) \\ g_y(t_d) - g_y(t_c) \end{Bmatrix}$$
subject to $\left[s_{11}(t_a) - s_{21}(t_b)\right]\left[s_{22}(t_c) - s_{22}(t_d)\right] + s_{22}(t_b)\left[s_{21}(t_c) - s_{21}(t_d)\right] \neq 0$.
 Case (3) 
$$\begin{cases} x(t_a) = y(t_b) \\ y(t_c) = x(t_d) \end{cases} \qquad\Longrightarrow\qquad \begin{cases} x(t, g_x(t)) = g_x(t) + \eta_1\, s_{11}(t) + \eta_2\, s_{12}(t) \\ y(t, g_y(t)) = g_y(t) + \eta_1\, s_{21}(t) + \eta_2\, s_{22}(t), \end{cases}$$
where
$$\begin{aligned}
g_x(t_a) + \eta_1\, s_{11}(t_a) + \eta_2\, s_{12}(t_a) &= g_y(t_b) + \eta_1\, s_{21}(t_b) + \eta_2\, s_{22}(t_b) \\
g_y(t_c) + \eta_1\, s_{21}(t_c) + \eta_2\, s_{22}(t_c) &= g_x(t_d) + \eta_1\, s_{11}(t_d) + \eta_2\, s_{12}(t_d)
\end{aligned}$$
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \begin{bmatrix} s_{11}(t_a) - s_{21}(t_b) & s_{12}(t_a) - s_{22}(t_b) \\ s_{21}(t_c) - s_{11}(t_d) & s_{22}(t_c) - s_{12}(t_d) \end{bmatrix}^{-1} \begin{Bmatrix} g_y(t_b) - g_x(t_a) \\ g_x(t_d) - g_y(t_c) \end{Bmatrix}$$
subject to $\left[s_{11}(t_a) - s_{21}(t_b)\right]\left[s_{22}(t_c) - s_{12}(t_d)\right] \neq \left[s_{12}(t_a) - s_{22}(t_b)\right]\left[s_{21}(t_c) - s_{11}(t_d)\right]$.
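A minimal numerical sketch of Case (3), with assumed support functions $s_{11} = 1$, $s_{12} = t$, $s_{21} = t$, $s_{22} = t^2$ and illustrative values of $t_a, \dots, t_d$ (none of these choices come from the paper), confirms that the resulting constrained expressions satisfy both relative constraints for arbitrary free functions:

```python
# Case (3) of Section 3.3: coupled relative constraints x(t_a) = y(t_b), y(t_c) = x(t_d).
import numpy as np

ta, tb, tc, td = 0.0, 1.0, 2.0, 3.0        # illustrative constraint times
gx = lambda t: np.cos(2 * t)                # arbitrary free functions
gy = lambda t: np.exp(-t) + t

s11 = lambda t: 1.0
s12 = lambda t: t
s21 = lambda t: t
s22 = lambda t: t**2

M = np.array([[s11(ta) - s21(tb), s12(ta) - s22(tb)],
              [s21(tc) - s11(td), s22(tc) - s12(td)]])
b = np.array([gy(tb) - gx(ta), gx(td) - gy(tc)])
eta1, eta2 = np.linalg.solve(M, b)

x = lambda t: gx(t) + eta1 * s11(t) + eta2 * s12(t)
y = lambda t: gy(t) + eta1 * s21(t) + eta2 * s22(t)

print(x(ta) - y(tb), y(tc) - x(td))   # both ~0 for any gx, gy
```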

4. Univariate Theory of Functional Connections Subject to Component Constraints

Let us consider a set of $p$ component constraints given as linear combinations of points, derivatives, or integrals of the components, defined at specific values of the independent variable (points and derivatives) or over a domain range (integrals). To derive the constrained expression of each component, the following rules apply.
1. The $x_i$ component appears in $n_i$ constraints whose indices are the elements of the vector of integers, $I_i$. For instance, if the $x_i$ component appears in the constraint equations identified as "2", "9", and "19", only, then $I_i = \{2, 9, 19\}$ and $n_i = 3$, which is the length of the $I_i$ vector.
2. The constrained expression of the $x_i$ component is the sum of the free function $g_i(t)$ and a linear combination of the $n_i$ functional coefficients, $\eta_k(\boldsymbol{t}, g(\boldsymbol{t}))$, with $n_i$ linearly independent support functions, $s_{ik}(t)$,
$$x_i(t, g_i(t)) = g_i(t) + \sum_{k \in I_i} \eta_k(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{ik}(t). \tag{2}$$
Note that the total number of distinct functional coefficients, $\eta_k(\boldsymbol{t}, g(\boldsymbol{t}))$, is the total number of constraints to be satisfied. The coefficients $\eta_k(\boldsymbol{t}, g(\boldsymbol{t}))$ are not continuous functions of $t$; they depend on the specific values of $t$ where the constraints are defined. All these specific values are the elements of the vector $\boldsymbol{t} = \{t_1, t_2, \dots\}$. Each component, $x_i$, has its own constrained expression and its own free function, $g_i(t)$. The following three examples help to clarify how to apply Equation (2).
 Example 1. 
Consider the $p = 2$ constraints among the components of the vector $\boldsymbol{x}(t) \in \mathbb{R}^4$,
$$\begin{aligned}
3\, x_1(\pi) - 2\, x_2(0) + 7\, x_2(-1) &= \int_{1}^{4} x_3(\tau)\, d\tau - 5 && \text{(constraint 1)} \\
x_2(3) - x_3(2) &= 0 && \text{(constraint 2)}.
\end{aligned} \tag{3}$$
Note that, as the last component, $x_4(t)$, does not appear in any constraint equation, then $I_4 \equiv \{\varnothing\}$, while the other $I_i$ vectors are
$$I_1 \equiv \{1\}, \qquad I_2 \equiv \{1, 2\}, \qquad \text{and} \qquad I_3 \equiv \{1, 2\}.$$
Therefore, for the two constraints given in Equation (3) the component constrained functionals are expressed as
$$\begin{aligned}
x_1(t, g_1(t)) &= g_1(t) + \eta_1(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{11}(t) \\
x_2(t, g_2(t)) &= g_2(t) + \eta_1(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{21}(t) + \eta_2(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{22}(t) \\
x_3(t, g_3(t)) &= g_3(t) + \eta_1(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{31}(t) + \eta_2(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{32}(t)
\end{aligned}$$
because $x_1$ appears in constraint 1 only, while $x_2$ and $x_3$ appear in constraints 1 and 2, and $\boldsymbol{t} := \{-1, 0, 1, 2, 3, \pi, 4\}$.
The first constraint becomes
$$3\left[g_1(\pi) + \eta_1\, s_{11}(\pi)\right] - 2\left[g_2(0) + \eta_1\, s_{21}(0) + \eta_2\, s_{22}(0)\right] + 7\left[g_2(-1) + \eta_1\, s_{21}(-1) + \eta_2\, s_{22}(-1)\right] = \int_{1}^{4} \left[g_3(\tau) + \eta_1\, s_{31}(\tau) + \eta_2\, s_{32}(\tau)\right] d\tau - 5.$$
Collecting the η k terms we obtain
$$\left[3\, s_{11}(\pi) - 2\, s_{21}(0) + 7\, s_{21}(-1) - \int_{1}^{4} s_{31}(\tau)\, d\tau\right] \eta_1 + \left[-2\, s_{22}(0) + 7\, s_{22}(-1) - \int_{1}^{4} s_{32}(\tau)\, d\tau\right] \eta_2 = -3\, g_1(\pi) + 2\, g_2(0) - 7\, g_2(-1) + \int_{1}^{4} g_3(\tau)\, d\tau - 5,$$
while the second constraint becomes
$$g_3(2) + \eta_1\, s_{31}(2) + \eta_2\, s_{32}(2) - g_2(3) - \eta_1\, s_{21}(3) - \eta_2\, s_{22}(3) = 0$$
and collecting the η k terms we have
$$\left[s_{31}(2) - s_{21}(3)\right] \eta_1 + \left[s_{32}(2) - s_{22}(3)\right] \eta_2 = g_2(3) - g_3(2).$$
Therefore, the $\eta_k$ functional coefficients can be computed by matrix inversion,
$$\begin{bmatrix} 3\, s_{11}(\pi) - 2\, s_{21}(0) + 7\, s_{21}(-1) - \int_{1}^{4} s_{31}(\tau)\, d\tau & \; -2\, s_{22}(0) + 7\, s_{22}(-1) - \int_{1}^{4} s_{32}(\tau)\, d\tau \\ s_{31}(2) - s_{21}(3) & s_{32}(2) - s_{22}(3) \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \begin{Bmatrix} -3\, g_1(\pi) + 2\, g_2(0) - 7\, g_2(-1) + \int_{1}^{4} g_3(\tau)\, d\tau - 5 \\ g_2(3) - g_3(2) \end{Bmatrix}.$$
The support functions selected are consistent if the functional coefficients $\eta_k$ can be computed, that is, if the above matrix (which depends on the selected support functions only) is nonsingular. The matrix is nonsingular if a linear combination of the selected support functions can be adopted to interpolate the constraints (this interpolation problem is obtained by setting all the $g_k(t)$ free functions to zero). This means that the consistency of the selected support functions is directly connected to an interpolation problem. By solving this interpolation problem, the functional interpolation problem (its generalization) is also solved. The following simple equivalent example clarifies this interpolation issue.
Let us consider adopting the function
$$y(t) = \eta_1\, s_1(t) + \eta_2\, s_2(t),$$
using $s_1(t) = 1$ and $s_2(t) = t$, to interpolate two known derivatives at two distinct points, $\dot{y}(t_1) = \dot{y}_1$ and $\dot{y}(t_2) = \dot{y}_2$. This problem cannot be solved using the support functions selected. In this case, the selected support functions are inconsistent with the interpolation problem. The support functions become consistent with respect to the constraints by selecting, for instance, $s_1(t) = t$ and $s_2(t) = t^2$.
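The consistency check above is just a determinant condition on the interpolation matrix; a short sketch (illustrative $t_1$, $t_2$) makes the point:

```python
# Support-function consistency as an interpolation problem (illustrative t1, t2).
import numpy as np

t1, t2 = 0.5, 1.5
# Rows: derivative constraints at t1 and t2; columns: the two support functions
M_bad  = np.array([[0.0, 1.0],       # d/dt of {1, t}   = {0, 1}
                   [0.0, 1.0]])
M_good = np.array([[1.0, 2 * t1],    # d/dt of {t, t^2} = {1, 2t}
                   [1.0, 2 * t2]])
print(np.linalg.det(M_bad), np.linalg.det(M_good))   # 0.0 (inconsistent) vs. nonzero
```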
Coming back to our example, by selecting $s_{11}(t) = s_{21}(t) = s_{31}(t) = 1$ and $s_{22}(t) = s_{32}(t) = t$, the previous equation provides the expressions for the $\eta_k$ functional coefficients,
$$\begin{bmatrix} 5 & -29/2 \\ 0 & -1 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = \begin{Bmatrix} -3\, g_1(\pi) + 2\, g_2(0) - 7\, g_2(-1) + \int_{1}^{4} g_3(\tau)\, d\tau - 5 \\ g_2(3) - g_3(2) \end{Bmatrix} = \begin{Bmatrix} \rho_1 \\ \rho_2 \end{Bmatrix}.$$
This equation also shows the connection between the functional coefficients, $\eta_k$, and the projection functionals, $\rho_k$, appearing in Equation (1) (see [4] for the formal definition and properties of the projection functionals). Therefore, the $\eta_k$ functional coefficients for this specific example can be written as
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \end{Bmatrix} = -\frac{1}{5} \begin{bmatrix} -1 & 29/2 \\ 0 & 5 \end{bmatrix} \begin{Bmatrix} \rho_1 \\ \rho_2 \end{Bmatrix}.$$
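The following sketch (not from the paper; the free functions are arbitrary choices and the integral is evaluated numerically) solves this 2 × 2 system with the reconstructed signs and verifies that both constraints of Equation (3) are satisfied:

```python
# Numerical check of Example 1 with s11 = s21 = s31 = 1 and s22 = s32 = t.
import numpy as np
from scipy.integrate import quad

g1 = lambda t: np.sin(t)          # arbitrary free functions
g2 = lambda t: t**2 - 1.0
g3 = lambda t: np.exp(-t)

rho1 = -3*g1(np.pi) + 2*g2(0) - 7*g2(-1) + quad(g3, 1, 4)[0] - 5
rho2 = g2(3) - g3(2)
A = np.array([[5.0, -29/2],
              [0.0,  -1.0]])
eta1, eta2 = np.linalg.solve(A, [rho1, rho2])

x1 = lambda t: g1(t) + eta1
x2 = lambda t: g2(t) + eta1 + eta2 * t
x3 = lambda t: g3(t) + eta1 + eta2 * t

c1 = 3*x1(np.pi) - 2*x2(0) + 7*x2(-1) - (quad(x3, 1, 4)[0] - 5)
c2 = x2(3) - x3(2)
print(c1, c2)   # both ~0
```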
 Example 2. 
Consider the $p = 3$ constraints among the components of the vector $\boldsymbol{v}(t) \in \mathbb{R}^3$, $\boldsymbol{v}(t) := \{x(t),\, y(t),\, z(t)\}$,
$$x(-1) = \pi\, y(+1) + \int_{-1}^{+1} z(\tau)\, d\tau, \qquad \dot{y}(0) = 2\, x(+1), \qquad z(-1) = z(+1). \tag{4}$$
For these constraints the I i vectors are
$$I_1 \equiv \{1, 2\}, \qquad I_2 \equiv \{1, 2\}, \qquad \text{and} \qquad I_3 \equiv \{1, 3\}.$$
For the three constraints given in Equation (4) the component constrained expressions are
$$\begin{aligned}
x(t, g_x(t)) &= g_x(t) + \eta_1(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{11}(t) + \eta_2(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{12}(t) \\
y(t, g_y(t)) &= g_y(t) + \eta_1(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{21}(t) + \eta_2(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{22}(t) \\
z(t, g_z(t)) &= g_z(t) + \eta_1(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{31}(t) + \eta_3(\boldsymbol{t}, g(\boldsymbol{t}))\, s_{33}(t).
\end{aligned}$$
Selecting $s_{11}(t) = s_{21}(t) = s_{31}(t) = 1$ and $s_{12}(t) = s_{22}(t) = s_{33}(t) = t$, the constraints become
$$\begin{aligned}
g_x(-1) + \eta_1 - \eta_2 &= \pi \left[g_y(1) + \eta_1 + \eta_2\right] + \int_{-1}^{+1} \left[g_z(\tau) + \eta_1 + \eta_3\, \tau\right] d\tau \\
\dot{g}_y(0) + \eta_2 &= 2 \left[g_x(1) + \eta_1 + \eta_2\right] \\
g_z(-1) + \eta_1 - \eta_3 &= g_z(1) + \eta_1 + \eta_3.
\end{aligned}$$
The $\eta_k$ functional coefficients can then be computed from
$$\begin{bmatrix} -(1+\pi) & -(1+\pi) & 0 \\ -2 & -1 & 0 \\ 0 & 0 & -2 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{Bmatrix} = \begin{Bmatrix} -g_x(-1) + \pi\, g_y(1) + \int_{-1}^{+1} g_z(\tau)\, d\tau \\ -\dot{g}_y(0) + 2\, g_x(1) \\ g_z(1) - g_z(-1) \end{Bmatrix},$$
whose solution is
$$\begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{Bmatrix} = \frac{1}{2(1+\pi)} \begin{bmatrix} 2 & -2(1+\pi) & 0 \\ -4 & 2(1+\pi) & 0 \\ 0 & 0 & -(1+\pi) \end{bmatrix} \begin{Bmatrix} -g_x(-1) + \pi\, g_y(1) + \int_{-1}^{+1} g_z(\tau)\, d\tau \\ -\dot{g}_y(0) + 2\, g_x(1) \\ g_z(1) - g_z(-1) \end{Bmatrix}.$$
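A short check (with the signs as reconstructed above) confirms that the reported 3 × 3 matrix and its explicit inverse are consistent:

```python
# Verify that the explicit inverse reported for Example 2 inverts the 3x3 constraint matrix.
import numpy as np

pi = np.pi
A = np.array([[-(1 + pi), -(1 + pi),  0.0],
              [-2.0,      -1.0,       0.0],
              [ 0.0,       0.0,      -2.0]])
Ainv = (1.0 / (2 * (1 + pi))) * np.array([[ 2.0, -2 * (1 + pi),  0.0],
                                          [-4.0,  2 * (1 + pi),  0.0],
                                          [ 0.0,  0.0,          -(1 + pi)]])
print(np.allclose(A @ Ainv, np.eye(3)))   # True
```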
 Example 3. 
Consider the vector $\boldsymbol{v}(t) \in \mathbb{R}^2$, $\boldsymbol{v}(t) := \{x(t),\, y(t)\}$, whose components are subject to the following four constraints,
$$x(0) = 0, \qquad y(0) = 0, \qquad y(1) = y(2), \qquad \text{and} \qquad 2\, y(1) - \int_0^3 x(\tau)\, d\tau = 4. \tag{5}$$
For these constraints the I i vectors are
$$I_1 \equiv \{1, 4\} \qquad \text{and} \qquad I_2 \equiv \{2, 3, 4\}.$$
Therefore, for the constraints given in Equation (5) the component constrained expressions are
$$\begin{aligned}
x(t, g_x(t)) &= g_x(t) + \eta_1\, s_{11}(t) + \eta_4\, s_{14}(t) \\
y(t, g_y(t)) &= g_y(t) + \eta_2\, s_{22}(t) + \eta_3\, s_{23}(t) + \eta_4\, s_{24}(t).
\end{aligned}$$
Selecting $s_{11}(t) = s_{22}(t) = 1$, $s_{14}(t) = s_{23}(t) = t$, and $s_{24}(t) = t^2$, the constraints become
$$\begin{aligned}
g_x(0) + \eta_1 &= 0 \\
g_y(0) + \eta_2 &= 0 \\
g_y(1) + \eta_2 + \eta_3 + \eta_4 &= g_y(2) + \eta_2 + 2\eta_3 + 4\eta_4 \\
2\left[g_y(1) + \eta_2 + \eta_3 + \eta_4\right] - \int_0^3 g_x\, d\tau - 3\eta_1 - \frac{9}{2}\eta_4 &= 4.
\end{aligned}$$
The η k functional coefficients can be computed from the linear system,
$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & -3 \\ -3 & 2 & 2 & -5/2 \end{bmatrix} \begin{Bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \eta_4 \end{Bmatrix} = \begin{Bmatrix} -g_x(0) \\ -g_y(0) \\ g_y(2) - g_y(1) \\ 4 + \int_0^3 g_x\, d\tau - 2\, g_y(1) \end{Bmatrix},$$
providing the solution
$$\eta_1 = -g_x(0), \qquad \eta_2 = -g_y(0), \qquad \begin{Bmatrix} \eta_3 \\ \eta_4 \end{Bmatrix} = \frac{2}{17} \begin{bmatrix} -5/2 & 3 \\ -2 & -1 \end{bmatrix} \begin{Bmatrix} g_y(2) - g_y(1) \\ 4 + \int_0^3 g_x\, d\tau - 2\, g_y(1) - 3\, g_x(0) + 2\, g_y(0) \end{Bmatrix}.$$
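As before, a brief numerical sketch (arbitrary free functions; the integral is evaluated with scipy.integrate.quad) verifies that these $\eta_k$ enforce all four constraints of Equation (5):

```python
# Numerical check of Example 3 with the reconstructed solution formulas.
import numpy as np
from scipy.integrate import quad

gx = lambda t: np.sin(3 * t) + 1.0        # arbitrary free functions
gy = lambda t: np.cos(t) - t

eta1, eta2 = -gx(0), -gy(0)
b = np.array([gy(2) - gy(1),
              4 + quad(gx, 0, 3)[0] - 2 * gy(1) - 3 * gx(0) + 2 * gy(0)])
eta3, eta4 = (2 / 17) * np.array([[-5/2, 3.0], [-2.0, -1.0]]) @ b

x = lambda t: gx(t) + eta1 + eta4 * t
y = lambda t: gy(t) + eta2 + eta3 * t + eta4 * t**2

print(x(0), y(0), y(1) - y(2), 2 * y(1) - quad(x, 0, 3)[0] - 4)   # all ~0
```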

Application to a Simple Example of Optimal Control Problem

Consider the following one-dimensional optimal control problem. A unitary mass is in rectilinear motion subject to the one-dimensional control force, u ( t ) , in the direction of motion. Initial state conditions are set to be x ( t 0 ) = x 0 = 1 m and v ( t 0 ) = v 0 = 2 m/s, where t 0 = 0 . The goal is to find the optimal control, u ( t ) , to bring the point mass sufficiently close to the origin at the final time, t f = 2 s, with minimum control effort. Overall, the optimal control problem can be mathematically formulated as follows.
Find the optimal control u ( t ) and the trajectory x ( t ) and v ( t ) that minimize the following cost function
$$J = S(x, v)\big|_{t_f} + \int_{t_0}^{t_f} L(u)\, d\tau = \frac{1}{2}\, x^2\big|_{t_f} + \frac{1}{2} \int_{t_0}^{t_f} u^2(\tau)\, d\tau,$$
subject to the dynamic equations
$$\dot{x}(t) = v(t) \qquad \text{and} \qquad \dot{v}(t) = u(t)$$
and the boundary conditions
$$x(t_0) = x_0 \qquad \text{and} \qquad v(t_0) = v_0.$$
The necessary conditions are determined by applying Pontryagin's Maximum Principle (PMP) [35]. First, we compute the Hamiltonian
$$H = \frac{1}{2} u^2 + \lambda_x\, v + \lambda_v\, u.$$
We find the optimal control by applying the optimality conditions, i.e.,
$$\frac{\partial H}{\partial u} = u + \lambda_v = 0 \qquad \Longrightarrow \qquad u = -\lambda_v.$$
Therefore, the Hamiltonian becomes
$$H = \frac{1}{2} \lambda_v^2 + \lambda_x\, v - \lambda_v^2 = \lambda_x\, v - \frac{1}{2} \lambda_v^2.$$
State and costate dynamic equations are derived from the Hamiltonian as follows,
$$\dot{x} = \frac{\partial H}{\partial \lambda_x} = v, \qquad \dot{v} = \frac{\partial H}{\partial \lambda_v} = -\lambda_v \qquad \text{and} \qquad \dot{\lambda}_x = -\frac{\partial H}{\partial x} = 0, \qquad \dot{\lambda}_v = -\frac{\partial H}{\partial v} = -\lambda_x.$$
The constraints are the initial conditions,
$$x(t_0) = x_0 \qquad \text{and} \qquad v(t_0) = v_0,$$
while the boundary conditions can be determined as follows,
$$\lambda_x(t_f) = \left.\frac{\partial \Phi}{\partial x}\right|_{t_f} = x(t_f) \qquad \text{and} \qquad \lambda_v(t_f) = \left.\frac{\partial \Phi}{\partial v}\right|_{t_f} = 0,$$
where
$$\Phi = \frac{1}{2}\, x(t_f)^2 + \mu_1 \left[x(t_0) - x_0\right] + \mu_2 \left[v(t_0) - v_0\right].$$
Here, μ 1 and μ 2 are the Lagrange multipliers associated with the initial states x ( t 0 ) and v ( t 0 ) , respectively. The optimal control and trajectory solution can be found by solving the set of ODEs (necessary conditions, i.e., two-point boundary value problem) provided by the Hamiltonian formulation, i.e.,
$$\begin{Bmatrix} \dot{x} \\ \dot{v} \\ \dot{\lambda}_x \\ \dot{\lambda}_v \end{Bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \end{bmatrix} \begin{Bmatrix} x \\ v \\ \lambda_x \\ \lambda_v \end{Bmatrix} \qquad \text{subject to:} \qquad \begin{cases} x(t_0) = x_0 \\ v(t_0) = v_0 \\ \lambda_x(t_f) = x(t_f) \\ \lambda_v(t_f) = 0, \end{cases} \tag{16}$$
where λ x and λ v represent the costate components associated to x and v, respectively. This system of differential equations admits the following analytical solution,
$$x(t) = \frac{5}{22} t^3 - \frac{15}{11} t^2 + 2t + 1, \qquad v(t) = \frac{15}{22} t^2 - \frac{30}{11} t + 2, \qquad \lambda_x(t) = \frac{15}{11}, \qquad \lambda_v(t) = -\frac{15}{11} t + \frac{30}{11}.$$
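This analytical solution can be checked symbolically; the sketch below (using SymPy, not part of the paper) verifies the dynamics of Equation (16) and its boundary conditions with $t_0 = 0$, $t_f = 2$, $x_0 = 1$, and $v_0 = 2$:

```python
# Symbolic check of the analytical solution of the optimal control example.
import sympy as sp

t = sp.symbols('t')
x  = sp.Rational(5, 22)*t**3 - sp.Rational(15, 11)*t**2 + 2*t + 1
v  = sp.Rational(15, 22)*t**2 - sp.Rational(30, 11)*t + 2
lx = sp.Rational(15, 11)                              # lambda_x (constant, so its ODE is trivial)
lv = -sp.Rational(15, 11)*t + sp.Rational(30, 11)     # lambda_v

print(sp.simplify(sp.diff(x, t) - v),          # xdot = v
      sp.simplify(sp.diff(v, t) + lv),         # vdot = -lambda_v
      sp.simplify(sp.diff(lv, t) + lx),        # lambda_v_dot = -lambda_x
      x.subs(t, 0) - 1, v.subs(t, 0) - 2,      # initial conditions
      lx - x.subs(t, 2), lv.subs(t, 2))        # transversality conditions at tf = 2
```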
Using the Theory of Functional Connections formulation for component constraints presented, the constraints given in Equation (16) can be embedded into the following functionals,
$$\begin{aligned}
x(t, g_x(t)) &= g_x(t) + \eta_1\, s_{11}(t) + \eta_3\, s_{13}(t) \\
v(t, g_v(t)) &= g_v(t) + \eta_2\, s_{22}(t) \\
\lambda_x(t, g_{\lambda_x}(t)) &= g_{\lambda_x}(t) + \eta_3\, s_{33}(t) \\
\lambda_v(t, g_{\lambda_v}(t)) &= g_{\lambda_v}(t) + \eta_4\, s_{44}(t),
\end{aligned} \tag{17}$$
from which we immediately derive the expressions for η 2 and η 4 ,
$$\eta_2 = \frac{v_0 - g_v(t_0)}{s_{22}(t_0)} \qquad \text{and} \qquad \eta_4 = -\frac{g_{\lambda_v}(t_f)}{s_{44}(t_f)},$$
while the expressions for η 1 and η 3 are computed from the constraint expressions,
$$\begin{aligned}
g_x(t_0) + \eta_1\, s_{11}(t_0) + \eta_3\, s_{13}(t_0) &= x_0 \\
g_{\lambda_x}(t_f) + \eta_3\, s_{33}(t_f) &= g_x(t_f) + \eta_1\, s_{11}(t_f) + \eta_3\, s_{13}(t_f)
\end{aligned}$$
$$\begin{Bmatrix} \eta_1 \\ \eta_3 \end{Bmatrix} = \begin{bmatrix} s_{11}(t_0) & s_{13}(t_0) \\ -s_{11}(t_f) & s_{33}(t_f) - s_{13}(t_f) \end{bmatrix}^{-1} \begin{Bmatrix} x_0 - g_x(t_0) \\ g_x(t_f) - g_{\lambda_x}(t_f) \end{Bmatrix},$$
providing the requirement for the support function selection,
$$s_{11}(t_0) \left[s_{33}(t_f) - s_{13}(t_f)\right] + s_{13}(t_0)\, s_{11}(t_f) \neq 0.$$
Table 1 provides three examples of support function selections and the corresponding requirements to which they are subject.
For instance, using Selection 3 the expressions for η 1 and η 3 are
$$\begin{Bmatrix} \eta_1 \\ \eta_3 \end{Bmatrix} = \frac{1}{t_f} \begin{bmatrix} 0 & -1 \\ t_f & t_0 \end{bmatrix} \begin{Bmatrix} x_0 - g_x(t_0) \\ g_x(t_f) - g_{\lambda_x}(t_f) \end{Bmatrix} = \frac{1}{t_f} \begin{Bmatrix} g_{\lambda_x}(t_f) - g_x(t_f) \\ t_f \left(x_0 - g_x(t_0)\right) + t_0 \left[g_x(t_f) - g_{\lambda_x}(t_f)\right] \end{Bmatrix}$$
and, consequently, the constrained expressions are
$$\begin{aligned}
x(t, g_x(t)) &= g_x(t) + \frac{t}{t_f} \left[g_{\lambda_x}(t_f) - g_x(t_f)\right] + \frac{1}{t_f} \left\{t_f \left(x_0 - g_x(t_0)\right) + t_0 \left[g_x(t_f) - g_{\lambda_x}(t_f)\right]\right\} \\
v(t, g_v(t)) &= g_v(t) + \left[v_0 - g_v(t_0)\right] \\
\lambda_x(t, g_{\lambda_x}(t)) &= g_{\lambda_x}(t) + \frac{1}{t_f} \left\{t_f \left(x_0 - g_x(t_0)\right) + t_0 \left[g_x(t_f) - g_{\lambda_x}(t_f)\right]\right\} \\
\lambda_v(t, g_{\lambda_v}(t)) &= g_{\lambda_v}(t) - g_{\lambda_v}(t_f).
\end{aligned} \tag{18}$$
Using these expressions, the four free functions can be expanded as a linear combination of a set of basis functions with unknown coefficients,
$$g_x(t) = \boldsymbol{\xi}_x^{\mathrm{T}}\, \boldsymbol{h}(t), \qquad g_v(t) = \boldsymbol{\xi}_v^{\mathrm{T}}\, \boldsymbol{h}(t), \qquad g_{\lambda_x}(t) = \boldsymbol{\xi}_{\lambda_x}^{\mathrm{T}}\, \boldsymbol{h}(t), \qquad \text{and} \qquad g_{\lambda_v}(t) = \boldsymbol{\xi}_{\lambda_v}^{\mathrm{T}}\, \boldsymbol{h}(t)$$
and the system of differential equations given in Equation (16) can be written in a matrix form,
$$H(t)\, \boldsymbol{\xi} = H(t) \begin{Bmatrix} \boldsymbol{\xi}_x \\ \boldsymbol{\xi}_v \\ \boldsymbol{\xi}_{\lambda_x} \\ \boldsymbol{\xi}_{\lambda_v} \end{Bmatrix} = \begin{Bmatrix} b_x(t) \\ b_v(t) \\ b_{\lambda_x}(t) \\ b_{\lambda_v}(t) \end{Bmatrix} = \boldsymbol{b}(t),$$
where the expression of the H ( t ) matrix is derived by applying the constrained expressions given in Equation (18) to the differential equation given in Equation (16). Discretizing the time from the initial value, t 0 , to the final value, t f , the overdetermined linear system
$$A\, \boldsymbol{\xi} = \begin{bmatrix} H(t_0) \\ H(t_1) \\ \vdots \\ H(t_f) \end{bmatrix} \begin{Bmatrix} \boldsymbol{\xi}_x \\ \boldsymbol{\xi}_v \\ \boldsymbol{\xi}_{\lambda_x} \\ \boldsymbol{\xi}_{\lambda_v} \end{Bmatrix} = \begin{Bmatrix} \boldsymbol{b}(t_0) \\ \boldsymbol{b}(t_1) \\ \vdots \\ \boldsymbol{b}(t_f) \end{Bmatrix} = B$$
is obtained. Then, the four unknown coefficients vectors ( ξ x , ξ v , ξ λ x , and ξ λ v ) can be computed by least-squares,
$$\boldsymbol{\xi} = \left\{\boldsymbol{\xi}_x^{\mathrm{T}},\, \boldsymbol{\xi}_v^{\mathrm{T}},\, \boldsymbol{\xi}_{\lambda_x}^{\mathrm{T}},\, \boldsymbol{\xi}_{\lambda_v}^{\mathrm{T}}\right\}^{\mathrm{T}} = \left(A^{\mathrm{T}} A\right)^{-1} A^{\mathrm{T}} B.$$
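The sketch below assembles this least-squares problem end to end for the present example: Selection 3 of Table 1, $t_0 = 0$, $t_f = 2$, Chebyshev polynomials of the first kind (on the mapped variable $z = t - 1$) as basis functions, $m = 4$, and 16 collocation points. It is an illustrative reimplementation, not the authors' code, and it uses numpy.linalg.lstsq (which tolerates the benign rank deficiency introduced by constant basis terms that the constrained expressions cancel) instead of explicitly forming $(A^{\mathrm{T}}A)^{-1}A^{\mathrm{T}}$.

```python
# TFC least-squares solution of the 4x4 optimal control example (illustrative sketch).
import numpy as np
from numpy.polynomial import chebyshev as C

m  = 4                          # basis functions T_0 ... T_{m-1}
N  = 16                         # collocation points
t0, tf, x0, v0 = 0.0, 2.0, 1.0, 2.0
t  = np.linspace(t0, tf, N)
z  = t - 1.0                    # map [0, 2] -> [-1, 1]; dz/dt = 1, so no chain-rule factor

def basis(z):
    """Chebyshev basis values and first derivatives, shape (len(z), m)."""
    z = np.atleast_1d(z)
    Hm, Hd = C.chebvander(z, m - 1), np.zeros((z.size, m))
    for k in range(m):
        ck = np.zeros(m); ck[k] = 1.0
        Hd[:, k] = C.chebval(z, C.chebder(ck))
    return Hm, Hd

Hm, Hd = basis(z)                     # basis at the collocation points
h0 = basis(t0 - 1.0)[0][0]            # basis row at t0
hf = basis(tf - 1.0)[0][0]            # basis row at tf
ones, Z = np.ones(N), np.zeros((N, m))

# Residuals of Equation (16) with the constrained expressions (18) substituted (t0 = 0,
# so the terms multiplied by t0 vanish). Written as (linear in xi) = constant:
#   xdot - v = 0  -> row*xi = v0;  vdot + lam_v = 0 and lam_x_dot = 0 -> row*xi = 0;
#   lam_v_dot + lam_x = 0 -> row*xi = -x0
A = np.block([
    [Hd - np.outer(ones, hf)/tf, -(Hm - np.outer(ones, h0)), np.outer(ones, hf)/tf, Z ],
    [Z,                           Hd,                        Z,                     Hm - np.outer(ones, hf)],
    [Z,                           Z,                         Hd,                    Z ],
    [-np.outer(ones, h0),         Z,                         Hm,                    Hd]])
B = np.concatenate([v0*ones, np.zeros(N), np.zeros(N), -x0*ones])

xi = np.linalg.lstsq(A, B, rcond=None)[0]
xi_x, xi_v, xi_lx, xi_lv = np.split(xi, 4)

# Rebuild x(t) from its constrained expression and compare with the analytical solution
x_num  = Hm @ xi_x + (t/tf)*(hf @ xi_lx - hf @ xi_x) + (x0 - h0 @ xi_x)
x_true = (5/22)*t**3 - (15/11)*t**2 + 2*t + 1
print(np.max(np.abs(x_num - x_true)))    # ~ machine precision
```

Because the true solution is a cubic polynomial, the $m = 4$ Chebyshev basis reproduces it exactly, consistent with the behavior reported below for Figure 1.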
Particular attention should be given to vectors, like the state vectors of dynamical systems, in which some components are derivatives of other components (e.g., position, velocity, and acceleration). For instance, it would be a mistake to merge the $x(t)$ and $v(t)$ constrained expressions given in Equation (17) into the following single constrained expression,
$$x(t) = g(t) + \left[x_0 - g(t_0)\right] + (t - t_0) \left[v_0 - \dot{g}(t_0)\right].$$
While this equation satisfies both initial constraints for x ( t ) and v ( t ) , it does not satisfy the dynamical equation obtained from the Hamiltonian formulation, given in Equation (16), where the control is embedded in the costate terms.
Figure 1 shows the numerical results obtained for this simple 4 × 4 optimal control example. The left plots show the errors of the numerical least-squares solution for $x(t)$, $v(t)$, $\lambda_x(t)$, and $\lambda_v(t)$ with respect to the true (analytical) solution, using $m = 4$ basis functions (Chebyshev orthogonal polynomials of the first kind) and 16 collocation points. The top right plot shows the control error, $u(t)$, while the bottom right plot shows the condition number of the $(A^{\mathrm{T}} A)$ matrix. The solution is obtained in 1.3 ms, using MATLAB® on a standard commercial laptop.
In this specific case, as the true solution is polynomial and the selected basis functions are also polynomial (Chebyshev orthogonal polynomials), the estimated solution fully captures the true solution with its minimum polynomial degree (cubic). By increasing the number of basis functions, the solution does not change, and all the coefficients associated with polynomials of degree higher than 3 are estimated as zero.

5. Conclusions

This paper provides a mathematical methodology to perform functional interpolation for the components of a vector that are subject to a set of linear constraints. The methodology adopts the framework of the Theory of Functional Connections, a mathematical method to derive functionals that always satisfy a set of linear constraints. These functionals reduce the whole search space of functions to just the subspace of functions satisfying the assigned constraints.
The main motivation of this study is to provide a new numerical method to solve indirect optimal control problems, where states and costates (components) are connected by constraints.
Several examples are provided showing how to derive the functionals fully satisfying the component constraints. This has been done for 2-dimensional time-varying vectors subject to three classic component constraints (two absolute, one absolute and one relative, and two relative) and for three more complex examples involving constraints given as linear combinations of points, derivatives, and integrals.
The study also includes a simple example of an indirect optimal control problem that is solved using the Pontryagin Maximum Principle and the proposed method. This example, whose solution is obtained by least-squares, has been numerically tested. The results validate the proposed approach in terms of speed and accuracy.

Author Contributions

Conceptualization, D.M. and R.F.; methodology, D.M. and R.F.; software, D.M.; validation, D.M.; formal analysis, D.M. and R.F.; investigation, D.M. and R.F.; resources, D.M.; data curation, D.M.; writing—original draft preparation, D.M.; writing—review and editing, D.M. and R.F.; visualization, D.M.; supervision, D.M. and R.F.; project administration, D.M. Both authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mortari, D. The Theory of Connections: Connecting Points. Mathematics 2017, 5, 57.
2. Mortari, D. The Theory of Functional Connections: Connecting Functions. In Proceedings of the IAA-AAS-SciTech-072, Forum 2018, Peoples' Friendship University of Russia, Moscow, Russia, 13–15 November 2018.
3. Mortari, D.; Leake, C. The Multivariate Theory of Connections. Mathematics 2019, 7, 296.
4. Leake, C.; Johnston, H.; Mortari, D. The Multivariate Theory of Functional Connections: Theory, Proofs, and Application in Partial Differential Equations. Mathematics 2020, 8, 1303.
5. Leake, C.; Mortari, D. An Explanation and Implementation of the Multivariate Theory of Functional Connections via Examples. In Proceedings of the 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
6. Johnston, H.; Leake, C.; Mortari, D. An Analysis of the Theory of Functional Connections Subject to Inequality Constraints. In Proceedings of the 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
7. Mortari, D.; Arnas, D. Bijective Mapping Analysis to Extend the Theory of Functional Connections to Non-Rectangular 2-Dimensional Domains. Mathematics 2020, 8, 1593.
8. Mortari, D. Least-Squares Solution of Linear Differential Equations. Mathematics 2017, 5, 48.
9. Johnston, H.; Mortari, D. Linear Differential Equations Subject to Relative, Integral, and Infinite Constraints. In Proceedings of the AAS/AIAA Astrodynamics Specialist Conference, 2018, 167, AAS 18-273, Snowbird, UT, USA, 19–23 August 2018; pp. 3107–3121.
10. Mortari, D.; Johnston, H.; Smith, L. High accuracy least-squares solutions of nonlinear differential equations. J. Comput. Appl. Math. 2019, 352, 293–307.
11. Johnston, H.; Mortari, D. Weighted Least-Squares Solutions of Over-Constrained Differential Equations. In Proceedings of the IAA-AAS-SciTech-081, Forum 2018, Peoples' Friendship University of Russia, Moscow, Russia, 13–15 November 2018.
12. Mortari, D.; Furfaro, R. Theory of Connections Applied to First-Order System of Ordinary Differential Equations Subject to Component Constraints. In Proceedings of the 2018 AAS/AIAA Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018.
13. Schiassi, E.; Furfaro, R.; Leake, C.; De Florio, M.; Johnston, H.; Mortari, D. Extreme Theory of Functional Connections: A Fast Physics-Informed Neural Network Method for Solving Ordinary and Partial Differential Equations. Neurocomputing 2020. Submitted revised version.
14. Leake, C.; Johnston, H.; Smith, L.; Mortari, D. Analytically Embedding Differential Equation Constraints into Least-Squares Support Vector Machines using the Theory of Functional Connections. Mach. Learn. Knowl. Extr. 2019, 1, 60.
15. Johnston, H.; Mortari, D. Least-squares Solutions of Boundary-value Problems in Hybrid Systems. arXiv 2019, arXiv:1911.04390v1.
16. Leake, C.; Mortari, D. Deep Theory of Functional Connections: A New Method for Estimating the Solutions of Partial Differential Equations. Mach. Learn. Knowl. Extr. 2020, 2, 4.
17. Mai, T.; Mortari, D. Theory of Functional Connections Applied to Nonlinear Programming under Equality Constraints. In Proceedings of the 2019 AAS/AIAA Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
18. Drozd, K.; Furfaro, R.; Mortari, D. Constrained Energy-Optimal Guidance in Relative Motion via Theory of Functional Connections and Rapidly-Explored Random Trees. In Proceedings of the 2019 Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
19. Furfaro, R.; Mortari, D. Least-squares Solution of a Class of Optimal Guidance Problems via Theory of Connections. ACTA Astronaut. 2020, 168, 92–103.
20. Furfaro, R.; Drozd, K.; Mortari, D. Energy-Optimal Rendezvous Spacecraft Guidance via Theory of Functional Connections. In Proceedings of the 70th International Astronautical Congress 2019, IAF Astrodynamics Symposium, Washington, DC, USA, 21–25 October 2019.
21. Johnston, H.; Schiassi, E.; Furfaro, R.; Mortari, D. Fuel-Efficient Powered Descent Guidance on Large Planetary Bodies via Theory of Functional Connections. J. Astronaut. Sci. 2020, 67, 1521–1552.
22. Schiassi, E.; D'Ambrosio, A.; Johnston, H.; Furfaro, R.; Curti, F.; Mortari, D. Complete Energy Optimal Landing on Small and Large Planetary Bodies via Theory of Functional Connections. In Proceedings of the Astrodynamics Specialist Conference, AAS 20-557, Lake Tahoe, CA, USA, 9–13 August 2020.
23. Furfaro, R.; Schiassi, E.; Drozd, K.; Mortari, D. Physics-Informed Neural Networks and Theory of Functional Connections for Optimal Space Guidance Applications. In Proceedings of the 71st International Astronautical Congress (IAC 2020), Dubai, UAE, 12–16 October 2020.
24. Schiassi, E.; D'Ambrosio, A.; Johnston, H.; De Florio, M.; Drozd, K.; Furfaro, R.; Curti, F.; Mortari, D. Physics-Informed Extreme Theory of Functional Connections Applied to Optimal Orbit Transfer. In Proceedings of the Astrodynamics Specialist Conference, Lake Tahoe, CA, USA, 9–13 August 2020.
25. Johnston, H.; Mortari, D. The Theory of Connections Applied to Perturbed Lambert's Problem. In Proceedings of the Astrodynamics Specialist Conference, Snowbird, UT, USA, 19–23 August 2018.
26. Mortari, D. The Theory of Connections with Application. In Proceedings of the XVI Jornadas de Trabajo en Mecánica Celeste, Soria, Spain, 19–21 June 2017.
27. Mortari, D. The Theory of Functional Connections: Current Status. In Proceedings of the XIX Colóquio Brasileiro de Dinâmica Orbital (CBDO-2018), Instituto Nacional de Pesquisas Espaciais, São José dos Campos, Brasil, 3–7 December 2018.
28. Johnston, H.; Mortari, D. Orbit Propagation via the Theory of Functional Connections. In Proceedings of the Astrodynamics Specialist Conference, Portland, ME, USA, 11–15 August 2019.
29. de Almeida, A.K., Jr.; Johnston, H.; Leake, C.; Mortari, D. Evaluation of Transfer Costs in the Earth-Moon System using the Theory of Functional Connections. In Proceedings of the Astrodynamics Specialist Conference, Lake Tahoe, CA, USA, 9–13 August 2020.
30. Johnston, H.; Lo, M.; Mortari, D. Functional Interpolation Method to Compute Period Orbits in the Circular Restricted Three-Body Problem. In Proceedings of the Space Flight Mechanics Meeting, Virtual Conference, 1–4 February 2021.
31. De Florio, M. Accurate Solutions of the Radiative Transfer Problem via Theory of Connections. Master's Thesis, University of Bologna, Cesena, Italy, 2019.
32. De Florio, M.; Schiassi, E.; Furfaro, R.; Ganapol, B.D.; Mostacci, D. Solutions of Chandrasekhar's Basic Problem in Radiative Transfer via Theory of Functional Connections. J. Quant. Spectrosc. Radiat. Transf. 2021, 259, 107384.
33. Schiassi, E.; Leake, C.; De Florio, M.; Johnston, H.; Furfaro, R.; Mortari, D. Extreme Theory of Functional Connections: A Physics-Informed Neural Network Method for Solving Parametric Differential Equations. arXiv 2020, arXiv:2005.10632v1.
34. Schiassi, E.; D'Ambrosio, A.; De Florio, M.; Furfaro, R.; Curti, F. Physics-Informed Extreme Theory of Functional Connections Applied to Data-Driven Parameters Discovery of Epidemiological Compartmental Models. arXiv 2020, arXiv:2008.05554v1.
35. Hermes, H. Foundations of Optimal Control Theory; John Wiley: New York, NY, USA, 1968.
Figure 1. Results of the 4 × 4 optimal control example.
Table 1. Examples of support function selection.

Selection    s11(t)    s13(t)    s33(t)    Requirement
1            1         t         1         t_f − t_0 ≠ 1
2            1         t         t         t_0 ≠ 0
3            t         1         1         t_f ≠ 0