Article

Nash Equilibrium Sequence in a Singular Two-Person Linear-Quadratic Differential Game

The Galilee Research Center for Applied Mathematics, ORT Braude College of Engineering, Karmiel 2161002, Israel
Axioms 2021, 10(3), 132; https://doi.org/10.3390/axioms10030132
Submission received: 12 April 2021 / Revised: 27 May 2021 / Accepted: 22 June 2021 / Published: 25 June 2021
(This article belongs to the Special Issue Advances in Analysis and Control of Systems with Uncertainties)

Abstract:
A finite-horizon two-person non-zero-sum differential game is considered. The dynamics of the game is linear. Each player aims to minimize its own quadratic functional. The case where the weight matrices of one player's control costs are singular in both functionals is studied. Hence, the game under consideration is singular. A novel definition of the Nash equilibrium in this game (a Nash equilibrium sequence) is proposed. The game is solved by application of the regularization method. This method yields a new differential game, which is a regular Nash equilibrium game. Moreover, the new game is a partial cheap control game. An asymptotic analysis of this game is carried out. Based on this analysis, the Nash equilibrium sequence of pairs of the players' state-feedback controls in the singular game is constructed. Expressions for the optimal values of the functionals in the singular game are obtained. Illustrative examples are presented.

1. Introduction

Differential games, which cannot be solved by application of the first-order solvability conditions, are called singular. For instance, a zero-sum differential game is called singular if it cannot be solved using the Isaacs MinMax principle [1,2] and the Bellman–Isaacs equation method [1,3]. Similarly, a Nash equilibrium set of controls in a singular non-zero-sum differential game cannot be derived using the first-order variational method and the generalized Hamilton–Jacobi–Bellman equation method [3,4].
Singular differential games appear in various applications. For example, such games appear in pursuit-evasion problems (see, e.g., Ref. [5]), in robust controllability problems (see, e.g., Ref. [6]), in robust interception problems of maneuvering targets (see e.g., Ref. [7]), in robust tracking problems (see, e.g., Ref. [8]), in biology processes (see, e.g., Ref. [9]), and in robust investment problems (see, e.g., Ref. [10]).
Treating a singular differential game, one can try to use higher-order solvability conditions. However, such conditions are useless for a game in which the optimal control of at least one player does not exist in the class of regular (non-generalized) functions.
Singular zero-sum differential games were extensively analyzed in the literature by different methods (see, e.g., Refs. [7,11,12,13,14,15,16,17,18] and references therein). Thus, in Refs. [7,15,16], various singular zero-sum differential games were solved by the regularization method. In Reference [11], a numerical method was proposed to solve one class of zero-sum differential games with singular control. In Reference [12], a class of zero-sum differential games with singular arcs was considered. For this class of games, sufficient conditions for the existence of a saddle-point solution were established. In Reference [13], the Riccati matrix inequality was applied to establish the existence of an almost equilibrium in a singular zero-sum differential game. In Reference [14], a saddle-point solution of a singular zero-sum differential game was derived in the class of generalized functions. In Reference [17], a class of zero-sum stochastic differential games was studied. Each player of this game has a control consisting of regular and singular parts. Necessary and sufficient saddle-point optimality conditions were derived for the considered game. In Reference [18], a singular zero-sum linear-quadratic differential game was considered. This game was treated by its regularization and numerical solution of the regularized game.
Singular non-zero-sum Nash equilibrium differential games were also studied in the literature, but mostly in various stochastic settings (see, e.g., Refs. [10,19,20,21,22] and references therein). Deterministic singular non-zero-sum Nash equilibrium differential games were studied only in a few works. Thus, in Reference [23], a two-person non-zero-sum differential game with linear second-order dynamics and scalar controls of both players was considered. Each player controls one equation of the dynamics. The infinite-horizon quadratic functionals of the players do not contain control costs. The admissible class of controls for both players is the set of linear state-feedbacks. The notion of asymptotic (with respect to time) ε-Nash equilibrium was introduced, and this equilibrium was designed subject to some condition. In Reference [9], a finite-horizon two-person non-zero-sum differential game was studied. This game models a biological process. Its fourth-order dynamics is linear with respect to the scalar controls of the players, and these controls are bounded. The players' functionals depend only on the state variables, and this dependence is quadratic. For this singular game, a Nash equilibrium set of open-loop controls was derived in the class of regular functions. In Reference [24], an infinite-horizon two-person non-zero-sum differential game with nth-order linear dynamics and vector-valued unconstrained players' controls was considered. Functionals of both players are quadratic, and these functionals do not contain control costs of one (the same) player. This singular game was solved by the regularization approach.
In the present paper, we consider a deterministic finite-horizon two-person non-zero-sum differential game. The dynamics of this game is linear and time-dependent. The controls of the players are unconstrained. Each player aims to minimize its own quadratic functional. We look for the Nash equilibrium in this game, and we treat the case where the weight matrices of the control costs of one player (the "singular" player) in both functionals are singular but non-zero. Such a feature means that the game under consideration is singular. However, since the aforementioned singular weight matrices are non-zero, the control of the "singular" player contains both singular and regular coordinates. For this game, in general, a Nash equilibrium pair of controls in which the singular coordinates of the "singular" player's control are regular (non-generalized) functions does not exist. To the best of our knowledge, such a game has not yet been studied in the literature. The aims of the paper are the following: (A) to define the solution (the Nash equilibrium) of the considered game; (B) to derive this solution. Thus, we propose for the considered singular game a novel notion of the Nash equilibrium (a Nash equilibrium sequence). Based on this notion, we solve the game by application of the regularization method. Namely, we associate the original singular game with a new differential game. This new game has the same equation of the dynamics and a similar functional of the "singular" player, augmented by a finite-horizon integral of the square of its singular control coordinates with a small positive weight (a small parameter). The functional of the other ("regular") player remains unchanged. Thus, the new game is a finite-horizon regular linear-quadratic game.
The regularization method was applied for the solution of singular optimal control problems in many works (see, e.g., Refs. [25,26,27] and references therein). This method was also applied for the solution of singular $H_\infty$ control problems (see, e.g., Refs. [28,29] and references therein) and for the solution of singular zero-sum differential games (see, e.g., Refs. [7,15,16]). However, to the best of our knowledge, the application of the regularization method to the analysis and solution of singular non-zero-sum differential games was considered only in two short conference papers [24,30]. In each of these papers, the study of the game was presented in a brief form, without detailed analysis and proofs of assertions.
The aforementioned new game, obtained by the regularization of the original singular game, is a partial cheap control game. Using the solvability conditions of a Nash equilibrium finite-horizon linear-quadratic regular game, the solution of this partial cheap control game is reduced to the solution of a set of two matrix Riccati-type differential equations, singularly perturbed by the small parameter. Using an asymptotic solution of this set, a Nash equilibrium sequence of pairs of the players' state-feedback controls in the original singular game is constructed. The expressions for the optimal values of the players' functionals in this game are obtained. Note that a particular case of the differential game studied in the present paper was considered briefly, without detailed proofs, in the short conference paper [30].
The paper is organized as follows. In the next section, the initial formulation of the singular differential game is presented. The main definitions are also formulated. The transformation of the initially formulated game is carried out in Section 3. It is shown that the initially formulated game and the transformed game are equivalent to each other. Due to this equivalence, in the rest of the paper the transformed game is analyzed as an original singular differential game. The regularization of the original singular game, which is made in Section 4, yields a partial cheap control regular game. The Nash equilibrium solution of the latter is presented in Section 5. An asymptotic analysis of the partial cheap control regular game is carried out in Section 6. In Section 7, the reduced differential game, associated with the original singular game, is presented along with its solvability conditions. The Nash equilibrium sequence for the original singular differential game and the expressions of the functionals' optimal values of this game are derived in Section 8. Two illustrative examples are considered in Section 9. Section 10 is devoted to concluding remarks. Some technically complicated proofs are placed in the appendices.
The following main notations are used in the paper:
  • $E^n$ is the $n$-dimensional real Euclidean space.
  • The Euclidean norm of either a vector or a matrix is denoted by $\|\cdot\|$.
  • The upper index "T" denotes the transposition either of a vector $x$ ($x^T$) or of a matrix $A$ ($A^T$).
  • $I_n$ denotes the identity matrix of dimension $n$.
  • $O_{n\times m}$ denotes the zero matrix of dimension $n\times m$; however, if the dimension of a zero matrix is clear, it is denoted by $0$.
  • $L^2[t_1,t_2;E^n]$ denotes the space of all functions $x(\cdot):[t_1,t_2]\to E^n$ square integrable on the interval $[t_1,t_2]$.
  • $\mathrm{col}(x,y)$, where $x\in E^n$, $y\in E^m$, denotes the column block-vector of dimension $n+m$ with the upper block $x$ and the lower block $y$, i.e., $\mathrm{col}(x,y)=(x^T,y^T)^T$.
  • $\otimes$ denotes the Kronecker product of matrices.
  • For a given $n\times m$-matrix $A$, $\mathrm{vec}(A)$ denotes its vectorization, i.e., the $nm$-dimensional block vector whose first (upper) block is the first (upper) row of $A$, whose second block is the second row of $A$, and so on; the last (lower) block of $\mathrm{vec}(A)$ is the last (lower) row of $A$.
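As a quick illustration of the last two notation items, the following sketch (not from the paper) computes a Kronecker product and the row-wise vectorization with NumPy; note that the $\mathrm{vec}(\cdot)$ used here stacks the rows of $A$, which matches NumPy's default C-order flattening.

```python
import numpy as np

# Illustrative data (assumed, not from the paper)
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Kronecker product: each entry a_ij of A is replaced by the block a_ij * B
K = np.kron(A, B)          # shape (4, 4)

# Row-wise vectorization vec(A): stack the rows of A into one long vector;
# NumPy's default (C-order) flattening does exactly that
vec_A = A.flatten()        # [1, 2, 3, 4]
```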

2. Initial Game Formulation

The game’s dynamics is described by the following system:
$$\frac{dZ(t)}{dt}=A(t)Z(t)+B_u(t)u(t)+B_v(t)v(t),\quad t\in[0,t_f],\qquad Z(0)=Z_0,\tag{1}$$
where $t_f>0$ is a given final time instant; $Z(t)\in E^n$ is the state vector; $u(t)\in E^r$ $(r<n)$ and $v(t)\in E^s$ are the players' controls; $A(t)$, $B_u(t)$, and $B_v(t)$, $t\in[0,t_f]$, are given matrix-valued functions of corresponding dimensions; $Z_0\in E^n$ is a given constant vector.
The functionals of the player "u" with the control $u(t)$ and the player "v" with the control $v(t)$ are, respectively,
$$J_u(u,v)=Z^T(t_f)C_uZ(t_f)+\int_0^{t_f}\Big[Z^T(t)D_u(t)Z(t)+u^T(t)R_{uu}(t)u(t)+v^T(t)R_{uv}(t)v(t)\Big]dt,\tag{2}$$
$$J_v(u,v)=Z^T(t_f)C_vZ(t_f)+\int_0^{t_f}\Big[Z^T(t)D_v(t)Z(t)+v^T(t)R_{vv}(t)v(t)+u^T(t)R_{vu}(t)u(t)\Big]dt,\tag{3}$$
where $C_u$ and $C_v$ are given symmetric positive semi-definite matrices of corresponding dimensions; $D_i(t)$, $R_{ij}(t)$, $(i=u,v;\ j=u,v)$, $t\in[0,t_f]$, are given matrix-valued functions of corresponding dimensions; the matrix $R_{vv}(t)$ is symmetric positive definite; the matrices $D_u(t)$, $D_v(t)$, $R_{uu}(t)$, $R_{uv}(t)$, and $R_{vu}(t)$ are symmetric positive semi-definite.
In what follows, we assume that the weight matrices $R_{uu}(t)$ and $R_{vu}(t)$ of the costs of the control $u(t)$ in both functionals have the block form
$$R_{uu}(t)=\begin{pmatrix}\bar R_{uu}(t)&0\\0&0\end{pmatrix},\qquad R_{vu}(t)=\begin{pmatrix}\bar R_{vu}(t)&0\\0&0\end{pmatrix},\quad t\in[0,t_f],\tag{4}$$
where the matrices $\bar R_{uu}(t)$ and $\bar R_{vu}(t)$ are of dimension $q\times q$ $(0<q<r)$; the matrix $\bar R_{uu}(t)$ is positive definite; the matrix $\bar R_{vu}(t)$ is positive semi-definite.
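The structure (4) can be illustrated numerically. The sketch below (all dimensions and entries are assumed for illustration, not taken from the paper) builds a weight matrix of the form (4) and checks that it is positive semi-definite yet singular, which is exactly what makes the game singular.

```python
import numpy as np

# Assumed illustrative dimensions: q = 2, r = 4
q, r = 2, 4
R_bar_uu = np.array([[2.0, 0.5],
                     [0.5, 1.0]])      # positive definite q x q upper-left block

# Weight matrix of the block form (4): R_bar_uu in the corner, zeros elsewhere
R_uu = np.zeros((r, r))
R_uu[:q, :q] = R_bar_uu

# rank equals q < r, so R_uu is only positive SEMI-definite (singular)
rank = np.linalg.matrix_rank(R_uu)
eigs = np.linalg.eigvalsh(R_uu)
```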
The player "u" aims to minimize the functional (2) by a proper choice of the control $u(t)$, while the player "v" aims to minimize the functional (3) by a proper choice of the control $v(t)$.
We study the game (1)–(3) with respect to its Nash equilibrium, subject to the assumption that both players know perfectly the current game state.
Remark 1.
Due to the assumption (4), the first-order Nash equilibrium solvability conditions (see, e.g., Refs. [3,4]) cannot be applied to the analysis and solution of the game (1)–(3), i.e., this game is singular. Moreover, in general, this game does not have a solution (a Nash equilibrium pair of controls) in the class of regular (non-generalized) functions.
Consider the set $U_Z$ of all functions $F_u(Z,t):E^n\times[0,t_f]\to E^r$ which are measurable with respect to $t\in[0,t_f]$ for any fixed $Z\in E^n$ and satisfy the local Lipschitz condition with respect to $Z\in E^n$ uniformly in $t\in[0,t_f]$. In addition, consider the set $V_Z$ of all functions $F_v(Z,t):E^n\times[0,t_f]\to E^s$ with the same properties.
Definition 1.
By $(UV)_Z$, we denote the set of all pairs $\big(F_u(Z,t),F_v(Z,t)\big)$ of functions satisfying the following conditions:
(i) $F_u(Z,t)\in U_Z$, $F_v(Z,t)\in V_Z$;
(ii) the initial-value problem (1) for $u(t)=F_u(Z,t)$, $v(t)=F_v(Z,t)$ and any $Z_0\in E^n$ has the unique absolutely continuous solution $Z_F(t;Z_0)$, $t\in[0,t_f]$;
(iii) $F_u\big(Z_F(t;Z_0),t\big)\in L^2[0,t_f;E^r]$;
(iv) $F_v\big(Z_F(t;Z_0),t\big)\in L^2[0,t_f;E^s]$.
In what follows, $(UV)_Z$ is called the set of all admissible pairs of players' state-feedback controls (strategies) $u=F_u(Z,t)$, $v=F_v(Z,t)$ in the game (1)–(3).
For any given functions $\tilde F_u(Z,t)\in U_Z$ and $\tilde F_v(Z,t)\in V_Z$, we consider the sets
$$E_v\big[\tilde F_u(Z,t)\big]=\Big\{F_v(Z,t)\in V_Z:\ \big(\tilde F_u(Z,t),F_v(Z,t)\big)\in(UV)_Z\Big\},\tag{5}$$
$$E_u\big[\tilde F_v(Z,t)\big]=\Big\{F_u(Z,t)\in U_Z:\ \big(F_u(Z,t),\tilde F_v(Z,t)\big)\in(UV)_Z\Big\}.\tag{6}$$
Consider the sequence of pairs $\big(F_{u,k}^*(Z,t),F_v^*(Z,t)\big)\in(UV)_Z$, $(k=1,2,\ldots)$.
Definition 2.
The sequence $\big\{\big(F_{u,k}^*(Z,t),F_v^*(Z,t)\big)\big\}_{k=1}^{+\infty}$ is called a Nash equilibrium strategies' sequence (or simply, a Nash equilibrium sequence) in the game (1)–(3) if:
(a) for any $Z_0$, there exist finite limits $\lim_{k\to+\infty}J_u\big[F_{u,k}^*(Z,t),F_v^*(Z,t)\big]$ and $\lim_{k\to+\infty}J_v\big[F_{u,k}^*(Z,t),F_v^*(Z,t)\big]$ in the game (1)–(3);
(b)
$$\lim_{k\to+\infty}J_u\big[F_{u,k}^*(Z,t),F_v^*(Z,t)\big]\le J_u\big[F_u(Z,t),F_v^*(Z,t)\big]$$
for all $F_u(Z,t)\in E_u\big[F_v^*(Z,t)\big]$;
(c)
$$\lim_{k\to+\infty}J_v\big[F_{u,k}^*(Z,t),F_v^*(Z,t)\big]\le\liminf_{k\to+\infty}J_v\big[F_{u,k}^*(Z,t),F_v(Z,t)\big]$$
for all $F_v(Z,t)\in M_v^*=\bigcap_{k=1}^{+\infty}E_v\big[F_{u,k}^*(Z,t)\big]$.
The values
$$J_u^*=\lim_{k\to+\infty}J_u\big[F_{u,k}^*(Z,t),F_v^*(Z,t)\big]$$
and
$$J_v^*=\lim_{k\to+\infty}J_v\big[F_{u,k}^*(Z,t),F_v^*(Z,t)\big]$$
are called the optimal values of the functionals (2) and (3), respectively, in the game (1)–(3).

3. Transformation of the Game (1)–(3)

Let us represent the matrix $B_u(t)$ in the block form
$$B_u(t)=\big[B_{u,1}(t),B_{u,2}(t)\big],\quad t\in[0,t_f],\tag{7}$$
where the matrices $B_{u,1}(t)$ and $B_{u,2}(t)$ have the dimensions $n\times q$ and $n\times(r-q)$, respectively.
In what follows, we assume:
AI. The matrix $B_u(t)$ has full column rank $r$ for all $t\in[0,t_f]$.
AII. $\det\big(B_{u,2}^T(t)D_u(t)B_{u,2}(t)\big)\neq0$, $t\in[0,t_f]$.
AIII. $C_uB_{u,2}(t_f)=0$, $C_vB_{u,2}(t_f)=0$.
AIV. The matrix-valued functions $A(t)$, $B_v(t)$, $\bar R_{uu}(t)$, $R_{uv}(t)$, $R_{vv}(t)$, $\bar R_{vu}(t)$, and $D_v(t)$ are continuously differentiable on the interval $[0,t_f]$.
AV. The matrix-valued functions $B_u(t)$ and $D_u(t)$ are twice continuously differentiable on the interval $[0,t_f]$.
Let the $n\times(n-r)$-matrix $B_{u,c}(t)$ be a complement matrix to $B_u(t)$ on the interval $[0,t_f]$, i.e., the block matrix $\big(B_{u,c}(t),B_u(t)\big)$ is invertible for all $t\in[0,t_f]$. Therefore, the $n\times(n-r+q)$-matrix $\tilde B_{u,c}(t)=\big(B_{u,c}(t),B_{u,1}(t)\big)$ is a complement matrix to $B_{u,2}(t)$ on the interval $[0,t_f]$.
In what follows, we also assume:
AVI. The matrix-valued function $B_{u,c}(t)$ is twice continuously differentiable on the interval $[0,t_f]$.
Using the matrices $B_{u,2}(t)$ and $\tilde B_{u,c}(t)$, we construct the following matrices:
$$H_u(t)=\big[B_{u,2}^T(t)D_u(t)B_{u,2}(t)\big]^{-1}B_{u,2}^T(t)D_u(t)\tilde B_{u,c}(t),\qquad L_u(t)=\tilde B_{u,c}(t)-B_{u,2}(t)H_u(t),\qquad R_u(t)=\big(L_u(t),B_{u,2}(t)\big),\quad t\in[0,t_f].\tag{8}$$
Now, using the matrix $R_u(t)$, we make the following transformation of the state variable $Z(t)$ in the game (1)–(3):
$$Z(t)=R_u(t)z(t),\quad t\in[0,t_f],\tag{9}$$
where $z(t)\in E^n$ is a new state variable.
Due to the results of Reference [31], the transformation (9) is invertible.
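The construction (8) and the invertibility of the transformation (9) can be checked numerically. The sketch below uses randomly generated data (assumed for illustration, not from the paper), with $D_u$ taken positive definite so that assumption AII holds; it verifies that $R_u$ is invertible and that $B_{u,2}^TD_uL_u=0$, the identity behind the block-diagonal form of the transformed state-weight matrix in (17).

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, q = 4, 2, 1

# Assumed data satisfying AI-AII: full-column-rank B_u, positive definite D_u
B_u = rng.standard_normal((n, r))
B_u1, B_u2 = B_u[:, :q], B_u[:, q:]
M = rng.standard_normal((n, n))
D_u = M @ M.T + np.eye(n)                      # positive definite

# Complement B_uc: any n x (n-r) matrix making (B_uc, B_u) invertible
# (a random one works almost surely)
B_uc = rng.standard_normal((n, n - r))
B_tilde = np.hstack((B_uc, B_u1))              # complement to B_u2

# The matrices (8): np.linalg.solve(A, B) computes A^{-1} B
H_u = np.linalg.solve(B_u2.T @ D_u @ B_u2, B_u2.T @ D_u @ B_tilde)
L_u = B_tilde - B_u2 @ H_u
R_u = np.hstack((L_u, B_u2))

# R_u differs from (B_uc, B_u1, B_u2) only by column operations, so it is
# invertible; moreover B_u2^T D_u L_u = 0 by the choice of H_u
residual = B_u2.T @ D_u @ L_u
```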
For the sake of further analysis, we partition the matrix $H_u(t)$ into blocks as
$$H_u(t)=\big[H_{u,1}(t),H_{u,2}(t)\big],\quad t\in[0,t_f],\tag{10}$$
where the matrices $H_{u,1}(t)$ and $H_{u,2}(t)$ have the dimensions $(r-q)\times(n-r)$ and $(r-q)\times q$, respectively.
Quite similarly to the results of References [15,29], we have the following assertion.
Proposition 1.
Let the assumptions AI–AVI be valid. Then, the state transformation (9) converts the system (1) to the system
$$\frac{dz(t)}{dt}=\mathcal{A}(t)z(t)+\mathcal{B}_u(t)u(t)+\mathcal{B}_v(t)v(t),\quad t\in[0,t_f],\qquad z(0)=z_0,\tag{11}$$
and the functionals (2), (3) to the functionals
$$\tilde J_u(u,v)=z^T(t_f)\mathcal{C}_uz(t_f)+\int_0^{t_f}\Big[z^T(t)\mathcal{D}_u(t)z(t)+u^T(t)R_{uu}(t)u(t)+v^T(t)R_{uv}(t)v(t)\Big]dt,\tag{12}$$
$$\tilde J_v(u,v)=z^T(t_f)\mathcal{C}_vz(t_f)+\int_0^{t_f}\Big[z^T(t)\mathcal{D}_v(t)z(t)+v^T(t)R_{vv}(t)v(t)+u^T(t)R_{vu}(t)u(t)\Big]dt,\tag{13}$$
where
$$\mathcal{A}(t)=R_u^{-1}(t)\big[A(t)R_u(t)-dR_u(t)/dt\big],\qquad \mathcal{B}_v(t)=R_u^{-1}(t)B_v(t),\quad t\in[0,t_f],\tag{14}$$
$$\mathcal{B}_u(t)=R_u^{-1}(t)B_u(t)=\begin{pmatrix}O_{(n-r)\times q}&O_{(n-r)\times(r-q)}\\ I_q&O_{q\times(r-q)}\\ H_{u,2}(t)&I_{r-q}\end{pmatrix},\quad t\in[0,t_f],\tag{15}$$
$$\mathcal{C}_i=R_u^T(t_f)C_iR_u(t_f)=\begin{pmatrix}\mathcal{C}_{i1}&O_{(n-r+q)\times(r-q)}\\ O_{(r-q)\times(n-r+q)}&O_{(r-q)\times(r-q)}\end{pmatrix},\qquad \mathcal{C}_{i1}=L_u^T(t_f)C_iL_u(t_f),\quad i=u,v,\tag{16}$$
$$\mathcal{D}_u(t)=R_u^T(t)D_u(t)R_u(t)=\begin{pmatrix}\mathcal{D}_{u1}(t)&O_{(n-r+q)\times(r-q)}\\ O_{(r-q)\times(n-r+q)}&\mathcal{D}_{u2}(t)\end{pmatrix},\qquad \mathcal{D}_{u1}(t)=L_u^T(t)D_u(t)L_u(t),\quad \mathcal{D}_{u2}(t)=B_{u,2}^T(t)D_u(t)B_{u,2}(t),\quad t\in[0,t_f],\tag{17}$$
$$\mathcal{D}_v(t)=R_u^T(t)D_v(t)R_u(t),\quad t\in[0,t_f],\tag{18}$$
$$z_0=R_u^{-1}(0)Z_0.\tag{19}$$
The matrices $\mathcal{D}_{u1}(t)$ and $\mathcal{D}_v(t)$ are symmetric positive semi-definite, while the matrix $\mathcal{D}_{u2}(t)$ is symmetric positive definite for all $t\in[0,t_f]$. The matrices $\mathcal{C}_{u1}$ and $\mathcal{C}_{v1}$ are symmetric positive semi-definite. Moreover, the matrix-valued functions $\mathcal{A}(t)$, $\mathcal{B}_u(t)$, $\mathcal{B}_v(t)$, $\mathcal{D}_u(t)$, and $\mathcal{D}_v(t)$ are continuously differentiable on the interval $[0,t_f]$.
Remark 2.
In the new (transformed) game with the dynamics (11) and the functionals (12), (13), the player "u" aims to minimize the functional (12) by a proper choice of the control $u(t)$, while the player "v" aims to minimize the functional (13) by a proper choice of the control $v(t)$. Since in the game (1)–(3) both players know perfectly the current state $Z(t)$, then, due to the invertibility of the transformation (9), in the game (11)–(13) both players also know perfectly the current state $z(t)$. Like the game (1)–(3), the new game (11)–(13) is also singular.
Consider the set $U_z$ of all functions $G_u(z,t):E^n\times[0,t_f]\to E^r$ which are measurable with respect to $t\in[0,t_f]$ for any fixed $z\in E^n$ and satisfy the local Lipschitz condition with respect to $z\in E^n$ uniformly in $t\in[0,t_f]$. In addition, consider the set $V_z$ of all functions $G_v(z,t):E^n\times[0,t_f]\to E^s$ with the same properties.
Definition 3.
By $(UV)_z$, we denote the set of all pairs $\big(G_u(z,t),G_v(z,t)\big)$ of functions satisfying the following conditions:
(i) $G_u(z,t)\in U_z$, $G_v(z,t)\in V_z$;
(ii) the initial-value problem (11) for $u(t)=G_u(z,t)$, $v(t)=G_v(z,t)$ and any $z_0\in E^n$ has the unique absolutely continuous solution $z_G(t;z_0)$, $t\in[0,t_f]$;
(iii) $G_u\big(z_G(t;z_0),t\big)\in L^2[0,t_f;E^r]$;
(iv) $G_v\big(z_G(t;z_0),t\big)\in L^2[0,t_f;E^s]$.
In what follows, $(UV)_z$ is called the set of all admissible pairs of players' state-feedback controls (strategies) $u=G_u(z,t)$, $v=G_v(z,t)$ in the game (11)–(13).
Corollary 1.
Let the assumptions AI–AVI be valid. Let $\big(F_u(Z,t),F_v(Z,t)\big)\in(UV)_Z$, and let $Z_F(t;Z_0)$, $t\in[0,t_f]$, be the solution of the initial-value problem (1) generated by this pair of the players' controls. Then, $\big(F_u(R_u(t)z,t),F_v(R_u(t)z,t)\big)\in(UV)_z$ and $Z_F(t;Z_0)=R_u(t)z_G(t;z_0)$, $t\in[0,t_f]$, where $z_G(t;z_0)$, $t\in[0,t_f]$, is the unique solution of the initial-value problem (11) generated by the players' controls $u(t)=G_u(z,t)=F_u\big(R_u(t)z,t\big)$, $v(t)=G_v(z,t)=F_v\big(R_u(t)z,t\big)$.
Vice versa: let $\big(G_u(z,t),G_v(z,t)\big)\in(UV)_z$, and let $z_G(t;z_0)$, $t\in[0,t_f]$, be the solution of the initial-value problem (11) generated by this pair of the players' controls.
Then, $\big(G_u(R_u^{-1}(t)Z,t),G_v(R_u^{-1}(t)Z,t)\big)\in(UV)_Z$ and $z_G(t;z_0)=R_u^{-1}(t)Z_F(t;Z_0)$, $t\in[0,t_f]$, where $Z_F(t;Z_0)$, $t\in[0,t_f]$, is the unique solution of the initial-value problem (1) generated by the players' controls $u(t)=F_u(Z,t)=G_u\big(R_u^{-1}(t)Z,t\big)$, $v(t)=F_v(Z,t)=G_v\big(R_u^{-1}(t)Z,t\big)$.
Proof. 
The statements of the corollary directly follow from Definitions 1 and 3 and Proposition 1. □
For any given $\tilde G_u(z,t)\in U_z$ and $\tilde G_v(z,t)\in V_z$, consider the sets
$$K_v\big[\tilde G_u(z,t)\big]=\Big\{G_v(z,t)\in V_z:\ \big(\tilde G_u(z,t),G_v(z,t)\big)\in(UV)_z\Big\},\tag{20}$$
$$K_u\big[\tilde G_v(z,t)\big]=\Big\{G_u(z,t)\in U_z:\ \big(G_u(z,t),\tilde G_v(z,t)\big)\in(UV)_z\Big\}.\tag{21}$$
Consider the sequence of pairs $\big(G_{u,k}^*(z,t),G_v^*(z,t)\big)\in(UV)_z$, $(k=1,2,\ldots)$.
Definition 4.
The sequence $\big\{\big(G_{u,k}^*(z,t),G_v^*(z,t)\big)\big\}_{k=1}^{+\infty}$ is called a Nash equilibrium strategies' sequence (or simply, a Nash equilibrium sequence) in the game (11)–(13) if:
(I) for any $z_0\in E^n$, there exist finite limits $\lim_{k\to+\infty}\tilde J_u\big[G_{u,k}^*(z,t),G_v^*(z,t)\big]$ and $\lim_{k\to+\infty}\tilde J_v\big[G_{u,k}^*(z,t),G_v^*(z,t)\big]$ in the game (11)–(13);
(II)
$$\lim_{k\to+\infty}\tilde J_u\big[G_{u,k}^*(z,t),G_v^*(z,t)\big]\le\tilde J_u\big[G_u(z,t),G_v^*(z,t)\big]$$
for all $G_u(z,t)\in K_u\big[G_v^*(z,t)\big]$;
(III)
$$\lim_{k\to+\infty}\tilde J_v\big[G_{u,k}^*(z,t),G_v^*(z,t)\big]\le\liminf_{k\to+\infty}\tilde J_v\big[G_{u,k}^*(z,t),G_v(z,t)\big]$$
for all $G_v(z,t)\in N_v^*=\bigcap_{k=1}^{+\infty}K_v\big[G_{u,k}^*(z,t)\big]$.
The values
$$\tilde J_u^*=\lim_{k\to+\infty}\tilde J_u\big[G_{u,k}^*(z,t),G_v^*(z,t)\big]$$
and
$$\tilde J_v^*=\lim_{k\to+\infty}\tilde J_v\big[G_{u,k}^*(z,t),G_v^*(z,t)\big]$$
are called the optimal values of the functionals (12) and (13), respectively, in the game (11)–(13).
Lemma 1.
Let the assumptions AI–AVI be valid. Let $\big\{\big(F_{u,k}^*(Z,t),F_v^*(Z,t)\big)\big\}_{k=1}^{+\infty}$ be the Nash equilibrium sequence in the game (1)–(3). Then, $\big\{\big(F_{u,k}^*(R_u(t)z,t),F_v^*(R_u(t)z,t)\big)\big\}_{k=1}^{+\infty}$ is the Nash equilibrium sequence in the game (11)–(13).
Vice versa: let $\big\{\big(G_{u,k}^*(z,t),G_v^*(z,t)\big)\big\}_{k=1}^{+\infty}$ be the Nash equilibrium sequence in the game (11)–(13). Then, $\big\{\big(G_{u,k}^*(R_u^{-1}(t)Z,t),G_v^*(R_u^{-1}(t)Z,t)\big)\big\}_{k=1}^{+\infty}$ is the Nash equilibrium sequence in the game (1)–(3).
The proof of the lemma is presented in Appendix A.
Corollary 2.
Let the assumptions AI–AVI be valid. Then, the optimal values $J_u^*$ and $J_v^*$ of the functionals (2) and (3) in the game (1)–(3) coincide with the optimal values $\tilde J_u^*$ and $\tilde J_v^*$ of the corresponding functionals (12) and (13) in the game (11)–(13), i.e., $J_u^*=\tilde J_u^*$ and $J_v^*=\tilde J_v^*$.
Proof. 
The statement of the corollary is a direct consequence of the expressions for $J_u^*$, $J_v^*$, $\tilde J_u^*$, and $\tilde J_v^*$ (see Definitions 2 and 4) and the proof of Lemma 1 (see Equations (A2) and (A3)–(A6) in Appendix A). □
Remark 3.
Due to Lemma 1 and Corollary 2, the initially formulated differential game (1)–(3) is equivalent to the new (transformed) differential game (11)–(13). Moreover, due to Proposition 1, the new game is simpler than the initial game. Due to this observation, in the remainder of this paper, we consider the game (11)–(13) as the original game. We call this game the Singular Differential Game (SDG).

4. Regularization of the SDG

We are going to solve the SDG by the regularization method. This method consists in replacing the SDG with a regular differential game. The latter depends on a small positive parameter $\varepsilon$. When we formally set $\varepsilon=0$, the new (regular) game becomes the SDG. Based on this observation, we construct the regular differential game associated with the SDG as follows. We keep for this regular game the dynamic Equation (11) and the cost functional (13) of the player "v", while we construct the functional of the player "u" in the new game in the regular form
$$J_{u,\varepsilon}(u,v)=z^T(t_f)\mathcal{C}_uz(t_f)+\int_0^{t_f}\Big[z^T(t)\mathcal{D}_u(t)z(t)+u^T(t)\big(R_{uu}(t)+\Lambda(\varepsilon)\big)u(t)+v^T(t)R_{uv}(t)v(t)\Big]dt,\tag{22}$$
where
$$\Lambda(\varepsilon)=\mathrm{diag}\big(\underbrace{0,\ldots,0}_{q},\underbrace{\varepsilon^2,\ldots,\varepsilon^2}_{r-q}\big).\tag{23}$$
Due to (4) and (23), the matrix $R_{uu}(t)+\Lambda(\varepsilon)$ is positive definite for any $t\in[0,t_f]$ and any $\varepsilon\neq0$. In addition, it is seen that, for $\varepsilon=0$, the functional (22) becomes the functional (12).
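A minimal numerical check of this claim, with assumed dimensions $q=1$, $r=3$ (illustrative only, not from the paper):

```python
import numpy as np

# Assumed dimensions and data for illustration
q, r, eps = 1, 3, 0.1

# Singular weight matrix of the block form (4)
R_uu = np.zeros((r, r))
R_uu[:q, :q] = np.array([[2.0]])               # positive definite q x q block

# Lambda(eps) from (23): zeros on the regular block, eps^2 on the singular one
Lam = np.diag([0.0] * q + [eps**2] * (r - q))

eigs_sing = np.linalg.eigvalsh(R_uu)           # contains zeros: R_uu is singular
eigs_reg = np.linalg.eigvalsh(R_uu + Lam)      # all strictly positive
```

The smallest eigenvalue of the regularized matrix is exactly $\varepsilon^2$, so the regularization vanishes as $\varepsilon\to0$, recovering the singular weight.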
Remark 4.
Since the parameter $\varepsilon>0$ is small, the game (11), (13), (22) is a differential game with a partial cheap control of the player "u" in its functional (22). In what follows, the game (11), (13), (22) is called the Partial Cheap Control Differential Game (PCCDG). Zero-sum differential games with a complete/partial cheap control of at least one player were studied in many works (see, e.g., Refs. [7,8,15,16,32] and references therein). Non-zero-sum differential games with a complete cheap control of one player were considered only in a few works (see References [4,24,30]). However, to the best of our knowledge, a non-zero-sum differential game with a partial cheap control of at least one player has not yet been considered in the literature. Since, for any $\varepsilon>0$, the weight matrix of the control cost of the player "u" in the functional (22) is positive definite, the PCCDG is a regular differential game. The set of all admissible pairs of players' state-feedback controls (strategies) in this game is the same as in the SDG; namely, it is $(UV)_z$.
Definition 5.
For a given $\varepsilon>0$, the pair $\big(G_{u,\varepsilon}^*(z,t),G_{v,\varepsilon}^*(z,t)\big)\in(UV)_z$ is called a Nash equilibrium in the PCCDG if:
(I)
$$J_{u,\varepsilon}\big[G_{u,\varepsilon}^*(z,t),G_{v,\varepsilon}^*(z,t)\big]\le J_{u,\varepsilon}\big[G_u(z,t),G_{v,\varepsilon}^*(z,t)\big]$$
for all $G_u(z,t)\in K_u\big[G_{v,\varepsilon}^*(z,t)\big]$;
(II)
$$\tilde J_v\big[G_{u,\varepsilon}^*(z,t),G_{v,\varepsilon}^*(z,t)\big]\le\tilde J_v\big[G_{u,\varepsilon}^*(z,t),G_v(z,t)\big]$$
for all $G_v(z,t)\in K_v\big[G_{u,\varepsilon}^*(z,t)\big]$.
The values
$$J_{u,\varepsilon}^*=J_{u,\varepsilon}\big[G_{u,\varepsilon}^*(z,t),G_{v,\varepsilon}^*(z,t)\big]$$
and
$$J_{v,\varepsilon}^*=\tilde J_v\big[G_{u,\varepsilon}^*(z,t),G_{v,\varepsilon}^*(z,t)\big]$$
are called the optimal values of the functionals (22) and (13), respectively, in the PCCDG.

5. Nash Equilibrium Solution of the PCCDG

Let us consider the following terminal-value problem for the set of two Riccati-type differential equations with respect to the symmetric matrix-valued functions $K_u(t)$ and $K_v(t)$, $t\in[0,t_f]$:
$$\frac{dK_u(t)}{dt}=-K_u(t)\mathcal{A}(t)-\mathcal{A}^T(t)K_u(t)+K_u(t)S_u(t,\varepsilon)K_u(t)+K_u(t)S_v(t)K_v(t)+K_v(t)S_v(t)K_u(t)-K_v(t)S_{uv}(t)K_v(t)-\mathcal{D}_u(t),\tag{24}$$
$$\frac{dK_v(t)}{dt}=-K_v(t)\mathcal{A}(t)-\mathcal{A}^T(t)K_v(t)+K_u(t)S_u(t,\varepsilon)K_v(t)+K_v(t)S_u(t,\varepsilon)K_u(t)+K_v(t)S_v(t)K_v(t)-K_u(t)S_{vu}(t,\varepsilon)K_u(t)-\mathcal{D}_v(t),\tag{25}$$
$$K_u(t_f)=\mathcal{C}_u,\qquad K_v(t_f)=\mathcal{C}_v,\tag{26}$$
where
$$\begin{aligned}
S_u(t,\varepsilon)&=\mathcal{B}_u(t)\big(R_{uu}(t)+\Lambda(\varepsilon)\big)^{-1}\mathcal{B}_u^T(t), &\quad S_{uv}(t)&=\mathcal{B}_v(t)R_{vv}^{-1}(t)R_{uv}(t)R_{vv}^{-1}(t)\mathcal{B}_v^T(t),\\
S_v(t)&=\mathcal{B}_v(t)R_{vv}^{-1}(t)\mathcal{B}_v^T(t), &\quad S_{vu}(t,\varepsilon)&=\mathcal{B}_u(t)\big(R_{uu}(t)+\Lambda(\varepsilon)\big)^{-1}R_{vu}(t)\big(R_{uu}(t)+\Lambda(\varepsilon)\big)^{-1}\mathcal{B}_u^T(t).
\end{aligned}\tag{27}$$
By virtue of the results of References [3,4], we have the following assertion.
Proposition 2.
Let, for a given $\varepsilon>0$, the terminal-value problem (24)–(26) have the solution $\big(K_u(t,\varepsilon),K_v(t,\varepsilon)\big)$, $t\in[0,t_f]$. Then, the PCCDG has the Nash equilibrium $\big(G_{u,\varepsilon}^*(z,t),G_{v,\varepsilon}^*(z,t)\big)$, where
$$G_{u,\varepsilon}^*(z,t)=-\big(R_{uu}(t)+\Lambda(\varepsilon)\big)^{-1}\mathcal{B}_u^T(t)K_u(t,\varepsilon)z,\qquad G_{v,\varepsilon}^*(z,t)=-R_{vv}^{-1}(t)\mathcal{B}_v^T(t)K_v(t,\varepsilon)z.\tag{28}$$
The corresponding (optimal) values $J_{u,\varepsilon}^*$ and $J_{v,\varepsilon}^*$ of the functionals (22) and (13), respectively, have the form
$$J_{u,\varepsilon}^*=z_0^TK_u(0,\varepsilon)z_0,\qquad J_{v,\varepsilon}^*=z_0^TK_v(0,\varepsilon)z_0.\tag{29}$$
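The terminal-value problem (24)–(26) can be integrated numerically backward in time. The following sketch does this for a scalar illustrative system (all data are assumed, not the paper's examples); in the scalar case the matrix equations collapse to two coupled scalar Riccati equations, and the feedback gains and optimal functional values then follow from Proposition 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed scalar data (illustrative only): stable dynamics, positive weights
A, Bu, Bv = -1.0, 1.0, 1.0
Du, Dv = 1.0, 1.0                   # state weights
Ruu, Rvv = 0.5, 1.0                 # control weights (regularized, > 0)
Ruv, Rvu = 0.2, 0.2                 # cross control weights
Cu, Cv = 1.0, 1.0                   # terminal weights
tf = 2.0

# Scalar versions of the S-matrices in (27)
Su = Bu**2 / Ruu
Sv = Bv**2 / Rvv
Suv = Bv**2 * Ruv / Rvv**2
Svu = Bu**2 * Rvu / Ruu**2

def riccati_rhs(t, K):
    """Scalar form of the coupled Riccati equations (24)-(25)."""
    Ku, Kv = K
    dKu = -2 * A * Ku + Su * Ku**2 + 2 * Sv * Ku * Kv - Suv * Kv**2 - Du
    dKv = -2 * A * Kv + 2 * Su * Ku * Kv + Sv * Kv**2 - Svu * Ku**2 - Dv
    return [dKu, dKv]

# Terminal-value problem: integrate from t = tf down to t = 0
sol = solve_ivp(riccati_rhs, [tf, 0.0], [Cu, Cv], rtol=1e-8, atol=1e-10)
Ku0, Kv0 = sol.y[:, -1]

# Optimal values and state-feedback gains for an initial state z0
z0 = 1.0
Ju_star, Jv_star = Ku0 * z0**2, Kv0 * z0**2
gain_u = -Bu * Ku0 / Ruu            # u*(z) = gain_u * z
gain_v = -Bv * Kv0 / Rvv            # v*(z) = gain_v * z
```

With these positive weights, both Riccati solutions stay positive on the whole interval, so the optimal values are positive and both feedback gains are stabilizing (negative).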

6. Asymptotic Analysis of the PCCDG

We begin this analysis with the asymptotic solution of the terminal-value problem (24)–(26).

6.1. Zero-Order Asymptotic Solution of the Problem (24)–(26)

6.1.1. Transformation of the Problem (24)–(26)

First of all, let us represent the matrices $S_u(t,\varepsilon)$ and $S_{vu}(t,\varepsilon)$ (see Equation (27)) in block form. Namely, based on the block form of the matrix $\mathcal{B}_u(t)$ (see Equation (15)) and the block-diagonal form of the matrices $R_{uu}(t)$ and $\Lambda(\varepsilon)$ (see Equations (4) and (23)), we obtain:
$$S_u(t,\varepsilon)=\begin{pmatrix}S_{u1}(t)&S_{u2}(t)\\ S_{u2}^T(t)&(1/\varepsilon^2)S_{u3}(t,\varepsilon)\end{pmatrix},\tag{30}$$
where the $(n-r+q)\times(n-r+q)$-matrix $S_{u1}(t)$, the $(n-r+q)\times(r-q)$-matrix $S_{u2}(t)$, and the $(r-q)\times(r-q)$-matrix $S_{u3}(t,\varepsilon)$ are
$$S_{u1}(t)=\begin{pmatrix}0&0\\0&\bar R_{uu}^{-1}(t)\end{pmatrix},\qquad S_{u2}(t)=\begin{pmatrix}0\\ \bar R_{uu}^{-1}(t)H_{u,2}^T(t)\end{pmatrix},\qquad S_{u3}(t,\varepsilon)=\varepsilon^2H_{u,2}(t)\bar R_{uu}^{-1}(t)H_{u,2}^T(t)+I_{r-q}.\tag{31}$$
Similarly, we have
$$S_{vu}(t,\varepsilon)=S_{vu}(t)=\begin{pmatrix}S_{vu1}(t)&S_{vu2}(t)\\ S_{vu2}^T(t)&S_{vu3}(t)\end{pmatrix},\tag{32}$$
where the $(n-r+q)\times(n-r+q)$-matrix $S_{vu1}(t)$, the $(n-r+q)\times(r-q)$-matrix $S_{vu2}(t)$, and the $(r-q)\times(r-q)$-matrix $S_{vu3}(t)$ are of the form
$$S_{vu1}(t)=\begin{pmatrix}0&0\\0&\bar R_{uu}^{-1}(t)\bar R_{vu}(t)\bar R_{uu}^{-1}(t)\end{pmatrix},\qquad S_{vu2}(t)=\begin{pmatrix}0\\ \bar R_{uu}^{-1}(t)\bar R_{vu}(t)\bar R_{uu}^{-1}(t)H_{u,2}^T(t)\end{pmatrix},\qquad S_{vu3}(t)=H_{u,2}(t)\bar R_{uu}^{-1}(t)\bar R_{vu}(t)\bar R_{uu}^{-1}(t)H_{u,2}^T(t).\tag{33}$$
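The block formulas (30)–(31) for $S_u(t,\varepsilon)$ can be cross-checked numerically. The sketch below (small assumed dimensions and data, illustrative only) builds $\mathcal{B}_u$ with the block structure of Equation (15), computes $S_u$ directly from its definition in Equation (27), and compares the result with the block representation.

```python
import numpy as np

# Assumed dimensions and data (illustrative only): n = 4, r = 2, q = 1
n, r, q = 4, 2, 1
eps = 0.1
H_u2 = np.array([[0.7]])                       # (r-q) x q block of H_u
R_bar = np.array([[2.0]])                      # q x q positive definite block

# B_u per (15): row blocks (n-r | q | r-q), column blocks (q | r-q)
B_u = np.block([[np.zeros((n - r, q)), np.zeros((n - r, r - q))],
                [np.eye(q),            np.zeros((q, r - q))],
                [H_u2,                 np.eye(r - q)]])

# R_uu + Lambda(eps): diag(R_bar, eps^2 * I)
R_uu_eps = np.block([[R_bar,                np.zeros((q, r - q))],
                     [np.zeros((r - q, q)), eps**2 * np.eye(r - q)]])

# Direct computation of S_u from (27)
S_u_direct = B_u @ np.linalg.inv(R_uu_eps) @ B_u.T

# Block computation of S_u from (30)-(31)
Rinv = np.linalg.inv(R_bar)
S_u1 = np.block([[np.zeros((n - r, n - r)), np.zeros((n - r, q))],
                 [np.zeros((q, n - r)),     Rinv]])
S_u2 = np.vstack((np.zeros((n - r, r - q)), Rinv @ H_u2.T))
S_u3 = eps**2 * (H_u2 @ Rinv @ H_u2.T) + np.eye(r - q)
S_u_blocks = np.block([[S_u1,   S_u2],
                       [S_u2.T, S_u3 / eps**2]])
```

The two computations agree entry by entry, confirming that the only $O(1/\varepsilon^2)$ block sits in the lower-right corner, which is what forces the singularly perturbed block form of the Riccati solution.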
Due to the block form of the matrix S u ( t , ε ) (see Equations (30) and (31)), the right-hand sides of Equations (24) and (25) have singularities at ε = 0 . To remove these singularities and to represent the set (24)–(25) in an explicit singular perturbation form, we look for the solution K u ( t , ε ) , K v ( t , ε ) , t [ 0 , t f ] of the terminal-value problem (24)–(26) in the block form
K i ( t , ε ) = K i 1 ( t , ε ) ε K i 2 ( t , ε ) ε K i 2 T ( t , ε ) ε K i 3 ( t , ε ) , K i j ( t , ε ) T = K i j ( t , ε ) , i = u , v , j = 1 , 3 ,
where the matrices K i 1 ( t , ε ) , K i 2 ( t , ε ) , and K i 3 ( t , ε ) , ( i = u , v ) are of the dimensions ( n r + q ) × ( n r + q ) , ( n r + q ) × ( r q ) and ( r q ) × ( r q ) , respectively.
In addition, we represent the matrices A ( t ) , D v ( t ) , S v ( t ) , and S u v ( t ) in the block form
A ( t ) = A 1 ( t ) A 2 ( t ) A 3 ( t ) A 4 ( t ) , D v ( t ) = D v 1 ( t ) D v 2 ( t ) D v 2 T ( t ) D v 3 ( t ) ,
S v ( t ) = S v 1 ( t ) S v 2 ( t ) S v 2 T ( t ) S v 3 ( t ) , S u v ( t ) = S u v 1 ( t ) S u v 2 ( t ) S u v 2 T ( t ) S u v 3 ( t ) .
The blocks of the matrices in (35) and (36) are of the same dimensions as the corresponding blocks of the matrices in (34).
Now, substitution of (17), (30), (32), and (34)–(36) into the set (24)–(25) yields, after routine matrix algebra, the following set of six Riccati-type differential equations with respect to the matrices K i 1 ( t , ε ) , K i 2 ( t , ε ) , and K i 3 ( t , ε ) , ( i = u , v ) (in this set, for simplicity, we omit the designation of the dependence of the unknown matrix-valued functions on ε ):
d K u 1 ( t ) d t = K u 1 ( t ) A 1 ( t ) ε K u 2 ( t ) A 3 ( t ) A 1 T ( t ) K u 1 ( t ) ε A 3 T ( t ) K u 2 T ( t ) + K u 1 ( t ) S u 1 ( t ) K u 1 ( t ) + ε K u 2 ( t ) S u 2 T ( t ) K u 1 ( t ) + ε K u 1 ( t ) S u 2 ( t ) K u 2 T ( t ) + K u 2 ( t ) S u 3 ( t , ε ) K u 2 T ( t ) + K u 1 ( t ) S v 1 ( t ) K v 1 ( t ) + ε K u 2 ( t ) S v 2 T ( t ) K v 1 ( t ) + ε K u 1 ( t ) S v 2 ( t ) K v 2 T ( t ) + ε 2 K u 2 ( t ) S v 3 ( t ) K v 2 T ( t ) + K v 1 ( t ) S v 1 ( t ) K u 1 ( t ) + ε K v 1 ( t ) S v 2 ( t ) K u 2 T ( t ) + ε K v 2 ( t ) S v 2 T ( t ) K u 1 ( t ) + ε 2 K v 2 ( t ) S v 3 ( t ) K u 2 T ( t ) K v 1 ( t ) S u v 1 ( t ) K v 1 ( t ) ε K v 2 ( t ) S u v 2 T ( t ) K v 1 ( t ) ε K v 1 ( t ) S u v 2 ( t ) K v 2 T ( t ) ε 2 K v 2 ( t ) S u v 3 ( t ) K v 2 T ( t ) D u 1 ( t ) , t [ 0 , t f ] ,
ε d K u 2 ( t ) d t = K u 1 ( t ) A 2 ( t ) ε K u 2 ( t ) A 4 ( t ) ε A 1 T ( t ) K u 2 ( t ) ε A 3 T ( t ) K u 3 ( t ) + ε K u 1 ( t ) S u 1 ( t ) K u 2 ( t ) + ε 2 K u 2 ( t ) S u 2 T ( t ) K u 2 ( t ) + ε K u 1 ( t ) S u 2 ( t ) K u 3 ( t ) + K u 2 ( t ) S u 3 ( t , ε ) K u 3 ( t ) + ε K u 1 ( t ) S v 1 ( t ) K v 2 ( t ) + ε 2 K u 2 ( t ) S v 2 T ( t ) K v 2 ( t ) + ε K u 1 ( t ) S v 2 ( t ) K v 3 ( t ) + ε 2 K u 2 ( t ) S v 3 ( t ) K v 3 ( t ) + ε K v 1 ( t ) S v 1 ( t ) K u 2 ( t ) + ε K v 1 ( t ) S v 2 ( t ) K u 3 ( t ) + ε 2 K v 2 ( t ) S v 2 T ( t ) K u 2 ( t ) + ε 2 K v 2 ( t ) S v 3 ( t ) K u 3 ( t ) ε K v 1 ( t ) S u v 1 ( t ) K v 2 ( t ) ε 2 K v 2 ( t ) S u v 2 T ( t ) K v 2 ( t ) ε K v 1 ( t ) S u v 2 ( t ) K v 3 ( t ) ε 2 K v 2 ( t ) S u v 3 ( t ) K v 3 ( t ) , t [ 0 , t f ] ,
ε d K u 3 ( t ) d t = ε K u 2 T ( t ) A 2 ( t ) ε K u 3 ( t ) A 4 ( t ) ε A 2 T ( t ) K u 2 ( t ) ε A 4 T ( t ) K u 3 ( t ) + ε 2 K u 2 T ( t ) S u 1 ( t ) K u 2 ( t ) + ε 2 K u 3 ( t ) S u 2 T ( t ) K u 2 ( t ) + ε 2 K u 2 T ( t ) S u 2 ( t ) K u 3 ( t ) + K u 3 ( t ) S u 3 ( t , ε ) K u 3 ( t ) + ε 2 K u 2 T ( t ) S v 1 ( t ) K v 2 ( t ) + ε 2 K u 3 ( t ) S v 2 T ( t ) K v 2 ( t ) + ε 2 K u 2 T ( t ) S v 2 ( t ) K v 3 ( t ) + ε 2 K u 3 ( t ) S v 3 ( t ) K v 3 ( t ) + ε 2 K v 2 T ( t ) S v 1 ( t ) K u 2 ( t ) + ε 2 K v 2 T ( t ) S v 2 ( t ) K u 3 ( t ) + ε 2 K v 3 ( t ) S v 2 T ( t ) K u 2 ( t ) + ε 2 K v 3 ( t ) S v 3 ( t ) K u 3 ( t ) ε 2 K v 2 T ( t ) S u v 1 ( t ) K v 2 ( t ) ε 2 K v 3 ( t ) S u v 2 T ( t ) K v 2 ( t ) ε 2 K v 2 T ( t ) S u v 2 ( t ) K v 3 ( t ) ε 2 K v 3 ( t ) S u v 3 ( t ) K v 3 ( t ) D u 2 ( t ) , t [ 0 , t f ] ,
d K v 1 ( t ) d t = K v 1 ( t ) A 1 ( t ) ε K v 2 ( t ) A 3 ( t ) A 1 T ( t ) K v 1 ( t ) ε A 3 T ( t ) K v 2 T ( t ) + K u 1 ( t ) S u 1 ( t ) K v 1 ( t ) + ε K u 2 ( t ) S u 2 T ( t ) K v 1 ( t ) + ε K u 1 ( t ) S u 2 ( t ) K v 2 T ( t ) + K u 2 ( t ) S u 3 ( t , ε ) K v 2 T ( t ) + K v 1 ( t ) S u 1 ( t ) K u 1 ( t ) + ε K v 2 ( t ) S u 2 T ( t ) K u 1 ( t ) + ε K v 1 ( t ) S u 2 ( t ) K u 2 T ( t ) + K v 2 ( t ) S u 3 ( t , ε ) K u 2 T ( t ) + K v 1 ( t ) S v 1 ( t ) K v 1 ( t ) + ε K v 2 ( t ) S v 2 T ( t ) K v 1 ( t ) + ε K v 1 ( t ) S v 2 ( t ) K v 2 T ( t ) + ε 2 K v 2 ( t ) S v 3 ( t ) K v 2 T ( t ) K u 1 ( t ) S v u 1 ( t ) K u 1 ( t ) ε K u 2 ( t ) S v u 2 T ( t ) K u 1 ( t ) ε K u 1 ( t ) S v u 2 ( t ) K u 2 T ( t ) ε 2 K u 2 ( t ) S v u 3 ( t ) K u 2 T ( t ) D v 1 ( t ) , t [ 0 , t f ] ,
ε d K v 2 ( t ) d t = K v 1 ( t ) A 2 ( t ) ε K v 2 ( t ) A 4 ( t ) ε A 1 T ( t ) K v 2 ( t ) ε A 3 T ( t ) K v 3 ( t ) + ε K u 1 ( t ) S u 1 ( t ) K v 2 ( t ) + ε 2 K u 2 ( t ) S u 2 T ( t ) K v 2 ( t ) + ε K u 1 ( t ) S u 2 ( t ) K v 3 ( t ) + K u 2 ( t ) S u 3 ( t , ε ) K v 3 ( t ) + ε K v 1 ( t ) S u 1 ( t ) K u 2 ( t ) + ε 2 K v 2 ( t ) S u 2 T ( t ) K u 2 ( t ) + ε K v 1 ( t ) S u 2 ( t ) K u 3 ( t ) + K v 2 ( t ) S u 3 ( t , ε ) K u 3 ( t ) + ε K v 1 ( t ) S v 1 ( t ) K v 2 ( t ) + ε 2 K v 2 ( t ) S v 2 T ( t ) K v 2 ( t ) + ε K v 1 ( t ) S v 2 ( t ) K v 3 ( t ) + ε 2 K v 2 ( t ) S v 3 ( t ) K v 3 ( t ) ε K u 1 ( t ) S v u 1 ( t ) K u 2 ( t ) ε 2 K u 2 ( t ) S v u 2 T ( t ) K u 2 ( t ) ε K u 1 ( t ) S v u 2 ( t ) K u 3 ( t ) ε 2 K u 2 ( t ) S v u 3 ( t ) K u 3 ( t ) D v 2 ( t ) , t [ 0 , t f ] ,
ε d K v 3 ( t ) d t = ε K v 2 T ( t ) A 2 ( t ) ε K v 3 ( t ) A 4 ( t ) ε A 2 T ( t ) K v 2 ( t ) ε A 4 T ( t ) K v 3 ( t ) + ε 2 K u 2 T ( t ) S u 1 ( t ) K v 2 ( t ) + ε 2 K u 3 T ( t ) S u 2 T ( t ) K v 2 ( t ) + ε 2 K u 2 T ( t ) S u 2 ( t ) K v 3 ( t ) + K u 3 ( t ) S u 3 ( t , ε ) K v 3 ( t ) + ε 2 K v 2 T ( t ) S u 1 ( t ) K u 2 ( t ) + ε 2 K v 3 T ( t ) S u 2 T ( t ) K u 2 ( t ) + ε 2 K v 2 T ( t ) S u 2 ( t ) K u 3 ( t ) + K v 3 ( t ) S u 3 ( t , ε ) K u 3 ( t ) + ε 2 K v 2 T ( t ) S v 1 ( t ) K v 2 ( t ) + ε 2 K v 3 ( t ) S v 2 T ( t ) K v 2 ( t ) + ε 2 K v 2 T ( t ) S v 2 ( t ) K v 3 ( t ) + ε 2 K v 3 ( t ) S v 3 ( t ) K v 3 ( t ) ε 2 K u 2 T ( t ) S v u 1 ( t ) K u 2 ( t ) ε 2 K u 3 ( t ) S v u 2 T ( t ) K u 2 ( t ) ε 2 K u 2 T ( t ) S v u 2 ( t ) K u 3 ( t ) ε 2 K u 3 T ( t ) S v u 3 ( t ) K u 3 ( t ) D v 3 ( t ) , t [ 0 , t f ] .
It is clear that the set of Equations (37)–(42) is equivalent to the set of Equations (24) and (25). The set (37)–(42) has the explicit singular perturbation form. To obtain the terminal conditions for the set (37)–(42), we substitute (16) and (34) into the terminal conditions (26), which yields
K u 1 ( t f ) = C u 1 , K u 2 ( t f ) = 0 , K u 3 ( t f ) = 0 , K v 1 ( t f ) = C v 1 , K v 2 ( t f ) = 0 , K v 3 ( t f ) = 0 .

6.1.2. Zero-Order Asymptotic Solution of the Terminal-Value Problem (37)–(42), (43): Formal Construction

To construct this asymptotic solution, we adapt the Boundary Function Method, Ref. [33]. Namely, we seek the zero-order asymptotic solution K i j , 0 ( t , ε ) , ( i = u , v ) , ( j = 1 , 2 , 3 ) of the problem (37)–(42), (43) in the form
K i j , 0 ( t , ε ) = K i j , 0 o ( t ) + K i j , 0 b ( τ ) , τ = ( t t f ) / ε i = u , v , j = 1 , 2 , 3 ,
where the terms with the superscript o are so-called outer solution terms, while the terms with the superscript b are boundary-layer correction terms in a left-hand neighborhood of the boundary t = t f ; the variable τ is called the stretched time and, for any t [ 0 , t f ) , τ → − ∞ as ε → + 0 .
Equations and conditions for the terms of the zero-order asymptotic solution are obtained by substituting (44) into the problem (37)–(42), (43) instead of K i j , ( i = u , v ) , ( j = 1 , 2 , 3 ) , and equating coefficients of like powers of ε on both sides of the resulting equations, separately for the coefficients depending on t and on τ .
Let us start the construction of the zero-order asymptotic solution with obtaining the terms K i 1 , 0 b ( τ ) , ( i = u , v ) . For these terms, we have the differential equations
d K i 1 , 0 b ( τ ) d τ = 0 , τ 0 , i = u , v .
Following the Boundary Function Method, we require that K i 1 , 0 b ( τ ) → 0 as τ → − ∞ , ( i = u , v ) . Subject to this requirement, the equations in (45) yield the unique solutions
K i 1 , 0 b ( τ ) = 0 , τ 0 , i = u , v .
We proceed with obtaining the terms of the outer solution. Using the equality S u 3 ( t , 0 ) = I r q , t [ 0 , t f ] , we derive the following set of equations and conditions for these terms:
d K u 1 , 0 o ( t ) d t = K u 1 , 0 o ( t ) A 1 ( t ) A 1 T ( t ) K u 1 , 0 o ( t ) + K u 1 , 0 o ( t ) S u 1 ( t ) K u 1 , 0 o ( t ) + K u 2 , 0 o ( t ) K u 2 , 0 o ( t ) T + K u 1 , 0 o ( t ) S v 1 ( t ) K v 1 , 0 o ( t ) + K v 1 , 0 o ( t ) S v 1 ( t ) K u 1 , 0 o ( t ) K v 1 , 0 o ( t ) S u v 1 ( t ) K v 1 , 0 o ( t ) D u 1 ( t ) , t [ 0 , t f ] , K u 1 , 0 o ( t f ) = C u 1 ,
0 = K u 1 , 0 o ( t ) A 2 ( t ) + K u 2 , 0 o ( t ) K u 3 , 0 o ( t ) , t [ 0 , t f ] ,
0 = K u 3 , 0 o ( t ) 2 D u 2 ( t ) , t [ 0 , t f ] ,
d K v 1 , 0 o ( t ) d t = K v 1 , 0 o ( t ) A 1 ( t ) A 1 T ( t ) K v 1 , 0 o ( t ) + K u 1 , 0 o ( t ) S u 1 ( t ) K v 1 , 0 o ( t ) + K u 2 , 0 o ( t ) K v 2 , 0 o ( t ) T + K v 1 , 0 o ( t ) S u 1 ( t ) K u 1 , 0 o ( t ) + K v 2 , 0 o ( t ) K u 2 , 0 o ( t ) T + K v 1 , 0 o ( t ) S v 1 ( t ) K v 1 , 0 o ( t ) K u 1 , 0 o ( t ) S v u 1 ( t ) K u 1 , 0 o ( t ) D v 1 ( t ) , t [ 0 , t f ] , K v 1 , 0 o ( t f ) = C v 1 ,
0 = K v 1 , 0 o ( t ) A 2 ( t ) + K u 2 , 0 o ( t ) K v 3 , 0 o ( t ) + K v 2 , 0 o ( t ) K u 3 , 0 o ( t ) D v 2 ( t ) , t [ 0 , t f ] ,
0 = K u 3 , 0 o ( t ) K v 3 , 0 o ( t ) + K v 3 , 0 o ( t ) K u 3 , 0 o ( t ) D v 3 ( t ) , t [ 0 , t f ] .
Equation (49) yields the unique symmetric positive definite solution
K u 3 , 0 o ( t ) = D u 2 ( t ) 1 / 2 , t [ 0 , t f ] ,
where the superscript “1/2” denotes the unique symmetric positive definite square root of the corresponding symmetric, positive definite matrix.
Substituting (53) into (52), we obtain, after some rearrangement, the Lyapunov algebraic equation with respect to the matrix K v 3 , 0 o ( t ) :
D u 2 ( t ) 1 / 2 K v 3 , 0 o ( t ) + K v 3 , 0 o ( t ) D u 2 ( t ) 1 / 2 = D v 3 ( t ) , t [ 0 , t f ] .
Since the matrix D u 2 ( t ) 1 / 2 is symmetric positive definite and the matrix D v 3 ( t ) is symmetric, by virtue of the results of Reference [34], Equation (54) has the unique symmetric solution
K v 3 , 0 o ( t ) = 0 + exp D u 2 ( t ) 1 / 2 σ D v 3 ( t ) exp D u 2 ( t ) 1 / 2 σ d σ , t [ 0 , t f ] .
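As an illustration (not part of the original derivation), the integral formula (55) can be checked numerically in the scalar case r − q = 1 , where it reduces to K = D v 3 / ( 2 D u 2 1 / 2 ) . The following sketch, with the hypothetical values D u 2 = 4 and D v 3 = 3 , approximates the improper integral by a Riemann sum and verifies that the result satisfies the scalar version of the Lyapunov Equation (54).

```python
import math

# Scalar sketch of the integral formula (55) with hypothetical data:
#   K = ∫_0^∞ exp(-a*s) * d_v3 * exp(-a*s) ds,  a = sqrt(d_u2),
# which should solve the scalar Lyapunov equation a*K + K*a = d_v3.
d_u2, d_v3 = 4.0, 3.0
a = math.sqrt(d_u2)          # scalar analogue of D_u2(t)^{1/2}

h, T = 1e-4, 40.0            # step and truncation of the improper integral
K = sum(math.exp(-2.0 * a * (i * h)) * d_v3 * h for i in range(int(T / h)))

residual = a * K + K * a - d_v3   # ≈ 0
exact = d_v3 / (2.0 * a)          # closed form in the scalar case
```

For matrix-valued data the same check goes through with matrix exponentials in place of the scalar exponential.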
Substitution of (53) into (48) yields the linear algebraic equation with respect to K u 2 , 0 o ( t ) , whose solution is
K u 2 , 0 o ( t ) = K u 1 , 0 o ( t ) A 2 ( t ) D u 2 ( t ) 1 / 2 , t [ 0 , t f ] ,
where the superscript “ − 1 / 2 ” denotes the inverse of the unique symmetric positive definite square root of the corresponding symmetric positive definite matrix.
Similarly, substituting (53) and (56) into (51) and solving the resulting algebraic equation with respect to K v 2 , 0 o ( t ) yield
K v 2 , 0 o ( t ) = [ K v 1 , 0 o ( t ) A 2 ( t ) K u 1 , 0 o ( t ) A 2 ( t ) D u 2 ( t ) 1 / 2 K v 3 , 0 o ( t ) + D v 2 ( t ) ] D u 2 ( t ) 1 / 2 , t [ 0 , t f ] .
Now, we substitute (56) into (47), which yields
d K u 1 , 0 o ( t ) d t = K u 1 , 0 o ( t ) A 1 ( t ) A 1 T ( t ) K u 1 , 0 o ( t ) + K u 1 , 0 o ( t ) S u , 0 ( t ) K u 1 , 0 o ( t ) + K u 1 , 0 o ( t ) S v 1 ( t ) K v 1 , 0 o ( t ) + K v 1 , 0 o ( t ) S v 1 ( t ) K u 1 , 0 o ( t ) K v 1 , 0 o ( t ) S u v 1 ( t ) K v 1 , 0 o ( t ) D u 1 ( t ) , t [ 0 , t f ] , K u 1 , 0 o ( t f ) = C u 1 ,
where
S u , 0 ( t ) = S u 1 ( t ) + A 2 ( t ) D u 2 1 ( t ) A 2 T ( t ) .
Further, substituting (56)–(57) into (50) and using (52), we have, after routine matrix algebra,
d K v 1 , 0 o ( t ) d t = K v 1 , 0 o ( t ) A 1 ( t ) A 1 T ( t ) K v 1 , 0 o ( t ) + K u 1 , 0 o ( t ) A 2 ( t ) D u 2 1 ( t ) D v 2 T ( t ) + D v 2 ( t ) D u 2 1 ( t ) A 2 T ( t ) K u 1 , 0 o ( t ) + K u 1 , 0 o ( t ) S u , 0 ( t ) K v 1 , 0 o ( t ) + K v 1 , 0 o ( t ) S u , 0 ( t ) K u 1 , 0 o ( t ) + K v 1 , 0 o ( t ) S v 1 ( t ) K v 1 , 0 o ( t ) K u 1 , 0 o ( t ) S v u , 0 ( t ) K u 1 , 0 o ( t ) D v 1 ( t ) , t [ 0 , t f ] , K v 1 , 0 o ( t f ) = C v 1 ,
where
S v u , 0 ( t ) = S v u 1 ( t ) + A 2 ( t ) D u 2 1 ( t ) D v 3 ( t ) D u 2 1 ( t ) A 2 T ( t ) .
In what follows, we assume:
AVII. The terminal-value problem (58), (60) has the solution K u 1 , 0 o ( t ) , K v 1 , 0 o ( t ) in the entire interval [ 0 , t f ] .
Now, let us obtain the boundary-layer correction terms K i j , 0 b ( τ ) , ( i = u , v ) , ( j = 2 , 3 ) . Using (46) and the identity S u 3 ( t , 0 ) ≡ I r q , we have for these terms the following terminal-value problem in the interval τ ( − ∞ , 0 ] :
d K u 2 , 0 b ( τ ) d τ = K u 2 , 0 o ( t f ) K u 3 , 0 b ( τ ) + K u 2 , 0 b ( τ ) K u 3 , 0 o ( t f ) + K u 2 , 0 b ( τ ) K u 3 , 0 b ( τ ) ,
d K u 3 , 0 b ( τ ) d τ = K u 3 , 0 o ( t f ) K u 3 , 0 b ( τ ) + K u 3 , 0 b ( τ ) K u 3 , 0 o ( t f ) + K u 3 , 0 b ( τ ) 2 ,
d K v 2 , 0 b ( τ ) d τ = K u 2 , 0 o ( t f ) K v 3 , 0 b ( τ ) + K u 2 , 0 b ( τ ) K v 3 , 0 o ( t f ) + K u 2 , 0 b ( τ ) K v 3 , 0 b ( τ ) + K v 2 , 0 o ( t f ) K u 3 , 0 b ( τ ) + K v 2 , 0 b ( τ ) K u 3 , 0 o ( t f ) + K v 2 , 0 b ( τ ) K u 3 , 0 b ( τ ) ,
d K v 3 , 0 b ( τ ) d τ = K u 3 , 0 o ( t f ) K v 3 , 0 b ( τ ) + K u 3 , 0 b ( τ ) K v 3 , 0 o ( t f ) + K u 3 , 0 b ( τ ) K v 3 , 0 b ( τ ) + K v 3 , 0 o ( t f ) K u 3 , 0 b ( τ ) + K v 3 , 0 b ( τ ) K u 3 , 0 o ( t f ) + K v 3 , 0 b ( τ ) K u 3 , 0 b ( τ ) .
K u 2 , 0 b ( 0 ) = K u 2 , 0 o ( t f ) , K u 3 , 0 b ( 0 ) = K u 3 , 0 o ( t f ) ,
K v 2 , 0 b ( 0 ) = K v 2 , 0 o ( t f ) , K v 3 , 0 b ( 0 ) = K v 3 , 0 o ( t f ) .
This problem consists of two subproblems, which can be solved consecutively: first, the subproblem with respect to ( K u 2 , 0 b ( τ ) , K u 3 , 0 b ( τ ) ) is solved, then the subproblem with respect to ( K v 2 , 0 b ( τ ) , K v 3 , 0 b ( τ ) ) is solved. Let us start with the first subproblem. Using (53), (56) and the equality K u 1 , 0 o ( t f ) = C u 1 (see Equation (58)), we can rewrite the subproblem (62)–(63), (66) as:
d K u 2 , 0 b ( τ ) d τ = K u 2 , 0 b ( τ ) D 2 ( t f ) 1 / 2 + K u 3 , 0 b ( τ ) + C u 1 A 2 ( t f ) D 2 ( t f ) 1 / 2 K u 3 , 0 b ( τ ) , τ ( , 0 ] , K u 2 , 0 b ( 0 ) = C u 1 A 2 ( t f ) D 2 ( t f ) 1 / 2 ,
d K u 3 , 0 b ( τ ) d τ = D 2 ( t f ) 1 / 2 K u 3 , 0 b ( τ ) + K u 3 , 0 b ( τ ) D 2 ( t f ) 1 / 2 + K u 3 , 0 b ( τ ) 2 , τ ( , 0 ] , K u 3 , 0 b ( 0 ) = D 2 ( t f ) 1 / 2 .
The terminal-value problem (68)–(69) can also be solved consecutively: first, the problem (69) is solved, then the problem (68) is solved. Let us observe that the differential equation in (69) is a Bernoulli-type matrix differential equation, as in Ref. [35]. Using this observation, we directly obtain the solution of the problem (69)
K u 3 , 0 b ( τ ) = 2 D 2 ( t f ) 1 / 2 exp 2 D 2 ( t f ) 1 / 2 τ [ I r q + exp 2 D 2 ( t f ) 1 / 2 τ ] 1 , τ ( , 0 ] .
Substituting (70) into the problem (68) and solving the obtained terminal-value problem with respect to K u 2 , 0 b ( τ ) , we have
K u 2 , 0 b ( τ ) = 2 C u 1 A 2 ( t f ) D 2 ( t f ) 1 / 2 exp 2 D 2 ( t f ) 1 / 2 τ [ I r q + exp 2 D 2 ( t f ) 1 / 2 τ ] 1 , τ ( , 0 ] .
Since the matrix D 2 ( t f ) 1 / 2 is positive definite, the solution (70)–(71) to the problem (68)–(69) (and, therefore, to the subproblem (62)–(63), (66) of the problem (62)–(67)) satisfies the inequality
max K u 2 , 0 b ( τ ) , K u 3 , 0 b ( τ ) c u exp ( β u τ ) , τ ( , 0 ] ,
where c u > 0 and β u > 0 are some constants.
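As a numerical illustration of Equations (69) and (70) (again, not part of the derivation), consider the scalar case r − q = 1 and write a = D 2 ( t f ) 1 / 2 > 0 ; under the sign convention dictated by the matching conditions, the solution (70) then reads K ( τ ) = − 2 a e 2 a τ / ( 1 + e 2 a τ ) . The sketch below, with the hypothetical value a = 1.5 , checks by central differences that this closed form satisfies the scalar Bernoulli equation d K / d τ = 2 a K + K 2 , meets the terminal condition K ( 0 ) = − a , and decays exponentially as τ → − ∞ , in agreement with the estimate (72).

```python
import math

def K_b(tau, a):
    # Scalar analogue of (70): K(τ) = -2a·e^{2aτ} / (1 + e^{2aτ}), τ ≤ 0
    e = math.exp(2.0 * a * tau)
    return -2.0 * a * e / (1.0 + e)

a = 1.5                      # hypothetical value of D_2(t_f)^{1/2}
h = 1e-5                     # step for central differences

# Residual of the scalar Bernoulli equation dK/dτ = 2aK + K² at sample points
residuals = []
for tau in (-3.0, -1.0, -0.2):
    dK = (K_b(tau + h, a) - K_b(tau - h, a)) / (2.0 * h)
    residuals.append(abs(dK - (2.0 * a * K_b(tau, a) + K_b(tau, a) ** 2)))

terminal_gap = abs(K_b(0.0, a) + a)   # terminal condition K(0) = -a
decay = abs(K_b(-5.0, a))             # exponentially small for τ << 0
```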
We proceed to the solution of the subproblem (64)–(65), (67). First, we solve the differential Equation (65) with the corresponding terminal condition from (67). This terminal-value problem can be rewritten as:
d K v 3 , 0 b ( τ ) d τ = D 2 ( t f ) 1 / 2 + K u 3 , 0 b ( τ ) K v 3 , 0 b ( τ ) + K v 3 , 0 b ( τ ) D 2 ( t f ) 1 / 2 + K u 3 , 0 b ( τ ) + K u 3 , 0 b ( τ ) K v 3 , 0 o ( t f ) + K v 3 , 0 o ( t f ) K u 3 , 0 b ( τ ) , τ ( , 0 ] , K v 3 , 0 b ( 0 ) = K v 3 , 0 o ( t f ) .
The differential equation in (73) is a Lyapunov matrix differential equation, as in Ref. [36]. Using the results of that work, we obtain the solution of the problem (73)
K v 3 , 0 b ( τ ) = Φ ( τ , 0 ) K v 3 , 0 o ( t f ) Φ T ( τ , 0 ) + 0 τ Φ ( τ , σ ) [ K u 3 , 0 b ( σ ) K v 3 , 0 o ( t f ) + K v 3 , 0 o ( t f ) K u 3 , 0 b ( σ ) ] Φ T ( τ , σ ) d σ , τ ( , 0 ] ,
where, for any σ 0 , the matrix-valued function Φ ( τ , σ ) is the solution of the following terminal-value problem:
d Φ ( τ , σ ) d τ = D 2 ( t f ) 1 / 2 + K u 3 , 0 b ( τ ) Φ ( τ , σ ) , τ ( , σ ] , Φ ( σ , σ ) = I r q .
Using the positive definiteness of the matrix D 2 ( t f ) 1 / 2 , the inequality (72), and the results of Reference [33], we obtain the estimate of the matrix Φ ( τ , σ )
Φ ( τ , σ ) c exp β ( τ σ ) , < τ σ 0 ,
where c > 0 and 0 < β < β u are some constants.
Now, let us solve the differential Equation (64) with the corresponding terminal condition from (67). This terminal-value problem can be rewritten as:
d K v 2 , 0 b ( τ ) d τ = K v 2 , 0 b ( τ ) D 2 ( t f ) 1 / 2 + K u 3 , 0 b ( τ ) + K u 2 , 0 o ( t f ) K v 3 , 0 b ( τ ) + K u 2 , 0 b ( τ ) K v 3 , 0 o ( t f ) + K u 2 , 0 b ( τ ) K v 3 , 0 b ( τ ) + K v 2 , 0 o ( t f ) K u 3 , 0 b ( τ ) , τ ( , 0 ] , K v 2 , 0 b ( 0 ) = K v 2 , 0 o ( t f ) .
This problem yields the solution
K v 2 , 0 b ( τ ) = K v 2 , 0 o ( t f ) Φ T ( τ , 0 ) + 0 τ [ K u 2 , 0 o ( t f ) K v 3 , 0 b ( σ ) + K u 2 , 0 b ( σ ) K v 3 , 0 o ( t f ) + K u 2 , 0 b ( σ ) K v 3 , 0 b ( σ ) + K v 2 , 0 o ( t f ) K u 3 , 0 b ( σ ) ] Φ T ( τ , σ ) d σ , τ ( , 0 ] .
Using the inequalities (72) and (76), we directly obtain the following inequality for the above obtained matrix-valued functions K v 3 , 0 b ( τ ) and K v 2 , 0 b ( τ ) :
max K v 2 , 0 b ( τ ) , K v 3 , 0 b ( τ ) c v exp ( β v τ ) , τ ( , 0 ] ,
where c v > 0 and 0 < β v < β are some constants.

6.1.3. Justification of the Asymptotic Solution to the Problem (37)–(42), (43)

Lemma 2.
Let the assumptions AI-AVII be valid. Then, there exists a number ε 0 > 0 such that for all ε ( 0 , ε 0 ] , the terminal-value problem (37)–(42), (43) has the unique solution { K i j ( t , ε ) , ( i = u , v ) , ( j = 1 , 2 , 3 ) } in the entire interval t [ 0 , t f ] . Moreover, for all t [ 0 , t f ] and ε ( 0 , ε 0 ] , the following inequalities are satisfied:
K i j ( t , ε ) K i j , 0 ( t , ε ) a ε , i = u , v , j = 1 , 2 , 3 ,
where K i j , 0 ( t , ε ) , ( i = u , v ) , ( j = 1 , 2 , 3 ) are given by (44), and their terms are obtained in Section 6.1.2; a > 0 is some constant independent of ε.
Proof of the lemma is presented in Appendix B.
As a direct consequence of Lemma 2, we have the following two assertions.
Corollary 3.
Let the assumptions AI-AVII be valid. Then, for all ε ( 0 , ε 0 ] , the terminal-value problem (24)–(26) has the unique solution K u ( t , ε ) , K v ( t , ε ) , t [ 0 , t f ] . The matrices K u ( t , ε ) and K v ( t , ε ) have the block form (34), where the blocks are the corresponding components of the solution to the terminal-value problem (37)–(42), (43) mentioned in Lemma 2.
Corollary 4.
Let the assumptions AI-AVII be valid. Then, for all ε ( 0 , ε 0 ] , the PCCDG has the Nash equilibrium mentioned in Proposition 2.

6.2. Asymptotic Representations of the Optimal Values of the Functionals in the PCCDG

Let us represent the initial state position z 0 of the PCCDG in the block form
z 0 = col ( x 0 , y 0 ) , x 0 E n r + q , y 0 E r q .
Using the upper block x 0 of the vector z 0 and the solution K u 1 , 0 o ( t ) , K v 1 , 0 o ( t ) of the terminal-value problem (58), (60) mentioned in the assumption AVII, we construct the values
J u , 0 * = x 0 T K u 1 , 0 o ( 0 ) x 0 , J v , 0 * = x 0 T K v 1 , 0 o ( 0 ) x 0 .
Corollary 5.
Let the assumptions AI-AVII be valid. Then, for all ε ( 0 , ε 0 ] , the optimal values J u , ε * and J v , ε * of the functionals (22) and (13) in the PCCDG satisfy the inequalities
| J i , ε * J i , 0 * | χ ( z 0 ) ε , i = u , v ,
where χ ( z 0 ) > 0 is some constant independent of ε but depending on z 0 .
Proof. 
The corollary follows immediately from Proposition 2, Lemma 2, and Corollaries 3 and 4. □

7. Reduced Differential Game

To construct this game, we introduce the following block-form matrices:
B 1 , 0 ( t ) = B ˜ , A 2 ( t ) , B ˜ = O ( n r ) × q I q , D ˜ v ( t ) = O ( n r + q ) × q , D v 2 ( t ) , Θ u u ( t ) = R ¯ uu ( t ) O q × r q O r q × q D u 2 ( t ) , Θ v u ( t ) = R ¯ v u ( t ) O q × r q O r q × q D v 3 ( t ) .
Consider the following finite-horizon non-zero-sum differential game with the dynamics of the form
d x r ( t ) d t = A 1 ( t ) x r ( t ) + B 1 , 0 ( t ) u r ( t ) + B v 1 ( t ) v r ( t ) , t [ 0 , t f ] , x r ( 0 ) = x 0 ,
where x r ( t ) E n r + q is a state variable; u r ( t ) E r and v r ( t ) E s are controls of the game’s players; B v 1 ( t ) is the upper block of the matrix B v ( t ) of the dimension ( n r + q ) × s .
The functionals of the game, to be minimized by u r ( t ) and v r ( t ) , respectively, are
J u r ( u r , v r ) = x r T ( t f ) C u 1 x r ( t f ) + 0 t f x r T ( t ) D u 1 ( t ) x r ( t ) + u r T ( t ) Θ u u ( t ) u r ( t ) + v r T ( t ) R u v ( t ) v r ( t ) d t
and
J v r ( u r , v r ) = x r T ( t f ) C v 1 x r ( t f ) + 0 t f [ x r T ( t ) D v 1 ( t ) x r ( t ) + v r T ( t ) R v v ( t ) v r ( t ) + u r T ( t ) Θ v u ( t ) u r ( t ) + 2 x r T ( t ) D ˜ v ( t ) u r ( t ) ] d t .
More precisely, in the game (84)–(86), the player with the control u r ( t ) aims to minimize the functional (85) by a proper choice of u r ( t ) , while the player with the control v r ( t ) aims to minimize the functional (86) by a proper choice of v r ( t ) . We consider this game with respect to its Nash equilibrium, subject to the assumption that both players have perfect knowledge of the current game state. We call the game (84)–(86) the Reduced Differential Game (RDG).
Remark 5.
Since the matrices R ¯ u u ( t ) , D u 2 ( t ) , and R v v ( t ) are positive definite in the entire interval [ 0 , t f ] , the RDG is regular. A Nash equilibrium pair of state-feedback controls in the RDG is defined quite similarly to such an equilibrium pair in the PCCDG.
By virtue of the results of References [3,4], we have the following assertion.
Proposition 3.
Let the assumptions AI-AVII be valid. Then, the RDG has the Nash equilibrium u r * ( x r , t ) , v r * ( x r , t ) , where
u r * ( x r , t ) = Θ u u 1 ( t ) B 1 , 0 T ( t ) K u 1 , 0 o ( t ) x r , v r * ( x r , t ) = R v v 1 ( t ) B v 1 T ( t ) K v 1 , 0 o ( t ) x r ,
and K u 1 , 0 o ( t ) , K v 1 , 0 o ( t ) , t [ 0 , t f ] is the solution of the terminal-value problem (58), (60) mentioned in the assumption AVII.
The optimal values of the functionals (85) and (86) in the RDG coincide with the values J u , 0 * and J v , 0 * , respectively, given in (82).
Remark 6.
Using the block form of the matrices B 1 , 0 ( t ) and Θ u u ( t ) (see Equation (82)), we can represent the control u r * ( x r , t ) in the Nash equilibrium of the RDG as:
u r * ( x r , t ) = u r , 1 * ( x r , t ) u r , 2 * ( x r , t ) ,
where
u r , 1 * ( x r , t ) = R ¯ u u 1 ( t ) B ˜ T K u 1 , 0 o ( t ) x r , u r , 2 * ( x r , t ) = D u 2 1 ( t ) A 2 T ( t ) K u 1 , 0 o ( t ) x r .

8. Nash Equilibrium Sequence of the SDG

For a given ε ( 0 , ε 0 ] , consider the following vector-valued function of ( z , t ) E n × [ 0 , t f ] :
u ε , 0 * ( z , t ) = u r , 1 * ( x , t ) 1 ε K u 2 , 0 o ( t ) T x + K u 3 , 0 o ( t ) y ,
where z = col ( x , y ) , x E n r + q , y E r q ; K u 2 , 0 o ( t ) and K u 3 , 0 o ( t ) are given by (56) and (53), respectively.
Lemma 3.
Let the assumptions AI-AVII be valid. Then, for any given ε ( 0 , ε 0 ] , the pair u ε , 0 * ( z , t ) , v r * ( x , t ) is an admissible pair of the players’ state-feedback controls in the SDG (11)–(13), i.e., u ε , 0 * ( z , t ) , v r * ( x , t ) ( U V ) z .
Proof. 
The statement of the lemma directly follows from the linear dependence of u ε , 0 * ( z , t ) on z E n , v r * ( x , t ) on x E n r + q , and the continuity with respect to t [ 0 , t f ] of the gain matrices in u ε , 0 * ( z , t ) and v r * ( x , t ) . □
Lemma 4.
Let the assumptions AI-AVII be valid. Then, in the SDG (11)–(13), the following limit equalities are satisfied:
lim ε + 0 J u u ε , 0 * ( z , t ) , v r * ( x , t ) = J u , 0 * , lim ε + 0 J v u ε , 0 * ( z , t ) , v r * ( x , t ) = J v , 0 * ,
where J u , 0 * and J v , 0 * are given in (82).
Proof of the lemma is presented in Appendix C.
Let us substitute the control u ε , 0 * ( z , t ) instead of u ( t ) into the system (11) and the functional (13). Due to this substitution, we obtain the following optimal control problem with the state variable z ( t ) E n and the control v ( t ) E s :
d z ( t ) d t = A ( t ) z ( t ) + B u ( t ) u ε , 0 * ( z , t ) + B v ( t ) v ( t ) , t [ 0 , t f ] , z ( 0 ) = z 0 , J ˜ ( v ) = z T ( t f ) C v z ( t f ) + 0 t f [ z T ( t ) D v ( t ) z ( t ) + v T ( t ) R v v ( t ) v ( t ) + u ε , 0 * ( z , t ) T R v u ( t ) u ε , 0 * ( z , t ) ] d t min v .
We seek the optimal control of the problem (92) in the state-feedback form v = v ( z , t ) among all such controls belonging to the set K v u ε , 0 * ( z , t ) , where K v ( · ) is given in (20). Let J ˜ ε * be the optimal value of the functional in the problem (92).
Lemma 5.
Let the assumptions AI-AVII be valid. Then, there exists a positive number ε ˜ 0 ε 0 such that, for all ε ( 0 , ε ˜ 0 ] , the following inequality is satisfied: | J ˜ ε * J v , 0 * | κ ˜ ( z 0 ) ε , where J v , 0 * is given in (82); κ ˜ ( z 0 ) > 0 is some constant independent of ε but depending on z 0 .
Proof. 
The lemma is proven similarly to the results of Reference [7] (see Lemmas 1, 4 and their proofs). □
Now, let us replace v ( t ) with v r * ( x , t ) in the system (11) and the functional (12). Due to such a replacement, we obtain the following optimal control problem with the state variable z ( t ) E n and the control u ( t ) E r :
d z ( t ) d t = A ( t ) z ( t ) + B u ( t ) u ( t ) + B v ( t ) v r * ( x , t ) , t [ 0 , t f ] , z ( 0 ) = z 0 , J ^ ( u ) = z T ( t f ) C u z ( t f ) + 0 t f [ z T ( t ) D u ( t ) z ( t ) + u T ( t ) R u u ( t ) u ( t ) + v r * ( x , t ) T R u v ( t ) v r * ( x , t ) ] d t inf u .
We seek the infimum of the functional J ^ ( u ) in the optimal control problem (93) for the state-feedback controls u = u ( z , t ) belonging to the set K u v r * ( x , t ) , where K u ( · ) is given in (21). Let J ^ * be the infimum value of the functional in the problem (93).
Lemma 6.
Let the assumptions AI-AVII be valid. Then, the following equality is satisfied: J ^ * = J u , 0 * , where J u , 0 * is given in (82).
Proof. 
The lemma is proven similarly to the results of Reference [7] (see Lemma 5 and its proof). □
Let { ε k } , ( k = 1 , 2 , … ) be a sequence of numbers such that: (i) ε k ( 0 , ε ˜ 0 ] , ( k = 1 , 2 , … ) ; (ii) lim k + ε k = 0 .
Theorem 1.
Let the assumptions AI-AVII be valid. Then, the sequence of the state-feedback controls u ε k , 0 * ( z , t ) , v r * ( x , t ) , ( k = 1 , 2 , ) , where v r * ( x , t ) and u ε , 0 * ( z , t ) are defined in (87) and (90), is the Nash equilibrium sequence in the SDG. Moreover, the optimal values J u * and J v * of the functionals in this game are
J u * = J u , 0 * , J v * = J v , 0 * ,
where J u , 0 * and J v , 0 * are the optimal values of the functionals in the RDG (84)–(86) given by Equation (82).
Proof. 
First of all let us note that, due to Lemma 3, the pair u ε k , 0 * ( z , t ) , v r * ( x , t ) is admissible in the SDG for any k { 1 , 2 , } . Therefore, to prove the first statement of the theorem, we should show the fulfillment of all the items of Definition 4 for the sequence u ε k , 0 * ( z , t ) , v r * ( x , t ) , ( k = 1 , 2 , ) . Lemma 4 yields the fulfillment of the item (I) of this definition. The fulfillment of the item (II) directly follows from the first equality in (91) and Lemma 6. The fulfillment of the item (III) follows immediately from the second equality in (91) and Lemma 5. Namely, from this lemma, we have the inequality J v , 0 * κ ˜ ( z 0 ) ε k J ˜ ε k * J v , 0 * + κ ˜ ( z 0 ) ε k , ( k = 1 , 2 , ) , while, from the definition of the value J ˜ ε k * , we have the inequality J ˜ ε k * J v u ε k * ( z , t ) , v ( z , t ) , ( k = 1 , 2 , ) , v ( z , t ) k = 1 + K u ε k * ( z , t ) . The left-hand side of the first inequality, along with the second inequality and the second equality in (91), yields lim k + J v u ε k , 0 * ( z , t ) , v r * ( x , t ) κ ˜ ( z 0 ) ε k J v u ε k * ( z , t ) , v ( z , t ) , ( k = 1 , 2 , ) , v ( z , t ) k = 1 + K u ε k * ( z , t ) . Calculating lim inf k + of both sides of the latter inequality, we obtain the fulfillment of the item (III) of Definition 4 for the sequence u ε k , 0 * ( z , t ) , v r * ( x , t ) , ( k = 1 , 2 , ) . Thus, this sequence satisfies all the items of Definition 4.
The second statement of the theorem is a direct consequence of the expressions for J u * and J v * in Definition 4, as well as Lemma 4 and Proposition 3. □
Remark 7.
Due to Theorem 1, to design the Nash equilibrium sequence in the SDG and to obtain the optimal values of its functionals, one has to solve the lower-dimensional regular RDG and to construct the gain matrices K u 2 , 0 o ( t ) , K u 3 , 0 o ( t ) , t [ 0 , t f ] .

9. Examples

9.1. Example 1

Consider the particular case of the SDG (11)–(13), where n = 2 , r = 2 , s = 1 , q = 1 , and
A ( t ) = 1 t + 1 t 2 t 1 , B u ( t ) = 1 0 t 1 , B v ( t ) = B v = 1 2 , C u 1 = 1 , C v 1 = 2 , D u ( t ) = 1 0 0 ( t + 1 ) 2 , D v ( t ) = 2 0 0 2 ( t + 1 ) 2 , R u u ( t ) = R u u = 1 0 0 0 , R v u ( t ) = R v u = 2 0 0 0 , R u v ( t ) = R u v = 0.5 , R v v ( t ) = R v v = 1 , z 0 = col ( 2 , 1 ) , t f = 4 .
In this example, the terminal-value problem (58), (60) becomes:
d K u 1 , 0 o ( t ) d t = 2 K u 1 , 0 o ( t ) + 2 K u 1 , 0 o ( t ) 2 + 2 K u 1 , 0 o ( t ) K v 1 , 0 o ( t ) 0.5 K v 1 , 0 o ( t ) 2 1 , t [ 0 , 4 ] , K u 1 , 0 o ( 4 ) = 1 , d K v 1 , 0 o ( t ) d t = 2 K v 1 , 0 o ( t ) + 4 K u 1 , 0 o ( t ) K v 1 , 0 o ( t ) + K v 1 , 0 o ( t ) 2 4 K u 1 , 0 o ( t ) 2 2 , t [ 0 , 4 ] , K v 1 , 0 o ( 4 ) = 2 .
This problem has the unique solution
K u 1 , 0 o ( t ) , K v 1 , 0 o ( t ) = S ( t ) , 2 S ( t ) , t [ 0 , 4 ] , S ( t ) = ( 1 γ ) 1 2 1 4 γ exp 2 ( 1 4 γ ) ( t 4 ) + 2 1 4 γ 1 + γ , γ = 1 + 5 4 .
Using this solution, as well as Equations (53), (56), and (94), we obtain
K u 2 , 0 o ( t ) = S ( t ) , K u 3 , 0 o ( t ) = t + 1 , t [ 0 , 4 ] .
Now, using the above obtained K u j , 0 o ( t ) , ( j = 1 , 2 , 3 ) and K v 1 , 0 o ( t ) , as well as Theorem 1, we design the Nash equilibrium sequence u ε k , 0 * ( z , t ) , v r * ( x , t ) , ( k = 1 , 2 , ) in the game (11)–(13) with the data (94), where
u ε k , 0 * ( z , t ) = S ( t ) x 1 / ε k S ( t ) x + ( t + 1 ) y , z = col ( x , y ) ,
v r * ( x , t ) = 2 S ( t ) x ,
ε k > 0 and lim k + ε k = 0 . Moreover, by virtue of Theorem 1, the optimal values of the game’s functionals are
J u * = 4 S ( 0 ) , J v * = 8 S ( 0 ) .
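The closed-form solution of this example can be cross-checked numerically. The sketch below integrates the scalar Riccati pair of the example backward in time by the classical Runge–Kutta method, using a sign-restored reading of the problem (58), (60) with the data (94): d K u / d t = − 2 K u + 2 K u 2 + 2 K u K v − 0.5 K v 2 − 1 , K u ( 4 ) = 1 , and d K v / d t = − 2 K v + 4 K u K v + K v 2 − 4 K u 2 − 2 , K v ( 4 ) = 2 (this reading is an assumption of the sketch, not a statement of the paper). It confirms the relation K v 1 , 0 o ( t ) = 2 K u 1 , 0 o ( t ) and that, away from t f = 4 , K u 1 , 0 o ( t ) settles near γ = ( 1 + √5 ) / 4 ≈ 0.809 , consistent with the formula for S ( t ) .

```python
import math

def f(Ku, Kv):
    # Assumed sign-restored scalar Riccati pair of Example 1 (the displayed
    # equations have their minus signs suppressed by text extraction)
    dKu = -2.0 * Ku + 2.0 * Ku**2 + 2.0 * Ku * Kv - 0.5 * Kv**2 - 1.0
    dKv = -2.0 * Kv + 4.0 * Ku * Kv + Kv**2 - 4.0 * Ku**2 - 2.0
    return dKu, dKv

# Integrate backward from t_f = 4 with terminal values K_u(4)=1, K_v(4)=2 (RK4)
Ku, Kv = 1.0, 2.0
dt = -1e-3
for _ in range(4000):                       # 4 / |dt| steps down to t = 0
    k1u, k1v = f(Ku, Kv)
    k2u, k2v = f(Ku + dt/2*k1u, Kv + dt/2*k1v)
    k3u, k3v = f(Ku + dt/2*k2u, Kv + dt/2*k2v)
    k4u, k4v = f(Ku + dt*k3u, Kv + dt*k3v)
    Ku += dt/6 * (k1u + 2*k2u + 2*k3u + k4u)
    Kv += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)

gamma = (1.0 + math.sqrt(5.0)) / 4.0
ratio_gap = abs(Kv - 2.0 * Ku)     # K_v ≡ 2 K_u along the solution
Ju, Jv = 4.0 * Ku, 8.0 * Ku        # J_u* = 4 S(0), J_v* = 8 S(0)
```

Under this reading, S ( 0 ) differs from γ only by a term that is exponentially small in t f , so J u * ≈ 4 γ and J v * ≈ 8 γ .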

9.2. Example 2

First of all, let us make two remarks which are used in this example.
Remark 8.
Due to the results of Reference [4], Propositions 2 and 3 also hold in the case where the matrices C v , D v ( t ) , R u v ( t ) , t [ 0 , t f ] are negative semi-definite. Therefore, all the other assertions of the present paper (including Theorem 1) are also valid for such matrices.
Remark 9.
If all the control coordinates of the “singular” player are singular ( q = 0 ), then the upper block of the control u ε , 0 * ( z , t ) (see Equation (90)) vanishes, while the lower block remains unchanged. Thus, in this case, we have u ε , 0 * ( z , t ) = 1 ε K u 2 , 0 o ( t ) T x + K u 3 , 0 o ( t ) y , z = col ( x , y ) , x E n r , y E r , t [ 0 , t f ] .
In this example, we consider a singular non-zero-sum game, which is an extension of the singular zero-sum planar pursuit-evasion game studied in Reference [7], as well as a singular version of the non-zero-sum pursuit-evasion game analyzed in Reference [4]. Namely, we consider the following particular case of the SDG:
$$\frac{dx(t)}{dt} = y(t), \quad t\in[0,t_f], \quad x(0)=x_0,$$
$$\frac{dy(t)}{dt} = u(t) + v(t), \quad t\in[0,t_f], \quad y(0)=y_0,$$
$$J_u(u,v) = C_{u1}x^2(t_f) + \int_0^{t_f}\big[D_{u1}x^2(t) + D_{u2}y^2(t) + R_{uv}v^2(t)\big]\,dt,$$
$$J_v(u,v) = C_{v1}x^2(t_f) + \int_0^{t_f}\big[D_{v1}x^2(t) + D_{v3}y^2(t) + R_{vv}v^2(t)\big]\,dt,$$
where the player with the scalar control $u(t)$ is a pursuer, while the player with the scalar control $v(t)$ is an evader; the scalar state variables $x(t)$ and $y(t)$ are the relative lateral separation and the relative lateral velocity of the players; the controls $u(t)$ and $v(t)$ are the lateral accelerations of the players. Moreover, all the coefficients in the game (95)–(97) are constant, and $C_{u1}>0$, $D_{u1}\ge 0$, $D_{u2}>0$, $R_{uv}\le 0$, $C_{v1}<0$, $D_{v1}<0$, $D_{v3}\le 0$, $R_{vv}>0$. As in the general case of the SDG, both players aim to minimize their own functionals.
Remark 10.
Note that if C v 1 = C u 1 , D v 1 = D u 1 , D v 3 = D u 2 , R v v = R u v , the non-zero-sum game (95)–(97) becomes the singular zero-sum game considered in Reference [7].
In the remainder of this example, we analyze the case where
$$D_{u1} = 0, \qquad R_{uv} = 0, \qquad D_{v3} = 0.$$
This case, which is reasonable from the applications viewpoint, allows an uncomplicated analytical study of the game.
Subject to (98), the terminal-value problem (58), (60) becomes:
$$\frac{dK^o_{u1,0}(t)}{dt} = D^{-1}_{u2}\big(K^o_{u1,0}(t)\big)^2, \quad t\in[0,t_f], \quad K^o_{u1,0}(t_f) = C_{u1},$$
$$\frac{dK^o_{v1,0}(t)}{dt} = 2D^{-1}_{u2}K^o_{u1,0}(t)K^o_{v1,0}(t) - D_{v1}, \quad t\in[0,t_f], \quad K^o_{v1,0}(t_f) = C_{v1}.$$
This problem has the unique solution
$$K^o_{u1,0}(t) = \frac{D_{u2}}{t_f - t + D_{u2}/C_{u1}}, \quad t\in[0,t_f],$$
$$K^o_{v1,0}(t) = \frac{(D_{u2}/C_{u1})^2 C_{v1} - \tfrac{1}{3}(D_{u2}/C_{u1})^3 D_{v1}}{(t_f - t + D_{u2}/C_{u1})^2} + \tfrac{1}{3}D_{v1}\big(t_f - t + D_{u2}/C_{u1}\big), \quad t\in[0,t_f].$$
Using Equations (53), (56), (99), we obtain
$$K^o_{u2,0}(t) = \frac{D^{1/2}_{u2}}{t_f - t + D_{u2}/C_{u1}}, \qquad K^o_{u3,0}(t) = D^{1/2}_{u2}, \quad t\in[0,t_f].$$
Now, using Equations (99) and (100), Remarks 8 and 9, and Theorem 1, and taking into account that $B_{v1}(t)=0$ for all $t\in[0,t_f]$, we design the Nash equilibrium sequence $\big(u^*_{\varepsilon_k,0}(z,t),\,v^*_r(x,t)\big)$, $(k=1,2,\ldots)$, in the game (95)–(97), (98), where
$$u^*_{\varepsilon_k,0}(z,t) = -\frac{1}{\varepsilon_k}\left[\frac{D^{1/2}_{u2}\,x}{t_f - t + D_{u2}/C_{u1}} + D^{1/2}_{u2}\,y\right], \quad z=\mathrm{col}(x,y), \qquad v^*_r(x,t) = 0,$$
ε k > 0 and lim k + ε k = 0 . Moreover, by virtue of Theorem 1, the optimal values of the functionals in this game are
$$J^*_u = \frac{D_{u2}\,x_0^2}{t_f + D_{u2}/C_{u1}}, \qquad J^*_v = \left[\frac{(D_{u2}/C_{u1})^2 C_{v1} - \tfrac{1}{3}(D_{u2}/C_{u1})^3 D_{v1}}{(t_f + D_{u2}/C_{u1})^2} + \tfrac{1}{3}D_{v1}\big(t_f + D_{u2}/C_{u1}\big)\right]x_0^2.$$
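The closed-form solution (99) can be cross-checked numerically. The sketch below (illustrative only; the sample constants $D_{u2}=1$, $C_{u1}=1$, $C_{v1}=-1$, $D_{v1}=-1$, $t_f=4$ are chosen here, not taken from the paper) integrates the terminal-value problem backward with RK4 and compares the result with the closed-form expressions at $t=0$:

```python
# Numerical check of Example 2's closed-form Riccati solutions (99) for
# illustrative sample constants: D_u2=1, C_u1=1, C_v1=-1, D_v1=-1, t_f=4.

Du2, Cu1, Cv1, Dv1, tf = 1.0, 1.0, -1.0, -1.0, 4.0

def tau(t):                                # t_f - t + D_u2/C_u1
    return tf - t + Du2/Cu1

def Ku_exact(t):
    return Du2 / tau(t)

def Kv_exact(t):
    A = (Du2/Cu1)**2 * Cv1 - (Du2/Cu1)**3 * Dv1 / 3
    return A / tau(t)**2 + Dv1 * tau(t) / 3

def rhs(K):
    Ku, Kv = K
    return (Ku**2 / Du2, 2*Ku*Kv/Du2 - Dv1)

# backward RK4 from the terminal conditions K_u(t_f)=C_u1, K_v(t_f)=C_v1
K = [Cu1, Cv1]
steps = 4000
h = -tf / steps
for _ in range(steps):
    k1 = rhs(K)
    k2 = rhs([K[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = rhs([K[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = rhs([K[i] + h*k3[i] for i in range(2)])
    K = [K[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6 for i in range(2)]

assert abs(K[0] - Ku_exact(0.0)) < 1e-6
assert abs(K[1] - Kv_exact(0.0)) < 1e-6
# the optimal values then follow as J_u* = K_u(0) x_0^2, J_v* = K_v(0) x_0^2
```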

10. Concluding Remarks

CR1. In this paper, a finite-horizon two-person linear-quadratic Nash equilibrium differential game was studied. The game is singular because the weight matrices of the control costs of one player (the "singular" player) are singular in the functionals of both players. These singular weight matrices are positive semi-definite but non-zero. The weight matrix of the control cost of the other player (the "regular" player) in its own functional is positive definite.
CR2. Subject to proper assumptions, the dynamics of this game was transformed into an equivalent system consisting of three modes. The first mode is controlled directly only by the "regular" player. The second mode is controlled directly by the "regular" player and by the nonsingular control coordinates of the "singular" player. The third mode is controlled directly by the entire controls of both players. Due to this transformation, the initially formulated game was converted into an equivalent Nash equilibrium game. The new game, while also singular, is simpler than the initially formulated one. Therefore, the new game was treated as the original one.
CR3. For this game, a novel notion of the Nash equilibrium (the Nash equilibrium sequence) was proposed. To derive the Nash equilibrium sequence in the original singular game, the regularization method was applied. This method consists in replacing the original singular game with a regular Nash equilibrium game depending on a small parameter ε > 0; setting formally ε = 0 recovers the original singular game. It should be noted that the regularization method has been widely applied in the literature to the analysis and solution of singular optimal control problems, singular H∞ control problems, and zero-sum differential games. However, in the present paper, this method was applied for the first time to a rigorous and detailed analysis and solution of the general singular linear-quadratic Nash equilibrium differential game.
CR4. The regularized game is a partial cheap control game. Complete and partial cheap control problems have been widely studied in the literature in the settings of optimal control, H∞ control, and zero-sum differential games. Non-zero-sum differential games with a complete cheap control of one player have also been considered in the literature, although in only a few works. However, in the present paper, a non-zero-sum differential game with a partial cheap control of at least one player was analyzed for the first time.
CR5. Solvability conditions of the regularized (partial cheap control) game depend on the small parameter ε , which allowed us to analyze these conditions asymptotically with respect to ε . Using this analysis, the Nash equilibrium sequence in the original singular game was designed, and the expressions for the optimal values of the functionals were obtained.
CR6. It was established that the construction of the Nash equilibrium sequence in the original singular game and the derivation of the optimal values of its functionals are based on the solution of a lower-dimensional regular Nash equilibrium differential game (the reduced game). Namely, to solve the original singular game, one has to solve the lower-dimensional regular game and to calculate two additional gain matrices by explicit formulas.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Lemma 1

We start with the proof of the first lemma’s statement.
Using Definitions 1, 2, and 3 and Corollary 1, we directly obtain the following inclusions:
$$\Big(F^*_{u,k}\big(R_u(t)z,t\big),\,F^*_v\big(R_u(t)z,t\big)\Big) \in (U\times V)_z \quad \forall k\in\{1,2,\ldots\}.$$
Let k { 1 , 2 , } be any given number. By Z F , k * ( t ; Z 0 ) , t [ 0 , t f ] , we denote the unique solution of the initial-value problem (1) generated by the players’ controls u ( t ) = F u , k * ( Z , t ) , v ( t ) = F v * ( Z , t ) . By z G , k * ( t ; z 0 ) , t [ 0 , t f ] , we denote the unique solution of the initial-value problem (11) generated by the players’ controls
$$u(t) = G^*_{u,k}(z,t) = F^*_{u,k}\big(R_u(t)z,t\big), \qquad v(t) = G^*_v(z,t) = F^*_v\big(R_u(t)z,t\big).$$
Thus, by virtue of Corollary 1, we have Z F , k * ( t ; Z 0 ) = R u ( t ) z G , k * ( t ; z 0 ) , t [ 0 , t f ] .
Due to Definition 2 and Equations (2) and (3), the following limits exist and are finite:
$$J^*_u = \lim_{k\to+\infty}\Big\{\big(Z^*_{F,k}(t_f;Z_0)\big)^T C_u Z^*_{F,k}(t_f;Z_0) + \int_0^{t_f}\Big[\big(Z^*_{F,k}(t;Z_0)\big)^T D_u(t)Z^*_{F,k}(t;Z_0) + \big(F^*_{u,k}(Z^*_{F,k}(t;Z_0),t)\big)^T R_{uu}(t)F^*_{u,k}\big(Z^*_{F,k}(t;Z_0),t\big) + \big(F^*_v(Z^*_{F,k}(t;Z_0),t)\big)^T R_{uv}(t)F^*_v\big(Z^*_{F,k}(t;Z_0),t\big)\Big]dt\Big\},$$
$$J^*_v = \lim_{k\to+\infty}\Big\{\big(Z^*_{F,k}(t_f;Z_0)\big)^T C_v Z^*_{F,k}(t_f;Z_0) + \int_0^{t_f}\Big[\big(Z^*_{F,k}(t;Z_0)\big)^T D_v(t)Z^*_{F,k}(t;Z_0) + \big(F^*_v(Z^*_{F,k}(t;Z_0),t)\big)^T R_{vv}(t)F^*_v\big(Z^*_{F,k}(t;Z_0),t\big) + \big(F^*_{u,k}(Z^*_{F,k}(t;Z_0),t)\big)^T R_{vu}(t)F^*_{u,k}\big(Z^*_{F,k}(t;Z_0),t\big)\Big]dt\Big\}.$$
Substituting $Z^*_{F,k}(t;Z_0) = R_u(t)z^*_{G,k}(t;z_0)$ into (A3) and (A4) and using (16)–(18) and (A2) yields
$$\lim_{k\to+\infty}\Big\{\big(z^*_{G,k}(t_f;z_0)\big)^T C_u z^*_{G,k}(t_f;z_0) + \int_0^{t_f}\Big[\big(z^*_{G,k}(t;z_0)\big)^T D_u(t)z^*_{G,k}(t;z_0) + \big(G^*_{u,k}(z^*_{G,k}(t;z_0),t)\big)^T R_{uu}(t)G^*_{u,k}\big(z^*_{G,k}(t;z_0),t\big) + \big(G^*_v(z^*_{G,k}(t;z_0),t)\big)^T R_{uv}(t)G^*_v\big(z^*_{G,k}(t;z_0),t\big)\Big]dt\Big\} = J^*_u,$$
$$\lim_{k\to+\infty}\Big\{\big(z^*_{G,k}(t_f;z_0)\big)^T C_v z^*_{G,k}(t_f;z_0) + \int_0^{t_f}\Big[\big(z^*_{G,k}(t;z_0)\big)^T D_v(t)z^*_{G,k}(t;z_0) + \big(G^*_v(z^*_{G,k}(t;z_0),t)\big)^T R_{vv}(t)G^*_v\big(z^*_{G,k}(t;z_0),t\big) + \big(G^*_{u,k}(z^*_{G,k}(t;z_0),t)\big)^T R_{vu}(t)G^*_{u,k}\big(z^*_{G,k}(t;z_0),t\big)\Big]dt\Big\} = J^*_v.$$
For any given $G_u(z,t)\in K_u\big(G^*_v(z,t)\big)$, let $\tilde z^*_G(t;z_0)$, $t\in[0,t_f]$, be the unique solution of the problem (11) generated by the controls $u(t)=G_u(z,t)$ and $v(t)=G^*_v(z,t)$. Similarly, for any given $k\in\{1,2,\ldots\}$ and any given $G_v(z,t)\in N^*_v = \bigcap_{k=1}^{+\infty}K_v\big(G^*_{u,k}(z,t)\big)$, let $\hat z^*_{G,k}(t;z_0)$, $t\in[0,t_f]$, be the unique solution of the problem (11) generated by the controls $u(t)=G^*_{u,k}(z,t)$ and $v(t)=G_v(z,t)$. Then, using Equations (5)–(6), (20)–(21), and (A2), as well as the expression for the set $M^*_v$ (see Definition 2) and Corollary 1, we obtain
$$F_u(Z,t) = G_u\big(R_u^{-1}(t)Z,t\big) \in E_u\big(F^*_v(Z,t)\big), \qquad F_v(Z,t) = G_v\big(R_u^{-1}(t)Z,t\big) \in M^*_v,$$
$$\tilde z^*_G(t;z_0) = R_u^{-1}(t)\tilde Z^*_F(t;Z_0), \quad t\in[0,t_f],$$
$$\hat z^*_{G,k}(t;z_0) = R_u^{-1}(t)\hat Z^*_{F,k}(t;Z_0), \quad k=1,2,\ldots, \quad t\in[0,t_f].$$
In (A8), $\tilde Z^*_F(t;Z_0)$ is the unique solution of the problem (1) generated by the controls $u(t)=F_u(Z,t)$, $v(t)=F^*_v(Z,t)$. In (A9), $\hat Z^*_{F,k}(t;Z_0)$ is the unique solution of the problem (1) generated by the controls $u(t)=F^*_{u,k}(Z,t)$, $v(t)=F_v(Z,t)$.
Using the aforementioned definitions of Z ˜ F * ( t ; Z 0 ) and Z ^ F , k * ( t ; Z 0 ) , as well as Definition 2 and Equations (16)–(18), (A2)–(A4), and (A7)–(A9), we obtain the following inequalities:
$$J^*_u \le \big(\tilde z^*_G(t_f;z_0)\big)^T C_u\,\tilde z^*_G(t_f;z_0) + \int_0^{t_f}\Big[\big(\tilde z^*_G(t;z_0)\big)^T D_u(t)\tilde z^*_G(t;z_0) + \big(G_u(\tilde z^*_G(t;z_0),t)\big)^T R_{uu}(t)G_u\big(\tilde z^*_G(t;z_0),t\big) + \big(G^*_v(\tilde z^*_G(t;z_0),t)\big)^T R_{uv}(t)G^*_v\big(\tilde z^*_G(t;z_0),t\big)\Big]dt,$$
$$J^*_v \le \liminf_{k\to+\infty}\Big\{\big(\hat z^*_{G,k}(t_f;z_0)\big)^T C_v\,\hat z^*_{G,k}(t_f;z_0) + \int_0^{t_f}\Big[\big(\hat z^*_{G,k}(t;z_0)\big)^T D_v(t)\hat z^*_{G,k}(t;z_0) + \big(G_v(\hat z^*_{G,k}(t;z_0),t)\big)^T R_{vv}(t)G_v\big(\hat z^*_{G,k}(t;z_0),t\big) + \big(G^*_{u,k}(\hat z^*_{G,k}(t;z_0),t)\big)^T R_{vu}(t)G^*_{u,k}\big(\hat z^*_{G,k}(t;z_0),t\big)\Big]dt\Big\}.$$
These inequalities, along with Equations (12), (13), (A1), and (A2) and the equalities (A5) and (A6), directly imply the fulfillment of all the items of Definition 4 for the sequence of the pairs $\big\{\big(G^*_{u,k}(z,t),\,G^*_v(z,t)\big)\big\}_{k=1}^{+\infty} = \big\{\big(F^*_{u,k}(R_u(t)z,t),\,F^*_v(R_u(t)z,t)\big)\big\}_{k=1}^{+\infty}$. This completes the proof of the first statement of the lemma. The second statement is proven similarly.

Appendix B. Proof of Lemma 2

The proof of the lemma is based on the results of Reference [33] (see Section 2.1, Theorem 2.2). To use these results, we convert the terminal-value problem (37)–(42), (43) with respect to the unknown matrix-valued functions K i j ( t ) , ( i = u , v ) , ( j = 1 , 2 , 3 ) to the equivalent terminal-value problem with respect to the unknown vector-valued functions
K i j ( t ) = vec K i j ( t ) , i = u , v , j = 1 , 2 , 3 .
Let us denote the right-hand sides of Equations (37)–(42) as:
F u 1 K u 1 ( t ) , K u 2 ( t ) , K v 1 ( t ) , K v 2 ( t ) , t , ε ,
F u 2 K u 1 ( t ) , K u 2 ( t ) , K u 3 ( t ) , K v 1 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε ,
F u 3 K u 2 ( t ) , K u 3 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε ,
F v 1 K u 1 ( t ) , K u 2 ( t ) , K v 1 ( t ) , K v 2 ( t ) , t , ε ,
F v 2 K u 1 ( t ) , K u 2 ( t ) , K u 3 ( t ) , K v 1 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε ,
F v 3 K u 2 ( t ) , K u 3 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε .
In addition, let us introduce into consideration the following vectors:
K 1 ( t ) = col K u 1 ( t ) , K v 1 ( t ) , K 12 ( t ) = col K u 1 ( t ) , K u 2 ( t ) , K v 1 ( t ) , K v 2 ( t ) , K 23 ( t ) = col K u 2 ( t ) , K u 3 ( t ) , K v 2 ( t ) , K v 3 ( t ) , K 123 ( t ) = col K u 1 ( t ) , K u 2 ( t ) , K u 3 ( t ) , K v 1 ( t ) , K v 2 ( t ) , K v 3 ( t ) .
Converting the matrices F i j ( · ) , ( i = u , v ) , ( j = 1 , 2 , 3 ) to vector form, and using (A10) and (A11), we obtain the vector-valued functions depending on the vectors K 12 ( t ) , K 23 ( t ) , K 123 ( t ) , as well as on t and ε
G u 1 K 12 ( t ) , t , ε = vec F u 1 K u 1 ( t ) , K u 2 ( t ) , K v 1 ( t ) , K v 2 ( t ) , t , ε , G u 2 K 123 ( t ) , t , ε = vec F u 2 K u 1 ( t ) , K u 2 ( t ) , K u 3 ( t ) , K v 1 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε , G u 3 K 23 ( t ) , t , ε = vec F u 3 K u 2 ( t ) , K u 3 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε , G v 1 K 12 ( t ) , t , ε = vec F v 1 K u 1 ( t ) , K u 2 ( t ) , K v 1 ( t ) , K v 2 ( t ) , t , ε , G v 2 K 123 ( t ) , t , ε = vec F v 2 K u 1 ( t ) , K u 2 ( t ) , K u 3 ( t ) , K v 1 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε , G v 3 K 23 ( t ) , t , ε = vec F v 3 K u 2 ( t ) , K u 3 ( t ) , K v 2 ( t ) , K v 3 ( t ) , t , ε .
Based on (A12), we construct the following vector-valued functions:
$$G_1\big(K_{12}(t),t,\varepsilon\big) = \mathrm{col}\Big(G_{u1}\big(K_{12}(t),t,\varepsilon\big),\,G_{v1}\big(K_{12}(t),t,\varepsilon\big)\Big),$$
$$G_{23}\big(K_{123}(t),t,\varepsilon\big) = \mathrm{col}\Big(G_{u2}\big(K_{123}(t),t,\varepsilon\big),\,G_{u3}\big(K_{23}(t),t,\varepsilon\big),\,G_{v2}\big(K_{123}(t),t,\varepsilon\big),\,G_{v3}\big(K_{23}(t),t,\varepsilon\big)\Big).$$
Now, using these vector-valued functions and the vectors in (A10) and (A11), we can convert the terminal-value problem (37)–(42), (43) to the following equivalent form:
$$\frac{dK_1(t)}{dt} = G_1\big(K_{12}(t),t,\varepsilon\big), \quad t\in[0,t_f], \quad K_1(t_f) = \mathrm{col}\big(\mathrm{vec}(C_{u1}),\,\mathrm{vec}(C_{v1})\big),$$
$$\varepsilon\frac{dK_{23}(t)}{dt} = G_{23}\big(K_{123}(t),t,\varepsilon\big), \quad t\in[0,t_f], \quad K_{23}(t_f) = 0,$$
where 0 in the terminal condition for K 23 ( t ) means the zero vector of the dimension 2 n ( r q ) .
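The conversion of the matrix Riccati equations into vector form rests on the standard identity $\mathrm{vec}(AXB) = (B^T\otimes A)\,\mathrm{vec}(X)$ for the column-stacking vec operation. The pure-Python sketch below (an illustration, not part of the proof) checks this identity on arbitrary $2\times 2$ matrices:

```python
# Check of the vec/Kronecker identity vec(A X B) = (B^T kron A) vec(X)
# with column-stacking vec, on 2x2 matrices.

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def vec(X):                     # stack the columns of X
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

def kron(A, B):                 # standard Kronecker product
    return [[A[i][j]*B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def matvec(M, x):
    return [sum(M[i][j]*x[j] for j in range(len(x))) for i in range(len(M))]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [-1.0, 2.0]]
X = [[2.0, -1.0], [0.5, 3.0]]

left = vec(matmul(matmul(A, X), B))
right = matvec(kron(transpose(B), A), vec(X))
assert all(abs(left[i] - right[i]) < 1e-12 for i in range(4))
```

This is exactly the mechanism by which each matrix-valued terminal-value problem above becomes a vector-valued one.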
Let us introduce the following vectors:
K i j , 0 o ( t ) = vec K i j , 0 o ( t ) , i = u , v , j = 1 , 2 , 3 , K 123 , 0 o ( t ) = col K u 1 , 0 o ( t ) , K u 2 , 0 o ( t ) , K u 3 , 0 o ( t ) , K v 1 , 0 o ( t ) , K v 2 , 0 o ( t ) , K v 3 , 0 o ( t ) .
Now, based on the aforementioned results of Reference [33], and taking into account the fact that the problem (A13) and (A14) is a terminal-value problem, we can conclude the following. To prove the lemma, it is sufficient to show that the real parts of all the eigenvalues $\lambda_k(t)$, $k=1,\ldots,2n(r-q)$, of the matrix
$$M(t) = \left.\frac{\partial G_{23}\big(K_{123}(t),t,\varepsilon\big)}{\partial K_{23}(t)}\right|_{K_{123}(t)=K^o_{123,0}(t),\;\varepsilon=0}$$
are positive for all $t\in[0,t_f]$. The matrix $M(t)$ is of dimension $2n(r-q)\times 2n(r-q)$. Calculating this matrix, and using Equation (53), the equality $S_{u3}(t,0)=I_{r-q}$, and the symmetry of the matrices $\big(D_{u2}(t)\big)^{1/2}$ and $K^o_{v3,0}(t)$, we obtain
$$M(t) = \begin{pmatrix} N_1(t) & K^o_{u2,0}(t)\otimes I_{r-q} & 0 & 0 \\ 0 & N_2(t) & 0 & 0 \\ I_{n-r+q}\otimes K^o_{v3,0}(t) & K^o_{v2,0}(t)\otimes I_{r-q} & N_1(t) & K^o_{u2,0}(t)\otimes I_{r-q} \\ 0 & I_{r-q}\otimes K^o_{v3,0}(t) & 0 & N_2(t) \end{pmatrix},$$
where $N_1(t) = I_{n-r+q}\otimes\big(D_{u2}(t)\big)^{1/2}$, $N_2(t) = I_{r-q}\otimes\big(D_{u2}(t)\big)^{1/2} + \big(D_{u2}(t)\big)^{1/2}\otimes I_{r-q}$.
Due to the structure of the matrix $M(t)$, the set of its eigenvalues consists of all the eigenvalues of the matrices $N_1(t)$ and $N_2(t)$ with the corresponding algebraic multiplicities. Due to the eigenvalue property of the Kronecker product of two matrices (see, e.g., Ref. [37]), the set of the eigenvalues of $N_1(t)$ consists of all the eigenvalues of the matrix $\big(D_{u2}(t)\big)^{1/2}$ with the corresponding algebraic multiplicities, i.e., all the eigenvalues of $N_1(t)$ are real and positive for all $t\in[0,t_f]$. Similarly, the sets of eigenvalues of both addends in the expression for $N_2(t)$ consist of all the eigenvalues of $\big(D_{u2}(t)\big)^{1/2}$ with the corresponding algebraic multiplicities; thus, all the eigenvalues of these addends are real and positive for all $t\in[0,t_f]$. Moreover, due to the symmetry of $\big(D_{u2}(t)\big)^{1/2}$, both addends in the expression for $N_2(t)$ are symmetric matrices. Therefore, by virtue of the results of Reference [38], all the eigenvalues of $N_2(t)$ are real and positive. Hence, all the eigenvalues of the matrix $M(t)$ are real and positive for all $t\in[0,t_f]$. This completes the proof of the lemma.
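The Kronecker eigenvalue facts used in this argument can be illustrated concretely: if $Au=au$ and $Av=bv$, then $u\otimes v$ is an eigenvector of $I\otimes A + A\otimes I$ with eigenvalue $a+b$, so for a symmetric positive definite $A$ all such eigenvalues are real and positive. A pure-Python sketch (illustrative only, with a $2\times 2$ matrix chosen here):

```python
# Eigenvalues of a Kronecker sum: (I kron A + A kron I)(u kron v) = (a+b)(u kron v)
# for eigenpairs (a,u), (b,v) of A. Demonstrated with A = [[2,1],[1,2]] (eigs 1, 3).

def kron(A, B):
    return [[A[i][j]*B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matvec(M, x):
    return [sum(M[i][j]*x[j] for j in range(len(x))) for i in range(len(M))]

A = [[2.0, 1.0], [1.0, 2.0]]          # symmetric positive definite
I2 = [[1.0, 0.0], [0.0, 1.0]]
u, a = [1.0, 1.0], 3.0                # A u = 3 u
v, b = [1.0, -1.0], 1.0               # A v = 1 v

KA, AK = kron(I2, A), kron(A, I2)
N2 = [[KA[i][j] + AK[i][j] for j in range(4)] for i in range(4)]   # Kronecker sum
w = [u[i]*v[j] for i in range(2) for j in range(2)]                # u kron v

lhs = matvec(N2, w)
assert all(abs(lhs[i] - (a + b)*w[i]) < 1e-12 for i in range(4))   # eigenvalue a+b = 4
```

All four eigenvalues of this Kronecker sum are sums $a_i+a_j$ of eigenvalues of $A$, hence positive, mirroring the positivity argument for $N_2(t)$.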

Appendix C. Proof of Lemma 4

The proof consists of four stages.
Stage 1: Expressions for $J_u\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big)$ and $J_v\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big)$.
Using the expressions for $S_{uv}(t)$ and $S_v(t)$ (see Equation (27)), and the block representations of the matrices $R_{uu}(t)$, $R_{vu}(t)$, $S_u(t,\varepsilon)$, $A(t)$, $S_v(t)$, $S_{uv}(t)$, and $\tilde B$ (see Equations (4), (30), (35), (36), and (83)), we can represent the values $J_u\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big)$ and $J_v\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big)$ as:
$$J_u\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big) = z^T(t_f,\varepsilon)C_u z(t_f,\varepsilon) + \int_0^{t_f} z^T(t,\varepsilon)Q_u(t)z(t,\varepsilon)\,dt,$$
$$J_v\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big) = z^T(t_f,\varepsilon)C_v z(t_f,\varepsilon) + \int_0^{t_f} z^T(t,\varepsilon)Q_v(t)z(t,\varepsilon)\,dt,$$
where the n × n -matrix-valued functions Q u ( t ) and Q v ( t ) have the form
$$Q_u(t) = \begin{pmatrix} D_{u1}(t) + K^o_{u1,0}(t)S_{u1}(t)K^o_{u1,0}(t) + K^o_{v1,0}(t)S_{uv1}(t)K^o_{v1,0}(t) & 0 \\ 0 & D_{u2}(t) \end{pmatrix},$$
$$Q_v(t) = \begin{pmatrix} D_{v1}(t) + K^o_{v1,0}(t)S_{v1}(t)K^o_{v1,0}(t) + K^o_{u1,0}(t)S_{vu1}(t)K^o_{u1,0}(t) & D_{v2}(t) \\ D^T_{v2}(t) & D_{v3}(t) \end{pmatrix},$$
the vector-valued function z ( t , ε ) is the solution of the following initial-value problem:
$$\frac{dz(t)}{dt} = H(t,\varepsilon)z(t), \quad t\in[0,t_f], \quad z(0)=z_0, \qquad H(t,\varepsilon) = \begin{pmatrix} H_1(t) & H_2(t) \\ (1/\varepsilon)H_3(t,\varepsilon) & (1/\varepsilon)H_4(t,\varepsilon) \end{pmatrix},$$
$$H_1(t) = A_1(t) - S_{u1}(t)K^o_{u1,0}(t) - S_{v1}(t)K^o_{v1,0}(t), \qquad H_2(t) = A_2(t),$$
$$H_3(t,\varepsilon) = \varepsilon A_3(t) - \varepsilon S^T_{u2}(t)K^o_{u1,0}(t) - \big(K^o_{u2,0}(t)\big)^T - \varepsilon S^T_{v2}(t)K^o_{v1,0}(t), \qquad H_4(t,\varepsilon) = \varepsilon A_4(t) - K^o_{u3,0}(t).$$
Stage 2: Expanded expressions for J u , 0 * and J v , 0 * .
Due to Proposition 3, J u , 0 * and J v , 0 * are the optimal values of the functionals in the RDG (84)–(86). Hence, using the Equations (83) and (87) and taking into account the Equations (27), (31), (33), (36), (59) and (61), we obtain, after a routine matrix algebra, the following expanded expressions for J u , 0 * and J v , 0 * :
$$J^*_{u,0} = J_{ur}\big(u^*_r(x_r,t),v^*_r(x_r,t)\big) = \big(x^*_r(t_f)\big)^T C_{u1}x^*_r(t_f) + \int_0^{t_f}\big(x^*_r(t)\big)^T Q_{ur}(t)x^*_r(t)\,dt,$$
$$J^*_{v,0} = J_{vr}\big(u^*_r(x_r,t),v^*_r(x_r,t)\big) = \big(x^*_r(t_f)\big)^T C_{v1}x^*_r(t_f) + \int_0^{t_f}\big(x^*_r(t)\big)^T Q_{vr}(t)x^*_r(t)\,dt,$$
$$Q_{ur}(t) = D_{u1}(t) + K^o_{u1,0}(t)S_{u,0}(t)K^o_{u1,0}(t) + K^o_{v1,0}(t)S_{uv1}(t)K^o_{v1,0}(t),$$
$$Q_{vr}(t) = D_{v1}(t) + K^o_{v1,0}(t)S_{v1}(t)K^o_{v1,0}(t) + K^o_{u1,0}(t)S_{vu,0}(t)K^o_{u1,0}(t) - 2D_{v2}(t)D^{-1}_{u2}(t)A^T_2(t)K^o_{u1,0}(t),$$
and the vector-valued function x r * ( t ) , t [ 0 , t f ] is the solution of the initial-value problem
$$\frac{dx_r(t)}{dt} = A_r(t)x_r(t), \quad t\in[0,t_f], \quad x_r(0)=x_0, \qquad A_r(t) = A_1(t) - S_{u,0}(t)K^o_{u1,0}(t) - S_{v1}(t)K^o_{v1,0}(t).$$
Stage 3: Asymptotic analysis of the problem (A18).
Let us represent the vector-valued function z ( t ) in the block form as:
$$z(t) = \mathrm{col}\big(x(t),\,y(t)\big), \quad x(t)\in E^{n-r+q}, \quad y(t)\in E^{r-q}, \quad t\in[0,t_f].$$
Using this block form of $z(t)$ and the block form of the vector $z_0$ (see Equation (81)), we can rewrite the problem (A18) in the explicit singular perturbation form
$$\frac{dx(t)}{dt} = \big[A_1(t) - S_{u1}(t)K^o_{u1,0}(t) - S_{v1}(t)K^o_{v1,0}(t)\big]x(t) + A_2(t)y(t), \quad x(0)=x_0,$$
$$\varepsilon\frac{dy(t)}{dt} = \big[\varepsilon A_3(t) - \varepsilon S^T_{u2}(t)K^o_{u1,0}(t) - \big(K^o_{u2,0}(t)\big)^T - \varepsilon S^T_{v2}(t)K^o_{v1,0}(t)\big]x(t) + \big[\varepsilon A_4(t) - K^o_{u3,0}(t)\big]y(t), \quad y(0)=y_0.$$
Let us represent the solution of the problem (A18) in the block form as:
$$z(t,\varepsilon) = \mathrm{col}\big(x(t,\varepsilon),\,y(t,\varepsilon)\big), \quad x(t,\varepsilon)\in E^{n-r+q}, \quad y(t,\varepsilon)\in E^{r-q}, \quad t\in[0,t_f].$$
Hence, x ( t , ε ) and y ( t , ε ) , t [ 0 , t f ] are the corresponding components of the solution to the problem (A21). Using the results of Reference [33] (see Section 2.1, Theorem 2.2), and taking into account the positive definiteness of the matrix K u 3 , 0 o ( t ) for all t [ 0 , t f ] , we obtain the following asymptotic representations of x ( t , ε ) and y ( t , ε ) for all t [ 0 , t f ] , ε ( 0 , ε 1 ] :
$$x(t,\varepsilon) = x^o_0(t) + O_x(t,\varepsilon), \qquad y(t,\varepsilon) = y^o_0(t) + y^b_0(t,\varepsilon) + O_y(t,\varepsilon),$$
where 0 < ε 1 ε 0 is some positive number; the vector-valued function x 0 o ( t ) is the solution of the initial-value problem
$$\frac{dx^o_0(t)}{dt} = A_0(t)x^o_0(t), \quad t\in[0,t_f], \quad x^o_0(0)=x_0,$$
$$A_0(t) = A_1(t) - S_{u1}(t)K^o_{u1,0}(t) - S_{v1}(t)K^o_{v1,0}(t) - A_2(t)\big(K^o_{u3,0}(t)\big)^{-1}\big(K^o_{u2,0}(t)\big)^T;$$
$$y^o_0(t) = -\big(K^o_{u3,0}(t)\big)^{-1}\big(K^o_{u2,0}(t)\big)^T x^o_0(t), \qquad y^b_0(t,\varepsilon) = \exp\Big(-K^o_{u3,0}(0)\frac{t}{\varepsilon}\Big)\big(y_0 - y^o_0(0)\big);$$
the vector-valued functions O x ( t , ε ) and O y ( t , ε ) satisfy the inequality
$$\max\big\{\|O_x(t,\varepsilon)\|,\,\|O_y(t,\varepsilon)\|\big\} \le a\varepsilon, \quad t\in[0,t_f], \quad \varepsilon\in(0,\varepsilon_1],$$
and a > 0 is some constant independent of ε .
Substituting (53) and (56) into the expression for A 0 ( t ) and taking into account (59), we obtain that A 0 ( t ) coincides with A r ( t ) given in (A20). Therefore, the solution of the problem (A24) coincides with the solution of the problem (A20)
$$x^o_0(t) = x^*_r(t), \quad t\in[0,t_f].$$
Similarly, substitution of (53) and (56) into the expression for $y^o_0(t)$ and use of (A27) yield $y^o_0(t) = -D^{-1}_{u2}(t)A^T_2(t)K^o_{u1,0}(t)x^*_r(t)$, $t\in[0,t_f]$. Thus,
$$z^o_0(t) = \mathrm{col}\big(x^o_0(t),\,y^o_0(t)\big) = P_0(t)x^*_r(t), \quad t\in[0,t_f], \qquad P_0(t) = \begin{pmatrix} I_{n-r+q} \\ -D^{-1}_{u2}(t)A^T_2(t)K^o_{u1,0}(t) \end{pmatrix}.$$
Finally, taking into account (53) and the positive definiteness of the matrix D u 2 ( 0 ) 1 / 2 , we have the estimate of y 0 b ( t , ε ) given in (A25)
$$\big\|y^b_0(t,\varepsilon)\big\| \le c_y\exp\big(-\beta_y t/\varepsilon\big)\big\|y_0 - y^o_0(0)\big\|, \quad t\in[0,t_f], \quad \varepsilon\in(0,\varepsilon_1],$$
where c y > 0 and β y > 0 are some constants independent of ε .
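The slow/fast decomposition of Stage 3 (a regular slow part plus an exponentially decaying boundary layer in the fast variable) can be illustrated on a minimal scalar analogue. The system below is chosen for illustration only and is not the paper's system; it mirrors the structure of (A21) with $K^o_{u2,0}=K^o_{u3,0}=1$:

```python
# Minimal singularly perturbed system: dx/dt = y, eps*dy/dt = -(x + y),
# x(0)=1, y(0)=1. The reduced solution is y ~ -x with dx/dt = -x, plus an
# O(exp(-t/eps)) initial layer in y.

import math

eps = 0.01

def rhs(s):
    x, y = s
    return (y, -(x + y)/eps)

h, T = eps/20.0, 1.0                       # fixed RK4 step resolving the fast scale
s = [1.0, 1.0]
for _ in range(int(T/h)):
    k1 = rhs(s)
    k2 = rhs([s[j] + h/2*k1[j] for j in range(2)])
    k3 = rhs([s[j] + h/2*k2[j] for j in range(2)])
    k4 = rhs([s[j] + h*k3[j] for j in range(2)])
    s = [s[j] + h*(k1[j] + 2*k2[j] + 2*k3[j] + k4[j])/6 for j in range(2)]

x1, y1 = s
assert abs(x1 - math.exp(-1.0)) < 0.1      # slow variable ~ reduced solution, O(eps) off
assert abs(y1 + x1) < 0.1                  # layer has decayed: y ~ -x at t = 1
```

By $t=1$ the boundary-layer term is negligible and the trajectory tracks the reduced solution to within $O(\varepsilon)$, which is the same mechanism behind the representations (A22) and (A23).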
Stage 4: Asymptotic behavior of J u u ε , 0 * ( z , t ) , v r * ( x , t ) and J v u ε , 0 * ( z , t ) , v r * ( x , t ) .
Substituting (A22) and (A23) into the expressions of $J_u\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big)$ and $J_v\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big)$ (see Equation (A16)), and taking into account Equations (16), (A17), (A19), (A27), and (A28) and the inequalities (A26) and (A29), we obtain, after routine algebra,
$$J_u\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big) = J^*_{u,0} + O_u(\varepsilon), \qquad J_v\big(u^*_{\varepsilon,0}(z,t),v^*_r(x,t)\big) = J^*_{v,0} + O_v(\varepsilon),$$
where the values O u ( ε ) and O v ( ε ) satisfy the inequality
$$\max\big\{|O_u(\varepsilon)|,\,|O_v(\varepsilon)|\big\} \le a\varepsilon, \quad \varepsilon\in(0,\varepsilon_1],$$
and a > 0 is some constant independent of ε .
Equation (A30), along with the inequality (A31), immediately yields the limit equalities in (91), which completes the proof of the lemma.

References

  1. Isaacs, R. Differential Games; John Wiley and Sons: New York, NY, USA, 1967.
  2. Bryson, A.E.; Ho, Y.C. Applied Optimal Control; Hemisphere: New York, NY, USA, 1975.
  3. Basar, T.; Olsder, G.J. Dynamic Noncooperative Game Theory; SIAM Books: Philadelphia, PA, USA, 1999.
  4. Starr, A.W.; Ho, Y.C. Nonzero-sum differential games. J. Optim. Theory Appl. 1969, 3, 184–206.
  5. Shinar, J. Solution techniques for realistic pursuit-evasion games. In Advances in Control and Dynamic Systems; Leondes, C., Ed.; Academic Press: New York, NY, USA, 1981; pp. 63–124.
  6. Turetsky, V.; Glizer, V.Y. Robust state-feedback controllability of linear systems to a hyperplane in a class of bounded controls. J. Optim. Theory Appl. 2004, 123, 639–667.
  7. Shinar, J.; Glizer, V.Y.; Turetsky, V. Solution of a singular zero-sum linear-quadratic differential game by regularization. Int. Game Theory Rev. 2014, 16, 1440007-1–1440007-32.
  8. Turetsky, V.; Glizer, V.Y.; Shinar, J. Robust trajectory tracking: Differential game/cheap control approach. Internat. J. Systems Sci. 2014, 45, 2260–2274.
  9. Hamelin, F.M.; Lewis, M.A. A differential game theoretical analysis of mechanistic models for territoriality. J. Math. Biol. 2010, 61, 665–694.
  10. Hu, Y.; Øksendal, B.; Sulem, A. Singular mean-field control games. Stoch. Anal. Appl. 2017, 35, 823–851.
  11. Forouhar, K.; Leondes, C.T. Singular differential game numerical techniques. J. Optim. Theory Appl. 1982, 37, 69–87.
  12. Forouhar, K.; Gibson, S.J.; Leondes, C.T. Singular linear quadratic differential games with bounds on control. J. Optim. Theory Appl. 1983, 41, 341–348.
  13. Stoorvogel, A.A. The singular zero-sum differential game with stability using H∞ control theory. Math. Control Signals Syst. 1991, 4, 121–138.
  14. Amato, F.; Pironti, A. A note on singular zero-sum linear quadratic differential games. In Proceedings of the 33rd IEEE Conference on Decision and Control, Lake Buena Vista, FL, USA, 14–16 December 1994; pp. 1533–1535.
  15. Glizer, V.Y.; Kelis, O. Solution of a zero-sum linear quadratic differential game with singular control cost of minimizer. J. Control Decis. 2015, 2, 155–184.
  16. Glizer, V.Y.; Kelis, O. Upper value of a singular infinite horizon zero-sum linear-quadratic differential game. Pure Appl. Funct. Anal. 2017, 2, 511–534.
  17. Wang, Y.; Wang, L.; Teo, K.L. Necessary and sufficient optimality conditions for regular-singular stochastic differential games with asymmetric information. J. Optim. Theory Appl. 2018, 179, 501–532.
  18. Gibali, A.; Kelis, O. An analytic and numerical investigation of a differential game. Axioms 2021, 10, 66.
  19. Fu, G.X.; Horst, U. Mean field games with singular controls. SIAM J. Control Optim. 2017, 55, 3833–3868.
  20. De Angelis, T.; Ferrari, G. Stochastic nonzero-sum games: A new connection between singular control and optimal stopping. Adv. Appl. Prob. 2018, 50, 347–372.
  21. Dianetti, J.; Ferrari, G. Nonzero-sum submodular monotone-follower games: Existence and approximation of Nash equilibria. SIAM J. Control Optim. 2020, 58, 1257–1288.
  22. Cao, H.; Guo, X.; Lee, J.S. Approximation of mean field games to N-player stochastic games with singular controls. arXiv 2020, arXiv:1703.04437v3.
  23. Wang, X.; Cruz, J.B. Asymptotic ε-Nash equilibrium for 2nd order two-player nonzero-sum singular LQ games with decentralized control. In Proceedings of the 17th IFAC World Congress, Seoul, Korea, 6–11 July 2008; pp. 3970–3975.
  24. Glizer, V.Y. Asymptotic properties of a cheap control infinite horizon Nash differential game. In Proceedings of the 2018 American Control Conference, Milwaukee, WI, USA, 27–29 June 2018; pp. 5768–5773.
  25. Bell, D.J.; Jacobson, D.H. Singular Optimal Control Problems; Academic Press: New York, NY, USA, 1975.
  26. Glizer, V.Y.; Kelis, O. Asymptotic properties of an infinite horizon partial cheap control problem for linear systems with known disturbances. Numer. Algebra Control Optim. 2018, 8, 211–235.
  27. Kurina, G.A. On a degenerate optimal control problem and singular perturbations. Sov. Math. Dokl. 1977, 18, 1452–1456.
  28. Petersen, I.R. Disturbance attenuation and H∞ optimization: A design method based on the algebraic Riccati equation. IEEE Trans. Automat. Control 1987, 32, 427–429.
  29. Glizer, V.Y.; Kelis, O. Finite-horizon H∞ control problem with singular control cost. In Informatics in Control, Automation and Robotics; Gusikhin, O., Madani, K., Eds.; Springer: Cham, Switzerland, 2020; Volume 495, pp. 23–46.
  30. Glizer, V.Y. Nash equilibrium in a singular two-person linear-quadratic differential game: A regularization approach. In Proceedings of the 24th Mediterranean Conference on Control and Automation, Athens, Greece, 21–24 June 2016; pp. 1041–1046.
  31. Glizer, V.Y.; Fridman, L.M.; Turetsky, V. Cheap suboptimal control of an integral sliding mode for uncertain systems with state delays. IEEE Trans. Automat. Control 2007, 52, 1892–1898.
  32. Petersen, I.R. Linear-quadratic differential games with cheap control. Syst. Control Lett. 1986, 8, 181–188.
  33. Vasil'eva, A.B.; Butuzov, V.F.; Kalachev, L.V. The Boundary Function Method for Singular Perturbation Problems; SIAM Books: Philadelphia, PA, USA, 1995.
  34. Gajic, Z.; Qureshi, M.T.J. Lyapunov Matrix Equation in System Stability and Control; Dover Publications: Mineola, NY, USA, 2008.
  35. Derevenskii, V.P. Matrix Bernoulli equations, I. Russian Math. 2008, 52, 12–21.
  36. Abou-Kandil, H.; Freiling, G.; Ionescu, V.; Jank, G. Matrix Riccati Equations in Control and Systems Theory; Birkhäuser: Basel, Switzerland, 2003.
  37. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: Cambridge, UK, 1991.
  38. Bellman, R. Introduction to Matrix Analysis, 2nd ed.; SIAM Books: Philadelphia, PA, USA, 1997.
