Article

Closed-Loop Nash Equilibrium in the Class of Piecewise Constant Strategies in a Linear State Feedback Form for Stochastic LQ Games

by Vasile Drăgan 1,2,†, Ivan Ganchev Ivanov 3,†, Ioan-Lucian Popa 4,*,† and Ovidiu Bagdasar 5,†

1 “Simion Stoilow” Institute of Mathematics, Romanian Academy, P.O. Box 1-764, 014700 Bucharest, Romania
2 The Academy of the Romanian Scientists, Str. Ilfov, 3, 050044 Bucharest, Romania
3 Faculty of Economics and Business Administration, Sofia University St. Kliment Ohridski, 1113 Sofia, Bulgaria
4 Department of Computing, Mathematics and Electronics, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
5 School of Computing and Engineering, University of Derby, Derby DE22 1GB, UK
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2021, 9(21), 2713; https://doi.org/10.3390/math9212713
Submission received: 7 September 2021 / Revised: 15 October 2021 / Accepted: 20 October 2021 / Published: 26 October 2021
(This article belongs to the Special Issue Dynamical Systems in Engineering)

Abstract:
In this paper, we examine a sampled-data Nash equilibrium strategy for a stochastic linear quadratic (LQ) differential game, in which admissible strategies are assumed to be constant on the interval between consecutive measurements. Our solution first involves transforming the problem into a linear stochastic system with finite jumps. This allows us to obtain necessary and sufficient conditions assuring the existence of a sampled-data Nash equilibrium strategy, extending earlier results to a general context with more than two players. Furthermore, we provide a numerical algorithm for calculating the feedback matrices of the Nash equilibrium strategies. Finally, we illustrate the effectiveness of the proposed algorithm on two numerical examples. Both exhibit a stabilization effect, confirming the efficiency of our approach.

1. Introduction

Stochastic control problems governed by Itô’s differential equations have been the subject of intensive research over the last decades. This has generated a rich literature and fundamental results, such as the $H_2$ and LQ robust sampled-data control problems studied under a unified framework in [1,2], classes of uncertain sampled-data systems with random jumping parameters characterized by a finite-state semi-Markov process analysed in [3], and stochastic differential games investigated in [4,5,6,7].
Dynamical games have been used to solve many real-life problems (see, e.g., [8]). The concept of Nash equilibrium is central to dynamical games, where the closed-loop and open-loop equilibrium strategies of controlled systems are of special interest. Various aspects of open-loop Nash equilibria are studied for an LQ differential game in [9], with other results reported in [10,11,12]. In addition, in [13], applications to gas network optimisation are studied via an open-loop sampled-data Nash equilibrium strategy. The framework in which state vector measurements for a class of differential games are available only at discrete times was first studied in [14]. There, a two-player differential game was considered, and necessary conditions for the sampled-data controls were obtained using a backward translation method starting at the last time interval and following the previous state measurements. This case was extended to a stochastic framework in [15], where the players have access to sampled-data state information with a given sampling interval. For other results dealing with closed-loop systems, see, e.g., [16]. Stochastic dynamical games are an important, but more challenging framework. First introduced in [17], stochastic LQ problems have been studied extensively (see [18,19]).
In the present paper, we consider stochastic differential games governed by Itô’s differential equation, with state-multiplicative and control-multiplicative white noise perturbations. The original contributions of this work are the following. First, we analyze the design of a Nash equilibrium strategy in a state feedback form within the class of piecewise constant admissible strategies. It is assumed that the state measurements are available only at some discrete times. The original problem is transformed into an equivalent one, which requires finding existence conditions for a Nash equilibrium strategy in a state feedback form for an LQ stochastic differential game described by a system of Itô differential equations controlled by impulses. Necessary and sufficient conditions for the existence of a Nash equilibrium strategy for the new LQ differential game are obtained based on methods from [20,21]. The feedback matrices of the equilibrium strategies for the original dynamical game are then obtained from the general result using the structure of the matrix coefficients of the system controlled by impulses. Another major contribution of this paper consists of the numerical methods for computing the feedback matrices of the Nash equilibrium strategy.
To our knowledge, few papers in the stochastic framework deal with the problem of sampled-data Nash equilibrium strategies in both open-loop and closed-loop forms ([22,23]); the papers [13,14] mentioned before consider only the deterministic framework. In the deterministic case, the problem of the sampled-data Nash equilibrium strategy can be transformed in a natural way into a problem stated in the discrete-time framework. Such a transformation is not possible when the dynamical system contains state-multiplicative and control-multiplicative white noise perturbations. In [15], the stochastic character is due only to the presence of additive white noise perturbations; in that case, the approach is not essentially different from the one used in the deterministic case.
The paper is organized as follows. In Section 2, we formulate the problem, introducing the L-player Nash equilibrium concept. In Section 2.2, we state an equivalent form of the original problem and introduce a system of matrix linear differential equations with jumps and algebraic constraints, which is involved in the derivation of the feedback matrices of the equilibrium strategies. Then, in Section 2.3, we provide necessary and sufficient conditions which guarantee the existence of a piecewise constant Nash equilibrium strategy. An algorithm implementing these developments is given in Section 3. The efficiency of the proposed algorithm is demonstrated by two numerical examples illustrating the behavior of the optimal trajectories generated by the equilibrium strategy. Section 4 is dedicated to conclusions.

2. Problem Formulation

2.1. Model Description and Problem Setting

Consider the controlled system having the state space representation described by
$$dx(t) = \Big[Ax(t) + \sum_{k=1}^{L} B_k u_k(t)\Big]dt + \Big[Cx(t) + \sum_{k=1}^{L} D_k u_k(t)\Big]dw(t), \quad x(t_0) = x_0, \; t \in [t_0, t_f], \tag{1}$$

where $x(t) \in \mathbb{R}^n$ is the state vector, $L$ is a positive integer, $u_k(t) \in \mathbb{R}^{m_k}$, $k = 1, \dots, L$, are control parameters, and $\{w(t)\}_{t \ge 0}$ is a 1-dimensional standard Wiener process defined on a probability space $(\Omega, \mathcal{F}, \mathcal{P})$.
In the controlled system there are $L$ players ($k = 1, 2, \dots, L$) who change their behavior through their control functions $u_k(\cdot)$, $k = 1, \dots, L$. The matrices of the system $A, C \in \mathbb{R}^{n \times n}$ and the matrices of the players $B_k, D_k \in \mathbb{R}^{n \times m_k}$, $k = 1, \dots, L$, are known. In game theory, the controls $u_k(\cdot)$ are called admissible strategies (or policies) of the players. Different classes of admissible strategies can be defined in various ways, depending on the available information.
Each player aims to minimize its own cost function (performance criterion): for $k = 1, \dots, L$,

$$J_k(t_0, x_0; u_1, \dots, u_L) = \mathbb{E}\Big[x_u^T(t_f) G_k x_u(t_f) + \int_{t_0}^{t_f} \Big(x_u^T(t) M_k x_u(t) + \sum_{j=1}^{L} u_j^T(t) R_{kj} u_j(t)\Big)dt\Big]. \tag{2}$$
We make the following assumption regarding the weight matrices in (2):
**H.** $G_k \ge 0$, $M_k \ge 0$, $R_{kk} > 0$, and $R_{kl} \ge 0$, for $k, l = 1, \dots, L$ and $l \ne k$.
Here we generalize Definition 2.1 given in [23].
Definition 1.
The L-tuple of admissible strategies $(\tilde u_1(\cdot), \tilde u_2(\cdot), \dots, \tilde u_L(\cdot))$ is said to achieve a Nash equilibrium for the differential game described by the controlled system (1), the cost functions (2), and the class of admissible strategies $\mathcal{U} = \mathcal{U}_1 \times \mathcal{U}_2 \times \dots \times \mathcal{U}_L$, if for all $u_k(\cdot) \in \mathcal{U}_k$, $k = 1, \dots, L$, we have

$$J_k(t_0, x_0; \tilde u_1, \tilde u_2, \dots, \tilde u_L) \le J_k(t_0, x_0; \tilde u_1, \tilde u_2, \dots, \tilde u_{k-1}, u_k, \tilde u_{k+1}, \dots, \tilde u_L). \tag{3}$$
In this paper we consider a special class of closed-loop admissible strategies in which the states $x(t)$ of the dynamical system are available for measurement at the discrete times $0 \le t_0 < t_1 < \dots < t_{N-1} < t_N = t_f$, and the set of admissible strategies consists of piecewise constant stochastic processes of the form

$$u_k(t) = F_k(j)x(t_j), \quad t_j \le t < t_{j+1}, \; j = 0, 1, \dots, N-1, \tag{4}$$

where $F_k(j) \in \mathbb{R}^{m_k \times n}$ are arbitrary matrices.
Our aim is to investigate the problem of designing a Nash equilibrium strategy in the class of piecewise constant admissible strategies of type (4) (the closed-loop admissible strategies), for an LQ differential game described by a dynamical system of type (1), under the performance criteria (2). Moreover, we also present a method for the numerical computation of the feedback gains of the equilibrium strategy.
We denote by $\tilde{\mathcal{U}}^{pc} = \tilde{\mathcal{U}}_1^{pc} \times \tilde{\mathcal{U}}_2^{pc} \times \dots \times \tilde{\mathcal{U}}_L^{pc}$ the set of piecewise constant admissible strategies of type (4).
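To make the sampled-data setting concrete, the snippet below is a minimal Euler–Maruyama simulation sketch of system (1) driven by piecewise constant strategies of type (4). It is not part of the paper's method; the function name, interface and sub-step count `n_sub` are our own illustrative choices.

```python
# A minimal Euler-Maruyama sketch (ours, not the authors') of system (1)
# under piecewise constant strategies u_k(t) = F_k(j) x(t_j), cf. (4).
import numpy as np

def simulate_sampled(A, B_list, C, D_list, F_seq, x0, t_grid, n_sub=100, rng=None):
    """dx = (A x + sum_k B_k u_k) dt + (C x + sum_k D_k u_k) dw, scalar w,
    with u_k frozen at F_k(j) x(t_j) on each sampling interval [t_j, t_{j+1})."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for j in range(len(t_grid) - 1):
        u = [F[j] @ x for F in F_seq]               # controls sampled at t_j only
        dt = (t_grid[j + 1] - t_grid[j]) / n_sub
        for _ in range(n_sub):
            drift = A @ x + sum(B @ uk for B, uk in zip(B_list, u))
            diff = C @ x + sum(D @ uk for D, uk in zip(D_list, u))
            x = x + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
            path.append(x.copy())
    return np.array(path)
```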

2.2. The Equivalent Problem

Define $v_k : [t_0, t_f] \to \mathbb{R}^{m_k}$ by $v_k(t) = u_k(j)$, $t_j \le t < t_{j+1}$, $j = 0, 1, \dots, N-1$, where $u_k(j)$ are arbitrary $m_k$-dimensional random vectors with finite second moments. If $x(t)$ is the solution of system (1) determined by the piecewise constant inputs $v_k(\cdot)$, we set $\xi(t) = (x^T(t) \; v_1^T(t) \; \dots \; v_L^T(t))^T \in \mathbb{R}^{n+m}$, $m = \sum_{k=1}^{L} m_k$.
Direct calculations show that $\xi(t)$ is the solution of the initial value problem (IVP) associated with a linear stochastic system with finite jumps, often called a system controlled by impulses:
$$d\xi(t) = \mathbf{A}\xi(t)\,dt + \mathbf{C}\xi(t)\,dw(t), \quad t_j \le t < t_{j+1},$$
$$\xi(t_j^+) = \mathbf{A}_d\,\xi(t_j) + \sum_{k=1}^{L} \mathbf{B}_{dk}\,u_k(j), \quad j = 0, 1, \dots, N-1, \tag{5}$$
$$\xi(t_0) = (x_0^T \; 0^T \; \dots \; 0^T)^T,$$
under the notations:
$$\mathbf{A} = \begin{pmatrix} A & B_1 & B_2 & \cdots & B_L \\ 0_{m \times n} & 0_{m \times m_1} & 0_{m \times m_2} & \cdots & 0_{m \times m_L} \end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix} C & D_1 & D_2 & \cdots & D_L \\ 0_{m \times n} & 0_{m \times m_1} & 0_{m \times m_2} & \cdots & 0_{m \times m_L} \end{pmatrix},$$
$$\mathbf{A}_d = \begin{pmatrix} I_n & 0_{n \times m_1} & 0_{n \times m_2} & \cdots & 0_{n \times m_L} \\ 0_{m \times n} & 0_{m \times m_1} & 0_{m \times m_2} & \cdots & 0_{m \times m_L} \end{pmatrix}, \quad \mathbf{B}_{dk} = \begin{pmatrix} 0_{n \times m_k}^T & 0_{m_1 \times m_k}^T & \cdots & 0_{m_{k-1} \times m_k}^T & I_{m_k} & 0_{m_{k+1} \times m_k}^T & \cdots & 0_{m_L \times m_k}^T \end{pmatrix}^T, \tag{6}$$

where $0_{p \times q}$ denotes the zero matrix of size $p \times q$.
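For implementation purposes, the block matrices in (6) can be assembled mechanically; the following NumPy sketch does so, with the helper name `augment` and its list-based interface being our own conventions.

```python
# Our sketch of building the augmented matrices bold-A, bold-C, A_d, B_dk of (6).
import numpy as np

def augment(A, B_list, C, D_list):
    n = A.shape[0]
    m_list = [B.shape[1] for B in B_list]
    m = sum(m_list)
    bA = np.block([[A, np.hstack(B_list)],
                   [np.zeros((m, n)), np.zeros((m, m))]])
    bC = np.block([[C, np.hstack(D_list)],
                   [np.zeros((m, n)), np.zeros((m, m))]])
    Ad = np.zeros((n + m, n + m))
    Ad[:n, :n] = np.eye(n)                      # A_d keeps the state, zeroes controls
    Bd, offset = [], n
    for mk in m_list:
        Bk = np.zeros((n + m, mk))
        Bk[offset:offset + mk, :] = np.eye(mk)  # identity in player k's block row
        Bd.append(Bk)
        offset += mk
    return bA, bC, Ad, Bd
```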
The performance criterion (2) becomes

$$J_k(t_0, \xi_0; u_1, u_2, \dots, u_L) = \mathbb{E}\Big[\xi^T(t_f)\mathbf{G}_k\xi(t_f) + \int_{t_0}^{t_f} \xi^T(t)\mathbf{M}_k\xi(t)\,dt\Big] + \sum_{j=0}^{N-1}\mathbb{E}\Big[\sum_{i=1}^{L} u_i^T(j)\mathbf{R}_{ki}(j)u_i(j)\Big], \tag{7}$$
for all $\mathbf{u}_k = (u_k(0), \dots, u_k(N-1))$, where $u_k(j)$ are $m_k$-dimensional random vectors, $\mathcal{F}_{t_j}$-measurable and such that $\mathbb{E}[|u_k(j)|^2] < \infty$.
Throughout the paper, $\mathcal{F}_t$ denotes the $\sigma$-algebra generated by the random variables $w(s)$, $0 \le s \le t$. The matrices in (7) can be written as

$$\mathbf{G}_k = \mathrm{diag}(G_k, 0) \in \mathbb{R}^{(n+m)\times(n+m)}, \quad \mathbf{M}_k = \mathrm{diag}(M_k, 0) \in \mathbb{R}^{(n+m)\times(n+m)}, \quad \mathbf{R}_{ki}(j) = (t_{j+1}-t_j)R_{ki}. \tag{8}$$
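The extended weights can be formed in the same spirit; `extend_weight` below is again our own helper, and the time-dependent scaling $\mathbf{R}_{ki}(j) = (t_{j+1} - t_j)R_{ki}$ reduces to multiplication by the constant step $h$ used in Section 3.

```python
# Our helper for bold-G_k = diag(G_k, 0) and bold-M_k = diag(M_k, 0) of (8).
import numpy as np

def extend_weight(W, m):
    n = W.shape[0]
    return np.block([[W, np.zeros((n, m))],
                     [np.zeros((m, n)), np.zeros((m, m))]])
```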
Let $\mathcal{U}^{sd} = \mathcal{U}_1^{sd} \times \mathcal{U}_2^{sd} \times \dots \times \mathcal{U}_L^{sd}$ be the set of inputs in the form of sampled-data linear state feedback, i.e., $\mathbf{u} = (\mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_L) \in \mathcal{U}^{sd}$ if and only if $\mathbf{u}_k = (u_k(0), \dots, u_k(N-1))$ with

$$u_k(j) = \mathbf{F}_k(j)\xi(t_j), \quad 0 \le j \le N-1, \tag{9}$$
where $\mathbf{F}_k(j) \in \mathbb{R}^{m_k \times (n+m)}$ are arbitrary matrices and $\xi(t_j)$ are the values at the time instants $t_j$ of the solution of the following IVP:

$$d\xi(t) = \mathbf{A}\xi(t)\,dt + \mathbf{C}\xi(t)\,dw(t), \quad t_j < t \le t_{j+1},$$
$$\xi(t_j^+) = \Big(\mathbf{A}_d + \sum_{k=1}^{L}\mathbf{B}_{dk}\mathbf{F}_k(j)\Big)\xi(t_j), \quad j = 0, 1, \dots, N-1, \tag{10}$$
$$\xi(t_0) = \xi_0 \in \mathbb{R}^{n+m}.$$
Let $\Phi_k$ be a matrix-valued sequence of the form

$$\Phi_k = (\mathbf{F}_k(0), \mathbf{F}_k(1), \dots, \mathbf{F}_k(N-1)), \tag{11}$$

where $\mathbf{F}_k(i) \in \mathbb{R}^{m_k \times (n+m)}$ are arbitrary matrices. We consider the set

$$\mathcal{U}_\Phi^{sd} = \{(\Phi_1, \Phi_2, \dots, \Phi_L) : \Phi_k \text{ are arbitrary sequences defined as in } (11)\}. \tag{12}$$
Remark 1.
By (9) and (10), there is a one-to-one correspondence between the sets $\mathcal{U}^{sd}$ and $\mathcal{U}_\Phi^{sd}$. Each $\mathbf{u}_k$ from $\mathcal{U}_k^{sd}$ can be identified with the sequence $\Phi_k = (\mathbf{F}_k(0), \mathbf{F}_k(1), \dots, \mathbf{F}_k(N-1))$ of its feedback matrices.
Based on this remark, we can rewrite the performance criterion (7) as

$$J_k(t_0, \xi_0; \Phi_1, \Phi_2, \dots, \Phi_L) = \mathbb{E}\Big[\xi^T(t_f)\mathbf{G}_k\xi(t_f) + \int_{t_0}^{t_f}\xi^T(t)\mathbf{M}_k\xi(t)\,dt\Big] + \sum_{j=0}^{N-1}\mathbb{E}\Big[\sum_{i=1}^{L}\xi^T(t_j)\mathbf{F}_i^T(j)\mathbf{R}_{ki}(j)\mathbf{F}_i(j)\xi(t_j)\Big] \tag{13}$$

for all $(\Phi_1, \Phi_2, \dots, \Phi_L) \in \mathcal{U}_\Phi^{sd}$.
Similarly to Definition 1, one can define a Nash equilibrium strategy for the LQ differential game described by the controlled system (5), the performance criteria (13) and the class of admissible strategies $\mathcal{U}_\Phi^{sd}$ described by (12).
Definition 2.
The L-tuple of admissible strategies $(\tilde\Phi_1, \tilde\Phi_2, \dots, \tilde\Phi_L)$ is said to achieve a Nash equilibrium for the differential game described by the controlled system (5), the cost functions (13), and the class of admissible strategies $\mathcal{U}_\Phi^{sd}$, if for all $(\Phi_1, \Phi_2, \dots, \Phi_L) \in \mathcal{U}_\Phi^{sd}$ we have

$$J_k(t_0, \xi_0; \tilde\Phi_1, \tilde\Phi_2, \dots, \tilde\Phi_L) \le J_k(t_0, \xi_0; \tilde\Phi_1, \tilde\Phi_2, \dots, \tilde\Phi_{k-1}, \Phi_k, \tilde\Phi_{k+1}, \dots, \tilde\Phi_L). \tag{14}$$
Remark 2.
(a)
Based on Remark 1, we may infer that if $(\tilde\Phi_1, \tilde\Phi_2, \dots, \tilde\Phi_L)$ is an equilibrium strategy in the sense of Definition 2, then $(\tilde{\mathbf{u}}_1, \tilde{\mathbf{u}}_2, \dots, \tilde{\mathbf{u}}_L)$ given by (9) using the matrix components of $\tilde\Phi_k$ provides an equilibrium strategy for the LQ differential game described by (5), (7) and the family of admissible strategies $\mathcal{U}^{sd}$.
(b)
Among the feedback matrices from (9), some have the form

$$\mathbf{F}_k(j) = (F_k(j) \;\; 0_{m_k \times m}), \tag{15}$$

where $F_k(j) \in \mathbb{R}^{m_k \times n}$. Hence, some admissible strategies (9) are of type (4). Consequently, if the feedback matrices of the Nash equilibrium strategy $(\tilde\Phi_1, \tilde\Phi_2, \dots, \tilde\Phi_L)$ have the structure given in (15), then the strategy of type (9) with these feedback matrices provides the Nash equilibrium strategy for the LQ differential game described by (1), (2) and (4).
To obtain explicit formulae for the feedback matrices of a Nash equilibrium strategy of type (9) (or, equivalently (11), (12)), we use the following system of matrix linear differential equations (MLDEs) with jumps and algebraic constraints:
$$-\dot P_k(t) = \mathbf{A}^T P_k(t) + P_k(t)\mathbf{A} + \mathbf{C}^T P_k(t)\mathbf{C} + \mathbf{M}_k, \quad t_j \le t < t_{j+1}, \tag{16a}$$

$$P_k(t_j) = \mathbf{A}_{[k]}^T(j) P_k(t_j^+)\mathbf{A}_{[k]}(j) - \mathbf{A}_{[k]}^T(j) P_k(t_j^+)\mathbf{B}_{dk}\big(\mathbf{R}_{kk}(j) + \mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{B}_{dk}\big)^{\dagger}\mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{A}_{[k]}(j) + \mathbf{M}_{[k]}(j), \tag{16b}$$

$$\sum_{i=1}^{k-1}\mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{B}_{di}\mathbf{F}_i(j) + \big(\mathbf{R}_{kk}(j) + \mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{B}_{dk}\big)\mathbf{F}_k(j) + \sum_{i=k+1}^{L}\mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{B}_{di}\mathbf{F}_i(j) = -\mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{A}_d, \tag{16c}$$

$$P_k(t_N) = \mathbf{G}_k, \quad k = 1, \dots, L, \tag{16d}$$
where we have denoted
$$\mathbf{A}_{[k]}(j) = \mathbf{A}_d + \sum_{i=1, i \ne k}^{L}\mathbf{B}_{di}\mathbf{F}_i(j) \tag{17}$$
and
$$\mathbf{M}_{[k]}(j) = \sum_{i=1, i \ne k}^{L}\mathbf{F}_i^T(j)\mathbf{R}_{ki}(j)\mathbf{F}_i(j), \tag{18}$$
while the superscript † denotes the generalized inverse of a matrix.
Remark 3.
A solution of the terminal value problem (TVP) with algebraic constraints (16) is a 2L-tuple of the form $(P_1(\cdot), P_2(\cdot), \dots, P_L(\cdot); \mathbf{F}_1(\cdot), \mathbf{F}_2(\cdot), \dots, \mathbf{F}_L(\cdot))$ where, for each $1 \le k \le L$, $P_k(\cdot)$ is a solution of the TVP (16a), (16b), (16d) and $\mathbf{F}_k(j) \in \mathbb{R}^{m_k \times (n+m)}$, $0 \le j \le N-1$. On the interval $[t_{N-1}, t_N]$, $P_k(\cdot)$ is the solution of the TVP described by the perturbed Lyapunov-type equation from (16a) and the terminal value given in (16d). On each interval $[t_{j-1}, t_j)$, $j \le N-1$, the terminal value $P_k(t_j)$ of $P_k(\cdot)$ is computed via (16b) together with (17) and (18), provided that $(\mathbf{F}_1(j), \mathbf{F}_2(j), \dots, \mathbf{F}_L(j))$ is obtained as a solution of (16c). Thus, the TVPs solved by $P_k(\cdot)$, $1 \le k \le L$, are interconnected via (16c).
To facilitate the statement of the main result of this section, we rewrite (16c) in the compact form

$$\Pi_d(P_1(t_j^+), \dots, P_L(t_j^+), j)\,\mathbf{F}(j) = \Gamma_d(P_1(t_j^+), \dots, P_L(t_j^+)), \tag{19}$$

where $\mathbf{F}(j) = (\mathbf{F}_1^T(j) \; \mathbf{F}_2^T(j) \; \dots \; \mathbf{F}_L^T(j))^T$ and the matrices $\Pi_d(P_1(t_j^+), \dots, P_L(t_j^+), j)$ and $\Gamma_d(P_1(t_j^+), \dots, P_L(t_j^+))$ are obtained from the block components of (16c).
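As an illustration of how $\Pi_d$ and $\Gamma_d$ might be assembled from the block components of (16c), consider the sketch below. The stacking order and helper name are our assumptions; $\Gamma_d$ is taken to absorb the right-hand sides $-\mathbf{B}_{dk}^T P_k(t_j^+)\mathbf{A}_d$.

```python
# A sketch (our conventions) of stacking (16c) into the compact form (19).
import numpy as np

def assemble_Pi_Gamma(P_plus, Bd, Ad, Rkk_j):
    """P_plus[k] = P_k(t_j^+); Bd[k] = bold-B_dk; Rkk_j[k] = (t_{j+1}-t_j)*R_kk."""
    L = len(P_plus)
    rows, rhs = [], []
    for k in range(L):
        row = []
        for i in range(L):
            blk = Bd[k].T @ P_plus[k] @ Bd[i]     # B_dk^T P_k(t_j^+) B_di
            if i == k:
                blk = blk + Rkk_j[k]              # add R_kk(j) on the diagonal block
            row.append(blk)
        rows.append(np.hstack(row))
        rhs.append(-Bd[k].T @ P_plus[k] @ Ad)     # right-hand side of (16c)
    return np.vstack(rows), np.vstack(rhs)        # Pi_d(j), Gamma_d
```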

2.3. Sampled Data Nash Equilibrium Strategy

First we derive a necessary and sufficient condition for the existence of an equilibrium strategy of type (9) for the LQ differential game given by the controlled system (5), the performance criteria (7) and the set of the admissible strategies U s d . To this end we adapt the argument used in the proof of ([22], Theorem 4).
We prove:
Theorem 1.
Under assumption **H**, the following statements are equivalent:
(i)
the LQ differential game defined by the dynamical system controlled by impulses (5), the performance criteria (7) and the class of admissible strategies of type (9) has a Nash equilibrium strategy

$$\tilde u_k(j) = \tilde{\mathbf{F}}_k(j)\xi(t_j), \quad 0 \le j \le N-1, \; 1 \le k \le L. \tag{20}$$
(ii)
the TVP with constraints (16) has a solution $(\tilde P_1(\cdot), \tilde P_2(\cdot), \dots, \tilde P_L(\cdot); \tilde{\mathbf{F}}_1(\cdot), \tilde{\mathbf{F}}_2(\cdot), \dots, \tilde{\mathbf{F}}_L(\cdot))$ defined on the whole interval $[t_0, t_f]$ and satisfying, for $0 \le j \le N-1$, the condition

$$\Pi_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+), j)\,\Pi_d^{\dagger}(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+), j)\,\Gamma_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+)) = \Gamma_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+)). \tag{21}$$
If condition (21) holds, then the feedback matrices of a Nash equilibrium strategy of type (9) are the matrix components of the solution of the TVP (16) and are given by
$$(\tilde{\mathbf{F}}_1^T(j) \; \tilde{\mathbf{F}}_2^T(j) \; \dots \; \tilde{\mathbf{F}}_L^T(j))^T = \Pi_d^{\dagger}(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+), j)\,\Gamma_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+)), \quad 0 \le j \le N-1. \tag{22}$$

The minimal value of the cost of the k-th player is $\xi_0^T \tilde P_k(t_0)\xi_0$.
Proof. 
From (14) and Remarks 1 and 2(a), one can see that a strategy of type (9) defines a Nash equilibrium strategy for the linear differential game described by the controlled system (5) and the performance criteria (7) (or equivalently (13)) if and only if, for each $1 \le k \le L$, the optimal control problem described by the controlled system
$$d\xi(t) = \mathbf{A}\xi(t)\,dt + \mathbf{C}\xi(t)\,dw(t), \quad t_j < t \le t_{j+1},$$
$$\xi(t_j^+) = \tilde{\mathbf{A}}_{[k]}(j)\xi(t_j) + \mathbf{B}_{dk}u_k(j), \quad j = 0, 1, \dots, N-1, \tag{23}$$
$$\xi(t_0) = \xi_0 \in \mathbb{R}^{n+m},$$
and the quadratic functional
$$J^{[k]}(t_0, \xi_0; \mathbf{u}_k) = \mathbb{E}\Big[\xi^T(t_f)\mathbf{G}_k\xi(t_f) + \int_{t_0}^{t_f}\xi^T(t)\mathbf{M}_k\xi(t)\,dt\Big] + \sum_{j=0}^{N-1}\mathbb{E}\big[\xi^T(t_j)\tilde{\mathbf{M}}_{[k]}(j)\xi(t_j) + u_k^T(j)\mathbf{R}_{kk}(j)u_k(j)\big], \tag{24}$$
has an optimal control in a state feedback form. The controlled system (23) and the performance criterion (24) are obtained by substituting $\tilde u_\ell(j) = \tilde{\mathbf{F}}_\ell(j)\xi(t_j)$, $1 \le \ell \le L$, $\ell \ne k$, in (5) and (7), respectively. Here, $\tilde{\mathbf{A}}_{[k]}$ and $\tilde{\mathbf{M}}_{[k]}$ are computed as in (17) and (18), respectively, but with $\mathbf{F}_i(j)$ replaced by $\tilde{\mathbf{F}}_i(j)$.
To obtain necessary and sufficient conditions for the existence of the optimal control in a linear state feedback form we employ the results proved in [20]. First, notice that in the case of the optimal control problem (23)–(24), the TVP (16a), (16b), (16d) plays the role of the TVP (19)–(23) from [20].
Using Theorem 3 in [20] for the optimal control problem described by (23) and (24), we deduce that the existence of a Nash equilibrium strategy of the form (9) for the differential game described by the controlled system (5) and the performance criteria (7) (or its equivalent form (13)) is equivalent to the solvability of the TVP described by (16). The feedback matrix $\tilde{\mathbf{F}}_k(j)$ of the optimal control solves the equation

$$\big(\mathbf{R}_{kk}(j) + \mathbf{B}_{dk}^T\tilde P_k(t_j^+)\mathbf{B}_{dk}\big)\tilde{\mathbf{F}}_k(j) = -\mathbf{B}_{dk}^T\tilde P_k(t_j^+)\tilde{\mathbf{A}}_{[k]}(j). \tag{25}$$
Substituting the formula for $\tilde{\mathbf{A}}_{[k]}$ in (25), we deduce that the feedback matrices of the Nash equilibrium strategy solve an equation of the form (16c), written for $\tilde{\mathbf{F}}_k(j)$ instead of $\mathbf{F}_k(j)$. This equation may be written in the compact form

$$\Pi_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+), j)\,\tilde{\mathbf{F}}(j) = \Gamma_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+)), \tag{26}$$

where $\tilde{\mathbf{F}}(j) = (\tilde{\mathbf{F}}_1^T(j) \; \tilde{\mathbf{F}}_2^T(j) \; \dots \; \tilde{\mathbf{F}}_L^T(j))^T$.
By Lemma 2.7 in [21], we deduce that Equation (26) has a solution if and only if condition (21) holds. A solution of Equation (26) is given by (22). The minimal value of the cost for the k-th player is obtained from Theorem 1 in [20], applied to the optimal control problem described by (23) and (24). This completes the proof. □
Remark 4.
When the matrices $\Pi_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+), j)$ are invertible, conditions (21) are satisfied automatically. In this case, the feedback matrices $\tilde{\mathbf{F}}_k(j)$ of a Nash equilibrium strategy of type (20) are obtained as the unique solution of Equation (22), because the generalized inverse of each matrix $\Pi_d(\tilde P_1(t_j^+), \dots, \tilde P_L(t_j^+), j)$, $0 \le j \le N-1$, is then the usual inverse.
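Numerically, condition (21) and formula (22) can be tested with the Moore–Penrose pseudoinverse; the sketch below uses `numpy.linalg.pinv`, with the tolerance and function name being our own choices.

```python
# Checking the solvability condition (21) and evaluating (22); a sketch.
import numpy as np

def nash_gains(Pi, Gamma, tol=1e-9):
    Pi_dag = np.linalg.pinv(Pi)                   # generalized inverse, cf. (22)
    if not np.allclose(Pi @ Pi_dag @ Gamma, Gamma, atol=tol):  # condition (21)
        raise ValueError("condition (21) fails at this sampling instant")
    return Pi_dag @ Gamma                         # stacked feedback matrices
```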
Combining (6) and (16c), we deduce that the matrices $\tilde{\mathbf{F}}_k(j)$ provided by (22) have the structure $\tilde{\mathbf{F}}_k(j) = (\tilde F_k(j) \;\; 0_{m_k \times m})$. Hence, the Nash equilibrium strategy of the differential game described by the dynamical system (5), the performance criteria (7) and the admissible strategies of type (9) has the form

$$\tilde u_k(j) = (\tilde F_k(j) \;\; 0_{m_k \times m})\xi(t_j) = \tilde F_k(j)x(t_j), \quad 0 \le j \le N-1.$$
We can now state the Nash equilibrium strategy of the original differential game.
Theorem 2.
Assume that condition **H** and condition (ii) of Theorem 1 are satisfied. Then, a Nash equilibrium strategy in a state feedback form with sampled measurements of type (4) for the differential game described by the dynamical system (1) and the performance criteria (2) is given by

$$\tilde u_k(t) = \tilde F_k(j)x(t_j), \quad t_j \le t < t_{j+1}, \; 0 \le j \le N-1, \; 1 \le k \le L. \tag{27}$$

The feedback matrices $\tilde F_k(j)$ from (27) are given by the first $n$ columns of the matrices $\tilde{\mathbf{F}}_k(j)$, which are obtained as solutions of Equation (26). In (27), $x(t_j)$ are the values measured at the times $t_j$, $0 \le j \le N-1$, of the solution of the closed-loop system obtained when (27) is plugged into (1). The minimal value of the cost (2) associated to the k-th player is given by

$$(x_0^T \;\; 0_{1 \times m})\,\tilde P_k(t_0)\,(x_0^T \;\; 0_{1 \times m})^T.$$
In the next section, we present an algorithm which allows the numerical computation of the matrices F ˜ k ( j ) arising in (27) for an LQ differential game with two players.

3. Numerical Computations and the Algorithm

In what follows, we assume that $L = 2$ and $t_{j+1} - t_j = h > 0$, $0 \le j \le N-1$.
We propose a numerical approach to compute the optimal strategies
$$\tilde u_k(j) = \tilde F_k(j)\tilde x(t_j), \quad j = 0, 1, \dots, N-1. \tag{28}$$
The algorithm consists of two steps (a code sketch implementing both steps is given after the list):
• We first compute the feedback matrices $\tilde F_k(j)$, $j = 0, 1, \dots, N-1$, $k = 1, 2$, of the Nash equilibrium strategy, based on the solutions $\tilde P_1(\cdot), \tilde P_2(\cdot)$ of

$$-\dot P_k(t) = \mathbf{A}^T P_k(t) + P_k(t)\mathbf{A} + \mathbf{C}^T P_k(t)\mathbf{C} + \mathbf{M}_k, \quad t_j \le t < t_{j+1}.$$
STEP 1.A. We take $P_k(t_N) = \mathbf{G}_k$, $k = 1, 2$, and compute

$$\tilde P_k(t_{N-1}^+) = e^{\mathcal{L}^* h}[\mathbf{G}_k] + \mathcal{M}_k, \quad k = 1, 2, \quad \text{where}$$

$$\mathcal{M}_k = h\mathbf{M}_k + \frac{h^2}{2}\mathcal{L}^*[\mathbf{M}_k] + \frac{h^3}{6}(\mathcal{L}^*)^2[\mathbf{M}_k] + \dots + \frac{h^p}{p!}(\mathcal{L}^*)^{p-1}[\mathbf{M}_k], \tag{31}$$

$$e^{\mathcal{L}^* h}[X] = \sum_{\ell=0}^{q}\frac{h^\ell}{\ell!}(\mathcal{L}^*)^\ell[X] = X + h\mathcal{L}^*[X] + \frac{h^2}{2}\mathcal{L}^*[\mathcal{L}^*[X]] + \dots + \frac{h^q}{q!}(\mathcal{L}^*)^q[X],$$

with $p \ge 1$ and $q \ge 1$ sufficiently large.
For the operator $\mathcal{L}^*$ we have

$$\mathcal{L}^*[X] = \mathbf{A}^T X + X\mathbf{A} + \mathbf{C}^T X\mathbf{C}$$

for all $X = X^T \in \mathbb{R}^{(n+m_1+m_2)\times(n+m_1+m_2)}$.
The iterations $(\mathcal{L}^*)^\ell[X]$ are computed from

$$(\mathcal{L}^*)^\ell[X] = \mathbf{A}^T(\mathcal{L}^*)^{\ell-1}[X] + (\mathcal{L}^*)^{\ell-1}[X]\mathbf{A} + \mathbf{C}^T(\mathcal{L}^*)^{\ell-1}[X]\mathbf{C}$$

for $\ell \ge 1$, with $(\mathcal{L}^*)^0[X] = X$, where $X = \tilde P_k(t_{j+1})$ or $X = \mathbf{M}_k$, respectively.
We compute the feedback matrices $\tilde F_k(N-1) \in \mathbb{R}^{m_k \times n}$ as the solution of the linear equation

$$\begin{pmatrix} R_{11} + h^{-1}\tilde P_{1,11}(t_{N-1}^+) & h^{-1}\tilde P_{1,12}(t_{N-1}^+) \\ h^{-1}\tilde P_{2,12}^T(t_{N-1}^+) & R_{22} + h^{-1}\tilde P_{2,22}(t_{N-1}^+) \end{pmatrix}\begin{pmatrix} \tilde F_1(N-1) \\ \tilde F_2(N-1) \end{pmatrix} = -\begin{pmatrix} h^{-1}\tilde P_{1,01}^T(t_{N-1}^+) \\ h^{-1}\tilde P_{2,02}^T(t_{N-1}^+) \end{pmatrix}.$$
STEP 1.B. We set

$$\tilde{\mathbf{F}}_k(N-1) = (\tilde F_k(N-1) \;\; 0 \;\; 0) \in \mathbb{R}^{m_k \times (n+m_1+m_2)}, \quad k = 1, 2.$$
Next, we compute $\tilde P_k(t_{N-1})$, $k = 1, 2$:

$$\tilde P_1(t_{N-1}) = (\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(N-1))^T\tilde P_1(t_{N-1}^+)(\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(N-1)) - (\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(N-1))^T\tilde P_1(t_{N-1}^+)\mathbf{B}_{d1}\big(hR_{11} + \mathbf{B}_{d1}^T\tilde P_1(t_{N-1}^+)\mathbf{B}_{d1}\big)^{-1}\mathbf{B}_{d1}^T\tilde P_1(t_{N-1}^+)(\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(N-1)) + h\tilde{\mathbf{F}}_2^T(N-1)R_{12}\tilde{\mathbf{F}}_2(N-1)$$

and

$$\tilde P_2(t_{N-1}) = (\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(N-1))^T\tilde P_2(t_{N-1}^+)(\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(N-1)) - (\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(N-1))^T\tilde P_2(t_{N-1}^+)\mathbf{B}_{d2}\big(hR_{22} + \mathbf{B}_{d2}^T\tilde P_2(t_{N-1}^+)\mathbf{B}_{d2}\big)^{-1}\mathbf{B}_{d2}^T\tilde P_2(t_{N-1}^+)(\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(N-1)) + h\tilde{\mathbf{F}}_1^T(N-1)R_{21}\tilde{\mathbf{F}}_1(N-1).$$
STEP 2.A. Fix $j \le N-2$ and assume that $\tilde P_k(t_{j+1})$, $k = 1, 2$, have already been computed. We compute

$$\tilde P_k(t_j^+) = e^{\mathcal{L}^* h}[\tilde P_k(t_{j+1})] + \mathcal{M}_k, \quad k = 1, 2,$$
where $\mathcal{M}_k$ is computed as in (31).
We compute the feedback gains $\tilde F_k(j) \in \mathbb{R}^{m_k \times n}$ as the solution of the linear equation

$$\begin{pmatrix} R_{11} + h^{-1}\tilde P_{1,11}(t_j^+) & h^{-1}\tilde P_{1,12}(t_j^+) \\ h^{-1}\tilde P_{2,12}^T(t_j^+) & R_{22} + h^{-1}\tilde P_{2,22}(t_j^+) \end{pmatrix}\begin{pmatrix} \tilde F_1(j) \\ \tilde F_2(j) \end{pmatrix} = -\begin{pmatrix} h^{-1}\tilde P_{1,01}^T(t_j^+) \\ h^{-1}\tilde P_{2,02}^T(t_j^+) \end{pmatrix}.$$
STEP 2.B. Setting $\tilde{\mathbf{F}}_k(j) = (\tilde F_k(j) \;\; 0 \;\; 0) \in \mathbb{R}^{m_k \times (n+m_1+m_2)}$, $k = 1, 2$, we compute $\tilde P_k(t_j)$ by the formulae below:

$$\tilde P_1(t_j) = (\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(j))^T\tilde P_1(t_j^+)(\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(j)) - (\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(j))^T\tilde P_1(t_j^+)\mathbf{B}_{d1}\big(hR_{11} + \mathbf{B}_{d1}^T\tilde P_1(t_j^+)\mathbf{B}_{d1}\big)^{-1}\mathbf{B}_{d1}^T\tilde P_1(t_j^+)(\mathbf{A}_d + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(j)) + h\tilde{\mathbf{F}}_2^T(j)R_{12}\tilde{\mathbf{F}}_2(j)$$

and

$$\tilde P_2(t_j) = (\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(j))^T\tilde P_2(t_j^+)(\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(j)) - (\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(j))^T\tilde P_2(t_j^+)\mathbf{B}_{d2}\big(hR_{22} + \mathbf{B}_{d2}^T\tilde P_2(t_j^+)\mathbf{B}_{d2}\big)^{-1}\mathbf{B}_{d2}^T\tilde P_2(t_j^+)(\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(j)) + h\tilde{\mathbf{F}}_1^T(j)R_{21}\tilde{\mathbf{F}}_1(j).$$
• In the second step, we compute the optimal trajectory $\tilde x(t)$ from the initial vector $x_0$ and the equilibrium strategy values $\tilde u_k(j)$, $k = 1, 2$. We then illustrate the mean squares of the optimal trajectory, $E[|\tilde x(t)|^2]$, and of the equilibrium strategies, $E[|\tilde u_k(t)|^2]$, $k = 1, 2$. We set $\tilde\xi(t) = (\tilde x^T(t) \; \tilde u_1^T(t) \; \tilde u_2^T(t))^T$ and define $X(t) = E[\tilde\xi(t)\tilde\xi^T(t)]$.
The function $t \mapsto X(t)$ solves the forward linear differential equation with finite jumps

$$\dot X(t) = \mathcal{L}X(t), \quad t_j \le t < t_{j+1},$$

with the jumps at $t_j = jh$, $0 \le j \le N-1$, given by

$$X(t_j^+) = (\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(j) + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(j))\,X(t_j)\,(\mathbf{A}_d + \mathbf{B}_{d1}\tilde{\mathbf{F}}_1(j) + \mathbf{B}_{d2}\tilde{\mathbf{F}}_2(j))^T,$$

where

$$\mathcal{L}X = \mathbf{A}X + X\mathbf{A}^T + \mathbf{C}X\mathbf{C}^T.$$
The plots then use the values

$$E[|\tilde x(i\delta + jh)|^2] = \mathrm{Tr}[X_{11}(i\delta + jh)], \quad E[|\tilde u_1(i\delta + jh)|^2] = \mathrm{Tr}[X_{22}(i\delta + jh)], \quad E[|\tilde u_2(i\delta + jh)|^2] = \mathrm{Tr}[X_{33}(i\delta + jh)],$$

where

$$X(i\delta + jh) = \begin{pmatrix} X_{11}(i\delta + jh) & X_{12}(i\delta + jh) & X_{13}(i\delta + jh) \\ X_{12}^T(i\delta + jh) & X_{22}(i\delta + jh) & X_{23}(i\delta + jh) \\ X_{13}^T(i\delta + jh) & X_{23}^T(i\delta + jh) & X_{33}(i\delta + jh) \end{pmatrix},$$

with $X_{11}(i\delta + jh) \in \mathbb{R}^{n \times n}$, $X_{22}(i\delta + jh) \in \mathbb{R}^{m_1 \times m_1}$ and $X_{33}(i\delta + jh) \in \mathbb{R}^{m_2 \times m_2}$.
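The following self-contained sketch implements Steps 1–2 for $L = 2$ under our reading of the scheme above: equidistant sampling, invertible coefficient matrices as in Remark 4 (so `numpy.linalg.solve` is used instead of a pseudoinverse), and the augmented matrices produced by the Section 2.2 helpers. All function and variable names are ours; player indices are 0-based in the code.

```python
# A sketch of the two-player backward sweep (Steps 1.A/1.B and 2.A/2.B).
import numpy as np

def lie_star(X, bA, bC):
    # L*[X] = bA^T X + X bA + bC^T X bC
    return bA.T @ X + X @ bA + bC.T @ X @ bC

def exp_series(X, bA, bC, h, q=10):
    # truncated series for e^{L* h}[X], cf. the expansion in Step 1.A
    S, T = X.copy(), X.copy()
    for l in range(1, q + 1):
        T = lie_star(T, bA, bC) * (h / l)
        S = S + T
    return S

def int_series(bM, bA, bC, h, p=10):
    # script-M_k of (31): h*M + h^2/2 L*[M] + ... + h^p/p! (L*)^{p-1}[M]
    S, T = h * bM, h * bM
    for l in range(2, p + 1):
        T = lie_star(T, bA, bC) * (h / l)
        S = S + T
    return S

def backward_sweep(bA, bC, Ad, Bd, bG, bM, R, n, m1, m2, h, N):
    """R = ((R11, R12), (R21, R22)), the small unscaled weights.
    Returns reduced gains F[k][j] (m_k x n) and the values P_k(t_0)."""
    s0, s1, s2 = slice(0, n), slice(n, n + m1), slice(n + m1, n + m1 + m2)
    P = [bG[0].copy(), bG[1].copy()]              # terminal values P_k(t_N) = bold-G_k
    F = [[None] * N, [None] * N]
    cM = [int_series(bM[k], bA, bC, h) for k in (0, 1)]
    for j in range(N - 1, -1, -1):
        Pp = [exp_series(P[k], bA, bC, h) + cM[k] for k in (0, 1)]  # P_k(t_j^+)
        # block linear system of Steps 1.A/2.A for the stacked reduced gains
        lhs = np.block([[R[0][0] + Pp[0][s1, s1] / h, Pp[0][s1, s2] / h],
                        [Pp[1][s1, s2].T / h, R[1][1] + Pp[1][s2, s2] / h]])
        rhs = -np.vstack([Pp[0][s0, s1].T / h, Pp[1][s0, s2].T / h])
        Fs = np.linalg.solve(lhs, rhs)            # invertibility assumed (Remark 4)
        F[0][j], F[1][j] = Fs[:m1], Fs[m1:]
        bF = [np.hstack([F[k][j], np.zeros((F[k][j].shape[0], m1 + m2))])
              for k in (0, 1)]                    # extended gains (F_k(j) 0 0)
        # jump updates of Steps 1.B/2.B
        for k, i in ((0, 1), (1, 0)):
            At = Ad + Bd[i] @ bF[i]
            W = h * R[k][k] + Bd[k].T @ Pp[k] @ Bd[k]
            BPA = Bd[k].T @ Pp[k] @ At
            P[k] = (At.T @ Pp[k] @ At - BPA.T @ np.linalg.solve(W, BPA)
                    + h * bF[i].T @ R[k][i] @ bF[i])
    return F, P
```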
This algorithm enables us to compute the equilibrium strategy values $\tilde u_k(j)$ of the players. The experiments illustrate that the optimal strategies are piecewise constant, which seems to indicate that we have a stabilization effect.
Further, we consider two examples for the LQ differential game described by the dynamical system (1), the performance criteria (2) and the class of piecewise constant admissible strategies of type (28).
Example 1.
We consider the controlled system (1) in the special case $n = m_1 = m_2 = 2$. The coefficient matrices $A, B_k, C, D_k, M_k, G_k, R_{kk}, R_{k\ell}$, $k, \ell = 1, 2$, $\ell \ne k$, are defined as

$$A = \begin{pmatrix} 1.5 & 0.17 \\ 0.07 & 1.4 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 1.5 & 0.7 \\ 0.3 & 0.4 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 1.2 & 0.95 \\ 0.8 & 0.7 \end{pmatrix},$$
$$C = \begin{pmatrix} 0.7 & 0.19 \\ 0.24 & 0.9 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0.2 & 0.04 \\ 0.4 & 0.5 \end{pmatrix}, \quad D_2 = \begin{pmatrix} 0.1 & 0.06 \\ 0.2 & 0.3 \end{pmatrix},$$
$$M_1 = \begin{pmatrix} 0.8 & 0.7 \\ 0.7 & 0.95 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 0.09 & 0.04 \\ 0.04 & 0.08 \end{pmatrix},$$
$$G_1 = \begin{pmatrix} 1.2 & 0.45 \\ 0.45 & 1.5 \end{pmatrix}, \quad G_2 = \begin{pmatrix} 0.95 & 0.8 \\ 0.8 & 1.15 \end{pmatrix},$$
$$R_{11} = \begin{pmatrix} 0.6 & 0.25 \\ 0.25 & 0.8 \end{pmatrix}, \quad R_{22} = \begin{pmatrix} 0.3 & 0.15 \\ 0.15 & 0.4 \end{pmatrix},$$
$$R_{12} = \begin{pmatrix} 0.05 & 0.04 \\ 0.04 & 0.08 \end{pmatrix}, \quad R_{21} = \begin{pmatrix} 0.06 & 0.07 \\ 0.07 & 0.09 \end{pmatrix}.$$
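For illustration only, the Example 1 data could be fed to the sketches above as follows; the step $h$, the number of sampling intervals $N$, and the helpers `augment`, `extend_weight` and `backward_sweep` are our hypothetical constructions, not the authors' code.

```python
# Hypothetical end-to-end use of the earlier sketches on the Example 1 data.
import numpy as np

A  = np.array([[1.5, 0.17], [0.07, 1.4]]);  C  = np.array([[0.7, 0.19], [0.24, 0.9]])
B1 = np.array([[1.5, 0.7], [0.3, 0.4]]);    B2 = np.array([[1.2, 0.95], [0.8, 0.7]])
D1 = np.array([[0.2, 0.04], [0.4, 0.5]]);   D2 = np.array([[0.1, 0.06], [0.2, 0.3]])
M1 = np.array([[0.8, 0.7], [0.7, 0.95]]);   M2 = np.array([[0.09, 0.04], [0.04, 0.08]])
G1 = np.array([[1.2, 0.45], [0.45, 1.5]]);  G2 = np.array([[0.95, 0.8], [0.8, 1.15]])
R = ((np.array([[0.6, 0.25], [0.25, 0.8]]), np.array([[0.05, 0.04], [0.04, 0.08]])),
     (np.array([[0.06, 0.07], [0.07, 0.09]]), np.array([[0.3, 0.15], [0.15, 0.4]])))

h, N = 0.1, 10                                   # our choice: t_f = 1, equidistant grid
bA, bC, Ad, Bd = augment(A, [B1, B2], C, [D1, D2])
bG = [extend_weight(G1, 4), extend_weight(G2, 4)]
bM = [extend_weight(M1, 4), extend_weight(M2, 4)]
F, P0 = backward_sweep(bA, bC, Ad, Bd, bG, bM, R, 2, 2, 2, h, N)

x0 = np.array([0.03, 0.01])
xi0 = np.concatenate([x0, np.zeros(4)])
cost_player1 = xi0 @ P0[0] @ xi0                 # minimal cost of player 1, cf. Theorem 2
```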
The evolution of the mean square values $E[|\tilde x(t)|^2]$ and $E[|u_{opt}(t)|^2]$ of the optimal trajectory $\tilde x(t)$ (with the initial point $x_0^T = (0.03 \;\; 0.01)$) and of the equilibrium strategies $u_{1,opt}(t)$ and $u_{2,opt}(t)$ is depicted in Figure 1 on the interval $[0, 1]$ and in Figure 2 on $[0, 2]$. The values of the optimal trajectory $\tilde x(t)$ and of the equilibrium strategies of both players are very close to zero over both the short and the long term.
Example 2.
We consider the controlled system (1) in the special case $n = 4$ and $m_1 = m_2 = 2$. We define the matrix coefficients $A, B_k, C, D_k, M_k, G_k, R_{kk}, R_{k\ell}$, $k, \ell = 1, 2$, $\ell \ne k$, as follows:

$$A = \begin{pmatrix} 0.5 & 0.17 & 0.07 & 0.9 \\ 0.07 & 0.54 & 0.2 & 0.25 \\ 0.6 & 0.8 & 0.92 & 0.06 \\ 0.35 & 0.45 & 0.04 & 0.99 \end{pmatrix}, \quad B_1 = \begin{pmatrix} 4.05 & 0.4 \\ 0.4 & 0.8 \\ 1 & 0.9 \\ 0 & 0.8 \end{pmatrix},$$
$$C = \begin{pmatrix} 0.07 & 0.19 & 0.8 & 0 \\ 0.4 & 0.18 & 0.24 & 0.7 \\ 0.06 & 0.3 & 0.15 & 0.4 \\ 0.45 & 0.37 & 0.09 & 0.08 \end{pmatrix}, \quad D_1 = \begin{pmatrix} 0.15 & 0 \\ 0.2 & 0.25 \\ 0 & 0.035 \\ 0.04 & 0.2 \end{pmatrix},$$
$$B_2 = \begin{pmatrix} 0.4 & 0.05 \\ 0.05 & 0.07 \\ 0 & 0.07 \\ 0.3 & 0.05 \end{pmatrix}, \quad D_2 = \begin{pmatrix} 0.25 & 0.525 \\ 1.25 & 0.025 \\ 0.35 & 0.75 \\ 0.25 & 0.9 \end{pmatrix},$$
$$M_1 = \mathrm{diag}(0.78, 0.82, 0.6, 0.5), \quad M_2 = \mathrm{diag}(0.6, 0.8, 0.48, 1.05),$$
$$G_1 = \begin{pmatrix} 0.9 & 0.05 & 0.25 & 0.35 \\ 0.05 & 1 & 0.2 & 0.07 \\ 0.25 & 0.2 & 1.05 & 0.3 \\ 0.35 & 0.07 & 0.3 & 0.9 \end{pmatrix}, \quad G_2 = \begin{pmatrix} 1.25 & 0.75 & 0.21 & 0.65 \\ 0.75 & 0.88 & 0.45 & 0.76 \\ 0.21 & 0.45 & 1 & 0.87 \\ 0.65 & 0.76 & 0.87 & 0.99 \end{pmatrix},$$
$$R_{11} = \begin{pmatrix} 1.26 & 0.25 & 0.25 & 0.8 \\ 0.25 & 0.95 & 0.15 & 0.4 \\ 0.25 & 0.15 & 0.96 & 0.3 \\ 0.8 & 0.4 & 0.3 & 0.88 \end{pmatrix}, \quad R_{22} = \begin{pmatrix} 0.6 & 0.15 & 0.15 & 0.4 \\ 0.15 & 0.85 & 0.36 & 0.4 \\ 0.15 & 0.36 & 0.4 & 0.25 \\ 0.4 & 0.4 & 0.25 & 0.87 \end{pmatrix},$$
$$R_{12} = \begin{pmatrix} 0.98 & 0.04 & 0.36 & 0.4 \\ 0.04 & 0.8 & 0.36 & 0.45 \\ 0.36 & 0.36 & 0.64 & 0.1 \\ 0.4 & 0.45 & 0.1 & 0.89 \end{pmatrix}, \quad R_{21} = \begin{pmatrix} 0.6 & 0.07 & 0.35 & 0.28 \\ 0.07 & 0.8 & 0.39 & 0.25 \\ 0.35 & 0.39 & 1.2 & 0.48 \\ 0.28 & 0.25 & 0.48 & 1.01 \end{pmatrix}.$$
The evolution of the mean square values $E[|\tilde x(t)|^2]$ and $E[|u_{opt}(t)|^2]$ of the optimal trajectory $\tilde x(t)$ (with the initial point $x_0^T = (0.15 \;\; 0.01 \;\; 0.02 \;\; 0.03)$) and of the equilibrium strategies $u_{1,opt}(t)$ and $u_{2,opt}(t)$ is depicted on the intervals $[0, 1]$ (Figure 3) and $[0, 5]$ (Figure 4), respectively. The values of the optimal trajectory $\tilde x(t)$ and of the equilibrium strategies of both players are very close to zero over both the short and the long term.

4. Concluding Remarks

In this paper, we have investigated existence conditions for Nash equilibrium strategies in a state feedback form, in the class of piecewise constant admissible strategies. These conditions are expressed through the solvability of the algebraic Equation (26), whose solutions provide the feedback matrices of the desired Nash equilibrium strategy. To obtain such conditions for the existence of a sampled-data Nash equilibrium strategy, we transformed the original problem into an equivalent one, which requires finding a Nash equilibrium strategy in a state feedback form for a stochastic differential game whose dynamics are described by Itô-type differential equations controlled by impulses. Unlike the deterministic case, where the problem of finding a sampled-data Nash equilibrium strategy can be transformed into an equivalent discrete-time problem, in the stochastic framework, when the controlled system is described by Itô-type differential equations, such a transformation to the discrete-time case is not possible. The developments in the present work clarify and extend the results from Section 5 of [23], where only the particular case $L = 2$ was considered. The key tool for obtaining the feedback matrices of the Nash equilibrium strategy via Equation (26) is the solution $\tilde P_k(\cdot)$, $1 \le k \le L$, of the TVP (16). On each interval $(t_{j-1}, t_j)$, $1 \le j \le N$, (16a) consists of $L$ uncoupled backward linear differential equations. The boundary values $\tilde P_k(t_j)$ are computed via (16d) for $j = N$ and via (16b) for $j \le N-1$. Finally, we gave an algorithm for calculating the equilibrium strategies of the players, and the numerical experiments suggest a stabilization effect.

Author Contributions

Conceptualization, V.D., I.G.I., I.-L.P. and O.B.; methodology, V.D., I.G.I., I.-L.P. and O.B.; software, V.D., I.G.I., I.-L.P. and O.B.; validation, V.D., I.G.I., I.-L.P. and O.B.; investigation, V.D., I.G.I., I.-L.P. and O.B.; resources, V.D., I.G.I., I.-L.P. and O.B.; writing—original draft preparation, V.D., I.G.I., I.-L.P. and O.B.; writing—review and editing, V.D., I.G.I., I.-L.P. and O.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “1 Decembrie 1918” University of Alba Iulia through scientific research funds.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hu, L.S.; Cao, Y.Y.; Shao, H.H. Constrained robust sampled-data control for nonlinear uncertain systems. Int. J. Robust Nonlinear Control 2002, 12, 447–464.
2. Hu, L.-S.; Lam, J.; Cao, Y.-Y.; Shao, H.-H. A linear matrix inequality (LMI) approach to robust H2 sampled-data control for linear uncertain systems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2003, 33, 149–155.
3. Hu, L.; Shi, P.; Huang, B. Stochastic stability and robust control for sampled-data systems with Markovian jump parameters. J. Math. Anal. Appl. 2006, 504–517.
4. Ramachandran, K.; Tsokos, C. Stochastic Differential Games Theory and Applications; Atlantis Studies in Probability and Statistics; Atlantis Press: Dordrecht, The Netherlands, 2012.
5. Yeung, D.K.; Petrosyan, L.A. Cooperative Stochastic Differential Games; Springer Series in Operations Research and Financial Engineering; Springer: New York, NY, USA, 2006.
6. Zhang, J. Backward Stochastic Differential Equations: From Linear to Fully Nonlinear Theory; Probability Theory and Stochastic Modelling; Springer: New York, NY, USA, 2017; Volume 86.
7. Dockner, E.; Jorgensen, S.; Long, N.; Sorger, G. Differential Games in Economics and Management Science; Cambridge University Press: Cambridge, UK, 2000.
8. Başar, T.; Olsder, G.J. Dynamic Noncooperative Game Theory; Classics in Applied Mathematics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1999; Volume 23.
9. Engwerda, J. On the open-loop Nash equilibrium in LQ-games. J. Econ. Dyn. Control 1998, 22, 729–762.
10. Engwerda, J. Computational aspects of the open-loop Nash equilibrium in linear quadratic games. J. Econ. Dyn. Control 1998, 22, 1487–1506.
11. Engwerda, J. Open-loop Nash equilibria in the non-cooperative infinite-planning horizon LQ game. J. Frankl. Inst. 2014, 351, 2657–2674.
12. Nian, X.; Duan, Z.; Tang, W. Analytical solution for a class of linear quadratic open-loop Nash game with multiple players. J. Control Theory Appl. 2006, 4, 239–244.
13. Azevedo-Perdicoúlis, T.P.; Jank, G. Disturbance attenuation of linear quadratic OL-Nash games on repetitive processes with smoothing on the gas dynamics. Multidimens. Syst. Signal Process. 2012, 23, 131–153.
14. Imaan, M.; Cruz, J. Sampled-data Nash controls in non-zero-sum differential games. Int. J. Control 1973, 17, 1201–1209.
15. Başar, T. On the existence and uniqueness of closed-loop sampled-data Nash controls in linear-quadratic stochastic differential games. In Optimization Techniques; Iracki, K., Malanowski, K., Walukiewicz, S., Eds.; Lecture Notes in Control and Information Sciences; Springer: Berlin/Heidelberg, Germany, 1980; Volume 22, pp. 193–203.
16. Engwerda, J. A numerical algorithm to find soft-constrained Nash equilibria in scalar LQ-games. Int. J. Control 2006, 79, 592–603.
17. Wonham, W.M. On a matrix Riccati equation of stochastic control. SIAM J. Control 1968, 6, 681–697.
18. Sun, J.; Yong, J. Linear–quadratic stochastic two-person nonzero-sum differential games: Open-loop and closed-loop Nash equilibria. Stoch. Process. Appl. 2019, 381–418.
19. Sun, J.; Li, X.; Yong, J. Open-loop and closed-loop solvabilities for stochastic linear quadratic optimal control problems. SIAM J. Control Optim. 2016, 54, 2274–2308.
20. Drăgan, V.; Ivanov, I.G. On the stochastic linear quadratic control problem with piecewise constant admissible controls. J. Frankl. Inst. 2020, 357, 1532–1559.
21. Rami, M.; Moore, J.; Zhou, X. Indefinite stochastic linear quadratic control and generalized differential Riccati equation. SIAM J. Control Optim. 2001, 40, 1296–1311.
22. Drăgan, V.; Ivanov, I.G.; Popa, I.L. Stochastic linear quadratic differential games in a state feedback setting with sampled measurements. Syst. Control Lett. 2019, 104563.
23. Drăgan, V.; Ivanov, I.G.; Popa, I.L. On the closed loop Nash equilibrium strategy for a class of sampled data stochastic linear quadratic differential games. Chaos Solitons Fractals 2020, 109877.
Figure 1. (left) $E[|\tilde x(t)|^2]$; interval $[t_0, \tau] = [0, 1]$; (right) $E[|u_{1,opt}(t)|^2]$ and $E[|u_{2,opt}(t)|^2]$; interval $[t_0, \tau] = [0, 1]$.
Figure 2. (left) $E[|\tilde x(t)|^2]$; interval $[t_0, \tau] = [0, 2]$; (right) $E[|u_{1,opt}(t)|^2]$ and $E[|u_{2,opt}(t)|^2]$; interval $[t_0, \tau] = [0, 2]$.
Figure 3. (left) $E[|\tilde x(t)|^2]$; interval $[t_0, \tau] = [0, 1]$; (right) $E[|u_{1,opt}(t)|^2]$ and $E[|u_{2,opt}(t)|^2]$; interval $[t_0, \tau] = [0, 1]$.
Figure 4. (left) $E[|\tilde x(t)|^2]$; interval $[t_0, \tau] = [0, 5]$; (right) $E[|u_{1,opt}(t)|^2]$ and $E[|u_{2,opt}(t)|^2]$; interval $[t_0, \tau] = [0, 5]$.