Article

Optimal Control with Partially Observed Regime Switching: Discounted and Average Payoffs

by
Beatris Adriana Escobedo-Trujillo
1,*,
Javier Garrido-Meléndez
1,
Gerardo Alcalá
2 and
J. D. Revuelta-Acosta
1
1
Facultad de Ingeniería Campus Coatzacoalcos, Universidad Veracruzana, Coatzacoalcos 96535, Veracruz, Mexico
2
Centro de Investigación en Recursos Energéticos y Sustentables, Universidad Veracruzana, Coatzacoalcos 96535, Veracruz, Mexico
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(12), 2073; https://doi.org/10.3390/math10122073
Submission received: 18 May 2022 / Revised: 6 June 2022 / Accepted: 10 June 2022 / Published: 15 June 2022
(This article belongs to the Special Issue Probability Theory and Stochastic Modeling with Applications)

Abstract: We consider optimal control problems with discounted and average payoffs. The reward (or cost) rate may be unbounded from above and below, and the state dynamics are given by a stochastic differential equation with Markovian switching. The switching is driven by a hidden continuous-time Markov chain that can only be observed in Gaussian white noise. Our general aim is to give conditions for the existence of optimal stationary Markov controls. This generalizes the conditions that ensure the existence of optimal policies for completely observed optimal control problems. We use standard dynamic programming techniques together with hidden Markov model filtering to achieve our goals. As applications of our results, we study the discounted linear quadratic regulator (LQR) problem, the ergodic LQR problem for a quarter-car suspension model, the average LQR problem for a quarter-car suspension model with damping, and an explicit application to optimal pollution control.
MSC:
49N05; 49N10; 49N30; 49N90; 93C41

1. Introduction

In recent years, increasing attention has been paid to a class of optimal control problems in which the dynamic systems are governed by switching diffusions whose switching is modeled by a continuous-time Markov chain ( ψ ) with unobservable hidden states (also known as partially observed optimal control problems). In these problems, one assumes an observable process y whose outcomes are “influenced” by the outcomes of ψ in a known way. Since ψ cannot be observed directly, the goal is to learn about ψ by observing y. Accordingly, this article is concerned with optimal control problems with discounted and ergodic payoffs in which the dynamic system x ( t ) evolves according to a Markovian regime-switching diffusion d x ( t ) = f ( x ( t ) , ψ ( t ) ) d t + σ ( x ( t ) , ψ ( t ) ) d W ( t ) for given continuous functions f and σ . The reward rate is allowed to be unbounded from above and from below. In this paper, the Wonham filter is used to estimate the states of the Markov chain from the observable evolution of a given process y. As a result, the original system x ( t ) is converted into a completely observable one x ¯ ( t ) .
Our main results extend the dynamic programming technique to this family of stochastic optimal control problems with a reward (or cost) rate per unit time that may be unbounded and with Markovian regime-switching diffusions, where the regime switching is modeled by a continuous-time Markov chain ( ψ ) with unobservable states. Early works include research on an optimal control problem with ergodic payoff in which the dynamic system evolves according to a Markovian switching diffusion; however, that diffusion does not depend on a hidden Markov chain [1]. The dynamic programming principle has also been derived for a partially observed optimal control problem in which the dynamic system is governed by a discrete-time Markov control process taking values in a finite-dimensional space [2]. Finally, one paper studied optimal control with completely observable Markovian switching and an unbounded reward rate [3]. As applications of our results, we study the discounted linear quadratic regulator (LQR) problem, the ergodic LQR problem for a quarter-car suspension model, the average (ergodic) LQR problem for a quarter-car suspension model with damping, and an explicit application to optimal pollution control. Other applications with bounded payoffs, different from those studied in this work, can be found in [4,5,6].
The objective of the theory of controlled regime-switching diffusions is to model controlled diffusion systems whose dynamics are affected by discrete phenomena. In these systems, the discrete phenomena are modeled by a continuous-time Markov chain whose states represent the discrete phenomenon involved. There is an extensive list of references dealing with the completely observable case, in which a switching diffusion governs the stochastic system. A literature review includes the textbooks [7,8] and the papers [9,10,11,12,13,14], with several applications, including portfolio optimization, wireless communication systems, and wind turbines, among others.
Generally, to solve partially observed optimal control problems in which the dynamic system is governed by a hidden Markovian switching diffusion, it is necessary to transform them into completely observed ones, which in our case is done using a Wonham filter.
This Wonham filter estimates the hidden state of the Markov chain from the observable evolution of the process y. When these estimates are replaced in the original system, this becomes a completely observable system [15,16] and ([17], Section 22.3). The numerical results for Wonham’s filter are given in [18].
The paper is organized as follows: in Section 1, an introduction is given. In Section 2, the main assumptions are given. In this section, the partially observable system is converted into an observable system. The conditions to ensure the existence of optimal solutions for the optimal control problem with discounted payoff are given in Section 3. In Section 4, the conditions to ensure the existence of optimal solutions for the optimal control problem with average payoff are deduced. To illustrate our results, four applications are developed: an application on a linear quadratic regulator (LQR) with discounted payoff (Section 5); the development of a model of a quarter-car suspension LQR with an average payoff (Section 6); the study of an optimal control of a vehicle active suspension system with damp (Section 7); and an explicit application for an optimal pollution control (Section 8).

2. Formulation of the Problem

This work focuses on controlled hybrid stochastic differential Equations (HSDE) under partial observation. To explain this, first, we consider the stochastic differential equations of the form:
d x ( t ) = b ( x ( t ) , ψ ( t ) , u ( t ) ) d t + σ ( x ( t ) , ψ ( t ) ) d W ( t ) , x ( 0 ) = x 0 , ψ ( 0 ) = i ,
where b : R n × E × U → R n and σ : R n × E → R n × d in (1) depend on a finite-state, time-continuous, irreducible and aperiodic Markov chain ψ ( · ) taking values in E = { 1 , … , N } . For all i , j ∈ E , the transition probabilities are given by:
P ( ψ ( s + t ) = j | ψ ( s ) = i ) = q i j t + o ( t ) , if i ≠ j , and 1 + q i i t + o ( t ) , if i = j ,
where the constants q i j ≥ 0 , i ≠ j , are the transition rates from i to j and satisfy q i i = − ∑ j ≠ i q i j ; the transition rate matrix is denoted by Q = { q i j } i , j = 1 , 2 , … , N . The control component is u ( t ) ∈ U , with U a compact subset of R m , and W is a d-dimensional standard Brownian motion independent of ψ ( · ) . Throughout the work, it is considered that both the Markov chain ψ ( · ) and the Brownian motion W are defined on a complete filtered probability space ( Ω , F , P , { F t } ) that satisfies the usual conditions.
Until now, the switching diffusion (1) seems to be formulated as a classical switching diffusion, as in [11,12,13,14,19], among others. However, we propose that the process ψ is a hidden Markov chain, i.e., at any given instant of time, the exact state of the Markov chain ψ ( · ) cannot be observed directly. Instead, we can only observe the process y given by:
d y ( t ) = h ( ψ ( t ) ) d t + σ 0 d B ( t ) , y ( 0 ) = 0 ,
whose dynamics depend on the value of ψ ( · ) . In Equation (2), h : E → R is a bounded function, whereas B is a one-dimensional Brownian motion independent of W and ψ , and σ 0 is a positive constant.
Under partial observation, the best way to work is through nonlinear filtering. This technique studies the conditional distribution of ψ ( t ) given the observed data accumulated up to time t, namely:
Ψ i ( t ) = P ( ψ ( t ) = i | σ ( y ( s ) , 0 ≤ s ≤ t ) ) , i ∈ E ,
where σ ( y ( s ) , 0 ≤ s ≤ t ) is the σ-algebra generated by the process y up to time t and ∑ i = 1 N Ψ i ( t ) = 1 . Taking into account the following notation:
h T ( Ψ ) = ( h ( 1 ) , h ( 2 ) , , h ( N ) ) , Ψ T ( t ) = ( Ψ 1 ( t ) , , Ψ N ( t ) ) , diag ( h ) = diag ( h ( 1 ) , , h ( N ) ) ,
and using the Wonham filtering techniques, we know that the process Ψ in (3) satisfies the following Equation (see for instance [15] or ([17], Section 22.3)):
d Ψ ( t ) = [ Q Ψ ( t ) − σ 0 −2 h T ( Ψ ( t ) ) ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] d t + σ 0 −2 ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) d y ( t ) ,
where I N is the N × N identity matrix. If we introduce the process:
d w 0 ( t ) = σ 0 −1 ( d y ( t ) − h T ( Ψ ( t ) ) d t ) ,
then Equation (4) can be rewritten as:
d Ψ ( t ) = Q Ψ ( t ) d t + σ 0 −1 ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) d w 0 ( t ) .
Remark 1.
Note that the unique solution of (5) exists up to an explosion time τ (see, for instance, [20]). However, τ = ∞ a.s., since Ψ i ( t ) ≤ 1 for all t < τ and i ∈ E .
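To give a feel for how the filter behaves in practice, the following minimal Python sketch discretizes (4) with an Euler–Maruyama step for a two-state chain; the generator, the observation function h, the noise level σ 0 , and the step size are illustrative assumptions, not values taken from the paper.

import numpy as np

# Minimal Euler-Maruyama discretization of the Wonham filter (4) for a two-state chain.
# All numerical values below are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
dt, T = 1e-3, 5.0
n_steps = int(T / dt)

Q = np.array([[-0.5, 0.5],          # generator of the hidden chain (assumed)
              [0.8, -0.8]])
h = np.array([1.0, 2.0])            # observation drift h(i) (assumed)
sigma0 = 1.0

psi = 0                             # hidden state, encoded as index 0 or 1
Psi = np.array([0.5, 0.5])          # filter estimate Psi(t)
Psi_hist = [Psi.copy()]

for _ in range(n_steps):
    # hidden chain: leave the current state with probability -Q[psi, psi]*dt
    if rng.random() < -Q[psi, psi] * dt:
        psi = 1 - psi
    # observation increment dy = h(psi) dt + sigma0 dB, as in Equation (2)
    dy = h[psi] * dt + sigma0 * np.sqrt(dt) * rng.standard_normal()
    # filter step: dPsi = [Q^T Psi - sigma0^{-2} (h.Psi) G] dt + sigma0^{-2} G dy,
    # where G = (diag(h) - (h.Psi) I) Psi; Q^T Psi is the forward-equation drift.
    hbar = h @ Psi
    G = (np.diag(h) - hbar * np.eye(2)) @ Psi
    Psi = Psi + (Q.T @ Psi - hbar * G / sigma0**2) * dt + G * dy / sigma0**2
    Psi = np.clip(Psi, 1e-10, None)
    Psi /= Psi.sum()                # numerical safeguard: keep the estimate on S_N
    Psi_hist.append(Psi.copy())

The clipping and renormalization at the end of each step is only a numerical safeguard that keeps the discretized estimate on the simplex S N .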
At this point, we have defined the controlled HSDE with partial observation. To fulfill the objective of this work, that is, to solve an optimal control problem with the discounted and average payoff with partial observation, we will transform this problem into one with complete observation (see for instance [5,6,16]). First, we will establish the following notational convention.
For the coefficients b : R n × E × U → R n and σ : R n × E → R n × d
b ( x ( t ) , ψ ( t ) , u ( t ) ) = ( b 1 ( x ( t ) , ψ ( t ) , u ( t ) ) , , b n ( x ( t ) , ψ ( t ) , u ( t ) ) ) , σ ( x ( t ) , ψ ( t ) ) = { σ k l ( x ( t ) , ψ ( t ) ) } k = 1 , , n ; l = 1 , , d ,
we have their filtered estimates:
b ¯ k ( x ( t ) , Ψ ( t ) , u ( t ) ) = i = 1 N Ψ i ( t ) b k ( x ( t ) , i , u ( t ) ) ,
σ ¯ k l ( x ( t ) , Ψ ( t ) ) = i = 1 N Ψ i ( t ) σ k l ( x ( t ) , i ) ,
and with equalities (6)–(7), we establish the new coefficients:
b ¯ ( x ( t ) , Ψ ( t ) , u ( t ) ) = ( b ¯ 1 ( x ( t ) , Ψ ( t ) , u ( t ) ) , , b ¯ n ( x ( t ) , Ψ ( t ) , u ( t ) ) ) , σ ¯ ( x ( t ) , Ψ ( t ) ) = { σ ¯ k l ( x ( t ) , Ψ ( t ) ) } k = 1 , , n ; l = 1 , , d
With the use of above functions and Equation (1), we introduce the components of a new diffusion process as:
d x k ( t ) = b ¯ k ( x ( t ) , Ψ ( t ) , u ( t ) ) d t + l = 1 d σ ¯ k l ( x k ( t ) , Ψ ( t ) ) d W l ( t ) , x ( 0 ) = x 0 ,
and therefore, we obtain from (5) and (8) the following controlled system with complete observation:
d x ( t ) = b ¯ ( x ( t ) , Ψ ( t ) , u ( t ) ) d t + σ ¯ ( x ( t ) , Ψ ( t ) ) d W ( t ) , d Ψ ( t ) = Q Ψ ( t ) d t + σ 0 1 diag ( h ) h T ( Ψ ( t ) ) I N Ψ ( t ) d w 0 ( t ) ,
where ( x ( t ) , Ψ ( t ) ) R n × S N with:
S N = { Ψ = ( Ψ 1 , … , Ψ N ) ∈ R N : Ψ i > 0 , ∑ i = 1 N Ψ i = 1 } .
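As a small illustration of (6)–(8), the sketch below (in Python, with placeholder coefficients that are assumptions, not data from the paper) forms the filtered drift b ¯ and diffusion σ ¯ as Ψ-weighted averages of the regime-dependent coefficients.

import numpy as np

def filtered_coefficients(x, Psi, u, b, sigma):
    """Filtered drift and diffusion (6)-(7): Psi-weighted averages over the regimes.

    b(x, i, u) returns an (n,) array and sigma(x, i) an (n, d) array for each regime i.
    """
    N = len(Psi)
    b_bar = sum(Psi[i] * b(x, i, u) for i in range(N))
    sigma_bar = sum(Psi[i] * sigma(x, i) for i in range(N))
    return b_bar, sigma_bar

# Placeholder regime-dependent coefficients (illustrative assumptions only):
A = [np.array([[-1.0, 0.2], [0.0, -2.0]]),
     np.array([[-3.0, 0.0], [0.1, -1.5]])]
b = lambda x, i, u: A[i] @ x + u
sigma = lambda x, i: (0.5 + 0.1 * i) * np.eye(2)

b_bar, sigma_bar = filtered_coefficients(np.ones(2), np.array([0.3, 0.7]),
                                         np.zeros(2), b, sigma)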
Throughout this work, we will use the following Assumption 1.
Assumption 1.
(a)
The control set U is compact.
(b)
b : R n × E × U → R n is a continuous function that is Lipschitz continuous in x uniformly in ( i , u ) ∈ E × U ; that is, there exists a constant C 1 > 0 such that:
max ( i , u ) ∈ E × U ‖ b ( x , i , u ) − b ( y , i , u ) ‖ ≤ C 1 ‖ x − y ‖ .
(c)
There exist constants C 2 , C 3 > 0 such that σ : R n × E → R n × d satisfies:
‖ σ ( x , i ) − σ ( y , i ) ‖ ≤ C 2 ‖ x − y ‖ and x T σ ( x , i ) σ T ( x , i ) x ≥ C 3 ‖ x ‖ 2
for all x , y ∈ R n and for all i ∈ E .
(d)
There exist C 4 , C 5 > 0 with:
‖ σ ( x , i ) ‖ ≤ C 4 ( 1 + ‖ x ‖ ) and ‖ b ( x , i , u ) ‖ ≤ C 5 ( 1 + ‖ x ‖ )
for all i ∈ E and u ∈ U .
Under Assumption 1 and taking into account Remark 1, we know that the system (9) has a unique solution.
For a sufficiently smooth real-valued function ν : R n × R N → R , we denote by ν x and H x ν the gradient and the Hessian matrix of ν with respect to x ∈ R n , respectively, and by ⟨ · , · ⟩ the scalar product. Let:
L u , Ψ ν ( x , Ψ ) : = ⟨ ν x , b ¯ ( x , Ψ , u ) ⟩ + 1 2 T r ( H x ν · a ( x , Ψ ) ) + ⟨ ν Ψ , Q Ψ ( t ) ⟩ + 1 2 σ 0 −2 T r ( H Ψ ν ( x , Ψ ) · A 2 ( Ψ ( t ) ) )
with
a ( x , Ψ ) = σ ¯ ( x , Ψ ) σ ¯ ( x , Ψ ) T ,
A 2 ( Ψ ( t ) ) = [ ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] [ ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] T ,
the operator associated with Equation (9). In order to carry out the aim of this work, we define the control policies.
Definition 1.
A function of the form u ( t ) : = f ( t , x ( t ) , Ψ ( t ) ) for some measurable function f : [ 0 , ∞ ) × R n × S N → U is called a Markov policy, whereas u ( t ) : = f ( x ( t ) , Ψ ( t ) ) for some measurable function f : R n × S N → U is said to be a stationary Markov policy. The set of stationary Markov policies is denoted by F .
The following assumption represents a Lyapunov-like condition.
Assumption 2.
There exist a function w ≥ 1 in C 2 ( R n × S N ) and constants p , q > 0 such that:
(i)
lim | x | → ∞ w ( x , Ψ ) = + ∞ , and
(ii)
L u , Ψ w ( x , Ψ ) ≤ − q w ( x , Ψ ) + p for each u ∈ U and ( x , Ψ ) ∈ R n × S N .
It is important to point out that, since ψ ( · ) is irreducible and aperiodic, we can ensure the existence of a unique invariant measure for the Markov–Feller process ( x f ( · ) , Ψ ( · ) ) (see [21,22]). Moreover, Assumption 2 allows us to conclude that the Markov process ( x f ( · ) , Ψ ( · ) ) , with f ∈ F , is positive recurrent and that there exists a unique invariant probability measure μ f ( d x , d Ψ ) satisfying:
μ f ( w ) : = ∫ R n × S N w ( x , Ψ ) μ f ( d x , d Ψ ) < ∞ .
Note that for every f F , the measure μ f belongs to the space defined as follows.
Definition 2.
The w-norm is defined as:
‖ ν ‖ w : = sup ( x , Ψ ) ∈ R n × S N | ν ( x , Ψ ) | / w ( x , Ψ ) ,
where ν is the real-valued measurable function on R n × S N and w is the Lyapunov function given in Assumption 2. The normed linear space of real-valued measurable functions ν with finite w-norm is denoted by B w ( R n × S N ) . Moreover, the normed linear space of finite signed measures μ on R n × S N such that:
‖ μ ‖ w : = ∫ R n × S N w ( x , Ψ ) | μ | ( d x , d Ψ ) < ∞ ,
where | μ | is the total variation of μ , is denoted by M w ( R n × S N ) .
Remark 2.
For each ν B w ( R n × S N ) and μ M w ( R n × S N ) , we get:
| ∫ ν ( x , Ψ ) μ ( d x , d Ψ ) | ≤ ‖ ν ‖ w ∫ w ( x , Ψ ) | μ | ( d x , d Ψ ) = ‖ ν ‖ w ‖ μ ‖ w < ∞ ,
that is, the integral ∫ ν ( x , Ψ ) μ ( d x , d Ψ ) is finite.
The next result will be useful later.
Lemma 1.
The condition ( i i ) in Assumption 2 implies that:
(a)
E x , Ψ , f [ w ( x ( t ) , Ψ ( t ) ) ] ≤ e − q t w ( x , Ψ ) + ( p / q ) ( 1 − e − q t ) ;
(b)
lim t → ∞ ( 1 / t ) E x , Ψ , f [ w ( x ( t ) , Ψ ( t ) ) ] = 0 for all f ∈ F and ( x , Ψ ) ∈ R n × S N ;
(c)
μ f ( w ) ≤ p / q for all f ∈ F .
Proof.
(a) After applying Dynkin’s formula to the function e q t w , we use part ( i i ) of Assumption 2 to get:
E x , Ψ , f [ e q t w ( x ( t ) , Ψ ( t ) ) ] = w ( x , Ψ ) + E x , Ψ , f ∫ 0 t e q s [ L u , Ψ w ( x ( s ) , Ψ ( s ) ) + q w ( x ( s ) , Ψ ( s ) ) ] d s ≤ w ( x , Ψ ) + E x , Ψ , f ∫ 0 t e q s p d s ≤ w ( x , Ψ ) + ( p / q ) ( e q t − 1 ) .
Finally, multiplying inequality (12) by e − q t , we obtain the result. To prove ( b ) , it is enough to divide ( a ) by t and take the limit as t → ∞ . Integrating both sides of ( a ) with respect to the invariant probability μ f , we obtain μ f ( w ) ≤ e − q t μ f ( w ) + ( p / q ) ( 1 − e − q t ) , i.e., μ f ( w ) ≤ p / q ; thus, result ( c ) follows. □
In this work, the reward rate is a measurable function r : R n × E × U R that satisfies the following conditions:
Assumption 3.
(a)
The function r ( x , i , u ) is continuous on R n × E × U ; moreover, for each R > 0 , there exists a constant K ( R ) > 0 such that:
sup ( i , u ) ∈ E × U | r ( x , i , u ) − r ( y , i , u ) | ≤ K ( R ) | x − y | for all | x | , | y | ≤ R ,
i.e., r is locally Lipschitz in x uniformly with respect to i E and u U .
(b)
r ( · , · , u ) is in the normed linear space of real-valued functions B w ( R n × E ) uniformly in u; that is, there exists M > 0 such that for all ( x , i ) R n × E :
sup u ∈ U | r ( x , i , u ) | ≤ M w ( x , i ) .
Notation. The reward rate r : R n × E × U → R in vector form is given by:
r T ( x , Ψ , u ) = ( r ( x , 1 , u ) , r ( x , 2 , u ) , , r ( x , N , u ) ) ,
and its estimation is:
r ¯ ( x , Ψ ( t ) , u ) = Ψ T ( t ) r ( x , Ψ , u ) = ∑ i = 1 N Ψ i ( t ) r ( x , i , u ) .
Henceforth, for each stationary Markov policy f F , we write:
r ¯ ( x , Ψ , f ) : = r ¯ ( x , Ψ , f ( x , Ψ ) ) .

3. The Discounted Case

The objective of this section is to give conditions that guarantee the existence of discounted optimal policies for the α -discounted payoff criterion we are concerned with.
Definition 3.
Let r be as in Assumption 3 and α a positive constant. Given a stationary Markov policy f F and an initial state x ( 0 ) = x , Ψ ( 0 ) = Ψ , the total expected discount payoff (or discounted payoff, for short) is defined as:
V α ( x , Ψ , f ) : = E x , Ψ , f ∫ 0 ∞ e − α t r ¯ ( x ( t ) , Ψ ( t ) , f ) d t .
Observe that, by the stationarity of the problem, the value function does not depend on the time at which the optimal control problem is started.
The following result shows a bound of the total expected discount payoff given in Definition 3. We will omit its proof because it is a direct consequence of Assumption 3 and inequality in Lemma 1a.
Proposition 1.
Suppose that Assumptions 2 and 3b hold. Then, for each x in R n , Ψ S N and f F we have:
sup f ∈ F V α ( x , Ψ , f ) ≤ M ( α ) w ( x , Ψ ) with M ( α ) : = M / ( α + q ) + M p / ( α q ) ,
implying that the α-discounted payoff V α ( · , · , f ) belongs to the space B w ( R n × S N ) . Here, q and p are as in Assumption 2 and M is the constant in Assumption 3b.
α -discounted optimal problem. The optimal control problem with discounted payoff consists of finding a policy f * F such that:
V α * ( x , Ψ ) = V α ( x , Ψ , f * ) = sup f F V α ( x , Ψ , f ) .
The function V α * ( x , Ψ ) is referred to as the optimal discount payoff, whereas the policy f * F is called the discounted optimal.
Definition 4.
We say that a function v C 2 ( R n × S N ) B w ( R n × S N ) , and a policy f * F verify (are a solution of) the α -discounted payoff optimality equations (or Hamilton–Jacobi–Bellman (HJB) equation) if, for every x R n and Ψ S N :
α v ( x , Ψ ) = r ¯ ( x , Ψ , f * ) + L f * , Ψ v ( x , Ψ )
= sup f F r ¯ ( x , Ψ , f ) + L f , Ψ v ( x , Ψ ) .
Proposition 2.
If Assumptions 1, 2, and 3 hold, then:
(a)
There exists a function v in C 2 ( R n × S N ) B w ( R n × S N ) and a policy f * F , such that (14) and (15) hold.
(b)
The function v coincides with V α * ( x , Ψ ) in (13).
(c)
A policy f * F is an α-discount optimal if and only if (14) and (15) are satisfied.
Proof.
(a)
Theorem 3.2 in [23] ensures that, for each fixed Ψ , the value function V α ( x , Ψ ) defined in (13) is the unique solution of the HJB Equation (14) in C 2 ( R n ) ∩ B w ( R n ) . The existence of a function v in C 2 ( R n × S N ) ∩ B w ( R n × S N ) and a policy f * ∈ F such that (14) and (15) hold follows from Theorems 3.1 and 3.2 in [23] for each fixed Ψ ∈ S N .
(b)
By Dynkin’s formula for all ( x , Ψ ) R n × S N , f F and t 0 :
E x , Ψ , f [ e − α T v ( x ( T ) , Ψ ( T ) ) ] = v ( x , Ψ ) + E x , Ψ , f ∫ 0 T L f , Ψ ( e − α t v ( x ( t ) , Ψ ( t ) ) ) d t .
Observe that:
L f , Ψ ( e − α t v ( x ( t ) , Ψ ( t ) ) ) = − α e − α t v ( x , Ψ ) + e − α t b ¯ ( x , Ψ , f ) · v x ( x , Ψ ) + 1 2 e − α t T r ( a ( x , Ψ ) v x x ( x , Ψ ) ) + ⋯ = e − α t [ − α v ( x ( t ) , Ψ ( t ) ) + L f , Ψ v ( x ( t ) , Ψ ( t ) ) ] .
Therefore, the right-hand member of (16) equals:
E x , Ψ , f [ e − α T v ( x ( T ) , Ψ ( T ) ) ] = v ( x , Ψ ) + E x , Ψ , f ∫ 0 T e − α t ( L f , Ψ v ( x ( t ) , Ψ ( t ) ) − α v ( x ( t ) , Ψ ( t ) ) ) d t
and from (15):
E x , Ψ , f [ e − α T v ( x ( T ) , Ψ ( T ) ) ] − v ( x , Ψ ) ≤ − E x , Ψ , f ∫ 0 T e − α t r ¯ ( x ( t ) , Ψ ( t ) , f ) d t .
This yields:
v ( x , Ψ ) ≥ E x , Ψ , f ∫ 0 T e − α t r ¯ ( x ( t ) , Ψ ( t ) , f ) d t + E x , Ψ , f [ e − α T v ( x ( T ) , Ψ ( T ) ) ] .
Now, as a consequence of v is in B w ( R n × S N ) and Lemma 1 (a),(b), we have that:
| E x , Ψ , f [ e − α T v ( x ( T ) , Ψ ( T ) ) ] | ≤ e − α T ‖ v ‖ w E x , Ψ , f [ w ( x ( T ) , Ψ ( T ) ) ] ≤ e − α T ‖ v ‖ w [ e − q T w ( x , Ψ ) + ( p / q ) ( 1 − e − q T ) ] ( by Lemma 1 ( a ) ) → 0 as T → ∞ .
Therefore:
v ( x , Ψ ) ≥ E x , Ψ , f ∫ 0 ∞ e − α s r ¯ ( x ( s ) , Ψ ( s ) , f ) d s = V α ( x , Ψ , f ) for all f ∈ F .
Thus, v ( x , Ψ ) V α ( x , Ψ , f ) . In particular, if we take f * F satisfying (14) and proceed as above, we get:
v ( x , Ψ ) = V α * ( x , Ψ , f * ) .
(c)
The “if” part. Suppose that f * ∈ F satisfies Equations (14) and (15). Then, proceeding as in part (b), we obtain that f * ∈ F is an optimal policy.
The “only if” part. Mimicking the procedure of part (b), we can obtain, for any fixed f ∈ F :
α V α ( x , Ψ , f ) = r ¯ ( x , Ψ , f ) + L f , Ψ V α ( x , Ψ , f ) ; for   all   x R n , Ψ S N .
On the other hand, by part (b) we can assert that:
α v ( x , Ψ ) = sup f F { r ¯ ( x , Ψ , f ) + L f , Ψ v ( x , Ψ ) } ; for   all   x R n , Ψ S N .
Now let f * F be an optimal policy, so that V α ( x , Ψ , f * ) = v ( x , Ψ ) . Then, we get the result from (17) and (18).
Remark 3.
Briefly, Proposition 2 says that if the HJB-Equations (14) and (15) admit a solution v C 2 ( R n × S N ) B w ( R n × S N ) , then v is the optimal discount payoff (13) to the switching Markovian stochastic control problem with a discounted payoff completely observed, and f * F is an optimal stationary policy.

4. Average Optimality Criteria

As in (10), let μ f ( ν ) : = ∫ R n × S N ν ( x , Ψ ) μ f ( d x , d Ψ ) for every ν ∈ B w ( R n × S N ) .
Assumption 4.
Let ( x ( t ) , Ψ ( t ) ) be the solution of the hidden Markovian-switching diffusion (1)–(4). Then, we suppose that there exist positive constants C and δ such that:
sup f ∈ F | E x , Ψ , f [ ν ( x ( t ) , Ψ ( t ) ) ] − μ f ( ν ) | ≤ C e − δ t ‖ ν ‖ w w ( x , Ψ )
for all ( x , Ψ ) R n × S N , ν B w ( R n × S N ) , and t 0 . That is, we assume that the process ( x ( t ) , Ψ ( t ) ) is uniformly w-exponentially ergodic.
Next, we define the long-run average optimality criterion.
Definition 5.
For each f ∈ F , ( x , Ψ ) ∈ R n × S N , and T ≥ 0 , let:
J T ( x , Ψ , f ) : = E x , Ψ , f ∫ 0 T r ¯ ( x ( t ) , Ψ ( t ) , f ) d t .
The long-run expected average reward given the initial state ( x , Ψ ) is:
J ( x , Ψ , f ) : = lim inf T → ∞ ( 1 / T ) J T ( x , Ψ , f ) .
The function:
J * ( x , Ψ ) : = sup f ∈ F J ( x , Ψ , f ) for all ( x , Ψ ) ∈ R n × S N
is referred to as the optimal gain or the optimal average reward. If there is a policy f * F for which J ( x , Ψ , f * ) = J * ( x , Ψ ) for all ( x , Ψ ) R n × S N , then f * is called average optimal.
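As a practical aside, for a fixed stationary policy the long-run average (21) can be approximated by simulating the completely observed system (9) and time-averaging the filtered reward r ¯ . The Python sketch below does this for a scalar toy model with two hidden regimes; every coefficient, the policy, and the horizon are illustrative assumptions rather than data from the paper.

import numpy as np

# Monte Carlo estimate of the long-run average reward (21) for a fixed stationary
# policy, on a scalar toy model with two hidden regimes (all values are assumed).
rng = np.random.default_rng(1)
dt, T = 1e-3, 200.0
n_steps = int(T / dt)

Q = np.array([[-0.4, 0.4], [0.6, -0.6]])     # generator (assumed)
h = np.array([0.5, 1.5]); sigma0 = 1.0        # observation model (assumed)
b = lambda x, i, u: -(1.0 + i) * x + u        # regime-dependent drift (assumed)
sig = lambda x, i: 0.5                        # diffusion coefficient (assumed)
r = lambda x, i, u: -(x**2) - 0.1 * u**2      # reward rate (assumed)
policy = lambda x, Psi: np.clip(-0.5 * x, 0.0, 1.0)  # a stationary Markov policy

psi, x, Psi = 0, 1.0, np.array([0.5, 0.5])
acc = 0.0
for _ in range(n_steps):
    u = policy(x, Psi)
    # filtered reward and filtered coefficients as in (6)-(7)
    acc += sum(Psi[i] * r(x, i, u) for i in (0, 1)) * dt
    b_bar = sum(Psi[i] * b(x, i, u) for i in (0, 1))
    s_bar = sum(Psi[i] * sig(x, i) for i in (0, 1))
    # hidden chain, observation, and Wonham filter step as in Section 2
    if rng.random() < -Q[psi, psi] * dt:
        psi = 1 - psi
    dy = h[psi] * dt + sigma0 * np.sqrt(dt) * rng.standard_normal()
    hbar = h @ Psi
    G = (np.diag(h) - hbar * np.eye(2)) @ Psi
    Psi = Psi + (Q.T @ Psi - hbar * G / sigma0**2) * dt + G * dy / sigma0**2
    Psi = np.clip(Psi, 1e-10, None); Psi /= Psi.sum()
    # state update of the completely observed system (9)
    x += b_bar * dt + s_bar * np.sqrt(dt) * rng.standard_normal()

print("estimated long-run average reward:", acc / T)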
Remark 4.
In some optimal control problems, the limit of J T ( x , Ψ , f ) / T as T → ∞ might not exist. To avoid this difficulty, the average payoff is defined as a liminf, as in (21), which can be interpreted as the worst average payoff, which is then to be maximized.
For each f F , let:
J ( f ) : = μ f ( r ¯ ( · , · , f ) ) = ∫ R n × S N r ¯ ( x , Ψ , f ) μ f ( d x , d Ψ ) ,
with μ f as in (10). Now, observe that J T defined in (20) can be expressed as:
J T ( x , Ψ , f ) = T J ( f ) + 0 T [ E x , Ψ , f r ¯ ( x ( t ) , Ψ ( t ) , f ) J ( f ) ] d t ,
therefore, multiplying (23) by 1 / T and letting T → ∞ , we obtain, by (19):
J ( x , Ψ , f ) = lim T 1 T J T ( x , Ψ , f ) = J ( f ) for   all ( x , Ψ ) R n × S N .
Moreover, by the definition (22) of J ( f ) , the Assumption 3b, and (10):
| J ( f ) | ≤ ∫ R n × S N | r ¯ ( x , Ψ , f ) | μ f ( d x , d Ψ ) ≤ M · μ f ( w ) < ∞ for all f ∈ F .
Therefore, by Lemma 1c:
sup f ∈ F | J ( f ) | ≤ M · μ f ( w ) ≤ M · p / q ,
thus, the reward J ( f ) is uniformly bounded on F . From (24) and (25) we obtain that the following:
J * : = sup f F J ( f ) = sup f F J ( x , Φ , f ) = J * ( x , Φ ) for   all ( x , Φ ) R n × S N
has a finite value.
Thus, under Assumptions 1, 2, and 4, it follows from (19) (w-exponential ergodicity) and (22) that the long-run expected average reward (21) coincides with the constant J ( f ) for every f ∈ F ; indeed, this follows directly from the decomposition (23) of J T .
Definition 6.
(a) A pair ( J , v ) consisting of a constant J R and a function v C 2 ( R n × S N ) B w ( R n × S N ) is said to be a solution of the average reward HJB-equation if:
J = max u ∈ U [ r ¯ ( x , Ψ , u ) + L u , Ψ v ( x , Ψ ) ] for all ( x , Ψ ) ∈ R n × S N .
(b) If a stationary policy f F attains the maximum in (27), that is:
J = r ¯ ( x , Ψ , f ) + L f , Ψ v ( x , Ψ ) for all ( x , Ψ ) ∈ R n × S N ,
then f is called a canonical policy.
The following theorem shows that if a policy satisfies the average reward HJB-equation, then it is an optimal average policy.
Theorem 1.
If Assumptions 1, 2, and 3 hold, then:
(i)
The average reward HJB Equation (27) admits a unique solution ( J , v ) , with v C 2 ( R n × S N ) B w ( R n × S N ) satisfying v ( 0 , Ψ 0 ) = 0 for some Ψ 0 S N fixed.
(ii)
There exists a canonical policy.
(iii)
The constant J in (27) equals J * in (26).
(iv)
There exists a stationary average optimal policy.
Proof.
( i ) The proof of this part follows essentially the same steps as the proof of Theorem 6.4 in [24]; thus, we omit it.
( i i ) Since u ↦ r ( · , · , u ) and u ↦ b ( · , · , u ) are continuous functions on the compact set U , we obtain that u ↦ r ¯ ( · , · , u ) + L u , Ψ v ( · , · ) is a continuous function on U ; thus, the existence of a canonical policy f ∈ F follows from standard measurable selection theorems; see [25] (Theorem 12.2).
( i i i ) Observe that, by (27):
J ≥ r ¯ ( x , Ψ , u ) + L u , Ψ v ( x , Ψ ) for all ( x , Ψ ) ∈ R n × S N and u ∈ U .
Therefore, for any f F , using Dynkin’s formula and (29) we obtain:
E x , Ψ , f [ v ( x ( t ) , Ψ ( t ) ) ] = v ( x , Ψ ) + E x , Ψ , f ∫ 0 t L f , Ψ v ( x ( s ) , Ψ ( s ) ) d s ≤ v ( x , Ψ ) + J t − E x , Ψ , f ∫ 0 t r ¯ ( x ( s ) , Ψ ( s ) , f ) d s .
Thus, multiplying by t 1 in (30) we have:
t −1 J t ( x , Ψ , f ) ≤ J + t −1 v ( x , Ψ ) − t −1 E x , Ψ , f [ v ( x ( t ) , Ψ ( t ) ) ] .
Consequently, letting t in (31), and using Lemma 1b and (24), we obtain:
J ≥ J ( f ) for all f ∈ F .
To obtain the reverse inequality, similar arguments show that if:
J ≤ r ¯ ( x , Ψ , u ) + L u , Ψ v ( x , Ψ ) for all ( x , Ψ ) ∈ R n × S N and u ∈ U ,
then J ≤ J ( f ) for all f ∈ F . This last inequality, together with (29), yields that if f ∈ F is a canonical policy, which satisfies (28), then we obtain that J ( f ) = J , and by (26):
J = J ( f ) = J * = J * ( x , Ψ ) for   all ( x , Ψ ) R n × S N .
( i v ) Similar arguments to those given in ( i i i ) lead us to that if f F is a canonical policy, then it is an average optimal. □
Theorem 1 indicates that if a policy satisfies the HJB Equation (27), then this policy is optimal for the optimal control problem associated with the HJB equation. The difficulty with this approach is how to obtain a solution ( J * , v , f ) of the HJB equation. The most common way to solve the HJB equation is based on variants of the vanishing discount approach (see [11,24,26] for details).
Remark 5
([1]). In the optimality criteria known as bias optimality, overtaking optimality, sensitive discount optimality, and Blackwell optimality, the early returns and the asymptotic returns are both relevant; thus, to study them, we need first to analyze the discounted and average optimality criteria. These optimality criteria will be studied in future work.
Remark 6.
  • On Assumption 1 ([7], Theorems 3.17 and 3.18). The uniform Lipschitz and linear growth conditions on b and σ ensure the existence and uniqueness of the global solution of the SDE with Markovian switching (1). The uniform Lipschitz conditions ( max ( i , u ) ∈ E × U ‖ b ( x , i , u ) − b ( y , i , u ) ‖ ≤ C 1 ‖ x − y ‖ , ‖ σ ( x , i ) − σ ( y , i ) ‖ ≤ C 2 ‖ x − y ‖ ) imply that the rates of change of the functions b ( x , i , u ) and σ ( x , i ) are less than or equal to the rate of change of a linear function of x. This gives, in particular, the continuity of b and σ in x on [ t 0 , ∞ ) . Thus, the uniform Lipschitz condition excludes functions b and σ that are discontinuous with respect to x. It is important to note that continuity of a function does not guarantee that it satisfies the uniform Lipschitz condition; for example, the continuous function sin ( x 2 ) does not satisfy it. The uniform Lipschitz condition can be replaced by a local Lipschitz condition. In fact, the local Lipschitz condition allows us to include a great variety of functions, such as functions v ∈ C 2 ( R n × E ) . However, the linear growth condition (Assumption 1(d)) also excludes some important functions, such as b ( x , i ) = | x | 2 x + i . Assumption 1(d) is quite standard but may be restrictive for some applications. As far as the results of this paper are concerned, the uniform Lipschitz condition may be replaced by the weaker condition:
    x T b ( x , i , u ) + 1 2 ‖ σ ( x , i ) ‖ 2 ≤ K ( 1 + ‖ x ‖ 2 ) for all ( x , i ) ∈ R n × E ,
    where K is a positive constant. This last condition allows us to include many functions as the coefficients b and σ . For example:
    b ( x , i , u ) = a ( i ) [ x ( t ) − x 3 ( t ) ] + x g ( u ) , σ ( x , i ) = b ( i ) x 2 ( t )
    with a ( i ) , b ( i ) > 0 such that b 2 ( i ) ≤ 2 a ( i ) , and for some given continuous function g : U → R . It is possible to check that a diffusion process with the parameters given above satisfies the local Lipschitz condition but not the linear growth condition (a numerical spot-check of (33) for these coefficients is sketched after this remark). On the other hand, note that:
    a ( i ) x [ x − x 3 ] + x 2 g ( u ) + 1 2 b 2 ( i ) x 4 ≤ a ( i ) x 2 + x 2 g ( u ) ≤ K ( 1 + x 2 )
    with K = max ( i , u ) ∈ E × U { a ( i ) + g ( u ) } and U a compact control set. That is, condition (33) is fulfilled. Thus, ([7], Theorem 3.18) guarantees that the SDE with Markovian switching with these coefficients has a unique global solution on [ t 0 , ∞ ) .
  • On Assumption 2 ([7], Theorem 5.2). This assumption guarantees the positive recurrence and the existence of an invariant measure μ f ( d x , d Ψ ) for the Markov–Feller process ( x ( t ) , Ψ ( t ) ) . Moreover, if this assumption holds together with the inequality k | x | p ¯ ≤ w ( x , i ) for positive numbers k , p ¯ , H , then the diffusion process (1) satisfies:
    limsup t → ∞ E | x ( t ) | p ¯ ≤ H ,
    that is, x ( t ) is asymptotically bounded in p ¯ th moment. Some Lyapunov functions are, for example:
    w ( x , i ) = k ( i ) | x | p ¯ , k ( i ) > 0 , p ¯ ≥ 2 , ( x , i ) ∈ R n × E ,
    considering that the coefficients b and σ in (1) satisfy the Lipschitz condition and:
    x T b ( x , i , u ) + ( p ¯ − 1 ) / 2 ‖ σ ( x , i ) ‖ 2 ≤ B ( i ) ‖ x ‖ 2 + a ,
    with a > 0 and B ( i ) constants. In fact, using the inequality a c b 1 − c ≤ c a + ( 1 − c ) b for a , b ≥ 0 , c ∈ [ 0 , 1 ] , and (35), we get:
    L u , ψ w ( x , i ) = k ( i ) p ¯ | | x | | p ¯ 1 b ( x , i , u ) + 1 2 k ( i ) p ¯ ( p ¯ 1 ) | | σ ( x , i ) | | 2 | x | p ¯ 2 + j = i N q i j k ( j ) | | x | | p ¯ = p ¯ k ( i ) | | x | | p ¯ 2 x T b ( x , i , u ) + p ¯ 1 2 | | σ ( x , i ) | | 2 + j = i N q i j k ( j ) | | x | | p ¯ p ¯ k ( i ) | | x | | p ¯ 2 { B ( i ) | | x | | 2 + a } + j = i N q i j k ( j ) | | x | | p ¯ ( p ¯ B ( i ) k ( i ) + j = i N q i j k ( j ) ) | | x | | p ¯ + a p ¯ k ( i ) | | x | | p ¯ 2 = ( p ¯ B ( i ) k ( i ) + j = i N q i j k ( j ) ) | | x | | p ¯ + ( a p ¯ k ( i ) ) p ¯ / 2 2 λ ( i ) ( p ¯ 2 ) / 2 2 / p ¯ λ ( i ) 2 | | x | | p ¯ ( p ¯ 2 ) / p ¯ ( p ¯ B ( i ) k ( i ) + j = i N q i j k ( j ) ) | | x | | p ¯ + 2 p ¯ ( a p ¯ k ( i ) ) p ¯ / 2 2 λ ( i ) ( p ¯ 2 ) / 2 + λ ( i ) ( p ¯ 2 ) 2 p ¯ | | x | | p λ ( i ) ( p ¯ + 2 ) 2 p ¯ | | x | | p ¯ + 2 p ¯ ( a p ¯ k ( i ) ) p ¯ / 2 2 λ ( i ) ( p ¯ 2 ) / 2
    where λ ( i ) = ( p ¯ B ( i ) k ( i ) + j = i N q i j k ( j ) ) .
    If we set:
    q : = min i E λ ( i ) ( p ¯ + 2 ) 2 p ¯ p : = max i E 2 p ¯ ( a p ¯ k ( i ) ) p ¯ / 2 2 λ ( i ) ( p ¯ 2 ) / 2 ,
    then
    L u , ψ w ( x , i ) q | | x | | p ¯ + p q w ( x , i ) + p .
    Now, taking the Lyapunov function (34) we define:
    w ( x , Ψ ) = i = 1 N Ψ i w ( x , i ) = i = 1 N Ψ i k ( i ) | | x | | p ¯ .
    Considering that w x ( x , Ψ ) = i = 1 N Ψ i k ( i ) p ¯ | | x | | p ¯ 1 , w x x ( x , i ) = i = 1 N Ψ i k ( i ) p ¯ ( p ¯ 1 ) | | x | | p ¯ 2 , w Ψ ( x , i ) = [ k ( i ) , k ( 2 ) , , k ( n ) ] | | x | | p ¯ and w Ψ Ψ ( x , Ψ ) = 0 ; a similar procedure to that given in (37) allows us to obtain that W is also a Lyapunov function. That is:
    L u , Ψ w ( x , Ψ ) q | | x | | p ¯ + p q w ( x , Ψ ) + p .
  • On Assumption 3. This assumption allows the reward rate (or cost rate) to be unbounded from above and below. For the Lyapunov function w ( x , i ) = k ( i ) | x | p ¯ , a reward rate of the form:
    r ( x , i , u ) = k ( i ) | x | p ¯ + h ( u )
    for some given continuous function h : U → R satisfies Assumption 3. In fact:
    | r ( x , i , u ) | ≤ k ( i ) | x | p ¯ + max u ∈ U h ( u ) ≤ ( k ( i ) + max u ∈ U h ( u ) ) | x | p ¯ ≤ M w ( x , i )
    with M = max i ∈ E { k ( i ) + max u ∈ U h ( u ) } and U a compact set.
  • On Assumption 4. This assumption describes the asymptotic behavior of x ( t ) as t goes to infinity. Sufficient conditions for the w-exponential ergodicity of the process ( x ( t ) , ψ ( t ) ) can be found in ([1], Theorem 2.8); in fact, the proof of that theorem requires Assumptions 1 and 2. Note that, for the optimal control problem with the discounted optimality criterion, the w-exponential ergodicity of the process ( x ( t ) , ψ ( t ) ) is not required; this assumption is only needed to study the average reward optimality criterion.
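The numerical spot-check announced in the first item of this remark: the Python sketch below evaluates condition (33) on a grid for the example coefficients b ( x , i , u ) = a ( i ) [ x − x 3 ] + x g ( u ) and σ ( x , i ) = b ( i ) x 2 , with hypothetical values of a ( i ) , b ( i ) , g , and U chosen so that b 2 ( i ) ≤ 2 a ( i ) .

import numpy as np

# Spot-check of condition (33) for the Remark 6 example:
#   b(x,i,u) = a(i)[x - x^3] + x g(u),  sigma(x,i) = b(i) x^2,  with b(i)^2 <= 2 a(i).
# The numerical values of a, b, g, and U are assumptions chosen for illustration.
a = {1: 1.0, 2: 2.0}
bcoef = {1: 1.2, 2: 1.5}                  # satisfies b(i)^2 <= 2 a(i)
g = lambda u: np.cos(u)                    # any continuous g on a compact U
U = np.linspace(0.0, 1.0, 21)              # compact control set (assumed)
K = max(a.values()) + max(g(U))            # K = max_{i,u} {a(i) + g(u)}

xs = np.linspace(-50.0, 50.0, 2001)
ok = True
for i in a:
    for u in U:
        lhs = xs * (a[i] * (xs - xs**3) + xs * g(u)) + 0.5 * (bcoef[i] * xs**2) ** 2
        ok &= bool(np.all(lhs <= K * (1.0 + xs**2) + 1e-9))
print("condition (33) holds on the grid:", ok)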
Remark 7.
In the following sections, our theoretical results are implemented in several applications. The dynamic system in the LQR applications evolves according to linear stochastic differential equations d x ( t ) = ( A ( i ) x ( t ) + B u ( t ) ) d t + σ d W ( t ) , so Assumption 1 holds. The Markov chain has two states, that is, E = { 1 , 2 } . The payoff rate is of the form r ( x , i , u ) = x T R ( i ) x + u T S u with x ∈ R 2 and u ∈ U : = [ 0 , a 1 ] × [ 0 , a 2 ] , a 1 , a 2 > 0 . Taking w ( x , i ) = x T R ( i ) x + 1 we get:
| r ( x , i , u ) | ≤ | x T R ( i ) x | + | u T S u | ≤ max u ∈ U ( | u T S u | + 1 ) | x T R ( i ) x + 1 | = M 2 w ( x , i )
with M 2 = m a x u U ( | u T S u | + 1 ) ; thus, Assumption 3 also holds. A few calculations allow us to obtain the Assumption 2 with w ( x , Ψ ) = i = 1 2 Ψ i ( t ) w ( x , ψ ( t ) ) = i = 1 2 Ψ i ( t ) ( x T R ( ψ ( t ) ) x + 1 ) . In fact:
L u , Ψ w ( x , Ψ ) = x 2 [ 2 A ( i ) [ Ψ 1 R ( 1 ) + Ψ 2 R ( 2 ) ] + R ( 1 ) i = 1 2 q i 1 Ψ i + R ( 2 ) i = 1 2 q i 2 Ψ i ] + x [ R ( 1 ) i = 1 2 q i 1 Ψ i + R ( 2 ) i = 1 2 q i 2 Ψ i ] + σ 2 [ Ψ 1 R ( 1 ) + Ψ 2 R ( 2 ) ] .
Let 0 < q < [ 2 A ( i ) [ Ψ 1 R ( 1 ) + Ψ 2 R ( 2 ) ] + R ( 1 ) i = 1 2 q i 1 Ψ i + R ( 2 ) i = 1 2 q i 2 Ψ i ] , and rewrite L u , Ψ w ( x , Ψ ) as:
L u , Ψ w ( x , Ψ ) = q w ( x , Ψ ) + l ( x , i , u ) .
where
l ( x , i , u ) : = q w ( x , Ψ ) + x 2 [ 2 A ( i ) [ Ψ 1 R ( 1 ) + Ψ 2 R ( 2 ) ] + R ( 1 ) i = 1 2 q i 1 Ψ i + R ( 2 ) i = 1 2 q i 2 Ψ i ] + x [ R ( 1 ) i = 1 2 q i 1 Ψ i + R ( 2 ) i = 1 2 q i 2 Ψ i ] + σ 2 [ Ψ 1 R ( 1 ) + Ψ 2 R ( 2 ) ] p ,
where the last inequality is obtained from fact that the function l ( x . i . u ) is continuous on the compact set U for all x R and that the term q + [ 2 A ( i ) [ Ψ 1 R ( 1 ) + Ψ 2 R ( 2 ) ] + R ( 1 ) i = 1 2 q i 1 Ψ i + R ( 2 ) i = 1 2 q i 2 Ψ i ] is negative. Thus, L u , Ψ w ( x , Ψ ) = q w ( x , Ψ ) + p and Assumption 2b follows.

5. Application 1: Discounted Linear Quadratic Regulator (LQR)

In this subsection, we consider the α -discounted linear quadratic regulator. To this end, we suppose that the dynamic system evolves according to the linear stochastic differential equations:
d x t = ( A ¯ ( Ψ ( t ) ) x t + B u ( t ) ) d t + σ d W ( t ) .
with A ¯ ( Ψ ( t ) ) : = ∑ i = 1 N A ( i ) Ψ i ( t ) , A : E → R n × n , B ∈ R n × m , W ( · ) an m-dimensional Brownian motion, and σ a positive constant. The expected cost is:
V α ( x , Ψ , u ) : = E x , Ψ u ∫ 0 ∞ e − α s { x T ( s ) D ¯ ( Ψ ( s ) ) x ( s ) + u T ( s ) R ¯ ( Ψ ( s ) ) u ( s ) } d s ,
where D ¯ ( Ψ ( t ) ) : = i = 1 N D ( i ) Ψ i ( t ) , D : E R n × n , R ¯ ( Ψ ( t ) ) : = i = 1 N R ( i ) Ψ i ( t ) and R : E R n × n . The optimality equation or HJB-equation for the α -discounted partially observed LQR-optimal control problem is:
α v ( x , Ψ ) = min u ∈ U { x T D ¯ ( Ψ ( t ) ) x + u T R ¯ ( Ψ ( t ) ) u + L u v ( x , Ψ ) } ,
where the infinitesimal generator of the process ( x ( t ) , Ψ ( t ) ) applied to v ( x , Ψ ) ∈ C 2 , 2 ( R n × S N ) is:
L u v ( x , Ψ ) = ( A ¯ ( Ψ ) x + B u ) · v x ( x , Ψ ) + 1 2 T r ( σ σ T ) v x x ( x , Ψ ) + ⟨ Q T Ψ , v Ψ ( x , Ψ ) ⟩ + 1 2 T r [ A 2 v Ψ Ψ ( x , Ψ ) ] ,
where
A 2 = [ σ 0 −1 ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] [ σ 0 −1 ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] T .
Note that, by minimizing (40) with respect to u, we find that the optimal control is of the form:
f * ( x , Ψ ) = − 1 2 R ¯ −1 ( Ψ ) B T v x .
By Proposition 2, if there exist a function v ∈ C 2 , 2 ( R n × S N ) ∩ B w ( R n × S N ) and a policy f * ∈ F such that (14) and (15) hold, then v coincides with the value function v * ( x , Ψ ) : = min u ∈ U V α ( x , Ψ , u ) and u ( t ) = f * ( x ( t ) , Ψ ( t ) ) is the α-discount optimal policy. Thus, we propose that the function v ∈ C 2 , 2 ( R n × S N ) ∩ B w ( R n × S N ) that solves the HJB Equation (40) has the form:
v ( x , Ψ ) = x T K x + n ( Ψ ) + c ,
where n : S N → R is a twice continuously differentiable function, c is a constant, and K is a positive definite matrix. Inserting the derivatives of v ( x , Ψ ) in (43), we get the optimal control:
f * ( x , Ψ ) = − R ¯ −1 ( Ψ ) B T K T x ,
where the equality (40) holds if the matrix K satisfies the algebraic Riccati equation:
A ¯ T ( Ψ ( t ) ) K + K A ¯ ( Ψ ( t ) ) − K B R ¯ −1 ( Ψ ( t ) ) B T K + D ¯ ( Ψ ( t ) ) − α K = 0 ,
c = T r [ σ σ T K ] / α ,
and n ( · ) ∈ C 2 ( S N ) satisfies the partial differential equation:
⟨ Q T Ψ ( t ) , ∇ n ( Ψ ( t ) ) ⟩ + 1 2 T r [ A 2 ∇ 2 n ( Ψ ( t ) ) ] − α n ( Ψ ( t ) ) = 0 , Ψ ( t ) ∈ S N ,
where A 2 is as in (42), and ∇ n and ∇ 2 n are the gradient and the Hessian of n, respectively.
Simulation results. In the following figures, we assume that the Markov chain ψ ( t ) has two states, namely, E = { 1 , 2 } , and that the dynamic system satisfies x ( t ) ∈ R 2 . We have computed the Wonham filter, the states of the dynamic system (39) x ( t ) = [ x 1 ( t ) , x 2 ( t ) ] T with initial condition x ( 0 ) = [ 10 , 15 ] T , the value function (44), and the optimal control (45) for the following data: σ = 1 , σ 0 = 1 , α = 0.01 , h ( 1 ) = 1 , h ( 2 ) = 2 , Ψ 1 ( 0 ) = 0.5 , Ψ 2 ( 0 ) = 0.5 , R ( 1 ) = 1 , R ( 2 ) = 2 :
A ( 1 ) = [ 5 , 1 ; 0 , 10 ] , A ( 2 ) = [ 10 , 1 ; 0 , 10 ] ,
D ( 1 ) = [ 1 , 0 ; 0 , 1 ] , D ( 2 ) = [ 2 , 0 ; 0 , 3 ] ,
and the transition rate matrix:
Q = [ −0.2 , 0.2 ; 0.7 , −0.7 ] .
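For readers who want to reproduce K numerically, the sketch below solves the discounted algebraic Riccati equation above regime by regime (i.e., at the vertices of S N ) with scipy, using the standard shift A − ( α / 2 ) I so that an ordinary CARE solver applies. The input matrix B is not listed in the text, so the value used here is an assumption, and the entries of A ( i ) are taken exactly as printed in the table above.

import numpy as np
from scipy.linalg import solve_continuous_are

# Regime-wise solution of  A^T K + K A - K B R^{-1} B^T K + D - alpha K = 0,
# obtained by feeding the shifted matrix A - (alpha/2) I to a standard CARE solver.
# B is not specified in the text; the value below is an assumption for illustration.
alpha = 0.01
A = {1: np.array([[5.0, 1.0], [0.0, 10.0]]),
     2: np.array([[10.0, 1.0], [0.0, 10.0]])}
D = {1: np.eye(2), 2: np.diag([2.0, 3.0])}
R = {1: np.array([[1.0]]), 2: np.array([[2.0]])}
B = np.array([[0.0], [1.0]])                    # assumed input matrix

for i in (1, 2):
    A_shift = A[i] - 0.5 * alpha * np.eye(2)
    K = solve_continuous_are(A_shift, B, D[i], R[i])
    gain = np.linalg.solve(R[i], B.T @ K)       # feedback f*(x) = -gain @ x, cf. (45)
    print(f"regime {i}: K =\n{K}\ngain = {gain}")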
To solve the Wonham filter, we use the numerical method given in ([18], Section 8.4), considering that the Markov chain can only be observed through d y ( t ) = h ( ψ ( t ) ) d t + σ 0 d B ( t ) .
Figure 1 shows the solution of the Wonham filter equation and the states of the hidden Markov chain ψ ( t ) . As can be noted, at t = 0.05 s, Ψ 2 ( 0.05 ) = P ( ψ ( 0.05 ) = 2 | y ( s ) , 0 ≤ s ≤ 0.05 ) ≥ Ψ 1 ( 0.05 ) , implying that, with probability greater than 0.5, the Markov chain is in state 2 at t = 0.3 ( ψ ( 0.3 ) = 2 ). The evolution of the dynamic system (39) is given in Figure 2 (top); in this figure, we can note that the optimal control (45) moves the initial point x ( 0 ) = [ 10 , 15 ] T to the point [ 0 , 0 ] T by t = 0.8 s, indicating the good performance of the optimal control (45). The asymptotic behavior of the optimal control (45) is given in Figure 2 (bottom); this control stabilizes at zero around t = 0.8 s, since x ( t ) also stabilizes at zero around t = 0.8 s.

6. Application 2: Average LQR: Modeling of a Quarter-Car Suspension

In this section, the basic quarter-car suspension model analyzed in [27] is considered, see Figure 3. The parameters are: the sprung mass ( m s ), the unsprung mass ( m u ), the suspension spring constant ( k s ), and the tire spring constant (k). Let z s , z u , and z r be the vertical displacements of the sprung mass, the unsprung mass, and the road profile, respectively. The equations of motion for this model are given by:
m s z̈ s ( t ) = − k s ( z s ( t ) − z u ( t ) ) − u ( t ) ,
m u z̈ u ( t ) = k s ( z s ( t ) − z u ( t ) ) − k ( z u ( t ) − z r ( t ) ) + u ( t ) .
Now, defining x 1 ( t ) = ż s ( t ) , x 2 ( t ) = ż u ( t ) , x 3 ( t ) = z s ( t ) − z u ( t ) , and x 4 ( t ) = z u ( t ) − z r ( t ) , the equations of motion (46) and (47) can be expressed in matrix form as:
d x ( t ) = ( A x ( t ) + B u ( t ) ) d t + C 1 d z r ( t )
where d x ( t ) = [ d x 1 ( t ) , d x 2 ( t ) , d x 3 ( t ) , d x 4 ( t ) ] T , A = [ 0 , 0 , − k s / m s , 0 ; 0 , 0 , k s / m u , − k / m u ; 1 , −1 , 0 , 0 ; 0 , 1 , 0 , 0 ] , B = [ − 1 / m s , 1 / m u , 0 , 0 ] T , C 1 = [ 0 , 0 , 0 , −1 ] T , and, in the time domain, the road profile z r ( t ) can be represented as the output of a linear first-order filter driven by white noise as follows:
d z r ( t ) = − a ( ψ ( t ) ) V z r ( t ) d t + σ 2 d W 1 ( t ) ,
where V is the vehicle speed (assumed constant), σ 2 is a positive constant, and a is the road roughness coefficient depending on the type of road. Here, we assume that a depends on a hidden Markov chain, that is, a ( ψ ( t ) ) with ψ ( t ) { 1 , 2 } . In our case, we consider that the dynamic system (48) evolves with additional white noise, that is:
d x ( t ) = ( A x ( t ) + B u ( t ) ) d t + σ 1 d W ( t ) + C 1 d z r ( t )
The authors of [27] introduced the following performance index in order to trade off ride comfort and handling while maintaining the constraint on suspension deflection:
J ( x , Ψ , u ) = lim T → ∞ ( 1 / T ) E x , Ψ , u [ ∫ 0 T [ c 1 ( d 2 z s / d t 2 ) 2 + c 2 [ z s ( t ) − z u ( t ) ] 2 + c 3 [ z u ( t ) − z r ( t ) ] 2 + c 4 u ( t ) 2 ] d t ]
Defining y : = ( d 2 z s / d t 2 , z s ( t ) − z u ( t ) , z u ( t ) − z r ( t ) ) , C : = diag ( c 1 , c 2 , c 3 ) , and R : = [ c 4 ] , we can rewrite (50) as:
J ( x , Ψ , u ) = lim T → ∞ ( 1 / T ) E x , Ψ , u ∫ 0 T [ y C y T + u T ( t ) R u ( t ) ] d t
Now, from the equations of motion in (46) and (47), note that y = M x + N u with M = [ 0 , 0 , − k s / m s , 0 ; 0 , 0 , 1 , 0 ; 0 , 0 , 0 , 1 ] and N = [ − 1 / m s , 0 , 0 ] T . Thus, replacing this matrix form of y in (51), we can rewrite (50) again as:
J ( x , Ψ , u ) = lim T 1 T E x , Ψ , u 0 T ( x T Q 1 x + 2 x T Q 2 u + u T R 1 u ) d t
where Q 1 = M T C M , Q 2 = M T C N , R 1 = N T C N + R .
The optimal control problem (OCP). The OCP in this application consists of finding u * U such that it minimizes the performance index (52) considering that the dynamic system evolves according to the stochastic differential Equation (49).
In the dynamic programming technique, we need the infinitesimal generator L u of the process ( x ( t ) , Ψ ( t ) , z r ( t ) ) applied to v ( x , Ψ , z r ) ∈ C 2 , 2 , 2 ( R n × S N × R ) ; in this case, this generator is:
L u v ( x , Ψ , z r ) = − a ( Ψ ( t ) ) V z r v z r ( x , Ψ , z r ) + ( A x + B u ) · v x ( x , Ψ , z r ) + ⟨ Q T Ψ , v Ψ ( x , Ψ , z r ) ⟩ + 1 2 T r [ σ 1 σ 1 T ] v x x ( x , Ψ , z r ) + 1 2 T r [ σ 2 σ 2 T ] v z r z r ( x , Ψ , z r ) + 1 2 T r [ A 2 v Ψ Ψ ( x , Ψ , z r ) ] ,
where A 2 ( Ψ ( t ) ) = [ σ 0 −1 ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] [ σ 0 −1 ( diag ( h ) − h T ( Ψ ( t ) ) I N ) Ψ ( t ) ] T , whereas the Hamilton–Jacobi–Bellman equation (or dynamic programming equation) associated with this problem is:
J = max u ∈ U [ x T Q 1 x + 2 x T Q 2 u + u T R 1 u + L u v ( x , Ψ , z r ) ] for all ( x , Ψ ) ∈ R n × S N ,
see [28] for more details.
Proposition 3.
Assume that ( x ( t ) , z r ( t ) , Ψ ( t ) ) evolves according to (49). Then, the control that minimizes the long-run cost (52) is:
f * ( x , Ψ , z r ) = − R 1 −1 ( Q 2 T + B T K ) x ( t ) ,
whereas the corresponding function v that solves the HJB Equation (54) is given by:
v ( x , Ψ , z r ) = x T K x + g ( z r ) + n ( Ψ )
where K is a positive semi-definite matrix that satisfies the Ricatti differential equation
K ( A − B R 1 −1 Q 2 T ) + ( A − B R 1 −1 Q 2 T ) T K − K B R 1 −1 B T K − ( Q 1 − Q 2 R 1 −1 Q 2 T ) = 0 ,
and g ( · ) C 2 ( R ) satisfies the differential equation:
− a ( Ψ ) V z r g ′ ( z r ) + 1 2 σ 2 2 g ″ ( z r ) = 0 ,
and n ( · ) C 2 ( S N ) satisfies the partial differential equation:
⟨ Q T Ψ , ∇ n ( Ψ ( t ) ) ⟩ + 1 2 T r [ A 2 ∇ 2 n ( Ψ ) ] = 0 ,
where A 2 is as in (41), and ∇ n and ∇ 2 n denote the gradient and the Hessian of n, respectively. The optimal cost is given by:
J = T r [ σ 1 σ 1 T K ] = J * ( x , Ψ ) = min u ∈ U J ( x , Ψ , u ) .
Proof.
The HJB equation for the partially observed LQR optimal control problem, in which ( x ( t ) , Ψ ( t ) ) evolves according to (49) with the finite cost (52), is (54), where L u v ( x , Ψ , z r ) is the infinitesimal generator given in (53). We look for a candidate solution v ∈ C 2 , 2 , 2 ( R n × S N × R ) to (54) of the form:
v ( x , Ψ , z r ) = x T K x + g ( z r ) + n ( Ψ ) ,
for some continuous functions g ( · ) ∈ C 2 ( R ) , n ( · ) ∈ C 2 ( S N ) , and K a positive semi-definite matrix. We assume that g ( z r ) > 0 for all z r ∈ R and that n ( Ψ ) is positive, so that the function ( x , Ψ , z r ) ↦ v ( x , Ψ , z r ) is convex.
Now, the function u ↦ 2 x T Q 2 u + u T R 1 u + B u · v x is strictly convex on the compact set U and thus attains its minimum at:
f * ( x , Ψ , z r ) = − 1 2 R 1 −1 [ 2 Q 2 T x + B T v x ] = − R 1 −1 ( Q 2 T + B T K ) x ( t ) .
Inserting f * ( x , Ψ , z r ) and the partial derivatives of v with respect to x, z r , and Ψ in the HJB-Equation (54), we obtain:
J = x T Q 1 x + 2 x T Q 2 ( R 1 1 ( Q 2 T + B T K ) T x ) + ( R 1 1 ( Q 2 T + B T K ) T x ) T R 1 ( R 1 1 ( Q 2 T + B T K ) T x ) a ( Ψ ( t ) ) g ( z r ) + ( A x + B ( R 1 1 ( Q 2 T + B T K ) T x ) ) 2 K x + Q T Ψ h ( Ψ ) + + T r [ σ 1 σ 1 T ] K + 1 2 T r [ σ 2 σ 2 T ] g ( z r ) + 1 2 h ( Ψ ) T r [ A 2 ] .
For equality (61) to hold, it is necessary that the functions g and n satisfy (57) and (58), respectively, and that the matrix K satisfies the Riccati Equation (56), whereas the constant J = T r [ σ 1 σ 1 T K ] . Finally, from Theorem 1, it follows that f * is an optimal Markov control and the value function is given by (59). That is:
J * ( x , Ψ ) = min u ∈ U J ( x , Ψ , u ) = J = T r [ σ 1 σ 1 T K ] .
Simulation results. To solve the Wonham filter, we use the numerical method given in ([18], Section 8.4), considering that the Markov chain ψ ( t ) has two states that can only be observed through d y ( t ) = h ( ψ ( t ) ) d t + σ 0 d B ( t ) . The following data were used: σ 1 = 1 , σ 2 = 1 , σ 0 = 1 , α = 0.01 , a ( 1 ) = 0.03 , a ( 2 ) = 0.015 , Ψ 1 ( 0 ) = 0.5 , Ψ 2 ( 0 ) = 0.5 , R = 1.0239 × 10 5 , h ( 1 ) = 1 , h ( 2 ) = 0.5 , m s = 329 kg, m u = 51 kg, k s = 4300 N/m, k = 210,000 N/m, V = 20 m/s, c 1 = 1 , c 2 = c 3 = 1 × 10 5 , c 4 = 1 × 10 6 , and:
Q = [ −0.3 , 0.3 ; 0.5 , −0.5 ] .
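As a numerical companion to Proposition 3, the sketch below assembles the weighting matrices Q 1 , Q 2 , R 1 and solves the cross-term Riccati equation with scipy, whose CARE solver accepts the cross term directly through the argument s = Q 2 ; it follows the standard LQR-with-cross-term convention rather than transcribing (56) literally. The signs of the entries of A and B come from the reconstruction of (46)–(48), and the weights c i are reproduced as printed above, so treat all of them as assumptions.

import numpy as np
from scipy.linalg import solve_continuous_are

# Cross-term Riccati equation for the quarter-car model, solved with scipy's CARE
# solver (the cross weighting enters through the argument s).  Signs of A and B follow
# the reconstructed equations of motion; the weights are reproduced as printed.
m_s, m_u, k_s, k = 329.0, 51.0, 4300.0, 210_000.0
c1, c2, c3, c4 = 1.0, 1e5, 1e5, 1e6

A = np.array([[0.0, 0.0, -k_s / m_s, 0.0],
              [0.0, 0.0, k_s / m_u, -k / m_u],
              [1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
B = np.array([[-1.0 / m_s], [1.0 / m_u], [0.0], [0.0]])
M = np.array([[0.0, 0.0, -k_s / m_s, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
N = np.array([[-1.0 / m_s], [0.0], [0.0]])
C = np.diag([c1, c2, c3])
R = np.array([[c4]])

Q1, Q2, R1 = M.T @ C @ M, M.T @ C @ N, N.T @ C @ N + R
K = solve_continuous_are(A, B, Q1, R1, s=Q2)
gain = np.linalg.solve(R1, Q2.T + B.T @ K)    # f*(x) = -gain @ x, cf. (55)
print("feedback gain:", gain)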
The solution of the Wonham filter equation and the states of the hidden Markov chain ψ ( t ) are shown in Figure 4. As can be noted, at t = 1 s, Ψ 1 ( 1 ) = P ( ψ ( 1 ) = 1 | y ( s ) , 0 ≤ s ≤ 1 ) ≥ Ψ 2 ( 1 ) , implying that the Markov chain is in state 1 at t = 1 with probability greater than 0.5.
The asymptotic behavior of the optimal control (55) is given in Figure 5 (bottom). It is interesting to note that this control minimizes the magnitude of the sprung mass velocity, x 1 = ż s , and the unsprung mass velocity, x 2 = ż u , after t = 9 s; see Figure 5 (top). This behavior implies that the magnitudes of the sprung mass acceleration and the unsprung mass acceleration are also minimized, considering that the stochastic differential equation that models the road profile depends on a hidden Markov chain. These results agree with those obtained in [27], whose authors mention that two important objectives of a suspension system are ride comfort and handling performance. Ride comfort requires that the car body be isolated from road disturbances as much as possible to provide a good feeling for passengers; in practice, one seeks to minimize the acceleration of the sprung mass.

7. Application 3: Optimal Control of a Vehicle Active Suspension System with Damp

The model analyzed in this section is given in [29]. In this application, a damper b s is added to the quarter-car suspension of Section 6; see Figure 6. The parameters in Figure 6 are: the sprung mass ( m s ), the unsprung mass ( m u ), the suspension spring constant ( k s ), and the tire spring constant (k). Let z s , z u , and r be the vertical displacements of the sprung mass, the unsprung mass, and the road disturbance, respectively. The equations of motion are given by:
m s z̈ s ( t ) = − k s ( z s ( t ) − z u ( t ) ) + b s ( ż u ( t ) − ż s ( t ) ) + u ( t ) ,
m u z̈ u ( t ) = k s ( z s ( t ) − z u ( t ) ) + k ( r ( t ) − z u ( t ) ) − b s ( ż u ( t ) − ż s ( t ) ) − u ( t ) .
Now, defining x 1 ( t ) = z s ( t ) , x 2 ( t ) = z u ( t ) , x 3 ( t ) = ż s ( t ) , and x 4 ( t ) = ż u ( t ) , the equations of motion in (62) and (63) can be expressed in matrix form as:
d x ( t ) = ( A x ( t ) + B u ( t ) ) d t + F r ( t ) d t
where d x ( t ) = [ d x 1 ( t ) , d x 2 ( t ) , d x 3 ( t ) , d x 4 ( t ) ] T , A = [ 0 , 0 , 1 , 0 ; 0 , 0 , 0 , 1 ; − k s / m s , k s / m s , − b s / m s , b s / m s ; k s / m u , − ( k s + k ) / m u , b s / m u , − b s / m u ] , B = [ 0 , 0 , 1 / m s , − 1 / m u ] T , F = [ 0 , 0 , 0 , k / m u ] T , and we assume that the road profile r ( t ) is represented by a function with hidden Markovian switching:
r ( t ) = a ( ψ ( t ) ) { 1 − cos ( 8 π t ) } , if τ p ≤ t ≤ τ p + 1 , and 0 otherwise,
where a ( 1 ) = 0.05 (road bump height 10 cm), a ( 2 ) = 0.08 (road bump height 16 cm), and τ p , p = 1 , 2 , … , are the random jump times of ψ ( t ) . In our case, we consider that the dynamic system (64) evolves with additional white noise, that is:
d x ( t ) = ( A x ( t ) + B u ( t ) + F r ( t ) ) d t + σ d W ( t )
and we wish to minimize the discounted expected cost:
V α ( x , Ψ , u ) : = E x , Ψ u ∫ 0 ∞ e − α s { x T ( s ) D x ( s ) + u T ( s ) R u ( s ) } d s ,
subject to (66) and (65). Considering the infinitesimal generator given in (53) with z r ( t ) replaced by r ( t ) , and the associated Hamilton–Jacobi–Bellman equation:
α v ( x , Ψ ) = max u ∈ U [ x T D x + u T R 1 u + L u v ( x , Ψ , r ) ] for all ( x , Ψ ) ∈ R n × S N ,
similar arguments to these given in Section 5 and Section 6 allow us to find the optimal control f * and the value function v * for this setting. In fact:
v * ( x , Ψ ) = x T K x + n ( Ψ ) + g ( r ) + c ,
where n : S N → R is a twice continuously differentiable function, c is a constant, g : R → R is a twice continuously differentiable function, and K is a positive definite matrix. Inserting the derivatives of v ( x , Ψ ) in (43), we get the optimal control:
f * ( x , Ψ ) = − R ¯ −1 ( Ψ ) B T K T x ,
where the matrix K satisfies the algebraic Riccati equation:
A T K + K A − K B R −1 B T K + D − α K = 0 ,
c = T r [ σ σ T K ] / α ,
the function g C 2 ( R ) satisfies the differential equation:
a ( Ψ ( t ) ) g ( r ) + α g ( r ) = 0 ,
and n ( · ) C 2 ( S N ) satisfies the partial differential equation:
⟨ Q T Ψ ( t ) , ∇ n ( Ψ ( t ) ) ⟩ + 1 2 T r [ A 2 ∇ 2 n ( Ψ ( t ) ) ] − α n ( Ψ ( t ) ) = 0 , Ψ ( t ) ∈ S N ,
where A 2 is as in (42), and ∇ n and ∇ 2 n are the gradient and the Hessian of n, respectively.
Simulation results. To solve the Wonham filter, we use the numerical method given in ([18], Section 8.4), considering that the Markov chain ψ ( t ) has two states that can only be observed through d y ( t ) = h ( ψ ( t ) ) d t + σ 0 d B ( t ) . The following data were used: σ = 1 , σ 0 = 1 , α = 0.01 , a ( 1 ) = 0.05 , a ( 2 ) = 0.08 , Ψ 1 ( 0 ) = 0.4 , Ψ 2 ( 0 ) = 0.6 , h ( 1 ) = 1 , h ( 2 ) = 2 , R = 1.0239 × 10 5 , m s = 300 kg, m u = 60 kg, k s = 1600 N/m, k = 190,000 N/m, b s = 1000 N·s/m, and:
Q = [ −0.2 , 0.2 ; 0.4 , −0.4 ] .
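To visualize the disturbance entering (66), the sketch below generates one sample path of the road profile (65): a bump of height 2 a ( ψ ) is triggered at each jump time of the hidden chain, which is simulated from the generator Q above. The time step and horizon are assumptions for illustration.

import numpy as np

# Sample path of the road profile (65): bumps of height 2*a(psi) triggered at the
# jump times of the hidden chain psi(t), simulated from the generator Q above.
# Time step and horizon are assumptions for illustration.
rng = np.random.default_rng(2)
Q = np.array([[-0.2, 0.2], [0.4, -0.4]])
a = {1: 0.05, 2: 0.08}           # bump amplitudes from the text
dt, T = 1e-3, 20.0
n_steps = int(T / dt)

psi = 1
jump_times = []                   # tau_p: jump times of psi(t)
r = np.zeros(n_steps)
t = np.arange(n_steps) * dt
for kk in range(n_steps):
    if rng.random() < -Q[psi - 1, psi - 1] * dt:
        psi = 3 - psi             # switch between states 1 and 2
        jump_times.append(t[kk])
    # r(t) = a(psi(t)) * (1 - cos(8 pi t)) on [tau_p, tau_p + 1], 0 otherwise
    active = any(tp <= t[kk] <= tp + 1.0 for tp in jump_times)
    r[kk] = a[psi] * (1.0 - np.cos(8 * np.pi * t[kk])) if active else 0.0

print("number of bumps triggered:", len(jump_times))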
Figure 7 shows the solution of the Wonham filter equation and the states of the hidden Markov chain ψ ( t ) . As can be seen, in the time interval [ 2 , 4 ] , Ψ 1 ( t ) = P ( ψ ( t ) = 1 | y ( s ) , 0 ≤ s ≤ t ) ≥ Ψ 2 ( t ) , implying that the Markov chain is in state 1 with probability greater than 0.5.
The asymptotic behavior of the optimal control (67) is given in Figure 8 (bottom). It is interesting to note that this control minimizes the magnitudes of the sprung mass displacement, x 1 = z s , and the unsprung mass displacement, x 2 = z u , as well as their velocities, x 3 = ż s and x 4 = ż u , after t = 12 s; see Figure 8 (top).

8. Application 4: Optimal Pollution Control with Average Payoff

The application studies the pollution accumulation incurred by the consumption of a certain product, such as gas or petroleum, see [30]. The stock of pollution x ( · ) is governed by the controlled diffusion process:
d x ( t ) = [ u ( t ) − η ( ψ ( t ) ) x ( t ) ] d t + k d W ( t ) , x ( 0 ) = x > 0 ,
where u ( t ) represents the pollution flow generated by an entity due to the consumption of the product, η ( ψ ( t ) ) represents the decay rate of pollution, chosen at each time by nature, and k is a positive constant. We shall assume that u ( t ) ∈ U = [ 0 , γ ] is bounded, where the parameter γ represents the consumption/production restriction. Let ψ ( t ) be a Markov chain with two states E = { 1 , 2 } and a generator Q given by:
[ q 11 , q 12 ; q 21 , q 22 ] = [ − λ 0 , λ 0 ; λ 1 , − λ 1 ] .
The reward rate r : [ 0 , ∞ ) × E × U → R in this example represents the social welfare and is defined as:
r ( x , i , u ) : = F ( u ) − a ( i ) x , ( x , i , u ) ∈ [ 0 , ∞ ) × E × U ,
where F ∈ C 2 ( 0 , ∞ ) ∩ C [ 0 , ∞ ) and D ( x , i ) = a ( i ) x ∈ C ( [ 0 , ∞ ) × E ) are the social utility of the consumption u and the social disutility of the pollution stock ( x , i ) , respectively. We assume that the function F in (69) satisfies:
F ′ ( u ) > 0 , F ″ ( u ) < 0 , F ′ ( ∞ ) = F ( 0 ) = 0 , F ′ ( 0 + ) = ∞ .
Clearly, (68) is a linear stochastic differential equation and satisfies Assumption 1.
Now, we define the Banach space B w ( R × E ) using w ( x , i ) : = x + i , so that w ( x , Ψ ) = ∑ i = 1 2 Ψ i w ( x , i ) = Ψ 1 ( x + 1 ) + Ψ 2 ( x + 2 ) = x + ( 2 − Ψ 1 ) . Hence, lim x → + ∞ w ( x , Ψ ) = + ∞ and Assumption 2(i) holds. On the other hand, since the utility function F ( · ) is continuous on the compact interval U = [ 0 , γ ] , we have:
| r ( x , i , u ) | = | F ( u ) − a ( i ) x | ≤ ( max u ∈ [ 0 , γ ] F ( u ) + max i ∈ { 1 , 2 } a ( i ) ) ( x + i ) = M w ( x , i )
where M : = max u ∈ [ 0 , γ ] F ( u ) + max i ∈ { 1 , 2 } a ( i ) ; thus, Assumption 3 holds. Note that:
L u , Ψ w ( x , Ψ ) = u − η ( i ) x − λ 0 Ψ 1 + λ 1 ( 1 − Ψ 1 ) , for all x > 0 .
Thus, taking q : = max i ∈ E η ( i ) and p : = max u ∈ [ 0 , γ ] u − ( λ 0 − λ 1 ) Ψ 1 , we obtain:
L u w ( x , Ψ ) ≤ − q w ( x , Ψ ) + p for all x > 0 .
Therefore, Assumption 2(ii) holds. It can be proven that the process (68) satisfies Assumption 2.6 in [1]; thus, by ([1], Theorem 2.8), x ( t ) is exponentially ergodic (Assumption 4). In this application, we seek a policy u that maximizes the long-run average welfare J ( x , i , f ) :
J ( x , i , u ) : = lim inf T → ∞ ( 1 / T ) E x , i u ∫ 0 T [ F ( u ( t ) ) − a ( ψ ( t ) ) x ( t ) ] d t .
We propose v ( x , Ψ ) = v ( x ) + h ( Ψ ) , with v ∈ C 2 ( R × E ) ∩ B w ( R × E ) and h ∈ C 2 ( S N ) , as a solution that verifies the HJB Equation (27) associated with this pollution control problem. Simple calculations allow us to conclude that the optimal consumption/pollution policy takes the form:
u : = f ( x , Ψ ) = I ( v ′ ( x ) ) if F ′ ( γ ) < v ′ ( x ) , and γ if F ′ ( γ ) ≥ v ′ ( x ) ,
where I ( · ) is the inverse function of the derivative F ′ and f ∈ F .
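To make the shape of this policy concrete, the sketch below evaluates the consumption rule for a hypothetical utility F ( u ) = 2 √ u , which satisfies the conditions imposed on F (so F ′ ( u ) = 1 / √ u and I ( y ) = 1 / y 2 ); the marginal value v ′ ( x ) is a placeholder, since in practice it comes from solving the HJB equation.

import numpy as np

# Consumption/pollution policy: u = I(v'(x)) if F'(gamma) < v'(x), and u = gamma otherwise,
# for the hypothetical utility F(u) = 2*sqrt(u) (so F'(u) = 1/sqrt(u) and I(y) = 1/y^2).
# The marginal value v'(x) below is a placeholder assumption, not the true HJB solution.
gamma = 2.0                                   # consumption/production cap (assumed)
F_prime = lambda u: 1.0 / np.sqrt(u)          # F'(u) for F(u) = 2 sqrt(u)
I = lambda y: 1.0 / y**2                      # inverse of F'

def policy(v_prime_x):
    """Consumption rate given a (positive) marginal value v'(x)."""
    return I(v_prime_x) if F_prime(gamma) < v_prime_x else gamma

v_prime = lambda x: 1.0 / (1.0 + x)           # placeholder marginal value function
print([policy(v_prime(x)) for x in (0.0, 1.0, 5.0)])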

9. Concluding Remarks

Under hypotheses such as the uniform ellipticity in Assumption 1(c), the Lyapunov-like conditions in Assumption 2, and the w-exponential ergodicity in Assumption 4 for the average criterion, this work shows the existence of optimal controls for control problems with discounted and average payoffs in which the dynamic system evolves according to a switching diffusion with hidden states. To conclude, we conjecture that the results obtained in this work still hold (with obvious changes) if the hidden Markov chain ( ψ ) in (1) is replaced with any other diffusion process. Furthermore, these results can be extended to constrained and unconstrained nonzero-sum stochastic differential games with additive structure, which will allow us to model a larger class of practical systems. This will be a topic of future work.

Author Contributions

Conceptualization, B.A.E.-T.; Formal analysis, B.A.E.-T. and J.G.-M.; Investigation, B.A.E.-T., J.G.-M. and G.A.; Methodology, B.A.E.-T., J.G.-M. and J.D.R.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Escobedo-Trujillo, B.A.; Hernández-Lerma, O. Overtaking optimality for controlled Markov-modulated diffusions. J. Optim. 2011, 61, 1405–1426.
2. Borkar, V.S. The value function in ergodic control of diffusion processes with partial observations. Stoch. Stoch. Rep. 1999, 67, 255–266.
3. Borkar, V.S. Dynamic programming for ergodic control with partial observations. Stoch. Process. Their Appl. 2003, 103, 293–310.
4. Rieder, U.; Bäuerle, N. Portfolio optimization with unobservable Markov-modulated drift process. J. Appl. Probab. 2005, 362–378.
5. Tran, K. Optimal exploitation for hybrid systems of renewable resources under partial observation. Nonlinear Anal. Hybrid Syst. 2021, 40, 101013.
6. Tran, K.; Yin, G. Stochastic competitive Lotka–Volterra ecosystems under partial observation: Feedback controls for permanence and extinction. J. Frankl. Inst. 2014, 351, 4039–4064.
7. Mao, X.; Yuan, C. Stochastic Differential Equations with Markovian Switching; World Scientific Publishing Co.: London, UK, 2006. Available online: https://www.worldscientific.com/doi/pdf/10.1142/p473 (accessed on 20 March 2022).
8. Yin, G.G.; Zhu, C. Hybrid Switching Diffusions: Properties and Applications; Stochastic Modelling and Applied Probability; Springer: New York, NY, USA, 2010; Volume 63; p. xviii+395.
9. Yin, G.; Mao, X.; Yuan, C.; Cao, D. Approximation methods for hybrid diffusion systems with state-dependent switching processes: Numerical algorithms and existence and uniqueness of solutions. SIAM J. Math. Anal. 2009, 41, 2335–2352.
10. Yu, L.; Zhang, Q.; Yin, G. Asset allocation for regime-switching market models under partial observation. Dynam. Syst. Appl. 2014, 23, 39–61.
11. Ghosh, M.K.; Arapostathis, A.; Marcus, S.I. Optimal control of switching diffusions with application to flexible manufacturing systems. SIAM J. Control Optim. 1993, 31, 1183–1204.
12. Ghosh, M.K.; Marcus, S.I.; Arapostathis, A. Controlled switching diffusions as hybrid processes. In Proceedings of the International Hybrid Systems Workshop, New Brunswick, NJ, USA, 22–25 October 1995; Springer: Berlin/Heidelberg, Germany, 1995; pp. 64–75.
13. Zhang, X.; Zhu, Z.; Yuan, C. Asymptotic stability of the time-changed stochastic delay differential equations with Markovian switching. Open Math. 2021, 19, 614–628.
14. Zhu, C.; Yin, G. Asymptotic properties of hybrid diffusion systems. SIAM J. Control Optim. 2007, 46, 1155–1179.
15. Wonham, W.M. Some applications of stochastic differential equations to optimal nonlinear filtering. J. SIAM Control Ser. A 1965, 2, 347–369.
16. Elliott, R.J.; Aggoun, L.; Moore, J.B. Hidden Markov Models: Estimation and Control; Springer: Berlin/Heidelberg, Germany, 1995.
17. Cohen, S.N.; Elliott, R.J. Stochastic Calculus and Applications, 2nd ed.; Probability and Its Applications; Springer: Cham, Switzerland, 2015; p. xxiii+666.
18. Yin, G.; Zhang, Q. Discrete-Time Markov Chains: Two-Time-Scale Methods and Applications; Stochastic Modelling and Applied Probability; Springer: New York, NY, USA, 2006.
19. Yin, G.G.; Zhu, C. Hybrid Switching Diffusions: Properties and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009; Volume 63.
20. Protter, P.E. Stochastic Integration and Differential Equations, 2nd ed.; Stochastic Modelling and Applied Probability; Version 2.1, Corrected Third Printing; Springer: Berlin/Heidelberg, Germany, 2005; Volume 21; p. xiv+419.
21. Chigansky, P. An ergodic theorem for filtering with applications to stability. Syst. Control Lett. 2006, 55, 908–917.
22. Kunita, H. Asymptotic behavior of the nonlinear filtering errors of Markov processes. J. Multivar. Anal. 1971, 1, 365–393.
23. Lu, X.; Yin, G.; Guo, X. Infinite horizon controlled diffusions with randomly varying and state-dependent discount cost rates. J. Optim. Theory Appl. 2017, 172, 535–553.
24. Ghosh, M.K.; Arapostathis, A.; Marcus, S.I. Ergodic control of switching diffusions. SIAM J. Control Optim. 1997, 35, 1962–1988.
25. Schäl, M. Conditions for optimality and for the limit of n-stage optimal policies to be optimal. Z. Wahrs. Verw. Geb. 1975, 32, 179–196.
26. Ghosh, M.K.; Marcus, S.I. Stochastic differential games with multiple modes. Stoch. Anal. Appl. 1998, 16, 91–105.
27. Nguyen, L.H.; Seonghun, P.; Turnip, A.; Hong, K.S. Application of LQR control theory to the design of modified skyhook control gains for semi-active suspension systems. In Proceedings of the ICROS-SICE International Joint Conference 2009, Fukuoka, Japan, 18–21 August 2009; pp. 4698–4703.
28. Escobedo-Trujillo, B.; Garrido-Meléndez, J. Stochastic LQR optimal control with white and colored noise: Dynamic programming technique. Rev. Mex. Ing. Química 2021, 20, 1111–1127.
29. Maurya, V.K.; Bhangal, N.S. Optimal control of vehicle active suspension system. J. Autom. Control Eng. 2018, 6, 1111–1127.
30. Kawaguchi, K.; Morimoto, H. Long-run average welfare in a pollution accumulation model. J. Econom. Dynam. Control 2007, 31, 703–720.
Figure 1. Wonham filter for the α-discounted LQR.
Figure 2. Asymptotic behavior of the state of the dynamic system (top) and optimal control for the α-discounted LQR (bottom).
Figure 3. Schematic of a quarter-car suspension.
Figure 4. Wonham filter and hidden Markov chain (at t = 1 s).
Figure 5. Asymptotic behavior of the state of the dynamic system (top) and optimal control (bottom).
Figure 6. Quarter-vehicle model of the active suspension system.
Figure 7. Wonham filter and hidden Markov chain (time interval [2, 4]).
Figure 8. Asymptotic behavior of the state of the dynamic system (top) and optimal control (bottom).