Article

Robust Statistic Estimation of Constrained Optimal Control Problems of Pollution Accumulation (Part I)

by Beatris Adriana Escobedo-Trujillo 1, José Daniel López-Barrientos 2,*, Carmen Geraldi Higuera-Chan 3 and Francisco Alejandro Alaffita-Hernández 4

1 Facultad de Ingeniería, Universidad Veracruzana, Coatzacoalcos 96535, Mexico
2 Facultad de Ciencias Actuariales, Universidad Anáhuac México, Naucalpan de Juárez 52786, Mexico
3 Departamento de Matemáticas, Universidad de Sonora, Hermosillo 83000, Mexico
4 Centro de Investigación en Recursos Energéticos y Sustentables, Universidad Veracruzana, Coatzacoalcos 96535, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 923; https://doi.org/10.3390/math11040923
Submission received: 17 January 2023 / Revised: 5 February 2023 / Accepted: 8 February 2023 / Published: 11 February 2023

Abstract

In this paper, we study a constrained optimal control problem of pollution accumulation in which the dynamic system is governed by a diffusion process that depends on unknown parameters, which need to be estimated. Since the true values are unknown, we determine (adaptive) policies that maximize a discounted reward criterion with constraints; that is, we use Lagrange multipliers to find optimal (adaptive) policies for the unconstrained version of the optimal control problem. In the present context, the dynamic system evolves as a diffusion process, and the expected cost must remain dominated by another function (typically a constant), which plays the role of a constraint in the control model. We offer solutions to this problem using standard dynamic programming tools under the constrained discounted payoff criterion on an infinite horizon and the so-called principle of estimation and control. To illustrate our results, we use maximum likelihood-type estimators obtained through a least square error approximation in a pollution accumulation model. One of the advantages of our approach compared to others is the intuition behind it: find optimal policies for an estimated version of the problem and let this estimation tend toward the real version of the problem. However, most risk analysts will not be as used to our methods as they are to, for instance, model predictive control, MATLAB’s robust control toolbox, or the polynomial chaos expansion method, which have been used in the literature to address similar issues.

1. Introduction

This work studies the problem of the optimal control of pollution accumulation with an unknown parameter, which needs to be statistically estimated, in a constrained context. We aim to construct adaptive policies for the discounted reward criterion on an infinite horizon. We assume that the stock of pollution is driven by an Itô diffusion, use the discounted criterion, and consider the presence of constraints on the reward function. We estimate the unknown parameter θ and follow the principle of estimation and control (PEC), which allows us to use standard dynamic programming tools to find optimal solutions.
We keep the presentation at the level of the pollution accumulation application, in which a utility function is to be maximized. However, the theory is general enough to be exploited in other contexts. The goal of pollution accumulation models is to examine how certain goods are managed for society’s consumption. It is commonly acknowledged that this consumption produces two by-products: pollution and social welfare. The latter term refers to the difference between the benefits and harms connected with pollution. The theory studied here enables the decision-maker to identify a consumption policy that maximizes expected social welfare, subject to a limitation that may reflect, for example, that certain environmental clean-up expenditures must not surpass a given amount over time, while (for instance) the rate at which nature cleans itself is unknown. One of the features of the discounted optimality criterion used in this paper is that emphasis is placed on the utility of consumption for present generations, which is mirrored by the value functions that we obtain. This characteristic renders the problem a rather flexible one and enables us to use standard dynamic programming tools.
We employ the PEC, which has roots in Kurano (1972) [1] and Mandl (1974) [2], to analyze the adaptive control problem with constraints. The goal of the PEC is to estimate the parameter θ, replace the unknown θ with its estimated value, and then solve the resulting optimal control problem with constraints. Refer to [1,3,4], and the references therein, for studies developing asymptotically optimal adaptive strategies. For instance, Kurano and Mandl introduced the idea of estimation and control when considering Markov decision models with constrained rewards and a finite state space. Based on a consistent estimator of the unknown parameter that is uniformly optimal in the parameter, they demonstrate the existence of an optimal policy. Reference [4] works with discrete-time stochastic control systems.
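In schematic form, the PEC can be summarized as the following loop (a minimal Python-style sketch of the idea only; the names `estimate_theta`, `optimal_policy` and `observe` are placeholders of ours for whichever estimator, solver and observation mechanism are used, and are not part of the formulation itself):

```python
def pec_control(observe, estimate_theta, optimal_policy, n_stages):
    """Principle of estimation and control (PEC), schematically:
    re-estimate the unknown parameter from the data gathered so far,
    plug the estimate into the model, and apply the policy that is
    optimal for the estimated model."""
    history = []                               # observations collected so far
    x = observe(None, None)                    # initial state observation
    theta_m, f_m = None, None
    for _ in range(n_stages):
        history.append(x)
        theta_m = estimate_theta(history)      # e.g., the least square estimator of Section 3.4
        f_m = optimal_policy(theta_m)          # policy that would be optimal if theta were theta_m
        u = f_m(x)                             # control prescribed by that policy
        x = observe(x, u)                      # next observation under control u
    return theta_m, f_m
```

Theorem 1 below makes precise the sense in which the policies produced by such a loop become optimal for the constrained problem as the estimates converge.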
In the late 1990s and early 2000s, the stochastic optimization issue with constraints was addressed, under the assumption that all the coefficients—the diffusion itself, the reward function, and the restrictions—are bounded (see, for example, [5,6,7,8]). Related publications include Borkar and Ghosh’s foundational publication on constrained optimum control under discounted and ergodic criteria, the work of Mendoza-Pérez, Jasso-Fuentes, Prieto-Rumeau, and Hernandez-Lerma (see [9,10]), and the paper of Jasso-Fuentes, Escobedo-Trujillo, and Mendoza-Pérez [11].
The adaptive control of a linear diffusion process (of the sort we use here) with regard to the discounted cost criterion is studied in [12]. In [13], an adaptive optimal control for continuous-time linear systems is investigated. The use of statistical tools is common when modeling control problems. For instance, Bayesian adaptive techniques for ergodic diffusions are considered in [14], and ref. [15] uses the method of maximum likelihood estimation of parameters to study a self-tuning scheme for a diffusion process with the long-term average cost criterion. The main idea is to estimate the unknown parameter online so that the most recent estimate is used to determine the true parameter when choosing each control (see [16]). The estimation of parameters for diffusion processes using discrete observations was studied in several papers, including [17,18,19,20,21] and the references therein.
The issue of pollution accumulation has previously been studied from the perspective of dynamic optimization; for instance, refs. [22,23] use a linear-quadratic model to explain this phenomenon, ref. [24] deals with the average payoff in a deterministic framework, refs. [25,26] extend the former’s approach to a stochastic context, ref. [27] uses a robust stochastic differential game to characterize the situation, ref. [28] studies the problem from the perspective of constrained stochastic optimization, and ref. [29] is a statistical survey of the effects of air pollution on public health. The main contribution of our paper is as follows: a statistical estimation procedure is used to find θ, and we construct adaptive policies that are almost surely optimal for the constrained optimization problem under the discounted payoff on an infinite horizon. These adaptive policies are obtained by substituting the estimates into optimal stationary controls (the PEC); see [1,2]. In this sense, our findings resemble those presented in [30], for most risk analysts will not be as used to our methods as they are to, for instance, model predictive control, the robust control toolbox or the polynomial chaos expansion method, which have been used in the literature to address similar issues.
Our work lies at the intersection of three important classes of optimal control problem. The first is the problem of controlling pollution accumulation, as presented in [22,29,30,31,32]; the second class deals with constrained optimal control problems (references [11,33,34] study this type of problem, considering that all parameters are known); the third class concerns the adaptive optimal control problems, as presented in [14,15,20,35,36]. Reference [30] belongs to the first and third classes. Our work is an extension of [11,22,28,31,32,36] to the adaptive constrained optimal control framework. Reference [22] builds a robust control to explore the normative maxim that, in the presence of uncertainty, society must guard against worst-case outcomes; [11] studies a constrained optimal control problem where all the parameters are known, while [28,32] study the same in the context of pollution accumulation; Refs. [31,36] study an unconstrained adaptive optimal control problem. In addition, a numerical example is given for demonstration purposes.
The rest of the paper is organized as follows. We present the theoretical preliminaries in the next Section. Then, we devote Section 3 to our main results and illustrate them in Section 4. We provide our conclusions in Section 5. Please note that, for the sake of completeness of the presentation and acknowledgement of our sources, we have included references at the end of this article. However, we recognize that these might distract the reader, and we apologize for this inconvenience.
Throughout this work, we use the following notation. For all $x=(x_1,x_2,\dots,x_n)$ in $\mathbb{R}^n$ and matrices $A=(A_{k,p})$, we denote by $|\cdot|$ the usual Euclidean norm $|x|^2:=\sum_k x_k^2$, and we let $|A|^2:=\operatorname{Tr}(AA^{\top})=\sum_{k,p}A_{k,p}^2$, where $A^{\top}$ and $\operatorname{Tr}(\cdot)$ denote the transpose and the trace of a square matrix, respectively. Sometimes, we use the notation $\partial_i:=\frac{\partial}{\partial x_i}$ and $\partial^2_{ij}:=\frac{\partial^2}{\partial x_i\,\partial x_j}$.

2. Problem Statement

In our model, the stock of pollution is modeled as an n-dimensional controlled stochastic differential equation (SDE) of the form:
$$\mathrm{d}x(t)=b\bigl(x(t),u(t),\theta\bigr)\,\mathrm{d}t+\sigma\bigl(x(t)\bigr)\,\mathrm{d}W(t),\qquad x(0)=x_0,\quad t\ge 0,$$
where $b:\mathbb{R}^n\times U\times\Theta\to\mathbb{R}^n$ and $\sigma:\mathbb{R}^n\to\mathbb{R}^{n\times d}$ are given functions, and $W(\cdot)$ is an $\mathcal{F}_t$-adapted $d$-dimensional Wiener process such that $W(t)-W(s)$ and $\mathcal{F}_s$ are independent. The compact set U is assumed to be contained in a suitable metric space and is called the control set. Here, $u(t)$ represents the flow of consumption at time t; this is a stochastic process that takes values in U, which, in turn, is bounded to represent the consumption restrictions imposed by worldwide protocols. In this work, we assume that the pollution decay rate θ is an unknown parameter taking values in a compact set $\Theta\subseteq\mathbb{R}^m$ called the parameter set. Assumption A1 in Appendix A ensures the existence and uniqueness of a strong solution to (1).

2.1. Control Policies and Stability Assumptions

Although we manage to illustrate our theoretical developments using the so-called stationary Markovian policies, we need to introduce the concept of randomized policies (also known as relaxed controls). To this end, we used the following nomenclature:
  • B ( B ) is the Borel σ -algebra spawned by the Borel set B.
  • C ( O ) is, as usual, the space of all real-valued continuous functions on a bounded, open and connected subset O R n .
  • C b ( O ) stands for the space of all real-valued continuous bounded functions f on the bounded, open and connected subset O R n .
  • C κ ( O ) is the space of all real-valued continuous functions f on the bounded, open and connected subset O R n , with continuous derivatives up to order κ N .
  • $L^p(\Omega)$ is, as is customary, the Lebesgue space of functions g on Ω such that $\int_\Omega|g|^p\,\mathrm{d}\mu<\infty$, with $(\Omega,\mathcal{F},\mu)$ a suitable measure space, and $p\ge 1$.
  • P ( B ) is the family of probability measures on B endowed with the topology of weak convergence.
Definition 1.
A randomized policy is a family $\pi:=\{\pi_t:t\ge 0\}$ of stochastic kernels on $\mathcal{B}(U)\times\mathbb{R}^n$, satisfying:
(a) 
for each $t\ge 0$ and $x\in\mathbb{R}^n$, $\pi_t(\cdot\,|\,x)\in\mathcal{P}(U)$, and for each $t\ge 0$ and $D\in\mathcal{B}(U)$, $\pi_t(D\,|\,\cdot)$ is a Borel function on $\mathbb{R}^n$;
(b) 
for each $D\in\mathcal{B}(U)$ and $x\in\mathbb{R}^n$, the function $t\mapsto\pi_t(D\,|\,x)$ is Borel-measurable in $t\ge 0$.
A randomized policy is said to be stationary if there is a probability measure $\pi(\cdot\,|\,x)\in\mathcal{P}(U)$ such that $\pi_t(\cdot\,|\,x)=\pi(\cdot\,|\,x)$ for all $t\ge 0$ and $x\in\mathbb{R}^n$. The set of randomized stationary policies is denoted by Π.
Let F be the family of measurable functions f : R n U . A strategy u ( t ) : = f ( x ( t ) ) for some f F is said to be a stationary Markov policy.
For each randomized stationary policy π Π , we write the drift coefficient b defined in (1) as
$$b(x,\pi,\theta):=\int_U b(x,u,\theta)\,\pi(\mathrm{d}u\,|\,x).$$
Note that b ( x , π , θ ) inherits the same continuity and Lipschitz properties from b, given in Assumption A1.
Remark 1.
Under Assumption A1, for each policy π Π and θ Θ , there exists a weak solution x π , θ ( · ) of (1) which is a Markov–Feller process in the probability space Ω , F , P π , θ . See Theorem 2.2.6 in [37].
Topology of relaxed controls. We will need the limit and continuity concepts. For this reason, we topologized the set of randomized stationary policies Π , as in [38]. This topology renders Π a compact metric space, and is determined by the following convergence criterion (see [37,38,39]).
Definition 2.
A sequence π m : m = 1 , 2 , in Π converges to π Π , if
$$\int_{\mathbb{R}^n}g(x)\,h(x,\pi_m)\,\mathrm{d}x\;\longrightarrow\;\int_{\mathbb{R}^n}g(x)\,h(x,\pi)\,\mathrm{d}x$$
for all $g\in L^1(\mathbb{R}^n)$ and $h\in C_b(\mathbb{R}^n\times U)$, where
$$h(x,\pi_m):=\int_U h(x,u)\,\pi_m(\mathrm{d}u\,|\,x)\quad\text{and}\quad h(x,\pi):=\int_U h(x,u)\,\pi(\mathrm{d}u\,|\,x).$$
We denote this type of convergence by $\pi_m\xrightarrow{\,W\,}\pi$.
For $u\in U$, $\theta\in\Theta$ and $\nu(\cdot,\theta)\in C^2(\mathbb{R}^n)$, let
$$\mathcal{L}^{u,\theta}\nu(x,\theta):=\sum_{i=1}^{n}b_i(x,u,\theta)\,\partial_i\nu(x,\theta)+\frac{1}{2}\sum_{i,j=1}^{n}a_{ij}(x)\,\partial^2_{ij}\nu(x,\theta),$$
where b i is the i-th component of b, and a i j is the ( i , j ) -component of the matrix a ( · ) , defined in Assumption A1(d). Again, for each randomized stationary policy π Π , we write the infinitesimal generator L u , θ defined in (3) as
$$\mathcal{L}^{\pi,\theta}\nu(x,\theta):=\int_U\mathcal{L}^{u,\theta}\nu(x,\theta)\,\pi(\mathrm{d}u\,|\,x).$$
Note that applying Dynkin’s formula to the function $v(t,x):=e^{ct}w(x)$, together with Assumption A2(b), yields
$$\mathbb{E}^{\pi,\theta}_x\bigl[w\bigl(x(t)\bigr)\bigr]\le e^{-ct}\,w(x)+\frac{d}{c}\bigl(1-e^{-ct}\bigr),\quad\text{for all }\pi\in\Pi,\ \theta\in\Theta,\ x\in\mathbb{R}^n\text{ and }t\ge 0,$$
where E x π , θ [ · ] stands for the conditional expectation of ·, given that (1) starts at x, the controller uses the randomized stationary policy π , and the unknown parameter is fixed at θ . That is, E x π , θ [ · ] is the expectation of · taken with respect to the probability measure P π , θ when x ( t ) starts at x.
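For the reader’s convenience, here is a short sketch of how this bound follows from Dynkin’s formula and Assumption A2(b) (a standard argument; see also [37,41]): since $\mathcal{L}^{\pi,\theta}w\le -c\,w+d$,
$$\mathbb{E}^{\pi,\theta}_x\bigl[e^{ct}w(x(t))\bigr]=w(x)+\mathbb{E}^{\pi,\theta}_x\!\int_0^t e^{cs}\bigl[c\,w(x(s))+\mathcal{L}^{\pi,\theta}w(x(s))\bigr]\mathrm{d}s\le w(x)+\int_0^t e^{cs}\,d\,\mathrm{d}s=w(x)+\frac{d}{c}\bigl(e^{ct}-1\bigr),$$
and dividing both sides by $e^{ct}$ gives precisely inequality (4).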

2.2. Reward, Cost and Constraint Rates

We will consider that the reward and the cost rates, along with the constraints of our model, can be unbounded from above and below, but are dominated by the Lyapunov function w given in Assumption A2. Namely, they are in the Banach space of real-valued measurable functions on R n with the finite w-norm, which is defined as follows.
Definition 3.
Let B w ( R n ) denote the Banach space of real-valued measurable functions v on R n with finite w-norm, which is defined as
$$\|v\|_w:=\sup_{x\in\mathbb{R}^n}\frac{|v(x)|}{w(x)}<\infty.$$
Let $r,c:\mathbb{R}^n\times U\times\Theta\to\mathbb{R}$ be measurable functions that will be identified as the social welfare (also called payoff or reward) rate and the cost rate, respectively, and let $\eta:\mathbb{R}^n\times\Theta\to\mathbb{R}$ be another measurable function that will be referred to as the constraint rate. In the present context, this restriction stands for the fact that, in some situations, due to each country’s legal framework, the cost of cleaning the environment must not exceed a given quantity. These functions are supposed to meet Assumption A3.
When the controller uses policy π Π , we write the reward and cost rates in a similar way as (2); that is,
$$c(x,\pi,\theta):=\int_U c(x,u,\theta)\,\pi(\mathrm{d}u\,|\,x),\qquad r(x,\pi,\theta):=\int_U r(x,u,\theta)\,\pi(\mathrm{d}u\,|\,x).$$

3. Main Results

3.1. Discounted Control with Constraints

In the sequel, we work in the space $W^{2,p}(\mathbb{R}^n)\cap B_w(\mathbb{R}^n)$, where $W^{\ell,p}(O)$ stands for the Sobolev space of real-valued measurable functions on the open and connected subset $O\subseteq\mathbb{R}^n$ whose generalized derivatives up to order $\ell\ge 0$ are in $L^p(O)$ for $p\ge 1$.
Definition 4.
Given the initial state x, a parameter value θ Θ and a discount rate α > 0 , we define the total expected α-discounted reward and cost when the controller uses a policy π in Π, as
$$V(x,\pi,r,\theta):=\mathbb{E}^{\pi,\theta}_x\!\left[\int_0^{\infty}e^{-\alpha t}\,r\bigl(x(t),\pi,\theta\bigr)\,\mathrm{d}t\right]\quad\text{and}\quad V(x,\pi,c,\theta):=\mathbb{E}^{\pi,\theta}_x\!\left[\int_0^{\infty}e^{-\alpha t}\,c\bigl(x(t),\pi,\theta\bigr)\,\mathrm{d}t\right].$$
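Although Definition 4 is purely analytical, the functionals above can be approximated numerically, as we do in Section 4.3. The sketch below is an illustration only (not part of the theory): it combines an Euler–Maruyama discretization of (1) with a Riemann sum of the discounted integrand, for a scalar state, a stationary Markov policy f, and a truncation horizon T of our choosing; the truncation bias is of order $e^{-\alpha T}$ thanks to the discounting.

```python
import numpy as np

def discounted_value(x0, f, rate, b, sigma, theta, alpha,
                     T=50.0, h=1e-3, n_paths=200, seed=0):
    """Monte Carlo approximation of E_x[ int_0^T e^{-alpha t} rate(x(t), f(x(t)), theta) dt ]
    for a scalar diffusion dx = b(x, u, theta) dt + sigma(x) dW controlled by u = f(x)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / h)
    total = 0.0
    for _ in range(n_paths):
        x, acc = x0, 0.0
        for k in range(n_steps):
            u = f(x)
            acc += np.exp(-alpha * k * h) * rate(x, u, theta) * h                  # discounted Riemann sum
            x += b(x, u, theta) * h + sigma(x) * np.sqrt(h) * rng.standard_normal()  # Euler-Maruyama step
        total += acc
    return total / n_paths
```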
Propositions 1–3 below state the properties of the functional $v\mapsto V(x,\pi,v,\theta)$ for all $(x,\pi,\theta)\in\mathbb{R}^n\times\Pi\times\Theta$. Under Assumption A3 and inequality (4), a direct computation yields the following result, which establishes that the expected α-discounted reward and cost are dominated by a Lyapunov function.
Proposition 1.
If Assumptions A1–A3 hold, the functions $V(\cdot,\pi,r,\theta)$ and $V(\cdot,\pi,c,\theta)$ belong to the space $B_w(\mathbb{R}^n)$ for each π in Π; in fact, for each x in $\mathbb{R}^n$ and $\theta\in\Theta$, we have
$$\sup_{\pi\in\Pi}V(x,\pi,r,\theta)+\sup_{\pi\in\Pi}V(x,\pi,c,\theta)\le 2M(\alpha)\,w(x)\quad\text{with }M(\alpha):=\frac{M(\alpha+d)}{\alpha c}.$$
Here, c and d are as in Assumption A2, and M is the constant in Assumption A3(b).
The following result is an extension of Proposition 3.1.5 in [40] to the topology of the relaxed controls. This shows that both the expected discounted reward and the expected cost are solutions to the linear version of the dynamic programming partial differential equation. This can be regarded as a necessary condition for the optimality of the value function. Its proof mimics the steps of the original, replacing the control sets with those used here while keeping θ Θ fixed.
Proposition 2.
Let Assumptions A1–A3 hold, and let $v:\mathbb{R}^n\times U\times\Theta\to\mathbb{R}$ be a measurable function satisfying Assumption A3. Then, for every $\pi\in\Pi$, the associated expected α-discounted function $V(\cdot,\pi,v,\theta)$ is in $W^{2,p}(\mathbb{R}^n)\cap B_w(\mathbb{R}^n)$, and is such that
$$\alpha V(x,\pi,v,\theta)=v(x,\pi,\theta)+\mathcal{L}^{\pi,\theta}V(x,\pi,v,\theta)\quad\text{for all }x\in\mathbb{R}^n\text{ and }\theta\in\Theta.$$
Conversely, if some function $\varphi(\cdot,\theta)\in W^{2,p}(\mathbb{R}^n)\cap B_w(\mathbb{R}^n)$ verifies (6), then
$$\varphi(x,\theta)=V(x,\pi,v,\theta)\quad\text{for all }x\in\mathbb{R}^n\text{ and }\theta\in\Theta.$$
Moreover, if the equality in (6) is replaced by “≤” or “≥”, then (7) holds, with the respective inequality.
Definition 5.
Let α > 0 . The total expected α-discounted constraint when the controller uses a policy π Π , given the initial state x R n and θ Θ , is defined by
$$\bar{\eta}(x,\pi,\theta):=\alpha\,\mathbb{E}^{\pi,\theta}_x\!\left[\int_0^{\infty}e^{-\alpha t}\,\eta\bigl(x(t),\theta\bigr)\,\mathrm{d}t\right].$$
Remark 2.
The function η ¯ ( · , π , θ ) belongs to the space B w ( R n ) for each π Π . Moreover, for each x R n , we have
$$\sup_{\pi\in\Pi}\bar{\eta}(x,\pi,\theta)\le\|\eta\|_w\,\frac{\alpha+d}{c}\,w(x).$$
For each α > 0 , θ Θ and x R n , assume we are given a constraint function η ( · , θ ) satisfying Assumption A3(c). In this way, we define the set
$$\mathcal{F}^x_\theta:=\bigl\{\pi\in\Pi\;\big|\;V(x,\pi,c,\theta)\le\bar{\eta}(x,\pi,\theta)\bigr\}.$$
We assume that F θ x is nonempty.
Definition 6 (The discounted problem with constraints (DPC)).
We say that a policy π * Π is optimal for the DPC with initial state x R n , given that θ Θ is the true parameter value if π * F θ x and, in addition,
$$V(x,\pi^*,r,\theta)=\sup_{\pi\in\mathcal{F}^x_\theta}V(x,\pi,r,\theta).$$
In this case, V * ( x , r , θ ) : = V ( x , π * , r , θ ) is called the α -discount optimal reward for the DPC.

3.2. Lagrange Multipliers Approach

We mimic the technique we used in [32] to transform the original DPC into an unconstrained problem. To this end, take $\lambda\le 0$ and consider the new reward rate
$$r_\lambda(x,u,\theta):=r(x,u,\theta)+\lambda\bigl(c(x,u,\theta)-\alpha\,\eta(x,\theta)\bigr).$$
Using the same notation of (5), we can write (8) as
$$r_\lambda(x,\pi,\theta):=r(x,\pi,\theta)+\lambda\bigl(c(x,\pi,\theta)-\alpha\,\eta(x,\theta)\bigr),\qquad\pi\in\Pi,\ \theta\in\Theta.$$
Observe that, for each $\alpha>0$ and $\lambda\le 0$, $r_\lambda(\cdot,\pi,\theta)$ is in $B_w(\mathbb{R}^n)$ uniformly in $\pi\in\Pi$ and $\theta\in\Theta$. In fact,
$$|r_\lambda(x,\pi,\theta)|\le|r(x,\pi,\theta)|+|\lambda|\,|c(x,\pi,\theta)|+|\lambda|\,|\alpha\eta(x,\theta)|\le M\,w(x)+M|\lambda|\,w(x)+|\lambda|\,|\alpha\eta(x,\theta)|\le\bigl(M+M|\lambda|+|\lambda|\alpha\|\eta\|_w\bigr)w(x)=N_\lambda\,w(x),$$
where $N_\lambda:=M+M|\lambda|+|\lambda|\alpha\|\eta\|_w$, and M is as in Assumption A3(b).
For all x R n and θ Θ , define
$$V(x,\pi,r_\lambda,\theta):=\mathbb{E}^{\pi,\theta}_x\!\left[\int_0^{\infty}e^{-\alpha t}\,r_\lambda\bigl(x(t),\pi,\theta\bigr)\,\mathrm{d}t\right].$$
The discounted unconstrained problem is defined as follows.
Definition 7 ( λ -Discounted unconstrained problem ( λ -DUP)).
A policy π * Π for which
$$V(x,\pi^*,r_\lambda,\theta)=\sup_{\pi\in\Pi}V(x,\pi,r_\lambda,\theta)=:V^*(x,r_\lambda,\theta)\quad\text{for all }x\in\mathbb{R}^n,$$
is called discounted optimal for the λ-DUP, and V * ( · , r λ , θ ) is referred to as the optimal discounted reward for the λ-DUP.
Let v : R n × U × Θ R be a measurable function satisfying similar conditions as those given in Assumption A3. The following is called a verification result in the literature. It shows that V * ( · , v , θ ) is the unique solution of the Hamilton–Jacobi–Bellman (HJB) Equation (11), and also proves the existence of stationary policies f θ * F . Observe that, by virtue of Definition 7, the functional to which it refers can be the optimal discounted reward for the λ -DUP. Its proof can be found in [11,15,41], considering θ as fixed.
Proposition 3.
Suppose that Assumptions A1–A3 hold. Then:
(i) 
The α-optimal discount reward V * ( · , v , θ ) belongs to W 2 , p ( R n ) B w ( R n ) and verifies the discounted reward HJB equation; that is, for all x R n and θ Θ ,
$$\alpha V^*(x,v,\theta)=\sup_{u\in U}\bigl\{r(x,u,\theta)+\mathcal{L}^{u,\theta}V^*(x,v,\theta)\bigr\}.$$
Conversely, if a function $\varphi_\theta\in W^{2,p}(\mathbb{R}^n)\cap B_w(\mathbb{R}^n)$ verifies (11), then $\varphi_\theta(x)=V^*(x,v,\theta)$ for all $x\in\mathbb{R}^n$.
(ii) 
There exists a stationary policy f θ * F that maximizes the right-hand side of (11); that is,
$$\alpha V^*(x,v,\theta)=r(x,f^*_\theta,\theta)+\mathcal{L}^{f^*_\theta,\theta}V^*(x,v,\theta)\quad\text{for all }x\in\mathbb{R}^n,$$
and f θ * is α-discount optimal.
Remark 3.
(a) 
Notice that $V(x,\pi,r_\lambda,\theta)=V(x,\pi,r,\theta)+\lambda\bigl[V(x,\pi,c,\theta)-\bar{\eta}(x,\pi,\theta)\bigr]$.
(b) 
By Definitions 4 and 5,
$$V(x,\pi,c,\theta)-\bar{\eta}(x,\pi,\theta)=\mathbb{E}^{\pi,\theta}_x\!\left[\int_0^{\infty}e^{-\alpha t}\bigl[c\bigl(x(t),\pi,\theta\bigr)-\alpha\,\eta\bigl(x(t),\theta\bigr)\bigr]\,\mathrm{d}t\right]=:V(x,\pi,c-\alpha\eta,\theta),$$
(c) 
Given that the cost and constraint rates c , η satisfy Assumption A3, we deduce
$$|c(x,\pi,\theta)-\alpha\eta(x,\theta)|\le|c(x,\pi,\theta)|+|\alpha\eta(x,\theta)|\le M\,w(x)+|\alpha\eta(x,\theta)|\le\bigl(M+\alpha\|\eta\|_w\bigr)w(x).$$
Thus, $c(\cdot,\pi,\theta)-\alpha\eta(\cdot,\theta)\in B_w(\mathbb{R}^n)$.
(d) 
The function $c-\alpha\eta:\mathbb{R}^n\times U\times\Theta\to\mathbb{R}$ is locally Lipschitz on $\mathbb{R}^n$. In fact,
$$|c(x,u,\theta)-\alpha\eta(x,\theta)-c(y,u,\theta)+\alpha\eta(y,\theta)|\le|c(x,u,\theta)-c(y,u,\theta)|+\alpha\,|\eta(x,\theta)-\eta(y,\theta)|\le\bigl[K(R)+\alpha\widetilde{K}(R)\bigr]\,|x-y|
for each R > 0 , and for all | x | , | y | R . The last inequality in (12) is met since we assume that Assumption A3 holds.
(e) 
Parts (c) and (d) imply that the function $c-\alpha\eta:\mathbb{R}^n\times U\times\Theta\to\mathbb{R}$ satisfies Assumption A3. Thus, the rate $r_\lambda(\cdot,\pi,\theta)$ is Lipschitz-continuous and $r_\lambda(\cdot,\pi,\theta)\in B_w(\mathbb{R}^n)$. Furthermore, by virtue of (9) and Proposition 1,
$$\sup_{\pi\in\Pi}V(x,\pi,r_\lambda,\theta)\le M^\lambda_\alpha\,w(x),\quad\text{with }M^\lambda_\alpha:=\frac{N_\lambda(\alpha+d)}{\alpha c},$$
implying that V ( · , π , r λ , θ ) B w ( R n ) .

3.3. Convergence of Value Functions V ( x , π , r , θ ) and V ( x , π , r λ , θ )

Definition 8.
A sequence $\{\theta_m:m=1,2,\dots\}$ of measurable functions $\theta_m:\Omega\to\Theta$ is said to be a sequence of uniformly strongly consistent (USC) estimators of $\theta\in\Theta$ if, as $m\to\infty$,
$$\theta_m(\omega)\to\theta\quad P^{\pi,\theta}\text{-a.s.}\quad\text{for all }\pi\in\Pi,$$
where P π , θ is the probability measure referred to by Remark 1.
For ease of notation, we write θ m : = θ m ( ω ) Θ . Let v : R n × U × Θ R be a measurable function satisfying similar conditions to those given in Assumption A3.
Remark 4.
(a) 
If Assumptions A1–A3 hold, then by Proposition 3.4 in [11], the mappings π V ( x , π , v , θ ) , π V ( x , π , c α η , θ ) and π V ( x , π , r λ , θ ) are continuous on Π for each x R n and θ Θ .
(b) 
Let θ m : m = 1 , be a sequence of USC estimators of θ Θ . Then, using Theorem 4.5 in [36], for every measurable function v : R n × Π × Θ R that satisfies the Assumptions A1–A3, the sequence V x , π , v , θ m : m = 1 , converges to V x , π , v , θ P π , θ -a.s., for each x R n and π Π .
(c) 
Let $\{\pi_m:m=1,2,\dots\}$ be a sequence in Π. Since Π is a compact set, there exists a subsequence $\{\pi_{m_k}:k=1,2,\dots\}\subseteq\{\pi_m:m=1,2,\dots\}$ such that $\pi_{m_k}\xrightarrow{\,W\,}\pi\in\Pi$; thus, combining parts (a) and (b), and using the following triangular inequality:
$$|V(x,\pi_{m_k},v,\theta_{m_k})-V(x,\pi,v,\theta)|\le|V(x,\pi_{m_k},v,\theta_{m_k})-V(x,\pi,v,\theta_{m_k})|+|V(x,\pi,v,\theta_{m_k})-V(x,\pi,v,\theta)|,$$
we deduce that, for every measurable function $v:\mathbb{R}^n\times U\times\Theta\to\mathbb{R}$ satisfying Assumption A3, we have that
$$V(x,\pi_{m_k},v,\theta_{m_k})\to V(x,\pi,v,\theta)\quad P^{\pi,\theta}\text{-a.s. as }k\to\infty.$$
(d) 
The optimal discount reward for the λ -DUP, V * ( · , r λ , θ ) satisfies Proposition 3. In addition, Proposition 3(ii) ensures the existence of stationary policy f θ λ F .
(e) 
For each $\lambda\le 0$, $\theta\in\Theta$ and $\alpha>0$, we denote
$$\Pi^{\lambda,\theta}:=\bigl\{\pi\in\Pi\;\big|\;\alpha V^*(x,r_\lambda,\theta)=r_\lambda(x,\pi,\theta)+\mathcal{L}^{\pi,\theta}V^*(x,r_\lambda,\theta)\ \text{for all }x\in\mathbb{R}^n\bigr\}.$$
Since F can be embedded in Π, Proposition 3(ii) ensures that the set $\Pi^{\lambda,\theta}$ is nonempty.
(f) 
Under the hypotheses of Proposition 3, Lemma 3.15 in [11] ensures that, for each fixed $\theta\in\Theta$ and any sequence $\{\lambda_m:m=1,2,\dots\}$ in $(-\infty,0]$ converging to some $\lambda\le 0$, if there exists a sequence $\{\pi_{\lambda_m,\theta}:m=1,2,\dots\}$ with $\pi_{\lambda_m,\theta}\in\Pi^{\lambda_m,\theta}$ for each $m\ge 1$ that converges to a policy $\pi\in\Pi$, then $\pi\in\Pi^{\lambda,\theta}$. That is, π satisfies
$$\alpha V^*(x,r_\lambda,\theta)=r_\lambda(x,\pi,\theta)+\mathcal{L}^{\pi,\theta}V^*(x,r_\lambda,\theta)\quad\text{for all }x\in\mathbb{R}^n.$$
(g) 
Lemma 3.16 in [11] ensures that the mapping $\lambda\mapsto V^*(x,r_\lambda,\theta)$ is differentiable on $(-\infty,0)$ for any $x\in\mathbb{R}^n$ and $\alpha>0$; in fact, for each $\lambda<0$ and $\theta\in\Theta$,
$$\frac{\partial V^*(x,r_\lambda,\theta)}{\partial\lambda}=V(x,\pi_\lambda,c,\theta)-\bar{\eta}(x,\pi_\lambda,\theta).$$

3.4. Estimation Methods for Our Application

Pedersen [42] describes the approximate maximum likelihood estimator in the following manner. The unknown parameter θ is estimated by means of some function $h_m:\Omega\times\Theta\to\mathbb{R}$, $m=1,2,\dots$, that measures the likelihood of different values of θ. If, for each fixed $\omega\in\Omega$, the function $h_m(\omega,\theta)$ has a unique maximum point $\theta_m(\omega)\in\Theta$, then θ is estimated by $\theta_m(\omega)$.
Under the assumption that, for $m\in\mathbb{N}$ and $\theta\in\Theta$, $h_m(\cdot,\theta)$ is a measurable function of ω which is also continuously differentiable in θ for $P^{\pi,\theta}$-almost all $\omega\in\Omega$, it is proven that the function $\theta\mapsto h_m(\omega,\theta)$ is continuous and has a unique maximum point $\theta_m(\omega)$ for each fixed $\omega\in\Omega$. The number $m\in\mathbb{N}$ is the index of a sequence of random experiments on the measurable space $(\Omega,\mathcal{F})$.
In our application, the outcomes of the random experiments are represented by a sequence $X_T:=\{x_{t_i}:0\le i\le m\}$ of observations of a trajectory $\{x^{u,\theta}(t):t\in[0,T]\}$ at times $0=t_0<t_1<\dots<t_m:=T$ on $(\Omega,\mathcal{F}):=\bigl(C([0,\infty)),\mathcal{B}(C([0,\infty)))\bigr)$, and the function $h_m$ will be called the least square error (LSE) function, i.e., $h_m(\omega,\theta):=LSE(X_T,\theta)$.
In practice, x u , θ ( t ) in (1) can only be observed in a finite horizon; for example, T. Actually, this is one of the hypotheses of the so-called model predictive control. However, at least from a theoretical point of view, our version of the PEC makes no such assumption, but still chooses T to be as large as practically possible (with regard to computer power, measurement instruments, computation time, etc.) so that we can define LSE as:
$$LSE(X_T,\theta):=\sum_{i=1}^{m}\Bigl[x_{t_i}-x_{t_{i-1}}-b\bigl(x_{t_{i-1}},u_{t_{i-1}},\theta\bigr)\,(t_i-t_{i-1})\Bigr]^2,$$
with b as in (1). The LSE function generates the least square estimator, θ L S E ,
$$\theta_{LSE}\equiv\theta_{LSE}(X_T):=\arg\min_{\theta\in\Theta}LSE(X_T,\theta).$$
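For concreteness, the following sketch evaluates the LSE criterion on discrete observations and minimizes it over a grid approximating Θ (a deliberately simple implementation of ours; any drift b that is vectorized over NumPy arrays can be plugged in, and for the drift of (27), $b(x,u,\theta)=u-\theta x$, it reduces, up to the discretization of $\mathrm{d}x(t)$, to the closed-form expression used in Section 4.3):

```python
import numpy as np

def lse_criterion(theta, t, x, u, b):
    """LSE(X_T, theta) = sum_i [x_{t_i} - x_{t_{i-1}} - b(x_{t_{i-1}}, u_{t_{i-1}}, theta)(t_i - t_{i-1})]^2."""
    dt = np.diff(t)
    residuals = x[1:] - x[:-1] - b(x[:-1], u[:-1], theta) * dt
    return np.sum(residuals ** 2)

def theta_lse(t, x, u, b, theta_grid):
    """Least square estimator: minimizer of LSE(X_T, .) over a grid approximating Theta."""
    values = [lse_criterion(th, t, x, u, b) for th in theta_grid]
    return theta_grid[int(np.argmin(values))]
```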
Consistency and asymptotic normality of θ L S E are studied in [18,19,42]. Shoji [18] demonstrates that the optimization based on the LSE function is identical to the optimization based on the discrete approximate likelihood ratio function when a one-dimensional stochastic differential equation with a constant diffusion coefficient is taken into account:
$$MLR(X_T,\theta):=\sum_{i=1}^{m}b\bigl(x_{t_{i-1}},u_{t_{i-1}},\theta\bigr)^{\!\top}\bigl[\sigma(x_{t_{i-1}})\sigma(x_{t_{i-1}})^{\!\top}\bigr]^{-1}\bigl(x_{t_i}-x_{t_{i-1}}\bigr)-\frac{1}{2}\sum_{i=1}^{m}b\bigl(x_{t_{i-1}},u_{t_{i-1}},\theta\bigr)^{\!\top}\bigl[\sigma(x_{t_{i-1}})\sigma(x_{t_{i-1}})^{\!\top}\bigr]^{-1}b\bigl(x_{t_{i-1}},u_{t_{i-1}},\theta\bigr)\,(t_i-t_{i-1}),$$
with b and σ as in (1). The MLR function generates the discrete approximate likelihood ratio estimator:
$$\theta_{LR}\equiv\theta_{LR}(X_T):=\arg\max_{\theta\in\Theta}MLR(X_T,\theta).$$
We establish our main result considering Remarks 3 and 4.
Theorem 1.
Let $\{\theta_m\}$ be a sequence of USC estimators of $\theta\in\Theta$, and let $\lambda^*_{x,\theta_m}<0$ be a critical point of $V^*(x,r_\lambda,\theta_m)$. Assume that there is a sequence $\{\pi_{\lambda^*_{x,\theta_m}}\in\Pi^{\lambda^*_{x,\theta_m},\theta_m}:m=1,2,\dots\}$ that converges to $\pi\in\Pi$. Then, π is optimal for the DPC. Moreover, the equalities $V(x,\pi,c,\theta)=\bar{\eta}(x,\pi,\theta)$ and $V^*(x,r_{\lambda^*_x},\theta)=V^*(x,r,\theta)$ hold $P^{\pi,\theta}$-a.s.
Proof. 
By Remark 3(a), we get
$$V\bigl(x,\pi_{\lambda^*_{x,\theta_m}},r_{\lambda^*_{x,\theta_m}},\theta_m\bigr)=V\bigl(x,\pi_{\lambda^*_{x,\theta_m}},r,\theta_m\bigr)+\lambda^*_{x,\theta_m}\Bigl[V\bigl(x,\pi_{\lambda^*_{x,\theta_m}},c,\theta_m\bigr)-\bar{\eta}\bigl(x,\pi_{\lambda^*_{x,\theta_m}},\theta_m\bigr)\Bigr],$$
whereas the facts in Remark 4(a)–(c) ensure that (17) converges to
$$V(x,\pi,r_{\lambda_{x,\theta}},\theta)=V(x,\pi,r,\theta)+\lambda_{x,\theta}\bigl[V(x,\pi,c,\theta)-\bar{\eta}(x,\pi,\theta)\bigr]$$
$P^{\pi,\theta}$-a.s., as $m\to\infty$.
On the other hand, note that
$$|V(x,\pi_{\lambda^*_{x,\theta_m}},c-\alpha\eta,\theta_m)-V(x,\pi,c-\alpha\eta,\theta)|\le$$
$$\qquad|V(x,\pi_{\lambda^*_{x,\theta_m}},c-\alpha\eta,\theta_m)-V(x,\pi,c-\alpha\eta,\theta_m)|\ +$$
$$\qquad|V(x,\pi,c-\alpha\eta,\theta_m)-V(x,\pi,c-\alpha\eta,\theta)|;$$
thus, by Remark 4(a), the first term on the right-hand side (i.e., (19)) converges to zero as $m\to\infty$; whereas, using Remark 4(b), the second term (i.e., (20)) converges to zero $P^{\pi,\theta}$-a.s. So,
$$V(x,\pi_{\lambda^*_{x,\theta_m}},c-\alpha\eta,\theta_m)\to V(x,\pi,c-\alpha\eta,\theta)\quad P^{\pi,\theta}\text{-a.s.}$$
As λ x , θ m * < 0 is a critical point of V * ( x , r λ , θ m ) , we obtain, from (14), that for every π λ x , θ m * Π λ x , θ m * , θ m ,
$$\left.\frac{\partial V^*(x,r_\lambda,\theta_m)}{\partial\lambda}\right|_{\lambda=\lambda^*_{x,\theta_m}}=V\bigl(x,\pi_{\lambda^*_{x,\theta_m}},c,\theta_m\bigr)-\bar{\eta}\bigl(x,\pi_{\lambda^*_{x,\theta_m}},\theta_m\bigr)=V\bigl(x,\pi_{\lambda^*_{x,\theta_m}},c-\alpha\eta,\theta_m\bigr)=0.$$
Therefore, from (21) and (22), we obtain
$$V(x,\pi,c-\alpha\eta,\theta)=V(x,\pi,c,\theta)-\bar{\eta}(x,\pi,\theta)=0\quad P^{\pi,\theta}\text{-a.s.}$$
This last result, along with (18) yields
V ( x , π , r λ x * , θ ) = V ( x , π , r , θ ) P π , θ - a . s .
Using Remark 4(g), π is in Π λ x * , θ ,
V * ( x , r λ x * , θ ) = sup π ¯ Π V ( x , π ¯ , r λ x * , θ ) = V ( x , π , r λ x * , θ ) .
We have that, for all π ¯ F θ x ,
$$V(x,\bar{\pi},c,\theta)-\bar{\eta}(x,\bar{\pi},\theta)\le 0.$$
This implies that $\lambda^*_x\bigl[V(x,\bar{\pi},c,\theta)-\bar{\eta}(x,\bar{\pi},\theta)\bigr]\ge 0$, which, in turn, together with (10), (24), (25) and Remark 3(a), lead to
$$V(x,\pi,r,\theta)=V(x,\pi,r_{\lambda^*_x},\theta)=V^*(x,r_{\lambda^*_x},\theta)\ge V(x,\bar{\pi},r_{\lambda^*_x},\theta)=V(x,\bar{\pi},r,\theta)+\lambda^*_x\bigl[V(x,\bar{\pi},c,\theta)-\bar{\eta}(x,\bar{\pi},\theta)\bigr]\ge V(x,\bar{\pi},r,\theta)\quad\text{for all }\bar{\pi}\in\mathcal{F}^x_\theta,\ P^{\pi,\theta}\text{-a.s.}$$
Thus,
$$V(x,\pi,r,\theta)\ge\sup_{\bar{\pi}\in\mathcal{F}^x_\theta}V(x,\bar{\pi},r,\theta)\quad P^{\pi,\theta}\text{-a.s.}$$
Finally, by (25) we obtain that π F θ x . Therefore, (26), along with (24) and (25) show that
V * ( x , r λ x * , θ ) = V ( x , π , r , θ ) = sup π ¯ F θ x V x , π ¯ , r , θ ,
in other words, π is optimal for the DPC, and V * ( x , r λ x * , θ ) coincides with the optimal reward for the DPC P π , θ -a.s. □

4. Numeric Illustration

To exemplify our results, we substitute (1) by
$$\mathrm{d}x(t)=\bigl[u(t)-\theta x(t)\bigr]\,\mathrm{d}t+\sigma\,\mathrm{d}W(t),\qquad x(0)=x>0,$$
where $0\le u(t)\le\gamma$. We assume that the reward and cost rates $r,c:[0,\infty)\times U\to\mathbb{R}$, as well as the constraint rate $\eta:[0,\infty)\times\Theta\to\mathbb{R}$, are defined by
$$r(x,u):=\sqrt{u}-ax,\ (x,u)\in[0,\infty)\times U,\ \text{with }a\ge 0;\qquad c(x,u):=c_1x+c_2u,\ (x,u)\in[0,\infty)\times U;\qquad\eta(x,\theta):=\frac{c_1x}{\alpha+\theta}+q,\ \text{for all }x>0,$$
with $U:=[0,\gamma]$, $c_1>0$, $c_2\in\mathbb{R}$ satisfying $c_1+\theta c_2>0$ and $\theta c_1+(\alpha+\theta)^2c_2>0$, where q is a positive constant. Here, $r(x,u)$ represents the social welfare, where $\sqrt{u}$ and $ax$ represent the social utility of the consumption u and the social disutility of the pollution x, respectively.
Remark 5.
Assumptions A1–A3 given in this work hold for the controlled diffusion (27); see Lemma 5.2 in [11]. In fact, the Lyapunov function in Assumption A2 is taken as w ( x ) = x 2 + 1 .

4.1. The λ -DUP

Lemma 5.3 in [11] ensures that, under the conditions imposed on the constants $c_1$, $c_2$, α, θ and q given above, for every $x>0$ and $\lambda\le 0$, the optimal reward $V^*(x,r_\lambda,\theta)$ in (10), with $r_\lambda(x,u)=\sqrt{u}-ax+\lambda\bigl(c_1x+c_2u-\frac{\alpha c_1x}{\alpha+\theta}-\alpha q\bigr)$, becomes
$$V^*(x,r_\lambda,\theta)=\frac{\bigl[\lambda\theta c_1-(\alpha+\theta)a\bigr]\,x}{(\alpha+\theta)^2}+\frac{\sqrt{f^\lambda_\theta}-a^{\lambda,\theta}f^\lambda_\theta}{\alpha}-\lambda q,$$
where
$$a^{\lambda,\theta}:=\frac{(\alpha+\theta)a-\lambda\bigl[\theta c_1+(\alpha+\theta)^2c_2\bigr]}{(\alpha+\theta)^2}>0,$$
and the discounted optimal policy for the λ-DUP ($f^\lambda_\theta\in F$), which maximizes the right-hand side of (11) for this example, is the constant function given by
$$f^\lambda_\theta=\begin{cases}\dfrac{1}{4\,(a^{\lambda,\theta})^2} & \text{if }\dfrac{1}{2\sqrt{\gamma}}<a^{\lambda,\theta},\\[2mm]\gamma & \text{if }\dfrac{1}{2\sqrt{\gamma}}\ge a^{\lambda,\theta}.\end{cases}$$
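For intuition, the form of this policy can be recovered directly: with a linear ansatz for the value function (which is the form displayed above), the maximization on the right-hand side of (11) reduces, up to terms that do not depend on u, to maximizing the strictly concave function $u\mapsto\sqrt{u}-a^{\lambda,\theta}u$ over $[0,\gamma]$. A sketch of this elementary step:
$$\frac{\mathrm{d}}{\mathrm{d}u}\bigl(\sqrt{u}-a^{\lambda,\theta}u\bigr)=\frac{1}{2\sqrt{u}}-a^{\lambda,\theta}=0\iff u=\frac{1}{4\,(a^{\lambda,\theta})^2},$$
so the unconstrained maximizer lies inside $[0,\gamma]$ precisely when $\tfrac{1}{4}(a^{\lambda,\theta})^{-2}\le\gamma$, i.e., when $a^{\lambda,\theta}\ge\tfrac{1}{2\sqrt{\gamma}}$; otherwise the objective is increasing on all of $[0,\gamma]$ and the maximum is attained at γ.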

4.2. The DPC

Using Theorem 5.5 in [11], consider a fixed point $z>0$ such that
$$q<\frac{\theta c_1z}{(\alpha+\theta)^2}+\frac{\bigl[\theta c_1+(\alpha+\theta)^2c_2\bigr]\,\gamma}{\alpha(\alpha+\theta)^2}.$$
If $\dfrac{1}{2\sqrt{\dfrac{\alpha(\alpha+\theta)^2q-\alpha\theta c_1z}{\theta c_1+(\alpha+\theta)^2c_2}}}>\dfrac{a}{\alpha+\theta}$, then the mapping $\lambda\mapsto V^*(z,r_\lambda,\theta)$ admits a critical point $\lambda^*_{z,\theta}\equiv\lambda^*_{z,\theta}(\alpha,z)<0$ satisfying
$$a^{\lambda^*_{z,\theta},\theta}=\frac{(\alpha+\theta)a-\lambda^*_{z,\theta}\bigl[\theta c_1+(\alpha+\theta)^2c_2\bigr]}{(\alpha+\theta)^2}=\frac{1}{2\sqrt{\dfrac{\alpha(\alpha+\theta)^2q-\alpha\theta c_1z}{\theta c_1+(\alpha+\theta)^2c_2}}}.$$
Hence, every $\pi_{\lambda^*_{z,\theta}}\in\Pi^{\lambda^*_{z,\theta}}$ is α-optimal for the DPC and $V(z,\pi_{\lambda^*_{z,\theta}},c,\theta)=\bar{\eta}(z,\pi_{\lambda^*_{z,\theta}},\theta)$; in particular, the corresponding α-optimal policy for the DPC is $f^{\lambda^*_{z,\theta}}_\theta\in F\cap\Pi^{\lambda^*_{z,\theta}}$, namely
$$f^{\lambda^*_{z,\theta}}_\theta=\frac{\alpha(\alpha+\theta)^2q-\alpha\theta c_1z}{\theta c_1+(\alpha+\theta)^2c_2},$$
and the α -optimal value for the DPC is given by
$$V^*(z,r_{\lambda^*_{z,\theta}},\theta)=V\bigl(z,\pi_{\lambda^*_{z,\theta}},r,\theta\bigr)=-\frac{az}{\alpha+\theta}+\frac{1}{\alpha}\sqrt{\frac{\alpha(\alpha+\theta)^2q-\alpha\theta c_1z}{\theta c_1+(\alpha+\theta)^2c_2}}-\frac{a}{\alpha+\theta}\cdot\frac{(\alpha+\theta)^2q-\theta c_1z}{\theta c_1+(\alpha+\theta)^2c_2}.$$
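The two closed-form expressions above are easy to evaluate. The sketch below does so for the parameter values of Section 4.3; the fixed state z is not reported explicitly in the text, so the value z = 1 used here is our own inference (it reproduces the “Real” rows of Table 3 and Table 4 up to rounding):

```python
import numpy as np

# Parameter values from Section 4.3; z = 1 is an assumption of ours (see above).
alpha, theta, a, c1, c2, q, z = 0.2, 2.5, 1.25, 100.0, 10.0, 60.0, 1.0

den = theta * c1 + (alpha + theta) ** 2 * c2
f_star = (alpha * (alpha + theta) ** 2 * q - alpha * theta * c1 * z) / den        # optimal policy (29)
V_star = (-a * z / (alpha + theta)
          + np.sqrt(f_star) / alpha
          - (a / (alpha + theta)) * ((alpha + theta) ** 2 * q - theta * c1 * z) / den)  # optimal reward (30)

print(f_star, V_star)   # approximately 0.116073 and 0.9718, as in the "Real" row of Table 3
```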

4.3. Numerical Results for the Optimal Accumulation Problem

To implement the optimal controller (28), we estimate the unknown parameter with the LSE given by (15) and (16). By replacing $b(x,u,\theta)=u-\theta x$ in (15), a direct computation yields:
$$\theta^m_{LSE}=\frac{\sum_{i=2}^{m-1}\bigl(u_{x_i}-\mathrm{d}x_i\bigr)\,x_i}{\sum_{i=2}^{m-1}x_i^2},$$
where $\mathrm{d}x_i:=\dfrac{1}{2}\,\dfrac{x_{i+1}-x_{i-1}}{t_{i+1}-t_i}$.
Assume that the true value of the parameter is $\theta=2.5$, and take $u(t)=0.5$, $\alpha=0.2$, $T=2.8$, $c_1=100$, $c_2=10$, $\lambda=-0.1$, $q=60$, $a=1.25$, and $\gamma=3$. We next obtain discrete observations of the stochastic differential Equation (27) by simulating it with the Euler–Maruyama technique on $[0,2.8]$. Based on this information, we obtain $m=1200$ observations for two values of the diffusion coefficient, $\sigma=0.0001$ and $\sigma=0.0007$.
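A minimal sketch of this experiment is as follows (our own reconstruction for illustration: an Euler–Maruyama discretization of (27) with the constant control u(t) = 0.5, followed by the closed-form least square estimator with the central-difference approximation of $\mathrm{d}x(t)$; the initial state x(0) is not reported in the text, so the value used below is only a placeholder):

```python
import numpy as np

theta_true, u, T, sigma, m = 2.5, 0.5, 2.8, 0.0001, 1200
x0 = 0.2                                   # placeholder: the initial state is not reported
h = T / m
rng = np.random.default_rng(1)

x = np.empty(m + 1)
x[0] = x0
for i in range(m):
    # Euler-Maruyama step for dx = (u - theta*x) dt + sigma dW
    x[i + 1] = x[i] + (u - theta_true * x[i]) * h + sigma * np.sqrt(h) * rng.standard_normal()

# Central-difference approximation of dx(t) and the closed-form least square estimator
dx = (x[2:] - x[:-2]) / (2.0 * h)          # dx_i = (x_{i+1} - x_{i-1}) / (2 (t_{i+1} - t_i))
xi = x[1:-1]
theta_hat = np.sum((u - dx) * xi) / np.sum(xi ** 2)
print(theta_hat)                           # approaches 2.5 as m grows and sigma shrinks
```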

4.3.1. Numerical Results for the λ -DUP

In Table 1 and Table 2, we denote the root mean square error (RMSE) between the predicted process $x^{\theta^m_{LSE},\,f^\lambda_{\theta^m_{LSE}}}(t)$ and the real process $x^{\theta,\,f^\lambda_\theta}(t)$ by $RMSE(x^*_{\theta_m}-x^*_\theta)$, and the RMSE between the predicted optimal discount cost $V^*_{\theta^m_{LSE}}(x)$ and the real optimal discount cost $V^*_\theta(x)$ by $RMSE(V^*_{\theta_m}-V^*_\theta)$.
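The RMSE figures reported in the tables are, presumably, the usual discrete root mean square error over the sampling times (our notation; the exact averaging convention is not spelled out in the text):
$$\mathrm{RMSE}\bigl(x^*_{\theta_m}-x^*_{\theta}\bigr)=\sqrt{\frac{1}{m+1}\sum_{i=0}^{m}\Bigl(x^{\theta^m_{LSE},\,f^{\lambda}_{\theta^m_{LSE}}}(t_i)-x^{\theta,\,f^{\lambda}_{\theta}}(t_i)\Bigr)^{2}},$$
and analogously for $\mathrm{RMSE}(V^*_{\theta_m}-V^*_{\theta})$.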
Table 1 and Table 2 take $m=240,300,400,600$ and 1200 and display the corresponding values of $\theta^m_{LSE}$. As can be seen, as m increases, the estimator approaches the true parameter value $\theta=2.5$, and the RMSEs between the predicted processes $x^{\theta^m_{LSE},\,f^\lambda_{\theta^m_{LSE}}}(t)$, $V^*_{\theta^m_{LSE}}(x)$ and the real processes $x^{\theta,\,f^\lambda_\theta}(t)$, $V^*_\theta(x)$ decrease, thus implying a good fit (see Figure 1 and Figure 2). We can also see that, as the amount of data increases, the absolute error between the predicted optimal control $f_{\theta^m_{LSE}}(x_{\theta^m_{LSE}}(t))$ and the real optimal control $f_\theta(x_\theta(t))$ decreases. Therefore, the predicted optimal control approaches the true optimal control.
The diffusion process (27) with $\sigma=0.0001$ shows the best fit because, with 1200 data points, $\theta^m_{LSE}=2.499682$ and its RMSE is $9.76504\times10^{-5}$, which suggests that the lower the noise in the measured data, the more accurate the least square estimator.

4.3.2. Numerical Results for the DPC

Table 3 and Table 4 show the predicted optimal controls $f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}}$ defined in (29), as well as the predicted α-optimal rewards for the DPC, $V^*\bigl(z,\theta^m_{LSE},f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}},r\bigr)$, given in (30) and denoted by $V^*_z(\theta^m_{LSE})$. As m increases, the estimator approaches the true parameter value $\theta=2.5$, and the predicted optimal controls and rewards converge to the real optimal control and reward $f^{\lambda^*_{z,\theta}}_\theta$ and $V^*(z,\theta,f^{\lambda^*_{z,\theta}}_\theta,r)$, respectively, implying a good fit. Again, $\sigma=0.0001$ shows the best fit, which suggests that the lower the noise in the measured data, the more accurate the LSE.

5. Concluding Remarks

This paper concerns controlled stochastic differential equations of the form (1), where the drift coefficient depends on an unknown parameter $\theta\in\Theta$. Using a statistical estimation procedure to find θ, we constructed adaptive policies which are almost surely optimal for the constrained optimization problem under the discounted payoff on an infinite horizon. To this end, we let $V^*_\theta(x,r_\lambda):=\sup_{\pi\in\Pi}V(x,\pi,r_\lambda,\theta)$ and $V^*_\theta(x,r):=\sup_{\pi\in\mathcal{F}^x_\theta}V(x,\pi,r,\theta)$ be the optimal discounted rewards for the DUP and the DPC, respectively. Our results are our own version of the PEC, and can be summarized as follows:
1.
For each m, there are optimal control policies π θ m for the θ m -DUP and θ m -DPC.
2.
For each initial state $x\in\mathbb{R}^n$, $V^*_{\theta_m}(x,r_\lambda)\to V^*_\theta(x,r_\lambda)$ and $V^*_{\theta_m}(x,r)\to V^*_\theta(x,r)$ almost surely as $m\to\infty$.
3.
For the DUP, there is a subsequence ( m k : k = 1 , ) of ( m ) and a policy π θ * Π λ , θ , such that π λ m k , θ m k converges to π θ * in the topology of relaxed controls, and, moreover, π θ * is optimal for the θ -DUP. Moreover, if λ m k < 0 is a critical point of V θ m k * ( x , r λ ) , then π θ * is optimal for the θ -DCP.
Some of the techniques we use are standard in the context of dynamic programming, and our use of the discounted payoff criterion on the infinite horizon renders the problem a rather flexible one. This criterion emphasizes the weight of the rewards and costs on the present generations while tending to overlook their effects on future generations. One way to prevent this from happening is to use the ergodic criterion by means of the so-called vanishing technique, which has been used in, for instance [43]. To obtain some insight as to how this method would alter the value function and the overall results presented here, we invite the reader to let α 0 .
There are many ways to obtain a sequence of USC estimators of the unknown parameter θ (see [18]). However, when implementing the approximation algorithms for θ, one needs to check which type of numerical approximation of the derivative $\mathrm{d}x(t)$ is required. In our case, we replaced $\mathrm{d}x(t)$ with its central difference, instead of the backward difference, because the former yields more accurate approximations in our application.
The PEC requires knowledge and storage of the optimal policies π θ for all values of θ , which may require considerable off-line computation and considerable storage. Therefore, for optimal control problems with closed-form solutions ( π θ * , V θ * ( x ) ), such as, for example, LQ problems (linear systems with quadratic costs), the PEC works well. In this sense, our model resembles the model predictive control. However, the fact that the horizon has to be finite in the latter is a serious limitation that is surpassed by our proposal. In fact, the numeric illustration from Section 4 is another example of the distinction between our version of the PEC, and the polynomial chaos expansion method. While the latter aims to approximate the probability densities of finite-variance random variables, our goal is to obtain an optimal control while making estimations of the infinite-total variation processes (1). This is particularly true in the case of (27), regardless of how small the diffusion coefficient σ is in our illustration. There, the focus point should be that the lower the noise in the measured data, the more accurate the LSE.
One of the drawbacks of our method is that a closed-form solution $(\pi_\theta,V^*_\theta(x))$ is virtually impossible to obtain for many optimal control problems, with or without constraints. Another limitation is that, for each application, there is a large number of assumptions and constraints that need to be verified. We believe this deflection from the main problem could be eased by the inclusion of our method in (for instance) MATLAB’s robust control toolbox.
The second part of this project will approximate the adaptive original problem using a sequence of discrete-time adaptive optimal control problems of controlled Markov switching diffusions.

Author Contributions

Conceptualization, methodology, and writing/original draft preparation of this research are due to B.A.E.-T., F.A.A.-H. and J.D.L.-B.; software, validation, visualization, and data curation are original of F.A.A.-H.; formal analysis, investigation, writing/review and editing are due to C.G.H.-C.; project administration, funding acquisition are due to J.D.L.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Universidad Anáhuac México grant number 00100575.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to sincerely thank Ekaterina Viktorovna Gromova for her kind invitation to publish this work.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Technical Assumptions

The following hypothesis ensures the existence and uniqueness of a strong solution to (1). For more details, see Theorem 3.1 in [43] and Chapter III.2 in [44].
Assumption A1.
(a) 
The random process (1) belongs to a complete probability space Ω , F , P u , θ . Here, { F t } t 0 is a filtration on ( Ω , F ) , such that each F t is complete relative to F ; and P u , θ is the law of the state process x ( · ) given the parameter θ Θ and the control u ( · ) .
(b) 
The drift coefficient $b(\cdot,\cdot,\cdot)$ in (1) is continuous and locally Lipschitz in the first and third arguments uniformly in u; that is, for each $R>0$, there exist nonnegative constants $K_\theta(R)$ and $D(R)$ such that, for all $u\in U$, all $|\theta_1|,|\theta_2|\le R$ and $|x|,|y|\le R$,
$$|b(x,u,\theta)-b(y,u,\theta)|\le K_\theta(R)\,|x-y|,$$
$$|b(x,u,\theta_1)-b(x,u,\theta_2)|\le D(R)\,|\theta_1-\theta_2|.$$
Moreover, u b ( x , u , θ ) is continuous on U.
(c) 
The diffusion coefficient σ ( · ) satisfies a local Lipschitz condition; that is, for each R > 0 , there is a positive constant K 1 ( R ) such that, for all | x | , | y | R ,
$$|\sigma(x)-\sigma(y)|\le K_1(R)\,|x-y|.$$
(d) 
The coefficients b and σ satisfy a global linear growth condition of the form
$$\sup_{(u,\theta)\in U\times\Theta}\bigl|b(x,u,\theta)\bigr|^2+|\sigma(x)|^2\le\widetilde{K}\,\bigl(1+|x|^2\bigr)\quad\text{for all }x\in\mathbb{R}^n,$$
where K ˜ is a positive constant.
(e) 
(Uniform ellipticity). The matrix $a(x):=\sigma(x)\sigma(x)^{\top}$ satisfies that, for some constant $K_1>0$,
$$x^{\top}a(y)\,x\ge K_1|x|^2\quad\text{for all }x,y\in\mathbb{R}^n.$$
The following hypothesis is a standard Lyapunov stability condition for the solution of the dynamic system (1) (see [37,41]). It gives the following inequality (4).
Assumption A2.
There exists a function $w\ge 1$ in $C^2(\mathbb{R}^n)$ and constants $d,c>0$, such that
(a) 
$$\lim_{|x|\to\infty}w(x)=\infty.$$
(b) 
$$\mathcal{L}^{\pi,\theta}w(x)\le -c\,w(x)+d\quad\text{for all }\pi\in\Pi,\ \theta\in\Theta\text{ and }x\in\mathbb{R}^n.$$
The reward and cost functions are supposed to meet the next hypothesis.
Assumption A3.
(a) 
The payoff rate $r(x,u,\theta)$, the cost rate $c(x,u,\theta)$ and the constraint rate $\eta(\cdot,\theta)$ are continuous on $\mathbb{R}^n\times U\times\Theta$ and $\mathbb{R}^n\times\Theta$, respectively. Moreover, they are locally Lipschitz on $\mathbb{R}^n$, uniformly on U and Θ; that is, for each $R>0$, there are positive constants $K(R)$ and $K_2(R)$ such that, for all $|x|,|y|\le R$,
$$\sup_{(u,\theta)\in U\times\Theta}|r(x,u,\theta)-r(y,u,\theta)|+\sup_{(u,\theta)\in U\times\Theta}|c(x,u,\theta)-c(y,u,\theta)|\le K(R)\,|x-y|,$$
$$\sup_{\theta\in\Theta}|\eta(x,\theta)-\eta(y,\theta)|\le K_2(R)\,|x-y|.$$
(b) 
The rates r ( · , u , θ ) , c ( · , u , θ ) and η ( · , θ ) are in B w ( R n ) uniformly on U and Θ; in other words, there exists M > 0 such that, for all x R n
$$\sup_{\theta\in\Theta}|\eta(x,\theta)|+\sup_{(u,\theta)\in U\times\Theta}|r(x,u,\theta)|+\sup_{(u,\theta)\in U\times\Theta}|c(x,u,\theta)|\le M\,w(x).$$

References

  1. Kurano, M. Discrete-time Markovian decision processes with an unknown parameter-average return criterion. J. Oper. Res. Soc. Jpn. 1972, 15, 67–76. [Google Scholar]
  2. Mandl, P. Estimation and control in Markov chains. Adv. Appl. Probab. 1974, 6, 40–60. [Google Scholar] [CrossRef]
  3. Hernández-Lerma, O.; Marcus, S. Technical note: Adaptive control of discounted Markov Decision chains. J. Optim. Theory Appl. 1985, 46, 227–235. [Google Scholar] [CrossRef]
  4. Hilgert, N.; Minjárez-Sosa, A. Adaptive control of stochastic systems with unknown disturbance distribution: Discounted criteria. Math. Methods Oper. Res. 2006, 63, 443–460. [Google Scholar] [CrossRef]
  5. Broadie, M.; Cvitanic, J.; Soner, H.M. Optimal replication of contingent claims under portfolio constraints. Rev. Fin. Stud. 1998, 11, 59–79. [Google Scholar] [CrossRef]
  6. Cvitanic, J.; Pham, H.; Touzi, N. A closed-form solution for the super-replication problem under transaction costs. Financ. Stochastics 1999, 3, 35–54. [Google Scholar] [CrossRef]
  7. Cvitanic, J.; Pham, H.; Touzi, N. Superreplication in stochastic volatility models under portfolio constraints. J. Appl. Probab. 1999, 36, 523–545. [Google Scholar] [CrossRef]
  8. Soner, M.; Touzi, N. Super replication under gamma constraints. SIAM J. Control Optim. 2000, 39, 73–96. [Google Scholar] [CrossRef]
  9. Mendoza-Pérez, A.; Jasso-Fuentes, H.; Hernández-Lerma, O. The Lagrange approach to ergodic control of diffusions with cost constraints. Optimization 2015, 64, 179–196. [Google Scholar] [CrossRef]
  10. Prieto-Rumeau, T.; Hernández-Lerma, O. The vanishing discount approach to constrained continuous-time controlled Markov chains. Syst. Control Lett. 2010, 59, 504–509. [Google Scholar] [CrossRef]
  11. Jasso-Fuentes, H.; Escobedo-Trujillo, B.A.; Mendoza-Pérez, A. The Lagrange and the vanishing discount techniques to controlled diffusion with cost constraints. J. Math. Anal. Appl. 2016, 437, 999–1035. [Google Scholar] [CrossRef]
  12. Bielecki, T. Adaptive control of continuous-time linear stochastic systems with discounted cost criterion. J. Optim. Theory Appl. 1991, 68, 379–383. [Google Scholar] [CrossRef]
  13. Vrabie, D.; Pastravanu, O.; Abu-Khalaf, M.; Lewis, F. Adaptive optimal control for continuous-time linear systems based on policy iteration. Automatica 2009, 45, 477–484. [Google Scholar] [CrossRef]
  14. Di Masi, G.; Stettner, L. Bayesian ergodic adaptive control of diffusion processes. Stochastics Stochastics Rep. 1997, 60, 155–183. [Google Scholar] [CrossRef]
  15. Borkar, V.; Ghosh, M. Ergodic Control of Multidimensional Diffusions II: Adaptive Control. Appl. Math. Optim. 1990, 21, 191–220. [Google Scholar] [CrossRef]
  16. Borkar, V.; Bagchi, A. Parameter estimation in continuous-time stochastic processes. Stochastics 1982, 8, 193–212. [Google Scholar] [CrossRef]
  17. Huzak, M. Estimating a class of diffusions from discrete observations via approximate maximum likelihood method. Statistics 2018, 52, 239–272. [Google Scholar] [CrossRef]
  18. Shoji, I. A note on asymptotic properties of the estimator derived from the Euler method for diffusion processes at discrete times. Stat. Probab. Lett. 1997, 36, 153–159. [Google Scholar] [CrossRef]
  19. Ralchenko, K. Asymptotic normality of discretized maximum likelihood estimator for drift parameter in homogeneous diffusion model. Mod. Stochastics Theory Appl. 2015, 2, 17–28. [Google Scholar] [CrossRef]
  20. Duncan, T.; Pasik-Duncan, B.; Stettner, L. Almost self-optimizing strategies for the adaptive control of diffusion processes. J. Optim. Theory Appl. 1994, 81, 479–507. [Google Scholar] [CrossRef]
  21. Durham, G.; Gallant, A. Numerical Techniques for Maximum Likelihood Estimation of Continuous-Time Diffusion Processes. J. Bus. Econ. Stat. 2002, 20, 297–316. [Google Scholar] [CrossRef] [Green Version]
  22. Athanassoglou, S.; Xepapadeas, A. Pollution control with uncertain stock dynamics: When, and how, to be precautious. J. Environ. Econ. Manag. 2012, 63, 304–320. [Google Scholar] [CrossRef]
  23. Jiang, K.; You, D.; Li, Z.; Shi, S. A differential game approach to dynamic optimal control strategies for watershed pollution across regional boundaries under eco-compensation criterion. Ecol. Indic. 2019, 105, 229–241. [Google Scholar] [CrossRef]
  24. Kawaguchi, K. Optimal Control of Pollution Accumulation with Long-Run Average Welfare. Environ. Resour. Econ. 2003, 26, 457–468. [Google Scholar] [CrossRef]
  25. Kawaguchi, K.; Morimoto, H. Long-run average welfare in a pollution accumulation model. J. Econ. Dyn. Control 2007, 31, 703–720. [Google Scholar] [CrossRef]
  26. Morimoto, H. Optimal Pollution Control with Long-Run Average Criteria. In Stochastic Control and Mathematical Modeling: Applications in Economics; Encyclopedia of Mathematics and its Applications, Cambridge University Press: Cambridge, UK, 2010; pp. 237–251. [Google Scholar] [CrossRef]
  27. Jasso-Fuentes, H.; López-Barrientos, J.D. On the use of stochastic differential games against nature to ergodic control problems with unknown parameters. Int. J. Control 2015, 88, 897–909. [Google Scholar] [CrossRef]
  28. Zhang, G.; Zhang, Z.; Cui, Y.; Yuan, C. Game Model of Enterprises and Government Based on the Tax Preference Policy for Energy Conservation and Emission Reduction. Filomat 2016, 30, 3963–3974. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Zhang, G.; Su, B. The spatial impacts of air pollution and socio-economic status on public health: Empirical evidence from China. Socio-Econ. Plan. Sci. 2022, 83, 101167. [Google Scholar] [CrossRef]
  30. Cox, L.A.T., Jr. Confronting Deep Uncertainties in Risk Analysis. Risk Anal. 2012, 32, 1607–1629. [Google Scholar] [CrossRef]
  31. López-Barrientos, J.D.; Jasso-Fuentes, H.; Escobedo-Trujillo, B.A. Discounted robust control for Markov diffusion processes. Top 2015, 23, 53–76. [Google Scholar] [CrossRef]
  32. Escobedo-Trujillo, B.A.; López-Barrientos, J.D.; Garrido-Meléndez, J. A Constrained Markovian Diffusion Model for Controlling the Pollution Accumulation. Mathematics 2021, 9, 1466. [Google Scholar] [CrossRef]
  33. Borkar, V.; Ghosh, M. Controlled diffusions with constraints. J. Math. Anal. Appl. 1990, 152, 88–108. [Google Scholar] [CrossRef]
  34. Borkar, V. Controlled diffusions with constraints II. J. Math. Anal. Appl. 1993, 176, 310–321. [Google Scholar] [CrossRef]
  35. Duncan, T.; Pasik-Duncan, B. Adaptive control of continuous time linear stochastic systems. Math. Control. Signals Syst. 1990, 3, 45–60. [Google Scholar] [CrossRef]
  36. Escobedo-Trujillo, B.; Hernández-Lerma, O.; Alaffita-Hernández, F. Adaptive control of diffusion processes with a discounted criterion. Appl. Math. 2020, 47, 225–253. [Google Scholar] [CrossRef]
  37. Arapostathis, A.; Borkar, V.; Ghosh, M. Ergodic control of diffusion processes. In Encyclopedia of Mathematics and its Applications; Cambridge University Press: Cambridge, UK, 2012; Volume 143. [Google Scholar]
  38. Warga, J. Optimal Control of Differential and Functional Equations; Academic Press: New York, NY, USA, 1972. [Google Scholar]
  39. Fleming, W.; Nisio, M. On the stochastic relaxed control for partially observed diffusions. Nagoya Math. J. 1984, 93, 71–108. [Google Scholar] [CrossRef]
  40. Jasso-Fuentes, H.; Yin, G. Advanced Criteria for Controlled Markov-Modulated Diffusions in an Infinite Horizon: Overtaking, Bias, and Blackwell Optimality; Science Press: Beijing, China, 2013. [Google Scholar]
  41. Jasso-Fuentes, H.; Hernández-Lerma, O. Characterizations of overtaking optimality for controlled diffusion processes. Appl. Math. Optim. 2007, 57, 349–369. [Google Scholar] [CrossRef]
  42. Pedersen, A.R. Consistency and asymptotic normality of an approximate maximum likelihood estimator for discretely observed diffusions process. Bernoulli 1995, 1, 257–279. [Google Scholar] [CrossRef]
  43. Ghosh, M.K.; Arapostathis, A.; Marcus, S.I. Ergodic control of switching diffusions to flexible manufacturing systems. SIAM J. Control Optim. 1993, 31, 1183–1204. [Google Scholar] [CrossRef] [Green Version]
  44. Rogers, L.; Williams, D. Diffusions, Markov Processes and Martingales, Vol.1, Foundations; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
Figure 1. Asymptotic behavior of $x^{\theta^m_{LSE},\,f_{\theta^m_{LSE}}}(t)$ with $\sigma=0.0001$ (left) and $\sigma=0.0007$ (right).
Figure 2. Asymptotic behavior of the optimal discount cost for $\sigma=0.0001$ (left) and $\sigma=0.0007$ (right).
Table 1. RMSE and absolute error between the estimated processes and the real processes ($\theta=2.5$) with $\sigma=0.0001$.

m | $\theta^m_{LSE}$ | RMSE$(V^*_{\theta_m}-V^*_\theta)$ | RMSE$(x^*_{\theta_m}-x^*_\theta)$ | $f_{\lambda,\theta^m_{LSE}}$ | $|f_{\lambda,\theta^m_{LSE}}-f_{\lambda,\theta}|$
240 | 2.499627 | 0.00204305 | 0.000509042 | 0.00410459 | $3.1947\times10^{-8}$
300 | 2.499643 | 0.00161749 | 0.00040062 | 0.00410458 | $3.0571\times10^{-8}$
400 | 2.499659 | 0.00117931 | 0.000285924 | 0.00410458 | $2.91879\times10^{-8}$
600 | 2.499678 | 0.000670105 | 0.000155536 | 0.00410458 | $2.75402\times10^{-8}$
1200 | 2.499682 | 0.000437844 | $9.76504\times10^{-5}$ | 0.00410458 | $2.72459\times10^{-8}$
Real | 2.5 | 0 | 0 | 0.00410455 | 0
Table 2. RMSE and absolute error between the estimated processes and the real processes ($\theta=2.5$) with $\sigma=0.0007$.

m | $\theta^m_{LSE}$ | RMSE$(V^*_{\theta_m}-V^*_\theta)$ | RMSE$(x^*_{\theta_m}-x^*_\theta)$ | $f_{\lambda,\theta^m_{LSE}}$ | $|f_{\lambda,\theta^m_{LSE}}-f_{\lambda,\theta}|$
240 | 2.498748 | 0.00299057 | 0.000718096 | 0.00410466 | $1.06613\times10^{-7}$
300 | 2.498791 | 0.00266943 | 0.000629978 | 0.00410466 | $1.03033\times10^{-7}$
400 | 2.498873 | 0.00184702 | 0.000423607 | 0.00410465 | $9.6021\times10^{-8}$
600 | 2.498762 | 0.00176898 | 0.000437829 | 0.00410466 | $1.05499\times10^{-7}$
1200 | 2.498762 | 0.00136883 | 0.00033381 | 0.00410466 | $1.05454\times10^{-7}$
Real | 2.5 | 0 | 0 | 0.00410455 | 0
Table 3. Estimated processes and the real processes ($\theta=2.5$) with $\sigma=0.0001$.

m | $f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}}$ | $\bigl|f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}}-f^{\lambda^*_{z,\theta}}_\theta\bigr|$ | $V^*\bigl(z,\theta^m_{LSE},f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}},r\bigr)$ | $\bigl|V^*_z(\theta^m_{LSE})-V^*_z(\theta)\bigr|$
240 | 0.116042 | $3.11391\times10^{-5}$ | 0.971567 | 0.000257628
300 | 0.116043 | $2.97955\times10^{-5}$ | 0.971578 | 0.000246511
400 | 0.116045 | $2.84453\times10^{-5}$ | 0.971589 | 0.00023534
600 | 0.116046 | $2.68369\times10^{-5}$ | 0.971602 | 0.000222032
1200 | 0.116047 | $2.65498\times10^{-5}$ | 0.971605 | 0.000219656
Real | 0.116073 | 0 | 0.971824 | 0
Table 4. Estimated processes and the real processes ($\theta=2.5$) with $\sigma=0.0007$.

m | $f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}}$ | $\bigl|f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}}-f^{\lambda^*_{z,\theta}}_\theta\bigr|$ | $V^*\bigl(z,\theta^m_{LSE},f^{\lambda^*_{z,\theta^m_{LSE}}}_{\theta^m_{LSE}},r\bigr)$ | $\bigl|V^*_z(\theta^m_{LSE})-V^*_z(\theta)\bigr|$
240 | 0.115969 | 0.000104368 | 0.970961 | 0.000863615
300 | 0.115972 | 0.000100842 | 0.97099 | 0.00083443
400 | 0.115979 | $9.39409\times10^{-5}$ | 0.971047 | 0.000777316
600 | 0.11597 | 0.000103271 | 0.97097 | 0.000854533
1200 | 0.11597 | 0.000103226 | 0.97097 | 0.000854166
Real | 0.116073 | 0 | 0.971824 | 0
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
