Article

The Convergence of Data-Driven Optimal Iterative Learning Control for Linear Multi-Phase Batch Processes

1 Department of Applied Mathematics, School of Sciences, Xi’an Polytechnic University, Xi’an 710048, China
2 Department of Applied Mathematics, School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2304; https://doi.org/10.3390/math10132304
Submission received: 25 May 2022 / Revised: 27 June 2022 / Accepted: 29 June 2022 / Published: 1 July 2022

Abstract

For multi-phase batch processes whose phases may have different state dimensions and whose dynamics in each phase can be described by a linear discrete-time-invariant system, a data-driven optimal ILC is explored using multi-operation input and output data subject to a tracking performance criterion. An iterative learning identification is constructed to estimate the system Markov parameters by minimizing an evaluation criterion that consists of the residual of the real outputs from the predicted outputs and the difference between two adjacent identifications. Meanwhile, the estimated Markov parameter matrix is embedded into the learning control process in an interactive form. By virtue of inner product theory, the monotonic descent of the estimation error is derived without any restriction on the weighting factor. Furthermore, algebraic derivation demonstrates that the tracking error is strictly monotonically convergent if the estimation error falls within an appropriate domain. Numerical simulations were carried out to illustrate the validity and the effectiveness of the proposed method.

1. Introduction

Iterative learning control (ILC) has been recognized as one of the most effective intelligent control strategies because it requires little prior knowledge of the system parameters and delivers significant performance (e.g., [1,2,3,4]). The core mission of the ILC mechanism is to design an adequate control input law for achieving perfect repetitive tracking throughout the whole operation duration as the iterations increase. The conventional control laws can be based on proportional-type, derivative-type, and/or integral-type tracking errors, which are exploited to supply sufficient conditions for guaranteeing either asymptotical or monotonic convergence (e.g., [5,6,7,8,9]). Nevertheless, the conventional control input law is usually embedded with a constant learning gain, with no evaluation of the tracking performance, and its convergence condition depends on the system parameter information.
For improving the tracking performance of conventional ILC schemes, an optimized control input law has been explored by minimizing the performance index function on the basis of the norm-optimal or parameter-optimal framework. Regarding this topic, norm-optimal ILC (NOILC) algorithms have been comprehensively analyzed by solving optimization problems in the form of the principle of the minimum, gradient or Newton or quasi-Newton algorithm (e.g., [10,11,12,13]). Moreover, parameter-optimal ILC (POILC) has probed the learning gain by minimizing the quadratic norms of the tracking error and the learning gain (e.g., [14,15,16]). It is clear that optimized ILC contributes significantly to the development of ILC.
However, the above-mentioned optimized ILCs are heavily dependent upon the system Markov parameter matrix. This makes those optimized ILCs hardly implementable if the system Markov parameters are not available. A feasible option, in a trade-off sense, may be to design a suboptimal ILC scheme that identifies the unknown Markov parameters and incorporates the identification of the Markov parameter matrix into the learning control process. In this respect, a novel data-driven ILC has been presented to determine the learning gain matrices with sequential estimations of the Markov parameter matrix, which are acquired by the lower-triangular Toeplitz-spanned matrices of a linear combination of the multi-batch outputs and inputs; however, theoretical analysis of the convergence of either the estimation error or the tracking error has not yet been explored [17]. Another type of data-driven ILC scheme is a gradient-based learning gain matrix that is constructed in an interactive form with the estimation of the system parameter matrix updated by multi-batch inputs and outputs [18]. Further, the relevant works have been extended to different types of discrete-time systems (e.g., [19,20,21]). It is obvious that these works achieved impressive results. However, in order to avoid matrix inversion and guarantee convergence, this type of data-driven ILC compromises the gain matrix of the control law by replacing the matrix $\big(\eta I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1}$ with $\big(\eta I + \|G_k^q\|^2\big)^{-1}$, which cannot achieve optimal control in the practical sense. Recently, an input–output-driven gain-adaptive ILC was proposed to remove this restrictive condition [22].
On the other side, the multi-phase batch process is a common process in which the dynamics of each batch can be described as a switched system whose dynamics switch among a finite number of subsystems with different phases (or time intervals) [23,24,25]. Although the achievements of ILC schemes for single-phase batch processes are remarkable (e.g., [26,27,28,29]), to date, there have been few investigations of the ILC mechanism for the multi-phase batch process. It was not until 2007 that a control formulation was presented for the first time for multi-phase batch processes in [24]. The work that followed first proposed an iterative learning two-dimensional (2D) predictive control model by solving a quadratic programming problem for multi-phase batch processes [25]. Other studies have explored system stability, shortest running time, interval time-varying delays, and robustness [30,31,32]. Another class of ILC investigations for the multi-phase batch process consists of conventional ILC schemes employed in repetitive switched systems, such as a D-type ILC scheme for a class of switched continuous-time nonlinear systems [33], a hybrid ILC scheme for a class of discrete-time linear switched systems [34], and a PD-type ILC for a class of linear continuous-time switched systems with measurement noise [35]. However, in multi-phase batch processes, neither 2D switched-system ILCs nor conventional ILCs consider the tracking performance. For an ILC scheme, the more learning information is used, the better the control performance that may be obtained. Hence, it is necessary to exploit an effective ILC strategy with multifarious learning information.
The purpose of this paper is to study a data-driven optimal ILC (DDOILC) for linear discrete-time multi-phase batch processes by incorporating multi-batch learning inputs and outputs to establish the tracking performance index and the parameter evaluation index on the basis of the Owens-type norm-optimal or parameter-optimal mechanism. The main idea is to update the current estimated systematic Markov parameter vector using the residual from the real outputs to the predicted outputs and the lower-triangular Toeplitz matrix of the inputs at each phase so that the parameter evaluation index is minimal. Then, the DDOILC strategy is constructed in an interactive form with Markov parameter vector estimation.
The main contributions are listed as follows. Firstly, we exploit a data-driven ILC for a linear multi-phase batch process by assessing the tracking performance and the estimation mechanism, where the optimal input law is independent of the system Markov parameter matrix. Secondly, the system states can be different in different phases, which means the end state of the former phase may not be the same as the initial state of the next phase at the switching point. Thirdly, by taking advantage of algebraic methods, the monotone convergence of the tracking error is derived if the uncertainty falls within an appropriate range, and the convergence condition does not depend on precise system parameter information.
The remainder of this paper is organized as follows. In Section 2, the problem description and preliminaries are elaborated for the multi-phase batch process. Section 3 presents an iterative learning identification algorithm for the system Markov parameters, and Section 4 exploits a data-driven optimal ILC. The numerical experiment in Section 5 demonstrates the method’s validity, and Section 6 concludes the paper.

2. Problem Description and Preliminaries

Consider a multi-phase batch process whose system dynamics in each batch can be depicted by a class of repetitive single-input–single-output (SISO) switched linear systems with the following form:
$$x_k(t+1) = A_{\sigma(t)} x_k(t) + B_{\sigma(t)} u_k(t), \quad t \in T, \qquad y_k(t) = C_{\sigma(t)} x_k(t), \quad t \in T^+, \qquad (1)$$
where $T = \{0, 1, \ldots, N-1\}$ and $T^+ = \{1, 2, \ldots, N\}$ represent the sampling time sets and $k = 1, 2, \ldots$ is the batch index. $x_k(t) \in \mathbb{R}^n$, $u_k(t) \in \mathbb{R}$, and $y_k(t) \in \mathbb{R}$ are the state vector, scalar input, and scalar output, respectively. The function $\sigma(t)$ represents a time-varying switching signal defined as $\sigma(t): T \to Q = \{1, 2, \ldots, m\}$, and the switching signal shows that each batch is divided into $m$ phases. The matrix triple $\big(A_{\sigma_k(t)}, B_{\sigma_k(t)}, C_{\sigma_k(t)}\big)$ switches among the following finite set:
$$\big\{(A_1, B_1, C_1),\ (A_2, B_2, C_2),\ \ldots,\ (A_m, B_m, C_m)\big\},$$
where, for each $q \in Q$, $A_q$, $B_q$, and $C_q$ are unknown constant matrices with appropriate dimensions.
Generally, it can be assumed that the switching signal is expressed as follows:
$$\sigma_k(t) = q = \begin{cases} 1, & t \in \{t_0,\ t_0+1,\ \ldots,\ t_1\}, \\ 2, & t \in \{t_1+1,\ t_1+2,\ \ldots,\ t_2\}, \\ \ \vdots & \\ m, & t \in \{t_{m-1}+1,\ t_{m-1}+2,\ \ldots,\ t_m\}, \end{cases} \qquad (2)$$
where $0 = t_0 < t_1 < t_2 < \cdots < t_{m-1} < t_m = N-1$.
Since all the states in batch $k$ can be divided into $m$ groups according to the corresponding phases, $x_k^q(t)$ can be used to indicate the states of phase $q$. Then, for phase $q$, the switched system (1) can be formulated as
$$x_k^q(t+1) = A_q x_k^q(t) + B_q u_k(t), \quad t \in T_q, \qquad y_k^q(t) = C_q x_k^q(t), \quad t \in T_q^+, \qquad (3)$$
where $T_q = \{t_{q-1},\ t_{q-1}+1,\ \ldots,\ t_q - 1\}$ and $T_q^+ = \{t_{q-1}+1,\ \ldots,\ t_q\}$, $q = 1, 2, \ldots, m$. Without loss of generality, set $x_k^q(t_{q-1}) = \mathbf{0}$, where $\mathbf{0}$ represents the zero vector whose dimension is the same as that of the state.
Some super-vectors are denoted as
$$u_k^q = \big[u_k(t_{q-1}),\ u_k(t_{q-1}+1),\ \ldots,\ u_k(t_q-1)\big]^{\mathrm T}, \qquad y_k^q = \big[y_k(t_{q-1}+1),\ y_k(t_{q-1}+2),\ \ldots,\ y_k(t_q)\big]^{\mathrm T}.$$
Then, system (3) is reformulated in an input-output form as
$$y_k^q = G^q u_k^q, \qquad (4)$$
where $G^q \in \mathbb{R}^{(t_q - t_{q-1}) \times (t_q - t_{q-1})}$ is a Markov parameter matrix with a lower-triangular Toeplitz structure expressed as
$$G^q = \begin{bmatrix} C_q B_q & 0 & \cdots & 0 & 0 \\ C_q A_q B_q & C_q B_q & \cdots & 0 & 0 \\ C_q (A_q)^2 B_q & C_q A_q B_q & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ C_q (A_q)^{t_q - t_{q-1} - 1} B_q & C_q (A_q)^{t_q - t_{q-1} - 2} B_q & \cdots & C_q A_q B_q & C_q B_q \end{bmatrix}.$$
Denote $g^q = \big[C_q B_q,\ C_q A_q B_q,\ \ldots,\ C_q (A_q)^{t_q - t_{q-1} - 1} B_q\big]^{\mathrm T}$ and $G^q = \mathrm{Toep}(g^q)$; then, the input–output description (4) is represented by the Markov vector $g^q$ as
$$y_k^q = U_k^q g^q, \qquad (5)$$
where $U_k^q = \mathrm{Toep}(u_k^q)$ is constructed as
$$U_k^q = \begin{bmatrix} u_k(t_{q-1}) & 0 & \cdots & 0 & 0 \\ u_k(t_{q-1}+1) & u_k(t_{q-1}) & \cdots & 0 & 0 \\ u_k(t_{q-1}+2) & u_k(t_{q-1}+1) & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ u_k(t_q - 1) & u_k(t_q - 2) & \cdots & u_k(t_{q-1}+1) & u_k(t_{q-1}) \end{bmatrix}.$$
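As an illustration of the lifted description (4)–(5), the following minimal numpy sketch builds the lower-triangular Toeplitz operator and the Markov vector of one phase from $(A_q, B_q, C_q)$; the helper names (toep, markov_vector) and the random test data are our own, not part of the paper.

```python
import numpy as np

def toep(v):
    """Lower-triangular Toeplitz matrix Toep(v) whose first column is v."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    T = np.zeros((n, n))
    for j in range(n):
        T[j:, j] = v[: n - j]
    return T

def markov_vector(A, B, C, n_steps):
    """Markov vector g^q = [C B, C A B, ..., C A^(n_steps-1) B]^T of one phase."""
    A = np.asarray(A, dtype=float)
    x = np.asarray(B, dtype=float).reshape(-1)
    C = np.asarray(C, dtype=float).reshape(-1)
    g = []
    for _ in range(n_steps):
        g.append(float(C @ x))   # C A^i B
        x = A @ x
    return np.array(g)

# With zero initial phase state, y^q = Toep(g^q) u^q = Toep(u^q) g^q, i.e., (4) and (5):
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3)); B = rng.standard_normal(3); C = rng.standard_normal(3)
g = markov_vector(A, B, C, 10)
u = rng.standard_normal(10)
assert np.allclose(toep(g) @ u, toep(u) @ g)
```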
Let $y_d^q = \big[y_d(t_{q-1}+1),\ y_d(t_{q-1}+2),\ \ldots,\ y_d(t_q)\big]^{\mathrm T}$ be a given desired trajectory of system (3). The objective of ILC is to design an updating control input law $u_{k+1}^q$ that enables the output $y_{k+1}^q$ of system (3) to follow the desired trajectory $y_d^q$ as precisely as possible as the batch index approaches infinity, represented mathematically as
$$\lim_{k \to \infty} \big\| e_{k+1}^q \big\|_2 = 0,$$
where $e_{k+1}^q$ is the tracking error of system (3), defined as $e_{k+1}^q = y_d^q - y_{k+1}^q$. Further, for the whole multi-phase batch process, the desired trajectory $y_d$, the output $y_{k+1}$, and the tracking error $e_{k+1}$ are defined as
$$y_d = \begin{bmatrix} y_d^1 \\ y_d^2 \\ \vdots \\ y_d^m \end{bmatrix}, \qquad y_{k+1} = \begin{bmatrix} y_{k+1}^1 \\ y_{k+1}^2 \\ \vdots \\ y_{k+1}^m \end{bmatrix}, \qquad e_{k+1} = \begin{bmatrix} e_{k+1}^1 \\ e_{k+1}^2 \\ \vdots \\ e_{k+1}^m \end{bmatrix} = \begin{bmatrix} y_d^1 \\ y_d^2 \\ \vdots \\ y_d^m \end{bmatrix} - \begin{bmatrix} y_{k+1}^1 \\ y_{k+1}^2 \\ \vdots \\ y_{k+1}^m \end{bmatrix}.$$
It is fairly obvious that the tracking error of the multi-phase batch process converges to zero if $\lim_{k \to \infty} \|e_{k+1}^q\|_2 = 0$ for every $q = 1, \ldots, m$; that is, $\lim_{k \to \infty} \|e_{k+1}\|_2 = 0$.
Remark 1.
It can be seen from the above description that the multi-phase batch process (1) with switching signal (2) is equivalent to system (3). Therefore, the construction of a data-driven optimal ILC for a multi-phase batch process (1) is no different than the construction of the ILC for system (3). Further, in the existing ILC strategies for multi-phase batch process control [30,31,32], the end state of the former phase should be the same as the initial state of the next phase at the switched point. However, in many cases, the dimensions of the states in the different phases may not be the same. In this paper, the dimensions of the states in different phases can be different.

3. Iterative Learning Identification

The main idea of iterative learning identification is that the updating law is designed by minimizing the sum of the residual error between the system output and the predicted output and the difference between two iteration-adjacent estimations.
Definition 1.
For the controller $u_k^q = \big[u_k(t_{q-1}),\ u_k(t_{q-1}+1),\ \ldots,\ u_k(t_q-1)\big]^{\mathrm T}$, if $u_k(t_{q-1}) = u_k(t_{q-1}+1) = \cdots = u_k(t_{q-1}+v_k-2) = 0$ and $u_k(t_{q-1}+v_k-1) \neq 0$, then the order of the controller $u_k^q$ is said to be $v_k$, denoted as $v_k = \mathrm{order}(u_k)$.
Definition 2.
For the Markov vector $g = \big[g_0,\ g_1,\ \ldots,\ g_{t_q - t_{q-1} - 1}\big]^{\mathrm T}$, if $g_0 = g_1 = \cdots = g_{w-2} = 0$ and $g_{w-1} \neq 0$, then the relative degree of system (1) represented by the Markov vector $g$ is said to be $w$, denoted as $w = \mathrm{degree}(g)$.
Let $g_k^q$ be the $k$-th estimation of the Markov vector $g^q$. Then $G_k^q = \mathrm{Toep}(g_k^q)$ is the $k$-th estimation of the Markov matrix $G^q$. Further, the $k$-th output is estimated by
$$\hat{y}_k^q = U_k^q g_k^q. \qquad (6)$$
Subtracting (6) from Equation (5) yields the $k$-th output error $z_k^q$ as
$$z_k^q = y_k^q - \hat{y}_k^q = U_k^q \big(g^q - g_k^q\big). \qquad (7)$$
Furthermore, the residual error is defined as the difference between the real output and the predicted output:
$$\alpha_{k+1}^q = y_k^q - U_k^q g_{k+1}^q, \qquad (8)$$
where $U_k^q g_{k+1}^q$ is seen as the $k$-th predicted output.
For the purpose of generating an updating law to modify the estimation $g_k^q$, the following minimization problem is constructed using the residual of the real output to the predicted output and the two adjacent identifications $g_k^q$ and $g_{k+1}^q$:
$$\arg\min J\big(g_{k+1}^q\big) = \big\|\alpha_{k+1}^q\big\|^2 + \eta^q \big\|g_{k+1}^q - g_k^q\big\|^2, \qquad (9)$$
where $\eta^q$ is a positive weighting factor that weights the importance of the cost $\|g_{k+1}^q - g_k^q\|^2$ relative to the residual error energy $\|\alpha_{k+1}^q\|^2$.
If $v_k = \mathrm{order}(u_k) = 1$, then it is obvious that the matrix $U_k^q = \mathrm{Toep}(u_k^q)$ is nonsingular. Further,
$$J\big(g_{k+1}^q\big) = \big(g_{k+1}^q\big)^{\mathrm T}\big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big) g_{k+1}^q - 2\big((y_k^q)^{\mathrm T} U_k^q + \eta^q (g_k^q)^{\mathrm T}\big) g_{k+1}^q + (y_k^q)^{\mathrm T} y_k^q + \eta^q (g_k^q)^{\mathrm T} g_k^q. \qquad (10)$$
By letting the gradient of the objective function (10) with respect to the argument $g_{k+1}^q$ equal zero, it is easy to reach the following learning identification algorithm:
$$g_{k+1}^q = g_k^q + \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} z_k^q. \qquad (11)$$
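For the reader's convenience, the intermediate gradient step from (10) to (11) can be spelled out as follows (a straightforward completion reconstructed from (5)–(10)):
$$\nabla_{g_{k+1}^q} J = 2\big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big) g_{k+1}^q - 2\big((U_k^q)^{\mathrm T} y_k^q + \eta^q g_k^q\big) = 0,$$
so that
$$g_{k+1}^q = \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1}\big((U_k^q)^{\mathrm T} y_k^q + \eta^q g_k^q\big) = g_k^q + \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1}(U_k^q)^{\mathrm T} z_k^q,$$
where the last equality uses $y_k^q = U_k^q g_k^q + z_k^q$ from (6) and (7).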
If $v_k = \mathrm{order}(u_k) \geq 2$, then it is evident that the matrix $U_k^q = \mathrm{Toep}(u_k^q)$ is singular, and some super-vectors are denoted as
$$u_k^{q,1} = \big[u_k(t_{q-1}),\ u_k(t_{q-1}+1),\ \ldots,\ u_k(t_{q-1}+v_k-2)\big]^{\mathrm T}, \qquad u_k^{q,2} = \big[u_k(t_{q-1}+v_k-1),\ u_k(t_{q-1}+v_k),\ \ldots,\ u_k(t_q-1)\big]^{\mathrm T}.$$
From Definition 1, it is confirmed that the segment $u_k^{q,1}$ is a null vector and that $U_k^q = \begin{bmatrix} 0_1 & 0_2 \\ U_k^{q,2} & 0_1^{\mathrm T} \end{bmatrix}$, where the block matrix $U_k^{q,2} = \mathrm{Toep}\big(u_k^{q,2}\big)$ is nonsingular. Here, $0_1$ and $0_2$ are zero matrices with appropriate dimensions. Moreover, some super-vectors are denoted as follows:
$$\begin{gathered} y_k^{q,1} = \big[y_k(t_{q-1}+1),\ y_k(t_{q-1}+2),\ \ldots,\ y_k(t_{q-1}+v_k-1)\big]^{\mathrm T}, \qquad y_k^{q,2} = \big[y_k(t_{q-1}+v_k),\ y_k(t_{q-1}+v_k+1),\ \ldots,\ y_k(t_q)\big]^{\mathrm T}, \\ g_k^{q,2} = \big[g_k(1),\ g_k(2),\ \ldots,\ g_k(t_q-t_{q-1}-v_k+1)\big]^{\mathrm T}, \qquad g_k^{q,1} = \big[g_k(t_q-t_{q-1}-v_k+2),\ \ldots,\ g_k(t_q-t_{q-1})\big]^{\mathrm T}, \\ g_{k+1}^{q,2} = \big[g_{k+1}(1),\ g_{k+1}(2),\ \ldots,\ g_{k+1}(t_q-t_{q-1}-v_k+1)\big]^{\mathrm T}, \qquad g_{k+1}^{q,1} = \big[g_{k+1}(t_q-t_{q-1}-v_k+2),\ \ldots,\ g_{k+1}(t_q-t_{q-1})\big]^{\mathrm T}, \\ g^{q,02} = \big[g(1),\ g(2),\ \ldots,\ g(t_q-t_{q-1}-v_k+1)\big]^{\mathrm T}, \qquad g^{q,01} = \big[g(t_q-t_{q-1}-v_k+2),\ \ldots,\ g(t_q-t_{q-1})\big]^{\mathrm T}, \end{gathered}$$
where $g^{q,02}$ and $g^{q,01}$ denote the corresponding segments of the true Markov vector $g^q$.
Thus, it follows from the above denotations that
$$\alpha_{k+1}^q = y_k^q - U_k^q g_{k+1}^q = \begin{bmatrix} y_k^{q,1} \\ y_k^{q,2} \end{bmatrix} - \begin{bmatrix} 0_1 & 0_2 \\ U_k^{q,2} & 0_1^{\mathrm T} \end{bmatrix} \begin{bmatrix} g_{k+1}^{q,2} \\ g_{k+1}^{q,1} \end{bmatrix} = \begin{bmatrix} y_k^{q,1} \\ y_k^{q,2} - U_k^{q,2} g_{k+1}^{q,2} \end{bmatrix}, \qquad (12)$$
$$g_{k+1}^q - g_k^q = \begin{bmatrix} g_{k+1}^{q,2} - g_k^{q,2} \\ g_{k+1}^{q,1} - g_k^{q,1} \end{bmatrix}. \qquad (13)$$
Substituting (12) and (13) into (9) yields
$$\begin{aligned} J\big(g_{k+1}^{q,2},\ g_{k+1}^{q,1}\big) &= \big\|\alpha_{k+1}^q\big\|^2 + \eta^q\big\|g_{k+1}^q - g_k^q\big\|^2 \\ &= \big(y_k^{q,1}\big)^{\mathrm T} y_k^{q,1} + \big(y_k^{q,2} - U_k^{q,2} g_{k+1}^{q,2}\big)^{\mathrm T}\big(y_k^{q,2} - U_k^{q,2} g_{k+1}^{q,2}\big) \\ &\quad + \eta^q\big(g_{k+1}^{q,2} - g_k^{q,2}\big)^{\mathrm T}\big(g_{k+1}^{q,2} - g_k^{q,2}\big) + \eta^q\big(g_{k+1}^{q,1} - g_k^{q,1}\big)^{\mathrm T}\big(g_{k+1}^{q,1} - g_k^{q,1}\big). \qquad (14) \end{aligned}$$
Letting the partial derivatives of the objective function (14) with respect to the arguments $g_{k+1}^{q,2}$ and $g_{k+1}^{q,1}$ equal zero yields
$$g_{k+1}^q = \begin{bmatrix} g_{k+1}^{q,2} \\ g_{k+1}^{q,1} \end{bmatrix} = \begin{bmatrix} g_k^{q,2} + \big(\eta^q I + (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big)^{-1} (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big(g^{q,02} - g_k^{q,2}\big) \\ g_k^{q,1} \end{bmatrix}. \qquad (15)$$
Therefore, it is concluded that the iterative learning identification algorithm for the Markov vector $g^q$ is
$$g_{k+1}^q = \begin{cases} g_k^q + \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} z_k^q, & v_k = 1; \\[4pt] \begin{bmatrix} g_k^{q,2} + \big(\eta^q I + (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big)^{-1} (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big(g^{q,02} - g_k^{q,2}\big) \\ g_k^{q,1} \end{bmatrix}, & v_k \geq 2, \end{cases} \qquad (16)$$
where $U_k^{q,2}\big(g^{q,02} - g_k^{q,2}\big) = y_k^{q,2} - U_k^{q,2} g_k^{q,2}$, so that the update is computable from the measured input–output data.
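A compact numpy sketch of one identification step (16) is given below (reusing the toep helper from the earlier sketch; the function name and the handling of an all-zero input are our own assumptions):

```python
import numpy as np

def ident_update(g_k, u_k, y_k, eta):
    """One iteration of the learning identification (16) for one phase.

    g_k: current Markov-vector estimate; u_k, y_k: batch input/output
    super-vectors of the phase; eta: positive weighting factor eta^q.
    """
    u_k, y_k, g_k = map(np.asarray, (u_k, y_k, g_k))
    n = len(u_k)
    nz = np.flatnonzero(u_k)
    if nz.size == 0:                       # input not exciting at all; keep the estimate
        return g_k.astype(float)
    v = nz[0] + 1                          # v_k = order(u_k), Definition 1
    if v == 1:                             # Toep(u_k) nonsingular
        U = toep(u_k)
        z = y_k - U @ g_k                  # output error (7)
        return g_k + np.linalg.solve(eta * np.eye(n) + U.T @ U, U.T @ z)
    m = n - v + 1                          # only the first m Markov parameters are excited
    U2 = toep(u_k[v - 1:])                 # nonsingular block U_k^{q,2}
    z2 = y_k[v - 1:] - U2 @ g_k[:m]        # equals U_k^{q,2}(g^{q,02} - g_k^{q,2})
    g_next = g_k.astype(float)
    g_next[:m] += np.linalg.solve(eta * np.eye(m) + U2.T @ U2, U2.T @ z2)
    return g_next
```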
Remark 2.
It is worth noting from (9)–(15) that the solution $g_{k+1}^q$ of the minimization problem (9) exists and is unique. It can be derived from (16) that
$$g_{k+1}^q - g_k^q = \begin{cases} \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} z_k^q, & v_k = 1; \\[4pt] \begin{bmatrix} \big(\eta^q I + (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big)^{-1} (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big(g^{q,02} - g_k^{q,2}\big) \\ 0 \end{bmatrix}, & v_k \geq 2, \end{cases}$$
which implies that the time order of the compensator $g_{k+1}^q - g_k^q$ is adaptive to the time order of the controller $u_k^q$.
For further theoretical analysis, the following lemmas are indispensable and easily stated.
Lemma 1.
The identity $(S T)^{-1} = T^{-1} S^{-1}$ holds for compatible nonsingular matrices $S$ and $T$.
Lemma 2.
The equality $\lambda\big(S (I + S)^{-1}\big) = \dfrac{\lambda(S)}{1 + \lambda(S)}$ is true for any positive semidefinite matrix $S$.
Lemma 3.
The function $f(x) = \dfrac{x}{1 + x}$ is strictly monotonically increasing on the interval $[0, +\infty)$.
Theorem 1.
For the iterative learning identification algorithm (16), the estimation error is strictly monotonically declining; that is, $\|g - g_{k+1}\|^2 < \|g - g_k\|^2$.
Proof of Theorem 1.
Case 1. The matrix $U_k^q$ is nonsingular, i.e., $v_k = \mathrm{order}(u_k) = 1$.
By virtue of Lemma 1 and the iterative learning identification algorithm (16), the estimation error between the real Markov vector $g^q$ and its $(k+1)$-th estimation can be derived as
$$\begin{aligned} g^q - g_{k+1}^q &= g^q - g_k^q - \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} z_k^q = g^q - g_k^q - \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} U_k^q \big(g^q - g_k^q\big) \\ &= \Big(I - \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} U_k^q\Big)\big(g^q - g_k^q\big) = \Big(I - \big(I + \eta^q (U_k^q)^{-1} (U_k^q)^{-\mathrm T}\big)^{-1}\Big)\big(g^q - g_k^q\big) \\ &= \eta^q (U_k^q)^{-1} (U_k^q)^{-\mathrm T} \big(I + \eta^q (U_k^q)^{-1} (U_k^q)^{-\mathrm T}\big)^{-1} \big(g^q - g_k^q\big). \qquad (17) \end{aligned}$$
Calculating the inner products of both sides of (17) and considering Lemma 2, we obtain
$$\begin{aligned} \big\|g^q - g_{k+1}^q\big\|^2 &= \big(g^q - g_k^q\big)^{\mathrm T} \Big(\eta^q (U_k^q)^{-1} (U_k^q)^{-\mathrm T} \big(I + \eta^q (U_k^q)^{-1} (U_k^q)^{-\mathrm T}\big)^{-1}\Big)^2 \big(g^q - g_k^q\big) \\ &\leq \left(\frac{\eta^q \lambda_{\max}\big((U_k^q)^{-1} (U_k^q)^{-\mathrm T}\big)}{1 + \eta^q \lambda_{\max}\big((U_k^q)^{-1} (U_k^q)^{-\mathrm T}\big)}\right)^{\!2} \big\|g^q - g_k^q\big\|^2 < \big\|g^q - g_k^q\big\|^2. \qquad (18) \end{aligned}$$
Case 2. The matrix $U_k^q$ is singular, i.e., $v_k = \mathrm{order}(u_k) \geq 2$.
By means of the iterative learning identification algorithm (16), the estimation error between the real Markov vector $g^q$ and its $(k+1)$-th estimation can be calculated as
$$g - g_{k+1} = \begin{bmatrix} g^{q,02} - g_{k+1}^{q,2} \\ g^{q,01} - g_k^{q,1} \end{bmatrix} = \begin{bmatrix} g^{q,02} - g_k^{q,2} - \big(\eta^q I + (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big)^{-1} (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big(g^{q,02} - g_k^{q,2}\big) \\ g^{q,01} - g_k^{q,1} \end{bmatrix}. \qquad (19)$$
By utilizing Lemma 1, the first partition of $g - g_{k+1}$ is derived as
$$\begin{aligned} g^{q,02} - g_{k+1}^{q,2} &= \Big(I - \big(\eta^q I + (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big)^{-1} (U_k^{q,2})^{\mathrm T} U_k^{q,2}\Big)\big(g^{q,02} - g_k^{q,2}\big) = \Big(I - \big(I + \eta^q (U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T}\big)^{-1}\Big)\big(g^{q,02} - g_k^{q,2}\big) \\ &= \eta^q (U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T} \big(I + \eta^q (U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T}\big)^{-1} \big(g^{q,02} - g_k^{q,2}\big). \qquad (20) \end{aligned}$$
Taking the inner products of both sides of (20) and employing Lemma 2, we obtain
$$\begin{aligned} \big\|g^{q,02} - g_{k+1}^{q,2}\big\|^2 &= \big(g^{q,02} - g_k^{q,2}\big)^{\mathrm T} \Big(\eta^q (U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T} \big(I + \eta^q (U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T}\big)^{-1}\Big)^2 \big(g^{q,02} - g_k^{q,2}\big) \\ &\leq \left(\frac{\eta^q \lambda_{\max}\big((U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T}\big)}{1 + \eta^q \lambda_{\max}\big((U_k^{q,2})^{-1} (U_k^{q,2})^{-\mathrm T}\big)}\right)^{\!2} \big\|g^{q,02} - g_k^{q,2}\big\|^2 < \big\|g^{q,02} - g_k^{q,2}\big\|^2. \qquad (21) \end{aligned}$$
Therefore, we have
$$\|g - g_{k+1}\|^2 = \big\|g^{q,02} - g_{k+1}^{q,2}\big\|^2 + \big\|g^{q,01} - g_{k+1}^{q,1}\big\|^2 < \big\|g^{q,02} - g_k^{q,2}\big\|^2 + \big\|g^{q,01} - g_k^{q,1}\big\|^2 = \|g - g_k\|^2. \qquad (22)$$
This completes the proof. □
Remark 3.
In Theorem 1, the estimation error is strictly monotonically declining, which neither restricts the weighting factor nor relies on any reset condition; it is thus hoped that the iterative learning identification (16) may improve the estimation performance as the iteration number increases. Although the iterative learning identification algorithm (16) is feasible for a linear discrete-time-invariant system with no perturbation, whether it is applicable to a linear time-varying system has not yet been verified.
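As a quick illustration of Theorem 1, the following snippet repeatedly applies the identification step to fixed batch data (an illustrative check with our own test system, relying on the toep, markov_vector, and ident_update sketches above):

```python
rng = np.random.default_rng(1)
A = np.array([[0.4, 0.1], [0.0, 0.3]]); B = np.array([1.0, 0.5]); C = np.array([1.0, 1.0])
g_true = markov_vector(A, B, C, 20)          # unknown to the identifier
u = rng.uniform(0.5, 1.5, 20)                # order(u) = 1, so Toep(u) is nonsingular
y = toep(u) @ g_true                         # measured output, as in (5)
g_est, errs = np.zeros(20), []
for _ in range(8):
    g_est = ident_update(g_est, u, y, eta=0.5)
    errs.append(np.linalg.norm(g_true - g_est))
print(errs)                                  # strictly decreasing, as Theorem 1 predicts
```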

4. Data-Driven Optimal ILC

For the multi-phase batch process, a data-driven optimal ILC scheme is used to construct an updating law for the control command $u_{k+1}^q$ in a recursive form in order to optimize the following criterion:
$$\arg\min J\big(u_{k+1}^q\big) = \big\|e_{k+1}^q\big\|^2 + \omega^q \big\|u_{k+1}^q - u_k^q\big\|^2, \qquad (23)$$
where $\omega^q$ is a positive tuning factor that adjusts the weight of the cost $\|u_{k+1}^q - u_k^q\|^2$ relative to the energy $\|e_{k+1}^q\|^2$.
By virtue of the definition of the system tracking error, we have
$$e_{k+1}^q = y_d^q - y_{k+1}^q = y_d^q - y_k^q - \big(y_{k+1}^q - y_k^q\big) = e_k^q - G^q\big(u_{k+1}^q - u_k^q\big). \qquad (24)$$
Substituting (24) into (23) generates
$$\arg\min_{u_{k+1}^q} J\big(u_{k+1}^q\big) = \big(e_k^q\big)^{\mathrm T} e_k^q - 2 \big(e_k^q\big)^{\mathrm T} G^q \big(u_{k+1}^q - u_k^q\big) + \big(u_{k+1}^q - u_k^q\big)^{\mathrm T} \big(G^q\big)^{\mathrm T} G^q \big(u_{k+1}^q - u_k^q\big) + \omega^q \big(u_{k+1}^q - u_k^q\big)^{\mathrm T} \big(u_{k+1}^q - u_k^q\big). \qquad (25)$$
According to the first-order necessary condition for optimality, solving the optimization problem (25) is equivalent to setting the gradient of the objective function $J(u_{k+1}^q)$ with respect to the argument vector $u_{k+1}^q$ to zero, that is,
$$\nabla J\big(u_{k+1}^q\big) = -\big(G^q\big)^{\mathrm T} e_k^q + \big(\omega^q I + (G^q)^{\mathrm T} G^q\big)\big(u_{k+1}^q - u_k^q\big) = 0. \qquad (26)$$
It follows from (26) that
$$u_{k+1}^q = u_k^q + \big(\omega^q I + (G^q)^{\mathrm T} G^q\big)^{-1} \big(G^q\big)^{\mathrm T} e_k^q. \qquad (27)$$
Furthermore, if the Markov matrix $G^q$ is replaced by its $k$-th estimation $G_k^q$, then, for system (3), the DDOILC is constructed as
$$u_{k+1}^q = u_k^q + \big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1} \big(G_k^q\big)^{\mathrm T} e_k^q. \qquad (28)$$
Remark 4.
In the case when the system Markov parameters are unavailable, the iterative learning identification (16) is embedded into the norm-optimal ILC (27), which forms the DDOILC scheme. In (28), the control law is equipped with a matrix inversion, which is prominently different from the existing strategies [19,21,22], where, for avoiding matrix inversion and guaranteeing the convergence of the tracking error, the gain matrix of the control law is compromised by replacing the matrix $\big(\eta I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1}$ with $\big(\eta I + \|G_k^q\|^2\big)^{-1}$.
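For concreteness, one batch update of the DDOILC law (28) for a single phase can be sketched as follows (again in numpy, reusing the toep helper; the function name is ours):

```python
import numpy as np

def ddoilc_update(u_k, e_k, g_est, omega):
    """DDOILC law (28): u_{k+1} = u_k + (omega I + G_k^T G_k)^{-1} G_k^T e_k,
    with G_k = Toep(g_est) built from the current Markov-vector estimate."""
    G_k = toep(g_est)
    n = len(u_k)
    return np.asarray(u_k, dtype=float) + np.linalg.solve(
        omega * np.eye(n) + G_k.T @ G_k, G_k.T @ np.asarray(e_k, dtype=float))
```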
Theorem 2.
Assume that the proposed DDOILC (28) is applied to system (3). Then, the tracking error is monotonically convergent if the uncertainty $\Delta G_k^q$ falls within an appropriate domain.
Proof of Theorem 2.
Let the relative degree of system (3) be $w^q$, that is, $w^q = \mathrm{degree}(g^q)$. Since $G^q$ is a lower-triangular Toeplitz matrix formed by the Markov vector $g^q$, $\mathrm{rank}(G^q) = t_q - t_{q-1} - w^q + 1$. For the case when $w^q = 1$, $G^q$ is of full rank and nonsingular. Meanwhile, for the case when $w^q > 1$, it is obvious that $G^q$ is singular and can be partitioned in blocks as $G^q = \begin{bmatrix} 0 & 0 \\ W^q & 0 \end{bmatrix}$, where $W^q \in \mathbb{R}^{(t_q - t_{q-1} - w^q + 1) \times (t_q - t_{q-1} - w^q + 1)}$ is a nonsingular lower-triangular Toeplitz matrix.
Substituting (28) into (24) generates
$$e_{k+1}^q = e_k^q - G^q\big(u_{k+1}^q - u_k^q\big) = \Big(I - G^q\big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1}\big(G_k^q\big)^{\mathrm T}\Big) e_k^q. \qquad (29)$$
By denoting $\Delta G_k^q = G_k^q - G^q$ as the estimation error of the lower-triangular Toeplitz matrix $G^q$, we obtain
$$G_k^q = G^q + \Delta G_k^q. \qquad (30)$$
Case 1. $G^q$ is nonsingular, i.e., $w^q = 1$.
Denote
$$\Lambda^q = \omega^q I + \big(G^q\big)^{\mathrm T} G^q, \qquad \Delta\Lambda_k^q = \big(\Delta G_k^q\big)^{\mathrm T} G^q + \big(G^q\big)^{\mathrm T} \Delta G_k^q + \big(\Delta G_k^q\big)^{\mathrm T} \Delta G_k^q.$$
Further, by (30), we obtain
$$\omega^q I + \big(G_k^q\big)^{\mathrm T} G_k^q = \omega^q I + \big(G^q + \Delta G_k^q\big)^{\mathrm T}\big(G^q + \Delta G_k^q\big) = \Lambda^q + \Delta\Lambda_k^q. \qquad (31)$$
According to (31), we obtain
$$\big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1} = \big(\Lambda^q + \Delta\Lambda_k^q\big)^{-1} = \big(\Lambda^q\big)^{-1} - \big(\Lambda^q\big)^{-1}\Delta\Lambda_k^q\big(\Lambda^q + \Delta\Lambda_k^q\big)^{-1}. \qquad (32)$$
Denote
$$\Delta\Xi_k^q = -\,G^q\big(\Lambda^q\big)^{-1}\big(\Delta G_k^q\big)^{\mathrm T} + G^q\big(\Lambda^q\big)^{-1}\Delta\Lambda_k^q\big(\Lambda^q + \Delta\Lambda_k^q\big)^{-1}\big(G_k^q\big)^{\mathrm T}, \qquad \Xi^q = \omega^q\big((G^q)^{\mathrm T} G^q\big)^{-1}\Big(I + \omega^q\big((G^q)^{\mathrm T} G^q\big)^{-1}\Big)^{-1}.$$
Considering Lemma 1 and utilizing (32) yields
$$\begin{aligned} I - G^q\big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1}\big(G_k^q\big)^{\mathrm T} &= I - G^q\Big(\big(\Lambda^q\big)^{-1} - \big(\Lambda^q\big)^{-1}\Delta\Lambda_k^q\big(\Lambda^q + \Delta\Lambda_k^q\big)^{-1}\Big)\big(G^q + \Delta G_k^q\big)^{\mathrm T} \\ &= I - G^q\big(\omega^q I + (G^q)^{\mathrm T} G^q\big)^{-1}\big(G^q\big)^{\mathrm T} + \Delta\Xi_k^q \\ &= \omega^q\big((G^q)^{\mathrm T} G^q\big)^{-1}\Big(I + \omega^q\big((G^q)^{\mathrm T} G^q\big)^{-1}\Big)^{-1} + \Delta\Xi_k^q = \Xi^q + \Delta\Xi_k^q. \qquad (33) \end{aligned}$$
Denote
$$\Delta\Psi_k^q = \big(\Xi^q\big)^{\mathrm T}\Delta\Xi_k^q + \big(\Delta\Xi_k^q\big)^{\mathrm T}\Xi^q + \big(\Delta\Xi_k^q\big)^{\mathrm T}\Delta\Xi_k^q, \qquad \theta_1(\omega^q) = \left(\frac{\omega^q\,\lambda_{\max}\big(((G^q)^{\mathrm T} G^q)^{-1}\big)}{1 + \omega^q\,\lambda_{\max}\big(((G^q)^{\mathrm T} G^q)^{-1}\big)}\right)^{\!2}, \qquad \theta_{k2}(\omega^q) = \lambda_{\max}\big(\Delta\Psi_k^q\big).$$
By means of Lemma 2, Lemma 3, and (33), we achieve
$$\begin{aligned} \big\|e_{k+1}^q\big\|^2 &= \Big(\big(\Xi^q + \Delta\Xi_k^q\big)e_k^q\Big)^{\mathrm T}\big(\Xi^q + \Delta\Xi_k^q\big)e_k^q = \big(e_k^q\big)^{\mathrm T}\big(\Xi^q\big)^{\mathrm T}\Xi^q e_k^q + \big(e_k^q\big)^{\mathrm T}\Delta\Psi_k^q e_k^q \\ &\leq \left(\left(\frac{\omega^q\,\lambda_{\max}\big(((G^q)^{\mathrm T} G^q)^{-1}\big)}{1 + \omega^q\,\lambda_{\max}\big(((G^q)^{\mathrm T} G^q)^{-1}\big)}\right)^{\!2} + \lambda_{\max}\big(\Delta\Psi_k^q\big)\right)\big\|e_k^q\big\|^2 = \big(\theta_1(\omega^q) + \theta_{k2}(\omega^q)\big)\big\|e_k^q\big\|^2. \qquad (34) \end{aligned}$$
It follows from the above derivation that the eigenvalue function $\theta_{k2}(\omega^q)$ is associated with the elements of the uncertainty $\Delta G_k^q$ and that the inequality $\theta_1(\omega^q) < 1$ always holds. Therefore, it is concluded that $\theta_1(\omega^q) + \theta_{k2}(\omega^q) < 1$ is guaranteed if the uncertainty $\Delta G_k^q$ falls into an appropriately small domain. This reveals that the tracking error is monotonically convergent. In particular, it is understood that $\theta_{k2}(\omega^q) = 0$ if $\Delta G_k^q = 0$.
Case 2. $G^q$ is singular, i.e., $w^q > 1$.
Denote
$$F^q = \begin{bmatrix} \omega^q I_{t_q - t_{q-1} - w^q + 1} + \big(W^q\big)^{\mathrm T} W^q & 0 \\ 0 & \omega^q I_{w^q - 1} \end{bmatrix}, \qquad \Delta F_k^q = \big(G^q\big)^{\mathrm T}\Delta G_k^q + \big(\Delta G_k^q\big)^{\mathrm T} G^q + \big(\Delta G_k^q\big)^{\mathrm T}\Delta G_k^q.$$
Substituting $G^q = \begin{bmatrix} 0 & 0 \\ W^q & 0 \end{bmatrix}$ and $G_k^q = G^q + \Delta G_k^q$ into $\omega^q I + (G_k^q)^{\mathrm T} G_k^q$ yields
$$\omega^q I + \big(G_k^q\big)^{\mathrm T} G_k^q = \omega^q I + \big(G^q + \Delta G_k^q\big)^{\mathrm T}\big(G^q + \Delta G_k^q\big) = \omega^q I + \begin{bmatrix} \big(W^q\big)^{\mathrm T} W^q & 0 \\ 0 & 0 \end{bmatrix} + \Delta F_k^q = F^q + \Delta F_k^q. \qquad (35)$$
By (35), we have
$$\big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1} = \big(F^q + \Delta F_k^q\big)^{-1} = \big(F^q\big)^{-1} - \big(F^q\big)^{-1}\Delta F_k^q\big(F^q + \Delta F_k^q\big)^{-1}. \qquad (36)$$
Denote
$$\Delta H_k^q = G^q\big(F^q\big)^{-1}\big(\Delta G_k^q\big)^{\mathrm T} - G^q\big(F^q\big)^{-1}\Delta F_k^q\big(F^q + \Delta F_k^q\big)^{-1}\big(G_k^q\big)^{\mathrm T}.$$
It follows from (36) that
$$\begin{aligned} G^q\big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1}\big(G_k^q\big)^{\mathrm T} &= G^q\Big(\big(F^q\big)^{-1} - \big(F^q\big)^{-1}\Delta F_k^q\big(F^q + \Delta F_k^q\big)^{-1}\Big)\big(G^q + \Delta G_k^q\big)^{\mathrm T} = G^q\big(F^q\big)^{-1}\big(G^q\big)^{\mathrm T} + \Delta H_k^q \\ &= \begin{bmatrix} 0 & 0 \\ W^q & 0 \end{bmatrix}\begin{bmatrix} \big(\omega^q I + (W^q)^{\mathrm T} W^q\big)^{-1} & 0 \\ 0 & \tfrac{1}{\omega^q} I \end{bmatrix}\begin{bmatrix} 0 & \big(W^q\big)^{\mathrm T} \\ 0 & 0 \end{bmatrix} + \Delta H_k^q \\ &= \begin{bmatrix} 0 & 0 \\ 0 & W^q\big(\omega^q I + (W^q)^{\mathrm T} W^q\big)^{-1}\big(W^q\big)^{\mathrm T} \end{bmatrix} + \Delta H_k^q = \begin{bmatrix} 0 & 0 \\ 0 & \big(I + \omega^q((W^q)^{\mathrm T} W^q)^{-1}\big)^{-1} \end{bmatrix} + \Delta H_k^q. \qquad (37) \end{aligned}$$
Denote
$$M^q = \frac{\omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}{1 + \omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}\, I_{w^q-1}, \qquad N^q = \omega^q\big((W^q)^{\mathrm T} W^q\big)^{-1}\Big(I_{t_q - t_{q-1} - w^q + 1} + \omega^q\big((W^q)^{\mathrm T} W^q\big)^{-1}\Big)^{-1},$$
$$P^q = \begin{bmatrix} M^q & 0 \\ 0 & N^q \end{bmatrix}, \qquad \Delta P^q = \begin{bmatrix} \dfrac{1}{1 + \omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}\, I_{w^q-1} & 0 \\ 0 & 0 \end{bmatrix}.$$
Utilizing Lemma 2 and Equation (37) generates
$$\begin{aligned} I - G^q\big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1}\big(G_k^q\big)^{\mathrm T} &= \begin{bmatrix} I_{w^q-1} & 0 \\ 0 & I - \big(I + \omega^q((W^q)^{\mathrm T} W^q)^{-1}\big)^{-1} \end{bmatrix} - \Delta H_k^q \\ &= \begin{bmatrix} I_{w^q-1} & 0 \\ 0 & \omega^q\big((W^q)^{\mathrm T} W^q\big)^{-1}\big(I + \omega^q((W^q)^{\mathrm T} W^q)^{-1}\big)^{-1} \end{bmatrix} - \Delta H_k^q = P^q + \Delta P^q - \Delta H_k^q. \qquad (38) \end{aligned}$$
Denote
$$\begin{gathered} \Delta\Pi_k^q = \big(\Delta H_k^q\big)^{\mathrm T}\Delta H_k^q - \big(P^q + \Delta P^q\big)^{\mathrm T}\Delta H_k^q - \big(\Delta H_k^q\big)^{\mathrm T}\big(P^q + \Delta P^q\big), \qquad \Delta Z^q = \big(\Delta P^q\big)^{\mathrm T}\Delta P^q + \big(P^q\big)^{\mathrm T}\Delta P^q + \big(\Delta P^q\big)^{\mathrm T}P^q, \\ \xi_1(\omega^q) = \left(\frac{\omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}{1 + \omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}\right)^{\!2}, \qquad \tilde e_k^q = \big[e_k(t_{q-1}+1),\ e_k(t_{q-1}+2),\ \ldots,\ e_k(t_{q-1}+w^q-1)\big]^{\mathrm T}, \\ \xi_{k2}(\omega^q) = \lambda_{\max}\big(\Delta\Pi_k^q\big), \qquad \phi_k(\omega^q) = \lambda_{\max}\big(\Delta Z^q\big)\,\big\|\tilde e_k^q\big\|^2. \end{gathered}$$
According to Lemmas 2 and 3, it is easy to obtain
$$\begin{aligned} \big\|e_{k+1}^q\big\|^2 &= \Big(\big(P^q + \Delta P^q - \Delta H_k^q\big)e_k^q\Big)^{\mathrm T}\big(P^q + \Delta P^q - \Delta H_k^q\big)e_k^q = \big(e_k^q\big)^{\mathrm T}\big(P^q\big)^{\mathrm T}P^q e_k^q + \big(e_k^q\big)^{\mathrm T}\Delta\Pi_k^q e_k^q + \big(e_k^q\big)^{\mathrm T}\Delta Z^q e_k^q \\ &\leq \left(\left(\frac{\omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}{1 + \omega^q\,\lambda_{\max}\big(((W^q)^{\mathrm T} W^q)^{-1}\big)}\right)^{\!2} + \lambda_{\max}\big(\Delta\Pi_k^q\big)\right)\big\|e_k^q\big\|^2 + \lambda_{\max}\big(\Delta Z^q\big)\big\|\tilde e_k^q\big\|^2 \\ &= \big(\xi_1(\omega^q) + \xi_{k2}(\omega^q)\big)\big\|e_k^q\big\|^2 + \phi_k(\omega^q). \qquad (39) \end{aligned}$$
As discussed in Case 1, if the uncertainty $\Delta G_k^q$ falls into an appropriately small domain, then $\xi_1(\omega^q) + \xi_{k2}(\omega^q) < 1$ holds. Further, it can be understood from (29) that $\tilde e_k^q = \tilde e_1^q$ for all $k$ if we only consider the segment $\tilde e_k^q$ in the learning process. According to the expressions $G^q = \mathrm{Toep}(g^q)$ and $w^q = \mathrm{degree}(g^q)$, it is derived that the segment of the output of (4), $\tilde y_k^q = \big[y_k(t_{q-1}+1),\ \ldots,\ y_k(t_{q-1}+w^q-1)\big]^{\mathrm T}$, is out of control, and thus tracking on this segment is invalid. For this case, setting $\tilde e_1^q = 0$ as the initial output, which is compatible with the zero initial output resetting, leads to $\phi_k(\omega^q) = 0$. □

5. Numerical Simulation

To demonstrate the effectiveness of the proposed DDOILC (28), a numerical example was considered in the simulation study, offering a comparison between the proposed DDOILC algorithm and the traditional D-type ILC. It should be emphasized that, in the simulation, all system parameters are assumed to be unavailable and serve only as the input and output data generator for the plants to be controlled. No parameter information of the systems is included in the proposed DDOILC scheme design. The simulation results show that the proposed DDOILC has good convergence even though the system parameters are unavailable.
Let us consider the following linear discrete-time switched system:
$$x_k(t+1) = A_{\sigma(t)} x_k(t) + B_{\sigma(t)} u_k(t), \quad t \in T, \qquad y_k(t) = C_{\sigma(t)} x_k(t), \quad t \in T^+, \qquad (40)$$
where $T = \{0, 1, \ldots, 199\}$, $T^+ = \{1, 2, \ldots, 200\}$, $\sigma(t): \{0, 1, \ldots, 200\} \to \{1, 2\}$, and the switching sequence is assumed to be
$$\sigma(t) = q = \begin{cases} 1, & t \in \{0, 1, \ldots, 20\}, \\ 2, & t \in \{21, 22, \ldots, 200\}. \end{cases} \qquad (41)$$
Suppose that system (40) contains the following two subsystems:
$$S_1: \quad x_k(t+1) = \begin{bmatrix} 0.4 & 0.035 & 0.25 \\ 0.0255 & 0.6 & 0.99 \\ 0.75 & 0.03 & 0.025 \end{bmatrix} x_k(t) + \begin{bmatrix} 0.2 \\ 0.2 \\ 0.0 \end{bmatrix} u_k(t), \quad t \in T_1, \qquad y_k(t) = \begin{bmatrix} 1.0 & 0.0 & 1.0 \end{bmatrix} x_k(t), \quad t \in T_1^+, \qquad (42)$$
$$S_2: \quad x_k(t+1) = \begin{bmatrix} 0.01 & 0.15 \\ 1 & 0.7 \end{bmatrix} x_k(t) + \begin{bmatrix} 1.6 \\ 0.5 \end{bmatrix} u_k(t), \quad t \in T_2, \qquad y_k(t) = \begin{bmatrix} 1.0 & 0.0 & 1.0 \end{bmatrix} x_k(t), \quad t \in T_2^+, \qquad (43)$$
where $T_1 = \{0, 1, \ldots, 59\}$, $T_1^+ = \{1, 2, \ldots, 60\}$, $T_2 = \{60, 61, \ldots, 199\}$, and $T_2^+ = \{61, 62, \ldots, 200\}$.
It can be noticed that the state dimensions of subsystems $S_1$ and $S_2$ are different; in other words, the system states are not identical in different phases.
The desired trajectory is given as $y_d(t) = 1 - e^{-0.2 t}$, $t \in T^+ = \{1, 2, \ldots, 200\}$. Set the initial states $x_k(0) = [0\ \ 0\ \ 0]^{\mathrm T}$ and $x_k(20) = [0\ \ 0]^{\mathrm T}$. Then, subsystems (42) and (43) are equivalent to $y_k^1 = G^1 u_k^1$ and $y_k^2 = G^2 u_k^2$, respectively, where $G^1$ and $G^2$ can be seen as unknown lower-triangular Toeplitz matrices. For subsystems (42) and (43), the learning identification algorithm (44) and the DDOILC scheme (45) are listed as
$$g_{k+1}^q = \begin{cases} g_k^q + \big(\eta^q I + (U_k^q)^{\mathrm T} U_k^q\big)^{-1} (U_k^q)^{\mathrm T} z_k^q, & v_k = 1; \\[4pt] \begin{bmatrix} g_k^{q,2} + \big(\eta^q I + (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big)^{-1} (U_k^{q,2})^{\mathrm T} U_k^{q,2}\big(g^{q,02} - g_k^{q,2}\big) \\ g_k^{q,1} \end{bmatrix}, & v_k \geq 2, \end{cases} \qquad (44)$$
$$u_{k+1}^q = u_k^q + \big(\omega^q I + (G_k^q)^{\mathrm T} G_k^q\big)^{-1} \big(G_k^q\big)^{\mathrm T} e_k^q, \qquad (45)$$
where $q = 1, 2$.
The initial inputs are $u_1^1 = 0.5\,\mathrm{rand}(60, 1)$ and $u_1^2 = 0.5\,\mathrm{rand}(140, 1)$, where $\mathrm{rand}(m, n)$ is an $m \times n$-dimensional random matrix whose elements lie within $(0, 1)$. The initial estimation vectors are chosen as
$$g_1^1 = [1,\ 0.01,\ \ldots,\ 0.01]^{\mathrm T} \in \mathbb{R}^{60} \quad \text{and} \quad g_1^2 = [1,\ 0.01,\ \ldots,\ 0.01]^{\mathrm T} \in \mathbb{R}^{140}.$$
The system output and tracking error of (40) are expressed as
$$e_k = \begin{bmatrix} e_k^1 \\ e_k^2 \end{bmatrix}, \qquad y_k = \begin{bmatrix} y_k^1 \\ y_k^2 \end{bmatrix}.$$
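To show how the pieces interlock over the batches, the following sketch runs the combined scheme (44)–(45) phase by phase; it reuses the toep, markov_vector, ident_update, and ddoilc_update helpers defined in the earlier sketches, and the driver function itself is our own illustration rather than the authors' code.

```python
import numpy as np

def run_ddoilc(phases, y_d, n_batches, eta, omega, seed=0):
    """phases: list of (A, B, C, phase_length); y_d: list of desired outputs per phase."""
    rng = np.random.default_rng(seed)
    u = [0.5 * rng.random(L) for (_, _, _, L) in phases]              # random initial inputs
    g_est = [np.r_[1.0, 0.01 * np.ones(L - 1)] for (_, _, _, L) in phases]
    err_history = []
    for _ in range(n_batches):
        sq_err = 0.0
        for q, (A, B, C, L) in enumerate(phases):
            y = toep(markov_vector(A, B, C, L)) @ u[q]   # plant only generates data
            g_est[q] = ident_update(g_est[q], u[q], y, eta)           # identification (44)
            e = y_d[q] - y
            u[q] = ddoilc_update(u[q], e, g_est[q], omega)            # DDOILC law (45)
            sq_err += float(e @ e)
        err_history.append(np.sqrt(sq_err))              # 2-norm of the stacked error e_k
    return err_history
```

Calling this driver with two phase models, phase lengths 60 and 140, $\eta^q = \omega^q = 0.01$, and the sampled desired trajectory $y_d(t) = 1 - e^{-0.2 t}$ would mimic the experiment described below; the exact figures, of course, come from the authors' own simulation.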
In this section, the proposed DDOILC (45), together with the learning identification (44), is applied to system (40), and the tracking performance of the DDOILC is simulated as follows.
Figure 1 exhibits the switching rule produced by (41) with the values 1 and 2.
Figure 2 manifests the convergences of learning identification algorithm (44) with the parameters selected as $\eta^1 = \eta^2 = 0.01$ and $\eta^1 = \eta^2 = 0.5$, respectively, under the given parameters $\omega^1 = \omega^2 = 0.01$ of DDOILC (45). It is shown that the estimation error of the Markov vector $g = [g^1, g^2]^{\mathrm T}$ from the estimated Markov vector $g_k = [g_k^1, g_k^2]^{\mathrm T}$ decreases as the batch number increases, that is, $\|g - g_{k+1}\| < \|g - g_k\|$. It is also shown that the rate of convergence of learning identification algorithm (44) accelerates as the parameters $\eta^1$ and $\eta^2$ decrease.
Figure 3 demonstrates the convergences of the tracking errors of the DDOILC with the parameters chosen as $\omega^1 = \omega^2 = 0.01$, $\omega^1 = \omega^2 = 0.5$, $\omega^1 = \omega^2 = 1$, and $\omega^1 = \omega^2 = 3$ under the given parameters $\eta^1 = \eta^2 = 0.01$ of learning identification algorithm (44), where the tracking errors are measured in the form of the 2-norm. It is shown that not only are the tracking errors of the DDOILC monotonically convergent to zero but also the tracking performance improves as the parameters $\omega^1$ and $\omega^2$ decrease.
Figure 4 meshes the tracking error $e_k(t)$ of the DDOILC with the parameters chosen as $\omega^1 = \omega^2 = 0.5$ and $\eta^1 = \eta^2 = 0.01$ over the time–batch plane, which indicates that the tracking error converges to zero over the time–batch plane. Thus, the proposed DDOILC is effective.
For system (40), the outputs of the DDOILC with the parameters $\omega^1 = \omega^2 = 0.5$ and $\eta^1 = \eta^2 = 0.01$ at the 3rd, 4th, and 15th batches are exhibited in Figure 5, where the dashed curve refers to the desired trajectory, the dotted curve represents the 3rd output, the solid curve shows the output at the 4th batch, and the dashed–dotted curve represents the 15th output.
Figure 2, Figure 3, Figure 4 and Figure 5 demonstrate that the proposed DDOILC has good convergence.
Further, a comparison of the proposed DDOILC with the following D-type ILC was made:
$$u_{k+1}^q(t) = u_k^q(t) + \gamma_D^q\, e_k^q(t+1), \quad q = 1, 2,$$
where $\gamma_D^q$ is a constant learning gain.
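In the super-vector convention used above (inputs stacked from $t_{q-1}$ to $t_q-1$, errors from $t_{q-1}+1$ to $t_q$), this comparison law reduces to a single line per phase; the sketch below is our own illustration and assumes numpy is imported as np, as in the earlier sketches.

```python
def dtype_update(u_k, e_k, gamma):
    """D-type ILC: u_{k+1}(t) = u_k(t) + gamma * e_k(t+1); the one-step shift is
    already built into the super-vectors, so the update is elementwise."""
    return np.asarray(u_k, dtype=float) + gamma * np.asarray(e_k, dtype=float)
```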
In Figure 6, a comparison of the tracking errors of the DDOILC and the D-type ILC is depicted, where the weight factors of the DDOILC are selected as $\omega^1 = \omega^2 = 0.01$ and $\eta^1 = \eta^2 = 0.01$, and the learning gains of the D-type ILC are chosen as $\gamma_D^1 = \gamma_D^2 = 0.1$, $\gamma_D^1 = \gamma_D^2 = 0.5$, and $\gamma_D^1 = \gamma_D^2 = 1$. It is shown that the 2-norm of the tracking error of the DDOILC is monotonically convergent to zero and that the D-type ILC is also effective, but the proposed DDOILC performs better than the D-type ILC. Further, Figure 7 illustrates a more accurate comparison of the tracking errors of the DDOILC and the D-type ILC, measured by the natural logarithm of the 2-norm, based on Figure 6.
Based on the parameters shown in Figure 6, the outputs of the DDOILC and the D-type ILC for system (40) at the 4th batch are exhibited in Figure 8, where the solid curve represents the desired trajectory, the dotted curve refers to the 4th output of the DDOILC, and the dashed curve shows the 4th output of the D-type ILC. Figure 8 reveals that the proposed DDOILC performs better than the D-type ILC.

6. Conclusions

For multi-phase batch processes, this paper investigated a data-driven optimal ILC by solving an optimization problem constructed with multi-batch system input and output data. An iterative learning identification algorithm was generated by minimizing a sequential quadratic objective function formed by the residual of the real system outputs from the predicted outputs and the compensator. Interactively, a data-driven optimal ILC was designed that consists of the compensator with the identified Markov parameters and the tracking error. The results show that the estimation error is declining and the tracking error is monotonically convergent. Finally, the simulation results illustrate the effectiveness and practicability of the proposed data-driven optimal ILC. However, how to design an efficacious data-driven optimal ILC for multi-phase batch processes that can be described as linear time-varying switched systems still remains to be explored. This problem will be investigated in our future work.

Author Contributions

Data curation, S.W.; Formal analysis, Y.G. and X.R.; Funding acquisition, X.R.; Investigation, S.W.; Methodology, Y.G., S.W. and X.R.; Supervision, X.R.; Writing—original draft, Y.G.; Writing—review & editing, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Science Foundation of China (61973338), the Natural Science Basic Research Plan in Shaanxi Province of China (2020JQ-831,2021JQ-657,2021JQ-662), and the Scientific Research Program Funded by Shaanxi Provincial Education Department (20JK0642).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Arimoto, S.; Kawamura, S.; Miyazaki, F. Bettering operation of robots by learning. J. Robot. Syst. 1984, 1, 123–140.
2. Ahn, H.S.; Chen, Y.Q.; Moore, K.L. Iterative learning control: Brief survey and categorization. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2007, 37, 1099–1121.
3. Saab, S.S.; Shen, D.; Orabi, M.; Kors, D.; Jaafar, R.H. Iterative learning control: Practical implementation and automation. IEEE Trans. Ind. Electron. 2021, 69, 1858–1866.
4. Shahriari, Z.; Bernhardsson, B.; Troeng, O. Convergence analysis of iterative learning control using pseudospectra. Int. J. Control 2022, 95, 269–281.
5. Ye, Y.; Wang, D. Learning more frequency components using P-type ILC with negative learning gain. IEEE Trans. Ind. Electron. 2006, 53, 712–716.
6. Liu, T.; Wang, X.Z.; Chen, J. Robust PID based indirect-type iterative learning control for batch processes with time-varying uncertainties. J. Process Control 2014, 24, 95–106.
7. Nekoo, S.R.; Acosta, J.Á.; Heredia, G.; Ollero, A. A PD-type state-dependent Riccati equation with iterative learning augmentation for mechanical systems. IEEE/CAA J. Autom. 2022, 1–13.
8. Meng, D.Y.; Moore, K.L. Convergence of iterative learning control for SISO nonrepetitive systems subject to iteration-dependent uncertainties. Automatica 2017, 79, 167–177.
9. Liu, J.; Zhang, Y.M.; Ruan, X. Iterative learning control for a class of uncertain nonlinear systems with current state feedback. Int. J. Syst. Sci. 2019, 50, 1889–1901.
10. Lee, J.H.; Lee, K.S.; Kim, W.C. Model-based iterative learning control with a quadratic criterion for time-varying linear systems. Automatica 2000, 36, 641–657.
11. Harte, T.J.; Hätönen, J.; Owens, D.H. Discrete-time inverse model-based iterative learning control: Stability, monotonicity and robustness. Int. J. Control 2005, 78, 577–586.
12. Owens, D.H.; Freeman, C.T.; Chu, B. Multivariable norm optimal iterative learning control with auxiliary optimization. Int. J. Control 2013, 86, 1026–1045.
13. Sun, H.; Alleyne, A.G. A computationally efficient norm optimal iterative learning control approach for LTV systems. Automatica 2014, 50, 141–148.
14. Owens, D.H.; Feng, K. Parameter optimization in iterative learning control. Int. J. Control 2003, 76, 1059–1069.
15. Owens, D.H.; Chu, B.; Songjun, M. Parameter-optimal iterative learning control using polynomial representations of the inverse plant. Int. J. Control 2012, 85, 533–544.
16. Liao-McPherson, D.; Balta, E.C.; Rupenyan, A.; Lygeros, J. On robustness in optimization-based constrained iterative learning control. IEEE Control Syst. Lett. 2022, 6, 2846–2851.
17. Janssens, P.; Pipeleer, G.; Swevers, J. A data-driven constrained norm-optimal iterative learning control framework for LTI systems. IEEE Trans. Control Syst. Technol. 2012, 21, 546–551.
18. Chi, R.H.; Liu, Y.; Hou, Z.S.; Jin, S.T. Data-driven terminal iterative learning control with higher-order learning law for a class of nonlinear discrete-time multiple-input–multiple-output systems. IET Control Theory Appl. 2015, 9, 1075–1082.
19. Chi, R.H.; Huang, B.; Hou, Z.S.; Jin, S.T. Data-driven higher-order terminal iterative learning control with a faster convergence speed. Int. J. Robust Nonlinear Control 2018, 28, 103–119.
20. Bu, X.H.; Hou, Z.S. Adaptive iterative learning control for linear systems with binary-valued observations. IEEE Trans. Neural Netw. Learn. Syst. 2016, 29, 232–237.
21. Geng, Y.; Ruan, X.; Zhou, Q.H.; Yang, X. Robust adaptive iterative learning control for nonrepetitive systems with iteration-varying parameters and initial state. Int. J. Mach. Learn. Cybern. 2021, 12, 2327–2337.
22. Liu, C.Y.; Ruan, X. Input-output-driven gain-adaptive iterative learning control for linear discrete-time-invariant systems. Int. J. Robust Nonlinear Control 2021, 31, 8551–8568.
23. Csernák, G.; Gyebrószki, G.; Stépán, G. Multi-baker map as a model of digital PD control. Int. J. Bifurcation Chaos 2016, 26, 1650023.
24. Wang, Y.Q.; Yang, Y.; Gao, F.R.; Zhou, D.H. Control of multi-phase batch processes: Formulation and challenge. IFAC Proc. 2007, 40, 339–344.
25. Wang, Y.Q.; Zhou, D.H.; Gao, F.R. Iterative learning model predictive control for multi-phase batch processes. J. Process Control 2008, 18, 543–557.
26. Xu, Z.; Zhao, J.; Yang, Y.; Shao, Z.; Gao, F. Optimal iterative learning control based on a time-parametrized linear time-varying model for batch processes. Ind. Eng. Chem. Res. 2013, 52, 6182–6192.
27. Oh, S.K.; Lee, J.M. Stochastic iterative learning control for discrete linear time-invariant system with batch-varying reference trajectories. J. Process Control 2015, 36, 64–78.
28. Yang, X. Robustness of reinforced gradient-type iterative learning control for batch processes with Gaussian noise. Chin. J. Chem. Eng. 2016, 24, 623–629.
29. Zhou, L.M.; Jia, L.; Wang, Y.L.; Peng, D.G.; Tan, W.D. An integrated robust iterative learning control strategy for batch processes based on 2D system. J. Process Control 2020, 85, 136–148.
30. Wang, L.M.; He, X.; Zhou, D.H. Average dwell time-based optimal iterative learning control for multi-phase batch processes. J. Process Control 2016, 40, 1–12.
31. Wang, L.M.; Zhu, C.J.; Yu, J.X.; Ping, L.; Zhang, R.D.; Gao, F.R. Fuzzy iterative learning control for batch processes with interval time-varying delays. Ind. Eng. Chem. Res. 2017, 56, 3993–4001.
32. Wang, L.M.; Shen, Y.T.; Yu, J.X.; Ping, L.; Zhang, R.D.; Gao, F.R. Robust iterative learning control for multi-phase batch processes: An average dwell-time method with 2D convergence indexes. Int. J. Syst. Sci. 2018, 49, 324–343.
33. Bu, X.H.; Hou, Z.S.; Yu, F.S. Iterative learning control for a class of non-linear switched systems. IET Control Theory Appl. 2013, 7, 470–481.
34. Wang, J.P.; Luo, J.X.; Hu, Y.M. Monotonically convergent hybrid ILC for uncertain discrete-time switched systems with state delay. Trans. Inst. Meas. Control 2017, 39, 1047–1058.
35. Yang, X.; Ruan, X. Iterative learning control for linear continuous-time switched systems with observation noise. Trans. Inst. Meas. Control 2019, 41, 1178–1185.
Figure 1. Switching rule.
Figure 2. The convergences of the learning identification algorithm.
Figure 3. The convergences of the tracking errors of the DDOILC.
Figure 4. Tracking error mesh of the DDOILC.
Figure 5. Outputs of the DDOILC with $\omega^1 = \omega^2 = 0.5$.
Figure 6. Comparison of the DDOILC with the D-type ILC.
Figure 7. Accurate comparison of the DDOILC with the D-type ILC.
Figure 8. Outputs of the DDOILC and D-type ILC at the 4th batch.