Article

Auxiliary Model Based Multi-Innovation Stochastic Gradient Identification Algorithm for Periodically Non-Uniformly Sampled-Data Hammerstein Systems

1 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(3), 84; https://doi.org/10.3390/a10030084
Submission received: 12 May 2017 / Revised: 7 July 2017 / Accepted: 19 July 2017 / Published: 31 July 2017

Abstract

Due to the lack of powerful model description methods, the identification of Hammerstein systems from non-uniformly sampled input-output data remains a challenging problem. This paper introduces a time-varying backward shift operator to describe periodically non-uniformly sampled-data Hammerstein systems, which simplifies the structure of the lifted models obtained by the traditional lifting technique. Furthermore, an auxiliary model-based multi-innovation stochastic gradient algorithm is presented to estimate the parameters of the linear and nonlinear blocks. The simulation results confirm that the proposed algorithm is effective and achieves high estimation accuracy.

1. Introduction

The dynamics of most practical systems are inherently nonlinear due to complex physical, chemical and biological mechanisms. The modeling of nonlinear systems is challenging and has become an active research area in both academia and industry [1,2]. To simplify the modeling problem, block-oriented models, which are composed of linear dynamic blocks in combination with nonlinear memoryless blocks, have been widely utilized to describe nonlinear systems. The state of the art in designing, analyzing and implementing identification algorithms for block-oriented nonlinear systems was well summarized in a recent book by Giri and Bai [3]. Depending on the location of the static nonlinear component, block-oriented models can be classified into the Hammerstein model, the Wiener model and the Hammerstein–Wiener model [4,5,6]. The Hammerstein model represents a class of input nonlinear systems, where the nonlinear block precedes the linear one. It can flexibly approximate various input nonlinearities, such as saturation, dead zone, backlash and hysteresis, and has therefore been extensively employed in realistic applications [7,8,9,10,11].
For decades, the identification of Hammerstein nonlinear systems has attracted much attention, and numerous methods have been reported in the literature. For example, Pouliquen et al. studied the parameter estimation of Hammerstein systems where the linear part is described by an output error model and presented an iterative algorithm based on the optimal bounding ellipsoid criterion [12]. Ding et al. applied the auxiliary model identification principle to deal with unmeasurable noise-free outputs in Hammerstein output error systems, presented a recursive least squares (RLS) algorithm and investigated its convergence properties [13]. Filipovic derived a robust extended RLS algorithm to estimate the parameters of Hammerstein systems disturbed by non-Gaussian noise [14]. Gao et al. proposed a blind identification algorithm for Hammerstein systems with hysteresis nonlinearity and further developed a composite control strategy to track the reference input [15].
Among various new methodologies in the identification area, the multi-innovation theory is regarded as a useful way to improve estimation precision and convergence rate. The basic idea of the multi-innovation theory is innovation expanding, which updates the parameter estimates at each recursion using the data over a moving, fixed-size window [16,17]. Typically, the multi-innovation theory is incorporated into the RLS algorithm, the stochastic gradient (SG) algorithm, the stochastic Newton recursive algorithm, etc., to address identification problems [18]. For instance, a multi-innovation RLS algorithm was developed for Hammerstein AutoRegressive eXogenous (ARX) systems with backlash nonlinearity [19]. Furthermore, a multi-innovation SG algorithm [20] and an auxiliary model-based multi-innovation generalized extended SG algorithm [21] were proposed to estimate the parameters of Hammerstein nonlinear ARX and Box–Jenkins systems, respectively. Compared with the multi-innovation RLS algorithm, the multi-innovation SG algorithm is more efficient in computation because it avoids large matrix inversions [22].
The above-mentioned Hammerstein systems all belong to single-rate systems, whose inputs and outputs are uniformly sampled at the same rate. However, non-uniform sampling can be encountered in practice due to hardware limitations or economic considerations [23,24,25]. For example, influenced by transmission delays and packet losses, the input-output data in networked control systems might be available only at non-uniformly spaced time instants [26,27]. Non-uniform sampling includes uniform sampling as a special case and can always preserve controllability and observability in discretization [28]. Furthermore, it can overcome the restriction of the Nyquist limit and enable a much lower average sampling frequency. Therefore, intentional non-uniform sampling has the potential to reduce the hardware cost in control applications [29]. Due to the complexity of arbitrary non-uniform sampling, most works in the literature have focused on periodically non-uniformly sampled-data systems [30,31]. For periodically non-uniformly sampled-data Hammerstein systems, Li et al. derived the lifted transfer function model by means of the lifting technique and presented a least squares-based iterative algorithm for parameter estimation [32]. The lifting technique is a benchmark tool for dealing with multirate and non-uniformly sampled-data systems [33,34]. However, the corresponding lifted models are complex and involve a large number of parameters, which brings a great challenge for identification. To simplify the model structure and reduce the identification complexity, Xie et al. put forward a novel input-output representation of linear systems with non-uniform sampling by introducing a time-varying backward shift operator $\delta^{-1}$ [35]. On the basis of that work, this paper proposes a $\delta^{-1}$-based model to describe periodically non-uniformly sampled-data Hammerstein systems and presents an auxiliary model-based multi-innovation SG algorithm to estimate the model parameters.
The rest of this paper is organized as follows. Section 2 formulates the identification problem of periodically non-uniformly sampled-data Hammerstein systems. Identification algorithms are proposed in Section 3, and an example is provided in Section 4 to examine their estimation performance. Finally, concluding remarks are given in Section 5.

2. Problem Description

Consider a periodically non-uniformly sampled-data Hammerstein system as depicted in Figure 1, in which $H_\tau$ is a periodic non-uniform zero-order hold that converts a discrete-time input sequence $\{u(kT + t_i)\}$ into a continuous-time input $u(t)$, i.e.,
$$u(t) = \begin{cases} u(kT + t_0), & kT + t_0 \le t < kT + t_1, \\ u(kT + t_1), & kT + t_1 \le t < kT + t_2, \\ \quad\vdots \\ u(kT + t_{q-1}), & kT + t_{q-1} \le t < kT + t_q, \end{cases}$$
where $k = 0, 1, \ldots$; $T$ is the frame period, spaced by $q$ non-uniform sampling instants $t_i$ ($i = 0, 1, \ldots, q-1$) with $t_0 = 0$ and $t_q = T$.
By passing through a nonlinear static block $f(\cdot)$, $u(t)$ is transformed into an unmeasurable inner input $\bar{u}(t)$ to a linear dynamic process $P$ of order $n$, which can be expressed as:
$$\bar{u}(t) = \sum_{m=1}^{n_c} c_m f_m[u(t)], \qquad (1)$$
where $f_m[u(t)]$ are known nonlinear basis functions and $c_m$ are unknown coefficients to be estimated.
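As a concrete illustration, with the polynomial basis $f_m(u) = u^m$ (the choice used in the simulation example of Section 4), the inner input can be evaluated as sketched below; the function name and the polynomial basis are illustrative assumptions, not part of the paper's formulation:

```python
import numpy as np

def inner_input(u, c):
    """Evaluate u_bar = sum_{m=1}^{n_c} c_m * f_m(u) for the polynomial
    basis f_m(u) = u^m (an illustrative choice of basis functions)."""
    powers = np.array([u ** m for m in range(1, len(c) + 1)])
    return float(np.dot(c, powers))

# With c = [1, 0.5, 0.25] (the nonlinearity of the simulation example):
# inner_input(2.0, c) evaluates 2 + 0.5*4 + 0.25*8.
```

Any other basis (e.g., saturation or dead-zone segments) would only change the list of `powers` to the corresponding basis outputs.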
The noise-free output $w(t)$ of the process $P$ is corrupted by a white noise $v(t)$, generating a measurable output $y(t)$. The non-uniform sampler $S_\tau$ has a sampling pattern synchronous with $H_\tau$; thus, the discrete-time output sequence $\{y(kT + t_i)\}$ is obtained at the sampling instants $t = kT + t_i$ ($i = 0, 1, \ldots, q-1$).
For notational simplicity, in the following the datum $s(kT + t_i)$ at the non-uniform sampling instant $t = kT + t_i$ is denoted by $s_i(k)$. By using the time-varying backward shift operator $\delta^{-1}$ [$\delta^{-1} s_i(k) = s_{i-1}(k)$] proposed in [35], the mapping between the inner input $\bar{u}_i(k)$ and the noise-free output $w_i(k)$ of the periodically non-uniformly sampled-data Hammerstein system can be represented as:
$$w_i(k) = \frac{B_i(\delta)}{A_i(\delta)}\,\bar{u}_i(k), \quad i = 0, 1, 2, \ldots, q-1, \qquad (2)$$
where:
$$A_i(\delta) := 1 + a_{i1}\delta^{-1} + a_{i2}\delta^{-2} + \cdots + a_{in}\delta^{-n},$$
$$B_i(\delta) := b_{i0} + b_{i1}\delta^{-1} + b_{i2}\delta^{-2} + \cdots + b_{in}\delta^{-n}.$$
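The action of the time-varying backward shift can be pictured with a simple indexing rule. In the sketch below, the frame-by-frame storage layout and the wrap-around convention ($s_{-1}(k)$ meaning the last sample of frame $k-1$) are assumptions of this illustration, not the formal construction of [35]:

```python
import numpy as np

def backshift(s, k, i, steps=1):
    """Return delta^{-steps} s_i(k) = s_{i-steps}(k), where s[k, i]
    holds the sample at instant k*T + t_i; indices below i = 0 wrap
    into frame k - 1 (illustrative storage convention)."""
    q = s.shape[1]
    idx = i - steps
    return s[k + idx // q, idx % q]

# Example with q = 3 channels per frame: s[k, i] = 3k + i.
s = np.arange(12).reshape(4, 3)
```

For instance, `backshift(s, 2, 0)` returns $s_{-1}(2) = s_2(1)$, the last sample of the previous frame.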
From the system schematic diagram in Figure 1, we have:
$$y_i(k) = w_i(k) + v_i(k). \qquad (3)$$
Given the non-uniformly sampled input-output data $\{u_i(k), y_i(k),\ k = 0, 1, 2, \ldots,\ i = 0, 1, \ldots, q-1\}$, the objective of this paper is to estimate the parameters of the nonlinear block in (1):
$$c := [c_1, c_2, \ldots, c_{n_c}]^T \in \mathbb{R}^{n_c},$$
and the parameters of the linear block in (2):
$$a_i := [a_{i1}, a_{i2}, \ldots, a_{in}]^T \in \mathbb{R}^{n},$$
$$b_i := [b_{i0}, b_{i1}, b_{i2}, \ldots, b_{in}]^T \in \mathbb{R}^{n+1}.$$

3. Identification Algorithms

3.1. The AM-SG Algorithm

According to the over-parameterized linear regression approach [36], define the information vector $\varphi_i(k)$ and the parameter vector $\theta_i$ as:
$$\varphi_i(k) := \begin{bmatrix} \varphi_{iw}(k) \\ \varphi_{iu}(k) \end{bmatrix} \in \mathbb{R}^{n_0}, \quad \theta_i := \begin{bmatrix} a_i \\ \theta_{iu} \end{bmatrix} \in \mathbb{R}^{n_0}, \quad n_0 = n + (n+1)n_c,$$
$$\varphi_{iw}(k) := [-w_{i-1}(k), -w_{i-2}(k), \ldots, -w_{i-n}(k)]^T \in \mathbb{R}^{n},$$
$$\phi_{ij}(k) := [f_1[u_{i-j}(k)], f_2[u_{i-j}(k)], \ldots, f_{n_c}[u_{i-j}(k)]]^T \in \mathbb{R}^{n_c}, \quad j = 0, 1, 2, \ldots, n,$$
$$\varphi_{iu}(k) := [\phi_{i0}^T(k), \phi_{i1}^T(k), \phi_{i2}^T(k), \ldots, \phi_{in}^T(k)]^T \in \mathbb{R}^{(n+1)n_c}, \quad \theta_{iu} := [b_{i0}c^T, b_{i1}c^T, b_{i2}c^T, \ldots, b_{in}c^T]^T \in \mathbb{R}^{(n+1)n_c}.$$
Using Equation (1), Equation (2) can be written in the following vector form:
$$w_i(k) = -\sum_{j=1}^{n} a_{ij} w_{i-j}(k) + \sum_{j=0}^{n} b_{ij} \sum_{m=1}^{n_c} c_m f_m[u_{i-j}(k)] = \varphi_i^T(k)\theta_i. \qquad (4)$$
Substituting Equation (4) into Equation (3), we have:
$$y_i(k) = \varphi_i^T(k)\theta_i + v_i(k). \qquad (5)$$
Equation (5) is the identification model of the periodically non-uniformly sampled-data Hammerstein system, in which the parameter vector $\theta_i$ contains the products of the parameters $b_{ij}$ and $c_m$. To guarantee a unique parameterization, the first coefficient of the nonlinear function is assumed to be one, i.e., $c_1 = 1$ [14,37]. Furthermore, the information vector $\varphi_i(k)$ contains the unmeasurable noise-free outputs $w_{i-j}(k)$. A solution to this difficulty is to replace $w_{i-j}(k)$ with their estimates $\hat{w}_{i-j}(k)$ based on the auxiliary model identification idea [38,39,40]. Accordingly, define the estimate of $\varphi_i(k)$ as:
$$\hat{\varphi}_i(k) := \begin{bmatrix} \hat{\varphi}_{iw}(k) \\ \varphi_{iu}(k) \end{bmatrix},$$
$$\hat{\varphi}_{iw}(k) := [-\hat{w}_{i-1}(k), -\hat{w}_{i-2}(k), \ldots, -\hat{w}_{i-n}(k)]^T.$$
Using the estimates $\hat{\varphi}_i(k)$ and $\hat{\theta}_i(k)$ in place of $\varphi_i(k)$ and $\theta_i$ in (4) yields the auxiliary model output:
$$\hat{w}_i(k) = \hat{\varphi}_i^T(k)\hat{\theta}_i(k).$$
Applying the stochastic gradient search principle to minimize the following criterion function:
$$J_1(\theta_i) = \mathrm{E}\left\{[y_i(k) - \hat{\varphi}_i^T(k)\theta_i]^2\right\},$$
an auxiliary model-based stochastic gradient (AM-SG) algorithm can be derived for identifying the parameter vector $\theta_i$ in (5):
$$\hat{\theta}_i(k) = \hat{\theta}_i(k-1) + \frac{\hat{\varphi}_i(k)}{r_i(k)}\left[y_i(k) - \hat{\varphi}_i^T(k)\hat{\theta}_i(k-1)\right],$$
$$r_i(k) = r_i(k-1) + \|\hat{\varphi}_i(k)\|^2, \quad r_i(0) = 1.$$
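The two recursions above amount to a few lines of code. The sketch below (the function name and the toy driver are illustrative assumptions) applies the AM-SG update to a problem with known regressors; in the Hammerstein setting the regressor would be $\hat{\varphi}_i(k)$ built from the auxiliary model outputs:

```python
import numpy as np

def am_sg_step(theta, r, phi, y):
    """One AM-SG recursion: update the step size r, then move theta
    along phi / r times the scalar innovation y - phi^T theta."""
    r = r + phi @ phi                     # r(k) = r(k-1) + ||phi||^2
    innovation = y - phi @ theta
    theta = theta + (phi / r) * innovation
    return theta, r

# Toy driver: recover a 2-parameter vector from noiseless data.
rng = np.random.default_rng(0)
theta_true = np.array([0.5, -0.3])
theta, r = np.zeros(2), 1.0
for _ in range(2000):
    phi = rng.standard_normal(2)
    theta, r = am_sg_step(theta, r, phi, phi @ theta_true)
```

Because $r_i(k)$ accumulates without forgetting, the effective step size shrinks like $1/k$, which is exactly why the convergence of plain AM-SG is slow and motivates the multi-innovation extension below.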

3.2. The AM-MISG Algorithm

The AM-SG algorithm only uses the current dataset to update the parameter estimates. Therefore, it has a slow convergence rate and low estimation accuracy. To improve the identification performance of the AM-SG algorithm, an innovation length p is introduced to derive an auxiliary model-based multi-innovation stochastic gradient (AM-MISG) algorithm.
Considering the most recent p sets of input-output data, define the stacked output vector $Y_i(p,k)$, the stacked noise vector $V_i(p,k)$ and the stacked information matrix $\Psi_i(p,k)$ as:
$$Y_i(p,k) = [y_i(k), y_i(k-1), \ldots, y_i(k-p+1)]^T \in \mathbb{R}^{p},$$
$$V_i(p,k) = [v_i(k), v_i(k-1), \ldots, v_i(k-p+1)]^T \in \mathbb{R}^{p},$$
$$\Psi_i(p,k) = [\varphi_i(k), \varphi_i(k-1), \ldots, \varphi_i(k-p+1)] \in \mathbb{R}^{n_0 \times p}.$$
Equation (5) can be expanded into the following matrix form:
$$Y_i(p,k) = \Psi_i^T(p,k)\theta_i + V_i(p,k).$$
However, the information vectors $\varphi_i(k-l)$, $l = 0, 1, \ldots, p-1$, in $\Psi_i(p,k)$ include unknown noise-free outputs. Let $\hat{\varphi}_i(k-l)$ be their estimates; the estimate of $\Psi_i(p,k)$ can then be defined as:
$$\hat{\Psi}_i(p,k) = [\hat{\varphi}_i(k), \hat{\varphi}_i(k-1), \ldots, \hat{\varphi}_i(k-p+1)] \in \mathbb{R}^{n_0 \times p}.$$
Define the following criterion function:
$$J_2(\theta_i) = \|V_i(p,k)\|^2 = \|Y_i(p,k) - \hat{\Psi}_i^T(p,k)\theta_i\|^2, \qquad (10)$$
where the matrix norm is defined by $\|X\|^2 = \mathrm{tr}[XX^T]$. The gradient of $J_2(\theta_i)$ with respect to $\theta_i$ is given by:
$$\mathrm{grad}[J_2(\theta_i)] = \frac{\partial J_2(\theta_i)}{\partial \theta_i} = -2\hat{\Psi}_i(p,k)\left[Y_i(p,k) - \hat{\Psi}_i^T(p,k)\theta_i\right].$$
Applying the stochastic gradient search principle to minimize the criterion function in (10), we have:
$$\hat{\theta}_i(k) = \hat{\theta}_i(k-1) - \mu_i(k)\,\mathrm{grad}\,J_2[\hat{\theta}_i(k-1)] = \hat{\theta}_i(k-1) + 2\mu_i(k)\hat{\Psi}_i(p,k)\left[Y_i(p,k) - \hat{\Psi}_i^T(p,k)\hat{\theta}_i(k-1)\right], \qquad (11)$$
where $\mu_i(k) > 0$ is called the step size or the convergence factor. For convenience of derivation, let $\mu_i(k) = \frac{1}{2r_i(k)}$; Equation (11) can then be rewritten as:
$$\hat{\theta}_i(k) = \hat{\theta}_i(k-1) + \frac{1}{r_i(k)}\hat{\Psi}_i(p,k)\left[Y_i(p,k) - \hat{\Psi}_i^T(p,k)\hat{\theta}_i(k-1)\right] = \left[I - \frac{1}{r_i(k)}\hat{\Psi}_i(p,k)\hat{\Psi}_i^T(p,k)\right]\hat{\theta}_i(k-1) + \frac{1}{r_i(k)}\hat{\Psi}_i(p,k)Y_i(p,k). \qquad (12)$$
To guarantee the convergence of this recursive algorithm, all eigenvalues of the matrix $I - \frac{1}{r_i(k)}\hat{\Psi}_i(p,k)\hat{\Psi}_i^T(p,k)$ should lie inside the unit circle. Therefore, a conservative choice of $\frac{1}{r_i(k)}$ is:
$$0 < \frac{1}{r_i(k)} \le \frac{1}{\lambda_{\max}[\hat{\Psi}_i(p,k)\hat{\Psi}_i^T(p,k)]}.$$
In this paper, we take a common choice:
$$r_i(k) = \lambda_i r_i(k-1) + \|\hat{\varphi}_i(k)\|^2, \quad r_i(0) = 1, \qquad (13)$$
where $\lambda_i \in (0, 1]$ is the forgetting factor.
Substituting Equation (13) into Equation (12), the auxiliary model-based multi-innovation stochastic gradient (AM-MISG) algorithm can be derived:
$$\hat{\theta}_i(k) = \hat{\theta}_i(k-1) + \frac{\hat{\Psi}_i(p,k)}{r_i(k)}\left[Y_i(p,k) - \hat{\Psi}_i^T(p,k)\hat{\theta}_i(k-1)\right], \qquad (14)$$
$$r_i(k) = \lambda_i r_i(k-1) + \|\hat{\varphi}_i(k)\|^2, \quad r_i(0) = 1, \qquad (15)$$
$$Y_i(p,k) = [y_i(k), y_i(k-1), \ldots, y_i(k-p+1)]^T, \qquad (16)$$
$$\hat{\Psi}_i(p,k) = [\hat{\varphi}_i(k), \hat{\varphi}_i(k-1), \ldots, \hat{\varphi}_i(k-p+1)], \qquad (17)$$
$$\hat{\varphi}_i(k) = \begin{bmatrix} \hat{\varphi}_{iw}(k) \\ \varphi_{iu}(k) \end{bmatrix}, \qquad (18)$$
$$\hat{\varphi}_{iw}(k) = [-\hat{w}_{i-1}(k), -\hat{w}_{i-2}(k), \ldots, -\hat{w}_{i-n}(k)]^T, \qquad (19)$$
$$\varphi_{iu}(k) = [\phi_{i0}^T(k), \phi_{i1}^T(k), \phi_{i2}^T(k), \ldots, \phi_{in}^T(k)]^T, \qquad (20)$$
$$\phi_{ij}(k) = [f_1[u_{i-j}(k)], f_2[u_{i-j}(k)], \ldots, f_{n_c}[u_{i-j}(k)]]^T, \quad j = 0, 1, 2, \ldots, n, \qquad (21)$$
$$\hat{w}_i(k) = \hat{\varphi}_i^T(k)\hat{\theta}_i(k). \qquad (22)$$
Since $c_1 = 1$, the estimates of $a_{ij}$ and $b_{ij}$ can be read directly from $\hat{\theta}_i(k)$:
$$\hat{a}_{ij}(k) = \hat{\theta}_{i,j}(k), \quad j = 1, 2, \ldots, n, \qquad (23)$$
$$\hat{b}_{ij}(k) = \hat{\theta}_{i,\,n+n_c j+1}(k), \quad j = 0, 1, 2, \ldots, n, \qquad (24)$$
where $\hat{\theta}_{i,j}(k)$ denotes the $j$-th element of $\hat{\theta}_i(k)$. Note that each $c_m$ ($m = 2, 3, \ldots, n_c$) is estimated $n + 1$ times at each non-uniform sampling instant $kT + t_i$ ($i = 0, 1, 2, \ldots, q-1$). Therefore, we can simply take their average, as in [36], as the estimate of $c_m$ over the $k$-th frame period, i.e.,
$$\hat{c}_m(k) = \frac{1}{q(n+1)}\sum_{i=0}^{q-1}\sum_{j=0}^{n}\frac{\hat{\theta}_{i,\,n+n_c j+m}(k)}{\hat{b}_{ij}(k)}, \quad m = 2, 3, \ldots, n_c. \qquad (25)$$
Furthermore, a more numerically-sound SVD-based approach proposed by Bai [41] can be applied to obtain the estimates of b i j and c m .
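A direct way to realize the averaging in (25) is sketched below; the helper name and the list-based interface are assumptions of this illustration. Each over-parameterized element $\hat{\theta}_{i,\,n+n_c j+m} \approx b_{ij}c_m$ is divided by the matching $\hat{b}_{ij}$ before averaging over all channels $i$ and delays $j$:

```python
import numpy as np

def average_c(theta_hats, b_hats, n, n_c):
    """Recover c_2, ..., c_{n_c} by averaging theta_{i, n + n_c*j + m} / b_ij
    over the q channels and n + 1 delays; c_1 is fixed to one."""
    q = len(theta_hats)
    c = np.ones(n_c)
    for m in range(2, n_c + 1):
        acc = 0.0
        for i in range(q):
            for j in range(n + 1):
                # 1-based index n + n_c*j + m maps to 0-based n + n_c*j + m - 1
                acc += theta_hats[i][n + n_c * j + m - 1] / b_hats[i][j]
        c[m - 1] = acc / (q * (n + 1))
    return c
```

For a consistent $\hat{\theta}_i$, every ratio equals the same $c_m$ and the average is exact; with noisy estimates, the averaging reduces the variance of $\hat{c}_m$.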
The flowchart of the AM-MISG algorithm in (14)-(25) for computing the parameter estimates of periodically non-uniformly sampled-data Hammerstein systems is illustrated in Figure 2, and the detailed implementation steps are summarized as follows:
1. Initialization: Choose the data length $L$, the innovation length $p$ and the forgetting factor $\lambda_i$; specify the nonlinear basis functions $\{f_m(\cdot), m = 1, 2, \ldots, n_c\}$; set $u_i(k) = 0$, $y_i(k) = 0$, $\hat{w}_i(k) = 0$ for $k \le 0$ and $i = 0, 1, 2, \ldots, q-1$; take the initial values $\hat{\theta}_i(0) = \mathbf{1}_{n_0}/p_0$, where $p_0 = 10^6$ and $\mathbf{1}_{n_0}$ is a column vector of ones; let $k = 1$ and $i = 0$.
2. Collect the non-uniformly sampled input-output data $u_i(k)$ and $y_i(k)$.
3. Calculate $f_m[u_i(k)]$ based on $u_i(k)$; form $\hat{\varphi}_{iw}(k)$, $\varphi_{iu}(k)$ and $\phi_{ij}(k)$ by (19)-(21); construct $\hat{\varphi}_i(k)$ by (18).
4. Form the stacked output vector $Y_i(p,k)$ and the stacked information matrix $\hat{\Psi}_i(p,k)$ by (16) and (17), respectively.
5. Compute the step size $r_i(k)$ by (15) and update the parameter estimate $\hat{\theta}_i(k)$ by (14); calculate $\hat{w}_i(k)$ by (22); obtain $\hat{a}_{ij}(k)$ and $\hat{b}_{ij}(k)$ by (23) and (24), respectively.
6. If $i < q-1$, increase $i$ by one and go to Step 2; otherwise, compute $\hat{c}_m(k)$, $m = 2, 3, \ldots, n_c$, by (25), let $i = 0$ and go to the next step.
7. If $k < L$, increase $k$ by one and go to Step 2; otherwise, terminate the computing process.
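The implementation steps above can be sketched compactly in code. In the class below, the name, the deque-based data window and the toy driver are assumptions of this illustration; it realizes the core recursions (14)-(15) for one channel $i$, and in the full algorithm the regressors would be the auxiliary-model vectors $\hat{\varphi}_i(k)$:

```python
import numpy as np
from collections import deque

class AMMISG:
    """Multi-innovation SG estimator for one channel i (sketch)."""

    def __init__(self, n0, p, lam, p0=1e6):
        self.theta = np.ones(n0) / p0    # theta_hat(0) = 1_{n0} / p0
        self.r = 1.0                     # r_i(0) = 1
        self.lam = lam                   # forgetting factor lambda_i
        self.buf = deque(maxlen=p)       # the p most recent (phi, y) pairs

    def update(self, phi, y):
        self.buf.append((phi, y))
        Psi = np.column_stack([f for f, _ in self.buf])  # Eq. (17)
        Y = np.array([v for _, v in self.buf])           # Eq. (16)
        self.r = self.lam * self.r + phi @ phi           # Eq. (15)
        E = Y - Psi.T @ self.theta                       # stacked innovation
        self.theta = self.theta + (Psi @ E) / self.r     # Eq. (14)
        return phi @ self.theta                          # w_hat, Eq. (22)

# Toy driver: noiseless identification of a 3-parameter regression.
rng = np.random.default_rng(1)
theta_true = np.array([0.7, -0.4, 0.2])
est = AMMISG(n0=3, p=5, lam=0.95)
for _ in range(1000):
    phi = rng.standard_normal(3)
    est.update(phi, phi @ theta_true)
```

Note that with $p = 1$ and $\lambda_i = 1$ this reduces exactly to the AM-SG recursion; the $p$-column window is what reuses past innovations at every update.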

3.3. The Main Convergence Result

The main convergence result of the proposed AM-MISG algorithm for periodically non-uniformly sampled-data Hammerstein systems is given in the following theorem.
Theorem 1.
Assume that the noise sequences $v_i(k)$ ($i = 0, 1, 2, \ldots, q-1$) satisfy:
(A1) 
$$\mathrm{E}[v_i(k)] = 0, \quad \mathrm{E}[v_i^2(k)] \le \sigma^2 < \infty, \quad \mathrm{E}[v_i(k)v_i(j)] = 0,\ k \ne j,$$
and there exist a positive constant $\alpha$ and an integer $N$ such that the following persistent excitation condition holds:
(A2) 
$$\sum_{j=0}^{N}\sum_{l=0}^{p-1}\frac{\hat{\varphi}_i(k+j-l)\hat{\varphi}_i^T(k+j-l)}{r_i(k+j)} \ge \alpha I.$$
Then, the parameter estimate $\hat{\theta}_i(k)$ given by the AM-MISG algorithm consistently converges to the true parameter vector $\theta_i$ in the mean-square sense.
Theorem 1 can be proven in a similar way to [42]. Therefore, its detailed proof is omitted here.

4. Simulation Example

Assume that the nonlinear block in Figure 1 is described by:
$$\bar{u}(t) = u(t) + 0.5u^2(t) + 0.25u^3(t),$$
and the linear continuous process P is described by:
$$P(s) = \frac{1}{4s^2 + 2s + 1}.$$
Over a frame period of $T = 2\sqrt{2} + 1$ s, the input-output data are non-uniformly sampled twice (i.e., $q = 2$) at $t_0 = 0$ s and $t_1 = \sqrt{2}$ s. According to Theorem 1 in [35], the following $\delta^{-1}$-based transfer function model is derived:
$$w_0(k) = \frac{1 - 0.61623\delta^{-1} + 0.50931\delta^{-2}}{1 - 0.8086\delta^{-1} + 0.25514\delta^{-2}}\,\bar{u}_0(k),$$
$$w_1(k) = \frac{1 - 0.49522\delta^{-1} + 0.75553\delta^{-2}}{1 - 0.94779\delta^{-1} + 0.57794\delta^{-2}}\,\bar{u}_1(k).$$
Therefore, the parameters of this periodically non-uniformly sampled-data Hammerstein system are:
$$a_0 = [-0.8086, 0.25514]^T, \quad b_0 = [1, -0.61623, 0.50931]^T,$$
$$a_1 = [-0.94779, 0.57794]^T, \quad b_1 = [1, -0.49522, 0.75553]^T, \quad c = [1, 0.5, 0.25]^T.$$
In the simulation, the non-uniform inputs $\{u_0(k)\}$ and $\{u_1(k)\}$ are taken as two uncorrelated random sequences with zero mean and unit variance, and $\{v_0(k)\}$ and $\{v_1(k)\}$ as two white noise sequences with zero mean and variance $\sigma^2 = 0.10^2$. Based on 5000 non-uniform input-output samples, the AM-MISG algorithm in (14)-(25) with $\lambda_0 = \lambda_1 = 0.95$ is applied to estimate the system parameters. The results for $p = 1$, $p = 5$ and $p = 12$ are shown in Table 1, Table 2 and Table 3, respectively, where $\varepsilon$ is the estimation error defined as:
$$\varepsilon = \sqrt{\frac{\|\hat{a}_0 - a_0\|^2 + \|\hat{b}_0 - b_0\|^2 + \|\hat{a}_1 - a_1\|^2 + \|\hat{b}_1 - b_1\|^2 + \|\hat{c} - c\|^2}{\|a_0\|^2 + \|b_0\|^2 + \|a_1\|^2 + \|b_1\|^2 + \|c\|^2}} \times 100\%.$$
Meanwhile, the estimation errors ε versus k are shown in Figure 3.
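The error index $\varepsilon$ can be computed as below; the function name and the block-list interface are illustrative assumptions:

```python
import numpy as np

def estimation_error(est_blocks, true_blocks):
    """Relative parameter estimation error epsilon (in percent):
    sqrt(sum ||e - t||^2 / sum ||t||^2) over the parameter blocks
    a_0, b_0, a_1, b_1, c, times 100."""
    num = sum(float(np.sum((e - t) ** 2)) for e, t in zip(est_blocks, true_blocks))
    den = sum(float(np.sum(t ** 2)) for t in true_blocks)
    return 100.0 * np.sqrt(num / den)
```

A perfect estimate gives 0%, while the all-zero initial estimate gives 100%, which matches the error levels near 100% at $k = 100$ in Table 1.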
From Table 1, Table 2 and Table 3 and Figure 3, we can see that the parameter estimation error gradually decreases as the data length k increases, demonstrating the effectiveness of the proposed AM-MISG algorithm. Furthermore, the AM-MISG algorithm with a larger innovation length p can result in higher identification accuracy and faster convergence to the true parameters.
To study the identification performance of the proposed AM-MISG algorithm against the output noise, 50 Monte Carlo simulations have been conducted for the noise variances $\sigma^2 = 0.10^2$ and $\sigma^2 = 0.50^2$, respectively. In each run, a new dataset of length $L = 5000$ is generated to estimate the model parameters. For the innovation lengths $p = 1$, $p = 5$ and $p = 12$, the mean values and standard deviations of the parameter estimates are listed in Table 4 and Table 5. The results show that the estimation accuracy of the AM-MISG algorithm improves as the noise variance decreases; for a larger noise variance, increasing the innovation length $p$ helps to obtain a satisfactory identification result.
Considering the noise variances $\sigma^2 = 0.10^2$ and $\sigma^2 = 0.50^2$, a separate dataset of length 30 has been generated for model validation in each case. Using the mean values of the parameter estimates listed in Table 4 and Table 5 for $p = 12$ to predict the outputs of the periodically non-uniformly sampled-data Hammerstein system, the results are shown in Figure 4, where the solid line and the x-marks represent the measured outputs and the model-predicted outputs, respectively. It is clear from Figure 4 that the model predictions capture the trend of the measured outputs well, and the prediction performance improves when the noise variance is smaller.

5. Conclusions

Based on non-uniformly sampled input-output data, an auxiliary model-based stochastic gradient (AM-SG) algorithm is developed in this paper to estimate the parameters of Hammerstein systems. To improve its identification performance, an auxiliary model-based multi-innovation stochastic gradient (AM-MISG) algorithm is proposed by introducing an innovation length $p$. The simulation results illustrate that the AM-MISG algorithm with a larger $p$ provides more accurate parameter estimates and a faster convergence rate. In addition, the proposed algorithm can be extended to identify more complex nonlinear systems with non-uniform sampling.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61403166), the Natural Science Foundation of Jiangsu Province, China (BK20140164) and the Fundamental Research Funds for the Central Universities (JUSRP11561, JUSRP51510).

Author Contributions

Li Xie derived the identification algorithm and was in charge of the overall research. Huizhong Yang helped with preparing the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Wang, H.; Yu, J.; Li, P. Selective recursive kernel learning for online identification of nonlinear systems with NARX form. J. Process Control 2010, 20, 181–194. [Google Scholar] [CrossRef]
  2. Tang, Y.; Li, Z.; Guan, X. Identification of nonlinear system using extreme learning machine based Hammerstein model. Commun. Nonlinear Sci. Numer. Simul. 2014, 19, 3171–3183. [Google Scholar] [CrossRef]
  3. Giri, F.; Bai, E.W. Block-Oriented Nonlinear System Identification; Springer: London, UK, 2010. [Google Scholar]
  4. Wills, A.; Schön, T.B.; Ljung, L.; Ninness, B. Identification of Hammerstein-Wiener models. Automatica 2013, 49, 70–81. [Google Scholar] [CrossRef]
  5. Lawrynczuk, M. Nonlinear predictive control for Hammerstein-Wiener systems. ISA Trans. 2014, 55, 49–62. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, Y.Y.; Wang, X.D.; Wang, D.Q. Identification of dual-rate sampled Hammerstein systems with a piecewise-linear nonlinearity using the key variable separation technique. Algorithms 2015, 8, 366–379. [Google Scholar] [CrossRef]
  7. Chen, J.; Wang, X.; Ding, R. Gradient based estimation algorithm for Hammerstein systems with saturation and dead-zone nonlinearities. Appl. Math. Model. 2012, 36, 238–243. [Google Scholar] [CrossRef]
  8. Lv, X.; Ren, X. Non-iterative identification and model following control of Hammerstein systems with asymmetric dead-zone non-linearities. IET Control Theory Appl. 2012, 6, 84–89. [Google Scholar] [CrossRef]
  9. Giri, F.; Rochdi, Y.; Brouri, A.; Chaoui, F.Z. Parameter identification of Hammerstein systems containing backlash operators with arbitrary-shape parametric borders. Automatica 2011, 47, 1827–1833. [Google Scholar] [CrossRef]
  10. Fang, L.; Wang, J.; Zhang, Q. Identification of extended Hammerstein systems with hysteresis-type input nonlinearities described by Preisach model. Nonlinear Dyn. 2015, 79, 1257–1273. [Google Scholar] [CrossRef]
  11. Zhang, Q.; Wang, Q.; Li, G. Nonlinear modeling and predictive functional control of Hammerstein system with application to the turntable servo system. Mech. Syst. Signal Process. 2016, 72, 383–394. [Google Scholar] [CrossRef]
  12. Pouliquen, M.; Pigeon, E.; Gehan, O. Identification scheme for Hammerstein output error models with bounded noise. IEEE Trans. Autom. Control 2016, 61, 550–555. [Google Scholar] [CrossRef]
  13. Ding, F.; Shi, Y.; Chen, T. Auxiliary model-based least-squares identification methods for Hammerstein output-error systems. Syst. Control Lett. 2007, 56, 373–380. [Google Scholar] [CrossRef]
  14. Filipovic, V.Z. Consistency of the robust recursive Hammerstein model identification algorithm. J. Frankl. Inst. 2015, 352, 1932–1945. [Google Scholar] [CrossRef]
  15. Gao, X.; Ren, X.; Zhu, C.; Zhang, C. Identification and control for Hammerstein systems with hysteresis non-linearity. IET Control Theory Appl. 2015, 9, 1935–1947. [Google Scholar] [CrossRef]
  16. Cao, P.; Luo, X. Performance analysis of multi-innovation stochastic Newton recursive algorithms. Digit. Signal Process. 2016, 56, 15–23. [Google Scholar] [CrossRef]
  17. Ding, F. Complexity, convergence and computational efficiency for system identification algorithms. Control Decis. 2016, 31, 1729–1741. [Google Scholar]
  18. Ding, F. Several multi-innovation identification methods. Digit. Signal Process. 2010, 20, 1027–1039. [Google Scholar] [CrossRef]
  19. Shi, Z.; Wang, Y.; Ji, Z. A multi-innovation recursive least squares algorithm with a forgetting factor for Hammerstein CAR systems with backlash. Circuits Syst. Signal Process. 2016, 35, 4271–4289. [Google Scholar] [CrossRef]
  20. Xiao, Y.; Song, G.; Liao, Y.; Ding, R. Multi-innovation stochastic gradient parameter estimation for input nonlinear controlled autoregressive models. Int. J. Control Autom. Syst. 2012, 10, 639–643. [Google Scholar] [CrossRef]
  21. Chen, J.; Zhang, Y.; Ding, R. Gradient-based parameter estimation for input nonlinear systems with ARMA noises based on the auxiliary model. Nonlinear Dyn. 2013, 72, 865–871. [Google Scholar] [CrossRef]
  22. Ma, J.; Xiong, W.; Ding, F.; Alsaedi, A.; Hayat, T. Data filtering based forgetting factor stochastic gradient algorithm for Hammerstein systems with saturation and preload nonlinearities. J. Frankl. Inst. 2016, 353, 4280–4299. [Google Scholar] [CrossRef]
  23. Li, W.; Shah, S.L.; Xiao, D. Kalman filters in non-uniformly sampled multirate systems: For FDI and beyond. Automatica 2008, 44, 199–208. [Google Scholar] [CrossRef]
  24. Albertos, P.; Salt, J. Non-uniform sampled-data control of MIMO systems. Ann. Rev. Control 2011, 35, 65–76. [Google Scholar] [CrossRef]
  25. Ding, F.; Wang, F.F. Recursive least squares identification algorithms for linear-in-parameter systems with missing data. Control Decis. 2016, 31, 2261–2266. [Google Scholar]
  26. Yang, H.; Xia, Y.; Shi, P. Stabilization of networked control systems with nonuniform random sampling periods. Int. J. Robust Nonlinear Control 2011, 21, 501–526. [Google Scholar] [CrossRef]
  27. Aibing, Q.; Bin, J.; Chenglin, W.; Zehui, M. Fault estimation and accommodation for networked control systems with nonuniform sampling periods. Int. J. Adapt. Control Signal Process. 2015, 29, 427–442. [Google Scholar] [CrossRef]
  28. Sheng, J.; Chen, T.; Shah, S.L. Generalized predictive control for non-uniformly sampled systems. J. Process Control 2002, 12, 875–885. [Google Scholar] [CrossRef]
  29. Khan, S.; Goodall, R.; Dixon, R. Non-uniform sampling strategies for digital control. Int. J. Syst. Sci. 2013, 44, 2234–2254. [Google Scholar] [CrossRef]
  30. Jing, S.; Pan, T.; Li, Z. Recursive bayesian algorithm with covariance resetting for identification of Box-Jenkins systems with non-uniformly sampled input data. Circuits Syst. Signal Process. 2016, 35, 919–932. [Google Scholar] [CrossRef]
  31. Xie, L.; Yang, H.; Ding, F. Identification of non-uniformly sampled-data systems with asynchronous input and output data. J. Frankl. Inst. 2017, 354, 1974–1991. [Google Scholar] [CrossRef]
  32. Li, X.; Ding, R.; Zhou, L. Least-squares-based iterative identification algorithm for Hammerstein nonlinear systems with non-uniform sampling. Int. J. Comput. Math. 2013, 90, 1524–1534. [Google Scholar] [CrossRef]
  33. Salt, J.; Albertos, P. Model-based multirate controllers design. IEEE Trans. Control Syst. Technol. 2005, 13, 988–997. [Google Scholar] [CrossRef]
  34. Huang, J.; Shi, Y.; Huang, H.; Li, Z. l2-l filtering for multirate nonlinear sampled-data systems using T-S fuzzy models. Digit. Signal Process. 2013, 23, 418–426. [Google Scholar] [CrossRef]
  35. Xie, L.; Yang, H.; Ding, F.; Huang, B. Novel model of non-uniformly sampled-data systems based on a time-varying backward shift operator. J. Process Control 2016, 43, 38–52. [Google Scholar] [CrossRef]
  36. Chang, F.; Luus, R. A noniterative method for identification using Hammerstein model. IEEE Trans. Autom. Control 1971, 16, 464–468. [Google Scholar] [CrossRef]
  37. Shen, Q.; Ding, F. Least squares identification for Hammerstein multi-input multi-output systems based on the key-term separation technique. Circuits Syst. Signal Process. 2016, 35, 1–14. [Google Scholar] [CrossRef]
  38. Zhang, Y.; Zhao, Z.; Cui, G. Auxiliary model method for transfer function estimation from noisy input and output data. Appl. Math. Model. 2014, 39, 4257–4265. [Google Scholar] [CrossRef]
  39. Jin, Q.; Wang, Z.; Liu, X. Auxiliary model-based interval-varying multi-innovation least squares identification for multivariable OE-like systems with scarce measurements. J. Process Control 2015, 35, 154–168. [Google Scholar] [CrossRef]
  40. Ding, J. Data filtering based recursive and iterative least squares algorithms for parameter estimation of multi-input output systems. Algorithms 2016, 9, 49. [Google Scholar] [CrossRef]
  41. Bai, E.W. An optimal two-stage identification algorithm for Hammerstein-Wiener nonlinear systems. Automatica 1998, 34, 333–338. [Google Scholar] [CrossRef]
  42. Wang, X.; Ding, F. Convergence of the auxiliary model-based multi-innovation generalized extended stochastic gradient algorithm for Box-Jenkins systems. Nonlinear Dyn. 2015, 82, 269–280. [Google Scholar] [CrossRef]
Figure 1. Periodically non-uniformly sampled-data Hammerstein system.
Figure 2. The flowchart of computing the parameter estimate.
Figure 3. The AM-MISG estimation errors ε versus k with p = 1 , 5, 12.
Figure 4. The predicted outputs and the measured outputs. (a) For the noise variance σ 2 = 0.10 2 . (b) For the noise variance σ 2 = 0.50 2 .
Table 1. The auxiliary model-based multi-innovation stochastic gradient (AM-MISG) parameter estimates and errors (p = 1).

| k | 100 | 200 | 500 | 1000 | 2000 | 3000 | 4000 | 5000 | True Values |
|---|-----|-----|-----|------|------|------|------|------|-------------|
| a01 | −0.23798 | −0.23467 | −0.24155 | −0.23676 | −0.32003 | −0.37711 | −0.42344 | −0.46120 | −0.80860 |
| a02 | −0.17253 | −0.15925 | −0.14095 | −0.10136 | −0.07608 | −0.04580 | −0.00265 | 0.02211 | 0.25514 |
| b00 | 0.21247 | 0.28325 | 0.36820 | 0.43435 | 0.54777 | 0.63347 | 0.70653 | 0.76997 | 1.00000 |
| b01 | 0.03019 | 0.01933 | 0.01196 | 0.00916 | 0.00769 | −0.01410 | −0.04420 | −0.08780 | −0.61623 |
| b02 | 0.08793 | 0.10742 | 0.14003 | 0.18442 | 0.23272 | 0.28553 | 0.31263 | 0.32151 | 0.50931 |
| a11 | −0.14144 | −0.17751 | −0.20109 | −0.25406 | −0.32583 | −0.39976 | −0.44355 | −0.49299 | −0.94779 |
| a12 | −0.26469 | −0.24184 | −0.17359 | −0.12188 | −0.07576 | −0.02374 | 0.04034 | 0.07391 | 0.57794 |
| b10 | 0.24088 | 0.32232 | 0.39463 | 0.45898 | 0.57929 | 0.66877 | 0.73108 | 0.77370 | 1.00000 |
| b11 | 0.02779 | 0.05257 | 0.07272 | 0.09047 | 0.11098 | 0.11615 | 0.10811 | 0.09453 | −0.49522 |
| b12 | 0.04644 | 0.04343 | 0.04713 | 0.07480 | 0.12538 | 0.16100 | 0.18916 | 0.22188 | 0.75553 |
| c2 | 1.05391 | 0.96668 | 0.98187 | 0.93895 | 0.80087 | 0.72297 | 0.66712 | 0.63075 | 0.50000 |
| c3 | 1.79633 | 1.68065 | 1.44046 | 1.16009 | 0.82335 | 0.65456 | 0.52589 | 0.45922 | 0.25000 |
| ε (%) | 103.04890 | 97.41924 | 89.40996 | 80.57154 | 68.94369 | 61.44114 | 55.52206 | 51.00555 | |
Table 2. The AM-MISG parameter estimates and errors (p = 5).

| k | 100 | 200 | 500 | 1000 | 2000 | 3000 | 4000 | 5000 | True Values |
|---|-----|-----|-----|------|------|------|------|------|-------------|
| a01 | −0.17179 | −0.18574 | −0.31748 | −0.41405 | −0.61777 | −0.71905 | −0.76959 | −0.80135 | −0.80860 |
| a02 | −0.17111 | −0.11012 | −0.05496 | 0.01077 | 0.12643 | 0.17775 | 0.21975 | 0.24397 | 0.25514 |
| b00 | 0.37344 | 0.45616 | 0.62647 | 0.76016 | 0.91687 | 0.95693 | 0.98483 | 0.99692 | 1.00000 |
| b01 | 0.06673 | 0.04317 | 0.02695 | −0.05431 | −0.24467 | −0.39466 | −0.50874 | −0.56748 | −0.61623 |
| b02 | 0.18052 | 0.22395 | 0.27192 | 0.33856 | 0.35650 | 0.39392 | 0.41745 | 0.43668 | 0.50931 |
| a11 | −0.34108 | −0.41100 | −0.47313 | −0.56512 | −0.68070 | −0.77319 | −0.82088 | −0.85964 | −0.94779 |
| a12 | −0.01887 | −0.03044 | 0.07773 | 0.16053 | 0.30682 | 0.37436 | 0.44459 | 0.47782 | 0.57794 |
| b10 | 0.38009 | 0.49836 | 0.59120 | 0.75693 | 0.92872 | 0.97376 | 0.98098 | 0.98525 | 1.00000 |
| b11 | 0.02215 | 0.04295 | 0.06078 | 0.02968 | −0.07356 | −0.20574 | −0.29673 | −0.36568 | −0.49522 |
| b12 | 0.10985 | 0.13310 | 0.18300 | 0.25133 | 0.33733 | 0.43426 | 0.52067 | 0.59753 | 0.75553 |
| c2 | 0.97825 | 0.90595 | 0.80559 | 0.63455 | 0.51789 | 0.50729 | 0.50178 | 0.50388 | 0.50000 |
| c3 | 1.35667 | 1.13733 | 0.73326 | 0.47388 | 0.31951 | 0.27787 | 0.26181 | 0.25438 | 0.25000 |
| ε (%) | 84.52995 | 75.91772 | 61.64633 | 48.73054 | 32.94076 | 22.45332 | 15.03823 | 10.03271 | |
Table 3. The AM-MISG parameter estimates and errors (p = 12).

| k | 100 | 200 | 500 | 1000 | 2000 | 3000 | 4000 | 5000 | True Values |
|---|-----|-----|-----|------|------|------|------|------|-------------|
| a01 | −0.21097 | −0.29207 | −0.48337 | −0.62718 | −0.78913 | −0.82242 | −0.82167 | −0.81394 | −0.80860 |
| a02 | −0.09616 | −0.00999 | 0.04310 | 0.14720 | 0.23542 | 0.25631 | 0.25755 | 0.25552 | 0.25514 |
| b00 | 0.46831 | 0.60206 | 0.82476 | 0.92527 | 0.99463 | 0.99562 | 0.99966 | 1.00088 | 1.00000 |
| b01 | 0.05982 | 0.04225 | −0.06860 | −0.27395 | −0.53707 | −0.61472 | −0.62645 | −0.62093 | −0.61623 |
| b02 | 0.27987 | 0.32848 | 0.34589 | 0.38704 | 0.42283 | 0.47864 | 0.49652 | 0.50327 | 0.50931 |
| a11 | −0.42997 | −0.51235 | −0.60352 | −0.72537 | −0.84006 | −0.90571 | −0.93254 | −0.94364 | −0.94779 |
| a12 | −0.00011 | 0.04270 | 0.21788 | 0.32975 | 0.47520 | 0.53405 | 0.56383 | 0.57217 | 0.57794 |
| b10 | 0.44256 | 0.61696 | 0.76526 | 0.92836 | 1.00028 | 0.99858 | 0.99669 | 1.00004 | 1.00000 |
| b11 | 0.01971 | 0.05871 | 0.01162 | −0.14602 | −0.33453 | −0.42806 | −0.46968 | −0.48801 | −0.49522 |
| b12 | 0.11625 | 0.16567 | 0.24897 | 0.36557 | 0.55820 | 0.68047 | 0.73284 | 0.75125 | 0.75553 |
| c2 | 0.88043 | 0.76552 | 0.64129 | 0.52932 | 0.49418 | 0.49855 | 0.50087 | 0.49915 | 0.50000 |
| c3 | 1.04485 | 0.79819 | 0.45534 | 0.30318 | 0.25542 | 0.25328 | 0.25163 | 0.25057 | 0.25000 |
| ε (%) | 73.56646 | 62.33004 | 45.43354 | 29.28325 | 12.29280 | 4.72652 | 1.75023 | 0.55900 | |
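The error metric ε tabulated above is presumably the relative parameter estimation error ε = ‖θ̂(k) − θ‖/‖θ‖ × 100% (this definition is an assumption; the paper's own formula is not reproduced in this excerpt). A minimal sketch recomputing it from the k = 5000 column of Table 3 — the tabulated entries are rounded, so the result only approximates the reported 0.55900%:

```python
import math

# True parameter vector theta (the "True Values" column of Table 3).
theta = [-0.80860, 0.25514, 1.00000, -0.61623, 0.50931,
         -0.94779, 0.57794, 1.00000, -0.49522, 0.75553,
         0.50000, 0.25000]

# AM-MISG estimates at k = 5000 with p = 12 (last data column of Table 3).
theta_hat = [-0.81394, 0.25552, 1.00088, -0.62093, 0.50327,
             -0.94364, 0.57217, 1.00004, -0.48801, 0.75125,
             0.49915, 0.25057]

# Relative estimation error in percent: ||theta_hat - theta|| / ||theta|| * 100.
num = math.sqrt(sum((h - t) ** 2 for h, t in zip(theta_hat, theta)))
den = math.sqrt(sum(t ** 2 for t in theta))
eps = num / den * 100
print(f"epsilon = {eps:.3f}%")  # well under 1%, near the tabulated 0.55900%
```

The small discrepancy against the printed 0.55900% comes from the five-decimal rounding of the tabulated estimates.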
Table 4. The AM-MISG parameter estimates with standard deviations (σ² = 0.10²).

| Parameters | p = 1 | p = 5 | p = 12 | True Values |
|------------|-------|-------|--------|-------------|
| a01 | −0.4618 ± 0.0241 | −0.7999 ± 0.0067 | −0.8156 ± 0.0016 | −0.80860 |
| a02 | 0.0190 ± 0.0145 | 0.2390 ± 0.0050 | 0.2575 ± 0.0012 | 0.25514 |
| b00 | 0.7923 ± 0.0148 | 0.9979 ± 0.0038 | 1.0001 ± 0.0010 | 1.00000 |
| b01 | −0.0783 ± 0.0165 | −0.5640 ± 0.0081 | −0.6243 ± 0.0020 | −0.61623 |
| b02 | 0.3478 ± 0.0153 | 0.4419 ± 0.0054 | 0.5030 ± 0.0019 | 0.50931 |
| a11 | −0.5010 ± 0.0187 | −0.8457 ± 0.0105 | −0.9396 ± 0.0024 | −0.94779 |
| a12 | 0.0944 ± 0.0165 | 0.4700 ± 0.0099 | 0.5707 ± 0.0021 | 0.57794 |
| b10 | 0.7879 ± 0.0164 | 0.9951 ± 0.0049 | 0.9999 ± 0.0014 | 1.00000 |
| b11 | 0.1057 ± 0.0137 | −0.3463 ± 0.0136 | −0.4830 ± 0.0029 | −0.49522 |
| b12 | 0.2357 ± 0.0127 | 0.5779 ± 0.0138 | 0.7465 ± 0.0033 | 0.75553 |
| c2 | 0.6136 ± 0.0106 | 0.4982 ± 0.0035 | 0.4998 ± 0.0010 | 0.50000 |
| c3 | 0.4436 ± 0.0153 | 0.2516 ± 0.0033 | 0.2499 ± 0.0009 | 0.25000 |
Table 5. The AM-MISG parameter estimates with standard deviations (σ² = 0.50²).

| Parameters | p = 1 | p = 5 | p = 12 | True Values |
|------------|-------|-------|--------|-------------|
| a01 | −0.4613 ± 0.0261 | −0.7870 ± 0.0217 | −0.7921 ± 0.0249 | −0.80860 |
| a02 | 0.0186 ± 0.0168 | 0.2275 ± 0.0173 | 0.2376 ± 0.0268 | 0.25514 |
| b00 | 0.7931 ± 0.0161 | 1.0017 ± 0.0170 | 1.0071 ± 0.0231 | 1.00000 |
| b01 | −0.0777 ± 0.0191 | −0.5491 ± 0.0217 | −0.5954 ± 0.0350 | −0.61623 |
| b02 | 0.3489 ± 0.0178 | 0.4452 ± 0.0203 | 0.5015 ± 0.0307 | 0.50931 |
| a11 | −0.5009 ± 0.0225 | −0.8320 ± 0.0203 | −0.9036 ± 0.0269 | −0.94779 |
| a12 | 0.0951 ± 0.0195 | 0.4552 ± 0.0203 | 0.5323 ± 0.0260 | 0.57794 |
| b10 | 0.7872 ± 0.0197 | 0.9938 ± 0.0203 | 1.0007 ± 0.0295 | 1.00000 |
| b11 | 0.1037 ± 0.0168 | −0.3375 ± 0.0244 | −0.4518 ± 0.0327 | −0.49522 |
| b12 | 0.2375 ± 0.0171 | 0.5744 ± 0.0256 | 0.7177 ± 0.0312 | 0.75553 |
| c2 | 0.6130 ± 0.0121 | 0.4961 ± 0.0141 | 0.4949 ± 0.0207 | 0.50000 |
| c3 | 0.4446 ± 0.0185 | 0.2510 ± 0.0138 | 0.2462 ± 0.0166 | 0.25000 |
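Tables 4 and 5 summarize repeated runs as "mean ± standard deviation" per parameter. A minimal sketch of that aggregation, with hypothetical run values for a single parameter (the paper's Monte Carlo setup is assumed, not reproduced):

```python
from statistics import mean, stdev

# Hypothetical estimates of one parameter (true value 0.50) obtained from
# four independent noise realizations of the same identification experiment.
runs = [0.512, 0.498, 0.505, 0.489]

# Each table entry is "mean +/- sample standard deviation" over the runs.
m = mean(runs)
s = stdev(runs)  # sample std (n - 1 in the denominator)
print(f"{m:.4f} \u00b1 {s:.4f}")  # prints 0.5010 ± 0.0098
```

Whether the paper uses the sample (n − 1) or population (n) standard deviation is not stated in this excerpt; the sketch uses the sample version, which is the common convention for Monte Carlo summaries.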

Xie, L.; Yang, H. Auxiliary Model Based Multi-Innovation Stochastic Gradient Identification Algorithm for Periodically Non-Uniformly Sampled-Data Hammerstein Systems. Algorithms 2017, 10, 84. https://doi.org/10.3390/a10030084