Article

Auxiliary Model-Based Multi-Innovation Fractional Stochastic Gradient Algorithm for Hammerstein Output-Error Systems

Chen Xu and Yawen Mao
1 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, China
2 School of Science, Jiangnan University, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Machines 2021, 9(11), 247; https://doi.org/10.3390/machines9110247
Submission received: 29 August 2021 / Revised: 14 October 2021 / Accepted: 21 October 2021 / Published: 23 October 2021
(This article belongs to the Special Issue Deep Learning-Based Machinery Fault Diagnostics)

Abstract

This paper focuses on the nonlinear system identification problem, which is a basic premise of control and fault diagnosis. For Hammerstein output-error nonlinear systems, we propose an auxiliary model-based multi-innovation fractional stochastic gradient method. Based on the multi-innovation identification theory, the scalar innovation is extended to an innovation vector to increase the data use. By establishing appropriate auxiliary models, the unknown variables are estimated, and an improvement in parameter estimation performance is achieved owing to the fractional-order calculus theory. Simulation results validate that the proposed method obtains better estimation accuracy than the conventional multi-innovation stochastic gradient algorithm.

1. Introduction

The accuracy of a system model affects the performance and safety of industrial control systems [1,2,3,4,5], and system identification provides the theory and methods for constructing mathematical models of systems; it has been widely implemented in practice [6,7,8,9]. The behavior of most modern industrial control systems and synthetic systems is nonlinear by nature. Parameter identification for nonlinear systems is currently an important research field in modern signal processing, in which block-structured systems, such as the Hammerstein model, are among the most commonly used nonlinear models owing to their efficiency and accuracy in modeling complex nonlinear systems [10,11,12]. The representative feature of a Hammerstein model is that its architecture consists of two blocks: a static nonlinear model followed by a linear dynamic model. This structural simplicity provides a good compromise between the accuracy of nonlinear systems and the tractability of linear systems, which promotes its use in different nonlinear applications such as automatic control [13,14,15], fault detection and diagnosis [16,17,18], and so on.
Recently, several new system identification methods and theories have been developed for nonlinear models in the literature, including the least squares methods [19], the gradient-based methods [20], the iterative methods [21], the subspace identification methods [22], the hierarchical identification theory [23], and the auxiliary model and multi-innovation (MI) identification theories [24]. One well-known algorithm is the stochastic gradient (SG) algorithm, which has lower computational cost and complexity than the recursive least squares algorithm, although slow convergence is often observed. Therefore, different modifications of the SG algorithm have been developed to enhance its performance [25,26,27,28,29,30]. In particular, by extending the scalar innovation into an innovation vector, the MI identification theory was proposed in [31] to improve the convergence speed and estimation accuracy, and the fractional-order calculus method was introduced in [32,33], where it was shown to achieve more satisfactory performance.
Various fractional-order gradient methods have been proposed [34,35,36]. For example, in [37], a fractional-order SG algorithm was designed to identify Hammerstein nonlinear ARMAX systems by an improved fractional-order gradient method. Based on the MI theory and fractional-order calculus, an MI fractional least mean squares identification algorithm was presented for Hammerstein controlled autoregressive systems, where the update mechanism was composed of the first-order gradient and the fractional gradient [38]. However, the above-discussed papers only consider Hammerstein equation-error systems, and the cross-products between the parameters in the linear block and the nonlinear block can lead to many redundant parameters. When the dimensions of the parameter vectors are large, this causes high computational complexity and deteriorates the identification accuracy.
In this work, we study the identification problem of Hammerstein output-error moving average (OEMA) systems, which have been studied less because of the difficulty of their identification [39,40]. To avoid estimating redundant parameters, the Hammerstein model is parameterized using the key-term separation principle [41]. Furthermore, based on the identification model, the fractional-order SG algorithm is extended to the identification of Hammerstein OEMA systems, and an auxiliary model-based multi-innovation fractional stochastic gradient (AM-MIFSG) algorithm is presented based on the auxiliary model identification idea. The proposed algorithm can achieve higher estimation accuracy than the common multi-innovation stochastic gradient (MISG) algorithm, with fewer parameters to be estimated.
The paper is structured as follows. Section 2 gives a description of the Hammerstein OEMA system. Section 3 introduces the multi-innovation identification theory and derives an auxiliary model-based multi-innovation stochastic gradient (AM-MISG) identification algorithm for comparison purposes. Section 4 presents the AM-MIFSG identification algorithm for the Hammerstein OEMA systems. Section 5 gives the convergence analysis of the proposed AM-MIFSG algorithm. Section 6 verifies the results in this paper using a simulation example. Finally, concluding remarks are given in Section 7.

2. The System Description

Consider the Hammerstein OEMA systems shown in Figure 1,
$$y_k = \frac{B(z)}{A(z)}\,\bar{u}_k + D(z)\,v_k, \tag{1}$$
$$\bar{u}_k = c_1 f_1(u_k) + c_2 f_2(u_k) + \cdots + c_m f_m(u_k), \tag{2}$$
where $\{u_k\}$ and $\{y_k\}$ are the input and output sequences of the system, $\{\bar{u}_k\}$ is the output sequence of the nonlinear block, which can be represented as a linear combination of a known basis $f(u_k) := [f_1(u_k), f_2(u_k), \ldots, f_m(u_k)]$ with unknown coefficients $c_i$ $(i = 1, 2, \ldots, m)$, $\{v_k\}$ is a stochastic white noise sequence with zero mean and variance $\sigma^2$, and $A(z)$, $B(z)$ and $D(z)$ are polynomials in the unit backward shift operator $z^{-1}$ $[z^{-1}y_k = y_{k-1}]$, defined as
$$A(z) := 1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_{n_a} z^{-n_a}, \quad B(z) := 1 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_{n_b} z^{-n_b}, \quad D(z) := 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_{n_d} z^{-n_d}.$$
Assume that the orders $n_a$, $n_b$ and $n_d$ of these polynomials are known and that $u_k = 0$, $y_k = 0$ and $v_k = 0$ for $k \leq 0$.
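To make the model structure concrete, the following Python sketch (not part of the original paper) simulates a Hammerstein OEMA system of the form (1)–(2) for a polynomial basis; the function name and the zero initial conditions are illustrative assumptions, and the numerical coefficients in the usage example are those of the simulation example in Section 6.

```python
import numpy as np

def simulate_hammerstein_oema(u, a, b, c, d, basis, sigma, seed=0):
    """Simulate y_k = B(z)/A(z)*ubar_k + D(z)*v_k with ubar_k = sum_i c_i f_i(u_k).

    a, b, c, d hold the coefficients of A(z), B(z), the nonlinearity and D(z);
    basis is the list of functions f_i; all signals are taken as zero for k <= 0.
    """
    rng = np.random.default_rng(seed)
    N = len(u)
    ubar = np.array([sum(ci * fi(uk) for ci, fi in zip(c, basis)) for uk in u])
    v = sigma * rng.standard_normal(N)          # white noise v_k
    x = np.zeros(N)                             # noise-free output x_k
    y = np.zeros(N)
    for k in range(N):
        # x_k = ubar_k - sum_i a_i x_{k-i} + sum_i b_i ubar_{k-i}  (key-term form)
        x[k] = ubar[k]
        x[k] -= sum(a[i] * x[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0)
        x[k] += sum(b[i] * ubar[k - 1 - i] for i in range(len(b)) if k - 1 - i >= 0)
        # w_k = v_k + sum_i d_i v_{k-i}
        w = v[k] + sum(d[i] * v[k - 1 - i] for i in range(len(d)) if k - 1 - i >= 0)
        y[k] = x[k] + w
    return y, x, ubar, v

# Example use with a cubic polynomial basis (coefficients from the example in Section 6):
u = np.random.default_rng(1).standard_normal(100)
basis = [lambda s: s, lambda s: s**2, lambda s: s**3]
y, x, ubar, v = simulate_hammerstein_oema(
    u, a=[0.45, 0.56], b=[0.25, -0.35], c=[0.52, 0.54, 0.82], d=[-0.54],
    basis=basis, sigma=0.1)
```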
Define the intermediate variables $x_k$ and $w_k$ as follows:
$$x_k := \frac{B(z)}{A(z)}\,\bar{u}_k = [1 - A(z)]\,x_k + B(z)\,\bar{u}_k = \bar{u}_k - \sum_{i=1}^{n_a} a_i x_{k-i} + \sum_{i=1}^{n_b} b_i \bar{u}_{k-i}, \tag{3}$$
$$w_k := D(z)\,v_k = \sum_{i=1}^{n_d} d_i v_{k-i} + v_k. \tag{4}$$
Take the first variable $\bar{u}_k$ on the right-hand side of (3) as a separated key term. Based on the principle of key-term separation [42,43], substituting $\bar{u}_k$ in (2) into (3) gives
$$x_k = \sum_{i=1}^{m} c_i f_i(u_k) - \sum_{i=1}^{n_a} a_i x_{k-i} + \sum_{i=1}^{n_b} b_i \bar{u}_{k-i}. \tag{5}$$
Define the following related parameter vectors:
$$\theta := \begin{bmatrix}\theta_s \\ d\end{bmatrix} \in \mathbb{R}^{n}, \quad n := n_a + n_b + n_d + m, \quad \theta_s := [a^{\mathrm T}, b^{\mathrm T}, c^{\mathrm T}]^{\mathrm T} \in \mathbb{R}^{n_a + n_b + m},$$
$$a := [a_1, a_2, \ldots, a_{n_a}]^{\mathrm T} \in \mathbb{R}^{n_a}, \quad b := [b_1, b_2, \ldots, b_{n_b}]^{\mathrm T} \in \mathbb{R}^{n_b}, \quad c := [c_1, c_2, \ldots, c_m]^{\mathrm T} \in \mathbb{R}^{m}, \quad d := [d_1, d_2, \ldots, d_{n_d}]^{\mathrm T} \in \mathbb{R}^{n_d},$$
and the information vectors:
$$\varphi_k := \begin{bmatrix}\varphi_{s,k} \\ \varphi_{n,k}\end{bmatrix} \in \mathbb{R}^{n}, \quad \varphi_{s,k} := [-x_{k-1}, -x_{k-2}, \ldots, -x_{k-n_a}, \bar{u}_{k-1}, \bar{u}_{k-2}, \ldots, \bar{u}_{k-n_b}, f(u_k)]^{\mathrm T} \in \mathbb{R}^{n_a + n_b + m}, \quad \varphi_{n,k} := [v_{k-1}, v_{k-2}, \ldots, v_{k-n_d}]^{\mathrm T} \in \mathbb{R}^{n_d}.$$
From (1)–(5), we have
$$y_k = x_k + w_k = \varphi_{s,k}^{\mathrm T}\theta_s + \varphi_{n,k}^{\mathrm T} d + v_k = \varphi_k^{\mathrm T}\theta + v_k. \tag{6}$$
Equation (6) is the identification model of the Hammerstein OEMA system. Please note that the parameter vector θ contains all the parameters of the system in (1)–(2), and the parameters in the linear and nonlinear blocks are separated. This means there is no need to identify redundant parameters. This paper aims to present an AM-MIFSG algorithm for Hammerstein OEMA systems to improve the parameter estimation accuracy.
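As an illustration of how the identification model (6) is assembled, the following sketch (not from the paper; the helper name build_phi is hypothetical) stacks the information vector from the internal signals and the basis values, following the output-error convention used above.

```python
import numpy as np

def build_phi(x, ubar, v, f_u_k, k, na, nb, nd):
    """phi_k = [-x_{k-1..k-na}, ubar_{k-1..k-nb}, f(u_k), v_{k-1..k-nd}].

    x, ubar, v are 1-D arrays of the internal signals; entries before time 0 count as zero.
    f_u_k is the basis vector f(u_k) evaluated at the current input.
    """
    def past(sig, i):
        return sig[k - i] if k - i >= 0 else 0.0
    phi_s = [-past(x, i) for i in range(1, na + 1)] \
          + [past(ubar, i) for i in range(1, nb + 1)] + list(f_u_k)
    phi_n = [past(v, i) for i in range(1, nd + 1)]
    return np.array(phi_s + phi_n)

# With theta = [a; b; c; d] stacked in the same order, y_k = phi_k @ theta + v_k as in (6).
```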

3. The AM-MISG Algorithm

In this section, we introduce the auxiliary model and multi-innovation identification theories briefly, and derive the AM-MISG algorithm for the Hammerstein OEMA system.
Let $\hat{\theta}_k$ denote the estimate of $\theta$. Based on the negative-gradient search principle, define and minimize the cost function
$$J(\theta) := \frac{1}{2}\sum_{j=1}^{k}\big[y_j - \varphi_j^{\mathrm T}\theta\big]^2;$$
then the following SG algorithm can be obtained for estimating the parameter vector $\theta$:
$$\hat{\theta}_k = \hat{\theta}_{k-1} - \mu_1 \frac{\partial J(\theta)}{\partial \theta} = \hat{\theta}_{k-1} + \frac{\varphi_k}{s_k}\, e_k, \tag{7}$$
$$e_k = y_k - \varphi_k^{\mathrm T}\hat{\theta}_{k-1}, \tag{8}$$
$$s_k = s_{k-1} + \|\varphi_k\|^2, \tag{9}$$
where $\mu_1$ is the step size of the SG algorithm, which is taken as $\mu_1 = 1/s_k$, and $s_0 = 1$.
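For reference, a minimal sketch of one SG step (7)–(9); it assumes the information vector is available, which, as noted next, is not the case in practice.

```python
import numpy as np

def sg_step(theta_prev, s_prev, phi_k, y_k):
    """One SG step (7)-(9) with step size mu_1 = 1/s_k."""
    e_k = y_k - phi_k @ theta_prev             # innovation (8)
    s_k = s_prev + phi_k @ phi_k               # s_k = s_{k-1} + ||phi_k||^2  (9)
    theta_k = theta_prev + phi_k * e_k / s_k   # update (7)
    return theta_k, s_k
```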
However, it is worth noting that the variables $x_{k-i}$, $\bar{u}_{k-i}$ and $v_{k-i}$ in $\varphi_k$ are unknown, and thus the algorithm in (7)–(9) cannot be implemented directly. The solution is to use the auxiliary model idea and build the following auxiliary models based on the parameter estimate $\hat{\theta}_k$:
$$\hat{x}_k = \hat{\varphi}_{s,k}^{\mathrm T}\hat{\theta}_{s,k}, \quad \hat{\bar{u}}_k = \hat{c}_{1,k} f_1(u_k) + \hat{c}_{2,k} f_2(u_k) + \cdots + \hat{c}_{m,k} f_m(u_k), \quad \hat{v}_k = y_k - \hat{\varphi}_k^{\mathrm T}\hat{\theta}_k,$$
and to use the outputs $\hat{x}_{k-i}$, $\hat{\bar{u}}_{k-i}$ and $\hat{v}_{k-i}$ of the auxiliary models instead of the unknown variables $x_{k-i}$, $\bar{u}_{k-i}$ and $v_{k-i}$ to construct the estimates of the information vectors:
$$\hat{\varphi}_k = \begin{bmatrix}\hat{\varphi}_{s,k} \\ \hat{\varphi}_{n,k}\end{bmatrix}, \quad \hat{\varphi}_{s,k} = [-\hat{x}_{k-1}, -\hat{x}_{k-2}, \ldots, -\hat{x}_{k-n_a}, \hat{\bar{u}}_{k-1}, \hat{\bar{u}}_{k-2}, \ldots, \hat{\bar{u}}_{k-n_b}, f(u_k)]^{\mathrm T}, \quad \hat{\varphi}_{n,k} = [\hat{v}_{k-1}, \hat{v}_{k-2}, \ldots, \hat{v}_{k-n_d}]^{\mathrm T}.$$
The SG algorithm updates the parameter estimate using only the current data information, so its computational complexity is low, but its estimation accuracy needs to be improved. Based on the multi-innovation identification theory [44,45], a sliding window of length $p$ (i.e., the innovation length) is built to improve the estimation performance of the SG algorithm, which contains the data information from the current time $k$ back to $k-p+1$, i.e.,
$$E_{p,k} = \big[\,y_k - \hat{\varphi}_k^{\mathrm T}\hat{\theta}_{k-1},\; y_{k-1} - \hat{\varphi}_{k-1}^{\mathrm T}\hat{\theta}_{k-2},\; \ldots,\; y_{k-p+1} - \hat{\varphi}_{k-p+1}^{\mathrm T}\hat{\theta}_{k-p}\,\big]^{\mathrm T}. \tag{10}$$
Define the stacked output vector $Y_{p,k}$ and the information matrix $\hat{\Phi}_{p,k}$ as
$$Y_{p,k} := [y_k, y_{k-1}, \ldots, y_{k-p+1}]^{\mathrm T} \in \mathbb{R}^{p}, \quad \hat{\Phi}_{p,k} := [\hat{\varphi}_k, \hat{\varphi}_{k-1}, \ldots, \hat{\varphi}_{k-p+1}] \in \mathbb{R}^{n \times p}.$$
Since, in principle, the estimate $\hat{\theta}_{k-1}$ is closer to the optimal value $\theta$ than $\hat{\theta}_{k-i}$ for $i = 2, \ldots, p$, Equation (10) can be approximated by
$$E_{p,k} = Y_{p,k} - \hat{\Phi}_{p,k}^{\mathrm T}\hat{\theta}_{k-1}.$$
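A small helper sketch (illustrative, not from the paper) that forms the stacked output vector, the information matrix and the innovation vector from the last p samples; zero padding at the start of the data is an implementation assumption.

```python
import numpy as np

def stack_window(theta_prev, y_hist, phi_hist, p):
    """Form Y_{p,k}, Phi_{p,k} and E_{p,k} from the last p samples.

    y_hist and phi_hist are chronological lists ending at the current time k;
    missing history at the start is padded with zeros.
    """
    n = len(phi_hist[-1])
    Y_p = np.array([y_hist[-1 - i] if i < len(y_hist) else 0.0 for i in range(p)])
    Phi_p = np.column_stack([phi_hist[-1 - i] if i < len(phi_hist) else np.zeros(n)
                             for i in range(p)])          # n x p information matrix
    E_p = Y_p - Phi_p.T @ theta_prev                      # multi-innovation vector
    return Y_p, Phi_p, E_p
```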
In summary, we can obtain the AM-MISG algorithm as follows:
$$\hat{\theta}_k = \hat{\theta}_{k-1} + \frac{\hat{\Phi}_{p,k}}{s_k}\, E_{p,k},$$
$$E_{p,k} = Y_{p,k} - \hat{\Phi}_{p,k}^{\mathrm T}\hat{\theta}_{k-1},$$
$$s_k = s_{k-1} + \|\hat{\varphi}_k\|^2, \quad s_0 = 1,$$
$$Y_{p,k} = [y_k, y_{k-1}, \ldots, y_{k-p+1}]^{\mathrm T},$$
$$\hat{\Phi}_{p,k} = [\hat{\varphi}_k, \hat{\varphi}_{k-1}, \ldots, \hat{\varphi}_{k-p+1}],$$
$$\hat{\bar{u}}_k = f(u_k)\,\hat{c}_k,$$
$$\hat{x}_k = \hat{\varphi}_{s,k}^{\mathrm T}\hat{\theta}_{s,k},$$
$$\hat{v}_k = y_k - \hat{\varphi}_k^{\mathrm T}\hat{\theta}_k,$$
$$f(u_k) = [f_1(u_k), f_2(u_k), \ldots, f_m(u_k)],$$
$$\hat{\varphi}_k = \begin{bmatrix}\hat{\varphi}_{s,k} \\ \hat{\varphi}_{n,k}\end{bmatrix},$$
$$\hat{\varphi}_{s,k} = [-\hat{x}_{k-1}, -\hat{x}_{k-2}, \ldots, -\hat{x}_{k-n_a}, \hat{\bar{u}}_{k-1}, \hat{\bar{u}}_{k-2}, \ldots, \hat{\bar{u}}_{k-n_b}, f(u_k)]^{\mathrm T},$$
$$\hat{\varphi}_{n,k} = [\hat{v}_{k-1}, \hat{v}_{k-2}, \ldots, \hat{v}_{k-n_d}]^{\mathrm T},$$
$$\hat{\theta}_k = \begin{bmatrix}\hat{\theta}_{s,k} \\ \hat{d}_k\end{bmatrix},$$
$$\hat{\theta}_{s,k} = [\hat{a}_k^{\mathrm T}, \hat{b}_k^{\mathrm T}, \hat{c}_k^{\mathrm T}]^{\mathrm T}.$$
Please note that the AM-MISG algorithm will reduce to the auxiliary model-based stochastic gradient (AM-SG) algorithm when p = 1 .
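Putting the pieces together, the following Python sketch outlines the AM-MISG recursion; it is an illustrative implementation under simple assumptions (zero initial internal signals, zero padding of the sliding window at start-up), not the authors' code.

```python
import numpy as np

def am_misg(u, y, basis, na, nb, nd, p):
    """Sketch of the AM-MISG recursion of Section 3 (illustrative, not optimized).

    The unknown x_{k-i}, ubar_{k-i} and v_{k-i} in phi_k are replaced by the
    auxiliary-model outputs x_hat, ubar_hat and v_hat.
    """
    m = len(basis)
    n = na + nb + nd + m
    N = len(u)
    theta = np.full(n, 1e-6)          # small nonzero initial estimate
    s = 1.0
    x_hat = np.zeros(N); ubar_hat = np.zeros(N); v_hat = np.zeros(N)
    y_hist, phi_hist = [], []

    def past(sig, k, i):              # signals are taken as zero for k <= 0
        return sig[k - i] if k - i >= 0 else 0.0

    for k in range(N):
        f_u = np.array([fi(u[k]) for fi in basis])
        phi = np.concatenate([
            [-past(x_hat, k, i) for i in range(1, na + 1)],
            [past(ubar_hat, k, i) for i in range(1, nb + 1)],
            f_u,
            [past(v_hat, k, i) for i in range(1, nd + 1)]])
        y_hist.append(y[k]); phi_hist.append(phi)

        # stacked output vector and information matrix over the last p instants
        Y_p = np.array([y_hist[-1 - i] if i < len(y_hist) else 0.0 for i in range(p)])
        Phi_p = np.column_stack([phi_hist[-1 - i] if i < len(phi_hist) else np.zeros(n)
                                 for i in range(p)])
        s += phi @ phi                          # s_k = s_{k-1} + ||phi_hat_k||^2
        E_p = Y_p - Phi_p.T @ theta             # innovation vector
        theta = theta + Phi_p @ E_p / s         # AM-MISG update

        # auxiliary models: refresh the internal signal estimates with the new theta
        c_hat = theta[na + nb:na + nb + m]
        ubar_hat[k] = f_u @ c_hat
        x_hat[k] = phi[:na + nb + m] @ theta[:na + nb + m]
        v_hat[k] = y[k] - phi @ theta
    return theta
```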

4. The AM-MIFSG Algorithm

This section deduces an AM-MIFSG algorithm to improve the parameter estimation performance of the above AM-MISG identification algorithm.
In (7), the first-order gradient is used to update the parameter vector. In contrast to the integer-order case, the fractional-order derivative of the quadratic objective function near a point is not determined by local information alone, so its essential property is nonlocal. This property enables the fractional-order gradient method to jump out of a local optimum and reach the global minimum point more quickly. Here, we propose to add a fractional-order gradient term to the first-order gradient, and the final update relation is written as
$$\hat{\theta}_k = \hat{\theta}_{k-1} - \mu_1 \frac{\partial J(\theta)}{\partial \theta} - \mu_{\alpha} \frac{\partial^{\alpha} J(\theta)}{\partial \theta^{\alpha}}, \tag{25}$$
where $\mu_{\alpha}$ is the step size for the fractional-order derivative of order $\alpha$. According to the Caputo and Riemann–Liouville definitions [46,47], the fractional derivative of a power function $f(t) = t^{n}$ $(n > -1)$ is defined as
$$D_t^{\alpha} t^{n} = \frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}\, t^{n-\alpha}, \tag{26}$$
where $D_t^{\alpha}$ is the fractional derivative operator of order $\alpha$ and $\Gamma$ is the gamma function, which satisfies $\Gamma(n) = (n-1)!$ for positive integers $n$.
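A quick numerical check of the power rule (26), using Python's math.gamma (illustrative, not from the paper):

```python
from math import gamma

def frac_power_derivative(t, n, alpha):
    """Power rule D^alpha t^n = Gamma(n+1)/Gamma(n+1-alpha) * t^(n-alpha)."""
    return gamma(n + 1) / gamma(n + 1 - alpha) * t ** (n - alpha)

# For n = 1 the rule gives D^alpha t = t^(1-alpha)/Gamma(2-alpha), the factor used below.
print(frac_power_derivative(2.0, 1, 0.9))   # fractional derivative of t at t = 2
print(frac_power_derivative(2.0, 1, 1.0))   # alpha = 1 recovers the ordinary derivative: 1.0
```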
According to (26), the fractional-order gradient in Equation (25) can be written as follows:
$$\frac{\partial^{\alpha} J(\theta)}{\partial \theta^{\alpha}} = -\varphi_k\, \frac{\partial^{\alpha}\theta}{\partial \theta^{\alpha}} = -\varphi_k\, \frac{\Gamma(2)}{\Gamma(2-\alpha)}\,\theta^{1-\alpha}, \tag{27}$$
where $\Gamma(2) = 1$. Then Equation (25) can be approximated as follows:
$$\hat{\theta}_k = \hat{\theta}_{k-1} + \frac{\varphi_k}{s_k}\, e_k + \frac{\psi_k}{s_{\alpha,k}}\, e_k, \quad 0 < \alpha < 1, \tag{28}$$
$$s_{\alpha,k} = s_{\alpha,k-1} + \|\psi_k\|^2, \quad s_{\alpha,0} = 1, \tag{29}$$
$$\psi_k = \operatorname{diag}(\varphi_k)\,\frac{|\hat{\theta}_{k-1}|^{1-\alpha}}{\Gamma(2-\alpha)}. \tag{30}$$
Please note that the absolute value of θ is used to avoid complex values; this is a common way of dealing with the fractional-order gradient [38]. The introduction of the fractional-order parameter α provides additional degrees of freedom and increases the flexibility of the parameter estimation.
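Before the multi-innovation extension, a minimal sketch of one scalar-innovation fractional SG step following (28)–(30); the function name and calling convention are illustrative assumptions.

```python
import numpy as np
from math import gamma

def fsg_step(theta_prev, s_prev, s_alpha_prev, phi_k, y_k, alpha):
    """One fractional SG step following (28)-(30)."""
    e_k = y_k - phi_k @ theta_prev
    # psi_k = diag(phi_k) |theta_{k-1}|^(1-alpha) / Gamma(2-alpha)   (30)
    psi_k = phi_k * np.abs(theta_prev) ** (1.0 - alpha) / gamma(2.0 - alpha)
    s_k = s_prev + phi_k @ phi_k                  # first-order accumulator
    s_alpha_k = s_alpha_prev + psi_k @ psi_k      # fractional accumulator (29)
    theta_k = theta_prev + phi_k * e_k / s_k + psi_k * e_k / s_alpha_k   # (28)
    return theta_k, s_k, s_alpha_k
```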
Similar to the AM-MISG algorithm in Section 3, expanding the information vector $\psi_k$ to the information matrix
$$\Psi_{p,k} = [\psi_k, \psi_{k-1}, \ldots, \psi_{k-p+1}],$$
and applying the auxiliary model identification idea, we can obtain the following AM-MIFSG algorithm:
$$\hat{\theta}_k = \hat{\theta}_{k-1} + \left(\frac{\hat{\Phi}_{p,k}}{s_k} + \frac{\hat{\Psi}_{p,k}}{s_{\alpha,k}}\right) E_{p,k}, \tag{31}$$
$$E_{p,k} = Y_{p,k} - \hat{\Phi}_{p,k}^{\mathrm T}\hat{\theta}_{k-1}, \tag{32}$$
$$s_k = s_{k-1} + \|\hat{\varphi}_k\|^2, \quad s_0 = 1, \tag{33}$$
$$s_{\alpha,k} = s_{\alpha,k-1} + \|\hat{\psi}_k\|^2, \quad s_{\alpha,0} = 1, \tag{34}$$
$$Y_{p,k} = [y_k, y_{k-1}, \ldots, y_{k-p+1}]^{\mathrm T}, \tag{35}$$
$$\hat{\Phi}_{p,k} = [\hat{\varphi}_k, \hat{\varphi}_{k-1}, \ldots, \hat{\varphi}_{k-p+1}], \tag{36}$$
$$\hat{\Psi}_{p,k} = [\hat{\psi}_k, \hat{\psi}_{k-1}, \ldots, \hat{\psi}_{k-p+1}], \tag{37}$$
$$\hat{\psi}_j = \operatorname{diag}(\hat{\varphi}_j)\,\frac{|\hat{\theta}_{k-1}|^{1-\alpha}}{\Gamma(2-\alpha)}, \quad j = k, k-1, \ldots, k-p+1, \tag{38}$$
$$\hat{\bar{u}}_k = f(u_k)\,\hat{c}_k, \tag{39}$$
$$\hat{x}_k = \hat{\varphi}_{s,k}^{\mathrm T}\hat{\theta}_{s,k}, \tag{40}$$
$$\hat{v}_k = y_k - \hat{\varphi}_k^{\mathrm T}\hat{\theta}_k, \tag{41}$$
$$f(u_k) = [f_1(u_k), f_2(u_k), \ldots, f_m(u_k)], \tag{42}$$
$$\hat{\varphi}_k = \begin{bmatrix}\hat{\varphi}_{s,k} \\ \hat{\varphi}_{n,k}\end{bmatrix}, \tag{43}$$
$$\hat{\varphi}_{s,k} = [-\hat{x}_{k-1}, -\hat{x}_{k-2}, \ldots, -\hat{x}_{k-n_a}, \hat{\bar{u}}_{k-1}, \hat{\bar{u}}_{k-2}, \ldots, \hat{\bar{u}}_{k-n_b}, f(u_k)]^{\mathrm T}, \tag{44}$$
$$\hat{\varphi}_{n,k} = [\hat{v}_{k-1}, \hat{v}_{k-2}, \ldots, \hat{v}_{k-n_d}]^{\mathrm T}, \tag{45}$$
$$\hat{\theta}_k = \begin{bmatrix}\hat{\theta}_{s,k} \\ \hat{d}_k\end{bmatrix}, \tag{46}$$
$$\hat{\theta}_{s,k} = [\hat{a}_k^{\mathrm T}, \hat{b}_k^{\mathrm T}, \hat{c}_k^{\mathrm T}]^{\mathrm T}. \tag{47}$$
Here, the above AM-MIFSG algorithm reduces to the auxiliary model-based fractional stochastic gradient (AM-FSG) algorithm when p = 1 .
Remark 1.
In general, as the innovation length p increases, the collected data are used more fully, and therefore the estimation accuracy gradually improves. However, the computational load increases at the same time. How to choose the optimal innovation length p is an open problem. In practice, we often choose p < n.
Remark 2.
The differential order α is chosen in the range (0, 1). The order may show different characteristics for different systems and can be adjusted during the procedure as needed.
The implementation of the AM-MIFSG algorithm is listed as follows.
1. Choose $p$ and $\alpha$, and initialize: let $k = 1$, $\hat{\theta}_0 = [\hat{\theta}_{s,0}^{\mathrm T}, \hat{d}_0^{\mathrm T}]^{\mathrm T} = \mathbf{1}_n/p_0$, $s_0 = 1$, $s_{\alpha,0} = 1$; set $\hat{x}_i = 1/p_0$, $\hat{\bar{u}}_i = 1/p_0$ and $\hat{v}_i = 1/p_0$ for $i \leq 0$ with $p_0 = 10^6$; and give the basis functions $f_i(\cdot)$.
2. Collect the input–output data $u_k$ and $y_k$, form the basis function vector $f(u_k)$ by (42), and the information vectors $\hat{\varphi}_k$ by (43), $\hat{\varphi}_{s,k}$ by (44) and $\hat{\varphi}_{n,k}$ by (45).
3. Compute $\hat{\psi}_j$ by (38). Form the stacked output vector $Y_{p,k}$ by (35) and the information matrices $\hat{\Phi}_{p,k}$ and $\hat{\Psi}_{p,k}$ by (36) and (37).
4. Compute the innovation vector $E_{p,k}$ by (32), $s_k$ by (33) and $s_{\alpha,k}$ by (34).
5. Update the parameter estimate $\hat{\theta}_k$ by (31), and compute the estimates $\hat{\bar{u}}_k$ by (39), $\hat{x}_k$ by (40) and $\hat{v}_k$ by (41).
6. Increase $k$ by 1 and go to Step 2. (A Python sketch of these steps is given after this list.)
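The sketch below follows the steps above under simple assumptions (zero padding of the sliding window at start-up); it is an illustration of the AM-MIFSG recursion, not the authors' implementation.

```python
import numpy as np
from math import gamma

def am_mifsg(u, y, basis, na, nb, nd, p, alpha, p0=1e6):
    """Sketch of the AM-MIFSG steps listed above (illustrative, not optimized)."""
    m = len(basis)
    n = na + nb + nd + m
    N = len(u)
    theta = np.full(n, 1.0 / p0)            # Step 1: theta_0 = 1_n / p_0
    s, s_alpha = 1.0, 1.0
    x_hat = np.zeros(N); ubar_hat = np.zeros(N); v_hat = np.zeros(N)
    y_hist, phi_hist = [], []

    def past(sig, k, i):                    # x_hat, ubar_hat, v_hat are 1/p_0 for k <= 0
        return sig[k - i] if k - i >= 0 else 1.0 / p0

    for k in range(N):
        # Step 2: basis vector f(u_k) and information vector phi_hat_k
        f_u = np.array([fi(u[k]) for fi in basis])
        phi = np.concatenate([
            [-past(x_hat, k, i) for i in range(1, na + 1)],
            [past(ubar_hat, k, i) for i in range(1, nb + 1)],
            f_u,
            [past(v_hat, k, i) for i in range(1, nd + 1)]])
        y_hist.append(y[k]); phi_hist.append(phi)

        # Step 3: stacked quantities; psi_hat_j uses the current theta for all columns
        Y_p = np.array([y_hist[-1 - i] if i < len(y_hist) else 0.0 for i in range(p)])
        Phi_p = np.column_stack([phi_hist[-1 - i] if i < len(phi_hist) else np.zeros(n)
                                 for i in range(p)])
        w = np.abs(theta) ** (1.0 - alpha) / gamma(2.0 - alpha)
        Psi_p = Phi_p * w[:, None]
        psi = Psi_p[:, 0]                   # psi_hat_k

        # Step 4: innovation vector and step-size accumulators
        E_p = Y_p - Phi_p.T @ theta
        s += phi @ phi
        s_alpha += psi @ psi

        # Step 5: parameter update and auxiliary-model outputs
        theta = theta + (Phi_p / s + Psi_p / s_alpha) @ E_p
        c_hat = theta[na + nb:na + nb + m]
        ubar_hat[k] = f_u @ c_hat
        x_hat[k] = phi[:na + nb + m] @ theta[:na + nb + m]
        v_hat[k] = y[k] - phi @ theta
        # Step 6: the loop moves on to k + 1
    return theta
```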
The algorithm obtained above, combined with the methods in [48,49,50,51,52,53], can cope with linear and nonlinear systems subject to different disturbances. Furthermore, prediction models or soft sensor models can be obtained with the assistance of other parameter estimation algorithms [54,55,56,57,58,59] and can be applied to process control and other fields [60,61,62,63,64,65].

5. Convergence Analysis

Theorem 1.
For the system in (1)–(2) and the AM-MIFSG algorithm in (31)–(47), assume that the noise sequence $\{v_k\}$ satisfies
$$(\mathrm{A1})\quad \mathrm{E}[v_k \mid \mathcal{F}_{k-1}] = 0, \ \mathrm{a.s.}, \qquad \mathrm{E}[v_k^2 \mid \mathcal{F}_{k-1}] \leq \sigma^2 < \infty, \ \mathrm{a.s.},$$
and that there exist an integer $N_k$ and a positive constant $\varrho$ independent of $k$ such that the following persistent excitation condition holds:
$$(\mathrm{A2})\quad \sum_{i=0}^{N_k} \frac{\hat{\Phi}_{\alpha,p,k+i}^{\mathrm T}\hat{\Phi}_{\alpha,p,k+i}}{s_{k+i}} \geq \varrho I, \ \mathrm{a.s.},$$
where $\hat{\Phi}_{\alpha,p,k} = [\hat{\varphi}_k \odot \theta_{\alpha}, \hat{\varphi}_{k-1} \odot \theta_{\alpha}, \ldots, \hat{\varphi}_{k-p+1} \odot \theta_{\alpha}]$, $\theta_{\alpha} := \mathbf{1}_n + |\hat{\theta}_{k-1}|^{1-\alpha}$, and $\odot$ denotes element-by-element multiplication of vectors. Then the parameter estimation error given by the AM-MIFSG algorithm satisfies $\lim_{k\to\infty} \mathrm{E}\big[\|\hat{\theta}_k - \theta\|^2\big] = 0$.
Proof. 
Define the parameter estimation error $\bar{\theta}_k := \hat{\theta}_k - \theta \in \mathbb{R}^n$. To simplify the proof, assume that $s_{\alpha,k} = s_k/\Gamma(2-\alpha)$. Inserting (32) into (31) and rearranging, we have
$$\bar{\theta}_k = \bar{\theta}_{k-1} + \frac{\hat{\Phi}_{\alpha,p,k}}{s_k}\big[Y_{p,k} - \hat{\Phi}_{p,k}^{\mathrm T}\hat{\theta}_{k-1}\big] = \bar{\theta}_{k-1} + \frac{\hat{\Phi}_{\alpha,p,k}}{s_k}\big[\Phi_{p,k}^{\mathrm T}\theta + V_{p,k} - \hat{\Phi}_{p,k}^{\mathrm T}\hat{\theta}_{k-1}\big] =: \bar{\theta}_{k-1} + \frac{\hat{\Phi}_{\alpha,p,k}}{s_k}\big[\mu_{p,k} - \varsigma_{p,k} + V_{p,k}\big],$$
where
$$\mu_{p,k} := [\Phi_{p,k} - \hat{\Phi}_{p,k}]^{\mathrm T}\theta \in \mathbb{R}^{p}, \quad \varsigma_{p,k} := \hat{\Phi}_{p,k}^{\mathrm T}\bar{\theta}_{k-1} \in \mathbb{R}^{p}, \quad V_{p,k} := [v_k, v_{k-1}, \ldots, v_{k-p+1}]^{\mathrm T} \in \mathbb{R}^{p}.$$
Taking the squared norm of both sides of the above relation gives
$$\bar{\theta}_k^{\mathrm T}\bar{\theta}_k = \bar{\theta}_{k-1}^{\mathrm T}\bar{\theta}_{k-1} + \frac{2}{s_k}\,\bar{\theta}_{k-1}^{\mathrm T}\hat{\Phi}_{\alpha,p,k}\big[\mu_{p,k} - \varsigma_{p,k} + V_{p,k}\big] + \frac{1}{s_k^{2}}\,\big[\mu_{p,k} - \varsigma_{p,k} + V_{p,k}\big]^{\mathrm T}\hat{\Phi}_{\alpha,p,k}^{\mathrm T}\hat{\Phi}_{\alpha,p,k}\big[\mu_{p,k} - \varsigma_{p,k} + V_{p,k}\big].$$
The rest can be proved in a way similar to that in [66].

6. Examples

Consider the following Hammerstein OEMA system:
$$y_k = \frac{B(z)}{A(z)}\,\bar{u}_k + D(z)\,v_k,$$
$$A(z) = 1 + a_1 z^{-1} + a_2 z^{-2} = 1 + 0.45 z^{-1} + 0.56 z^{-2},$$
$$B(z) = 1 + b_1 z^{-1} + b_2 z^{-2} = 1 + 0.25 z^{-1} - 0.35 z^{-2},$$
$$D(z) = 1 + d_1 z^{-1} = 1 - 0.54 z^{-1},$$
$$\bar{u}_k = c_1 u_k + c_2 u_k^2 + c_3 u_k^3 = 0.52 u_k + 0.54 u_k^2 + 0.82 u_k^3,$$
$$\theta = [a_1, a_2, b_1, b_2, c_1, c_2, c_3, d_1]^{\mathrm T} = [0.45, 0.56, 0.25, -0.35, 0.52, 0.54, 0.82, -0.54]^{\mathrm T}.$$
In this example, the input $\{u_k\}$ is a persistently exciting signal sequence and $\{v_k\}$ is a white noise sequence with zero mean and variance $\sigma^2 = 0.80^2$. The data length is taken as $L = 4000$, where the first 3500 samples are used for system identification and the remaining 500 samples are used for prediction and validation. The details are as follows.
1. First, we apply the AM-MISG algorithm and the AM-MIFSG algorithm with $\alpha = 0.94$ to estimate the parameters of the considered system. Table 1 and Table 2 show the parameter estimates and their errors with $p = 1$, 2, 4 and 6. Figure 2 and Figure 3 show the parameter estimation error $\delta := \|\hat{\theta}_k - \theta\|/\|\theta\|$ versus $k$.
2. Second, to examine the influence of the fractional order $\alpha$ in the AM-MIFSG algorithm, we take $p = 5$ and 6 and $\alpha = 0.80$, 0.90 and 0.92, respectively. The simulation results are shown in Table 3 and Table 4 and in Figure 4 and Figure 5.
3. Finally, a different data set ($L_e = 500$ samples from $k = 3501$ to 4000) and the estimated model obtained by the AM-MIFSG algorithm with $p = 6$ and $\alpha = 0.92$ are used for model validation. The predicted output and the true output are plotted in Figure 6 (from $k = 3501$ to 3700) and Figure 7 (from $k = 3501$ to 4000), where the average predicted output error is
$$\delta_e = \left\{\frac{1}{L_e}\sum_{k=3501}^{4000}\big[\hat{y}_k - y_k\big]^2\right\}^{1/2} = 0.0658,$$
the dotted line is the output $\hat{y}_k$ of the estimated model, and the solid line is the true output $y_k$.
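For completeness, the two error measures used in this section can be computed as follows (a small helper sketch; the function names are illustrative, and the true parameter vector is the one of the example above).

```python
import numpy as np

def relative_parameter_error(theta_hat, theta_true):
    """delta = ||theta_hat - theta|| / ||theta||, the measure reported in the tables."""
    return np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)

def average_prediction_error(y_hat, y_true):
    """delta_e = sqrt(mean((y_hat - y_true)^2)) over the validation samples."""
    y_hat, y_true = np.asarray(y_hat), np.asarray(y_true)
    return np.sqrt(np.mean((y_hat - y_true) ** 2))

theta_true = np.array([0.45, 0.56, 0.25, -0.35, 0.52, 0.54, 0.82, -0.54])
```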
From Table 1, Table 2, Table 3 and Table 4 and Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7, we can draw the following conclusions: (1) as the innovation length p increases, both the AM-MISG and the AM-MIFSG algorithms give higher parameter estimation accuracy; (2) in general, the AM-MIFSG algorithm has a faster convergence rate than the AM-MISG algorithm in the same situation, and the introduction of the fractional order improves the parameter estimation accuracy; (3) the convergence rate of the AM-MIFSG algorithm increases as the fractional order α increases, and an α within the range [0.90, 0.95] appears to be an appropriate choice that gives better estimation results for the Hammerstein output-error systems; (4) the estimated model obtained by the AM-MIFSG algorithm captures the system dynamics well.

7. Conclusions

This paper derives an AM-MIFSG estimation algorithm for Hammerstein output-error systems based on the key-term separation principle and the auxiliary model identification idea. By means of the key-term separation principle, the parameters in the linear and nonlinear blocks are separated, and the unknown variables in the identification model are replaced by the outputs of the auxiliary models. The analysis of the simulation results shows that the proposed algorithm obtains better parameter estimation performance than the AM-MISG algorithm. However, there are still many topics that need to be discussed further. For example, is this algorithm still effective for systems with missing data? Can the performance of the algorithm be improved by introducing a time-varying differential order α? These topics remain open problems for future studies.

Author Contributions

Conceptualization, C.X.; methodology, Y.M.; software, C.X.; validation, C.X.; formal analysis, C.X.; investigation, C.X.; resources, Y.M.; data curation, Y.M.; writing-original draft preparation, Y.M.; writing—review and editing, C.X.; visualization, C.X.; supervision, Y.M.; project administration, Y.M.; funding acquisition, C.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (No. 62103167), and in part by the Natural Science Foundation of Jiangsu Province (No. BK20210451), and the research project of Jiangnan University (Nos. JUSRP12028 and JUSRP12040).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, L.J.; Guo, J.; Xu, C.; Wu, T.Z.; Lin, H.P. Hybrid model predictive control strategy of supercapacitor energy storage system based on double active bridge. Energies 2019, 12, 2134. [Google Scholar] [CrossRef] [Green Version]
  2. Zhang, Y.; Yan, Z.; Zhou, C.C.; Wu, T.Z.; Wang, Y.Y. Capacity allocation of HESS in micro-grid based on ABC algorithm. Int. J. Low-Carbon Technol. 2020, 15, 496–505. [Google Scholar] [CrossRef]
  3. Chen, H.T.; Jiang, B.; Chen, W.; Yi, H. Data-driven detection and diagnosis of incipient faults in electrical drives of high-speed trains. IEEE Trans. Ind. Electron. 2019, 66, 4716–4725. [Google Scholar] [CrossRef]
  4. Chen, H.T.; Jiang, B.; Ding, S.X.; Huang, B. Data-driven fault diagnosis for traction systems in high-speed trains: A survey, challenges, and perspectives. IEEE Trans. Intell. Transp. Syst. 2020. [Google Scholar] [CrossRef]
  5. Ding, F.; Zhang, X.; Xu, L. The innovation algorithms for multivariable state-space models. Int. J. Adapt. Control Signal Process. 2019, 33, 1601–1608. [Google Scholar] [CrossRef]
  6. Ding, F.; Liu, Y.J.; Bao, B. Gradient based and least squares based iterative estimation algorithms for multi-input multi-output systems. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2012, 226, 43–55. [Google Scholar] [CrossRef]
  7. Xu, L.; Song, G.L. A recursive parameter estimation algorithm for modeling signals with multi-frequencies. Circuits Syst. Signal Process. 2020, 39, 4198–4224. [Google Scholar] [CrossRef]
  8. Xu, L.; Xiong, W.L.; Alsaedi, A.; Hayat, T. Hierarchical parameter estimation for the frequency response based on the dynamical window data. Int. J. Control Autom. Syst. 2018, 16, 1756–1764. [Google Scholar] [CrossRef]
  9. Zhang, X.; Yang, E.F. Highly computationally efficient state filter based on the delta operator. Int. J. Adapt. Control Signal Process. 2019, 33, 875–889. [Google Scholar] [CrossRef]
  10. Cuevas, E.; Díaz, P.; Avalos, O.; Zaldivar, D.; Pérez-Cisneros, M. Nonlinear system identification based on ANFIS-Hammerstein model using Gravitational search algorithm. Appl. Intell. 2018, 48, 182–203. [Google Scholar] [CrossRef]
  11. Mukhopadhyay, S.; Mukherjee, A. ImdLMS: An imputation based LMS algorithm for linear system identification with missing input data. IEEE Trans. Signal Process. 2020, 68, 2370–2385. [Google Scholar] [CrossRef]
  12. Sepulveda, N.E.; Sinha, J. Mathematical validation of experimentally optimised parameters used in a vibration-based machine-learning model for fault diagnosis in rotating machines. Machines 2021, 9, 155. [Google Scholar] [CrossRef]
  13. Zhao, S.Y.; Yuriy, S.; Ahn, C.; Zhao, C.H. Probabilistic monitoring of correlated sensors for nonlinear processes in state space. IEEE Trans. Ind. Electron. 2020, 67, 2294–2303. [Google Scholar] [CrossRef]
  14. Du, J.; Zhang, L.; Chen, J.; Li, J.; Jiang, X.; Zhu, C. Self-adjusted decomposition for multi-model predictive control of Hammerstein systems based on included angle. ISA Trans. 2020, 103, 19–27. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, H.T.; Jiang, B. A review of fault detection and diagnosis for the traction system in high-speed trains. IEEE Trans. Intell. Transp. Syst. 2020, 21, 450–465. [Google Scholar] [CrossRef]
  16. Yang, M.; Wang, J.; Zhang, Y. Fault detection and diagnosis for plasticizing process of single-base gun propellant using mutual information weighted MPCA under limited batch samples modelling. Machines 2021, 9, 166. [Google Scholar] [CrossRef]
  17. Chandra, S.; Hayashibe, M.; Thondiyath, A. Muscle fatigue induced hand tremor clustering in dynamic laparoscopic manipulation. IEEE Trans. Syst. Man Cybern. -Syst. 2020, 50, 5420–5431. [Google Scholar] [CrossRef]
  18. Jalaleddini, K.; Kearney, R.E. Subspace identification of SISO Hammerstein systems: Application to stretch reflex identification. IEEE Trans. Biomed. Eng. 2013, 60, 2725–2734. [Google Scholar] [CrossRef]
  19. Ding, F.; Chen, H.; Xu, L.; Dai, J.; Li, Q.; Hayat, T. A hierarchical least squares identification algorithm for Hammerstein nonlinear systems using the key term separation. J. Frankl. Inst. 2018, 355, 3737–3752. [Google Scholar] [CrossRef]
  20. Ding, J.; Cao, Z.; Chen, J.; Jiang, G. Weighted parameter estimation for Hammerstein nonlinear ARX systems. Circuits Syst. Signal Process. 2020, 39, 2178–2192. [Google Scholar] [CrossRef]
  21. Kazemi, M.; Arefi, M.M. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems. ISA Trans. 2017, 67, 382–388. [Google Scholar] [CrossRef]
  22. Hou, J.; Liu, T.; Wang, Q.G. Subspace identification of Hammerstein-type nonlinear systems subject to unknown periodic disturbance. Int. J. Control 2021, 94, 849–859. [Google Scholar] [CrossRef]
  23. Wang, L.; Ji, Y.; Wan, L.; Bu, N. Hierarchical recursive generalized extended least squares estimation algorithms for a class of nonlinear stochastic systems with colored noise. J. Frankl. Inst. 2019, 356, 10102–10122. [Google Scholar] [CrossRef]
  24. Wan, L.; Ding, F. Decomposition-and gradient-based iterative identification algorithms for multivariable systems using the multi-innovation theory. Circuits Syst. Signal Process. 2019, 38, 2971–2991. [Google Scholar] [CrossRef]
  25. Loizou, N.; Richtárik, P. Momentum and stochastic momentum for stochastic gradient, newton, proximal point and subspace descent methods. Comput. Optim. Appl. 2020, 77, 653–710. [Google Scholar] [CrossRef]
  26. Pan, J.; Jiang, X.; Wan, X.; Ding, W. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 2017, 15, 1189–1197. [Google Scholar] [CrossRef]
  27. Ma, H.; Pan, J.; Lv, L. Recursive algorithms for multivariable output-error-like ARMA systems. Mathematics 2019, 7, 558. [Google Scholar] [CrossRef] [Green Version]
  28. Pan, J.; Ma, H.; Zhang, X. Recursive coupled projection algorithms for multivariable output-error-like systems with coloured noises. IET Signal Process. 2020, 14, 455–466. [Google Scholar] [CrossRef]
  29. Ma, H.; Zhang, X.; Liu, Q.Y.; Hayat, T. Partially-coupled gradient-based iterative algorithms for multivariable output-error-like systems with autoregressive moving average noises. IET Contr. Theory Appl. 2020, 14, 2613–2627. [Google Scholar] [CrossRef]
  30. Khan, S.; Ahmad, J.; Naseem, I.; Moinuddin, M. A novel fractional gradient-based learning algorithm for recurrent neural networks. Circuits Syst. Signal Process. 2018, 37, 593–612. [Google Scholar] [CrossRef]
  31. Ding, F.; Chen, T. Performance analysis of multi-innovation gradient type identification methods. Automatica 2007, 43, 1–14. [Google Scholar] [CrossRef]
  32. Chaudhary, N.I.; Raja, M.A.Z. Identification of Hammerstein nonlinear ARMAX systems using nonlinear adaptive algorithms. Nonlinear Dyn. 2015, 79, 1385–1397. [Google Scholar] [CrossRef]
  33. Chaudhary, N.I.; Raja, M.A.Z. Design of fractional adaptive strategy for input nonlinear Box-Jenkins systems. Signal Process. 2015, 116, 141–151. [Google Scholar] [CrossRef]
  34. Wang, Y.; Li, M.; Chen, Z. Experimental study of fractional-order models for lithium-ion battery and ultra-capacitor: Modeling, system identification, and validation. Appl. Energy 2020, 278, 115736. [Google Scholar] [CrossRef]
  35. Aslam, M.S.; Chaudhary, N.I.; Raja, M.A.Z. A sliding-window approximation-based fractional adaptive strategy for Hammerstein nonlinear ARMAX systems. Nonlinear Dyn. 2017, 87, 519–533. [Google Scholar] [CrossRef]
  36. Chaudhary, N.I.; Raja, M.A.Z.; Khan, A.U.R. Design of modified fractional adaptive strategies for Hammerstein nonlinear control autoregressive systems. Nonlinear Dyn. 2015, 82, 1811–1830. [Google Scholar] [CrossRef]
  37. Cheng, S.; Wei, Y.; Sheng, D.; Chen, Y.; Wang, Y. Identification for Hammerstein nonlinear ARMAX systems based on multi-innovation fractional order stochastic gradient. Signal Process. 2018, 142, 1–10. [Google Scholar] [CrossRef]
  38. Chaudhary, N.I.; Raja, M.A.Z.; He, Y.; Khan, Z.A.; Machado, J.T. Design of multi innovation fractional LMS algorithm for parameter estimation of input nonlinear control autoregressive systems. Appl. Math. Model. 2021, 93, 412–425. [Google Scholar] [CrossRef]
  39. Ding, F.; Shi, Y.; Chen, T. Auxiliary model-based least-squares identification methods for Hammerstein output-error systems. Syst. Control Lett. 2007, 56, 373–380. [Google Scholar] [CrossRef]
  40. Zhang, Q. Nonlinear system identification with output error model through stabilized simulation. IFAC Proc. Vol. 2004, 37, 501–506. [Google Scholar] [CrossRef]
  41. Vörös, J. Parameter identification of discontinuous Hammerstein systems. Automatica 1997, 33, 1141–1146. [Google Scholar] [CrossRef]
  42. Vörös, J. Recursive identification of Hammerstein systems with discontinuous nonlinearities containing dead-zones. IEEE Trans. Autom. Control 2003, 48, 2203–2206. [Google Scholar] [CrossRef]
  43. Vörös, J. Identification of nonlinear cascade systems with output hysteresis based on the key term separation principle. Appl. Math. Model. 2015, 39, 5531–5539. [Google Scholar] [CrossRef]
  44. Mao, Y.; Ding, F.; Yang, E. Adaptive filtering based multi-innovation gradient algorithm for input nonlinear systems with autoregressive noise. Int. J. Adapt. Control Signal Process. 2017, 31, 1388–1400. [Google Scholar] [CrossRef] [Green Version]
  45. Liu, Y.; Yu, L.; Ding, F. Multi-innovation extended stochastic gradient algorithm and its performance analysis. Circuits Syst. Signal Process. 2010, 29, 649–667. [Google Scholar] [CrossRef]
  46. Shah, S.M. Riemann-Liouville operator-based fractional normalised least mean square algorithm with application to decision feedback equalisation of multipath channels. IET Signal Process. 2016, 10, 575–582. [Google Scholar] [CrossRef]
  47. Li, C.; Qian, D.; Chen, Y. On Riemann-Liouville and caputo derivatives. In Discrete Dynamics in Nature and Society; Hindawi Limited: London, UK, 2011. [Google Scholar]
  48. Li, M.H.; Liu, X.M. Maximum likelihood least squares based iterative estimation for a class of bilinear systems using the data filtering technique. Int. J. Control Autom. Syst. 2020, 18, 1581–1592. [Google Scholar] [CrossRef]
  49. Ding, F.; Xu, L.; Meng, D.D. Gradient estimation algorithms for the parameter identification of bilinear systems using the auxiliary model. J. Comput. Appl. Math. 2020, 369, 112575. [Google Scholar] [CrossRef]
  50. Li, M.H.; Liu, X.M. Maximum likelihood hierarchical least squares-based iterative identification for dual-rate stochastic systems. Int. J. Adapt. Control Signal Process. 2021, 35, 240–261. [Google Scholar] [CrossRef]
  51. Xu, L.; Chen, F.Y.; Hayat, T. Hierarchical recursive signal modeling for multi-frequency signals based on discrete measured data. Int. J. Adapt. Control Signal Process. 2021, 35, 676–693. [Google Scholar] [CrossRef]
  52. Ji, Y.; Kang, Z.; Zhang, C. Two-stage gradient-based recursive estimation for nonlinear models by using the data filtering. Int. J. Control Autom. Syst. 2021, 19, 2706–2715. [Google Scholar] [CrossRef]
  53. Xu, L.; Yang, E.F. Auxiliary model multiinnovation stochastic gradient parameter estimation methods for nonlinear sandwich systems. Int. J. Robust Nonlinear Control 2021, 31, 148–165. [Google Scholar] [CrossRef]
  54. Wang, J.W.; Ji, Y.; Zhang, C. Iterative parameter and order identification for fractional-order nonlinear finite impulse response systems using the key term separation. Int. J. Adapt. Control Signal Process. 2021, 35, 1562–1577. [Google Scholar] [CrossRef]
  55. Zhang, X. Adaptive parameter estimation for a general dynamical system with unknown states. Int. J. Robust Nonlinear Control 2020, 30, 1351–1372. [Google Scholar] [CrossRef]
  56. Zhang, X. Recursive parameter estimation methods and convergence analysis for a special class of nonlinear systems. Int. J. Robust Nonlinear Control 2020, 30, 1373–1393. [Google Scholar] [CrossRef]
  57. Mao, Y.W.; Liu, S.; Liu, J.F. Robust economic model predictive control of nonlinear networked control systems with communication delays. Int. J. Adapt. Control Signal Process. 2020, 34, 614–637. [Google Scholar] [CrossRef]
  58. Ding, F. State filtering and parameter estimation for state space systems with scarce measurements. Signal Process. 2014, 104, 369–380. [Google Scholar] [CrossRef]
  59. Ding, F. Combined state and least squares parameter estimation algorithms for dynamic systems. Appl. Math. Modell. 2014, 38, 403–412. [Google Scholar] [CrossRef]
  60. Li, M.H.; Liu, X.M. Iterative identification methods for a class of bilinear systems by using the particle filtering technique. Int. J. Adapt. Control Signal Process. 2021, 35, 2056–2074. [Google Scholar] [CrossRef]
  61. Zhao, S.Y.; Huang, B.; Liu, F. Linear optimal unbiased filter for time-variant systems without apriori information on initial conditions. IEEE Trans. Autom. Control 2017, 62, 882–887. [Google Scholar] [CrossRef]
  62. Liu, Y.J.; Shi, Y. An efficient hierarchical identification method for general dual-rate sampled-data systems. Automatica 2014, 50, 962–970. [Google Scholar] [CrossRef]
  63. Ding, F.; Qiu, L.; Chen, T. Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems. Automatica 2009, 45, 324–332. [Google Scholar] [CrossRef]
  64. Zhao, S.Y.; Yuriy, S.; Ahn, C.; Liu, F. Adaptive-horizon iterative UFIR filtering algorithm with applications. IEEE Trans. Ind. Electron. 2018, 65, 6393–6402. [Google Scholar] [CrossRef]
  65. Zhao, S.Y.; Huang, B. Trial-and-error or avoiding a guess? Initialization of the Kalman filter. Automatica 2020, 121, 109184. [Google Scholar] [CrossRef]
  66. Ding, F.; Liu, G.; Liu, X.P. Parameter estimation with scarce measurements. Automatica 2011, 47, 1646–1655. [Google Scholar] [CrossRef]
Figure 1. The Hammerstein OEMA systems.
Figure 2. The AM-MISG estimation error δ versus k with p = 1 , 2, 4 and 6 and the AM-MIFSG estimation error δ versus k with p = 2 and 6.
Figure 3. The AM-MIFSG estimation error δ versus k with p = 1 , 2, 4 and 6.
Figure 4. The AM-MIFSG estimation error δ versus k with α = 0.80 , 0.90 and 0.92 ( p = 5 ).
Figure 5. The AM-MIFSG estimation error δ versus k with α = 0.80 , 0.90 and 0.92 ( p = 6 ).
Figure 6. The predicted output y ^ k and true output y k from k = 3501 to 3700.
Figure 7. The predicted output y ^ k and true output y k from k = 3501 to 4000.
Table 1. The AM-MISG estimates and errors with p = 1, 2, 4 and 6.

| p | k | a1 | a2 | b1 | b2 | c1 | c2 | c3 | d1 | δ (%) |
|---|---|----|----|----|----|----|----|----|----|-------|
| 1 | 100 | 0.03695 | 0.23553 | −0.03327 | −0.20875 | 0.27338 | 0.03195 | 0.49278 | −0.31825 | 61.82507 |
|   | 200 | 0.07113 | 0.34613 | −0.03653 | −0.29482 | 0.32261 | 0.07100 | 0.59507 | −0.37276 | 52.41770 |
|   | 500 | 0.10951 | 0.42150 | −0.04098 | −0.34040 | 0.35083 | 0.10812 | 0.64790 | −0.39006 | 46.76773 |
|   | 1000 | 0.13632 | 0.46020 | −0.03759 | −0.36082 | 0.36386 | 0.12852 | 0.67688 | −0.39765 | 43.71611 |
|   | 2000 | 0.16601 | 0.49396 | −0.03164 | −0.38052 | 0.37629 | 0.14894 | 0.70178 | −0.40765 | 40.77833 |
|   | 3000 | 0.18127 | 0.50703 | −0.02857 | −0.38630 | 0.38212 | 0.15952 | 0.71245 | −0.41028 | 39.42179 |
| 2 | 100 | 0.08718 | 0.45055 | −0.02857 | −0.34926 | 0.38250 | 0.12367 | 0.64113 | −0.44165 | 45.20563 |
|   | 200 | 0.21252 | 0.53700 | −0.00743 | −0.40687 | 0.43140 | 0.19230 | 0.74170 | −0.46339 | 34.63613 |
|   | 500 | 0.27602 | 0.53381 | 0.01178 | −0.39497 | 0.44676 | 0.23408 | 0.76740 | −0.46163 | 29.78555 |
|   | 1000 | 0.29539 | 0.53658 | 0.03287 | −0.38875 | 0.45350 | 0.26006 | 0.78181 | −0.45922 | 27.12588 |
|   | 2000 | 0.31601 | 0.54589 | 0.05191 | −0.39039 | 0.45935 | 0.28327 | 0.79322 | −0.46058 | 24.67852 |
|   | 3000 | 0.32385 | 0.54810 | 0.06104 | −0.38751 | 0.46189 | 0.29484 | 0.79696 | −0.46005 | 23.55543 |
| 4 | 100 | 0.27486 | 0.60968 | 0.03978 | −0.39097 | 0.50049 | 0.23079 | 0.79164 | −0.50729 | 28.18166 |
|   | 200 | 0.37933 | 0.57741 | 0.09918 | −0.37644 | 0.52086 | 0.30714 | 0.83586 | −0.48517 | 19.67676 |
|   | 500 | 0.38912 | 0.54069 | 0.13209 | −0.34877 | 0.51837 | 0.35074 | 0.82239 | −0.48274 | 16.01070 |
|   | 1000 | 0.38933 | 0.54894 | 0.15511 | −0.34580 | 0.51921 | 0.37894 | 0.82425 | −0.48325 | 13.73325 |
|   | 2000 | 0.40081 | 0.55745 | 0.17376 | −0.34949 | 0.51943 | 0.40309 | 0.82417 | −0.48612 | 11.58755 |
|   | 3000 | 0.40202 | 0.55950 | 0.18102 | −0.34717 | 0.51845 | 0.41433 | 0.82086 | −0.48675 | 10.74226 |
| 6 | 100 | 0.35169 | 0.61755 | 0.08824 | −0.35474 | 0.53581 | 0.30223 | 0.83886 | −0.52654 | 20.81443 |
|   | 200 | 0.40866 | 0.59002 | 0.16182 | −0.34923 | 0.53654 | 0.37953 | 0.84693 | −0.50426 | 13.13328 |
|   | 500 | 0.42035 | 0.54952 | 0.18298 | −0.32811 | 0.52821 | 0.42093 | 0.81930 | −0.50377 | 9.82957 |
|   | 1000 | 0.41823 | 0.56425 | 0.20204 | −0.33346 | 0.53010 | 0.44667 | 0.82367 | −0.50609 | 7.80786 |
|   | 2000 | 0.42981 | 0.56613 | 0.21766 | −0.34005 | 0.52962 | 0.46847 | 0.82213 | −0.50962 | 5.89007 |
|   | 3000 | 0.42678 | 0.56730 | 0.22265 | −0.33911 | 0.52764 | 0.47733 | 0.81677 | −0.51053 | 5.32836 |
| True values | | 0.45000 | 0.56000 | 0.25000 | −0.35000 | 0.52000 | 0.54000 | 0.82000 | −0.54000 | |
Table 2. The AM-MIFSG estimates and errors with p = 1, 2, 4 and 6.

| p | k | a1 | a2 | b1 | b2 | c1 | c2 | c3 | d1 | δ (%) |
|---|---|----|----|----|----|----|----|----|----|-------|
| 1 | 100 | 0.15708 | 0.59506 | −0.10818 | −0.36395 | 0.38627 | 0.08191 | 0.73420 | −0.36577 | 46.47154 |
|   | 200 | 0.27641 | 0.58549 | −0.07338 | −0.37468 | 0.42517 | 0.13842 | 0.81526 | −0.37310 | 38.73219 |
|   | 500 | 0.29432 | 0.55031 | −0.03625 | −0.35604 | 0.43184 | 0.17816 | 0.82178 | −0.37342 | 34.99604 |
|   | 1000 | 0.29755 | 0.54549 | −0.01005 | −0.35097 | 0.43470 | 0.20499 | 0.82736 | −0.37378 | 32.70875 |
|   | 2000 | 0.30878 | 0.55280 | 0.01095 | −0.35501 | 0.43788 | 0.22916 | 0.83306 | −0.37827 | 30.47638 |
|   | 3000 | 0.31387 | 0.55453 | 0.02109 | −0.35366 | 0.43918 | 0.24158 | 0.83382 | −0.37952 | 29.40027 |
| 2 | 100 | 0.35829 | 0.62744 | 0.15273 | −0.37663 | 0.55109 | 0.23357 | 0.78715 | −0.59971 | 23.46638 |
|   | 200 | 0.42746 | 0.58770 | 0.19005 | −0.36598 | 0.56024 | 0.31456 | 0.81037 | −0.55565 | 16.12428 |
|   | 500 | 0.43121 | 0.55375 | 0.20459 | −0.34652 | 0.55703 | 0.35970 | 0.79703 | −0.54665 | 12.87485 |
|   | 1000 | 0.42642 | 0.56004 | 0.21858 | −0.34550 | 0.55880 | 0.38789 | 0.80138 | −0.54408 | 10.92266 |
|   | 2000 | 0.43305 | 0.56476 | 0.22928 | −0.34967 | 0.55890 | 0.41208 | 0.80204 | −0.54336 | 9.22414 |
|   | 3000 | 0.43194 | 0.56535 | 0.23216 | −0.34777 | 0.55770 | 0.42315 | 0.79911 | −0.54236 | 8.52773 |
| 4 | 100 | 0.37614 | 0.62528 | 0.15435 | −0.35937 | 0.56108 | 0.31419 | 0.83373 | −0.62653 | 18.87062 |
|   | 200 | 0.43032 | 0.60035 | 0.22302 | −0.36272 | 0.55406 | 0.42250 | 0.82836 | −0.56139 | 9.08948 |
|   | 500 | 0.44845 | 0.55152 | 0.22616 | −0.33881 | 0.54491 | 0.46286 | 0.79775 | −0.55411 | 6.00591 |
|   | 1000 | 0.44218 | 0.56900 | 0.23948 | −0.34615 | 0.55133 | 0.48607 | 0.81308 | −0.55361 | 4.43982 |
|   | 2000 | 0.44975 | 0.56313 | 0.24931 | −0.35057 | 0.55048 | 0.50632 | 0.81111 | −0.55405 | 3.24848 |
|   | 3000 | 0.44202 | 0.56473 | 0.25018 | −0.34923 | 0.54786 | 0.51266 | 0.80477 | −0.55341 | 3.01322 |
| 6 | 100 | 0.37154 | 0.63684 | 0.14344 | −0.33325 | 0.54734 | 0.38404 | 0.84946 | −0.62029 | 15.86854 |
|   | 200 | 0.43128 | 0.61835 | 0.23888 | −0.36493 | 0.53142 | 0.48850 | 0.83328 | −0.55888 | 5.77068 |
|   | 500 | 0.45536 | 0.55467 | 0.23160 | −0.34012 | 0.52315 | 0.50947 | 0.79826 | −0.55434 | 3.08071 |
|   | 1000 | 0.44922 | 0.57123 | 0.24453 | −0.35088 | 0.53625 | 0.52484 | 0.82763 | −0.55500 | 2.04837 |
|   | 2000 | 0.45296 | 0.55918 | 0.25378 | −0.35400 | 0.53417 | 0.53985 | 0.82120 | −0.55601 | 1.49579 |
|   | 3000 | 0.44123 | 0.56369 | 0.25237 | −0.35328 | 0.53073 | 0.54151 | 0.81199 | −0.55532 | 1.53252 |
| True values | | 0.45000 | 0.56000 | 0.25000 | −0.35000 | 0.52000 | 0.54000 | 0.82000 | −0.54000 | |
Table 3. The AM-MIFSG estimates and errors with α = 0.80, 0.90 and 0.92 (p = 5).

| α | k | a1 | a2 | b1 | b2 | c1 | c2 | c3 | d1 | δ (%) |
|---|---|----|----|----|----|----|----|----|----|-------|
| 0.80 | 100 | 0.23581 | 0.59985 | 0.09673 | −0.40126 | 0.35249 | 0.17777 | 0.82055 | −0.48512 | 32.54342 |
|      | 200 | 0.43614 | 0.59824 | 0.19561 | −0.40423 | 0.39862 | 0.34638 | 0.91657 | −0.45204 | 18.57503 |
|      | 500 | 0.44156 | 0.53689 | 0.21507 | −0.35179 | 0.39149 | 0.42275 | 0.86423 | −0.46396 | 13.37312 |
|      | 1000 | 0.43930 | 0.56210 | 0.23408 | −0.35731 | 0.40462 | 0.46695 | 0.88603 | −0.47323 | 11.19137 |
|      | 2000 | 0.44866 | 0.55624 | 0.24854 | −0.35742 | 0.40774 | 0.49743 | 0.88189 | −0.48368 | 9.82294 |
|      | 3000 | 0.43832 | 0.56284 | 0.25029 | −0.35452 | 0.40630 | 0.50670 | 0.87138 | −0.48769 | 9.37582 |
| 0.90 | 100 | 0.25965 | 0.58762 | 0.16306 | −0.43464 | 0.41011 | 0.32565 | 0.75782 | −0.62289 | 23.25864 |
|      | 200 | 0.43757 | 0.58825 | 0.25570 | −0.43466 | 0.45124 | 0.46639 | 0.85201 | −0.55254 | 9.35233 |
|      | 500 | 0.46161 | 0.53207 | 0.24689 | −0.37626 | 0.45338 | 0.49568 | 0.83323 | −0.55019 | 6.10290 |
|      | 1000 | 0.45443 | 0.55626 | 0.25376 | −0.37420 | 0.46646 | 0.51604 | 0.85868 | −0.55042 | 5.04977 |
|      | 2000 | 0.45629 | 0.55114 | 0.25927 | −0.36918 | 0.46731 | 0.53246 | 0.85369 | −0.55163 | 4.58180 |
|      | 3000 | 0.44425 | 0.55646 | 0.25785 | −0.36480 | 0.46517 | 0.53551 | 0.84496 | −0.55137 | 4.29344 |
| 0.92 | 100 | 0.36347 | 0.63518 | 0.16075 | −0.38255 | 0.49377 | 0.31352 | 0.84459 | −0.60486 | 18.82927 |
|      | 200 | 0.44017 | 0.60438 | 0.24641 | −0.38609 | 0.49205 | 0.43973 | 0.85042 | −0.53959 | 8.24557 |
|      | 500 | 0.45824 | 0.54625 | 0.24047 | −0.35090 | 0.48640 | 0.47823 | 0.81880 | −0.53739 | 4.87871 |
|      | 1000 | 0.45085 | 0.56619 | 0.25046 | −0.35726 | 0.49783 | 0.50209 | 0.84203 | −0.53918 | 3.35483 |
|      | 2000 | 0.45508 | 0.55783 | 0.25789 | −0.35824 | 0.49792 | 0.52152 | 0.83812 | −0.54161 | 2.43473 |
|      | 3000 | 0.44406 | 0.56191 | 0.25685 | −0.35603 | 0.49533 | 0.52624 | 0.82969 | −0.54185 | 2.13756 |
| True values | | 0.45000 | 0.56000 | 0.25000 | −0.35000 | 0.52000 | 0.54000 | 0.82000 | −0.54000 | |
Table 4. The AM-MIFSG estimates and errors with α = 0.80, 0.90 and 0.92 (p = 6).

| α | k | a1 | a2 | b1 | b2 | c1 | c2 | c3 | d1 | δ (%) |
|---|---|----|----|----|----|----|----|----|----|-------|
| 0.80 | 100 | 0.26215 | 0.64436 | 0.06610 | −0.35700 | 0.41270 | 0.26270 | 0.84456 | −0.55007 | 27.25073 |
|      | 200 | 0.42892 | 0.61731 | 0.19706 | −0.36999 | 0.43708 | 0.42526 | 0.90034 | −0.50010 | 12.53946 |
|      | 500 | 0.44308 | 0.55154 | 0.21290 | −0.33527 | 0.42696 | 0.47865 | 0.84239 | −0.50720 | 8.39876 |
|      | 1000 | 0.44126 | 0.56985 | 0.23379 | −0.34728 | 0.44327 | 0.50887 | 0.87299 | −0.51449 | 6.95024 |
|      | 2000 | 0.44893 | 0.55908 | 0.24812 | −0.35102 | 0.44420 | 0.53026 | 0.86479 | −0.52163 | 6.06418 |
|      | 3000 | 0.43731 | 0.56558 | 0.24875 | −0.35051 | 0.44213 | 0.53393 | 0.85338 | −0.52364 | 5.87107 |
| 0.90 | 100 | 0.32475 | 0.62263 | 0.16067 | −0.40949 | 0.45832 | 0.35530 | 0.80222 | −0.63261 | 18.70725 |
|      | 200 | 0.44220 | 0.60427 | 0.26639 | −0.41415 | 0.47360 | 0.48866 | 0.84739 | −0.55477 | 7.38987 |
|      | 500 | 0.46480 | 0.54124 | 0.24811 | −0.36447 | 0.47263 | 0.50941 | 0.82077 | −0.55289 | 4.30671 |
|      | 1000 | 0.45622 | 0.56250 | 0.25474 | −0.36674 | 0.48814 | 0.52662 | 0.85226 | −0.55350 | 3.52333 |
|      | 2000 | 0.45641 | 0.55356 | 0.25988 | −0.36364 | 0.48753 | 0.54148 | 0.84452 | −0.55464 | 3.17023 |
|      | 3000 | 0.44318 | 0.55972 | 0.25723 | −0.36073 | 0.48473 | 0.54283 | 0.83450 | −0.55411 | 2.90188 |
| 0.92 | 100 | 0.38062 | 0.64556 | 0.16445 | −0.36053 | 0.51634 | 0.33828 | 0.85954 | −0.61558 | 17.41472 |
|      | 200 | 0.44313 | 0.61605 | 0.25735 | −0.38049 | 0.50254 | 0.46548 | 0.84460 | −0.54615 | 6.92261 |
|      | 500 | 0.46199 | 0.55073 | 0.24347 | −0.34802 | 0.49615 | 0.49609 | 0.80967 | −0.54393 | 3.60595 |
|      | 1000 | 0.45377 | 0.56875 | 0.25282 | −0.35650 | 0.51054 | 0.51672 | 0.83983 | −0.54582 | 2.32078 |
|      | 2000 | 0.45586 | 0.55746 | 0.25944 | −0.35758 | 0.50944 | 0.53419 | 0.83334 | −0.54795 | 1.60509 |
|      | 3000 | 0.44334 | 0.56272 | 0.25702 | −0.35604 | 0.50628 | 0.53692 | 0.82365 | −0.54786 | 1.35745 |
| True values | | 0.45000 | 0.56000 | 0.25000 | −0.35000 | 0.52000 | 0.54000 | 0.82000 | −0.54000 | |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
