Article

A Filtering-Based Stochastic Gradient Estimation Method for Multivariate Pseudo-Linear Systems Using the Partial Coupling Concept

1
Jiangsu Key Laboratory of Media Design and Software Technology, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
2
School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215137, China
*
Author to whom correspondence should be addressed.
Processes 2023, 11(9), 2700; https://doi.org/10.3390/pr11092700
Submission received: 25 August 2023 / Revised: 6 September 2023 / Accepted: 7 September 2023 / Published: 9 September 2023
(This article belongs to the Special Issue Adaptive Control: Design and Analysis)

Abstract
Solutions for enhancing parameter identification effects for multivariate equation-error systems in random interference and parameter coupling conditions are considered in this paper. For the purpose of avoiding the impact of colored noises on parameter identification precision, an appropriate filter is utilized to process the autoregressive moving average noise. Then, the filtered system is transformed into a number of sub-identification models based on system output dimensions. Founded on negative gradient search, a new multivariate filtering algorithm employing a partial coupling approach is proposed, and a conventional gradient algorithm is derived for comparison. Parameter identification for multivariate equation-error systems has a high estimation accuracy and an efficient calculation speed with the application of the partial coupling approach and the data filtering method. Two simulations are performed to reveal the proposed method’s effectiveness.

1. Introduction

The foundation of industrial automatic production and intelligent control is a precise model of the production process [1]. With the expansion of production scale, multivariate systems have been widely used in production processes [2,3,4]. Parameter estimation for multivariate systems plays a considerable role in system identification and has attracted much attention from researchers in recent decades. Multivariate systems are more difficult to identify than scalar systems because they have more unknown parameters, are accompanied by more complex random interferences, and the parameters of some channels are coupled [5,6,7]. Good identification results are often not achieved if identification approaches for scalar systems are applied to multivariate systems without modification. Some improved methods for the identification of multivariate systems have been researched recently [8,9,10]. For instance, Mari et al. combined the Schur restabilization technique and a covariance fitting algorithm to propose a parameter estimation method for finite-dimensional multivariate linear stochastic systems [11]. Luo and Manikas proposed an iterative method and a nonlinear optimization algorithm for suppressing mutual target interference in multitarget parameter estimation [12]. Zhang et al. identified the parameters of a multivariate uncertain regression model with a maximum likelihood identification algorithm [13]. Oigard et al. researched an expectation maximization algorithm for heavy-tailed processes with a multivariate normal inverse Gaussian distribution, which identifies parameters quickly and accurately [14].
Although identification methods for multivariate systems are gradually being enriched, researchers remain devoted to finding methods with higher identification efficiency and higher identification accuracy for multivariate systems. In terms of improving estimation efficiency, in addition to the decomposition identification method, the coupling identification approach can also effectively reduce the amount of identification computation [15,16,17]. The basis of the coupling identification approach is to transform a multivariate system into several identification subsystems and then couple the identification results of the subsystems so that their results are correlated [18,19,20]. There are some studies on coupling identification methods for multivariate systems. Ding researched parameter identification issues for non-uniformly sampled systems and proposed a partially coupled algorithm based upon the stochastic gradient method; a simulation in the paper revealed that the new algorithm requires less calculation than the standard stochastic gradient algorithm [21]. Zhou designed a nonlinear partially coupled parameter identification algorithm for multivariate radial basis function-based hybrid models inspired by the coupling concept, which reduces the amount of calculation by dealing with the associated items brought by model decomposition [22]. Huang et al. provided a coupled probability representation regarding model coupling in feature-based image source identification, which improved the identification accuracy significantly [23]. Wang solved parameter identification problems of nonlinear multivariate systems and developed a coupled gradient method by introducing the coupling idea, which can realize subsystem-coupled computation [24].
The data filtering approach in parameter estimation is the improvement of the parameter identification precision by modifying the structure of system interference noises through an appropriate filter without changing the system’s input–output relationship [25,26,27]. The data filtering approach has been applied to scalar system identification in some studies. Ji and Jiang utilized a data filter to process collected data to deal with the disturbance of colored noise on identification precision for generalized time-varying systems [28]. Imani studied a maximum-likelihood parameter identification method for partially observed Boolean dynamical systems by using a Boolean Kalman filter [29]. Zhang developed a filtering hierarchical maximum likelihood iterative algorithm for nonlinear systems by applying the data filtering approach and multi-innovation identification method, which obtains highly precise parameter estimates and tracks time-varying parameters well [30]. Chen et al. proposed a multi-step-length gradient iterative algorithm for ARX models with the application of a modified Kalman filter. The Kalman filter was designed to enhance unmeasurable output estimates, which improved parameter identification accuracy [31]. Li and Liu addressed parameter identification problems in bilinear systems and presented iterative methods with high estimation accuracies by utilizing the particle filtering approach [32].
The least squares estimation algorithm, gradient estimation algorithm, least mean square estimation algorithm, and stochastic approximation estimation algorithm are all classical identification methods in the field of system identification. The least squares method is a basic parameter estimation method which can be used for dynamic system identification as well as for static system parameter fitting [33,34]. The gradient identification method is a search for parameter estimates along the direction of the negative gradient of the criterion functions [35,36,37]. Compared with the least squares method, the gradient identification algorithm has less computational complexity because it does not involve the covariance matrix. By extending the gradient identification algorithm and combining it with other methods, estimation algorithms with high identification performances can be obtained. Zhang and Ding proposed an optimal adaptive filtering algorithm for filter design by combining the data filtering approach with the gradient method [38]. Roman et al. derived a gradient descent method for identifying parameters of a linear wave equation from experimental boundary data [39]. Chen et al. identified parameters of time-delay rational state–space systems and presented two improved gradient descent algorithms by utilizing an intelligent search method and a momentum method, which had faster convergence speeds and higher computational efficiencies [40]. Kulikova researched adaptive filtering methods based on the gradient algorithm for identifying unknown parameters of pairwise linear Gaussian systems [41].
The data filtering approach can improve the parameter identification precision for multivariate systems by transforming colored noises with complex structures into white noises with simple structures [42,43,44]. At the same time, the coupling identification method can effectively speed up identification, and the gradient identification approach can quickly search for the optimal estimates [45,46,47]. Therefore, motivated by the significant advantages of these three methods, this paper combines the data filtering approach and the coupling identification method based upon gradient search to identify the parameters of multivariate equation-error systems. The introduction of the data filtering approach overcomes the influence of colored noise on identification precision. The use of the coupling identification method reduces the computation of the identification algorithm. The main highlights of this paper are summarized as follows.
(1)
A filter is used to transform the autoregressive moving average noise of multivariate pseudo-linear systems into white noise by applying the data filtering approach. The filtered system is converted into a number of subsystem identification models based upon the system output dimensions according to the coupling identification method.
(2)
A filtering-based multivariate gradient algorithm employing the partial coupling concept for multivariate pseudo-linear systems is proposed. Additionally, a conventional multivariate gradient algorithm is derived for comparison. The proposed algorithm has higher identification precision and higher computational efficiency than the conventional algorithm.
The structure of this paper is as follows. The multivariate pseudo-linear system is presented and the system identification obstacles are analyzed in Section 2. A new gradient algorithm based upon the coupling identification approach and the data filtering method is proposed in Section 3. Section 4 derives a conventional multivariate gradient algorithm. Convergence of the proposed method is discussed in Section 5. In Section 6, two simulations are performed to reveal the effectiveness of the proposed methods. Finally, Section 7 provides some conclusions of this paper.

2. Problem Description

At the beginning, we provide some notation to make the paper concise and clear. $I_m$ is an identity matrix of size $m \times m$. $\mathbf{1}_{m\times n}$ denotes a matrix of size $m \times n$ whose elements are all 1. The norm of a matrix $A$ is defined by $\|A\|^2 := \mathrm{tr}[AA^{\rm T}]$, and the superscript T represents the matrix/vector transpose. The symbol $\otimes$ stands for the Kronecker product: for $X := [x_{ij}] \in \mathbb{R}^{m\times n}$ and $Y := [y_{ij}] \in \mathbb{R}^{p\times q}$, $X \otimes Y = [x_{ij}Y] \in \mathbb{R}^{(mp)\times(nq)}$; in general, $X \otimes Y \neq Y \otimes X$. $\mathrm{col}[B]$ denotes the vector consisting of all columns of a matrix $B$ arranged in order: for $B := [b_1, b_2, \cdots, b_n] \in \mathbb{R}^{m\times n}$ with $b_i \in \mathbb{R}^m$ $(i = 1, 2, \cdots, n)$, $\mathrm{col}[B] := [b_1^{\rm T}, b_2^{\rm T}, \cdots, b_n^{\rm T}]^{\rm T} \in \mathbb{R}^{mn}$. $\hat{H}(k)$ is the estimate of $H$ at time $k$.
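These conventions can be illustrated numerically (a NumPy sketch; the matrices are arbitrary examples of mine, not from the paper):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# ||A||^2 := tr[A A^T] is the squared Frobenius norm
assert np.isclose(np.trace(A @ A.T), np.sum(A**2))

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[0.0, 1.0], [1.0, 0.0]])
# Kronecker product: (m x n) (x) (p x q) -> (mp) x (nq)
assert np.kron(X, Y).shape == (4, 4)
# generally X (x) Y != Y (x) X
assert not np.array_equal(np.kron(X, Y), np.kron(Y, X))

B = np.array([[1.0, 4.0], [2.0, 5.0], [3.0, 6.0]])
col_B = B.reshape(-1, order="F")   # col[B]: stack the columns in order
assert np.array_equal(col_B, np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]))
```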
According to the type of colored noise, the multivariate pseudo-linear system can be divided into different types. In this paper, we consider systems where the noise is of the autoregressive moving average type. The system is widely present in industrial processes and its structure is described as
$$y(k) = \Phi(k)\theta + \frac{D(z)}{c(z)}v(k), \qquad (1)$$
where $y(k) := [y_1(k), y_2(k), \cdots, y_m(k)]^{\rm T} \in \mathbb{R}^m$ is the measurable system output vector, $\Phi(k) \in \mathbb{R}^{m\times n}$ is the system information matrix formed from system input–output data, $\theta \in \mathbb{R}^n$ is the unknown system parameter vector to be identified, $v(k) := [v_1(k), v_2(k), \cdots, v_m(k)]^{\rm T} \in \mathbb{R}^m$ is a white noise process with zero mean, $c(z) \in \mathbb{R}$ is a polynomial in the unit backward shift operator $[z^{-1}y(k) = y(k-1)]$, and $D(z) \in \mathbb{R}^{m\times m}$ is a polynomial matrix:
$$c(z) := 1 + c_1 z^{-1} + c_2 z^{-2} + \cdots + c_{n_c} z^{-n_c}, \quad c_i \in \mathbb{R},$$
$$D(z) := I_m + D_1 z^{-1} + D_2 z^{-2} + \cdots + D_{n_d} z^{-n_d}, \quad D_i \in \mathbb{R}^{m\times m}.$$
Define the noise model as
$$w(k) := \frac{D(z)}{c(z)}v(k) \in \mathbb{R}^m. \qquad (2)$$
In general, suppose that the orders m, n, $n_c$, and $n_d$ of the system are known, and that $y(k) = 0$, $\Phi(k) = 0$, and $v(k) = 0$ for $k \leq 0$.
Define the parameter vector η , the parameter matrix κ , the information vector ψ ( k ) , and the information matrix Ψ ( k ) as
$$\eta := [\theta^{\rm T}, c_1, c_2, \cdots, c_{n_c}]^{\rm T} \in \mathbb{R}^{n+n_c}, \qquad (3)$$
$$\kappa^{\rm T} := [D_1, D_2, \cdots, D_{n_d}] \in \mathbb{R}^{m\times mn_d}, \qquad (4)$$
$$\psi(k) := [v^{\rm T}(k-1), v^{\rm T}(k-2), \cdots, v^{\rm T}(k-n_d)]^{\rm T} \in \mathbb{R}^{mn_d}, \qquad (5)$$
$$\Psi(k) := [\Phi(k), -w(k-1), -w(k-2), \cdots, -w(k-n_c)] \in \mathbb{R}^{m\times(n+n_c)}. \qquad (6)$$
Based on Equation (2), we obtain
$$w(k) = [1 - c(z)]w(k) + [D(z) - I_m]v(k) + v(k)$$
$$= (-c_1 z^{-1} - c_2 z^{-2} - \cdots - c_{n_c} z^{-n_c})w(k) + (D_1 z^{-1} + D_2 z^{-2} + \cdots + D_{n_d} z^{-n_d})v(k) + v(k)$$
$$= -\sum_{i=1}^{n_c} c_i w(k-i) + \sum_{j=1}^{n_d} D_j v(k-j) + v(k). \qquad (7)$$
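Equation (7) is exactly the recursion one would use to generate w(k) in a simulation. A minimal sketch with assumed example coefficients (m = 2, n_c = 2, n_d = 1; the values are mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 2, 200
c = [0.65, -0.31]                              # c(z) = 1 + 0.65 z^-1 - 0.31 z^-2
D = [np.array([[0.12, -0.17], [0.03, 0.24]])]  # D(z) = I2 + D1 z^-1

v = rng.normal(0.0, 0.2, size=(N, m))          # white noise, zero mean
w = np.zeros((N, m))
for k in range(N):
    acc = v[k].copy()
    for i, ci in enumerate(c, start=1):        # - sum_i c_i w(k-i)
        if k - i >= 0:
            acc -= ci * w[k - i]
    for j, Dj in enumerate(D, start=1):        # + sum_j D_j v(k-j)
        if k - j >= 0:
            acc += Dj @ v[k - j]
    w[k] = acc

# consistency check: c(z) w(k) should reproduce D(z) v(k)
k = N - 1
lhs = w[k] + c[0] * w[k - 1] + c[1] * w[k - 2]
rhs = v[k] + D[0] @ v[k - 1]
assert np.allclose(lhs, rhs)
```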
The identification model of the system in (1) is represented as
$$y(k) = \Phi(k)\theta + w(k) = \Phi(k)\theta - \sum_{i=1}^{n_c} c_i w(k-i) + \sum_{j=1}^{n_d} D_j v(k-j) + v(k) \qquad (8)$$
$$= \Psi(k)\eta + \kappa^{\rm T}\psi(k) + v(k). \qquad (9)$$
Uniting the information matrix Ψ ( k ) with the information vector ψ ( k ) , and the parameter vector η with the parameter matrix κ , the new information matrix Ω ( k ) and new parameter vector ϑ are
$$\Omega(k) := [\Psi(k), \psi^{\rm T}(k) \otimes I_m] \in \mathbb{R}^{m\times n_0}, \qquad n_0 := n + n_c + m^2 n_d, \qquad (10)$$
$$\vartheta := \begin{bmatrix} \eta \\ \mathrm{col}[\kappa] \end{bmatrix} \in \mathbb{R}^{n_0}. \qquad (11)$$
Equation (9) is changed into
$$y(k) = \Omega(k)\vartheta + v(k). \qquad (12)$$
The goal is to find effective identification methods to estimate the unknown parameters $\theta$, $c_i$, and $D_i$ contained in $\vartheta$. The analysis shows that if identification is performed directly on the model in (12), superfluous calculations will be generated in the parameter estimation process because the Kronecker product introduces a substantial number of zero elements into the information matrix $\Omega(k)$. With a view to enhancing the identification performance for the system in (1), it is necessary to explore another, more efficient identification method.

3. The Filtering-Based Multivariate Partially Coupled Gradient Algorithm

Analysis of System (1) shows that the noise reduces the parameter identification precision. To overcome the adverse effects of the disturbance, the data filtering method is adopted to convert the colored noise into white noise. Setting $c(z)$ as the filter for System (1) is an appropriate solution to this problem. First, multiply both sides of Equation (1) by $c(z)$:
$$c(z)y(k) = c(z)\Phi(k)\theta + D(z)v(k). \qquad (13)$$
Define the filtered output vector y f ( k ) and the filtered information matrix Φ f ( k ) as
$$y_f(k) := c(z)y(k) \in \mathbb{R}^m, \qquad \Phi_f(k) := c(z)\Phi(k) \in \mathbb{R}^{m\times n}.$$
Then, Equation (13) can be rewritten as
$$y_f(k) = \Phi_f(k)\theta + D(z)v(k) = \Phi_f(k)\theta + D_1 v(k-1) + D_2 v(k-2) + \cdots + D_{n_d} v(k-n_d) + v(k) = \Phi_f(k)\theta + \kappa^{\rm T}\psi(k) + v(k). \qquad (14)$$
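The effect of the filter can be checked numerically: after filtering, the residual y_f(k) − Φ_f(k)θ contains only the moving-average part D(z)v(k). A sketch with assumed example values (random Φ(k), n_d = 1; the coefficients are mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, N = 2, 3, 50
theta = rng.normal(size=n)
c1, c2 = 0.65, -0.31                     # c(z) = 1 + c1 z^-1 + c2 z^-2
D1 = np.array([[0.12, -0.17], [0.03, 0.24]])

v = rng.normal(0.0, 0.2, size=(N, m))
Phi = rng.normal(size=(N, m, n))
w = np.zeros((N, m))
for k in range(N):                       # w(k) = -c1 w(k-1) - c2 w(k-2) + D1 v(k-1) + v(k)
    w[k] = v[k]
    if k >= 1:
        w[k] += -c1 * w[k - 1] + D1 @ v[k - 1]
    if k >= 2:
        w[k] += -c2 * w[k - 2]
y = np.einsum("kij,j->ki", Phi, theta) + w

k = N - 1
y_f = y[k] + c1 * y[k - 1] + c2 * y[k - 2]            # y_f(k) = c(z) y(k)
Phi_f = Phi[k] + c1 * Phi[k - 1] + c2 * Phi[k - 2]    # Phi_f(k) = c(z) Phi(k)
# filtered residual reduces to the MA part: v(k) + D1 v(k-1)
assert np.allclose(y_f - Phi_f @ theta, v[k] + D1 @ v[k - 1])
```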
Let $\kappa_i^{\rm T} \in \mathbb{R}^{1\times mn_d}$ be the ith row of the parameter matrix $\kappa^{\rm T}$, $y_{fi}(k) \in \mathbb{R}$ be the ith element of the filtered output vector $y_f(k)$, and $\Phi_{fi}(k) \in \mathbb{R}^{1\times n}$ be the ith row of the filtered information matrix $\Phi_f(k)$, that is,
$$\kappa^{\rm T} := [D_1, D_2, \cdots, D_{n_d}] =: [\kappa_1, \kappa_2, \cdots, \kappa_m]^{\rm T}, \quad y_f(k) := [y_{f1}(k), y_{f2}(k), \cdots, y_{fm}(k)]^{\rm T}, \quad \Phi_f(k) := [\Phi_{f1}^{\rm T}(k), \Phi_{f2}^{\rm T}(k), \cdots, \Phi_{fm}^{\rm T}(k)]^{\rm T}.$$
Transform Equation (14) into m sub-identification models:
$$\begin{bmatrix} y_{f1}(k) \\ y_{f2}(k) \\ \vdots \\ y_{fm}(k) \end{bmatrix} = \begin{bmatrix} \Phi_{f1}(k) \\ \Phi_{f2}(k) \\ \vdots \\ \Phi_{fm}(k) \end{bmatrix}\theta + \begin{bmatrix} \kappa_1^{\rm T} \\ \kappa_2^{\rm T} \\ \vdots \\ \kappa_m^{\rm T} \end{bmatrix}\psi(k) + \begin{bmatrix} v_1(k) \\ v_2(k) \\ \vdots \\ v_m(k) \end{bmatrix}. \qquad (15)$$
Each row of this stacked model is described by
$$y_{fi}(k) = \Phi_{fi}(k)\theta + \kappa_i^{\rm T}\psi(k) + v_i(k), \quad i = 1, 2, \cdots, m. \qquad (16)$$
In Equation (16), the vectors $\theta$ and $\psi(k)$ are common to each subsystem, which is in line with the characteristics of the partially coupled identification model. Next, a parameter estimation algorithm employing the partial coupling concept for the identification model in (16) is derived in detail.
Define a gradient criterion function for the new identification model in (16) as
$$J_1(\theta, \kappa_i) := \|y_{fi}(k) - \Phi_{fi}(k)\theta - \kappa_i^{\rm T}\psi(k)\|^2, \quad i = 1, 2, \cdots, m.$$
Minimizing J 1 ( θ , κ i ) based upon the gradient search, the gradient relationships are
$$\hat{\theta}(k) = \hat{\theta}(k-1) + \frac{\Phi_{fi}^{\rm T}(k)}{r_{\theta,i}(k)}\big[y_{fi}(k) - \Phi_{fi}(k)\hat{\theta}(k-1) - \hat{\kappa}_i^{\rm T}(k-1)\psi(k)\big], \qquad (17)$$
$$r_{\theta,i}(k) = r_{\theta,i}(k-1) + \|\Phi_{fi}(k)\|^2, \quad r_{\theta,i}(0) = 1, \qquad (18)$$
$$\hat{\kappa}_i(k) = \hat{\kappa}_i(k-1) + \frac{\psi(k)}{r_{\kappa,i}(k)}\big[y_{fi}(k) - \Phi_{fi}(k)\hat{\theta}(k-1) - \hat{\kappa}_i^{\rm T}(k-1)\psi(k)\big], \qquad (19)$$
$$r_{\kappa,i}(k) = r_{\kappa,i}(k-1) + \|\psi(k)\|^2, \quad r_{\kappa,i}(0) = 1. \qquad (20)$$
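For a single subsystem, the normalized updates (17)–(20) amount to a few lines of code. A minimal sketch (variable names and the synthetic noiseless data are mine; the estimates should simply improve over the iterations):

```python
import numpy as np

def sg_step(theta, kappa_i, r_th, r_ka, Phi_fi, psi, y_fi):
    """One step of (17)-(20): normalized stochastic-gradient updates."""
    e = y_fi - Phi_fi @ theta - kappa_i @ psi   # innovation term in brackets
    r_th = r_th + Phi_fi @ Phi_fi               # r_theta(k) = r_theta(k-1) + ||Phi_fi||^2
    r_ka = r_ka + psi @ psi                     # r_kappa(k) = r_kappa(k-1) + ||psi||^2
    theta = theta + Phi_fi * (e / r_th)
    kappa_i = kappa_i + psi * (e / r_ka)
    return theta, kappa_i, r_th, r_ka

# quick check on noiseless synthetic data: the estimation error should shrink
rng = np.random.default_rng(0)
theta_true, kappa_true = np.array([0.5, -0.3, 0.8]), np.array([0.2, -0.1])
th, ka, r_th, r_ka = np.zeros(3), np.zeros(2), 1.0, 1.0
e0 = np.linalg.norm(th - theta_true) + np.linalg.norm(ka - kappa_true)
for _ in range(2000):
    Phi_fi, psi = rng.normal(size=3), rng.normal(size=2)
    y_fi = Phi_fi @ theta_true + kappa_true @ psi
    th, ka, r_th, r_ka = sg_step(th, ka, r_th, r_ka, Phi_fi, psi, y_fi)
assert np.linalg.norm(th - theta_true) + np.linalg.norm(ka - kappa_true) < e0
```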
However, the estimates $\hat{\theta}(k)$ and $\hat{\kappa}_i(k)$ in (17)–(20) cannot be computed because $y_{fi}(k)$, $\Phi_{fi}(k)$, and $\psi(k)$ involve the unmeasurable terms $c_i$ and $v(k)$. If we define
$$\varphi_y(k) := [y(k-1), y(k-2), \cdots, y(k-n_c)] \in \mathbb{R}^{m\times n_c}, \qquad (21)$$
$$\tau := [c_1, c_2, \cdots, c_{n_c}]^{\rm T} \in \mathbb{R}^{n_c}, \qquad (22)$$
Then, y f ( k ) and Φ f ( k ) are represented by
$$y_f(k) = c(z)y(k) = y(k) + c_1 y(k-1) + c_2 y(k-2) + \cdots + c_{n_c} y(k-n_c) = y(k) + \varphi_y(k)\tau, \qquad (23)$$
$$\Phi_f(k) = c(z)\Phi(k) = \Phi(k) + c_1\Phi(k-1) + c_2\Phi(k-2) + \cdots + c_{n_c}\Phi(k-n_c). \qquad (24)$$
However, $y_f(k)$ and $\Phi_f(k)$ still cannot be calculated because the $c_i$ are unknown. The auxiliary model identification method [48] is a classical method for solving system identification problems with unmeasured variables. Its essential idea is to replace an unknown variable with the output of an auxiliary model, or with its estimate when its value cannot be obtained. Here, utilizing the auxiliary model identification method and Equation (22), replacing $c_i$ with the estimates $\hat{c}_i(k)$, the estimate $\hat{\tau}(k)$ is formed as
$$\hat{\tau}(k) = [\hat{c}_1(k), \hat{c}_2(k), \cdots, \hat{c}_{n_c}(k)]^{\rm T}.$$
After that, replacing the unknown parameters $c_i$ and $\tau$ with the estimates $\hat{c}_i(k)$ and $\hat{\tau}(k)$ in (23) and (24), the estimates $\hat{y}_f(k)$ and $\hat{\Phi}_f(k)$ are calculated by
$$\hat{y}_f(k) = y(k) + \varphi_y(k)\hat{\tau}(k) = [\hat{y}_{f1}(k), \hat{y}_{f2}(k), \cdots, \hat{y}_{fm}(k)]^{\rm T},$$
$$\hat{\Phi}_f(k) = \Phi(k) + \hat{c}_1(k)\Phi(k-1) + \hat{c}_2(k)\Phi(k-2) + \cdots + \hat{c}_{n_c}(k)\Phi(k-n_c) = [\hat{\Phi}_{f1}^{\rm T}(k), \hat{\Phi}_{f2}^{\rm T}(k), \cdots, \hat{\Phi}_{fm}^{\rm T}(k)]^{\rm T}.$$
Meanwhile, according to Equation (5), the estimate ψ ^ ( k ) is formed by
$$\hat{\psi}(k) = [\hat{v}^{\rm T}(k-1), \hat{v}^{\rm T}(k-2), \cdots, \hat{v}^{\rm T}(k-n_d)]^{\rm T}.$$
Define the noise information matrix:
$$\chi(k) := [-w(k-1), -w(k-2), \cdots, -w(k-n_c)] \in \mathbb{R}^{m\times n_c}.$$
Equation (7) can be written as
$$w(k) = -\sum_{i=1}^{n_c} c_i w(k-i) + \sum_{j=1}^{n_d} D_j v(k-j) + v(k) = \chi(k)\tau + \kappa^{\rm T}\psi(k) + v(k).$$
Define an intermediate vector $w_n(k) := w(k) - \kappa^{\rm T}\psi(k) \in \mathbb{R}^m$. Thus, the noise model is rewritten as
$$w_n(k) = \chi(k)\tau + v(k). \qquad (29)$$
In order to obtain the value of estimate τ ^ ( k ) , define another gradient criterion function for the noise model in Equation (29) as
$$J_2(\tau) := \|w_n(k) - \chi(k)\tau\|^2.$$
Minimizing J 2 ( τ ) based upon the gradient search, the gradient relationship is
$$\hat{\tau}(k) = \hat{\tau}(k-1) + \frac{\chi^{\rm T}(k)}{r_\tau(k)}\big[w_n(k) - \chi(k)\hat{\tau}(k-1)\big],$$
$$r_\tau(k) = r_\tau(k-1) + \|\chi(k)\|^2, \quad r_\tau(0) = 1.$$
Obviously, the parameter vector estimate τ ^ ( k ) cannot be calculated because w n ( k ) and χ ( k ) are unknown. Replacing them with estimates w ^ n ( k ) and χ ^ ( k ) can solve this problem. We have
$$\hat{\chi}(k) = [-\hat{w}(k-1), -\hat{w}(k-2), \cdots, -\hat{w}(k-n_c)],$$
$$\hat{w}_n(k) = \hat{w}(k-1) - \hat{\kappa}^{\rm T}(k-1)\hat{\psi}(k).$$
According to Equation (8), substitute the estimate θ ^ ( k ) for the unmeasurable term θ . Then, the estimate w ^ ( k ) is calculated by
$$\hat{w}(k) = y(k) - \Phi(k)\hat{\theta}(k).$$
Similarly, according to Equation (14), replacing unmeasurable terms Φ f ( k ) , θ , and κ with their estimates Φ ^ f ( k ) , θ ^ ( k ) , and κ ^ ( k ) , the estimate v ^ ( k ) is calculated by
$$\hat{v}(k) = \hat{y}_f(k) - \hat{\Phi}_f(k)\hat{\theta}(k) - \hat{\kappa}^{\rm T}(k)\hat{\psi}(k).$$
There are superfluous estimates in the algorithm (17)–(20) because $\theta$ is repeatedly computed m times. To reduce this excess computation, $\hat{\theta}_i$ is used instead of $\hat{\theta}$ in (17) and (19). Meanwhile, substituting the estimates $\hat{\Phi}_{fi}^{\rm T}(k)$, $\hat{y}_{fi}(k)$, and $\hat{\psi}(k)$ for the unknown terms $\Phi_{fi}^{\rm T}(k)$, $y_{fi}(k)$, and $\psi(k)$, the new algorithm is
$$\hat{\theta}_i(k) = \hat{\theta}_i(k-1) + \frac{\hat{\Phi}_{fi}^{\rm T}(k)}{r_{\theta,i}(k)}\big[\hat{y}_{fi}(k) - \hat{\Phi}_{fi}(k)\hat{\theta}_i(k-1) - \hat{\kappa}_i^{\rm T}(k-1)\hat{\psi}(k)\big], \qquad (36)$$
$$r_{\theta,i}(k) = r_{\theta,i}(k-1) + \|\hat{\Phi}_{fi}(k)\|^2, \quad r_{\theta,i}(0) = 1, \qquad (37)$$
$$\hat{\kappa}_i(k) = \hat{\kappa}_i(k-1) + \frac{\hat{\psi}(k)}{r_{\kappa,i}(k)}\big[\hat{y}_{fi}(k) - \hat{\Phi}_{fi}(k)\hat{\theta}_i(k-1) - \hat{\kappa}_i^{\rm T}(k-1)\hat{\psi}(k)\big], \qquad (38)$$
$$r_{\kappa,i}(k) = r_{\kappa,i}(k-1) + \|\hat{\psi}(k)\|^2, \quad r_{\kappa,i}(0) = 1. \qquad (39)$$
In recursive algorithms, the parameter estimates approach the true values as the data length increases. The estimate $\hat{\theta}_{i-1}(k)$ of the $(i-1)$th subsystem at time k is closer to the true value $\theta$ than the estimate $\hat{\theta}_i(k-1)$ of the ith subsystem at time $k-1$. Accordingly, in Algorithm (36)–(39), substitute $\hat{\theta}_{i-1}(k)$ for $\hat{\theta}_i(k-1)$ on the right-hand side of Equation (36), and substitute $\hat{\theta}_m(k-1)$ for $\hat{\theta}_1(k-1)$ in Equation (36) when $i = 1$. To conclude, the filtering-based multivariate partially coupled generalized extended stochastic gradient (F-M-PC-GESG) algorithm is as follows.
$$\hat{\theta}_1(k) = \hat{\theta}_m(k-1) + \frac{\hat{\Phi}_{f1}^{\rm T}(k)}{r_{\theta,1}(k)}\big[\hat{y}_{f1}(k) - \hat{\Phi}_{f1}(k)\hat{\theta}_m(k-1) - \hat{\kappa}_1^{\rm T}(k-1)\hat{\psi}(k)\big], \qquad (40)$$
$$r_{\theta,1}(k) = r_{\theta,1}(k-1) + \|\hat{\Phi}_{f1}(k)\|^2, \quad r_{\theta,1}(0) = 1, \qquad (41)$$
$$\hat{\kappa}_1(k) = \hat{\kappa}_1(k-1) + \frac{\hat{\psi}(k)}{r_{\kappa,1}(k)}\big[\hat{y}_{f1}(k) - \hat{\Phi}_{f1}(k)\hat{\theta}_m(k-1) - \hat{\kappa}_1^{\rm T}(k-1)\hat{\psi}(k)\big], \qquad (42)$$
$$r_{\kappa,1}(k) = r_{\kappa,1}(k-1) + \|\hat{\psi}(k)\|^2, \quad r_{\kappa,1}(0) = 1, \qquad (43)$$
$$\hat{\theta}_i(k) = \hat{\theta}_{i-1}(k) + \frac{\hat{\Phi}_{fi}^{\rm T}(k)}{r_{\theta,i}(k)}\big[\hat{y}_{fi}(k) - \hat{\Phi}_{fi}(k)\hat{\theta}_{i-1}(k) - \hat{\kappa}_i^{\rm T}(k-1)\hat{\psi}(k)\big], \quad i = 2, 3, \cdots, m, \qquad (44)$$
$$r_{\theta,i}(k) = r_{\theta,i}(k-1) + \|\hat{\Phi}_{fi}(k)\|^2, \quad r_{\theta,i}(0) = 1, \qquad (45)$$
$$\hat{\kappa}_i(k) = \hat{\kappa}_i(k-1) + \frac{\hat{\psi}(k)}{r_{\kappa,i}(k)}\big[\hat{y}_{fi}(k) - \hat{\Phi}_{fi}(k)\hat{\theta}_{i-1}(k) - \hat{\kappa}_i^{\rm T}(k-1)\hat{\psi}(k)\big], \qquad (46)$$
$$r_{\kappa,i}(k) = r_{\kappa,i}(k-1) + \|\hat{\psi}(k)\|^2, \quad r_{\kappa,i}(0) = 1, \qquad (47)$$
$$\hat{y}_f(k) = y(k) + \varphi_y(k)\hat{\tau}(k) = [\hat{y}_{f1}(k), \hat{y}_{f2}(k), \cdots, \hat{y}_{fm}(k)]^{\rm T}, \qquad (48)$$
$$\hat{\Phi}_f(k) = \Phi(k) + \hat{c}_1(k)\Phi(k-1) + \hat{c}_2(k)\Phi(k-2) + \cdots + \hat{c}_{n_c}(k)\Phi(k-n_c) = [\hat{\Phi}_{f1}^{\rm T}(k), \hat{\Phi}_{f2}^{\rm T}(k), \cdots, \hat{\Phi}_{fm}^{\rm T}(k)]^{\rm T}, \qquad (49)$$
$$\hat{\psi}(k) = [\hat{v}^{\rm T}(k-1), \hat{v}^{\rm T}(k-2), \cdots, \hat{v}^{\rm T}(k-n_d)]^{\rm T}, \qquad (50)$$
$$\varphi_y(k) = [y(k-1), y(k-2), \cdots, y(k-n_c)], \qquad (51)$$
$$\hat{\tau}(k) = \hat{\tau}(k-1) + \frac{\hat{\chi}^{\rm T}(k)}{r_\tau(k)}\big[\hat{w}_n(k) - \hat{\chi}(k)\hat{\tau}(k-1)\big] = [\hat{c}_1(k), \hat{c}_2(k), \cdots, \hat{c}_{n_c}(k)]^{\rm T}, \qquad (52)$$
$$r_\tau(k) = r_\tau(k-1) + \|\hat{\chi}(k)\|^2, \quad r_\tau(0) = 1, \qquad (53)$$
$$\hat{\chi}(k) = [-\hat{w}(k-1), -\hat{w}(k-2), \cdots, -\hat{w}(k-n_c)], \qquad (54)$$
$$\hat{w}_n(k) = \hat{w}(k-1) - \hat{\kappa}^{\rm T}(k-1)\hat{\psi}(k), \qquad (55)$$
$$\hat{w}(k) = y(k) - \Phi(k)\hat{\theta}_m(k), \qquad (56)$$
$$\hat{v}(k) = \hat{y}_f(k) - \hat{\Phi}_f(k)\hat{\theta}_m(k) - \hat{\kappa}^{\rm T}(k)\hat{\psi}(k), \qquad (57)$$
$$\hat{\kappa}(k) = [\hat{\kappa}_1(k), \hat{\kappa}_2(k), \cdots, \hat{\kappa}_m(k)]. \qquad (58)$$
The calculation steps of the F-M-PC-GESG algorithm in (40)–(58) are presented as follows.
  • Let $k = 1$, set the initial values $\hat{\theta}_m(0) = \mathbf{1}_n/p_0$, $\hat{\kappa}(0) = \mathbf{1}_{mn_d\times m}/p_0$, $\hat{\tau}(0) = \mathbf{1}_{n_c}/p_0$, $r_{\theta,i}(0) = r_{\kappa,i}(0) = r_\tau(0) = 1$, $i = 1, 2, \cdots, m$, $\hat{w}(k-j) = 0$, $\hat{v}(k-j) = 0$, $j = 0, 1, \cdots, \max[n_c, n_d]$, $p_0 = 10^6$, and set the data length K.
  • Acquire the observation data Φ ( k ) and y ( k ) . Configure ψ ^ ( k ) , φ y ( k ) and χ ^ ( k ) using (50), (51) and (54).
  • Calculate r τ ( k ) and w ^ n ( k ) utilizing (53) and (55). Update τ ^ ( k ) using (52) and read c ^ i ( k ) from τ ^ ( k ) .
  • Compute $\hat{y}_f(k)$ and $\hat{\Phi}_f(k)$ using (48) and (49). Read $\hat{y}_{fi}(k)$ from $\hat{y}_f(k)$ and $\hat{\Phi}_{fi}^{\rm T}(k)$ from $\hat{\Phi}_f(k)$.
  • Compute the step size r θ , 1 ( k ) and r κ , 1 ( k ) utilizing (41) and (43).
  • Refresh the parameter estimates θ ^ 1 ( k ) and κ ^ 1 ( k ) utilizing (40) and (42).
  • Calculate r θ , i ( k ) and r κ , i ( k ) utilizing (45) and (47) when i = 2 , 3 , , m . Refresh the parameter estimates θ ^ i ( k ) and κ ^ i ( k ) utilizing (44) and (46).
  • Construct κ ^ ( k ) using (58). Calculate v ^ ( k ) and w ^ ( k ) using (57) and (56).
  • Increase k by 1 if $k < K$, and then skip to Step 2. Otherwise, obtain the parameter estimates $\hat{\theta}(k)$, $\hat{\kappa}(k)$, and $\hat{\tau}(k)$ and terminate.
By analyzing the whole calculation procedure, the F-M-PC-GESG algorithm uses the method of interactive estimation. That is, after setting the initial values, the estimate $\hat{\tau}(k)$ is obtained first, and its value is then used to calculate the estimates $\hat{\theta}_i(k)$ and $\hat{\kappa}_i(k)$. The loop continues until the estimates $\hat{\tau}(k)$, $\hat{\theta}_i(k)$, and $\hat{\kappa}_i(k)$ are stable.
The schematic diagram of the F-M-PC-GESG algorithm in (40)–(58) is given in Figure 1. It indicates that θ ^ i ( k ) in each subsystem are common whereas κ ^ i ( k ) is separate. Figure 1 clearly shows how the partially coupled identification approach of the F-M-PC-GESG algorithm operates.

4. The Multivariate Generalized Extended Stochastic Gradient Algorithm

The gradient algorithm is a classical identification method; it does not generate a covariance matrix in the calculation process, which markedly improves computational efficiency. In this section, the direct stochastic gradient method, without improvement, is utilized to identify the parameters of the multivariate system. Define another quadratic criterion function for the identification model in (12):
$$J_3(\vartheta) := \|\Omega(k)\vartheta - y(k)\|^2.$$
Suppose that μ ( k ) is the step size. Minimizing J 3 ( ϑ ) based upon the gradient search, the gradient relationship is
$$\hat{\vartheta}(k) = \hat{\vartheta}(k-1) - \mu(k)\,\mathrm{grad}\big[J_3(\hat{\vartheta}(k-1))\big] = \hat{\vartheta}(k-1) + \mu(k)\Omega^{\rm T}(k)\big[y(k) - \Omega(k)\hat{\vartheta}(k-1)\big].$$
Let μ ( k ) : = 1 / r ( k ) ; then, the relationship is changed into
$$\hat{\vartheta}(k) = \hat{\vartheta}(k-1) + \frac{\Omega^{\rm T}(k)}{r(k)}\big[y(k) - \Omega(k)\hat{\vartheta}(k-1)\big],$$
$$r(k) = r(k-1) + \|\Omega(k)\|^2.$$
The obstacle in identification is that ϑ ^ ( k ) cannot be calculated because v ( k ) and w ( k ) in Ω ( k ) are unmeasurable. Substitute estimates v ^ ( k ) and w ^ ( k ) for terms v ( k ) and w ( k ) . From Equations (8) and (12), estimates v ^ ( k ) and w ^ ( k ) are calculated by
$$\hat{v}(k) = y(k) - \hat{\Omega}(k)\hat{\vartheta}(k),$$
$$\hat{w}(k) = y(k) - \Phi(k)\hat{\theta}(k).$$
Considering that Ψ ( k ) also involves the unmeasurable w ( k ) , define Ψ ^ ( k ) by the estimate w ^ ( k ) :
$$\hat{\Psi}(k) = [\Phi(k), -\hat{w}(k-1), -\hat{w}(k-2), \cdots, -\hat{w}(k-n_c)].$$
Similarly, because ψ ( k ) involves the unmeasurable v ( k i ) , define ψ ^ ( k ) by the estimate v ^ ( k i ) as
$$\hat{\psi}(k) = [\hat{v}^{\rm T}(k-1), \hat{v}^{\rm T}(k-2), \cdots, \hat{v}^{\rm T}(k-n_d)]^{\rm T}.$$
Additionally, because Ω ( k ) involves the unmeasurable terms ψ ( k ) and Ψ ( k ) , define Ω ^ ( k ) by estimates ψ ^ ( k ) and Ψ ^ ( k ) :
$$\hat{\Omega}(k) = [\hat{\Psi}(k), \hat{\psi}^{\rm T}(k) \otimes I_m].$$
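The dimensions of this stacked matrix, and the structural zeros introduced by the Kronecker product (the superfluous computation noted in Section 2), can be checked with a short sketch (sizes are assumed examples):

```python
import numpy as np

m, n, nc, nd = 2, 3, 2, 1
rng = np.random.default_rng(0)
Psi_hat = rng.normal(size=(m, n + nc))   # [Phi(k), -w_hat(k-1), ..., -w_hat(k-nc)]
psi_hat = rng.normal(size=m * nd)        # stacked past noise estimates

kron_block = np.kron(psi_hat.reshape(1, -1), np.eye(m))  # psi^T (x) I_m
Omega_hat = np.hstack([Psi_hat, kron_block])

n0 = n + nc + m**2 * nd
assert Omega_hat.shape == (m, n0)
# the Kronecker block carries m*(m-1)*m*nd structural zeros
assert np.count_nonzero(kron_block == 0.0) == m * (m - 1) * m * nd
```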
Subsequently, the multivariate generalized extended stochastic gradient (M-GESG) algorithm is obtained as follows.
$$\hat{\vartheta}(k) = \hat{\vartheta}(k-1) + \frac{\hat{\Omega}^{\rm T}(k)}{r(k)}\big[y(k) - \hat{\Omega}(k)\hat{\vartheta}(k-1)\big], \qquad (65)$$
$$r(k) = r(k-1) + \|\hat{\Omega}(k)\|^2, \quad r(0) = 1, \qquad (66)$$
$$\hat{\Omega}(k) = [\hat{\Psi}(k), \hat{\psi}^{\rm T}(k) \otimes I_m], \qquad (67)$$
$$\hat{\Psi}(k) = [\Phi(k), -\hat{w}(k-1), -\hat{w}(k-2), \cdots, -\hat{w}(k-n_c)], \qquad (68)$$
$$\hat{\psi}(k) = [\hat{v}^{\rm T}(k-1), \hat{v}^{\rm T}(k-2), \cdots, \hat{v}^{\rm T}(k-n_d)]^{\rm T}, \qquad (69)$$
$$\hat{w}(k) = y(k) - \Phi(k)\hat{\theta}(k), \qquad (70)$$
$$\hat{v}(k) = y(k) - \hat{\Omega}(k)\hat{\vartheta}(k), \qquad (71)$$
$$\hat{\vartheta}(k) = \begin{bmatrix} \hat{\eta}(k) \\ \mathrm{col}[\hat{\kappa}(k)] \end{bmatrix}, \qquad (72)$$
$$\hat{\eta}(k) = [\hat{\theta}^{\rm T}(k), \hat{c}_1(k), \hat{c}_2(k), \cdots, \hat{c}_{n_c}(k)]^{\rm T}. \qquad (73)$$
The computation procedures of the M-GESG algorithm in (65)–(73) are presented as follows.
  • Let $k = 1$, set the initial values $\hat{\vartheta}(0) = \mathbf{1}_{n_0}/p_0$, $r(0) = 1$, $\hat{w}(k-j) = 0$, $\hat{v}(k-j) = 0$, $j = 0, 1, \cdots, \max[n_c, n_d]$, $p_0 = 10^6$, and set the data length K.
  • Acquire the observation data Φ ( k ) and y ( k ) . Configure the matrix Ψ ^ ( k ) and vector ψ ^ ( k ) utilizing (68) and (69).
  • Calculate Ω ^ ( k ) using (67), and calculate r ( k ) utilizing (66).
  • Refresh the parameter estimate vector $\hat{\vartheta}(k)$ utilizing (65). Read $\hat{\theta}(k)$ from $\hat{\vartheta}(k)$ via (72) and (73).
  • Calculate v ^ ( k ) and w ^ ( k ) utilizing (71) and (70).
  • Increase k by 1 if $k < K$, and then skip to Step 2. Otherwise, acquire the parameter estimate $\hat{\vartheta}(k)$ and terminate.

5. Convergence Analysis

The convergence of the F-M-PC-GESG algorithm is analyzed in this section.
Suppose that $F_k := \sigma(v(k), v(k-1), v(k-2), \cdots)$ is the $\sigma$-algebra sequence generated by $v(k)$, and that $\{v(k), F_k\}$ is a martingale difference sequence on a probability space $\{\Omega, F, P\}$ [49]. The sequence $\{v(k)\}$ satisfies
$$({\rm Q}1)\ {\rm E}[v(k)\,|\,F_{k-1}] = 0,\ {\rm a.s.}, \qquad ({\rm Q}2)\ {\rm E}[\|v(k)\|^2\,|\,F_{k-1}] \le \sigma^2 < \infty,\ {\rm a.s.}$$
Lemma 1.
For the systems in (16) and (29) and the F-M-PC-GESG algorithm in (40)–(58), the following inequalities hold:
$$\sum_{k=1}^{\infty}\frac{\|\hat{\Phi}_{fi}(k)\|^2}{r_{\theta,i}^2(k)} < \infty, \quad i = 1, 2, \cdots, m, \qquad \sum_{k=1}^{\infty}\frac{\|\hat{\psi}(k)\|^2}{r_{\kappa,i}^2(k)} < \infty, \quad i = 1, 2, \cdots, m, \qquad \sum_{k=1}^{\infty}\frac{\|\hat{\chi}(k)\|^2}{r_\tau^2(k)} < \infty.$$
Theorem 1.
For the systems in (16) and (29) and the F-M-PC-GESG algorithm in (40)–(58), suppose that (Q1) and (Q2) hold. There exist positive constants $\lambda_1$, $\lambda_2$, and $\lambda_3$ independent of k, and an integer K, such that the following persistent excitation conditions hold:
$$({\rm Q}3)\ \sum_{j=0}^{K}\frac{\hat{\Phi}_{fi}^{\rm T}(k+j)\hat{\Phi}_{fi}(k+j)}{r_{\theta,i}(k+j)} \ge \lambda_1 I, \quad i = 1, 2, \cdots, m,$$
$$({\rm Q}4)\ \sum_{j=0}^{K}\frac{\hat{\psi}(k+j)\hat{\psi}^{\rm T}(k+j)}{r_{\kappa,i}(k+j)} \ge \lambda_2 I, \quad i = 1, 2, \cdots, m,$$
$$({\rm Q}5)\ \sum_{j=0}^{K}\frac{\hat{\chi}^{\rm T}(k+j)\hat{\chi}(k+j)}{r_\tau(k+j)} \ge \lambda_3 I.$$
Then, the parameter estimation errors $\|\hat{\theta}_i(k) - \theta\|$, $\|\hat{\kappa}(k) - \kappa\|$, and $\|\hat{\tau}(k) - \tau\|$ given by the F-M-PC-GESG algorithm converge to zero in the mean square sense.
Through the above analysis, we can determine that the proposed algorithm can make parameter estimation errors of multivariate pseudo-linear systems converge to zero in the case of random disturbance. In other words, the proposed algorithm not only has the ability to estimate the unknown parameters accurately, but also has certain stability.
Theorem 1 can be proved in a similar way to that in [50] and is omitted here.

6. Simulations

This part demonstrates the superiority of the F-M-PC-GESG algorithm in identification performance through two simulations.
Example 1.
Consider the following multivariate equation-error autoregressive moving average system:
$$y(k) = \Phi(k)\theta + \frac{D(z)}{c(z)}v(k),$$
$$\Phi(k) = \begin{bmatrix} u_1(k-2) & \sin(y_1(k-2)) & y_1(k-1) & \cos(u_2(k-2)) \\ y_2(k-1) & \cos(u_1(k-1)) & u_2(k-1) & \sin(y_2(k-2)) \end{bmatrix},$$
$$\theta = [\theta_1, \theta_2, \theta_3, \theta_4]^{\rm T} = [0.24, 0.14, 0.60, 0.02]^{\rm T},$$
$$c(z) = 1 + c_1 z^{-1} + c_2 z^{-2} = 1 + 0.65 z^{-1} - 0.31 z^{-2},$$
$$D(z) = I_2 + \begin{bmatrix} d_{11} & d_{12} \\ d_{21} & d_{22} \end{bmatrix} z^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0.12 & 0.17 \\ 0.03 & 0.24 \end{bmatrix} z^{-1}.$$
The parameter vector to be identified is
$$\vartheta = [\theta_1, \theta_2, \theta_3, \theta_4, c_1, c_2, d_{11}, d_{12}, d_{21}, d_{22}]^{\rm T} = [0.24, 0.14, 0.60, 0.02, 0.65, -0.31, 0.12, 0.17, 0.03, 0.24]^{\rm T}.$$
In this simulation, the input vector $u(k) = [u_1(k), u_2(k)]^{\rm T} \in \mathbb{R}^2$ is a random sequence with zero mean and unit variance. The output vector is $y(k) = [y_1(k), y_2(k)]^{\rm T} \in \mathbb{R}^2$. The white noise vector $v(k) = [v_1(k), v_2(k)]^{\rm T} \in \mathbb{R}^2$ has zero mean; $\sigma_1^2$ and $\sigma_2^2$ are the variances of $v_1(k)$ and $v_2(k)$. Taking the noise variances $\sigma_1^2 = \sigma_2^2 = 0.20^2$, and utilizing the M-GESG algorithm and the F-M-PC-GESG algorithm to identify the system parameters, the parameter estimates and their errors $\delta := \|\hat{\vartheta}(k) - \vartheta\|/\|\vartheta\|$ are given in Table 1. The parameter identification errors versus k are given in Figure 2. The parameter estimates $\hat{\theta}_1(k)$, $\hat{\theta}_2(k)$, $\hat{\theta}_3(k)$, $\hat{\theta}_4(k)$, $\hat{c}_1(k)$, $\hat{c}_2(k)$, $\hat{d}_{11}(k)$, $\hat{d}_{12}(k)$, $\hat{d}_{21}(k)$, and $\hat{d}_{22}(k)$ versus k for the F-M-PC-GESG algorithm are given in Figure 3 and Figure 4.
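The data generation for this example can be sketched as follows (this is my row-wise reading of Φ(k); the coefficient signs follow the printed values with c_2 = −0.31 assumed, so treat both as assumptions rather than the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(7)
theta = np.array([0.24, 0.14, 0.60, 0.02])
c1, c2 = 0.65, -0.31                      # c(z) = 1 + c1 z^-1 + c2 z^-2
D1 = np.array([[0.12, 0.17], [0.03, 0.24]])
N = 500
u = rng.normal(0.0, 1.0, size=(N, 2))     # input: zero mean, unit variance
v = rng.normal(0.0, 0.2, size=(N, 2))     # sigma1 = sigma2 = 0.20
y = np.zeros((N, 2)); w = np.zeros((N, 2))

def phi(k):
    # information matrix Phi(k) built from past inputs and outputs
    return np.array([
        [u[k-2, 0], np.sin(y[k-2, 0]), y[k-1, 0], np.cos(u[k-2, 1])],
        [y[k-1, 1], np.cos(u[k-1, 0]), u[k-1, 1], np.sin(y[k-2, 1])],
    ])

for k in range(2, N):
    # w(k) = -c1 w(k-1) - c2 w(k-2) + D1 v(k-1) + v(k)
    w[k] = v[k] - c1 * w[k-1] - c2 * w[k-2] + D1 @ v[k-1]
    y[k] = phi(k) @ theta + w[k]
```

Feeding the resulting {Φ(k), y(k)} pairs into either algorithm then reproduces the kind of estimation runs summarized in Table 1.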
Example 2.
Consider another multivariate equation-error autoregressive moving average system with
$$\Phi(k) = \begin{bmatrix} u_1^2(k-1) & \cos^2(u_1(k-2)) & u_2(k-1) y_1(k-2) & u_2^3(k-1) & \sin^2(y_1(k-2)) \\ u_1(k-1) y_2(k-2) & y_1^3(k-1) & \sin^2(y_2(k-1)) & y_2^2(k-2) & \cos^2(y_2(k-1)) \end{bmatrix},$$
$$\theta = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5]^T = [0.08, -0.09, 0.02, -0.01, -0.03]^T,$$
$$c(z) = 1 + c_1 z^{-1} = 1 + 0.89 z^{-1},$$
$$D(z) = I_2 + D_1 z^{-1} + D_2 z^{-2}
      = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0.22 & -0.29 \\ -0.03 & 0.10 \end{bmatrix} z^{-1} + \begin{bmatrix} 0.04 & -0.08 \\ -0.05 & 0.12 \end{bmatrix} z^{-2}.$$
The parameter vector to be identified is
$$\vartheta = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, c_1, d_{1,11}, d_{1,12}, d_{2,11}, d_{2,12}, d_{1,21}, d_{1,22}, d_{2,21}, d_{2,22}]^T$$
$$= [0.08, -0.09, 0.02, -0.01, -0.03, 0.89, 0.22, -0.29, 0.04, -0.08, -0.03, 0.10, -0.05, 0.12]^T.$$
The simulation configuration in this example is the same as in Example 1. Setting the noise variances $\sigma_1^2 = 0.50^2$ and $\sigma_2^2 = 0.40^2$, the M-GESG algorithm and the F-M-PC-GESG algorithm are applied to identify the system parameters. The parameter estimates and their errors are given in Table 2, and the parameter identification errors versus k are given in Figure 5.
Based on Table 1, Table 2, and Figure 2, Figure 3, Figure 4 and Figure 5, the identification performance of the proposed algorithms is analyzed as follows.
  • Table 1, Table 2, Figure 2 and Figure 5 indicate that the parameter identification errors of the M-GESG and F-M-PC-GESG algorithms decrease as the data length k increases. This reveals that the proposed algorithms are valid for parameter identification of multivariate equation-error autoregressive moving average systems.
  • Figure 2 and Figure 5 show that, under the same data length and noise variances, the F-M-PC-GESG algorithm achieves higher parameter identification precision than the M-GESG algorithm.
  • Figure 3 and Figure 4 show that the F-M-PC-GESG algorithm obtains precise parameter estimates rapidly.
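The decreasing error trend noted above is characteristic of negative-gradient updates with an accumulated gain. The following sketch illustrates that mechanism on a simple scalar-output linear regression; it is a generic stochastic gradient recursion under illustrative parameter values, not the M-GESG or F-M-PC-GESG algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: y(k) = phi(k)^T theta + v(k), with an illustrative theta.
theta = np.array([0.5, -0.3, 0.8])
theta_hat = np.zeros(3)
r = 1.0                                   # accumulated gain r(k)
errors = []
for k in range(5000):
    phi_k = rng.standard_normal(3)
    y_k = phi_k @ theta + 0.1 * rng.standard_normal()
    r += phi_k @ phi_k                    # r(k) = r(k-1) + ||phi(k)||^2
    # Negative-gradient step with step size 1/r(k); the shrinking step
    # size is why the estimation error keeps decreasing with k.
    theta_hat += (phi_k / r) * (y_k - phi_k @ theta_hat)
    errors.append(np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta))

print(f"relative error at k=100: {errors[99]:.4f}, at k=5000: {errors[-1]:.4f}")
```

As in Table 1 and Table 2, the relative error of such a recursion decreases slowly but monotonically on average, which is the behavior the filtering and partial coupling refinements are designed to accelerate.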

7. Conclusions

This paper presents methods for improving the parameter identification of multivariate pseudo-linear systems under random interference and parameter coupling, providing modular solutions for the modeling and forecasting of real multivariate time series. To handle colored noise and high-dimensional unknown parameters, the original system is processed by the designed filter and then transformed into several subsystem identification models using the coupling identification approach. A new filtering-based multivariate gradient algorithm is proposed, which achieves higher parameter estimation precision and faster identification than the conventional multivariate gradient algorithm. The convergence analysis and simulation experiments confirm that the F-M-PC-GESG algorithm obtains precise estimates of the unknown parameters rapidly. Future research directions include applying the proposed methods to parameter identification problems for other linear or nonlinear models under random interference in various engineering systems.

Author Contributions

Methodology, P.M.; Writing—original draft, P.M.; Writing—review & editing, Y.C.; Supervision, Y.L.; Funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62103293), by the Natural Science Foundation of Jiangsu Province (No. BK20210709), by the Suzhou Municipal Science and Technology Bureau (No. SYG202138), and by the ‘Taihu Light’ Basic Research Project on Scientific and Technological Breakthroughs of Wuxi (No. K20221006).

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.Y.; Jiang, W.; Charalambous, T. Machine learning based iterative learning control for non-repetitive time-varying systems. Int. J. Robust Nonlinear Control 2022, 33, 4098–4116. [Google Scholar] [CrossRef]
  2. Zhang, X.; Lei, Y.; Chen, H.; Zhang, L.; Zhou, Y. Multivariate time-series modeling for forecasting sintering temperature in rotary kilns using DCGNet. IEEE Trans. Ind. Inform. 2021, 17, 4635–4645. [Google Scholar] [CrossRef]
  3. Terbuch, A.; O’Leary, P.; Khalili-Motlagh-Kasmaei, N.; Auer, P.; Zohrer, A.; Winter, V. Detecting anomalous multivariate time-series via hybrid machine learning. IEEE Trans. Instrum. Meas. 2023, 72, 2503711. [Google Scholar] [CrossRef]
  4. Niaki, S.T.A.; Davoodi, M. Designing a multivariate-multistage quality control system using artificial neural networks. Int. J. Prod. Res. 2009, 47, 251–271. [Google Scholar] [CrossRef]
  5. Wen, C.; Xie, Y.; Qiao, Z.; Xu, L.; Qian, Y. A tensor generalized weighted linear predictor for FDA-MIMO radar parameter estimation. IEEE Trans. Veh. Technol. 2022, 71, 6059–6072. [Google Scholar] [CrossRef]
  6. Olivier, L.E.; Craig, I.K. Model-plant mismatch detection and model update for a run-of-mine ore milling circuit under model predictive control. J. Process Control 2013, 23, 100–107. [Google Scholar] [CrossRef]
  7. Zhang, Z.; Ren, J.; Bai, W. MIMO non-parametric modeling of ship maneuvering motion for marine simulator using adaptive moment estimation locally weighted learning. Ocean. Eng. 2022, 261, 112103. [Google Scholar] [CrossRef]
  8. Qi, R.; Tao, G.; Jiang, B. Adaptive control of MIMO time-varying systems with indicator function based parametrization. Automatica 2014, 50, 1369–1380. [Google Scholar] [CrossRef]
  9. Wang, Z.; Xie, L.; Wan, Q. Beamspace joint azimuth, elevation, and delay estimation for large-scale MIMO-OFDM system. IEEE Trans. Instrum. Meas. 2023, 72, 9506412. [Google Scholar] [CrossRef]
  10. Ibrir, S. Joint state and parameter estimation of non-linearly parameterized discrete-time nonlinear systems. Automatica 2018, 97, 226–233. [Google Scholar] [CrossRef]
  11. Mari, J.; Stoica, P.; McKelvey, T. Vector ARMA estimation: A reliable subspace approach. IEEE Trans. Signal Process. 2000, 48, 2092–2104. [Google Scholar] [CrossRef]
  12. Luo, K.; Manikas, A. Superresolution multitarget parameter estimation in MIMO radar. IEEE Trans. Geosci. Remote. Sens. 2013, 51, 3683–3693. [Google Scholar] [CrossRef]
  13. Zhang, G.D.; Sheng, Y.H.; Shi, Y.X. Uncertain hypothesis testing of multivariate uncertain regression model. J. Intell. Fuzzy Syst. 2022, 43, 7341–7350. [Google Scholar] [CrossRef]
  14. Oigard, T.A.; Hanssen, A.; Hansen, R.E.; Godtliebsen, F. EM-estimation and modeling of heavy-tailed processes with the multivariate normal inverse Gaussian distribution. Signal Process. 2005, 85, 1655–1673. [Google Scholar] [CrossRef]
  15. Ding, F.; Liu, X.M.; Chen, H.B.; Yao, G.Y. Hierarchical gradient based and hierarchical least squares based iterative parameter identification for CARARMA systems. Signal Process. 2014, 97, 31–39. [Google Scholar] [CrossRef]
  16. Xu, L.; Ding, F. Separable synthesis estimation methods and convergence analysis for multivariable systems. J. Comput. Appl. Math. 2023, 427, 115104. [Google Scholar] [CrossRef]
  17. Xu, L. Separable Newton recursive estimation method through system responses based on dynamically discrete measurements with increasing data length. Int. J. Control Autom. Syst. 2022, 20, 432–443. [Google Scholar] [CrossRef]
  18. Cui, T.; Chen, F.; Ding, F.; Sheng, J. Combined estimation of the parameters and states for a multivariable state-space system in presence of colored noise. Int. J. Adapt. Control Signal Process. 2020, 34, 590–613. [Google Scholar] [CrossRef]
  19. Zheng, M.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Zhao, H. Parameters estimation and synchronization of uncertain coupling recurrent dynamical neural networks with time-varying delays based on adaptive control. Neural Comput. Appl. 2018, 30, 2217–2227. [Google Scholar] [CrossRef]
  20. Geng, X.; Xie, L. Learning the LMP-load coupling from data: A support vector machine based approach. IEEE Trans. Power Syst. 2017, 32, 1127–1138. [Google Scholar]
  21. Ding, F.; Liu, G.; Liu, X.P. Partially coupled stochastic gradient identification methods for non-uniformly sampled systems. IEEE Trans. Autom. Control 2010, 55, 1976–1981. [Google Scholar] [CrossRef]
  22. Zhou, Y.; Zhang, X.; Ding, F. Partially-coupled nonlinear parameter optimization algorithm for a class of multivariate hybrid models. Appl. Math. Comput. 2022, 414, 126663. [Google Scholar] [CrossRef]
  23. Huang, Y.G.; Cao, L.B.; Zhang, J.; Pan, L.; Liu, Y.Y. Exploring feature coupling and model coupling for image source identification. IEEE Trans. Inf. Forensics Secur. 2018, 13, 3108–3121. [Google Scholar] [CrossRef]
  24. Wang, X.H.; Ding, F. Partially coupled extended stochastic gradient algorithm for nonlinear multivariable output error moving average systems. Eng. Comput. 2017, 34, 629–647. [Google Scholar] [CrossRef]
  25. Ma, P.; Wang, L. Filtering-based recursive least squares estimation approaches for multivariate equation-error systems by using the multiinnovation theory. Int. J. Adapt. Control Signal Process. 2021, 35, 1898–1915. [Google Scholar] [CrossRef]
  26. Ma, J.; Xiong, W.; Ding, F.; Alsaedi, A.; Hayat, T. Data filtering based forgetting factor stochastic gradient algorithm for Hammerstein systems with saturation and preload nonlinearities. J. Frankl.-Inst.-Eng. Appl. Math. 2016, 353, 4280–4299. [Google Scholar] [CrossRef]
  27. Li, M.; Liu, X. The least squares based iterative algorithms for parameter estimation of a bilinear system with autoregressive noise using the data filtering technique. Signal Process. 2018, 147, 23–34. [Google Scholar] [CrossRef]
  28. Ji, Y.; Jiang, A. Filtering-based accelerated estimation approach for generalized time-varying systems with disturbances and colored noises. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 206–210. [Google Scholar] [CrossRef]
  29. Imani, M.; Braga-Neto, U.M. Maximum-likelihood adaptive filter for partially observed boolean dynamical systems. IEEE Trans. Signal Process. 2017, 65, 359–371. [Google Scholar] [CrossRef]
  30. Zhang, C.; Liu, H.; Ji, Y. Gradient parameter estimation of a class of nonlinear systems based on the maximum likelihood principle. Int. J. Control Autom. Syst. 2022, 20, 1393–1404. [Google Scholar] [CrossRef]
  31. Chen, J.; Zhu, Q.M.; Liu, Y.J. Modified Kalman filtering based multi-step length gradient iterative algorithm for ARX models with random missing outputs. Automatica 2020, 118, 109034. [Google Scholar]
  32. Li, M.H.; Liu, X.M. Iterative identification methods for a class of bilinear systems by using the particle filtering technique. Int. J. Adapt. Control Signal Process. 2021, 35, 2056–2074. [Google Scholar] [CrossRef]
  33. Ding, F.; Liu, X.M.; Ma, X.Y. Kalman state filtering based least squares iterative parameter estimation for observer canonical state space systems using decomposition. J. Comput. Appl. Math. 2016, 301, 135–143. [Google Scholar] [CrossRef]
  34. Li, M.H.; Liu, X.M. Maximum likelihood least squares based iterative estimation for a class of bilinear systems using the data filtering technique. Int. J. Control Autom. Syst. 2020, 18, 1581–1592. [Google Scholar] [CrossRef]
  35. Chen, Y.Y.; Zhou, Y.W. Machine learning based decision making for time varying systems: Parameter estimation and performance optimization. Knowl.-Based Syst. 2020, 190, 105479. [Google Scholar] [CrossRef]
  36. Altbawi, S.M.A.; Khalid, S.B.A.; Bin Mokhtar, A.S.; Shareef, H.; Husain, N.; Yahya, A.; Haider, S.A.; Moin, L.; Alsisi, R.H. An improved gradient-based optimization algorithm for solving complex optimization problems. Processes 2023, 11, 498. [Google Scholar] [CrossRef]
  37. Xu, L. Parameter estimation for nonlinear functions related to system responses. Int. J. Control Autom. Syst. 2023, 21, 1780–1792. [Google Scholar] [CrossRef]
  38. Zhang, X.; Ding, F. Optimal adaptive filtering algorithm by using the fractional-order derivative. IEEE Signal Process. Lett. 2022, 29, 399–403. [Google Scholar] [CrossRef]
  39. Roman, C.; Ferrante, F.; Prieur, C. Parameter identification of a linear wave equation from experimental boundary data. IEEE Trans. Control Syst. Technol. 2021, 29, 2166–2179. [Google Scholar] [CrossRef]
  40. Chen, J.; Zhu, Q.M.; Hu, M.F.; Guo, L.X.; Narayan, P. Improved gradient descent algorithms for time-delay rational state-space systems: Intelligent search method and momentum method. Nonlinear Dyn. 2020, 101, 361–373. [Google Scholar] [CrossRef]
  41. Kulikova, M.V. Gradient-based parameter estimation in pairwise linear Gaussian system. IEEE Trans. Autom. Control 2017, 62, 1511–1517. [Google Scholar] [CrossRef]
  42. Imani, M.; Braga-Neto, U.M. Particle filters for partially-observed Boolean dynamical systems. Automatica 2018, 87, 238–250. [Google Scholar] [CrossRef]
  43. Ding, F. Least squares parameter estimation and multi-innovation least squares methods for linear fitting problems from noisy data. J. Comput. Appl. Math. 2023, 426, 115107. [Google Scholar] [CrossRef]
  44. Ding, F.; Xu, L.; Zhang, X.; Zhou, Y. Filtered auxiliary model recursive generalized extended parameter estimation methods for Box-Jenkins systems by means of the filtering identification idea. Int. J. Robust Nonlinear Control 2023, 33, 5510–5535. [Google Scholar] [CrossRef]
  45. Ma, P.; Wang, L. Partially coupled stochastic gradient estimation for multivariate equation-error systems. Mathematics 2022, 10, 2955. [Google Scholar] [CrossRef]
  46. Ding, F. Coupled-least-squares identification for multivariable systems. IET Control Theory Appl. 2013, 7, 68–79. [Google Scholar] [CrossRef]
  47. Xu, L.; Ding, F.; Zhu, Q. Separable synchronous multi-innovation gradient-based iterative signal modeling from on-line measurements. IEEE Trans. Instrum. Meas. 2022, 71, 6501313. [Google Scholar] [CrossRef]
  48. Wang, Y.J. Novel data filtering based parameter identification for multiple-input multiple-output systems using the auxiliary model. Automatica 2016, 71, 308–313. [Google Scholar] [CrossRef]
  49. Goodwin, G.C.; Sin, K.S. Adaptive Filtering, Prediction and Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1984. [Google Scholar]
  50. Ding, F. System Identification-Performances Analysis for Identification Methods; Science Press: Beijing, China, 2014. [Google Scholar]
Figure 1. The schematic diagram of the F-M-PC-GESG algorithm.
Figure 2. Identification errors versus k ($\sigma_1^2 = \sigma_2^2 = 0.20^2$).
Figure 3. Parameter estimates $\hat{\theta}_1(k)$, $\hat{\theta}_2(k)$, $\hat{\theta}_3(k)$, $\hat{\theta}_4(k)$, $\hat{c}_1(k)$ versus k.
Figure 4. Parameter estimates $\hat{c}_2(k)$, $\hat{d}_{11}(k)$, $\hat{d}_{12}(k)$, $\hat{d}_{21}(k)$, $\hat{d}_{22}(k)$ versus k.
Figure 5. Identification errors versus k ($\sigma_1^2 = 0.50^2$, $\sigma_2^2 = 0.40^2$).
Table 1. Parameter estimates and errors ($\sigma_1^2 = \sigma_2^2 = 0.20^2$).
| Algorithm | Parameter | k = 100 | k = 200 | k = 500 | k = 1000 | k = 2000 | k = 3000 | True Value |
|---|---|---|---|---|---|---|---|---|
| M-GESG | θ1 | 0.24221 | 0.24151 | 0.24223 | 0.23292 | 0.22983 | 0.22863 | 0.24000 |
| | θ2 | 0.05203 | 0.02792 | −0.00854 | −0.02886 | −0.04654 | −0.05528 | −0.14000 |
| | θ3 | −0.61451 | −0.61142 | −0.59979 | −0.61122 | −0.60372 | −0.60521 | −0.60000 |
| | θ4 | −0.00923 | 0.00479 | 0.00416 | 0.01756 | 0.01947 | 0.02117 | 0.02000 |
| | c1 | 0.42969 | 0.44032 | 0.44813 | 0.46623 | 0.47098 | 0.47768 | 0.65000 |
| | c2 | −0.43009 | −0.43931 | −0.44511 | −0.45792 | −0.45798 | −0.46094 | −0.31000 |
| | d11 | −0.05862 | −0.06323 | −0.06660 | −0.06933 | −0.07090 | −0.07281 | −0.12000 |
| | d12 | −0.00128 | −0.00411 | −0.00552 | −0.00673 | −0.00676 | −0.00691 | 0.17000 |
| | d21 | −0.02181 | −0.02119 | −0.02004 | −0.01749 | −0.01566 | −0.01452 | −0.03000 |
| | d22 | 0.00430 | 0.00556 | 0.00304 | −0.00250 | −0.00599 | −0.00827 | −0.24000 |
| | δ (%) | 42.79473 | 41.60834 | 40.01046 | 38.71188 | 37.83113 | 37.30980 | |
| F-M-PC-GESG | θ1 | 0.24743 | 0.23989 | 0.24511 | 0.23901 | 0.23664 | 0.23713 | 0.24000 |
| | θ2 | −0.10583 | −0.11597 | −0.12770 | −0.13172 | −0.13459 | −0.13672 | −0.14000 |
| | θ3 | −0.59934 | −0.58195 | −0.59534 | −0.59614 | −0.59351 | −0.59624 | −0.60000 |
| | θ4 | 0.01463 | 0.02155 | 0.01422 | 0.02609 | 0.02376 | 0.02292 | 0.02000 |
| | c1 | 0.65312 | 0.66910 | 0.68801 | 0.70132 | 0.71257 | 0.71886 | 0.65000 |
| | c2 | −0.34080 | −0.32821 | −0.31140 | −0.29937 | −0.28767 | −0.28125 | −0.31000 |
| | d11 | −0.16261 | −0.17089 | −0.15802 | −0.14864 | −0.13486 | −0.13573 | −0.12000 |
| | d12 | 0.03206 | 0.04136 | 0.04804 | 0.06647 | 0.07554 | 0.07958 | 0.17000 |
| | d21 | −0.02175 | −0.01882 | −0.01885 | −0.02282 | −0.02068 | −0.02177 | −0.03000 |
| | d22 | −0.14040 | −0.13357 | −0.13861 | −0.15208 | −0.15712 | −0.16094 | −0.24000 |
| | δ (%) | 17.67220 | 17.44393 | 16.38489 | 14.48147 | 13.95661 | 13.86945 | |
Table 2. Parameter estimates and errors ($\sigma_1^2 = 0.50^2$, $\sigma_2^2 = 0.40^2$).
| Algorithm | Parameter | k = 100 | k = 200 | k = 500 | k = 1000 | k = 2000 | k = 3000 | True Value |
|---|---|---|---|---|---|---|---|---|
| M-GESG | θ1 | 0.09112 | 0.08857 | 0.06625 | 0.08761 | 0.08713 | 0.08118 | 0.08000 |
| | θ2 | −0.10932 | −0.10922 | −0.11672 | −0.11222 | −0.11331 | −0.11291 | −0.09000 |
| | θ3 | 0.10330 | 0.06629 | 0.04765 | 0.03156 | 0.02940 | 0.02798 | 0.02000 |
| | θ4 | −0.08128 | −0.05999 | −0.05284 | −0.04692 | −0.04326 | −0.04202 | −0.01000 |
| | θ5 | 0.05408 | 0.04776 | 0.03644 | 0.03423 | 0.02818 | 0.02637 | −0.03000 |
| | c1 | 0.63454 | 0.64477 | 0.66918 | 0.69922 | 0.71524 | 0.73325 | 0.89000 |
| | d1,11 | 0.16746 | 0.16823 | 0.16294 | 0.15883 | 0.15732 | 0.15446 | 0.22000 |
| | d1,12 | −0.11067 | −0.10817 | −0.10595 | −0.10446 | −0.10122 | −0.10005 | −0.29000 |
| | d2,11 | −0.07832 | −0.08579 | −0.09696 | −0.10167 | −0.10818 | −0.11186 | 0.04000 |
| | d2,12 | −0.02125 | −0.02080 | −0.02172 | −0.02349 | −0.02480 | −0.02649 | −0.08000 |
| | d1,21 | 0.19580 | 0.19361 | 0.19552 | 0.20044 | 0.19851 | 0.19854 | −0.03000 |
| | d1,22 | −0.03530 | −0.03103 | −0.02681 | −0.02341 | −0.02217 | −0.02248 | 0.10000 |
| | d2,21 | −0.19341 | −0.19073 | −0.18208 | −0.17582 | −0.17124 | −0.16896 | −0.05000 |
| | d2,22 | 0.30383 | 0.29812 | 0.29649 | 0.29994 | 0.29897 | 0.29897 | 0.12000 |
| | δ (%) | 51.69208 | 50.09867 | 48.69494 | 47.52852 | 46.83530 | 46.27683 | |
| F-M-PC-GESG | θ1 | 0.06055 | 0.08860 | 0.07901 | 0.08999 | 0.08715 | 0.07885 | 0.08000 |
| | θ2 | −0.07339 | −0.08860 | −0.09644 | −0.09032 | −0.09163 | −0.08886 | −0.09000 |
| | θ3 | 0.07020 | 0.02402 | 0.01128 | 0.01168 | 0.01124 | 0.00870 | 0.02000 |
| | θ4 | −0.00816 | 0.00063 | 0.00035 | 0.00275 | 0.00146 | −0.00635 | −0.01000 |
| | θ5 | 0.00114 | −0.02136 | −0.03064 | −0.03338 | −0.03768 | −0.03383 | −0.03000 |
| | c1 | 0.99456 | 0.98815 | 0.98077 | 0.98014 | 0.97418 | 0.97488 | 0.89000 |
| | d1,11 | 0.16879 | 0.15751 | 0.18199 | 0.19942 | 0.21985 | 0.21906 | 0.22000 |
| | d1,12 | −0.27855 | −0.29182 | −0.30472 | −0.29502 | −0.29439 | −0.29678 | −0.29000 |
| | d2,11 | 0.10959 | 0.06096 | 0.05244 | 0.04869 | 0.02627 | 0.02232 | 0.04000 |
| | d2,12 | −0.20205 | −0.19511 | −0.16709 | −0.14945 | −0.13756 | −0.13392 | −0.08000 |
| | d1,21 | 0.05176 | 0.05175 | 0.03801 | 0.01758 | 0.01343 | 0.00674 | −0.03000 |
| | d1,22 | 0.02924 | 0.06773 | 0.08395 | 0.08658 | 0.09398 | 0.09718 | 0.10000 |
| | d2,21 | 0.02688 | 0.04268 | 0.03090 | 0.02807 | 0.01438 | 0.00469 | −0.05000 |
| | d2,22 | 0.14854 | 0.13947 | 0.11760 | 0.11307 | 0.10907 | 0.10424 | 0.12000 |
| | δ (%) | 23.94783 | 21.27826 | 17.32330 | 15.14836 | 13.24001 | 12.48907 | |