Article

The Maximum Correntropy Criterion-Based Identification for Fractional-Order Systems under Stable Distribution Noises

School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China
Mathematics 2023, 11(20), 4299; https://doi.org/10.3390/math11204299
Submission received: 18 September 2023 / Revised: 10 October 2023 / Accepted: 11 October 2023 / Published: 16 October 2023

Abstract

This paper studies the identification of fractional-order systems (FOSs) under stable distribution noises. First, the generalized operational matrix of block pulse functions is used to convert the identified system into an algebraic one. Then, the conventional least mean square (LMS) criterion is replaced by the maximum correntropy criterion (MCC) to restrain the effect of the noises, and an MCC-based algorithm is designed to perform the identification. To verify the superiority of the proposed method, the identification accuracy is examined when the noise follows different types of stable distributions. In addition, the impact of the parameters of the stable distribution on the identification accuracy is discussed. It is shown that as the noise becomes more impulsive, the identification error grows, but the proposed algorithm always outperforms its LMS counterpart. Moreover, the location parameter of the stable distribution noise has a significant impact on the identification accuracy.

1. Introduction

Fractional-order calculus (FOC), which originated more than 300 years ago, remained an abstract mathematical concept for a long time owing to the lack of accepted geometric or physical interpretations. It was not until the 1970s that FOC was introduced and applied in fields such as electrochemistry, rheology, energy consumption prediction, and epidemiology [1,2,3,4].
Fractional-order systems or models can be regarded as a generalization of integer-order systems or models based on the concept of FOC. Being nonlocal and history-dependent, FOSs can describe the anomalous behavior of dynamical systems more accurately than integer-order systems. Research related to FOSs can be divided into three main categories: control, synchronization, and identification [5,6,7,8]. Identification, in which measured input and output data are used to establish an appropriate mathematical model under certain optimization criteria, provides detailed information about the FOS before control and synchronization. Methods for FOS identification can be classified into frequency domain methods and time domain methods. A number of frequency domain methods have been developed for FOS identification. Malti et al. combined interval constraint techniques and constraint propagation techniques to identify FOSs from uncertain but bounded frequency domain data in [9]. Valério et al. extended the Levy method to identify non-commensurable FOSs from the frequency response in [10]. After analyzing the frequency characteristics of the fractional-order differentiator, Lin et al. proposed an unbiased parameter estimation method for FOSs in a noisy environment using auxiliary variables [11]. In [12], the recursive least squares identification algorithm was extended to identify commensurate FOSs in the frequency domain. In the time domain, most identification methods for FOSs borrow their ideas from existing integer-order identification methods, with equation-error- and output-error-based methods being the two main approaches. In [13], a refined instrumental variable method for continuous-time systems was extended to identify FOSs. In [14], the recursive least squares method with state-variable filters and the prediction-error method were proposed to identify continuous-time FOSs. In [15], a metaheuristic optimization-based method was used to identify FOSs. In addition, orthogonal-basis-based methods have been widely used in the identification of FOSs. Their main idea is to use orthogonal basis functions to approximate the fractional-order integral operation and then obtain the corresponding operational matrix. With operational matrices, the underlying system can be transformed into an algebraic one, which simplifies the identification process [16,17,18].
Notably, all the above-mentioned methods adopt the mean square error (MSE) as the objective function because of its low computational complexity [19,20]. Based on the MSE, the LMS criterion has been developed for various applications, but it performs well only when the measurement is noise-free or corrupted by Gaussian white noise. In practice, however, the Gaussian assumption does not always hold [21]. Noises that are more peaked or impulsive, which follow the stable distribution, are often encountered during the data measurement process. This causes LMS-based algorithms to lose performance in the identification process. Therefore, it is necessary to introduce a new identification criterion for FOS identification when the data are contaminated by stable distribution noises. Ref. [21] adopted an approximate absolute error function as the cost function and proposed a stochastic gradient method; when the independent variable fluctuates sharply, the derivative of this cost function approaches ±1, which restrains the influence of impulse noise. Moreover, correntropy has better robustness and has been used in signal processing and machine learning [22,23]. Over the past few years, a series of robust adaptive filters based on the MCC have been developed [24,25], and many efforts have been reported to improve the performance of MCC-based algorithms [26,27,28]. To the best of our knowledge, however, research on the application of the MCC in FOS identification has been very limited.
Motivated by the above discussion, this paper aims to design a robust method for identifying FOSs when the output signal is disturbed by stable distribution noises. The generalized operational matrix of block pulse functions (BPFs) is adopted to discretize the FOS and thus avoid the difficulty of directly computing the FOC of signals. Then, correntropy is chosen as the objective function instead of the conventional MSE to restrain the effect of noise on the identification process; correspondingly, an MCC-based stochastic gradient ascent (MCC-SGA) method is designed to identify FOSs. To verify the superiority of the proposed algorithm, an identification algorithm based on the MSE is introduced, and the identification accuracy is examined when the noise follows different types of stable distributions. Additionally, the effects of the parameters of the identification algorithm and of the stable distribution on the identification accuracy are discussed. The contributions of this paper are threefold: (1) the identification problem of FOSs under stable distribution noises is studied; (2) it is verified both theoretically and experimentally that the MCC-SGA algorithm delivers better performance than the LMS-SGD algorithm; (3) the impact of the parameters of the identification algorithm and of the noise distribution on the identification accuracy is analyzed.
The remainder of the paper is organized as follows. In Section 2, a brief mathematical background on FOC, the BPF-based operational matrix, and the stable distribution noise is given. The identification of FOSs based on the MCC is proposed in Section 3, and the impact of the kernel width on the estimation accuracy is discussed in detail. In Section 4, simulation examples are given. Finally, concluding remarks are presented in Section 5.

2. Preliminaries

2.1. Definition of Fractional-Order Integral and Derivative

FOC is the generalization of classical integration and differentiation to non-integer orders. There are several definitions of FOC. For consistency, the Riemann–Liouville (R-L) definitions [29] are used throughout this paper; they are given as follows.
Definition 1.
The R-L fractional integral operator of order ν is defined as
I^{\nu} f(t) = \begin{cases} \frac{1}{\Gamma(\nu)} \int_0^t (t-s)^{\nu-1} f(s)\, ds = \frac{1}{\Gamma(\nu)}\, t^{\nu-1} * f(t), & \nu > 0, \\ f(t), & \nu = 0, \end{cases} \qquad (1)
where ( 0 , t ) is the integral interval, Γ ( · ) denotes the Gamma function, and ∗ is the convolution operator.
The R-L fractional-order integral is a linear operator, namely
I^{\nu} \big( \lambda f(t) + \mu g(t) \big) = \lambda I^{\nu} f(t) + \mu I^{\nu} g(t), \qquad (2)
where λ and μ are constants.
Definition 2.
The R-L fractional-order derivative is defined as
D^{\nu} f(t) = D^{n} \big[ I^{\,n-\nu} f(t) \big] = \frac{1}{\Gamma(n-\nu)} \left( \frac{d}{dt} \right)^{n} \int_0^t \frac{f(s)}{(t-s)^{\nu+1-n}}\, ds, \qquad (3)
where ν is a positive number and n is the smallest integer greater than ν.
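To make Definition 1 concrete, the following minimal Python sketch (not part of the paper's method; the function name rl_integral and the crude discretization are illustrative assumptions) approximates the R-L integral by a Riemann sum and checks it against the known closed form I^ν t = t^(ν+1)/Γ(ν+2).

```python
# Minimal numerical check of Definition 1 (illustrative only): approximate
# I^nu f(t) = (1/Gamma(nu)) * int_0^t (t - s)^(nu - 1) f(s) ds by a Riemann sum
# and compare with the closed form I^nu t = t^(nu+1) / Gamma(nu+2).
import numpy as np
from scipy.special import gamma

def rl_integral(f, t, nu, n=20000):
    """Midpoint-rule approximation of the R-L fractional integral of order nu at time t."""
    ds = t / n
    s = (np.arange(n) + 0.5) * ds          # midpoints avoid the weak singularity at s = t
    return np.sum((t - s) ** (nu - 1) * f(s)) * ds / gamma(nu)

nu, t = 0.5, 2.0
approx = rl_integral(lambda s: s, t, nu)
exact = t ** (nu + 1) / gamma(nu + 2)
print(approx, exact)                        # close; the singular kernel limits the accuracy
```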

2.2. Generalized Operational Matrices of Block Pulse Functions

A set of BPFs in the semi-open interval [ 0 , T ) can be defined as follows:
\psi_i(t) = \begin{cases} 1, & i h \le t < (i+1) h, \\ 0, & \text{otherwise}, \end{cases} \qquad i = 0, \ldots, M-1, \qquad (4)
where h = T/M is the sampling period and M is the number of BPFs in the set.
According to [30], an arbitrary absolutely integrable function defined on the interval [ 0 , T ) can be represented by a linear combination of the BPFs as
f(t) \approx \sum_{i=0}^{M-1} f_i\, \psi_i(t) = f^{T} \Psi_M(t), \qquad (5)
where the superscript T denotes transposition, Ψ M ( t ) = [ ψ 0 ( t ) , ψ 1 ( t ) , , ψ M 1 ( t ) ] T is the block pulse vector of order M, and f = [ f 0 , f 1 , , f M 1 ] T is the coefficient vector defined as
f_i = \frac{1}{h} \int_{ih}^{(i+1)h} f(t)\, dt \approx f(ih). \qquad (6)
Remark 1.
The approximation errors arising from Equations (5) and (6) approach zero as the value of M tends to infinity [31].
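As a small illustration of Equations (5) and (6) (the helper names below are assumptions, not code from the paper), the following sketch computes the BPF coefficients of a signal and evaluates the resulting piecewise-constant approximation.

```python
# Illustrative sketch of the BPF expansion of Equations (5)-(6):
# f_i ~ f(i h), and f(t) is approximated by the coefficient of the active pulse.
import numpy as np

def bpf_coefficients(f, T, M):
    """Coefficient vector [f_0, ..., f_{M-1}] with f_i ≈ f(i h), h = T / M."""
    h = T / M
    return f(np.arange(M) * h)

def bpf_eval(coeffs, T, t):
    """Evaluate sum_i f_i psi_i(t) for t in [0, T)."""
    M = len(coeffs)
    h = T / M
    idx = np.clip((np.asarray(t) / h).astype(int), 0, M - 1)   # index of the active pulse
    return coeffs[idx]

T, M = 16.0, 1600
c = bpf_coefficients(np.sin, T, M)
print(bpf_eval(c, T, 1.234), np.sin(1.234))   # nearly equal for large M (Remark 1)
```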
Based on Equation (1), the R-L fractional-order integration of the block pulse vector can be obtained as
I^{\nu} \Psi_M(t) = \frac{1}{\Gamma(\nu)}\, t^{\nu-1} * \Psi_M(t), \qquad (7)
which can be written in a matrix form, i.e.,
I^{\nu} \Psi_M(t) = P^{\nu}\, \Psi_M(t). \qquad (8)
Here P ν is the operational matrix of R-L integration, and given as
P^{\nu} = \frac{h^{\nu}}{\Gamma(\nu+2)} \begin{bmatrix} \xi_1^{\nu} & \xi_2^{\nu} & \cdots & \xi_M^{\nu} \\ 0 & \xi_1^{\nu} & \cdots & \xi_{M-1}^{\nu} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \xi_1^{\nu} \end{bmatrix}, \qquad (9)
with \xi_1^{\nu} = 1 and \xi_l^{\nu} = l^{\nu+1} - 2(l-1)^{\nu+1} + (l-2)^{\nu+1}, l = 2, \ldots, M. Combining with Equation (5) and using the operational matrix, the fractional integration of the function f(t) can be written as
I^{\nu} f(t) = f^{T} P^{\nu} \Psi_M(t). \qquad (10)
After some matrix operations, Equation (10) can be rewritten as
I^{\nu} f(t) = \frac{h^{\nu}}{\Gamma(\nu+2)} \sum_{i=0}^{M-1} q_i\, \psi_i(t), \qquad (11)
where
q_i = \sum_{p=0}^{i} f_p\, \xi_{i-p+1}^{\nu}, \qquad (12)
and f p can be calculated by using Equation (6).
In this way, the fractional integration of a function can be converted into an algebraic operation, and the complexity of directly calculating the fractional integral of the input and output signals is avoided.
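The following sketch (illustrative helper names; not the author's code) implements the coefficient recursion of Equations (9)-(12) and checks the BPF-based fractional integral of f(t) = t against its closed form.

```python
# Sketch of the BPF operational-matrix route of Equations (9)-(12): compute the
# xi coefficients, fractionally integrate f(t) = t, and compare with the exact
# result I^nu t = t^(nu+1) / Gamma(nu+2).
import numpy as np
from scipy.special import gamma

def xi(nu, M):
    """xi_1^nu, ..., xi_M^nu of Equation (9)."""
    x = np.ones(M)                                     # xi_1^nu = 1
    l = np.arange(2, M + 1, dtype=float)
    x[1:] = l**(nu + 1) - 2*(l - 1)**(nu + 1) + (l - 2)**(nu + 1)
    return x

def frac_integral_bpf(f_coeffs, nu, h):
    """q_i of Equation (12), scaled as in Equation (11): BPF coefficients of I^nu f."""
    M = len(f_coeffs)
    x = xi(nu, M)
    q = np.array([np.dot(f_coeffs[:i + 1], x[i::-1]) for i in range(M)])
    return h**nu / gamma(nu + 2) * q

T, M, nu = 2.0, 2000, 0.5
h = T / M
t = np.arange(M) * h                                   # f_i ≈ f(i h) with f(t) = t
approx = frac_integral_bpf(t, nu, h)
exact = t**(nu + 1) / gamma(nu + 2)
print(np.max(np.abs(approx - exact)))                  # shrinks as M grows
```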
Remark 2.
Most of the currently reported methods utilize the definition of the Grünwald–Letnikov (G-L) derivative to calculate FOC, which poses a huge computational burden and amplifies the effect of noise [32]. In view of this, here, the operational matrices of BPFs are adopted to discretize the FOS.

2.3. Stable Distribution for Impulse Noise

In order to model impulse noise in real environments, the stable distribution is studied in this section. The stable distribution is a four-parameter family of distributions and is usually denoted by S(α, β, γ, μ). The parameter α is the characteristic exponent satisfying 0 < α ≤ 2, which controls the thickness of the tails of the distribution. μ ∈ (−∞, +∞) is the location parameter, which determines the center of the distribution. β ∈ [−1, 1] is the symmetry parameter, which specifies the skewness of the distribution. The parameter γ > 0 is the dispersion parameter, which acts like a variance and determines the spread of the distribution around its location parameter. The family of stable distributions includes three special cases:
  • Gaussian distribution: When α = 2, the stable distribution becomes a Gaussian distribution, and the skewness parameter β loses its effect. The probability density function of the Gaussian distribution is written as:
    f(x) = \frac{1}{\sqrt{8\pi\gamma^2}}\, e^{-\frac{(x-\mu)^2}{8\gamma^2}}. \qquad (13)
  • Cauchy distribution: When α = 1 and β = 0, the stable distribution becomes a Cauchy distribution. The probability density function of the Cauchy distribution is written as:
    f(x) = \frac{1}{\pi\gamma \left[ 1 + \left( \frac{x-\mu}{\gamma} \right)^{2} \right]}. \qquad (14)
  • Lévy distribution: When α = 1/2 and β = 1, the stable distribution reduces to a Lévy distribution, and the probability density function of the Lévy distribution is written as:
    f(x) = \sqrt{\frac{\gamma}{2\pi}}\, \frac{e^{-\frac{\gamma}{2(x-\mu)}}}{(x-\mu)^{3/2}}, \quad x > \mu. \qquad (15)
Except for the aforementioned three cases, the probability density function of stable distribution S ( α , β , γ , μ ) cannot be expressed in a closed form, but its characteristic function is given by
\phi(t; \alpha, \beta, \gamma, \mu) = \begin{cases} \exp\left\{ i\mu t - \gamma |t|^{\alpha} \left[ 1 - i\beta\, \mathrm{sgn}(t) \tan\frac{\pi\alpha}{2} \right] \right\}, & \alpha \neq 1, \\ \exp\left\{ i\mu t - \gamma |t| \left[ 1 + i\beta\, \frac{2}{\pi}\, \mathrm{sgn}(t) \ln|t| \right] \right\}, & \alpha = 1, \end{cases} \qquad (16)
where
\mathrm{sgn}(t) = \begin{cases} 1, & t > 0, \\ 0, & t = 0, \\ -1, & t < 0. \end{cases}
Based on the characteristic function, the probability density function of the stable distribution S(α, β, γ, μ) can be obtained by the inverse Fourier transform
p(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \phi(\omega; \alpha, \beta, \gamma, \mu)\, e^{-i\omega x}\, d\omega, which reduces to p(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{-|\omega|^{\alpha}}\, e^{-i\omega x}\, d\omega in the standard symmetric case (β = 0, γ = 1, μ = 0),
and the probability density functions of the stable distribution with different parameter values are plotted in Figure 1. Figure 1a shows that as α decreases, the tail of the probability density function becomes heavier, which means that noise samples of large magnitude occur with higher probability. Figure 1b demonstrates that when β > 0, the curve is skewed to the right and the right tail is thicker, so most of the noise is positive; when β < 0, the curve is skewed to the left and the left tail is thicker, so the noise is mostly negative; when β = 0, the distribution is symmetric around the location parameter. Figure 1c shows that as γ increases, the distribution becomes more spread out, which generates noise with larger fluctuations. Figure 1d shows that the location of the distribution is determined by the value of μ: as |μ| increases, the mean of the noise deviates farther from 0.
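As a practical note (not part of the paper), such noise can be generated with scipy.stats.levy_stable; be aware that SciPy's scale convention may differ from the dispersion convention of Equation (16) by a rescaling of γ, so the values below are purely illustrative.

```python
# Hedged sketch: draw alpha-stable noise samples with SciPy. The scale/dispersion
# convention of levy_stable may differ from Equation (16) by a rescaling of gamma.
import numpy as np
from scipy.stats import levy_stable

def stable_noise(alpha, beta, gamma, mu, size, seed=0):
    """Samples from S(alpha, beta, gamma, mu) with scale = gamma and location = mu."""
    return levy_stable.rvs(alpha, beta, loc=mu, scale=gamma, size=size, random_state=seed)

# the three noise types used later in Example 1, Case 2
gaussian_like = stable_noise(2.0, 0.0, 0.5, 0.0, 16000)
cauchy_like   = stable_noise(1.0, 0.0, 0.1, 0.0, 16000)
levy_like     = stable_noise(0.5, 1.0, 1e-5, 0.0, 16000)
print([float(np.max(np.abs(v))) for v in (gaussian_like, cauchy_like, levy_like)])
# heavier tails (smaller alpha) produce far larger extreme values
```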

3. Parameter Identification of FOS Based on MCC

3.1. Problem Formulation

Consider the linear FOS described by the following differential equation:
\sum_{i=0}^{n} a_i\, D^{\beta_i} x(t) = b\, u(t), \qquad (17)
where u(t) and x(t) are the input and output signals. The derivative orders β_i (i = 0, …, n) are known and sorted in ascending order, so that β_n is the largest. a_i and b are the unknown parameters to be identified. Under zero initial conditions, applying the fractional integral of order β_n to both sides of Equation (17) yields
\sum_{i=0}^{n} a_i\, I^{\beta_n - \beta_i} x(t) = b\, I^{\beta_n} u(t). \qquad (18)
According to Equation (5), u ( t ) and x ( t ) can be expanded onto the BPFs as
u(t) = U^{T} \Psi_M(t), \qquad x(t) = X^{T} \Psi_M(t), \qquad (19)
where U = [ u 0 , u 1 , , u M 1 ] T and X = [ x 0 , x 1 , , x M 1 ] T . Combined with Equation (10), we have
\sum_{i=0}^{n} a_i\, X^{T} P^{\beta_n - \beta_i} \Psi_M(t) = b\, U^{T} P^{\beta_n} \Psi_M(t). \qquad (20)
For ease of operation, the data are collected at the sampling times t = (k−1)h, k = 1, …, M, and the values of X^{T} P^{\beta_n - \beta_i} \Psi_M(t)\big|_{t=(k-1)h} and U^{T} P^{\beta_n} \Psi_M(t)\big|_{t=(k-1)h} are denoted as X_i(k) and U_n(k), respectively. Using Equation (11), X_i(k) and U_n(k) can be written as
X_i(k) = \frac{h^{\beta_n - \beta_i}}{\Gamma(\beta_n - \beta_i + 2)} \sum_{p=1}^{k} x_p\, \xi_{k-p+1}^{\beta_n - \beta_i} \qquad (21)
and
U_n(k) = \frac{h^{\beta_n}}{\Gamma(\beta_n + 2)} \sum_{p=1}^{k} u_p\, \xi_{k-p+1}^{\beta_n}. \qquad (22)
Then, Equation (18) can be discretized as follows:
\sum_{i=0}^{n} a_i\, X_i(k) = b\, U_n(k). \qquad (23)
Without loss of generality, we let a n = 1 ; then, Equation (23) can be rewritten as
X_n(k) = \phi^{T}(k)\, \theta, \qquad (24)
where
\phi(k) = [\, -X_0(k), \ldots, -X_{n-1}(k), U_n(k) \,]^{T}
and
\theta = [\, a_0, \ldots, a_{n-1}, b \,]^{T}, \qquad k = 1, \ldots, M.
In practice, the output x ( t ) is unobtainable, and the measured output y ( t ) can be presented as
y(t) = x(t) + v(t), \qquad (25)
where v ( t ) denotes the measurement noise. Then, Equation (18) can be converted to
\sum_{i=0}^{n} a_i\, I^{\beta_n - \beta_i} y(t) = b\, I^{\beta_n} u(t) + \sum_{i=0}^{n} a_i\, I^{\beta_n - \beta_i} v(t). \qquad (26)
Similar to Equation (19), y ( t ) and v ( t ) can be written as
y(t) = Y^{T} \Psi_M(t), \qquad v(t) = V^{T} \Psi_M(t), \qquad (27)
where Y = [ y 0 , y 1 , , y M 1 ] T and V = [ v 0 , v 1 , , v M 1 ] T . Combined with Equation (10), we have
\sum_{i=0}^{n} a_i\, Y^{T} P^{\beta_n - \beta_i} \Psi_M(t) = b\, U^{T} P^{\beta_n} \Psi_M(t) + \sum_{i=0}^{n} a_i\, V^{T} P^{\beta_n - \beta_i} \Psi_M(t). \qquad (28)
We denote the values of Y^{T} P^{\beta_n - \beta_i} \Psi_M(t)\big|_{t=(k-1)h} and V^{T} P^{\beta_n - \beta_i} \Psi_M(t)\big|_{t=(k-1)h} as Y_i(k) and V_i(k), which can be obtained in the same way as Equations (21) and (22). Subsequently, Equation (28) can be expressed in the following discrete form:
Y_n(k) = \phi_c^{T}(k)\, \theta + v_f(k), \qquad (29)
where
\phi_c(k) = [\, -Y_0(k), \ldots, -Y_{n-1}(k), U_n(k) \,]^{T} \qquad (30)
and
v_f(k) = \sum_{i=0}^{n} a_i\, V_i(k). \qquad (31)
The goal of identifying FOS (17) is to find a desired θ ^ that minimizes the objective function
J(\hat{\theta}) = J[e(k)] = J\big[ Y_n(k) - \phi_c^{T}(k)\, \hat{\theta} \big]. \qquad (32)
At present, the objective function J(·) is usually chosen as the MSE, which cannot achieve satisfactory performance in non-Gaussian situations, especially when the data are disturbed by impulse noises. The main reason is that the second-order statistics in the MSE amplify the contribution of strongly impulsive noise samples. Therefore, searching for an alternative objective function that is robust against large outliers or impulse noises is the main focus of this paper.
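For concreteness, the following sketch (assumed helper names, not the paper's code) builds the regression quantities of Equations (21), (22), and (29)-(31) from sampled input and output data, using the ξ coefficients of Equation (9).

```python
# Sketch of the regression form of Equation (29): rows of Phi are phi_c(k)^T and
# the target vector collects Y_n(k); the filtered signals follow Equations (21)-(22).
import numpy as np
from scipy.special import gamma

def xi(nu, M):
    x = np.ones(M)                                     # xi_1^nu = 1 (Equation (9))
    l = np.arange(2, M + 1, dtype=float)
    x[1:] = l**(nu + 1) - 2*(l - 1)**(nu + 1) + (l - 2)**(nu + 1)
    return x

def filtered(z, nu, h):
    """h^nu / Gamma(nu+2) * sum_{p<=k} z_p xi_{k-p+1}^nu for every k."""
    x = xi(nu, len(z))
    return h**nu / gamma(nu + 2) * np.array(
        [np.dot(z[:k + 1], x[k::-1]) for k in range(len(z))])

def regression_data(u, y, betas, h):
    """Phi and Y_n for known orders betas (ascending); beta_n - beta_n = 0 gives Y_n(k) = y_k."""
    bn = betas[-1]
    cols = [-filtered(y, bn - bi, h) for bi in betas[:-1]] + [filtered(u, bn, h)]
    return np.column_stack(cols), np.asarray(y, dtype=float)

# structural toy usage for a system like Example 1 (D^0.8 x + a0 x = b u)
h = 0.01
t = np.arange(0.0, 16.0, h)
u = np.sin(0.1 * np.pi * t)
y = np.cos(0.1 * np.pi * t)                            # placeholder "measured" output
Phi, Yn = regression_data(u, y, betas=[0.0, 0.8], h=h)
print(Phi.shape, Yn.shape)                             # (1600, 2) (1600,)
```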

3.2. Correntropy

Correntropy is a nonlinear measure of the similarity between two random variables X and Y, which is defined as [23]
V(X, Y) = E\big[ \kappa(X, Y) \big], \qquad (33)
where κ ( · ) is a positive definite kernel, and E [ · ] is the expectation operator. A widely used kernel in correntropy is the Gaussian kernel
\kappa(x, y) = G_{\sigma}(e) = \exp\left( -\frac{e^{2}}{2\sigma^{2}} \right), \qquad (34)
where e = x − y and σ > 0 denotes the kernel width. Gaussian kernel functions with different kernel widths and their derivatives are shown in Figure 2.
As can be seen, correntropy with the Gaussian kernel measures how similar two variables are within a neighborhood determined by the kernel width σ. The derivative of the kernel, G′_σ(e) = −(e/σ²) exp(−e²/(2σ²)), is close to zero whenever the error |e| is large relative to σ. The smaller the value of σ, the wider the range of errors over which this derivative is essentially zero, so large noise-induced errors contribute almost nothing to the update, which makes an MCC-based algorithm less sensitive to noise.
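The following small illustration (not from the paper) computes the sample correntropy of Equations (33)-(34) and contrasts its behavior with the MSE when a single impulsive outlier is present.

```python
# Sample correntropy (Gaussian kernel) versus MSE in the presence of one outlier.
import numpy as np

def correntropy(x, y, sigma):
    """Sample estimate of V(X, Y) = E[exp(-(x - y)^2 / (2 sigma^2))]."""
    e = np.asarray(x) - np.asarray(y)
    return float(np.mean(np.exp(-e**2 / (2 * sigma**2))))

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y_clean = x + 0.1 * rng.normal(size=1000)
y_outlier = y_clean.copy()
y_outlier[0] += 100.0                                  # one impulsive disturbance

for name, y in [("clean", y_clean), ("with outlier", y_outlier)]:
    print(name, "MSE =", round(float(np.mean((x - y)**2)), 3),
          "correntropy(sigma=1.5) =", round(correntropy(x, y, 1.5), 4))
# the MSE jumps by about 10, while the correntropy value barely changes
```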

3.3. Parameter Identification Based on Maximum Correntropy Criterion

Based on the above discussion, the objective function in Equation (32) is chosen as the correntropy with Gaussian kernel, which can be given as follows:
J(\hat{\theta}) = E\left[ \exp\left( -\frac{e^{2}(k)}{2\sigma^{2}} \right) \right]. \qquad (35)
To achieve the goal of parameter identification, we maximize the instantaneous value of correntropy criterion (35) at every iteration k. Combined with the stochastic gradient method, the MCC-SGA algorithm is expressed as
\hat{\theta}(k) = \hat{\theta}(k-1) + \eta_1\, \mathrm{grad}\, J(\hat{\theta}) \big|_{\hat{\theta}(k-1)}, \qquad (36)
where grad(·) denotes the gradient, which can be calculated as
\mathrm{grad}\big( J(\hat{\theta}) \big) = \frac{1}{\sigma^{2}} \exp\left( -\frac{e^{2}(k)}{2\sigma^{2}} \right) e(k)\, \phi_c(k). \qquad (37)
Using Equation (32) and substituting Equation (37) into Equation (36) yields
\hat{\theta}(k) = \hat{\theta}(k-1) + \frac{\eta_1}{\sigma^{2}} \exp\left( -\frac{\big[ Y_n(k) - \phi_c^{T}(k)\hat{\theta}(k-1) \big]^{2}}{2\sigma^{2}} \right) \big[ Y_n(k) - \phi_c^{T}(k)\hat{\theta}(k-1) \big]\, \phi_c(k) = \hat{\theta}(k-1) + \eta P_k \big[ Y_n(k) - \phi_c^{T}(k)\hat{\theta}(k-1) \big]\, \phi_c(k), \qquad (38)
where η = η_1/σ² is the step size of the updating formula, and P_k = \exp\left( -\frac{[\, Y_n(k) - \phi_c^{T}(k)\hat{\theta}(k-1) \,]^{2}}{2\sigma^{2}} \right) satisfies 0 < P_k ≤ 1. The presence of the term P_k gives the MCC-SGA algorithm the following properties:
1.
In a given noisy environment, as σ → ∞, P_k → 1, and the MCC-SGA algorithm degenerates into
\hat{\theta}(k) = \hat{\theta}(k-1) + \eta \big[ Y_n(k) - \phi_c^{T}(k)\hat{\theta}(k-1) \big]\, \phi_c(k), \qquad (39)
which is the LMS-based stochastic gradient descent (LMS-SGD) algorithm.
2.
The MCC-SGA algorithm can be regarded as the LMS-SGD algorithm with a variable step size η(k) = ηP_k, which satisfies η(k) ≤ η. As a result, the convergence of the MCC-SGA algorithm is slightly slower than that of the LMS-SGD algorithm when the same step size is used, but the MCC-SGA algorithm is also more robust to changes in the step size.
3.
For a given σ, if the prediction error e(k) is far from 0, the term P_k e(k) approaches 0 and the MCC-SGA algorithm essentially stops updating. Thus, the MCC-SGA algorithm given in Equation (38) is robust to noise, and the smaller the value of σ, the more robust it is, as illustrated by the sketch after this list.
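The saturation effect described in these properties can be visualized with a few lines (illustrative, not from the paper): the effective update magnitude P_k e shrinks back toward zero once |e| grows beyond a few multiples of σ.

```python
# Effective update weight P_k * e of Equation (38) as a function of the error e:
# for large |e| the product decays to zero, so impulsive errors barely move theta.
import numpy as np

e = np.linspace(-10.0, 10.0, 9)
for sigma in (0.5, 1.5, 15.0):
    Pk_e = np.exp(-e**2 / (2 * sigma**2)) * e
    print(f"sigma = {sigma:4.1f}:", np.round(Pk_e, 3))
# smaller sigma -> the weight is negligible over a wider range of large errors,
# while a very large sigma makes the weight approximately equal to e (LMS behavior)
```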
After some simple calculations, Equation (38) can be rewritten as
\hat{\theta}(k) = \big[ I - \eta P_k\, \phi_c(k) \phi_c^{T}(k) \big] \hat{\theta}(k-1) + \eta P_k\, \phi_c(k)\, Y_n(k), \qquad (40)
which can be viewed as a discrete-time system with state θ̂(k). To guarantee the convergence of θ̂(k), all the eigenvalues of the symmetric matrix I − ηP_k φ_c(k)φ_c^T(k) should lie within the unit circle [33]. Thus, the choice of η should satisfy
0 < \eta P_k \le \frac{2}{\lambda_{\max}\big[ \phi_c(k) \phi_c^{T}(k) \big]}, \qquad (41)
where λ max [ ϕ c ( k ) ϕ c T ( k ) ] is the maximum eigenvalue of matrix ϕ c ( k ) ϕ c T ( k ) .
Remark 3.
The data are collected at regular time instants in the identification interval [0, T]. The number of sampling points M is selected to be as large as possible to reduce the approximation error introduced by Equation (5) and to ensure that the estimated parameters can reach a steady state.
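The update of Equation (38) can be summarized in a few lines of Python; the sketch below is an illustrative implementation (function and argument names are assumptions), with the LMS-SGD update of Equation (39) recovered by setting P_k = 1.

```python
# Sketch of the MCC-SGA update of Equation (38); use_mcc=False gives the
# LMS-SGD update of Equation (39). Phi has rows phi_c(k)^T, Yn holds Y_n(k).
import numpy as np

def mcc_sga(Phi, Yn, eta=0.003, sigma=1.5, theta0=None, use_mcc=True):
    M, d = Phi.shape
    theta = np.zeros(d) if theta0 is None else np.asarray(theta0, dtype=float)
    history = np.empty((M, d))
    for k in range(M):
        e = Yn[k] - Phi[k] @ theta                     # prediction error e(k)
        Pk = np.exp(-e**2 / (2 * sigma**2)) if use_mcc else 1.0
        theta = theta + eta * Pk * e * Phi[k]          # Equation (38) / Equation (39)
        history[k] = theta
    return theta, history

# usage sketch:
#   theta_hat, _ = mcc_sga(Phi, Yn, eta=0.003, sigma=1.5)
#   delta = np.linalg.norm(theta_true - theta_hat) / np.linalg.norm(theta_true)
```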

3.4. Analysis of Estimation Bias

In this subsection, the bias of the parameter estimate is derived, and the impact of the kernel width on the estimation accuracy is studied. Before the discussion, we write φ_c(k) = φ(k) + s(k), where s(k) collects the noise-induced terms in the regressor, assume that φ(k) and s(k) are not correlated with θ̂(k), and denote E[φ(k)φ^T(k)] = Ω.
First, substituting Equation (29) into Equation (38) yields
\hat{\theta}(k) = \hat{\theta}(k-1) + \eta P_k\, \phi_c(k) \phi_c^{T}(k)\, \theta - \eta P_k\, \phi_c(k) \phi_c^{T}(k)\, \hat{\theta}(k-1) + \eta P_k\, \phi_c(k)\, v_f(k). \qquad (42)
Taking the expectation on both sides of Equation (42), we can obtain
E\big[ \hat{\theta}(k) \big] = E\big[ \hat{\theta}(k-1) \big] + \eta P_k\, E\big[ \phi_c(k) \phi_c^{T}(k) \big]\, \theta - \eta P_k\, E\big[ \phi_c(k) \phi_c^{T}(k) \big]\, E\big[ \hat{\theta}(k-1) \big] + \eta P_k\, E\big[ \phi_c(k)\, v_f(k) \big]. \qquad (43)
Based on the above assumption, Equation (43) can be transformed into
E\big[ \hat{\theta}(k) \big] = E\big[ \hat{\theta}(k-1) \big] + \eta P_k \Omega\, \theta - \eta P_k \Omega\, E\big[ \hat{\theta}(k-1) \big] + \eta P_k \big( G + \Sigma_s \big)\, \theta - \eta P_k \big( G + \Sigma_s \big)\, E\big[ \hat{\theta}(k-1) \big] + \eta P_k \Lambda, \qquad (44)
where G = E[φ(k)s^T(k)] + E[s(k)φ^T(k)], Σ_s = E[s(k)s^T(k)], and Λ = E[φ_c(k)v_f(k)]. The deviation of the MCC-SGA algorithm at each iteration is then
b_{MCC}(k) = \eta P_k \big( G + \Sigma_s \big) \big[ \theta - E[\hat{\theta}(k-1)] \big] + \eta P_k \Lambda. \qquad (45)
Taking the absolute value of Equation (45), we have
\big| b_{MCC}(k) \big| = \eta P_k \Big| \big( G + \Sigma_s \big) \big[ \theta - E[\hat{\theta}(k-1)] \big] + \Lambda \Big| \le \eta \Big| \big( G + \Sigma_s \big) \big[ \theta - E[\hat{\theta}(k-1)] \big] + \Lambda \Big| = \big| b_{MSE}(k) \big|, \qquad (46)
where b M S E ( k ) is the deviation of the LMS-SGD algorithm for each iteration.
Remark 4.
It can be seen from Equation (46) that the absolute deviation of the MCC-SGA algorithm at each iteration is no larger than that of the LMS-SGD algorithm. Moreover, for a given noisy environment, if σ decreases, then P_k decreases and b_{MCC}(k) decreases accordingly. This means that a smaller kernel width helps to reduce the estimation bias. In practice, if the data size is large, a small kernel width should be used to achieve highly accurate identification results. If the data size is small, however, the kernel width has to be chosen as a compromise between estimation accuracy and convergence speed.

4. Experiments and Results Analysis

In this section, two numerical examples are given. To illustrate the superiority of the proposed method, the LMS-SGD algorithm is also used to identify the FOS. The relative error δ = ∥θ − θ̂∥ / ∥θ∥ is adopted to evaluate the identification accuracy objectively, where θ̂ is the estimated parameter vector and θ is the true one. Because the noise is stochastic, a Monte Carlo test is carried out; that is, each identification algorithm is run independently 30 times. In each run, the algorithm terminates when the pre-specified maximum number of iterations is reached.
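A minimal Monte Carlo skeleton matching this protocol is sketched below (identify is an assumed, user-supplied callable that performs one noisy identification run, e.g., built from the earlier regression and MCC-SGA sketches).

```python
# Monte Carlo protocol of Section 4: 30 independent runs, relative error
# delta = ||theta - theta_hat|| / ||theta|| reported as mean and std.
import numpy as np

def relative_error(theta_true, theta_hat):
    return np.linalg.norm(np.asarray(theta_true) - np.asarray(theta_hat)) / np.linalg.norm(theta_true)

def monte_carlo(identify, theta_true, runs=30):
    """identify() performs one identification with freshly drawn noise and returns theta_hat."""
    errs = np.array([relative_error(theta_true, identify()) for _ in range(runs)])
    return errs.mean(), errs.std()
```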

4.1. Example 1

Consider the FOS [21]
D^{0.8} x(t) + a_0\, x(t) = b\, u(t) \qquad (47)
with a 0 = 1 and b = 3 . The identification interval is [ 0 , 16 ] . A multi-sine function
u(t) = \sum_{k=1}^{25} \sin\left( 0.1\pi k t - \frac{k(k-1)\pi}{25} \right) \qquad (48)
is chosen as the input signal to excite the system. The initial conditions of the FOS are assumed to be x(t) = 0, t ≤ 0. The sampling period is set as h = 0.001; then the number of BPFs to be used is M = T/h = 16,000, which is also the maximum number of iterations.
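As a quick reproduction aid (not the author's script), the excitation of Equation (48) can be generated as follows; the phase term uses the reconstructed formula above.

```python
# Multi-sine excitation of Equation (48), sampled with h = 0.001 on [0, 16).
import numpy as np

h, T = 0.001, 16.0
t = np.arange(0.0, T, h)                          # M = 16000 samples
k = np.arange(1, 26)[:, None]                     # k = 1, ..., 25
u = np.sum(np.sin(0.1 * np.pi * k * t[None, :] - k * (k - 1) * np.pi / 25.0), axis=0)
print(t.size, u.shape)                            # 16000 (16000,)
```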
Case 1. The effect of parameters in algorithms on the identification accuracy.
First, the step size of both the MCC-SGA algorithm and the LMS-SGD algorithm is set as η = 0.003, and the kernel width of the correntropy function is set as σ = {1.5, 15}. The relative error curves of the estimation results obtained by the MCC-SGA algorithm and the LMS-SGD algorithm are plotted in Figure 3. As expected, for a large σ, the MCC-SGA algorithm performs the same as the LMS-SGD algorithm. For a small σ, the MCC-SGA algorithm obtains more accurate identification results at the cost of a slightly slower convergence speed.
Meanwhile, the impact of the step size on these two algorithms is studied. The kernel width of the correntropy function is set as σ = 1.5, and the step size of the MCC-SGA algorithm and the LMS-SGD algorithm is set as η = {0.003, 0.05, 0.07, 0.1}. Table 1 lists the relative errors of the identification results after 16,000 iterations. The results show that, compared with the LMS-SGD algorithm, the MCC-SGA algorithm is more stable with respect to changes in η, which gives it a wider choice of step sizes and broadens its range of practical application. Based on the above discussion, the parameters of the identification algorithm are set as σ = 1.5 and η = 0.003 in the following experiments.
Case 2. Output signal is disturbed by different types of stable distribution noises.
In this case, FOS (47) is identified in environments perturbed by Gaussian, Cauchy, and Lévy noises, respectively: Gaussian noise v(t) ∼ S(2, 0, 0.5, 0), Cauchy noise v(t) ∼ S(1, 0, 0.1, 0), and Lévy noise v(t) ∼ S(0.5, 1, 10^−5, 0). Realizations of these three types of noise are shown in Figure 4. It can be seen that as α decreases, the impulsiveness of the noise becomes stronger.
The MCC-SGA algorithm and LMS-SGD algorithm are both used to identify the FOS. The evolution processes of the estimated parameters are shown in Figure 5, Figure 6 and Figure 7. The identification results after 16,000 iterations for the above two methods are given in Table 2.
The identification results show the following: (1) When α = 2, the LMS-SGD algorithm achieves the same identification accuracy as the MCC-SGA algorithm, because the LMS-based identification method performs well in a Gaussian noise environment. (2) When α = 1, the identification results achieved by the LMS-SGD algorithm are less accurate, and when α = 0.5, the LMS-SGD algorithm fails to obtain identification results. (3) When the noise switches to Cauchy and Lévy noises, the identification accuracy of both algorithms becomes worse, but the MCC-SGA algorithm always achieves satisfactory identification. Therefore, it can be concluded that the performance of the LMS-SGD algorithm deteriorates seriously when large impulse noise exists. On the contrary, the MCC-SGA algorithm restrains the effect of impulse noise, enhances the robustness, and improves the estimation accuracy.
Case 3. The effect of parameters in stable distribution noises on the identification accuracy.
In this case, the parameters of the stable distribution noises are set as α = {1, 2}, β = {−1, 0, 1}, γ = {10^−2, 10^−3}, and μ = {−0.1, 0, 0.1}. Both the MCC-SGA algorithm and the LMS-SGD algorithm are used to perform the identification. The relative errors of the identification results after 16,000 iterations are listed in Table 3 and Table 4. These results demonstrate that under various stable distribution noises, the MCC-SGA algorithm obtains more accurate estimations than its LMS-SGD counterpart. To better illustrate the effect of each noise parameter, 3D bar graphs of the results in Table 3 and Table 4 are plotted in Figure 8 and Figure 9. It can be noted that the location parameter μ plays a significant role in the identification accuracy: a larger |μ| means that the measured output deviates farther from its true value, which results in poor estimations. When the location parameter μ = 0, the characteristic exponent α has an impact on the identification, whereas when μ ≠ 0, α has almost no impact. The dispersion parameter γ has a minor impact on the identification accuracy, but it affects the stability of the identification results. In addition, the symmetry parameter β has a small impact on the estimation accuracy. In this experiment, the most accurate estimations are achieved with μ = 0, α = 2, and γ = 10^−3. It can be concluded that accurate identification is achieved when the stable distribution noise has a lighter tail, a smaller absolute location parameter, and a narrower dispersion.

4.2. Example 2

To further validate the effectiveness of the proposed method, the FOS
D^{2.2} x(t) + a_1\, D^{1.1} x(t) + a_0\, x(t) = b\, u(t) \qquad (49)
is considered. Here, a 1 = 2 , a 0 = 5 and b = 4 . The identification interval is [ 0 , 16 ] . A multi-sine function,
u(t) = \sum_{k=1}^{200} \sin\left( 0.01\pi k t - \frac{k(200-k)\pi}{100} \right) \qquad (50)
is chosen as the input signal. The initial conditions of the FOS are assumed to be x ( t ) = 0 , t 0 . The sampling period is set as h = 0.0016 , then the number of BPFs involved in the proposed method and the maximum number of iterations are M = T / h = 10,000.
Simulations without noise are omitted here to avoid repetition, and FOS (49) is identified when the output is disturbed by general impulse noises. In this case, v ( t ) S ( 1.5 , 0 , 0.2 , 0 ) and v ( t ) S ( 1.2 , 0 , 0.1 , 0 ) , respectively. The MCC-SGA algorithm and LMS-SGD algorithm are both used to identify FOS. The kernel width is set as σ = 1.5 ; the step size is set as η = 0.003 . The evolution processes of the estimated parameters are shown in Figure 10 and Figure 11. The identification results after 10 , 000 iterations are given in Table 5.
It can be seen that when the location parameter μ = 0 , the exponent parameter α has an impact on the identification accuracy. The identification error becomes larger with the decrease in α . However, as expected, the proposed algorithm can always obtain more satisfactory results than its LMS counterpart.

5. Conclusions

In this paper, the identification problem of FOSs is studied. By using the block pulse operational matrix, the identified FOS is converted into an algebraic equation. Correntropy is utilized as the objective function, and an iterative formula based on the stochastic gradient ascent method is designed to perform the identification. By studying the impact of the parameters of the proposed algorithm on the identification, appropriate kernel widths and step sizes are recommended. To demonstrate the superiority of the proposed method, an LMS-based identification method is introduced, and the two algorithms are used to identify FOSs under different types of stable distribution noise. Moreover, the impact of the parameters of the stable distribution on the identification accuracy is examined. The simulation results demonstrate the following: (1) For a large kernel width, the MCC-SGA algorithm has the same performance as the LMS-SGD algorithm; for a small kernel width, the MCC-SGA algorithm obtains more accurate estimations with a slightly slower convergence speed. Moreover, the MCC-SGA algorithm is more robust to changes in the step size. (2) When the impulsiveness of the noise increases, the identification error becomes larger, but the proposed method is always superior to its LMS counterpart in identification accuracy and robustness. (3) The location of the noise distribution plays a significant role in the identification accuracy.
The proposed MCC-SGA algorithm thus provides a new approach to identifying FOSs. Nevertheless, FOS identification remains a challenging problem. Modifying the stochastic gradient algorithm to improve the convergence rate of the proposed method and identifying the fractional orders themselves are goals for future research.

Funding

This work was funded by the National Natural Science Foundation of China (No. 62003294) and the Natural Science Foundation of Hebei Province (No. E2021203099).

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FOS      Fractional-order system
FOC      Fractional-order calculus
R-L      Riemann–Liouville
G-L      Grünwald–Letnikov
BPFs     Block pulse functions
MCC      Maximum correntropy criterion
LMS      Least mean square
MSE      Mean square error
MCC-SGA  MCC-based stochastic gradient ascent
LMS-SGD  LMS-based stochastic gradient descent

References

  1. Cai, W.; Wang, P.; Fan, J. A variable-order fractional model of tensile and shear behaviors for sintered nano-silver paste used in high power electronics. Mech. Mater. 2020, 145, 103391.
  2. Nasser-Eddine, A.; Huard, B.; Gabano, J.D.; Poinot, T.; Thomas, A. Fast time domain identification of electrochemical systems at low frequencies using fractional modeling. J. Electroanal. Chem. 2020, 862, 113957.
  3. Wang, Y.; Sun, L.; Yang, R.; He, W.; Tang, Y.; Zhang, Z.; Wang, Y.; Sapnken, F.E. A novel structure adaptive fractional derivative grey model and its application in energy consumption prediction. Energy 2023, 282, 128380.
  4. Xie, B.; Ge, F. Parameters and order identification of fractional-order epidemiological systems by Lévy-PSO and its application for the spread of COVID-19. Chaos Solitons Fractals 2023, 168, 113163.
  5. Zhang, X.; Chen, S.; Zhang, J. Adaptive sliding mode consensus control based on neural network for singular fractional order multi-agent systems. Appl. Math. Comput. 2022, 434, 127442.
  6. Jin, K.; Zhang, J.; Zhang, X. Output Feedback Robust Fault-Tolerant Control of Interval Type-2 Fuzzy Fractional Order Systems With Actuator Faults. Int. J. Fuzzy Syst. 2022, 24, 3277–3292.
  7. Yu, W.; Luo, Y.; Pi, Y.G. Fractional order modeling and control for permanent magnet synchronous motor velocity servo system. Mechatronics 2013, 23, 813–820.
  8. Kothari, K.; Mehta, U.; Vanualailai, J. A novel approach of fractional-order time delay system modeling based on Haar wavelet. ISA Trans. 2018, 80, 371–380.
  9. Malti, R.; Raïssi, T.; Thomassin, M.; Khemane, F. Set membership parameter estimation of fractional models based on bounded frequency domain data. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 927–938.
  10. Valério, D.; Tejado, I. Identifying a non-commensurable fractional transfer function from a frequency response. Signal Process. 2015, 107, 254–264.
  11. Jun, L.; Thierry, P.; Shoutao, L.; Claude, T.J. Identification of non-integer-order systems in frequency domain. Control Theory Appl. 2008, 25, 517–520. (In Chinese)
  12. Li, Y.; Yu, S. Frequency domain identification of non-integer order dynamical systems. J. Southeast Univ. (Engl. Ed.) 2007, 23, 47–50.
  13. Victor, S.; Malti, R.; Oustaloup, A. An Optimal Instrumental Variable Method for Continuous-Time Fractional Model Identification. IFAC Proc. Vol. 2008, 41, 14379–14384.
  14. Duhé, J.F.; Victor, S.; Melchior, P.; Abdelmounen, Y.; Roubertie, F. Recursive system identification for coefficient estimation of continuous-time fractional order systems. IFAC-PapersOnLine 2021, 54, 114–119.
  15. Du, W.; Tong, L.; Tang, Y. Metaheuristic optimization-based identification of fractional-order systems under stable distribution noises. Phys. Lett. A 2018, 382, 2313–2320.
  16. Tang, Y.; Li, N.; Liu, M.; Lu, Y.; Wang, W. Identification of fractional-order systems with time delays using block pulse functions. Mech. Syst. Signal Process. 2017, 91, 382–394.
  17. Xie, J.; Wang, T.; Ren, Z.; Zhang, J.; Quan, L. Haar wavelet method for approximating the solution of a coupled system of fractional-order integral-differential equations. Math. Comput. Simul. 2019, 163, 80–89.
  18. Wang, Z.; Wang, C.; Ding, L.; Wang, Z.; Liang, S. Parameter identification of fractional-order time delay system based on Legendre wavelet. Mech. Syst. Signal Process. 2022, 163, 108141.
  19. Gao, G.; Sun, G.; Na, J.; Guo, Y.; Wu, X. Structural parameter identification for 6 DOF industrial robots. Mech. Syst. Signal Process. 2018, 113, 145–155.
  20. Zhang, T.; Lu, Z.-R.; Liu, J.-K.; Chen, Y.-M.; Liu, G. Parameter estimation of linear fractional-order system from Laplace domain data. Appl. Math. Comput. 2023, 438, 127522.
  21. Cui, R.; Wei, Y.; Cheng, S.; Wang, Y. An innovative parameter estimation for fractional order systems with impulse noise. ISA Trans. 2018, 82, 120–129.
  22. Liu, Y.; Chen, J. Correntropy-based kernel learning for nonlinear system identification with unknown noise: An industrial case study. IFAC Proc. Vol. 2013, 46, 361–366.
  23. Yu, L.; Liu, L.; Yue, Z.; Kang, J. A maximum correntropy criterion based recursive method for output-only modal identification of time-varying structures under non-Gaussian impulsive noise. J. Sound Vib. 2019, 448, 178–194.
  24. Lv, S.; Zhao, H.; Zhou, L. Maximum mixture total correntropy adaptive filtering against impulsive noises. Signal Process. 2021, 189, 108236.
  25. Zhao, J.; Zhang, J.A.; Li, Q.; Zhang, H.; Wang, X. Recursive constrained generalized maximum correntropy algorithms for adaptive filtering. Signal Process. 2022, 199, 108611.
  26. Wang, B.; Gao, S.; Ge, H.; Wang, W. A Variable Step Size for Maximum Correntropy Criterion Algorithm with Improved Variable Kernel Width. IEEJ Trans. Electr. Electron. Eng. 2020, 15, 1465–1474.
  27. Li, Y.; Jia, L.; Yang, Z.J.; Tao, R. Diffusion bias-compensated recursive maximum correntropy criterion algorithm with noisy input. Digit. Signal Process. 2022, 122, 103373.
  28. Tian, T.; Wu, F.-Y.; Yang, K. Block-sparsity regularized maximum correntropy criterion for structured-sparse system identification. J. Frankl. Inst. 2020, 357, 12960–12985.
  29. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: San Diego, CA, USA, 1998.
  30. Tang, Y.; Liu, H.; Wang, W.; Lian, Q.; Guan, X. Parameter identification of fractional order systems using block pulse functions. Signal Process. 2015, 107, 272–281.
  31. Babolian, E.; Masouri, Z. Direct method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions. J. Comput. Appl. Math. 2008, 220, 51–57.
  32. Zhang, B.; Tang, Y.; Zhang, X.; Lu, Y. Operational matrix based set-membership method for fractional order systems parameter identification. J. Frankl. Inst. 2021, 358, 10141–10164.
  33. Li, J.; Hua, C.; Tang, Y.; Guan, X. Stochastic gradient with changing forgetting factor-based parameter identification for Wiener systems. Appl. Math. Lett. 2014, 33, 40–45.
Figure 1. Probability density functions of the stable distribution with different parameters. (a) α = {0.5, 1, 2}, β = 0, γ = 1, μ = 0; (b) α = 1.2, β = {−1, 0, 1}, γ = 1, μ = 0; (c) α = 1.2, β = 0, γ = {0.5, 1, 2}, μ = 0; (d) α = 1.2, β = 0, γ = 1, μ = {−2, 0, 2}.
Figure 2. The curves of G_σ(e) and its derivative G′_σ(e). (a) G_σ(e). (b) G′_σ(e).
Figure 3. Relative error curves of the estimations obtained by the two algorithms in Case 1.
Figure 4. Original noise sequences. (a) v(t) ∼ S(2, 0, 0.5, 0). (b) v(t) ∼ S(1, 0, 0.1, 0). (c) v(t) ∼ S(0.5, 1, 10^−5, 0).
Figure 5. Evolution of parameters in Gaussian noise. (a) Evolution of \hat{a}_0. (b) Evolution of \hat{b}.
Figure 6. Parameter evolution process in Cauchy noise. (a) Evolution of \hat{a}_0. (b) Evolution of \hat{b}.
Figure 7. Parameter evolution process in Lévy noise. (a) Evolution of \hat{a}_0. (b) Evolution of \hat{b}.
Figure 8. 3D bar graphs of the relative errors of the estimations obtained by MCC-SGA under stable distribution noises with different parameter settings after 30 runs.
Figure 9. 3D bar graphs of the relative errors of the estimations obtained by LMS-SGD under stable distribution noises with different parameter settings after 30 runs.
Figure 10. Parameter evolution process in the case of v(t) ∼ S(1.5, 0, 0.2, 0). (a) Evolution of \hat{a}_1. (b) Evolution of \hat{a}_0. (c) Evolution of \hat{b}.
Figure 11. Parameter evolution process in the case of v(t) ∼ S(1.2, 0, 0.1, 0). (a) Evolution of \hat{a}_1. (b) Evolution of \hat{a}_0. (c) Evolution of \hat{b}.
Table 1. Relative errors of estimations obtained by algorithms with different values of η in Experiment 1.

            η = 0.003      η = 0.005      η = 0.007      η = 0.01
MCC-SGA     0.49 × 10^−2   2.24 × 10^−2   3.60 × 10^−2   4.82 × 10^−2
LMS-SGD     0.60 × 10^−2   2.55 × 10^−2   3.95 × 10^−2   –

Table 2. Estimation results under Gaussian, Cauchy, and Lévy noises after 30 runs.

Noise      Parameter   True Value   MCC-SGA avg    MCC-SGA std   LMS-SGD avg    LMS-SGD std
Gaussian   a_0         1            0.9897         0.0215        0.9891         0.0263
           b           3            2.9803         0.0263        2.9793         0.0389
           δ           –            0.70 × 10^−2   –             0.77 × 10^−2   –
Cauchy     a_0         1            1.0391         0.3125        0.8418         1.2630
           b           3            2.9316         0.3400        3.0740         2.0630
           δ           –            2.49 × 10^−2   –             5.52 × 10^−2   –
Lévy       a_0         1            0.9649         0.5288        –              –
           b           3            3.0950         0.5582        –              –
           δ           –            3.07 × 10^−2   –             –              –
Table 3. Relative errors of estimations (avg ± std) obtained by MCC-SGA under stable distribution noises with different parameter settings after 30 runs.

γ = 10^−2, μ = 0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.106 ± 3.20 × 10^−2   0.085 ± 1.21 × 10^−2   0.091 ± 9.90 × 10^−2
α = 2      0.083 ± 5.24 × 10^−4   0.083 ± 5.14 × 10^−4   0.083 ± 6.13 × 10^−4

γ = 10^−3, μ = 0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.086 ± 6.60 × 10^−2   0.082 ± 3.12 × 10^−2   0.082 ± 1.81 × 10^−2
α = 2      0.083 ± 5.85 × 10^−5   0.083 ± 5.14 × 10^−5   0.083 ± 1.59 × 10^−5

γ = 10^−2, μ = 0:
           β = −1                 β = 0                  β = 1
α = 1      0.027 ± 9.29 × 10^−2   0.012 ± 5.66 × 10^−2   0.030 ± 6.53 × 10^−2
α = 2      0.005 ± 5.77 × 10^−4   0.005 ± 5.77 × 10^−4   0.005 ± 5.96 × 10^−4

γ = 10^−3, μ = 0:
           β = −1                 β = 0                  β = 1
α = 1      0.003 ± 1.42 × 10^−2   0.005 ± 1.33 × 10^−2   0.006 ± 9.10 × 10^−3
α = 2      0.005 ± 5.77 × 10^−5   0.005 ± 5.77 × 10^−5   0.005 ± 5.77 × 10^−5

γ = 10^−2, μ = −0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.087 ± 5.87 × 10^−2   0.103 ± 2.90 × 10^−2   0.122 ± 1.55 × 10^−2
α = 2      0.102 ± 6.22 × 10^−4   0.102 ± 6.20 × 10^−4   0.102 ± 6.22 × 10^−4

γ = 10^−3, μ = −0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.105 ± 4.70 × 10^−3   0.103 ± 7.90 × 10^−3   0.107 ± 1.27 × 10^−2
α = 2      0.102 ± 6.23 × 10^−5   0.102 ± 6.21 × 10^−5   0.093 ± 1.34 × 10^−4

Table 4. Relative errors of estimations (avg ± std) obtained by LMS-SGD under stable distribution noises with different parameter settings after 30 runs.

γ = 10^−2, μ = 0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.148 ± 1.50 × 10^−1   0.150 ± 1.91 × 10^−1   0.102 ± 1.46 × 10^−1
α = 2      0.112 ± 6.03 × 10^−4   0.112 ± 6.19 × 10^−4   0.112 ± 6.68 × 10^−4

γ = 10^−3, μ = 0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.128 ± 1.22 × 10^−1   0.112 ± 3.38 × 10^−2   0.114 ± 4.47 × 10^−2
α = 2      0.112 ± 6.02 × 10^−5   0.112 ± 6.19 × 10^−5   0.112 ± 2.22 × 10^−5

γ = 10^−2, μ = 0:
           β = −1                 β = 0                  β = 1
α = 1      0.054 ± 2.48 × 10^−1   0.057 ± 1.25 × 10^−1   0.050 ± 5.24 × 10^−2
α = 2      0.006 ± 6.66 × 10^−4   0.006 ± 6.66 × 10^−4   0.006 ± 6.45 × 10^−4

γ = 10^−3, μ = 0:
           β = −1                 β = 0                  β = 1
α = 1      0.009 ± 2.19 × 10^−2   0.006 ± 1.34 × 10^−2   0.005 ± 1.26 × 10^−2
α = 2      0.006 ± 6.67 × 10^−5   0.006 ± 6.67 × 10^−5   0.006 ± 6.67 × 10^−5

γ = 10^−2, μ = −0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.119 ± 8.71 × 10^−2   0.109 ± 7.24 × 10^−2   0.156 ± 4.10 × 10^−2
α = 2      0.108 ± 7.06 × 10^−4   0.108 ± 6.82 × 10^−4   0.108 ± 7.06 × 10^−4

γ = 10^−3, μ = −0.1:
           β = −1                 β = 0                  β = 1
α = 1      0.111 ± 6.10 × 10^−3   0.109 ± 1.38 × 10^−2   0.149 ± 1.91 × 10^−2
α = 2      0.108 ± 7.06 × 10^−5   0.108 ± 6.83 × 10^−5   0.108 ± 6.88 × 10^−5
Table 5. Estimation results under general impulse noises after 30 runs.

Noise               Parameter   True Value   MCC-SGA avg    MCC-SGA std   LMS-SGD avg    LMS-SGD std
S(1.5, 0, 0.2, 0)   a_1         2            1.9161         1.2632        1.7403         1.9338
                    a_0         5            4.8356         1.1608        4.4576         2.4246
                    b           4            4.1103         0.9572        3.5723         1.9572
                    δ           –            3.21 × 10^−2   –             1.10 × 10^−1   –
S(1.2, 0, 0.1, 0)   a_1         2            1.8838         1.2222        0.8418         2.5206
                    a_0         5            4.9404         1.0354        2.9793         2.3874
                    b           4            4.2532         0.9320        3.0740         1.9482
                    δ           –            4.25 × 10^−2   –             1.96 × 10^−1   –