Article

Maximum-Likelihood-Based Adaptive and Intelligent Computing for Nonlinear System Identification

by Hasnat Bin Tariq 1, Naveed Ishtiaq Chaudhary 1, Zeshan Aslam Khan 1, Muhammad Asif Zahoor Raja 2,*, Khalid Mehmood Cheema 3 and Ahmad H. Milyani 4
1 Department of Electrical Engineering, International Islamic University, Islamabad 44000, Pakistan
2 Future Technology Research Center, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
3 School of Electrical Engineering, Southeast University, Nanjing 210096, China
4 Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3199; https://doi.org/10.3390/math9243199
Submission received: 11 November 2021 / Revised: 5 December 2021 / Accepted: 9 December 2021 / Published: 11 December 2021

Abstract
Most real-time systems are nonlinear in nature, and their optimization is very difficult due to inherent stiffness and complex system representation. The computationally intelligent algorithms of the evolutionary computing paradigm (ECP) effectively solve various complex, nonlinear optimization problems. The differential evolution algorithm (DEA) is one of the most important approaches in the ECP, and it outperforms other standard approaches in terms of accuracy and convergence performance. In this study, a novel application of a recently proposed variant of the DEA, the maximum-likelihood-based adaptive differential evolution algorithm (ADEA), is investigated for the identification of nonlinear Hammerstein output error (HOE) systems, which are widely used to model different nonlinear processes in engineering and the applied sciences. The performance of the ADEA is evaluated for polynomial- and sigmoidal-type nonlinearities in two case studies of HOE systems. Moreover, the robustness of the proposed scheme is examined for different noise levels. Reliability and consistent accuracy are assessed through multiple independent trials of the scheme. The convergence, accuracy, robustness and reliability of the ADEA are carefully examined for HOE identification in comparison with its standard counterpart, the DEA. The ADEA achieves fitness values of 1.43 × 10−8 and 3.46 × 10−9 for population sizes of 80 and 100, respectively, in the HOE system identification problem of case study 1 for a 0.01 noise level, while the respective fitness values in the case of the DEA are 1.43 × 10−6 and 3.46 × 10−7. The ADEA is statistically more consistent than the DEA, at the cost of slightly more complexity due to the extra operations involved in introducing the adaptiveness during the mutation and crossover. The current study may be considered a further step in developing ECP-based computational intelligence for effective nonlinear system identification.

1. Introduction

1.1. Background and Motivation

System identification, or parameter estimation, involves the approximation of the unknown variables of a system, and this concept provides the foundation for solving different engineering, science and technology problems [1]. Most real-time systems are nonlinear and complex in nature. There are many applications of nonlinear systems in science and engineering, such as the inverted pendulum system [2], motion control of a motor-driven robot [3], average dwell-time switching [4], the tail-control missile system [5], and weather station systems [6].
Nonlinear systems can be described through different nonlinear models, including the Volterra series [7], Wiener series [8], NARMAX model [9], Wiener model [10,11,12] and Hammerstein model [13]. Researchers have revealed strong relations between different nonlinear models and the Volterra series [14]. Sidorov et al. contributed significantly to the theory and applications of Volterra equations by proposing different methods [15,16,17] and exploring applications in power system operations and energy storage systems [18]. The Volterra series can also represent the Hammerstein model: Kibangou et al. [19] described the Hammerstein model through a Volterra series representation and identified the coefficients of the Hammerstein model from it. The Hammerstein model has a simpler structure and is easier to identify than the Volterra series [20]. Therefore, the Hammerstein model is often used to represent a wide class of nonlinear systems [21,22,23,24].
The Hammerstein structure, presented in Figure 1, belongs to the class of input nonlinear (INL) systems, in which a static nonlinear block is cascaded with a linear dynamic block. Different local and global search algorithms have been proposed for the identification of INL models. Local search algorithms are easy to implement but prone to becoming stuck in local minima; examples include the key term separation technique for the parameter estimation of Hammerstein controlled autoregressive systems [25]; impulse-response-constrained least-squares support vector machine modeling for multiple-input multiple-output Hammerstein system identification [26]; fractional-calculus-based adaptive techniques [27,28]; and key-variable-separation and auxiliary-model-based identification for the parameter estimation of input nonlinear output error autoregressive systems [29]. Global search techniques, in contrast, effectively handle the local minima issue. Global search methods based on evolutionary and swarm optimization heuristics have been applied to the parameter estimation of different input nonlinear systems: genetic algorithms are used for the parameter estimation of nonlinear Hammerstein controlled autoregressive systems [30]; meta-heuristic computing techniques, including differential evolution, genetic algorithms, pattern search and simulated annealing, are used for the parameter estimation of Hammerstein controlled autoregressive moving average systems [31]; and evolutionary computational heuristics exploiting the global search competency of the backtracking search algorithm, differential evolution and genetic algorithms are presented for the parameter estimation of nonlinear Hammerstein controlled autoregressive systems [32]. Neural network and fuzzy-logic-based computational intelligence approaches have also been used to solve complex system identification problems [33,34,35,36,37].
The DEA has also been effectively applied to INL systems, where it showed better results than its standard counterparts [38]. Recently, a new variant of the DEA, the maximum-likelihood-based adaptive DEA (ADEA), was proposed [39] for linear systems. The ADEA showed an improved performance compared to the standard DEA in terms of convergence speed and accuracy. The increasing complexity of nonlinear systems requires a continuous search for more accurate and reliable computing algorithms; thus, the enhanced performance of the DEA and ADEA inspired the authors to investigate the behavior of these algorithms for effective INL system identification.

1.2. Objectives and Contribution

In this study, the performance of the DEA and ADEA in terms of correctness, robustness and convergence is examined for different nonlinearities and noise levels in INL systems. The most important contributions of this study are as follows:
  • A novel application of the evolutionary computing paradigm, through the maximum-likelihood-based adaptive differential evolution algorithm (ADEA), is explored for efficient optimization in nonlinear system identification.
  • The ADEA is developed by introducing the concept of adaptiveness in the mutation and crossover operators of the standard DEA approach.
  • The convergence, accuracy and robustness analyses of the ADEA are conducted for different types of nonlinearities and noise levels considered in two case studies of nonlinear systems.
  • The reliability of the ADEA is tested in comparison with the standard counterpart of the DEA through executing multiple independent executions of both schemes.
  • The ADEA is statistically more consistent than the DEA, at the cost of slightly more complexity due to the extra operations involved in introducing the adaptiveness during the mutation and crossover.

1.3. Paper Outline

The rest of the paper is organized as follows: the INL-based system model of the Hammerstein output error (HOE) structure is given in Section 2. The proposed differential-evolution-based schemes are presented in Section 3. The simulation results for two case studies of HOE systems are provided in Section 4. The main conclusions and some future research directions are listed in Section 5.

2. Mathematical Model of HOE Systems

Figure 2 shows the block diagram of the HOE model [40].
The input–output relation of HOE system in Figure 2 is represented as:
$y(t) = u(t) + v(t)$ (1)
where $y(t)$ represents the system's output, $v(t)$ denotes the additive noise, and $u(t)$ denotes the noise-free output, defined as:
$u(t) = \dfrac{C(z)}{D(z)}\,\bar{s}(t)$ (2)
$\bar{s}(t)$ denotes the nonlinear block's output and is defined as a nonlinear function of the system input $s(t)$ with a known basis $\gamma_1, \gamma_2, \ldots, \gamma_m$:
$\bar{s}(t) = f(s(t)) = e_1\gamma_1(s(t)) + e_2\gamma_2(s(t)) + \cdots + e_m\gamma_m(s(t))$ (3)
or:
$\bar{s}(t) = \sum_{j=1}^{m} e_j\,\gamma_j(s(t))$ (4)
Substituting (3) into (2) yields:
$u(t) = \dfrac{C(z)}{D(z)}\left[e_1\gamma_1(s(t)) + e_2\gamma_2(s(t)) + \cdots + e_m\gamma_m(s(t))\right]$ (5)
where $D(z)$ and $C(z)$ are polynomials in the backward shift operator $z^{-1}$ (i.e., $z^{-1}y(t) = y(t-1)$):
$D(z) = 1 + d_1 z^{-1} + d_2 z^{-2} + \cdots + d_n z^{-n}, \qquad C(z) = c_1 z^{-1} + c_2 z^{-2} + \cdots + c_n z^{-n}$ (6)
The output of the HOE model can be expressed in terms of information and parameter vectors, where the information vector containing the delayed input and output terms is denoted by $w(t)$ and the corresponding parameter vector of the HOE model is defined as [40]:
$\theta = [d, c, e]^T \in \mathbb{R}^{n_0}$
where $n_0 = 2n + m$ and the variables in the parameter vector are:
$d = [d_1, d_2, \ldots, d_n]^T \in \mathbb{R}^{n}$
$c = [c_1, c_2, \ldots, c_n]^T \in \mathbb{R}^{n}$
$e = [e_1, e_2, \ldots, e_m]^T \in \mathbb{R}^{m}$
Figure 3 shows the block diagram for the identification of the nonlinear system, modelled through the block-oriented HOE structure of Figure 2, using the proposed evolutionary algorithms. The objective is to minimize the error z(t) between the desired response and the estimated response by exploiting the proposed evolutionary computing approach, such that the estimated response $\hat{y}(t)$ approaches the desired response $y(t)$.
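For illustration, a minimal NumPy sketch of this forward model is given below. It is an editorial sketch rather than the authors' code: the basis functions, parameter values, signal length and noise level are hypothetical placeholders, not the settings used in the case studies.

import numpy as np

def hoe_output(s, c, d, e, basis, noise_std=0.0, rng=None):
    """Simulate y(t) of the HOE structure: a static nonlinearity followed by C(z)/D(z)."""
    rng = np.random.default_rng() if rng is None else rng
    # Nonlinear block: s_bar(t) = sum_j e_j * gamma_j(s(t))
    s_bar = sum(ej * g(s) for ej, g in zip(e, basis))
    N, n = len(s), len(d)
    u = np.zeros(N)
    # Linear block D(z) u(t) = C(z) s_bar(t), written as a difference equation:
    # u(t) = -d1*u(t-1) - ... - dn*u(t-n) + c1*s_bar(t-1) + ... + cn*s_bar(t-n)
    for t in range(N):
        for k in range(1, n + 1):
            if t - k >= 0:
                u[t] += -d[k - 1] * u[t - k] + c[k - 1] * s_bar[t - k]
    return u + noise_std * rng.standard_normal(N)   # y(t) = u(t) + v(t)

# Illustrative (hypothetical) choices: cubic polynomial basis and second-order linear block
basis = [lambda x: x, lambda x: x ** 2, lambda x: x ** 3]   # gamma_1, gamma_2, gamma_3
d, c, e = [0.4, 0.3], [0.5, 0.2], [1.0, 0.5, 0.25]
s = np.random.default_rng(1).standard_normal(200)           # zero-mean, unit-variance input
y = hoe_output(s, c, d, e, basis, noise_std=0.1)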

3. Proposed Methodology

In this section, the proposed methodology based on the DEA and the ADEA with the maximum-likelihood criterion is presented for the identification of the HOE system given in Section 2.

3.1. Differential Evolution Algorithm (DEA)

The DEA, developed by Rainer Storn and Kenneth Price in 1995 [41], is one of the most broadly exploited algorithms in the ECP. It is a population-based algorithm with the ability to solve global optimization problems. Due to its usefulness and efficiency, this algorithm has been applied to various problems, such as the parameter estimation of Hammerstein controlled autoregressive systems [38], deep belief networks [42], effective long short-term memory for electricity price prediction [43], the parameter estimation of solar cells [44], and effective electricity energy consumption forecasting using an echo state network [45]. In this study, the recently introduced maximum-likelihood-based adaptive DEA is exploited for HOE identification, and the maximum-likelihood-based DEA is used for the purpose of comparison [39]. The flowchart describing the main steps of the DEA is presented in Figure 4.
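For reference, a compact sketch of the standard DE loop summarized in Figure 4 (DE/rand/1 mutation, binomial crossover and greedy selection) is given below. It is a hedged illustration under common DE conventions, not the exact implementation used in the experiments; the objective function, bounds and control parameters are placeholders.

import numpy as np

def dea(objective, dim, bounds=(-2.0, 2.0), Np=50, T=400, F=0.25, Pc=0.8, seed=0):
    """Minimal DE/rand/1/bin sketch: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(Np, dim))               # random initial population
    fit = np.array([objective(p) for p in pop])             # fitness of every member
    for _ in range(T):
        for a in range(Np):
            # three mutually distinct indices, all different from the current member a
            x1, x2, x3 = rng.choice([i for i in range(Np) if i != a], size=3, replace=False)
            v = pop[x1] + F * (pop[x2] - pop[x3])            # mutation vector
            cross = rng.random(dim) <= Pc
            cross[rng.integers(dim)] = True                  # keep at least one mutated component
            u = np.where(cross, v, pop[a])                   # trial (crossover) vector
            fu = objective(u)
            if fu < fit[a]:                                  # greedy selection (minimization)
                pop[a], fit[a] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]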

3.2. Adaptive Differential Evolution Algorithm (ADEA)

Different variants of the DEA have been proposed by introducing adaptivity into the mutation and crossover processes. These adaptive DEA variants have been effectively exploited to solve many nonlinear problems, such as those involving photovoltaic models and other optimization problems [46,47,48,49,50,51,52,53,54]. The main steps of the adaptive DEA are similar to those of the simple DEA, starting from population initialization and proceeding through mutation, crossover, selection and termination. The only aspect that differs is the adaptiveness factor of both the mutation and crossover processes. Recently, a maximum-likelihood-criterion-based adaptive DEA, i.e., the ADEA, was proposed, in which the fitness value is calculated by the maximum-likelihood criterion function [39]. In the ADEA, the mutation factor and crossover probability change automatically according to the generation index t and the maximum generation T. The pseudocode of the ADEA is presented in Algorithm 1, whereas the stepwise mechanism involved in the learning of the ADEA is as follows:
Step 1. Initialization:
Set the generation t = 1 and the initial population $P_{i,0}$.
Given the population size Np, the mutation factor F and the maximum generation T.
Step 2. Data Collection:
Collect the calculated data {$w_i(1), w_i(2), \ldots, w_i(N)$} and {$y_i(1), y_i(2), \ldots, y_i(N)$}.
Step 3. Adaptive Mutation Operation:
Calculate the mutation vector $V_{i.a,t}$ using $L = \exp\!\left(1 - \frac{T}{T + 1 - t}\right)$ and $\mathrm{AMF} = F \cdot 2^{L}$:
$V_{i.a,t} = p_{i.X_1,t-1} + \mathrm{AMF} \cdot \left(p_{i.X_2,t-1} - p_{i.X_3,t-1}\right)$
Step 4. Adaptive Crossover Operation:
Read $V_{i.a,j,t}$ from the mutation vector $V_{i.a,t} = [V_{i.a,1,t}, V_{i.a,2,t}, \ldots, V_{i.a,D,t}]^T$ and $p_{i.a,j,t-1}$ from the target vector $p_{i.a,t-1} = [p_{i.a,1,t-1}, p_{i.a,2,t-1}, \ldots, p_{i.a,D,t-1}]^T$ to create the crossover vector $U_{i.a,t}$.
For $t = 1$, the adaptive crossover probability is $P_c = \frac{1 + \cos(t)}{2}$; for $t = 2l$, it is $P_c = \frac{1 + \cos(t-1)}{2}$.
Step 5. Selection Procedure:
Compute the maximum-likelihood criterion function of $U_{i.a,t}$ and $p_{i.a,t-1}$ using:
$J(U_{i.a,t}) = \frac{1}{N}\sum_{t=1}^{N}\left[y_i(t) - w_i^T(t)\,U_{i.a,t}\right]^2$, $\quad J(p_{i.a,t-1}) = \frac{1}{N}\sum_{t=1}^{N}\left[y_i(t) - w_i^T(t)\,p_{i.a,t-1}\right]^2$
  • Develop the target vector $p_{i.a,t}$.
Step 6. Optimal Target Vector:
  • Compute the optimal target vector $p_{i.best,t}$ using:
  • $J(p_{i.a,t}) = \frac{1}{N}\sum_{t=1}^{N}\left[y_i(t) - w_i^T(t)\,p_{i.a,t}\right]^2$, $\quad p_{i.best,t} = \arg\min_{p_{i.a,t}} J(p_{i.a,t})$
Step 7. Iteration:
  • If t < T, then let t := t + 1 and go back to Step 2; otherwise, output the optimal target vector $p_{i.best,t}$.
Algorithm 1 Pseudo-code of the ADEA
Input: Collect the data {$w_i(1), w_i(2), \ldots, w_i(N)$} and {$y_i(1), y_i(2), \ldots, y_i(N)$}. Given the population size Np, the mutation factor F and the maximum generation T. Let the generation t = 1.
Output: $p_{i.best,t}$
(1) for a = 1 : Np do
(2)   for j = 1 : D do
(3)     $p_{i.a,j,0}$ = rand(0, 1)
(4)   end
(5)   $p_{i.a,0} = [p_{i.a,1,0}, p_{i.a,2,0}, \ldots, p_{i.a,D,0}]^T$
(6) end
(7) $P_{i,0} = [p_{i.1,0}, p_{i.2,0}, \ldots, p_{i.Np,0}]^T$
(8) for t = 1 : T do
(9)   for a = 1 : Np do
(10)    X1 = randperm(Np, 1)
(11)    while X1 = a do
(12)      X1 = randperm(Np, 1)
(13)    end
(14)    X2 = randperm(Np, 1)
(15)    while X2 = a or X2 = X1 do
(16)      X2 = randperm(Np, 1)
(17)    end
(18)    X3 = randperm(Np, 1)
(19)    while X3 = a or X3 = X1 or X3 = X2 do
(20)      X3 = randperm(Np, 1)
(21)    end
(22)    L = exp(1 − T/(T + 1 − t)); AMF = F · 2^L; $V_{i.a,t} = p_{i.X_1,t-1} + \mathrm{AMF} \cdot (p_{i.X_2,t-1} - p_{i.X_3,t-1})$
(23)    if t = 1 or t = 2l then
(24)      Pc = (1 + cos(t))/2
(25)    else
(26)      Pc = (1 + cos(t − 1))/2
(27)    end
(28)    $V_{i.a,t} = [V_{i.a,1,t}, V_{i.a,2,t}, \ldots, V_{i.a,D,t}]^T$; $p_{i.a,t-1} = [p_{i.a,1,t-1}, p_{i.a,2,t-1}, \ldots, p_{i.a,D,t-1}]^T$
(29)    for j = 1 : D do
(30)      if rand(0, 1) ≤ Pc or j = randperm(D, 1) then
(31)        $U_{i.a,j,t} = V_{i.a,j,t}$
(32)      else
(33)        $U_{i.a,j,t} = p_{i.a,j,t-1}$
(34)      end
(35)    end
(36)    $U_{i.a,t} = [U_{i.a,1,t}, U_{i.a,2,t}, \ldots, U_{i.a,D,t}]^T$
(37)    $J(U_{i.a,t}) = \frac{1}{N}\sum_{t=1}^{N}[y_i(t) - w_i^T(t)\,U_{i.a,t}]^2$
(38)    $J(p_{i.a,t-1}) = \frac{1}{N}\sum_{t=1}^{N}[y_i(t) - w_i^T(t)\,p_{i.a,t-1}]^2$
(39)    if $J(U_{i.a,t}) \le J(p_{i.a,t-1})$ then
(40)      $p_{i.a,t} = U_{i.a,t}$
(41)    else
(42)      $p_{i.a,t} = p_{i.a,t-1}$
(43)    end
(44)    $J(p_{i.a,t}) = \frac{1}{N}\sum_{t=1}^{N}[y_i(t) - w_i^T(t)\,p_{i.a,t}]^2$; $p_{i.best,t} = \arg\min_{p_{i.a,t}} J(p_{i.a,t})$
(45)  end
(46) end
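To make the adaptive factors concrete, the short sketch below traces how the mutation factor and crossover probability of Steps 3 and 4 evolve with the generation index. It follows the formulas as reconstructed above, so the reading of the exponent as T/(T + 1 − t), the form Pc = (1 + cos(·))/2 and the branch on even generations are editorial assumptions rather than a verified transcription of [39].

import numpy as np

def adaptive_mutation_factor(F, t, T):
    """AMF = F * 2**L with L = exp(1 - T / (T + 1 - t)); L decays from 1 towards 0 as
    t approaches T, so the effective mutation factor shrinks from 2F towards F."""
    L = np.exp(1.0 - T / (T + 1.0 - t))
    return F * 2.0 ** L

def adaptive_crossover_probability(t):
    """Pc = (1 + cos(t)) / 2 when t = 1 or t is even, otherwise (1 + cos(t - 1)) / 2;
    either branch keeps the probability inside [0, 1]."""
    if t == 1 or t % 2 == 0:
        return 0.5 * (1.0 + np.cos(t))
    return 0.5 * (1.0 + np.cos(t - 1))

# Trace the schedules over a short illustrative run
F, T = 0.25, 10
for t in range(1, T + 1):
    print(t, round(float(adaptive_mutation_factor(F, t, T)), 3),
          round(float(adaptive_crossover_probability(t)), 3))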

4. Simulation and Performance Analyses

This section presents the simulation results of two case studies of HOE system identification using the DEA and ADEA. The simulations for both algorithms are performed in MATLAB. The identification of the HOE systems is carried out for different noise levels, various generation sizes and diverse population sizes, and the results are presented through a variety of convergence graphs and multiple statistical analyses. The input s(t) is taken as a zero-mean, unit-variance signal, while v(t) is additive noise. The performance of the algorithms in terms of convergence speed, accuracy, robustness and reliability is evaluated through the fitness function given below:
$\mathrm{Fitness} = \mathrm{mean}\left[\left(y - \hat{y}\right)^{2}\right]$
where y represents the desired response and $\hat{y}$ is the response estimated by the proposed evolutionary algorithms. The optimal parameter settings for the DEA and ADEA techniques are presented in Table 1.
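A minimal sketch of this fitness evaluation is given below. The splitting of a candidate vector theta into [d, c, e] follows the parameter vector of Section 2, while simulate stands for any HOE forward model (for instance, the illustrative hoe_output sketch in Section 2); the function and argument names are assumptions, not the authors' code.

import numpy as np

def fitness(theta, s, y_desired, n, m, basis, simulate):
    """Mean squared error between the desired response y and the estimated
    response y_hat produced by a candidate parameter vector theta = [d, c, e]."""
    d = theta[:n]                        # coefficients of D(z)
    c = theta[n:2 * n]                   # coefficients of C(z)
    e = theta[2 * n:2 * n + m]           # coefficients of the static nonlinearity
    y_hat = simulate(s, c, d, e, basis)  # noise-free estimated response
    return float(np.mean((y_desired - y_hat) ** 2))

With this objective in hand, the DEA sketch of Section 3.1 could, for example, be invoked as dea(lambda th: fitness(th, s, y, n, m, basis, hoe_output), dim=2 * n + m).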

4.1. Case Study 1

The desired response for case study 1 of the HOE system is obtained through a set of parameters taken from [40]. The performance of the ADEA is assessed by considering two types of nonlinearities and different noise levels in the HOE system. The performance of the DEA and ADEA in terms of fitness is first investigated for variable generation sizes (400, 600, 800) and population sizes (50, 100, 150). The detailed results with the polynomial-type nonlinearity are presented in Table 2. It is observed from the fitness values in Table 2 that, for the given generation sizes (400, 600, 800), the fitness value achieved by both algorithms decreases (i.e., improves) with the increase in population size. Furthermore, both methods achieve their minimum fitness values for the largest generation size. The ADEA shows an improved performance compared to the DEA for almost all generation and population sizes. The best fitness achieved by the ADEA, for 800 generations with a population size of 150, is 7.08 × 10−15.
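The two nonlinearity types referred to above can be illustrated with simple basis choices, as in the short sketch below; since the exact polynomial and sigmoidal forms used in the experiments are not spelled out in the text, these bases are illustrative assumptions only.

import numpy as np

# Illustrative basis functions for the two nonlinearity types considered in the case studies
polynomial_basis = [lambda x: x, lambda x: x ** 2, lambda x: x ** 3]    # gamma_j(s) = s^j
sigmoidal_basis = [lambda x: 1.0 / (1.0 + np.exp(-x)),                  # logistic unit
                   lambda x: np.tanh(x)]                                # hyperbolic tangent

s = np.linspace(-2.0, 2.0, 5)
s_bar_poly = sum(w * g(s) for w, g in zip([1.0, 0.5, 0.25], polynomial_basis))
s_bar_sig = sum(w * g(s) for w, g in zip([1.0, 0.5], sigmoidal_basis))
print(s_bar_poly, s_bar_sig)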
The fitness-based learning curves for the different generation and population sizes with the polynomial-type nonlinearity are shown in Figure 5. Figure 5a–c present the learning curves for the DEA, whereas Figure 5d–f present those for the ADEA. Figure 5a–c show that the DEA achieves fast and accurate convergence for large generation and population sizes, although only a slight difference in convergence is observed for the DEA up to 100 generations across the different population sizes. Likewise, Figure 5d–f show that the ADEA also attains its minimum fitness values for larger numbers of generations and populations.
The performance of the DEA and ADEA is further examined for three noise variances (0.09, 0.05, 0.01) with fixed population sizes (50, 80, 100) and a varying number of generations. To analyze the methods in terms of optimal fitness, the results with the polynomial-type nonlinearity are provided in Table 3 for the three noise levels and the three population sizes. The fitness values in Table 3 show that both the DEA and ADEA achieve a significant performance for small noise variances across the different population sizes. However, neither method performs as well for the higher noise variances. The optimal fitness achieved by the DEA and ADEA with a noise variance of 0.01 and a population size of 100 is 2.12 × 10−6 and 8.69 × 10−10, respectively.
The learning curves for the fitness achieved with the polynomial-type nonlinearity, the three noise variances and the three population sizes are presented in Figure 6. The learning curves for the DEA and ADEA are shown in Figure 6a–c and Figure 6d–f, respectively. Figure 6a–c show that the convergence and steady-state performance of the DEA improve with reduced population size, noise variance and generations, and vice versa. Similar behavior is shown by the ADEA for lower noise levels and smaller population sizes. Moreover, the ADEA attains its optimal fitness over roughly twice the number of generations required by the DEA.
The fitness, in terms of the MSE, of both the DEA and ADEA is also evaluated by introducing a sigmoidal-type nonlinearity into the HOE system. The investigations are made for different population sizes (50, 100, 150) and generations (400, 600, 800). The fitness results of the DEA and ADEA for the HOE system with the sigmoidal-type nonlinearity are compared in Table 4. The MSE results in Table 4 show that the performance of the DEA and ADEA improves with the increase in population size for the various generation counts (400, 600, 800). The optimal fitness of both algorithms is achieved for the largest generation and population values. The relative performance, in terms of the minimum fitness achieved by the two methods for particular generation and population sizes, is not consistent. The minimum fitness attained by the DEA and ADEA with the maximum number of generations and the largest population is 6.26 × 10−11 and 3.24 × 10−9, respectively.
The learning plots of fitness for the sigmoidal-type nonlinearity with variations of generation and population size are shown in Figure 7. The fitness curves for the DEA are shown in Figure 7a–c, and the learning curves for the ADEA are given in Figure 7d–f. A similar trend in the performance of the DEA and ADEA is noticed in Figure 7a–f regarding convergence speed and final estimation accuracy. Both methods exhibit fast convergence for smaller population and generation sizes; however, they achieve their optimal fitness for larger population and generation values.
The behavior of the DEA and ADEA in terms of the minimum fitness achieved with the sigmoidal-type nonlinearity for the HOE system is also assessed by fixing three population sizes (50, 80, 100) against three noise variances (0.09, 0.05, 0.01) and different generation settings. The optimal fitness values attained for the different noise levels and population sizes are presented in Table 5. It is observed that both the DEA and ADEA perform well for the smallest noise value, obtaining optimal fitness values of 1.67 × 10−7 and 3.46 × 10−9, respectively.
Figure 8 shows the fitness-based learning curves with the sigmoidal-type nonlinearity for the various noise levels, population sizes and generations. The learning curves for the DEA, shown in Figure 8a–c, demonstrate that the DEA converges effectively for low noise variances with the largest population size. The DEA achieves fast convergence as the generation size increases up to 200, whereas the graphs in Figure 8d–f show a fast convergence rate for the ADEA up to 600 generations for low noise levels, e.g., 0.01, and a small population size, e.g., 50.

4.2. Case Study 2

The desired response for case study 2 of the HOE system is obtained through a set of parameters taken from [55]. The performance of the ADEA is assessed by considering two types of nonlinearities and different noise levels in the HOE system.
In case study 2, the DEA and ADEA are assessed for various population sizes (50, 100, 150) and generations (400, 600, 800) using two types of nonlinearities: polynomial and sigmoidal. The performance outcomes of the DEA and ADEA for the polynomial- and sigmoidal-type nonlinearities are shown in Table 6 and Table 7, respectively. Table 6 and Table 7 show that, for the different generation counts and both types of nonlinearities, the performance of both the DEA and ADEA improves with an increase in population size. Moreover, the best performance of both methods is achieved for the larger generation sizes. For the polynomial-type nonlinearity, the minimum fitness values achieved by the DEA and ADEA are 6.32 × 10−19 and 6.84 × 10−12, respectively, whereas for the sigmoidal-type nonlinearity the minimum fitness values are 7.68 × 10−12 and 3.78 × 10−12, respectively.
Figure 9 and Figure 10 show the fitness-based learning curves with different population and generation sizes for the polynomial- and sigmoidal-type nonlinearities, respectively. Figure 9 shows that both the DEA and ADEA converge quickly for smaller population and generation sizes, but both methods obtain a better steady-state performance for larger population and generation sizes. A similar performance trend is shown by the DEA and ADEA in Figure 10 for the sigmoidal-type nonlinearity.
To establish the robustness of the DEA and ADEA, the performance of both techniques is evaluated for different noise variances (0.09, 0.05, 0.01), variable generation sizes and three population sizes (50, 80, 100). The optimal results achieved by the DEA and ADEA with the polynomial- and sigmoidal-type nonlinearities for the three noise variances and population sizes are presented in Table 8 and Table 9, respectively. The fitness values in Table 8 and Table 9 show that both the DEA and ADEA obtain their optimal fitness values for the smallest noise level. Furthermore, the fitness performance of both methods improves as the population size increases for the different noise levels. The optimum fitness values achieved by the DEA and ADEA with the polynomial-type nonlinearity and the smallest noise value (0.01) are 1.03 × 10−6 and 5.02 × 10−10, respectively, while the minimum fitness values accomplished by the DEA and ADEA with the sigmoidal-type nonlinearity are 1.09 × 10−9 and 1.39 × 10−7, respectively.
The fitness-based learning curves for the polynomial- and sigmoidal-type nonlinearities with the three noise variances, three population sizes and varying generation sizes are shown in Figure 11 and Figure 12, respectively. Figure 11a–c and Figure 12a–c present the learning curves of the DEA, while Figure 11d–f and Figure 12d–f show the corresponding plots for the ADEA. Figure 11 shows that the convergence rate of both the DEA and ADEA with the polynomial-type nonlinearity increases with increasing population size and decreasing noise level and generation size, while both methods achieve their best steady-state performance for the smallest noise value and the largest population and generation sizes. A similar behavior is demonstrated by both methods for the sigmoidal-type nonlinearity, as shown in Figure 12.

4.3. Statistical Study of DEA and ADEA

The statistical investigations of the DEA and ADEA over multiple independent runs, with different noise variances, a fixed population size and a constant generation count, are shown in Figure 13. It is evident from Figure 13a–c that, for all noise variances, the ADEA converges better than the DEA, and the optimal fitness achieved by the ADEA is much better than that of the DEA. It is also noticed that the performance of both the DEA and ADEA degrades only slightly as the noise level increases.
It is observed from the detailed results presented for the two case studies that the proposed evolutionary algorithms can be effectively utilized for nonlinear system identification with polynomial- and sigmoidal-type nonlinearities. The proposed evolutionary algorithms identify the unknown HOE system by optimizing a fitness function that drives the difference between the desired and the estimated response towards zero. However, the optimal fitness value need not correspond to the same set of parameters used to generate the desired response since, in practical applications, only the desired response is available, rather than the parameter set.
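As a hedged sketch of how such a multi-run statistical study might be scripted, the helper below repeats an optimizer with different seeds and summarizes the final fitness values; the 20 runs follow the statement in the conclusions, while the optimizer and objective arguments are placeholders (for instance, the dea and fitness sketches given earlier).

import numpy as np

def statistical_study(optimizer, objective, runs=20):
    """Execute an optimizer several times with different seeds and
    summarize the final fitness values across the independent runs."""
    finals = []
    for r in range(runs):
        _, best_fit = optimizer(objective, seed=r)   # one independent execution
        finals.append(best_fit)
    finals = np.array(finals)
    return {"mean": finals.mean(), "std": finals.std(),
            "best": finals.min(), "worst": finals.max()}

# Illustrative usage: stats = statistical_study(lambda obj, seed: dea(obj, dim=8, seed=seed), my_objective)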

5. Conclusions

The following are the conclusions drawn from the extensive simulation results presented in the last section:
  • The evolutionary computing paradigm-based DEA and ADEA are effectively used for the nonlinear system identification of Hammerstein output error structures.
  • The DEA and ADEA are accurate and convergent for polynomial- and sigmoidal-type nonlinearities.
  • The robustness of the DEA and ADEA is established for different levels of external disturbance; however, the accuracy of both algorithms decreases as the noise level increases.
  • The performance of both the DEA and ADEA improves with increasing population size and generation count, but at the cost of a higher computational budget.
  • Reliable inferences regarding the performance of the DEA and ADEA are drawn through statistical analyses based on 20 independent executions of the algorithms.
  • The convergence speed of the ADEA is slightly slower than that of the DEA due to the adaptiveness factor in the crossover and mutation. In comparison, the ADEA is more accurate and statistically more consistent than the DEA, but at the cost of a little more complexity due to the extra operations involved in introducing the adaptiveness during the mutation and crossover steps.
  • The presented study is a step forward in the domain of nonlinear system identification through intelligent computing based on evolutionary algorithms.
In the future, the application of the proposed methodology can be investigated for solving nonlinear supply energy systems [56], industrial reactive distillation processes [57], power supply systems [58] and delivery systems [59]. Moreover, other recently introduced evolutionary algorithms [60] and fuzzy predictive control [61,62,63,64] may be exploited for efficient nonlinear system identification.

Author Contributions

Conceptualization, N.I.C. and Z.A.K.; methodology, H.B.T., N.I.C. and M.A.Z.R.; software, H.B.T.; validation, M.A.Z.R. and Z.A.K.; resources, H.B.T., K.M.C. and A.H.M.; writing—original draft preparation, H.B.T.; writing—review and editing, N.I.C., Z.A.K. and M.A.Z.R.; supervision, N.I.C. and Z.A.K.; project administration, K.M.C. and A.H.M.; funding acquisition, K.M.C. and A.H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beck, J.V.; Arnold, K.J. Parameter Estimation in Engineering and Science; John Wiley & Sons: Hoboken, NJ, USA, 1977. [Google Scholar]
  2. Su, X.; Xia, F.; Liu, J.; Wu, L. Event-triggered fuzzy control of nonlinear systems with its application to inverted pendulum systems. Automatica 2018, 94, 236–248. [Google Scholar] [CrossRef]
  3. Yao, J.; Deng, W. Active disturbance rejection adaptive control of uncertain nonlinear systems: Theory and application. Nonlinear Dyn. 2017, 89, 1611–1624. [Google Scholar] [CrossRef]
  4. Niu, B.; Ahn, C.K.; Li, H.; Liu, M. Adaptive control for stochastic switched nonlower triangular nonlinear systems and its application to a one-link manipulator. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 1701–1714. [Google Scholar] [CrossRef]
  5. Sun, R.; Na, J.; Zhu, B. Robust approximation-free prescribed performance control for nonlinear systems and its application. Int. J. Syst. Sci. 2018, 49, 511–522. [Google Scholar] [CrossRef]
  6. Chakour, C.; Benyounes, A.; Boudiaf, M. Diagnosis of uncertain nonlinear systems using interval kernel principal components analysis: Application to a weather station. ISA Trans. 2018, 83, 126–141. [Google Scholar] [CrossRef]
  7. Benamor, A.; Messaoud, H. A new adaptive sliding mode control of nonlinear systems using Volterra series: Application to hydraulic system. Int. J. Model. Identif. Control 2018, 29, 44–52. [Google Scholar] [CrossRef]
  8. Da Silva, S.; Cogan, S.; Foltête, E. Nonlinear identification in structural dynamics based on Wiener series and Kautz filters. Mech. Syst. Signal Process. 2010, 24, 52–58. [Google Scholar] [CrossRef] [Green Version]
  9. Billings, S.A. Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains; Wiley: West Sussex, UK, 2013. [Google Scholar]
  10. Ławryńczuk, M.; Tatjewski, P. Offset-free state-space nonlinear predictive control for Wiener systems. Inf. Sci. 2020, 511, 127–151. [Google Scholar] [CrossRef]
  11. Ławryńczuk, M. MPC Algorithms Using Input-Output Wiener Models. In Nonlinear Predictive Control Using Wiener Models; Springer: Cham, Switzerland, 2022; pp. 71–141. [Google Scholar]
  12. Ławryńczuk, M. MPC of State-Space Benchmark Wiener Processes. In Nonlinear Predictive Control Using Wiener Models; Springer: Cham, Switzerland, 2022; pp. 309–336. [Google Scholar]
  13. Boubaker, S. Identification of nonlinear Hammerstein system using mixed integer-real coded particle swarm optimization: Application to the electric daily peak-load forecasting. Nonlinear Dyn. 2017, 90, 797–814. [Google Scholar] [CrossRef]
  14. Cheng, C.M.; Peng, Z.K.; Zhang, W.M.; Meng, G. Volterra-series-based nonlinear system modeling and its engineering applications: A state-of-the-art review. Mech. Syst. Signal Process. 2017, 87, 340–364. [Google Scholar] [CrossRef]
  15. Sidorov, D.N.; Sidorov, N.A. Convex majorants method in the theory of nonlinear Volterra equations. Banach J. Math. Anal. 2012, 6, 1–10. [Google Scholar] [CrossRef]
  16. Noeiaghdam, S.; Sidorov, D.; Wazwaz, A.M.; Sidorov, N.; Sizikov, V. The Numerical Validation of the Adomian Decomposition Method for Solving Volterra Integral Equation with Discontinuous Kernels Using the CESTAC Method. Mathematics 2021, 9, 260. [Google Scholar] [CrossRef]
  17. Sidorov, D.; Muftahov, I.; Tynda, A. Numerical solution of fractional Volterra integral equation with piecewise continuous kernel. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1847, p. 012011. [Google Scholar]
  18. Sidorov, D.; Muftahov, I.; Tomin, N.; Karamov, D.; Panasetsky, D.; Dreglea, A.; Liu, F.; Foley, A. A dynamic analysis of energy storage with renewable and diesel generation using Volterra equations. IEEE Trans. Ind. Inform. 2019, 16, 3451–3459. [Google Scholar] [CrossRef] [Green Version]
  19. Kibangou, A.Y.; Favier, G. Tensor analysis-based model structure determination and parameter estimation for block-oriented nonlinear systems. IEEE J. Sel. Top. Signal Process. 2010, 4, 514–525. [Google Scholar] [CrossRef]
  20. Cheng, C.M.; Dong, X.J.; Peng, Z.K.; Zhang, W.M.; Meng, G. Kautz basis expansion-based Hammerstein system identification through separable least squares method. Mech. Syst. Signal Process. 2019, 121, 929–941. [Google Scholar] [CrossRef]
  21. Holcomb, C.M.; de Callafon, R.A.; Bitmead, R.R. Closed-Loop Identification of Hammerstein Systems with Application to Gas Turbines. IFAC Proc. Vol. 2014, 47, 493–498. [Google Scholar] [CrossRef] [Green Version]
  22. AitMaatallah, O.; Achuthan, A.; Janoyan, K.; Marzocca, P. Recursive wind speed forecasting based on Hammerstein Auto-Regressive model. Appl. Energy 2015, 145, 191–197. [Google Scholar] [CrossRef]
  23. Liang, T.; Dinavahi, V. Real-Time System-on-Chip Emulation of Electro-Thermal Models for Power Electronic Devices Via Hammerstein Configuration. IEEE J. Emerg. Sel. Top. Power Electron. 2017, 6, 203–218. [Google Scholar] [CrossRef]
  24. Le, F.; Markovsky, I.; Freeman, C.T.; Rogers, E. Recursive identification of Hammerstein systems with application to electrically stimulated muscle. Control. Eng. Pract. 2012, 20, 386–396. [Google Scholar] [CrossRef] [Green Version]
  25. Ding, F.; Chen, H.; Xu, L.; Hayat, T. A hierarchical least squares identification algorithm for Hammerstein nonlinear systems using the key term separation. J. Frankl. Inst. 2018, 355, 3737–3752. [Google Scholar] [CrossRef]
  26. Castro-Garcia, R.; Agudelo, O.M.; Suykens, J.A. Impulse response constrained LS-SVM modelling for MIMO Hammerstein system identification. Int. J. Control. 2018, 92, 908–925. [Google Scholar] [CrossRef]
  27. Chaudhary, N.I.; Zubair, S.; Aslam, M.S.; Raja, M.A.Z.; Machado, J.T. Design of momentum fractional LMS for Hammerstein nonlinear system identification with application to electrically stimulated muscle model. Eur. Phys. J. Plus 2019, 134, 407. [Google Scholar] [CrossRef]
  28. Chaudhary, N.I.; Raja, M.A.Z.; He, Y.; Khan, Z.A.; Machado, J.T. Design of multi innovation fractional LMS algorithm for parameter estimation of input nonlinear control autoregressive systems. Appl. Math. Model. 2021, 93, 412–425. [Google Scholar] [CrossRef]
  29. Xiong, W.; Ding, F. Iterative identification algorithms for input nonlinear output error autoregressive systems. Int. J. Control. Autom. Syst. 2016, 14, 140–147. [Google Scholar]
  30. Raja, M.A.Z.; Shah, A.A.; Mehmood, A.; Chaudhary, N.I.; Aslam, M.S. Bio-inspired computational heuristics for parameter estimation of nonlinear Hammerstein controlled autoregressive system. Neural Comput. Appl. 2018, 29, 1455–1474. [Google Scholar] [CrossRef]
  31. Mehmood, A.; Chaudhary, N.I.; Zameer, A.; Raja, M.A.Z. Novel computing paradigms for parameter estimation in Hammerstein controlled auto regressive auto regressive moving average systems. Appl. Soft Comput. 2019, 80, 263–284. [Google Scholar] [CrossRef]
  32. Mehmood, A.; Chaudhary, N.I.; Zameer, A.; Raja, M.A.Z. Backtracking search optimization heuristics for nonlinear Hammerstein controlled auto regressive auto regressive systems. ISA Trans. 2019, 91, 99–113. [Google Scholar] [CrossRef]
  33. Mohammadi Moghadam, H.; Mohammadzadeh, A.; Hadjiaghaie Vafaie, R.; Tavoosi, J.; Khooban, M.H. A type-2 fuzzy control for active/reactive power control and energy storage management. Trans. Inst. Meas. Control. 2021, 01423312211048038. [Google Scholar] [CrossRef]
  34. Tavoosi, J.; Mohammadzadeh, A.; Jermsittiparsert, K. A review on type-2 fuzzy neural networks for system identification. Soft Comput. 2021, 25, 7197–7212. [Google Scholar] [CrossRef]
  35. Tavoosi, J.; Suratgar, A.A.; Menhaj, M.B.; Mosavi, A.; Mohammadzadeh, A.; Ranjbar, E. Modeling Renewable Energy Systems by a Self-Evolving Nonlinear Consequent Part Recurrent Type-2 Fuzzy System for Power Prediction. Sustainability 2021, 13, 3301. [Google Scholar] [CrossRef]
  36. Tavoosi, J.; Shirkhani, M.; Abdali, A.; Mohammadzadeh, A.; Nazari, M.; Mobayen, S.; Bartoszewicz, A. A New General Type-2 Fuzzy Predictive Scheme for PID Tuning. Appl. Sci. 2021, 11, 10392. [Google Scholar] [CrossRef]
  37. Tavoosi, J.; Zhang, C.; Mohammadzadeh, A.; Mobayen, S.; Mosavi, A.H. Medical Image Interpolation Using Recurrent Type-2 Fuzzy Neural Network. Front. Neuroinformatics 2021, 15, 667375. [Google Scholar] [CrossRef]
  38. Mehmood, A.; Aslam, M.S.; Chaudhary, N.I.; Zameer, A.; Raja, M.A.Z. Parameter estimation for Hammerstein control autoregressive systems using differential evolution. Signal Image Video Process. 2018, 12, 1603–1610. [Google Scholar] [CrossRef]
  39. Cui, T.; Xu, L.; Ding, F.; Alsaedi, A.; Hayat, T. Maximum likelihood-based adaptive differential evolution identification algorithm for multivariable systems in the state-space form. Int. J. Adapt. Control. Signal Process. 2020, 34, 1658–1676. [Google Scholar] [CrossRef]
  40. Pouliquen, M.; Pigeon, E.; Gehan, O. Identification scheme for Hammerstein output error models with bounded noise. IEEE Trans. Autom. Control. 2015, 61, 550–555. [Google Scholar] [CrossRef]
  41. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  42. Deng, W.; Liu, H.; Xu, J.; Zhao, H.; Song, Y. An improved quantum-inspired differential evolution algorithm for deep belief network. IEEE Trans. Instrum. Meas. 2020, 69, 7319–7327. [Google Scholar] [CrossRef]
  43. Peng, L.; Liu, S.; Liu, R.; Wang, L. Effective long short-term memory with differential evolution algorithm for electricity price prediction. Energy 2018, 162, 1301–1314. [Google Scholar] [CrossRef]
  44. Biswas, P.P.; Suganthan, P.N.; Wu, G.; Amaratunga, G.A.J. Parameter estimation of solar cells using datasheet information with the application of an adaptive differential evolution algorithm. Renew. Energy 2019, 132, 425–438. [Google Scholar] [CrossRef]
  45. Wang, L.; Hu, H.; Ai, X.-Y.; Liu, H. Effective electricity energy consumption forecasting using echo state network improved by differential evolution algorithm. Energy 2018, 153, 801–815. [Google Scholar] [CrossRef]
  46. Li, S.; Gu, Q.; Gong, W.; Ning, B. An enhanced adaptive differential evolution algorithm for parameter extraction of photovoltaic models. Energy Convers. Manag. 2020, 205, 112443. [Google Scholar] [CrossRef]
  47. Peng, Y.; He, S.; Sun, K. Parameter identification for discrete memristive chaotic map using adaptive differential evolution algorithm. Nonlinear Dyn. 2021, 1–13. [Google Scholar] [CrossRef]
  48. Ramli, M.A.M.; Bouchekara, H.R.E.H.; Alghamdi, A.S. Optimal sizing of PV/wind/diesel hybrid microgrid system using multi-objective self-adaptive differential evolution algorithm. Renew. Energy 2018, 121, 400–411. [Google Scholar] [CrossRef]
  49. Mohamed, A.W.; Suganthan, P.N. Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Comput. 2018, 22, 3215–3235. [Google Scholar] [CrossRef]
  50. Sakr, W.S.; L-Sehiemy, R.A.E.; Azmy, A.M. Adaptive differential evolution algorithm for efficient reactive power management. Appl. Soft Comput. 2017, 53, 336–351. [Google Scholar] [CrossRef]
  51. Wang, S.; Li, Y.; Yang, H.; Liu, H. Self-adaptive differential evolution algorithm with improved mutation strategy. Soft Comput. 2018, 22, 3433–3447. [Google Scholar] [CrossRef]
  52. Wang, S.; Li, Y.; Yang, H. Self-adaptive differential evolution algorithm with improved mutation mode. Appl. Intell. 2017, 47, 644–658. [Google Scholar] [CrossRef]
  53. Fu, C.M.; Jiang, C.; Chen, G.S.; Liu, Q.M. An adaptive differential evolution algorithm with an aging leader and challengers mechanism. Appl. Soft Comput. 2017, 57, 60–73. [Google Scholar] [CrossRef]
  54. Mohamed, A.W. Solving large-scale global optimization problems using enhanced adaptive differential evolution algorithm. Complex Intell. Syst. 2017, 3, 205–231. [Google Scholar] [CrossRef] [Green Version]
  55. Ding, F.; Shi, Y.; Chen, T. Auxiliary model-based least-squares identification methods for Hammerstein output-error systems. Syst. Control. Lett. 2007, 56, 373–380. [Google Scholar] [CrossRef]
  56. Bakhtadze, N.; Yadykin, I.; Maximov, E.; Maximova, N.; Chereshko, A.; Vershinin, Y. Forecasting the Risks of Stability Loss for Nonlinear Supply Energy Systems. IFAC-Pap. 2021, 54, 478–483. [Google Scholar] [CrossRef]
  57. Klimchenko, V.; Torgashov, A.; Shardt, Y.A.; Yang, F. Multi-Output Soft Sensor with a Multivariate Filter That Predicts Errors Applied to an Industrial Reactive Distillation Process. Mathematics 2021, 9, 1947. [Google Scholar] [CrossRef]
  58. Bakhtadze, N.; Yadikin, I. Discrete Predictive Models for Stability Analysis of Power Supply Systems. Mathematics 2020, 8, 1943. [Google Scholar] [CrossRef]
  59. Bakhtadze, N.; Karsaev, O.; Sabitov, R.; Smirnova, G.; Eponeshnikov, A.; Sabitov, S. Identification models in flexible delivery systems for groupage cargoes. Procedia Comput. Sci. 2020, 176, 225–232. [Google Scholar] [CrossRef]
  60. Ramos-Pérez, J.M.; Miranda, G.; Segredo, E.; León, C.; Rodríguez-León, C. Application of Multi-Objective Evolutionary Algorithms for Planning Healthy and Balanced School Lunches. Mathematics 2021, 9, 80. [Google Scholar] [CrossRef]
  61. Mohammadzadeh, A.; Kumbasar, T. A new fractional-order general type-2 fuzzy predictive control system and its application for glucose level regulation. Appl. Soft Comput. 2020, 91, 106241. [Google Scholar] [CrossRef]
  62. Mosavi, A.; Qasem, S.N.; Shokri, M.; Band, S.S.; Mohammadzadeh, A. Fractional-order fuzzy control approach for photovoltaic/battery systems under unknown dynamics, variable irradiation and temperature. Electronics 2020, 9, 1455. [Google Scholar] [CrossRef]
  63. Vafaie, R.H.; Mohammadzadeh, A.; Piran, M. A new type-3 fuzzy predictive controller for MEMS gyroscopes. Nonlinear Dyn. 2021, 106, 381–403. [Google Scholar] [CrossRef]
  64. Qasem, S.N.; Ahmadian, A.; Mohammadzadeh, A.; Rathinasamy, S.; Pahlevanzadeh, B. A type-3 logic fuzzy system: Optimized by a correntropy based Kalman filter with adaptive fuzzy kernel size. Inf. Sci. 2021, 572, 424–443. [Google Scholar] [CrossRef]
Figure 1. Block Diagram of INL systems.
Figure 2. Mathematical structure of HOE system.
Figure 3. Identification model of nonlinear systems.
Figure 4. Flowchart of the DEA.
Figure 5. Fitness plots of DEA vs. ADEA, with respect to generations and population size for polynomial-type nonlinearities in case study 1.
Figure 6. Fitness plots of DEA vs. ADEA with respect to noise variance and population size for polynomial-type nonlinearities in case study 1.
Figure 7. Fitness curves of DEA vs. ADEA with regard to generations and population size for sigmoidal-type nonlinearities in case study 1.
Figure 8. Fitness curves of DEA vs. ADEA with regard to noise variance and population size for sigmoidal-type nonlinearities in case study 1.
Figure 9. Fitness plots of DEA vs. ADEA with respect to generations and population size for polynomial-type nonlinearities in case study 2.
Figure 10. Fitness plots of DEA vs. ADEA with respect to generations and population size for sigmoidal-type nonlinearity in case study 2.
Figure 11. Fitness plots of DEA vs. ADEA with respect to noise variance and population size for polynomial-type nonlinearities in case study 2.
Figure 12. Fitness plots of DEA vs. ADEA with respect to noise variance and population size for sigmoidal-type nonlinearities in case study 2.
Figure 13. Statistical analyses plots of DEA and ADEA for Np = 100, T = 500 and multiple noise variances.
Table 1. Parameter settings of DEA and ADEA.
Sr. No. | Type of Parameter | DEA | ADEA
1 | Number of variables | 8 | 8
2 | Mutation factor | 0.25 | Adaptive
3 | Crossover probability | 0.8 | Adaptive
4 | Lower bound | −2 | −2
5 | Upper bound | 2 | 2
Table 2. Comparison of DEA and ADEA with respect to generations and population size for polynomial-type nonlinearity in case study 1.
Generations (T) | Population Size (Np) | DEA Fitness | ADEA Fitness
400 | 50 | 1.32 × 10−4 | 2.33 × 10−6
400 | 100 | 2.89 × 10−6 | 7.86 × 10−9
400 | 150 | 3.41 × 10−10 | 6.63 × 10−9
600 | 50 | 1.09 × 10−4 | 3.44 × 10−7
600 | 100 | 2.44 × 10−7 | 4.21 × 10−11
600 | 150 | 4.53 × 10−9 | 3.33 × 10−12
800 | 50 | 5.16 × 10−5 | 6.54 × 10−9
800 | 100 | 1.01 × 10−6 | 3.24 × 10−14
800 | 150 | 2.26 × 10−9 | 7.08 × 10−15
Table 3. Comparison of DEA and ADEA with respect to noise variance and fixed population size for polynomial-type nonlinearity in case study 1.
Noise Variance | Population Size (Np) | DEA Fitness | ADEA Fitness
0.09 | 50 | 2.84 × 10−3 | 5.02 × 10−3
0.09 | 80 | 4.94 × 10−3 | 3.73 × 10−3
0.09 | 100 | 3.71 × 10−3 | 4.21 × 10−3
0.05 | 50 | 2.01 × 10−4 | 1.13 × 10−5
0.05 | 80 | 1.30 × 10−5 | 9.05 × 10−6
0.05 | 100 | 1.44 × 10−5 | 8.68 × 10−6
0.01 | 50 | 1.11 × 10−4 | 3.03 × 10−8
0.01 | 80 | 8.22 × 10−6 | 9.18 × 10−10
0.01 | 100 | 2.12 × 10−6 | 8.69 × 10−10
Table 4. Performance comparison of DEA and ADEA with regard to generations and population size for sigmoidal-type nonlinearity in case study 1.
Generations (T) | Population Size (Np) | DEA Fitness | ADEA Fitness
400 | 50 | 7.14 × 10−5 | 1.77 × 10−4
400 | 100 | 9.44 × 10−6 | 1.37 × 10−5
400 | 150 | 8.89 × 10−7 | 1.41 × 10−4
600 | 50 | 2.08 × 10−5 | 1.18 × 10−4
600 | 100 | 3.82 × 10−7 | 3.40 × 10−8
600 | 150 | 8.54 × 10−8 | 8.18 × 10−9
800 | 50 | 1.10 × 10−5 | 1.21 × 10−7
800 | 100 | 5.17 × 10−7 | 4.63 × 10−8
800 | 150 | 6.26 × 10−11 | 3.24 × 10−9
Table 5. Performance comparison of DEA and ADEA with regard to noise variance and population size for sigmoidal-type nonlinearity in case study 1.
Noise Variance | Population Size (Np) | DEA Fitness | ADEA Fitness
0.09 | 50 | 5.60 × 10−3 | 6.79 × 10−3
0.09 | 80 | 3.60 × 10−3 | 5.37 × 10−3
0.09 | 100 | 3.70 × 10−3 | 3.82 × 10−3
0.05 | 50 | 5.50 × 10−5 | 1.91 × 10−5
0.05 | 80 | 8.68 × 10−6 | 1.29 × 10−5
0.05 | 100 | 9.04 × 10−6 | 8.27 × 10−5
0.01 | 50 | 1.67 × 10−7 | 6.63 × 10−7
0.01 | 80 | 8.74 × 10−6 | 1.43 × 10−8
0.01 | 100 | 5.49 × 10−7 | 3.46 × 10−9
Table 6. Comparison of DEA and ADEA with respect to generations and population size for polynomial-type nonlinearity in case study 2.
Generations (T) | Population Size (Np) | DEA Fitness | ADEA Fitness
400 | 50 | 6.60 × 10−5 | 5.04 × 10−4
400 | 100 | 3.39 × 10−5 | 2.06 × 10−6
400 | 150 | 2.12 × 10−11 | 9.71 × 10−7
600 | 50 | 8.87 × 10−4 | 1.54 × 10−3
600 | 100 | 7.26 × 10−7 | 9.41 × 10−9
600 | 150 | 7.01 × 10−16 | 1.69 × 10−9
800 | 50 | 1.65 × 10−4 | 4.00 × 10−4
800 | 100 | 9.14 × 10−8 | 9.87 × 10−12
800 | 150 | 6.32 × 10−19 | 6.84 × 10−12
Table 7. Comparison of DEA and ADEA with respect to generations and population size for sigmoidal-type nonlinearity in case study 2.
Generations (T) | Population Size (Np) | DEA Fitness | ADEA Fitness
400 | 50 | 1.46 × 10−3 | 2.51 × 10−5
400 | 100 | 3.18 × 10−5 | 4.79 × 10−7
400 | 150 | 1.04 × 10−5 | 1.90 × 10−7
600 | 50 | 2.34 × 10−4 | 3.30 × 10−5
600 | 100 | 3.88 × 10−6 | 3.67 × 10−7
600 | 150 | 4.66 × 10−10 | 4.37 × 10−9
800 | 50 | 5.55 × 10−4 | 5.84 × 10−10
800 | 100 | 2.31 × 10−7 | 5.03 × 10−10
800 | 150 | 7.68 × 10−12 | 3.78 × 10−12
Table 8. Comparison of DEA and ADEA with respect to noise variance and fixed population size for polynomial-type nonlinearity in case study 2.
Noise Variance | Population Size (Np) | DEA Fitness | ADEA Fitness
0.09 | 50 | 0.013 | 4.23 × 10−3
0.09 | 80 | 0.008 | 2.91 × 10−3
0.09 | 100 | 0.001 | 1.98 × 10−3
0.05 | 50 | 4.89 × 10−5 | 1.05 × 10−5
0.05 | 80 | 8.97 × 10−4 | 7.46 × 10−6
0.05 | 100 | 3.57 × 10−5 | 5.01 × 10−6
0.01 | 50 | 1.54 × 10−6 | 8.66 × 10−8
0.01 | 80 | 1.03 × 10−6 | 8.66 × 10−10
0.01 | 100 | 3.40 × 10−6 | 5.02 × 10−10
Table 9. Comparison of DEA and ADEA with respect to noise variance and fixed population size for sigmoidal-type nonlinearity in case study 2.
Noise Variance | Population Size (Np) | DEA Fitness | ADEA Fitness
0.09 | 50 | 1.36 × 10−3 | 4.60 × 10−3
0.09 | 80 | 3.90 × 10−3 | 3.29 × 10−3
0.09 | 100 | 2.70 × 10−3 | 1.59 × 10−3
0.05 | 50 | 5.28 × 10−5 | 1.14 × 10−5
0.05 | 80 | 8.80 × 10−6 | 2.63 × 10−5
0.05 | 100 | 4.61 × 10−6 | 3.97 × 10−6
0.01 | 50 | 2.58 × 10−5 | 1.54 × 10−7
0.01 | 80 | 1.09 × 10−9 | 4.39 × 10−6
0.01 | 100 | 4.20 × 10−7 | 1.39 × 10−7

