Article

Morlet Wavelet Neural Network Investigations to Present the Numerical Investigations of the Prediction Differential Model

1 Department of Computer Science and Mathematics, Lebanese American University, Beirut 1401, Lebanon
2 Laboratory of Engineering Mathematics (LR01ES13), Tunisia Polytechnic School, University of Carthage, Tunis 2078, Tunisia
3 Department of Advanced Sciences and Technologies at National School of Advanced Sciences and Technologies of Borj Cedria, University of Carthage, Hammam-Chott 1164, Tunisia
4 Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
5 Department of Mathematics and Information Science, Faculty of Science, Beni-Suef University, Beni-Suef 62514, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(21), 4480; https://doi.org/10.3390/math11214480
Submission received: 3 October 2023 / Revised: 22 October 2023 / Accepted: 24 October 2023 / Published: 29 October 2023
(This article belongs to the Topic Advances in Artificial Neural Networks)

Abstract

In this study, a design of Morlet wavelet neural networks (MWNNs) is presented to solve the prediction differential model (PDM) by applying the global approximation capability of a genetic algorithm (GA) and the local quick interior-point algorithm scheme (IPAS), i.e., MWNN-GAIPAS. The famous and historical PDM is known as a variant of the functional differential system that works as the opposite of the delay differential models. A fitness function is constructed by using the mean square error and optimized through the GA-IPAS for solving the PDM. Three PDM examples have been presented numerically to check the authenticity of the MWNN-GAIPAS. To assess the performance of the designed MWNN-GAIPAS, the obtained outputs are compared with the exact results. Moreover, the neuron analysis is performed by taking 3, 10, and 20 neurons. The statistical observations have been performed to authenticate the reliability of the MWNN-GAIPAS for solving the PDM.

1. Introduction

The study of the prediction differential model (PDM) is considered very significant for researchers due to its various applications in climate forecasting, biological systems, stock markets, transport, astrophysics, engineering, etc. The notion of the delay differential model (DDM) arises from historical (past-dependent) systems and has been applied to design the form of the PDM. The idea of the DDM, introduced by Newton and Leibniz, was presented a few centuries ago and has widely been applied in many applications of engineering, economic systems, population dynamics, communication, and transport networks [1,2,3,4,5]. Many researchers have applied different techniques to solve the DDM. For example, Bildik et al. [6] proposed solving the DDM using the optimal perturbation iterative scheme. Rahimkhani et al. [7] presented an approach to solve the fractional form of the DDM. Sabir et al. [8] presented a new multi-singular nonlinear system with delayed factors. Aziz et al. [9] used a Haar wavelet approach to solve the partial form of the DDM. Frazier [10] implemented a wavelet Galerkin method to solve the DDM of the second kind. Tomasiello [11] solved a famous class of the historical DDM by applying the fuzzy transform method. Vaid [12] implemented the trigonometric B-spline approach to solve the second-order singularly perturbed DDM. Hashemi et al. [13] solved the fractional pantograph delay system by applying an efficient computational approach. Adel et al. [14] discussed the solutions of a pantograph singular DDM using the Bernoulli collocation scheme. Erdogan et al. [15] worked to solve the singularly perturbed DDM using a well-known finite difference approach. The DDM is a second-order differential model, which is given as [16]:
\[
\begin{cases}
y''(t) = f\big(t,\, y(t),\, y(t-\gamma_1)\big), & \gamma_1 > 0,\;\; c \le t \le b,\\
y(t) = \theta(t), & \sigma \le t \le c,\;\; 0 \le \gamma_1 \le c-\sigma,\\
y'(c) = w,
\end{cases}
\]
where γ_1 and θ(t) indicate the delay factor and the initial (history) condition, respectively. The delayed term y(t − γ_1) appearing in the above model shifts the argument backward in time t, i.e., c ≤ t ≤ b, σ is a small constant, and w is the value of the derivative of y at t = c. The prediction form of the DDM is obtained by advancing the argument in time, i.e., y(t + γ_1), with prediction term γ_1. The literature form of the mathematical PDM is given as [17,18]:
\[
\begin{cases}
y''(t) = f\big(t,\, y(t),\, y(t+\gamma_1)\big), & \gamma_1 > 0,\;\; c \le t \le b,\\
y(t) = \theta(t), & \sigma \le t \le c,\;\; 0 \le \gamma_1 \le \sigma - c,\\
y'(c) = w.
\end{cases}
\]
The above mathematical PDM shown in Equation (2) has been designed recently and has never been solved by exploiting the universal approximation ability of the Morlet wavelet neural network (MWNN) together with the global and local search optimizations of a genetic algorithm (GA) and interior-point algorithm scheme (IPAS), i.e., MWNN-GAIPAS. The numerical investigations have been performed using the MWNN-GAIPAS by taking 3, 10, and 20 neurons. Recently, stochastic computing solvers have been used to study the Hall effects on boundary layer flow [19], the nonlinear doubly singular model [20], large-scale continuous multi-objective optimization [21], the heat flux model [22], functional differential singular systems [23], the detection and contact tracing of coronavirus [24], control autoregressive systems [25], the Thomas–Fermi system [26], the Ree–Eyring dissipative fluid flow system [27], stiff nonlinear models [28], fractional multi-singular differential models [29], a mosquito dispersal system in a heterogeneous atmosphere [30], eye surgery models [31], and dynamical models based on the HIV prevention system [32]. These performances of stochastic solvers authenticate their worth in terms of robustness, convergence, and precision. Based on the above applications, the authors are inspired to present the solutions of the PDM by using the universal approximation ability of the MWNN together with the optimization procedures of GAIPAS. A few noticeable, prominent, and salient features of the current study are summarized as follows:
  • A layer structure of MWNNs is designed, and optimization is performed through an integrated neuro-evolution-based heuristic with IPAS to solve the PDM numerically;
  • The analysis with 3, 10, and 20 neurons is presented to interpret the stability and accuracy of the designed approach for solving the PDM;
  • The proposed MWNN-GAIPAS is executed for three different examples based on the PDM, and a comparison is performed with the exact solutions to validate the accuracy of the proposed MWNN-GAIPAS;
  • Statistical investigations through different performances of fitness: “root mean square error (R.MSE)”, “variance account for (VAF)”, “Theil’s inequality coefficients (TIC)”, and semi-inter quartile range (S.I.R) further authenticate the MWNN-GAIPAS for solving all examples of the PDM;
  • The complexity performances of the MWNN-GAIPAS based on 3, 10, and 20 neurons with the use of different statistical operators are examined for all of the examples of the PDM;
  • The proposed MWNN-GAIPAS provides reasonable and accurate results in the training span. Furthermore, smooth implementation, stability, and extendability are other obvious advantages.
The organization of the paper is as follows: Section 2 provides the details of the design of the MWNN-GAIPAS. Performance procedures are given in Section 3. Results are provided in Section 4. Conclusions, along with future research directions, are provided in the final section.

2. Methodology: MWNN-GAIPAS

The proposed methodology based on the MWNN-GAIPAS to solve each example of the PDM is separated into two phases.
  • An error-based merit function is presented to construct the MWNNs;
  • For the optimization of the merit function, the hybrid form of GAIPAS is described for the decision variables of MWNNs.

2.1. MWNN Modeling

The ability of NNs using the MW activation function has been exploited to obtain stable, steady, and reliable outcomes in many areas. The mathematical PDM given in Equation (2) is approximated with feed-forward NNs, together with their derivatives, using input, hidden, and output layers as:
\[
\hat{y}(t) = \sum_{k=1}^{s} q_k\, v(w_k t + m_k), \qquad
\hat{y}^{(n)}(t) = \sum_{k=1}^{s} q_k\, v^{(n)}(w_k t + m_k).
\]
In the above network, s represents the number of neurons and W = [q, w, m] is the unknown weight vector, i.e., q = [q_1, q_2, ..., q_s], w = [w_1, w_2, ..., w_s], and m = [m_1, m_2, ..., m_s]. The MWNN has not been implemented before to present the numerical solutions of the PDM. The Morlet wavelet (MW) is a form of continuous wavelet that is frequently applied in time-frequency analysis, signal processing, and image processing. This function is characterized by its ability to examine the frequency content of a signal at various scales. In its general form, it is a complex-valued function with real and imaginary components. In conventional neural networks, the neurons are typically activated by applying different functions, e.g., ReLU, log-sigmoid, or hyperbolic tangent. Here, a neural network architecture is defined in which these activation functions are replaced by the MW function. The MW function is mathematically given as [33]:
\[
v(t) = \cos\!\left(\tfrac{7}{4}\, t\right) e^{-\frac{1}{2} t^{2}}.
\]
Equation (3) takes the form of:
\[
\begin{aligned}
\hat{y}(t) &= \sum_{k=1}^{s} q_k \cos\!\left(\tfrac{7}{4}(w_k t + m_k)\right) e^{-\frac{1}{2}(w_k t + m_k)^{2}},\\
\hat{y}'(t) &= -\sum_{k=1}^{s} q_k w_k\, e^{-\frac{1}{2}(w_k t + m_k)^{2}}\left[\tfrac{7}{4}\sin\!\left(\tfrac{7}{4}(w_k t + m_k)\right) + (w_k t + m_k)\cos\!\left(\tfrac{7}{4}(w_k t + m_k)\right)\right],\\
\hat{y}''(t) &= \sum_{k=1}^{s} q_k w_k^{2}\, e^{-\frac{1}{2}(w_k t + m_k)^{2}}\left[\tfrac{7}{2}(w_k t + m_k)\sin\!\left(\tfrac{7}{4}(w_k t + m_k)\right) - \left(3.0625 + 1 - (w_k t + m_k)^{2}\right)\cos\!\left(\tfrac{7}{4}(w_k t + m_k)\right)\right].
\end{aligned}
\]
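For concreteness, the network output of Equation (3) with the MW activation (4) and the derivatives of Equation (5) can be evaluated directly; the following is a minimal NumPy sketch (an illustration, not the authors' implementation), assuming the cos((7/4)τ)e^(−τ²/2) activation written above and a weight vector W = [q, w, m].

```python
import numpy as np

A = 7.0 / 4.0  # frequency of the Morlet-type activation cos(A*tau)*exp(-tau^2/2), as in Equation (4)

def mwnn(t, q, w, m):
    """MWNN output y_hat(t) and its first two derivatives with respect to t.

    t       : scalar or 1-D array of input points
    q, w, m : 1-D arrays (one entry per neuron) of output weights, input weights, and biases
    """
    t = np.atleast_1d(np.asarray(t, dtype=float))[:, None]   # shape (N, 1)
    tau = w * t + m                                           # shape (N, s)
    g = np.exp(-0.5 * tau**2)
    c, s_ = np.cos(A * tau), np.sin(A * tau)

    y = np.sum(q * c * g, axis=1)
    # first derivative of cos(A*tau)*exp(-tau^2/2) w.r.t. t; the chain rule brings in w_k
    dy = np.sum(-q * w * (A * s_ + tau * c) * g, axis=1)
    # second derivative, matching the bracketed terms of Equation (5)
    d2y = np.sum(q * w**2 * (2.0 * A * tau * s_ - (A**2 + 1.0 - tau**2) * c) * g, axis=1)
    return y, dy, d2y
```

A finite-difference check on random weights is an easy way to confirm that the two derivative expressions are consistent with the forward pass.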
A merit function E is defined as:
E = E 1 + E 2
where E_1 and E_2 are the unsupervised errors associated with the differential equation and the initial conditions of system (2), shown as:
\[
E_1 = \frac{1}{N}\sum_{m=1}^{N}\Big[\hat{y}''_m - f\big(t_m,\, \hat{y}_m,\, \hat{y}(t_m + \gamma_1)\big)\Big]^{2}, \qquad 0 \le t_m \le 1,
\]
where Nh = 1 and t_m = mh.
\[
E_2 = \frac{1}{2}\big(\hat{y}_0 - \theta(t_0)\big)^{2} + \frac{1}{2}\big(\hat{y}'_N - w\big)^{2}.
\]
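To illustrate how Equations (6)–(8) combine into one objective, the short sketch below is hedged: it reuses the mwnn helper above, assumes the right-hand side f of system (2) is supplied by the user, and restricts the training grid to 0 ≤ t ≤ 1 while omitting the handling of the history interval σ ≤ t ≤ c.

```python
def merit(flat_W, f, gamma1, theta0, w0, N=10):
    """Fitness E = E1 + E2 of Equations (6)-(8) for the PDM y'' = f(t, y, y(t + gamma1)),
    penalizing y(0) = theta0 and y'(t_N) = w0 as in Equation (8)."""
    q, w, m = np.split(np.asarray(flat_W, dtype=float), 3)
    t = np.linspace(0.0, 1.0, N + 1)                         # t_m = m h with N h = 1
    y, dy, d2y = mwnn(t, q, w, m)
    y_adv, _, _ = mwnn(t + gamma1, q, w, m)                  # prediction term y(t + gamma1)
    E1 = np.mean((d2y - f(t, y, y_adv)) ** 2)                # Equation (7)
    E2 = 0.5 * ((y[0] - theta0) ** 2 + (dy[-1] - w0) ** 2)   # Equation (8)
    return E1 + E2
```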

2.2. Optimization Process: GAIPAS

The optimization of the MWNN decision variables to solve all the examples of the PDM is accomplished using the hybrid computing framework of the GA and IPAS, i.e., GAIPAS.
The genetic algorithm is a reliable global search method that is applied to unconstrained nonlinear systems using its important operators: selection, elitism, crossover, and mutation. Recently, GAs have been used in extensive applications, including heart disease diagnosis models [34], economic and environmental multi-objective optimization of household-level renewable energy [35], power and heat economic dispatch models [36], electromagnetic detection satellite scheduling models [37], prediction of air blasts [38], data sciences [39], fraud transactions of Ethereum smart contracts [40], and monorail vehicle-based dynamical systems [41]. These recent potential citations inspired the authors to use the global search GA process to obtain the decision variables of the MWNNs for solving the PDM.
The interior-point algorithm scheme is a well-known local search mechanism implemented for convex optimization systems. IPAS works to solve optimization problems of both types, constrained and unconstrained. Recently, IPAS has been used in image restoration [42], nested-constraint resource allocation problems [43], power system state estimation [44], risk-averse PDE-constrained optimization problems [45], and monotone weighted linear complementarity problems [46].
The hybridization is performed to compensate for the sluggishness of the GA by refining its result with the IPAS during the optimization procedure. The structure of the MWNN-GAIPAS for solving the PDM is shown in Figure 1. The details of the hybridization of GAIPAS are provided in Algorithm 1.
Algorithm 1. Pseudo-code of the MWNN-GAIPAS optimization procedure for solving the PDM.
"GA" start
    (i) Inputs: Select chromosomes with entries equal to the number of decision variables of the system: W = [q, w, m].
    (ii) Population: Chromosomes are given as: q = [q_1, q_2, ..., q_s], w = [w_1, w_2, ..., w_s], m = [m_1, m_2, ..., m_s].
    (iii) Output: The best GA values are denoted as WBGA.
    (iv) Initialization: Form a vector of weights 'W' to signify the 'chromosome'; this set constitutes the initial population. Set the values of 'Generations' and the other declarations using [gaoptimset] and [GA].
    (v) Fitness design: Compute the fitness E of each weight vector in the population.
    (vi) Stopping criteria: Stop if any of the following holds:
  •    [Fitness = 10−19], [Iterations = 120], [PopulationSize = 240], [TolCon/TolFun = 10−21], [StallLimit = 135];
  •    set the other values as defaults.
    Go to [Storage] once a stopping criterion is met.
    (vii) Ranking: Rank the weight vectors in the population according to their fitness.
    (viii) Reproduction: {selection: uniform}, {mutation: adaptfeasible}, {crossover: heuristic}.
    (ix) Storage: Store the best WBGA, the function counts, the generations, and E.
End of GA
IPAS starts
    (i) Inputs: WBGA is used as the initial point.
    (ii) Output: The optimal GAIPAS weights are denoted as WGAIPAS.
    (iii) Initialization: Take the best WBGA, the bounded constraints, the assignments, the iterations, and the other settings.
    (iv) Termination: The procedure terminates when any of the following conditions is met:
    [E = 10−18], [MaxFunEvals = 275,000], [Iterations = 650], [TolX/TolCon = 10−21], [TolFun = 10−22].
    (v) Fitness: Calculate E for the current W.
    (vi) Adjustments: Use 'fmincon' with the IPAS and adjust 'W' accordingly.
    (vii) Storage: Store 'WGAIPAS', 'E', the function count, the epochs, and the 'time'.
IPAS procedure ends
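Algorithm 1 is phrased in terms of MATLAB's [GA], [gaoptimset], and [fmincon] routines. The sketch below is only a rough, library-agnostic analogue in Python: a small hand-rolled GA supplies the global phase, and SciPy's minimize stands in for the interior-point refinement; the population size, bounds, mutation scale, and generation count are illustrative placeholders rather than the settings listed above.

```python
import numpy as np
from scipy.optimize import minimize

def ga_then_local(fitness, dim, pop_size=60, generations=100, bounds=(-10.0, 10.0), seed=0):
    """Global GA phase followed by a local refinement, mimicking the GA -> IPAS handoff."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        order = np.argsort(scores)
        elite = pop[order[: max(2, pop_size // 5)]]              # elitism: keep the fittest fifth
        pa = elite[rng.integers(len(elite), size=pop_size)]      # parent pool A
        pb = elite[rng.integers(len(elite), size=pop_size)]      # parent pool B
        mask = rng.random((pop_size, dim)) < 0.5                 # uniform crossover
        children = np.where(mask, pa, pb)
        children += 0.1 * rng.standard_normal((pop_size, dim))   # Gaussian mutation
        children[0] = pop[order[0]]                              # carry the best chromosome forward
        pop = np.clip(children, lo, hi)

    w_bga = pop[0]                                               # best GA weights (W_BGA)
    # local phase: a quasi-Newton refinement as a stand-in for fmincon's interior-point option
    result = minimize(fitness, w_bga, method="BFGS")
    return result.x, result.fun
```

For instance, one could pass the merit function of Section 2.1 as the fitness with dim = 3s for s neurons; a bound-constrained SciPy method could replace "BFGS" if box constraints on the weights are required.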

3. Statistical Performances

To check the consistency and reliability of the MWNN-GAIPAS, particularly in the framework of statistical modeling or data analysis, a number of statistical measures and schemes have been applied. The choice of metrics depends on the features of the scheme and the goals of the analysis. Some of the common statistical measures used in this study are:
Root mean square error (R.MSE): An extensively applied metric to assess the accuracy of a predictive system, mainly in time-series forecasting and regression analysis. It measures the average magnitude of the errors or residuals between the predicted and the actual observed values of the dataset. The mathematical form of the R.MSE is given as:
\[
\mathrm{R.MSE} = \sqrt{\frac{1}{s}\sum_{i=1}^{s}\big(y_i - \hat{y}_i\big)^{2}},
\]
where s is the total number of data points in the dataset, y_i and ŷ_i represent the actual and proposed values, and the summation runs over i from 1 to s.
VAF: One of the metrics that is applied to assess the performance of a regression. It provides the percentage of the total variance of the real values that is accounted for by the variance of the proposed values. The mathematical VAF is shown as follows:
\[
\mathrm{VAF} = \left(1 - \frac{\operatorname{var}\!\big(y_i - \hat{y}_i\big)}{\operatorname{var}\!\big(y_i\big)}\right) \times 100, \qquad \mathrm{E\text{-}VAF} = 100 - \mathrm{VAF},
\]
TIC: Also one of the statistical operators used to check the reliability of the MWNN-GAIPAS for solving the PDM, which is mathematically provided as:
\[
\mathrm{TIC} = \frac{\sqrt{\dfrac{1}{s}\sum_{i=1}^{s}\big(y_i - \hat{y}_i\big)^{2}}}{\sqrt{\dfrac{1}{s}\sum_{i=1}^{s} y_i^{2}} + \sqrt{\dfrac{1}{s}\sum_{i=1}^{s} \hat{y}_i^{2}}},
\]
S.I.R: Also used to measure statistical dispersion; it is associated with the interquartile range. The mathematical form of the S.I.R is presented as:
\[
\mathrm{S.I.R} = \tfrac{1}{2}\big(q_3 - q_1\big), \qquad q_1 \text{ and } q_3 \text{ being the 1st and 3rd quartiles}.
\]
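These four operators are straightforward to compute; the following small sketch (assuming 1-D NumPy arrays y and y_hat of exact and approximate values, and an array of run-wise results for the S.I.R) mirrors Equations (9)–(12).

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))                      # Equation (9)

def evaf(y, y_hat):
    vaf = (1.0 - np.var(y - y_hat) / np.var(y)) * 100.0            # Equation (10)
    return 100.0 - vaf

def tic(y, y_hat):
    num = np.sqrt(np.mean((y - y_hat) ** 2))
    den = np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(y_hat ** 2))  # Equation (11)
    return num / den

def sir(values):
    q1, q3 = np.percentile(values, [25, 75])                       # 1st and 3rd quartiles
    return 0.5 * (q3 - q1)                                         # Equation (12)
```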

4. Simulations of the Results

The comprehensive form of the solutions based on three examples of the PDM is presented in this section.
Example 1. Consider
\[
2 y''(t) - y(t + \pi) + y(t) = 0, \qquad y(0) = 1,\;\; y'(0) = 1.
\]
The exact solution of Equation (13) is y(t) = 1 + sin(t), while the fitness function is shown as:
\[
E = \frac{1}{N}\sum_{i=1}^{N}\Big[2\hat{y}''_i + \hat{y}_i - \hat{y}(t_i + \pi)\Big]^{2} + \frac{1}{2}\Big[\big(\hat{y}_0 - 1\big)^{2} + \big(\hat{y}'_0 - 1\big)^{2}\Big].
\]
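Assuming the mwnn helper sketched in Section 2.1, the objective (14) can be written directly as below; this is an illustrative reconstruction (the grid size and neuron count are arbitrary choices), not the authors' code.

```python
import numpy as np

def fitness_example1(flat_W, N=10):
    """Objective of Equation (14): residual of 2 y'' - y(t + pi) + y = 0 plus the
    penalties for y(0) = 1 and y'(0) = 1."""
    q, w, m = np.split(np.asarray(flat_W, dtype=float), 3)
    t = np.linspace(0.0, 1.0, N + 1)
    y, dy, d2y = mwnn(t, q, w, m)
    y_pi, _, _ = mwnn(t + np.pi, q, w, m)            # prediction term y(t + pi)
    E1 = np.mean((2.0 * d2y + y - y_pi) ** 2)
    E2 = 0.5 * ((y[0] - 1.0) ** 2 + (dy[0] - 1.0) ** 2)
    return E1 + E2

# e.g. a 3-neuron fit:  weights, E = ga_then_local(fitness_example1, dim=9)
```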
Example 2.  Consider the trigonometric PDM-based problem given as:
\[
y''(t) - y'(t + 1) + y(t + 1) + y(t) + \cos(1 + t) - \sin(1 + t) = 0, \qquad y(0) = 0,\;\; y'(0) = 1.
\]
The exact solution of the above model (15) is y(t) = sin(t), and the merit function is given as:
\[
E = \frac{1}{N}\sum_{i=1}^{N}\Big[\hat{y}''_i - \hat{y}'(t_i + 1) + \hat{y}(t_i + 1) + \hat{y}(t_i) + \cos(1 + t_i) - \sin(1 + t_i)\Big]^{2} + \frac{1}{2}\Big[\big(\hat{y}_0\big)^{2} + \big(\hat{y}'_0 - 1\big)^{2}\Big].
\]
Example 3. Consider the PDM-based equation given as:
\[
y''(t) + y(t + 1) - y(t) - 2t = 0, \qquad y(1) = 0,\;\; y(0) = 2.
\]
The exact solution of the above model (17) is y(t) = t^2 − 3t + 2, and the merit function is given as:
\[
E = \frac{1}{N}\sum_{i=1}^{N}\Big[\hat{y}''_i - \hat{y}(t_i) + \hat{y}(1 + t_i) - 2 t_i\Big]^{2} + \frac{1}{2}\Big[\big(\hat{y}_0 - 2\big)^{2} + \big(\hat{y}_N\big)^{2}\Big].
\]
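The stated exact solutions can be checked symbolically. The short SymPy sketch below substitutes each one into Equations (13), (15), and (17) as reconstructed above; every residual simplifies to zero, and the initial/boundary values can be verified the same way.

```python
import sympy as sp

t = sp.symbols('t')

checks = {
    "Example 1": (1 + sp.sin(t),
                  lambda y: 2*sp.diff(y(t), t, 2) - y(t + sp.pi) + y(t)),
    "Example 2": (sp.sin(t),
                  lambda y: sp.diff(y(t), t, 2) - sp.diff(y(t + 1), t) + y(t + 1)
                            + y(t) + sp.cos(1 + t) - sp.sin(1 + t)),
    "Example 3": (t**2 - 3*t + 2,
                  lambda y: sp.diff(y(t), t, 2) + y(t + 1) - y(t) - 2*t),
}

for name, (exact, residual) in checks.items():
    y = sp.Lambda(t, exact)                  # the exact solution as a callable expression
    print(name, sp.simplify(residual(y)))    # each line should print 0
```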
The prediction terms in the above examples are y(t + π) (Example 1), y'(t + 1) and y(t + 1) (Example 2), and y(t + 1) (Example 3). The optimization of each example is performed by using the MWNN-GAIPAS for 40 independent executions in order to assess the parameters. The best weight sets used to reproduce the proposed outcomes of the PDM are given in Equations (19)–(21), (22)–(24), and (25)–(27) for 3, 10, and 20 neurons, respectively. The estimated results using 3, 10, and 20 neurons are given as:
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}I}}(t) ={}& 19.387\cos\!\big(\tfrac{7}{4}(0.6186 t + 3.6688)\big)\, e^{-0.5(0.6186 t + 3.6688)^{2}} \\
&+ 5.1798\cos\!\big(\tfrac{7}{4}(1.2053 t - 4.2393)\big)\, e^{-0.5(1.2053 t - 4.2393)^{2}} \\
&+ 2.0001\cos\!\big(\tfrac{7}{4}(0.3514 t + 0.5519)\big)\, e^{-0.5(0.3514 t + 0.5519)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}II}}(t) ={}& 20\cos\!\big(\tfrac{7}{4}(1.1897 t - 3.3343)\big)\, e^{-0.5(1.1897 t - 3.3343)^{2}} \\
&+ 20.0\cos\!\big(\tfrac{7}{4}(20.000 t + 4.8592)\big)\, e^{-0.5(20.000 t + 4.8592)^{2}} \\
&+ 2.00\cos\!\big(\tfrac{7}{4}(5.3245 t + 6.4450)\big)\, e^{-0.5(5.3245 t + 6.4450)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}III}}(t) ={}& 19.9986\cos\!\big(\tfrac{7}{4}(1.8152 t + 3.9123)\big)\, e^{-0.5(1.8152 t + 3.9123)^{2}} \\
&- 16.3478\cos\!\big(\tfrac{7}{4}(0.5487 t + 2.0246)\big)\, e^{-0.5(0.5487 t + 2.0246)^{2}} \\
&- 0.18690\cos\!\big(\tfrac{7}{4}(1.0269 t + 1.458)\big)\, e^{-0.5(1.0269 t + 1.458)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}I}}(t) ={}& 0.3846\cos\!\big(\tfrac{7}{4}(0.2461 t + 0.456)\big)\, e^{-0.5(0.2461 t + 0.456)^{2}} \\
&- 0.5671\cos\!\big(\tfrac{7}{4}(0.0886 t + 2.668)\big)\, e^{-0.5(0.0886 t + 2.668)^{2}} \\
&- 2.1220\cos\!\big(\tfrac{7}{4}(0.3888 t + 0.6070)\big)\, e^{-0.5(0.3888 t + 0.6070)^{2}} \\
&+ \cdots - 0.6948\cos\!\big(\tfrac{7}{4}(0.3344 t - 1.6760)\big)\, e^{-0.5(0.3344 t - 1.6760)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}II}}(t) ={}& 19.98\cos\!\big(\tfrac{7}{4}(19.996 t - 5.153)\big)\, e^{-0.5(19.996 t - 5.153)^{2}} \\
&+ 19.9970\cos\!\big(\tfrac{7}{4}(3.6362 t - 7.2460)\big)\, e^{-0.5(3.6362 t - 7.246)^{2}} \\
&- 1.80890\cos\!\big(\tfrac{7}{4}(1.3754 t + 2.2409)\big)\, e^{-0.5(1.3754 t + 2.240)^{2}} \\
&+ \cdots + 3.09940\cos\!\big(\tfrac{7}{4}(7.0618 t + 8.1960)\big)\, e^{-0.5(7.0618 t + 8.196)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}III}}(t) ={}& 1.3604\cos\!\big(\tfrac{7}{4}(2.9655 t - 6.807)\big)\, e^{-0.5(2.9655 t - 6.807)^{2}} \\
&+ 1.6470\cos\!\big(\tfrac{7}{4}(3.027 t + 5.7125)\big)\, e^{-0.5(3.027 t + 5.7125)^{2}} \\
&- 18.224\cos\!\big(\tfrac{7}{4}(1.7829 t - 4.7960)\big)\, e^{-0.5(1.7829 t - 4.7960)^{2}} \\
&+ \cdots + 6.4136\cos\!\big(\tfrac{7}{4}(2.9551 t - 3.147)\big)\, e^{-0.5(2.9551 t - 3.147)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}I}}(t) ={}& 0.4821\cos\!\big(\tfrac{7}{4}(0.5832 t - 0.908)\big)\, e^{-0.5(0.5832 t - 0.908)^{2}} \\
&- 1.27390\cos\!\big(\tfrac{7}{4}(1.114 t + 0.1298)\big)\, e^{-0.5(1.114 t + 0.1298)^{2}} \\
&+ 1.11890\cos\!\big(\tfrac{7}{4}(1.1290 t - 0.3187)\big)\, e^{-0.5(1.1290 t - 0.3187)^{2}} \\
&+ \cdots - 0.1105\cos\!\big(\tfrac{7}{4}(0.7800 t + 1.3431)\big)\, e^{-0.5(0.7800 t + 1.3431)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}II}}(t) ={}& 9.24\cos\!\big(\tfrac{7}{4}(10.2434 t + 7.2396)\big)\, e^{-0.5(10.2434 t + 7.2396)^{2}} \\
&+ 19.991\cos\!\big(\tfrac{7}{4}(19.991 t + 4.9252)\big)\, e^{-0.5(19.991 t + 4.9252)^{2}} \\
&- 19.992\cos\!\big(\tfrac{7}{4}(19.9923 t + 4.908)\big)\, e^{-0.5(19.9923 t + 4.908)^{2}} \\
&+ \cdots + 10.897\cos\!\big(\tfrac{7}{4}(1.1228 t - 6.9925)\big)\, e^{-0.5(1.1228 t - 6.9925)^{2}},
\end{aligned}
\]
\[
\begin{aligned}
\hat{y}_{\mathrm{E\text{-}III}}(t) ={}& 1.366\cos\!\big(\tfrac{7}{4}(4.1848 t - 6.807)\big)\, e^{-0.5(4.1848 t - 6.807)^{2}} \\
&+ 0.4530\cos\!\big(\tfrac{7}{4}(0.0045 t + 3.020)\big)\, e^{-0.5(0.0045 t + 3.020)^{2}} \\
&- 1.4459\cos\!\big(\tfrac{7}{4}(0.0093 t - 4.305)\big)\, e^{-0.5(0.0093 t - 4.305)^{2}} \\
&+ \cdots + 19.191\cos\!\big(\tfrac{7}{4}(1.0855 t + 3.0155)\big)\, e^{-0.5(1.0855 t + 3.015)^{2}}.
\end{aligned}
\]
The performance of the optimization to solve the model is presented through the MWNN-GAIPAS for the trained weight sets obtained from forty independent executions based on 3, 10, and 20 neurons. Figure 2 shows the optimal weights for 3, 10, and 20 neurons, corresponding to Equations (19)–(27).
For the comparison, the obtained outcomes have been compared with the exact solutions for each example of the PDM using 3, 10, and 20 neurons. These comparison plots are drawn in Figure 3, and one can observe that the optimal results overlap with the exact outcomes for each example of the PDM with 3, 10, and 20 neurons, which shows the precision of the MWNN-GAIPAS.
The performances using the designed MWNN-GAIPAS for each example of the PDM with 3, 10, and 20 neurons are tabulated in Table 1, Table 2, and Table 3, respectively. The Minimum (Min), Median (Med), Mean, S.I.R, and standard deviation (SD) values for Examples 1, 2, and 3 were found to be good measures for each example of the PDM. These small calculated values of these statistical gauges for each example of the PDM based on 3, 10, and 20 neurons show the accuracy of the designed MWNN-GAIPAS.
The plots of absolute error (AE) for each example of the PDM for 3, 10, and 20 neurons are shown in Figure 4a, Figure 4b, and Figure 4c, respectively. One can observe that the AE values using three neurons for Examples 1, 2, and 3 lie at 10−7–10−9, 10−5–10−6, and 10−4–10−6. The AE for 10 neurons for Examples 1, 2, and 3 lies at 10−7–10−9, 10−6–10−8, and 10−7–10−9. For 20 neurons, the AE for Examples 1, 2, and 3 lies at around 10−7–10−9, 10−6–10−7, and 10−7–10−10, respectively. It is clear from the AE that each example of the PDM is found in good ranges for 3, 10, and 20 neurons. The R.MSE, Fitness (FIT), TIC, and EVAF are measured in Figure 5a–c for 3, 10, and 20 neurons. Figure 5a presents that the best FIT for Examples 1 to 3 is calculated as 10−9–10−10, 10−7–10−9, and 10−5–10−6, respectively. The optimal R.MSE is found around 10−6–10−7, while the R.MSE for the other two examples lies at around 10−5–10−6. The best EVAF values are found around 10−11–10−12, while the EVAF for the other two examples lies at around 10−9 to 10−10. The best TIC values for all the examples based on three neurons are calculated around 10−9 to 10−10. The performance based on 10 neurons is presented in Figure 5b. The plots in this figure indicate that the best FIT and EVAF measures for each example of the PDM lie at 10−10 to 10−12, while the best R.MSE and TIC values lie at around 10−6–10−8 and 10−10–10−12, respectively. The performance based on 20 neurons is presented in Figure 5c. The plots in this figure indicate that the best values of these statistical operators for Examples 1 to 3 lie at around 10−10–10−12, 10−6–10−8, 10−12–10−14, and 10−10–10−12. These obtained outcomes verify the competent trend for solving the PDM based on 3, 10, and 20 neurons.
The statistical performances to solve each example of the PDM are drawn in Figure 6, Figure 7, Figure 8 and Figure 9 for 3, 10, and 20 neurons. Figure 6 presents the FIT measures for 40 trials to present the solutions of each example of the PDM using 3, 10, and 20 neurons. One can observe that the best runs are found around 10−1 to 10−4, 10−2 to 10−8, and 10−2 to 10−12 for solving each example of the PDM using 3, 10, and 20 neurons. Figure 7 shows the statistical investigations through R.MSE using the MWNN-GAIPAS for each example of the PDM taking 3, 10, and 20 neurons. It is seen that the best runs are found around 10−1–10−3, 10−2–10−6, and 10−2–10−8 for solving each example of the PDM using 3, 10, and 20 neurons. Figure 8 shows the statistical investigations through EVAF using the MWNN-GAIPAS for each example of the PDM taking 3, 10, and 20 neurons. It is seen that the best runs are found around 10−1–10−5, 10−2–10−8, and 10−2–10−15 for solving each example of the PDM using 3, 10, and 20 neurons. Figure 9 represents the statistical investigations through the TIC using the MWNN-GAIPAS for each example of the PDM taking 3, 10, and 20 neurons. It is seen that the best runs are found around 10−2–10−7, 10−4–10−10, and 10−4–10−12 for solving each example of the PDM using 3, 10, and 20 neurons. It is easy to understand that by taking three neurons, the method runs more quickly than with 10 and 20 neurons, but more reliable solutions are obtained by taking a larger number of neurons.

5. Conclusions

The current investigations provide the design of a Morlet wavelet neural network using the hybridization process of the global and local search schemes GA-IPAS to solve the prediction model. This prediction-based model is a kind of functional differential equation, which works as the opposite of the historical delay differential models. The analysis based on 3, 10, and 20 neurons is also presented to solve three different examples of the prediction differential model. The overlapping of the results obtained through the proposed methodology with the exact results shows the exactness of each example of the model based on 3, 10, and 20 neurons. The AE for Examples 1 to 3 is found in good and negligible measures for 3, 10, and 20 neurons. It is observed that the proposed MWNN-GAIPAS has been implemented accurately, efficiently, and viably for different numbers of neurons to solve the model. Furthermore, the statistical investigations based on 40 runs for solving the prediction system in terms of the Min, Med, standard deviation, Mean, and S.I.R operators have also been performed. These operators authenticate the accuracy and trustworthiness of the MWNN-GAIPAS, which is further supported by the investigations based on the TIC, EVAF, and R.MSE operators to solve each example of the prediction differential system. It is also noticed that the small optimum values of these operators further justify the accuracy and precision of the MWNN-GAIPAS.
In upcoming studies, the MWNN-GAIPAS can be executed to solve biological systems, fluid dynamic nonlinear equations, and higher-order singular differential models [46,47,48,49,50,51,52,53,54,55,56,57].

Author Contributions

Conceptualization, A.A.; Software, Z.S. and M.A.A.; Validation, Z.S. and M.A.A.; Formal analysis, Z.S.; Investigation, A.A. and A.F.H.; Resources, A.F.H.; Data curation, A.F.H.; Writing—original draft, Z.S., A.A. and M.A.A.; Supervision, A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23088).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Niculescu, S.I. Delay Effects on Stability: A Robust Control Approach; Springer Science & Business Media: Berlin, Germany, 2001; Volume 269. [Google Scholar]
  2. Luo, D.; Tian, M.; Zhu, Q. Some results on finite-time stability of stochastic fractional-order delay differential equations. Chaos Solitons Fractals 2022, 158, 111996. [Google Scholar] [CrossRef]
  3. Almarri, B.; Janaki, S.; Ganesan, V.; Ali, A.H.; Nonlaopon, K.; Bazighifan, O. Novel Oscillation Theorems and Symmetric Properties of Nonlinear Delay Differential Equations of Fourth-Order with a Middle Term. Symmetry 2022, 14, 585. [Google Scholar] [CrossRef]
  4. Sabermahani, S.; Ordokhani, Y. General Lagrange-hybrid functions and numerical solution of differential equations containing piecewise constant delays with bibliometric analysis. Appl. Math. Comput. 2020, 395, 125847. [Google Scholar] [CrossRef]
  5. Rihan, F.A.; Alsakaji, H.J. Stochastic delay differential equations of three-species prey-predator system with cooperation among prey species. Discret. Contin. Dyn. Syst. S 2022, 15, 245–263. [Google Scholar] [CrossRef]
  6. Bildik, N.; Deniz, S. A new efficient method for solving delay differential equations and a comparison with other methods. Eur. Phys. J. Plus 2017, 132, 51. [Google Scholar] [CrossRef]
  7. Rahimkhani, P.; Ordokhani, Y.; Babolian, E. A new operational matrix based on Bernoulli wavelets for solving fractional delay differential equations. Numer. Algorithms 2016, 74, 223–245. [Google Scholar] [CrossRef]
  8. Sabir, Z.; Günerhan, H.; Guirao, J.L.G. On a New Model Based on Third-Order Nonlinear Multisingular Functional Differential Equations. Math. Probl. Eng. 2020, 2020, 1683961. [Google Scholar] [CrossRef]
  9. Aziz, I.; Amin, R. Numerical solution of a class of delay differential and delay partial differential equations via Haar wavelet. Appl. Math. Model. 2016, 40, 10286–10299. [Google Scholar] [CrossRef]
  10. Frazier, M.W. Background: Complex Numbers and Linear Algebra. In An Introduction to Wavelets through Linear Algebra; Springer: Berlin/Heidelberg, Germany, 1999; pp. 7–100. [Google Scholar]
  11. Tomasiello, S. An alternative use of fuzzy transform with application to a class of delay differential equations. Int. J. Comput. Math. 2016, 94, 1719–1726. [Google Scholar] [CrossRef]
  12. Vaid, M.K.; Arora, G. Solution of Second Order Singular Perturbed Delay Differential Equation Using Trigonometric B-Spline. Int. J. Math. Eng. Manag. Sci. 2019, 4, 349–360. [Google Scholar] [CrossRef]
  13. Hashemi, M.S.; Atangana, A.; Hajikhah, S. Solving fractional pantograph delay equations by an effective computational method. Math. Comput. Simul. 2020, 177, 295–305. [Google Scholar] [CrossRef]
  14. Adel, W.; Sabir, Z. Solving a new design of nonlinear second-order Lane–Emden pantograph delay differential model via Bernoulli collocation method. Eur. Phys. J. Plus 2020, 135, 427. [Google Scholar] [CrossRef]
  15. Erdogan, F.; Sakar, M.G.; Saldır, O. A finite difference method on layer-adapted mesh for singularly perturbed delay differential equations. Appl. Math. Nonlinear Sci. 2020, 5, 425–436. [Google Scholar] [CrossRef]
  16. Seong, H.Y.; Majid, Z.A. Solving second order delay differential equations using direct two-point block method. Ain Shams Eng. J. 2017, 8, 59–66. [Google Scholar] [CrossRef]
  17. Sabir, Z.; Guirao, J.L.G.; Saeed, T.; Erdoğan, F. Design of a Novel Second-Order Prediction Differential Model Solved by Using Adams and Explicit Runge–Kutta Numerical Methods. Math. Probl. Eng. 2020, 2020, 9704968. [Google Scholar] [CrossRef]
  18. Sabir, Z.; Raja, M.A.Z.; Wahab, H.A.; Shoaib, M.; Aguilar, J.G. Integrated neuro-evolution heuristic with sequential quadratic programming for second-order prediction differential models. Numer. Methods Partial. Differ. Equ. 2020. [Google Scholar]
  19. Raja, M.A.Z.; Shoaib, M.; Hussain, S.; Nisar, K.S.; Islam, S. Computational intelligence of Levenberg-Marquardt backpropagation neural networks to study thermal radiation and Hall effects on boundary layer flow past a stretching sheet. Int. Commun. Heat Mass Transf. 2021, 130, 105799. [Google Scholar] [CrossRef]
  20. Tian, Y.; Chen, H.; Ma, H.; Zhang, X.; Tan, K.C.; Jin, Y. Integrating conjugate gradients into evolutionary algorithms for large-scale continuous multi-objective optimization. IEEE/CAA J. Autom. Sin. 2022, 9, 1801–1817. [Google Scholar] [CrossRef]
  21. Sabir, Z.; Amin, F.; Pohl, D.; Guirao, J.L. Intelligence computing approach for solving second order system of Emden–Fowler model. J. Intell. Fuzzy Syst. 2020, 38, 7391–7406. [Google Scholar] [CrossRef]
  22. Awais, M.; Rehman, H.; Raja, M.A.Z.; Awan, S.E.; Ali, A.; Shoaib, M.; Malik, M.Y. Hall effect on MHD Jeffrey fluid flow with Cattaneo–Christov heat flux model: An application of stochastic neural computing. Complex Intell. Syst. 2022, 8, 5177–5201. [Google Scholar] [CrossRef]
  23. Sabir, Z.; Wahab, H.A.; Umar, M.; Erdoğan, F. Stochastic numerical approach for solving second order nonlinear singular functional differential equation. Appl. Math. Comput. 2019, 363, 124605. [Google Scholar] [CrossRef]
  24. Wahid, M.A.; Bukhari, S.H.R.; Daud, A.; Awan, S.E.; Raja, M.A.Z. COVICT: An IoT based architecture for COVID-19 detection and contact tracing. J. Ambient. Intell. Humaniz. Comput. 2022, 14, 7381–7398. [Google Scholar] [CrossRef] [PubMed]
  25. Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Mehmood, A.; Shah, S.M. Design of fractional hierarchical gradient descent algorithm for parameter estimation of nonlinear control autoregressive systems. Chaos Solitons Fractals 2022, 157, 111913. [Google Scholar] [CrossRef]
  26. Sabir, Z.; Manzar, M.A.; Raja, M.A.Z.; Sheraz, M.; Wazwaz, A.M. Neuro-heuristics for nonlinear singular Thomas-Fermi systems. Appl. Soft Comput. 2018, 65, 152–169. [Google Scholar] [CrossRef]
  27. Shoaib, M.; Kausar, M.; Nisar, K.S.; Raja, M.A.Z.; Zeb, M.; Morsy, A. The design of intelligent networks for entropy generation in Ree-Eyring dissipative fluid flow system along quartic autocatalysis chemical reactions. Int. Commun. Heat Mass Transf. 2022, 133, 105971. [Google Scholar] [CrossRef]
  28. Raja, M.A.Z.; Sabir, Z.; Mehmood, N.; Al-Aidarous, E.S.; Khan, J.A. Design of stochastic solvers based on genetic algorithms for solving nonlinear equations. Neural Comput. Appl. 2014, 26, 1–23. [Google Scholar] [CrossRef]
  29. Sabir, Z.; Baleanu, D.; Raja, M.A.Z.; Guirao, J.L.G. Design of Neuro-Swarming Heuristic Solver for Multi-Pantograph Singular Delay Differential Equation. Fractals 2021, 29, 2140022. [Google Scholar] [CrossRef]
  30. Umar, M.; Amin, F.; Ali, M.R. Neuro-swarm intelligence to study mosquito dispersal system in a heterogeneous atmosphere. Evol. Syst. 2023. [CrossRef]
  31. Umar, M.; Amin, F.; Wahab, H.A.; Baleanu, D. Unsupervised constrained neural network modeling of boundary value corneal model for eye surgery. Appl. Soft Comput. 2019, 85, 105826. [Google Scholar] [CrossRef]
  32. Umar, M.; Amin, F.; Al-Mdallal, Q.; Ali, M.R. A stochastic computing procedure to solve the dynamics of prevention in HIV system. Biomed. Signal Process. Control. 2022, 78, 103888. [Google Scholar] [CrossRef]
  33. Łuczak, D. Mechanical vibrations analysis in direct drive using CWT with complex Morlet wavelet. Power Electron. Drives 2023, 8, 65–73. [Google Scholar] [CrossRef]
  34. Reddy, G.T.; Reddy, M.P.K.; Lakshmanna, K.; Rajput, D.S.; Kaluri, R.; Srivastava, G. Hybrid genetic algorithm and a fuzzy logic classifier for heart disease diagnosis. Evol. Intell. 2020, 13, 185–196. [Google Scholar] [CrossRef]
  35. Mayer, M.J.; Szilágyi, A.; Gróf, G. Environmental and economic multi-objective optimization of a household level hybrid renewable energy system by genetic algorithm. Appl. Energy 2020, 269, 115058. [Google Scholar] [CrossRef]
  36. Zou, D.; Li, S.; Kong, X.; Ouyang, H.; Li, Z. Solving the combined heat and power economic dispatch problems by an improved genetic algorithm and a new constraint handling strategy. Appl. Energy 2019, 237, 646–670. [Google Scholar] [CrossRef]
  37. Song, Y.; Wei, L.; Yang, Q.; Wu, J.; Xing, L.; Chen, Y. RL-GA: A Reinforcement Learning-based Genetic Algorithm for Electromagnetic Detection Satellite Scheduling Problem. Swarm Evol. Comput. 2023, 77, 101236. [Google Scholar] [CrossRef]
  38. Jahed Armaghani, D.; Hasanipanah, M.; Mahdiyar, A.; Abd Majid, M.Z.; Bakhshandeh Amnieh, H.; Tahir, M.M.D. Airblast prediction through a hybrid genetic algorithm-ANN model. Neural Comput. Appl. 2018, 29, 619–629. [Google Scholar] [CrossRef]
  39. Sohail, A. Genetic Algorithms in the Fields of Artificial Intelligence and Data Sciences. Ann. Data Sci. 2021, 10, 1007–1018. [Google Scholar] [CrossRef]
  40. Aziz, R.M.; Mahto, R.; Goel, K.; Das, A.; Kumar, P.; Saxena, A. Modified Genetic Algorithm with Deep Learning for Fraud Transactions of Ethereum Smart Contract. Appl. Sci. 2023, 13, 697. [Google Scholar] [CrossRef]
  41. Jiang, Y.; Wu, P.; Zeng, J.; Zhang, Y.; Zhang, Y.; Wang, S. Multi-parameter and multi-objective optimisation of articulated monorail vehicle system dynamics using genetic algorithm. Veh. Syst. Dyn. 2019, 58, 74–91. [Google Scholar] [CrossRef]
  42. Bertocchi, C.; Chouzenoux, E.; Corbineau, M.-C.; Pesquet, J.-C.; Prato, M. Deep unfolding of a proximal interior point method for image restoration. Inverse Probl. 2020, 36, 034005. [Google Scholar] [CrossRef]
  43. Wright, S.E.; Lim, S. Solving nested-constraint resource allocation problems with an interior point method. Oper. Res. Lett. 2020, 48, 297–303. [Google Scholar] [CrossRef]
  44. Pesteh, S.; Moayyed, H.; Miranda, V. Favorable properties of Interior Point Method and Generalized Correntropy in power system State Estimation. Electr. Power Syst. Res. 2020, 178, 106035. [Google Scholar] [CrossRef]
  45. Garreis, S.; Surowiec, T.M.; Ulbrich, M. An interior-point approach for solving risk-averse PDE-constrained optimization problems with coherent risk measures. SIAM J. Optim. 2021, 31, 1–29. [Google Scholar] [CrossRef]
  46. Asadi, S.; Darvay, Z.; Lesaja, G.; Mahdavi-Amiri, N.; Potra, F. A Full-Newton Step Interior-Point Method for Monotone Weighted Linear Complementarity Problems. J. Optim. Theory Appl. 2020, 186, 864–878. [Google Scholar] [CrossRef]
  47. El Hayek, P.; Boueri, M.; Nasr, L.; Aoun, C.; Sayad, E.; Jallad, K. Cholera Infection Risks and Cholera Vaccine Safety in Pregnancy. Infect. Dis. Obstet. Gynecol. 2023, 2023, 4563797. [Google Scholar] [CrossRef] [PubMed]
  48. Tian, M.; El Khoury, R.; Alshater, M.M. The nonlinear and negative tail dependence and risk spillovers between foreign exchange and stock markets in emerging economies. J. Int. Financ. Mark. Inst. Money 2023, 82, 101712. [Google Scholar] [CrossRef]
  49. Issa, J.S. A nonlinear absorber for the reflection of travelling waves in bars. Nonlinear Dyn. 2022, 108, 3279–3295. [Google Scholar] [CrossRef]
  50. Kassis, M.T.; Tannir, D.; Toukhtarian, R.; Khazaka, R. Moments-based sensitivity analysis of x-parameters with respect to linear and nonlinear circuit components. In Proceedings of the 2019 IEEE 28th Conference on Electrical Performance of Electronic Packaging and Systems (EPEPS), Montreal, QC, Canada, 6–9 October 2019; pp. 1–3. [Google Scholar]
  51. Younes, G.A.; El Khatib, N. Mathematical modeling of atherogenesis: Atheroprotective role of HDL. J. Theor. Biol. 2021, 529, 110855. [Google Scholar] [CrossRef]
  52. Kanbar, F.; Null, R.T.; Klingenberg, C. Well-Balanced Central Scheme for the System of MHD Equations with Gravitational Source Term. Commun. Comput. Phys. 2022, 32, 878–898. [Google Scholar] [CrossRef]
  53. Habre, S.S. Qualitative aspects of differential equations in an inquiry-oriented course. Int. J. Math. Educ. Sci. Technol. 2021, 54, 351–364. [Google Scholar] [CrossRef]
  54. Touma, R.; Saleh, M. Well-balanced central schemes for pollutants transport in shallow water equations. Math. Comput. Simul. 2021, 190, 1275–1293. [Google Scholar] [CrossRef]
  55. Younes, G.A.; El Khatib, N. Mathematical modeling of inflammatory processes of atherosclerosis. Math. Model. Nat. Phenom. 2022, 17, 5. [Google Scholar] [CrossRef]
  56. Younes, Y.; Hallit, S.; Obeid, S. Premenstrual dysphoric disorder and childhood maltreatment, adulthood stressful life events and depression among Lebanese university students: A structural equation modeling approach. BMC Psychiatry 2021, 21, 548. [Google Scholar] [CrossRef]
  57. Habre, S. Inquiry-oriented differential equations: A guided journey of learning. Teach. Math. Its Appl. Int. J. IMA 2020, 39, 201–212. [Google Scholar] [CrossRef]
Figure 1. Structure of MWNN-GAIPAS for solving the PDM.
Figure 2. Best weights of MWNN-GAIPAS for each example of the PDM using 3, 10, and 20 neurons. (a) Optimal weights for Example 1 (3 neurons). (b) Optimal weights for Example 2 (3 neurons). (c) Optimal weights for Example 3 (3 neurons). (d) Best weights for Example 1 for 10 neurons. (e) Best weights for Example 2 for 10 neurons. (f) Best weights for Example 3 for 10 neurons. (g) Best weights for Example 1 for 20 neurons. (h) Best weights for Example 2 for 20 neurons. (i) Best weights for Example 3 for 20 neurons.
Figure 3. Comparison of the best and exact solutions based on MWNN-GAIPAS for solving each example of the PDM using 3, 10, and 20 neurons. (a) Result of Example 1 (3 neurons). (b) Result of Example 2 (3 neurons). (c) Result of Example 3 (3 neurons). (d) Result of Example 1 (10 neurons). (e) Result of Example 2 (10 neurons). (f) Result of Example 3 (10 neurons). (g) Result of Example 1 (20 neurons). (h) Result of Example 2 (20 neurons). (i) Result of Example 3 (20 neurons).
Figure 4. AE for solving each example of the PDM using 3, 10, and 20 neurons. (a) AE values of Examples 1 to 3 for three neurons. (b) AE values of Examples 1 to 3 for 10 neurons. (c) AE values of Examples 1 to 3 for 20 neurons.
Figure 5. Performance values for solving each example of the PDM based on 3, 10, and 20 neurons. (a) Performance values for each example of the PDM for three neurons. (b) Performance values for each example of the PDM for 10 neurons. (c) Performance values for each example of the PDM for 20 neurons.
Figure 6. Statistical investigations based on fitness for Examples 1 to 3 by taking 3, 10, and 20 neurons. (a) Plots of fitness convergence for each example of the PDM using three neurons. (b) Plots of fitness convergence for each example of the PDM using 10 neurons. (c) Plots of fitness convergence for each example of the PDM using 20 neurons.
Figure 7. Statistical investigations of R.MSE for Examples 1 to 3 by taking 3, 10, and 20 neurons. (a) Plots of R.MSE convergence for each example of the PDM using three neurons. (b) Plots of R.MSE convergence for each example of the PDM using 10 neurons. (c) Plots of R.MSE convergence for each example of the PDM using 20 neurons.
Figure 8. Statistical investigations of EVAF for each example by taking 3, 10, and 20 neurons. (a) Plots of EVAF convergence for three neurons. (b) Plots of EVAF convergence for 10 neurons. (c) Plots of EVAF convergence for 20 neurons.
Figure 9. Statistical investigations based on the TIC operator for Examples 1 to 3 by taking 3, 10, and 20 neurons. (a) Plots of TIC convergence for each example of the PDM using three neurons. (b) Plots of TIC convergence for each example of the PDM using 10 neurons. (c) Plots of TIC convergence for each example of the PDM using 20 neurons.
Table 1. Statistical measures using the MWNN-GAIPAS for each example of the PDM using three neurons.
Mode | Statistic | ŷ(t): t = 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
E-I | Min | 4.14 × 10−9 | 2.07 × 10−8 | 2.85 × 10−7 | 4.46 × 10−7 | 4.86 × 10−7 | 5.58 × 10−7 | 7.48 × 10−7 | 1.00 × 10−6 | 1.20 × 10−6 | 1.30 × 10−6 | 1.34 × 10−6
E-I | Mean | 3.75 × 10−1 | 4.13 × 10−1 | 4.54 × 10−1 | 4.98 × 10−1 | 5.41 × 10−1 | 5.83 × 10−1 | 6.22 × 10−1 | 6.59 × 10−1 | 6.92 × 10−1 | 7.23 × 10−1 | 7.49 × 10−1
E-I | SD | 4.46 × 10−1 | 5.1 × 10−1 | 5.57 × 10−1 | 6.7 × 10−1 | 6.53 × 10−1 | 6.95 × 10−1 | 7.35 × 10−1 | 7.72 × 10−1 | 8.05 × 10−1 | 8.34 × 10−1 | 8.60 × 10−1
E-I | Med | 3.93 × 10−2 | 2.13 × 10−2 | 1.21 × 10−2 | 2.48 × 10−2 | 4.61 × 10−2 | 6.65 × 10−2 | 8.59 × 10−2 | 1.05 × 10−1 | 1.23 × 10−1 | 1.41 × 10−1 | 1.59 × 10−1
E-I | S.IR | 4.38 × 10−1 | 4.84 × 10−1 | 5.38 × 10−1 | 5.88 × 10−1 | 6.35 × 10−1 | 6.79 × 10−1 | 7.21 × 10−1 | 7.60 × 10−1 | 7.97 × 10−1 | 8.30 × 10−1 | 8.59 × 10−1
E-II | Min | 5.20 × 10−7 | 7.23 × 10−7 | 4.89 × 10−6 | 1.14 × 10−5 | 1.59 × 10−5 | 1.68 × 10−5 | 1.60 × 10−5 | 1.72 × 10−5 | 2.20 × 10−5 | 2.71 × 10−5 | 2.77 × 10−5
E-II | Mean | 6.72 × 10−2 | 1.31 × 10−1 | 2.03 × 10−1 | 2.76 × 10−1 | 3.50 × 10−1 | 4.22 × 10−1 | 4.91 × 10−1 | 5.57 × 10−1 | 6.18 × 10−1 | 6.74 × 10−1 | 7.23 × 10−1
E-II | SD | 3.19 × 10−2 | 5.19 × 10−2 | 7.16 × 10−2 | 9.52 × 10−2 | 1.19 × 10−1 | 1.43 × 10−1 | 1.66 × 10−1 | 1.89 × 10−1 | 2.10 × 10−1 | 2.29 × 10−1 | 2.46 × 10−1
E-II | Med | 7.70 × 10−2 | 1.57 × 10−1 | 2.33 × 10−1 | 3.6 × 10−1 | 3.89 × 10−1 | 4.79 × 10−1 | 5.65 × 10−1 | 6.44 × 10−1 | 7.17 × 10−1 | 7.83 × 10−1 | 8.41 × 10−1
E-II | S.IR | 1.55 × 10−2 | 2.86 × 10−2 | 1.99 × 10−2 | 1.31 × 10−2 | 1.46 × 10−2 | 2.09 × 10−2 | 2.71 × 10−2 | 3.30 × 10−2 | 3.83 × 10−2 | 4.31 × 10−2 | 4.70 × 10−2
E-III | Min | 1.98 × 10−5 | 8.21 × 10−6 | 8.50 × 10−6 | 5.50 × 10−6 | 7.92 × 10−6 | 2.12 × 10−5 | 1.52 × 10−5 | 1.19 × 10−5 | 2.95 × 10−5 | 5.74 × 10−5 | 5.59 × 10−5
E-III | Mean | 1.25 × 10−1 | 1.08 × 10−1 | 9.21 × 10−1 | 7.77 × 10−1 | 6.43 × 10−1 | 5.19 × 10−1 | 4.03 × 10−1 | 2.95 × 10−1 | 1.90 × 10−1 | 1.28 × 10−1 | 1.50 × 10−1
E-III | SD | 9.57 × 10−1 | 8.20 × 10−1 | 6.93 × 10−1 | 5.72 × 10−1 | 4.60 × 10−1 | 3.60 × 10−1 | 2.75 × 10−1 | 2.15 × 10−1 | 1.98 × 10−1 | 2.18 × 10−1 | 1.06 × 10−1
E-III | Med | 1.91 × 10−1 | 1.65 × 10−1 | 1.40 × 10−1 | 1.17 × 10−1 | 9.51 × 10−1 | 7.42 × 10−1 | 5.37 × 10−1 | 3.29 × 10−1 | 1.37 × 10−1 | 8.53 × 10−2 | 1.65 × 10−1
E-III | S.IR | 9.91 × 10−1 | 8.47 × 10−1 | 7.10 × 10−1 | 5.77 × 10−1 | 4.52 × 10−1 | 3.39 × 10−1 | 2.31 × 10−1 | 1.34 × 10−1 | 5.59 × 10−2 | 6.92 × 10−2 | 7.89 × 10−2
Table 2. The performance of the MWNN-GAIPAS for each example of the PDM using 10 neurons.
Mode | Statistic | ŷ(t): t = 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
E-I | Min | 1.20 × 10−9 | 4.45 × 10−9 | 3.18 × 10−8 | 6.57 × 10−8 | 8.26 × 10−8 | 7.97 × 10−8 | 7.48 × 10−8 | 8.81 × 10−8 | 1.21 × 10−7 | 1.48 × 10−7 | 1.47 × 10−7
E-I | Mean | 4.26 × 10−1 | 4.78 × 10−1 | 5.34 × 10−1 | 5.88 × 10−1 | 6.39 × 10−1 | 6.88 × 10−1 | 7.33 × 10−1 | 7.75 × 10−1 | 8.14 × 10−1 | 8.48 × 10−1 | 8.79 × 10−1
E-I | SD | 4.46 × 10−1 | 4.97 × 10−1 | 5.50 × 10−1 | 6.02 × 10−1 | 6.51 × 10−1 | 6.98 × 10−1 | 7.42 × 10−1 | 7.83 × 10−1 | 8.20 × 10−1 | 8.54 × 10−1 | 8.83 × 10−1
E-I | Med | 2.54 × 10−1 | 2.99 × 10−1 | 3.68 × 10−1 | 4.31 × 10−1 | 4.80 × 10−1 | 5.27 × 10−1 | 5.71 × 10−1 | 6.13 × 10−1 | 6.53 × 10−1 | 6.89 × 10−1 | 7.22 × 10−1
E-I | S.IR | 4.66 × 10−1 | 5.23 × 10−1 | 5.79 × 10−1 | 6.33 × 10−1 | 6.82 × 10−1 | 7.27 × 10−1 | 7.68 × 10−1 | 8.06 × 10−1 | 8.40 × 10−1 | 8.77 × 10−1 | 9.05 × 10−1
E-II | Min | 1.15 × 10−8 | 2.04 × 10−8 | 6.16 × 10−8 | 1.79 × 10−7 | 2.65 × 10−7 | 2.88 × 10−7 | 2.64 × 10−7 | 2.38 × 10−7 | 2.55 × 10−7 | 3.30 × 10−7 | 4.24 × 10−7
E-II | Mean | 4.16 × 10−8 | 8.85 × 10−8 | 1.47 × 10−1 | 2.06 × 10−1 | 2.63 × 10−1 | 3.19 × 10−1 | 3.72 × 10−1 | 4.23 × 10−1 | 4.70 × 10−1 | 5.12 × 10−1 | 5.48 × 10−1
E-II | SD | 6.10 × 10−8 | 7.67 × 10−8 | 1.03 × 10−1 | 1.36 × 10−1 | 1.73 × 10−1 | 2.10 × 10−1 | 2.46 × 10−1 | 2.79 × 10−1 | 3.11 × 10−1 | 3.40 × 10−1 | 3.64 × 10−1
E-II | Med | 2.32 × 10−3 | 7.26 × 10−8 | 1.73 × 10−1 | 2.69 × 10−1 | 3.58 × 10−1 | 4.48 × 10−1 | 5.15 × 10−1 | 5.86 × 10−1 | 6.52 × 10−1 | 7.12 × 10−1 | 7.61 × 10−1
E-II | S.IR | 3.70 × 10−8 | 7.45 × 10−8 | 1.11 × 10−1 | 1.50 × 10−1 | 1.93 × 10−1 | 2.36 × 10−1 | 2.79 × 10−1 | 3.18 × 10−1 | 3.53 × 10−1 | 3.84 × 10−1 | 4.08 × 10−1
E-III | Min | 5.56 × 10−9 | 1.91 × 10−8 | 4.54 × 10−9 | 2.45 × 10−8 | 1.44 × 10−8 | 2.28 × 10−9 | 9.65 × 10−9 | 1.85 × 10−8 | 3.85 × 10−9 | 6.66 × 10−9 | 3.87 × 10−9
E-III | Mean | 2.30 × 10−1 | 2.50 × 10−1 | 2.13 × 10−1 | 1.80 × 10−1 | 1.50 × 10−1 | 1.24 × 10−1 | 9.84 × 10−8 | 7.42 × 10−8 | 5.08 × 10−8 | 2.80 × 10−8 | 3.14 × 10−8
E-III | SD | 5.50 × 10−1 | 5.48 × 10−1 | 4.69 × 10−1 | 3.97 × 10−1 | 3.28 × 10−1 | 2.63 × 10−1 | 2.00 × 10−1 | 1.40 × 10−1 | 8.43 × 10−8 | 4.14 × 10−8 | 4.69 × 10−8
E-III | Med | 7.86 × 10−4 | 7.97 × 10−4 | 4.52 × 10−4 | 6.83 × 10−4 | 1.40 × 10−3 | 2.06 × 10−3 | 3.38 × 10−3 | 4.62 × 10−3 | 5.79 × 10−3 | 7.18 × 10−3 | 8.47 × 10−3
E-III | S.IR | 9.30 × 10−3 | 6.24 × 10−3 | 3.54 × 10−3 | 4.98 × 10−3 | 6.69 × 10−3 | 1.19 × 10−8 | 1.87 × 10−8 | 2.58 × 10−8 | 3.32 × 10−8 | 2.27 × 10−8 | 2.45 × 10−8
Table 3. The performance of the MWNN-GAIPAS for each example of the PDM using 20 neurons.
Mode | Statistic | ŷ(t): t = 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1
E-I | Min | 4.22 × 10−9 | 1.67 × 10−8 | 7.93 × 10−8 | 1.42 × 10−8 | 3.39 × 10−8 | 4.01 × 10−8 | 1.32 × 10−7 | 1.50 × 10−7 | 8.64 × 10−8 | 5.00 × 10−8 | 1.43 × 10−7
E-I | Mean | 3.47 × 10−1 | 3.89 × 10−1 | 4.43 × 10−1 | 4.94 × 10−1 | 5.44 × 10−1 | 5.92 × 10−1 | 6.37 × 10−1 | 6.79 × 10−1 | 7.19 × 10−1 | 7.55 × 10−1 | 7.87 × 10−1
E-I | SD | 4.24 × 10−1 | 4.62 × 10−1 | 5.07 × 10−1 | 5.53 × 10−1 | 5.97 × 10−1 | 6.41 × 10−1 | 6.83 × 10−1 | 7.23 × 10−1 | 7.61 × 10−1 | 7.95 × 10−1 | 8.25 × 10−1
E-I | Med | 8.71 × 10−4 | 1.89 × 10−2 | 4.71 × 10−2 | 7.45 × 10−2 | 1.02 × 10−1 | 1.28 × 10−1 | 1.54 × 10−1 | 1.79 × 10−1 | 2.03 × 10−1 | 2.26 × 10−1 | 2.49 × 10−1
E-I | S.IR | 4.23 × 10−1 | 4.62 × 10−1 | 5.14 × 10−1 | 5.62 × 10−1 | 6.04 × 10−1 | 6.48 × 10−1 | 6.92 × 10−1 | 7.32 × 10−1 | 7.69 × 10−1 | 7.99 × 10−1 | 8.23 × 10−1
E-II | Min | 2.31 × 10−9 | 1.48 × 10−8 | 8.52 × 10−8 | 9.16 × 10−8 | 9.46 × 10−8 | 1.67 × 10−7 | 3.06 × 10−7 | 4.31 × 10−7 | 4.63 × 10−7 | 4.43 × 10−7 | 5.02 × 10−7
E-II | Mean | 1.15 × 10−2 | 5.33 × 10−2 | 1.04 × 10−1 | 1.54 × 10−1 | 2.03 × 10−1 | 2.50 × 10−1 | 2.95 × 10−1 | 3.37 × 10−1 | 3.75 × 10−1 | 4.10 × 10−1 | 4.41 × 10−1
E-II | SD | 2.85 × 10−2 | 5.47 × 10−2 | 9.77 × 10−2 | 1.43 × 10−1 | 1.88 × 10−1 | 2.31 × 10−1 | 2.72 × 10−1 | 3.11 × 10−1 | 3.47 × 10−1 | 3.79 × 10−1 | 4.08 × 10−1
E-II | Med | 4.78 × 10−5 | 6.73 × 10−2 | 1.50 × 10−1 | 2.29 × 10−1 | 3.16 × 10−1 | 3.89 × 10−1 | 4.54 × 10−1 | 5.16 × 10−1 | 5.72 × 10−1 | 6.25 × 10−1 | 6.78 × 10−1
E-II | S.IR | 9.75 × 10−4 | 4.99 × 10−2 | 9.93 × 10−2 | 1.48 × 10−1 | 1.95 × 10−1 | 2.40 × 10−1 | 2.82 × 10−1 | 3.22 × 10−1 | 3.59 × 10−1 | 3.92 × 10−1 | 4.21 × 10−1
E-III | Min | 2.23 × 10−9 | 6.07 × 10−8 | 7.51 × 10−8 | 9.56 × 10−8 | 1.82 × 10−7 | 7.64 × 10−9 | 2.99 × 10−8 | 4.07 × 10−8 | 5.42 × 10−8 | 1.76 × 10−9 | 4.38 × 10−9
E-III | Mean | 6.58 × 10−1 | 5.99 × 10−1 | 5.08 × 10−1 | 4.24 × 10−1 | 3.46 × 10−1 | 2.74 × 10−1 | 2.06 × 10−1 | 1.42 × 10−1 | 8.09 × 10−2 | 3.31 × 10−2 | 4.71 × 10−2
E-III | SD | 8.23 × 10−1 | 7.13 × 10−1 | 6.04 × 10−1 | 5.02 × 10−1 | 4.06 × 10−1 | 3.17 × 10−1 | 2.35 × 10−1 | 1.58 × 10−1 | 8.82 × 10−2 | 3.78 × 10−2 | 7.41 × 10−2
E-III | Med | 2.43 × 10−2 | 5.77 × 10−2 | 4.54 × 10−2 | 3.42 × 10−2 | 3.23 × 10−2 | 3.22 × 10−2 | 3.50 × 10−2 | 3.96 × 10−2 | 4.82 × 10−2 | 1.77 × 10−2 | 4.90 × 10−3
E-III | S.IR | 7.97 × 10−1 | 7.09 × 10−1 | 6.07 × 10−1 | 5.11 × 10−1 | 4.21 × 10−1 | 3.36 × 10−1 | 2.47 × 10−1 | 1.64 × 10−1 | 7.70 × 10−2 | 2.73 × 10−2 | 2.96 × 10−2
