Numerical Non-Linear Modelling Algorithm Using Radial Kernels on Local Mesh Support
Abstract
1. Introduction
2. Materials and Methods
2.1. Radial Basis Functions
2.2. Random Fields
3. Results
3.1. Radial Kernel Weighted Average
3.2. Interpolation over a Mesh of Finite Elements
3.3. Computational Algorithm
Algorithm 1.
1. Begin
2. for i = 1:P do
3.   Initialize the point Q_1 with X[i].
4.   Initialize estimation[i] with 0.
5.   for k = 1:2^dimension do
6.     Initialize the point Q_node with Node[k].
7.     Initialize dist and nodeEstim with 0.
8.     for j = 1:P do
9.       Initialize the point Q_2 with X[j].
10.      Increment dist with Radial_Kernel(Q_node, Q_2).
11.      Increment nodeEstim with Z[j] * Radial_Kernel(Q_node, Q_2).
12.    end for j
13.    Update the node estimate as nodeEstim = nodeEstim/dist.
14.    Increment estimation[i] with nodeEstim * ShapeFunction(k, Q_1).
15.  end for k
16. end for i
17. End
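As a minimal Python sketch of Algorithm 1 (not the authors' implementation), the nested loops can be written directly. The Gaussian kernel, its width `c`, the 1D hat shape functions, and all identifiers below are illustrative assumptions; the method itself only requires some radial kernel and the shape functions of the chosen finite element:

```python
import numpy as np

def gaussian_kernel(a, b, c=1.0):
    # Gaussian radial kernel; the width c is a modelling choice.
    return np.exp(-np.sum((a - b) ** 2) / c ** 2)

def estimate(X, Z, nodes, shape_fn, kernel=gaussian_kernel):
    """For each sample X[i]: (1) form a kernel-weighted average of Z
    at every element node (the j loop), then (2) interpolate those
    nodal values back to X[i] with the element shape functions."""
    P = len(X)
    estimation = np.zeros(P)
    for i in range(P):
        for k, node in enumerate(nodes):
            dist = 0.0        # accumulated kernel weights
            node_estim = 0.0  # accumulated weighted targets
            for j in range(P):
                w = kernel(node, X[j])
                dist += w
                node_estim += Z[j] * w
            node_estim /= dist  # Nadaraya-Watson estimate at the node
            estimation[i] += node_estim * shape_fn(k, X[i])
    return estimation

# Example: a single 1D linear element on [0, 1] with hat shape
# functions N_0(x) = 1 - x and N_1(x) = x (illustrative choice).
def hat(k, x):
    return 1.0 - x if k == 0 else x
```

Note that the nodal estimates do not depend on `i`, so in practice they can be precomputed once per node; the version above stays faithful to the pseudocode at the cost of O(P^2 * 2^dimension) kernel evaluations.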
3.4. Applications
- R2 coefficient: The strict use of R2 is limited to linear regression; outside that case, its value can lie anywhere in the interval (−∞, 1]. However, the expression that defines R2 relates it to another family of quality indicators, such as those of Legates–McCabe, Nash–Sutcliffe, and Willmott.
- Mean squared error (MSE): This indicator is associated with the second moment of the error, incorporating information about both its variance and its bias. Its principal problem is the overweighting of outliers, so other parameters such as MAE are usually preferred.
- Root mean squared error (RMSE): The square root of the MSE. Its main advantage over the MSE is that it is measured in the same units as the predicted variable; however, it suffers from the same sensitivity to outliers.
- Mean absolute error (MAE): The best characteristic of the MAE compared with the RMSE is that it weights all deviations equally. The ratio RMSE/MAE serves as an indication of the presence of outliers in the sample.
- Mean absolute percentage error (MAPE): This parameter is frequently used because of its simple interpretation as a relative error. However, it presents a bias that tends to favour models with smaller forecasts. One alternative is the symmetric MAPE, but it is not as widely used.
- Regression error characteristic (REC) curve: A curve obtained by plotting the error tolerance on the X-axis against the percentage of points predicted within that tolerance on the Y-axis.
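The indicators above can be sketched in a few lines of Python. This follows the standard textbook definitions (with R2 computed as 1 − SSE/SST, which indeed can go negative outside linear regression); the function names are ours:

```python
import numpy as np

def quality_indicators(y_true, y_pred):
    """Standard error indicators for a set of predictions."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)                      # second moment of the error
    rmse = np.sqrt(mse)                          # same units as the target
    mae = np.mean(np.abs(err))                   # equal weight per deviation
    mape = np.mean(np.abs(err / y_true)) * 100.0 # relative error, in percent
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # may be negative
    return {"R2": r2, "MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape}

def rec_curve(y_true, y_pred, tolerances):
    """REC curve: fraction of points whose absolute error is within
    each tolerance (Y-axis) versus the tolerance itself (X-axis)."""
    abs_err = np.abs(y_true - y_pred)
    return [np.mean(abs_err <= t) for t in tolerances]
```

For example, a prediction that is uniformly off by one unit gives MAE = MSE = RMSE = 1 regardless of scale, while R2 and MAPE depend on the spread and magnitude of the target values.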
3.4.1. Airfoil Self-Noise
3.4.2. Combined Cycle Power Plant
4. Discussion
Author Contributions
Funding
Conflicts of Interest
References
- Gallagher, R.H. Finite element analysis: Fundamentals. Int. J. Numer. Methods Eng. 1975, 9, 732.
- Brenner, S.; Scott, R. The Mathematical Theory of Finite Element Methods; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2007.
- Navarro-González, F.J.; Villacampa, Y. A new methodology for complex systems using n-dimensional finite elements. Adv. Eng. Softw. 2012, 48, 52–57.
- Navarro-González, F.J.; Villacampa, Y. Generation of representation models for complex systems using Lagrangian functions. Adv. Eng. Softw. 2013, 64, 33–37.
- Navarro-González, F.J.; Villacampa, Y. A finite element numerical algorithm for modelling and data fitting in complex systems. Int. J. Comput. Methods Exp. Meas. 2016, 4, 100–113.
- Palazón, A.; López, I.; Aragonés, L.; Villacampa, Y.; Navarro-González, F.J. Modelling of Escherichia coli concentrations in bathing water at microtidal coasts. Sci. Total Environ. 2017, 593–594, 173–181.
- Aragonés, L.; Pagán, J.I.; López, I.; Navarro-González, F.J. Galerkin’s formulation of the finite elements method to obtain the depth of closure. Sci. Total Environ. 2019, 660, 1256–1263.
- Migallón, V.; Navarro-González, F.; Penadés, J.; Villacampa, Y. Parallel approach of a Galerkin-based methodology for predicting the compressive strength of the lightweight aggregate concrete. Constr. Build. Mater. 2019, 219, 56–68.
- Buhmann, M.D. Radial Basis Functions. Acta Numer. 2000, 9, 1–38.
- Mai-Duy, N.; Tran-Cong, T. Approximation of function and its derivatives using radial basis function networks. Appl. Math. Model. 2003, 27, 197–220.
- Kansa, E.J. Multiquadrics-A scattered data approximation scheme with applications to computational fluid-dynamics-II solutions to parabolic, hyperbolic and elliptic partial differential equations. Comput. Math. Appl. 1990, 19, 147–161.
- Kansa, E.J.; Hon, Y.C. Circumventing the ill-conditioning problem with multiquadric radial basis functions: Applications to elliptic partial differential equations. Comput. Math. Appl. 2000, 39, 123–137.
- Schaback, R.; Wendland, H. Adaptive greedy techniques for approximate solution of large RBF systems. Numer. Algorithms 1999, 24, 239–254.
- Broomhead, D.S.; Lowe, D. Multivariable Functional Interpolation and Adaptive Networks. Complex Syst. 1988, 2, 321–355.
- Orr, M.J.L. Introduction to Radial Basis Function Networks; Center for Cognitive Science, Univ. of Edinburgh: Edinburgh, UK, 1996.
- Elanayar, S.; Shin, Y.C. Radial Basis Function Neural Network for Approximation and Estimation of Nonlinear Stochastic Dynamic Systems. IEEE Trans. Neural Netw. 1994, 5, 594–603.
- Saha, A.; Keeler, J.D. Algorithms for better representation and faster learning in radial basis function networks. In Advances in Neural Information Processing Systems, 2nd ed.; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1990; pp. 482–489.
- Wettschereck, D.; Dietterich, T. Improving the performance of radial basis function networks by learning center locations. In Advances in Neural Information Processing Systems; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1992; pp. 1133–1140.
- Gomm, J.B.; Yu, D.L. Selecting radial basis function network centers with recursive orthogonal least squares training. IEEE Trans. Neural Netw. 2000, 11, 306–314.
- Park, J.; Sandberg, I.W. Universal Approximation Using Radial-Basis-Function Networks. Neural Comput. 1991, 3, 246–257.
- Poggio, T.; Girosi, F. Regularization algorithms for learning that are equivalent to multilayer networks. Science 1990, 247, 978–982.
- Howell, A.J.; Buxton, H. Learning identity with radial basis function networks. Neurocomputing 1998, 20, 15–34.
- Schölkopf, B.; Sung, K.-K.; Burges, C.J.; Girosi, F.; Niyogi, P.; Poggio, T.; Vapnik, V. Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers. IEEE Trans. Signal Process. 1997, 45, 2758–2765.
- Chng, E.S.; Chen, S.; Mulgrew, B. Gradient radial basis function networks for nonlinear and nonstationary time series prediction. IEEE Trans. Neural Netw. 1996, 7, 190–194.
- Chen, S. Nonlinear time series modelling and prediction using Gaussian RBF networks with enhanced clustering and RLS learning. Electron. Lett. 1995, 31, 117–118.
- Li, M.M.; Verma, B. Nonlinear curve fitting to stopping power data using RBF neural networks. Expert Syst. Appl. 2016, 45, 161–171.
- Han, H.G.; Chen, Q.L.; Qiao, J.F. An efficient self-organizing RBF neural network for water quality prediction. Neural Netw. 2011, 24, 717–725.
- Yilmaz, I.; Kaynar, O. Multiple regression, ANN (RBF, MLP) and ANFIS models for prediction of swell potential of clayey soils. Expert Syst. Appl. 2011, 38, 5958–5966.
- Fabri, S.; Kadirkamanathan, V. Dynamic structure neural networks for stable adaptive control of nonlinear systems. IEEE Trans. Neural Netw. 1996, 7, 1151–1167.
- Chen, S.; Billings, S.A.; Grant, P.M. Recursive hybrid algorithm for non-linear system identification using radial basis function networks. Int. J. Control 1992, 55, 1051–1070.
- Yu, D.L.; Gomm, J.B.; Williams, D. Sensor fault diagnosis in a chemical process via RBF neural networks. Control Eng. Pract. 1999, 7, 49–55.
- Yang, F.; Paindavoine, M. Implementation of an RBF Neural Network on Embedded Systems: Real-Time Face Tracking and Identity Verification. IEEE Trans. Neural Netw. 2003, 14, 1162–1175.
- Tsai, Y.-T.; Shih, Z.-C. All-frequency precomputed radiance transfer using spherical radial basis functions and clustered tensor approximation. ACM Trans. Graph. 2006, 25, 967–976.
- Cho, S.-Y.; Chow, T.W.S. Neural computation approach for developing a 3D shape reconstruction model. IEEE Trans. Neural Netw. 2001, 12, 1204–1214.
- Oglesby, J.; Mason, J.S. Radial basis function networks for speaker recognition. In Proceedings of the 1991 International Conference on Acoustics, Speech, and Signal Processing, Toronto, ON, Canada, 14–17 April 1991; pp. 393–396.
- Schilling, R.J.; Carroll, J.J.; Al-Ajlouni, A.F. Approximation of nonlinear systems with radial basis function neural networks. IEEE Trans. Neural Netw. 2001, 12, 1–15.
- Jang, J.-S.; Sun, C.-T. Functional equivalence between radial basis function networks and fuzzy inference systems. IEEE Trans. Neural Netw. 1993, 4, 156–159.
- Billings, S.A.; Zheng, G.L. Radial basis function network configuration using genetic algorithms. Neural Netw. 1995, 8, 877–890.
- Chen, S.; Wu, Y.; Luk, B.L. Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks. IEEE Trans. Neural Netw. 1999, 10, 1239–1243.
- Kubat, M. Decision trees can initialize radial-basis function networks. IEEE Trans. Neural Netw. 1998, 9, 813–821.
- Benoudjit, N.; Archambeau, C.; Lendasse, A.; Lee, J.A.; Verleysen, M. Width optimization of the Gaussian kernels in Radial basis function networks. In Proceedings of the ESANN 2002, 10th European Symposium on Artificial Neural Networks, Bruges, Belgium, 24–26 April 2002.
- González, J.; Rojas, I.; Ortega, J.; Pomares, H.; Fernandez, F.J.; Diaz, A.F. Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation. IEEE Trans. Neural Netw. 2003, 14, 1478–1495.
- Er, M.J.; Wu, S.; Lu, J.; Toh, H.L. Face recognition with radial basis function (RBF) neural networks. IEEE Trans. Neural Netw. 2002, 13, 697–710.
- Bugmann, G. Normalized Gaussian radial basis function networks. Neurocomputing 1998, 20, 97–110.
- Hofmann, S.; Treichl, T.; Schroder, D. Identification and observation of mechatronic systems including multidimensional nonlinear dynamic functions. In Proceedings of the 7th International Workshop on Advanced Motion Control, Maribor, Slovenia, 3–5 July 2002; pp. 285–290.
- Saha, P.; Tarafdar, D.; Pal, S.K.; Srivastava, A.K.; Das, K. Modelling of wire electro-discharge machining of TiC/Fe in situ metal matrix composite using normalized RBFN with enhanced k-means clustering technique. Int. J. Adv. Manuf. Technol. 2009, 43, 107–116.
- Mori, H.; Awata, A. Data mining of electricity price forecasting with regression tree and normalized radial basis function network. In Proceedings of the 2007 IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 3743–3748.
- Deco, G.; Neuneier, R.; Schümann, B. Non-parametric data selection for neural learning in non-stationary time series. Neural Netw. 1997, 10, 401–407.
- Deng, H.; Li, H.-X.; Wu, Y.-H. Feedback-linearization-based neural adaptive control for unknown nonaffine nonlinear discrete-time systems. IEEE Trans. Neural Netw. 2008, 19, 1615–1625.
- Ranković, V.; Radulović, J. Prediction of magnetic field near power lines by normalized radial basis function network. Adv. Eng. Softw. 2011, 42, 934–938.
- Golbabai, A.; Seifollahi, S.; Javidi, M. Normalized RBF networks: Application to a system of integral equations. Phys. Scr. 2008, 78, 15008.
- Nelles, O. Axes-oblique partitioning strategies for local model networks. In Proceedings of the 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, Munich, Germany, 4–6 October 2006; pp. 2378–2383.
- Hartmann, B.; Nelles, O. On the smoothness in local model networks. In Proceedings of the 2009 American Control Conference, St. Louis, MO, USA, 10–12 June 2009; pp. 3573–3578.
- Brett, M.; Penny, W.D.; Kiebel, S. Human Brain Function II, an Introduction to Random Field Theory; Academic Press: London, UK, 2003.
- Pieczynski, W.; Tebbache, A.-N. Pairwise Markov random fields and segmentation of textured images. Mach. Graph. Vis. 2000, 9, 705–718.
- Pantazis, D.; Nichols, T.E.; Baillet, S.; Leahy, R.M. A comparison of random field theory and permutation methods for the statistical analysis of MEG data. NeuroImage 2005, 25, 383–394.
- Zhang, Y.; Brady, M.; Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 2001, 20, 45–57.
- De Oliveira, V. Bayesian prediction of clipped Gaussian random fields. Comput. Stat. Data Anal. 2000, 34, 299–314.
- Adler, R.J.; Taylor, J.E. Random Fields and Geometry; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2009.
- Zheng, S.; Jayasumana, S.; Romera-Paredes, B.; Vineet, V.; Su, Z.; Du, D.; Huang, C.; Torr, P.H. Conditional random fields as recurrent neural networks. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1529–1537.
- Abrahamsen, P. A Review of Gaussian Random Fields and Correlation Functions. 1997. Available online: https://www.researchgate.net/profile/Petter_Abrahamsen/publication/335490333_A_Review_of_Gaussian_Random_Fields_and_Correlation_Functions/links/5d68cc0692851c154cc5b9bd/A-Review-of-Gaussian-Random-Fields-and-Correlation-Functions.pdf (accessed on 17 September 2020).
- Radosavljevic, V.; Vucetic, S.; Obradovic, Z. Continuous Conditional Random Fields for Regression in Remote Sensing. In Proceedings of the ECAI 2010-19th European Conference on Artificial Intelligence, Lisbon, Portugal, 16–20 August 2010; pp. 809–814.
- Zhang, J. The mean field theory in EM procedures for Markov random fields. IEEE Trans. Signal Process. 1992, 40, 2570–2583.
- Zhu, X.; Ghahramani, Z.; Lafferty, J.D. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the Twentieth International Conference Machine Learning (ICML 2003), Washington, DC, USA, 21–24 August 2003; pp. 912–919.
- Cohen, A.; Jones, R.H. Regression on a random field. J. Am. Stat. Assoc. 1969, 64, 1172–1182.
- Hallin, M.; Lu, Z.; Tran, L.T. Local linear spatial regression. Ann. Stat. 2004, 32, 2469–2500.
- Carbon, M.; Francq, C.; Tran, L.T. Kernel regression estimation for random fields. J. Stat. Plan. Inference 2007, 137, 778–798.
- El Machkouri, M. Nonparametric regression estimation for random fields in a fixed-design. Stat. Inference Stoch. Process. 2007, 10, 29–47.
- Dua, D.; Graff, C. UCI Machine Learning Repository. 2019. Available online: https://archive.ics.uci.edu/ml/citation_policy.html (accessed on 19 December 2019).
- Brooks, T.F.; Pope, D.S.; Marcolini, M.A. Airfoil Self-Noise and Prediction; NASA: Washington, DC, USA, 1989.
- NASA-Airfoil Noise, Revisiting Machine Learning Datasets. 2018. Available online: https://www.simonwenkel.com/2018/11/06/revisiting-ml-NASA-airfoil-noise.html (accessed on 20 September 2006).
- Patri, A.; Patnaik, Y. Random forest and stochastic gradient tree boosting based approach for the prediction of airfoil self-noise. Procedia Comput. Sci. 2015, 46, 109–121.
- The NASA Data Set, Machine Learning Examples. Predict the Noise Generated by Airfoil Blades. 2018. Available online: https://www.neuraldesigner.com/learning/examples/airfoil-self-noise-prediction (accessed on 19 December 2019).
- Gonzalez, R.L. Neural Networks for Variational Problems in Engineering. Ph.D. Dissertation, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, 2009.
- Errasquin, L. Airfoil Self-Noise Prediction Using Neural Networks for Wind Turbines. Ph.D. Dissertation, Virginia Tech, Blacksburg, VA, USA, 2009.
- Tüfekci, P. Prediction of full load electrical power output of a base load operated combined cycle power plant using machine learning methods. Int. J. Electr. Power Energy Syst. 2014, 60, 126–140.
- Kaya, H.; Tüfekci, P.; Gürgen, S.F. Local and Global Learning Methods for Predicting Power of a Combined Gas & Steam Turbine. In Proceedings of the International Conference on Emerging Trends in Computer and Electronics Engineering (ICETCEE 2012), Dubai, UAE, 24–25 March 2012.
- Navarro-González, F.J.; Compañ, P.; Satorre, R.; Villacampa, Y. Numerical determination for solving the symmetric eigenvector problem using genetic algorithm. Appl. Math. Model. 2016, 40, 4935–4947.
Input Variables | Output Variable
---|---
Frequency (Hz) | Scaled sound pressure level (dB)
Angle of attack (deg) |
Chord length (m) |
Free-stream velocity (m/s) |
Suction side displacement thickness (m) |
Research | Model | R2 | MSE | RMSE | MAE
---|---|---|---|---|---
[71] | LR | 0.558 | 22.129 | 4.704 | 3.672
 | DT | 0.871 | 6.457 | 2.541 | 1.788
 | SVMR | 0.751 | 12.483 | 3.533 | 2.674
 | RF | 0.935 | 3.283 | 1.812 | 1.313
 | ABR | 0.725 | 13.796 | 3.714 | 3.078
 | XGBR | 0.946 | 2.707 | 1.645 | 1.074
 | Baseline NN | 0.676 | 16.236 | 4.029 | 3.078
 | Deeper NN | 0.802 | 9.916 | 3.149 | 2.474
[72] | RF | 0.929 | | |
 | SGTB | 0.969 | | |
[73] | NN | 0.952 (R2 of linear regression predicted/real) | | |
[74] | NN | 0.894 | | |
[75] | NN | 0.943/0.838 (several models) | | |
Parameter | Full Model | Restricted Model
---|---|---
R2 | 0.964 | 0.887
MAE | 0.755 | 1.679
MAPE | 0.60% | 1.35%
MSE | 1.711 | 5.387
RMSE | 1.308 | 2.321
Input Variables | Output Variable
---|---
Ambient temperature (°C) | Net hourly electrical energy output (MW)
Exhaust vacuum (cm Hg) |
Ambient pressure (mbar) |
Relative humidity (%) |
Research | Model | MSE | RMSE | MAE
---|---|---|---|---
[76] | LMS | 20.903 | 4.572 | 3.621
 | SMOReg | 20.821 | 4.563 | 3.620
 | K* | 14.907 | 3.861 | 2.882
 | BREP | 14.341 | 3.787 | 2.818
 | M5R | 17.040 | 4.128 | 3.172
 | M5P | 16.704 | 4.087 | 3.140
 | REP | 17.733 | 4.211 | 3.133
[77] | k-NN + ANN | 19.990 | |
 | k-means + ANN | 15.430 | |
Parameter | Full Model | Restricted Model
---|---|---
R2 | 0.979 | 0.957
MAE | 1.741 | 2.572
MAPE | 0.39% | 0.57%
MSE | 6.102 | 12.592
RMSE | 2.470 | 3.548
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Navarro-González, F.J.; Villacampa, Y.; Cortés-Molina, M.; Ivorra, S. Numerical Non-Linear Modelling Algorithm Using Radial Kernels on Local Mesh Support. Mathematics 2020, 8, 1600. https://doi.org/10.3390/math8091600