Article

Locating the Parameters of RBF Networks Using a Hybrid Particle Swarm Optimization Method

by Ioannis G. Tsoulos *,† and Vasileios Charilogis †
Department of Informatics and Telecommunications, University of Ioannina, 47150 Arta, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2023, 16(2), 71; https://doi.org/10.3390/a16020071
Submission received: 12 December 2022 / Revised: 16 January 2023 / Accepted: 17 January 2023 / Published: 21 January 2023
(This article belongs to the Special Issue Swarm Intelligence Applications and Algorithms)

Abstract: In the present work, an innovative two-phase method is presented for parameter tuning in radial basis function artificial neural networks. These kinds of machine learning models find application in many scientific fields in classification problems or in function regression. In the first phase, a technique based on particle swarm optimization is performed to locate a promising interval of values for the network parameters. Particle swarm optimization was used as it is a highly reliable method for global optimization problems and, in addition, one of the fastest and most flexible techniques of its class. In the second phase, the network is trained within the located interval using a global optimization technique such as a genetic algorithm. Furthermore, in order to speed up the training of the network and due to the use of a two-stage method, parallel programming techniques were utilized. The new method was applied to a number of famous classification and regression datasets, and the results were more than promising.

1. Introduction

Regression and data classification are two major categories of problems that are solved with machine learning techniques. Such problems appear regularly in scientific fields such as physics [1,2], chemistry [3,4], economics [5,6], medicine [7,8], etc. A programming tool that is used quite often to handle such problems is the Radial Basis Function (RBF) artificial neural network [9]. An RBF network can be defined as the following function:
$$y(\mathbf{x}) = \sum_{i=1}^{k} w_i \, \phi\left(\left\| \mathbf{x} - \mathbf{c}_i \right\|\right) \quad (1)$$
The following applies to the above equation:
  • The vector $\mathbf{x}$ stands for the input pattern of Equation (1). The number of elements in this vector is denoted as $d$.
  • The vectors $\mathbf{c}_i,\ i = 1, \dots, k$ are denoted as the center vectors.
  • The vector $\mathbf{w}$ is the vector of output weights of the RBF network.
  • The value $y(\mathbf{x})$ represents the predicted value of the network for the pattern $\mathbf{x}$.
Typically, the Gaussian function is used as the function $\phi(x)$, defined as:
$$\phi(x) = \exp\left(-\frac{\left\| x - c \right\|^2}{\sigma^2}\right) \quad (2)$$
A plot of this function with $c = 0$, $\sigma = 1$ is displayed in Figure 1. As can be observed, the value of the function decreases as we move away from the center. An extensive overview of RBF networks was given in the work of Ghosh and Nag [10]. RBF networks are used as approximation tools in various cases, such as solutions of differential equations [11,12], digital communications [13,14], physics [15,16], chemistry [17,18], economics [19,20,21], network security [22,23], etc. RBF networks were thoroughly discussed in [24], and they have been parallelized in a variety of research papers [25,26]. This model has been extended by various researchers in tasks such as creating new initialization techniques for the network parameters [27,28,29], pruning techniques [30,31,32], and the construction of RBF networks [33,34,35].
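To make the notation of Equations (1) and (2) concrete, the following minimal Python/NumPy sketch evaluates an RBF network with k Gaussian units for a single input pattern. It is an illustrative, hypothetical implementation (function names and the toy data are ours), not the authors' ANSI C++ code.

```python
import numpy as np

def rbf_output(x, centers, sigmas, weights):
    """Evaluate y(x) = sum_i w_i * phi(||x - c_i||) for one pattern x (Equation (1)).

    centers: (k, d) array of center vectors c_i
    sigmas:  (k,)   array of Gaussian widths sigma_i
    weights: (k,)   array of output weights w_i
    """
    # Gaussian basis phi_i(x) = exp(-||x - c_i||^2 / sigma_i^2), Equation (2)
    dist2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-dist2 / sigmas ** 2)
    # Weighted sum of the k basis responses
    return float(weights @ phi)

# Toy usage: k = 3 units in d = 2 dimensions
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))
sigmas = np.ones(3)
weights = np.array([0.5, -1.0, 2.0])
print(rbf_output(np.array([0.1, -0.2]), centers, sigmas, weights))
```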
In this work, a hybrid technique is proposed for the optimal calculation of the parameters of an RBF network. The technique consists of two phases. During the first phase, information is collected from the training data of the neural network, and an attempt is made to identify a small interval of values for the neural network parameters. To identify this interval, an optimization method is used that gradually constructs the interval of values estimated to give the lowest training error of the network. To locate the optimal interval, the Particle Swarm Optimization (PSO) technique was used [36]. The PSO method was chosen for the first phase because it is fast and flexible enough for optimization problems, does not require a large number of parameters to be set by the user, and has been successfully used in a variety of problems, such as flow shop scheduling [37], developing charging strategies for electric vehicles [38], emotion recognition [39], robot trajectory planning [40], etc. The detection of the value interval is performed so that the minimization of the network error in the second phase of the method becomes faster and more efficient. In the second phase, the parameters of the neural network are optimized within the optimal value interval of the first phase. The optimization can be performed by any global optimization method [41]. In this work, genetic algorithms [42,43,44] were chosen for the second phase. The main advantages of genetic algorithms are tolerance to errors, easy parallel implementation, efficient exploration of the search space, etc.
Recently, much work has appeared on tuning the parameters of machine learning models, such as the work of Agarwal and Bhanot [45] on the adaptation of RBF parameters, the incorporation of an improved ABC algorithm to adapt the parameters of RBF networks [46], the usage of the Firefly algorithm for optimization [47], along with machine learning models for cervical cancer diagnosis [48], the tuning of CNN and XGBoost models by an optimization algorithm for COVID-19 diagnosis [49], etc.
The rest of this article is organized as follows: in Section 2, the two phases of the proposed method are thoroughly discussed; in Section 3, the experimental datasets are listed, as well as the experimental results; finally, in Section 4, some conclusions are presented.

2. Method Description

The training error of the RBF network is expressed as:
$$E\left(y(\mathbf{x}, g)\right) = \sum_{i=1}^{m} \left(y\left(\mathbf{x}_i, g\right) - t_i\right)^2 \quad (3)$$
The value $m$ stands for the number of patterns, and the values $t_i$ denote the real (target) outputs for the corresponding input patterns $\mathbf{x}_i$. The vector $g$ denotes the set of parameters of the RBF network. Usually, RBF networks are trained through a two-phase procedure:
  • In the first phase, the k centers, as well as the associated variances are calculated through the K-means algorithm [50]. A typical formulation of the K-means algorithm is outlined in Algorithm 1.
  • In the second phase, the weight vector $\mathbf{w} = \left(w_1, w_2, \dots, w_k\right)$ is estimated by solving a linear system of equations (a minimal code sketch of this step is given after the list):
    (a) Set $W = \left(w_{kj}\right)$;
    (b) Set $\Phi = \left(\phi_j\left(x_i\right)\right)$;
    (c) Set $T = \left(t_i\right) = \left(f\left(x_i\right)\right),\ i = 1, \dots, M$;
    (d) The system to be solved is identified as:
    $$\Phi^{T}\left(T - \Phi W^{T}\right) = 0$$
    with the solution:
    $$W^{T} = \left(\Phi^{T} \Phi\right)^{-1} \Phi^{T} T = \Phi^{\dagger} T.$$
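As an illustration of this second (linear) step, the sketch below builds the matrix Φ from fixed centers and variances and solves the least-squares problem $\Phi w \approx t$, which is equivalent to the pseudo-inverse solution above. It is a hypothetical NumPy version of the standard procedure (names such as design_matrix are ours), not the paper's implementation.

```python
import numpy as np

def design_matrix(X, centers, sigmas):
    """Phi[i, j] = phi_j(x_i) = exp(-||x_i - c_j||^2 / sigma_j^2)."""
    dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # shape (m, k)
    return np.exp(-dist2 / sigmas[None, :] ** 2)

def solve_output_weights(X, t, centers, sigmas):
    """Least-squares solution of Phi w = t, i.e. w = pinv(Phi) t."""
    Phi = design_matrix(X, centers, sigmas)
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w
```

Combined with a K-means routine for the centers and variances, this reproduces the classic two-phase baseline (the RBF-KMEANS column in the experimental tables).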
The proposed work used two computational phases to optimally calculate the network parameters. Firstly, a promising range of the parameters of the network was calculated through an optimization process that incorporated interval arithmetic. Subsequently, the parameters of the network were trained with the usage of a genetic algorithm inside the located range of the first phase. The following subsections analyze both of these phases in detail.
Algorithm 1 The K-means algorithm.
  • Repeat:
    (a) Set $S_j = \{\},\ j = 1, \dots, k$.
    (b) For each pattern $x_i,\ i = 1, \dots, m$, do:
        i. Find the nearest center: $j^{*} = \operatorname{argmin}_{j = 1, \dots, k} D\left(x_i, c_j\right)$.
        ii. Update $S_{j^{*}} = S_{j^{*}} \cup \left\{ x_i \right\}$.
    (c) EndFor.
    (d) For each center $c_j,\ j = 1, \dots, k$, do:
        i. Define $M_j$ as the number of points in $S_j$.
        ii. Update the center:
        $$c_j = \frac{1}{M_j} \sum_{x_i \in S_j} x_i$$
    (e) EndFor.
  • Compute the variance of every center as:
    $$\sigma_j^2 = \frac{1}{M_j} \sum_{x_i \in S_j} \left\| x_i - c_j \right\|^2$$
  • Terminate if there is no change in the centers $c_j$.
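The following Python sketch is a simplified version of Algorithm 1, assuming Euclidean distances, a fixed iteration cap, and a fallback for empty clusters (these choices are ours, not specified in the paper).

```python
import numpy as np

def kmeans_with_variances(X, k, iters=100, seed=0):
    """Return the k centers c_j and the per-center variances sigma_j^2."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign every pattern to its nearest center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):   # "no change in the centers"
            break
        centers = new_centers
    # sigma_j^2: mean squared distance of the members of cluster j to c_j
    variances = np.array([
        ((X[labels == j] - centers[j]) ** 2).sum(axis=1).mean()
        if np.any(labels == j) else 1.0
        for j in range(k)
    ])
    return centers, variances
```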

2.1. Preliminaries

In order to perform interval arithmetic on RBF networks, the following definitions are introduced:
  • The comparison of two intervals $W = \left[w_1, w_2\right]$, $Z = \left[z_1, z_2\right]$ is performed through the function:
    $$L(W, Z) = \begin{cases} \mathrm{TRUE}, & w_1 < z_1 \ \text{OR} \ \left(w_1 = z_1 \ \text{AND} \ w_2 < z_2\right) \\ \mathrm{FALSE}, & \text{otherwise} \end{cases}$$
  • The function $E(y)$ of Equation (3) is modified to an interval one, $\left[E_{\min}(y), E_{\max}(y)\right]$, calculated with the procedure given in Algorithm 2.
In the proposed algorithm, the RBF network contains n variables, where
$$n = (d + 2) \times k$$
The value of n is calculated as follows:
  • Every center $c_i,\ i = 1, \dots, k$ has $d$ variables, which amounts to $d \times k$ variables.
  • For every center, a separate value $\sigma_i$ is used for the Gaussian processing unit, which adds $k$ variables.
  • The output weight vector $\mathbf{w}$ also has $k$ variables.
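As a small illustration of this parameter count, the hypothetical helper below splits a flat vector of $n = (d + 2) \times k$ values back into centers, widths, and output weights. The per-unit ordering (d center coordinates followed by the unit's sigma, with the k output weights at the end) mirrors the ordering of Algorithm 3 given later, but the exact encoding in the authors' software is an assumption.

```python
import numpy as np

def decode_parameters(g, d, k):
    """Split a flat vector of n = (d + 2) * k values into centers, sigmas, weights."""
    g = np.asarray(g, dtype=float)
    assert g.size == (d + 2) * k
    per_unit = g[:(d + 1) * k].reshape(k, d + 1)  # one row per Gaussian unit
    centers = per_unit[:, :d]                     # (k, d) center coordinates
    sigmas = per_unit[:, d]                       # (k,) widths
    weights = g[(d + 1) * k:]                     # (k,) output weights
    return centers, sigmas, weights
```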
Algorithm 2 Fitness calculation for the modified PSO algorithm.
The fitness calculation for a given particle g is as follows:
  • Take $N_S$ random samples $g_i$ inside the intervals of $g$.
  • Calculate $E_{\min}(g) = \min_{i = 1, \dots, N_S} \sum_{j=1}^{M} \left(y\left(x_j, g_i\right) - t_j\right)^2$.
  • Calculate $E_{\max}(g) = \max_{i = 1, \dots, N_S} \sum_{j=1}^{M} \left(y\left(x_j, g_i\right) - t_j\right)^2$.
  • Return $f_g = \left[E_{\min}(g), E_{\max}(g)\right]$.
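A minimal sketch of Algorithm 2 and of the ordering function L is given below, assuming every particle is stored as two arrays of lower and upper bounds and that the caller supplies the training-error function (for example, built from decode_parameters and rbf_output above). All names are illustrative.

```python
import numpy as np

def interval_less(W, Z):
    """Ordering L(W, Z) between two intervals W = [w1, w2] and Z = [z1, z2]."""
    (w1, w2), (z1, z2) = W, Z
    return w1 < z1 or (w1 == z1 and w2 < z2)

def interval_fitness(lower, upper, train_error, n_samples=50, seed=0):
    """Algorithm 2: estimate [E_min, E_max] of the training error over the box
    defined by `lower` and `upper`, using n_samples uniformly drawn vectors.

    train_error: callable mapping a flat parameter vector g to sum_j (y(x_j, g) - t_j)^2
    """
    rng = np.random.default_rng(seed)
    samples = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    errors = np.array([train_error(g) for g in samples])
    return errors.min(), errors.max()
```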

2.2. The Proposed PSO Algorithm

During this phase, interval arithmetic techniques are used to locate a range for the parameters of the RBF network. Interval techniques [51,52,53] comprise a common approach in global optimization with various applications [54,55,56]. The first phase aims to locate the most-promising bounding box for the $n$ parameters of the corresponding RBF network. The initial bounding box is defined as $S$, which is a subset of $R^n$:
$$S = \left[a_1, b_1\right] \times \left[a_2, b_2\right] \times \dots \times \left[a_n, b_n\right] \quad (8)$$
The interval method of the first phase successively subdivides the set $S$, discarding areas that are not promising enough to contain the global minimum. In order to locate the best interval of Equation (8), a modified PSO algorithm [57] is used. The proposed variant of the PSO method is based on the original technique (Algorithm 1 of [57]); however, the particles are intervals of values, and at each iteration, a normalization of the velocity vector takes place to avoid generating particles outside the original range of values. The PSO method is based on a population of candidate solutions, which, in most cases, are called particles. The method uses two vectors: the current location of the particles, denoted as $p$, and the velocity of their movement, denoted as $u$. The PSO method locates the global minimum by moving the particles based on their previous best position, as well as the best position of the total population of particles.
The initial bounding boxes for the centers and variances of the RBF network are constructed using the K-means clustering algorithm. Subsequently, the initial values for the intervals $\left[a_i, b_i\right]$ are calculated through Algorithm 3. The intervals for the first $(d + 1) \times k$ variables are obtained by multiplying the values produced by K-means by the positive quantity $F$. The value $B$ is used to initialize the intervals of the output weights $\mathbf{w}$. Afterwards, the following PSO variant is executed:
  • Set $N_c$ as the number of particles.
  • Set the normalization factor $\lambda$.
  • Set $k$, the number of weights (processing units) of the RBF network.
  • Set $N_g$, the maximum number of generations allowed.
  • Set $N_s$, the number of random samples that will be used in the fitness calculation algorithm (Algorithm 2).
  • Set $f^{*} = \left[\infty, \infty\right]$, the fitness of the best located particle $p^{*}$.
  • Construct $S = \left[a_1, b_1\right] \times \left[a_2, b_2\right] \times \dots \times \left[a_n, b_n\right]$, as obtained from the previous two algorithms.
  • Initialize the $N_c$ particles. Each particle $p_i,\ i = 1, \dots, N_c$ is considered as a set of intervals randomly initialized in $S$. The layout of each particle is graphically presented in Figure 2.
  • For $i = 1, \dots, N_c$, do:
    (a)
    Calculate the fitness $f_i$ of particle $p_i$ using the procedure outlined in Algorithm 2.
    (b)
    If $L\left(f_i, f^{*}\right) = \mathrm{TRUE}$, then $f^{*} = f_i,\ p^{*} = p_i$.
    (c)
    Set $p_{b,i} = p_i,\ f_{b,i} = f_i$ as the best located position for particle $i$ and the associated fitness value.
    (d)
    For $j = 1, \dots, n$, do:
    i.
    Set $\delta$ as the width of the interval $p_{ij}$.
    ii.
    Set $u_{ij} = \left[-\frac{r\delta}{20}, \frac{r\delta}{20}\right]$, with $r$ a random number in $[0, 1]$. The velocity is initialized to a small sub-interval of the range of values of the corresponding parameter in order to avoid, as much as possible, excessive velocity values, which would cause the particles to move out of their value range very quickly and thus make the optimization process difficult.
    (e)
    EndFor.
  • EndFor.
  • Set iter = 0.
  • Calculate the inertia value as $\omega = \omega_{\max} - \frac{\mathrm{iter}}{N_g}\left(\omega_{\max} - \omega_{\min}\right)$, where common values for these parameters are $\omega_{\min} = 0.4$ and $\omega_{\max} = 0.9$. Many inertia calculations have appeared in the relevant literature, such as constant inertia [58], linearly decreasing inertia [59], exponential inertia [60], random inertia [61], dynamic inertia [62], fuzzy inertia calculation [63], etc. The present scheme was chosen because the inertia decreases linearly with time: large values of the inertia allow a wider exploration of the search space, while low values allow a more focused search.
  • For $i = 1, \dots, N_c$, do:
    (a)
    Calculate the new velocity $u_i = \omega u_i + r_1 c_1 \left(p_{b,i} - p_i\right) + r_2 c_2 \left(p^{*} - p_i\right)$, where $r_1, r_2$ are random numbers in $[0, 1]$, and the constant values $c_1$ and $c_2$ stand for the cognitive and the social parameter, correspondingly. Usually, the values for $c_1$ and $c_2$ are in $[1, 2]$.
    (b)
    Normalize the velocity as $u_i = \frac{1}{\lambda} u_i$, where $\lambda$ is a positive number with $\lambda > 1$.
    (c)
    Update the position $p_i = p_i + u_i$.
    (d)
    Calculate the fitness $f_i$ of particle $p_i$.
    (e)
    If $L\left(f_i, f_{b,i}\right) = \mathrm{TRUE}$, then $p_{b,i} = p_i,\ f_{b,i} = f_i$.
    (f)
    If $L\left(f_i, f^{*}\right) = \mathrm{TRUE}$, then $f^{*} = f_i,\ p^{*} = p_i$.
  • EndFor.
  • Set iter = iter + 1.
  • If $\mathrm{iter} \leq N_g$, return to the inertia calculation step.
  • Otherwise, return $S^{*} = \left[a_1, b_1\right] \times \left[a_2, b_2\right] \times \dots \times \left[a_n, b_n\right]$, the domain range of the best particle $p^{*}$.
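The following Python sketch shows one iteration of the velocity and position update for a single interval particle, with the normalization by λ described above. Each particle is represented as an (n, 2) array of lower/upper bounds; the clipping back to the initial box S is our own safeguard, and the function name is illustrative.

```python
import numpy as np

def pso_step(pos, vel, best_pos, global_best, omega, c1=1.0, c2=1.0, lam=100.0,
             box_low=None, box_high=None, rng=None):
    """One update of the interval PSO variant for a single particle.

    pos, vel, best_pos, global_best: arrays of shape (n, 2), one [low, high]
    interval per network parameter; lam is the normalization factor lambda.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(), rng.random()
    # Standard PSO velocity update, applied component-wise to both interval ends
    vel = omega * vel + r1 * c1 * (best_pos - pos) + r2 * c2 * (global_best - pos)
    vel = vel / lam                 # normalization step: keeps the moves small
    pos = pos + vel
    if box_low is not None:         # optional safeguard: stay inside the initial box S
        pos = np.clip(pos, box_low[:, None], box_high[:, None])
    return pos, vel
```

The inertia value omega is supplied by the caller, recomputed every generation from the linearly decreasing schedule given above.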
Algorithm 3 Algorithm used to locate the initial values of the intervals $\left[a_i, b_i\right],\ i = 1, \dots, n$.
  • Set m = 0.
  • Set F > 1 , B > 0 .
  • For $i = 1, \dots, k$, do:
    (a)
    For $j = 1, \dots, d$, do:
    i.
    Set $a_m = -F \times c_{ij},\ b_m = F \times c_{ij}$.
    ii.
    Set m = m + 1 .
    (b)
    EndFor.
    (c)
    Set $a_m = -F \times \sigma_i,\ b_m = F \times \sigma_i$.
    (d)
    Set m = m + 1 .
  • EndFor.
  • For $j = 1, \dots, k$, do:
    (a)
    Set $a_m = -B,\ b_m = B$.
    (b)
    Set m = m + 1 .
  • EndFor.
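A minimal sketch of Algorithm 3 follows, building the initial box from the K-means centers and variances with the factor F and the weight bound B (default values as in Table 1). Absolute values are taken so that every interval is well formed when a center coordinate is negative; this is our assumption, and all names are illustrative.

```python
import numpy as np

def initial_box(centers, sigmas, F=5.0, B=100.0):
    """Build the bounds a_i, b_i for all n = (d + 2) * k network parameters."""
    k, d = centers.shape
    low, high = [], []
    for i in range(k):
        for j in range(d):                    # intervals for the center coordinates
            half = F * abs(centers[i, j])
            low.append(-half)
            high.append(half)
        half = F * abs(sigmas[i])             # interval for the unit's sigma
        low.append(-half)
        high.append(half)
    low += [-B] * k                           # intervals for the k output weights
    high += [B] * k
    return np.array(low), np.array(high)
```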

2.3. Optimization of Parameters through Genetic Algorithm

During the second phase of the proposed method, a genetic algorithm is employed, which optimizes the parameters of the RBF network within the optimal interval calculated in the first phase. The genetic algorithm has its roots in the $\mathrm{GA}_{cr1,l}$ algorithm from the paper of Kaelo and Ali [64]. This method was enhanced using the stopping rule suggested by Tsoulos [65]. The genetic algorithm has the following steps:
  • Initialization step:
    (a)
    Set $N_c$ as the number of chromosomes. Every chromosome is encoded as in the case of PSO, using the scheme of Figure 2.
    (b)
    Set $N_g$ as the maximum number of generations allowed.
    (c)
    Set $k$ as the number of weights (processing units) of the RBF network.
    (d)
    Obtain the domain range S from the procedure of Section 2.2.
    (e)
    Initialize the $N_c$ chromosomes randomly in $S$.
    (f)
    Define the selection rate $p_s \in [0, 1]$.
    (g)
    Define the mutation rate $p_m \in [0, 1]$.
    (h)
    Set iter = 0.
  • Evaluation step:
    For every chromosome $g$, calculate the associated fitness value $f_g = \sum_{i=1}^{m} \left(y\left(x_i, g\right) - t_i\right)^2$.
  • Genetic operations step:
    Perform the genetic operations of selection, crossover, and mutation.
    (a)
    Selection procedure: First, the population of chromosomes is sorted according to the associated fitness values. The first $\left(1 - p_s\right) \times N_c$ chromosomes are copied unchanged to the next generation, while the rest are replaced by offspring constructed through the crossover procedure. During the selection step, a series of mating pairs is chosen using the well-known procedure of tournament selection for each parent.
    (b)
    Crossover procedure: For each pair $(z, w)$ of chosen parents, two new offspring $\tilde{z}$ and $\tilde{w}$ are constructed as follows (a small code sketch is given after this list):
    $$\tilde{z}_i = a_i z_i + \left(1 - a_i\right) w_i, \qquad \tilde{w}_i = a_i w_i + \left(1 - a_i\right) z_i$$
    where $a_i$ is a random number with $a_i \in [-0.5, 1.5]$ [64].
    (c)
    Mutation procedure: For every element of each chromosome, pick a random number $r \in [0, 1]$. If $r \leq p_m$, then alter the corresponding element randomly.
  • Termination check step:
    (a)
    Set iter = iter + 1.
    (b)
    If the termination criteria hold, then Terminate; else, goto evaluation step.
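The sketch below illustrates the crossover and mutation operations described above in Python; the coefficient range and the uniform re-draw used for mutation follow the description in the list, while the function names and the reliance on per-gene bounds are our own choices.

```python
import numpy as np

def crossover(z, w, rng):
    """Kaelo-Ali style crossover: per-gene coefficients a_i drawn from [-0.5, 1.5]."""
    a = rng.uniform(-0.5, 1.5, size=len(z))
    child1 = a * z + (1.0 - a) * w
    child2 = a * w + (1.0 - a) * z
    return child1, child2

def mutate(g, low, high, p_m, rng):
    """Replace each gene by a fresh random value in [low, high] with probability p_m."""
    mask = rng.random(len(g)) < p_m
    g = g.copy()
    g[mask] = rng.uniform(low[mask], high[mask])
    return g
```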
The overall process of the two phases is graphically shown in Figure 3.

3. Experiments

The suggested method was tested on a series of classification and regression problems taken from various papers and repositories of the relevant literature. For the classification problems, two Internet databases were used: the KEEL repository [66] and the UCI Machine Learning Repository.
The regression problems can be found at the Statlib URL ftp://lib.stat.cmu.edu/datasets/index.html (accessed on 5 January 2023).

3.1. Experimental Datasets

The classification problems used here were the following:
  • Appendicitis dataset, a medical dataset suggested in [67].
  • Australian dataset [68], an economic dataset.
  • Balance dataset [69], used for the prediction of psychological states.
  • Cleveland dataset, related to heart diseases [70,71].
  • Bands dataset, a dataset related to printing problems [72].
  • Dermatology dataset [73], which is a medical dataset.
  • Hayes-Roth dataset [74].
  • Heart dataset [75], a medical dataset.
  • HouseVotes dataset [76].
  • Ionosphere dataset, a dataset from the Johns Hopkins database [77,78].
  • Liverdisorder dataset [79], a medical dataset about liver disorders.
  • Lymography dataset [80].
  • Mammographic dataset [81], which is a dataset about breast cancer.
  • Parkinsons dataset, a medical dataset about Parkinson’s Disease (PD) [82].
  • Pima dataset, a medical dataset [83].
  • Popfailures dataset [84], a dataset about climate.
  • Spiral dataset: The spiral artificial dataset contains 1000 two-dimensional examples that belong to two classes (500 examples each). The data of the first class were created using $x_1 = 0.5 t \cos(0.08 t),\ x_2 = 0.5 t \cos\left(0.08 t + \frac{\pi}{2}\right)$, and the data of the second class using $x_1 = 0.5 t \cos(0.08 t + \pi),\ x_2 = 0.5 t \cos\left(0.08 t + \frac{3\pi}{2}\right)$ (a small generation sketch is given after this list).
  • Regions2 dataset, described in [85].
  • Saheart dataset [86], which is related to heart diseases.
  • Segment dataset [87], which is related to image processing.
  • Wdbc dataset [88], which is related to breast tumors.
  • Wine dataset. The wine recognition dataset contains data from wine chemical analysis. It contains 178 examples of 13 features each, which are classified into three classes. It has been examined in many published works [89,90].
  • Eeg dataset. As a real-world example, an EEG dataset described in [91] was used here. The classification problems derived from this dataset are denoted as Z_F_S, ZONF_S, and ZO_NF_S.
  • Zoo dataset [92], used for the classification of animals.
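For illustration, the spiral data above can be generated with the following sketch; the range of the parameter t is an assumption, since the paper only specifies the generating functions and 500 points per class.

```python
import numpy as np

def make_spiral(n_per_class=500, t_max=20.0):
    """Generate the two-class spiral dataset from the formulas given above."""
    t = np.linspace(0.0, t_max, n_per_class)
    c1 = np.column_stack((0.5 * t * np.cos(0.08 * t),
                          0.5 * t * np.cos(0.08 * t + np.pi / 2)))
    c2 = np.column_stack((0.5 * t * np.cos(0.08 * t + np.pi),
                          0.5 * t * np.cos(0.08 * t + 3 * np.pi / 2)))
    X = np.vstack((c1, c2))
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y
```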
The regression datasets were as follows:
  • Abalone dataset [93].
  • Airfoil dataset, a dataset from NASA related to aerodynamic and acoustic tests [94].
  • Baseball dataset, a dataset used to predict the points scored by baseball players.
  • BK dataset [95], used to estimate the points scored per minute in a basketball game.
  • BL dataset; this dataset is related to an experiment on the effects of machine adjustments on the time to count bolts.
  • Concrete dataset, related to civil engineering [96].
  • Dee dataset, used to predict the daily average price of electric energy in Spain.
  • Diabetes dataset, a medical dataset.
  • FA dataset, related to fat measurements.
  • Housing dataset, described in [97].
  • MB dataset, a statistics dataset [95].
  • MORTGAGE dataset, which contains economic data.
  • NT dataset, derived from [98].
  • PY dataset (the Pyrimidines problem) [99].
  • Quake dataset, which contains data from earthquakes [100].
  • Treasure dataset, which contains economic data.
  • Wankara dataset, which contains weather measurements.

3.2. Experimental Results

The RBF network for the tests was coded in ANSI C++ with the help of the freely available Armadillo library [101]. In addition, in order to increase the reliability of the experimental results, 10-fold cross-validation was used. All the experiments were executed 30 times with a different seed for the random generator each time, and the averages were measured. For the classification datasets, the average classification error is reported and, for the regression datasets, the mean test error. The machine used for the experiments was an AMD Ryzen 5950X with 128 GB of RAM, running the Debian Linux operating system. In order to accelerate the training process, the OpenMP library was incorporated [102]. The experimental settings are listed in Table 1. The experimental results for the classification datasets are listed in Table 2 and, for the regression datasets, in Table 3. For the experimental tables, the following apply:
  • The column NN-RPROP indicates the application of the Rprop method [103] to an artificial neural network [104,105] with 10 hidden nodes, as implemented in the FCNN software package [106].
  • The column NN-GENETIC denotes the application of a genetic algorithm in the artificial neural network with 10 hidden nodes. The parameters of the used genetic algorithm are the same as in the second phase of the proposed method.
  • The column RBF-KMEANS denotes the classic training method for RBF networks by estimating centers and variances through K-means and the output weights by solving a linear system of equations.
  • The column IRBF-100 denotes the application of the current method with λ = 100 .
  • The column IRBF-1000 denotes the application of the current method with λ = 1000 .
  • In both tables, an extra line is added, in which the mean error for each method is shown. This row is denoted by the name AVERAGE. This line also shows the number of times the corresponding method achieved the best result. This number is shown in parentheses.
As one can see from the experimental results, the proposed method significantly outperformed the other techniques in the majority of cases in terms of the average error on the test set. Moreover, the difference from the established method of training RBF networks was of the order of 40%, and in some cases, this percentage was even doubled. The statistical comparison of the proposed technique against the rest is also shown in Figure 4 and Figure 5. However, the proposed technique was significantly slower than the original training technique, as it is a two-stage technique: in the first stage, an optimal interval of values for the network parameters is created with a modified PSO method, and in the second stage, the network is trained using a genetic algorithm. Of course, this extra time can be significantly reduced by incorporating parallel techniques, as was done experimentally using the OpenMP library. Furthermore, changing the normalization factor λ from 100 to 1000 did not have much effect on the mean error on the test set. This implies that the proposed method is quite robust, since it does not depend strongly on this parameter.
An additional experiment was performed with different values for the parameter F. The experimental results for this experiment are shown in Table 4 for the classification datasets and in Table 5 for the regression datasets. For this critical parameter, no large deviations appeared in the results of the proposed method. This further enhances the robustness and reliability of the proposed technique.
Furthermore, Table 6 reports the precision, recall, and F-score for a series of classification datasets, for both the proposed method (IRBF-100) and the classic method for training RBF networks (RBF-KMEANS). In these experimental results, the reader can see the superiority of the proposed technique over the traditional method of training RBF networks.
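The metrics of Table 6 can be computed as in the following sketch; using scikit-learn and macro-averaging over the classes are convenience assumptions on our part, not a statement about the paper's tooling.

```python
from sklearn.metrics import precision_recall_fscore_support

def report_metrics(y_true, y_pred):
    """Macro-averaged precision, recall, and F-score over all classes."""
    precision, recall, fscore, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return precision, recall, fscore
```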

4. Conclusions

In the present work, a two-stage hybrid method was proposed to efficiently identify the parameters of RBF neural networks. In the first stage of the method, a technique rooted in particle swarm optimization was used to efficiently identify a reliable interval of values for the neural network parameters. In the second stage of the method, an intelligent global optimization technique was used to locate the neural network parameters within the optimal value interval of the first stage. In this work, a genetic algorithm was used in the second phase, but any global optimization method could be used in its place.
The method was applied to a multitude of classification and regression problems from the relevant literature. In almost all cases, the proposed method significantly outperformed the other machine learning models, and on average, the improvement in the error on the test sets was of the order of 40% relative to the established RBF training method. Moreover, the method is quite robust with respect to its basic parameters, since changes in the parameter values do not significantly affect its performance. Furthermore, the method can efficiently locate the value interval of the network parameters without any prior knowledge about the type of training data or whether the task is a classification or a regression problem. However, the proposed technique is significantly more time-consuming than the traditional training technique, as it requires computational time for both of its phases. Nevertheless, this effect can be overcome to some extent by the use of modern parallel computing techniques.
The method could be extended by the use of other techniques of training the parameters in RBF networks, such as, for example, the differential evolutionary method [107]. Furthermore, more efficient methods of terminating the first stage of the method could be used, as finding a suitable interval of values for the network parameters requires many numerical calculations.

Author Contributions

I.G.T. and V.C. conceived of the idea and methodology and supervised the technical part regarding the software. I.G.T. conducted the experiments, employing datasets, and provided the comparative experiments. V.C. performed the statistical analysis and prepared the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The experiments of this research work were performed at the high-performance computing system established at the Knowledge and Intelligent Computing Laboratory, Department of Informatics and Telecommunications, University of Ioannina, acquired with the project “Educational Laboratory equipment of TEI of Epirus” with MIS 5007094 funded by the Operational Programme “Epirus” 2014–2020, by ERDF and national funds.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mjahed, M. The use of clustering techniques for the classification of high energy physics data. Nucl. Instrum. Methods Phys. Res. Sect. A 2006, 559, 199–202. [Google Scholar] [CrossRef]
  2. Andrews, M.; Paulini, M.; Gleyzer, S.; Poczos, B. End-to-End Event Classification of High-Energy Physics Data. J. Phys. 2018, 1085, 42022. [Google Scholar] [CrossRef]
  3. He, P.; Xu, C.J.; Liang, Y.Z.; Fang, K.T. Improving the classification accuracy in chemistry via boosting technique. Chemom. Intell. Lab. Syst. 2004, 70, 39–46. [Google Scholar] [CrossRef]
  4. Aguiar, J.A.; Gong, M.L.; Tasdizen, T. Crystallographic prediction from diffraction and chemistry data for higher throughput classification using machine learning. Comput. Mater. Sci. 2020, 173, 109409. [Google Scholar] [CrossRef]
  5. Kaastra, I.; Boyd, M. Designing a neural network for forecasting financial and economic time series. Neurocomputing 1996, 10, 215–236. [Google Scholar] [CrossRef]
  6. Hafezi, R.; Shahrabi, J.; Hadavandi, E. A bat-neural network multi-agent system (BNNMAS) for stock price prediction: Case study of DAX stock price. Appl. Soft Comput. 2015, 29, 196–210. [Google Scholar] [CrossRef]
  7. Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 113. [Google Scholar] [CrossRef] [Green Version]
  8. Qing, L.; Linhong, W.; Xuehai, D. A Novel Neural Network-Based Method for Medical Text Classification. Future Internet 2019, 11, 255. [Google Scholar] [CrossRef] [Green Version]
  9. Park, J.; Sandberg, I.W. Universal Approximation Using Radial-Basis-Function Networks. Neural Comput. 1991, 3, 246–257. [Google Scholar] [CrossRef]
  10. Ghosh, J.; Nag, A. An Overview of Radial Basis Function Networks. In Radial Basis Function Networks 2. Studies in Fuzziness and Soft Computing; Howlett, R.J., Jain, L.C., Eds.; Physica: Heidelberg, Germany, 2001; Volume 67. [Google Scholar]
  11. Nam, M.-D.; Thanh, T.-C. Numerical solution of differential equations using multiquadric radial basis function networks. Neural Netw. 2001, 14, 185–199. [Google Scholar]
  12. Mai-Duy, N. Solving high order ordinary differential equations with radial basis function networks. Int. J. Numer. Meth. Eng. 2005, 62, 824–852. [Google Scholar] [CrossRef]
  13. Laoudias, C.; Kemppi, P.; Panayiotou, C.G. Localization Using Radial Basis Function Networks and Signal Strength Fingerprints. In Proceedings of the WLAN, GLOBECOM 2009—2009 IEEE Global Telecommunications Conference, Honolulu, HI, USA, 30 November–4 December 2009; pp. 1–6. [Google Scholar]
  14. Azarbad, M.; Hakimi, S.; Ebrahimzadeh, A. Automatic recognition of digital communication signal. Int. J. Energy 2012, 3, 21–33. [Google Scholar]
  15. Teng, P. Machine-learning quantum mechanics: Solving quantum mechanics problems using radial basis function networks. Phys. Rev. E 2018, 98, 33305. [Google Scholar] [CrossRef] [Green Version]
  16. Jovanović, R.; Sretenovic, A. Ensemble of radial basis neural networks with K-means clustering for heating energy consumption prediction. Fme Trans. 2017, 45, 51–57. [Google Scholar] [CrossRef] [Green Version]
  17. Yu, D.L.; Gomm, J.B.; Williams, D. Sensor fault diagnosis in a chemical process via RBF neural networks. Control. Eng. Pract. 1999, 7, 49–55. [Google Scholar] [CrossRef]
  18. Shankar, V.; Wright, G.B.; Fogelson, A.L.; Kirby, R.M. A radial basis function (RBF) finite difference method for the simulation of reaction–diffusion equations on stationary platelets within the augmented forcing method. Int. J. Numer. Meth. Fluids 2014, 75, 1–22. [Google Scholar] [CrossRef] [Green Version]
  19. Shen, W.; Guo, X.; Wu, C.; Wu, D. Forecasting stock indices using radial basis function neural networks optimized by artificial fish swarm algorithm. Knowl.-Based Syst. 2011, 24, 378–385. [Google Scholar] [CrossRef]
  20. Momoh, J.A.; Reddy, S.S. Combined Economic and Emission Dispatch using Radial Basis Function. In Proceedings of the 2014 IEEE PES General Meeting Conference & Exposition, National Harbor, MD, USA, 27–31 July 2014; pp. 1–5. [Google Scholar]
  21. Sohrabi, P.; Shokri, B.J.; Dehghani, H. Predicting coal price using time series methods and combination of radial basis function (RBF) neural network with time series. Miner. Econ. 2021, 1–10. [Google Scholar] [CrossRef]
  22. Ravale, U.; Marathe, N.; Padiya, P. Feature Selection Based Hybrid Anomaly Intrusion Detection System Using K Means and RBF Kernel Function. Procedia Comput. Sci. 2015, 45, 428–435. [Google Scholar] [CrossRef] [Green Version]
  23. Lopez-Martin, M.; Sanchez-Esguevillas, A.; Arribas, J.I.; Carro, B. Network Intrusion Detection Based on Extended RBF Neural Network With Offline Reinforcement Learning. IEEE Access 2021, 9, 153153–153170. [Google Scholar] [CrossRef]
  24. Yu, H.T.; Xie, T.; Paszczynski, S.; Wilamowski, B.M. Advantages of Radial Basis Function Networks for Dynamic System Design. IEEE Trans. Ind. Electron. 2011, 58, 5438–5450. [Google Scholar] [CrossRef]
  25. Yokota, R.; Barba, L.A.; Knepley, M.G. PetRBF—A parallel O(N) algorithm for radial basis function interpolation with Gaussians. Comput. Methods Appl. Mech. Eng. 2010, 199, 1793–1804. [Google Scholar] [CrossRef]
  26. Lu, C.; Ma, N.; Wang, Z. Fault detection for hydraulic pump based on chaotic parallel RBF network. EURASIP J. Adv. Signal Process. 2011, 2011, 49. [Google Scholar] [CrossRef] [Green Version]
  27. Kuncheva, L.I. Initializing of an RBF network by a genetic algorithm. Neurocomputing 1997, 14, 273–288. [Google Scholar] [CrossRef]
  28. Ros, F.; Pintore, M.; Deman, A.; Chrétien, J.R. Automatical initialization of RBF neural networks. Chemom. Intell. Lab. Syst. 2007, 87, 26–32. [Google Scholar] [CrossRef]
  29. Wang, D.; Zeng, X.J.; Keane, J.A. A clustering algorithm for radial basis function neural network initialization. Neurocomputing 2012, 77, 144–155. [Google Scholar] [CrossRef]
  30. Ricci, E.; Perfetti, R. Improved pruning strategy for radial basis function networks with dynamic decay adjustment. Neurocomputing 2006, 69, 1728–1732. [Google Scholar] [CrossRef]
  31. Huang, G.-B.; Saratchandran, P.; Sundararajan, N. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans. Neural Netw. 2005, 16, 57–67. [Google Scholar] [CrossRef] [PubMed]
  32. Bortman, M.; Aladjem, M. A Growing and Pruning Method for Radial Basis Function Networks. IEEE Trans. Neural. Netw. 2009, 20, 1039–1045. [Google Scholar] [CrossRef] [PubMed]
  33. Karayiannis, N.B.; Randolph-Gips, M.M. On the construction and training of reformulated radial basis function neural networks. IEEE Trans. Neural Netw. 2003, 14, 835–846. [Google Scholar] [CrossRef] [Green Version]
  34. Peng, J.X.; Li, K.; Huang, D.S. A Hybrid Forward Algorithm for RBF Neural Network Construction. IEEE Trans. Neural Netw. 2006, 17, 1439–1451. [Google Scholar] [CrossRef]
  35. Du, D.; Li, K.; Fei, M. A fast multi-output RBF neural network construction method. Neurocomputing 2010, 73, 2196–2202. [Google Scholar] [CrossRef]
  36. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165. [Google Scholar] [CrossRef]
  37. Liu, B.; Wang, L.; Jin, Y.H. An Effective PSO-Based Memetic Algorithm for Flow Shop Scheduling. IEEE Trans. Syst. Cybern. Part B 2007, 37, 18–27. [Google Scholar] [CrossRef] [PubMed]
  38. Yang, J.; He, L.; Fu, S. An improved PSO-based charging strategy of electric vehicles in electrical distribution grid. Appl. Energy 2014, 128, 82–92. [Google Scholar] [CrossRef]
  39. Mistry, K.; Zhang, L.; Neoh, S.C.; Lim, C.P.; Fielding, B. A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition. IEEE Trans. Cybern. 2017, 47, 1496–1509. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Han, S.; Shan, X.; Fu, J.; Xu, W.; Mi, H. Industrial robot trajectory planning based on improved pso algorithm. J. Phys. Conf. Ser. 2021, 1820, 12185. [Google Scholar] [CrossRef]
  41. Floudas, C.A.; Gounaris, C.E. A review of recent advances in global optimization. J. Glob. Optim. 2009, 45, 3–38. [Google Scholar] [CrossRef]
  42. Goldberg, D. Genetic Algorithms. In Search, Optimization and Machine Learning; Addison-Wesley Publishing Company: Reading, MA, USA, 1989. [Google Scholar]
  43. Michaelewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: Berlin, Germany, 1996. [Google Scholar]
  44. Grady, S.A.; Hussaini, M.Y.; Abdullah, M.M. Placement of wind turbines using genetic algorithms. Renew. Energy 2005, 30, 259–270. [Google Scholar] [CrossRef]
  45. Agarwal, V.; Bhanot, S. Radial basis function neural network-based face recognition using firefly algorithm. Neural. Comput. Appl. 2018, 30, 2643–2660. [Google Scholar] [CrossRef]
  46. Jiang, S.; Lu, C.; Zhang, S.; Lu, X.; Tsai, S.-B.; Wang, C.-K.; Gao, Y.; Shi, Y.; Lee, C.-H. Prediction of Ecological Pressure on Resource-Based Cities Based on an RBF Neural Network Optimized by an Improved ABC Algorithm. IEEE Access 2019, 7, 47423–47436. [Google Scholar] [CrossRef]
  47. Wang, H.; Wang, W.; Zhou, X.; Sun, H.; Zhao, J.; Yu, X.; Cui, Z. Firefly algorithm with neighborhood attraction. Inf. Sci. 2017, 382–383, 374–387. [Google Scholar]
  48. Khan, I.U.; Aslam, N.; Alshehri, R.; Alzahrani, S.; Alghamdi, M.; Almalki, A.; Balabeed, M. Cervical Cancer Diagnosis Model Using Extreme Gradient Boosting and Bioinspired Firefly Optimization. Sci. Program. 2021, 2021, 5540024. [Google Scholar] [CrossRef]
  49. Zivkovic, M.; Bacanin, N.; Antonijevic, M.; Nikolic, B.; Kvascev, G.; Marjanovic, M.; Savanovic, N. Hybrid CNN and XGBoost Model Tuned by Modified Arithmetic Optimization Algorithm for COVID-19 Early Diagnostics from X-ray Images. Electronics 2022, 11, 3798. [Google Scholar] [CrossRef]
  50. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 21 June–18 July 1965; Volume 1, pp. 281–297. [Google Scholar]
  51. Hansen, E.; Walster, G.W. Global Optimization Using Interval Analysis; Marcel Dekker Inc.: New York, NY, USA, 2004. [Google Scholar]
  52. Markót, M.; Fernández, J.; Casado, L.G.; Csendes, T. New interval methods for constrained global optimization. Math. Program. 2006, 106, 287–318. [Google Scholar] [CrossRef]
  53. Žilinskas, A.; Žilinskas, J. Interval Arithmetic Based Optimization in Nonlinear Regression. Informatica 2010, 21, 149–158. [Google Scholar] [CrossRef]
  54. Schnepper, C.A.; Stadtherr, M.A. Robust process simulation using interval methods. Comput. Chem. Eng. 1996, 20, 187–199. [Google Scholar] [CrossRef] [Green Version]
  55. Carreras, C.; Walker, I.D. Interval methods for fault-tree analysis in robotics. IEEE Trans. Reliab. 2001, 50, 3–11. [Google Scholar] [CrossRef]
  56. Serguieva, A.; Hunte, J. Fuzzy interval methods in investment risk appraisal. Fuzzy Sets Syst. 2004, 142, 443–466. [Google Scholar] [CrossRef]
  57. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization: An overview. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  58. Trelea, I.C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317–325. [Google Scholar] [CrossRef]
  59. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, pp. 1945–1950. [Google Scholar]
  60. Borowska, B. Exponential Inertia Weight in Particle Swarm Optimization. In Information Systems Architecture and Technology: Proceedings of 37th International Conference on Information Systems Architecture and Technology—ISAT 2016—Part IV; Wilimowska, Z., Borzemski, L., Grzech, A., Świątek, J., Eds.; Springer: Cham, Switzerland, 2017; Volume 524, p. 524. [Google Scholar]
  61. Zhang, L.; Yu, H.; Hu, S. A New Approach to Improve Particle Swarm Optimization. In Genetic and Evolutionary Computation—GECCO 2003; Springer: Berlin/Heidelberg, Germany, 2003; Volume 2723. [Google Scholar]
  62. Borowska, B. Dynamic Inertia Weight in Particle Swarm Optimization. In Advances in Intelligent Systems and Computing II. CSIT 2017; Shakhovska, N., Stepashko, V., Eds.; Springer: Cham, Switzerland, 2018; Volume 689. [Google Scholar]
  63. Shi, Y.; Eberhart, R.C. Fuzzy adaptive particle swarm optimization. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Republic of Korea, 27–30 May 2001; Volume 1, pp. 101–106. [Google Scholar]
  64. Kaelo, P.; Ali, M.M. Integrated crossover rules in real coded genetic algorithms. Eur. J. Oper. Res. 2007, 176, 60–76. [Google Scholar] [CrossRef]
  65. Tsoulos, I.G. Modifications of real code genetic algorithm for global optimization. Appl. Math. Comput. 2008, 203, 598–607. [Google Scholar] [CrossRef]
  66. Alcalá-Fdez, J.; Fernandez, A.; Luengo, J.; Derrac, J.; García, S.; Sánchez, L.; Herrera, F. KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis Framework. J. Mult. Valued Log. Soft Comput. 2011, 17, 255–287. [Google Scholar]
  67. Weiss, S.M.; Kulikowski, C.A. Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning and Expert Systems; Morgan Kaufmann Publishers Inc.: San Mateo, CA, USA, 1991. [Google Scholar]
  68. Quinlan, J.R. Simplifying Decision Trees. Int. -Man-Mach. Stud. 1987, 27, 221–234. [Google Scholar] [CrossRef] [Green Version]
  69. Shultz, T.; Mareschal, D.; Schmidt, W. Modeling Cognitive Development on Balance Scale Phenomena. Mach. Learn. 1994, 16, 59–88. [Google Scholar] [CrossRef] [Green Version]
  70. Zhou, Z.H.; Jiang, Y. NeC4.5: Neural ensemble based C4.5. IEEE Trans. Knowl. Data Eng. 2004, 16, 770–773. [Google Scholar] [CrossRef]
  71. Setiono, R.; Leow, W.K. FERNN: An Algorithm for Fast Extraction of Rules from Neural Networks. Appl. Intell. 2000, 12, 15–25. [Google Scholar] [CrossRef]
  72. Evans, B.; Fisher, D. Overcoming process delays with decision tree induction. IEEE Expert. 1994, 9, 60–66. [Google Scholar] [CrossRef]
  73. Demiroz, G.; Govenir, H.A.; Ilter, N. Learning Differential Diagnosis of Erythemato-Squamous Diseases using Voting Feature Intervals. Artif. Intell. Med. 1998, 13, 147–165. [Google Scholar]
  74. Hayes-Roth, B.; Hayes-Roth, B.F. Concept learning and the recognition and classification of exemplars. J. Verbal Learning Verbal Behav. 1977, 16, 321–338. [Google Scholar] [CrossRef]
  75. Kononenko, I.; Šimec, E.; Robnik-Šikonja, M. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55. [Google Scholar] [CrossRef]
  76. French, R.M.; Chater, N. Using noise to compute error surfaces in connectionist networks: A novel means of reducing catastrophic forgetting. Neural Comput. 2002, 14, 1755–1769. [Google Scholar] [CrossRef] [PubMed]
  77. Dy, J.G.; Brodley, C.E. Feature Selection for Unsupervised Learning. J. Mach. Learn. Res. 2004, 5, 845–889. [Google Scholar]
  78. Perantonis, S.J.; Virvilis, V. Input Feature Extraction for Multilayered Perceptrons Using Supervised Principal Component Analysis. Neural Process. Lett. 1999, 10, 243–252. [Google Scholar] [CrossRef]
  79. Garcke, J.; Griebel, M. Classification with sparse grids using simplicial basis functions. Intell. Data Anal. 2002, 6, 483–502. [Google Scholar] [CrossRef]
  80. Cestnik, G.; Konenenko, I.; Bratko, I. Assistant-86: A Knowledge-Elicitation Tool for Sophisticated Users. In Progress in Machine Learning; Bratko, I., Lavrac, N., Eds.; Sigma Press: Wilmslow, UK, 1987; pp. 31–45. [Google Scholar]
  81. Elter, M.R.; Schulz-Wendtland, T.W. The prediction of breast cancer biopsy outcomes using two CAD approaches that both emphasize an intelligible decision process. Med. Phys. 2007, 34, 4164–4172. [Google Scholar] [CrossRef]
  82. Little, M.A.; McSharry, P.E.; Hunter, E.J.; Spielman, J.; Ramig, L.O. Suitability of dysphonia measurements for telemonitoring of Parkinson’s disease. IEEE Trans. Biomed. Eng. 2009, 56, 1015. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Smith, J.W.; Everhart, J.E.; Dickson, W.C.; Knowler, W.C.; Johannes, R.S. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Symposium on Computer Applications and Medical Care IEEE Computer Society Press in Medical Care, Orlando, FL, USA, 7–11 November 1988; pp. 261–265. [Google Scholar]
  84. Lucas, D.D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y. Failure analysis of parameter-induced simulation crashes in climate models. Geosci. Model Dev. 2013, 6, 1157–1171. [Google Scholar] [CrossRef] [Green Version]
  85. Giannakeas, N.; Tsipouras, M.G.; Tzallas, A.T.; Kyriakidi, K.; Tsianou, Z.E.; Manousou, P.; Hall, A.; Karvounis, E.C.; Tsianos, V.; Tsianos, E. A clustering based method for collagen proportional area extraction in liver biopsy images. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New Orleans, LA, USA, 4–7 November 1988; pp. 3097–3100. [Google Scholar]
  86. Hastie, T.; Tibshirani, R. Non-parametric logistic and proportional odds regression. JRSS-C 1987, 36, 260–276. [Google Scholar] [CrossRef]
  87. Dash, M.; Liu, H.; Scheuermann, P.; Tan, K.L. Fast hierarchical clustering and its validation. Data Knowl. Eng. 2003, 44, 109–138. [Google Scholar] [CrossRef]
  88. Wolberg, W.H.; Mangasarian, O.L. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proc. Natl. Acad. Sci. USA 1990, 87, 9193–9196. [Google Scholar] [CrossRef] [Green Version]
  89. Raymer, M.; Doom, T.E.; Kuhn, L.A.; Punch, W.F. Knowledge discovery in medical and biological datasets using a hybrid Bayes classifier/evolutionary algorithm. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2003, 33, 802–813. [Google Scholar] [CrossRef]
  90. Zhong, P.; Fukushima, M. Regularized nonsmooth Newton method for multi-class support vector machines. Optim. Methods Softw. 2007, 22, 225–236. [Google Scholar] [CrossRef]
  91. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 1–8. [Google Scholar] [CrossRef] [Green Version]
  92. Koivisto, M.; Sood, K. Exact Bayesian Structure Discovery in Bayesian Networks. J. Mach. Learn. Res. 2004, 5, 549–573. [Google Scholar]
  93. Nash, W.J.; Sellers, T.L.; Talbot, S.R.; Cawthor, A.J.; Ford, W.B. The Population Biology of Abalone (Haliotis Species) in Tasmania. I. Blacklip Abalone (H. rubra) from the North Coast and Islands of Bass Strait, Sea Fisheries Division; Technical Report No. 48; Department of Primary Industry and Fisheries, Tasmania: Hobart, Australia, 1994. [Google Scholar]
  94. Brooks, T.F.; Pope, D.S.; Marcolini, A.M. Airfoil Self-Noise and Prediction; Technical Report, NASA RP-1218; National Aeronautics and Space Administration: Washington, DC, USA, 1989. [Google Scholar]
  95. Simonoff, J.S. Smooting Methods in Statistics; Springer: Berlin/Heidelberg, Germany, 1996. [Google Scholar]
  96. Cheng, Y.I. Modeling of strength of high performance concrete using artificial neural networks. Cem. Concr. Res. 1998, 28, 1797–1808. [Google Scholar]
  97. Harrison, D.; Rubinfeld, D.L. Hedonic prices and the demand for clean air. J. Environ. Econ. Manag. 1978, 5, 81–102. [Google Scholar] [CrossRef]
  98. Mackowiak, P.A.; Wasserman, S.S.; Levine, M.M. A critical appraisal of 98.6 degrees f, the upper limit of the normal body temperature, and other legacies of Carl Reinhold August Wunderlich. J. Amer. Med. Assoc. 1992, 268, 1578–1580. [Google Scholar] [CrossRef]
  99. King, R.D.; Muggleton, S.; Lewis, R.; Sternberg, M.J.E. Drug design by machine learning: The use of inductive logic programming to model the structure-activity relationships of trimethoprim analogues binding to dihydrofolate reductase. Proc. Nat. Acad. Sci. USA 1992, 89, 11322–11326. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Sikora, M.; Wrobel, L. Application of rule induction algorithms for analysis of data collected by seismic hazard monitoring systems in coal mines. Arch. Min. Sci. 2010, 55, 91–114. [Google Scholar]
  101. Sanderson, C.; Curtin, R. Armadillo: A template-based C++ library for linear algebra. J. Open Source Softw. 2016, 1, 26. [Google Scholar] [CrossRef]
  102. Dagum, L.; Menon, R. OpenMP: An industry standard API for shared-memory programming. IEEE Comput. Sci. Eng. 1998, 5, 46–55. [Google Scholar] [CrossRef] [Green Version]
  103. Riedmiller, M.; Braun, H. A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; pp. 586–591. [Google Scholar]
  104. Bishop, C. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  105. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  106. Klima, G. Fast Compressed Neural Networks. Available online: http://fcnn.sourceforge.net/ (accessed on 5 January 2023).
  107. Das, S.; Suganthan, P.N. Differential Evolution: A Survey of the State-of-the-Art. IEEE Trans. Evol. 2011, 15, 4–31. [Google Scholar] [CrossRef]
Figure 1. A typical plot of the Gaussian function, for c = 0 and σ = 1.
Figure 2. The scheme of the particles in the current PSO algorithm.
Figure 3. Graphical representation of the proposed two-phase method.
Figure 4. Graphical comparison of all methods for the classification datasets.
Figure 5. Graphical comparison of the methods for the regression datasets.
Table 1. The used values for the experimental parameters. The first column denotes the name of the parameter and the second the used value.
Parameter | Value
N_c | 200
N_g | 100
N_s | 50
c_1 | 1.0
c_2 | 1.0
F | 5.0
B | 100.0
k | 10
p_s | 0.90
p_m | 0.05
Table 2. Experimental results for the classification datasets. The first column is the name of the used dataset.
Dataset | NN-RPROP | NN-GENETIC | RBF-KMEANS | IRBF-100 | IRBF-1000
Appendicitis | 16.30% | 18.10% | 12.23% | 16.47% | 14.03%
Australian | 36.12% | 32.21% | 34.89% | 23.61% | 22.39%
Balance | 8.81% | 8.97% | 33.42% | 12.65% | 13.15%
Bands | 36.32% | 35.75% | 37.22% | 37.38% | 36.29%
Cleveland | 61.41% | 51.60% | 67.10% | 49.77% | 49.64%
Dermatology | 15.12% | 30.58% | 62.34% | 38.24% | 35.64%
Hayes Roth | 37.46% | 56.18% | 64.36% | 33.62% | 34.13%
Heart | 30.51% | 28.34% | 31.20% | 15.91% | 15.60%
HouseVotes | 6.04% | 6.62% | 6.13% | 4.77% | 3.90%
Ionosphere | 13.65% | 15.14% | 16.22% | 8.64% | 7.52%
Liverdisorder | 40.26% | 31.11% | 30.84% | 27.36% | 25.63%
Lymography | 24.67% | 23.26% | 25.31% | 19.12% | 20.02%
Mammographic | 18.46% | 19.88% | 21.38% | 17.17% | 17.30%
Parkinsons | 22.28% | 18.05% | 17.41% | 15.51% | 13.59%
Pima | 34.27% | 32.19% | 25.78% | 23.61% | 23.23%
Popfailures | 4.81% | 5.94% | 7.04% | 5.21% | 5.10%
Regions2 | 27.53% | 29.39% | 38.29% | 26.08% | 25.77%
Saheart | 34.90% | 34.86% | 32.19% | 27.94% | 28.91%
Segment | 52.14% | 57.72% | 59.68% | 47.19% | 40.28%
Spiral | 46.59% | 44.50% | 44.87% | 19.43% | 19.56%
Wdbc | 21.57% | 8.56% | 7.27% | 5.33% | 5.44%
Wine | 30.73% | 19.20% | 31.41% | 9.20% | 6.84%
Z_F_S | 29.28% | 10.73% | 13.16% | 4.19% | 4.18%
ZO_NF_S | 6.43% | 8.41% | 9.02% | 4.31% | 4.35%
ZONF_S | 27.27% | 2.60% | 4.03% | 2.23% | 2.08%
ZOO | 15.47% | 16.67% | 21.93% | 10.13% | 11.13%
AVERAGE | 26.86% (3) | 24.87% (1) | 29.03% (1) | 19.43% (8) | 18.68% (13)
Table 3. Experimental results for the regression datasets. The first column is the name of the used regression dataset.
DATASET | NN-RPROP | NN-GENETIC | RBF-KMEANS | IRBF-100 | IRBF-1000
ABALONE | 4.55 | 7.17 | 7.37 | 5.57 | 5.32
AIRFOIL | 0.002 | 0.003 | 0.27 | 0.004 | 0.003
BASEBALL | 92.05 | 103.60 | 93.02 | 78.89 | 85.58
BK | 1.60 | 0.03 | 0.02 | 0.04 | 0.03
BL | 4.38 | 5.74 | 0.013 | 0.0003 | 0.0003
CONCRETE | 0.009 | 0.009 | 0.011 | 0.007 | 0.007
DEE | 0.608 | 1.013 | 0.17 | 0.16 | 0.16
DIABETES | 1.11 | 19.86 | 0.49 | 0.78 | 0.89
HOUSING | 74.38 | 43.26 | 57.68 | 20.27 | 21.54
FA | 0.14 | 1.95 | 0.015 | 0.032 | 0.029
MB | 0.55 | 3.39 | 2.16 | 0.12 | 0.09
MORTGAGE | 9.19 | 2.41 | 1.45 | 0.39 | 0.78
NT | 0.04 | 0.006 | 8.14 | 0.007 | 0.007
PY | 0.039 | 1.41 | 0.012 | 0.024 | 0.014
QUAKE | 0.041 | 0.040 | 0.07 | 0.04 | 0.03
TREASURY | 10.88 | 2.93 | 2.02 | 0.33 | 0.51
WANKARA | 0.0003 | 0.012 | 0.001 | 0.002 | 0.002
AVERAGE | 11.71 (1) | 11.34 (1) | 10.17 (5) | 6.27 (7) | 6.76 (3)
Table 4. Experimental results with the proposed method and using different values for the parameter F on the classification datasets.
DATASET | F = 3 | F = 5 | F = 10
Appendicitis | 14.43% | 14.03% | 14.47%
Australian | 23.45% | 22.39% | 23.21%
Balance | 13.35% | 13.15% | 11.79%
Bands | 36.48% | 36.29% | 36.76%
Cleveland | 49.26% | 49.64% | 49.02%
Dermatology | 36.54% | 35.64% | 34.37%
Hayes Roth | 39.28% | 34.13% | 36.46%
Heart | 15.14% | 15.60% | 14.89%
HouseVotes | 4.93% | 3.90% | 6.41%
Ionosphere | 7.56% | 7.52% | 9.05%
Liverdisorder | 28.37% | 25.63% | 28.97%
Lymography | 20.12% | 20.02% | 21.05%
Mammographic | 18.04% | 17.30% | 18.21%
Parkinsons | 18.51% | 13.59% | 13.49%
Pima | 23.69% | 23.23% | 23.52%
Popfailures | 5.76% | 5.10% | 4.50%
Regions2 | 25.79% | 25.77% | 25.32%
Saheart | 28.89% | 28.91% | 26.99%
Segment | 36.53% | 40.28% | 43.28%
Spiral | 16.78% | 19.56% | 22.18%
Wdbc | 4.64% | 5.44% | 5.10%
Wine | 8.31% | 6.84% | 8.27%
Z_F_S | 4.32% | 4.18% | 4.03%
ZO_NF_S | 3.70% | 4.35% | 3.72%
ZONF_S | 2.04% | 2.08% | 1.98%
ZOO | 11.87% | 11.13% | 9.97%
AVERAGE | 18.65% | 18.68% | 19.12%
Table 5. Experimental results with the proposed method using different values for the parameter F on the regression datasets.
DATASET | F = 3 | F = 5 | F = 10
ABALONE | 5.56 | 5.32 | 5.41
AIRFOIL | 0.004 | 0.003 | 0.004
BASEBALL | 88.40 | 85.58 | 84.43
BK | 0.03 | 0.03 | 0.02
BL | 0.0005 | 0.0003 | 0.0002
CONCRETE | 0.009 | 0.007 | 0.007
DEE | 0.18 | 0.16 | 0.16
DIABETES | 0.67 | 0.89 | 0.77
HOUSING | 20.03 | 21.54 | 20.84
FA | 0.03 | 0.029 | 0.036
MB | 0.19 | 0.09 | 0.26
MORTGAGE | 0.89 | 0.78 | 0.03
NT | 0.006 | 0.007 | 0.007
PY | 0.027 | 0.014 | 0.018
QUAKE | 0.04 | 0.03 | 0.04
TREASURY | 0.77 | 0.51 | 0.17
WANKARA | 0.002 | 0.002 | 0.002
AVERAGE | 6.87 | 6.76 | 6.60
Table 6. Precision, recall, and F-score for a series of classification datasets, for the classic training method (RBF-KMEANS) and the proposed method (IRBF-100).
DATASET | RBF-KMEANS PRECISION | RBF-KMEANS RECALL | RBF-KMEANS F-SCORE | IRBF-100 PRECISION | IRBF-100 RECALL | IRBF-100 F-SCORE
APPENDICITIS | 0.80 | 0.77 | 0.76 | 0.79 | 0.74 | 0.78
AUSTRALIAN | 0.67 | 0.61 | 0.58 | 0.79 | 0.76 | 0.76
BALANCE | 0.74 | 0.76 | 0.64 | 0.75 | 0.78 | 0.76
BANDS | 0.52 | 0.51 | 0.48 | 0.58 | 0.57 | 0.56
HEART | 0.68 | 0.69 | 0.67 | 0.86 | 0.85 | 0.85
IONOSPHERE | 0.84 | 0.81 | 0.81 | 0.92 | 0.89 | 0.90
LIVERDISORDER | 0.65 | 0.64 | 0.64 | 0.72 | 0.71 | 0.71
MAMMOGRAPHIC | 0.81 | 0.81 | 0.81 | 0.83 | 0.83 | 0.82
PARKINSONS | 0.76 | 0.68 | 0.69 | 0.85 | 0.80 | 0.81
PIMA | 0.72 | 0.67 | 0.68 | 0.75 | 0.70 | 0.71
SAHEART | 0.65 | 0.61 | 0.61 | 0.70 | 0.66 | 0.67
SEGMENT | 0.43 | 0.39 | 0.39 | 0.58 | 0.53 | 0.53
SPIRAL | 0.56 | 0.56 | 0.55 | 0.70 | 0.70 | 0.70
WDBC | 0.93 | 0.91 | 0.92 | 0.96 | 0.94 | 0.95
WINE | 0.74 | 0.65 | 0.66 | 0.93 | 0.93 | 0.92
Z_F_S | 0.85 | 0.84 | 0.83 | 0.96 | 0.97 | 0.96
ZO_NF_S | 0.90 | 0.90 | 0.90 | 0.95 | 0.95 | 0.95
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
