Article

Optimal Centers’ Allocation in Smoothing or Interpolating with Radial Basis Functions

by Pedro González-Rodelas 1,*,†, Hasan M. H. Idais 2,†, Mohammed Yasin 3,† and Miguel Pasadas 1,†
1 Departamento Matemática Aplicada, Universidad de Granada, 18071 Granada, Spain
2 Department of Mathematics and Statistics, Arab American University, Jenin P240, Palestine
3 Department of Mathematics, An-Najah National University, Nablus P400, Palestine
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(1), 59; https://doi.org/10.3390/math10010059
Submission received: 20 November 2021 / Revised: 10 December 2021 / Accepted: 15 December 2021 / Published: 24 December 2021
(This article belongs to the Special Issue Spline Functions and Applications)

Abstract: Function interpolation and approximation are classical problems of vital importance in many science and engineering areas and communities. In this paper, we propose a powerful methodology for the optimal placement of centers when approximating or interpolating a curve or surface to a data set using a basis of radial-type functions. In fact, we choose a radial basis function under tension (RBFT), depending on a positive parameter, that also provides a convenient way to control the behavior of the corresponding interpolation or approximation method. We therefore propose a new technique, based on multi-objective genetic algorithms, to optimize both the number of centers of the radial basis and their placement. To achieve this goal, we use a methodology based on an appropriate modification of a non-dominated sorting genetic algorithm (of NSGA-II type). In our approach, the additional goal of keeping the number of centers as small as possible was also taken into consideration. The good behavior and efficiency of the presented algorithm were tested using different experimental results, at least for functions of one independent variable.

1. Introduction

In the last decades, radial basis function (RBF) methods have developed in a way that has contributed to their great spread and emergence, mainly because of their simplicity and their widely praised ability to solve multivariate scattered data approximation and interpolation problems (see, for example, [1,2,3] and the many other references included therein). The main advantages of these methods are that they do not need a mesh or triangulation, that they are simple to implement and dimension independent, and that no staircasing or polygonization of boundaries is required [4,5].
Many authors have presented methodologies for approximating or fitting a surface to data (see, for example, [6,7]), such as the methodology based on the Lagrange polynomial interpolation theorem [8,9], and another important approach based on orthogonal polynomials [10].
Also, some seminal works on dynamic node adaptive strategies, and on knot selection and placement for sparse reduced data, were published in [11,12].
In [13], methodologies for single objective problems (SOPs) were presented for the selection and optimization of centers, mainly based on techniques for selecting center points through non-dominated sorting, where promising points are chosen as centers. Also, some successful hierarchical genetic algorithm approaches for curve fitting with B-splines were applied in [14].
On the other hand, the authors of [15] developed techniques to perform center selection and estimation simultaneously through a componentwise boosting algorithm. Moreover, in [16], the centers of the RBFs are selected among the vertices of the Voronoi diagram of the sample data points. The methodology of [17,18] was to optimize the shape parameter of radial basis functions for the solution of ordinary differential equations. The authors of [19] used an approach based on genetic algorithms to optimize the centers of radial basis function neural networks (RBFNNs); however, they used one type of neural network in which the basis function is of Gaussian type. Research in neural networks has also attracted many computer scientists and mathematicians over the last years, and it covers a broad spectrum of different subjects and real-world applications, such as image processing, pattern recognition and optimization in general (the reader can also consult the upcoming book by Grienggrai Rajchakit, Praveen Agarwal and Sriraman Ramalingam [20] for more information about the multiple applications and stability properties of these types of neural networks, as well as many other mathematical aspects of them).
In this paper, a new methodology for the optimal placement of random centers, for approximating or fitting a curve to data using radial basis functions, is developed. In fact, we choose a radial basis function under tension (RBFT) that depends on a positive parameter, which also provides a convenient way to control the behavior of the corresponding interpolation or approximation method. A new technique is presented to optimize both the number of radial basis centers and their placement, using multi-objective genetic algorithms. More precisely, we use a new approach based on an appropriate modification of a goal-based non-dominated sorting genetic algorithm (of NSGA-II type), and we test the good behavior and efficiency of the algorithm on different experimental results, at least for functions of one independent variable. However, the procedure and theory can be developed in the same way for functions of any number of variables, so we present them in this more general setting.
The key points concerning this goal-based NSGA-II version, compared with the original NSGA, are the following (see for example [21,22]):
  • A fast, non-dominated sorting mechanism, grouping the entire population into a hierarchy of sub-populations, based on the Pareto dominance ordering.
  • Inside each sub-group, the similarity between members is evaluated on the Pareto front; the resulting groups and similarity measures are then used to promote a diverse front of non-dominated solutions.
  • For this purpose, an adequate crowding distance provides an appropriate mechanism for ranking the members of a front that dominate, or are dominated by, others.
  • The appropriate consideration of a certain elitist mechanism, in order to improve the convergence properties of the algorithm.
  • A niching operator without parameters is adopted, to maintain a certain level of diversity among the possible solutions.
  • One or several decision maker's (DM) preferences are provided within the evolutionary process, in order to focus on a particular region of the Pareto front, where the most interesting features are present.
  • In this way, a so-called region of interest (ROI) appears: the set of non-dominated solutions that are preferred by the DM.
  • The DM can provide his/her preferences before (a priori), after (a posteriori), or during (interactively) the MOEA run.
  • Therefore, a trade-off between objectives specifies how much gain of a unit on one target is worth the degradation on some others, and vice versa.
  • In our case, keeping the number of centers as small as possible while controlling some specific standard errors is our main goal.
Then, these ranking mechanisms are used with the corresponding genetic selection operators (usually of tournament type) to create the population of possible solutions for the next generations.
Regarding implementation issues, we developed a MATLAB code through a suitable adaptation of some previous subroutines for the NSGA-II algorithm by Mostapha Kalami Heris [23] (see also [24]); we also implemented the corresponding approximation and interpolation procedures, using the specified RBFs, in accordance with [25].

2. Proposed Methodology

Let $\Omega$ be an open, bounded, connected, nonempty subset of $\mathbb{R}^d$ (usually $d = 1, 2, 3$), with Lipschitz-continuous boundary. We also use the classical notation $H^k(\Omega)$ to denote the usual Sobolev space of all distributions $u$ whose partial derivatives, up to order $k$, are in the classical Lebesgue space $L^2(\Omega)$.
Let $m > 1$ be a positive integer (usually small), let $\Pi_{m-1}(\mathbb{R}^d)$ denote the space of polynomials on $\mathbb{R}^d$ of total degree at most $m - 1$, whose dimension is denoted by $d(m)$, and let $q_1, \ldots, q_{d(m)}$ be the standard basis of $\Pi_{m-1}(\mathbb{R}^d)$. Let us give an arbitrary finite set $b_1, \ldots, b_M \in \Omega \subset \mathbb{R}^d$ of distinct approximation points and a set of real values $U = \{f_1, \ldots, f_M\}$, which usually comes from the evaluation of a certain function $f : \Omega \to \mathbb{R}$. We also use a set of centers $c_1, \ldots, c_N \in \mathbb{R}^d$ and, for each of them, the translation $\Phi(\cdot - c_i)$, $i = 1, \ldots, N$, of an appropriate radial function, as in (2).
The main goal is to approximate, in the best possible way, the points
$$(b_1, f_1), \ldots, (b_M, f_M) \in \mathbb{R}^{d+1},$$
by means of an adequate linear combination of radial basis functions (RBFs).

2.1. Spaces of Radial Functions

We now define the radial basis functions that are used in the sequel. We consider the following function (for $\varepsilon \in \mathbb{R}^+$, $t \ge 0$):
$$\phi_\varepsilon(t) = \frac{1}{2\varepsilon^3}\left( e^{-\varepsilon t} + \varepsilon t \right), \qquad (1)$$
which is the fundamental solution, in the distributional sense, of the linear fourth-order differential equation
$$\frac{d^4 \phi_\varepsilon}{dt^4} - \varepsilon^2\, \frac{d^2 \phi_\varepsilon}{dt^2} = \delta,$$
where $\delta$ is the Dirac measure at the origin. The real parameter $\varepsilon$ is called a tension parameter.
In our case, we use radial basis functions that are the translations of the radial function (now, for $x \in \mathbb{R}^d$)
$$\Phi_\varepsilon(x) = \phi_\varepsilon\left( \langle x \rangle_d \right) = \frac{1}{2\varepsilon^3}\left( e^{-\varepsilon \langle x \rangle_d} + \varepsilon \langle x \rangle_d \right), \qquad (2)$$
where $\langle \cdot \rangle_d$ denotes the Euclidean norm on $\mathbb{R}^d$.
This type of radial basis function under tension (RBFT) was introduced by A. Bouhamidi and A. Le Mehauté [26] in 2004. It depends on a positive parameter that provides a convenient way of controlling the behavior of the corresponding interpolation or approximation method. Moreover, error estimates of the corresponding interpolation method [26] are given in terms of Sobolev semi-norms, which also help in proving the convergence of the associated procedure, while providing attractive geometrical tension effects.
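To make the kernel concrete, the following minimal Python sketch (ours, not part of the authors' MATLAB implementation) evaluates $\phi_\varepsilon$ and the translated basis $\Phi_\varepsilon(\cdot - c_i)$ of Equation (2) in one dimension; the grid, the centers and the value $\varepsilon = 0.1$ are just illustrative choices.

```python
import numpy as np

def rbft_kernel(r, eps):
    """RBFT kernel of Equation (2): (exp(-eps*r) + eps*r) / (2*eps**3),
    evaluated at the Euclidean distance r = ||x - c||."""
    r = np.asarray(r, dtype=float)
    return (np.exp(-eps * r) + eps * r) / (2.0 * eps**3)

# Illustrative evaluation of the translated basis Phi_eps(x - c_i) on a 1D grid
eps = 0.1                                  # tension parameter (value used in the paper)
centers = np.array([0.2, 0.5, 0.8])        # hypothetical centers
x = np.linspace(0.0, 1.0, 201)
Phi = rbft_kernel(np.abs(x[:, None] - centers[None, :]), eps)   # (201, 3) basis matrix
```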

2.2. Smoothing Radial Basis Approximations

Therefore, the idea is to use a finite-dimensional space $H^N$, with a basis of the type
$$H^N = \mathrm{span}\left\{ \Phi_\varepsilon(\cdot - c_1), \ldots, \Phi_\varepsilon(\cdot - c_N), q_1(\cdot), \ldots, q_{d(m)}(\cdot) \right\}.$$
In this case, the problem to be solved can be formulated as follows: given the approximation data set $\{(b_i, f_i) : i = 1, \ldots, M\}$ of real values, we want to obtain $\sigma_N \in H^N$ such that
$$\sigma_N(b_i) \approx f_i, \quad i = 1, \ldots, M,$$
by means of the following minimization problem.
Find $\sigma_N \in H^N$ such that
$$J(\sigma_N) \le J(\upsilon), \quad \forall\, \upsilon \in H^N, \qquad (3)$$
where $\tau \in (0, +\infty)$ and
$$J(\upsilon) = \sum_{i=1}^{M} \left( f_i - \upsilon(b_i) \right)^2 + \tau\, |\upsilon|_2^2,$$
where
$$|\upsilon|_2^2 = \sum_{|\alpha| = 2} \int_\Omega \left( \partial^\alpha \upsilon(x) \right)^2 dx,$$
with
$$|\alpha| = \alpha_1 + \cdots + \alpha_d, \quad \alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}^d, \quad \partial^\alpha \upsilon(x_1, \ldots, x_d) = \frac{\partial^{|\alpha|} \upsilon}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}.$$
Thus,
$$\sigma_N(x) = \sum_{l=1}^{N + d(m)} \beta_l\, w_l(x),$$
where
$$w_l(\cdot) = \begin{cases} \Phi_\varepsilon(\cdot - c_l), & l = 1, \ldots, N, \\ q_{l-N}(\cdot), & l = N + 1, \ldots, N + d(m), \end{cases}$$
and $\beta_1, \ldots, \beta_{N+d(m)} \in \mathbb{R}$ are the control coefficients (i.e., the problem's unknowns), obtained as the solution of the variational problem formulated in the following theorem (see, for example, [27,28]).
Theorem 1.
The minimization problem (3) has a unique solution, which is also the only solution of the following variational problem:
Find $\sigma_N \in H^N$ such that, for all $\upsilon \in H^N$,
$$\sum_{i=1}^{M} \sigma_N(b_i)\, \upsilon(b_i) + \tau\, (\sigma_N, \upsilon)_2 = \sum_{i=1}^{M} \upsilon(b_i)\, f_i, \qquad (4)$$
where
$$(u, \upsilon)_2 = \sum_{|\alpha| = 2} \int_\Omega \partial^\alpha u(x)\, \partial^\alpha \upsilon(x)\, dx.$$
Proof. 
The expression
$$[[\upsilon]] := \left( \sum_{i=1}^{M} \upsilon(b_i)^2 + \tau\, (\upsilon, \upsilon)_2 \right)^{1/2}$$
constitutes a norm on $H^N$ equivalent to its usual norm. Then, as a consequence, the continuous symmetric bilinear form
$$a : H^N \times H^N \to \mathbb{R},$$
defined by
$$a(u, \upsilon) = \sum_{i=1}^{M} u(b_i)\, \upsilon(b_i) + \tau\, (u, \upsilon)_2,$$
is $H^N$-elliptic. Besides, the functional $\psi : H^N \to \mathbb{R}$ given by
$$\psi(\upsilon) := \sum_{i=1}^{M} f_i\, \upsilon(b_i)$$
is clearly a continuous linear form on this space.
Then, applying the Lax–Milgram lemma (see [29], for example), there exists a unique function $\sigma_N \in H^N$ such that
$$a(\sigma_N, \upsilon) = \psi(\upsilon), \quad \forall\, \upsilon \in H^N;$$
that is, (4) holds. Moreover, $\sigma_N$ is the unique function of $H^N$ that minimizes the functional
$$\tilde{J}(\upsilon) = \frac{1}{2}\, a(\upsilon, \upsilon) - \psi(\upsilon);$$
since $J(\upsilon) = 2\tilde{J}(\upsilon) + \sum_{i=1}^{M} f_i^2$, we conclude that $\sigma_N$ also minimizes the functional $J$. □
By linearity, we can reduce problem (4) to the following linear system:
$$\left( A\, A^{\top} + \tau R \right) \beta = A\, f,$$
where
$$A = \left( w_i(b_j) \right)_{\substack{i = 1, \ldots, N + d(m) \\ j = 1, \ldots, M}}, \qquad R = \left( \sum_{|\alpha| = 2} \int_\Omega \partial^\alpha w_i\, \partial^\alpha w_j\, dx \right)_{i, j = 1, \ldots, N + d(m)},$$
$$\beta = (\beta_i)_{i = 1, \ldots, N + d(m)} \in \mathbb{R}^{N + d(m)}, \qquad f \equiv (f_i)_{i = 1, \ldots, M}.$$
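As an illustration of how this system can be assembled, the following Python sketch solves the smoothing problem for $d = 1$ and $m = 2$ (polynomial part $\{1, x\}$); it is our sketch, not the authors' MATLAB code. It uses the fact that, in one dimension, the second derivative of $\phi_\varepsilon(|x - c|)$ is $e^{-\varepsilon |x - c|} / (2\varepsilon)$, approximates $R$ by trapezoidal quadrature on $\Omega = [\min_i b_i, \max_i b_i]$ (an illustrative assumption), and assumes $M \ge N + d(m)$ distinct data points.

```python
import numpy as np

def rbft(r, eps):
    return (np.exp(-eps * r) + eps * r) / (2.0 * eps**3)   # Equation (2)

def rbft_d2(r, eps):
    return np.exp(-eps * r) / (2.0 * eps)                  # 2nd derivative in 1D

def fit_smoothing_rbft(b, f, centers, eps=0.1, tau=1.0, n_quad=2001):
    """Solve (A A^T + tau R) beta = A f for d = 1, m = 2 (polynomials {1, x})."""
    b, f, c = (np.asarray(v, dtype=float) for v in (b, f, centers))
    N, M = c.size, b.size
    # A: rows = N RBFT basis functions + 2 polynomials, columns = M data points
    A = np.vstack([rbft(np.abs(b[None, :] - c[:, None]), eps),
                   np.ones((1, M)), b[None, :]])
    # R: energy matrix of second derivatives, by trapezoidal quadrature
    xq = np.linspace(b.min(), b.max(), n_quad)
    h = xq[1] - xq[0]
    wq = np.full(n_quad, h); wq[0] = wq[-1] = h / 2.0
    W2 = np.vstack([rbft_d2(np.abs(xq[None, :] - c[:, None]), eps),
                    np.zeros((2, n_quad))])   # polynomials: zero 2nd derivative
    R = (W2 * wq) @ W2.T
    beta = np.linalg.solve(A @ A.T + tau * R, A @ f)

    def sigma(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        B = np.vstack([rbft(np.abs(x[None, :] - c[:, None]), eps),
                       np.ones((1, x.size)), x[None, :]])
        return beta @ B
    return sigma
```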
Remark 1.
When $M = N$ and $b_i \equiv c_i$, for $i = 1, \ldots, N$, we could also consider an interpolation problem at the centers, where the interpolating function $\hat{\sigma}_N$ explicitly has the following form:
$$\hat{\sigma}_N(x) = \sum_{i=1}^{N} \lambda_i\, \Phi_\varepsilon(x - c_i) + \sum_{j=1}^{d(m)} \beta_j\, q_j(x), \quad x \in \mathbb{R}^d,$$
where the coefficients $\lambda = (\lambda_1, \ldots, \lambda_N)$ and $\beta = (\beta_1, \ldots, \beta_{d(m)})$ are now the solution of the following linear system:
$$\begin{pmatrix} A_\varepsilon & M \\ M^{\top} & O \end{pmatrix} \begin{pmatrix} \lambda \\ \beta \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix},$$
where $A_\varepsilon = \left( \Phi_\varepsilon(c_i - c_j) \right)_{1 \le i, j \le N}$ is an $N \times N$ matrix, $M = \left( q_j(c_i) \right)_{1 \le i \le N,\, 1 \le j \le d(m)}$ is an $N \times d(m)$ matrix, $M^{\top}$ denotes the transpose of $M$, $O$ is the $d(m) \times d(m)$ zero matrix, $f \equiv (f_1, \ldots, f_N) \in \mathbb{R}^N$ and $0$ is the zero vector of $\mathbb{R}^{d(m)}$.
Take into account that we are considering the interpolation conditions
$$\hat{\sigma}_N(c_i) = f_i, \quad i = 1, \ldots, N,$$
together with the constraints
$$\sum_{i=1}^{N} \lambda_i\, q_j(c_i) = 0, \quad j = 1, \ldots, d(m)$$
(the usual orthogonality conditions with respect to the space of polynomials $\Pi_{m-1}(\mathbb{R}^d)$, imposed when the radial kernel function is not strictly positive definite, to ensure the $\Pi_{m-1}(\mathbb{R}^d)$-unisolvency of the centers; see, for example, [1]).
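Analogously, a small Python sketch (again for $d = 1$, $m = 2$, with polynomial basis $\{1, x\}$; an illustrative assumption, not the authors' implementation) assembles and solves the block system of Remark 1:

```python
import numpy as np

def interpolate_rbft(centers, f, eps=0.1):
    """Solve the block system of Remark 1 for d = 1, m = 2 (at least two
    distinct centers are needed for {1, x}-unisolvency)."""
    c, f = np.asarray(centers, float), np.asarray(f, float)
    N, dm = c.size, 2
    r = np.abs(c[:, None] - c[None, :])
    A_eps = (np.exp(-eps * r) + eps * r) / (2.0 * eps**3)   # Phi_eps(c_i - c_j)
    Mmat = np.column_stack([np.ones(N), c])                 # q_j(c_i) for {1, x}
    K = np.block([[A_eps, Mmat], [Mmat.T, np.zeros((dm, dm))]])
    sol = np.linalg.solve(K, np.concatenate([f, np.zeros(dm)]))
    lam, beta = sol[:N], sol[N:]

    def sigma_hat(x):
        x = np.atleast_1d(np.asarray(x, float))
        rx = np.abs(x[:, None] - c[None, :])
        Phi = (np.exp(-eps * rx) + eps * rx) / (2.0 * eps**3)
        return Phi @ lam + beta[0] + beta[1] * x
    return sigma_hat
```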

3. Methodology Overview

At this stage, it is important to introduce some ideas about multi-objective optimization problems and to describe some fundamental issues of NSGA-II-type algorithms (see [22]). The non-dominated sorting genetic algorithm is a multiple-objective optimization algorithm whose goal is to improve the adaptive fit of a population of candidate solutions to a Pareto front constrained by a set of objective functions $G \equiv (g_1, \ldots, g_l, \ldots, g_L)$, each one defined on an appropriate domain.
Amongst the first algorithms to explicitly bias the search towards the discovery of non-dominated solutions, we could mention, for example, Fonseca and Fleming's MOGA [30]; it uses fitness sharing amongst solutions of the same rank, coupled with proportional adaptive selection, to help promote diversity.
Srinivas and Deb's [31] original non-dominated sorting genetic algorithm (NSGA) works in a similar way, but assigns fitness by dividing the population into a certain number of fronts of equal domination. To achieve this, the algorithm iteratively seeks all the non-dominated points of the population that have not yet been labeled as belonging to the current front, and then increments the front counter, repeating until all solutions have been labeled. Each point in a given front thus obtains, as its raw score, the count of all solutions in inferior fronts. Later on, Deb and his coworkers [22] proposed the revised NSGA-II algorithm, which still uses the idea of non-dominated fronts, but also incorporates the following important changes:
  • A crowding distance metric is defined for each point as the average side length of the cuboid defined by its nearest neighbors in the same front. The larger the value, the fewer solutions reside in the vicinity of the point.
  • An appropriate survivor selection strategy, where the new population is obtained by accepting the individuals from progressively inferior fronts until it is full, so that, if not all the individuals in the last front considered can be accepted, they are chosen on the basis of their crowding distance.
  • Parent selection uses a modified tournament operator that considers, first, the dominance rank and, then, the crowding distance as the second criterion.
We can also see that, for the appropriate use of any NSGA-II procedure, it is necessary to address the following fundamental issues:
(1) Population initialization: the size of the population depends on the problem range.
(2) Non-dominated sort. Before describing this stage, we define the following:
  • p and P, an individual and the main population, respectively.
  • $S_p$, the set of all individuals dominated by p.
  • $n_p$, the domination count of p, i.e., the number of individuals that dominate p.
  • $p_{rank}$, the rank of an individual p, depending on the front assigned to it.
  • Q, the set used to temporarily store the individuals of the $(i+1)$-th front.
(We recall that, in this context of multi-objective optimization, we say that p dominates q, or, equivalently, that q is dominated by p, if and only if p is no worse than q in all the objective functions and strictly better in at least one of them.)
We describe the non-dominated sorting procedure as follows (a runnable sketch is given after the pseudocode):
For each individual $p \in P$:
  initialize $S_p = \emptyset$ and $n_p = 0$;
  for each $q \in P \setminus \{p\}$:
    if p dominates q, then $S_p = S_p \cup \{q\}$;
    else, if q dominates p, then $n_p = n_p + 1$;
  if $n_p = 0$, then $F_1 = F_1 \cup \{p\}$ and $p_{rank} = 1$.
Begin building the new fronts, starting from the first one (with $i = 1$).
Next, for every front $F_i$, while $F_i \neq \emptyset$:
  $Q = \emptyset$;
  for each $p \in F_i$ and each $q \in S_p$:
    $n_q = n_q - 1$;
    if $n_q = 0$, then $Q = Q \cup \{q\}$ and $q_{rank} = i + 1$;
  $i = i + 1$ (augment the front counter by one);
  at this stage, the new front is $F_i = Q$.
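A minimal Python sketch of this fast non-dominated sorting (minimization of all objectives is assumed; the function names are ours, not from the authors' MATLAB code):

```python
import numpy as np

def dominates(p, q):
    """p dominates q iff p is no worse in every objective and strictly
    better in at least one (minimization assumed)."""
    return np.all(p <= q) and np.any(p < q)

def fast_non_dominated_sort(obj):
    """obj: (n_individuals, n_objectives) array. Returns a list of fronts,
    each front being a list of indices into obj."""
    n = obj.shape[0]
    S = [[] for _ in range(n)]          # S[p]: individuals dominated by p
    n_dom = np.zeros(n, dtype=int)      # n_dom[p]: count of individuals dominating p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(obj[p], obj[q]):
                S[p].append(q)
            elif dominates(obj[q], obj[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:
            fronts[0].append(p)         # p_rank = 1
    i = 0
    while fronts[i]:
        Q = []
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    Q.append(q)         # q_rank = i + 2
        i += 1
        fronts.append(Q)
    return fronts[:-1]                  # drop the final empty front
```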
(3) Crowding distance: it is very important to be able to compare individuals belonging to the same front $F_i$, so each individual is assigned a crowding distance that estimates the density of solutions surrounding it within this front. The method for calculating the crowding distance is summarized below, followed by a short code sketch.
For every considered front $F_i$ and every objective function $g_l$, $l = 1, \ldots, L$, we proceed as follows. We first sort all the members of the front $F_i$ according to the $l$th objective function, $\mathrm{sort}(F_i, l) = \{p_{i,1}^{[l]}, \ldots, p_{i,N_i}^{[l]}\}$, with $N_i \equiv \mathrm{card}(F_i)$ (so that $g_l(p_{i,1}^{[l]}) \equiv g_l^{min}$ and $g_l(p_{i,N_i}^{[l]}) \equiv g_l^{max}$). We then use this arrangement to calculate the crowding distance of each individual $p_{i,j}^{[l]} \in F_i$, $j = 1, \ldots, N_i$, through the following steps.
First, initialize the distances of the members of the front:
$$d_{i,1}^{[l]} = d_{i,N_i}^{[l]} = \infty$$
(to ensure that these extreme points are carried over to the next generations), and
$$d_{i,j}^{[l]} \equiv d_{p_{i,j}^{[l]}}^{[l]}(F_i) = 0, \quad j = 2, \ldots, N_i - 1.$$
Then, for $k = 2, \ldots, N_i - 1$,
$$d_{i,k}^{[l]} = d_{i,k}^{[l]} + \frac{g_l\left( p_{i,k+1}^{[l]} \right) - g_l\left( p_{i,k-1}^{[l]} \right)}{g_l^{max} - g_l^{min}}.$$
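A minimal Python sketch of this crowding-distance computation (one front at a time; names and signatures are ours):

```python
import numpy as np

def crowding_distance(obj, front):
    """Crowding distance of the members of one front.
    obj: (n, L) array of objective values; front: list of indices in this front."""
    front = np.asarray(front)
    Ni, L = front.size, obj.shape[1]
    d = np.zeros(Ni)
    for l in range(L):
        order = np.argsort(obj[front, l])          # sort(F_i, l)
        g = obj[front[order], l]
        g_min, g_max = g[0], g[-1]
        d[order[0]] = d[order[-1]] = np.inf        # extreme points are kept
        if g_max > g_min:                          # guard against division by zero
            for k in range(1, Ni - 1):
                d[order[k]] += (g[k + 1] - g[k - 1]) / (g_max - g_min)
    return d
```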
(4) Selection: in order to select individuals, we use a crowded-comparison operator based on binary tournament selection (sketched in code below). The comparison depends on $F_i$ and $p_{rank}$; we say that $p \prec_c q$ if
$$p_{rank} < q_{rank}$$
or, if p and q belong to the same front $F_i$, when
$$d_p^{[l]}(F_i) > d_q^{[l]}(F_i).$$
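In code, the crowded-comparison tournament can be sketched as follows (illustrative helper names, assuming the rank and crowding-distance values have already been computed):

```python
import random

def crowded_less(p, q, rank, dist):
    """p <_c q: lower rank wins; ties are broken by larger crowding distance."""
    return rank[p] < rank[q] or (rank[p] == rank[q] and dist[p] > dist[q])

def binary_tournament(indices, rank, dist):
    """Draw two random candidates and return the crowded-comparison winner."""
    p, q = random.sample(list(indices), 2)
    return p if crowded_less(p, q, rank, dist) else q
```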
(5) Genetic operators: we now describe the two parts, namely the simulated binary crossover and the polynomial mutation, both performed componentwise.
Binary crossover: the following expressions represent the simulated binary crossover (for each component k) of two contestants of the tournament, $p_1$ and $p_2$:
$$c_{1,k} = \frac{1}{2}\left[ (1 + \beta_k)\, p_{1,k} + (1 - \beta_k)\, p_{2,k} \right],$$
$$c_{2,k} = \frac{1}{2}\left[ (1 - \beta_k)\, p_{1,k} + (1 + \beta_k)\, p_{2,k} \right],$$
with $\beta_k \ge 0$, where $c_{i,k}$ represents the kth component of the ith child. The spread factor $\beta$ of any random sample is distributed with density
$$p(\beta) = \begin{cases} \frac{1}{2}\,(\eta_c + 1)\, \beta^{\eta_c}, & \text{if } 0 \le \beta \le 1, \\ \frac{1}{2}\,(\eta_c + 1)\, \dfrac{1}{\beta^{\eta_c + 2}}, & \text{if } \beta > 1, \end{cases}$$
where $\eta_c$ is the distribution index of the crossover. Let u be a uniform random sample in $(0, 1)$; then,
$$\beta(u) = \begin{cases} (2u)^{\frac{1}{\eta_c + 1}}, & \text{if } u < 0.5, \\ \left[ \dfrac{1}{2(1 - u)} \right]^{\frac{1}{\eta_c + 1}}, & \text{otherwise.} \end{cases}$$
Polynomial mutation: first, we fix some notation:
$c_{i,k}$ is a child component;
$p_{i,k}$ is a parent component;
$r_k$ is a uniform random sample in $(0, 1)$;
$\eta_m$ is the mutation distribution index;
$[p_{i,k}^{lower}, p_{i,k}^{upper}]$ are the box constraints imposed on the kth component.
Then, the polynomial mutation is defined componentwise by
$$c_{i,k} = p_{i,k} + \left( p_{i,k}^{upper} - p_{i,k}^{lower} \right) \delta_k,$$
where
$$\delta_k = \begin{cases} (2 r_k)^{\frac{1}{\eta_m + 1}} - 1, & \text{if } r_k < 0.5, \\ 1 - \left[ 2 (1 - r_k) \right]^{\frac{1}{\eta_m + 1}}, & \text{if } r_k \ge 0.5. \end{cases}$$
A componentwise sketch of both operators follows.
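Both operators can be sketched in Python as follows; the distribution indices $\eta_c = \eta_m = 20$ are common illustrative defaults, not values prescribed by the paper, while the mutation rate 0.01 is taken from Table 1.

```python
import numpy as np

rng = np.random.default_rng()

def sbx_crossover(p1, p2, eta_c=20.0):
    """Simulated binary crossover, componentwise."""
    u = rng.random(p1.shape)
    beta = np.where(u < 0.5,
                    (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(p, lower, upper, eta_m=20.0, rate=0.01):
    """Polynomial mutation within box constraints [lower, upper]."""
    r = rng.random(p.shape)
    delta = np.where(r < 0.5,
                     (2.0 * r) ** (1.0 / (eta_m + 1.0)) - 1.0,
                     1.0 - (2.0 * (1.0 - r)) ** (1.0 / (eta_m + 1.0)))
    mutate = rng.random(p.shape) < rate            # per-component mutation mask
    child = p + mutate * (upper - lower) * delta
    return np.clip(child, lower, upper)
```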
(6) Recombination and selection: the offspring population is combined with the current generation population, and selection is performed to set the individuals of the next generation; elitism is ensured, because the best individuals are added sequentially to the population. The selection of parents for the next generation is also based on the crowding distance: individuals are drawn at random, their tournament scores are taken into account, and the best individuals of that set are chosen to be parents.
(7) Selection of an appropriate region of interest (ROI), according to the DM's preferences: in this case, the preference of the user is always to keep the number of centers as small as possible, provided that the degree of approximation remains within a preset range. In this manner, the proposed procedure attempts to find a set of solutions in the neighborhood of the corresponding Pareto optimal solutions, so that the decision maker can get a better idea of the desired region.
Different forms of fitness (or objective) functions can be used in an NSGA-II procedure, but the main goal here is to minimize some usual errors, such as $E_C$ and/or $E_L$, between the original function and the approximating or interpolating function constructed from each population of random centers. At this step, it is also important to consider two error estimations that are appropriate normalizations of discrete versions of the usual norms in $C(\Omega)$ and $L^2(\Omega)$, respectively; they are given by the following expressions:
$$E_C = \frac{\max_{i=1,\ldots,K} \left| f(a_i) - \sigma(a_i) \right|}{\max_{i=1,\ldots,K} \left| f(a_i) \right|},$$
$$E_L = \left( \frac{\sum_{i=1}^{K} \left( f(a_i) - \sigma(a_i) \right)^2}{\sum_{i=1}^{K} f(a_i)^2} \right)^{1/2},$$
where $f \in C^2(\Omega)$ is a given function, $\sigma \equiv \sigma_N$ is the approximating (or $\sigma \equiv \hat{\sigma}_N$ the interpolating) RBF function associated with the given data set, and $a_1, \ldots, a_K \in \Omega$ is another scattered random point test data set (TDS) where the errors are computed.
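Both error measures are straightforward to compute on a test data set; a minimal sketch (names are ours):

```python
import numpy as np

def error_metrics(f, sigma, a):
    """Normalized discrete sup-norm (E_C) and L2-norm (E_L) errors on the TDS `a`;
    `f` and `sigma` are callables, `a` is the array of K test points."""
    fa, sa = f(a), sigma(a)
    E_C = np.max(np.abs(fa - sa)) / np.max(np.abs(fa))
    E_L = np.sqrt(np.sum((fa - sa) ** 2) / np.sum(fa ** 2))
    return E_C, E_L
```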
However, before running any MOGA algorithm, some key parameters must always be defined. The most important ones are shown in Table 1 below.
Therefore, the most important goal of these methods will always be to improve the adaptive fit of a population of possible solutions to a Pareto front constrained by these often conflicting objectives. In this case, the objective functions minimize at least one (or several) normalized discrete versions of the approximation errors; the intention of keeping the number of centers as small as possible is also taken into account, so the procedure is allowed to delete some of the obtained centers when they become sufficiently close to each other and the same level of approximation can be retained.

4. Simulation Examples

The objective of this study is to analyze the performance of this procedure for different types of functions, optimizing the centers' placement for radial basis functions by means of the multi-objective genetic algorithms (MOGA) already used and explained in more detail in [32], where cubic spline basis functions were employed instead of RBFs. Different experiments were carried out and we show, in this section, the results for the approximation of several functions, also presenting the evolution of the optimal distribution of points, together with the related Pareto fronts. Similar results could be obtained for the case of interpolating RBFs, as explained in Remark 1.
In order to analyze the behavior of the radial basis approximations, a TDS with a large number of points (100) was used, with a population of 40 individuals evolved for up to 20 generations, in each one of the examples below. Concerning the shape parameter, it can certainly affect the approximation accuracy of the RBF and the overall computational stability of the procedure (see, for example, [33,34]), but this is not the goal of our study for the moment; therefore, a fixed value of $\varepsilon = 0.1$ was used without problems in all our computations. The same applies to the other smoothness parameter of the RBF approximation procedure, $\tau$, for which a constant value of 1 was used in all our numerical experiments.
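Using the sketches above, the two objectives attached to each individual (its number of centers and the corresponding $E_L$ error on the TDS) could be evaluated as follows; the setup mirrors the experiments (100-point TDS, $\varepsilon = 0.1$, $\tau = 1$), but the concrete data below are illustrative, not the paper's exact runs.

```python
import numpy as np

def evaluate_individual(centers, f, b, a):
    """Objectives of one individual: (number of centers, E_L error on the TDS).
    Relies on fit_smoothing_rbft and error_metrics defined above."""
    sigma = fit_smoothing_rbft(b, f(b), centers, eps=0.1, tau=1.0)
    _, E_L = error_metrics(f, sigma, a)
    return np.array([len(centers), E_L])

# Illustrative run on the reconstructed f_2 of Example 2
f = lambda x: 0.5 * np.exp(-5 * (x - np.pi / 2) ** 2) * np.sin(4 * np.pi * x) * np.cos(4 * x)
b = np.linspace(0.0, 1.0, 50)                      # approximation points
a = np.sort(np.random.default_rng(0).random(100))  # 100-point TDS
objectives = evaluate_individual(np.linspace(0.1, 0.9, 6), f, b, a)
```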

One-Dimensional Examples

Example 1. $f_1 : [0, \pi] \to \mathbb{R}$,
$$f_1(x) = 0.12 + 0.25\, e^{-4 \left( x - \frac{\pi}{4} \right)^2} \cos(2x)\, \sin(2\pi x).$$
To test the behavior of the approximation obtained with the presented methodology, performed by optimizing the centers' placement of the radial basis functions with an NSGA-II algorithm, Figure 1 shows the results of the approximation of the function $f_1(x)$ using radial basis function approximations, while Figure 2 shows how the $E_L$ error clearly decreases as the number of centers increases. We can also see, in the left column of Figure 1, the radial basis approximation corresponding to the last centers' distribution, whose evolution is also shown in the right column, with the increase in the number of interior centers for the function $f_1(x)$, with 7, 9 and 11 centers, respectively.
Example 2. $f_2 : [0, 1] \to \mathbb{R}$,
$$f_2(x) = 0.5\, e^{-5 (x - \pi/2)^2} \sin(4\pi x)\, \cos(4x).$$
For Example 2, the numerical approximation results for $f_2(x)$ and the evolution of the centers are shown in Figure 3; on the other hand, the two normalized discretized errors for $f_2(x)$ are also reported in Table 2. We can see, in the left column of Figure 3, the radial basis approximation corresponding to the last centers' distribution, whose evolution is also shown in the right column, with the increase in the number of interior centers for the function $f_2(x)$, using 2, 4 and 6 interior centers, respectively.
Example 3. $f_3 : [-1, 1] \to \mathbb{R}$,
$$f_3(x) = 1 - e^{-50 |x|}.$$
The graphical result of approximating $f_3(x)$ with 20 interior centers, using 20 generations of the NSGA-II algorithm with radial basis functions, is shown in Figure 4.

5. Conclusions and Future Work

Although there are some other approaches that select knots from a uniform partition of the interval (or domain) of definition of the function to be approximated or interpolated, they are usually based on splines, and the evolutionary procedures used to improve the final allocation of these knots typically rely either on basic genetic algorithms, which do not perform well on general multi-objective problems, or on self-adjusting or hybrid neural networks, which require a previous, demanding training process. This paper presents another important and interesting possibility for optimizing both the number of centers and their placement in the radial basis function approximation of functions of one or several independent variables, where we can clearly see the effectiveness of the strategy for different types of functions. We chose a radial basis function under tension (RBFT) depending on a positive parameter, which also provides a convenient way to control the behavior of the corresponding interpolation or approximation method. In addition, the preliminary numerical results of this MOGA strategy, for centers' placement in approximating functions of one variable, show that the placement of the centers in radial basis function approximation has an important and significant impact on the final results, because the ideal location of the centers is not known in advance. An important feature of this approach is that exactly the same theoretical procedure can be developed for problems in more spatial dimensions, owing to the well-known meshless character of these types of functions. It is also well known that increasing the number of centers in the RBF approximation usually increases the accuracy of the approximation, but only up to a certain level, beyond which the additional computational effort is no longer worthwhile; therefore, our procedure admits the coalescence of very close centers while maintaining the same level of accuracy. In the interpolating case, moreover, this possible excessive proximity of the centers must be avoided, so as not to produce severe conditioning problems in the linear systems to be solved. On the other hand, in classic allocation methods, the designer must choose a priori the number of centers to be used and, many times, they are simply placed on equidistant uniform partitions of the domain. With a MOGA procedure of the type considered here, however, the centers can move freely inside the domain, and the corresponding Pareto fronts can be directly improved using a different or variable number of centers, at the same time as the simulations are being performed.
An open problem is the optimal selection of both the shape parameter $\varepsilon$ and the smoothness parameter $\tau$, since they can depend not only on the number and distribution of the data points and on the data vector, but even on the precision of the computations, although it is well known that very small values give, in general, highly accurate results. The idea, in a future work, is to introduce these parameters as additional optimization variables in the general MOGA algorithm. In addition, the possibility of using bases of compactly supported RBFs, such as the well-known Wendland functions, would be a good alternative to the RBFs used in this work.

Author Contributions

Investigation, P.G.-R., H.M.H.I., M.Y. and M.P. All authors contributed equally to this work and have read and agreed to the published version of the manuscript.

Funding

This work was supported by FEDER/Junta de Andalucía-Consejería de Transformación Económica, Industria, Conocimiento y Universidades (Research Project A-FQM-76-UGR20, University of Granada) and by the Junta de Andalucía (Research Group FQM191).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We acknowledge the anonymous referees for their useful comments and suggestions, as well as the Department of Applied Mathematics of the University of Granada and the Mathematics Journal Editorial Board, for the financial aid offered for the final cost of the APC.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DM: decision maker
MATLAB: MAtrix LABoratory (by MathWorks)
MOEA: multi-objective evolutionary algorithm
MOGA: multi-objective genetic algorithm
NSGA-II: non-dominated sorting genetic algorithm, second version
RBF: radial basis function
RBFNN: radial basis function neural network
ROI: region of interest
SOP: single objective problem
TDS: test data set

References

  1. Buhmann, M.D. Radial Basis Functions: Theory and Implementations. In The Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 2003; Volume 12. [Google Scholar]
  2. Powell, M.J.D. The Theory of Radial Basis Functions Approximation in 1990. In Advances in Numerical Analysis, Volume 2: Wavelets, Subdivision Algorithms and Radial Basis Functions; Light, W.A., Ed.; Clarendon Press: Oxford, UK, 1992; pp. 105–210. [Google Scholar]
  3. Hoschek, J.; Lasser, D. Fundamentals of Computer Aided Geometric Design; A K Peters, Ltd.: Natick, MA, USA, 1993. [Google Scholar]
  4. Buhmann, M.; Dyn, N. Spectral convergence of multiquadric interpolation. Proc. Edinburgh Math. Soc. 1993, 36, 319–333. [Google Scholar] [CrossRef] [Green Version]
  5. Cheng, A.H.-D.; Golberg, M.A.; Kansa, E.J.; Zammito, G. Exponential convergence and H-c multiquadric collocation method for partial differential equations. Numer. Methods Partial Diff. Equ. 2003, 19, 571–594. [Google Scholar] [CrossRef]
  6. Huang, M.C.; Tai, C.C. The pre-processing of data points for curve fitting in reverse engineering. Int. J. Adv. Manuf. Technol. 2000, 16, 635–642. [Google Scholar] [CrossRef]
  7. Lai, Y.-K.; Hu, S.-M.; Pottmann, H. Surface fitting based on a feature sensitive parametrization. Comput. Aided Des. 2006, 38, 800–807. [Google Scholar] [CrossRef] [Green Version]
  8. Jeffreys, H.; Jeffreys, B.S. Lagrange’s Interpolation Formula. In Methods of Mathematical Physics, 3rd ed.; Cambridge University Press: Cambridge, UK, 1988; Volume 59. [Google Scholar]
  9. Whittaker, E.T.; Robinson, G. Lagrange’s formula of interpolation. In The Calculus of Observations: A Treatise on Numerical Mathematics, 4th ed.; Dover: New York, NY, USA, 1967; pp. 28–30. [Google Scholar]
  10. Szego, G.; Sakai, M. Orthogonal Polynomials; American Mathematical Society: Providence, RI, USA, 1975. [Google Scholar]
  11. Esmaeilbeigi, M.; Hosseini, M.M. Dynamic node adaptive strategy for nearly singular problems on large domains. Eng. Anal. Bound. Elem. 2012, 36, 1311–1321. [Google Scholar] [CrossRef]
  12. Kozera, R.; Noakes, L. Optimal Knots Selection for Sparse Reduced Data. Appl. Math. Inform. 2016, LNCS 9555, 3–14. [Google Scholar]
  13. Krityakierne, T.; Akhtar, T.; Shoemaker, C.A. SOP: Parallel surrogate global optimization with Pareto center selection for computationally expensive single-objective problems. J. Glob. Optim. 2016, 66, 417–437. [Google Scholar] [CrossRef] [Green Version]
  14. García-Capulin, C.H.; Cuevas, F.J.; Trejo-Caballero, G.; Rostro-González, H. A hierarchical genetic algorithm approach for curve fitting with B-splines. Genet. Program. Evolvable Mach. 2015, 16, 151–166. [Google Scholar] [CrossRef]
  15. Leitenstorfer, F.; Tutz, G. Knot Selection by Boosting Techniques. Comput. Stat. Data Anal. 2007, 51, 4605–4621. [Google Scholar] [CrossRef] [Green Version]
  16. Samozino, M.; Alexa, M.; Alliez, P.; Yvinec, M. Reconstruction with Voronoi Centered Radial Basis Functions. In Proceedings of the SGP’06: Fourth Eurographics Symposium on Geometry Processing, Cagliari-Sardinia, Italy, 26–28 June 2006; Polthier, K., Sheffer, A., Eds.; Eurographics Association: Goslar, Germany, 2006; pp. 51–60. [Google Scholar]
  17. Afiatdoust, F.; Esmaeilbeigi, M. Optimal variable shape parameters using genetic algorithm for radial basis function approximation. Ain Shams Eng. J. 2015, 6, 639–647. [Google Scholar] [CrossRef]
  18. Zheng, S.; Feng, R.; Huang, A. The Optimal Shape Parameter for the Least Squares Approximation Based on the Radial Basis Function. Mathematics 2020, 8, 1923. [Google Scholar] [CrossRef]
  19. Awad, M.; Pomares, H.; Herrera, L.J.; González, J.; Guillén, A.; Rojas, F. Approximating I/O Data Using Radial Basis Functions: A New Clustering-Based Approach. In Computational Intelligence and BioInspired Systems, IWANN 2005, LNCS, 3512; Cabestany, J., Prieto, A., Sandoval, F., Eds.; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  20. Rajchakit, G.; Agarwal, P.; Ramalingam, S. Stability Analysis of Neural Networks, 1st ed.; Springer: Singapore, 2021. [Google Scholar]
  21. Bechikh, S.; Kessentini, M.; Said, L.B.; Ghédira, K. Chapter Four—Preference Incorporation in Evolutionary Multiobjective Optimization: A Survey of the State-of-the-Art. Adv. Comput. 2015, 98, 141–207. [Google Scholar]
  22. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  23. Heris, M.K. NSGA-II in MATLAB. 2015. Available online: https://yarpiz.com/56/ypea120-nsga2 (accessed on 1 May 2017).
  24. Brownlee, J. Clever Algorithms: Nature-Inspired Programming Recipes. Available online: http://www.cleveralgorithms.com (accessed on 15 September 2020).
  25. Scattered Data Interpolation and Approximation Using Radial Base Functions. Available online: https://es.mathworks.com/matlabcentral/fileexchange/10056-scattered-data-interpolation-and-approximation-using-radial-base-functions (accessed on 1 June 2020).
  26. Bouhamidi, A.; Le Mehauté, A. Radial Basis Functions Under Tension. J. Approx. Theory 2004, 127, 135–154. [Google Scholar] [CrossRef]
  27. Kouibia, A.; Pasadas, M. Approximation of surfaces by fairness bicubic splines. Adv. Comput. Math. 2004, 20, 87–103. [Google Scholar] [CrossRef]
  28. Kouibia, A.; Pasadas, M. Approximation of curves by fairness cubic splines. Appl. Math. Sci. 2007, 1, 227–240. [Google Scholar]
  29. Atkinson, K.; Han, W. Theoretical Numerical Analysis, 2nd ed.; Springer: New York, NY, USA, 2005. [Google Scholar]
  30. Fonseca, C.M.; Fleming, P.J. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 1995, 3, 1–16. [Google Scholar]
  31. Srinivas, N.; Deb, K. Muiltiobjective optimization using nondominated sorting in genetic algorithms. Evol. Comput. 1994, 2, 221–248. [Google Scholar] [CrossRef]
  32. Idais, H.; Yasin, M.; Pasadas, M.; González, P. Optimal knots allocation in the cubic and bicubic spline interpolation problems. Math. Comput. Simul. 2019, 164, 131–145. [Google Scholar] [CrossRef]
  33. Sarra, S.A.; Sturgill, D. A random variable shape parameter strategy for radial basis function approximation methods. Eng. Anal. Bound. Elem. 2009, 33, 1239–1245. [Google Scholar] [CrossRef]
  34. Skala, V.; Karim, S.; Zabran, M. Radial Basis Function Approximation Optimal Shape Parameters Estimation: Preliminary Experimental Results. In Computational Science-ICCS 2020, Part VI, LNCS 12142; Springer: Cham, Switzerland, 2020; pp. 309–317. [Google Scholar]
Figure 1. Radial basis approximation and evolution of final centers' placement for the function $f_1(x)$ with 7, 9 and 11 centers, respectively.
Figure 2. Pareto front of the $E_L$ error vs. the number of centers for $f_1(x)$.
Figure 3. Radial basis approximation and evolution of final centers' placement for the function $f_2(x)$ with 2, 4 and 6 interior centers, respectively.
Figure 4. Graphical results for $f_3(x)$ with 20 interior centers (marked on the x-axis).
Table 1. Parameters and functions used by the MOGA in the simulation results.

Parameters of the MOGA         Value
Number of generations          20
Population size                40
Crossover function             Binary crossover
Selection function             Binary tournament
Crossover fraction             0.9
Pareto fraction                0.4
Mutation function              Polynomial mutation
Mutation rate                  0.01
Centers' deletion tolerance    0.5 × 10⁻²
Fitness functions              Number of centers and $E_C$ and/or $E_L$
Table 2. Two approximation discretization errors for $f_2(x)$.

Centers    $E_L$             $E_C$
2          4.3978 × 10⁻²     1.8918 × 10⁻³
4          2.6664 × 10⁻²     6.7993 × 10⁻⁴
6          4.0691 × 10⁻³     1.9208 × 10⁻⁴
8          3.9539 × 10⁻³     1.7241 × 10⁻⁴
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

