1. Introduction
Integral equations are used in many fields of applied mathematics, physics, engineering, biology, and dynamical systems to model a wide variety of applications. They also appear as reformulations of boundary value, initial value, and bilocal problems for partial differential equations. Integral equations with symmetric kernels occur frequently in optimization, spectral analysis, electronics, and optics.
A wide variety of numerical methods are used to approximate their solutions, such as projection–collocation and Galerkin methods ([1,2,3,4,5,6]), or decomposition methods ([7]). Many methods are based on searching for a solution of a certain form and then finding the values of the solution at a set of nodes by substituting that form into the equation. Numerical solutions have been found using block-pulse functions ([8]), wavelets ([9,10]), Taylor series expansions ([11]), Bernoulli polynomials ([2]), and others. In general, this leads to solving a nonlinear system for the coefficients of the unknown function, a system which is often ill-conditioned, thus complicating the procedure.
In contrast, iterative numerical methods approximate the solution at each step, using the previous iteration, without the need to solve a nonlinear system. Iterative methods consist of two parts, one provided by an iterative process and the other by a numerical integration scheme. The first part is usually a fixed point result, such as Banach's contraction principle [12], Altman's algorithm [13], the Krasnoselskii [14], Mann [12], or Ishikawa [12] iteration, or various versions of Newton–Kantorovich theorems [15]. The quadrature formulas used in the second part can be interpolatory numerical schemes, Newton–Cotes formulas, or Gaussian quadratures. For iterative procedures, the number of unknowns (the approximations of the values of the solution at the nodes) does not increase at each step; they are simply computed from the values obtained at the previous step, without significantly increasing the computational cost. Thus, the implementation of such methods is also simplified. Iterative numerical methods have been used successfully to approximate solutions of various integral equations of the second kind ([16,17,18,19,20,21,22]).
The composite error of the method will thus consist of two parts. Improving the speed of convergence of either will result in a faster converging method overall.
In this paper, we derive such a numerical method for Fredholm–Hammerstein integral equations of the second kind with modified argument (FHMA), using Banach's fixed point theorem and the trapezium quadrature scheme. Nonlinear integral equations with modified argument arise in modeling population dynamics or the spread of a disease in epidemiology ([23]). The Hammerstein-type nonlinearity allows for better numerical methods, making the proof of convergence, the applicability, and the implementation easier.
The paper is organized as follows: in Section 2, we give an overview of iterative processes and recall some main results in fixed point theory. Section 3 contains the detailed description of the numerical method. We start with the description of the FHMA integral equation. Then, we use a local fixed point result and give conditions for the existence, uniqueness, and successive approximation of the solution. Next, we derive the conditions of convergence for a general quadrature formula. Finally, we apply the trapezium rule, which satisfies the previously derived condition without requiring extra assumptions. Numerical examples are given in Section 4, showing the applicability of the method and the agreement of the numerical results with the theoretical findings. In Section 5, we give some concluding remarks and discuss ideas for future research.
2. Preliminaries on Iterative Processes and Integral Equations
We recall some results on iterative processes on a Banach space.
Let $X$ and $Y$ be two Banach spaces, let $A$ be a convex subset of $X$, and let $F : A \to Y$ be an operator, in general nonlinear. Many applications in all scientific areas are reduced to solving the operator equation
$$F(u) = 0, \qquad (1)$$
where $0$ is the zero-vector in the space $Y$. A special and important case is when $Y = X$ and $A = X$. In fact, when $Y = X$, in most cases it is possible to write Equation (1) in the form
$$u = T(u),$$
with $T = I - F$, i.e., the problem is reduced to finding a fixed point of the operator $T$.
Definition 1. A sequence of the form
$$u_{k+1} = T(u_k), \quad k = 0, 1, \ldots, \qquad (2)$$
starting with a point $u_0 \in X$, is called an iterative process. Under certain conditions imposed on the operator $T$, the sequence $(u_k)_{k \ge 0}$ will converge to the fixed point of the operator $T$.
We recall some well-known results in fixed point theory.
Definition 2. Let $(X, \|\cdot\|)$ be a Banach space. A mapping $T : X \to X$ is called an $\alpha$-contraction if there exists a constant $\alpha \in [0, 1)$, such that
$$\|T(u) - T(v)\| \le \alpha \, \|u - v\|$$
for all $u, v \in X$.

Theorem 1 (Banach's contraction principle). Let $(X, \|\cdot\|)$ be a Banach space and $T : X \to X$ be an $\alpha$-contraction. Then,
(a) the equation $u = T(u)$ has exactly one solution $u^* \in X$;
(b) the iterative process $u_{k+1} = T(u_k)$, $k \ge 0$, converges to the solution $u^*$ for any arbitrary initial point $u_0 \in X$;
(c) the error estimate
$$\|u_k - u^*\| \le \frac{\alpha^k}{1 - \alpha} \, \|u_1 - u_0\|$$
holds for each $k \ge 1$.

A stronger (faster converging) fixed point result can be formulated in the following form (see Altman [13]).
Theorem 2 (Altman's Algorithm). Let $(X, \|\cdot\|)$ be a Banach space and $T : X \to X$ be an $\alpha$-contraction. Let $(c_k)_{k \ge 0}$ be a sequence of positive numbers, such that $c_k \le 1$ and $\sum_{k=0}^{\infty} c_k = \infty$.
Then,
(a) the equation $u = T(u)$ has exactly one solution $u^* \in X$;
(b) the iterative process
$$u_{k+1} = (1 - c_k)\, u_k + c_k\, T(u_k), \quad k \ge 0,$$
converges to $u^*$ for any arbitrary initial point $u_0 \in X$;
(c) the error estimate
$$\|u_k - u^*\| \le \prod_{i=0}^{k-1} \bigl( 1 - (1 - \alpha)\, c_i \bigr) \, \|u_0 - u^*\|$$
holds for each $k \ge 1$, where $1 - (1 - \alpha)\, c_i \in (0, 1)$ for each $i$.

There are many results on the convergence of iterative processes, under certain conditions, mostly versions of Kantorovich's Theorem ([24]). For more details, the reader is referred to [12,14,15] and the references therein.
Remark 1. The above results remain true if, instead of the entire space $X$, we consider any closed subset $A \subseteq X$ satisfying $T(A) \subseteq A$. For instance, such results are useful if applied on a closed ball $\bar{B}(u_0; r) = \{ u \in X : \|u - u_0\| \le r \}$, for a suitable point $u_0 \in X$.
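To make the two iteration schemes above concrete, here is a minimal Python sketch (ours, not part of the original text) applying the Picard iteration of Theorem 1 and Altman's iteration of Theorem 2 to a simple scalar contraction; the map $T(u) = \cos u$ and the step sequence $c_k$ are illustrative assumptions.

```python
import math

def picard(T, u0, k_max):
    """Picard iteration u_{k+1} = T(u_k) from Theorem 1."""
    u = u0
    for _ in range(k_max):
        u = T(u)
    return u

def altman(T, u0, c, k_max):
    """Altman's iteration u_{k+1} = (1 - c_k) u_k + c_k T(u_k) from Theorem 2."""
    u = u0
    for k in range(k_max):
        ck = c(k)
        u = (1 - ck) * u + ck * T(u)
    return u

# T(u) = cos(u) is a contraction on the closed set [0.5, 1] (it maps this
# interval into itself and |sin u| <= sin 1 < 1 there, cf. Remark 1), with
# unique fixed point u* ~ 0.739085.
print(picard(math.cos, u0=1.0, k_max=50))
print(altman(math.cos, u0=1.0, c=lambda k: 0.9, k_max=50))
```

Both calls converge to the same fixed point; taking $c_k \equiv 1$ in the second scheme recovers the Picard iteration.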
Such iterative processes are very suitable for approximating solutions of integral equations of the second kind
$$u(t) = f(t) + \int_a^b K\bigl(t, s, u(s)\bigr)\, ds, \qquad (3)$$
whose right-hand side defines a (nonlinear, in general) integral operator $T$, so that (3) takes the fixed point form $u = T(u)$. Not only do such results state the conditions for the existence of a unique solution $u^*$ of the integral Equation (3), but they also provide a sequence of approximating iterations $(u_k)_{k \ge 0}$. So, the first step in developing a numerical method for integral equations of the second kind is using such an iterative process. The second step consists of approximating numerically the integrals involved in the iterative method (2), i.e., finding approximations $\tilde{u}_k$ of the iterates $u_k$ at a set of nodes. Then, the composite error of the numerical method will be given by
$$\|u^* - \tilde{u}_k\| \le \|u^* - u_k\| + \|u_k - \tilde{u}_k\|.$$
The first part depends on the iterative process and its speed, while the second one depends on the degree of precision of the chosen quadrature formula.
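As a quick worked illustration (ours, not from the original text): suppose the iterative process is an $\alpha$-contraction with $\alpha = 1/2$ and $\|u_1 - u_0\| = 1$, and the quadrature error behaves like $C/n^2$ with $C = 1$. To bring the composite error below a tolerance $\varepsilon = 10^{-6}$, it suffices to bound each part by $\varepsilon/2$:
$$\frac{\alpha^k}{1 - \alpha} \le \frac{\varepsilon}{2} \;\Longleftrightarrow\; k \ge \log_2 \frac{2}{(1 - \alpha)\,\varepsilon} \approx 21.9, \qquad \frac{C}{n^2} \le \frac{\varepsilon}{2} \;\Longleftrightarrow\; n \ge \sqrt{\frac{2C}{\varepsilon}} \approx 1414.2,$$
so $k = 22$ iterations and $n = 1415$ nodes suffice in this scenario.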
3. Iterative Methods
In this paper, we consider a special type of nonlinear Fredholm integral equation with modified argument,
$$u(t) = f(t) + \int_a^b K(t, s)\, g\bigl(s, u(s), u(\varphi(s))\bigr)\, ds, \quad t \in [a, b], \qquad (4)$$
where $K : [a, b] \times [a, b] \to \mathbb{R}$, $g$, and $f$ are continuous functions and $\varphi : [a, b] \to [a, b]$ is continuous. Such equations, where the kernel depends on the unknown $u$ through the function $g$ only, are called equations of Hammerstein type, and they can be handled better than equations with a general nonlinear kernel, in the sense that the approximation of their solutions is much simplified. Equations with modified arguments arise in modeling dynamical systems of population growth/decay or the spread of a virus ([23]).
3.1. Existence, Uniqueness, and Successive Approximation of the Solution
Let $X = C[a, b]$ be equipped with the uniform norm $\|u\| = \max_{t \in [a, b]} |u(t)|$, and consider the closed ball $\bar{B}(f; r) = \{ u \in C[a, b] : \|u - f\| \le r \}$ for some suitable $r > 0$. Then, $(X, \|\cdot\|)$ is a Banach space, and $\bar{B}(f; r)$ is a closed subset of $X$. Define the operator $T : \bar{B}(f; r) \to X$ by
$$T(u)(t) = f(t) + \int_a^b K(t, s)\, g\bigl(s, u(s), u(\varphi(s))\bigr)\, ds. \qquad (5)$$
Then, finding a solution of the integral Equation (4) is equivalent to finding a fixed point of the operator $T$:
$$u = T(u). \qquad (6)$$
Theorem 3. Consider $T$ defined by (5), and let
$$M_K = \max_{t, s \in [a, b]} |K(t, s)| \quad \text{and} \quad M_g = \max \bigl\{ |g(s, u, v)| : s \in [a, b], \; |u|, |v| \le \|f\| + r \bigr\}.$$
Assume that:
(i) there exist constants $L_1, L_2 > 0$, such that
$$\bigl| g(s, u_1, v_1) - g(s, u_2, v_2) \bigr| \le L_1 |u_1 - u_2| + L_2 |v_1 - v_2| \qquad (7)$$
for all $s \in [a, b]$ and all $u_1, u_2, v_1, v_2$ with $|u_1|, |u_2|, |v_1|, |v_2| \le \|f\| + r$;
(ii) the contraction condition
$$\alpha := M_K (L_1 + L_2)(b - a) < 1 \qquad (8)$$
is satisfied;
(iii) the following condition is satisfied:
$$M_K \, M_g \, (b - a) \le r. \qquad (9)$$
Then,
(a) the operator $T$ has exactly one fixed point in $\bar{B}(f; r)$, i.e., the Equation (6) has exactly one solution $u^* \in \bar{B}(f; r)$;
(b) the iterative process
$$u_{k+1} = T(u_k), \quad k \ge 0, \qquad (10)$$
converges to the solution $u^*$ for any arbitrary initial point $u_0 \in \bar{B}(f; r)$;
(c) the error estimate
$$\|u_k - u^*\| \le \frac{\alpha^k}{1 - \alpha} \, \|u_1 - u_0\|$$
holds for each $k \ge 1$.

Proof. We use the Picard iteration, i.e., Theorem 1.
Let $u, v \in \bar{B}(f; r)$ be fixed. By (7), for every $t \in [a, b]$, we have
$$\bigl| T(u)(t) - T(v)(t) \bigr| \le \int_a^b |K(t, s)| \, \bigl| g\bigl(s, u(s), u(\varphi(s))\bigr) - g\bigl(s, v(s), v(\varphi(s))\bigr) \bigr| \, ds \le M_K (L_1 + L_2)(b - a) \, \|u - v\|,$$
because $|u(s)|, |u(\varphi(s))|, |v(s)|, |v(\varphi(s))| \le \|f\| + r$, since $u, v \in \bar{B}(f; r)$ and $\varphi(s) \in [a, b]$ (so, the Lipschitz condition (7) applies).
Hence,
$$\|T(u) - T(v)\| \le \alpha \, \|u - v\|,$$
and by (8), it follows that $T$ is an $\alpha$-contraction.
Next, for every $u \in \bar{B}(f; r)$ and $t \in [a, b]$, we have
$$\bigl| T(u)(t) - f(t) \bigr| \le \int_a^b |K(t, s)| \, \bigl| g\bigl(s, u(s), u(\varphi(s))\bigr) \bigr| \, ds \le M_K \, M_g \, (b - a).$$
Hence, by (9), it follows that $T\bigl(\bar{B}(f; r)\bigr) \subseteq \bar{B}(f; r)$. Now, all the conclusions follow from Theorem 1 and Remark 1. □
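As a sketch of how conditions (8) and (9) can be verified in practice, the following snippet (ours; the kernel, nonlinearity, and radius are hypothetical, not taken from the paper) computes the constants for a sample equation:

```python
# Hypothetical data (ours): [a, b] = [0, 1], f = 0, ball radius r = 1,
# K(t, s) = t*s/4 and g(s, u, v) = (sin(u) + cos(v))/3, so that
# L1 = L2 = 1/3 and |g| <= 2/3 on the relevant domain.
a, b, r = 0.0, 1.0, 1.0
M_K = 0.25            # max |K(t,s)| = max |t*s/4| on [0,1]^2
L1 = L2 = 1.0 / 3.0   # Lipschitz constants of g in u and v
M_g = 2.0 / 3.0       # bound for |g(s,u,v)|

alpha = M_K * (L1 + L2) * (b - a)   # condition (8): alpha < 1
invariance = M_K * M_g * (b - a)    # condition (9): must be <= r

print(f"alpha = {alpha:.4f} (< 1: {alpha < 1})")
print(f"M_K * M_g * (b - a) = {invariance:.4f} (<= r: {invariance <= r})")
```

For these data, $\alpha = 1/6 < 1$ and $M_K M_g (b - a) = 1/6 \le r$, so Theorem 3 applies.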
3.2. Numerical Integration of the Iterates
To numerically approximate the iterates in (10), we consider a numerical integration scheme
$$\int_a^b h(s)\, ds = \sum_{j=0}^{n} a_j\, h(s_j) + R_n[h], \qquad (11)$$
with nodes $a \le s_0 < s_1 < \cdots < s_n \le b$, coefficients $a_j \ge 0$, and for which the remainder satisfies
$$\bigl| R_n[h] \bigr| \le C_h \, \rho_n, \qquad (12)$$
for some constant $C_h > 0$ depending on $h$, with $\rho_n \to 0$ as $n \to \infty$.
Let $s_0, \ldots, s_n$ be the nodes, and let $u_0 = f$ be the initial approximation. Then, we use the iteration (10) and the quadrature Formula (11) to approximate $u_k(s_i)$ and $u_k(\varphi(s_i))$, with $x_i^k$ and $\bar{x}_i^k$, respectively, for $i = 0, \ldots, n$ and $k \ge 0$. For simplicity, we make the following notations:
$$x_i^{k+1} = f(s_i) + \sum_{j=0}^{n} a_j\, K(s_i, s_j)\, G_j^k, \qquad \bar{x}_i^{k+1} = f\bigl(\varphi(s_i)\bigr) + \sum_{j=0}^{n} a_j\, K\bigl(\varphi(s_i), s_j\bigr)\, G_j^k,$$
where
$$G_j^k = g\bigl(s_j, x_j^k, \bar{x}_j^k\bigr), \quad j = 0, \ldots, n, \qquad x_i^0 = f(s_i), \quad \bar{x}_i^0 = f\bigl(\varphi(s_i)\bigr).$$
We also use the notations
$$\varepsilon_k = \max_{i = 0, \ldots, n} \bigl| u_k(s_i) - x_i^k \bigr|, \qquad \bar{\varepsilon}_k = \max_{i = 0, \ldots, n} \bigl| u_k\bigl(\varphi(s_i)\bigr) - \bar{x}_i^k \bigr|.$$
When applying Formula (11) to the functions $s \mapsto K(s_i, s)\, g\bigl(s, u_k(s), u_k(\varphi(s))\bigr)$ and $s \mapsto K\bigl(\varphi(s_i), s\bigr)\, g\bigl(s, u_k(s), u_k(\varphi(s))\bigr)$, we denote the remainder in (12) by $R_i^k$ and $\bar{R}_i^k$, respectively, for $i = 0, \ldots, n$. We have:
$$u_{k+1}(s_i) = f(s_i) + \sum_{j=0}^{n} a_j\, K(s_i, s_j)\, g\bigl(s_j, u_k(s_j), u_k(\varphi(s_j))\bigr) + R_i^k,$$
and similarly for $u_{k+1}\bigl(\varphi(s_i)\bigr)$.
Since we start with $u_0 = f$, all iterates remain in $\bar{B}(f; r)$, so there exists $C > 0$, such that
$$\bigl| R_i^k \bigr|, \; \bigl| \bar{R}_i^k \bigr| \le C\, \rho_n,$$
where $C$ depends on $K$, $g$, $f$, and $\varphi$, but not on $i$ and $k$, and $\rho_n \to 0$ as $n \to \infty$. Then, denoting by
$$A = \sum_{j=0}^{n} a_j \quad \text{and} \quad \gamma = M_K (L_1 + L_2)\, A,$$
we have, by the Lipschitz condition (7),
$$\bigl| u_{k+1}(s_i) - x_i^{k+1} \bigr| \le \sum_{j=0}^{n} a_j\, |K(s_i, s_j)| \, \bigl( L_1\, \varepsilon_k + L_2\, \bar{\varepsilon}_k \bigr) + \bigl| R_i^k \bigr| \le M_K\, A \, \bigl( L_1\, \varepsilon_k + L_2\, \bar{\varepsilon}_k \bigr) + C \rho_n.$$
Let $e_k = \max\{ \varepsilon_k, \bar{\varepsilon}_k \}$. Then, for every $k \ge 0$,
$$\varepsilon_{k+1} \le \gamma\, e_k + C \rho_n.$$
Absolutely similarly, we obtain
$$\bar{\varepsilon}_{k+1} \le \gamma\, e_k + C \rho_n, \quad \text{so} \quad e_{k+1} \le \gamma\, e_k + C \rho_n.$$
We can now state the approximation result.
Theorem 4. Under the assumptions of Theorem 3, if, in addition,
$$\gamma = M_K (L_1 + L_2) \sum_{j=0}^{n} a_j < 1, \qquad (13)$$
then the error estimate
$$\max_{i = 0, \ldots, n} \bigl| u^*(s_i) - x_i^k \bigr| \le \frac{\alpha^k}{1 - \alpha}\, \|u_1 - u_0\| + \frac{C \rho_n}{1 - \gamma}$$
holds for every $k \ge 1$. Thus, as $k, n \to \infty$, $x_i^k \to u^*(s_i)$.

3.3. An Iterative Numerical Method Using the Trapezium Rule
Let us consider the trapezium rule:
$$\int_a^b h(s)\, ds = \frac{b - a}{2n} \Bigl[ h(s_0) + 2 \sum_{i=1}^{n-1} h(s_i) + h(s_n) \Bigr] + R_n[h], \qquad (14)$$
where the equidistant nodes are $s_i = a + i\, \frac{b - a}{n}$, $i = 0, \ldots, n$, and the remainder is given by
$$R_n[h] = -\frac{(b - a)^3}{12 n^2}\, h''(\xi), \quad \text{for some } \xi \in (a, b).$$
We use it to approximate the integrals in (10), as described above, with the initial approximation $x_i^0 = f(s_i)$, $\bar{x}_i^0 = f\bigl(\varphi(s_i)\bigr)$. Let us check the conditions in the previous section. The error in the trapezium rule depends on $h''$, which, in its turn, depends on the second order (partial) derivatives of the functions $K$, $g$, and $f$. So, if we assume that $K$, $g$, and $f$ are $C^2$ functions with bounded second order derivatives, then we have
$$\bigl| R_i^k \bigr|, \; \bigl| \bar{R}_i^k \bigr| \le \frac{C}{n^2},$$
where $C > 0$ depends on the constants $M_K$, $L_1$, and $L_2$ and the functions $K$, $g$, and $f$, but not on $k$ or $n$. Thus, condition (12) is satisfied with $\rho_n = 1/n^2$.
Now, for the constant $\gamma$ in (13), in the case of the trapezium quadrature formula, the coefficients satisfy $\sum_{j=0}^{n} a_j = b - a$, so
$$\gamma = M_K (L_1 + L_2)(b - a) = \alpha$$
from Equation (8), which is already assumed to be less than 1 by the contraction condition. Thus, in this case, no extra assumptions are necessary on the coefficients of the quadrature formula. Then, by Theorem 4, the error estimate of our approximation is given by
$$\max_{i = 0, \ldots, n} \bigl| u^*(s_i) - x_i^k \bigr| \le \frac{\alpha^k}{1 - \alpha}\, \|u_1 - u_0\| + \frac{C}{(1 - \alpha)\, n^2}.$$
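A minimal, self-contained Python sketch of the resulting method follows (ours, not part of the paper): it discretizes the Picard iteration (10) with the trapezium rule, computing the nodal approximations $x_i^k$ and $\bar{x}_i^k$ of $u_k(s_i)$ and $u_k(\varphi(s_i))$ as in Section 3.2. The function name and signature are illustrative assumptions.

```python
import numpy as np

def solve_fhma(f, K, g, phi, a, b, n=64, k_max=30):
    """Iterative trapezium-rule method for the FHMA equation
       u(t) = f(t) + int_a^b K(t,s) g(s, u(s), u(phi(s))) ds.
    Returns the nodes s_i and the approximations x_i^k of u*(s_i)."""
    s = np.linspace(a, b, n + 1)          # equidistant nodes s_0, ..., s_n
    w = np.full(n + 1, (b - a) / n)       # trapezium coefficients a_j
    w[0] = w[-1] = (b - a) / (2 * n)
    ps = phi(s)                           # modified arguments phi(s_i)
    x, xb = f(s), f(ps)                   # initial approximation u_0 = f
    for _ in range(k_max):
        G = g(s, x, xb)                   # G_j^k = g(s_j, x_j^k, xbar_j^k)
        x = f(s) + (K(s[:, None], s[None, :]) * w) @ G      # values at s_i
        xb = f(ps) + (K(ps[:, None], s[None, :]) * w) @ G   # values at phi(s_i)
    return s, x
```

Here f, K, g, and phi are assumed to be NumPy-vectorized callables; in practice, k_max and n would be chosen from the error estimate above, since the two error terms decay independently in $k$ and $n$.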
4. Numerical Experiments
Example 1. Let us consider a Fredholm–Hammerstein integral equation of the form (4) whose exact solution $u^*$ is known explicitly. All conditions of Theorem 3 were satisfied for this equation. We used the trapezium rule with two values of $n$, with the corresponding equidistant nodes $s_i$. Table 1 contains the errors $\max_i |u^*(s_i) - x_i^k|$, for various values of $k$, with the initial approximation $x_i^0 = f(s_i)$. The approximate CPU time per iteration was also recorded for both node counts.
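Since the concrete data of the examples are not reproduced above, the following usage sketch (ours) runs the solve_fhma function from Section 3.3 on the earlier hypothetical equation, with $f(t) = t$, $K(t, s) = ts/4$, $g(s, u, v) = (\sin u + \cos v)/3$, and $\varphi(s) = s/2$ on $[0, 1]$; it illustrates the workflow only, not the paper's experiments.

```python
import numpy as np

f   = lambda t: t
K   = lambda t, s: t * s / 4.0
g   = lambda s, u, v: (np.sin(u) + np.cos(v)) / 3.0
phi = lambda s: s / 2.0

# Run with two node counts and compare the values at t = 1; the difference
# between refinements gives a practical check of the O(1/n^2) behavior.
for n in (8, 64):
    s, x = solve_fhma(f, K, g, phi, a=0.0, b=1.0, n=n, k_max=30)
    print(f"n = {n:3d},  u(1) ~ {x[-1]:.8f}")
```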
Example 2. Next, we considered a second Fredholm–Hammerstein integral equation of the form (4), whose exact solution is also known explicitly. Choosing a suitable radius $r$, it follows that conditions (8) and (9) hold; so, all theoretical assumptions were satisfied. Again, we took the same two values of $n$ in the trapezium rule, with the corresponding equidistant nodes $s_i$. The errors $\max_i |u^*(s_i) - x_i^k|$ are shown in Table 2, starting with the initial approximation $x_i^0 = f(s_i)$. The approximate CPU times per iteration for the two node counts were again recorded.
5. Conclusions
In this paper, we discussed iterative numerical methods for a special type of nonlinear Fredholm integral equation with modified argument, of Hammerstein type, where the kernel can be factored into the two functions $K(t, s)$ and $g\bigl(s, u(s), u(\varphi(s))\bigr)$. Iterative methods consist of two parts: one that approximates the solution globally, by a sequence of iterates, and another that approximates the values of the solution at a set of nodes. For the first part, any iterative process can be used, while the second is determined by a numerical integration scheme.
We used the Picard iteration for the first step, from Banach's contraction principle. For the second step, we gave the conditions of convergence of the numerical approximation. In particular, we used the trapezium rule, for which the assumption needed for convergence coincides with the contraction condition from the first part. The result was a numerical method with order of convergence $O(1/n^2)$. This method is simple to use and understand, and it produces good approximation results.
Other iterative processes can be used, such as the Mann, Krasnoselskii, and Ishikawa iterations (to name a few), which may improve the speed of convergence of the first part of the approximation. Moreover, for the second step, more accurate quadrature schemes can increase the convergence rate of the method.