1. Introduction
In this paper, we are concerned with obtaining an approximate solution of Fredholm integral equations of the second kind, given by

$$x(s) = f(s) + \lambda \int_a^b K(s,t)\, x(t)\, dt, \quad s \in [a,b], \qquad (1)$$

where $\lambda \in \mathbb{R}$, $f \in C[a,b]$ is a given function, and $K$ is a known function in $C([a,b] \times [a,b])$, called the kernel of the integral equation. Finally, $x \in C[a,b]$ is the unknown function to be determined. This integral equation is said to be homogeneous if the function $f$ is zero; otherwise, if $f$ is not zero, it is said to be non-homogeneous.
The problem of solving integral equations is quite general, so in this work we focus on a particular type of integral equation, the Fredholm integral equations [1]. In general, this kind of integral equation appears frequently in Mathematical Physics, Mathematics, and other fields of Science and Engineering [2,3,4,5,6]. Several processes in Fluid Mechanics, Biology, Chemistry, etc., can be modeled by these equations. In addition, other purely mathematical problems, such as initial and boundary value problems, can be transformed into integral equations.
We introduce the integral operator $\mathcal{K}: C[a,b] \to C[a,b]$, given by

$$[\mathcal{K}x](s) = \int_a^b K(s,t)\, x(t)\, dt,$$

that allows us to express Equation (1) in the following form:

$$(I - \lambda \mathcal{K})\, x = f. \qquad (2)$$

Therefore, if there exists $(I - \lambda \mathcal{K})^{-1}$, a solution of Equation (1) is given by

$$x^{*} = (I - \lambda \mathcal{K})^{-1} f. \qquad (3)$$

With Formula (3), it is possible to obtain the exact solution of integral Equation (1) in a theoretical way. However, in practice, the calculation of the inverse $(I - \lambda \mathcal{K})^{-1}$ can be very complicated (or even impossible). Consequently, we present a strategy based on the symmetry between the problem of numerically solving linear integral equations and the problem of approximating the inverse of a linear operator. With this idea, the use of iterative methods gives us an alternative way of approximating this inverse, and therefore the solution of the integral equation, instead of trying to calculate the exact solution of the problem (see [7,8,9,10]).
There exist other techniques to numerically solve Fredholm integral equations of the second kind, or even systems of such equations. For instance, in [11], the idea is to approximate the operator associated with the same integral equation, instead of approximating an inverse operator as we propose in this paper. In [12] or [13], discretization techniques (Galerkin method, collocation method) are used to solve the finite-dimensional equations

$$(I - \lambda \mathcal{K}_m)\, x_m = f_m, \quad m \geq 1,$$

related to (2), where $\mathcal{K}_m$ and $f_m$ are discrete approximations of $\mathcal{K}$ and $f$, and $x_m$, $m \geq 1$, the corresponding discrete solutions. Iterative techniques can also be used for other types of integral equations, as done in [14] or [15], for example.
Throughout this paper, we consider an operator $F$, defined on a nonempty convex domain $\Omega$ in $C[a,b]$, with

$$[F(x)](s) = x(s) - f(s) - \lambda \int_a^b K(s,t)\, x(t)\, dt, \quad s \in [a,b]. \qquad (4)$$

Instead of approximating $(I - \lambda \mathcal{K})^{-1}$ directly by means of an iterative method, as done in [10] for instance, in this paper we apply Newton's method to the equation $F(x) = 0$. As we will see later (see (7)), the calculation of $[F'(x)]^{-1}$ is also related to the calculation of $(I - \lambda \mathcal{K})^{-1}$. Consequently, Newton's method is only applicable in cases where this inverse can be calculated (for instance, separable kernels, as seen in [16]). One of the targets of this work is to use Newton's method for solving $F(x) = 0$. At this point, we introduce the use of iterative methods for the calculation of the inverse. This is the main difference with respect to previous works such as [10]: here, we first consider the equation $F(x) = 0$ and then we use iterative methods to approximate the inverse, instead of approximating the inverse directly.
Amongst the plethora of iterative methods for solving nonlinear equations $F(x) = 0$, in this work we have chosen what is known as Ulm's method, a Newton-type method given by

$$x_{n+1} = x_n - B_n F(x_n), \qquad B_{n+1} = 2 B_n - B_n F'(x_{n+1}) B_n, \quad n \geq 0, \qquad (5)$$

for given initial guesses $x_0$ and $B_0$. The method presents some attractive features. Firstly, it has quadratic convergence, that is, the same order of convergence as Newton's method. Secondly, the method does not contain inverse operators in its expression or, equivalently, it is not necessary to solve a linear equation at each iteration. Thirdly, in addition to solving the nonlinear equation $F(x) = 0$, the method produces successive approximations $B_n$ to the value of the inverse operator $[F'(x^{*})]^{-1}$, where $x^{*}$ is a solution of the equation. The method was first proposed by Ulm in [17], as a variant of a similar method given by Moser [18] that has only superlinear convergence. A study of its semilocal convergence by using the $\alpha$-theory of Smale, as well as an application to approximate the solution of integral equations of Fredholm type, can be seen in [19]. The local convergence of the method can be seen in [20].
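To fix ideas, the following minimal sketch (our own illustration; the test system, the starting guesses and the number of iterations are arbitrary choices not taken from the literature) runs Ulm's method (5) on a small nonlinear system in $\mathbb{R}^2$, with the Jacobian matrix playing the role of $F'$:

import numpy as np

def F(x):
    # Illustrative nonlinear system with solution x* = (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def dF(x):
    # Jacobian (Frechet derivative) of F
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.array([1.2, 1.8])                 # initial guess x_0
B = np.linalg.inv(dF(x))                 # B_0: any sufficiently good approximation of dF(x_0)^{-1} works
for n in range(8):
    x = x - B @ F(x)                     # x_{n+1} = x_n - B_n F(x_n)
    B = 2.0 * B - B @ dF(x) @ B          # B_{n+1} = 2 B_n - B_n F'(x_{n+1}) B_n  (no inversion needed)
    print(n, np.linalg.norm(F(x)), np.linalg.norm(np.eye(2) - dF(x) @ B))

Both printed quantities, the residual $\|F(x_n)\|$ and $\|I - F'(x_n) B_n\|$, decrease very quickly, in agreement with the quadratic convergence of the method and with the fact that $B_n$ approximates $[F'(x^{*})]^{-1}$.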
2. Construction of an Iterative Scheme of Ulm-Type
We consider the equation given by the operator $F$ defined in (4). Let us note that finding a solution of the equation $F(x) = 0$ is equivalent to solving the integral Equation (1). Iterative methods are among the most used techniques to solve these kinds of equations. The idea is to start with an initial approximation $x_0$ of $x^{*}$, a solution of the equation $F(x) = 0$. Next, a sequence $\{x_n\}$ of approximations to the solution $x^{*}$ is obtained, one at each step, such that the sequence of errors $\|x_n - x^{*}\|$ is strictly decreasing. Obviously, our interest is focused on the case $\lim_{n} x_n = x^{*}$. It is possible to obtain the sequence of approximations $\{x_n\}$ by using different iterative algorithms. For instance, Newton's method is one of the most used for this purpose. It is defined by the following recursive process:

$$x_{n+1} = x_n - [F'(x_n)]^{-1} F(x_n), \quad n \geq 0, \qquad (6)$$

for a given initial guess $x_0 \in \Omega$. In practice, it is not easy to construct an iterative scheme like (6) for operators defined on infinite-dimensional spaces. The main difficulties arise when calculating, at each step, the inverse of the linear operator $F'(x_n)$ or, equivalently, when solving the associated linear equation.
In our case, we have that the operator $F$ is Fréchet differentiable, with $F'(x)$ defined by

$$[F'(x)y](s) = y(s) - \lambda \int_a^b K(s,t)\, y(t)\, dt \qquad (7)$$

for each function $y \in C[a,b]$. In order to calculate the inverse of $F'(x)$ for $x \in \Omega$, let us write $w = [F'(x)]^{-1} z$ for a given $z \in C[a,b]$. Then, if there exists $[F'(x)]^{-1}$, the following equality must be satisfied:

$$w(s) - \lambda \int_a^b K(s,t)\, w(t)\, dt = z(s), \quad s \in [a,b].$$

If we denote

$$v(s) = \int_a^b K(s,t)\, w(t)\, dt, \quad s \in [a,b],$$

the value of $v$ can be obtained independently from $w$. To do this, we write the next-to-last equality in the variable $x$, multiply it by $K(s,x)$, $s \in [a,b]$, and we integrate it between $a$ and $b$ in the $x$ variable. In this way, we obtain

$$\int_a^b K(s,x)\, w(x)\, dx - \lambda \int_a^b K(s,x) \left( \int_a^b K(x,t)\, w(t)\, dt \right) dx = \int_a^b K(s,x)\, z(x)\, dx.$$

By means of the change of the variable $x$ by $t$ in the previous integrals, we obtain

$$v(s) - \lambda \int_a^b K(s,t)\, v(t)\, dt = [\mathcal{K}z](s), \quad \text{that is,} \quad (I - \lambda \mathcal{K})\, v = \mathcal{K} z,$$

so that $w = z + \lambda v = z + \lambda (I - \lambda \mathcal{K})^{-1} \mathcal{K} z = (I - \lambda \mathcal{K})^{-1} z$. Now, as a consequence of the last equation, we can rewrite an iteration of Newton's iterative scheme (6) as follows:

$$x_{n+1} = x_n - (I - \lambda \mathcal{K})^{-1} F(x_n), \quad n \geq 0. \qquad (8)$$
Let us note that, if $|\lambda|\, \|\mathcal{K}\| < 1$, Banach's Lemma on invertible operators guarantees the existence of the operator $(I - \lambda \mathcal{K})^{-1}$. Now, our target is to approximate the inverse of the linear operator $F'(x) = I - \lambda \mathcal{K}$ by using iterative methods for solving nonlinear equations.
Let $\mathcal{L}(X,Y)$ be the set of bounded linear operators from the Banach space $X$ into the Banach space $Y$. Within this set, we consider the subset of invertible operators:

$$GL(X,Y) = \{ B \in \mathcal{L}(X,Y) : B^{-1} \ \text{exists and} \ B^{-1} \in \mathcal{L}(Y,X) \}.$$

Now, we use Newton's method for approaching the inverse of a given linear operator $A \in \mathcal{L}(X,Y)$, or equivalently for solving:

$$\mathcal{G}(B) = 0, \quad \text{with} \quad \mathcal{G}: GL(Y,X) \to \mathcal{L}(X,Y), \quad \mathcal{G}(B) = B^{-1} - A. \qquad (9)$$

Therefore, proceeding as in [16], Newton's iteration in this case can be written in the following way:

$$B_{n+1} = 2 B_n - B_n A B_n, \quad n \geq 0, \qquad (10)$$

for a given $B_0 \in \mathcal{L}(Y,X)$. We would like to highlight that Newton's method (10) does not use inverse operators for approximating the inverse operator $A^{-1}$.
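In finite dimensions, iteration (10) is the classical Newton-Schulz scheme for approximating a matrix inverse. The following lines (a minimal sketch with an arbitrarily chosen matrix) show that only matrix products are needed:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
B = A.T / np.linalg.norm(A, 2) ** 2      # a standard choice of B_0 giving ||I - A B_0|| < 1
for n in range(8):
    B = 2.0 * B - B @ A @ B              # iteration (10): no linear systems are solved
    print(n, np.linalg.norm(np.eye(2) - A @ B))

The printed residual $\|I - A B_n\|$ is essentially squared at each step, which is the quadratic behaviour quantified in Lemma 1 below.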
Now, taking into account the previous reasoning, we approximate $(I - \lambda \mathcal{K})^{-1}$ in (8) by means of the Newton sequence (10) and then we define the following Ulm-type algorithm:

$$\left\{ \begin{array}{l} x_{n+1} = x_n - B_n F(x_n), \\ B_{n+1} = 2 B_n - B_n\, (I - \lambda \mathcal{K})\, B_n, \end{array} \right. \quad n \geq 0, \qquad (11)$$

for given $x_0 \in \Omega$ and $B_0 \in \mathcal{L}(C[a,b], C[a,b])$. Let us notice that (11) is an inverse-free iterative process, as is Ulm's method given by (5). In the next section, we are going to study the semilocal convergence of this method as well as its order of convergence.
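Before studying its convergence, we illustrate how (11) can be implemented once Equation (1) has been discretized. The following sketch is only an illustration: the trapezoidal Nyström-type discretization, the kernel $K(s,t) = st$ on $[0,1]$, the value $\lambda = 1/2$ and the data $f$ (chosen so that the exact solution is $x^{*}(s) = s$) are our own choices, as are the initial guesses $x_0 = f$ and $B_0 = I$.

import numpy as np

# Trapezoidal (Nystrom-type) discretization of (1) on [0, 1]
m = 41
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1)); w[0] *= 0.5; w[-1] *= 0.5
lam = 0.5
Kq = np.outer(s, s) * w                  # quadrature matrix for (Kx)(s) = int_0^1 s*t*x(t) dt
A = np.eye(m) - lam * Kq                 # discrete version of F'(x) = I - lam*K
f = (5.0 / 6.0) * s                      # data chosen so that the exact solution of (1) is x*(s) = s

x = f.copy()                             # x_0 = f
B = np.eye(m)                            # B_0 = I, so ||I - A B_0|| = |lam|*||K|| <= 1/4 < 1 here
for n in range(6):
    Fx = A @ x - f                       # discrete version of F(x) in (4)
    x = x - B @ Fx                       # first step of (11)
    B = 2.0 * B - B @ A @ B              # second step of (11)
    print(n, np.linalg.norm(A @ x - f, np.inf), np.max(np.abs(x - s)))

The residual of the discrete equation decreases very quickly, while the error with respect to $x^{*}(s) = s$ stalls at the level of the quadrature error, as expected.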
3. Main Convergence Result
As a first step in our study, we analyze the convergence of the sequence of linear operators $\{B_n\}$ defined in (10). Our target is to state a semilocal convergence result, without assuming the existence of the inverse $A^{-1}$.
Lemma 1. Let $B_0 \in \mathcal{L}(Y,X)$ such that $\|I - A B_0\| \leq \delta$, with $\delta \in (0,1)$. Then, the sequence $\{B_n\}$ defined by (10) belongs to $\mathcal{L}(Y,X)$ and converges quadratically to an operator $B^{*} \in \mathcal{L}(Y,X)$, with $A B^{*} = I$. Moreover,

$$\|B^{*} - B_n\| \leq \frac{\|B_0\|}{(1 - \delta)^2}\, \delta^{2^n}, \quad n \geq 0.$$

Proof. As a direct application of (10), we obtain

$$I - A B_{n+1} = I - A \left( 2 B_n - B_n A B_n \right) = \left( I - A B_n \right)^2,$$

and therefore:

$$\|I - A B_n\| \leq \|I - A B_0\|^{2^n} \leq \delta^{2^n}, \quad n \geq 0. \qquad (12)$$

On the other hand, as

$$B_{n+1} - B_n = B_n \left( I - A B_n \right),$$

we have

$$\|B_{n+1}\| \leq \|B_n\| \left( 1 + \|I - A B_n\| \right).$$

Now, we apply recursively the previous inequality and we take into account (12). Thus, we obtain:

$$\|B_n\| \leq \|B_0\| \prod_{k=0}^{n-1} \left( 1 + \delta^{2^k} \right) \leq \frac{\|B_0\|}{1 - \delta}, \quad n \geq 1. \qquad (13)$$

As a consequence, by the definition of the sequence (10) and by the inequalities (12) and (13), we have the following bound for each $n \geq 0$:

$$\|B_{n+1} - B_n\| = \|B_n \left( I - A B_n \right)\| \leq \frac{\|B_0\|}{1 - \delta}\, \delta^{2^n}.$$

Therefore, as $2^{n+j} \geq 2^n + j$, for $j \geq 0$, we have:

$$\delta^{2^{n+j}} \leq \delta^{2^n} \delta^{j}.$$

Then,

$$\|B_{n+m} - B_n\| \leq \sum_{j=0}^{m-1} \|B_{n+j+1} - B_{n+j}\| \leq \frac{\|B_0\|}{1 - \delta} \sum_{j=0}^{m-1} \delta^{2^{n+j}} \leq \frac{\|B_0\|}{(1 - \delta)^2}\, \delta^{2^n},$$

for $m \geq 1$. Moreover, we obtain:

$$\lim_{n \to \infty} \|B_{n+m} - B_n\| = 0,$$

and it follows that $\{B_n\}$ is a Cauchy sequence. Then, $\{B_n\}$ converges to an operator $B^{*} \in \mathcal{L}(Y,X)$. In addition, as:

$$\|I - A B^{*}\| = \lim_{n \to \infty} \|I - A B_n\| \leq \lim_{n \to \infty} \delta^{2^n} = 0,$$

we have $I - A B^{*} = 0$ and then $A B^{*} = I$. □
Let us notice that, if we prove that $A^{-1}$ exists, then $B^{*} = A^{-1}$. Otherwise, if we do not suppose the existence of the inverse $A^{-1}$, and we consider $B_0$ such that $\|I - B_0 A\| \leq \delta < 1$, we have:

$$I - B_{n+1} A = I - \left( 2 B_n - B_n A B_n \right) A = \left( I - B_n A \right)^2.$$

Hence, by following an inductive procedure, we deduce that $\|I - B_n A\| \leq \delta^{2^n}$ and then $B^{*} A = I$. Thus, in this case, $B^{*}$ is the inverse operator of $A$. However, in general, if we only assume $\|I - A B_0\| \leq \delta < 1$, then $B^{*}$ satisfies only $A B^{*} = I$, so that the sequence $\{B_n\}$ converges to a right inverse of $A$.
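This behaviour is easy to observe numerically. In the following sketch (an illustration with an arbitrarily chosen matrix), $A$ is a $2 \times 3$ matrix with full row rank, so that no two-sided inverse exists, and yet iteration (10) converges to a right inverse of $A$:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])               # 2x3, full row rank: A^{-1} does not exist
B = A.T / np.linalg.norm(A @ A.T, 2)          # B_0 chosen so that ||I - A B_0|| < 1
for n in range(12):
    B = 2.0 * B - B @ A @ B                   # iteration (10)
print(np.round(A @ B, 8))                     # A B = identity of size 2: B is a right inverse
print(np.round(B @ A, 8))                     # B A is only a projection, not the 3x3 identity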
Lemma 2. Under the conditions of Lemma 1, let us assume that $x_n \in \Omega$ for $n \geq 0$. Then, the sequence $\{x_n\}$ defined by (11) satisfies

$$\|F(x_n)\| \leq \delta^{2^n - 1}\, \|F(x_0)\|, \quad n \geq 0.$$

Proof. Firstly, using (4) and (11), as $F'(x) = I - \lambda \mathcal{K} = A$ for every $x \in \Omega$, we have

$$F(x_{n+1}) = F(x_n) + A \left( x_{n+1} - x_n \right) = F(x_n) - A B_n F(x_n) = \left( I - A B_n \right) F(x_n).$$

Secondly, taking norms in the previous equality and from Lemma 1, we get

$$\|F(x_{n+1})\| \leq \|I - A B_n\|\, \|F(x_n)\| \leq \delta^{2^n}\, \|F(x_n)\|.$$

Thirdly, by a recursive procedure, as $\sum_{k=0}^{n-1} 2^k = 2^n - 1$ for $n \geq 1$, we obtain

$$\|F(x_n)\| \leq \delta^{2^{n-1}} \delta^{2^{n-2}} \cdots \delta^{2^0}\, \|F(x_0)\| = \delta^{2^n - 1}\, \|F(x_0)\|,$$

and the result is then proved. □
With the aid of these two technical lemmas, we can prove a result of semilocal convergence for the sequence $\{x_n\}$ defined by (11).
Theorem 1. Let $A$ be the operator appearing in (9) and let $x_0 \in \Omega$ be an initial approach to a solution of $F(x) = 0$ such that

$$\overline{B(x_0, R)} \subseteq \Omega, \quad \text{where} \quad R = \frac{\|B_0\|\, \|F(x_0)\|}{(1 - \delta)^2}. \qquad (14)$$

In addition, let us assume that the initial guess $B_0$ satisfies

$$\|I - A B_0\| \leq \delta, \qquad (15)$$

with $\delta \in (0,1)$. Then, the sequence $\{x_n\}$ defined in (11) belongs to $\overline{B(x_0, R)}$ and converges quadratically to $x^{*}$, a solution of $F(x) = 0$.

Proof. From Lemma 1, it follows that the sequence $\{B_n\}$ converges quadratically to the right inverse $B^{*}$ of $A$ and

$$\|B_n\| \leq \frac{\|B_0\|}{1 - \delta}, \quad n \geq 0.$$

On the other hand, if $x_n \in \overline{B(x_0, R)}$ for $n \geq 0$, from Lemma 2, as $\|I - A B_n\| \leq \delta^{2^n}$ for $n \geq 0$, we obtain

$$\|F(x_n)\| \leq \delta^{2^n - 1}\, \|F(x_0)\|, \quad n \geq 0. \qquad (16)$$

Therefore, from (13), it follows

$$\|x_{n+1} - x_n\| = \|B_n F(x_n)\| \leq \frac{\|B_0\|\, \|F(x_0)\|}{1 - \delta}\, \delta^{2^n - 1}.$$

Consequently, for $m \geq 1$, we get

$$\|x_{n+m} - x_n\| \leq \sum_{j=0}^{m-1} \|x_{n+j+1} - x_{n+j}\| \leq \frac{\|B_0\|\, \|F(x_0)\|}{1 - \delta} \sum_{j=0}^{m-1} \delta^{2^{n+j} - 1} \leq \frac{\|B_0\|\, \|F(x_0)\|}{(1 - \delta)^2}\, \delta^{2^n - 1}.$$

Then, if we take $n = 0$, it is obvious that $x_m \in \overline{B(x_0, R)}$ for all $m \geq 1$. Moreover, it follows that the sequence $\{x_n\}$ is a Cauchy sequence and therefore there exists a function $x^{*} \in C[a,b]$ such that

$$\lim_{n \to \infty} x_n = x^{*}.$$

On the other hand, by (16) and the continuity of the function $F$, we obtain that $F(x^{*}) = 0$.

Next, we prove the quadratic convergence of the sequence $\{x_n\}$. It is easy to check that

$$F(x_{n+1}) = F(x_n) - A B_n F(x_n) = \left( I - A B_n \right) F(x_n),$$

and then

$$\|x^{*} - x_{n+1}\| \leq \lim_{m \to \infty} \|x_{n+1+m} - x_{n+1}\| \leq \frac{\|B_0\|}{(1 - \delta)^2}\, \|F(x_{n+1})\|.$$

Thus, from Lemma 1, it follows

$$\|x^{*} - x_{n+1}\| \leq \frac{\|B_0\|}{(1 - \delta)^2}\, \|I - A B_n\|\, \|F(x_n)\| \leq \frac{\|B_0\|\, \|A\|}{(1 - \delta)^2}\, \delta^{2^n}\, \|x^{*} - x_n\|,$$

and then $\{x_n\}$ converges quadratically to $x^{*}$. □
4. A Particular Case
Now, we consider the application of the iterative method (11) to the problem (1) in the particular case given by the initial choices $x_0 = f$ and $B_0 = I$.
In this case, it is easy to check that, if $|\lambda|\, \|\mathcal{K}\| \leq \delta < 1$, the condition (15) is verified. Then, by Theorem 1, if $|\lambda|\, \|\mathcal{K}\| < 1$, the sequence $\{x_n\}$ defined in (11) belongs to $\overline{B(x_0, R)}$ and converges quadratically to $x^{*}$, a solution of $F(x) = 0$.
In addition, the following sequences of iterates can be explicitly obtained, one, $\{x_n\}$, for approaching a solution $x^{*}$:

$$x_n = \sum_{k=0}^{2^n - 1} \lambda^k\, \mathcal{K}^k f, \quad n \geq 0, \qquad (17)$$

and the other one, $\{B_n\}$, for approaching the inverse of $F'(x) = I - \lambda \mathcal{K}$:

$$B_n = \sum_{k=0}^{2^n - 1} \lambda^k\, \mathcal{K}^k, \quad n \geq 0. \qquad (18)$$

Note that the operator $B_n$ is given by the $(2^n - 1)$th partial sum of the series $\sum_{k \geq 0} \lambda^k \mathcal{K}^k$ that gives the inverse of the operator $I - \lambda \mathcal{K}$, when it exists. We can prove that Equations (17) and (18) hold by following an inductive procedure. For the operators $B_n$, it is clear that the Formula (18) is true for $n = 0$, since $B_0 = I$. Now, if we assume that the Formula (18) is true for $n$, we have:

$$(I - \lambda \mathcal{K})\, B_n = \sum_{k=0}^{2^n - 1} \lambda^k \mathcal{K}^k - \sum_{k=1}^{2^n} \lambda^k \mathcal{K}^k = I - \lambda^{2^n} \mathcal{K}^{2^n},$$

and therefore, taking into account the recurrence for $B_{n+1}$ given in (11):

$$B_{n+1} = 2 B_n - B_n (I - \lambda \mathcal{K}) B_n = B_n \left( I + \lambda^{2^n} \mathcal{K}^{2^n} \right) = \sum_{k=0}^{2^{n+1} - 1} \lambda^k \mathcal{K}^k,$$

the Formula (18) is true for $n + 1$.
For the case of the functions $x_n$, it is clear again that the Formula (17) is true for $n = 0$, since $x_0 = f$. Now, let us suppose that the Formula (17) holds for $n$. Then, taking into account the recurrence for $x_{n+1}$ given in (11), we have:

$$x_{n+1} = x_n - B_n F(x_n) = x_n - B_n \left( (I - \lambda \mathcal{K})\, x_n - f \right).$$

By using the inductive hypothesis, we deduce

$$(I - \lambda \mathcal{K})\, x_n - f = \left( I - \lambda^{2^n} \mathcal{K}^{2^n} \right) f - f = -\lambda^{2^n} \mathcal{K}^{2^n} f,$$

and therefore, rearranging the indices of the resulting sums,

$$x_{n+1} = \sum_{k=0}^{2^n - 1} \lambda^k \mathcal{K}^k f + \sum_{k=0}^{2^n - 1} \lambda^{k + 2^n} \mathcal{K}^{k + 2^n} f = \sum_{k=0}^{2^{n+1} - 1} \lambda^k \mathcal{K}^k f.$$

Thus, the Formula (17) is true for $n + 1$.
Notice that Equation (17) can be expressed in the following way to approximate the solution $x^{*}$ of Equation (1):

$$x_n(s) = f(s) + \sum_{k=1}^{2^n - 1} \lambda^k \int_a^b K_k(s,t)\, f(t)\, dt,$$

where $K_1(s,t) = K(s,t)$ and $K_k(s,t) = \int_a^b K(s,u)\, K_{k-1}(u,t)\, du$, $k \geq 2$, are the iterated kernels.
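As a numerical check of the Formula (17) (a minimal sketch with our own choice of kernel and data), take $K(s,t) = st$ on $[0,1]$, $\lambda = 1/2$ and $f(s) = (5/6)\,s$, for which $|\lambda|\,\|\mathcal{K}\| = 1/4 < 1$ and the exact solution of (1) is $x^{*}(s) = s$. The partial sums in (17) can then be evaluated by quadrature:

import numpy as np

m = 201
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1)); w[0] *= 0.5; w[-1] *= 0.5
lam = 0.5
Kq = np.outer(s, s) * w                      # quadrature version of (Ky)(s) = int_0^1 s*t*y(t) dt
f = (5.0 / 6.0) * s                          # data with exact solution x*(s) = s

for n in range(4):
    xn = np.zeros(m)
    phi = f.copy()                           # phi = K^k f, starting with k = 0
    for k in range(2 ** n):                  # formula (17): x_n = sum_{k=0}^{2^n - 1} lam^k K^k f
        xn += (lam ** k) * phi
        phi = Kq @ phi
    print(n, np.max(np.abs(xn - s)))         # error with respect to the exact solution

The printed error decreases like the tail of the Neumann series until it reaches the level of the quadrature error.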