1. Introduction
In this study we are concerned with the problem of approximating a locally unique solution x* of the equation
G(x) = 0,  (1)
where G is a Fréchet-differentiable operator defined on a nonempty, open convex subset D of a Banach space X with values in a Banach space Y.
Many problems in computational disciplines such as Applied Mathematics, Optimization, Mathematical Biology, Chemistry, Economics, Medicine, Physics, Engineering and others can be solved by finding the solutions of equations of the form of Equation (1) using mathematical modelling [1,2,3,4,5,6,7]. The solutions of such equations are rarely available in closed form, which is why most of them are approximated by iterative methods. A very important problem in the study of iterative procedures is the convergence region, which is in general small. Therefore, it is important to enlarge the convergence region without additional hypotheses.
The study of the convergence of iterative algorithms is usually divided into two categories: semi-local and local convergence analysis. The semi-local convergence analysis uses information around an initial point to obtain conditions ensuring the convergence of these algorithms, while the local convergence analysis uses information around a solution to find estimates of the radii of the convergence balls.
Newton’s method, defined for all n = 0, 1, 2, … by
x_{n+1} = x_n − G′(x_n)^{−1} G(x_n),  (2)
is undoubtedly the most popular method for generating a sequence {x_n} approximating x*, where x_0 ∈ D is an initial point. There is a plethora of convergence results for Newton’s method [1,2,3,4,6,8,9,10,11,12,13,14]. We shall enlarge the convergence region by finding a more precise domain containing the iterates x_n, leading to smaller Lipschitz constants, which in turn lead to a tighter convergence analysis for Newton’s method than before. This technique can be applied to improve the convergence domain of other iterative methods in an analogous way.
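To make the iteration concrete, the following is a minimal sketch of Newton’s method, Equation (2), for a scalar equation; the sample function, its derivative and the starting point are illustrative choices, not data taken from this paper.

```python
# Minimal sketch of Newton's method (Equation (2)) for a scalar equation.
# The sample function g, its derivative dg and the starting point are
# illustrative choices, not data taken from this paper.

def newton(g, dg, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - g(x_n)/dg(x_n) until the step is below tol."""
    x = x0
    for n in range(max_iter):
        step = g(x) / dg(x)      # the scalar analogue of G'(x_n)^{-1} G(x_n)
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, max_iter

# Hypothetical test problem: solve x**3 - 2 = 0 starting from x0 = 1.5.
root, iterations = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.5)
print(root, iterations)
```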
Let us consider the conditions:
There exist x_0 ∈ D and η ≥ 0 such that
G′(x_0)^{−1} exists and ‖G′(x_0)^{−1} G(x_0)‖ ≤ η.
There exists T > 0 such that the Lipschitz condition
‖G′(x_0)^{−1} (G′(x) − G′(y))‖ ≤ T ‖x − y‖
holds for all x, y ∈ D.
Then, the sufficient convergence condition for Newton’s method is given by the Kantorovich criterion, famous for its simplicity and clarity:
2 T η ≤ 1.  (3)
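A quick numerical check of this criterion can be coded directly; the sketch below evaluates h = 2Tη for given constants, where the numerical values of T and η are placeholders rather than data from this paper.

```python
# Quick check of the classical Kantorovich-type criterion h = 2*T*eta <= 1.
# The numerical values of T and eta below are placeholders, not data from
# this paper.

def kantorovich_check(T, eta):
    """Return h = 2*T*eta and whether the criterion h <= 1 is satisfied."""
    h = 2.0 * T * eta
    return h, h <= 1.0

h, satisfied = kantorovich_check(T=3.0, eta=0.2)
print(f"h = {h:.3f}, criterion satisfied: {satisfied}")
```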
Let us consider a motivational and academic example to show that this condition is not satisfied. Choose , , , and define the function G on D by . Then, we have , and the Kantorovich condition is not satisfied, since for all . We set to be the set of points satisfying Equation (3). Hence, there is no guarantee that Newton’s sequence starting at x_0 converges to x*.
The rest of the paper is structured as follows: in Section 2 we present the semi-local convergence analysis of Newton’s method, Equation (2). The numerical examples are presented in Section 3, and an application to Planck’s radiation law is given in the concluding Section 4.
2. Semi-Local Convergence Analysis
We need an auxiliary result on majorizing sequences for Newton’s method.
Lemma 1. Let , , , and be parameters. Suppose that: where and holds. Then, the scalar sequence given by is well defined, increasing, bounded from above by , and converges to its unique least upper bound , which satisfies , where . Moreover, the following estimates hold: and .
Proof. By induction, we show that holds for all . Estimate Equation (10) is true for by Equation (4). Then, we have by Equation (5) that and, for , . Assume that Equation (10) holds for all natural integers up to m. Then, we get by Equations (5) and (10) that and . Evidently, estimate Equation (10) is true if m is replaced by m + 1, provided that or or . Estimate Equation (11) motivates us to define the recurrent functions on by . We need a relationship between two consecutive functions . We get that . Therefore, we deduce that . Estimate Equation (11) is satisfied if . Using Equation (12) we obtain that . Let us now define the function on by . Then, we have by Equation (14) and the choice of that . Hence, Equation (13) is satisfied if . Using Equation (11) we get that . It then follows from Equations (2.1) and (2.13) that Equation (15) is satisfied. The induction is now completed. Hence, the sequence is increasing, bounded from above by given by Equation (6), and as such it converges to its unique least upper bound , which satisfies Equation (7). □
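A small numerical sketch can illustrate the behaviour described in Lemma 1. The recurrence coded below is the common Argyros-type majorizing sequence t_0 = 0, t_1 = η, t_{n+2} = t_{n+1} + L(t_{n+1} − t_n)² / (2(1 − L_0 t_{n+1})); this explicit form is an assumption made for illustration and may differ in detail from Equation (5), and the constants L_0, L and η are placeholder values.

```python
# Hedged sketch of a majorizing sequence: the recurrence below is the common
# Argyros-type form
#     t_0 = 0,  t_1 = eta,
#     t_{n+2} = t_{n+1} + L * (t_{n+1} - t_n)**2 / (2 * (1 - L0 * t_{n+1})),
# which is assumed here for illustration and may differ in detail from
# Equation (5). The constants L0, L and eta are placeholder values.

def majorizing_sequence(L0, L, eta, n_terms=15):
    t = [0.0, eta]
    for _ in range(n_terms - 2):
        t_prev, t_curr = t[-2], t[-1]
        denom = 2.0 * (1.0 - L0 * t_curr)
        if denom <= 0.0:          # the sequence is no longer well defined
            break
        t.append(t_curr + L * (t_curr - t_prev) ** 2 / denom)
    return t

# The terms increase monotonically towards the least upper bound t*.
for n, t_n in enumerate(majorizing_sequence(L0=1.2, L=1.5, eta=0.3)):
    print(n, round(t_n, 10))
```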
Let U(z, ρ) and Ū(z, ρ) stand, respectively, for the open and closed balls in X with center z ∈ X and radius ρ > 0.
The conditions for the semi-local convergence are:
G is Fréchet differentiable and there exist x_0 ∈ D and η ≥ 0 such that G′(x_0)^{−1} exists and ‖G′(x_0)^{−1} G(x_0)‖ ≤ η.
There exists such that for all , and there exists such that for all .
There exists such that , where .
There exists such that for all .
Notice that . Clearly, we have that and can be arbitrarily large [9]. It is worth noticing that conditions – are not additional to the hypotheses, since in practice the computation of the Lipschitz constant T requires the computation of the other constants as special cases.
Next, we present a semi-local convergence result relating the majorizing sequence of Lemma 1 to Newton’s method under the preceding hypotheses.
Theorem 1. Suppose that the hypotheses , the hypotheses of Lemma 1 and hold, where is given in Lemma 1. Then, the sequence {x_n} generated by Newton’s method is well defined, remains in , and converges to a solution x* of the equation G(x) = 0. Moreover, the following estimates hold: and , where the scalar sequence is given in Lemma 1. Furthermore, if there exists such that , then the solution x* of the equation G(x) = 0 is unique in .
Proof. We use mathematical induction to prove that and hold. Let . Then, we obtain that , which implies . Note also that . Hence, estimates Equations (20) and (21) hold for . Suppose these estimates hold for . Then, we have that and for all . Using Lemma 1 and the induction hypotheses, we get in turn that , where . It follows from Equation (22) and the Banach lemma on invertible operators that exists and . Using the Newton iteration, we obtain the approximation . Then, by Equation (24) we get in turn , where . Moreover, by the Newton iteration, Equations (23) and (25) and the induction hypotheses, we get that . That is, we have shown that Equation (20) holds for all . Furthermore, let . Then, we have that . That is, . The induction for Equations (20) and (21) is now completed. Lemma 1 implies that the majorizing sequence is a Cauchy sequence. It follows from Equations (20) and (21) that {x_n} is also a Cauchy sequence in the Banach space X and, as such, it converges to some x* (since is a closed set). By letting n → ∞ in Equation (25) we get G(x*) = 0. Estimate Equation (19) is obtained from Equation (18) (cf. [4,6,12]) by using standard majorization techniques. The proof of the uniqueness part has been given in [9]. □
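In the same illustrative spirit, the sketch below compares the Newton steps |x_{n+1} − x_n| for a simple scalar test problem with the increments t_{n+1} − t_n of an Argyros-type majorizing sequence, mimicking the kind of estimate stated in Equation (18); the test function, the starting point and the constants are hypothetical choices, not the paper's data.

```python
import math

# Illustrative comparison (not the paper's data): Newton steps |x_{n+1} - x_n|
# for the scalar test function g(x) = exp(x) - 2, against the increments
# t_{n+1} - t_n of an Argyros-type majorizing sequence, in the spirit of the
# estimate in Equation (18). The constants L0, L, eta and the starting point
# are hypothetical placeholder values.

def newton_steps(g, dg, x0, n):
    x, steps = x0, []
    for _ in range(n):
        s = g(x) / dg(x)
        steps.append(abs(s))
        x -= s
    return steps

def majorizing_increments(L0, L, eta, n):
    t_prev, t_curr, increments = 0.0, eta, [eta]
    for _ in range(n - 1):
        t_next = t_curr + L * (t_curr - t_prev) ** 2 / (2.0 * (1.0 - L0 * t_curr))
        increments.append(t_next - t_curr)
        t_prev, t_curr = t_curr, t_next
    return increments

g, dg, x0 = (lambda x: math.exp(x) - 2.0), math.exp, 1.0
for s, d in zip(newton_steps(g, dg, x0, 6), majorizing_increments(0.9, 1.0, 0.3, 6)):
    print(f"|x_(n+1) - x_n| = {s:.3e}   t_(n+1) - t_n = {d:.3e}")
```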
The sufficient convergence criteria for Newton’s method using the conditions , the constants and , given in affine invariant form, are:
Argyros [12]:
If , then Equations (27)–(30) coincide with Equation (26). If , then , but not vice versa. We also have that, for , . The estimates in Equation (31) show by how many times (at most) the better condition improves upon the worse one.
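For orientation, the snippet below compares the classical Kantorovich criterion 2Tη ≤ 1 with the weaker criterion (L_0 + T)η ≤ 1 from Argyros’s earlier work [12]; these explicit forms are taken from the literature and are used here only as an illustration, and the constants are placeholder values chosen so that the weaker criterion holds while the Kantorovich one fails.

```python
# Hedged comparison of two sufficient convergence criteria for Newton's method:
#     Kantorovich:        2 * T * eta <= 1
#     Argyros-type [12]:  (L0 + T) * eta <= 1   (weaker whenever L0 < T)
# The explicit forms follow the earlier literature and the constants below are
# placeholder values, not values computed in this paper.

def criteria(L0, T, eta):
    return {
        "Kantorovich  2*T*eta": 2.0 * T * eta,
        "Argyros  (L0+T)*eta": (L0 + T) * eta,
    }

for name, value in criteria(L0=1.0, T=3.0, eta=0.2).items():
    print(f"{name} = {value:.3f} -> satisfied: {value <= 1.0}")
```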
Remark 1.
- (a) The majorizing sequence , , given in [12] under the conditions and Equation (29) is defined by . Using a simple inductive argument and Equation (32), we get for that and . These estimates for Equations (5)–(7) show that the new error bounds are more precise than the old ones and that the information on the location of the solution is at least as precise, as already claimed in the abstract of this study (see also the numerical examples). Clearly, the new majorizing sequence is more precise than the corresponding ones associated with the other conditions.
- (b) Condition can be replaced by (or ). In this case, condition holds for all (or ).
- (c) If , then we have that , since .
3. Numerical Examples
Example 1. Returning to the motivational example, we have . Conditions Equations (27)–(29) are satisfied, respectively, for and . We now consider an initial point for which the previous conditions cannot be satisfied but our new criteria are satisfied; that is, we exhibit the improvement obtained with our new weaker criteria. Using these values, we obtain that condition Equation (4) is satisfied for . However, we must also have that , which is satisfied for . That is, we must have , so there exist numerous values of p for which the previous conditions cannot guarantee convergence but our new ones can. Notice that we have . Hence, the interval of convergence cannot be improved further under these conditions. Notice that the convergence criterion is even weaker than the corresponding one for the modified Newton’s method given in [11] by . For example, we choose different values of p and report the results in Table 1.
Example 2. Consider . Let , be such that and let G be defined on by , where is a given function, λ is a real constant and the kernel μ is the Green’s function. In this case, for all , is a linear operator defined on by the following expression: . If we choose , it follows that . Hence, if , is defined and . Considering , we get and . From these values we conclude that conditions (26)–(29) are not satisfied, since , but condition (2.27) and condition (4) are satisfied, since and . Hence, Newton’s method converges by Theorem 1.
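The integral-equation setting of Example 2 can also be explored numerically. The sketch below discretizes an operator of the form G(x)(s) = x(s) − f(s) − λ∫₀¹ μ(s, t) x(t)³ dt with the Green’s function kernel and applies Newton’s method to the resulting finite-dimensional system; the cubic nonlinearity, the choice of f, λ and the grid are illustrative assumptions rather than the data of the example.

```python
import numpy as np

# Hedged sketch for an integral operator of the kind appearing in Example 2:
#     G(x)(s) = x(s) - f(s) - lam * \int_0^1 mu(s, t) * x(t)**3 dt,
# where mu is the Green's function on [0, 1]. The cubic nonlinearity, the
# choice of f, lam and the grid are illustrative assumptions, not the data
# of the example.

n = 50
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                      # simple quadrature weights
mu = np.where(s[None, :] <= s[:, None],      # Green's function mu(s, t)
              s[None, :] * (1.0 - s[:, None]),
              s[:, None] * (1.0 - s[None, :]))
lam = 0.5
f = np.sin(np.pi * s)

def G(x):
    return x - f - lam * (mu @ (w * x**3))

def G_prime(x):
    # Jacobian of the discretized operator: I - 3*lam*mu(s_i, t_j)*w_j*x_j**2
    return np.eye(n) - 3.0 * lam * mu * (w * x**2)[None, :]

# Newton's method (Equation (2)) on the discretized system, starting at x0 = f.
x = f.copy()
for k in range(10):
    step = np.linalg.solve(G_prime(x), G(x))
    x -= step
    if np.linalg.norm(step) < 1e-12:
        break
print("iterations:", k + 1, "residual norm:", np.linalg.norm(G(x)))
```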
4. Application: Planck’s Radiation Law Problem
We consider the following problem [15]: , which calculates the energy density within an isothermal blackbody. The maximum of occurs when the density satisfies . From (36), we get , that is, when . After using the change of variable and reordering the terms, we obtain . As a consequence, we need to find the roots of Equation (39).
We consider and we obtain and . So, the conditions are satisfied. Moreover, as , then , which satisfies , and that means that the conditions of Lemma 1 are also satisfied. Finally, we obtain that . Hence, Newton’s method converges to the solution by Theorem 1.
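As a closing illustration, the sketch below solves the reduced equation with Newton’s method; it assumes the commonly used reduced form e^{−x} + x/5 − 1 = 0 for this problem, which is an assumption about the precise statement of Equation (39), and the starting point is an illustrative choice.

```python
import math

# Hedged sketch: solve the reduced Planck radiation law equation with Newton's
# method, assuming it takes the commonly used form g(x) = exp(-x) + x/5 - 1 = 0.
# The starting point x0 = 4.0 is an illustrative choice; the nonzero root is
# approximately 4.965114.

def g(x):
    return math.exp(-x) + x / 5.0 - 1.0

def dg(x):
    return -math.exp(-x) + 1.0 / 5.0

x = 4.0
for n in range(20):
    step = g(x) / dg(x)
    x -= step
    if abs(step) < 1e-14:
        break
print(f"approximate root x ~= {x:.12f} found after {n + 1} iterations")
```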