1. Introduction
Let $X$ and $Y$ be Banach spaces and $\Omega \subseteq X$ be a nonempty open set. Denote by $\mathcal{L}(X, Y)$ the space of bounded and linear operators from $X$ into $Y$. A plethora of problems from numerous disciplines can be phrased, by means of mathematical modelling, as an equation
$$F(x) = 0, \qquad (1)$$
where $F: \Omega \to Y$ is a continuously differentiable operator in the Fréchet sense. Introducing better iterative methods for approximating a solution $x^*$ of Equation (1) is, in general, a very challenging and difficult task. Notice that this task is extremely important, since exact solutions of Equation (1) are available only on some occasions.
We are motivated by four iterative methods, given as (2)–(5), where the initial points are chosen in $\Omega$, the remaining quantities are as displayed there, and $[\cdot, \cdot; F]$ denotes a divided difference of order one for $F$. These methods specialize to the corresponding ones (when the involved parameters are specialized accordingly, $i$ being a natural number) studied by Nedzhibov [1], Hueso et al. [2], Junjua et al. [3], and Behl et al. [4], respectively. The fourth-order convergence of these methods was established by Taylor series and conditions on the derivatives up to order five, even though derivatives of such higher order do not appear in the methods (2)–(5). Hence, the usage of methods (2)–(5) is very restricted. Let us start with a simple motivational problem: a scalar function $\lambda$, defined on a set $\Omega$ of the real line, whose third-order derivative is not bounded on $\Omega$ (this function is revisited in Example 1 of Section 3). The methods (2)–(5), or their special cases, cannot be applied to such problems as long as their analysis requires hypotheses on the third- or higher-order derivatives of the function involved. Moreover, the works [1,2,3,4] do not give a radius of convergence, estimations on $\|x_n - x^*\|$, or knowledge about the location of $x^*$. The novelty of our work is that we provide this information for these methods while requiring only the derivative of order one. This expands the scope of utilization of them and of similar methods. It is vital to note that local convergence results are very fruitful, since they give insight into the difficult operational task of choosing the starting points/guesses.
In contrast, with the earlier approaches: (i) Taylor series and high-order derivatives are used, (ii) there is no clue for the choice of the starting point $x_0$, (iii) there is no estimate in advance of the number of iterations needed to obtain a predetermined accuracy, and (iv) there is no knowledge of the uniqueness of the solution.
The work is laid out as follows: the convergence of the iterative schemes (2)–(5), with the main theorems, is given in Section 2. Numerical problems are discussed in Section 3. The final conclusions are summarized in Section 4.
2. Local Convergence Analysis
Let us consider a non-decreasing and continuous function $w_0: [0, \infty) \to [0, \infty)$ with $w_0(0) = 0$. Assume that Equation (6) has a minimal positive solution $\rho_0$. Let $w$ and $v$ be continuous and non-decreasing functions on $[0, \rho_0)$ with $w(0) = 0$. We consider the functions $g_1$ and $h_1$ on the interval $[0, \rho_0)$ given in (7). Then, by (7), $h_1(0) < 0$ and $h_1(t) \to +\infty$ as $t \to \rho_0^{-}$. On the basis of the classical intermediate value theorem, the function $h_1$ has a minimal solution $r_1$ in $(0, \rho_0)$. In addition, we assume that Equation (8) has a minimal positive solution $\rho_1$. Set $\rho = \min\{\rho_0, \rho_1\}$.

Moreover, we consider two functions $g_2$ and $h_2$ on $[0, \rho)$, defined analogously. Then, $h_2(0) < 0$ and $h_2(t) \to +\infty$ as $t \to \rho^{-}$. We denote by $r_2$ the minimal solution of $h_2(t) = 0$ in $(0, \rho)$. Set
$$R = \min\{r_1, r_2\}. \qquad (9)$$
It follows from (9) that, for every $t \in [0, R)$,
$$0 \le g_1(t) < 1 \quad \text{and} \quad 0 \le g_2(t) < 1.$$
Define $B(x^*, R) = \{x \in X : \|x - x^*\| < R\}$ and denote by $\bar{B}(x^*, R)$ the closure of $B(x^*, R)$.
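Since $r_1$, $r_2$, and $R$ are minimal positive zeros of scalar functions, they can be computed numerically once the majorant functions are fixed. The following is a minimal sketch under assumed, illustrative choices of $w_0$, $w$, and $g_1$ (none of which are taken from this paper); it only demonstrates how such a radius can be located with a bracketing root finder.

```python
from scipy.optimize import brentq

# Illustrative majorant functions (assumed for this sketch, not taken from the paper).
L0, L = 1.0, 1.5
w0 = lambda t: L0 * t      # center-type majorant
w = lambda t: L * t        # full majorant

# rho0: minimal positive solution of w0(t) = 1 (cf. Equation (6)).
rho0 = brentq(lambda t: w0(t) - 1.0, 1e-12, 1e6)

# A hypothetical majorizing function g1 and h1(t) = g1(t) - 1.
def g1(t):
    return w(t) / (1.0 - w0(t))

def h1(t):
    return g1(t) - 1.0

# r1: minimal zero of h1 in (0, rho0); note h1(0) < 0 and h1(t) -> +inf as t -> rho0-.
r1 = brentq(h1, 1e-12, rho0 * (1.0 - 1e-9))
print(f"rho0 = {rho0:.6f}, r1 = {r1:.6f}")
```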
The local convergence of method (2) uses the conditions (A):
- (a1) $F: \Omega \to Y$ is a continuously differentiable operator in the Fréchet sense, and there exists $x^* \in \Omega$ with $F(x^*) = 0$ and $F'(x^*)^{-1} \in \mathcal{L}(Y, X)$.
- (a2) There exists a function $w_0$, non-decreasing and continuous with $w_0(0) = 0$, such that the corresponding condition holds for all $x \in \Omega$. Set $\Omega_0 = \Omega \cap B(x^*, \rho_0)$, where $\rho_0$ is given in (6).
- (a3) There exist functions $w$ and $v$, non-decreasing and continuous with $w(0) = 0$, so that the corresponding conditions hold for all $x, y \in \Omega_0$ (an illustrative linear specialization is sketched after this list).
- (a4) The radii $\rho_0$ and $\rho_1$, as given respectively by (6) and (8), exist; the condition (7) holds; and $R$ is defined in (9).
- (a5) Set $\Omega_1 = \bar{B}(x^*, R) \cap \Omega$.
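The following display is an illustration only and is not part of conditions (A): it records the classical specialization in which the majorant functions are linear (the constants $L_0$, $L$, and $M$ are assumptions of this sketch), as is common when $F'$ satisfies Lipschitz-type conditions; divided-difference conditions are handled analogously.

```latex
% Illustrative linear specialization (assumed constants L_0, L, M; not the paper's conditions):
\[
  \|F'(x^*)^{-1}\bigl(F'(x) - F'(x^*)\bigr)\| \le L_0\,\|x - x^*\|
  \quad\Longrightarrow\quad w_0(t) = L_0\,t,
\]
\[
  \|F'(x^*)^{-1}\bigl(F'(x) - F'(y)\bigr)\| \le L\,\|x - y\|
  \quad\Longrightarrow\quad w(t) = L\,t,
  \qquad
  \|F'(x^*)^{-1}F'(x)\| \le M
  \quad\Longrightarrow\quad v(t) = M .
\]
```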
We can now proceed with the local convergence study of method (2), adopting the preceding notation and the conditions (A).
Theorem 1. Under the conditions (A), the sequence $\{x_n\}$ generated by method (2) for any starting point $x_0 \in B(x^*, R) \setminus \{x^*\}$ is well defined, remains in $B(x^*, R)$, and converges to $x^*$, so that
$$\|y_n - x^*\| \le g_1(\|x_n - x^*\|)\,\|x_n - x^*\| \le \|x_n - x^*\| < R \qquad (14)$$
and
$$\|x_{n+1} - x^*\| \le g_2(\|x_n - x^*\|)\,\|x_n - x^*\|, \qquad (15)$$
with the functions $g_1$ and $g_2$ considered previously and $R$ given in (9). Moreover, $x^*$ is the only solution of Equation (1) in the set $\Omega_1$ given in (a5).

Proof. We prove the estimates (14) and (15) by mathematical induction. Therefore, we consider an arbitrary $x_0 \in B(x^*, R) \setminus \{x^*\}$. By (a1), (a2), (9), and (10), we have the estimate (16); hence, by the Banach lemma on invertible operators, the linear operator appearing in the first substep of method (2) is invertible and (17) holds. The point $y_0$ also exists, by (17), for $n = 0$. Now, by using (a1), we have (18). From (17) and (18), we obtain (19). We can also write, by method (2) for $n = 0$, the identity (20). By expressions (9), (11), (17), (19), and (20), we obtain in turn the estimate (21), which confirms $y_0 \in B(x^*, R)$ and (14) for $n = 0$. We need to show that the linear operator appearing in the second substep of method (2) is also invertible. In view of (a3), (12), and (21), we have (22), so (23) holds. Using (9), (13), (17), (a3), (21), (23), and the second substep of method (2) (since $x_1$ exists by (23)), we can first write (24), so (25) follows. Hence, (15) holds for $n = 0$ and $x_1 \in B(x^*, R)$. To obtain estimate (25), we also used the estimate (26). The induction for (14) and (15) can be completed if $x_k$, $y_k$, and $x_{k+1}$ replace $x_0$, $y_0$, and $x_1$ in the preceding estimations. Then, from the estimate $\|x_{k+1} - x^*\| \le c\,\|x_k - x^*\| < R$, with $c = g_2(\|x_0 - x^*\|) \in [0, 1)$, we arrive at $x_{k+1} \in B(x^*, R)$ and $\lim_{k \to \infty} x_k = x^*$. Finally, for the uniqueness part, let us consider a point $y^* \in \Omega_1$ with $F(y^*) = 0$. From (a1) and (a2), we obtain that the averaged linear operator $T$ linking $F(y^*) - F(x^*)$ to $y^* - x^*$ is invertible. So, $x^* = y^*$ by the identity $0 = F(y^*) - F(x^*) = T(y^* - x^*)$.
□
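For readability, here is the standard form of that uniqueness argument, written as a sketch in assumed conventional notation (the integral form of $T$ and the bound via $w_0$ are assumptions of this sketch, consistent with conditions (A)):

```latex
% Sketch of the standard uniqueness argument (conventional notation assumed).
\[
  T = \int_0^1 F'\bigl(x^* + \theta\,(y^* - x^*)\bigr)\,d\theta,
  \qquad
  \|F'(x^*)^{-1}(T - F'(x^*))\|
  \le \int_0^1 w_0\bigl(\theta\,\|y^* - x^*\|\bigr)\,d\theta < 1,
\]
so the Banach lemma on invertible operators gives $T^{-1} \in \mathcal{L}(Y, X)$, and
\[
  0 = F(y^*) - F(x^*) = T\,(y^* - x^*)
  \quad\Longrightarrow\quad y^* = x^* .
\]
```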
Next, we deal with method (3) in an analogous way, using the same notation as previously. Let $w_0$, $w$, and $v$ be as previously. We assume that the corresponding scalar equation has a minimal positive solution $\bar{\rho}$, and consider the interval $[0, \bar{\rho})$. Define the functions $\bar{g}_2$ and $\bar{h}_2$ on this interval by the associated formulas. Then, $\bar{h}_2(0) < 0$ and $\bar{h}_2(t) \to +\infty$ as $t \to \bar{\rho}^{-}$. The minimal solution of the equation $\bar{h}_2(t) = 0$ in $(0, \bar{\rho})$ is denoted by $\bar{r}_2$, and we set $R_1 = \min\{r_1, \bar{r}_2\}$.
Replace $R$ by $R_1$ in the conditions (A) and call the resulting conditions (A$_1$). Moreover, we use the estimate (32), obtained from the second substep of method (3). Then, by replacing (24) by (32) in the proof of Theorem 1, we obtain (33) instead of (25). The rest follows as in Theorem 1. Hence, we arrive at the next theorem.
Theorem 2. Under the conditions (A$_1$), the conclusions of Theorem 1 hold for method (3).

Next, we deal with method (4) in a similar way. Let $w_0$, $w$, and $v$ be as in the case of method (3). We consider the functions $\tilde{g}_2$ and $\tilde{h}_2$ on the corresponding interval, given by the associated formulas. The minimal zero of $\tilde{h}_2$ in that interval is denoted by $\tilde{r}_2$, and we set $R_2 = \min\{r_1, \tilde{r}_2\}$; the conditions (A$_2$) are obtained from (A) by replacing $R$ with $R_2$. Notice again that, from the second substep of method (4), we have the analogue of (24), and so the analogue of (25). The rest follows as in Theorem 1. Hence, we arrive at the following theorem.
Theorem 3. Under the conditions (A$_2$), the conclusions of Theorem 1 hold for scheme (4).

Finally, we deal with method (5). Let $w_0$ be as in method (2). Let also four further majorant functions be continuous and increasing with value zero at the origin. We consider the functions $\hat{g}_1$ and $\hat{h}_1$ on $[0, \rho_0)$ given in (37). Then, by (6) and (37), $\hat{h}_1(0) < 0$ and $\hat{h}_1(t) \to +\infty$ as $t \to \rho_0^{-}$. The minimal zero of $\hat{h}_1$ in $(0, \rho_0)$ is denoted by $\hat{r}_1$. We assume that Equation (38), with the quantities appearing there, has a minimal positive solution $\hat{\rho}$. Set $\hat{\rho}_0 = \min\{\rho_0, \hat{\rho}\}$. We next consider the functions $\hat{g}_2$ and $\hat{h}_2$ on $[0, \hat{\rho}_0)$ given in (39). By (39) and the definition of $\hat{\rho}_0$, we have $\hat{h}_2(0) < 0$ and $\hat{h}_2(t) \to +\infty$ as $t \to \hat{\rho}_0^{-}$. We denote by $\hat{r}_2$ the minimal solution of $\hat{h}_2(t) = 0$ in $(0, \hat{\rho}_0)$. Set
$$R_3 = \min\{\hat{r}_1, \hat{r}_2\}. \qquad (40)$$
The study of the local convergence of scheme (5) depends on the conditions (C):
- (c1) = (a1).
- (c2) = (a2).
- (c3) There exist six increasing and continuous majorant functions, with value zero at the origin where required, so that the corresponding conditions hold for all $x, y \in \Omega_0$.
- (c4) The radii given, respectively, by (6) and (38) exist; the conditions (37) and (38) hold; and $R_3$ is defined in (40).
- (c5) = (a5).
Then, using the estimates (41) and (42), obtained from the substeps of method (5), the proof proceeds as that of Theorem 1; here, recalling the definitions above, we also used two auxiliary estimates to obtain (41) and (42). Hence, we arrive at the following theorem.
Theorem 4. Under the conditions (C), the conclusions of Theorem 1 hold for method (5).

3. Numerical Applications
We test the theoretical results on five examples, displayed in Examples 1–5: the first is a counterexample where the earlier results are not applicable; the next three are real-life problems, namely a chemical engineering problem, an electron trajectory in the air gap between two parallel surfaces, and a Hammerstein-type integral equation; and the last one is a boundary value problem in which method (5) compares favorably with the other three methods. Moreover, the solution of the corresponding problem is also listed in each example, correct up to 20 significant digits. The desired roots are actually available to a much larger number of significant digits (a minimum of one thousand), but due to page restrictions only 30 significant digits are displayed.
We compare the four methods (2)–(5), denoted accordingly in the tables, on the basis of the radii of the convergence ball and the approximated computational order of convergence $\rho$ (for the details, please see Cordero and Torregrosa [5]). The radii of ball convergence are included in Table 1, Table 2, Table 3, Table 5 and Table 6, while Table 4 contains the values of the abscissas and weights used in Example 4. We use a programming package with multiple precision arithmetic for the computational work.
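The approximated computational order of convergence $\rho$ of Cordero and Torregrosa [5] is obtained from norms of consecutive differences of iterates. A minimal sketch follows; the function name `acoc` and the iterate history `xs` are hypothetical, produced by whichever method is being tested.

```python
import numpy as np

def acoc(xs):
    """Approximated computational order of convergence (Cordero-Torregrosa [5])
    from a history of iterates xs = [x0, x1, ..., xN] (scalars or vectors)."""
    xs = [np.atleast_1d(np.asarray(x, dtype=float)) for x in xs]
    d = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    # rho_k = ln(d_k / d_{k-1}) / ln(d_{k-1} / d_{k-2}); report the last available value.
    rhos = [np.log(d[k] / d[k - 1]) / np.log(d[k - 1] / d[k - 2])
            for k in range(2, len(d))]
    return rhos[-1]

# Hypothetical usage with a stored iterate history:
# print(acoc([x0, x1, x2, x3, x4]))
```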
We make the same parameter choices in all the examples. The divided difference $[\cdot, \cdot; F]$ is a divided difference of order one for $F$ (a common componentwise choice is sketched below). In addition, we choose the following stopping criteria: (i) $\|x_{n+1} - x_n\| < \epsilon$ and (ii) $\|F(x_n)\| < \epsilon$, where $\epsilon$ is the prescribed tolerance.
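Several first-order divided differences are in use; the componentwise choice below is one common option and is given only as an assumed illustration (the operator actually used in the paper may differ), together with the two stopping criteria in the form stated above. The function names and the default tolerance are placeholders.

```python
import numpy as np

def divided_difference(F, x, y):
    """One common first-order divided difference [x, y; F] for F: R^m -> R^m
    (assumed here for illustration; it requires x_j != y_j for every j).
    Column j uses F evaluated with the first j components taken from x and
    the remaining components taken from y."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    m = x.size
    A = np.zeros((m, m))
    for j in range(m):
        zj = np.concatenate((x[:j + 1], y[j + 1:]))      # x_1,...,x_j, y_{j+1},...,y_m
        zj_prev = np.concatenate((x[:j], y[j:]))         # x_1,...,x_{j-1}, y_j,...,y_m
        A[:, j] = (F(zj) - F(zj_prev)) / (x[j] - y[j])
    return A

def stopped(x_next, x_curr, F, tol=1e-12):
    """Stopping criteria (i) ||x_{n+1} - x_n|| < tol and (ii) ||F(x_n)|| < tol.
    The tolerance value is a placeholder; the paper works in multiple precision."""
    return np.linalg.norm(x_next - x_curr) < tol and np.linalg.norm(F(x_curr)) < tol
```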
Example 1. We revisit the motivational function λ on Ω from the Introduction, whose third derivative is unbounded on Ω. The results in Nedzhibov [1], Hueso et al. [2], Junjua et al. [3], and Behl et al. [4] cannot be utilized for this problem: conditions on the fifth (or possibly even higher-order) derivative of λ are assumed there to obtain the convergence of these methods, whereas our results only require conditions on the first derivative. In addition, we can choose the majorant functions accordingly. The distinct radii of convergence, the number of iterations n, and the COC (ρ) are reported in Table 1.

Example 2. The function in this example appears in the study of the conversion of a hydrogen–nitrogen feed to ammonia [6,7]. The function has four zeros, but we choose the physically meaningful one. Moreover, we can choose the majorant functions accordingly. The distinct radii of convergence, the number of iterations n, and the COC (ρ) are reported in Table 2.

Example 3. The trajectory of an electron in the air gap between two parallel surfaces is formulated by an equation involving the charge e and the rest mass m of the electron, its position and velocity at a given time, and the RF electric field between the two surfaces. For particular values of these parameters, a simpler scalar expression is obtained, whose solution is listed to the stated accuracy. Moreover, we can choose the majorant functions accordingly. The distinct radii of convergence, the number of iterations n, and the COC (ρ) are reported in Table 3.

Example 4. Consider the mixed Hammerstein integral equation (47) of Ortega and Rheinboldt [8], with the kernel U given there. We discretize (47) by using the Gauss–Legendre quadrature formula, with the corresponding abscissas and weights. Denoting the approximations of the unknown function values at the abscissas appropriately, we obtain a system of nonlinear equations. The values of the abscissas and weights, which can easily be obtained from the Gauss–Legendre quadrature formula, are mentioned in Table 4. The required approximate root is listed to the stated accuracy, and we can choose the majorant functions accordingly. The distinct radii of convergence, the number of iterations n, and the COC (ρ) are reported in Table 5.

Example 5. We consider a boundary value problem from [8], defined in (48). We assume a partition of the underlying interval and discretize the BVP (48) accordingly. Then, we obtain a nonlinear system of the corresponding order, together with an initial approximation. In particular, we choose the number of subintervals so that a nonlinear system of moderate size is obtained. The required solution of this problem, the distinct radii of convergence, the number of iterations n, and the COC (ρ) are reported in Table 6.

4. Conclusions
The convergence order of iterative methods is usually obtained by using Taylor series, and it requires the existence of high-order derivatives. Consequently, upper error bounds on $\|x_n - x^*\|$ and uniqueness results are not reported with this technique. Hence, the applicability of these methods is limited to operators with high-order derivatives. To address these problems, we present local convergence results based only on the first derivative. Moreover, we compare methods (2)–(5). Notice that our convergence criteria are sufficient but not necessary. Therefore, if, e.g., the radius of convergence obtained for method (5) is zero, that does not necessarily imply that the method does not converge for a particular numerical example. Our approach can be adopted to expand the applicability of other methods in an analogous way.