1. Introduction
We consider solving the equation
$$F(x) = 0 \qquad (1)$$
using iterative methods. Here $F \colon \Omega \subset E_1 \to E_2$ is a nonlinear operator, $E_1$ and $E_2$ are Banach spaces, and $\Omega$ is an open region of $E_1$.
The Secant method
$$x_{n+1} = x_n - [x_{n-1}, x_n; F]^{-1} F(x_n), \quad n = 0, 1, \ldots, \qquad (2)$$
is a popular device for solving nonlinear equations. Its popularity is due to the simplicity of the method, the small amount of computation at each iteration, and the fact that the iterative formula uses the value of the operator from only the two previous iterations. Many works are dedicated to this method [1,2,3]. In [4], the Secant method is used for solving the nonlinear least squares problem. Kurchatov's method of linear interpolation
$$x_{n+1} = x_n - [2x_n - x_{n-1}, x_{n-1}; F]^{-1} F(x_n), \quad n = 0, 1, \ldots, \qquad (3)$$
is less known. This method has the same order of convergence as Newton's method but does not require the calculation of derivatives. In (2) and (3), $[u, v; F]$ denotes a divided difference of the first order of the operator $F$ at the points $u$ and $v$ [5,6].
In this work we investigate the Secant method and Kurchatov's method using Kantorovich's principle of majorants. This principle was first used by L. V. Kantorovich to investigate the convergence of the classical and modified Newton's methods by constructing a real quadratic function that majorizes the nonlinear operator [7]. Accordingly, the iterative sequence for the nonlinear operator is majorized by a convergent sequence for a nonlinear equation in one real variable. Later, nonlinear majorants were constructed for investigating other methods of solving nonlinear functional equations. In [8], the majorant principle is used to study a method whose iterative formula uses the value of the operator from the three previous iterations; specifically, a real cubic polynomial that majorizes the given nonlinear operator is constructed, and Lipschitz conditions are imposed on the divided-difference operator of the second order [8,9]. We investigate the Secant method under different conditions imposed on the nonlinear operator. In particular, if a Lipschitz condition for the divided differences of the first order is fulfilled, a quadratic majorizing function of one variable is constructed, and if a Lipschitz condition for the divided-difference operator of the second order is fulfilled, a cubic majorizing function is constructed. A cubic majorizing function for Kurchatov's method is also constructed. The methods of linear interpolation applied to these functions produce a numerical sequence that majorizes in norm the iterative sequence produced by applying these methods to the nonlinear operator. In all cases, a priori and a posteriori error estimates for the linear interpolation methods are also provided.
2. Divided Differences and Their Properties
Let us assume that $x$, $y$ and $z$ are three points in the region $\Omega$.
Definition 1 ([6]). Let $F$ be a nonlinear operator defined on a subset $\Omega$ of a Banach space $E_1$ with values in a Banach space $E_2$, and let $x$, $y$ be two points of $\Omega$. A linear operator from $E_1$ to $E_2$, denoted by $[x, y; F]$, which satisfies the conditions:
(1) $[x, y; F](x - y) = F(x) - F(y)$ for all fixed two points $x, y \in \Omega$; $\qquad (4)$
(2) if there exists a Fréchet derivative $F'(x)$, then $[x, x; F] = F'(x)$, $\qquad (5)$
is called a divided difference of $F$ at the points $x$ and $y$. Note that (4) and (5) do not uniquely determine the divided difference, with the exception of the case when $E_1$ is one-dimensional. For specific spaces, the divided differences are defined in Section 6.
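As an illustration of conditions (4) and (5), the following sketch uses the integral-mean construction $[x, y; F] = \int_0^1 F'(y + t(x - y))\,dt$, which is one admissible choice of divided difference when $F'$ exists, and a different one from the componentwise construction used later in Section 6; this also illustrates the non-uniqueness noted above. The map $F$ below is an illustrative example, not taken from the paper.

```python
import numpy as np

# Integral-mean divided difference [x, y; F] = integral_0^1 F'(y + t (x - y)) dt.
# It satisfies (4) by the fundamental theorem of calculus and reduces to F'(x)
# when y = x, as required by (5).

def F(v):
    x1, x2 = v
    return np.array([x1**2 + x2 - 3.0, x1 + x2**3 - 1.0])

def Fprime(v):
    x1, x2 = v
    return np.array([[2.0 * x1, 1.0],
                     [1.0, 3.0 * x2**2]])

def divided_difference(x, y):
    # Simpson's rule in t; exact here because the integrand is quadratic in t.
    nodes, weights = [0.0, 0.5, 1.0], [1.0 / 6.0, 4.0 / 6.0, 1.0 / 6.0]
    return sum(w * Fprime(y + t * (x - y)) for t, w in zip(nodes, weights))

x = np.array([1.3, 0.7])
y = np.array([0.9, 1.1])
A = divided_difference(x, y)
print(np.allclose(A @ (x - y), F(x) - F(y)))              # condition (4)
print(np.allclose(divided_difference(x, x), Fprime(x)))   # condition (5)
```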
Definition 2 ([8]). The operator $[x, y, z; F]$ is called a divided difference of the second order of the function $F$ at the points $x$, $y$ and $z$ if
$$[x, y, z; F](y - z) = [x, y; F] - [x, z; F]. \qquad (6)$$
We assume that for $[x, y; F]$ and $[x, y, z; F]$ conditions of the Lipschitz type are satisfied in the following form:
$$\|[x, y; F] - [u, v; F]\| \le p\,(\|x - u\| + \|y - v\|), \quad x, y, u, v \in \Omega, \qquad (7)$$
$$\|[x, y, z; F] - [u, v, z; F]\| \le q\,(\|x - u\| + \|y - v\|), \quad x, y, z, u, v \in \Omega. \qquad (8)$$
If the divided difference $[x, y; F]$ of $F$ satisfies (7) or (8), then $F$ is Fréchet differentiable on $\Omega$. Moreover, if (7) and (8) are fulfilled, then the Fréchet derivative is Lipschitz continuous on $\Omega$ [8].
Let us denote the auxiliary sets used below, which are determined by the initial data. The semilocal convergence of the Secant method uses the conditions (C):

$F$ is a nonlinear operator, with $[\cdot\,,\cdot\,; F]$ denoting a first-order divided difference on $\Omega$. Let $x_{-1}, x_0 \in \Omega$. Suppose that the linear operator $[x_{-1}, x_0; F]$ is invertible, and let nonnegative numbers be given that bound the initial data. Assume that the stated conditions hold on one of the auxiliary sets or on the other. Moreover, assume that Lipschitz conditions of the type (7) hold for all points of the first set, for some nonnegative constants; set the resulting quantities and define the second set, provided the associated restriction is satisfied. Analogous Lipschitz conditions, with their own constants, hold on the second set. Set the corresponding quantities, suppose the resulting smallness condition holds, and define the majorizing constants, one of which is the unique root, in the indicated interval, of the associated scalar equation.
Remark 1. The following Lipschitz condition is used in the literature for the study of iterative methods based on divided differences [1,2,3,4,8,9,10,11,12,13,14,15,16], although it is not really needed, since the tighter conditions introduced above suffice (see the conditions and proofs that follow). By these definitions we have the relations (12). The sufficient semilocal convergence criterion in the literature, arrived at in different ways and corresponding to [1], is (11), provided that the stronger form of the preceding conditions holds. Then, the old criterion (11) implies the new one (10), but not necessarily vice versa, and the new constants are at most as large as the old ones for each choice of the data.
Hence, the applicability of the Secant method is extended under no additional conditions, since all new Lipschitz conditions are specializations of the old condition. Moreover, in practice the computation of the old constant requires that of the other, tighter constants as special cases. Some more advantages are reported after Proposition 1. It is also worth noticing that the initial points and bounds help define the set $\Omega_0$, through which the new constants and $p$ are defined too. With the old approach, $p$ depends only on $\Omega$, which contains $\Omega_0$. In our approach the iterates remain in $\Omega_0$ (not $\Omega$, used in [1]). That is why our new $p$ constants are at least as tight as the old one. This is where the novelty of our paper lies, and the new idea helps us extend the applicability of these methods. It is also worth noticing that the new constants are specializations of the old ones. Hence, no additional conditions are needed to obtain these extensions.
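A deliberately simple scalar sketch of this point: a Lipschitz-type constant for the divided difference is estimated over a large interval, playing the role of $\Omega$, and over a smaller interval containing the iterates, playing the role of $\Omega_0$. The function and the intervals are illustrative assumptions only; the constant over the smaller set can never exceed the one over the larger set.

```python
import itertools
import numpy as np

# Empirical Lipschitz-type constant of the scalar divided difference
#   [u, v; f] = (f(u) - f(v)) / (u - v),
# estimated by sampling pairs of point pairs in a given interval.

def dd(f, u, v):
    return (f(u) - f(v)) / (u - v)

def empirical_constant(f, interval, samples=21):
    pts = np.linspace(*interval, samples)
    pairs = [(u, v) for u, v in itertools.product(pts, pts) if u != v]
    best = 0.0
    for (u1, v1), (u2, v2) in itertools.combinations(pairs, 2):
        denom = abs(u1 - u2) + abs(v1 - v2)
        if denom > 0.0:
            best = max(best, abs(dd(f, u1, v1) - dd(f, u2, v2)) / denom)
    return best

f = lambda t: t**3
print(empirical_constant(f, (-2.0, 2.0)))   # constant over the larger set (role of p)
print(empirical_constant(f, (0.5, 1.5)))    # constant over the smaller set: never larger
```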
It is worth noting from the proof of Theorem 1 that the quantity appearing there can equivalently be defined in either of two ways.
Theorem 1. Suppose that the conditions (C) hold. Then, the iterative procedure (2) is well defined and the sequence generated by it converges to a root $x^*$ of the equation $F(x) = 0$. Moreover, the error estimate (13) holds, with the majorizing scalar sequence defined as in the proof below.

The semilocal convergence of the discussed methods was previously based on the verification of the criterion (11). If this criterion is not satisfied, there is no guarantee that the methods converge. We have now replaced (11) by the weaker criterion (10) (see (12)).
Proof. Notice that the majorizing scalar sequence is generated by applying the iterative method (2) to a real quadratic polynomial. It is easy to see that this sequence converges monotonically to zero. In addition, we have the relations used below.
We prove by induction that the iterative method is well defined and that (17) holds. Using (13), (14) and the conditions (C), it follows that (17) holds in the base case. Let $k$ be a nonnegative integer and suppose that (17) is fulfilled for all preceding indices. Then, by the conditions (C), we obtain the corresponding estimate, and in view of the Banach lemma [7] the operator $[x_{k-1}, x_k; F]$ is invertible, with the bound (20). Next, we prove that the iterative method exists for $n = k + 1$. We get
By the conditions (C), we have the corresponding estimate. Then, it follows from (20)–(22), and in view of (16) and (17), that the bound for the new iterate holds, so the iterative method is well defined for each $n$ and (23) holds. Estimate (23) shows that $\{x_n\}$ is a Cauchy sequence in the space $E_1$, so it converges. Letting $k$ tend to infinity in formula (23), we obtain (13). It is easy to see that the limit $x^*$ is a root of the equation $F(x) = 0$, because, according to (22), $\|F(x_n)\| \to 0$ as $n \to \infty$.
□
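The following sketch illustrates the majorization mechanism of the proof: the same secant iteration (2), applied to a real quadratic polynomial with a smallest positive root, generates a monotone scalar sequence whose distance to that root plays the role of the a priori bound in (13). The polynomial and the starting values are arbitrary illustrative choices, not the majorant constructed in the theorem.

```python
import numpy as np

# Kantorovich-type majorization, schematically: the secant iteration applied to a
# real quadratic g produces a monotone sequence converging to its smallest root.

g = np.poly1d([1.0, -3.0, 1.0])     # g(t) = t^2 - 3 t + 1, smallest root about 0.382
t_star = min(g.roots)

t_prev, t_curr = 0.0, 0.1           # illustrative starting values below t_star
bounds = []
for _ in range(6):
    dd = (g(t_curr) - g(t_prev)) / (t_curr - t_prev)
    t_prev, t_curr = t_curr, t_curr - g(t_curr) / dd
    bounds.append(t_star - t_curr)  # plays the role of t_star - t_n in (13)

print(bounds)                       # decreases monotonically towards zero
```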
Corollary 1. The convergence order of the iterative Secant method (2) is equal to $(1 + \sqrt{5})/2$.

Proof. From equality (15) it follows that the order of convergence of the real majorizing sequence is the unique positive root of the equation $t^2 - t - 1 = 0$, i.e., $(1 + \sqrt{5})/2$. Given inequality (13), according to Kantorovich's majorant principle, we obtain that the sequence $\{x_n\}$ also has order of convergence $(1 + \sqrt{5})/2$. □
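A quick numerical illustration of Corollary 1: for the scalar secant iteration the ratios $\log e_{n+1} / \log e_n$ of successive errors approach $(1 + \sqrt{5})/2 \approx 1.618$. The equation and the starting points below are illustrative.

```python
import math

# Empirical check: secant errors behave like e_{n+1} ~ C * e_n^r with
# r = (1 + sqrt(5)) / 2, so log(e_{n+1}) / log(e_n) approaches r.

f = lambda x: math.exp(x) - 2.0
root = math.log(2.0)

errors = []
x_prev, x_curr = 0.0, 1.0
for _ in range(6):
    dd = (f(x_curr) - f(x_prev)) / (x_curr - x_prev)
    x_prev, x_curr = x_curr, x_curr - f(x_curr) / dd
    errors.append(abs(x_curr - root))

order_estimates = [math.log(e_next) / math.log(e) for e, e_next in zip(errors, errors[1:])]
print(order_estimates)              # values approaching 1.618...
print((1 + math.sqrt(5)) / 2)
```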
Concerning the uniqueness of the solution, we have the following result.
Proposition 1. Under the conditions (C), suppose in addition that a Lipschitz-type condition holds on a suitable ball containing the solution and that the corresponding smallness condition is satisfied, where $x^*$ is a solution of the equation $F(x) = 0$. Then, $x^*$ is the only solution of the equation $F(x) = 0$ in the indicated set.

Proof. Let $y^*$ be another solution in this set. Then, using the divided difference $[x^*, y^*; F]$ and the Banach lemma, we get that the corresponding operator is invertible, so $x^* = y^*$ follows from $F(x^*) = F(y^*) = 0$.
□
Remark 2. The result in Proposition 1 improves the corresponding one in the literature based on the old condition, since the new constants are at most as large as the old ones. Hence, we obtain a larger ball inside which the uniqueness of the solution $x^*$ is guaranteed.
If, additionally, the second divided difference of the function $F$ exists and satisfies the Lipschitz condition with constant $q$, then the majorizing function is a cubic polynomial, and the following theorem holds.
Theorem 2. Under the conditions (C), with the indicated exception, further suppose that the second divided difference of $F$ exists and satisfies the Lipschitz condition with constant $q$. Let $h$ be the corresponding real cubic polynomial, suppose that the associated inequality is satisfied together with the stated inclusion of the closed ball, whose radius is the root of the corresponding equation, and assume that the conditions of Proposition 1 hold on this ball. Then, the iterative method (2) is well defined and the sequence generated by it converges to the solution $x^*$ of the equation $F(x) = 0$. Moreover, an error estimate analogous to (13) is satisfied, with the majorizing sequence generated by the cubic polynomial $h$. The proof is analogous to the proof of Theorem 1.
Remark 3. The majorizing sequences are more precise than the ones in [1] (using the new constants instead of the old ones, respectively). Similar advantages are reported in the case of Theorem 2; see, e.g., [1], where a stronger condition is used on $\Omega$ instead.

6. Numerical Experiments
In this section, we verify the conditions of the theorems on convergence of the considered methods for some nonlinear operators, and also compare the old and new radii of the convergence domains and error estimates. We first consider the representation of the first-order divided differences for specific nonlinear operators [5,6].
Let $E_1 = E_2 = \mathbb{R}^m$. Then $F(x) = 0$ is a nonlinear system of $m$ algebraic and transcendental equations with $m$ variables. In this case $[x, y; F]$ is the matrix with entries
$$[x, y; F]_{i,j} = \frac{F_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - F_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j}, \quad i, j = 1, \ldots, m.$$
If $x_j = y_j$ for some $j$, then the corresponding entries are defined by the partial derivatives $\partial F_i / \partial x_j$.
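A sketch of this componentwise divided-difference matrix follows; by the telescoping structure of its columns it satisfies condition (4) exactly, up to rounding. The $2 \times 2$ map used for the check is an illustrative example, not one of the paper's test problems.

```python
import numpy as np

# Componentwise divided-difference matrix for F: R^m -> R^m.
# Column j mixes the first j components of x with the last m - j components of y,
# so that summing the columns times (x_j - y_j) telescopes to F(x) - F(y).

def divided_difference_matrix(F, x, y):
    m = len(x)
    A = np.zeros((m, m))
    for j in range(m):
        upper = np.concatenate([x[:j + 1], y[j + 1:]])   # (x_1, ..., x_j, y_{j+1}, ..., y_m)
        lower = np.concatenate([x[:j], y[j:]])           # (x_1, ..., x_{j-1}, y_j, ..., y_m)
        A[:, j] = (F(upper) - F(lower)) / (x[j] - y[j])
    return A

def F(v):
    x1, x2 = v
    return np.array([x1**2 + x2**2 - 1.0, x1 - np.sin(x2)])

x = np.array([0.8, 0.6])
y = np.array([0.7, 0.5])
A = divided_difference_matrix(F, x, y)
print(np.allclose(A @ (x - y), F(x) - F(y)))   # True: condition (4) holds
```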
Let us consider a nonlinear integral equation whose kernel is a continuous function of its arguments and is continuously differentiable with respect to $x$. In this case $[x, y; F]$ is defined by the corresponding integral formula. If the corresponding regularity condition holds for some constant, then the associated Lipschitz condition is satisfied.
Example 1. In this scalar example, the function $F$, the exact solution of the equation $F(x) = 0$, and the divided differences can be written explicitly. Choosing the initial approximations, we compute the constants required by the theorems for the Secant method and for Kurchatov's method, as well as the corresponding constants for the theorems in [1]. Table 1 reports the radii and convergence domains of the considered methods; they are solutions of the corresponding equations and satisfy the required condition. We see that the convergence criteria hold; moreover, for Kurchatov's method the corresponding conditions are also satisfied. So, the assumptions of the theorems are fulfilled. Next, we show that the error estimates hold and compare them with the corresponding ones in [1].
Table 2 and Table 3 give results for the Secant method (2), and Table 4 for Kurchatov's method (3). Table 2, Table 3 and Table 4 show the superiority of our results over the earlier ones, i.e., the obtained error estimates are tighter in all cases. That means fewer iterates than before are needed to reach a predetermined error tolerance.
Example 2. In this example we consider a nonlinear system whose exact solution is known. For this problem we verify the conditions (C) and the corresponding ones from [1]. Choosing the initial approximations and carrying out the calculations, we obtain the required constants. The scalar equation defining the radius has two solutions, only one of which satisfies the required restriction; it therefore determines the corresponding radius and domain. Analogously, the second scalar equation has two solutions, and again only one of them is admissible, which determines the remaining radius and domain. In view of (14) and (15), and Remark 1, the estimates (13) are tighter than the corresponding ones in [1]. The Secant and Kurchatov's methods solve this system in under 5 iterations for the given tolerance and the specified initial approximations.
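Since the system of Example 2 is not reproduced above, the following sketch applies Kurchatov's iteration (3), with the componentwise divided-difference matrix of this section, to a hypothetical two-dimensional system; the system, starting points and tolerance are placeholder assumptions.

```python
import numpy as np

# Kurchatov's iteration (3) for a system F(x) = 0, using the componentwise
# divided-difference matrix evaluated at 2 x_n - x_{n-1} and x_{n-1}.

def dd_matrix(F, x, y):
    m = len(x)
    A = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate([x[:j + 1], y[j + 1:]])
        lower = np.concatenate([x[:j], y[j:]])
        A[:, j] = (F(upper) - F(lower)) / (x[j] - y[j])
    return A

def kurchatov_system(F, x_prev, x_curr, tol=1e-10, max_iter=20):
    for n in range(max_iter):
        A = dd_matrix(F, 2 * x_curr - x_prev, x_prev)    # [2 x_n - x_{n-1}, x_{n-1}; F]
        step = np.linalg.solve(A, F(x_curr))
        x_prev, x_curr = x_curr, x_curr - step
        if np.linalg.norm(step) < tol:
            return x_curr, n + 1
    return x_curr, max_iter

def F(v):
    x1, x2 = v
    return np.array([x1**2 + x2**2 - 1.0, x1**3 - x2])   # hypothetical 2 x 2 system

solution, iterations = kurchatov_system(F, np.array([0.9, 0.9]), np.array([0.8, 0.6]))
print(solution, iterations)
```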
Example 3. In this example we consider a nonlinear integral equation whose solution is known, and the divided difference can be written explicitly in view of $F$. Choosing the initial approximations, both methods give an approximate solution of the integral equation in under 13 iterations for the given tolerance. To solve the linear integral equation at each iteration, the Nystrom method was applied, using a trapezoidal quadrature formula with 101 nodes. The graphs in Figure 1 compare the computed approximation with the exact solution; the observed error corresponds to the error estimate of the trapezoidal quadrature formula.
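A schematic version of the computation described in Example 3: a Nystrom-type discretization by the trapezoidal rule with 101 nodes, followed by the secant iteration (2) on the resulting algebraic system. The integral equation, kernel, nonlinearity and starting functions below are placeholder assumptions, not the actual data of the example; the small guard in the divided-difference routine is a practical safeguard against nearly coincident components, not something taken from the paper.

```python
import numpy as np

# Trapezoidal (Nystrom-type) discretization of a placeholder nonlinear integral
# equation, solved by the secant iteration (2) with a componentwise divided
# difference.

nodes = 101
s = np.linspace(0.0, 1.0, nodes)
w = np.full(nodes, 1.0 / (nodes - 1))
w[0] = w[-1] = 0.5 / (nodes - 1)                       # trapezoidal weights

def F(x):
    # discretization of  x(s) - (1/8) * integral_0^1 (s + t) x(t)^2 dt - 1 = 0
    integral = s * (w @ (x**2)) + (w @ (s * x**2))
    return x - 0.125 * integral - 1.0

def dd_matrix(F, x, y, shift=1e-7):
    m = len(x)
    A = np.empty((m, m))
    for j in range(m):
        xj = x.copy()
        if abs(xj[j] - y[j]) < 1e-12:                  # avoid a 0/0 column
            xj[j] = y[j] + shift
        upper = np.concatenate([xj[:j + 1], y[j + 1:]])
        lower = np.concatenate([xj[:j], y[j:]])
        A[:, j] = (F(upper) - F(lower)) / (xj[j] - y[j])
    return A

x_prev = np.zeros(nodes)
x_curr = np.ones(nodes)
for n in range(15):
    A = dd_matrix(F, x_prev, x_curr)                   # [x_{n-1}, x_n; F]
    step = np.linalg.solve(A, F(x_curr))
    x_prev, x_curr = x_curr, x_curr - step
    if np.linalg.norm(step, np.inf) < 1e-10:
        break

print(n + 1, np.linalg.norm(F(x_curr), np.inf))        # iterations used and final residual
```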