
A Newton-like Midpoint Method for Solving Equations in Banach Space

1 Department of Mathematics, University of Houston, Houston, TX 77204, USA
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, Hans Raj Mahila Mahavidyalaya, Jalandhar 144008, Punjab, India
4 Department of Mathematics, Indira Gandhi National Tribal University, Lalpur, Amarkantak, Anuppur 484887, Madhya Pradesh, India
* Author to whom correspondence should be addressed.
Foundations 2023, 3(2), 154-166; https://doi.org/10.3390/foundations3020014
Submission received: 25 February 2023 / Revised: 16 March 2023 / Accepted: 23 March 2023 / Published: 27 March 2023
(This article belongs to the Section Mathematical Sciences)

Abstract:
The present paper includes the local and semilocal convergence analysis of a fourth-order method based on the quadrature formula in Banach spaces. The weaker hypotheses used are based only on the first Fréchet derivative. The new approach provides the residual errors, number of iterations, convergence radii, expected order of convergence, and estimates of the uniqueness of the solution. Such estimates are not provided in the approaches using Taylor expansions involving higher-order derivatives, which may not exist or may be very expensive or impossible to compute. Numerical examples, including a nonlinear integral equation and a partial differential equation, are provided to validate the theoretical results.
MSC:
47J25; 49M15; 65J15; 65H10; 65G99

1. Introduction

In the field of numerical analysis, a significant role is played by numerical methods for solving nonlinear equations. Due to the lack of analytical methods, iterative techniques are required to approximate the solutions. One of the foremost reasons to use numerical methods for solving nonlinear transcendental equations is their ability to handle non-analytic and complex functions. Such equations arise in diverse disciplines of science and engineering [1,2,3,4]. For example, in physics, nonlinear equations often describe the behavior of systems with multiple interacting components, such as the Navier–Stokes equations in fluid dynamics. In engineering, nonlinear equations are used to model the behavior of materials under different loads and conditions. The ability to handle large and complex systems is another essential reason to use numerical methods. Nonlinear equations generally describe the behavior of systems with many interacting components, and solving them analytically can be extremely difficult, if not impossible. Numerical methods provide a way to break down these large systems into smaller, more manageable parts and find approximate solutions using iterative techniques.
A plethora of iterative methods are used for solving nonlinear transcendental equations, including fixed-point iteration, root-finding methods, and the Newton–Raphson method. Each method has its own strengths and limitations, and the selection of the method depends on the particular equation being solved and the required accuracy level. For instance, the bisection method is one of the simplest and most robust methods for finding the roots of an equation, but it has the disadvantage of converging slowly. The Newton–Raphson method, on the other hand, is faster and more accurate, but it requires the derivative of the function and may not converge for certain functions or starting points.
Moreover, the choice of numerical method depends on the specific equation being solved, the interval containing the solutions, the number of solutions, and the desired accuracy level. For example, the bisection method is a good choice for reliably locating a solution in a bracketing interval, while the Newton–Raphson method is better for finding a specific solution from an initial guess. In numerical optimization, root-finding methods are used to solve the nonlinear equations that describe the behavior of the system, which enables the design of algorithms that are more efficient and more robust. Several root-finding methods for nonlinear transcendental equations appear in the literature; common ones include the following (a minimal sketch of the first two appears after the list):
1. The bisection method: a simple yet robust method that repeatedly bisects an interval and determines which subinterval contains a root.
2. The Newton–Raphson method: uses an initial guess and an iterative process to converge on a root; it requires the ability to compute the derivative of the function.
3. The secant method: similar to the Newton–Raphson method, but it uses the slope of the secant line between two points rather than the derivative of the function.
4. Fixed-point iteration: finds a fixed point of a function by an iterative process; it requires the equation to be rewritten in a specific form.
5. Muller's method: an extension of the secant method that can also locate complex roots.
6. Bairstow's method: finds the roots of polynomials with real coefficients, typically of degree greater than two.
7. Aitken's delta-squared method: speeds up the convergence of a fixed-point iteration.
8. The hybrid method: as the name suggests, combines two or more methods to find the root of a nonlinear equation.
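To make the comparison concrete, the following minimal Python sketch contrasts the first two approaches on a simple transcendental equation. The test function, tolerances, and iteration caps are illustrative assumptions of ours, not taken from the paper.

```python
# Illustrative comparison of bisection and Newton-Raphson on x = cos(x).
# All parameter choices here are assumptions for demonstration purposes.
from math import cos, sin

def bisection(f, a, b, tol=1e-12, max_iter=200):
    """Repeatedly halve [a, b], keeping the subinterval with a sign change."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Fast (order two) near a simple root, but needs df and a good x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

f = lambda x: x - cos(x)                      # transcendental equation x = cos x
print(bisection(f, 0.0, 1.0))                 # ~0.739085, after ~40 halvings
print(newton(f, lambda x: 1 + sin(x), 0.5))   # same root in a handful of steps
```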
As a workaround, iterative methods have been developed to approximate solutions of nonlinear equations of the form

F(x) = 0,   (1)
where F : D ⊂ B_1 → B_2 is a Fréchet differentiable operator, B_1 and B_2 are Banach spaces, and D is a convex and open subset of B_1. The determination of a solution x* ∈ D of the equation, whose analytical form is rarely attainable, is very important in many disciplines [1,2,3,4], since applications are formulated as an equation such as (1) using mathematical modeling [1,2,3,5]. This explains why iterative methods producing sequences approximating x* are introduced. There is extensive literature on the convergence of iterative methods motivated by algebraic or geometrical considerations [3,5,6,7,8].
A widely used method to solve (1) is Newton's method (NM), which is defined for each n = 0, 1, 2, … by

x_0 ∈ D,   x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).   (2)

NM uses one function evaluation and one inverse per iteration, and it is of convergence order two [5]. It is always desirable to develop iterative methods of higher convergence order, since they approximate the solution more efficiently and more accurately. A plethora of such methods has been proposed by various researchers (see [9,10,11,12,13,14] and the references therein).
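In the finite-dimensional case B_1 = B_2 = R^m, one step of NM amounts to solving a linear system with the Jacobian. A minimal sketch follows, with an illustrative test system of our own choosing (an assumption, not from the paper).

```python
# Minimal sketch of NM (2) in R^m: solve F'(x) s = F(x) rather than inverting F'(x).
import numpy as np

def newton_method(F, J, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))   # Newton correction F'(x)^{-1} F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative system (an assumption): F(x, y) = (x^2 + y^2 - 1, x - y).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
print(newton_method(F, J, [1.0, 0.5]))       # ~ (0.70711, 0.70711)
```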
In particular, we investigate the convergence of the fourth-order method defined for each n = 0, 1, 2, … by

y_n = x_n - F'(x_n)^{-1} F(x_n),
x_{n+1} = x_n - A_n^{-1} F(x_n),   (3)

where {a_j} ⊂ [0, 1] and {b_j} with \sum_{j=1}^{k} b_j = 1 are sequences of nonnegative parameters, k is a natural number, and

A_n = \sum_{j=1}^{k} b_j F'\big(x_n - a_j F'(x_n)^{-1} F(x_n)\big).
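For B_1 = B_2 = R^m, a sketch of one possible implementation of the method (3) is given below; the nodes a_j, the weights b_j, and the test system are illustrative assumptions.

```python
# Sketch of the method (3) in R^m: a Newton substep, then a correction with the
# quadrature-type average A_n of Jacobians evaluated along the Newton step.
import numpy as np

def newton_like_midpoint(F, J, x0, a, b, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        newton_step = np.linalg.solve(J(x), F(x))        # F'(x_n)^{-1} F(x_n)
        A = sum(bj * J(x - aj * newton_step) for aj, bj in zip(a, b))
        step = np.linalg.solve(A, F(x))                  # A_n^{-1} F(x_n)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# The choice k = 1, a_1 = 1/2, b_1 = 1 gives the classical midpoint variant;
# k = 2, a_1 = a_2 = 1/2, b_1 = b_2 = 1/2 is used in some examples of Section 4.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])  # illustrative system
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
print(newton_like_midpoint(F, J, [1.0, 0.5], a=[0.5], b=[1.0]))
```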
Motivated by the quadrature formula, the authors in [8,9] studied the local convergence of this method utilizing Taylor series expansions of the operator F in the special case B_1 = B_2 = R^m, where m is a natural number. The benefits over other methods of the same convergence order were also explained in [8]. The convergence there is established under differentiability assumptions on F^{(λ)}, λ = 1, 2, 3, 4, 5. However, these results assure convergence only if the operator is five times differentiable, although the method may converge without this. Let us look at a simple example in the case D = [-0.5, 1.5] and

F(t) = t^3 \ln t^2 + t^5 - t^4 if t ≠ 0, and F(0) = 0.

Then, one can clearly see that the results in [8,9] do not apply, since F^{(3)} is unbounded at t = 0. Other problems include:
(1) The uniqueness of the solution region is not provided.
(2) The choice of the starting point x_0 ∈ D is a "shot in the dark".
(3) There are no estimates on ‖x_{n+1} - x_n‖ or ‖x* - x_n‖ that can be computed in advance based on the properties of the operator F.
(4) The semilocal convergence of the method has not been studied.
(5) Derivatives of order higher than one, although used in the local convergence analysis, do not appear in the method itself.
It is worth noticing that the aforementioned problems appear in numerous other methods, and they motivate the writing of this paper. In particular, we positively address all of these problems utilizing only the operators appearing on the method and very general ω-continuity conditions on the operator F' [1,7]. In the case of the semilocal convergence, the concept of majorizing sequences is employed [1,6,7]. The idea of this paper can be applied to other methods [6,15,16,17] analogously, since it depends only on the inverses of the linear operators involved and not on the method itself [12]. Moreover, see the related papers [18,19,20,21].
The paper is structured as follows: The local convergence in Section 2 is followed by the semilocal convergence in Section 3. The numerical applications and concluding remarks appearing in Section 4 and Section 5, respectively, complete the paper.

2. Convergence I: Local

We denote the interval [0, ∞) by M for brevity.
Suppose:
  • There exists a nondecreasing and continuous function (NCF) w_0 : M → R such that the function w_0(t) - 1 has a smallest positive root, denoted by s. Set M_1 = [0, s).
  • There exists an NCF w : M_1 → R such that the function g_1(t) - 1 has a smallest root r_1 ∈ M_1 - {0}, where

g_1(t) = \frac{\int_0^1 w((1-\theta)t)\,d\theta}{1 - w_0(t)}.

  • The function q(t) - 1 has a smallest root r_q ∈ M_1 - {0}, where

q(t) = \sum_{j=1}^{k} |b_j|\, w_0\big(|1-a_j|\,t + a_j g_1(t)\,t\big).

    Set \bar{s} = \min\{s, r_q\} and M_2 = [0, \bar{s}). Define the function p : M_2 → R by

p(t) = \sum_{j=1}^{k} |b_j| \int_0^1 w\big(|1-\theta-a_j|\,t + a_j g_1(t)\,t\big)\,d\theta.

  • The function g_2(t) - 1 has a smallest root r_2 ∈ M_2 - {0}, where

g_2(t) = \frac{p(t)}{1 - q(t)}.
Then, in Theorem 1, the parameter r given by

r = \min\{r_1, r_2\}   (4)

is proven to be a radius of convergence for the method (3). Set M_3 = [0, r). These definitions imply that, for each t ∈ M_3,

0 \le w_0(t) < 1,   (5)
0 \le q(t) < 1,   (6)
0 \le p(t),   (7)

and

0 \le g_i(t) < 1.   (8)
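Since w_0, w, g_1, q, p, and g_2 are scalar functions, the radius r can be computed numerically once the majorants are known. The following Python sketch (an illustration of ours, assuming only numpy) locates the smallest positive roots of g_1(t) - 1 and g_2(t) - 1 by a scan followed by bisection; as a demonstration, it uses the majorants w_0(t) = w(t) = 96.6628 t and the parameters k = 2, a_j = b_j = 1/2 of Example 4 below, for which it reproduces the radii reported there.

```python
# Computing r = min{r_1, r_2} numerically for given majorants w0, w and
# parameters a_j, b_j. Demonstrated with the data of Example 4 (Section 4).
import numpy as np

L = 96.6628
w0 = lambda t: L * t
w  = lambda t: L * t
a, b = [0.5, 0.5], [0.5, 0.5]

def integral(f, n=2000):                  # midpoint quadrature on [0, 1]
    th = (np.arange(n) + 0.5) / n
    return np.mean(f(th))

g1 = lambda t: integral(lambda th: w((1 - th) * t)) / (1.0 - w0(t))
q  = lambda t: sum(abs(bj) * w0(abs(1 - aj) * t + aj * g1(t) * t)
                   for aj, bj in zip(a, b))
p  = lambda t: sum(abs(bj) * integral(lambda th: w(np.abs(1 - th - aj) * t
                                                   + aj * g1(t) * t))
                   for aj, bj in zip(a, b))
g2 = lambda t: p(t) / (1.0 - q(t))

def smallest_root(g, hi, n=4000):
    """First t in (0, hi) with g(t) = 1, located by a scan plus bisection."""
    prev = hi / n
    for t in np.linspace(hi / n, hi, n):
        if g(t) >= 1.0:
            lo, up = prev, t
            for _ in range(100):
                mid = 0.5 * (lo + up)
                lo, up = (mid, up) if g(mid) < 1.0 else (lo, mid)
            return 0.5 * (lo + up)
        prev = t
    return None

s = 1.0 / L                               # smallest positive root of w0(t) - 1
r1 = smallest_root(g1, 0.99 * s)
r2 = smallest_root(g2, 0.99 * s)
print(r1, r2, min(r1, r2))                # ~0.0068968, ~0.0064939, r ~ 0.0064939
```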
The sets S(x*, μ) and S[x*, μ] denote, respectively, the open and closed balls in B_1 with center x* ∈ B_1 and radius μ > 0.
The parameter r and the functions w_0 and w are connected to the operator F' as follows, provided that x* is a solution of Equation (1) with F'(x*)^{-1} ∈ L(B_2, B_1).
(E1) ‖F'(x*)^{-1}(F'(u) - F'(x*))‖ ≤ w_0(‖u - x*‖) for each u ∈ D.
     Set D_1 = D ∩ S(x*, r).
(E2) ‖F'(x*)^{-1}(F'(u_1) - F'(u_2))‖ ≤ w(‖u_1 - u_2‖) for each u_1, u_2 ∈ D_1,
and
(E3) S[x*, r] ⊂ D.
The local convergence analysis of the method (3) follows, based on this terminology and the conditions (E1)-(E3).
Theorem 1.
Suppose that the conditions (E1)-(E3) hold. Then, the sequence {x_n} generated by the method (3) is convergent to x*, provided that the starting point x_0 ∈ S(x*, r) - {x*}.
Proof. 
We shall establish, using induction, the assertions

‖y_n - x*‖ ≤ g_1(‖x_n - x*‖)\,‖x_n - x*‖ ≤ ‖x_n - x*‖ < r   (9)

and

‖x_{n+1} - x*‖ ≤ g_2(‖x_n - x*‖)\,‖x_n - x*‖ ≤ ‖x_n - x*‖,   (10)

with r, g_1, and g_2 as previously defined.
By applying the condition (E1) for u ∈ S(x*, r) - {x*}, we obtain, in turn, by (4) and (5),

‖F'(x*)^{-1}(F'(u) - F'(x*))‖ ≤ w_0(‖u - x*‖) ≤ w_0(r) < 1.   (11)

The Banach lemma for invertible linear operators [1,2,3,16] and the estimate (11) imply that F'(u)^{-1} ∈ L(B_2, B_1), with

‖F'(u)^{-1}F'(x*)‖ ≤ \frac{1}{1 - w_0(‖u - x*‖)}.   (12)

In particular, for u = x_0, (12) shows that the iterate y_0 is well defined, and we can write, by the first substep of the method (3) for n = 0,

y_0 - x* = x_0 - x* - F'(x_0)^{-1}F(x_0) = [F'(x_0)^{-1}F'(x*)] \int_0^1 F'(x*)^{-1}\big(F'(x_0) - F'(x* + θ(x_0 - x*))\big)\,dθ\,(x_0 - x*).   (13)

In view of (4), (8) (for i = 1), (12) (for u = x_0), (E2), and (13), we have, in turn, that

‖y_0 - x*‖ ≤ \frac{\int_0^1 w((1 - θ)‖x_0 - x*‖)\,dθ}{1 - w_0(‖x_0 - x*‖)}\,‖x_0 - x*‖ = g_1(‖x_0 - x*‖)\,‖x_0 - x*‖ ≤ ‖x_0 - x*‖ < r.
Thus, the iterate y_0 ∈ S(x*, r), and the assertion (9) holds for n = 0.
Next, we estimate:
‖F'(x*)^{-1}(A_0 - F'(x*))‖ = \Big\|F'(x*)^{-1}\Big(A_0 - \sum_{j=1}^{k} b_j F'(x*)\Big)\Big\| ≤ \sum_{j=1}^{k} |b_j|\,\big\|F'(x*)^{-1}\big(F'(x_0 - a_j F'(x_0)^{-1}F(x_0)) - F'(x*)\big)\big\| ≤ \sum_{j=1}^{k} |b_j|\, w_0\big(|1 - a_j|\,‖x_0 - x*‖ + a_j‖y_0 - x*‖\big) ≤ q(‖x_0 - x*‖) < 1.

Thus, A_0 is invertible, and we deduce

‖A_0^{-1}F'(x*)‖ ≤ \frac{1}{1 - q(‖x_0 - x*‖)}.
Moreover, the iterate x_1 is well defined by the second substep of the method (3) for n = 0.
Similarly, we first have
A_0 - \int_0^1 F'(x* + θ(x_0 - x*))\,dθ = \sum_{j=1}^{k} b_j\Big(F'(x_0 + a_j(y_0 - x_0)) - \int_0^1 F'(x* + θ(x_0 - x*))\,dθ\Big),

so

\Big\|F'(x*)^{-1}\Big(A_0 - \int_0^1 F'(x* + θ(x_0 - x*))\,dθ\Big)\Big\| ≤ \sum_{j=1}^{k} |b_j| \int_0^1 w\big(‖x_0 + a_j(y_0 - x_0) - x* - θ(x_0 - x*)‖\big)\,dθ ≤ \sum_{j=1}^{k} |b_j| \int_0^1 w\big(|1 - θ - a_j|\,‖x_0 - x*‖ + a_j‖y_0 - x*‖\big)\,dθ ≤ p(‖x_0 - x*‖);

hence,

‖x_1 - x*‖ = ‖x_0 - x* - A_0^{-1}F(x_0)‖ = \Big\|A_0^{-1}F'(x*)\,F'(x*)^{-1}\Big(A_0 - \int_0^1 F'(x* + θ(x_0 - x*))\,dθ\Big)(x_0 - x*)\Big\| ≤ \frac{p(‖x_0 - x*‖)}{1 - q(‖x_0 - x*‖)}\,‖x_0 - x*‖ = g_2(‖x_0 - x*‖)\,‖x_0 - x*‖ ≤ ‖x_0 - x*‖.
That is, the iterate x_1 ∈ S(x*, r), and the assertion (10) holds for n = 0.
By replacing x_0, y_0, x_1 with x_m, y_m, x_{m+1} in the preceding calculations, the induction for the assertions (9) and (10) is completed. Therefore, the estimate

‖x_{m+1} - x*‖ ≤ c\,‖x_m - x*‖ < r,

where c = g_2(‖x_0 - x*‖) ∈ [0, 1), gives \lim_{m→∞} x_m = x*, and the iterate x_{m+1} ∈ S(x*, r). □
A region of uniqueness for the solution is determined in the next result.
Proposition 1.
Suppose:
(1) A solution u* ∈ S(x*, ρ_3) of the equation F(x) = 0 exists for some ρ_3 > 0.
(2) The condition (E1) holds on the ball S(x*, ρ_3).
(3) There exists ρ_4 ≥ ρ_3 such that

\int_0^1 w_0(θρ_4)\,dθ < 1.

Set D_2 = D ∩ S[x*, ρ_4]. Then, Equation (1) is uniquely solvable by x* in the region D_2.
Proof. 
Let us define the linear operator

T = \int_0^1 F'(x* + θ(u* - x*))\,dθ.

It follows by (1)-(3) that

‖F'(x*)^{-1}(T - F'(x*))‖ ≤ \int_0^1 w_0(θ‖u* - x*‖)\,dθ ≤ \int_0^1 w_0(θρ_4)\,dθ < 1;

thus, T is invertible, and u* - x* = T^{-1}(F(u*) - F(x*)) = T^{-1}(0) = 0. □
Remark 1.
We can choose ρ_3 = r, provided that all of the hypotheses (E1)-(E3) of Theorem 1 hold.

3. Convergence II: Semilocal

We still rely on the ω-continuity of F', but a scalar majorizing sequence is also employed.
Let v_0 : M → R and v : M → R be NCFs. Set t_0 = 0 and s_0 = β_0 ≥ 0, and define the sequences {t_n}, {s_n} for each n = 0, 1, 2, … by

\bar{q}_n = \sum_{j=1}^{k} |b_j|\, v_0\big(|1 - a_j|\,t_n + a_j s_n\big),
\bar{p}_n = \sum_{j=1}^{k} |b_j| \int_0^1 v\big(|1 - θ - a_j|\,t_n + a_j s_n\big)\,dθ,
t_{n+1} = s_n + \frac{\bar{p}_n (s_n - t_n)}{1 - \bar{q}_n},
\gamma_{n+1} = \int_0^1 v\big((1 - θ)(t_{n+1} - t_n)\big)\,dθ\,(t_{n+1} - t_n) + (1 + v_0(t_n))(t_{n+1} - s_n),   (20)

and

s_{n+1} = t_{n+1} + \frac{\gamma_{n+1}}{1 - v_0(t_{n+1})}.
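The recursion (20) is straightforward to evaluate numerically. The sketch below generates t_n, s_n and stops if the conditions of Lemma 1 below are violated; the majorants v_0, v, the parameters a_j, b_j, and β_0 are toy assumptions of our own, not values from the paper.

```python
# Illustrative generation of the majorizing sequences {t_n}, {s_n} of (20).
# v0, v, a_j, b_j and beta0 are assumed toy choices for demonstration only.
import numpy as np

v0 = lambda t: 2.0 * t            # illustrative majorant for (H1) below
v  = lambda t: 2.2 * t            # illustrative majorant for (H2) below
a, b = [0.5], [1.0]               # k = 1 midpoint choice
beta0 = 0.05                      # assumed bound on ||F'(x0)^{-1} F(x0)||

def integral(f, n=2000):          # midpoint quadrature on [0, 1]
    th = (np.arange(n) + 0.5) / n
    return np.mean(f(th))

t, s = 0.0, beta0                 # t_0 = 0, s_0 = beta0
for n in range(8):
    q_bar = sum(abs(bj) * v0(abs(1 - aj) * t + aj * s) for aj, bj in zip(a, b))
    p_bar = sum(abs(bj) * integral(lambda th: v(np.abs(1 - th - aj) * t + aj * s))
                for aj, bj in zip(a, b))
    if q_bar >= 1.0 or v0(t) >= 1.0:      # convergence conditions violated
        break
    t_new = s + p_bar * (s - t) / (1.0 - q_bar)
    gamma = integral(lambda th: v((1 - th) * (t_new - t))) * (t_new - t) \
            + (1.0 + v0(t)) * (t_new - s)
    s_new = t_new + gamma / (1.0 - v0(t_new))
    t, s = t_new, s_new
    print(n, t, s)                # increments shrink, so t_n, s_n approach a limit d*
```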
These scalar sequences are shown to be majorizing for the method (3). However, first, some general convergence conditions are needed for them.
Lemma 1.
Suppose that there exists d > 0 such that, for each n = 0, 1, 2, …,

\bar{q}_n < 1,   v_0(t_n) < 1,   and   t_n < d.   (21)

Then, the sequences {t_n}, {s_n} given by the formula (20) are convergent to some d* ∈ [0, d].
Proof. 
The formula (20) and the condition (21) imply that 0 ≤ t_n ≤ s_n ≤ t_{n+1} < d, so the sequences are nondecreasing and bounded above by d. Hence, the result follows. □
Remark 2.
(a) If the function v_0 is strictly increasing on the interval [0, ∞), then we can choose d = v_0^{-1}(1).
(b) If the smallest positive root ρ_0 of the function v_0(t) - 1 exists, then we can set d = ρ_0.
The functions v_0, v and the parameter d* are connected to the operators F and F', provided that x_0 ∈ D is such that F'(x_0)^{-1} ∈ L(B_2, B_1) and ‖F'(x_0)^{-1}F(x_0)‖ ≤ β_0.
Suppose:
(H1) ‖F'(x_0)^{-1}(F'(u) - F'(x_0))‖ ≤ v_0(‖u - x_0‖) for each u ∈ D.
     Set D_3 = D ∩ S(x_0, ρ_0), where ρ_0 is the smallest positive root of the function v_0(t) - 1.
(H2) ‖F'(x_0)^{-1}(F'(u_1) - F'(u_2))‖ ≤ v(‖u_1 - u_2‖) for each u_1, u_2 ∈ D_3.
(H3) The condition (21) holds,
and
(H4) S[x_0, d*] ⊂ D.
The semilocal convergence analysis of the method (3) follows.
Theorem 2.
Suppose that the conditions (H1)-(H4) hold. Then, the sequence {x_n} generated by the method (3) is convergent to some x* ∈ S[x_0, d*] solving the equation F(x) = 0 and such that

‖x* - x_n‖ ≤ d* - t_n.   (22)
Proof. 
The following assertions are shown using induction:

‖y_n - x_n‖ ≤ s_n - t_n < d*   (23)

and

‖x_{n+1} - y_n‖ ≤ t_{n+1} - s_n.   (24)

The assertion (23) holds for n = 0 by the choice of t_0, s_0 and the first substep of the method (3), since ‖y_0 - x_0‖ = ‖F'(x_0)^{-1}F(x_0)‖ ≤ β_0 = s_0 - t_0. It follows that the iterate y_0 ∈ S(x_0, d*). By exchanging x* for x_0 and the conditions (E1)-(E3) for (H1)-(H4) in the estimates of the previous section, we obtain
‖A_m^{-1}F'(x_0)‖ ≤ \frac{1}{1 - \bar{q}_m}

and

‖F'(x_0)^{-1}(F'(x_m) - A_m)‖ ≤ \bar{p}_m.

We can write, by the second substep of the method (3),

x_{m+1} - y_m = (F'(x_m)^{-1} - A_m^{-1})F(x_m) = A_m^{-1}(A_m - F'(x_m))F'(x_m)^{-1}F(x_m) = A_m^{-1}(F'(x_m) - A_m)(y_m - x_m);

thus,

‖x_{m+1} - y_m‖ ≤ \frac{\bar{p}_m\,‖y_m - x_m‖}{1 - \bar{q}_m} ≤ t_{m+1} - s_m

and

‖x_{m+1} - x_0‖ ≤ ‖x_{m+1} - y_m‖ + ‖y_m - x_0‖ ≤ t_{m+1} - s_m + s_m - t_0 = t_{m+1} < d*.
Hence, the iterate x_{m+1} ∈ S(x_0, d*) and (24) holds. We can write, by the first substep of the method (3), in turn, that

F(x_{m+1}) = F(x_{m+1}) - F(x_m) - F'(x_m)(y_m - x_m) = F(x_{m+1}) - F(x_m) - F'(x_m)(x_{m+1} - x_m) + F'(x_m)(x_{m+1} - y_m);

thus,

‖F'(x_0)^{-1}F(x_{m+1})‖ ≤ \int_0^1 ‖F'(x_0)^{-1}\big(F'(x_m + θ(x_{m+1} - x_m)) - F'(x_m)\big)‖\,dθ\,‖x_{m+1} - x_m‖ + ‖F'(x_0)^{-1}\big(F'(x_m) - F'(x_0) + F'(x_0)\big)‖\,‖x_{m+1} - y_m‖ ≤ \int_0^1 v((1 - θ)‖x_{m+1} - x_m‖)\,dθ\,‖x_{m+1} - x_m‖ + (1 + v_0(‖x_m - x_0‖))\,‖x_{m+1} - y_m‖ ≤ \int_0^1 v((1 - θ)(t_{m+1} - t_m))\,dθ\,(t_{m+1} - t_m) + (1 + v_0(t_m))(t_{m+1} - s_m) = \gamma_{m+1}.   (28)

Consequently, we obtain

‖y_{m+1} - x_{m+1}‖ ≤ ‖F'(x_{m+1})^{-1}F'(x_0)‖\,‖F'(x_0)^{-1}F(x_{m+1})‖ ≤ \frac{\gamma_{m+1}}{1 - v_0(t_{m+1})} = s_{m+1} - t_{m+1}

and

‖y_{m+1} - x_0‖ ≤ ‖y_{m+1} - x_{m+1}‖ + ‖x_{m+1} - x_0‖ ≤ s_{m+1} - t_{m+1} + t_{m+1} - t_0 = s_{m+1} < d*.
Hence, the induction is completed, and the iterate y_{m+1} ∈ S(x_0, d*). It follows from Lemma 1 and the condition (H3) that the sequences {t_m}, {s_m} are convergent and, hence, Cauchy. Then, by (23) and (24), the sequences {x_m}, {y_m} are also Cauchy and, as such, convergent to some x* ∈ S[x_0, d*]. Moreover, by letting m → ∞ in (28) and using the continuity of the operator F, we deduce that F(x*) = 0. Furthermore, for an integer j ≥ 0, we have the estimate

‖x_{m+j} - x_m‖ ≤ t_{m+j} - t_m,   (29)

and we conclude that (22) holds by letting j → +∞ in (29). □
Next, the uniqueness region is provided.
Proposition 2.
Suppose:
(1) There exists a solution u* ∈ S(x_0, d_1) of Equation (1) for some d_1 > 0.
(2) The condition (H1) holds on the ball S(x_0, d_1).
(3) There exists d_2 ≥ d_1 such that

\int_0^1 v_0((1 - θ)d_1 + θd_2)\,dθ < 1.

Set D_4 = D ∩ S[x_0, d_2]. Then, the equation F(x) = 0 is uniquely solvable by u* in the region D_4.
Proof. 
As in Proposition 1, define the linear operator T_1 = \int_0^1 F'(u* + θ(y* - u*))\,dθ for some y* ∈ D_4 with F(y*) = 0. Then, it follows, in turn, by (1)-(3) that

‖F'(x_0)^{-1}(T_1 - F'(x_0))‖ ≤ \int_0^1 v_0((1 - θ)‖u* - x_0‖ + θ‖y* - x_0‖)\,dθ ≤ \int_0^1 v_0((1 - θ)d_1 + θd_2)\,dθ < 1.

Thus, T_1 is invertible, and we conclude that u* = y*, since y* - u* = T_1^{-1}(F(y*) - F(u*)) = T_1^{-1}(0) = 0. □
Remark 3.
(i) Under all of the conditions of Theorem 2, we can let d_1 = d* and u* = x*.
(ii) The condition (H4) can be replaced by the condition (H4)′: S[x_0, ρ_0] ⊂ D, where ρ_0 is given in closed form.

4. Examples and Numerical Calculations

Numerical experiments are essential for validating and verifying theoretical results. This section comprises six numerical problems, based on three applied science problems, that check the theoretical results obtained in the preceding sections. Both types of convergence analysis are considered: local and semilocal.
In order to evaluate the effectiveness of the method (3), some applications are simulated and the results analyzed. In particular, the residual errors, the number of iterations, the convergence radii, and the expected order of convergence are computed. The computational order of convergence (COC) is given by

μ = \frac{\ln\big(‖x_{j+1} - x*‖ / ‖x_j - x*‖\big)}{\ln\big(‖x_j - x*‖ / ‖x_{j-1} - x*‖\big)},   for j = 1, 2, …,

and the approximate computational order of convergence (ACOC) by

μ* = \frac{\ln\big(‖x_{j+1} - x_j‖ / ‖x_j - x_{j-1}‖\big)}{\ln\big(‖x_j - x_{j-1}‖ / ‖x_{j-1} - x_{j-2}‖\big)},   for j = 2, 3, ….
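A hedged Python sketch of how these two indicators can be evaluated from an iterate history is given below; the synthetic error sequence at the end is an assumption used only to exercise the formulas.

```python
# Computing COC (known x*) and ACOC (x* unknown) from a list of iterates.
import numpy as np

def coc(iterates, x_star):
    e = [np.linalg.norm(np.asarray(x) - np.asarray(x_star)) for x in iterates]
    return [np.log(e[j + 1] / e[j]) / np.log(e[j] / e[j - 1])
            for j in range(1, len(e) - 1)]

def acoc(iterates):
    d = [np.linalg.norm(np.asarray(iterates[j + 1]) - np.asarray(iterates[j]))
         for j in range(len(iterates) - 1)]
    return [np.log(d[j + 1] / d[j]) / np.log(d[j] / d[j - 1])
            for j in range(1, len(d) - 1)]

# Synthetic fourth-order iterates (assumption): errors behave like e_{n+1} ~ e_n^4.
xs = [np.array([1e-1]), np.array([2e-4]), np.array([3.2e-15])]
print(coc(xs, np.array([0.0])))   # ~[4.0]
```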
The iterations terminate when the error is sufficiently small, according to the following stopping criteria:
(i) ‖x_{k+1} - x_k‖ ≤ ϵ, and
(ii) ‖F(x_k)‖ < ϵ,
where ϵ = 10^{-100} is the error tolerance. These stopping criteria ensure that the computed approximations are accurate to a pre-decided level of precision. The numerical examples are simulated using the Mathematica 11 software.
The first four examples are based on local convergence; the method (3) is applied directly in the last two.
Example 1.
Let B_1 = B_2 = R^3, D = S[0, 1], and define F on D for u = (x, y, z)^T by

F(u) = \Big(e^x - 1, \; \frac{e - 1}{2}\,y^2 + y, \; z\Big)^T.

The first Fréchet derivative is given by

F'(u) = \begin{pmatrix} e^x & 0 & 0 \\ 0 & (e - 1)y + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

Then, x* = (0, 0, 0)^T, w_0(t) = (e - 1)t, and w(t) = e\,t. Taking k = 2, a_1 = a_2 = 1/2, and b_1 = b_2 = 1/2, the smallest positive roots of g_i(t) - 1 = 0 for i = 1, 2 are 0.324947 and 0.264229, respectively. The radius of convergence is therefore r = 0.264229.
Example 2.
Let B_1 = B_2 = R. Define F on D = (-1, 1) by

F(x) = \sin x + x^{7/4}.

Then, clearly, x* = 0. For k = 1, a_1 = 1, and b_1 = 1, we have w_0(t) = w(t) = t + \frac{7}{4}\,t^{3/4}. The smallest positive roots of g_i(t) - 1 = 0 for i = 1, 2 are 0.173601 and 0.14117, respectively. The radius of convergence is therefore r = 0.14117.
Example 3.
Consider the nonlinear integral equation of mixed Hammerstein type given by

F(x)(u) = x(u) - 5\int_0^1 u\,t\,x(t)^{1+α}\,dt,   α ∈ (0, 1),

where x(u) ∈ C[0, 1]. Clearly, x* = 0. For k = 1, a_1 = 1, b_1 = 1, and α = 1/2, we have w_0(t) = w(t) = 2.5(1 + α)t^α. The smallest positive roots of g_i(t) - 1 = 0 for i = 1, 2 are 0.0256 and 0.0189628, respectively. The radius of convergence is therefore r = 0.0189628.
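The radii reported for this example can be reproduced from the majorant w_0(t) = w(t) = 2.5(1 + α)t^α alone; a small Python verification sketch follows (assuming numpy; the scan bounds are our own choices).

```python
# Reproducing the radii of Example 3 from w0(t) = w(t) = 2.5(1+alpha) t^alpha,
# with alpha = 1/2 and k = 1, a_1 = b_1 = 1, using the functions g_1, q, g_2.
import numpy as np

alpha = 0.5
w = lambda t: 2.5 * (1.0 + alpha) * t**alpha        # = w0 here
a1, b1 = 1.0, 1.0

def integral(f, n=4000):
    th = (np.arange(n) + 0.5) / n
    return np.mean(f(th))

g1 = lambda t: integral(lambda th: w((1 - th) * t)) / (1.0 - w(t))
q  = lambda t: abs(b1) * w(abs(1 - a1) * t + a1 * g1(t) * t)
g2 = lambda t: abs(b1) * integral(lambda th: w(np.abs(1 - th - a1) * t
                                               + a1 * g1(t) * t)) / (1.0 - q(t))

def smallest_root(g, hi, n=4000):
    """First t in (0, hi) with g(t) = 1, located by a scan plus bisection."""
    prev = hi / n
    for t in np.linspace(hi / n, hi, n):
        if g(t) >= 1.0:
            lo, up = prev, t
            for _ in range(100):
                mid = 0.5 * (lo + up)
                lo, up = (mid, up) if g(mid) < 1.0 else (lo, mid)
            return 0.5 * (lo + up)
        prev = t
    return None

s = (1.0 / (2.5 * (1.0 + alpha))) ** (1.0 / alpha)  # root of w0(t) - 1
print(smallest_root(g1, 0.99 * s))                  # ~0.0256
print(smallest_root(g2, 0.99 * s))                  # ~0.0189628
```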
Example 4.
Consider the function defined on D = [-0.5, 1.5] by

F(t) = \begin{cases} t^3 \ln t^2 + t^5 - t^4, & t ≠ 0, \\ 0, & t = 0. \end{cases}

The unique solution is x* = 1. Then, for k = 2, a_1 = a_2 = 1/2, and b_1 = b_2 = 1/2, we find that w_0(t) = w(t) = 96.6628\,t. The smallest positive roots of g_i(t) - 1 = 0 for i = 1, 2 are 0.00689683 and 0.0064939, respectively. The radius of convergence is therefore r = 0.0064939.
Example 5.
Consider the following nonlinear partial differential equation, also known as the problem of molecular interaction, defined by

θ_{t_1 t_1} + θ_{t_2 t_2} = θ^2,   (30)

subject to the conditions

θ(t_1, 0) = 2t_1^2 - t_1 + 1,   θ(t_1, 1) = 2,   θ(0, t_2) = 2t_2^2 - t_2 + 1,   θ(1, t_2) = 2,

where (t_1, t_2) ∈ [0, 1] × [0, 1].
Discretize the PDE (30) by applying the central divided differences

θ_{t_1 t_1} = \frac{θ_{i+1,j} - 2θ_{i,j} + θ_{i-1,j}}{a^2},   θ_{t_2 t_2} = \frac{θ_{i,j+1} - 2θ_{i,j} + θ_{i,j-1}}{a^2},

which produce

θ_{i+1,j} - 4θ_{i,j} + θ_{i-1,j} + θ_{i,j+1} + θ_{i,j-1} - a^2 θ_{i,j}^2 = 0,

a system of nonlinear equations, where i = 1, 2, 3, …, l - 1 and j = 1, 2, 3, …, l - 1. For instance, for l = 6 we obtain a 5 × 5 system of equations (one per interior grid point) with mesh size a = 1/l. The COC, the number of iterations, the residual errors, the CPU time, and the error difference between two iterations for Example 5 are reported in Table 1.
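A possible Python realization of this discretization, solved here with plain Newton iteration and a finite-difference Jacobian (an illustrative scaffold of ours, not the authors' Mathematica code; the boundary functions are the reconstructed conditions above), is sketched below.

```python
# Assembling and solving the discretized molecular-interaction problem (Example 5)
# on an l x l grid: one nonlinear equation per interior node.
import numpy as np

l = 6
a = 1.0 / l                               # mesh size
nodes = np.linspace(0.0, 1.0, l + 1)

def boundary(t1, t2):                     # reconstructed boundary conditions
    if t2 == 0.0: return 2*t1**2 - t1 + 1
    if t2 == 1.0: return 2.0
    if t1 == 0.0: return 2*t2**2 - t2 + 1
    return 2.0                            # t1 == 1

def residual(u):
    th = np.empty((l + 1, l + 1))
    for i in range(l + 1):
        for j in range(l + 1):
            interior = 0 < i < l and 0 < j < l
            th[i, j] = u[(i - 1)*(l - 1) + (j - 1)] if interior \
                       else boundary(nodes[i], nodes[j])
    F = np.empty((l - 1)**2)
    for i in range(1, l):
        for j in range(1, l):
            F[(i - 1)*(l - 1) + (j - 1)] = (th[i+1, j] + th[i-1, j] + th[i, j+1]
                                            + th[i, j-1] - 4*th[i, j]
                                            - a**2 * th[i, j]**2)
    return F

def newton(res, x0, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        F = res(x); h = 1e-7
        Jac = np.empty((x.size, x.size))
        for m in range(x.size):           # forward-difference Jacobian, column m
            xp = x.copy(); xp[m] += h
            Jac[:, m] = (res(xp) - F) / h
        step = np.linalg.solve(Jac, F)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

theta = newton(residual, np.full((l - 1)**2, 0.39))   # start near Table 1's x0
print(theta.reshape(l - 1, l - 1))
```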
Example 6.
Let us consider the Van der Pol equation, which is defined as

ν'' - η(ν^2 - 1)ν' + ν = 0,   η > 0,   (31)

and governs the flow of current in a vacuum tube, with the boundary conditions ν(0) = 0, ν(2) = 1. Further, we consider the partition of the interval [0, 2] given by

τ_0 = 0 < τ_1 < τ_2 < τ_3 < … < τ_k,   where τ_i = τ_0 + ih,   h = \frac{2}{k}.

Moreover, we assume that

ν_0 = ν(τ_0) = 0,   ν_1 = ν(τ_1), …, ν_{k-1} = ν(τ_{k-1}),   ν_k = ν(τ_k) = 1.
If we discretize the above problem (31) by using the second-order divided differences for the first and second derivatives, given by

ν'_k = \frac{ν_{k+1} - ν_{k-1}}{2h},   ν''_k = \frac{ν_{k-1} - 2ν_k + ν_{k+1}}{h^2},   k = 1, 2, …, n - 1,

then we obtain an (n - 1) × (n - 1) system of nonlinear equations

2h^2 ν_k - hη(ν_k^2 - 1)(ν_{k+1} - ν_{k-1}) + 2(ν_{k-1} + ν_{k+1} - 2ν_k) = 0,   k = 1, 2, …, n - 1.

Let us consider η = 1/2 and n = 8, so we have a 7 × 7 system of nonlinear equations. The obtained results are depicted in Table 2.
Method (3) converges to the following estimated zero:

x* = (0.3381, 0.6208, 0.8452, 1.009, 1.111, 1.146, 1.108)^T.
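A hedged Python sketch of this boundary-value computation follows, using the discretized system above with plain Newton iteration and a finite-difference Jacobian in place of the authors' Mathematica setup; the starting vector is the one reported in Table 2.

```python
# Discretized Van der Pol boundary-value problem (Example 6) as a nonlinear system.
import numpy as np

eta, n = 0.5, 8
h = 2.0 / n                                # step of the partition of [0, 2]

def residual(v):
    nu = np.concatenate(([0.0], v, [1.0])) # boundary values nu(0) = 0, nu(2) = 1
    F = np.empty(n - 1)
    for k in range(1, n):
        F[k - 1] = (2*h**2*nu[k]
                    - h*eta*(nu[k]**2 - 1.0)*(nu[k+1] - nu[k-1])
                    + 2*(nu[k-1] + nu[k+1] - 2*nu[k]))
    return F

def newton(res, x0, tol=1e-12, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = res(x); hh = 1e-7
        Jac = np.empty((x.size, x.size))
        for m in range(x.size):            # forward-difference Jacobian, column m
            xp = x.copy(); xp[m] += hh
            Jac[:, m] = (res(xp) - F) / hh
        step = np.linalg.solve(Jac, F)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

x0 = [0.34, 0.62, 0.8, 0.9, 1.2, 1.1, 1.3]   # starting vector from Table 2
print(newton(residual, x0))   # ~ (0.3381, 0.6208, 0.8452, 1.009, 1.111, 1.146, 1.108)
```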

5. Concluding Remarks

In the foregoing study, we have analyzed the local and the semilocal convergence of a fourth-order iterative method based on quadrature formulae in Banach spaces, using majorizing sequences for the semilocal case. The local convergence analysis is based on very general ω-continuity conditions on the first-order Fréchet derivative, thereby extending the applicability and usage of the method. The theoretical results are applied to numerical examples to demonstrate the efficiency of our convergence analysis. It can be observed that our theoretical conclusions work well in situations where earlier analyses based on the Lipschitz condition cannot be used. Future work involves other methods and applications to integral equations and to the solution of PDEs.

Author Contributions

Conceptualization, S.R., I.K.A., G.D. and L.R.; methodology, S.R., I.K.A., G.D. and L.R.; software, S.R., I.K.A., G.D. and L.R.; validation, S.R., I.K.A., G.D. and L.R.; formal analysis, S.R., I.K.A., G.D. and L.R.; investigation, S.R., I.K.A., G.D. and L.R.; resources, S.R., I.K.A., G.D. and L.R.; data curation, S.R., I.K.A., G.D. and L.R.; writing—original draft preparation, S.R., I.K.A., G.D. and L.R.; writing—review and editing, S.R., I.K.A., G.D. and L.R.; visualization, S.R., I.K.A., G.D. and L.R.; supervision, S.R., I.K.A., G.D. and L.R.; project administration, S.R., I.K.A., G.D. and L.R.; and funding acquisition, S.R., I.K.A., G.D. and L.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
  2. Argyros, I.K. The Theory and Applications of Iteration Methods; Taylor and Francis, CRC Press: New York, NY, USA, 2022.
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  4. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
  5. Ezquerro, J.A.; Hernández, M.A. Newton's Method: An Updated Approach of Kantorovich's Theory; Springer: Cham, Switzerland, 2018.
  6. Argyros, I.K.; Shakhno, S.; Regmi, S.; Yarmola, H. Newton-Type Methods for Solving Equations in Banach Spaces: A Unified Approach. Symmetry 2023, 15, 15.
  7. Argyros, I.K.; Deep, G.; Regmi, S. Extended Newton-like Midpoint Method for Solving Equations in Banach Space. Foundations 2023, 3, 82–98.
  8. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261.
  9. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782.
  10. Gutiérrez, J.M. A new semilocal convergence theorem for Newton's method. J. Comput. Appl. Math. 1997, 79, 131–145.
  11. Gutiérrez, J.M.; Hernández, M.A. Third-order iterative methods for operators with bounded second derivative. J. Comput. Appl. Math. 1997, 82, 171–183.
  12. Herceg, D.; Herceg, D.J. Means based modifications of Newton's method for solving nonlinear equations. Appl. Math. Comput. 2013, 219, 6126–6133.
  13. Kou, J. A third-order modification of Newton method for systems of non-linear equations. Appl. Math. Comput. 2007, 191, 117–121.
  14. Singh, S.; Gupta, D.; Badoni, R.; Martínez, E.; Hueso, J.L. Local convergence of a parameter based iteration with Hölder continuous derivative in Banach spaces. Calcolo 2017, 54, 527–539.
  15. Singh, S.; Gupta, D.K.; Martínez, E.; Hueso, J.L. Semilocal and local convergence of a fifth order iteration with Fréchet derivative satisfying Hölder condition. Appl. Math. Comput. 2016, 276, 266–277.
  16. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
  17. Wang, X.; Gu, C.; Kou, J. Semilocal convergence of a multipoint fourth-order super-Halley method in Banach spaces. Numer. Algorithms 2011, 56, 497–516.
  18. Kamran, I.M.; Alotaibi, F.M.; Haque, S.; Mlaiki, N.; Shah, K. RBF-Based Local Meshless Method for Fractional Diffusion Equations. Fractal Fract. 2023, 7, 143.
  19. Khan, A.; Shah, K.; Abdeljawad, T.; Sher, M. On Fractional Order Sine-Gordon Equation Involving Nonsingular Derivative. Fractals 2022.
  20. Saifullah, S.; Ali, A.; Khan, A.; Shah, K.; Abdeljawad, T. A Novel Tempered Fractional Transform: Theory, Properties and Applications to Differential Equations. Fractals 2022.
  21. Shah, K.; Sinan, M.; Abdeljawad, T.; El-Shorbagy, M.A.; Abdalla, B.; Abualrub, M.S. A Detailed Study of a Fractal-Fractional Transmission Dynamical Model of Viral Infectious Disease with Vaccination. Complexity 2022, 2022, 7236824.
Table 1. Numerical outcomes for Example 5.

| Cases | x_0 | ‖F(x_n)‖ | ‖x_{n+1} − x_n‖ | n | μ | CPU timing |
| Method (3) | (39/100, 39/100, 39/100, 39/100, 39/100)^T | 8.5 × 10^{−827} | 1.2 × 10^{−826} | 3 | 4 | 7.56632 |

Table 2. Numerical outcomes for Example 6.

| Cases | x_0 | ‖F(x_n)‖ | ‖x_{n+1} − x_n‖ | n | μ | CPU timing |
| Method (3) | (34/100, 62/100, 8/10, 9/10, 12/10, 11/10, 13/10)^T | 8.7 × 10^{−944} | 5.7 × 10^{−944} | 3 | 4 | 3.63682 |
