Article

A New Application of Gauss Quadrature Method for Solving Systems of Nonlinear Equations

by Hari M. Srivastava, Javed Iqbal, Muhammad Arif, Alamgir Khan, Yusif S. Gasimov and Ronnason Chinram
1 Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 3R4, Canada
2 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
3 Department of Mathematics and Informatics, Azerbaijan University, 71 Jeyhun Hajibeyli Street, Baku AZ1007, Azerbaijan
4 Department of Mathematics, Abdul Wali Khan University, Mardan 23200, KPK, Pakistan
5 Division of Computational Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(3), 432; https://doi.org/10.3390/sym13030432
Submission received: 26 January 2021 / Revised: 24 February 2021 / Accepted: 1 March 2021 / Published: 7 March 2021
(This article belongs to the Special Issue Symmetry in Numerical Analysis and Numerical Methods)

Abstract

In this paper, we introduce a new three-step Newton method for solving systems of nonlinear equations. The new method, which is based on the three-point Gauss quadrature rule, has sixth order of convergence. The proposed method solves nonlinear boundary-value problems and integral equations in a few iterations with good accuracy. Numerical comparisons show that the new method is remarkably effective for solving systems of nonlinear equations.

1. Introduction

In numerical analysis and many other branches of science, solving systems of nonlinear equations by computational methods has long been a well-motivated problem for researchers. Consider a system of nonlinear equations
$$P(S) = \big(p_1(S), p_2(S), \ldots, p_n(S)\big)^{T} = 0,$$
where $S = (s_1, s_2, \ldots, s_n)^{T}$, $P : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear system, and each $p_i : D \subseteq \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2, \ldots, n$, is a nonlinear mapping. Solving the nonlinear system (1) means finding a vector $S^* = (s_1^*, s_2^*, \ldots, s_n^*)^{T}$ such that $P(S^*) = 0$. The classical Newton method is one of the most commonly used iterative methods:
$$S^{(k+1)} = S^{(k)} - \left[P'\big(S^{(k)}\big)\right]^{-1} P\big(S^{(k)}\big) \qquad (k = 0, 1, 2, \ldots),$$
where $P'(S^{(k)})$ is the Jacobian matrix of the nonlinear function $P(S)$ evaluated at the point $S^{(k)}$ of the $k$th iteration (see [1,2,3]). Newton's method converges quadratically to the solution $S^*$ if the function $P$ is continuous and differentiable. In recent years, several methods based on quadrature formulas and fractional iterative schemes have been developed to improve the iteration for systems of nonlinear equations (see [4,5,6,7,8,9,10,11,12]). In particular, Cordero and Torregrosa [9] developed the third-order Newton–Simpson method:
$$S^{(k+1)} = S^{(k)} - 6\left[P'\big(S^{(k)}\big) + 4 P'\!\left(\frac{S^{(k)} + Y^{(k)}}{2}\right) + P'\big(Y^{(k)}\big)\right]^{-1} P\big(S^{(k)}\big),$$
and the Open Newton method:
$$S^{(k+1)} = S^{(k)} - 3\left[2 P'\!\left(\frac{S^{(k)} + 3Z^{(k)}}{4}\right) - P'\!\left(\frac{S^{(k)} + Z^{(k)}}{2}\right) + 2 P'\!\left(\frac{3S^{(k)} + Z^{(k)}}{4}\right)\right]^{-1} P\big(S^{(k)}\big),$$
where $Z^{(k)}$ denotes the Newton approximation. Khirallah and Hafiz [13] suggested a cubically convergent method based on the four-point Newton–Cotes formula for solving systems of nonlinear equations:
$$S^{(k+1)} = S^{(k)} - 8\left[P'\big(S^{(k)}\big) + 3 P'\!\left(\frac{2S^{(k)} + Z^{(k)}}{3}\right) + 3 P'\!\left(\frac{S^{(k)} + 2Z^{(k)}}{3}\right) + P'\big(Z^{(k)}\big)\right]^{-1} P\big(S^{(k)}\big).$$
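All of the iterations above share the template of the classical Newton step: an update obtained by solving a linear system built from one or more Jacobian evaluations. As a point of reference, a minimal NumPy sketch of the classical Newton iteration is given below; the function name, tolerance, and iteration cap are illustrative assumptions and not part of the original text.

```python
import numpy as np

def newton_system(P, jac, S0, tol=1e-15, max_iter=100):
    """Classical Newton iteration: S <- S - [P'(S)]^{-1} P(S)."""
    S = np.asarray(S0, dtype=float)
    for k in range(1, max_iter + 1):
        delta = np.linalg.solve(jac(S), P(S))  # solve P'(S) * delta = P(S)
        S = S - delta
        if np.linalg.norm(delta) < tol:
            return S, k
    return S, max_iter
```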
The quadrature rule is used to approximate the definite integral of a function. The general form of a quadrature rule is given by [14]
$$Q = \int_a^b v(r)\, s(r)\, dr \approx Q_m = \sum_{i=0}^{m} w_i\, s(r_i),$$
where $v(r)$ is a weight function, $w_i$ ($i = 0, 1, 2, \ldots, m$) are the coefficients (weights), $r_i$ are the nodes of the rule, and $s$ is a given function that is integrable on the interval $[a, b]$ with respect to the weight function $v$.
Motivated and inspired by the ongoing research in this area, we introduce a new iterative method for solving systems of nonlinear equations. Several numerical examples are considered to show the effectiveness of the proposed method, and the numerical results agree with the theoretical analysis of the scheme. We also solve nonlinear boundary-value problems with the proposed method; it gives better results than the other methods considered and converges more rapidly to the solution. Section 5 concludes the paper.

2. Three-Step Newton Method

Let $P : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be an $s$-times Fréchet-differentiable function on a convex set $D \subseteq \mathbb{R}^n$. Using the mean-value theorem for the multivariable vector function $P(S^{(k)})$ (see [1]), we have
$$P(S) - P\big(S^{(k)}\big) = \int_0^1 P'\!\left(S^{(k)} + r\big(S - S^{(k)}\big)\right)\big(S - S^{(k)}\big)\, dr.$$
Using the left rectangular rule, the right-hand side of (5) can be written as
$$\int_0^1 P'\!\left(S^{(k)} + r\big(S - S^{(k)}\big)\right)\big(S - S^{(k)}\big)\, dr \approx P'\big(S^{(k)}\big)\big(S - S^{(k)}\big).$$
From (5) and (6), we get
$$S = S^{(k)} - \left[P'\big(S^{(k)}\big)\right]^{-1} P\big(S^{(k)}\big) \qquad (k = 0, 1, 2, \ldots).$$
Replacing $S$ by $S^{(k+1)}$ in (7), we get the Newton method. Using (5) together with different numerical integration formulas, one can obtain different iterative methods such as (2), (3), and (4). To develop the new iterative method, we approximate the integral in (5) by the following three-point Gauss–Legendre integration formula:
$$\int_x^y f(t)\, dt \approx \frac{y - x}{9}\left[4 f\!\left(\frac{y + x}{2}\right) + \frac{5}{2} f\!\left(\frac{y - x}{2}\sqrt{\frac{3}{5}} + \frac{y + x}{2}\right) + \frac{5}{2} f\!\left(-\frac{y - x}{2}\sqrt{\frac{3}{5}} + \frac{y + x}{2}\right)\right].$$
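As a quick check, this three-point rule can be coded directly; it integrates polynomials of degree at most five exactly. The snippet below is a small illustrative sketch (the test integrands are arbitrary choices, not taken from the paper):

```python
import numpy as np

def gauss3(f, x, y):
    """Three-point Gauss-Legendre rule on [x, y], as in the formula above."""
    mid = 0.5 * (x + y)
    off = 0.5 * (y - x) * np.sqrt(3.0 / 5.0)
    return (y - x) / 9.0 * (4.0 * f(mid) + 2.5 * f(mid + off) + 2.5 * f(mid - off))

print(gauss3(lambda t: t**5, 0.0, 1.0))  # exactly 1/6 = 0.1666...
print(gauss3(np.exp, 0.0, 1.0))          # ~1.718281..., close to e - 1
```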
Thus, from (5) and (8), we have
$$\int_0^1 P'\!\left(S^{(k)} + r\big(S - S^{(k)}\big)\right)\big(S - S^{(k)}\big)\, dr \approx \frac{1}{9}\left[4 P'\!\left(\frac{S + S^{(k)}}{2}\right) + \frac{5}{2} P'\!\left(\frac{S - S^{(k)}}{2}\sqrt{\frac{3}{5}} + \frac{S + S^{(k)}}{2}\right) + \frac{5}{2} P'\!\left(-\frac{S - S^{(k)}}{2}\sqrt{\frac{3}{5}} + \frac{S + S^{(k)}}{2}\right)\right]\big(S - S^{(k)}\big).$$
Moreover, from (1), (5), and (9), we get
$$0 \approx P\big(S^{(k)}\big) + \frac{1}{9}\left[4 P'\!\left(\frac{S + S^{(k)}}{2}\right) + \frac{5}{2} P'\!\left(\frac{S - S^{(k)}}{2}\sqrt{\frac{3}{5}} + \frac{S + S^{(k)}}{2}\right) + \frac{5}{2} P'\!\left(-\frac{S - S^{(k)}}{2}\sqrt{\frac{3}{5}} + \frac{S + S^{(k)}}{2}\right)\right]\big(S - S^{(k)}\big).$$
From (10), the iterative scheme is given by
$$S \approx S^{(k)} - 9\left[4 P'\!\left(\frac{S + S^{(k)}}{2}\right) + \frac{5}{2} P'\!\left(\frac{S - S^{(k)}}{2}\sqrt{\frac{3}{5}} + \frac{S + S^{(k)}}{2}\right) + \frac{5}{2} P'\!\left(-\frac{S - S^{(k)}}{2}\sqrt{\frac{3}{5}} + \frac{S + S^{(k)}}{2}\right)\right]^{-1} P\big(S^{(k)}\big).$$
Subsequently, we use the $k$th iterates $T^{(k)}$ and $Z^{(k)}$ of Newton's method to replace $S$ and $S^{(k)}$, respectively, on the right-hand side of (11) and obtain the new iterative scheme below.
Algorithm 1: Three-Step Newton Method
Step 1: Select an initial guess $S^{(0)} \in \mathbb{R}^n$ and a tolerance $\epsilon > 0$, and set $k = 0$.
Step 2: Compute the two Newton substeps
$$Z^{(k)} = S^{(k)} - \left[P'\big(S^{(k)}\big)\right]^{-1} P\big(S^{(k)}\big), \qquad T^{(k)} = Z^{(k)} - \left[P'\big(Z^{(k)}\big)\right]^{-1} P\big(Z^{(k)}\big).$$
Step 3: Set
$$H^{(k)} = \frac{T^{(k)} + Z^{(k)}}{2}, \qquad W^{(k)} = \frac{T^{(k)} - Z^{(k)}}{2}\sqrt{\frac{3}{5}}, \qquad J^{(k)} = -\frac{T^{(k)} - Z^{(k)}}{2}\sqrt{\frac{3}{5}}.$$
Step 4: Compute
$$S^{(k+1)} = T^{(k)} - 9\left[4 P'\big(H^{(k)}\big) + \frac{5}{2} P'\big(H^{(k)} + W^{(k)}\big) + \frac{5}{2} P'\big(H^{(k)} + J^{(k)}\big)\right]^{-1} P\big(T^{(k)}\big).$$
Step 5: If $\|S^{(k+1)} - S^{(k)}\| < \epsilon$, stop; otherwise, set $k = k + 1$ and go to Step 2.
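A compact NumPy sketch of Algorithm 1 is given below. It follows the steps above, with the sign convention for $W^{(k)}$ and $J^{(k)}$ as reconstructed here; the function names, default tolerance, and iteration cap are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def three_step_newton(P, jac, S0, tol=1e-15, max_iter=50):
    """Three-step Newton method (Algorithm 1).

    P   : callable returning the residual vector P(S)
    jac : callable returning the Jacobian matrix P'(S)
    """
    c = np.sqrt(3.0 / 5.0)
    S = np.asarray(S0, dtype=float)
    for k in range(1, max_iter + 1):
        # Step 2: two Newton substeps
        Z = S - np.linalg.solve(jac(S), P(S))
        T = Z - np.linalg.solve(jac(Z), P(Z))
        # Step 3: Gauss-Legendre points built from T and Z
        H = 0.5 * (T + Z)
        Wk = 0.5 * (T - Z) * c
        Jk = -0.5 * (T - Z) * c
        # Step 4: weighted combination of Jacobians and the update
        A = 4.0 * jac(H) + 2.5 * jac(H + Wk) + 2.5 * jac(H + Jk)
        S_new = T - 9.0 * np.linalg.solve(A, P(T))
        # Step 5: stopping criterion
        if np.linalg.norm(S_new - S) < tol:
            return S_new, k
        S = S_new
    return S, max_iter
```

Each iteration of this sketch uses three evaluations of $P$, five Jacobian evaluations, and three linear solves.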
In the next section, we discuss the convergence of the proposed method.

3. Convergence Analysis

In the following theorem, we prove the convergence of the proposed method.
Theorem 1.
Suppose that the function $P : U \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is sufficiently Fréchet differentiable at each point of an open convex neighborhood $U$ of the solution $S^* \in \mathbb{R}^n$ of (1). Assume also that $P'(S)$ is continuous and nonsingular at $S = S^*$. Then the sequence $\{S^{(k)}\}$ generated by Algorithm 1 converges to $S^*$ with sixth order of convergence, and the error equation is given by
$$e_{k+1} = 3 C_2^5 e_k^6 + O\big(e_k^7\big),$$
where
$$e_k = S^{(k)} - S^*.$$
Proof. 
Let $P : D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be $s$-times Fréchet differentiable in $U$. Then, using the usual notation, the $m$th derivative of $P$ at $v \in \mathbb{R}^n$ is the $m$-linear function $P^{(m)}(v) : \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}^n$ such that $P^{(m)}(v)(u_1, u_2, \ldots, u_m) \in \mathbb{R}^n$. Suppose now that $S^* + h \in \mathbb{R}^n$ lies in a neighborhood of $S^*$. The Taylor expansion of $P(S^* + h)$ can be written in the form:
$$P(S^* + h) = P'(S^*)\left[h + \sum_{m=2}^{f-1} C_m h^m\right] + O\big(h^f\big),$$
where
$$C_m = \frac{1}{m!}\left[P'(S^*)\right]^{-1} P^{(m)}(S^*) \qquad (m \geq 2).$$
We observe that $C_m h^m \in \mathbb{R}^n$, since $P^{(m)}(S^*) \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$ and $\left[P'(S^*)\right]^{-1} \in \mathcal{L}(\mathbb{R}^n)$.
In addition, we can express $P'$ as follows:
$$P'(S^* + h) = P'(S^*)\left[I + \sum_{m=2}^{f-1} m\, C_m h^{m-1}\right] + O\big(h^{f-1}\big),$$
where $I$ is the $n \times n$ identity matrix. We note that $m\, C_m h^{m-1} \in \mathcal{L}(\mathbb{R}^n)$. From (13) and (14), we get
$$P\big(S^{(k)}\big) = P'(S^*)\left[e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4 + C_5 e_k^5 + C_6 e_k^6 + \cdots\right],$$
$$P'\big(S^{(k)}\big) = P'(S^*)\left[I + 2 C_2 e_k + 3 C_3 e_k^2 + 4 C_4 e_k^3 + 5 C_5 e_k^4 + 6 C_6 e_k^5 + \cdots\right],$$
where
$$C_k = \frac{1}{k!}\left[P'(S^*)\right]^{-1} P^{(k)}(S^*) \qquad (k = 2, 3, 4, \ldots)$$
and
$$e_k = S^{(k)} - S^*.$$
From (15), we have
$$\left[P'\big(S^{(k)}\big)\right]^{-1} = \left[I - 2 C_2 e_k + \big(4 C_2^2 - 3 C_3\big) e_k^2 + \big({-4 C_4} + 6 C_2 C_3 + 6 C_3 C_2 - 8 C_2^3\big) e_k^3 + \big(16 C_2^4 - 36 C_2^2 C_3 + 16 C_2 C_4 + 9 C_3^2 - 5 C_5\big) e_k^4 + \cdots\right]\left[P'(S^*)\right]^{-1}.$$
By multiplying $\left[P'\big(S^{(k)}\big)\right]^{-1}$ and $P\big(S^{(k)}\big)$, we obtain
$$\left[P'\big(S^{(k)}\big)\right]^{-1} P\big(S^{(k)}\big) = e_k - C_2 e_k^2 + 2\big(C_2^2 - C_3\big) e_k^3 + \big({-4 C_2^3} + 4 C_2 C_3 + 3 C_3 C_2 - 3 C_4\big) e_k^4 + \big(8 C_2^4 - 20 C_2^2 C_3 + 6 C_3^2 + 10 C_2 C_4 - 4 C_5\big) e_k^5 + \big({-16 C_2^5} + 52 C_2^3 C_3 - 33 C_2 C_3^2 - 28 C_2^2 C_4 + 17 C_3 C_4 + 13 C_2 C_5 - 5 C_6\big) e_k^6 + \cdots.$$
Taylor's series expansion of $P\big(Z^{(k)}\big)$ is given by
$$P\big(Z^{(k)}\big) = P'(S^*)\left[\big(Z^{(k)} - S^*\big) + C_2 \big(Z^{(k)} - S^*\big)^2 + C_3 \big(Z^{(k)} - S^*\big)^3 + C_4 \big(Z^{(k)} - S^*\big)^4 + C_5 \big(Z^{(k)} - S^*\big)^5 + C_6 \big(Z^{(k)} - S^*\big)^6 + \cdots\right],$$
where
$$C_k = \frac{1}{k!}\left[P'(S^*)\right]^{-1} P^{(k)}(S^*) \qquad (k = 2, 3, 4, \ldots).$$
Moreover, $Z^{(k)}$ can be written as follows:
$$Z^{(k)} = S^* + C_2 e_k^2 - 2\big(C_2^2 - C_3\big) e_k^3 - \big(4 C_2^3 + 7 C_2 C_3 - 3 C_4\big) e_k^4 + \big(4 C_5 - 12 C_2^4 + 24 C_2^2 C_3 - 10 C_2 C_4 - 6 C_3^2\big) e_k^5 + \big(16 C_2^5 - 52 C_2^3 C_3 + 28 C_2^2 C_4 + 33 C_2 C_3^2 - 13 C_2 C_5 - 17 C_3 C_4 + 5 C_6\big) e_k^6 + \cdots.$$
Taylor's series expansion of $P'\big(Z^{(k)}\big)$ is given by
$$P'\big(Z^{(k)}\big) = P'(S^*)\left[I + 2 C_2 \big(Z^{(k)} - S^*\big) + 3 C_3 \big(Z^{(k)} - S^*\big)^2 + 4 C_4 \big(Z^{(k)} - S^*\big)^3 + 5 C_5 \big(Z^{(k)} - S^*\big)^4\right] + O\!\left(\big(Z^{(k)} - S^*\big)^5\right).$$
Putting (20) in (21), we have
$$P'\big(Z^{(k)}\big) = P'(S^*)\Big[I + 2 C_2^2 e_k^2 - 4 C_2\big(C_2^2 - C_3\big) e_k^3 - C_2\big(8 C_2^3 + 11 C_2 C_3 - 6 C_4\big) e_k^4 + \big(16 C_2^5 + 28 C_2^3 C_3 - 20 C_2^2 C_4 + 8 C_2 C_5\big) e_k^5 + \big(32 C_2^6 - 68 C_2^4 C_3 + 60 C_2^3 C_4 - 26 C_2^2 C_5 - 16 C_2 C_3 C_4 + 12 C_3^3 + 10 C_2 C_6\big) e_k^6 + \cdots\Big].$$
Upon multiplying $\left[P'\big(Z^{(k)}\big)\right]^{-1}$ by $P\big(Z^{(k)}\big)$, we have
$$\left[P'\big(Z^{(k)}\big)\right]^{-1} P\big(Z^{(k)}\big) = C_2 e_k^2 + 2\big({-C_2^2} + C_3\big) e_k^3 + \big(3 C_2^3 - 7 C_2 C_3 + 3 C_4\big) e_k^4 + \big({-4 C_2^4} + 16 C_2^2 C_3 - 10 C_2 C_4 - 6 C_3^2 + 4 C_5\big) e_k^5 + \big(6 C_2^5 - 32 C_2^3 C_3 + 22 C_2^2 C_4 + 29 C_2 C_3^2 - 13 C_2 C_5 - 17 C_3 C_4 + 5 C_6\big) e_k^6 + \cdots.$$
The expression for $T^{(k)}$ is given below:
$$T^{(k)} = S^* + 3 C_2^3 e_k^4 - \big(16 C_2^4 - 20 C_2^2 C_3 + 6 C_3^2 + 10 C_2 C_4 - C_5\big) e_k^5 - \big(24 C_2^5 + 96 C_2^3 C_3 - 41 C_2 C_3^2 - 40 C_2^2 C_4 - 12 C_2^5 + 17 C_3 C_4 + 13 C_2 C_5 + 8 C_2^6 - 5 C_6\big) e_k^6 + \cdots.$$
Similarly, $P\big(T^{(k)}\big)$ can be written as follows:
$$P\big(T^{(k)}\big) = P'(S^*)\Big[3 C_2^3 e_k^4 - \big(16 C_2^4 - 20 C_2^2 C_3 + 6 C_3^2 + 10 C_2 C_4 - 4 C_5\big) e_k^5 - \big(24 C_2^2 + 96 C_2^3 C_3 - 41 C_2 C_3^2 - 40 C_2^2 C_4 - 12 C_2^5 + 17 C_3 C_4 + 13 C_2 C_5 + 8 C_2^6 - 5 C_6\big) e_k^6 + \cdots\Big].$$
Furthermore, we have
$$H^{(k)} = \frac{1}{2} C_2 e_k^2 + \big(C_2^2 + C_3\big) e_k^3 + \left(\frac{7}{2} C_2 C_3 + \frac{3}{2} C_4 + \frac{5}{2} C_2^3\right) e_k^4 + \left(6 C_2^4 + 12 C_2^2 C_3 - 5 C_2 C_4 - 3 C_3^2 + 2 C_5 + \frac{69}{5} C_2^3 C_3 - 10 C_2^2 C_4\right) e_k^5 + \left(\frac{37}{2} C_2 C_3^2 + 17 C_2^2 C_4 - \frac{13}{2} C_2 C_5 - 36 C_2^3 C_3 + \frac{5}{2} C_6\right) e_k^6 + \cdots$$
and
$$J^{(k)} = \left(\frac{1}{2} C_2 + \frac{\sqrt{15}}{5} C_2\right) e_k^2 + \left(C_2^2 + C_3 - \frac{2\sqrt{15}}{5} C_2^2 + \frac{2\sqrt{15}}{5} C_3\right) e_k^3 + \left(\frac{3}{2} C_4 + \frac{5}{2} C_2^3 + \frac{3\sqrt{15}}{5} C_2^3 + \frac{3\sqrt{15}}{5} C_4 - \frac{7\sqrt{15}}{5} C_2 C_3 - \frac{7}{2} C_2 C_3\right) e_k^4 + \cdots$$
Similarly, we have
$$W^{(k)} = \left(\frac{1}{2} C_2 - \frac{\sqrt{15}}{5} C_2\right) e_k^2 + \left(C_2^2 + C_3 + \frac{2\sqrt{15}}{5} C_2^2 - \frac{2\sqrt{15}}{5} C_3\right) e_k^3 + \left(\frac{3}{2} C_4 + \frac{5}{2} C_2^3 - \frac{3\sqrt{15}}{5} C_2^3 - \frac{3\sqrt{15}}{5} C_4 + \frac{7\sqrt{15}}{5} C_2 C_3 - \frac{7}{2} C_2 C_3\right) e_k^4 + \cdots.$$
The expression $P'\big(H^{(k)} + J^{(k)}\big)$ can be written as follows:
$$P'\big(H^{(k)} + J^{(k)}\big) = I + \left(C_2^2 - \frac{2\sqrt{15}}{5} C_2^2\right) e_k^2 + \left(2 C_2^3 - \frac{4\sqrt{15}}{5} C_2 C_3 + 2 C_2 C_3 + \frac{4\sqrt{15}}{5} C_2^3\right) e_k^3 + \left(5 C_2^4 + \frac{11\sqrt{15}}{5} C_2^2 C_3 - \frac{6\sqrt{15}}{5} C_2 C_4 - \frac{89}{20} C_2^2 C_3 + 3 C_2 C_4 - \frac{6\sqrt{15}}{5} C_2^4\right) e_k^4 + \cdots$$
and
$$P'\big(H^{(k)} + W^{(k)}\big) = I + \left(C_2^2 - \frac{2\sqrt{15}}{5} C_2^2\right) e_k^2 + \left(2 C_2^3 + \frac{4\sqrt{15}}{5} C_2 C_3 + 2 C_2 C_3 - \frac{4\sqrt{15}}{5} C_2^3\right) e_k^3 + \left(5 C_2^4 - \frac{11\sqrt{15}}{5} C_2^2 C_3 + \frac{6\sqrt{15}}{5} C_2 C_4 - \frac{89}{20} C_2^2 C_3 + 3 C_2 C_4 + \frac{6\sqrt{15}}{5} C_2^4\right) e_k^4 + \cdots.$$
Furthermore, we have
$$P'\big(H^{(k)}\big) = I + C_2^2 e_k^2 + \big(2 C_2^3 + 2 C_2 C_3\big) e_k^3 + \left(5 C_2^4 - \frac{25}{4} C_2^2 C_3 + 3 C_2 C_4\right) e_k^4 + \big(12 C_2^5 + 21 C_2^3 C_3 - 10 C_2^2 C_4 - 3 C_2 C_3^2 + 4 C_2 C_5\big) e_k^5 + \cdots.$$
From (29)–(31), we have
$$\left[4 P'\big(H^{(k)}\big) + \frac{5}{2} P'\big(H^{(k)} + W^{(k)}\big) + \frac{5}{2} P'\big(H^{(k)} + J^{(k)}\big)\right]^{-1} = \frac{1}{3} - C_2^2 e_k^2 + \left(\frac{2}{3} C_2^3 - \frac{2}{3} C_2 C_3\right) e_k^3 + \left(\frac{2}{3} C_2^4 + \frac{7}{4} C_2^2 C_3 - C_2 C_4\right) e_k^4 + \left(\frac{5}{3} C_2^3 C_3 + \frac{10}{3} C_2^2 C_4 - \frac{1}{3} C_2 C_3^2 - \frac{4}{3} C_2 C_5\right) e_k^5 + \cdots.$$
From (18)–(32), we obtain
$$e_{k+1} = 3 C_2^5 e_k^6 + \big(38 C_2^6 + 78 C_2^4 C_3 + 32 C_2^3 C_4 - 28 C_2^2 C_3^2 + 8 C_2^2 C_5 + 12 C_2 C_3 C_4\big) e_k^7 + \cdots.$$
From (33), we conclude that the proposed method yields convergence of order 6. □

4. Numerical Results

In this section, we consider some test problems to show the performance and efficiency of the newly developed method. We compare Algorithm 1 with Newton's method (NM) (see [6]) and with methods (4), (5), (23), (25), and (27) of [15]. The stopping criterion is
$$\mathrm{Error} = \big\|S^{(k+1)} - S^{(k)}\big\| < 10^{-15},$$
and k denotes the number of iterations. The computational order of convergence q (see [16]) is approximated by
$$q \approx \frac{\ln\!\big(\big\|S^{(k+1)} - S^{(k)}\big\| \,/\, \big\|S^{(k)} - S^{(k-1)}\big\|\big)}{\ln\!\big(\big\|S^{(k)} - S^{(k-1)}\big\| \,/\, \big\|S^{(k-1)} - S^{(k-2)}\big\|\big)}.$$
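In code, $q$ can be estimated from the last four stored iterates. The helper below is a minimal sketch of the quotient above (the function name and the convention that the newest iterate comes last are assumptions made here for illustration):

```python
import numpy as np

def computational_order(iterates):
    """Estimate q from the last four iterates, with iterates[-1] the newest."""
    s0, s1, s2, s3 = iterates[-4], iterates[-3], iterates[-2], iterates[-1]
    num = np.log(np.linalg.norm(s3 - s2) / np.linalg.norm(s2 - s1))
    den = np.log(np.linalg.norm(s2 - s1) / np.linalg.norm(s1 - s0))
    return num / den
```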
Consider the following systems of nonlinear equations (see [16]).
Problem 1.
$$x^2 + y^2 - 6 + y - 17 = 0, \qquad x^2 - y - 19 = 0.$$
Problem 2.
$$x^2 + y^2 + z^2 - 1 = 0, \qquad 2x^2 + y^2 - 4z = 0, \qquad 3x^2 - 4y^2 + z^2 = 0.$$
Problem 3.
$$e^{x^2} + 8x \sin(y) = 0, \qquad x + 2y - 1 = 0.$$
Problem 4.
$$\cos(x) - \sin(y) = 0, \qquad z^{x} - \frac{1}{y} = 0, \qquad e^{x} - z^2 = 0.$$
Problem 5.
$$x^2 + y^2 + z^2 - 9 = 0, \qquad xyz - 1 = 0, \qquad x + y - z^2 = 0.$$
Problem 6.
$$xy + xz + yz - 1 = 0, \qquad yz + w(y + z) = 0, \qquad xz + w(x + z) = 0, \qquad xy + w(x + y) = 0.$$
Numerical results are given in Table 1 below.
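To illustrate how these test systems are passed to the method, the following sketch codes Problem 2 together with its Jacobian and calls the three_step_newton function sketched in Section 2; the initial guess is the one listed in Table 1.

```python
import numpy as np

def P2(v):
    x, y, z = v
    return np.array([x**2 + y**2 + z**2 - 1.0,
                     2.0 * x**2 + y**2 - 4.0 * z,
                     3.0 * x**2 - 4.0 * y**2 + z**2])

def J2(v):
    x, y, z = v
    return np.array([[2.0 * x,  2.0 * y,  2.0 * z],
                     [4.0 * x,  2.0 * y, -4.0],
                     [6.0 * x, -8.0 * y,  2.0 * z]])

root, iters = three_step_newton(P2, J2, [0.5, 0.5, 0.5])
print(root, iters)  # expected to approach (0.698..., 0.628..., 0.342...) in a few iterations
```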
Problem 7
([15]). Consider a nonlinear boundary-value problem of the following form:
$$y''(t) + y^{1+b}(t) = 0 \quad \big(t \in [0, 1];\ b > 0\big), \qquad y(0) = 0, \quad y(1) = 1.$$
Here we have discretized the above nonlinear ODE (35) by using the finite difference method.
By taking b = 2.5 and n = 10 , we obtain the following system of nonlinear equations.
$$\begin{aligned}
100 y_2 - 200 y_1 + y_1^{3.5} &= 0,\\
100 y_3 - 200 y_2 + 100 y_1 + y_2^{3.5} &= 0,\\
100 y_4 - 200 y_3 + 100 y_2 + y_3^{3.5} &= 0,\\
100 y_5 - 200 y_4 + 100 y_3 + y_4^{3.5} &= 0,\\
100 y_6 - 200 y_5 + 100 y_4 + y_5^{3.5} &= 0,\\
100 y_7 - 200 y_6 + 100 y_5 + y_6^{3.5} &= 0,\\
100 y_8 - 200 y_7 + 100 y_6 + y_7^{3.5} &= 0,\\
100 y_9 - 200 y_8 + 100 y_7 + y_8^{3.5} &= 0,\\
-200 y_9 + 100 y_8 + 100 + y_9^{3.5} &= 0,
\end{aligned}$$
where $y^{(0)} = (1, 1, 1, 1, 1, 1, 1, 1, 1)^{T}$ is the initial guess. We obtain the approximate solution as follows:
$$y^* = (0.1039574502033,\ 0.2079117381290,\ 0.3118302489670,\ 0.4156008747144,\ 0.5189667289214,\ 0.6214486996519,\ 0.7222575415768,\ 0.8201966396108,\ 0.9135562704267)^{T}.$$
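A sketch of how the residual and the tridiagonal Jacobian of this discretized system can be assembled is given below (for general $n$ and $b$, with $h = 1/n$); it reuses the three_step_newton sketch from Section 2 and assumes the iterates stay positive so that the fractional power $y^{1+b}$ remains real.

```python
import numpy as np

def bvp_residual(y, b=2.5, n=10):
    """Finite-difference residual of y'' + y^(1+b) = 0, y(0) = 0, y(1) = 1."""
    inv_h2 = float(n * n)                    # 1/h^2 = 100 for n = 10
    yy = np.concatenate(([0.0], y, [1.0]))   # append the boundary values
    return inv_h2 * (yy[2:] - 2.0 * yy[1:-1] + yy[:-2]) + yy[1:-1] ** (1.0 + b)

def bvp_jacobian(y, b=2.5, n=10):
    """Tridiagonal Jacobian of the residual above."""
    inv_h2 = float(n * n)
    m = len(y)
    J = np.diag(-2.0 * inv_h2 + (1.0 + b) * y ** b)
    J += np.diag(np.full(m - 1, inv_h2), 1) + np.diag(np.full(m - 1, inv_h2), -1)
    return J

y0 = np.ones(9)
sol, iters = three_step_newton(bvp_residual, bvp_jacobian, y0)
print(np.round(sol, 10), iters)  # should reproduce the y* values reported above
```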
We compare Algorithm 1 with the Newton–Simpson method (NS-M) and the Open Newton method (ON-M) (see [9]), the four-point method (KH-M) (see [13]), the Newton–Gauss method (NG-M), and the fifth-order scheme (M 14) (see [15]). The numerical results are shown in Table 2 below.
From Table 2, we see that the proposed method converges to the solution in just two iterations. To illustrate the performance of the new method, we plot the approximate solution against the Maple solution in Figure 1.
In the next problem, we compare Algorithm 1 with M6 [17] of order 6.
Problem 8.
Consider the following integral equation:
$$y(r) - 1 - \frac{1}{4}\,\mu\, y(r) \int_0^1 k(r, u)\, y(u)\, du = 0.$$
Discretizing (38), we obtain the following system of nonlinear equations:
$$y_i - 1 + \frac{1}{8}\, y_i \sum_{j=1}^{8} \frac{u_i\, \beta_j}{u_i + u_j}\, y_j = 0, \qquad i = 1, 2, \ldots, 8.$$
For more details, see [17]. We compare Algorithm 1 with M6 [17] in Table 3.
From the last column of Table 3, we conclude that the new method is more accurate than M6 [17].

5. Conclusions

In this article, we have implemented a new three-step Newton method for solving a system of nonlinear equations. The order of convergence of the proposed method is six. To show the effectiveness of the new method, we have provided some numerical tests. The graphical illustration shows the accuracy of the proposed method. Numerical results confirmed that the suggested method converges to the solution in fewer iterations with high accuracy, which justifies the advantage of this method.

Author Contributions

Conceptualization, H.M.S., J.I. and A.K.; methodology, J.I., M.A. and A.K.; formal analysis, Y.S.G., J.I. and R.C.; investigation, review and editing, H.M.S., J.I. and M.A.; writing, J.I. and A.K.; funding acquisition, R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been supported by the grant provided by Division of Computational Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Atkinson, K.E. An Introduction to Numerical Analysis, 2nd ed.; John Wiley and Sons: New York, NY, USA, 1987.
2. Abbasbandy, S. Extended Newton's method for a system of nonlinear equations by modified Adomian decomposition method. Appl. Math. Comput. 2005, 170, 648–656.
3. Babajee, D.K.R.; Dauhoo, M.Z. An analysis of the properties of the variants of Newton's method with third order convergence. Appl. Math. Comput. 2006, 183, 659–684.
4. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Barati, A. A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule. Appl. Math. Comput. 2008, 200, 452–458.
5. Babolian, E.; Biazar, J.; Vahidi, A.R. Solution of a system of nonlinear equations by Adomian decomposition method. Appl. Math. Comput. 2004, 150, 847–854.
6. Burden, R.L.; Faires, J.D. Numerical Analysis, 7th ed.; PWS Publishing Company: Boston, MA, USA, 2001.
7. Candelario, G.; Cordero, A.; Torregrosa, J.R. Multipoint Fractional Iterative Methods with (2α+1)th-Order of Convergence for Solving Nonlinear Problems. Mathematics 2020, 8, 452.
8. Cordero, A.; Torregrosa, J.R. Variants of Newton's method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208.
9. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
10. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635.
11. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685.
12. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve system of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261.
13. Khirallah, M.Q.; Hafiz, M.A. Novel three order methods for solving a system of nonlinear equations. Bull. Math. Sci. Appl. 2012, 2, 1–14.
14. Noeiaghdam, S.; Araghi, M.A.F. A novel algorithm to evaluate definite integrals by the Gauss–Legendre integration rule based on the stochastic arithmetic: Application in the model of osmosis system. Math. Model. Eng. Prob. 2020, 7, 577–586.
15. Su, Q.-F. A unified model for solving a system of nonlinear equations. Appl. Math. Comput. 2016, 290, 46–55.
16. Liu, Z.-L.; Zheng, Q.; Huag, C.-E. Third- and fifth-order Newton–Gauss methods for solving system of nonlinear equations with n variables. Appl. Math. Comput. 2016, 290, 250–257.
17. Maduh, K. Sixth order Newton-type method for solving system of nonlinear equations and its applications. Appl. Math. E-Notes 2017, 17, 221–230.
Figure 1. Comparison between the exact solution ( Maple solution) and the approximate solution.
Table 1. Numerical results for Problems 1 to 6 (k: number of iterations; q: computational order of convergence).

Problem 1 (initial guess (5.5, 6.8)^T; all methods converge to (5.0000000000000, 6.0000000000000)^T):
  NM: k = 6, q = 2.0
  (4): k = 4, q = 2.9
  (5): k = 4, q = 2.9
  (23): k = 4, q = 3.9
  (25): k = 4, q = 3.9
  (27): k = 4, q = 3.9
  Algorithm 1: k = 3, q = 5.9

Problem 2 (initial guess (0.5, 0.5, 0.5)^T; converged solution (0.698288610, 0.628524230, 0.342564189)^T):
  NM: k = 6, q = 2.0
  (4): k = 4, q = 3.0
  (5): k not reported, q = 3.0
  (23): k = 4, q = 4.0
  (25): k = 4, q = 4.0
  (27): k = 4, q = 4.0
  Algorithm 1: k = 3, q = 5.9

Problem 3 (initial guess (0.5, 1.0)^T; converged solution (0.22850805121143, 0.61425402560572)^T):
  NM: k = 7, q = 2.0
  (4): k = 4, q = 3.0
  (5): k = 4, q = 3.0
  (23): k = 4, q = 3.8
  (25): k = 4, q = 4.2
  (27): k = 4, q = 4.2
  Algorithm 1: k = 3, q = 6.1

Problem 4 (initial guess (1.0, 0.5, 1.5)^T; converged solution (0.90956949, 0.66122683, 1.57583414)^T):
  NM: k = 7, q = 2.0
  (4): k = 5, q = 3.0
  (5): k = 5, q = 3.0
  (23): k = 4, q = 4.7
  (25): k = 4, q = 4.1
  (27): k = 4, q = 4.1
  Algorithm 1: k = 3, q = 6.3

Problem 5 (initial guess (2.5, 0.5, 1.5)^T; converged solution (2.49137571, 0.2427458788, 1.653517941)^T):
  NM: k = 7, q = 2.0
  (4): k = 5, q = 3.0
  (5): k = 5, q = 3.0
  (23): k = 4, q = 4.5
  (25): k = 4, q = 4.5
  (27): k = 4, q = 4.5
  Algorithm 1: k = 3, q = 6.1

Problem 6 (initial guess (0.6, 0.6, 0.6, 0.2)^T; converged solution (0.577350, 0.577350, 0.577350, 0.288680)^T):
  NM: k = 5, q = 2.1
  (4): k = 4, q = 3.3
  (5): k = 4, q = 3.3
  (23): k = 3, q = 5.5
  (25): k = 3, q = 5.5
  (27): k = 3, q = 5.5
  Algorithm 1: k = 3, q = 6.0
Table 2. Numerical results for Problem 7. For each method, the three rows list ||S^(k+1) - S^(k)||_2, q, and ||F(S^(k))||_2 at the iterations k = 1, 2, 3, 4, 5.

NS-M:
  ||S^(k+1) - S^(k)||_2: 1.61121, 4.7021 × 10^-2, 5.7614 × 10^-7, 1.2500 × 10^-21, 1.3253 × 10^-65
  q: 3.20022, 2.98531, 2.99899, 2.99999, 3.00000
  ||F(S^(k))||_2: 4.3531 × 10^-3, 5.6034 × 10^-8, 1.2677 × 10^-22, 1.3581 × 10^-66, 1.6345 × 10^-198

ON-M:
  ||S^(k+1) - S^(k)||_2: 1.61111, 4.6671 × 10^-2, 5.6464 × 10^-7, 1.17711 × 10^-21, 1.10665 × 10^-65
  q: 3.19700, 2.98555, 2.99891, 2.99999, 3.00000
  ||F(S^(k))||_2: 4.3121 × 10^-3, 5.4934 × 10^-8, 1.1944 × 10^-22, 1.13333 × 10^-66, 9.4989 × 10^-199

KH-M:
  ||S^(k+1) - S^(k)||_2: 1.61111, 4.6911 × 10^-2, 5.7277 × 10^-7, 1.2288 × 10^-21, 1.2577 × 10^-65
  q: 3.19921, 2.98555, 2.99800, 2.99999, 3.00000
  ||F(S^(k))||_2: 4.3411 × 10^-3, 5.5731 × 10^-8, 1.2455 × 10^-22, 1.2888 × 10^-66, 1.3951 × 10^-198

NG-M:
  ||S^(k+1) - S^(k)||_2: 6.44555, 1.8681 × 10^-1, 2.2633 × 10^-6, 4.7391 × 10^-21, 4.5099 × 10^-65
  q: 3.19711, 2.98555, 2.99899, 2.99999, 3.00000
  ||F(S^(k))||_2: 4.3175 × 10^-3, 5.5051 × 10^-8, 1.2011 × 10^-22, 1.1544 × 10^-66, 1.0044 × 10^-198

M(14):
  ||S^(k+1) - S^(k)||_2: 1.64888, 2.0166 × 10^-3, 2.8188 × 10^-18, 1.6500 × 10^-92, 1.1465 × 10^-463
  q: 5.10032, 4.99732, 4.99999, 5.00000, 5.00000
  ||F(S^(k))||_2: 1.9300 × 10^-4, 2.8741 × 10^-19, 1.6941 × 10^-93, 1.1788 × 10^-464, 1.9077 × 10^-2320

Algorithm 1:
  ||S^(k+1) - S^(k)||_2: 0.1622 × 10^-48, 0, -, -, -
  q: 6.10040, 6, -, -, -
  ||F(S^(k))||_2: 0.7544 × 10^-47, 0, -, -, -
Table 3. Numerical results and comparison for Problem 8.

  Newton: 5 iterations, error 3.1408 × 10^-16
  M6: 3 iterations, error 2.2204 × 10^-16
  Algorithm 1: 3 iterations, error 1.0000 × 10^-19