1. Introduction
Finding the roots of nonlinear functions is a significant problem in numerical mathematics and has numerous applications in different branches of the applied sciences [1,2]. In this paper, we consider iterative techniques to locate a multiple root α of multiplicity m, that is, g(α) = g′(α) = ⋯ = g^(m−1)(α) = 0 and g^(m)(α) ≠ 0, of a nonlinear equation g(x) = 0.
The most widely used method for finding a multiple root of g(x) = 0 is the quadratically convergent modified Newton's method (MNM) (see [3]):
$$ x_{n+1} = x_n - m\,\frac{g(x_n)}{g'(x_n)}, \qquad n = 0, 1, 2, \ldots \quad (1) $$
In the literature, there are many iterative methods of various orders of convergence for approximating the multiple roots of g(x) = 0; see, for example, [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]. Such methods require the evaluation of derivatives of either first order or first and second order. The motivation for developing higher order methods is closely related to the Kung–Traub conjecture [20], which establishes an upper bound for the order of convergence attainable with a specific number of functional evaluations: a method without memory that uses k functional evaluations per iteration can have order of convergence at most 2^(k−1). Methods that attain this bound are called optimal methods.
In contrast to the methods that require derivative evaluations, derivative-free techniques for dealing with multiple roots are exceptionally rare. The principal difficulty in constructing such techniques lies in establishing their order of convergence. Derivative-free techniques are important in circumstances where the derivative of the function g is hard to evaluate or is costly to compute. One such derivative-free scheme is the classical Traub–Steffensen iteration [21], which replaces the derivative in the traditional Newton's iteration with a suitable approximation based on a difference quotient,
$$ g'(x_n) \simeq g[w_n, x_n] = \frac{g(w_n) - g(x_n)}{w_n - x_n}, $$
where g[w_n, x_n] is the first order divided difference with w_n = x_n + β g(x_n), β ∈ ℝ∖{0}.
Then the modified Newton's method (1) becomes the modified Traub–Steffensen method
$$ x_{n+1} = x_n - m\,\frac{g(x_n)}{g[w_n, x_n]}, \qquad n = 0, 1, 2, \ldots \quad (2) $$
The modified Traub–Steffensen method (2) is a noticeable improvement over Newton's method, because it maintains quadratic convergence without using any derivative.
The principal objective of this work is to design a general class of derivative-free multiple-root methods, including (2), by utilising the weight-function approach. Particular methods are obtained when specific weight functions (which may depend on one or more parameters) are chosen. The rest of the paper is organised as follows. In Section 2, the scheme for a second-order technique is developed and its convergence is analysed. In Section 3, the basins of attraction are displayed to check the stability of the methods. Numerical experiments on various equations are carried out in Section 4 to exhibit the applicability and efficiency of the schemes introduced here. Concluding remarks are given in Section 5.
2. Formulation of Method
Given a known multiplicity m, we consider the following one-step scheme for multiple roots, with the first step as the Traub–Steffensen iteration (2):
where the weight function is analytic in a neighbourhood of zero.
In the sequel, we shall study the convergence of the proposed iterative scheme (3). For clarity, the results are obtained separately for different cases depending upon the multiplicity m.
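Throughout the proofs below, the Taylor expansion of g about the multiple zero α is the main tool. For reference, writing e_n = x_n − α for the error, the standard expansion for a zero of multiplicity m is (the coefficient symbols C_k are our own shorthand and may differ from the notation used in the paper)
$$ g(x_n) = \frac{g^{(m)}(\alpha)}{m!}\, e_n^{m}\left(1 + C_1 e_n + C_2 e_n^{2} + C_3 e_n^{3} + \cdots\right), \qquad C_k = \frac{m!}{(m+k)!}\,\frac{g^{(m+k)}(\alpha)}{g^{(m)}(\alpha)}, \quad k \ge 1. $$
An analogous expansion of g(w_n) about α, with w_n = x_n + β g(x_n), leads to the expansions of the divided difference and of the weight-function argument used in the error analysis.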
Theorem 1. Assume that is an analytic function in a domain containing a multiple zero (say, α) with multiplicity . Suppose that the initial point is close enough to α; then the convergence order of Formula (3) is at least 2, provided that , and .
Proof. Assume that the error at the n-th stage is . Using the Taylor expansion of about and keeping in mind that , and , we have
where for .
Similarly, we have the Taylor expansion of about
where
By using (4) and (5), we obtain that
Expanding the weight function in a neighbourhood of the origin by Taylor series, we have
Using the expression (6) in (3), it follows that
In order to obtain second order convergence, the constant term and the coefficient of in (7) should simultaneously be equal to zero. This is possible only for the following values of and :
By using the above values in (7), the error relation is given by
Thus, the theorem is proven. ☐
Theorem 2. Assume that is an analytic function in a domain containing a multiple zero (say, α) with multiplicity . Suppose that the initial point is close enough to α; then the convergence order of Formula (3) is at least 2, provided that , and .
Proof. Suppose that the error at the n-th iteration is . Using the Taylor expansion of about and keeping in mind that , , and , we have
where for .
Additionally, the Taylor expansion of about is
where
By using (10) and (11), we obtain that
Expanding the weight function in a neighbourhood of the origin by Taylor series, we have
Inserting the expression (12) into (3), we obtain
In order to obtain second order convergence, the constant term and the coefficient of in (13) should simultaneously be equal to zero. This is possible only for the following values of and :
By using the above values in (13), the error relation is given by
Thus, the theorem is proven. ☐
Remark 1. We can observe from the above results that the number of conditions on is 2 corresponding to the cases to attain the quadratic order of convergence of the method (3). These cases also satisfy the common conditions , . Nevertheless, their error equations differ from each other, as the parameter β does not appear in the equation for . It has been seen that when , the error equation in each such case does not contain a β term. We shall prove this fact in the next theorem.
For the multiplicity , we prove the order of convergence of scheme (3) in the following theorem:
Theorem 3. Let be an analytic function in a region enclosing a multiple zero (say, α) with multiplicity m. Assume that the initial guess is sufficiently close to α; then the iteration scheme defined by (3) has second order of convergence, provided that , and .
Proof. Let the error at the n-th iteration be . Using the Taylor expansion of about , we have
Using (16) in , we obtain
The Taylor expansion of about is given as
From (16)–(18), it follows that
By using (16) and (5), we obtain that
Expanding the weight function in a neighbourhood of the origin by Taylor series, we have
Inserting the expression (20) into the scheme (3), we obtain the error
In order to obtain second order convergence, the constant term and the coefficient of in (21) should simultaneously be equal to zero. This is possible only for the following values of and
By using the above values in (21), the error relation is given by
Hence, second order convergence is established. This completes the proof of the theorem. ☐
Remark 2. It is important to note that the parameter β, which is used in , appears only in the error equations of the cases but not for . For , we have observed that this parameter appears in the coefficients of and higher order terms. However, we do not need such terms in order to establish the required quadratic order of convergence.
We can obtain numerous methods of the family (3) based on the forms of the function that satisfy the conditions (22). However, we restrict the choices to low-order polynomials and simple rational functions. Accordingly, the following simple forms, satisfying the conditions (22), are chosen:
- (1)
- (2)
- (3)
,
- (4)
- (5)
- (6)
- (7)
- (8)
,
where , , , and are free parameters.
The method corresponding to each of the above forms is defined as follows:
Remark 3. The scheme (3) defines a family of second order methods which utilises only two function evaluations, namely g(x_n) and g(w_n). This family is, therefore, optimal in the sense of the Kung–Traub conjecture [20].
3. Basins of Attraction
Our aim here is to analyse the new methods by means of a graphical tool, namely the basins of attraction of the zeros of a polynomial in the Argand plane. Examination of the basins of attraction of the roots produced by an iterative scheme provides significant insight into the convergence of the scheme. This idea was first introduced by Vrscay and Gilbert [22]. Recently, many researchers have utilised this idea in their work; see, for instance, [23,24] and the references therein. We consider different cases of the function of family (3) to study the basins of attraction.
We select as the initial point a point belonging to D, where D is a rectangular region in the complex plane containing all the roots of the equation . An iterative method beginning at a point can converge to a zero of the function or diverge. In order to assess the basins, we consider as the stopping criterion for convergence, up to a maximum of 25 iterations. If this tolerance is not achieved within the allowed number of iterations, the procedure is terminated, and the result indicates divergence of the iteration function started from that point. While drawing the basins, the following criterion is adopted: a colour is allotted to every initial guess in the attraction basin of a zero. If the iterative formula started at a point converges, then that point is painted with the colour assigned to the corresponding basin of attraction; if the formula fails to converge within the required number of iterations, then the point is drawn in black.
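To make the procedure concrete, the following Python sketch paints such basins for a given one-point iteration map over a rectangular grid. The 25-iteration cap follows the text above; the grid size, tolerance, colour handling and the cubic p(z) = z^3 − 1 used in the demonstration are our own illustrative choices and are not the methods or test polynomials of this paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def basin_plot(step, roots, xlim=(-2, 2), ylim=(-2, 2), n=200, max_iter=25, tol=1e-3):
    """Colour each grid point by the root its orbit converges to; 0 (black) marks divergence."""
    xs, ys = np.linspace(*xlim, n), np.linspace(*ylim, n)
    image = np.zeros((n, n))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            z = complex(x, y)
            for _ in range(max_iter):
                try:
                    z = step(z)
                except ZeroDivisionError:   # iteration broke down; leave the point black
                    break
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    image[i, j] = hits[0] + 1   # basin index; non-convergent points stay 0
                    break
    plt.imshow(image, extent=[*xlim, *ylim], origin="lower")
    plt.xlabel("Re(z)"); plt.ylabel("Im(z)")
    plt.show()

# Placeholder demonstration: plain Newton step for p(z) = z^3 - 1
# (not one of the paper's test polynomials or methods M1-M8).
p, dp = (lambda z: z**3 - 1), (lambda z: 3 * z**2)
basin_plot(lambda z: z - p(z) / dp(z),
           roots=[1, np.exp(2j * np.pi / 3), np.exp(-2j * np.pi / 3)])
```

Replacing the Newton step with one of the methods M1–M8 (for a chosen value of β) produces pictures of the kind described in Problems 1 and 2 below.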
We draw the basins of attraction by applying the methods M1–M8 (choosing ) on the following two polynomials:
Problem 1. In the first example, we consider the polynomial , which has zeros with multiplicity 2. In this case, we use a grid of points in a rectangle of size and assign the colours cyan, green, yellow, red and blue to the initial points lying in the basins of attraction of the zeros "," "," "0," "" and "2," respectively. Basins obtained for the methods M1–M8 are shown in Figure 1, Figure 2 and Figure 3, corresponding to . When observing the behaviour of the methods, we see that the method M5 possesses the smallest number of divergent points, followed by M4 and M6. On the contrary, the methods M3 and M7 have the highest numbers of divergent points, followed by M2 and M8. Notice that the basins become wider as the parameter β assumes smaller values.

Problem 2. Let us take the polynomial , which has zeros with multiplicity 3. To see the dynamical view, we consider a rectangle with grid points and allocate the colours cyan, green, yellow, red, purple and blue to the initial points lying in the basins of attraction of the zeros "," "," "1," "," "" and "," respectively. Basins obtained for the methods M1–M8 are shown in Figure 4, Figure 5 and Figure 6, corresponding to the parameter choices in the proposed methods. When observing the behaviour of the methods, we see that the methods M3 and M6 possess the smallest numbers of divergent points, followed by M1, M4, M8, M2, M3 and M7.

From these graphics, we can judge the behaviour and suitability of any method depending on the circumstances. If we choose an initial point in a zone where different basins of attraction meet one another, it is difficult to predict which root will be reached by the iterative method starting from that point; thus, the selection of an initial point in such a zone is not a good one. Both the black zones and the zones with mixed colours are unsuitable for choosing an initial guess when a specific root has to be found. The most intricate pictures appear when the boundaries between the basins of attraction are highly complicated; these correspond to the cases in which the method is more demanding with respect to the initial point and its dynamical behaviour is more unpredictable. We close this section with the remark that the convergence behaviour of the proposed methods depends on the value of the parameter β: the smaller the value of β, the better the convergence of the method.
4. Numerical Results
This section is devoted to demonstrating the convergence behaviour of the presented family. In this regard, we consider the special cases M1–M8 of the proposed class, obtained by choosing , , , and . Performance is compared with the existing classical modified Newton's method (1). We select four test functions for the comparison.
To check the theoretical order of convergence, we obtain the computational order of convergence (COC) using the formula of [25].
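For orientation, a commonly used definition of the COC, when the sought zero α is known, is
$$ \mathrm{COC} \approx \frac{\ln\left|(x_{n+1}-\alpha)/(x_{n}-\alpha)\right|}{\ln\left|(x_{n}-\alpha)/(x_{n-1}-\alpha)\right|}; $$
the approximated variant replaces the errors x_k − α by the consecutive differences x_{k+1} − x_k when α is unknown. This is stated here as the standard definition; the precise formula of [25] may differ in form.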
All computations were performed in the programming package Mathematica using multiple-precision arithmetic with 4096 significant digits. The numerical results displayed in Table 1, Table 2, Table 3 and Table 4 include: (i) the number of iterations required to converge to the solution such that , (ii) the values of the last three consecutive errors , (iii) the residual error , and (iv) the computational order of convergence (COC).
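As an illustrative sketch of how such a table can be generated (with our own test function, tolerance and plain double-precision arithmetic, rather than the 4096-digit setting of the paper), the following Python snippet iterates the modified Traub–Steffensen method (2) on a function with a known double root and reports the last three errors together with the estimated COC:

```python
import math

def g(x):                        # illustrative test function: double root at x = 2, so m = 2
    return (x - 2)**2 * math.exp(-x)

m, beta, alpha = 2, 0.01, 2.0
x = 2.5                          # initial guess
errors = []
for _ in range(10):
    gx = g(x)
    w = x + beta * gx
    if w == x:                   # g(x) has reached machine-precision level
        break
    x = x - m * gx / ((g(w) - gx) / (w - x))    # one step of scheme (2)
    errors.append(abs(x - alpha))
    if errors[-1] < 1e-8:        # stop before double precision pollutes the estimate
        break

# Computational order of convergence from the last three errors (known-root variant).
if len(errors) >= 3:
    e1, e2, e3 = errors[-3:]
    coc = math.log(e3 / e2) / math.log(e2 / e1)
    print("last errors:", e1, e2, e3)
    print("COC ~", round(coc, 2))    # close to 2, confirming quadratic convergence
```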
Example 1 (Eigenvalue problem). One of the challenging tasks in linear algebra is the computation of the eigenvalues of a large square matrix. Finding the zeros of the characteristic equation of a square matrix of order greater than 4 is itself a difficult task. We therefore consider the following 9 × 9 matrix.
The characteristic polynomial of the matrix is given as
This function has one multiple zero with multiplicity 4. We choose initial approximations and obtain the numerical results shown in Table 1.
Example 2 (Beam Designing Model).
We consider a beam positioning problem (see [26]) in which a beam of length r units leans against the edge of a cubical box with sides of length 1 unit each, such that one end of the beam touches the wall and the other end touches the floor, as shown in Figure 7. The problem is: what should be the distance along the floor from the base of the wall to the bottom of the beam? Suppose y is the distance along the beam from the floor to the edge of the box, and let x be the distance from the bottom of the box to the bottom of the beam. Then, for a given value of r, we have
The positive solution of the equation is a double root . We consider the initial guess to find the root. Numerical results of the various methods are shown in Table 2.

Example 3. The van der Waals equation of state, whose expression is
explains the behaviour of a real gas by introducing into the ideal gas equation two more parameters, and , specific to each gas. In order to determine the volume V of the gas in terms of the remaining parameters, we are required to solve a nonlinear equation in V. Given the constants and of a particular gas, one can find values for n, P and T such that this equation has three real roots. Using particular values, we obtain the following nonlinear function
having three roots, of which one is a multiple zero of multiplicity two and the other is a simple zero . However, we seek the multiple zero . We consider the initial guess to obtain this zero. The numerical results so obtained are shown in Table 3.

Example 4. Lastly, consider the test function
The function has a multiple zero of multiplicity 4. We chose the initial approximation for obtaining the zero of the function. Numerical results of the various methods are shown in Table 4.

5. Conclusions
In the foregoing work, we have developed a class of optimal second order derivative-free methods for solving nonlinear univariate equations. The convergence analysis has shown second order convergence under standard assumptions on the nonlinear functions whose zeros we seek. Some special cases of the class were discussed. Their stability was tested by analysing the complex geometry shown in their basins of attraction. We have noticed from the graphics that the basins become wider and wider as the parameter assumes smaller values. The methods were employed to solve some nonlinear univariate equations and were also compared with existing techniques. We conclude the paper with the remark that derivative-free techniques may prove to be good alternatives to Newton-like methods in cases where derivatives are expensive to evaluate.