Article

Generalized Inverses Estimations by Means of Iterative Methods with Memory

by Santiago Artidiello 1,*, Alicia Cordero 2, Juan R. Torregrosa 2 and María P. Vassileva 1
1 Instituto Tecnológico de Santo Domingo (INTEC), 10602 Santo Domingo, Dominican Republic
2 Multidisciplinary Institute of Mathematics, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(1), 2; https://doi.org/10.3390/math8010002
Submission received: 15 November 2019 / Revised: 9 December 2019 / Accepted: 13 December 2019 / Published: 18 December 2019

Abstract: A secant-type method is designed for approximating the inverse and some generalized inverses of a complex matrix A. For a nonsingular matrix, the proposed method gives us an approximation of the inverse and, when the matrix is singular, approximations of the Moore–Penrose inverse and the Drazin inverse are obtained. The convergence and the order of convergence are presented in each case. Some numerical tests allowed us to confirm the theoretical results and to compare the performance of our method with that of other known ones. With these results, iterative methods with memory appear for the first time for estimating the solution of nonlinear matrix equations.

1. Introduction

Recently, many iterative methods without memory have been published for approximating the inverse or some generalized inverse of a complex matrix A of arbitrary order (see, for example, [1,2,3,4,5,6] and the references therein). This topic plays a significant role in many areas of applied sciences and engineering, such as multivariate analysis, image and signal processing, approximation theory, cryptography, etc. (see [7]).
The discretization of boundary value problems or partial differential equations by means of divided differences or finite elements yields a large number of linear systems to be solved. This holds both for equations with integer-order derivatives and for those with fractional derivatives (see, for example, [8,9]). In these linear problems, the coefficient matrix is usually too large or too ill-conditioned for the system to be solved directly. Thus, iterative methods can play a key role.
The main purpose of this manuscript is to design a secant-type iterative scheme with memory, free of inverse operators and efficient from the point of view of CPU time, for estimating the inverse of a nonsingular complex matrix. We also discuss the generalization of the proposed scheme for approximating the Drazin inverse of singular square matrices and the Moore–Penrose inverse of complex rectangular matrices. As far as we know, this is the first time that this kind of method with memory is applied to estimate generalized inverses. This might be the first step towards developing higher-order methods with memory in the future. This kind of scheme has proven to be very stable for scalar equations; we expect a similar performance in the case of matrix equations.
Let us consider a nonsingular complex matrix A of size $n \times n$. The extension of the iterative methods for the real equation $g(x) = x^{-1} - a = 0$ to obtain the inverse of A, that is, the zero of the matrix function $G(X) = X^{-1} - A$, gives us the so-called Schulz-type schemes.
The best known of these schemes to estimate $A^{-1}$ is the Newton–Schulz method [10], whose iterative expression is
$X_{k+1} = X_k (2I - A X_k), \quad k = 0, 1, \ldots, \qquad (1)$
where I denotes the identity matrix of order n. Schulz [11] demonstrated that the eigenvalues of the matrix $I - A X_0$ must have modulus less than 1 to assure the convergence of the scheme in Equation (1). Taking into account that the residuals $E_k = I - A X_k$ of Equation (1) satisfy $E_{k+1} = E_k^2$ at each iteration, the Newton–Schulz method has quadratic convergence. In general, it is known that this scheme converges to $A^{-1}$ with $X_0 = \alpha A^*$ or $X_0 = \alpha A$, where $0 < \alpha < 2/\rho(A^* A)$, $\rho(\cdot)$ denotes the spectral radius, and $A^*$ is the conjugate transpose of A. Such schemes are also used for sensitivity analysis when accurate approximate inverses are needed for both square and rectangular matrices.
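As an illustration of Equation (1), the following NumPy sketch (our own illustration, not code from [10]) implements the Newton–Schulz iteration with the starting guess $X_0 = A^T/\|A\|_2^2$, which for real matrices satisfies the condition $0 < \alpha < 2/\rho(A^* A)$:

```python
import numpy as np

def newton_schulz(A, tol=1e-6, max_iter=100):
    """Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k) for approximating A^{-1}."""
    n = A.shape[0]
    I = np.eye(n)
    # alpha = 1 / ||A||_2^2 satisfies 0 < alpha < 2 / rho(A^T A) for real A
    X = A.T / np.linalg.norm(A, 2) ** 2
    for k in range(1, max_iter + 1):
        X_new = X @ (2 * I - A @ X)
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new, k
        X = X_new
    return X, max_iter

A = np.random.rand(10, 10) + 10 * np.eye(10)   # shifted so the test matrix is well conditioned
X, iters = newton_schulz(A)
print(iters, np.linalg.norm(X @ A - np.eye(10), 2))
```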
On the other hand, for a nonsingular matrix $A \in \mathbb{C}^{n \times n}$, Li et al. [12] suggested the scheme
$X_{k+1} = X_k \left[ mI - \dfrac{m(m-1)}{2} A X_k + \dfrac{m(m-1)(m-2)}{3!} (A X_k)^2 - \cdots + (-1)^{m-1} (A X_k)^{m-1} \right], \quad m = 2, 3, \ldots, \qquad (2)$
with $X_0 = \alpha A^*$. They proved the convergence of order m of $\{X_k\}$ to the inverse of A. This result was extended by Chen et al. [13] for computing the Moore–Penrose inverse. Other iterative schemes without memory have also been designed for approximating the inverse or some generalized inverses.
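For reference, the family in Equation (2) coincides with the hyper-power form $X_{k+1} = X_k (I + E_k + \cdots + E_k^{m-1})$, where $E_k = I - A X_k$. A minimal sketch of one step (our reading of Equation (2), not code from [12]):

```python
import numpy as np

def hyperpower_step(A, X, m):
    """One step of the m-th order family (2), written via E = I - A X:
    X (I + E + ... + E^{m-1}) expands to the binomial form
    X [mI - C(m,2) AX + ... + (-1)^{m-1} (AX)^{m-1}]."""
    n = A.shape[0]
    E = np.eye(n) - A @ X
    S = np.eye(n)      # partial sum I + E + ... + E^i
    P = np.eye(n)      # current power E^i
    for _ in range(m - 1):
        P = P @ E
        S = S + P
    return X @ S
```

For m = 2 the partial sum is $I + E_k = 2I - A X_k$, recovering the Newton–Schulz step of Equation (1).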
In this paper, we construct an iterative method with memory (that is, iterate $X_{k+1}$ is obtained not only from iterate $X_k$ but also from previous iterates) for computing the inverse of a nonsingular matrix. Inverse operators do not appear in the iterative expression of the designed method. We prove the order of convergence of the proposed scheme and we extend it for approximating the Moore–Penrose inverse of rectangular matrices and the Drazin inverse of singular square matrices.
For analyzing the order of convergence of an iterative method with memory, we use the concept of R-order introduced in [14] by Ortega and Rheinboldt and the following result.
Let us consider an iterative method with memory (IM) that generates a sequence $\{X_k\}$ of estimations of the solution $\xi$, and let us also assume that this sequence converges to $\xi$. If there exist a nonzero constant $\eta$ and nonnegative numbers $t_i$, $0 \le i \le m$, such that the inequality
$|e_{k+1}| \le \eta \prod_{i=0}^{m} |e_{k-i}|^{t_i} \qquad (3)$
holds, where $e_k$ is the error of iterate $X_k$, then the R-order of convergence of (IM) satisfies
$O_R((IM), \xi) \ge s^*,$
where $s^*$ is the unique positive root of the polynomial equation
$s^{m+1} - \sum_{i=0}^{m} t_i \, s^{m-i} = 0. \qquad (4)$
The proof of this result can be found in [14].
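For the secant-type method constructed below, the error relation will turn out to be $|e_{k+1}| \le \eta |e_k| |e_{k-1}|$, so $m = 1$ and $t_0 = t_1 = 1$, and Equation (4) reduces to $s^2 - s - 1 = 0$. Its positive root can be checked numerically (a trivial illustration of the result above):

```python
import numpy as np

# positive root of s^2 - s - 1 = 0, the R-order of the secant-type method
roots = np.roots([1, -1, -1])
print(roots[roots > 0])   # [1.61803399], the golden ratio (1 + sqrt(5)) / 2
```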
From here on, the work is organized as follows. In the next section, we describe how a secant-type method, free of inverse operators, is constructed for estimating the inverse of a nonsingular complex matrix, proving its order of convergence. In Section 3 and Section 4, we study the generalization of the proposed method for computing the Moore–Penrose inverse of a rectangular complex matrix and the Drazin inverse of a singular square matrix. Section 5 is devoted to the numerical tests for analyzing the performance of the proposed schemes and confirming the theoretical results. The paper finishes with a section of conclusions.

2. A Secant-Type Method for Matrix Inversion

Let us recall that, for a scalar nonlinear equation $g(x) = 0$, the secant method is an iterative scheme with memory such that
$x_{k+1} = x_k - \dfrac{g(x_k)}{\alpha_k},$
with $\alpha_k$ satisfying $g(x_k) - g(x_{k-1}) = \alpha_k (x_k - x_{k-1})$, $k \ge 0$, given $x_{-1}$ and $x_0$ as initial approximations.
For a nonlinear matrix equation $G(X) = 0$, where $G : \mathbb{C}^{n \times n} \rightarrow \mathbb{C}^{n \times n}$, the secant method can be described as
$X_{k+1} = X_k - A_k^{-1} G(X_k), \quad k \ge 0,$
where $X_{-1}$ and $X_0$ are initial estimations and $A_k$ is a suitable linear operator satisfying
$A_{k+1}(X_{k+1} - X_k) = G(X_{k+1}) - G(X_k) \iff A_{k+1} S_k = Y_k,$
where $S_k = X_{k+1} - X_k$ and $Y_k = G(X_{k+1}) - G(X_k)$. Thus, it is necessary to solve, at each iteration, the linear system $A_{k+1} S_k = Y_k$. It is proven in [15] that, with this formulation, the secant method converges to the solution of $G(X) = 0$.
Let us consider an $n \times n$ nonsingular complex matrix A. We want to construct iterative schemes for computing the inverse $A^{-1}$ of A, that is, iterative methods for solving the matrix equation
$G(X) = X^{-1} - A = 0. \qquad (5)$
The secant method was adapted by Monsalve and Raydan [15] to estimate the solution of Equation (5), that is, the inverse of A, when the matrix is diagonalizable. The secant method applied to $G(X) = X^{-1} - A$ (see [15]) gives us:
$X_{k+1} = X_k - S_{k-1} \left[ G(X_k) - G(X_{k-1}) \right]^{-1} G(X_k) = X_k - (X_k - X_{k-1}) \left( X_k^{-1} - X_{k-1}^{-1} \right)^{-1} \left( X_k^{-1} - A \right). \qquad (6)$
Now, we extend the result presented in [15] to any nonsingular matrix, not necessarily diagonalizable. If A is a nonsingular complex matrix of size $n \times n$, then there exist unitary matrices U and V, of size $n \times n$, such that
$U^* A V = \Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n), \qquad (7)$
where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$ are the singular values of A.
Let us define $D_k = V^* X_k U$, that is, $X_k = V D_k U^*$. Then, from Equation (6),
$V D_{k+1} U^* = V D_k U^* - (V D_k U^* - V D_{k-1} U^*) \left( U D_k^{-1} V^* - U D_{k-1}^{-1} V^* \right)^{-1} \left( U D_k^{-1} V^* - U \Sigma V^* \right).$
Several algebraic manipulations allow us to assure that
$D_{k+1} = D_k - (D_k - D_{k-1}) \left( D_k^{-1} - D_{k-1}^{-1} \right)^{-1} \left( D_k^{-1} - \Sigma \right). \qquad (8)$
If we choose initial estimations $X_{-1}$ and $X_0$ such that $D_{-1} = V^* X_{-1} U$ and $D_0 = V^* X_0 U$ are diagonal matrices, then all matrices $D_k$ are diagonal and therefore $D_i D_j = D_j D_i$ for all $i, j$. Thus, from Equation (8), we assure
$D_{k+1} = D_{k-1} + D_k - D_{k-1} \Sigma D_k,$
and, from this expression, we propose the secant-type method:
$X_{k+1} = X_{k-1} + X_k - X_{k-1} A X_k, \quad k = 0, 1, 2, \ldots, \qquad (9)$
where $X_{-1}$ and $X_0$ are given initial approximations.
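A minimal NumPy sketch of the iteration in Equation (9) follows (an illustration, assuming the initial guesses $X_{-1} = A^T/\|A\|_2^2$ and $X_0 = 0.5\,A^T/\|A\|_2^2$ used later in the numerical section, which make $V^* X U$ diagonal for a real matrix A):

```python
import numpy as np

def secant_inverse(A, tol=1e-6, max_iter=200):
    """Secant-type method (9): X_{k+1} = X_{k-1} + X_k - X_{k-1} A X_k."""
    X_old = A.T / np.linalg.norm(A, 2) ** 2   # X_{-1}
    X = 0.5 * X_old                           # X_0
    for k in range(1, max_iter + 1):
        X_new = X_old + X - X_old @ A @ X     # no inverse operators involved
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new, k
        X_old, X = X, X_new
    return X, max_iter

A = np.random.rand(10, 10) + 10 * np.eye(10)
X, iters = secant_inverse(A)
print(iters, np.linalg.norm(X - np.linalg.inv(A), 2))
```

Note that each step costs two matrix products and no linear-system solves, which is the source of the CPU-time efficiency mentioned in the Introduction.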
The analysis of the convergence of the iterative method with memory in Equation (9) is presented in the following result.
Theorem 1.
Let $A \in \mathbb{C}^{n \times n}$ be a nonsingular matrix, with singular value decomposition $U^* A V = \Sigma$. Let $X_{-1}$ and $X_0$ be such that $V^* X_{-1} U$ and $V^* X_0 U$ are diagonal matrices. Then, the sequence $\{X_k\}$, obtained by Equation (9), converges to $A^{-1}$ with super-linear convergence.
Proof. 
Let us consider unitary matrices U and V such that the singular value decomposition in Equation (7) is satisfied, where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$ are the singular values of A.
We define $D_k = V^* X_k U$, that is, $X_k = V D_k U^*$, for $k \ge -1$. From Equation (9), we have
$V D_{k+1} U^* = V D_{k-1} U^* + V D_k U^* - V D_{k-1} U^* \, U \Sigma V^* \, V D_k U^*,$
then
$V D_{k+1} U^* = V \left( D_{k-1} + D_k - D_{k-1} \Sigma D_k \right) U^*$
and therefore
$D_{k+1} = D_{k-1} + D_k - D_{k-1} \Sigma D_k,$
where $D_k = \mathrm{diag}(d_k^1, d_k^2, \ldots, d_k^n)$.
Then, component by component, we obtain
$d_{k+1}^j = d_{k-1}^j + d_k^j - d_{k-1}^j d_k^j \sigma_j, \quad j = 1, 2, \ldots, n. \qquad (10)$
By subtracting $\frac{1}{\sigma_j}$ from both sides of Equation (10) and denoting $e_k^j = d_k^j - 1/\sigma_j$, we get
$e_{k+1}^j = d_{k-1}^j + d_k^j - d_{k-1}^j d_k^j \sigma_j - \dfrac{1}{\sigma_j} = -\sigma_j e_k^j e_{k-1}^j. \qquad (11)$
From Equation (11), we conclude that, for each value of j from 1 to n, $d_{k+1}^j$ in Equation (10) converges to $\frac{1}{\sigma_j}$ with order of convergence equal to the unique positive root of $\lambda^2 - \lambda - 1 = 0$, that is, $\lambda \approx 1.618$ (by using the result of Ortega–Rheinboldt mentioned in the Introduction).
Then, for each j, $1 \le j \le n$, there exists a sequence $\{c_k^j\}_k$, with $c_k^j > 0$ for all k, tending to zero as k tends to infinity and such that
$|e_{k+1}^j| \le c_k^j |e_k^j|, \quad 1 \le j \le n.$
Thus,
$\|D_{k+1} - \Sigma^{-1}\|_2^2 = \sum_{j=1}^{n} (e_{k+1}^j)^2 \le \sum_{j=1}^{n} (c_k^j)^2 (e_k^j)^2 \le n \, m_k^2 \, \|D_k - \Sigma^{-1}\|_2^2,$
where
$m_k = \max_{1 \le j \le n} \{ c_k^j \}.$
Therefore,
$\|X_{k+1} - A^{-1}\|_2 = \|V D_{k+1} U^* - V \Sigma^{-1} U^*\|_2 = \|V (D_{k+1} - \Sigma^{-1}) U^*\|_2 \le \|V\|_2 \, \|D_{k+1} - \Sigma^{-1}\|_2 \, \|U^*\|_2$
$= \|D_{k+1} - \Sigma^{-1}\|_2 \le \sqrt{n} \, m_k \, \|D_k - \Sigma^{-1}\|_2 = \sqrt{n} \, m_k \, \|X_k - A^{-1}\|_2,$
which allows us to affirm that $\{X_k\}$ converges to $A^{-1}$. □
On the other hand, Higham in [10] introduced the following definition for the stability of the iterative process $Z_{k+1} = H(Z_k)$ with a fixed point $Z^*$. If we assume that H is Fréchet differentiable at $Z^*$, the iteration is stable in a neighborhood of $Z^*$ if the Fréchet derivative $H'(Z^*)$ has bounded powers, that is, there exists a positive constant C such that
$\| H'(Z^*)^k \| \le C, \quad \forall k > 0.$
Therefore, the following result can be stated for the secant method.
Theorem 2.
The secant method in Equation (9) for the estimation of the inverse matrix is a stable iterative scheme.
Proof. 
The proof consists of demonstrating that $H'(Z^*)$ has bounded powers.
The secant-type method, described as a fixed-point scheme with $Z_k = (X_k, X_{k-1})^T$, can be written as
$H(Z_k) = H \begin{pmatrix} X_k \\ X_{k-1} \end{pmatrix} = \begin{pmatrix} X_{k-1} + X_k - X_{k-1} A X_k \\ X_k \end{pmatrix}.$
It is easy to deduce that
$H' \begin{pmatrix} X_k \\ X_{k-1} \end{pmatrix} Q = \begin{pmatrix} Q_1 + Q_2 - X_{k-1} A Q_1 - Q_2 A X_k \\ Q_1 \end{pmatrix},$
where $Q = (Q_1, Q_2)^T$. Then, for $Z = Z^* = (A^{-1}, A^{-1})^T$, we have
$H'(Z^*) Q = \begin{pmatrix} 0 \\ Q_1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ I & 0 \end{pmatrix} \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix}.$
Thus, $H'(Z^*)$ is a nilpotent matrix (its square is the zero matrix), so its powers are uniformly bounded and the iteration is stable. □
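This can also be verified numerically: the block matrix obtained above squares to the zero matrix, so all of its powers are bounded. A minimal check (illustrative only):

```python
import numpy as np

n = 5
Z, I = np.zeros((n, n)), np.eye(n)
# Frechet derivative at the fixed point as the 2n x 2n block matrix [[0, 0], [I, 0]]
Hp = np.block([[Z, Z], [I, Z]])
print(np.allclose(Hp @ Hp, 0))   # True: Hp^k = 0 for k >= 2, hence bounded powers
```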

3. A Secant-Type Method for Approximating the Moore–Penrose Inverse

Now, we would like to extend the proposed iterative scheme for computing the Moore–Penrose inverse [7] of an $m \times n$ complex matrix A, denoted by $A^\dagger$. It is the unique $n \times m$ matrix X satisfying the equations
$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA.$
If $\mathrm{rank}(A) = r \le \min\{m, n\}$, by using the singular value decomposition of A, we obtain
$A = U \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} V^*,$
where $\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$, and U and V are unitary matrices with $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$. It is also known that
$A^\dagger = V \begin{pmatrix} \Sigma^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^*,$
where $\Sigma^{-1} = \mathrm{diag}(1/\sigma_1, 1/\sigma_2, \ldots, 1/\sigma_r)$.
The convergence of the method in Equation (9) to the Moore–Penrose inverse is established in the following result.
Theorem 3.
Let $A \in \mathbb{C}^{m \times n}$ be a matrix with $\mathrm{rank}(A) = r$ and singular value decomposition
$U^* A V = \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}.$
Let $X_{-1}$ and $X_0$ be initial estimations such that
$V^* X_{-1} U = \begin{pmatrix} \Sigma_{-1} & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad V^* X_0 U = \begin{pmatrix} \Sigma_0 & 0 \\ 0 & 0 \end{pmatrix},$
where $\Sigma_{-1}$ and $\Sigma_0$ are diagonal matrices of size $r \times r$. Then, the sequence $\{X_k\}$, obtained by Equation (9), converges to $A^\dagger$ with super-linear order of convergence.
Proof. 
Given the singular value decomposition of A, for any fixed arbitrary value of k, we define the matrix $D_k$ as
$D_k = V^* X_k U = \begin{pmatrix} \Sigma_k & 0 \\ 0 & 0 \end{pmatrix},$
where $\Sigma_k \in \mathbb{C}^{r \times r}$. Thus, by using the iterative expression in Equation (9), we obtain
$\begin{pmatrix} \Sigma_{k+1} & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \Sigma_{k-1} + \Sigma_k - \Sigma_{k-1} \Sigma \Sigma_k & 0 \\ 0 & 0 \end{pmatrix}.$
Therefore, as $\Sigma_{-1}$ and $\Sigma_0$ are diagonal matrices, so are all matrices $\Sigma_k$, and the expression
$\Sigma_{k+1} = \Sigma_{k-1} + \Sigma_k - \Sigma_{k-1} \Sigma \Sigma_k$
represents r uncoupled scalar iterations converging to $\frac{1}{\sigma_i}$, $1 \le i \le r$, with super-linear order, that is to say,
$\|\Sigma_{k+1} - \Sigma^{-1}\|_2^2 \le r \, M_k^2 \, \|\Sigma_k - \Sigma^{-1}\|_2^2,$
with $M_k = \max_{1 \le i \le r} \{c_k^i\}$, where $c_k^i > 0$ and each sequence $\{c_k^i\}$ tends to zero as k tends to infinity.
By an argument analogous to that of Theorem 1,
$\|X_{k+1} - A^\dagger\|_2 \le \sqrt{r} \, M_k \, \|X_k - A^\dagger\|_2,$
which allows us to affirm that $\{X_k\}$ converges to $A^\dagger$ with the desired order of convergence. □
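The same iteration, unchanged, can be run on a rectangular matrix. A short sketch under the assumptions of Theorem 3, with the initial guesses of Example 2 below ($X_{-1} = A^T/\|A\|_2^2$ and $X_0 = 0.5\,A^T/\|A\|_2^2$, which have the required block-diagonal structure for real A):

```python
import numpy as np

def secant_pinv(A, tol=1e-8, max_iter=300):
    """Secant-type method (9) applied to an m x n matrix, converging to A^+."""
    X_old = A.T / np.linalg.norm(A, 2) ** 2   # X_{-1}, an n x m matrix
    X = 0.5 * X_old                           # X_0
    for k in range(1, max_iter + 1):
        X_new = X_old + X - X_old @ A @ X
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new, k
        X_old, X = X, X_new
    return X, max_iter

A = np.random.rand(20, 10)   # full column rank with probability 1
X, iters = secant_pinv(A)
print(iters, np.linalg.norm(X - np.linalg.pinv(A), 2))
```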

4. A Secant-Type Method for Approximating the Drazin Inverse

Drazin, in 1958 (see [10]), proposed a different kind of generalized inverse, whose definition involves some of the Moore–Penrose conditions together with the index of the matrix. The importance of this inverse has motivated many researchers to propose algorithms for its computation.
It is known (see [10]) that the smallest nonnegative integer l such that $\mathrm{rank}(A^{l+1}) = \mathrm{rank}(A^l)$ is called the index of A, denoted by $\mathrm{ind}(A)$. If A is a complex matrix of size $n \times n$, the Drazin inverse of A, denoted by $A^D$, is the unique matrix X satisfying
$A^{l+1} X = A^l, \quad XAX = X, \quad AX = XA,$
where l is the index of A.
If $\mathrm{ind}(A) = 1$, then X is called the g-inverse or group inverse of A, and, if $\mathrm{ind}(A) = 0$, then A is nonsingular and $A^D = A^{-1}$. Let us observe that the idempotent matrix $A A^D$ is the projector on $\mathcal{R}(A^l)$ along $\mathcal{N}(A^l)$, where $\mathcal{R}(A^l)$ and $\mathcal{N}(A^l)$ denote the range and null space of $A^l$, respectively.
In [16], the following result is presented, which is used in the proof of the main result.
Proposition 1.
If $P_{\mathcal{A},\mathcal{B}}$ is the projector on a space $\mathcal{A}$ along a space $\mathcal{B}$, the following statements hold:
(a) 
$P_{\mathcal{A},\mathcal{B}} \, C = C$ if and only if $\mathcal{R}(C) \subseteq \mathcal{A}$.
(b) 
$C \, P_{\mathcal{A},\mathcal{B}} = C$ if and only if $\mathcal{N}(C) \supseteq \mathcal{B}$.
Li and Wei [1] proved that the Newton–Schulz method in Equation (1) can be used for approximating the Drazin inverse, using as initial estimation $X_0 = \alpha A^l$, where the parameter $\alpha$ is chosen so that the condition $\|I - A X_0\| < 1$ is satisfied. One way of selecting the initial matrix, used by different authors, is
$X_0 = \dfrac{2}{\mathrm{tr}(A^{l+1})} \, A^l,$
where $\mathrm{tr}(\cdot)$ is the trace of a square matrix. Another fruitful initial matrix is
$X_0 = \dfrac{2}{2 \|A\|_2^{l+1}} \, A^l.$
Using two initial matrices of this form, $\alpha A^l$ with $\alpha$ a constant, we want to prove that the sequence obtained by the secant-type method in Equation (9) converges to the Drazin inverse $A^D$. In this case, we use a different type of proof from those used in the previous cases.
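A sketch of this construction (our own illustration; it assumes $\mathrm{tr}(A^{l+1}) \neq 0$ and uses the scalings $1/\mathrm{tr}$ and $0.5/\mathrm{tr}$ that appear in Examples 3 and 4 below):

```python
import numpy as np

def matrix_index(A, tol=1e-10):
    """Smallest l >= 0 with rank(A^{l+1}) = rank(A^l), i.e., ind(A)."""
    l, P = 0, np.eye(A.shape[0])   # P holds A^l, starting from A^0 = I
    while np.linalg.matrix_rank(P @ A, tol=tol) < np.linalg.matrix_rank(P, tol=tol):
        P, l = P @ A, l + 1
    return l

def drazin_secant(A, tol=1e-6, max_iter=500):
    """Secant-type iteration (9) started from multiples of A^l, as in Theorem 4."""
    l = matrix_index(A)
    Al = np.linalg.matrix_power(A, l)
    t = np.trace(Al @ A)           # tr(A^{l+1}); assumed nonzero here
    X_old = (1.0 / t) * Al         # X_{-1}
    X = (0.5 / t) * Al             # X_0
    for _ in range(max_iter):
        X_new = X_old + X - X_old @ A @ X
        if np.linalg.norm(X_new - X, 2) < tol:
            return X_new
        X_old, X = X, X_new
    return X
```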
Theorem 4.
Let $A \in \mathbb{C}^{n \times n}$ be a singular square matrix. We choose as initial estimations $X_0 = \alpha_0 A^{l_0}$ and $X_{-1} = \alpha_{-1} A^{l_{-1}}$, with $l_0, l_{-1} \ge \mathrm{ind}(A)$. Then, the sequence $\{X_k\}_{k \ge 0}$ generated by Equation (9) satisfies the following error equation:
$\|A^D - X_{k+1}\| \le \|A^D\| \, \|A\|^2 \, \|A^D - X_{k-1}\| \, \|A^D - X_k\|.$
Thus, $\{X_k\}_{k \ge 0}$ converges to $A^D$ with order of convergence $\approx 1.618$, that is, with super-linear convergence.
Proof. 
Let us define $E_k = I - A X_k$, $k \ge -1$. Then,
$E_{k+1} = I - A X_{k+1} = I - A \left( X_k + X_{k-1}(I - A X_k) \right) = I - A X_k - A X_{k-1}(I - A X_k) = (I - A X_{k-1})(I - A X_k) = E_{k-1} E_k.$
Therefore, $\|E_{k+1}\| \le \|E_{k-1}\| \, \|E_k\|$. In addition, it is easy to prove that, if we choose $X_{-1}$ and $X_0$ such that $\|E_{-1}\| < 1$ and $\|E_0\| < 1$, then $\|E_k\| < 1$ for all $k \in \mathbb{N}$.
Now, we denote by $e_k = A^D - X_k$ the error of iterate k. From the selection of $X_{-1}$ and $X_0$ and by applying Proposition 1, we establish
$A^D A X_k = X_k = X_k A A^D, \quad k \ge -1.$
Thus,
$e_k = A^D - X_k = A^D - A^D A X_k = A^D (I - A X_k) = A^D E_k.$
From this identity, there exists $k_0 \in \mathbb{N}$ such that
$\|e_k\| \le \|A^D\| \, \|E_k\| \le \|A^D\| \, \|E_0\|^k \, \|E_{-1}\|^k, \quad \forall k \ge k_0.$
Thus, $\{e_k\}_{k \ge 0}$ tends to zero and therefore $\{X_k\}_{k \ge 0}$ tends to $A^D$.
On the other hand,
$\|e_{k+1}\| = \|X_{k+1} - A^D\| = \|A^D A X_{k+1} - A^D A A^D\| = \|A^D (A X_{k+1} - A A^D)\| \le \|A^D\| \, \|A e_{k+1}\|.$
Now, we analyze $A e_{k+1}$:
$A e_{k+1} = A (A^D - X_{k+1}) = A A^D - I + I - A X_{k+1} = A A^D - I + E_{k+1} = A A^D - I + E_{k-1} E_k,$
but
$E_{k-1} E_k + A A^D - I = (I - A X_{k-1})(I - A X_k) + A A^D - I$
$= (I - A A^D + A A^D - A X_{k-1})(I - A A^D + A A^D - A X_k) + A A^D - I$
$= (I - A A^D + A e_{k-1})(I - A A^D + A e_k) + A A^D - I$
$= (I - A A^D)^2 + (I - A A^D) A e_k + A e_{k-1} (I - A A^D) + A e_{k-1} A e_k + A A^D - I$
$= A e_{k-1} A e_k.$
In the last equality, we use that $(I - A A^D)^2 = I - A A^D$; in fact, $(I - A A^D)^m = I - A A^D$ for all $m \in \mathbb{N}$. In addition, $(I - A A^D) A e_k = 0$ and $A e_{k-1} (I - A A^D) = 0$.
Therefore,
$\|e_{k+1}\| \le \|A^D\| \, \|A e_{k+1}\| = \|A^D\| \, \|E_{k-1} E_k + A A^D - I\| \le \|A^D\| \, \|A\|^2 \, \|e_{k-1}\| \, \|e_k\|.$
Finally, by applying the convergence theorem for iterative methods with memory mentioned in the Introduction, we assure that the order of convergence of the secant-type method is the unique positive root of $\lambda^2 - \lambda - 1 = 0$, that is, $\lambda \approx 1.618$. □

5. Numerical Experiments

In this section, we check the behavior of the secant method for the calculation of the inverse, the Moore–Penrose inverse, and the Drazin inverse of different test matrices A, comparing it with the Newton–Schulz scheme in Equation (1). Numerical computations were carried out in Matlab R2018b (MathWorks, Natick, MA, USA) on a processor Intel(R) Xeon(R) CPU E5-2420 v2 at 2.20 GHz. As stopping criterion, we used $\|X_{k+1} - X_k\|_2 < 10^{-6}$ or $\|F(X_{k+1})\|_2 < 10^{-6}$.
To numerically check the theoretical results, we use the computational order of convergence (COC), introduced by Jay [17] and defined as
$COC = \dfrac{\ln\left( \|F(X_{k+1})\|_2 / \|F(X_k)\|_2 \right)}{\ln\left( \|F(X_k)\|_2 / \|F(X_{k-1})\|_2 \right)}.$
In a similar way, the authors of [18] presented another numerical approximation of the theoretical order, denoted by ACOC and defined as
$ACOC = \dfrac{\ln\left( \|X_{k+1} - X_k\|_2 / \|X_k - X_{k-1}\|_2 \right)}{\ln\left( \|X_k - X_{k-1}\|_2 / \|X_{k-1} - X_{k-2}\|_2 \right)}.$
We use either of these computational order estimates interchangeably to show the accuracy of these approximations for the proposed method. When the vector of COC (or ACOC) values is not stable, we write "-" in the corresponding table.
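In practice, ACOC is evaluated from the last iterates once the method stops; it needs four consecutive approximations. A small helper illustrating the definition (our own sketch, not the authors' Matlab code):

```python
import numpy as np

def acoc(X3, X2, X1, X0):
    """ACOC from four consecutive iterates X_{k+1}, X_k, X_{k-1}, X_{k-2}."""
    num = np.log(np.linalg.norm(X3 - X2, 2) / np.linalg.norm(X2 - X1, 2))
    den = np.log(np.linalg.norm(X2 - X1, 2) / np.linalg.norm(X1 - X0, 2))
    return num / den
```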
Example 1.
In this example, A is an $n \times n$ random matrix with $n = 10, 100, 200, 300, 400, 500$. The initial estimation used for the Newton–Schulz scheme is $X_0 = A^T / \|A\|^2$, and for the secant method, $X_{-1} = A^T / \|A\|^2$ and $X_0 = 0.5 \, A^T / \|A\|^2$.
In Table 1, we show the results obtained by the Newton–Schulz and secant-type methods for the different random matrices: the number of iterations, the residuals, and the value of COC. The results are in concordance with the order of convergence of each scheme. All generated random matrices are nonsingular, and both methods give us an approximation of the inverse of A. The Newton–Schulz method needs fewer iterations than the secant scheme, as expected, since the former is quadratic and the latter super-linear.
Example 2.
In this example, A is an $m \times n$ random matrix for different values of m and n. The initial matrices are calculated in the same way as in the previous example.
In Table 2, we show the results obtained by the Newton–Schulz and secant-type methods for the different random matrices: the number of iterations, the residuals, and the value of ACOC. The results are in concordance with the order of convergence of each scheme, despite the matrices being non-square. Both methods give us an approximation of the Moore–Penrose inverse of A.
Example 3.
In this example, we analyze the performance of the secant method for computing the Drazin inverse of the following matrix A of size $6 \times 6$ with $\mathrm{ind}(A) = 2$:
$A = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 2 & 1 \\ 1 & 1 & 0 & 1 & 1 & 2 \end{pmatrix}.$
Here, its Drazin inverse is expressed by
$A^D = \begin{pmatrix} 1/4 & 1/4 & 0 & 0 & 0 & 0 \\ 1/4 & 1/4 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/4 & 1/4 & 0 & 0 \\ 0 & 0 & 1/4 & 1/4 & 0 & 0 \\ 0 & 0 & 5/12 & 7/12 & 2/3 & 1/3 \\ 0 & 0 & 7/12 & 5/12 & 1/3 & 2/3 \end{pmatrix}.$
By using the initial matrix $X_0 = \frac{0.5}{\mathrm{tr}(A^3)} A^2$ and the same stopping criterion as in the previous examples, the Newton–Schulz method gives us the following information:
  • $ACOC = 2.0009$;
  • $iter = 11$; and
  • exact error $\|A^D - X_{11}\|_2 = 7.7716 \times 10^{-16}$.
On the other hand, the secant method is used with $X_{-1} = \frac{1}{\mathrm{tr}(A^3)} A^2$ and $X_0 = \frac{0.5}{\mathrm{tr}(A^3)} A^2$, obtaining:
  • $ACOC = 1.6225$;
  • $iter = 15$; and
  • exact error $\|A^D - X_{15}\|_2 = 1.8539 \times 10^{-13}$.
Example 4.
This is another example for computing the Drazin inverse, now of the following matrix B of size $12 \times 12$ (see [1]) with $\mathrm{ind}(B) = 3$:
$B = \begin{pmatrix} 2 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.4 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.4 & 2 \end{pmatrix}.$
Now, its Drazin inverse is expressed by
$B^D = \begin{pmatrix} 0.25 & 0.25 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1.25 & 1.25 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1.6641 & 0.9922 & 0.25 & 0.25 & 0 & 0 & 0 & 0 & 0.0625 & 0.0625 & 0 & 0.1563 \\ 1.1953 & 0.6797 & 0.25 & 0.25 & 0 & 0 & 0 & 0 & 0.0625 & 0.1875 & 0.6875 & 1.3438 \\ 2.7637 & 1.0449 & 1.875 & 1.25 & 1.25 & 1.25 & 1.25 & 1.25 & 1.4844 & 2.5781 & 3.3203 & 6.6406 \\ 2.7637 & 1.0449 & 1.875 & 1.25 & 1.25 & 1.25 & 1.25 & 1.25 & 1.4844 & 2.5781 & 4.5703 & 8.5156 \\ 14.1094 & 6.3008 & 6.625 & 3.375 & 5 & 3 & 5 & 5 & 4.1875 & 8.5 & 10.5078 & 22.4609 \\ 19.3242 & 8.5078 & 9.75 & 5.25 & 7.5 & 4.5 & 7.5 & 7.5 & 6.375 & 12.5625 & 15.9766 & 33.7891 \\ 0.625 & 0.3125 & 0 & 0 & 0 & 0 & 0 & 0 & 0.25 & 0.25 & 0.875 & 1.625 \\ 1.25 & 0.9375 & 0 & 0 & 0 & 0 & 0 & 0 & 0.25 & 0.25 & 0.875 & 1.625 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1.25 & 1.25 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.25 & 0.25 \end{pmatrix}.$
By using the initial matrix $X_0 = \frac{0.5}{\mathrm{tr}(B^5)} B^4$ and the same stopping criterion as in the previous examples, the Newton–Schulz method gives us the following information:
  • $ACOC = 2.0031$;
  • $iter = 14$; and
  • exact error $\|B^D - X_{14}\| = 1.8354 \times 10^{-9}$.
On the other hand, the secant method is used with $X_{-1} = \frac{1}{\mathrm{tr}(B^5)} B^4$ and $X_0 = \frac{0.5}{\mathrm{tr}(B^5)} B^4$, obtaining:
  • $ACOC = 1.6201$;
  • $iter = 20$; and
  • exact error $\|B^D - X_{20}\| = 1.8453 \times 10^{-9}$.
Again, the numerical tests confirm the theoretical results.
Example 5.
Finally, in this example, we test the Newton–Schulz and secant methods on several known square matrices of size $n \times n$, constructed by using different Matlab functions. Specifically, the test matrices are:
(a) 
A = gallery('ris', n). Hankel matrix of size $n \times n$.
(b) 
A = gallery('grcar', n). Toeplitz matrix of size $n \times n$.
(c) 
A = gallery('lehmer', n). Symmetric positive definite matrix of size $n \times n$, with entries $a_{i,j} = i/j$ for $j \ge i$ (see the NumPy sketch after this list).
(d) 
A = gallery('leslie', n). Leslie matrix of size $n \times n$. This type of matrix appears in population-model problems.
(e) 
A = gallery('invo', n). Ill-conditioned matrix of size $n \times n$ such that $A^2 = I$.
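Outside Matlab, these test matrices are easy to rebuild; for instance, a NumPy analogue of gallery('lehmer', n) (a hypothetical helper, using the standard definition $a_{i,j} = \min(i,j)/\max(i,j)$):

```python
import numpy as np

def lehmer(n):
    """NumPy analogue of Matlab's gallery('lehmer', n): a_ij = min(i,j)/max(i,j)."""
    i, j = np.indices((n, n)) + 1      # 1-based row and column indices
    return np.minimum(i, j) / np.maximum(i, j)

print(lehmer(4))
```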
By using the stopping criterion
$\|X_{k+1} - X_k\|_2 < 10^{-10} \quad \text{or} \quad \|F(X_{k+1})\|_2 < 10^{-10}$
and the initial matrices $X_{-1} = A^T / \|A\|^2$ and $X_0 = 0.5 \, A^T / \|A\|^2$, we obtain the numerical results shown in Table 3. In these cases, as in the previous ones, the proposed method shows good performance in terms of stability, precision, and number of iterations needed. We must take into account that the two schemes have different orders of convergence, which is displayed in Table 3.

6. Conclusions

An iterative method with memory for approximating the inverse of nonsingular square complex matrices, the Moore–Penrose inverse of rectangular complex matrices, and the Drazin inverse of singular square matrices has been presented. As far as we know, it is the first time that a scheme with memory is employed to approximate the solution of nonlinear matrix equations. The proposed scheme is free of inverse operators and its iterative expression is simple; therefore, it is computationally efficient. From particular initial approximations, the convergence is guaranteed without further conditions. Numerical tests allowed us to analyze the performance of the proposed scheme and confirm the theoretical results.

Author Contributions

Investigation, S.A., A.C. and J.R.T.; Supervision, M.P.V.; Validation, A.C.; Writing—original draft, S.A. and M.P.V.; Writing—review and editing, J.R.T. All the authors contributed to the different parts of this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), Generalitat Valenciana PROMETEO/2016/089, and FONDOCYT 029-2018 República Dominicana.

Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments and suggestions that have improved the final version of this manuscript.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Li, X.; Wei, Y. Iterative methods for the Drazin inverse of a matrix with a complex spectrum. Appl. Math. Comput. 2004, 147, 855–862.
  2. Li, H.B.; Huang, T.Z.; Zhang, Y.; Liu, X.P.; Gu, T.X. Chebyshev-type methods and preconditioning techniques. Appl. Math. Comput. 2011, 218, 260–270.
  3. Soleymani, F.; Stanimirović, P.S. A higher order iterative method for computing the Drazin inverse. Sci. World J. 2013, 2013, 708647.
  4. Li, W.; Li, J.; Qiao, T. A family of iterative methods for computing Moore–Penrose inverse of a matrix. Linear Algebra Appl. 2013, 438, 47–56.
  5. Toutounian, F.; Soleymani, F. An iterative method for computing the approximate inverse of a square matrix and the Moore–Penrose inverse of a non-square matrix. Appl. Math. Comput. 2015, 224, 671–680.
  6. Soleymani, F.; Salmani, H.; Rasouli, M. Finding the Moore–Penrose inverse by a new matrix iteration. Appl. Math. Comput. 2015, 47, 33–48.
  7. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses, 2nd ed.; Springer: New York, NY, USA, 2003.
  8. Gu, X.; Huang, T.; Ji, C.; Carpentieri, B.; Alikhanov, A.A. Fast iterative method with a second-order implicit difference scheme for time-space fractional convection–diffusion equation. J. Sci. Comput. 2017, 72, 957–985.
  9. Li, M.; Gu, X.; Huang, C.; Fei, M.; Zhang, G. A fast linearized conservative finite element method for the strongly coupled nonlinear fractional Schrödinger equations. J. Comput. Phys. 2018, 358, 256–282.
  10. Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008.
  11. Schulz, G. Iterative Berechnung der reziproken Matrix. Z. Angew. Math. Mech. 1933, 13, 57–59.
  12. Li, W.G.; Li, Z. A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix. Appl. Math. Comput. 2010, 215, 3433–3442.
  13. Chen, H.; Wang, Y. A family of higher-order convergent iterative methods for computing the Moore–Penrose inverse. Appl. Math. Comput. 2011, 218, 4012–4016.
  14. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press, Inc.: Cambridge, MA, USA, 1970.
  15. Monsalve, M.; Raydan, M. A secant method for nonlinear matrix problems. Lect. Notes Electr. Eng. 2011, 80, 387–402.
  16. Wang, G.; Wei, Y.; Qiao, S. Generalized Inverses; Science Press: New York, NY, USA, 2004.
  17. Jay, L. A note on Q-order of convergence. BIT 2001, 41, 422–429.
  18. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
Table 1. Results for approximating the inverse of a random matrix (Example 1).

Method | n | Iter | $\|X_{k+1} - X_k\|_2$ | $\|F(X_{k+1})\|_2$ | COC
Newton–Schulz | 10 | 19 | 5.2 × 10^{-7} | 1.12 × 10^{-14} | 2.0005
Secant | 10 | 22 | 5.4 × 10^{-5} | 9.8 × 10^{-7} | 1.8660
Newton–Schulz | 100 | 26 | 2.0 × 10^{-8} | 5.4 × 10^{-13} | 1.9988
Secant | 100 | 36 | 1.3 × 10^{-6} | 2.0 × 10^{-7} | 1.6645
Newton–Schulz | 200 | 32 | 2.5 × 10^{-12} | 4.6 × 10^{-12} | 2.0012
Secant | 200 | 40 | 1.6 × 10^{-6} | 1.8 × 10^{-8} | 1.8866
Newton–Schulz | 300 | 34 | 3.1 × 10^{-12} | 5.9 × 10^{-12} | 1.8888
Secant | 300 | 40 | 3.7 × 10^{-5} | 3.6 × 10^{-7} | 1.8865
Newton–Schulz | 400 | 36 | 2.2 × 10^{-10} | 1.9 × 10^{-11} | 2.0001
Secant | 400 | 43 | 3.5 × 10^{-7} | 1.1 × 10^{-8} | 1.8222
Newton–Schulz | 500 | 33 | 1.5 × 10^{-7} | 1.2 × 10^{-11} | 1.9999
Secant | 500 | 36 | 9.0 × 10^{-5} | 2.6 × 10^{-7} | 1.6666
Table 2. Results for approximating the Moore–Penrose inverse of a rectangular random matrix (Example 2).

Method | m | n | Iter | $\|X_{k+1} - X_k\|_2$ | ACOC
Newton–Schulz | 20 | 10 | 14 | 9.7 × 10^{-12} | 2.0005
Secant | 20 | 10 | 13 | 9.9 × 10^{-10} | 1.6199
Newton–Schulz | 200 | 100 | 17 | 4.1 × 10^{-10} | 2.0018
Secant | 200 | 100 | 17 | 2.02 × 10^{-9} | 1.6210
Newton–Schulz | 300 | 400 | 21 | 2.03 × 10^{-11} | 2.0007
Secant | 300 | 400 | 27 | 1.4 × 10^{-7} | 1.6267
Newton–Schulz | 500 | 600 | 23 | 3.8 × 10^{-9} | 2.0028
Secant | 500 | 600 | 31 | 5.2 × 10^{-10} | 1.6197
Newton–Schulz | 1000 | 900 | 25 | 4.5 × 10^{-8} | 2.0055
Secant | 1000 | 900 | 36 | 2.5 × 10^{-9} | 1.6205
Table 3. Results for approximating the inverse of classical square matrices (Example 5).

Method | Matrix | n | Iter | $\|X_{k+1} - X_k\|_2$ | $\|F(X_{k+1})\|$ | COC
Newton–Schulz | Lehmer | 10 | 18 | 3.5 × 10^{-7} | 6.3 × 10^{-15} | -
Secant | Lehmer | 10 | 20 | 3.9 × 10^{-9} | 1.7 × 10^{-11} | 1.6164
Newton–Schulz | Hankel | 100 | 8 | 1.1 × 10^{-5} | 1.2 × 10^{-11} | 1.9993
Secant | Hankel | 100 | 11 | 1.9 × 10^{-12} | 4.4 × 10^{-13} | 1.6180
Newton–Schulz | Toeplitz | 200 | 9 | 1.6 × 10^{-9} | 3.2 × 10^{-15} | 1.9976
Secant | Toeplitz | 200 | 11 | 6.3 × 10^{-11} | 6.4 × 10^{-11} | 1.6182
Newton–Schulz | Toeplitz | 300 | 9 | 1.7 × 10^{-9} | 2.5 × 10^{-15} | 1.9975
Secant | Toeplitz | 300 | 11 | 6.3 × 10^{-11} | 6.4 × 10^{-11} | 1.6182
Newton–Schulz | Leslie | 400 | 22 | 4.3 × 10^{-5} | 2.3 × 10^{-13} | 1.9995
Secant | Leslie | 400 | 33 | 4.2 × 10^{-12} | 1.0 × 10^{-14} | 1.6177
Newton–Schulz | Leslie | 500 | 23 | 1.6 × 10^{-6} | 1.4 × 10^{-16} | 2.0001
Secant | Leslie | 500 | 25 | 1.7 × 10^{-12} | 3.8 × 10^{-15} | 1.6070

