Matrix Pencil Optimal Iterative Algorithms and Restarted Versions for Linear Matrix Equation and Pseudoinverse
Abstract
1. Introduction
1.1. Notation
1.2. Main Contribution
1.3. Outline
2. The Matrix Pencil Krylov Subspace Method
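Since the algorithms below search for corrections in an m-dimensional Krylov subspace and each listing contains an explicit orthonormalization step, the following minimal Python sketch shows one standard way to build an orthonormal Krylov basis by modified Gram–Schmidt; the function name and interface are illustrative and not the paper's notation.

```python
import numpy as np

def krylov_basis(A, b, m):
    """Orthonormal basis of K_m(A, b) = span{b, A b, ..., A^(m-1) b} built by
    modified Gram-Schmidt (a generic sketch, not the paper's exact matrix pencil
    construction)."""
    U = np.zeros((b.size, m))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ U[:, j - 1]                    # next Krylov direction
        for i in range(j):                     # orthogonalize against earlier columns
            w -= (U[:, i] @ w) * U[:, i]
        nrm = np.linalg.norm(w)
        if nrm < 1e-14:                        # invariant subspace reached; return what we have
            return U[:, :j]
        U[:, j] = w / nrm
    return U
```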
3. Double-Optimal Solution
3.1. Two Minimizations
3.2. Two Main Theorems
3.3. Estimating Residual Error
3.4. Orthogonal Projection
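As a numerical illustration of the orthogonal projection used in this subsection: if U has orthonormal columns spanning the search subspace, the projection of a vector r onto that subspace is U(Uᵀr), and the remainder r − U(Uᵀr) is orthogonal to every column of U. A self-contained check (sizes and matrices are arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

# Orthonormal basis of the Krylov subspace span{b, Ab, A^2 b, A^3 b} via QR
K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(4)])
U, _ = np.linalg.qr(K)

r = b                                        # e.g., the initial residual when x0 = 0
r_proj = U @ (U.T @ r)                       # orthogonal projection of r onto span(U)
print(np.max(np.abs(U.T @ (r - r_proj))))    # ~1e-16: the remainder is orthogonal to span(U)
```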
4. Double-Optimal Iterative Algorithm
Algorithm 1 DOIA
1: Select m and give an initial value
2: Do
3: (Case 1), (Case 2)
4: (Case 1), (Case 2)
5: (orthonormalization)
6–15: (double-optimal update steps)
16: Enddo, if the convergence criterion is met
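The double-optimal formulas themselves are those of the listing above; purely as an illustration of how a subspace correction is assembled, the sketch below takes a plain least-squares (residual-minimizing) choice of the expansion coefficients over the Krylov subspace. This is a generic GMRES-like step, not the DOIA update.

```python
import numpy as np

def krylov_correction(A, b, x, m=8):
    """One generic subspace correction x <- x + U c with c = argmin ||b - A(x + U c)||_2.
    Illustrative only: DOIA determines its coefficients from a double optimization
    with different (closed-form) formulas."""
    r = b - A @ x
    vecs = [r]
    for _ in range(m - 1):                      # Krylov directions r, Ar, ..., A^(m-1) r
        vecs.append(A @ vecs[-1])
    U, _ = np.linalg.qr(np.column_stack(vecs))  # orthonormalization step
    c, *_ = np.linalg.lstsq(A @ U, r, rcond=None)
    return x + U @ c

# Usage sketch: repeat the correction until the residual is small (arbitrary test data).
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100)) + 20.0 * np.eye(100)
b = rng.standard_normal(100)
x = np.zeros(100)
for _ in range(50):                             # restart at most 50 times
    x = krylov_correction(A, b, x, m=8)
    if np.linalg.norm(b - A @ x) <= 1e-10:
        break
print(np.linalg.norm(b - A @ x))
```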
4.1. Crucial Properties of DOIA
4.2. Restarted DOIA(m)
Algorithm 2 DOIA(m)
1: Select m and the stopping parameter, and give an initial value
2: Do (outer restart loop)
3: Do (inner loop)
4: (Case 1), (Case 2)
5: (Case 1), (Case 2)
6: (orthonormalization)
7–16: (double-optimal update steps)
17: Enddo of (3), if the convergence criterion is met
18: Otherwise, restart from the current iterate and go to (2)
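The restart logic of DOIA(m) (inner iterations, a convergence test, otherwise restart from the current iterate) has the generic control flow sketched below; the inner step is left abstract, since the actual DOIA(m) formulas are those of the listing.

```python
import numpy as np

def restarted(inner_step, A, b, x0, max_restarts=100, tol=1e-10):
    """Generic restart wrapper mirroring the outer/inner structure of DOIA(m):
    run the inner iteration, test the residual, otherwise restart from the
    current iterate.  `inner_step` stands for any routine x_new = inner_step(A, b, x);
    it is a placeholder, not the paper's inner loop."""
    x = x0
    for _ in range(max_restarts):
        x = inner_step(A, b, x)                  # inner loop (e.g., m subspace steps)
        if np.linalg.norm(b - A @ x) <= tol:     # convergence test (cf. step 17)
            break                                # otherwise restart (cf. step 18)
    return x
```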
5. Numerical Examples
5.1. Example 1
5.2. Example 2
Algorithm 3 For cyclic matrix
1: Give N
2: Do (loop over the row index i)
3: Do (loop over the column index j)
4: If the first index condition holds, then assign one entry value; otherwise assign the other
5: (entry assignment)
6: If the second index condition holds, then modify the entry
7: Enddo of j
8: Enddo of i
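The entries of the cyclic matrix used in Example 2 are those assigned by Algorithm 3; as a generic illustration only, and under the assumption that "cyclic" means circulant, an N × N cyclic matrix can be generated from its first row by cyclic shifts:

```python
import numpy as np

def cyclic_matrix(first_row):
    """N x N circulant ('cyclic') matrix whose i-th row is the i-fold cyclic shift
    of first_row: B[i, j] = c[(j - i) mod N].  Generic constructor; the specific
    entries of Example 2 are defined by Algorithm 3."""
    c = np.asarray(first_row, dtype=float)
    N = c.size
    shift = (np.arange(N)[None, :] - np.arange(N)[:, None]) % N
    return c[shift]

print(cyclic_matrix([2.0, -1.0, 0.0, 0.0, -1.0]))   # a 5 x 5 example
```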
5.3. Example 3
5.4. Example 4
5.5. Example 5: Solving Ill-Conditioned Hilbert Matrix Equation
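The Hilbert matrix, with entries H_ij = 1/(i + j − 1) for 1-based indices, is the classical example of a severely ill-conditioned system (see the Todd reference cited below); a minimal sketch of setting up such a test problem with a manufactured exact solution:

```python
import numpy as np

n = 12
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1.0)              # H[i, j] = 1/(i + j + 1) with 0-based i, j
x_exact = np.ones(n)                 # manufactured solution
b = H @ x_exact

print(np.linalg.cond(H))             # about 1e16 already at n = 12
x = np.linalg.solve(H, b)            # a plain direct solve loses most of the digits
print(np.max(np.abs(x - x_exact)))   # the maximum error is far from machine precision
```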
5.6. Example 6
5.7. Example 7
6. Pseudoinverse of Rectangular Matrix
6.1. A New Matrix Pencil
Algorithm 4 MPIA
1: Select m and give an initial value
2: Do
3: (subspace vectors)
4: (orthonormalization)
5–13: (update steps for the approximate pseudoinverse)
14: Enddo, if the convergence criterion is met
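Independently of which iteration produces the approximate pseudoinverse, its quality can be checked against the four Penrose conditions AXA = A, XAX = X, (AX)ᵀ = AX, (XA)ᵀ = XA, which characterize the Moore–Penrose inverse uniquely; a small checker (illustrative, not part of MPIA):

```python
import numpy as np

def penrose_residuals(A, X):
    """Norms of the four Penrose conditions; all four are ~0 iff X = pinv(A)."""
    return (np.linalg.norm(A @ X @ A - A),
            np.linalg.norm(X @ A @ X - X),
            np.linalg.norm((A @ X).T - A @ X),
            np.linalg.norm((X @ A).T - X @ A))

# usage: check a candidate pseudoinverse of a rectangular test matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5))
X = np.linalg.pinv(A)            # stand-in for the output of an iteration such as MPIA
print(penrose_residuals(A, X))   # four values near machine precision
```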
6.2. Optimized Hyperpower Method
Algorithm 5 OHPIA(m)
1: Select m and the stopping parameter, and give an initial value
2: Do (outer restart loop)
3: Do (inner loop)
4–16: (optimized hyperpower update steps)
17: Enddo of (3), if the convergence criterion is met
18: Otherwise, restart from the current approximation and go to (2)
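For reference, the classical (non-optimized) hyperpower iteration of order p, on which the optimized scheme builds, is X_{k+1} = X_k (I + R_k + R_k² + ⋯ + R_k^{p−1}) with R_k = I − A X_k; p = 2 gives the Newton–Schulz method. A minimal sketch with the standard safe initialization X₀ = Aᵀ/(‖A‖₁‖A‖_∞) follows; it is the textbook family, not OHPIA(m).

```python
import numpy as np

def hyperpower_pinv(A, p=3, tol=1e-13, max_iter=200):
    """Classical hyperpower iteration of order p for the Moore-Penrose inverse
    (p = 2 is Newton-Schulz).  Textbook version, not the optimized OHPIA(m)."""
    m, _ = A.shape
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe X0 = alpha * A^T
    I = np.eye(m)
    for _ in range(max_iter):
        R = I - A @ X                  # R_k = I - A X_k
        S = I + R                      # accumulate I + R + ... + R^(p-1) by Horner's rule
        for _ in range(p - 2):
            S = I + R @ S
        X_new = X @ S
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

A = np.random.default_rng(3).standard_normal((7, 4))
X = hyperpower_pinv(A, p=3)
print(np.linalg.norm(X - np.linalg.pinv(A)))   # close to machine precision
```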
7. Numerical Testing of Rectangular Matrix
7.1. Example 8
7.2. Example 9: Inverting the Ill-Conditioned Rectangular Hilbert Matrix
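The rectangular Hilbert matrix is built exactly as in Example 5 but with different row and column counts; the sketch below (with illustrative sizes, not necessarily those of the example) sets it up and uses the SVD-based pseudoinverse as a reference.

```python
import numpy as np

m, n = 12, 8
i, j = np.indices((m, n))
H = 1.0 / (i + j + 1.0)                   # rectangular Hilbert matrix, 0-based indices
print(np.linalg.cond(H))                  # ratio of extreme singular values: very large

X_ref = np.linalg.pinv(H)                 # SVD-based reference pseudoinverse
print(np.linalg.norm(H @ X_ref @ H - H))  # first Penrose condition as an accuracy measure
```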
7.3. Example 10
7.4. Example 11
7.5. Example 12
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hestenes, M.R.; Stiefel, E.L. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436.
- Lanczos, C. Solution of systems of linear equations by minimized iterations. J. Res. Nat. Bur. Stand. 1952, 49, 33–53.
- Liu, C.S. An optimal multi-vector iterative algorithm in a Krylov subspace for solving the ill-posed linear inverse problems. CMC Comput. Mater. Contin. 2013, 33, 175–198.
- Dongarra, J.; Sullivan, F. Guest editors' introduction to the top 10 algorithms. Comput. Sci. Eng. 2000, 2, 22–23.
- Simoncini, V.; Szyld, D.B. Recent computational developments in Krylov subspace methods for linear systems. Numer. Linear Algebra Appl. 2007, 14, 1–59.
- Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869.
- Saad, Y. Krylov subspace methods for solving large unsymmetric linear systems. Math. Comput. 1981, 37, 105–126.
- Freund, R.W.; Nachtigal, N.M. QMR: A quasi-minimal residual method for non-Hermitian linear systems. Numer. Math. 1991, 60, 315–339.
- van den Eshof, J.; Sleijpen, G.L.G. Inexact Krylov subspace methods for linear systems. SIAM J. Matrix Anal. Appl. 2004, 26, 125–153.
- Paige, C.C.; Saunders, M.A. Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 1975, 12, 617–629.
- Fletcher, R. Conjugate gradient methods for indefinite systems. In Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1976; Volume 506, pp. 73–89.
- Sonneveld, P. CGS: A fast Lanczos-type solver for nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1989, 10, 36–52.
- van der Vorst, H.A. Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1992, 13, 631–644.
- Saad, Y.; van der Vorst, H.A. Iterative solution of linear systems in the 20th century. J. Comput. Appl. Math. 2000, 123, 1–33.
- Bouyghf, F.; Messaoudi, A.; Sadok, H. A unified approach to Krylov subspace methods for solving linear systems. Numer. Algorithms 2024, 96, 305–332.
- Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003.
- van der Vorst, H.A. Iterative Krylov Methods for Large Linear Systems; Cambridge University Press: New York, NY, USA, 2003.
- Jbilou, K.; Messaoudi, A.; Sadok, H. Global FOM and GMRES algorithms for matrix equations. Appl. Numer. Math. 1999, 31, 49–63.
- El Guennouni, A.; Jbilou, K.; Riquet, A.J. Block Krylov subspace methods for solving large Sylvester equations. Numer. Algorithms 2002, 29, 75–96.
- Frommer, A.; Lund, K.; Szyld, D.B. Block Krylov subspace methods for functions of matrices. Electron. Trans. Numer. Anal. 2017, 47, 100–126.
- Frommer, A.; Lund, K.; Szyld, D.B. Block Krylov subspace methods for functions of matrices II: Modified block FOM. SIAM J. Matrix Anal. Appl. 2020, 41, 804–837.
- El Guennouni, A.; Jbilou, K.; Riquet, A.J. The block Lanczos method for linear systems with multiple right-hand sides. Appl. Numer. Math. 2004, 51, 243–256.
- Kubínová, M.; Soodhalter, K.M. Admissible and attainable convergence behavior of block Arnoldi and GMRES. SIAM J. Matrix Anal. Appl. 2020, 41, 464–486.
- Lund, K. Adaptively restarted block Krylov subspace methods with low-synchronization skeletons. Numer. Algorithms 2023, 93, 731–764.
- Konghua, G.; Hu, X.; Zhang, L. A new iteration method for the matrix equation AX = B. Appl. Math. Comput. 2007, 187, 1434–1441.
- Meng, C.; Hu, X.; Zhang, L. The skew-symmetric orthogonal solutions of the matrix equation AX = B. Linear Algebra Appl. 2005, 402, 303–318.
- Peng, Z.; Hu, X. The reflexive and anti-reflexive solutions of the matrix equation AX = B. Linear Algebra Appl. 2003, 375, 147–155.
- Zhang, J.C.; Zhou, S.Z.; Hu, X. The (P,Q) generalized reflexive and anti-reflexive solutions of the matrix equation AX = B. Appl. Math. Comput. 2009, 209, 254–258.
- Liu, C.S.; Hong, H.K.; Atluri, S.N. Novel algorithms based on the conjugate gradient method for inverting ill-conditioned matrices, and a new regularization method to solve ill-posed linear systems. Comput. Model. Eng. Sci. 2010, 60, 279–308.
- Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008.
- Amat, S.; Ezquerro, J.A.; Hernandez-Veron, M.A. Approximation of inverse operators by a new family of high-order iterative methods. Numer. Linear Algebra Appl. 2014, 21, 629.
- Homeier, H.H.H. On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 2005, 176, 425–432.
- Petkovic, M.D.; Stanimirovic, P.S. Iterative method for computing Moore–Penrose inverse based on Penrose equations. J. Comput. Appl. Math. 2011, 235, 1604–1613.
- Dehdezi, E.K.; Karimi, S. GIBS: A general and efficient iterative method for computing the approximate inverse and Moore–Penrose inverse of sparse matrices based on the Schultz iterative method with applications. Linear Multilinear Algebra 2023, 71, 1905–1921.
- Cordero, A.; Soto-Quiros, P.; Torregrosa, J.R. A general class of arbitrary order iterative methods for computing generalized inverses. Appl. Math. Comput. 2021, 409, 126381.
- Kansal, M.; Kaur, M.; Rani, L.; Jantschi, L. A cubic class of iterative procedures for finding the generalized inverses. Mathematics 2023, 11, 3031.
- Cordero, A.; Segura, E.; Torregrosa, J.R.; Vassileva, M.P. Inverse matrix estimations by iterative methods with weight functions and their stability analysis. Appl. Math. Lett. 2024, 155, 109122.
- Petkovic, M.D.; Stanimirovic, P.S. Two improvements of the iterative method for computing Moore–Penrose inverse based on Penrose equations. J. Comput. Appl. Math. 2014, 267, 61–71.
- Katsikis, V.N.; Pappas, D.; Petralias, A. An improved method for the computation of the Moore–Penrose inverse matrix. Appl. Math. Comput. 2011, 217, 9828–9834.
- Stanimirovic, I.; Tasic, M. Computation of generalized inverse by using the LDL* decomposition. Appl. Math. Lett. 2012, 25, 526–531.
- Sheng, X.; Wang, T. An iterative method to compute Moore–Penrose inverse based on gradient maximal convergence rate. Filomat 2013, 27, 1269–1276.
- Toutounian, F.; Ataei, A. A new method for computing Moore–Penrose inverse matrices. J. Comput. Appl. Math. 2009, 228, 412–417.
- Soleimani, F.; Stanimirovic, P.S.; Soleymani, F. Some matrix iterations for computing generalized inverses and balancing chemical equations. Algorithms 2015, 8, 982–998.
- Baksalary, O.M.; Trenkler, G. The Moore–Penrose inverse: A hundred years on a frontline of physics research. Eur. Phys. J. H 2021, 46, 9.
- Pavlikova, S.; Sevcovic, D. On the Moore–Penrose pseudo-inversion of block symmetric matrices and its application in the graph theory. Linear Algebra Appl. 2023, 673, 280–303.
- Sayevand, K.; Pourdarvish, A.; Machado, J.A.T.; Erfanifar, R. On the calculation of the Moore–Penrose and Drazin inverses: Application to fractional calculus. Mathematics 2021, 9, 2501.
- AL-Obaidi, R.H.; Darvishi, M.T. A comparative study on qualification criteria of nonlinear solvers with introducing some new ones. J. Math. 2022, 2022, 4327913.
- Liu, C.S.; Kuo, C.L.; Chang, C.W. Solving least-squares problems via a double-optimal algorithm and a variant of Karush–Kuhn–Tucker equation for over-determined system. Algorithms 2024, 17, 211.
- Einstein, A. The foundation of the general theory of relativity. Ann. Phys. 1916, 49, 769–822.
- Todd, J. The condition of finite segments of the Hilbert matrix. In The Solution of Systems of Linear Equations and the Determination of Eigenvalues; Taussky, O., Ed.; National Bureau of Standards Applied Mathematics Series; National Bureau of Standards: Gaithersburg, MD, USA, 1954; Volume 39, pp. 109–116.
- Pan, Y.; Soleymani, F.; Zhao, L. An efficient computation of generalized inverse of a matrix. Appl. Math. Comput. 2018, 316, 89–101.
- Climent, J.J.; Thome, N.; Wei, Y. A geometrical approach on generalized inverses by Neumann-type series. Linear Algebra Appl. 2001, 332–334, 533–540.
- Soleymani, F.; Stanimirovic, P.S.; Haghani, F.K. On hyperpower family of iterations for computing outer inverses possessing high efficiencies. Linear Algebra Appl. 2015, 484, 477–495.
- Xia, Y.; Chen, T.; Shan, J. A novel iterative method for computing generalized inverse. Neural Comput. 2014, 26, 449–465.
| n | 60 | 100 | 150 | 300 | 500 |
|---|---|---|---|---|---|
| ME (maximum error) | | | | | |
| IN (number of iterations) | 3 | 3 | 3 | 3 | 3 |
| CPU time | 0.45 | 0.69 | 1.06 | 8.32 | 29.22 |
| n | 4 | 5 | 6 | 7 |
|---|---|---|---|---|
| DOIA(m) | (, 4) | (, 4) | (, 7) | (, 14) |
| NSIM | (, 11) | (, 12) | (, 12) | (, 13) |
| s | 0 | 0.01 | 0.03 | 0.05 | 0.1 |
|---|---|---|---|---|---|
| ME (maximum error) | | | | | |
| IN (number of iterations) | 67 | 124 | 84 | 246 | 622 |
| n | 4 | 5 | 6 |
|---|---|---|---|
| DOIA(m) | (, 5) | (, 9) | (, 148) |
| NSIM | (, 500) | (, 500) | (, 500) |
| Alg. | IP | IN (number of iterations) | | | |
|---|---|---|---|---|---|
| PSM | | 34 | | | |
| KKRJ | | 12 | | | |
| NSIM | | 20 | | | |
| DOIA(m) | | 3 | | | |
| OHPIA(m) | | 3 | | | |
| Alg. | IP | IN (number of iterations) | | | |
|---|---|---|---|---|---|
| RRKJ | | >500 | | | |
| MPIA | | 50 | | | |
| MPIA(m) | | 7 | | | |
| Alg. | IP | IN (number of iterations) | | | |
|---|---|---|---|---|---|
| KKRJ | | 11 | | | |
| MPIA | | 24 | | | |
| MPIA(m) | | 17 | | | |
| OHPIA(m) | | 6 | | | |
| Alg. | IP | IN (number of iterations) | | | |
|---|---|---|---|---|---|
| [54] | | 130 | | | |
| NSM | | 27 | | | |
| RRKJ | | 11 | | | |
| MPIA(m) | | 7 | | | |
| OHPIA(m) | | 5 | | | |
| Alg. | IP | IN (number of iterations) | | | |
|---|---|---|---|---|---|
| KKRJ | | 10 | | | |
| MPIA(m) | | 6 | | | |
| OHPIA(m) | | 6 | | | |