Article

A Robust Hermitian and Skew-Hermitian Based Multiplicative Splitting Iterative Method for the Continuous Sylvester Equation

by Mohammad Khorsand Zak 1,* and Abbas Abbaszadeh Shahri 2

1 Department of Applied Mathematics, Aligudarz Branch, Islamic Azad University, Aligudarz P.O. Box 159, Iran
2 Faculty of Engineering and Technology, Bircham International University, P.O. Box 2233 Madrid, Spain
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(2), 318; https://doi.org/10.3390/math13020318
Submission received: 9 December 2024 / Revised: 31 December 2024 / Accepted: 7 January 2025 / Published: 20 January 2025

Abstract: For solving the continuous Sylvester equation, a class of Hermitian and skew-Hermitian based multiplicative splitting iteration methods is presented. We consider two symmetric positive definite splittings for each coefficient matrix of the continuous Sylvester equation, which can then be rewritten equivalently as two multiplicative splitting matrix equations. When both coefficient matrices in the continuous Sylvester equation are (non-symmetric) positive semi-definite, and at least one of them is positive definite, we can choose Hermitian and skew-Hermitian (HS) splittings of the matrices A and B in the first equation, and the splitting of the Jacobi iterations for the matrices A and B in the second equation of the multiplicative splitting iteration method. Convergence conditions of this method are studied in depth, and numerical experiments show the efficiency of this method. Moreover, by numerical computation, we show that the multiplicative splitting can be used as a splitting preconditioner and induce accurate, robust and effective preconditioned Krylov subspace iteration methods for solving the continuous Sylvester equation.

1. Introduction

Matrix equations arise in a number of problems in scientific computation and engineering applications, such as control theory [1,2], model reduction [3], signal processing [4] and image processing [5], and many researchers have focused on them [6,7,8,9,10,11]. Nowadays, the continuous Sylvester equation is possibly the most famous and most broadly employed linear matrix equation [5,6,11,12,13,14,15,16,17,18,19]. It is given as
AX + XB = C,                                                      (1)

where A ∈ R^{n×n}, B ∈ R^{m×m} and C ∈ R^{n×m} are given matrices and X ∈ R^{n×m} is an unknown matrix. A Lyapunov equation is a special case with m = n, B = A^T and C = C^T. Here and in the subsequent sections, W^T is used to denote the transpose of the matrix W ∈ R^{n×n}. The continuous Sylvester Equation (1) has a unique solution if and only if A and −B have no common eigenvalues, which will be assumed throughout this paper.
The continuous Sylvester equation has been studied extensively in recent decades and has found wide application in areas such as control theory, differential systems and signal processing. As a result, a number of algorithms focusing on dimension reduction of the equation have been proposed [20]. The resulting algorithms can be extended to more general forms of the continuous algebraic Lyapunov equation and extended Sylvester equations, as well as to Markov jump systems. Various iterative methods for the continuous Sylvester equation are given in the literature. For instance, splitting the matrix equation to find an approximate solution of the continuous Sylvester equation has shown proper convergence; such an approach is simpler than the corresponding multiplicative splitting and thus has relatively lower computational complexity [20,21,22,23,24]. Another class of solutions focuses on overcoming the lack of stability by combining two split equations with corresponding new iterative methods that approximate the solution of the continuous Sylvester equation via the modified Richardson technique [23,25,26].
In summary, if the equation can be expressed as a simple, symmetric, regular singular equation, its mixed structure can be exploited by finding a specific transformation to the standard form, or by a degenerative decomposition for large-scale experiments. Toward a solution, different techniques have been reported in the literature, such as regular deflating linearization, the Hamiltonian boundary value method, dealing with fundamental subspaces that do not relate to eigenvalues of the M-matrix, utilizing standard polynomial eigenproblem algorithms without computing any special splitting, and implementing the Hessenberg matrix to convert the homogeneous equivalent matrix to a special block structure [25,27,28,29].
In general, the dimensions of A and B may differ by orders of magnitude, and this fact is key in selecting the most appropriate numerical solution strategy [11]. For solving general Sylvester equations of small size, direct methods are commonly used. Among these are the Bartels–Stewart [30] and the Hessenberg–Schur [31] methods, which consist of transforming the coefficient matrices A and B into triangular or Hessenberg form by an orthogonal similarity transformation and then solving the resulting system directly by a back-substitution process. When the coefficient matrices A and B are large and sparse, iterative methods are often the methods of choice for solving the Sylvester Equation (1) efficiently and accurately. Many iterative methods have been developed for solving matrix equations, such as the alternating direction implicit (ADI) method [32], the Krylov subspace-based algorithms [13,33,34], the Hermitian and skew-Hermitian splitting (HSS) method and its inexact variant (IHSS) [6], the nested splitting conjugate gradient (NSCG) method [17,35] and the nested splitting CGNR (NS-CGNR) method [18].
In order to study the numerical methods, we often rewrite the continuous Sylvester Equation (1) as the following linear system of equations

𝒜x = c,                                                           (2)

where the matrix 𝒜 is of dimension nm × nm and is given by

𝒜 = I_m ⊗ A + B^T ⊗ I_n,

where ⊗ denotes the Kronecker product (A ⊗ B = [a_ij B]) and

c = vec(C) = (c_11, c_21, …, c_n1, c_12, c_22, …, c_n2, …, c_nm)^T,
x = vec(X) = (x_11, x_21, …, x_n1, x_12, x_22, …, x_n2, …, x_nm)^T.
Of course, this is quite an expensive and numerically poor way to determine the solution X of the continuous Sylvester Equation (1), as the linear system of Equation (2) is costly to solve and can be ill-conditioned.
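For intuition, the equivalence between the matrix Equation (1) and the Kronecker form (2) can be checked numerically in MATLAB; the snippet below uses arbitrary small sizes and random matrices, purely as an illustration:

```matlab
% Small-scale check of the Kronecker-product formulation; illustrative only,
% since forming the nm-by-nm matrix explicitly is impractical for large n, m.
n = 4; m = 3;
A = randn(n); B = randn(m); X = randn(n, m);

Abig = kron(eye(m), A) + kron(B', eye(n));  % I_m (x) A + B^T (x) I_n
lhs  = Abig * X(:);                         % X(:) is vec(X), column-major
rhs  = reshape(A*X + X*B, [], 1);           % vec(AX + XB)
disp(norm(lhs - rhs))                       % ~1e-15: the formulations agree
```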
Now, we recall some necessary notations and useful results, which will be used in the following sections. In this paper, we use λ(M), ‖M‖_2, ‖M‖_F and I_n to denote the eigenvalues, the spectral norm and the Frobenius norm of a matrix M ∈ R^{n×n}, and the identity matrix of dimension n, respectively. Note that ‖·‖_2 is also used to represent the 2-norm of a vector. For a nonsingular matrix B, we denote by κ(B) = ‖B‖_2 ‖B^{−1}‖_2 its spectral condition number, and for a symmetric positive definite matrix B, we define the ‖·‖_B norm of a vector x ∈ R^n as ‖x‖_B = √(x^H B x). Then, the induced ‖·‖_B norm of a matrix M ∈ R^{n×n} is defined as ‖M‖_B = ‖B^{1/2} M B^{−1/2}‖_2. In addition, it holds that ‖Mx‖_B ≤ ‖M‖_B ‖x‖_B, ‖M‖_B ≤ √κ(B) ‖M‖_2 and ‖I‖_B = 1, where I is the identity matrix. For any matrices A = [a_ij] and B = [b_ij], A ⊗ B denotes the Kronecker product, defined as A ⊗ B = [a_ij B]. For the matrix X = (x_1, x_2, …, x_m) ∈ R^{n×m}, vec(X) denotes the vec operator, defined as vec(X) = (x_1^T, x_2^T, …, x_m^T)^T. Moreover, for a matrix M ∈ R^{n×m} and the vector vec(M) ∈ R^{nm}, we have ‖M‖_F = ‖vec(M)‖_2.
For a matrix A ∈ R^{n×n}, A = B − C is called a splitting of the matrix A if B is nonsingular. This splitting is a convergent splitting if ρ(B^{−1}C) < 1 and a contractive splitting if ‖B^{−1}C‖ < 1 for some matrix norm.
The remainder of this paper is organized as follows. Section 2 presents our main contribution: the multiplicative splitting iteration (MSI) method for the continuous Sylvester equation, together with a detailed study of its convergence properties. Section 3 is devoted to extensive numerical experiments with a full comparison against other state-of-the-art methods in the literature. In Section 4, we address some challenges and suggestions for future work. Finally, we present our conclusions in Section 5.

2. Multiplicative Splitting Iterations

2.1. Traditional MSI Method

Consider the linear system of Equation (2). Let 𝒜 = ℳ_i − 𝒩_i (i = 1, 2) be two splittings of the coefficient matrix 𝒜. The MSI method for solving the system of linear Equation (2) is defined as follows [36]:
MSI method for linear system of equations:
  • Given an initial guess x^(0) ∈ R^{nm},
  • For k = 0, 1, 2, … until convergence, do
    • u^(k+1) = ℳ_1^{−1} 𝒩_1 x^(k) + ℳ_1^{−1} c
    • x^(k+1) = ℳ_2^{−1} 𝒩_2 u^(k+1) + ℳ_2^{−1} c
  • end
The MSI method can be equivalently written in the form

x^(k+1) = T_msi x^(k) + G_msi c,  k = 0, 1, 2, …,

where T_msi = ℳ_2^{−1} 𝒩_2 ℳ_1^{−1} 𝒩_1 and G_msi = ℳ_2^{−1} 𝒩_2 ℳ_1^{−1} + ℳ_2^{−1}. See [36] for more details.
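As a concrete illustration, the scheme above can be written as a short MATLAB function. This is a minimal sketch under the assumption that the splitting matrices M1, N1, M2, N2 are available explicitly and the inner solves are done by backslash; the function name and interface are ours, not from [36].

```matlab
% Minimal MSI sketch for solving A*x = c with two splittings A = M1-N1 = M2-N2.
% Dense backslash solves are placeholders; in practice the inner systems would
% be solved by a method that exploits the structure of M1 and M2.
function x = msi_linear(M1, N1, M2, N2, c, tol, maxit)
    x = zeros(size(c));
    A = M2 - N2;                         % recover A for the residual test
    for k = 0:maxit-1
        u = M1 \ (N1*x + c);             % first half-step
        x = M2 \ (N2*u + c);             % second half-step
        if norm(c - A*x) <= tol*norm(c)  % stop on the relative residual
            break
        end
    end
end
```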

2.2. MSI Method for the Sylvester Equation

Based on the MSI method proposed in [36], we obtain the MSI method for the continuous Sylvester equation. Let A = M_i − N_i and B = P_i − Q_i (i = 1, 2) be two splittings of the matrices A and B, such that M_i and P_i (i = 1, 2) are symmetric positive definite. The continuous Sylvester Equation (1) can be equivalently written as the multiplicative splitting matrix equations

M_1 U + U P_1 = N_1 X + X Q_1 + C,
M_2 X + X P_2 = N_2 U + U Q_2 + C.
Under the assumption that M_i and P_i (i = 1, 2) are symmetric positive definite, the spectra of M_i are positive and the spectra of −P_i are negative, so the matrices M_i and −P_i have no common eigenvalues; hence, these two multiplicative splitting matrix equations have unique solutions for all given right-hand-side matrices.
Now, based on the above observations, we can establish the following multiplicative splitting iterations for solving the continuous Sylvester Equation (1):
MSI method for Sylvester equation:
  • Given an initial guess X^(0) ∈ R^{n×m},
  • For k = 0, 1, 2, … until convergence, do
    • Solve M_1 U^(k+1) + U^(k+1) P_1 = N_1 X^(k) + X^(k) Q_1 + C
    • Solve M_2 X^(k+1) + X^(k+1) P_2 = N_2 U^(k+1) + U^(k+1) Q_2 + C
  • end
In special cases, when both coefficient matrices A and B in the Sylvester Equation (1) are (non-symmetric) positive semi-definite, and at least one of them is positive definite, in the first equation of the MSI method we can choose Hermitian and skew-Hermitian (HS) splittings of the matrices A and B, i.e., A = H_A − S_A and B = H_B − S_B, where H_A, S_A, H_B, S_B are given by the Hermitian and skew-Hermitian parts of A and B, respectively. Also, in the second equation of the MSI method, we consider the splitting of the Jacobi iterations [37] for the matrices A and B, i.e., A = D_A − N_A and B = D_B − N_B, where D_A, N_A, D_B, N_B are given by the diagonal and off-diagonal parts of A and B, respectively. Therefore, we can rewrite this method as follows:
  • Given an initial guess X^(0) ∈ R^{n×m},
  • For k = 0, 1, 2, … until convergence, do
    • Solve H_A U^(k+1) + U^(k+1) H_B = S_A X^(k) + X^(k) S_B + C
    • Solve D_A X^(k+1) + X^(k+1) D_B = N_A U^(k+1) + U^(k+1) N_B + C
  • end
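In MATLAB, the four pairs of splitting pieces can be formed directly from A and B. The following sketch uses our reading of the sign convention, chosen so that A = H_A − S_A, B = H_B − S_B, A = D_A − N_A and B = D_B − N_B hold exactly as splittings in the sense defined in Section 1:

```matlab
% Splitting pieces for the HS-based MSI iteration, assuming real square A, B.
HA = (A + A')/2;    SA = HA - A;    % Hermitian part; SA = (A' - A)/2
HB = (B + B')/2;    SB = HB - B;
DA = diag(diag(A)); NA = DA - A;    % diagonal part; NA = -(off-diagonal part)
DB = diag(diag(B)); NB = DB - B;
```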
Achieving two Sylvester equations that we can easily solve is our motivation for choosing these splittings: the first system can be solved by the Sylvester conjugate gradient method [38], and the following routine can be used for the direct solution of the second system:
Direct solution of the matrix equation D_A X + X D_B = F:
  • For i = 1 : n
    • For j = 1 : m
      • x_ij = f_ij / (a_ii + b_jj)
    • end
  • end
where a_ii and b_jj are the diagonal elements of the matrices A and B, respectively. Moreover, F = N_A U + U N_B + C, and U is the solution of the first equation in the MSI method for the Sylvester equation.
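Because D_A and D_B are diagonal, the double loop decouples entrywise, so in MATLAB the whole solve reduces to a single vectorized expression (a sketch, assuming a_ii + b_jj ≠ 0 for all i, j, which holds in particular when A and B have positive diagonals):

```matlab
% Direct solve of DA*X + X*DB = F: entry (i,j) is x_ij = f_ij / (a_ii + b_jj).
% diag(A) is n-by-1 and diag(B).' is 1-by-m, so implicit expansion builds the
% full n-by-m matrix of pairwise sums a_ii + b_jj.
X = F ./ (diag(A) + diag(B).');
```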

2.3. Using Multiplicative Splitting as a Preconditioner

Any matrix splitting can naturally induce a splitting preconditioner for the Krylov subspace methods (see [39]). In Section 3, we show by numerical computation that the multiplicative splitting can be used as a splitting preconditioner to induce accurate, robust and effective preconditioned Krylov subspace iteration methods for solving the continuous Sylvester equation.
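As a sketch of this idea in the Kronecker form (2) (practical only for small test problems, where the nm × nm matrices can be held explicitly; the variable names are ours), one MSI sweep from a zero initial guess can serve as the preconditioner solve inside MATLAB's bicgstab:

```matlab
% One MSI sweep z = M2 \ (N2*(M1 \ r) + r) applies G = M2^{-1}*N2*M1^{-1} + M2^{-1}
% to a residual r, so the MSI splitting acts as a preconditioner. Here M1, N2, M2
% denote the nm-by-nm splitting matrices of Abig = kron(eye(m),A) + kron(B',eye(n)).
afun = @(x) Abig * x;
mfun = @(r) M2 \ (N2 * (M1 \ r) + r);     % MSI sweep started from a zero guess
[x, flag] = bicgstab(afun, C(:), 1e-8, 500, mfun);
X = reshape(x, n, m);                     % recover the matrix solution of (1)
```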

2.4. Convergence Analysis

In the subsequent analysis, we need the following lemmas.
Lemma 1
([36]). Let B, C ∈ R^{n×n} be two Hermitian matrices. Then, BC = CB if and only if B and C have a common set of orthonormal eigenvectors.
Lemma 2
([40]). Let A ∈ R^{n×n} be a symmetric positive definite matrix. Then, for all x ∈ R^n, we have ‖A^{1/2} x‖_2 = ‖x‖_A and

√λ_min(A) ‖x‖_A ≤ ‖Ax‖_2 ≤ √λ_max(A) ‖x‖_A.
Lemma 3
([41]). Suppose that A, B ∈ R^{n×n} are two Hermitian matrices, and denote the minimum and the maximum eigenvalues of a matrix M by λ_min(M) and λ_max(M), respectively. Then,

λ_max(A + B) ≤ λ_max(A) + λ_max(B),  λ_min(A + B) ≥ λ_min(A) + λ_min(B).
Lemma 4
([41]). Let A, B ∈ C^{n×n}, let λ and μ be eigenvalues of A and B, and let x and y be the corresponding eigenvectors, respectively. Then, λμ is an eigenvalue of A ⊗ B corresponding to the eigenvector x ⊗ y.
Lemma 5.
Suppose that 𝒜 = ℳ − 𝒩 is a splitting such that ℳ is symmetric positive definite, with ℳ = I_m ⊗ M + P^T ⊗ I_n and 𝒩 = I_m ⊗ N + Q^T ⊗ I_n, where M and P are symmetric positive definite. If

√(θ³) (max|λ(N)| + max|λ(Q)|) / (λ_min(M) + λ_min(P)) < 1,

where θ = (λ_max(M) + λ_max(P)) / (λ_min(M) + λ_min(P)), then ‖ℳ^{−1}𝒩‖_ℳ < 1.
Proof. 
By Lemmas 3 and 4, we have
‖ℳ‖_2 = λ_max(ℳ) ≥ λ_min(ℳ) ≥ λ_min(M) + λ_min(P),
and
‖𝒩‖_2 = max_{λ∈Λ(𝒩)} |λ(𝒩)| ≤ max|λ(N)| + max|λ(Q)|.
Therefore, it follows that
‖ℳ^{−1}𝒩‖_ℳ ≤ √κ(ℳ) ‖ℳ^{−1}𝒩‖_2 ≤ √κ(ℳ) ‖ℳ^{−1}‖_2 ‖𝒩‖_2 ≤ (κ(ℳ))^{3/2} ‖𝒩‖_2 / ‖ℳ‖_2 ≤ (κ(ℳ))^{3/2} (max|λ(N)| + max|λ(Q)|) / (λ_min(M) + λ_min(P)).
Again, the use of Lemmas 3 and 4 implies that
κ(ℳ) = λ_max(ℳ) / λ_min(ℳ) ≤ (λ_max(M) + λ_max(P)) / (λ_min(M) + λ_min(P)) = θ.
So, we can write
‖ℳ^{−1}𝒩‖_ℳ ≤ √(θ³) (max|λ(N)| + max|λ(Q)|) / (λ_min(M) + λ_min(P)).
This clearly proves the lemma. □
Theorem 1.
Let A ∈ R^{n×n} and B ∈ R^{m×m}, and consider two splittings A = M_i − N_i and B = P_i − Q_i (i = 1, 2) such that M_i and P_i (i = 1, 2) are symmetric positive definite. Write 𝒜 = ℳ_i − 𝒩_i (i = 1, 2) with ℳ_i = I_m ⊗ M_i + P_i^T ⊗ I_n and 𝒩_i = I_m ⊗ N_i + Q_i^T ⊗ I_n (i = 1, 2), and assume that ℳ_1 𝒜^{−1} and ℳ_2 𝒜^{−1} are Hermitian matrices and ℳ_1 𝒜^{−1} ℳ_2 = ℳ_2 𝒜^{−1} ℳ_1. Then, the MSI method is convergent if ϱ_1 ϱ_2 < 1, where

ϱ_i = √(θ_i³) (max|λ(N_i)| + max|λ(Q_i)|) / (λ_min(M_i) + λ_min(P_i))  and  θ_i = (λ_max(M_i) + λ_max(P_i)) / (λ_min(M_i) + λ_min(P_i)),  (i = 1, 2).
Proof. 
By making use of the Kronecker product, we can rewrite the above-described MSI method in the following matrix–vector form:
(I_m ⊗ M_1 + P_1^T ⊗ I_n) u^(k+1) = (I_m ⊗ N_1 + Q_1^T ⊗ I_n) x^(k) + c,
(I_m ⊗ M_2 + P_2^T ⊗ I_n) x^(k+1) = (I_m ⊗ N_2 + Q_2^T ⊗ I_n) u^(k+1) + c,
which can be arranged equivalently as
ℳ_1 u^(k+1) = 𝒩_1 x^(k) + c,
ℳ_2 x^(k+1) = 𝒩_2 u^(k+1) + c,
whose solution can be obtained by the following iteration method:

u^(k+1) = ℳ_1^{−1} 𝒩_1 x^(k) + ℳ_1^{−1} c,
x^(k+1) = ℳ_2^{−1} 𝒩_2 u^(k+1) + ℳ_2^{−1} c.                      (6)
Evidently, the above iteration scheme is the MSI method [36] for solving the system of linear Equation (2) with 𝒜 = ℳ_i − 𝒩_i (i = 1, 2). The MSI iteration (6) can be neatly expressed as a stationary fixed-point iteration as follows:

x^(k+1) = T x^(k) + G c,

with T = ℳ_2^{−1} 𝒩_2 ℳ_1^{−1} 𝒩_1 and G = ℳ_2^{−1} 𝒩_2 ℳ_1^{−1} + ℳ_2^{−1}.
Because ℳ_1 𝒜^{−1} ℳ_2 = ℳ_2 𝒜^{−1} ℳ_1 is equivalent to the two matrices ℳ_1 𝒜^{−1} and ℳ_2 𝒜^{−1} being commutative, according to Lemma 1, we know that ℳ_1 𝒜^{−1} and ℳ_2 𝒜^{−1} have a common set of orthonormal eigenvectors. That is to say, there exists a unitary matrix Q ∈ R^{nm×nm} and two diagonal matrices Λ_i = diag(λ_1^{(i)}, λ_2^{(i)}, …, λ_nm^{(i)}), i = 1, 2, such that Q ℳ_i^{−1} 𝒜 Q* = Λ_i, i = 1, 2. Noticing that
T = ℳ_2^{−1} 𝒩_2 ℳ_1^{−1} 𝒩_1
  = ℳ_2^{−1} (ℳ_2 − 𝒜) ℳ_1^{−1} (ℳ_1 − 𝒜)
  = (I − ℳ_2^{−1} 𝒜)(I − ℳ_1^{−1} 𝒜)
  = (Q* Q − Q* Λ_2 Q)(Q* Q − Q* Λ_1 Q)
  = Q* (I − Λ_2) Q Q* (I − Λ_1) Q
  = Q* (I − Λ_2)(I − Λ_1) Q,
by definition we have
ρ(T) ≤ max_{1≤i,j≤nm} |(1 − λ_i^{(2)})(1 − λ_j^{(1)})|
     ≤ max_{1≤i≤nm} |1 − λ_i^{(2)}| · max_{1≤j≤nm} |1 − λ_j^{(1)}|
     = ρ(I − ℳ_2^{−1} 𝒜) ρ(I − ℳ_1^{−1} 𝒜)
     = ρ(ℳ_2^{−1} 𝒩_2) ρ(ℳ_1^{−1} 𝒩_1)
     ≤ ‖ℳ_2^{−1} 𝒩_2‖_{ℳ_2} ‖ℳ_1^{−1} 𝒩_1‖_{ℳ_1}.
Therefore, by Lemma 5 we have
ρ(T) ≤ √(θ_1³) (max|λ(N_1)| + max|λ(Q_1)|) / (λ_min(M_1) + λ_min(P_1)) · √(θ_2³) (max|λ(N_2)| + max|λ(Q_2)|) / (λ_min(M_2) + λ_min(P_2)) = ϱ_1 ϱ_2,
and this completes the proof. □
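In practice, the sufficient condition of Theorem 1 can be checked numerically for a candidate pair of splittings. The helper below (our own naming; it uses dense eigensolves, so it is meant for small dimensions only) computes ϱ_i for one splitting pair:

```matlab
% Computes the bound rho_i of Theorem 1 for one splitting pair (M, N, P, Q),
% with M and P symmetric positive definite, via dense eigenvalue computations.
function rho = msi_bound(M, N, P, Q)
    denom = min(eig(M)) + min(eig(P));            % lambda_min(M) + lambda_min(P)
    theta = (max(eig(M)) + max(eig(P))) / denom;
    rho   = sqrt(theta^3) * (max(abs(eig(N))) + max(abs(eig(Q)))) / denom;
end
% Convergence is guaranteed when
%   msi_bound(M1,N1,P1,Q1) * msi_bound(M2,N2,P2,Q2) < 1.
```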

3. Numerical Results

All numerical experiments presented in this section were computed in double precision with a number of MATLAB R2018a codes. All iterations are started from the initial zero matrix X^(0) and terminated when the current iterate satisfies

‖R^(k)‖_F / ‖R^(0)‖_F ≤ 10^{−8},

where R^(k) = C − A X^(k) − X^(k) B is the residual of the kth iterate. Also, we use the tolerance ε = 0.01 for the inner iterations in the corresponding methods. For each experiment, we report the number of iterations or the number of total outer iteration steps (shown as out-itr) and the CPU time. In the tables, the norm of the residual is shown as res-norm.
The MSI method was compared with two Hermitian- and skew-Hermitian-based splitting methods, namely the NSCG [17] and the HSS [6] methods, and with two familiar iterative methods, namely the GMRES [34] and the BiCGSTAB [14] methods. Note that although the NS-CGNR method is a Hermitian- and skew-Hermitian-based splitting method, it works well for problems with a dominant skew-Hermitian part but not efficiently for problems with a dominant Hermitian part; see [18]. Therefore, it is not fair to compare this version of the MSI method with it.
Example 1.
For this example, we use the matrices
A = B = M + 2rN + (100/(n+1)²) I,

where M, N ∈ R^{n×n} are the tridiagonal matrices given by

M = tridiag(−1, 2, −1) and N = tridiag(0.5, 0, −0.5).

We consider r = 0.01 and n = m = 256 [6].
This class of problems may arise in the preconditioned Krylov subspace iteration methods used for solving the systems of linear equations resulting from the finite difference or Sinc–Galerkin discretization of various differential equations and boundary value problems [6].
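These test matrices are straightforward to reproduce; the sketch below assembles them in sparse form (the right-hand side C is not specified in the example, so an all-ones matrix is used here purely as a placeholder):

```matlab
% Example 1 setup: A = B = M + 2rN + (100/(n+1)^2) I with M = tridiag(-1,2,-1)
% and N = tridiag(0.5,0,-0.5), assembled as sparse matrices.
n = 256; r = 0.01;
e = ones(n, 1);
M = spdiags([-e 2*e -e], -1:1, n, n);
N = spdiags([0.5*e zeros(n,1) -0.5*e], -1:1, n, n);
A = M + 2*r*N + (100/(n+1)^2) * speye(n);
B = A;
C = ones(n);                                                % placeholder C
relres = @(X) norm(C - A*X - X*B, 'fro') / norm(C, 'fro');  % stopping test
```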
We apply the iteration methods to this problem. The results are given in Table 1.
From the results presented in Table 1, it can be seen that for this problem, the MSI and the NSCG methods are more efficient than the other methods.
Example 2.
For the second experiment, consider A = tridiag(−2, 4, −1) and B = tridiag(−1, 4, −2) with dimensions 2048 × 2048 and 128 × 128, respectively.
This is a problem with a strong Hermitian part [42,43]. The numerical results for this problem are listed in Table 2.
Regarding Table 2, it is obvious that although the MSI method is more effective than the NSCG and the HSS methods, the GMRES and the BiCGSTAB methods are more effective than the MSI method.
Example 3.
We consider the continuous Sylvester Equation (1) with n = m and the coefficient matrices
A = diag(1, 2, …, n) + r L^T,  B = 2^{−t} I_n + diag(1, 2, …, n) + r L^T + 2^{−t} L,
where L is the strictly lower triangular matrix having ones in the lower triangle part [6]. Here, t is a problem parameter to be specified in actual computations.
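For reference, these coefficient matrices can be generated as follows (a sketch; the values of n, r and t below are illustrative placeholders, not values prescribed by the example):

```matlab
% Example 3 setup: A = diag(1..n) + r*L' and
% B = 2^(-t)*I + diag(1..n) + r*L' + 2^(-t)*L, where L is strictly lower
% triangular with ones below the diagonal.
n = 128; r = 0.01; t = 1;               % illustrative parameter choices
L = tril(ones(n), -1);
A = diag(1:n) + r*L';
B = 2^(-t)*eye(n) + diag(1:n) + r*L' + 2^(-t)*L;
```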
The iteration methods were used for this problem, and the results are given in Table 3. Moreover, we compare the convergence history of the iterative methods by residual norm decreasing in Figure 1.
In Table 3, we report the number of outer iterations (out-itr), the CPU time and the residual norm (res-norm) after convergence. For this example, we observe that the MSI method is superior to the other iterative methods in terms of the number of iterations, and it is similar to the BiCGSTAB method in terms of CPU time. Comparing the convergence history of the iterative methods by the decrease in the residual norm shows that the MSI method converges more rapidly and smoothly than the BiCGSTAB method (see Figure 1).
Example 4.
For this example, we used the nonsymmetric sparse matrix SHERMAN3 of dimension 5005 × 5005 with 20,033 nonzero entries from the Harwell–Boeing collection [44] instead of the coefficient matrix A. For the coefficient matrix B, we used B = tridiag(−1, 4, −2) of dimension 8 × 8 [17].
We apply the iteration methods to this problem, and the results are given in Table 4. Moreover, we compare the convergence history of the iterative methods by residual norm decreasing in Figure 2.
In Table 4, we report the number of outer iterations (out-itr), the CPU time and the residual norm (res-norm) after convergence or after 5000 outer iterations. For this example, we observe that the MSI method is superior to the other iterative methods in terms of the number of iterations and CPU time, while the NSCG method has an acceptable performance. Furthermore, the HSS and the GMRES methods have a very slow convergence rate (see Figure 2). From Table 4 and Figure 2, we observe that the BiCGSTAB method diverged for this problem. Therefore, we use the splittings of the MSI, the NSCG and the HSS methods as splitting preconditioners, denoted by MSI-BiCGSTAB, NSCG-BiCGSTAB and HSS-BiCGSTAB, respectively. The results of the preconditioned method for these preconditioners are given in Table 5. In Table 4 and Table 5, the dagger (†) and the notation “>1000” indicate that no solution was obtained after 5000 iterations and that the CPU time exceeded 1000 s, respectively.
The results in Table 5 show that the use of the MSI method as a preconditioner improves the results obtained by the corresponding methods.

4. Future Work

As an important challenge in the field of fractional mathematics, the extension of this work to solving the multi-term fractional Sylvester equation with frequency is of interest, because most of the existing methods are only useful for untangling the multi-term fractional differential equation. Therefore, the investigation of appropriate methods that are applicable to a specific type of multi-term fractional equation is attractive and meaningful [45,46,47,48].

5. Conclusions

In this paper, we have proposed an efficient iterative method for solving the continuous Sylvester equation AX + XB = C. This method employs two symmetric positive definite splittings of the coefficient matrices A and B to form a multiplicative splitting iteration method. The convergence conditions have been derived based on the iteration matrix.
We have compared the MSI method with well-known iterative methods such as the NSCG method, the HSS method, the BiCGSTAB method and the GMRES method for some problems. We have observed that, for these problems, the MSI method is more efficient than the other methods.
In summary, by focusing on the results presented in Table 1, Table 3, Table 4 and Table 5, one can observe that the MSI method is often superior to the other iterative methods. Moreover, the use of the multiplicative splitting as a preconditioner can induce an accurate and effective preconditioned BiCGSTAB method.

Author Contributions

Conceptualization, M.K.Z.; Methodology, M.K.Z. and A.A.S.; Software, M.K.Z. and A.A.S.; Validation, M.K.Z.; Investigation, M.K.Z.; Writing—original draft, M.K.Z.; Writing—review & editing, A.A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are grateful to the anonymous referees for their comments which substantially improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Datta, B. Numerical Methods for Linear Control Systems; Elsevier: Amsterdam, The Netherlands; Academic Press: Cambridge, MA, USA, 2004. [Google Scholar]
  2. Van der Schaft, A. L2-Gain and Passivity Techniques in Nonlinear Control, 2nd ed.; Springer: London, UK, 2000. [Google Scholar]
  3. Baur, U.; Benner, P. Cross-Gramian based model reduction for data-sparse systems. Electron. Trans. Numer. Anal. 2008, 31, 256–270. [Google Scholar]
  4. Anderson, B.D.O.; Agathoklis, P.; Jury, E.I.; Mansour, M. Stability and the matrix Lyapunov equation for discrete 2-dimensional systems. IEEE Trans. Circuits Syst. 1986, 33, 261–267. [Google Scholar] [CrossRef]
  5. Bouhamidi, A.; Jbilou, K. Sylvester Tikhonov-Regularization methods in image restoration. J. Comput. Appl. Math. 2007, 206, 86–98. [Google Scholar] [CrossRef]
  6. Bai, Z.-Z. On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations. J. Comput. Math. 2011, 29, 185–198. [Google Scholar]
  7. Beik, F.P.A.; Salkuyeh, D.K. Weighted versions of Gl-FOM and Gl-GMRES for solving general coupled linear matrix equations. Comput. Math. Math. Phys. 2015, 55, 1606–1618. [Google Scholar] [CrossRef]
  8. Dehghan, M.; Hajarian, M. Two algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of Sylvester matrix equations. Appl. Math. Lett. 2011, 24, 444–449. [Google Scholar] [CrossRef]
  9. Hajarian, M. Solving the general Sylvester discrete-time periodic matrix equations via the gradient based iterative method. Appl. Math. Lett. 2016, 52, 87–95. [Google Scholar] [CrossRef]
  10. Jbilou, K. An Arnoldi based algorithm for large algebraic Riccati equations. Appl. Math. Lett. 2006, 19, 437–444. [Google Scholar] [CrossRef]
  11. Simoncini, V. Computational methods for linear matrix equations. SIAM Rev. 2016, 58, 377–441. [Google Scholar] [CrossRef]
  12. Dehghan, M.; Shirilord, A. The double-step scale splitting method for solving complex Sylvester matrix equation. Comp. Appl. Math. 2019, 38, 444–449. [Google Scholar] [CrossRef]
  13. El Guennouni, A.; Jbilou, K.; Riquet, J. Block Krylov subspace methods for solving large Sylvester equation. Numer. Algorithms 2002, 29, 75–96. [Google Scholar] [CrossRef]
  14. El Guennouni, A.; Jbilou, K.; Sadok, H. A block version of BiCGSTAB for linear systems with multiple right-hand sides. Electron. Trans. Numer. Anal. 2004, 16, 243–256. [Google Scholar]
  15. Hajarian, M. Extending the CGLS algorithm for least squares solutions of the generalized Sylvester-transpose matrix equations. J. Frankl. Inst. 2016, 353, 1168–1185. [Google Scholar] [CrossRef]
  16. Hajarian, M. Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations. J. Frankl. Inst. 2013, 350, 3328–3341. [Google Scholar] [CrossRef]
  17. Khorsand Zak, M.; Toutounian, F. Nested splitting CG-like iterative method for solving the continuous Sylvester equation and preconditioning. Adv. Comput. Math. 2014, 40, 865–880. [Google Scholar] [CrossRef]
  18. Khorsand Zak, M.; Toutounian, F. An iterative method for solving the continuous Sylvester equation by emphasizing on the skew-Hermitian parts of the coefficient matrices. Int. J. Comput. Math. 2017, 94, 633–649. [Google Scholar] [CrossRef]
  19. Tohidi, E.; Khorsand Zak, M. A new matrix approach for solving second-order linear matrix partial differential equations. Mediterr. J. Math. 2016, 13, 1353–1376. [Google Scholar] [CrossRef]
  20. Bouhamidi, A.; Elbouyahyaoui, L.; Heyouni, M. The constant solution method for solving large-scale differential Sylvester matrix equations with time invariant coefficients. Numer. Algorithms 2024, 96, 449–488. [Google Scholar] [CrossRef]
  21. Chu, E.K.W.; Hou, L.; Szyld, D.B.; Zhou, J. Numerical solution of singular Sylvester equations. J. Comput. Appl. Math. 2024, 436, 115426. [Google Scholar] [CrossRef]
  22. Sasaki, N.; Chansangiam, P. Modified Jacobi-Gradient Iterative Method for Generalized Sylvester Matrix Equation. Symmetry 2020, 12, 1831. [Google Scholar] [CrossRef]
  23. Shafiei, S.G.; Hajarian, M. An iterative method based on ADMM for solving generalized Sylvester matrix equations. J. Frankl. Inst. 2022, 15, 8155–8170. [Google Scholar] [CrossRef]
  24. Yu, S.; Peng, J.; Tang, Z.; Peng, Z. Iterative methods to solve the constrained Sylvester equation. AIMS Math. 2023, 8, 21531–21553. [Google Scholar] [CrossRef]
  25. Chu, E.K.W.; Szyld, D.B.; Zhou, J. Numerical solution of singular Lyapunov equations. Numer. Linear Algebra Appl. 2021, 28, e2381. [Google Scholar] [CrossRef]
  26. Du, X.; Iqbal, K.I.B.; Uddin, M.M.; Hossain, M.T.; Shuzan, M.N.I. A computationally effective time-restricted stability preserving H2-optimal model order reduction approach. Results Control. Optim. 2023, 11, 100217. [Google Scholar] [CrossRef]
  27. Jiang, W.; Ge, S.S.; Hu, Q.; Li, D. Sliding-Mode Control for Perturbed MIMO Systems With Time-Synchronized Convergence. IEEE Trans. Cybern. 2024, 54, 4375–4388. [Google Scholar] [CrossRef]
  28. Ma, L.; Chang, R.; Han, M.; Jiao, Y. Explicit solutions of conjugate, periodic, time-varying Sylvester equations. J. Inequal. Appl. 2023, 2023, 133. [Google Scholar] [CrossRef]
  29. Sarumi, I.O.; Furati, K.M.; Khaliq, A.Q.M. Efficient second-order accurate exponential time differencing for time-fractional advection–diffusion–reaction equations with variable coefficients. Math. Comput. Simul. 2025, 230, 20–38. [Google Scholar] [CrossRef]
  30. Bartels, R.H.; Stewart, G.W. Algorithm 432: Solution of the matrix equation AX + XB = C. Commun. ACM 1972, 15, 820–826. [Google Scholar] [CrossRef]
  31. Golub, G.H.; Nash, S.; Van Loan, C. A Hessenberg–Schur method for the problem AX + XB = C. IEEE Trans. Automat. Contr. 1979, AC-24, 909–913. [Google Scholar] [CrossRef]
  32. Benner, P.; Li, R.C.; Truhar, N. On the ADI method for Sylvester equations. J. Comput. Appl. Math. 2009, 233, 1035–1045. [Google Scholar] [CrossRef]
  33. Hu, D.Y.; Reichel, L. Krylov-subspace methods for the Sylvester equation. Linear Algebra Appl. 1992, 172, 283–313. [Google Scholar] [CrossRef]
  34. Salkuyeh, D.K.; Toutounian, F. New approaches for solving large Sylvester equations. Appl. Math. Comput. 2006, 173, 9–18. [Google Scholar]
  35. Khorsand Zak, M.; Toutounian, F. Nested splitting conjugate gradient method for matrix equation AXB = C and preconditioning. Comput. Math. Appl. 2013, 66, 269–278. [Google Scholar] [CrossRef]
  36. Bai, Z.-Z. On the convergence of additive and multiplicative splitting iterations for systems of linear equations. J. Comput. Appl. Math. 2003, 154, 195–214. [Google Scholar] [CrossRef]
  37. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  38. Evans, D.J.; Wan, C.R. A preconditioned conjugate gradient method for AX + XB = C. Int. J. Comput. Math. 1993, 49, 207–219. [Google Scholar] [CrossRef]
  39. Bai, Z.-Z.; Yin, J.-F.; Su, Y.-F. A shift-splitting preconditioner for non-Hermitian positive definite matrices. J. Comput. Math. 2006, 24, 539–552. [Google Scholar]
  40. Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; No. 16, Frontiers in Applied Mathematics; SIAM: Philadelphia, PA, USA, 1995. [Google Scholar]
  41. Lütkepohl, H. Handbook of Matrices; John Wiley & Sons: Chichester, UK, 1996. [Google Scholar]
  42. Krukier, L.A.; Martynova, T.S.; Bai, Z.-Z. Product-type skew-Hermitian triangular splitting iteration methods for strongly non-Hermitian positive definite linear systems. J. Comput. Appl. Math. 2009, 232, 3–16. [Google Scholar] [CrossRef]
  43. Wang, L.; Bai, Z.-Z. Skew-Hermitian triangular splitting iteration methods for non-Hermitian positive definite linear systems of strong skew-Hermitian parts. BIT Numer. Math. 2004, 44, 363–386. [Google Scholar] [CrossRef]
  44. Duff, I.S.; Grimes, R.G.; Lewis, J.G. User’s Guide for the Harwell-Boeing Sparse Matrix Collection; Technical Report RAL-92-086; Rutherford Appleton Laboratory: Chilton, UK, 1992. [Google Scholar]
  45. Dwivedi, H.K. A fast difference scheme for the multi-term time fractional advection diffusion equation with a non-linear source term. Chin. J. Phys. 2024, 85, 86–103. [Google Scholar] [CrossRef]
  46. Lin, J.; Reutskiy, S.; Zhang, Y.; Sun, Y.; Lu, J. The novel analytical–numerical method for multi-dimensional multi-term time-fractional equations with general boundary conditions. Mathematics 2023, 11, 929. [Google Scholar] [CrossRef]
  47. Rahman, G.; Wahid, F.; Gómez-Aguilar, J.F.; Ali, A. Analysis of multi-term arbitrary order implicit differential equations with variable type delay. Phys. Scr. 2024, 99, 115246. [Google Scholar] [CrossRef]
  48. Santra, S. Analysis of a higher-order scheme for multi-term time-fractional integro-partial differential equations with multi-term weakly singular kernels. Numer. Algorithms 2024. [Google Scholar] [CrossRef]
Figure 1. Convergence history of MSI versus the other iterative methods for Example 3.
Figure 2. Convergence history of MSI versus the other iterative methods for Example 4.
Table 1. Results of Example 1.

Method       Out-Itr   CPU Time   Res-Norm
MSI          7         10.54      2.0887 × 10^−6
NSCG         7         8.19       2.1050 × 10^−6
HSS          298       75.32      3.2107 × 10^−6
GMRES(10)    151       40.56      3.0400 × 10^−6
BiCGSTAB     255       17.91      2.4616 × 10^−6
Table 2. Results of Example 2.

Method       Out-Itr   CPU Time   Res-Norm
MSI          7         5.66       4.3252 × 10^−5
NSCG         9         6.18       7.7746 × 10^−5
HSS          21        39.82      8.7083 × 10^−5
GMRES(10)    3         2.84       2.3509 × 10^−6
BiCGSTAB     14        2.67       9.7977 × 10^−5
Table 3. Results of Example 3.

Method       Out-Itr   CPU Time   Res-Norm
MSI          5         16.43      0.0029
NSCG         8         21.65      0.0070
HSS          99        326.71     0.0288
GMRES(10)    20        49.87      0.0027
BiCGSTAB     75        16.22      0.0028
Table 4. Results of Example 4.

Method       Out-Itr   CPU Time   Res-Norm
MSI          34        78.437     1.57 × 10^−4
NSCG         64        121.265    2.61 × 10^−4
HSS          >5000     >1000      2.32
GMRES(10)    >5000     >1000      247.77
BiCGSTAB     †         †          NaN
Table 5. Results of the preconditioned BiCGSTAB for Example 4.

Method           Out-Itr   CPU Time   Res-Norm
BiCGSTAB         †         †          NaN
MSI-BiCGSTAB     21        13.51      9.0749 × 10^−6
NSCG-BiCGSTAB    42        63.92      5.6612 × 10^−6
HSS-BiCGSTAB     †         †          NaN