Article

A Breakdown-Free Block COCG Method for Complex Symmetric Linear Systems with Multiple Right-Hand Sides

1 School of Science, Jiangnan University, Wuxi 214122, Jiangsu, China
2 School of Economic Mathematics/Institute of Mathematics, Southwestern University of Finance and Economics, Chengdu 611130, Sichuan, China
3 Department of Applied Physics, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1302; https://doi.org/10.3390/sym11101302
Submission received: 19 September 2019 / Revised: 12 October 2019 / Accepted: 14 October 2019 / Published: 16 October 2019

Abstract: The block conjugate orthogonal conjugate gradient (BCOCG) method is a common method for solving complex symmetric linear systems with multiple right-hand sides. However, breakdown typically occurs when the right-hand sides are rank deficient. In this paper, based on the orthogonality conditions, we present a breakdown-free BCOCG algorithm with new parameter matrices to handle rank deficiency. To improve the spectral properties of the coefficient matrix A, a preconditioned version of the breakdown-free BCOCG is described in detail. We also give the corresponding algorithms for the block conjugate A-orthogonal conjugate residual method. Numerical results illustrate that when breakdown occurs, the breakdown-free algorithms yield faster convergence than the non-breakdown-free algorithms.

1. Introduction

Consider the following complex symmetric linear system with multiple right-hand sides:
$$AX = B, \tag{1}$$
with $A \in \mathbb{C}^{n \times n}$ non-Hermitian but symmetric (i.e., $A \neq A^H$, $A = A^T$), $X, B \in \mathbb{C}^{n \times p}$, and $1 \le p \ll n$. Such systems arise in many practical and important applications, for example, electromagnetic scattering, quantum chemistry, the complex Helmholtz equation, and so on [1,2].
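To make the setting concrete, the following toy MATLAB snippet (our own illustration, not taken from the paper) builds a small complex symmetric, non-Hermitian system with two right-hand sides:

```matlab
% Toy illustration of the problem class (1): A is complex symmetric
% (A == A.') but not Hermitian (A ~= A'). Sizes are arbitrary.
n = 5; p = 2;
T = randn(n) + 1i*randn(n);
A = T + T.';                               % enforce A = A^T
B = randn(n, p) + 1i*randn(n, p);          % p right-hand sides
% Goal: solve A*X = B for all p columns simultaneously.
```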
Owing to their modest computational and storage requirements, block Krylov subspace methods are often designed to solve system (1) efficiently [3,4]. Recently, Tadano et al. presented the block conjugate orthogonal conjugate gradient (BCOCG) method [5], which exploits the symmetry of A naturally. BCOCG can be regarded as a natural block generalization of the conjugate orthogonal conjugate gradient (COCG) method [6,7,8] for solving system (1). Besides COCG-type methods, the COCR method described in [2,7,8] can also exploit the symmetry of A when p = 1 in (1). In [1], Gu et al. introduced a block version of the COCR method (BCOCR) by employing the residual orthonormalization technique.
However, rank deficiency is a common problem that can cause block Krylov subspace methods to break down. The main reason is that the block search direction vectors may become linearly dependent on the existing basis as the iteration number increases [9,10]. Consequently, the resulting redundant information degrades the accuracy of the solution and the numerical stability. BCOCG and BCOCR may also encounter this problem, especially when p in (1) is large, and we illustrate this phenomenon in the example section. Hence, it is valuable to solve the rank deficiency problem and thereby enhance the numerical stability of BCOCG and BCOCR. Motivated by [10], in this paper we propose a breakdown-free block COCG algorithm (BFBCOCG) and a breakdown-free block COCR algorithm (BFBCOCR) that efficiently handle the rank deficiency problem of BCOCG and BCOCR, respectively.
The convergence rate of Krylov subspace methods depends on the spectral properties of the coefficient matrix, for example, the eigenvalue or singular value distribution, the field of values, the conditioning of the eigensystem, and so on. Fortunately, preconditioning can improve these spectral properties [11]. In this paper, we present preconditioned versions of BFBCOCG and BFBCOCR in detail.
The rest of this paper is organized as follows. In Section 2, based on the orthogonality conditions, we propose the BFBCOCG and BFBCOCR algorithms and their preconditioned variants with their new parameter matrices. Some numerical examples are listed in Section 3 to show the efficiency of our new algorithms. Finally, some conclusions are given in Section 4.
Throughout this paper, $\mathcal{K}_{k+1}(A, R_0)$ denotes the $(k+1)$-dimensional block Krylov subspace $\mathrm{span}\{R_0, AR_0, \ldots, A^k R_0\}$, and $\bar{A}$ denotes the complex conjugate of $A$.

2. The Breakdown-Free Variants of BCOCG and BCOCR

In this section, we present our main algorithms, i.e., the breakdown-free block COCG algorithm (BFBCOCG) and the breakdown-free block COCR algorithm (BFBCOCR), in detail. We first introduce the block COCG (BCOCG) and block COCR (BCOCR) methods briefly and then give the derivations of BFBCOCG and BFBCOCR together with their orthogonality properties. Finally, the preconditioned variants of BFBCOCG and BFBCOCR are also proposed, denoted by BFPBCOCG and BFPBCOCR, respectively. We use an underline to distinguish the quantities of the breakdown-free algorithms from those of the non-breakdown-free algorithms.

2.1. BCOCG and BCOCR

Let $X_0$ be an initial approximation, and let $X_{k+1} \in X_0 + \mathcal{K}_{k+1}(A, R_0)$ be the $(k+1)$-th approximate solution of system (1). The recurrence relations of BCOCG and BCOCR are as follows [1]:
$$\begin{aligned}
R_0 &= P_0 = B - AX_0 \in \mathcal{K}_1(A, R_0),\\
X_{k+1} &= X_k + P_k \alpha_k \in X_0 + \mathcal{K}_{k+1}(A, R_0),\\
R_{k+1} &= R_k - A P_k \alpha_k \in \mathcal{K}_{k+2}(A, R_0),\\
P_{k+1} &= R_{k+1} + P_k \beta_k \in \mathcal{K}_{k+2}(A, R_0),
\end{aligned} \quad \text{for } k = 0, 1, 2, \ldots \tag{2}$$
BCOCG and BCOCR differ only in the formulas for the parameter matrices $\alpha_k$ and $\beta_k$ in (2), which are derived by imposing the following orthogonality conditions:
$$R_{k+1} \perp \mathcal{L} \quad \text{and} \quad AP_{k+1} \perp \mathcal{L}. \tag{3}$$
Different choices of $\mathcal{L}$ lead to different algorithms:
  • $\mathcal{L} = \mathcal{K}_{k+1}(\bar{A}, \bar{R}_0)$ results in BCOCG;
  • $\mathcal{L} = \bar{A}\,\mathcal{K}_{k+1}(\bar{A}, \bar{R}_0)$ results in BCOCR.

2.2. BFBCOCG and BFBCOCR

If the block size p is large, the block search direction vectors will inevitably become linearly dependent as the iteration number of BCOCG increases; hence rank deficiency occurs. In the following, to overcome this problem, we apply the breakdown-free strategy to BCOCG and propose the breakdown-free block COCG algorithm (BFBCOCG). The rationale of BFBCOCG is to extract an orthonormal basis $\underline{P}_{k+1}$ of the current search space by the operation $\mathrm{orth}(\cdot)$. Thus, compared with (2), the new recurrence relations become
$$\begin{aligned}
P_0 &= R_0 = B - AX_0, \quad \underline{P}_0 = \mathrm{orth}(P_0) \in \mathcal{K}_1(A, R_0),\\
X_{k+1} &= X_k + \underline{P}_k \underline{\alpha}_k \in X_0 + \mathcal{K}_{k+1}(A, R_0),\\
R_{k+1} &= R_k - A\underline{P}_k \underline{\alpha}_k \in \mathcal{K}_{k+2}(A, R_0),\\
\underline{P}_{k+1} &= \mathrm{orth}(P_{k+1}) = \mathrm{orth}(R_{k+1} + \underline{P}_k \underline{\beta}_k) \in \mathcal{K}_{k+2}(A, R_0),
\end{aligned} \quad \text{for } k = 0, 1, 2, \ldots \tag{4}$$
Using the orthogonality conditions (3) again, we obtain Lemma 1. Here, we denote $\underline{U}_k = A\underline{P}_k$.
Lemma 1.
For all $0 \le j < k$, $R_j^T R_k = 0$, $\underline{P}_j^T R_k = 0$, and $\underline{P}_j^T A P_k = 0$.
Proof. 
Because $R_j, \underline{P}_j \in \mathcal{K}_k(A, R_0)$ for all $0 \le j < k$, and $R_k \perp \mathcal{K}_k(\bar{A}, \bar{R}_0)$ by (3), we have $R_j^T R_k = 0$ and $\underline{P}_j^T R_k = 0$. The relation $\underline{P}_j^T A P_k = 0$ then follows from the second orthogonality condition in (3). □
Similarly, the following Theorem 1 is obtained to update the parameters α ̲ k and β ̲ k .
Theorem 1.
Under the orthogonality conditions (3), the parameter matrices $\underline{\alpha}_k$ and $\underline{\beta}_k$ in the recurrence relations (4) can be obtained by solving the following equations:
$$(\underline{P}_k^T \underline{U}_k)\,\underline{\alpha}_k = \underline{P}_k^T R_k, \qquad (R_k^T \underline{P}_k)\,\underline{\beta}_k = R_{k+1}^T R_{k+1}, \quad \text{for } k = 0, 1, 2, \ldots \tag{5}$$
Proof. 
From Lemma 1 and (4), we have the following two equations:
$$0 = \underline{P}_k^T R_{k+1} = \underline{P}_k^T (R_k - A\underline{P}_k \underline{\alpha}_k) = \underline{P}_k^T R_k - \underline{P}_k^T \underline{U}_k \underline{\alpha}_k \tag{6}$$
and
$$0 = \underline{P}_k^T A P_{k+1} = \underline{P}_k^T A (R_{k+1} + \underline{P}_k \underline{\beta}_k) = \underline{U}_k^T R_{k+1} + \underline{P}_k^T \underline{U}_k \underline{\beta}_k. \tag{7}$$
Solving (6), we easily obtain $\underline{\alpha}_k$.
Pre-multiplying (7) by $\underline{\alpha}_k^T$ and using the third equation of (4), we have
$$\underline{\alpha}_k^T \underline{P}_k^T \underline{U}_k \underline{\beta}_k = -\underline{\alpha}_k^T \underline{U}_k^T R_{k+1} = -(R_k - R_{k+1})^T R_{k+1}.$$
From Lemma 1, we have $R_k^T R_{k+1} = 0$, and by the first equation of (5) together with the symmetry of $\underline{P}_k^T \underline{U}_k = \underline{P}_k^T A \underline{P}_k$, one has $\underline{\alpha}_k^T \underline{P}_k^T \underline{U}_k = R_k^T \underline{P}_k$. Thus the above equation can be rewritten as
$$(R_k^T \underline{P}_k)\,\underline{\beta}_k = R_{k+1}^T R_{k+1},$$
which can be used to update the matrix $\underline{\beta}_k$. □
The following Algorithm 1 is the breakdown-free block COCG.
Algorithm 1 Breakdown-free block COCG (BFBCOCG)
1. Given the initial guess $X_0 \in \mathbb{C}^{n \times p}$ and a tolerance $tol$;
   Compute: $R_0 = B - AX_0$, $\underline{P}_0 = \mathrm{orth}(R_0)$, $\underline{U}_0 = A\underline{P}_0$;
2. For $k = 0, 1, 2, \ldots$ until $\|R_k\|_F / \|R_0\|_F \le tol$, do
  • Solve: $(\underline{P}_k^T \underline{U}_k)\,\underline{\alpha}_k = \underline{P}_k^T R_k$;
  • Update: $X_{k+1} = X_k + \underline{P}_k \underline{\alpha}_k$, $R_{k+1} = R_k - \underline{U}_k \underline{\alpha}_k$;
  • Solve: $(R_k^T \underline{P}_k)\,\underline{\beta}_k = R_{k+1}^T R_{k+1}$;
  • Update: $\underline{P}_{k+1} = \mathrm{orth}(R_{k+1} + \underline{P}_k \underline{\beta}_k)$, $\underline{U}_{k+1} = A\underline{P}_{k+1}$;
End For
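For readers who wish to experiment, the following MATLAB sketch implements Algorithm 1 under the stated assumptions; the function name bfbcocg, the argument maxit, and the use of backslash for the small solves are our own choices rather than part of the paper.

```matlab
% A minimal MATLAB sketch of Algorithm 1 (BFBCOCG). Note the unconjugated
% transpose .' throughout, as required by the COCG bilinear form, and that
% orth() may return fewer than p columns when rank deficiency occurs, in
% which case the small solves below become least-squares solves.
function [X, relres, iter] = bfbcocg(A, B, tol, maxit)
    [n, p] = size(B);
    X = zeros(n, p);                       % X_0 = 0
    R = B - A*X;                           % R_0
    P = orth(R);                           % orthonormal block search directions
    U = A*P;
    nrmR0 = norm(R, 'fro');
    for iter = 0:maxit
        relres = norm(R, 'fro') / nrmR0;
        if relres <= tol, return; end
        alpha = (P.'*U) \ (P.'*R);         % solve (P^T U) alpha = P^T R
        X     = X + P*alpha;
        Rnew  = R - U*alpha;
        beta  = (R.'*P) \ (Rnew.'*Rnew);   % solve (R^T P) beta = Rnew^T Rnew
        R     = Rnew;
        P     = orth(R + P*beta);          % re-orthonormalize the directions
        U     = A*P;
    end
end
```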
Similarly to BFBCOCG, we can easily obtain BFBCOCR by using $\mathcal{L} = \bar{A}\,\mathcal{K}_{k+1}(\bar{A}, \bar{R}_0)$ in the orthogonality conditions (3). The following Algorithm 2 is the breakdown-free block COCR.
Algorithm 2 Breakdown-free block COCR (BFBCOCR)
1. Given the initial guess $X_0 \in \mathbb{C}^{n \times p}$ and a tolerance $tol$;
   Compute: $R_0 = B - AX_0$, $\underline{P}_0 = \mathrm{orth}(R_0)$, $\underline{U}_0 = A\underline{P}_0$;
2. For $k = 0, 1, 2, \ldots$ until $\|R_k\|_F / \|R_0\|_F \le tol$, do
  • Solve: $(\underline{U}_k^T \underline{U}_k)\,\underline{\alpha}_k = \underline{U}_k^T R_k$;
  • Update: $X_{k+1} = X_k + \underline{P}_k \underline{\alpha}_k$, $R_{k+1} = R_k - \underline{U}_k \underline{\alpha}_k$;
  • Solve: $(R_k^T \underline{U}_k)\,\underline{\beta}_k = R_{k+1}^T A R_{k+1}$;
  • Update: $\underline{P}_{k+1} = \mathrm{orth}(R_{k+1} + \underline{P}_k \underline{\beta}_k)$, $\underline{U}_{k+1} = A\underline{P}_{k+1}$;
End For
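Relative to the BFBCOCG sketch given after Algorithm 1, only the two small solves change (and one extra product with A appears); in the same hypothetical notation they would read:

```matlab
% BFBCOCR parameter solves (cf. Algorithm 2); variable names follow the
% bfbcocg sketch above and are our own.
alpha = (U.'*U) \ (U.'*R);              % replaces the alpha-solve
% ... after Rnew = R - U*alpha has been formed:
beta  = (R.'*U) \ (Rnew.'*(A*Rnew));    % replaces the beta-solve; one extra bmv
```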

2.3. BFPBCOCG and BFPBCOCR

As is well known, if the coefficient matrix has poor spectral properties, Krylov subspace methods may not be robust, whereas a preconditioning strategy can improve the situation [11]. The idea is to precondition (1) with a symmetric positive definite matrix $M$ that approximates the inverse of $A$, which gives the following equivalent system:
$$MAX = MB. \tag{9}$$
Let $M = LL^T$ be the Cholesky decomposition of $M$. Then system (9) is equivalent to
$$\tilde{A}\tilde{X} = \tilde{B}, \quad \text{with } \tilde{A} = L^T A L, \ \tilde{X} = L^{-1}X, \ \tilde{B} = L^T B. \tag{10}$$
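Indeed, the equivalence of (9) and (10) can be verified directly (a short check we add for completeness): since $M = LL^T$ with $L$ nonsingular,
$$MAX = MB \;\Longleftrightarrow\; LL^T AX = LL^T B \;\Longleftrightarrow\; (L^T A L)(L^{-1}X) = L^T B \;\Longleftrightarrow\; \tilde{A}\tilde{X} = \tilde{B}.$$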
We add a tilde "˜" to the variables derived from the new system. Applying the BFBCOCG method and its recurrence relations (4) to (10), we have the orthogonality conditions
$$\tilde{R}_k \perp \mathcal{K}_k(\bar{\tilde{A}}, \bar{\tilde{R}}_0) \quad \text{and} \quad \tilde{A}\tilde{P}_k \perp \mathcal{K}_k(\bar{\tilde{A}}, \bar{\tilde{R}}_0). \tag{11}$$
It is easy to see that $\tilde{R}_k = L^T R_k$ and $\underline{\tilde{P}}_k = \mathrm{orth}(\tilde{R}_k + \underline{\tilde{P}}_{k-1}\underline{\beta}_{k-1}) \in \mathcal{K}_{k+1}(\tilde{A}, \tilde{R}_0)$. The approximate solution $X_k = L\tilde{X}_k$ lies in $L\tilde{X}_0 + L\,\mathcal{K}_k(\tilde{A}, \tilde{R}_0) = X_0 + \mathcal{K}_k(MA, MR_0)$. Setting $\underline{P}_k = L\underline{\tilde{P}}_k$, we have $\underline{P}_k \in \mathcal{K}_{k+1}(MA, MR_0)$. The new recurrence relations become
$$\begin{aligned}
R_0 &= B - AX_0, \quad \underline{P}_0 = \mathrm{orth}(P_0) = \mathrm{orth}(MR_0) \in \mathcal{K}_1(MA, MR_0),\\
X_{k+1} &= X_k + \underline{P}_k \underline{\alpha}_k \in X_0 + \mathcal{K}_{k+1}(MA, MR_0),\\
R_{k+1} &= R_k - A\underline{P}_k \underline{\alpha}_k \in L^{-T}\mathcal{K}_{k+2}(\tilde{A}, \tilde{R}_0),\\
\underline{P}_{k+1} &= \mathrm{orth}(P_{k+1}) = \mathrm{orth}(MR_{k+1} + \underline{P}_k \underline{\beta}_k) \in \mathcal{K}_{k+2}(MA, MR_0),
\end{aligned} \quad \text{for } k = 0, 1, 2, \ldots \tag{12}$$
The orthogonality conditions (11) become
$$R_k \perp \mathcal{K}_k(\bar{M}\bar{A}, \bar{M}\bar{R}_0) \quad \text{and} \quad AP_k \perp \mathcal{K}_k(\bar{M}\bar{A}, \bar{M}\bar{R}_0). \tag{13}$$
Under the recurrence relations (12) and the orthogonality conditions (13), we obtain the following Lemma 2 and Theorem 2, which are used to update the matrices $\underline{\alpha}_k$ and $\underline{\beta}_k$. We omit the proofs because they are analogous to those of Lemma 1 and Theorem 1. Here, we denote $Z_k = MR_k$ and $\underline{U}_k = A\underline{P}_k$.
Lemma 2.
For all $0 \le j < k$, $R_j^T Z_k = 0$, $\underline{P}_j^T R_k = 0$, and $\underline{P}_j^T A P_k = 0$.
Remark 1.
Under the preconditioned strategy, the block residuals change from being orthogonal for BFBCOCG to being M-orthogonal for BFPBCOCG. Here, two vectors x and y are M-orthogonal, written $x \perp_M y$, if $y^H M x = 0$.
Theorem 2.
Under the orthogonality conditions (13), the parameter matrices $\underline{\alpha}_k$ and $\underline{\beta}_k$ in the recurrence relations (12) can be obtained by solving the following equations:
$$(\underline{P}_k^T \underline{U}_k)\,\underline{\alpha}_k = \underline{P}_k^T R_k, \qquad (R_k^T \underline{P}_k)\,\underline{\beta}_k = R_{k+1}^T Z_{k+1}, \quad \text{for } k = 0, 1, 2, \ldots \tag{14}$$
The following Algorithm 3 is the breakdown-free preconditioned block COCG algorithm.
Algorithm 3 Breakdown-free preconditioned block COCG (BFPBCOCG)
1. Given the initial guess $X_0 \in \mathbb{C}^{n \times p}$ and a tolerance $tol$;
   Compute: $R_0 = B - AX_0$, $Z_0 = MR_0$, $\underline{P}_0 = \mathrm{orth}(Z_0)$, $\underline{U}_0 = A\underline{P}_0$;
2. For $k = 0, 1, 2, \ldots$ until $\|R_k\|_F / \|R_0\|_F \le tol$, do
  • Solve: $(\underline{P}_k^T \underline{U}_k)\,\underline{\alpha}_k = \underline{P}_k^T R_k$;
  • Update: $X_{k+1} = X_k + \underline{P}_k \underline{\alpha}_k$, $R_{k+1} = R_k - \underline{U}_k \underline{\alpha}_k$, $Z_{k+1} = MR_{k+1}$;
  • Solve: $(R_k^T \underline{P}_k)\,\underline{\beta}_k = R_{k+1}^T Z_{k+1}$;
  • Update: $\underline{P}_{k+1} = \mathrm{orth}(Z_{k+1} + \underline{P}_k \underline{\beta}_k)$, $\underline{U}_{k+1} = A\underline{P}_{k+1}$;
End For
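A corresponding MATLAB sketch of Algorithm 3 is given below; here Mfun is a hypothetical function handle that applies the preconditioner M to a block vector (for instance, via two triangular solves with an incomplete factor), and is not an interface defined in the paper.

```matlab
% A minimal MATLAB sketch of Algorithm 3 (BFPBCOCG); Mfun(R) should return M*R.
function [X, relres, iter] = bfpbcocg(A, B, Mfun, tol, maxit)
    [n, p] = size(B);
    X = zeros(n, p);
    R = B - A*X;
    Z = Mfun(R);                           % Z_0 = M R_0
    P = orth(Z);
    U = A*P;
    nrmR0 = norm(R, 'fro');
    for iter = 0:maxit
        relres = norm(R, 'fro') / nrmR0;
        if relres <= tol, return; end
        alpha = (P.'*U) \ (P.'*R);
        X     = X + P*alpha;
        Rnew  = R - U*alpha;
        Znew  = Mfun(Rnew);                % Z_{k+1} = M R_{k+1}
        beta  = (R.'*P) \ (Rnew.'*Znew);
        R = Rnew;  Z = Znew;
        P = orth(Z + P*beta);              % orth() guards against rank deficiency
        U = A*P;
    end
end
```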
Changing the orthogonality conditions (11) to the following conditions:
$$\tilde{R}_k \perp \bar{\tilde{A}}\,\mathcal{K}_k(\bar{\tilde{A}}, \bar{\tilde{R}}_0) \quad \text{and} \quad \tilde{A}\tilde{P}_k \perp \bar{\tilde{A}}\,\mathcal{K}_k(\bar{\tilde{A}}, \bar{\tilde{R}}_0),$$
the breakdown-free preconditioned block COCR (BFPBCOCR) can be deduced by a derivation similar to that of BFPBCOCG. Algorithm 4 gives the pseudocode of BFPBCOCR.
Algorithm 4 Breakdown-free preconditioned block COCR (BFPBCOCR)
1. Given the initial guess $X_0 \in \mathbb{C}^{n \times p}$ and a tolerance $tol$;
   Compute: $R_0 = B - AX_0$, $Z_0 = MR_0$, $\underline{P}_0 = \mathrm{orth}(Z_0)$, $\underline{U}_0 = A\underline{P}_0$;
2. For $k = 0, 1, 2, \ldots$ until $\|R_k\|_F / \|R_0\|_F \le tol$, do
  • Solve: $(\underline{U}_k^T M \underline{U}_k)\,\underline{\alpha}_k = \underline{U}_k^T Z_k$;
  • Update: $X_{k+1} = X_k + \underline{P}_k \underline{\alpha}_k$, $R_{k+1} = R_k - \underline{U}_k \underline{\alpha}_k$, $Z_{k+1} = MR_{k+1}$;
  • Solve: $(Z_k^T \underline{U}_k)\,\underline{\beta}_k = Z_{k+1}^T A Z_{k+1}$;
  • Update: $\underline{P}_{k+1} = \mathrm{orth}(Z_{k+1} + \underline{P}_k \underline{\beta}_k)$, $\underline{U}_{k+1} = A\underline{P}_{k+1}$;
End For
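Again, only the two small solves (and the extra applications of M and A) distinguish Algorithm 4 from the BFPBCOCG sketch above; in the same hypothetical notation:

```matlab
% BFPBCOCR parameter solves (cf. Algorithm 4); names follow the bfpbcocg
% sketch above and are our own.
alpha = (U.'*Mfun(U)) \ (U.'*Z);            % replaces the alpha-solve
% ... after Rnew = R - U*alpha and Znew = Mfun(Rnew):
beta  = (Z.'*U) \ (Znew.'*(A*Znew));        % replaces the beta-solve
```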
At the end of this section, we give the computational complexity of six algorithms: the block COCG algorithm, the block COCR algorithm, the breakdown-free block COCG algorithm, the breakdown-free block COCR algorithm, the breakdown-free preconditioned block COCG algorithm, and the breakdown-free preconditioned block COCR algorithm, denoted by BCOCG, BCOCR, BFBCOCG, BFBCOCR, BFPBCOCG, and BFPBCOCR, respectively. The pseudocodes of BCOCG and BCOCR are from [1]. Table 1 shows the main costs per cycle of the six algorithms, where a "block vector" is a matrix of size $n \times p$, and
  • "bdp" denotes the number of dot products of two block vectors,
  • "bmv" the number of products of an $n \times n$ matrix (A or M) with a block vector,
  • "bsaxpy" the number of sums of two block vectors in which one of them is a block vector multiplied by a $p \times p$ matrix,
  • "LE" the number of linear systems solved with a $p \times p$ coefficient matrix, and
  • "bSC" the number of block vectors that must be stored.
From Table 1, we can see that the last four algorithms need one more block dot product per cycle than the original two algorithms, BCOCG and BCOCR. For the number of matrix-block-vector products, BFBCOCR and BFPBCOCR need twice as many as BFBCOCG and BFPBCOCG, respectively. This may make BFBCOCR and BFPBCOCR more time-consuming, especially for problems with dense matrices; we observe this phenomenon in Example 1.

3. Numerical Examples

In this section, examples are given to demonstrate the effectiveness of our new algorithms. All examples are run in MATLAB 8.4 (R2014b) on a laptop with an Intel Core i5-6200U CPU (2.3 GHz) and 8 GB of memory under the Windows 10 operating system.
We evaluate the algorithms by the iteration number (Iter), the CPU time (CPU), and the $\log_{10}$ of the true relative residual in the Frobenius norm, $\log_{10}\frac{\|B - AX_k\|_F}{\|B\|_F}$ (TRR). All algorithms are started with $X_0 = \mathrm{zeros}(n, p)$ and stopped when $\frac{\|B - AX_k\|_F}{\|B\|_F} \le 10^{-10}$. The symbol † means no convergence within 1000 iterations. The bold values in the following tables indicate the shortest CPU time.
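In MATLAB terms, the reported quantity and the stopping test can be computed as follows (variable names are ours):

```matlab
% True relative residual (TRR) and stopping criterion used in the experiments.
TRR  = log10(norm(B - A*X, 'fro') / norm(B, 'fro'));
stop = norm(B - A*X, 'fro') / norm(B, 'fro') <= 1e-10;   % at most 1000 iterations
```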
Example 1.
In this example, six algorithms without preconditioning are compared: BCOCG, BCOCR, BFBCOCG, BFBCOCR, the block COCG method with residual orthonormalization (BCOCG_rq) [1], and the block COCR method with residual orthonormalization (BCOCR_rq) [1]. Two types of matrices are tested. The first type contains three matrices, cube1800, parallelepipede, and sphere2430, which are all dense, come from monostatic radar cross-section calculations, and are obtained from the GitHub repository (https://github.com/Hsien-Ming-Ku/Test_matrices/tree/master/Example2). Their dimensions n are 1800, 2016, and 2430, respectively. The second type contains helmdate_N40, helmdate_N80, and helmdate_N160, which come from the discretization of the Helmholtz equation [12]. Their dimensions n are 1681, 6561, and 25921, respectively. The right-hand sides are chosen as $B = (1+i)[\mathrm{rand}(n,6), \mathrm{ones}(n,2)]$, so that the block size $p = 8$ and B is rank deficient. Here, $i = \sqrt{-1}$. Table 2 and Table 3 give the results.
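For instance, the rank-deficient right-hand sides of this example can be generated and passed to the BFBCOCG sketch from Section 2 as follows (the call itself is illustrative; A must be loaded from the chosen test set):

```matlab
% Rank-deficient right-hand sides of Example 1: two identical columns of ones
% make rank(B) <= 7 although p = 8.
n = 1681;                                   % e.g., helmdate_N40
B = (1 + 1i) * [rand(n, 6), ones(n, 2)];
% [X, relres, iter] = bfbcocg(A, B, 1e-10, 1000);   % A: the chosen test matrix
```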
From Table 2 and Table 3, we can see that for these rank-deficient problems the breakdown-free algorithms outperform the non-breakdown-free algorithms in both CPU time and iteration number. For the first type of problem, although BFBCOCR needs fewer iterations than BFBCOCG, its CPU time is nearly double, because the matrices are dense and BFBCOCR requires one more matrix-block-vector product per iteration than BFBCOCG. For the first two matrices of the second type, the difference between BFBCOCG and BFBCOCR is not obvious, mainly because the matrices are sparse. However, for the third matrix, helmdate_N160, only BFBCOCR can solve the problem. This indicates that for the matrices from the discretization of the Helmholtz equation, BFBCOCR is the most robust of the six algorithms.
Example 2.
To make the comparison fair, all algorithms in this example are preconditioned; they are the preconditioned versions of the same six algorithms as in Example 1, denoted by PBCOCG, PBCOCR, BFPBCOCG, BFPBCOCR, PBCOCG_rq, and PBCOCR_rq, respectively. The preconditioning strategy we use is IC(3) from [13]. IC(3) produces an incomplete factorization $LL^T$ of the complex symmetric A, and if L is nonsingular, then $LL^T$ can be used as a preconditioner. We test three matrices. The first matrix is dwg961b, which is from the NEP collection [14] for electrical engineering and has dimension n = 961. The second and third matrices are helmdate_N40 and helmdate_N80, respectively. The right-hand sides are chosen as $B = (1+i)[\mathrm{rand}(n,8), \mathrm{ones}(n,2)]$, so that the block size $p = 10$ and B is rank deficient. Numerical results are shown in Table 4 and Figure 1.
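One plausible way to apply such a preconditioner inside Algorithms 3 and 4 is via two (unconjugated) triangular solves with the incomplete factor; the paper does not give this code, so the following is only a sketch under that reading:

```matlab
% With an incomplete factor Lc such that Lc*Lc.' approximates A, applying the
% preconditioner to a block residual amounts to solving Lc*Lc.'*Z = R.
Mfun = @(R) Lc.' \ (Lc \ R);
% [X, relres, iter] = bfpbcocg(A, B, Mfun, 1e-10, 1000);
```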
From the results, we can see that the breakdown-free algorithms perform better than the non-breakdown-free algorithms. In particular, in Figure 1 the convergence curves of the four non-breakdown-free algorithms do not drop, while those of the breakdown-free algorithms quickly reach the required accuracy. For matrix dwg961b, Table 4 and Figure 1 show that the convergence curves of BFPBCOCG and BFPBCOCR are both downward-trending, with BFPBCOCG behaving more smoothly. For matrix helmdate_N40, the difference between BFPBCOCG and BFPBCOCR is small. For matrix helmdate_N80, BFPBCOCG does not converge, but BFPBCOCR converges quickly. This illustrates that for the matrices from the discretization of the Helmholtz equation, BFPBCOCR has an advantage over the other five preconditioned algorithms in terms of robustness.

4. Conclusions

In this paper, we presented a breakdown-free block conjugate orthogonal conjugate gradient algorithm for complex symmetric linear systems with multiple right-hand sides. Based on the orthogonality conditions, we derived its two new parameter matrices, and we also described its preconditioned version in detail. In addition, we presented the breakdown-free version of the block conjugate A-orthogonal conjugate residual method together with its preconditioned version. The numerical examples show that when the right-hand sides are rank deficient, our four new algorithms perform better than the other algorithms. Moreover, for Helmholtz equation problems, BFBCOCR and BFPBCOCR show more stable behavior than BFBCOCG and BFPBCOCG, while for dense matrix problems, BFBCOCG and BFPBCOCG converge faster than BFBCOCR and BFPBCOCR. However, a theoretical analysis of the advantages of BFBCOCG and BFBCOCR, as well as of BFPBCOCG and BFPBCOCR, is still lacking and requires further investigation.

Author Contributions

X.-M.G. guided the process of the whole paper and reviewed the paper; S.-L.Z. provided some innovative advice and reviewed the paper; H.-X.Z. deduced the theory, implemented the algorithms with the numerical examples, and wrote the paper.

Funding

This work was financed by the National Natural Science Foundation of China (11701225 and 11801463), the Fundamental Research Funds for the Central Universities (JBK1902028), and the Natural Science Foundation of Jiangsu Province (BK20170173).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gu, X.-M.; Carpentieri, B.; Huang, T.-Z.; Meng, J. Block variants of the COCG and COCR methods for solving complex symmetric linear systems with multiple right-hand sides. In Numerical Mathematics and Advanced Applications ENUMATH 2015; Karasözen, B., Manguoǧlu, M., Tezer-Sezgin, M., Göktepe, S., Uǧur, Ö., Eds.; Lecture Notes in Computational Science and Engineering 112; Springer International Publishing: Cham, Switzerland, 2016; pp. 305–313.
  2. Sogabe, T.; Zhang, S.-L. A COCR method for solving complex symmetric linear systems. J. Comput. Appl. Math. 2007, 199, 297–303.
  3. Gutknecht, M.H. Block Krylov space methods for linear systems with multiple right-hand sides: An introduction. In Modern Mathematical Models, Methods and Algorithms for Real World Systems; Siddiqi, A.H., Duff, I.S., Christensen, O., Eds.; Anamaya Publishers: New Delhi, India, 2006; pp. 420–447.
  4. Zhang, J.; Zhao, J. A novel class of block methods based on the block AAT-Lanczos biorthogonalization process for matrix equations. Int. J. Comput. Math. 2013, 90, 341–359.
  5. Tadano, H.; Sakurai, T. A block Krylov subspace method for the contour integral method and its application to molecular orbital computations. IPSJ Trans. Adv. Comput. Syst. 2009, 2, 10–18. (In Japanese)
  6. Van der Vorst, H.A.; Melissen, J.B.M. A Petrov-Galerkin type method for solving Ax = b, where A is symmetric complex. IEEE Trans. Magn. 1990, 26, 706–708.
  7. Gu, X.-M.; Huang, T.-Z.; Li, L.; Li, H.-B.; Sogabe, T.; Clemens, M. Quasi-minimal residual variants of the COCG and COCR methods for complex symmetric linear systems in electromagnetic simulations. IEEE Trans. Microw. Theory Tech. 2014, 62, 2859–2867.
  8. Gu, X.-M.; Clemens, M.; Huang, T.-Z.; Li, L. The SCBiCG class of algorithms for complex symmetric linear systems with applications in several electromagnetic model problems. Comput. Phys. Commun. 2015, 191, 52–64.
  9. Broyden, C.G. A breakdown of the block CG method. Optim. Methods Softw. 1996, 7, 41–55.
  10. Ji, H.; Li, Y. A breakdown-free block conjugate gradient method. BIT Numer. Math. 2017, 57, 379–403.
  11. Van der Vorst, H.A. Iterative Krylov Methods for Large Linear Systems; Cambridge University Press: Cambridge, UK, 2003; pp. 173–178.
  12. Bayliss, A.; Goldstein, C.I.; Turkel, E. An iterative method for the Helmholtz equation. J. Comput. Phys. 1983, 49, 443–457.
  13. Meijerink, J.A.; Van der Vorst, H.A. An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix. Math. Comp. 1977, 31, 148–162.
  14. Bai, Z.; Day, D.; Demmel, J.; Dongarra, J. A Test Matrix Collection for Non-Hermitian Eigenvalue Problems; Technical Report CS-97-355; University of Tennessee: Knoxville, TN, USA, 1997.
Figure 1. Relative residual F-norm for matrices dwg961b and helmdate_N40 in Example 2.
Table 1. Main costs per cycle for the six algorithms.

          BCOCG   BCOCR   BFBCOCG   BFBCOCR   BFPBCOCG   BFPBCOCR
bdp         2       2        3         3          3          3
bmv         1       1        1         2          2          4
LE          2       2        2         2          2          2
bsaxpy      3       4        3         3          3          3
bSC         4       5        4         4          5          5
Table 2. The numerical results for the first type of matrices in Example 1.

Algorithm     cube1800                    parallelepipede             sphere2430
              CPU      Iter   TRR         CPU      Iter   TRR         CPU      Iter   TRR
BCOCG         †        †      †           †        †      †           †        †      †
BCOCR         4.4542   190    −10.2579    †        †      †           †        †      †
BCOCG_rq      †        †      †           †        †      †           †        †      †
BCOCR_rq      †        †      †           †        †      †           †        †      †
BFBCOCG       2.5071   161    −10.1050    3.5124   179    −10.2753    4.3642   172    −10.0783
BFBCOCR       4.4694   162    −10.3856    5.8695   164    −10.2095    8.8278   171    −10.1099
Table 3. The numerical results for the second type of matrices in Example 1.

Algorithm     helmdate_N40                helmdate_N80                helmdate_N160
              CPU      Iter   TRR         CPU      Iter   TRR         CPU       Iter   TRR
BCOCG         †        †      †           †        †      †           †         †      †
BCOCR         †        †      †           †        †      †           †         †      †
BCOCG_rq      †        †      †           †        †      †           †         †      †
BCOCR_rq      †        †      †           †        †      †           †         †      †
BFBCOCG       0.2599   155    −10.3166    2.0171   364    −10.0025    †         †      †
BFBCOCR       0.3132   173    −10.1085    1.9872   332    −10.3876    38.2284   876    −10.0528
Table 4. The numerical results for Example 2.

Algorithm     dwg961b                     helmdate_N40                helmdate_N80
              CPU      Iter   TRR         CPU      Iter   TRR         CPU      Iter   TRR
PBCOCG        †        †      †           †        †      †           †        †      †
PBCOCR        †        †      †           †        †      †           †        †      †
PBCOCG_rq     †        †      †           †        †      †           †        †      †
PBCOCR_rq     †        †      †           †        †      †           †        †      †
BFPBCOCG      1.7563   904    −10.0333    0.3238   91     −10.2684    †        †      †
BFPBCOCR      †        †      †           0.4824   91     −10.0138    3.5199   197    −10.1141
