Article

Two Preconditioners for Time-Harmonic Eddy-Current Optimal Control Problems

Department of Mathematics, College of Sciences, Northeastern University, Shenyang 100098, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 375; https://doi.org/10.3390/math12030375
Submission received: 21 November 2023 / Revised: 19 January 2024 / Accepted: 22 January 2024 / Published: 24 January 2024

Abstract
In this paper, we consider the numerical solution of a large complex linear system in saddle-point form obtained from the discretization of the time-harmonic eddy-current optimal control problem. A new Schur complement is proposed for this algebraic system and is extended to both a block-triangular preconditioner and a structured preconditioner. A theoretical analysis proves that the eigenvalues of the block-triangular and structured preconditioned matrices are located in the interval [1/2, 1]. Numerical simulations show that the two new preconditioners, combined with Krylov subspace acceleration, are feasible and effective and are superior to some existing efficient algorithms.

1. Introduction

The mathematical theory of optimal control has rapidly evolved over the last few decades into an important and distinct field of applied mathematics, as shown in [1]. J. L. Lions was the first to propose partial differential equation (PDE)-constrained optimization [2,3]. Eddy-current problems are a special kind of electromagnetic field problem that appear in computational electromagnetism when at least one of the electromagnetic fields varies very slowly in time. They are an application of Maxwell's equations and arise in many practical situations, such as the numerical modeling of transformers, relays and electric motors [1,4].
We focus on solution methods for the complex-valued linear system obtained after the discretization of the time-harmonic eddy-current optimal control problem by the finite element method (FEM). At present, the main approaches for solving large-scale linear systems are iterative methods combined with preconditioning. In 2001, Golub et al. extended the SOR iterative method to the generalized saddle-point problem and proposed the SOR-like method [5]. In 2002, Benzi et al. extended the HSS iterative method to the saddle-point problem [6,7]. Since then, numerous iterative methods based on the HSS iteration have been developed [8,9,10,11,12].
Because the coefficient matrix of the discrete linear equations has a special structure, the efficiency of the Krylov subspace method can be improved by constructing efficient preconditioners. In 2013, Krendl et al. [13] proposed a block-diagonal preconditioner to accelerate the MINRES method. In 2016, Zheng and Zhang et al. [14] introduced a parameter on this basis and proposed a generalized block-diagonal preconditioner. In 2018, Liang et al. [15] proposed an efficient structured preconditioner to accelerate the GMRES method. In 2021, Liang et al. [16] proposed an exact complex decomposition technique of the Schur complement to accelerate the GMRES method [17,18,19,20,21,22,23,24,25,26,27,28,29]. In 2022, Luo et al. [30] proposed two new block preconditioners P_DE and P_VDE for solving saddle-point problems. In 2023, Luo et al. [31] used the block preconditioner P_DE for solving general block two-by-two linear systems by expanding the dimensions of the coefficient matrix.
We propose a new Schur complement based on [13,16] to obtain a new block-triangular preconditioner P_Tri and a new structured preconditioner P_Str, and we theoretically prove the eigenvalue distribution interval of the corresponding preconditioned matrices. Numerical simulation results show that P_Tri and P_Str exhibit stable numerical performance. Compared with the block-diagonal preconditioner P_BD and the algorithm EI-GMRES, the two newly proposed preconditioners reduce both the iteration time and the number of iteration steps.
The paper is structured as follows. We introduce the background of the problem in Section 1. Some details and the discretization process of eddy-current problems are introduced in Section 2. We summarize the existing preconditioners and propose two new preconditioners P_Tri and P_Str in Section 3. The algorithm of P_Tri is derived, and the eigenvalue interval of its preconditioned matrix is proved, in Section 4. The algorithm of P_Str is derived, and the eigenvalue interval of its preconditioned matrix is proved, in Section 5. Numerical results for the preconditioners P_Tri and P_Str are presented and discussed in Section 6. We summarize the work and look to the future in Section 7.

2. Eddy Current Problem

The problem originates from the magneto-quasi-static model, which is an application of Maxwell's equations to slowly varying electromagnetic fields [17,18]. We consider the following linearized eddy-current problem: find a time-varying magnetic vector potential p(x, t) such that
σ ∂_t p(x, t) + curl(ν curl p(x, t)) = j in Q_T,  p(x, t) × n = 0 on Σ_T,  p(x, t) = p(x, 0) on Σ_0,
where Q_T = Ω × (0, T) is the space-time cylinder, Σ_T = Γ × (0, T) is its lateral surface, and Σ_0 = Γ × {0}. Ω is a bounded region in R^3 with a Lipschitz continuous boundary Γ, and n is the outer unit normal vector on Γ. j denotes the impressed current density, σ denotes the electrical conductivity, and ν denotes the magnetic reluctivity. The time-harmonic field is an important type of time-varying electromagnetic field, whose solution is a sine function of a single frequency. According to [19], the following problem is considered: find the state p(x, t) and the control u(x, t) that minimize the cost function
J(p, u) = (1/2) ∫_0^T ∫_Ω |p(x, t) − p_d(x, t)|² dx dt + (β/2) ∫_0^T ∫_Ω |u(x, t)|² dx dt,
subject to the time-harmonic problem
σ ∂_t p(x, t) + curl(ν curl p(x, t)) + ε p(x, t) = u(x, t) in Q_T,  p(x, t) × n = 0 on Σ_T,  p(x, 0) = p(x, T) in Ω,  u(x, 0) = u(x, T) in Ω,
where p_d(x, t) is the given target function, β > 0 is a regularization parameter, and ε > 0 is an extra regularization parameter (in some cases, ε = 0 can be chosen). The conductivity σ is a constant, and the magnetic reluctivity ν ∈ L^∞(Ω) is uniformly positive and independent of p(x, t).
In Ref. [20], the vector-potential formulation for the time-harmonic eddy-current optimal control problem is derived and justified. In alternating-current applications, the impressed current density varies harmonically in time. For the special case of a linear material law, we obtain a time-harmonic expression of the target state:
p_d(x, t) = Re(p̂_d(x) e^{iω̂t}),
with angular frequency ω̂ = 2πk/T, k ∈ Z; p̂_d(x) is a complex amplitude. For the original problems (2) and (3), the time-harmonic solutions are
p(x, t) = Re(p̂(x) e^{iω̂t}) and u(x, t) = Re(û(x) e^{iω̂t}),
where p ^ ( x ) and u ^ ( x ) are the solutions of the optimal control problems:
min_{p̂, û} (1/2) ∫_Ω |p̂(x) − p̂_d(x)|² dx + (β/2) ∫_Ω |û(x)|² dx,
subject to
iω̂σ p̂(x) + curl(ν curl p̂(x)) + ε p̂(x) = û(x) in Ω,  p̂(x) × n = 0 on Γ.
Therefore, the problem moves from the time domain to the frequency domain, and the time-varying problem becomes a complex-valued time-independent problem.
Since the operators in the problem are Hermitian, the same coefficient matrix is obtained by either of two approaches: discretize-then-optimize or optimize-then-discretize. To solve problems (6) and (7), we choose the former. The FEM is used to discretize the problem, and the following finite-dimensional optimal control problem is obtained:
min_{p, u} (1/2)(p − p_d)^* M (p − p_d) + (β/2) u^* M u,
subject to
iσω̂ M p + K p − M u = 0,
where M = [M_ij] is the mass matrix and K = [K_ij] is the stiffness matrix, with
M_ij = ∫_Ω φ_i · φ_j dx and K_ij = ∫_Ω (ν curl φ_i · curl φ_j + ε φ_i · φ_j) dx,
(·)^* denotes the conjugate transpose, and p_d, p and u are the discrete forms of p̂_d(x), p̂(x) and û(x), respectively. The φ_i (1 ≤ i ≤ N) are the linear Nédélec-I edge basis functions [21,22]. This constrained optimization problem is solved by constructing the Lagrangian function
L(p, u, z) = (1/2)(p − p_d)^* M (p − p_d) + (β/2) u^* M u + z^*(iσω̂ M p + K p − M u),
where z represents the Lagrange multiplier. To obtain stationarity, the following first-order necessary optimality condition must be satisfied:
∇L(p, u, z) = 0.
The linear system is derived:
[M, 0, K − iωM; 0, βM, −M; K + iωM, −M, 0] [p; u; z] = [M p_d; 0; 0],
where ω = σω̂. We simplify the system by eliminating variables. The second block row of Equation (9) gives z = βu; eliminating the Lagrange multiplier z, we obtain the system of equations:
[M, β(K − iωM); K + iωM, −M] [p; u] = [M p_d; 0].
By scaling ũ = √β u or ū = −√β u, Equation (10) is simplified to
A_1 [p; ũ] = [M p_d; 0], with A_1 = [M, √β(K − iωM); √β(K + iωM), −M],
or
A_2 [p; ū] = [M p_d; 0], with A_2 = [M, −√β(K − iωM); √β(K + iωM), M].
The matrix A_1 in linear system (11) is Hermitian, and the matrix A_2 in linear system (12) is positive definite (its Hermitian part is positive definite). Depending on the properties of the coefficient matrix, different Krylov subspace solvers, such as the MINRES and GMRES methods, are used [23,24].
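These structural properties are easy to check numerically. The following minimal NumPy sketch builds A_1 and A_2 from random symmetric positive-definite stand-ins for M and K (the matrices, sizes and parameter values here are illustrative assumptions, not the paper's discretization) and verifies that A_1 is Hermitian while the Hermitian part of A_2 is positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def spd(n):
    # random symmetric positive-definite stand-in for a mass/stiffness matrix
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-2, 2.0
sb = np.sqrt(beta)

# A1 = [M, sqrt(beta)(K - i*omega*M); sqrt(beta)(K + i*omega*M), -M]
A1 = np.block([[M + 0j,               sb * (K - 1j * omega * M)],
               [sb * (K + 1j * omega * M), -M + 0j]])
# A2 = [M, -sqrt(beta)(K - i*omega*M); sqrt(beta)(K + i*omega*M), M]
A2 = np.block([[M + 0j,              -sb * (K - 1j * omega * M)],
               [sb * (K + 1j * omega * M),  M + 0j]])

print(np.allclose(A1, A1.conj().T))       # A1 equals its conjugate transpose
H = (A2 + A2.conj().T) / 2                # Hermitian part of A2 (= blkdiag(M, M))
print(np.all(np.linalg.eigvalsh(H) > 0))  # all eigenvalues positive
```

The Hermitian part of A_2 collapses to blkdiag(M, M), which explains its positive definiteness.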

3. The Preconditioners

Since the Krylov subspace method converges slowly when applied to large, sparse complex linear systems, a good preconditioner is required to achieve fast convergence. Since the coefficient matrix A_1 is Hermitian, and it is more convenient to design efficient preconditioning techniques for it than for the discretized linear system (12), we solve the discretized linear system (11) in this paper. For the discrete linear equations of the time-harmonic eddy-current optimal control problem, many papers have proposed and studied various preconditioners [13,14,15,16,25].
In 2013, Krendl et al. [13] designed the block-diagonal preconditioner for the Hermitian matrix A 1
P_BD = [M + √β(K + ωM), 0; 0, M + √β(K + ωM)].
In 2016, based on the approximate Schur complement of the coefficient matrix A_1 [26],
S_1 = (√(1 + ω²β) M + √β K) M^{-1} (√(1 + ω²β) M + √β K),
Zheng et al. [14], on the basis of [13], proposed a generalized block-diagonal preconditioner
P_BD1 = [α√(1 + ω²β) M + √β K, 0; 0, α√(1 + ω²β) M + √β K],
and applied the approximate Schur complement (14) to a block-triangular preconditioner
P_Tri1 = [M, 0; √β(K + iωM), −S_1].
In 2018, Liang et al. [15] applied the approximate Schur complement (14) to the block two-by-two preconditioner and proposed the highly active structured preconditioner
P_Str2 = [M, −√β(K − iωM); √β(K + iωM), M + (2√β/√(1 + βω²)) K].
In Ref. [15], it is proved that the eigenvalues of the structured preconditioned matrix P_Str2^{-1} A_2 and of the block-triangular preconditioned matrix P_Tri1^{-1} A_1 have the same distribution interval [1/2, 1]. In 2021, Liang et al. [16] proposed an exact Schur complement form
S_2 = (√(1 + ω²β) M − i√β K) M^{-1} (√(1 + ω²β) M + i√β K),
which is inspired by [27]. The exact Schur complement (18) is used to accelerate the GMRES method to obtain the algorithm EI-GMRES.
Using the exact decomposition of (18), the article gives a practical expression for the inverse of the matrix A 1 :
A_1^{-1} = [β_ω H_1^{-1} + β_ω H_2^{-1} − β_ω² H_2^{-1} M H_1^{-1}, −i(I − β_ω H_2^{-1} M) H_1^{-1}; i H_2^{-1}(I − β_ω M H_1^{-1}), −H_2^{-1} M H_1^{-1}],
where
H_1 = √(1 + ω²β) M − i√β K,  H_2 = √(1 + ω²β) M + i√β K
and
β_ω = √(1 + ω²β) − ω√β.
Secondly, matrix (19) is applied to the preconditioned Krylov subspace method, and the following linear system needs to be solved:
A_1 [p; ũ] = [M p_d; 0]  ⟹  [p; ũ] = A_1^{-1} [M p_d; 0].
Combining Equations (19) and (22), the following algorithm is obtained.
According to Algorithm 1, the solution of the complex-valued linear system (11) is equivalently transformed into the solution of two complex-valued linear systems with coefficient matrices H_1 and H_2. For Steps 1 and 2, the article [16] first uses the C-to-R method to transform them into real-valued linear systems, and then applies the corresponding preconditioned square block (PRESB)-type preconditioners, i.e.,
P_1 = [√(1 + ω²β) M, √β K; −√β K, √(1 + ω²β) M + 2√β K]
and
P_2 = [√(1 + ω²β) M, −√β K; √β K, √(1 + ω²β) M + 2√β K].
Algorithm 1 Computation of [p, ũ]^H of system (11)
  • Solve (√(1 + ω²β) M − i√β K) h = β_ω M p_d;
  • Solve (√(1 + ω²β) M + i√β K) ũ = i M (p_d − h);
  • Compute p = h − iβ_ω ũ.
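Algorithm 1 can be checked end to end. In the sketch below (random SPD stand-ins for M and K, direct solves in place of the inner iterations; all sizes and parameter values are illustrative assumptions), the computed pair [p; ũ] is verified to solve system (11):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def spd(n):
    # random symmetric positive-definite stand-in
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-2, 2.0
sb, c = np.sqrt(beta), np.sqrt(1 + omega**2 * beta)
beta_w = c - omega * sb                      # beta_omega = sqrt(1+omega^2*beta) - omega*sqrt(beta)

H1 = c * M - 1j * sb * K
H2 = c * M + 1j * sb * K
p_d = rng.standard_normal(n)

# Algorithm 1 (direct solves in place of inner iterations)
h = np.linalg.solve(H1, beta_w * (M @ p_d))  # Step 1
u = np.linalg.solve(H2, 1j * M @ (p_d - h))  # Step 2
p = h - 1j * beta_w * u                      # Step 3

# verify A1 [p; u] = [M p_d; 0]
A1 = np.block([[M + 0j, sb * (K - 1j * omega * M)],
               [sb * (K + 1j * omega * M), -M + 0j]])
x = np.concatenate([p, u])
b = np.concatenate([M @ p_d, np.zeros(n)])
print(np.allclose(A1 @ x, b))
```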
This leads to Algorithms 2 and 3 below:
Algorithm 2 Computation of z from P_1 z = r with z = [z_1, z_2]^T, r = [r_1, r_2]^T
  • Solve (√(1 + ω²β) M + √β K) h = r_1 − r_2;
  • Solve (√(1 + ω²β) M + √β K) z_2 = r_2 + √β K h;
  • Compute z_1 = h + z_2.
Algorithm 3 Computation of z from P_2 z = r with z = [z_1, z_2]^T, r = [r_1, r_2]^T:
  • Solve (√(1 + ω²β) M + √β K) h = r_1 + r_2;
  • Solve (√(1 + ω²β) M + √β K) z_2 = r_2 − √β K h;
  • Compute z_1 = h − z_2.
Algorithms 2 and 3 implement Steps 1 and 2 of Algorithm 1, respectively. The preconditioners P_1 and P_2 are such that the eigenvalues of the preconditioned matrices lie in the interval [1/2, 1].
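To illustrate how Algorithm 2 realizes a PRESB-type solve with two SPD systems, the snippet below assumes P_1 = [A, B; −B, A + 2B] with A = √(1 + ω²β)M and B = √β K (this block form is a reconstruction, and the matrices and parameter values are illustrative) and verifies P_1 z = r:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def spd(n):
    # random symmetric positive-definite stand-in
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-2, 2.0
A = np.sqrt(1 + omega**2 * beta) * M   # sqrt(1+omega^2*beta) * M
B = np.sqrt(beta) * K                  # sqrt(beta) * K

r1, r2 = rng.standard_normal(n), rng.standard_normal(n)

# Algorithm 2: two solves with the SPD matrix A + B
h = np.linalg.solve(A + B, r1 - r2)
z2 = np.linalg.solve(A + B, r2 + B @ h)
z1 = h + z2

# verify against the assumed PRESB form P1 = [[A, B], [-B, A + 2B]]
P1 = np.block([[A, B], [-B, A + 2 * B]])
print(np.allclose(P1 @ np.concatenate([z1, z2]), np.concatenate([r1, r2])))
```

Each application of P_1 thus costs two solves with the same real SPD matrix A + B, which is what makes the PRESB approach attractive.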
The advantage of algorithm EI-GMRES is that it retains the true Schur complement form and increases the accuracy of the calculation. However, there are two shortcomings in the exact Schur complement (18). The first is that the exact decomposition causes the first and third factors to be different, which increases the amount of computation. The second is that the exact decomposition introduces complex numbers, which increases computational complexity.
This paper focuses on how to overcome the shortcomings of the EI-GMRES algorithm. According to [13,14], the matrices (1 + ω√β)M + √β K and √(1 + ω²β) M + √β K have the same structure and differ only in the coefficient of M. Note that
lim_{β→0} (1 + ω√β)/√(1 + ω²β) = 1,
which means that the two matrices are almost the same when the regularization parameter β approaches 0.
In this paper, we propose a new approximate Schur complement
S̃ = [(1 + ω√β)M + √β K] M^{-1} [(1 + ω√β)M + √β K],
based on the Schur complement (14), and extend it to a block-triangular preconditioner and a block two-by-two preconditioner. A new block-triangular preconditioner
P_Tri = [M, 0; √β(K + iωM), −S̃],
and a new structured preconditioner
P_Str = [M, √β(K − iωM); √β(K + iωM), −((1 + 2ω√β)M + 2√β(1 + ω√β)K)]
are proposed for the Hermitian matrix A_1. We prove that the eigenvalues of the preconditioned matrices P_Tri^{-1} A_1 and P_Str^{-1} A_1 both lie in the interval [1/2, 1] when β tends to 0. The block-triangular preconditioner P_Tri and the structured preconditioner P_Str proposed in this paper avoid both distinct decomposition factors and complex arithmetic in the inner solves, which improves computational efficiency.

4. Block-Triangular Preconditioner

In this section, the algorithm of the preconditioner P_Tri is presented through the expression of the inverse of the block-triangular preconditioner P_Tri, and the eigenvalue interval of the preconditioned matrix P_Tri^{-1} A_1 is proved.
The preconditioner P_Tri is applied within the preconditioned Krylov subspace method, where the linear system
P_Tri [x_1; x_2] = [r_1; r_2],  i.e.,  [x_1; x_2] = P_Tri^{-1} [r_1; r_2],
needs to be solved,
where r denotes the current residual vector, and x denotes the generalized residual vector.
Let
D = (1 + ω√β)M + √β K;
then, the approximate Schur complement (26) can be written as
S̃ = D M^{-1} D.
Thus, the exact inverse of P_Tri can be written in the form
P_Tri^{-1} = [M^{-1}, 0; (I − δ_1 D^{-1} M) D^{-1}, −D^{-1} M D^{-1}],
where δ_1 = (1 + ω√β) − iω√β. Based on (32), we can implement (29) by the following Algorithm 4.
Algorithm 4 Computing the solution x of P_Tri x = r with x = [x_1, x_2]^T and r = [r_1, r_2]^T
  • Let r = [M p_d, 0]^H;
  • Solve for h from [(1 + ω√β)M + √β K] h = [(1 + ω√β) − iω√β] r_1 + r_2;
  • Solve for x_2 from [(1 + ω√β)M + √β K] x_2 = r_1 − M h;
  • Solve for x_1 from M x_1 = r_1;
  • Set the iteration termination condition to a relative residual below 10^{-6}.
In the above steps, the equations to be solved in Steps 2–4 all have real symmetric positive-definite coefficient matrices. We chose the conjugate gradient (CG) method to solve them.
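Algorithm 4 amounts to two solves with the real SPD matrix D = (1 + ω√β)M + √β K and one with M. A minimal sketch (random SPD stand-ins for M and K, direct solves in place of CG; all data are illustrative assumptions) confirming that the computed x satisfies P_Tri x = r:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

def spd(n):
    # random symmetric positive-definite stand-in
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-2, 2.0
sb = np.sqrt(beta)
D = (1 + omega * sb) * M + sb * K
S_tilde = D @ np.linalg.solve(M, D)           # approximate Schur complement D M^{-1} D
delta1 = (1 + omega * sb) - 1j * omega * sb

r1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Algorithm 4, Steps 2-4 (direct solves in place of inner CG)
h = np.linalg.solve(D, delta1 * r1 + r2)
x2 = np.linalg.solve(D, r1 - M @ h)
x1 = np.linalg.solve(M, r1)

# verify P_Tri [x1; x2] = [r1; r2]
P = np.block([[M + 0j, np.zeros((n, n))],
              [sb * (K + 1j * omega * M), -S_tilde + 0j]])
print(np.allclose(P @ np.concatenate([x1, x2]), np.concatenate([r1, r2])))
```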
Next, we consider the eigenvalue expressions of the preconditioned matrix P_Tri^{-1} A_1.
Theorem 1. 
Let A_1 ∈ C^{2n×2n} be the coefficient matrix of linear system (11), and let P_Tri be the preconditioner defined in Equation (27). Then, the eigenvalues of the preconditioned matrix P_Tri^{-1} A_1 are 1 with algebraic multiplicity n, and the remaining eigenvalues are given by
λ_j = (1 + ω²β + β μ_j²)/(1 + ω√β + √β μ_j)², j = 1, 2, …, n,
where β > 0 is the regularization parameter, ω = σω̂ with ω̂ = 2πk/T is the frequency parameter, and the μ_j are the eigenvalues of M^{-1/2} K M^{-1/2}.
Proof of Theorem 1. 
Define H = blkdiag(M, M). Then, the preconditioned matrix P_Tri^{-1} A_1 is similar to the following matrix
H^{1/2} P_Tri^{-1} A_1 H^{-1/2} = (H^{-1/2} P_Tri H^{-1/2})^{-1} (H^{-1/2} A_1 H^{-1/2}) = [I, 0; √β(K̃ + iωI), −((1 + ω√β)I + √β K̃)²]^{-1} [I, √β(K̃ − iωI); √β(K̃ + iωI), −I],
with K̃ = M^{-1/2} K M^{-1/2}. Since M and K are symmetric positive-definite matrices, there exist a positive diagonal matrix Σ = diag(μ_1, μ_2, …, μ_n) and an orthogonal matrix Q such that K̃ = Q^T Σ Q. Thus, we have
H^{1/2} P_Tri^{-1} A_1 H^{-1/2} = [Q^T, 0; 0, Q^T] [I, √β(Σ − iωI); 0, ((1 + ω√β)I + √β Σ)^{-2}((1 + ω²β)I + β Σ²)] [Q, 0; 0, Q].
According to the characteristic determinant
det([(1 − λ)I, √β(Σ − iωI); 0, ((1 + ω√β)I + √β Σ)^{-2}((1 + ω²β)I + β Σ²) − λI]) = 0,
the following equations
(1 + ω²β + β μ_j²) − (1 + ω√β + √β μ_j)² λ = 0, j = 1, 2, …, n,
are obtained. The eigenvalues of the preconditioned matrix P_Tri^{-1} A_1 are therefore 1 or
λ_j = (1 + ω²β + β μ_j²)/(1 + ω√β + √β μ_j)², j = 1, 2, …, n.
 □
Theorem 2. 
Assume that M and K are both symmetric positive-definite matrices. When the regularization parameter β tends to 0, the eigenvalues of the preconditioned matrix P_Tri^{-1} A_1 are equal to 1 or given by Equation (33), and they lie in the interval
λ ∈ [1/2, 1].
Proof of Theorem 2. 
When the regularization parameter β tends to 0, according to Equations (25) and (33), we can obtain
λ_j = (1 + μ̂_j²)/(1 + μ̂_j)², with μ̂_j = √(β/(1 + ω²β)) μ_j > 0.
Therefore,
λ_j ∈ [1/2, 1].
In summary, the eigenvalues of the preconditioned matrix P_Tri^{-1} A_1 are found to lie in the interval [1/2, 1].  □
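Theorems 1 and 2 can be illustrated numerically. The sketch below (random SPD stand-ins for M and K and a small β, consistent with the β → 0 assumption; all data are illustrative) computes the spectrum of P_Tri^{-1} A_1 and checks that it is numerically real and contained in [1/2, 1]:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8

def spd(n):
    # random symmetric positive-definite stand-in
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-4, 2.0          # small beta, matching the beta -> 0 assumption
sb = np.sqrt(beta)

A1 = np.block([[M + 0j, sb * (K - 1j * omega * M)],
               [sb * (K + 1j * omega * M), -M + 0j]])
D = (1 + omega * sb) * M + sb * K
S_tilde = D @ np.linalg.solve(M, D)
P_tri = np.block([[M + 0j, np.zeros((n, n))],
                  [sb * (K + 1j * omega * M), -S_tilde + 0j]])

lam = np.linalg.eigvals(np.linalg.solve(P_tri, A1))
print(np.abs(lam.imag).max() < 1e-4)                    # spectrum is (numerically) real
print(lam.real.min() > 0.49, lam.real.max() < 1.01)     # contained in [1/2, 1]
```

Half of the eigenvalues are exactly 1; the remainder cluster near 1 for small β, in line with Equation (33).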

5. Structured Preconditioner

In this part, we decompose the structured preconditioner P_Str, derive the corresponding algorithm, and prove the eigenvalue distribution interval of the preconditioned matrix P_Str^{-1} A_1.
Firstly, the structured two-by-two preconditioner P_Str is applied within the preconditioned Krylov subspace method, and the following linear system needs to be solved:
P_Str [x_1; x_2] = [r_1; r_2],
where r denotes the current residual vector, and x denotes the generalized residual vector.
Secondly, based on [16] and Equation (28), the structured preconditioner P_Str can be decomposed into
P_Str = [M, 0; √β(K + iωM), −D] [M^{-1}, 0; 0, M^{-1}] [M, √β(K − iωM); 0, D],
with D = (1 + ω√β)M + √β K. According to Equations (35) and (36), the linear system to be solved becomes
[I, √β M^{-1}(K − iωM); 0, M^{-1} D] [x_1; x_2] = [M, 0; √β(K + iωM), −D]^{-1} [r_1; r_2],
thus obtaining the following two equations:
x_1 + √β M^{-1}(K − iωM) x_2 = M^{-1} r_1,
and
M^{-1} D x_2 = D^{-1} √β(K + iωM) M^{-1} r_1 − D^{-1} r_2.
Because √β K = D − (1 + ω√β)M, Equation (38) can be written as
D x_2 = r_1 − M h,
with h = D^{-1}(δ_1 r_1 + r_2) and δ_1 = (1 + ω√β) − iω√β. Similarly, Equation (37) can be written as
x_1 = h + δ_2 x_2,
where δ_2 = (1 + ω√β) + iω√β.
Finally, based on Equations (39) and (40), we obtain the following Algorithm 5 to implement (35).
Algorithm 5 Computing the solution x of P_Str x = r with x = [x_1, x_2]^T and r = [r_1, r_2]^T
  • Let r = [M p_d, 0]^H;
  • Solve for h from [(1 + ω√β)M + √β K] h = [(1 + ω√β) − iω√β] r_1 + r_2;
  • Solve for x_2 from [(1 + ω√β)M + √β K] x_2 = r_1 − M h;
  • Compute x_1 = h + [(1 + ω√β) + iω√β] x_2;
  • Set the iteration termination condition to a relative residual below 10^{-6}.
In the above execution steps, the equations to be solved in Steps 2 and 3 have real symmetric positive-definite coefficient matrices. We chose the conjugate gradient (CG) method to solve them. Step 4 only requires a direct computation and no additional solve.
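A minimal sketch of Algorithm 5 (random SPD stand-ins for M and K, direct solves in place of CG; the update x_1 = h + [(1 + ω√β) + iω√β] x_2 is the reconstruction used above, and all data are illustrative) verifying that the result satisfies P_Str x = r:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5

def spd(n):
    # random symmetric positive-definite stand-in
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-2, 2.0
sb = np.sqrt(beta)
D = (1 + omega * sb) * M + sb * K
delta1 = (1 + omega * sb) - 1j * omega * sb
delta2 = (1 + omega * sb) + 1j * omega * sb

r1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Algorithm 5, Steps 2-4 (direct solves in place of inner CG)
h = np.linalg.solve(D, delta1 * r1 + r2)
x2 = np.linalg.solve(D, r1 - M @ h)
x1 = h + delta2 * x2

# verify P_Str [x1; x2] = [r1; r2]
P = np.block([[M + 0j, sb * (K - 1j * omega * M)],
              [sb * (K + 1j * omega * M),
               -((1 + 2 * omega * sb) * M + 2 * sb * (1 + omega * sb) * K) + 0j]])
print(np.allclose(P @ np.concatenate([x1, x2]), np.concatenate([r1, r2])))
```

Note that, unlike Algorithm 4, no solve with M is needed: the first component is obtained by a direct update.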
Next, we consider the eigenvalue expressions of the preconditioned matrix P_Str^{-1} A_1.
Lemma 1. 
Suppose that M and K are symmetric positive-definite matrices. The true Schur complement of A_1 is S = (1 + βω²)M + β K M^{-1} K, and the eigenvalues τ of the matrix S̃^{-1} S satisfy
τ = (1 + σ²)/(1 + σ)² ∈ [1/2, 1],
where σ > 0 is an eigenvalue of K̂ = √γ M^{-1/2} K M^{-1/2} with γ = β/(1 + ω√β)².
Proof of Lemma 1. 
Assume that (τ, m) is an eigenpair of the matrix S̃^{-1} S; thus,
S m = τ S̃ m with m ≠ 0.
Then, it is evident that τ > 0, since S and S̃ are both symmetric positive-definite matrices. We obtain the equality
[(1 + βω²)M + β K M^{-1} K] m = τ [(1 + ω√β)² M + β K M^{-1} K + 2√β(1 + ω√β) K] m.
Dividing both sides of Equation (42) by (1 + ω√β)², and using the fact that (1 + βω²)/(1 + ω√β)² tends to 1 as β tends to 0, we obtain
(M + γ K M^{-1} K) m = τ (M + γ K M^{-1} K + 2√γ K) m,
with γ = β/(1 + ω√β)². Substituting m = M^{-1/2} m̂ into Equation (43) yields
M^{1/2}(I + K̂²) m̂ = τ M^{1/2}(I + K̂)² m̂,
where K̂ = √γ M^{-1/2} K M^{-1/2} is a symmetric positive-definite matrix and m̂ = M^{1/2} m. The eigenvalues τ of the matrix S̃^{-1} S can thus be obtained as
τ = (1 + σ²)/(1 + σ)²,
where σ > 0 is an eigenvalue of K̂. Therefore, the eigenvalue interval of the matrix S̃^{-1} S is
τ ∈ [1/2, 1].
 □
In the following, we prove the eigenvalue interval of the preconditioned matrix P_Str^{-1} A_1.
Theorem 3. 
Suppose that M and K are symmetric positive-definite matrices. When the regularization parameter β tends to 0, the eigenvalues of the preconditioned matrix P_Str^{-1} A_1 are equal to 1 or given by Equation (41), and they lie in the interval
λ ∈ [1/2, 1].
Proof of Theorem 3. 
Firstly, according to the decomposition form (36) of the structured preconditioner P_Str,
P_Str^{-1} = [I, √β M^{-1}(K − iωM); 0, I]^{-1} [M, 0; √β(K + iωM), −S̃]^{-1}
can be obtained, where S̃ is the approximate Schur complement (26). Then, the inverse of the preconditioner P_Str is
P_Str^{-1} = [M^{-1} − M^{-1} √β(K − iωM) S̃^{-1} √β(K + iωM) M^{-1}, M^{-1} √β(K − iωM) S̃^{-1}; S̃^{-1} √β(K + iωM) M^{-1}, −S̃^{-1}].
Let δ_1 = (1 + ω√β) − iω√β and δ_2 = (1 + ω√β) + iω√β; then, Equation (44) can be written in the following form
P_Str^{-1} = [(δ_1 + δ_2) D^{-1} − δ_1 δ_2 D^{-1} M D^{-1}, (I − δ_2 D^{-1} M) D^{-1}; D^{-1}(I − δ_1 M D^{-1}), −D^{-1} M D^{-1}],
with D = (1 + ω√β)M + √β K.
Secondly, the preconditioned matrix P_Str^{-1} A_1 is calculated. Denote
R_Str = [0, 0; 0, −(2ω√β M + 2√β(1 + ω√β) K)];
thus, we have
P_Str^{-1} A_1 = I − P_Str^{-1} R_Str = [I, (I − δ_2 D^{-1} M) D^{-1}(2ω√β M + 2√β(1 + ω√β) K); 0, I − D^{-1} M D^{-1}(2ω√β M + 2√β(1 + ω√β) K)],
where
I − D^{-1} M D^{-1}(2ω√β M + 2√β(1 + ω√β) K) = S̃^{-1}(S̃ − (2ω√β M + 2√β(1 + ω√β) K)) = S̃^{-1} S.
Finally, by Lemma 1, the eigenvalues of the preconditioned matrix P_Str^{-1} A_1 are either 1 or eigenvalues of S̃^{-1} S, and thus lie in the interval [1/2, 1].  □
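As with the block-triangular case, Theorem 3 can be illustrated numerically; the sketch below (random SPD stand-ins for M and K, small β; all data are illustrative assumptions) checks that the spectrum of P_Str^{-1} A_1 is numerically real and contained in [1/2, 1]:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8

def spd(n):
    # random symmetric positive-definite stand-in
    R = rng.standard_normal((n, n))
    return R @ R.T + n * np.eye(n)

M, K = spd(n), spd(n)
beta, omega = 1e-4, 2.0          # small beta, matching the beta -> 0 assumption
sb = np.sqrt(beta)

A1 = np.block([[M + 0j, sb * (K - 1j * omega * M)],
               [sb * (K + 1j * omega * M), -M + 0j]])
P_str = np.block([[M + 0j, sb * (K - 1j * omega * M)],
                  [sb * (K + 1j * omega * M),
                   -((1 + 2 * omega * sb) * M + 2 * sb * (1 + omega * sb) * K) + 0j]])

lam = np.linalg.eigvals(np.linalg.solve(P_str, A1))
print(np.abs(lam.imag).max() < 1e-4)                    # spectrum is (numerically) real
print(lam.real.min() > 0.49, lam.real.max() < 1.01)     # contained in [1/2, 1]
```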

6. Numerical Experiments

In this part, we solve linear system (11) using the new preconditioners P_Tri and P_Str and compare them with the preconditioner P_BD and the algorithm EI-GMRES. Table 1 lists the tested methods and the corresponding abbreviations.
Example 1. 
The problem of optimal control of the time-harmonic eddy-current Equations (2) and (3) was considered, with Ω = [0, 1]³ and σ = 1, ν = 1. The target state is of the form defined in Equation (2), with
p̂_d(x) = (0, 0, sin(πx_1) sin(πx_2))^T.
In the experiments, the CG method [28] was used for the inner iterations: the initial vector was x^(0) = 0, the maximum number of iteration steps was k_max = 1000, and the stopping tolerance was ‖b − A x^(k)‖_2/‖b‖_2 ≤ 10^{-6}, where x^(k) is the kth iterate. The outer iteration adopted the MINRES or GMRES method with the same initial vector, maximum number of iteration steps and stopping tolerance. Table 2 shows the relationship between the degree of mesh refinement and the orders of the matrices M and K.
In Refs. [21,22], two classes of linear Nédélec edge finite element spaces were proposed. Here, we discretized the equations using the first class of Nédélec edge finite element spaces. For the 3D problem, a tetrahedral mesh was used to discretize the state and control variables. The relevant matrices were constructed with the MATLAB package of [29]. All results were computed in MATLAB on an Intel Xeon Gold 6258R processor (38.5 MB cache, 2.70 GHz).
We used the number of iteration steps (denoted by "IT") and the CPU time in seconds (denoted by "CPU") to illustrate the performance of the algorithms. For the additional regularization parameter ε, we chose the two values 10^{-2} and 10^{-4}.
In Table 3 and Table 4, the iteration time and iteration steps of different tested methods are shown with a mesh refinement degree of one in 3D.
In Table 5 and Table 6, the iteration time and iteration steps of different tested methods are shown with a mesh refinement degree of two in 3D.
In Table 7 and Table 8, the iteration time and iteration steps of different tested methods are shown with a mesh refinement degree of three in 3D.
Based on Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8, we can draw the following conclusions:
  • With the increase in mesh refinement degree and matrix dimension, the preconditioners P_Tri and P_Str reduce the iteration steps and shorten the iteration time compared with the preconditioner P_BD and the EI-GMRES algorithm.
  • The algorithms are robust: the preconditioners P_Tri and P_Str retain good numerical performance as the parameters β and ω change.
  • As the parameter β decreases toward zero, the iteration time of the preconditioners P_Tri and P_Str also decreases, which shows that the assumption that β tends to zero is reasonable.
Finally, the eigenvalue distributions of P_Tri^{-1} A_1 and P_Str^{-1} A_1 are shown for the case of mesh refinement degree one and ε = 10^{-2} in 3D.
In Figure 1, the eigenvalue distribution of the preconditioned matrix P_Tri^{-1} A_1 with ε = 10^{-2} is displayed.
In Figure 2, the eigenvalue distribution of the preconditioned matrix P_Str^{-1} A_1 with ε = 10^{-2} is shown.
According to Figure 1 and Figure 2, the eigenvalues of the preconditioned matrices P_Tri^{-1} A_1 and P_Str^{-1} A_1 are closer to one for smaller β. This partly explains the shorter iteration times for smaller β in the numerical experiments. As seen from Figure 1 and Figure 2, all the eigenvalues of the preconditioned matrices are indeed located in the interval [1/2, 1], which is consistent with the spectral analyses in Section 4 and Section 5.

7. Conclusions

The purpose of this paper was to construct and analyze a block-triangular preconditioner and a structured preconditioner for the discrete linear system of the time-harmonic eddy-current optimal control problem. It was proved that the eigenvalues of the corresponding preconditioned matrices are all located in the interval [1/2, 1]. Numerical experiments showed that both newly proposed algorithms are robust with respect to the parameters involved in the problem and run faster than some existing algorithms.
In the future, one extension of our work is to apply the new block-triangular and structured preconditioners to other problems arising in practice, such as the optimal control problem involving the heat equation in [15]. We can further apply this method to other Hermitian operators, addressing problems such as those referenced in [32]. In addition, we can try to use the newly proposed preconditioners to solve more general algebraic problems with non-Hermitian matrices.

Author Contributions

Methodology, X.-H.S. and J.-R.D.; software, J.-R.D.; validation, J.-R.D.; formal analysis, X.-H.S. and J.-R.D.; investigation, X.-H.S.; resources, X.-H.S.; writing—original draft, X.-H.S. and J.-R.D.; writing—review and editing, X.-H.S. and J.-R.D.; project administration, X.-H.S.; funding acquisition, X.-H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tröltzsch, F. Optimal Control of Partial Differential Equations: Theory, Methods, and Applications; American Mathematical Society: Providence, RI, USA, 2010; Volume 112. [Google Scholar]
  2. Kolmbauer, M. The Multiharmonic Finite Element and Boundary Element Method for Simulation and Control of Eddy Current Problems. Ph.D. Thesis, Johannes Kepler University, Linz, Austria, 2012. [Google Scholar]
  3. Lions, J.-L. Optimal Control of Systems Governed by Partial Differential Equations; Springer: Berlin, Germany, 1971. [Google Scholar]
  4. Ida, N. Numerical Modeling for Electromagnetic Non-Destructive Evaluation; Engineering NDE; Chapman & Halls: London, UK, 1995. [Google Scholar]
  5. Golub, G.H.; Wu, X.; Yuan, J.-Y. SOR-like methods for augmented systems. Bit Numer. Math. 2001, 41, 71–85. [Google Scholar] [CrossRef]
  6. Bai, Z.-Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626. [Google Scholar] [CrossRef]
  7. Benzi, M.; Golub, G.H. An iterative method for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 2002, 1–28. [Google Scholar]
  8. Benzi, M. A generalization of the Hermitian and skew-Hermitian splitting iteration. SIAM J. Matrix Anal. Appl. 2009, 31, 360–374. [Google Scholar] [CrossRef]
  9. Jiang, M.-Q.; Cao, Y. On local Hermitian and skew-Hermitian splitting iteration methods for generalized saddle point problems. J. Comput. Appl. Math. 2009, 231, 973–982. [Google Scholar] [CrossRef]
  10. Yang, A.-L.; An, J.; Wu, Y.-J. A generalized preconditioned HSS method for non-Hermitian positive definite linear systems. Appl. Math. Comput. 2010, 216, 1715–1722. [Google Scholar] [CrossRef]
  11. Bai, Z.-Z. Block alternating splitting implicit iteration methods for saddle-point problems from time-harmonic eddy current models. Numer. Linear Algebra Appl. 2012, 19, 914–936. [Google Scholar] [CrossRef]
  12. Zheng, Z.; Zhang, G.-F.; Zhu, M.-Z. A block alternating splitting iteration method for a class of block two-by-two complex linear systems. J. Comput. Appl. Math. 2015, 288, 203–214. [Google Scholar] [CrossRef]
  13. Krendl, W.; Simoncini, V.; Zulehner, W. Stability estimates and structural spectral properties of saddle point problems. Numer. Math. 2013, 124, 183–213. [Google Scholar] [CrossRef]
  14. Zheng, Z.; Zhang, G.-F.; Zhu, M.-Z. A note on preconditioners for complex linear systems arising from PDE-constrained optimization problems. Appl. Math. Lett. 2016, 61, 114–121. [Google Scholar] [CrossRef]
  15. Liang, Z.-Z.; Axelsson, O.; Neytcheva, M. A robust structured preconditioner for time-harmonic parabolic optimal control problems. Numer. Algorithms 2018, 79, 575–596. [Google Scholar] [CrossRef]
  16. Liang, Z.-Z.; Axelsson, O. Exact inverse solution techniques for a class of complex valued block two-by-two linear systems. Numer. Algorithms 2022, 90, 79–98. [Google Scholar] [CrossRef]
  17. Buffa, A.; Ammari, H.; Nédélec, J.-C. A justification of eddy currents model for the Maxwell equations. SIAM J. Appl. Math. 2000, 60, 1805–1823. [Google Scholar] [CrossRef]
  18. Schmidt, K.; Sterz, O.; Hiptmair, R. Estimating the eddy-current modeling error. IEEE Trans. Magn. 2008, 44, 686–689. [Google Scholar] [CrossRef]
  19. Axelsson, O.; Lukáš, D. Preconditioning methods for eddy-current optimally controlled time-harmonic electromagnetic problems. J. Numer. Math. 2019, 27, 1–21. [Google Scholar] [CrossRef]
  20. Zaglmayr, S. High Order Finite Element Methods for Electromagnetic Field Computation. PhD Thesis, Universität Linz, Linz, Austria, 2006. [Google Scholar]
21. Nédélec, J.-C. Mixed finite elements in ℝ³. Numer. Math. 1980, 35, 315–341. [Google Scholar] [CrossRef]
22. Nédélec, J.-C. A new family of mixed finite elements in ℝ³. Numer. Math. 1986, 50, 57–81. [Google Scholar] [CrossRef]
  23. Paige, C.C.; Saunders, M.A. Solution of sparse indefinite systems of linear equations. SIAM J. Numer. Anal. 1975, 12, 617–629. [Google Scholar] [CrossRef]
  24. Saad, Y.; Schultz, M.H. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7, 856–869. [Google Scholar] [CrossRef]
  25. Axelsson, O.; Liang, Z.-Z. A note on preconditioning methods for time-periodic eddy current optimal control problems. J. Comput. Appl. Math. 2019, 352, 262–277. [Google Scholar] [CrossRef]
  26. Pearson, J.W.; Wathen, A.J. A new approximation of the Schur complement in preconditioners for PDE-constrained optimization. Numer. Linear Algebra Appl. 2012, 19, 816–829. [Google Scholar] [CrossRef]
  27. Choi, Y.; Farhat, C.; Murray, W.; Saunders, M. A practical factorization of a Schur complement for PDE-constrained distributed optimal control. J. Sci. Comput. 2015, 65, 576–597. [Google Scholar] [CrossRef]
  28. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
  29. Rahman, T.; Valdman, J. Fast MATLAB assembly of FEM matrices in 2D and 3D: Nodal elements. Appl. Math. Comput. 2013, 219, 7151–7158. [Google Scholar] [CrossRef]
  30. Luo, W.-H.; Gu, X.-M.; Carpentieri, B. A dimension expanded preconditioning technique for saddle point problems. BIT Numer. Math. 2022, 62, 1983–2004. [Google Scholar] [CrossRef]
  31. Luo, W.-H.; Carpentieri, B.; Guo, J. A dimension expanded preconditioning technique for block two-by-two linear equations. Demonstr. Math. 2023, 56, 20230260. [Google Scholar] [CrossRef]
  32. Gu, X. Efficient preconditioned iterative linear solvers for 3-D magnetostatic problems using edge elements. Adv. Appl. Math. Mech. 2020, 12, 301–318. [Google Scholar] [CrossRef]
Figure 1. Eigenvalue distribution of the preconditioned matrix P_Tri⁻¹A.
Figure 2. Eigenvalue distribution of the preconditioned matrix P_Str⁻¹A.
Table 1. Abbreviations of the methods tested.

| Abbreviation | Description |
|---|---|
| P_BD | MINRES with the block-diagonal preconditioner (13) |
| EI-GMRES | GMRES with the exact Schur complement (18) |
| P_Tri | GMRES with the block-triangular preconditioner (27) |
| P_Str | GMRES with the structured preconditioner (28) |
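As an illustration of how solver setups like those in Table 1 are assembled in practice, the following sketch runs preconditioned GMRES on a small block two-by-two system. It uses SciPy with surrogate matrices: the diagonal mass stand-in, the 1-D Laplacian stiffness stand-in, the sizes, and the block-diagonal preconditioner built from M + ωK are all illustrative assumptions, not the paper's edge-element matrices or its preconditioner (13); GMRES is used throughout for simplicity, even where the paper pairs a preconditioner with MINRES.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Surrogate matrices (hypothetical stand-ins, NOT the paper's FEM matrices):
# a scaled identity for the mass matrix M and a 1-D Laplacian for K.
n = 50
omega = 1.0
M = sp.eye(n, format="csc") * 2.0
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Block two-by-two system A z = b, loosely mirroring the real-equivalent
# form of the complex systems studied in the paper.
A = sp.bmat([[M, -omega * K], [omega * K, M]], format="csc")
b = np.ones(2 * n)

# A simple block-diagonal preconditioner blkdiag(M + omega*K, M + omega*K),
# applied through one sparse LU factorization per outer GMRES iteration.
lu = spla.splu((M + omega * K).tocsc())

def apply_prec(r):
    # Solve the two diagonal blocks independently.
    return np.concatenate([lu.solve(r[:n]), lu.solve(r[n:])])

M_op = spla.LinearOperator((2 * n, 2 * n), matvec=apply_prec)

x, info = spla.gmres(A, b, M=M_op)  # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Because the preconditioned operator is well conditioned here, GMRES converges in a handful of iterations; the real cost per iteration is dominated by the block solves, which is why the quality of the Schur-complement approximation drives the CPU times reported below.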
Table 2. Sizes of the matrices M and K for the three-dimensional problem.

| Level | Size of M and K |
|---|---|
| 1 | 1854 |
| 2 | 13,428 |
| 3 | 102,024 |
Table 3. IT and CPU of different methods with a mesh refinement degree of 1 (ε = 10⁻²). Each cell lists IT / CPU.

| Method | β | ω = 10⁻² | ω = 10⁻¹ | ω = 1 | ω = 10 | ω = 100 |
|---|---|---|---|---|---|---|
| P_BD | 10⁻² | 14 / 0.4314 | 14 / 0.3827 | 14 / 0.3755 | 16 / 0.4547 | 16 / 0.4343 |
| | 10⁻⁴ | 14 / 0.3775 | 14 / 0.3883 | 15 / 0.3949 | 16 / 0.4670 | 18 / 0.4868 |
| | 10⁻⁶ | 13 / 0.4047 | 14 / 0.4034 | 14 / 0.3722 | 14 / 0.4268 | 13 / 0.3587 |
| | 10⁻⁸ | 11 / 0.3078 | 11 / 0.3022 | 11 / 0.3339 | 11 / 0.3017 | 11 / 0.3040 |
| EI-GMRES | 10⁻² | 36 / 0.3453 | 37 / 0.3453 | 38 / 0.3527 | 47 / 0.3612 | 285 / 1.3114 |
| | 10⁻⁴ | 207 / 0.9442 | 208 / 0.9415 | 215 / 0.9721 | 229 / 1.0480 | 303 / 1.3246 |
| | 10⁻⁶ | 74 / 0.2320 | 75 / 0.2450 | 76 / 0.2413 | 79 / 0.2496 | 93 / 0.2838 |
| | 10⁻⁸ | 12 / 0.0692 | 12 / 0.0642 | 13 / 0.0670 | 14 / 0.0738 | 15 / 0.0739 |
| P_Tri | 10⁻² | 10 / 0.5884 | 10 / 0.5507 | 11 / 0.6189 | 13 / 0.5595 | 12 / 0.3076 |
| | 10⁻⁴ | 10 / 0.2818 | 11 / 0.3153 | 11 / 0.2926 | 12 / 0.2941 | 14 / 0.3090 |
| | 10⁻⁶ | 8 / 0.1839 | 8 / 0.1803 | 9 / 0.1877 | 9 / 0.1925 | 11 / 0.2012 |
| | 10⁻⁸ | 5 / 0.1467 | 5 / 0.1473 | 6 / 0.1575 | 6 / 0.1390 | 7 / 0.1590 |
| P_Str | 10⁻² | 8 / 0.4493 | 8 / 0.4496 | 8 / 0.4419 | 11 / 0.4304 | 11 / 0.2405 |
| | 10⁻⁴ | 9 / 0.2037 | 9 / 0.2275 | 9 / 0.2327 | 10 / 0.2191 | 12 / 0.2096 |
| | 10⁻⁶ | 10 / 0.1563 | 10 / 0.1533 | 11 / 0.1583 | 11 / 0.1689 | 11 / 0.1649 |
| | 10⁻⁸ | 7 / 0.1453 | 8 / 0.1354 | 9 / 0.1481 | 9 / 0.1441 | 9 / 0.1542 |
Table 4. IT and CPU of different methods with a mesh refinement degree of 1 (ε = 10⁻⁴). Each cell lists IT / CPU.

| Method | β | ω = 10⁻² | ω = 10⁻¹ | ω = 1 | ω = 10 | ω = 100 |
|---|---|---|---|---|---|---|
| P_BD | 10⁻² | 14 / 0.3497 | 14 / 0.4169 | 14 / 0.3526 | 16 / 0.3951 | 16 / 0.3993 |
| | 10⁻⁴ | 14 / 0.3560 | 14 / 0.3509 | 15 / 0.3880 | 16 / 0.3947 | 18 / 0.4448 |
| | 10⁻⁶ | 13 / 0.3915 | 14 / 0.3526 | 14 / 0.3522 | 14 / 0.3506 | 13 / 0.3291 |
| | 10⁻⁸ | 11 / 0.3318 | 11 / 0.2839 | 11 / 0.2800 | 11 / 0.2799 | 11 / 0.2781 |
| EI-GMRES | 10⁻² | 36 / 0.3523 | 37 / 0.3576 | 38 / 0.3654 | 47 / 0.3840 | 286 / 1.3653 |
| | 10⁻⁴ | 207 / 0.9619 | 208 / 0.9742 | 215 / 0.9972 | 229 / 1.1027 | 303 / 1.3806 |
| | 10⁻⁶ | 74 / 0.2520 | 75 / 0.2529 | 76 / 0.2549 | 79 / 0.2614 | 93 / 0.2999 |
| | 10⁻⁸ | 12 / 0.0699 | 12 / 0.0657 | 13 / 0.0723 | 14 / 0.0754 | 15 / 0.0775 |
| P_Tri | 10⁻² | 10 / 0.5825 | 10 / 0.4703 | 11 / 0.5512 | 13 / 0.5106 | 12 / 0.3455 |
| | 10⁻⁴ | 10 / 0.3009 | 11 / 0.3062 | 11 / 0.3188 | 12 / 0.3080 | 14 / 0.3229 |
| | 10⁻⁶ | 8 / 0.1915 | 8 / 0.1903 | 9 / 0.1975 | 9 / 0.1976 | 11 / 0.1738 |
| | 10⁻⁸ | 5 / 0.1384 | 5 / 0.1237 | 6 / 0.1611 | 6 / 0.1592 | 7 / 0.1440 |
| P_Str | 10⁻² | 8 / 0.4700 | 8 / 0.4793 | 8 / 0.4771 | 11 / 0.4741 | 11 / 0.2776 |
| | 10⁻⁴ | 9 / 0.2347 | 9 / 0.2299 | 9 / 0.2350 | 9 / 0.2369 | 12 / 0.2390 |
| | 10⁻⁶ | 10 / 0.1665 | 10 / 0.1658 | 11 / 0.1604 | 11 / 0.1799 | 11 / 0.1733 |
| | 10⁻⁸ | 7 / 0.1313 | 8 / 0.1565 | 9 / 0.1531 | 9 / 0.1678 | 9 / 0.1664 |
Table 5. IT and CPU of different methods with a mesh refinement degree of 2 (ε = 10⁻²). Each cell lists IT / CPU.

| Method | β | ω = 10⁻² | ω = 10⁻¹ | ω = 1 | ω = 10 | ω = 100 |
|---|---|---|---|---|---|---|
| P_BD | 10⁻² | 12 / 3.8865 | 12 / 4.0014 | 14 / 4.8745 | 16 / 5.3181 | 16 / 5.8238 |
| | 10⁻⁴ | 16 / 5.0045 | 16 / 5.2044 | 16 / 5.3404 | 16 / 5.3438 | 20 / 6.8909 |
| | 10⁻⁶ | 15 / 4.8586 | 15 / 4.9940 | 15 / 5.0187 | 14 / 4.7922 | 14 / 4.8349 |
| | 10⁻⁸ | 15 / 4.8220 | 15 / 5.1933 | 15 / 5.0183 | 15 / 5.1233 | 15 / 5.2811 |
| EI-GMRES | 10⁻² | 33 / 8.5808 | 34 / 9.4610 | 35 / 8.5315 | 44 / 9.6189 | 277 / 30.6406 |
| | 10⁻⁴ | 219 / 23.6554 | 220 / 26.0373 | 227 / 23.1903 | 222 / 23.7048 | 293 / 27.4971 |
| | 10⁻⁶ | 320 / 22.7231 | 321 / 19.7114 | 324 / 25.5421 | 346 / 22.8257 | 819 / 82.3304 |
| | 10⁻⁸ | 31 / 1.1882 | 31 / 1.0294 | 32 / 1.0578 | 34 / 1.3804 | 35 / 1.1845 |
| P_Tri | 10⁻² | 9 / 4.6557 | 10 / 5.2937 | 11 / 6.0840 | 13 / 5.0051 | 12 / 2.4285 |
| | 10⁻⁴ | 10 / 2.3134 | 11 / 2.4929 | 12 / 2.2728 | 12 / 2.6970 | 14 / 2.6304 |
| | 10⁻⁶ | 9 / 1.0883 | 9 / 1.1096 | 10 / 0.9939 | 10 / 0.9687 | 11 / 1.0648 |
| | 10⁻⁸ | 7 / 0.6851 | 8 / 0.6855 | 8 / 0.6482 | 8 / 0.8094 | 9 / 0.7477 |
| P_Str | 10⁻² | 8 / 4.1514 | 8 / 4.1970 | 8 / 4.1532 | 11 / 4.1784 | 11 / 2.1912 |
| | 10⁻⁴ | 9 / 1.7436 | 9 / 1.7654 | 9 / 1.8235 | 10 / 1.7920 | 12 / 1.5819 |
| | 10⁻⁶ | 10 / 0.9174 | 10 / 0.9010 | 11 / 0.9029 | 11 / 0.9861 | 11 / 0.9318 |
| | 10⁻⁸ | 9 / 0.5362 | 10 / 0.5933 | 10 / 0.6658 | 11 / 0.6811 | 11 / 0.7139 |
Table 6. IT and CPU of different methods with a mesh refinement degree of 2 (ε = 10⁻⁴). Each cell lists IT / CPU.

| Method | β | ω = 10⁻² | ω = 10⁻¹ | ω = 1 | ω = 10 | ω = 100 |
|---|---|---|---|---|---|---|
| P_BD | 10⁻² | 12 / 4.2890 | 12 / 4.5644 | 14 / 4.6339 | 16 / 5.5704 | 16 / 5.4510 |
| | 10⁻⁴ | 16 / 5.1801 | 16 / 5.3446 | 16 / 5.2663 | 16 / 5.4883 | 20 / 6.5576 |
| | 10⁻⁶ | 15 / 4.7688 | 15 / 5.3357 | 15 / 5.0471 | 14 / 4.8455 | 14 / 5.0261 |
| | 10⁻⁸ | 15 / 4.8160 | 15 / 5.2069 | 15 / 5.5848 | 15 / 4.9962 | 15 / 5.4429 |
| EI-GMRES | 10⁻² | 33 / 7.9145 | 34 / 11.8086 | 34 / 8.2787 | 44 / 9.7776 | 277 / 29.8383 |
| | 10⁻⁴ | 219 / 21.0878 | 220 / 21.8016 | 227 / 23.4309 | 222 / 22.0786 | 293 / 32.1779 |
| | 10⁻⁶ | 320 / 18.9017 | 321 / 18.3263 | 324 / 25.5726 | 346 / 20.8813 | 819 / 80.4379 |
| | 10⁻⁸ | 31 / 1.3566 | 31 / 1.4166 | 32 / 1.3717 | 34 / 1.4684 | 35 / 1.1625 |
| P_Tri | 10⁻² | 9 / 4.8221 | 10 / 4.9119 | 11 / 5.4390 | 13 / 5.3903 | 12 / 2.5764 |
| | 10⁻⁴ | 10 / 1.9896 | 11 / 2.4368 | 12 / 2.5581 | 12 / 2.6989 | 14 / 2.2606 |
| | 10⁻⁶ | 9 / 0.9322 | 9 / 0.9472 | 10 / 1.0677 | 10 / 1.1115 | 11 / 1.2455 |
| | 10⁻⁸ | 7 / 0.6920 | 8 / 0.7421 | 8 / 0.7528 | 8 / 0.7582 | 9 / 0.7885 |
| P_Str | 10⁻² | 8 / 4.2393 | 8 / 4.5281 | 8 / 4.3064 | 11 / 4.2991 | 11 / 2.0287 |
| | 10⁻⁴ | 9 / 1.5311 | 9 / 1.7552 | 9 / 1.7384 | 10 / 1.8320 | 12 / 1.5949 |
| | 10⁻⁶ | 10 / 0.8088 | 10 / 0.8456 | 11 / 0.9061 | 11 / 0.8727 | 11 / 0.8555 |
| | 10⁻⁸ | 9 / 0.5540 | 10 / 0.5974 | 10 / 0.6033 | 11 / 0.6705 | 11 / 0.7197 |
Table 7. IT and CPU of different methods with a mesh refinement degree of 3 (ε = 10⁻²). Each cell lists IT / CPU.

| Method | β | ω = 10⁻² | ω = 10⁻¹ | ω = 1 | ω = 10 | ω = 100 |
|---|---|---|---|---|---|---|
| P_BD | 10⁻² | 12 / 81.8281 | 16 / 105.4884 | 16 / 109.9203 | 16 / 105.9849 | 20 / 134.6659 |
| | 10⁻⁴ | 16 / 107.4442 | 16 / 105.4884 | 16 / 109.9203 | 16 / 105.9849 | 20 / 134.6659 |
| | 10⁻⁶ | 15 / 97.8574 | 15 / 100.7715 | 15 / 99.6838 | 15 / 98.3083 | 16 / 103.6864 |
| | 10⁻⁸ | 15 / 101.0590 | 15 / 97.1670 | 15 / 102.1359 | 15 / 96.2878 | 15 / 99.9818 |
| EI-GMRES | 10⁻² | 32 / 133.6483 | 33 / 139.4299 | 34 / 141.4927 | 41 / 134.3916 | 264 / 369.5007 |
| | 10⁻⁴ | 212 / 312.7996 | 213 / 294.7765 | 212 / 313.8426 | 207 / 297.1443 | 277 / 338.1885 |
| | 10⁻⁶ | 1004 / 1114.7164 | 1005 / 1050.4192 | 1006 / 1531.9104 | 1008 / 1019.2132 | 1009 / 1035.8507 |
| | 10⁻⁸ | 123 / 40.5372 | 123 / 38.3368 | 124 / 37.9307 | 125 / 39.3624 | 131 / 40.6898 |
| P_Tri | 10⁻² | 9 / 53.9215 | 10 / 59.0182 | 11 / 62.5487 | 13 / 57.8992 | 12 / 26.1145 |
| | 10⁻⁴ | 10 / 22.3395 | 11 / 24.1635 | 12 / 25.8925 | 12 / 25.1441 | 15 / 24.8410 |
| | 10⁻⁶ | 9 / 8.5960 | 10 / 9.4512 | 11 / 10.1539 | 11 / 10.0147 | 12 / 10.8725 |
| | 10⁻⁸ | 8 / 4.4818 | 8 / 4.3435 | 8 / 4.4833 | 9 / 4.8736 | 9 / 4.9703 |
| P_Str | 10⁻² | 8 / 53.3623 | 8 / 54.2410 | 8 / 53.8772 | 11 / 52.2347 | 11 / 24.5139 |
| | 10⁻⁴ | 9 / 19.7767 | 9 / 19.9295 | 9 / 20.6046 | 10 / 21.6593 | 12 / 18.5826 |
| | 10⁻⁶ | 10 / 8.5884 | 10 / 8.5899 | 11 / 9.4848 | 11 / 9.2194 | 11 / 9.4156 |
| | 10⁻⁸ | 9 / 3.5892 | 10 / 4.0119 | 10 / 3.9536 | 11 / 4.2778 | 11 / 4.3012 |
Table 8. IT and CPU of different methods with a mesh refinement degree of 3 (ε = 10⁻⁴). Each cell lists IT / CPU.

| Method | β | ω = 10⁻² | ω = 10⁻¹ | ω = 1 | ω = 10 | ω = 100 |
|---|---|---|---|---|---|---|
| P_BD | 10⁻² | 12 / 81.4266 | 13 / 86.6890 | 14 / 94.4300 | 18 / 116.0574 | 16 / 101.5482 |
| | 10⁻⁴ | 16 / 104.3243 | 16 / 106.3716 | 16 / 109.3637 | 16 / 109.1431 | 20 / 137.9688 |
| | 10⁻⁶ | 15 / 110.6812 | 15 / 104.7531 | 15 / 99.7118 | 15 / 98.5565 | 16 / 105.2505 |
| | 10⁻⁸ | 15 / 113.4792 | 15 / 110.8891 | 15 / 104.3281 | 15 / 110.8320 | 15 / 111.7145 |
| EI-GMRES | 10⁻² | 32 / 146.2052 | 33 / 140.7934 | 33 / 141.0543 | 41 / 133.7156 | 264 / 381.4948 |
| | 10⁻⁴ | 212 / 309.2196 | 213 / 309.0034 | 212 / 311.9646 | 207 / 289.9215 | 278 / 332.9411 |
| | 10⁻⁶ | 1004 / 1015.9542 | 1005 / 1006.3369 | 1006 / 1038.2084 | 1008 / 1012.9365 | 1009 / 1036.8108 |
| | 10⁻⁸ | 123 / 37.8935 | 123 / 40.9428 | 124 / 39.3744 | 125 / 44.0573 | 131 / 43.2647 |
| P_Tri | 10⁻² | 9 / 54.0646 | 10 / 59.5861 | 11 / 62.9363 | 13 / 57.6960 | 12 / 26.0438 |
| | 10⁻⁴ | 10 / 22.0596 | 11 / 23.8576 | 12 / 26.0655 | 12 / 25.7485 | 15 / 25.0688 |
| | 10⁻⁶ | 9 / 8.7380 | 10 / 9.4483 | 11 / 10.2397 | 11 / 10.2184 | 12 / 10.7271 |
| | 10⁻⁸ | 8 / 4.4882 | 8 / 4.4240 | 8 / 4.4115 | 9 / 4.9650 | 9 / 4.8197 |
| P_Str | 10⁻² | 8 / 53.7306 | 8 / 53.8112 | 8 / 53.8167 | 11 / 52.1213 | 11 / 23.9081 |
| | 10⁻⁴ | 9 / 19.4848 | 9 / 20.0403 | 9 / 20.1023 | 10 / 21.9683 | 12 / 19.1874 |
| | 10⁻⁶ | 10 / 8.6292 | 10 / 8.5744 | 11 / 9.3201 | 11 / 9.1705 | 11 / 8.8240 |
| | 10⁻⁸ | 9 / 3.5368 | 10 / 3.8886 | 10 / 4.2097 | 11 / 4.3156 | 11 / 4.4294 |
Shao, X.-H.; Dong, J.-R. Two Preconditioners for Time-Harmonic Eddy-Current Optimal Control Problems. Mathematics 2024, 12, 375. https://doi.org/10.3390/math12030375