Article

A New Block Structural Index Reduction Approach for Large-Scale Differential Algebraic Equations

Juan Tang and Yongsheng Rao
1 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China
2 Institute of Computing Science and Technology, Guangzhou University, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(11), 2057; https://doi.org/10.3390/math8112057
Submission received: 27 October 2020 / Revised: 14 November 2020 / Accepted: 15 November 2020 / Published: 18 November 2020
(This article belongs to the Section Mathematics and Computer Science)

Abstract

A new generation of universal tools and languages for modeling and simulating multi-physical-domain applications has emerged and become widely accepted; these tools automatically generate large-scale systems of differential algebraic equations (DAEs). Motivated by the characteristics of such DAE systems, namely large dimension, high index and block structure, we first propose a modified Pantelides' algorithm (MPA) for DAEs of any order based on the Σ matrix, in the spirit of Pryce's Σ-method. By introducing a vital parameter vector, a modified Pantelides' algorithm with parameters is then presented. It leads naturally to a block Pantelides' algorithm (BPA) which can immediately compute the crucial canonical offsets for whole (coupled) systems in block-triangular form. We illustrate these algorithms by some examples, and preliminary numerical experiments show that the time complexity of the BPA can be reduced by at least $O(\ell)$ (where $\ell$ is the number of diagonal blocks) compared with the MPA, which is largely consistent with the results of our analysis.

1. Introduction

Recent decades have seen tremendous progress in the field of multi-disciplinary simulation tools for continuous-time discrete-variable systems. A new generation of universal tools and languages for modeling and simulating multi-physical-domain applications, such as Modelica [1], has emerged and become widely accepted [2]; they are available in high-quality commercial and academic implementations and allow large-scale systems of differential algebraic equations (DAEs) to be generated automatically. The generated DAEs have several notable characteristics, such as large scale, high index and block structure, which are the major motivation of our work in this paper. In the context of the numerical solution of DAEs, it is well known that a direct numerical simulation without index reduction may not be possible or may give a poor result [3,4]. In brief, higher-index DAEs suffer from a rank deficiency in the subset of algebraic equations. The index of a DAE system is a key notion of the theory, measuring the distance from the given system with a singular Jacobian to a corresponding system of ordinary differential equations with a nonsingular Jacobian. There are various index concepts in the theory of DAEs. The one related to the structural analysis approach is the "structural index", defined by (4). For other indices, we refer the interested reader to [5,6]. High-index DAE systems usually need differentiation to reveal all of the system's constraints, which are crucial for determining consistent initial conditions. This procedure is called "index reduction" of DAEs.
Index reduction in the numerical solution of DAEs is an active area of research. In [7], Campbell and Gear gave a derivative-array method to reduce DAEs, which may not be applicable to large-scale nonlinear systems. Pantelides [8] constructed a graph-oriented methodology which gives a systematic way to reduce high-index first-order DAEs to lower index by selectively adding differentiated forms of the equations already present in the system. In [9], Mattsson and Söderlind used Pantelides' method as a preprocessing step, then differentiated equations in the DAE system with the aid of the information obtained by Pantelides' method and replaced some derivatives with dummy variables. The dummy-derivative method returns a sparse DAE system if the given system is sparse, so it can be applied to large-scale DAEs. Pryce [10] developed the structural analysis method (or Σ-method), which is proved to compute the same structural index as Pantelides' method and is a straightforward method for analyzing the structure of DAEs of any order. This approach is based on solving an assignment problem, which can be formulated as an integer linear programming problem. The idea was generalized to a class of partial differential algebraic equations by Wu et al. [11]. In [12,13], Pryce and Nedialkov et al. generalized the structural analysis method to DAE systems with coarse or fine block-triangular form (BTF), and showed that the difference between the global offsets of the signature matrix Σ and the local offsets of each diagonal sub-block in Σ with fine BTF is a constant. They compute the fine BTF for the system's numerical scheme via valid global offset vectors or the local offsets of each separated coarse block. Qin, Tang et al. [14] generalized the Σ-method for large-scale DAE systems. In addition, there are other index reduction methods for different DAEs [15,16,17,18,19,20,21]. We focus on a structural index reduction method that directly calculates the global canonical offsets of large-scale DAEs of any order, which are critical for an efficient solution scheme, by combining Pantelides' method with Pryce's Σ-method.
The paper is organized as follows. In the next section, we make the necessary preparations and briefly review Pryce's Σ-method. Section 3 presents a modified Pantelides' method derived from Pantelides' method based on the Σ matrix. In Section 4, we first introduce block triangular forms (BTF) for large-scale DAEs; based on our modified Pantelides' method with parameter, a block Pantelides' algorithm is then proposed to find the canonical offsets of DAE systems. We illustrate the performance of these algorithms via numerical experiments in Section 5. In the last section, some conclusions are drawn.

2. Preliminaries

In this section, we give a brief review of the main steps of Pryce's structural analysis method (Σ-method) [10], together with some remarks.
We consider an n-dimensional DAE system of the form
$$f = (f_1, f_2, \ldots, f_n) = 0,$$
where
$$f_i = f_i(t, D(x_j(t))), \quad 1 \le i, j \le n,$$
$x_1(t), x_2(t), \ldots, x_n(t)$ are scalar independent functional variables, $D(x_j(t))$ denotes derivatives of $x_j(t)$, and $t \in \mathbb{R}$.
Step 1. Build the $n \times n$ signature matrix $\Sigma = (\sigma_{ij})$ of the DAEs, where
$$\sigma_{ij} = \begin{cases} \text{highest order of derivative of } x_j \text{ in } f_i, & \text{if } x_j \text{ appears in } f_i,\\ -\infty, & \text{otherwise.} \end{cases}$$
Step 2. Solve an assignment problem to find a highest value transversal (HVT) $T$, which is a subset of the sparsity pattern $S$ with $n$ finite entries, containing exactly one element in each row and each column, such that $\sum_{(i,j)\in T} \sigma_{ij}$ is maximized and finite. The sparsity pattern $S$ of $\Sigma$ is defined as
$$S = \mathrm{sparse}(\Sigma) = \{(i,j) : \sigma_{ij} > -\infty\}. \qquad (1)$$
This can be formulated as a linear programming problem; the primal problem is
$$\max_{\xi}\; z = \sum_{(i,j)\in S} \sigma_{ij}\,\xi_{ij}, \quad \text{s.t.}\;\; \sum_{j:(i,j)\in S} \xi_{ij} = 1 \text{ for each } i,\;\; \sum_{i:(i,j)\in S} \xi_{ij} = 1 \text{ for each } j,\;\; \xi_{ij} \ge 0 \text{ for } (i,j)\in S. \qquad (2)$$
The problem is equivalent to finding a maximum-weight perfect matching in a bipartite graph whose incidence matrix is the signature matrix, and can be solved by the Kuhn-Munkres algorithm [22], whose time complexity is $O(n^3)$.
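To make Step 2 concrete, this assignment problem can be solved with an off-the-shelf routine. The following is a minimal sketch (not part of the original paper) using SciPy's linear_sum_assignment, a Kuhn-Munkres-type solver; the constant NEG_INF and the helper name highest_value_transversal are our own choices for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

NEG_INF = -10**6  # finite stand-in for the -infinity entries of the signature matrix

def highest_value_transversal(sigma):
    """Return a highest value transversal (HVT) of the signature matrix.

    sigma[i][j] is the highest order of derivative of x_j in f_i,
    or NEG_INF if x_j does not appear in f_i.
    """
    sigma = np.asarray(sigma, dtype=float)
    rows, cols = linear_sum_assignment(sigma, maximize=True)
    if any(sigma[i, j] <= NEG_INF for i, j in zip(rows, cols)):
        raise ValueError("structurally singular: no finite transversal exists")
    return [(int(i), int(j)) for i, j in zip(rows, cols)]

# Small usage example (0-based indices): the HVT here is {(0, 0), (1, 1)}.
print(highest_value_transversal([[1, 0], [NEG_INF, 2]]))
```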
Step 3. Determine the offsets of the problem, which are the vectors $c = (c_i)_{1\le i\le n}$ and $d = (d_j)_{1\le j\le n}$, the smallest ones such that $d_j - c_i \ge \sigma_{ij}$ for all $1 \le i, j \le n$, and $d_j - c_i = \sigma_{ij}$ when $(i,j) \in T$.
This problem can be formulated as the dual of (2) in the variables $c = (c_1, c_2, \ldots, c_n)$ and $d = (d_1, d_2, \ldots, d_n)$. The dual problem is defined as follows:
$$\min_{c,d}\; z = \sum_j d_j - \sum_i c_i, \quad \text{s.t.}\;\; d_j - c_i \ge \sigma_{ij} \text{ for all } (i,j),\;\; c_i \ge 0 \text{ for all } i. \qquad (3)$$
Step 4. Compute the system Jacobian matrix $\mathbf{J}$, given by
$$\mathbf{J}_{ij} = \begin{cases} \dfrac{\partial f_i}{\partial\,((d_j - c_i)\text{th derivative of } x_j)}, & \text{if this derivative is present in } f_i,\\[4pt] 0, & \text{otherwise.} \end{cases}$$
Step 5. Choose a consistent point. If $\mathbf{J}$ is nonsingular at that point, then the solution can be computed with Taylor series or numerical homotopy continuation techniques in a neighborhood of that point. Using the canonical offsets $c, d$ of Problem (3), the structural index is then defined as
$$\nu = \max_i c_i + \begin{cases} 0, & \text{if all } d_j > 0,\\ 1, & \text{if some } d_j = 0. \end{cases} \qquad (4)$$
In order to determine the crucial canonical offsets for the structural analysis of DAE systems using the fixed-point iteration algorithm (FPIA) [10], we first introduce some necessary definitions. Define a natural semi-ordering of vectors in $\mathbb{R}^n$: for $a, b \in \mathbb{R}^n$, $a \preceq b$ if $a_i \le b_i$ for each $i$; the canonical offsets are the smallest in the sense of the ordering $\preceq$. Given the $\Sigma$ of a DAE system and a corresponding transversal $T$, for $c = (c_i) \in \mathbb{R}^n$ we define a mapping
$$D(c) = (d_j), \quad \text{where } d_j = \max_i (\sigma_{ij} + c_i),$$
and for $d = (d_j) \in \mathbb{R}^n$ we define a mapping
$$C_T(d) = (c_i^*), \quad \text{where } c_i^* = d_j - \sigma_{ij},\; (i,j) \in T.$$
Furthermore, we define the composition mapping $\phi_T(c) = C_T(D(c))$ from $\mathbb{R}^n$ to $\mathbb{R}^n$. The fixed-point iteration algorithm is then introduced below.
Remark 1.
If the used transversal T is a HVT, Algorithm 1 finds the canonical offsets $c^*$ and $d^*$ of the DAE system in at most $\|c^*\|_1 + 1$ iterations, where $\|c^*\|_1 = \max_i \{c_i^*\}$.
Remark 2.
If the Σ matrix of the given DAE system in Problem (3) contains some transversal T, then the canonical dual-optimal pair $c^*$ and $d^*$ can be found in time $O(n^3 + \|c^*\|_1 \cdot n^2)$ via the fixed-point iteration algorithm.
Algorithm 1: Fixed-point Iteration Algorithm (FPIA)
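The pseudocode of Algorithm 1 appears as an image in the original article. The following is a minimal Python sketch of the fixed-point iteration as described above (the function name fpia and the NEG_INF encoding are ours): given the signature matrix and a HVT, it iterates c ← C_T(D(c)) until a fixed point is reached.

```python
import numpy as np

NEG_INF = -10**6  # finite stand-in for -infinity entries

def fpia(sigma, transversal, max_iter=10_000):
    """Fixed-point iteration for the canonical offsets c and d (a sketch of FPIA)."""
    sigma = np.asarray(sigma, dtype=float)
    n = sigma.shape[0]
    c = np.zeros(n)
    for _ in range(max_iter):
        d = (sigma + c[:, None]).max(axis=0)        # D(c): d_j = max_i (sigma_ij + c_i)
        c_new = np.empty(n)
        for i, j in transversal:                    # C_T(d): c_i = d_j - sigma_ij on the HVT
            c_new[i] = d[j] - sigma[i, j]
        if np.array_equal(c_new, c):                # fixed point phi_T(c) = c reached
            return c.astype(int), d.astype(int)
        c = c_new
    raise RuntimeError("no convergence; the given transversal may not be a HVT")

# Pendulum DAE of Example 1 below: offsets c = (0, 0, 2), d = (2, 2, 0).
sigma = [[2, NEG_INF, 0], [NEG_INF, 2, 0], [0, 0, NEG_INF]]
print(fpia(sigma, [(0, 0), (1, 2), (2, 1)]))
```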

3. A Modified Pantelides’ Method

In this section, we present a modified Pantelides' method to calculate the canonical offsets for the given signature matrix of a DAE system of any order. Some necessary definitions are given below.
Definition 1.
For $\Sigma = (\sigma_{ij})$ and $d = (d_1, d_2, \ldots, d_n) \in \mathbb{R}^n$, the Leading Derivative Set $L$ is defined as
$$L = \mathrm{LDS}(\Sigma, d) = \{(i,j) \mid \sigma_{ij} = d_j,\; i = 1, 2, \ldots, n,\; j = 1, 2, \ldots, n\},$$
and let $L_i$ denote the set of $j$ indices in the $i$th row:
$$L_i = \{\, j \mid (i,j) \in L \,\}.$$
If $I$ is a set of $i$-indices, define
$$L(I) = \bigcup_{i \in I} L_i,$$
that is, $L(I)$ represents the total set of leading derivatives that appear in the $I$ set of equations.
Definition 2
([8]). The set $I$ is structurally singular ($SS$) if $L(I)$ has fewer elements than $I$:
$$|L(I)| < |I|,$$
where $|\cdot|$ denotes the cardinality of a set; that is, the $I$ equations have too few leading derivatives. $I$ is minimal structurally singular ($MSS$) if it contains no proper $SS$ subset.
In order to locate the MSS subsets for a given DAE system, we review Algorithm 2 (see [8]) below.
Algorithm 2: SearchMSS (i, L ,MARK,MATCH,PATHFOUND)
If Algorithm 2 returns PATHFOUND = TRUE, there exists an augmenting path emanating from the i-th equation of the DAEs; otherwise, an MSS subset has been found. MARK is a vector over the functional variables of the DAEs: MARK(j) = 1 means that the j-th variable has been marked, otherwise it is unmarked. In addition, MATCH is a matching vector between the equations and the variables of the DAEs: MATCH(j) = 0 means that the j-th variable has not been matched, while MATCH(j) = i means that the j-th variable has been matched with the i-th equation.
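Algorithm 2 is likewise given as an image in the original. A minimal recursive sketch of the augmenting-path search it describes is shown below; it uses Python's None in place of MATCH(j) = 0 for an unmatched variable, and L[i] is the set L_i of leading-derivative variables of equation i. The function name search_mss is ours.

```python
def search_mss(i, L, mark, match):
    """Look for an augmenting path starting from equation i (a sketch of Algorithm 2).

    L[i]    : set of variable indices whose leading derivative appears in equation i
    mark[j] : 1 if variable j has been visited during this search, else 0
    match[j]: equation currently assigned to variable j, or None if unmatched
    Returns True (PATHFOUND) if an augmenting path was found.
    """
    for j in L[i]:                     # first try to match i with a free variable
        if match[j] is None:
            match[j] = i
            return True
    for j in L[i]:                     # otherwise try to re-route an assigned variable
        if not mark[j]:
            mark[j] = 1
            if search_mss(match[j], L, mark, match):
                match[j] = i
                return True
    return False                       # no augmenting path: an MSS subset is detected
```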
Remark 3.
If Algorithm 2 returns the flag PATHFOUND with value FALSE, then the set of equations $Fset = \{i\} \cup \{MATCH(j) \mid j \in Xset\}$ is MSS with respect to $Xset = \{\, j \mid MARK(j) = 1 \,\}$, and $|Fset| = |Xset| + 1$.
Based on Algorithm 2, a complete Algorithm 3 can be constructed below to determine the necessary differentiations of the system equations.
Algorithm 3: Modified Pantelides’ Algorithm (MPA)
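Algorithm 3 also appears as an image in the original. Building on the search_mss sketch above, the following Python sketch reflects the procedure described in the text and in Table 1: whenever no augmenting path exists, the equations of the detected MSS subset are differentiated (their offsets c_i increased by one), the offsets d and the leading-derivative sets are updated, and the search is repeated. The function name mpa is ours, and the sketch assumes a structurally nonsingular system.

```python
import numpy as np

NEG_INF = -10**6  # finite stand-in for -infinity entries

def mpa(sigma):
    """Modified Pantelides' algorithm (a sketch of Algorithm 3): offsets c, d and MATCH."""
    sigma = np.asarray(sigma, dtype=float)
    n = sigma.shape[0]
    c = np.zeros(n, dtype=int)
    match = [None] * n                                   # MATCH(j): equation assigned to variable j

    def offsets_and_lds(c):
        d = (sigma + c[:, None]).max(axis=0)             # d_j = max_i (sigma_ij + c_i)
        L = [set(np.flatnonzero(sigma[i] + c[i] == d)) for i in range(n)]
        return d, L

    d, L = offsets_and_lds(c)
    for i in range(n):
        while True:
            mark = [0] * n
            if search_mss(i, L, mark, match):
                break
            # MSS subset found: differentiate its equations, i.e. raise their offsets
            fset = {i} | {match[j] for j in range(n) if mark[j]}
            for k in fset:
                c[k] += 1
            d, L = offsets_and_lds(c)
    return c, d.astype(int), match

# Pendulum DAE of Example 1: c = (0, 0, 2) and d = (2, 2, 0); the HVT encoded in
# MATCH may differ from Table 1 depending on the order in which variables are tried.
print(mpa([[2, NEG_INF, 0], [NEG_INF, 2, 0], [0, 0, NEG_INF]])[:2])
```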
Lemma 1
([10]). For nonsingular DAE systems, Pantelides’ algorithm gives the same canonical offsets of the DAEs as fixed-point iteration algorithm.
Then we are able to give the following theorem.
Theorem 1.
For an n-dimensional nonsingular DAE system of any order, the modified Pantelides' algorithm gives the same canonical offsets as the fixed-point iteration algorithm, and the time complexity of the MPA is $O(n^3)$.
Proof of Theorem 1.
The proof of the theorem follows in a similar way from Lemma 1 in [10]. □
Remark 4.
By Theorem 1, the modified Pantelides' algorithm is not only able to calculate the canonical offsets for DAEs of any order, but also finds a HVT $T = \bigcup_j \{(MATCH(j), j)\}$ for the sparsity pattern $S$ in (1), similar to the Kuhn-Munkres algorithm.
Example 1.
The motion of a free pendulum in the Cartesian space can be described by a second-order system of DAEs f = ( f 1 , f 2 , f 3 ) = 0 ,
$$\begin{aligned} 0 &= f_1 = x''(t) - \lambda(t)\,x(t),\\ 0 &= f_2 = y''(t) - \lambda(t)\,y(t) + g,\\ 0 &= f_3 = x^2(t) + y^2(t) - L^2, \end{aligned}$$
where L , g > 0 are constants.
From the definition of Σ, labeled by equations and dependent variables, the signature matrix is
$$\Sigma = \begin{array}{c|ccc} & x & y & \lambda \\ \hline f_1 & 2 & & 0 \\ f_2 & & 2 & 0 \\ f_3 & 0 & 0 & \end{array}$$
where a blank denotes $-\infty$.
Now, we use Algorithm 3 to compute the canonical offsets of the DAEs; the computational process is shown in Table 1.
From Table 1, we obtain the canonical offsets $c = (0, 0, 2)$ and $d = (2, 2, 0)$, and also a HVT $\{(f_1, x), (f_3, y), (f_2, \lambda)\}$ via MATCH = (1, 3, 2).
From Step 4 in Section 2, the system Jacobian matrix $\mathbf{J}$ is
$$\mathbf{J} = \begin{pmatrix} \partial f_1/\partial x'' & 0 & \partial f_1/\partial\lambda \\ 0 & \partial f_2/\partial y'' & \partial f_2/\partial\lambda \\ \partial f_3/\partial x & \partial f_3/\partial y & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 & -x \\ 0 & 1 & -y \\ 2x & 2y & 0 \end{pmatrix},$$
and $\det \mathbf{J} = 2x^2 + 2y^2 = 2L^2 \neq 0$, which means that the modified Pantelides' algorithm succeeds.
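As a quick sanity check of this determinant, it can be reproduced symbolically; the following short sketch uses SymPy (the symbol names are our own):

```python
from sympy import symbols, Matrix

x, y = symbols('x y')
J = Matrix([[1, 0, -x],
            [0, 1, -y],
            [2*x, 2*y, 0]])
print(J.det())   # 2*x**2 + 2*y**2, i.e. 2*L**2 > 0 on the constraint x**2 + y**2 = L**2
```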

4. Structural Index Reduction Methods for DAEs with Block Structure

In this section, assuming that a given DAE system is structurally nonsingular, i.e., the Σ matrix of the system contains at least one transversal, we can obtain the block triangular form of the signature matrix M by permuting its rows and columns [23,24],
$$M = \begin{pmatrix} M_{1,1} & M_{1,2} & \cdots & M_{1,\ell} \\ & M_{2,2} & \cdots & M_{2,\ell} \\ & & \ddots & \vdots \\ & & & M_{\ell,\ell} \end{pmatrix},$$
where the blank entries of M are $-\infty$ and each diagonal block $M_{i,i}$ is square and irreducible.
The bipartite graph of M (or of the DAE) is the undirected graph whose 2n vertices are the n rows and n columns and which has an edge between row i and column j whenever $(i,j) \in S$. We say a structurally nonsingular DAE system is a coupled system if its bipartite graph is connected. In other words, a coupled system does not contain separated subsystems; otherwise, the structural analysis could be carried out in parallel over each subsystem directly and the global canonical offsets of the whole system assembled naturally from the local canonical offsets of the subsystems.
Next, we propose a block Pantelides' method for the Σ matrix of a coupled system with BTF. Its main idea is to use the modified Pantelides' method with parameter, introduced below, to process each diagonal block of the block upper-triangular signature matrix from top to bottom in sequence.

4.1. Modified Pantelides’ Method with Parameter

For any given nonnegative parameter vector $\mathbf{p}$, we present a modified Pantelides' algorithm with parameter (PMPA) derived directly from the modified Pantelides' algorithm, and obtain the following lemma from Theorem 1.
Lemma 2.
For a given nonnegative parameter vector $\mathbf{p}$ and a nonsingular DAE system, the modified Pantelides' algorithm with parameter finds the canonical offsets c and d, which satisfy $d \succeq \mathbf{p}$. The time complexity of the PMPA is $O(n^3)$.
It is noted that if $p_j \le \max_i \sigma_{ij}$ for all j, then $d = \max\{D(0), \mathbf{p}\} = D(0)$, which means that Algorithm 4 reduces to Algorithm 3.
Example 2.
Consider the DAE system from Example 1 and assume the parameter $\mathbf{p} = (0, 3, 0)$; the signature matrix, with $\mathbf{p}$ displayed above the columns, is
$$\Sigma = \begin{array}{c|ccc} \mathbf{p} & 0 & 3 & 0 \\ & x & y & \lambda \\ \hline f_1 & 2 & & 0 \\ f_2 & & 2 & 0 \\ f_3 & 0 & 0 & \end{array}$$
where a blank denotes $-\infty$.
Similarly, we use Algorithm 4 to compute the canonical offsets of the DAEs; the computational process is shown in Table 2.
From Table 2, we obtain the canonical offsets $c = (1, 1, 3)$ and $d = (3, 3, 1)$, which satisfy $d = (3, 3, 1) \succeq \mathbf{p} = (0, 3, 0)$.
Algorithm 4: Modified Pantelides’ Algorithm with Parameter (PMPA)
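Algorithm 4 is shown as an image in the original. Relative to the mpa sketch above, the only change suggested by the text is that the offsets d are kept elementwise no smaller than the given nonnegative parameter vector p, i.e. d = max{D(c), p}; the following sketch (the function name pmpa is ours, and it reuses the search_mss sketch above) implements exactly this change and reproduces Example 2.

```python
import numpy as np

def pmpa(sigma, p):
    """Modified Pantelides' algorithm with parameter (a sketch of Algorithm 4)."""
    sigma = np.asarray(sigma, dtype=float)
    p = np.asarray(p, dtype=float)
    n = sigma.shape[0]
    c = np.zeros(n, dtype=int)
    match = [None] * n

    def offsets_and_lds(c):
        d = np.maximum((sigma + c[:, None]).max(axis=0), p)   # d = max{D(c), p}
        L = [set(np.flatnonzero(sigma[i] + c[i] == d)) for i in range(n)]
        return d, L

    d, L = offsets_and_lds(c)
    for i in range(n):
        while True:
            mark = [0] * n
            if search_mss(i, L, mark, match):
                break
            fset = {i} | {match[j] for j in range(n) if mark[j]}
            for k in fset:
                c[k] += 1
            d, L = offsets_and_lds(c)
    return c, d.astype(int), match

# Example 2: p = (0, 3, 0) yields c = (1, 1, 3) and d = (3, 3, 1).
sigma = [[2, -10**6, 0], [-10**6, 2, 0], [0, 0, -10**6]]
print(pmpa(sigma, [0, 3, 0])[:2])
```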

4.2. Block Pantelides’ Method for DAEs with BTF

Since a coupled DAE system is structurally nonsingular, we can obtain the signature matrix M with BTF (13), where $\sum_{i=1}^{\ell} n_i = n$ and $n_i$ is the order of $M_{ii}$.
We first give some necessary symbols and definitions from [25]. Assume the parameter vector is $\mathbf{p} = (p_1, p_2, \ldots, p_\ell)$ with $\ell$ sections, and the offsets are $c = (c_1, c_2, \ldots, c_\ell)$ and $d = (d_1, d_2, \ldots, d_\ell)$, where the dimension of $p_i$, $c_i$ and $d_i$ is $n_i$ for $i = 1, 2, \ldots, \ell$. For any $n \times r$ matrices $B$ and $B'$, $n$-order vectors $q$, $\bar{q}$ and $\hat{q}$, and $r$-order vector $w$, the mapping $B' = \mathrm{RowAdd}(B, q)$ is defined by $B'_{i,j} = B_{i,j} + q_i$ for $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, r$; the mapping $w = \mathrm{ColMax}(B)$ is defined by $w_j = \max_{i \in \{1, 2, \ldots, n\}} B_{i,j}$ for $j = 1, 2, \ldots, r$.
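A minimal NumPy sketch of these two mappings (the helper names row_add and col_max are ours):

```python
import numpy as np

def row_add(B, q):
    """RowAdd(B, q): B'_{i,j} = B_{i,j} + q_i (add q_i to every entry of row i)."""
    return np.asarray(B, dtype=float) + np.asarray(q, dtype=float)[:, None]

def col_max(B):
    """ColMax(B): w_j = max_i B_{i,j} (column-wise maxima)."""
    return np.asarray(B, dtype=float).max(axis=0)
```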
Then we are able to propose Block Pantelides’ algorithm (BPA) for DAEs with block triangular forms by using PMPA.
Lemma 3.
Assume that the DAE system is a coupled system, then the block Pantelides’ algorithm gives the same canonical offsets for the system as the modified Pantelides’ algorithm.
Proof of Lemma 3.
The proof of the lemma follows in a similar way from Lemma 3.5 in our earlier article [25]. □
Moreover, we can obtain the following theorem from Lemmas 2 and 3 naturally.
Theorem 2.
Assume that a DAE system is a coupled system with BTF (13); then the canonical offsets $c^*$ and $d^*$ of the system can be found in time $O(\sum_{i=1}^{\ell} n_i^3)$ via the block Pantelides' algorithm. Furthermore, if $n_i = r$ for each i, i.e., $n = \ell \cdot r$, then the time is $O(\frac{1}{\ell^2} \cdot n^3)$.
Proof of Theorem 2.
Combining Remark 2 and Lemma 1 with Lemma 3, the block Pantelides' algorithm obtains the canonical offsets $c^*$ and $d^*$ of the system.
Now, we consider the running time of the algorithm. The time of step 4 in the algorithm is $O(n_1^3)$ by Lemma 2. For $i = 2 : \ell$, the running time of step 6 is $O\big((n - \sum_{h=1}^{i-1} n_h) \times n_{i-1}\big)$, the time of steps 7 and 8 is $O\big(\sum_{v=1}^{i-1} n_v \times n_i\big)$, and the running time of step 9 is $O(n_i^3)$ by Lemma 2. Thus, the elapsed time of the algorithm is
$$O\Big( \sum_{i=1}^{\ell} n_i^3 + \sum_{i=2}^{\ell} \big(n - \sum_{h=1}^{i-1} n_h\big) \times n_{i-1} + \sum_{i=2}^{\ell} \sum_{v=1}^{i-1} n_v \times n_i \Big).$$
It is noted that
$$\Big( \sum_{i=1}^{\ell} n_i \Big)^2 \le \ell \Big( \sum_{i=1}^{\ell} n_i^2 \Big) \le \ell \sum_{i=1}^{\ell} n_i^3 .$$
Then we have
$$\sum_{i=2}^{\ell} \big(n - \sum_{h=1}^{i-1} n_h\big) \times n_{i-1} < \Big( \sum_{i=1}^{\ell} n_i \Big)^2 \le \ell \sum_{i=1}^{\ell} n_i^3$$
and
$$\sum_{i=2}^{\ell} \sum_{v=1}^{i-1} n_v \times n_i < \Big( \sum_{i=1}^{\ell} n_i \Big)^2 \le \ell \sum_{i=1}^{\ell} n_i^3 .$$
Therefore, the time complexity of the algorithm is $O(\sum_{i=1}^{\ell} n_i^3)$. Moreover, if $n_i = r$ for each i, then the time is $O(\frac{1}{\ell^2} \cdot n^3)$. □
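For equally sized blocks the bound specializes in one line: if $n_i = r$ for each $i$ and $n = \ell \cdot r$, then

```latex
\sum_{i=1}^{\ell} n_i^{3} \;=\; \ell\, r^{3} \;=\; \ell \left(\frac{n}{\ell}\right)^{3} \;=\; \frac{n^{3}}{\ell^{2}} .
```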
Example 3.
Consider a 6 dimension DAEs system f = ( f 1 , f 2 , f 3 , f 4 , f 5 , f 6 ) = 0 as follows,
$$\begin{aligned} 0 &= f_1 = x_1''(t) - x_3(t)\,x_1(t),\\ 0 &= f_2 = x_2''(t) - x_3(t)\,x_2(t) + g,\\ 0 &= f_3 = x_1^2(t) + x_2^2(t) + x_5'(t) - L_1^2,\\ 0 &= f_4 = x_4''(t) - x_6(t)\,x_4(t),\\ 0 &= f_5 = x_5''(t) - x_6(t)\,x_5(t) + g,\\ 0 &= f_6 = x_4^2(t) + x_5^2(t) - L_2^2, \end{aligned}$$
where L 1 , L 2 , g > 0 are constants.
Firstly, we obtain the signature matrix Σ with BTF (19) and use Algorithm 5 to compute the canonical offsets $c = (c_1, c_2)$ and $d = (d_1, d_2)$ of the DAEs, where
$$\Sigma = \begin{array}{c|cccccc} & x_1 & x_2 & x_3 & x_4 & x_5 & x_6 \\ \hline f_1 & 2 & & 0 & & & \\ f_2 & & 2 & 0 & & & \\ f_3 & 0 & 0 & & & 1 & \\ f_4 & & & & 2 & & 0 \\ f_5 & & & & & 2 & 0 \\ f_6 & & & & 0 & 0 & \end{array} = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ & \Sigma_{22} \end{pmatrix},$$
where a blank denotes $-\infty$.
For $i = 1$, $\mathbf{p}_1 = \mathbf{0}$, so $(c_1, d_1) = \mathrm{PMPA}(\Sigma_{11}, \mathbf{0}) = \mathrm{MPA}(\Sigma_{11})$; from Example 1 we obtain $c_1 = (0, 0, 2)$ and $d_1 = (2, 2, 0)$. For $i = 2$, $\mathbf{p}_2 = \max\{\mathrm{ColMax}(\mathrm{RowAdd}(\Sigma_{12}, c_1)), \mathbf{0}\} = (0, 3, 0)$, so $(c_2, d_2) = \mathrm{PMPA}(\Sigma_{22}, \mathbf{p}_2)$; from Example 2 we get $c_2 = (1, 1, 3)$ and $d_2 = (3, 3, 1)$. Therefore, we obtain the whole canonical offsets $c = (0, 0, 2, 1, 1, 3)$ and $d = (2, 2, 0, 3, 3, 1)$ for the DAEs.
Algorithm 5: Block Pantelides’ Algorithm (BPA)
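Algorithm 5 is given as an image in the original. Combining the pmpa, row_add and col_max sketches above, the block procedure described in the text can be sketched as follows (the function name bpa and the block-list layout are ours); applied to the blocks of Example 3 it reproduces the offsets computed above.

```python
import numpy as np

NEG_INF = -10**6

def bpa(blocks):
    """Block Pantelides' algorithm (a sketch of Algorithm 5).

    blocks[k][i] is the sub-matrix Sigma_{k+1,i+1} of the block upper-triangular
    signature matrix, or None if that sub-block is entirely -infinity.
    """
    l = len(blocks)
    cs, ds = [], []
    for i in range(l):
        ni = np.asarray(blocks[i][i]).shape[0]
        p = np.zeros(ni)
        for k in range(i):                       # constraints induced by the blocks already processed
            if blocks[k][i] is not None:
                p = np.maximum(p, col_max(row_add(blocks[k][i], cs[k])))
        c_i, d_i, _ = pmpa(blocks[i][i], p)
        cs.append(c_i)
        ds.append(d_i)
    return np.concatenate(cs), np.concatenate(ds)

# Example 3: two pendulum-type blocks coupled through x5'(t) in f3.
S11 = [[2, NEG_INF, 0], [NEG_INF, 2, 0], [0, 0, NEG_INF]]
S12 = [[NEG_INF] * 3, [NEG_INF] * 3, [NEG_INF, 1, NEG_INF]]
print(bpa([[S11, S12], [None, S11]]))   # c = (0, 0, 2, 1, 1, 3), d = (2, 2, 0, 3, 3, 1)
```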

5. Numerical Experimentation

In this section, we carry out numerical experiments on the running times of the structural analysis algorithms for DAE systems of different scales.
Large-scale or sparse systems produced by multi-domain unified modeling techniques always contain several (coupled) subsystems. Considering the low complexity of block triangularization algorithms [23], we assume in our experiments that the signature matrices M of the coupled systems have already been permuted into the BTF (13) and that the order of each diagonal block $M_{ii}$ of the $n \times n$ matrix M is r, that is, $n = r \cdot \ell$. It is worth noting that the entries of M are the differential orders in the DAEs. We can randomly generate the sparse M for coupled systems with BTF simply and properly as follows (a small generator sketch is given after the list):
  • for each element in $M_{1,1}$, randomly select an integer from {0, 1, 2, 3} with probabilities {0.7, 0.1, 0.1, 0.1}, respectively; and set $M_{i,i} = M_{1,1}$, $i = 2, 3, \ldots, \ell$;
  • for each element in $M_{1,2}$, randomly select an integer from {−1000, 0, 1, 2} with probabilities {0.9, 0.05, 0.025, 0.025}, respectively; and set $M_{i,i+1} = M_{1,2}$, $i = 2, 3, \ldots, \ell - 1$;
  • the remaining elements of M are −1000 (meaning $-\infty$).
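A small generator sketch for this scheme (the helper name random_btf_signature is ours; a draw may occasionally be structurally singular and would then be rejected in practice):

```python
import numpy as np

NEG_INF = -1000   # the experiments encode -infinity as -1000

def random_btf_signature(r, l, seed=None):
    """Randomly generate an n x n signature matrix with BTF, n = r * l."""
    rng = np.random.default_rng(seed)
    n = r * l
    M = np.full((n, n), NEG_INF)
    diag = rng.choice([0, 1, 2, 3], size=(r, r), p=[0.7, 0.1, 0.1, 0.1])
    offdiag = rng.choice([NEG_INF, 0, 1, 2], size=(r, r), p=[0.9, 0.05, 0.025, 0.025])
    for i in range(l):
        M[i*r:(i+1)*r, i*r:(i+1)*r] = diag              # M_{i,i} = M_{1,1}
        if i + 1 < l:
            M[i*r:(i+1)*r, (i+1)*r:(i+2)*r] = offdiag   # M_{i,i+1} = M_{1,2}
    return M

M = random_btf_signature(r=10, l=5, seed=0)
print(M.shape)   # (50, 50)
```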
All codes are written in Matlab 2016a under Windows 10 and run on a personal computer with an Intel(R) Core(TM) i5-3570 CPU @ 3.40 GHz, 4.00 GB RAM and a 64-bit operating system. We carry out the corresponding random trials for the FPIA, the extended signature matrix method (ESMM [14]), the MPA and the BPA with r ∈ {10, 20, 40} and n = 800:200:2400, and fit the constants in $\mu \cdot n^{\nu}$ using the standard least-squares method. The running times of these algorithms are shown in Figure 1; some ratios of the running times are given in Figure 2.
As we can observe in Figure 1, the running times of the BPA for the large-scale DAE systems with different r are empirically between $O(n)$ and $O(n^2)$, the FPIA behaves more like $O(n^{2.5})$, and the ESMM is between $O(n)$ and $O(n^{1.5})$. However, the MPA behaves more like $O(n^3)$, which is time consuming because of its recursive operations. It is worth noting that the experimental results for the MPA are consistent with Theorem 1.
Observing Figure 2, we also see that the ESMM may reduce the running time on large systems for fixed r by nearly $O(n)$ compared to the FPIA, while the BPA can reduce its running time by at least $O(n)$ (i.e., $O(\ell)$) compared to the MPA, which is, by and large, consistent with Theorem 2.
In addition, for a fixed n, the running times of the FPIA and the ESMM are essentially unaffected by r, while the times of the MPA and the BPA decrease as r increases. This suggests that the MPA and the BPA take advantage of the intrinsic structure of the DAEs.

6. Conclusions

In this article, we first propose a modified Pantelides' method for DAEs of any order based on the Σ matrix, which is similar to Pryce's Σ-method and can also find a highest value transversal. By introducing a vital parameter vector, a modified Pantelides' method with parameter has been presented. It leads naturally to a block Pantelides' method, which can be applied to immediately calculate the crucial canonical offsets for large-scale (coupled) systems of DAEs with block-triangular form. The main idea of the block Pantelides' method is to use the modified Pantelides' method with parameter to address each diagonal block of the block upper-triangular signature matrix from top to bottom in sequence. In addition, some examples illustrate the proposed methods clearly, and preliminary numerical experiments show that the time complexity of the block Pantelides' algorithm can be reduced by at least $O(\ell)$ compared to the modified Pantelides' algorithm, where $\ell$ is the number of diagonal blocks.
As the examples in [16,26,27] show, structural index reduction methods, including ours, ignore numerical information and may therefore fail on some DAEs by producing a singular Jacobian. Some researchers have proposed different index reduction methods [16,17,19,20] to address this problem for special DAEs, which we will consider in the future. Compared with these index reduction methods, our method is able to address a fairly wide class of large-dimension DAE systems precisely and efficiently. In the future, the proposed approach will also be applied to explore some large-scale fractional DAEs [28] or partial DAEs.

Author Contributions

J.T. contributed to supervision, methodology, validation and project administration. Y.R. contributed some computations. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the National Key R&D Program of China (Nos. 2018YFB1005100, 2018YFB1005104), and the Science and Technology Program of Guangzhou, China (No. 202002030138).

Acknowledgments

The authors wish to thank the referee for his or her very helpful comments and useful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fritzson, P. Principles of Object-Oriented Modeling and Simulation with Modelica 3.3: A Cyber-Physical Approach; Wiley-IEEE Press: Piscataway, NJ, USA, 2015.
  2. Elsheikh, A.; Wiechert, W. The structural index of sensitivity equation systems. Math. Comput. Model. Dyn. Syst. 2018, 24, 573–592.
  3. Gear, C.W.; Petzold, L.R. ODE methods for the solution of differential/algebraic systems. SIAM J. Numer. Anal. 1983, 21, 716–728.
  4. Petzold, L.R. Differential/algebraic equations are not ODEs. SIAM J. Sci. Stat. Comput. 1982, 3, 367–384.
  5. Lamour, R.; Tischendorf, C.; März, R. Differential-Algebraic Equations: A Projector Based Analysis; Springer: Berlin/Heidelberg, Germany, 2013.
  6. Rheinboldt, W.C. Differential-algebraic systems as differential equations on manifolds. Math. Comp. 1984, 43, 473–482.
  7. Campbell, S.L.; Gear, C.W. The index of general nonlinear DAEs. Numer. Math. 1995, 72, 173–196.
  8. Pantelides, C.C. The consistent initialization of differential-algebraic systems. SIAM J. Sci. Stat. Comput. 1988, 9, 213–231.
  9. Mattsson, S.E.; Söderlind, G. Index reduction in differential-algebraic equations using dummy derivatives. SIAM J. Sci. Comput. 1993, 14, 677–692.
  10. Pryce, J.D. A simple structural analysis method for DAEs. BIT 2001, 41, 364–394.
  11. Wu, W.Y.; Reid, G.; Ilie, S. Implicit Riquier bases for PDAE and their semi-discretizations. J. Symb. Comput. 2009, 44, 923–941.
  12. Nedialkov, N.S.; Pryce, J.D.; Tan, G.N. Algorithm 948: DAESA—A Matlab tool for structural analysis of DAEs: Software. ACM Trans. Math. Softw. 2015, 41, 12.
  13. Pryce, J.D.; Nedialkov, N.S.; Tan, G.N. DAESA—A Matlab tool for structural analysis of DAEs: Theory. ACM Trans. Math. Softw. 2015, 41, 9.
  14. Qin, X.L.; Tang, J.; Feng, Y.; Bachmann, B.; Fritzson, P. Efficient index reduction algorithm for large scale systems of differential algebraic equations. Appl. Math. Comput. 2016, 277, 10–22.
  15. Takamatsu, M.; Iwata, S. Index reduction for differential-algebraic equations by substitution method. Linear Algebra Appl. 2008, 429, 2268–2277.
  16. Iwata, S.; Oki, T.; Takamatsu, M. Index reduction for differential-algebraic equations with mixed matrices. J. ACM 2019, 66, 35.
  17. Chowdhry, S.; Krendl, H.; Linninger, A.A. Symbolic numeric index analysis algorithm for differential algebraic equations. Ind. Eng. Chem. Res. 2004, 43, 3886–3894.
  18. Iwata, S.; Takamatsu, M. Index reduction via unimodular transformations. SIAM J. Matrix Anal. Appl. 2018, 39, 1135–1151.
  19. Oki, T. Improved structural methods for nonlinear differential-algebraic equations via combinatorial relaxation. In Proceedings of the 44th International Symposium on Symbolic and Algebraic Computation (ISSAC '19); ACM: New York, NY, USA, 2019.
  20. Tan, G.; Nedialkov, N.S.; Pryce, J.D. Conversion methods for improving structural analysis of differential-algebraic equation systems. BIT Numer. Math. 2017, 57, 845–865.
  21. Qin, X.L.; Yang, L.; Feng, Y.; Bachmann, B.; Fritzson, P. Index reduction of differential algebraic equations by differential Dixon resultant. Appl. Math. Comput. 2018, 328, 189–202.
  22. Schrijver, A. Combinatorial Optimization: Polyhedra and Efficiency; Springer: Berlin/Heidelberg, Germany, 2004.
  23. Pothen, A.; Fan, C. Computing the block triangular form of a sparse matrix. ACM Trans. Math. Softw. 1990, 16, 303–324.
  24. Duff, I.S.; Uçar, B. On the block triangular form of symmetric matrices. SIAM Rev. 2010, 53, 455–470.
  25. Tang, J.; Wu, W.Y.; Qin, X.L.; Feng, Y. Structural analysis methods for differential algebraic equations via fixed-point iteration. J. Comput. Theor. Nanosci. 2016, 13, 7705–7711.
  26. Campbell, S.L.; Griepentrog, E. Solvability of general differential algebraic equations. SIAM J. Sci. Comput. 1995, 16, 257–270.
  27. Lamour, R.; März, R. Detecting structures in differential algebraic equations: Computational aspects. J. Comput. Appl. Math. 2012, 236, 4055–4066.
  28. Tai, Y.P.; Chen, N.; Wang, L.J.; Feng, Z.Y.; Xu, J. A numerical method for a system of fractional differential-algebraic equations based on sliding mode control. Mathematics 2020, 8, 1134.
Figure 1. The running times of the structural index reduction methods versus n: (a) fixed-point iteration algorithm (FPIA); (b) modified Pantelides' algorithm (MPA); (c) extended signature matrix method (ESMM); (d) block Pantelides' algorithm (BPA). The constants in the legends are the fitted values ν in $\mu n^{\nu}$ for different r.
Figure 2. (a) The ratio of the FPIA's running time to the ESMM's running time versus n. (b) The ratio of the MPA's running time to the BPA's running time versus n. The constants in the legends are the fitted values ν in $\mu n^{\nu}$ for different r.
Table 1. Process for computing the canonical offsets via Algorithm 3.
i | c | d | L | MARK | MATCH | PATHFOUND | Xset | Fset
1 | (0,0,0) | (2,2,0) | {(1,1),(1,3),(2,2),(2,3)} | (1,0,0) | (1,0,0) | TRUE | — | —
2 | (0,0,0) | (2,2,0) | {(1,1),(1,3),(2,2),(2,3)} | (0,1,0) | (1,2,0) | TRUE | — | —
3(1) | (0,0,0) | (2,2,0) | {(1,1),(1,3),(2,2),(2,3)} | (0,0,0) | (1,2,0) | FALSE | ∅ | {3}
3(2) | (0,0,1) | (2,2,0) | {(1,1),(1,3),(2,2),(2,3)} | (0,0,0) | (1,2,0) | FALSE | ∅ | {3}
3(3) | (0,0,2) | (2,2,0) | {(1,1),(1,3),(2,2),(2,3),(3,1),(3,2)} | (0,1,1) | (1,3,2) | TRUE | — | —
Table 2. Process for computing the canonical offsets via Algorithm 4.
i | c | d | L | MARK | MATCH | PATHFOUND | Xset | Fset
1 | (0,0,0) | (2,3,0) | {(1,1),(1,3),(2,3)} | (1,0,0) | (1,0,0) | TRUE | — | —
2 | (0,0,0) | (2,3,0) | {(1,1),(1,3),(2,3)} | (0,0,1) | (1,0,2) | TRUE | — | —
3(1) | (0,0,0) | (2,3,0) | {(1,1),(1,3),(2,3)} | (0,0,0) | (1,0,2) | FALSE | ∅ | {3}
3(2) | (0,0,1) | (2,3,0) | {(1,1),(1,3),(2,3)} | (0,0,0) | (1,0,2) | FALSE | ∅ | {3}
3(3) | (0,0,2) | (2,3,0) | {(1,1),(1,3),(2,3),(3,1)} | (1,0,1) | (1,0,2) | FALSE | {1,3} | {1,2,3}
3(4) | (1,1,3) | (3,3,1) | {(1,1),(1,3),(2,2),(2,3),(3,1),(3,2)} | (0,1,0) | (1,3,2) | TRUE | — | —
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

