Article

Performance Tests and Improvements on the rmcdhf and rci Programs of GRASP

1 Shanghai EBIT Lab, Key Laboratory of Nuclear Physics and Ion-Beam Application, Institute of Modern Physics, Department of Nuclear Science and Technology, Fudan University, Shanghai 200433, China
2 Department of Materials Science and Applied Mathematics, Malmö University, SE-20506 Malmö, Sweden
3 Department of Physics, University of Strathclyde, Glasgow G4 0NG, UK
4 Department of Physics and Anhui Key Laboratory of Optoelectric Materials Science and Technology, Key Laboratory of Functional Molecular Solids, Ministry of Education, Anhui Normal University, Wuhu 241000, China
5 Hebei Key Lab of Optic-Electronic Information and Materials, The College of Physics Science and Technology, Hebei University, Baoding 071002, China
6 Spectroscopy, Quantum Chemistry and Atmospheric Remote Sensing, Université libre de Bruxelles, B-1050 Brussels, Belgium
7 Institute of Theoretical Physics and Astronomy, Vilnius University, Saulėtekio av. 3, LT-10222 Vilnius, Lithuania
* Authors to whom correspondence should be addressed.
Atoms 2023, 11(1), 12; https://doi.org/10.3390/atoms11010012
Submission received: 23 November 2022 / Revised: 8 January 2023 / Accepted: 10 January 2023 / Published: 13 January 2023
(This article belongs to the Special Issue The General Relativistic Atomic Structure Package—GRASP)

Abstract:
The latest published version of GRASP (General-purpose Relativistic Atomic Structure Package), i.e., GRASP2018, retains a few suboptimal subroutines/algorithms, which reflect the limited memory and file storage of computers available in the 1980s. Here we show how the efficiency of the relativistic self-consistent-field (SCF) procedure of the multiconfiguration-Dirac–Hartree–Fock (MCDHF) method and the relativistic configuration-interaction (RCI) calculations can be improved significantly. Compared with the original GRASP codes, the present modified version reduces the CPU times by factors of a few tens or more. The MPI performances for all the original and modified codes are carefully analyzed. Except for diagonalization, all computational processes show good MPI scaling.

1. Introduction

Atomic energy levels, oscillator strengths, transition probabilities and energies are essential parameters for abundance analysis and diagnostics in astrophysics and plasma physics. In the past decade, the atomic spectroscopy group of Fudan University carried out two projects, in collaboration with other groups, to calculate transition characteristics with high accuracy. One project focused on ions with $Z \leq 30$, which are generally of astrophysical interest, and the other on tungsten ions ($Z = 74$), which are relevant to magnetic confinement fusion research. Employing the multiconfiguration Dirac–Hartree–Fock (MCDHF) approach [1,2,3,4,5], implemented within the GRASP2K package [6], and/or the relativistic many-body perturbation theory (RMBPT) [7], implemented within the FAC package [8,9,10], we performed a series of systematic and large-scale calculations of radiative atomic data for ions of low and medium Z-values belonging to the He I [11], Be I-Ne I [12,13,14,15,16,17,18,19,20,21,22], and Si I-Cl I [23,24,25,26,27] isoelectronic sequences, and for the highly charged isonuclear sequence ions of tungsten [28,29,30,31,32,33]. A large amount of atomic data, including level energies, transition wavelengths, line strengths, oscillator strengths, transition probabilities and lifetimes, were obtained. Their uncertainties were comprehensively assessed by cross-validations between the MCDHF and RMBPT results and by detailed comparisons with observations. These assessments showed that spectroscopic accuracy was achieved for the computed excitation and transition energies of most of the ions concerned, because electron correlation was treated at a high level of approximation using very large expansions of configuration state functions (CSFs) built on extended sets of one-electron orbitals. To make these large-scale calculations feasible and tractable, many efforts were devoted to improving the performance and stability of the codes used. Here, we describe some improvements made in the last two years to the rmcdhf and rci programs, which have not yet been included in the latest published version of GRASP, i.e., GRASP2018 [34].
The GRASP2018 package is an updated Fortran 95 version of the recommended programs from GRASP2K Version 1_1 [6], providing improvements in accuracy and efficiency in addition to the translation from Fortran 77 to Fortran 95 [34]. However, it has retained some original subroutines/algorithms that reflect the limited memory and file storage capacities of the computers of the 1980s, when the first versions of GRASP were released [2]. For example, the spin-angular coefficients, which are used to build the Hamiltonian matrix and the potentials, are stored on disk in unformatted files. During the iterations of the self-consistent-field (SCF) calculations, which optimize the one-electron radial functions, the spin-angular coefficients are read from disk over and over again. Calculations using expansions of hundreds of thousands of CSFs are therefore very time-consuming, as the disk files easily exceed 10 GB. This kind of inefficiency, which was long considered a major bottleneck of the GRASP package, was removed very recently by one of the authors (GG) through two programs, rmcdhf_mem and rmcdhf_mem_mpi, which have been uploaded to the GRASP repository [35]. The new feature of these two programs is that the spin-angular coefficients, once read from the disk files, are kept in memory in arrays. In the present work, we show that these codes can be further improved by redesigning the procedure used to obtain the direct and exchange potentials and the Lagrange multipliers, which are used to update the radial orbitals (large and small components) during the SCF procedure.
Once the radial functions have been determined by an MCDHF calculation based on the Dirac–Coulomb Hamiltonian, subsequent relativistic configuration-interaction (RCI) calculations are often performed to include the transverse photon interaction (which reduces to the Breit interaction in the low-frequency limit) and the leading quantum electrodynamics (QED) corrections. At this stage, the CSF expansions are usually enlarged considerably to capture additional electron correlation effects. For example, our recent MCDHF calculations on C-like ions [16] were performed using an expansion of about two million CSFs, generated by single and double (SD) excitations from the outer subshells of the multi-reference (MR) configurations, taking only the valence–valence correlation into account. The subsequent RCI calculations were based on approximately 20 million CSFs to adequately account for the additional core-valence (CV) electron correlation effects.
MCDHF and RCI calculations using large CSF expansions require substantial computing resources. Firstly, the construction of the Hamiltonian matrix is very time-consuming. The spin-angular integration of the Hamiltonian between pairs of CSFs has to be performed $N(N+1)/2$ times, where N is the order of the interaction matrix, i.e., the size of the CSF expansion for the block of given J and parity. Fortunately, we recently implemented a computational methodology based on configuration state function generators (CSFGs) that relaxes the above scaling. Instead of having to perform the spin-angular integration for each of the elements in the Hamiltonian matrix, the use of generators makes it possible to restrict the integration to a limited number of cases and then directly infer the spin-angular coefficients for all matrix elements between groups of CSFs spanned by the generators, taking advantage of the fact that the spin-angular expressions are independent of the principal quantum numbers [36]. Secondly, the time for solving the eigenvalue problem in MCDHF and RCI may also be significant, especially if many eigenpairs are required, as is normally the case in spectrum calculations for complex systems [16,23,24,25,26,27].
The present paper, which reports on improvements both for MCDHF and RCI, is organized as follows:
  • In Section 2, we show how the diagonalization procedure in MCDHF and RCI calculations can be improved by further parallelization.
  • In Section 3, we discuss the improvements in the MCDHF program resulting from the new management of spin-angular coefficients in memory and from the redesign of the procedures for calculating the potentials and Lagrange multipliers. Results are reported from a number of performance tests.
  • In Section 4, we study the improvements in RCI performances thanks to the use of CSFGs. We also investigate the time ratios for constructing and diagonalizing the Hamiltonian matrix to determine the desired eigenpairs.
  • Finally, in Section 5, we summarize the results of the performance tests and identify the remaining bottlenecks. This is followed by a discussion on how the latter could be circumvented in future developments.

2. Additional Parallelization for the DVDRC Library of GRASP

In the DVDRC library of GRASP, the Davidson algorithm [37], as implemented in [38], is used to extract the eigenpairs of interest from the interaction matrix. Assuming that the K lowest eigenpairs are required of a large, sparse, real and symmetric matrix $\mathbf{A}$ of order N, the original Davidson algorithm can be described as shown in Algorithm 1, in which the upper limit of P, the order of the expanding basis, is defined by the variable LIM = 2K + 80 in GRASP2018 [34]. The matrix–vector multiplication (6), which is the most time-consuming step, has already been parallelized in GRASP using the message passing interface (MPI) by calling upon one of the three subroutines DNICMV, SPODMV, and SPICMVMPI, depending on whether the interaction matrix is sparse or dense and whether it is stored in memory or on disk.
Algorithm 1: Davidson algorithm.
    (0) Set $P = K$. Compute the initial basis $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_P\} \in \mathbb{R}^{N \times P}$, $\mathbf{D} = \mathbf{A}\mathbf{B} = \{\mathbf{d}_1, \ldots, \mathbf{d}_P\} \in \mathbb{R}^{N \times P}$, and the projection $\mathbf{S} = \mathbf{B}^T\mathbf{A}\mathbf{B} = \mathbf{B}^T\mathbf{D} \in \mathbb{R}^{P \times P}$.
    Repeat steps (1) through (8) until converged:
    (1) Solve the symmetric eigenvalue problem $\mathbf{S}\mathbf{C} = \mathbf{C}\mathbf{U}$ (size $P \times P$).
    (2) Target one of the K sought eigenpairs, say $(u, \mathbf{c})$, $\mathbf{c} \in \mathbb{R}^P$.
    (3) If the basis size is maximal, restart: $\mathbf{D} \leftarrow \mathbf{D}\mathbf{C}$, $\mathbf{B} \leftarrow \mathbf{B}\mathbf{C}$, $\mathbf{C} = \mathbf{I}_K$, $\mathbf{S} = \mathbf{U}$, $P = K$.
    (4) Compute $\mathbf{R} = (\mathrm{diag}(\mathbf{A}) - u\mathbf{I})^{-1}(\mathbf{D} - u\mathbf{B})\,\mathbf{c}$.
    (5) Orthogonalize: $\mathbf{b}_{new} = \mathbf{R} - \sum_i \mathbf{b}_i\,\mathbf{b}_i^T\mathbf{R}$; normalize: $\mathbf{b}_{new} \leftarrow \mathbf{b}_{new}/\|\mathbf{b}_{new}\|$.
    (6) Matrix–vector multiplication: $\mathbf{d}_{new} = \mathbf{A}\mathbf{b}_{new}$.
    (7) Include $\mathbf{b}_{new}$ in $\mathbf{B}$ and $\mathbf{d}_{new}$ in $\mathbf{D}$. Increase P.
    (8) Compute the new column of $\mathbf{S}$: $S_{i,P} = \mathbf{b}_i^T\mathbf{d}_P$, $i = 1, \ldots, P$.
It should be pointed out that the subroutines of this library in GRASP2018 [34] performing the remaining calculations, i.e., all steps except step (6) of the Davidson algorithm, are serial. Step (1), solving the small symmetric eigenvalue problem of order P, which is generally smaller than 500, is very fast, as it calls upon the DSPEVX routine from the LAPACK library. However, in steps (3)–(5) and (7)–(8), the matrix–vector and matrix–matrix multiplications and the inner products involve vectors of size N, such as all the column vectors of $\mathbf{B}$ and $\mathbf{D}$. In MCDHF and RCI calculations, when N, the size of the CSF expansion for a given J and parity, is large and, at the same time, dozens of eigenpairs or more are sought, steps (3)–(5) and (7)–(8) can be as time-consuming as step (6). Hence, we have parallelized all the potentially time-consuming routines of the DVDRC library involved in these steps by using MPI, such as MULTBC, NRM_MGS, NEWVEC, ADDS, etc. We show in Section 3 and Section 4 that the CPU time for diagonalization can thereby be reduced by factors of about three in relatively large-scale calculations.
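To make the structure of Algorithm 1 concrete, a minimal serial sketch in Python/NumPy is given below. It is illustrative only (the function and variable names are ours, not those of the Fortran DVDRC library), it targets only the lowest eigenpair at each iteration for simplicity, and it recomputes the full projection matrix rather than only its new column. The steps that operate on full-length N-vectors, i.e., (3)–(5) and (7)–(8) in addition to the matrix–vector product of step (6), are the ones that the modified DVDRC library now distributes over MPI processes.

```python
# Minimal, serial Davidson sketch following Algorithm 1 (illustrative only;
# not the GRASP DVDRC implementation).
import numpy as np

def davidson(A, K, lim=None, tol=1e-8, maxiter=500):
    """Return approximations to the K lowest eigenpairs of the symmetric matrix A."""
    N = A.shape[0]
    lim = lim or 2 * K + 80                 # maximal basis size, as in GRASP2018
    diag = np.diag(A)
    B = np.eye(N, K)                        # step (0): initial orthonormal basis
    D = A @ B
    S = B.T @ D
    for _ in range(maxiter):
        u, C = np.linalg.eigh(S)            # step (1): small P x P eigenproblem
        theta, c = u[0], C[:, 0]            # step (2): target the lowest pair only
        r = (D - theta * B) @ c             # residual (D - u B) c
        if np.linalg.norm(r) < tol:
            return u[:K], B @ C[:, :K]
        if B.shape[1] >= lim:               # step (3): restart with the best K vectors
            B, D = B @ C[:, :K], D @ C[:, :K]
            S = np.diag(u[:K])
        t = r / (diag - theta + 1e-12)      # step (4): diagonal preconditioner
        t -= B @ (B.T @ t)                  # step (5): orthogonalize against B
        t /= np.linalg.norm(t)
        d = A @ t                           # step (6): the expensive matrix-vector product
        B = np.column_stack([B, t])         # step (7): expand the basis
        D = np.column_stack([D, d])
        S = B.T @ D                         # step (8): full projection recomputed here
    raise RuntimeError("Davidson did not converge")

# Usage on a small, diagonally dominant symmetric test matrix
rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 201.0)) + 1e-3 * rng.standard_normal((200, 200))
A = 0.5 * (A + A.T)
vals, vecs = davidson(A, K=4)
print(vals)
```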

3. Improvements for MCDHF

3.1. Outline of the MCDHF Method

The MCDHF theory has been comprehensively described in the literature; see, for example, [1,2,3,4,5]. Here it is only outlined, to the extent needed to explain the modifications of the original GRASP2018 codes. Atomic units are used throughout, unless other units are given explicitly.
In MCDHF calculations with GRASP, only the Dirac–Coulomb Hamiltonian ($H_{DC}$) is taken into account. The Dirac one-electron orbital a is given by
$$u_a(\mathbf{r}) = \frac{1}{r}\begin{pmatrix} P_{nlj}(r)\,\Omega_{\kappa m}(\theta,\phi) \\ i\,Q_{nlj}(r)\,\Omega_{-\kappa m}(\theta,\phi) \end{pmatrix}, \qquad (1)$$
in which P and Q are the radial functions, and $\Omega$ is the usual spherical spinor, i.e., the spin-angular function, with $\kappa = -2(j-l)(j+1/2)$ and $a \equiv (n,l,s,j,m) \equiv (n,\kappa,m)$. For a state $\alpha$ of given total angular momentum J, total magnetic quantum number $M_J$, and parity $\pi$, the atomic state function (ASF) is formed by a linear combination of CSFs
$$\Psi(\alpha J M_J \pi) \equiv |\alpha J M_J \pi\rangle = \sum_{r=1}^{N_{CSF}} c_{\alpha;r}\, \Phi(\gamma_r J M_J \pi). \qquad (2)$$
$N_{CSF}$ is the number of CSFs used in the expansion. Each CSF, $\Phi(\gamma_r J M_J \pi)$, is constructed from the four-component spinor orbital functions (1). The label $\gamma_r$ contains all the needed information on its structure, i.e., the constituent subshells with their symmetry labels and the way their angular momenta are coupled to each other in $jj$-coupling. The level energy $E_\alpha$ and the vector of expansion coefficients $\mathbf{c}_\alpha$ are obtained by solving the secular equation
$$(\mathbf{H} - E_\alpha\,\mathbf{I})\,\mathbf{c}_\alpha = 0, \qquad (3)$$
with
$$E_\alpha = \langle \alpha J M_J \pi\,|\,H_{DC}\,|\,\alpha J M_J \pi \rangle = \frac{1}{2J+1}\,\langle \alpha J \pi\,\|\,H_{DC}\,\|\,\alpha J \pi \rangle, \qquad (4)$$
where the reduced matrix element (RME) $\langle \alpha J \pi \| H_{DC} \| \alpha J \pi \rangle$ is defined through Edmonds' formulation of the Wigner–Eckart theorem [39]. This RME can be developed in terms of RMEs in the CSF basis, $H_{rs} = \langle \Phi(\gamma_r J) \| H_{DC} \| \Phi(\gamma_s J) \rangle$, which are generally expressed as
$$H_{rs} = \sum_{ab} t_{ab}^{rs}\, I(a,b) + \sum_{abcd;k} v_{abcd;k}^{rs}\, R^k(ab,cd). \qquad (5)$$
The radial integrals $I(a,b)$ and $R^k(ab,cd)$ are, respectively, relativistic kinetic-energy integrals and Slater integrals, $t_{ab}^{rs}$ and $v_{abcd;k}^{rs}$ are the corresponding spin-angular coefficients, and k is the tensor rank.
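As a toy illustration of how Equation (5) is evaluated in practice, the sketch below (our Python, not GRASP code; all numbers are arbitrary placeholders) assembles a single matrix element $H_{rs}$ from lists of spin-angular coefficients and the radial integrals they multiply.

```python
# Tiny illustration of Equation (5): each spin-angular coefficient multiplies a
# one-body integral I(a,b) or a Slater integral R^k(ab,cd) looked up by its labels.
def matrix_element(t_coeffs, v_coeffs, I, R):
    """t_coeffs: {(a, b): t_ab},  v_coeffs: {(a, b, c, d, k): v_abcd_k},
    I: {(a, b): I(a,b)},  R: {(a, b, c, d, k): R^k(ab,cd)}."""
    one_body = sum(t * I[ab] for ab, t in t_coeffs.items())
    two_body = sum(v * R[abcdk] for abcdk, v in v_coeffs.items())
    return one_body + two_body

# Placeholder data for a single pair of CSFs (r, s):
t_coeffs = {("1s", "1s"): 2.0, ("2s", "2s"): 1.0}
v_coeffs = {("1s", "2s", "1s", "2s", 0): 1.0, ("1s", "2s", "2s", "1s", 0): -0.5}
I = {("1s", "1s"): -2.0, ("2s", "2s"): -0.5}
R = {("1s", "2s", "1s", "2s", 0): 0.4, ("1s", "2s", "2s", "1s", 0): 0.05}
print(matrix_element(t_coeffs, v_coeffs, I, R))   # -4.125
```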
The radial functions of the orbitals are unknown and have to be determined numerically on a grid. The stationary condition with respect to variations in the radial functions gives, in turn, the following MCDHF integro-differential equations for each orbital a [1,2,3,4]:
$$\left\{\begin{aligned}
&\left(\frac{d}{dr}+\frac{\kappa_a}{r}\right)P_a-\left(2c-\frac{\epsilon_a}{c}-\frac{V_{\mathrm{nuc}}}{c}+\frac{Y_a}{cr}\right)Q_a=-\frac{1}{c\,\bar{q}_a}\left(\sum_{b\neq a}^{n_w}\delta_{\kappa_a\kappa_b}\,\epsilon_{ab}\,Q_b-\frac{X_a^{(P)}}{r}\right),\\
&\left(\frac{d}{dr}-\frac{\kappa_a}{r}\right)Q_a+\left(\frac{\epsilon_a}{c}-\frac{V_{\mathrm{nuc}}}{c}+\frac{Y_a}{cr}\right)P_a=\frac{1}{c\,\bar{q}_a}\left(\sum_{b\neq a}^{n_w}\delta_{\kappa_a\kappa_b}\,\epsilon_{ab}\,P_b+\frac{X_a^{(Q)}}{r}\right),
\end{aligned}\right. \qquad (6)$$
in which the Lagrange multipliers $\epsilon_a$ and $\epsilon_{ab}$ ensure that the $n_w$ orbitals $\{a, b, \ldots\}$ form an orthonormal set. The direct potential $Y_a(r)$, arising from the two-body interactions and summed over the allowed tensor ranks k, is given by
$$Y_a(r) = \sum_{b=1}^{n_w}\sum_k y^k(ab)\, Y^k(bb;r) - \sum_k\sum_{b,d} y^k(abad)\, Y^k(bd;r), \qquad (7)$$
with $Y^k(\,\cdot\,;r)$ being the relativistic one-dimensional radial integrals [1,2,3,4], and
$$\bar{q}(a)\, y^k(ab)/(1+\delta_{ab}) = \sum_r d_{rr}\, f_r^k(ab), \qquad (8)$$
$$\bar{q}(a)\, y^k(abad) = \sum_{r,s} d_{rs}\, v_{abad;k}^{rs}, \qquad (9)$$
where $f_r^k(ab) \equiv v_{abab;k}^{rr}$ and $v_{abad;k}^{rs}$ are the spin-angular coefficients. The exchange potentials $X_a(r)$ in Equation (6) are given by
$$\begin{aligned}
X_a^{(P)}(r) &= \frac{1}{c}\left[\sum_{b\neq a}^{n_w}\sum_k x^k(ab)\, Y^k(ab;r)\, Q_b + \sum_{bcd,\,c\neq a}\sum_k x^k(abcd)\, Y^k(bd;r)\, Q_c\right],\\
X_a^{(Q)}(r) &= \frac{1}{c}\left[\sum_{b\neq a}^{n_w}\sum_k x^k(ab)\, Y^k(ab;r)\, P_b + \sum_{bcd,\,c\neq a}\sum_k x^k(abcd)\, Y^k(bd;r)\, P_c\right],
\end{aligned} \qquad (10)$$
with
$$\bar{q}(a)\, x^k(ab) = \sum_r d_{rr}\, g_r^k(ab), \qquad (11)$$
$$\bar{q}(a)\, x^k(abcd) = \sum_{rs} d_{rs}\, v_{abcd;k}^{rs}, \qquad (12)$$
where $g_r^k(ab) \equiv v_{abba;k}^{rr}$ and $v_{abcd;k}^{rs}$ are also spin-angular coefficients. The coefficients $d_{rs}$ are the generalized weights
$$d_{rs} = \sum_{\alpha=1}^{n_L} g_\alpha\, c_{\alpha;r}\, c_{\alpha;s}, \qquad (13)$$
in which $g_\alpha$ is the weight attributed to level $\alpha$, and $n_L$ is the number of targeted levels. In the extended optimal level (EOL) calculations of GRASP, the MCDHF optimization procedure ensures that the average energy weighted by $g_\alpha$, i.e., $\bar{E} = \sum_\alpha g_\alpha E_\alpha$, is stationary with respect to small changes of the orbitals and expansion mixing coefficients. In all of the above equations, $\bar{q}(a)$ is the generalized occupation number of orbital a:
$$\bar{q}(a) = \sum_r d_{rr}\, q_r(a), \qquad (14)$$
where $q_r(a)$ is the occupation number of orbital a in CSF r. The resulting direct and exchange potentials are also used to determine the Lagrange multipliers [1].
It should be mentioned that $b, c, d \neq a$ is assumed in both Equations (9) and (12); their left-hand sides should be multiplied by appropriate factors if $b = a$ and/or $d = a$, as in Equation (8). In addition, the contributions to the exchange potentials arising from the off-diagonal one-body integrals $I(a,b)\,\delta_{\kappa_a,\kappa_b}$ are not presented here, but they have been included since the GRASP92 version [40].
The spin-angular coefficients $f_r^k(ab)$ and $g_r^k(ab)$ are known in closed form [1,2] and are calculated during the construction of the potentials and the Hamiltonian matrix, whereas $v_{abcd;k}^{rs}$, as well as $t_{ab}^{rs}$ involving a one-body integral $I(a,b)$, are obtained from the unformatted disk files, namely mcp.XXX, which are generated by the rangular program [41,42,43] of GRASP.
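For readers less familiar with the EOL weighting, the following small NumPy sketch (ours; all numbers are toy values) evaluates the generalized weights of Equation (13) and the generalized occupation number of Equation (14) from a set of mixing coefficients and level weights.

```python
# Small numerical sketch of Equations (13) and (14) for an EOL calculation.
import numpy as np

n_csf, n_L = 5, 2
rng = np.random.default_rng(3)
c = rng.standard_normal((n_L, n_csf))          # mixing coefficients c_{alpha;r}
c /= np.linalg.norm(c, axis=1, keepdims=True)  # each eigenvector normalized
g = np.array([0.5, 0.5])                       # level weights g_alpha, summing to 1

# d_rs = sum_alpha g_alpha c_{alpha;r} c_{alpha;s}   (Equation (13))
d = np.einsum("a,ar,as->rs", g, c, c)

# q_r(a): occupation of orbital a in CSF r (toy numbers for a single orbital a)
q_a = np.array([2.0, 2.0, 1.0, 0.0, 1.0])

# qbar(a) = sum_r d_rr q_r(a)                        (Equation (14))
qbar_a = np.sum(np.diag(d) * q_a)
print(round(qbar_a, 6))
```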

3.2. Redesigning the Calculations of Potentials

The MCDHF calculations are generally divided into two parts: (i) determining the desired eigenpairs by solving Equation (3) for a given set of one-electron orbitals, and (ii) updating the orbitals by iteratively solving the orbital equations, Equation (6), for a given set of mixing coefficients. Beyond the additional parallelization of the DVDRC library of GRASP described in Section 2, the computational task can be reduced significantly by redesigning the calculation of the potentials.
The general MCDHF procedure used in the rmcdhf and rmcdhf_mpi programs of GRASP2018 [34] is illustrated in Algorithm 2. The notes integrated in the description of the SCF procedure outline the modifications provided in the memory versions rmcdhf_mem and rmcdhf_mem_mpi [35] and in the present modified version, referred to as rmcdhf_mpi_FD for convenience. Only the parallel versions are referred to hereinafter, as we focus on large-scale MCDHF calculations.
Algorithm 2: SCF procedure.
    (0.0) Load all of the mcp.XXX files into arrays. Note: This step is added in both rmcdhf_mem_mpi and rmcdhf_mpi_FD.
    (0.1) Initialize the orbitals, read the CSF expansion, and perform any other needed initialization.
    (0.2) Determine NEC, the number of needed off-diagonal Lagrange multipliers $\epsilon_{ab}$ with $a \neq b$ and $\kappa_a = \kappa_b$, where subshell a and/or b is partially occupied. The labels a and b are recorded.
    (0.3) Call MATRIXmpi and then MANEIGmpi: set up the H-matrix and solve Equation (3) to obtain $E_\alpha^{(0)}$ and $\mathbf{c}_\alpha^{(0)}$, $\alpha = 1, \ldots, n_L$; calculate $\bar{q}^{(0)}$ for all orbitals and $\bar{E}^{(0)}$. Note: The additional parallelizations for the DVDRC library presented in Section 2 are included in rmcdhf_mpi_FD.
    (0.4) For each orbital a involved in the NEC off-diagonal Lagrange multipliers, construct the sorted NYA and NXA arrays storing the unique, packed labels that identify the possible direct and exchange contributions in Equations (7) and (10), respectively. Note: This step is added in rmcdhf_mpi_FD.
    Start the SCF procedure; repeat until $\bar{E}$ is converged. In the ith loop:
    (1.0) For all orbitals a involved in the calculations of the NEC off-diagonal Lagrange multipliers, update the $y^k$ and $x^k$ coefficients needed in Equations (7) and (10), respectively, by using Equations (8) and (9) or Equations (11) and (12), in which $\bar{q}^{(i-1)}$ is used. The eigenvector matrix $\mathbf{c}^{(i-1)}$ is used in Equation (13). The results are saved in the YA and XA arrays in the same order as the NYA and NXA arrays. Note: This step is added in rmcdhf_mpi_FD.
    (1.1) Call routine SETLAGmpi to determine the needed NEC off-diagonal Lagrange multipliers $\epsilon_{ab}$:
        (a1) For orbital a, call routine SETCOF to build unsorted NYA and NXA arrays, and sequentially update the needed $y^k$ and $x^k$ coefficients. Note: This step is removed in rmcdhf_mpi_FD, as the needed data have been obtained in steps (0.4) and (1.0).
        (a2) Build the potentials for orbital a by calling routines YPOT, XPOT and DACON. Note: Routines YPOT and XPOT have been parallelized using MPI in rmcdhf_mpi_FD.
        (b1) As done in (a1) but for orbital b. Note: This step is removed in rmcdhf_mpi_FD.
        (b2) As done in (a2) but for orbital b.
        (c1) Calculate $\epsilon_{ab}$ from the equations given in [1].
    (2) Call routine IMPROVmpi to update the orbitals by solving Equation (6); update the potentials for each varied orbital by calls of SETCOF, YPOT, XPOT, etc. Note: The inner call to routine SETCOF is removed in rmcdhf_mpi_FD.
    (3) As done in step (0.3) but using the updated orbitals; obtain $E_\alpha^{(i)}$, $\mathbf{c}_\alpha^{(i)}$, $\bar{E}^{(i)}$, $\bar{q}^{(i)}$, etc.
We describe some of the modifications in detail below:
  • A routine, SETMCP_MEM, is added in rmcdhf_mem_mpi and retained in rmcdhf_mpi_FD to read the $t_{ab}^{rs}$ and $v_{abcd;k}^{rs}$ spin-angular coefficients, together with the corresponding packed orbital labels, from the mcp.XXX disk files into arrays. When needed, the data are fetched from memory in rmcdhf_mem_mpi and rmcdhf_mpi_FD, whereas rmcdhf_mpi reads the mcp.XXX disk files in steps (0.3), (a1), (b1), (2) and (3) of Algorithm 2.
  • The most time-consuming SETCOF subroutine of rmcdhf_mpi is split into two routines, i.e., SETTVCOF and SETALLCOF in rmcdhf_mpi_FD.
    -
    During the first call, just before the SCF iterations start, SETTVCOF records the Slater integrals $R^k(ab,cd)$ contributing to the NEC off-diagonal Lagrange multipliers; the packed labels, i.e., $LABV = ((IA \times KEY + IB) \times KEY + IC) \times KEY + ID$ with $KEY = 215$ (and $n_w \leq 214$, the maximum value allowing LABV to be stored as a 4-byte integer), and the corresponding tensor ranks k are saved into arrays. Here IA, IB, IC, and ID are, respectively, the positions of a, b, c, d in the set consisting of the $n_w$ orbitals. Many identical Slater integrals $R^k(ab,cd)$ arise from different $J^\pi$ blocks.
    -
    During the SCF procedure, SETTVCOF only performs the summations of Equations (9) and (12), in a single call per iteration.
    -
    Within the first call, just before the SCF iterations start, SETALLCOF constructs the $NYA(:,a)$ and $NXA(:,a)$ arrays for all the orbitals involved in the calculations of all off-diagonal Lagrange multipliers. The diagonal Slater integrals $R^k(ab,ab)$ or $R^k(ab,ba)$ of the Hamiltonian matrix involved in the calculations of the $y^k$ and $x^k$ coefficients (see Equations (8) and (11)), and those $R^k(ab,cd)$ recorded by SETTVCOF and involved in Equations (9) and (12), are considered. The labels $LABY_k = (IB \times KEY + ID) \times KEY + k$ and $LABX_k = ((IC \times KEY + IB) \times KEY + ID) \times KEY + k$ are sorted and saved into the $NYA(:,a)$ and $NXA(:,a)$ arrays, respectively. Hence, $NYA(:,a)$ and $NXA(:,a)$ are sorted lists with distinct elements. All MPI processes are made to hold the same NYA and NXA arrays. The $y^k$ and $x^k$ coefficients arising from the same Slater integrals but from different $J^\pi$ blocks are accumulated, respectively, according to the $LABY_k$ and $LABX_k$ values stored in $NYA(:,a)$ and $NXA(:,a)$.
    -
    During the SCF iterations, SETALLCOF only accumulates all the needed coefficients in Equations (7) and (10) across the different $J^\pi$ blocks, employing a binary search strategy (with time complexity $O(\log_2 n)$) to match the $LABY_k$ and $LABX_k$ values with those stored in the NYA and NXA arrays, respectively (see the sketch after this list). The accumulated $y^k$ and $x^k$ coefficients are saved into the YA and XA arrays at the same positions as those of $LABY_k$ and $LABX_k$ in the NYA and NXA arrays. This accumulation scheme significantly reduces the computational effort for the relativistic one-dimensional radial integrals $Y^k(\,\cdot\,;r)$ in Equations (7) and (10).
    -
    In both the SETTVCOF and SETALLCOF routines, the computational effort is significantly reduced by taking advantage of the symmetry properties of Equations (8), (9), (11) and (12). Their right-hand sides, i.e., the summations, are the same for all the involved orbitals and are performed only once within each SCF loop. For example, given $a \neq b \neq c \neq d$, the corresponding $R^k(ab,cd)$ contributes to the exchange parts of the four orbitals, and the associated four $x^k$ coefficients can be obtained simultaneously by taking their generalized occupation numbers into account.
    -
    In the SETCOF routine of rmcdhf_mpi, these symmetry properties are not considered. The NYA and NXA arrays are rebuilt at every call and contain repeated labels, for which the sequential search method (with time complexity $O(n)$) used to accumulate the corresponding $y^k$ and $x^k$ coefficients is inefficient. In MCDHF calculations using many orbitals, the number of labels $LABX_k$ can easily reach hundreds of thousands or more. This inefficiency significantly slows down rmcdhf_mpi.
  • In rmcdhf_mpi_FD, the subroutines YPOT and XPOT are parallelized by using MPI, whereas they are serial in both rmcdhf_mpi and rmcdhf_mem_mpi.
  • Obviously, compared with rmcdhf_mpi and rmcdhf_mem_mpi, the new code rmcdhf_mpi_FD consumes more memory, since many additional, possibly large, arrays are kept during the SCF procedure; dozens of additional GB of memory are needed if the number of labels $LABX_k$ reaches several million.
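The label packing and the sorted-array lookup described above can be illustrated with the following short Python sketch (ours, not the GRASP Fortran; the array and function names are hypothetical). It packs orbital positions and the tensor rank into a single integer, builds a sorted, duplicate-free label array once, and then accumulates coefficients with a binary search, which replaces the $O(n)$ sequential search of SETCOF by an $O(\log_2 n)$ lookup.

```python
# Illustrative sketch of LABXk-style label packing and sorted-array accumulation.
import numpy as np

KEY = 215  # orbital positions IA, IB, IC, ID must not exceed 214

def pack_labx(ic, ib, id_, k):
    """Pack (IC, IB, ID, k) into a LABXk-style integer label."""
    return ((ic * KEY + ib) * KEY + id_) * KEY + k

# Build the sorted, duplicate-free label array once, before the SCF iterations
# (step (0.4)); here the labels are random stand-ins for real Slater integrals.
rng = np.random.default_rng(0)
raw_labels = pack_labx(rng.integers(1, 50, 100_000),
                       rng.integers(1, 50, 100_000),
                       rng.integers(1, 50, 100_000),
                       rng.integers(0, 10, 100_000))
nxa = np.unique(raw_labels)          # sorted NXA(:,a) analogue
xa = np.zeros_like(nxa, dtype=float) # XA(:,a): accumulated x^k coefficients

def accumulate(label, contribution):
    """Accumulate a coefficient at the position of 'label' via binary search."""
    pos = np.searchsorted(nxa, label)
    if pos < nxa.size and nxa[pos] == label:
        xa[pos] += contribution

# During each SCF iteration, contributions from all J^pi blocks are accumulated:
accumulate(int(nxa[1234]), 0.05)
print(xa[1234])   # -> 0.05
```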

3.3. Performance Tests for MCDHF

In this section, we compare the relative performance of the three available codes, rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD, for MCDHF calculations. We choose two examples, Mg VII and Be I, to illustrate and discuss the improvements in efficiency obtained with the two new codes, rmcdhf_mem_mpi and rmcdhf_mpi_FD. The calculations are all performed on a Linux server with two Intel(R) Xeon(R) Gold 6278C CPUs (2.60 GHz) providing 52 cores in total, except in the few cases for which the CPU used is explicitly given. In this comparative work, we carefully checked that the results obtained with the three codes are identical. Throughout the present work, the reported CPU times are all wall-clock times, as these are the most meaningful for end-users.

3.3.1. Mg VII

In our recent work on C-like ions [16], large-scale MCDHF-RCI calculations were performed for the $n \leq 5$ states in C-like ions from O III to Mg VII. Electron correlation effects were accounted for by using large configuration state function expansions, built from orbital sets with principal quantum numbers $n \leq 10$. A consistent atomic data set, including both energies and transition data with spectroscopic accuracy, was produced for the lowest hundreds of states of C-like ions from O III to Mg VII. Here we take Mg VII as an example to investigate the performances of the rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD programs.
In the MCDHF calculations of [16] aiming at the orbital optimisation, the CSF expansions were generated by SD excitations up to the $10(spdfghi)$ orbitals from all possible $(1s^2)2l^3nl$ configurations with $2 \leq n \leq 5$. (More details can be found in [16].) The MCDHF calculations were performed layer by layer using the following sequence of active sets (AS):
$$\begin{aligned}
AS_1 &= \{6s, 6p, 6d, 6f, 6g, 6h\},\\
AS_2 &= \{7s, 7p, 7d, 7f, 7g, 7h, 7i\},\\
AS_3 &= \{8s, 8p, 8d, 8f, 8g, 8h, 8i\},\\
AS_4 &= \{9s, 9p, 9d, 9f, 9g, 9h, 9i\},\\
AS_5 &= \{10s, 10p, 10d, 10f, 10g, 10h, 10i\}.
\end{aligned}$$
Here the test calculations are carried out only for the even states with J = 0–3. The CSF expansions using the above AS orbitals, as well as the number of targeted levels for each block, are listed in Table 1. To keep the calculations tractable, only two SCF iterations are performed, taking the converged radial functions from [16] as the initial estimate. The zero- and first-order partition techniques [4,44], often referred to as ‘Zero-First’ methods [45], are employed. The zero-order space contains the CSFs with orbitals up to $5(spdfg)$, the numbers of which are also reported in Table 1. The corresponding sizes of the mcp.XXX files are, respectively, about 5.2, 11, 19, 29, and 41 GB in the $AS_1$ through $AS_5$ calculations.
The CPU times for these MCDHF calculations using the $AS_3$ and $AS_5$ orbital sets are reported in Table 2 and Table 3, respectively. To show the MPI performance, the calculations are carried out using various numbers of MPI processes (np) ranging from 1 to 48. The rmcdhf_mpi and rmcdhf_mem_mpi MPI calculations using the $AS_5$ orbital set are only performed with $np \geq 8$, as the calculations with smaller np-values are too time-consuming. The CPU times are presented in the time sequence of Algorithm 2. For MCDHF calculations limited to two iterations, the eigenpairs are searched for three times, i.e., once at step (0.3) and twice at step (3). The three rows labeled “SetH&Diag” in Table 2 and Table 3 report the corresponding CPU times for setting up the Hamiltonian matrix (routine MATRIXmpi) and for its diagonalization (routine MANEIGmpi), whereas the row labeled “Sum(SetH&Diag)” reports their sum. Steps (1.1) and (2) of Algorithm 2 are carried out twice in all calculations, as is step (1.0) in the rmcdhf_mpi_FD calculations. The rows labeled “SetCof + LAG” and “IMPROV” report, respectively, the CPU times for routines SETLAGmpi and IMPROVmpi, i.e., for steps (1.1) and (2) of Algorithm 2, while the row “Update” gives their sum. The rows labeled “Sum(Update)” display the total CPU times needed to update the orbitals twice. The rows “Walltime” represent the total code execution times. The differences between the summed value “Sum(Update)” + “Sum(SetH&Diag)” and the “Walltime” values represent the CPU times that are not monitored by the former two. It can be seen that these differences are relatively small for rmcdhf_mpi and rmcdhf_mem_mpi, implying that most of the time-consuming parts of the codes are included in the tables, while the relatively large differences in the case of rmcdhf_mpi_FD would be reduced if the CPU times needed for constructing the sorted NXA and NYA arrays in step (0.4) of Algorithm 2 were taken into account.
In the rmcdhf_mpi_FD calculations, five additional kinds of CPU times are recorded, labeled, respectively, “NXA&NYA”, “SetTVCof”, “WithoutMCP”, “WithMCP”, and “SetLAG”. The row “NXA&NYA” reports the CPU times to construct the sorted NXA and NYA arrays. The “SetTVCof” row displays the CPU times required to perform all the summations of Equations (9) and (12) in the newly added routine SETTVCOF. The “WithoutMCP” and “WithMCP” rows report the CPU times spent in the added routine SETALLCOF to accumulate the $y^k$ and $x^k$ coefficients using Equations (8) and (11), and Equations (9) and (12), respectively. These three contributions, “SetTVCof”, “WithoutMCP” and “WithMCP”, correspond to the computational effort associated with step (1.0) of Algorithm 2. The “SetLAG” row represents the CPU times required to calculate the off-diagonal Lagrange multipliers $\epsilon_{ab}$ in routine SETLAGmpi using the calculated $y^k$ and $x^k$ coefficients. The “SetCof + LAG” CPU time values correspond approximately to the sum of the four tasks “SetTVCof” + “WithoutMCP” + “WithMCP” + “SetLAG”, as the calculations involving the one-body integral contributions are generally very fast. The “Update” row reports the sum of “SetCof + LAG” and “IMPROV”, as above for rmcdhf_mpi and rmcdhf_mem_mpi. (The CPU times with the same labels for the different codes can be compared because they are recorded for the same computation tasks.)
Based on the CPU times reported in Table 2 and Table 3, some comparisons are illustrated in Figure 1, Figure 2, Figure 3 and Figure 4. We discuss below the relative performances of the three codes.
As seen in Table 2 and Figure 1 for the $AS_3$ calculations, the MPI performances for diagonalization are unsatisfactory for all three codes. The largest speed-up factors are about 1.9 for rmcdhf_mpi and rmcdhf_mem_mpi, and 4.7 for rmcdhf_mpi_FD. The optimal numbers of MPI processes for diagonalization are all around np = 16, and the MPI performances deteriorate when np exceeds 24. The CPU times of rmcdhf_mem_mpi and rmcdhf_mpi are very similar. Compared to these two codes, the CPU time of rmcdhf_mpi_FD is reduced by a factor of ≃2.5 with $np \geq 16$, thanks to the additional parallelization described in Section 2. The speed-up efficiency of rmcdhf_mpi_FD relative to rmcdhf_mpi increases slightly with the size of the CSF expansion. As seen from the first line of Table 3, the CPU time gain factor reaches 3 for the calculations using the $AS_5$ orbital set. It should be noted that the CPU times to set up the Hamiltonian matrix are negligible in all three codes, being tens of times shorter than those for the first search for eigenpairs. The eigenpairs are searched for three times, and the corresponding CPU times are included in the three rows labeled “SetH&Diag”. As seen in Table 3, the first “SetH&Diag” CPU time is 945 s in the rmcdhf_mpi_FD $AS_5$ calculation with np = 48, consisting of 14 and 931 s, respectively, for the matrix construction and the diagonalization. For the subsequent two “SetH&Diag” entries, the matrix construction CPU times are still about 14 s, whereas those for diagonalization are reduced to 34 and 26 s, respectively, because the mixing coefficients are already converged. If the present calculations were initialized by Thomas–Fermi or hydrogen-like approximations, these CPU times would also reach about 900 s.
As far as the orbital updating process is concerned, the MPI performances of the three codes scale very well. Linearity is indeed maintained even with np = 48 or more, as seen in Figure 2a and Figure 4b. The speed-up factors with np = 48 are, respectively, 26.0, 17.8 and 44.3 for the rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD $AS_3$ calculations, while it is 43.2 for the $AS_5$ calculation using rmcdhf_mpi_FD. The slopes obtained by a linear fit of the speed-up factors as a function of np are, respectively, 0.53, 0.35, and 0.93 for the rmcdhf_mpi, rmcdhf_mem_mpi, and rmcdhf_mpi_FD $AS_3$ calculations, while the slope reaches 0.91 for the rmcdhf_mpi_FD $AS_5$ calculation. In the $AS_3$ calculations, compared to rmcdhf_mpi and rmcdhf_mem_mpi, the rmcdhf_mpi_FD CPU times for updating the orbitals with $np \geq 8$ are reduced by factors of 60–65 and 31–37, respectively, as seen in Figure 2b. The corresponding reduction factors are 57–69 and 36–43 for the $AS_5$ calculations, as seen in Figure 4b. These large CPU time-saving factors result from the new strategy developed to calculate the potentials, implemented in rmcdhf_mpi_FD and described in Section 3.2. Unlike for the diagonalization part, the memory version rmcdhf_mem_mpi brings some interesting improvements over rmcdhf_mpi: the orbital updating CPU times are indeed reduced by factors of 2 and 1.6 for the $AS_3$ and $AS_5$ calculations, respectively.
The MPI performances for the “Walltime” values differ among the three codes. As seen in Table 2 and Table 3, the orbital updating CPU times are predominant in most of the rmcdhf_mpi and rmcdhf_mem_mpi MPI calculations, whereas the diagonalization CPU times dominate in all rmcdhf_mpi_FD MPI calculations, for all np values. Hence, as seen in Figure 3a and Figure 4a, the global MPI performance of rmcdhf_mpi_FD is similar to that achieved for diagonalization. The maximum speed-up factors are about 6.6 and 7.3 for the $AS_3$ and $AS_5$ calculations, respectively, both with np = 16–32, even though the “Update” CPU times could be reduced by factors of 44 or 43 with np = 48, as shown above. The speed-ups increase with np in the rmcdhf_mpi and rmcdhf_mem_mpi $AS_3$ calculations, reaching about 13.5 and 7.2, respectively, with np = 48. As shown in Figure 3b, compared to rmcdhf_mpi, the walltimes are reduced by factors of 11.2 and 4.3 in the rmcdhf_mpi_FD $AS_3$ calculations using np = 8 and np = 48, respectively, while the reduction factors are 15 and 4.8 for the $AS_5$ calculations, as shown in Figure 4a. The corresponding speed-up factors of rmcdhf_mpi_FD relative to rmcdhf_mem_mpi are smaller by a factor of 1.5, as rmcdhf_mem_mpi is 1.5 times faster than rmcdhf_mpi, as seen in Figure 3b.
As mentioned above, the total CPU times for diagonalization reported in Table 2 and Table 3 (see the rows labeled “Sum(SetH&Diag)”) are dominated by the first diagonalization, as the initial radial functions are taken from the converged calculations. In SCF calculations initialized by Thomas–Fermi or screened hydrogenic approximations, more computational effort has to be devoted to the subsequent diagonalizations during the SCF iterations. It is obvious that the limited MPI performance of the diagonalization is the bottleneck in rmcdhf_mpi_FD calculations. As seen in Table 2 and Table 3 and Figure 3a and Figure 4a, more CPU time is required if np exceeds the optimal number of cores for diagonalization, which is generally in the range of 16–32. In rmcdhf_mpi and rmcdhf_mem_mpi calculations, the inefficiency of the orbital-updating procedure is another bottleneck, although this limitation may be alleviated by using more cores to perform the SCF calculations. However, this kind of alleviation is eventually prevented by the limited MPI performance of the diagonalization. As seen in Table 3 and Figure 4a for the rmcdhf_mpi and rmcdhf_mem_mpi $AS_5$ calculations, the walltimes with np = 48 are longer than those with np = 32, even though the CPU times for updating the orbitals are still reduced significantly in the former calculation.

3.3.2. Be I

To further understand the inefficiency of the orbital updating process in both the rmcdhf_mpi and rmcdhf_mem_mpi codes, a second test case is carried out for a rather simple system, Be I. The calculations target the lowest 99 levels arising from the configurations $(1s^2)2lnl$ with $n \leq 7$. The 99 levels are distributed over 15 $J^\pi$ blocks, i.e., $0^+, 0^-, \ldots, 7^+$, with the largest numbers of levels, 12, in the $1^-$ and $2^+$ blocks. The MCDHF calculations are performed simultaneously for both even and odd parity states. The largest CSF space contains 55 166 CSFs formed by SD excitations up to the $15(spdfg)14(hi)13(kl)$ orbitals from all the targeted states distributed over the above 15 $J^\pi$ blocks, with the largest block size of 4 868 CSFs for $4^+$. The orbitals are also optimized with a layer-by-layer strategy. The CPU times recorded for the calculations using the $9(spdfghikl)$ and $15(spdfg)14(hi)13(kl)$ orbital sets are given in Table 4 and Table 5. These calculations are hereafter labeled n = 9 and n = 15. The corresponding sizes of the mcp.XXX files are, respectively, 760 MB and 19 GB. As the rmcdhf_mpi and rmcdhf_mem_mpi n = 15 calculations are time-consuming, they are only performed with $np \geq 16$.
In comparison with the Mg VII test case considered in Section 3.3.1 (see Table 1), the CSF expansions for Be I are much smaller, and fewer levels are targeted. Hence, less computational effort is expected for the construction of the Hamiltonian matrix and the subsequent diagonalization. This is true for the diagonalization parts of all the calculations using various numbers np of MPI processes. For example, the CPU times for searching for eigenpairs are about an order of magnitude smaller than those for building the Hamiltonian matrix, representing 14 s out of the 150 s reported by the first “SetH&Diag” value given in Table 5 for the rmcdhf_mpi_FD calculation with np = 16. These CPU times are negligible (<1 s) for the following two diagonalizations. Unlike in the Mg VII cases, the CPU times for setting up the Hamiltonian matrix predominate in the three “SetH&Diag” values, being all around 136 s in the n = 15 calculations using np = 16, as shown in Table 5. These large differences in the CPU time distributions between our Mg VII and Be I test cases arise from the fact that the n = 15 expansion in Be I is built on a rather large set of orbitals, consisting of 171 Dirac one-electron orbitals, whereas the $AS_5$ expansion in Mg VII involves only 88. The number of Slater integrals $R^k(ab,cd)$ possibly contributing to matrix elements is, therefore, much larger in Be I (95 451 319) than in Mg VII (6 144 958). Consequently, the three codes report very similar “SetH&Diag” and “Sum(SetH&Diag)” CPU times and all attain maximum speed-up factors of about 10 around np = 32, as seen in Table 4.
The MPI performances of the Be I n = 9 calculations are shown in Figure 5. In general, perfect MPI scaling corresponds to a speed-up factor equal to np. With respect to this, the speed-up factors observed for rmcdhf_mpi and rmcdhf_mem_mpi are unusual, being much larger than the corresponding np values. For example, the speed-up factors of rmcdhf_mpi and rmcdhf_mem_mpi for the orbital updating calculations are 79.5 and 93.6 at np = 48, while the corresponding slopes obtained from a linear fit of the speed-up factors as a function of np are about 1.7 and 2.0, respectively. The corresponding reductions at np = 48 for the code running times are 69.4 and 79.1, while the slopes are about 1.5 and 1.7, respectively. These reductions should be even larger for the n = 15 calculations. A detailed analysis shows that the inefficiency of the sequential search method largely accounts for these unexpected MPI performances. As mentioned in Section 3.2, the labels $LABY_k$ and $LABX_k$ are constructed and stored sequentially in the $NYA(:,a)$ and $NXA(:,a)$ arrays, respectively. In the subsequent accumulations of the $y^k$ and $x^k$ coefficients, the sequential search method is employed to match the labels. As mentioned above, a large number of Slater integrals contribute to the Hamiltonian matrix elements in calculations using a large set of orbitals. They are also involved in the calculations of the potentials. In general, the number of $x^k$ terms is much larger than the number of $y^k$ terms. For example, the largest number of the former is 196 513, for all the $ng_{9/2}$ orbitals, while there are at most 3 916 $y^k$ terms, for all the $ni_{11/2}$ orbitals, in the n = 9 calculations. Similarly, for the n = 15 calculations, there are at most 2 191 507 $x^k$ terms and 17 328 $y^k$ terms, both for the $ng_{9/2}$ orbitals. These values correspond to the largest sizes of the one-dimensional vectors $NXA(:,a)$ and $NYA(:,a)$. A sequential search in a large list is obviously less efficient than in a small list, as the time complexity is $O(n)$. In the MPI calculations with small np values, for example np = 1, the sequential search for the $LABX_k$ values in $NXA(:,a)$ lists of over two million elements is very time-consuming. The sizes of $NXA(:,a)$ in each MPI process decrease as np increases, alleviating the inefficiency of the sequential search method, as also observed in the Mg VII calculations. For example, when np = 48, the size of $NXA(:,ng_{9/2})$ in each MPI process is reduced to 528 802 in the n = 15 calculations, and consequently, the unusually high speed-up factors are attained with both rmcdhf_mpi and rmcdhf_mem_mpi.
In rmcdhf_mpi_FD, this inefficiency is removed by using a binary search in the sorted arrays $NXA(:,a)$ and $NYA(:,a)$, and the code benefits from several other improvements, as discussed in Section 3.2. As seen in Figure 5a and Figure 6a, the speed-up factors for updating the orbitals increase slowly with np and reach a value of about 22 for both the n = 9 and n = 15 rmcdhf_mpi_FD calculations with np = 48, while the corresponding reductions for the code running times are, respectively, 12.5 and 22.5, as seen in Figure 5b and Figure 6b. It should be mentioned that in the rmcdhf_mpi and rmcdhf_mem_mpi calculations, the “Sum(SetH&Diag)” CPU time values are all smaller than the “Sum(Update)” ones, as seen in Table 4 and Table 5. The opposite holds for the rmcdhf_mpi_FD calculations, with all “Sum(SetH&Diag)” CPU time values larger than the “Sum(Update)” ones, as in the Mg VII calculations. Compared to rmcdhf_mpi, the code rmcdhf_mpi_FD reduces the CPU times required for updating the orbitals by factors in the range of 38–20 for np = 16–48 in the n = 9 MCDHF calculations, while for the n = 15 calculations the corresponding reduction factors are in the range of 242–287. The corresponding reduction factor ranges for the code running times are, respectively, 11–4.7 and 54–22.5, as seen in Figure 5b and Figure 6b. One can conclude that the larger the scale of the calculations, the larger the CPU time reduction factors. Moreover, the smaller the number of cores used, the larger the reduction factors obtained with rmcdhf_mpi_FD. These features become highly relevant for extremely large-scale MCDHF calculations if they have to be performed using a small number of cores due to the limited performance of the diagonalization, as discussed in the previous section.

3.3.3. Possible Further Improvements for rmcdhf_mpi_FD

As discussed above, the MPI performance of rmcdhf_mpi_FD for updating the orbitals scales well in the Mg VII calculations. The speed-up factors roughly follow a scaling law of $\simeq 0.9\,np$ (see Figure 2b and Figure 4b). For the Be I calculations, however, as illustrated by Figure 5a and Figure 6a, the speed-up factor increases only slowly with np, reaching a maximum value of 22. The partial CPU times for the orbital updating process, labeled “SetTVCof”, “WithoutMCP”, “WithMCP”, and “SetLAG”, are plotted in Figure 7, together with the total updating time labeled “Update”, for the Mg VII $AS_5$ and the Be I n = 15 calculations (these labels were explained in Section 3.3.1). The partial CPU times labeled “IMPROV” are not reported here, as they are generally negligible. It can be seen that the “SetTVCof” and “WithoutMCP” partial times dominate the total CPU times required for updating the orbitals in the Mg VII $AS_5$ calculations, and they all scale well with np. In the Be I n = 15 calculations, the partial “SetLAG” CPU times are predominant, and the remaining partial CPU times scale well. However, the scaling of both the partial “SetLAG” and the total CPU times is worse than in Mg VII, whereas an extra speed-up for “SetLAG” is observed in the Mg VII $AS_5$ calculations. These different scalings can again be attributed to the fact that a large number of $x^k$ terms contribute to the exchange potentials in the Be I n = 15 calculations, as mentioned above. For each $x^k(abcd)$ term, the relativistic one-dimensional radial integral $Y^k(bd;r)$ is calculated on the grid with hundreds of r values. All these calculations are serial and are often repeated for the same $Y^k(bd;r)$ integral associated with different $x^k(abcd)$ terms that differ from each other only by the a and/or c orbitals. This kind of inefficiency could be removed by calculating all the needed $Y^k(bd;r)$ integrals in advance and storing them in arrays. This will be implemented in future versions of the rmcdhf_mpi_FD code.
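A minimal sketch of this idea is given below (our Python, not GRASP code; the radial-integral routine is only a toy stand-in for the real $Y^k(bd;r)$ solver). Each integral is computed once, stored in a cache keyed by $(k, b, d)$, and reused for every $x^k(abcd)$ term that shares the same pair $(b, d)$.

```python
# Sketch of precomputing/caching Y^k(bd;r) so that x^k(abcd) terms differing
# only in a and/or c do not trigger repeated radial integrations.
import numpy as np

r_grid = np.linspace(1e-5, 20.0, 500)          # stand-in radial grid

def yk_integral(k, pb, qb, pd, qd, r=r_grid):
    """Toy stand-in for the one-dimensional Y^k(bd;r) integral
    (the real GRASP routine is more involved)."""
    rho = pb * pd + qb * qd                    # overlap density of orbitals b, d
    w = np.cumsum(rho * r**k) * (r[1] - r[0])  # crude inner integral
    return w / r**(k + 1)

cache = {}                                     # keyed by (k, b, d)

def yk_cached(k, b, d, orbitals):
    key = (k, min(b, d), max(b, d))            # Y^k(bd;r) is symmetric in b, d
    if key not in cache:
        pb, qb = orbitals[b]
        pd, qd = orbitals[d]
        cache[key] = yk_integral(k, pb, qb, pd, qd)
    return cache[key]

# Toy orbitals: exponential-like radial functions labeled by an index
orbitals = {i: (np.exp(-r_grid / i), 0.1 * np.exp(-r_grid / i)) for i in range(1, 6)}

# Several x^k(abcd) terms that differ only in a and/or c reuse the same integral:
for a, c in [(1, 2), (3, 4), (5, 1)]:
    y = yk_cached(k=1, b=2, d=3, orbitals=orbitals)
print(len(cache))   # -> 1: the integral was computed only once
```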
The MPI performance of the construction of the NYA and NXA arrays is also displayed as dotted lines in Figure 7. As the distributed $LABY_k$ and $LABX_k$ values obtained by the different MPI processes have to be collected, sorted and then re-distributed, the linear scaling begins to deteriorate at about 32 cores for the Be I n = 15 calculations. Fortunately, this construction is performed only once, just before the SCF iterations. This somewhat poorer MPI performance should therefore not be a bottleneck in large-scale MCDHF calculations.
After a thorough investigation of the procedures that could affect the MPI performance of rmcdhf_mpi_FD, we conclude that the poor performance of the diagonalization could be the bottleneck for MCDHF calculations based on relatively large expansions, consisting of hundreds of thousands of CSFs, that target dozens of eigenpairs. We will discuss this issue in the next section.

4. Performance Tests for RCI Codes

The MCDHF calculations are generally followed by RCI calculations employing the GRASP rci and rci_mpi codes. In these calculations, larger CSF expansions than those considered in the MCDHF calculations are used to capture higher-order electron correlation effects. Corrections to the Dirac–Coulomb Hamiltonian, such as the transverse photon interaction and the leading QED corrections, are also taken into account in this configuration-interaction step, without affecting the one-electron orbitals. As mentioned in Section 1, we recently implemented in GRASP2018 [36] a novel computational methodology based on configuration state function generators (CSFGs) to build the Hamiltonian matrix. This strategy takes full advantage of the fact that the spin-angular integrals, such as the coefficients in Equation (5), are independent of the principal quantum numbers. In this approach, the CSF space is divided into two parts, i.e., the labeling space and the correlation space. The former typically accounts for the major correlation effects due to near-degeneracies and long-range rearrangements, while the latter typically accounts for short-range interactions and dynamical correlation. The orbital set is also divided into two parts, i.e., a subset of labeling-ordered (LO) orbitals and a subset of symmetry-ordered (SO) orbitals [36]. The labeling CSFs are built with the LO orbitals only, generated by electron excitations (single (S), double (D), triple (T), quadruple (Q), etc.) from an MR. The correlation CSFs are built with the LO orbitals together with the SO orbitals, generated by SD excitations only, also from the given MR. In the present implementation, at most two electrons are allowed to occupy the SO orbitals.
A CSFG of a given type is a correlation CSF in which one or two electrons occupy the SO orbitals with the highest principal quantum number allowed. Given a CSFG, a group of correlation CSFs can be generated by orbital de-excitations, within the SO orbital set, that preserve the spin-angular coupling. The generated CSFs within the same group differ from each other only by the principal quantum numbers. The use of CSFGs makes it possible to restrict the spin-angular integration to a limited number of cases, rather than performing it for each of the elements in the Hamiltonian matrix. Compared with ordinary RCI calculations employing rci_mpi, the CPU times have been demonstrated to be reduced by factors of ten with the newly developed code, referred to hereafter as rci_mpi_CSFG. It is also found that the Breit contributions involving high orbital angular momenta (l) can be safely discarded. An efficient a priori condensation technique has also been developed by using CSFGs to significantly reduce the expansion sizes, with negligible changes to the computed transition energies. Test calculations are presented in [36] for a number of atomic systems and correlation models with increasing sets of one-electron orbitals. Compared to the original GRASP2018 rci_mpi program, the larger the scale of the calculations, the larger the CPU time reduction factors obtained with rci_mpi_CSFG. The latter is, therefore, very suitable for extremely large-scale calculations. Here we focus on the MPI performances of rci_mpi_CSFG and rci_mpi.
The MPI performance test calculations are performed for the $2^+$ block in Ne VII using the $AS_5$ orbital set, i.e., $10(spdfghi)$. As in the MCDHF calculations, all the possible $(1s^2)2l^3nl$ configurations with $2 \leq n \leq 5$ define the MR. The correlation CSFs are formed by SD excitations from this MR, allowing at most one electron to be excited from the 1s subshell. These CSF expansions model both the VV and CV electron correlation. The number of resulting CSFs is $n_c$ = 2 112 922. This expansion is used in the rci_mpi calculation.
In the rci_mpi_CSFG calculation, all the $n \leq 5$ orbitals are treated as LO orbitals, while the others are regarded as SO orbitals. The CSFs are generated as in the rci_mpi calculation. The labeling space contains 95 130 CSFs, while there are 197 480 CSFGs within the correlation space, spanning 2 017 792 correlation CSFs. The total size of the original CSF expansion, $n_c$, is recovered by adding the sizes of the labeling and correlation spaces, i.e., 95 130 + 2 017 792 = 2 112 922, as it should be. However, the program rci_mpi_CSFG reads a file of only $n_c'$ = 292 610 CSFs, corresponding to the total of the labeling CSFs (95 130) and the CSFGs (197 480). The size is thus reduced by the ratio $R = n_c/n_c' \approx 7$ compared with the file containing all the $n_c$ CSFs treated by rci_mpi. This ratio is very meaningful, being related to the performance enhancement of rci_mpi_CSFG, as the numbers of spin-angular integrations are, respectively, $n_c(n_c+1)/2$ and about $n_c'(n_c'+1)/2$ in the rci_mpi and rci_mpi_CSFG calculations. Ideally, relative to the former, a speed-up factor of $R^2$ is expected for the latter. Obviously, this kind of speed-up cannot be fully achieved, because the spin-angular integration is not the whole computational load of an RCI calculation.
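As a quick arithmetic check of these numbers (our calculation, based on the sizes quoted above),
$$R = \frac{n_c}{n_c'} = \frac{2\,112\,922}{292\,610} \approx 7.2, \qquad R^2 \approx 52,$$
so the number of spin-angular integrations drops by roughly a factor of fifty, while the overall speed-up remains smaller because the spin-angular integration is only part of the total computational load.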
The MPI performances of rci_mpi and rci_mpi_CSFG can be assessed from Table 6. All the MPI calculations are performed for the lowest 54 levels of the $2^+$ block in Ne VII, using various numbers of cores np in the range of 16–128 on a Linux server with two AMD EPYC 7763 64-core processors. Rather than using the zero-first approximation as in the above MCDHF calculations, here all the matrix elements are calculated and taken into account in the RCI calculation. The disk space taken by the nonzero matrix elements is about 173 GB. The CPU times for building the Hamiltonian matrix, for searching for the eigenpairs, and their sums are also shown in Figure 8. The former are precisely reproduced by allometric (power-law) scalings, i.e., $1\,121 \times n_p^{-0.822}$ and $16\,005 \times n_p^{-0.894}$, both in minutes, for rci_mpi_CSFG and rci_mpi, respectively. This means that the CPU times for the matrix construction are reduced by factors of 1.77 and 1.86, respectively, when the number of cores is doubled, implying that both rci_mpi and rci_mpi_CSFG show good MPI scaling for building the Hamiltonian matrix. However, poor MPI scaling is again seen for the diagonalization. The optimal np values for the two codes are both around 32, and the CPU times increase significantly for np > 32. The rci_mpi and rci_mpi_CSFG diagonalizations with np = 128 are longer than those with np = 32 by factors of about 4.1 and 3.7, respectively. The different MPI scalings for the matrix construction and the diagonalization are not unexpected. For the former, once each MPI process has obtained the CSF expansion, no further communication between processes is needed. During the diagonalization procedure, however, a large amount of MPI communication is needed to ensure that each process has the same approximate eigenvectors after every matrix–vector multiplication.
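The quoted doubling factors follow directly from the fitted exponents (our arithmetic):
$$\frac{T(n_p)}{T(2 n_p)} = 2^{0.822} \approx 1.77 \ \ \text{for rci\_mpi\_CSFG}, \qquad 2^{0.894} \approx 1.86 \ \ \text{for rci\_mpi}.$$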
Consequently, considering both tasks (matrix construction and diagonalization), as seen in Table 6 and Figure 8, the optimal np values for the whole code running times are in the ranges of 64–96 and 32–64 for rci_mpi and rci_mpi_CSFG, respectively. The latter outperforms the former by factors of 8.7 and 4.0 for the calculations using 16 and 128 cores, respectively. The best performance of rci_mpi_CSFG is 118 min using 32 cores, while it is 693 min using 64 cores for rci_mpi. The CPU time is thus reduced by a factor of 5.9 with rci_mpi_CSFG, and, moreover, it uses half the number of cores used by rci_mpi, which is particularly attractive on shared servers. The better performance of rci_mpi_CSFG is obviously due to the improvements in both the matrix construction and the diagonalization, i.e., to the implementation of CSFGs and to the additional parallelization discussed above. The CPU times needed for these two tasks are reduced on average by factors of about 10 and 3, respectively.
The scalability of the codes is also of interest when more and more eigenpairs are searched for in a given Hamiltonian matrix. In Figure 9, the diagonalization CPU times are plotted versus $n_L$, the number of searched eigenpairs. These calculations were also performed for the $2^+$ block in Ne VII using 16 cores. We observe that the reported CPU times $T(n_L)$, in minutes, for large enough $n_L$-values, are well reproduced by the quadratic polynomial fits $T(n_L) = 5.20 + 0.519\, n_L + 0.0662\, n_L^2$ and $T(n_L) = 3.46 + 1.54\, n_L + 0.00538\, n_L^2$ for the calculations with rci_mpi and rci_mpi_CSFG, respectively. For the latter, $T(n_L)$ is approximately linear in $n_L$, the quadratic coefficient being over one order of magnitude smaller for this code than for rci_mpi. Consequently, rci_mpi_CSFG outperforms rci_mpi more and more significantly as $n_L$ increases, reducing the diagonalization CPU times by a factor of 4.2 for $n_L$ = 128. This feature of the rci_mpi_CSFG code is very helpful for large-scale spectrum calculations involving hundreds of levels.

5. Conclusions

In summary, the computational load of MCDHF calculations employing the GRASP rmcdhf_mpi code is essentially divided between the orbital-updating process and the matrix diagonalization. The inefficiency found in the former part has been removed by redesigning the calculation of the direct and exchange potentials, as well as of the Lagrange multipliers; as a result, the corresponding CPU times may be reduced by one or two orders of magnitude. For the second part, the additional parallelization of the diagonalization procedure may reduce the CPU times by about a factor of 3. The computational load of RCI calculations employing GRASP rci_mpi can likewise be divided between the Hamiltonian matrix construction and its diagonalization. In addition to the additional parallelization that improves the efficiency of the latter, the load of the former is reduced by a factor of ten or more thanks to the recently implemented computational methodology based on CSFG. Compared with the original rmcdhf_mpi and rci_mpi codes, the present modified versions, rmcdhf_mpi_FD and rci_mpi_CSFG, cut the total computational load of the MCDHF and RCI calculations by factors ranging from a few to a few tens, depending on the scale of the calculation as governed by (i) the size of the CSF expansion, (ii) the size of the orbital set, (iii) the number of desired eigenpairs, and (iv) the number of MPI processes used. In general, the larger the first three, the larger the CPU-time reduction factors obtained with rmcdhf_mpi_FD and rci_mpi_CSFG; conversely, the smaller the number of cores used, the larger the observed reduction factors. These features make the rmcdhf_mpi_FD and rci_mpi_CSFG codes very suitable for extremely large-scale MCDHF and RCI calculations.
The MPI performances of the above four codes, as well as of the memory version of rmcdhf_mpi, i.e., rmcdhf_mem_mpi, have been carefully investigated. All codes show good MPI scaling for the orbital-updating process in the MCDHF calculations and for the matrix-construction step in the RCI calculations, whereas the MPI scaling of the diagonalization is poor. If only a few eigenpairs are searched, or very small CSF expansions are employed, the MPI calculations may be performed using as many cores as possible. To obtain the best performance for large-scale calculations employing hundreds of thousands or millions of CSFs and targeting dozens of levels or more, the relative computational loads of the diagonalization versus the orbital update and the matrix construction should be considered. As the latter two are reduced significantly by the rmcdhf_mpi_FD and rci_mpi_CSFG codes, respectively, the diagonalization will often dominate the computation. In such cases, the MPI calculations should be performed using the optimal number of cores for the diagonalization, generally around 32. The poor MPI scaling of the diagonalization is obviously the bottleneck of the rmcdhf_mpi_FD and rci_mpi_CSFG codes for precise spectrum calculations involving hundreds of levels. How to improve the MPI scaling of the diagonalization remains unclear to us; an MPI/OpenMP hybridization might be helpful. For now, a workaround is provided for large-scale RCI calculations: the Hamiltonian matrix is first calculated using as many cores as possible, and the files storing the nonzero matrix elements are then redistributed by an auxiliary program to match the optimal number of cores for the diagonalization.
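As an illustration of the redistribution step, the sketch below rebalances precomputed matrix columns over a smaller number of MPI ranks so that each rank receives a similar number of nonzero elements. It is a conceptual sketch only: the input format and the greedy strategy are assumptions made for illustration, not the actual GRASP file layout or the auxiliary program mentioned above.

```python
# Conceptual sketch: regroup matrix columns, produced by a large MPI run,
# over a smaller number of ranks with roughly balanced nonzero counts.
import heapq

def rebalance(nnz_per_column, n_ranks):
    """nnz_per_column: iterable of (column_index, number_of_nonzeros)."""
    heap = [(0, rank) for rank in range(n_ranks)]   # (accumulated load, rank)
    heapq.heapify(heap)
    assignment = {}
    # Greedy: hand the heaviest remaining column to the least-loaded rank.
    for col, nnz in sorted(nnz_per_column, key=lambda item: -item[1]):
        load, rank = heapq.heappop(heap)
        assignment[col] = rank
        heapq.heappush(heap, (load + nnz, rank))
    return assignment

# Example: 12 columns of increasing density redistributed over 4 ranks.
columns = [(i, (i + 1) * 1000) for i in range(12)]
print(rebalance(columns, 4))
```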

Author Contributions

Methodology, Y.L., J.L., C.S., C.Z., R.S., K.W., M.G., G.G., P.J. and C.C.; software, Y.L., J.L., C.S., C.Z., R.S., K.W., M.G., G.G., P.J. and C.C.; validation, Y.L., J.L., C.S., C.Z., R.S., K.W., M.G., G.G., P.J. and C.C.; investigation, Y.L., J.L., R.S., K.W., M.G., G.G., P.J. and C.C.; writing—original draft, Y.L., R.S., K.W. and C.C.; writing—review and editing, Y.L., J.L., C.S., C.Z., R.S., K.W., M.G., G.G., P.J. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant nos. 12104095 and 12074081). Y.L. acknowledges support from the China Scholarship Council with Grant No. 202006100114. K.W. expresses his gratitude for the support from the visiting researcher program at Fudan University. M.G. acknowledges support from the Belgian FWO and FNRS Excellence of Science Programme (EOSO022818F).

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank the members of the CompAS group for valuable suggestions for improvements of the computer codes. R.S. and C.Y.C. would like to thank Charlotte Froese Fischer for the suggestions about the performance test. The authors would also like to thank Jacek Bieroń for his valuable comments on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dyall, K.G.; Grant, I.P.; Johnson, C.T.; Parpia, F.A.; Plummer, E.P. Grasp—A General-purpose Relativistic Atomic-structure Program. Comput. Phys. Commun. 1989, 55, 425–456.
2. Grant, I.P.; McKenzie, B.J.; Norrington, P.H.; Mayers, D.F.; Pyper, N.C. An atomic multiconfigurational Dirac-Fock package. Comput. Phys. Commun. 1980, 21, 207–231.
3. Grant, I.P. Relativistic Quantum Theory of Atoms and Molecules. Theory and Computation (Atomic, Optical and Plasma Physics); Springer Science and Business Media, LLC: New York, NY, USA, 2007.
4. Froese Fischer, C.; Godefroid, M.; Brage, T.; Jönsson, P.; Gaigalas, G. Advanced multiconfiguration methods for complex atoms: I. Energies and wave functions. J. Phys. B At. Mol. Opt. Phys. 2016, 49, 182004.
5. Jönsson, P.; Godefroid, M.; Gaigalas, G.; Ekman, J.; Grumer, J.; Li, W.; Li, J.; Brage, T.; Grant, I.P.; Bieroń, J.; et al. An Introduction to Relativistic Theory as Implemented in GRASP. Atoms 2023, 11, 7.
6. Jönsson, P.; Gaigalas, G.; Bieroń, J.; Froese Fischer, C.; Grant, I.P. New version: Grasp2K relativistic atomic structure package. Comput. Phys. Commun. 2013, 184, 2197–2203.
7. Lindgren, I. The Rayleigh-Schrodinger perturbation and the linked-diagram theorem for a multi-configurational model space. J. Phys. B At. Mol. Phys. 1974, 7, 2441–2470.
8. Gu, M.F. The flexible atomic code. Can. J. Phys. 2008, 86, 675–689.
9. Gu, M.F. Energies of 1s22lq (1 ≤ q ≤ 8) states for Z ≤ 60 with a combined configuration interaction and many-body perturbation theory approach. At. Data Nucl. Data Tables 2005, 89, 267–293.
10. Gu, M.F.; Holczer, T.; Behar, E.; Kahn, S.M. Inner-Shell Absorption Lines of Fe VI-Fe XVI: A Many-Body Perturbation Theory Approach. Astrophys. J. 2006, 641, 1227–1232.
11. Si, R.; Guo, X.; Wang, K.; Li, S.; Yan, J.; Chen, C.; Brage, T.; Zou, Y. Energy levels and transition rates for helium-like ions with Z = 10–36. Astron. Astrophys. 2016, 592, A141.
12. Wang, K.; Chen, Z.B.; Zhang, C.Y.; Si, R.; Jönsson, P.; Hartman, H.; Gu, M.F.; Chen, C.Y.; Yan, J. Benchmarking Atomic Data for Astrophysics: Be-like Ions between B II and Ne VII. Astrophys. J. Suppl. Ser. 2018, 234, 40.
13. Wang, K.; Guo, X.; Liu, H.; Li, D.; Long, F.; Han, X.; Duan, B.; Li, J.; Huang, M.; Wang, Y.; et al. Systematic calculations of energy levels and transition rates of Be-like ions with Z = 10–30 using a combined configuration interaction and many-body perturbation theory approach. Astrophys. J. Suppl. Ser. 2015, 218, 16.
14. Wang, K.; Song, C.X.; Jönsson, P.; Ekman, J.; Godefroid, M.; Zhang, C.Y.; Si, R.; Zhao, X.H.; Chen, C.Y.; Yan, J. Large-scale Multiconfiguration Dirac–Hartree–Fock and Relativistic Configuration Interaction Calculations of Transition Data for B-like S XII. Astrophys. J. 2018, 864, 127.
15. Si, R.; Zhang, C.; Cheng, Z.; Wang, K.; Jönsson, P.; Yao, K.; Gu, M.; Chen, C. Energy Levels, Transition Rates and Electron Impact Excitation Rates for the B-like Isoelectronic Sequence with Z = 24–30. Astrophys. J. Suppl. Ser. 2018, 239, 3.
16. Li, J.; Zhang, C.; Del Zanna, G.; Jönsson, P.; Godefroid, M.; Gaigalas, G.; Rynkun, P.; Radžiūtė, L.; Wang, K.; Si, R.; et al. Large-scale Multiconfiguration Dirac–Hartree–Fock Calculations for Astrophysics: C-like Ions from O III to Mg VII. Astrophys. J. Suppl. Ser. 2022, 260, 50.
17. Wang, K.; Si, R.; Dang, W.; Jönsson, P.; Guo, X.L.; Li, S.; Chen, Z.B.; Zhang, H.; Long, F.Y.; Liu, H.T.; et al. Calculations with spectroscopic accuracy: Energies and transition rates in the nitrogen isoelectronic sequence from Ar XII to Zn XXIV. Astrophys. J. Suppl. Ser. 2016, 223, 3.
18. Wang, K.; Jönsson, P.; Ekman, J.; Gaigalas, G.; Godefroid, M.; Si, R.; Chen, Z.; Li, S.; Chen, C.; Yan, J. Extended calculations of spectroscopic data: Energy levels, lifetimes, and transition rates for O-like ions from Cr XVII to Zn XXIII. Astrophys. J. Suppl. Ser. 2017, 229, 37.
19. Song, C.; Zhang, C.; Wang, K.; Si, R.; Godefroid, M.; Jönsson, P.; Dang, W.; Zhao, X.; Yan, J.; Chen, C. Extended calculations with spectroscopic accuracy: Energy levels and radiative rates for O-like ions between Ar XI and Cr XVII. At. Data Nucl. Data Tables 2021, 138, 101377.
20. Si, R.; Li, S.; Guo, X.; Chen, Z.; Brage, T.; Jönsson, P.; Wang, K.; Yan, J.; Chen, C.; Zou, Y. Extended calculations with spectroscopic accuracy: Energy levels and transition properties for the fluorine-like isoelectronic sequence with Z = 24–30. Astrophys. J. Suppl. Ser. 2016, 227, 16.
21. Li, J.; Zhang, C.; Si, R.; Wang, K.; Chen, C. Calculations of energies, transition rates, and lifetimes for the fluorine-like isoelectronic sequence with Z = 31–35. At. Data Nucl. Data Tables 2019, 126, 158–294.
22. Wang, K.; Chen, Z.B.; Si, R.; Jönsson, P.; Ekman, J.; Guo, X.L.; Li, S.; Long, F.Y.; Dang, W.; Zhao, X.H.; et al. Extended relativistic configuration interaction and many-body perturbation calculations of spectroscopic data for the n ≤ 6 configurations in Ne-like ions between Cr XV and Kr XXVII. Astrophys. J. Suppl. Ser. 2016, 226, 14.
23. Zhang, X.; Del Zanna, G.; Wang, K.; Rynkun, P.; Jönsson, P.; Godefroid, M.; Gaigalas, G.; Radžiūtė, L.; Ma, L.; Si, R.; et al. Benchmarking Multiconfiguration Dirac–Hartree–Fock Calculations for Astrophysics: Si-like Ions from Cr XI to Zn XVII. Astrophys. J. Suppl. Ser. 2021, 257, 56.
24. Wang, K.; Jönsson, P.; Gaigalas, G.; Radžiūtė, L.; Rynkun, P.; Del Zanna, G.; Chen, C. Energy levels, lifetimes, and transition rates for P-like ions from Cr X to Zn XVI from large-scale relativistic multiconfiguration calculations. Astrophys. J. Suppl. Ser. 2018, 235, 27.
25. Song, C.; Wang, K.; Del Zanna, G.; Jönsson, P.; Si, R.; Godefroid, M.; Gaigalas, G.; Radžiūtė, L.; Rynkun, P.; Zhao, X.; et al. Large-scale Multiconfiguration Dirac–Hartree–Fock Calculations for Astrophysics: n = 4 Levels in P-like Ions from Mn XI to Ni XIV. Astrophys. J. Suppl. Ser. 2020, 247, 70.
26. Wang, K.; Song, C.X.; Jönsson, P.; Del Zanna, G.; Schiffmann, S.; Godefroid, M.; Gaigalas, G.; Zhao, X.H.; Si, R.; Chen, C.Y.; et al. Benchmarking atomic data from large-scale multiconfiguration Dirac–Hartree–Fock calculations for astrophysics: S-like ions from Cr IX to Cu XIV. Astrophys. J. Suppl. Ser. 2018, 239, 30.
27. Wang, K.; Jönsson, P.; Del Zanna, G.; Godefroid, M.; Chen, Z.; Chen, C.; Yan, J. Large-scale Multiconfiguration Dirac–Hartree–Fock Calculations for Astrophysics: Cl-like Ions from Cr VIII to Zn XIV. Astrophys. J. Suppl. Ser. 2019, 246, 1.
28. Zhang, C.Y.; Wang, K.; Godefroid, M.; Jönsson, P.; Si, R.; Chen, C.Y. Benchmarking calculations with spectroscopic accuracy of excitation energies and wavelengths in sulfur-like tungsten. Phys. Rev. A 2020, 101, 032509.
29. Zhang, C.Y.; Wang, K.; Si, R.; Godefroid, M.; Jönsson, P.; Xiao, J.; Gu, M.F.; Chen, C.Y. Benchmarking calculations with spectroscopic accuracy of level energies and wavelengths in W LVII–W LXII tungsten ions. J. Quant. Spectrosc. Radiat. Transf. 2021, 269, 107650.
30. Zhang, C.Y.; Li, J.Q.; Wang, K.; Si, R.; Godefroid, M.; Jönsson, P.; Xiao, J.; Gu, M.F.; Chen, C.Y. Benchmarking calculations of wavelengths and transition rates with spectroscopic accuracy for W XLVIII through W LVI tungsten ions. Phys. Rev. A 2022, 105, 022817.
31. Guo, X.; Li, M.; Zhang, C.; Wang, K.; Li, S.; Chen, Z.; Liu, Y.; Zhang, H.; Hutton, R.; Chen, C. High accuracy theoretical calculation of wavelengths and transition probabilities in Se-through Ga-like ions of tungsten. J. Quant. Spectrosc. Radiat. Transf. 2018, 210, 204–216.
32. Guo, X.; Li, M.; Si, R.; He, X.; Wang, K.; Dai, Z.; Liu, Y.; Zhang, H.; Chen, C. Accurate study on the properties of spectral lines for Br-like W39+. J. Phys. B At. Mol. Opt. Phys. 2017, 51, 015002.
33. Guo, X.; Grumer, J.; Brage, T.; Si, R.; Chen, C.; Jönsson, P.; Wang, K.; Yan, J.; Hutton, R.; Zou, Y. Energy levels and radiative data for Kr-like W38+ from MCDHF and RMBPT calculations. J. Phys. B At. Mol. Opt. Phys. 2016, 49, 135003.
34. Froese Fischer, C.; Gaigalas, G.; Jönsson, P.; Bieroń, J. GRASP2018—A Fortran 95 version of the general relativistic atomic structure package. Comput. Phys. Commun. 2019, 237, 184–187.
35. Gaigalas, G. Commits on Feb 2, 2022, commit/77aa600ab02b58718b9c5a82ce9e6c638cc09921. Available online: https://www.github.com/compas/grasp2018 (accessed on 20 November 2022).
36. Li, Y.T.; Wang, K.; Si, R.; Godefroid, M.; Gaigalas, G.; Chen, C.Y.; Jönsson, P. Reducing the Computational Load—Atomic Multiconfiguration Calculations based on Configuration State Function Generators. Comput. Phys. Commun. 2022, 283, 108562.
37. Davidson, E.R. Iterative Calculation of A Few of Lowest Eigenvalues and Corresponding Eigenvectors of Large Real-symmetric Matrices. J. Comput. Phys. 1975, 17, 87–94.
38. Stathopoulos, A.; Froese Fischer, C. A Davidson program for finding a few selected extreme eigenpairs of a large, sparse, real, symmetric matrix. Comput. Phys. Commun. 1994, 79, 268–290.
39. Edmonds, A. Angular Momentum in Quantum Mechanics; Princeton University Press: Princeton, NJ, USA, 1957.
40. Parpia, F.A.; Froese Fischer, C.; Grant, I.P. GRASP92: A package for large-scale relativistic atomic structure calculations. Comput. Phys. Commun. 1996, 94, 249–271.
41. Gaigalas, G.; Rudzikas, Z.; Froese Fischer, C. An efficient approach for spin-angular integrations in atomic structure calculations. J. Phys. B At. Mol. Opt. Phys. 1997, 30, 3747.
42. Gaigalas, G.; Fritzsche, S.; Grant, I.P. Program to calculate pure angular momentum coefficients in jj-coupling. Comput. Phys. Commun. 2001, 139, 263–278.
43. Gaigalas, G. A Program Library for Computing Pure Spin-Angular Coefficients for One- and Two-Particle Operators in Relativistic Atomic Theory. Atoms 2022, 10, 129.
44. Gustafsson, S.; Jönsson, P.; Froese Fischer, C.; Grant, I.P. Combining multiconfiguration and perturbation methods: Perturbative estimates of core–core electron correlation contributions to excitation energies in Mg-like iron. Atoms 2017, 5, 3.
45. Gaigalas, G.; Rynkun, P.; Radžiūtė, L.; Kato, D.; Tanaka, M.; Jönsson, P. Energy Level Structure and Transition Data of Er2+. Astrophys. J. Suppl. Ser. 2020, 248, 13.
Figure 1. MPI performances of the first diagonalization in the Mg VII AS3-SCF calculations. Solid lines (left y axis): CPU times (T in s) of rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles), and rmcdhf_mpi_FD (triangles) codes versus the number of MPI processes (np) (also listed in the first "SetH&Diag" line for each code of Table 2). Dashed lines (right y axis): speed-up factors for the three codes, with the same corresponding symbols, estimated from the ratios of T(np = 1) to the others. Dotted line (right y axis) (square symbols): speed-up of rmcdhf_mpi_FD relative to rmcdhf_mpi, calculated as T(rmcdhf_mpi)/T(rmcdhf_mpi_FD).
Figure 2. MPI performances for updating orbitals in the Mg VII AS3-SCF calculations. (a) Solid lines (left y axis): orbital-updating CPU times (T in s) of rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles), and rmcdhf_mpi_FD (triangles) codes versus the number of MPI processes (np) (also listed in the second "Update" line for each code of Table 2). Dashed lines (right y axis): speed-up factors for the three codes, with the same corresponding symbols, estimated from the ratios of T(np = 1) to the others. (b) Speed-up factors of rmcdhf_mpi_FD relative to rmcdhf_mpi (squares) and to rmcdhf_mem_mpi (circles), calculated as T(rmcdhf_mpi)/T(rmcdhf_mpi_FD) and T(rmcdhf_mem_mpi)/T(rmcdhf_mpi_FD), respectively. Speed-up factors of rmcdhf_mem_mpi relative to rmcdhf_mpi (stars), calculated as T(rmcdhf_mpi)/T(rmcdhf_mem_mpi).
Figure 3. MPI performances for the code running times in the Mg VII AS3-SCF calculations. (a) Solid lines (left y axis): walltimes (in s) for rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles), and rmcdhf_mpi_FD (triangles) codes versus the number of MPI processes (np). Dashed lines (right y axis): speed-up factors for the three codes, with the same corresponding symbols, estimated from the ratios of T(np = 1) to the others. (b) Speed-up factors of rmcdhf_mpi_FD relative to rmcdhf_mpi (squares) and to rmcdhf_mem_mpi (circles), respectively. Speed-up factors of rmcdhf_mem_mpi relative to rmcdhf_mpi (stars).
Figure 4. MPI performances of the codes for the Mg VII AS5-SCF calculations. (a) Solid lines (left y axis): walltimes (in s) for rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles) and rmcdhf_mpi_FD (triangles) codes versus the number of MPI processes (np). Dashed line (right y axis): speed-up factors for rmcdhf_mpi_FD (triangles) calculated as the ratios of T(np = 1) to the others. Dotted lines (right y axis): speed-up factors for rmcdhf_mpi_FD relative to rmcdhf_mpi (squares) and to rmcdhf_mem_mpi (circles), respectively. (b) Solid lines (left y axis): orbital-updating CPU times (T in s) for rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles) and rmcdhf_mpi_FD (triangles) calculations. Dashed line (right y axis): speed-up factors (triangles) for rmcdhf_mpi_FD estimated from the ratios of T(np = 1) to the others. The second "Update" times given in Table 3 are shown here. Dotted lines (right y axis): speed-up factors of rmcdhf_mpi_FD relative to rmcdhf_mpi (squares) and to rmcdhf_mem_mpi (circles), respectively.
Figure 5. MPI performances for the Be I 9(spdfghikl)-SCF calculations. (a) Solid lines (left y axis): orbital-updating CPU times (T in s) for rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles), and rmcdhf_mpi_FD (triangles) codes versus the number of MPI processes (np). Dashed line (right y axis): speed-up factors of the three codes (with the same corresponding symbols) estimated as the ratios of T(np = 1) to the others. Dotted line (right y axis): speed-up factors of rmcdhf_mpi_FD relative to rmcdhf_mpi (squares) and to rmcdhf_mem_mpi (circles), calculated as T(rmcdhf_mpi)/T(rmcdhf_mpi_FD) and T(rmcdhf_mem_mpi)/T(rmcdhf_mpi_FD), respectively. (b) Same as in (a), but for the walltimes.
Figure 6. MPI performances for the Be I 15(spdfg)14(hi)13(kl)-SCF calculations. (a) Solid lines (left y axis): orbital-updating CPU times (T in s) for rmcdhf_mpi (squares), rmcdhf_mem_mpi (circles), and rmcdhf_mpi_FD (triangles) codes versus the number of MPI processes (np). Dashed line (right y axis): speed-up factors of rmcdhf_mpi_FD (triangles) calculated as the ratios of T(np = 1) to the others. Dotted line (right y axis): speed-up factors of rmcdhf_mpi_FD relative to rmcdhf_mpi (squares) and to rmcdhf_mem_mpi (circles), calculated as T(rmcdhf_mpi)/T(rmcdhf_mpi_FD) and T(rmcdhf_mem_mpi)/T(rmcdhf_mpi_FD), respectively. (b) Same as in (a), but for the walltimes.
Figure 7. MPI performances for the partial and total CPU times for updating the orbitals in (a) Mg VII AS5 and (b) Be I n = 15 calculations. Solid curves are the CPU times (in s) labeled "Update" (squares), "SetTVCof" (circles), "WithoutMCP" (up-triangles), "WithMCP" (down-triangles), and "SetLAG" (diamonds), respectively, for the first iteration of the rmcdhf_mpi_FD SCF calculations. (Some of them are also listed in Table 3 and Table 5). In addition, the "NXA&NYA" CPU times are shown as the dotted line.
Figure 8. MPI performances of rci_mpi and rci_mpi_CSFG. (a) CPU times (in min) for the construction of the Hamiltonian matrix (squares), its diagonalization (circles), and their sum (triangles) versus the number of MPI processes (np) for the 2+ block in Mg VII calculations with rci_mpi. The dashed curve is reproduced by an allometric fit, see text. (b) Same as in (a), but for calculations with rci_mpi_CSFG.
Figure 9. CPU times (in min) for searching different numbers of eigenpairs of the 2+ block in Mg VII. Dashed lines: CPU times for rci_mpi (squares) and rci_mpi_CSFG (circles). The lines are reproduced by a quadratic polynomial fit. Dotted line (right y axis): speed-up factors of rci_mpi_CSFG relative to rci_mpi (circles).
Table 1. MCDHF calculations for the even states of Mg VII. For each J-block, the number of targeted levels (eigenpairs) and sizes (number of CSFs) of the zero-space and CSF-expansions for the different orbital active sets are listed.
               J = 0     J = 1     J = 2     J = 3
Eigenvalues       23        47        54        36
Zero space      3931     11060     15487     16089
AS1             9912     31046     45779     49053
AS2            19449     65099     99824    109618
AS3            32534    113193    177519    197601
AS4            49167    175328    278864    313002
AS5            69348    251504    403859    455821
Table 2. CPU times (in s) for the Mg VII AS3 SCF calculations using the rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD codes as a function of the number of MPI processes (np). See text for the label meanings.
np                      1          2          4          8         16         24         32         40         48

rmcdhf_mpi
SetH&Diag         1001.71     752.79     608.58     565.55     522.02     546.13     576.30     648.67     759.93
Iteration 1
  SetCof + LAG    8366.72    4960.48    2614.34    1604.33     815.48     571.63     449.01     378.51     320.42
  IMPROV          1727.39     947.32     541.60     332.96     167.40     116.33      90.86      76.15      63.73
  Update         10094.10    5907.80    3155.94    1937.29     982.89     687.96     539.88     454.66     384.15
  SetH&Diag        165.64      88.89      50.55      35.02      24.23      18.59      21.29      21.08      21.49
Iteration 2
  SetCof + LAG    8291.35    4965.60    2615.25    1603.00     815.35     571.02     448.73     378.33     320.51
  IMPROV          1707.80     944.99     540.88     332.16     167.28     116.17      90.84      76.11      63.65
  Update          9999.14    5910.59    3156.13    1935.16     982.63     687.18     539.57     454.45     384.15
  SetH&Diag        167.34      83.26      47.78      29.61      22.60      19.88      18.23      21.88      18.68
Sum(SetH&Diag)    1334.69     924.94     706.91     630.18     568.85     584.60     615.82     691.63     800.10
Sum(Update)      20093.24   11818.39    6312.07    3872.45    1965.52    1375.14    1079.45     909.11     768.30
Walltime            21467      12773       7044       4528       2554       1978       1714       1619       1587

rmcdhf_mem_mpi
SetH&Diag          983.97     723.53     600.12     559.52     515.09     544.02     570.96     649.47     760.98
Iteration 1
  SetCof + LAG    3152.54    1858.49    1205.67     833.63     454.55     312.74     252.01     214.02     170.87
  IMPROV           644.27     381.98     246.99     171.34      91.89      62.32      49.69      41.80      33.40
  Update          3796.81    2240.47    1452.66    1004.96     546.44     375.06     301.69     255.82     204.27
  SetH&Diag        118.09      64.93      38.78      28.84      21.23      16.26      19.59      18.80      20.19
Iteration 2
  SetCof + LAG    3028.23    1859.01    1205.41     833.44     454.53     312.95     251.81     214.09     171.05
  IMPROV           616.07     381.84     246.87     171.41      91.94      62.29      49.69      41.79      33.39
  Update          3644.30    2240.85    1452.28    1004.85     546.47     375.24     301.50     255.89     204.44
  SetH&Diag        116.69      59.44      35.83      23.42      19.38      17.62      16.54      19.51      17.40
Sum(SetH&Diag)    1218.75     847.90     674.73     611.78     555.70     577.90     607.09     687.78     798.57
Sum(Update)       7441.11    4481.32    2904.94    2009.81    1092.91     750.30     603.19     511.71     408.71
Walltime             8822       5420       3636       2662       1677       1353       1233       1221       1228

rmcdhf_mpi_FD
SetH&Diag          962.78     548.74     342.01     246.59     202.29     210.55     222.01     259.01     296.04
NXA&NYA             33.62      17.15       9.22       5.11       2.90       2.09       1.89       1.78       1.80
Iteration 1
  SetTVCof         130.05      68.21      38.31      23.73      11.84       7.96       6.18       5.09       4.39
  WithoutMCP       125.65      32.55      12.06       6.72       3.07       2.08       1.26       1.02       0.80
  WithMCP            0.41       0.41       0.40       0.41       0.43       0.42       0.42       0.42       0.43
  SetLAG             4.61       2.33       4.06       0.64       0.69       0.47       0.53       0.33       0.24
  SetCof + LAG     261.89     104.06      55.15      31.68      16.12      10.99       8.44       6.92       5.90
  IMPROV             0.63       0.32       0.17       0.09       0.05       0.04       0.03       0.03       0.03
  Update           262.52     104.38      55.32      31.77      16.16      11.03       8.47       6.94       5.93
  SetH&Diag        118.26      67.76      37.13      25.90      18.96      15.28      13.52      17.35      20.45
Iteration 2
  SetTVCof         130.01      68.20      38.34      23.67      11.82       7.95       6.16       5.06       4.35
  WithoutMCP       125.13      32.73      12.06       6.72       3.06       2.08       1.25       1.01       0.79
  WithMCP            0.41       0.41       0.40       0.41       0.43       0.42       0.42       0.42       0.43
  SetLAG             4.61       2.33       4.00       0.84       0.67       0.44       0.51       0.33       0.27
  SetCof + LAG     261.28     104.23      55.11      31.83      16.08      10.95       8.39       6.86       5.88
  IMPROV             0.63       0.32       0.17       0.09       0.05       0.04       0.03       0.03       0.03
  Update           261.91     104.55      55.28      31.92      16.12      10.98       8.42       6.88       5.91
  SetH&Diag        112.99      63.32      35.53      22.23      16.77      14.37      15.57      16.98      17.19
Sum(SetH&Diag)    1194.03     679.82     414.67     294.72     238.02     240.20     251.10     293.34     333.68
Sum(Update)        524.43     208.93     110.60      63.69      32.28      22.01      16.89      13.82      11.84
Walltime             1915        997        591        404        301        289        292        331        369
Table 3. CPU times (in s) for the Mg VII AS5 SCF calculations using the rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD codes as a function of the number of MPI processes (np). See text for the label meanings.
                        rmcdhf_mpi                        rmcdhf_mem_mpi                      rmcdhf_mpi_FD
np                  16        32        48           16        32        48           16        32        48
SetH&Diag      1299.62   1605.74   2478.68      1290.83   1598.93   2475.46       452.67    595.13    944.78
NXA&NYA              -         -         -            -         -         -         6.52      4.51      4.26
Iteration 1
  SetTVCof           -         -         -            -         -         -        28.25     13.82      9.90
  WithoutMCP         -         -         -            -         -         -        22.14      9.55      7.15
  WithMCP            -         -         -            -         -         -         1.19      1.19      1.13
  SetLAG             -         -         -            -         -         -        10.32      3.38      1.79
  SetCof + LAG 3172.04   1575.54   1209.01      1999.94    986.68    761.89        62.09     28.06     20.07
  IMPROV        461.45    222.55    168.86       288.38    135.88    103.19         0.07      0.04      0.04
  Update       3633.49   1798.09   1377.87      2288.32   1122.56    865.08        62.16     28.10     20.11
  SetH&Diag      54.67     59.44     55.03        47.39     55.00     51.87        58.24     41.64     48.01
Iteration 2
  SetTVCof           -         -         -            -         -         -        28.09     13.80      9.82
  WithoutMCP         -         -         -            -         -         -        25.32      9.76      7.09
  WithMCP            -         -         -            -         -         -         1.09      1.18      1.14
  SetLAG             -         -         -            -         -         -         7.59      3.18      1.83
  SetCof + LAG 3174.72   1576.07   1207.94      2000.64    986.54    761.64        62.27     28.02     19.95
  IMPROV        461.71    222.33    168.74       288.50    135.88    103.11         0.07      0.04      0.04
  Update       3636.44   1798.41   1376.68      2289.14   1122.43    864.75        62.34     28.06     19.99
  SetH&Diag      54.00     40.46     59.74        46.67     38.17     56.65        43.45     32.79     40.38
Sum(SetH&Diag) 1408.29   1705.64   2593.45      1384.89   1692.10   2583.98       554.36    669.56   1033.17
Sum(Update)    7269.93   3596.50   2754.55      4577.46   2244.99   1729.83       124.50     56.16     40.10
Walltime          8728      5344      5391         6030      3987      4366          754       782      1129
Table 4. CPU times (in s) for the Be I n = 9 SCF calculations using the rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD codes as a function of the number of MPI processes (np). See text for the label meanings.
np                      1         2         4         8        16        24        32        40        48

rmcdhf_mpi
SetH&Diag           73.43     42.57     25.77     15.98      9.61      8.20      7.13      7.34      7.83
Iteration 1
  SetCof + LAG    3441.24   1858.12    905.13    386.56    153.14     89.56     61.99     50.20     41.14
  IMPROV           821.38    454.10    225.33    100.92     41.49     25.73     18.21     14.76     12.38
  Update          4262.62   2312.22   1130.46    487.48    194.63    115.29     80.20     64.96     53.51
  SetH&Diag         71.50     40.70     23.85     14.23      7.73      6.30      5.32      5.20      5.38
Iteration 2
  SetCof + LAG    3440.46   1857.90    903.92    386.53    153.12     89.48     61.97     49.95     41.20
  IMPROV           821.16    453.98    225.12    100.91     41.45     25.72     18.20     14.76     12.37
  Update          4261.62   2311.88   1129.04    487.43    194.57    115.20     80.18     64.71     53.57
  SetH&Diag         71.67     40.72     23.82     14.03      7.72      6.07      5.30      5.20      5.44
Sum(SetH&Diag)     216.60    123.99     73.44     44.24     25.06     20.57     17.75     17.74     18.65
Sum(Update)       8524.24   4624.10   2259.50    974.91    389.20    230.49    160.38    129.67    107.08
Walltime             8741      4748      2333      1019       414       251       178       147       126

rmcdhf_mem_mpi
SetH&Diag           70.68     41.05     24.64     15.57      8.97      7.99      7.35      7.29      7.43
Iteration 1
  SetCof + LAG    3065.84   1656.84    795.63    329.83    123.75     70.08     47.42     37.94     30.53
  IMPROV           741.20    410.89    201.45     88.74     35.15     21.54     15.06     12.19     10.10
  Update          3807.03   2067.72    997.09    418.56    158.90     91.62     62.48     50.12     40.63
  SetH&Diag         69.06     39.42     23.04     13.68      7.49      5.93      5.22      5.12      5.32
Iteration 2
  SetCof + LAG    3065.78   1656.36    795.29    329.60    123.63     69.96     47.37     37.91     30.56
  IMPROV           741.54    410.94    201.15     88.73     35.14     21.51     15.06     12.18     10.10
  Update          3807.32   2067.29    996.44    418.33    158.77     91.47     62.43     50.09     40.66
  SetH&Diag         69.10     39.31     22.92     13.67      7.49      5.91      5.19      5.07      5.40
Sum(SetH&Diag)     208.84    119.78     70.60     42.92     23.95     19.83     17.76     17.48     18.15
Sum(Update)       7614.35   4135.01   1993.53    836.89    317.67    183.09    124.91    100.21     81.29
Walltime             7833      4260      2068       881       342       203       143       118        99

rmcdhf_mpi_FD
SetH&Diag           68.82     39.33     23.11     13.82      8.92      7.43      6.83      7.14      7.30
NXA&NYA             10.98      7.52      5.03      3.21      2.45      2.34      2.48      2.84      3.29
Iteration 1
  SetTVCof           2.21      1.17      0.62      0.32      0.16      0.11      0.08      0.07      0.06
  WithoutMCP         1.69      0.50      0.26      0.14      0.08      0.05      0.04      0.03      0.03
  WithMCP            7.39      4.43      2.93      1.70      0.96      0.70      0.53      0.44      0.39
  SetLAG            43.21     21.89     11.50      5.96      3.16      2.29      1.85      1.79      1.75
  SetCof + LAG      54.64     28.01     15.33      8.13      4.38      3.18      2.55      2.38      2.29
  IMPROV            10.10      5.09      2.70      1.39      0.73      0.54      0.44      0.43      0.42
  Update            64.74     33.10     18.03      9.52      5.11      3.72      2.98      2.81      2.71
  SetH&Diag         67.71     38.12     22.05     12.76      7.71      6.02      5.20      5.08      5.30
Iteration 2
  SetTVCof           2.21      1.17      0.62      0.31      0.16      0.11      0.08      0.07      0.06
  WithoutMCP         1.69      0.50      0.26      0.14      0.08      0.05      0.04      0.03      0.03
  WithMCP            7.42      4.43      2.93      1.70      0.96      0.70      0.53      0.44      0.39
  SetLAG            43.33     21.86     11.52      5.96      3.15      2.28      1.85      1.79      1.75
  SetCof + LAG      54.71     27.97     15.35      8.12      4.37      3.17      2.53      2.38      2.29
  IMPROV            10.13      5.09      2.70      1.38      0.73      0.54      0.43      0.42      0.41
  Update            64.84     33.06     18.05      9.51      5.10      3.71      2.97      2.80      2.70
  SetH&Diag         67.75     38.11     22.11     12.68      7.64      6.01      5.20      5.08      5.25
Sum(SetH&Diag)     204.28    115.56     67.27     39.26     24.27     19.46     17.23     17.30     17.85
Sum(Update)        129.58     66.16     36.08     19.03     10.21      7.43      5.95      5.61      5.41
Walltime              354       194       111        63        38        30        26        26        27
Table 5. CPU times (in s) for the Be I n = 15 SCF calculations using the rmcdhf_mpi, rmcdhf_mem_mpi and rmcdhf_mpi_FD codes as a function of the number of MPI processes (np). See text for the label meanings.
                         rmcdhf_mpi                          rmcdhf_mem_mpi                      rmcdhf_mpi_FD
np                  16        32        48            16        32        48            16        32        48
SetH&Diag       153.13     89.00     74.25        147.02     85.08     72.15        150.04     85.14     72.63
NXA&NYA              -         -         -             -         -         -         80.17     52.07     57.96
Iteration 1
  SetTVCof           -         -         -             -         -         -          3.81      1.95      1.29
  WithoutMCP         -         -         -             -         -         -          1.21      0.61      0.43
  WithMCP            -         -         -             -         -         -         18.86     10.77      7.92
  SetLAG             -         -         -             -         -         -         46.19     28.35     28.12
  SetCof + LAG 16363.36   5402.80   3235.58      15296.31   4869.14   2850.56         70.75     42.75     39.52
  IMPROV        1662.70    569.32    390.06       1578.91    527.69    360.42          3.76      2.29      2.24
  Update       18026.06   5972.12   3625.64      16875.21   5396.83   3210.99         74.51     45.04     41.76
  SetH&Diag      138.23     71.42     55.28        132.74     68.96     53.04        136.68     70.91     54.79
Iteration 2
  SetTVCof           -         -         -             -         -         -          3.81      1.94      1.31
  WithoutMCP         -         -         -             -         -         -          1.21      0.61      0.42
  WithMCP            -         -         -             -         -         -         18.86     10.78      7.93
  SetLAG             -         -         -             -         -         -         46.19     28.29     28.00
  SetCof + LAG 16364.56   5403.63   3234.71      15295.55   4870.41   2849.64         70.63     42.61     39.19
  IMPROV        1662.93    569.19    389.98       1578.93    527.58    360.37          3.77      2.29      2.25
  Update       18027.48   5972.82   3624.69      16874.48   5397.99   3210.00         74.40     44.90     41.44
  SetH&Diag      137.86     71.35     55.03        132.37     68.86     52.82        136.66     70.62     54.62
Sum(SetH&Diag)   429.22    231.77    184.56        412.13    222.90    178.01        423.38    226.67    182.04
Sum(Update)    36053.54  11944.94   7250.33      33749.69  10794.82   6420.99        148.91     89.94     83.20
Walltime          36484     12177      7435         34181     11028      6606           672       379       331
Table 6. CPU times (in min) for the construction of the Hamiltonian matrix (H), its diagonalization (D), and for the cumulated tasks (Sum) using rci_mpi and rci_mpi_CSFG for the 2+ block Mg VII calculations, as a function of the number of MPI processes (np).
                         rci_mpi                            rci_mpi_CSFG
np               H          D        Sum            H          D        Sum
16         1342.08     194.87    1536.95       114.97      61.45     176.42
32          722.68     158.54     881.22        64.98      53.11     118.09
64          370.86     321.84     692.70        34.85      96.66     131.51
96          281.45     439.48     720.93        26.80     149.45     176.25
128         216.50     651.80     868.30        22.42     195.08     217.50