Article

IG-LSPIA: Least Squares Progressive Iterative Approximation for Isogeometric Collocation Method

School of Mathematical Sciences, Zhejiang University, Hangzhou 310058, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 898; https://doi.org/10.3390/math11040898
Submission received: 3 January 2023 / Revised: 7 February 2023 / Accepted: 8 February 2023 / Published: 10 February 2023
(This article belongs to the Special Issue Computer-Aided Geometric Design)

Abstract
The isogeometric collocation method (IGA-C), a promising branch of isogeometric analysis (IGA), can be viewed as fitting the load function with a combination of the numerical solution and its derivatives. In this study, we develop an iterative method, isogeometric least-squares progressive-iterative approximation (IG-LSPIA), to solve the fitting problem in the collocation method. IG-LSPIA starts with an initial blending function, in which the control coefficients are combined with the B-spline basis functions and their derivatives. A new blending function is generated by constructing the differences for collocation points (DCP) and for control coefficients (DCC) and then adding each DCC to the corresponding control coefficient. The procedure is repeated until the stopping criterion is reached. We prove the convergence of IG-LSPIA and show that the computational complexity in each iteration of IG-LSPIA is related only to the number of collocation points and is unrelated to the number of control coefficients. Moreover, an incremental algorithm is designed; it alternates with knot refinement until the desired precision is achieved. After each knot refinement, the result of the last round of IG-LSPIA iterations is used to generate the initial blending function of the new round, thereby saving considerable computation. Experiments show that the proposed method is stable and efficient. In the three-dimensional case, the total computation time is roughly halved compared with the traditional method.

1. Introduction

Isogeometric analysis, proposed by Hughes et al. [1], is an important research field with a wide range of applications in Computer-aided Engineering and Computer-aided Design (CAE/CAD) due to its shared representation for CAD and CAE. The isogeometric collocation method (IGA-C) is an efficient numerical method for isogeometric analysis [2,3,4], where the numerical solution is obtained by solving a linear system formed by substituting collocation points into the strong form of a partial differential equation (PDE). In conventional IGA-C, the number of collocation points equals the degrees of freedom (DoFs) of the numerical solution. In the isogeometric least-squares collocation method (IGA-L), the number of collocation points is larger than the DoFs of the numerical solution [5]. However, the main drawback of these methods is that the exact solution of a PDE is usually unknown, so the approximation error of the numerical solution cannot be calculated, and determining how many DoFs a numerical solution requires is difficult. In fact, IGA-L can be viewed as fitting the load function (the right-hand side of a PDE) with a combination of a numerical solution and its derivatives. In this study, we develop the least-squares progressive-iterative approximation method for solving the linear systems in the isogeometric collocation method (IG-LSPIA) and show its convergence. The computational complexity in each iteration of IG-LSPIA is related only to the number of collocation points and unrelated to the DoFs of the numerical solution; thus, it is suitable for large-scale and incremental data fitting. We then propose an incremental approach, incremental IG-LSPIA (Incre-IG-LSPIA), in which the DoFs of the numerical solution are determined automatically from the fitting precision.
As an iterative method with geometric meaning, progressive-iterative approximation for least-squares fitting (LSPIA) [6,7] has been widely used in CAD [8]; it iteratively generates a sequence of curves (surfaces) to fit a given data set, and the limit of the sequence is proven to be the least-squares solution of the fitting problem. Moreover, Lin et al. [9] proved that LSPIA converges whether or not the iteration matrix is singular. However, LSPIA had not previously been used to solve PDEs. In this study, IG-LSPIA is developed for solving IGA-L. In IG-LSPIA, a sequence of blending functions is constructed to approximate the load function, and the limit of the sequence is the solution of IGA-L [5]. Specifically, in each iteration step of IG-LSPIA, we first calculate the differences for collocation points (DCP) and then the differences for control coefficients (DCC) of the numerical solution; the control coefficients are then updated by adding the DCC. This procedure is repeated until the stopping criterion is reached. Moreover, given a fitting precision, IG-LSPIA can be performed incrementally, and the approach is called Incre-IG-LSPIA. That is, if the fitting error of the current round of IG-LSPIA does not satisfy the given precision, we raise the DoFs of the numerical solution using knot refinement and invoke a new round of IG-LSPIA iterations. In the new round, the result of the previous round after knot refinement serves as the initial value, which greatly saves computation. Incre-IG-LSPIA stops when the given fitting precision of the numerical solution is reached; in this way, the DoFs of the numerical solution are determined automatically. The contributions of this study are summarized as follows.
  • We propose IG-LSPIA, whose computational complexity in each iteration is related only to the number of collocation points.
  • An incremental fitting algorithm is proposed with the IG-LSPIA method, which saves computational costs.
  • Using Incre-IG-LSPIA to fit the load function, the DoFs of the numerical solution can be determined automatically.
The paper is organized as follows: in Section 2, we provide a brief description of LSPIA and isogeometric collocation methods. The background of IGA-C with non-uniform rational B-splines (NURBS) is presented in Section 3. In Section 4, we present the IG-LSPIA method. The convergence analysis is demonstrated in Section 5, and we show the numerical experiments in Section 6. The conclusion and some ideas about future work are presented in Section 7.

2. Related Work

In this section, we provide a concise review of the LSPIA and isogeometric collocation method.

2.1. LSPIA

Motivated by the work of Qi et al. [10] and de Boor [11], Lin et al. [6] proposed the progressive-iterative approximation (PIA) method for fitting data points using NURBS curves and surfaces. As a data-fitting algorithm that avoids solving a linear system, its main idea is to generate a sequence of curves by iteratively adjusting the control points, so that the limit of the curve sequence approaches and finally interpolates the given data points. Based on the local property of PIA, Lin [12] provided a local data-fitting method for curves and surfaces that updates only some of the control points to interpolate the corresponding data points. Subsequent studies have mainly focused on two aspects: accelerating the PIA method and generalizing the algorithm to more forms of curves and surfaces. Lu [13] and Zhang et al. [14] used weighted difference vectors to accelerate the convergence of PIA; the former multiplies all difference vectors by the same optimal weight, whereas the latter uses a weight matrix. Chen et al. [15] set up a progressive interpolation for Catmull–Clark subdivision surfaces. Cheng et al. [16] further developed a PIA for Loop subdivision surface fitting. Chen and Wang [17] designed a PIA format for triangular Bézier surfaces.
Although these techniques are intuitive and efficient, they require that the number of data points equal the number of control points, and they cannot solve least-squares problems. To address this, Lin and Zhang [18] developed an extended PIA format (EPIA), and Deng and Lin [7] subsequently proposed LSPIA, in which the number of data points can be much larger than that of the control points, a common situation in practical fitting problems. They also proposed an incremental fitting idea, where each new round of iterations uses the previous fitting result as the initial guess, which enhances the convergence rate. Lin et al. [9] further verified that, even if a singular matrix appears in the problem, LSPIA still converges under certain conditions; this result theoretically guarantees the applicability of the LSPIA algorithm to a wider range of fitting problems. Zhang et al. [19,20] then applied LSPIA to generalized B-spline curves and NURBS. In addition to data fitting, PIA has been widely used in path planning [21], data compression [22], and implicit surface generation [23,24]. For more details, we refer the reader to the review [8].
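The LSPIA update reviewed above can be sketched in a few lines. The following is our own minimal illustration, not the implementation of any cited work: it fits m = 5 data values with n = 3 control points using a hypothetical linear B-spline (hat-function) basis matrix, iterating P ← P + μAᵀ(Q − AP); the weight μ = 2/tr(AᵀA) is one conservative choice that satisfies 0 < μ < 2/λmax(AᵀA) for a positive semi-definite matrix of rank greater than one.

```python
# Minimal LSPIA sketch (illustrative, not a paper's implementation):
# iterate P <- P + mu * A^T (Q - A P) toward the least-squares fit.

def mat_vec(M, x):
    return [sum(a * b for a, b in zip(row, x)) for row in M]

def mat_mul(M, N):
    Nt = list(zip(*N))
    return [[sum(a * b for a, b in zip(row, col)) for col in Nt] for row in M]

# Hypothetical setup: 5 data values Q and a 5x3 linear B-spline (hat) basis matrix A.
Q = [0.0, 0.3, 1.0, 0.3, 0.0]
A = [[1.0, 0.0, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
At = [list(row) for row in zip(*A)]
AtA = mat_mul(At, A)

# tr(A^T A) >= lambda_max for a PSD matrix, so mu = 2/tr is a safe (if slow) weight.
mu = 2.0 / sum(AtA[j][j] for j in range(len(AtA)))

P = [0.0, 0.0, 0.0]                                      # zero initial control points
for _ in range(2000):
    delta = [q - v for q, v in zip(Q, mat_vec(A, P))]    # difference vectors
    adjust = [mu * v for v in mat_vec(At, delta)]        # control-point adjustments
    P = [p + d for p, d in zip(P, adjust)]

# The limit satisfies the normal equations A^T A P = A^T Q.
residual = max(abs(l - r) for l, r in zip(mat_vec(AtA, P), mat_vec(At, Q)))
```

At convergence the residual of the normal equations vanishes, which is the least-squares property of the LSPIA limit discussed above.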

2.2. Isogeometric Collocation Methods

The theoretical study of the isogeometric collocation method has always been a challenge. Auricchio et al. [2] first developed a theoretical analysis of the algorithm in the 1D case. Lin et al. [3] then explored the algorithm's consistency and convergence. Furthermore, Lin et al. [4] provided the algorithm's convergence order and a necessary and sufficient condition ensuring its consistency.
Another major bottleneck in the use of IGA-C is that the linear systems derived from it typically become ill-conditioned as the grid size and spline degree increase. Remarkable progress has been made in recent years in improving the stability and convergence rate of IGA-C, focusing mainly on selecting appropriate collocation points or on methods for solving the linear systems.
For example, Anitescu et al. [25] applied collocation points derived from superconvergence theory to obtain an optimal convergence order for IGA-C. Donatelli et al. [26] introduced a fast multigrid solver based on the spectral analysis of the collocation matrix; the symbol functions capturing the spectral information can be beneficial for designing efficient preconditioners and solvers, but they are challenging to construct. In addition, Veiga et al. [27] constructed overlapping additive Schwarz (OAS) preconditioners, resulting in a robust formulation. Cho [28] later developed optimal multilevel preconditioners for isogeometric collocation methods and showed their consistent performance under various conditions, including the basis function, spline degree, and domain deformations. Lin et al. [5] proposed a least-squares collocation method that allows a more flexible number of collocation points and is shown to be more stable than conventional IGA-C. In this work, we adopt an incremental iterative method that reuses the information from the last round of iterations and thus reduces the total computational cost.

3. Preliminaries

In this section, we present a short description of the basics of the isogeometric collocation method and IGA-L method [5].
The following boundary value problem is considered:
$$ L u = f \quad \text{in } \Omega, \qquad G u = g \quad \text{on } \partial\Omega, $$
where $u : \Omega \to \mathbb{R}$ is the unknown solution, $L$ is a differential operator, $G$ is a boundary operator, and $f$ and $g$ are continuous functions. The main idea of the isogeometric analysis algorithm is to adopt the same basis functions to represent the numerical solution $u_h$ and the geometric map $G$. In industrial production and applications, we usually select the NURBS basis functions, i.e.,
$$ R_j(\xi) = \frac{N_{j,p}(\xi)\, w_j}{\sum_{\hat{j}=1}^{n} N_{\hat{j},p}(\xi)\, w_{\hat{j}}}, $$
where $N_{j,p}(\xi)$ denotes the B-spline basis function of degree $p$, and $w_j$ is the weight. Multidimensional NURBS basis functions can be defined analogously using tensor products:
$$ R_{ij}(\xi,\eta) = \frac{N_{i,p_1}(\xi)\, N_{j,p_2}(\eta)\, w_{ij}}{\sum_{\hat{i}=1}^{n_1} \sum_{\hat{j}=1}^{n_2} N_{\hat{i},p_1}(\xi)\, N_{\hat{j},p_2}(\eta)\, w_{\hat{i}\hat{j}}}, \qquad R_{ijk}(\xi,\eta,\zeta) = \frac{N_{i,p_1}(\xi)\, N_{j,p_2}(\eta)\, N_{k,p_3}(\zeta)\, w_{ijk}}{\sum_{\hat{i}=1}^{n_1} \sum_{\hat{j}=1}^{n_2} \sum_{\hat{k}=1}^{n_3} N_{\hat{i},p_1}(\xi)\, N_{\hat{j},p_2}(\eta)\, N_{\hat{k},p_3}(\zeta)\, w_{\hat{i}\hat{j}\hat{k}}}, $$
where $p_1$, $p_2$, and $p_3$ indicate the degrees of the basis functions in the corresponding parametric directions.
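For readers implementing these formulas, a rational basis value can be computed from the Cox–de Boor recursion for the B-spline basis. The sketch below is our illustration only; the knot vector and weights are arbitrary choices, not taken from the paper. It evaluates $R_j(\xi)$ for a cubic basis and checks the partition-of-unity property $\sum_j R_j(\xi) = 1$.

```python
# Sketch: evaluate NURBS basis R_j(xi) = N_{j,p}(xi) w_j / sum_k N_{k,p}(xi) w_k.
# Knot vector and weights below are arbitrary illustrative choices.

def bspline_basis(j, p, U, t):
    """Cox-de Boor recursion for the B-spline basis N_{j,p} on knot vector U."""
    if p == 0:
        return 1.0 if U[j] <= t < U[j + 1] else 0.0
    left = right = 0.0
    if U[j + p] != U[j]:
        left = (t - U[j]) / (U[j + p] - U[j]) * bspline_basis(j, p - 1, U, t)
    if U[j + p + 1] != U[j + 1]:
        right = (U[j + p + 1] - t) / (U[j + p + 1] - U[j + 1]) * bspline_basis(j + 1, p - 1, U, t)
    return left + right

def nurbs_basis(p, U, w, t):
    """All n rational basis values at parameter t."""
    n = len(U) - p - 1
    N = [bspline_basis(j, p, U, t) for j in range(n)]
    denom = sum(Nj * wj for Nj, wj in zip(N, w))
    return [Nj * wj / denom for Nj, wj in zip(N, w)]

p = 3
U = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]      # open knot vector, n = 5 basis functions
w = [1.0, 0.8, 1.2, 0.9, 1.0]          # positive weights (arbitrary)

# Both the B-spline and the rational basis form a partition of unity on [0, 1).
sums_N = [sum(bspline_basis(j, p, U, t) for j in range(5)) for t in (0.1, 0.3, 0.7)]
sums = [sum(nurbs_basis(p, U, w, t)) for t in (0.1, 0.3, 0.7)]
```

The open (clamped) knot vector makes the first and last basis functions interpolatory at the ends of the parametric domain, which is why it is the standard choice in IGA.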
In the isogeometric collocation method, a finite set of collocation points is further introduced to discretize the partial differential Equation (1) directly, avoiding the construction of a variational form. The selection of collocation points is not unique but highly important. The Greville abscissae [29] are adopted in general, but the Demko abscissae [30] and superconvergent points [25] are also used. Without loss of generality, we consider the 1D case in the following. Let $\hat{\tau}_i$, $i = 1, 2, \ldots, m$, be the collocation points on the parametric domain $\Omega_p$. Then, the corresponding collocation points $\tau_i \in \Omega$ can be calculated using
$$ \tau_i = G(\hat{\tau}_i) = \sum_{j=1}^{n} R_j(\hat{\tau}_i)\, P_j, \quad i = 1, 2, \ldots, m. $$
Furthermore, the numerical solution is defined as
$$ u_h(\hat{\tau}) = \sum_{j=1}^{n} R_j(\hat{\tau})\, u_j, $$
or
$$ u_h(\tau) = \sum_{j=1}^{n} R_j(G^{-1}(\tau))\, u_j, $$
where $u_j$, $j = 1, 2, \ldots, n$, are the control coefficients. The parametric domain is set as $\Omega_p = [0,1]$ with $\hat{\tau}_1 = 0$ and $\hat{\tau}_m = 1$, and the index set $I$ of the collocation points can be partitioned into two subsets, as follows:
$$ I_L = \{2, 3, \ldots, m-1\}, \qquad I_G = \{1, m\}, $$
where $I_L$ and $I_G$ denote the indices of the interior and boundary collocation points, respectively. Substituting Equation (2) into Equation (1), the numerical solution $u_h(\hat{\tau})$ satisfies
$$ L u_h(\hat{\tau}_i) = f(G(\hat{\tau}_i)), \quad i \in I_L, \qquad G u_h(\hat{\tau}_i) = g(G(\hat{\tau}_i)), \quad i \in I_G. $$
Arranging the control coefficients $u_j$ of the numerical solution $u_h(\hat{\tau})$ into an $n$-dimensional column vector, i.e., $U = [u_1, u_2, \ldots, u_n]^T$, the discretized problem (4) can be reformulated in matrix form as
AU = b ,
with
$$ A_{ij} = A_j(\hat{\tau}_i) = \begin{cases} L R_j(\hat{\tau}_i), & i \in I_L, \\ G R_j(\hat{\tau}_i), & i \in I_G, \end{cases} \quad j = 1, 2, \ldots, n, \qquad b_i = b(\hat{\tau}_i) = \begin{cases} f(G(\hat{\tau}_i)), & i \in I_L, \\ g(G(\hat{\tau}_i)), & i \in I_G, \end{cases} $$
where A is the collocation matrix, and  b denotes the load vector. Finally, solving the system of Equation (5), we obtain the control coefficients { u j } j = 1 n and further achieve the numerical solution u h ( τ ^ ) by substituting the control coefficients into Equation (3).
The number of collocation points is assumed to be larger than that of the control coefficients $\{u_j\}_{j=1}^{n}$ and geometric control points $\{P_j\}_{j=1}^{n}$, i.e., $m > n$. Then, the linear system of Equation (5) can be solved using the IGA-L method, that is, by finding a least-squares solution satisfying
$$ \min_{U} \| AU - b \|_2, $$
where A and b are the collocation matrix and load vector, respectively.
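To make the structure of this least-squares collocation system concrete, the sketch below assembles $A_{ij} = L R_j(\hat{\tau}_i)$ and solves $\min_U \|AU - b\|_2$ via the normal equations for a toy 1D problem $u'' = f$, $u(0) = u(1) = 0$. It is our illustration only: instead of a NURBS basis it uses the hypothetical basis $\varphi_j(x) = \sin(j\pi x)$, which satisfies the boundary conditions exactly, so only interior collocation rows appear, and $f$ is chosen so that the exact solution is $u = \sin(\pi x)$.

```python
import math

# Toy IGA-L sketch: m = 20 collocation points, n = 3 basis functions (m > n).
# Basis phi_j(x) = sin(j*pi*x) satisfies the boundary conditions, and
# L phi_j = phi_j'' = -(j*pi)^2 phi_j, so the collocation matrix is explicit.
m, n = 20, 3
taus = [(i + 1) / (m + 1) for i in range(m)]          # interior collocation points
A = [[-(j * math.pi) ** 2 * math.sin(j * math.pi * t) for j in range(1, n + 1)]
     for t in taus]
# Load: f = -pi^2 sin(pi x), i.e. u'' = f with exact solution u = sin(pi x).
b = [-math.pi ** 2 * math.sin(math.pi * t) for t in taus]

def solve(M, rhs):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    k = len(M)
    aug = [row[:] + [r] for row, r in zip(M, rhs)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, k):
            fac = aug[r][col] / aug[col][col]
            for c in range(col, k + 1):
                aug[r][c] -= fac * aug[col][c]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (aug[r][k] - sum(aug[r][c] * x[c] for c in range(r + 1, k))) / aug[r][r]
    return x

# Normal equations A^T A U = A^T b give the least-squares solution.
AtA = [[sum(A[i][r] * A[i][c] for i in range(m)) for c in range(n)] for r in range(n)]
Atb = [sum(A[i][r] * b[i] for i in range(m)) for r in range(n)]
U = solve(AtA, Atb)   # expect U ~ [1, 0, 0], recovering u = sin(pi x)
```

Here the load vector happens to lie exactly in the column space of $A$, so the least-squares solution reproduces the exact solution; in general the residual is only minimized.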

4. IG-LSPIA Method

In this section, we present the least-squares progressive-iterative approximation for isogeometric collocation methods (IG-LSPIA). We mainly describe the 1D case, and its generalization to the multidimensional cases is straightforward.
The discretized values $\{b_i\}_{i=1}^{m}$ of the load function in Equation (6) are considered an ordered point set, and the collocation points on the parametric domain, $0 = \hat{\tau}_1 < \hat{\tau}_2 < \cdots < \hat{\tau}_m = 1$, are their corresponding parameters. Given an initial guess of the control coefficients $\{u_j^{(0)}\}_{j=1}^{n}$ with $n < m$, we obtain an initial blending function
$$ U^{(0)}(\hat{\tau}) = \sum_{j=1}^{n} A_j(\hat{\tau})\, u_j^{(0)}, \quad \hat{\tau} \in [\hat{\tau}_1, \hat{\tau}_m], $$
where A j ( τ ^ ) , j = 1 , 2 , , n , represents the combination of different order derivatives of the NURBS basis functions determined using the operators L and G  (1), i.e.,
$$ A_j(\hat{\tau}) = \begin{cases} L R_j(\hat{\tau}), & \hat{\tau} \in \Omega_p^{in}, \\ G R_j(\hat{\tau}), & \hat{\tau} \in \Omega_p^{bd}, \end{cases} \qquad j = 1, 2, \ldots, n. $$
$\Omega_p^{in}$ and $\Omega_p^{bd}$ indicate the interior and boundary of the parametric domain, respectively. Each $A_j(\hat{\tau})$ corresponds to the $j$th control coefficient. The initial control coefficients can be selected arbitrarily; in our experiments, we choose $u_j^{(0)} = 0$, $j = 1, 2, \ldots, n$, because when the collocation matrix is singular, a zero initial guess makes the IG-LSPIA method converge to the Moore–Penrose (M-P) pseudo-inverse solution of the linear system, i.e., the solution with the minimum Euclidean norm [9].
Assume that $J_{in}$ and $J_{bd}$ are the index sets of the internal and boundary control coefficients, respectively. Without loss of generality, we further assume that the boundary control coefficients have been obtained using strong or weak imposition [31] and are fixed, i.e.,
$$ u_j^{(k)} = u_j^{*}, \quad j \in J_{bd}, \quad k = 0, 1, 2, \ldots. $$
Initially, we calculate the differences for collocation points (DCP) in the first iteration step,
$$ \delta_i^{(0)} = b_i - \sum_{j=1}^{n} A_j(\hat{\tau}_i)\, u_j^{(0)} = b_i - \sum_{j \in J_{bd}} A_j(\hat{\tau}_i)\, u_j^{(0)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, u_j^{(0)}, \quad i = 1, 2, \ldots, m, $$
and group all load values whose parameters fall in the local support of the $j$th function, i.e., where $A_j(\hat{\tau}) \neq 0$, into the $j$th group, corresponding to the $j$th control coefficient. In the following, we denote the index set of the $j$th group of load values by $I_j$.
Subsequently, we derive the differences for control coefficients (DCC) as follows:
$$ d_j^{(0)} = \mu \sum_{h \in I_j} A_j(\hat{\tau}_h)\, \delta_h^{(0)}, \quad j = 1, 2, \ldots, n, $$
where $\mu$ is a normalization weight that guarantees the convergence of our algorithm. Note that, by assumption (9), the boundary DCC are zero, i.e., $d_j^{(k)} = 0$, $j \in J_{bd}$, $k = 0, 1, 2, \ldots$. Then, the new control coefficients $u_j^{(1)}$ are generated by adding $d_j^{(0)}$ to $u_j^{(0)}$, i.e.,
$$ u_j^{(1)} = u_j^{(0)} + d_j^{(0)}, \quad j = 1, 2, \ldots, n. $$
Substituting them into Equation (8), we finally obtain the new blending function, as follows:
$$ U^{(1)}(\hat{\tau}) = \sum_{j=1}^{n} A_j(\hat{\tau})\, u_j^{(1)}, \quad \hat{\tau} \in [\hat{\tau}_1, \hat{\tau}_m]. $$
The kth blending function, generated after the kth iteration of IG-LSPIA, is assumed to be as follows:
$$ U^{(k)}(\hat{\tau}) = \sum_{j=1}^{n} A_j(\hat{\tau})\, u_j^{(k)}, \quad \hat{\tau} \in [\hat{\tau}_1, \hat{\tau}_m]. $$
Then, the DCP in the ( k + 1 ) st iteration are obtained using
$$ \delta_i^{(k)} = b_i - \sum_{j=1}^{n} A_j(\hat{\tau}_i)\, u_j^{(k)} = b_i - \sum_{j \in J_{bd}} A_j(\hat{\tau}_i)\, u_j^{(k)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, u_j^{(k)}, \quad i = 1, 2, \ldots, m. $$
Furthermore, the DCC are obtained as follows:
$$ d_j^{(k)} = \mu \sum_{h \in I_j} A_j(\hat{\tau}_h)\, \delta_h^{(k)}, \quad j = 1, 2, \ldots, n. $$
Thus, the new control coefficients are updated via the following relation:
$$ u_j^{(k+1)} = u_j^{(k)} + d_j^{(k)}, \quad j = 1, 2, \ldots, n. $$
Consequently, the  ( k + 1 ) st blending function is as follows:
$$ U^{(k+1)}(\hat{\tau}) = \sum_{j=1}^{n} A_j(\hat{\tau})\, u_j^{(k+1)}. $$
The above iteration process is performed until the desired fitting error is reached, yielding a sequence of blending functions
$$ U^{(k)}(\hat{\tau}), \quad k = 0, 1, \ldots. $$
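The DCP/DCC loop described in this section can be sketched abstractly. In the illustration below, which is ours and uses a made-up 6×4 blending matrix rather than actual NURBS derivative values, the first and last columns play the role of boundary coefficients held fixed as in assumption (9); only the interior coefficients are updated via the DCC step, and the limit satisfies the normal equations of the constrained problem.

```python
# IG-LSPIA iteration sketch (illustrative values, not from the paper).
# Columns 0 and 3 of A play the role of boundary coefficients, held fixed;
# columns 1 and 2 (matrix B) are the interior coefficients being updated.

A = [[1.0, 0.2, 0.0, 0.0],
     [0.6, 0.8, 0.1, 0.0],
     [0.2, 1.0, 0.4, 0.0],
     [0.0, 0.4, 1.0, 0.2],
     [0.0, 0.1, 0.8, 0.6],
     [0.0, 0.0, 0.2, 1.0]]
b = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0]
u_bd = [0.5, 0.25]                       # fixed boundary coefficients u*_bd
bd, interior = [0, 3], [1, 2]

m = len(A)
B = [[A[i][j] for j in interior] for i in range(m)]
# Fixed boundary contribution A_bd u*_bd to every collocation residual.
bd_part = [sum(A[i][j] * u for j, u in zip(bd, u_bd)) for i in range(m)]

BtB = [[sum(B[i][r] * B[i][c] for i in range(m)) for c in range(2)] for r in range(2)]
mu = 2.0 / (BtB[0][0] + BtB[1][1])       # 2/trace <= 2/lambda_max for PSD matrices

u_in = [0.0, 0.0]                        # zero initial interior coefficients
for _ in range(3000):
    # DCP: differences at the collocation points
    delta = [b[i] - bd_part[i] - sum(B[i][c] * u_in[c] for c in range(2))
             for i in range(m)]
    # DCC: differences for the interior control coefficients
    d = [mu * sum(B[i][c] * delta[i] for i in range(m)) for c in range(2)]
    u_in = [u + dc for u, dc in zip(u_in, d)]

# Check the limit against the normal equations B^T B u_in = B^T (b - A_bd u*_bd).
rhs = [sum(B[i][c] * (b[i] - bd_part[i]) for i in range(m)) for c in range(2)]
lhs = [sum(BtB[r][c] * u_in[c] for c in range(2)) for r in range(2)]
residual = max(abs(x - y) for x, y in zip(lhs, rhs))
```

The final check mirrors the convergence result proved in the next section: the interior coefficients converge to the solution of the constrained least-squares collocation problem.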

5. Convergence Analysis of the Iterative Scheme

In this section, we show that the IG-LSPIA is convergent and converges to the solution of a constrained optimization problem derived from the least-squares collocation method [5].
Assume that $I_{in}$ is the index set of the internal collocation points.
Based on Equations (9), (10), and (12), the ( k + 1 ) st internal DCP follow the rule:
$$ \begin{aligned} \delta_i^{(k+1)} &= b_i - \sum_{j=1}^{n} A_j(\hat{\tau}_i)\, u_j^{(k+1)} = b_i - \sum_{j \in J_{bd}} A_j(\hat{\tau}_i)\, u_j^{(k+1)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, u_j^{(k+1)} \\ &= b_i - \sum_{j \in J_{bd}} A_j(\hat{\tau}_i)\, u_j^{(k+1)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i) \big( u_j^{(k)} + d_j^{(k)} \big) \\ &= b_i - \sum_{j \in J_{bd}} A_j(\hat{\tau}_i)\, u_j^{(k)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, u_j^{(k)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, d_j^{(k)} \\ &= b_i - \sum_{j=1}^{n} A_j(\hat{\tau}_i)\, u_j^{(k)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, d_j^{(k)} = \delta_i^{(k)} - \sum_{j \in J_{in}} A_j(\hat{\tau}_i)\, d_j^{(k)}, \quad i \in I_{in}. \end{aligned} $$
According to Equations (11) and (14), the ( k + 1 ) st internal DCC are derived using
$$ \begin{aligned} d_j^{(k+1)} &= \mu \sum_{h \in I_j} A_j(\hat{\tau}_h)\, \delta_h^{(k+1)} = \mu \sum_{h \in I_j \cap I_{in}} A_j(\hat{\tau}_h)\, \delta_h^{(k+1)} \\ &= \mu \sum_{h \in I_j \cap I_{in}} A_j(\hat{\tau}_h) \Big( \delta_h^{(k)} - \sum_{t \in J_{in}} A_t(\hat{\tau}_h)\, d_t^{(k)} \Big) \\ &= \mu \sum_{h \in I_j \cap I_{in}} A_j(\hat{\tau}_h)\, \delta_h^{(k)} - \mu \sum_{h \in I_j \cap I_{in}} A_j(\hat{\tau}_h) \sum_{t \in J_{in}} A_t(\hat{\tau}_h)\, d_t^{(k)} \\ &= d_j^{(k)} - \mu \sum_{h \in I_j \cap I_{in}} A_j(\hat{\tau}_h) \sum_{t \in J_{in}} A_t(\hat{\tau}_h)\, d_t^{(k)}, \quad j \in J_{in}. \end{aligned} $$
To show the convergence of the IG-LSPIA, Equation (15) can be rewritten in matrix form as follows:
$$ D_{in}^{(k+1)} = D_{in}^{(k)} - \mu B^T B\, D_{in}^{(k)} = \big( I_2 - \mu B^T B \big) D_{in}^{(k)}, $$
with
$$ D_{in}^{(k)} = \big[ d_{n_1+1}^{(k)}, d_{n_1+2}^{(k)}, \ldots, d_{n}^{(k)} \big]^T, \qquad B = \begin{bmatrix} A_{n_1+1}(\hat{\tau}_{m_1+1}) & A_{n_1+2}(\hat{\tau}_{m_1+1}) & \cdots & A_{n}(\hat{\tau}_{m_1+1}) \\ A_{n_1+1}(\hat{\tau}_{m_1+2}) & A_{n_1+2}(\hat{\tau}_{m_1+2}) & \cdots & A_{n}(\hat{\tau}_{m_1+2}) \\ \vdots & \vdots & \ddots & \vdots \\ A_{n_1+1}(\hat{\tau}_{m}) & A_{n_1+2}(\hat{\tau}_{m}) & \cdots & A_{n}(\hat{\tau}_{m}) \end{bmatrix}, $$
where $n_1$ and $m_1$ ($m_1 > n_1$) are the numbers of boundary control coefficients and boundary collocation points, respectively; $n_1+1, n_1+2, \ldots, n$ are the indices of the internal control coefficients, and $\hat{\tau}_{m_1+1}, \hat{\tau}_{m_1+2}, \ldots, \hat{\tau}_m$ are the internal collocation points. Let $n_2$ and $m_2$ ($m_2 > n_2$) denote the numbers of internal control coefficients and internal collocation points, respectively; then $m = m_1 + m_2$ and $n = n_1 + n_2$. In addition, $I_2$ is the identity matrix of rank $n_2$, and $B$ is an $m_2 \times n_2$ matrix. From Equation (16), the iteration matrix for the internal control coefficients is $I_2 - \mu B^T B$.
Lemma 1.
If the normalization weight satisfies $0 < \mu < 2/\lambda_{max}(B^T B)$, where $B$ is given in Equation (17), then the eigenvalues of $\mu B^T B$ are real numbers and satisfy $0 \le \lambda(\mu B^T B) < 2$.
Proof. 
The matrix $B^T B$ is positive semi-definite; thus its eigenvalues, and hence those of $\mu B^T B$, are nonnegative real numbers, i.e.,
$$ \lambda(\mu B^T B) \ge 0. $$
Moreover, since we select $0 < \mu < 2/\lambda_{max}(B^T B)$, we obtain the following:
$$ \lambda(\mu B^T B) \le \lambda_{max}(\mu B^T B) = \mu\, \lambda_{max}(B^T B) < \frac{2}{\lambda_{max}(B^T B)} \cdot \lambda_{max}(B^T B) = 2. $$
In summary, the eigenvalues of the matrix $\mu B^T B$ satisfy $0 \le \lambda < 2$, thereby completing the proof.    □
Remark 1.
In practice, we select $\mu = 2/\|B^T B\|_F$, which satisfies the condition in Lemma 1.
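Lemma 1 can also be checked numerically. The sketch below is our illustration with an arbitrary matrix $B$ (not from the paper): it estimates $\lambda_{max}(B^T B)$ by power iteration, picks a weight in $(0, 2/\lambda_{max})$, and verifies that $I - \mu B^T B$ acts as a contraction by driving an arbitrary DCC vector toward zero.

```python
# Numerical check of Lemma 1 on an arbitrary full-rank matrix B (illustrative).
B = [[1.0, 0.3, 0.0],
     [0.4, 1.0, 0.2],
     [0.1, 0.5, 1.0],
     [0.0, 0.2, 0.9]]
m, n = 4, 3
BtB = [[sum(B[i][r] * B[i][c] for i in range(m)) for c in range(n)] for r in range(n)]

# Power iteration for lambda_max of the symmetric PSD matrix B^T B.
v = [1.0] * n
for _ in range(500):
    w = [sum(BtB[r][c] * v[c] for c in range(n)) for r in range(n)]
    norm = max(abs(x) for x in w)
    v = [x / norm for x in w]
lam_max = norm        # dominant eigenvalue estimate

mu = 1.0 / lam_max    # satisfies 0 < mu < 2 / lambda_max
D = [1.0, -2.0, 0.5]  # arbitrary DCC vector
for _ in range(300):
    BtBD = [sum(BtB[r][c] * D[c] for c in range(n)) for r in range(n)]
    D = [d - mu * x for d, x in zip(D, BtBD)]   # D <- (I - mu B^T B) D

shrunk = max(abs(d) for d in D)                 # should decay toward zero
```

Since $B$ here has full column rank, all eigenvalues of $I - \mu B^T B$ lie strictly inside the unit interval, matching Theorem 1 below.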
Theorem 1.
When B T B is nonsingular, where B is defined in Equation (17), the iterative format (16) is convergent.
Proof. 
From Equation (16), we obtain the following equations:
$$ D_{in}^{(k+1)} = \big( I_2 - \mu B^T B \big) D_{in}^{(k)} = \big( I_2 - \mu B^T B \big)^2 D_{in}^{(k-1)} = \cdots = \big( I_2 - \mu B^T B \big)^{k+1} D_{in}^{(0)}. $$
According to Lemma 1 and the assumptions, the eigenvalues of matrix μ B T B satisfy 0 < λ ( μ B T B ) < 2 , resulting in the following:
$$ -1 < 1 - \lambda(\mu B^T B) < 1. $$
Thus, the spectral radius satisfies $\rho(I_2 - \mu B^T B) < 1$, leading to
$$ \lim_{k \to \infty} \big( I_2 - \mu B^T B \big)^k = 0. $$
Substituting Equation (19) into Equation (18), we have $\lim_{k \to \infty} D_{in}^{(k)} = 0$, i.e., the iterative format (16) is convergent.    □
Furthermore, by substituting Equation (10) into Equation (11) and reformulating it into the matrix form, we have
$$ D_{in}^{(k)} = \mu B^T \big( b_{in} - A_{21} U_{bd}^{*} - B U_{in}^{(k)} \big), $$
where
$$ U_{bd}^{*} = \big[ u_1^{*}, u_2^{*}, \ldots, u_{n_1}^{*} \big]^T, \qquad U_{in}^{(k)} = \big[ u_{n_1+1}^{(k)}, u_{n_1+2}^{(k)}, \ldots, u_{n}^{(k)} \big]^T, \qquad b_{in} = \big[ b_{m_1+1}, b_{m_1+2}, \ldots, b_{m} \big]^T, $$
$$ A_{21} = \begin{bmatrix} A_1(\hat{\tau}_{m_1+1}) & A_2(\hat{\tau}_{m_1+1}) & \cdots & A_{n_1}(\hat{\tau}_{m_1+1}) \\ A_1(\hat{\tau}_{m_1+2}) & A_2(\hat{\tau}_{m_1+2}) & \cdots & A_{n_1}(\hat{\tau}_{m_1+2}) \\ \vdots & \vdots & \ddots & \vdots \\ A_1(\hat{\tau}_{m_1+m_2}) & A_2(\hat{\tau}_{m_1+m_2}) & \cdots & A_{n_1}(\hat{\tau}_{m_1+m_2}) \end{bmatrix}. $$
Theorem 2.
When B T B is nonsingular, where B is defined in Equation (17), the IG-LSPIA format (16) converges to the solution of the constrained least-squares collocation problem, as follows:
$$ \min_{U} \| AU - b \|_2, \quad \text{s.t.} \quad H U = U_{bd}^{*}, $$
where H is an n 1 × n matrix
$$ H = \begin{bmatrix} I_1 & 0 \end{bmatrix}. $$
$A$ and $b$ are defined similarly to Equation (6), $U_{bd}^{*}$ consists of the known control coefficients on the boundary, and $I_1$ is the identity matrix of rank $n_1$.
Proof. 
Based on Theorem 1, we have $\lim_{k \to \infty} D_{in}^{(k)} = 0$. Then, letting $k \to \infty$, Equation (20) leads to
$$ \mu B^T B\, U_{in}^{(\infty)} = \mu B^T \big( b_{in} - A_{21} U_{bd}^{*} \big). $$
Since μ is non-zero, the internal control coefficients converge to the solution of
$$ B^T B\, U_{in} = B^T \big( b_{in} - A_{21} U_{bd}^{*} \big) $$
under the IG-LSPIA algorithm.
On the other hand, we can partition the collocation matrix A of problem (23) into
$$ A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & B \end{bmatrix}, $$
where
$$ A_{11} = \begin{bmatrix} A_1(\hat{\tau}_1) & A_2(\hat{\tau}_1) & \cdots & A_{n_1}(\hat{\tau}_1) \\ A_1(\hat{\tau}_2) & A_2(\hat{\tau}_2) & \cdots & A_{n_1}(\hat{\tau}_2) \\ \vdots & \vdots & \ddots & \vdots \\ A_1(\hat{\tau}_{m_1}) & A_2(\hat{\tau}_{m_1}) & \cdots & A_{n_1}(\hat{\tau}_{m_1}) \end{bmatrix}, \qquad A_{12} = \begin{bmatrix} A_{n_1+1}(\hat{\tau}_1) & A_{n_1+2}(\hat{\tau}_1) & \cdots & A_{n}(\hat{\tau}_1) \\ A_{n_1+1}(\hat{\tau}_2) & A_{n_1+2}(\hat{\tau}_2) & \cdots & A_{n}(\hat{\tau}_2) \\ \vdots & \vdots & \ddots & \vdots \\ A_{n_1+1}(\hat{\tau}_{m_1}) & A_{n_1+2}(\hat{\tau}_{m_1}) & \cdots & A_{n}(\hat{\tau}_{m_1}) \end{bmatrix}, $$
A 21  (22) and B  (17) are the same as the previous definitions. Subsequently, denote
$$ U = \begin{bmatrix} U_{bd} \\ U_{in} \end{bmatrix}, \qquad b = \begin{bmatrix} b_{bd} \\ b_{in} \end{bmatrix}, $$
with
$$ U_{bd} = [u_1, u_2, \ldots, u_{n_1}]^T, \qquad U_{in} = [u_{n_1+1}, u_{n_1+2}, \ldots, u_{n}]^T, \qquad b_{bd} = [b_1, b_2, \ldots, b_{m_1}]^T, $$
and b i n are load values defined in Equation (21). Then, we obtain the following equation:
$$ AU - b = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & B \end{bmatrix} \begin{bmatrix} U_{bd} \\ U_{in} \end{bmatrix} - \begin{bmatrix} b_{bd} \\ b_{in} \end{bmatrix}. $$
According to the boundary condition and because $A_{12} = 0$, we have $A_{11} U_{bd}^{*} - b_{bd} = 0$. Then, the constrained least-squares collocation problem (23) becomes
$$ \min_{U_{in}} \big\| B U_{in} - \big( b_{in} - A_{21} U_{bd}^{*} \big) \big\|_2, $$
which is equivalent to the normal equations
$$ B^T B\, U_{in} = B^T \big( b_{in} - A_{21} U_{bd}^{*} \big). $$
In conclusion, the IG-LSPIA method converges to the solution of a constrained optimization problem (23) derived from the least-squares collocation method.    □
Remark 2.
After obtaining the solution of the constrained least-squares collocation problem (23) using IG-LSPIA, i.e., $U_{in}^{(\infty)} = \arg\min_{U_{in}} \| B U_{in} - ( b_{in} - A_{21} U_{bd}^{*} ) \|_2$ and $U_{bd} = U_{bd}^{*}$, we can derive the numerical solution of the partial differential Equation (1) by substituting the control coefficients
$$ U^{(\infty)} = \big[ u_1^{(\infty)}, u_2^{(\infty)}, \ldots, u_{n_1}^{(\infty)}, u_{n_1+1}^{(\infty)}, u_{n_1+2}^{(\infty)}, \ldots, u_{n}^{(\infty)} \big]^T, $$
with $u_j^{(\infty)} = u_j^{*}$, $j = 1, 2, \ldots, n_1$, into Equation (3), namely,
$$ u_h(\hat{\tau}) = \sum_{j=1}^{n} R_j(\hat{\tau})\, u_j^{(\infty)}. $$

6. Numerical Examples

In this section, we test several numerical examples to demonstrate the convergence properties and applications of the IG-LSPIA method. In particular, we consider 1D, 2D, and 3D partial differential equations with homogeneous Dirichlet boundary conditions. The boundary conditions are imposed directly on the control coefficients [1]. In our experiments, we use NURBS functions to build the geometric map $G$ (2), and the fitting error of the $k$th iteration is denoted by
$$ E_{fit}^{(k)} = \sqrt{ \frac{ \sum_{i=1}^{m} \big( \delta_i^{(k)} \big)^2 }{ m } }, \quad k = 0, 1, 2, \ldots, $$
where $\delta_i^{(k)}$ (10) denotes the DCP after the $k$th iteration. In particular, $E_{fit}^{(0)}$ is the initial fitting error with a zero initial guess of the control coefficients. The stopping criterion is $\| U^{(k)} - U^{(k-1)} \|_2 < 10^{-7}$.
When the iterations converge, but the current fitting error E f i t does not reach the predetermined precision ϵ , we adopt the knot refinement strategy and then proceed with the incremental iterations. During the incremental iteration, each new round of iteration after knot refinement reuses information from the last round of iteration to avoid rebuilding the system. In summary, the Incre-IG-LSPIA is shown in Algorithm 1.
Algorithm 1 The Incremental IG-LSPIA Algorithm
Input: Initial control coefficients $U^{(0)} = \{u_j\}_{j=1}^{n}$,
   Discretized load function $b = \{b_i\}_{i=1}^{m}$,
   and desired fitting accuracy $\varepsilon > 0$.
1: Conduct IG-LSPIA with the initial control coefficients $U^{(0)}$;
2: while $E_{fit}^{(k)} > \varepsilon$ do
3:   Generate the new control coefficients, control points, and corresponding weights using knot refinement;
4:   Use the new control coefficients as the initial control coefficients for the next round of iterations;
5:   while $\| U^{(k)} - U^{(k-1)} \|_2 > 10^{-7}$ do
6:     Conduct IG-LSPIA to update the control coefficients;
7:     $k = k + 1$;
8:   end while
9: end while
Output: Numerical solution $u_h(x)$.
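Algorithm 1's control flow can be sketched end to end in a simplified setting. The sketch below is ours and makes strong simplifying assumptions: it replaces the PDE operator with plain function fitting (i.e., $L$ is the identity) and uses piecewise-linear "hat" splines, for which knot refinement reduces exactly to sampling the current fit at the doubled node set. Each round runs LSPIA-style iterations until stagnation, then refines and reuses the refined coefficients as the next initial guess.

```python
import math

# Incre-IG-LSPIA control-flow sketch under strong simplifications (ours):
# fit g(x) = sin(pi x) with piecewise-linear splines; "knot refinement" for
# linear splines is exact sampling of the current fit at the new nodes.

def hat(nodes, j, t):
    """Piecewise-linear (hat) basis function centered at nodes[j]."""
    if j > 0 and nodes[j - 1] <= t <= nodes[j]:
        return (t - nodes[j - 1]) / (nodes[j] - nodes[j - 1])
    if j < len(nodes) - 1 and nodes[j] <= t <= nodes[j + 1]:
        return (nodes[j + 1] - t) / (nodes[j + 1] - nodes[j])
    return 0.0

m = 50
ts = [i / (m - 1) for i in range(m)]
b = [math.sin(math.pi * t) for t in ts]   # "load" values to fit (L = identity)

eps = 2.5e-2                              # desired fitting accuracy
nodes = [j / 3 for j in range(4)]         # 4 initial DoFs
u = [0.0] * len(nodes)                    # zero initial coefficients
for _round in range(5):
    n = len(nodes)
    A = [[hat(nodes, j, t) for j in range(n)] for t in ts]
    mu = 2.0 / sum(A[i][j] ** 2 for i in range(m) for j in range(n))  # 2/tr(A^T A)
    for _ in range(5000):                 # one round of LSPIA-style iterations
        delta = [b[i] - sum(A[i][j] * u[j] for j in range(n)) for i in range(m)]
        d = [mu * sum(A[i][j] * delta[i] for i in range(m)) for j in range(n)]
        u = [uj + dj for uj, dj in zip(u, d)]
        if max(abs(x) for x in d) < 1e-9:
            break                         # iterations have stagnated
    E_fit = math.sqrt(sum(x * x for x in delta) / m)
    if E_fit <= eps:                      # desired precision reached: stop
        break
    # "Knot refinement": for linear splines, the refined coefficients are the
    # values of the current fit at the doubled node set (reused as initial guess).
    old_nodes, old_u = nodes, u
    nodes = [j / (2 * (len(old_nodes) - 1)) for j in range(2 * len(old_nodes) - 1)]
    u = [sum(c * hat(old_nodes, j, t) for j, c in enumerate(old_u)) for t in nodes]
```

Because the refined coefficients reproduce the previous round's fit exactly, the new round starts from a small residual instead of from scratch, which is the source of the savings reported in the experiments below.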
When the fitting error reaches the desired accuracy, we can evaluate the error between the analytical solution $u(\hat{\tau})$ and the current numerical solution $u_h(\hat{\tau})$ (25) using uniform sampling points. Here, three error measures are employed, i.e., the relative $L_2$ norm error, the relative $L_\infty$ norm error, and the root-mean-square error (RMSE):
$$ E_p = \frac{\| u - u_h \|_p}{\| u \|_p}, \quad p = 2, \infty, \qquad E_R = \sqrt{ \frac{ \sum_{i=1}^{m} \big( u(\hat{\tau}_i) - u_h(\hat{\tau}_i) \big)^2 }{ m } }. $$
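These three error measures can be computed directly from samples of $u$ and $u_h$ at the evaluation points. A small sketch (ours; the sampled arrays below are made up for illustration):

```python
import math

def error_measures(u, uh):
    """Relative L2 error, relative Linf error, and RMSE from sampled values."""
    diff = [a - b for a, b in zip(u, uh)]
    e2 = math.sqrt(sum(d * d for d in diff)) / math.sqrt(sum(a * a for a in u))
    einf = max(abs(d) for d in diff) / max(abs(a) for a in u)
    rmse = math.sqrt(sum(d * d for d in diff) / len(u))
    return e2, einf, rmse

# Hypothetical samples of the exact and numerical solutions.
u  = [0.0, 0.5, 1.0, 0.5, 0.0]
uh = [0.0, 0.51, 0.98, 0.5, 0.0]
e2, einf, rmse = error_measures(u, uh)
```

Note that the relative errors normalize by the magnitude of the exact solution, whereas the RMSE is an absolute measure; this is why the RMSE curve can sit below the other two in the convergence plots.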
We implement the algorithms in C++ and run them on a PC with an Intel Core i7-10700 2.90 GHz CPU and 16 GB RAM.

6.1. 1D Tests

The first case is defined on the domain $[0,1]$ with boundary conditions, as follows:
$$ -0.001 u'' + 10 u' + 5000 u = \big( 5000 + 0.004\pi^2 \big) \sin(2\pi x) + 20\pi \cos(2\pi x), \quad x \in (0,1), \qquad u(0) = u(1) = 0, $$
where the analytical solution is $u(x) = \sin(2\pi x)$. We select 17 initial control points and 600 Greville collocation points. The maximum number of iterations per round is set to 500. The geometric map $G$ (2) is based on cubic NURBS basis functions.
In Figure 1, we provide the convergence order of the IG-LSPIA algorithm. The horizontal axis is the logarithm (base 10) of the degrees of freedom (DoFs), and the vertical axis is the logarithm (base 10) of the error. We evaluate the convergence rates under three different errors, i.e., the relative $L_2$ norm error, the relative $L_\infty$ norm error, and the RMSE, denoted by $E_2$, $E_\infty$, and $E_R$, respectively. The decreasing trend of the three error types is consistent, indicating that the IG-LSPIA algorithm is convergent and stable.
The analytical solution to problem (27) is illustrated in Figure 2a. We perform a knot refinement after each round of iterations to achieve the prescribed fitting accuracy; in this example, we refine the knots five times. Figure 2b shows the absolute error distribution curves for the different numbers of control coefficients obtained using knot refinement. The absolute error range gradually decreases as the number of refinements increases.
In addition, we apply the incremental algorithm to speed up the iterations after each knot refinement. Table 1 shows the experimental data for Incre-IG-LSPIA, i.e., the fitting error $E_{fit}^{(k)}$ (26), the number of iterations, the iteration time in seconds, the iteration time per step, and the total computation time per round of iterations to reach the same stopping criterion. For Incre-IG-LSPIA, the total computation time mainly comprises the time to compute the convergence coefficient $\mu$ given in Remark 1 and the iteration time. In particular, the Incre-IG-LSPIA algorithm does not need to reassemble the collocation matrix explicitly, but it requires computing the convergence coefficient. Moreover, the average time to reassemble the collocation matrix is usually much higher than the average time to compute the convergence coefficient, approximately three times higher in this 1D case (0.124 s vs. 0.042 s).

6.2. 2D Tests

The second case is as follows:
$$ -\Delta u + 3u = f, \quad (x,y) \in \Omega, \qquad u|_{\partial\Omega} = 0, $$
where
$$ f = \big( 3x^4 + 6x^2 y^2 + 3y^4 - 67x^2 - 67y^2 + 116 \big) \sin(x)\sin(y) + \big( 68x - 8x^3 - 8xy^2 \big) \cos(x)\sin(y) + \big( 68y - 8y^3 - 8x^2 y \big) \cos(y)\sin(x), $$
and $\Omega$ is a quarter of an annulus. The analytical solution for this problem is $u(x,y) = (x^2 + y^2 - 1)(x^2 + y^2 - 16)\sin(x)\sin(y)$. We initially select $7 \times 7$ control points and $31 \times 31$ Greville collocation points. The maximum number of iterations per round is set to 5000. In 2D and 3D, we use the square of the 2-norm of the difference between successive iterations as the convergence error. Moreover, we reduce the error tolerance by a factor of 10 after each round of iterations, so that each round is iterated sufficiently. The diagram of $\log_{10}(n^{1/2})$ vs. $\log_{10}(\text{error})$ is illustrated in Figure 3. The figure shows that the IG-LSPIA algorithm converges consistently under the three errors $E_2$, $E_\infty$, and $E_R$.
The analytical solution to problem (28) is shown in Figure 4a. The absolute error distribution of the initial round of iterations using $7 \times 7$ control points is plotted in Figure 4b. Moreover, the results after the first and second knot refinements, i.e., with $11 \times 11$ and $19 \times 19$ control coefficients, are shown in Figure 4c,d. More experimental details are given in Table 2. The maximal absolute error in Figure 4 decreases gradually from 1.200 to 0.052, showing that the numerical solution is closer to the analytical solution when the fitting error of the load function is smaller. In addition, the iteration time per step does not fluctuate significantly as the number of control points increases, remaining at approximately 0.001 s.

6.3. 3D Tests

The third case is as follows:
$-\Delta u + u = f, \quad (x, y, z) \in \Omega, \qquad u|_{\partial \Omega} = 0,$
with $f = (1 + 12\pi^2)\sin(2\pi x)\sin(2\pi y)\sin(2\pi z)$. The analytical solution is $u(x, y, z) = \sin(2\pi x)\sin(2\pi y)\sin(2\pi z)$. The 3D physical domain Ω is modeled as a B-spline solid. We select 30 × 30 × 30 collocation points and 7 × 7 × 7 initial control coefficients.
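The relation between $u$ and $f$ here follows because each second derivative of the product of sines contributes a factor of $-4\pi^2$, so $-\Delta u + u = (1 + 12\pi^2)u = f$. A short check of this identity (ours, not from the paper) using a central-difference Laplacian:

```python
# Sanity check of the 3D manufactured solution via central differences.
import math

def u(x, y, z):
    return (math.sin(2*math.pi*x) * math.sin(2*math.pi*y)
            * math.sin(2*math.pi*z))

def f(x, y, z):
    # -Lap(u) + u = (1 + 12*pi^2) u for this u
    return (1.0 + 12.0 * math.pi**2) * u(x, y, z)

def residual(x, y, z, h=1e-4):
    lap = ((u(x + h, y, z) - 2*u(x, y, z) + u(x - h, y, z)) / h**2
           + (u(x, y + h, z) - 2*u(x, y, z) + u(x, y - h, z)) / h**2
           + (u(x, y, z + h) - 2*u(x, y, z) + u(x, y, z - h)) / h**2)
    return -lap + u(x, y, z) - f(x, y, z)
```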
The diagram of $\log_{10}(n^{1/3})$ vs. $\log_{10}(\mathrm{error})$ is illustrated in Figure 5. The RMSE has the smallest value, but the overall trend is the same for all three errors. The error decreases rapidly after the first knot refinement, and the rate of decrease slows after the second knot refinement because the number of control points gradually approaches the number of collocation points. The analytical solution of the problem (29) is plotted in Figure 6a. Furthermore, the absolute error distributions after three rounds of iteration are shown in Figure 6b–d. The maximal absolute error decreases from 0.032 to 4.743 × 10⁻⁴ as the number of control coefficients increases from 7 × 7 × 7 to 19 × 19 × 19.
Table 3 shows the data from the 3D experiments. The average iteration time per step is approximately 0.159 s. In particular, in the last round of iteration, the conventional method requires 18.997 s to reassemble the collocation matrix and 23.571 s to solve the resulting equations directly, for a total time of 42.668 s. In contrast, the Incre-IG-LSPIA method does not need to compute the collocation matrix explicitly but requires computing the convergence coefficient, which consumes 1.11 s. In the end, Incre-IG-LSPIA takes 23.623 s to iterate, for a total time of 24.733 s, much less than solving the equations directly.

7. Discussion

In the one-, two-, and three-dimensional experiments, IG-LSPIA has a consistent convergence rate under three different error metrics, i.e., the relative $L_2$ norm error, the relative $L_\infty$ norm error, and the RMSE. In addition, the DoFs of the numerical solutions in IG-LSPIA are determined automatically by the fitting error rather than by the approximate difference between the numerical and exact solutions. Since the exact solution is usually unknown, the proposed method is more practical for engineering applications than the traditional approach. Regarding time consumption, the cost of reassembling the collocation matrix in traditional algorithms is high, whereas IG-LSPIA does not require the explicit calculation of this matrix. Although the proposed algorithm involves calculating the normalization weight, the best choice of this parameter has been discussed, and it only needs to be computed in the first step of each iteration round. In the one-dimensional example, this computation takes about one third of the time needed for reassembly. In the three-dimensional example, the total computation time of the last round of IG-LSPIA is 24.733 s, while rebuilding the equations and solving them directly costs 42.668 s; the computation time is thus roughly halved. Finally, the Incre-IG-LSPIA method is adopted in our experiments to increase the convergence rate. Our method has limitations: despite the convergence of the proposed method, the number of iterations is higher in the two-dimensional example than in the other cases.

8. Conclusions

In this study, we present the IG-LSPIA method, a novel and stable iterative format for IGA-C, together with its convergence analysis. We discuss sufficient conditions for the iterative algorithm’s convergence and the optimal choice of the normalization weight μ. To enhance iterative efficiency, we develop Incre-IG-LSPIA, which reuses the information from the last round of iterations and thereby saves iteration time. Finally, several examples are tested to demonstrate the efficiency and effectiveness of the developed IG-LSPIA method. In the three-dimensional case, the total computation time is roughly halved compared with the traditional method.
For future work, an acceleration of the iterative process should be developed. The proposed method also shows the potential to solve more complicated PDEs. In addition to IGA-C, IG-LSPIA can be combined with the isogeometric Galerkin method.

Author Contributions

Conceptualization, H.L. and Y.J.; methodology, H.L. and Y.J.; validation, Y.J.; supervision, H.L.; project administration, H.L.; funding acquisition, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grant Nos. 62272406 and 61872316.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Diagram of degrees of freedom (DoFs) vs. the error between the numerical and analytical solutions in 1D.
Figure 2. Comparison of the absolute error between the numerical and analytical solutions after each knot refinement. (a) Analytical solution. (b) Absolute error.
Figure 3. Diagram of the DoFs vs. the error between the numerical and analytical solutions in 2D.
Figure 4. Comparison of the absolute error between the numerical and analytical solutions after each knot refinement. (a) Analytical solution. (b) n = 7 × 7 , absolute error = 1.200. (c) n = 11 × 11 , absolute error = 0.115. (d) n = 19 × 19 , absolute error = 0.052.
Figure 5. Diagram of the DoFs vs. the error between the numerical and analytical solutions in 3D.
Figure 6. Comparison of the absolute error between the numerical and analytical solutions after each knot refinement. (a) Analytical solution. (b) n = 7 × 7 × 7, absolute error = 0.032. (c) n = 11 × 11 × 11, absolute error = 0.002. (d) n = 19 × 19 × 19, absolute error = 4.743 × 10⁻⁴.
Table 1. Experimental data of Incre-IG-LSPIA algorithm with five times knot refinement. The number of collocation points is m = 600 .
| # Ctrl ¹ | $E_{fit}^{(k)}$ | # Iter ² | Iteration Time (s) | Iteration Time per Step (s) | Total Time (s) |
|---|---|---|---|---|---|
| 17 | 0.144 | 140 | 0.380 | 0.003 | 0.480 |
| 31 | 0.009 | 62 | 0.166 | 0.003 | 0.199 |
| 59 | 0.001 | 55 | 0.147 | 0.003 | 0.180 |
| 115 | 6.168 × 10⁻⁵ | 73 | 0.203 | 0.003 | 0.240 |
| 227 | 1.780 × 10⁻⁵ | 4 | 0.012 | 0.003 | 0.045 |
| 451 | 5.380 × 10⁻⁶ | 1 | 0.003 | 0.003 | 0.040 |

¹ Number of control coefficients. ² Number of iterations.
Table 2. Experimental data of Incre-IG-LSPIA algorithm with twice knot refinement. The number of collocation points is m = 31 × 31 .
| # Ctrl ¹ | $E_{fit}^{(k)}$ | # Iter ² | Iteration Time (s) | Iteration Time per Step (s) | Total Time (s) |
|---|---|---|---|---|---|
| 7 × 7 | 8.926 | 365 | 0.352 | 0.001 | 0.371 |
| 11 × 11 | 1.898 | 2506 | 2.591 | 0.001 | 2.605 |
| 19 × 19 | 0.394 | 5000 | 5.132 | 0.001 | 5.147 |

¹ Number of control coefficients. ² Number of iterations.
Table 3. Experimental data of Incre-IG-LSPIA algorithm with twice knot refinement. The number of collocation points is m = 30 × 30 × 30 .
| # Ctrl ¹ | $E_{fit}^{(k)}$ | # Iter ² | Iteration Time (s) | Iteration Time per Step (s) | Total Time (s) |
|---|---|---|---|---|---|
| 7 × 7 × 7 | 2.979 | 69 | 10.000 | 0.145 | 11.460 |
| 11 × 11 × 11 | 0.562 | 69 | 10.993 | 0.159 | 12.113 |
| 19 × 19 × 19 | 0.119 | 147 | 23.623 | 0.161 | 24.733 |

¹ Number of control coefficients. ² Number of iterations.