Article

Improved Least-Squares Progressive Iterative Approximation for Tensor Product Surfaces

School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 670; https://doi.org/10.3390/math11030670
Submission received: 22 December 2022 / Revised: 21 January 2023 / Accepted: 23 January 2023 / Published: 28 January 2023
(This article belongs to the Special Issue Computer-Aided Geometric Design)

Abstract

Geometric iterative methods, including progressive iterative approximation and geometric interpolation methods, are efficient for fitting a given data set. With the development of big data technology, the number of fitting data points has become massive, and progressive iterative approximation for least-squares fitting (LSPIA) is generally applied to fit mass data. Combining the Schulz iterative method for calculating the Moore–Penrose generalized inverse with the traditional LSPIA method, this paper presents an accelerated LSPIA method for tensor product surfaces and shows that the corresponding iterative surface sequence converges to the least-squares fitting surface of the given data set. The iterative format is that of a non-stationary iterative method, and the convergence rate increases rapidly as the iteration number increases. Some numerical examples are provided to illustrate that the proposed method has a faster convergence rate.

1. Introduction

Fitting a given set of data with a parametric curve or surface is an important problem encountered frequently in engineering applications, such as product styling in aerospace, computer vision, NURBS solid construction, and reverse engineering [1,2,3,4].
Progressive iterative approximation (PIA) is a simple iterative method for data fitting with intuitive geometric significance. Qi et al. presented the "profit and loss" algorithm for uniform cubic B-spline curves [5], and, at the same time, de Boor proved its convergence independently [6]. Lin et al. proved that non-uniform B-spline curves and surfaces also possess this property [7]. Furthermore, Lin et al. extended the property to blending curves and surfaces with normalized totally positive (NTP) bases and named it progressive iterative approximation [8]. Over the following 15 years, this has become an attractive and promising research field, as is evident from the number of publications on the PIA subject. Lin et al. provided an overview of PIA and geometric approximation methods [9]. A local progressive iterative approximation method was proposed to make data fitting more flexible by adjusting only a part of the control points [10,11]. Several improved methods have been proposed to speed up the convergence rate, such as a weighted PIA method [12], the Chebyshev accelerating PIA method [13], a preconditioned PIA method [14], and the composite Schulz–PIA method [15]. Delgado and Peña compared the convergence rates of different NTP bases and proved that the normalized B-basis has the fastest convergence rate [16]. Hamza et al. designed an implicit PIA method for implicit curve and surface reconstruction that effectively reduces the computational cost [17]. In the classical PIA methods, the number of control points must equal the number of data points, which makes them unsuitable for fitting mass data. Nowadays, data sets are often exceptionally large, and the LSPIA methods can handle large sets of points [18,19,20,21]. However, as the data set grows, the coefficient matrix of the least-squares fitting system may become singular. Lin et al. proved that the LSPIA method still converges for singular least-squares fitting systems [22].
Tensor product surfaces are suitable for surfaces with a rectangular topology, which widely exist in engineering practice [23,24,25]. They can be regarded as the trajectory of a curve with variable control points in three-dimensional space; therefore, the theoretical study of the LSPIA method for tensor product surfaces may be generalized from the curve case. An alternative explanation of the LSPIA method is that it finds the least-squares solution of a linear system, that is, it calculates the generalized inverse of the coefficient matrix. Its convergence rate depends on the spectral radius of the iteration matrix; however, the convergence rate slows down greatly for the large-scale problems that arise in practice. Although the proof of convergence is easily generalized from the curve case, the tensor product surface cannot simply be treated as a curve with variable control points: the collocation matrix of the bivariate NTP bases is the Kronecker product of two collocation matrices of NTP bases, that is, a large-scale matrix, and the collocation matrix becomes exponentially ill-conditioned as its dimension increases [26]. To speed up the convergence rate, we combine the Schulz iterative method [27] for the Moore–Penrose generalized inverse of the coefficient matrix with the traditional LSPIA method and develop an improved LSPIA format with a fast convergence rate. Applying the properties of the Kronecker product and the Moore–Penrose generalized inverse, we prove that the proposed method converges and that the limit surface is exactly the least-squares fitting result of the given data points.
This paper is organized as follows. Section 2 introduces the improved least-squares progressive iterative format for tensor product surfaces. The convergence of the proposed method is proved theoretically in Section 3. Then, Section 4 provides the preparatory work before the experimental examples and the improved LSPIA algorithm for fitting data points. Subsequently, some numerical experiments are presented in Section 5 to demonstrate the efficiency of the proposed algorithm by comparing the error plots and error data with those obtained with the traditional LSPIA method. Finally, the conclusions are presented in Section 6.

2. Improved Least-Squares Progressive Iterative Format for Tensor Product Surfaces

Let us consider a set of data points $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$ with parameters $\{(u_i,v_j)\}_{i=0,j=0}^{m_1,m_2}$ satisfying
$$u_0<u_1<\cdots<u_{m_1},\qquad v_0<v_1<\cdots<v_{m_2}.$$
We arbitrarily select some data points $\{P_{h,l}\}_{h=0,l=0}^{n_1,n_2}$ $(n_1<m_1,\ n_2<m_2)$ from $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$ as the control points and construct an initial iterative surface $P^0(u,v)$, i.e.,
$$P^0(u,v)=\sum_{h=0}^{n_1}\sum_{l=0}^{n_2}B_h(u)B_l(v)P_{h,l}^0,\qquad (u,v)\in[u_0,u_{m_1}]\times[v_0,v_{m_2}],$$
where $P_{h,l}^0=P_{h,l}$ are the control points, and $\{B_h(u),\ h=0,1,\ldots,n_1\}$ and $\{B_l(v),\ l=0,1,\ldots,n_2\}$ are the NTP basis functions [8], satisfying
$$B_h(u)\ge 0,\quad B_l(v)\ge 0,\qquad \sum_{h=0}^{n_1}B_h(u)=\sum_{l=0}^{n_2}B_l(v)=1.$$
The collocation matrix of $\{B_h(u),\ h=0,1,\ldots,n_1\}$ at the increasing sequence $u_0<u_1<\cdots<u_{m_1}$ is
$$B_1=\begin{pmatrix}B_0(u_0)&B_1(u_0)&\cdots&B_{n_1}(u_0)\\B_0(u_1)&B_1(u_1)&\cdots&B_{n_1}(u_1)\\\vdots&\vdots&&\vdots\\B_0(u_{m_1})&B_1(u_{m_1})&\cdots&B_{n_1}(u_{m_1})\end{pmatrix},\tag{1}$$
and the collocation matrix of $\{B_l(v),\ l=0,1,\ldots,n_2\}$ at $v_0<v_1<\cdots<v_{m_2}$ is
$$B_2=\begin{pmatrix}B_0(v_0)&B_1(v_0)&\cdots&B_{n_2}(v_0)\\B_0(v_1)&B_1(v_1)&\cdots&B_{n_2}(v_1)\\\vdots&\vdots&&\vdots\\B_0(v_{m_2})&B_1(v_{m_2})&\cdots&B_{n_2}(v_{m_2})\end{pmatrix}.\tag{2}$$
Here, B 1 and B 2 are totally positive matrices, that is, all of their minors are nonnegative [28].
Suppose
$$Z_1^0=\omega_1B_1^T,\qquad Z_2^0=\omega_2B_2^T,\tag{3}$$
where $B_1$, $B_2$ are defined as in (1) and (2), respectively, and $\omega_1$, $\omega_2$ satisfy the condition
$$0<\omega_1<2\lambda_0^{-1},\qquad 0<\omega_2<2\mu_0^{-1},\tag{4}$$
where $\lambda_i\ (i=0,1,\ldots,n_1)$ and $\mu_j\ (j=0,1,\ldots,n_2)$ are the eigenvalues of $B_1^TB_1$ and $B_2^TB_2$, respectively, sorted in non-increasing order.
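Condition (4) only requires an upper bound on each weight, which is easy to compute numerically. A minimal sketch, assuming NumPy and using a small random nonnegative matrix as a stand-in for the collocation matrix $B_1$ (the helper name `max_weight` is ours, not the paper's):

```python
import numpy as np

def max_weight(B):
    """Upper bound 2 / lambda_0 for an admissible weight, where lambda_0
    is the largest eigenvalue of B^T B (condition (4))."""
    lam0 = np.linalg.eigvalsh(B.T @ B).max()   # eigvalsh: symmetric eigenvalues
    return 2.0 / lam0

rng = np.random.default_rng(0)
B1 = rng.random((9, 5))            # stand-in collocation matrix (m1+1=9, n1+1=5)
omega1 = 0.9 * max_weight(B1)      # any value in (0, 2/lambda_0) is admissible
```

With this choice, every eigenvalue of $\omega_1 B_1^TB_1$ lies strictly below 2, as condition (4) demands.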
Defining
$$\delta_{i,j}^0=Q_{i,j}-P^0(u_i,v_j),\qquad i=0,1,\ldots,m_1,\ j=0,1,\ldots,m_2,$$
and rewriting it in matrix form yields
$$\delta^0=Q-B_1P^0B_2^T.$$
Here, $Q$ and $P^0$ denote the data points and the control points, respectively, arranged in matrix form.
For the $(h,l)$-th control point $P_{h,l}^0$, we take the adjusting vector as
$$\Delta_{h,l}^0=\sum_{i=0}^{m_1}\sum_{j=0}^{m_2}\left(Z_1^1\right)_{h,i}\delta_{i,j}^0\left(Z_2^1\right)_{l,j},\qquad h=0,1,\ldots,n_1,\ l=0,1,\ldots,n_2,\tag{5}$$
where $\left(Z_1^1\right)_{h,i}$ is the $(h,i)$-th element of $Z_1^1$ and $\left(Z_2^1\right)_{l,j}$ is the $(l,j)$-th element of $Z_2^1$. Here, $Z_1^1$ and $Z_2^1$ are defined by
$$Z_1^1=\left(2I_1-Z_1^0B_1\right)Z_1^0,\qquad Z_2^1=\left(2I_2-Z_2^0B_2\right)Z_2^0,\tag{6}$$
where $I_1$ and $I_2$ are the identity matrices of order $(n_1+1)$ and $(n_2+1)$, respectively, and $Z_1^0$, $Z_2^0$ are defined as in (3).
Rewriting the adjusting vectors (5) in matrix form yields
$$\Delta^0=Z_1^1\delta^0\left(Z_2^1\right)^T.$$
Moving each control point $P_{h,l}^0$ along its adjusting vector $\Delta_{h,l}^0$, we have
$$P^1=P^0+\Delta^0.$$
Then, we obtain the first iterative surface
$$P^1(u,v)=\sum_{h=0}^{n_1}\sum_{l=0}^{n_2}B_h(u)B_l(v)P_{h,l}^1.$$
Similarly, we can obtain the iterative surface $P^k(u,v)$ after the $k$-th iteration. According to the iterative format
$$\begin{cases}\delta^k=Q-B_1P^kB_2^T,\\Z_1^{k+1}=\left(2I_1-Z_1^kB_1\right)Z_1^k,\\Z_2^{k+1}=\left(2I_2-Z_2^kB_2\right)Z_2^k,\\\Delta^k=Z_1^{k+1}\delta^k\left(Z_2^{k+1}\right)^T,\\P^{k+1}=P^k+\Delta^k,\end{cases}\tag{7}$$
the $(k+1)$-st iterative tensor product surface
$$P^{k+1}(u,v)=\sum_{h=0}^{n_1}\sum_{l=0}^{n_2}B_h(u)B_l(v)P_{h,l}^{k+1}$$
is generated.
The iterative process is repeated, and we obtain an iterative surface sequence { P k ( u , v ) , k = 0 , 1 , ... } .
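For scalar-valued data (each coordinate of the 3D points is iterated identically), the iterative format (7) translates directly into matrix code. A minimal sketch assuming NumPy; the random matrices below are stand-ins for the collocation matrices and data points, not the paper's examples:

```python
import numpy as np

def improved_lspia(B1, B2, Q, P0, w1, w2, iters=30):
    """Iterative format (7): LSPIA accelerated by the Schulz iteration."""
    Z1, Z2 = w1 * B1.T, w2 * B2.T            # Z_1^0, Z_2^0 as in (3)
    I1, I2 = np.eye(B1.shape[1]), np.eye(B2.shape[1])
    P = P0.copy()
    for _ in range(iters):
        delta = Q - B1 @ P @ B2.T            # residual at the data points
        Z1 = (2 * I1 - Z1 @ B1) @ Z1         # Schulz update of Z_1^k
        Z2 = (2 * I2 - Z2 @ B2) @ Z2         # Schulz update of Z_2^k
        P = P + Z1 @ delta @ Z2.T            # move the control points
    return P

rng = np.random.default_rng(1)
B1, B2 = rng.random((12, 5)), rng.random((10, 4))
Q = rng.random((12, 10))
w1 = 0.9 * 2 / np.linalg.eigvalsh(B1.T @ B1).max()   # admissible weights, (4)
w2 = 0.9 * 2 / np.linalg.eigvalsh(B2.T @ B2).max()
P = improved_lspia(B1, B2, Q, rng.random((5, 4)), w1, w2)
```

In exact arithmetic, the iterates converge to the least-squares control points $(B_1^TB_1)^{-1}B_1^TQB_2(B_2^TB_2)^{-1}$ whenever $B_1$ and $B_2$ have full column rank.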

3. Convergence Analysis for the Improved LSPIA

In this section, assuming the collocation matrices $B_1$ and $B_2$ have full column rank, we present Theorem 1 to prove that the surface sequence $\{P^k(u,v),\ k=0,1,\ldots\}$ generated by the iterative format (7) converges and that the limit surface is exactly the least-squares fit of the given data points $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$; in other words, the initial tensor product surface has the least-squares progressive iterative approximation property.
An alternative explanation of LSPIA is that it finds the least-squares solution of the matrix equation
$$B_1XB_2^T=Q.\tag{8}$$
If the collocation matrices $B_1$ and $B_2$ have full column rank, the above equation is equivalent to
$$B_1^TB_1XB_2^TB_2=B_1^TQB_2.\tag{9}$$
Minimizing the $L_2$-norm of the residual matrix, the solution of the matrix Equation (8) gives the control points of the least-squares fitting surface of the given data points $Q$.
A sufficient condition for the matrix Equation (9) to have a solution is [29]
$$B_1^TB_1\left(B_1^TB_1\right)^{+}B_1^TQB_2\left(B_2^TB_2\right)^{+}\left(B_2^TB_2\right)=B_1^TQB_2,\tag{10}$$
where $(B_1^TB_1)^{+}$ is the Moore–Penrose generalized inverse of $B_1^TB_1$, which equals $(B_1^TB_1)^{-1}$ when $B_1^TB_1$ is non-singular; $(B_2^TB_2)^{+}$ is treated analogously. The least-squares solution of the matrix Equation (9) is
$$X=\left(B_1^TB_1\right)^{-1}B_1^TQB_2\left(B_2^TB_2\right)^{-1}.$$
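This closed-form solution can be cross-checked against a generic least-squares solver through the row-major Kronecker identity $\operatorname{vec}(B_1XB_2^T)=(B_1\otimes B_2)\operatorname{vec}(X)$. A sketch assuming NumPy, with small random full-column-rank stand-ins for $B_1$, $B_2$, and $Q$:

```python
import numpy as np

rng = np.random.default_rng(2)
B1, B2 = rng.random((8, 4)), rng.random((7, 3))
Q = rng.random((8, 7))

# closed-form least-squares control points (full column rank case)
X = np.linalg.solve(B1.T @ B1, B1.T @ Q @ B2) @ np.linalg.inv(B2.T @ B2)

# the same answer from the vectorized (Kronecker) system B1 X B2^T = Q
x_vec, *_ = np.linalg.lstsq(np.kron(B1, B2), Q.ravel(), rcond=None)
```

Note that forming the Kronecker matrix explicitly, as the generic solver does, is exactly the large-scale cost the matrix-form iteration avoids.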
Lemma 1. 
When the collocation matrices $B_1$ and $B_2$ have full column rank, $B_1^TB_1$ and $B_2^TB_2$ are positive definite matrices; in particular, they are non-singular and have positive eigenvalues.
Proof. 
Since $\operatorname{rank}(B_1)=n_1+1$, the homogeneous linear system $B_1x=0$ has only the zero solution. Hence, for any non-zero column vector $x$ of dimension $(n_1+1)$, we have $B_1x\ne 0$, and
$$x^TB_1^TB_1x=\left\|B_1x\right\|_2^2>0.$$
Therefore, $B_1^TB_1$ is a positive definite matrix; it is non-singular and has positive eigenvalues. $B_2$ is handled in a similar fashion. Thus, the lemma is proved. □
Lemma 2. 
For the matrices $Z_1^k$, $Z_2^k$ defined as in (7), if $B_1$ and $B_2$ have full column rank, then we have
$$Z_1^kQ\left(Z_2^k\right)^T=\left(Z_1^kB_1\right)\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(Z_2^kB_2\right)\left(B_2^TB_2\right)^{-1}B_2^T\right]^T,\qquad k=0,1,2,\ldots,\tag{11}$$
where $Z_1^0$, $Z_2^0$ are defined by (3).
Proof. 
We use mathematical induction. By Lemma 1, when $B_1$ and $B_2$ have full column rank, $B_1^TB_1$ and $B_2^TB_2$ are both non-singular. Then, applying (3) and (10), we obtain
$$\left(Z_1^0B_1\right)\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(Z_2^0B_2\right)\left(B_2^TB_2\right)^{-1}B_2^T\right]^T=\omega_1\omega_2B_1^TB_1\left(B_1^TB_1\right)^{-1}B_1^TQB_2\left(B_2^TB_2\right)^{-1}\left(B_2^TB_2\right)=\omega_1\omega_2B_1^TQB_2=Z_1^0Q\left(Z_2^0\right)^T.$$
Hence (11) is true for $k=0$. Assume that (11) is true for the case of $k$, i.e.,
$$Z_1^kQ\left(Z_2^k\right)^T=\left(Z_1^kB_1\right)\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(Z_2^kB_2\right)\left(B_2^TB_2\right)^{-1}B_2^T\right]^T.$$
Applying the iterative formulae for $Z_1^{k+1}$ and $Z_2^{k+1}$ in (7), we have
$$\begin{aligned}&\left(Z_1^{k+1}B_1\right)\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(Z_2^{k+1}B_2\right)\left(B_2^TB_2\right)^{-1}B_2^T\right]^T\\&\qquad=\left(2I_1-Z_1^kB_1\right)Z_1^kB_1\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(2I_2-Z_2^kB_2\right)Z_2^kB_2\left(B_2^TB_2\right)^{-1}B_2^T\right]^T\\&\qquad=\left(2I_1-Z_1^kB_1\right)Z_1^kB_1\left(B_1^TB_1\right)^{-1}B_1^TQ\left[Z_2^kB_2\left(B_2^TB_2\right)^{-1}B_2^T\right]^T\left(2I_2-Z_2^kB_2\right)^T\\&\qquad=\left(2I_1-Z_1^kB_1\right)Z_1^kQ\left(Z_2^k\right)^T\left(2I_2-Z_2^kB_2\right)^T=Z_1^{k+1}Q\left(Z_2^{k+1}\right)^T,\end{aligned}$$
where the induction hypothesis is applied in the third equality. Hence (11) is true for the case of $k+1$, which completes the proof. □
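Lemma 2 is easy to sanity-check numerically: identity (11) should hold at every Schulz step. A sketch assuming NumPy, with random full-column-rank stand-ins for the collocation matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
B1, B2 = rng.random((9, 4)), rng.random((8, 3))
Q = rng.random((9, 8))
w1 = 1.0 / np.linalg.eigvalsh(B1.T @ B1).max()
w2 = 1.0 / np.linalg.eigvalsh(B2.T @ B2).max()
Z1, Z2 = w1 * B1.T, w2 * B2.T               # Z_1^0, Z_2^0 as in (3)

G1 = np.linalg.inv(B1.T @ B1) @ B1.T        # (B1^T B1)^{-1} B1^T
G2 = np.linalg.inv(B2.T @ B2) @ B2.T        # (B2^T B2)^{-1} B2^T

ok = True
for _ in range(5):                          # check identity (11) for k = 0..4
    lhs = Z1 @ Q @ Z2.T
    rhs = (Z1 @ B1) @ G1 @ Q @ ((Z2 @ B2) @ G2).T
    ok = ok and np.allclose(lhs, rhs, atol=1e-8)
    Z1 = (2 * np.eye(4) - Z1 @ B1) @ Z1     # Schulz updates from (7)
    Z2 = (2 * np.eye(3) - Z2 @ B2) @ Z2
```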
Theorem 1. 
If the weights $\omega_1$ and $\omega_2$ satisfy condition (4) and the collocation matrices $B_1$ and $B_2$ have full column rank, then the iterative tensor product surface sequence generated by the iterative format (7) converges to the least-squares fitting result of the initial data points $Q$.
Proof. 
By Lemma 1, $B_1^TB_1$ and $B_2^TB_2$ are both non-singular. Write $M=\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(B_2^TB_2\right)^{-1}B_2^T\right]^T$ for brevity. According to the iterative format (7), we have
$$P^{k+1}-M=P^k+Z_1^{k+1}\left(Q-B_1P^kB_2^T\right)\left(Z_2^{k+1}\right)^T-M.$$
Using Lemma 2 and substituting (11) into the above equation yields
$$P^{k+1}-M=P^k-M+\left(Z_1^{k+1}B_1\right)M\left(Z_2^{k+1}B_2\right)^T-\left(Z_1^{k+1}B_1\right)P^k\left(Z_2^{k+1}B_2\right)^T=\left(P^k-M\right)-\left(Z_1^{k+1}B_1\right)\left(P^k-M\right)\left(Z_2^{k+1}B_2\right)^T.\tag{12}$$
The above equation is expressed in matrix form. For the convenience of analysis, we rearrange it in column-vector form; for example, the initial data points $Q$ are rearranged as
$$\mathbf{Q}=\left[Q_{0,0},Q_{0,1},\ldots,Q_{0,m_2},Q_{1,0},\ldots,Q_{1,m_2},\ldots,Q_{m_1,m_2}\right]^T.$$
$P$ is treated analogously. We define the limit control points
$$P^{\infty}=\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(B_2^TB_2\right)^{-1}B_2^T\right]^T.\tag{13}$$
Applying the properties of the Kronecker product [30], (12) is equivalent to
$$\mathbf{P}^{k+1}-\mathbf{P}^{\infty}=\left[I-\left(Z_1^{k+1}B_1\right)\otimes\left(Z_2^{k+1}B_2\right)\right]\left(\mathbf{P}^k-\mathbf{P}^{\infty}\right).\tag{14}$$
Repeating this iterative process, we obtain
$$\mathbf{P}^{k+1}-\mathbf{P}^{\infty}=\left[I-\left(Z_1^{k+1}B_1\right)\otimes\left(Z_2^{k+1}B_2\right)\right]\left[I-\left(Z_1^kB_1\right)\otimes\left(Z_2^kB_2\right)\right]\left(\mathbf{P}^{k-1}-\mathbf{P}^{\infty}\right)=\cdots=\prod_{s=1}^{k+1}\left[I-\left(Z_1^sB_1\right)\otimes\left(Z_2^sB_2\right)\right]\left(\mathbf{P}^0-\mathbf{P}^{\infty}\right),$$
where $\left(Z_1^sB_1\right)\otimes\left(Z_2^sB_2\right)$ is the Kronecker product of $Z_1^sB_1$ and $Z_2^sB_2$, and $I$ is the identity matrix of order $(n_1+1)(n_2+1)$.
According to the iterative process of $Z_1^s$ and $Z_2^s$ in (7), the eigenvalues of $Z_1^1B_1$ are $(2-\omega_1\lambda_i)\omega_1\lambda_i$ and the eigenvalues of $Z_2^1B_2$ are $(2-\omega_2\mu_j)\omega_2\mu_j$. When the weights $\omega_1$ and $\omega_2$ satisfy condition (4), we have
$$0<\omega_1\lambda_i\le\omega_1\lambda_0<2,\qquad 0<\omega_2\mu_j\le\omega_2\mu_0<2,$$
so that
$$0\le\left(1-\omega_1\lambda_i\right)^2<1,\qquad 0\le\left(1-\omega_2\mu_j\right)^2<1.$$
Since $(2-\omega_1\lambda_i)\omega_1\lambda_i=1-(1-\omega_1\lambda_i)^2$ and $(2-\omega_2\mu_j)\omega_2\mu_j=1-(1-\omega_2\mu_j)^2$, this means
$$0<\rho\left(Z_1^1B_1\right),\ \rho\left(Z_2^1B_2\right)\le 1.$$
Therefore, $\rho\left(I-\left(Z_1^1B_1\right)\otimes\left(Z_2^1B_2\right)\right)=1-\rho\left(Z_1^1B_1\right)\rho\left(Z_2^1B_2\right)$ according to [31], and this value belongs to $[0,1)$.
Suppose $0<\rho\left(Z_1^sB_1\right),\ \rho\left(Z_2^sB_2\right)\le 1$ for some $s\ge 1$. Then, using the iterative process of $Z_1^s$ and $Z_2^s$ in (7), we obtain
$$0<\rho\left(Z_1^{s+1}B_1\right)=1-\left[\rho\left(Z_1^sB_1\right)-1\right]^2\le 1,$$
and
$$0<\rho\left(Z_2^{s+1}B_2\right)=1-\left[\rho\left(Z_2^sB_2\right)-1\right]^2\le 1.$$
The assumption is thus true for the case of $s+1$. By mathematical induction, we have
$$0<\rho\left(Z_1^sB_1\right),\ \rho\left(Z_2^sB_2\right)\le 1,\qquad s\ge 1.$$
Furthermore, we have
$$\rho\left[I-\left(Z_1^sB_1\right)\otimes\left(Z_2^sB_2\right)\right]=1-\rho\left(Z_1^sB_1\right)\rho\left(Z_2^sB_2\right)\in[0,1),$$
and the spectral radius of the iteration matrix $\prod_{s=1}^{k+1}\left[I-\left(Z_1^sB_1\right)\otimes\left(Z_2^sB_2\right)\right]$ lies in $[0,1)$. Therefore,
$$\lim_{k\to\infty}\prod_{s=1}^{k+1}\left[I-\left(Z_1^sB_1\right)\otimes\left(Z_2^sB_2\right)\right]=0_{(n_1+1)(n_2+1)},$$
where $0_{(n_1+1)(n_2+1)}$ is the zero matrix of order $(n_1+1)(n_2+1)$.
Using (14), as $k\to\infty$ it follows that
$$\lim_{k\to\infty}\left(\mathbf{P}^{k+1}-\mathbf{P}^{\infty}\right)=\mathbf{0}.$$
Rewriting the above equation in matrix form and replacing the limit control points by (13), this means $P^k\to\left(B_1^TB_1\right)^{-1}B_1^TQ\left[\left(B_2^TB_2\right)^{-1}B_2^T\right]^T$ as $k\to\infty$. Hence the surface sequence $\{P^k(u,v),\ k=0,1,\ldots\}$ obtained by the iterative format (7) converges, and the limit surface is the least-squares approximation surface of the initial data points $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$. This completes the proof. □
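The quadratic decay that drives this proof is easy to observe numerically. Since $I-Z_1^{s+1}B_1=\left(I-Z_1^sB_1\right)^2$, the spectral radius of $I-Z_1^sB_1$ squares at every Schulz step. A sketch assuming NumPy, with a random nonnegative stand-in for a collocation matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.random((10, 4))                      # stand-in collocation matrix
w = 1.0 / np.linalg.eigvalsh(B.T @ B).max()  # admissible weight, condition (4)
Z = w * B.T                                  # Z^0 as in (3)

radii = []
for _ in range(8):
    Z = (2 * np.eye(4) - Z @ B) @ Z          # Schulz update
    # rho(I - Z^{s+1} B) = rho(I - Z^s B)^2, so the radius squares each step
    radii.append(np.abs(np.linalg.eigvals(np.eye(4) - Z @ B)).max())
```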

4. Implementation

This section introduces the preparation for the fitting algorithm.

4.1. Parameterization of the Initial Data Points

Given a data point set $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$, each point $Q_{i,j}$ is assigned a pair of parameter values $(u_i,v_j)$, where $u_i$ is the row parameter of $Q_{i,j}$ in the $u$-direction and $v_j$ is the column parameter of $Q_{i,j}$ in the $v$-direction (see Figure 1a). The procedure for calculating $(u_i,v_j)$ is illustrated in Figure 1.
First, we calculate the parameter $t_{i,j}$ of $Q_{i,j}$ using the normalized accumulated chord-length parameterization method [32], i.e.,
$$t_{0,j}=0,\qquad t_{i,j}=t_{i-1,j}+\frac{\left\|Q_{i,j}-Q_{i-1,j}\right\|}{\sum_{s=1}^{m_1}\left\|Q_{s,j}-Q_{s-1,j}\right\|},\qquad i=1,2,\ldots,m_1,\ j=0,1,\ldots,m_2.$$
Then, the row parameter $u_i$ of the points in the $i$-th row is defined as the average of the parameters $t_{i,j}$ of the data points in that row (see Figure 1b), i.e.,
$$u_i=\frac{1}{m_2+1}\sum_{j=0}^{m_2}t_{i,j},\qquad i=0,1,\ldots,m_1.$$
The column parameter $v_j$ of $Q_{i,j}$ is calculated in the same way. Therefore, the parameter pair $(u_i,v_j)$ of each point $Q_{i,j}$ can be derived.
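The parameterization above can be sketched in a few lines, assuming NumPy and a data grid `Q` of shape `(m1+1, m2+1, 3)`; `surface_parameters` is our illustrative helper name:

```python
import numpy as np

def surface_parameters(Q):
    """Averaged normalized chord-length parameters (u_i, v_j) for a grid
    of data points Q with shape (m1+1, m2+1, 3)."""
    # accumulated chord length along the u-direction, one column per j
    d = np.linalg.norm(np.diff(Q, axis=0), axis=2)             # (m1, m2+1)
    t = np.vstack([np.zeros(Q.shape[1]), np.cumsum(d, axis=0)])
    t /= t[-1]                                                 # normalize to [0, 1]
    u = t.mean(axis=1)                                         # average over each row
    # the same construction in the v-direction
    d = np.linalg.norm(np.diff(Q, axis=1), axis=2)             # (m1+1, m2)
    s = np.hstack([np.zeros((Q.shape[0], 1)), np.cumsum(d, axis=1)])
    s /= s[:, -1:]
    v = s.mean(axis=0)
    return u, v

# demo on a smooth 6 x 5 grid
x, y = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 5), indexing="ij")
u, v = surface_parameters(np.stack([x, y, x * y], axis=2))
```

Both parameter sequences start at 0, end at 1, and increase monotonically, as the chord-length construction guarantees.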

4.2. Selection of the Initial Control Points

According to Theorem 1, the iterative surface sequence converges to the least-squares fitting result of the data points no matter how the initial control points are chosen. In practice, however, the control points should not be selected completely at random. Since the shape of the data set is mainly characterized by points with large curvature, we choose the points with larger curvature as the initial control points so that the proposed method converges quickly.
Similar to the parameterization procedure in Section 4.1, given a data point set $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$, each point $Q_{i,j}$ is assigned a curvature pair $(k_i^u,k_j^v)$. First, the total curvature $k_{i,j}$ is calculated for each point $Q_{i,j}$ as follows.
According to the formula of discrete curvature [33], the discrete curvature $k_{i,j}^u$ in the $u$-direction is expressed as
$$k_{i,j}^u=\frac{\left\|\Delta Q_{i,j}\times\Delta^2Q_{i,j}\right\|}{\left\|\Delta Q_{i,j}\right\|^3},\qquad i=0,1,\ldots,m_1,$$
where $\Delta Q_{i,j}$ and $\Delta^2Q_{i,j}$ are the first-order and second-order differences of $Q_{i,j}$ in the $u$-direction, i.e.,
$$\begin{aligned}\Delta Q_{0,j}&=\frac{Q_{1,j}-Q_{0,j}}{\left\|Q_{1,j}-Q_{0,j}\right\|},\qquad \Delta Q_{i,j}=\frac{Q_{i+1,j}-Q_{i-1,j}}{\left\|Q_{i+1,j}-Q_{i,j}\right\|+\left\|Q_{i,j}-Q_{i-1,j}\right\|},\quad i=1,2,\ldots,m_1-1,\qquad \Delta Q_{m_1,j}=\frac{Q_{m_1,j}-Q_{m_1-1,j}}{\left\|Q_{m_1,j}-Q_{m_1-1,j}\right\|},\\ \Delta^2Q_{0,j}&=\frac{\Delta Q_{1,j}-\Delta Q_{0,j}}{\left\|Q_{1,j}-Q_{0,j}\right\|},\qquad \Delta^2Q_{i,j}=\frac{\Delta Q_{i+1,j}-\Delta Q_{i-1,j}}{\left\|Q_{i+1,j}-Q_{i,j}\right\|+\left\|Q_{i,j}-Q_{i-1,j}\right\|},\quad i=1,2,\ldots,m_1-1,\qquad \Delta^2Q_{m_1,j}=\frac{\Delta Q_{m_1,j}-\Delta Q_{m_1-1,j}}{\left\|Q_{m_1,j}-Q_{m_1-1,j}\right\|}.\end{aligned}$$
The discrete curvature $k_{i,j}^v$ of $Q_{i,j}$ in the $v$-direction is treated analogously. Therefore, the total curvature $k_{i,j}$ of $Q_{i,j}$ is calculated by
$$k_{i,j}=\sqrt{\left(k_{i,j}^u\right)^2+\left(k_{i,j}^v\right)^2},\qquad i=0,1,\ldots,m_1,\ j=0,1,\ldots,m_2.$$
Next, the row curvature $k_i^u$ of $Q_{i,j}$ is obtained by averaging the total curvatures in the $v$-direction:
$$k_i^u=\frac{1}{m_2+1}\sum_{j=0}^{m_2}k_{i,j},\qquad i=0,1,\ldots,m_1.$$
The column curvature $k_j^v$ of $Q_{i,j}$ is calculated in a similar way. Therefore, the curvature pair $(k_i^u,k_j^v)$ of each data point $Q_{i,j}$ is derived.
To choose the control points $\{P_{h,l}\}_{h=0,l=0}^{n_1,n_2}$ $(n_1<m_1,\ n_2<m_2)$ of the initial iterative surface from the original data points $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$, we select their row and column subscripts based on the curvature pairs. Since the boundary points are also feature points, the four corners are always selected, namely the row subscripts $0$, $m_1$ and the column subscripts $0$, $m_2$. For the row curvatures $k_i^u$, we choose the $n_1-1$ row subscripts $\{i_1,\ldots,i_{n_1-1}\}$ from $\{1,\ldots,m_1-1\}$ with the largest row curvatures, where $n_1<m_1$; for the column curvatures $k_j^v$, we choose the $n_2-1$ column subscripts $\{j_1,\ldots,j_{n_2-1}\}$ from $\{1,\ldots,m_2-1\}$ with the largest column curvatures, where $n_2<m_2$. Therefore, the subscripts of the initial control points belong to the subscript sequence
$$\{i_0=0,i_1,\ldots,i_{n_1}=m_1\}\times\{j_0=0,j_1,\ldots,j_{n_2}=m_2\},$$
and the initial positions of the control points are determined by the fact that the parameter pair $(u_i,v_j)$ of each control point satisfies
$$(u_i,v_j)\in\{u_{i_0},u_{i_1},\ldots,u_{i_{n_1}}\}\times\{v_{j_0},v_{j_1},\ldots,v_{j_{n_2}}\}.$$
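The subscript selection can be sketched per direction, assuming NumPy; `select_subscripts` is our illustrative helper, taking the averaged row curvatures $k_i^u$ and returning the sorted row subscripts of the initial control points:

```python
import numpy as np

def select_subscripts(k_row, n1):
    """Boundary rows 0 and m1 plus the n1 - 1 interior rows with the
    largest averaged curvature (Section 4.2)."""
    m1 = len(k_row) - 1
    top = 1 + np.argsort(k_row[1:m1])[::-1][:n1 - 1]   # top n1-1 interior rows
    return np.sort(np.concatenate(([0], top, [m1])))

# demo: interior rows 1 and 5 have the largest averaged curvature
idx = select_subscripts(np.array([0.1, 5.0, 0.2, 3.0, 0.3, 4.0, 0.1]), n1=3)
```

The same helper applied to the column curvatures $k_j^v$ yields the column subscripts.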

4.3. Knot Vectors and Fitting Error

The knot vectors of the bi-cubic tensor product B-spline surface are defined by the following averaging technique [32]:
$$\begin{aligned}&\bar u_0=\bar u_1=\bar u_2=\bar u_3=u_{i_0},\qquad \bar u_{n_1+1}=\bar u_{n_1+2}=\bar u_{n_1+3}=\bar u_{n_1+4}=u_{i_{n_1}},\qquad \bar u_{p+3}=\frac{1}{3}\sum_{k=p}^{p+2}u_{i_k},\quad p=1,2,\ldots,n_1-3,\\&\bar v_0=\bar v_1=\bar v_2=\bar v_3=v_{j_0},\qquad \bar v_{n_2+1}=\bar v_{n_2+2}=\bar v_{n_2+3}=\bar v_{n_2+4}=v_{j_{n_2}},\qquad \bar v_{q+3}=\frac{1}{3}\sum_{k=q}^{q+2}v_{j_k},\quad q=1,2,\ldots,n_2-3.\end{aligned}$$
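The averaging rule can be sketched as follows, assuming NumPy and that `params` holds the control-point parameters $u_{i_0},\ldots,u_{i_{n_1}}$; `averaged_knots` is our illustrative helper (index conventions follow the clamped cubic case, so a spline with $n_1+1$ control points gets $n_1+5$ knots):

```python
import numpy as np

def averaged_knots(params):
    """Clamped cubic knot vector by knot averaging from the control-point
    parameters u_{i_0}, ..., u_{i_n}."""
    n = len(params) - 1
    # interior knots: means of three consecutive parameters, p = 1..n-3
    interior = [params[p:p + 3].mean() for p in range(1, n - 2)]
    return np.concatenate(([params[0]] * 4, interior, [params[-1]] * 4))

knots = averaged_knots(np.linspace(0.0, 1.0, 8))   # n = 7 -> 12 knots
```

Averaging guarantees a non-decreasing knot sequence whenever the input parameters are increasing.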
The fitting error $E_k$ of the $k$-th iteration is defined by
$$E_k=\sum_{i=0}^{m_1}\sum_{j=0}^{m_2}\left\|Q_{i,j}-\sum_{h=0}^{n_1}\sum_{l=0}^{n_2}B_h(u_i)B_l(v_j)P_{h,l}^k\right\|_2^2.\tag{15}$$

4.4. Algorithm Implementation

The process of fitting a data set with the proposed method is described in Algorithm 1.
Algorithm 1. Surface fitting of the data points by the accelerated LSPIA method
Input: The data point set $\{Q_{i,j}\}_{i=0,j=0}^{m_1,m_2}$ and a given iterative tolerance $\varepsilon$.
Output: The fitting surface $P^k(u,v)$.
Step 1: Calculate the parameter pair $(u_i,v_j)$ for each point $Q_{i,j}$ according to Section 4.1.
Step 2: Choose the initial control points $P^0$ according to Section 4.2.
Step 3: Calculate the knot vectors according to Section 4.3.
Step 4: Compute $Z_1^0$ and $Z_2^0$ according to (3).
$k=0$;
do
  • Calculate the difference vectors $\delta^k=Q-B_1P^kB_2^T$;
  • Calculate $Z_1^{k+1}=(2I_1-Z_1^kB_1)Z_1^k$ and $Z_2^{k+1}=(2I_2-Z_2^kB_2)Z_2^k$;
  • Calculate the adjustment vectors $\Delta^k=Z_1^{k+1}\delta^k(Z_2^{k+1})^T$;
  • Construct the control points $P^{k+1}=P^k+\Delta^k$ for the $(k+1)$-st iteration;
  • Calculate the fitting error $E_{k+1}$ by (15);
  • $k=k+1$;
while ($|E_k-E_{k-1}|\ge\varepsilon$).
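Algorithm 1 can be sketched end-to-end for scalar-valued data, assuming NumPy; the random matrices are stand-ins for the collocation matrices built in Steps 1–3, and the stopping rule follows the do-while structure above:

```python
import numpy as np

def fit_until_converged(B1, B2, Q, P0, w1, w2, eps=1e-7, max_iter=100):
    """Iterate format (7) until the fitting error E_k changes by < eps."""
    Z1, Z2 = w1 * B1.T, w2 * B2.T
    I1, I2 = np.eye(B1.shape[1]), np.eye(B2.shape[1])
    P, E_prev = P0.copy(), np.inf
    for _ in range(max_iter):
        delta = Q - B1 @ P @ B2.T
        E = float(np.sum(delta ** 2))        # fitting error, Equation (15)
        if abs(E_prev - E) < eps:
            break                            # termination criterion reached
        E_prev = E
        Z1 = (2 * I1 - Z1 @ B1) @ Z1         # Schulz updates
        Z2 = (2 * I2 - Z2 @ B2) @ Z2
        P = P + Z1 @ delta @ Z2.T
    return P, E

rng = np.random.default_rng(5)
B1, B2 = rng.random((12, 5)), rng.random((10, 4))
Q = rng.random((12, 10))
w1 = 1.8 / np.linalg.eigvalsh(B1.T @ B1).max()
w2 = 1.8 / np.linalg.eigvalsh(B2.T @ B2).max()
P, E = fit_until_converged(B1, B2, Q, rng.random((5, 4)), w1, w2)
```

On termination, the fitting error agrees with the residual of the closed-form least-squares solution up to the chosen tolerance.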

5. Numerical Examples

In this section, we present five examples and compare the results with those of the traditional LSPIA method [18] to show the efficiency of the proposed LSPIA algorithm for the surface fitting of data sets. The fitting surfaces use bi-cubic B-spline basis functions, since the bi-cubic tensor product B-spline surface is simple and widely used in CAD/CAM. Since it is hard to compute the optimal weights accurately, for practical applications we define $\omega_1$, $\omega_2$ in the same way as the weight in [18], i.e.,
$$\omega_1=\frac{2}{\left\|B_1^TB_1\right\|_1},\qquad \omega_2=\frac{2}{\left\|B_2^TB_2\right\|_1}.$$
The point sets in the five examples were as follows.
Example 1: 121 × 161 grid data points sampled from a human face collection.
Example 2: 21 × 21 grid data points sampled from an ear shape flake collection.
Example 3: 41 × 61 grid data points sampled from a mold collection.
Example 4: 81 × 61 grid data points sampled from a tooth shape flake collection.
Example 5: 501 × 501 grid data points sampled from a peaks (501) function collection.
The fitting surfaces and the fitting errors are plotted in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, where the point grids and the initial iterative surfaces are shown in (a), the iterative surfaces after five iterations are provided in (b), the limit fitting surfaces are shown in (c), and the comparison charts of the fitting error vs. the number of iterations for the proposed method and the traditional LSPIA method are presented in (d). From (d), it is easy to observe that the fitting error of the proposed method decreases sharply and satisfies the termination criterion quickly.
Table 1 lists the fitting errors $E_k$ ($k=0,1,\ldots,6$), $E_{\mathrm{stop}}$, and $E_{\infty}$ for Examples 1–5 when using our method and the traditional LSPIA method. The termination criterion is $|E_{k+1}-E_k|<10^{-3}$. Here, $E_{\mathrm{stop}}$ is the fitting error when the algorithm stops, and $E_{\infty}$ is the error between $Q$ and $P^{\infty}$. An empty $E_k$ entry in Table 1 means that the termination criterion was satisfied at an earlier iteration. Table 1 shows that the error of the proposed method is smaller than that of the LSPIA method after the same number of iterations, and the proposed algorithm terminated after about five or six iterations in Examples 1–4.
The error between the control points $P^k$ after $k$ iterations and the limit control points $P^{\infty}$ is defined by
$$e_k=\sum_{h=0}^{n_1}\sum_{l=0}^{n_2}\left\|P_{h,l}^k-P_{h,l}^{\infty}\right\|_2.$$
Table 2 shows that the error $e_k$ of the proposed method is smaller than that of the LSPIA method after $k$ iterations; that is, the control points $P^k$ are much closer to the limit control points.
Table 3 lists the running time and iteration number of the proposed method and the LSPIA method. It can be observed that the iteration number of the LSPIA method is much larger than that of the proposed method; when the termination condition is $|E_{k+1}-E_k|<10^{-7}$, the iteration number of the LSPIA method is 70–140 times that of our method, and it increases rapidly as the termination tolerance decreases. The reason is that our method is a non-stationary iterative method: as the iteration number increases, the spectral radius of the iteration matrix decreases rapidly and tends to zero. Although our iterative format is more complex, the running time of the proposed method is less than that of the LSPIA method. In particular, the proposed method is clearly more advantageous and efficient than the LSPIA method for large-scale fitting problems, such as in Example 5.

6. Conclusions

In this paper, we proposed an efficient least-squares progressive iterative method for tensor product surfaces. By combining the Schulz iterative method for computing the Moore–Penrose inverse with the traditional LSPIA method, we designed an accelerated LSPIA iterative format with a fast convergence rate. In addition, the iterative format is expressed in matrix form, which differs from previous methods; the computational cost is greatly reduced, since the method avoids forming the Kronecker product. The presented method is well suited to big-data fitting in engineering applications. Future research may focus on extending the method to other kinds of PIA methods.

Author Contributions

Conceptualization, Q.H.; Methodology, Q.H.; Software, Z.W.; Validation, R.L.; Writing—original draft, Z.W.; Writing—review & editing, Q.H. and R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62272406).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated or analyzed in this study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to thank the anonymous referees for their valuable suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lin, H.; Jin, S.; Liao, H.; Jian, Q. Quality guaranteed all-hex mesh generation by a constrained volume iterative fitting algorithm. Comput. Aided Des. 2015, 67, 107–117.
2. Martin, T.; Cohen, E.; Kirby, R. Volumetric parameterization and trivariate B-spline fitting using harmonic functions. Comput. Aided Des. 2009, 26, 648–664.
3. Lin, H.; Jin, S.; Hu, Q.; Liu, Z. Constructing B-spline solids from tetrahedral meshes for isogeometric analysis. Comput. Aided Geom. Des. 2015, 35, 109–120.
4. Kineri, Y.; Wang, M.; Lin, H.; Maekawa, T. B-spline surface fitting by iterative geometric interpolation/approximation algorithms. Comput. Aided Des. 2012, 44, 697–708.
5. Qi, D.; Tian, Z.; Zhang, Y.; Zheng, J.B. The method of numeric polish in curve fitting. Acta Math. Sin. 1975, 18, 173–184. (In Chinese)
6. De Boor, C. How does Agee's smoothing method work. In Proceedings of the 1979 Army Numerical Analysis and Computers Conference, ARO Report, Madison, WI, USA, 1979; pp. 79–83.
7. Lin, H.; Wang, G.; Dong, C. Constructing iterative non-uniform B-spline curve and surface to fit data points. Sci. China Ser. Inf. Sci. 2004, 47, 315–331.
8. Lin, H.W.; Bao, H.J.; Wang, G.J. Totally positive bases and progressive iterative approximation. Comput. Math. Appl. 2005, 50, 575–586.
9. Lin, H.; Maekawa, T.; Deng, C. Survey on geometric iterative methods and their applications. Comput. Aided Des. 2018, 95, 40–51.
10. Lin, H. Local progressive-iterative approximation format for blending curves and patches. Comput. Aided Geom. Des. 2010, 27, 322–339.
11. Hu, Q.; Wang, J.; Liang, R. Weighted local progressive-iterative approximation property for triangular Bézier surfaces. Vis. Comput. 2022, 28, 2819–3830.
12. Lu, L. Weighted progressive iteration approximation and convergence analysis. Comput. Aided Geom. Des. 2010, 27, 129–137.
13. Liu, C.; Han, X.; Li, J. The Chebyshev accelerating method for progressive iterative approximation. Commun. Inf. Syst. 2017, 17, 25–43.
14. Liu, C.; Han, X.; Li, J. Preconditioned progressive iterative approximation for triangular Bézier patches and its application. J. Comput. Appl. Math. 2020, 366, 112389.
15. Ebrahimi, A.; Loghmani, G. A composite iterative procedure with fast convergence rate for the progressive-iteration approximation of curves. J. Comput. Appl. Math. 2019, 359, 1–15.
16. Delgado, J.; Peña, J.M. Progressive iterative approximation and bases with the fastest convergence rates. Comput. Aided Geom. Des. 2007, 24, 10–18.
17. Hamza, Y.; Lin, H.; Li, Z. Implicit progressive-iterative approximation for curve and surface reconstruction. Comput. Aided Geom. Des. 2020, 77, 101817.
18. Deng, C.; Lin, H. Progressive and iterative approximation for least squares B-spline curve and surface fitting. Comput. Aided Des. 2014, 47, 32–44.
19. Wang, H. On extended progressive and iterative approximation for least squares fitting. Vis. Comput. 2022, 38, 591–602.
20. Hu, Q.; Wang, J.; Wang, G. Improved least square progressive iterative approximation format for triangular Bézier surfaces. J. Comput. Aided Des. Comput. Graph. 2022, 34, 777–783.
21. Sajavičius, S. Hyperpower least squares progressive iterative approximation. J. Comput. Appl. Math. 2023, 422, 114888.
22. Lin, H.; Cao, Q.; Zhang, X. The convergence of least-squares progressive iterative approximation for singular least-squares fitting system. J. Syst. Sci. Complex. 2018, 31, 1618–1632.
23. Shi, F. Computer Aided Geometric Design with Non-Uniform Rational B-Splines; Higher Education Press: Beijing, China, 2013. (In Chinese)
24. Massarwi, F.; van Sosin, B.; Elber, G. Untrimming: Precise conversion of trimmed-surfaces to tensor-product surfaces. Comput. Graph. 2018, 70, 80–91.
25. Vaitkus, M.; Várady, T. Parameterizing and extending trimmed regions for tensor-product surface fitting. Comput. Aided Des. 2018, 104, 125–140.
26. Marco, A.; Martínez, J.J. A fast and accurate algorithm for solving Bernstein–Vandermonde linear systems. Linear Algebra Appl. 2007, 422, 616–628.
27. Schulz, G. Iterative Berechnung der reziproken Matrix. ZAMM Z. Angew. Math. Mech. 1933, 13, 57–59.
28. De Boor, C. Total positivity of the spline collocation matrix. Indiana Univ. Math. J. 1976, 25, 541–551.
29. Penrose, R. A generalized inverse for matrices. In Mathematical Proceedings of the Cambridge Philosophical Society; Cambridge University Press: Cambridge, UK, 1955; Volume 51, pp. 406–413.
30. Henderson, H.; Searle, S. The vec-permutation matrix, the vec operator and Kronecker products: A review. Linear Multilinear Algebra 1981, 9, 271–288.
31. Brewer, J. Kronecker products and matrix calculus in system theory. IEEE Trans. Circuits Syst. 1978, 25, 772–781.
32. Piegl, L.; Tiller, W. The NURBS Book; Springer: Berlin/Heidelberg, Germany, 1997.
  33. Ray, B.; Pandyan, R. ACORD—An adaptive corner detector for planar curves. Pattern Recognit. 2003, 36, 703–708. [Google Scholar] [CrossRef]
Figure 1. (a) The u-direction and the v-direction; (b) the row parameter u_i.
Figure 2. A data point grid with 121 × 161 points was fitted by a bi-cubic tensor product B-spline surface with 60 × 80 control points.
Figure 3. A data point grid with 21 × 21 points was fitted by a bi-cubic tensor product B-spline surface with 10 × 10 control points.
Figure 4. A data point grid with 41 × 61 points was fitted by a bi-cubic tensor product B-spline surface with 20 × 30 control points.
Figure 5. A data point grid with 81 × 61 points was fitted by a bi-cubic tensor product B-spline surface with 40 × 30 control points.
Figure 6. A data point grid with 501 × 501 points was fitted by a bi-cubic tensor product B-spline surface with 250 × 250 control points.
Table 1. Fitting errors E_k (k = 0, 1, …, 6), E_stop, and E_∞ for Examples 1–5.

| Example | Method | E_0 | E_1 | E_2 | E_3 | E_4 | E_5 | E_6 | E_stop | E_∞ |
|---|---|---|---|---|---|---|---|---|---|---|
| Ex. 1 | LSPIA | 21.96525 | 9.58174 | 6.13142 | 4.42516 | 3.39926 | 2.71392 | 2.22640 | 0.04658 | 0.00292 |
| Ex. 1 | Ours | 21.96525 | 10.07568 | 2.70966 | 0.30180 | 0.01780 | 0.00355 | – | 0.00293 | 0.00293 |
| Ex. 2 | LSPIA | 4.80661 | 1.10115 | 0.40587 | 0.20703 | 0.13203 | 0.09708 | 0.07782 | 0.02730 | 0.01619 |
| Ex. 2 | Ours | 4.80661 | 1.92369 | 0.27769 | 0.02663 | – | – | – | 0.01687 | 0.01620 |
| Ex. 3 | LSPIA | 17.46394 | 5.96253 | 2.76524 | 1.53887 | 0.99237 | 0.71154 | 0.54684 | 0.02828 | 0.00766 |
| Ex. 3 | Ours | 17.46394 | 9.37228 | 2.61701 | 0.17271 | 0.01061 | – | – | 0.00773 | 0.00766 |
| Ex. 4 | LSPIA | 15.25581 | 5.74124 | 3.07872 | 1.97844 | 1.42757 | 1.10820 | 0.90186 | 0.07447 | 0.03950 |
| Ex. 4 | Ours | 15.25581 | 7.64569 | 1.98735 | 0.18241 | 0.04691 | – | – | 0.03974 | 0.03950 |
| Ex. 5 | LSPIA | 173,996.46 | 144,906.89 | 132,412.83 | 122,822.74 | 114,511.11 | 107,057.94 | 100,288.95 | 15.51962 | 14.63951 |
| Ex. 5 | Ours | 173,996.46 | 140,489.44 | 95,713.553 | 43,647.2689 | 8,899.595 | 2,434.9192 | – | 716.787049 | 14.63951 |
Table 2. Fitting errors e_k (k = 0, 1, …, 7) for Examples 1–5.

| Example | Method | e_0 | e_1 | e_2 | e_3 | e_4 | e_5 | e_6 | e_7 |
|---|---|---|---|---|---|---|---|---|---|
| Ex. 1 | LSPIA | 4.496 × 10^0 | 3.406 × 10^0 | 2.840 × 10^0 | 2.454 × 10^0 | 2.167 × 10^0 | 1.943 × 10^0 | 1.763 × 10^0 | 1.615 × 10^0 |
| Ex. 1 | Ours | 4.496 × 10^0 | 2.558 × 10^0 | 1.024 × 10^0 | 2.497 × 10^−1 | 3.112 × 10^−2 | 1.891 × 10^−3 | 5.310 × 10^−5 | 1.163 × 10^−7 |
| Ex. 2 | LSPIA | 2.107 × 10^0 | 9.672 × 10^−1 | 6.391 × 10^−1 | 4.975 × 10^−1 | 4.204 × 10^−1 | 3.710 × 10^−1 | 3.354 × 10^−1 | 3.079 × 10^−1 |
| Ex. 2 | Ours | 2.107 × 10^0 | 9.146 × 10^−1 | 2.761 × 10^−1 | 8.403 × 10^−2 | 1.375 × 10^−2 | 3.041 × 10^−4 | 7.247 × 10^−8 | 1.754 × 10^−15 |
| Ex. 3 | LSPIA | 2.419 × 10^0 | 1.406 × 10^0 | 1.013 × 10^0 | 7.989 × 10^−1 | 6.639 × 10^−1 | 5.694 × 10^−1 | 4.986 × 10^−1 | 4.430 × 10^−1 |
| Ex. 3 | Ours | 2.419 × 10^0 | 1.165 × 10^0 | 3.351 × 10^−1 | 6.330 × 10^−2 | 8.403 × 10^−3 | 3.900 × 10^−4 | 3.268 × 10^−6 | 4.527 × 10^−10 |
| Ex. 4 | LSPIA | 3.026 × 10^0 | 2.054 × 10^0 | 1.632 × 10^0 | 1.383 × 10^0 | 1.215 × 10^0 | 1.091 × 10^0 | 9.940 × 10^−1 | 9.148 × 10^−1 |
| Ex. 4 | Ours | 3.026 × 10^0 | 1.611 × 10^0 | 6.031 × 10^−1 | 1.610 × 10^−1 | 2.331 × 10^−2 | 1.032 × 10^−3 | 8.683 × 10^−6 | 2.193 × 10^−9 |
| Ex. 5 | LSPIA | 1.159 × 10^4 | 1.113 × 10^4 | 1.072 × 10^4 | 1.035 × 10^4 | 1.000 × 10^4 | 9.676 × 10^3 | 9.372 × 10^3 | 9.088 × 10^3 |
| Ex. 5 | Ours | 1.159 × 10^4 | 1.043 × 10^4 | 7.890 × 10^3 | 4.187 × 10^3 | 1.225 × 10^3 | 1.295 × 10^2 | 3.505 × 10^0 | 9.999 × 10^−2 |
Table 3. Comparison of the iteration number and running time of our method and the LSPIA method under different termination criteria.

| Example | Criterion | LSPIA Time (s) | LSPIA Iterations | Ours Time (s) | Ours Iterations |
|---|---|---|---|---|---|
| Ex. 1 | 1 × 10^−3 | 0.4640 | 80 | 0.3863 | 6 |
| Ex. 1 | 1 × 10^−5 | 0.7520 | 350 | 0.3971 | 7 |
| Ex. 1 | 1 × 10^−7 | 1.2650 | 771 | 0.4210 | 8 |
| Ex. 2 | 1 × 10^−3 | 0.0123 | 20 | 0.0102 | 5 |
| Ex. 2 | 1 × 10^−5 | 0.0145 | 110 | 0.0118 | 6 |
| Ex. 2 | 1 × 10^−7 | 0.0346 | 712 | 0.0128 | 7 |
| Ex. 3 | 1 × 10^−3 | 0.0544 | 40 | 0.0512 | 6 |
| Ex. 3 | 1 × 10^−5 | 0.0720 | 190 | 0.0526 | 7 |
| Ex. 3 | 1 × 10^−7 | 0.1250 | 571 | 0.0537 | 8 |
| Ex. 4 | 1 × 10^−3 | 0.1180 | 61 | 0.0951 | 6 |
| Ex. 4 | 1 × 10^−5 | 0.1710 | 262 | 0.0962 | 7 |
| Ex. 4 | 1 × 10^−7 | 0.2870 | 652 | 0.0971 | 8 |
| Ex. 5 | 1 × 10^−3 | 230.465 | 4076 | 7.079 | 9 |
| Ex. 5 | 1 × 10^−5 | 533.765 | 9154 | 7.386 | 10 |
| Ex. 5 | 1 × 10^−7 | 1151.947 | 18,837 | 8.104 | 11 |
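The convergence contrast in Tables 1–3 (slow linear convergence for stationary LSPIA versus rapid error decay for the accelerated method) can be illustrated on a toy problem. The sketch below is illustrative only and is not the paper's implementation: it fits an overdetermined linear least-squares system, comparing a plain LSPIA-style iteration on the normal equations with a Schulz iteration that refines an approximation of the Moore–Penrose inverse. The matrix sizes, step size `mu`, and iteration counts are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))   # collocation-style matrix (rows: data points, cols: control points)
b = rng.standard_normal(40)         # data values to fit

# Reference least-squares solution A^+ b.
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# Step size 0 < mu < 2 / sigma_max(A)^2 guarantees convergence of both schemes.
mu = 1.0 / np.linalg.norm(A, 2) ** 2

# Plain LSPIA-style iteration: x_{k+1} = x_k + mu * A^T (b - A x_k), linear convergence.
x_lspia = np.zeros(10)
for _ in range(500):
    x_lspia += mu * (A.T @ (b - A @ x_lspia))

# Schulz iteration: X_{k+1} = X_k (2I - A X_k) drives X_k toward A^+ quadratically,
# starting from X_0 = mu * A^T; the fitted coefficients are X_k b.
X = mu * A.T
for _ in range(15):
    X = X @ (2.0 * np.eye(40) - A @ X)
x_schulz = X @ b

err_lspia = np.linalg.norm(x_lspia - x_star)
err_schulz = np.linalg.norm(x_schulz - x_star)
```

With these sizes, the Schulz variant reaches near machine precision within a handful of iterations, mirroring the rapid decay reported for "Ours" in Table 2, while the stationary iteration needs hundreds of steps for comparable accuracy.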
Hu, Q.; Wang, Z.; Liang, R. Improved Least-Squares Progressive Iterative Approximation for Tensor Product Surfaces. Mathematics 2023, 11, 670. https://doi.org/10.3390/math11030670