Article

Total Least Squares Spline Approximation

1 Institute of Geodesy and Geoinformation Science, Technische Universität Berlin, 10623 Berlin, Germany
2 Section 1.2: Global Geomonitoring and Gravity Field, GFZ German Research Centre for Geosciences, 14473 Potsdam, Germany
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(5), 462; https://doi.org/10.3390/math7050462
Submission received: 17 April 2019 / Revised: 10 May 2019 / Accepted: 14 May 2019 / Published: 22 May 2019

Abstract
Spline approximation, using both values $y_i$ and $x_i$ as observations, is of vital importance for engineering geodesy, e.g., for the approximation of profiles measured with terrestrial laser scanners, because it enables the consideration of arbitrary dispersion matrices for the observations. In the special case of equally weighted and uncorrelated observations, the resulting error vectors are orthogonal to the graph of the spline function and hence can be utilized for deformation monitoring purposes. Based on a functional model that uses cubic polynomials and constraints for continuity, smoothness and continuous curvature, the case of spline approximation with both the values $y_i$ and $x_i$ as observations is considered. In this case, some of the columns of the functional matrix contain observations and are thus subject to random errors. In the literature on mathematics and statistics this case is known as an errors-in-variables (EIV) model, for which a so-called “total least squares” (TLS) solution can be computed. If weights for the observations and additional constraints for the unknowns are introduced, a “constrained weighted total least squares” (CWTLS) problem is obtained. In this contribution, it is shown that the solution for this problem can be obtained from a rigorous solution of an iteratively linearized Gauss-Helmert (GH) model. The advantage of this model is that it does not impose any restrictions on the form of the functional relationship between the involved quantities. Furthermore, dispersion matrices can be introduced without limitations; even the consideration of singular ones is possible. Therefore, the iteratively linearized GH model can be regarded as a generalized approach for solving CWTLS problems. Using a numerical example, it is demonstrated how the GH model can be applied to obtain a spline approximation with orthogonal error vectors. The error vectors are compared with those derived from two least squares (LS) approaches.

1. Introduction

To capture the geometry of arbitrary objects as completely as possible, areal (area-wise, in contrast to point-wise) measurement methods are nowadays applied in engineering geodesy. Examples are terrestrial laser scanning or photogrammetric approaches based on the evaluation of image data. However, the resulting point cloud is almost never used as a final result, since the creation of Computer-aided Design (CAD) models or deformation analyses require a curve or surface approximation by a continuous mathematical function. An overview of curve and surface approximations of three-dimensional (3D) point clouds, including polynomial curves, Bézier curves, B-spline curves and NURBS curves, was presented by Bureick et al. [1]. Ezhov et al. [2] elaborated the basic methodology of spline approximation in two dimensions (2D) from a geodetic point of view using splines constructed from ordinary polynomials of the form $y_i = f(x_i)$. In the resulting adjustment problem, the values $y_i$ were introduced as observations subject to random errors and the values $x_i$ as error-free values. This approach is relevant, for example, for the following applications:
  • Registration of a signal at certain time intervals, whereby the time measurement is much more precise than the measurement of the signal and can therefore be considered as error-free.
  • Acquisition of measurements at fixed sensor positions, e.g., measurement of vertical displacements with displacement transducers at fixed horizontal positions during a load test.
  • Regarding the values $x_i$ as error-free to generate a solution using a linear functional model. This can be justified when the interpretation of the error vectors (in contrast to deformation analysis) is not the primary aim, but point clouds are approximated to obtain results that primarily satisfy aesthetic requirements.
The fundamental concept and the statistical properties of geodetic measurements are explained, e.g., by Mikhail [3] (p. 3 ff.). In that textbook, on page 24, it is pointed out that normal distributions are encountered very often in practical applications, since in geodesy random variables that represent measurements (respectively, their errors) are often nearly normally distributed. In this contribution we consider the case that both values $y_i$ and $x_i$ of spline functions of the form $y_i = f(x_i)$ are observations subject to random errors. An example for this case is the spline approximation in the field of deformation monitoring, where the data were captured with a profile laser scanner.
If the dispersion matrix is a scalar multiple of the identity matrix, the observations $y_i$ and $x_i$ are equally weighted and uncorrelated, and a parameter estimation using the method of least squares yields a result where the error vector of each point is orthogonal to the computed spline approximation. This characteristic is of vital importance for the interpretation of the error vectors, not only for the purpose of deformation analysis in engineering geodesy but in many other scientific fields. Hence, a vast literature has been created that discusses the general problem of “orthogonal distance fitting”. Elaborations of this topic are offered, e.g., by Sourlier [4], Turner [5] and Ahn [6].
In the contribution of Crambes [7], the case of spline approximation with $y_i$ and $x_i$ as erroneous observations is regarded as an errors-in-variables (EIV) model and a total least squares (TLS) solution is presented. A total least squares fitting of Bézier curves to ordered data, with an extension to B-spline curves, was proposed by Borges and Pastva [8]. They developed an algorithm for applying the Gauss-Newton approach to this problem with a direct method for evaluating the Jacobian based on implicitly differentiating a pseudo-inverse. However, a dispersion matrix for the observations was not introduced.
Jazaeri et al. [9] have presented a new algorithm for solving arbitrary weighted TLS problems within an EIV model having linear and quadratic constraints and singular dispersion matrices. They equivalently translate the weighted total least squares (WTLS) objective function, under consideration of the constraints, into the Lagrange target function of the constrained minima technique to derive an iterative solution. Fang [10] presented a WTLS and constrained weighted total least squares (CWTLS) solution based on Newton iteration.
In this contribution, we follow the idea of Neitzel [11] and present a total least squares spline approximation as a rigorous evaluation of an iteratively linearized Gauss-Helmert (GH) model, taking into account additional constraints for the unknowns. This model has the advantage that it does not impose any restrictions on the form of the functional relationship between the quantities involved in the model. The functional model of the 2D similarity transformation discussed by Neitzel [11], e.g., yields a functional matrix that contains two columns without any random errors (“fixed” or “frozen” columns in the frame of TLS terminology) and two columns in which the same $x$- and $y$-values appear crosswise (one with reversed sign). To appropriately consider this structure of the functional matrix, special TLS algorithms must be applied, as presented by Mahboub [12], Schaffrin et al. [13] or Han et al. [14], while this case can be solved in a standard way in the GH model. Furthermore, arbitrary and even singular dispersion matrices can be introduced into the GH model, as shown by Schaffrin et al. [15] and Neitzel and Schaffrin [16]. Therefore, the iteratively linearized GH model can be regarded as a generalized approach for solving constrained weighted total least squares (CWTLS) problems.
The aim of this publication is not to investigate which algorithm is more efficient, but rather to show that the TLS spline approximation can be solved in the well-known GH model, which can be traced back to Helmert [17] (p. 285 ff.), and to offer an algorithm that can be used for solving practical geodetic problems.

2. A Review of the Errors-in-Variables (EIV) Model

Using the function of a cubic polynomial as an example, a brief overview of the errors-in-variables (EIV) model with a regular dispersion matrix is presented in the following. For the case of the EIV model with a singular dispersion matrix, please refer to the contribution of Schaffrin et al. [15]. Ezhov et al. [2] elaborated a spline function constructed from ordinary cubic polynomials
$$ y_i = a_0 + a_1 x_i + a_2 x_i^2 + a_3 x_i^3 \qquad (1) $$
with additional constraints to impose continuity, smoothness and continuous curvature. Under consideration of the values $y_i$ as observations subject to random errors $e_{y_i}$, the values $x_i$ as fixed (error-free) values and $a_0, a_1, a_2, a_3$ as unknowns, the observation equations
$$ y_i - e_{y_i} = \hat{a}_0 + \hat{a}_1 x_i + \hat{a}_2 x_i^2 + \hat{a}_3 x_i^3 \qquad (2) $$
are obtained. For a description of the constraint equations please refer to Ezhov et al. [2]. The resulting adjustment problem can be expressed as
$$ L - e_L = A\hat{X}, \qquad (3) $$
$$ e_L \sim (0,\; \sigma_0^2 Q_{LL}), \qquad (4) $$
$$ c = C\hat{X}, \qquad (5) $$
where
  • $L$ is the observation vector containing the values $y_i$,
  • $e_L$ the normally distributed random error vector associated with $L$, containing the values $e_{y_i}$ with expected value $0$ and dispersion matrix $Q_{LL}$,
  • $A$ the functional matrix containing the coefficients $1, x_i, x_i^2, x_i^3$ of the linear functional model,
  • $\hat{X}$ the vector of adjusted unknowns containing the parameters $\hat{a}_0^j, \hat{a}_1^j, \hat{a}_2^j, \hat{a}_3^j$,
  • $c$ the vector with given constant values,
  • $C$ the functional matrix containing the coefficients of the linear constraints for the unknowns (refer to Ezhov et al. [2] for details), and
  • $\sigma_0^2$ the theoretical reference variance factor.
With $Q_{LL}$ being a non-singular matrix, the corresponding weight matrix can be computed from
$$ P = Q_{LL}^{-1}. \qquad (6) $$
Introducing the vector $\lambda$ of Lagrange multipliers, the constrained minima technique can be applied, which yields the objective function
$$ \Phi(e_L, \lambda, \hat{X}) = e_L^T P e_L + 2\lambda^T (C\hat{X} - c) = \min, \qquad (7) $$
compare with the textbook by Mikhail [3] (p. 215). The solution can be obtained from a Gauss-Markov (GM) model with constraints, as shown in detail by Ezhov et al. [2].
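To make this linear baseline concrete, the following minimal sketch (Python with numpy; the function name and interface are our assumptions, not code from the paper) solves the GM model with constraints via the bordered normal equations that follow from (7):

```python
import numpy as np

def gm_with_constraints(A, L, Q_LL, C, c):
    """Solve L - e = A X_hat with constraints C X_hat = c; a sketch.

    Minimizes e^T P e with P = Q_LL^{-1} using the bordered normal equations
    [A^T P A, C^T; C, 0] [X_hat; lam] = [A^T P L; c].
    """
    P = np.linalg.inv(Q_LL)
    N = A.T @ P @ A                      # normal matrix
    u = A.T @ P @ L                      # right-hand side
    k = C.shape[0]
    K = np.block([[N, C.T], [C, np.zeros((k, k))]])
    sol = np.linalg.solve(K, np.concatenate([u, c]))
    X_hat, lam = sol[:A.shape[1]], sol[A.shape[1]:]
    e = L - A @ X_hat                    # residuals
    return X_hat, e, lam
```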
If we now consider both the values $y_i$ and the values $x_i$ as observations, we have to introduce not only errors $e_{y_i}$ associated with the values $y_i$ but also errors $e_{x_i}$ associated with the observations $x_i$, and we obtain the condition equations
$$ y_i - e_{y_i} = \hat{a}_0 + \hat{a}_1 (x_i - e_{x_i}) + \hat{a}_2 (x_i - e_{x_i})^2 + \hat{a}_3 (x_i - e_{x_i})^3. \qquad (8) $$
For a description of the additional constraint equations please refer to Ezhov et al. [2]. Starting from (3), these condition equations can be expressed in a general form as a so-called errors-in-variables (EIV) model (in which some of the columns of the functional matrix $A$ contain observations and are thus subject to random errors $E_A$) with constraints
$$ L - e_L = (A - E_A)\hat{X}, \qquad (9) $$
$$ c = C\hat{X}. \qquad (10) $$
The normally distributed random error vector associated with $L$ and $A$, containing the vectors $e_L$ and $e_A$ with expected value $[0 \;\; 0]^T$ and dispersion matrix $Q$, reads
$$ \begin{bmatrix} e_L \\ e_A \end{bmatrix} = \begin{bmatrix} e_L \\ \operatorname{vec} E_A \end{bmatrix} \sim \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\; \sigma_0^2 Q = \sigma_0^2 \begin{bmatrix} Q_{LL} & Q_{LA} \\ Q_{AL} & Q_{AA} \end{bmatrix} \right), \qquad (11) $$
where
  • $E_A$ is the random error matrix associated with $A$ containing the values $e_{x_i}$, and
  • $e_A$ the vectorized form of $E_A$.
From the non-singular dispersion matrix $Q$, the weight matrix
$$ P = Q^{-1} = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \qquad (12) $$
can be computed, which leads to the objective function
$$ \Phi(e_L, e_A, \lambda, \hat{X}) = \begin{bmatrix} e_L^T & e_A^T \end{bmatrix} \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} e_L \\ e_A \end{bmatrix} + 2\lambda^T (C\hat{X} - c) = \min. \qquad (13) $$
The solution for this EIV model can be determined (amongst others) using the TLS approach, more precisely in this case the constrained weighted total least squares (CWTLS) approach. The notion “total least squares” (TLS) was introduced by Golub and van Loan [18], and very elegant solution algorithms based on singular value decomposition (SVD) were developed; see, e.g., the contributions by Golub and van Loan [18], van Huffel and Vandewalle [19] (p. 29 ff.) or Markovsky and van Huffel [20].
However, a solution with SVD is only possible if the functional model of the adjustment problem is pseudo-linear and if it has a certain structure, yielding a functional matrix that does not contain fixed (error-free) columns or columns in which the same variables occur multiple times (e.g., crosswise). The problem discussed in this article does not belong to this class. A systematic approach for direct solutions of specific non-linear adjustment problems such as fitting of a straight line in 2D and 3D, fitting of a plane in 3D and 2D similarity transformation with all coordinates introduced as observations subject to random errors was elaborated from a geodetic point of view by Malissiovas et al. [21].
Jazaeri et al. [9] have presented a new flexible algorithm for solving the weighted TLS problem within an EIV model having linear and quadratic constraints and singular dispersion matrices. Their iterative algorithm is based on an equivalent translation of the objective function (13) subject to (9) and (10) into a Lagrange target function. The case of quadratic constraints occurs, e.g., when the 2D or 3D coordinate transformation is formulated for the most general case (affine transformation) and the cases of similarity or rigid transformation are imposed with the help of constraint equations, which was shown by Fang [10]. Singular dispersion matrices are always associated with coordinates computed from a free network adjustment, as explained by Neitzel and Schaffrin [16].
The term errors-in-variables (EIV) model was introduced in the literature on mathematics and statistics. It is used for regression models where measurement errors are taken into account for both the dependent and the independent variables. For geodesists, this terminology is rather confusing, because whenever we introduce errors, they are inherently associated with the term observations and not with variables. A remark of this kind was also given by Fuller [22], who explained: “The term errors in variables is used in econometrics, while the terms observation error or measurement error are common in other areas of statistical application.” Thus, it is immediately evident that an EIV model is just another formulation of a well-known adjustment problem. Therefore, we can rearrange the EIV model (8) and obtain an implicit form of the functional relation
$$ y_i - e_{y_i} - \hat{a}_0 - \hat{a}_1 (x_i - e_{x_i}) - \hat{a}_2 (x_i - e_{x_i})^2 - \hat{a}_3 (x_i - e_{x_i})^3 = 0, \qquad (14) $$
which is an example of a functional relationship that leads to the adjustment of (nonlinear) condition equations with unknowns that can already be found in the classical textbook of Helmert [17] (p. 285 ff.).
Neitzel [11] showed, using the example of the unweighted and weighted 2D similarity transformation, that the TLS solution within an EIV model can be obtained from a rigorous solution of an iteratively linearized Gauss-Helmert model. After the definition of a spline constructed from ordinary cubic polynomials in Section 3 and the definition of the problem in Section 4, we will follow this idea and show in Section 5 how to derive the constrained weighted total least squares (CWTLS) solution of an EIV model from a rigorous solution of an iteratively linearized Gauss-Helmert model with additional constraints for the unknowns. A numerical example will be presented in Section 6, followed by conclusions and outlook in Section 7.

3. Definition of a Spline Constructed from Ordinary Cubic Polynomials

For the definition of a spline constructed from ordinary cubic polynomials, the elaboration from Ezhov et al. [2] is presented in what follows. Considering a 2D Cartesian coordinate system, Figure 1 illustrates a spline interpolation where the given knots are interpolated by a piecewise continuous spline consisting of polynomials which meet at these knots. The coordinates of the knots are denoted by $(x_{k_j}, y_{k_j})$, where $j = 0, 1, \ldots, n$ is the index (ordinal number) of the knot and $n$ the number of segments. The character $k$ stands for “knot” and is not an index; hence, $x_{k_j}$ means “$x$-value of the knot $j$”. Then the spline is given by the function $S(x)$ with the following properties:
  • $S(x)$ is a piecewise cubic polynomial with knots at $x_{k_0}, x_{k_1}, \ldots, x_{k_n}$,
  • $S(x)$ is a linear polynomial for $x \le x_{k_0}$ and $x \ge x_{k_n}$,
  • $S(x)$ has two continuous derivatives everywhere,
  • $S(x_{k_j}) = y_{k_j}$ for $j = 0, 1, \ldots, n$.
According to Schumaker [23] (p. 7), $S(x)$ (see Figure 1), defined as above, is in a way the best interpolating function. This is one of the reasons why the cubic spline is generally accepted as the most relevant of all splines.
Starting from the above mentioned properties a spline function can be defined as
$$ S(x) = \begin{cases} S_1(x), & x_{k_0} \le x \le x_{k_1} \\ \quad\vdots \\ S_j(x), & x_{k_{j-1}} \le x \le x_{k_j} \\ \quad\vdots \\ S_n(x), & x_{k_{n-1}} \le x \le x_{k_n}, \end{cases} \qquad (15) $$
where the expression for a cubic polynomial in each segment $j$ can be written as
$$ S_j(x) = a_0^j + a_1^j x + a_2^j x^2 + a_3^j x^3, \quad j = 1, 2, \ldots, n. \qquad (16) $$
The property that the spline has to meet the data exactly at the given knots yields the conditions
$$ S_j(x_{k_{j-1}}) = y_{k_{j-1}} \quad\text{and}\quad S_j(x_{k_j}) = y_{k_j}, \quad j = 1, 2, \ldots, n. \qquad (17) $$
The property that $S(x)$ has two continuous derivatives everywhere yields the constraints
$$ S_j'(x_{k_j}) = S_{j+1}'(x_{k_j}), \quad j = 1, 2, \ldots, n-1, \qquad (18) $$
$$ S_j''(x_{k_j}) = S_{j+1}''(x_{k_j}), \quad j = 1, 2, \ldots, n-1. \qquad (19) $$
From Equation (16) and Figure 1 we see that there are $n$ polynomial functions $S_j(x)$ and $n+1$ knots. The cubic function depends on four parameters, and consequently the spline $S(x)$ on $4n$ unknowns. From (17), we obtain $2n$ and from (18) and (19) we get $2(n-1)$ condition equations. This results in $4n-2$ condition equations to be solved for $4n$ unknowns; hence the equation system is underdetermined. To overcome this problem, arbitrary additional conditions can be introduced. Usually the missing constraints are introduced at the first and the last knot. For more details refer to Ezhov et al. [2].
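As a small illustration of (15) and (16), the following sketch (Python with numpy; the helper name eval_spline and the coefficient layout are our assumptions) evaluates such a piecewise cubic:

```python
import numpy as np

def eval_spline(x, knots, coeffs):
    """Evaluate the piecewise cubic S(x) of (15)-(16).

    knots  : array of length n+1 with x_k0 < ... < x_kn
    coeffs : (n, 4) array; row j-1 holds (a_0^j, a_1^j, a_2^j, a_3^j)
    """
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # segment index j-1 of every x; end values are clipped into the outer segments
    seg = np.clip(np.searchsorted(knots, x, side="right") - 1, 0, len(coeffs) - 1)
    a = coeffs[seg]                      # per-point polynomial coefficients
    return a[:, 0] + a[:, 1] * x + a[:, 2] * x**2 + a[:, 3] * x**3
```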
The main application of spline interpolation is the construction of curves through a sparse sequence of data points, e.g., in the construction of aesthetically pleasing curves in CAD models. The aim is different when modern sensors such as laser scanners are used with an acquisition rate of up to 2 million points per second, providing a very dense sequence of data points. Due to the high point density and the fact that measurements are affected by random errors, it is no longer appropriate to apply spline interpolation, since observation errors and other abrupt changes in the data points would be modeled, resulting in a strongly oscillating spline.
To avoid this effect, the point sequence is divided into a limited number of intervals using predefined knots. Due to the high point density, this results in an overdetermined configuration, and the parameters of the piecewise cubic polynomials can be determined using least squares adjustment. Conditions for smoothness at the knots (where two polynomials meet) can be introduced as constraint equations into the formulation of the adjustment problem. The result is a spline that no longer follows all data points exactly, but as closely as possible, subject to a predefined target function. In the case of least squares adjustment, this means that the sum of the weighted squared residuals becomes minimal. This case, illustrated in Figure 2, is called spline approximation.
For formulating this approximation problem, cubic polynomials (16) are used for each interval of the spline. Furthermore, constraint equations for continuity (17), smoothness (18) and continuous curvature (19) are to be introduced at the knots. Both the values $y_i$ and the values $x_i$ are regarded as observations subject to random errors. An example for this case is the spline approximation in the field of deformation monitoring, where the data were captured with a profile laser scanner.
In spline approximation, the parameters for an overdetermined configuration are determined by a least squares adjustment. In contrast to spline interpolation, a solution can be found by minimizing the sum of the weighted squared residuals without imposing any extra conditions on the first and last point of the spline. Only the already mentioned conditions for continuity, smoothness and continuous curvature have to be introduced at the “inner” knots as additional constraints. In the following section, the resulting adjustment problem is defined.

4. Definition of the Problem

Let:
  • $(x_1^j, y_1^j), (x_2^j, y_2^j), \ldots, (x_i^j, y_i^j), \ldots, (x_m^j, y_m^j)$ be pairs of observed values within an interval $j$, where the values on the $x$-axis are ordered, $x_1^j < x_2^j < \cdots < x_i^j < \cdots < x_m^j$,
  • $\begin{bmatrix} e_x \\ e_y \end{bmatrix} \sim \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\; \sigma_0^2 Q_{LL} = \sigma_0^2 \begin{bmatrix} Q_{xx} & Q_{xy} \\ Q_{yx} & Q_{yy} \end{bmatrix} \right)$ be the stochastic model and $P = Q_{LL}^{-1}$ the corresponding weight matrix,
  • $x_{k_1} < x_{k_2} < \cdots < x_{k_j} < \cdots < x_{k_{n-1}}$ be an ordered sequence of user-defined values for the knots on the $x$-axis, which are regarded as error-free. As mentioned before, the first knot $x_{k_0}$ and the last knot $x_{k_n}$, see Figure 1, have no particular influence on the adjustment; they are just $x_{k_0} = x_{\min}$ and $x_{k_n} = x_{\max}$. Therefore, when the positioning of the knots is discussed, we refer only to the knots $x_{k_1}, \ldots, x_{k_j}, \ldots, x_{k_{n-1}}$, see Figure 2, on which some constraints are imposed. The positioning of the knots plays a key role in spline approximation problems. This problem is further discussed in Section 5.4.
To determine:
  • $(a_0^1, a_1^1, a_2^1, a_3^1), \ldots, (a_0^j, a_1^j, a_2^j, a_3^j), \ldots, (a_0^n, a_1^n, a_2^n, a_3^n)$, which represent the sets of parameters of the cubic polynomials $S_j(x)$ that have to be estimated for all intervals $[x_{k_{j-1}}, x_{k_j})$.
According to (16) the functional model for the cubic polynomials reads
$$ y_i^j = S_j(x_i^j) = a_0^j + a_1^j x_i^j + a_2^j (x_i^j)^2 + a_3^j (x_i^j)^3 \qquad (20) $$
with $j = 1, 2, \ldots, n$ as index for an interval $[x_{k_{j-1}}, x_{k_j})$. Furthermore, $i = 1, 2, \ldots, m_j$ is an index for the pairs of observed values within the interval $j$.
Since an overdetermined system of equations is considered, a spline approximation is performed without putting any extra conditions on the first and the last knots of the spline. The dataset is subdivided into intervals w.r.t. the $x$-axis by predefined knots. For each interval $[x_{k_{j-1}}, x_{k_j})$ there is a set of at least two pairs of values $(x_i^j, y_i^j)$ inside of it. According to (17)–(19) the conditions
$$ S_j(x_{k_j}) = S_{j+1}(x_{k_j}), \quad j = 1, \ldots, n-1, \qquad (21) $$
$$ S_j'(x_{k_j}) = S_{j+1}'(x_{k_j}), \quad j = 1, \ldots, n-1, \qquad (22) $$
$$ S_j''(x_{k_j}) = S_{j+1}''(x_{k_j}), \quad j = 1, \ldots, n-1 \qquad (23) $$
are imposed on the knots for constructing a spline. These knots $x_{k_j}$ are also coordinates on the $x$-axis, but their number and their values are freely specified, or can be determined by some knot placement strategy, see Section 5.4.
After having defined all these quantities, a spline approximation can be performed from an overdetermined configuration using the Gauss-Helmert model with additional constraints for the unknowns.

5. TLS Solution Generated by Least Squares Adjustment within the Gauss-Helmert Model

In this section, the solution for the constrained weighted total least squares (CWTLS) problem (13) is based on “classical” procedures, specifically on an iterative evaluation of the nonlinear normal equations derived for nonlinear condition and constraint equations by least squares adjustment. A common approach replaces the original nonlinear condition and constraint equations by a sequence of linearized equations, which obviously requires a correct linearization of the condition and constraint equations. This means that the linearization has to be done both at the approximate values $X^0$ for the unknowns and at approximate values $e^0$ for the random errors. Such an iterative solution is regarded as a “rigorous” evaluation of the iteratively linearized Gauss-Helmert model.
An extensive presentation of an evaluation of this kind was already given e.g., by Pope [24]. Lenzmann and Lenzmann [25] pick up this problem once more and present another rigorous solution for the iteratively linearized GH model. In doing so, they show very clearly which terms when neglected will produce merely approximate formulas. Unfortunately, these approximate formulas, which can yield an unusable solution, can be found in many popular textbooks on adjustment calculus. For more details please refer to Neitzel [11]. The rigorous solution as presented in the following is based on Lenzmann and Lenzmann [25].

5.1. Formulation of the Adjustment Problem

Based on the functional model (20) with observations $y_i$ and $x_i$, the condition equations
$$ \psi_i(e, \hat{X}) = y_i - e_{y_i} - \hat{a}_0^j - \hat{a}_1^j (x_i - e_{x_i}) - \hat{a}_2^j (x_i - e_{x_i})^2 - \hat{a}_3^j (x_i - e_{x_i})^3 = 0 \qquad (24) $$
can be formulated. Obviously, (24) is nonlinear. However, the application of least squares adjustment usually “lives” in the linear world. Since the Gauss-Newton method is used here for solving least squares problems, the condition equations have to be linearized. With appropriate approximate values $e^0$ and $X^0$, the linearized condition equations can be written as
$$ f(e, \hat{X}) = B(e - e^0) + A(\hat{X} - X^0) + \psi(e^0, X^0) = c_1 \qquad (25) $$
with $c_1 = 0$ and the matrices of partial derivatives
$$ A(e, \hat{X}) = \left.\frac{\partial \psi(e, \hat{X})}{\partial \hat{X}}\right|_{e^0, X^0}, \qquad (26) $$
$$ B(e, \hat{X}) = \left.\frac{\partial \psi(e, \hat{X})}{\partial e}\right|_{e^0, X^0}. \qquad (27) $$
These derivatives have to be evaluated at the approximate values $e^0$ and $X^0$. Taking the partial derivatives of the function that defines the condition Equation (24) with respect to the unknowns
$$ \left.\frac{\partial \psi_i(e, \hat{X})}{\partial \hat{a}_0^j}\right|_{e^0, X^0} = -1, \qquad (28) $$
$$ \left.\frac{\partial \psi_i(e, \hat{X})}{\partial \hat{a}_1^j}\right|_{e^0, X^0} = -(x_i - e_{x_i}^0), \qquad (29) $$
$$ \left.\frac{\partial \psi_i(e, \hat{X})}{\partial \hat{a}_2^j}\right|_{e^0, X^0} = -(x_i - e_{x_i}^0)^2, \qquad (30) $$
$$ \left.\frac{\partial \psi_i(e, \hat{X})}{\partial \hat{a}_3^j}\right|_{e^0, X^0} = -(x_i - e_{x_i}^0)^3, \qquad (31) $$
we can set up the matrices $A_j$ for each polynomial piece of the spline. The functional matrix for the first interval reads
$$ A_1 = -\begin{bmatrix} 1 & (x_1^1 - e_{x_1^1}^0) & (x_1^1 - e_{x_1^1}^0)^2 & (x_1^1 - e_{x_1^1}^0)^3 \\ 1 & (x_2^1 - e_{x_2^1}^0) & (x_2^1 - e_{x_2^1}^0)^2 & (x_2^1 - e_{x_2^1}^0)^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & (x_m^1 - e_{x_m^1}^0) & (x_m^1 - e_{x_m^1}^0)^2 & (x_m^1 - e_{x_m^1}^0)^3 \end{bmatrix}, \qquad (32) $$
for the second interval, we obtain
$$ A_2 = -\begin{bmatrix} 1 & (x_1^2 - e_{x_1^2}^0) & (x_1^2 - e_{x_1^2}^0)^2 & (x_1^2 - e_{x_1^2}^0)^3 \\ 1 & (x_2^2 - e_{x_2^2}^0) & (x_2^2 - e_{x_2^2}^0)^2 & (x_2^2 - e_{x_2^2}^0)^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & (x_m^2 - e_{x_m^2}^0) & (x_m^2 - e_{x_m^2}^0)^2 & (x_m^2 - e_{x_m^2}^0)^3 \end{bmatrix}. \qquad (33) $$
This construction of the $A_j$ matrices continues until the last interval. Finally, the entire functional matrix has the block-diagonal form
$$ A = \operatorname{diag}\begin{bmatrix} A_1 & \cdots & A_n \end{bmatrix}. \qquad (34) $$
From (32) and (33), we can see the typical characteristic of an EIV model (9), where some (or sometimes all) columns of the functional matrix contain observations with their corresponding errors.
Taking the partial derivatives of the function $\psi$ from the condition Equations (24) with respect to the errors
$$ \left.\frac{\partial \psi_i(e, \hat{X})}{\partial e_{x_i}}\right|_{e^0, X^0} = a_1^{j\,0} + 2 a_2^{j\,0} (x_i - e_{x_i}^0) + 3 a_3^{j\,0} (x_i - e_{x_i}^0)^2, \qquad (35) $$
$$ \left.\frac{\partial \psi_i(e, \hat{X})}{\partial e_{y_i}}\right|_{e^0, X^0} = -1, \qquad (36) $$
we can define the matrices $B_1$ and $B_2$
$$ B_1 = \operatorname{diag}\left[\, a_1^{j\,0} + 2 a_2^{j\,0} (x_i - e_{x_i}^0) + 3 a_3^{j\,0} (x_i - e_{x_i}^0)^2 \,\right], \qquad (37) $$
$$ B_2 = -I. \qquad (38) $$
Finally, the matrix $B$ takes the form
$$ B = \begin{bmatrix} B_1 & B_2 \end{bmatrix}. \qquad (39) $$
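As an illustration of (32)–(39), the following sketch (Python with numpy/scipy; function names, the argument layout and the sign convention reconstructed above are our assumptions) assembles $A$ and $B$ from the current approximate values:

```python
import numpy as np
from scipy.linalg import block_diag

def build_A_B(x, e_x0, a0, seg):
    """Assemble A per (32)-(34) and B per (37)-(39); a sketch.

    x    : all observed x_i stacked over the intervals
    e_x0 : approximate errors e_x^0 (same length as x)
    a0   : (n, 4) approximate coefficients (a_0^j, ..., a_3^j) per interval
    seg  : interval index j-1 of every point
    """
    xe = x - e_x0                                    # x_i - e_{x_i}^0
    blocks = [-np.column_stack([np.ones(np.sum(seg == j)),
                                xe[seg == j], xe[seg == j]**2, xe[seg == j]**3])
              for j in range(a0.shape[0])]
    A = block_diag(*blocks)                          # (34)
    a = a0[seg]                                      # per-point coefficients
    B1 = np.diag(a[:, 1] + 2*a[:, 2]*xe + 3*a[:, 3]*xe**2)   # (37)
    B = np.hstack([B1, -np.eye(len(x))])             # (39) with B2 = -I
    return A, B
```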
Now, we also have to take into account the constraints (21)–(23). Since the knots $x_k$ are error-free values, we obtain the linear constraint equations
$$ \begin{aligned} \hat{a}_0^1 + \hat{a}_1^1 x_{k_1} + \hat{a}_2^1 x_{k_1}^2 + \hat{a}_3^1 x_{k_1}^3 &= \hat{a}_0^2 + \hat{a}_1^2 x_{k_1} + \hat{a}_2^2 x_{k_1}^2 + \hat{a}_3^2 x_{k_1}^3, \\ \hat{a}_0^2 + \hat{a}_1^2 x_{k_2} + \hat{a}_2^2 x_{k_2}^2 + \hat{a}_3^2 x_{k_2}^3 &= \hat{a}_0^3 + \hat{a}_1^3 x_{k_2} + \hat{a}_2^3 x_{k_2}^2 + \hat{a}_3^3 x_{k_2}^3, \\ &\;\;\vdots \\ \hat{a}_0^{n-1} + \hat{a}_1^{n-1} x_{k_{n-1}} + \hat{a}_2^{n-1} x_{k_{n-1}}^2 + \hat{a}_3^{n-1} x_{k_{n-1}}^3 &= \hat{a}_0^n + \hat{a}_1^n x_{k_{n-1}} + \hat{a}_2^n x_{k_{n-1}}^2 + \hat{a}_3^n x_{k_{n-1}}^3 \end{aligned} \qquad (40) $$
which enforce the spline to be continuous at the knots. The next set of constraint equations
$$ \begin{aligned} \hat{a}_1^1 + 2\hat{a}_2^1 x_{k_1} + 3\hat{a}_3^1 x_{k_1}^2 &= \hat{a}_1^2 + 2\hat{a}_2^2 x_{k_1} + 3\hat{a}_3^2 x_{k_1}^2, \\ \hat{a}_1^2 + 2\hat{a}_2^2 x_{k_2} + 3\hat{a}_3^2 x_{k_2}^2 &= \hat{a}_1^3 + 2\hat{a}_2^3 x_{k_2} + 3\hat{a}_3^3 x_{k_2}^2, \\ &\;\;\vdots \\ \hat{a}_1^{n-1} + 2\hat{a}_2^{n-1} x_{k_{n-1}} + 3\hat{a}_3^{n-1} x_{k_{n-1}}^2 &= \hat{a}_1^n + 2\hat{a}_2^n x_{k_{n-1}} + 3\hat{a}_3^n x_{k_{n-1}}^2 \end{aligned} \qquad (41) $$
enforces the spline to have a continuous first derivative at the knots. This implies that the spline will have a smooth transition at the knots. The last set of constraint equations
$$ \begin{aligned} 2\hat{a}_2^1 + 6\hat{a}_3^1 x_{k_1} &= 2\hat{a}_2^2 + 6\hat{a}_3^2 x_{k_1}, \\ 2\hat{a}_2^2 + 6\hat{a}_3^2 x_{k_2} &= 2\hat{a}_2^3 + 6\hat{a}_3^3 x_{k_2}, \\ &\;\;\vdots \\ 2\hat{a}_2^{n-1} + 6\hat{a}_3^{n-1} x_{k_{n-1}} &= 2\hat{a}_2^n + 6\hat{a}_3^n x_{k_{n-1}} \end{aligned} \qquad (42) $$
enforces the spline to have a continuous second derivative at the knots, which implies that the curvature at the knots will be continuous. Rearranging Equations (40)–(42) yields for continuity the constraint equations
$$ \gamma_1(\hat{X}) = \hat{a}_0^j + \hat{a}_1^j x_{k_j} + \hat{a}_2^j x_{k_j}^2 + \hat{a}_3^j x_{k_j}^3 - \hat{a}_0^{j+1} - \hat{a}_1^{j+1} x_{k_j} - \hat{a}_2^{j+1} x_{k_j}^2 - \hat{a}_3^{j+1} x_{k_j}^3 = 0, \qquad (43) $$
for smoothness it is
$$ \gamma_2(\hat{X}) = \hat{a}_1^j + 2\hat{a}_2^j x_{k_j} + 3\hat{a}_3^j x_{k_j}^2 - \hat{a}_1^{j+1} - 2\hat{a}_2^{j+1} x_{k_j} - 3\hat{a}_3^{j+1} x_{k_j}^2 = 0, \qquad (44) $$
and for the curvature we obtain
$$ \gamma_3(\hat{X}) = 2\hat{a}_2^j + 6\hat{a}_3^j x_{k_j} - 2\hat{a}_2^{j+1} - 6\hat{a}_3^{j+1} x_{k_j} = 0. \qquad (45) $$
Since the nonlinear condition equations (24) are solved iteratively for the reduced unknowns $\hat{X} - X^0$, the (already linear) constraint equations have to be considered in a “linearized” form as well,
$$ g_q(\hat{X}) = C_q(\hat{X} - X^0) + \gamma_q(X^0) = c_2, \qquad (46) $$
with $c_2 = 0$ and $q = 1, 2, 3$, where the matrices $C_q$ contain the coefficients of the unknowns. Every row of these matrices refers only to two consecutive intervals, i.e.,:
  • Unknowns from the first row refer to the first two intervals, $[x_{k_0}, x_{k_1})$ and $[x_{k_1}, x_{k_2})$.
  • Unknowns from the second row refer to the second and the third interval, $[x_{k_1}, x_{k_2})$ and $[x_{k_2}, x_{k_3})$.
  • This continues until the last row, where the unknowns refer to the last interval and its predecessor, $[x_{k_{n-2}}, x_{k_{n-1}})$ and $[x_{k_{n-1}}, x_{k_n})$.
Taking into account the sequence of the unknowns, the matrix $C_1$ that imposes continuity at the knots is
$$ C_1 = \begin{bmatrix} 1 & x_{k_1} & x_{k_1}^2 & x_{k_1}^3 & -1 & -x_{k_1} & -x_{k_1}^2 & -x_{k_1}^3 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 1 & x_{k_2} & x_{k_2}^2 & x_{k_2}^3 & -1 & \cdots & 0 \\ \vdots & & & & & & \ddots & & & & \vdots \\ 0 & \cdots & 0 & 1 & x_{k_{n-1}} & x_{k_{n-1}}^2 & x_{k_{n-1}}^3 & -1 & -x_{k_{n-1}} & -x_{k_{n-1}}^2 & -x_{k_{n-1}}^3 \end{bmatrix}. \qquad (47) $$
Matrix $C_2$ contains the coefficients from the constraint equations that preserve the smoothness of the spline at the knots and is expressed as
$$ C_2 = \begin{bmatrix} 0 & 1 & 2x_{k_1} & 3x_{k_1}^2 & 0 & -1 & -2x_{k_1} & -3x_{k_1}^2 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 2x_{k_2} & 3x_{k_2}^2 & 0 & \cdots & 0 \\ \vdots & & & & & & \ddots & & & & \vdots \\ 0 & \cdots & 0 & 1 & 2x_{k_{n-1}} & 3x_{k_{n-1}}^2 & 0 & -1 & -2x_{k_{n-1}} & -3x_{k_{n-1}}^2 \end{bmatrix}. \qquad (48) $$
Matrix $C_3$ contains the coefficients from the constraint equations which enforce continuous curvature at the knots, written as
$$ C_3 = \begin{bmatrix} 0 & 0 & 2 & 6x_{k_1} & 0 & 0 & -2 & -6x_{k_1} & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & 6x_{k_2} & 0 & \cdots & 0 \\ \vdots & & & & & & \ddots & & & & \vdots \\ 0 & \cdots & 0 & 0 & 2 & 6x_{k_{n-1}} & 0 & 0 & -2 & -6x_{k_{n-1}} \end{bmatrix}. \qquad (49) $$
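A compact sketch (Python/numpy; the helper name and the segment-wise stacking of the unknowns as $(a_0^j, a_1^j, a_2^j, a_3^j)$ are our assumptions) that assembles $C_1$, $C_2$ and $C_3$ for $n$ segments:

```python
import numpy as np

def build_constraints(knots_inner, n):
    """Assemble C1 (47), C2 (48), C3 (49) for n cubic segments; a sketch.

    knots_inner : the n-1 inner knots x_k1, ..., x_k(n-1)
    Returns three (n-1) x 4n matrices, segment j positive, segment j+1 negative.
    """
    C1 = np.zeros((n - 1, 4 * n)); C2 = np.zeros_like(C1); C3 = np.zeros_like(C1)
    for r, xk in enumerate(knots_inner):
        j = 4 * r                                    # first column of segment j
        C1[r, j:j+4]   = [1, xk, xk**2, xk**3]       # continuity, segment j
        C1[r, j+4:j+8] = [-1, -xk, -xk**2, -xk**3]   # continuity, segment j+1
        C2[r, j:j+4]   = [0, 1, 2*xk, 3*xk**2]       # first derivative
        C2[r, j+4:j+8] = [0, -1, -2*xk, -3*xk**2]
        C3[r, j:j+4]   = [0, 0, 2, 6*xk]             # second derivative
        C3[r, j+4:j+8] = [0, 0, -2, -6*xk]
    return C1, C2, C3
```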
Now, we expand (25) and introduce
$$ w_0 = -B e^0 + \psi(e^0, X^0) - c_1, \qquad (50) $$
and for the constraint Equations (46) we introduce
$$ w_1 = \gamma_1(X^0) - c_2, \quad w_2 = \gamma_2(X^0) - c_2, \quad w_3 = \gamma_3(X^0) - c_2 \qquad (51) $$
as vectors of misclosures. Finally, we can write the linearized condition equations and the constraint equations in the form
$$ B e + A(\hat{X} - X^0) + w_0 = 0, \qquad (52) $$
$$ C_1(\hat{X} - X^0) + w_1 = 0, \quad C_2(\hat{X} - X^0) + w_2 = 0, \quad C_3(\hat{X} - X^0) + w_3 = 0. \qquad (53) $$

5.2. Least Squares Adjustment

Since four separate sets of equations are obtained from (52) and (53), four vectors of Lagrange multipliers $\lambda_0$, $\lambda_1$, $\lambda_2$ and $\lambda_3$ have to be introduced to apply the constrained minima technique. The quadratic form to be minimized is
$$ \Phi(e, \hat{X}, \lambda_0, \lambda_1, \lambda_2, \lambda_3) = e^T P e - 2\lambda_0^T \left( B e + A(\hat{X} - X^0) + w_0 \right) - 2\lambda_1^T \left( C_1(\hat{X} - X^0) + w_1 \right) - 2\lambda_2^T \left( C_2(\hat{X} - X^0) + w_2 \right) - 2\lambda_3^T \left( C_3(\hat{X} - X^0) + w_3 \right). \qquad (54) $$
To minimize the objective function, we take the partial derivatives of (54) w.r.t. the errors $e$, the reduced unknowns $(\hat{X} - X^0)$ and the Lagrange multipliers $\lambda_0, \lambda_1, \lambda_2, \lambda_3$. Afterwards, every derivative is set equal to zero and we obtain
$$ P e - B^T \lambda_0 = 0, \qquad (55) $$
$$ A^T \lambda_0 + C_1^T \lambda_1 + C_2^T \lambda_2 + C_3^T \lambda_3 = 0, \qquad (56) $$
$$ B e + A(\hat{X} - X^0) + w_0 = 0, \qquad (57) $$
$$ C_1(\hat{X} - X^0) + w_1 = 0, \qquad (58) $$
$$ C_2(\hat{X} - X^0) + w_2 = 0, \qquad (59) $$
$$ C_3(\hat{X} - X^0) + w_3 = 0. \qquad (60) $$
Solving (55) for $e$ yields
$$ e = Q_{LL} B^T \lambda_0, \qquad (61) $$
with the dispersion matrix of the observations $Q_{LL} = P^{-1}$. Since in almost all cases the dispersion matrix $Q_{LL}$ is given directly, this inversion is usually not necessary. Inserting (61) into (57) yields
$$ B Q_{LL} B^T \lambda_0 + A(\hat{X} - X^0) + w_0 = 0. \qquad (62) $$
Finally, we can set up the system of matrix equations
$$ \begin{aligned} B Q_{LL} B^T \lambda_0 + A(\hat{X} - X^0) &= -w_0, \\ A^T \lambda_0 + C_1^T \lambda_1 + C_2^T \lambda_2 + C_3^T \lambda_3 &= 0, \\ C_1(\hat{X} - X^0) &= -w_1, \\ C_2(\hat{X} - X^0) &= -w_2, \\ C_3(\hat{X} - X^0) &= -w_3. \end{aligned} \qquad (63) $$
This system of matrix equations can be written in matrix form as
$$ \begin{bmatrix} B Q_{LL} B^T & A & 0 & 0 & 0 \\ A^T & 0 & C_1^T & C_2^T & C_3^T \\ 0 & C_1 & 0 & 0 & 0 \\ 0 & C_2 & 0 & 0 & 0 \\ 0 & C_3 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \lambda_0 \\ \hat{X} - X^0 \\ \lambda_1 \\ \lambda_2 \\ \lambda_3 \end{bmatrix} = \begin{bmatrix} -w_0 \\ 0 \\ -w_1 \\ -w_2 \\ -w_3 \end{bmatrix}. \qquad (64) $$
Obviously, it is possible to further reduce this system by eliminating $\lambda_0$, but in the form (64) it offers the great advantage that a singular dispersion matrix $Q_{LL}$ for the observations can also be taken into account. The structure of the coefficient matrix in (64) coincides with the one that Jazaeri et al. [9] use for their weighted TLS solution with constraints when $Q_{LL}$ is singular. The rank condition that has to be fulfilled to obtain a unique solution was explained by Neitzel and Schaffrin [16].
The solution $\hat{X} - X^0$ and the Lagrange multipliers can be derived from (64) using matrix inversion. However, in most applications other methods such as the Cholesky decomposition are used, because the inversion of a matrix is highly prone to rounding errors and computationally inefficient. The error vector $e$ can be obtained from (61). The solution $e$, $\hat{X}$, after stripping it of its random character, is substituted as a new approximation as long as necessary until a break-off condition is met. For the choice of break-off conditions, we refer, e.g., to Lenzmann and Lenzmann [25]. Note that the update of the misclosure vector in (50) is often computed erroneously, in which case convergence may occur, but not to the correct nonlinear least squares solution.
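Putting the pieces together, one pass of the iteratively linearized GH model could look as follows (a sketch, assuming numpy and the hypothetical helpers build_A_B and build_constraints from above; the three constraint sets are stacked into one block, which is equivalent to (64)):

```python
import numpy as np

def gh_iteration(x, y, Q_LL, knots_inner, a0, e0, seg):
    """One iteration of the linearized GH model (50)-(64); a sketch.

    e0 is stacked as [e_x^0; e_y^0]; a0 is the (n, 4) matrix of approximate
    coefficients; seg holds the interval index j-1 of every point.
    """
    n, m = a0.shape[0], len(x)
    e_x0, e_y0 = e0[:m], e0[m:]
    A, B = build_A_B(x, e_x0, a0, seg)
    C = np.vstack(build_constraints(knots_inner, n))
    # misclosures (50)-(51), with psi evaluated at (e0, X0) and c1 = c2 = 0
    xe = x - e_x0
    a = a0[seg]
    psi0 = (y - e_y0) - (a[:, 0] + a[:, 1]*xe + a[:, 2]*xe**2 + a[:, 3]*xe**3)
    w0 = -B @ e0 + psi0
    w = C @ a0.reshape(-1)        # gamma_q(X0): C acts linearly on X0
    # bordered system (64)
    u, k = A.shape[1], C.shape[0]
    K = np.block([
        [B @ Q_LL @ B.T,        A,                np.zeros((m, k))],
        [A.T,                   np.zeros((u, u)), C.T],
        [np.zeros((k, m)),      C,                np.zeros((k, k))],
    ])
    sol = np.linalg.solve(K, np.concatenate([-w0, np.zeros(u), -w]))
    lam0, dX = sol[:m], sol[m:m+u]
    e = Q_LL @ B.T @ lam0         # (61)
    return a0.reshape(-1) + dX, e
```

In a complete implementation this step is repeated, feeding $\hat{X}$ and $e$ back in as new approximate values $X^0$ and $e^0$, until the break-off condition is met.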
After the solution is obtained, we can compute the total sum of squared residuals (TSSR) by
$$ \Omega = e^T P e. \qquad (65) $$
In the case of a singular dispersion matrix, the weight matrix $P = Q_{LL}^{-1}$ does not exist, and we introduce (61) for the residuals to obtain the TSSR from
$$ \Omega = \lambda_0^T B Q_{LL} B^T \lambda_0. \qquad (66) $$
Finally, we can compute the estimated reference variance factor
$$ \hat{\sigma}_0^2 = \Omega / r, \qquad (67) $$
where $r$ is the redundancy of the adjustment problem. Based on the value $\hat{\sigma}_0^2$, we can get an idea about the quality of the adjustment by applying a statistical test.
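In code, (65)–(67) reduce to a few lines; the sketch below (names are our assumptions) uses the $\lambda_0$-based form (66) so that it also covers a singular $Q_{LL}$:

```python
import numpy as np

def tssr_and_variance(lam0, B, Q_LL, r):
    """TSSR via (66) -- valid even for singular Q_LL -- and sigma_0^2 hat via (67)."""
    omega = lam0 @ (B @ Q_LL @ B.T) @ lam0
    return omega, omega / r
```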

5.3. Initial Values for the Unknowns

In the case under consideration, appropriate starting values for the unknowns can be obtained from the solution of the linear adjustment problem with $y_i$ as observations and $x_i$ as fixed values, presented by Ezhov et al. [2], and as initial values of the errors we can choose
$$ e_{x_i}^0 = 0, \quad e_{y_i}^0 = 0. \qquad (68) $$
Experience with several numerical examples has shown that it is also possible to introduce
$$ a_0^{j\,0} = 1, \quad a_1^{j\,0} = 1, \quad a_2^{j\,0} = 1, \quad a_3^{j\,0} = 1 \qquad (69) $$
as initial values for the unknowns.

5.4. Knot Placement Strategies

As described by Ezhov et al. [2], the basic problem of knot placement in spline approximation comprises two fundamental challenges:
  • Systematic or heuristic choice of the knots.
  • Selection of suitable quality criteria according to which the resulting spline approximation can be assessed.
According to these specifications, the knots are rearranged in the course of an iterative process until the quality criteria are met. Since the distribution of the knots plays a crucial role in spline approximation, various knot placement strategies have been proposed in the literature; see, e.g., the approaches developed by Cox et al. [26], Schwetlick and Schütze [27] and Park [28]. An overview of knot placement strategies is offered by Bureick et al. [1,29]. Current research focuses on choosing the optimal number of B-spline control points using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC); see the contribution by Harmening and Neuner [30].
In this article, the main focus is on generating the TLS solution for spline approximation. Hence, for simplicity, a uniformly distributed knot sequence is chosen, as proposed by De Boor [31], also called “equidistant arrangement”; see the contribution by Yanagihara and Ohtaki [32]. The term uniformly distributed knot sequence means that all knots have equal spacing within the dataset $[x_{\min}, x_{\max}]$ in which the spline is fitted.
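A minimal sketch of the equidistant arrangement (Python/numpy; the example values are an excerpt from Table 1 and the variable names are ours):

```python
import numpy as np

x = np.array([1.128, 2.255, 3.624, 38.901])   # observed x-values (excerpt)
n = 3                                         # chosen number of spline segments
knots = np.linspace(x.min(), x.max(), n + 1)  # equal spacing over [x_min, x_max]
inner_knots = knots[1:-1]                     # x_k1, ..., x_k(n-1) carry constraints
```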

6. Numerical Example

In the following numerical example, we consider the spline approximation for the case that both the values $x_i$ and $y_i$ are observations subject to random errors. Two inherently different approaches for this case are presented:
  • Based on spline functions of the form $y_i = f(x_i)$, see (20), a TLS solution will be computed from the iteratively linearized GH model introduced in the previous sections.
  • Based on spline functions in parametric form $x_i = f_1(t_i)$ and $y_i = f_2(t_i)$, i.e.,
$$ x_i^j = a_0^j + a_1^j t_i^j + a_2^j (t_i^j)^2 + a_3^j (t_i^j)^3, \qquad (70) $$
$$ y_i^j = b_0^j + b_1^j t_i^j + b_2^j (t_i^j)^2 + b_3^j (t_i^j)^3, \qquad (71) $$
    it is also possible to introduce $x_i$ and $y_i$ as observations. In contrast to the first approach, $x_i$ and $y_i$ are now considered independently of each other, which enables the representation of arbitrary curves that depend on a location parameter $t_i$. The values of the location parameter in (70) and (71) form an ordered sequence of error-free values $t_1^j < t_2^j < \cdots < t_i^j < \cdots < t_m^j$ referring to the observations within an interval $j$. The parameters $a_0^j, a_1^j, a_2^j, a_3^j$ and $b_0^j, b_1^j, b_2^j, b_3^j$ are the unknowns to be estimated for all intervals. The choice of the numerical values for $t_i^j$ plays a crucial role, since it determines the position on the curve to which the values $x_i^j, y_i^j$ refer. In this example we apply two methods for the choice of numerical values for $t_i^j$ that are suggested by Piegl and Tiller [33] (p. 364 ff.), namely “equally spaced” and “chord length”; a small sketch of both choices follows this list. Introducing these values $t_i^j$ as fixed parameters, the resulting linear least squares (LS) problem can be solved within the Gauss-Markov (GM) model with constraints, as described in detail by Ezhov et al. [2].
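The following sketch (Python/numpy; the helper name is our assumption) computes the two parameterizations, normalized to $[0, 1]$:

```python
import numpy as np

def location_parameters(x, y, method="chord"):
    """Location parameters t_i: 'equal' spacing or 'chord' length,
    following the two options in Piegl and Tiller; a sketch."""
    m = len(x)
    if method == "equal":
        return np.linspace(0.0, 1.0, m)
    d = np.hypot(np.diff(x), np.diff(y))   # chord lengths between neighbors
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]                       # normalize to [0, 1]
```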
Using the synthetic data set in Table 1, which can be regarded as a cutout from a profile laser scan, we want to investigate the error vectors that are obtained from the TLS and the LS approach.
Naturally, both approaches will yield different results, because they consider two different least squares adjustment problems. Therefore, it is not a question of which approach yields “better” results than the other. The aim is simply to show which approach might be more suitable for applications in engineering geodesy. In all computations, the observations $x_i$ and $y_i$ are considered equally weighted and uncorrelated (although the methodology proposed in this article can handle the most general correlated case as well), and the knots are uniformly distributed, as described in Section 5.4.
However, for the LS approach based on spline functions in parametric form, there is a slight difference. The knots are chosen from the span of the vector of parameters $t_i$ instead of the span of the vector of observations $x_i$ as for the TLS approach. Since in the LS case two separate functional models $x_i = f_1(t_i)$ and $y_i = f_2(t_i)$ are considered, two separate least squares adjustments are performed. The chosen knots $t_k$ are identical for both of them.
Computations were performed for three different cases:
  • Case 1: Total least squares (TLS) adjustment of a spline function,
  • Case 2.a: Least squares (LS) adjustment of a spline curve with equally spaced location parameters $t_i^j$,
  • Case 2.b: Least squares (LS) adjustment of a spline curve with location parameters $t_i^j$ proportional to the chord length.
The results for Case 1 and Case 2.a are depicted in Figure 3; a magnification of the area marked by a rectangle is shown in Figure 4. The numerical values for the observation errors are listed in Table 2. With $x_i$, $y_i$ introduced as equally weighted and uncorrelated observations, the TLS solution yields error vectors that are orthogonal to the graph of the adjusted spline. In contrast, it is clearly visible in Figure 4 that the LS solution yields error vectors with “irregular” behavior, in the sense that they are far away from being orthogonal to the curve. Since the LS and the TLS solution refer to two different adjustment problems, the resulting adjusted graphs of the spline differ from each other. In this numerical example, the difference between the two adjusted graphs is rather small. The total sums of squared residuals (TSSR) for the TLS and for the LS solution are listed in Table 3.
Figure 5 compares the result for Case 1 with the result for Case 2.b. In the magnification of the area marked by a rectangle, shown in Figure 6, it is again clearly visible that the directions of the error vectors obtained from the LS solution are “irregular” and not orthogonal to the graph of the spline curve. Furthermore, if the length and direction of the error vectors for Case 2.a and Case 2.b are compared, a considerable influence of the chosen numerical values for the location parameter $t_i$ can be observed. In this numerical example the resulting adjusted graphs of the spline deviate considerably from each other. The corresponding TSSR values are listed in Table 3.
It can be concluded that both approaches have their advantages and disadvantages. Their application depends on the type of the phenomenon that is subject to the approximation. The parametric spline curve (LS solution) can be useful for geometrical approximation of arbitrary shapes where the interpretation of the errors is not the main focus, since they vary with the choice of the values of the location parameter $t_i$. The error vectors obtained from the TLS approach can be utilized for deformation monitoring purposes, since they are orthogonal to the graph of the spline function in the case of equally weighted and uncorrelated observations.
It should be noted that the coordinates $x_i$ and $y_i$ captured with, e.g., a profile laser scanner are not original observations. They are based on the same angle and distance measurements and are, therefore, differently weighted and correlated. Since the precision of the distance measurement is influenced by a variety of factors, simplified assumptions about the stochastic model have often been used in the past. The intensity-based approach developed by Wujanz et al. [34] now makes it possible to set up an appropriate stochastic model for 3D laser scanners. Based on this, Heinz et al. [35] have developed a stochastic model for a profile laser scanner. If a general stochastic model is used, the error vectors will no longer be strictly orthogonal to the adjusted spline, but it will be possible to apply statistical tests for outlier detection and deformation analysis.

7. Conclusions and Outlook

In this contribution, we have presented a total least squares spline approximation generated from a rigorous evaluation of an iteratively linearized Gauss-Helmert model under consideration of additional constraints for the unknowns. Similar to the approach for such problems proposed by Jazaeri et al. [9], the GH model allows introducing singular dispersion matrices for the observations.
Together with the basic methodology of spline approximation elaborated by Ezhov et al. [2], a clear and understandable formulation of spline approximation with both values $x_i$ and $y_i$ as observations is now available. This approach is of particular interest for engineering geodesy because the error vectors are orthogonal to the graph of the adjusted spline, which has been demonstrated using a numerical example with equally weighted and uncorrelated observations. Hence, this approach is very suitable for the analysis of deformations.

Author Contributions

Conceptualization, F.N. and S.P.; methodology, N.E.; writing—original draft preparation, F.N. and N.E.; writing—review and editing, F.N. and S.P.

Funding

This research received no external funding.

Acknowledgments

The authors acknowledge support by the German Research Foundation and the Open Access Publication Fund of TU Berlin.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bureick, J.; Neuner, H.; Harmening, C.; Neumann, I. Curve and Surface Approximation of 3D Point Clouds. Allg. Vermess.-Nachr. 2016, 123, 315–327.
  2. Ezhov, N.; Neitzel, F.; Petrovic, S. Spline Approximation—Part 1: Basic Methodology. J. Appl. Geod. 2018, 12, 139–155.
  3. Mikhail, E.M. Observations and Least Squares; Harper & Row: New York, NY, USA, 1976.
  4. Sourlier, D.M. Three Dimensional Feature Independent Bestfit in Coordinate Metrology. Ph.D. Thesis, Swiss Federal Institute of Technology, Zürich, Switzerland, 1995; Diss. ETH No. 11319.
  5. Turner, D.A. The Approximation of Cartesian Coordinate Data by Parametric Orthogonal Distance Regression. Ph.D. Thesis, University of Huddersfield, Huddersfield, UK, 1999.
  6. Ahn, S.J. Least Squares Orthogonal Distance Fitting of Curves and Surfaces in Space; Lecture Notes in Computer Science 3151; Springer: Berlin/Heidelberg, Germany, 2004.
  7. Crambes, C. Total least squares for functional data. In Proceedings of the 11th Symposium on ASMDA, Brest, France, 17–20 May 2005; pp. 619–626.
  8. Borges, C.F.; Pastva, T. Total least squares fitting of Bézier and B-spline curves to ordered data. Comput. Aided Geom. Des. 2002, 19, 275–289.
  9. Jazaeri, S.; Schaffrin, B.; Snow, K. On Weighted Total Least-Squares Adjustment with Multiple Constraints and Singular Dispersion Matrices. Z. Geod. Geoinf. Landmanag. 2014, 139, 229–240.
  10. Fang, X. Weighted total least-squares with constraints: A universal formula for geodetic symmetrical transformations. J. Geod. 2015, 89, 459–469.
  11. Neitzel, F. Generalization of total least-squares on example of unweighted and weighted 2D similarity transformation. J. Geod. 2010, 84, 751–762.
  12. Mahboub, V. On weighted total least-squares for geodetic transformations. J. Geod. 2012, 86, 359–367.
  13. Schaffrin, B.; Neitzel, F.; Uzun, S.; Mahboub, V. Modifying Cadzow’s Algorithm to Generate the Optimal TLS-Solution for the Structured EIV-Model of a Similarity Transformation. J. Geod. Sci. 2012, 2, 98–106.
  14. Han, J.; Zhang, S.; Li, Y.; Zhang, X. A general partial errors-in-variables model and a corresponding weighted total least-squares algorithm. Surv. Rev. 2018, 50.
  15. Schaffrin, B.; Snow, K.; Neitzel, F. On the Errors-In-Variables model with singular dispersion matrices. J. Geod. Sci. 2014, 4, 28–36.
  16. Neitzel, F.; Schaffrin, B. On the Gauss-Helmert model with a singular dispersion matrix where BQ is of smaller rank than B. J. Comput. Appl. Math. 2016, 291, 458–467.
  17. Helmert, F.R. Adjustment Computation with the Least Squares Method (in German), 3rd ed.; Teubner: Leipzig, Germany, 1924.
  18. Golub, G.H.; van Loan, C.F. An analysis of the total least squares problem. SIAM J. Numer. Anal. 1980, 17, 883–893.
  19. van Huffel, S.; Vandewalle, J. The Total Least Squares Problem: Computational Aspects and Analysis; SIAM: Philadelphia, PA, USA, 1991.
  20. Markovsky, I.; van Huffel, S. Overview of total least-squares methods. Signal Process. 2007, 87, 2283–2302.
  21. Malissiovas, G.; Neitzel, F.; Petrovic, S. Götterdämmerung over total least squares. J. Geod. Sci. 2016, 6, 43–60.
  22. Fuller, W.A. Errors in Variables with Emphasis on Theory. In Wiley StatsRef: Statistics Reference Online; John Wiley & Sons: Hoboken, NJ, USA, 2014.
  23. Schumaker, L. Spline Functions: Basic Theory; Cambridge University Press: Cambridge, UK, 2007.
  24. Pope, A.J. Some pitfalls to be avoided in the iterative adjustment of nonlinear problems. In Proceedings of the 38th Annual Meeting of the American Society of Photogrammetry, Washington, DC, USA, 12–17 March 1972; pp. 449–477.
  25. Lenzmann, L.; Lenzmann, E. Strenge Auswertung des nichtlinearen Gauß-Helmert-Modells [Rigorous evaluation of the nonlinear Gauss-Helmert model]. Allg. Vermess.-Nachr. 2004, 111, 68–73.
  26. Cox, M.; Harris, P.; Jones, H.M. Algorithms for Approximation II; Chapman and Hall: London, UK, 1989.
  27. Schwetlick, H.; Schütze, T. Least squares approximation by splines with free knots. BIT Numer. Math. 1995, 35, 361–384.
  28. Park, H. B-spline surface fitting based on adaptive knot placement using dominant columns. Comput. Aided Des. 2011, 43, 258–264.
  29. Bureick, J.; Alkhatib, H.; Neumann, I. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis. J. Appl. Geod. 2016, 10, 27–35.
  30. Harmening, C.; Neuner, H. Choosing the Optimal Number of B-spline Control Points (Part 1: Methodology and Approximation of Curves). J. Appl. Geod. 2016, 10, 139–157.
  31. De Boor, C. B(asic)-Spline Basics; Technical Report MRC-TSR-2952; Wisconsin Univ-Madison Mathematics Research Center: Madison, WI, USA, 1986.
  32. Yanagihara, H.; Ohtaki, M. Knot-placement to avoid over fitting in B-spline scedastic smoothing. Commun. Stat. Simul. Comput. 2003, 32, 771–785.
  33. Piegl, L.; Tiller, W. The NURBS Book, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1997.
  34. Wujanz, D.; Burger, M.; Mettenleiter, M.; Neitzel, F. An intensity-based stochastic model for terrestrial laser scanners. ISPRS J. Photogramm. Remote Sens. 2017, 125, 146–155.
  35. Heinz, E.; Mettenleiter, M.; Kuhlmann, H.; Holst, C. Strategy for Determining the Stochastic Distance Characteristics of the 2D Laser Scanner Z + F Profiler 9012A with Special Focus on the Close Range. Sensors 2018, 18, 2253.
Figure 1. Spline interpolation after Ezhov et al. [2].
Figure 2. Spline approximation after Ezhov et al. [2].
Figure 3. Total least squares (TLS) spline function and least squares (LS) spline curve with equally spaced location parameter.
Figure 4. Magnification of the area marked by a rectangle in Figure 3.
Figure 5. TLS spline function and LS spline curve with location parameter proportional to the chord length.
Figure 6. Magnification of the area marked by a rectangle in Figure 5.
Table 1. Numerical example.

Point | $x_i$ | $y_i$
1 | 1.128 | 9.730
2 | 2.255 | 11.195
3 | 3.624 | 12.054
4 | 4.972 | 12.688
5 | 6.248 | 12.844
6 | 7.530 | 12.920
7 | 8.840 | 12.766
8 | 10.286 | 12.196
9 | 11.556 | 11.637
10 | 12.787 | 11.451
11 | 14.171 | 11.174
12 | 15.350 | 11.571
13 | 16.644 | 12.201
14 | 18.013 | 13.044
15 | 19.334 | 14.038
16 | 20.669 | 14.823
17 | 21.958 | 15.857
18 | 23.408 | 16.523
19 | 24.609 | 17.025
20 | 25.846 | 16.663
21 | 27.061 | 15.977
22 | 28.528 | 14.687
23 | 29.897 | 12.979
24 | 31.235 | 11.426
25 | 32.397 | 9.715
26 | 33.779 | 8.335
27 | 35.072 | 6.852
28 | 36.337 | 6.198
29 | 37.737 | 5.480
30 | 38.901 | 4.902
Table 2. Error components from TLS and LS spline approximation.

Point | Case 1: $e_x^{TLS}$ | Case 1: $e_y^{TLS}$ | Case 2.a: $e_x^{LS}$ | Case 2.a: $e_y^{LS}$ | Case 2.b: $e_x^{LS}$ | Case 2.b: $e_y^{LS}$
1 | 0.0286 | −0.0217 | 0.0274 | −0.0058 | 0.0118 | 0.1954
2 | −0.0644 | 0.0684 | −0.0715 | 0.0404 | −0.0391 | −0.2529
3 | 0.0292 | −0.0488 | 0.0225 | −0.0598 | 0.0258 | −0.2995
4 | 0.0045 | −0.0155 | 0.0606 | 0.0118 | 0.0097 | −0.0128
5 | 0.0039 | −0.0721 | 0.0060 | −0.0604 | 0.0279 | 0.1192
6 | 0.0070 | 0.0553 | −0.0491 | 0.0588 | 0.0163 | 0.3405
7 | 0.0376 | 0.1490 | −0.0686 | 0.1566 | −0.0031 | 0.4266
8 | 0.0037 | 0.0118 | 0.0641 | −0.0283 | −0.0793 | 0.1472
9 | −0.0408 | −0.1417 | 0.0337 | −0.1719 | −0.1223 | −0.2025
10 | −0.0063 | −0.0324 | −0.0282 | −0.0188 | −0.0400 | −0.2721
11 | −0.0009 | −0.1615 | 0.0654 | −0.1398 | 0.0670 | −0.5328
12 | −0.0246 | 0.1071 | −0.0488 | 0.1233 | 0.1357 | −0.2524
13 | −0.1080 | 0.1914 | −0.0557 | 0.2260 | 0.1543 | 0.0541
14 | −0.0844 | 0.1068 | 0.0029 | 0.1640 | 0.0989 | 0.2499
15 | −0.0300 | 0.0344 | 0.0068 | 0.0463 | −0.0496 | 0.2926
16 | 0.1486 | −0.1779 | 0.0211 | −0.3057 | −0.0869 | 0.0775
17 | 0.1035 | −0.1568 | −0.0111 | −0.2528 | −0.2471 | 0.1072
18 | 0.0669 | −0.2246 | 0.1203 | −0.2305 | −0.1714 | 0.0386
19 | 0.0163 | 0.1570 | 0.0080 | 0.1364 | −0.0662 | 0.2322
20 | 0.0556 | 0.1266 | −0.0629 | 0.1692 | 0.1140 | −0.0808
21 | 0.0899 | 0.1262 | −0.1529 | 0.3136 | 0.2207 | −0.2843
22 | 0.0838 | 0.0864 | 0.0095 | 0.1924 | 0.2008 | −0.3201
23 | −0.0398 | −0.0349 | 0.0719 | −0.1057 | −0.0041 | −0.1345
24 | −0.0262 | −0.0216 | 0.0987 | −0.1050 | −0.0486 | 0.2171
25 | −0.1711 | −0.1417 | −0.0566 | −0.2168 | −0.1914 | 0.2269
26 | −0.0501 | −0.0446 | 0.0064 | −0.0592 | −0.0179 | 0.1687
27 | −0.1120 | −0.1188 | −0.0143 | −0.1769 | −0.0468 | −0.1908
28 | 0.0916 | 0.1241 | −0.0505 | 0.2511 | 0.1224 | −0.1052
29 | 0.0788 | 0.2075 | 0.0680 | 0.2211 | 0.1091 | −0.0263
30 | 0.0023 | −0.1372 | −0.0228 | −0.1737 | −0.1005 | 0.0733
Table 3. Total sum of squared residuals (TSSR).

 | Case 1 | Case 2.a | Case 2.b
TSSR | 0.578466 | 0.923518 | 1.968949
