1. Introduction
Large-size components with numerous supports are common in modern advanced manufacturing, especially in the aerospace industry. The machining quality of the key local features on the support mounting surfaces directly affects the quality of the assemblies between the large-scale component and external instruments. On-line machining is necessary because high-precision and high-efficiency machining cannot presently be achieved through off-line processing. Therefore, to ensure on-line machining accuracy, it is essential to measure the key local features on-line in the product design coordinate system. The key 3D and geometrical information derived from the measurement includes the component's position and orientation, its 3D shape, and the locations of the key local features on the supports. Furthermore, the measuring range (3 × 3 m to 5 × 6 m) is large, which makes it challenging to locate the supports accurately while simultaneously acquiring 3D information of the key local features (25 × 25 mm) with high accuracy (±0.035 mm). The requirement of measuring the key local features in large-scale on-site machining usually cannot be met by a single measuring device. Additionally, all information needs to be obtained at the same time and transformed into a unified coordinate system, which makes large-scale 3D measurement extremely challenging.
Various methods and systems have been proposed for large-scale 3D measurement [1,2]. Coordinate measuring machines (CMMs) have been extensively applied in 3D measurement owing to their high accuracy and excellent stability. With the development of noncontact optical measuring equipment and computer vision techniques, visual sensors have been integrated with traditional CMMs [3,4,5]. However, the limited measurement efficiency of CMMs allows only a small percentage of products to be sampled and inspected. Furthermore, owing to its structure, the CMM is rarely used for on-site and on-line measurement. When the 3D shape of large-sized components needs to be measured, measurement systems integrating photogrammetry and fringe projection [6] are widely used. However, reflective markers must be attached to the target before measurement, which interferes with the morphology of the measured part and reduces measurement efficiency. Recently, industrial robots have been extensively applied in manufacturing as economical and flexible orienting devices, and an increasing number of visual sensors are being integrated with robots. Laser scanning, a technology for large-scale 3D shape measurement [7,8,9,10], is more available and economical. However, laser scanning collects data only along limited lines in each measurement, so robot scanning results may contain ripples; consequently, achieving high accuracy in measuring the key local features remains difficult. To extend the measuring range of the laser scanner at designated measurement positions, movement mechanisms [11,12] such as linear tracks or rotary systems have been integrated into laser scanning systems. However, the movement mechanism inevitably introduces errors that reduce the measurement accuracy, so it must be calibrated. Compared with laser scanning, structured light profilometry [13,14,15,16,17,18,19,20] is well developed and widely used to scan an object surface rapidly and to acquire a high-density, high-quality point cloud of a region in each measurement. If the calibration process of the visual sensors is well designed and implemented, their measurement accuracy can be guaranteed [21,22]. Furthermore, compared with line scanning, structured light profilometry covers a much larger area per scan and is therefore more efficient. Nevertheless, owing to the large size of the component and the finite measuring range of a single station, it is difficult to guarantee the overall measurement accuracy.
To further expand the measurement range for large-size objects and guarantee the overall measurement accuracy, external measurement devices such as indoor GPS (iGPS) systems, total stations, and laser trackers are being integrated into 3D shape measurement systems [23,24,25,26]. Du, F. et al. [23] developed a large-scale 3D measurement system that combines iGPS, a robot, and a portable scanner. However, the overall measurement accuracy is limited by the measurement properties of the iGPS. Paoli, A. et al. [24] developed a 3D measurement system that combines a 3D scanner, a total station, and a robot to automate the measurement of hull yacht shapes. Several optical corner cube reflectors are mounted on the base of the robotic system and tracked by the total station. However, the robot positioning error is inevitably introduced, and the measurement accuracy is restricted by the robot positioning accuracy. Leica developed a large-scale 3D shape measurement system [25] by combining a laser tracker, a T-Scan, and a robot; however, it is too expensive to be widely adopted. Du, H. et al. [26] proposed a robot-integrated 3D scanning system that combines a 3D scanner, a laser tracker, and a robot. During operation, the robot carries the scanner to the planned measurement position, and the scanner's end coordinate system is created by rotating the 3D scanner, which is tracked by the laser tracker. However, the laser tracker cannot detect and control the measurement errors during the measurement process.
As the requirements for accuracy have continued to increase, the measurement methods and systems mentioned above cannot meet the present requirements for high-accuracy on-line measurement of key local features. Besides, studies of error control in measurement systems are limited. Therefore, a combined measurement method for the large-scale 3D shape measurement of key local features is proposed, which combines a 3D scanner, a laser tracker, and an industrial robot. On this basis, a novel calibration method is developed.
The remainder of the paper is structured as follows: Section 2 introduces the combined measurement method in detail. The calibration of the measurement system is described in Section 3. In Section 4, the proposed method is verified through calibration and measurement experiments. Concluding remarks are provided in Section 5.
3. Calibration of the Measurement System
The high-precision model for coordinate transformation between SCS and ICS is established by extrinsic parameter calibration. The extrinsic parameter calibration must be performed before the combined measurement system is applied to large-scale metrology; no calibration is performed during the measurement process itself. Therefore, an accurate extrinsic parameter calibration result is a critical factor in ensuring the overall measurement accuracy of the proposed system.
To improve the overall measurement accuracy and minimize the measurement errors, an extrinsic parameter calibration method based on the optimization of the coordinates of common points (COCP) and the coordinates of global control points (COGP) is proposed. Firstly, the COCP is introduced. Then, the COGP based on the angular constraint is proposed to minimize the measurement errors and improve the accuracy of the position and orientation of the 3D scanner.
3.1. Calibration Principle
The homogeneous coordinates of a point in two coordinate systems $A$ and $B$ can be denoted as ${}^{A}P$ and ${}^{B}P$. The relationship between them can be expressed as follows:

$$ {}^{B}P = T\,{}^{A}P, \qquad T = \begin{bmatrix} R & t \\ \mathbf{0} & 1 \end{bmatrix} \tag{1} $$

where $T$ is the homogeneous transformation matrix, $R$ is a rotation matrix parameterized by the Cardan angles $(\alpha, \beta, \gamma)$, and $t = [t_x, t_y, t_z]^{\mathrm{T}}$ is a translation vector.
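As a minimal sketch (not part of the measurement system itself), the homogeneous transformation described above can be assembled and applied with NumPy; the Cardan angles and translation below are hypothetical values chosen only for illustration.

```python
import numpy as np

def cardan_rotation(alpha, beta, gamma):
    """Rotation matrix from Cardan (x-y-z) angles, given in radians."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def homogeneous(R, t):
    """Assemble the 4x4 homogeneous transformation matrix [[R, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical extrinsic parameters (angles in radians, translation in mm).
R = cardan_rotation(0.01, -0.02, 0.03)
t = np.array([1200.0, -350.0, 80.0])
T = homogeneous(R, t)

# Map a point, given in homogeneous coordinates, from system A to system B.
p_A = np.array([100.0, 50.0, 25.0, 1.0])
p_B = T @ p_A
```

Because the rotation block is orthonormal, the inverse transformation recovers the original point exactly, which is the property the calibration chain relies on.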
The combined calibration system consists of a 3D scanner, a laser tracker, an industrial robot, and the calibration target. The principle of extrinsic parameter calibration is shown in Figure 3. Firstly, to establish the transformation relationship $T_{SG}$ between SCS and GCS, the laser tracker and the 3D scanner measure the common points arranged on the calibration target. Then, the laser tracker locates and tracks the position and orientation of the 3D scanner by measuring the coordinates of the global control points, from which the transformation matrix $T_{GI}$ is established. Finally, from these two transformation matrixes, the extrinsic parameter matrix $T_{SI}$ can be calculated.
Four or more noncollinear SMRs set on the 3D scanner, known as global control points, are denoted as $Q_j$; their homogeneous coordinates in ICS are denoted as ${}^{I}Q_j$. Two groups of target observation points, which can be exchanged with each other, are arranged on the calibration target to ensure the accuracy of extrinsic parameter calibration. Because standard ceramic spheres take the place of the SMRs, the sphere centers of the SMRs essentially coincide with those of the standard ceramic spheres. The homogeneous coordinates of the laser tracker observation points $P_i$ are denoted as ${}^{G}P_i$ in the laser tracker measurement coordinate system (GCS), and the homogeneous coordinates of the 3D scanner observation points are denoted as ${}^{S}P_i$ in SCS. The following relationship exists between them:

$$ {}^{G}P_i = T_{SG}\,{}^{S}P_i \tag{2} $$
The homogeneous coordinates of $Q_j$ in GCS are denoted as ${}^{G}Q_j$; the relationship between ${}^{I}Q_j$ in ICS and ${}^{G}Q_j$ in GCS is then expressed as follows:

$$ {}^{I}Q_j = T_{GI}\,{}^{G}Q_j \tag{3} $$

where $T_{GI}$ is the transformation matrix between GCS and ICS. The same transformation maps any point from GCS into ICS, in particular the common points:

$$ {}^{I}P_i = T_{GI}\,{}^{G}P_i \tag{4} $$

According to Equations (2)–(4), the extrinsic parameter matrix $T_{SI}$ can be calculated as follows:

$$ T_{SI} = T_{GI}\,T_{SG} \tag{5} $$
Improving the accuracy of the transformation matrixes $T_{SG}$ and $T_{GI}$ is the key to improving the accuracy of the transformation matrix $T_{SI}$. However, the measurement errors of the laser tracker and the 3D scanner introduce errors into the transformation parameters. Therefore, to improve the accuracy of extrinsic parameter calibration by minimizing the measurement errors, the coordinate optimization method for the common points is proposed in Section 3.2, and the coordinate optimization method for the global control points is proposed in Section 3.3.
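The calibration chain of this section can be summarized in a short sketch: one transformation maps SCS into GCS, a second maps GCS into ICS, and the extrinsic parameter matrix is their product. The notation ($T_{SG}$, $T_{GI}$, $T_{SI}$) and all numeric poses below are assumptions for illustration only.

```python
import numpy as np

def rigid(R, t):
    """4x4 homogeneous matrix from a rotation matrix and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Assumed poses: SCS -> GCS (from the common points) and GCS -> ICS
# (from the global control points); the numbers are illustrative only.
T_SG = rigid(rot_z(0.10), np.array([500.0, 20.0, -15.0]))
T_GI = rigid(rot_z(-0.30), np.array([-80.0, 640.0, 12.0]))

# Extrinsic parameter matrix: scanner coordinates mapped directly into ICS.
T_SI = T_GI @ T_SG

# A scanner-frame point reaches ICS identically via the chain or the product.
p_S = np.array([10.0, -4.0, 2.5, 1.0])
assert np.allclose(T_SI @ p_S, T_GI @ (T_SG @ p_S))
```

The product of two rigid transformations is again rigid, so composing the two calibrated stages introduces no additional model error beyond the errors in the stages themselves.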
3.2. Optimization of the Coordinates of Common Points
The common points arranged on the calibration target are measured by both the laser tracker and the 3D scanner. COCP is proposed to minimize the measurement errors. In addition, it optimizes the transformation parameters of $T_{SG}$, which consist of the three Cardan angles in the rotation matrix $R_{SG}$ and the three translation parameters in $t_{SG}$.
Figure 4 shows the common points measured at $M$ positions. If the coordinate system of the first position is taken as the reference coordinate system, the Cartesian coordinates of the common points obtained by the laser tracker and the 3D scanner at the first position can be denoted as ${}^{G}P_i^{(1)}$ and ${}^{S}P_i^{(1)}$, respectively, and those measured at the other positions as ${}^{G}P_i^{(m)}$ and ${}^{S}P_i^{(m)}$ $(m = 2, \dots, M)$. If the measurement errors are considered, Equation (2) can be rewritten as follows:

$$ {}^{G}P_i + v_i^{G} = T_{SG}\left({}^{S}P_i + v_i^{S}\right) \tag{6} $$

where $v_i^{G}$ and $v_i^{S}$ represent the correction values for the common points of the laser tracker and the 3D scanner, respectively. The simultaneous equations of the measurement errors over all positions are as follows:

$$ {}^{G}P_i^{(m)} + v_i^{G,(m)} = T_{SG}^{(m)}\left({}^{S}P_i^{(m)} + v_i^{S,(m)}\right), \quad m = 1, \dots, M \tag{7} $$
The rotation and translation matrixes between the laser tracker and the 3D scanner at all positions can be computed by the Procrustes method [27]. On this basis, the correction values of the coordinates of the common points can be calculated by the rank-defect network adjustment algorithm [28]. As a result, the coordinate values of the common points are optimized, and the accuracy of the transformation parameters of $T_{SG}$ is thereby improved.
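The Procrustes step can be illustrated with the standard SVD-based (Kabsch) closed-form solution for a rigid fit between two point sets. This is a generic sketch, not the authors' exact implementation, and the synthetic point data are assumed.

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares rigid fit: R, t such that R @ P_i + t ≈ Q_i (P, Q are N x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: scanner-frame common points and their laser-tracker images
# under a known (assumed) rigid motion.
rng = np.random.default_rng(0)
P = rng.uniform(-1.0, 1.0, (6, 3))
a = 0.2
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true

R_est, t_est = procrustes_rigid(P, Q)
```

With noise-free synthetic data the fit recovers the rigid motion to machine precision; with real measurements, the residuals of this fit are exactly what the subsequent network adjustment redistributes.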
3.3. Optimization of the Coordinates of Global Control Points
Due to environmental uncertainties and instrument instability, errors in the measurement of the global control points are unavoidable. To solve the problem of unknown and uncontrollable errors in measuring the global control points and to optimize the transformation parameters of $T_{GI}$, COGP based on the angular constraint is proposed to obtain the correction values of the coordinates of the global control points. Because the angle between two vectors in Euclidean space is independent of the coordinate system [29], the geometric information of the global control points set on the 3D scanner can be fully used. Among the four global control points, a vector is established by each pair of target points, and the angle between two vectors $\mathbf{a}$ and $\mathbf{b}$ is denoted as $\theta$, as shown in Figure 5. The four target points make up six vectors, which form 15 angles.
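The counting above (six vectors from four points, 15 angles from six vectors) can be made concrete with a short sketch; the control point coordinates are hypothetical.

```python
import numpy as np
from itertools import combinations

# Hypothetical coordinates (mm) of the four global control points (SMR centers).
points = np.array([
    [0.0,   0.0,  0.0],
    [100.0, 0.0,  0.0],
    [0.0,   80.0, 0.0],
    [20.0,  30.0, 60.0],
])

# Every pair of target points defines a vector: C(4, 2) = 6 vectors.
vectors = [points[j] - points[i] for i, j in combinations(range(4), 2)]

def angle(a, b):
    """Angle between two nonzero vectors via the arc cosine of the normalized dot product."""
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Every pair of vectors yields one angular constraint: C(6, 2) = 15 angles.
angles = [angle(a, b) for a, b in combinations(vectors, 2)]
```

Clipping the cosine to [−1, 1] guards against round-off pushing the argument of `arccos` slightly out of range for nearly parallel vectors.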
A CMM is used to calibrate the angle values, which are set as the nominal angles. The angle error equation is obtained by calculating the difference between the actual angle values measured on site and the nominal angle values. Then, the normal equation is obtained by the least squares method.
The angle between two nonzero vectors is given as follows:

$$ \cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert} \tag{8} $$

Therefore, the angle $\theta$ can be obtained by the arc cosine function as follows:

$$ \theta = \arccos\frac{\mathbf{a} \cdot \mathbf{b}}{\lVert\mathbf{a}\rVert\,\lVert\mathbf{b}\rVert} \tag{9} $$
Equation (9) is expanded by Taylor's formula, and the second- and higher-order terms are ignored. Therefore, the linearized equation of the angular constraint is expressed as follows:

$$ \theta_n = \theta_n' + \sum_{k} \frac{\partial \theta_n}{\partial x_k}\,\delta x_k \tag{10} $$

where $\delta x_k$ are the optimized correction values of the coordinates of the global control points, and $\boldsymbol{\delta} = [\delta x_1, \dots, \delta x_k]^{\mathrm{T}}$ is the vector of the correction values.

The angle error equation is expressed as follows:

$$ l_n = \tilde{\theta}_n - \theta_n' \tag{11} $$

where $\tilde{\theta}_n$ are the nominal angles, and $\theta_n'$ are the actual angles.

In an alternative way, Equation (10) can be rewritten in matrix form as follows:

$$ V = A\boldsymbol{\delta} - L \tag{12} $$

where $L = [l_1, \dots, l_{15}]^{\mathrm{T}}$ is the angle error vector, $A$ is the coefficient matrix of the partial derivatives, and $V$ is the residual vector.
Then, the normal equation is as follows:

$$ A^{\mathrm{T}}WA\,\boldsymbol{\delta} = A^{\mathrm{T}}WL \tag{13} $$

where $W$ is the weight matrix.

The objective function for finding the best coordinate estimates can be expressed as follows:

$$ \hat{\boldsymbol{\delta}} = \arg\min_{\boldsymbol{\delta}}\,(A\boldsymbol{\delta} - L)^{\mathrm{T}} W (A\boldsymbol{\delta} - L) \tag{14} $$

The angle adjustment is conducted to obtain the optimal estimate by solving the error equation in the least squares sense. However, the traditional least squares method requires the coefficient matrix to be nonsingular, i.e., of full rank. The coefficient matrix in Equation (13) is ill-conditioned, with a large condition number $\kappa = \lambda_{\max}/\lambda_{\min}$ (where $\lambda_{\max}$ and $\lambda_{\min}$ represent the maximum and minimum eigenvalues of the coefficient matrix $A^{\mathrm{T}}WA$), and the resulting solution is extremely unstable.
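A toy numerical sketch of this ill-conditioning, using an assumed, nearly rank-deficient coefficient matrix rather than the system's real data:

```python
import numpy as np

# Assumed design matrix A whose columns are almost linearly dependent,
# mimicking the linearized angular-constraint coefficients.
A = np.array([
    [1.0, 1.0],
    [1.0, 1.0001],
    [1.0, 0.9999],
])
N = A.T @ A  # coefficient matrix of the normal equation

# Condition number as the ratio of the largest to the smallest eigenvalue.
eigvals = np.linalg.eigvalsh(N)
kappa = eigvals.max() / eigvals.min()
```

For this matrix the condition number exceeds 10^6, so a relative perturbation of the right-hand side can be amplified by that factor in the solution, which is why an unregularized solve is unreliable here.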
To obtain the optimal solution, a two-objective optimization formula can be constructed as follows:

$$ \min\left\{ \lVert A\boldsymbol{\delta} - L \rVert_2^2,\; \lVert \boldsymbol{\delta} \rVert_2^2 \right\} \tag{15} $$

According to Tikhonov's regularization method, the objective function of Equation (15) based on the ridge estimation algorithm is given as follows:

$$ F(\boldsymbol{\delta}) = (A\boldsymbol{\delta} - L)^{\mathrm{T}} W (A\boldsymbol{\delta} - L) + \mu\,\lVert \boldsymbol{\delta} \rVert_2^2 \tag{16} $$

where the non-negative parameter $\mu$ is the ridge estimation parameter, and $I$ is the unit matrix. Taking the gradient of $F(\boldsymbol{\delta})$ in Equation (16), we have

$$ \nabla F(\boldsymbol{\delta}) = 2A^{\mathrm{T}}W(A\boldsymbol{\delta} - L) + 2\mu\boldsymbol{\delta} \tag{17} $$

According to the extremum condition, Equation (17) is set equal to zero. Therefore, the final solution can be expressed as follows:

$$ \hat{\boldsymbol{\delta}} = \left(A^{\mathrm{T}}WA + \mu I\right)^{-1} A^{\mathrm{T}}WL \tag{18} $$
where the damping term $\mu I$ added to the main diagonal of the coefficient matrix $A^{\mathrm{T}}WA$ in Equation (18) overcomes the ill-conditioning of the coefficient matrix. Thus, a stable solution can be obtained.
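A minimal sketch of the ridge estimate, using an assumed ill-conditioned toy system in place of the real angular-constraint adjustment; the design matrix, error vector, and weights are all illustrative values.

```python
import numpy as np

# Assumed ill-conditioned toy system: A (design matrix), L (angle error
# vector), W (weight matrix) stand in for the real adjustment data.
A = np.array([
    [1.0, 1.0],
    [1.0, 1.0001],
    [1.0, 0.9999],
])
L = np.array([0.010, 0.011, 0.009])
W = np.eye(3)  # equal weights

N = A.T @ W @ A  # coefficient matrix of the normal equation

# Ridge (Tikhonov) estimate: the damping term mu*I is added to the main
# diagonal of N before solving, which stabilizes the solution.
mu = 1e-6
delta = np.linalg.solve(N + mu * np.eye(N.shape[0]), A.T @ W @ L)
```

Even a small damping term reduces the condition number of the system by orders of magnitude, at the cost of a small, controlled bias in the estimate.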
The ridge estimation method changes the singular matrix $A^{\mathrm{T}}WA$ into a nonsingular one and ensures the stability of the solution of the ill-conditioned equation. The appropriate ridge parameter $\mu$ can be determined by the L-curve method [30], which reduces the condition number of the equation and turns the ill-conditioned equation into a well-conditioned one. As a result, the coordinate values of the global control points are optimized, and the accuracy of the transformation parameters of $T_{GI}$ is thereby improved.
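The L-curve selection can be sketched as follows, again on an assumed toy system: for each candidate ridge parameter, the log residual norm is recorded against the log solution norm, and the parameter at the corner of the resulting curve is chosen. The corner detection below uses a simple unnormalized discrete-curvature proxy, which is one of several possible implementations.

```python
import numpy as np

# Assumed toy ill-conditioned adjustment (illustrative values only).
A = np.array([
    [1.0, 1.0],
    [1.0, 1.0001],
    [1.0, 0.9999],
])
L = np.array([0.010, 0.012, 0.009])

# Sweep candidate ridge parameters and record the L-curve coordinates:
# log residual norm ||A d - L|| versus log solution norm ||d||.
mus = np.logspace(-12, 0, 25)
res_log, sol_log = [], []
for mu in mus:
    d = np.linalg.solve(A.T @ A + mu * np.eye(2), A.T @ L)
    res_log.append(np.log(np.linalg.norm(A @ d - L)))
    sol_log.append(np.log(np.linalg.norm(d)))

# Corner of the L-curve: the point maximizing an (unnormalized) discrete
# curvature measure of the (log solution norm, log residual norm) curve.
x, y = np.array(sol_log), np.array(res_log)
curvature = np.abs(np.gradient(x) * np.gradient(np.gradient(y))
                   - np.gradient(y) * np.gradient(np.gradient(x)))
mu_best = mus[int(np.argmax(curvature))]
```

Too small a ridge parameter leaves the solution norm inflated by noise, while too large a one over-smooths the residual; the corner balances the two, which is the trade-off Equation (15) formalizes.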