Article

Large-Space Laser Tracking Attitude Combination Measurement Using Backpropagation Algorithm Based on Neighborhood Search

by Ziyue Zhao 1, Zhi Xiong 2,*, Zhengnan Guo 2, Hao Zhang 2, Xiangyu Li 2, Zhongsheng Zhai 2 and Weihu Zhou 2,3
1 AVIC Changcheng Institute of Metrology & Measurement, Beijing 100095, China
2 Hubei Key Laboratory of Modern Manufacture Quality Engineering, School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China
3 Institute of Microelectronics of the Chinese Academy of Sciences, Beijing 100029, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(3), 1083; https://doi.org/10.3390/app15031083
Submission received: 26 November 2024 / Revised: 15 January 2025 / Accepted: 20 January 2025 / Published: 22 January 2025

Abstract
Large-space, high-precision dynamic attitude measurement technology is urgently needed in large-equipment manufacturing fields such as aerospace, rail transportation, automobiles, and ships. In this paper, taking laser tracking equipment as the base station, a backpropagation algorithm based on neighborhood search is proposed and applied to the fusion of multi-source information for solving the dynamic attitude angle. This paper first establishes a mathematical model of laser tracking attitude dynamic measurement based on IMU and CCD multi-sensor data, designs a 6-11-3 backpropagation network structure and algorithm flow, and realizes the prediction of the attitude angle through model training. Second, a method based on neighborhood search determines the optimal training target value of the model, whose MSE is 34% lower than that of the IMU-based determination method. Finally, an experimental platform is set up with a precision rotary table as the motion carrier to verify the effectiveness of the proposed method. The experimental results show that, with the neighborhood-search-based backpropagation algorithm, the measurement results have a higher data update rate and a certain suppression effect on the error accumulation of the IMU. The absolute value of the system angle error is less than 0.4° within 8 m and 0–50°, with an angle update rate of 100 Hz. The research method in this paper can be applied to the dynamic measurement of laser tracking attitude angles and provides a new reference for angle measurement methods based on the fusion of multi-source information.

1. Introduction

With the rapid development of high-end intelligent manufacturing and large-scale scientific engineering fields, there is an increasing demand for high-precision six-degree-of-freedom measurement technology for spatial dynamics [1,2]. In addition, roll-pitch-yaw angles in aeronautics, navigation, and robotics are now being incorporated into the field of six-degree-of-freedom interferometry [3]. The acquisition of pose information for moving targets and the development of pose measurement methods that meet the needs of dynamic measurement are currently urgent issues that need to be resolved in the fields of intelligent manufacturing, large-scale scientific engineering construction, and digital assembly [4,5,6]. The six-degree-of-freedom measurement method based on laser tracking equipment is a promising solution [7]. This system typically consists of laser tracking equipment, an upper computer, and cooperative targets, using laser technology to track and measure the pose information of moving targets [8]. It features high precision, a large measurement range, and on-site measurement capabilities, making it a core detection system for large high-end equipment manufacturing. This system is widely used in various industrial fields such as aviation, aerospace, automotive, shipbuilding, and manufacturing [9,10].
The T-Mac Sensor system from Leica integrates laser tracking devices, high-speed cameras, and Mac target units. This system is capable of real-time online measurement and tracking, making it suitable for high-precision measurements in dynamic environments [11]. When used in conjunction with laser tracking devices, it can cover ranges from several meters to tens of meters while providing sub-millimeter measurement accuracy. However, due to its reliance on visual equipment, the system cannot perform continuous measurements when the field of view is obstructed [12]. On the other hand, the STS from API is another device designed for high-precision six-degree-of-freedom measurements [13]. Compared to the T-Mac, STS overcomes the limitation of interrupted measurements when the visual line is blocked, thereby adapting better to complex and variable measurement environments. Nonetheless, the use of inclinometers constrains its dynamic performance.
Research on six-degree-of-freedom measurement based on shield machine guidance systems has been conducted at several institutions. Huazhong University of Science and Technology [14,15] investigated the use of high-resolution CCDs to improve the positioning accuracy of the spot center, effectively improving the measurement accuracy of the shield attitude angle. Zhengzhou University [16], based on the kinematic model of an equivalent mechanism, used a hybrid neural network–Newton algorithm to solve the shield position and found that the accuracy of shield tunneling position control was improved: the prediction error using a single neural network is up to 30 mm, while the error using the hybrid neural network–Newton iterative algorithm stabilizes around 1 × 10⁻⁵ mm. Li Wei [17] from Tangshan College developed a laser-guiding target for shield machines. The shield guiding system is mainly composed of a total station, a reference prism, and a laser target [18], employing a CCD camera, tilt sensor, and prism to accurately determine the shield's attitude angle and position coordinates. The system has a simple structure, easy calculation, and comparatively high accuracy. In the system mentioned above, the laser target is mainly composed of a pinhole prism, an inclinometer, and a CCD industrial camera. This method has the advantages of simple operation, a large measurement range, good portability, real-time measurement, and strong environmental adaptability, but it suffers from a low measurement update rate and a slow response. Additionally, to ensure measurement accuracy, the inclinometer requires a certain range of measurement angles, which significantly restricts the system's angle measurement range, particularly for the roll angle [19,20].
H. Zhang from Hubei University of Technology [21] studied monocular visual attitude measurement methods, in which feature points on the cooperative target were captured by a CCD camera to complete the attitude angle measurement, and the measurement accuracy was evaluated using a high-precision two-dimensional turntable. Yan, K. and Xiong, Z. [22] proposed a pose measurement method based on a 2DPSD and monocular vision, using the SVD method to calculate the relative attitude between the laser tracker coordinate system and the camera coordinate system. This method can keep the maximum attitude angle measurement error below 2° within the effective angle range of [−25°, 25°] at a distance of 3 m. L. Zhang [23] proposed a laser-tracking attitude measurement method that combines a PSD with monocular vision; this method overcomes the slow update rate of the roll angle and the limited measurement range, showing good measurement performance over long distances. To further improve the measurement accuracy of vision, Wu, C. and Xiong, Z. [24] proposed a pose calculation method based on the WAOI algorithm, which uses the OI algorithm as its core to solve visual measurements. Within an angular measurement range of −20° to 20° and a measurement distance of 3 to 15 m, the accuracy of the azimuth and elevation angles of the laser tracking measurement system using the WAOI algorithm is better than 0.26°; compared to the POSIT algorithm, the accuracy of both angles is improved by more than 75%. However, there is still considerable room for improvement in system measurement accuracy and dynamic performance [25].
Tianjin University [26,27,28,29,30] proposed a six-degree-of-freedom measurement method based on a laser tracker and inertial navigation. An FKF based on the residual chi-square test was proposed to solve the critical problem of inclinometer failure under dynamic conditions, and comparison experiments with the Leica T-MAC verified the accuracy and dynamic performance of the method: the attitude accuracy of the integrated system is better than 0.15°, while the three-coordinate accuracy reaches 0.3 mm under dynamic conditions. In summary, the laser tracking attitude measurement system that relies on visual measurement is recognized for its compact size, low cost, and flexible measurement capabilities, yet it is not without drawbacks. Measurement accuracy diminishes as distance increases, with limited potential for enhanced precision in long-distance attitude measurement. Additionally, visual measurement, being quasi-static, is susceptible to frame loss when tracking rapidly moving objects, which constrains its dynamic measurement capability. The attitude measurement method based on a pinhole prism, photoelectric position sensor, and inclinometer solves the problem of the low sensitivity of yaw and pitch angles, but it is still not suitable for dynamic measurement because of its low measurement update rate, slow response, and limited incident angle. Multi-sensor data fusion based on inertial navigation, with its good complementarity and autonomy, is a new research hotspot and an important development direction in the field of industrial measurement.
Leveraging the laser tracking equipment as the central station, this paper introduces a dynamic attitude measurement method that integrates visual and inertial measurement units and conducts an in-depth study of data fusion methods for the visual and inertial data sources. The effectiveness of the proposed methods is substantiated through experimental validation.
The key highlights of this study are as follows:
(1)
A laser tracking attitude dynamic measurement method combining a CCD and an IMU is proposed. By training neural network models, high-precision dynamic measurement is achieved.
(2)
Machine learning algorithms are applied to the attitude measurement problem: machine learning techniques optimize the process of determining attitude angles from the fused sensor data, improving the accuracy and robustness of the measurement system.
(3)
An innovative method based on neighborhood search is proposed to determine the target values used in training. This approach selects the most relevant and accurate data points for training the machine learning model, leading to an optimized design of the training model.

2. Measurement System Composition and Principles

The laser tracking attitude measurement system is primarily led by laser tracking equipment, combined with a camera, an IMU, and cooperative targets, to complete the task of dynamic attitude measurement. As shown in Figure 1, the system measures the attitude of the cooperative target relative to the laser tracking equipment. It comprehensively integrates the high precision of vision with the high data rate of the IMU to ensure the dynamic performance of the combined attitude angle measurement.
As shown in Figure 1, the positions of the laser tracking equipment and the camera are relatively fixed. The origin of the laser tracking equipment coordinate system (OL-XLYLZL) is located at the laser head, where the XL-axis is perpendicular to the horizontal code disk and the YL-axis is perpendicular to the vertical code disk. The origin of the camera coordinate system (OC-XCYCZC) is positioned at the optical center of the camera, with the XC-, YC-, and ZC-axes parallel to the three axes of the laser tracking equipment coordinate system. The cooperative target is fixed to the object to be measured, and the origin of the target coordinate system (OT-XTYTZT) is located at the center of the target sphere, with the ZT-axis perpendicular to the target plane and pointing toward the IMU, and the XT- and YT-axes determined by the right-hand rule. The origin of the IMU coordinate system (OI-XIYIZI) is located at the sensitive center of the IMU, with the XI-, YI-, and ZI-axes parallel to the three axes of the target coordinate system. The distance between the center of the target sphere and the IMU is l. The IMU is fixedly installed inside the cooperative target, and the rotational and translational relationship between its coordinate system and the target coordinate system is determined by system calibration.
Taking the initial position of the laser tracking device coordinate system as the reference coordinate system, we define $R_C^L$ as the rotation matrix from the camera coordinate system to the laser tracker coordinate system, $R_T^C$ as the rotation matrix from the cooperative target coordinate system to the camera coordinate system, and $R_T^I$ as the rotation matrix from the cooperative target coordinate system to the IMU coordinate system. After rotating around its own Z axis, X axis, and Y axis, the cooperative target coordinate system coincides with the camera coordinate system. According to the definition of Euler angles, the rotation matrix relating these two coordinate systems can be expressed as
$$
R_T^C = R_\phi R_\theta R_\varphi =
\begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{bmatrix}
=
\begin{bmatrix}
\cos\phi\cos\varphi + \sin\theta\sin\phi\sin\varphi & \cos\phi\sin\varphi - \cos\varphi\sin\theta\sin\phi & \cos\theta\sin\phi \\
-\cos\theta\sin\varphi & \cos\theta\cos\varphi & \sin\theta \\
\cos\phi\sin\theta\sin\varphi - \cos\varphi\sin\phi & -\sin\phi\sin\varphi - \cos\phi\cos\varphi\sin\theta & \cos\theta\cos\phi
\end{bmatrix} \tag{1}
$$
Using the initial position of the laser tracking device coordinate system as the reference coordinate system, the rotation relationship between the laser tracking device coordinate system and the target coordinate system can be calculated through the chain of rotation matrices, as shown in the following equation:
$$ R_T^L = R_C^L \times R_T^C \tag{2} $$
Expressed in the form of a 3 × 3 matrix,
$$ R_T^L = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \tag{3} $$
Using the mathematical relationships between rotation matrices and Euler angles, one can calculate the azimuth angle (rotation around the X-axis), elevation angle (rotation around the Y-axis), and roll angle (rotation around the Z-axis) of the target in the laser tracking device coordinate system.
$$ \alpha = \arcsin r_{23}, \qquad \beta = \arctan\frac{r_{13}}{r_{33}}, \qquad \gamma = \arctan\frac{r_{12}}{r_{22}} \tag{4} $$
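For concreteness, the following is a minimal NumPy sketch of Equations (1)–(4): it composes $R_T^C$ in the Y–X–Z order of Equation (1), chains it to the tracker frame as in Equation (2) (assuming, purely for illustration, that the camera and tracker axes are parallel, so $R_C^L = I$), and recovers the three angles. The quadrant-safe arctan2 form used for $\gamma$ is a common variant whose index convention may differ from the printed Equation (4); the test angles are arbitrary.

```python
import numpy as np

def rot_x(t):  # rotation about the X axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_y(t):  # rotation about the Y axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):  # rotation about the Z axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

# Compose R_T^C in the order of Equation (1), then chain to the tracker
# frame as in Equation (2); R_C^L = I is an illustrative assumption.
phi, theta, psi = np.deg2rad([10.0, 20.0, 5.0])   # arbitrary test angles
R_TC = rot_y(phi) @ rot_x(theta) @ rot_z(psi)
R_TL = np.eye(3) @ R_TC

# Angle extraction in the spirit of Equation (4); arctan2 keeps the
# results quadrant-safe for this rotation order.
alpha = np.arcsin(R_TL[1, 2])                     # rotation about X
beta  = np.arctan2(R_TL[0, 2], R_TL[2, 2])        # rotation about Y
gamma = np.arctan2(-R_TL[1, 0], R_TL[1, 1])       # rotation about Z
print(np.rad2deg([alpha, beta, gamma]))           # ~[20, 10, 5]
```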
In the aforementioned attitude dynamic measurement system, the data for $R_T^C$ consist of two parts: first, attitude tracking measurements of the target in the camera coordinate system obtained through visual means, yielding attitude angle 1; second, integrated measurements of the target's attitude relative to its initial position using the IMU, refined with real-time relative pose information from vision, yielding attitude angle 2 in the camera coordinate system. The visual system provides highly accurate target attitude information owing to its superior spatial resolution, especially for targets at short distances. The IMU provides real-time acceleration and angular velocity information, offering significant advantages in tracking fast-moving targets. However, the visual system may suffer interference from factors such as ambient light and occlusion during prolonged tracking, making its dynamic performance inferior to that of the IMU. Therefore, effectively integrating multi-source sensor data by combining visual and IMU-based measurements to enhance system dynamic performance is the critical focus of the subsequent research.

3. Data Fusion Method Based on Self-Adaptive Backpropagation Algorithm

3.1. Combined Measurement Fusion Method

As mentioned above, the attitude angle measurement data come from the visual and inertial units, which have different sampling frequencies. Owing to the strong learning capability, adaptability, and flexibility of the backpropagation algorithm, it was chosen to fuse the data from the visual and inertial measurement units. The fusion process is illustrated in Figure 2. Based on the principle of time alignment, the visual measurement data, which have the lower sampling frequency, are first linearly interpolated; the interpolated data are then fused with the inertial measurement unit data using the backpropagation algorithm.
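As a concrete illustration of this time-alignment step, the sketch below upsamples a 1 Hz visual stream onto a 100 Hz IMU time base by linear interpolation (the two rates are those quoted in Section 4.1; the signal values are dummies):

```python
import numpy as np

# Hypothetical timestamps: vision at 1 Hz, IMU at 100 Hz (rates from Section 4.1).
t_imu = np.arange(0.0, 10.0, 0.01)       # 100 Hz IMU time base
t_vis = np.arange(0.0, 10.0, 1.0)        # 1 Hz visual samples
vis_angles = np.column_stack([np.sin(t_vis), np.cos(t_vis), 0.1 * t_vis])  # dummy az/el/roll

# Upsample each visual angle channel onto the IMU time base.
vis_aligned = np.column_stack(
    [np.interp(t_imu, t_vis, vis_angles[:, k]) for k in range(3)]
)
# vis_aligned now has one row per IMU sample; concatenated with the three
# IMU angles, it forms the six-node input of the fusion network.
```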

3.2. Structure Design of the Backpropagation Algorithm Based on Neighborhood Search

The main function of the backpropagation algorithm based on neighborhood search is to learn the mapping relationship between inputs and outputs, demonstrating powerful nonlinear mapping capabilities even with a three-layer network. In this paper, a three-layer neural network structure is designed for the fusion method of attitude information based on visual and inertial measurements. The network structure includes an input layer, a hidden layer, and an output layer. The input layer comprises six nodes corresponding to three attitude angles derived from visual estimation and three attitude angles from inertial measurement (namely azimuth, pitch, and roll angles). The output layer consists of three nodes representing the fused final output attitude angles.
The scale of the backpropagation network is determined by the depth of the hidden layer and the number of neurons per layer, which directly affects the performance of the network. Therefore, determining the number of hidden layer neurons is crucial for building a training model. In general, the formula for determining the number of hidden layer neurons is as follows:
$$
\mathrm{count}_{hidden} = \sqrt{\mathrm{count}_{input} + \mathrm{count}_{output}} + a, \quad
\mathrm{count}_{hidden} = \log_2 \mathrm{count}_{input}, \quad
\mathrm{count}_{hidden} = 2\,\mathrm{count}_{input} + \mathrm{count}_{output}, \quad
\mathrm{count}_{hidden} = \frac{\mathrm{count}_{input} + \mathrm{count}_{output}}{2} \tag{5}
$$
where $\mathrm{count}_{input}$ and $\mathrm{count}_{output}$ represent the numbers of neurons in the input and output layers, $a$ is a regulatory parameter between 0 and 10, and $\mathrm{count}_{hidden}$ is the number of neurons in the hidden layer. The number of input nodes is usually determined by the dimension of the input data, and the number of output nodes by the dimension of the output data. For the backpropagation algorithm of this system, there are six input nodes and three output nodes; using Formula (5), the number of hidden layer neurons should lie between 3 and 15. In neural network structure design, the number of hidden layer neurons is typically greater than that of the input layer, a practice widely documented in the literature [17]. Based on this, the present study sets the range of hidden neuron numbers from 6 to 15. This range balances network complexity against computational efficiency: the minimum of 6 ensures that the network has sufficient capacity to capture the basic features of the input data, while the maximum of 15 maintains the model's generalization ability and avoids overfitting.
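As a quick check, evaluating these candidate formulas for the present network (six inputs, three outputs) reproduces the quoted range of roughly 3 to 15. The sketch below assumes the formula forms reconstructed in Equation (5) above:

```python
import math

n_in, n_out = 6, 3  # input/output node counts from the network design

candidates = {
    "sqrt(in+out)+a": [round(math.sqrt(n_in + n_out)) + a for a in range(0, 11)],
    "log2(in)":       [round(math.log2(n_in))],
    "2*in+out":       [2 * n_in + n_out],
    "(in+out)/2":     [round((n_in + n_out) / 2)],
}
lo = min(min(v) for v in candidates.values())
hi = max(max(v) for v in candidates.values())
print(lo, hi)  # spans roughly 3 to 15, matching the range quoted in the text
```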
In this study, the backpropagation algorithm was implemented in MATLAB R2022b, using the built-in Deep Learning Toolbox to construct and train the network. During training, the maximum number of iterations was set to 600 and the learning rate to 0.01. To optimize the network parameters, gradient descent with momentum was used as the optimizer.
We utilized a dataset comprising 10,000 samples. To evaluate the predictive performance of the model, the dataset was partitioned into training, testing, and validation sets. The training set consisted of 7000 samples, accounting for 70% of the total dataset. The testing set included 1500 samples, representing 15% of the total dataset. Similarly, the validation set also comprised 1500 samples, making up the remaining 15% of the dataset. To ensure the representativeness and randomness of the datasets, we employed a random partitioning method.
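A random 70/15/15 partition of this kind can be sketched as follows (the fixed seed is an added assumption for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # fixed seed is an assumption, for reproducibility
n = 10_000
idx = rng.permutation(n)              # random shuffle of sample indices

train_idx = idx[:7000]                # 70% training
test_idx  = idx[7000:8500]            # 15% testing
val_idx   = idx[8500:]                # 15% validation
```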
With the other training parameters fixed, the neural network is trained with different numbers of hidden neurons one by one, yielding the maximum error curve and training time for each configuration, as shown in Figure 3.
As can be seen from Figure 3, the training time increases with the number of hidden neurons, while the attitude angle error decreases. When the number of hidden neurons reaches 11, the error drops significantly; beyond this point, the training time begins to increase sharply while the improvement in model accuracy is relatively small. Although increasing the number of hidden layer neurons can enhance model precision, in order to ensure the generalization capability of the model and reduce the risk of overfitting, the number of hidden layer nodes of the backpropagation algorithm is set to 11 in this paper.
Optimization of the number of neurons is conducted based on the training time and attitude angle standard deviation, resulting in the chosen backpropagation algorithm structure for this study. In addition, the sigmoid function is chosen as the activation function for the hidden layer, and a linear function is used as the activation function for the output layer. By defining the basic parameters of the backpropagation algorithm structure, training sample data can be input into the neural network for the training and learning process. The trained model structure is illustrated in Figure 4.
In this model, the input to the hidden layer can be written as
$$ Q_{input} = w_1 x + b_1 \tag{6} $$
where $x$ represents the combined attitude angles from the visual and IMU measurements. The output of the hidden layer can be expressed as
$$ Q_{output} = \mathrm{sigmoid}(Q_{input}) \tag{7} $$
The input to the output layer can be represented as
$$ U_{input} = w_2 Q_{output} + b_2 \tag{8} $$
and the output of the output layer can be written as
$$ U_{output} = U_{input} \tag{9} $$
where $w_1$ and $b_1$ are the weight matrix and bias vector between the input layer and the hidden layer, and $w_2$ and $b_2$ are the weight matrix and bias vector between the hidden layer and the output layer. Through the backpropagation algorithm, these weight matrices and bias vectors are iteratively optimized to better fit the training data and improve the predictive performance of the model.
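The forward pass of Equations (6)–(9) can be sketched directly; the weights below are random placeholders, not trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, b1, w2, b2):
    """Forward pass of the 6-11-3 network, following Equations (6)-(9)."""
    q_in = w1 @ x + b1          # hidden-layer input, Equation (6)
    q_out = sigmoid(q_in)       # hidden-layer output, Equation (7)
    u_in = w2 @ q_out + b2      # output-layer input, Equation (8)
    return u_in                 # linear output activation, Equation (9)

# Hypothetical shapes: 6 inputs (vision + IMU angles), 11 hidden, 3 outputs.
rng = np.random.default_rng(1)
w1, b1 = rng.normal(size=(11, 6)), np.zeros(11)
w2, b2 = rng.normal(size=(3, 11)), np.zeros(3)
x = rng.normal(size=6)
print(forward(x, w1, b1, w2, b2))   # three fused attitude angles
```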
As stated above, the network parameters were optimized with gradient descent, using a maximum of 600 iterations and a learning rate of 0.01. The training process of the backpropagation-based network model is shown in Figure 5.
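For reference, one gradient-descent-with-momentum update takes the following form; the momentum coefficient (0.9 here) is not reported in the paper and is an assumed value:

```python
import numpy as np

def momentum_step(param, grad, velocity, lr=0.01, mu=0.9):
    """One gradient-descent-with-momentum update.

    lr = 0.01 matches the learning rate quoted in the text; the momentum
    coefficient mu = 0.9 is an assumed value (not reported in the paper)."""
    velocity = mu * velocity - lr * grad
    return param + velocity, velocity

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w, v = np.ones(3), np.zeros(3)
for _ in range(600):               # 600 iterations, as in the training setup
    w, v = momentum_step(w, 2 * w, v)
print(w)                           # approaches the minimum at the origin
```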
Figure 6, Figure 7 and Figure 8 show the training process of the backpropagation-based network with 11 hidden layer neurons. The regression fit over all processes (training, validation, and test) was R = 0.97855 (Figure 6).
The regression function for training was R = 0.97853; for validation, it was R = 0.97983; while for the test process, it was R = 0.97724. The MSE in this model was 0.039407 and was reached in the 600th epoch (Figure 7).
Figure 8 is an error histogram for the training process showing the difference between the targets and the outputs of the network across training, testing, and validation. Most errors fall within the range from −0.1926 to 0.1864.

3.3. Determination of Training Target Values Based on Neighborhood Search

When confronting the challenge of selecting target values during the data fusion process using a backpropagation algorithm, the neighborhood search method is employed due to its capability to locate optimal or near-optimal solutions within the solution space. This paper adopts a method to determine the training target values based on neighborhood search. Utilizing the neighborhood search approach involves exploring the vicinity of current parameter settings to find values that minimize the objective function. The steps of the neighborhood search are illustrated in Figure 9.
Let $X_{i,j}^t$ be the input dataset for the backpropagation neural network, where $i$ denotes the input sensor ($i = 1$ for the visual sensor, $i = 2$ for the inertial measurement sensor) and $j$ denotes the attitude angle of a sensor at time $t$ ($j = 1$ for the azimuth angle, $j = 2$ for the elevation angle, $j = 3$ for the roll angle). $F(\cdot)$ is the minimum value function. The training set error matrix $Y_{i,j}^t$ is defined in Equation (10):
$$ Y_{i,j}^t = F\left(X_{i,j}^{t+1} - X_{i,j}^t,\ X_{i,j}^t - X_{i,j}^{t-1}\right), \quad i = 1, 2;\ j = 1, 2, 3 \tag{10} $$
After performing the neighborhood search, the comparative error data between the visual and inertial measurement sensors at time $t$ are obtained as shown in Equation (11):
$$ \Delta e_{i,j}^t = \left[Y_{1,j}^t,\ Y_{2,j}^t - Y_{1,j}^t\right] \tag{11} $$
Transposing the above equation and solving for the minimum value, we obtain
$$ \Delta E_{i,j}^t = F\left(\left(\Delta e_{i,j}^t\right)^{\mathrm{T}}\right) \tag{12} $$
By solving the above equation, the minimum error value among the neighboring sensor data is obtained. Owing to the high precision of the visual measurements, the minimum error from the visual data serves as the reference value. Through an inverse search calculation, the training target value $v_j^t$ at a given time is derived as shown in Equation (13):
$$ v_j^t = a \times X_{1,j}^{t-1} + b \times X_{1,j}^t + c \times X_{1,j}^{t+1} \tag{13} $$
where $a$, $b$, and $c$ are linear weighting coefficients adjusted continuously during network training. The neighborhood search method analyzes the temporal variation trends in the sensor data to determine the target values used during neural network training.
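A sketch of the neighborhood-search target computation under the reconstructions of Equations (10) and (13) above; the absolute differences and the starting weights a, b, c are assumptions:

```python
import numpy as np

def neighbor_errors(X):
    """Equation (10): for each sensor, angle, and time step, the smaller of the
    differences to the previous and next samples (taking absolute values is an
    assumption here). X has shape (2, 3, T): sensors (0 = vision, 1 = IMU) x
    attitude angles x time."""
    fwd = np.abs(X[:, :, 2:] - X[:, :, 1:-1])
    bwd = np.abs(X[:, :, 1:-1] - X[:, :, :-2])
    return np.minimum(fwd, bwd)                  # shape (2, 3, T - 2)

def training_targets(X, a=0.3, b=0.4, c=0.3):
    """Equation (13): targets as a weighted blend of the neighboring visual
    samples (vision, sensor index 0, serves as the high-precision reference).
    The weights a, b, c are illustrative starting values; the paper adjusts
    them continuously during training."""
    vis = X[0]                                   # (3, T) visual angle series
    return a * vis[:, :-2] + b * vis[:, 1:-1] + c * vis[:, 2:]
```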
In summary, training sample data are input into the neural network training model for learning and training. This process yields parameters for the backpropagation algorithm structure. Subsequently, actual measurement data are fed into the network structure to achieve predictions of attitude angles.
To assess the effectiveness of the neighborhood search method in training backpropagation models, this study conducted a simulation experiment in which the training target values were set in three different ways: first, to the calculated output values of the IMU; second, to the target values obtained through the neighborhood search method; and third, to the true rotation angle values from the simulation. Subsequently, another set of simulation data generated under the same conditions was used for verification, to explore whether the neighborhood search method can provide effective target values for neural network training.
The training results are shown in Table 1: the network trained with the true values is Net 1, the network trained with the IMU values is Net 2, and the network trained with the neighborhood search values is Net 3. From Table 1, the MSE of Net 3 decreased to 0.196, a 34% reduction compared to that of Net 2, and R increased from 0.901 to 0.929. The neighborhood search method can therefore effectively improve model performance when the true value cannot be determined.
By analyzing the results shown in Figure 10, it can be observed that as the rotation angle increases, the angular error tends to increase when the training output value is set to the IMU solution value. In contrast, when the training output value is set to the target value obtained by the neighborhood search method, the rotation error is reduced compared to the IMU solution value and is closer to the situation where the true rotation angle value is used as the training output value. This finding indicates that using the target values obtained by the neighborhood search method as the output target values in backpropagation algorithm model training can more effectively determine the target values during training, thereby optimizing the performance of the model. Therefore, the results of this simulation experiment support the use of the target values obtained by the neighborhood search method as the output target values in the training process of the backpropagation algorithm to improve the model’s accuracy and robustness.

4. Experimental Verification and Results Analysis

4.1. Experimental Platform

To validate the measurement performance of the attitude measurement system in practical applications, experiments were conducted using a precision turntable for dynamic attitude measurement verification. As shown in Figure 11, the attitude measurement system consists of a total station, a camera, a cooperative target, and an upper computer. The precision two-dimensional turntable has a measurement accuracy of 2 arcseconds and a displayed angular velocity of 5°/s; its azimuth angle can rotate from 50° to 330°, and its elevation angle from 0° to 360°. The turntable can rotate only in the azimuth and elevation directions, and, according to the literature, the roll angle accuracy is higher than that of the azimuth and elevation angles; therefore, this study experimentally validates only the azimuth and elevation angles. The IMU frequency is 100 Hz, and the visual measurement frequency is 1 Hz.

4.2. Experimental Steps

The procedure for attitude measurement based on a rotary table system is outlined as follows:
Step 1: Establish a dynamic attitude measurement system platform by positioning the laser tracking device (total station) and camera at predetermined locations. Conduct calibration for the IMU error model and the camera intrinsic parameter model.
Step 2: Mount a cooperative target equipped with an IMU onto a precision two-dimensional rotary table, ensuring that the target coordinate system aligns with the rotary table coordinate system. Through calibration, ascertain the rotational relationship between the laser tracking device’s coordinate system and the camera’s coordinate system, as well as the rotation relationship between the IMU and the rotary table’s coordinate system.
Step 3: Utilize an upper computer to control the precision two-dimensional rotary table for pre-set single-axis rotations. Set the azimuth angle rotation range from 15° to 55°, starting at 15°, with intervals of 10° for a total of 5 sets, each repeated 10 times. For the pitch angle, the rotation range is set from 10° to 50°, with 10° intervals across five sets, also repeated 10 times per set. At measurement distances of both 4 m and 8 m, continuously capture images of the moving cooperative target using camera control software while simultaneously recording IMU data.
Step 4: Transform the visual information and IMU-acquired data into attitude angles within the laser tracking device’s coordinate system. Following the preprocessing of the visual and IMU data (including solving for attitude angles from both sources and time synchronization of the collected data), employ an improved backpropagation algorithm for data fusion to derive the attitude angles in the laser tracking device’s frame of reference.

4.3. Experimental Results and Analysis

Figure 12 shows the measurement results after the cooperative target is mounted on the rotary table, with the table rotating through an azimuth range of 50° and a pitch range of 55°, using IMU-only measurement, vision-only measurement, and fusion based on the backpropagation algorithm, respectively. From Figure 12, the fused measurements have a higher data update rate and a certain suppression effect on the error accumulation of the IMU.
The absolute values of the errors from repeated measurements at different distances and angles were statistically analyzed to obtain the average absolute error and the maximum absolute error, as shown in Table 2 and Table 3.
From the measurement results in Table 2 and Table 3, the absolute value of the fused error tends to grow as the measured angle increases, and the error becomes larger as the measurement distance increases. For the azimuth angle, the largest average absolute error is 0.057° and the largest maximum absolute error is 0.120° at 4 m, rising to 0.101° and 0.147°, respectively, at 8 m. For the pitch angle, the largest average absolute error is 0.168° and the largest maximum absolute error is 0.250° at 4 m, rising to 0.317° and 0.393°, respectively, at 8 m.
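The per-setting statistics in Tables 2 and 3 follow this pattern; a sketch with hypothetical error values:

```python
import numpy as np

# Hypothetical: signed errors from 10 repeated runs at one distance/angle setting (deg).
errors = np.array([0.05, -0.08, 0.11, -0.03, 0.07, 0.09, -0.06, 0.04, 0.10, -0.02])

avg_abs_error = np.mean(np.abs(errors))   # average absolute error (table column 1)
max_abs_error = np.max(np.abs(errors))    # maximum absolute error (table column 2)
print(f"{avg_abs_error:.3f} deg, {max_abs_error:.3f} deg")
```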
Figure 13 shows the comparative analysis chart between the results of the IMU-integrated solution and the data fusion solution. The horizontal axis of the chart represents the rotation angles of azimuth and pitch, with azimuth rotation angles at 15°, 25°, 35°, 45°, 55°, and pitch rotation angles at 10°, 20°, 30°, 40°, 50°. The vertical axis indicates the angular error between the IMU-integrated solution and the data fusion solution relative to the rotation values of a precision turntable. This angular error is calculated based on the mean error obtained from 10 independent experiments.
From Table 2 and Table 3 and Figure 13, it can be observed that the average measurement error and the maximum measurement error of the system at a measurement distance of 8 m have almost all increased compared to those at 4 m. This is because, in the system described in this article, the distance of measurement only affects the measurement accuracy of the vision system and has almost no impact on the measurement accuracy of the IMU. As the measurement distance increases, the number of pixels occupied by the feature points on the target in the camera image will decrease, leading to a relative increase in the positioning error of the feature points and a decline in the imaging quality of the camera. Moreover, with the increase in the measurement distance, the shape and size of the feature points in the images captured by the camera will also change, resulting in a decrease in the matching accuracy of the feature points, which in turn further affects the measurement accuracy of the camera. Therefore, the measurement accuracy of the system decreases with the increase in the measurement distance.
By analyzing the comparison chart, it is evident that as the rotation angle increases, the cumulative error of the IMU-integrated solution shows a gradually increasing trend. In contrast, the attitude angles obtained from the data fusion solution do not exhibit significant cumulative error even at larger rotation angles. This phenomenon indicates that the employed data fusion algorithm effectively suppresses the cumulative error of the IMU. Therefore, the data fusion algorithm holds significant value in the field of high-precision attitude estimation, significantly enhancing the accuracy and reliability of IMU solutions. The results of this study provide empirical evidence for the effectiveness of the IMU data fusion algorithm, further validating its superior performance in complex dynamic environments.

5. Summary

As one of the advanced large-size six-degree-of-freedom measurement technologies, the laser tracking attitude dynamic measurement system has shown a broad application prospect in the field of high-end intelligent manufacturing. With the deepening of the transformation of intelligent manufacturing mode, it is especially important to study and optimize the real-time dynamic measurement technology.
In this paper, a multi-source information fusion method for dynamic attitude angle measurement based on a backpropagation algorithm is proposed, with laser tracking equipment as the base station. A mathematical model of laser tracking attitude dynamic measurement is established, the network structure and algorithm flow are designed, and attitude angle prediction of the target object is realized through model training; the optimal network structure is 6-11-3. To address the difficulty of selecting the target value during backpropagation data fusion, this paper adopts a neighborhood-search-based method to determine the optimal training target value. The MSE of the neighborhood search method decreased to 0.196, a 34% reduction compared to the IMU determination method, while R increased from 0.901 to 0.929. The neighborhood search method can effectively improve model performance when the true value cannot be determined.
Finally, an experimental platform is set up with the total station as the base station and the precision rotary table as the motion carrier, and the effectiveness of the proposed method is verified: using the backpropagation algorithm proposed in this paper for fusion, within a range of 8 m and an angle measurement range of 0–50°, the absolute value of the angle measurement error is less than 0.4°.
The research method in this paper provides a new idea for attitude dynamic measurement based on multi-source information fusion. The attitude measurement method based on the BP neural network in this study has achieved significant results; however, to further improve the performance and adaptability of the system, future work will explore introducing reinforcement learning into the optical tracking system to achieve more efficient and accurate attitude measurement. In addition, the data collection process suffers from insufficient data volume and unbalanced distribution, which affect the generalization ability of the model, so future research will also focus on optimizing the model structure. By introducing reinforcement learning, we expect to significantly improve the precision and speed of attitude measurement in the optical tracking system; in particular, when dealing with fast-moving and severely occluded targets, a reinforcement learning model should be able to adjust the tracking parameters more effectively and improve the robustness of the measurement. Through this future work, we believe the BP-neural-network-based attitude measurement method can be elevated to a new level and provide more powerful technical support for related applications.

Author Contributions

Conceptualization, Z.Z. (Ziyue Zhao) and Z.X.; methodology, Z.X. and Z.Z. (Ziyue Zhao); software, H.Z., Z.G. and Z.X.; validation, Z.X., W.Z. and Z.Z. (Ziyue Zhao); formal analysis, Z.X. and Z.Z. (Ziyue Zhao); investigation, H.Z.; resources, Z.Z. (Ziyue Zhao), Z.G. and Z.X.; data curation, Z.Z. (Zhongsheng Zhai) and H.Z.; writing—original draft preparation, Z.Z. (Ziyue Zhao) and Z.X.; writing—review and editing, Z.Z. (Ziyue Zhao) and Z.X.; visualization, W.Z.; supervision, X.L. and Z.Z. (Ziyue Zhao); project administration, X.L. and Z.Z. (Zhongsheng Zhai); funding acquisition, Z.Z. (Ziyue Zhao) and Z.Z. (Zhongsheng Zhai). All authors have read and agreed to the published version of the manuscript.

Funding

The National Key Research and Development Program “High Performance Manufacturing Technology and Major Equipment” Special Project—Large Component High Precision Assembly Force Position Collaborative Measurement and Traceability Technology (No. 2023YFB3407900).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

IMU: Inertial Measurement Unit
CCD: Charge-Coupled Device
ANN: Artificial Neural Network
RTLS: Real-Time Location System
T-MAC: Tracker-Machine Control
STS: Smart Tracker Sensor
PSD: Position Sensitive Detector
FKF: Fault-tolerant Kalman Filter
2DPSD: Two-Dimensional Position Sensitive Detector
SVD: Singular Value Decomposition
WAOI: Weighted Accelerated Orthogonal Iteration
OI: Orthogonal Iteration
MSE: Mean Square Error

References

1. Yan, H.; Yeh, H.; Mao, Q. High precision six-degree-of-freedom interferometer for test mass readout. Class. Quantum Gravity 2022, 39, 075024.
2. Lorenc, A.; Szarata, J.; Czuba, M. Real-Time Location System (RTLS) Based on the Bluetooth Technology for Internal Logistics. Sustainability 2023, 15, 4976.
3. Chang, D.; Hu, P.; Tan, J. Fused-like angles: Replacement for roll-pitch-yaw angles for a six-degree-of-freedom grating interferometer. Front. Inf. Technol. Electron. Eng. 2021, 22, 1677–1684.
4. Formentini, G.; Bouissiere, F.; Cuiller, C.; Dereux, P.E.; Favi, C. Conceptual Design for Assembly methodology formalization: Systems installation analysis and manufacturing information integration in the design and development of aircraft architectures. J. Ind. Inf. Integr. 2022, 26, 100327.
5. Li, G.; Zhang, F.; Fu, Y.; Wang, S. Kinematic calibration of serial robot using dual quaternions. Ind. Robot 2019, 46, 247–258.
6. Fernandez, S.R.; Olabi, A.; Gibaru, O. On-Line accurate 3D positioning solution for robotic large-scale assembly using a vision system and a 6Dof tracking unit. In Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 October 2018; pp. 682–688.
7. Ma, L.; Bristow, D.A.; Landers, R.G. Modeling and compensation of industrial robot kinematic errors using a smart track sensor. In Proceedings of the International Symposium on Flexible Automation, Kanazawa, Japan, 15–19 July 2018; pp. 209–216.
8. Zheng, F.; Liu, Z.; Long, F.; Fang, H.; Jia, P.; Xu, Z.; Zhao, Y.; Li, J.; Zhang, B.; Feng, Q. High-Precision method for simultaneously measuring the six-degree-of-freedom relative position and pose deformation of satellites. Opt. Express 2023, 31, 13195–13210.
9. Xiong, Z.; He, J.; Chang, L.; Liu, N.; Zhai, Z.; Lao, D.; Dong, D.; Zhou, W. On-Site accuracy evaluation of laser tracking attitude measurement system. Opt. Express 2023, 31, 37212–37228.
10. Lau, K.; Yang, Y.; Liu, Y.; Song, H. Dynamic performance evaluation of 6D laser tracker sensor. In Proceedings of the 10th Performance Metrics for Intelligent Systems Workshop, Baltimore, MD, USA, 28–30 September 2010; pp. 285–289.
11. Clark, R.; Wang, S.; Wen, H.; Markham, A.; Trigoni, N. VINet: Visual-inertial odometry as a sequence-to-sequence learning problem. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31.
12. Gao, Y.; Lin, J.; He, F.; Yin, S.; Zhu, J. A monocular vision and inclinometer combined system for 6DOF measurement of double shield TBM. Sens. Actuators A Phys. 2016, 249, 155–162.
13. Pan, N.; Zhao, Q.; Wu, Y.; Yuan, L.; Bai, J. High-Accuracy attitude angle measurement system using laser collimation. Appl. Opt. 2024, 63, F67–F73.
14. Shao, T. Key Technology for Laser Target Used in Shield Attitude Measurement. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2012.
15. Chen, P. Key Technology of Shield Attitude Measurement System. Master's Thesis, Huazhong University of Science and Technology, Wuhan, China, 2014.
16. Zhang, Q.; Qin, D.; Zhu, Q.; Chen, J. Research on the solution of shield machine digging position attitude based on neural network–Newton hybrid algorithm. Mech. Transm. 2021, 45, 24–29.
17. Li, W.; Guo, Y.; Zhao, C. Design of Laser Guiding Target for Shield. In Proceedings of the International Conference on Machinery, Materials and Computing Technology (ICMMCT 2017), Beijing, China, 25–26 March 2017; Volume 4.
18. Liu, G.; Li, J. Goniometric error analysis of total station oblique incidence angle cone prism. Eng. Surv. 2017, 45, 4.
19. Cao, S.; Cheng, Q.; Guo, Y.; Zhu, W.; Wang, H.; Ke, Y. Pose error compensation based on joint space division for 6-DOF robot manipulators. Precis. Eng. 2022, 74, 195–204.
20. Ma, D.; Li, J.; Feng, Q.; Zhao, Y.; Cui, J.; Wu, L. Method and system for simultaneously measuring six degrees of freedom motion errors of a rotary axis based on a semiconductor laser. Opt. Express 2023, 31, 24127–24141.
21. Zhang, H.; Xiong, Z.; Lao, D.; Zhou, W. Monocular vision measurement system based on EPNP algorithm. Infrared Laser Eng. 2019, 48, 190–195.
22. Yan, K.; Xiong, Z.; Lao, D.B.; Zhou, W.H.; Zhang, L.G.; Xia, Z.P.; Chen, T. Attitude measurement method based on 2DPSD and monocular vision. In Proceedings of the Applied Optics and Photonics China (2019), Beijing, China, 7–9 July 2019.
23. Zhang, L.; Xiong, Z.; Feng, W.; Zhou, W.; Dong, D. Laser tracking attitude angle measurement method based on vision and laser collimation. Chin. J. Sci. Instrum. 2022, 41, 30–36.
24. Wu, C.; Xiong, Z.; Xu, H.; Zhai, Z.; Zhou, W. Laser tracking attitude measurement based on improved visual orthogonal iterative method. Chin. J. Sci. Instrum. 2022, 43, 63–71.
25. Hunter, D.; Yu, H.; Pukish, M.S., III; Kolbusz, J.; Wilamowski, B.M. Selection of proper neural network sizes and architectures: A comparative study. IEEE Trans. Ind. Inform. 2012, 8, 228–240.
26. Wang, P.; Sun, C.; Zhang, Z. Linear pose estimation with a monocular vision system. Chin. J. Sci. Instrum. 2011, 32, 1126–1131.
27. He, F.; Lin, J.; Gao, Y. Optimized Pose Measurement System Combining Monocular Vision with Inclinometer Sensors. Acta Opt. Sin. 2016, 36, 190–197.
28. Gao, K. Research on 6-DOF Dynamic Measurement Method Based on the Laser Tracker. Master's Thesis, Tianjin University, Tianjin, China, 2018.
29. Yang, W.; Ma, K.; Chen, W.; Liu, C. Research on portable high-precision six-degree-of-freedom motion optical measurement system. J. Weapons Equip. Eng. 2022, 43, 214–218+251.
30. Lin, J.; Xin, R.; Shi, S.; Huang, Z.; Zhu, Z. An accurate 6-DOF dynamic measurement system with laser tracker for large-scale metrology. Measurement 2022, 204, 112052.
Figure 1. Composition of the laser tracking attitude dynamic measurement system.
Figure 2. Fusion method of visual and inertial combined measurements.
Figure 3. Training effect of different numbers of neurons.
Figure 4. Backpropagation algorithm training structure model.
Figure 5. Backpropagation-based network model training process flowchart.
Figure 6. Regression function for the learning process.
Figure 7. The smallest MSE for the learning process.
Figure 8. Error histogram for the learning process.
Figure 9. Neighborhood search flowchart.
Figure 10. Comparison of neighborhood search effects.
Figure 11. Experimental verification platform of the attitude dynamic measurement system based on the rotary table.
Figure 12. Comparison of fusion effect and single measurement unit results at 4 m.
Figure 13. Comparative analysis between the results of the IMU-integrated solution and the data fusion solution.
Table 1. Summary of the results of different outputs of the neural network training.

Net No. | Input Nodes | Output Nodes | Hidden Neurons | Source of the Training Target Value | MSE | R
1 | 6 | 3 | 11 | True value | 0.039 | 0.976
2 | 6 | 3 | 11 | IMU | 0.296 | 0.901
3 | 6 | 3 | 11 | Neighborhood search | 0.196 | 0.929
Table 2. Standard deviation of azimuth measurement results at different distances and angles.

Azimuth | Average Error (4 m) | Max Error (4 m) | Average Error (8 m) | Max Error (8 m)
15° | 0.037° | 0.120° | 0.046° | 0.105°
25° | 0.039° | 0.042° | 0.042° | 0.093°
35° | 0.057° | 0.114° | 0.065° | 0.136°
45° | 0.031° | 0.059° | 0.053° | 0.092°
55° | 0.027° | 0.060° | 0.101° | 0.147°
Table 3. Standard deviation of pitch angle measurements at different distances and angles.

Pitch | Average Error (4 m) | Max Error (4 m) | Average Error (8 m) | Max Error (8 m)
10° | 0.094° | 0.129° | 0.114° | 0.201°
20° | 0.143° | 0.189° | 0.174° | 0.231°
30° | 0.156° | 0.196° | 0.160° | 0.202°
40° | 0.145° | 0.197° | 0.219° | 0.263°
50° | 0.168° | 0.250° | 0.317° | 0.393°