1. Introduction
High-precision curved parts are widely used in aerospace, transport, and medical devices. Abnormal surface morphology in these parts may lead to failure [1]. High-precision 3D surface measurement technology is an important guarantee of machining accuracy. However, owing to the complexity of these surfaces, commonly used instruments such as coordinate measuring machines, articulated-arm measuring machines, and binocular vision systems cannot meet the demand for high-precision surface topography measurement [2,3,4,5].
Structured-light measurement technology, with its high efficiency and simple structure, is widely used for surface measurement in reverse engineering, industrial inspection, quality control, identification, and positioning [6,7,8,9]. A structured-light sensor mainly consists of a laser source and a camera. The laser projects a beam onto the surface of the measured object, forming a light band that is captured by the camera; the 3D coordinates of the surface are then calculated from the geometric distortion of the light band.
The camera is a core component of a structured-light measurement system: it senses the signal, and the 3D coordinates of the measured object are obtained from the camera model and its parameters. When measuring curved parts, the undulation of the surface often carries the light bar beyond the depth of field of the camera. To overcome this limitation, researchers often choose a Scheimpflug camera with a tilted optical axis. The camera is almost always modeled as a pinhole, an assumption that is reasonable under vertical-optical-axis conditions, where the object and image undergo no coordinate change along the optical axis, but that breaks down for Scheimpflug cameras. Because of the tilted optical axis, both the object and the image shift significantly along the optical-axis direction. The small-aperture approximation cannot accommodate this shift, so the object-image relationship of the Scheimpflug camera cannot be analyzed accurately. The linear-structured-light measurement model established on this basis is therefore inaccurate, introducing systematic errors and reducing the measurement accuracy of the linear-structured-light sensor. The structured-light measurement system of a Scheimpflug camera with a large aperture thus needs to be remodeled [10,11].
To address the inaccuracy of the line-structured-light measurement model applied to the Scheimpflug camera, some researchers continued to rely on the traditional pinhole assumption, but high accuracy could only be maintained at small tilt angles below 6° [12]. In 2001, Grossberg proposed a general camera model based on caustic surfaces for a variety of special cameras, including Scheimpflug cameras and camera arrays; however, because multiple rays must be traced, the caustic surface is difficult to obtain experimentally, which makes the model hard to solve [13]. In 2013, Antonin Miks analyzed the aberration properties of the Scheimpflug camera in line-structured-light measurements, but only a computational example was given and practical validation was lacking [14]. In 2016, Southern Methodist University proposed a pupil-centered ray-tracing model, which was complex and cumbersome and lacked experimental validation [15]. In 2017, Steger presented a new camera model based on the relationship between projective camera matrices, but its accuracy was too low [16]. He subsequently used a scanning camera model that considered lens aberration, but it applied only to telecentric lenses [17,18]. In 2019, Yin X Q proposed an aberration-based model, but it was difficult to apply to the reconstruction of 3D measurements [19]. In 2021, Zhang Y established a transformation model from the image coordinate system to the coordinate system of the measured object through the spatial projection relation [20]. In the same year, Alvarez H proposed a multi-camera sensing model but failed to achieve high accuracy [21]. In 2022, Hu Y proposed a simplified camera model, but only one-dimensional detection was performed [22,23].
All of the studies above are either modifications of the traditional small-aperture model or too complex for practical application in line-structured-light measurements. The systematic errors introduced into the camera model lead to large errors when solving for the 3D coordinates of the object from the image light bands, which reduces the accuracy of structured-light sensors and falls short of current high-precision requirements [19,24,25]. Therefore, an accurate and systematic structured-light measurement model is needed, one that goes beyond the small-aperture model while remaining practical.
In this paper, a structured-light measurement model is derived from the thick-lens imaging principle of the Scheimpflug camera to achieve accurate measurement. The imaging matrix is established according to the ideal optical imaging principle, the spatial position of the image point is deduced under the optical-axis tilt condition, and the relationship between the pixel coordinates and the image-plane coordinates is derived; combined with the light-plane equation, these yield an accurate and practical linear-structured-light measurement model. It is shown through simulation and experiment that the proposed model compensates for magnification and imaging position better than the conventional pinhole model, reduces the systematic error of the model, and achieves higher accuracy.
The rest of the paper is organized as follows. Section 2 builds a structured-light measurement model based on the Scheimpflug imaging principle for thick lenses. Section 3 establishes the measurement system, simulates the two models, and conducts system calibration experiments, measurement-block verification tests, and bearing measurement experiments to verify the feasibility of the model and its advantages over conventional models. Section 4 discusses the results, and Section 5 concludes the paper.
2. Scheimpflug Imaging System Model
2.1. Linear-Structured-Light Measurement Model
The linear-structured-light measurement model is shown in Figure 1. A laser plane is projected onto the measured surface by the laser source to create a light strip. A point P on the light strip is photographed by the camera and imaged on the CCD (charge-coupled device); the corresponding image point is p. The upper-left corner of the image is taken as the origin of the pixel coordinate system o-uv. The origin of the camera coordinate system O_c-X_cY_cZ_c is the optical center, and the optical axis is the Z_c-axis. The relationship between point P and point p is then given by (1):
λ [u, v, 1]^T = M [X_c, Y_c, Z_c, 1]^T.(1)
(X_c, Y_c, Z_c) are the 3D coordinates of point P in the camera coordinate system, λ is an indefinite factor, M is the imaging matrix, and (u, v) are the coordinates of point p in the pixel coordinate system. Model (1) shows that a linear relationship exists between the coordinates of the object point and those of the corresponding image point. This relationship is what allows the structured-light measurement model to relate the physical position of a point on the measured surface to its representation on the camera's image sensor, and it is the mathematical foundation for converting pixel coordinates into real-world coordinates within the structured-light system.
Because of the unknown scale factor λ in the formula, (1) determines only the ray passing through p, so a constraint is needed. In the linear-structured-light measurement system, P is substituted into the laser-plane equation to determine its 3D coordinates, as presented in (2):
A, B, and C in (2) are the coefficients of X_c, Y_c, and Z_c in the equation of the laser plane in the camera coordinate system; the three coefficients are collected in a vector n = [A, B, C]. Substituting the plane equation into (1) yields a closed-form solution for the indefinite factor λ.
Substituting λ into (1) gives the final measurement model in (3):
Model (3) presents the coordinates of the spatial curve in which the laser plane intersects the measured surface. According to (3), for a single measurement, the relationship between the object point and the corresponding image point is a rational function rather than a simple linear one.
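As a concrete illustration of (1)-(3), the following sketch back-projects a stripe pixel through a pinhole-style imaging matrix and intersects the ray with a laser plane written as A·X + B·Y + C·Z = 1. The intrinsic values and plane coefficients in the example are illustrative placeholders, not the paper's calibrated parameters.

```python
def triangulate(u, v, fx, fy, cx, cy, A, B, C):
    """Recover the camera-frame 3D point of laser-stripe pixel (u, v)."""
    # Direction of the back-projected ray, scaled so that Z = 1 (Eq. (1)).
    xn = (u - cx) / fx
    yn = (v - cy) / fy
    # Substituting X = xn*Z, Y = yn*Z into the plane A*X + B*Y + C*Z = 1
    # (Eq. (2)) resolves the indefinite scale, giving the rational form of (3).
    Z = 1.0 / (A * xn + B * yn + C)
    return xn * Z, yn * Z, Z
```

For example, with fx = fy = 1000, a principal point of (500, 500), and a plane with A = B = 0 and C = 0.001, the pixel (1500, 500) maps to (1000, 0, 1000): the Z coordinate varies as a rational function of the pixel position, exactly as (3) states.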
For a complete measurement, additional motion is required so that the laser scans the entire contour. Transforming the solved coordinate points from the camera coordinate system to the measurement coordinate system is a rigid-body transformation, as is each scanning motion. The line-structured-light measurement model is therefore (4).
R and t are the rotation matrix and translation vector from the camera coordinate system to the measurement coordinate system, and R_i and t_i are the rotation matrix and translation vector between the transformed object position and the initial position for the ith measurement. It can be seen from (4) that the measurement results are also affected by the scanning motion.
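A minimal sketch of the rigid-body chain in (4), assuming the per-scan motion is a pure rotation about the measurement z-axis. The composition order of the camera-to-measurement transform (R, t) and the scan motion is an assumption of this sketch; the paper's Eq. (4) fixes the exact form.

```python
import math

def rot_z(gamma):
    """Rotation matrix about the z-axis by angle gamma (radians)."""
    c, s = math.cos(gamma), math.sin(gamma)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transform(R, t, p):
    """Apply the rigid-body transform p -> R p + t."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def to_measurement_frame(p_cam, R_cm, t_cm, gamma_i):
    # Map a camera-frame point into the measurement frame, then undo the
    # i-th scan rotation about the measurement z-axis so that all scans
    # land in a common frame.
    p_m = transform(R_cm, t_cm, p_cam)
    return transform(rot_z(-gamma_i), [0.0, 0.0, 0.0], p_m)
```

The sketch makes the point of (4) explicit: any error in the scan angle gamma_i propagates directly into the reconstructed coordinates, which is why scanning-motion accuracy appears among the influencing factors below.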
According to the model, the main influencing factors of the linear-structured-light measurement system are (1) light stripe extraction, (2) camera model accuracy, (3) optical plane calibration accuracy, and (4) scanning motion accuracy. The main problem to be solved in this paper is the correction of the camera model.
2.2. Scheimpflug Camera Model
The camera model is a central part of structured-light measurement. All information is collected by the camera, and the light-plane calibration is likewise converted into 3D coordinates through this model. At present, the camera is almost always assumed to have a small aperture and is described by an imaging matrix based on the triangle-similarity principle, namely the pinhole model [26]. The small-aperture assumption is accurate when the variation along the optical axis is weak.
However, since the Scheimpflug camera is a tilted-axis camera, the image of the object is offset along the optical axis, causing a macroscopic change in the optical-axis coordinate. This change is beyond the pinhole model, so its accuracy decreases rapidly as the change grows. The imaging matrix M in (3) then carries a systematic error, the solution of the object-image relationship is inaccurate, and the measurement accuracy suffers. Therefore, in this section, a Scheimpflug model with lens thickness, based on the ideal optical imaging principle rather than the small-aperture assumption, is established to replace the pinhole model.
Suppose an image coordinate system is established in Figure 1 whose origin is the intersection of the optical axis with the CCD plane and whose coordinate axes are parallel to those of the CCD. The image coordinates are parallel to the pixel coordinates, and the untilted image coordinates of point p are presented in (5):
d_u and d_v are the pixel sizes in the u and v directions, and (u_0, v_0) are the pixel coordinates of the intersection of the optical axis with the image plane. Equation (5) converts image points from pixels to millimeters.
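The pixel-to-millimeter conversion of (5) can be sketched as follows; the 5 µm pixel pitch and image center used in the example are assumed values, not the device's.

```python
def pixel_to_image_mm(u, v, du, dv, u0, v0):
    # Eq. (5): shift the origin to the optical-axis intersection (u0, v0)
    # and scale by the pixel sizes du, dv to obtain millimetre coordinates
    # on the untilted image plane.
    return (u - u0) * du, (v - v0) * dv
```

For instance, with a 0.005 mm pixel pitch and center (640, 512), the pixel (740, 512) lies 0.5 mm to the right of the optical axis on the image plane.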
In terms of camera coordinates, the untilted coordinate of point p is presented in (6). z_p is the z-coordinate of point p in the camera coordinate system; the position of the image plane relative to the camera is fixed in (6).
In terms of camera coordinates, the origin of the image coordinate system is the intersection of the optical axis with the CCD plane. Considering the tilt angle, the relationship between a camera coordinate point and its corresponding image point is confirmed in (7).
φ and θ are the 2D tilt angles of the camera image plane. After the tilt in (7), the image plane and the lens constitute the basic structure of the Scheimpflug camera.
Combining (6) and (7) and simplifying, the relationship between a camera coordinate point and its corresponding CCD point is given in (8); the combined transformation is denoted by a single matrix for conciseness. Equation (8) thus describes the relationship between the coordinate points of an image and the corresponding image points in the Scheimpflug camera.
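The image-plane tilt in (7) can be sketched as two successive rotations of the untilted image point. Treating the 2D tilt (θ, φ) as rotations about the x- and y-axes in this order is an assumption of this sketch; the paper's Eq. (7) fixes the exact parameterization.

```python
import math

def tilt_image_point(x, y, z, theta, phi):
    # Rotate the untilted image point by theta about the x-axis, then by
    # phi about the y-axis, yielding its position on the tilted CCD plane.
    y1 = y * math.cos(theta) - z * math.sin(theta)
    z1 = y * math.sin(theta) + z * math.cos(theta)
    x2 = x * math.cos(phi) + z1 * math.sin(phi)
    z2 = -x * math.sin(phi) + z1 * math.cos(phi)
    return x2, y1, z2
```

At zero tilt the map reduces to the identity, recovering the untilted configuration of (6).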
2.3. Scheimpflug Camera Model Combined with Thick-Lens Imaging
An ideal optical system has three pairs of cardinal points: the focal points, the principal points, and the nodes, where the principal points and nodes usually coincide. In the pinhole model, the object-side and image-side principal points are assumed to coincide at the optical center, which is designated as the origin of the system; by the triangle-similarity property, the ratio of the z-coordinate of the image point to that of the object point is then the magnification. In practice, an interval exists between the two principal points; it is negligible under the small-aperture assumption but important in a thick-lens model, as shown in Figure 2.
The camera coordinate system is established with the image-side principal point as the origin, as in Figure 2. The optical axis points from the image side toward the object side in the positive z-direction, and the x- and y-axes are parallel to the x- and y-axes of the untilted image coordinate system. Assume the coordinates of the object-side principal point, the focal length, and the coordinates of the object-side and image-side foci are known, and let z and z' denote the z-coordinates of the object point and image point, respectively.
According to Newton's formula x·x' = f·f', we obtain (9). x and x' are the object and image distances measured from the respective focal points; f and f' are the object-side and image-side focal lengths, and in general f' = f. β is the lateral magnification, β = -f/x = -x'/f'. Under the thick-lens hypothesis, the image-point coordinates of the camera and the corresponding object-point coordinates are governed by the ideal optical imaging principle, as known from (9).
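A numeric sketch of Newton's formula in (9), for a lens in air (f' = f). The 50 mm focal length and 100 mm object distance are illustrative values, and the sign convention is one standard choice rather than necessarily the paper's.

```python
def image_position(x_obj, f):
    # Newton's formula x * x' = f * f' (Eq. (9)); x and x' are distances
    # from the object-side and image-side focal points, and in air f' = f.
    x_img = f * f / x_obj
    beta = -f / x_obj  # lateral magnification, beta = -f/x = -x'/f'
    return x_img, beta
```

An object 100 mm in front of the front focal point of a 50 mm lens images 25 mm behind the rear focal point at magnification -0.5, illustrating the inverse-proportional dependence that (10) formalizes.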
Combining the above relations, we obtain (10). Equation (10) indicates that, under the thick-lens hypothesis, the key quantity linking the coordinates of the image point to those of the object point is the z-coordinate, and the relationship between z and z' is an inverse-proportional function with an offset.
In matrix form, it is (11):
The common denominator in (10) appears as a coefficient of the matrix in (11). The imaging relation of the camera is thus interpreted in (11) as a matrix with undetermined, coordinate-dependent coefficients. Combining (8) and (11), the Scheimpflug camera model of the ideal optical system is obtained in (12), which defines the Scheimpflug camera imaging matrix.
In the imaging matrix of (12), the thick-lens parameters appear in the third row; that is, they directly affect the solution of the z-coordinate. In fact, because of the coordinate-dependent coefficients in (11), the thick-lens hypothesis affects all three coordinates.
Since the experimental objects are all bodies of revolution, the scanning motion is a pure rotation with no translation. Hence, in (13), R_i is the rotation by the angle γ_i of the object about the rotary axis between measurements, and t_i = 0.
Finally, the linear-structured-light model is obtained by substituting the Scheimpflug imaging matrix into (4). Equation (13) is the complete structured-light measurement model, mapping pixel-point coordinates to reconstructed object-point coordinates.
2.4. Linear-Structured-Light Model Calibration Step
In the linear-structured-light model (13), calibration is required to determine the camera imaging matrix, the plane vector n, the rotation matrices, and the translation vectors separately.
Considering that the internal camera parameters are difficult to obtain directly, the calibration method consists of three steps. Step 1: a checkerboard calibration board is placed at 15 positions over the working range. Step 2: initial values are taken from the nominal values of the device: the focal length of the lens, the CCD pixel sizes d_u and d_v, the CCD center (u_0, v_0), and the tilt angles θ and φ of the Scheimpflug adapter; the remaining parameter is initialized to 0 because it is theoretically very small. Step 3: the model parameters are refined with the L-M (Levenberg-Marquardt) optimization algorithm, with the optimization objective of minimizing the reprojection error.
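The cost of Step 3 can be sketched as a reprojection RMSE. Here `project` is a stand-in for the full Scheimpflug projection of (12) and (13); the plain pinhole projection used in the usage below is a hypothetical placeholder. An L-M optimizer would adjust `params` to minimize this value.

```python
import math

def reprojection_rmse(project, params, world_pts, pixel_pts):
    # RMS distance between the observed checkerboard-corner pixels and the
    # model's projection of the known 3D corner points; this is the scalar
    # cost that the L-M refinement of Step 3 minimises.
    s = 0.0
    for X, (u, v) in zip(world_pts, pixel_pts):
        up, vp = project(params, X)
        s += (up - u) ** 2 + (vp - v) ** 2
    return math.sqrt(s / len(world_pts))
```

The same cost function works for any camera model: only the `project` callable changes, which is what makes the comparison between the pinhole and the proposed thick-lens model straightforward in the experiments.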
The plane vector n is calibrated by projecting the light stripe onto the calibration-plate plane and computing the 3D coordinates of the stripe in the camera coordinate system from the camera calibration results. Since the stripe also lies on the laser plane, several such 3D stripes can be fitted to obtain the light-plane coefficients.
The rotation matrix R and translation vector t are optimized with a ring gauge. By rotating the measured standard ring gauge, the 3D coordinates of the light bar are reconstructed from the calibrated camera and light-plane parameters. The obtained coordinates are subjected to a rigid-body transformation to yield a reconstructed model of the ring gauge. The evaluation parameter is the difference between the radius of the reconstructed cylinder and the nominal radius of the ring gauge, and the optimization objects are the rotation matrix and translation vector. The L-M (Levenberg-Marquardt) algorithm is applied to obtain the optimized parameters and complete the calibration.
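The ring-gauge evaluation parameter can be sketched as follows. Centering a single cross-section by its centroid is a simplification of this sketch; the actual optimization fits a cylinder to the full reconstructed model.

```python
import math

def radius_error(points, nominal_r):
    # Radius of the reconstructed ring-gauge cross-section (mean distance to
    # the centroid) minus the nominal radius; driving this residual to zero
    # is the goal of the L-M optimisation of R and t.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    r = sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)
    return r - nominal_r
```

A correct (R, t) maps the reconstructed points onto a circle of the nominal radius, so the residual vanishes; a miscalibrated transform distorts the circle and the residual grows, which is exactly the signal the optimizer exploits.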
4. Discussion
The Scheimpflug camera thick-lens imaging model introduced in this study significantly enhances the precision of structured-light measurement. The model is rooted in the principles of geometrical optics and the thick-lens hypothesis, making it a more accurate representation of real-world measurement systems. The experimental outcomes show a 66% reduction in standard deviation compared with the pinhole model, validating the effectiveness of the proposed model in improving measurement accuracy.
The model’s derivation from fundamental optical principles allows it to more closely mimic the behavior of actual cameras, which typically have a non-negligible lens thickness. This approach addresses the limitations of traditional models that assume a negligible lens thickness, leading to inaccuracies in measurement. The practical utility of this model is evident as it can be potentially applied to other structured-light measurement systems where lens thickness plays a significant role. This adaptability makes the model a valuable tool for improving the performance of various imaging systems.
Moreover, the model’s practicability is highlighted by its simplicity and effectiveness. Unlike many complex models that require intricate calibration processes, this model offers a more straightforward approach to achieving high-precision measurements. This characteristic is particularly beneficial in industrial applications where ease of use and reliability are critical.
The results of this study open avenues for further research and development. Future work will concentrate on simplifying the calibration algorithm of the model. This endeavor aims to streamline the process of obtaining accurate camera and motion parameters, thereby enhancing the overall measurement accuracy. The focus will be on developing algorithms that are precise and easy to implement, making the technology more accessible and user-friendly.
5. Conclusions
In conclusion, the Scheimpflug camera thick-lens imaging model proposed in this study offers a significant advancement in structured-light measurement. The model’s foundation in geometrical optics and the thick-lens hypothesis provides a more realistic and accurate representation of camera systems, leading to improved measurement precision. The experimental results confirm a 66% reduction in the standard deviation compared to the traditional pinhole model, showcasing the model’s superior performance.
The proposed model’s practicality and adaptability make it a promising candidate for enhancing the performance of other structured-light measurement systems. Its straightforward calibration process and robust performance underpin its potential for widespread applications in various industrial settings.
Looking ahead, the focus will be on refining the model’s calibration algorithm to achieve even greater accuracy in camera and motion parameter determination. This will further enhance the model’s utility and precision, solidifying its role as a key technology in the field of structured-light measurement. The ongoing commitment to improving the model’s calibration and measurement capabilities will ensure its continued relevance and effectiveness in addressing the challenges of high-precision imaging systems.