Article

Linear-Structured-Light Measurement System Based on Scheimpflug Camera Thick-Lens Imaging

Center of Ultra-Precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin 150080, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(16), 5124; https://doi.org/10.3390/s24165124
Submission received: 5 July 2024 / Revised: 28 July 2024 / Accepted: 5 August 2024 / Published: 7 August 2024

Abstract

A thick-lens, structured-light measurement model is introduced to overcome a shortcoming of traditional models, which disregard the impact of lens thickness. This oversight can lead to inaccuracies in Scheimpflug camera calculations, causing systematic errors and diminished measurement precision. Based on geometrical optics, the model treats the camera as a thick lens, accounting for the locations of its principal points and the spatial shifts due to image-plane tilting. The model deduces the positional relationship of the thick lens with a tilted optical axis and establishes a linear-structured-light measurement model. Simulations confirm that the model can precisely calculate the 3D coordinates of objects from image light-strip data, markedly reducing systematic errors across the measurement range. Moreover, experimental results show that the refined sensor model offers enhanced accuracy and a lower standard deviation.

1. Introduction

High-precision curved parts are widely used in aerospace, transport, and medical devices. If the surface morphology of these parts is abnormal, it may lead to failure [1]. High-precision 3D surface measurement technology is an important guarantee of machining accuracy. However, due to the complexity of the surfaces of these parts, commonly used machines such as coordinate measuring machines, articulated arm measuring machines, and binocular vision systems are unable to meet the demand for high-precision surface topography measurement [2,3,4,5].
Structured-light measurement technology, with its high efficiency and simple structure, is widely used for surface measurements in reverse engineering, industrial inspection, quality control, identification, and positioning [6,7,8,9]. It mainly consists of a laser light source and a camera. The laser light source projects a light beam onto the surface of the measurement object, forming a light band that is captured by the camera. From the geometric distortion of the light band, the 3D coordinates of the surface are calculated.
The camera is a core component of a structured-light measurement system; it senses the signal, and the 3D coordinates of the measured object are obtained from the camera model and its parameters. When measuring curved parts, the undulation of the object surface often pushes the light bar beyond the depth of field of the camera. To address this limitation, researchers often choose a Scheimpflug camera with a tilted optical axis. The camera is usually modeled as a pinhole, which is a reasonable assumption under vertical-optical-axis conditions, because the object and image then undergo no coordinate change along the optical axis; it does not, however, work well for Scheimpflug cameras. Because of the tilted optical axis of the Scheimpflug camera, both the object and the image undergo a significant coordinate shift along the optical axis. The small-aperture approximation cannot accommodate this shift, so the relationship between the object and image cannot be analyzed accurately for a Scheimpflug camera. A linear-structured-light measurement model established on this basis is inaccurate, introducing systematic errors and degrading the measurement accuracy of the linear-structured-light sensor. Therefore, it is necessary to remodel the structured-light measurement system of a large-aperture Scheimpflug camera [10,11].
To address the inaccuracy of the line-structured-light measurement model applied to the Scheimpflug camera, some researchers continued to use the traditional pinhole assumption, but high accuracy could be maintained only at small tilt angles below 6° [12]. In 2001, Grossberg proposed a general camera model based on caustic surfaces for a variety of special cameras, including Scheimpflug cameras, camera arrays, etc. However, because multiple rays must be traced, the caustic surface is not easy to obtain experimentally, which makes this model difficult to solve [13]. In 2013, Antonin Miks analyzed the aberration properties of the Scheimpflug camera in line-structured-light measurements, but only a computational example was given and practical validation was lacking [14]. In 2016, researchers at Southern Methodist University proposed a pupil-centered ray-tracing model, which was complex, cumbersome, and lacked experimental validation [15]. In 2017, Steger presented a new camera model based on the relationship between projective camera matrices, but its accuracy was too low [16]. Subsequently, he used a scanning camera model considering lens aberration, but it could only be applied to telecentric lenses [17,18]. In 2019, Yin X Q proposed an aberration-based model, but it was difficult to apply to 3D measurement reconstruction [19]. In 2021, Zhang Y established a transformation model from the image coordinate system to the coordinate system of the measurement object through a spatial projection relation [20]. In the same year, Alvarez H proposed a multi-camera sensing model but failed to achieve high accuracy [21]. In 2022, Hu Y proposed a simplified camera model, but only one-dimensional detection was performed [22,23].
All the studies mentioned are either modifications of the traditional small-aperture model or overly complex, hindering their practical application in line-structured-light measurements. The systematic errors introduced by the camera model lead to large errors when solving for the 3D coordinates of the object from image light bands, which reduces the accuracy of structured-light sensors and falls short of current high-precision measurement demands [19,24,25]. Therefore, there is a need for an accurate and systematic structured-light measurement model that goes beyond the small-aperture model while retaining good practicality.
In this paper, a structured-light measurement model is derived from the Scheimpflug camera thick-lens imaging principle to achieve accurate measurement. The imaging matrix is established according to the ideal optical imaging principle, the spatial position of the image point is deduced under the optical-axis tilt condition, and the relationship between the pixel coordinates and the image-plane coordinates is derived; combined with the optical-plane equation, this yields an accurate and practical linear-structured-light measurement model. Simulation and experiment show that, compared with the conventional pinhole model, the proposed model better compensates for the magnification and imaging position, reduces the systematic error, and achieves higher accuracy.
The rest of the paper is organized as follows: In Section 2, a structured-light measurement model is built based on the Scheimpflug thick-lens imaging principle. Section 3 establishes the measurement system, simulates the two models, and conducts system calibration experiments, gauge-block verification tests, and bearing measurement experiments to verify the feasibility of the model and its advantages over the conventional model. Sections 4 and 5 present the discussion and conclusions.

2. Scheimpflug Imaging System Model

2.1. Linear-Structured-Light Measurement Model

The linear-structured-light measurement model is shown in Figure 1. A laser plane is projected onto the measured surface by the laser source to create a light strip. Point P is a point on the light strip; it is photographed by the camera and imaged on the CCD (charge-coupled device), where the corresponding image point is p. The upper-left corner of the image is taken as the origin of the pixel coordinate system $O\text{-}x_u y_u$. The origin of the camera coordinate system $O\text{-}x_c y_c z_c$ is the optical center, and the optical axis is the $z_c$-axis. The relationship between point p and point P is then given by (1).
$$\begin{bmatrix} x_c & y_c & z_c \end{bmatrix}^T = \lambda K \begin{bmatrix} x_u & y_u & 1 \end{bmatrix}^T \tag{1}$$
$[x_c\ y_c\ z_c]^T$ are the 3D coordinates of point P in the camera coordinate system, $\lambda$ is an indefinite factor, and K is the imaging matrix. $[x_u\ y_u\ 1]^T$ are the homogeneous coordinates of point p in the pixel coordinate system. Model (1) shows that a linear relationship exists between the coordinates of the object point and the corresponding image point. This relationship is fundamental to how the structured-light measurement model correlates the physical position of a point on the measured surface with its representation on the camera's image sensor; it is the mathematical foundation for converting pixel coordinates to real-world coordinates within the structured-light system.
Because of the unknown factor $\lambda$ in (1), only the ray passing through P can be determined, so a constraint condition is needed. In the linear-structured-light measurement system, P is substituted into the laser-plane equation to determine the 3D coordinates, as presented in (2):
$$A x_c + B y_c + C z_c = 1 \tag{2}$$
A, B, and C in (2) are the coefficients of $x_c$, $y_c$, and $z_c$ of the laser plane in the camera coordinate system; together they form the row vector $[A\ B\ C]$. Substituting (1) into the plane equation gives the indefinite factor as $\lambda = 1 / \left( [A\ B\ C]\, K\, [x_u\ y_u\ 1]^T \right)$.
Substituting λ into (1) gives the final measurement model in (3):
$$\begin{bmatrix} x_c & y_c & z_c \end{bmatrix}^T = \frac{K \begin{bmatrix} x_u & y_u & 1 \end{bmatrix}^T}{\begin{bmatrix} A & B & C \end{bmatrix} K \begin{bmatrix} x_u & y_u & 1 \end{bmatrix}^T} \tag{3}$$
Model (3) gives the coordinates of the spatial curve cut out by the laser plane. According to (3), for a single measurement, the relationship between the object point and the corresponding image point is a rational function rather than a simple linear one.
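As a concrete illustration, model (3) amounts to back-projecting the pixel through K and intersecting the resulting ray with the laser plane. The sketch below is a minimal implementation; the matrix K and plane coefficients are arbitrary placeholder values, not calibration results from this paper.

```python
import numpy as np

def reconstruct_point(K, plane, pixel):
    """Model (3): 3D camera-frame coordinates of a light-strip point.

    K     : 3x3 imaging matrix (from camera calibration)
    plane : (A, B, C) laser-plane coefficients, A*x + B*y + C*z = 1
    pixel : (x_u, y_u) pixel coordinates of the image point
    """
    p_h = np.array([pixel[0], pixel[1], 1.0])      # homogeneous pixel point
    ray = np.asarray(K, float) @ p_h               # back-projected ray direction
    lam = 1.0 / (np.asarray(plane, float) @ ray)   # factor fixed by the plane constraint
    return lam * ray                               # lies on both the ray and the laser plane

# Placeholder example: K = I and the laser plane z = 100 mm, i.e. (A, B, C) = (0, 0, 0.01)
P = reconstruct_point(np.eye(3), (0.0, 0.0, 0.01), (3.0, 4.0))
print(P)  # approximately [300, 400, 100]
```

The division by $[A\ B\ C]\,K\,[x_u\ y_u\ 1]^T$ is exactly the rational (non-linear) dependence noted above.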
For a complete measurement, additional movements are required to make the laser scan the entire contour. Transforming the solved coordinate points from the camera coordinate system to the measurement coordinate system is a rigid body transformation, as is each scanning motion. Therefore, the line-structured-light measurement model is (4).
$$\begin{bmatrix} x_m & y_m & z_m \end{bmatrix}^T = R_i \left( R_{c2m} \begin{bmatrix} x_c & y_c & z_c \end{bmatrix}^T + t_{c2m} \right) + t_i \tag{4}$$
$R_{c2m}$ and $t_{c2m}$ are the rotation matrix and translation vector from the camera coordinate system to the measurement coordinate system. $R_i$ and $t_i$ are the rotation matrix and translation vector relating the object position in the i-th measurement to its initial position. Equation (4) shows that the measurement results are also affected by the scanning motion.
According to the model, the main influencing factors of the linear-structured-light measurement system are (1) light stripe extraction, (2) camera model accuracy, (3) optical plane calibration accuracy, and (4) scanning motion accuracy. The main problem to be solved in this paper is the correction of the camera model.

2.2. Scheimpflug Camera Model

The camera model is a central part of structured-light measurement. All information is collected with the camera, and the optical-plane calibration is likewise converted into 3D coordinates through this model. Currently, the camera is usually assumed to have a small aperture and is described by an imaging matrix based on the triangle-similarity principle, namely the pinhole model [26]. The small-aperture assumption is accurate when the variation in $z_c$ is weak.
However, since the Scheimpflug camera is a tilted-axis camera, the image is offset along the optical axis, causing a macroscopic change in $z_c$. This change is beyond the pinhole model, so its accuracy decreases rapidly as the change in $z_c$ increases. As a result, K in (3) carries a systematic error, the object–image relationship is solved inaccurately, and the measurement accuracy suffers. Therefore, in this section, a Scheimpflug model with lens thickness, based on the ideal optical imaging principle rather than the small-aperture assumption, is established to replace the pinhole model.
Suppose the pixel point in Figure 1 is $p_u = [x_u\ y_u\ 1]^T$. An image coordinate system is defined whose origin is the intersection of the optical axis and the CCD plane and whose axes are parallel to those of the pixel coordinate system. The untilted image coordinates of point $p_u$ are presented in (5):
$$\begin{bmatrix} x_{im} \\ y_{im} \\ 1 \end{bmatrix} = \begin{bmatrix} d_x & 0 & -d_x x_0 \\ 0 & d_y & -d_y y_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_u \\ y_u \\ 1 \end{bmatrix} \tag{5}$$
$d_x$ and $d_y$ are the pixel sizes in the $x_u$ and $y_u$ directions, and $(x_0, y_0)$ are the pixel coordinates of the intersection between the optical axis and the image plane. Equation (5) converts the image point from pixels to millimeters.
In terms of camera coordinates, the untilted coordinates of point $p_u$ are presented in (6).
$$\begin{bmatrix} x_{im} \\ y_{im} \\ z_0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & z_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{im} \\ y_{im} \\ 1 \end{bmatrix} \tag{6}$$
$z_0$ is the z-coordinate of point $p_u$ in the camera coordinate system. Equation (6) fixes the position of the image plane relative to the camera.
In terms of camera coordinates, the point $[0\ 0\ z_0]^T$ is the intersection of the optical axis and the CCD plane. Considering the tilt angles, the relationship between the untilted coordinates and the tilted image-point coordinates is given in (7).
$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\varphi\sin\theta & -\cos\varphi\sin\theta & z_0\cos\varphi\sin\theta \\ 0 & \cos\varphi & \sin\varphi & -z_0\sin\varphi \\ \sin\theta & -\sin\varphi\cos\theta & \cos\varphi\cos\theta & z_0 - z_0\cos\varphi\cos\theta \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{im} \\ y_{im} \\ z_0 \\ 1 \end{bmatrix} \tag{7}$$
$\varphi$ and $\theta$ are the 2D tilt angles of the camera image plane. After the tilt described by (7), the image plane and the lens constitute the basic structure of the Scheimpflug camera.
Combining (5)–(7) and simplifying, the relationship between the camera-frame image point and the corresponding CCD pixel point is given in (8):
$$\begin{bmatrix} x_c & y_c & z_c & 1 \end{bmatrix}^T = A \begin{bmatrix} x_u & y_u & 1 \end{bmatrix}^T \tag{8}$$
Mark the matrix
$$A = \begin{bmatrix} d_x\cos\theta & d_y\sin\varphi\sin\theta & -(d_x x_0\cos\theta + d_y y_0\sin\varphi\sin\theta) \\ 0 & d_y\cos\varphi & -d_y y_0\cos\varphi \\ d_x\sin\theta & -d_y\sin\varphi\cos\theta & z_0 - d_x x_0\sin\theta + d_y y_0\sin\varphi\cos\theta \\ 0 & 0 & 1 \end{bmatrix}$$
Equation (8) thus describes the relationship between pixel points and the corresponding image points of the Scheimpflug camera in camera coordinates.
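The chain (5)–(7) can be composed numerically. The sketch below builds the matrix A of (8) from placeholder sensor parameters (not this paper's calibration values) and checks the untilted limit $\varphi = \theta = 0$, in which the principal-point pixel must map to $(0, 0, z_0)$.

```python
import numpy as np

def scheimpflug_A(dx, dy, x0, y0, z0, phi, theta):
    """Compose (5)-(7) into the 4x3 matrix A of (8): pixel -> image point."""
    # (5): pixel coordinates -> untilted image-plane coordinates (mm)
    M5 = np.array([[dx, 0.0, -dx * x0],
                   [0.0, dy, -dy * y0],
                   [0.0, 0.0, 1.0]])
    # (6): lift to homogeneous camera coordinates at the fixed depth z0
    M6 = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, z0],
                   [0.0, 0.0, 1.0]])
    # (7): tilt the image plane by (phi, theta) about the axis point (0, 0, z0)
    sp, cp = np.sin(phi), np.cos(phi)
    st, ct = np.sin(theta), np.cos(theta)
    M7 = np.array([[ct, sp * st, -cp * st, z0 * cp * st],
                   [0.0, cp, sp, -z0 * sp],
                   [st, -sp * ct, cp * ct, z0 - z0 * cp * ct],
                   [0.0, 0.0, 0.0, 1.0]])
    return M7 @ M6 @ M5

# Placeholder parameters: 4.4 um pixels, principal point (800, 600), z0 = -50 mm
A = scheimpflug_A(0.0044, 0.0044, 800, 600, -50.0, 0.0, 0.0)
# With no tilt, the principal-point pixel stays at (0, 0, z0, 1)
```

Because the tilt in (7) pivots about $(0, 0, z_0)$, the principal-point pixel maps to the same position even when $\varphi, \theta \neq 0$; only off-axis pixels are displaced along the optical axis.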

2.3. Thick-Lens Imaging Combined Scheimpflug Camera Model

An ideal optical system has three pairs of cardinal points, i.e., the focal points, the principal points, and the nodal points, where the principal points and nodal points usually coincide. In the pinhole model, the object-side and image-side principal points are assumed to coincide at the optical center, which is designated the origin of the system; hence, by triangle similarity, the ratio of the z-coordinate of the image point to that of the object point is the magnification. In practice, an interval exists between the two principal points; this interval is essential in a thick-lens model but negligible under the small-aperture assumption, as in Figure 2.
The camera coordinate system is established with the image-side principal point as the origin, as in Figure 2. The optical axis points from the image side towards the object side in the positive z-direction, and the x- and y-axes are parallel to those of the untilted image coordinate system. The object-side principal point is at $[0\ 0\ e]^T$, the focal length is $f$, the object-side focal point is at $[0\ 0\ e+f]^T$, the image-side focal point is at $[0\ 0\ -f]^T$, and the z-coordinates of the object point and image point are $z_c$ and $z_c'$, respectively.
According to Newton's formula $xx' = ff'$, we obtain (9):
$$x_c = \beta_m x_c', \qquad y_c = \beta_m y_c', \qquad (z_c - e - f)(z_c' + f) = -f^2 \tag{9}$$
$x = z_c - e - f$ and $x' = z_c' + f$ are the object and image distances measured from the respective focal points. $f$ and $f'$ are the object-side and image-side focal lengths; in general $f' = f$ in magnitude, so that Newton's formula takes the form $xx' = -f^2$ under the sign convention used here. $\beta_m$ is the transverse magnification, $\beta_m = f/x' = -x/f$. Under the thick-lens hypothesis, (9) shows that the image-point and object-point coordinates of the camera are governed by the ideal optical imaging principle.
Eliminating $\beta_m$ and solving for the object coordinates in terms of $z_c'$, we obtain (10).
$$x_c = \frac{f x_c'}{z_c' + f}, \qquad y_c = \frac{f y_c'}{z_c' + f}, \qquad z_c = \frac{(e+f)\, z_c' + ef}{z_c' + f} \tag{10}$$
Equation (10) indicates that, under the thick-lens hypothesis, the key parameter linking the image-point and object-point coordinates is $z_c'$, and the relationship between $z_c$ and $z_c'$ is an inverse-proportional function with an offset.
In matrix form, it is (11):
$$P_c = \begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = \lambda \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & e+f & ef \end{bmatrix} \begin{bmatrix} x_c' \\ y_c' \\ z_c' \\ 1 \end{bmatrix} = \lambda B P_c' \tag{11}$$
The common denominator in (10) is absorbed into the factor $\lambda$ of (11), so the imaging relation of the camera is expressed as a matrix with an undetermined coefficient that depends on the $z_c'$ coordinate. Combining (8) and (11), the Scheimpflug camera model of the ideal optical system is obtained as (12):
$K_f$, the Scheimpflug camera imaging matrix, is
$$K_f = \begin{bmatrix} f d_x\cos\theta & f d_y\sin\varphi\sin\theta & -f(d_x x_0\cos\theta + d_y y_0\sin\varphi\sin\theta) \\ 0 & f d_y\cos\varphi & -f d_y y_0\cos\varphi \\ (e+f)\, d_x\sin\theta & -(e+f)\, d_y\sin\varphi\cos\theta & (e+f)(z_0 - d_x x_0\sin\theta + d_y y_0\sin\varphi\cos\theta) + ef \end{bmatrix} \tag{12}$$
In the imaging matrix of (12), the thick-lens parameters appear in the third row, so they directly affect the solution of $z_c$; in fact, because of the common coefficient in (11), the thick-lens hypothesis influences all three coordinates.
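The scalar mapping (10) behind $K_f$ can be checked numerically. In the sketch below, the values of $f$, $e$, and the image point are arbitrary, chosen only to verify that the reconstructed object point satisfies the Newton relation used in (9).

```python
import numpy as np

def thick_lens_object_point(img_pt, f, e):
    """Invert the thick-lens imaging of (10): image point -> object point.

    img_pt : (x', y', z') image-side coordinates (image principal point at origin)
    f      : focal length (f' = f in magnitude, as in the text)
    e      : separation between the two principal points
    """
    x_i, y_i, z_i = img_pt
    denom = z_i + f
    return np.array([f * x_i / denom,
                     f * y_i / denom,
                     ((e + f) * z_i + e * f) / denom])

# Arbitrary example: f = 25 mm, principal-point gap e = 2 mm, image point at z' = -60 mm
f, e = 25.0, 2.0
obj = thick_lens_object_point((1.0, 2.0, -60.0), f, e)
# (z_c - e - f) * (z' + f) = -f^2 holds for the reconstructed z_c
```

Setting `e = 0` collapses the two principal points and recovers the thin-lens special case, which is the limit in which the pinhole model's similarity argument becomes exact.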
Since the experimental objects are all bodies of revolution, the scanning motion is a pure rotation with no translation.
So $t_i = 0$ in (13) and
$$R_i = \begin{bmatrix} \cos(i\Delta\omega) & -\sin(i\Delta\omega) & 0 \\ \sin(i\Delta\omega) & \cos(i\Delta\omega) & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
where $\Delta\omega$ is the angle of rotation of the object around the $z_m$-axis between measurements.
Finally, the linear-structured-light model is inferred by substituting K f into (4).
$$\begin{bmatrix} x_m & y_m & z_m \end{bmatrix}^T = R_i \left( R_{c2m} \frac{K_f \begin{bmatrix} x_u & y_u & 1 \end{bmatrix}^T}{\begin{bmatrix} A & B & C \end{bmatrix} K_f \begin{bmatrix} x_u & y_u & 1 \end{bmatrix}^T} + t_{c2m} \right) \tag{13}$$
Equation (13) is the complete structured-light measurement model, mapping pixel-point coordinates to reconstructed object-point coordinates.
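Putting the pieces together, the full model (13) is a few matrix operations per pixel. All numerical values in the sketch below are placeholders, not the calibration results of Section 3; with identity extrinsics and i = 0, the measurement-frame point coincides with the camera-frame point of model (3).

```python
import numpy as np

def measure_point(Kf, plane, R_c2m, t_c2m, i, d_omega, pixel):
    """Model (13): pixel point of the i-th scan -> measurement-frame 3D point."""
    p_h = np.array([pixel[0], pixel[1], 1.0])
    ray = Kf @ p_h
    P_cam = ray / (np.asarray(plane, float) @ ray)   # camera-frame point, model (3)
    a = i * d_omega                                  # object rotation about z_m (t_i = 0)
    R_i = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return R_i @ (R_c2m @ P_cam + t_c2m)

# Placeholder calibration: identity extrinsics, laser plane z = 100 mm, 1 degree per scan step
P_m = measure_point(np.eye(3), (0.0, 0.0, 0.01), np.eye(3), np.zeros(3),
                    i=0, d_omega=np.deg2rad(1.0), pixel=(3.0, 4.0))
```

Running the same pixel through successive scan indices i simply rotates the reconstructed point about the $z_m$-axis, which is how the full contour of a body of revolution is assembled.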

2.4. Linear-Structured-Light Model Calibration Step

In the linear-structured-light model (13), calibration is required to determine, in turn, the imaging matrix $K_f$, the plane vector $[A\ B\ C]$, the rotation matrix $R_{c2m}$, and the translation vector $t_{c2m}$.
Considering that the internal camera parameters are difficult to obtain directly, the $K_f$ calibration method consists of three steps. Step 1: a checkerboard calibration board is placed at 15 positions across the working range. Step 2: initial values are taken from the nominal values of the device: $f$ is the focal length of the lens, $d_x$ and $d_y$ are the CCD pixel sizes, $(x_0, y_0)$ is the center of the CCD, $\varphi$ and $\theta$ are the tilt angles of the Scheimpflug adapter, and $e$ is set to 0 because it is theoretically very small. Step 3: the model parameters are refined with the L-M (Levenberg–Marquardt) optimization algorithm, with the objective of minimizing the reprojection error.
The plane vector $[A\ B\ C]$ is calibrated by making the light stripe irradiate the calibration-plate plane and calculating the 3D coordinates of the stripe in the camera coordinate system from the camera calibration results. Since the stripe also lies on the laser plane, several different 3D stripes can be fitted to obtain the optical-plane coefficients.
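Since every reconstructed stripe point must satisfy $Ax_c + By_c + Cz_c = 1$, fitting the plane vector reduces to a linear least-squares problem. A minimal sketch with synthetic stripe points (the plane below is arbitrary, not the calibrated one):

```python
import numpy as np

def fit_laser_plane(points):
    """Least-squares fit of (A, B, C) in A*x + B*y + C*z = 1.

    points : (N, 3) array of 3D stripe points in the camera frame,
             gathered from several calibration-plate poses.
    """
    M = np.asarray(points, dtype=float)                      # each row is (x_c, y_c, z_c)
    coeffs, *_ = np.linalg.lstsq(M, np.ones(len(M)), rcond=None)
    return coeffs

# Synthetic stripe points on the plane z = 100 mm, i.e. (A, B, C) = (0, 0, 0.01)
pts = np.array([[x, y, 100.0] for x in (0.0, 10.0, 20.0) for y in (0.0, 5.0)])
abc = fit_laser_plane(pts)
```

The right-hand side of ones encodes the "= 1" normalization of plane equation (2), so no separate scale recovery is needed; stripes from at least two non-parallel plate poses are required for the system to have full rank.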
The rotation matrix $R_{c2m}$ and translation vector $t_{c2m}$ are optimized with a ring gauge. The 3D coordinates of the light bar are reconstructed from the calibrated camera and light-plane parameters while the standard ring gauge is rotated. The obtained coordinates are subjected to a rigid-body transformation to obtain a reconstructed model of the ring gauge. The evaluation parameter is the difference between the radius of the reconstructed cylinder and the nominal radius of the ring gauge, and the optimization variables are the rotation matrix and translation vector. The L-M (Levenberg–Marquardt) algorithm is applied to obtain the optimized parameters and complete the calibration.
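The ring-gauge step can be sketched with SciPy's Levenberg–Marquardt solver. Everything below is a hedged illustration on synthetic data: the residual (point distance from the $z_m$-axis minus the nominal radius) is a stand-in for the paper's radius-deviation criterion, and the pose parameterization by a rotation vector is our own choice, not specified in the paper.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

NOMINAL_R = 17.5  # nominal ring-gauge radius, mm

def radius_residuals(params, pts_cam):
    """Residual per point: distance of the transformed point from the z_m-axis
    minus the nominal radius. params = (3 rotation-vector, 3 translation)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    pts_m = pts_cam @ R.T + t                    # rigid transform: camera -> measurement frame
    return np.hypot(pts_m[:, 0], pts_m[:, 1]) - NOMINAL_R

# Synthetic ring-gauge circle in the measurement frame...
ang = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
ring = np.column_stack([NOMINAL_R * np.cos(ang),
                        NOMINAL_R * np.sin(ang),
                        np.zeros_like(ang)])
# ...viewed through a known camera pose that the optimizer should recover
true_rvec, true_t = np.array([0.1, -0.2, 0.3]), np.array([5.0, -3.0, 20.0])
R_true = Rotation.from_rotvec(true_rvec).as_matrix()
pts_cam = (ring - true_t) @ R_true               # inverse of the rigid transform above

fit = least_squares(radius_residuals, x0=np.zeros(6), args=(pts_cam,), method="lm")
```

Note that the radius criterion alone leaves some pose components unconstrained (any rotation about the $z_m$-axis or translation along it preserves the radius), which is one reason the calibration is evaluated jointly with the other system parameters.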

3. Simulation and Experiment

3.1. Simulation

In this section, we discuss the numerical simulations conducted to test the performance of the proposed thick-lens hypothesis in a Scheimpflug camera. The simulation solved the object-plane coordinates from the image-plane coordinates at $\beta_m = 0.5, 1, 2$, and the solutions were compared with the imaging-point positions calculated by the law of refraction. The simulation results in Figure 3 show that, at each magnification, the computational deviation of the small-aperture model grew as the image-plane coordinates moved away from the center point. In contrast, the accuracy of the proposed thick-lens model was maintained as the image-plane position changed.

3.2. Experimental Section

This experimental part includes four sections. Section 3.2.1 describes the experimental setup, including the equipment and structure of the line-structured-light measurement system. Section 3.2.2 describes the calibration of the line-structured-light measurement model, including the imaging matrix, light plane, rotation matrix, and translation vector. Section 3.2.3 describes measuring standard gauge blocks with the device, solving the measurement results with the two models, and comparing the mean values and standard deviations of the two sets of results. Section 3.2.4 describes measuring a real bearing with the device, demonstrating that the proposed model performs well in practical industrial measurement.

3.2.1. Experimental Structure

A Zoom 6000 lens from Navitar (Rochester, NY, USA) and a Scheimpflug adapter were used to build the Scheimpflug camera. The CCD was a GRAS-20S4M-C, 1/1.8 inch, with 1200 × 1600 pixels and a pixel size of 4.4 μm, as in Figure 4a. The lens tilt of the Scheimpflug adapter ranged from 0° to 17°.
A linear-structured-light system containing the Scheimpflug camera and a line laser was designed, as in Figure 4b.

3.2.2. Line-Structured-Light Measurement System Calibration Experiment

The calibration plates in Figure 4c were used to compare the two camera models. Experiments were performed on both the Scheimpflug camera and the regular camera at ×1.25 and ×1.5 magnifications, and the calibration results are shown in Table 1 and Table 2. In Table 1, each parameter corresponds to the matrix $K_f$ in (13), whereas in Table 2, each parameter corresponds to the matrix $K_h$ in (14).
$$K_h = \begin{bmatrix} \alpha & \gamma & x_0 \\ 0 & \beta & y_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{14}$$
Equation (14) is the imaging matrix of the pinhole model; it represents a special linear transformation, and the matrix is upper triangular. The units of the reprojection errors in the two tables differ because the calibration optimizations target different quantities: pixels for the traditional small-aperture model and μm for the proposed model. This makes it difficult to compare the quality of the two camera calibrations directly from these results.
The Scheimpflug camera with ×1.5 magnification in Table 1 was selected to form a measurement system with the laser, and the optical plane was calibrated; the result was $[A\ B\ C] = 10^{-4} \times [8.6089\ \ 3.6205\ \ 31.749]$.
The $R_{c2m}$ and $t_{c2m}$ of the measurement system were calibrated using a ring gauge with a radius of 17.500 mm. The system was calibrated against the reconstructed 3D model by rotating the ring gauge once on a rotary table, as shown in Figure 5a. The ring-gauge light-strip image is shown in Figure 5b, and the 3D reconstructed cylinder is shown in Figure 5c. The calibration results were as follows:
$$R_{c2m} = \begin{bmatrix} 0.41199 & -0.036022 & 0.91047 \\ -0.082152 & 0.99368 & 0.076488 \\ -0.90748 & -0.10631 & 0.40643 \end{bmatrix}, \qquad t_{c2m} = \begin{bmatrix} 10.023 \\ 300.06 \\ 11.231 \end{bmatrix}$$
The deviation of the reconstructed cylindrical radius from the standard ring gauge in Figure 5 is 1.2 μm.

3.2.3. Single-Measurement Model Validation Experiment

Three gauge blocks of 1 mm, 1.5 mm, and 2 mm were stacked to form a stepped block as the measurement object for validating the model. The Scheimpflug camera captured an image of the target, as shown in Figure 6. After image processing and 3D reconstruction, the position of the stripe in 3D space, i.e., a single measurement of the block, was obtained, and the measurements were compared with the true values. Since the target was moved randomly to different positions, enough results were obtained to evaluate the measurement system; however, absolute heights could not be obtained in this way, only the height ratios of the stepped blocks. The 3D results are shown in Figure 7 and listed in Table 3.
The standard values of the height ratios were 1, 1.5, and 2. The corresponding relative deviations were 0, 0.6%, and 0.8% with the proposed model, versus 0, 1.3%, and 3.2% with the pinhole model; that is, the proposed model reduced the relative deviations by 0.7% and 2.4% (the first item is necessarily 0 and not informative). The evaluation is reliable, as the gauge blocks' uncertainty was 0.02 µm. The experiments showed that the proposed thick-lens model achieved higher accuracy and a lower standard deviation than the traditional pinhole model.

3.2.4. Measurement Experiments of Real Objects

The radii of the inner rings of the two bearings were measured using the parameters obtained from the calibration; the measurement objects are shown in Figure 8, the 3D measurement results are shown in Figure 9, and the results of 10 measurements of the radii are shown in Figure 10.
The bearing radii calculated using the proposed model are 18.646 mm and 17.545 mm with standard deviations of 0.003 mm and 0.001 mm, respectively, while the results of the conventional model are 18.646 mm and 17.540 mm with standard deviations of 0.009 mm and 0.003 mm, respectively. The standard deviation of the proposed model is significantly lower. The effectiveness of the structured-light measurement system based on the thick-lens model is verified.

4. Discussion

The introduction of a Scheimpflug camera thick-lens imaging model in this study significantly enhances the precision of structured-light measurement. This model is rooted in the principles of geometrical optics and the thick-lens hypothesis, making it a more accurate representation of real-world measurement systems. The experimental outcomes have demonstrated a remarkable reduction in the standard deviation by 66% when compared to the pinhole model, validating the effectiveness of the proposed model in improving measurement accuracy.
The model’s derivation from fundamental optical principles allows it to more closely mimic the behavior of actual cameras, which typically have a non-negligible lens thickness. This approach addresses the limitations of traditional models that assume a negligible lens thickness, leading to inaccuracies in measurement. The practical utility of this model is evident as it can be potentially applied to other structured-light measurement systems where lens thickness plays a significant role. This adaptability makes the model a valuable tool for improving the performance of various imaging systems.
Moreover, the model’s practicability is highlighted by its simplicity and effectiveness. Unlike many complex models that require intricate calibration processes, this model offers a more straightforward approach to achieving high-precision measurements. This characteristic is particularly beneficial in industrial applications with critical ease of use and reliability.
The results of this study open avenues for further research and development. Future work will concentrate on simplifying the calibration algorithm of the model. This endeavor aims to streamline the process of obtaining accurate camera and motion parameters, thereby enhancing the overall measurement accuracy. The focus will be on developing algorithms that are precise and easy to implement, making the technology more accessible and user-friendly.

5. Conclusions

In conclusion, the Scheimpflug camera thick-lens imaging model proposed in this study offers a significant advancement in structured-light measurement. The model’s foundation in geometrical optics and the thick-lens hypothesis provides a more realistic and accurate representation of camera systems, leading to improved measurement precision. The experimental results confirm a 66% reduction in the standard deviation compared to the traditional pinhole model, showcasing the model’s superior performance.
The proposed model’s practicality and adaptability make it a promising candidate for enhancing the performance of other structured-light measurement systems. Its straightforward calibration process and robust performance underpin its potential for widespread applications in various industrial settings.
Looking ahead, the focus will be on refining the model’s calibration algorithm to achieve even greater accuracy in camera and motion parameter determination. This will further enhance the model’s utility and precision, solidifying its role as a key technology in the field of structured-light measurement. The ongoing commitment to improving the model’s calibration and measurement capabilities will ensure its continued relevance and effectiveness in addressing the challenges of high-precision imaging systems.

Author Contributions

Conceptualization, D.G.; methodology, D.G.; software, D.G.; validation, D.G., J.C., and Y.W.; data curation, J.C.; writing—original draft preparation, D.G.; writing—review and editing, D.G., J.C., and Y.W.; visualization; supervision; project administration, J.C.; funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 5207513).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors thank Wenya Liu and Xionglei Lin for their help in logic and language proofreading.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, J.; Ju, Y. Advanced technology analysis of LEAP-X engine and TECH-X engine. Aeronaut. Manuf. Technol. 2013, 56, 56–59. [Google Scholar]
  2. Tain, G.F.; Song, J.B. Robotic blade edge grinding method and simulation based on Roboguide. Manuf. Technol. Mach. Tool 2017, 656, 126–131. [Google Scholar]
  3. Jamontt, M.; Paweł, P. Smart coordinate measuring machine (CMM) inspection for turbine guide vanes with trend line and geometric profile tolerance. Appl. Sci. 2021, 11, 1658. [Google Scholar] [CrossRef]
  4. Hu, Y.; Huang, W.; Hu, P.-H.; Liu, W.-W.; Ye, B. Design and validation of a self-driven joint model for articulated arm coordinate measuring machines. Appl. Sci. 2019, 9, 3151. [Google Scholar] [CrossRef]
  5. Yang, S.; Gao, Y.; Liu, Z.; Zhang, G. A calibration method for binocular stereo vision sensor with short-baseline based on 3D flexible control field. Opt. Lasers Eng. 2020, 124, 105817. [Google Scholar] [CrossRef]
  6. Yu, C.; Chen, X.; Xi, J. Modeling and calibration of a novel one-mirror galvanometric laser scanner. Sensors 2017, 17, 164. [Google Scholar] [CrossRef] [PubMed]
  7. Idrobo-Pizo, G.A.; Motta, J.M.S.T.; Sampaio, R.C. A calibration method for a laser triangulation scanner mounted on a robot arm for surface mapping. Sensors 2019, 19, 1783. [Google Scholar] [CrossRef] [PubMed]
  8. Xiong, Z.; Li, Q.; Mao, Q.; Zou, Q. A 3D laser profiling system for rail surface defect detection. Sensors 2017, 17, 1791. [Google Scholar] [CrossRef] [PubMed]
  9. Altinisik, A.; Emre, B. A comparison of off-line laser scanning measurement capability with coordinate measuring machines. Measurement 2021, 168, 108228. [Google Scholar] [CrossRef]
  10. Scheimpflug, T. Improved Method and Apparatus for the Systematic Alteration or Distortion of Plane Pictures and Images by Means of Lenses and Mirrors for Photography and for Other Purposes. GB Patent 1196, 12 May 1904. [Google Scholar]
  11. Peng, J.; Wang, M.; Deng, D.; Liu, X.; Yin, Y.; Peng, X. Distortion correction for microscopic fringe projection system with Scheimpflug telecentric lens. Appl. Opt. 2015, 54, 10055–10062. [Google Scholar] [CrossRef]
  12. Nocerino, E.; Menna, F.; Remondino, F.; Beraldin, J.A.; Cournoyer, L.; Reain, G. Experiments on calibrating tilt-shift lenses for close-range photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 99–105. [Google Scholar] [CrossRef]
  13. Grossberg, M.D.; Nayar, S.K. A general imaging model and a method for finding its parameters. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; IEEE: Piscataway, NJ, USA, 2001. [Google Scholar]
  14. Miks, A.; Novak, J.; Novak, P. Analysis of imaging for laser triangulation sensors under Scheimpflug rule. Opt. Express 2013, 21, 18225–18235. [Google Scholar] [CrossRef] [PubMed]
  15. Sinharoy, I.; Rangarajan, P.; Christensen, M.P. Geometric model of image formation in Scheimpflug cameras. PeerJ Prepr. 2016, 4, e1887v1. [Google Scholar]
  16. Steger, C. A comprehensive and versatile camera model for cameras with tilt lenses. Int. J. Comput. Vis. 2017, 123, 121–159. [Google Scholar] [CrossRef]
  17. Steger, C.; Markus, U. A camera model for line-scan cameras with telecentric lenses. Int. J. Comput. Vis. 2021, 129, 80–99. [Google Scholar] [CrossRef]
  18. Steger, C.; Markus, U. A multi-view camera model for line-scan cameras with telecentric lenses. J. Math. Imaging Vis. 2022, 64, 105–130. [Google Scholar] [CrossRef]
  19. Yin, X.Q.; Tao, W.; Zheng, C.; Yang, H.W.; He, Q.Z.; Zhao, H. Analysis and simplification of lens distortion model for the Scheimpflug imaging system calibration. Opt. Commun. 2019, 430, 380–384. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Qian, Y.; Liu, S.; Wang, Y. Real-time line-structured light measurement system based on Scheimpflug principle. Opt. Rev. 2021, 28, 471–483. [Google Scholar] [CrossRef]
  21. Álvarez, H.; Alonso, M.; Sánchez, J.R.; Izaguirre, A. A multi camera and multi laser calibration method for 3D reconstruction of revolution parts. Sensors 2021, 21, 765. [Google Scholar] [CrossRef]
  22. Hu, Y.; Liang, Z.; Feng, S.; Yin, W.; Qian, J.; Chen, Q.; Zuo, C. Calibration and rectification of bi-telecentric lenses in Scheimpflug condition. Opt. Lasers Eng. 2022, 149, 106793. [Google Scholar] [CrossRef]
  23. Hu, Y.; Zheng, K.; Liang, Z.; Feng, S.; Zuo, C. Microscopic fringe projection profilometry systems in Scheimpflug condition and performance comparison. Surf. Topogr. Metrol. Prop. 2022, 10, 024004. [Google Scholar] [CrossRef]
  24. Legarda, A.; Izaguirre, A.; Arana, N.; Iturrospe, A. Comparison and error analysis of the standard pin-hole and Scheimpflug camera calibration models. In Proceedings of the 2013 IEEE 11th International Workshop of Electronics, Control, Measurement, Signals and Their Application to Mechatronics, Toulouse, France, 24–26 June 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–6. [Google Scholar]
  25. Yang, H.; Tao, W.; Liu, K.; Selami, Y.; Zhao, H. Irradiance distribution model for laser triangulation displacement sensor and parameter optimization. Opt. Eng. 2019, 58, 095106. [Google Scholar] [CrossRef]
  26. Shao, M.; Pan, W.; Wang, J. Improved sensors based on Scheimpflug conditions and multi-focal constraints. IEEE Access 2020, 8, 161276–161287. [Google Scholar] [CrossRef]
Figure 1. Linear-structured-light measurement model.
Figure 2. Scheimpflug imaging system model.
Figure 3. Simulation results for both models.
Figure 4. Experimental equipment: (a) Scheimpflug transfer interface. (b) Linear-structured-light measurement system. (c) Checkerboard calibration target.
Figure 5. Calibration ring gauge and calibration image: (a) Ring gauge. (b) Ring gauge light stripe image. (c) Three-dimensional reconstruction of the ring gauge.
Figure 6. Measurement images of measuring blocks under different ambient lights: (a) Dark. (b) Low light. (c) Bright.
Figure 7. Three-dimensional coordinates of the light strip corresponding to the results in Figure 6: (a) Corresponding to Figure 6a. (b) Corresponding to Figure 6b. (c) Corresponding to Figure 6c.
Figure 8. Bearing inner ring: (a) Big bearing inner ring. (b) Half of the small bearing inner ring.
Figure 9. Bearing reconstruction results: (a) Big bearing inner ring. (b) Half of the small bearing inner ring.
Figure 10. Bearing measurement results: (a) Large bearing results. (b) Small bearing results.
Table 1. Calibration results for the models in this paper.

Scheimpflug camera

| Matrix K_f parameter | Magnification 1.25 | Magnification 1.5 |
|---|---|---|
| f | 100.01 | 97.004 |
| x_0 | 807.88 | 796.21 |
| y_0 | 598.82 | 593.99 |
| φ | −0.34243 | 2.8718 |
| θ | 13.749 | 13.307 |
| e | −0.97048 | −2.4125 |
| z_0 | 256.31 | 249.47 |
| d_x | 0.0049 | 0.0049 |
| d_y | 0.0051 | 0.0048 |
| Reprojection error (μm) | 1.6 | 1.3 |

General camera model

| Matrix K_f parameter | Magnification 1.25 | Magnification 1.5 |
|---|---|---|
| f | 99.749 | 86.920 |
| x_0 | 801.78 | 799.64 |
| y_0 | 596.79 | 601.90 |
| φ | −5.7397 | 0.46466 |
| θ | 5.8230 | 0.43254 |
| e | −2.1826 | −2.6541 |
| z_0 | 248.92 | 250.85 |
| d_x | 0.0049 | 0.0049 |
| d_y | 0.0049 | 0.0049 |
| Reprojection error (μm) | 1.6 | 1.7 |
Table 2. Calibration results of the pinhole model.

Scheimpflug camera

| Matrix K parameter | Magnification 1.25 | Magnification 1.5 |
|---|---|---|
| α | 36,803 | 39,326 |
| β | 32,676 | 36,572 |
| γ | 1411.3 | 256.63 |
| x_0 | −4042.5 | −1818.3 |
| y_0 | −490.96 | 504.58 |
| Reprojection error (pixel) | 0.68 | 0.24 |

General camera model

| Matrix K parameter | Magnification 1.25 | Magnification 1.5 |
|---|---|---|
| α | 38,821 | 37,285 |
| β | 38,792 | 37,241 |
| γ | −37.424 | −18.943 |
| x_0 | 481.41 | 557.03 |
| y_0 | 649.06 | 587.87 |
| Reprojection error (pixel) | 0.21 | 0.18 |
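The Matrix K entries in Table 2 follow the conventional pinhole intrinsic-matrix layout (α and β the focal lengths in pixels, γ the skew, and (x_0, y_0) the principal point). A minimal sketch, assuming that layout, of how a camera-frame point maps to pixel coordinates with the general-camera values at magnification 1.5; the sample point is illustrative only, as the article does not list raw calibration points:

```python
import numpy as np

def project(K: np.ndarray, point_cam) -> np.ndarray:
    """Project a point (Xc, Yc, Zc) in camera coordinates to pixel coordinates."""
    p = K @ np.asarray(point_cam, dtype=float)
    return p[:2] / p[2]  # perspective divide

# Intrinsic matrix K = [[alpha, gamma, x0], [0, beta, y0], [0, 0, 1]],
# filled with the general-camera-model values at magnification 1.5 (Table 2).
K = np.array([[37285.0, -18.943, 557.03],
              [0.0, 37241.0, 587.87],
              [0.0, 0.0, 1.0]])

# An illustrative point one unit in front of the camera.
u, v = project(K, [0.001, 0.002, 1.0])
```

The reprojection errors reported in Table 2 would then be distances, in pixels, between such projected points and their detected image locations.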
Table 3. Gauge measurement results.

| Model | Standard Value | 1st Measure | 2nd Measure | 3rd Measure | Height Ratio |
|---|---|---|---|---|---|
| New model | 1.000 ± 0.0002 | 1.022 | 1.014 | 1.036 | 1 |
| New model | 1.500 ± 0.0002 | 1.521 | 1.510 | 1.548 | 1.491 |
| New model | 2.000 ± 0.0002 | 2.020 | 2.026 | 2.050 | 1.984 |
| Traditional model | 1.000 ± 0.0002 | 1.008 | 0.999 | 1.018 | 1 |
| Traditional model | 1.500 ± 0.0002 | 1.534 | 1.516 | 1.546 | 1.519 |
| Traditional model | 2.000 ± 0.0002 | 2.077 | 2.078 | 2.089 | 2.064 |
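The Height Ratio values in Table 3 are consistent with each nominal row's mean measured height divided by the mean of the 1.000 mm row (an assumption about how the column is derived, since the article does not define it here); a short Python check that reproduces the tabulated ratios:

```python
from statistics import mean

def height_ratios(rows):
    """rows: dict mapping nominal height (mm) -> list of measured heights (mm).
    Returns each row's mean height divided by the mean of the 1.000 mm row."""
    base = mean(rows[1.000])
    return {nominal: round(mean(vals) / base, 3) for nominal, vals in rows.items()}

# Measurements copied from Table 3.
new_model = {1.000: [1.022, 1.014, 1.036],
             1.500: [1.521, 1.510, 1.548],
             2.000: [2.020, 2.026, 2.050]}
traditional = {1.000: [1.008, 0.999, 1.018],
               1.500: [1.534, 1.516, 1.546],
               2.000: [2.077, 2.078, 2.089]}

print(height_ratios(new_model))    # 1.0, 1.491, 1.984 — matches Table 3
print(height_ratios(traditional))  # 1.0, 1.519, 2.064 — matches Table 3
```

Under this reading, the new model's ratios (1.491, 1.984) sit closer to the nominal 1.5 and 2.0 than the traditional model's (1.519, 2.064), matching the paper's claim of reduced systematic error.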
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
