Article

Accurate Calibration of a Large Field of View Camera with Coplanar Constraint for Large-Scale Specular Three-Dimensional Profile Measurement

1 School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230009, China
2 Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, Hefei University of Technology, Hefei 230009, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(7), 3464; https://doi.org/10.3390/s23073464
Submission received: 11 February 2023 / Revised: 22 March 2023 / Accepted: 22 March 2023 / Published: 25 March 2023

Abstract

In the vision-based inspection of specular or shiny surfaces, the camera pose with respect to a reference plane is often computed by analyzing images of calibration grids reflected in the surface. To obtain high precision in camera calibration, the calibration target should be large enough to cover the whole field of view (FOV). For a camera with a large FOV, a small target yields only a locally optimal solution, while a large target is difficult to manufacture, carry, and use. To solve this problem, an improved calibration method based on a coplanar constraint is proposed for cameras with a large FOV. Firstly, with the aid of an auxiliary plane mirror, the position of the calibration grid and the tilt angle of the plane mirror are changed several times to capture several groups of mirrored calibration images. Secondly, the initial parameters of the camera are calculated from each group of mirrored calibration images. Finally, with the coplanar constraint among the calibration grid positions, the external parameters between the camera and the reference plane are optimized via the Levenberg-Marquardt (LM) algorithm. The experimental results show that the proposed camera calibration method has good robustness and accuracy.

1. Introduction

In recent years, vision measurement systems have been widely used in industrial production due to their high precision, non-contact nature, and real-time capabilities [1,2]. At the same time, for special objects such as car windshields [3], painted body shells [4], polishing molds, stainless steel products, and other smooth-surfaced objects, the demand for three-dimensional measurement keeps growing. However, traditional three-dimensional reconstruction methods [5,6,7] are not well suited to shiny surfaces: the two-dimensional feature information in the camera image mainly comes from the surroundings of the shiny surface rather than from the surface itself. Because of this high reflectivity, a reference pattern is usually placed around the surface, and the reference pattern modulated by the surface enables its three-dimensional reconstruction [8,9,10,11,12]. In this case, the calibration accuracy between the reference plane and the camera directly affects the subsequent three-dimensional reconstruction accuracy of the shiny surface. Meanwhile, to measure a larger area of the surface, a camera with a large FOV is needed. However, for calibration over a large FOV, targets with large area and high precision are not only difficult to manufacture, but they are also inconvenient to carry and use.
For the calibration of catadioptric systems, many scholars have proposed methods that use an auxiliary plane mirror to estimate the external parameters between the camera and the reference object [13]. Kumar et al. [14] build linear equations from the orthogonality between the direction vectors connecting object points to their mirror images and the column vectors of the rotation matrix; each set of equations requires at least five calibration images. However, the computed position parameters deviate considerably from the true values, which is harmful to the subsequent parameter optimization. Takahashi et al. [15] obtain the unique solution of three P3P (perspective-three-point) problems from three mirror images based on the orthogonality constraint; however, if the reference object is smaller than a certain size, a wrong solution is obtained. The method proposed by Hesch et al. [16] also solves three P3P problems from three mirror images, but it can only select an optimal solution from 64 candidate solutions after reprojection error evaluation. Li et al. [17] directly estimate the camera rotation matrix by SVD decomposition of the sum of the rotation matrices and calculate the translation vector by solving overdetermined linear equations; however, the method is sensitive to noise and its stability is poor. Bergamasco et al. [18] proposed locating coplanar circles in images by means of a non-cooperative evolutionary game and refining the camera parameters by observing a set of coplanar circles, but the accuracy of this method is low.
For the calibration of cameras with a large FOV, scholars have considered combining several two-dimensional small targets into one large three-dimensional target. However, in the methods proposed in [19,20], not all intrinsic parameters of the camera can be obtained because a polynomial projection model is used. Meanwhile, in the methods proposed in [21,22], the relative positions between the small targets are subject to certain restrictions, which makes them difficult to apply in practice. Occlusion-resistant markers, such as ChArUco [23] or RUNE-Tag [24], are also robust options, but they provide fewer points for calibration.
To solve this problem, we use an LCD monitor as the reference plane to display the calibration grid. This not only avoids the difficulty of manufacturing, carrying, and using large targets, but the monitor can also serve as a carrier for projecting encoded patterns when measuring shiny surfaces, since it can display arbitrary patterns. Bergamasco et al. [25,26] also used a monitor displaying dense calibration grids for camera calibration, but their approach requires multiple frames, and when dense grid points are spread over the display, the curvature of the display surface greatly affects the accuracy and robustness of calibration. Therefore, this article calibrates with a smaller calibration grid on the monitor and covers the camera's FOV by moving the grid, which reduces the impact of display surface curvature to some extent and ultimately achieves high accuracy and robustness.
Firstly, by moving the calibration grid on the reference plane and changing the tilt angle of the plane mirror on the optical platform, multiple sets of mirrored calibration images are obtained, and the internal and external parameters of the camera are computed by Zhang's calibration method [27]. Secondly, the orthogonality-constraint calibration method and P3P algorithm proposed in [15,16] are used to obtain the external parameters from the reference plane to the camera. Finally, the LM algorithm [28] is used to obtain the optimal solution of the external parameters under the coplanar constraint of multiple calibration grid positions. At the same time, using the method of reconstructing a smooth mirror shape from a single image proposed in [12], three-dimensional measurement experiments are carried out to indirectly verify the accuracy of the proposed calibration method.

2. Geometry of Camera Pose Estimation

2.1. Plane Mirror Reflection Model

As shown in Figure 1, in the camera coordinate system C, the plane mirror can be described by the plane parameters Π = {n, d}, where the unit vector n denotes the normal of the mirror plane and d is the distance between the origin of C and the plane [17]. R_s2c and T_s2c are the rotation matrix and translation vector between the reference plane coordinate system and the camera coordinate system. P is a feature point on the reference plane.
Based on the reflection property of the mirror, the relationship between a point P and its mirror point P′ is given by:

$$\begin{bmatrix} P' \\ 1 \end{bmatrix} = M_1 \begin{bmatrix} P \\ 1 \end{bmatrix}, \qquad M_1 = \begin{bmatrix} I - 2nn^{\mathsf T} & 2dn \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix} \tag{1}$$

M_1 is the symmetric transformation induced by Π. Note that M_1 = M_1^{-1}, and I − 2nn^T is a Householder matrix. Let M_2 describe the rigid transformation that maps points from the reference frame to the camera frame:
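As a quick numerical sanity check of M_1 (a NumPy sketch; the mirror normal and distance below are arbitrary illustrative values), the matrix can be built from {n, d} and its involution M_1 = M_1^{-1} verified:

```python
import numpy as np

def mirror_transform(n, d):
    """4x4 symmetric transformation M_1 induced by the mirror plane {n, d}."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                        # unit normal of the mirror
    M1 = np.eye(4)
    M1[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)    # Householder part I - 2nn^T
    M1[:3, 3] = 2.0 * d * n                          # translation part 2dn
    return M1

# Arbitrary example: mirror tilted slightly off the optical axis, 500 mm away.
M1 = mirror_transform([0.1, -0.2, 1.0], 500.0)
```

Reflecting a point twice returns it to its original position, which is exactly the statement M_1 M_1 = I.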
$$M_2 = \begin{bmatrix} R_{s2c} & T_{s2c} \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix} \tag{2}$$

2.2. Mirror-Based Camera Projection Model

The perspective projection model is a camera imaging model widely used in computer vision [23]. The mapping between any three-dimensional point P_w in space and its corresponding pixel point v = [x, y, 1]^T in the image can be described as:
$$v = s\,A\,[R \;\; T]\,P_w \tag{3}$$
where s is a nonzero scale factor, A is the intrinsic parameter matrix of the camera, and R and T are the rotation matrix and translation vector between the camera coordinate system and the world coordinate system. Taking the mirror reflection into account and concatenating the camera model with the reflection, the mirror-based camera projection model becomes:
$$v = s\,A\,M_1 M_2 P_w \tag{4}$$
R and T can be written as:
$$\begin{cases} R = (I - 2nn^{\mathsf T})\,R_{s2c} \\ T = (I - 2nn^{\mathsf T})\,T_{s2c} + 2dn \end{cases} \tag{5}$$
According to Equation (5), at least three specular reflection images are needed to calculate R_s2c and T_s2c.

2.3. Computation of External Parameters

By changing the tilt angle of the plane mirror, we can obtain mirrored images at different mirror positions and compute the external parameters by the P3P algorithm [16]. Let j, j′ ∈ {1, 2, 3} with j ≠ j′, and let R_j denote the rotation matrix of the mirrored image at the j-th position of the plane mirror. Assume the unit vector m_jj′ is perpendicular to both n_j and n_j′; then we obtain:
$$R_j R_{j'}^{\mathsf T}\, m_{jj'} = (I - 2n_j n_j^{\mathsf T})(I - 2n_{j'} n_{j'}^{\mathsf T})\, m_{jj'} = m_{jj'} \tag{6}$$
R_j R_j′^T is a special orthogonal matrix; it has two complex conjugate eigenvalues and one eigenvalue equal to 1, so m_jj′ is the eigenvector of R_j R_j′^T corresponding to the eigenvalue 1. Using the cross products of these eigenvectors, the unit normal vectors of the three mirror positions can be calculated:
$$n_1 = \frac{m_{13} \times m_{12}}{\| m_{13} \times m_{12} \|}, \quad n_2 = \frac{m_{21} \times m_{23}}{\| m_{21} \times m_{23} \|}, \quad n_3 = \frac{m_{13} \times m_{23}}{\| m_{13} \times m_{23} \|} \tag{7}$$
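A numerical sketch of Equations (6) and (7) (the three mirror normals and the reference-to-camera rotation below are synthetic illustrative values): the fixed axis m_jj′ is recovered as the eigenvector of R_j R_j′^T for the eigenvalue 1, and a mirror normal follows from a cross product of two such axes, up to sign.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def householder(n):
    return np.eye(3) - 2.0 * np.outer(n, n)

def fixed_axis(Ra, Rb):
    """Unit eigenvector of Ra @ Rb.T for the eigenvalue 1 (Equation (6))."""
    w, V = np.linalg.eig(Ra @ Rb.T)
    k = np.argmin(np.abs(w - 1.0))
    return unit(np.real(V[:, k]))

# Synthetic mirror normals and an arbitrary reference-to-camera rotation
# (any rotation works: it cancels in the product R_j R_j'^T).
n1, n2, n3 = unit([0.0, 0.2, 1.0]), unit([0.2, 0.0, 1.0]), unit([-0.2, -0.2, 1.0])
R_s2c = np.eye(3)
R1, R2, R3 = (householder(n) @ R_s2c for n in (n1, n2, n3))

# Equation (7): each normal lies along the cross product of the two fixed
# axes perpendicular to it (recovered up to sign).
m12, m13, m23 = fixed_axis(R1, R2), fixed_axis(R1, R3), fixed_axis(R2, R3)
n1_est = unit(np.cross(m13, m12))
```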
According to Equation (5), R_s2c can be calculated. In the ideal noise-free case, the three rotation matrices computed from the three R_j would be equal; in practice they differ because of noise, so their average is taken [20]:
$$\bar{R} = \hat{R}\,\bigl(\hat{R}^{\mathsf T}\hat{R}\bigr)^{-1/2}, \quad \text{where} \quad \hat{R} = \frac{1}{3}\sum_{j=1}^{3} R_{s2c}^{\,j} \tag{8}$$
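Equation (8) projects the arithmetic mean of the rotation estimates back onto the rotation group; with the SVD R̂ = UΣV^T, it reduces to U V^T. A minimal NumPy sketch (the three input rotations are synthetic, slightly perturbed copies of one true rotation):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_rotation(Rs):
    """Equation (8): project the mean of rotation matrices onto SO(3);
    via SVD, R_hat (R_hat^T R_hat)^(-1/2) = U V^T."""
    R_hat = sum(Rs) / len(Rs)
    U, _, Vt = np.linalg.svd(R_hat)
    if np.linalg.det(U @ Vt) < 0:       # guard against an improper solution
        U[:, -1] *= -1.0
    return U @ Vt

# Three noisy copies of one rotation, standing in for the three estimates.
rng = np.random.default_rng(0)
R_true = Rotation.from_euler("xyz", [10, 20, 30], degrees=True).as_matrix()
Rs = [R_true @ Rotation.from_rotvec(0.01 * rng.standard_normal(3)).as_matrix()
      for _ in range(3)]
R_bar = average_rotation(Rs)
```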
The remaining parameters [T_s2c, d_1, d_2, d_3]^T can be solved from the linear system constructed from Equation (5). So far, all of the initial values of the pose parameters have been calculated.
$$\begin{bmatrix} I - 2n_1 n_1^{\mathsf T} & 2n_1 & \mathbf{0} & \mathbf{0} \\ I - 2n_2 n_2^{\mathsf T} & \mathbf{0} & 2n_2 & \mathbf{0} \\ I - 2n_3 n_3^{\mathsf T} & \mathbf{0} & \mathbf{0} & 2n_3 \end{bmatrix} \begin{bmatrix} T_{s2c} \\ d_1 \\ d_2 \\ d_3 \end{bmatrix} = \begin{bmatrix} T_1 \\ T_2 \\ T_3 \end{bmatrix} \tag{9}$$
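The stacked system of Equation (9) has nine equations in six unknowns and can be solved in the least-squares sense. A sketch with synthetic parameters (the normals, translation, and distances below are illustrative values, verified round-trip through Equation (5)):

```python
import numpy as np

def solve_translation(ns, Ts):
    """Stack Equation (5) for three mirror poses into the linear system of
    Equation (9) and solve for T_s2c and the distances d_1..d_3."""
    A = np.zeros((9, 6))
    b = np.concatenate(Ts)
    for j, n in enumerate(ns):
        A[3*j:3*j+3, :3] = np.eye(3) - 2.0 * np.outer(n, n)
        A[3*j:3*j+3, 3 + j] = 2.0 * n
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                  # T_s2c and (d_1, d_2, d_3)

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Synthetic check: build the mirrored translations T_j from known parameters
# via Equation (5), then recover those parameters.
ns = [unit([0, 0, 1]), unit([0.1, 0, 1]), unit([0, 0.1, 1])]
T_true = np.array([10.0, 20.0, 500.0])
d_true = np.array([400.0, 410.0, 420.0])
Ts = [(np.eye(3) - 2.0 * np.outer(n, n)) @ T_true + 2.0 * d * n
      for n, d in zip(ns, d_true)]
T_est, d_est = solve_translation(ns, Ts)
```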

2.4. Optimization with Coplanar Constraint

Linear solutions are usually sensitive to noise, so we minimize the reprojection error of the back-projection by adjusting R_s2c, T_s2c, n, and d under a coplanar constraint. As shown in Figure 2, we move the calibration grid on the LCD monitor W times and rotate the plane mirror M times for each grid position. The grid has N characteristic corners. Let R_ji denote the rotation matrix of the mirrored image of the j-th grid at the i-th plane mirror position; similarly, T_ji is the translation vector, n_ji the normal vector of the mirror, and d_ji the distance between the origin of the camera coordinate system and the plane mirror. R_s2c^j denotes the rotation matrix from the j-th checkerboard coordinate system to the camera coordinate system, and T_s2c^j the corresponding translation vector. P_k is the k-th feature point of the grid in the reference plane coordinate system, q_jik is the projection of the k-th feature point of the j-th grid at the i-th mirror position, and q̃_jik is the corresponding back-projection point. The back-projection can be written as:
$$\tilde{q}_{jik} = \lambda_{ji}\, A\,\bigl( R_{ji} P_k + T_{ji} \bigr) \tag{10}$$
where λ_ji is a nonzero scale factor, A is the intrinsic matrix of the camera, R_ji = (I − 2n_ji n_ji^T) R_s2c^j, and T_ji = (I − 2n_ji n_ji^T) T_s2c^j + 2d_ji n_ji.
Combined with Equation (10), the reprojection error function of the back-projection can be expressed as:
$$Err_{pro} = \sum_{j=1}^{W}\sum_{i=1}^{M}\sum_{k=1}^{N} \left\| q_{jik} - \tilde{q}_{jik}\!\left( R_{s2c}^{\,j}, T_{s2c}^{\,j}, n_{ji}, d_{ji}, P_k \right) \right\|^2 \tag{11}$$
Let P_jk denote the k-th feature point of the j-th checkerboard in the camera coordinate system:
$$P_{jk} = R_{s2c}^{\,j} P_k + T_{s2c}^{\,j} \tag{12}$$
Since the reference plane can be regarded as a standard plane, the coplanar constraint on the W grids should be added. Let P_err be the goodness-of-fit value returned by the plane fitting function [fitresult, P_err] = createFit(dx, dy, dz), whose input is the set of points P_jk. The smaller P_err is, the better the coplanarity. In addition, the R_s2c^j, j ∈ {1, …, W}, are theoretically equal. Let R_av denote the average rotation matrix [24]. The error R_err between R_s2c^j and R_av can be written as:
$$R_{err} = \sum_{j=1}^{W} \left\| R_{s2c}^{\,j} - R_{av} \right\|^2 \tag{13}$$
The smaller R_err is, the better the coplanarity. Likewise, the five plane mirror poses with zero tilt angle on the optical platform are also coplanar, so the corresponding normal vectors n_j1 are theoretically equal, and the average normal vector n_av can be calculated:
$$N_{err} = \sum_{j=1}^{W} \left\| n_{j1} - n_{av} \right\|^2 \tag{14}$$
Under ideal conditions, P_err = 0, R_err = 0, and N_err = 0. Therefore, the cost function consists of two major components: the reprojection error term Err_pro and the coplanar constraint term (P_err, R_err, N_err). We can establish the cost function with equality constraints:
$$\begin{cases} F = \min \displaystyle\sum_{j=1}^{W}\sum_{i=1}^{M}\sum_{k=1}^{N} \left\| q_{jik} - \tilde{q}_{jik}\!\left( R_{s2c}^{\,j}, T_{s2c}^{\,j}, n_{ji}, d_{ji}, P_k \right) \right\|^2 + Err_{cop} \\ Err_{cop} = P_{err} + R_{err} + N_{err} \end{cases} \tag{15}$$
where R_s2c^j, T_s2c^j, n_ji, and d_ji are the parameters to be optimized. The LM minimization can be carried out with the lsqnonlin() function in Matlab.
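In Python, the same kind of nonlinear least-squares refinement can be sketched with `scipy.optimize.least_squares` (the SciPy counterpart of Matlab's lsqnonlin). The residual below is a simplified stand-in for Equation (15), not the paper's full cost: it recovers a single mirror plane (normal parameterized by two tilt angles, plus distance d) from synthetic reflected points built with Equation (1).

```python
import numpy as np
from scipy.optimize import least_squares

def reflect(points, n, d):
    """Apply the mirror reflection of Equation (1) to a set of 3D points."""
    H = np.eye(3) - 2.0 * np.outer(n, n)
    return points @ H.T + 2.0 * d * n

def unpack(x):
    """Map (tilt_x, tilt_y, d) to a unit mirror normal and a distance."""
    a, b, d = x
    n = np.array([np.sin(a), np.sin(b), np.cos(a) * np.cos(b)])
    return n / np.linalg.norm(n), d

# Synthetic grid corners (mm) and a ground-truth mirror pose.
P = np.array([[0.0, 0.0, 0.0], [90.0, 0.0, 0.0],
              [0.0, 120.0, 0.0], [90.0, 120.0, 0.0]])
n_true = np.array([0.05, -0.03, 1.0]); n_true /= np.linalg.norm(n_true)
Q = reflect(P, n_true, 480.0)          # "observed" mirrored positions

def residual(x):
    n, d = unpack(x)
    return (reflect(P, n, d) - Q).ravel()

sol = least_squares(residual, x0=[0.0, 0.0, 500.0], method="lm")
n_est, d_est = unpack(sol.x)
```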

2.5. Three-Dimensional Measurement Principle of a Single Camera

In the monocular measurement system, we observe images of the grid pattern reflected in the unknown surface when the camera pose is known, and establish the reflection correspondence between three-dimensional reference points and two-dimensional image points. The depth of the reflection points on the surface is parameterized, and the surface shape is fitted by a polynomial. The measurement of the surface shape is thus converted into an optimization problem: minimizing the error between the reference points and the corresponding points obtained by back-projection through the surface [12]. The principle of the measurement system is shown in Figure 3. O is the origin of the camera coordinate frame, m is a feature point on the reference plane, p is a reflection point on the surface, and v is a projection point on the normalized image plane; p and v are called a reflection correspondence. l is the reflected ray at p, and i is the incident ray. R_s2c and T_s2c are the rotation matrix and translation vector from the reference plane coordinate frame to the camera coordinate frame. Obviously, v lies on the incident ray i. The relationship between p and v is given by:
$$p = s\,v \tag{16}$$
where s is the depth of the corresponding reflection point p. Correspondingly, the normal n to the surface at p can be written as:
$$n = (p_x,\, p_y,\, p_z)^{\mathsf T} \tag{17}$$
Suppose the coordinates of the normalized image points {v_1, v_2, …, v_m} and of the points {m_1, m_2, …, m_m} on the reference plane are known. The principle of back projection is shown in Figure 4. The three-dimensional reflection point on the mirror corresponding to the normalized image coordinates (x_i, y_i)^T can be expressed as p_i = s_i (x_i, y_i, 1)^T. The unit vector of the incident ray is i_i = (x_i, y_i, 1)^T / ‖(x_i, y_i, 1)^T‖, and the unit vector of the reflected ray is l_i = i_i − 2⟨ñ_i, i_i⟩ ñ_i, with ñ_i = n_i / ‖n_i‖. Let R_s2c = (r_1, r_2, r_3); r_3 gives the coordinates, in the camera frame, of the unit vector along the Z-axis of the reference plane frame, and T_s2c gives the coordinates of the origin of the reference plane frame in the camera frame. The reference plane can then be represented by the vector q = (r_3^T, −r_3^T T_s2c)^T, such that ⟨q, (m̂_i^T, 1)^T⟩ = 0 for any point m̂_i on the reference plane. Back-projection is achieved by computing the point m̂_i at which the reflected ray intersects the reference plane:
$$\hat{m}_i = p_i - \frac{\langle r_3, p_i \rangle - r_3^{\mathsf T} T_{s2c}}{\langle r_3, l_i \rangle}\, l_i \tag{18}$$
In Equation (18), m̂_i is a function of the depth s. We can build an optimization model that minimizes the error between the back-projected points and the real points on the reference plane; that is, a nonlinear least-squares problem is solved to estimate the depth of the mirror surface:
$$\min_{s}\; \sum_{i=1}^{m} \left\| \hat{m}_i(s) - m_i \right\|^2 \tag{19}$$
The minimization problem in (19) can also be solved iteratively with the LM algorithm, taking a plane as the initial surface.
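The back-projection of Equation (18) is a standard ray-plane intersection, and Equation (19) then fits the depths. A small sketch under simplifying assumptions (the reference plane is taken as z = 0 of the camera frame and the unknown surface is a flat mirror with known normal, whereas in the paper the normals are coupled to the depth parameterization):

```python
import numpy as np
from scipy.optimize import least_squares

r3 = np.array([0.0, 0.0, 1.0])        # reference-plane z-axis in camera frame
T_s2c = np.zeros(3)                   # reference-plane origin in camera frame
n_mirror = np.array([0.0, 0.0, 1.0])  # flat mirror normal (sign is irrelevant)

def back_project(s, v):
    """Equation (18): reflect each viewing ray at depth s_i and intersect
    the reflected ray with the reference plane."""
    p = s[:, None] * v                                 # p_i = s_i (x_i, y_i, 1)
    i = v / np.linalg.norm(v, axis=1, keepdims=True)   # incident directions
    l = i - 2.0 * (i @ n_mirror)[:, None] * n_mirror   # reflected directions
    t = (p @ r3 - r3 @ T_s2c) / (l @ r3)
    return p - t[:, None] * l

# Synthetic reflection correspondences for a mirror 500 mm from the camera.
v = np.array([[0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [-0.1, 0.1, 1.0]])
m_obs = back_project(np.full(3, 500.0), v)

# Equation (19): recover the depths by nonlinear least squares (LM).
res = least_squares(lambda s: (back_project(s, v) - m_obs).ravel(),
                    x0=np.full(3, 400.0), method="lm")
```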

3. Experimental Verification

3.1. Calibration Experiment

To verify the accuracy and universality of the proposed calibration method, a monocular vision measurement experiment was designed (Figure 5). The measurement system consists of an optical platform, a standard plane mirror, an LCD monitor, and a large-FOV camera. The focal length of the camera is 8 mm, the image resolution is 1280 pixel × 1024 pixel, and the pixel size is 4 μm. At a measurement distance of about 1000 mm, the FOV of the camera is 820 mm × 670 mm. The LCD is 19 inches in size with a pixel size of 0.2451 mm. To approximate a large-FOV measurement scene, we use a 90 mm × 120 mm checkerboard image as the calibration target, which is much smaller than the camera's FOV.
The LCD faces the standard plane mirror on the optical platform, and the grid image on the LCD is captured by the camera through the plane mirror. In the experiment, the grid image is moved on the LCD. Each grid position corresponds to three plane mirror positions: the position STZ flat on the optical platform, the position STX tilted around the X-axis, and the position STY tilted around the Y-axis. This not only ensures that the three mirror positions intersect each other, satisfying the orthogonality constraint, but also provides an obvious height difference, satisfying the conditions of Zhang's calibration method.
Figure 6 shows a set of mirrored grid images taken by the camera for calibration. The grid image was moved five times, and the five grid positions essentially filled the whole LCD screen so as to cover the whole FOV of the camera. Among the five pose transformations from the reference coordinate system to the camera coordinate system, the rotation matrices are theoretically equal, while the translation vectors change with the motion of the grid. Similarly, the plane mirrors at the STZ position corresponding to the five grid images are coplanar, so the corresponding mirror normal vectors are equal. This is the coplanar constraint described in Section 2.4.
Figure 7 shows the mirrored grid positions and the recovered real grid positions with and without the coplanar constraint. The mirrored grid positions corresponding to the STZ positions of the five plane mirrors are coplanar; fitting a plane to the five mirrored grid positions gives an average distance error (RMSE) of 0.14 mm, consistent with Figure 7a,c. However, the coplanarity of the five recovered grid positions differs markedly between the two cases.
As shown in Figure 7c,d, without the constraint the recovered checkerboard positions not only show poor coplanarity but also exhibit large relative offsets, which does not comply with the law of mirror reflection.
Figure 8a shows that the coplanarity of the five grids with the coplanar constraint performs well (RMSE = 0.11 mm), whereas without the constraint the coplanarity is poor (RMSE = 6.45 mm). Figure 8b shows the reprojection errors of the two methods after back projection. The average reprojection error of the method proposed in this paper is 0.1641 pixels, versus 0.1419 pixels for the method in [16].
The two methods are similar in terms of calibration accuracy, and the reprojection error without coplanar constraint is smaller. However, for the reference plane, the calibration result of this method is locally optimal. With coplanar constraints, the reprojection optimization model can unify the positions of five checkerboards and optimize the calibration results as a whole. Therefore, the calibration method in this paper sacrifices part of the calibration accuracy to improve the reliability of the algorithm. This calibration result is more suitable for practical measurement.

3.2. Measurement of the Step Surface

After the calibration of the reference plane, a three-dimensional measurement experiment can be carried out according to Section 2.5. As shown in Figure 9a,b, a standard plane mirror is placed on the optical platform, and the mirror feature points are computed at the STZ position. A standard gauge block is then placed between the optical table and the plane mirror, raising the mirror by 8.74 mm, and the mirror feature points are computed again at the higher position. The mirror surface is fitted to the feature points with createFit(), the point-to-plane distance formula gives the distance from each feature point to the fitted plane, and the distances are averaged. Comparing the result with the actual distance of 8.74 mm indirectly verifies the accuracy of the proposed calibration method. The mirror feature points of the first mirror position are shown in Figure 9c. The plane fitting model is:
$$f(x, y) = p_{00} + p_{10}x + p_{01}y \tag{20}$$
The fitted plane coefficients are p_00 = 421.4000, p_10 = 0.6167, p_01 = 0.0267, with RMSE = 0.02 mm. For an intuitive display, the first and second mirror positions are shown together in Figure 9d. The average distance between the two mirror positions is 8.68 mm; the deviation from the actual distance of 8.74 mm is 0.06 mm, a relative error of 0.69%.
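The plane fit of Equation (20) and the point-to-plane distances can be sketched with a plain least-squares solve standing in for Matlab's createFit (the feature points below are synthetic, generated on a known tilted plane):

```python
import numpy as np

def fit_plane(pts):
    """Least-squares fit of f(x, y) = p00 + p10*x + p01*y to 3D points."""
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs                                  # (p00, p10, p01)

def point_plane_distances(pts, coeffs):
    """Distance from each point to the plane z = p00 + p10*x + p01*y."""
    p00, p10, p01 = coeffs
    normal = np.array([p10, p01, -1.0])
    return np.abs(pts @ normal + p00) / np.linalg.norm(normal)

# Synthetic mirror feature points on a slightly tilted plane.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(50, 2))
z = 421.4 + 0.02 * xy[:, 0] - 0.01 * xy[:, 1]
pts = np.column_stack([xy, z])
coeffs = fit_plane(pts)
```

Running the same fit on the two mirror positions and averaging the point-to-plane distances reproduces the step-height evaluation described above.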

3.3. Measurement of the Spherical Mirror

In addition, we also measured a spherical mirror surface, following the same procedure as for the plane mirror. First, five sets of spherical feature points were measured, with 108 points per set, as the measurement data. The spherical mirror was then sampled on a coordinate measuring machine (model: MC850, resolution 1 μm).
The 202 detected points served as the reference data. Since the coordinate system of the coordinate measuring machine is not aligned with the camera coordinate system, CloudCompare software was used to register the measurement data to the reference data with the iterative closest point (ICP) method. The ICP registration of the measured and reference feature points is shown in Figure 10b.
A spherical equation was fitted to the reference data in CloudCompare. As shown in Figure 11a, the spherical equation is:
$$z = 459.621 + \sqrt{475.617^2 - (x - 0.506226)^2 - (y - 0.264729)^2} \tag{21}$$
with RMSE = 0.01 mm. The fitting error distribution is shown in Figure 11b. The spherical mirror radius obtained from Equation (21) is given in Table 1.
In the experiment, a cubic polynomial is used to initialize the spherical mirror surface because the mirror surface is treated as unknown. If the spherical equation were used directly to iteratively optimize the mirror surface, the measurement accuracy would be even better.

4. Conclusions

This paper proposes a calibration method based on coplanar constraints for a camera with a large FOV. The whole experiment process is divided into two parts. The first is the calibration of a large FOV camera and the reference plane. By adjusting the tilt angle of the planar mirror and moving the grid image on the LCD monitor, the camera acquires multiple sets of calibration images and then obtains the optimal solution of the external parameters between the camera and the LCD monitor with the coplanar constraint. The other is shiny surface reconstruction. When the pose of the reference plane is known, we can establish the dense reflection correspondence between normalized image plane two-dimensional feature points, reference plane three-dimensional feature points, and bright surface reflection points, and we can iteratively calculate the reflection point depth information. In terms of calibration accuracy, the calibration accuracy of the method proposed in this paper is similar to that of [16]. At the same time, in the step surface and spherical surface measurement experiments, the results also indirectly prove the accuracy of the proposed method. The universality of the method has important research significance for further application to the multi-camera measurement system in the future.

Author Contributions

Conceptualization, R.L.; Methodology, Z.Z.; Software, Z.W.; Validation, Z.Z.; Investigation, Z.Z.; Writing—original draft, Z.Z.; Writing—review & editing, Z.W.; Supervision, R.L.; Project administration, R.L.; Funding acquisition, R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) (Grant No. 51875164), as well as the National Key Research and Development Program of China (No. 2018YFB2003801).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Morandi, P.; Brémand, F.; Doumalin, P.; Germaneau, A.; Dupré, J. New Optical Scanning Tomography using a rotating slicing for time-resolved measurements of 3D full field displacements in structures. Opt. Lasers Eng. 2014, 58, 85–92. [Google Scholar] [CrossRef]
  2. Yu, L.; Pan, B. High-speed stereo-digital image correlation using a single color high-speed camera. Appl. Opt. 2018, 57, 31. [Google Scholar] [CrossRef] [PubMed]
  3. Xu, J.; Xi, N.; Zhang, C.; Shi, Q. Windshield shape inspection using structured light patterns from two diffuse planar light sources. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 11–15. [Google Scholar] [CrossRef]
  4. Balzer, J.; Höfer, S.; Beyerer, J. Multiview specular stereo reconstruction of large mirror surfaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2537–2544. [Google Scholar]
  5. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59. [Google Scholar] [CrossRef]
  6. Song, Z. High-speed 3D shape measurement with structured light methods: A review. Opt. Lasers Eng. 2018, 106, 119–131. [Google Scholar]
  7. Liu, Y.; Fu, Y.; Cai, X.; Zhong, K.; Guan, B. A novel high dynamic range 3D measurement method based on adaptive fringe projection technique. Opt. Lasers Eng. 2020, 128, 106004. [Google Scholar] [CrossRef]
  8. Shengpeng, F.U. Imaging Simulation Method for Specular Surface Measurement. J. Mech. Eng. 2015, 51, 17–24. [Google Scholar]
  9. Halstead, M.A.; Barsky, B.A.; Klein, S.A.; Mandell, R.B. Reconstructing curved surfaces from specular reflection patterns using spline surface fitting of normals. In Proceedings of the Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 1 August 1996. [Google Scholar]
  10. Tarini, M.; Lensch, H.P.; Goesele, M.; Seidel, H.-P. 3D acquisition of mirroring objects using striped patterns. Graph. Model. 2005, 67, 233–259. [Google Scholar] [CrossRef] [Green Version]
  11. Savarese, S.; Chen, M.; Perona, P. Local Shape from Mirror Reflections. Int. J. Comput. Vis. 2005, 64, 31–67. [Google Scholar] [CrossRef]
  12. Liu, M.; Hartley, R.; Salzmann, M. Mirror Surface Reconstruction from a Single Image. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 760–773. [Google Scholar] [CrossRef] [PubMed]
  13. Sturm, P.; Bonfort, T. How to Compute the Pose of an Object without a Direct View? Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  14. Kumar, R.K.; Ilie, A.; Frahm, J.M.; Pollefeys, M. Simple calibration of non-overlapping cameras with a mirror. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–7. [Google Scholar]
  15. Takahashi, K.; Nobuhara, S.; Matsuyama, T. Mirror-based Camera Pose Estimation Using an Orthogonality Constraint. IPSJ Trans. Comput. Vis. Appl. 2016, 8, 11–19.
  16. Hesch, J.A.; Mourikis, A.I.; Roumeliotis, S.I. Mirror-Based Extrinsic Camera Calibration. In Algorithmic Foundation of Robotics VIII; Springer Tracts Adv. Robot. 2009, 57, 285–299.
  17. Li, X.; Long, G.; Guo, P.; Liu, J.; Zhang, X.; Yu, Q. Accurate mirror-based camera pose estimation with explicit geometric meanings. Sci. China Technol. Sci. 2014, 57, 2504–2513.
  18. Bergamasco, F.; Cosmo, L.; Albarelli, A.; Torsello, A. Camera Calibration from Coplanar Circles. In Proceedings of the International Conference on Pattern Recognition (ICPR), Stockholm, Sweden, 6 December 2014; IEEE Computer Society; pp. 2137–2142.
  19. Li, W.; Chu, J.; Meng, H.; Wang, J.; Li, X.; Xing, X. Calibration method with separation patterns of a single camera. Proc. SPIE 2006, 6269, 303–304.
  20. Yang, N.; Huo, J.; Yang, M.; Wang, W.X. A calibration method of camera with large field-of-view based on spliced small targets. J. Optoelectron. Laser 2013, 24, 1569–1575.
  21. Sun, J.; Liu, Z.; Zhang, G. Camera Calibration Based on Flexible 3D Target. Acta Opt. Sin. 2009, 29, 3433–3439.
  22. Liu, Z.; Li, F.; Li, X.; Zhang, G. A novel and accurate calibration method for cameras with large field of view using combined small targets. Measurement 2015, 64, 1–16.
  23. An, G.H.; Lee, S.; Seo, M.-W.; Yun, K.; Cheong, W.-S.; Kang, S.-J. Charuco Board-Based Omnidirectional Camera Calibration Method. Electronics 2018, 7, 421.
  24. Bergamasco, F.; Albarelli, A.; Cosmo, L.; Rodolà, E.; Torsello, A. An Accurate and Robust Artificial Marker Based on Cyclic Codes. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2359.
  25. Bergamasco, F.; Cosmo, L.; Gasparetto, A.; Albarelli, A.; Torsello, A. Parameter-Free Lens Distortion Calibration of Central Cameras. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017.
  26. Bergamasco, F.; Albarelli, A.; Cosmo, L.; Torsello, A.; Rodola, E.; Cremers, D. Adopting an Unconstrained Ray Model in Light-Field Cameras for 3D Shape Reconstruction. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
  27. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  28. Moré, J.J. The Levenberg-Marquardt Algorithm: Implementation and Theory. In Numerical Analysis; Watson, G.A., Ed.; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 1978; Volume 630.
Figure 1. Calibration principle for the reference plane. The camera C observes a point P on the reference plane via the plane mirror Π. We denote the incident ray by i and the reflected ray by l; R_s2c and T_s2c denote the pose parameters between the reference plane and the camera; n denotes the normal of the mirror; and d is the distance between C and Π.
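As a minimal illustration of the reflection geometry described in the Figure 1 caption (a sketch under our own naming, not the authors' implementation): a point x reflected across a mirror plane with unit normal n and offset d follows the standard Householder form x' = x - 2(n·x - d)n.

```python
import numpy as np

def reflect_point(x, n, d):
    """Reflect point x across the mirror plane {p : n.p = d}.

    n must be a unit normal and d the plane offset, as in Figure 1.
    (The function name is illustrative, not from the paper.)
    """
    x = np.asarray(x, dtype=float)
    n = np.asarray(n, dtype=float)
    return x - 2.0 * (n @ x - d) * n

# A point on the mirror plane maps to itself.
n = np.array([0.0, 0.0, 1.0])
print(reflect_point([1.0, 2.0, 5.0], n, 5.0))  # -> [1. 2. 5.]

# A point 2 units in front of the plane lands 2 units behind it.
print(reflect_point([0.0, 0.0, 7.0], n, 5.0))  # -> [0. 0. 3.]
```

Reflecting twice with the same mirror returns the original point, which is a quick sanity check that n is unit length.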
Figure 2. Structure of the measurement system. The calibration grid is moved on the reference plane M times. A feature point P_k of the grid is reflected to the image point q_k via the mirror placed at positions STX, STY, and STZ, so three calibration images are obtained at each grid location. The resulting M × 3 calibration images are used to calculate the intrinsic matrix A as well as the pose parameters R_ji and T_ji. Finally, R_s2c, T_s2c, n, and d are calculated by Equations (8) and (9).
Figure 3. Principle of mirror surface measurement. A pinhole camera centered at O observes a mirror surface point p that reflects a reference point m to an image point v; we refer to m and v as a reflection correspondence. The reflected ray l is determined by m and p. We denote the incident ray for image point v by i and the normal at p by n; R_s2c and T_s2c denote the pose parameters between the reference plane and the camera.
Figure 4. Principle of back projection. The rotation matrix R_s2c can be written as (r1 r2 r3), where r3 is the unit vector along the Z-axis of the reference plane, and T_s2c denotes the translation between S and C. The reflected ray l intersects the reference plane at the point m̂, which satisfies r3ᵀ m̂ = d_s2c. We denote the distance between C and the reference plane by d_c2s.
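The back projection in Figure 4 reduces to intersecting the reflected ray with the plane r3ᵀm = d_s2c. A hedged NumPy sketch of that ray-plane intersection (names are ours, not from the paper):

```python
import numpy as np

def intersect_ray_plane(o, l, r3, d):
    """Intersect the ray o + t*l with the plane r3.m = d (Figure 4).

    r3 is the plane's unit normal (Z-axis of the reference plane)
    and d the offset d_s2c. Returns the intersection m_hat, or None
    when the ray is parallel to the plane. (Illustrative names.)
    """
    o = np.asarray(o, dtype=float)
    l = np.asarray(l, dtype=float)
    r3 = np.asarray(r3, dtype=float)
    denom = r3 @ l
    if abs(denom) < 1e-12:
        return None  # ray parallel to the reference plane
    t = (d - r3 @ o) / denom
    return o + t * l

# Camera at the origin, reference plane 800 mm away along Z.
m_hat = intersect_ray_plane([0.0, 0.0, 0.0], [0.0, 0.5, 1.0],
                            [0.0, 0.0, 1.0], 800.0)
print(m_hat)  # -> [  0. 400. 800.]
```

The returned point satisfies r3ᵀ m̂ = d_s2c by construction, matching the constraint stated in the caption.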
Figure 5. Experimental setup. The measurement system consists of an optical platform, a standard plane mirror, an LCD monitor, and a large-FOV camera.
Figure 6. Mirrored checkerboard images captured by the camera for calibration. The calibration grid is moved on the reference plane five times, and the plane mirror is placed at positions STX, STY, and STZ for each grid location, yielding 5 × 3 = 15 calibration images.
Figure 7. Calibration results with and without the coplanar constraint. In each panel, one set of markers denotes the real feature points on the reference plane and the other denotes the mirrored feature points. (a) Results with the coplanar constraint in view 1. (b) Results with the coplanar constraint in view 2. (c) Results without the coplanar constraint in view 1. (d) Results without the coplanar constraint in view 2. Yellow: grid location 1-STZ. Blue: grid location 2-STZ. Green: grid location 3-STZ. Red: grid location 4-STZ. Cyan: grid location 5-STZ.
Figure 8. Comparison of errors with and without the coplanar constraint. (a) Coplanarity error of the two methods: RMSE = 0.11 mm with constraints versus RMSE = 6.45 mm without. (b) Reprojection error of the two methods: RMS = 0.1641 pixel with constraints versus RMS = 0.1419 pixel without.
Figure 9. Restoration of plane mirror feature points at position 5. (a) Mirrored checkerboard image before placing the standard gauge block. (b) Mirrored checkerboard image after placing the standard gauge block. (c) Mirror feature points at position STZ. (d) Plane fitting of the two sets of mirror feature points; the distance between the two fitted planes is 8.68 mm.
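The gauge-block check in Figure 9d amounts to fitting a plane to each set of restored feature points and comparing the two plane offsets. A hedged sketch, assuming a standard centroid-plus-SVD least-squares plane fit (not the authors' code), with the reported 8.68 mm separation reproduced on synthetic data:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal n, offset d)
    such that n.p ~ d for the input points. (Illustrative name.)"""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    # The normal is the right singular vector belonging to the
    # smallest singular value of the centered data.
    _, _, vt = np.linalg.svd(P - c)
    n = vt[-1]
    return n, n @ c

# Synthetic stand-in for Figure 9d: two parallel point sets whose
# fitted planes should be 8.68 mm apart (the reported gauge offset).
xy = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), axis=-1).reshape(-1, 2)
lower = np.column_stack([xy, np.zeros(len(xy))])
upper = np.column_stack([xy, np.full(len(xy), 8.68)])

n1, d1 = fit_plane(lower)
n2, d2 = fit_plane(upper)
print(abs(d2 - d1))  # separation along the common normal, ~8.68
```

Comparing |d2 - d1| is valid here because the two fitted normals are parallel; for tilted planes one would measure point-to-plane distances instead.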
Figure 10. Restoration of spherical mirror feature points. (a) Mirrored checkerboard image of the spherical mirror at position 5. (b) Feature points and reference data.
Figure 11. Spherical equation fitting of the reference points. (a) Sphere fit: the fitted radius of the spherical mirror is 475.62 mm with RMSE = 0.01 mm, so the manufacturing error can be ignored. (b) Surface fitting error.
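The sphere fit in Figure 11 can be posed as a linear least-squares problem. A hedged sketch of a common algebraic sphere fit (not necessarily the authors' fitting routine), validated on synthetic points at the reported 475.62 mm radius:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit (illustrative, not the
    authors' routine). Rewrites |p - c|^2 = R^2 as the linear
    system p.p = 2c.p + k, then recovers R = sqrt(k + c.c)."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * P, np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius

# Synthetic points on a sphere with the radius reported in Figure 11.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 300.0]) + 475.62 * dirs

center, radius = fit_sphere(pts)
print(center, radius)  # center ~ [10, -5, 300], radius ~ 475.62
```

The linearization makes the fit a single least-squares solve; a nonlinear refinement (e.g. Levenberg-Marquardt on geometric distance, as in reference 28) could follow if higher accuracy were needed.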
Table 1. Fitting results of the measurement data and measurement errors relative to the reference data.

Point Data | Radius (mm) | RMSE (mm) | Error (%)
Data 1     | 477.94      | 0.02      | 0.49
Data 2     | 478.32      | 0.02      | 0.57
Data 3     | 472.48      | 0.02      | 0.66
Data 4     | 478.54      | 0.01      | 0.61
Data 5     | 472.37      | 0.02      | 0.68
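The Error (%) column is consistent with the relative deviation of each fitted radius from the 475.62 mm reference radius of Figure 11; a small script (our reading of the table, not a formula stated by the authors) reproduces the reported percentages:

```python
# Reference radius from the sphere fit in Figure 11 and the fitted
# radii from Table 1 (values copied from the paper).
r_ref = 475.62
fitted = {"Data 1": 477.94, "Data 2": 478.32, "Data 3": 472.48,
          "Data 4": 478.54, "Data 5": 472.37}

# Relative error in percent: |R_fit - R_ref| / R_ref * 100.
errors = {k: abs(r - r_ref) / r_ref * 100.0 for k, r in fitted.items()}
for name, err in errors.items():
    print(f"{name}: {err:.2f}%")  # matches the Error (%) column
```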
Lu, R.; Wang, Z.; Zou, Z. Accurate Calibration of a Large Field of View Camera with Coplanar Constraint for Large-Scale Specular Three-Dimensional Profile Measurement. Sensors 2023, 23, 3464. https://doi.org/10.3390/s23073464