Article

Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

1 School of Advanced Manufacturing Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 College of Mechanical Engineering, Chongqing University of Technology, Chongqing 400054, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(11), 2494; https://doi.org/10.3390/s17112494
Submission received: 30 September 2017 / Revised: 25 October 2017 / Accepted: 26 October 2017 / Published: 31 October 2017
(This article belongs to the Special Issue Imaging Depth Sensors—Sensors, Algorithms and Applications)

Abstract

Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on a transparent glass checkerboard and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, the cameras capture several images from the front and back of the glass checkerboard at different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be calculated directly. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and directly applying the standard projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.

1. Introduction

Multi-camera systems (MCSs) have many advantages over single cameras because they can cover wider and more complete fields of view (FOVs), which makes MCSs increasingly prevalent in industrial vision measurement [1,2], visual navigation [3,4], security monitoring [5], etc. With the advantages of flexibility, cost-effectiveness, and high precision, industrial vision measurement using MCSs has been widely studied in many applications, such as car body-in-white inspections [6] and deformation and displacement measurements [7,8]. The measurement of dimension, shape, and deformation is a dynamic process, so all cameras should observe parts of the surface from different viewpoints simultaneously (one-shot image acquisition) and dynamically reconstruct the 3D shape of the whole object. This kind of MCS includes multiple cameras sharing an overlapping FOV at different orientations; in special cases, these cameras point in opposite directions. Accurate calibration of multiple cameras is quite significant [9], since the calibration results determine the mapping relationship between world points and their image projections. Generally speaking, the overall performance of the MCS strongly depends on the accuracy of the camera calibration.
The calibration methods of the MCS are divided into two categories: metric calibration and self-calibration. The proposed method, which uses knowledge of the scene (here, a calibration pattern) to obtain stable and accurate calibration results, belongs to the metric rather than the self-calibration category. Several patterns have been proposed for multi-camera metric calibration, which can be grouped into three main categories: 3D calibration targets, planar targets, and one-dimensional targets.
A representative multi-camera calibration scenario begins by placing a calibration target in the overlapping FOV of the cameras to provide a projection relationship between image and world points [10]. The standard calibration target is a planar pattern, such as a checkerboard. Zhang [11] proposed a flexible technique to easily calibrate single cameras using a planar pattern, which has since been used in other types of multi-camera calibration [12,13,14,15]. Dong [12] presented an extrinsic calibration method for a non-overlapping camera network based on close-range photogrammetry; this method calibrates the extrinsic parameters of multiple cameras using a vast number of encoded targets pasted on the wall. Baker [13] used textures printed on either side of a board to calibrate dozens of cameras: one side of the board was printed with a set of lines, while the other side was printed with a set of boxes with one missing in the middle. Belden [14] described a refractive calibration procedure applied to MCSs for fluid experiments; it contributed to volumetric multi-camera fluid experiments, where it is desirable to avoid the tedious alignment of calibration grids in multiple locations and a premium is placed on accurately locating world points. The authors of [15] developed an MCS to measure the shape variations and the 3D displacement field of a sheet metal part during a single point incremental forming operation; their paper described the determination of the camera parameters using a planar calibration target. A planar calibration pattern limits the distribution of multiple cameras, especially when the cameras are distributed on both sides of the pattern, and an unevenly printed pattern can also affect the accuracy of camera calibration.
In addition, 1D and 3D calibration targets are also widely used in the calibration of MCSs, as shown in Figure 1 and Figure 2. One-dimensional target-based camera calibration was first proposed by Zhang [16]. Compared with conventional 2D or 3D target-based camera calibration, its main advantage is that it does not require the 2D or 3D coordinates of markers, which significantly simplifies the manufacturing of calibration targets. More importantly, having no self-occlusion problems, the 1D calibration target can be observed by all cameras in the MCS. The advantage is that all cameras are calibrated simultaneously, which avoids the accumulation of errors that occurs when multi-camera calibration is performed in steps or groups. This calibration method has been widely used in many MCSs [10,17,18,19,20,21]. However, 1D calibration targets also have disadvantages [21]. Firstly, in the construction of the 1D pattern, the exact collinearity of the points cannot be guaranteed, which undermines one of the main assumptions of the adopted model. Secondly, another source of error is the tool used to extract the points of the calibration pattern, which cannot match the accuracy of corner extraction in 2D target-based camera calibration.
A typical 3D calibration target is composed of multiple 1D patterns. Shen [10] presented a complete calibration methodology using a novel nonplanar target for the rapid calibration of inward-looking visual sensor networks; the target consists of a large central sphere with smaller spheres of different colors mounted on support rods. A flexible method for constructing a global calibration target with circular targets was proposed by Gong [2]. Shin [22] described a multi-camera calibration method using a three-axis frame and wand, in which the calibration parameters were estimated with the direct linear transform (DLT) method from the three-axis calibration frame. However, the main source of error in this kind of 3D calibration target is ellipse-fitting error caused by image noise and lighting conditions: the accuracy of center extraction cannot match that of corner extraction in a planar pattern [21]. This type of 3D calibration target thus shares the disadvantages of the 1D calibration target. Another kind of 3D calibration target consists of multiple planar patterns; examples are found in the works of Long [23] and Xu [24]. Unfortunately, in MCSs like the one in Figure 2, it is hard to use such a calibration target, because it cannot be viewed by all the cameras simultaneously. This 3D calibration target limits the distribution of multiple cameras, which restricts its application.
At the same time, we must realize that the scale of distributed camera networks grows dramatically when multiple cameras are spread over a wide geographical area. Because target-based calibration cannot meet the requirements of many such scenarios, the calibration of camera networks purely from the scene has been widely studied. For example, a distributed inference algorithm based on belief propagation has been developed to refine the initial estimate of camera networks [25]. Gemeiner [26] presented a practical method for video surveillance networks to calibrate multiple cameras that have mostly non-overlapping fields of view and might be tens of meters apart.
In order to overcome the shortcomings of the foregoing methods, and to guarantee high accuracy and convenience of multi-camera calibration, we propose a novel method for the global calibration of multiple cameras with overlapping FOVs. This method adopts a planar calibration target made of transparent glass, with the checkerboard pattern printed on one side of the glass panel. Multiple cameras are distributed on both sides of the calibration target and face it. This kind of configuration is useful for obtaining a one-shot 3D shape of the whole object. The cameras in front of the calibration target are not affected by refraction, and Zhang’s traditional method can be used to calibrate their intrinsic and extrinsic parameters. However, the cameras behind the calibration target are influenced by refraction, and the direct use of Zhang’s method would cause a calibration error: the refraction of the glass affects the accuracy of the multi-camera calibration results. The proposed method uses a refractive projection model and ray tracing to eliminate the refraction error. Since the 3D position accuracy of the corner points on the glass checkerboard is as high as 0.0015 mm, the proposed multi-camera calibration achieves both high accuracy and flexibility.
The remainder of this paper is organized as follows: Section 2 introduces the basic mathematical model of the MCS and ray tracing. In Section 3, the proposed calibration method for multiple cameras based on the refractive projection model and ray tracing is described. Section 4 presents a series of experiments (synthetic and real data) to verify the feasibility and accuracy of the proposed approach: a single-camera experiment verifies the feasibility of the refractive projection model and the calibration of the extrinsic camera parameters; a two-camera experiment confirms the accuracy of the calibration of the extrinsic parameters and the refractive index; and a four-camera experiment verifies the performance of our method in an actual MCS. Conclusions are drawn in Section 5.

2. Mathematical Model of Camera and Ray Tracing

This section briefly introduces the basic concepts used in the calibration of a single camera and of the MCS. Then, the refractive projection model and the ray tracing used in this paper are described.

2.1. Camera Model

An ideal camera is modeled by pinhole imaging. The relationship between a 3D point in world coordinates and the same point in camera coordinates is expressed by means of a rotation matrix and a translation vector, as shown in Equation (1).
$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} = R P + T = R \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + T \qquad (1)$$
The projection of the point in camera coordinates onto the image is $p = [u, v]^T$, which obeys Equation (2).
$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} X_C \\ Y_C \\ Z_C \end{bmatrix} \quad \text{with} \quad K = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (2)$$
where $P = [X, Y, Z]^T$ are the world coordinates of a 3D point, $[X_C, Y_C, Z_C]^T$ are the camera coordinates, and $[u, v]^T$ are the pixel image coordinates. $\lambda$ denotes a nonzero scale factor. $[u_0, v_0]^T$ denotes the principal point in the imaging plane, in pixels. $K$ is the intrinsic parameter matrix. $f_u$ and $f_v$ represent the focal length in pixels along the image axes $u$ and $v$, respectively, while $\gamma$ is the skew coefficient defining the angle between the $u$ and $v$ pixel axes. $R$ and $T$, called the extrinsic parameters, are the rotation matrix and the translation vector from the world coordinate frame to the camera coordinate frame, respectively.
However, the real camera projection is not ideal, particularly when a commercial lens is used. Therefore, the lens distortion has to be taken into account. Commonly, only a first-order or second-order model is adopted to correct the radial distortion [11,27,28]. More rigorously, both radial and tangential distortion should be corrected [9,29]. After considering the lens distortion, the new normalized point coordinates $[x_d, y_d]^T$ are defined as follows, where $[x, y]^T$ and $[x_d, y_d]^T$ are the distortion-free and the distorted normalized image coordinates, respectively.
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} X_C / Z_C \\ Y_C / Z_C \end{bmatrix} \qquad (3)$$
$$\begin{bmatrix} x_d \\ y_d \end{bmatrix} = \left( 1 + k_1 r^2 + k_2 r^4 + k_5 r^6 \right) \begin{bmatrix} x \\ y \end{bmatrix} + d_x \qquad (4)$$
$$d_x = \begin{bmatrix} 2 k_3 x y + k_4 \left( r^2 + 2 x^2 \right) \\ k_3 \left( r^2 + 2 y^2 \right) + 2 k_4 x y \end{bmatrix} \quad \text{with} \quad r^2 = x^2 + y^2 \qquad (5)$$
where $1 + k_1 r^2 + k_2 r^4 + k_5 r^6$ is the radial distortion factor and $d_x$ is the tangential distortion. $k_1$, $k_2$, $k_5$ are the coefficients of radial distortion, and $k_3$, $k_4$ are the coefficients of tangential distortion. We will use $D = [k_1, k_2, k_3, k_4, k_5]$ to represent the vector of distortion coefficients in this paper.
Based on the descriptions above, a 3D point P in the world coordinate system (WCS) can be projected to a 2D point p in the image coordinate system using the following projection equation:
$$p = f(K, R, T, D, P) \qquad (6)$$
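To make the projection pipeline concrete, the following minimal NumPy sketch composes Equations (1)–(6) into a single function. The function name project and the array conventions are our own illustration, not code from the paper.

```python
import numpy as np

def project(K, R, T, D, P):
    """Project a 3D world point P to pixel coordinates (Equations (1)-(6)).

    K : (3, 3) intrinsic matrix; R : (3, 3) rotation; T : (3,) translation;
    D = [k1, k2, k3, k4, k5] : distortion coefficients; P : (3,) world point.
    """
    Xc, Yc, Zc = R @ P + T                       # Equation (1): world -> camera
    x, y = Xc / Zc, Yc / Zc                      # Equation (3): normalized coords
    k1, k2, k3, k4, k5 = D
    r2 = x * x + y * y                           # r^2 = x^2 + y^2
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k5 * r2**3
    dx = np.array([2 * k3 * x * y + k4 * (r2 + 2 * x * x),
                   k3 * (r2 + 2 * y * y) + 2 * k4 * x * y])  # Equation (5)
    xd, yd = radial * np.array([x, y]) + dx      # Equation (4)
    u, v, w = K @ np.array([xd, yd, 1.0])        # Equation (2), up to scale
    return np.array([u / w, v / w])
```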

2.2. Refractive Projection and Ray Tracing

Usually, the aforementioned pinhole model can meet the requirements of camera calibration, but a transparent glass checkerboard is applied in our method. The direct application of the pinhole model between world and image points is erroneous, as the refraction of light must be considered in our MCS. As shown in Figure 3, if the rays emanating from the world points are drawn along the path taken in the glass (red line), they do not meet in a single point in the air. In this case, even an accurate pinhole model leads to error, which is exacerbated when a camera’s image plane is not parallel to the glass checkerboard. In Belden’s work [14], the image plane is angled relative to the interface, which results in a relatively high calibration error when the pinhole model is applied; the reprojection error using the pinhole model in the experiments of our paper is of the same order of magnitude. This non-negligible error prevents us from adopting the pinhole model for multi-camera calibration using a glass checkerboard.
In order to eliminate the calibration error caused by refraction, the refraction in the optical paths must be appropriately considered when projecting 3D points into cameras through glass. We need to find the intersection of each ray with the refractive interface between the glass and air, and project the intersection points to the pinhole camera. In this paper, we adopt the ray tracing method proposed by Mulsow, which initializes the intersection points using an alternating forward ray tracing (AFRT) method [30]. To calculate the intersection of a ray with a glass surface, we solve for the point that simultaneously satisfies the equation of the line and the plane equation defining the surface geometry of the glass. A point on a line along the direction of a given ray $\hat{r}$ is defined in Equation (7).
$$X(t) = X_0 + t \, \hat{r} \qquad (7)$$
The refractive indices of air and glass are $n_1$ and $n_2$, respectively ($n_2 > n_1$). Assume that the refractive index of air is equal to one, so the relative refractive index of the glass ($n = n_2 / n_1 = n_2$) is one of the optimized parameters. The thickness of the glass is $d$. $\hat{r}_i$ and $\hat{N}$ denote the direction of the incident ray and the normal vector of the refractive surface, respectively. The direction of the refracted ray $\hat{r}_t$ is given by:
$$\hat{r}_t = n \hat{r}_i + \left[ n \left( \hat{N} \cdot \hat{r}_i \right) - \sqrt{ 1 - n^2 \left[ 1 - \left( \hat{N} \cdot \hat{r}_i \right)^2 \right] } \right] \hat{N} \qquad (8)$$
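For reference, a small Python helper implementing this vector form of Snell’s law might look as follows. The sign convention (normal pointing toward the incident medium) and the name refract are our assumptions; the ratio passed as n is that of the incident to the transmitted index.

```python
import numpy as np

def refract(r_i, N, n):
    """Direction of the refracted ray (Equation (8)).

    r_i : unit incident direction; N : unit surface normal, assumed to point
    toward the incident medium (so N . r_i < 0); n : ratio of the incident to
    the transmitted refractive index (n1/n2 for an air-to-glass crossing).
    Returns None on total internal reflection.
    """
    cos_i = -np.dot(N, r_i)                 # cosine of the incidence angle
    disc = 1.0 - n**2 * (1.0 - cos_i**2)    # radicand of Equation (8)
    if disc < 0.0:
        return None                         # total internal reflection
    return n * r_i + (n * cos_i - np.sqrt(disc)) * N
```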
Figure 4 depicts the ray tracing algorithm used to find the intersection of rays with a planar glass surface. The procedure of ray tracing is described as follows:
  • The procedure is initialized with $k = 1$. $r_1^k$ denotes the direction of the line connecting the camera center $X_C$ and the 3D point $P$; the intersection of $r_1^k$ with $S_1$ is the point $X_{i1}^k$.
  • With $n_1$ and $n_2$ known, the refracted ray $r_2^k$ is found using Equation (8); it intersects $S_2$ at the point $X_{i2}^k$.
  • A ray $r_2'^k$ is cast from $P$ toward interface $S_1$, parallel to $r_2^k$ but opposite in direction.
  • The ray $r_2'^k$ intersects $S_1$ at the point $X_{i1}'^k$.
  • If the distance $\Delta X_{i1}^k = \left| X_{i1}^k - X_{i1}'^k \right|$ between $X_{i1}^k$ and $X_{i1}'^k$ is larger than the tolerance, the above steps are repeated, with the point $\frac{1}{2}\left( X_{i1}^k + X_{i1}'^k \right)$ taken as $X_{i1}^{k+1}$. Otherwise, the optimal intersection of $r_1^k$ with $S_1$ has been found.
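A compact sketch of this iteration is given below, reusing the refract helper above. The names intersect_plane and afrt_intersection are ours, and the glass surfaces are assumed to be parallel planes sharing a unit normal that points toward the camera; this is an illustrative rendering of the AFRT idea, not the authors’ implementation.

```python
import numpy as np

def intersect_plane(X0, r, Q0, N):
    """Intersection of the line X(t) = X0 + t*r (Equation (7)) with the plane
    through the point Q0 with unit normal N."""
    t = np.dot(N, Q0 - X0) / np.dot(N, r)
    return X0 + t * r

def afrt_intersection(Xc, P, Q1, N, n, tol=1e-9, max_iter=50):
    """Alternating forward ray tracing (Figure 4): locate the point where the
    ray from the camera center Xc to the world point P crosses the near glass
    surface S1.  Q1 is a point on S1, N the unit normal pointing toward the
    camera, n the relative refractive index of the glass (n2/n1)."""
    aim = P                                     # step 1: aim the camera ray at P
    for _ in range(max_iter):
        r1 = aim - Xc
        r1 = r1 / np.linalg.norm(r1)
        Xi1 = intersect_plane(Xc, r1, Q1, N)    # camera ray meets S1 at Xi1
        r2 = refract(r1, N, 1.0 / n)            # step 2: refract into the glass
        Xi1p = intersect_plane(P, -r2, Q1, N)   # steps 3-4: back-cast from P,
                                                # parallel to r2, onto S1
        if np.linalg.norm(Xi1 - Xi1p) < tol:    # step 5: converged
            return Xi1
        aim = 0.5 * (Xi1 + Xi1p)                # bisect and re-aim the ray
    return Xi1
```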
In addition to the intrinsic and extrinsic parameters of the camera, the main parameters affecting the projective ray are the refractive index and the thickness of the glass. The thickness of the glass can be accurately measured, but because the refractive index differs between glass types, it is treated as unknown. Following the above discussion, Equation (6) can be extended to Equation (9) with refraction.
$$p_r = f_r(K, D, R, T, P, n) \qquad (9)$$
where $p_r$ and $f_r$ represent the image point generated by refraction and the refractive projection model, respectively.

3. The Proposed Calibration Method

3.1. Multi-Camera Calibration Based on Refractive Projection

In the previous section, we introduced the camera model and the refractive projection, which are combined in this section to calibrate the MCS. In our work, the single-camera model is extended to the modeling and calibration of an MCS made up of more than two cameras. Without loss of generality, the MCS will be explained through the particular case of a four-camera system, which is also used in the calibration experiments described in this paper. The MCS is shown in Figure 5, and the object in the center is the glass calibration plate. One side of the glass is printed with a checkerboard pattern, which can be seen from both sides of the calibration plate. Four cameras are distributed on both sides of the calibration plate and grouped into two pairs: pair I, including cameras 1 and 2, and pair II, including cameras 3 and 4. The cameras of pair I directly project the 3D points on the calibration plate to images without refraction (Equation (6)), while the cameras of pair II image through the refraction of the glass (Equation (9)), which can lead to calibration errors. These errors are eliminated by the refractive projection model and the ray tracing method described above. Because each camera needs its own initial estimate of the extrinsic parameters, the major WCS (red) is fixed on the upper left corner of the pattern on the non-refractive side, and the auxiliary WCS (blue) is fixed on the other side of the pattern, which is subject to refraction. $R$ and $T$ denote the rotation and translation between the two WCSs.
For an MCS, during the calibration procedure, $m$ images ($i = 1, 2, \ldots, m$) of the calibration plate are taken by each camera at different orientations. For each image, $n$ object points ($j = 1, 2, \ldots, n$) are recognized by the program. $l$ ($k = 1, 2, \ldots, l$) denotes the number of cameras. $K_k$ and $D_k$ represent the intrinsic parameters and distortion coefficients of the $k$th camera, respectively. $R_k^i$ and $T_k^i$ denote the rotation matrix and translation vector of the $i$th position of the calibration plate relative to the $k$th camera. $p_k^{ij}$ is the projection of the $j$th 3D point on the $i$th image of the $k$th camera without refraction, and $p_{rk}^{ij}$ is the corresponding projection with refraction. The imaging functions are as follows.
$$p_k^{ij} = f\left( K_k, D_k, R_k^i, T_k^i, P^j \right) \qquad (10)$$
$$p_{rk}^{ij} = f_r\left( K_k, D_k, R_k^i, T_k^i, P^j, n \right) \qquad (11)$$
The cameras distributed on the two sides of the calibration plate use the two projection models to solve their extrinsic parameters, which are relative to the major WCS or the auxiliary WCS. The rotation and translation of each camera then need to be aligned to the major WCS. Camera 1 is set as the master camera. The rotation and translation of each camera relative to the master camera are obtained as follows:
$$\begin{cases} R_1^k = R_k^i \left( R_1^i \right)^{-1} \\ T_1^k = T_k^i - R_1^k T_1^i \end{cases} \quad \text{(without refraction)} \qquad (12)$$
$$\begin{cases} R_1^k = R_k^i \left( R_1^i R \right)^{-1} \\ T_1^k = T_k^i - R_1^k \left( R_1^i T + T_1^i \right) \end{cases} \quad \text{(with refraction)} \qquad (13)$$
$R_1^i$, $T_1^i$ are the extrinsic parameters of the master camera, and $R_1^k$, $T_1^k$ are the extrinsic parameters of the other cameras relative to the master camera.
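In code, the alignment of Equations (12) and (13) reduces to a few matrix products. The sketch below is a minimal NumPy rendering under our own naming; $R$ and $T$ are passed only for cameras on the refractive side, mirroring the use of the auxiliary WCS in Figure 5.

```python
import numpy as np

def relative_extrinsics(R1_i, T1_i, Rk_i, Tk_i, R=None, T=None):
    """Pose of camera k relative to master camera 1 (Equations (12)-(13)).

    R1_i, T1_i : extrinsics of the master camera for plate position i;
    Rk_i, Tk_i : extrinsics of camera k for the same position;
    R, T       : rotation and translation between the two WCSs, passed only
                 for cameras on the refractive side (Equation (13)).
    """
    if R is None:                                  # Equation (12), no refraction
        R1k = Rk_i @ np.linalg.inv(R1_i)
        T1k = Tk_i - R1k @ T1_i
    else:                                          # Equation (13), with refraction
        R1k = Rk_i @ np.linalg.inv(R1_i @ R)
        T1k = Tk_i - R1k @ (R1_i @ T + T1_i)
    return R1k, T1k
```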

3.2. Solving Intrinsic Camera Parameters and Initial Estimation of Extrinsic Camera Parameters

The intrinsic parameters of the MCS are obtained by Zhang’s method. Because the positioning accuracy of the 3D points of the calibration target is as high as 0.0015 mm, the calibration results are relatively accurate. Before the extrinsic parameters of the system are optimized, an initial estimate of the extrinsic camera parameters is required, which can be obtained using the DLT method described by Hartley [31] or the theory of multi-layer flat refractive geometry presented by Agrawal [32]. The DLT method can only be used when the thickness of the glass is relatively small; otherwise, the initial estimate of the extrinsic parameters will deviate significantly from the truth. The initial estimate gives no consideration to lens distortion and glass refraction, so nonlinear refinement must be applied to it to improve accuracy. The best estimate of the camera parameters can be obtained by nonlinear refinement based on the maximum likelihood criterion, for example with the Levenberg–Marquardt algorithm. The maximum likelihood estimate for our proposed method can be written as Equation (14).
$$\min_{R_1^i,\, T_1^i,\, R_1^k,\, T_1^k,\, n} \; \sum_{k}^{l} \sum_{i}^{m} \sum_{j}^{n} \left( (1 - w) \left\| x_k^{ij} - p_k^{ij} \right\|^2 + w \left\| x_{rk}^{ij} - p_{rk}^{ij} \right\|^2 \right) \qquad (14)$$
Equation (14) minimizes the sum of the reprojection errors, i.e., the 2D Euclidean distances between the projected points given by Equations (10) and (11) and the actual image points. $x_k^{ij}$, $p_k^{ij}$ are the measured and the predicted image point without refraction, and $x_{rk}^{ij}$, $p_{rk}^{ij}$ are the measured and the predicted image point with refraction. $w$ is the refraction flag: $w = 0$ indicates projection without refraction, while $w = 1$ means projection with refraction.
A 3D point and its corresponding image point provide two independent equations. Assume an $l$-camera system is applied, each camera takes $m$ images of the calibration target, and the calibration object contains $n$ known 3D points. The parameters of Equation (14) that need to be solved include the $6m$ rotation and translation parameters of the master camera, the $6(l-1)$ rotation and translation parameters of the other cameras relative to the master camera, and the refractive index of the glass calibration target. Therefore, $6(m + l - 1) + 1$ parameters are solved from $2lmn$ equations, which leads to an over-determined system. Taking the four-camera system as an example, the calibration target contains 182 known 3D points and each camera captures 20 images; a total of 29,120 equations are solved for 139 variables. Assuming the image points are corrupted by independent and identically distributed noise, the maximum likelihood solution of these variables is obtained.
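A possible shape of this optimization in Python is sketched below, assuming the project function from Section 2.1 and a hypothetical project_refractive helper that first runs the AFRT intersection and then applies the pinhole model. The parameter packing and function names are our assumptions, not the authors’ code.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(params, obs, K, D, m, l):
    """Stacked reprojection residuals of Equation (14).

    params packs the m master-camera poses, the l-1 relative poses and the
    refractive index n; each pose is a Rodrigues rotation vector followed by
    a translation vector.  obs is a list of tuples (k, i, Pj, x_meas, w),
    where w is the refraction flag of Equation (14).
    """
    n = params[-1]
    poses = params[:-1].reshape(-1, 6)               # m + (l - 1) poses
    res = []
    for k, i, Pj, x_meas, w in obs:
        R1i, _ = cv2.Rodrigues(poses[i, :3])         # master pose, position i
        T1i = poses[i, 3:]
        if k == 0:                                   # the master camera itself
            Rki, Tki = R1i, T1i
        else:                                        # compose with relative pose
            R1k, _ = cv2.Rodrigues(poses[m + k - 1, :3])
            T1k = poses[m + k - 1, 3:]
            Rki, Tki = R1k @ R1i, R1k @ T1i + T1k
        proj = (project_refractive(K[k], Rki, Tki, D[k], Pj, n) if w
                else project(K[k], Rki, Tki, D[k], Pj))
        res.append(x_meas - proj)
    return np.concatenate(res)

# e.g.: sol = least_squares(residuals, x0, args=(obs, K, D, m, l), method="trf")
```

With SciPy’s trust-region solver (method="trf"), a jac_sparsity matrix can also be supplied to least_squares to exploit the sparsity discussed next.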
The nonlinear optimization algorithms commonly employed in bundle adjustment routines require evaluation of the Jacobian matrix of the projection functions defined in Equations (10) and (11). The residuals of an individual camera are independent of the parameters of the other cameras, so the Jacobian matrix tends to be very sparse. This sparse structure can be exploited in the minimization routine to improve computational performance.
The quality of the camera calibration is evaluated by computing the mean and the standard deviation of the individual reprojection errors, i.e., the residuals that remain after minimizing Equation (14). With $d_k$ the individual reprojection error and $N$ the number of equations, the evaluation parameters are defined as follows.
$$\bar{d} = \frac{1}{N} \sum_{k}^{N} d_k \qquad (15)$$
$$\sigma_d = \sqrt{ \frac{1}{N} \sum_{k}^{N} \left( d_k - \bar{d} \right)^2 } \qquad (16)$$
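Given the stacked residual vector from the sketch above, and treating each scalar residual as one equation’s error, these statistics are one-liners in NumPy (np.std uses the same $1/N$ normalization as Equation (16)):

```python
import numpy as np

d = residuals(sol.x, obs, K, D, m, l)  # residuals after the optimization converges
d_mean = d.mean()                       # Equation (15)
d_sigma = d.std()                       # Equation (16)
```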

3.3. Summary

The proposed method combines Zhang’s conventional method and the refractive projection model to realize the calibration of the MCS. The global calibration process works as follows:
(1) Multiple cameras are installed so that their FOVs cover the same area of the calibration target simultaneously. The intrinsic parameters and distortion coefficients of each camera are calibrated independently.
(2) In the overlapping FOV of the MCS, the cameras acquire images of the calibration target from different orientations. The images captured by each camera contain the front or the back of the calibration target.
(3) The DLT method or the theory of multi-layer flat refractive geometry is used to obtain the extrinsic parameters of each camera relative to its WCS, and these extrinsic parameters are then unified to the major WCS. The rotation and translation of each camera relative to the master camera are obtained from Equations (12) and (13).
(4) The extrinsic parameters of the system and the refractive index of the glass are optimized by the bundle adjustment method and the refractive projection model.

4. Experiments and Discussion

The accuracy and robustness of the algorithm discussed in this paper are analyzed using both synthetic and real data. Multiple cameras are usually distributed on both sides of the glass checkerboard, facing the calibration target, so both the direct projection model and the refractive projection model are adopted in the proposed calibration method. Since the direct projection model has been verified and applied by many scholars, this article will not discuss it. The experiments mainly analyze the refractive projection model, and the two models are applied simultaneously in the calibration of the MCS. In practice, one camera or multiple cameras (for example, two cameras) may be deployed on one side of the measured object. In the experiments on synthetic and real data, we analyze the accuracy of the refractive projection model, which is applied to acquire the refractive index and the extrinsic parameters of a single camera and of multiple cameras. The extrinsic parameters of each camera are estimated by the DLT method from images of the planar pattern.

4.1. Synthetic Data

The intrinsic and extrinsic parameters of the camera are obtained from the 3D points of the calibration target and the corresponding image points. In the real data experiment, the image points are obtained by a corner detection algorithm, but the synthetic experiment does not need to verify the corner detection algorithm. We directly generate the intrinsic and extrinsic parameters of the camera and the space points, and obtain the ideal image points using the direct projection model (Equation (10)) and the refractive projection model (Equation (11)). Actual image points contain corner detection error, which is simulated by adding normally distributed random error to the ideal image points.
The simulated camera’s image size is 2592 × 2048 pixels with the principal point at (1296.5, 1024.5) pixels. The focal lengths along the $u$ and $v$ directions are $f_u = 2604$ pixels and $f_v = 2604$ pixels, respectively. All the distortion coefficients are zero, and the skew factor is set to zero. The calibration target is a glass checkerboard with 182 corners (14 × 13) uniformly distributed, and the point interval is 12 mm. The glass checkerboard has a thickness of 4 mm and the refractive index of the glass is 1.5. In the generation of the synthetic data, all the images are captured randomly within a constrained range: the distance between the camera and the object is 300–400 mm, and the angles between the camera and world coordinate frames are α = (180 ± 15)°, β = (90 ± 15)°, and γ = (0 ± 15)°. The world coordinate frame is set on the checkerboard. The basic parameters of the synthetic experiment are consistent with the real experiment.
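The following snippet sketches how such synthetic corners and noisy observations can be generated with NumPy, reusing the project function from Section 2.1; the random pose sampling is omitted, and the helper name noisy_image_points is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# 14 x 13 grid of corners with a 12 mm pitch on the checkerboard plane Z = 0
xs, ys = np.meshgrid(np.arange(14) * 12.0, np.arange(13) * 12.0)
corners = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])  # (182, 3)

K = np.array([[2604.0,    0.0, 1296.5],    # f_u, skew = 0, u_0
              [   0.0, 2604.0, 1024.5],    # f_v, v_0
              [   0.0,    0.0,    1.0]])
D = np.zeros(5)                            # all distortion coefficients are zero

def noisy_image_points(R, T, sigma):
    """Ideal projections of all corners corrupted with corner-detection noise
    of standard deviation `sigma` pixels (normally distributed)."""
    ideal = np.array([project(K, R, T, D, P) for P in corners])
    return ideal + rng.normal(0.0, sigma, ideal.shape)
```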
In order to evaluate the robustness of our method with respect to noise, simulations have been performed in which noise ranging from 0 to 0.4 pixels is added to the ideal image points. For each noise level, we perform 100 independent trials, each containing 20 images. The camera parameters estimated from the simulated image points are compared with the ground truth. In this section, the mean relative errors of the rotation and translation vectors are used to assess the calibration accuracy.
If the rotation vector is $v = \mathrm{Rodrigues}(R)$, the relative errors of the rotation and translation vectors are $\| \Delta v \| / \| v \|$ and $\| \Delta T \| / \| T \|$, where $\Delta v$ and $\Delta T$ are the deviations of the estimates from the ground truth.
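A direct reading of this metric in Python, using OpenCV’s Rodrigues conversion (the helper name is ours, and the ground-truth comparison follows the simulation protocol above):

```python
import numpy as np
import cv2

def relative_pose_errors(R_est, T_est, R_true, T_true):
    """Relative errors of the rotation and translation vectors as plotted in
    Figures 6-8: ||v_est - v_true|| / ||v_true|| and ||T_est - T_true|| / ||T_true||."""
    v_est, _ = cv2.Rodrigues(R_est)      # rotation matrix -> Rodrigues vector
    v_true, _ = cv2.Rodrigues(R_true)
    rot_err = np.linalg.norm(v_est - v_true) / np.linalg.norm(v_true)
    trans_err = np.linalg.norm(np.asarray(T_est) - T_true) / np.linalg.norm(T_true)
    return rot_err, trans_err
```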
In practice, the thickness of the glass plate is known, while the refractive index is unknown and approximately equal to 1.5. A small change in the refractive index has little influence on the image projection, and the refractive effect on the calibration result is relatively small compared to the noise. The extrinsic parameters of the single camera are therefore estimated under two scenarios: with and without refractive index estimation.
As shown in Figure 6 and Figure 7, whether or not the refractive index is estimated, the relative errors of the rotation and translation vectors for a single camera gradually increase with the noise level. The relative error of the rotation vector is less than $1.5 \times 10^{-6}$ and the translation error is less than $1 \times 10^{-6}$ when the refractive index is not estimated (fixed at 1.5). The error of the extrinsic parameters using the fixed refractive index is more consistent and stable than the error using the estimated refractive index. It can be seen from Figure 7a,b that the calibration results in the different directions are inconsistent: the growth rate of the error in the y direction differs from that in the x and z directions. As Figure 7c shows, the error of the refractive index increases dramatically, so the estimate can be considered incorrect. When the thickness of the glass is small, a single camera cannot accurately estimate the refractive index; the main cause is that a single viewing direction constrains the rays too weakly. If cameras are added at different orientations, the estimation accuracy of the refractive index can be improved. Meanwhile, Figure 6 and Figure 7 also show that the extrinsic parameters of the camera are accurate in both cases. When estimating the extrinsic parameters of a single camera, fixing the refractive index yields higher accuracy.
In addition to the synthetic experiment with one camera, we have carried out a simulation experiment on multiple cameras using the refractive projection model (taking a binocular camera as an example). This setup is equivalent to a general binocular system, with the left camera serving as the reference. The optimized parameters include the rotation and translation of the left camera relative to the world coordinate frame, and the rotation and translation of the right camera relative to the left camera. Meanwhile, the refractive index of the glass is estimated and compared with the single-camera case.
For the binocular camera, Gaussian noise (mean = 0, STD = 0–0.4 pixels) is likewise added to the image points of the left and right cameras, and the calibration is then conducted 100 times with independent noise. Figure 8 shows the relative errors of the extrinsic parameters of the binocular camera and of the refractive index. It can be seen from the figures that the rotation vector is more accurate than that of the single camera, while the translation accuracy of the binocular camera is lower. Due to the ray constraints from the multiple viewing directions of the binocular camera, the precision of the estimated refractive index is significantly improved compared with the single camera. Meanwhile, the accuracies of rotation and translation remain relatively high.

4.2. Real Data

For the experiments with real data, all CMOS cameras (Basler acA2500-60uc) have the same configuration. The focal length of the lens is 12.5 mm and the image resolution of the camera is 2590 × 2048 pixels. The four-camera system is presented in Figure 9. As shown in Figure 10, the calibration target is a planar checkerboard with 14 × 13 corner points uniformly distributed. The size of the checkerboard is 200 × 200 mm² and the distance between adjacent points is 12 mm in both the horizontal and the vertical directions. The checkerboard pattern is printed on one side of the glass calibration plate with a position accuracy of 0.0015 mm.
It is possible to install one camera or multiple cameras on one side of the measured object. Similar to the synthetic experiments, the experiments on real data verify the calibration accuracy of one camera, a binocular camera, and the four-camera system. Four cameras are used to perform these experiments using the refractive projection model. Meanwhile, the reflection and the overlapping FOV of all cameras restrict the positioning of the calibration target and make it inconvenient to operate in actual applications. In order to improve the accuracy and convenience of the proposed method, the intrinsic parameters of all cameras are calibrated first, and then the extrinsic parameters are calibrated using the proposed calibration method. In the calibration of the intrinsic parameters, the cameras are fixed according to the size of the object and 21 images are taken from different orientations. Table 1 shows the intrinsic parameters of cameras 1–4 obtained through Zhang’s flexible calibration method [11]. As Table 1 illustrates, only the distortion coefficients $k_1$ and $k_2$ are listed.
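For reproduction purposes, this per-camera intrinsic calibration step maps directly onto OpenCV’s implementation of Zhang’s method; a minimal sketch (variable names ours) follows.

```python
import cv2
import numpy as np

# object_points / image_points: one array per view, 182 corners (14 x 13) each,
# collected e.g. with cv2.findChessboardCorners on the 21 calibration images.
ret, K, D, rvecs, tvecs = cv2.calibrateCamera(
    object_points,      # list of (182, 3) float32 world coordinates (Z = 0 plane)
    image_points,       # list of (182, 1, 2) float32 detected corner positions
    (2590, 2048),       # image size (width, height)
    None, None)         # no initial guess for K or the distortion vector
print("RMS reprojection error:", ret)
```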
The extrinsic parameters of one camera and of multiple cameras, together with the refractive index of the glass, are solved using the proposed method. We use the reprojection error of the corner points to evaluate the accuracy of the camera calibration. Figure 11, Figure 12 and Figure 14 display bivariate histograms of the unoptimized and optimized reprojection errors of one camera, the binocular camera, and the four cameras, respectively. The reprojection errors of one camera (camera 4) are shown in Figure 11. It is obvious that the reprojection errors improve significantly through the nonlinear optimization: the initial projection error is larger and its distribution is more dispersed, while the optimized error is smaller and more concentrated. The mean value and the standard deviation of the initial reprojection errors are 0.0011 and 0.1452 pixels, respectively. After the optimization, the mean value of the reprojection errors is −0.00003 pixels and the standard deviation is 0.0949 pixels. The calibration results of the binocular camera (cameras 3 and 4) are shown in Figure 12. The comparison between the results of the refractive calibration and the initial values shows that the bundle adjustment with the refractive projection model is more reliable and more accurate: the mean value and standard deviation of the reprojection errors change from 0.2842 and 0.6791 pixels to −0.0005 and 0.2213 pixels. The optimized extrinsic parameters of the binocular camera are used to calculate the 3D positions of the corner points, and the position error is then calculated from the 3D positions and the theoretical values. As shown in Figure 13, the position error with the optimized extrinsic parameters is reduced to half of the unoptimized one. It can also be seen from Figure 13 that the curve of the position error presents a symmetric waveform. We believe that the main reason is that the distortion cannot be completely eliminated, so the imaging is still affected by the residual distortion: the position error at the center of the image is small, while the error at the edges is large. The horizontal axis of the figure is the point index, counted from top to bottom and from left to right, which produces the systematic waveform.
In the one-camera and binocular camera experiments, we only use the refractive projection model. The four cameras are distributed on both sides of the glass calibration plate, so the four-camera system uses the direct projection model and the refractive projection model simultaneously, and is used to verify the practicability of our method. Figure 14 shows the reprojection error of the four-camera system. The mean value and standard deviation of the reprojection error change from −0.3378 and 2.9542 pixels to 0.00007 and 0.4543 pixels, respectively. When the number of cameras is greater than two, the initial extrinsic parameters of the cameras are inaccurate. From Figure 12 and Figure 14, it can be seen that the initial reprojection errors of the different cameras are not concentrated, resulting in multiple peaks: the binocular camera’s reprojection error ranges from −1 to 3 pixels, while the error of the four cameras ranges from −10 to 10 pixels. After optimization, accurate camera parameters are obtained, so the reprojection error is reduced, the multiple peaks are eliminated, and the distribution conforms to a normal distribution. All of this means that the error of the extrinsic parameters is reduced. We can also observe that the standard deviation of the reprojection errors scales roughly linearly with the number of cameras. The optimized calibration results indicate the stability and accuracy of our proposed method on real data. The relative extrinsic parameters of the four-camera system are reported in Table 2.

4.3. Discussion

The above experiments on synthetic and real data verify the accuracy and effectiveness of the proposed method. The method is applicable to multi-camera measurement systems that perform a one-shot measurement of the dynamic shape of a whole part. A typical MCS is shown in Figure 5, with the cameras distributed on both sides of the glass calibration plate. Several patterns have been designed for multi-camera calibration, which can be grouped into three categories: 1D patterns, 3D targets consisting of 1D patterns, and planar patterns. Compared with planar patterns, the disadvantage of the other two calibration targets is that it is difficult to guarantee the exact linearity and the extraction accuracy of the points. However, with an opaque planar pattern it is difficult to complete the multi-camera calibration, and cumulative errors arise easily. With the help of precision manufacturing techniques, a transparent glass calibration target can overcome the above limitations and complete the calibration of the MCS. The position accuracy of the corner points on a commercial glass calibration plate can reach 0.0015 mm, which satisfies the precision requirement of multi-camera calibration. The extrinsic parameters can be optimized in the global coordinates, and the refractive projection model is used to eliminate the refractive effect.
However, the proposed method also has some limitations. Due to the reflection of the glass, the distribution of the cameras and the calibration accuracy of multiple cameras are affected. In the calibration process of this paper, a few reprojection errors with abnormal values can occur, caused by reflection. Fortunately, the number of these outliers is very small and they have little impact on the calibration results; alternatively, these outliers can be deleted to further reduce their impact. The reflection could also be reduced in the production process of the glass plate. Even when affected by reflection, compared with the existing methods based on 2D or 3D calibration targets [10,33], the mean and standard deviation of the reprojection error of our method are relatively small. In addition, the calibration method cannot be applied to multi-camera calibration without an overlapping FOV.

5. Conclusions

A typical MCS is installed on both sides of the measured object, which makes it difficult to calibrate the system using existing camera calibration methods. In this paper, a novel multi-camera calibration method based on glass calibration plates and ray tracing is proposed. Based on the traditional direct projection model, a refractive projection model is developed and applied to multi-camera calibration. Firstly, the mathematical models of refractive projection and bundle adjustment are established with the introduction of ray tracing. Then, the intrinsic parameters of each camera are obtained by Zhang’s calibration method and the direct linear transformation is used to obtain the initial extrinsic parameters. Finally, the modified bundle adjustment method is applied to optimize the extrinsic parameters of the MCS and the refractive index of the glass calibration target. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The experiments performed on synthetic and real data indicate that our proposed method has high accuracy and feasibility.

Acknowledgments

This work has been supported by the National Natural Science Foundation of China (Grant No. 51505054, 51705057), the Chongqing Science and Technology Commission (Grant No. cstc2015zdcy-ztzx60002, cstc2016jcyjA0538, cstc2015zdcy-ztzx30001, cstc2014jcyjA60003), and the Scientific and Technological Research Program of Chongqing Municipal Education Commission (Grant No. KJ1500405, CYS17233).

Author Contributions

The paper was a collaborative effort between the authors. Mingchi Feng and Xiang Jia proposed the idea of the paper. Mingchi Feng, Xiang Jia, and Song Feng implemented the algorithm, and designed and performed the experiments. Mingchi Feng, Jingshu Wang, and Taixiong Zheng analyzed the experimental results and prepared the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhan, D.; Yu, L.; Xiao, J.; Chen, T.L. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3d measurements of railway tunnels. Sensors 2015, 15, 8664–8684. [Google Scholar] [CrossRef] [PubMed]
  2. Gong, Z.; Liu, Z.; Zhang, G.J. Flexible global calibration of multiple cameras with nonoverlapping fields of view using circular targets. Appl. Opt. 2017, 56, 3122–3131. [Google Scholar] [CrossRef] [PubMed]
  3. Bosch, J.; Gracias, N.; Ridao, P.; Ribas, D. Omnidirectional underwater camera design and calibration. Sensors 2015, 15, 6033–6065. [Google Scholar] [CrossRef] [PubMed]
  4. Schmidt, A.; Kasiński, A.; Kraft, M.; Fularz, M.; Domagała, Z. Calibration of the multi-camera registration system for visual navigation benchmarking. Int. J. Adv. Robot. Syst. 2014, 11, 83. [Google Scholar] [CrossRef]
  5. Ryan, D.; Denman, S.; Fookes, C.; Sridharan, S. Scene invariant multi camera crowd counting. Pattern Recogn. Lett. 2014, 44, 98–112. [Google Scholar] [CrossRef] [Green Version]
  6. Kovac, I. Flexible inspection systems in the body-in-white manufacturing. In Proceedings of the 2004 International Workshop on Robot Sensing, Graz, Austria, 24–25 May 2004; Institute of Electrical and Electronics Engineers Inc.: Graz, Austria, 2004; pp. 41–48. [Google Scholar] [CrossRef]
  7. Chen, X.; Yang, L.X.; Xu, N.; Xie, X.; Sia, B.; Xu, R. Cluster approach based multi-camera digital image correlation: Methodology and its application in large area high temperature measurement. Opt. Laser Technol. 2014, 57, 318–326. [Google Scholar] [CrossRef]
  8. Chen, F.X.; Chen, X.; Xie, X.; Feng, X.; Yang, L.X. Full-field 3d measurement using multi-camera digital image correlation system. Opt. Laser Eng. 2013, 51, 1044–1052. [Google Scholar] [CrossRef]
  9. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef]
  10. Shen, E.; Hornsey, R. Multi-camera network calibration with a non-planar target. IEEE Sens. J. 2011, 11. [Google Scholar] [CrossRef]
  11. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  12. Dong, S.; Shao, X.X.; Kang, X.; Yang, F.J.; He, X.Y. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry. Appl. Opt. 2016, 55, 6363–6370. [Google Scholar] [CrossRef] [PubMed]
  13. Baker, P.T.; Aloimonos, Y. Calibration of a multicamera network. In Proceedings of the 2003 Conference on Computer Vision and Pattern Recognition Workshop, Madison, WI, USA, 16–22 June 2003; p. 72. [Google Scholar] [CrossRef]
  14. Belden, J. Calibration of multi-camera systems with refractive interfaces. Exp. Fluids 2013, 54, 1463. [Google Scholar] [CrossRef]
  15. Orteu, J.J.; Bugarin, F.; Harvent, J.; Robert, L.; Velay, V. Multiple-camera instrumentation of a single point incremental forming process pilot for shape and 3d displacement measurements: Methodology and results. Exp. Mech. 2011, 51, 625–639. [Google Scholar] [CrossRef]
  16. Zhang, Z. Camera calibration with one-dimensional objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, L.; Wang, W.W.; Shen, C.; Duan, F.Q. A convex relaxation optimization algorithm for multi-camera calibration with 1d objects. Neurocomputing 2016, 215, 82–89. [Google Scholar] [CrossRef]
  18. Liu, Z.; Li, F.J.; Zhang, G.J. An external parameter calibration method for multiple cameras based on laser rangefinder. Measurement 2014, 47, 954–962. [Google Scholar] [CrossRef]
  19. Fu, Q.; Quan, Q.; Cai, K.Y. Calibration of multiple fish-eye cameras using a wand. IET Comput. Vis. 2015, 9, 378–389. [Google Scholar] [CrossRef]
  20. Loaiza, M.E.; Raposo, A.B.; Gattass, M. Multi-camera calibration based on an invariant pattern. Comput. Graph. 2011, 35, 198–207. [Google Scholar] [CrossRef]
  21. De Franca, J.A.; Stemmer, M.R.; Franca, M.B.D.; Piai, J.C. A new robust algorithmic for multi-camera calibration with a 1d object under general motions without prior knowledge of any camera intrinsic parameter. Pattern Recogn. 2012, 45, 3636–3647. [Google Scholar] [CrossRef]
  22. Shin, K.Y.; Mun, J.H. A multi-camera calibration method using a 3-axis frame and wand. Int. J. Precis. Eng. Manuf. 2012, 13, 283–289. [Google Scholar] [CrossRef]
  23. Long, Q.; Zhongdan, L. Linear n-point camera pose determination. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 774–780. [Google Scholar] [CrossRef]
  24. Xu, G.; Zhang, X.; Li, X.; Su, J.; Hao, Z. Global calibration method of a camera using the constraint of line features and 3d world points. Meas. Sci. Rev. 2016, 16, 190. [Google Scholar] [CrossRef]
  25. Devarajan, D.; Cheng, Z.L.; Radke, R.J. Calibrating distributed camera networks. Proc. IEEE 2008, 96, 1625–1639. [Google Scholar] [CrossRef]
  26. Gemeiner, P.; Micusik, B.; Pflugfelder, R. Calibration methodology for distant surveillance cameras. Lect. Notes Comput. Sci. 2015, 8927, 162–173. [Google Scholar] [CrossRef]
  27. Tsai, M.-J.; Hung, C.-C. Development of a high-precision surface metrology system using structured light projection. Measurement 2005, 38, 236–247. [Google Scholar] [CrossRef]
  28. Tsai, R. A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  29. Huang, J.H.; Wang, Z.; Gao, Z.H.; Gao, J.M. A novel color coding method for structured light 3d measurement. Proc. SPIE 2011, 8085. [Google Scholar] [CrossRef]
  30. Mulsow, C. A flexible multi-media bundle approach. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, XXXVIII, 472–477. [Google Scholar]
  31. Hartley, R.I. In defense of the eight-point algorithm. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 580–593. [Google Scholar] [CrossRef]
  32. Agrawal, A.; Ramalingam, S.; Taguchi, Y.; Chari, V. A theory of multi-layer flat refractive geometry. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3346–3353. [Google Scholar] [CrossRef]
  33. Tan, L.; Wang, Y.N.; Yu, H.S.; Zhu, J. Automatic camera calibration using active displays of a virtual pattern. Sensors 2017, 17, 685. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Multi-camera system without an overlapping field of view (FOV).
Figure 2. Multi-camera system with an overlapping FOV.
Figure 3. Schematic of imaging through glass using the pinhole and refractive projection model.
Figure 4. Schematic of ray tracing method.
Figure 5. The rotation and translation of the four-camera system.
Figure 6. The relative error of extrinsic parameters for one camera without refraction estimation. (a) Relative error for the rotation vector; (b) Relative error for the translation vector.
Figure 7. The relative error of extrinsic parameters for one camera with refraction estimation. (a) Relative error for the rotation vector; (b) Relative error for the translation vector; (c) Relative error for the refraction index.
Figure 8. The relative error of extrinsic parameters for binocular cameras with refraction estimation. (a) Relative error for the rotation vector of the left camera; (b) Relative error for the translation vector of the left camera; (c) Relative error for the rotation vector of the left and right camera; (d) Relative error for the translation vector of the left and right camera; (e) Relative error for the refraction index.
Figure 9. The four-camera system.
Figure 10. The glass calibration target.
Figure 11. The reprojection error of one camera. (a) unoptimized; (b) optimized.
Figure 12. The reprojection error of binocular camera. (a) unoptimized; (b) optimized.
Figure 13. The 3D position error using binocular camera. (a) unoptimized; (b) optimized.
Figure 14. The reprojection error of the four-camera system. (a) unoptimized; (b) optimized.
Table 1. The intrinsic parameters of the four cameras.

| | Camera 1 | Camera 2 | Camera 3 | Camera 4 | Uncertainty (3σ) |
|---|---|---|---|---|---|
| Focal length (pixels) | (2618.29, 2618.20) | (2625.76, 2625.61) | (2617.17, 2616.88) | (2620.34, 2620.35) | (0.49, 0.44) |
| Principal point (pixels) | (1290.91, 1014.72) | (1286.45, 1001.44) | (1255.36, 1026.86) | (1293.70, 1006.56) | (0.80, 0.73) |
| Distortion ($k_1$, $k_2$) | (0.1338, 0.1326) | (0.1356, 0.1462) | (0.1332, 0.1360) | (0.1324, 0.1344) | (0.0008, 0.0036) |
Table 2. The relative extrinsic parameters of the four-camera system.

| | Camera 2-1 | Camera 3-1 | Camera 4-1 | Uncertainty (3σ) |
|---|---|---|---|---|
| Rotation vector (rad) | (0.0892, 0.7389, 0.0365) | (0.1479, 3.0076, 0.2330) | (0.0212, 2.3761, 0.1040) | (0.0018, 0.0026, 0.0009) |
| Translation vector (mm) | (248.9713, 1.0768, 93.0314) | (4.3414, 81.7511, 766.3633) | (274.9126, 42.0589, 641.0544) | (0.1858, 0.1280, 0.3189) |
