Article

Rapid Global Calibration Technology for Hybrid Visual Inspection System

State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(6), 1440; https://doi.org/10.3390/s17061440
Submission received: 23 April 2017 / Revised: 30 May 2017 / Accepted: 3 June 2017 / Published: 19 June 2017

Abstract

Vision-based methods for product quality inspection are playing an increasingly important role in modern industries for their good performance and high efficiency. A hybrid visual inspection system, which consists of an industrial robot with a flexible sensor and several stationary sensors, has been widely applied in mass production, especially in automobile manufacturing. In this paper, a rapid global calibration method for the hybrid visual inspection system is proposed. Global calibration of the flexible sensor is performed first, based on robot kinematics. Then, with the aid of the calibrated flexible sensor, the stationary sensors are calibrated globally one by one based on homography. Only a standard sphere and an auxiliary target with a 2D planar pattern are applied during the system global calibration, and the calibration process can easily be re-performed during the system's periodic maintenance. An error compensation method is proposed for the hybrid inspection system, and the final accuracy of the hybrid system is evaluated with the deviation and correlation coefficient between the measured results of the hybrid system and a Coordinate Measuring Machine (CMM). An accuracy verification experiment shows that the deviations of over 95% of the featured points are within ±0.3 mm, and the correlation coefficients of over 85% of the points are larger than 0.7.

1. Introduction

Product quality inspection plays an increasingly important role in modern industries, including the automobile, shipbuilding, aviation, and aerospace industries. The integration of three-dimensional (3D) measurement into industrial development has greatly aided the monitoring of overall production quality, shortening development periods, lowering product cost, and considerably increasing performance, all of which helps optimize the entire manufacturing process. The coordinate measuring machine (CMM), with its precision and flexibility with respect to various measured objects, has become an indispensable part of manufacturing and is widely applied in electronics, automobile, and spaceflight manufacturing, among others. However, CMMs generally require an environment of constant temperature, constant humidity, and dust-free conditions, as well as skilled operators, and their measuring efficiency is limited by the contact measurement principle. With the rapid progress of optical imaging devices and the continuous decrease in their cost, vision-based product quality inspection has become practical. Various vision systems have been applied in modern industries, including monocular vision [1], binocular stereo vision [2], structured-light vision [3], and digital projection profilometry [4]. The rapid development of computer software and hardware and the maturation of computer vision theory have further promoted the performance and efficiency of these vision systems.
For mass production such as automobile manufacturing, vision inspection systems face some exceptional requirements: the inspection process must be performed in-line, and the large number of critical dimensions to be measured are distributed all over the car body, including the underbody. Industrial robots, as economical and flexible orienting devices, offer a new way to solve these problems. A robotic visual inspection system is not only economically promising, but also combines the industrial robot's high flexibility with the visual sensor's high throughput and relatively high accuracy. However, for dimensions beyond the robot's reachable volume, such as the featured points under the car body, dedicated vision sensors are added. Also, the coordinate system used to define the car body is determined by measuring several locating holes, which requires higher measuring accuracy and is likewise realized with dedicated sensors. As these sensors are usually fixed on a steel structure, we call them stationary sensors in this paper, while the sensors mounted on the robot flange are called flexible sensors. Hybrid visual inspection systems, which consist of flexible sensors and stationary sensors, have been commonly used in mass production.
For the car body-in-white in-line hybrid inspection system, the robot orients the flexible sensor to measure featured points on the car body surface, such as the side panels, roof, and front and rear windshields, while the stationary sensors measure the featured points on the underbody and the locating holes. For a typical hybrid visual inspection system as described above, there are two kinds of vision sensors and an industrial robot, each of which has its own reference coordinate system. The final measured results of the hybrid system are usually unified and analyzed in a global coordinate system, which is defined by several locating points on the product. Therefore, the global calibration that establishes the relationships between the different coordinate systems is of great significance for the final measuring accuracy of the hybrid inspection system. Moreover, the hybrid visual inspection system is usually integrated in the industrial field, and the global calibration must be performed in the workshop, which places higher requirements on the efficiency and robustness of the calibration method. Also, vision sensors may be uninstalled and replaced during the system's periodic maintenance, so a rapid global calibration method is an indispensable function for the hybrid inspection system.
In order to realize the global calibration of a measurement system with several vision sensors, Kitahara et al. [5] combined a calibration board and a 3D laser-surveying instrument to realize camera calibration in a large-scale space. Lu and Li [6] utilized a theodolite coordinate measurement system to realize global calibration. Liu and Lin [7] employed a laser tracker (FARO, Orlando, FL, USA) and a special 3D target to conduct global calibration. However, all the above methods require an additional high-precision 3D measurement apparatus to assist in the calibration, and although they can achieve relatively high calibration accuracy, they are quite complicated, costly, and inefficient. When the stationary sensors are installed on a complex fixture, the line of sight of these apparatuses may also be easily obstructed. Moreover, sensors may be uninstalled and replaced during the system's periodic maintenance, and re-calibration with the help of sophisticated auxiliary apparatuses is inconvenient and time-consuming. To solve this problem, other researchers have sought calibration approaches that do not involve sophisticated apparatus. Liu et al. [8] computed the coordinate transformation from each vision sensor to a global coordinate frame according to the co-linearity of the laser spots at each spot-laser position. Xie and Wei [9] unified the coordinate frames of two cameras by a flexible target composed of two short 1D bars. These two methods use a simple and flexible apparatus for global calibration, which is not only convenient and fast but also achieves relatively high accuracy. However, they mainly address the calibration of the transformation relationships among several vision sensors or cameras, and the measured results are unified in the coordinate system of one sensor, not a global reference frame. Furthermore, no flexible sensor is applied in the systems discussed.
In this paper, we propose a rapid global calibration method for hybrid visual inspection systems. The system construction and measurement principle of the hybrid visual inspection system are presented first. The global calibration of the hybrid system is divided into two steps: calibrating the flexible sensor first, and then calibrating the stationary sensors. Based on the robot kinematic principle, a flexible sensor global calibration method is proposed, which is performed with the aid of a standard sphere. Then, with the calibrated flexible sensor, the stationary sensors are calibrated one by one based on the homography principle. An auxiliary target with a 2D planar pattern is used as the calibration target, and the global calibration of a stationary sensor can be finished with just one robot movement and two images captured by the flexible and stationary sensors, which is simple to operate and highly efficient. No sophisticated equipment is needed during the global calibration process, and it is convenient to re-perform during the hybrid system's periodic maintenance. An accuracy verification experiment is performed, an error compensation method is proposed for the hybrid inspection system, and the final accuracy of the hybrid system is evaluated with the deviation and correlation coefficient between the measured results of the hybrid visual inspection system and the CMM.
The remainder of this paper is organized as follows: Section 2 is a brief introduction to the system construction and measurement principle of the hybrid visual inspection system. A detailed description of the system rapid global calibration method is given in Section 3, including flexible and stationary sensor global calibration. In Section 4, the accuracy verification experiment is carried out, and an error compensation method is proposed for the hybrid inspection system. The conclusions are given in Section 5.

2. Principle of the Hybrid Visual Inspection System

2.1. System Principle

As shown in Figure 1, a typical hybrid visual inspection system mainly consists of a six degrees of freedom industrial robot, a flexible vision sensor, and several stationary vision sensors. The flexible sensor is oriented by the industrial robot to measure featured points on the workpiece surface, and the stationary sensors are responsible for measuring the featured points on the underbody or the points beyond the robot’s reachable volume. Also, as the measured results are usually unified and accessed in the workpiece coordinate system, which is established with several locating points on it, these locating points are measured with the stationary sensors in order to guarantee a good accuracy.
Line-structured laser sensors are used in the hybrid visual inspection system discussed in this paper. The measuring principle of a line-structured laser sensor is optical triangulation [10]: a laser projector projects a stripe onto the measured object from one direction, and a camera captures the stripe, modulated by the object's shape, from another. The 3D coordinates of points on the laser stripe can then be calculated directly.
As shown in Figure 1, the coordinate systems of the hybrid visual inspection system consist of the workpiece frame (WF), robot base frame (BF), end-effector frame (EF), flexible sensor frame (FF), and stationary sensor frames (SFi). For the inspection system described in this paper, BF is fixed and is regarded as the global reference frame; establishing the transformations between BF and the sensor frames is the global calibration process described in this paper, and the measured results of the flexible and stationary sensors are unified in BF first. WF is constructed separately on each workpiece to be measured: when the workpiece is fixed and clamped in the inspection station, several stationary sensors measure the locating points on it, and the transformation between BF and WF ($T_b^w$) is determined with the construction method described in Section 2.2.
For a featured point measured by the flexible sensor, the mapping relationship between its coordinate $P_{w1}$ in WF and $P_f$ in FF is expressed as follows:
$$P_{w1} = T_b^w \cdot T_e^b \cdot T_f^e \cdot P_f \tag{1}$$
where $T_f^e$ is the transformation between FF and EF, which is also called the hand-eye relationship, and $T_e^b$ is the transformation between EF and BF, which can be obtained from the robot joint angles and the forward kinematic model.
For a featured point measured by a stationary sensor, the mapping relationship between its coordinate $P_{w2}$ in WF and $P_{si}$ in SFi is expressed as follows:
$$P_{w2} = T_b^w \cdot T_{si}^b \cdot P_{si} \tag{2}$$
where $T_{si}^b$ is the transformation between SFi and BF. An individual SFi is defined on each stationary sensor, and $T_{si}^b$ is determined in the system global calibration process.
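As a concrete illustration of these two mapping chains, the following sketch applies Equations (1) and (2) with 4 × 4 homogeneous transforms; all matrix values are hypothetical placeholders rather than real calibration results.

```python
import numpy as np

def to_homogeneous(p):
    """Append a 1 so a 3D point can be multiplied by 4x4 transforms."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def map_flexible_point(T_b_w, T_e_b, T_f_e, p_f):
    """Equation (1): flexible-sensor point P_f -> workpiece frame WF."""
    return (T_b_w @ T_e_b @ T_f_e @ to_homogeneous(p_f))[:3]

def map_stationary_point(T_b_w, T_si_b, p_si):
    """Equation (2): stationary-sensor point P_si -> workpiece frame WF."""
    return (T_b_w @ T_si_b @ to_homogeneous(p_si))[:3]

# Hypothetical transforms: identity rotations with small translations (mm).
T_b_w = np.eye(4); T_b_w[:3, 3] = [10.0, 0.0, 5.0]       # BF -> WF
T_e_b = np.eye(4); T_e_b[:3, 3] = [500.0, 200.0, 800.0]  # EF -> BF
T_f_e = np.eye(4); T_f_e[:3, 3] = [46.0, 4.5, 413.0]     # FF -> EF

print(map_flexible_point(T_b_w, T_e_b, T_f_e, [1.0, 2.0, 3.0]))
```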
Before such a hybrid visual inspection system is applied to industrial measurement, three calibration procedures should be performed. First, the intrinsic and extrinsic parameters of each line-structured laser sensor should be calibrated [11,12]. Second, the transformation between the flexible sensor and the robot end-flange must be determined. Third, the relationship between each stationary sensor and the robot base frame should be obtained. The last two procedures constitute the global calibration of the laser sensors, with which their measured results can be unified to the global frame (the robot base frame). As the sensors' intrinsic and extrinsic parameters in this hybrid system have been pre-calibrated, only the global calibration needs to be performed in the industrial field. Also, when a vision sensor is uninstalled and replaced with another sensor during system maintenance, the global calibration of that sensor should be re-performed.

2.2. Reference Point System (RPS) and Workpiece Frame Definition

As mentioned before, the measured results of the hybrid visual system are generally analyzed in WF. In this paper, WF is defined based on datum features on the workpiece itself. In industry, especially the automobile industry, an RPS is frequently used by measurement systems to ensure comparable results worldwide. For a complete vehicle, reference points are defined on the workpiece, and these points allow for an alignment in 3D space. Once the object is aligned to the theoretical coordinate system of the model (CAD data), one is able to compare any desired point and detect manufacturing errors on the product.
Every rigid body possesses six degrees of freedom in three-dimensional space: three translational degrees of freedom along the axes of a reference system and three rotational degrees of freedom around them. In order to support a non-rotationally-symmetric body in a uniquely determined manner, it must be fixed in all six possible directions of movement. The 3-2-1 rule provides such unique fixing. As shown in Figure 2, the 3-2-1 locating rule determines the location of the workpiece in 3D space, and only three stationary sensors are necessary to determine the position of the part: Sensor A measures a slot, Sensor B measures a hole, and Sensor C measures a point on a plane. This completes the 3-2-1 locating as follows:
(3) Locating points in the Z axis (plane), constraining Z translation and rotation about X and Y:
- Sensor A: slot center Z value
- Sensor B: hole center Z value
- Sensor C: range Z value
(2) Locating points in the Y axis (line), constraining Y translation and rotation about Z:
- Sensor A: slot center Y value
- Sensor B: hole center Y value
(1) Locating point in the X axis (point), constraining X translation:
- Sensor B: hole center X value
The coordinates of the datum features $P_{WA}(x_{WA}, y_{WA}, z_{WA})$, $P_{WB}(x_{WB}, y_{WB}, z_{WB})$, and $P_{WC}(x_{WC}, y_{WC}, z_{WC})$ in WF can be obtained from the CAD model. When determining WF, the measured results of the three stationary sensors are unified in BF first, giving the datum-feature coordinates $P_{BA}(x_{BA}, y_{BA}, z_{BA})$, $P_{BB}(x_{BB}, y_{BB}, z_{BB})$, and $P_{BC}(x_{BC}, y_{BC}, z_{BC})$ in BF. Writing the transformation from WF to BF as

$$T_b^w = [R \mid T] = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix},$$

the measured datum features give:
$$\begin{cases}
F_1 = r_{21} x_{BA} + r_{22} y_{BA} + r_{23} z_{BA} + t_2 - y_{WA} = 0\\
F_2 = r_{31} x_{BA} + r_{32} y_{BA} + r_{33} z_{BA} + t_3 - z_{WA} = 0\\
F_3 = r_{11} x_{BB} + r_{12} y_{BB} + r_{13} z_{BB} + t_1 - x_{WB} = 0\\
F_4 = r_{21} x_{BB} + r_{22} y_{BB} + r_{23} z_{BB} + t_2 - y_{WB} = 0\\
F_5 = r_{31} x_{BB} + r_{32} y_{BB} + r_{33} z_{BB} + t_3 - z_{WB} = 0\\
F_6 = r_{31} x_{BC} + r_{32} y_{BC} + r_{33} z_{BC} + t_3 - z_{WC} = 0
\end{cases} \tag{3}$$
The rotation matrix R satisfies the orthogonal constraint condition described as follows:
$$\begin{cases}
f_1 = r_{11} r_{11} + r_{21} r_{21} + r_{31} r_{31} - 1 = 0\\
f_2 = r_{12} r_{12} + r_{22} r_{22} + r_{32} r_{32} - 1 = 0\\
f_3 = r_{13} r_{13} + r_{23} r_{23} + r_{33} r_{33} - 1 = 0\\
f_4 = r_{11} r_{21} + r_{12} r_{22} + r_{13} r_{23} = 0\\
f_5 = r_{11} r_{31} + r_{12} r_{32} + r_{13} r_{33} = 0\\
f_6 = r_{21} r_{31} + r_{22} r_{32} + r_{23} r_{33} = 0
\end{cases} \tag{4}$$
From Equations (3) and (4), $[R \mid T]$ can be obtained by minimizing the following function:
$$E = \sum_{i=1}^{6} \left( F_i^2 + M f_i^2 \right) \tag{5}$$
where M is the penalty factor. Minimizing Equation (5) is a nonlinear minimization problem, which can be solved with the Levenberg–Marquardt algorithm [13,14,15].
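The sketch below illustrates one way to set up this penalty-based solve with SciPy's Levenberg–Marquardt implementation; the datum coordinates and the penalty factor are hypothetical placeholders, not values from the experiment.

```python
import numpy as np
from scipy.optimize import least_squares

M = 1e3  # penalty factor on the orthogonality constraints (hypothetical)

# Hypothetical datum features: CAD coordinates in WF, measured coordinates in BF.
P_W = {"A": np.array([0.0, 100.0, 50.0]),
       "B": np.array([300.0, 120.0, 55.0]),
       "C": np.array([150.0, 400.0, 52.0])}
P_B = {"A": np.array([510.0, 95.0, 48.0]),
       "B": np.array([810.0, 115.0, 53.0]),
       "C": np.array([660.0, 395.0, 50.0])}

def residuals(p):
    R = p[:9].reshape(3, 3)
    t = p[9:]
    A, B, C = P_B["A"], P_B["B"], P_B["C"]
    F = [  # Equation (3): the six 3-2-1 datum constraints
        R[1] @ A + t[1] - P_W["A"][1],
        R[2] @ A + t[2] - P_W["A"][2],
        R[0] @ B + t[0] - P_W["B"][0],
        R[1] @ B + t[1] - P_W["B"][1],
        R[2] @ B + t[2] - P_W["B"][2],
        R[2] @ C + t[2] - P_W["C"][2],
    ]
    f = [  # Equation (4): orthonormality of R, weighted by sqrt(M)
        R[:, 0] @ R[:, 0] - 1, R[:, 1] @ R[:, 1] - 1, R[:, 2] @ R[:, 2] - 1,
        R[0] @ R[1], R[0] @ R[2], R[1] @ R[2],
    ]
    return np.concatenate([np.array(F), np.sqrt(M) * np.array(f)])

p0 = np.concatenate([np.eye(3).ravel(), np.zeros(3)])  # start at identity pose
sol = least_squares(residuals, p0, method="lm")        # Levenberg-Marquardt
R_bw, T_bw = sol.x[:9].reshape(3, 3), sol.x[9:]
```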

3. Rapid Global Calibration Method

Global calibration of the laser sensors (including the flexible sensor and the stationary sensors) is generally performed in the industrial field and may need to be re-performed during the system's periodic maintenance, so a streamlined and efficient global calibration method is necessary for the hybrid visual inspection system. Also, with the on-site industrial environment becoming more and more complicated, calibration with a sophisticated apparatus such as a laser tracker or an articulated-arm CMM is not applicable; it is thus preferred that the hybrid system be self-calibrated with the aid of simple apparatus. In this paper, we propose a rapid global calibration technology for the hybrid visual inspection system. Global calibration of the flexible sensor is performed using a standard sphere as the calibration target. Then, with the calibrated flexible sensor and an auxiliary planar target, the stationary sensors are calibrated based on the homography principle.

3.1. Flexible Sensor Global Calibration

In this method, the global calibration of the flexible sensor determines the rotation matrix $R_F$ and translation vector $T_F$ between the flexible sensor frame and the robot end-flange frame, which is also called hand-eye calibration. A standard sphere is used as a reference: the flexible sensor measures the fixed sphere-center at different robot poses, and $R_F$ and $T_F$ can be solved through the constraint of a fixed spatial point [16].
For the standard sphere fixed in the robot working space, the relationship between the sphere-center coordinate $X_B = [x_B \; y_B \; z_B]^T$ in the robot base frame and $X_F = [x_F \; y_F \; z_F]^T$ in the flexible laser sensor frame is as follows:
$$\begin{bmatrix} X_B \\ 1 \end{bmatrix} = \begin{bmatrix} R_0 & T_0 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} R_F & T_F \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_F \\ 1 \end{bmatrix} \tag{6}$$
where $R_0$ and $T_0$ are the rotation matrix and translation vector from the robot end-flange to the robot base, and $R_F$ and $T_F$ are the rotation matrix and translation vector from the flexible sensor to the end-flange. Equation (6) can be expanded as:
$$X_B = R_0 \cdot R_F \cdot X_F + R_0 \cdot T_F + T_0 \tag{7}$$
Controlling the robot and making the flexible sensor measure the sphere-center twice, we can obtain the following equations:
$$\begin{cases} X_B = R_{01} \cdot R_F \cdot X_{F1} + R_{01} \cdot T_F + T_{01} \\ X_B = R_{02} \cdot R_F \cdot X_{F2} + R_{02} \cdot T_F + T_{02} \end{cases} \tag{8}$$
If the robot pose remains unchanged during these two measurements (motion with translation only), that is, $R_{01} = R_{02} = R_0$, subtracting the two equations in Equation (8) gives:
$$R_0 \cdot R_F \cdot (X_{F1} - X_{F2}) + T_{01} - T_{02} = 0 \tag{9}$$
By repeating the measurement with several robot translation movements, we can collect several sets of experimental data and obtain the following equation:
$$R_F \cdot \begin{bmatrix} X_{F1}-X_{F2} & X_{F1}-X_{F3} & \cdots & X_{F1}-X_{Fn} \end{bmatrix} = R_0^T \cdot \begin{bmatrix} T_{02}-T_{01} & T_{03}-T_{01} & \cdots & T_{0n}-T_{01} \end{bmatrix} \tag{10}$$
Let $P_1 = [X_{F1}-X_{F2} \;\; X_{F1}-X_{F3} \;\cdots\; X_{F1}-X_{Fn}]$ and $P_0 = R_0^T \cdot [T_{02}-T_{01} \;\; T_{03}-T_{01} \;\cdots\; T_{0n}-T_{01}]$; Equation (10) can then be rewritten as $R_F P_1 = P_0$. Following the Singular Value Decomposition (SVD) method, the following matrix is constructed:
$$K = d_0 \cdot d_1^T = (P_0 - C_0) \cdot (P_1 - C_1)^T \tag{11}$$
where $C_0 = \frac{1}{n}\sum_{i=1}^{n} P_{0i}$ and $C_1 = \frac{1}{n}\sum_{i=1}^{n} P_{1i}$ are the column means of $P_0$ and $P_1$. Decomposing $K = U \cdot \Lambda \cdot V^T$, the solution for the rotation matrix is $R_F = U \cdot V^T$.
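A minimal sketch of this SVD-based solve is given below, assuming the sphere-center measurements and flange translations have already been collected; a standard determinant guard against reflections is added, which the derivation above leaves implicit.

```python
import numpy as np

def solve_RF(X_F, R_0, T_0):
    """Solve R_F from Equation (10) via SVD (Kabsch-style alignment).

    X_F : (n, 3) sphere-center coordinates in the flexible sensor frame,
          measured after pure robot translations (common orientation R_0).
    R_0 : (3, 3) end-flange rotation, constant for all n measurements.
    T_0 : (n, 3) corresponding end-flange translations from the controller.
    """
    X_F, T_0 = np.asarray(X_F), np.asarray(T_0)
    P1 = (X_F[0] - X_F[1:]).T              # 3 x (n-1) differences in FF
    P0 = R_0.T @ (T_0[1:] - T_0[0]).T      # 3 x (n-1) differences mapped by R_0^T
    K = ((P0 - P0.mean(axis=1, keepdims=True))
         @ (P1 - P1.mean(axis=1, keepdims=True)).T)   # Equation (11)
    U, _, Vt = np.linalg.svd(K)
    R_F = U @ Vt
    if np.linalg.det(R_F) < 0:             # guard against a reflection solution
        U[:, -1] *= -1
        R_F = U @ Vt
    return R_F
```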
After determining $R_F$, we can control the robot so that the flexible sensor measures the same fixed point from different robot poses, and obtain the following from Equation (8):
$$(R_{02} - R_{01}) \cdot T_F = R_{01} \cdot R_F \cdot X_{F1} - R_{02} \cdot R_F \cdot X_{F2} + T_{01} - T_{02} \tag{12}$$
By repeating the pose-changing movement several times and gathering several sets of experimental data, a linear system is obtained in matrix form as:
$$\begin{bmatrix} R_{02}-R_{01} \\ R_{03}-R_{01} \\ \vdots \\ R_{0n}-R_{01} \end{bmatrix} \cdot T_F = \begin{bmatrix} R_{01} R_F X_{F1} - R_{02} R_F X_{F2} + T_{01} - T_{02} \\ R_{01} R_F X_{F1} - R_{03} R_F X_{F3} + T_{01} - T_{03} \\ \vdots \\ R_{01} R_F X_{F1} - R_{0n} R_F X_{Fn} + T_{01} - T_{0n} \end{bmatrix} \tag{13}$$
Equation (13) is a system of linear equations of the form $A \cdot T_F = B$. As long as the coefficient matrix $A$ has full rank, $T_F$ can be solved by the least-squares method as follows:
$$T_F = (A^T \cdot A)^{-1} A^T \cdot B \tag{14}$$
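The following sketch stacks Equation (13) and solves it in the least-squares sense; numpy's lstsq is used in place of the explicit normal equations of Equation (14), which is numerically equivalent for a well-conditioned system.

```python
import numpy as np

def solve_TF(R_F, R_0, T_0, X_F):
    """Solve T_F from Equation (13) by least squares (Equation (14)).

    R_F : (3, 3) rotation from the flexible sensor to the end-flange.
    R_0 : list of n end-flange rotation matrices (poses changed between scans).
    T_0 : list of n end-flange translation vectors from the controller.
    X_F : list of n sphere-center measurements in the flexible sensor frame.
    """
    A_rows, B_rows = [], []
    for i in range(1, len(R_0)):
        A_rows.append(R_0[i] - R_0[0])                    # (R_0i - R_01)
        B_rows.append(R_0[0] @ R_F @ X_F[0] - R_0[i] @ R_F @ X_F[i]
                      + T_0[0] - T_0[i])
    A = np.vstack(A_rows)                  # 3(n-1) x 3 stacked coefficients
    B = np.concatenate(B_rows)             # 3(n-1) right-hand side
    T_F, *_ = np.linalg.lstsq(A, B, rcond=None)
    return T_F
```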
At this point, the rotation matrix $R_F$ and translation vector $T_F$ from the flexible sensor to the end-flange have both been determined, and the global calibration of the flexible sensor is complete.
The method for measuring the sphere-center is described in Reference [17]; the sphere radius must be known so that the sphere-center can be obtained at one robot position from a single laser scan. To guarantee the measuring accuracy, the laser stripe should be projected within R/4 of the sphere surface.

3.2. Stationary Sensors Global Calibration

After the global calibration of the flexible sensor, the stationary sensors’ global calibration can be carried out with the aid of an auxiliary target and the calibrated flexible sensor. Global calibration of the stationary sensors is based on the camera pinhole model and homography principle, which is described in this section first.

3.2.1. Camera Pinhole Model and Homography

For the vision sensors in the hybrid visual inspection system, the standard pinhole model is used for the cameras. The relationship between the coordinate of a spatial point $P_s$ and its corresponding image point $P_c$ is given as follows:
$$s P_c = A_c T_c P_s \tag{15}$$
where $s$ is an arbitrary scale factor, $P_s = [x_s \; y_s \; z_s \; 1]^T$ is the homogeneous coordinate of a spatial point in the object space, and $P_c = [u_c \; v_c \; 1]^T$ is the corresponding homogeneous coordinate in the camera image space. $T_c = \begin{bmatrix} r_{c11} & r_{c12} & r_{c13} & t_{cx} \\ r_{c21} & r_{c22} & r_{c23} & t_{cy} \\ r_{c31} & r_{c32} & r_{c33} & t_{cz} \\ 0 & 0 & 0 & 1 \end{bmatrix}$ is the camera extrinsic matrix, which represents the transformation from the object coordinate system to the camera coordinate system, and $A_c = \begin{bmatrix} f_{cx} & \gamma_c & u_{c0} & 0 \\ 0 & f_{cy} & v_{c0} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$ is the camera intrinsic matrix, which represents the transformation from the camera coordinate system to the image space, with the focal lengths in pixels ($f_{cx}$, $f_{cy}$), the principal point ($u_{c0}$, $v_{c0}$), and the skew factor $\gamma_c$.
A homography matrix, which is a nonsingular 3 × 3 matrix, defines a homogeneous linear transformation from one plane to another in projective space [18]. In this paper, planar homography is used to express the homogeneous linear relationship between the camera image plane and the target plane. According to the pinhole camera model, the coordinates of a point on the target plane and of its image point on the camera image plane satisfy the following relationship:
$$s \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = \begin{bmatrix} f_{cx} & \gamma_c & u_{c0} & 0 \\ 0 & f_{cy} & v_{c0} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} r_{c11} & r_{c12} & r_{c13} & t_{cx} \\ r_{c21} & r_{c22} & r_{c23} & t_{cy} \\ r_{c31} & r_{c32} & r_{c33} & t_{cz} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_t \\ y_t \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} f_{cx} & \gamma_c & u_{c0} \\ 0 & f_{cy} & v_{c0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{c11} & r_{c12} & t_{cx} \\ r_{c21} & r_{c22} & t_{cy} \\ r_{c31} & r_{c32} & t_{cz} \end{bmatrix} \begin{bmatrix} x_t \\ y_t \\ 1 \end{bmatrix} \tag{16}$$
Let $H_t^c$ denote the homography matrix from the target plane to the camera image plane:
$$H_t^c = A \cdot [r_{c1} \;\; r_{c2} \;\; t_c] = \begin{bmatrix} f_{cx} & \gamma_c & u_{c0} \\ 0 & f_{cy} & v_{c0} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_{c11} & r_{c12} & t_{cx} \\ r_{c21} & r_{c22} & t_{cy} \\ r_{c31} & r_{c32} & t_{cz} \end{bmatrix} \tag{17}$$
Thus, Equation (16) can be rewritten as:
$$s P_c = H_t^c P_t \tag{18}$$
where $P_t = [x_t \; y_t \; 1]^T$ is the coordinate of a point on the target plane and $P_c = [u_c \; v_c \; 1]^T$ is the corresponding camera image coordinate.
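The homography itself can be estimated from four or more point correspondences by the standard direct linear transform (DLT), sketched below; in practice a library routine such as OpenCV's findHomography, which adds robust estimation and refinement, may be preferred.

```python
import numpy as np

def estimate_homography(pts_t, pts_c):
    """DLT: fit H in s*P_c = H*P_t from >= 4 plane-to-image point pairs.

    pts_t : (n, 2) point coordinates on the target plane (e.g., from CAD).
    pts_c : (n, 2) corresponding image coordinates (e.g., from ellipse fitting).
    """
    rows = []
    for (xt, yt), (u, v) in zip(pts_t, pts_c):
        rows.append([xt, yt, 1, 0, 0, 0, -u * xt, -u * yt, -u])
        rows.append([0, 0, 0, xt, yt, 1, -v * xt, -v * yt, -v])
    # The homography is the null vector of the stacked system (last right
    # singular vector), up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                     # fix the arbitrary overall scale
```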
Writing $H_t^c = [h_1 \; h_2 \; h_3] = A \cdot [r_{c1} \; r_{c2} \; t_c]$, once $A$ and $H_t^c$ have been obtained, the extrinsic matrix (the transformation from the target coordinate system to the camera coordinate system) can be calculated as follows:
$$\begin{cases} r_{c1} = \gamma A^{-1} h_1 \\ r_{c2} = \gamma A^{-1} h_2 \\ r_{c3} = r_{c1} \times r_{c2} \\ t_c = \gamma A^{-1} h_3 \end{cases} \tag{19}$$
As the target always lies in front of the camera, the sign of $\gamma$ is chosen so that $t_c(3) > 0$: we first set $\gamma = 1/\|A^{-1} h_1\| = 1/\|A^{-1} h_2\|$; if this yields $t_c(3) < 0$, we set $\gamma = -1/\|A^{-1} h_1\|$ and calculate the extrinsic matrix again [19].
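A sketch of this decomposition, including the sign correction described above:

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Equation (19): recover [r1 r2 r3 | t] from intrinsics A and homography H."""
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    gamma = 1.0 / np.linalg.norm(A_inv @ h1)
    t = gamma * (A_inv @ h3)
    if t[2] < 0:                           # target must lie in front of the camera
        gamma = -gamma
        t = -t
    r1 = gamma * (A_inv @ h1)
    r2 = gamma * (A_inv @ h2)
    r3 = np.cross(r1, r2)                  # complete the rotation matrix
    return np.column_stack([r1, r2, r3]), t
```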

3.2.2. Principle of Stationary Sensor Global Calibration

The principle of stationary sensor global calibration is illustrated in Figure 3. After the global calibration of the flexible sensor, an auxiliary target with a 2D planar pattern is mounted on the robot end-flange, with the planar pattern fixed at the working distance of the flexible sensor. The planar target is made of aluminum and carries a 5 × 7 array of holes machined with an accuracy of 0.05 mm. As shown in Figure 3, the front image and the back image are captured by the flexible sensor and the stationary sensor, respectively; a photograph of the planar target captured with a camera is also provided.
When calibrating a stationary sensor, the robot orients the flexible sensor and the calibration target above the stationary sensor, so that the planar pattern is also within the working distance of the stationary sensor. Then, the flexible sensor captures the image of the front side of the planar pattern, the stationary sensor captures the image of the back side, and the image coordinates of the hole-centers in the flexible and stationary sensors are extracted by ellipse-fitting methods [20,21]. As the hole arrays are manufactured by precision machining, the coordinates of the hole centers in the target coordinate system are known from the CAD model. With the hole-centers' image coordinates and object coordinates, we can obtain the following equation:
$$\begin{cases} s_1 P_{cf} = H_{tf}^{cf} P_{tf} \\ s_2 P_{cs} = H_{tb}^{cs} P_{tb} \end{cases} \tag{20}$$
where $P_{cf} = [u_{cf} \; v_{cf} \; 1]^T$ and $P_{cs} = [u_{cs} \; v_{cs} \; 1]^T$ are the image coordinates of the hole array in the flexible and stationary sensors; $P_{tf} = [x_{tf} \; y_{tf} \; 1]^T$ and $P_{tb} = [x_{tb} \; y_{tb} \; 1]^T$ are the coordinates of the hole centers on the front and back sides of the target plane; $H_{tf}^{cf}$ is the homography matrix from the front side of the target plane to the flexible sensor's camera image plane; and $H_{tb}^{cs}$ is the homography matrix from the back side of the target plane to the stationary sensor's camera image plane.
Let $T_{tf}^{cf} = [r_{f1} \; r_{f2} \; r_{f3} \; t_f]$ be the extrinsic transformation matrix from the front side of the target to the flexible sensor, $T_{tb}^{cs} = [r_{s1} \; r_{s2} \; r_{s3} \; t_s]$ be the extrinsic matrix from the back side of the target to the stationary sensor, and $T_{tf}^{tb} = [r_{t1} \; r_{t2} \; r_{t3} \; t_t]$ be the transformation between the front and back sides of the target, which is pre-calibrated with a CMM. According to Equation (17), $H_{tf}^{cf}$ and $H_{tb}^{cs}$ can be rewritten as:
$$\begin{cases} H_{tf}^{cf} = A_{cf} \cdot [r_{f1} \; r_{f2} \; t_f] \\ H_{tb}^{cs} = A_{cs} \cdot [r_{s1} \; r_{s2} \; t_s] \end{cases} \tag{21}$$
As the camera intrinsic matrices of the flexible and stationary sensors have been pre-calibrated, once the homographies $H_{tf}^{cf}$ and $H_{tb}^{cs}$ have been determined, we can calculate the extrinsic transformation matrices $T_{tf}^{cf}$ and $T_{tb}^{cs}$ according to Equation (19).
As shown in Figure 3, we can obtain the transformation from the target front side to the robot base frame through two different coordinate conversion chains:
$$\begin{cases} T_{tf}^{b} = T_e^b \cdot T_{cf}^e \cdot T_{tf}^{cf} \\ T_{tf}^{b} = T_{cs}^b \cdot T_{tb}^{cs} \cdot T_{tf}^{tb} \end{cases} \tag{22}$$
where $T_e^b$ is the robot end-flange pose with respect to the robot base, which can be obtained from the robot forward kinematics; $T_{cf}^e$ is the robot hand-eye relationship, which is calibrated in Section 3.1; and $T_{cs}^b$ is the transformation from the stationary sensor to the robot base, which is to be determined in the stationary sensor global calibration.
According to Equation (22), $T_{cs}^b$ can be calculated as:
$$T_{cs}^b = T_e^b \cdot T_{cf}^e \cdot T_{tf}^{cf} \cdot \left( T_{tb}^{cs} \cdot T_{tf}^{tb} \right)^{-1} \tag{23}$$
With the calibrated flexible sensor and an auxiliary target, global calibration of a stationary sensor can be completed with just one robot movement and two images captured by the flexible and stationary sensors. The whole calibration process needs only one person and takes about five minutes to complete, and it is simple to operate as well as highly efficient. The auxiliary target is installed directly on the connecting rod and, once installed, the robot can move the target and flexible sensor to calibrate all of the stationary sensors. The auxiliary target is also convenient to reinstall during the hybrid system's periodic maintenance.
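In code, Equation (23) reduces to a single chain of matrix products; the sketch below assumes all five 4 × 4 transforms are already available as numpy arrays.

```python
import numpy as np

def stationary_sensor_pose(T_e_b, T_cf_e, T_tf_cf, T_tb_cs, T_tf_tb):
    """Equation (23): pose of a stationary sensor in the robot base frame.

    T_e_b   : end-flange pose from the robot controller (forward kinematics).
    T_cf_e  : hand-eye transform of the flexible sensor (Section 3.1).
    T_tf_cf : target front side -> flexible sensor camera (from homography).
    T_tb_cs : target back side -> stationary sensor camera (from homography).
    T_tf_tb : target front side -> back side (pre-calibrated with a CMM).
    """
    return T_e_b @ T_cf_e @ T_tf_cf @ np.linalg.inv(T_tb_cs @ T_tf_tb)
```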

4. Experiment and Analysis

4.1. Experiment Setup

As mentioned at the beginning, hybrid visual inspection systems have been commonly used in mass production, especially in automobile manufacturing. In order to verify the effectiveness and efficiency of the global calibration method proposed in this paper, we tested an on-site hybrid visual inspection station for the car body rear underbody panel. The panel is about 1500 mm long and 1200 mm wide. A 3D model of the inspection station is shown in Figure 4; it is the last station of a production line for the rear underbody panel. The panel is assembled and welded at the upstream stations first, and then brought to the last station for quality inspection. Dimensional information is saved on a computer and can be queried by quality engineers; a product that is significantly out of tolerance triggers a line stop and is removed for repair.
This hybrid visual inspection system consists of a flexible sensor mounted on a KUKA KR16-2 industrial robot (KUKA, Augsburg, Germany), and 10 stationary sensors fixed on the baseboard of the fixture. According to the specifications, the typical repeatability of this robot is about ±0.06 mm. The robot uses the flexible sensor to measure 48 points around the rear underbody panel, and the stationary sensors measure the featured points under the panel, which cannot be reached by the robot. Also, three stationary sensors were adopted to measure the locating points under the panel, and the workpiece frame was established on each panel with the principle presented in Section 2.2.

4.2. Calibration Results

Before carrying out the experimental test, the global calibration of the flexible and stationary sensors were performed first. As shown in Figure 4, there were two standard spheres fixed in the robot workspace. The standard sphere is extremely hard with a low coefficient of thermal expansion, and is designed for real-time thermal error compensation of the industrial robot. Also, it can be used as the reference point in the global calibration of the flexible sensor.
First, the industrial robot used the flexible sensor to scan the standard sphere at four different positions, with the pose of the end-flange unchanged during these four motions. Then, the robot used the flexible sensor to scan the standard sphere at six other positions, with the pose of the end-flange changed between these six motions. With the robot end-flange poses read from the robot control system and the positions of the sphere-center measured in the flexible sensor frame, the rotation matrix and translation vector from the flexible sensor to the end-flange were calibrated as:
$$T_f^e = \begin{bmatrix} 0.647394 & 0.017690 & -0.761950 & 46.037782 \\ -0.014428 & 0.999836 & 0.010954 & 4.455687 \\ 0.762019 & 0.003901 & 0.647543 & 413.045474 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
After the flexible sensor was calibrated, an auxiliary target was mounted on the robot connecting rod as shown in Figure 5, and the stationary sensors were calibrated one by one with the method presented in Section 3.2. First, the robot oriented the flexible sensor and the calibration target above the stationary sensor, where the planar pattern was also within the working distance of the stationary sensor. Then, the flexible sensor captured the image of the front side of the planar pattern, and the stationary sensor captured the image of the back side. With the captured images, the transformations from the target to the flexible and stationary sensors were calculated. The robot end-flange pose was also read from the robot control system, and the transformation between the stationary sensor and the robot base was determined with Equation (23).

4.3. Accuracy Validation Experiments

With the calibrated hybrid system, the rear underbody panel was measured at the inspection station. As mentioned above, 58 featured points were measured on the panel, including holes, edges, and flanges: 48 points were measured by the flexible sensor and 10 points by the stationary sensors. Once the underbody panel was fixed and clamped on the fixture, three stationary sensors measured the locating points and established the workpiece frame first; then the other seven stationary sensors and the flexible sensor measured the remaining points, and the coordinates of the featured points were unified in the workpiece frame. After being measured at the inspection station, the underbody panel was also measured by a CMM. Due to its high accuracy, the measured results of the CMM were regarded as reference values, and deviations between the measured results of the hybrid system and the CMM were calculated. Ten different panels were measured to assess the system accuracy; for illustration, the deviations of one featured hole along its three coordinate directions are shown in Figure 6.
From Figure 6, it can be seen that the deviations between the measured values of the hybrid visual system and the CMM appear random but are not distributed symmetrically about zero; they can be assumed to follow a normal distribution with a nonzero mean. The Lilliefors test for normality [22,23] confirms this assumption, so the experimental data can be fitted with a normal distribution function; the fitting results are shown on the right side of Figure 6. For all 58 featured points, most of the deviations between the measured results of the two systems are about 2–4 mm, and these errors are reproducible and consistently in the same direction. It is safe to conclude that there are remarkable systematic errors in the measured results of this hybrid inspection system, which are mainly caused by the inaccuracy of the global calibration.
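As an illustration, the Lilliefors test is available in the statsmodels package; the sketch below runs it on hypothetical deviation data for one vector of one featured point.

```python
import numpy as np
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical deviations (hybrid system minus CMM, mm) for one vector.
deviations = np.array([2.8, 3.1, 2.6, 3.4, 2.9, 3.2, 2.7, 3.0, 3.3, 2.5])

ks_stat, p_value = lilliefors(deviations, dist="norm")
if p_value > 0.05:   # normality not rejected at the 5% level
    print("normal fit: mean %.2f mm, std %.2f mm"
          % (deviations.mean(), deviations.std(ddof=1)))
```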
To a large extent, the measuring accuracy of the hybrid inspection system is limited by the robot positioning accuracy, as industrial robots are normally designed for repetitive work such as pick-and-place and spot welding, exhibiting high repeatability but low absolute accuracy. For a KUKA KR16-2 (KUKA, Augsburg, Germany) industrial robot, the repeatability is up to ±0.05 mm, but the absolute accuracy can reach a few millimeters. The discrepancy between the robot's ideal and real behavior stems from joint angle offsets, geometric link length inaccuracies, and non-geometric factors such as flexibility, backlash, thermal expansion, etc. As the nominal robot model specified by the manufacturer has been used in the global calibration and measurement model of the hybrid system, the model inaccuracy inevitably passes into the calibration and measurement results.
As the hybrid visual inspection system described in this paper is mainly applied for mass production such as automobile manufacturing, we can compensate the systematic error of the hybrid system based on the measured data of earlier products. That is, after the hybrid inspection system has been installed and debugged on-site, a few products (usually 10–20) are measured with the hybrid system and CMM, and an average error of each featured point is calculated as follows:
$$\begin{cases} \Delta x = \sum_{i=1}^{n} (x_{CMM} - x_{hybrid}) / n \\ \Delta y = \sum_{i=1}^{n} (y_{CMM} - y_{hybrid}) / n \\ \Delta z = \sum_{i=1}^{n} (z_{CMM} - z_{hybrid}) / n \end{cases} \tag{24}$$
where $n$ is the number of products measured for compensation. As the systematic errors are reproducible and consistent, the average error above can be used to represent them, and an offset value $offset = (\Delta x, \Delta y, \Delta z)$ is obtained for each featured point. For the products measured during later practical production, the offset value is used to compensate the measured result as:
$$\begin{cases} x_{comp} = x_{meas} + \Delta x \\ y_{comp} = y_{meas} + \Delta y \\ z_{comp} = z_{meas} + \Delta z \end{cases} \tag{25}$$
where $(x_{meas}, y_{meas}, z_{meas})$ is the measured result of the hybrid system and $(x_{comp}, y_{comp}, z_{comp})$ is the compensated result, which is taken as the final result of the measurement system.
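A minimal sketch of this offset-based compensation, with hypothetical coordinates for one featured point:

```python
import numpy as np

def fit_offsets(cmm, hybrid):
    """Equation (24): per-point mean error over the n compensation products.

    cmm, hybrid : (n, 3) arrays of CMM and hybrid-system results for one point.
    """
    return (np.asarray(cmm) - np.asarray(hybrid)).mean(axis=0)

def compensate(measured, offset):
    """Equation (25): apply the stored offset to a new measurement."""
    return np.asarray(measured) + offset

# Hypothetical data for one featured point on 3 early products (mm).
cmm    = [[10.0, 20.0, 5.0], [10.1, 20.2, 5.1], [9.9, 19.9, 4.9]]
hybrid = [[7.2, 17.1, 2.3], [7.4, 17.2, 2.2], [7.1, 16.8, 2.1]]
offset = fit_offsets(cmm, hybrid)
print(compensate([7.3, 17.0, 2.4], offset))
```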
In order to verify the efficiency of the compensation method presented above and evaluate the final accuracy of the hybrid system, a follow-up study was performed on the compensated hybrid system. During the practical production process, hundreds of products were inspected on the hybrid station every day, and one of the products was picked out randomly and measured with CMM. The experiment lasted for one month and we obtained a dataset of 30 products. Deviations between measured results of the hybrid system and CMM were calculated.
Another way to evaluate the accuracy of the hybrid system is to calculate the correlation coefficient between the measured results of each featured point. The correlation coefficient quantifies the degree to which two variables are related; it ranges from −1 to 1, where a value of 1 indicates a perfect positive relationship: for an increase in one variable, there is a proportional increase in the other. The strength of the relationship varies with the value of the correlation coefficient. The correlation coefficient of the x component is calculated as follows:
$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(X_i - \bar{X})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2} \cdot \sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2}} = \frac{n \sum_{i=1}^{n} x_i X_i - \sum_{i=1}^{n} x_i \cdot \sum_{i=1}^{n} X_i}{\sqrt{n \sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2} \cdot \sqrt{n \sum_{i=1}^{n} X_i^2 - \left(\sum_{i=1}^{n} X_i\right)^2}} \tag{26}$$
where $x_i$ and $X_i$ are the measured values of the hybrid system and the CMM, and $\bar{x}$ and $\bar{X}$ are their means. The deviation and correlation coefficient between the measured results of the two systems can be calculated for the three components (x, y, z) of each featured point.
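For reference, Equation (26) reduces to a few lines of numpy (or the built-in np.corrcoef):

```python
import numpy as np

def correlation(x, X):
    """Equation (26): Pearson correlation between hybrid (x) and CMM (X) values."""
    x, X = np.asarray(x, dtype=float), np.asarray(X, dtype=float)
    xc, Xc = x - x.mean(), X - X.mean()
    return (xc @ Xc) / np.sqrt((xc @ xc) * (Xc @ Xc))

# Equivalent library call: np.corrcoef(x, X)[0, 1]
```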
For the 58 featured points on the rear underbody, there are 174 vectors, and the deviation and correlation coefficient were calculated for each vector. Statistics of the calculated results are shown in Table 1.
From Table 1, we can see that for the 174 measured vectors on the car rear underbody, the deviations of all vectors are less than ±0.5 mm, the deviations of 96.55% of the vectors are less than ±0.3 mm, and the correlation coefficients of 91.38% of the vectors are larger than 0.6, while those of 85.64% are larger than 0.7. Since most dimensional tolerances in automobile manufacturing are set to ±2 mm, a quality inspection system with an accuracy of ±0.5 mm is well qualified for the job. It can therefore be concluded that the compensated hybrid inspection system has a relatively high accuracy and can be applied to quality inspection in mass production such as automobile manufacturing.

5. Conclusions

A rapid global calibration technology for a hybrid visual inspection system has been presented in this paper. The hybrid system consists of an industrial robot with a flexible sensor and several stationary sensors, and the system global calibration is divided into two steps: calibrating the flexible sensor first and then calibrating the stationary sensors. The flexible sensor global calibration is performed based on the robot kinematic principle. With the calibrated flexible sensor, the stationary sensors are calibrated one by one based on the homography principle. Only a standard sphere and an auxiliary target with a 2D planar pattern are applied during the system global calibration; no sophisticated equipment is adopted, and the calibration is convenient to re-perform during the hybrid system's periodic maintenance. With an error compensation method, the final accuracy of the hybrid system is evaluated with the deviation and correlation coefficient between the measured results of the hybrid system and the CMM. An accuracy verification experiment showed that the deviations of 96.55% of the measured vectors are within ±0.3 mm, and the correlation coefficients of 85.64% of the vectors are larger than 0.7, which proves that the global calibration method is feasible and its accuracy is relatively high. Since it is easy and convenient to implement on-site, we expect the proposed rapid global calibration technology to find wide application in product quality inspection for mass production. Future efforts will be devoted to extending the application of hybrid visual inspection systems.

Acknowledgments

The work was supported by National Natural Science Foundation of China (51475329, 51225505, 51305297); National Science and Technology Major Project (2014ZX04001-081-06); National Key Scientific Instrument and Equipment Development Project (2013YQ350747). The authors would like to express their sincere appreciation to them, and comments from the reviewers and the editor are very much appreciated.

Author Contributions

Tao Liu conceived the idea, designed the architecture and finalized the paper; Shibin Yin collected references and performed the experiments; Yin Guo and Jigui Zhu collected publicly available datasets and polished the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, C.; Xu, J.; Zhang, Y. Three dimensional reconstruction of free flying insect based on single camera. Acta Opt. Sin. 2006, 26, 61–66. [Google Scholar]
  2. Lim, K.B.; Xiao, Y. Virtual stereovision system: new understanding on single-lens stereovision using a biprism. J. Electron. Imaging 2005, 14, 781–792. [Google Scholar] [CrossRef]
  3. Zhang, S.; Huang, P.S. Novel method for structured light system calibration. Opt. Eng. 2006, 45, 083601. [Google Scholar]
  4. Quan, C.; Tay, C.J.; Chen, L.J. A study on carrier-removal techniques in fringe projection profilometry. Opt. Laser Technol. 2007, 39, 1155–1161. [Google Scholar] [CrossRef]
  5. Kitahara, I.; Ohta, Y.; Saito, H. Recording of multiple videos in a large-scale space for large-scale virtualized reality. J. Inst. Image Inf. Telev. Eng. 2002, 56, 1328–1333. [Google Scholar]
  6. Lu, R.S.; Li, Y.F. A global calibration technique for high-accuracy 3D measurement systems. Sens. Actuators A Phys. 2004, 116, 384–393. [Google Scholar] [CrossRef]
  7. Liu, Y.; Lin, J.R. Multi-sensor global calibration technology of vision sensor in car body-in-white visual measurement system. Acta Metrol. Sin. 2014, 5, 204–209. [Google Scholar]
  8. Liu, Z.; Wei, X.; Zhang, G. External parameter calibration of widely distributed vision sensors with non-overlapping fields of view. Opt. Lasers Eng. 2013, 51, 643–650. [Google Scholar] [CrossRef]
  9. Xie, M.; Wei, Z. A flexible technique for calibrating relative position and orientation of two cameras with no-overlapping FOV. Measurement 2013, 46, 34–44. [Google Scholar] [CrossRef]
  10. Leu, M.C.; Ji, Z. Non-Linear Displacement Sensor Based on Optical Triangulation Principle. U.S. Patent 5,113,080, 12 May 1992. [Google Scholar]
  11. Weng, J.; Cohen, P. Camera calibration with distortion models and accuracy evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980. [Google Scholar] [CrossRef]
  12. Wu, B.; Xue, T. Calibrating stereo visual sensor with free-position planar pattern. J. Optoelectron. Laser 2006, 17, 1293–1296. [Google Scholar]
  13. Levenberg, K.A. A method for the solution of certain problems in least squares. Q. Appl. Math. 1944, 2, 164–168. [Google Scholar] [CrossRef]
  14. Marquardt, D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963, 11, 431–441. [Google Scholar] [CrossRef]
  15. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. Numer. Anal. 1978, 630, 105–116. [Google Scholar]
  16. Yin, S.; Ren, Y. A vision-based self-calibration method for robotic visual inspection systems. Sensors 2013, 13, 16565. [Google Scholar] [CrossRef] [PubMed]
  17. Yin, S.; Ren, Y.; Guo, Y. Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology. Measurement 2014, 54, 65–76. [Google Scholar] [CrossRef]
  18. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. 2000, 22, 1330–1334. [Google Scholar] [CrossRef]
  19. Hartley, R.I. In defence of the 8-point Algorithm. In Proceedings of the Fifth International Conference on Computer Vision, Boston, MA, USA, 20–23 June 1995; pp. 1064–1070. [Google Scholar]
  20. He, D.; Liu, X.; Peng, X. Eccentricity error identification and compensation for high-accuracy 3D optical measurement. Meas. Sci. Technol. 2013, 24, 660–664. [Google Scholar] [CrossRef] [PubMed]
  21. Fitzgibbon, A.W.; Fisher, R.B.; Pilu, M. Ellipse-Specific Direct Least-Square Fitting. In Proceedings of the International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996; pp. 599–602. [Google Scholar]
  22. Lilliefors, H.W. On the Kolmogorov-Smirnov test for normality with mean and variance unknown. J. Am. Stat. Assoc. 1967, 62, 399–402. [Google Scholar] [CrossRef]
  23. Van Soest, J. Some experimental results concerning tests of normality. Stat. Neerlandica 1967, 21, 91–97. [Google Scholar] [CrossRef]
Figure 1. Schematic of the hybrid visual inspection system.
Figure 2. The 3-2-1 principle used to define workpiece frame (WF).
Figure 3. Principle of the stationary sensor global calibration.
Figure 4. 3D model of the hybrid visual inspection station.
Figure 5. Global calibration of the stationary sensors.
Figure 6. Deviations between the measured values of the hybrid visual system and the Coordinate Measuring Machine (CMM): (a) the 10 original deviation data of a featured hole; (b) the corresponding normal distribution fitting.
Table 1. Statistics of the calculated deviation and correlation coefficients.
| Deviation (mm) | Sum | Rate | Correlation Coefficient | Sum | Rate |
|---|---|---|---|---|---|
| [−0.1, 0.1] | 37 | 21.26% | >0.9 | 28 | 16.09% |
| [−0.2, −0.1] & [0.1, 0.2] | 82 | 47.13% | [0.8, 0.9] | 58 | 33.33% |
| [−0.3, −0.2] & [0.2, 0.3] | 49 | 28.16% | [0.7, 0.8] | 63 | 36.21% |
| [−0.5, −0.3] & [0.3, 0.5] | 6 | 3.45% | [0.6, 0.7] | 10 | 5.74% |
| <−0.5 & >0.5 | 0 | 0 | <0.6 | 15 | 8.62% |
