2.2. Hand–Eye Sensor Module
It is well known that if the rollers correctly follow the curve of the work piece and maintain stable contact with the sheet metal surface during the hemming process, most of the defects caused by improper force can be eliminated or at least minimized. It is therefore important to eliminate the position deviations introduced by manufacturing tolerances, sheet metal deflection, and the assembly tolerances of the working platform, especially when no special die is available to provide reference data for aligning the parts in the system. To overcome these problems, compensation of the hemming path must rely on reconstruction and analysis of three-dimensional (3D) data acquired after the work piece is positioned. The accuracy of the reconstructed data and coordinates affects the final product quality, mainly in the rolled-over edges, i.e., the roll-in, roll-out, warp, and recoil produced by the rollers. We therefore proposed and designed a hand–eye sensor module for roller hemming to reconstruct the 3D shape data of the incoming materials.
In 2013, Swillo [12] designed a portable vision-based measurement system for the hemming process and demonstrated that quick and accurate analysis of the geometry and deformation of hemmed samples can be achieved. Following Swillo's original design, an enhanced vision measurement module, namely the quick-change sensor module shown in Figure 3, was proposed and constructed. This quick-change sensor module can be easily installed onto and released from the robotic arm through the quick-change interface. Once installed on the robotic arm, the sensor module can be used to scan over time to determine the position and posture of the sheet metal work piece. The sensor module consists of a laser emitter and a CMOS image receiver. The laser emitter projects a line pattern onto the object surface; the light reflected from the projected pattern on the contour of the work piece is then captured by the CMOS image receiver to estimate the coordinates of the projected line. A series of coordinates is acquired by continuously scanning along the hemming path, thus forming the 3D model of the metal sheet around the hemming path.
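The laser-line triangulation principle described above can be sketched numerically. The sketch below is a minimal illustration only: the pinhole intrinsic matrix `K`, the laser-plane parameters, and the function name are hypothetical stand-ins, not the actual sensor calibration of the system.

```python
import numpy as np

# Illustrative laser-triangulation sketch (hypothetical camera model, not the
# actual sensor calibration). A pixel on the imaged laser line is back-projected
# to a viewing ray, which is intersected with the calibrated laser plane.

def pixel_to_laser_point(u, v, K, plane_n, plane_d):
    """Back-project pixel (u, v) to the 3D point where its viewing ray
    meets the laser plane n . X = d, all in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction from camera center
    t = plane_d / (plane_n @ ray)                   # scale so the point lies on the plane
    return t * ray

# Hypothetical intrinsics and a laser plane 0.5 m in front of the camera
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 0.5

# The principal point back-projects along the optical axis onto the plane
P = pixel_to_laser_point(320.0, 240.0, K, n, d)
```

Sweeping this computation along the detected laser line for each scan position yields the point series that forms the 3D model.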
The quick-change sensor module has a built-in battery, so the module does not need to be restarted and reinitialized for lack of power each time it is disconnected from and reconnected to the robot arm. The quick-change interface illustrated in Figure 4 serves as the interface not only for the sensor module but also for the hemming tool. It is divided into two parts, namely the main plate and the tool plate. The main plate is installed on the flange surface of the robot arm, and the tool plate is installed on top of the sensor module and the hemming tool. With this quick-change interface, the robot arm can swap the hemming tool and the sensor module within 5 s through the coupling and release of the main plate and the tool plate.
2.3. 3D Surface Reconstruction and Coordinate Calibration Methodology
In order to estimate the 3D coordinates from the sensor module, the poses, i.e., the positions and orientations of the image receiver and the laser emitter, are calibrated relative to the hemming tool using the hand–eye calibration method [13,14]. As shown in Figure 5, the hand–eye coordinate system includes the world frame (also called the user frame), the sensor frame, the tool frame, and the robot frame. The sensor frame is placed at the matching point of the two rollers, with its y-axis pointing along the roller's circumferential direction, its z-axis perpendicular to the lower roller's surface, and its x-axis perpendicular to both the y-axis and the z-axis. In order to transform the measured coordinates from the sensor frame into the world frame, a multi-frame transformation is performed to solve the rotation and translation relation between the robot and the sensor module. This operation uses a homogeneous transformation matrix, which combines a rotation matrix and a translation vector in the form of
$$
{}^{A}_{B}T = \begin{bmatrix} {}^{A}_{B}R & {}^{A}P_{B} \\ \mathbf{0}_{1 \times 3} & 1 \end{bmatrix},
$$

where ${}^{A}_{B}R$ is the rotation matrix from frame {A} to frame {B}, ${}^{A}P_{B}$ is the translation vector from {A} to {B}, and the rotation matrix adopts X-Y-Z fixed angles with the rotation angles $\alpha$, $\beta$, and $\gamma$ taken about the X-, Y-, and Z-axis, respectively. This rotation matrix ${}^{A}_{B}R$ can then be expressed as

$$
{}^{A}_{B}R = R_{Z}(\gamma)\, R_{Y}(\beta)\, R_{X}(\alpha) =
\begin{bmatrix}
c\gamma\, c\beta & c\gamma\, s\beta\, s\alpha - s\gamma\, c\alpha & c\gamma\, s\beta\, c\alpha + s\gamma\, s\alpha \\
s\gamma\, c\beta & s\gamma\, s\beta\, s\alpha + c\gamma\, c\alpha & s\gamma\, s\beta\, c\alpha - c\gamma\, s\alpha \\
-s\beta & c\beta\, s\alpha & c\beta\, c\alpha
\end{bmatrix},
$$

where $c$ and $s$ denote the cosine and sine of the corresponding angle.
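The construction of such a homogeneous transformation can be sketched as follows; the function names are ours, and the angle convention (X-Y-Z fixed angles, i.e., $R = R_Z(\gamma) R_Y(\beta) R_X(\alpha)$) matches the description above.

```python
import numpy as np

# Sketch of the homogeneous transformation described above: a 4x4 matrix built
# from X-Y-Z fixed rotation angles (alpha about X, beta about Y, gamma about Z)
# and a translation vector. Function names are illustrative, not from the paper.

def rot_xyz_fixed(alpha, beta, gamma):
    """X-Y-Z fixed-angle rotation: R = Rz(gamma) @ Ry(beta) @ Rx(alpha)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def homogeneous(R, p):
    """Assemble the 4x4 transform [[R, p], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# A 90-degree rotation about Z plus a translation, applied to a point
T = homogeneous(rot_xyz_fixed(0.0, 0.0, np.pi / 2), [1.0, 2.0, 3.0])
pt = (T @ np.array([1.0, 0.0, 0.0, 1.0]))[:3]  # rotate, then translate
```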
By applying the above transformation matrix layer by layer, the coordinates converted from the world frame {W} to the sensor frame {L} are calculated through the conversion process shown in Figure 6. The conversion relationship ${}^{O}_{H}T$ between the robot frame {O} and the end-effector frame {H} is known through the built-in encoders of the robot arm. The relationship ${}^{W}_{O}T$ between the world frame {W} and the robot frame {O} and the hand–eye relationship ${}^{H}_{L}T$ could in principle be calculated from the dimensions of the sensor and the connector, but these values are easily affected by manufacturing tolerances and assembly errors, leading to poor accuracy. Therefore, the relationship ${}^{W}_{O}T$ between the world frame {W} and the robot frame {O} and the hand–eye relationship ${}^{H}_{L}T$ need to go through the calibration procedure to achieve better accuracy.
Figure 7 shows the calibration procedure used in this research. Firstly, the robot tool frame is calibrated to find the conversion relationship between the tool center point (TCP) of the robot and the default tool frame. This calibration allows the robot arm to calculate the position and orientation of the end-effector, i.e., the conversion relationship ${}^{O}_{H}T$ between the end-effector frame and the robot frame. Then, the conversion relationship ${}^{W}_{O}T$ between the world frame and the robot frame is calibrated using the known ${}^{O}_{H}T$. Finally, through ${}^{W}_{O}T$ and ${}^{O}_{H}T$, the hand–eye relationship can be calibrated and established to obtain the conversion relationship ${}^{H}_{L}T$ between the sensor frame (the eye) and the end-effector frame (the hand) of the robot arm.
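Once all three transforms are available, a scanned point is carried from the sensor frame into the world frame by composing them. The sketch below illustrates this chain; the matrix values are arbitrary stand-ins for the calibrated transforms, and the `T_WO`, `T_OH`, `T_HL` names are ours.

```python
import numpy as np

# Sketch of the frame chain: a point in the sensor frame {L} is mapped into
# the world frame {W} via T_WO @ T_OH @ T_HL. Values are illustrative only.

def make_T(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

T_WO = make_T(np.eye(3), [0.0, 0.0, 0.5])    # world -> robot base (calibrated)
T_OH = make_T(np.eye(3), [0.2, 0.0, 0.3])    # robot base -> end-effector (encoders)
T_HL = make_T(np.eye(3), [0.0, 0.05, 0.1])   # end-effector -> sensor (hand-eye)

p_L = np.array([0.01, 0.02, 0.03, 1.0])      # a scanned point, sensor frame
p_W = (T_WO @ T_OH @ T_HL @ p_L)[:3]         # the same point, world frame
```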
In order to perform the above calculations and proceed with the calibration, the calibration device shown in Figure 8 was designed and used to establish the references for the sensor. This device is composed of three modules, namely an optic calibration plate, a reference ball used to establish the robot reference, and a scale block used to identify the XYZ position of the robot TCP.
The conversion relationship ${}^{O}_{H}T$ from the robot base frame {O} to the end-effector frame {H} is calibrated using the following steps. First, the calibration jig is installed at the original TCP of the robot arm in place of a hemming tool. Since the designed jig has the same dimensions as the actual hemming tool, the end-effector frame {H} can be calibrated by aligning it with the hemming tool frame. Secondly, the robot arm carrying the calibration jig is manipulated to the origin of the calibration module at three or more different angles, as illustrated in Figure 9a. In this step, the spherical tip on the front of the jig is brought into light contact with the gauge module so that the gauges produce displacement readings. According to the changes in the readings, the endpoint of the calibration jig can be adjusted to the same position under the different postures, thereby reducing the influence of mechanical play within the robot itself on the calibration accuracy. Once multiple postures have been recorded, the last step is to calculate the conversion relationship between the end-effector frame and the original robot TCP frame to obtain the transformation matrix ${}^{O}_{H}T$ from the robot base frame {O} to the end-effector frame {H}.
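This multi-posture step can be illustrated with a pivot-style least-squares sketch. Assuming (our assumption, not the paper's stated formulation) that each recorded posture gives a base-to-flange pose $(R_i, p_i)$ while the jig tip touches the same fixed point $q$, then $R_i t + p_i = q$ for the unknown tool offset $t$, and stacking these equations yields both $t$ and $q$:

```python
import numpy as np

# Pivot-style sketch of the multi-posture TCP calibration: the jig tip touches
# one fixed point q under several orientations, so R_i @ t + p_i = q. Stacking
# [R_i, -I] [t; q] = -p_i and solving in a least-squares sense recovers t.
# This formulation is our illustrative assumption, not the paper's exact method.

def calibrate_tcp(Rs, ps):
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R in Rs])
    b = np.concatenate([-p for p in ps])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]  # tool offset t, fixed touched point q

# Synthetic check: a known tool offset observed from three flange orientations
def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

t_true = np.array([0.0, 0.0, 0.15])
q_true = np.array([0.4, 0.1, 0.2])
Rs = [np.eye(3), rz(0.6), rx(0.5) @ rz(-0.4)]
ps = [q_true - R @ t_true for R in Rs]   # poses consistent with the fixed point
t_est, q_est = calibrate_tcp(Rs, ps)
```

Three sufficiently distinct orientations make the stacked system full rank, which is why the procedure requires "three or more different angles".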
The calibration procedure for the world frame is similar to the above-mentioned TCP calibration process. After installing the calibration jig on the end of the robot, the robot is manipulated to touch the origin, the X-direction point, and the Y-direction point of the world frame {W} with the same posture, as shown in Figure 9b. According to the readings of the gauge module, the end position of the robot arm is fine-tuned to confirm alignment with the calibration module. After this process, the transformation matrix ${}^{W}_{O}T$ describing the transformation from the world frame {W} to the robot frame {O} can be derived from the current posture of the robot arm.
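The three touched points determine the world frame by construction: the X-direction point fixes the x-axis, the Y-direction point is orthogonalized against it, and the cross product completes a right-handed frame. A minimal sketch (illustrative point values and function names):

```python
import numpy as np

# Sketch of deriving the world frame from three touched points, all expressed
# in the robot base frame {O}: the origin, a point along +X, and a point along
# +Y of the world frame. Gram-Schmidt gives the rotation; the origin gives the
# translation. Point values below are illustrative.

def frame_from_points(origin, x_pt, y_pt):
    x = x_pt - origin
    x = x / np.linalg.norm(x)
    y = y_pt - origin
    y = y - (y @ x) * x          # remove any component along x
    y = y / np.linalg.norm(y)
    z = np.cross(x, y)           # right-handed z-axis
    T = np.eye(4)
    T[:3, :3] = np.column_stack([x, y, z])  # world axes in {O} coordinates
    T[:3, 3] = origin
    return T                     # T_OW; invert to obtain T_WO

T_OW = frame_from_points(np.array([0.3, 0.2, 0.0]),   # world origin in {O}
                         np.array([0.3, 0.3, 0.0]),   # +X point of {W}
                         np.array([0.2, 0.2, 0.0]))   # +Y point of {W}
T_WO = np.linalg.inv(T_OW)
```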
The last step is the calibration of the hand–eye relationship. Its purpose is to calibrate the conversion relationship between the sensor and the end-effector frame of the robot, so that the scanned data can later be converted directly into the world frame through the robot's posture during the sensing process. As shown in Figure 10, this calibration requires aligning the sensor module with the optic calibration plate in the center of the calibration device, and it uses the plate's black-and-white checkerboard grid points to perform vision correction and calculate the 3D conversion relationship with respect to the optic calibration plate. Next, through the known position of the optic calibration plate in the world frame, the conversion relationship ${}^{W}_{L}T$ of the sensor relative to the world frame can be derived. In the meantime, the postures of the robot are recorded, and the conversion relationship ${}^{W}_{H}T$ of the end-effector of the robot in the world frame is calculated at the same time. The hand–eye relationship ${}^{H}_{L}T$ can then be derived using the following formula:

$$
{}^{H}_{L}T = \left({}^{W}_{H}T\right)^{-1}\, {}^{W}_{L}T.
$$

After calibrating the hand–eye relationship, the points scanned by the sensor can be transformed from the sensor frame into the world frame.
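This final step amounts to one matrix inversion and one multiplication: the world-frame sensor pose (from the plate) composed with the inverse of the world-frame end-effector pose (from the robot posture). A numeric sketch with illustrative values:

```python
import numpy as np

# Sketch of the hand-eye step: given the sensor pose T_WL (from the optic
# calibration plate) and the end-effector pose T_WH (from the robot posture),
# the fixed hand-eye transform is T_HL = inv(T_WH) @ T_WL. Values illustrative.

def make_T(R, p):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

T_WH = make_T(rz(0.3), [0.5, 0.1, 0.4])           # end-effector pose in {W}
T_HL_true = make_T(np.eye(3), [0.0, 0.05, 0.12])  # "ground-truth" hand-eye
T_WL = T_WH @ T_HL_true                           # sensor pose observed in {W}

T_HL = np.linalg.inv(T_WH) @ T_WL                 # recovered hand-eye transform
```

In practice several posture/plate observations would be averaged to suppress noise, but the single-observation algebra is the same.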
2.4. Hemming Path Adjustment
After completing the above-mentioned calibration procedure, the 3D surface model of the actual work piece can be reconstructed from the scanned data. Because a general working platform was used in this study instead of a special die, there is no supporting surface to provide a reference or alignment point under the sheet metal. In this situation, the working posture is theoretically non-ideal. As such, if an ideal hemming path generated from the ideal CAD model were used, hemming failures and defects would certainly occur. Instead, a real hemming path that adapts to such deviations can be generated from the 3D reconstructed surface model. This real hemming path, which differs from the ideal hemming path generated by the CAD model, can be used to overcome the position deviations caused by manufacturing tolerances, sheet metal deflection, and position errors introduced by loading and unloading of the work pieces.
For an illustrative example of how the actual hemming point is estimated, see Figure 11. Taking the surface model of the inner sheet S1 as a reference line, the projection of the tip of the outer sheet S2 onto this reference line can be calculated to obtain the actual hemming point O. Since the sensor frame {L} measures only in its X-Z plane, each slice of the 3D surface model can be reduced to a two-dimensional plane.
The reference line equation is defined as

$$
z = a x + b,
$$

where $x_i$ and $z_i$ are the horizontal and vertical coordinates of the $i$-th point in S1, respectively, and the cost function is expressed as

$$
E(a, b) = \sum_{i=1}^{n} \left( a x_i + b - z_i \right)^2,
$$

where $n$ is the number of 3D points. To obtain the parameters $a$ and $b$ so as to minimize $E$, a partial derivative is performed on $E$ with respect to each parameter:

$$
\frac{\partial E}{\partial a} = 2 \sum_{i=1}^{n} x_i \left( a x_i + b - z_i \right) = 0, \qquad
\frac{\partial E}{\partial b} = 2 \sum_{i=1}^{n} \left( a x_i + b - z_i \right) = 0.
$$

Equivalently, the points can be stacked into the homogeneous system $M \mathbf{x} \approx \mathbf{0}$, where

$$
M = \begin{bmatrix} x_1 & 1 & -z_1 \\ \vdots & \vdots & \vdots \\ x_n & 1 & -z_n \end{bmatrix}, \qquad
\mathbf{x} = \begin{bmatrix} a \\ b \\ 1 \end{bmatrix}.
$$
Through singular value decomposition (SVD), the optimal $\mathbf{x}$ can be obtained from the eigenvector associated with the minimal eigenvalue of the stacked coefficient matrix, scaled so that its last component equals one. When $a$ and $b$ are obtained, the flange point is extracted from the 3D model, and subtracting any point on the reference line from it yields a vector; the hemming point O is then solved by orthogonally projecting this vector onto the direction vector of the reference line. After each path point is calculated from the scanned point cloud, the hemming path is the collection of all the path-point coordinates calculated through the above process.
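The line-fit-and-project step can be sketched as follows. The data are synthetic and the function names are ours; the fit uses the minimal right singular vector of the stacked matrix, as described above.

```python
import numpy as np

# Sketch of the hemming-point computation: fit the reference line of the inner
# sheet S1 in the sensor X-Z plane via SVD, then orthogonally project the tip
# of the outer sheet S2 onto that line. Synthetic data, illustrative names.

def fit_line_svd(x, z):
    """Fit z = a*x + b: the minimal right singular vector of M = [x_i, 1, -z_i]
    spans the null space (a, b, 1) up to scale (non-vertical lines assumed)."""
    M = np.column_stack([x, np.ones_like(x), -z])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                       # right singular vector, minimal singular value
    return v[0] / v[2], v[1] / v[2]  # scale so the last component is 1

def project_point(a, b, p):
    """Orthogonal projection of 2D point p onto the line z = a*x + b."""
    p0 = np.array([0.0, b])                       # a point on the line
    d = np.array([1.0, a]) / np.hypot(1.0, a)     # unit direction of the line
    return p0 + ((p - p0) @ d) * d

x = np.linspace(0.0, 1.0, 20)
z = 0.5 * x + 0.1                                 # exact reference line of S1
a, b = fit_line_svd(x, z)
O = project_point(a, b, np.array([0.4, 0.8]))     # hemming point from S2 tip
```

Repeating this for every scanned slice produces the sequence of path points that forms the adjusted hemming path.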