3.1. Concept of the Method
The programming method for industrial robots presented here aims to keep both the duration of the programming process and the robot downtime as low as possible. For this purpose, the method is divided into two stages, which take place in two real rooms and are extended by virtual objects (see Figure 1).
The two rooms do not have to be identical, but they must be of a similar scale, which rules out a miniature replica of the real cell. The only relevant commonality between the rooms is the workpieces to be machined together with their material carriers and fixtures. One or several workpieces, along with all material supports and fasteners, are hereafter referred to as the flexible workplace (FXW).
In the first stage, the programming process is carried out: a digital replica of the robot can be positioned at any desired location and manually guided by the user with a pointing device (PD). The second stage involves setting up the robot cell by referencing the robot with the FXW and performing the physical processing or manipulation. The following section explains both stages and their respective processes in detail.
Figure 2 graphically illustrates the process flow.
Stage I takes place in a room separate from the shop floor and represents the programming environment. In this context, the term environment refers to everything that a person can perceive with their senses. In this example, there are three main objects in the programming environment (see Figure 2, stage I, hexagonal boxes). It contains the FXW, which is to be processed or manipulated by the robot. In addition, there is a virtual image of the physical robot (the virtual robot), which imitates its visual appearance and properties. The last initial main object is a pointing device, which can be used to manually guide the virtual robot in 3D space. All these objects are involved in processes and create further virtual objects, data, and, finally, a template of the application.
Initially, preparatory processes are necessary to perform the actual programming. To address the correspondence problem of offline programming discussed in Section 2.4, a new approach to kinesthetic teaching that focuses on the robot's end effector is used. A physical pointing device controls the virtual robot and interacts with real-world objects on the flexible workplace. The PD serves as the end effector of the virtual robot and provides a physical link to it. To align the objects, the pointer must first be calibrated to the FXW.
This process spatially registers all real and virtual objects in the programming environment. First, a global coordinate system (G-CS) is defined in real space, to which the coordinate systems of the real objects (see Calibration Element (CE), FXW, PD, and Sensor (S) in Figure 3) are referenced. For each real object, there is a corresponding coordinate system in virtual space. By using geometric calibration elements (CE) and visual motion capture sensors (S), the transformations between the real and virtual coordinate systems can be determined (see Section 3.2). Using these transformations, the pose of the pointing device can be mapped to the end effector of the virtual robot. At the end of the process, all object poses can be completely represented as elements of the special Euclidean group SE(3) = {(R, t) | R ∈ SO(3), t ∈ ℝ³}.
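To make this mapping concrete, the following minimal C# sketch shows how a PD pose tracked in the G-CS could be chained with a calibration transform to obtain the target pose of the virtual robot's end effector in Unity. All names (PdToRobotMapper, T_GtoV) are hypothetical illustrations, not the authors' implementation:

```csharp
using UnityEngine;

// Minimal sketch (hypothetical names): maps the tracked pose of the pointing
// device (PD) into the virtual scene, assuming the calibration step has
// produced the rigid transform T_GtoV from the global coordinate system
// (G-CS) to virtual space.
public class PdToRobotMapper
{
    // T_GtoV: global (MoCap) space -> virtual (Unity) space, from calibration.
    public Matrix4x4 T_GtoV;

    // Returns the target pose for the virtual robot's end effector,
    // given the PD pose reported by the motion capture system in the G-CS.
    public Matrix4x4 MapPdPose(Vector3 pdPosition, Quaternion pdRotation)
    {
        // Pose of the PD as an element of SE(3), expressed in the G-CS.
        Matrix4x4 T_G_PD = Matrix4x4.TRS(pdPosition, pdRotation, Vector3.one);

        // Chain the calibration transform: pose of the PD in virtual space.
        return T_GtoV * T_G_PD;
    }
}
```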
The programming processes start with the frame creation process. Here, points with six degrees of freedom (6-DOF) are initially generated; once paired with commands, they are referred to as frames. Thanks to the object matching process, the end effector of the virtual robot moves synchronously with the PD. The positions of the remaining robot links result from the inverse kinematics (IK) algorithm used and the prevailing transformation between the TCP and the robot's base reference coordinate system. If the robot assumes a non-ideal overall posture, the user must initiate a recalculation. This process is called virtual kinesthetic teaching. The PD can now be used to approach target points in real space, and command buttons can be used to create frames. These buttons assign certain properties to the frames that affect the robot's path planning. These properties are used to organize the frames hierarchically in trajectories, as well as to initiate a gripping process, for example.
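A frame could be modeled as in the following sketch; the type and field names are assumptions for illustration, not the authors' data model:

```csharp
using UnityEngine;

// Illustrative sketch: a frame pairs a 6-DOF target pose with a command
// that influences path planning, as assigned via the command buttons.
public enum FrameCommand { MoveTo, Grip, Release, StartSeam, EndSeam }

public class Frame
{
    public Vector3 Position;      // 3 translational DOF
    public Quaternion Rotation;   // 3 rotational DOF
    public FrameCommand Command;  // property assigned via a command button
    public string TrajectoryId;   // hierarchical assignment to a trajectory
}
```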
The trajectories created are the foundation for the path planning process. The goal of this process is to generate a continuous robot motion for efficient motion planning. User input is used to define the motion types of the trajectories, and then the necessary data is generated. A distinction is made between simple waypoints, which are used for regular manipulation movements with a start and end point, and complex splines, which can represent, for example, a welded seam. Due to the different types of trajectories, it is necessary to specify the reference coordinate system that the respective trajectory uses. Trajectories that describe the approach and manipulation of a workpiece are referenced to the global coordinate system. Complex splines, on the other hand, which describe how to machine a part, use the workpiece pose as the reference system. This creates the first version of the application template.
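Reusing the Frame type sketched above, a trajectory could carry its motion type and reference system as follows; again, the names are illustrative assumptions:

```csharp
using System.Collections.Generic;

// Sketch under assumed names: a trajectory groups frames, declares its
// motion type, and records which coordinate system its frames refer to --
// the global G-CS for approach/manipulation moves, the workpiece pose
// for complex machining splines.
public enum MotionType { Waypoints, Spline }
public enum ReferenceSystem { Global, Workpiece }

public class Trajectory
{
    public MotionType Motion;
    public ReferenceSystem Reference;
    public List<Frame> Frames = new List<Frame>();
}
```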
The aim of the control process is to identify errors in the application template. The inverse kinematics of the virtual robot are continuously computed in real time in order to execute the entire trajectory with the TCP. Possible errors in the program sequence and collisions with objects can be identified by visually examining the overall robot motion. To automatically verify the reachability of each target point, the robot's joint angles are recorded at discrete time steps and analyzed over the course of the application template. Discontinuities detected in the joint-angle trajectories indicate singularities.
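A minimal version of such a discontinuity check is sketched below; the 15-degree threshold is an assumption, not a value from the paper:

```csharp
using UnityEngine;

// Sketch: scans the discretely recorded joint-angle samples and flags any
// step whose joint-space jump is implausibly large for one sampling
// interval -- a discontinuity that points to a singularity in the motion.
public static class JointTrajectoryCheck
{
    // jointSamples[t][j]: angle of joint j (in degrees) at time step t.
    public static bool HasDiscontinuity(float[][] jointSamples, float maxStepDeg = 15f)
    {
        for (int t = 1; t < jointSamples.Length; t++)
            for (int j = 0; j < jointSamples[t].Length; j++)
                if (Mathf.Abs(Mathf.DeltaAngle(jointSamples[t - 1][j], jointSamples[t][j])) > maxStepDeg)
                    return true; // jump between consecutive samples -> flag for review
        return false;
    }
}
```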
If any errors emerge during the control process, it is essential to return to the frame creation process. During this process, it is possible to delete frames, to move them, or to link them to other commands. It is also possible to add auxiliary points to the trajectory or components and to reposition the virtual robot. Repositioning the robot requires a complete re-recording of all robot joint angles. Once stage I is completed without further errors, the FXW is introduced into the production environment. All the generated data is then transferred to stage II.
The manufacturing environment now contains the FXW, the real robot, and the virtual robot. The setup process involves performing a calibration process to determine the actual geometric parameters within the robot cell, which helps identify the model deviations between stage I and stage II. It should be noted that the trajectories recorded in the application template are based on the base reference coordinate system of the virtual robot in stage I, which makes them invalid in the event of model deviations in stage II. To compensate for a change in relative position, a measurement run must first be conducted within the real cell. To determine the corresponding transformation, the real robot's end effector is temporarily coupled to a measuring object at the flexible workplace. This transformation between the robot's base reference coordinate system and the FXW's coordinate system allows the virtual robot to be adapted to the pose of the real robot. The repositioning makes it necessary to recalculate all joint angles for the robot program. The adaptation and control process then performs a check of reachability and singularity, in a similar way to the control process in stage I. The user visually monitors the process to detect any errors. If errors arise, the objects must be repositioned or the environment slightly adjusted. Once a flawless run is achieved, the export process can commence. The export process converts the application template into an executable robot application. A source code file is created using a special export format. This source code can be translated into the programming language of the real robot by means of a converter. Finally, the application is loaded onto the robot's controller and the program can be executed. If several objects are to be manipulated or processed in a similar way, the procedure jumps back to the calibration process.
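The core of this adaptation is a chain of rigid transforms. As a minimal sketch under assumed names: once the measurement run has yielded the FXW pose in the real robot's base frame, every taught frame stored relative to the FXW can be re-expressed in that base frame before the joint angles are recalculated:

```csharp
using UnityEngine;

// Sketch (assumed names): re-references a taught frame after the stage-II
// measurement run.
public static class StageTwoAdaptation
{
    // T_Base_FXW:  FXW pose expressed in the robot base frame (measurement run).
    // T_FXW_Frame: a taught frame expressed relative to the FXW.
    // Returns the frame expressed in the real robot's base frame.
    public static Matrix4x4 ToRobotBase(Matrix4x4 T_Base_FXW, Matrix4x4 T_FXW_Frame)
    {
        return T_Base_FXW * T_FXW_Frame; // chain of SE(3) transforms
    }
}
```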
3.2. Prototypical System and Environment Implementation
In the following section, the prototypical implementation of the method described above is explained and the most important hardware and software components are shown.
Figure 4 illustrates how the components interact with each other and how data is exchanged. Unlike Figure 2, it does not represent a chronological process flow, nor does it show the spatial allocation of the components.
The central hardware component is the workpiece, which is, in this case, permanently connected to the FXW. A setup with several independent workpieces or small load carriers is also possible (see Figure 5a); in this case, all objects must be continuously tracked. The FXW is equipped with an optical target and a calibration block. In addition, an AR system consisting of an HTC Vive Pro (HTC Corporation, Xindian District, New Taipei City, Taiwan), a controller, and the associated tracking system is used. A smart pointing device (SmartPd) represents the robot's end effector; it can be tracked via an optical target and serves as an input device for additional commands. In addition, the SmartPd can detect contact forces and grip objects. With a motion capture system consisting of six Flex 13 cameras (OptiTrack, Portland, OR, USA), the optical targets can be tracked with sub-millimeter accuracy. A Kuka iiwa LBR R800 (KUKA AG, Augsburg, Germany) is used to execute the tasks in the cell.
The most important software component is the GuideARob-Teach software, developed in Unity 2019.4.20f1 (Unity Technologies, San Francisco, CA, USA), which serves as the central control unit. The software GuideARob-Ex, written in C#, serves as the interface to the Kuka iiwa. The SmartPd-OS, which runs on a Particle Photon microcontroller (Particle, Boulder, CO, USA), controls the SmartPd and handles the wireless communication with GuideARob-Teach. Commercial tools are utilized for communication with the OptiTrack MoCap system and its software Motive 2.2.3 (OptiTrack, Portland, OR, USA), as well as with the AR system via OpenVR (Valve Corporation, Bellevue, WA, USA).
The AR system consists of modified Virtual Reality (VR) glasses (HTC Vive Pro) with two associated tracking sensors (Base Station 2.0) and a modified HTC Vive controller. With the help of the developed software GuideARob-Teach and the VIVE SRWorks SDK [36], the full potential of the VIVE VR system could be utilized, and the glasses could be used for AR via pass-through technology.
Object matching takes place via the MoCap system and the various optical targets fixed to the FXW (see Figure 6a), as well as via the calibration block (see Figure 6a,b) and the AR system. Since the AR system has its own tracking system and the controller can be temporarily connected to the calibration block, it is possible to link the real objects (video feed) and the virtual content. Once the AR system is calibrated, the virtual robot's TCP and the SmartPd's TCP must be matched. To do this, the SmartPd's calibration tip (see Figure 6c) is inserted into the calibration block and the virtual robot assumes its calibration pose. In this configuration, a transformation can be computed that aligns both TCPs.
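One plausible way to express this step (the method and names below are an illustration, not the authors' code) is to solve for the constant offset between the SmartPd's tracked optical target and its tip while both TCPs coincide in the calibration block:

```csharp
using UnityEngine;

// Sketch (assumed names): with the SmartPd's calibration tip seated in the
// calibration block and the virtual robot in its calibration pose, both
// TCPs occupy a known common pose, so the fixed target-to-tip offset can
// be solved for once and applied to all later measurements.
public static class TcpMatching
{
    // T_World_Target: pose of the SmartPd's optical target (from MoCap).
    // T_World_Block:  pose of the calibration block, i.e. the common TCP pose.
    // Returns T_Target_Tip, the constant offset of the tip from the target.
    public static Matrix4x4 SolveTipOffset(Matrix4x4 T_World_Target, Matrix4x4 T_World_Block)
    {
        return T_World_Target.inverse * T_World_Block;
    }
}
```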
The workpiece can be detected by attaching it to the FXW or by using a separate optical target. The target's six degrees of freedom can be determined using the camera system. The SmartPd is used to specify the destination point for the TCP of the virtual robot. At the same time, the robot base reference coordinate system is defined in the software via a fixed relative position to the FXW. From this, the inverse kinematics solver can compute the joint angles of the virtual robot. The multi-objective IK solver used here is called Bio-IK and internally uses a hybrid evolutionary algorithm [37]. In conjunction with a corresponding CAD model of the Kuka iiwa, a realistic simulation of the robot can be achieved (see Figure 5b). As the virtual robot follows the SmartPd, the user can set individual frames, including additional commands, at the push of a button.
If the TCP is in the correct position during frame creation but the position of the remaining robot links is not optimal, which can happen with a 7-DOF robot, a recalculation of the IK can be triggered by pressing a button on the SmartPd. The individual frames are hierarchically assigned to specific trajectories. The reference coordinate system of the trajectories is linked to an optical target, which is permanently connected to the FXW or the workpiece. This allows the FXW or workpiece to be moved in space without the frames changing in relation to their reference object. The trajectories can be either simple movement types, like point-to-point (PTP), or complex spline movement types. Command keys on the SmartPd can be used to link additional parameters to them. This results in the application template, which describes the entire movement of the robot TCP relative to the reference object. To avoid errors, the application template is continuously synchronized with the virtual robot. This means that frames can only be created if they are reachable by the robot. In addition, the user receives a continuous AR visualization of the robot and the set frames via the AR module and the HTC Vive.
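This reachability gate can be illustrated with a short sketch; the IIkSolver interface below is a stand-in abstraction, not the Bio-IK API, and the types reuse the earlier sketches:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Stand-in solver abstraction (not the Bio-IK API).
public interface IIkSolver
{
    bool TrySolve(Vector3 position, Quaternion rotation, out float[] jointAngles);
}

public class FrameRecorder
{
    private readonly IIkSolver _solver;
    private readonly List<Frame> _template = new List<Frame>();

    public FrameRecorder(IIkSolver solver) { _solver = solver; }

    // Called when the user presses a command button on the SmartPd:
    // only poses the virtual robot can actually reach enter the template.
    public bool TryCreateFrame(Frame candidate)
    {
        if (!_solver.TrySolve(candidate.Position, candidate.Rotation, out _))
            return false; // unreachable -> no frame is created
        _template.Add(candidate);
        return true;
    }
}
```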
To set up the Kuka iiwa in the cell, its TCP can be temporarily connected to the calibration block, and the pose of its base coordinate system can be calculated from the joint angles. The virtual robot is then automatically moved to the position of the real robot for adaptation. Subsequently, the application template is simulated again with the virtual robot, and the final application is created. If there are no singularities or collisions and all points are reachable, an offline data package is created using the export module, which is then converted to Kuka iiwa source code using GuideARob-Ex. GuideARob-Ex enables the adjustment of individual frames or trajectory parameters, such as the TCP speed, where necessary. Finally, the source code is loaded onto the controller of the Kuka iiwa, and the application can be executed.
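As an illustration of the export step, a converter might walk the template and emit one motion statement per frame; the emitted statements are schematic pseudocode, not verbatim KUKA source syntax, and the types reuse the sketches above:

```csharp
using System.Text;

public static class Exporter
{
    // Schematic conversion of one trajectory of the application template
    // into robot source code; the real output depends on the target language.
    public static string ToRobotSource(Trajectory trajectory)
    {
        var src = new StringBuilder();
        string motion = trajectory.Motion == MotionType.Waypoints ? "ptp" : "spline";
        foreach (var f in trajectory.Frames)
        {
            src.AppendLine(
                $"robot.move({motion}(frameAt({f.Position.x}, {f.Position.y}, {f.Position.z})));");
        }
        return src.ToString();
    }
}
```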
The prototype, and, in particular, the SmartPd haptic input device, are subject to various limitations in terms of the type of application, component handling, and accuracy. The range of possible applications depends on the interchangeable tip selected. Currently, there is a probe tip for creating welding, gluing, grinding, and handling applications. There is also a gripping tip that can be used to handle and assemble small parts with weights up to about 400 g and dimensions in the centimeter range. The user must compensate for the weight of the SmartPd and the part during the teaching process. In conventional KT, by contrast, the robot's own weight and, where applicable, the weight of the component can be compensated, which gives KT a certain advantage here. However, an initial torque in the joints must be overcome in order for the robot to move, which can lead to overshoot in very fine tasks. For larger, heavier components, the gripping position can also be specified using the probe tip of the SmartPd. With this method, the limiting factor is the payload and opening width of the robot and gripper used later. The same applies to the size of the usable workspace. The SmartPd can operate within the field of view of the MoCap cameras in a radius of up to 5 m, but the actual usable workspace is limited by the robot selected later. The accuracy of the system is studied in Section 5 and is likewise limited by the robot.