Article

Peg-in-Hole Assembly Based on Six-Legged Robots with Visual Detecting and Force Sensing

State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(10), 2861; https://doi.org/10.3390/s20102861
Submission received: 20 March 2020 / Revised: 15 May 2020 / Accepted: 16 May 2020 / Published: 18 May 2020
(This article belongs to the Special Issue Sensors and Perception Systems for Mobile Robot Navigation)

Abstract:
Multi-degree-of-freedom (DOF) manipulators are widely used for the peg-in-hole task. Compared with manipulators, six-legged robots offer better mobility in addition to completing operational tasks. However, there are almost no previous studies of six-legged robots performing the peg-in-hole task. In this article, a peg-in-hole approach for six-legged robots is studied and tested on a six-parallel-legged robot. Firstly, we propose a method whereby a vision sensor and a force/torque (F/T) sensor are used to determine the relative location between the hole and the peg. According to the visual information, the robot approaches the hole. Next, based on the force feedback, the robot plans its trajectory in real time to mate the peg and hole. Then, during the insertion, admittance control is implemented to guarantee smooth insertion. In addition, throughout the assembly process, the peg is held by a gripper attached to the robot body. Connected to the body, the peg has a sufficient workspace and six DOFs to perform the assembly task. Finally, experiments were conducted to verify the suitability of the approach.

1. Introduction

In the past few years, research on legged robots has received much attention [1,2,3,4]. Legged robots require merely isolated footholds for movement on the ground, which provides better adaptability on rough terrain than tracked and wheeled robots [5,6,7]. According to current studies, legged robots can be used for detection and operation tasks in complex environments. The walking robot SILO6 was designed to detect and locate antipersonnel landmines [8]. ATHLETE, studied by Wilcox et al. [9], emphasized handling cargo and manipulating objects on the moon. HUBO [10] and ATLAS [11] are biped robots that perform door-opening manipulations. The robot COMAN applied a visual positioning method to turn circular valves [12]. These studies demonstrate that, with their natural advantages, legged robots are able to replace manpower in various tasks, such as assembly processes. For instance, manual assembly requires professional techniques and incurs high labor costs; especially in hazardous environments, it becomes more difficult and can even cause accidents. The successful application of legged robots in complex and dangerous scenarios suggests that they can also be employed in assembly tasks.
In the assembly task research field, analyses usually highlight the key point of the assembly process, which is the peg-in-hole problem. Zhang et al. developed a fuzzy force control strategy for dual peg-hole fitting using an ABB IRB1200 robot to accomplish an automatic fitting process [13]. Tang et al. designed a novel display device to collect data demonstrated by human beings; a modified learning from demonstration (LfD) algorithm was applied to a FANUC manipulator to carry out the peg-in-hole procedure [14]. In addition, Park et al. [15] used industrial manipulators to insert square pegs into square holes. Current studies on peg-in-hole tasks mostly focus on industrial robots and mechanical arms. Compared with industrial robots, legged robots possess the advantage of mobility: during operation, a legged robot can move easily to complete the peg-hole assembly process at different positions. In the realm of legged robots, studies mainly cover bipedal robots [16,17], quadruped robots [18,19] and hexapod robots [20,21]. The hexapod robot, with better stability during walking and operation, is clearly more suitable for automatic assembly. However, there is no relevant research covering specific approaches for a six-legged robot conducting a peg-in-hole assembly task. Hence, it is valuable to establish an approach for six-legged robot peg-in-hole implementation.
To fulfill the abovementioned task, the relative orientation and position between the hole and peg need to be established first. Visual sensors are applied for locating the two. Due to location uncertainties, Pauli et al. [22] used cameras to collect the edges of the object and examine the overall peg-hole insertion procedure. Huang et al. [23] applied a high-speed camera in a single eye-in-hand configuration to match the peg and hole. Su et al. [24] proposed a peg-hole insertion tactic without a force sensor, showing how the insertion can be accomplished based on vision guidance and the attractive region. Wang et al. [25] and Chang et al. [26] applied visual servo devices to match small pegs and holes. In addition, force/torque (F/T) sensors have been utilized to mate the peg and hole. Li et al. [27] presented a compliant strategy for the peg-in-hole task based on human testing results, force information and environment constraints. Abdullah et al. [28] developed an algorithm for the peg-hole insertion task on the basis of human operation, in which the center of the hole was determined by using a force sensor during the insertion process. In [29], the position of the hole was likewise judged from the measured force caused by the collision of the peg with the hole area. The six-legged robot, by contrast, can approach the hole from a distance to accomplish the peg-hole insertion operation. Therefore, the locating process in this article is divided into two parts. Firstly, the vision sensor preliminarily locates the relative orientation and position between the hole and the peg. Secondly, the robot moves with its legs to approach the hole and then makes the peg contact the hole. Based on the feedback of an F/T sensor, the relative location between the hole and peg can be determined more accurately.
Since force sensing is used to adjust the position error after visual hole location, a low-cost vision sensor is used for detection in this paper.
When the location process is completed, the peg is inserted into the hole to perform the assembly task. However, when the gap between the hole and the peg is small, contact forces arise during the insertion due to robot locomotion errors and partial occlusion of the hole. In order to ensure smooth completion of the assembly task, force control strategies can be applied. At present, there are two common ways of applying force control to a peg-hole insertion process, namely passive force control and active force control. In the passive strategy, compliant mechanical devices are used to assist the insertion of the peg into the hole. Yun et al. [30] integrated passive compliance with a learning method to accomplish the peg-in-hole procedure; the compliant device consists of torsional springs and dampers. Pitchandi et al. [31] developed a mathematical model of the peg-hole insertion task containing the stiffness and damping factors of the compliance, and analyzed the influence of damping on the assembly process. Baksys et al. [32] applied remote center compliance (RCC) as the passive compliance device and proposed a mathematical method of vibratory alignment with the RCC equipment. However, passive compliance equipment is hard to control, and different operation tasks need different devices, which limits the application of passive force control. By contrast, in active force control, the external force is sensed by the force sensor and the assembly process is completed by a corresponding control strategy. The impedance control scheme is commonly used for industrial robots to implement the peg-hole insertion task. Kim et al. [33] proposed a shape recognition algorithm on the basis of a force sensor, and impedance control was used to provide stable contact between the hole and the peg during the insertion procedure. Lopes et al.
[34] presented a strategy involving the cooperation of two robotic manipulators: an industrial robot and the RCID manipulator. During the peg-in-hole assembly, the RCID used its small workspace for force and impedance control. Song et al. [35] proposed a complex-shaped peg-in-hole approach on the basis of a guidance algorithm, and in order to attain stable contact movement, impedance control with an admittance filter was applied. According to related studies, multi-DOF manipulators are used for the peg-hole insertion assembly. In this article, the peg is held by a gripper connected to the body of a six-legged robot rather than to a multi-DOF robot arm. The robot body, serving as an end effector, adopts admittance control (a form of impedance control) for the insertion assembly. Owing to its mobility, the robot can approach holes from different locations. Moreover, linked to the 6-DOF body, the peg has a sufficient workspace and enough DOFs to meet the assembly requirements.
In this article, we propose an approach for six-legged robots to accomplish the peg-in-hole task and verify the approach with a six-parallel-legged robot. A vision sensor and an F/T sensor are used for locating the hole and admittance control is applied to guarantee smooth assembly completion. The robot adopts a tripod gait during its movement process.
The rest of the article is organized as follows: in Section 2, the six-parallel-legged robot is introduced and the coordinate systems are defined; in Section 3, the peg-in-hole assembly method is described in detail; in Section 4, experiments are conducted to verify the proposed approach; finally, in Section 5, the article is concluded.

2. Robot System Overview

2.1. The Six-Legged Robot

A six-parallel-legged robot named Octopus EDU, illustrated in Figure 1, is a 6-DOF mobile platform with integrated walking and manipulation functionalities. The robot can move on different terrains and be used in complex environments for detection and rescue. A gripper is mounted on the front upper area of the body to enable the robot to complete the peg-in-hole assembly task.
The robot body is a regular hexagon, around which six legs are arranged with diagonal symmetry. The six legs have an identical mechanical structure, and each of them is a 3-DOF parallel mechanism (UP-2UPS). Benefitting from these 3 DOFs, the robot can adapt to different terrains during its movement. Moreover, the parallel mechanism of the legs improves the load capacity of the robot.
The construction of the robot system is shown in Figure 2. The system includes the sensing, actuation and control modules. The sensing system consists of an F/T sensor and a vision sensor. As shown in Figure 1, the F/T sensor is mounted at the end of the gripper and linked to the robot body; the information recorded by the F/T sensor is used to sense external forces. The vision sensor is installed at the front of the body so as to capture visual information in the moving direction. The vision sensor connects with the control system via USB, and in the peg-in-hole task, the relative orientation and position between the hole and peg are preliminarily detected by the vision sensor. The actuation system is mainly used to drive the robot. Taking the actuation system of one leg as an example, each leg has three limbs, and the prismatic joint of each limb is regarded as the active joint. The active joint is actuated by a servo motor through a ball screw. The motors are installed at the tops of the legs, and each of them is controlled by a corresponding servo driver. Each motor has a resolver, by which the real position of the motor can be detected.
The control system, working as the central system of the robot, primarily contains the state estimator and the trajectory generator. Operators can send commands to the robot via a remote terminal, which is connected to the control system via WiFi. To meet real-time control requirements, an individual PC running Linux with the Xenomai patch is employed as the control system of the legged robot. Moreover, the control system connects with the actuation system via the EtherCAT real-time industrial fieldbus. In this way, the state estimator can supervise the state of the robot and the trajectory generator can plan motion trajectories in real time. Similarly, the F/T sensor sends the external force information to the control system via EtherCAT, so that the robot can adjust its trajectory planning strategy immediately according to the force feedback.

2.2. Definition of Coordinate Systems

When moving and operating, the motion state of the robot needs to be expressed in various coordinate systems. To guarantee the smoothness of the peg-in-hole task, five coordinate systems are established, as shown in Figure 3. The first is the coordinate system of the robot (RCS), the origin of which is fixed in the middle of the robot body. The Y-axis and Z-axis of the RCS are parallel to the vertical and sagittal axes of the robot, respectively, and the location of the RCS changes with the movement of the robot body. The second is the F/T sensor coordinate system (SCS), the origin of which is located at the center of the F/T sensor; its location is determined by the installation orientation and position of the F/T sensor. For convenience, the directions of the XS-axis and the XR-axis are consistent, and likewise for the ZS-axis and the YR-axis. The third is the vision sensor coordinate system (VCS), which, similar to the SCS, is determined by the mounting orientation and position of the vision sensor. The fourth is the peg coordinate system (PCS), which is fixed on the peg; the YP-axis coincides with the center line of the peg and points in the insertion direction during the assembly process. Finally, the ground coordinate system (GCS) is established, which is fixed on the ground and defined by the operator. Since the GCS remains still while the robot moves and operates, it serves as a reference for the other coordinate systems. For the sake of convenience, the GCS is defined to overlap the RCS when the legged robot is at its original position before the assembly process begins.
Considering that parameters need to be demonstrated in various coordinate systems, the pose matrix $T$ is introduced. For instance, $T_R^G$ indicates the orientation and position of the RCS expressed in the GCS. Similarly, $T_S^R$, $T_V^R$ and $T_P^R$ describe the orientations and positions of the SCS, the VCS and the PCS in the RCS, respectively. As mentioned above, the locations of the sensors and the peg determine the pose matrices $T_S^R$, $T_V^R$ and $T_P^R$. For instance, $T_S^R$ is obtained as follows:
$$ T_S^R = \begin{bmatrix} R_S^R & O_S^R \\ 0 & 1 \end{bmatrix} $$
where the matrix $R_S^R$ indicates the orientation of the SCS in the RCS, and $O_S^R$ is a vector describing the position of the SCS origin expressed in the RCS. For the detailed calculation process, readers can refer to [36]. Moreover, $T_V^R$ and $T_P^R$ can be determined in the same way from the locations of the vision sensor and the peg.
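As a minimal sketch, the assembly of such a homogeneous pose matrix from a rotation and an origin vector can be written as follows. NumPy is assumed, and the mounting rotation and 0.3 m offset below are illustrative values, not the robot's actual calibration; the example rotation aligns the ZS-axis with the YR-axis as described in Section 2.2.

```python
import numpy as np

def make_pose(R, O):
    """Assemble a 4x4 homogeneous pose matrix T from a 3x3 rotation R
    and a 3-vector origin position O, as in Equation (1)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = O
    return T

# Illustrative mounting: a -90 deg rotation about the X_R-axis, which keeps
# X_S aligned with X_R and maps Z_S onto Y_R; origin 0.3 m ahead along X_R.
c, s = np.cos(-np.pi / 2), np.sin(-np.pi / 2)
R_S_R = np.array([[1, 0, 0],
                  [0, c, -s],
                  [0, s, c]])
T_S_R = make_pose(R_S_R, np.array([0.3, 0.0, 0.0]))
```

The same helper builds $T_V^R$ and $T_P^R$ from the vision sensor and peg mounting data.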

3. Peg-in-Hole Method

Compared with previous methods that rely on manipulators, we use the legged robot to execute the peg-in-hole task in this paper. When the legged robot stands on the ground, the body of the robot has six DOFs in space. Thus, the robot body can be regarded as an end effector, and the assembly task can be completed by the legged robot itself without additionally installing multi-DOF mechanical arms. The method presented in the paper can enrich the study of legged robots. Moreover, a manipulator usually has a fixed base, whereas the legged robot has the mobility to move to different locations. During the peg-in-hole process, the robot can walk to the hole from a long distance. When the robot reaches a location from which the vision sensor can detect the hole, the task can be accomplished through the subsequent method. The legs of the robot are not allowed to slip during the entire task.
The sensing system of the robot consists of the vision sensor and the F/T sensor. The relative location between the hole and peg is determined by combining visual and force sensing, and the control system then plans the motion of the robot to carry out the peg-in-hole task. The chamfered peg-hole insertion task is considered in this article. As exhibited in Figure 4, the whole task includes three subtasks: visual location, F/T location and the assembly process.
In the visual location subtask, the relative position and orientation between the hole and peg are adjusted preliminarily according to the detection results of the vision sensor. During the F/T location subtask, the peg and hole are located more precisely via the F/T sensor. During F/T locating, the peg contacting the chamfer of the hole is defined as Case a, whereas no touching is defined as Case b; the corresponding motions Move a and Move b are then applied to adjust the relative position between the hole and peg, respectively. Finally, during the assembly process subtask, the peg is inserted into the hole with the movement of the robot body, and the F/T sensor detects the completion of the assembly task. The rest of the paper introduces the three subtasks in detail.

3.1. Locating the Hole with the Visual System

In the visual location subtask, the relative orientation between the peg and hole is detected by the vision sensor first. Then, according to the detection result, the robot rotates to make the peg parallel to the hole axis. After that, a second visual detection is carried out to adjust the relative position between the peg and hole. The maximum initial error of the approach is mainly limited by the camera view: the initial position for visual detection allows a ±10° rotational error and a ±50 cm positional error. Before the visual detection, the robot can move into this visual range from different locations under the user's remote control.

3.1.1. Visual Detecting Design Scheme

As exhibited in Figure 5, the hole coordinate system (HCS) is set up, the origin of which is located at the intersection of the axis of the hole and the contact surface. The directions of the YH-axis and the hole axis are the same. When detecting with the vision sensor, the location of the hole can be determined according to YH and OH. Since the axis of the hole is perpendicular to the contact cover, the direction vector of the hole axis can be expressed by τ, the normal vector of the contact surface.
Color images, depth images and point clouds of objects can be obtained by the vision sensor. Identification of the hole can be separated into two steps. Firstly, the point cloud of the contact surface is extracted from the environment, and the normal vector τ described in the VCS is derived by fitting the point cloud; thus, the first step of visual detection is fulfilled. The robot rotates according to the detection result and proceeds with the second positioning. As shown in Figure 6a, the Hough-transform algorithm is applied to the color images to obtain the pixel positions of the edge of the hole. Then, as exhibited in Figure 6b, these pixels are mapped to the depth image and marked as candidate points of the hole on the contact surface. After that, the candidate points in the depth image are mapped into the point cloud. Depending on the change of gradient between the candidate points and the surrounding points, the points of the circular hole on the contact surface are selected. Finally, by fitting the selected points, we obtain the vector $O_H^V$, which represents the position of the hole in the VCS. By using the following coordinate transformations, the vectors τ and OH can be described in the PCS, and thus the relative location between the peg and hole is derived:
$$ \tau^P = \left( R_P^R \right)^{-1} R_V^R \, \tau^V $$
$$ \begin{bmatrix} O_H^P \\ 1 \end{bmatrix} = \left( T_P^R \right)^{-1} T_V^R \begin{bmatrix} O_H^V \\ 1 \end{bmatrix} $$
where the orientation matrices $R_P^R$ and $R_V^R$ denote the orientations of the PCS and the VCS in the RCS, respectively.
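The transformation of the detected hole position from the VCS into the PCS (Equation (3)) can be sketched as below. NumPy is assumed, and the axis-aligned transforms with made-up offsets are purely illustrative, not the robot's real extrinsics.

```python
import numpy as np

def hole_in_pcs(T_P_R, T_V_R, O_H_V):
    """Express the hole position detected in the VCS in the PCS,
    following Equation (3): [O_H^P; 1] = (T_P^R)^-1 T_V^R [O_H^V; 1]."""
    p = np.append(O_H_V, 1.0)                        # homogeneous point
    return (np.linalg.inv(T_P_R) @ T_V_R @ p)[:3]

# Hypothetical, axis-aligned mounting: peg frame 0.2 m up / 0.4 m ahead,
# camera frame 0.3 m up / 0.5 m ahead of the RCS.
T_P_R = np.eye(4); T_P_R[:3, 3] = [0.0, 0.2, 0.4]
T_V_R = np.eye(4); T_V_R[:3, 3] = [0.0, 0.3, 0.5]
O_H_P = hole_in_pcs(T_P_R, T_V_R, np.array([0.05, -0.02, 1.0]))
# → [0.05, 0.08, 1.1]: hole 1.1 m in front of the peg, slightly up and right
```

The same pattern with the rotation blocks alone implements Equation (2) for the normal vector τ.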

3.1.2. Trajectory Planning for Visual Locating

During the visual location procedure, the trajectory planning of the robot is based on position control. The trajectory planning can be divided into three parts, as shown in Figure 7. The first part is illustrated in Figure 7a: according to the relative angle θr between the peg and hole detected by the vision sensor, the body and the legs move together, which enables the robot to rotate by the angle θr. After the rotation, the vision sensor detects whether the peg is perpendicular to the cover of the hole; if not, the above motion is repeated until the angle error is eliminated. The second part is described in Figure 7b: the vision sensor detects the relative position between the hole and peg again, and the robot then moves a distance d1 sideways and d2 forward to approach the hole. The third part is exhibited in Figure 7c: the robot adjusts the height of the peg to complete the visual location.
As the body and the legs move together in the first and second parts, it is necessary to design the trajectories of the legs. The six-legged robot usually adopts the 3-3 tripod gait, which requires the legs to move alternately for stable walking.
As exhibited in Figure 7a, the robot is at its original position, so the RCS coincides with the GCS. During the rotation process, the robot usually takes one step to rotate to the corresponding position. After the robot rotates by θr about the YG-axis, the relative position between the body and the legs remains unchanged. Therefore, in the first part, the trajectory of leg i in the GCS can be expressed as follows:
$$ E_i^G = R_{Y_G}(\theta_r) P_{io}^G - P_{io}^G $$
where the position vector $P_{io}^G$ represents the initial foot-tip position of leg i, and $E_i^G$ is the change of the foot-tip position. The trajectory of the leg is generated according to $E_i^G$.
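Equation (4) can be sketched as follows; NumPy is assumed, and the 0.8 m foothold and 10° rotation are illustrative values only.

```python
import numpy as np

def foot_tip_shift(theta_r, P_io):
    """Foot-tip displacement E_i^G = R_Y(theta_r) P_io^G - P_io^G
    (Equation (4)): the stance pattern is rotated about the Y_G-axis
    while the body-to-leg geometry stays unchanged."""
    c, s = np.cos(theta_r), np.sin(theta_r)
    R_Y = np.array([[ c, 0, s],
                    [ 0, 1, 0],
                    [-s, 0, c]])
    return R_Y @ P_io - P_io

# Hypothetical foothold 0.8 m out along X_G; a 10 deg body rotation.
E = foot_tip_shift(np.deg2rad(10.0), np.array([0.8, 0.0, 0.0]))
```

Since the rotation is about the vertical YG-axis, the displacement has no vertical component and its magnitude is the chord length 2·r·sin(θr/2).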
In the second part, the movement of the robot is a translation. In Figure 7b, the process by which the robot moves a distance d1 along the XR-axis is analyzed as an example. Considering that d1 may be larger than the maximum stride length dmax, the number of steps and the length of each step need to be calculated for the walking process. When the robot walks, its six legs move alternately in two groups. The first stride of the first group of legs and the last stride of the second group are half of the other stride lengths. Thus, the equation expressing the relationship between the step number and the stride length is obtained:
$$ \left( N - \frac{3}{2} \right) d_{\max} < d_1 \le \left( N - \frac{1}{2} \right) d_{\max} $$
where N represents the step number. As N is an integer, its value is obtained from Equation (5). Substituting N into Equation (6), the stride length dN can be calculated. Thus, the trajectory planning of the second part is also completed.
$$ d_N = \frac{d_1}{N - 0.5} $$
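Equations (5) and (6) reduce to a one-line step-count computation; a minimal sketch with illustrative distances:

```python
import math

def plan_strides(d1, d_max):
    """Solve Equation (5) for the integer step number N, then Equation (6)
    for the uniform stride length d_N. The half-strides at the start and
    end mean the total travel is (N - 0.5) * d_N = d1."""
    N = math.ceil(d1 / d_max + 0.5)   # smallest N with (N - 1/2) d_max >= d1
    d_N = d1 / (N - 0.5)
    return N, d_N

# Illustrative: walk 1.0 m with a 0.3 m maximum stride.
N, d_N = plan_strides(1.0, 0.3)       # N = 4, d_N = 1/3.5 ≈ 0.286 m
```

The computed stride never exceeds dmax, and a half-stride plus N−1 full strides recovers d1 exactly.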

3.2. Locating the Hole by the F/T Sensor

Via the visual location procedure, the relative location between the peg and hole is determined preliminarily. Considering the small clearance between them during the insertion procedure, the F/T sensor is then adopted to determine the relative location between the hole and peg more precisely.

3.2.1. Positioning Scheme Based on the F/T Sensor

The F/T sensor collects force and torque data every millisecond, and the data recorded at millisecond n are expressed as follows:
$$ S_n^S = \begin{bmatrix} F_n^S \\ M_n^S \end{bmatrix} = \begin{bmatrix} F_{nx}^S & F_{ny}^S & F_{nz}^S & M_{nx}^S & M_{ny}^S & M_{nz}^S \end{bmatrix}^T $$
In the above equation, $F_n^S$ and $M_n^S$ represent the force data and the torque data, respectively, in the SCS. For convenience, the force and torque data are expressed in the PCS:
$$ F_n^P = R_S^P F_n^S $$
$$ M_n^P = R_S^P \left( M_n^S - O_P^S \times F_n^S \right) $$
where $R_S^P$ describes the orientation of the SCS in the PCS, and $O_P^S$ is a vector describing the origin of the PCS expressed in the SCS.
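The wrench mapping of Equations (8) and (9) can be sketched as follows; NumPy is assumed, and the aligned frames and 0.1 m lever arm are illustrative values, not the real sensor-to-peg geometry.

```python
import numpy as np

def wrench_to_pcs(R_S_P, O_P_S, F_S, M_S):
    """Map a measured wrench from the SCS into the PCS following
    Equations (8) and (9): the torque is shifted to the PCS origin
    (O_P^S, expressed in the SCS) and then rotated into the PCS."""
    F_P = R_S_P @ F_S
    M_P = R_S_P @ (M_S - np.cross(O_P_S, F_S))
    return F_P, M_P

# Illustrative: aligned frames (R = I), PCS origin 0.1 m along Z_S,
# a pure 5 N force along X_S and no measured torque.
F_P, M_P = wrench_to_pcs(np.eye(3), np.array([0.0, 0.0, 0.1]),
                         np.array([5.0, 0.0, 0.0]), np.zeros(3))
# → F_P = [5, 0, 0], M_P = [0, -0.5, 0]: the lever arm induces a torque
```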
During the F/T location process, changes in the robot's motion state introduce errors when the peg contacts the hole. A follow-up procedure is applied to decrease this inaccuracy: when contact is detected, the robot stops moving and remains stationary for a while, and the peg then leaves the hole. In this process, the recorded forces and torques are used to calculate the more accurate contact data Savg:
$$ S_{avg} = \begin{bmatrix} F_{avg} \\ M_{avg} \end{bmatrix} = \frac{ \sum_{n=n_0}^{n_3} S_n - \sum_{n=n_0}^{n_1} S_n - \sum_{n=n_2+1}^{n_3} S_n }{ n_2 - n_1 } $$
In the above equation, n0 and n1 denote the instant the peg contacts the hole and the moment the robot begins to keep still, respectively; n2 and n3 respectively indicate the moment the robot restarts to move and the instant the peg is completely separated from the hole.
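Equation (10) amounts to averaging the wrench over the stationary window; a minimal sketch, assuming NumPy and a synthetic sample array:

```python
import numpy as np

def contact_average(S, n0, n1, n2, n3):
    """Equation (10): average the 6D wrench over the window in which the
    robot holds still (n1 < n <= n2), written as the full-span sum minus
    the approach and retreat phases. S has shape (T, 6), indexed by
    millisecond n."""
    total  = S[n0:n3 + 1].sum(axis=0)
    before = S[n0:n1 + 1].sum(axis=0)
    after  = S[n2 + 1:n3 + 1].sum(axis=0)
    return (total - before - after) / (n2 - n1)

# Synthetic check: the wrench is 2.0 on every axis only while the robot
# holds still (n = 3 .. 6), and zero during approach and retreat.
S = np.zeros((10, 6))
S[3:7] = 2.0
S_avg = contact_average(S, n0=0, n1=2, n2=6, n3=9)   # → 2.0 on every axis
```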
After the visual location process, the maximum angular positioning error between the hole and peg is 1°. With this error, the peg can still be inserted into the hole to a certain depth, and the angle error can be adjusted during the insertion. Therefore, the force location model focuses on determining the relative position error and is designed under the assumption that the peg is parallel to the hole before the insertion task. In the F/T location process, based on whether the peg contacts the chamfer, two cases of the model are discussed:
Case a. Contact the chamfer of the hole:
The condition of contacting the chamfer of the hole is illustrated in Figure 8a. As shown in the figure, rp is the radius of the peg; cp denotes the width of the peg chamfer; rh is the radius of the hole; ch denotes the width of the hole chamfer; e represents the deviation between the hole and peg. When the peg contacts the chamfer of the hole:
$$ r_p - c_p + e < r_h + c_h $$
The contact point between the peg and the hole chamfer is defined as A. Since the shapes of the hole and peg are cylindrical, the direction of the contact force coincides with the line OHOP in the XPOPZP plane, as shown in Figure 8a. The force and torque data $S_n^S$ are recorded by the F/T sensor. Calculating $S_n^P$ via Equations (8) and (9), the data described in the PCS are obtained, and $S_{avg}^P$ can then be derived according to Equation (10). Thus, the angle ωA between the vector OPOH and the XP-axis in Case a can be obtained:
$$ \omega_A = \begin{cases} \arctan \dfrac{F_{avgz}^P}{F_{avgx}^P} + \pi & F_{avgx}^P < 0 \\[2mm] \arctan \dfrac{F_{avgz}^P}{F_{avgx}^P} & F_{avgx}^P > 0 \end{cases} $$
where $F_{avgx}^P$ and $F_{avgz}^P$ denote the force data along the XP-axis and the ZP-axis, respectively, in the PCS.
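The two branches of Equation (12) can be sketched with a quadrant-aware arctangent; `atan2` reproduces both branches (its result for a negative X-force only needs wrapping into [0, 2π)):

```python
import math

def omega_A(F_x, F_z):
    """Equation (12): angle of the vector O_P O_H in the X_P O_P Z_P
    plane, recovered from the averaged chamfer-contact force components
    in the PCS. Angles are normalized to [0, 2*pi) for F_x < 0."""
    if F_x > 0:
        return math.atan2(F_z, F_x)                   # = arctan(F_z / F_x)
    return math.atan2(F_z, F_x) % (2 * math.pi)       # = arctan(F_z / F_x) + pi
```

For example, equal X and Z force components give ωA = π/4, while both components negative give 5π/4, pointing into the opposite quadrant.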
Case b. Contact the surface of the hole:
When the value of the deviation e makes Equation (11) invalid, the peg contacts the surface of the hole. As exhibited in Figure 8b, the contact point between the peg and the cover of the hole is defined as B. According to the side view of Figure 8b, ideally the contact force is parallel to the peg axis. In real tests, friction forces exist; however, as the friction force is slight and the final location is based on Move a, only the force along the YP-axis is extracted for calculation. In this case, $F_{avgx}^P = F_{avgz}^P = M_{avgy}^P = 0$, and torque analysis is the means to solve for the position of B in the XPOPZP plane:
$$ \begin{cases} M_{avgx}^P = z_B^P F_{avgy}^P \\ M_{avgz}^P = x_B^P F_{avgy}^P \end{cases} $$
where $F_{avgy}^P$ is the force data along the YP-axis in the PCS, and $M_{avgx}^P$ and $M_{avgz}^P$ represent the torque data about the XP-axis and the ZP-axis, respectively, in the PCS. Point B is the centroid of the contact interface. According to the symmetry of the circle, the points B, OP and OH lie on a straight line. Therefore, the angle ωB between the vector OPOH and the XP-axis in Case b is obtained:
$$ \omega_B = \begin{cases} \arctan \dfrac{z_B^P}{x_B^P} & x_B^P < 0 \\[2mm] \arctan \dfrac{z_B^P}{x_B^P} + \pi & x_B^P > 0 \end{cases} $$
The relative position between the hole and peg is expressed by the vector OPOH, which includes the orientation information ω and the magnitude information e. ωA and ωB, the orientation information of OPOH under the two contact conditions, are obtained by the above method. The deviation e is then derived in the trajectory planning process as follows.
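Equations (13) and (14) together can be sketched as follows: the centroid B is recovered from the torques, and the direction toward the hole is opposite the direction toward B (angles are normalized to [0, 2π), equivalent modulo 2π to the branches of Equation (14)):

```python
import math

def omega_B(M_x, M_z, F_y):
    """Equations (13)-(14): locate the contact centroid B from the torque
    produced by the axial force F_y, then take the direction opposite B,
    which points from O_P toward O_H."""
    z_B = M_x / F_y
    x_B = M_z / F_y
    ang = math.atan2(z_B, x_B)               # direction of B in the X_P O_P Z_P plane
    return (ang + math.pi) % (2 * math.pi)   # hole lies opposite the centroid
```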

3.2.2. Trajectory Planning for F/T Locating

For trajectory planning, the control system sends the robot's instantaneous target position to the actuation system every millisecond. During the F/T location process, an instantaneous stop is required when the peg contacts the hole. To fulfill this requirement, the motion of the body is planned via a discrete force control approach:
$$ \begin{cases} F_{mov} = M_{mov} A_n + C_{mov} V_{n-1} \\ V_n = A_n \Delta t + V_{n-1} \\ D_n = V_n \Delta t + D_{n-1} \end{cases} $$
In Equation (15), the 6D vectors An, Vn and Dn describe the acceleration, the velocity and the position trajectory of the robot body at time n, respectively. Δt represents the interval between two consecutive commands sent by the controller. Fmov is a virtual 6D force vector determined by the moving direction. Mmov = diag(m1, m2, …, m6) and Cmov = diag(c1, c2, …, c6) are the virtual mass and damping matrices, respectively.
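One 1 ms integration step of Equation (15) can be sketched as below; NumPy is assumed, and the mass and damping values mirror the experimental settings quoted in Section 3.2.2 (m = 1, c = 50), so a 1 N virtual force converges to the 0.02 m/s steady-state speed F/c.

```python
import numpy as np

def admittance_step(F_mov, V_prev, D_prev, M_diag, C_diag, dt=1e-3):
    """One step of Equation (15): the virtual force F_mov drives a
    virtual mass-damper (diagonal M_mov, C_mov); the integrated position
    D_n is streamed to the actuation system as the next body target."""
    A = (F_mov - C_diag * V_prev) / M_diag    # F_mov = M A_n + C V_{n-1}
    V = A * dt + V_prev                       # V_n = A_n dt + V_{n-1}
    D = V * dt + D_prev                       # D_n = V_n dt + D_{n-1}
    return V, D

M = np.ones(6)
C = 50.0 * np.ones(6)
F = np.zeros(6); F[1] = 1.0                   # virtual 1 N on a single axis
V = np.zeros(6); D = np.zeros(6)
for _ in range(200):                          # 0.2 s of 1 ms control ticks
    V, D = admittance_step(F, V, D, M, C)
# V[1] → 0.02 m/s (the steady state F/c); the other axes stay at rest
```

Setting all elements of F to zero at contact makes the damper bleed off the velocity, producing the near-instantaneous stop the text describes.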
When the peg contacts the hole, the robot plans the body trajectory by applying different Fmov. Fmov is a 6D vector: its first three elements describe the change of position and its last three elements represent the change of orientation. During the actual motion, contact between the hole and peg is judged from Sn, the force and torque data recorded by the F/T sensor. If the value of the force feedback exceeds a specified range, the peg has touched the hole; at this moment, the elements of Fmov are all set to zero to stop the robot instantaneously. Note that in Equation (15) the contact force sensed by the F/T sensor is not used in the calculation, so it has no impact on trajectory generation. Mmov and Cmov are therefore determined solely by the required accelerations and velocities of the body: the acceleration can be tuned via Mmov, and the velocity follows from Fmov and Cmov when the robot moves at a constant speed. The third equation of Equation (15) generates the target positions during the F/T location process. The trajectory planning method is introduced in detail as follows.
Before the peg moves, the peg position is defined as O. The peg first moves along the YP-axis to approach the hole, as shown in Figure 9a and Figure 10a. Then, after contacting the hole, the trajectory of the peg is generated according to the contact condition. If the contact condition is Case a, the robot adopts the motion Move a to adjust the position of the peg. As exhibited in Figure 9b, the peg contacts the chamfer of the hole at point A, with the peg located at position C. According to ωA calculated from Equation (12), the movement direction of the peg is obtained. In Figure 9b,c, the peg moves along the direction of the angle ωA until it contacts the other side of the chamfer at point A1. At this moment, the end of the peg is located at point D; this movement, during which the peg reaches the other side of the chamfer, is the motion C→D. The distance from C to D is defined as dA. Since dA = 2e, the deviation e is derived. After that, the peg moves the distance e along the direction from D to C, finally reaching position E, where the peg and hole are mated. In the above movement, the peg contacts the hole during the motion O→C→D, and the trajectory is planned via Equation (15). The peg moves with the robot body, and the movement is planned in the GCS. Therefore, during the motion O→C→D, Fmov is generated as follows:
$$F_{mov}=\begin{cases}\begin{bmatrix} R_P^G\begin{bmatrix}0 & 1 & 0\end{bmatrix}^T \\ R_P^G\begin{bmatrix}0 & 0 & 0\end{bmatrix}^T \end{bmatrix}, & O\to C\\[2ex] \begin{bmatrix} R_P^G\begin{bmatrix}\cos\omega_A & 0 & \sin\omega_A\end{bmatrix}^T \\ R_P^G\begin{bmatrix}0 & 0 & 0\end{bmatrix}^T \end{bmatrix}, & C\to D\end{cases}\tag{16}$$
In the above equation, $R_P^G = R_R^G R_P^R$. During the motion O→C, the vector [0 1 0]T represents the virtual force in the PCS, which can be regarded as 1 N along the YP-axis. Meanwhile, Cmov is set to determine the velocity of the robot. For instance, if the virtual damping factor along the YP-axis in the PCS is set to 50 N·s/m, then when the robot moves at a constant speed the velocity along the YP-axis is 0.02 m/s. When the peg reaches position C, it stays static for a while; in this case, all elements of Fmov are set to zero. During the motion C→D, Fmov is determined by ωA via Equation (16), while Mmov and Cmov are deduced from the required acceleration and velocity. We want a relatively large acceleration, so in the experiment we set Mmov = diag(1, 1, 1, 1, 1, 1) during the F/T locating process; if a larger acceleration were needed, the values of Mmov could be reduced further. For a constant total speed of 0.02 m/s during the F/T locating process, Cmov was set to diag(50, 50, 50, 50, 50, 50) in the experiment. With these values of Cmov, when the robot moves at a constant speed during the motion C→D, the velocities along the XP-axis and the ZP-axis in the PCS are 0.02cosωA m/s and 0.02sinωA m/s, respectively.
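At constant speed the virtual dynamics reduce to Cmov·v = Fmov, which is where the 0.02cosωA and 0.02sinωA values come from. A small numeric check of the translational block (the ωA value here is an arbitrary example, not the experimental one):

```python
import numpy as np

# At constant speed the acceleration is zero, so C_mov @ v = F_mov
# and the steady-state velocity is v = C_mov^-1 @ F_mov.
omega_A = np.deg2rad(30.0)          # hypothetical contact angle
C_mov = 50.0 * np.eye(3)            # translational damping block, 50 N·s/m
F_mov = 1.0 * np.array([np.cos(omega_A), 0.0, np.sin(omega_A)])  # 1 N along C->D

v = np.linalg.solve(C_mov, F_mov)   # components 0.02*cos(wA), 0, 0.02*sin(wA)
speed = np.linalg.norm(v)           # total speed 0.02 m/s, as in the text
```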
If the contact condition is Case b, the robot applies the motion Move b to adjust the position of the peg. As exhibited in Figure 10a,b, the peg moves from O to F and contacts the surface of the hole, with B as the contact point. ωB is calculated from Equations (13) and (14). Since the peg is pressed against the surface of the hole at position F, subsequent motions can only proceed after the peg moves away from the hole. Thus, in Figure 10b,c, the peg leaves the hole along the YP-axis. After moving to position G, the peg moves the distance dB along the direction of the angle ωB. According to Equation (11), if the contact condition is Case b, then e ≥ rh − rp + ch + cp. Therefore, to decrease the position error between the hole and the peg, we set dB = rh − rp + ch + cp. After the motion from G to H, as denoted in Figure 10d, the peg moves along the YP-axis to contact the hole again, and the contact condition is analyzed from the feedback of the F/T sensor. If the contact condition is still Case b, the motion Move b is repeated; otherwise, the motion Move a is applied to mate the peg and hole.
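The Case b correction step can be written compactly. The function below (a hedged sketch; symbol names mirror the text, units are whatever the radii and chamfer widths use) returns the in-plane displacement of length dB = rh − rp + ch + cp along the direction ωB:

```python
import math

def move_b_correction(r_h, r_p, c_h, c_p, omega_B):
    """In-plane correction step for contact Case b (sketch).

    r_h, r_p : hole and peg radii; c_h, c_p : hole and peg chamfer widths.
    Returns the displacement (dx, dz) of length d_B = r_h - r_p + c_h + c_p
    along the direction omega_B, the lower bound of the deviation e in Case b.
    """
    d_B = r_h - r_p + c_h + c_p
    return d_B * math.cos(omega_B), d_B * math.sin(omega_B)
```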
The F/T locating process is illustrated in the F/T locate block diagram of Figure 4. The peg and the hole are mated by applying the two motion patterns, Move a and Move b. In the process of Move b, the peg contacts the hole when moving from O to F; during this motion, the 6D vector is generated via Equation (17):
$$F_{mov}=\begin{bmatrix} R_P^G\begin{bmatrix}0 & 1 & 0\end{bmatrix}^T \\ R_P^G\begin{bmatrix}0 & 0 & 0\end{bmatrix}^T \end{bmatrix},\quad O\to F\tag{17}$$
In addition, during the movement F→G→H, the peg does not contact the hole, so position control can be applied to plan the trajectory of the body. Finally, when the peg moves from H toward the hole, the trajectory planning is the same as for the motion from O to F.

3.3. Assembly Strategy

After the location procedures, the peg is inserted into the hole to perform the assembly task. However, during the insertion, the movement error of the robot affects the assembly result. When the peg and the hole are mated, a fixed coordinate system (QCS) is set to coincide with the current PCS before the insertion process. In the QCS, the insertion direction is along the YQ-axis, and the motion along YQ is planned via Equation (15); thus, the insertion can be stopped immediately when the external forces on the peg exceed a certain range. In addition, the peg contacts the wall of the hole in the XQOZQ plane. To guarantee smooth assembly, admittance control is adopted: the robot body behaves as if its target and desired positions were linked by a virtual mass-spring-damper system, as shown in Figure 11.
In Figure 11, (xd, zd) and (xt, zt) denote the desired and target positions in the workspace, respectively. Based on the difference between these two positions, the robot actively generates the virtual damping and spring forces. In addition, m, c and k represent the mass, damping and stiffness parameters, respectively. Thus, the trajectory in the XQOZQ plane can be planned. Figure 12 shows the robot admittance control system.
As exhibited in Figure 12, Fdx represents the desired force along the XQ-axis. Fsx is the force information recorded by the F/T sensor, from which the contact force Fcx is extracted through the filter. Then, depending on Fdx and Fcx, the compensated position xc along the XQ-axis is obtained via the admittance filter. After that, xt is derived from xd and xc, and zt is obtained by the same method. Finally, the target position ut in the robot joint space is calculated by using the inverse kinematic model, and thus the trajectory of the motion is generated. The kinematic model was introduced in [37].
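The admittance filter of Figure 12 can be sketched as a discrete one-axis virtual mass-spring-damper: it integrates m·ẍc + c·ẋc + k·xc = Fd − Fc and outputs the compensation xc that is added to the desired position xd. A semi-implicit Euler sketch, assuming a fixed sample time (the class name and dt value are ours, not the paper's):

```python
class AdmittanceFilter:
    """Discrete one-axis admittance filter (sketch of Figure 12).

    Integrates m*x" + c*x' + k*x = F_d - F_c and returns the position
    compensation x_c, which is added to the desired position x_d downstream.
    """

    def __init__(self, m, c, k, dt):
        self.m, self.c, self.k, self.dt = m, c, k, dt
        self.x = 0.0   # compensation x_c
        self.v = 0.0   # its rate

    def update(self, F_d, F_c):
        # Semi-implicit Euler: update velocity first, then position.
        a = (F_d - F_c - self.c * self.v - self.k * self.x) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x
```

With a constant force error of 1 N and the paper's m, c, k, the compensation settles at 1/k ≈ 5.1 mm, i.e. the spring term alone balances the force at steady state.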
In the above control process, the parameters of the admittance controller and the low-pass filter have a significant influence on the reliability of the system. Analyzing the transfer function of the system, we can obtain:
$$G(s)=\frac{s^2+k_1 s+k_2}{m s^4+(c+m k_1)s^3+(k+c k_1+m k_2)s^2+(k k_1+c k_2)s+(k k_2+k_r k_2)}\tag{18}$$
According to the Routh–Hurwitz stability criterion, the Routh–Hurwitz table is established, and its first column turns out to be:
$$R_c=\begin{bmatrix} m\\ c+mk_1\\ \dfrac{c^2k_1+cmk_1^2+ck+m^2k_1k_2}{c+mk_1}\\ \dfrac{c^3k_1k_2+c^2kk_1^2+c^2mk_1^2k_2+ck^2k_1+cmkk_1^3+cm^2k_1k_2^2-2cmkk_1k_2-\left(c^2k_2+2cmk_1k_2+m^2k_1^2k_2\right)k_r}{c^2k_1+cmk_1^2+ck+m^2k_1k_2}\\ kk_2+k_2k_r\end{bmatrix}\tag{19}$$
To guarantee the stability of the system, all elements of Rc must be positive. Since m, c, k, k1, k2 and kr are all positive, only the fourth element (denoted by ER4) may become negative. The parameters k1 and k2 are determined by the filter and are not used to regulate stability, so two parameters need to be analyzed critically. The first is kr, which is unknown and determined by the materials; it may be very large, causing ER4 to become negative. The second is c. Observing ER4, its numerator is cubic in c but only quadratic in m and k, so c carries the greatest weight in the influence on ER4. Moreover, the smaller m is, the higher the bandwidth of the system, and k decides the offset of the peg, which should not be too large. As a result, c becomes the key parameter for regulating the system stability.
In the experiments, k1 and k2 were set to 11.25 and 63.33 by adjusting the filtering effect, and m, c and k were tuned to 12, 2254 and 196, respectively. In this case, ER4 is positive when kr < 24,324 N/m; that is, the system would become unstable if the contact rigidity between the peg and the hole were larger than 24,324 N/m. If the system becomes unstable because of a large kr, the parameter c can be enlarged to restore stability.
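The stability margin can be checked numerically by building the Routh array of the quartic denominator of G(s) and searching for the largest kr that keeps the first column positive. A sketch under the parameter values above; it lands near the quoted limit of 24,324 N/m (small differences come from rounding k1 and k2):

```python
def routh_first_column(a4, a3, a2, a1, a0):
    """First column of the Routh array for a4*s^4 + ... + a0."""
    b1 = (a3 * a2 - a4 * a1) / a3
    c1 = (b1 * a1 - a3 * a0) / b1
    return [a4, a3, b1, c1, a0]

def is_stable(kr, m=12.0, c=2254.0, k=196.0, k1=11.25, k2=63.33):
    # Denominator coefficients of G(s) for contact stiffness kr.
    coeffs = (m, c + m * k1, k + c * k1 + m * k2,
              k * k1 + c * k2, k * k2 + kr * k2)
    return all(e > 0 for e in routh_first_column(*coeffs))

# Bisect for the critical contact stiffness.
lo, hi = 1e3, 1e5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if is_stable(mid) else (lo, mid)
kr_crit = lo   # roughly 2.43e4 N/m with these parameters
```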
In the above procedures (illustrated in Figure 12), the orientations of the QCS and the PCS remain the same, so a force described in the QCS is identical to the same force described in the PCS. The model of the mass-spring-damper system can be expressed by:
$$m\begin{bmatrix}\ddot{x}_{Qt}\\ \ddot{z}_{Qt}\end{bmatrix}+c\begin{bmatrix}\dot{x}_{Qt}\\ \dot{z}_{Qt}\end{bmatrix}+k\begin{bmatrix}x_{Qt}-x_{Qd}\\ z_{Qt}-z_{Qd}\end{bmatrix}=\begin{bmatrix}F_{Pdx}\\ F_{Pdz}\end{bmatrix}-\begin{bmatrix}F_{Pcx}\\ F_{Pcz}\end{bmatrix}\tag{20}$$
The target position (xt, zt) is obtained according to Equation (20). Moreover, yt along the YQ-axis is derived via Equation (15). Thus, the trajectory planning in the QCS is obtained, and converted to the GCS as follows:
$$\begin{bmatrix}x_{Gt}\\ y_{Gt}\\ z_{Gt}\\ 1\end{bmatrix}=T_Q^G\begin{bmatrix}x_{Qt}\\ y_{Qt}\\ z_{Qt}\\ 1\end{bmatrix}\tag{21}$$
As mentioned above, the relative location between the peg and the hole is compensated by the admittance control method. Because of inaccurate location and movement during the assembly, a relative angle error between the peg and the hole also exists. However, there is a coupling between position and orientation during the insertion, and changing both at the same time may prevent smooth completion of the assembly. In this article, a strategy based on position admittance control and orientation compensation is applied to design the trajectory of the peg, as exhibited in Figure 13.
Regardless of the angle error, the peg can still be inserted some distance into the hole along the YQ-axis, as denoted in Figure 13a; during this phase, admittance control compensates the position error. Then, as shown in Figure 13b, the peg is unable to continue the insertion once it contacts two sides of the hole wall, and the motion generated by admittance control is stopped. During this two-contact phase, the torque applied to the peg is caused by the angular misalignment between the two mating parts; rotating the peg according to this torque to reduce the angular error is the orientation compensation strategy. As shown in Figure 13c, after the angular misalignment decreases, admittance control is used again to continue the assembly. In addition, according to the new orientation and position of the PCS, the QCS is reset to coincide with the current PCS.
Therefore, the whole insertion assembly can be regarded as a cyclic process: admittance control → orientation compensation → admittance control. Finally, when the assembly task is finished, the gripper releases the peg and the robot moves its body away from the peg.
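The cyclic process above can be sketched as a simple loop. All callbacks here are hypothetical stand-ins for the robot interfaces (insertion depth, F/T wrench, one admittance-controlled step, and the rotation routine), not APIs from the paper:

```python
def insert_peg(depth_goal, read_depth, read_wrench, admittance_step,
               compensate_orientation):
    """Cyclic insertion strategy (sketch): admittance control ->
    orientation compensation -> admittance control.

    admittance_step() advances the peg along Y_Q under admittance control
    and returns True when a two-point contact stalls the insertion.
    """
    while read_depth() < depth_goal:
        if admittance_step():
            # Two-point contact: the wrench torques reveal the angular
            # misalignment; rotate the peg to reduce it, then resume.
            torques = read_wrench()[3:]
            compensate_orientation(torques)
```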

4. Experiment and Discussion

4.1. Experiment

Experiments were conducted on the six-parallel-legged robot to perform the peg-in-hole assembly and validate the proposed method. In different peg-in-hole manipulations, the robot started the task from different initial positions, and the experiments were executed 10 times at each initial position. Summarizing the experiments over the different initial locations, the success rate was over 80 percent. In the experiment, the diameter of the peg was 50 mm, the clearance between the hole and the peg was less than 0.1 mm, and both the peg and the hole were chamfered.
Before the peg-hole insertion, the robot was unaware of the hole location. Once the operation started, the vision sensor first detected the orientation and position of the hole, and the visual detection results guided the robot to walk close to the hole. After that, the robot automatically planned its movement in real time based on the force feedback to locate the hole more precisely. Finally, after the alignment procedure, the six-legged robot inserted the peg into the hole. Figure 14 shows snapshots of the experiment, with the locations of the robot at different instants shown in the subfigures.
Subfigures ① to ⑥ in Figure 14 show the visual locating process. In subfigure ①, the robot was at its initial position and detected the orientation of the hole with the vision sensor. As shown in subfigure ②, the robot rotated to make the peg parallel to the hole according to the measured orientation. Then, the robot performed a second visual detection to locate the position of the hole, as denoted in subfigure ③. After that, in subfigures ④, ⑤ and ⑥, the robot approached the hole via lateral (parallel to the XP-axis) and forward (parallel to the YP-axis) movements and adjusted the height of the body to mate the peg and hole.
Figure 15 shows the body and feet positions during the visual location process, and Figure 16 shows them during the F/T location and insertion assembly processes. Since the peg was held by the gripper and fixed to the body, the trajectory of the peg can be described via the trajectory of the robot body. Besides, as the 2nd and 5th legs moved alternately in the 3-3 gait, the motions of these two feet are used to represent the motions of all feet.
Subfigures ⑦ to ⑫ in Figure 14 show the F/T location and insertion assembly processes. In addition, Figure 17 and Figure 18 show the force and torque data recorded by the F/T sensor. As illustrated in subfigure ⑦, the robot body moved forward until the peg contacted the hole; correspondingly, the force and torque traces in Figure 17 and Figure 18 change noticeably. After ⑦, the traces in Figure 16 show the robot temporarily stationary. During this stationary period, the F/T sensor data were recorded, the average contact force/torque was calculated via Equation (10), and its expression in the PCS was derived via Equations (8) and (9). Then, using the method introduced in Section 3.2.1, the robot identified the contact condition as Case b. In this case, the motion Move b was applied for another contact. As stated in Section 3.2.2, the motion O→F in Move b is generated via Equation (15), and Fmov for this motion is calculated via Equation (17). Since the robot rotated during the visual location process before the F/T location process, the matrix $R_P^G$ used in Equations (16) and (17) had to be deduced first. In the experiment, $R_P^G$ was:
$$R_P^G=\begin{bmatrix}0.9951 & 0.0991 & 0\\ 0 & 0 & 1\\ 0.0991 & 0.9951 & 0\end{bmatrix}\tag{22}$$
According to $R_P^G$ and Equation (17), we obtain $F_{mov}=\begin{bmatrix}0.0991 & 0 & 0.9951 & 0 & 0 & 0\end{bmatrix}^T$ during the motion O→F in the experiment.
Then, as shown in subfigure ⑧, the peg contacted the hole again, and the traces in Figure 17 and Figure 18 change accordingly. As before, the robot judged the contact condition, this time Case a, from the force feedback. Thus, the motion Move a was adopted to align the hole and the peg. During Move a, the motion O→C→D is planned via Equation (15).
During the motion O→C, Fmov is calculated via the first case in Equation (16); with $R_P^G$ from Equation (22), $F_{mov}=\begin{bmatrix}0.0991 & 0 & 0.9951 & 0 & 0 & 0\end{bmatrix}^T$ in the experiment. During the motion C→D, Fmov is calculated via the second case in Equation (16), with ωA obtained from Equation (12). In the experiment, the force feedback gave cosωA = 0.9723 and sinωA = 0.2337; thus, using $R_P^G$ from Equation (22), Fmov was $\begin{bmatrix}0.9675 & 0.2337 & 0.0964 & 0 & 0 & 0\end{bmatrix}^T$ during the motion C→D. The F/T locating process was accomplished after Move a. Then the robot began to insert the peg into the hole, as shown in subfigures ⑨ and ⑩. Finally, in subfigures ⑪ and ⑫, the peg-in-hole task was finished and the robot moved away from the hole. Figure 16 shows the locations of the body and feet during the F/T locating and insertion assembly processes.

4.2. Discussion

A six-parallel-legged robot was used in the experiment to demonstrate the methodology presented in the paper. The proposed method can also be used by other six-legged robots to perform the peg-in-hole task, since six-legged robots with other leg mechanisms can likewise adopt the tripod gait for moving. Thus, a legged robot is able to fulfill the peg-hole assembly task by exploiting its own mobility, without installing additional mechanical arms.
In addition, the focus of this paper is the chamfered peg-in-hole task. If the method is extended to a chamfer-less situation, the visual location process is unchanged: the vision sensor detects the hole first, and the robot adjusts based on the visual feedback. In the following F/T location process, if there is no chamfer, the contact condition is always Case b (defined in Section 3.2.1) when the deviation between the hole and peg is e > e0 = rh − rp. According to the analysis of Case b in Section 3.2.1, the direction of OPOH is derived, and the peg moves the distance ne0 (where n adjusts the step size) along OPOH to reduce the deviation. After that, the peg contacts the hole again. If the force feedback shows that the peg still contacts the same side of the hole, the peg moves a further ne0 along OPOH. If the peg contacts the other side, the new deviation between the hole and peg is ne0 − e; in this case, the peg moves the distance 0.5ne0 in the opposite direction for another contact. The procedure is repeated until the deviation is smaller than e0, which completes the F/T location process. Finally, the insertion assembly is performed. When the clearance between the hole and peg is larger, the insertion becomes easier due to the larger permissible error; when the clearance is smaller, the insertion speed is reduced to provide more time for adjusting the relative position between the hole and peg.
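The chamfer-less search just described can be sketched as a loop; halving the step after each overshoot is one reasonable reading of the repeated 0.5ne0 correction. The callbacks are hypothetical stand-ins for the contact probing and body motion:

```python
def ft_locate_chamferless(contact_side, move_along, r_h, r_p, n=2.0):
    """Chamfer-less F/T search (hedged sketch of the discussion above).

    contact_side() probes the hole and returns 'same', 'other', or 'in'
    (deviation below e0 = r_h - r_p); move_along(d) moves the peg a signed
    distance d along the direction O_P O_H.
    """
    e0 = r_h - r_p
    step = n * e0
    while True:
        side = contact_side()
        if side == "in":
            return                  # deviation < e0: F/T location done
        if side == "same":
            move_along(step)        # keep closing the gap along O_P O_H
        else:
            step *= 0.5             # overshot: new deviation is n*e0 - e
            move_along(-step)       # back up half a step for another contact
```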
In the experiments, the F/T location process relied on the results of the visual detection. Occasionally, the orientation information from the vision sensor is inaccurate, which may affect the experimental results. To reduce the angle error, the hole is detected 20 times during each visual sensing process and the detection results are averaged. Moreover, we give priority to angular precision during the visual location procedure: after the rotation movements, if the angle error is eliminated according to the visual feedback, the robot begins to approach the hole to reduce the position error; otherwise, the robot continues to rotate to decrease the angle error.
In future work, we plan to improve the presented method by reducing the reliance on visual detection, and various peg-hole forms, such as different sizes and the chamfer-less situation, will be studied. In addition, the forces exerted by the legged robot during the insertion depend on the frictional interaction between the feet and the ground, and the robot must maintain its stability during the entire task. As these two requirements were easily satisfied in our case, we did not analyze them concretely; we therefore also plan to study them for a lighter robot on a low-friction surface in the future.

5. Conclusions

The paper proposes a method for a six-legged robot to accomplish a peg-hole insertion task. The conclusions can be summarized as follows:
(1)
A vision sensor and an F/T sensor are used to detect the orientation and position of the hole.
(2)
Based on the feedback of the force sensing, the trajectory of the robot is planned in real time.
(3)
The peg is held by the gripper and connected to the robot body directly. The body adopts admittance control for the insertion process.
(4)
The proposed method was implemented on a six-parallel-legged robot. Thanks to the mobility of the prototype, it can approach holes located at different positions to perform the assembly task.
(5)
Verification experiments were conducted, and the experimental results proved the effectiveness of the method.

Author Contributions

Y.Z. (Yinan Zhao) proposed the method and accomplished the analysis under the supervision of F.G. Y.Z. (Yue Zhao) and Z.C. did some studies and tested the sensors. Y.Z. (Yinan Zhao) and Y.Z. (Yue Zhao) designed and carried out the experiment. Y.Z. (Yinan Zhao) wrote the paper under the guidance of F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Nature Science Foundation of China (Grant No. U1613208), the National Key Research and Development Plan of China (Grant No. 2017YFE0112200), and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant (Grant No. 734575).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Raibert, M.; Blankespoor, K.; Nelson, G.; Playter, R. BigDog, the Rough-Terrain Quadruped Robot. IFAC Proc. Vol. 2008, 41, 10822–10825.
2. Wermelinger, M.; Fankhauser, P.; Diethelm, R.; Krusi, P.; Siegwart, R.; Hutter, M. Navigation planning for legged robots in challenging terrain. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1184–1189.
3. Bermudez, F.L.G.; Julian, R.; Haldane, D.W.; Abbeel, P.; Fearing, R.S. Performance analysis and terrain classification for a legged robot over rough terrain. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, 7–12 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 513–519.
4. Kolter, J.Z.; Ng, A.Y. The Stanford LittleDog: A learning and rapid replanning approach to quadruped locomotion. Int. J. Robot. Res. 2011, 30, 150–174.
5. Khalaji, A.K.; Moosavian, S.A.A. Stabilization of a tractor-trailer wheeled robot. J. Mech. Sci. Technol. 2016, 30, 421–428.
6. Ueda, K.; Guarnieri, M.; Hodoshima, R.; Fukushima, E.F.; Hirose, S. Improvement of the remote operability for the arm-equipped tracked vehicle HELIOS IX. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 363–369.
7. Nagatani, K.; Kiribayashi, S.; Okada, Y.; Otake, K.; Yoshida, K.; Tadokoro, S.; Nishimura, T.; Yoshida, T.; Koyanagi, E.; Fukushima, M.; et al. Emergency response to the nuclear accident at the Fukushima Daiichi Nuclear Power Plants using mobile rescue robots. J. Field Robot. 2012, 30, 44–63.
8. De Santos, P.G.; Cobano, J.; García, E.; Estremera, J.; Armada, M. A six-legged robot-based system for humanitarian demining missions. Mechatronics 2007, 17, 417–430.
9. Wilcox, B.H.; Litwin, T.; Biesiadecki, J.J.; Matthews, J.; Heverly, M.; Morrison, J.C.; Townsend, J.; Ahmad, N.; Sirota, A.; Cooper, B. Athlete: A cargo handling and manipulation robot for the moon. J. Field Robot. 2007, 24, 421–434.
10. Zucker, M.; Jun, Y.; Killen, B.; Kim, T.-G.; Oh, P.; Zucker, M. Continuous trajectory optimization for autonomous humanoid door opening. In Proceedings of the 2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA), Woburn, MA, USA, 22–23 April 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–5.
11. Banerjee, N.; Long, X.; Du, R.; Polido, F.; Feng, S.; Atkeson, C.G.; Gennert, M.; Padir, T. Human-supervised control of the ATLAS humanoid robot for traversing doors. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea, 3–5 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 722–729.
12. Kanoulas, D.; Lee, J.; Caldwell, D.; Tsagarakis, N.G. Visual Grasp Affordance Localization in Point Clouds Using Curved Contact Patches. Int. J. Humanoid Robot. 2017, 14, 1650028.
13. Zhang, K.; Shi, M.; Xu, J.; Liu, F.; Chen, K. Force control for a rigid dual peg-in-hole assembly. Assem. Autom. 2017, 37, 200–207.
14. Tang, T.; Lin, H.-C.; Zhao, Y.; Fan, Y.; Chen, W.; Tomizuka, M. Teach industrial robots peg-hole-insertion by human demonstration. In Proceedings of the 2016 IEEE International Conference on Advanced Intelligent Mechatronics (AIM), Banff, AB, Canada, 12–15 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 488–494.
15. Park, J.-H.; Bae, J.-H.; Park, J.-H.; Baeg, M.-H.; Park, J. Intuitive peg-in-hole assembly strategy with a compliant manipulator. In Proceedings of the IEEE ISR 2013, Seoul, Korea, 24–26 October 2013; pp. 1–5.
16. Ott, C.; Baumgartner, C.; Mayr, J.; Fuchs, M.; Burger, R.; Lee, D.; Eiberger, O.; Albu-Schaffer, A.; Grebenstein, M.; Hirzinger, G. Development of a biped robot with torque controlled joints. In Proceedings of the 2010 10th IEEE-RAS International Conference on Humanoid Robots, Nashville, TN, USA, 6–8 December 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 167–173.
17. Tsagarakis, N.G.; Caldwell, D.G.; Negrello, F.; Choi, W.; Baccelliere, L.; Loc, V.; Noorden, J.; Muratore, L.; Margan, A.; Cardellino, A.; et al. WALK-MAN: A High-Performance Humanoid Platform for Realistic Environments. J. Field Robot. 2017, 34, 1225–1259.
18. Semini, C.; Barasuol, V.; Goldsmith, J.; Frigerio, M.; Focchi, M.; Gao, Y.; Caldwell, D.G. Design of the Hydraulically Actuated, Torque-Controlled Quadruped Robot HyQ2Max. IEEE/ASME Trans. Mechatron. 2017, 22, 635–646.
19. Badri-Sprowitz, A.; Tuleu, A.; Vespignani, M.; Ajallooeian, M.; Badri, E.; Ijspeert, A. Towards dynamic trot gait locomotion: Design, control, and experiments with Cheetah-cub, a compliant quadruped robot. Int. J. Robot. Res. 2013, 32, 932–950.
20. Tedeschi, F.; Carbone, G. Design Issues for Hexapod Walking Robots. Robotics 2014, 3, 181–206.
21. Wang, Z.; Ding, X.; Rovetta, A.; Giusti, A. Mobility analysis of the typical gait of a radial symmetrical six-legged robot. Mechatronics 2011, 21, 1133–1146.
22. Pauli, J.; Schmidt, A.; Sommer, G. Vision-based integrated system for object inspection and handling. Robot. Auton. Syst. 2001, 37, 297–309.
23. Huang, S.; Yamakawa, Y.; Senoo, T.; Ishikawa, M. Realizing peg-and-hole alignment with one eye-in-hand high-speed camera. In Proceedings of the 2013 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Wollongong, Australia, 9–12 July 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1127–1132.
24. Su, J.; Qiao, H.; Ou, Z.; Zhang, Y. Sensor-less insertion strategy for an eccentric peg in a hole of the crankshaft and bearing assembly. Assem. Autom. 2012, 32, 86–99.
25. Wang, J.; Liu, A.; Tao, X.; Cho, H. Microassembly of micropeg and -hole using uncalibrated visual servoing method. Precis. Eng. 2008, 32, 173–181.
26. Chang, R.J.; Lin, C.Y.; Lin, P.S. Visual-Based Automation of Peg-in-Hole Microassembly Process. J. Manuf. Sci. Eng. 2011, 133, 041015.
27. Li, X.; Li, R.; Qiao, H.; Ma, C.; Li, L. Human-inspired compliant strategy for peg-in-hole assembly using environmental constraint and coarse force information. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 4743–4748.
28. Abdullah, M.W.; Roth, H.; Weyrich, M.; Wahrburg, J. An Approach for Peg-in-Hole Assembling using Intuitive Search Algorithm based on Human Behavior and Carried by Sensors Guided Industrial Robot. IFAC PapersOnLine 2015, 48, 1476–1481.
29. Dietrich, F.; Buchholz, D.; Wobbe, F.; Sowinski, F.; Raatz, A.; Schumacher, W.; Wahl, F.M. On contact models for assembly tasks: Experimental investigation beyond the peg-in-hole problem on the example of force-torque maps. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 2313–2318.
30. Yun, S.-K. Compliant manipulation for peg-in-hole: Is passive compliance a key to learn contact motion? In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1647–1652.
31. Pitchandi, N.; Subramanian, S.P.; Irulappan, M. Insertion force analysis of compliantly supported peg-in-hole assembly. Assem. Autom. 2017, 37, 285–295.
32. Baksys, B.; Baskutiene, J.; Baskutis, S.; Loughlin, C. The vibratory alignment of the parts in robotic assembly. Ind. Robot. Int. J. 2017, 44, 720–729.
33. Kim, Y.-L.; Song, H.-C.; Song, J.-B. Hole detection algorithm for chamferless square peg-in-hole based on shape recognition using F/T sensor. Int. J. Precis. Eng. Manuf. 2014, 15, 425–432.
34. Lopes, A.M.; Almeida, F. A force–impedance controlled industrial robot using an active robotic auxiliary device. Robot. Comput. Manuf. 2008, 24, 299–309.
35. Song, H.-C.; Kim, Y.-L.; Song, J.-B. Guidance algorithm for complex-shape peg-in-hole strategy based on geometrical information and force control. Adv. Robot. 2016, 30, 1–12.
36. Spong, M.W.; Hutchinson, S.; Vidyasagar, M. Robot Modeling and Control; John Wiley & Sons: Hoboken, NJ, USA, 2020.
37. Pan, Y.; Gao, F. A new six-parallel-legged walking robot for drilling holes on the fuselage. Proc. Inst. Mech. Eng. C J. Mech. Eng. Sci. 2013, 228, 753–764.
Figure 1. Prototype of the six-parallel-legged robot.
Figure 2. System structure of the robot.
Figure 3. Definition of coordinate systems.
Figure 4. The process of the peg-in-hole task.
Figure 5. Definition of the HCS.
Figure 6. The color image and the depth image.
Figure 7. Trajectory planning during the visual locating process.
Figure 8. The contact conditions between the peg and hole during the F/T locating process.
Figure 9. The movement of the peg when the peg contacts the chamfer of the hole.
Figure 10. The movement of the peg when the peg contacts the surface of the hole.
Figure 11. The virtual mass-spring-damper model.
Figure 12. The robot admittance control system.
Figure 13. The trajectory planning strategy during the insertion process. (a) Insertion with an angle error; (b) Compensation for the angle error; (c) Insertion after angle compensation.
Figure 14. Snapshots of the experiment.
Figure 15. Motions of the body and feet in the process of visual locating.
Figure 16. Motions of the body and feet in the processes of F/T locating and insertion assembly.
Figure 17. Feedback force during the F/T location and the insertion assembly processes.
Figure 18. Feedback torque during the F/T location and the insertion assembly processes.

Zhao, Y.; Gao, F.; Zhao, Y.; Chen, Z. Peg-in-Hole Assembly Based on Six-Legged Robots with Visual Detecting and Force Sensing. Sensors 2020, 20, 2861. https://doi.org/10.3390/s20102861