Article

Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)

1 School of Computer Science & Engineering, South China University of Technology, Guangzhou 510641, China
2 Department of Information and Electronic Engineering, Muroran Institute of Technology, Muroran 050-0071, Japan
* Author to whom correspondence should be addressed.
Sensors 2017, 17(2), 393; https://doi.org/10.3390/s17020393
Submission received: 12 December 2016 / Revised: 7 February 2017 / Accepted: 9 February 2017 / Published: 17 February 2017
(This article belongs to the Special Issue New Paradigms in Cyber-Physical Social Sensing)

Abstract

Although many researchers have begun to study Cyber Physical Social Sensing (CPSS), few have focused on robotic sensors. In this paper, we utilize robots in CPSS and propose a sensor trajectory planning method. Trajectory planning is a fundamental problem in mobile robotics, but traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points to a quintic polynomial, which outputs a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be readily applied in the CPSS field.

1. Introduction

With the rapid development of sensing and network technologies, Cyber Physical Social Sensing (CPSS) has attracted the attention of many researchers [1,2,3,4]. Although much research has been proposed to advance the field of CPSS, little of it has focused on social robotic sensing. We utilize robots in the CPSS area and propose an effective robotic sensing method in this paper. As shown in Figure 1, robots equipped with specially-designed eye-in-hand sensors explore the world and share information among all robots through a wireless network and a cloud platform. To move the sensors accurately and smoothly, the robots need to calculate their trajectories. Traditional methods, however, cannot be applied directly to robotic sensing in CPSS, mainly because the existing trajectory planning methods are not designed for sensing tasks. Most of them are poorly suited to eye-in-hand sensors for three reasons: low efficiency caused by the extra inverse kinematics calculations, instability because they do not optimize sensor performance, and the non-smooth paths they generate. To solve these problems, we propose a novel trajectory planning method to improve sensing performance in CPSS.
Trajectory planning is a fundamental problem in robotics. Because of actuator limitations, neither the velocity nor the acceleration of the robotic drives can reach arbitrary values. Robots are multi-variable, highly nonlinear, complex systems, so it is extremely difficult to obtain a smooth trajectory that simultaneously meets the requirements on velocity, acceleration, and jerk. Some trajectory planning methods (e.g., C-space [5] and preprocessing algorithms) can find a smooth trajectory that satisfies the kinematic limits [6,7]. Most of these traditional methods, however, focus merely on time and jerk optimization [8,9,10], and visual information is not used. In the past decade, significant progress has been made in machine vision technology [11], and it has been applied to trajectory planning to improve planning performance. Li [12] adopted vision-guided robot control to build a visual feedback system for real-time measurement of the end-effector and joint positions. Among machine vision methods, classic binocular stereo vision—which captures the same scene from two angles using two cameras—is the most widely used because of its simple configuration and high reliability, and it is adopted as the visual system in this paper. Using a stereo matching algorithm [11], the disparity between the two images can be calculated. From this disparity, the three-dimensional (3D) position and orientation of the objects can be obtained using the camera calibration technique, which describes the mapping between pixels in the digital image and 3D positions in the world coordinate system.
Trajectory planning methods use a series of transformation matrices [13,14,15] to obtain the position of each joint of a robot. When inverse kinematics is used to calculate the joint angles for a given manipulator position, the solution trajectory of the relevant joints is usually not unique. Therefore, an optimization objective must be defined to arrive at the optimal trajectory [16,17]. Another problem results from the joint positioning errors caused by weight distribution, load changes, vibration, mechanical friction, and backlash, which make it difficult to obtain an accurate robotic dynamics model in real-world applications. Instead of providing the complete trajectory (which would deviate from the actual robotic motion), our approach provides the next position that can be reached in the next time unit. We believe that it is not necessary to obtain an exact rotation angle for each joint; instead, we focus on how the end-effector reaches the object continually and smoothly to achieve better sensing performance. In practical applications, the working precision of the robot is limited by factors such as manipulator limitations and the working environment, which cause various errors in the sensor's motion. We use binocular stereo vision to rectify these motion errors. Both the velocity and the acceleration of each joint must be continuous; therefore, the proposed method introduces a jerk restriction to avoid vibration and reduce mechanical wear. To produce a smooth motion path for the equipped sensor, we adopt an optimizing function to generate several intermediate points and regress these discrete points to a quintic polynomial, which ultimately outputs a smooth trajectory for the sensor.

2. System Overview

Generally, a robotic motion trajectory is described in Cartesian space or in joint space. The trajectory represented in joint angle space, however, offers several advantages [18]. First, a trajectory generated directly from the joint rotations avoids a large amount of forward and inverse kinematics computation, which matters in particular for real-time applications. Second, a trajectory represented in Cartesian space must eventually be converted into joint coordinates, so generating the trajectory directly in joint space clearly reduces the computation time. To improve position accuracy, a visual sensor is used to compensate for errors and correct the trajectory.
Due to accumulated errors, the manipulator cannot reach the target when it runs along a predetermined trajectory alone. To improve the accuracy, we use a binocular stereo visual sensor to compensate for and correct the trajectory. The proposed trajectory planning method is therefore divided into a visual module and a trajectory planning module; Figure 2 shows the schematic of the method with the binocular stereo visual sensor. In our method, the planned trajectory is first generated from the initial and final states of the manipulator. While the manipulator is running, the trajectory is corrected by acquiring the current joint angles and the position of the distal end of the manipulator, which improves the grasping accuracy. To compensate the trajectory, several parameters must be measured: the joint angles are obtained from the angle sensors, and the end position of the manipulator is obtained from the binocular stereo visual sensor and the stereo vision algorithm. The stereo vision algorithm gives the mapping from pixel coordinates to spatial coordinates, which greatly simplifies the calculation of the trajectory compensation values. The binocular stereo visual sensor therefore plays an important role in improving the grasping of the target to be processed. After obtaining the trajectory in joint angle space, the trajectory in Cartesian space can be obtained by the forward kinematics of the manipulator.

2.1. Binocular Vision Sensor

The proposed vision method is shown in Figure 3, which illustrates how the binocular vision sensor is used to improve the manipulator's operation accuracy. Many other machine vision methods have been applied in trajectory planning; classical binocular stereo vision, however, is still widely used for its simplicity and effectiveness. This method uses two vision sensors to obtain images of the same object from different angles. The Cartesian coordinates of the target can then be obtained from the disparity between the two images.
Currently, RGB-Depth sensors and optical fiber sensors [19] are widely used to measure the Cartesian coordinates of objects. If, however, obstacles appear between the sensor and the object, these sensors may fail to obtain the location and orientation of the object. Furthermore, additional computational cost is required to obtain the absolute object coordinates for trajectory planning. Because the relative position is more important than the absolute position for controlling the mobile robot and the equipped sensor, traditional binocular stereo vision is adopted in the proposed method.

2.2. Testing Platform

The proposed system was tested in a simulation environment based on the Robot Operating System (ROS). ROS is a software framework for robotic research and development and has become a mainstream robotic simulation platform. It integrates hardware abstraction, device drivers, libraries, visualizers, message-passing, package management, and many other convenient functions. The UR5 is a robot with six degrees of freedom (6-DOF). In our experiments, it is equipped with a mobile chassis and a binocular stereo vision sensor to conduct the performance evaluations.
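As a minimal illustration of how joint angles can be read back from such a simulation (assuming a standard ROS 1 setup in which the UR5 model publishes sensor_msgs/JointState messages on the /joint_states topic; the topic name, node name, and rate below are assumptions rather than details from this paper), a Python sketch could look like this:

```python
#!/usr/bin/env python
# Minimal sketch: read the current UR5 joint angles from a ROS simulation.
# Assumes a running ROS master and a UR5 model publishing /joint_states.
import rospy
from sensor_msgs.msg import JointState

current_angles = {}

def joint_state_callback(msg):
    # Store the latest angle (rad) for every reported joint.
    for name, position in zip(msg.name, msg.position):
        current_angles[name] = position

if __name__ == "__main__":
    rospy.init_node("joint_state_reader")
    rospy.Subscriber("/joint_states", JointState, joint_state_callback)
    rate = rospy.Rate(50)  # 20 ms sampling period (assumed)
    while not rospy.is_shutdown():
        if current_angles:
            rospy.loginfo("Current joint angles: %s", current_angles)
        rate.sleep()
```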

3. Binocular Stereo Vision Sensor System

To generate the transform vector that directly maps the end-effector to the target, the mapping relationship between the space positions and the pixel locations in the camera plane is required. Considering the influence of lens distortion, the transformation matrix from the camera coordinate to the world coordinate [14] can be expressed by a homogeneous transformation matrix, per Equation (1):
$$Z_c \begin{bmatrix} u & v & 1 \end{bmatrix}^T = M^* P_W$$
where $(u, v)$ is a pixel of a point in the camera image plane, with homogeneous coordinates $[u, v, 1]^T$; $P_W$ is the world coordinate of a point, with homogeneous coordinates $[X_w, Y_w, Z_w, 1]^T$. The transformation matrix $M^*$ can be described as follows:
$$M^* = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0_{1\times3} & 1 \end{bmatrix}$$
The matrix $M^*$ can be obtained easily by Zhang's calibration method [20]. In Equation (2), $(u_0, v_0)$ is the origin of the physical coordinate system in the camera image plane; $d_x$ and $d_y$ are the length and the width of a pixel, respectively; $R_{3\times3}$ and $t_{3\times1}$ are the rotation matrix and the translation vector from the camera coordinate frame to the world coordinate system, respectively. To obtain the world coordinate of a point, its homogeneous coordinates $[X_c, Y_c, Z_c, 1]^T$ in the camera coordinate system are needed. These coordinates are obtained by binocular stereo vision, which provides additional information about the objects and the environment through the left and right cameras. If we obtain the perspective difference between the left and right camera images, we can then calculate the coordinates of the target point. The parallax principle of binocular stereo vision is shown in Figure 4; $p_l$ and $p_r$ are the projections of the target point $p_c$ on the left and right camera planes, respectively. If $b$ is the distance between the optical centers of the left and right cameras, the coordinates of $p_c$ in the left and right camera planes are $(x_{left} = f X_c / Z_c,\ y_{left} = f Y_c / Z_c)$ and $(x_{right} = f (X_c - b) / Z_c,\ y_{right} = f Y_c / Z_c)$.
The visual disparity between the left and right image planes can then be obtained by $disp = x_{left} - x_{right}$. Finally, the coordinates of point $p_c$ in the camera coordinate system can be calculated [9] by Equation (3):
$$X_c = \frac{b \cdot x_{left}}{disp}, \qquad Y_c = \frac{b \cdot y_{left}}{disp}, \qquad Z_c = \frac{b \cdot f}{disp}$$
Equation (3) shows the mathematical model of the transformation from the pixel to the Cartesian coordinate.
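As an illustration of Equation (3), the following minimal NumPy sketch recovers camera-frame coordinates from a matched pixel pair; the baseline, focal length, and pixel values at the bottom are hypothetical, not measurements from our setup:

```python
import numpy as np

def triangulate(x_left, y_left, x_right, b, f):
    """Recover camera-frame coordinates from a matched pixel pair (Equation (3)).

    b : baseline between the optical centers (same unit as the output)
    f : focal length expressed in pixels
    """
    disp = x_left - x_right          # visual disparity
    if abs(disp) < 1e-9:
        raise ValueError("Zero disparity: point at infinity or mismatched pair.")
    X_c = b * x_left / disp
    Y_c = b * y_left / disp
    Z_c = b * f / disp
    return np.array([X_c, Y_c, Z_c])

# Hypothetical example: 120 mm baseline, 700-pixel focal length.
print(triangulate(x_left=310.0, y_left=95.0, x_right=268.0, b=120.0, f=700.0))
```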

4. Trajectory Planning For Binocular Stereo Sensors

Generally, a trajectory is obtained from calculations involving the initial and final states (e.g., position, velocity, and acceleration) of the joints in Cartesian coordinates. The points on the trajectory must then be mapped to a set of joint angles by inverse kinematics. In fact, the robotic motion is the rotary movement of each of the joints; therefore, a trajectory represented in the joint coordinate system describes the robotic motion more directly.

4.1. Joint Space-Based Trajectory Planning

A smooth interpolation function is required to obtain a satisfactory joint trajectory connecting the initial and final joint angles. Considering the constraints, a fifth-order interpolation function [9] is used to calculate the robotic trajectory. In this calculation, $\theta(t)$ is defined as the joint trajectory function that describes the relationship between the joint angle and time; $t_b$, $\theta_b$, $\dot\theta_b$, and $\ddot\theta_b$ denote the initial time, joint angle, angular velocity, and angular acceleration, respectively; and $t_f$, $\theta_f$, $\dot\theta_f$, and $\ddot\theta_f$ denote the final time, joint angle, angular velocity, and angular acceleration, respectively. The fifth-order interpolation function can be described as follows:
$$s(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5$$
Let $T_f = t_f - t_b$, and let $T_p$—which is determined by the controller of the manipulator—be the trajectory sampling period; then the number of samples is $num = T_f / T_p$. Define $\tau$ as the index of the sample points, $\tau = (t - t_b)/T_p$, $\tau \in [0, num]$; then the trajectory can be represented by discrete sampling points as follows:
$$\theta(\tau) = \theta_b + (\theta_f - \theta_b)\, s(\tau)$$
The first derivative and the second derivative of Equation (5) can be expressed as
$$\dot\theta(\tau) = \frac{(\theta_f - \theta_b)\, \dot s(\tau)}{T_p}$$
$$\ddot\theta(\tau) = \frac{(\theta_f - \theta_b)\, \ddot s(\tau)}{T_p^{2}}$$
In Equations (5)–(7), the initial and final states are known; let $\theta(t_b) = \theta_b$, $\dot\theta(t_b) = \dot\theta_b$, $\ddot\theta(t_b) = \ddot\theta_b$, $\theta(t_f) = \theta_f$, $\dot\theta(t_f) = \dot\theta_f$, and $\ddot\theta(t_f) = \ddot\theta_f$. Substituting the initial and terminal conditions into Equations (5)–(7), we obtain
$$s(0) = 0, \quad \dot s(0) = \frac{\dot\theta_b T_p}{\theta_f - \theta_b}, \quad \ddot s(0) = \frac{\ddot\theta_b T_p^2}{\theta_f - \theta_b}, \quad s(num) = 1, \quad \dot s(num) = \frac{\dot\theta_f T_p}{\theta_f - \theta_b}, \quad \ddot s(num) = \frac{\ddot\theta_f T_p^2}{\theta_f - \theta_b}$$
Writing Equation (8) in matrix form using Equation (4) and the boundary conditions, we obtain the following:
$$\begin{bmatrix} s(0) \\ \dot s(0) \\ \ddot s(0) \\ s(num) \\ \dot s(num) \\ \ddot s(num) \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 \\ 1 & \tfrac{T_f}{T_p} & \left(\tfrac{T_f}{T_p}\right)^2 & \left(\tfrac{T_f}{T_p}\right)^3 & \left(\tfrac{T_f}{T_p}\right)^4 & \left(\tfrac{T_f}{T_p}\right)^5 \\ 0 & 1 & 2\tfrac{T_f}{T_p} & 3\left(\tfrac{T_f}{T_p}\right)^2 & 4\left(\tfrac{T_f}{T_p}\right)^3 & 5\left(\tfrac{T_f}{T_p}\right)^4 \\ 0 & 0 & 2 & 6\tfrac{T_f}{T_p} & 12\left(\tfrac{T_f}{T_p}\right)^2 & 20\left(\tfrac{T_f}{T_p}\right)^3 \end{bmatrix}}_{M} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}$$
So, the coefficient vector $\mathbf{a} = [a_0, a_1, a_2, a_3, a_4, a_5]^T$ can be expressed as follows:
$$\mathbf{a} = M^{-1} \left[ 0, \;\; \frac{\dot\theta_b T_p}{\theta_f - \theta_b}, \;\; \frac{\ddot\theta_b T_p^2}{\theta_f - \theta_b}, \;\; 1, \;\; \frac{\dot\theta_f T_p}{\theta_f - \theta_b}, \;\; \frac{\ddot\theta_f T_p^2}{\theta_f - \theta_b} \right]^T$$
Then, the trajectory in joint coordinates can be obtained from Equations (5) and (10):
$$\theta(\tau) = \theta_b + (\theta_f - \theta_b) \left[ 1, \; \tau, \; \tau^2, \; \tau^3, \; \tau^4, \; \tau^5 \right] M^{-1} \left[ 0, \;\; \frac{\dot\theta_b T_p}{\theta_f - \theta_b}, \;\; \frac{\ddot\theta_b T_p^2}{\theta_f - \theta_b}, \;\; 1, \;\; \frac{\dot\theta_f T_p}{\theta_f - \theta_b}, \;\; \frac{\ddot\theta_f T_p^2}{\theta_f - \theta_b} \right]^T$$
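The construction in Equations (4)–(11) reduces to a 6 × 6 linear solve for the polynomial coefficients. The following NumPy sketch illustrates this step for a single joint; the boundary values at the bottom are only a hypothetical rest-to-rest example, not data from our experiments:

```python
import numpy as np

def quintic_joint_trajectory(theta_b, theta_f, dtheta_b, dtheta_f,
                             ddtheta_b, ddtheta_f, T_f, T_p):
    """Sample a fifth-order joint trajectory theta(tau), tau = 0..num (Equations (4)-(11)).

    Assumes theta_f != theta_b.
    """
    num = int(round(T_f / T_p))          # number of sampling points
    n = float(num)
    # Boundary conditions on s(tau) from Equation (8).
    rhs = np.array([
        0.0,
        dtheta_b * T_p / (theta_f - theta_b),
        ddtheta_b * T_p**2 / (theta_f - theta_b),
        1.0,
        dtheta_f * T_p / (theta_f - theta_b),
        ddtheta_f * T_p**2 / (theta_f - theta_b),
    ])
    # Matrix M from Equation (9), with T_f / T_p = num.
    M = np.array([
        [1, 0, 0,    0,       0,        0],
        [0, 1, 0,    0,       0,        0],
        [0, 0, 2,    0,       0,        0],
        [1, n, n**2, n**3,    n**4,     n**5],
        [0, 1, 2*n,  3*n**2,  4*n**3,   5*n**4],
        [0, 0, 2,    6*n,     12*n**2,  20*n**3],
    ], dtype=float)
    a = np.linalg.solve(M, rhs)          # coefficient vector, Equation (10)
    tau = np.arange(num + 1)
    s = sum(a[i] * tau**i for i in range(6))
    return theta_b + (theta_f - theta_b) * s   # sampled trajectory, Equation (11)

# Hypothetical rest-to-rest motion of one joint over 2.1 s with 20 ms sampling.
traj = quintic_joint_trajectory(0.0, 1.142958, 0.0, 0.0, 0.0, 0.0, T_f=2.1, T_p=0.02)
```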
The relationship between the trajectories represented in the Cartesian space and the joint space can be expressed as follows:
$$X(t) = f(\theta(t))$$
where $X(t)$ and $\theta(t)$ are the trajectories in Cartesian space and joint space, respectively. The velocity is a constraint factor that also needs to be considered, and the mapping relationship can be obtained by differentiating Equation (12):
$$\dot X(t) = \frac{\partial f(\theta(t))}{\partial t} = J(\theta(t))\, \dot\theta(t)$$

4.2. Coordinate Transformation

The joint positions relative to the base of the manipulator, as well as the positional relationship between the end-effector and the target object, are required to measure the target position. In the proposed method, the Cartesian coordinates of the joints and the coordinates in the angular space are used to calculate the joint positions in each iteration. The Cartesian coordinates of the joints are obtained from the following forward kinematics equation. The 6-DOF robot is represented and modeled by the D-H method [14,15]; the transformation matrix from link $j+1$ to link $j$ is ${}^{j}_{j+1}A$, and the forward kinematics equation of the 6-DOF robot can be expressed as
$$T = {}^{0}_{K}A = {}^{0}_{1}A\; {}^{1}_{2}A \cdots {}^{j}_{j+1}A$$
During the operation of the robot, the relative position of joint $j$ can be obtained from ${}^{0}_{j}A$. In the proposed method, the vectors $\mathbf{b}_k$, $\mathbf{c}_k$, and $\mathbf{pv}$ shown in Figure 5 need to be calculated using the transformation matrices and the input of the binocular stereo vision sensor. Once the D-H parameters of the robot are determined, the Cartesian coordinate of the end-effector in the base coordinate system can be calculated using the forward kinematics equation $T$ defined in Equation (14).
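A compact NumPy sketch of the forward kinematics chain in Equation (14) is shown below. It assumes the classic D-H convention of [14,15]; the helper names and the use of Table 1's parameters as a worked example are our own choices:

```python
import numpy as np

def dh_transform(alpha, a, d, theta):
    """Homogeneous transform of one link from its classic D-H parameters."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, joint_angles):
    """Chain the link transforms, as in Equation (14)."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(dh_params, joint_angles):
        T = T @ dh_transform(alpha, a, d, theta)
    return T   # end-effector pose in the base frame

# UR5 parameters from Table 1 (alpha in rad, a and d in mm).
UR5_DH = [(np.pi/2, 0, 89.2), (0, -425, 0), (0, -392, 0),
          (np.pi/2, 0, 109.3), (-np.pi/2, 0, 94.75), (0, 0, 82.5)]
pose = forward_kinematics(UR5_DH, joint_angles=[0.0] * 6)
print(pose[:3, 3])   # Cartesian position of the end-effector
```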

4.3. Joint State

Generally, traditional trajectory planning methods merely concentrate on calculating the end-effector position from the joint angles; they estimate the actual position from the joint angle readings through the direct kinematics model. In most instances, however, these positions are not actually reached because of mechanical errors. A novel trajectory planning method is proposed in this paper to analyze the current joint states. A schematic of the proposed method is shown in Figure 5. Suppose $k$ is the joint index; $\mathbf{b}_k$ ($k = 1, 2, \dots, n-1$) is the link vector; $\mathbf{pv}$ is the approach vector from the end-effector to the target; and $\mathbf{c}_k$ is the vector from the $k$th joint to the end-effector. $\mathbf{c}_k$ can be expressed as
$$\mathbf{c}_k = \mathbf{b}_k + \mathbf{c}_{k+1}$$
where, for the penultimate joint, $\mathbf{c}_{n-1} = \mathbf{b}_{n-1}$.
The D-H parameters and the joint angles are required to obtain the vectors shown in Figure 5. The joint angles can be obtained from the joint angle sensors, and the Cartesian coordinates of the end-effector and the target can be measured using the visual sensors. In Section 4.1, $T_p$ is defined as the sampling period, $num$ as the number of samples, and $t_b$ and $t_f$ as the start and final times, respectively. The trajectory planning problem can then be described as follows:
$$\theta_b^{(k)} = \theta^{(k)}(0); \quad \omega_b^{(k)} = \dot\theta^{(k)}(0); \quad \alpha_b^{(k)} = \ddot\theta^{(k)}(0)$$
$$\theta_f^{(k)} = \theta^{(k)}(num \cdot T_p); \quad \omega_f^{(k)} = \dot\theta^{(k)}(num \cdot T_p); \quad \alpha_f^{(k)} = \ddot\theta^{(k)}(num \cdot T_p)$$
$$\left| \omega^{(k)}(j T_p) \right| \le \omega_{\max}^{(k)} \quad \text{and} \quad \left| \alpha^{(k)}(j T_p) \right| \le \alpha_{\max}^{(k)}, \quad j \in [0, num]$$
$$\frac{\left| \alpha^{(k)}(m T_p) - \alpha^{(k)}(n T_p) \right|}{\left| m - n \right| T_p} \le j_{\max}, \quad 0 \le m, n \le num$$
If the second-order derivative of $\theta^{(k)}$ is continuous and Equations (16)–(19) are satisfied, then $\theta^{(k)}$ can be used as a trajectory solution that minimizes acceleration, jerk, or run time. In the proposed method, a robot with $K$ joints has $2K$ motion patterns. The solved trajectory does not need to satisfy all of the previously mentioned optimization goals. In practice, the number of joints is usually small; for example, $K$ is six in our simulation. Moreover, $\mathbf{c}_k$ is a major factor in controlling the rotation of the joint. Given a state of the manipulator, some $\mathbf{c}_k$ are important for reducing the length of $\mathbf{pv}$, whereas other $\mathbf{c}_k$ mainly affect the orientation of $\mathbf{pv}$. How $\mathbf{c}_k$ affects $\mathbf{pv}$ is determined by the angle between $\mathbf{c}_k$ and $\mathbf{pv}$. If, for example, $\mathbf{c}_k$ is almost perpendicular to $\mathbf{pv}$, its main function is to change the length of $\mathbf{pv}$, so it can be used to reduce the distance from the end-effector to the target. On the contrary, if $\mathbf{c}_k$ is almost parallel to $\mathbf{pv}$, it is used to change the orientation of $\mathbf{pv}$. As shown in Figure 5, $\mathbf{c}_{k-1}$ is the main factor for reducing the length of $\mathbf{pv}$, whereas $\mathbf{c}_k$ is more effective for changing the orientation of $\mathbf{pv}$. The angle between $\mathbf{c}_k$ and $\mathbf{pv}$ can be obtained by
$$\cos\vartheta^{(k)} = \frac{\mathbf{pv} \cdot \mathbf{c}_k}{\left\| \mathbf{pv} \right\| \cdot \left\| \mathbf{c}_k \right\|}$$
Next, the angle increment $\Delta\theta^{(k)}$ of each joint needs to be determined. The effect of the joint rotation on $\mathbf{pv}$ can be expressed by $\|\mathbf{c}_k\| \sin\vartheta^{(k)}$ and $\|\mathbf{c}_k\| \cos\vartheta^{(k)}$, where $\|\mathbf{c}_k\| \sin\vartheta^{(k)}$ represents the increment parallel to $\mathbf{pv}$, and $\|\mathbf{c}_k\| \cos\vartheta^{(k)}$ represents the increment perpendicular to $\mathbf{pv}$. To simplify the computation, $\mathbf{tr}^{(k)}$ is defined in Equation (21) as follows:
$$\mathbf{tr}^{(k)} = \left[ \left\| \mathbf{c}_k \right\| \cdot \Delta\theta^{(k)} \cdot \sin\vartheta^{(k)}, \;\; \left\| \mathbf{c}_k \right\| \cdot \Delta\theta^{(k)} \cdot \cos\vartheta^{(k)} \right]^T$$
In Equation (21), $\mathbf{pv}$, $\mathbf{c}_k$, and $\vartheta^{(k)}$ can all be calculated. $\mathbf{tr}^{(k)}$ is used to construct the optimization function shown in Equation (22). Determining the next position of the joint is then transformed into finding an appropriate $\Delta\theta^{(k)}$ that satisfies Equation (22).
$$\Delta\theta^{(k)} = \arg\max \left( \mathbf{tr}^{(k)} \cdot \mathbf{pv} \right)$$
Considering the restrictive conditions on speed, acceleration, and jerk, some additional variables must be defined. Because the motion trajectory of the end-effector is affected by the rotation of the joints, two factors are considered when correcting the trajectory: (a) the current joint angles; and (b) the approach vector $\mathbf{pv}$. To address these factors, two pivotal coefficients $\xi^{(k)}$ and $\delta^{(k)}$ are introduced, as shown in Equations (23) and (24), respectively:
$$\xi^{(k)} = 1 - \left( \varphi^{(k)} \right)^2$$
where $\varphi^{(k)} = \left( \theta^{(k)}_{\max} - \theta^{(k)} \right) / \left( \theta^{(k)}_{\max} - \theta^{(k)}_{\min} \right)$, and $\xi^{(k)}$ is used to control the increment of the joint angle at the next iteration (e.g., as the joint angle increases, the angular velocity should decrease as the angle approaches its limit).
$$\delta^{(k)} = \left\| \mathbf{pv}_i \right\| \left( 1 - \exp\left( -\eta \left( \frac{\left\| \mathbf{pv} \right\|}{\left\| \mathbf{pv}_i \right\|} \right)^{\gamma} \right) \right)$$
$\delta^{(k)}$ denotes the influence of the approach vector $\mathbf{pv}$ on the next point of the trajectory: a smaller step should be taken when the end-effector will be closer to the target at the next moment; $\eta$ and $\gamma$ are coefficients to be calibrated. With this design, the end-effector can achieve a stable and smooth trajectory. When the end-effector is moving, the proposed method allows a wide range of speeds, and can even come to a full stop if necessary. After the influence of the current joint state is taken into account, Equation (22) becomes Equation (25):
$$\Delta\theta^{(k)} = \arg\max_{\Delta\theta^{(k)}} \left\| \left[ \left\| \mathbf{c}_k \right\| \cdot \Delta\theta^{(k)} \cdot \xi^{(k)} \cdot \delta^{(k)} \cdot \sin\vartheta^{(k)}, \;\; \left\| \mathbf{c}_k \right\| \cdot \Delta\theta^{(k)} \cdot \xi^{(k)} \cdot \delta^{(k)} \cdot \cos\vartheta^{(k)} \right] \cdot \mathbf{pv} \right\|_2$$
Equation (25) is a convex function. The angle increment $\Delta\theta^{(k)}$ of each joint is obtained by solving Equation (25) and is then expressed in the joint coordinate system. After Equation (25) is solved, the polynomial function given in Equation (4) is used to fit these points. Considering the impacts of the joint angles, the link vectors, the approach vector, and the parameters defined in Equations (23) and (24), the pseudo-code of the proposed trajectory planning algorithm (Algorithm 1) is presented as follows:
  Algorithm 1: Trajectory Planning
At each iteration, $\theta^{(k)}(jT_p)$ is updated by a controller that makes the angle follow the desired angle increment rate. In this way, an angle increment adapted to the current conditions can be obtained.
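To make one iteration of the joint update concrete, the following simplified Python sketch (our own illustration, not the authors' exact Algorithm 1) evaluates the angle of Equation (20) and the coefficients of Equations (23) and (24), and scales the velocity-limited step accordingly; the normalized form of $\delta^{(k)}$, the sign heuristic, and the values of $\eta$ and $\gamma$ are assumptions:

```python
import numpy as np

def joint_step(c_k, pv, theta, theta_min, theta_max, pv_init_norm,
               omega_max, T_p, eta=2.0, gamma=1.0):
    """One velocity-limited angle increment for joint k (sketch of Eqs. (20), (23), (24))."""
    # Angle between the joint-to-end-effector vector c_k and the approach vector pv, Eq. (20).
    cos_v = np.dot(pv, c_k) / (np.linalg.norm(pv) * np.linalg.norm(c_k))

    # Joint-limit coefficient, Eq. (23): the step shrinks as theta approaches its limit.
    phi = (theta_max - theta) / (theta_max - theta_min)
    xi = 1.0 - phi ** 2

    # Approach coefficient in the spirit of Eq. (24) (normalized, assumed form):
    # the step shrinks as the end-effector gets close to the target.
    delta = 1.0 - np.exp(-eta * (np.linalg.norm(pv) / pv_init_norm) ** gamma)

    # Largest step allowed by the angular-velocity limit in one sampling period,
    # scaled by both coefficients and signed by a simple heuristic on cos_v
    # (a stand-in for maximizing Eq. (25) over the bounded increment).
    sign = 1.0 if cos_v >= 0 else -1.0
    return sign * omega_max * T_p * xi * delta
```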

5. Experiments and Analysis

5.1. Experiment Environment

The proposed trajectory planning method is tested on the UR5 robot with 6-DOF. Figure 6 shows the kinematic model, and Table 1 gives the D-H parameters of UR5.
According to the D-H parameters of the UR5 robot, the transformation ${}^{k-1}_{k}A$ from link $k$ to link $k-1$ can be derived. Therefore, the vectors described in Figure 5 can be obtained by
$$\mathbf{b}_k = \mathbf{b}_{k-1} \cdot {}^{k-1}_{k}A, \qquad \mathbf{c}_k = \mathbf{b}_k + \mathbf{c}_{k+1}$$
Supposing $k = 6$ and $\mathbf{c}_k = \mathbf{b}_k$, the Cartesian coordinates of the object $p_{target}$ and of the end-effector $p_{end\text{-}effector}$ in the base coordinate system can be obtained using the calibration of the left and right cameras. The approach vector is then calculated as $\mathbf{pv} = p_{target} - p_{end\text{-}effector}$. After all unknown parameters are obtained, the real-time trajectory can be calculated by Algorithm 1. The calibration of the cameras is divided into two steps: the first is the calibration of each single camera, and the second is the stereo calibration of the left and right cameras. The cameras' internal and external parameters can be obtained using the principles of stereo imaging and camera calibration described in Section 3. After the stereo calibration is complete, the relative position between the left and right cameras must not change; otherwise, the cameras need to be calibrated again. The Cartesian coordinate of the target object in the world coordinate system is then calculated by analyzing the visual disparity between the left and right image planes.
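For reference, these two calibration steps can be sketched with OpenCV as follows; the chessboard pattern, square size, and helper function are our own assumptions, since the paper does not specify its calibration target:

```python
import cv2
import numpy as np

def calibrate_stereo(image_pairs, pattern=(9, 6), square=25.0):
    """Two-step calibration sketch: per-camera calibration, then stereo calibration.

    image_pairs: list of (left, right) grayscale chessboard images (assumed input).
    Returns both cameras' intrinsics and the rotation R / translation t between them.
    """
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_pts, left_pts, right_pts = [], [], []
    for left_img, right_img in image_pairs:
        ok_l, corners_l = cv2.findChessboardCorners(left_img, pattern)
        ok_r, corners_r = cv2.findChessboardCorners(right_img, pattern)
        if ok_l and ok_r:
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)

    size = image_pairs[0][0].shape[::-1]
    # Step 1: calibrate each camera individually (intrinsics and distortion).
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    # Step 2: stereo calibration recovers the relative pose between the two cameras.
    _, K_l, d_l, K_r, d_r, R, t, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K_l, d_l, K_r, d_r, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, d_l, K_r, d_r, R, t
```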

5.2. Experimental Results

To verify the effectiveness of our approach, we compare it with the time-optimal algorithm [21]. The UR5 robot—as shown in Figure 7—is controlled to reach the same target position from the same starting state with each of the two methods. Table 2 shows the initial and terminal joint angles. In the simulation, the rotation of the sixth joint has little impact on the position of the end-effector, so its motion is ignored.
The trajectories are recorded to compare the two methods. Figure 8 shows the initial and final states of the UR5 at different viewing angles; the initial and final states are indicated in yellow and gray, respectively. Figure 9 shows the variation of each joint angle during the motion. Figure 10 shows the angular velocity of each joint; all angular velocities are less than 3.14 rad/s. Figure 10a–e show the trajectories of the first, second, third, fourth, and fifth joints, respectively. As shown in Figures 9 and 10, both the angular velocities and the angle variations are smooth curves with the proposed trajectory planning method.
Figure 11 shows the acceleration curve of each joint. The dotted lines indicate the time-optimal method, and the solid lines indicate the acceleration curves of the proposed method, which is controlled and adjusted by the visual system. The solid lines are fitted with the fifth-order interpolation smoothing process. In this experiment, the sampling time is 20 ms, and the operation times of the proposed method and the time-optimal method are 2.1 s and 1.9 s, respectively. For better comparison, the operation time of the time-optimal method is stretched from 1.9 s to 2.1 s in Figure 11. Figure 12 shows the trajectory calculated by Equation (11).
Although the time-optimal method is faster than the proposed method by 0.2 s, it always leaves some offset between the final position of the end-effector and the target object, caused by errors in the mechanical movement and the camera calibration. The proposed method, in contrast, adjusts the robotic motion according to the relative position between the end-effector and the target object, avoiding motion and calibration errors and thereby greatly reducing the position errors. Table 3 shows five records of the comparison test, demonstrating that the proposed method achieves much better motion precision than the time-optimal method, as reflected by the absolute errors.

6. Conclusions

This paper presents an effective robotic sensor planning method for CPSS which differs from traditional polynomial interpolation and inverse trajectory planning methods. The method fully considers the positions and conditions of the robotic joints. The influences of the joint angles, link vectors, and approach vector are analyzed to improve planning performance. An optimization function is adopted to generate several intermediate points, which are regressed to a quintic polynomial. Ultimately, a smooth trajectory can be generated for the robotic sensor. Experimental results demonstrate that the proposed method is feasible and effective.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Projects No. 61171141 and 61573145), the Public Research and Capacity Building of Guangdong Province (Project No. 2014B010104001), and the Basic and Applied Basic Research of Guangdong Province (Project No. 2015A030308018). The authors are grateful for these grants.

Author Contributions

Hong Tang and Liangzhi Li conceived and designed the experiments; Hong Tang and Nanfeng Xiao performed the experiments; Hong Tang analyzed the data; Nanfeng Xiao contributed materials and analysis tools; Hong Tang wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Dong, M.; Ota, K.; Liu, A. RMER: Reliable and Energy-Efficient Data Collection for Large-Scale Wireless Sensor Networks. IEEE Internet Things J. 2016, 3, 511–519. [Google Scholar] [CrossRef]
  2. Liu, Y.; Dong, M.; Ota, K.; Liu, A. ActiveTrust: Secure and Trustable Routing in Wireless Sensor Networks. IEEE Trans. Inf. For. Secur. 2016, 11, 2013–2027. [Google Scholar] [CrossRef]
  3. Dong, M.; Ota, K.; Yang, L.T.; Liu, A.; Guo, M. LSCD: A Low-Storage Clone Detection Protocol for Cyber-Physical Systems. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2016, 35, 712–723. [Google Scholar] [CrossRef]
  4. Liu, X.; Dong, M.; Ota, K.; Hung, P.; Liu, A. Service Pricing Decision in Cyber-Physical Systems: Insights from Game Theory. IEEE Trans. Serv. Comput. 2016, 9, 186–198. [Google Scholar] [CrossRef]
  5. Korayem, M.; Nikoobin, A. Maximum payload for flexible joint manipulators in point-to-point task using optimal control approach. Int. J. Adv. Manuf. Technol. 2008, 38, 1045–1060. [Google Scholar] [CrossRef]
  6. Menasri, R.; Oulhadj, H.; Daachi, B.; Nakib, A.; Siarry, P. A genetic algorithm designed for robot trajectory planning. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 228–233.
  7. Zefran, M.; Kumar, V.; Croke, C.B. On the generation of smooth three-dimensional rigid body motions. IEEE Trans. Robot. Autom. 1998, 14, 576–589. [Google Scholar] [CrossRef]
  8. Kröger, T.; Wahl, F.M. Online trajectory generation: Basic concepts for instantaneous reactions to unforeseen events. IEEE Trans. Robot. 2010, 26, 94–111. [Google Scholar] [CrossRef]
  9. Gasparetto, A.; Zanotto, V. A new method for smooth trajectory planning of robot manipulators. Mech. Mach. Theory 2007, 42, 455–471. [Google Scholar] [CrossRef]
  10. Piazzi, A.; Visioli, A. Global minimum-jerk trajectory planning of robot manipulators. IEEE Trans. Ind. Electron. 2000, 47, 140–149. [Google Scholar] [CrossRef]
  11. Kanade, T.; Okutomi, M. A stereo matching algorithm with an adaptive window: Theory and experiment. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 920–932. [Google Scholar] [CrossRef]
  12. Li, Z. Visual Servoing in Robotic Manufacturing Systems for Accurate Positioning. Ph.D. Thesis, Concordia University, Montreal, QC, Canada, 2007. [Google Scholar]
  13. Murray, R.M.; Li, Z.; Sastry, S.S.; Sastry, S.S. A Mathematical Introduction to Robotic Manipulation; CRC Press: Boca Raton, FL, USA, 1994. [Google Scholar]
  14. Nan-feng, X. Intelligent Robot; South China University of Technology Press: Guangzhou, China, 2008; pp. 104–105. [Google Scholar]
  15. Craig, J.J. Introduction to Robotics: Mechanics and Control; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2005; Volume 3. [Google Scholar]
  16. Marani, G.; Kim, J.; Yuh, J.; Chung, W.K. A real-time approach for singularity avoidance in resolved motion rate control of robotic manipulators. In Proceedings of the ICRA’02 IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 11–15 May 2002; Volume 2, pp. 1973–1978.
  17. Zorjan, M.; Hugel, V. Generalized humanoid leg inverse kinematics to deal with singularities. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 4791–4796.
  18. Li, L.; Xiao, N. Volumetric view planning for 3D reconstruction with multiple manipulators. Ind. Robot Int. J. 2015, 42, 533–543. [Google Scholar] [CrossRef]
  19. Dutta, T. Evaluation of the Kinect™ sensor for 3-D kinematic measurement in the workplace. Appl. Ergon. 2012, 43, 645–649. [Google Scholar] [CrossRef] [PubMed]
  20. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 1, pp. 666–673.
  21. Liu, H.; Lai, X.; Wu, W. Time-optimal and jerk-continuous trajectory planning for robot manipulators with kinematic constraints. Robot. Comput. Integr. Manuf. 2013, 29, 309–317. [Google Scholar] [CrossRef]
Figure 1. Mobile robotic sensing for Cyber Physical Social Sensing (CPSS).
Figure 2. Schematic of the proposed trajectory planning method.
Figure 3. Visual sensor and binocular positioning.
Figure 4. Binocular stereo sensor.
Figure 5. Schematic of the proposed method.
Figure 6. Robotic kinematic model of UR5.
Figure 7. UR5 robot used in the experiments.
Figure 8. Robot state. (a) The initial and terminal states; (b) The state from another perspective.
Figure 9. The angle variation of each joint.
Figure 10. The angular velocity. (a) 1st joint; (b) 2nd joint; (c) 3rd joint; (d) 4th joint; (e) 5th joint.
Figure 11. The acceleration curve. (a) 1st joint; (b) 2nd joint; (c) 3rd joint; (d) 4th joint; (e) 5th joint.
Figure 12. Corresponding trajectories. (a) 2nd joint; (b) 3rd joint; (c) 4th joint; (d) 5th joint.
Table 1. The parameters of the UR5 robot.

Joint                      1       2       3       4       5       6
Torsion angle α_k (rad)    π/2     0       0       π/2     −π/2    0
Rod length a_k (mm)        0       −425    −392    0       0       0
Bias length d_k (mm)       89.2    0       0       109.3   94.75   82.5
Joint angle θ^(k)          θ^(1)   θ^(2)   θ^(3)   θ^(4)   θ^(5)   θ^(6)
Joint constraint (rad)     ±π/2    ±π/2    ±π/2    ±π/2    ±π/2    ±π/2
Table 2. The initial and terminal joint angles of the UR5 robot in the experiment.

θ    Initial State    Terminal State
1    0                1.142958
2    0                −2.630475
3    0                −2.346571
4    0                −1.654041
5    0                2.346625
6    0                0
Table 3. The end-point errors of the UR5 robot (unit: mm).

No.   Coordinates of Target        Final Position Coordinate (Time-Optimal; Ours)             Absolute Errors (Time-Optimal; Ours)
1     (328.04, 115.59, 341.21)     (336.34, 124.31, 351.14); (329.73, 117.62, 342.67)         15.63; 3.02
2     (349.76, 273.16, 345.68)     (354.13, 274.89, 349.98); (361.22, 275.72, 346.12)         6.68; 2.97
3     (401.58, 178.29, 323.17)     (407.19, 183.62, 324.55); (401.77, 180.37, 323.98)         7.87; 2.24
4     (345.43, 242.75, 350.48)     (352.04, 247.70, 356.78); (345.54, 244.85, 352.44)         10.39; 2.87
5     (327.12, −41.74, 301.31)     (332.79, −34.30, 303.20); (328.80, −41.57, 301.83)         9.54; 1.77
