Article

Adaptive Visual Servoing Control for Hoisting Positioning Under Disturbance Condition

1 School of Mechanical Engineering, Shenyang Jianzhu University, Shenyang 110168, China
2 The State Key Laboratory of Rolling and Automation, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2562; https://doi.org/10.3390/app10072562
Submission received: 28 February 2020 / Revised: 31 March 2020 / Accepted: 1 April 2020 / Published: 8 April 2020
(This article belongs to the Section Applied Physics General)

Abstract:
This paper proposes a visual servo scheme for hoisting positioning under disturbance conditions. In actual hoisting work, disturbances such as equipment and load vibration are inevitable, which complicates the development of a visual servo for hoisting positioning. The main problems are as follows: (1) the correlation between visual error and disturbance is not considered or well resolved; (2) the disturbance has a great influence on control stability but is difficult to model. At present, there is no detailed research on these problems. In this paper, the visual error is defined by the image error of the feedback signal, based on dynamic equations containing disturbances. An adaptive sliding mode control algorithm is employed to decrease the influence of external disturbance, and the coefficient of the sliding surface is established based on the adaptive gain. Since the disturbance terms are difficult to model, a nonlinear disturbance observer is introduced to obtain the equivalent disturbance. On this basis, an adaptive control algorithm with disturbance compensation is proposed to improve the robustness of the visual servo system. We use Lyapunov's method to analyze the stability conditions of the system. Simulation results show that, compared with other state-of-the-art methods, our method has superior performance in convergence, accuracy, and disturbance rejection. Finally, the proposed algorithm is applied to a hoisting platform for experimental research, which demonstrates the effectiveness of the controller.

1. Introduction

Automatic hoisting positioning technology is the main component of unmanned crane technology. Research into automatic hoisting positioning is conducive to the development and upgrading of intelligent cranes, which are key to the future of crane technology. The key to improving the automation level of the hoisting operation is to realize accurate hoisting positioning (Figure 1). Port hoisting, prefabricated building construction, and the installation of large energy and power equipment all need accurate hoisting positioning. The traditional positioning method is to install an absolute value encoder [1] on the driving shaft of the crane. The absolute value encoder converts the rotation of the shaft into a moving distance and transmits it to the control system, which judges whether the equipment has arrived at the designated position. For conventional cranes, this design is feasible and widely used. However, when it is used in high-precision cranes, deviations occur: the driving wheel slips, so the wheel shaft rotates while the crane does not move, which destabilizes the positioning system. In order to solve the problem of low positioning accuracy, the rack and pinion positioning system [2] was introduced into hoisting positioning technology. It improves the positioning accuracy and has been widely used in subsequent nuclear power projects. However, the rack and pinion positioning system (Figure 2) still faces many challenges: the rack needs to be laid out along the full travel length, and the required machining accuracy of the rack itself is very high, which creates great difficulty for engineers. For a crane with long travel distances and precise positioning requirements, researchers have tried to use a cable encoder, which consists of a cable box and an absolute value encoder.
As the cable is pulled out and retracted, the encoder measures the number of turns of the drum in the cable box and converts it into a measurement signal that is output to the system. In actual projects, it was found that the cable is easily damaged during installation and commissioning. Moreover, the magnetic ruler system [3] has been developed and applied to hoisting positioning, but it has not been popularized because of the limitations of its working environment. Radiofrequency technology [4] has also been used in this field, but its performance is greatly degraded by signal shielding and interference.
As a non-contact sensor, the camera can be used as the eye of the hoisting equipment to increase its ability of environmental perception. With this method, we can control the crane in a closed loop. In recent years, visual servoing has been widely used in trajectory tracking and positioning technology [5,6]. However, in hoisting and positioning operations, the external environment is unstable and easily affected by wind load, equipment vibration and other factors, which bring disturbances to the visual servo system, resulting in low positioning accuracy, slow response speed and other problems. Therefore, in this paper, research into visual servo control under disturbance conditions is carried out.

2. Literature Review

As described in [5], visual servo control uses data from a vision system to control the movement of a robot or mobile device. In general, visual servo systems are divided into two categories according to the mounting position of the camera: in the "eye-in-hand" configuration, the camera is mounted directly on the robot or robotic arm; in the "eye-to-hand" configuration, the camera is fixed in the workspace.
Kinematic and dynamic modeling under disturbance conditions is key to establishing the relationship between visual error and disturbance. For a visual servo system, a variety of modeling methods have been applied and proposed in which the interaction with the external environment is considered. Dong [7] used vision measurement technology and the Extended Kalman Filter (EKF) algorithm to estimate the pose and motion parameters of the target. Based on these parameters, an incremental inverse kinematics model was established to obtain the desired position of the end effector. Unlike the above control method, which only considers the system kinematics, Krupínski [8] studied the kinematics and dynamics of the whole system, including nonlinearity, coupling effects, interaction with the external environment and other factors, and applied the scene feedback information of the homography matrix between different images to the system dynamics to improve the control stability of the system. Dynamic modeling under different conditions is the key to solving the stability problem, and the state feedback controller is widely used for stability control [9]. Hu [10] proposed a fault-tolerant control scheme based on a disturbance observer, which can effectively suppress external disturbance and reduce the impact of actuator failure on the control system. Based on the disturbance observer, the uncertainty caused by external disturbance or actuator failure is estimated and compensated. Aiming to solve the problem of control error convergence in a closed-loop control system, an attitude stabilization control scheme based on integral sliding mode was proposed. Fan [11] proposed a Model Predictive Control (MPC) strategy, including an auxiliary state feedback controller and the robot system.
The kinematic state error of the nominal system was transformed into a chained system to solve the MPC optimization problem of the nominal system and generate the optimal state trajectory of the robot system. Ke [12] used MPC to stabilize the robot system under physical constraints; the kinematic equation of the nonholonomic chained system was transformed into skew-symmetric form, and an exponential decay term was introduced to solve the uncontrollability problem of the system. This shows that it is feasible to transform the kinematic state error into a chained-form optimization problem.
It is necessary to improve the robustness of the visual servo system in the presence of uncertainty [13,14,15,16]. Under uncertainty in the system dynamics and vision framework, Zergeroglu [17] studied the control of planar mobile devices; in order to compensate for the uncertainty of the system, a robust controller was designed to ensure the final uniform boundedness of the position tracking. Ma [18] studied the singularity and local extremum problems in visual servo control, and proposed a robust design strategy to suppress image noise and external disturbance, so as to ensure the internal stability of the closed-loop system. However, when the external interference changes, the effect of this robust design strategy is not ideal. In that research, the constraint optimization problem is transformed into an H∞ control framework, which improves the anti-interference capability of the system. This means that the system not only contains uncertainty, but also requires a strongly conservative performance index to be achieved.
In order to obtain better dynamic stability in a camera-robot system, Li [19] proposed a new positive definite Lyapunov function based on the asymptotic stability of the visual servo system. Considering the more complex situation of visual systems, including the uncertainty of system dynamics and camera parameters, the asymptotic convergence of image tracking errors was proved. Liu [20] focused on ship motion control and put forward a scheme based on sliding mode control. To study wave disturbance, he introduced a nonlinear disturbance observer and studied its suppression characteristics. The above studies address stability from two different perspectives, visual function and disturbance suppression, but their environmental limitations are prominent. Obviously, the nonlinear problems of the visual servoing system are inevitable [21,22,23]. Elastic objects deform under complete constraint, which brings more nonlinear problems to visual servo control. David [21] proposed an uncalibrated Lyapunov-based algorithm to estimate the visual Jacobian matrix in the deformed state; he applied this method to a clamping operation based on visual servo control, and combined the pose information of the gripper with the visual information to realize the recognition and control of its pose. Xu [22] extracted sensitive features from image information to meet the requirements of position and direction control. The direction and position of the target were controlled using the idea that the target size is sensitive to the image depth. The feature translation caused by the rotation process was used as compensation, and the image depth was estimated from the interaction matrix and the change in the image features. However, this approach depends heavily on camera-sensitive features; when the delay is too large, it is likely to cause data distortion.
Aiming at solving the control problem of multi-camera visual servo systems, Kornuta [23] proposed a design method based on the embedded concept and defined each subsystem in the multi-level system structure according to its conversion function. However, the accuracy of the multi-camera system was limited, and the processing load of the system was increased. In recent years, machine learning methods such as neural networks have been used to solve various nonlinear problems [24,25]. Gao [26] proposed an image-based visual servo (IBVS) dynamic positioning strategy. In the speed-tracking loop of the control system, an adaptive controller based on a neural network was designed. On the premise of ensuring the convergence of the speed tracking error, the influence of the cost function, the dynamic model and the speed reference model on system performance was studied in comparison with other schemes. Considering the dynamic and nonlinear problems of the system, Wang [27] proposed a visual servo scheme based on a neural network, in which an adaptive neural network was used to fit the unknown dynamic model. The advantage of this scheme is that it solves the problem of the nonlinear effect of the output. The neural-network-based controller improves control stability, but places higher requirements on neural network construction. In the application of photometry in visual servo systems, the image changes because of the appearance and disappearance of some scenes [28]. Omar [29] focused on visual servo technology based on photometric moments, and proposed a relatively direct and simple method that does not need the feature extraction, visual tracking and image matching of traditional visual servo technology. The challenge of this method is that when the appearance of the image changes or partially disappears, its stability is difficult to guarantee.
In visual servo control that uses the whole photometric image as a dense feature, the redundancy of visual information makes the convergence domain very small. Nathan [30] proposed an analytic visual servo based on Gaussian mixtures to expand the convergence region; even when the initial position is far away, it can still achieve stable speed control and converge to the desired position. In the visual-servo-based motion control of a wheeled robot, the center of the visual system usually coincides with the center of the robot body, but some settings that deviate from the center of the robot are conducive to its motion, which introduces deviation into the visual system and can cause the visual error to fail to converge. In order to solve this problem, Qiu [31] designed a visual servo motion tracking method for a camera that deviates from the center of the robot body, to handle the influence of the translation of the uncalibrated camera on the parameters of the visual system. In the visual servo control of most mobile robots, the trajectory and the desired position image must be given. Li [32] designed a positioning control scheme based on monocular vision, defined the image reference frame by using the visual target and plane motion constraints, proposed an attitude estimation algorithm for the robot relative to the image of the desired pose, and constructed the update rate of the unknown feature parameters. However, when the visual target and motion constraint parameters change, the reference frame changes and the control precision is affected.
This paper studies hoisting positioning technology based on visual servo control under disturbance conditions and focuses on solving the following two problems: (1) the correlation between visual error and disturbance is not considered or well resolved; (2) the disturbance has a great influence on control stability but is difficult to model. In this paper, the visual error model is defined by the image error of a feedback signal based on dynamic equations containing disturbance. To address the difficulty of modeling the disturbance term, a nonlinear disturbance observer is employed to obtain the equivalent disturbance, and an adaptive control algorithm with disturbance compensation is proposed.
The organization of this paper is as follows. Section 3 presents the problem description, including dynamics modeling and IBVS modeling. In Section 4, we describe the visual servo control based on adaptive sliding mode control (SMC), then propose the control law with disturbance compensation and give the stability analysis based on Lyapunov's theory. Simulations are conducted in Section 5, which shows the superiority of the proposed method over the other methods. Section 6 presents the experimental results of two projects with different initial positions. Finally, Section 7 contains a summary and outlook.

3. Materials and Methods

3.1. Dynamics Modeling

The schematic model is shown in Figure 3. The hoisting platform has four degrees of freedom, and the power device is composed of a Gantry driver and a Trolley driver. The driving force in each direction has no coupling effect. The state equation of hoisting equipment can be expressed as follows:
$$\dot{\eta} = T(\eta_b)\,v$$
where $\eta = [x, y, z, \varphi, 0, 0]^T \in \mathbb{R}^6$ represents the state vector of the end-effector, which contains the displacement vector $\eta_a = [x, y, z]^T$ and the angle vector $\eta_b = [\varphi, 0, 0]^T$. $v = [v_x, v_y, v_z, \omega_\varphi, 0, 0]^T \in \mathbb{R}^6$ represents the velocity vector. $T(\cdot)$ is the transformation matrix relating the body-fixed velocities to the global pose rates [25].
In an ideal state, the dynamic equation of the hoisting mechanism is expressed as follows:
$$M(\eta)\dot{v} + C(\eta, v)v + G(\eta) = Q$$
$M(\cdot) > 0$ is the positive definite inertia matrix of the system, $C(\cdot)$ is the matrix of Coriolis and centripetal terms, $G(\cdot)$ represents the gravitational vector, and $Q$ represents the control vector.
However, in the actual hoisting positioning project, considering the nonlinear, uncertain and external interference factors in the model, the dynamic equation of hoisting equipment is described in the following form:
$$M(\eta)\dot{v} + C(\eta, v)v + G(\eta) + D(v)v = Q + \tau$$
$D(\cdot)$ represents the damping matrix [21], and $\tau$ is the external disturbance.
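To make the structure of this dynamic equation concrete, the following is a minimal numerical sketch of one Euler integration step of $M(\eta)\dot{v} + C(\eta,v)v + G(\eta) + D(v)v = Q + \tau$, solved for $\dot{v}$. All matrix values here are illustrative placeholders, not identified parameters of the hoisting platform:

```python
import numpy as np

# Placeholder diagonal matrices for a 4-DOF platform (assumed values only).
M = np.diag([120.0, 120.0, 80.0, 15.0])   # inertia matrix M(eta)
C = np.diag([2.0, 2.0, 1.5, 0.5])         # Coriolis/centripetal matrix C(eta, v)
D = np.diag([8.0, 8.0, 6.0, 1.0])         # damping matrix D(v)
G = np.array([0.0, 0.0, 75.0, 0.0])       # gravitational vector G(eta)

def step(v, Q, tau, dt=0.01):
    """One Euler step: solve M v_dot = Q + tau - C v - G - D v, then integrate."""
    v_dot = np.linalg.solve(M, Q + tau - C @ v - G - D @ v)
    return v + dt * v_dot

v = np.zeros(4)                            # start from rest
tau = np.array([5.5, 5.5, 5.5, 0.1])       # disturbance used later in the simulations
Q = G.copy()                               # gravity-compensating control input
v = step(v, Q, tau)
```

Propagating the state this way is one straightforward way a simulation can play the disturbed dynamics forward under a constant disturbance.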

3.2. IBVS Model

In this paper, the camera is fixed on the component in the "eye-in-hand" configuration. C denotes the frame of the camera at its current pose, C* represents the frame of the camera at the desired pose, and G represents the world coordinate system. We set the camera coordinate frame to coincide with the hoisting component coordinate frame, so that the hoisting component is considered to share the camera's pose and speed (see Figure 4). Visual servo control uses a visual feedback signal as input and calculates the end-effector velocity according to the Jacobian matrix. Therefore, it is necessary to associate the velocity of the end-effector in camera coordinates with the robot reference system and establish the transformation relationship. A flow chart of the coordinate transformation is shown in Figure 5.
If $O_c$ and $O_c^{*}$ represent the feature points in the frames C and C*, we define them as $O_c = (X_c, Y_c, Z_c)^T \in \mathbb{R}^3$ and $O_c^{*} = (X_c^{*}, Y_c^{*}, Z_c^{*})^T \in \mathbb{R}^3$. The derivative of $O_c$ can be denoted as follows:
$$\dot{O}_c = v_a + v_b \times O_c$$
The camera coordinate system is transformed to the image physical coordinate system through the focal length diagonal matrix:
$$x_c = \frac{f}{Z_c}X_c, \qquad y_c = \frac{f}{Z_c}Y_c$$
In matrix form, it can be written as
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$
$f > 0$ is the camera focal length, $[X_c, Y_c, Z_c]^T$ represents the camera coordinates, and $[x, y, 1]^T$ represents the normalized physical coordinates of the image.
The image physical coordinate system is transformed to the pixel coordinate system through the pixel transformation matrix:
$$u = \frac{x}{d_x} + u_0 = \alpha_1 x + u_0, \qquad v = \frac{y}{d_y} + v_0 = \alpha_2 y + v_0$$
In matrix form, it can be written as
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{d_x} & \gamma & u_0 \\ 0 & \frac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
$\alpha_1, \alpha_2$ are the numbers of pixels per unit distance in the two image directions, and $u_0, v_0$ are the pixel coordinates of the intersection of the camera optical axis with the imaging plane.
According to the perspective projection principle, the feature points of the current and desired image coordinates can be expressed as follows:
$$s_i = [u_i, v_i]^T = \frac{F}{Z_i}[x_i, y_i]^T, \qquad s_i^{*} = [u_i^{*}, v_i^{*}]^T = \frac{F}{Z_i^{*}}[x_i^{*}, y_i^{*}]^T$$
where $u_i, v_i$ represent the coordinates of the feature points in the pixel coordinate system and $F = \begin{bmatrix} \alpha_1 f & 0 \\ 0 & \alpha_2 f \end{bmatrix}$.
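The projection chain above (camera frame, normalized image plane, pixel plane) can be sketched in a few lines. The focal length matches the 8 mm used later in the simulations; the pixel size and principal point are assumed example intrinsics, not calibrated values:

```python
import numpy as np

f = 0.008              # focal length, 8 mm (as in the simulation section)
dx = dy = 8e-6         # pixel size in meters (assumed), so alpha = 1/dx px/m
u0, v0 = 512.0, 512.0  # assumed principal point of a 1024 x 1024 sensor

def project(P_cam):
    """Map a 3-D point [Xc, Yc, Zc] in the camera frame to pixel coordinates (u, v)."""
    Xc, Yc, Zc = P_cam
    x = f * Xc / Zc          # normalized image coordinates: x = (f / Zc) * Xc
    y = f * Yc / Zc
    u = x / dx + u0          # pixel transform: u = x / dx + u0 = alpha1 * x + u0
    v = y / dy + v0
    return u, v

u, v = project([0.1, -0.05, 2.0])
```

For a point 2 m in front of the camera, a 0.1 m lateral offset maps to a 50-pixel offset from the principal point with these assumed intrinsics.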
According to (3), we can directly define the camera velocity as $v = [v_x, v_y, v_z, \omega_\varphi]^T \in \mathbb{R}^4$, which consists of the translational and rotational velocities under the disturbance condition.
In order to solve the problem of motion decoupling control, a visual servo decoupling control method [33] based on the line feature and the inner region feature is adopted. The line direction feature is not affected by the camera's translational movement and is only related to the camera's rotational movement. The inner region feature is sensitive to the camera's translational movement along the Z axis, but is not affected by rotation around the Z axis. Therefore, we use the inner region feature and the line feature as the visual features of the camera's translational and rotational movement, respectively. The real-time target feature can be written as $s = [x_0, y_0, a, \theta, 0, 0]^T$, where $x_0, y_0$ correspond to the centroid coordinates of the circular feature, $a$ is the inner-area circular feature, and $\theta$ corresponds to the direction angle of any edge of the target area. The desired feature can be written as $s^{*} = [x_0^{*}, y_0^{*}, a^{*}, \theta^{*}, 0, 0]^T$.
Attitude control based on the line feature can be denoted as
$$\omega_\varphi = \lambda_c J_c^{+}(\theta^{*} - \theta)$$
$J_c^{+}$ is the Moore–Penrose pseudo-inverse of the Jacobian matrix $J_c$, and $\lambda_c$ is the pose controller gain. The displacement velocity vector of the camera is defined as follows:
$$v = v_{ca} + v_\omega$$
where $v_{ca}$ and $v_\omega$ are the camera translational velocities with the compensation of the inner region feature vector and the center of mass, as in [33].
Considering the real-time target feature $s$ and the desired target feature $s^{*}$, the feature error $e$ can be calculated as
$$e = s^{*} - s.$$

4. Controller Design

4.1. Adaptive Control Law

In this paper, the visual error model is defined by the image error of the feedback signal based on dynamic equations containing disturbance. Considering the local asymptotic stability of the visual servo system, the visual error signal is denoted as
$$\xi = \hat{J}_c^{+} e$$
$\hat{J}_c^{+}$ is the Moore–Penrose pseudo-inverse of the estimated Jacobian matrix $\hat{J}_c$.
Here, we use an estimation method [34] to obtain the interaction matrix $\hat{J}_c$, which is estimated to satisfy the following equation:
$$\dot{s} = \hat{J}_c v$$
Moreover, $\hat{J}_c v$ can be written in the following linear form:
$$\dot{s} = \hat{J}_c v = \Upsilon(\dot{\eta})\,\hat{\vartheta}$$
where $\Upsilon(\dot{\eta})$ is a matrix that does not depend on the intrinsic and extrinsic parameters of the camera, and $\hat{\vartheta}$ is a vector in which the components of $\hat{J}_c$ are listed [34].
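As a simplified illustration of interaction-matrix estimation (not the exact estimator of [34], which uses the linear parameterization $\Upsilon(\dot{\eta})\hat{\vartheta}$), one can stack sampled pairs of camera velocity and feature velocity and recover $\hat{J}_c$ by least squares; the ground-truth matrix below is synthetic:

```python
import numpy as np

# Assumed 2x4 ground-truth interaction matrix, used only to generate data.
J_true = np.array([[-1.0,  0.0, 0.3,  0.1],
                   [ 0.0, -1.0, 0.2, -0.4]])

rng = np.random.default_rng(0)
V = rng.normal(size=(50, 4))     # sampled camera velocities v
S_dot = V @ J_true.T             # corresponding feature velocities s_dot = J v

# Solve V @ J_hat^T = S_dot for J_hat^T in the least-squares sense.
J_hat = np.linalg.lstsq(V, S_dot, rcond=None)[0].T
```

With noise-free synthetic data the estimate recovers the matrix exactly; in practice the samples would come from measured feature motion and the fit would be regularized or updated recursively.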
In this paper, we adopt image-based visual servoing (IBVS) [35], in which the control loop is closed directly in the image space of the vision sensor. Compared with position-based and hybrid visual servoing, IBVS schemes avoid the 3-D reconstruction step otherwise needed to compute the visual features.
The derivative of the visual error can be calculated from (13) as
$$\dot{\xi} = -\hat{J}_c^{+} J_c v + \dot{\hat{J}}_c^{+} e$$
The estimation error of the Jacobian matrix is defined as
$$\tilde{J}_c = J_c - \hat{J}_c$$
If we substitute (17) into (16), we have
$$\dot{\xi} = -\hat{J}_c^{+} J_c v + \dot{\hat{J}}_c^{+} e = -v - o_\xi$$
where $o_\xi = \hat{J}_c^{+}\tilde{J}_c v - \dot{\hat{J}}_c^{+} e$.
The control purpose of the visual servo system is to use the visual feedback $\xi$ to drive the target to the desired pose, i.e., $\eta \to \eta^{*}$. At the same time, we should ensure the asymptotic stability of the system in the case of unknown and bounded uncertainties. The proportional control law is a widely used method and can be seen in most of the visual servo literature. However, it is difficult for a system using only a proportional control law to obtain an ideal dynamic response. In this paper, sliding mode control (SMC) is used to compensate for the influence of external disturbance. The velocity control signal is calculated from the visual system based on the inverse of the estimated Jacobian matrix and the change in the visual frame.
We design the sliding surface $S$ as follows:
$$S = e - \lambda \int e \, dt$$
where $e = s^{*} - s$ is the feature error state and $s^{*}$ represents the reference state vector. Since $s^{*}$ is constant, $\dot{s}^{*} = 0$. $\lambda$ is the positive definite gain matrix.
The derivative of the sliding surface (19) is:
$$\dot{S} = \dot{e} - \lambda e$$
We propose a control law combining proportional control with SMC, and it is designed as follows
$$U_p = \hat{J}_c^{-1} k_p \lambda e$$
The proportional term makes the output $U_p$ proportional to the input $e$, where $k_p$ is the proportional gain matrix. Increasing $k_p$ reduces the deviation, improves the response speed, and shortens the adjustment time. However, due to the existence of external disturbance, the system error cannot converge to zero under proportional control alone. Similarly,
$$U_s = \hat{J}_c^{-1} k_s \lambda\, \mathrm{sign}(S),$$
where $k_s$ is the diagonal positive proportional gain matrix. Because the signum function is discontinuous, it can cause a chattering effect; to eliminate this buffeting, we use the saturation function $\mathrm{sat}(S_i)$ instead of the signum function. According to (21) and (22), the overall control law is:
$$U = U_p + U_s = \hat{J}_c^{-1}\left[k_p \lambda e + k_s \lambda\, \mathrm{sat}(S)\right]$$
The sliding surface can be restricted to vary within a small range:
$$|S_i| \le \sigma_i$$
where $\sigma_i > 0$ is the boundary-layer thickness of the sliding surface, a small positive value. The difference between the current sliding mode variable and the specified sliding surface is defined as
$$\Delta S_i = S_i - \sigma_i\, \mathrm{sat}(S_i)$$
where $\mathrm{sat}(S_i)$ replaces $\mathrm{sign}(S_i)$ and is given by:
$$\mathrm{sat}(S_i) = \begin{cases} \mathrm{sign}(S_i) & |S_i| > \sigma_i \\ S_i / \sigma_i & |S_i| \le \sigma_i \end{cases}$$
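A direct implementation of this boundary-layer saturation function is a clipped linear map:

```python
import numpy as np

def sat(S, sigma):
    """Boundary-layer saturation: S_i / sigma_i inside |S_i| <= sigma_i,
    sign(S_i) (i.e. +/-1) outside the layer."""
    S = np.asarray(S, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return np.clip(S / sigma, -1.0, 1.0)
```

Inside the layer the output ramps linearly from -1 to 1, which removes the discontinuity of the signum function and hence the chattering it induces.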
In this controller, a suitable value of $\lambda$ will make the sliding surface reach its stable value quickly and stabilize the system against the disturbance. Nevertheless, the bound of the disturbance is difficult to measure in real engineering projects. If the control law is not adjusted accordingly, the control effect cannot reach the desired level. Therefore, we design an adaptive law to adjust the gain value and then modify the control law of the main controller. The adaptive sliding mode control algorithm is proposed as
$$U = \hat{J}_c^{-1}\left[k_p \hat{\lambda} e + k_s \hat{\lambda}\, \mathrm{sat}(S)\right]$$
where $\hat{\lambda}$ is the estimate of $\lambda$, which we obtain from the adaptation law
$$\dot{\hat{\lambda}} = \frac{1}{\gamma} S e$$
where $\gamma > 0$ is the adaptation gain.
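One control cycle of such an adaptive sliding-mode law can be sketched as follows. This is a simplified scalar-gain version (in the paper $\lambda$ is a gain matrix), under the assumed conventions $e = s^{*} - s$, $S = e - \hat{\lambda}\int e\,dt$ and $\dot{\hat{\lambda}} = \frac{1}{\gamma}Se$; the gains $k_p$, $k_s$ match the example values of the simulation section, while $\gamma$, the boundary-layer width and $\hat{J}_c^{-1}$ are assumptions:

```python
import numpy as np

def sat(S, sigma=0.05):
    # boundary-layer saturation (sigma assumed)
    return np.clip(S / sigma, -1.0, 1.0)

def adaptive_smc_step(e, e_int, lam_hat, J_hat_inv, dt=0.01,
                      kp=0.8, ks=0.6, gamma=50.0):
    """One cycle: build the surface, compute the command, adapt the gain."""
    S = e - lam_hat * e_int                    # sliding surface S = e - lam * int(e) dt
    U = J_hat_inv @ (kp * lam_hat * e + ks * lam_hat * sat(S))
    lam_hat = lam_hat + dt * (S @ e) / gamma   # scalar adaptation law (assumed form)
    e_int = e_int + dt * e                     # accumulate int(e) dt
    return U, e_int, lam_hat

# Example call with an assumed feature error and identity Jacobian inverse.
e = np.array([0.1, -0.2, 0.05, 0.0])
U_cmd, e_int, lam_hat = adaptive_smc_step(e, np.zeros(4), 1.0, np.eye(4))
```

The sketch keeps the integral of the error as explicit state, which is what makes the sliding surface and the adaptation law implementable in discrete time.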
Then, we use Lyapunov's method to prove the stability of the system, analyzing stability by defining a scalar function $V(x)$. This method avoids solving the state equation directly and does not require approximate linearization.
If the scalar function $V(x)$ satisfies:
(1) $V(x) = 0$ if and only if $x = 0$;
(2) $V(x) > 0$ if and only if $x \ne 0$;
(3) $\dot{V}(x) = \frac{d}{dt}V(x) = \sum_{i=1}^{n} \frac{\partial V}{\partial x_i} f_i(x) \le 0$ when $x \ne 0$;
then the system is said to be stable in the sense of Lyapunov. If $\dot{V}(x) < 0$ for $x \ne 0$, the system is asymptotically stable.
Based on the above theory, we establish the Lyapunov function as follows:
$$V = \frac{1}{2} S^2 + \frac{1}{2}\gamma \tilde{\lambda}^2$$
where $\tilde{\lambda} = \hat{\lambda} - \lambda$ is the estimation error. Differentiating the Lyapunov function gives:
$$\dot{V} = S\dot{S} + \gamma(\hat{\lambda} - \lambda)\dot{\hat{\lambda}}$$
If we substitute (20) and (28) into (30), we have
$$\dot{V} = S\dot{S} + \gamma(\hat{\lambda} - \lambda)\dot{\hat{\lambda}} = S(\dot{e} - \hat{\lambda} e) + S\hat{\lambda} e - S\lambda e = S\dot{e} - S\lambda e = -S(J_c v) - S\lambda e = -S(J_c v + \lambda e)$$
When the sliding mode is asymptotically stable, we obtain the condition for the system to be stable:
$$\lambda \ge -\frac{J_c v}{e}.$$
We carry out parameter selection and design an adaptive law for the gain parameters. It can be verified that the sliding surface decreases to zero in finite time, which ensures that the control law is sustained.

4.2. Nonlinear Disturbance Observer

In the control law above, the disturbance term was not considered; that is, the disturbance was treated as zero, which is not appropriate in practice. Therefore, in this section we analyze the disturbance term and use a nonlinear disturbance observer to obtain the equivalent disturbance.
We define an update variable
$$\kappa = \hat{\tau} - q(\eta, v)$$
where $\hat{\tau}$ is the observed disturbance and $q(\eta, v)$ is given by
$$\frac{dq(\eta, v)}{dt} = m(\eta, v)\,\dot{v}$$
where $m(\eta, v)$ is the state mapping matrix from $v$ to $q$. The observer error signal is defined as
$$\tilde{\tau} = \tau - \hat{\tau}$$
First, for a linear disturbance observer, the derivation of $\dot{\hat{\tau}}$ is as follows:
$$\begin{aligned} \dot{\hat{\tau}} &= m(\eta, v)(\tau - \hat{\tau}) \\ &= m(\eta, v)\left(k_1\dot{v} + k_2 v + k_3 + k_4 v - Q\right) - m(\eta, v)\hat{\tau} \\ &= m(\eta, v)\left[k_1\dot{v} + (k_2 + k_4)v + k_3 - Q\right] - m(\eta, v)\left[\kappa + q(\eta, v)\right] \\ &= m(\eta, v)\left[k_1\dot{v} + (k_2 + k_4)v + k_3 - Q - q(\eta, v)\right] - m(\eta, v)\kappa \\ &= m(\eta, v)\left[k_1\dot{v} + (k_2 + k_4)v + k_3 - Q - q(\eta, v) - \kappa\right] \\ &= m(\eta, v)\tilde{\tau} \end{aligned}$$
where $k_1 = M(\eta)$, $k_2 = C(\eta, v)$, $k_3 = G(\eta)$, $k_4 = D(v)$. For the disturbance observer, the disturbance is considered to vary slowly, so its derivative can be given as
$$\dot{\tau} = 0$$
From (36), we have
$$\tilde{\tau} = k_1\dot{v} + (k_2 + k_4)v + k_3 - Q - q(\eta, v) - \kappa$$
So, the derivative of the observer error can be calculated as
$$\dot{\tilde{\tau}} = \dot{\tau} - \dot{\hat{\tau}} = -\dot{\hat{\tau}} = -m(\eta, v)\tilde{\tau}$$
We obtain $q(\eta, v)$ from (33) as
$$q(\eta, v) = k_5 v$$
$$\dot{q} = k_5 \dot{v}$$
where $k_5 = m(\eta, v)$.
If we substitute (36), (40) and (41) into (33) and differentiate, the update law can be given as
$$\dot{\kappa} = \dot{\hat{\tau}} - \frac{dq}{dt} = \dot{\hat{\tau}} - k_5\dot{v} = -k_5\kappa + k_5\left[(k_1 - I)\dot{v} + (k_2 + k_4 - k_5)v + k_3 - Q\right] = k_5\left[(k_1 - I)\dot{v} + (k_2 + k_4 - k_5)v + k_3 - Q - \kappa\right]$$
Then, according to (33), (35) and (42), the nonlinear disturbance observer is proposed as follows:
$$\hat{\tau} = \kappa + k_5 v$$
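A discrete sketch of such an observer follows: $\kappa$ is the internal state, the update integrates $\dot{\kappa} = k_5[(k_1 - I)\dot{v} + (k_2 + k_4 - k_5)v + k_3 - Q - \kappa]$, and the disturbance estimate is read out as $\hat{\tau} = \kappa + k_5 v$. The matrices $k_1$ through $k_4$ stand in for $M$, $C$, $G$, $D$ with assumed placeholder values, and the observer gain $k_5$ is an assumed constant diagonal matrix:

```python
import numpy as np

k1 = np.diag([120.0, 120.0, 80.0, 15.0])   # M(eta), assumed
k2 = np.diag([2.0, 2.0, 1.5, 0.5])         # C(eta, v), assumed
k3 = np.array([0.0, 0.0, 75.0, 0.0])       # G(eta), assumed
k4 = np.diag([8.0, 8.0, 6.0, 1.0])         # D(v), assumed
k5 = np.diag([30.0, 30.0, 30.0, 30.0])     # observer gain m(eta, v), assumed constant
I = np.eye(4)

def observer_step(kappa, v, v_dot, Q, dt=0.01):
    """Return the current estimate tau_hat = kappa + k5 v, then integrate kappa."""
    tau_hat = kappa + k5 @ v
    kappa_dot = k5 @ ((k1 - I) @ v_dot + (k2 + k4 - k5) @ v + k3 - Q - kappa)
    kappa = kappa + dt * kappa_dot
    return kappa, tau_hat
```

Driving this observer with velocities generated by the disturbed dynamics makes $\hat{\tau}$ converge to a constant $\tau$ at a rate set by $k_5$, reflecting the error dynamics $\dot{\tilde{\tau}} = -k_5\tilde{\tau}$.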
As mentioned above, the coefficient of the sliding surface is defined based on the adaptive gain. Since the disturbance terms are difficult to model, a nonlinear disturbance observer is introduced to obtain the equivalent disturbance. On this basis, an adaptive control algorithm with disturbance compensation is proposed to improve the robustness of the visual servoing system. The control law is defined as follows:
$$U_0 = \hat{J}_c^{-1}\left[k_5 \hat{\tau} + k_p \hat{\lambda}_0 e + k_s \hat{\lambda}_0\, \mathrm{sat}(S)\right]$$
where $\hat{\lambda}_0$ is the estimate of the adjustable gain, $\hat{\lambda}_0 > \|\hat{J}_c^{-1} \hat{\bar{\tau}}\|$, and $\hat{\bar{\tau}} = k_5 \hat{\tau}$.
Then, the sliding surface can be given as
$$S_0 = e - \int \hat{\lambda}_0 e \, dt$$
If we differentiate Equation (45), we have
$$\dot{S}_0 = \dot{e} - \hat{\lambda}_0 e$$
Then, we choose the Lyapunov function
$$V = \frac{1}{2} S_0^2 + \frac{1}{2}\gamma \hat{\lambda}_0^2 + \frac{1}{2} \tilde{\tau}^2$$
Differentiating the Lyapunov function gives
$$\dot{V} = S_0 \dot{S}_0 + \gamma \hat{\lambda}_0 \dot{\hat{\lambda}}_0 + \tilde{\tau}\, \dot{\tilde{\tau}}$$
If we substitute (46), (28) and (39) into (48), we have
$$\dot{V} = S_0 \dot{S}_0 + \gamma \hat{\lambda}_0 \dot{\hat{\lambda}}_0 + \tilde{\tau}\dot{\tilde{\tau}} = S_0(\dot{e} - \hat{\lambda}_0 e) + S_0 \hat{\lambda}_0 e - k_5 \tilde{\tau}^2 = S_0 \dot{e} - S_0 \hat{\lambda}_0 e + S_0 \hat{\lambda}_0 e - k_5 \tilde{\tau}^2 = S_0 \dot{e} - k_5 \tilde{\tau}^2 = -S_0 (J_c v) - k_5 \tilde{\tau}^2$$
According to the requirement $\dot{V} \le 0$, we know that if
$$k_5 \ge -\frac{S_0 (J_c v)}{\tilde{\tau}^2},$$
then $\dot{V}$ is negative definite and the system is asymptotically stable. The structure of the control loop is shown in Figure 6.

5. Simulations

The proposed visual servo positioning method is applied to the simulation platform. Simulations are conducted to investigate the control performance of the proposed controller in the presence of system uncertainties and disturbance.
Consistent with the actual project, we adopt the eye-in-hand configuration in the simulation platform. To simulate the real camera projection, the controller needs the parameters of the camera; we use a vision toolbox with the following parameters: focal length 8 mm, imaging pixel frame 1024 × 1024. The intention of this simulation is to drive the manipulator to the desired pose. The system parameters are set as $k_p = \mathrm{diag}\{0.8, 0.8, 0.68, 0.4, 0, 0\}$ and $k_s = \mathrm{diag}\{0.6, 0.6, 0.6, 0.35, 0, 0\}$, and the disturbance term $[5.5, 5.5, 5.5, 0.1, 0, 0]^T$ is exerted on the control quantities. The sampling time is set as $t = 10$ ms. In the simulation, delay is not considered; that is, in the ideal state, the time for visual feature extraction and processing is not counted, and the visual error curve is smooth. We take the sampled data as the updated data for subsequent analyses and calculations. In this way, the simulation of hoisting positioning with disturbance is carried out to verify the superiority of the method proposed in this paper.
The initial pose of the manipulator is set as $[0.6, 0.4, 0.5, \pi/12, 0, 0]^T$. The initial feature points and the desired feature points in the image plane are shown in Figure 7a; $s_1, s_2, s_3, s_4$ represent the initial points, and $s_1^*, s_2^*, s_3^*, s_4^*$ represent the desired points. In Figure 7b, the blue dotted lines represent the trajectories of the four points; although there is a large displacement between the initial position and the desired position, the manipulator is still driven to the exact position. During the whole motion process, we take the feature center point as the record point and display its trajectory in 3D space, as shown in Figure 7c. The curvature of the simulation curve in three-dimensional space is small, which means that the proposed method drives the controlled object to the target position along a short path. After convergence, the visual error remains stable at zero, which shows that our method retains satisfactory accuracy in the presence of uncertainty and disturbance. Although the position curves have tiny tilt angles, the final position converges and stabilizes at zero. In our method, an adaptive sliding mode control algorithm is employed to decrease the influence of external disturbance, and the coefficient of the sliding surface is established based on the adaptive gain, which drives the object to the target position smoothly.
Furthermore, we compare the proposed method with other methods, including the proportional control (PC) visual servo, the PSMC visual servo [35], and the Kalman filtering (KF) visual servo [36]. The PSMC visual servo combines proportional control with sliding mode control; image moments of labeled circular markers are selected as visual features to control the three translational degrees of freedom (DOFs) of the manipulator. The KF visual servo presents an image-based servo control approach with a Kalman neural network filtering scheme, which uses a neural network to estimate and compensate the errors of Kalman filtering (KF). A computer with an Intel Core i5 2.67 GHz CPU and 4 GB RAM is used in this comparison. The diagonal positive definite gain matrix $K_p$ in the PC visual servo is set the same as in our controller; the diagonal positive definite gain matrices $\lambda$, $K_p$, $K_s$ in [35] and the minimum sum squared error (MSE) in [36] are set to their default values.
The comparison results of the accumulated visual errors are shown in Figure 8. The quantitative results of the comparison are listed in Table 1. Although the computational time of our method is slightly longer than that of the PC visual servo, the ASMCN visual servo (ours) has better convergence performance. Our method needs fewer servo cycles, owing to the introduction of the nonlinear disturbance observer and the control law with disturbance compensation.
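Metrics of the kind reported in Table 1 can be computed from a sampled error trace. The helper below is an illustrative reconstruction with hypothetical names; in particular, the 1-pixel convergence threshold is an assumed criterion, since the paper does not state its exact definition.

```python
import numpy as np

def servo_metrics(error_norms, dt, threshold=1.0):
    """Accumulated visual error and convergence time from sampled ||e(k)||.

    `threshold` (pixels) defines convergence and is an assumed value;
    the paper does not state its exact criterion.
    """
    err = np.asarray(error_norms, dtype=float)
    accumulated = float(err.sum() * dt)              # discrete integral of ||e||
    below = np.nonzero(err < threshold)[0]
    t_conv = float(below[0] * dt) if below.size else float("inf")
    return accumulated, t_conv

# Synthetic exponentially decaying error trace sampled at 10 ms.
dt = 0.01
t = np.arange(0, 30, dt)
err = 100.0 * np.exp(-t / 4.0)
acc, t_conv = servo_metrics(err, dt)
```

A faster-converging controller yields both a smaller accumulated error and an earlier threshold crossing, which is how the curves in Figure 8 and the times in Table 1 relate.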

6. Experiments

In order to further verify the effectiveness of the proposed control scheme in actual hoisting work, we conduct an experimental test using the four-degrees-of-freedom hoisting platform in the laboratory. In the experiment, we drive a heavy, long load component to the target position; the weight of the load component is up to 300 kg, and its overall dimensions are 0.4 m × 0.4 m × 1.2 m (height × width × length). For the hoisting platform, the vibration of load and equipment is inevitable when hoisting heavy, long load components, which is consistent with the disturbance condition in actual hoisting operations. OMRON encoders (E6B2-CWZ6C) connected to the motors are used to measure the velocities. The images are captured by a Mako camera (AVT, Germany) using a GigE Vision interface and Power-over-Ethernet. Through wired network transmission, real-time communication between the vision system and the control system is realized. The experimental platform is shown in Figure 9. The target is composed of four black dots and a rectangular frame. The Laplacian of Gaussian (LoG) algorithm is used to extract the line features and inner-region features in the image. In actual conditions, changes in light intensity affect the image quality and, in turn, the accuracy of image feature extraction. Therefore, we use adaptive histogram equalization [37] to preprocess the image. The feature extraction results of the convergence process are shown in Figure 10.
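This preprocessing-plus-extraction pipeline can be roughly illustrated as below. The sketch uses a global histogram equalization (a simplified stand-in for the adaptive variant of [37]) followed by a Laplacian of Gaussian filter, implemented with NumPy only; the kernel size and σ are arbitrary choices, and edge wrap-around is ignored for brevity.

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization: a simplified stand-in for the
    # adaptive (tile-based) variant used in the paper. Assumes a
    # non-constant uint8 image.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

def log_filter(img, sigma=1.0):
    # Laplacian of Gaussian: separable Gaussian blur followed by a
    # discrete 4-neighbour Laplacian (np.roll wraps at the borders,
    # acceptable for this sketch).
    size = int(6 * sigma) | 1
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    blurred = img.astype(float)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, blurred)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, blurred)
    return (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
            + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1) - 4 * blurred)

# Low-contrast synthetic image: a slightly brighter square on a grey field.
img = np.full((64, 64), 100, dtype=np.uint8)
img[20:44, 20:44] = 120
eq = hist_equalize(img)     # contrast stretched to the full 0-255 range
lap = log_filter(eq)        # strong response along the square's edges
```

Equalization stretches the weak 20-level contrast to the full range, after which the LoG response concentrates at the square's boundary while staying near zero in uniform regions.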
We carried out two positioning experiments (A and B) with different initial positions. First, the load component is moved to the target position by the manual system to obtain the desired image, and the camera records the feature images at different positions. Then, we return the load component to the initial position, start the automatic control system, and drive the load component to the target position with the proposed scheme. For the IBVS system proposed in this paper, the convergence region is limited due to the nonlinearity and singularities of the mapping from the desired image to the driving frame. Each extracted target point is matched to the adjacent target point in the desired image.
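The point-matching step can be sketched as a nearest-neighbour assignment in the image plane. The function below is an illustrative reconstruction, since the paper does not state its matching rule explicitly; the rule is only valid inside the limited convergence region, where each detected point starts close to its true target.

```python
import numpy as np

def match_features(extracted, desired):
    """Match each extracted point to the nearest desired point (pixels).

    Illustrative nearest-neighbour rule; assumes operation inside the
    convergence region, where each point lies near its true target.
    """
    ext = np.asarray(extracted, dtype=float)
    des = np.asarray(desired, dtype=float)
    dist = np.linalg.norm(ext[:, None, :] - des[None, :, :], axis=2)
    return dist.argmin(axis=1)   # index of the matched desired point

# Four desired feature points and a shuffled, slightly perturbed detection.
desired = np.array([[200, 200], [800, 200], [200, 800], [800, 800]])
extracted = desired[[2, 0, 3, 1]] + 5.0
matches = match_features(extracted, desired)
```

Each extracted point is paired with its adjacent desired point regardless of detection order, which is what the error vector $e = s - s^*$ requires.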
Figure 11a,b depict the visual errors of experiment A, from which we can see that, although the initial visual error is large, the curves converge quickly and finally reach zero. The visual error curves clearly have a zigzag character, which is caused by the disturbance. However, the velocity curve in Figure 12 is relatively smooth, which indicates that the delay of the mechanical system is low and the encoder sensitivity is high. This also shows that the algorithm in this paper can output stable control signals by adjusting the adaptive gain when the visual signal vibrates slightly. As for experiment B, comparing (e) and (f) in Figure 13, the visual error of rotation converges earlier than that of the translational directions. The velocity curves in Figure 14 show almost no oscillation and gradually approach zero, which indicates that our method performs robustly in disturbance suppression and precise positioning.

7. Conclusions

In this paper, we propose a visual servo scheme for hoisting positioning under disturbance conditions. Through simulation and experimental verification, we can draw the following conclusions:
(1) We define the visual error by the image error of the feedback signal based on dynamic equations containing disturbance. The relationship between visual error and disturbance is established, which lays a foundation for improving control stability;
(2) In view of the problem that it is difficult to model disturbance terms, a nonlinear disturbance observer is introduced to obtain the equivalent disturbance. On this basis, an adaptive control algorithm with disturbance compensation is proposed to improve the robustness and convergence of the visual servoing scheme;
(3) The experimental results show that the algorithm in this paper can output stable control signals by adjusting the adaptive gain when the visual signal has a small vibration, and our method shows satisfactory positioning accuracy.
Visual servoing control for hoisting positioning is still a problem worthy of further study, including research on robust uncalibrated visual servoing control and velocity estimation from finite visual features.

Author Contributions

Conceptualization, S.T. and K.Z.; methodology, S.T.; software, J.Z.; validation, H.S., J.S.; resources, H.S.; writing—original draft preparation, S.T.; writing—review and editing, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant numbers 2017YFC0703903 and 2017YFC0704003, and the National Natural Science Foundation of China, grant numbers 51705341 and 51905357.

Acknowledgments

The authors are grateful to the editors and the anonymous reviewers for providing us with insightful comments and suggestions throughout the revision process.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weng, Z.Y. Application of PLC and Rotary Encoder in Position Control of Tower Crane. Mech. Electr. Eng. Technol. 2013, 6, 157–159.
  2. Lin, X.; Lin, X.; Liu, N.; Peng, G. Genetic optimization method of gear rack in luffing mechanism of portal crane. Lift. Transp. Mach. 2004, 2, 14–16.
  3. Zhang, J. Positioning technology of container crane. Port Handl. 2012, 5, 29–32.
  4. Xu, L. The Application of Radio Frequency Technology in Shipbuilding Crane Positioning System. Eng. Constr. Des. 2019, 13, 191–193.
  5. Larouche, B.P.; Zhu, Z.H. Autonomous robotic capture of non-cooperative target using visual servoing and motion predictive control. Auton. Robot. 2014, 37, 157–167.
  6. Ginhoux, R.; Gangloff, J.A.; De Mathelin, M.F.; Soler, L. Beating heart tracking in robotic surgery using 500 Hz visual servoing, model predictive control and an adaptive observer. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004.
  7. Dong, G.; Zhu, Z.H. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris. Adv. Space Res. 2016, 57, 1508–1514.
  8. Krupínski, S.; Allibert, G.; Hua, M.D.; Hamel, T. An Inertial-Aided Homography-Based Visual Servo Control Approach for (Almost) Fully Actuated Autonomous Underwater Vehicles. IEEE Trans. Robot. 2017, 99, 1–20.
  9. Shi, H.T.; Bai, X.T. Model-based uneven loading condition monitoring of full ceramic ball bearings in starved lubrication. Mech. Syst. Signal Process. 2020, 139, 106583.
  10. Hu, Q.; Li, B.; Yang, Y.; Postolache, O.A. Finite-time Disturbance Observer Based Integral Sliding Mode Control for Attitude Stabilization under Actuator Failure. IET Control Theory Appl. 2018, 13, 50–58.
  11. Fan, K.; Li, Z.; Yang, C. Robust Tube-based Predictive Control for Visual Servoing of Constrained Differential-Drive Mobile Robots. IEEE Trans. Ind. Electron. 2018, 99, 1–10.
  12. Ke, F.; Li, Z.; Xiao, H.; Zhang, X. Visual Servoing of Constrained Mobile Robots Based on Model Predictive Control. IEEE Trans. Syst. Man Cybern. Syst. 2016, 99, 1–11.
  13. Guo, D.; Bourne, J.R.; Wang, H.; Yim, W.; Leang, K.K. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras. J. Nonlinear Sci. 2017, 27, 1–22.
  14. Shi, H.T.; Bai, X.T.; Zhang, K.; Wu, Y.H.; Yue, G.D. Influence of uneven loading condition on the sound radiation of starved lubricated full ceramic ball bearings. J. Sound Vib. 2019, 461, 114910.
  15. Hao, M.; Sun, Z. A Universal State-Space Approach to Uncalibrated Model-Free Visual Servoing. IEEE/ASME Trans. Mechatron. 2012, 17, 833–846.
  16. Hajiloo, A.; Keshmiri, M.; Xie, W.F.; Wang, T.T. Robust On-Line Model Predictive Control for a Constrained Image Based Visual Servoing. IEEE Trans. Ind. Electron. 2016, 63, 2242–2250.
  17. Zergeroglu, E.; Dawson, D.M.; Queiroz, M.S.D.; Setlur, P. Robust Visual-Servo Control of Robot Manipulators in the Presence of Uncertainty. J. Robot. Syst. 2003, 20, 93–106.
  18. Ma, Z.; Su, J. Robust uncalibrated visual servoing control based on disturbance observer. ISA Trans. 2015, 59, 193–204.
  19. Li, T.; Zhao, H. Global finite-time adaptive control for uncalibrated robot manipulator based on visual servoing. ISA Trans. 2017, 68, 402–411.
  20. Liu, Z. Ship adaptive course keeping control with nonlinear disturbance observer. IEEE Access 2017, 99, 1–10.
  21. Navarro-Alarcon, D.; Liu, Y. Fourier-Based Shape Servoing: A New Feedback Method to Actively Deform Soft Objects into Desired 2D Image Contours. IEEE Trans. Robot. 2017, 99, 1–8.
  22. Xu, D.; Lu, J.; Wang, P.; Zhang, Z.; Liang, Z. Partially Decoupled Image-Based Visual Servoing Using Different Sensitive Features. IEEE Trans. Syst. Man Cybern. Syst. 2017, 99, 1–11.
  23. Kornuta, T.; Zieliński, C. Robot Control System Design Exemplified by Multi-Camera Visual Servoing. J. Intell. Robot. Syst. 2015, 77, 499–523.
  24. Shi, H.T.; Guo, L.; Tan, S.; Bai, X.T. Rolling bearing initial fault detection using long short-term memory recurrent network. IEEE Access 2019, 7, 171559–171569.
  25. Gao, J.; Proctor, A.; Bradley, C. Adaptive neural network visual servo control for dynamic positioning of underwater vehicles. Neurocomputing 2015, 167, 604–613.
  26. Gao, J.; Proctor, A.; Shi, Y.; Bradley, C. Hierarchical Model Predictive Image-Based Visual Servoing of Underwater Vehicles With Adaptive Neural Network Dynamic Control. IEEE Trans. Cybern. 2015, 46, 2323–2334.
  27. Wang, F.; Liu, Z. Adaptive neural network-based visual servoing control for manipulator with unknown output nonlinearities. Inf. Sci. 2018, 15, 1–10.
  28. Bakthavatchalam, M.; Tahri, O.; Chaumette, F. A Direct Dense Visual Servoing Approach Using Photometric Moments. IEEE Trans. Robot. 2018, 9, 1–14.
  29. Tahri, O.; Tamtsia, A.Y.; Mezouar, Y. Visual Servoing Based on Shifted Moments. IEEE Trans. Robot. 2015, 31, 3.
  30. Crombez, N.; Mouaddib, E.; Caron, G. Visual Servoing with Photometric Gaussian Mixtures as Dense Feature. IEEE Trans. Robot. 2018, 9, 1–15.
  31. Qiu, Y.; Li, B.; Shi, W.; Zhang, X. Visual Servo Tracking of Wheeled Mobile Robots with Unknown Extrinsic Parameters. IEEE Trans. Ind. Electron. 2019, 15, 1–12.
  32. Li, B.; Zhang, X.; Fang, Y.; Shi, W. Visual Servoing of Wheeled Mobile Robots without Desired Images. IEEE Trans. Cybern. 2018, 99, 1–10.
  33. Xu, D.; Zhou, L.; Shen, T. Visual servo decoupling control based on line feature and inner region feature. Inf. Control 2019, 48, 20–35.
  34. Wu, B.; Li, H. Uncalibrated Visual Servoing of Robots with New Image Jacobian Estimation Method. J. Syst. Simul. 2008, 20, 32–40.
  35. Liu, H.; Zhu, W.; Dong, H.; Ke, Y. An adaptive ball-head positioning visual servoing method for aircraft digital assembly. Assem. Autom. 2019, 39, 287–296.
  36. Zhong, X.G.; Zhong, X.Y.; Peng, X.F. Robots visual servo control with features constraint employing Kalman-neural-network filtering scheme. Neurocomputing 2015, 151, 268–277.
  37. Zhang, K.; Tong, S.; Shi, H. Trajectory Prediction of Assembly Alignment of Columnar Precast Concrete Members with Deep Learning. Symmetry 2019, 11, 629.
Figure 1. Hoisting positioning.
Figure 2. The rack and pinion positioning system.
Figure 3. Schematic model.
Figure 4. Simulation platform and coordinate system.
Figure 5. Flow chart of coordinate transformation.
Figure 6. The visual servo structure of adaptive control with Nonlinear Disturbance Observer (NDO) for hoisting positioning.
Figure 7. Visual servoing performances of the proposed method.
Figure 8. Comparison result of accumulated visual errors.
Figure 9. Experimental platform.
Figure 10. The feature extraction results of the convergence process.
Figure 11. Visual errors of experiment A.
Figure 12. Linear velocities and angle velocity of experiment A.
Figure 13. Visual errors of experiment B.
Figure 14. Linear velocities and angle velocity of experiment B.
Table 1. Quantitative comparison result.

Method                     | Convergence Time (s) | Computational Time (ms) | Number of Servo Cycles
ASMCN visual servo (ours)  | 18.05                | 150                     | 120
PC visual servo            | 22.20                | 149                     | 149
KF visual servo            | 20.21                | 157                     | 129
PSMC visual servo          | 22.15                | 155                     | 143
