Article

Design of a Finite-Time Adaptive Controller for Image-Based Uncalibrated Visual Servo Systems with Uncertainties in Robot and Camera Models

1 School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
2 School of Mechanical Engineering, Tianjin Sino-German University of Applied Sciences, Tianjin 300350, China
3 School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(16), 7133; https://doi.org/10.3390/s23167133
Submission received: 13 June 2023 / Revised: 25 July 2023 / Accepted: 27 July 2023 / Published: 11 August 2023

Abstract:
Aiming at the time-varying uncertainties of the robot and camera models in IBUVS (image-based uncalibrated visual servo) systems, a finite-time adaptive controller is proposed based on the depth-independent Jacobian matrix. Firstly, adaptive laws for the depth, kinematic, and dynamic parameters are proposed to handle the uncertainty of the robot and camera models. Secondly, a finite-time adaptive controller is designed using a nonlinear proportional-derivative plus dynamic feedforward compensation structure. By applying a continuous non-smooth nonlinear function to the feedback error, the control quality of the closed-loop system is improved, and the desired image trajectory is tracked in finite time. Finally, using Lyapunov stability theory and finite-time stability theory, the global finite-time stability of the closed-loop system is proven. The experimental results show that the proposed controller adapts not only to changes between the EIH (eye-in-hand) and ETH (eye-to-hand) visual configurations but also to changes in the relative pose of the feature points and in the camera's relative pose parameters. At the same time, the convergence rate near the equilibrium point is improved, and the controller exhibits good dynamic stability.

1. Introduction

Intelligent robots with sensing abilities have been recognized as the mainstream trend in robot development. Among robot sensors, the vision sensor has become one of the most important due to the large amount of information it provides, its wide range of applications, and its non-contact nature [1]. A vision sensor increases a robot's adaptability to the surrounding environment and expands its field of application, an idea that directly gave birth to robot visual servo control technology [2]. Robot visual servo control uses visual sensors to indirectly detect the current pose of the robot, or the relative pose between the robot and the target object, and on this basis realizes positioning control or trajectory tracking. Visual servo control is thus an important control method for robot systems [3].
A robot visual servo system comprises two parts: a robot system and a vision system. Before operation, the system must be calibrated, which includes camera calibration, robot calibration, and calibration of the relative pose between robot and camera (also known as hand-eye calibration). The performance of a traditional robot visual servo system depends heavily on calibration accuracy, which in many cases is limited by the following: (1) Calibration results are valid only under the calibration conditions, and re-calibration is required when the system structure changes even slightly. (2) Under many working conditions, the calibrated parameters may drift slowly. (3) Due to camera distortion and other factors, the calibrated region of the camera is generally limited to a certain area, which restricts the working range of the robot. (4) The calibration process is complicated, requires special equipment and trained personnel, and is costly. For these reasons, the uncalibrated visual servo, as a new form of visual servo, has gradually attracted the attention of many scholars.
The relationship between robot joint motion and image feature motion is difficult to estimate, which poses challenges for uncalibrated visual servo control. Reference [4] proposes a practical scheme for manipulator operation that combines online and offline learning. The hand-eye relationship is represented by a locally linear Jacobian matrix and approximated using a radial basis function network (RBFN). The scheme adapts well to changes in camera position and attitude, but its real-time performance needs improvement. The Kalman filter is an effective method for estimating the image Jacobian matrix, but the servo accuracy will be low if the noise parameters are not set properly. Reference [5] improved the traditional STOA algorithm by adopting an adaptive search-radius strategy to improve its local convergence, used the improved STOA algorithm to optimize the noise parameters of the Kalman filter, applied the optimized noise parameters to the hand-eye structure of the robot, and estimated the image Jacobian matrix online at each moment. The disadvantage is that online optimization cannot be carried out in real time when the noise parameters change.
It is worth noting that the depth parameter in the IBVS system is coupled to the image Jacobian matrix in reciprocal form and is uncertain, which makes it difficult to process and estimate. Therefore, the depth parameter needs to be decoupled from the Jacobian matrix. To solve the depth estimation problem, reference [6] studies an adaptive observer framework for the asymptotic estimation of feature depth with uncalibrated monocular cameras. In reference [7], a transformer-based neural network for eye-wise depth estimation of compound eye images is proposed; the self-attention module is improved into a locally selective self-attention module, which reduces computation and improves estimation accuracy. Reference [8] proposes a visual servo method that does not require prior velocity knowledge: it uses an adaptive time-varying controller to perform trajectory tracking under non-holonomic constraints and unknown depth parameters, estimates the desired velocity in real time with a reduced-order observer, and designs an augmented correction law to compensate for the unknown depth parameters and identify the inverse depth constant. The common disadvantage of the above methods is that it is difficult to obtain a small estimation error within a finite convergence time.
An image-based visual servo (IBVS) has a simpler control structure because it does not require 3D reconstruction, and it is therefore better suited to building uncalibrated visual servo systems. IBVS has thus become the mainstream technology in uncalibrated visual servo control [9], especially uncalibrated visual servo control based on the adaptive Jacobian scheme [10].
To solve the time-varying problem of camera parameters, an adaptive visual servo controller was proposed in [11], with an adaptive law designed to deal with the unknown camera parameters. The above uncalibrated visual servo methods are all based on robot kinematics and visual mapping. However, robot dynamics are also highly nonlinear [12]. Since the nonlinear dynamic characteristics of the robot strongly affect the control error and system stability, dynamic visual servo strategies have been widely discussed.
To deal with dynamic uncertainties in the Jacobian matrix, reference [13] proposed a new vision-based adaptive dynamic controller for tracking objects with a planar manipulator in a fixed-camera configuration, treating the orientation of the camera assembly, the depth of the object, and the main dynamic parameters of the robot as uncertain. The control scheme comprises a vision-based adaptive kinematic controller in charge of the object-tracking task even with unknown vision-system parameters; this controller provides velocity references to a cascaded adaptive dynamic controller that generates the final control actions of the robot with imprecise knowledge of its dynamics. Reference [14] is concerned with the dynamic tracking problem of SNAP orchard harvesting robots in the presence of multiple uncalibrated model parameters in dwarf-culture orchard harvesting. A new hybrid visual servo adaptive tracking controller and three adaptive laws are proposed to guarantee that harvesting robots finish the dynamic harvesting task and adapt to unknown parameters, including the camera intrinsic and extrinsic models and the robot dynamics.
In the IBUVS control system, real-time performance is an important index. When designing the control system, in addition to stability and asymptotic stability, attention should be paid to fast convergence, namely finite-time stability (FTS). A finite-time stable system is asymptotically stable, but its convergence is faster than that of merely asymptotically stable systems. Analysis methods for FTS include terminal sliding mode (TSM) control, homogeneous system theory, and the finite-time Lyapunov method. In terms of TSM, for the trajectory tracking control of an I-AUV with input saturation and output constraints, a higher-order control barrier function-quadratic program (HoCBF-QP)-based control scheme is proposed in [15]; a feedback control term based on the continuous TSM technique is designed to improve tracking performance under uncertainties, disturbances, and dynamic interaction. For other related works, see [16,17,18]. In terms of homogeneous system theory, the PD+ gravity compensation scheme was analyzed in [19] using homogeneous theory: global finite-time stability was achieved by measuring the joint positions and velocities of the manipulator, while only local FTS could be realized if a velocity observer was used and only positions were measured. Reference [20] addresses the finite-time convergence problem of an uncalibrated camera-robot system with uncertainties. To achieve better dynamic stability of the camera-robot system, a novel FTS adaptive controller is presented to cope with the rapid convergence problem; FTS adaptive laws handle the uncertainties that exist in both the robot and the camera model, and the finite-time stability analysis is carried out using homogeneous theory and the Lyapunov formalism. Reference [21] presents a low-cost neural adaptive control scheme that not only achieves finite-time tracking control of robot systems with multiple uncertainties but also circumvents possible singularities; for the kinematic parameter uncertainties involved, the proposed terminal sliding mode observer ensures that the actual position of the end-effector is accurately estimated within a finite time. For the Lyapunov method, the Lyapunov stability criterion for finite-time control systems was established in [22,23]. Reference [24] presents a modified command-filter backstepping tracking control strategy for a class of uncertain nonlinear systems with input saturation, based on convex optimization and an adaptive fuzzy logic system (FLS) control technique; the closed-loop performance is analyzed using the Lyapunov stability theorem and the LaSalle invariance principle. References [25,26,27] explore finite-time prescribed performance control (FPPC) for waverider vehicles (WVs). Firstly, a new type of backstepping controller without any approximation or estimation is devised based on FPPC, such that all tracking errors satisfy the prescribed finite-time performance. Furthermore, a fuzzy-neural-approximation-based pseudo-nonaffine control protocol is proposed for WVs, which guarantees tracking errors with the desired prescribed performance and removes the fragility inherent to traditional prescribed performance control (PPC).
Furthermore, fuzzy neural approximators are combined with an adaptive compensation strategy to resist both system uncertainties and external disturbances. Finally, a fragility-avoidance PPC methodology for WVs with sudden disturbances, based on fuzzy neural approximation, is proposed, which uses a simplified fuzzy neural approximation framework to suppress unknown non-affine dynamics. The above research results provide new ideas for controlling IBUVS systems with uncertainties in the robot and camera models.
The contribution and innovation of this paper are mainly reflected as follows: (1) In the uncalibrated robot visual servo control system, based on the comprehensive consideration of uncertain dynamics, unknown kinematics, and time-varying depth information, a finite-time adaptive control scheme is proposed to solve the global finite-time trajectory tracking problem of the robot manipulator. Compared with references [13,14], the controller considers more unknown parameters of the vision robot, and the convergence speed is also significantly improved. (2) For the problem of parameter uncertainty, three adaptive laws are designed to achieve accurate estimation of the kinematic, dynamic, and depth uncertainty parameters. On this basis, a vision tracking control scheme based on a depth-free Jacobian matrix is proposed. Compared with references [6,7,8], the decoupling of the depth parameter from the Jacobian matrix is realized in this paper. Compared with references [13,14], an adaptive law is specifically designed to accurately estimate the uncertain dynamic parameters of the robot. (3) Compared with references [19,20], to solve the problem that the spatial velocity of the image is difficult to measure accurately, we define a new vector composed of the joint-space velocity and the reference joint velocity and use the adaptive law to estimate the inverse dynamics of the system. (4) In the design of the control law and controller, we propose a scheme that does not require the image-space velocity and extend finite-time stability control to time-varying nonlinear systems with multiple uncertain parameters. Compared with references [21,22], the proposed controller converges quickly. (5) A notable difference from [24] is that the proposed control scheme extends the asymptotic stability results to finite-time stability; the asymptotic stability control scheme can be regarded as a special case of the FTS scheme when the exponent α = 1.
The rest of this paper is organized as follows: Kinematic analysis of an image-based uncalibrated visual servo system is presented in Section 2 and includes “Differential kinematics of a visual servo in an ETH configuration” and “Differential kinematics of a visual servo in an EIH configuration”. Section 3 discusses the control model of the manipulator based on dynamics. Section 4 describes the design and stability analysis of a finite-time tracking controller. Section 5 and Section 6 present the results of the experiment and the final conclusions of the study, respectively.

2. Kinematic Analysis of an Image-Based Uncalibrated Visual Servo System

2.1. Differential Kinematics of the Visual Servo in an ETH Configuration

In the IBVS system, the depth parameter $z$ is coupled to the image Jacobian matrix in reciprocal form, as shown in Equation (1), where $z_i$ is the depth value of the i-th image feature point and $u = (u - u_0)/(f k_u)$, $v = (v - v_0)\sin\theta/(f k_u)$. This coupling makes the depth parameter difficult to process and estimate, so it must be decoupled from the Jacobian matrix.
$$L_{x,i} = \begin{bmatrix} -1/z_i & 0 & u/z_i & uv & -(1+u^2) & v\\ 0 & -1/z_i & v/z_i & 1+v^2 & -uv & -u \end{bmatrix} \tag{1}$$
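As an illustration (a minimal numpy sketch, not code from the paper), the following function builds the classic interaction matrix of Equation (1) for a single feature point; the function name and sample coordinates are assumptions for the example.

```python
import numpy as np

def interaction_matrix(u: float, v: float, z: float) -> np.ndarray:
    """Classic 2x6 image Jacobian L_{x,i} of Equation (1) for one feature
    point at depth z; the camera twist is (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / z, 0.0,      u / z, u * v,      -(1.0 + u**2), v],
        [0.0,      -1.0 / z, v / z, 1.0 + v**2, -u * v,        -u],
    ])

# The 1/z entries are the reciprocal depth coupling discussed above:
# the image velocity of the point is interaction_matrix(u, v, z) @ twist.
L = interaction_matrix(0.1, -0.2, 1.5)
print(L.shape)  # (2, 6)
```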
Based on the analysis of the camera's kinematic relations, the position vector $x_i^c(t)\in\mathbb{R}^{3\times1}$ of the feature point P in the three-dimensional coordinate system of the camera and the image coordinate vector $y_i(t)\in\mathbb{R}^{2\times1}$ satisfy the following relation:
$$y_i(t) = \frac{1}{z_i(t)}\begin{bmatrix}\Omega_1^T\\ \Omega_2^T\end{bmatrix}x_i^c(t) \tag{2}$$
where $\Omega_1^T$ and $\Omega_2^T$ ($\Omega_1, \Omega_2\in\mathbb{R}^{3\times1}$) respectively denote the first and second row vectors of the camera intrinsic parameter matrix $\Omega$, and $z_i(t)\in\mathbb{R}$ is the depth parameter, satisfying the following:
$$z_i(t) = \Omega_3^T\,x_i^c(t) \tag{3}$$
where $\Omega_3^T$ ($\Omega_3\in\mathbb{R}^{3\times1}$) denotes the third row vector of the intrinsic parameter matrix $\Omega$. Differentiating Equation (3) yields the following equation:
$$\dot z_i(t) = \Omega_3^T\,\dot x_i^c(t) \tag{4}$$
Taking the derivative of Equation (2) and substituting Equations (3) and (4) into it gives the differential kinematic relation of the visual mapping:
$$\dot y_i(t) = \frac{1}{z_i(t)}\begin{bmatrix}\Omega_1^T - u_i(t)\,\Omega_3^T\\ \Omega_2^T - v_i(t)\,\Omega_3^T\end{bmatrix}\dot x_i^c(t) \tag{5}$$
where $u_i(t)$ and $v_i(t)\in\mathbb{R}$ are the $U$- and $V$-axis coordinates of the image coordinate vector $y_i$, respectively.
Consider the differential kinematics of the manipulator in the ETH configuration. The homogeneous transformation matrix $T_e^c$ from the end-effector coordinate system to the camera coordinate system satisfies $T_e^c = T_b^c T_e^b$. Since the camera in the ETH configuration is usually fixed in the scene, it is static relative to the reference coordinate system of the manipulator base; in this case, the camera extrinsic matrix $T_b^c$ is constant. Substituting the above homogeneous transformation into the coordinate-frame pose transformation Equation (6) and differentiating yields the ETH-configuration differential kinematic relation (7):
$$\begin{bmatrix}X^2\\1\end{bmatrix} = T_0^2\begin{bmatrix}X^0\\1\end{bmatrix} = T_1^2\,T_0^1\begin{bmatrix}X^0\\1\end{bmatrix} \tag{6}$$
$$\begin{bmatrix}\dot x_i^c(t)\\1\end{bmatrix} = T_b^c\,\dot T_e^b(t)\begin{bmatrix}x_i^e\\1\end{bmatrix} = T_b^c\begin{bmatrix}\dot R_e^b(t)\,x_i^e + \dot P_e^b(t)\\1\end{bmatrix} \tag{7}$$
where $x_i^e$ is the position vector of the feature point in the three-dimensional coordinate system of the manipulator end-effector. By substituting Equation (7) into Equation (5), the complete differential kinematic relation of the visual servo in the ETH configuration can be obtained as follows:
$$\dot y_i(t) = \frac{1}{z_i(t)}\begin{bmatrix} m_1^T - u_i(t)\,m_3^T\\ m_2^T - v_i(t)\,m_3^T \end{bmatrix}\left(\frac{\partial\big(R_e^b(q)\,x_i^e\big)}{\partial q} + \frac{\partial P_e^b(q)}{\partial q}\right)\dot q(t) \tag{8}$$
where $q(t), \dot q(t)\in\mathbb{R}^{n\times1}$ are the joint angle and joint velocity vectors of the manipulator, respectively; $R_e^b\in SO(3)$ and $P_e^b\in\mathbb{R}^3$ are the rotation matrix and translation vector of the manipulator forward kinematics, respectively; and the row vectors $m_1^T, m_2^T, m_3^T\in\mathbb{R}^{1\times3}$ are the first, second, and third rows of the matrix $M_b^c\in\mathbb{R}^{3\times3}$, the perspective projection matrix from the camera image plane to the reference coordinate system of the manipulator base, which is defined as:
$$M_b^c = \Omega\,R_b^c \tag{9}$$
where $R_b^c$ is the rotation part of the camera extrinsic matrix $T_b^c$, the subscript $b$ denotes the manipulator base coordinate system, and the superscript $c$ denotes the camera coordinate system. Thus, the depth-independent Jacobian matrix in the ETH configuration can be derived as follows:
$$D_i = \begin{bmatrix} m_1^T - u_i(t)\,m_3^T\\ m_2^T - v_i(t)\,m_3^T \end{bmatrix}\left(\frac{\partial\big(R_e^b(q)\,x_i^e\big)}{\partial q} + \frac{\partial P_e^b(q)}{\partial q}\right) \tag{10}$$
As can be seen from the above formula, the depth-independent Jacobian matrix does not contain the depth parameters of the feature points, thus achieving decoupling from the depth parameter.
In addition, by differentiating the depth parameter z i ( t ) , the differential kinematic relation of depth can be obtained as follows:
$$\dot z_i(t) = d_i^T\,\dot q(t) \tag{11}$$
where $d_i^T = m_3^T\,\partial\big(R_e^b\,x_i^e + P_e^b\big)/\partial q$.

2.2. Differential Kinematics of the Visual Servo in an EIH Configuration

In the EIH configuration, we again focus on the transformation $T_b^c$ between the camera coordinate system and the base coordinate system, which involves the camera pose matrix $T_e^c$ and the manipulator kinematic transformation $T_b^e(t)$. Since the camera is mounted on the end-effector in the EIH configuration, the pose relationship $T_e^c$ is constant. By differentiating Equation (12), the differential kinematic relation of the EIH configuration is obtained, as shown in Equation (13).
$$\begin{bmatrix}x^2\\1\end{bmatrix} = T_0^2\begin{bmatrix}x^0\\1\end{bmatrix} = T_1^2\,T_0^1\begin{bmatrix}x^0\\1\end{bmatrix} \tag{12}$$
where $T = \begin{bmatrix}R & P\\ 0 & 1\end{bmatrix}$, $R\in SO(3)$, $P\in\mathbb{R}^3$.
$$\begin{bmatrix}\dot x_i^c\\1\end{bmatrix} = T_e^c\,\dot T_b^e(t)\begin{bmatrix}x_i^b\\1\end{bmatrix} = T_e^c\begin{bmatrix}\dot R_b^e(t)\,x_i^b + \dot P_b^e(t)\\1\end{bmatrix} \tag{13}$$
where $x_i^b\in\mathbb{R}^{3\times1}$ is the coordinate vector of the feature point in the base coordinate system.
By substituting Equation (13) into Equation (5), we get the following formula:
$$\dot y_i(t) = \frac{1}{z_i(t)}\begin{bmatrix} m_1^T - u_i(t)\,m_3^T\\ m_2^T - v_i(t)\,m_3^T \end{bmatrix}\left(\frac{\partial\big(R_b^e(q)\,x_i^b\big)}{\partial q} + \frac{\partial P_b^e(q)}{\partial q}\right)\dot q(t) \tag{14}$$
where the vectors $m_1^T$, $m_2^T$, and $m_3^T$ are the first, second, and third row vectors of the matrix $M_e^c\in\mathbb{R}^{3\times3}$, respectively, and the perspective projection matrix $M_e^c$ is defined as:
$$M_e^c = \Omega\,R_e^c \tag{15}$$
where $R_e^c$ is the rotation part of the camera extrinsic matrix $T_e^c$, the subscript $e$ denotes the manipulator end-effector coordinate system, and the superscript $c$ denotes the camera coordinate system. Thus, the depth-independent Jacobian matrix in the EIH configuration can be deduced as follows:
$$D_i = \begin{bmatrix} m_1^T - u_i(t)\,m_3^T\\ m_2^T - v_i(t)\,m_3^T \end{bmatrix}\left(\frac{\partial\big(R_b^e(q)\,x_i^b\big)}{\partial q} + \frac{\partial P_b^e(q)}{\partial q}\right) \tag{16}$$
and $d_i^T = m_3^T\,\partial\big(R_b^e\,x_i^b + P_b^e\big)/\partial q$.
Comparing the differential kinematic relations (10) and (16) of the ETH and EIH configurations, it is not difficult to see that the depth-independent Jacobian matrix $D_i$ and the vector $d_i^T$ have similar mathematical descriptions in the two visual configurations. Therefore, the two configurations can be unified as follows:
$$\begin{bmatrix}y_i\\1\end{bmatrix} = \frac{1}{z_i}\,\Omega\,x_i \tag{17}$$
where $x_i = [x_1, x_2, x_3]^T\in\mathbb{R}^{3\times1}$ is the coordinate vector in the camera coordinate system, $y_i = [u_i, v_i]^T\in\mathbb{R}^{2\times1}$ is the coordinate of the imaging point in the image-plane coordinate system, $\Omega\in\mathbb{R}^{3\times3}$ is the camera intrinsic parameter matrix, and $z_i\in\mathbb{R}$ is the depth parameter.
The complete mapping model of the visual system (17) is rewritten as (18).
$$\begin{bmatrix}y_i\\1\end{bmatrix} = \frac{1}{z_i}\,\bar M\,T(t)\begin{bmatrix}x_i\\1\end{bmatrix} \tag{18}$$
where $T(t)$ is the forward kinematic coordinate transformation matrix of the manipulator and $\bar M\in\mathbb{R}^{3\times4}$ is the equivalent perspective projection matrix.
In the visual servo system, the visual mapping relationships of the different configurations can be uniformly represented by Equation (18), but the physical meanings of the matrices and vectors differ, as shown in Table 1.
The depth parameters of the two configurations can be unified as follows:
$$z_i(t) = \bar m_3^T\,T(t)\begin{bmatrix}x_i\\1\end{bmatrix} \tag{19}$$
where $\bar m_3^T\in\mathbb{R}^{1\times4}$ is the third row vector of the matrix $\bar M$. Therefore, Equations (8) and (14) can be unified as follows:
$$\dot y_i(t) = \frac{1}{z_i(t)}\,D_i\,\dot q(t) \tag{20}$$
where
$$D_i = \begin{bmatrix} m_1^T - u_i(t)\,m_3^T\\ m_2^T - v_i(t)\,m_3^T \end{bmatrix}\left(\frac{\partial\big(R(q)\,x_i\big)}{\partial q} + \frac{\partial P(q)}{\partial q}\right) \tag{21}$$
The matrices $R$ and $P$ are the rotation and translation parts of the manipulator kinematic transformation matrix, respectively, and $m_1^T, m_2^T, m_3^T\in\mathbb{R}^{1\times3}$ are the first, second, and third row vectors of the matrix $M$. The specific expressions of the matrix $M$ in the different visual configurations are given in Table 1.
By taking the derivative of Equation (19) with respect to time, the differential relationship between the depth parameter and joint space can be obtained as follows:
$$\dot z_i(t) = d_i^T\,\dot q(t) \tag{22}$$
where
$$d_i^T = m_3^T\,\frac{\partial\big(R\,x_i + P\big)}{\partial q} \tag{23}$$
Equations (18)–(23) can be regarded as a unified differential kinematic framework. During system analysis, with this unified kinematic model of the visual servo system, there is no need to pay attention to the specific configuration of the visual servo system; one only needs to configure the corresponding parameters according to Table 1, as in the sketch below.
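A minimal numpy sketch of this unified framework (an illustration, not the paper's code), assuming the perspective projection matrix $M$ and the feature-point Cartesian Jacobian $J(q) = \partial(R(q)x_i + P(q))/\partial q$ are supplied per Table 1; all names are illustrative.

```python
import numpy as np

def depth_independent_jacobian(M: np.ndarray, y: np.ndarray,
                               J: np.ndarray) -> np.ndarray:
    """Equation (21): D_i = [m1^T - u*m3^T; m2^T - v*m3^T] @ J (2 x n)."""
    u, v = y
    m1, m2, m3 = M                      # rows of the 3x3 projection matrix
    A = np.vstack([m1 - u * m3, m2 - v * m3])
    return A @ J

def depth_rate_vector(M: np.ndarray, J: np.ndarray) -> np.ndarray:
    """Equation (23): d_i^T = m3^T @ J, so zdot_i = d_i^T @ qdot (Eq. (22))."""
    return M[2] @ J

# Equation (20) then reads:
#   ydot = depth_independent_jacobian(M, y, J) @ qdot / z
```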

3. Control Model of a Manipulator Based on Dynamics

According to Lagrange mechanics, the dynamic equation of the manipulator system can be given by the following formula:
$$H(q)\,\ddot q + \Big(\frac{1}{2}\dot H(q) + C(q,\dot q)\Big)\dot q + g(q) = \tau \tag{24}$$
where $\dot q$ and $\ddot q$ are the joint velocity and acceleration vectors, respectively; $H(q)\in\mathbb{R}^{n\times n}$ is the inertia matrix; $C(q,\dot q)\in\mathbb{R}^{n\times n}$ is the Coriolis matrix; $g(q)\in\mathbb{R}^{n\times1}$ is the gravitational torque; and $\tau$ is the control torque exerted on the robot joints, which is the design variable of the dynamics controller. The dynamic Equation (24) has the following properties.
Property 1 
([28]). $H(q)$ is a symmetric positive definite matrix, and there exist positive constants $\alpha_1, \alpha_2, h_1, h_2$ such that the following holds:
$$\alpha_1 I_n \le H(q) \le \alpha_2 I_n,\qquad h_1 \le \|H(q)\| \le h_2 \tag{25}$$
Property 2 
([28]). $C(q,\dot q)$ is skew-symmetric in the quadratic form; that is, for any vector $\zeta\in\mathbb{R}^{n\times1}$, the following equation holds:
$$\zeta^T C(q,\dot q)\,\zeta = 0 \tag{26}$$
Property 3 
([28]). The Coriolis term satisfies the following bound:
$$\left\|\frac{1}{2}\dot H(q) + C(q,\dot q)\right\| \le k\,\|\dot q\| \tag{27}$$
where $k$ is an appropriate positive constant.
Property 4 
([28]). The gravitational torque $g(q)$ satisfies the following bound:
$$\|g(q)\| \le g_0 \tag{28}$$
where $g_0$ is an appropriate positive constant.
Property 5 
([24]). Equation (24) can be linearly parameterized as follows by selecting a dynamic parameter vector $\theta_d\in\mathbb{R}^{p\times1}$ of appropriate dimension:
$$H(q)\,\ddot\xi + \Big(\frac{1}{2}\dot H(q) + C(q,\dot q)\Big)\dot\xi + g(q) = Y_d(q,\dot q,\dot\xi,\ddot\xi)\,\theta_d \tag{29}$$
where $Y_d(q,\dot q,\dot\xi,\ddot\xi)\in\mathbb{R}^{n\times p}$ is the regression matrix, $n$ is the number of joints, and $p$ is the number of unknown parameters.
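As a hedged, self-contained illustration of this linear parameterization (not the paper's manipulator), consider a single-link pendulum with constant inertia $H = ml^2$, $C = 0$, and gravity torque $g(q) = mgl\cos q$; the regressor below factors the dynamics linearly in the unknown parameters $\theta_d = [ml^2,\ mgl]^T$.

```python
import numpy as np

# For the 1-DOF pendulum, Property 5 reduces to
#   H*xi_dd + g(q) = Yd(q, xi_dd) @ theta_d
def regressor_1dof(q: float, xi_dd: float) -> np.ndarray:
    return np.array([[xi_dd, np.cos(q)]])        # 1x2 regression matrix Yd

m, l, grav = 1.2, 0.5, 9.81
theta_d = np.array([m * l**2, m * grav * l])     # true (unknown) parameters
q, xi_dd = 0.3, 0.7
tau = regressor_1dof(q, xi_dd) @ theta_d          # equals H*xi_dd + g(q)
assert np.isclose(tau[0], m * l**2 * xi_dd + m * grav * l * np.cos(q))
```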

4. Design and Stability Analysis of a Finite-Time Tracking Controller

Define the image tracking error $\Delta y = y - y_d$, $\Delta\dot y = \dot y - \dot y_d$, and set the reference value of the image velocity as follows:
$$\dot y_r = \dot y_d - \lambda\,\Delta y \tag{30}$$
where $\lambda\in\mathbb{R}$ is a constant to be determined. According to Property 5, the Lagrangian dynamic equation of the manipulator with unknown parameters can be linearized as follows:
$$\hat H(q)\,\ddot\xi + \Big(\frac{1}{2}\dot{\hat H}(q) + \hat C(q,\dot q)\Big)\dot\xi + \hat g(q) = Y_d(q,\dot q,\dot\xi,\ddot\xi)\,\hat\theta_d \tag{31}$$
where $\hat\theta_d$ is the estimate of the unknown parameter vector, updated online by the adaptive law to be designed. According to Equation (31), the dynamic estimation error can be linearized as follows:
$$Y_d(q,\dot q,\dot\xi,\ddot\xi)\,\Delta\theta_d = (\hat H - H)\,\ddot\xi + \Big(\frac{1}{2}(\dot{\hat H} - \dot H) + (\hat C - C)\Big)\dot\xi + \hat g(q) - g(q) \tag{32}$$
where $\Delta\theta_d = \hat\theta_d - \theta_d$, and $H, \hat H, C, \hat C$ abbreviate $H(q), \hat H(q), C(q,\dot q), \hat C(q,\dot q)$, respectively.
Based on Equations (21) and (23), the compensated depth Jacobian matrix $Q$ is constructed as follows:
$$Q = D + \frac{1}{1+\alpha_1}\,\Delta y\,d^T \tag{33}$$
where $\alpha_1\in\mathbb{R}$ is a constant to be determined. The estimate of $Q$ is as follows:
$$\hat Q = \hat D + \frac{1}{1+\alpha_1}\,\Delta y\,\hat d^T \tag{34}$$
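Combining Equations (20), (22) and (33) gives the identity
$$Q\,\dot q = D\,\dot q + \frac{1}{1+\alpha_1}\,\Delta y\,d^T\dot q = z\,\dot y + \frac{\dot z}{1+\alpha_1}\,\Delta y,$$
which is what later makes the $\dot z$ term cancel in the derivative of the Lyapunov function (cf. the derivation of Equation (54)).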
For the adaptive Jacobian scheme, the depth-independent Jacobian matrix $D$ and its associated vector $d^T$ have the following important properties (as proven in Appendix A, Appendix B and Appendix C).
Property 6. 
For any vector $\eta\in\mathbb{R}^{n\times1}$, the matrix product $D\eta$ can be expressed in a form linear in the unknown parameter vector $\theta_k$:
$$D\eta = Y_{k,1}(y,q,\eta)\,\theta_k \tag{35}$$
where $Y_{k,1}(y,q,\eta)\in\mathbb{R}^{2\times p_1}$ is a regression matrix independent of the unknown parameter vector $\theta_k\in\mathbb{R}^{p_1\times1}$, and the dimension satisfies $p_1\le36$.
Property 7. 
For any vector $\eta\in\mathbb{R}^{n\times1}$, the product $d^T\eta$ can be expressed in a form linear in the unknown parameter vector $\theta_k$:
$$d^T\eta = Y_{k,2}(q,\eta)\,\theta_k \tag{36}$$
where $Y_{k,2}(q,\eta)\in\mathbb{R}^{1\times p_1}$ is a regression vector independent of the unknown parameter vector $\theta_k$, and the dimension satisfies $p_1\le36$.
Because the depth-independent Jacobian matrix carries no information about the depth parameter, the depth must be compensated for separately when designing the system.
Property 8. 
The depth $z$ has the following linearly parameterized form:
$$z = Y_z(q)\,\theta_z \tag{37}$$
where $Y_z(q)\in\mathbb{R}^{1\times p_2}$ is a regression vector independent of the unknown parameter vector $\theta_z\in\mathbb{R}^{p_2\times1}$, and the dimension satisfies $p_2\le13$.
According to Properties 6–8, the linearly parameterized form of the compensated Jacobian matrix estimate $\hat Q$ is derived as follows:
$$\hat Q\,\dot q = \hat D\,\dot q + \frac{1}{1+\alpha_1}\,\Delta y\,\hat d^T\dot q = Y_k(y,y_d,q,\dot q)\,\hat\theta_k \tag{38}$$
where $Y_k(y,y_d,q,\dot q) = Y_{k,1}(y,q,\dot q) + \dfrac{\Delta y\,Y_{k,2}(q,\dot q)}{1+\alpha_1}$.
Based on the estimate of the compensated Jacobian matrix, the reference joint velocity vector is defined as follows:
$$\dot q_r = \hat z\,\hat Q^+\,\dot y_r \tag{39}$$
where $\hat z$ is the estimate of the depth parameter and $\hat Q^+$ is the pseudo-inverse of $\hat Q$, determined by the following equation:
$$\hat Q^+ = \hat Q^T\big(\hat Q\,\hat Q^T\big)^{-1} \tag{40}$$
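A small numpy sketch of Equations (39) and (40); the damping term is an assumption added here for numerical safety near rank deficiency, not part of the paper's formulation.

```python
import numpy as np

def reference_joint_velocity(z_hat: float, Q_hat: np.ndarray,
                             yr_dot: np.ndarray,
                             damping: float = 1e-6) -> np.ndarray:
    """Equations (39)-(40): qdot_r = z_hat * Q_hat^+ @ ydot_r with the
    right pseudo-inverse Q^+ = Q^T (Q Q^T)^(-1). The damping term guards
    against Q_hat @ Q_hat.T becoming singular during adaptation."""
    QQt = Q_hat @ Q_hat.T + damping * np.eye(Q_hat.shape[0])
    return z_hat * (Q_hat.T @ np.linalg.inv(QQt) @ yr_dot)
```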
The joint sliding-mode variable is constructed from the joint reference velocity $\dot q_r$ defined in Equation (39):
$$s_q = \dot q - \dot q_r \tag{41}$$
where $s_q\in\mathbb{R}^{n\times1}$. Figure 1 shows the design structure of the uncalibrated visual servo tracking control system.
Based on the above analysis, the IBUVS finite-time tracking control law is proposed as follows:
$$\tau = Y_d(q,\dot q,\dot q_r,\ddot q_r)\,\hat\theta_d - \hat Q^T K_y\,\mathrm{sig}(\Delta y)^{\alpha_1} - K_s\,\mathrm{sig}(s_q)^{\alpha_2} \tag{42}$$
where $K_s\in\mathbb{R}^{n\times n}$ and $K_y\in\mathbb{R}^{2\times2}$ are gain matrices to be determined, $\alpha_1, \alpha_2\in\mathbb{R}$ are constants to be determined, and $\mathrm{sig}(\cdot)^{\alpha}$ is the nonlinear function defined by the following formula:
$$\mathrm{sig}(\xi)^{\alpha} = \big[\,|\xi_1|^{\alpha}\,\mathrm{sgn}(\xi_1),\ \dots,\ |\xi_n|^{\alpha}\,\mathrm{sgn}(\xi_n)\,\big]^T \tag{43}$$
where $\xi = [\xi_1,\dots,\xi_n]^T\in\mathbb{R}^n$ and $\mathrm{sgn}(\xi_i)$ is the standard sign function.
$$\mathrm{sgn}(\xi_i) = \begin{cases}-1, & \text{if } \xi_i < 0\\ [-1,1], & \text{if } \xi_i = 0\\ 1, & \text{if } \xi_i > 0\end{cases} \tag{44}$$
Equation (43) has the following property [29]:
$$\xi^T\mathrm{sig}(\xi)^{\alpha} \ge \xi^T\xi,\qquad |\xi_i|\in(0,1),\ i = 1,2,\dots,n \tag{45}$$
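A minimal numpy sketch of Equations (43)–(45); note that np.sign returns 0 at zero, a standard single-valued selection from the set-valued sgn of Equation (44).

```python
import numpy as np

def sig(xi: np.ndarray, alpha: float) -> np.ndarray:
    """Equation (43): componentwise |xi_i|^alpha * sgn(xi_i), the
    continuous but non-smooth feedback function of the controller."""
    return np.abs(xi) ** alpha * np.sign(xi)

# Equation (45): for |xi_i| in (0,1) and 0 < alpha < 1,
# |xi_i|^(1+alpha) >= xi_i^2 holds componentwise.
xi = np.array([0.5, -0.2])
assert (xi * sig(xi, 0.5) >= xi * xi).all()
```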
Similarly, the compensated Jacobian estimation error $\hat Q - Q$ and the depth estimation error $\hat z - z$ are expressed linearly as follows:
$$Y_k(y,y_d,q,\dot q)\,\Delta\theta_k = (\hat Q - Q)\,\dot q \tag{46}$$
$$Y_z(y,y_d,\dot y_d,q)\,\Delta\theta_z = (\hat z - z)\,\dot y_r \tag{47}$$
where $\Delta\theta_k = \hat\theta_k - \theta_k$ and $\Delta\theta_z = \hat\theta_z - \theta_z$.
For the unknown parameter estimates $\hat\theta_d, \hat\theta_k, \hat\theta_z$, the following adaptive laws are proposed:
$$\dot{\hat\theta}_d = -\psi_d^{-1}\,Y_d^T(q,\dot q,\dot q_r,\ddot q_r)\,s_q \tag{48}$$
$$\dot{\hat\theta}_k = \psi_k^{-1}\,Y_k^T(y,y_d,q,\dot q)\,K_y\,\mathrm{sig}(\Delta y)^{\alpha_1} \tag{49}$$
$$\dot{\hat\theta}_z = -\psi_z^{-1}\,Y_z^T(q,\dot q,\dot y_d,\ddot q_r)\,K_y\,\mathrm{sig}(\Delta y)^{\alpha_1} \tag{50}$$
where $\psi_d, \psi_k, \psi_z$ are gain matrices to be determined.
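A hedged sketch of one discrete control-and-adaptation step, combining the control law (42) with explicit Euler integration of the adaptive laws (48)–(50). It reuses sig() from the sketch above; the regressors Yd, Yk, Yz and the assembled estimate Q_hat are assumed to be computed elsewhere from the current state (names are illustrative).

```python
import numpy as np

def control_step(th_d, th_k, th_z, Yd, Yk, Yz, Q_hat, s_q, dy,
                 Ky, Ks, psi_d, psi_k, psi_z, a1, a2, dt):
    # Control torque, Equation (42)
    tau = Yd @ th_d - Q_hat.T @ (Ky @ sig(dy, a1)) - Ks @ sig(s_q, a2)
    # Adaptive laws (48)-(50), integrated with step dt;
    # solve(psi, v) computes psi^{-1} v for the positive definite gains
    w = Ky @ sig(dy, a1)
    th_d = th_d - dt * np.linalg.solve(psi_d, Yd.T @ s_q)
    th_k = th_k + dt * np.linalg.solve(psi_k, Yk.T @ w)
    th_z = th_z - dt * np.linalg.solve(psi_z, Yz.T @ w)
    return tau, th_d, th_k, th_z
```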
Let $x_1 = \Delta y$, $x_2 = s_q$, $x_3 = \Delta\theta_d$, $x_4 = \Delta\theta_k$, $x_5 = \Delta\theta_z$. To avoid confusion with the feature 3D coordinate vector $x$, the total state vector is denoted by $\bar x = [x_1, x_2, x_3, x_4, x_5]^T$.
According to Equations (42) and (48)–(50), the error dynamic equations of the closed-loop system can be summarized as follows:
$$\begin{cases}
\dot x_1 = f_1(\bar x) = -\Big(\lambda z + \dfrac{1}{1+\alpha_1}\dot z\Big)\dfrac{x_1}{z} + \big(\hat Q\,x_2 - Y_k(y,y_d,q,\dot q)\,x_4 + Y_z(y,y_d,\dot y_d,q)\,x_5\big)z^{-1}\\[4pt]
\dot x_2 = f_2(\bar x) = H^{-1}(q)\Big[-\Big(\dfrac{1}{2}\dot H(q) + C(q,\dot q)\Big)x_2 - K_s\,\mathrm{sig}(x_2)^{\alpha_2} - \hat Q^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} + Y_d(q,\dot q,\dot q_r,\ddot q_r)\,x_3\Big]\\[4pt]
\dot x_3 = f_3(\bar x) = -\psi_d^{-1}\,Y_d^T(q,\dot q,\dot q_r,\ddot q_r)\,x_2\\[4pt]
\dot x_4 = f_4(\bar x) = \psi_k^{-1}\,Y_k^T(y,y_d,q,\dot q)\,K_y\,\mathrm{sig}(x_1)^{\alpha_1}\\[4pt]
\dot x_5 = f_5(\bar x) = -\psi_z^{-1}\,Y_z^T(q,\dot q,\dot y_d,\ddot q_r)\,K_y\,\mathrm{sig}(x_1)^{\alpha_1}
\end{cases} \tag{51}$$
Theorem 1. 
For the system given by Equations (19), (20) and (24), under the finite-time tracking control law and the adaptive laws of Equations (42) and (48)–(50), if the constants and gain parameters satisfy the following sufficient conditions: $\lambda > 0$; $K_s\in\mathbb{R}^{n\times n}$ and $K_y\in\mathbb{R}^{2\times2}$ are positive definite symmetric matrices; $\psi_d, \psi_k, \psi_z$ are positive definite symmetric matrices of appropriate dimensions; and $0 < \alpha_1 < 1$, $\alpha_2 = \dfrac{2\alpha_1}{1+\alpha_1}$; then global finite-time stability of the closed-loop system is guaranteed in the sense of Formula (52).
$$\lim_{t\to\infty}\big(\|\Delta y\|,\ \|\Delta\dot y\|\big) = 0 \tag{52}$$

4.1. Proof of Global Asymptotic Stability of Closed-Loop Systems

The following formula can be derived from the sliding mode vector in Equation (41).
$$\hat Q\,x_2 = Q\,\dot q - z\,\dot y_r + (\hat Q - Q)\,\dot q - (\hat z - z)\,\dot y_r \tag{53}$$
By substituting Equations (46), (47) and (53) into Equation (38), the following equation can be obtained:
$$z\,\dot x_1 = \hat Q\,x_2 - \Big(\lambda z + \frac{1}{\alpha_1+1}\,\dot z\Big)x_1 - Y_k(y,y_d,q,\dot q)\,x_4 + Y_z(y,y_d,\dot y_d,q)\,x_5 \tag{54}$$
Combining the adaptive laws (48)–(50) with the controller (42) and the dynamic Equation (24), we obtain:
$$H(q)\,\dot x_2 = -\Big(\frac{1}{2}\dot H(q) + C(q,\dot q)\Big)x_2 + Y_d(q,\dot q,\dot q_r,\ddot q_r)\,x_3 - K_s\,\mathrm{sig}(x_2)^{\alpha_2} - \hat Q^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} \tag{55}$$
Consider the Lyapunov function $V(\bar x) = V_1(\bar x) + V_2(\bar x) + V_3(\bar x)$, where
$$V_1(\bar x) = \frac{1}{\alpha_1+1}\,z\,x_1^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} = \frac{z}{\alpha_1+1}\sum_{i=1}^{N}K_{y,i}\,|x_{1,i}|^{\alpha_1+1},\qquad V_2(\bar x) = \frac{1}{2}\,x_2^T H(q)\,x_2,$$
$$V_3(\bar x) = \frac{1}{2}\big(x_3^T\psi_d\,x_3 + x_4^T\psi_k\,x_4 + x_5^T\psi_z\,x_5\big)$$
Differentiating $V_1(\bar x)$ along the trajectories of the system (51) yields:
$$\dot V_1(\bar x) = \frac{\dot z}{\alpha_1+1}\sum_{i=1}^{N}K_{y,i}\,|x_{1,i}|^{\alpha_1+1} + z\,\mathrm{sig}^T(x_1)^{\alpha_1}K_y\,\dot x_1 = -x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_2^T\hat Q^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_4^T Y_k^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_5^T Y_z^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} \tag{56}$$
Similarly, taking the derivatives of $V_2(\bar x)$ and $V_3(\bar x)$ along the trajectories of the system (51) yields:
$$\dot V_2(\bar x) = \frac{1}{2}\,x_2^T\dot H(q)\,x_2 + \dot x_2^T H(q)\,x_2 = x_2^T C\,x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T x_2 - \mathrm{sig}^T(x_1)^{\alpha_1}K_y^T\hat Q\,x_2 + x_3^T Y_d^T x_2 \tag{57}$$
$$\dot V_3(\bar x) = -x_2^T Y_d\,x_3 + \mathrm{sig}^T(x_1)^{\alpha_1}K_y^T Y_k\,x_4 - \mathrm{sig}^T(x_1)^{\alpha_1}K_y^T Y_z\,x_5 \tag{58}$$
The following formula can be derived from Equations (56)–(58):
$$\begin{aligned}\dot V(\bar x) ={}& -x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_2^T\hat Q^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_4^T Y_k^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_5^T Y_z^T K_y\,\mathrm{sig}(x_1)^{\alpha_1}\\
&+ x_2^T C\,x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T x_2 - \mathrm{sig}^T(x_1)^{\alpha_1}K_y^T\hat Q\,x_2 + x_3^T Y_d^T x_2 - x_2^T Y_d\,x_3\\
&+ \mathrm{sig}^T(x_1)^{\alpha_1}K_y^T Y_k\,x_4 - \mathrm{sig}^T(x_1)^{\alpha_1}K_y^T Y_z\,x_5\\
={}& -x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_2^T C\,x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T x_2\end{aligned} \tag{59}$$
Since $C$ satisfies the skew-symmetry of Property 2, i.e., $x_2^T C\,x_2 = 0$, substituting this into Equation (59) yields the following:
$$\dot V(\bar x) = -\lambda z\sum_{i=1}^{2}K_{y,i}\,|x_{1,i}|^{1+\alpha_1} - \sum_{i=1}^{n}K_{s,i}\,|x_{2,i}|^{1+\alpha_2} \tag{60}$$
According to the sufficient conditions of the theorem and Equation (60), it is not difficult to conclude that $\dot V(\bar x)\le0$; that is, $x_1, x_2, x_3, x_4, x_5$ are bounded. Therefore, the estimates $\hat\theta_d, \hat\theta_k, \hat\theta_z$ are also bounded, from which the boundedness of $\hat z, \hat d^T, \hat D$ follows, and, by Formula (34), the boundedness of $\hat Q$. From the sign function definition in Equation (43), $\mathrm{sig}(s_q)^{\alpha_2}$ and $\mathrm{sig}(\Delta y)^{\alpha_1}$ are bounded. In addition, the boundedness of $\dot y_r$ follows from that of $\dot y_d$ and $\Delta y$. Substituting $\hat Q$ and $\dot y_r$ into Equation (39), the boundedness of $\dot q_r$ is derived, and the boundedness of $s_q$ follows from $\dot q$ and $\dot q_r$. Moreover, according to Formulas (20) and (22), $\dot z$ and $\dot y$ are bounded; finally, $\Delta\dot y$ is bounded since $\dot y$ and $\dot y_d$ are.
To verify the uniform continuity of $\dot V$, its derivative is needed. Since Formula (60) is continuous but non-smooth and its derivative cannot be obtained directly, uniform continuity must be discussed piecewise. Taking the derivative of $\dot V$ in stages, the following equation is derived:
$$\ddot V(\bar x) = \begin{cases}
\ddot V_1(\bar x) = -\dot x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_1^T\lambda\dot z K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_1^T\lambda z K_y\,\alpha_1|x_1|^{\alpha_1-1}\dot x_1\\ \qquad\; + \alpha_2|x_2|^{\alpha_2-1}\dot x_2^T K_s^T x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T\dot x_2, & \text{if } x_1 < 0,\ x_2 < 0\\[4pt]
\ddot V_2(\bar x) = -\dot x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_1^T\lambda\dot z K_y\,\mathrm{sig}(x_1)^{\alpha_1} + x_1^T\lambda z K_y\,\alpha_1|x_1|^{\alpha_1-1}\dot x_1\\ \qquad\; - \alpha_2|x_2|^{\alpha_2-1}\dot x_2^T K_s^T x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T\dot x_2, & \text{if } x_1 < 0,\ x_2 > 0\\[4pt]
\ddot V_3(\bar x) = -\dot x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_1^T\lambda\dot z K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_1^T\lambda z K_y\,\alpha_1|x_1|^{\alpha_1-1}\dot x_1\\ \qquad\; + \alpha_2|x_2|^{\alpha_2-1}\dot x_2^T K_s^T x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T\dot x_2, & \text{if } x_1 > 0,\ x_2 < 0\\[4pt]
\ddot V_4(\bar x) = -\dot x_1^T\lambda z K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_1^T\lambda\dot z K_y\,\mathrm{sig}(x_1)^{\alpha_1} - x_1^T\lambda z K_y\,\alpha_1|x_1|^{\alpha_1-1}\dot x_1\\ \qquad\; - \alpha_2|x_2|^{\alpha_2-1}\dot x_2^T K_s^T x_2 - \mathrm{sig}^T(x_2)^{\alpha_2}K_s^T\dot x_2, & \text{if } x_1 > 0,\ x_2 > 0\\[4pt]
\ddot V_5(\bar x) = 0, & \text{if } x_1 = 0,\ x_2 = 0
\end{cases} \tag{61}$$
From the above analysis and $\ddot y_r = \ddot y_d - \lambda\,\Delta\dot y$, it can be deduced that $\ddot y_r$ is bounded, and $\dot{\hat Q}$ is bounded since $\dot{\hat Q} = \dot{\hat D} + \big(\Delta\dot y\,\hat d^T + \Delta y\,\dot{\hat d}^T\big)/(1+\alpha_1)$. The boundedness of $\ddot q_r$ follows from the derivative of $\dot q_r$: $\ddot q_r = \dot{\hat z}\,\hat Q^+\dot y_r + \hat z\,\dot{\hat Q}^+\dot y_r + \hat z\,\hat Q^+\ddot y_r$. According to Formula (55), $\dot s_q$ is bounded. Moreover, substituting the boundedness of $z, \dot z, \dot x_1, x_1, \mathrm{sig}(x_1)^{\alpha_1}, \dot x_2, x_2, \mathrm{sig}(x_2)^{\alpha_2}$ into Formula (61), we obtain the boundedness of $\ddot V_1(\bar x), \ddot V_2(\bar x), \ddot V_3(\bar x), \ddot V_4(\bar x), \ddot V_5(\bar x)$, and the boundedness of $\ddot V(\bar x)$ then follows from:
$$\min\big\{\ddot V_1(\bar x), \ddot V_2(\bar x), \ddot V_3(\bar x), \ddot V_4(\bar x), \ddot V_5(\bar x)\big\} \le \ddot V(\bar x) \le \max\big\{\ddot V_1(\bar x), \ddot V_2(\bar x), \ddot V_3(\bar x), \ddot V_4(\bar x), \ddot V_5(\bar x)\big\}$$
Thus, $\dot V$ is uniformly continuous. By Barbalat's lemma, $\dot V\to0$ as $t\to\infty$, and hence $s_q\to0$ and $\Delta y\to0$. The uniform continuity of $\Delta\dot y$ is obtained as follows: $\ddot q$ can be derived from $\dot s_q$; from Equation (20), $\dot q, \ddot q, z, \dot z, D, \dot D$ are bounded, so $\ddot y$ is bounded; that is, $\Delta\ddot y$ is bounded, and therefore $\Delta\dot y$ is uniformly continuous.
Based on the above derivation and Barbalat's lemma, we can deduce that $\lim_{t\to\infty}(\Delta y, \Delta\dot y) = 0$.

4.2. Proof of Local Finite-Time Stabilization of Closed-Loop Systems

Lemma 1 
([30]). Consider the following system:
$$\dot x = f(x) + \tilde f(x),\qquad f(0) = 0,\ \tilde f(0) = 0,\qquad x\in\mathbb{R}^n \tag{62}$$
where $f(x)$ is an n-dimensional continuous vector field, homogeneous of degree $k<0$ with respect to the dilation coefficients $(r_1,\dots,r_n)$ $(r_i>0,\ i=1,\dots,n)$, and $\tilde f(x)$ is a continuous vector field. Suppose $x=0$ is an asymptotically stable equilibrium point of the system $\dot x = f(x)$. If
$$\lim_{\varepsilon\to0^+}\frac{\tilde f_i(\varepsilon^{r_1}x_1,\dots,\varepsilon^{r_n}x_n)}{\varepsilon^{r_i+k}} = 0,\qquad i = 1,\dots,n \tag{63}$$
holds uniformly for any $x\in D = \{x\in\mathbb{R}^n\mid\|x\|\le\delta\}$, $\delta>0$, then $x=0$ is a locally finite-time stable equilibrium point of the system (62).
Lemma 2 
([31]). If a scalar function $V(x,t)$ satisfies the following conditions:
(1) $V(x,t)$ is lower bounded;
(2) $\dot V(x,t)$ is negative semi-definite;
(3) $\dot V(x,t)$ is uniformly continuous in time $t$;
then $\dot V(x,t)\to0$ as $t\to\infty$.
Lemma 3 
([32,33]). If a system is globally asymptotically stable and locally finite-time convergent, then it is globally finite-time stable.
The system in Lemma 1 can be rewritten in the form $\dot x = \tilde f(\bar x) + \hat f(\bar x)$, where $\tilde f(\bar x) = \big(\tilde f_1(\bar x), \tilde f_2(\bar x), \dots, \tilde f_n(\bar x)\big)$ is a homogeneous vector field and $\hat f(\bar x) = \big(\hat f_1(\bar x), \hat f_2(\bar x), \dots, \hat f_n(\bar x)\big)$ is a continuous vector field. System (51) can accordingly be rewritten as:
$$\begin{aligned}
\tilde f_1(\bar x) &= \big(\hat Q\,x_2 - Y_k(y,y_d,q,\dot q)\,x_4 + Y_z(y,y_d,\dot y_d,q)\,x_5\big)z^{-1}\\
\hat f_1(\bar x) &= -\Big(\lambda z + \tfrac{1}{1+\alpha_1}\dot z\Big)z^{-1}x_1\\
\tilde f_2(\bar x) &= -H^{-1}(q)\big(\hat Q^T K_y\,\mathrm{sig}(x_1)^{\alpha_1} + K_s\,\mathrm{sig}(x_2)^{\alpha_2}\big)\\
\hat f_2(\bar x) &= -H^{-1}(q)\Big[\Big(\tfrac{1}{2}\dot H(q) + C(q,\dot q)\Big)x_2 - Y_d(q,\dot q,\dot q_r,\ddot q_r)\,x_3\Big]\\
\tilde f_3(\bar x) &= 0\\
\hat f_3(\bar x) &= -\psi_d^{-1}\,Y_d^T(q,\dot q,\dot q_r,\ddot q_r)\,x_2\\
\tilde f_4(\bar x) &= \psi_k^{-1}\,Y_k^T(y,y_d,q,\dot q)\,K_y\,\mathrm{sig}(x_1)^{\alpha_1}\\
\hat f_4(\bar x) &= 0\\
\tilde f_5(\bar x) &= -\psi_z^{-1}\,Y_z^T(q,\dot q,\dot y_d,\ddot q_r)\,K_y\,\mathrm{sig}(x_1)^{\alpha_1}\\
\hat f_5(\bar x) &= 0
\end{aligned} \tag{64}$$
Let the dilation coefficients be $r_1 = 2/(1+\alpha_1)$ and $r_2 = r_3 = r_4 = r_5 = 1$. It is then not difficult to verify that $\tilde f(\bar x) = \big(\tilde f_1(\bar x), \tilde f_2(\bar x), \tilde f_3(\bar x), \tilde f_4(\bar x), \tilde f_5(\bar x)\big)$ is a continuous homogeneous vector field of degree $-1 < k = \alpha_2 - 1 < 0$ with respect to these dilation coefficients. Examining each $\hat f_i$ of the continuous vector field $\hat f(\bar x)$, the following limits are easily obtained and hold uniformly for any $\bar x\in D = \{\bar x\in\mathbb{R}^n\mid\|\bar x\|\le\delta\}$, $\delta>0$:
$$\lim_{\varepsilon\to0}\frac{\hat f_1(\varepsilon^{r_1}x_1)}{\varepsilon^{r_1+k}} = -\Big(\lambda z + \frac{1}{1+\alpha_1}\dot z\Big)z^{-1}x_1\,\lim_{\varepsilon\to0}\varepsilon^{-k} = 0 \tag{65}$$
$$\lim_{\varepsilon\to0}\frac{\hat f_2(\varepsilon^{r_2}x_2)}{\varepsilon^{r_2+k}} = -H^{-1}(q)\Big[\Big(\frac{1}{2}\dot H(q) + C(q,\dot q)\Big)x_2 - Y_d(q,\dot q,\dot q_r,\ddot q_r)\,x_3\Big]\lim_{\varepsilon\to0}\varepsilon^{-k} = 0 \tag{66}$$
$$\lim_{\varepsilon\to0}\frac{\hat f_3(\varepsilon^{r_3}x_2)}{\varepsilon^{r_3+k}} = -\psi_d^{-1}\,Y_d^T(q,\dot q,\dot q_r,\ddot q_r)\,x_2\,\lim_{\varepsilon\to0}\varepsilon^{-k} = 0 \tag{67}$$
According to Lemma 1, the system (51) is locally finite-time stable.
From Lemma 3, together with the global asymptotic stability and local finite-time stability of the system (51), it can be deduced that the closed-loop system (51) is globally finite-time stable.

5. Experiments and Results

Experimental Platform

The effectiveness of the proposed IBUVS finite-time tracking control scheme is verified by experiments. The experimental hardware platform consists of a camera, a manipulator, and a control platform, as shown in Table 2. Table 3 lists the D-H parameters of the Kinova MICO manipulator. The hardware of the visual servo experimental platform is shown in Figure 2.
In the EIH configuration, a Logitech C310 camera is fixed to the end of the MICO manipulator with an adapter fixture to avoid image jitter when the manipulator moves. The actual intrinsic parameter matrix of the Logitech C310 camera is as follows:
$$\Omega = \begin{bmatrix}816.07 & 0 & 310.75\\ 0 & 815.97 & 236.09\\ 0 & 0 & 1\end{bmatrix}$$
The intrinsic parameter matrix of the Logitech C920 camera is:
$$\Omega = \begin{bmatrix}629.78 & 0 & 304.01\\ 0 & 631.53 & 241.27\\ 0 & 0 & 1\end{bmatrix}$$
The visual feature marker in the experiment is a feature color plate composed of four color blocks: red, green, blue, and yellow. The positions of the feature points C1–C4 in the reference coordinate system of the color plate are as follows:
$$x_{board} = \begin{bmatrix} 0.1250 & 0.0000 & 0.1850\\ 0.1250 & 0.0000 & 0.0250\\ 0.0250 & 0.0000 & 0.0250\\ 0.0250 & 0.0000 & 0.1850 \end{bmatrix}$$
Experiment 1. 
Verify the adaptability of the proposed control schemes (42) and (48)–(50) to the unknown parameters of the system and different visual configurations.
Given initial parameter estimates, the adaptive algorithm iterates online to drive the system errors to convergence. The following aspects were considered in the experiment to verify the adaptability of the IBUVS scheme. In the EIH visual configuration, the depth-independent Jacobian adaptive estimation module (S-function) is Get-Adaptive-Depth-Independent-Jacobian, and the depth-parameter adaptive estimation module (S-function) is Get-Adaptive-Depth. The input parameters of these two functions are set as follows: $T_e^b(t)$ represents the pose transformation from the end-effector reference coordinate system to the base reference coordinate system; the parameter $x_i^b$ to be estimated describes the position of the feature points with respect to the base coordinate system; and $M_e^c$ is equivalent to the product of the pose relationship between the camera and the end-effector coordinate system and the intrinsic parameter matrix. Since the IBUVS scheme does not need to know the parameters to be estimated in advance, they do not have to be set; these include the 3D poses of the feature points, the internal imaging parameters of the camera, the external pose parameters of the camera, and the visual configuration (EIH or ETH).
In terms of visual configuration, the actual pose of the camera relative to the end-effector reference coordinate system is as follows:
$$T_e^c = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0.01\\ 0 & 1 & 0 & 0.060\\ 0 & 0 & 0 & 1\end{bmatrix}$$
To investigate the adaptability and flexibility of the system with respect to the three-dimensional pose parameters of the feature points, the reference coordinate system of the feature color plate adopts the following three sets of data:
$$T_b^{B[1]} = \begin{bmatrix}1 & 0 & 0 & 0.2140\\ 0 & 0.9921 & 0.1253 & 0.5960\\ 0 & 0.1253 & 0.9921 & 0.6120\\ 0 & 0 & 0 & 1\end{bmatrix}$$
$$T_b^{B[2]} = \begin{bmatrix}1 & 0 & 0 & 0.1203\\ 0 & 0.9921 & 0.1253 & 0.5060\\ 0 & 0.1253 & 0.9921 & 0.7620\\ 0 & 0 & 0 & 1\end{bmatrix}$$
$$T_b^{B[3]} = \begin{bmatrix}0 & 1 & 0 & 0.6500\\ 1 & 0 & 0 & 0.1800\\ 0 & 0 & 1 & 0.6950\\ 0 & 0 & 0 & 1\end{bmatrix}$$
Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 show the experimental results. From the three-dimensional trajectory of the arm-mounted camera in Figure 3, it is not difficult to see that the proposed IBUVS control algorithm not only completes the visual servo task but also produces a good three-dimensional trajectory. Figure 4, Figure 5, Figure 8 and Figure 9 show, respectively, the error curves and image trajectories of the feature points in pose 1 and pose 3, as well as the joint angular velocity responses, the joint sliding-mode variable $s_q$, and the joint torque outputs in pose 3. Figure 6 shows the convergence of some elements ($\theta_{k,1}$–$\theta_{k,12}$) of the kinematic parameter estimate $\hat\theta_k$ in the pose 1 experiment, and Figure 7 shows the convergence of some elements ($\theta_{d,1}$–$\theta_{d,8}$) of the dynamic parameter estimate $\hat\theta_d$ in the pose 1 experiment.
It can be observed from the above experimental results that, at the beginning of the servo task, the Jacobian matrix computed by the proposed IBUVS control scheme from the initial parameter estimates deviates considerably from the actual Jacobian matrix, leaving the system far from the equilibrium point; this is aggravated when the initial estimates deviate greatly from the true values. However, as the control cycles accumulate, the parameters to be estimated are iterated along the negative gradient of the image error and converge to a set of constants proportional to the true values (as shown in Figure 6 and Figure 7). At this point, the estimated Jacobian matrix approaches the actual Jacobian matrix, and the image-space error gradually converges.
The above experimental results verify the adaptability of the IBUVS scheme in the EIH configuration to uncalibrated parameters such as the 3D pose of the feature color plate and the camera intrinsic parameters.
To further verify the adaptability of the scheme to the internal imaging parameters and external pose parameters of different cameras, visual servo experiments were continued in the ETH configuration.
When switching from EIH to ETH, the input parameter of the depth-independent Jacobian adaptive estimation module (Get-Adaptive-Depth-Independent-Jacobian) and of the depth-parameter adaptive estimation module (Get-Adaptive-Depth) should be switched to $T_b^e(t)$, i.e., the pose transformation from the base reference coordinate system to the end-effector reference coordinate system. At the same time, the control gains should be adjusted appropriately according to the actual initial configuration and the selected feature points. Apart from these steps, no other function parameters of the IBUVS scheme need to be adjusted.
The ETH configuration is shown in Figure 2b. The Logitech C920 is selected as the fixed camera in this experiment. The reference coordinate system of the camera adopts the following two groups of different poses; the three-dimensional trajectory of the end-effector is shown in Figure 10.
$$T_b^{c[1]} = \begin{bmatrix}0.71 & 0.71 & 0 & 0.30\\ 0 & 0 & 1 & 0.55\\ 0.71 & 0.71 & 0 & 1.33\\ 0 & 0 & 0 & 1\end{bmatrix},\qquad T_b^{c[2]} = \begin{bmatrix}0.34 & 0.94 & 0 & 0.12\\ 0 & 0 & 1 & 0.55\\ 0.94 & 0.34 & 0 & 1.4\\ 0 & 0 & 0 & 1\end{bmatrix}$$
The three-dimensional pose of the characteristic color plate relative to the reference coordinate system of the end-effector is as follows:
$$T_{End}^{ColorBoard} = \begin{bmatrix}1 & 0 & 0 & 0.06\\ 0 & 1 & 0 & 0.02\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}$$
Similarly, the above pose parameters do not need to be set in the IBUVS controller function.
In the ETH configuration, with the camera placed at two different poses, the end-effector of the manipulator completes the visual servo task well and drives the feature color plate fixed at the end of the manipulator along the desired image trajectory, as shown in Figure 10. The image error curves and image trajectories of the feature points under the different camera poses are shown in Figure 11 and Figure 12, respectively. Similar to the EIH configuration, under the IBUVS controller the feature points exhibit varying degrees of jitter and deviation from the equilibrium point at the beginning of the servo task. However, as the control cycles accumulate, the parameters to be estimated are iterated along the negative gradient of the image error, driving the estimated Jacobian matrix toward the actual system Jacobian; the image-space error converges gradually, and motion along the desired image trajectory is finally realized. These two groups of experiments show that the proposed IBUVS scheme can still effectively complete the visual servo task under large variations in the camera imaging model, the relative pose of the camera and the manipulator, and the pose of the feature color plate.
Experiment 2. 
Verify the fast convergence of schemes (42) and (48)–(50) near the equilibrium point.
The convergence rate is a key index for evaluating the performance of an IBUVS controller. In visual servoing, when the initial attitude differs greatly from the desired attitude and the system is subject to parameter estimation errors, pose estimation errors, and computation delays in the control output, the IBUVS controller often has to adopt a small control gain to keep the system stable, which directly leads to a slow convergence rate near the equilibrium point.
To fully verify the fast convergence of the proposed IBUVS controller (hereinafter IBUVS-F) near the equilibrium point, the asymptotically stable IBUVS controller (hereinafter IBUVS-A) proposed in [33] was selected as a comparison scheme. In addition, the open-source visual servo platform ViSP provides an adaptive gain function ($\lambda(x) = a\exp(-bx) + c$) that can be used to improve IBUVS-A; this yields another comparison scheme, abbreviated IBUVS-AAG. To quantitatively evaluate the differences in convergence time among the three schemes, the experiment stipulates that the system has converged when the average error modulus of the four image feature points is less than 10 pixels, and the convergence time is taken as the quantitative index. A sketch of these two ingredients is given below.
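A brief Python sketch of the two experimental ingredients just described: a ViSP-style adaptive gain and the 10-pixel average-error convergence test. The gain coefficients here are placeholders, not the values used in the experiments; all names are illustrative.

```python
import numpy as np

def adaptive_gain(err_norm: float, a: float = 2.0, b: float = 30.0,
                  c: float = 0.1) -> float:
    """ViSP-style gain lambda(x) = a*exp(-b*x) + c: largest near the
    equilibrium (x = 0), decaying toward c for large errors."""
    return a * np.exp(-b * err_norm) + c

def has_converged(feature_errors: np.ndarray,
                  threshold_px: float = 10.0) -> bool:
    """Convergence test used in the experiments: average error modulus of
    the four feature points (rows of a 4x2 pixel-error array) < 10 px."""
    return float(np.linalg.norm(feature_errors, axis=1).mean()) < threshold_px
```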
Considering the different gain coefficients of the different schemes, gains producing similar control torque output ranges were combined into one comparison group so that IBUVS-F is comparable with IBUVS-A and IBUVS-AAG. Specifically, 7 groups of image-error gain coefficients of the IBUVS-A, IBUVS-AAG, and IBUVS-F controllers were compared, as shown in Table 4.
Each scheme in each group was run for at least 1500 control cycles (i.e., 49.5 s) in the comparison tests. In addition, without loss of generality, the experiment was repeated five times for each scheme in each group, and the average of the five results was taken as the convergence time. The comparison results of the three schemes under different gain conditions are shown in Table 4 and Figure 13.
The comparative experimental results show that the deviation among the convergence times of the three schemes is small when a larger control gain is applied. However, as the control gain decreases, the convergence time of the IBUVS-F scheme becomes significantly shorter than that of the IBUVS-A and IBUVS-AAG schemes. Figure 14 shows the image error convergence curves of the three schemes in the sixth comparison group. In the actual control process, a larger control gain can effectively reduce the convergence time; however, when the pose difference is large, the output torque of the controller is large, which easily causes jitter and rotation of the manipulator joints. Especially when there is a large pose difference along the Z-axis of the camera, the feature points easily leave the field of view of the arm-mounted camera, leading to failure of the visual servo task. Figure 15 compares the control outcomes of IBUVS-A and IBUVS-F for three groups of larger gains: the two schemes failed the servo task to different degrees in ten independent experiments, and when the gain was increased further, both schemes failed to complete the servo control task in all 10 experiments.
On the other hand, a small control gain ensures that the feature points have a good error convergence curve in image space, so that the manipulator moves along a smooth three-dimensional trajectory and the visual servo control remains reliable. In the initial pose-adjustment stage, the proposed IBUVS-F scheme can effectively reduce the torque output, keep the manipulator attitude stable, and significantly shorten the convergence time, which improves the control quality of IBUVS.
As can be seen from Table 4, when the convergence time is about 7 s, the gain coefficient of the IBUVS-A scheme is 0.35 and that of the IBUVS-F scheme is 0.03. In the EIH configuration, visual servo tracking experiments were carried out with these two schemes. The three-dimensional motion trajectories of the arm-mounted camera for the two schemes are shown in Figure 16. The experimental results show an obvious difference: the IBUVS-F trajectory is close to a straight line, while the IBUVS-A trajectory is an S-shaped curve, and the large initial output torque of the IBUVS-A scheme easily leads to failure of the visual servo control task.
Figure 17 shows the joint sliding-mode variable responses of IBUVS-F at gain coefficients of 0.008 and 0.03 and of the IBUVS-AAG scheme at 0.3. The joint sliding-mode variable gradually approaches zero as the image errors converge. It is worth noting that a certain degree of high-frequency joint angular velocity response appears in the sliding-mode space of IBUVS-F after the system converges; this is caused by the large control output of the IBUVS-F scheme near the equilibrium point. Compared with the IBUVS-AAG scheme, which also has a convergence time of about 7 s, the high-frequency angular velocity response level of the IBUVS-AAG scheme is similar to that of IBUVS-F. Although the rapid convergence of IBUVS-F near the equilibrium point comes at the cost of a certain amount of joint-space noise, it still has advantages over the other schemes. The above analysis demonstrates the effectiveness and superiority of the proposed IBUVS-F scheme.

6. Conclusions

Based on the adaptive Jacobian method, a finite-time visual servo control scheme for an uncalibrated manipulator is proposed in this paper. By designing a finite-time controller and proposing adaptive laws for the depth, kinematic, and dynamic parameters, finite-time tracking of the desired image trajectory is realized. The finite-time tracking controller has a nonlinear proportional-differential plus dynamic feedforward compensation structure (NPD+), which improves the control quality of the closed-loop system by applying continuous non-smooth nonlinear functions to the feedback errors. Using Lyapunov stability theory and finite-time stability theory, the global finite-time stability of the closed-loop system is proven. Compared with existing schemes, the experimental results show that the proposed uncalibrated visual servo controller can adapt not only to changes in the EIH and ETH visual configurations but also to changes in the relative pose of the feature points and the relative pose of the camera. At the same time, the convergence rate near the equilibrium point is effectively improved, and the controller has better dynamic stability. In the dynamic equation of the manipulator system (Equation (24)), the linear parameterization method is used to separate the unknown uncertain parameters, and corresponding adaptive laws are designed to estimate them. The effect of this method on the parameter estimation error of the dynamics remains to be studied further.
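To make the NPD+ structure concrete, a minimal Python sketch of the control law is given below. All names (`sig`, `npd_plus_torque`, `Kp`, `Kd`, `alpha`, `Y`, `theta_hat`) are illustrative rather than taken from the paper, and the exponent on the derivative term follows a common finite-time PD choice (cf. [29]); the paper's exact equations may use different exponents and regressor structure.

```python
import numpy as np

def sig(e, alpha):
    """Element-wise continuous non-smooth function |e|^alpha * sign(e)."""
    return np.sign(e) * np.abs(e) ** alpha

def npd_plus_torque(e, e_dot, Y, theta_hat, Kp, Kd, alpha):
    """Nonlinear PD feedback on the image error plus dynamic feedforward
    compensation Y @ theta_hat (linear-in-parameters robot dynamics)."""
    # The exponent 2*alpha/(1+alpha) on the derivative term is a common
    # choice in finite-time PD designs; treat it as an assumption here.
    return -Kp @ sig(e, alpha) - Kd @ sig(e_dot, 2 * alpha / (1 + alpha)) + Y @ theta_hat

# Example: 2D image error, alpha in (0, 1) for finite-time convergence.
e = np.array([10.0, -4.0]); e_dot = np.array([-0.5, 0.2])
Kp = 0.05 * np.eye(2); Kd = 0.2 * np.eye(2)
Y = np.zeros((2, 3)); theta_hat = np.zeros(3)  # placeholder regressor/estimate
tau = npd_plus_torque(e, e_dot, Y, theta_hat, Kp, Kd, alpha=0.6)
```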

Author Contributions

Conceptualization, Z.Z. and J.W.; methodology, H.Z.; software, Z.Z. and H.Z.; validation, Z.Z., J.W. and H.Z.; formal analysis, H.Z.; investigation, J.W.; resources, J.W.; data curation, Z.Z. and H.Z.; writing—original draft preparation, Z.Z.; writing—review and editing, H.Z.; visualization, Z.Z.; supervision, J.W.; project administration, Z.Z.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by key research and development projects in Tianjin, grant number 19YFZCSN00360.

Institutional Review Board Statement

The paper does not involve human or animal research.

Informed Consent Statement

The study did not involve humans.

Data Availability Statement

The data that support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Property 6

Proof. 
In Equations (21) and (22), $\partial\left(m_i^T(Rx+P)\right)/\partial q$, $i=1,2,3$, can be expanded as follows:

$$\frac{\partial\, m_i^T(Rx+P)}{\partial q} = \frac{\partial}{\partial q}\left( r_{11}m_{i1}x_1 + r_{21}m_{i2}x_1 + r_{31}m_{i3}x_1 + r_{12}m_{i1}x_2 + r_{22}m_{i2}x_2 + r_{32}m_{i3}x_2 + r_{13}m_{i1}x_3 + r_{23}m_{i2}x_3 + r_{33}m_{i3}x_3 + p_1 m_{i1} + p_2 m_{i2} + p_3 m_{i3} \right)$$
where $r_{hl}\ (h=1,2,3,\ l=1,2,3)$ denotes the element in row $h$, column $l$ of the matrix $R$; $p_k\ (k=1,2,3)$ denotes the $k$-th element of the vector $P$; $m_{ij}\ (i=1,2,3,\ j=1,2,3)$ denotes the element in row $i$, column $j$ of the matrix $M$; and $x_p\ (p=1,2,3)$ denotes the $p$-th element of the coordinate vector $x$. First, consider the case $p_1 = 36$, in which all elements of the regression matrix $Y_{k,1}(y,q,\eta)$ are non-zero and the vector $D\eta$ depends linearly on 36 unknown parameters. Let $q_i$ and $\eta_i$ denote the $i$-th elements of the joint vectors $q$ and $\eta$, respectively, and define $Q_{hl} = \sum_{i=1}^{n} \eta_i \frac{\partial r_{hl}}{\partial q_i}\ (h=1,2,3,\ l=1,2,3)$, $Q_k = \sum_{i=1}^{n} \eta_i \frac{\partial p_k}{\partial q_i}\ (k=1,2,3)$, $Q^T \in \mathbb{R}^{1\times 12} = (Q_{11}, Q_{12}, Q_{13}, Q_{21}, Q_{22}, Q_{23}, Q_{31}, Q_{32}, Q_{33}, Q_1, Q_2, Q_3)$, and $\theta_{k,i}^T \in \mathbb{R}^{1\times 12} = (m_{i1}x_1, m_{i1}x_2, m_{i1}x_3, m_{i2}x_1, m_{i2}x_2, m_{i2}x_3, m_{i3}x_1, m_{i3}x_2, m_{i3}x_3, m_{i1}, m_{i2}, m_{i3})$, $i=1,2,3$. With the unknown parameter vector $\theta_k^T \in \mathbb{R}^{1\times 36} = (\theta_{k,1}^T, \theta_{k,2}^T, \theta_{k,3}^T)$, the regression matrix is as follows:
$$Y_{k,1}(y,q,\eta) = \begin{pmatrix} Q^T & 0_{1\times 12} & u\,Q^T \\ 0_{1\times 12} & Q^T & v\,Q^T \end{pmatrix}$$
Now consider the case $p_1 < 36$. When an element $r_{hl}$ of the rotation matrix $R$ (or an element $p_k$ of the translation vector $P$) is independent of the joint position $q$, its partial derivative with respect to the joint angles is zero, that is, $Q_{hl} = 0$ (or $Q_k = 0$). The corresponding zero terms can then be removed from the regression matrix $Y_{k,1}(y,q,\eta)$, and the corresponding unknown parameters can be removed from the vector $\theta_k$, so that the parameter vector dimension $p_1 < 36$. □
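To illustrate the structure just derived, the following sketch assembles the $2\times 36$ regression matrix of the $p_1 = 36$ case from a given vector $Q$ and the image coordinates $(u, v)$; the function name and calling convention are hypothetical.

```python
import numpy as np

def regression_matrix_Yk1(Q, u, v):
    """Assemble Y_{k,1}(y, q, eta) for the p1 = 36 case.

    Q    : (12,) array (Q_11, ..., Q_33, Q_1, Q_2, Q_3) as defined above.
    u, v : image-plane coordinates of the feature point.
    """
    Q = np.asarray(Q, dtype=float)
    z12 = np.zeros(12)
    return np.vstack([
        np.concatenate([Q, z12, u * Q]),   # row for the first image coordinate
        np.concatenate([z12, Q, v * Q]),   # row for the second image coordinate
    ])  # shape (2, 36), to be multiplied by theta_k
```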

Appendix B. Proof of Property 7

The proof of Property 7 is similar to that of Property 6 and is therefore omitted. When $p_1 = 36$, $Y_{k,2}(q,\eta) = (0_{1\times 24}\ \ Q^T)$; similarly, if $p_k$ is independent of $q$, then $Y_{k,2}(q,\eta)$ and $\theta_k$ can be reduced in dimension by removing the zero terms, in which case $p_1 < 36$. In summary, it can be deduced that $p_1 \le 36$. Properties 6 and 7 are known as the linear parameterization properties of the depth-independent Jacobian matrix.
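A corresponding sketch for Property 7 (again with hypothetical naming) is a single block row:

```python
import numpy as np

def regression_matrix_Yk2(Q):
    """Y_{k,2}(q, eta) = (0_{1x24}  Q^T) for the p1 = 36 case."""
    return np.concatenate([np.zeros(24), np.asarray(Q, dtype=float)]).reshape(1, 36)
```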

Appendix C. Proof of Property 8

Proof. 
Equation (19) can be expanded as follows:
$$z = r_{11}\bar{m}_{31}x_1 + r_{21}\bar{m}_{32}x_1 + r_{31}\bar{m}_{33}x_1 + r_{12}\bar{m}_{31}x_2 + r_{22}\bar{m}_{32}x_2 + r_{32}\bar{m}_{33}x_2 + r_{13}\bar{m}_{31}x_3 + r_{23}\bar{m}_{32}x_3 + r_{33}\bar{m}_{33}x_3 + p_1\bar{m}_{31} + p_2\bar{m}_{32} + p_3\bar{m}_{33} + \bar{m}_{34}$$
where $\bar{m}_{ij}$, $i=1,\dots,3$, $j=1,\dots,4$, is an element of the matrix $\bar{M}$. As in Property 6, if all $r_{hl}\ (h=1,2,3,\ l=1,2,3)$ and $p_k\ (k=1,2,3)$ are non-zero, then $z$ depends linearly on 13 unknown parameters, i.e., $p_2 = 13$. The elements of $\theta_z$ are $\bar{m}_{3j}x_m\ (j=1,2,3,\ m=1,2,3)$ and $\bar{m}_{3j}\ (j=1,2,3,4)$, and the depth parameter regression vector is $Y_z(q) = (r_{11}, r_{12}, r_{13}, r_{21}, r_{22}, r_{23}, r_{31}, r_{32}, r_{33}, p_1, p_2, p_3, 1)$. When $r_{hl} = 0$, the corresponding element $\bar{m}_{3h}x_l$ has no effect on the depth parameter $z$ and can be removed from $\theta_z$; similarly, $\bar{m}_{3k}$ can be removed when $p_k = 0$, in which case $p_2 < 13$. In summary, the unknown parameter vector dimension $p_2 \le 13$. □
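The depth parameterization $z = Y_z(q)\,\theta_z$ can be implemented in the same way; the sketch below (illustrative naming) builds $Y_z(q)$ from the current rotation matrix $R$ and translation vector $P$:

```python
import numpy as np

def depth_regressor(R, P):
    """Y_z(q) = (r11, r12, r13, r21, ..., r33, p1, p2, p3, 1)."""
    return np.concatenate([np.asarray(R).reshape(-1),
                           np.asarray(P).reshape(-1),
                           [1.0]])  # shape (13,)

# z = depth_regressor(R, P) @ theta_z, where theta_z stacks the 13 unknowns:
# mbar_{3h} * x_l (paired with r_{hl}) followed by mbar_{31}, ..., mbar_{34}.
```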

References

1. Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Zhi, H. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review. Comput. Electron. Agric. 2023, 205, 107584.
2. Kmich, M.; Karmouni, H.; Harrade, I.; Daoui, A.; Sayyouri, M. Image-Based Visual Servoing Techniques for Robot Control. In Proceedings of the 2022 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 18–20 May 2022.
3. Jiang, J.; Wang, Y.; Jiang, Y.; Xie, K.; Tan, H.; Zhang, H. A robust visual servoing controller for anthropomorphic manipulators with Field-of-View constraints and swivel-angle motion: Overcoming system uncertainty and improving control performance. IEEE Robot. Autom. Mag. 2022, 29, 104–114.
4. Zeng, H.; Lu, Z.; Lv, Y.; Qi, J. Adaptive Neural Network-based Visual Servoing with Integral Sliding Mode Control for Manipulator. In Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China, 25–27 July 2022.
5. Zheng, T.; Zhang, J.; Zhu, H. Uncalibrated Visual Servo System Based on Kalman Filter Optimized by Improved STOA. In Proceedings of the 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 17–19 December 2021.
6. Keshavan, J.; Escobar-Alvarez, H.; Sean Humbert, J. An adaptive observer framework for accurate feature depth estimation using an uncalibrated monocular camera. Control Eng. Pract. 2016, 46, 59–65.
7. Oh, W.; Yoo, H.; Ha, T.; Oh, S. Local Selective Vision Transformer for Depth Estimation Using a Compound Eye Camera. Pattern Recognit. Lett. 2023, 167, 82–89.
8. Zhang, K.; Chen, J.; Li, Y.; Zhang, X. Visual Tracking and Depth Estimation of Mobile Robots without Desired Velocity Information. IEEE Trans. Cybern. 2020, 50, 361–373.
9. Zhao, W.; Wang, H. Adaptive Image-based Visual Servoing of Mobile Manipulator with an Uncalibrated Fixed Camera. In Proceedings of the 2020 IEEE International Conference on Real-Time Computing and Robotics (RCAR), Hokkaido, Japan, 28–29 September 2020.
10. Fried, J.; Leite, A.C.; Lizarralde, F. Uncalibrated image-based visual servoing approach for translational trajectory tracking with an uncertain robot manipulator. Control Eng. Pract. 2023, 130, 105363.
11. Liang, X.; Wang, H.; Liu, Y.-H.; You, B.; Liu, Z.; Jing, Z.; Chen, W. Fully Uncalibrated Image-Based Visual Servoing of 2DOFs Planar Manipulators with a Fixed Camera. IEEE Trans. Cybern. 2022, 52, 10895–10908.
12. Ghasemi, A.; Xie, W.-F. Adaptive Image-Based Visual Servoing of 6 DOF Robots Using Switch Approach. In Proceedings of the 2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 11–13 August 2018.
13. Sarapura, J.A.; Roberti, F.; Gimenez, F.; Patiño, D.; Carelli, R. Adaptive Visual Servoing Control of a Manipulator with Uncertainties in Vision and Dynamics. In Proceedings of the 2018 Argentine Conference on Automatic Control (AADECA), Buenos Aires, Argentina, 7–9 November 2018.
14. Li, T.; Qiu, Q.; Zhao, C. Hybrid Visual Servoing Tracking Control of Uncalibrated Robotic Systems for Dynamic Dwarf Culture Orchards Harvest. In Proceedings of the 2021 IEEE International Conference on Development and Learning (ICDL), Beijing, China, 23–26 August 2021.
15. Hou, Y.; Wang, H.; Wei, Y.; Iu, H.H.-C.; Fernando, T. Robust adaptive finite-time tracking control for Intervention-AUV with input saturation and output constraints using high-order control barrier function. Ocean Eng. 2023, 268, 113219.
16. Moudoud, B.; Aissaoui, H.; Diany, M. Fixed-Time non-singular Fast TSM control for WMR with disturbance observer. IFAC-PapersOnLine 2022, 55, 647–652.
17. Sun, L.; Liu, Y. Extended state observer augmented finite-time trajectory tracking control of uncertain mechanical systems. Mech. Syst. Signal Process. 2020, 139, 106374.
18. Galicki, M. Finite-time trajectory tracking control in a task space of robotic manipulators. Automatica 2016, 67, 165–170.
19. Huang, Y.; Meng, Z. Global finite-time distributed attitude synchronization and tracking control of multiple rigid bodies without velocity measurements. Automatica 2021, 132, 109796.
20. Li, T.; Zhao, H. Global finite-time adaptive control for uncalibrated robot manipulator based on visual servoing. ISA Trans. 2017, 68, 402–411.
21. Zhou, B.; Yang, L.; Wang, C.; Lai, G.; Chen, Y. Adaptive finite-time tracking control of robot manipulators with multiple uncertainties based on a low-cost neural approximator. J. Frankl. Inst. 2022, 359, 4938–4958.
22. Huang, T.; Sun, Y.; Tian, D. Finite-time stability of positive switched time-delay systems based on linear time-varying copositive Lyapunov functional. J. Frankl. Inst. 2022, 359, 2244–2258.
23. Yu, X.; Yin, J.; Khoo, S. Generalized Lyapunov criteria on finite-time stability of stochastic nonlinear systems. Automatica 2019, 107, 183–189.
24. Liu, J.; Wang, Q.-G.; Yu, J. Convex Optimization-Based Adaptive Fuzzy Control for Uncertain Nonlinear Systems with Input Saturation Using Command Filtered Backstepping. IEEE Trans. Fuzzy Syst. 2023, 31, 2086–2091.
25. Bu, X.; Jiang, B.; Lei, H. Performance Guaranteed Finite-Time Non-Affine Control of Waverider Vehicles without Function-Approximation. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3252–3262.
26. Bu, X.; Lv, M.; Lei, H.; Cao, J. Fuzzy neural pseudo control with prescribed performance for waverider vehicles: A fragility-avoidance approach. IEEE Trans. Cybern. 2023, 53, 4986–4999.
27. Bu, X.; Hua, C.; Lv, M.; Wu, Z. Flight Control of Waverider Vehicles with Fragility-avoidance Prescribed Performance. IEEE Trans. Aerosp. Electron. Syst. 2023.
28. Cao, Y.; Liu, S. Homography-based platooning control of mobile robots. Control Theory Appl. 2019, 36, 1382–1390.
29. Su, Y. Global continuous finite-time tracking of robot manipulators. Int. J. Robust Nonlinear Control 2009, 19, 1871–1885.
30. Sun, F.; Guan, Z. Finite-time consensus for leader-following second-order multi-agent system. Int. J. Syst. Sci. 2013, 44, 727–738.
31. Slotine, J.-J.E.; Li, W. Applied Nonlinear Control; Prentice Hall: Englewood Cliffs, NJ, USA, 2004.
32. Hong, Y.; Cheng, D. Analysis and Control of Nonlinear Systems; Science Press: Beijing, China, 2005.
33. Su, Y. Control Theory of Nonlinear Robot Systems; Science Press: Beijing, China, 2008.
Figure 1. Schematic diagram of an uncalibrated visual servo tracking control system.
Figure 2. Hardware system of the visual servo experiment platform. (a) Eye-in-hand configuration; (b) Eye-to-hand configuration.
Figure 3. Camera 3D space-tracking trajectory when feature points are in different poses (EIH configuration).
Figure 4. Image tracking trajectory when feature points are in different poses (EIH configuration; the feature points in the figure are abbreviated as FP). (a) Pose 1; (b) Pose 3.
Figure 5. Image convergence curve when feature points are located in different poses (EIH configuration). (a) Pose 1; (b) Pose 3.
Figure 6. The convergence curve of the unknown kinematic parameter estimation vector $\hat{\theta}_k$.
Figure 7. The convergence curve of the unknown dynamic parameter estimation vector $\hat{\theta}_d$.
Figure 8. Response curves of the joint sliding mode variable and joint angular velocity (EIH configuration, pose 3). (a) Response curve of the joint sliding mode variable $S_q$; (b) Response curve of the joint angular velocity.
Figure 9. Torque output of each joint controller (EIH configuration, pose 3).
Figure 10. Three-dimensional trajectory of the end-effector when the camera is in different poses (ETH configuration).
Figure 11. Image plane trajectory of each feature point when the camera is located in different positions (ETH configuration). (a) Pose 1; (b) Pose 2.
Figure 12. Image convergence curves of each feature point when the camera is located in different positions (ETH configuration). (a) Pose 1; (b) Pose 2.
Figure 13. Comparison of the convergence times of different schemes.
Figure 14. Comparison of error convergence curves between IBUVS-F and the reference scheme IBUVS-A in comparison group 6.
Figure 15. Comparison of tracking task completion between the two schemes when the gain is large.
Figure 16. Three-dimensional trajectory diagrams of different schemes when the convergence time is about 7 s. (a) IBUVS-A; (b) IBUVS-F.
Figure 17. Comparison of the joint sliding mode variable $S_q$ under different gain conditions.
Table 1. Kinematic parameters of different configurations.

| Visual Configuration | $M$ | $\bar{M}$ | $T(t)$ | $x_i$ |
| --- | --- | --- | --- | --- |
| Scenes | $M_b^c$ | $[M_b^c \;\; \Omega P_b^c]$ | $T_e^b(t)$ | $x_i^e$ |
| Hand-eye relationships | $M_e^c$ | $[M_e^c \;\; \Omega P_e^c]$ | $T_b^e(t)$ | $x_i^b$ |
Table 2. Hardware configuration of the experimental platform.

| Equipment | Model | Configuration Parameters |
| --- | --- | --- |
| Computer | Dell OptiPlex 7050 (Dell, Round Rock, TX, USA) | Intel Core i7 2.80 GHz CPU, 8 GB RAM |
| Camera | Logitech C920 (Logitech, Lausanne, Switzerland) | Dynamic resolution 1280 × 720; static resolution 1280 × 960; maximum frame rate 30 FPS |
| | Logitech C310 (Logitech) | Dynamic resolution 1280 × 720; static resolution 1280 × 960; maximum frame rate 30 FPS |
| Robot manipulator | Kinova MICO (Kinova Robotics, Montreal, QC, Canada) | 6-DOF bionic robotic arm; Table 3 lists the D-H parameters |
Table 3. D-H parameters of the Kinova MICO robot manipulator.

| Serial Number | Joint Offset d (m) | Length of the Common Perpendicular a (m) | Angle of Torsion α (rad) |
| --- | --- | --- | --- |
| 1 | 0.2755 | 0 | 0 |
| 2 | 0 | 0 | π/2 |
| 3 | 0 | 0.2900 | 0 |
| 4 | 0.1661 | 0 | π/2 |
| 5 | 0.0856 | 0 | 1.0472 |
| 6 | 0.2028 | 0.2900 | 1.0472 |
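For reproducibility, the table above suffices to compute the nominal forward kinematics. The Python sketch below assumes the standard D-H convention, which the paper does not restate, so it should be read as illustrative rather than as the exact Kinova kinematic model:

```python
import numpy as np

# D-H parameters (d [m], a [m], alpha [rad]) from Table 3; theta is the joint variable.
DH = [
    (0.2755, 0.0,    0.0),
    (0.0,    0.0,    np.pi / 2),
    (0.0,    0.2900, 0.0),
    (0.1661, 0.0,    np.pi / 2),
    (0.0856, 0.0,    1.0472),
    (0.2028, 0.2900, 1.0472),
]

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha)."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q):
    """Compose the six link transforms for the joint vector q (rad)."""
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, DH):
        T = T @ dh_transform(qi, d, a, alpha)
    return T

# Example: end-effector pose at the all-zero joint configuration.
T_home = forward_kinematics(np.zeros(6))
```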
Table 4. Gain coefficients for convergence tests of different IBUVS schemes.

| Contrast Group | Scheme | Gain | Convergence Time |
| --- | --- | --- | --- |
| 1 | IBUVS-A | 0.60 | 3.854 s |
| | IBUVS-AAG | $\lambda = 0.6$, $\lambda_0 = 1.00$, $\dot{\lambda}_0 = 1.00$ | 3.610 s |
| | IBUVS-F | 0.10 | 3.993 s |
| 2 | IBUVS-A | 0.50 | 5.181 s |
| | IBUVS-AAG | $\lambda = 0.5$, $\lambda_0 = 1.00$, $\dot{\lambda}_0 = 1.00$ | 4.950 s |
| | IBUVS-F | 0.08 | 4.950 s |
| 3 | IBUVS-A | 0.35 | 7.494 s |
| | IBUVS-AAG | $\lambda = 0.35$, $\lambda_0 = 0.80$, $\dot{\lambda}_0 = 0.80$ | 6.798 s |
| | IBUVS-F | 0.06 | 5.478 s |
| 4 | IBUVS-A | 0.23 | 11.583 s |
| | IBUVS-AAG | $\lambda = 0.23$, $\lambda_0 = 0.60$, $\dot{\lambda}_0 = 0.60$ | 7.887 s |
| | IBUVS-F | 0.05 | 5.808 s |
| 5 | IBUVS-A | 0.15 | 16.175 s |
| | IBUVS-AAG | $\lambda = 0.15$, $\lambda_0 = 0.30$, $\dot{\lambda}_0 = 0.30$ | 10.865 s |
| | IBUVS-F | 0.04 | 6.171 s |
| 6 | IBUVS-A | 0.10 | 18.315 s |
| | IBUVS-AAG | $\lambda = 0.10$, $\lambda_0 = 0.20$, $\dot{\lambda}_0 = 0.20$ | 13.266 s |
| | IBUVS-F | 0.03 | 7.293 s |
| 7 | IBUVS-A | 0.05 | 36.033 s |
| | IBUVS-AAG | $\lambda = 0.05$, $\lambda_0 = 0.10$, $\dot{\lambda}_0 = 0.10$ | 28.017 s |
| | IBUVS-F | 0.02 | 16.170 s |