Article

Backstepping Control Strategy of an Autonomous Underwater Vehicle Based on Probability Gain

School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou 310018, China
*
Author to whom correspondence should be addressed.
Mathematics 2022, 10(21), 3958; https://doi.org/10.3390/math10213958
Submission received: 15 September 2022 / Revised: 20 October 2022 / Accepted: 21 October 2022 / Published: 25 October 2022
(This article belongs to the Special Issue Automatic Control and Soft Computing in Engineering)

Abstract

In this paper, an underwater robot system with nonlinear characteristics is studied by a backstepping method. Motivated by the state-keeping problem of an Autonomous Underwater Vehicle (AUV), this paper applies a backstepping probabilistic gain controller to the nonlinear AUV system for the first time. Under the combined influence of underwater resistance, turbulence, and driving force, the motion of the AUV exhibits strong coupling, strong nonlinearity, and unpredictable states. In this setting, output feedback can solve the problem of unmeasurable states. In order to achieve a good control effect and extend the cruising range of the AUV, this paper first selects the state error as a new control objective, so that the control of the system is transformed into the selection of system parameters, which greatly simplifies the calculation. Second, this paper introduces the concept of a stochastic backstepping control strategy, in which the robot's actuators work discontinuously: the actuator acts only when a random disturbance occurs, and the control effect is not diminished. Finally, the backstepping probabilistic gain controller designed for the nonlinear system is applied to the simulation model for verification, and the final result confirms the effectiveness of the controller design.

1. Introduction

With the development of science and technology, ocean exploration technology has reached an unprecedented level, and AUVs have been developed with its help [1,2]. It is now possible to conduct scientific investigations on the seabed at a depth of 10,000 m. However, AUV development is inseparable from progress in control technology, especially in the nonlinear field. Classical control techniques such as PID, robust control, and homogeneous controllers [3,4,5,6] have known limitations and deficiencies. PID can only control simple systems [3,4] and has little effect when handling coupled nonlinear systems; in particular, the model and system structure are required to be exactly known. A robust control method provides more flexibility. Taking an autonomous vehicle as an example [5,6], the weight change of the passengers or cargo in the vehicle will cause the controlled system to change. Such common problems pose challenges to the design of the controller, so the designed controller must have sufficient robustness to continuously control the vehicle over the course of these changes [7,8,9,10].
It is well known that nonlinearities exist in all systems. For example [11,12,13,14,15], control problems exist in industry, in relationships in social systems, and in biological relationships in nature, and we cannot always express them with specific mathematical formulas. Some researchers [16,17] have simplified and approximated these systems by studying the salient relationships in the system and ignoring others considered less important. With the advancement of science and technology, researchers [18,19] found that the influence of the neglected nonlinearity on the whole system cannot be omitted. Therefore, it is necessary to reconsider characteristics such as nonlinearity, uncertainty, and randomness. In particular, a system's nonlinear behavior, which is the basis of modern control theory research, needs to be addressed. The control problem of the AUV studied in this paper [20,21,22] has very typical nonlinear characteristics: the system's various elements are coupled with each other and accompanied by uncertain disturbances from the external environment. This makes the design of the AUV control system very challenging, as illustrated in Figure 1, and existing control theory research offers no complete solution to this problem. Some researchers [23,24] study the robot's motion control problem by restricting the motion of the underwater vehicle, assuming that the AUV's trajectory lies in a plane and does not involve spatial motion [25,26]. Such assumptions simplify the motion problem from six degrees of freedom to three, but they ignore random interference phenomena.
On the one hand, the environment can induce a variety of nonlinear disturbances; systems subject to such disturbances are called stochastic nonlinear systems. According to recent research [27,28], external random disturbances can be modeled as satisfying a time-invariant Bernoulli distribution. The scope of this paper is to find a solution for such a stochastic nonlinear system. In [29], a time-invariant-probability stochastic nonlinear AUV system was used to control an AUV's hovering. A particularly active research topic is the design of a high-gain observer and feedback controller, in which unmeasurable states are observed by the high-gain observer. In this paper, we introduce a coordinate transformation for the output feedback controller [30] and compare the system's real value with the observed value. This method simplifies the computational complexity of the control system and improves the control effectiveness. The design problem of the controller is thereby transformed into a parameter selection problem.
The objective of this paper is to consider the output of a high-gain observer in developing a stochastic control system. The main contributions of this paper include the following:
  • The high-gain method is used to observe the system’s unmeasurable states, and the complex nonlinear system control problem is transformed into a parameter selection problem by introducing coordinate transformation, which simplifies the calculation process and complexity.
  • The external disturbance is assumed to occur randomly, and the concept of a probability gain controller is introduced. With the control effect unchanged, we convert continuous control to intermittent control. The endurance problem of the AUV is solved under intermittent control, saving significant energy and extending the cruising range.
  • Based on the backstepping control method, the complex nonlinear system control problem is decomposed into an iterative, recursive calculation. The final controller can be calculated through the designed virtual controllers.
  • An easy-to-implement probabilistic control algorithm is designed, and a Bernoulli probability gain controller is designed. This is more lenient than traditional control methods. In the case of simultaneous external interference, system uncertainty, and time delay, the closed-loop system can ensure that all indices are stable. Finally, a simulation example is used to demonstrate the effectiveness of the proposed method.

2. Description of Dynamic Modelling of a Deepwater Robot

This section mainly introduces the principle and components of the AUV. It can be seen from Figure 2 that the AUV model is a front-drive model, which is mainly powered by a vectored thruster at the tail. The four rudder planes at the robot's tail are used to change the AUV's direction. With the right amount of propulsion, the rotation angle of the propeller is controlled by three connecting rods to provide 360° of thrust vectoring.
An underactuated underwater robot dynamics model can be expressed as the following formula:
M ν ˙ + C ( ν ) ν + D ( ν ) ν + g ( ϵ ) = F
where M is the inertia matrix (including the added mass) and C ( ν ) is the Coriolis–centripetal matrix. D ( ν ) is the hydrodynamic damping matrix and g ( ϵ ) is the vector of restoring forces due to gravity and buoyancy. F = [ F X F Y F Z F K F M F N ] T is the vector of forces and moments; ν ˙ = [ u ˙ v ˙ w ˙ p ˙ q ˙ r ˙ ] T are the accelerations in the body frame; ν = [ u v w p q r ] T collects the surge, sway, and heave linear velocities and the roll, pitch, and yaw angular velocities in the moving frame; ϵ = [ x y z α β γ ] T collects the surge, sway, and heave positions and the roll, pitch, and yaw angles in the fixed frame. In the moving frame, the velocity of the AUV model can be decomposed into two parts, the linear velocity ν 1 = [ u v w ] T and the angular velocity ν 2 = [ p q r ] T . In the fixed frame, the configuration can be decomposed into the position ϵ 1 = [ x y z ] T and the orientation ϵ 2 = [ α β γ ] T . In this paper, we study the AUV's linear velocity ν 1 = [ u v w ] T and linear position ϵ 1 = [ x y z ] T .
Here, M = M R B + M A .
M_{RB} can be expressed as
M_{RB} = \begin{bmatrix} m & 0 & 0 & 0 & mz_G & -my_G \\ 0 & m & 0 & -mz_G & 0 & mx_G \\ 0 & 0 & m & my_G & -mx_G & 0 \\ 0 & -mz_G & my_G & I_x & -I_{xy} & -I_{xz} \\ mz_G & 0 & -mx_G & -I_{yx} & I_y & -I_{yz} \\ -my_G & mx_G & 0 & -I_{zx} & -I_{zy} & I_z \end{bmatrix}
The added-mass matrix M_A can be expressed as
M_A = \begin{bmatrix} X_{\dot u} & X_{\dot v} & X_{\dot w} & X_{\dot p} & X_{\dot q} & X_{\dot r} \\ Y_{\dot u} & Y_{\dot v} & Y_{\dot w} & Y_{\dot p} & Y_{\dot q} & Y_{\dot r} \\ Z_{\dot u} & Z_{\dot v} & Z_{\dot w} & Z_{\dot p} & Z_{\dot q} & Z_{\dot r} \\ K_{\dot u} & K_{\dot v} & K_{\dot w} & K_{\dot p} & K_{\dot q} & K_{\dot r} \\ M_{\dot u} & M_{\dot v} & M_{\dot w} & M_{\dot p} & M_{\dot q} & M_{\dot r} \\ N_{\dot u} & N_{\dot v} & N_{\dot w} & N_{\dot p} & N_{\dot q} & N_{\dot r} \end{bmatrix}
Here, C ( ν ) = C_{RB} ( ν ) + C_A ( ν ) , where C_A ( ν ) is the added-mass Coriolis–centripetal matrix.
C_A ( ν ) can be expressed as
C_A(\nu) = \begin{bmatrix} 0 & 0 & 0 & 0 & -a_3 & a_2 \\ 0 & 0 & 0 & a_3 & 0 & -a_1 \\ 0 & 0 & 0 & -a_2 & a_1 & 0 \\ 0 & -a_3 & a_2 & 0 & -b_3 & b_2 \\ a_3 & 0 & -a_1 & b_3 & 0 & -b_1 \\ -a_2 & a_1 & 0 & -b_2 & b_1 & 0 \end{bmatrix}
where
a_1 = X_{\dot u}u + X_{\dot v}v + X_{\dot w}w + X_{\dot p}p + X_{\dot q}q + X_{\dot r}r
a_2 = X_{\dot v}u + Y_{\dot v}v + Y_{\dot w}w + Y_{\dot p}p + Y_{\dot q}q + Y_{\dot r}r
a_3 = X_{\dot w}u + Y_{\dot w}v + Z_{\dot w}w + Z_{\dot p}p + Z_{\dot q}q + Z_{\dot r}r
b_1 = X_{\dot p}u + Y_{\dot p}v + Z_{\dot p}w + K_{\dot p}p + K_{\dot q}q + K_{\dot r}r
b_2 = X_{\dot q}u + Y_{\dot q}v + Z_{\dot q}w + K_{\dot q}p + M_{\dot q}q + M_{\dot r}r
b_3 = X_{\dot r}u + Y_{\dot r}v + Z_{\dot r}w + K_{\dot r}p + M_{\dot r}q + N_{\dot r}r
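The cross-terms above are simply the components of M_A ν grouped in threes, so C_A(ν) can be assembled mechanically. A minimal NumPy sketch (the added-mass matrix `MA` and velocity `nu` are placeholder inputs, and the block layout mirrors the skew-symmetric structure shown above):

```python
import numpy as np

def coriolis_added_mass(MA, nu):
    """Assemble C_A(v) from the added-mass matrix MA (6x6, symmetric)
    and the body-frame velocity nu = [u, v, w, p, q, r].
    The cross-terms a1..a3, b1..b3 are the components of MA @ nu,
    placed in skew-symmetric 3x3 blocks as in the matrix above."""
    prod = MA @ nu
    a = prod[:3]   # a1, a2, a3
    b = prod[3:]   # b1, b2, b3

    def skew(v):
        # the usual skew-symmetric cross-product matrix S(v)
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    CA = np.zeros((6, 6))
    CA[:3, 3:] = skew(a)
    CA[3:, :3] = skew(a)
    CA[3:, 3:] = skew(b)
    return CA
```

Because the blocks are skew-symmetric, C_A contributes no power to the rigid body (ν^T C_A ν = 0), a standard sanity check for Coriolis terms.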
Next, Formula (2) is introduced to explain the connection between the body frame and the fixed frame:
ϵ ˙ 1 = J ( ϵ ) ν 1
Here, J ( ϵ ) is the kinematic transformation matrix, which takes the following form:
J(\epsilon) = \begin{bmatrix} c\gamma\, c\beta & -s\gamma\, c\alpha + c\gamma\, s\beta\, s\alpha & s\gamma\, s\alpha + c\gamma\, s\beta\, c\alpha \\ s\gamma\, c\beta & c\gamma\, c\alpha + s\gamma\, s\beta\, s\alpha & -c\gamma\, s\alpha + s\gamma\, s\beta\, c\alpha \\ -s\beta & c\beta\, s\alpha & c\beta\, c\alpha \end{bmatrix}
Here s denotes the sine function and c the cosine function. The orientation variable ϵ 2 = [ α β γ ] T is undefined at β = ± 90 ° , so β is restricted to | β | < 90 ° . Under this restriction, the coordinate transformation ( ϵ 1 , ν 1 ) ↦ μ ( ϵ 1 , ϵ ˙ 1 ) is performed using (2), which yields:
\begin{bmatrix} \epsilon_1 \\ \dot\epsilon_1 \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & J(\epsilon) \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \nu_1 \end{bmatrix}
Here μ denotes this coordinate transformation. Under μ, the dynamic model of the underwater robot takes the following form:
M ϵ 1 ϵ ¨ 1 + C ϵ 1 ϵ ˙ 1 + D ϵ 1 ϵ ˙ 1 + g ϵ 1 = F ϵ 1
where
M_{\epsilon_1} = J(\epsilon)^{-T} M J(\epsilon)^{-1}, \quad C_{\epsilon_1} = J(\epsilon)^{-T}\left[C(\nu) - M J(\epsilon)^{-1}\dot J(\epsilon)\right]J(\epsilon)^{-1}, \quad D_{\epsilon_1} = J(\epsilon)^{-T} D(\nu) J(\epsilon)^{-1}, \quad g_{\epsilon_1} = J(\epsilon)^{-T} g(\epsilon), \quad F_{\epsilon_1} = J(\epsilon)^{-T} F
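The kinematic transformation can be checked numerically. A small sketch, assuming the standard ZYX Euler-angle rotation (roll α, pitch β, yaw γ) that J(ϵ) represents; the function name is illustrative:

```python
import numpy as np

def J_matrix(alpha, beta, gamma):
    """Kinematic transformation J(eps) mapping body-frame linear
    velocity to fixed-frame position rates (ZYX Euler angles:
    roll alpha, pitch beta, yaw gamma). Singular at beta = +/-90 deg."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [cg * cb, -sg * ca + cg * sb * sa,  sg * sa + cg * sb * ca],
        [sg * cb,  cg * ca + sg * sb * sa, -cg * sa + sg * sb * ca],
        [-sb,      cb * sa,                 cb * ca],
    ])
```

Since J(ϵ) is a rotation matrix, J J^T = I and det J = 1, which is why the inverses J(ϵ)^{-1} in the transformed model above always exist away from the β = ±90° singularity.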
Next, a series of assumptions are introduced.
Assumption 1. 
This paper studies the AUV's speed, \dot l = \left(\dot x^2 + \dot y^2 + \dot z^2\right)^{1/2}. The system starts from the origin: \epsilon_1(0) = [0\ 0\ 0]^T and \dot\epsilon_1(0) = [0\ 0\ 0]^T.
Assumption 2. 
The position ϵ 1 = [ x y z ] T and the angles ϵ 2 = [ α β γ ] T are known by means of different sensors (such as speed and angle sensors).
According to the features of the AUV system, we can use the dynamic model to obtain a broad mathematical model as follows.
d l 1 = l 2 d t + f 1 ( t , l ( t ) , u ) d t + g 1 ( l ) d ω d l 2 = u d t + f 2 ( t , l ( t ) , u ) d t + g 2 ( l ) d ω y = l 1
where l = ( l_1 , l_2 )^T \in R^2 are the states, u \in R is the input, and y \in R is the output. Here we introduce a stochastic process: ω is an m-dimensional standard Wiener process defined on the complete probability space ( Ω , Γ , P ), with Ω, Γ, and P being the sample space, filtration, and probability measure, respectively. The state l_2 is unmeasurable. The functions f_i : R^n \to R and g_i : R^n \to R^r, i = 1, 2, satisfy the linear growth condition (11) and are locally Lipschitz, with f_i(0) = 0 and g_i(0) = 0.
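The stochastic model above can be integrated numerically with the Euler–Maruyama scheme. A sketch under the assumption of a scalar Wiener process; `f1`, `f2`, `g1`, `g2`, and the feedback `u_of` are caller-supplied placeholders, not the paper's specific functions:

```python
import numpy as np

def euler_maruyama(f1, f2, g1, g2, u_of, l0, dt=1e-3, T=1.0, seed=0):
    """Euler-Maruyama integration of the stochastic strict-feedback
    model  dl1 = (l2 + f1(l)) dt + g1(l) dw,
           dl2 = (u  + f2(l)) dt + g2(l) dw,   y = l1."""
    rng = np.random.default_rng(seed)
    l = np.array(l0, dtype=float)
    n = int(T / dt)
    traj = np.empty((n + 1, 2))
    traj[0] = l
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))   # scalar Wiener increment
        u = u_of(l)
        l = l + np.array([l[1] + f1(l), u + f2(l)]) * dt \
              + np.array([g1(l), g2(l)]) * dw
        traj[k + 1] = l
    return traj
```

With the drift and diffusion perturbations set to zero and a stabilizing linear feedback, the scheme reduces to plain Euler integration of a double integrator, which is a convenient way to validate the loop.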
Definition 1. 
For a given function V C 2 ( R 2 ; R ) associated (2), the differential operator £ is defined as
\mathcal{L}V(l) = \frac{\partial V(l)}{\partial l} f(l,t) + \frac{1}{2}\mathrm{Tr}\left\{ g^T(l,t) \frac{\partial^2 V(l)}{\partial l^2} g(l,t) \right\}
Definition 2. 
Assume γ is a class-K function whose derivative γ′ exists, and R_2(l) is a matrix-valued function satisfying R_2(l) = R_2^T(l) > 0 for all l. If u = \alpha(l) is a feedback control law, continuous away from the origin with \alpha(0) = 0, that renders the equilibrium point l = 0 stable in probability and minimizes the cost function J(u) = E\left\{\int_0^\infty \left[ s(l) + \gamma\left(|R_2(l)^{1/2}u|\right)\right] d\tau\right\}, then the inverse optimal control problem for system (5) is said to be accomplished.
Next, an inverse optimal control scheme based on probability will be given:
Lemma 1. 
According to the following control method:
u = \alpha(l) = -R_2^{-1}\left(L_{g_2}V\right)^T \frac{\gamma'\left(\left|L_{g_2}V R_2^{-1/2}\right|\right)}{2\left|L_{g_2}V R_2^{-1/2}\right|}
Here, V(l) is a preselected Lyapunov function, γ′ is the derivative of the class-K function γ(·), and R_2(l) is a matrix-valued function satisfying R_2(l) = R_2^T(l) > 0. Then
u^* = \alpha^*(l) = -\frac{\beta}{2} R_2^{-1}\left(L_{g_2}V\right)^T \frac{\gamma'\left(\left|L_{g_2}V R_2^{-1/2}\right|\right)}{\left|L_{g_2}V R_2^{-1/2}\right|}, \quad \beta \ge 2
is able to make the cost function
J(u) = E\left\{\int_0^\infty \left[ s(l) + \beta^2 \gamma\left(\frac{2}{\beta}\left|R_2(l)^{1/2}u\right|\right)\right] d\tau\right\}
attain its minimum for system (5), where s(l) satisfies
s(l) = 2\beta\left[\gamma\left(\left|L_{g_2}V R_2^{-1/2}\right|\right) - L_f V - \frac{1}{2}\mathrm{Tr}\left\{g_1^T \frac{\partial^2 V}{\partial l^2} g_1\right\}\right] + \beta(\beta - 2)\,\gamma\left(\left|L_{g_2}V R_2^{-1/2}\right|\right)
Assumption 3. 
C_f, C_g \ge 0 are known constants that satisfy the following inequalities:
\left|f_i(l)\right| \le C_f\left(|l_1| + \cdots + |l_i|\right), \quad \left|g_i(l)\right| \le C_g\left(|l_1| + \cdots + |l_i|\right)
Young's inequality: for any two vectors l and y of the same dimension, l^T y \le \frac{\lambda^p}{p}|l|^p + \frac{1}{q\lambda^q}|y|^q, where \lambda > 0, p > 1, q > 1, and p^{-1} + q^{-1} = 1.
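Young's inequality, used repeatedly in the derivations below, is easy to check numerically. A minimal sketch; the function name and test vectors are illustrative:

```python
import numpy as np

def young_bound(l, y, lam=1.0, p=2.0):
    """Right-hand side of Young's inequality
    l^T y <= (lam^p / p)|l|^p + (1 / (q lam^q))|y|^q,
    with q chosen so that 1/p + 1/q = 1."""
    q = p / (p - 1.0)
    return (lam**p / p) * np.linalg.norm(l)**p \
         + (1.0 / (q * lam**q)) * np.linalg.norm(y)**q
```

The free parameter λ is what lets the backstepping proofs trade a cross-term against tunable multiples of squared norms.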
Based on the assumptions, the inverse optimization problem is solvable by designing a smooth output feedback controller for system (5) so that the closed-loop system at the origin is globally asymptotically stable.

3. State Keeping Probability-Dependent Gain-Scheduled Control Design

In the third section, we mainly introduce the design of the state-preserving probability gain controller, which includes three parts. The first part is the introduction of a high-gain observer; the second part is the introduction of backstepping control; the third part is the gain coefficient design of a Bernoulli probability distribution controller.

3.1. High-Gain Observer Design

The measurement of a system's states has long been a thorny problem in the control field. Some states are unmeasurable owing to the sensor's complexity or the system's design and cannot be measured well directly, which increases the difficulty of controlling the system. Researchers have therefore designed observers to recover all the state values. In this paper, a high-gain observer is introduced; it differs from the classical Luenberger observer in that the observer's gain coefficients are powers of the gain H, which significantly improves the observer's accuracy.
The state l 1 is measurable; the state l 2 is unmeasurable. Next, we construct an observer, which has high gain:
\dot{\hat l}_1 = \hat l_2 + H K_1\left(l_1 - \hat l_1\right), \qquad \dot{\hat l}_2 = u + H^2 K_2\left(l_1 - \hat l_1\right)
In the above observer, H > 0 is a design parameter to be determined and the K_i > 0 are known constants. The matrix A = \begin{bmatrix} -K_1 & 1 \\ -K_2 & 0 \end{bmatrix} is Hurwitz, and P is a positive definite matrix satisfying A^T P + P A = -I. According to system (5), the scaled estimation error can be defined in the following form:
\tilde l = \left(\tilde l_1, \tilde l_2\right)^T, \quad \tilde l_i = \frac{l_i - \hat l_i}{H^{i-1}}, \quad i = 1, 2
In line with (5) and (12), we can obtain the system error model as follows
d l ˜ = H A l ˜ d t + F ( l ) d t + G ( l ) d ω
where
F(l) = \left(f_1(l), \frac{1}{H}f_2(l)\right)^T, \quad G(l) = \left(g_1^T(l), \frac{1}{H}g_2^T(l)\right)^T
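One Euler step of the high-gain observer above can be sketched as follows; the values of H, K_1, K_2, and the step size used in the check are illustrative choices, not values from the paper:

```python
import numpy as np

def observer_step(lhat, y, u, H, K=(1.0, 1.0), dt=1e-3):
    """One Euler step of the high-gain observer
    lhat1' = lhat2 + H*K1*(y - lhat1),
    lhat2' = u     + H^2*K2*(y - lhat1),
    where y = l1 is the measured output."""
    e = y - lhat[0]                       # output estimation error
    d1 = lhat[1] + H * K[0] * e
    d2 = u + H**2 * K[1] * e
    return lhat + dt * np.array([d1, d2])
```

Because the injection gains scale as H and H^2, the estimation error of a matched plant decays at a rate that grows with H, which is the point of the high-gain construction.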
Choosing V_0(\tilde l) = 3\tilde l^T P \tilde l and applying \left|\frac{l_i}{H^{i-1}}\right| \le \left|\frac{\hat l_i}{H^{i-1}}\right| + |\tilde l_i|, \left(\sum_{i=1}^{2} a_i\right)^2 \le 2\sum_{i=1}^{2} a_i^2, H > 0, Definition 1, and Assumption 3, we can obtain
\begin{aligned}
\mathcal{L}V_0 &= 3H\tilde l^T\left(A^TP + PA\right)\tilde l + 6\tilde l^T P F(l) + 3\mathrm{Tr}\left\{G^T(l) P G(l)\right\} \\
&\le -3H|\tilde l|^2 + 3\left|\tilde l^T P\right|^2 + 3\sum_{i=1}^{2}\left|\frac{f_i(l)}{H^{i-1}}\right|^2 + 3\lambda_{max}(P)\sum_{i=1}^{2}\left|\frac{g_i(l)}{H^{i-1}}\right|^2 \\
&\le -3H|\tilde l|^2 + 3\|P\|^2|\tilde l|^2 + c_1\left(|l_1| + \frac{|l_2|}{H}\right)^2 \\
&\le -3H|\tilde l|^2 + 3\|P\|^2|\tilde l|^2 + c_1\left(\sum_{i=1}^{2}\left|\frac{\hat l_i}{H^{i-1}}\right| + \sum_{i=1}^{2}|\tilde l_i|\right)^2 \\
&\le -\left(3H - 3\|P\|^2 - 4c_1\right)|\tilde l|^2 + 4c_1\left(\hat l_1^2 + \frac{\hat l_2^2}{H^2}\right)
\end{aligned}
where
c_1 = 3\left(C_f^2 + \lambda_{max}(P)C_g^2\right)\left(\sum_{i=1}^{2}\frac{1}{H^{i-1}}\right)^2 = 3\left(C_f^2 + \lambda_{max}(P)C_g^2\right)\left(1 + \frac{1}{H}\right)^2

3.2. Output-Feedback Controller Design

The second part is the design of the output feedback controller. In the control system of the AUV, the states are not entirely measurable. At this time, it is necessary to use the system’s output state to design the output feedback controller. However, due to the system’s complexity, the system’s states are firstly transformed to reduce the difficulty and complexity of controller design. On the other hand, according to the backstepping controller’s design characteristics, the nonlinear system’s complexity can be well considered, and the controller can be designed even when the state is not entirely measurable. However, at the last iteration of the derivation, the dummy controller is offset, and only the final controller is obtained.
z_1 = \hat l_1, \quad z_2 = \hat l_2 - \alpha_1\left(\hat l_{[1]}\right)
Here \alpha_1(\hat l_{[1]}) is a virtual controller, designed step by step.
Step 1: defining
V 1 ( l ˜ , z 1 ) = V 0 ( l ˜ ) + 1 2 z 1 2
According to Definition 1, Young’s inequality, and Equations (12) and (16)–(19)
\begin{aligned}
\mathcal{L}V_1 &\le -\left(3H - 3\|P\|^2 - 4c_1\right)|\tilde l|^2 + 4c_1\left(\hat l_1^2 + \frac{\hat l_2^2}{H^2}\right) + z_1\left(\hat l_2 + HK_1\tilde l_1\right) \\
&\le -\left(3H - 3\|P\|^2 - 4c_1\right)|\tilde l|^2 + 4c_1 z_1^2 + 4c_1\frac{\hat l_2^2}{H^2} + z_1\alpha_1 + z_1 z_2 + H\tilde l_1^2 + \frac{H}{4}K_1^2 z_1^2
\end{aligned}
Using (18) and (a + b)^2 \le 2a^2 + 2b^2, and choosing H \ge 8c_1, we can obtain
4c_1 z_1^2 \le \frac{H}{2}z_1^2, \quad 4c_1\frac{1}{H^2}\hat l_2^2 \le 8c_1\frac{1}{H^2}z_2^2 + 8c_1\frac{1}{H^2}\alpha_1^2
In line with Equations (20) and (21), the first virtual controller takes the following form:
\alpha_1(\hat l_1) = -Hb_1 z_1, \quad b_1 = \frac{1}{2} + \frac{K_1^2}{4} + 2
renders
\begin{aligned}
\mathcal{L}V_1 &\le -\left(3H - 3\|P\|^2 - 4c_1\right)|\tilde l|^2 + H\left(\frac{1}{2} + \frac{K_1^2}{4}\right)z_1^2 + 8c_1\frac{1}{H^2}z_2^2 + 8c_1\frac{1}{H^2}\alpha_1^2 + z_1\alpha_1 + z_1z_2 + H\tilde l_1^2 \\
&\le -\left(2H - 3\|P\|^2 - 4c_1\right)|\tilde l|^2 - \left(2H - 8c_1b_1^2\right)z_1^2 + 8c_1\frac{1}{H^2}z_2^2 + z_1z_2
\end{aligned}
In the last step, through backstepping theory, we select the controller.
z_2 = \hat l_2 + Hb_1\hat l_1, \qquad dz_2 = \left(u + H^2 d_{20}\tilde l_1 + H^2 d_{21}z_1 + H^2\bar d_{2,1}z_1 + H^2 d_{22}z_2\right)dt
where
d i 0 = l i + b i 1 l i 1 + b i 1 b i 2 l i 2 + + b i 1 b i 2 b 1 l 1 d i j = k = j 1 i 1 b k b j k = j i 1 b k , j = 1 , , i 2 , b 0 = 0 d ¯ i , i 1 = b i 1 b i 2 b i 1 2 , d i i = b i 1
By (24), we choose the control law
u\left(\hat l_{[2]}\right) = -Hb_2 z_2 = -M(\hat l)z_2 = -\sum_{i=1}^{2} H^i \left(\prod_{j=2-(i-1)}^{2} b_j\right)\hat l_{2-(i-1)}
renders
\mathcal{L}V_2 \le -\left(H - 3\|P\|^2 - 4c_1\right)|\tilde l|^2 - \sum_{j=1}^{i-1} H^{2j-2}\left(2H - 8c_1 b_j^2\right)z_j^2 - \frac{1}{H^2}Hz_2^2
where
V_2\left(\tilde l, z_{[2]}\right) = 3\tilde l^T P \tilde l + \sum_{j=1}^{2}\frac{1}{2H^{2(j-1)}}z_j^2
M ( l ^ ) = H b 2 , b 2 > 0 is a real number that satisfies (22) and does not depend on H.
The third part introduces the controller coefficients based on the Bernoulli probability distribution. In previous research, few authors have combined the Bernoulli probability distribution with controller design. In this paper, the Bernoulli probability distribution is used to transform a continuous controller into a discrete, intermittent controller. This not only saves the controller's energy consumption, extending the cruising range, but also reduces the required computing power.
Theorem 1. 
A random variable ξ(t) is introduced here, which models the occurrence of control action for the AUV and satisfies the Bernoulli distribution as follows:
\mathrm{Prob}\{\xi(t) = 1\} = E\{\xi(t)\} = P(t), \qquad \mathrm{Prob}\{\xi(t) = 0\} = 1 - E\{\xi(t)\} = 1 - P(t)
In the above Formula (29), when P(t) is a constant proportion, its value lies in the interval [P_1, P_2], where P_1 and P_2 are its minimum and maximum values, respectively. For convenience of calculation in this article, we assume that the parameter ξ(t) is independent over time.
In this paper, we are interested in designing a probability gain controller as follows:
u ( l ) = K ( P ) l ( t ) ,
For Formula (30), the parameter K ( P ) is the controller gain, which can be designed as follows:
K ( P ) = K 0 + P ( t ) K u ,
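The Bernoulli-scheduled gain can be sketched as follows; the function name and the numeric values in the check are illustrative, not from the paper:

```python
import numpy as np

def probability_gain(K0, Ku, P, rng):
    """Sample xi ~ Bernoulli(P) and return the scheduled gain
    K = K0 + xi*Ku: the Ku part of the actuation fires only when
    xi = 1, so the actuation is intermittent while the expected
    gain is E[K] = K0 + P*Ku."""
    xi = float(rng.random() < P)   # Bernoulli(P) sample
    return K0 + xi * Ku
```

Averaged over many samples, the realized gain matches K_0 + P K_u, which is why the intermittent controller can retain the control effect of the continuous one on average.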
Remark 1. 
The controller's gain consists of K_0, K_u, and the time-invariant parameter P(t), which in this paper is regarded as a fixed value; this makes the controller less conservative than alternatives. The gain filtering problem has attracted many researchers' attention, and owing to the special structure of the controller gain, this gain controller deserves particular attention. The constant gains K_0 and K_u are obtained using this paper's main results. The Bernoulli probability gain is introduced into the backstepping controller as a stochastic gain, which addresses the stochastic disturbance while limiting the complexity of the controller design.

3.3. Stability Analysis of Stochastic Nonlinear System

Theorem 2. 
Let H_1^* \ge 0 and H > H_1^* be constants, and suppose system (5) satisfies Assumption 3 under the observer (8) and the control law (26). Then the following conclusions hold:
(1)
For any initial condition (l_0, \hat l_0), the closed-loop system (5), (8), (26) has a unique solution.
(2)
The equilibrium of the closed-loop system is globally stable in probability.
(3)
Control law
u^* = \alpha^*(\hat l) = -\beta H b_2 z_2, \quad \beta \ge 2
makes the system stable, and the following cost function
J(u) = E\left\{\int_0^\infty \left[ s(\tilde l, \hat l) + \frac{1}{H^2}M^{-1}(\hat l)u^2 \right] d\tau\right\}
is minimized. The function s(\tilde l, \hat l) has the same definition as in Equation (10), and \hat l = (\hat l_1, \hat l_2)^T, \bar f(\tilde l, \hat l) = \left((HA\tilde l)^T + F^T(l),\ HK_1\tilde l_1 + \hat l_2,\ H^2 K_2\tilde l_1\right)^T, \bar g_1(\tilde l, \hat l) = (G^T(l), 0)^T, \bar g_2(\tilde l, \hat l) = (0, 1)^T, V = V_2.
Proof. 
According to the definition of d_{ii} and Equations (22) and (25), we can obtain b_i > b_{i-1}, b_0 = 0, and b_i > 1 (i = 1, 2). Note that \max\{4c_1, 8c_1, 8c_1b_1^2\} = 8c_1b_1^2. If
H > 8c_1 b_1^2 + 3\|P\|^2
is satisfied, then in line with Theorem 1, Equation (27), and Equation (28), the above conclusions (1) and (2) follow.
When i = 1, by examining Equations (34), (17), and (22), we observe that c_1 depends on H while b_1 does not. It is well known that H > 24\left(C_f^2 + \lambda_{max}(P)C_g^2\right)b_1^2\left(1 + \frac{1}{H}\right)^2 + 3\|P\|^2 is equivalent to the following:
H^3 > 24\left(C_f^2 + \lambda_{max}(P)C_g^2\right)b_1^2\left(H + 1\right)^2 + 3\|P\|^2 H^2
renders
H^3 + \sum_{i=0}^{2} a_i H^i > 0
where a_0 = -\Delta, a_1 = -2\Delta, a_2 = -\left(\Delta + 3\|P\|^2\right), and \Delta = 24\left(C_f^2 + \lambda_{max}(P)C_g^2\right)b_1^2. Now, we discuss the selection of H_1^* in two cases:
(i)
If at least one of H_1, H_2, H_3 is a positive real number, choose H_1^* = \max_{1 \le i \le 3}\{H_i\}, the largest real root of
H = 8c_1 b_1^2 + 3\|P\|^2 = 24\left(C_f^2 + \lambda_{max}(P)C_g^2\right)b_1^2\left(1 + \frac{1}{H}\right)^2 + 3\|P\|^2
(ii)
Otherwise, select H_1^* = 0. Since H_1^* \ge 0 must hold, (31) is satisfied for any H > H_1^*. The next step is to verify conclusion (3). Equations (12) and (14) can be expressed as
d l ˜ d l ^ = f ¯ ( l ˜ , l ^ ) d t + g ¯ 1 ( l ˜ , l ^ ) d ω + g ¯ 2 ( l ˜ , l ^ ) u d t
By choosing \gamma(r) = \frac{1}{2}H^2 r^2, we obtain \gamma'(r) = H^2 r and (\gamma')^{-1}(r) = \frac{r}{H^2}. According to (7), (37), and V = V_2, we can obtain
u = \alpha(\hat l) = -R_2^{-1}(\hat l)\cdot\frac{1}{H^2}z_2\cdot\frac{H^2}{2} = -\frac{1}{2}R_2^{-1}(\hat l)z_2
Choosing R_2(\hat l) = \left(2M(\hat l)\right)^{-1} = \frac{1}{2Hb_2} yields u = -M(\hat l)z_2, which has the same form as Equation (26). In line with conclusion (2) and Lemma 1, the controller can be expressed as follows:
u^* = \alpha^*(\hat l) = -\beta R_2^{-1}(\hat l)\cdot\frac{1}{H^2}z_2\cdot\frac{H^2}{2} = -\frac{\beta}{2}R_2^{-1}(\hat l)z_2 = \beta\alpha(\hat l), \quad \beta \ge 2
minimizes the cost function. □
Remark 2. 
In [8,31,32], the authors studied robust controller and observer design for uncertain time-delay systems. In this stability analysis, we obtain the form of the optimal-solution controller. For the nonlinear system (5) satisfying the assumptions and theorems, the optimal values of the controller parameters are obtained by introducing the cost function. Moreover, H is the optimal value obtained through the stability analysis; the value of H can be further reduced so that the constraints on the entire controller become smaller. Finally, the proof shows that the optimal value can be obtained under the guidance of the theory. Under the inverse optimal theory, the designed probability gain controller is more relaxed, can adapt to many more nonlinear systems, and expands the application range of the controller.

4. Underwater Robot Transportation Model Example

Remark 3. 
In this section, numerical simulations are carried out in which stochastic noise is used in place of the AUV's external stochastic disturbances [33]. In a realistic environment, however, disturbances are not ideal noise of this kind, so this approach is an idealization.
Consider the following stochastic nonlinear system:
\dot l_1 = l_2 + \frac{1}{10}l_1 l_2 + \frac{1}{10}l_1 l_2\,\dot\omega, \qquad \dot l_2 = u + \frac{1}{10}l_2\sin l_1 + \frac{1}{10}l_2\sin l_1\,\dot\omega, \qquad y = l_1
Assumption 3 is satisfied. Here, we choose C_f = \frac{1}{10} and C_g = \frac{1}{10} and the high-gain observer as follows:
\dot{\hat l}_1 = \hat l_2 + H\left(l_1 - \hat l_1\right), \qquad \dot{\hat l}_2 = u + H^2\left(l_1 - \hat l_1\right)
The selected parameters are K_1 = K_2 = 1; H > 0 is the unknown real gain to be determined. By the definitions \tilde l_1 = l_1 - \hat l_1 and \tilde l_2 = \frac{l_2 - \hat l_2}{H}, the following is obtained:
d\tilde l = HA\tilde l\,dt + F(l)\,dt + G(l)\,d\omega, \quad \tilde l = \left(\tilde l_1, \tilde l_2\right)^T
F(l) = \left(\frac{1}{10}l_1 l_2,\ \frac{1}{10H}l_2\sin l_1\right)^T, \quad G(l) = \left(\frac{1}{10}l_1 l_2,\ \frac{1}{10H}l_2\sin l_1\right)^T
Introduce the transformation
z_1 = \hat l_1, \quad z_2 = \hat l_2 - \alpha_1(\hat l_1)
According to the theorem and controller law proposed, we can obtain the probability controller.
\alpha_1(\hat l_1) = -Hb_1 z_1, \quad b_1 = \frac{1}{2} + \frac{1}{4} + 2 = 2.75, \quad H \ge 8c_1
u = -Hb_2 z_2, \quad b_2 = \frac{d_{20}^2}{4} + \frac{d_{21}^2}{4} + d_{22} + 2 = 19.03
Meanwhile,
c_1 = \frac{3}{100}\left(1 + \lambda_{max}(P)\right)\left(1 + \frac{1}{H}\right)^2
Here P satisfies A^T P + PA = -I, with \|P\| = \frac{\sqrt{6}}{2} and \lambda_{max}(P) = 1 + \frac{\sqrt{2}}{2}; d_{20} = 1 + b_1 = 3.75, d_{21} = 1 - b_1^2 = -6.56, d_{22} = b_1 = 2.75. In this part, by using Theorem 1 for the specific AUV control system described in Figure 2, we can lay out a Bernoulli probability controller as follows: u = -K(P)Hb_2 z_2. We select the unknown parameter as H > H_1^* = 9.9609.
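The closed-loop behaviour can be sketched by Euler–Maruyama simulation of the example system with the observer and the backstepping law u = −Hb₂z₂. The step size, horizon, and the smaller initial condition [1, 0] are illustrative choices (the paper's figures use l₁(0) = 5), and the Bernoulli gain is omitted for brevity:

```python
import numpy as np

def simulate(T=3.0, dt=1e-4, H=10.0, b1=2.75, b2=19.03, seed=1):
    """Euler-Maruyama simulation of the example plant with the
    high-gain observer and the backstepping law u = -H*b2*z2,
    where z2 = lhat2 + H*b1*lhat1 (b1, b2, H from the text)."""
    rng = np.random.default_rng(seed)
    l = np.array([1.0, 0.0])       # plant state (illustrative initial condition)
    lhat = np.array([0.0, 0.0])    # observer state
    for _ in range(int(T / dt)):
        dw = rng.normal(0.0, np.sqrt(dt))      # Wiener increment
        z2 = lhat[1] + H * b1 * lhat[0]
        u = -H * b2 * z2
        f1 = g1 = 0.1 * l[0] * l[1]            # drift/diffusion of l1
        f2 = g2 = 0.1 * l[1] * np.sin(l[0])    # drift/diffusion of l2
        e = l[0] - lhat[0]                     # output estimation error
        l = l + np.array([l[1] + f1, u + f2]) * dt + np.array([g1, g2]) * dw
        lhat = lhat + np.array([lhat[1] + H * e, u + H**2 * e]) * dt
    return l, lhat
```

Since the noise intensities vanish at the origin (g_i(0) = 0), the state settles near zero once the observer has converged, matching the qualitative behaviour reported in Figures 3–5.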

4.1. Simple State Keeping Results in the Presence of Underwater Stochastic Disturbances

In line with the above analysis, we conduct a simulation according to the proposed strategy to verify its effectiveness. The initial conditions are chosen as [ l_1(0), l_2(0), \hat l_1(0), \hat l_2(0) ]^T = [ 5, 0, 0, 0 ]^T; Figure 3, Figure 4 and Figure 5 show the simulation results.

4.2. Without the Probability Gain Simulation Results in the Presence of Underwater Stochastic Disturbances

In this section, we will compare the probability gain simulation results with the ordinary nonlinear backstepping control. From Figure 6, Figure 7 and Figure 8, it can be seen that there are random disturbances in the state and observed values of the system, and the probability distribution observer can eliminate and suppress the disturbance very well.
By comparing Figure 3, Figure 4 and Figure 5 with Figure 6, Figure 7 and Figure 8, we can see that in the absence of the Bernoulli probability controller, the system's state is affected by external interference, and fluctuations remain even under the action of the controller. After the Bernoulli probability controller is added, the simulation results in Figure 3, Figure 4 and Figure 5 show that the probability gain controller eliminates the external interference, while the settling time and efficiency of the controller are not affected. Finally, comparing with the PID control results in Figure 9 and Figure 10, the control time is reduced from 10 s to 2 s. Because of the stronger control action, the peak control effort increases from 4 to 200, which is within our tolerance range. Moreover, the controller's action time is reduced from 15 s to less than 1 s.

5. Conclusions and Future Prospects

According to the simulation results, the system’s unmeasurable states can be observed using a high-gain observer, and the complex nonlinear control problem is transformed into a parameter selection problem by introducing coordinate transformation, which simplifies the calculation process and complexity. The paper assumes that the external disturbance randomly occurs, and the concept of a probability controller is introduced. With the control effect unchanged, it can convert continuous control to intermittent control. The endurance problem of the AUV is solved under intermittent control. The control method significantly saves energy and extends the cruising range. Based on the backstepping control method, the complex nonlinear controller is decomposed into a computational method that can perform iterative recursion. The final controller can be calculated through the designed virtual controller. An easy-to-implement probabilistic control algorithm is designed, and a Bernoulli probability gain controller with constant parameters and time invariance is designed. This is more lenient than traditional control methods. In the case of simultaneous external interference, system uncertainty, and time delay, the closed-loop system can ensure that all states are stable. Finally, a simulation example is used to demonstrate the effectiveness of the proposed design method.
In this research, a continuous-time system is mainly studied, which limits the application scope of the probability gain controller because the information obtained in an actual project and the execution of commands are discrete. One next step in our research is to discretize the continuous system so that the probability gain control method can be applied to discrete systems. On the other hand, the AUV system considered here is simplified: this article mainly considers three degrees of freedom, which differs from actual engineering practice. Another direction is therefore to extend the controller to six degrees of freedom for practical engineering applications.

Author Contributions

Data curation, Y.P.; Formal analysis, L.G.; Funding acquisition, L.G. and Q.M.; Investigation, Y.P.; Methodology, L.G. and Q.M.; Resources, Q.M.; Supervision, L.G.; Visualization, Y.P.; Writing—original draft, Y.P.; Writing—review & editing, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 61807010, the Zhejiang Provincial Natural Science Foundation of China under Grant LZ21E050002, the Key R&D Program of Zhejiang Province under Grant 2021C03013, and the Fundamental Research Funds for Provincial Universities of Zhejiang under Grants GK199900299012-026 and GK219909299001-309.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this research are included in this paper.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

Figure 1. (a) Typical autonomous underwater vehicle model; (b) an autonomous underwater vehicle underwater experiment.
Figure 2. Body-fixed frame and earth-fixed reference frame for underwater robot.
Figure 3. (a) State l1 under probabilistic gain controller; (b) state l2 under probabilistic gain controller.
Figure 4. (a) Observer l1 under probabilistic gain controller; (b) observer l2 under probabilistic gain controller.
Figure 5. (a) Time-varying probability distribution; (b) controller u under probabilistic gain controller.
Figure 6. (a) State l1 without probabilistic gain controller; (b) state l2 without probabilistic gain controller.
Figure 7. (a) Observer l1 without probabilistic gain controller; (b) observer l2 without probabilistic gain controller.
Figure 8. Controller u without probabilistic gain controller.
Figure 9. (a) State l1 under PID controller; (b) state l2 under PID controller.
Figure 10. Controller u under PID controller.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
