Article

Asynchronous Sliding Mode Control of Networked Markov Jump Systems via an Asynchronous Observer Approach Based on a Dynamic Event Trigger

Jianping Deng, Haocheng Lou and Baoping Jiang
1 Department of Electronics & Information Engineering, Suzhou Vocational University, Suzhou 215106, China
2 School of Electronics and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(21), 4182; https://doi.org/10.3390/electronics13214182
Submission received: 10 September 2024 / Revised: 17 October 2024 / Accepted: 23 October 2024 / Published: 25 October 2024
(This article belongs to the Special Issue Advanced Control Strategies and Applications of Multi-Agent Systems)

Abstract: This paper explores the utilization of sliding mode control, which relies on an asynchronous observer, for Markov jump systems subject to external disturbances. Firstly, given that the system's mode is not directly measurable and could potentially differ from the observer's and controller's mode, the paper constructs an asynchronous observer employing a hidden Markov model. Secondly, a sliding surface is designed to correspond with the asynchronous observer. Moreover, a multi-parameter event-triggered mechanism is incorporated into the observer design to alleviate bandwidth strain. Thirdly, by applying the integrated sliding mode control law, we ensure that the system state trajectories reach the sliding surface within a finite time. Fourthly, H∞ stability is established by making use of a Lyapunov function. Lastly, a practical example is presented to illustrate the effectiveness of the established method.

1. Introduction

The Markov jump system (MJS) [1] is a complex stochastic process characterized by mode changes that occur at random time instants, with transition probabilities that depend only on the current mode and not on past history. MJSs are applicable to various practical problems, such as communication networks [2], financial markets [3], and biological systems [4]. The investigation of stability is a crucial aspect in the analysis of MJSs [5]. Previous research on control and filtering design includes control methods based on Lyapunov stability theory [6], control methods based on H∞ control theory [7], and filtering methods based on asynchronous filters [8]. Currently, there are some newer control and filtering design methods, such as event-triggered control methods and fuzzy control methods [9]. These methods can effectively solve the stability problem of MJSs and have broad application prospects in practice. The transition rate (TR) refers to the speed of transition from one mode to another in an MJS, and it significantly influences the stability and performance of the MJS: the higher the TR, the higher the frequency of system jumps, and the more difficult it is to ensure system stability. Therefore, when designing controllers or observers for MJSs, it is crucial to consider the impact of the TRs. In practical applications, asynchrony can arise between the system mode, the observer, and the controller due to factors such as data loss or malfunctions, and this asynchrony can lead to instability and reduced performance [10,11,12]. MJSs are therefore inherently complex due to their stochastic nature and the abrupt changes in system modes, and controlling them is particularly challenging when the operating mode cannot be directly measured and may be asynchronous with the observer and controller. To tackle this challenge, some preliminary work has been reported. For example, [13] proposed a solution in which the controller stabilizes the underlying system even when the real system mode does not match the observed mode, with related results in [14,15,16]. Recently, the hidden Markov model (HMM) has motivated us to address uncertain operation modes in MJSs, since an HMM treats the unobservable mode changes through an observed component that acts as a detector and provides information on the changes in operation modes.
Sliding mode control (SMC) is advantageous for complex systems due to its robustness, fast response, and high accuracy. In particular, SMC combined with adaptive methods or neural networks improves control quality by estimating system uncertainty and disturbances in real time or by learning the system's nonlinear characteristics (see [17,18,19,20,21,22] and the references therein). Simultaneously, SMC has been successfully applied to address various complex problems in MJSs. For example, [23] investigated adaptive SMC design for the stabilization of MJSs through a mode-dependent control law, despite completely unknown transition probabilities; an H∞ passivity analysis via SMC for discrete-time singular semi-MJSs with parameter uncertainty, nonlinear perturbation, and external disturbances was presented in [24]; and Ref. [25] studied SMC design for discrete-time descriptor MJSs and the associated stochastic admissibility analysis in the presence of two independent homogeneous Markov chains. However, asynchronous SMC design for the analysis and synthesis of MJSs remains an interesting issue due to the uncertainty of operation modes, which leads us to investigate HMM-based SMC design for MJSs.
Continuous data transmission in networks can be bandwidth-intensive, so methods are needed that reduce unnecessary data transmission without compromising control accuracy. Recently, the event-triggered mechanism (ETM) [26,27] has emerged as a network communication scheme that transmits data only when specific predefined conditions, depending on a selected threshold, are satisfied. Compared to traditional communication schemes, the ETM significantly enhances network communication efficiency, reduces network bandwidth consumption, and bolsters the performance and efficiency of the entire system, as discussed in [28,29]. Traditional control methods often rely on periodic sampling, where the system is sampled and controlled at fixed intervals; too short a sampling interval may overload the system, while too long an interval can diminish control precision. In contrast, the ETM initiates data transmission and control actions based on system state changes, thus circumventing the drawbacks of periodic sampling. The ETM also holds significant advantages for networked MJSs. For example, a static ETM was introduced in [30] to address fuzzy control issues in MJSs with general transition probabilities. The work in [31] explored an event-triggered optimal control problem for nonlinear MJSs using adaptive dynamic programming algorithms. Meanwhile, ref. [32] tackled static event-triggered SMC design for MJSs facing uncertainty and disturbances, with further details available in [33] and the cited references. However, the desire to enhance network efficiency beyond what a static ETM offers motivates the investigation of dynamic ETM-based asynchronous SMC design for MJSs.
Based on the above analysis, the primary objective of this study is to address the challenges associated with controlling networked MJSs in the presence of asynchrony and external disturbances. The aims are to develop a novel SMC strategy that operates asynchronously with respect to the system mode, to design an asynchronous observer based on the HMM that can accurately estimate the system state even when the mode of the system is unknown or changes abruptly, to implement a dynamic event-triggered mechanism to optimize data transmission between the system and the controller, and to conduct a thorough stability and performance analysis of the closed-loop system with the proposed control strategy. The key contributions of this paper are summarized as follows: (1) The paper proposes an asynchronous observer using an HMM to handle the asynchrony between the system and the controller/observer, which does not require real-time mode information of the system. (2) A multi-parameter ETM is introduced to reduce bandwidth usage by transmitting data only when specific conditions are met, thus improving network communication efficiency. (3) A novel integral asynchronous sliding mode surface is designed for controlling MJSs with the properties of robustness and fast response, and the sliding motion is ensured in finite time by the asynchronous SMC law. (4) By virtue of Lyapunov stochastic stability theory, this paper provides an H∞ stochastic stability analysis of the closed-loop system, ensuring that the control design meets a certain level of disturbance attenuation.
Notation 1.
$\mathbb{E}\{\cdot\}$ denotes mathematical expectation, $I$ represents the identity matrix, and $0$ represents the zero matrix. A matrix $X$ is considered symmetric positive definite (semi-definite) if it satisfies $X > 0$ ($X \ge 0$). $(\Omega, \mathcal{F}, \mathcal{P})$ refers to a probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of $\Omega$, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$. $\|\cdot\|$ and $\|\cdot\|_1$ denote the 2-norm and 1-norm of vectors or matrices, respectively. The symmetric element of a symmetric matrix is denoted by $*$. Additionally, $\mathrm{He}\{X\}$ denotes the sum of $X$ and its transpose, i.e., $X + X^T$.

2. Preliminaries and Problem Statement

Consider the following MJS defined on the probability space $(\Omega, \mathcal{F}, \mathcal{P})$, expressed by
$$\begin{cases}\dot z(t) = A_{r_t} z(t) + B_{r_t} u(t) + D_{r_t}\omega(t),\\ y(t) = C_{r_t} z(t),\end{cases}$$
where $r_t$ belongs to the set $S = \{1, 2, \ldots, s\}$ as the mode of a continuous-time Markov process. The matrices $A_{r_t}$, $B_{r_t}$, $D_{r_t}$, and $C_{r_t}$ are constant matrices that switch according to $r_t$, with $B_{r_t}$ being of full column rank. The system state is represented by $z(t) \in \mathbb{R}^n$, while the input and external disturbance are represented by $u(t)$ and $\omega(t)$, respectively. The system output is denoted by $y(t) \in \mathbb{R}^p$. The Markov process is characterized by a TR matrix $\Pi = [\iota_{ij}]$, and the conditional probabilities of $r_t$ are expressed as follows:
$$\Pr\{r_{t+d} = j \mid r_t = i\} = \begin{cases}\iota_{ij}\, d + o(d), & i \neq j,\\ 1 + \iota_{ii}\, d + o(d), & i = j,\end{cases}$$
where $\lim_{d \to 0} o(d)/d = 0$ $(d > 0)$, $\iota_{ij} > 0$ represents the TR from mode $i$ to mode $j$ $(i \neq j)$, and $\iota_{ii} = -\sum_{j \neq i}\iota_{ij} < 0$ for each $i \in S$.
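Since the mode process $r_t$ is fully specified by $\Pi$ through the transition probabilities above, a mode trajectory can be generated numerically. The sketch below samples a continuous-time Markov chain using exponentially distributed sojourn times and the embedded jump chain; the function name and the simulation horizon are illustrative, and the matrix shown is the TR matrix used later in the example of Section 4.

```python
import numpy as np

def simulate_ctmc(Pi, r0, T_end, rng):
    """Sample a mode trajectory r_t of a continuous-time Markov chain.

    Pi    : (s, s) transition-rate matrix (rows sum to zero, Pi[i, i] < 0)
    r0    : initial mode index
    T_end : simulation horizon
    Returns lists of jump times and visited modes.
    """
    times, modes = [0.0], [r0]
    t, i = 0.0, r0
    while t < T_end:
        rate = -Pi[i, i]                      # total exit rate of mode i
        t += rng.exponential(1.0 / rate)      # exponentially distributed sojourn time
        if t >= T_end:
            break
        probs = Pi[i].copy()
        probs[i] = 0.0
        probs /= rate                         # embedded-chain jump probabilities
        i = rng.choice(len(probs), p=probs)
        times.append(t)
        modes.append(i)
    return times, modes

# TR matrix of the single-link robotic-arm example in Section 4
Pi = np.array([[-30.0, 10.0, 20.0],
               [ 10.0, -20.0, 10.0],
               [ 10.0,  20.0, -30.0]])
rng = np.random.default_rng(0)
times, modes = simulate_ctmc(Pi, r0=0, T_end=2.0, rng=rng)
print(list(zip(times, modes))[:5])
```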
Definition 1
([34]). When $u(t) = 0$, for any initial conditions $z_0$ and $r_0 \in S$, system (1) is said to be stochastically stable if the following inequality is satisfied:
$$\lim_{t \to +\infty} \mathbb{E}\left\{\int_0^t \|z(s)\|^2\, ds \,\Big|\, z_0, r_0\right\} < +\infty.$$
Lemma 1
([35]). For any vectors $a, b \in \mathbb{R}^n$ and matrix $0 < H \in \mathbb{R}^{n \times n}$, the following inequality holds: $2 a^T b \le a^T H a + b^T H^{-1} b$.
The problem to be addressed in the following is how to design an asynchronous sliding mode controller with an integrated ETM for networked MJSs that ensures stability and performance despite asynchronous operation and the presence of external disturbances.

3. Main Results

In consideration of the uncertainty of the operation mode within the Markov process, let us define an HMM $(r_t, s_t, \Pi, \Phi)$ with the conditional probability $\phi_{iq}$ given as follows:
$$\Pr\{s_t = q \mid r_t = i\} = \phi_{iq},$$
where $s_t$ is the observation process with values belonging to the set $H = \{1, 2, \ldots, h\}$. The probability $\phi_{iq}$ is a crucial quantity that defines the degree of asynchrony between the observer or controller and the original system. It is subject to strict constraints, namely $\sum_{q=1}^{h}\phi_{iq} = 1$ and $\phi_{iq} \ge 0$. The conditional probability matrix, denoted by $\Phi = [\phi_{iq}]$, has full column rank. The notations $r_t = i$ and $s_t = q$ will be used in the system analysis below.
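To make the role of $\Phi$ concrete, the following minimal sketch draws the observed mode $s_t$ from the true mode $r_t$ according to the conditional probability above; the matrix shown is the one used later in Section 4, and the function name is illustrative.

```python
import numpy as np

# Conditional probability matrix Phi = [phi_iq] from the example in Section 4:
# row i gives Pr{ s_t = q | r_t = i }, and each row sums to one.
Phi = np.array([[0.2, 0.4, 0.4],
                [0.3, 0.3, 0.4],
                [0.2, 0.2, 0.6]])

def observe_mode(r, Phi, rng):
    """Sample the observer/controller mode s_t given the true system mode r_t."""
    return rng.choice(Phi.shape[1], p=Phi[r])

rng = np.random.default_rng(1)
samples = [observe_mode(0, Phi, rng) for _ in range(10000)]
# Empirical frequencies should be close to Phi[0] = [0.2, 0.4, 0.4]
print(np.bincount(samples, minlength=3) / len(samples))
```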

3.1. ETM-Based Asynchronous Observer Design

As illustrated in Figure 1, the system output $y(t)$ is sampled to obtain $y(\tau T)$, where $\tau = 0, 1, 2, 3, \ldots$ is the sampling number and $T$ is the sampling period. The screened signal $\tilde y(t)$ is obtained after passing through the ETM and is provided to the observer. Additionally, the controller operates on an implicit Markov process that is independent of the system's Markov process. When the sampled value satisfies the ETM condition, the transmitted value $y(\tau_\kappa T)$ ($\tau_\kappa \in \mathbb{N}$, $\kappa = 0, 1, 2, 3, \ldots$, $\tau_0 = 0$, with $\kappa$ the triggering number) is updated. Between two consecutive transmission instants, the threshold error is defined as $\Delta y(t) = y(\tau_\kappa T) - y(t)$, where $t \in [\tau_\kappa T, \tau_{\kappa+1} T)$. The transmitted value $y(\tau_\kappa T)$ is updated once $y(t)$ and $\Delta y(t)$ satisfy the following condition:
$$\Upsilon(t) + \alpha\left[\beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t)\right] \le 0,$$
where the parameters satisfy $0 < \beta < 1$ and $\alpha > 0$, and $\zeta_i > 0$ is a weighting matrix. The internal dynamic variable $\Upsilon(t)$ is designed to satisfy
$$\dot\Upsilon(t) = -\lambda\Upsilon(t) + \beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t),$$
where the parameter $\lambda > 0$ and $\Upsilon(0) \ge 0$ holds.
After the initial transmission of $y(0)$, the next update instant $\tau_{\kappa+1} T$ is determined by the following condition:
$$\tau_{\kappa+1} T = \inf\left\{t > \tau_\kappa T \;\middle|\; \Upsilon(t) + \alpha\left[\beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t)\right] \le 0\right\}.$$
Lemma 2
([36]). For $0 < \beta < 1$, $\alpha > 0$, $\Upsilon(0) \ge 0$, $\lambda > 0$, and matrix $\zeta_i > 0$, the dynamics (6) guarantee that $\Upsilon(t) > 0$ for all $t \ge 0$.
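A minimal sketch of the dynamic trigger (5)–(7) is given below, using a forward-Euler update of the internal variable $\Upsilon(t)$ and the positivity safeguard described later in Remark 1; the class name, sampling step, and test signal are illustrative choices, not values fixed by the paper.

```python
import numpy as np

class DynamicEventTrigger:
    """Dynamic ETM of (5)-(7): transmit y(t) when
    Upsilon(t) + alpha*(beta*y' zeta y - dy' zeta dy) <= 0."""

    def __init__(self, alpha, beta, lam, zeta, y0, Upsilon0=1.0):
        self.alpha, self.beta, self.lam = alpha, beta, lam
        self.zeta = np.atleast_2d(zeta)
        self.y_last = np.atleast_1d(y0)      # last transmitted output y(tau_k T)
        self.Upsilon = Upsilon0

    def step(self, y, dt):
        y = np.atleast_1d(y)
        dy = self.y_last - y                                  # threshold error
        margin = (self.beta * y @ self.zeta @ y
                  - dy @ self.zeta @ dy)
        # forward-Euler update of the internal dynamic variable (6)
        self.Upsilon += dt * (-self.lam * self.Upsilon + margin)
        if self.Upsilon <= 0.0:                               # safeguard of Remark 1
            self.Upsilon = 1.0
        triggered = (self.Upsilon + self.alpha * margin) <= 0.0
        if triggered:
            self.y_last = y.copy()                            # update transmitted value
        return triggered

# illustrative use: feed a few samples of a scalar output through the trigger
etm = DynamicEventTrigger(alpha=1.0, beta=0.5, lam=1.0, zeta=np.eye(1), y0=1.0)
for tk in (0.001, 0.002, 0.003):
    y_k = np.exp(-tk)                  # stand-in output sample
    print(tk, etm.step(y_k, dt=0.001))
```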
The observer receives the latest signal $y(\tau_\kappa T) = \tilde y(t)$ at time $\tau_\kappa T$ ($t \in [\tau_\kappa T, \tau_{\kappa+1} T)$). Then, the following asynchronous observer is constructed:
$$\begin{cases}\dot{\hat z}(t) = A_i \hat z(t) + B_i u(t) + L_q\left[\tilde y(t) - \hat y(t)\right],\\ \hat y(t) = C_i \hat z(t),\end{cases}$$
where $\hat z(t)$ and $\hat y(t)$ represent the estimates of $z(t)$ and $y(t)$, respectively. The observer gain matrix $L_q$ is to be designed.
Defining the estimation error as $\Delta(t) = z(t) - \hat z(t)$ allows the error dynamics to be derived as follows:
$$\dot\Delta(t) = \left(A_i - L_q C_i\right)\Delta(t) + D_i\omega(t) - L_q\Delta y(t).$$
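The observer (8) can be stepped forward numerically between triggering instants as follows; the forward-Euler discretization and the step size are illustrative, while the mode-1 matrices and observer gain are those reported later in Section 4.

```python
import numpy as np

def observer_step(z_hat, u, y_tilde, A, B, C, L, dt):
    """One forward-Euler step of the asynchronous observer (8).

    z_hat   : (n,)  current state estimate
    u       : (m,)  control input
    y_tilde : (p,)  latest output received through the ETM, y(tau_k T)
    A, B, C : matrices of the active system mode i
    L       : (n, p) observer gain of the observed mode q
    """
    y_hat = C @ z_hat                              # estimated output
    dz = A @ z_hat + B @ u + L @ (y_tilde - y_hat)
    return z_hat + dt * dz

# illustrative call with the mode-1 matrices and observer gain of Section 4
A1 = np.array([[0.0, 1.0], [-49.0, -5.0]])
B1 = np.array([[0.0], [5.0]])
C1 = np.array([[1.0, 1.0]])
L1 = np.array([[0.1440], [1.6421]])
z_hat = observer_step(np.zeros(2), np.array([0.0]), np.array([1.0]),
                      A1, B1, C1, L1, dt=1e-3)
print(z_hat)
```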
Remark 1.
The variable $\Upsilon(t)$ may take negative values over time in implementation, which contradicts Lemma 2. Therefore, before the event is triggered, it is necessary to check the positivity of $\Upsilon(t)$ and reset it to 1 if it is non-positive; this also ensures the positivity of the Lyapunov function selected for the stochastic stability analysis outlined in Theorem 2.
Remark 2.
In many practical systems, the mode of operation or the state itself might not be directly measurable, especially when communication is restricted or there are delays in obtaining system state information. However, an HMM can capture the probabilistic switching between different modes, allowing the observer to estimate the unmeasurable states. Therefore, asynchronous observers are designed to function effectively even when there is a lack of synchrony between the system and the controller. This is crucial in networked control systems where communication delays and data losses are common. Additionally, the design of the asynchronous observer allows the system to maintain stability and performance even when the observer mode is out of sync with the actual mode of the system.

3.2. Asynchronous Observer-Based SMC Design

Considering the observer given by Equation (8), the asynchronous integral sliding surface function can be formulated as follows:
$$s(t) = \bar B_i \hat z(t) - \int_0^t \bar B_i\left(A_i + B_i K_q\right)\hat z(s)\, ds,$$
in which $\bar B_i = (B_i^T B_i)^{-1} B_i^T$ and $K_q$ is a real matrix that will be designed.
In view of (10), it follows that
$$\dot s(t) = \bar B_i \dot{\hat z}(t) - \bar B_i\left(A_i + B_i K_q\right)\hat z(t) = u(t) - K_q \hat z(t) + \bar B_i L_q\left[\tilde y(t) - \hat y(t)\right].$$
Letting $\dot s(t) = 0$, the following equivalent control variable can be obtained:
$$u_{eq}(t) = K_q \hat z(t) - \bar B_i L_q\left[\tilde y(t) - \hat y(t)\right].$$
The combination of (12) and (8) yields the sliding mode dynamics as follows:
$$\dot{\hat z}(t) = \left(A_i + B_i K_q\right)\hat z(t) + G_i L_q \Delta y(t) + G_i L_q C_i \Delta(t),$$
where $G_i = I - B_i \bar B_i$.
Theorem 1.
The state observer (8) and sliding surface function (10) ensure that the sliding surface $s(t) = 0$ is reached within a finite time under the following SMC law:
$$u(t) = K_q \hat z(t) - \left(\epsilon + \varsigma(t)\right)\mathrm{sgn}\left(s(t)\right),$$
in which $\epsilon > 0$ is a small scalar and $\varsigma(t)$ is designed as
$$\varsigma(t) = \left\|\bar B_i L_q\right\|\left(\left\|\tilde y(t)\right\| + \left\|\hat y(t)\right\|\right).$$
Proof. 
The Lyapunov function is selected as
$$V(t) = \frac{1}{2} s^T(t) s(t).$$
Then, it follows that
$$\dot V(t) = s^T(t)\left[u(t) - K_q\hat z(t) + \bar B_i L_q\left(\tilde y(t) - \hat y(t)\right)\right] \le s^T(t)\left[u(t) - K_q\hat z(t)\right] + \left\|s(t)\right\|\left\|\bar B_i L_q\right\|\left(\left\|\tilde y(t)\right\| + \left\|\hat y(t)\right\|\right).$$
Combining (14) with (16), and in view of $\|\cdot\|_1 \ge \|\cdot\|$, it holds that
$$\dot V(t) \le -s^T(t)\left(\epsilon + \varsigma(t)\right)\mathrm{sgn}\left(s(t)\right) + \left\|s(t)\right\|\left\|\bar B_i L_q\right\|\left(\left\|\tilde y(t)\right\| + \left\|\hat y(t)\right\|\right) \le -\epsilon\left\|s(t)\right\| \le -\sqrt{2}\,\epsilon V^{1/2}(t).$$
Consider the equation $\dot V(t) = -\sqrt{2}\,\epsilon V^{1/2}(t)$, which can be rewritten as $V^{-1/2}(t)\, dV(t) = -\sqrt{2}\,\epsilon\, dt$. Integrating both sides from $0$ to $t^*$, one obtains $2\left[V^{1/2}(t^*) - V^{1/2}(0)\right] = -\sqrt{2}\,\epsilon\, t^*$, from which one derives $V^{1/2}(t^*) = 0$ at $t^* = \sqrt{2}\, V^{1/2}(0)/\epsilon$. Obviously, $s(t^*) = 0$ holds when $V^{1/2}(t^*) = 0$. Regarding the case of $\dot V(t) < -\sqrt{2}\,\epsilon V^{1/2}(t)$, the instant $t'$ at which $s(t') = 0$ must satisfy $t' < t^*$ due to the monotonicity of $V(t)$. Therefore, the sliding surface $s(t) = 0$ is reached within a finite time. □
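For intuition, the reaching-time bound implied by the argument above can be made explicit; since $V(0) = \frac{1}{2}\|s(0)\|^2$, it reduces to $t^* = \|s(0)\|/\epsilon$. The numerical values below are purely illustrative and are not taken from the simulation study:
$$t^* = \frac{\sqrt{2}\, V^{1/2}(0)}{\epsilon} = \frac{\|s(0)\|}{\epsilon},\qquad \text{e.g., } \|s(0)\| = 1,\ \epsilon = 0.1\ \Rightarrow\ \text{the sliding surface is reached no later than } t^* = 10\ \text{s}.$$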
Remark 3.
Unlike traditional SMCs that assume synchronous operation, the proposed method uses an asynchronous observer that can estimate the system state even when the system mode is not directly observable. This is particularly advantageous in networked control systems where mode information may be delayed or lost. The integration of an event-triggered mechanism reduces the frequency of data transmission, which is beneficial for network bandwidth and system load. This feature is novel compared to other SMCs that may rely on continuous or periodic data transmission. The use of HMM in the asynchronous observer allows the control system to handle uncertainties and changes in the system dynamics more effectively. This probabilistic approach to state estimation is a novel contribution to SMC methods.
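To show how the law (14)–(15) is evaluated in practice, the following sketch computes $\bar B_i$, accumulates the integral term of the sliding variable (10) by a rectangular rule, and returns the control input; the class name, the discretization, and the sample state values are illustrative, while the mode-1 matrices and gains are those reported in Section 4.

```python
import numpy as np

class AsyncSMC:
    """Asynchronous SMC law (14)-(15) built on the observer state z_hat."""

    def __init__(self, eps):
        self.eps = eps
        self.integral = None   # accumulates the integral term of (10)

    def control(self, z_hat, y_tilde, y_hat, A, B, K, L, dt):
        B_bar = np.linalg.inv(B.T @ B) @ B.T            # B_bar_i = (B^T B)^{-1} B^T
        if self.integral is None:
            self.integral = np.zeros(B_bar.shape[0])
        # integral sliding variable (10); the integral is updated by a rectangular rule
        s = B_bar @ z_hat - self.integral
        self.integral += dt * (B_bar @ (A + B @ K) @ z_hat)
        # gain (15) compensating the ETM-induced output mismatch
        varsigma = np.linalg.norm(B_bar @ L) * (np.linalg.norm(y_tilde)
                                                + np.linalg.norm(y_hat))
        u = K @ z_hat - (self.eps + varsigma) * np.sign(s)
        return u, s

# illustrative call with the mode-1 data of Section 4
A1 = np.array([[0.0, 1.0], [-49.0, -5.0]])
B1 = np.array([[0.0], [5.0]])
K1 = np.array([[3.1829, 4.2946]])
L1 = np.array([[0.1440], [1.6421]])
smc = AsyncSMC(eps=0.1)
u, s = smc.control(np.array([1.0, -1.0]), np.array([1.0]), np.array([0.5]),
                   A1, B1, K1, L1, dt=1e-3)
print(u, s)
```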

3.3. H∞ Stability Analysis

Through the implementation of the aforementioned design approach, a comprehensive H∞ performance analysis of the closed-loop system will be carried out in the following with respect to two requirements:
  • Under the condition of $\omega(t) = 0$, the closed-loop system is stochastically stable.
  • Under zero initial conditions, it holds that
$$J = \mathbb{E}\left\{\int_0^{+\infty}\left[y_\Delta^T(s)\, y_\Delta(s) - \gamma^2\omega^T(s)\,\omega(s)\right] ds\right\} < 0,$$
    in which $y_\Delta(t) = C_i\Delta(t)$ and $\gamma > 0$ is a prescribed disturbance attenuation index.
Theorem 2.
For fixed scalars $\varepsilon$, $\gamma$, and $\beta$, the closed-loop system is stochastically stable with $K_q = V_q^{-1} U_q$ and $\bar L_i = P_i^{-1}\bar H_i$, where $\bar L_i = \sum_{q=1}^{h}\phi_{iq}L_q$, and satisfies the predefined H∞ performance if there exist matrices $P_i > 0$, $\zeta_i > 0$, $W_{iq}$, $\bar H_i$, $V_q$, and $U_q$ for all $i \in S$ and $q \in H$ satisfying the following conditions:
$$\begin{bmatrix} \Lambda_{11} & \Lambda_{12} & \bar H_i & 0 & \Lambda_{15} & 0 & 0\\ * & \Lambda_{22} & -\bar H_i & P_i D_i & 0 & C_i^T\bar H_i^T & 0\\ * & * & -\zeta_i & 0 & 0 & 0 & \bar H_i^T\\ * & * & * & -\gamma^2 I & 0 & 0 & 0\\ * & * & * & * & -\tfrac{1}{2}P_i & 0 & 0\\ * & * & * & * & * & -P_i & 0\\ * & * & * & * & * & * & -P_i \end{bmatrix} < 0,$$
$$\begin{bmatrix} \mathrm{He}\left(B_i U_q\right) - W_{iq} & P_i B_i - B_i V_q + \varepsilon U_q^T\\ * & -\varepsilon\,\mathrm{He}\left(V_q\right) \end{bmatrix} < 0,$$
where
$$\Lambda_{11} = \mathrm{He}\left(P_i A_i\right) + \sum_{q=1}^{h}\phi_{iq}W_{iq} + \sum_{j=1}^{s}\iota_{ij}P_j + \beta C_i^T\zeta_i C_i,$$
$$\Lambda_{12} = \bar H_i C_i + \beta C_i^T\zeta_i C_i,$$
$$\Lambda_{15} = P_i B_i\bar B_i,$$
$$\Lambda_{22} = \mathrm{He}\left(P_i A_i - \bar H_i C_i\right) + \sum_{j=1}^{s}\iota_{ij}P_j + \beta C_i^T\zeta_i C_i + C_i^T C_i.$$
Proof. 
Firstly, we construct the Lyapunov function:
$$V(\hat z(t), \Delta(t)) = \hat z^T(t) P_i \hat z(t) + \Delta^T(t) P_i \Delta(t) + \Upsilon(t).$$
Then, along the trajectories of (13) and (9) with $\omega(t) = 0$, it follows that
$$\begin{aligned} \dot V(\hat z(t), \Delta(t)) &= 2\hat z^T(t) P_i\dot{\hat z}(t) + 2\Delta^T(t) P_i\dot\Delta(t) + \dot\Upsilon(t) + \hat z^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\hat z(t) + \Delta^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\Delta(t)\\ &= 2\hat z^T(t) P_i\sum_{q=1}^{h}\phi_{iq}\left[\left(A_i + B_i K_q\right)\hat z(t) + G_i L_q\Delta y(t) + G_i L_q C_i\Delta(t)\right]\\ &\quad + \hat z^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\hat z(t) + \Delta^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\Delta(t)\\ &\quad + 2\Delta^T(t) P_i\sum_{q=1}^{h}\phi_{iq}\left[\left(A_i - L_q C_i\right)\Delta(t) - L_q\Delta y(t)\right]\\ &\quad - \lambda\Upsilon(t) + \beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t). \end{aligned}$$
Since $\lambda > 0$, $\Upsilon(0) \ge 0$, and $G_i = I - B_i\bar B_i$, the above equation becomes
$$\begin{aligned} \dot V(\hat z(t), \Delta(t)) &\le 2\hat z^T(t) P_i A_i\hat z(t) + 2\hat z^T(t) P_i B_i\sum_{q=1}^{h}\phi_{iq}K_q\hat z(t) + 2\hat z^T(t) P_i\sum_{q=1}^{h}\phi_{iq}L_q\Delta y(t)\\ &\quad - 2\hat z^T(t) P_i B_i\bar B_i\sum_{q=1}^{h}\phi_{iq}L_q\Delta y(t) + 2\hat z^T(t) P_i\sum_{q=1}^{h}\phi_{iq}L_q C_i\Delta(t) - 2\hat z^T(t) P_i B_i\bar B_i\sum_{q=1}^{h}\phi_{iq}L_q C_i\Delta(t)\\ &\quad + \hat z^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\hat z(t) + \Delta^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\Delta(t) + 2\Delta^T(t) P_i A_i\Delta(t)\\ &\quad - 2\Delta^T(t) P_i\sum_{q=1}^{h}\phi_{iq}L_q C_i\Delta(t) - 2\Delta^T(t) P_i\sum_{q=1}^{h}\phi_{iq}L_q\Delta y(t) + \beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t). \end{aligned}$$
Let $\sum_{q=1}^{h}\phi_{iq}L_q = \bar L_i$. Then, a simplified expression is obtained:
$$\begin{aligned} \dot V(\hat z(t), \Delta(t)) &\le 2\hat z^T(t) P_i A_i\hat z(t) + 2\hat z^T(t) P_i B_i\sum_{q=1}^{h}\phi_{iq}K_q\hat z(t) + 2\hat z^T(t) P_i\bar L_i\Delta y(t) - 2\hat z^T(t) P_i B_i\bar B_i\bar L_i\Delta y(t)\\ &\quad + 2\hat z^T(t) P_i\bar L_i C_i\Delta(t) - 2\hat z^T(t) P_i B_i\bar B_i\bar L_i C_i\Delta(t) + \hat z^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\hat z(t) + \Delta^T(t)\sum_{j=1}^{s}\iota_{ij}P_j\Delta(t)\\ &\quad + 2\Delta^T(t) P_i A_i\Delta(t) - 2\Delta^T(t) P_i\bar L_i C_i\Delta(t) - 2\Delta^T(t) P_i\bar L_i\Delta y(t) + \beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t). \end{aligned}$$
Additionally, by Lemma 1, it holds that
$$-2\hat z^T(t) P_i B_i\bar B_i\bar L_i\Delta y(t) \le \hat z^T(t) P_i B_i\bar B_i P_i^{-1}\bar B_i^T B_i^T P_i\hat z(t) + \Delta y^T(t)\bar L_i^T P_i P_i^{-1} P_i\bar L_i\Delta y(t),$$
$$-2\hat z^T(t) P_i B_i\bar B_i\bar L_i C_i\Delta(t) \le \hat z^T(t) P_i B_i\bar B_i P_i^{-1}\bar B_i^T B_i^T P_i\hat z(t) + \Delta^T(t) C_i^T\bar L_i^T P_i P_i^{-1} P_i\bar L_i C_i\Delta(t),$$
$$\begin{aligned} \beta y^T(t)\zeta_i y(t) - \Delta y^T(t)\zeta_i\Delta y(t) &= \beta\left(\hat z(t) + \Delta(t)\right)^T C_i^T\zeta_i C_i\left(\hat z(t) + \Delta(t)\right) - \Delta y^T(t)\zeta_i\Delta y(t)\\ &= \hat z^T(t)\beta C_i^T\zeta_i C_i\Delta(t) + \hat z^T(t)\beta C_i^T\zeta_i C_i\hat z(t) + \Delta^T(t)\beta C_i^T\zeta_i C_i\Delta(t)\\ &\quad + \Delta^T(t)\beta C_i^T\zeta_i C_i\hat z(t) - \Delta y^T(t)\zeta_i\Delta y(t). \end{aligned}$$
In conclusion, one has
$$\dot V(\hat z(t), \Delta(t)) \le \eta^T(t)\,\Gamma_i\,\eta(t),$$
where $\eta^T(t) = \left[\hat z^T(t)\ \ \Delta^T(t)\ \ \Delta y^T(t)\right]$, and
$$\Gamma_i = \begin{bmatrix} \Gamma_{i11} & P_i\bar L_i C_i + \beta C_i^T\zeta_i C_i & P_i\bar L_i\\ * & \Gamma_{i22} & -P_i\bar L_i\\ * & * & \bar L_i^T P_i P_i^{-1} P_i\bar L_i - \zeta_i \end{bmatrix},$$
in which
$$\Gamma_{i11} = \mathrm{He}\left(P_i A_i + P_i B_i\sum_{q=1}^{h}\phi_{iq}K_q\right) + 2 P_i B_i\bar B_i P_i^{-1}\bar B_i^T B_i^T P_i + \sum_{j=1}^{s}\iota_{ij}P_j + \beta C_i^T\zeta_i C_i,$$
$$\Gamma_{i22} = \mathrm{He}\left(P_i A_i - P_i\bar L_i C_i\right) + C_i^T\bar L_i^T P_i P_i^{-1} P_i\bar L_i C_i + \sum_{j=1}^{s}\iota_{ij}P_j + \beta C_i^T\zeta_i C_i.$$
One can notice that there is a nonlinear coupling among the matrices $P_i$, $\bar L_i$, and $K_q$, which makes it very difficult to obtain $\bar L_i$ and $K_q$ directly. So, let $P_i\bar L_i = \bar H_i$ and $K_q = V_q^{-1}U_q$ in (28), where $V_q$, $U_q$, and $\bar H_i$ are matrices with appropriate dimensions.
Applying Lemma 5 in [37] to inequality (21) yields
$$\mathrm{He}\left(P_i B_i V_q^{-1} U_q\right) < W_{iq},$$
and, with $P_i\bar L_i = \bar H_i$, it holds that $\Gamma_i < \Gamma_i'$, where
$$\Gamma_i' = \begin{bmatrix} \Theta_{i11} & \bar H_i C_i + \beta C_i^T\zeta_i C_i & \bar H_i\\ * & \Theta_{i22} & -\bar H_i\\ * & * & \bar H_i^T P_i^{-1}\bar H_i - \zeta_i \end{bmatrix},$$
in which
$$\Theta_{i11} = \mathrm{He}\left(P_i A_i\right) + \sum_{q=1}^{h}\phi_{iq}W_{iq} + 2 P_i B_i\bar B_i P_i^{-1}\bar B_i^T B_i^T P_i + \sum_{j=1}^{s}\iota_{ij}P_j + \beta C_i^T\zeta_i C_i,$$
$$\Theta_{i22} = \mathrm{He}\left(P_i A_i - \bar H_i C_i\right) + C_i^T\bar H_i^T P_i^{-1}\bar H_i C_i + \sum_{j=1}^{s}\iota_{ij}P_j + \beta C_i^T\zeta_i C_i.$$
In the case where (20) is satisfied and $\Gamma_i' < 0$, it is possible to find a positive scalar $\upsilon$, defined as $\upsilon \triangleq \lambda_{\min}\left(-\Gamma_i'\right)$, which satisfies the inequality $\dot V(\hat z(t), \Delta(t)) \le -\upsilon\left\|\hat z(t)\right\|^2$. Then, the following inequality can be obtained by applying the Dynkin formula:
$$\mathbb{E}\left\{V(\hat z(t), \Delta(t))\right\} - \mathbb{E}\left\{V(\hat z(0), \Delta(0))\right\} \le -\upsilon\,\mathbb{E}\left\{\int_0^t\left\|\hat z(s)\right\|^2 ds\right\}.$$
By utilizing (31), it is seen that $\mathbb{E}\left\{\int_0^t\left\|\hat z(s)\right\|^2 ds\right\}$ is bounded by $\upsilon^{-1}\mathbb{E}\left\{V(\hat z(0), \Delta(0))\right\}$. This result implies that the stochastic stability of the sliding mode dynamics (13) and the error dynamics (9) can be established by utilizing Definition 1.
Next, the H∞ performance of the closed-loop system is examined, with the disturbance term $2\Delta^T(t)P_i D_i\omega(t)$ from (9) now retained in $\dot V$. Under zero initial conditions, $\mathbb{E}\left\{V(\infty)\right\} = \mathbb{E}\left\{\int_0^{+\infty}\dot V(s)\, ds\right\} \ge 0$, and the following conclusion can be made:
$$J = \mathbb{E}\left\{\int_0^{+\infty}\left[y_\Delta^T(u)\, y_\Delta(u) - \gamma^2\omega^T(u)\,\omega(u)\right] du\right\} \le \mathbb{E}\left\{\int_0^{+\infty}\left[y_\Delta^T(u)\, y_\Delta(u) - \gamma^2\omega^T(u)\,\omega(u) + \dot V(u)\right] du\right\} \le \mathbb{E}\left\{\int_0^{+\infty}\vartheta^T(u)\,\bar\Gamma_i\,\vartheta(u)\, du\right\},$$
in which $\vartheta^T(t) = \left[\hat z^T(t)\ \ \Delta^T(t)\ \ \Delta y^T(t)\ \ \omega^T(t)\right]$, and
$$\bar\Gamma_i = \begin{bmatrix} \Gamma_i & \begin{matrix}0\\ P_i D_i\\ 0\end{matrix}\\ * & -\gamma^2 I \end{bmatrix} + \mathrm{diag}\left\{0,\ C_i^T C_i,\ 0,\ 0\right\}.$$
By the Schur complement, condition (20) guarantees $\bar\Gamma_i < 0$, and hence $J < 0$, meaning the closed-loop system has the disturbance attenuation level $\gamma$. □
Remark 4.
The introduction of the ETM above significantly enhances the communication efficiency of the system. Compared to traditional periodic sampling methods, the ETM triggers data transmission and control operations based on changes in the system state, thereby reducing unnecessary data transmission and alleviating bandwidth pressure on the network. Additionally, by properly setting the triggering conditions, the ETM ensures control accuracy while avoiding both system overload due to excessively short sampling periods and decreased control accuracy due to excessively long ones. The above theorem also demonstrates that, with a well-designed triggering condition, the ETM can be effectively integrated with the asynchronous observer and SMC strategy, further improving the performance of networked MJSs.
Remark 5.
Recently, deep learning has emerged as a popular approach for dealing with unmodeled dynamics. Although deep learning offers powerful data-driven capabilities [38,39], HMM-based methods are more interpretable: the states and transitions in an HMM have a probabilistic interpretation that is easier to understand and analyze than the black-box nature of deep learning models. For complex systems where data are scarce or expensive to obtain, deep learning models might not be the most efficient choice, whereas HMMs can be designed effectively from a smaller amount of data and are less computationally intensive. Moreover, deep learning models typically require significant computational resources and may not be suitable for real-time applications due to inference latency, while the HMM-based asynchronous observer is generally less complex and can provide faster response times. The choice of method should depend on the specific requirements and constraints of the application at hand.
For the implementation of the asynchronous controller and observer, parameters are fine-tuned to satisfy the conditions derived from Lyapunov stability theory, ensuring that the closed-loop system is stochastically stable and that the state trajectories converge to the sliding surface. Initial parameter values are determined through simulation and modeling, using knowledge of the system dynamics and desired performance specifications.
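As a complement to this tuning procedure, the feasibility of candidate decision variables can be checked numerically before running a full synthesis. The sketch below only assembles the block matrix of condition (20) of Theorem 2 for one system mode and reports its largest eigenvalue (a negative value indicates that the condition holds at that point). It is a minimal numerical check, not the synthesis procedure itself; in practice the conditions would be posed as LMIs and solved with a semidefinite-programming solver, and the function name and argument layout are illustrative.

```python
import numpy as np

def He(X):
    """He(X) = X + X^T."""
    return X + X.T

def theorem2_lmi(P, Pall, iota_row, phi_row, W_i, Hbar, zeta, A, B, C, D,
                 beta, gamma):
    """Assemble the block matrix of condition (20) for one system mode i.

    P        : P_i (n, n)              Pall     : list of P_j over all modes
    iota_row : i-th row of the TR matrix Pi      phi_row : i-th row of Phi
    W_i      : list of W_{iq} over q   Hbar     : \\bar H_i (n, p)
    zeta     : zeta_i (p, p)           A, B, C, D : mode-i system matrices
    Returns the largest eigenvalue of the assembled symmetric matrix.
    """
    n, p, w = A.shape[0], C.shape[0], D.shape[1]
    B_bar = np.linalg.inv(B.T @ B) @ B.T
    mode_sum = sum(r * Pj for r, Pj in zip(iota_row, Pall))
    L11 = He(P @ A) + sum(f * Wq for f, Wq in zip(phi_row, W_i)) \
          + mode_sum + beta * C.T @ zeta @ C
    L12 = Hbar @ C + beta * C.T @ zeta @ C
    L15 = P @ B @ B_bar
    L22 = He(P @ A - Hbar @ C) + mode_sum + beta * C.T @ zeta @ C + C.T @ C
    Z = np.zeros
    M = np.block([
        [L11,        L12,              Hbar,      Z((n, w)),             L15,       Z((n, n)),    Z((n, n))],
        [L12.T,      L22,             -Hbar,      P @ D,                 Z((n, n)), C.T @ Hbar.T, Z((n, n))],
        [Hbar.T,    -Hbar.T,          -zeta,      Z((p, w)),             Z((p, n)), Z((p, n)),    Hbar.T],
        [Z((w, n)),  (P @ D).T,        Z((w, p)), -gamma**2 * np.eye(w), Z((w, n)), Z((w, n)),    Z((w, n))],
        [L15.T,      Z((n, n)),        Z((n, p)), Z((n, w)),             -0.5 * P,  Z((n, n)),    Z((n, n))],
        [Z((n, n)),  (C.T @ Hbar.T).T, Z((n, p)), Z((n, w)),             Z((n, n)), -P,           Z((n, n))],
        [Z((n, n)),  Z((n, n)),        Hbar,      Z((n, w)),             Z((n, n)), Z((n, n)),    -P],
    ])
    return np.max(np.linalg.eigvalsh(M))   # negative => condition (20) holds here
```

In a synthesis setting, $P_i$, $W_{iq}$, $\bar H_i$, and $\zeta_i$ would be decision variables of a semidefinite program solved jointly over all modes; the function above only evaluates the assembled block at given numerical values.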

4. Example

Now, let us consider the model of a single-link robotic arm, as described in [40], whose dynamic equation is
$$\ddot\theta(t) = -\frac{M g L}{J}\sin\theta(t) - \frac{D(t)}{J}\dot\theta(t) + \frac{1}{J} u(t).$$
This equation describes the relationship between the angular position of the robot arm, denoted by $\theta(t)$, and the control input applied to the system, represented by $u(t)$. The model parameters include the load mass $M$, the moment of inertia $J$, the arm length $L$, the gravitational acceleration $g$, and the coefficient of viscous friction $D(t)$. We assume that the parameters $M$ and $J$ exhibit three distinct modes, as outlined in Table 1, due to changes in payload, component failure, or malfunction. In addition, the TR matrix that governs the transitions between these modes is given as follows:
$$\Pi = \begin{bmatrix} -30 & 10 & 20\\ 10 & -20 & 10\\ 10 & 20 & -30 \end{bmatrix}.$$
Regarding the asynchronous strategy of the observer and controller, the conditional probability matrix of the HMM is given by
$$\Phi = \begin{bmatrix} 0.2 & 0.4 & 0.4\\ 0.3 & 0.3 & 0.4\\ 0.2 & 0.2 & 0.6 \end{bmatrix}.$$
$L$ and $D(t)$ are deemed to be constants for all subsystems, with $D(t) = 1$ and $L = 0.5$. Let $z_1(t) = \theta(t)$ and $z_2(t) = \dot\theta(t)$. Additionally, the linearized dynamics with external disturbances are described as
$$\begin{cases}\dot z(t) = \begin{bmatrix} 0 & 1\\ -\dfrac{M_i g}{2 J_i} & -\dfrac{1}{J_i} \end{bmatrix} z(t) + \begin{bmatrix} 0\\ \dfrac{1}{J_i} \end{bmatrix} u(t) + D_i\omega(t),\\ y(t) = C_i z(t),\end{cases}$$
in which $z^T(t) = \left[z_1^T(t)\ \ z_2^T(t)\right]$ and $i \in \{1, 2, 3\}$. The system matrices are given by
$$A_1 = \begin{bmatrix} 0 & 1\\ -49 & -5 \end{bmatrix},\quad A_2 = \begin{bmatrix} 0 & 1\\ -32.67 & -1.67 \end{bmatrix},\quad A_3 = \begin{bmatrix} 0 & 1\\ -36.75 & -1.25 \end{bmatrix},$$
$$B_1 = \begin{bmatrix} 0\\ 5 \end{bmatrix},\quad B_2 = \begin{bmatrix} 0\\ 1.67 \end{bmatrix},\quad B_3 = \begin{bmatrix} 0\\ 1.25 \end{bmatrix},$$
$$D_1 = \begin{bmatrix} 0.3\\ 0.1 \end{bmatrix},\quad D_2 = \begin{bmatrix} 0.2\\ 0.2 \end{bmatrix},\quad D_3 = \begin{bmatrix} 0.1\\ 0.3 \end{bmatrix},$$
$$C_1 = \begin{bmatrix} 1 & 1 \end{bmatrix},\quad C_2 = \begin{bmatrix} 1 & 1 \end{bmatrix},\quad C_3 = \begin{bmatrix} 1 & 1 \end{bmatrix}.$$
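The mode-dependent matrices above follow directly from the parameters in Table 1 through the linearization of the arm dynamics about $\theta(t) = 0$; the short check below regenerates $A_i$ and $B_i$ from $M_i$, $J_i$, $g$, and $L$, where $g = 9.8\ \mathrm{m/s^2}$ is an assumed value.

```python
import numpy as np

g, Larm, D = 9.8, 0.5, 1.0                                # gravity, arm length, friction
modes = {1: (2.0, 0.2), 2: (4.0, 0.6), 3: (6.0, 0.8)}    # i: (M_i, J_i) from Table 1

for i, (M, J) in modes.items():
    A = np.array([[0.0, 1.0],
                  [-M * g * Larm / J, -D / J]])   # linearization of sin(theta) ~ theta
    B = np.array([[0.0], [1.0 / J]])
    print(f"A_{i} =\n{np.round(A, 2)}\nB_{i} = {B.ravel()}")
```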
$\omega(t)$ is a perturbation with exponential decay. Based on the above system parameters, the following solutions can be obtained with $\beta = 0.5$, $\varepsilon = 0.1$, and $\gamma = 3$:
$$P_1 = \begin{bmatrix} 88.9247 & 0.3907\\ 0.3907 & 2.1505 \end{bmatrix},\quad P_2 = \begin{bmatrix} 86.8923 & 1.3981\\ 1.3981 & 2.4568 \end{bmatrix},\quad P_3 = \begin{bmatrix} 87.1636 & 0.9090\\ 0.9090 & 2.4907 \end{bmatrix},$$
$$\bar H_1 = \begin{bmatrix} 0.9806\\ 0.7102 \end{bmatrix},\quad \bar H_2 = \begin{bmatrix} 1.8121\\ 0.7990 \end{bmatrix},\quad \bar H_3 = \begin{bmatrix} 0.4483\\ 0.7281 \end{bmatrix},$$
$$\bar L_1 = \begin{bmatrix} 0.0125\\ 0.3325 \end{bmatrix},\quad \bar L_2 = \begin{bmatrix} 0.0263\\ 0.3402 \end{bmatrix},\quad \bar L_3 = \begin{bmatrix} 0.0021\\ 0.2916 \end{bmatrix},$$
$$U_1 = \begin{bmatrix} 7.3241 & 9.8820 \end{bmatrix},\quad U_2 = \begin{bmatrix} 1.3934 & 0.0095 \end{bmatrix},\quad U_3 = \begin{bmatrix} 0.9887 & 25.8504 \end{bmatrix},$$
$$\zeta_1 = 8.6486,\quad \zeta_2 = 7.7168,\quad \zeta_3 = 5.5165,\quad V_1 = 2.3011,\quad V_2 = 2.6787,\quad V_3 = 1.8314.$$
Therefore, the gain matrix L q of the observer and the gain matrix K q of the controller can be calculated as follows:
$$L_1 = \begin{bmatrix} 0.1440\\ 1.6421 \end{bmatrix},\quad L_2 = \begin{bmatrix} 0.0056\\ 1.5653 \end{bmatrix},\quad L_3 = \begin{bmatrix} 0.0463\\ 1.5551 \end{bmatrix},$$
$$K_1 = \begin{bmatrix} 3.1829 & 4.2946 \end{bmatrix},\quad K_2 = \begin{bmatrix} 0.5202 & 0.0035 \end{bmatrix},\quad K_3 = \begin{bmatrix} 0.5399 & 14.1151 \end{bmatrix}.$$
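The controller gains are consistent with the reported solution variables through the relation $K_q = V_q^{-1}U_q$ of Theorem 2, as the following quick check confirms up to the rounding of the printed values.

```python
import numpy as np

U = {1: np.array([7.3241, 9.8820]),
     2: np.array([1.3934, 0.0095]),
     3: np.array([0.9887, 25.8504])}
V = {1: 2.3011, 2: 2.6787, 3: 1.8314}

for q in (1, 2, 3):
    K = U[q] / V[q]                 # K_q = V_q^{-1} U_q (V_q is scalar here)
    print(f"K_{q} = {np.round(K, 4)}")
# reported in the text (up to rounding):
# K_1 = [3.1829 4.2946], K_2 = [0.5202 0.0035], K_3 = [0.5399 14.1151]
```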
We set the initial conditions of the original system and observer as $z(0) = [2\ \ 2]^T$ and $\hat z(0) = [1\ \ 1]^T$, respectively. The external disturbance is selected as $\omega(t) = e^{-10t}$, and the ETM parameters are chosen as $\alpha = \lambda = 1$. The simulation results are presented in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. Figure 2 depicts the evolution of the Markov process at the system level; the frequency and pattern of mode transitions are consistent with the characteristics of the Markov chain defined by the transition rate matrix. Figure 3 illustrates the Markov process at the control level, which represents the asynchronous mode of the observer and controller; it indicates that the control modes change according to the hidden Markov model, independently of the system's Markov process. Figure 4 compares the trajectories of the actual system states and the estimated states provided by the asynchronous observer; the rapid convergence of the estimated states to the actual states demonstrates the effectiveness of the asynchronous observer design. Figure 5 shows the trajectory of the integral sliding mode surface over time, which confirms the finite-time convergence property of the proposed SMC strategy; as demonstrated in the proof of Theorem 1, a larger value of $\epsilon$ leads to a faster attainment of the sliding motion. Figure 6 presents the control input provided by the asynchronous controller over time, which demonstrates that the controller can effectively handle the uncertainties and asynchrony, providing appropriate control actions to maintain system stability. Finally, Figure 7 illustrates the timing and intervals at which the event-triggered mechanism activates data transmission, which demonstrates the efficiency of the ETM in reducing unnecessary data transmissions, thus conserving bandwidth and reducing system load. Notably, the figure reveals that the system initially experienced significant errors and a high number of triggering events; as time progressed, the system became increasingly stable, with a corresponding decrease in the number of triggering events, ultimately achieving the intended design. Additionally, to demonstrate the advantage of dynamic ETM over traditional static ETM, Figure 8 shows the triggering instants and intervals under static ETM, from which it is evident that the dynamic ETM is clearly superior to the static ETM in saving network bandwidth.
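The bandwidth saving of the dynamic ETM over a static one (Figure 7 versus Figure 8) can be reproduced qualitatively; the comparison below counts transmissions under both rules on a decaying sinusoid used as a stand-in for the closed-loop output, so the absolute trigger counts are only indicative.

```python
import numpy as np

def count_triggers(y_samples, dt, alpha, beta, lam, dynamic=True):
    """Count transmissions of the ETM (5)-(7) on a sampled scalar output.
    With dynamic=False, the internal variable Upsilon is dropped, which
    recovers a conventional static trigger  beta*y^2 - dy^2 <= 0."""
    y_last, Upsilon, count = y_samples[0], 1.0, 0
    for y in y_samples:
        dy = y_last - y
        margin = beta * y * y - dy * dy
        if dynamic:
            Upsilon += dt * (-lam * Upsilon + margin)
            if Upsilon <= 0.0:                 # positivity safeguard from Remark 1
                Upsilon = 1.0
            fire = Upsilon + alpha * margin <= 0.0
        else:
            fire = margin <= 0.0
        if fire:
            y_last, count = y, count + 1
    return count

t = np.arange(0.0, 5.0, 1e-3)
y = 2.0 * np.exp(-t) * np.sin(5.0 * t)         # stand-in output signal
print("dynamic ETM triggers:", count_triggers(y, 1e-3, 1.0, 0.5, 1.0, dynamic=True))
print("static  ETM triggers:", count_triggers(y, 1e-3, 1.0, 0.5, 1.0, dynamic=False))
```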

5. Conclusions

This paper has presented an innovative control strategy for MJSs using an asynchronous SMC approach integrated with a dynamic event-triggered mechanism. Firstly, an asynchronous observer was presented that uses an HMM to address the challenge of mode asynchrony between the system, observer, and controller; this observer does not rely on real-time mode information, making it highly suitable for practical applications where direct measurement of the system mode is not feasible. Secondly, a multi-parameter event-triggered mechanism (ETM) was incorporated to optimize data transmission, thereby reducing network bandwidth usage and enhancing the overall system efficiency. Thirdly, a novel integral asynchronous sliding mode surface was designed to ensure robust control performance in the presence of uncertainties and external disturbances, and the SMC law was synthesized to guarantee that the system trajectories reach the sliding surface within a finite time, thereby achieving the desired control performance. Fourthly, through rigorous mathematical analysis, we demonstrated that the closed-loop system with the proposed control strategy achieves H∞ stochastic stability. Lastly, a detailed simulation study using a robotic arm model was conducted to validate the effectiveness of the proposed control approach. Future work will explore the extension of this control strategy to nonlinear MJSs and its implementation on real-world systems. Additionally, the impact of more complex network topologies and communication delays will be investigated.

Author Contributions

Conceptualization, B.J.; Formal analysis, J.D.; Funding acquisition, B.J.; Investigation, J.D. and H.L.; Methodology, J.D.; Software, H.L.; Supervision, B.J.; Validation, J.D.; Writing—original draft, H.L.; Writing—review and editing, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (62003231), the Distinguished Young Scholar Project of Jiangsu Natural Science Foundation (BK20240159), the Science and Technology Planning Project of Suzhou City (SZS2022015), and the Project for constructing an excellent teaching team by “Qing Lan Project” of the Education Department of Jiangsu Province 2023.

Data Availability Statement

The data are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, B.; Karimi, H.R.; Yang, S.; Gao, C.; Kao, Y. Observer-based adaptive sliding mode control for nonlinear stochastic Markov jump systems via T–S fuzzy modeling: Applications to robot arm model. IEEE Trans. Ind. Electron. 2020, 68, 466–477. [Google Scholar] [CrossRef]
  2. Zhou, J.; Dong, H.; Feng, J. Event-triggered communication for synchronization of Markovian jump delayed complex networks with partially unknown transition rates. Appl. Math. Comput. 2017, 293, 617–629. [Google Scholar] [CrossRef]
  3. Dombrovskii, V.; Obyedko, T.; Samorodova, M. Model predictive control of constrained Markovian jump nonlinear stochastic systems and portfolio optimization under market frictions. Automatica 2018, 87, 61–68. [Google Scholar] [CrossRef]
  4. Chen, P.; Liu, R.; Li, Y.; Chen, L. Detecting critical state before phase transition of complex biological systems by hidden Markov model. Bioinformatics 2016, 32, 2143–2150. [Google Scholar] [CrossRef]
  5. Shi, P.; Li, F. A survey on Markovian jump systems: Modeling and design. Int. J. Control Autom. Syst. 2015, 13, 1–16. [Google Scholar] [CrossRef]
  6. Yao, D.; Ren, H.; Li, P.; Zhou, Q. Sliding mode output-feedback control of discrete-time Markov jump systems using singular system method. J. Frankl. Inst. 2018, 355, 5576–5591. [Google Scholar] [CrossRef]
  7. Cheng, J.; Park, J.H.; Wu, Z.G. Finite-time control of Markov jump lur’e systems with singular perturbations. IEEE Trans. Autom. Control 2023, 68, 6804–6811. [Google Scholar] [CrossRef]
  8. Dong, S.; Wu, Z.G.; Pan, Y.J.; Su, H.; Liu, Y. Hidden-Markov-model-based asynchronous filter design of nonlinear Markov jump systems in continuous-time domain. IEEE Trans. Cybern. 2018, 49, 2294–2304. [Google Scholar] [CrossRef]
  9. Chen, H.; Zong, G.; Gao, F.; Shi, Y. Probabilistic event-triggered policy for extended dissipative finite-time control of MJSs under cyber-attacks and actuator failures. IEEE Trans. Autom. Control 2023, 68, 7803–7810. [Google Scholar] [CrossRef]
  10. Fang, M.; Shi, P.; Dong, S. Sliding mode control for Markov jump systems with delays via asynchronous approach. IEEE Trans. Syst. Man. Cybern. Syst. 2019, 51, 2916–2925. [Google Scholar] [CrossRef]
  11. Song, J.; Zhou, S.; Niu, Y.; Cao, Z.; He, S. Antidisturbance control for hidden Markovian jump systems: Asynchronous disturbance observer approach. IEEE Trans. Autom. Control 2023, 68, 6982–6989. [Google Scholar] [CrossRef]
  12. Tao, Y.Y.; Wu, Z.G.; Guo, Y. Two-dimensional asynchronous sliding-mode control of Markov jump roesser systems. IEEE Trans. Cybern. 2020, 52, 2543–2552. [Google Scholar] [CrossRef]
  13. Zhang, L.; Cai, B.; Shi, Y. Stabilization of hidden semi-Markov jump systems: Emission probability approach. Automatica 2019, 101, 87–95. [Google Scholar] [CrossRef]
  14. Song, J.; Niu, Y.; Zou, Y. Asynchronous output feedback control of time-varying Markovian jump systems within a finite-time interval. J. Frankl. Inst. 2017, 354, 6747–6765. [Google Scholar] [CrossRef]
  15. Dong, S.; Liu, M.; Wu, Z.G.; Shi, K. Observer-based sliding mode control for Markov jump systems with actuator failures and asynchronous modes. IEEE Trans. Circuits Syst. II Express Briefs 2020, 68, 1967–1971. [Google Scholar] [CrossRef]
  16. Ogura, M.; Cetinkaya, A.; Hayakawa, T.; Preciado, V.M. State-feedback control of Markov jump linear systems with hidden-Markov mode observation. Automatica 2018, 89, 65–72. [Google Scholar] [CrossRef]
  17. Liu, Z.; Chen, X.; Yu, J. Adaptive sliding mode security control for stochastic Markov jump cyber-physical nonlinear systems subject to actuator failures and randomly occurring injection attacks. IEEE Trans. Ind. Inform. 2022, 19, 3155–3165. [Google Scholar] [CrossRef]
  18. Lin, W.; Zhang, B.; Yao, D.; Li, H.; Lu, R. Adaptive neural sliding mode control of Markov jump systems subject to malicious attacks. IEEE Trans. Syst. Man. Cybern. Syst. 2020, 51, 7870–7881. [Google Scholar] [CrossRef]
  19. Shen, X.; Wu, C.; Liu, Z.; Wang, Y.; Leon, J.I.; Liu, J.; Franquelo, L.G. Adaptive-gain second-order sliding-mode control of NPC converters via super-twisting technique. IEEE Trans. Power Electron. 2023, 38, 15406–15418. [Google Scholar]
  20. Zhang, F. Adaptive event-triggered voltage control of distribution network subject to actuator attacks using neural network-based sliding mode control approach. Electronics 2024, 13, 2960. [Google Scholar] [CrossRef]
  21. Zhao, M.; Qian, H.; Zhang, Y. Predefined-time adaptive fast terminal sliding mode control of aerial manipulation based on a nonlinear disturbance observer. Electronics 2024, 13, 2746. [Google Scholar] [CrossRef]
  22. Fathollahi, A.; Gheisarnejad, M.; Andresen, B.; Farsizadeh, H.; Khooban, M.H. Robust artificial intelligence controller for stabilization of full-bridge converters feeding constant power loads. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 3504–3508. [Google Scholar] [CrossRef]
  23. Jiang, B.; Karimi, H.R.; Li, B. Adaptive sliding mode control of Markov jump systems with completely unknown mode information. Int. J. Robust Nonlinear Control 2023, 33, 3749–3763. [Google Scholar] [CrossRef]
  24. Zhang, C.; Kao, Y.; Xie, J. Adaptive sliding mode control for semi-Markov jump uncertain discrete-time singular systems. Int. J. Robust Nonlinear Control 2023, 33, 10824–10844. [Google Scholar] [CrossRef]
  25. Zhang, Q.; Li, J.; Song, Z. Sliding mode control for discrete-time descriptor Markovian jump systems with two Markov chains. Optim. Lett. 2018, 12, 1199–1213. [Google Scholar] [CrossRef]
  26. Behera, A.K.; Bandyopadhyay, B.; Cucuzzella, M.; Ferrara, A.; Yu, X. A survey on event-triggered sliding mode control. IEEE J. Emerg. Sel. Top. Ind. Electron. 2021, 2, 206–217. [Google Scholar] [CrossRef]
  27. Zhang, G.; Xia, Y.; Li, X.; He, S. Multievent-triggered sliding-mode control for a class of complex dynamic network. IEEE Trans. Control Netw. Syst. 2021, 9, 835–844. [Google Scholar] [CrossRef]
  28. Li, X.; Ahn, C.K.; Zhang, W.; Shi, P. Asynchronous event-triggered-based control for stochastic networked Markovian jump systems with FDI attacks. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5955–5967. [Google Scholar] [CrossRef]
  29. Gao, H.; Wang, J.; Liu, X.; Xia, Y. Fuzzy fixed-time event-triggered consensus control for uncertain nonlinear multi-agent systems with memory based learning. IEEE Trans. Fuzzy Syst. 2024, 32, 3682–3692. [Google Scholar] [CrossRef]
  30. Cheng, J.; Park, J.H.; Zhang, L.; Zhu, Y. An asynchronous operation approach to event-triggered control for fuzzy Markovian jump systems with general switching policies. IEEE Trans. Fuzzy Syst. 2016, 26, 6–18. [Google Scholar] [CrossRef]
  31. Tang, F.; Wang, H.; Chang, X.H.; Zhang, L.; Alharbi, K.H. Dynamic event-triggered control for discrete-time nonlinear Markov jump systems using policy iteration-based adaptive dynamic programming. Nonlinear Anal. Hybrid Syst. 2023, 49, 101338. [Google Scholar] [CrossRef]
  32. Yu, Y.; Yang, R.; Li, D. Sliding mode control for uncertain Markovian jump systems: An event-triggered approach. In Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China, 25–27 July 2022; pp. 73–78. [Google Scholar]
  33. Tao, J.; Liang, R.; Su, J.; Xiao, Z.; Rao, H.; Xu, Y. Dynamic event-triggered synchronization of Markov jump neural networks via sliding mode control. IEEE Trans. Cybern. 2024, 54, 2515–2524. [Google Scholar] [CrossRef]
  34. Boukas, E.K. Stabilization of stochastic nonlinear hybrid systems. Int. J. Innov. Comput. Inf. Control 2005, 1, 131–141. [Google Scholar] [CrossRef]
  35. Jiang, B.; Gao, C.; Xie, J. Passivity based sliding mode control of uncertain singular Markovian jump systems with time-varying delay and nonlinear perturbations. Appl. Math. Comput. 2015, 271, 187–200. [Google Scholar] [CrossRef]
  36. Guan, C.; Fei, Z.; Feng, Z.; Shi, P. Stability and stabilization of singular Markovian jump systems by dynamic event-triggered control strategy. Nonlinear Anal. Hybrid Syst. 2020, 38, 100943. [Google Scholar] [CrossRef]
  37. Zhou, J.; Park, J.H.; Kong, Q. Robust resilient L2L control for uncertain stochastic systems with multiple time delays via dynamic output feedback. J. Frankl. Inst. 2016, 353, 3078–3103. [Google Scholar] [CrossRef]
  38. Cui, R.; Yang, C.; Li, Y.; Sharma, S. Adaptive neural network control of AUVs with control input nonlinearities using reinforcement learning. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 1019–1029. [Google Scholar] [CrossRef]
  39. Gheisarnejad, M.; Fathollahi, A.; Sharifzadeh, M.; Laurendeau, E.; Al-Haddad, K. Data-driven switching control technique based on deep reinforcement learning for packed E-Cell as smart ev charger. IEEE Trans. Transp. Electrif. 2024, 1. [Google Scholar] [CrossRef]
  40. Wu, H.N.; Cai, K.Y. Mode-independent robust stabilization for uncertain Markovian jump nonlinear systems via fuzzy control. IEEE Trans. Syst. Man Cybern. Part B 2006, 36, 509–519. [Google Scholar]
Figure 1. Structure of ETM-based asynchronous observer and controller design.
Figure 2. The Markov process $r_t$.
Figure 3. The observation process $s_t$.
Figure 4. Trajectories of original system states and observer states.
Figure 5. Integral sliding surface function.
Figure 6. Response of asynchronous controller.
Figure 7. Time and interval of events triggered for dynamic ETM.
Figure 8. Time and interval of events triggered for static ETM.
Table 1. Parameters M and J for subsystem i.

Mode i | Parameter M | Parameter J
1      | 2           | 0.2
2      | 4           | 0.6
3      | 6           | 0.8
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
