1. Introduction
Conventional power grid systems consist of a few large generators and many consumers. The system can be categorised into power generation, transmission, and distribution [
1]. Electricity demand is expanding rapidly, and climate change concerns are driving the integration of renewable distributed generators (DGs) into electricity systems. Renewable DGs are intermittent, with varying output that leads to demand-supply mismatches. The transition to a smart grid (SG) is necessary to address these mismatches, as it increases the power system’s reliability by making it more controllable and automated [
2]. One of the functions of the SG to address power distribution issues is economic dispatch (ED) [
3]. The ED’s role is to distribute the total power demand among spatially distributed generating units while minimising operational costs and considering various performance limitations. It optimises electric energy consumption under different conditions, accommodating renewable energy generation and achieving optimal cost while reducing the need for additional traditional power generation units to meet peak-hour demands [
4].
Traditional ED optimisation techniques include the gradient approach, Newton’s methods [
5], the lambda iteration method [
6], and linear programming. These solutions rely on global information about the power system. These ED approaches are carried out centrally in a control unit containing all the necessary details on generation capacity and total load demands. The central control unit resolves the economic dispatch problem (EDP) and transmits the control signal to all generating units [
7]. This centralised strategy is impractical for large-scale distributed generation due to its complexity, and the single central control unit constitutes a single point of failure. Additionally, a centralised system is not scalable, as adding new generating units may require modifying the entire system’s architecture. To address these issues, researchers have concentrated on distributed approaches to solving the ED problem [
8].
Distributed or decentralised approaches have recently been adopted due to the high deployment costs and single point of failure of centralised ED systems. A DG can be integrated into the system in a distributed setup without affecting scalability. In a distributed system, each node (DG or load) is connected to other nodes (called neighbours) and exchanges state information with them. In a DG system, information exchange occurs over a communication network. Often, such networks consist of noisy links due to power line communication or wireless channels [
9,
10,
11,
12], which can result in data losses.
One of the significant challenges of a distributed ED approach is obtaining consensus among the various agents (DGs or loads). Consensus control algorithms can be used to overcome the challenges of developing consensus among agents in dynamic systems. They are broadly classified as leader–follower-based and leaderless consensus control [
13]. In leader–follower consensus, one or a few agents are designated as leaders, and all other agents follow the leader(s). Only a few followers are directly connected to the leader; the remaining agents access the leader’s information indirectly via the agents that are connected to it [
14].
The communication cost of such a multi-agent system for solving the EDP has received less attention in the literature. The intercommunication frequency between agents is kept very high to accommodate power fluctuations in the grid because of renewable energy generators and varying loads. It is generally considered periodic [
15,
16], resulting in agents communicating even when it is not required (no load or generation power fluctuations) from a control perspective. This is the case, for example, when the generator has already assessed the load demand and functions appropriately or when the load demand has been stable for an extended period. This unnecessary communication can lead to congestion over a distributed communication network and introduce losses or delays. To address this, event-based communication strategies (event-triggered control and self-triggered control) were proposed in [
17]. In the event-triggered control (ETC) approach, agents communicate only when an event occurs, for example, when an agent’s state error exceeds a threshold. Measuring this state error requires continuous sensing by smart sensors that can track the constantly changing state, which increases the system cost due to sensor operation. Therefore, researchers have proposed an alternative approach called self-triggered control (STC). In STC, the agent’s state is sensed only when a trigger occurs: the sensor measures the system’s state and, using the error threshold, the STC predicts the next time the agent must trigger. STC is proactive, meaning that after calculating the next communication time, the agent can remain inactive until the next triggering instant [
18].
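As a brief, hedged illustration of this difference (the threshold sigma, the error model, and the function names below are assumptions for illustration, not the controllers developed later in this paper), the two triggering styles can be contrasted as follows:

```python
# Illustrative contrast between event-triggered and self-triggered communication
# for a single agent. All names and the error model are assumed placeholders.

def etc_should_broadcast(measure_error, sigma):
    """ETC: the state error is monitored continuously; broadcast once it crosses sigma."""
    return measure_error() >= sigma           # requires sensing at every instant

def stc_next_trigger_time(sense_state, error_growth_rate, sigma, t_now):
    """STC: sense once at the trigger, then predict when the error will reach sigma."""
    x = sense_state()                         # single measurement at the triggering instant
    rate = error_growth_rate(x)               # assumed worst-case growth rate of the error
    return t_now + sigma / rate               # agent stays inactive until this time
```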
In multi-access communications, data packets between nodes might be lost due to limited bandwidth, channel errors, and multiple agents using the same channel. While this communication loss can be mitigated by retransmitting the lost data, retransmission introduces communication delays that can negatively affect consensus convergence. When the connectivity between neighbours is low in a distributed EDP setup, the delays can cause system instability, and a consensus may not be reached [
12]. Therefore, a regression-based state estimation mechanism is proposed in this work to account for any state data lost during self-triggered control, avoiding retransmissions and congestion over the communication channel.
An incremental cost consensus (ICC) algorithm is used to demonstrate the effectiveness of our proposed distributed STC-based consensus control approach. In the ICC algorithm, the incremental cost captures how the generation cost changes when a distributed generator is included [
19]. To develop the ICC, all the neighbouring agents must be able to communicate with one another. A cascaded control mechanism is developed, where the inner loop is responsible for developing the consensus among the agents. The output of the inner loop is then fed into the outer loop, which develops an STC-based communication mechanism among the agents. A regression-based estimation mechanism is proposed for handling the loss of state information, aiming to avoid delays due to retransmissions. The main contributions of this article are as follows:
Development of an STC-based, fully distributed consensus control mechanism for the incremental cost consensus of DGs in economic dispatch.
Proposal of a regression-based estimation method for the agent state for robust self-triggered consensus control of ED.
The remainder of this paper is organised as follows. In
Section 2, we list and discuss related works.
Section 3 discusses the incremental cost consensus model. This section also discusses the requirements for designing the consensus control mechanism, including algebraic graph theory.
Section 4 discusses the differences between the existing and proposed approaches for self-trigger control and its benefits. The simulation results are presented in
Section 5. In
Section 6, we draw some conclusions and suggest future work.
2. Related Works
Using simple centralised control, the EDP was formulated in [
20], where the optimal cost was obtained using a gradient descent method under power generation constraints. A consensus mechanism can solve the EDP by achieving the optimal cost for all generators. Surveys of consensus mechanisms were conducted in [
7,
21]. Leader–follower-based consensus for the EDP was also studied in [
15,
16]. As a centralised approach typically has scalability issues, distributed optimisation is more advantageous, as surveyed in [
22]. A fully distributed EDP for smart grids was studied in [
8], where the requirement for a central control unit was eliminated and every generator worked independently. A hierarchical-based decentralised EDP was solved in [
6], where each agent solved its problem locally based on the local cost and generation capacity information. Other distributed EDPs were discussed in [
23,
24,
25,
26].
These articles mainly focused on the incremental cost consensus, and the communication between the agents was either neglected or simply periodic. Consensus-based local control was studied in [
27,
28], where the next communication time was calculated based on the state of the agent and its neighbours. Adaptive, event-triggered control (ETC) for the EDP was studied in [
11], where the network topology was unknown, and free-weight parameters were introduced for ETC. A fixed-time EDP using event-triggered control was studied in [
29], where the optimal solution was obtained within a fixed time. Therefore, combining a fixed-time EDP with an ETC mechanism can increase a controller’s computational burden for synchronising with the other controllers. A few other ETC-based EDPs were discussed in [
30,
31,
32]. An event-triggered EDP reduces the communication resources required but incurs the cost of continuous sensing. To avoid continuous monitoring, a self-triggered control (STC)-based EDP was solved in [
33], where the connection topology was not fixed. A
-logarithmic-based formulation of the EDP was introduced in [
34], where the local control used self-triggered control. Previously, we discussed self-triggered control for leader–follower-based consensus control of induction motors [
35], where we considered a synchronous STC mechanism with perfect communication. Here, we extend our previous work to asynchronous STC, a more practical approach for fully distributed systems. All of the prior work discussed above assumes that communication between neighbours is perfect; to the best of our knowledge, the problem of communication losses in STC-based consensus has not yet been addressed.
3. Incremental Cost Consensus
Consider multiple distributed power generators (DGs) and distributed loads connected to a smart grid. The EDP, formulated in Equation (
1), aims to minimise the power generation cost while meeting the power demand with the total distributed power generation, where each generator has generation capacity constraints [
20].
where $N$ is the number of distributed power generators, $P_i$ is the power generated by the $i$th generator, $C_i(P_i)$ is its cost of power generation, and $P_D$ is the total load demand of the system. $P_i^{\min}$ and $P_i^{\max}$ are the minimum and maximum power generation capacities of the $i$th generator, respectively. The cost function is defined as:
where $a_i$, $b_i$, and $c_i$ are the fuel cost coefficients. A conventional Lagrange multiplier, given by (3), is used to solve the EDP and obtain the incremental cost (IC), $\lambda_i$, of each generator, where the IC is the change in the cost of each generator whenever an agent enters or leaves the system. The optimal solution of the EDP is expected to have the same IC, $\lambda^{*}$, for all generators [
36]. We propose a distributed approach to achieve incremental cost consensus among all generators.
In distributed consensus control, each generating unit updates its own IC based on its current IC and the information received from its neighbours. One of the generators is viewed as a leader that knows the power demand and optimal cost of the system. The leader updates its IC based on the power demand, and the other generators in the group follow the leader, resulting in incremental cost consensus.
The Lagrangian of (
1) is defined as:
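For reference, the conventional forms of the dispatch problem, the quadratic cost, and its Lagrangian and incremental cost, consistent with the definitions above (a textbook sketch, not necessarily identical to the numbered Equations (1)–(3)), are:

```latex
% Standard quadratic-cost EDP, cost function, Lagrangian, and incremental cost
% (textbook forms assumed for illustration)
\begin{align}
  \min_{P_1,\dots,P_N} \ \sum_{i=1}^{N} C_i(P_i)
    \quad \text{s.t.} \quad \sum_{i=1}^{N} P_i = P_D,
    \quad P_i^{\min} \le P_i \le P_i^{\max}, \\
  C_i(P_i) = a_i P_i^{2} + b_i P_i + c_i, \\
  \mathcal{L} = \sum_{i=1}^{N} C_i(P_i) + \lambda \Big( P_D - \sum_{i=1}^{N} P_i \Big),
    \qquad
  \lambda_i = \frac{\partial C_i(P_i)}{\partial P_i} = 2 a_i P_i + b_i .
\end{align}
```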
3.1. Algebraic Graph Theory
An undirected graph $G$ represents the communication topology among the power generators. We define $G = (V, E)$, where $V = \{1, \dots, N\}$ is the vertex set representing the $N$ power generators and $E \subseteq V \times V$ is the set of edges connecting the nodes. $(i, j) \in E$ is the edge between two nodes $i$ and $j$. If $(i, j) \in E$, node $j$ is a neighbour of node $i$, and they can both communicate with each other.
The adjacency matrix is $A = [a_{ij}]$, where $a_{ij} = 1$ when node $i$ is connected to node $j$; otherwise, $a_{ij} = 0$. The in-degree matrix is a diagonal matrix $D = \mathrm{diag}(d_1, \dots, d_N)$ with $d_i = \sum_{j} a_{ij}$, where $d_i$ is the number of agents connected to the $i$th agent. The graph Laplacian matrix is $L = D - A$, where all the entries in each row of $L$ add up to zero. $L$ is a symmetric positive semi-definite matrix. The convergence rate (the time to reach IC consensus) depends on the eigenvalues of $L$ and is governed by the second-smallest eigenvalue of $L$. The entries of $L$ are $l_{ii} = d_i$ on the diagonal, whereas the off-diagonal entries are $l_{ij} = -a_{ij}$ [
13].
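As a small, hedged illustration of these graph quantities (the four-generator ring topology below is an assumed example, not a configuration from this paper), the Laplacian and its second-smallest eigenvalue can be computed as follows:

```python
# Minimal sketch (assumed example): build the graph Laplacian for four
# generators on a ring topology and check the second-smallest eigenvalue,
# which governs the consensus convergence rate.
import numpy as np

# adjacency matrix A of an undirected ring 1-2-3-4-1
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))                 # in-degree matrix
L = D - A                                  # graph Laplacian; every row sums to zero

eigvals = np.sort(np.linalg.eigvalsh(L))   # L is symmetric positive semi-definite
print("row sums:", L.sum(axis=1))          # all zeros
print("second-smallest eigenvalue:", eigvals[1])
```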
To obtain the consensus variable $\lambda$ and the power mismatch $\Delta P$ between the generated power and the demand, we need to calculate the partial derivatives of (3) with respect to $P_i$ and $\lambda$, respectively. By setting the derivative with respect to $P_i$ equal to zero, we obtain $\lambda = \partial C_i(P_i)/\partial P_i$ or, equivalently, $\lambda = 2 a_i P_i + b_i$, and by differentiating with respect to $\lambda$ and setting it to zero, we obtain the power mismatch $\Delta P = P_D - \sum_{i=1}^{N} P_i$. To obtain the IC consensus, $\lambda_i$ needs to be obtained iteratively; therefore, the update needs to be discretized such that the local controller has the IC information of all its neighbours at each time interval:
where $d_{ij}$ is the $(i,j)$th entry of a row-stochastic matrix derived from $L$, and $\lambda_j[k]$ is the IC of the $j$th neighbouring generator. $k$ is the time index at which the generator updates its IC. By using (7), the system is expected to converge to a common IC asymptotically [
16].
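As a hedged sketch of how such a discretized leader–follower ICC iteration can look in practice (the generator data, the row-stochastic weights, and the leader feedback gain below are assumptions for illustration, not values from this paper), consider:

```python
# Hedged sketch of a classical leader-follower ICC iteration (illustrative only).
import numpy as np

a = np.array([0.04, 0.03, 0.035])        # quadratic fuel-cost coefficients a_i
b = np.array([2.0, 3.0, 4.0])            # linear fuel-cost coefficients b_i
P_min, P_max = np.zeros(3), np.full(3, 100.0)
P_D = 150.0                              # total load demand
eps = 0.005                              # assumed leader feedback gain on the power mismatch

# row-stochastic weights for a line topology 1-2-3 (agent 0 acts as the leader)
W = np.array([[0.5, 0.5, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 0.5, 0.5]])

lam = np.array([4.0, 5.0, 6.0])          # initial incremental costs
for _ in range(500):
    P = np.clip((lam - b) / (2 * a), P_min, P_max)   # local power implied by the IC
    mismatch = P_D - P.sum()                         # demand minus total generation
    lam = W @ lam                                    # consensus step on the ICs
    lam[0] += eps * mismatch                         # only the leader sees the mismatch

P = np.clip((lam - b) / (2 * a), P_min, P_max)
print("ICs:", lam, "total generation:", P.sum())
```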
3.2. Consensus Control Updates
The consensus controller is responsible for developing the consensus of the IC of each generator based on the ICs of all the connected neighbours. Each generator calculates the control input for its own controller using the IC received from its neighbours. The control inputs for the leader and follower are given by (
8) and (
9), respectively.
where $\mathcal{V}_F$ represents the follower set, $\mathcal{V}_L$ represents the leader set, and $t$ is the continuous time index at which the control input is given to the controller.
The objective of the consensus control algorithm is to achieve the same incremental cost for all generators. The state error of the incremental cost for any agent
i is given by (
10) [
37].
Once the consensus is achieved, each generator’s state error is expected to be zero.
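As a hedged, generic illustration of such leader–follower consensus inputs and the associated state error (these simplified forms are assumptions for illustration and are not claimed to be identical to (8)–(10)), consider:

```python
# Generic leader-follower consensus inputs on the incremental cost (assumed
# simplified forms). Followers steer their IC towards their neighbours' ICs;
# the leader additionally reacts to the power mismatch.

def follower_input(lam_i, neighbour_lams):
    return -sum(lam_i - lam_j for lam_j in neighbour_lams)

def leader_input(lam_i, neighbour_lams, power_mismatch, gamma=0.01):
    # gamma is an assumed feedback gain on the demand-generation mismatch
    return -sum(lam_i - lam_j for lam_j in neighbour_lams) + gamma * power_mismatch

def state_error(lam_i, neighbour_lams):
    # disagreement of agent i's IC with its neighbourhood; zero once consensus is reached
    return sum(lam_i - lam_j for lam_j in neighbour_lams)
```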
4. Self-Triggered Control
To achieve consensus, each generator requires the IC information of its neighbours, and hence the generators must communicate frequently to exchange this information. Centralised EDP approaches use periodic communication strategies, leading to potentially significant communication overhead. With an increasing number of generators and growing load demand, SGs are moving towards a distributed scenario to reduce system complexity. In several scenarios, the frequent information exchanges caused by periodic communication are unnecessary; hence the move towards event-based, aperiodic communication. In our case of distributed STC, the generator updates its control input and broadcasts its state information to its neighbours [
27] whenever an event occurs, i.e., the state error goes beyond a certain limit. The state error is defined by (
11)
where $e_i(t) = \lambda_i(t_k^i) - \lambda_i(t)$ for $t \in [t_k^i, t_{k+1}^i)$, and $t_k^i$ is the last triggering time instant at which agent $i$ communicated its information to its neighbours; $\lambda_i(t_k^i)$ is the incremental cost at this time. The consensus controller of agent
i is updated at the next triggering time instance using the recent information updates it received from its neighbours, and this is given by (
12)
where $N_i$ is the set of neighbouring agents and $\lambda_j(t_{k'}^j)$ is the latest information received by agent $i$ from its neighbour $j$ at that neighbour’s last triggering instant $t_{k'}^j$. In the case of multiple updates from a neighbour, the most recently received one is used, with the triggering instants indexed by $k' \in \mathbb{N}$, where $\mathbb{N}$ is the set of natural numbers. The control law of agent $i$, given by (12), is updated at its own event times and on receipt of any information update from any of its neighbours.
Equation (
11) implies that
Therefore,
which leads to
This distributed model can be validated by analysing the stability of the system. Hence, we need to define an input-to-state-stable (ISS) Lyapunov function.
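A hedged sketch of a standard candidate, assuming (as in [27]-style analyses) that $x$ stacks the IC states, $e$ the measurement errors, and the closed loop is $\dot{x} = -L(x+e)$, is:

```latex
% Standard ISS Lyapunov candidate for event/self-triggered consensus
% (assumed form for illustration, not necessarily the paper's exact equation)
\begin{equation}
  V(x) = \tfrac{1}{2}\, x^{\top} L\, x, \qquad \dot{x} = -L\,(x + e).
\end{equation}
```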
Differentiating, we obtain
Now, using the following Young’s inequality [
38]
we obtain
and assume that
x is bounded by
for all
. From algebraic graph theory, we know that the graph is symmetric, which allows us to interchange the last term of (
19), i.e.,
, where
such that
. Therefore, it results in an error-enforcing condition as follows:
which yields
negative definite when
, where
and if
for some agent
i at any updating instance, this means that the agent’s state has not changed and hence the error is equal to zero. Hence, there is no need to send additional updates, and no events should be triggered [
39]. This is the fundamental difference between our triggering rule and [
27]. Thus, for each
i, an event is triggered at
Let us define
Therefore, (
20) can be rewritten as
From [
27], we know that
and from (
9), we obtain,
and here,
; therefore,
where
. The next control update is expected to happen at
Now, defining
and
. Also, let
. Recalling the proposition
and using
, we obtain
Rearranging
, we obtain
Defining
and
and using (
23) and (
26), (
20) can be rewritten as
The self-triggered control law for agent $i$ is defined as follows: if there is a time interval $\tau_i$ such that the triggering condition is satisfied, the next triggering instance for agent $i$ occurs at most $\tau_i$ after the current one, i.e., $t_{k+1}^i \le t_k^i + \tau_i$. If any agent is triggered, the control laws of all the connected neighbours are expected to be updated.
To calculate the intercommunication time interval of each agent, we need to solve (
29) for
. Since we already know that
, the solution for
is
Both of the terms for the intercommunication time may be negative, resulting in a non-positive interval in practice. This scenario is not practical and can cause the system to exhibit local Zeno behaviour. Therefore, a minimum intercommunication time is required so that the system does not reach Zeno behaviour. This minimum time interval can be obtained by taking the time derivative of
[
27]. Thus,
Let
; hence, (
31) becomes,
q satisfies the bound
, where
is obtained by:
with
. Therefore, the minimum intercommunication time is bounded by
, which satisfies
. The solution to (
31) is
, resulting in
[
27].
Hence, the next communication time for agent
i is expected to be
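Putting these pieces together, a hedged sketch of one agent’s self-triggered cycle is given below; the function `next_interval` stands in for the closed-form inter-communication time derived above, `tau_min` for the enforced minimum interval, and all numerical values are assumptions for illustration:

```python
# Hedged sketch of a self-triggered schedule for several agents (illustrative
# only, not the paper's algorithm). A minimum interval rules out Zeno behaviour.
import heapq
import random

def next_interval(agent_id, state):
    # placeholder for the triggering-rule bound; assumed form for illustration
    return 0.5 * abs(state) + 0.01

def run_schedule(states, tau_min=0.05, t_end=5.0):
    events = [(0.0, i) for i in states]              # every agent triggers at t = 0
    heapq.heapify(events)
    log = []
    while events:
        t, i = heapq.heappop(events)
        if t > t_end:
            break
        x = states[i]                                # single "measurement" at the trigger
        log.append((round(t, 3), i, round(x, 3)))    # stands in for broadcasting to neighbours
        states[i] = x + random.uniform(-0.1, 0.1)    # state drifts until the next trigger
        dt = max(next_interval(i, x), tau_min)       # lower-bound the inter-communication time
        heapq.heappush(events, (t + dt, i))          # agent remains inactive until its next trigger
    return log

print(run_schedule({0: 1.0, 1: 0.2, 2: -0.5})[:6])
```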
Consensus over Lossy Communication Channels
To develop the incremental cost (IC) consensus among the distributed generators (DGs), every generator needs to broadcast its state information at its communication time, and its neighbours receive that information. This work considers that the communication between the generators takes place over a lossy, multi-access communication channel. Losses occur due to limited bandwidth resources, which create multi-access congestion, and due to noise and interference in the channel (e.g., a wireless channel), which cause delays. These delays are mitigated by the use of self-triggered control, in which the agents do not all try to broadcast their IC states at the same periodic interval but instead communicate at their own STC times. Noise and interference in the channel can also corrupt the broadcast data or cause it to be lost.
There are multiple ways to tackle data loss due to noise in the channel. The most common one is to reuse the previously received value of a DG if a data packet is lost, but due to the rapidly changing output of DGs based on renewable sources, this can slow down the convergence and increase the cost. Another approach is to retransmit the lost packet, which also introduces delays and slows down the convergence process.
Therefore, to avoid these issues, this work proposes estimating the information contained in the lost data. Since, in incremental cost consensus (ICC), all followers have to develop a consensus with the leader and the leader has a linear cost curve, linear regression-based state estimation serves the purpose of estimating the lost state data. An agent updates its control input using the estimated state of its neighbours rather than the previously received and stored state information. To accommodate this approach, the local controller definition in (12) is updated as
where $\hat{\lambda}_j(t_k^i)$ is the estimated cost of neighbouring agent $j$ at the communication time $t_k^i$ of agent $i$. The estimate $\hat{\lambda}_j(t_k^i)$
is defined as
where $\bar{\lambda}_j$ is the average of the last $r$ received values of the cost of agent $j$ at agent $i$ (a fixed $r$ is used for the simulations), and $\bar{t}_j$ is the average of the last $r$ communication times of agent $j$. The slope, $m_j$, is defined as
The error Equation (
20) is also updated as
where
.
Hence, the STC proposed in this work becomes:
The communication mode between DGs considered in much of the literature is based on an event-triggered approach [
11,
29,
32,
34], in which continuous monitoring of the IC is required. There has been much less work reported using an STC approach for the EDP [
28,
33,
40]. Furthermore, where STC-based communication between the DGs has been considered, perfect communication between the generators is assumed, which is not always the case. Therefore, a regression-based STC (
39) is designed to compensate for data loss due to communication errors or outages. The proposed regression-based STC for the EDP is shown in
Figure 1.
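As a hedged sketch of the regression-based estimate described above (the helper name, the sample history, and the default r = 5 are assumptions for illustration), a lost IC value of neighbour j can be extrapolated from its last r received samples as follows:

```python
# Hedged sketch: estimate a neighbour's IC when its packet is lost by fitting a
# line through the last r received (time, IC) samples and extrapolating to the
# current communication time of agent i.
import numpy as np

def estimate_ic(times, values, t_now, r=5):
    """Least-squares linear extrapolation from the last r received samples."""
    t = np.asarray(times[-r:], dtype=float)
    lam = np.asarray(values[-r:], dtype=float)
    t_bar, lam_bar = t.mean(), lam.mean()          # averages of the last r samples
    denom = np.sum((t - t_bar) ** 2)
    slope = 0.0 if denom == 0 else np.sum((t - t_bar) * (lam - lam_bar)) / denom
    return lam_bar + slope * (t_now - t_bar)       # extrapolated IC at t_now

# example: neighbour j's IC received at its last five STC instants
history_t = [0.0, 0.4, 0.9, 1.3, 1.8]
history_lam = [5.0, 5.2, 5.45, 5.65, 5.9]
print(estimate_ic(history_t, history_lam, t_now=2.2))   # used if the packet at 2.2 is lost
```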
6. Conclusions
A distributed EDP is solved in this work, in which each DG agent communicates with its neighbours over a potentially lossy communication channel to develop a consensus on their incremental cost. The inter-sampling time for communication is obtained using self-triggered control (STC). In the proposed STC, a regression-based estimation mechanism is used to estimate the latest state information and increase the intercommunication time. The proposed regression-based STC is also validated over a lossy communication channel, where the ICC state information is lost due to data packet loss and the state is therefore estimated using the linear regression model. The Zeno behaviour of conventional STC is avoided in this work by enforcing a minimum inter-sampling time. The simulation results validate that the proposed regression-based STC outperforms simple STC and that the performance of the ICC is not affected by packet loss over the lossy channel. The proposed STC increases the inter-sampling time and reduces the communication frequency, thereby reducing communication costs, and improved performance is achieved over a lossy channel with regression-based loss estimation.
The proposed regression-based STC is proactive, which means that once the next communication time is calculated, communication is only expected to happen at that time. Consequently, any state change between planned communications would not be addressed; we will tackle this issue in future work. Furthermore, linear regression is used to estimate the state in order to keep the computational complexity low. However, if the reference value for the consensus is not linear, the linear regression-based estimation can degrade the system’s performance; hence, other estimation mechanisms must be studied.