Article

Using Stochastic Dual Dynamic Programming to Solve the Multi-Stage Energy Management Problem in Microgrids

Departamento de Ingeniería Industrial, Facultad de Ingeniería, Universidad de los Andes, Cr 1 Este No. 19A-40, Bogotá 111711, Colombia
*
Author to whom correspondence should be addressed.
Energies 2024, 17(11), 2628; https://doi.org/10.3390/en17112628
Submission received: 6 May 2024 / Revised: 16 May 2024 / Accepted: 21 May 2024 / Published: 29 May 2024
(This article belongs to the Section A1: Smart Grids and Microgrids)

Abstract

In recent years, the adoption of renewable energy sources has significantly increased due to their numerous advantages, which include environmental sustainability and economic viability. However, the management of electric microgrids presents complex challenges, particularly in the orchestration of energy production and consumption under the uncertainty of fluctuating meteorological conditions. This study aims to enhance decision-making processes within energy management systems specifically designed for microgrids that are interconnected with primary grids, addressing the stochastic and dynamic nature of energy generation and consumption patterns among microgrid users. The research incorporates stochastic models for energy pricing in transactions with the main grid and probabilistic representations of energy generation and demand. This comprehensive methodology allows for an accurate depiction of the volatile dynamics prevalent in the energy markets, which are critical in influencing microgrid operational performance. The application of the Stochastic Dual Dynamic Programming (SDDP) algorithm within a multi-stage adaptive framework for microgrids is evaluated for its effectiveness compared to deterministic approaches. The SDDP algorithm is utilized to develop robust strategies for managing the energy requirements of 1, 2, and 12 prosumers over a 24 h planning horizon. A comparative analysis against the precise solutions obtained from dynamic programming via Monte Carlo simulations indicates a strong congruence between the strategies proposed by the SDDP algorithm and the optimal solutions. The results provide significant insights into the optimization of energy management systems in microgrid settings, emphasizing improvements in operational performance and cost reduction.

1. Introduction

The quest for novel and efficient energy sources, their distribution, and the transformation for use in industrial and residential sectors while mitigating environmental impact stands as a pivotal aspect of societal progression. Central to this endeavor is the electrical grid, a complex system that channels energy from generation sites to the final consumers. This grid encompasses generation plants, transmission lines, and distribution networks. Generation plants convert primary energy sources into electrical energy, which is then conveyed through transmission lines to distribution networks, reaching various load centers.
As economies grow and the well-being of citizens improves, the demand for electrical energy escalates. Traditionally, this growing demand has been met by expanding and reinforcing the electrical grid, constructing new power generation facilities, and enlarging substation capacities. However, this traditional approach, while effective, exerts considerable pressure on natural resources and the environment and demands substantial investment. Consequently, governments are exploring alternative strategies, with the integration of Distributed Energy Resources (DER) being a notable example [1].
DERs encompass unconventional energy sources such as solar and wind power, along with various energy storage systems like batteries and flywheels. An effective method to incorporate these alternative energy sources is via microgrids. A microgrid is a small-scale, decentralized electrical network that combines renewable energy sources, storage technologies, and load management to deliver power in a reliable and sustainable manner to a localized community [2]. It can operate in conjunction with the main electrical grid or independently. Microgrids include a range of components like renewable generators, energy storage devices, electric vehicles, traditional power generators, and sophisticated control systems.
The concept of “prosumers”—entities that both produce and consume energy within the microgrid—is central to microgrid operations. The operational dynamics of a microgrid involve coordinating diverse energy sources and storage systems to efficiently meet its energy needs, facilitated by energy management systems that ensure a balance between supply and demand. The overarching goal is to maximize the use of renewable energy, reduce reliance on the main power grid, and achieve cost savings and enhanced reliability [1,3]. Microgrids not only contribute to a sustainable energy matrix but also promote a more equitable energy market [4,5,6].
If appropriate control techniques are implemented, microgrids can enhance the reliability of the electrical system, increase the utilization of renewable resources, and generate greater well-being in communities [7]. Control tasks within microgrids are typically categorized into three levels [8]:
i. Primary-level tasks: These tasks ensure the stability of frequency and voltage levels across microgrid components, with response times ranging from 1 to 10 milliseconds.
ii. Secondary-level tasks: These tasks compensate for any deviations in voltage and power levels caused by the primary control, with response times ranging from 100 milliseconds to 1 s.
iii. Tertiary-level tasks: These tasks manage the power flow between the microgrid and the main grid, ensuring optimal economic operation via an energy management system (EMS), with response times varying from seconds to hours.
Energy management in microgrids is inherently stochastic and dynamic due to uncertainties in energy demand, renewable generation, and pricing [9]. The control system must dynamically make decisions to meet the energy needs within each stage of the designated time horizon while maintaining system stability by keeping voltages and power flows within acceptable limits [10]. This requires minimizing operational costs and achieving other objectives via complex stochastic optimization, necessitating advanced control strategies for economic and stable operation [11].
The typical planning horizon for a microgrid extends 24 h into the future, with decision stages of either 15 min or one hour [12]. These short decision stages are essential to capture the variability and uncertainty in renewable energy generation and demand. The storage capacities in the microgrid range from small residential batteries (around 3 kWh) to larger community storage systems (up to 30 kWh). Under average conditions, fully charging or discharging these storage systems takes approximately 4–8 decision periods (1–2 h). This frequent updating of decisions is crucial to managing the rapid changes in generation and demand. Effective EMS are essential for the success of microgrids, ensuring real-time balance and system stability [13]. Poor energy management can lead to increased operational costs and potential instability in the main grid [14]. Traditional algorithms such as stochastic dynamic programming (SDP) and nested Benders decomposition (NBD) struggle with the “curse of dimensionality”, where the exponential growth in scenarios from stochastic variables overwhelms computational capacities [15]. As stated before, for the typical planning horizon of 24 h with a stage length of 1 h, even for a reduced number of scenarios, the computational effort required by SDP and NBD is prohibitive. Approximate methods offer a balance between accuracy and computational feasibility and, owing to their reduced computational requirements, are an attractive alternative for solving the EMS problem.
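The storage figures above can be checked with simple arithmetic. In the sketch below, the capacities (3 kWh residential, 30 kWh community) come from the text, while the charge powers (3.5 kW and 15 kW) are illustrative assumptions chosen only to reproduce the quoted 4–8 period range at 15 min stages:

```python
import math

# Back-of-the-envelope check: how many 15-min decision periods does a full
# charge take?  Capacities are from the text; charge powers are assumed.
def periods_to_full_charge(capacity_kwh, charge_power_kw, period_hours):
    """Whole decision periods needed to charge from empty to full."""
    return math.ceil(capacity_kwh / (charge_power_kw * period_hours))

small = periods_to_full_charge(3.0, 3.5, 0.25)    # residential battery -> 4 periods (1 h)
large = periods_to_full_charge(30.0, 15.0, 0.25)  # community storage  -> 8 periods (2 h)
print(small, large)
```

At these assumed powers, the result matches the 4–8 period (1–2 h) figure quoted above.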
This backdrop sets the stage for the exploration of advanced approximate methods like stochastic and dynamic optimization, machine learning, and artificial intelligence in improving microgrid energy management [16]. The Stochastic Dual Dynamic Programming (SDDP) algorithm shows promise in optimizing microgrid operations by accounting for the variability and dynamism of renewable energy sources. This paper aims to critically review the literature on energy management optimization in microgrids, with a focus on stochastic and dynamic modeling techniques, and to develop a computational simulation using the SDDP algorithm to strategize control.
The primary goal of this paper is to evaluate the efficacy of the SDDP algorithm in optimizing energy management decisions within microgrids and to explore its application in simulated scenarios. This study has significant implications for the reliability and efficiency of power supplies in microgrids, aiming to reduce operational costs and improve overall energy efficiency. The paper will proceed with a comprehensive literature review on the use of SDDP in microgrid EMS, followed by a detailed description of the SDDP algorithm and its application in computational simulations. The findings of this study will be presented, culminating in an analysis and conclusion that will also suggest directions for future research in this field.
The structure of this document is delineated as follows: Section 2 provides a comprehensive review of the literature pertaining to the algorithms employed to tackle the stochastic dynamic energy management challenge. Section 3 delineates the SDDP algorithm alongside the methodology employed for its enhancement. Subsequently, in Section 4, the outcomes of the research are presented. Conclusively, Section 5 offers a concise examination alongside the conclusions drawn from this study, as well as prospective avenues for future research.

2. Literature Review

In the field of microgrid management, a diverse array of methodologies has been explored to enhance system efficiency and cost-effectiveness. These methodologies span from traditional mathematical programming to sophisticated applications of metaheuristics and artificial intelligence. Collectively, these techniques are categorized into four primary groups, as illustrated in Figure 1: artificial intelligence techniques, conventional methods, metaheuristic-based methods, and others [17].
Within the realm of artificial intelligence, notable algorithms include fuzzy logic [18], game theory [19], and various forms of reinforcement learning, such as Q-Learning [20], DQN [21], actor–critic methods [22], and TD3 [23]. These have been applied with considerable success to the challenge of energy management in microgrids. For a detailed examination of reinforcement learning applications in this context, the reader is directed to the comprehensive review in reference [24]. Additionally, metaheuristics applied to energy management systems (EMS) can be subdivided into three categories: swarm intelligence-based EMS, evolutionary algorithm-based EMS, and other related approaches [17]. Noteworthy is the work by Elsied et al. [25], which employs a genetic algorithm to concurrently optimize cost and CO2 emissions, marking a significant advancement toward sustainable microgrid management. However, this approach does not fully incorporate the stochastic variability inherent in renewable energy sources and loads, as detailed in references [17,26].
A widely recognized conventional approach is mixed-integer linear programming (MILP), known for its relative simplicity and efficacy in the domain. Kanchev et al. present a deterministic solution for an isolated microgrid comprising a solar plant, a battery system, and a gas generation unit [27]. Further studies, such as those by Tsikalakis et al. [28] and Riffoneau et al. [29], have utilized mathematical programming to focus on cost minimization in microgrids integrating renewable energy sources. These works explore sequential quadratic programming and dynamic programming, respectively, with the former considering interactions with the main grid and the latter optimizing battery charge states. Sukumar et al. [30] extend these methods to both isolated and grid-connected systems via linear programming and MILP, albeit without addressing conductor losses.
In the realm of stochastic optimization for microgrid energy management, both two-stage and multi-stage stochastic programming models play pivotal roles. Two-stage stochastic models, extensively explored in the literature [31], are structured to make initial decisions before uncertainties are revealed, and subsequent recourse actions are determined once uncertainties become apparent. For instance, a model detailed in [32] operates over two 24 h intervals, optimizing operational costs while accommodating uncertainties in a 33-node microgrid that integrates a diesel generator, a battery system, a solar plant, and a wind turbine.
Expanding beyond two-stage models, multi-stage stochastic programming introduces a more granular approach to decision-making under uncertainty [31]. These models generate a scenario tree where each node represents a possible state of the world, and edges depict transitions between these states across multiple stages [31]. This methodology is beneficial for capturing the temporal dynamics and uncertainties in microgrid operations, as exemplified by the application of NBD [33]. NBD, an advanced form of Benders decomposition tailored for multi-stage problems, efficiently tackles the complexities involved in these models by decomposing them into simpler subproblems, which are solved iteratively. This approach was utilized in [34] to manage the operations of a diesel generator, adjusting dynamically to fluctuations in load and photovoltaic energy production, thereby optimizing operational costs and enhancing microgrid frequency stability via strategic adjustments.
Turning to SDP, this method is employed for solving convex optimization problems aimed at minimizing operational costs in microgrids, as seen in [35]. However, SDP is often limited by the “curse of dimensionality”, which becomes a significant barrier when dealing with a moderate to large number of decisions or state variables, rendering the method impractical for broader applications. Addressing these challenges, Huangfu et al. [9] developed a bi-objective dynamic programming approach that integrates cost minimization with curtailment rate reduction. Further, a study in [36] combined deterministic and stochastic dynamic programming strategies to manage energy, although it assumed a deterministic load profile and energy prices, with limited stochastic inputs, which could restrict the applicability of the results in more variable real-world scenarios.
Continuing from the challenges of the “curse of dimensionality” encountered in both SDP and multi-stage stochastic models, it is essential to recognize how this phenomenon complicates the practical application of advanced stochastic models. As the number of stages in a model increases, the associated scenario tree expands exponentially, thus significantly complicating the computation. This exponential growth renders the application of NBD algorithms impractical for problems encompassing more than five stages, as noted in [37].
To address the computational hurdles imposed by the “curse of dimensionality”, several approximation methods have been developed. One of the most influential of these methods is the Stochastic Dual Dynamic Programming (SDDP) technique, introduced by Pereira and Pinto in 1991 [38]. SDDP is particularly well-suited to tackling multi-stage stochastic problems due to its innovative approach to handling the complexities associated with the scenario trees.
The core principle of SDDP hinges on the assumption of stage-wise independence, which posits that uncertainties across different stages of the decision-making process are independent. This assumption simplifies the model significantly, as it implies that all nodes within a single stage will share identical child nodes featuring the same realizations of uncertainty and their corresponding probabilities. This structure drastically reduces the complexity of the scenario tree and facilitates a more manageable approach to solving the model.
Due to the stage-wise independence, instead of requiring a distinct value function for each node within the scenario tree, SDDP utilizes a singular value function for each stage [37]. This streamlining allows for a significant reduction in computational demand. To approximate the value functions effectively, SDDP employs a set of linear functions or hyperplanes. These hyperplanes are generated via the random sampling of scenarios, which capture the diverse possible outcomes and uncertainties at each stage of the decision process.
By approximating the value functions via these hyperplanes, SDDP provides a powerful and efficient means of solving complex multi-stage stochastic problems that would otherwise be intractable due to the “curse of dimensionality”. This method not only enhances the feasibility of applying stochastic models to real-world problems but also ensures that decisions made at each stage are optimized based on the most probable scenarios and their outcomes.
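The cut-based approximation just described can be illustrated on a toy convex future-cost function. Everything below (the penalty price, the demand distribution, the trial points) is an illustrative assumption, not the paper's model; the point is only that supporting hyperplanes built at sampled trial points under-approximate the value function, as in SDDP's backward pass:

```python
import numpy as np

# Toy illustration of SDDP's cut-based value-function approximation (not the
# full algorithm).  Future cost of leaving x kWh in storage when next-stage
# demand d is random and any shortfall is bought at price c:
#     Q(x) = E[ c * max(d - x, 0) ]   -- convex and piecewise linear here.
rng = np.random.default_rng(0)
c = 2.0
demands = rng.uniform(1.0, 5.0, size=1000)        # sampled demand scenarios

def Q(x):
    return np.mean(c * np.maximum(demands - x, 0.0))

def cut_at(x_hat):
    """Supporting hyperplane (Benders cut) of Q at x_hat.
    The slope is a subgradient: -c * P(d > x_hat)."""
    slope = -c * np.mean(demands > x_hat)
    intercept = Q(x_hat) - slope * x_hat
    return slope, intercept

# Build cuts at a few trial points, as the backward pass of SDDP would.
cuts = [cut_at(x) for x in (0.5, 2.0, 3.5)]

def Q_approx(x):
    return max(s * x + b for s, b in cuts)        # outer (lower) approximation

for x in (1.0, 2.5, 4.0):
    assert Q_approx(x) <= Q(x) + 1e-9             # cuts never overestimate Q
```

Adding cuts at more trial points tightens the lower approximation, which is exactly how SDDP refines its stage-wise value functions across iterations.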
In the comparison of various stochastic optimization algorithms, Table 1 illustrates the distinctions between SDDP, NBD, and the deterministic equivalent method (DE), revealing significant differences in their operational frameworks and areas of applicability.
SDDP offers a notable reduction in computational effort per iteration by approximating a single cost function per stage rather than individual functions for each node within a stage, as required in more complex stochastic models. This simplification is achieved via the assumption of stage-wise independence, which posits that decisions and uncertainties in one stage do not depend on those in other stages. While this assumption facilitates a more streamlined computational process, it does maintain the exponential complexity in the number of decision stages—a characteristic it shares with NBD. Despite this complexity, SDDP is particularly advantageous in scenarios with continuous action spaces where the state and action spaces do not need to be discretized, unlike in traditional SDP.
NBD, on the other hand, involves decomposing a multi-stage problem into a series of sub-problems that are solved sequentially, linking the solutions via dual and primal cuts. This method is highly effective for a smaller number of stages due to its detailed and precise handling of dependencies between stages. However, as the number of stages increases, the complexity of managing and solving these interconnected sub-problems grows, making it less practical for very large-scale applications.
The deterministic equivalent method converts stochastic problems into a single large deterministic problem by considering all possible scenarios simultaneously. While this approach can theoretically guarantee an optimal solution, it becomes computationally intensive and practically infeasible as the number of scenarios and stages increases due to the combinatorial explosion in the size of the problem.
In practical applications, SDDP has often demonstrated superior performance compared to NBD and traditional SDP [37], particularly in settings involving continuous action spaces and a moderate number of decision stages. The effectiveness of SDDP in such environments can be attributed to its ability to handle large-scale problems more efficiently via the simplification of cost function approximations. However, this efficiency comes at the cost of assuming stage-wise independence, which may not be a valid assumption in all contexts and could lead to suboptimal decisions if inter-stage dependencies are significant.
One innovative application of SDDP can be observed in [39], where a hybrid strategy combining SDP and SDDP was employed for the economic dispatch of a hydroelectric power system in Norway. This study considered stochastic river inflow, along with energy and reserve prices modeled as Markov chains with discrete states, over a two-year planning horizon with weekly decision stages. This approach allowed for precise management under uncertainty, although it operated under the assumption that energy demand remained deterministic, and no energy storage was available within the network.
In a microgrid context, ref. [40] applied SDDP to manage a system with only four buses, where demand, renewable generation, and energy prices were all treated as stochastic. The results highlighted energy cost savings compared to simpler management policies; however, the limited size of the microgrid posed challenges for generalizing findings to larger, more complex systems.
Further expanding the scope, ref. [41] integrated SDDP with model predictive control (MPC) for comprehensive energy management in an industrial park featuring diverse energy sources and loads, including renewable generation and a combined heat and power (CHP) system. Despite this complexity, the study only considered deterministic energy prices and covered a small-scale network, which could hinder broader applicability.
Moreover, ref. [42] showcased the use of SDDP for economic dispatch within a transmission system over a 24 h period with 15 min stages, emphasizing the algorithm’s efficacy in handling renewable generation variability represented via a discrete Markov process. Yet, this study overlooked the dynamics of energy transactions and the stochasticity in demand or prices, which could affect the robustness of the solution in real-world scenarios.
Additionally, ref. [43] employed a robust version of SDDP for a microgrid with three diesel generators, a wind turbine, and a battery system. While this study effectively addressed the stochastic nature of renewable generation and demand, it did not consider the potential for energy transactions among prosumers, thus missing out on modeling a key aspect of microgrid dynamics.
In ref. [44], the SDDP method was again utilized for microgrids, focusing particularly on renewable generation as a stochastic element. However, this study similarly neglected the complexities of prosumer interactions and diverse pricing mechanisms, limiting its scope.
These cases reflect a broader trend in the application of SDDP, primarily for long-term planning of large-scale energy systems, as seen in [37], and short-term planning within microgrids with decision stages ranging typically from 15 min to 1 h. Despite the substantial progress in utilizing SDDP for dynamic and stochastic energy management, significant limitations persist. Many studies focus predominantly on renewable generation variability, often overlooking the stochastic nature of energy demand or price fluctuations. In addition, most of these studies consider consumers as passive agents, limiting the ability to model energy transactions among them. Additionally, the computational intensity of SDDP and the challenges in scaling the studies to more comprehensive and interactive networks pose substantial barriers. These constraints highlight the need for further research to enhance the method’s practicality and accuracy, ensuring it can more effectively represent and manage the complexities of real-world energy systems.
This study aims to bridge these gaps by applying the SDDP algorithm in the context of microgrid EMS. This way, it seeks to enhance the existing methodologies’ capabilities to handle the stochastic and dynamic aspects of microgrid operations, which are critical for efficient and cost-effective energy management. Thus, the main contributions of this work can be summarized as follows:
  • Prior research has often overlooked the full integration of stochastic elements and dynamic conditions in microgrid energy management. This study addresses this gap by employing the SDDP algorithm, which excels in handling the stochastic nature of energy generation, demand, and pricing. It thus extends the existing models by incorporating a more realistic treatment of uncertainty, providing a methodological advancement over traditional deterministic and simpler stochastic methods.
  • The integration of real-time data into the SDDP framework addresses another significant gap in the literature, where previous studies have not fully leveraged the potential of data for operational optimization in microgrids. This approach enhances both the economic efficiency and reliability of microgrid operations, providing practical benefits in terms of cost savings and energy utilization.
  • The literature indicates a limited exploration of multi-stage stochastic optimization [42] in microgrids, particularly with a focus on multi-stage decision processes and adaptive control strategies. By applying the SDDP method, this study introduces a comprehensive framework that captures the multi-temporal nature of decision-making in microgrids, allowing for more dynamic adjustments to operational strategies as new information becomes available.
  • While existing studies have often focused on specific types of microgrids, this work demonstrates the versatility of the SDDP algorithm across various configurations and scales. The inclusion of energy transactions among prosumers is another key contribution of this study, emphasizing the importance of this factor in enabling greater democratization of the energy market and contributing to the development of the electric internet [45]. This contribution addresses the need for adaptable solutions that can be customized to different microgrid setups, from rural, isolated systems to interconnected urban networks, filling a critical gap in scalable and flexible energy management solutions.
  • Finally, the research lays a foundation for future studies in the field by highlighting the effectiveness of the SDDP method in managing the complexity of microgrid systems. It opens new avenues for exploring further enhancements in algorithmic approaches and their applications.

3. Stochastic Decision Processes in Microgrid Energy Management

The management of energy within microgrids epitomizes a quintessential stochastic optimization problem where decisions must be made under multiple layers of uncertainty. These uncertainties arise from volatile renewable energy outputs, fluctuating demand profiles, and dynamic pricing in energy markets. Effective EMS for microgrids must therefore not only respond to immediate energy requirements but also anticipate and strategically manage future states under uncertainty. In general, the decision process can be summarized as follows:
  • Stage Setup: At each decision stage $t$, the algorithm updates its state based on the latest available information about the uncertainties, denoted by $\xi_t$.
  • Decision Execution: Once the uncertainty $\xi_t$ for the current stage is known, the decision $a_t$ is made. This decision is aimed at optimizing specific performance metrics, such as cost minimization, energy efficiency, or other operational objectives.
  • Objective Function: The core objective at each stage is formulated to minimize expected costs or maximize efficiency, integrating the newly revealed uncertainties. This dynamic adaptation helps to effectively manage the variable nature of microgrids.
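The stage-wise process above can be sketched as a simple simulation loop. The policy here is a deliberately naive placeholder (serve net load from the battery, then buy the remainder), standing in for the SDDP-derived decision rule; all numerical values are illustrative assumptions:

```python
import random

# Schematic of the stage-wise decision process: observe xi_t, decide a_t,
# update the state, accumulate cost.  The policy is a naive placeholder.
random.seed(1)
T = 24                                   # 24 hourly stages
soc, capacity = 5.0, 10.0                # battery state of charge [kWh]
total_cost = 0.0

for t in range(T):
    # Stage setup: observe the realized uncertainty xi_t for this stage.
    xi = {
        "gen":   random.uniform(0.0, 3.0),   # renewable output [kWh]
        "load":  random.uniform(1.0, 4.0),   # demand [kWh]
        "price": random.uniform(0.1, 0.4),   # buy price [$/kWh]
    }
    # Decision execution: pick the action a_t given xi_t and the current state.
    net = xi["load"] - xi["gen"]
    discharge = min(max(net, 0.0), soc)      # serve net load from battery first
    buy = max(net - discharge, 0.0)          # buy the remainder from the grid
    # State transition and objective accumulation.
    soc = min(capacity, soc - discharge + max(-net, 0.0))
    total_cost += xi["price"] * buy

print(f"cost of the naive policy over {T} stages: {total_cost:.2f}")
```

Replacing the placeholder decision rule with one driven by the cut-approximated value functions is what turns this loop into the forward pass of SDDP.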
Figure 2 summarizes the interaction between the elements and highlights the dynamism of the decision-making process.
In the context of multi-stage decision problems, particularly within the scope of stochastic programming, the concept of a policy plays a pivotal role. A policy, as defined in the literature [37], comprises a sequence of decision rules $\{a_1, a_2, \ldots, a_T\}$, where each $a_t$ is a function of the stochastic data $\xi_t$ observed at stage $t$ of the decision process. This series of decision rules guides the decision-making process across all stages, tailored to the specific realizations of the stochastic variables encountered.
Each decision rule $a_t(\xi_t)$ within the policy is crafted to respond appropriately to any given realization of the random data $\xi_t$ at stage $t$. This characteristic ensures that decisions are not only sequential but also adaptive, reflecting the evolving nature of the uncertainties inherent in the system.
The primary objective in defining and utilizing such a policy is to identify an optimal set of decisions that maximizes (or minimizes) a particular objective function while adhering to a predefined set of technical constraints. These constraints can range from physical limitations within a system to regulatory and safety standards that must be met.

3.1. Formulating the Decision-Making Process under Uncertainty

In the stochastic framework of microgrid EMS, each stage $t$ presents a decision node where the system state $E_{i,t}$ encapsulates the current state of charge, the anticipated renewable generation, and the expected consumption of each prosumer $i$. Decisions made at each interval—ranging from energy trading (buying or selling) to storage management (charging or discharging batteries)—are pivotal, impacting not only the current cost and energy efficiency but also the system's future operational state $E$.
To model the uncertainty inherent in microgrid operations, the stochastic variables $P^{gen}_{i,t,s}$ and $P^{load}_{i,t,s}$ represent the potential generation and load of each prosumer $i$ at stage $t$ in scenario $s$. These variables are drawn from probabilistically defined distributions—gamma for solar output, beta for wind generation, and log-normal for consumption profiles [46]. Similarly, the stochastic nature of market interactions is captured via beta-distributed variables $\phi^{buy}_{i,t,s}$ and $\phi^{sell}_{i,t,s}$, which define the fluctuating buy and sell prices at each stage [12].
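Generating such scenario data is straightforward with a standard numerical library. The sketch below uses the distribution families cited above; all shape and scale parameters, dimensions, and the price band are illustrative assumptions, since the study's fitted parameters are not reproduced here:

```python
import numpy as np

# Sampling sketch for the stochastic inputs: gamma for solar, beta for wind,
# log-normal for load, beta for prices.  All parameters are assumed values.
rng = np.random.default_rng(42)
n_prosumers, n_stages, n_scenarios = 2, 24, 100
shape3d = (n_prosumers, n_stages, n_scenarios)

solar = rng.gamma(shape=2.0, scale=0.8, size=shape3d)       # P^gen (solar) [kW]
wind  = 3.0 * rng.beta(a=2.0, b=5.0, size=shape3d)          # P^gen (wind)  [kW]
load  = rng.lognormal(mean=0.5, sigma=0.3, size=shape3d)    # P^load        [kW]

# Beta-distributed buy/sell prices, scaled to an assumed [0.10, 0.40] $/kWh
# band, with the sell price kept strictly below the buy price.
price_buy  = 0.10 + 0.30 * rng.beta(a=2.0, b=2.0, size=(n_stages, n_scenarios))
price_sell = 0.5 * price_buy

assert (solar >= 0).all() and (load > 0).all()
assert (price_sell < price_buy).all()
```

Keeping the sell price below the buy price prevents the optimizer from exploiting a spurious simultaneous buy-and-sell arbitrage in the linear model.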
All these stochastic variables are assumed to be independent of each other and exhibit stage-wise independence. This independence assumption aligns with the scenario tree methodology, as employed in [47,48]. Notably, ref. [49] has demonstrated that standard SDDP effectively converges to the optimal policy under stochastic objectives. Further advancing this approach, ref. [46] employs the open-source MSPPY package to tackle stochastic multistage decision problems via SDDP, with stochasticity in the objective function modeled by a scenario tree; this package is utilized in our current study.
It should also be noted that an alternative approach prevalent in the literature models price dynamics using a Markov chain, initially proposed in [50]. However, the assumption that prices evolve according to a homogeneous Markov chain remains a topic of debate [51], highlighting the complexity and variability in modeling financial variables within microgrid systems. This discussion underscores the necessity for flexible and robust modeling techniques to accurately reflect the dynamic and uncertain nature of energy markets and grid operations.
Mathematically, the EMS can be represented as a multi-stage stochastic problem as presented below:
$$\min \sum_{s \in S} \pi_s \left[ \sum_{i \in N} \sum_{t \in H} \left( \phi^{buy}_{i,t,s}\, p^{buy}_{i,t,s} - \phi^{sell}_{i,t,s}\, p^{sell}_{i,t,s} \right) + \sum_{i \in N} \sum_{j \in N} \sum_{t \in H} f_{i,j,t,s}\, y_{i,j,t,s} \right] \tag{1}$$
which is subject to
$$I_{i,t,s} = I_{i,t-1,s} + \left( -P^{load}_{i,t,s} + p^{buy}_{i,t,s} - p^{sell}_{i,t,s} + P^{gen}_{i,t,s} \right) \Delta t + \sum_{j \in N,\, j \neq i} y_{j,i,t,s} - \sum_{j \in N,\, j \neq i} y_{i,j,t,s}, \quad \forall i \in N,\ t \in H,\ s \in S \tag{2}$$
$$I_{i,t,s} = I_{i,t-1,s} + \Delta t \left( \eta_b\, p^{char}_{i,t,s} - \frac{p^{dis}_{i,t,s}}{\eta_b} \right) - \beta_b\, I_{i,t,s}, \quad \forall i \in N,\ t \in H,\ s \in S \tag{3}$$
$$I_{i,t,s} \le k_i, \quad \forall i \in N,\ t \in H,\ s \in S \tag{4}$$
$$p^{buy}_{i,t,s} \le \bar{P}^{buy}_{i,t}, \quad \forall i \in N,\ t \in H,\ s \in S \tag{5}$$
$$p^{sell}_{i,t,s} \le \bar{P}^{sell}_{i,t,s}, \quad \forall i \in N,\ t \in H,\ s \in S \tag{6}$$
$$p^{buy}_{i,t,s},\ p^{sell}_{i,t,s},\ I_{i,t,s} \ge 0, \quad \forall i \in N,\ t \in H,\ s \in S \tag{7}$$
$$y_{i,j,t,s} \ge 0, \quad \forall i \in N,\ j \in N,\ t \in H,\ s \in S \tag{8}$$
wherein the objective Function (1) minimizes the cost of buying and selling energy by prosumers, including the transportation cost. Note that the purchase $\phi^{buy}_{i,t,s}$ and sale $\phi^{sell}_{i,t,s}$ prices of energy from the main grid are stochastic variables, while the power bought $p^{buy}_{i,t,s}$ and sold $p^{sell}_{i,t,s}$ to the grid are decision variables determined by the EMS.
Constraint (2) ensures the power balance of each prosumer in each stage. Note that in this constraint, the power generated by solar and wind sources $P^{gen}_{i,t,s}$ and the power demanded by the home appliances of each prosumer $P^{load}_{i,t,s}$ are stochastic variables, while the battery's state of charge (SOC) $I_{i,t,s}$ and the power transactions with the main grid, $p^{buy}_{i,t,s}$ and $p^{sell}_{i,t,s}$, are the decision variables determined by the EMS.
Constraint (3) models the battery's SOC behavior, including the charge/discharge efficiency $\eta_b$ together with the discharge rate $\beta_b$ [52]. Finally, Constraints (4)–(8) represent the battery system's capacity, the limits on power bought from and sold to the main grid, and the non-negativity of the decision variables, respectively.
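To make the per-stage structure concrete, the sketch below solves a single-prosumer, single-stage instance of the dispatch LP with `scipy.optimize.linprog`. It is an illustration only: the prices, limits, and incoming SOC are invented values, and peer-to-peer trading and charge/discharge efficiency are omitted for brevity.

```python
from scipy.optimize import linprog

# Single stage, single prosumer: decide p_buy, p_sell and the battery SOC I_t
# given realized generation and load. Variables x = [p_buy, p_sell, I_t].
phi_buy, phi_sell = 0.30, 0.10   # illustrative prices, USD/kWh
p_gen, p_load = 1.0, 3.0         # realized generation and demand, kW
I_prev, capacity = 2.0, 3.0      # incoming SOC and battery capacity, kWh
p_max, dt = 5.0, 1.0             # grid exchange limit (kW) and time step (h)

c = [phi_buy, -phi_sell, 0.0]    # objective: buy cost minus sell revenue
# Power balance (2): I_t = I_prev + (p_gen - p_load + p_buy - p_sell) * dt
A_eq = [[-dt, dt, 1.0]]
b_eq = [I_prev + (p_gen - p_load) * dt]
bounds = [(0, p_max), (0, p_max), (0, capacity)]   # limits (4), (5), (7)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
# In this instance the 2 kWh deficit is covered entirely from storage:
# the battery drains from 2 kWh to 0 kWh and no grid transaction is needed.
```

In the multi-stage setting, the SDDP algorithm augments exactly this kind of subproblem with cut constraints that approximate the expected cost-to-go of the next stage.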
It is important to highlight that the value of the time step $\Delta t$ has a direct impact on the computational complexity of the algorithm. As demonstrated by [38], increasing the time step leads to greater uncertainty in predictions of variables such as prices, power demands, and power generation, whereas reducing the time step increases computational effort. This substantial computational effort arises from the exponential growth of the computational complexity of SDDP as a function of the number of decision epochs and scenarios [37]. In this regard, a common practice, followed in this study, is to use a time step of 1 h over a planning horizon of 24 h [7,12,53]. It is worth mentioning that these simplifications and approximations are made to keep the solution process of the original problem manageable; it would otherwise be impractical with current computing resources due to the large number of scenarios generated and the complexity of the algorithms that would need to be applied [37].
Finally, Figure 3 illustrates the general methodology proposed in this study; solar generation data, wind generation data, electricity demand, and energy prices are processed to be used in scenario generation via the adjustment of probability distributions, as previously described. Furthermore, the original problem outlined above is modeled as a set of linear subproblems for each decision stage. Subsequently, scenario discretization is performed for each decision stage. The SDDP solver is employed to approximately solve the stochastic multi-stage decision problem. Finally, several Monte Carlo simulations are conducted to validate the solution quality. Refined details of the SDDP algorithm and Monte Carlo simulations will be expanded on in a later section of this paper.

3.2. Implementation of the Stochastic Dual Dynamic Programming (SDDP) Algorithm

The SDDP algorithm is crucial for addressing multi-stage stochastic decision problems in energy management systems. These problems involve making sequential decisions over a set of periods, t, to optimize a specified objective function while contending with uncertainties that influence both the objectives and the constraints.
In general form, the decision process for every period t can be formulated as in (9).
$$ \min \; C_1^{\top} x_1 + \cdots + C_T^{\top} x_T \tag{9} $$
Subject to:
$$ \begin{aligned} A_{11} x_1 &= b_1 \\ A_{21} x_1 + A_{22} x_2 &= b_2 \\ &\ \ \vdots \\ A_{T,T-1} x_{T-1} + A_{TT} x_T &= b_T \end{aligned} $$
wherein $x_1, x_2, \ldots, x_T$ denote the decision variables associated with each respective period; $C_1$, $A_{11}$, and $b_1$ are known at the outset, but $C_t$, $A_{t,t-1}$, $A_{tt}$, and $b_t$ generally involve random components that are revealed over time [37]. Consequently, the dynamic structure of the problem is often expressed in the nested stochastic formulation (10):
$$ \min \; C_1^{\top} x_1 + \mathbb{E} \left[ \min \; C_2^{\top} x_2 + \mathbb{E} \left[ \cdots + \mathbb{E} \left[ \min \; C_T^{\top} x_T \right] \right] \right] \tag{10} $$
The core principle of SDDP lies in approximating the future cost function using a series of linear functions or cuts. Initially, the problem
$$ \min \; C_1^{\top} x_1 + \alpha(x_1) \tag{11} $$
subject to
$$ A_{11} x_1 = b_1 $$
is solved, where $\alpha(x_1)$ varies based on the decision at stage one. The forward recursion phase uses a set of trial decisions, $x_t,\ t \in T$, to estimate these cost function approximations. Given the computational impracticality of simulating every potential scenario, a Monte Carlo simulation is conducted for a sample of potential variable combinations [49]. Figure 4 schematically shows the Monte Carlo simulation for three scenarios, highlighted in green, in a multi-stage decision problem with three stages.
After the Forward pass has been performed, the linear approximation (12) of the cost function is constructed [37] at iteration $\Gamma$:
$$ \tilde{\alpha}^{\Gamma+1}(x_{t-1}) \geq \underline{\alpha}^{\Gamma+1}(x_{t-1}) + \beta^{\Gamma}_{t,s} \left( x_{t-1} - x^{\Gamma,s}_{t-1} \right) \tag{12} $$
Here, $\varphi^{\Gamma}_{t,s}(x_{t-1})$ denotes the right-hand side of (12), i.e., the cut obtained from the uncertainty realization of scenario $s$ at stage $t$. A piecewise linear approximation, known as a polyhedral approximation, is constructed as shown in (13):
$$ \Lambda^{\Gamma+1}_t = \max \left( \Lambda^{\Gamma}_t,\ \varphi^{\Gamma}_{t,1}(x_{t-1}),\ \varphi^{\Gamma}_{t,2}(x_{t-1}),\ \ldots,\ \varphi^{\Gamma}_{t,|S|}(x_{t-1}) \right) \tag{13} $$
The backward recursion involves computing the sub-gradients $\beta^{\Gamma}_{t,s}$ using the dual values of the linear subproblem of each stage $t$. For a more detailed explanation of the Backward pass and the calculation of sub-gradients from dual values, please refer to [37,49,50].
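The backward-pass mechanics can be illustrated on a toy one-dimensional state. The sketch below solves a stage-2 recourse LP for two hypothetical scenarios, reads the sub-gradient off the dual multiplier of the linking constraint, and averages value and gradient into a single Benders-style cut. All numbers are invented for illustration and are not from the case studies.

```python
import numpy as np
from scipy.optimize import linprog

def stage2_value_and_subgradient(x1, demand, price):
    """Solve the stage-2 recourse LP  min price * p_buy
       s.t.  p_buy >= demand - x1,  p_buy >= 0,
    returning its value and a subgradient w.r.t. the state x1,
    read off the dual multiplier of the linking constraint."""
    # Written as A_ub @ [p_buy] <= b_ub with A_ub = [[-1]], b_ub = [x1 - demand]
    res = linprog(c=[price], A_ub=[[-1.0]], b_ub=[x1 - demand],
                  bounds=[(0, None)], method="highs")
    dual = res.ineqlin.marginals[0]   # shadow price of the linking constraint
    return res.fun, dual              # dQ/dx1 = dual * d(b_ub)/dx1 = dual

# Build one cut at a trial state x1* = 1.0, averaging over scenarios.
x1_trial = 1.0
scenarios = [(3.0, 0.30), (1.5, 0.30)]          # (demand, buy price) pairs
vals, grads = zip(*(stage2_value_and_subgradient(x1_trial, d, c)
                    for d, c in scenarios))
q_bar, beta = np.mean(vals), np.mean(grads)
# The cut  alpha(x1) >= q_bar + beta * (x1 - x1_trial)  under-approximates
# the expected cost-to-go and would be appended to the stage-1 LP.
cut = lambda x1: q_bar + beta * (x1 - x1_trial)
```

Repeating this at every visited state and stage yields the growing polyhedral approximation of Equation (13).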
Solving stochastic and dynamic linear problems poses challenges due to the curse of dimensionality, where the exponential growth of possible states and actions complicates solutions. Two prevalent methodologies for addressing this challenge are deterministic equivalent (DE) and Markov decision processes (MDP). DE simplifies the stochastic model for quick and efficient solutions by substituting deterministic parameters for random variables [47]. However, accuracy depends on precise parameter estimates, and oversimplification may lead to suboptimal solutions. MDP, on the other hand, models sequential decision-making with outcomes influenced by both actions and the system’s current state [52]. While powerful, MDP demands significant computation and modeling, with complexity escalating as states and actions increase. An alternative, SDDP, tackles the curse of dimensionality by approximating solutions without considering all possible scenarios.

3.3. Implementation of SDDP Algorithm for Microgrid Energy Management Problem

The SDDP algorithm is implemented to manage the complex multi-stage decision-making in microgrid energy systems. Each decision stage, typically set at one-hour intervals, allows for frequent updates based on the latest available information, capturing the variability in renewable generation and demand. This is essential for effectively managing storage systems, which can typically be filled or emptied within 1–2 decision periods (1–2 h) under average conditions. This level of granularity and responsiveness is crucial for optimizing the EMS's performance, making SDDP an ideal choice over more rigid classical multi-stage stochastic programs. This section refines the approach discussed in the Mathematical Formulation section, introducing a concrete structure for the application in microgrid energy management.
  • Stage Initialization and Setup
The foundational stage, $t = 0$, is critical for establishing the initial conditions of the energy system, although it does not correspond to a real-time period. This stage initializes the problem framework and sets a precedent for the dynamic decision-making process across subsequent stages. Each stage is characterized by specific scenarios, defined as subproblems, that encapsulate the following variables:
  • $\phi^{buy}_{i,t,s}$ and $\phi^{sell}_{i,t,s}$: prices at which energy is bought and sold.
  • $P^{load}_{i,t,s}$ and $P^{gen}_{i,t,s}$: power load and generation for each prosumer.
The decision variables used across these subproblems include the following:
  • $I_{i,t,s}$: state variables that bridge consecutive stages.
  • $p^{buy}_{i,t,s}$, $p^{sell}_{i,t,s}$, $y_{i,j,t,s}$: operational variables specific to each stage and scenario.
  • Problem Formulation for Each Stage:
Stage 1 (Initialization): At time $t = 0$, the aim is to address the problem outlined by Equations (14)–(17). Constraints (15) and (16) are employed to initialize the SOC for each prosumer, accounting for their individual storage capacities. Consequently, if a prosumer lacks a battery bank (i.e., $k_i = 0$), its SOC is initialized at zero; otherwise, it is initialized to a predetermined value $I^{0}_i$. The auxiliary variable $z$ is incorporated to facilitate the computational execution of this stage within the software framework.
$$ \min \; z \tag{14} $$
which is subject to
$$ I_{i,0} = 0, \quad \forall i \in N \mid k_i = 0 \tag{15} $$
$$ I_{i,0} = I^{0}_i, \quad \forall i \in N \mid k_i > 0 \tag{16} $$
$$ z \ \text{free} \tag{17} $$
Stages 2 to T: For stages from 2 to T, the subproblem aligns closely with the base model, transitioning the initial setup into actionable decision-making. The objective at these stages is Function (1), subject to Constraints (2)–(8), which ensure the physical and operational feasibility of energy management, but disregarding the index $s$ related to the set of scenarios.
As an illustration, the SDDP algorithm is applied to an electric microgrid with the following parameters: a set of prosumers, $P = \{1, 2\}$; scenarios, $S = \{1, 2\}$; and stages, $T = \{0, 1, 2\}$. Figure 5 depicts the multi-stage adapted EMS problem, serving as a framework for implementing the SDDP, where the circles represent the uncertainty realizations of buying prices, selling prices, energy generated, and energy demanded for prosumers 1 and 2, respectively; the green circles represent the scenarios sampled in the Forward step, while blue circles represent scenarios not sampled in the current iteration.
The initial stage, $t = 0$, serves as a fictitious phase in which the initial battery energy levels of all prosumers are set. Consequently, it does not entail any energy quantities or prices. The corresponding model for this first-stage scenario is illustrated in Figure 6, highlighted in orange. The subproblem for this stage is described in (14)–(17).
Stages $t = 2$ to $t = T$: When examining any stage other than $t = 0$, several parameters vary depending on the specific scenario. An illustrative example is presented in the scenario highlighted in orange in Figure 7, wherein the following variables are defined:
  • $P^{gen}_1$: the power generated by prosumer 1.
  • $P^{gen}_2$: the power generated by prosumer 2.
  • $\phi^{buy}_1$: the price for prosumer 1 to buy power from the main network.
  • $\phi^{buy}_2$: the price for prosumer 2 to buy power from the main network.
  • $P^{load}_1$: the load required by prosumer 1.
  • $P^{load}_2$: the load required by prosumer 2.
  • $\phi^{sell}_1$: the price for prosumer 1 to sell energy to the main network.
  • $\phi^{sell}_2$: the price for prosumer 2 to sell energy to the main network.
Thus, the SDDP algorithm can capture the complexity and variability of energy prosumer scenarios across different stages, providing a robust and flexible framework for decision-making and optimization while keeping the computational resources demand at an acceptable level.

3.4. Monte Carlo Simulations

The SDDP algorithm, as detailed earlier, develops a feasible policy for the given energy management problem. This policy can be rigorously assessed across a spectrum of uncertainty scenarios inherent in the original problem formulation by employing Monte Carlo simulations. These simulations enable the construction of a $1 - \alpha$ confidence interval for the average value of the policy. This interval's upper limit effectively sets an upper bound on the value of the optimal policy, ensuring a confidence level of $1 - \alpha$.
The confidence interval is calculated using [50]
$$ CI = \bar{\omega} \pm t_{\alpha/2} \frac{SD}{\sqrt{n}} \tag{18} $$
Here, $\bar{\omega}$ represents the average outcome of the policy values derived from the Monte Carlo simulations, $SD$ is the standard deviation of these outcomes, $n$ is the total number of simulations conducted, and $t_{\alpha/2}$ is the critical value from the Student's t-distribution for the specified confidence level $1 - \alpha$, with $n - 1$ degrees of freedom.
This approach not only quantifies the policy’s effectiveness under varying conditions but also provides statistical assurance of its performance reliability, as referenced in [50].
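A minimal implementation of interval (18), using SciPy's Student's t quantile; the policy costs below are synthetic stand-ins for the actual simulation outputs.

```python
import numpy as np
from scipy import stats

def policy_confidence_interval(policy_values, alpha=0.05):
    """Two-sided (1 - alpha) t-interval for the mean policy cost
    estimated from n independent Monte Carlo rollouts, per Eq. (18)."""
    values = np.asarray(policy_values, dtype=float)
    n = values.size
    mean, sd = values.mean(), values.std(ddof=1)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half_width = t_crit * sd / np.sqrt(n)
    return mean - half_width, mean + half_width

# Example: 3000 simulated policy costs (synthetic data for illustration)
rng = np.random.default_rng(0)
costs = rng.normal(loc=12.0, scale=1.5, size=3000)
lo, hi = policy_confidence_interval(costs)
```

With 3000 rollouts the interval is very narrow, which is the behavior reported for the case studies in Section 4.4.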

4. Simulation Results

This section presents the methodology and outcomes from the application of the Stochastic Dual Dynamic Programming (SDDP) algorithm for microgrid energy management. The microgrid configuration employed in this study is illustrated in Figure 8, where each prosumer is connected to a common coupling point and has the capability to engage in energy transactions with other prosumers.
The study utilizes datasets from Universidad de los Andes and PECAN STREET Dataport (https://www.pecanstreet.org/dataport/, accessed on 10 June 2023), which include wind and solar generation, energy demand, and pricing data. The data undergo cleaning and processing, fitting of probability density functions [54], and integration into the SDDP algorithm.
The SDDP algorithm is implemented using Python 3.9 on a virtual machine powered by Gurobi 10.0.0 optimization software. The setup includes an Ubuntu Linux operating system with a 12-core NV processor and 112 GB of RAM. To mitigate potential convergence issues, two stopping criteria were set: a maximum runtime of 21,600 s and a limit of 300 iterations.
The algorithm is tested via simulations, forming scenarios based on the probability distributions derived from the initial data set. Each scenario evaluates the policy’s response to different energy demands and market conditions. These are detailed in the following sub-sections.
Table 2 details for each of the cases studied the number of prosumers in the microgrid, the number of decision stages, the number of state variables, and the number of scenarios.

4.1. One Prosumer Connected to the Main Grid without Energy Storage System

Figure 7 shows the behavior of the power bought from the main grid by the prosumer, as well as the net energy balance before carrying out the selling transaction. This balance corresponds to Equation (19).
$$ P^{balance} = P^{generated} - P^{demand} \tag{19} $$
In this context, when $P^{balance}$ is greater than zero, there is an excess of energy that can be stored or sold. On the other hand, when $P^{balance}$ is less than zero, there is an energy deficit that must be compensated by purchasing energy from the grid. Figure 9 shows the behavior of the power sold to the grid. As seen from Figure 9 and Figure 10, without a battery bank, the system satisfies the energy conservation equation by selling all the surplus when it is available and purchasing energy from the grid whenever there is a deficit.
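The dispatch rule described above for the storage-free case reduces to a simple sign split of the balance in Equation (19); a minimal sketch:

```python
def grid_exchange_no_battery(p_gen, p_load):
    """Without storage the balance must clear instantly:
    surplus is sold and deficit is bought (Eq. (19))."""
    p_balance = p_gen - p_load
    p_sell = max(p_balance, 0.0)   # excess energy is sold to the grid
    p_buy = max(-p_balance, 0.0)   # deficit is bought from the grid
    return p_buy, p_sell
```

For example, a prosumer generating 2 kW against a 5 kW load buys 3 kW, while one generating 5 kW against a 2 kW load sells 3 kW.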

4.2. One Prosumer Connected to the Main Grid with Energy Storage System

In this scenario, a single prosumer with a battery bank capable of storing 3 kWh of energy was considered. Figure 11 depicts the behavior of the power bought from the grid, and Figure 12 illustrates the behavior of the power sold to the grid. As can be seen in Figure 11 and Figure 12, the SDDP algorithm aims to minimize power purchases from the grid while attempting to sell energy to it when purchase prices are high. However, if the energy balance exceeds the battery capacity, the prosumer has no choice but to sell to the grid at whatever price is available (note the peak at the 20th hour).

4.3. Two Prosumers Connected to the Main Grid with Energy Storage Systems

In this case, two prosumers with 3 kWh battery banks are considered, connected to the grid and able to exchange energy between them at a cost of 1 USD/kWh for the use of the grid. Figure 13 illustrates the policy for prosumer number 1 for the energy purchased from the grid as well as from prosumer number 2. As it is cheaper to buy energy from prosumer number 2, the algorithm aims to maximize the energy bought from prosumer 2 and to minimize the energy bought from the grid.

4.4. Validation of Results

To validate the results obtained, a Monte Carlo simulation was conducted to assess the policy obtained in each of the previous cases under random realizations of uncertainty. Three thousand simulations were carried out for each case listed in Table 1. The histograms of the policy values obtained in the five cases listed in Table 1 can be found in Appendix A (Monte Carlo simulation results). As can be seen, the confidence intervals are very narrow, meaning that the exact optimal policy is close to the policy approximated by the SDDP algorithm [50]. Table 3 also presents results for certain cases of the deterministic equivalent of the problem (i.e., a deterministic linear program for the discretized original problem). This type of deterministic equivalent formulation exhibits exponential complexity [50], rendering it feasible only for relatively small-scale problems, since it requires large amounts of memory and computation time.

5. Conclusions

This study has detailed a solution methodology for the Stochastic Dynamic Energy Management Problem (SDEMP) within microgrids, emphasizing the incorporation of stochasticity and dynamism in demand, supply, and power pricing. By utilizing the Stochastic Dual Dynamic Programming (SDDP) algorithm, this approach facilitates the modeling of sequential decisions and the sharing of information throughout the operation of an energy management system (EMS).
The SDDP algorithm proves to be a potent tool in the domain of energy management optimization, particularly when judiciously applied to dynamic and stochastic scheduling challenges. Despite being an approximate method, SDDP enables the generation of feasible policies that closely approximate the optimal solutions, which are often unattainable due to the curse of dimensionality.
Our simulations delivered detailed energy management plans for each prosumer at every decision stage. These plans are strategically designed to minimize total energy management costs while ensuring a reliable and efficient energy supply to all participants in the microgrid. The flexibility and adaptability of the proposed method mean it can also be tailored to address additional objectives and constraints, such as reducing greenhouse gas emissions, enhancing energy efficiency, or integrating advanced energy storage solutions and their impact on electrical networks.
The findings from this study underscore the potential of the proposed methodology as a robust solution for energy management in microgrids, potentially fostering the development of more sustainable energy systems. Furthermore, recent advancements have expanded the capabilities of the SDDP algorithm to solve multi-stage mixed integer problems, thereby enhancing its suitability for managing reconfigurable microgrids.
Looking forward, the application of SDDP in scenarios that consider non-linear convex load flow models, along with associated electrical constraints, presents a promising avenue for further research. Moreover, optimal energy management in unbalanced three-phase microgrids is an area that remains largely unexplored, offering significant opportunities for future investigations, particularly as current studies have predominantly focused on two-stage models. This burgeoning field of research holds considerable promise for advancing the efficiency and sustainability of microgrid operations.

Author Contributions

Conceptualization, A.T.; Formal analysis, A.T. and P.C.; Funding acquisition, A.T.; Investigation, P.C.; Methodology, A.T.; Resources, A.T.; Supervision, A.T.; Coding, P.C.; Writing—review and editing, A.T. and P.C. All authors have read and agreed to the published version of the manuscript.

Funding

Universidad de Los Andes funded this research, particularly its program FAPA (Fondo de Apoyo a Profesores Asistentes).

Data Availability Statement

Data is contained within the article.

Acknowledgments

This work was sponsored by the Departamento de Ingeniería Industrial of the Universidad de los Andes.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Variables:
$C_t$: cost vector for period $t$.
$x_t$: decision vector for period $t$.
$f_{i,j,t,s}$: price of energy sent from prosumer $i$ to prosumer $j$ in period $t$ and scenario $s$.
$I_{i,t,s}$: state of charge of the battery bank of prosumer $i$ at period $t$, scenario $s$.
$p^{buy}_{i,t,s}$: power bought from the main grid by prosumer $i$ in period $t$ and scenario $s$.
$p^{sell}_{i,t,s}$: power sold to the main grid by prosumer $i$ in period $t$ and scenario $s$.
$P^{load}_{i,t,s}$: power demand of prosumer $i$ at period $t$, scenario $s$.
$P^{gen}_{i,t,s}$: power generation from renewable sources for prosumer $i$ at period $t$, scenario $s$.
$p^{char}_{i,t,s}$: power charged to the battery by prosumer $i$ at period $t$, scenario $s$.
$p^{dis}_{i,t,s}$: power discharged from the battery by prosumer $i$ at period $t$, scenario $s$.
$P^{balance}$: difference between power generated and power demanded.
$\bar{\omega}$: mean of random variable $\omega$.
$y_{i,j,t,s}$: power sent from prosumer $i$ to prosumer $j$ in period $t$ and scenario $s$.
$z$: artificial variable for the initial period.
$\alpha(x_t)$: cost function at time $t$.
$\tilde{\alpha}^{\Gamma+1}(x_{t-1})$: approximate cost function at time $t$, iteration $\Gamma$.
$\underline{\alpha}^{\Gamma+1}(x_{t-1})$: cut intercept at iteration $\Gamma$.
$\beta^{\Gamma}_{t,s}$: cut gradient at time $t$, scenario $s$, iteration $\Gamma$.
$\Lambda^{\Gamma}_t$: polyhedral approximation of the cost function at time $t$, iteration $\Gamma$.
$\xi_t$: uncertainty vector at period $t$.
$\phi^{buy}_{i,t,s}$: purchase price of electricity from the main grid for prosumer $i$, period $t$, scenario $s$.
$\phi^{sell}_{i,t,s}$: selling price of electricity to the main grid for prosumer $i$, period $t$, scenario $s$.
$\pi_s$: probability of scenario $s$.
Parameters:
$A_{ti}$: constraint matrix for period $t$, decision vector for period $i$.
$b_t$: right-hand side of generic constraint for period $t$.
$k_i$: battery capacity of prosumer $i$.
$n$: number of observations of a random variable.
$\beta_b$: discharge rate.
$\eta_b$: round-trip efficiency.
Sets:
H : Set of decision stages.
N : Set of prosumers.
S : Set of scenarios.

Appendix A

Monte Carlo simulation results.

Appendix A.1. One Prosumer with 3 kWh Battery Bank

Figure A1. Policy histogram for one prosumer with 3 kWh battery bank capacity.

Appendix A.2. One Prosumer with 12 kWh Battery Bank

Figure A2. Policy histogram for one prosumer with 12 kWh battery bank capacity.

Appendix A.3. One Prosumer with 30 kWh Battery Bank

Figure A3. Policy histogram for one prosumer with 30 kWh battery bank capacity.

Appendix A.4. Two Prosumers with 3 kWh Battery Banks

Figure A4. Policy histogram for two prosumers with 3 kWh battery bank capacity each one.

Appendix A.5. Twelve Prosumers with 5–10 kWh Battery Bank Capacity

Figure A5. Policy histogram for twelve prosumers with 5–10 kWh battery bank capacity each one.

References

  1. Ton, D.T.; Smith, M.A. The U.S. Department of Energy’s Microgrid Initiative. Electr. J. 2012, 25, 84–94. [Google Scholar] [CrossRef]
  2. Cagnano, A.; De Tuglie, E.; Mancarella, P. Microgrids: Overview and guidelines for practical implementations and operation. Appl. Energy 2020, 258, 114039. [Google Scholar] [CrossRef]
  3. Su, W.; Wang, J. Energy Management Systems in Microgrid Operations. Electr. J. 2012, 25, 45–60. [Google Scholar] [CrossRef]
  4. Li, M.; Zhang, X.; Li, G.; Jiang, C. A feasibility study of microgrids for reducing energy use and GHG emissions in an industrial application. Appl. Energy 2016, 176, 138–148. [Google Scholar] [CrossRef]
  5. Singh, S.; Singh, M.; Kaushik, S.C. Feasibility study of an islanded microgrid in rural area consisting of PV, wind, biomass and battery energy storage system. Energy Convers. Manag. 2016, 128, 178–190. [Google Scholar] [CrossRef]
  6. Onu, U.G.; de Souza, A.C.Z.; Bonatto, B.D. Drivers of microgrid projects in developed and developing economies. Util. Policy 2023, 80, 101487. [Google Scholar] [CrossRef]
  7. Magdi, M. Microgrid Advanced Control Methods and Renewable Energy System Integration. In Microgrid; Elsevier: Amsterdam, The Netherlands, 2017; pp. i–ii. [Google Scholar] [CrossRef]
  8. Rahman, M.A.; Islam, M.R. Different control schemes of entire microgrid: A brief overview. In Proceedings of the 2016 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 22–24 September 2016; pp. 1–6. [Google Scholar] [CrossRef]
  9. Huangfu, Y.; Huangfu, Y.; Tian, C.; Zhuo, S.; Xu, L.; Li, P.; Quan, S.; Zhang, Y.; Ma, R. An optimal energy management strategy with subsection bi-objective optimization dynamic programming for photovoltaic/battery/hydrogen hybrid energy system. Int. J. Hydrogen Energy 2023, 48, 3154–3170. [Google Scholar] [CrossRef]
  10. Mahmoud, M.S. Chapter 1—Microgrid Control Problems and Related Issues. In Microgrid; Mahmoud, M.S., Ed.; Butterworth-Heinemann: Oxford, UK, 2017; pp. 1–42. [Google Scholar] [CrossRef]
  11. Tajjour, S.; Chandel, S.S. A comprehensive review on sustainable energy management systems for optimal operation of future-generation of solar microgrids. Sustain. Energy Technol. Assess. 2023, 58, 103377. [Google Scholar] [CrossRef]
  12. Kumar, R.P.; Karthikeyan, G. A multi-objective optimization solution for distributed generation energy management in microgrids with hybrid energy sources and battery storage system. J. Energy Storage 2024, 75, 109702. [Google Scholar] [CrossRef]
  13. Darshi, R.; Shamaghdari, S.; Jalali, A.; Arasteh, H. Decentralized Reinforcement Learning Approach for Microgrid Energy Management in Stochastic Environment. Int. Trans. Electr. Energy Syst. 2023, 2023, 1190103. [Google Scholar] [CrossRef]
  14. Utkarsh, K.; Trivedi, A.; Srinivasan, D.; Reindl, T. A Consensus-Based Distributed Computational Intelligence Technique for Real-Time Optimal Control in Smart Distribution Grids. IEEE Trans. Emerg. Top. Comput. Intell. 2017, 1, 51–60. [Google Scholar] [CrossRef]
  15. Chavas, J.-P. Dynamic Decisions Under Risk. In Risk Analysis in Theory and Practice; Academic Press: Cambridge, MA, USA, 2004; pp. 139–160. [Google Scholar] [CrossRef]
  16. Prakash, A.; Tomar, A.; Jayalakshmi, N.S.; Singh, K.; Shrivastava, A. Energy Management System for Microgrids. In Proceedings of the 2021 International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), Bengaluru, India, 27–28 August 2021; pp. 512–516. [Google Scholar] [CrossRef]
  17. Thirunavukkarasu, G.S.; Seyedmahmoudian, M.; Jamei, E.; Horan, B.; Mekhilef, S.; Stojcevski, A. Role of optimization techniques in microgrid energy management systems—A review. Energy Strategy Rev. 2022, 43, 100899. [Google Scholar] [CrossRef]
  18. Chaouachi, A.; Kamel, R.M.; Andoulsi, R.; Nagasaka, K. Multiobjective Intelligent Energy Management for a Microgrid. IEEE Trans. Ind. Electron. 2013, 60, 1688–1699. [Google Scholar] [CrossRef]
  19. Tushar, M.H.K.; Zeineddine, A.W.; Assi, C. Demand-Side Management by Regulating Charging and Discharging of the EV, ESS, and Utilizing Renewable Energy. IEEE Trans. Ind. Inform. 2018, 14, 117–126. [Google Scholar] [CrossRef]
  20. Xu, B.; Zhou, Q.; Shi, J.; Li, S. Hierarchical Q-learning network for online simultaneous optimization of energy efficiency and battery life of the battery/ultracapacitor electric vehicle. J. Energy Storage 2022, 46, 103925. [Google Scholar] [CrossRef]
  21. Lesage-Landry, A.; Callaway, D.S. Batch reinforcement learning for network-safe demand response in unknown electric grids. Electr. Power Syst. Res. 2022, 212, 108375. [Google Scholar] [CrossRef]
  22. Zhou, J.; Xue, Y.; Xu, D.; Li, C.; Zhao, W. Self-learning energy management strategy for hybrid electric vehicle via curiosity-inspired asynchronous deep reinforcement learning. Energy 2022, 242, 122548. [Google Scholar] [CrossRef]
  23. Domínguez-Barbero, D.; García-González, J.; Sanz-Bobi, M. Twin-delayed deep deterministic policy gradient algorithm for the energy management of microgrids. Eng. Appl. Artif. Intell. 2023, 125, 106693. [Google Scholar] [CrossRef]
  24. Al-Saadi, M.; Al-Greer, M.; Short, M. Reinforcement Learning-Based Intelligent Control Strategies for Optimal Power Management in Advanced Power Distribution Systems: A Survey. Energies 2023, 16, 1608. [Google Scholar] [CrossRef]
  25. Elsied, M.; Oukaour, A.; Youssef, T.; Gualous, H.; Mohammed, O. An advanced real time energy management system for microgrids. Energy 2016, 114, 742–752. [Google Scholar] [CrossRef]
  26. Akter, A.; Zafir, E.I.; Dana, N.H.; Joysoyal, R.; Sarker, S.K.; Li, L.; Muyeen, S.M.; Das, S.K.; Kamwa, I. A review on microgrid optimization with meta-heuristic techniques: Scopes, trends and recommendation. Energy Strategy Rev. 2024, 51, 101298. [Google Scholar] [CrossRef]
  27. Kanchev, H.; Lu, D.; Colas, F.; Lazarov, V.; Francois, B. Energy Management and Operational Planning of a Microgrid with a PV-Based Active Generator for Smart Grid Applications. IEEE Trans. Ind. Electron. 2011, 58, 4583–4592. [Google Scholar] [CrossRef]
  28. Tsikalakis, A.G.; Hatziargyriou, N.D. Centralized Control for Optimizing Microgrids Operation. IEEE Trans. Energy Convers. 2008, 23, 241–248. [Google Scholar] [CrossRef]
  29. Riffonneau, Y.; Bacha, S.; Barruel, F.; Ploix, S. Optimal Power Flow Management for Grid Connected PV Systems with Batteries. IEEE Trans. Sustain. Energy 2011, 2, 309–320. [Google Scholar] [CrossRef]
  30. Sukumar, S.; Mokhlis, H.; Mekhilef, S.; Naidu, K.; Karimi, M. Mix-mode energy management strategy and battery sizing for economic operation of grid-tied microgrid. Energy 2017, 118, 1322–1333. [Google Scholar] [CrossRef]
  31. Shapiro, A.; Dentcheva, D.; Ruszczyński, A. Lectures on Stochastic Programming: Modeling and Theory; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2009. [Google Scholar]
  32. Su, W.; Wang, J.; Roh, J. Stochastic energy scheduling in microgrids with intermittent renewable energy resources. IEEE Trans. Smart Grid 2014, 5, 1876–1883. [Google Scholar] [CrossRef]
  33. Birge, J.R. Decomposition and Partitioning Methods for Multistage Stochastic Linear Programs. Oper. Res. 1985, 33, 989–1007. [Google Scholar] [CrossRef]
  34. Qi, X.; Zhao, T.; Liu, X.; Wang, P. Three-Stage Stochastic Unit Commitment for Microgrids Toward Frequency Security via Renewable Energy Deloading. IEEE Trans. Smart Grid 2023, 14, 4256–4267. [Google Scholar] [CrossRef]
  35. Rahbar, K.; Xu, J.; Zhang, R. Real-time energy storage management for renewable integration in microgrid: An off-line optimization approach. IEEE Trans. Smart Grid 2015, 6, 124–134. [Google Scholar] [CrossRef]
  36. Grillo, S.; Pievatolo, A.; Tironi, E. Optimal Storage Scheduling Using Markov Decision Processes. IEEE Trans. Sustain. Energy 2016, 7, 755–764. [Google Scholar] [CrossRef]
  37. Füllner, C.; Rebennack, S. Stochastic Dual Dynamic Programming and Its Variants—A Review. Optim. Online 2021, 1–117. [Google Scholar]
  38. Pereira, M.V.F.; Pinto, L.M.V.G. Multi-stage stochastic optimization applied to energy planning. Math. Program. 1991, 52, 359–375. [Google Scholar] [CrossRef]
  39. Helseth, A.; Fodstad, M.; Mo, B. Optimal Medium-Term Hydropower Scheduling Considering Energy and Reserve Capacity Markets. IEEE Trans. Sustain. Energy 2016, 7, 934–942. [Google Scholar] [CrossRef]
  40. Bhattacharya, A.; Kharoufeh, J.P.; Zeng, B. Managing Energy Storage in Microgrids: A Multistage Stochastic Programming Approach. IEEE Trans. Smart Grid 2018, 9, 483–496. [Google Scholar] [CrossRef]
  41. Lei, Q.; Huang, Y.; Xu, X.; Zhu, F.; Yang, Y.; Liu, J.; Hu, W. Optimal scheduling of a renewable energy-based park power system: A novel hybrid SDDP/MPC approach. Int. J. Electr. Power Energy Syst. 2024, 157, 109892. [Google Scholar] [CrossRef]
  42. Papavasiliou, A.; Mou, Y.; Cambier, L.; Scieur, D. Application of Stochastic Dual Dynamic Programming to the Real-Time Dispatch of Storage under Renewable Supply Uncertainty. IEEE Trans. Sustain. Energy 2018, 9, 547–558. [Google Scholar] [CrossRef]
  43. Zhu, J.; Zhu, W.; Chen, J.; Liu, H.; Li, G.; Zeng, K. A DRO-SDDP Decentralized Algorithm for Economic Dispatch of Multi Microgrids with Uncertainties. IEEE Syst. J. 2023, 17, 6492–6503. [Google Scholar] [CrossRef]
  44. Shi, Z.; Liang, H.; Huang, S.; Dinavahi, V. Multistage robust energy management for microgrids considering uncertainty. IET Gener. Transm. Distrib. 2019, 13, 1906–1913. [Google Scholar] [CrossRef]
  45. Hou, H.; Wang, Z.; Zhao, B.; Zhang, L.; Shi, Y.; Xie, C. Peer-to-peer energy trading among multiple microgrids considering risks over uncertainty and distribution network reconfiguration: A fully distributed optimization method. Int. J. Electr. Power Energy Syst. 2023, 153, 109316. [Google Scholar] [CrossRef]
  46. Ding, L.; Ahmed, S.; Shapiro, A. A Python package for multi-stage stochastic programming. Optim. Online 2019, 1–41. [Google Scholar]
  47. Zandrazavi, S.F.; Guzman, C.P.; Pozos, A.T.; Quiros-Tortos, J.; Franco, J.F. Stochastic multi-objective optimal energy management of grid-connected unbalanced microgrids with renewable energy generation and plug-in electric vehicles. Energy 2022, 241, 2884. [Google Scholar] [CrossRef]
  48. Mayhorn, E.; Xie, L.; Butler-Purry, K. Multi-Time Scale Coordination of Distributed Energy Resources in Isolated Power Systems. IEEE Trans. Smart Grid 2017, 8, 998–1005. [Google Scholar] [CrossRef]
  49. Shapiro, A. Analysis of stochastic dual dynamic programming method. Eur. J. Oper. Res. 2011, 209, 63–72. [Google Scholar] [CrossRef]
  50. Wu, H.; Li, H.; Gu, X. Optimal energy management for microgrids considering uncertainties in renewable energy generation and load demand. Processes 2020, 8, 1086. [Google Scholar] [CrossRef]
  51. Rebennack, S. Combining sampling-based and scenario-based nested Benders decomposition methods: Application to stochastic dual dynamic programming. Math. Program. 2016, 156, 343–389. [Google Scholar] [CrossRef]
52. Ruszczyński, A.; Shapiro, A. Stochastic Programming Models. In Handbooks in Operations Research and Management Science; Elsevier: Amsterdam, The Netherlands, 2002. [Google Scholar]
  53. Wets, R.J.-B. Stochastic Programs with Fixed Recourse: The Equivalent Deterministic Program. SIAM Rev. 1974, 16, 309–339. [Google Scholar] [CrossRef]
  54. Khamees, A.K.; Abdelaziz, A.Y.; Eskaros, M.R.; Attia, M.A.; Badr, A.O. The Mixture of Probability Distribution Functions for Wind and Photovoltaic Power Systems Using a Metaheuristic Method. Processes 2022, 10, 2446. [Google Scholar] [CrossRef]
Figure 1. Categorization of techniques in microgrid energy management literature.
Figure 2. Multi-stage decision problem.
Figure 3. Overall framework of the proposed methodology.
Figure 4. Monte Carlo simulation of a sample of three potential scenario realizations in the forward pass.
Figure 5. EMS example as a multistage problem.
Figure 6. Example parameters at stage t = 0, showing the subproblem for this stage.
Figure 7. Example parameters at stage t = 1 and scenario s = 1.
Figure 8. General configuration of the microgrid for the computational study.
Figure 9. Behavior of the power sold to the main grid by the prosumer.
Figure 10. Energy sold to the grid by a single prosumer without energy storage capacity.
Figure 11. Power purchased by the prosumer from the grid.
Figure 12. Behavior of the energy sold to the grid.
Figure 13. Policy for prosumer number 1.
Table 1. Comparison of NBD, SDP, and SDDP.

| Method | Stage-Wise Independence Assumption | State and Action Discretization | Complexity in T | Complexity in State Dimension | Complexity in Number of Scenarios |
|---|---|---|---|---|---|
| DE | X | X | X | Exponential | Polynomial |
| NBD | X | X | X | Exponential | Polynomial |
| SDP | X | Yes | Yes | Linear | Polynomial |
| SDDP | Yes | X | X | Exponential | Polynomial |
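The contrast drawn in Table 1 hinges on how each method represents the cost-to-go function. As a hedged illustration (a sketch, not the paper's implementation), SDDP avoids state and action discretization by approximating each stage's convex cost-to-go with a piecewise-linear lower bound assembled from Benders-style cuts collected during backward passes. The function `cost_to_go` and the cut data below are hypothetical:

```python
def cost_to_go(state, cuts):
    """Lower-bound the cost-to-go at a scalar state.

    Each cut (intercept, slope) is a supporting hyperplane of the true
    convex cost-to-go; their pointwise maximum is a piecewise-linear
    lower approximation that tightens as more cuts are added.
    """
    return max(intercept + slope * state for intercept, slope in cuts)

# Two hypothetical cuts gathered during backward passes:
cuts = [(10.0, -2.0), (4.0, 1.0)]
# At state = 1.0 the binding cut is (10.0, -2.0): 10 - 2 = 8.
print(cost_to_go(1.0, cuts))  # 8.0
```

Because the approximation is a maximum of affine functions, it can be embedded in each stage's linear subproblem with one epigraph variable and one linear constraint per cut.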
Table 2. Comparison of number of prosumers, decision stages, state variables, and number of scenarios.

| Case | Number of Stages | State Variables * | Number of Scenarios |
|---|---|---|---|
| One prosumer connected to the main grid without an energy storage system | 24 | 1 | 1 × 10^144 |
| One prosumer connected to the main grid with an energy storage system | 24 | 1 | 1 × 10^144 |
| Two prosumers connected to the main grid with energy storage systems | 24 | 2 | 1 × 10^288 |
| Twelve prosumers | 24 | 12 | 1 × 10^1728 |
* State variables refer to variables that connect two consecutive stages, as defined by [37]; they are not the total number of variables in the mathematical model.
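The scenario counts in Table 2 grow multiplicatively with stages and prosumers: 1 × 10^144 for one prosumer over 24 stages is consistent with roughly 10^6 joint realizations per stage, and the exponent then scales linearly with the number of prosumers. The per-stage figure below is a back-of-the-envelope reconstruction, not a value reported in the paper:

```python
def scenario_count_exponent(stages, prosumers, log10_realizations_per_stage=6):
    """Base-10 exponent of the full scenario-tree size, assuming an
    (illustrative) fixed number of independent stage-wise realizations
    per prosumer."""
    return stages * prosumers * log10_realizations_per_stage

print(scenario_count_exponent(24, 1))   # 144  -> 1e144 scenarios
print(scenario_count_exponent(24, 2))   # 288  -> 1e288 scenarios
print(scenario_count_exponent(24, 12))  # 1728 -> 1e1728 scenarios
```

Trees of this size rule out enumeration-based methods such as the deterministic equivalent, which motivates the sampling-based SDDP approach compared in Table 1.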
Table 3. Confidence interval for policy evaluated on true problems.

| Number of Prosumers | Battery Bank Capacity (kWh) | 95% Confidence Interval (USD Cents) | DE (USD Cents) |
|---|---|---|---|
| 1 | 3 | 16.97–18.34 | 18.7040 |
| 1 | 12 | 16.48–17.82 | 18.6738 |
| 1 | 30 | 16.47–17.81 | 18.6738 |
| 2 | 3 | 29.08–30.96 | - |
| 12 | 5 to 10 | −0.00092–0.0008 | - |

Share and Cite

MDPI and ACS Style

Tabares, A.; Cortés, P. Using Stochastic Dual Dynamic Programming to Solve the Multi-Stage Energy Management Problem in Microgrids. Energies 2024, 17, 2628. https://doi.org/10.3390/en17112628
