1. Introduction
Smart grid technologies play a central role in improving the dependability and efficiency of contemporary electricity systems. One of the biggest obstacles in smart grids is dynamic load forecasting, which requires precise predictions to manage electricity distribution efficiently. Resilient smart grids are advanced electricity networks designed to withstand and quickly recover from various disruptions, ensuring a continuous and reliable energy supply. These grids integrate renewable energy sources, Advanced Metering Infrastructure (AMI), and intelligent control systems to enhance their robustness and adaptability. Resilience in smart grids is achieved through features such as self-healing capabilities, real-time monitoring, and automated control, which help minimize the impact of faults and maintain stability. This study explores dynamic load forecasting using state-of-the-art machine learning models, particularly Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) networks. Accurate load forecasts are crucial for navigating the intricacies of modern energy consumption patterns, which motivates this inquiry. Short-term load forecasting is an essential component of smart grids for the effective management of electricity systems. There is potential for improved forecasting accuracy through the incorporation of AMI data to capture detailed customer usage trends [
1]. Problems with data quality, privacy, and computational complexity remain, even if LSTM networks and recurrent neural networks (RNNs) are good for short-term load forecasting [
2]. In light of these difficulties, it is worthwhile to investigate novel approaches, like the one suggested by [
3]. Time-series data protection is depicted in
Figure 1 below.
This study expands the capabilities of LSTM-based RNNs to enhance energy demand forecasting in smart grids [
4]. Ensuring the precision of load projections is crucial for effective power demand management in ever-changing smart grid environments, and LSTM networks offer a promising solution. In addition, various studies [
5] highlight the importance of LSTM networks in improving the accuracy of demand estimation, emphasizing their flexibility and precision in load forecasting for smart grid environments. For practical applications like large-scale power distribution operations, precise load forecasting is crucial for efficient grid management. With a focus on practical implementation, this research aims to create a thorough experimental strategy for smart grid development. Customized algorithms are also important when addressing the challenges of load forecasting in medium-voltage distribution networks.
This research is driven by the growing importance of dynamic load forecasting in the realm of smart grids. With the shift towards sustainable and efficient energy practices in modern societies, accurate and flexible load predictions are crucial. Smart grids, with their complex interplay of data from various sources, like Advanced Metering Infrastructure (AMI) and environmental variables, create a challenging landscape for energy management [
The need to optimize grid operations, improve energy distribution, and adapt to the increasing use of renewable energy sources highlights the crucial requirement for reliable forecasting models. Conventional approaches encounter difficulties in capturing the ever-changing nature of contemporary power systems [
6]. Therefore, the utilization of advanced machine learning techniques, particularly LSTM and GRU networks, shows great potential. These models have a remarkable capacity to uncover patterns over time in sequential data, making them excellent choices for predicting changing loads in smart grids [
7].
In this study, we aim to develop an intelligent load forecasting model (ILFM) capable of real-time load prediction, provide intelligent control strategies (ICS) for distributed load balancing, integrate the ILFM with the ICS, make the grid more load-tolerant and stable under stress, and contribute to environmental protection by effectively utilizing renewable energy sources to decrease greenhouse gas emissions.
The remainder of this paper first gives an overview of the background, a discussion of the data, and a summary of the results. In the Introduction, the problem is defined and background information is given. Relevant previous studies are discussed in the Related Work section. We then describe the study's procedures in full. We present the findings, together with our analysis of the data and potential real-world applications. In the paper's last sections, Conclusions and Discussion, the findings are analyzed and some recommendations for further research are made.
2. Related Work
In this section, we review the strengths and limitations of several existing approaches. This review provides background for the current investigation by highlighting the body of prior research on resilient smart grids.
Power system management relies on accurate short-term load forecasts, which AMI data may provide. By documenting detailed client consumption trends, AMI data can improve forecasting accuracy, as stated in [
8]. Machine learning methods such as recurrent neural networks (RNNs) and Long Short-Term Memory (LSTM) networks have proven themselves to be highly effective in this field. Nevertheless, issues related to data quality, privacy, and computational complexity need to be addressed. A new method for short-term load forecasting using AMI data has been proposed in a study.
The purpose of the research study [
9] was to enhance smart grid energy demand forecasting by employing recurrent neural networks based on Long Short-Term Memory (LSTM). In dynamic smart grid environments, accurate load forecasts are crucial to managing power demand effectively, and LSTM networks offer a possible way to accomplish this.
Improving the precision of demand estimation in smart grid contexts through the use of LSTM networks is the primary objective of the study [
10]. Because of their superior accuracy and adaptability in load forecasting, LSTM networks are indispensable in smart grid environments for optimizing power distribution and controlling variable demand.
Building a cloud-based smart grid load forecasting platform is the primary emphasis of the research study [
11]. By utilizing the cloud’s scalability, agility, and data processing capacity, this design enhances the precision with which smart grid systems predict future loads. The purpose of the study is to shed light on the platform’s design, development, and performance in order to optimize energy management and the grid.
This article provides a concise overview of the strategies that are currently employed to predict smart grid demand. By integrating deep neural networks with metaheuristic methods, the study in [
12] aimed to enhance the precision of short-term load estimations. The study aims to assess these approaches and find out how they could influence smart grid management and grid performance. This review is useful since it discusses the present status of load forecasting within the smart grid framework.
The primary goal of our research was to develop an experimental plan for smart grids. Designed for a large power distributor, this method relies on precise load forecasts. The development, deployment, and assessment of load forecasting techniques with a focus on real-world application and case study [
13] analysis are expected to play a crucial role in this endeavor. The primary objective of this research is to identify the difficulties associated with load forecasting in large-scale power distribution operations and to propose solutions to these problems.
Finding a way to estimate energy usage in a medium-voltage distribution network is the main goal of this research. Predicting future loads accurately is crucial for efficient grid management and resource allocation. Developing load forecasting algorithms that are specific to medium-voltage distribution networks, as well as analyzing and modeling relevant data, should be the focus of research aimed at enhancing the efficiency and utility of these networks [
14].
Due to the time-sensitive nature of smart grid applications, researchers are concentrating on producing short-term projections for renewable energy and power consumption. In order to optimize energy distribution and grid management, the study [
15] investigated new forecasting methods and how to apply them within the context of smart grids.
This study examines smart meter data-driven load forecasting algorithms within the framework of smart grids. Several approaches to load forecasting utilizing smart meter data are compared and contrasted [
2]. The primary objective was to provide readers with a comprehensive overview of the subject and guide them in finding models that enhance the accuracy of load forecasts. This, in turn, will enable them to manage smart grid systems more effectively.
The study [
16] explores the use of Long Short-Term Memory (LSTM)-based recurrent neural networks to enhance the accuracy of electric load forecasting inside the smart grid context. Electricity distribution and demand management forecasts in smart grid setups are made more accurate with the help of LSTM networks.
The objective of these research efforts is to enhance machine learning methods for smart grid demand forecasting in the near future [
17]. In order to enhance smart grid efficiency, energy distribution, and grid management, this study explores the possibility of using machine learning approaches for better load forecasting.
The study [
18] examines cutting-edge methods for load forecasting that have been tailored for the implementation of smart grids. It is stressed that cognitive algorithms are necessary for smart grid load forecasting. When it comes to managing the electrical grid more efficiently and reliably, these techniques are vital.
As demonstrated in [
19], a load forecasting model can be customized to interface with smart grids in real time in a seamless manner. This model combines the strengths of Bidirectional Long Short-Term Memory (BiLSTM) networks, Bayesian optimization, and convolutional neural networks (CNNs). By utilizing cutting-edge machine learning techniques, this system can improve the precision of smart grid load forecasting.
An adaptive load forecasting technique is examined in the study [
20] within the context of a smart grid demonstration project. In the ever-changing context of smart grids, the study examines how well adaptive forecasting methods perform. The goal is to illustrate the potential and practicality of such methods to improve smart grid operations and load forecasting.
Energy storage systems (ESSs), particularly lithium-ion batteries, have become essential in modern smart grids for managing peak load shaving and load balancing. ESSs effectively store energy during off-peak periods and discharge it during peak demand, thereby reducing the strain on the grid and minimizing the need for additional generation capacity. Ref. [
21] highlights the efficiency of ESSs not only in managing load variability but also in supporting the integration of renewable energy sources, making them a critical component of resilient smart grids.
Demand–response (DR) is another key strategy used to manage peak loads and ensure load balancing in smart grids. DR involves the real-time adjustment of energy consumption in response to signals such as time-based pricing or incentives. For example, smart appliances and automated systems can reduce energy use during peak periods, shifting consumption to off-peak times. Ref. [
22] underscores the role of DR in aligning energy demand with supply, particularly in systems with high penetration of intermittent renewable energy.
While ESSs and DR are effective, their traditional implementation often lacks the predictive foresight needed for optimal performance. By integrating advanced predictive models like Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, our approach enhances the effectiveness of ESSs and DR. These models provide accurate forecasts of future energy demand, enabling more strategic operation of storage and demand–response systems. This integration not only smooths out peak loads more effectively but also improves overall grid stability, offering a significant advancement over conventional methods.
This study aims to address the issue of insufficient load forecasting and control strategies in resilient smart grids. Given the ever-changing nature of energy consumption patterns, traditional forecasting methods fall short [
23]. These factors contribute to a decrease in the overall reliability and resilience of the grid, as it suffers from inefficient energy distribution, underutilized renewable resources, and vulnerability to interruptions. Given the challenges at hand, our aim is to develop and evaluate a new technique (DLPaM) [
24] that integrates advanced load forecasting models with intelligent control strategies. In the context of modern smart grids, the problem statement emphasizes the importance of a more accurate, adaptable, and resilient approach to managing energy demand and supply [
25].
Overall, this study makes several useful contributions to the field of smart grid dynamic load forecasting.
Novel LSTM and GRU models: Innovative models for dynamic load prediction are introduced in this paper, using Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. This study demonstrates how well these models capture complicated patterns by taking advantage of temporal relationships in sequential energy consumption data.
Dataset diversity and real-world applicability: This study adds to the existing literature by drawing on a wide range of datasets representing distinct energy consumption trends. Because of this variety, the models may be adjusted to various smart grid settings, increasing their usefulness in real-world circumstances.
3. Materials and Methods
In this section, we describe the methodologies used to forecast future energy usage, covering the choice of RNN architectures, data pre-processing, and methods for training and evaluating the models, as well as the process used to gather the data. The GRU and LSTM networks are among the most widely used RNN models for estimating power consumption; their ability to capture temporal relationships in sequential data makes these architectures popular for time-series forecasting tasks, and we explain in depth why we settled on these particular models. We then describe the steps taken to clean and organize the data, followed by a detailed account of the model training process. Training the models requires a training dataset, customized hyperparameters, and a suitable loss function, and we discuss how the output of a model can change significantly when hyperparameters such as the learning rate, the number of hidden dimensions, and the number of hidden layers are adjusted. We also detail the procedures used to evaluate the efficacy of the trained models, focusing on the Mean Absolute Percentage Error (MAPE) because it is a good indicator of performance; we review the MAPE formula and its application to assessing the accuracy of energy consumption forecasts. Detailed descriptions of the experimental data, model parameters, and comparisons and analyses of the GRU and LSTM models follow this brief introduction.
3.1. Dataset Collection
3.1.1. Data Source
Energy service providers and government agencies were among the many sources consulted for this investigation. We collected our datasets from Kaggle. Machine learning algorithms can be trained and evaluated on these datasets to improve energy consumption forecasts.
The key features included in the datasets are given below (
Table 1).
3.1.2. Feature Descriptions
Date and time: Each data point’s timestamp, which contains the day and time, is displayed by this feature. As the time dimension, it lets the models take changes over time into consideration.
Temperature (in degrees Celsius): Among the many environmental factors that affect energy consumption, temperature stands out. The current temperature in Celsius at the given time and location can be seen by using this function.
Load (in megawatts): The primary dependent variable in this analysis is the load characteristic. The quantity of power used at that certain moment and place is represented in megawatts (MW). The models’ primary objective is to provide reliable load estimates.
Price (Cents/kWh): Something else to think about is the energy cost, which is the price per kilowatt-hour (kWh). Energy prices at that time and location are reflected in it, and it might influence consumption patterns.
To train and test models that estimate energy use, datasets including these attributes are used. A full understanding of these features is necessary to interpret the models' predictions and their ability to identify relationships in the data.
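As an illustration of how such a dataset might be loaded, the following sketch assumes a CSV export with one column per feature described above; the file name and column names (datetime, temperature_c, load_mw, price_cents_kwh) are hypothetical and not taken from the study's actual Kaggle datasets.

```python
# Illustrative only: the file name and column names are assumptions, not the
# exact schema of the Kaggle datasets used in this study.
import pandas as pd

df = pd.read_csv("energy_load.csv", parse_dates=["datetime"])

# Keep the four features described above and index the frame by timestamp.
df = df[["datetime", "temperature_c", "load_mw", "price_cents_kwh"]]
df = df.set_index("datetime").sort_index()

# Simple calendar features commonly derived from the timestamp.
df["hour"] = df.index.hour
df["day_of_week"] = df.index.dayofweek

print(df.head())
```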
3.2. Data Pre-Processing
Data must be in a trainable format and free of outliers for machine learning models to be created. We lay out all that was performed initially with the data on energy consumption here. Normalization and min–max scaling are essential pre-processing steps in our model, ensuring that the features are on a comparable scale [
26], which provides the basis for equations given below.
3.2.1. Data Normalization
By rescaling quantitative data points to a common range, we may normalize them without losing any insights into the original data’s variability. By applying normalization, all features are given equal importance when building the model. When dealing with neural networks and other machine learning algorithms that use distance metrics like gradients, this becomes quite crucial.
Normalizing data can be performed with the following formula:

\( x_{\text{norm}} = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}} \)

Here, the following apply:
The normalized feature value is denoted by \( x_{\text{norm}} \).
The original feature value is \( x \).
The feature's minimum value is \( x_{\min} \).
The feature's maximum value is \( x_{\max} \).
3.2.2. Min–Max Scaling
When data are normalized, min–max scaling is used to convert them to a uniform 0–1 range. When the input data range is known and it is desirable to restrict the data to this range, min–max scaling is useful. This scaling strategy shines when inputs must fall within a tight range for the model's activation functions or loss functions to execute appropriately.
The equation for min–max scaling is as follows:

\( x_{\text{scaled}} = \dfrac{x - x_{\min}}{x_{\max} - x_{\min}} \)

Here, the following apply:
The original feature value is \( x \).
The scaled value is \( x_{\text{scaled}} \).
The feature's minimum value is \( x_{\min} \).
The feature's maximum value is \( x_{\max} \).
To ensure a consistent and standardized range for the modified features, data normalization and min–max scaling are applied independently to each feature. These pre-processing techniques help machine learning models perform better and converge faster on a certain dataset.
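As a minimal sketch of the pre-processing described above, the snippet below applies min-max scaling independently to each feature column; the numeric values are illustrative placeholders, not data from the study.

```python
import numpy as np

def min_max_scale(column: np.ndarray) -> np.ndarray:
    """Rescale one feature column to the [0, 1] range (min-max scaling)."""
    col_min, col_max = column.min(), column.max()
    return (column - col_min) / (col_max - col_min)

# Illustrative values: temperature (°C), load (MW), and price (cents/kWh)
# are scaled independently, as described above.
raw = np.array([
    [21.5, 132.0, 11.2],
    [24.0, 148.5, 12.9],
    [19.8, 120.3, 10.4],
])
scaled = np.apply_along_axis(min_max_scale, 0, raw)
print(scaled)
```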
3.3. Proposed Model
In this section, we will take a closer look at the architecture of the proposed model, breaking it down into its use of LSTM and GRU neural networks for load prediction. Our novel LSTM model incorporates an additional attention mechanism that allows the network to focus on the most relevant parts of the input sequence. This modification enhances the model’s ability to capture complex temporal dependencies in the load data, improving the accuracy of load forecasts. The GRU model is enhanced with a dynamic gating mechanism that adjusts the importance of the input features based on the current state of the model. This improvement allows the GRU model to better handle variations in the input data and enhance forecasting accuracy. The classic LSTM and GRU models do not include these advanced mechanisms, which limits their ability to effectively capture intricate temporal patterns in dynamic load forecasting. Our proposed enhancements address these limitations, resulting in more accurate and reliable predictions.
3.3.1. Model Architectures
Sequence prediction tasks are well suited for LSTM recurrent neural networks because of their skill in comprehending data with long-range correlations.
Input Layer: timestamped numeric energy consumption histories are received by this layer.
LSTM Layers: these layers form the structural backbone of the network. Long Short-Term Memory (LSTM) cells consist of three gates: input, forget, and output. These gates control the flow of data through the cell and guide the model's decision-making process.
Hidden State (Network Memory): the hidden state of the LSTM cells represents the network's memory. At each new timestamp, the hidden state is updated to incorporate the new observation alongside previously stored information.
Fully Connected Layer: the output of the LSTM layers is used to estimate the energy consumption at the next timestamp.
ReLU Activation: a Rectified Linear Unit (ReLU) activation function introduces non-linearity into the model.
Dropout and Output: by randomly setting some units to zero, dropout layers help prevent overfitting. The output layer produces the estimated energy consumption for the next timestamp.
Figure 2 represents the modified LSTM model.
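The following PyTorch sketch illustrates one possible realization of the layer stack described above (stacked LSTM layers, a fully connected head, ReLU activation, and dropout), together with a simple learned attention weighting over timesteps. It is an assumption-laden illustration: the exact attention mechanism, layer sizes, and feature count used in the study are not specified here.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Stacked LSTM -> attention over timesteps -> ReLU -> dropout -> linear output."""

    def __init__(self, n_features: int, hidden_dim: int = 64,
                 num_layers: int = 2, dropout: float = 0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_dim, num_layers,
                            batch_first=True, dropout=dropout)
        # One possible attention mechanism: a learned score per timestep.
        self.attn = nn.Linear(hidden_dim, 1)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(hidden_dim, 1)  # next-timestep load (MW)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)                       # (batch, seq_len, hidden_dim)
        weights = torch.softmax(self.attn(out), 1)  # attention over timesteps
        context = (weights * out).sum(dim=1)        # weighted summary of the sequence
        return self.head(self.dropout(torch.relu(context))).squeeze(-1)

model = LSTMForecaster(n_features=3)
dummy = torch.randn(8, 24, 3)  # batch of 8 sequences, 24 timesteps, 3 features
print(model(dummy).shape)      # torch.Size([8])
```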
In our LSTM model, we introduce a modification in the update of the cell state \( C_t \). For the cell state update, we modify the standard LSTM equation

\( C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \)

Here, the following apply:
At timestamp t, the cell state is represented by \( C_t \).
The forget gate is denoted by \( f_t \).
The previous cell state is \( C_{t-1} \).
\( i_t \) is the activation of the input gate.
\( \tilde{C}_t \) stands for the new candidate value added to the state.
The goal of this adjustment is to make the model better capture energy consumption dynamics.
GRU (Gated Recurrent Unit)
The GRU recurrent neural network is an alternative to LSTM that streamlines the network’s architecture without compromising performance.
Input Layer: as in the LSTM, this layer is fed historical energy consumption data.
GRU Layers: composed of GRU cells, these layers feature two gates: an update gate and a reset gate. These gates let the network control the flow of information and selectively retain or forget pieces of data.
Hidden State: GRU cells maintain a hidden state that is updated at each timestep to retain information from the past.
Fully Connected Layer: the output of the GRU layers is passed to this layer so that predictions can be made.
ReLU Activation: as in the LSTM, a ReLU activation function is employed to introduce non-linear behavior.
Dropout layers are used to prevent overfitting, and the output layer generates the expected energy usage.
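For comparison, a GRU variant of the same stack can be obtained by swapping nn.LSTM for nn.GRU. The dynamic gating enhancement described above is not specified in detail here, so this sketch uses the standard GRU gates and is illustrative only.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Same stack as the LSTM sketch, with nn.GRU in place of nn.LSTM."""

    def __init__(self, n_features: int, hidden_dim: int = 64,
                 num_layers: int = 2, dropout: float = 0.2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_dim, num_layers,
                          batch_first=True, dropout=dropout)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.gru(x)             # (batch, seq_len, hidden_dim)
        last = out[:, -1, :]             # hidden state at the final timestep
        return self.head(self.dropout(torch.relu(last))).squeeze(-1)

print(GRUForecaster(n_features=3)(torch.randn(8, 24, 3)).shape)  # torch.Size([8])
```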
We propose an adjustment to the GRU model's hidden state estimation, which alters the conventional GRU update

\( h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \)

where the following apply:
The updated hidden state at timestamp t is represented by \( h_t \).
Update gate activation is represented by \( z_t \).
The previous hidden state is represented by \( h_{t-1} \).
The new candidate value for the hidden state is represented by \( \tilde{h}_t \).
The goal of this update is to make the model more flexible in the face of changing conditions in the energy consumption data.
3.3.2. Predicting Dynamic Loads
Both the recurrent LSTM and GRU models are trained to predict changing loads by looking at historical data on energy usage and analyzing temporal correlations. Dynamic load prediction consists of the steps below.
Input data: The historical energy consumption data for a sequence of timestamps are input into the models. From there, we may extrapolate the consumption at the next timestamp.
While the internal states of the models (cell state for LSTM and hidden state for GRU) are updated at each timestep, the input data are processed by the LSTM or GRU layer in a forward pass. The goal of model training is to enable the model to detect and account for patterns and regularities in the data.
The ultimate output layer generates estimates of the power needed at the next timestamp. These projections are based on the input sequence.
The loss is calculated from the difference between the predicted consumption and the actual consumption at the next timestep. This loss is employed during model training to fine-tune the parameters.
Models utilize optimization algorithms (such as stochastic gradient descent) during training and backpropagation to adjust their internal parameters and lower the loss. There will probably be multiple revisions of the training process.
Testing and validation: after the models have been trained, they are put to the test on a validation dataset. The accuracy of the forecasts can be quantified in several ways, one of which is the MAPE.
Together, the LSTM and GRU models may accurately forecast future dynamic loads by learning from historical energy consumption patterns and capturing temporal relationships. Adjustments are proposed to the LSTM and GRU equations to improve the models’ capacity to observe and forecast energy consumption patterns.
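The training procedure outlined above can be summarized in a short sketch: sequences of historical consumption are passed through the recurrent model, the squared error against the actual next-step load is backpropagated, and an optimizer (Adam is used here as a stand-in for stochastic gradient descent) updates the parameters. The dimensions and the tiny stand-in model are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_forecaster(model: nn.Module,
                     train_x: torch.Tensor, train_y: torch.Tensor,
                     epochs: int = 50, lr: float = 1e-3) -> nn.Module:
    """Minimal loop: forward pass, squared-error loss, backpropagation, update."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        pred = model(train_x).squeeze(-1)   # predicted load at the next timestep
        loss = loss_fn(pred, train_y)       # error against the actual load
        loss.backward()                     # backpropagation through time
        optimizer.step()
    return model

# Tiny stand-in model and random data, purely to show the call pattern.
class TinyGRU(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(3, 16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])     # use the final timestep's hidden state

x, y = torch.randn(32, 24, 3), torch.randn(32)
trained = train_forecaster(TinyGRU(), x, y)
```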
3.3.3. Intelligent Control Strategy for Load Levelling
To address the challenge of load levelling in smart grids, we propose an intelligent control strategy that dynamically adjusts the distribution of energy resources based on real-time load forecasts. This strategy leverages the predictive capabilities of our enhanced LSTM and GRU models to optimize energy distribution, minimize peak loads, and improve grid stability.
The intelligent control strategy consists of several key components. Firstly, it involves real-time load forecasting using the enhanced LSTM and GRU models. These forecasts provide accurate predictions of future load demands, allowing for the proactive management of energy resources. Secondly, dynamic resource allocation is based on the load forecasts. The control strategy dynamically allocates energy resources to different parts of the grid by adjusting the output of distributed energy resources (DERs), such as solar panels and wind turbines, to match the predicted load demands. Thirdly, advanced load balancing algorithms are employed to distribute the load evenly across the grid, taking into account the capacity and availability of DERs and the predicted load patterns to ensure optimal resource utilization. Lastly, the control strategy includes mechanisms to maintain grid stability by monitoring voltage levels and frequency deviations in real time and making necessary adjustments to the energy distribution to prevent instability or outages.
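As a rough, hypothetical illustration of forecast-driven resource allocation, the rule below covers one dispatch interval: renewable output is applied first, a battery covers part of any remaining deficit, and surplus renewable energy recharges the battery. The capacities, names, and one-hour interval are assumptions for illustration, not the study's actual control algorithm.

```python
# Hypothetical dispatch rule for one one-hour interval (so MW and MWh are
# numerically interchangeable here). Capacities and names are assumptions.
def dispatch(forecast_mw: float, solar_mw: float, wind_mw: float,
             battery_soc_mwh: float, battery_power_mw: float = 20.0) -> dict:
    """Allocate DER output and battery power against a forecasted load."""
    renewable = solar_mw + wind_mw
    residual = forecast_mw - renewable           # load left after renewables
    if residual > 0:
        # Deficit: discharge the battery up to its power rating and stored energy.
        discharge = min(residual, battery_power_mw, battery_soc_mwh)
        return {"renewable_mw": renewable, "battery_mw": discharge,
                "grid_import_mw": residual - discharge}
    # Surplus renewables: charge the battery; the remainder reduces grid import.
    charge = min(-residual, battery_power_mw)
    return {"renewable_mw": renewable, "battery_mw": -charge,
            "grid_import_mw": residual + charge}

print(dispatch(forecast_mw=150.0, solar_mw=40.0, wind_mw=25.0, battery_soc_mwh=60.0))
```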
3.4. Mathematical Model
The GRU and LSTM architectures each have their detailed mathematical model presented here. The equations used in the mathematical modeling were derived from the studies [
27,
28].
3.4.1. Mathematical Model for Gated Recurrent Unit (GRU)
To capture long-range dependencies in sequential data with a simplified architecture, researchers came up with GRU, a type of recurrent neural network. It has two gates: one that performs an update (\( z_t \)) and another that performs a reset (\( r_t \)). The GRU mathematical model is shown below.
Update Gate (\( z_t \))
The update gate (\( z_t \)) determines how much of the previous hidden state to retain and how much of the new candidate value (\( \tilde{h}_t \)) to adopt.
The update gate is computed as follows:

\( z_t = \sigma(W_z x_t + U_z h_{t-1}) \)

where the following apply:
\( \sigma \) is the sigmoid activation function.
\( W_z \) and \( U_z \) are weight matrices.
\( x_t \) is the input at timestamp t.
Reset Gate (\( r_t \))
The reset gate (\( r_t \)) determines how much of the previous hidden state (\( h_{t-1} \)) should be forgotten.
The reset gate is computed as follows:

\( r_t = \sigma(W_r x_t + U_r h_{t-1}) \)
New Candidate Value (\( \tilde{h}_t \))
A candidate value for the new hidden state is computed as follows:

\( \tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1})) \)
Hidden State Update
The new hidden state is a combination of the previous hidden state and the candidate value, controlled by the update gate (\( z_t \)):

\( h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \)
Backpropagation
Backpropagation is used to update the parameters (weights and biases) to minimize the loss function during training.
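A from-scratch NumPy implementation of one GRU forward step, following the equations above (with bias terms omitted for brevity), might look as follows; the toy dimensions and random weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU forward step implementing the equations above (biases omitted)."""
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r_t * h_prev))  # candidate hidden state
    h_t = (1.0 - z_t) * h_prev + z_t * h_cand         # hidden state update
    return h_t

# Toy dimensions: 3 input features, hidden size 4, random weights.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
weights = [rng.standard_normal(s) for s in [(d_h, d_in), (d_h, d_h)] * 3]
h = gru_cell(rng.standard_normal(d_in), np.zeros(d_h), *weights)
print(h.shape)  # (4,)
```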
Long Short-Term Memory (LSTM) recurrent neural networks are another kind of NN that can successfully recognize sequential inputs with long-range dependencies. The input gate (\( i_t \)), the forget gate (\( f_t \)), and the output gate (\( o_t \)) are the three gates that define it. An overview of the LSTM mathematical model is provided below.
Forward Pass
Input Gate (\( i_t \))
The input gate (\( i_t \)) controls how much of the new candidate value (\( \tilde{C}_t \)) should be added to the cell state (\( C_t \)).
The input gate is computed as follows:

\( i_t = \sigma(W_i x_t + U_i h_{t-1}) \)
Forget Gate (\( f_t \))
The forget gate (\( f_t \)) controls how much of the previous cell state (\( C_{t-1} \)) should be retained.
The forget gate is computed as follows:

\( f_t = \sigma(W_f x_t + U_f h_{t-1}) \)
New Candidate Value (\( \tilde{C}_t \))
A candidate value for the new cell state (\( C_t \)) is computed as follows:

\( \tilde{C}_t = \tanh(W_C x_t + U_C h_{t-1}) \)
Cell State Update (\( C_t \))
The cell state is updated based on the input gate (\( i_t \)), the forget gate (\( f_t \)), and the candidate value (\( \tilde{C}_t \)):

\( C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \)
Output Gate (\( o_t \))
The output gate (\( o_t \)) controls how much of the cell state (\( C_t \)) should be used to produce the hidden state (\( h_t \)).
The output gate is computed as follows:

\( o_t = \sigma(W_o x_t + U_o h_{t-1}) \)
Hidden State (\( h_t \))
The hidden state is computed as follows:

\( h_t = o_t \odot \tanh(C_t) \)
Backpropagation
Backpropagation is used during training, just like with GRU, to reduce the loss function by adjusting the parameters (weights and biases).
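Analogously, one LSTM forward step implementing the gate, cell state, and hidden state equations above (bias terms again omitted) can be sketched as follows; dimensions and weights are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, U):
    """One LSTM forward step implementing the equations above (biases omitted).

    W and U are dicts of weight matrices keyed by gate name: 'i', 'f', 'c', 'o'.
    """
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev)       # input gate
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev)       # forget gate
    c_cand = np.tanh(W["c"] @ x_t + U["c"] @ h_prev)    # candidate cell state
    c_t = f_t * c_prev + i_t * c_cand                   # cell state update
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev)       # output gate
    h_t = o_t * np.tanh(c_t)                            # hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W = {k: rng.standard_normal((d_h, d_in)) for k in "ifco"}
U = {k: rng.standard_normal((d_h, d_h)) for k in "ifco"}
h, c = lstm_cell(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), W, U)
print(h.shape, c.shape)  # (4,) (4,)
```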
Figure 4 and
Figure 5 represent the GRU and LSTM architectures.
Both the GRU and LSTM methods of analyzing sequential data are formally modeled in mathematics. Time-series forecasting applications, such as dynamic load prediction, benefit greatly from these networks’ ability to pick up on temporal dependencies and trends in the input sequences and hence perform more accurately. The models’ adaptability to sequential inputs has substantially improved accurate load projections in energy consumption prediction scenarios.
3.4.2. Optimization Model
The optimization model aims to enhance load prediction by using the presented models. This model is crucial to ensuring efficient energy distribution and reliable grid operation in the context of resilient smart grids.
The primary objective of the optimization model is to minimize the error between the actual load (\( L_t \)) and the predicted load (\( \hat{L}_t \)). This is achieved by minimizing the sum of the squared differences between these two values over a specified time horizon T. Mathematically, this objective can be expressed as

\( J = \sum_{t=1}^{T} \left( L_t - \hat{L}_t \right)^2 \)

Here, J represents the objective function that needs to be minimized. The term \( (L_t - \hat{L}_t)^2 \) is the squared error at each time step t, and T is the total number of time steps considered in the optimization process. By minimizing this objective function, the model aims to ensure that the predicted load closely matches the actual load, thereby improving the accuracy of load forecasts and enhancing overall grid performance.
3.5. Intelligent Control Strategy
In our study, the intelligent control strategy is designed to dynamically adjust the distribution of energy resources in the smart grid based on real-time load forecasts provided by the LSTM and GRU models. The strategy aims to minimize peak loads and enhance grid stability through the mechanisms below.
Peak load minimization: The intelligent control strategy effectively reduces peak load by utilizing energy storage systems (ESSs) to absorb excess energy during low-demand periods and discharge it during peak times. Additionally, demand–response (DR) strategies are employed to shift non-essential loads to off-peak periods, further flattening the load curve. The control strategy continuously monitors grid conditions and adjusts the load distribution accordingly, ensuring that peak loads are minimized without compromising the overall energy demand.
Grid stability improvement: The control strategy enhances grid stability by maintaining voltage levels within a narrow range and minimizing frequency deviations. By integrating distributed energy resources (DERs) such as solar panels and wind turbines, the strategy dynamically adjusts their output based on real-time demand and supply conditions. This approach helps to maintain a stable grid by balancing supply and demand, thereby reducing the risk of instability or outages.
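A simplified, hypothetical sketch of the peak shaving behavior described above is given below: the battery discharges whenever the forecasted load exceeds a threshold and recharges during off-peak hours. The threshold, battery ratings, and hourly forecast profile are assumed values chosen only to mirror the kind of 160 MW to 140 MW reduction reported in the results.

```python
# Illustrative sketch: threshold-based peak shaving driven by an hourly load
# forecast. The threshold, battery ratings, and profile are assumed values.
def peak_shave(forecast_mw, threshold_mw=140.0, battery_mwh=50.0, power_mw=20.0):
    """Discharge above the threshold, recharge below it; return the net load."""
    soc, net = battery_mwh, []
    for load in forecast_mw:                       # one value per hour
        if load > threshold_mw:                    # peak hour: discharge
            discharge = min(load - threshold_mw, power_mw, soc)
            soc -= discharge
            net.append(load - discharge)
        else:                                      # off-peak hour: recharge
            charge = min(threshold_mw - load, power_mw, battery_mwh - soc)
            soc += charge
            net.append(load + charge)
    return net

july_forecast = [120, 125, 135, 150, 160, 155, 140, 130]  # MW, illustrative
print(peak_shave(july_forecast))                          # peak capped near 140 MW
```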
3.6. Evaluation Metrics
The evaluation criteria for the proposed LSTM and GRU models for dynamic load prediction are presented and discussed here. The Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) are standard metrics for this purpose and are used in our study, particularly for load forecasting, as recommended by [
29,
30], and these studies provide the basis for the equations used in the evaluation matrix.
3.6.1. Mean Squared Error
A common metric used in regression tasks to measure the accuracy of predictions is the Mean Squared Error (MSE). It expresses numerically the discrepancy between predictions and actual results. The Mean Squared Error between a collection of n predictions P and their corresponding observed values A is computed with the following formula:

\( \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (P_i - A_i)^2 \)

Here, the following apply:
\( i \) indexes the data points (\( i = 1, \dots, n \)).
\( P_i \) is the predicted value for data point i.
\( A_i \) is the actual (observed) value for data point i.
With zero signifying a perfect match between actual data and expectations, a lower MSE indicates better predictive accuracy.
3.6.2. Mean Absolute Percentage Error
One popular metric used to assess the accuracy of predictions in forecasting, particularly in load forecasting, is the Mean Absolute Percentage Error, or MAPE. It measures the average percentage discrepancy between predictions and observations. Given the set of actual values A and the corresponding set of n predictions P, the MAPE is computed as follows:

\( \text{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{A_i - P_i}{A_i} \right| \)

Here, the following apply:
\( i \) indexes the data points (\( i = 1, \dots, n \)).
\( P_i \) is the predicted value for data point i.
\( A_i \) is the actual (observed) value for data point i.
The MAPE, a percentage, is a popular way to evaluate the precision of a forecast. A smaller MAPE indicates more accurate forecasts, and it is easy to grasp. The MAPE can be sensitive to outliers in certain cases.
By using the MSE and MAPE, we compared the LSTM and GRU models’ accuracy and performance in predicting dynamic loads. By using these measures, we thoroughly tested the models’ prediction abilities.
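The two metrics can be computed directly from the prediction and observation vectors, as in the short sketch below; the numeric values are illustrative only.

```python
import numpy as np

def mse(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean Squared Error: average of squared prediction errors."""
    return float(np.mean((predicted - actual) ** 2))

def mape(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean Absolute Percentage Error, in percent (sensitive to near-zero actuals)."""
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

actual = np.array([132.0, 148.5, 120.3, 155.2])      # MW, illustrative
predicted = np.array([130.5, 151.0, 118.9, 157.0])
print(f"MSE:  {mse(actual, predicted):.3f}")
print(f"MAPE: {mape(actual, predicted):.2f}%")
```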
4. Results and Discussion
Here, we detail and explain our study's findings. The proposed LSTM and GRU models are evaluated and tested on dynamic load prediction. Our evaluations focus on the models' predictive capabilities and how well they perform on load forecasting tasks. The results are highly valuable since they help decision makers and stakeholders in the energy sector understand the models' strengths and areas for improvement.
4.1. Performance of LSTM
In this section, we thoroughly evaluate the LSTM model's capability to anticipate upcoming dynamic loads. Its performance is assessed with a number of accuracy metrics.
- (a)
Measurement of Statistical Error
The Mean Squared Error (MSE) is a fundamental statistical metric extensively employed in assessing the efficacy of predictive models, especially in dynamic load forecasting within the energy sector. Its computation involves the derivation of the average of the squared discrepancies between the predicted values generated by the model and the corresponding actual observations from the dataset under analysis. In essence, it encapsulates the collective magnitude of errors present in the model’s predictions, offering a comprehensive insight into its overall performance. The significance of the MSE lies in its ability to quantify the extent to which the model’s predictions deviate from the ground truth, thereby serving as a pivotal yardstick for evaluating predictive accuracy. A lower MSE value signifies that the predicted values closely align with the observed data points, indicative of superior predictive precision and minimal discrepancies. Conversely, a higher MSE value suggests larger deviations between predicted and actual values, signaling reduced accuracy and potentially flawed predictive outcomes. The results of comparing the MSE values of LSTM are shown in
Table 2.
- (b)
Mean Absolute Percentage Error (MAPE)
The Mean Absolute Percentage Error (MAPE) serves as a pivotal metric in the realm of predictive analytics, offering a comprehensive assessment of forecast accuracy. It quantifies the average percentage deviation between the forecasted values generated by a predictive model and the corresponding actual observations from the dataset under analysis. By expressing forecast errors as a percentage of the actual values, the MAPE facilitates a standardized comparison of predictive performance across different datasets and forecasting methodologies. In the context of dynamic load forecasting, the MAPE plays a pivotal role in evaluating the effectiveness of predictive models, such as LSTM, in capturing the intricate temporal dependencies inherent in energy consumption data. By analyzing MAPE values across different datasets, researchers can discern the relative performance of the LSTM model in accurately forecasting dynamic load patterns.
Table 3 succinctly presents a comparative analysis of the LSTM MAPE values across various datasets, providing valuable insights into the model’s predictive capabilities and its efficacy in capturing the nuances of energy consumption dynamics. Load balancing using LSTM is shown in
Figure 6 below.
- (c)
Discussion
The LSTM model’s performance in dynamic load forecasting is evaluated through various metrics, primarily focusing on its predictive accuracy. The model’s ability to anticipate future loads is assessed by using the statistical measures of the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE). Lower MSE and MAPE values indicate better predictive performance, showcasing the LSTM model’s effectiveness in capturing temporal dependencies within energy consumption data. The analysis of the LSTM model’s MSE values across different datasets reveals its capability to generate accurate load predictions. Despite variations in performance across datasets, the LSTM model consistently demonstrates competitive predictive abilities. Similarly, the examination of MAPE values underscores the model’s proficiency in minimizing the average deviation of forecasts from actual load values, further reinforcing its suitability for dynamic load forecasting tasks.
4.2. Performance of GRU
Here, we investigate how well the GRU model can predict future loads in a dynamic setting.
- (a)
Mean Squared Error
The Mean Squared Error (MSE) serves as a fundamental metric for assessing the efficacy of the Gated Recurrent Unit (GRU) model in dynamic load forecasting. This metric provides a quantitative measure of the disparity between predicted and observed load levels, offering insights into the model’s predictive accuracy. By squaring the differences between predicted and actual load values and calculating their average, the MSE quantifies the overall predictive performance of the GRU model. The comparison of GRU’s MSE values, as presented in
Table 4, elucidates the model's predictive capabilities across different datasets. Each MSE value corresponds to a specific test dataset, providing a comprehensive overview of the model's performance across diverse scenarios. Lower MSE values signify closer alignment between predicted and actual load levels, indicating higher predictive accuracy and reliability in load forecasting tasks.
- (b)
Mean Absolute Percentage Error (MAPE)
The Mean Absolute Percentage Error (MAPE) serves as a vital metric for assessing the relative disparity between expected and actual load values in dynamic load forecasting. Expressed as a percentage, the MAPE quantifies the average deviation of forecasted values from observed data. A lower MAPE score signifies higher prediction accuracy, indicating a closer alignment between predicted and actual load values. Comparing the MAPE values of the Gated Recurrent Unit (GRU) model across different datasets, as depicted in
Table 5, provides valuable insights into the model’s performance. Each MAPE value represents the extent of deviation between predicted and observed load values for a specific test dataset, offering a comprehensive evaluation of the model’s predictive capabilities across diverse scenarios. Load balancing between time and power using GRU is shown in
Figure 7 below.
4.3. Impact of Intelligent Control Strategy
The implementation of the intelligent control strategy showed significant improvements in both peak load reduction and grid stability. The simulation results demonstrate a clear reduction in peak load, with the strategy achieving an average reduction of 10% across various time periods. For example, in the month of July, the peak load decreased from 160 MW to 140 MW after applying the control strategy, illustrating the effectiveness of our approach in smoothing the load curve.
Moreover, grid stability was notably enhanced, as evidenced by the reduction in voltage fluctuations, as shown in
Figure 8. Before the implementation of the control strategy, voltage fluctuations ranged from 4% to 7.5%. After the strategy was applied, fluctuations were reduced to a range of 3% to 5%, indicating a more stable grid environment. These improvements underscore the strategy’s ability to maintain a balanced and stable grid while efficiently managing energy distribution.
- (c)
Discussion
The proficiency of the Gated Recurrent Unit (GRU) model in dynamic load forecasting is underscored by its commendably low Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) scores, as evidenced by the comparative analysis presented. These metrics serve as robust indicators of the model’s ability to accurately predict future load levels based on historical data. This control mechanism involves a continuous monitoring of the model’s predictions against actual load values, enabling the real-time assessment of its performance. In instances where deviations are detected, adaptive changes can be triggered to recalibrate the model and improve its accuracy over time. Additionally, the GRU model can be augmented with external factors, such as weather forecasts, grid status updates, or energy pricing information, thereby enriching its predictive capabilities and enabling more refined forecasts.
The comparison of LSTM and GRU models in terms of test datasets is given in
Table 6, where the verification of the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) is presented. In all the cases, it is observed that the performance of the GRU model is superior to that of the LSTM model as per the analysis based on the MSE and MAPE.
Figure 9 shows the comparative performance and the comparison of the models with actual values to show the control component.
Figure 10 shows that the results of the GRU model are very close to those obtained by the LSTM model, with only slight changes in the MSE and MAPE. We obtained these results by applying the GRU and LSTM on each dataset. This shows that both models can handle dynamic load forecasting tasks, and selecting one over the other may depend on needs and computing limitations.
Table 7 contrasts the proposed LSTM-GRU approach with previous electric load forecasting techniques at the macro level. While past efforts include AMI data, deep neural networks, and LSTM-based models that have enhanced forecasting accuracy, these models have drawbacks such as resource-intensive computation and infrastructure demands. The results of the LSTM-GRU model are promising, given its low MSE and MAPE, but they also indicate that the model is optimized for global annual energy consumption data, suggesting that further work is needed to make it more generalizable.
Our intelligent control strategy incorporates and builds upon conventional methods, such as ESSs and DR, which are commonly used for load balancing and peak shaving in smart grids. Unlike traditional approaches, our strategy integrates these methods with advanced predictive models (LSTM and GRU), enabling a more proactive and precise management of the grid. This integration allows for real-time adjustments based on accurate load forecasts, providing a more responsive and efficient solution compared with conventional techniques.
By employing both ESSs and DR within the framework of our predictive models, the strategy offers enhanced control over the grid’s load profile, ensuring that peak loads are effectively managed and that grid stability is consistently maintained. The results of our study suggest that this integrated approach can achieve greater reductions in peak loads and more stable grid operation than traditional methods alone.
Forecast errors exert a profound influence on decision-making processes that heavily rely on model predictions, especially in the context of dynamic load forecasting within smart grids. When the predicted load values deviate significantly from the actual observed values, it introduces a level of uncertainty that can impact various aspects of energy management and grid operations. One of the critical implications is in resource allocation, where inaccurate load forecasts can lead to suboptimal utilization of energy resources, potentially resulting in over-capacity or under-capacity situations. For utility providers and grid operators, inaccurate load predictions may affect their ability to ensure a reliable and stable power supply. If the forecasted load is considerably lower than the actual demand, it might lead to insufficient energy generation and, consequently, an increased risk of blackouts or brownouts. Conversely, overestimating load demand can lead to unnecessary energy production, contributing to economic inefficiency and environmental impacts.