Article

Short-Term Load Probabilistic Forecasting Based on Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise Reconstruction and Salp Swarm Algorithm

School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan 232001, China
*
Author to whom correspondence should be addressed.
Energies 2022, 15(1), 147; https://doi.org/10.3390/en15010147
Submission received: 19 November 2021 / Revised: 18 December 2021 / Accepted: 21 December 2021 / Published: 27 December 2021
(This article belongs to the Special Issue Emerging Trends in Energy Economics)

Abstract

Short-term load forecasting is an important part of load forecasting and is of great significance to optimal power flow and power supply guarantees in the power system. In this paper, we propose a load series reconstruction method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) with sample entropy (SE). The load series is decomposed by ICEEMDAN and reconstructed into a trend component, a periodic component, and a random component by comparing the sample entropy of each mode with that of the original series. An extreme learning machine optimized by the salp swarm algorithm (SSA-ELM) is used to predict each component, and the final prediction value is obtained by superimposing the prediction results of the three components. Then, the prediction error of the training set is divided into four load intervals according to the predicted value, and kernel density estimation is applied to obtain the error distribution of each interval. Combining the predicted values of the prediction set with the error distribution of the corresponding load interval yields the load prediction interval. The method is verified on the hourly load data of a region in Denmark in 2019. The experimental results show that the proposed method achieves high prediction accuracy for short-term load forecasting.

1. Introduction

With the development of industry and the economy, the conflict between energy supply and demand is becoming increasingly acute. Electric energy in particular is closely related both to people's lives and to industrial production, so the balance between its supply and demand is of particular concern. At present, the dominant power generation model worldwide is still coal-fired generation, which causes air pollution. To ensure the sustainable development of the economy, countries all over the world are vigorously developing new energy sources [1]. With the development of electric energy conversion and storage technology [2,3], photovoltaic, wind, tidal, and geothermal power generation are increasingly incorporated into the power grid, which not only alleviates the energy shortage but also introduces a large number of random power flows. This poses a severe new challenge to the stability and load balance of the power grid.
In a power system incorporating a large number of new energy sources, power must achieve a two-way balance between supply and demand. However, supply-side generation is affected by a variety of factors and is not fully controllable, and the power consumption behavior of users on the demand side also has a certain randomness. This interaction between supply and demand adds uncertainty to the load flow of the system, so accurate short-term load forecasting is of great significance to ensuring the balance of the power system [4]. Moreover, since September 2021, many regions of China have been ordered to limit power load, which has affected the lives of some people and the production of enterprises; accurate prediction of power load is therefore a major demand of social development. Finally, with the construction of the smart grid [5], improving the stability and energy utilization of the system and reducing the cost of power generation have become important goals. Accurate prediction of power demand in each region helps realize the economic operation of the power system [6].
Load forecasting can be divided into point forecasting [7,8] and probability forecasting [9,10] according to the form of the forecasting results. At present, most load forecasting is point forecasting, whose result is the single-point expectation of the load at a certain future time. Power load is nonlinear and time-varying, so point prediction struggles to reflect the fluctuation range of load changes. Probabilistic prediction of the uncertain factors in the power market supports the control and stable operation of the power grid [11].
Depending on whether a distribution is presupposed for the prediction object or the prediction error, probability prediction can be divided into parametric [12] and nonparametric [13,14] approaches. Parametric probability density estimation requires the estimated object to conform to a specific distribution, which is a limitation in the present situation where more and more new energy generation is being integrated into the grid. Nonparametric methods avoid a priori assumptions and excessive human intervention, making it easier to approximate the actual distribution.
In most decomposition-and-integration models, the load series is decomposed into several components, each component is predicted separately, and the results are combined; the number of models is therefore large and the training time long. To address this, we use improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) combined with sample entropy to reconstruct the load series into three parts, a random component, a periodic component, and a trend component, which reduces the number of prediction models to three and shortens the training time. Because point forecasting alone struggles to reflect the load variation range, we combine point forecasting with probability forecasting to predict the load interval. The error interval of the prediction set is obtained by combining the probability distribution of the training-set error with KDE, and the final prediction interval is obtained by adding the point prediction. Under the 90% confidence interval, the prediction interval coverage probability (PICP) reached 0.919, indicating that 91.9% of the prediction-set data fell within the prediction interval. Meanwhile, the prediction interval normalized average width (PINAW) is 0.112, which shows that we do not achieve this coverage by inflating the interval width. We therefore conclude that the proposed method has good prediction accuracy and a good application prospect in the field of load probabilistic forecasting.
The rest of this paper is structured as follows. The second section introduces the current research work of load forecasting. The third section introduces the relevant methods used in this paper. The fourth section mainly introduces the realisation process of the model and evaluation indicators. The fifth section is the experimental results and analysis. The sixth section is the summary of this paper.

2. Literature Review

At present, load forecasting methods are mainly divided into traditional methods and artificial intelligence methods. Artificial intelligence methods mainly include deep learning methods represented by the long short-term memory network (LSTM) [15,16] and the convolutional neural network (CNN) [17,18], and machine learning methods represented by support vector regression (SVR) [19,20] and the artificial neural network (ANN) [21,22]. Deep learning methods have good prediction performance and high tolerance to input faults, but model training is time-consuming. Decomposition-and-integration models have achieved good results in load forecasting and other energy forecasting fields, but these models usually predict every decomposed component one by one and then superimpose the results, so the training time is long. In addition, the prediction accuracy of such models is directly related to the decomposition method. Mode aliasing may occur in empirical mode decomposition (EMD) [23]. The amplitude and iteration number of the white noise added by ensemble empirical mode decomposition (EEMD) [24] depend on human experience, and when these values are poorly chosen, EEMD may fail to overcome modal aliasing. These factors may affect the prediction results.
At present, most load forecasting still takes a determined load value as the forecasting goal. Ge et al. [25] achieved good accuracy in industrial load prediction using reinforcement learning combined with least squares support vector machines with particle swarm optimisation. Zhang et al. [26] used complete ensemble empirical mode decomposition with adaptive noise combined with support vector regression with dragonfly optimization to forecast the electric load, which also yielded good prediction results. Rafi et al. [27] used convolutional neural networks combined with long short-term memory networks to construct a model for short-term electricity load forecasting and achieved good prediction reliability. Wang et al. [28] used a long short-term memory network to forecast short-term residential loads with consideration of weather features. Phyo et al. [29] used classification and regression trees and the deep belief network for 30-min granularity load forecasting.
On the other hand, deterministic forecasting can hardly reflect the load information fully. Using probability forecasting to predict the load variation range therefore helps provide strong support for the production, dispatching, operation, and other links of the power grid system.
In addition, the prediction accuracy of decomposition-and-integration models is directly related to the decomposition method, and mode aliasing may occur in empirical mode decomposition. Moreover, most decomposition-and-integration models build a prediction model for each component; although the prediction accuracy is high, the number of models is large and the training time is long.
In this paper, we first carry out point prediction and then analyze the training-set error to obtain the distribution of the prediction error in different load intervals, realizing load probability prediction. Improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) [30] effectively solves the mode mixing problem of empirical mode decomposition (EMD) and avoids the residual noise of ensemble empirical mode decomposition (EEMD), which helps improve the prediction accuracy of the model. Firstly, ICEEMDAN combined with sample entropy is used to reconstruct the load series into three parts, a random component, a periodic component, and a trend component, which effectively reduces the number of prediction models and shortens the prediction time. Since the extreme learning machine (ELM) algorithm was proposed, it has achieved good results in many fields, such as fault diagnosis [31,32] and coal mine safety [33]. The accuracy of the prediction results can be effectively improved by using the salp swarm algorithm (SSA) to optimize the ELM. Finally, the kernel density estimation method is used to analyze the training-set error, obtain its probability density curve, and then estimate the error interval of the prediction set to obtain the final interval prediction result.

3. Methods

3.1. Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN)

Improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) is an algorithm based on empirical mode decomposition (EMD) proposed by Colominas et al. [34]. ICEEMDAN can effectively solve the mode mixing problem of EMD and the residual noise problem of EEMD. The decomposition process is as follows:
(1)
Calculate the local mean of $S^{(i)} = S + \lambda_0 C_1(\alpha^{(i)})$ by EMD to obtain the first-order residue $R_1$ and the corresponding intrinsic mode function (IMF) $IMF_1$:
$R_1 = \langle M(S^{(i)}) \rangle$
$IMF_1 = S - R_1$
where $i \in \{1, 2, \ldots, M\}$; $S$ is the original signal; $\lambda$ is the signal-to-noise ratio; $\alpha^{(i)}$ is a realization of zero-mean, unit-variance white noise; $C_j(\cdot)$ is the operator that produces the $j$th-order intrinsic mode function obtained by EMD; $M(\cdot)$ is the operator that computes the local mean of a signal; and $\langle \cdot \rangle$ denotes averaging over the $M$ noise realizations.
(2)
Calculate the local mean of $R_1 + \lambda_1 C_2(\alpha^{(i)})$ by EMD to obtain the second-order residue $R_2$ and the corresponding intrinsic mode function $IMF_2$:
$R_2 = \langle M(R_1 + \lambda_1 C_2(\alpha^{(i)})) \rangle$
$IMF_2 = R_1 - R_2$
(3)
Repeat the process until the signal can no longer be decomposed:
$R_l = \langle M(R_{l-1} + \lambda_{l-1} C_l(\alpha^{(i)})) \rangle$
$IMF_l = R_{l-1} - R_l$
where $l = 2, 3, \ldots, L$ and $L$ is the total number of IMFs.
Finally, the original signal is decomposed as $S = \sum_{j=1}^{L} IMF_j + R_L$.
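To make the recursion concrete, the following Python sketch implements the ensemble-averaging structure of steps (1)-(3). A centered moving average stands in for EMD's local-mean operator $M(\cdot)$, and raw white noise stands in for the noise modes $C_j(\alpha^{(i)})$, so this illustrates only the recursion under those simplifying assumptions, not a faithful ICEEMDAN (in practice a library implementation would be used). All function names and parameters here are illustrative.

```python
import numpy as np

def local_mean(x, window=5):
    # Stand-in for EMD's local-mean operator M(.): a centered moving average.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def iceemdan_sketch(s, n_realizations=20, snr=0.2, max_imfs=5, seed=0):
    """Ensemble recursion of steps (1)-(3): each residue is the ensemble
    average of local means of (previous residue + scaled noise)."""
    rng = np.random.default_rng(seed)
    imfs = []
    residue = np.asarray(s, dtype=float).copy()
    for _ in range(max_imfs):
        acc = np.zeros_like(residue)
        for _ in range(n_realizations):
            noise = rng.standard_normal(len(s))
            acc += local_mean(residue + snr * np.std(residue) * noise)
        new_residue = acc / n_realizations   # R_l = <M(R_{l-1} + noise)>
        imfs.append(residue - new_residue)   # IMF_l = R_{l-1} - R_l
        residue = new_residue
    return np.array(imfs), residue

t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 8 * t) + 0.5 * t
imfs, res = iceemdan_sketch(signal)
```

Because each IMF is defined as a difference of consecutive residues, the sum of all IMFs plus the final residue telescopes back to the original signal exactly, which mirrors the closing identity above.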

3.2. Sample Entropy (SE)

Sample entropy (SE) [35] is a method to measure the complexity of an unstable time series. Compared with similar measures, sample entropy does not depend on the data length and has better consistency. The value of sample entropy is negatively correlated with the degree of sequence self-similarity: the higher the entropy, the more complex the series. The sample entropy is calculated as follows:
(1)
For a time series $x(i)$ with sample size $N$, form the $m$-dimensional template vectors in order:
$X_m(i) = [x(i), x(i+1), \ldots, x(i+m-1)]$
where $i = 1, 2, \ldots, N-m+1$.
(2)
Count the number of vectors $X_m(j)$ ($j \neq i$) whose distance from $X_m(i)$ is less than $r$. Denote this number by $B_i$; the ratio of $B_i$ to the total number of vectors is denoted $B_i^m(r)$:
$d_m[X_m(i), X_m(j)] = \max_{0 \le k \le m-1} |x(i+k) - x(j+k)|$
$B_i^m(r) = \dfrac{B_i}{N-m+1}$
$B^m(r) = \dfrac{1}{N-m} \sum_{i=1}^{N-m} B_i^m(r)$
(3)
Increase the dimension to $m+1$ and repeat the above steps to calculate $B^{m+1}(r)$.
(4)
Calculate the sample entropy:
$SE = -\ln\left[\dfrac{B^{m+1}(r)}{B^m(r)}\right]$
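The four steps above can be sketched directly in Python. As a hedge: this sketch uses the common convention of excluding self-matches and an overall (rather than per-template) match ratio, which differs slightly from the normalization in the equations; the tolerance $r$ is taken as a fraction of the series standard deviation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SE = -ln(B^{m+1}(r) / B^m(r)) with the Chebyshev distance and
    tolerance r = r_factor * std(x); self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def match_ratio(length):
        # Overlapping template vectors of the given length.
        templates = np.array([x[i:i + length] for i in range(N - length + 1)])
        count, total = 0, 0
        for i in range(len(templates)):
            for j in range(len(templates)):
                if i == j:
                    continue  # exclude self-matches
                total += 1
                if np.max(np.abs(templates[i] - templates[j])) < r:
                    count += 1
        return count / total

    return -np.log(match_ratio(m + 1) / match_ratio(m))

se_periodic = sample_entropy([0.0, 1.0] * 150)        # highly regular series
rng = np.random.default_rng(0)
se_random = sample_entropy(rng.standard_normal(300))  # white noise
```

As expected, a strictly periodic series yields an entropy near zero, while white noise yields a much larger value, matching the interpretation that higher entropy means a more complex, less self-similar series.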

3.3. Salp Swarm Algorithm (SSA)

The salp swarm algorithm (SSA) is a heuristic swarm optimization algorithm proposed by Mirjalili et al. [36] in 2017. The SSA mimics the swarming behaviour of salps in the sea to find optimal parameters. Salps form a chain: the frontmost salp (the leader) guides the whole swarm, and the following salps search globally according to the forward direction. The specific process of the SSA is as follows:
Initialize all parameters: the number of salps is $M$, the maximum number of iterations is $I$, $[lb, ub]$ is the search range, and $d$ is the dimension of the search space.
(1)
Population initialization. SSA initializes the population by generating random numbers.
$X_{M \times d} = lb + rand(M, d) \times (ub - lb)$
(2)
Calculate the fitness of each salp. Save the salp coordinates with the highest fitness.
(3)
Calculate the variable $c_1$:
$c_1 = 2 e^{-\left(\frac{4i}{I}\right)^2}$
where $i$ is the current iteration number and $I$ is the maximum number of iterations.
(4)
Update the position of the first salp (the leader), which searches for food and leads the movement direction of the population:
$x_d^1 = \begin{cases} P_d + c_1((ub_d - lb_d) c_2 + lb_d), & c_3 \ge 0.5 \\ P_d - c_1((ub_d - lb_d) c_2 + lb_d), & c_3 < 0.5 \end{cases}$
where $x_d^1$ denotes the position of the leader in the $d$th dimension; $ub_d$ and $lb_d$ are the upper and lower bounds of the $d$th dimension, respectively; $P_d$ is the position of the food source in the $d$th dimension; and $c_2$ and $c_3$ are random numbers uniformly generated in the range [0, 1].
(5)
Update the positions of the followers:
$x_d^m = \frac{1}{2}\left(x_d^m + x_d^{m-1}\right)$
where $m \ge 2$ and $x_d^m$ is the position of the $m$th salp in the $d$th dimension.
(6)
Calculate the fitness of each salp and save the coordinates of the salp with the best fitness. Update the iteration number: $i = i + 1$.
(7)
If $i > I$, output the coordinates of the salp with the optimal fitness; otherwise, return to step (3).
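Steps (1)-(7) can be sketched as follows, here minimizing the sphere function $f(x) = \sum x^2$ as a stand-in fitness. The function names, bounds, and settings are illustrative, not from the paper.

```python
import numpy as np

def salp_swarm(fitness, lb, ub, n_salps=30, n_iter=200, seed=0):
    """Minimal salp swarm algorithm following steps (1)-(7) above."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    # (1) Initialize positions uniformly inside [lb, ub].
    X = lb + rng.random((n_salps, d)) * (ub - lb)
    fit = np.array([fitness(x) for x in X])
    best = X[fit.argmin()].copy()        # (2) food source = best salp so far
    best_fit = fit.min()
    for i in range(1, n_iter + 1):
        # (3) exploration/exploitation coefficient, decaying with iterations
        c1 = 2 * np.exp(-(4 * i / n_iter) ** 2)
        # (4) leader moves around the food source
        for dim in range(d):
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub[dim] - lb[dim]) * c2 + lb[dim])
            X[0, dim] = best[dim] + step if c3 >= 0.5 else best[dim] - step
        # (5) each follower moves to the midpoint with its predecessor
        for m in range(1, n_salps):
            X[m] = (X[m] + X[m - 1]) / 2
        X = np.clip(X, lb, ub)
        # (6) re-evaluate and keep the best-ever position
        fit = np.array([fitness(x) for x in X])
        if fit.min() < best_fit:
            best_fit = fit.min()
            best = X[fit.argmin()].copy()
    return best, best_fit                # (7) best salp after I iterations

best, best_fit = salp_swarm(lambda x: float(np.sum(x ** 2)), [-5, -5], [5, 5])
```

Because the best-ever position is saved at every iteration, the returned fitness is monotonically non-increasing over the run; on this smooth test function the swarm converges close to the global minimum at the origin.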

3.4. Extreme Learning Machine (ELM)

The extreme learning machine (ELM) [37] was proposed by Huang et al. It is a supervised learning method for single-hidden-layer feedforward neural networks. The input weight matrix and hidden-layer thresholds of the ELM are randomly generated, giving it the advantages of few training parameters and a short training time.
The mathematical model of ELM is as follows:
$y_i = \sum_{j=1}^{l} g(\omega_j \cdot x_i + b_j) \, \beta_j$
where $i = 1, 2, \ldots, N$; $x_i$ is the input vector; $y_i$ is the output vector; $g(x)$ is the activation function; $\omega_j$ is the input weight vector; $b_j$ is the hidden-layer threshold; $\beta_j$ is the output weight; $l$ is the number of hidden-layer nodes; and $N$ is the number of samples.
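The training procedure implied by this equation, random hidden-layer parameters followed by a least-squares solve for the output weights $\beta$, can be sketched as follows (names, target function, and settings are illustrative):

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    """ELM training: random input weights/biases stay fixed; only the
    output weights beta are solved, by least squares on the hidden outputs."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # input weight matrix (random)
    b = rng.standard_normal(n_hidden)                # hidden-layer thresholds
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a smooth 1-D target; the training error should be small.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse_train = np.mean((y - elm_predict(X, W, b, beta)) ** 2)
```

Since only $\beta$ is learned, training reduces to one linear least-squares problem, which is why ELM training is so much faster than gradient-based network training.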

3.5. Kernel Density Estimation (KDE)

Kernel density estimation (KDE) [38,39,40], proposed by Parzen, estimates a probability density function using a differentiable kernel function:
$\hat{f}(x) = \dfrac{1}{Mw} \sum_{i=1}^{M} F\left(\dfrac{x - x_i}{w}\right)$
where $M$ is the number of samples; $F(x)$ is a kernel function, such as the Normal, Box, Triangle, or Epanechnikov kernel; and $w$ is the window width (bandwidth).
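A minimal sketch of this estimator with the Normal kernel follows; the bandwidth, sample data, and evaluation grid are chosen for illustration only.

```python
import numpy as np

def kde(samples, grid, w):
    """f_hat(x) = (1 / (M * w)) * sum_i F((x - x_i) / w), Normal kernel F."""
    samples = np.asarray(samples, dtype=float)
    M = len(samples)
    u = (grid[:, None] - samples[None, :]) / w
    F = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Normal (Gaussian) kernel
    return F.sum(axis=1) / (M * w)

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, 500)           # stand-in for normalized errors
grid = np.linspace(-6, 6, 601)
density = kde(errors, grid, w=0.3)
mass = density.sum() * (grid[1] - grid[0])   # numerical integral of the estimate
```

A basic sanity check on any density estimate is that it is non-negative and integrates to approximately one over a grid wide enough to contain essentially all the probability mass.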

4. Realisation Process and Evaluation Index

4.1. Realisation Process

Although the traditional "decompose and ensemble" prediction model has a good prediction effect, it needs to establish forecasting models for all components separately, which requires a lot of training time. In this paper, we reconstruct the ICEEMDAN-decomposed components by combining sample entropy with the load characteristics. Specifically, the load is divided into a random component, a periodic component, and a trend component. The three components are then predicted respectively, and the final point prediction result is obtained by superimposing their prediction results. The specific prediction process of the model is as follows:
(1)
Decomposition of load data. ICEEMDAN is used to decompose the original load series to obtain some IMF. Then, calculate the sample entropy of the original series and each IMF.
(2)
Reconstruction of load data. The IMF with sample entropy greater than 0.5 is reconstructed as the random component, the IMF with sample entropy less than 0.04 is reconstructed as the trend component, and the remaining IMF is reconstructed as the periodic component.
(3)
Forecasting of load values. The data set contains 8760 load values. The training set and prediction set are divided 4:1: the first 7008 values form the training set, and the remaining values form the prediction set. SSA-ELM models are established for the random component, periodic component, and trend component respectively. The load values of the two hours before the prediction time are taken as input to obtain the prediction result of each component, and the three results are superimposed to obtain the final point prediction. SSA searches for the number of hidden-layer neurons and the hidden-layer thresholds of the ELM to improve its prediction performance.
(4)
Normalisation of error data. To avoid the effect of predicted-value magnitude on the error estimates, the error values are normalised using the maximum actual load value in the training set.
(5)
Calculate the upper and lower limits of the error. Several error intervals are divided according to the prediction results of the training set. Kernel density estimation is used to obtain the probability density function of the training-set error in each interval. The appropriate kernel function is selected by comparing the fitted probability density curve with the real error data. Combined with the interval confidence level, the upper and lower error limits are obtained.
(6)
Obtain the final prediction interval by superimposing the predicted load values of the prediction set with the corresponding upper and lower error limits.
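Steps (4)-(6) can be sketched as follows. For brevity, empirical quantiles of the binned, normalized errors stand in for the paper's kernel-density step, and all names, bin edges, and data are illustrative:

```python
import numpy as np

def build_intervals(train_pred, train_true, test_pred, bin_edges, alpha=0.10):
    """Steps (4)-(6): normalize training errors, bin them by predicted load,
    take per-bin error quantiles, and attach them to the test predictions."""
    train_pred = np.asarray(train_pred, float)
    train_true = np.asarray(train_true, float)
    test_pred = np.asarray(test_pred, float)
    norm = train_true.max()                    # step (4): normalization constant
    err = (train_true - train_pred) / norm     # normalized training errors
    train_bins = np.digitize(train_pred, bin_edges)
    lo_q, hi_q = {}, {}
    for k in np.unique(train_bins):            # step (5): per-bin error limits
        e = err[train_bins == k]
        lo_q[k] = np.quantile(e, alpha / 2)
        hi_q[k] = np.quantile(e, 1 - alpha / 2)
    test_bins = np.digitize(test_pred, bin_edges)
    lower = np.array([p + norm * lo_q.get(k, err.min())
                      for p, k in zip(test_pred, test_bins)])
    upper = np.array([p + norm * hi_q.get(k, err.max())
                      for p, k in zip(test_pred, test_bins)])
    return lower, upper                        # step (6): prediction interval

# Synthetic hourly load: a daily cycle plus noise; the "model" predicts the cycle.
rng = np.random.default_rng(1)
t = np.arange(4000)
cycle = 2000 + 500 * np.sin(2 * np.pi * t / 24)
true = cycle + rng.normal(0, 50, t.size)
lo, hi = build_intervals(cycle[:3000], true[:3000], cycle[3000:], [1750, 2350, 2850])
picp = np.mean((true[3000:] >= lo) & (true[3000:] <= hi))
```

On held-out data drawn from the same distribution, the coverage of such quantile-based intervals should be close to the nominal 90%.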

4.2. Evaluation Index

To evaluate the point prediction results of the proposed model, we use the mean absolute percentage error (MAPE), mean absolute error (MAE), and mean square error (MSE) to evaluate the accuracy of the prediction results. The equations are as follows:
$\mathrm{MAPE} = \dfrac{1}{M} \sum_{i=1}^{M} \left| \dfrac{y_i - \hat{y}_i}{y_i} \right|$
$\mathrm{MAE} = \dfrac{1}{M} \sum_{i=1}^{M} |y_i - \hat{y}_i|$
$\mathrm{MSE} = \dfrac{1}{M} \sum_{i=1}^{M} (y_i - \hat{y}_i)^2$
In the above equations, M is the number of samples; y i is the actual load value; and y ^ i is the predicted load value.
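For reference, the three point metrics can be computed directly (an illustrative helper, not code from the paper):

```python
import numpy as np

def point_metrics(y, y_hat):
    """MAPE, MAE, and MSE exactly as in the equations above."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    mape = np.mean(np.abs((y - y_hat) / y))
    mae = np.mean(np.abs(y - y_hat))
    mse = np.mean((y - y_hat) ** 2)
    return mape, mae, mse

# Example: actuals [100, 200], predictions [110, 190]
mape, mae, mse = point_metrics([100.0, 200.0], [110.0, 190.0])
```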
To evaluate the interval prediction results, PICP and PINAW are introduced. The equations are as follows:
$\mathrm{PICP} = \dfrac{1}{M} \sum_{i=1}^{M} c_i$
$\mathrm{PINAW} = \dfrac{1}{MR} \sum_{i=1}^{M} |U_i - L_i|$
where $M$ is the number of samples; $c_i = 1$ when the true value falls inside the prediction interval and $c_i = 0$ otherwise; $R$ is the range of the true values; and $U_i$ and $L_i$ are the upper and lower bounds of the prediction interval, respectively.
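The two interval metrics can likewise be computed directly (an illustrative helper with toy values):

```python
import numpy as np

def interval_metrics(y, lower, upper):
    """PICP (coverage) and PINAW (normalized average width) as defined above."""
    y = np.asarray(y, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    picp = np.mean((y >= lower) & (y <= upper))  # fraction of covered points
    R = y.max() - y.min()                        # range of the true values
    pinaw = np.mean(np.abs(upper - lower)) / R
    return picp, pinaw

# 3 of 4 points covered; widths 2, 3, 4, 1 over a true-value range of 3.
picp, pinaw = interval_metrics([1, 2, 3, 4], [0, 0, 0, 5], [2, 3, 4, 6])
```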

5. Experiments and Analysis

5.1. Experimental Data and Conditions

To further test the prediction performance of the model, we use the hourly load data of a region in Denmark in 2019, obtained from ENTSO-E, for verification. The load values are shown in Figure 1. The load is generally stable, with values higher at both ends of the year and lower in the middle.
Experiments were conducted on 64-bit Windows 10 using MATLAB R2018a with an i7-7700hq CPU and a GTX-1050 graphics card.
From Figure 1, we can see that the load data from 5 to 7 p.m. on May 1 are 0, which may be abnormal values caused by missing data. At 8:00 a.m. and 9:00 a.m. on November 4, the load reached its highest value of the whole year, but this value is relatively isolated. This situation also shows that load changes are affected by many factors and have some randomness. On the whole, the fluctuation of the annual load data is small, and the load at the beginning and end of the year is slightly higher in the overall trend.

5.2. Selection of Mode Decomposition Method

Firstly, empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD), and improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) are used to decompose the original load series. To control the experimental variables, we set the noise weight of EEMD and ICEEMDAN to 0.2 and the number of noise additions to 50. The sample entropy of the original series is 1.462. The higher the sample entropy of an IMF, the lower its autocorrelation and the more complex and random the series. The sample entropy of IMF 11 and IMF 12 generated by EEMD decomposition is reported as 0 because their values are less than 1 × 10−5. Table 1 shows the sample entropy values and correlation coefficients for each IMF.
We reconstruct the IMFs with sample entropy > 0.5 into the random component, the IMFs with 0.04 < sample entropy < 0.5 into the periodic component, and the IMFs with sample entropy < 0.04 into the trend component. The composition of the three components under the different mode decomposition methods is shown in Table 2.
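This threshold rule can be expressed as a small helper (illustrative; the thresholds are taken from the text, the entropy values below are made up, and boundary values are assigned to the periodic group by convention):

```python
def group_imfs_by_entropy(entropies, low=0.04, high=0.5):
    """Assign IMF indices to the three components by sample entropy."""
    groups = {"random": [], "periodic": [], "trend": []}
    for idx, se in enumerate(entropies):
        if se > high:
            groups["random"].append(idx)     # entropy > 0.5
        elif se < low:
            groups["trend"].append(idx)      # entropy < 0.04
        else:
            groups["periodic"].append(idx)   # 0.04 <= entropy <= 0.5
    return groups

# Hypothetical entropy values for five IMFs, in decomposition order.
groups = group_imfs_by_entropy([1.2, 0.6, 0.3, 0.1, 0.02])
```

The corresponding component series are then obtained by summing the IMFs within each group.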
According to the division results in Table 2, we reconstructed the decomposed load series and then used the extreme learning machine (ELM) for prediction. When using the ELM, to ensure an optimal number of hidden-layer neurons, we looped the number of hidden neurons from 1 to 100 and selected the best. The prediction results are shown in Table 3. The accuracy of load series prediction after decomposition and reconstruction with the ICEEMDAN algorithm is the highest: the mean absolute percentage error (MAPE) is 2.50, the mean absolute error (MAE) is 63.84, and the mean square error (MSE) is 9625.20. The prediction results based on EMD decomposition and reconstruction are worse, possibly because mode mixing occurred. Therefore, we can judge that using ICEEMDAN to reconstruct and predict the load series gives good accuracy.
Based on the above experimental results, we choose to use ICEEMDAN combined with sample entropy reconstruction to decompose the load data. The reconstructed load data is shown in Figure 2.
Combined with Figure 2, we can see that the load value showed a downward trend from January to August, reaching its lowest point in August, and an upward trend from August to December. From the variance and standard deviation, the load values in January, February, April, and December are larger, while those in June, July, August, and September are smaller.
Figure 2 shows the three load components reconstructed by ICEEMDAN combined with sample entropy. The periodic component has obvious and stable periodicity. The fluctuation range of the trend component is small, with high values at both ends and low values in the middle, and its overall trend is similar to that of the original data. The random component varies at a higher frequency, and its values change over a large range and at random. From the above analysis, we conclude that the reconstructed components conform to the characteristics of the original load data.

5.3. Prediction Performance of Different Prediction Methods

To select the best prediction algorithm, we compared the BP neural network, support vector regression, and the ELM. The prediction results are shown in Table 4. The MAPE and MAE of ICEEMDAN-ELM are greater than those of ICEEMDAN-BP, but its MSE is smaller; all three evaluation indexes of ICEEMDAN-ELM are better than those of ICEEMDAN-SVR. As MSE is more sensitive to extreme values, combining the three evaluations we chose ICEEMDAN-ELM.
In the experimental process, we found that although the ELM has the advantages of high accuracy and a fast training speed, its prediction stability is slightly poor. To further improve the prediction effect, we used the salp swarm algorithm (SSA) to optimize the number of hidden-layer neurons and the thresholds of the ELM to improve the accuracy of point prediction. After SSA optimization, the prediction accuracy of the model improved significantly: MAPE, MAE, and MSE decreased to 1.98, 50.42, and 6723.70, respectively. Figure 3 compares the prediction results of SSA-ELM and ELM; SSA-ELM clearly achieves higher prediction accuracy. Therefore, we conclude that using SSA to optimize both the number of hidden-layer neurons and the thresholds of the ELM is better than selecting only the optimal number of hidden-layer neurons.

5.4. Performance of Reconstructed Model and Ordinary Model

To better evaluate the three different prediction models, we use SSA-ELM to predict the load data processed by the different methods. From Table 5, we can see that the prediction effect of the model combined with ICEEMDAN is better than that of the ordinary model without decomposition. On the other hand, the training time of the reconstructed model is 127.78 s, significantly lower than that of the fully decomposed model. Considering the prediction accuracy, the number of models, and the training time together, we believe the overall performance of the reconstructed model is better.

5.5. Interval Prediction Based on Kernel Density Estimation

To better estimate the uncertainty in the load sequence, we used the kernel density estimation method to estimate the load interval. Firstly, we use the maximum real load value of the training set to normalize the training-set errors, and then divide the errors into four groups according to the size of the predicted load value: 0–1750 MW, 1750–2350 MW, 2350–2850 MW, and above 2850 MW. The error distribution of each interval is estimated by both kernel density estimation and logistic estimation, and the better-fitting curve is selected. Then, according to the predicted value in the prediction set, the corresponding error percentages are applied to obtain the final prediction interval.
It can be seen from Figure 4 that kernel density estimation fits the training-set error of the 0–1750 MW interval better than logistic estimation. Comparing further with Figure 4b, the Normal kernel fits the cumulative distribution function of the training-set error best, and the error range is [−1.44%, +2.1%] at the 90% confidence level. Similarly, we found experimentally that the Epanechnikov kernel performs best for the 1750–2350 MW interval, with an error range of [−2.9%, +2.6%] at the 90% confidence level. For the 2350–2850 MW interval, the Box kernel fits well, with an error range of [−3.3%, +4.1%] at the 90% confidence level. Above 2850 MW, the Box kernel also performs well, with a corresponding range of [−3.21%, +3.98%].
Finally, the prediction interval coverage probability (PICP) is 0.919 and the prediction interval normalized average width (PINAW) is 0.112. A PICP of 0.919 indicates that 91.9% of the load values in the test set fall within the prediction interval; since PICP exceeds the interval confidence level, the model has good prediction performance and can accurately estimate load changes. For PINAW, at a given prediction interval width, the larger the variation range of the real load data, the smaller the PINAW, which also indicates better model performance. To avoid the impact of the highest annual load value (4952) on PINAW, we select the second-highest value of the forecast set, 3416, as the upper limit of the load range, giving a final PINAW of 0.112. This shows that the width of the prediction interval is within a reasonable range and the model does not obtain high coverage by arbitrarily widening the error interval. In summary, we conclude that the probability prediction model proposed in this paper has good prediction accuracy.

6. Conclusions

By analyzing the above experiments, we can draw the following conclusions:
(1)
Compared with the EEMD and EMD decomposition models, we find that ICEEMDAN decomposition gives better prediction accuracy. In addition, through the comparison of the decomposition model, the reconstruction model, and the ordinary model, we find that the reconstruction model performs well in both training time and prediction accuracy and is suitable for load forecasting scenarios. Using ICEEMDAN combined with sample entropy to decompose and reconstruct the load series not only improves the accuracy of load forecasting, but also reduces the number of models, shortens the training time, and improves the forecasting efficiency.
(2)
Comparing SSA-ELM with plain ELM shows that prediction accuracy improves significantly after SSA is used to optimize the number of ELM hidden-layer neurons and their thresholds. SSA-ELM can effectively improve the stability and accuracy of the prediction results.
(3)
Kernel density estimation is used to analyze the error interval; it fits the error curve well and yields an accurate prediction interval. We also found that the choice of kernel function affects the fit to the error distribution and, in turn, the accuracy of the interval prediction.
(4)
PICP was 0.919 and PINAW was 0.112. Together, these two indicators show that the model achieves high coverage within a reasonable interval width. This means that the proposed method can predict the variation range of the load and reveal load information that a point forecast alone cannot, which further demonstrates the feasibility of the method.
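As a reference for conclusion (1), the sample entropy used to group the IMFs into random, periodic, and trend components can be computed as below. This is a generic textbook implementation (template length m, tolerance r, Chebyshev distance), not necessarily the exact variant used in the paper; the default r = 0.2 × std is a common convention.

```python
import math

def sample_entropy(series, m=2, r=None):
    """SampEn(m, r) = -ln(A / B): B counts template pairs of length m within
    tolerance r (Chebyshev distance, self-matches excluded), and A counts the
    same for length m + 1. Lower values indicate a more regular series."""
    n = len(series)
    if r is None:
        mean = sum(series) / n
        r = 0.2 * math.sqrt(sum((x - mean) ** 2 for x in series) / n)

    def matches(k):
        templates = [series[i:i + k] for i in range(n - k + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)
```

A strongly periodic series scores near zero while noise-like IMFs score high, which is what allows an entropy comparison against the original series to separate random components from trend components.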

Author Contributions

Conceptualization, T.H. and M.Z.; methodology, T.H. and M.Z.; software, T.H. and K.B.; validation, T.H. and W.L.; writing—original draft preparation, T.H.; data curation, T.H. and Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number: 2018YFC0604503; the Energy Internet Joint Fund Project of Anhui province, grant number: 2008085UD06; the Major Science and Technology Program of Anhui Province, grant number: 201903a07020013; the Ministry of Education New Generation of Information Technology Innovation Project, grant number: 2019ITA01010; and the Demonstration Project of Science Popularization Innovation and Scientific Research Education for College Students, grant number: KYX202117.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data was obtained from ENTSO-E and is available at https://transparency.entsoe.eu/dashboard/show (accessed on 15 October 2021).

Acknowledgments

We would like to thank the ENTSO-E for making the data available.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Load value of a region in Denmark in 2019.
Figure 2. The reconstructed load series.
Figure 3. Comparison of actual and predicted values.
Figure 4. 0–1750 MW interval training set error; (a) probability density function curve; (b) cumulative distribution function curve.
Table 1. Sample entropy and correlation coefficient.

| Method | | IMF1 | IMF2 | IMF3 | IMF4 | IMF5 | IMF6 | IMF7 | IMF8 | IMF9 | IMF10 | IMF11 | IMF12 | IMF13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EMD | CC | 0.087 | 0.352 | 0.672 | 0.203 | 0.280 | 0.244 | 0.104 | 0.155 | 0.217 | 0.324 | 0.033 | | |
| | SE | 0.192 | 0.518 | 0.6173 | 0.1767 | 0.329 | 0.155 | 0.247 | 0.041 | 0.084 | 0.341 | | | |
| EEMD | CC | 0.205 | 0.582 | 0.607 | 0.232 | 0.380 | 0.272 | 0.120 | 0.195 | 0.339 | 0.153 | 0.336 | 0.308 | 0.191 |
| | SE | 0.763 | 1.121 | 0.873 | 0.092 | 0.064 | 0.300 | 2.63 × 10⁻³ | 3.00 × 10⁻³ | 1.42 × 10⁻³ | 9.56 × 10⁻⁴ | 0 | 0 | 2 × 10⁻⁵ |
| ICEEMDAN | CC | 0.193 | 0.511 | 0.618 | 0.195 | 0.342 | 0.212 | 0.065 | 0.134 | 0.350 | 0.019 | | | |
| | SE | 0.729 | 1.123 | 1.059 | 0.108 | 0.082 | 0.041 | 4.30 × 10⁻³ | 3.30 × 10⁻³ | 1.66 × 10⁻³ | 1.30 × 10⁻³ | | | |
Table 2. Division of three components by different decomposition methods.

| Method | Random Component | Periodic Component | Trend Component |
|---|---|---|---|
| EMD | IMF1–IMF3 | IMF4–IMF7 | IMF8–IMF11 |
| EEMD | IMF1–IMF3 | IMF4–IMF6 | IMF7–IMF13 |
| ICEEMDAN | IMF1–IMF3 | IMF4–IMF6 | IMF7–IMF10 |
Table 3. Prediction results of ELM.

| Method | MAPE (%) | MAE | MSE |
|---|---|---|---|
| EMD-ELM | 2.60 | 67.23 | 16,393.89 |
| EEMD-ELM | 2.66 | 68.20 | 12,555.00 |
| ICEEMDAN-ELM | 2.50 | 63.84 | 9625.20 |
Table 4. Prediction results of different algorithms.

| Method | MAPE (%) | MAE | MSE |
|---|---|---|---|
| ICEEMDAN-BP | 2.28 | 58.68 | 9822.40 |
| ICEEMDAN-SVR | 3.13 | 77.10 | 11,582.00 |
| ICEEMDAN-ELM | 2.50 | 63.84 | 9625.20 |
Table 5. Comparison of the reconstructed model and the decomposition model.

| Method | MAPE (%) | MAE | MSE | Training Time (s) |
|---|---|---|---|---|
| Reconstructed Model | 1.98 | 50.42 | 6723.70 | 127.78 |
| Decomposition Model | 1.55 | 38.46 | 2632.40 | 451.50 |
| Ordinary Model | 2.32 | 59.69 | 8898.00 | 41.00 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
