Article

A Time Series Prediction Model for Wind Power Based on the Empirical Mode Decomposition–Convolutional Neural Network–Three-Dimensional Gated Neural Network

Zhiyong Guo, Fangzheng Wei, Wenkai Qi, Qiaoli Han, Huiyuan Liu, Xiaomei Feng and Minghui Zhang

1 College of Mechanical and Electrical Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
2 College of Energy and Traffic Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(8), 3474; https://doi.org/10.3390/su16083474
Submission received: 25 March 2024 / Revised: 15 April 2024 / Accepted: 18 April 2024 / Published: 21 April 2024

Abstract

In response to the global challenge of climate change and the shift away from fossil fuels, the accurate prediction of wind power generation is crucial for optimizing grid operations and managing energy storage. This study introduces a novel approach by integrating the proportional–integral–derivative (PID) control theory into wind power forecasting, employing a three-dimensional gated neural (TGN) unit designed to enhance error feedback mechanisms. The proposed empirical mode decomposition (EMD)–convolutional neural network (CNN)–three-dimensional gated neural network (TGNN) framework starts with the pre-processing of wind data using EMD, followed by feature extraction via a CNN, and time series forecasting using the TGN unit. This setup leverages proportional, integral, and differential control within its architecture to improve adaptability and response to dynamic wind patterns. The experimental results show significant improvements in forecasting accuracy; the EMD–CNN–TGNN model outperforms both traditional models like autoregressive integrated moving average (ARIMA) and support vector regression (SVR), and similar neural network approaches, such as EMD–CNN–GRU and EMD–CNN–LSTM, across several metrics including mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and coefficient of determination (R²). These advancements substantiate the model’s effectiveness in enhancing the precision of wind power predictions, offering substantial implications for future renewable energy management and storage solutions.

1. Introduction

In the global context of actively addressing climate change and gradually reducing the dependence on fossil fuels [1], wind energy has emerged as a core element of energy diversification due to its cleanliness and sustainability. Accurate wind power prediction is crucial for grid operators, as it aids in the efficient management of power supply and demand, reduces energy waste, and significantly enhances the economic benefits of wind energy [2]. However, wind power prediction faces many challenges due to uncertainties such as the operational status of wind turbines and climatic conditions, exhibiting high non-linearity in wind power time series data [3,4], posing challenges to accurate forecasting.
Since Rumelhart introduced back-propagation in 1986 [5], the development of neural networks has gone through several stages. In 1990, the Elman network enabled RNNs to handle time series, although it struggled with long-term dependencies. In 1997, the long short-term memory (LSTM) network [6] introduced gating mechanisms that alleviate this long-term dependency problem, and in 2014 the gated recurrent unit (GRU) [7] simplified the gating structure. In recent years, researchers have primarily focused on integrating traditional algorithms with other methodologies in time series forecasting models [8]. For instance, the LSTM technique has made significant progress in the field of wind power forecasting. In 2021, Shahid et al. [9] introduced a GLSTM model that combines LSTM with the genetic algorithm (GA), using the GA to optimize parameters in the model, focusing on feature learning and global optimization of non-linear sequential data, particularly optimizing the window size and number of neurons in the LSTM layers, which improved the wind power prediction performance by an average of 6% to 30%. In 2022, Xiang et al. [10] combined the self-attention temporal convolutional network (SATCN) with LSTM to create the SATCN–LSTM model. Here, the SATCN captures local features in time series data through convolutional layers, while the self-attention mechanism addresses long-range dependencies, thereby enhancing the model’s capacity to understand dynamic changes in time series data and increasing the accuracy of wind power forecasts, reducing the root mean square error by 17.56%. In 2023, Houran et al. combined LSTM with swarm intelligence (SI) optimization algorithms to develop a framework for short-term offshore wind power output estimation, wherein SI effectively searches the global space for optimal solutions to optimize LSTM model parameters, demonstrating excellent predictive performance in experiments. Also in 2023, Cui et al. [11] employed an improved dynamic sliding door algorithm (ImDSDA), Fuzzy C-Means (FCM), and a similarity matching mechanism in conjunction with an LSTM model to predict wind power ramp events, wherein ImDSDA dynamically adjusts gating parameters, FCM addresses overlapping data categories, and the similarity matching mechanism enhances the predictive capabilities under specific conditions, showing a performance superior to the existing methods in three mountainous wind farms in central China. Meanwhile, the application of the GRU in time series forecasting models continues to rise, particularly in the realm of wind power forecasting. For instance, in 2021, Kisvari et al. [12] introduced a GRU neural network method that integrates data pre-processing, resampling, anomaly detection and handling, feature engineering, and hyper-parameter tuning. This method displayed a clear advantage over traditional LSTM models in terms of prediction accuracy, training speed, and noise sensitivity, with a training speed increase of 38%. That same year, Liu et al. [13] developed a KK–CNN–GRU model guided by K-shape and K-means clustering, further expanding GRU’s application scope. “KK” in the KK–CNN–GRU method stands for K-shape and K-means clustering, used to extract patterns, denoise, and optimize the input data, enhancing the model’s accuracy in wind power prediction.
In Experiment A, KK–CNN–GRU achieved an RMSE of 83.6–224 kW and a MAPE of 4.36–18.7%; in Experiment B, the RMSE was in the range of 228–368 kW and the MAPE was in the range of 18.3–34.4%, showing high predictive accuracy. In 2023, Xiao et al. [14] introduced feature weighted principal component analysis (WPCA) and particle swarm optimization (PSO) algorithms to optimize the hyper-parameters of the GRU model. This method demonstrated significant advantages in actual wind power forecasting compared to other machine learning models, with reductions in the MAE and RMSE of 5.3–16% and 10–16%, respectively, and an increase in R² of 2.1–3.1%. Although these methods incorporate other algorithms to create hybrid models that mitigate the non-linearity and complexity of wind power forecasting, due to the non-stationary nature of meteorological factors, achieving satisfactory wind power forecasting results using hybrid models remains a significant challenge. Thus, this study builds upon the trends in recurrent neural network (RNN) units and hybrid models, experimentally incorporating PID theory into RNN units, effectively enhancing the accuracy of wind power prediction.
To provide a comprehensive performance comparison framework, this research will also employ traditional machine learning algorithms such as the autoregressive integrated moving average (ARIMA) model and support vector regression (SVR) as baseline models [15]. The ARIMA model is widely used in electricity demand and price forecasting due to its ability to handle the non-stationarity of time series data [16]. Meanwhile, SVR, as a powerful regression tool, has been proven to deliver satisfactory results in forecasting tasks with highly non-linear characteristics. By comparing with these traditional models, the aim is to demonstrate the accuracy of the EMD–CNN–TGNN model in predicting wind power.
In order to fully exploit the predictive capabilities of neural units for wind power time series data and enhance the accuracy of wind power prediction models, this study has undertaken the following tasks: (1) By integrating PID control theory and neural network technology, a three-dimensional gated neural (TGN) unit was designed based on error propagation. (2) Combined with the three-dimensional gated neural unit, the empirical mode decomposition (EMD)–convolutional neural network (CNN)–three-dimensional gated neural network (TGNN) was proposed. (3) The network model was experimentally validated using actual data from three wind farms, demonstrating its robust predictive capability.
Section 1 of this study provides a detailed background and highlights the cutting-edge achievements in the related literature. Section 2 supplies real operational data from three wind farms, providing an empirical basis for the research. Section 3 first describes the proposed EMD–CNN–TGNN model in detail, including the empirical mode decomposition (EMD) technique, the method of feature extraction through a convolutional neural network (CNN), and the foundational theory and prediction process of the three-dimensional gated neural network (TGNN). Section 4 presents the performance of the proposed model and other comparative models in experiments, validating its predictive efficacy. Finally, Section 5 offers conclusions and discussions.

2. Data and Evaluation Method

2.1. Wind Farm Data

To validate the predictive accuracy of our model, this study employed real data from two wind farms located in Xingtai, Hebei Province, and one in Dezhou, Shandong Province, China. The specific locations of these wind farms are illustrated in Figure 1, and the data include historical wind power generation data from each site with a temporal resolution of 10 min. All data underwent rigorous quality control procedures including the removal of erroneous data and the imputation of missing values. Detailed descriptions of these three wind farms are provided in Table 1.

2.2. Data

Real data collected through SCADA systems provide a more accurate reflection of the operational conditions at wind farms. The SCADA system records data in 10-minute intervals, and the raw turbine data from the three wind farms are provided in “.csv” format [17]. Data from nine wind turbines across these farms are used to validate the model’s predictive accuracy.
Upon analyzing the wind data from the three wind farms (Table 2, Table 3 and Table 4), it was noted that the maximum outputs of all turbines fall within a narrow range of 2200 to 2336.7 kW, with a minimum output of 0 kW, indicating a definite lower output limit. Notably, turbine 1 of Wind Farm C exhibited higher wind power outputs in terms of average (752.98 kW) and median (419.49 kW) compared to turbine 3 of Wind Farm B, which had a lower average (594.64 kW) and median (277.6 kW). This contrast highlights performance differences between turbines. All turbines exhibit significant volatility in wind power output, as indicated by the variance and standard deviation values, such as turbine 1 of Wind Farm C with a variance of 676,617.35 and a standard deviation of 822.57. The data distribution analysis shows a positive skewness across all turbines, indicating a rightward skew, while the negative kurtosis values suggest a relatively flat distribution.
These key data points allow us to conclude that, despite performance differences, all turbines show significant output fluctuations, which poses challenges for wind power prediction. For instance, the turbines in Wind Farm C generally perform better in terms of average and median wind power, but they also experience greater fluctuations, particularly turbine 1. Conversely, turbine 3 of Wind Farm B shows a lower average wind power output and lower volatility, but exhibits the most pronounced positive skew.

2.3. Evaluation Method

In the performance evaluation of regression models, four core metrics are employed: MAE (mean absolute error), MSE (mean squared error), RMSE (root mean squared error), and R² (coefficient of determination). These metrics evaluate the discrepancy between the model’s predictions and actual values, and the model’s explanatory power, from various perspectives [8,18].

2.3.1. Core Metrics

  • MAE (Mean Absolute Error) reflects the average deviation between predicted and actual values. It is robust against outliers and is defined as follows:
    $$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right|$$
    where $y_i$ is the actual value for the ith sample, $\hat{y}_i$ is the model’s predicted value, and $n$ is the total number of samples.
  • MSE (Mean Squared Error) is the average of the squared differences between observed and predicted values, emphasizing larger errors:
    $$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2$$
  • RMSE (Root Mean Squared Error) provides a standard measure of error magnitude in the same units as the data:
    $$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2}$$
  • R² (Coefficient of Determination) indicates the proportion of variance in the dependent variable predictable from the independent variables:
    $$R^2 = 1 - \frac{\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n}\left( y_i - \bar{y} \right)^2}$$
    where $\bar{y}$ is the mean of the actual values.
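For reference, the four metrics can be computed with a few lines of NumPy. This is a minimal illustrative sketch; the function and variable names are ours, not from the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return MAE, MSE, RMSE and R^2 for 1-D arrays of actual and predicted values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    errors = y_true - y_pred
    mae = np.mean(np.abs(errors))                     # mean absolute error
    mse = np.mean(errors ** 2)                        # mean squared error
    rmse = np.sqrt(mse)                               # root mean squared error
    ss_res = np.sum(errors ** 2)                      # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                        # coefficient of determination
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}

# Example with made-up wind power values in kW.
print(regression_metrics([120.0, 340.0, 0.0, 560.0], [110.0, 355.0, 15.0, 540.0]))
```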

2.3.2. Performance Improvement Metric

To quantify performance differences between two models, the performance improvement (PI) metric is used:
$$\mathrm{PI} = \frac{P_i - P_{i+1}}{P_{i+1}} \times 100\%$$
where $P_i$ and $P_{i+1}$ are the performance metrics of the respective models.
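As a concrete illustration of the PI definition, the short sketch below uses arbitrary example values that are not taken from the results tables.

```python
def performance_improvement(p_1, p_2):
    """PI between two models' metric values, as defined above, in percent."""
    return (p_1 - p_2) / p_2 * 100.0

# Made-up example: an RMSE of 80 kW versus a baseline RMSE of 100 kW gives -20%,
# i.e. a 20% reduction; the results tables report such reductions as positive improvements.
print(performance_improvement(80.0, 100.0))
```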

3. Methods

3.1. Construction of EMD–CNN–TGNN

This research introduces a neural network architecture known as EMD–CNN–TGNN, which is based on error feedback. As depicted in Figure 2, the EMD–CNN–TGNN architecture initially utilizes empirical mode decomposition (EMD) to unfold the time series information. The resultant intrinsic mode functions (IMFs) are then compiled into a multi-resolution matrix, which is fed into a convolutional neural network (CNN) for feature extraction. Ultimately, the predictions are made using the three-dimensional gated neural network (TGNN).

3.1.1. EMD

Empirical mode decomposition (EMD) is an adaptive method used to analyze and process the non-linear and non-stationary characteristics of wind power. Each IMF component reflects the inherent oscillatory modes of wind power at different time scales, effectively identifying and predicting power fluctuation trends [19,20].
The core steps of EMD include:
  • Extraction of Local Extrema: Identify all local extrema $p_1, p_2, \ldots, p_N$ in the given signal $x(t)$.
  • Envelope Extraction: For each pair of adjacent local extrema $p_i$ and $p_{i+1}$, perform linear interpolation to derive the upper envelope $e_{\max}(t)$ (from the local maxima) and the lower envelope $e_{\min}(t)$ (from the local minima):
    $$e_{\max}(t) = \frac{p_{i+1} - p_i}{t_{i+1} - t_i}\,(t - t_i) + p_i$$
    $$e_{\min}(t) = \frac{p_{i+1} - p_i}{t_{i+1} - t_i}\,(t - t_i) + p_i$$
  • Extraction of IMFs: Subtract the mean envelope $e_{\mathrm{avg}}(t)$ from the signal $x(t)$ to obtain a candidate component $c_1(t)$. Repeat this sifting process until $c_1(t)$ meets the stopping criteria and becomes the first IMF. Continue similarly on the residual until all IMFs are extracted.
Using the empirical mode decomposition method, 2048 wind power time series data points were decomposed into seven different IMFs. These IMFs were then assembled into a matrix of size 2048 × 7, shown in Figure 3, to serve as input for the CNN.
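As an illustration of this step, the following sketch uses the PyEMD library (the package named in Section 3.2) to decompose one 2048-point window and stack the first seven IMFs into the 2048 × 7 matrix; the file name and variable names are hypothetical, and it is assumed that at least seven IMFs are produced.

```python
import numpy as np
from PyEMD import EMD  # provided by the PyEMD package (pip install EMD-signal)

# "power": 2048 consecutive wind power samples (kW); the file name is hypothetical.
power = np.loadtxt("turbine_window_2048.csv", delimiter=",")

imfs = EMD().emd(power)      # array of shape (number_of_IMFs, 2048)

# Keep the first seven IMFs and arrange them column-wise as a 2048 x 7 matrix,
# matching the CNN input described above.
imf_matrix = imfs[:7].T                                  # shape (2048, 7)
cnn_input = imf_matrix[np.newaxis, :, :, np.newaxis]     # shape (1, 2048, 7, 1)
```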

3.1.2. Construction of CNN

The convolutional neural network (CNN) is a robust deep learning model, instrumental in extracting hierarchical features from data through its layers of convolutions and pooling. In the realm of wind power data, the CNN adeptly learns characteristic patterns at various temporal resolutions, capturing both short-term and long-term trends. Wind power data, known for their complexity, often elude traditional feature extraction techniques, which the CNN circumvents by learning directly from raw data in an end-to-end manner. This method significantly simplifies the otherwise complex task of manual feature design.
The CNN architecture developed in this study comprises multiple convolutional layers, pooling layers, and a fully connected layer, as detailed in Table 5. It is designed to extract meaningful features from the intrinsic mode function (IMF) matrix; these features are then condensed into a vector of size 128 × 1, which is utilized by the subsequent three-dimensional gated neural (TGN) layers. Specifically, the initial layer of the model is a convolutional layer, which applies 32 filters of size 5 × 1 to the input matrix and employs the ReLU activation function to enhance the model’s capability to handle non-linearities. This is followed by a max pooling layer that uses a 2 × 1 window to down-sample the feature map, thereby reducing computational demand while increasing the model’s abstraction capacity.
Subsequent convolutional and pooling layers deepen the learning process. The number of filters is increased to 64 to capture more complex features, maintaining the same kernel size and pooling strategy. This design not only bolsters the model’s feature extraction capabilities but also efficiently reduces the dimensionality of the features, laying a foundational framework for feature vector generation.
After two cycles of convolution and pooling, the feature map is flattened into a one-dimensional vector and processed through a fully connected layer. This layer, containing 128 neurons and utilizing the ReLU activation function, outputs a vector of size 128 × 1. This vector, enriched with crucial information from the original time series data, represents a meticulously refined feature representation, providing robust support for the subsequent predictive tasks. The schematic of this layer structure is illustrated in Figure 4.
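The layer configuration of Table 5 translates into Keras (the TensorFlow framework named in Section 3.2) roughly as follows. This is a sketch of the feature-extraction branch only; any setting not listed in Table 5, such as padding, is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_feature_extractor():
    """CNN branch mapping a 2048 x 7 IMF matrix to a 128-dimensional feature vector."""
    return models.Sequential([
        layers.Input(shape=(2048, 7, 1)),                          # IMF matrix, one channel
        layers.Conv2D(32, kernel_size=(5, 1), activation="relu"),  # -> 2044 x 7 x 32
        layers.MaxPooling2D(pool_size=(2, 1)),                     # -> 1022 x 7 x 32
        layers.Conv2D(64, kernel_size=(5, 1), activation="relu"),  # -> 1018 x 7 x 64
        layers.MaxPooling2D(pool_size=(2, 1)),                     # ->  509 x 7 x 64
        layers.Flatten(),
        layers.Dense(128, activation="relu"),                      # 128-dimensional features
    ])

cnn = build_cnn_feature_extractor()
cnn.summary()
```

With "valid" padding, the flattened vector here has 509 × 7 × 64 = 228,032 elements, slightly more than the 227,136 listed in Table 5, so the original implementation may differ in a padding or boundary detail.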

3.1.3. Construction of the Three-Dimensional Gated Neural (TGN) Unit

While LSTM and GRU units have been effective in managing complex and dynamic time series data, they have not fully addressed the nuances in temporal intervals between data points and the rate of increase in wind power output [21]. Considering the capabilities of PID control theory in handling time series data, a novel neural network unit based on error feedback has been proposed, where the initial error is set to zero. The network utilizes proportional and differential controls, employing the tanh function to manage the flow proportion of error into the model’s output at time t, denoted as $h_t$, and introduces an interval $\Delta t$ to calculate the rate of error change, thereby adjusting the impact at the current moment.
To address long-term dependencies, an integral gate mechanism is incorporated, which combines previous integration results with proportional control coefficients. The integration control coefficient at the current moment is calculated using the ReLU function, which helps in preventing gradient vanishing or explosion. Moreover, the error bias term $b_t$ in the model output $h_t$ is replaced by a three-dimensional gate, facilitating the transfer and predictive functionality of error information. The structure of the TGN is illustrated in Figure 5.
The mathematical representation of the TGN unit includes several key equations:
$$\begin{aligned}
& e_0 = 0 && (8) \\
& e_t = y_t - \hat{y}_t && (9) \\
& p_t = \tanh\left( w_{p_t} \cdot e_t + b_{p_t} \right) && (10) \\
& i_t = \mathrm{ReLU}\left( w_{i_t} \cdot p_t + i_{t-1} + b_{i_t} \right) && (11) \\
& d_t = \tanh\left( w_{d_t} \cdot \frac{e_t - e_{t-1}}{\Delta t} + b_{d_t} \right) && (12) \\
& h_t = \sigma\left( w_h \cdot h_{t-1} + w_x \cdot x + b + p_t + i_t + d_t \right) && (13)
\end{aligned}$$
Here, $e_0$ represents the initial error, assumed to be zero, indicating no error at the start. $e_t$ calculates the error at time step t, with $y_t$ being the actual observed value and $\hat{y}_t$ the model’s predicted value. The prediction component $p_t$ utilizes the tanh activation function. The internal state $i_t$ at time t is computed using the ReLU function, with $i_{t-1}$ as the internal state from the previous time step. $d_t$ calculates the differential term with tanh as the activation function, and $\Delta t$ represents the time step length. Finally, $h_t$ represents the output of the model at time t, where $\sigma$ denotes the activation function, and $w_h$, $w_x$ are the weight matrices, with $x$ as the input.
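Read as a single forward step, Equations (8)–(13) can be sketched as follows. This is one literal, scalar interpretation of the equations and not the authors’ implementation: the weight shapes, the random initialisation, and the use of scalar states are all assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TGNCell:
    """Scalar sketch of the three-dimensional gated neural (TGN) unit, Eqs. (8)-(13)."""

    def __init__(self, delta_t=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Randomly initialised scalar weights and zero biases (illustrative only).
        self.w_p, self.b_p = rng.normal(), 0.0   # proportional gate
        self.w_i, self.b_i = rng.normal(), 0.0   # integral gate
        self.w_d, self.b_d = rng.normal(), 0.0   # differential gate
        self.w_h, self.w_x, self.b = rng.normal(), rng.normal(), 0.0
        self.delta_t = delta_t
        # State carried across time steps.
        self.e_prev = 0.0   # Eq. (8): e_0 = 0
        self.i_prev = 0.0   # previous integral state
        self.h_prev = 0.0   # previous output

    def step(self, x, y_true, y_pred):
        e_t = y_true - y_pred                                                    # Eq. (9)
        p_t = np.tanh(self.w_p * e_t + self.b_p)                                 # Eq. (10)
        i_t = relu(self.w_i * p_t + self.i_prev + self.b_i)                      # Eq. (11)
        d_t = np.tanh(self.w_d * (e_t - self.e_prev) / self.delta_t + self.b_d)  # Eq. (12)
        h_t = sigmoid(self.w_h * self.h_prev + self.w_x * x + self.b
                      + p_t + i_t + d_t)                                         # Eq. (13)
        self.e_prev, self.i_prev, self.h_prev = e_t, i_t, h_t
        return h_t

# One step on illustrative values: input feature x, observed and predicted power.
cell = TGNCell()
print(cell.step(x=0.3, y_true=0.8, y_pred=0.6))
```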

3.2. Implementation Steps

All models were implemented on a personal computer equipped with the Windows 11 operating system, an Intel(R) Core(TM) i5-12490F processor (3.0 GHz), and 16 GB of Fury 3200 MHz RAM. The processor was manufactured by Intel Corporation, located in Santa Clara, CA, USA. The RAM was produced by Kingston Technology Corporation, based in Fountain Valley, CA, USA. The deep learning models were developed using the Python 3.8 programming language, with the Anaconda distribution and the PyCharm IDE as development tools. The TensorFlow deep learning framework was utilized for predictive studies. To mitigate the impact of random initial weights on prediction results, each model was run three times. The specific implementation steps are illustrated in Figure 6.
Step 1: Data Pre-processing.
Data from three wind farms (A, B, and C) were analyzed, selecting active power data from three wind turbines at each site, with each dataset containing 150,000 data points. Missing data within the datasets were filled using the K-nearest neighbors algorithm [22], ensuring the integrity of the data. Additionally, Z-score normalization [23] was applied to eliminate the influence of varying data magnitudes. The first 149,872 data points of each dataset were designated as the training set, with a window of 2048 data points used to divide the data into training and testing sets.
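A hedged sketch of this pre-processing step, using scikit-learn’s KNNImputer for the missing-value step; the file name, column name, number of neighbours, and window stride are illustrative assumptions rather than details given in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Hypothetical SCADA export with an "active_power" column (kW).
df = pd.read_csv("wind_farm_A_turbine_1.csv")
power = df["active_power"].to_numpy(dtype=float).reshape(-1, 1)

# K-nearest neighbors imputation; the time index is used as an auxiliary feature so that
# "nearest neighbours" are nearby time steps (one simple reading of the paper's KNN step).
t = np.arange(len(power), dtype=float).reshape(-1, 1)
power = KNNImputer(n_neighbors=5).fit_transform(np.hstack([t, power]))[:, 1:]

# Z-score normalisation to remove the influence of differing magnitudes.
power = (power - power.mean()) / power.std()

# First 149,872 points form the training set; windows of 2048 points are used thereafter
# (the window stride is not stated in the paper and is assumed to be 2048 here).
window = 2048
train_raw, test_raw = power[:149_872, 0], power[149_872:, 0]
train_windows = np.stack([train_raw[i:i + window]
                          for i in range(0, len(train_raw) - window + 1, window)])
```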
Step 2: EMD Decomposition.
The pre-processed time-series data were decomposed using the empirical mode decomposition (EMD) method. The Python library PyEMD was utilized to perform the EMD decomposition, and the first seven intrinsic mode functions (IMFs) were selected as the basis for further analysis.
Step 3: Construction of IMFs Matrix and CNN–TGNN Model Training.
Based on the results of the EMD decomposition, the first seven selected IMFs were compiled into a matrix of size 2048 × 7. This matrix served as the input for the subsequent CNN. Each IMF represents a specific frequency component of the original data, effectively capturing the complex features of the data while retaining their temporal information. Utilizing the constructed IMF matrix, the CNN extracted key features from the time series data and output a vector of size 128 × 1. This vector was then input into the three-dimensional gated neural network (TGNN) layer for final predictive analysis. During the training phase, the first 149,872 data points served as the training set, with data supplied to the model incrementally using a sliding window approach. Training used the Adam optimizer with an initial learning rate of 0.001, decreased by 10% every 10 epochs, and a batch size of 32 to balance computational efficiency and memory usage. To prevent overfitting, L2 regularization with a coefficient of 0.001 was applied to limit the weight magnitudes.
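The stated training settings (Adam, initial learning rate 0.001 reduced by 10% every 10 epochs, batch size 32, L2 coefficient 0.001) map onto Keras roughly as below; the stand-in model, data arrays, and epoch count are placeholders so that the snippet runs on its own, not the authors’ actual network.

```python
import numpy as np
import tensorflow as tf

# Stand-in model and data so the snippet is self-contained; in the real pipeline,
# "model" would be the assembled CNN-TGNN and the arrays the windowed IMF inputs.
l2 = tf.keras.regularizers.l2(0.001)  # L2 coefficient from the paper
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048, 7, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, kernel_regularizer=l2),
])
x_train = np.random.rand(64, 2048, 7, 1).astype("float32")
y_train = np.random.rand(64, 1).astype("float32")

# Initial learning rate 0.001, reduced by 10% every 10 epochs.
def lr_schedule(epoch, lr):
    return 0.001 * (0.9 ** (epoch // 10))

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
model.fit(x_train, y_train,
          batch_size=32,
          epochs=30,  # the epoch count is not stated in the paper
          callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```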

3.3. Comparison Models

In the model comparison segment of this study, hybrid models were employed, specifically including EMD–CNN–LSTM and EMD–CNN–GRU models for analysis. The training methodologies for these models were similar to those used for the EMD–CNN–TGNN model. These hybrid models were compared with traditional models employing autoregressive integrated moving average (ARIMA) and support vector regression (SVR) to highlight their performance differences and applicability in handling complex datasets with temporal and spatial dependencies [15,16,24,25].

4. Results and Analysis

This experiment utilized datasets from two wind farms in Xingtai, Hebei (Wind Farms A and B), and one in Dezhou, Shandong (Wind Farm C), selecting three wind turbines from each wind farm for analysis. To demonstrate the predictive performance of the EMD–CNN–TGNN model, it was compared with two structurally similar models, EMD–CNN–LSTM and EMD–CNN–GRU, as well as two traditional models, ARIMA and SVR.
Significant findings from the experimental analysis, based on Figure 7, Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 and Table 6 and Table 7, include the following:
  • In Wind Farm A, the TGNN model (EMD–CNN–TGNN) demonstrated higher predictive accuracy for turbines A1, A2, and A3, especially in capturing rapid fluctuations in the data more closely. Compared to EMD–CNN–LSTM and EMD–CNN–GRU, which exhibited varying degrees of deviation, ARIMA and SVR performed the poorest, underscoring the TGNN model’s superior ability to capture complex patterns and trends in the data.
  • Against EMD–CNN–LSTM, the TGNN model showed improvements of 7.88% in MAE, 26.09% in MSE, and 14.03% in RMSE, and an increase of 3.30% in R². Compared to EMD–CNN–GRU and particularly SVR, the differences were even more pronounced, with an improvement of up to 56.67% in R² relative to SVR, highlighting the TGNN model’s significant advantages in prediction accuracy and data fitting.
  • In the data for turbines A2 and A3, the TGNN model continued to show higher performance enhancements, notably against ARIMA in turbine A2 data, where the improvement in R² reached an impressive 66.67%, further proving the TGNN model’s excellence in complex data settings.
  • In Wind Farm B, the performance of the five models varied across turbines B1, B2, and B3. Notably, in turbine B1 data, the EMD–CNN–LSTM performed well and tracked the actual data closely. However, the EMD–CNN–TGNN maintained consistency and stability overall, especially in turbine B2 data, where its predictive path aligned closely with the actual wind power outputs. The TGNN model also exhibited superior performance, with a 59.68% improvement in R² over SVR in turbine B2 data and a 3.26% increase over the well-performing EMD–CNN–LSTM in turbine B1 data, indicating the TGNN model’s enhanced accuracy and reliability across different turbine data scenarios.
  • In Wind Farm C, the EMD–CNN–TGNN maintained good performance in capturing rapid changes, while the EMD–CNN–GRU showed weaker performance during sudden shifts, and ARIMA and SVR lagged significantly behind the other three models. The performance of EMD–CNN–TGNN in Wind Farm C emphasizes its robustness against disturbances during abrupt events. Compared to the other models, the advantage was largest in turbine C2 data against SVR, where the increase in R² reached 61.67%, demonstrating its strong capability in capturing data trends and fluctuations. In turbine C1 data, the TGNN model increased R² by 45.00% compared to ARIMA, further highlighting its accuracy across different wind power data scenarios.
Table 8, Table 9, Table 10, Table 11, Table 12 and Table 13 are employed to evaluate the comprehensive predictive performance of the EMD–CNN–TGNN model across all datasets and compare it with other models including EMD–CNN–LSTM, EMD–CNN–GRU, ARIMA, and SVR. The EMD–CNN–TGNN model showcases leading performance across all major metrics, achieving the lowest mean absolute error (MAE = 77.93), mean squared error (MSE = 19,971.08), root mean squared error (RMSE = 129.70), and the highest coefficient of determination (R² = 0.94). These results demonstrate the superiority of EMD–CNN–TGNN in both prediction accuracy and data fit over the competing models.
The EMD–CNN–LSTM and EMD–CNN–GRU models, while superior to ARIMA and SVR, do not match the performance of EMD–CNN–TGNN. Specifically, the R² values for EMD–CNN–LSTM and EMD–CNN–GRU are 0.91 and 0.88, respectively, indicating a lesser ability to explain variability in the data. ARIMA and SVR significantly under-perform across all metrics, particularly in MSE and RMSE, reflecting their limitations in handling wind power prediction tasks. Their respective R² values of 0.64 and 0.63 highlight substantial disparities in data fitting.
Relative to EMD–CNN–LSTM, EMD–CNN–TGNN exhibits performance improvements of 23.70% in MAE, 31.28% in MSE, and 17.84% in RMSE, with a 2.82% increase in R². These enhancements emphasize the significant accuracy gains provided by EMD–CNN–TGNN. Against EMD–CNN–GRU, the improvements are even more pronounced, especially in MSE and RMSE, which improve by 50.86% and 31.36%, respectively, with a 6.99% increase in R².
Compared with ARIMA and SVR, EMD–CNN–TGNN not only shows substantial improvements in MAE, MSE, and RMSE (exceeding 69.90% and 71.55%, respectively), but also demonstrates remarkable increases in R² of 47.43% and 49.18%, respectively. These comparisons highlight the exceptional ability of EMD–CNN–TGNN to predict wind power time series data with high data fitting accuracy.
Based on the comprehensive performance comparison facilitated by Table 14 and Table 15, and a detailed analysis of performance enhancements, it is observed that the EMD–CNN–TGNN model achieved significant improvements in key metrics such as MAE, MSE, RMSE, and R². Specifically, compared to the EMD–CNN–GRU and EMD–CNN–LSTM models, EMD–CNN–TGNN exhibited an average performance enhancement of 23.68% in MAE, 28.85% in MSE, 17.83% in RMSE, and 2.81% in R². Moreover, when compared against the EMD–CNN–LSTM model, there were further increases of 28.49%, 65.66%, 34.84%, and 6.89%, respectively. The performance uplift was even more pronounced when contrasted with traditional models such as ARIMA and SVR, which showed improvements of 69.90%, 82.06%, 59.30%, 47.43%, and 71.55%, 82.13%, 59.64%, 49.18%, respectively.
These performance metrics not only validate the significant advantages of the EMD–CNN–TGNN model over existing models but also provide vital references and foundations for future research directions and applications. The following sections outline the implications of these findings and suggest areas for further investigation.

5. Conclusions and Discussions

In summary, a novel hybrid model named EMD–CNN–TGNN has been proposed to enhance the accuracy of short-term wind power forecasting. Within this hybrid model, the empirical mode decomposition (EMD) algorithm effectively reduces the volatility of the wind power series, the convolutional neural network (CNN) efficiently extracts features from the wind power data, and the three-dimensional gated neural network (TGNN) utilizes its tri-gate characteristics to make effective predictions based on these features. The EMD–CNN–TGNN model demonstrates superior average performance, with an MAE of 54.44, an MSE of 8982.16, an RMSE of 89.43, and an R² of 0.92, surpassing the other two hybrid models and the two traditional models.
Considering the enhanced accuracy of the proposed EMD–CNN–TGNN model, it implies significant benefits for wind farm operational management. More accurate power output predictions can optimize electricity dispatch and energy management strategies, thereby reducing economic losses due to forecasting errors. Additionally, this model could be applied to the energy storage systems of wind farms, providing forecast data to the control systems of PEM electrolyzers to adjust their efficiency curves and improve the utilization rate of wind energy.
Despite the high performance demonstrated by the TGNN in experiments, its structural complexity and the need for parameter tuning may pose challenges in practical deployments. Balancing the improvement in prediction accuracy with the simplification of model complexity is a critical consideration for future research. Further studies could explore more efficient training techniques, such as knowledge distillation or model pruning, and the use of lightweight network architectures to reduce computational resource consumption. These techniques will be explored in subsequent research to provide guidance for the practical deployment of the model.

Author Contributions

Conceptualization, Z.G.; Methodology, Z.G.; Software, Z.G.; Validation, W.Q.; Formal analysis, F.W.; Investigation, F.W., H.L., X.F. and M.Z.; Resources, W.Q. and Q.H.; Writing—original draft, Z.G.; Writing—review & editing, Q.H.; Visualization, F.W.; Supervision, Q.H.; Project administration, Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available as they are currently being used in ongoing research, and releasing them could compromise future publications.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, H.-z.; Li, G.-q.; Wang, G.-b.; Peng, J.-c.; Jiang, H.; Liu, Y.-t. Deep learning based ensemble approach for probabilistic wind power forecasting. Appl. Energy 2017, 188, 56–70.
  2. Vargas, S.A.; Esteves, G.R.T.; Maçaira, P.M.; Bastos, B.Q.; Cyrino Oliveira, F.L.; Souza, R.C. Wind power generation: A review and a research agenda. J. Clean. Prod. 2019, 218, 850–870.
  3. Hassan, Q.; Algburi, S.; Sameen, A.Z.; Salman, H.M.; Jaszczur, M. A review of hybrid renewable energy systems: Solar and wind-powered solutions: Challenges, opportunities, and policy implications. Results Eng. 2023, 20, 101621.
  4. Petersen, C.; Reguant, M.; Segura, L. Measuring the impact of wind power and intermittency. Energy Econ. 2024, 129, 107200.
  5. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
  6. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
  7. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
  8. Prema, V.; Bhaskar, M.S.; Almakhles, D.; Gowtham, N.; Rao, K.U. Critical review of data, models and performance metrics for wind and solar power forecast. IEEE Access 2021, 10, 667–688.
  9. Shahid, F.; Zameer, A.; Muneeb, M. A novel genetic LSTM model for wind power forecast. Energy 2021, 223, 120069.
  10. Xiang, L.; Liu, J.; Yang, X.; Hu, A.; Su, H. Ultra-short term wind power prediction applying a novel model named SATCN-LSTM. Energy Convers. Manag. 2022, 252, 115036.
  11. Cui, Y.; Chen, Z.; He, Y.; Xiong, X.; Li, F. An algorithm for forecasting day-ahead wind power via novel long short-term memory and wind power ramp events. Energy 2023, 263, 125888.
  12. Kisvari, A.; Lin, Z.; Liu, X. Wind power forecasting—A data-driven method along with gated recurrent neural network. Renew. Energy 2021, 163, 1895–1909.
  13. Liu, X.; Yang, L.; Zhang, Z. Short-Term Multi-Step Ahead Wind Power Predictions Based On A Novel Deep Convolutional Recurrent Network Method. IEEE Trans. Sustain. Energy 2021, 12, 1820–1833.
  14. Xiao, Y.; Zou, C.; Chi, H.; Fang, R. Boosted GRU model for short-term forecasting of wind power with feature-weighted principal component analysis. Energy 2022, 267, 126503.
  15. Masoumi, M. Machine Learning Solutions for Offshore Wind Farms: A Review of Applications and Impacts. J. Mar. Sci. Eng. 2023, 11, 1855.
  16. Kontopoulou, V.I.; Panagopoulos, A.; Kakkos, I.; Matsopoulos, G. A Review of ARIMA vs. Machine Learning Approaches for Time Series Forecasting in Data Driven Networks. Future Internet 2023, 15, 255.
  17. de Moraes Vieira, J.L.; Farias, F.C.; Ochoa, A.A.V.; de Menezes, F.D.; da Costa, A.C.A.; Ângelo Peixoto da Costa, J.; de Novaes Pires Leite, G.; de Castro Vilela, O.; de Souza, M.G.G.; Michima, P. Remaining Useful Life Estimation Framework for the Main Bearing of Wind Turbines Operating in Real Time. Energies 2024, 17, 1430.
  18. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623.
  19. Bokde, N.; Feijóo, A.; Villanueva, D.; Kulat, K. A review on hybrid empirical mode decomposition models for wind speed and wind power prediction. Energies 2019, 12, 254.
  20. Liang, H.; Bressler, S.L.; Desimone, R.; Fries, P. Empirical mode decomposition: A method for analyzing neural data. Neurocomputing 2005, 65, 801–807.
  21. Khaldi, R.; El Afia, A.; Chiheb, R.; Tabik, S. What is the best RNN-cell structure to forecast each time series behavior? Expert Syst. Appl. 2023, 215, 119140.
  22. Triguero, I.; García-Gil, D.; Maillo, J.; Luengo, J.; García, S.; Herrera, F. Transforming big data into smart data: An insight on the use of the k-nearest neighbors algorithm to obtain quality data. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2019, 9, e1289.
  23. Urolagin, S.; Sharma, N.; Datta, T.K. A combined architecture of multivariate LSTM with Mahalanobis and Z-Score transformations for oil price forecasting. Energy 2021, 231, 120963.
  24. Khosravi, A.; Machado, L.; Nunes, R. Time-series prediction of wind speed using machine learning algorithms: A case study Osorio wind farm, Brazil. Appl. Energy 2018, 224, 550–566.
  25. Zhao, Z.; Yun, S.; Jia, L.; Guo, J.; Meng, Y.; He, N.; Li, X.; Shi, J.; Yang, L. Hybrid VMD-CNN-GRU-based model for short-term forecasting of wind power considering spatio-temporal features. Eng. Appl. Artif. Intell. 2023, 121, 105982.
Figure 1. Geographic locations of wind farms.
Figure 2. EMD–CNN–TGNN structure diagram.
Figure 3. EMD establishment matrix diagram.
Figure 4. TGNN layer structure diagram.
Figure 5. TGNE diagram (three-dimensional gated neural elements diagram).
Figure 6. Flowchart of implementation steps.
Figure 7. Comparison of model predictions with actual values for three turbines at Wind Farm A.
Figure 8. Comparison of model predictions with actual values for three turbines at Wind Farm B.
Figure 9. Comparison of model predictions with actual values for three turbines at Wind Farm C.
Figure 10. Violin plot of prediction errors for various models at Wind Farm A.
Figure 11. Violin plot of prediction errors for various models at Wind Farm B.
Figure 12. Violin plot of prediction errors for various models at Wind Farm C.
Table 1. Introduction to wind farm data.

Wind Farm | Terrain | Elevation (m) | Time Span
Wind Farm A | Plain | 26–30.1 | 6 March 2021–5 March 2024
Wind Farm B | Plain | 18.5 | 31 March 2021–29 February 2024
Wind Farm C | Plain | 28–36 | 1 January 2022–31 December 2023
Table 2. Wind farm A data.

Turbine | Max (kW) | Min (kW) | Median (kW) | Mean (kW) | Variance (kW²) | Standard Deviation (kW) | Skewness | Kurtosis
A1 | 2336.60 | 0.00 | 382.28 | 702.70 | 584,459.00 | 764.50 | 0.97 | 0.44
A2 | 2335.97 | 0.00 | 405.64 | 734.90 | 622,922.48 | 789.25 | 0.90 | 0.61
A3 | 2336.70 | 0.00 | 411.91 | 740.92 | 626,814.16 | 791.72 | 0.89 | 0.63
The first uppercase letter represents the wind farm, and the second Arabic numeral indicates the wind turbine number. For example, “A1” refers to turbine number 1 of wind farm A.
Table 3. Wind farm B data.

Turbine | Max (kW) | Min (kW) | Median (kW) | Mean (kW) | Variance (kW²) | Standard Deviation (kW) | Skewness | Kurtosis
B1 | 2200.10 | 0.00 | 321.90 | 634.51 | 513,606.25 | 716.66 | 0.99 | 0.44
B2 | 2200.10 | 0.00 | 289.00 | 609.82 | 507,952.22 | 712.71 | 1.06 | 0.30
B3 | 2200.10 | 0.00 | 277.60 | 594.64 | 495,569.65 | 703.97 | 1.10 | 0.20
The first uppercase letter represents the wind farm, and the second Arabic numeral indicates the wind turbine number. For example, “B1” refers to turbine number 1 of wind farm B.
Table 4. Wind farm C data.

Turbine | Max (kW) | Min (kW) | Median (kW) | Mean (kW) | Variance (kW²) | Standard Deviation (kW) | Skewness | Kurtosis
C1 | 2248.59 | 0.00 | 419.49 | 752.98 | 676,617.35 | 822.57 | 0.76 | 0.96
C2 | 2249.91 | 0.00 | 387.54 | 728.84 | 677,555.42 | 823.14 | 0.83 | 0.84
C3 | 2244.22 | 0.00 | 341.04 | 687.23 | 640,244.73 | 800.15 | 0.91 | 0.66
The first uppercase letter represents the wind farm, and the second Arabic numeral indicates the wind turbine number. For example, “C1” refers to turbine number 1 of wind farm C.
Table 5. CNN model parameters.

Layer Type | Output Size | Kernel Size | Number of Filters | Activation Function | Remark
Input layer | 2048 × 7 × 1 | - | - | - | Input shape is 2048 × 7, 1 channel
Convolutional layer 1 | 2044 × 7 × 32 | 5 × 1 | 32 | ReLU | Kernel stride is 1
Pooling layer 1 | 1022 × 7 × 32 | 2 × 1 | - | - | Max pooling
Convolutional layer 2 | 1018 × 7 × 64 | 5 × 1 | 64 | ReLU | Kernel stride is 1
Pooling layer 2 | 509 × 7 × 64 | 2 × 1 | - | - | Max pooling
Flatten | 227136 × 1 | - | - | - | -
Fully connected layer | 128 × 1 | - | - | ReLU | -
Table 6. Performance comparison of various models across wind farms.

Scenario | Model | MAE (kW) | MSE (kW²) | RMSE (kW) | R²
A1 | EMD–CNN–TGNN | 93.46 | 35,482.13 | 188.37 | 0.94
A1 | EMD–CNN–LSTM | 101.45 | 48,006.73 | 219.10 | 0.91
A1 | EMD–CNN–GRU | 108.21 | 66,943.16 | 258.73 | 0.88
A1 | ARIMA | 304.21 | 137,700.8 | 371.08 | 0.75
A1 | SVR | 384.97 | 220,236 | 469.29 | 0.60
A2 | EMD–CNN–TGNN | 111.52 | 21,929.61 | 148.09 | 0.95
A2 | EMD–CNN–LSTM | 139.13 | 30,815.16 | 175.54 | 0.93
A2 | EMD–CNN–GRU | 133.44 | 62,025.05 | 249.05 | 0.86
A2 | ARIMA | 375.60 | 191,603 | 437.72 | 0.57
A2 | SVR | 347.53 | 169,374.6 | 411.55 | 0.62
A3 | EMD–CNN–TGNN | 90.64 | 45,111.26 | 212.39 | 0.91
A3 | EMD–CNN–LSTM | 162.25 | 57,743.46 | 240.30 | 0.89
A3 | EMD–CNN–GRU | 156.17 | 72,656.62 | 269.55 | 0.86
A3 | ARIMA | 338.55 | 191,558.9 | 437.67 | 0.63
A3 | SVR | 365.31 | 190,006.8 | 435.90 | 0.63
B1 | EMD–CNN–TGNN | 65.31 | 8332.26 | 91.28 | 0.95
B1 | EMD–CNN–LSTM | 94.70 | 13,425.87 | 115.87 | 0.92
B1 | EMD–CNN–GRU | 105.93 | 23,210.08 | 152.35 | 0.86
B1 | ARIMA | 201.43 | 62,710.59 | 250.42 | 0.61
B1 | SVR | 205.71 | 55,748.36 | 236.11 | 0.65
B2 | EMD–CNN–TGNN | 44.36 | 4159.19 | 64.49 | 0.99
B2 | EMD–CNN–LSTM | 96.85 | 14,164.19 | 119.01 | 0.96
B2 | EMD–CNN–GRU | 80.30 | 22,863.93 | 151.21 | 0.93
B2 | ARIMA | 213.81 | 106,353 | 326.12 | 0.67
B2 | SVR | 308.83 | 122,435.1 | 349.91 | 0.62
B3 | EMD–CNN–TGNN | 132.74 | 37,778.82 | 194.37 | 0.95
B3 | EMD–CNN–LSTM | 215.70 | 62,981.02 | 250.96 | 0.91
B3 | EMD–CNN–GRU | 203.78 | 81,130.46 | 284.83 | 0.88
B3 | ARIMA | 395.75 | 264,475.7 | 514.27 | 0.62
B3 | SVR | 442.85 | 251,979.5 | 501.98 | 0.64
C1 | EMD–CNN–TGNN | 25.88 | 2696.61 | 51.93 | 0.87
C1 | EMD–CNN–LSTM | 28.04 | 3167.82 | 56.28 | 0.84
C1 | EMD–CNN–GRU | 30.54 | 3597.08 | 59.98 | 0.82
C1 | ARIMA | 70.96 | 7938.54 | 89.10 | 0.60
C1 | SVR | 65.60 | 6375.56 | 79.85 | 0.68
C2 | EMD–CNN–TGNN | 75.57 | 16,574.71 | 128.74 | 0.97
C2 | EMD–CNN–LSTM | 76.92 | 22,429.54 | 149.76 | 0.96
C2 | EMD–CNN–GRU | 103.14 | 47,321.60 | 217.54 | 0.91
C2 | ARIMA | 373.86 | 197,084.5 | 443.94 | 0.63
C2 | SVR | 364.17 | 212,188.1 | 460.64 | 0.60
C3 | EMD–CNN–TGNN | 61.86 | 7675.15 | 87.61 | 0.91
C3 | EMD–CNN–LSTM | 67.43 | 9047.17 | 95.12 | 0.89
C3 | EMD–CNN–GRU | 69.26 | 9455.52 | 97.24 | 0.89
C3 | ARIMA | 153.35 | 28,552.06 | 168.97 | 0.66
C3 | SVR | 160.29 | 31,230.06 | 176.72 | 0.63
Table 7. Comparison of model prediction performance.

Scenario | Model Comparison | MAE (%) | MSE (%) | RMSE (%) | R² (%)
A1 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 7.88 | 26.09 | 14.03 | 3.30
A1 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 13.63 | 47.00 | 27.19 | 6.82
A1 | EMD–CNN–TGNN vs. ARIMA | 69.28 | 74.23 | 49.24 | 25.33
A1 | EMD–CNN–TGNN vs. SVR | 75.72 | 83.89 | 59.86 | 56.67
A2 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 19.84 | 28.83 | 15.64 | 2.15
A2 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 16.43 | 64.64 | 40.54 | 10.47
A2 | EMD–CNN–TGNN vs. ARIMA | 70.31 | 88.55 | 66.17 | 66.67
A2 | EMD–CNN–TGNN vs. SVR | 67.91 | 87.05 | 64.02 | 53.23
A3 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 44.14 | 21.88 | 11.61 | 2.25
A3 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 41.96 | 37.91 | 21.21 | 5.81
A3 | EMD–CNN–TGNN vs. ARIMA | 73.23 | 76.45 | 51.47 | 41.27
A3 | EMD–CNN–TGNN vs. SVR | 75.19 | 76.26 | 51.28 | 44.44
B1 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 31.03 | 37.94 | 21.22 | 3.26
B1 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 38.35 | 64.10 | 40.09 | 10.47
B1 | EMD–CNN–TGNN vs. ARIMA | 67.58 | 86.71 | 63.55 | 55.74
B1 | EMD–CNN–TGNN vs. SVR | 68.25 | 85.05 | 61.34 | 46.15
B2 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 54.20 | 70.64 | 45.81 | 3.13
B2 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 44.76 | 81.81 | 57.35 | 6.45
B2 | EMD–CNN–TGNN vs. ARIMA | 79.25 | 96.09 | 80.23 | 47.76
B2 | EMD–CNN–TGNN vs. SVR | 85.64 | 96.60 | 81.57 | 59.68
B3 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 38.46 | 40.02 | 22.55 | 4.40
B3 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 34.86 | 53.43 | 31.76 | 7.95
B3 | EMD–CNN–TGNN vs. ARIMA | 66.46 | 85.72 | 62.20 | 53.23
B3 | EMD–CNN–TGNN vs. SVR | 70.03 | 85.01 | 61.28 | 48.44
C1 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 7.70 | 14.87 | 7.73 | 3.57
C1 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 15.26 | 25.03 | 13.42 | 6.10
C1 | EMD–CNN–TGNN vs. ARIMA | 63.53 | 66.03 | 41.72 | 45.00
C1 | EMD–CNN–TGNN vs. SVR | 60.55 | 57.70 | 34.97 | 27.94
C2 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 1.76 | 26.10 | 14.04 | 1.04
C2 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 26.73 | 64.97 | 40.82 | 6.59
C2 | EMD–CNN–TGNN vs. ARIMA | 79.79 | 91.59 | 71.00 | 53.97
C2 | EMD–CNN–TGNN vs. SVR | 79.25 | 92.19 | 72.05 | 61.67
C3 | EMD–CNN–TGNN vs. EMD–CNN–LSTM | 8.26 | 15.17 | 7.90 | 2.25
C3 | EMD–CNN–TGNN vs. EMD–CNN–GRU | 10.68 | 18.83 | 9.90 | 2.25
C3 | EMD–CNN–TGNN vs. ARIMA | 59.66 | 73.12 | 48.15 | 37.88
C3 | EMD–CNN–TGNN vs. SVR | 61.41 | 75.42 | 50.42 | 44.44
Table 8. Average model prediction performance at Wind Farm A.

Model | MAE (kW) | MSE (kW²) | RMSE (kW) | R²
EMD–CNN–TGNN | 98.54 | 34,174.33 | 182.95 | 0.93
EMD–CNN–LSTM | 134.28 | 45,521.78 | 211.65 | 0.91
EMD–CNN–GRU | 132.61 | 67,208.28 | 259.11 | 0.87
ARIMA | 339.45 | 173,620.88 | 415.49 | 0.65
SVR | 365.94 | 193,205.82 | 438.91 | 0.62
Table 9. Average model prediction performance at Wind Farm B.

Model | MAE (kW) | MSE (kW²) | RMSE (kW) | R²
EMD–CNN–TGNN | 80.80 | 16,756.76 | 116.71 | 0.96
EMD–CNN–LSTM | 135.75 | 30,190.36 | 161.95 | 0.93
EMD–CNN–GRU | 130.00 | 42,401.49 | 196.13 | 0.89
ARIMA | 270.33 | 144,513.07 | 363.60 | 0.63
SVR | 319.13 | 143,387.66 | 362.67 | 0.64
Table 10. Average model prediction performance at Wind Farm C.

Model | MAE (kW) | MSE (kW²) | RMSE (kW) | R²
EMD–CNN–TGNN | 54.44 | 8982.16 | 89.43 | 0.92
EMD–CNN–LSTM | 57.46 | 11,548.18 | 100.39 | 0.90
EMD–CNN–GRU | 67.65 | 20,124.73 | 124.92 | 0.87
ARIMA | 199.39 | 77,858.37 | 234.00 | 0.63
SVR | 196.69 | 83,264.56 | 239.07 | 0.64
Table 11. EMD–CNN–TGNN average performance improvement at Wind Farm A.

Comparison | MAE (%) | MSE (%) | RMSE (%) | R² (%)
EMD–CNN–TGNN vs. EMD–CNN–LSTM | 23.95 | 25.60 | 13.76 | 2.56
EMD–CNN–TGNN vs. EMD–CNN–GRU | 24.01 | 49.85 | 29.65 | 7.70
EMD–CNN–TGNN vs. ARIMA | 70.94 | 79.75 | 55.63 | 44.42
EMD–CNN–TGNN vs. SVR | 72.94 | 82.40 | 58.38 | 51.45
Table 12. EMD–CNN–TGNN average performance improvement at Wind Farm B.

Comparison | MAE (%) | MSE (%) | RMSE (%) | R² (%)
EMD–CNN–TGNN vs. EMD–CNN–LSTM | 41.23 | 49.53 | 29.86 | 3.59
EMD–CNN–TGNN vs. EMD–CNN–GRU | 39.32 | 66.45 | 43.07 | 8.29
EMD–CNN–TGNN vs. ARIMA | 71.10 | 89.51 | 68.66 | 52.24
EMD–CNN–TGNN vs. SVR | 74.64 | 88.89 | 68.06 | 51.42
Table 13. EMD–CNN–TGNN average performance improvement at Wind Farm C.

Comparison | MAE (%) | MSE (%) | RMSE (%) | R² (%)
EMD–CNN–TGNN vs. EMD–CNN–LSTM | 5.91 | 18.71 | 9.89 | 2.29
EMD–CNN–TGNN vs. EMD–CNN–GRU | 17.56 | 36.28 | 21.38 | 4.98
EMD–CNN–TGNN vs. ARIMA | 67.66 | 76.91 | 53.62 | 45.62
EMD–CNN–TGNN vs. SVR | 67.07 | 75.11 | 52.48 | 44.68
Table 14. Overall average performance of each model.

Model | MAE (kW) | MSE (kW²) | RMSE (kW) | R²
EMD–CNN–TGNN | 77.93 | 19,971.08 | 129.70 | 0.94
EMD–CNN–LSTM | 109.16 | 29,086.77 | 157.99 | 0.91
EMD–CNN–GRU | 110.09 | 43,244.83 | 193.39 | 0.88
ARIMA | 269.72 | 131,997.44 | 337.70 | 0.64
SVR | 293.92 | 139,952.68 | 346.88 | 0.63
Table 15. Average performance improvement of EMD–CNN–TGNN across all datasets.

Comparison | MAE (%) | MSE (%) | RMSE (%) | R² (%)
EMD–CNN–TGNN vs. EMD–CNN–LSTM | 23.70 | 31.28 | 17.84 | 2.82
EMD–CNN–TGNN vs. EMD–CNN–GRU | 26.96 | 50.86 | 31.36 | 6.99
EMD–CNN–TGNN vs. ARIMA | 69.90 | 82.06 | 59.30 | 47.43
EMD–CNN–TGNN vs. SVR | 71.55 | 82.13 | 59.64 | 49.18

