Wind Speed Forecasting Using Attention-Based Causal Convolutional Network and Wind Energy Conversion
Abstract
1. Introduction
1.1. Existing Methods to Forecast Wind Speed
1.2. Our Contribution
- The SSA decomposition method is used to decompose the wind speed value into several different sub-signals, and the forecasting accuracy of the prediction model is further improved by using the characteristics of each sub-signal.
- A new model for short-term wind speed prediction is proposed, which uses CCN to extract features and employs the attention mechanism to make predictions from the extracted features.
- In order to verify the performance of SSA in wind speed signal extraction, we adopt different decomposition techniques and feed the decomposed sub-signals into our proposed model to evaluate the performance of each decomposition technique.
- To verify the effectiveness of the proposed model, we use data from four different time periods together with ten comparison models, and evaluate the performance of the related models at different prediction horizons.
2. Methodology
2.1. Singular Spectrum Analysis
1. Embedding
2. Singular value decomposition
3. Grouping
4. Diagonal averaging
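The four SSA steps listed above can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' implementation: the window length and the grouping of eigentriples are free choices (the paper reports 14 sub-signals).

```python
import numpy as np

def ssa_decompose(series, window, groups):
    """Singular spectrum analysis: embedding, SVD, grouping, diagonal averaging.

    `groups` is a list of lists of eigentriple indices, one list per sub-signal.
    """
    n = len(series)
    k = n - window + 1
    # 1. Embedding: build the trajectory (Hankel) matrix, one lagged window per column.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    # 2. Singular value decomposition of the trajectory matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    subsignals = []
    for idx in groups:
        # 3. Grouping: sum the selected rank-1 elementary matrices.
        Xg = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in idx)
        # 4. Diagonal averaging: average each anti-diagonal back into a series value.
        rec = np.array([Xg[::-1].diagonal(d).mean() for d in range(-window + 1, k)])
        subsignals.append(rec)
    return subsignals
```

A quick correctness check: because diagonal averaging is linear, sub-signals whose groups cover all eigentriples sum back to the original series exactly.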
2.2. Causal Convolution Network
2.3. Attention Mechanism
3. The Proposed SSA-CCN-ATT Model
- Data preprocessing. Considering the nonlinearity and volatility of wind speed data, we use SSA to process the original wind speed. SSA has a strict mathematical theory and few parameters, and can efficiently extract the trend, periodic, and quasi-periodic information of the signals.
- Sample construction. The wind speed data are divided into two datasets: a training set and a testing set. The training set is used to train the CCN-ATT network, whereas the testing set is used to evaluate the proposed forecasting model.
- CCN-ATT network forecasting. The de-noised wind speed time series is fed into the CCN-ATT network, which consists of two CCN layers, one attention layer, and one fully connected layer. The CCN is a highly noise-resistant model that extracts nonlinear spatial features from the wind speed, and the attention mechanism further improves the extraction efficiency. Finally, the fully connected layer produces the forecasting result.
- Evaluation. To study the efficiency of the proposed model, a comprehensive evaluation module that includes four evaluation metrics, the DM test, and an improvement ratio analysis is designed to analyze the forecasting results.
- Wind energy conversion and uncertainty analysis. Based on the wind energy conversion curve and the forecast wind speed, the electricity generation of the wind turbines is calculated, and a forecasting interval method is used to analyze the uncertainty of the wind energy conversion process.
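As a concrete illustration of the two key building blocks named above, the following NumPy sketch shows a causal convolution (left-padded so the output at time t never sees future samples) and a softmax attention pooling step. The kernel and scoring function are placeholders, not the trained parameters of the SSA-CCN-ATT model; activations and the fully connected layer are omitted.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """1-D causal convolution: the output at time t depends only on x[0..t]."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # left-padding preserves causality
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(x))])

def attention_pool(h):
    """Weight each time step by a softmax over (placeholder) scores and sum."""
    scores = h                          # stand-in for a learned scoring layer
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return float((w * h).sum())
```

The causal property is easy to verify: perturbing a future input leaves all earlier outputs unchanged, which is exactly why CCNs are suited to time series forecasting.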
4. Experimental Results
4.1. Dataset Information
4.2. Experimental Design
4.2.1. Model Training
- The one-dimensional wind speed is decomposed into 14 one-dimensional sub-signals by SSA to eliminate the randomness of the original data.
- The first 4800 samples obtained in the first step are used as the training set, of which 10% are used as the validation set. The last 500 samples are used as the testing set. Min-Max scaling is used to normalize the training set and the testing set, respectively.
- The input length is 14. When making one-step forecasting, the i-th to the (i + 13)-th samples are used to predict the (i + 14)-th sample. When making two-step forecasting, the i-th to the (i + 13)-th samples are used to predict the (i + 15)-th sample. When making three-step forecasting, the i-th to the (i + 13)-th samples are used to predict the (i + 16)-th sample.
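The windowing and scaling rules above can be sketched as follows; `make_samples` pairs each 14-value window with the target `step` values ahead, and `minmax` maps a set to [0, 1] (the text indicates the training and testing sets are scaled separately).

```python
import numpy as np

def make_samples(series, input_len=14, step=1):
    """Pair each window of `input_len` values with the value `step` ahead."""
    X, y = [], []
    for i in range(len(series) - input_len - step + 1):
        X.append(series[i:i + input_len])
        y.append(series[i + input_len + step - 1])
    return np.array(X), np.array(y)

def minmax(a):
    """Min-Max scaling to [0, 1]."""
    lo, hi = a.min(), a.max()
    return (a - lo) / (hi - lo)
```

For example, with a series of length 20 and one-step forecasting, the first sample uses indices 0-13 to predict index 14; with three-step forecasting it predicts index 16.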
4.2.2. Experimental Setup
4.2.3. Evaluation Criteria
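The result tables in Section 4 report four indices per model and forecasting step: MAE, MAPE (%), MSE, and the coefficient of determination R². A straightforward NumPy implementation of these standard definitions:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Return MAE, MAPE (%), MSE, and R² for a forecast."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    mse = np.mean(err ** 2)
    r2 = 1.0 - mse / np.mean((y_true - y_true.mean()) ** 2)
    return mae, mape, mse, r2
```

A perfect forecast yields MAE = MAPE = MSE = 0 and R² = 1; forecasting the mean of the series yields R² = 0.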
4.3. Result Analysis
4.3.1. Result Analysis of Experiment I
- For dataset 1, the proposed model achieves the best evaluation metric values among all four models. Of the three comparison models, SSA-ANN-ATT is superior to the other two for all indices. To present the results more intuitively, we plot the forecasts in Figure 5, which includes the forecasting results of SSA-CCN-ATT and the other attention-based models.
- For dataset 2, the best forecasting method differs for different evaluation metrics. For the MAPE, the proposed SSA-CCN-ATT model has the lowest value for every step of forecasting. For the MAE, the SSA-ANN-ATT model has the lowest value for two-step forecasting.
- For dataset 3, the proposed SSA-CCN-ATT model is superior to the other three models for all indices. For the other three models, in one-step and two-step forecasting, the worst is the SSA-GRU-ATT model. In three-step forecasting, the worst is the SSA-LSTM-ATT model.
- For dataset 4, SSA-CCN-ATT is the best forecasting method, with the lowest values of the MAE, MAPE, and MSE, and the highest value of R². For the other three models, each can achieve its best results at different forecasting steps. For example, SSA-ANN-ATT gets the best MAE, MAPE, MSE, and R² in two-step forecasting, with values of 0.1727, 6.2570, 0.0378, and 0.9644, respectively.
4.3.2. Result Analysis of Experiment II
- Similar to experiment I, the proposed model achieves the best evaluation metric values for dataset 1, dataset 3, and dataset 4. The other three comparison models achieve their best results for different evaluation metrics and different forecasting steps.
- For dataset 1, in three-step forecasting, the worst forecasting model is EMD-CCN-ATT.
- For dataset 2, EMD-CCN-ATT gets the best MAE and MSE in one-step forecasting, while EEMD-CCN-ATT gets the best MSE and R² in two-step forecasting.
- For dataset 3 and dataset 4, in three-step forecasting, the worst forecasting model is EMD-CCN-ATT.
4.3.3. Result Analysis of Experiment III
- For dataset 1, the proposed model achieves the best results for every step of forecasting. Among the four comparison models, LSTM performs best, with the best values for all four indices.
- For dataset 2, the best forecasting method differs for different steps of forecasting. For one-step forecasting, the proposed SSA-CCN-ATT model has the best values for the four indices. For two-step forecasting, the LSTM model has the best MSE and R² values. For three-step forecasting, the LSTM model has the best MAE value, with an MAE of 0.1978.
- For dataset 3, comparison with the four models shows that the proposed model produces the most accurate forecasts. Among the four individual models, LSTM is in general the best across the four indices, but in one-step forecasting, CCN has the best MSE and R² values.
- Similar to dataset 3, the performance of SSA-CCN-ATT is better than those of the other four individual models. The comparison of CCN, SVR, ANN, and LSTM shows that LSTM always obtains the best values among these four models for one-step and two-step forecasting.
5. Discussion
5.1. Significance of the Proposed Model
5.2. Improvement Ratio Analysis
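The percentage entries in the improvement ratio table are consistent with the usual definition: the relative reduction of an error metric achieved by the proposed model with respect to a benchmark. For example, for dataset 1, one-step MAPE, SSA-LSTM-ATT scores 5.789 versus 4.837 for the proposed model, giving 16.445%, which matches the table.

```python
def improvement_ratio(benchmark, proposed):
    """Percentage reduction of an error metric relative to a benchmark model."""
    return 100.0 * (benchmark - proposed) / benchmark
```
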
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Nomenclature
AR | autoregressive |
ARMA | autoregressive moving average |
VMD | variational mode decomposition |
SSA | singular spectrum analysis |
RWT | repeated wavelet transform |
BPNN | back propagation neural network |
EMD | empirical mode decomposition |
SVM | support vector machine |
RKF | recurrent Kalman filter |
FS | Fourier series |
WNN | wavelet neural network |
ANN | artificial neural network |
KELM | kernel extreme learning machine |
Bi-LSTM | bidirectional long short-term memory |
LSTM | long short-term memory |
DBN | deep belief network |
EWT | empirical wavelet transform |
ESN | echo state network |
ISSD | improved singular spectrum decomposition |
GOASVM | grasshopper optimization algorithm support vector machine |
SSD | singular spectrum decomposition |
CSA | cross search algorithm |
WT | wavelet analysis |
MI | mutual information |
ED | evolutionary decomposition |
BiGRU | bidirectional gated recurrent unit |
CCGRU | causal convolution gated recurrent unit |
CCN | causal convolution network |
SNN | spiking neural network |
ANFIS | adaptive neuro-fuzzy inference system |
SVR | support vector regression |
GRP | Gaussian regression process |
EEMD | ensemble empirical mode decomposition |
CEEMDAN | complete EEMD with adaptive noise |
RNN | recurrent neural network |
CS | cuckoo search |
SLFN | single hidden-layer feedforward network |
IELM | incremental extreme learning machine |
MCEEMDAN | modified CEEMDAN |
SVD | singular value decomposition |
References
- Barthelmie, R.J.; Pryor, S.C. Potential contribution of wind energy to climate change mitigation. Nat. Clim. Chang. 2014, 4, 684–688.
- Lam, L.T.; Branstetter, L.; Azevedo, I.M.L. China’s wind electricity and cost of carbon mitigation are more expensive than anticipated. Environ. Res. Lett. 2016, 11, 84015.
- Yao, X.; Liu, Y.; Qu, S. When will wind energy achieve grid parity in China?–Connecting technological learning and climate finance. Appl. Energy 2015, 160, 697–704.
- He, Z.; Chen, Y.; Shang, Z.; Li, C.; Li, L.; Xu, M. A novel wind speed forecasting model based on moving window and multi-objective particle swarm optimization algorithm. Appl. Math. Model. 2019, 76, 717–740.
- Singh, S.N.; Mohapatra, A. Repeated wavelet transform based ARIMA model for very short-term wind speed forecasting. Renew. Energy 2019, 136, 758–768.
- Okumus, I.; Dinler, A. Current status of wind energy forecasting and a hybrid method for hourly predictions. Energy Convers. Manag. 2016, 123, 362–371.
- Sharma, P.; Bhatti, T.S. A review on electrochemical double-layer capacitors. Energy Convers. Manag. 2010, 51, 2901–2912.
- Jiang, P.; Ma, X. A hybrid forecasting approach applied in the electrical power system based on data preprocessing, optimization and artificial intelligence algorithms. Appl. Math. Model. 2016, 40, 10631–10649.
- Naik, J.; Satapathy, P.; Dash, P.K. Short-term wind speed and wind power prediction using hybrid empirical mode decomposition and kernel ridge regression. Appl. Soft Comput. 2018, 70, 1167–1188.
- Poggi, P.; Muselli, M.; Notton, G.; Cristofari, C.; Louche, A. Forecasting and simulating wind speed in Corsica by using an autoregressive model. Energy Convers. Manag. 2003, 44, 3177–3196.
- Kaur, D.; Lie, T.T.; Nair, N.K.C.; Vallès, B. Wind speed forecasting using hybrid wavelet transform—ARMA techniques. Aims Energy 2015, 3, 13–24.
- Erdem, E.; Shi, J. ARMA based approaches for forecasting the tuple of wind speed and direction. Appl. Energy 2011, 88, 1405–1414.
- Liu, H.; Tian, H.; Li, Y. An EMD-recursive ARIMA method to predict wind speed for railway strong wind warning system. J. Wind. Eng. Ind. Aerodyn. 2015, 141, 27–38.
- Cao, L.; Qiao, D.; Chen, X. Laplace ℓ1 Huber based cubature Kalman filter for attitude estimation of small satellite. Acta Astronaut. 2018, 148, 48–56.
- Bludszuweit, H.; Domínguez-Navarro, J.A.; Llombart, A. Statistical analysis of wind power forecast error. IEEE Trans. Power Syst. 2008, 23, 983–991.
- Shamshad, A.; Bawadi, M.A.; Wanhussin, W.; Majid, T.A.; Sanusi, S. First and second order Markov chain models for synthetic generation of wind speed time series. Energy 2005, 30, 693–708.
- Moreno, S.R.; Mariani, V.C.; dos Santos Coelho, L. Hybrid multi-stage decomposition with parametric model applied to wind speed forecasting in Brazilian Northeast. Renew. Energy 2021, 164, 1508–1526.
- Ding, W.; Meng, F. Point and interval forecasting for wind speed based on linear component extraction. Appl. Soft Comput. 2020, 93, 106350.
- Domínguez-Navarro, J.A.; Lopez-Garcia, T.B.; Valdivia-Bautista, S.M. Applying Wavelet Filters in Wind Forecasting Methods. Energies 2021, 14, 3181.
- Ghaderpour, E. JUST: MATLAB and python software for change detection and time series analysis. GPS Solut. 2021, 25, 85.
- Liu, M.; Cao, Z.; Zhang, J.; Wang, L.; Huang, C.; Luo, X. Short-term wind speed forecasting based on the Jaya-SVM model. Int. J. Electr. Power Energy Syst. 2020, 121, 106056.
- Aly, H.H. A novel deep learning intelligent clustered hybrid models for wind speed and power forecasting. Energy 2020, 213, 118773.
- Xiao, L.; Shao, W.; Jin, F.; Wu, Z. A self-adaptive kernel extreme learning machine for short-term wind speed forecasting. Appl. Soft Comput. 2021, 99, 106917.
- Hong, Y.Y.; Satriani, T.R.A. Day-ahead spatiotemporal wind speed forecasting using robust design-based deep learning neural network. Energy 2020, 209, 118441.
- Liang, T.; Zhao, Q.; Lv, Q.; Sun, H. A novel wind speed prediction strategy based on Bi-LSTM, MOOFADA and transfer learning for centralized control centers. Energy 2021, 230, 120904.
- Xiang, L.; Li, J.; Hu, A.; Zhang, Y. Deterministic and probabilistic multi-step forecasting for short-term wind speed based on secondary decomposition and a deep learning method. Energy Convers. Manag. 2020, 220, 113098.
- Niu, X.; Wang, J. A combined model based on data preprocessing strategy and multi-objective optimization algorithm for short-term wind speed forecasting. Appl. Energy 2019, 241, 519–539.
- Wang, J.; Yang, Z. Ultra-short-term wind speed forecasting using an optimized artificial intelligence algorithm. Renew. Energy 2021, 171, 1418–1435.
- Liu, H.; Yu, C.; Wu, H.; Duan, Z.; Yan, G. A new hybrid ensemble deep reinforcement learning model for wind speed short term forecasting. Energy 2020, 202, 117794.
- Yan, X.; Liu, Y.; Xu, Y.; Jia, M. Multistep forecasting for diurnal wind speed based on hybrid deep learning model with improved singular spectrum decomposition. Energy Convers. Manag. 2020, 225, 113456.
- Zhang, G.; Liu, D. Causal convolutional gated recurrent unit network with multiple decomposition methods for short-term wind speed forecasting. Energy Convers. Manag. 2020, 226, 113500.
- Wei, D.; Wang, J.; Niu, X.; Li, Z. Wind speed forecasting system based on gated recurrent units and convolutional spiking neural networks. Appl. Energy 2021, 292, 116842.
- Chen, X.J.; Zhao, J.; Jia, X.Z.; Li, Z.L. Multi-step wind speed forecast based on sample clustering and an optimized hybrid system. Renew. Energy 2021, 165, 595–611.
- Zhou, Q.; Wang, C.; Zhang, G. A combined forecasting system based on modified multi-objective optimization and sub-model selection strategy for short-term wind speed. Appl. Soft Comput. 2020, 94, 106463.
- Hu, J.; Heng, J.; Wen, J.; Zhao, W. Deterministic and probabilistic wind speed forecasting with de-noising-reconstruction strategy and quantile regression based algorithm. Renew. Energy 2020, 162, 1208–1226.
- Moreno, S.R.; da Silva, R.G.; Mariani, V.C.; dos Santos Coelho, L. Multi-step wind speed forecasting based on hybrid multi-stage decomposition model and long short-term memory neural network. Energy Convers. Manag. 2020, 213, 112869.
- Duan, J.; Zuo, H.; Bai, Y.; Duan, J.; Chang, M.; Chen, B. Short-term wind speed forecasting using recurrent neural networks with error correction. Energy 2021, 217, 119397.
- Neshat, M.; Nezhad, M.M.; Abbasnejad, E.; Mirjalili, S.; Tjernberg, L.B.; Garcia, D.A.; Alexander, B.; Wagner, M. A deep learning-based evolutionary model for short-term wind speed forecasting, A case study of the Lillgrund offshore wind farm. Energy Convers. Manag. 2021, 236, 114002.
- Tian, Z. Modes decomposition forecasting approach for ultra-short-term wind speed. Appl. Soft Comput. 2021, 105, 107303.
- Jiang, P.; Liu, Z.; Niu, X.; Zhang, L. A combined forecasting system based on statistical method, artificial neural networks, and deep learning methods for short-term wind speed forecasting. Energy 2021, 217, 119361.
- Jaseena, K.U.; Kovoor, B.C. Decomposition-based hybrid wind speed forecasting model using deep bidirectional LSTM networks. Energy Convers. Manag. 2021, 234, 113944.
- Memarzadeh, G.; Keynia, F. A new short-term wind speed forecasting method based on fine-tuned LSTM neural network and optimal input sets. Energy Convers. Manag. 2020, 213, 112824.
- Xu, S.; Hu, H.; Ji, L.; Wang, P. An adaptive graph spectral analysis method for feature extraction of an EEG signal. IEEE Sens. J. 2018, 19, 1884–1896.
- Van Den Oord, A.; Dieleman, S.; Zen, H.; Simonyan, K.; Vinyals, O.; Graves, A.; Kalchbrenner, N.; Senior, A.; Kavukcuoglu, K. WaveNet: A generative model for raw audio. SSW 2016, 125, 2.
- Mariani, S.; Rendu, Q.; Urbani, M.; Sbarufatti, C. Causal dilated convolutional neural networks for automatic inspection of ultrasonic signals in non-destructive evaluation and structural health monitoring. Mech. Syst. Signal Process. 2021, 157, 107748.
- Jia, Z.; Yang, L.; Zhang, Z.; Liu, H.; Kong, F. Sequence to point learning based on bidirectional dilated residual network for non-intrusive load monitoring. Int. J. Electr. Power Energy Syst. 2021, 129, 106837.
- Chen, Y.; He, Z.; Shang, Z.; Li, C.; Li, L.; Xu, M. A novel combined model based on echo state network for multi-step ahead wind speed forecasting: A case study of NREL. Energy Convers. Manag. 2019, 179, 13–29.
- A Python Package for Time Series Classification. Available online: https://github.com/johannfaouzi/pyts (accessed on 10 April 2022).
Model | Data Preprocessing | Forecasting Model | Optimization |
---|---|---|---|
Combined model [27] | CEEMDAN | ARIMA, BPNN, ENN, ELM, GRNN | MOGOA |
MWS-CE-ENN [28] | CEEMDAN | Elman neural network | MWS |
Q-LSTM-DBN-ESN [29] | Empirical wavelet transform | LSTM, DBN, ESN | None |
SSD-LSTM-GOASVM [30] | Improved singular spectrum decomposition | LSTM, DBN | Grasshopper optimization algorithm |
CCGRU [31] | CCN | Gated recurrent unit | None |
DTIWSFS [32] | Empirical wavelet transform | Gated recurrent unit, Convolutional SNN | Grey Wolf Optimization |
ECKIE [33] | Ensemble empirical mode decomposition | Incremental extreme learning machine | Cuckoo search (CS) algorithm |
SSAWD-MOGAPSO-CM [34] | SSAWD secondary denoising algorithm | MLP-BP, NARNN, SVM, and ELM | Multi-objective optimization by modified PSO |
Hybrid forecasting model [35] | Modified complete empirical mode decomposition with adaptive noise | Quantile regression-based model | Grasshopper optimization algorithm |
VMD-SSA-LSTM [36] | Variational mode decomposition, Singular spectral analysis | LSTM, ESN, ANFIS, SVR, GRP | None |
ICEEMDAN-RNN-ICEEMDAN-ARIMA [37] | ICEEMDAN | ARIMA, RNN, BPNN | None |
CMAES-LSTM [38] | Evolutionary decomposition | Bi-LSTM | Covariance matrix adaptation evolution strategy |
Proposed approach [39] | Adaptive variational mode decomposition algorithm | ARIMA, SVM, improved LSTM | Improved PSO |
PCFS [40] | Singular spectral analysis | ELM, BPNN, GRNN, ARIMA, ENN, DBN, LSTM | MMODA |
EWT-based BiDLSTM [41] | WT, EMD, EEMD, EWT | Bidirectional LSTM | None |
WT-FS-LSTM [42] | WT | LSTM | CSA |
Datasets | Minimum | Median | Maximum | Mean | Std |
---|---|---|---|---|---|
Dataset 1 | 0.35 | 4.77 | 23.57 | 6.49 | 4.76 |
Dataset 2 | 0.35 | 3.43 | 9.62 | 3.55 | 1.52 |
Dataset 3 | 0.36 | 2.59 | 8.22 | 2.71 | 1.19 |
Dataset 4 | 0.32 | 2.55 | 9.61 | 2.72 | 1.37 |
Experiments | Comparison Models |
---|---|
Experiment I | SSA-LSTM-ATT |
SSA-ANN-ATT | |
SSA-GRU-ATT | |
Experiment II | EMD-CCN-ATT |
EEMD-CCN-ATT | |
EWT-CCN-ATT | |
Experiment III | ANN |
SVR | |
CCN | |
LSTM |
Model | Parameters | Values |
---|---|---|
ANN | Number of hidden layers | 4 |
Number of neurons in hidden layers | (100, 70, 40, 10) | |
SVR | Kernel function | RBF kernel |
Kernel coefficient | {0.01, 0.1, 1, 10, 100} | |
Regularization parameter | {0.01, 0.1, 1, 10, 100} | |
CCN | Number of hidden layers | 2 |
Number of kernels in the CCN layer | (10, 12) | |
LSTM | Number of hidden layers | 2 |
Number of neurons in the LSTM layer | (100, 50) | |
GRU | Number of hidden layers | 2 |
Number of neurons in the GRU layer | (100, 50) |
Dataset | Model | MAE | MAPE (%) | MSE | R² |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | ||
Dataset 1 | SSA-LSTM-ATT | 0.0866 | 0.1034 | 0.1251 | 5.7890 | 6.1930 | 8.8250 | 0.0136 | 0.0153 | 0.0244 | 0.9624 | 0.9577 | 0.9326 |
SSA-ANN-ATT | 0.0940 | 0.0882 | 0.1147 | 5.5780 | 6.1610 | 8.0060 | 0.0115 | 0.0112 | 0.0214 | 0.9684 | 0.9691 | 0.9408 | |
SSA-GRU-ATT | 0.0960 | 0.0975 | 0.1445 | 5.2620 | 6.6450 | 8.9320 | 0.0139 | 0.0127 | 0.0234 | 0.9616 | 0.9649 | 0.9353 | |
Proposed | 0.0719 | 0.0863 | 0.1241 | 4.8370 | 5.4860 | 7.5480 | 0.0073 | 0.0098 | 0.0202 | 0.9797 | 0.9731 | 0.9441 | |
Dataset 2 | SSA-LSTM-ATT | 0.1696 | 0.1960 | 0.2340 | 5.8170 | 6.9630 | 8.2430 | 0.0366 | 0.0445 | 0.0653 | 0.9756 | 0.9704 | 0.9565 |
SSA-ANN-ATT | 0.1642 | 0.1700 | 0.2302 | 5.6480 | 6.3740 | 8.9750 | 0.0330 | 0.0310 | 0.0600 | 0.9781 | 0.9794 | 0.9600 | |
SSA-GRU-ATT | 0.1472 | 0.1820 | 0.2638 | 5.7780 | 6.9330 | 8.9720 | 0.0238 | 0.0359 | 0.0877 | 0.9842 | 0.9761 | 0.9416 | |
Proposed | 0.1425 | 0.1762 | 0.2047 | 4.6540 | 5.7290 | 7.1170 | 0.0271 | 0.0438 | 0.0510 | 0.9820 | 0.9709 | 0.9660 | |
Dataset 3 | SSA-LSTM-ATT | 0.8384 | 0.9588 | 1.4789 | 5.2870 | 6.0920 | 9.2410 | 0.7238 | 0.9423 | 2.2831 | 0.9504 | 0.9354 | 0.8434 |
SSA-ANN-ATT | 0.8453 | 0.9692 | 1.3630 | 5.2930 | 6.1990 | 8.5190 | 0.7430 | 0.9684 | 1.9140 | 0.9490 | 0.9336 | 0.8687 | |
SSA-GRU-ATT | 0.9071 | 0.9780 | 1.3526 | 5.6330 | 6.2360 | 8.8130 | 0.8565 | 0.9767 | 1.9484 | 0.9412 | 0.9330 | 0.8663 | |
Proposed | 0.7442 | 0.8724 | 1.2333 | 4.7800 | 5.4440 | 7.6940 | 0.5622 | 0.7905 | 1.5744 | 0.9614 | 0.9458 | 0.8920 | |
Dataset 4 | SSA-LSTM-ATT | 0.1509 | 0.1760 | 0.2257 | 5.4550 | 6.9660 | 9.0030 | 0.0312 | 0.0493 | 0.0824 | 0.9706 | 0.9536 | 0.9224 |
SSA-ANN-ATT | 0.1590 | 0.1727 | 0.2214 | 5.6280 | 6.2570 | 8.7370 | 0.0360 | 0.0378 | 0.0809 | 0.9661 | 0.9644 | 0.9238 | |
SSA-GRU-ATT | 0.1552 | 0.1894 | 0.2406 | 5.7570 | 6.2930 | 8.8430 | 0.0299 | 0.0522 | 0.0798 | 0.9718 | 0.9508 | 0.9248 | |
Proposed | 0.1181 | 0.1558 | 0.1928 | 4.4130 | 5.7590 | 7.1140 | 0.0189 | 0.0304 | 0.0489 | 0.9822 | 0.9714 | 0.9539 |
Dataset | Model | MAE | MAPE (%) | MSE | R² |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | ||
Dataset 1 | EMD-CCN-ATT | 0.0916 | 0.0903 | 0.1446 | 5.8900 | 6.3260 | 8.8440 | 0.0106 | 0.0153 | 0.0244 | 0.9624 | 0.9577 | 0.9326 |
EEMD-CCN-ATT | 0.1071 | 0.1140 | 0.1262 | 6.0430 | 6.3340 | 8.3120 | 0.0156 | 0.0112 | 0.0214 | 0.9684 | 0.9691 | 0.9408 | |
EWT-CCN-ATT | 0.0942 | 0.0921 | 0.1379 | 5.6240 | 6.0750 | 8.2760 | 0.0102 | 0.0127 | 0.0234 | 0.9616 | 0.9649 | 0.9353 | |
Proposed | 0.0719 | 0.0863 | 0.1241 | 4.8370 | 5.4860 | 7.5480 | 0.0073 | 0.0098 | 0.0202 | 0.9797 | 0.9731 | 0.9441 | |
Dataset 2 | EMD-CCN-ATT | 0.1499 | 0.1851 | 0.2467 | 5.8860 | 6.2200 | 9.6540 | 0.0246 | 0.0445 | 0.0653 | 0.9756 | 0.9704 | 0.9565 |
EEMD-CCN-ATT | 0.1699 | 0.1996 | 0.2402 | 5.9350 | 6.4650 | 9.2280 | 0.0339 | 0.0310 | 0.0600 | 0.9781 | 0.9794 | 0.9600 | |
EWT-CCN-ATT | 0.1641 | 0.1686 | 0.2505 | 5.8240 | 6.4080 | 8.9820 | 0.0317 | 0.0359 | 0.0877 | 0.9842 | 0.9761 | 0.9416 | |
Proposed | 0.1425 | 0.1762 | 0.2047 | 4.6540 | 5.7290 | 7.1170 | 0.0271 | 0.0438 | 0.0510 | 0.9820 | 0.9709 | 0.9660 | |
Dataset 3 | EMD-CCN-ATT | 0.8622 | 1.0417 | 1.5789 | 5.3870 | 6.4520 | 9.5580 | 0.7838 | 0.9423 | 2.2831 | 0.9504 | 0.9354 | 0.8434 |
EEMD-CCN-ATT | 0.8120 | 0.9950 | 1.4721 | 5.0950 | 6.3070 | 8.9350 | 0.6801 | 0.9684 | 1.9140 | 0.9490 | 0.9336 | 0.8687 | |
EWT-CCN-ATT | 0.9273 | 0.9843 | 1.4288 | 5.8140 | 6.3950 | 9.2300 | 0.9178 | 0.9767 | 1.9484 | 0.9412 | 0.9330 | 0.8663 | |
Proposed | 0.7442 | 0.8724 | 1.2333 | 4.7800 | 5.4440 | 7.6940 | 0.5622 | 0.7905 | 1.5744 | 0.9614 | 0.9458 | 0.8920 | |
Dataset 4 | EMD-CCN-ATT | 0.1490 | 0.1819 | 0.2656 | 5.4930 | 6.9660 | 9.4750 | 0.0267 | 0.0493 | 0.0824 | 0.9706 | 0.9536 | 0.9224 |
EEMD-CCN-ATT | 0.1507 | 0.1745 | 0.2490 | 5.7490 | 6.3260 | 8.9450 | 0.0265 | 0.0378 | 0.0809 | 0.9661 | 0.9644 | 0.9238 | |
EWT-CCN-ATT | 0.1614 | 0.1760 | 0.2344 | 5.8430 | 6.3700 | 8.9260 | 0.0296 | 0.0522 | 0.0798 | 0.9718 | 0.9508 | 0.9248 | |
Proposed | 0.1181 | 0.1558 | 0.1928 | 4.4130 | 5.7590 | 7.1140 | 0.0189 | 0.0304 | 0.0489 | 0.9822 | 0.9714 | 0.9539 |
Dataset | Model | MAE | MAPE (%) | MSE | R² |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | ||
Dataset 1 | ANN | 0.1062 | 0.1346 | 0.1693 | 7.3840 | 7.8350 | 10.4760 | 0.0162 | 0.0297 | 0.0328 | 0.9553 | 0.9179 | 0.9094 |
SVR | 0.1203 | 0.1578 | 0.1582 | 6.7700 | 8.5500 | 9.6150 | 0.0233 | 0.0364 | 0.0293 | 0.9356 | 0.8994 | 0.9192 | |
CCN | 0.1192 | 0.1257 | 0.1779 | 7.1220 | 7.7140 | 10.0040 | 0.0185 | 0.0216 | 0.0513 | 0.9488 | 0.9404 | 0.8584 | |
LSTM | 0.0942 | 0.1029 | 0.1405 | 6.6140 | 7.1530 | 9.4080 | 0.0129 | 0.0155 | 0.0240 | 0.9642 | 0.9571 | 0.9337 | |
Proposed | 0.0719 | 0.0863 | 0.1241 | 4.8370 | 5.4860 | 7.5480 | 0.0073 | 0.0098 | 0.0202 | 0.9797 | 0.9731 | 0.9441 | |
Dataset 2 | ANN | 0.1805 | 0.2277 | 0.3493 | 6.4510 | 7.1700 | 11.9590 | 0.0373 | 0.0737 | 0.1514 | 0.9752 | 0.9509 | 0.8992 |
SVR | 0.1562 | 0.2229 | 0.3105 | 6.1370 | 7.8670 | 10.6160 | 0.0273 | 0.0600 | 0.1181 | 0.9818 | 0.9600 | 0.9214 | |
CCN | 0.1592 | 0.2087 | 0.2828 | 5.4560 | 7.1100 | 10.2400 | 0.0355 | 0.0559 | 0.1048 | 0.9764 | 0.9628 | 0.9303 | |
LSTM | 0.1827 | 0.1875 | 0.1978 | 5.8810 | 6.6100 | 8.0790 | 0.0507 | 0.0426 | 0.0557 | 0.9663 | 0.9717 | 0.9630 | |
Proposed | 0.1425 | 0.1762 | 0.2047 | 4.6540 | 5.7290 | 7.1170 | 0.0271 | 0.0438 | 0.0510 | 0.9820 | 0.9709 | 0.9660 | |
Dataset 3 | ANN | 1.1199 | 1.3021 | 1.4535 | 7.0160 | 8.2360 | 9.4780 | 1.3091 | 1.7431 | 2.2702 | 0.9102 | 0.8804 | 0.8443 |
SVR | 1.1496 | 1.2352 | 1.7375 | 7.1630 | 7.7130 | 10.9670 | 1.3708 | 1.5776 | 3.0919 | 0.9060 | 0.8918 | 0.7879 | |
CCN | 1.0711 | 1.2326 | 1.4921 | 6.7240 | 7.7580 | 9.6060 | 1.1927 | 1.5805 | 2.2789 | 0.9182 | 0.8916 | 0.8437 | |
LSTM | 1.0641 | 1.1366 | 1.3326 | 6.6830 | 7.1720 | 8.2530 | 1.1950 | 1.3290 | 1.8636 | 0.9180 | 0.9088 | 0.8722 | |
Proposed | 0.7442 | 0.8724 | 1.2333 | 4.7800 | 5.4440 | 7.6940 | 0.5622 | 0.7905 | 1.5744 | 0.9614 | 0.9458 | 0.8920 | |
Dataset 4 | ANN | 0.1823 | 0.1923 | 0.2736 | 6.6580 | 7.1660 | 9.9420 | 0.0392 | 0.0500 | 0.0902 | 0.9631 | 0.9529 | 0.9150 |
SVR | 0.1852 | 0.2191 | 0.2543 | 6.6700 | 7.9550 | 9.8670 | 0.0497 | 0.0609 | 0.1004 | 0.9532 | 0.9426 | 0.9054 | |
CCN | 0.2192 | 0.2143 | 0.3633 | 7.1970 | 7.8240 | 10.3590 | 0.0806 | 0.0534 | 0.2306 | 0.9240 | 0.9497 | 0.7827 | |
LSTM | 0.1717 | 0.859 | 0.2748 | 6.3470 | 7.0710 | 9.9150 | 0.0353 | 0.0449 | 0.0982 | 0.9667 | 0.9577 | 0.9075 | |
Proposed | 0.1181 | 0.1558 | 0.1928 | 4.4130 | 5.7590 | 7.1140 | 0.0189 | 0.0304 | 0.0489 | 0.9822 | 0.9714 | 0.9539 |
Model | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | |
SSA-LSTM-ATT | 4.0359 | 4.1468 | 3.2596 | 4.4155 | 3.7395 | 4.3100 | 6.9371 | 6.4640 | 5.8669 | 5.0879 | 4.4626 | 4.8684 |
SSA-ANN-ATT | 2.6796 | 2.7758 | 2.5898 | 3.1393 | 5.4915 | 4.1299 | 6.4450 | 4.8702 | 7.1683 | 5.0396 | 5.2263 | 4.2000 |
SSA-GRU-ATT | 3.2327 | 2.6731 | 2.3547 | 3.2205 | 4.7229 | 6.3161 | 8.8089 | 13.2691 | 5.1547 | 5.4739 | 3.6886 | 5.0449 |
EMD-CCN-ATT | 2.7588 | 1.8350 | 1.9807 | 3.0237 | 3.7056 | 2.7689 | 6.8340 | 6.7868 | 5.9259 | 7.5251 | 4.9170 | 5.7765 |
EEMD-CCN-ATT | 4.3770 | 3.4384 | 2.7487 | 2.8046 | 2.7212 | 4.4261 | 5.9088 | 7.2490 | 8.6187 | 3.8240 | 3.6781 | 5.5065 |
EWT-CCN-ATT | 1.8028 | 3.1977 | 1.9217 | 1.8531 | 1.9728 | 3.2618 | 5.8025 | 3.5395 | 6.1990 | 4.5648 | 1.9389 | 4.6737 |
ANN | 4.2282 | 4.8446 | 3.4828 | 5.3323 | 5.0160 | 6.3535 | 10.7952 | 13.0939 | 6.1399 | 6.6524 | 5.1149 | 6.1361 |
SVR | 3.1649 | 4.0294 | 3.4252 | 6.5651 | 7.0320 | 6.8778 | 11.3199 | 10.6736 | 12.3700 | 7.5924 | 5.4513 | 5.3851 |
CCN | 5.0380 | 5.0494 | 3.8233 | 3.0310 | 3.6377 | 4.4585 | 12.1855 | 10.0671 | 10.0464 | 3.9994 | 5.4779 | 3.3949 |
LSTM | 5.2074 | 3.7417 | 1.8054 | 3.5285 | 3.0930 | 2.5413 | 7.6600 | 12.6493 | 5.8276 | 5.4187 | 5.6419 | 8.8506 |
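The table above reports Diebold-Mariano (DM) statistics comparing each benchmark's forecast errors against the proposed model's. A minimal sketch of the DM statistic under squared-error loss, without small-sample or serial-correlation corrections (the paper does not spell out its exact variant):

```python
import numpy as np

def dm_statistic(e1, e2):
    """Diebold-Mariano statistic: positive when the second model's errors are smaller."""
    d = e1 ** 2 - e2 ** 2          # loss differential under squared-error loss
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))
```

Under the null hypothesis of equal predictive accuracy the statistic is approximately standard normal, so values beyond about ±1.96 reject the null at the 5% level.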
Model | Dataset 1 | Dataset 2 | Dataset 3 | Dataset 4 | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | 1-Step | 2-Step | 3-Step | |
SSA-LSTM-ATT | 16.445% | 11.416% | 14.470% | 19.993% | 17.722% | 13.660% | 9.590% | 10.637% | 16.741% | 19.102% | 17.327% | 20.982% |
SSA-ANN-ATT | 13.284% | 10.956% | 5.7210% | 17.599% | 10.119% | 20.702% | 9.692% | 12.179% | 9.6840% | 21.588% | 7.9590% | 18.576% |
SSA-GRU-ATT | 8.0770% | 17.442% | 15.495% | 19.453% | 17.366% | 20.675% | 15.143% | 12.700% | 12.697% | 23.345% | 8.4860% | 19.552% |
EMD-CCN-ATT | 17.878% | 13.279% | 14.654% | 20.931% | 7.8940% | 26.279% | 11.268% | 15.623% | 19.502% | 19.661% | 17.327% | 24.918% |
EEMD-CCN-ATT | 19.957% | 13.388% | 9.1920% | 21.584% | 11.384% | 22.876% | 6.1830% | 13.683% | 13.889% | 23.239% | 8.9630% | 20.470% |
EWT-CCN-ATT | 13.994% | 9.6950% | 8.7970% | 20.089% | 10.596% | 20.764% | 17.785% | 14.871% | 16.641% | 24.474% | 9.5920% | 20.300% |
ANN | 34.493% | 29.981% | 27.950% | 27.856% | 20.098% | 40.488% | 31.870% | 33.900% | 18.823% | 33.719% | 19.634% | 28.445% |
SVR | 28.552% | 35.836% | 21.498% | 24.165% | 27.177% | 32.960% | 33.268% | 29.418% | 29.844% | 33.838% | 27.605% | 27.901% |
CCN | 32.084% | 28.883% | 24.550% | 14.699% | 19.423% | 30.498% | 28.911% | 29.827% | 19.904% | 38.683% | 26.393% | 31.325% |
LSTM | 26.867% | 23.305% | 19.770% | 20.864% | 13.328% | 11.907% | 28.475% | 24.094% | 6.7730% | 30.471% | 18.555% | 28.250% |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shang, Z.; Wen, Q.; Chen, Y.; Zhou, B.; Xu, M. Wind Speed Forecasting Using Attention-Based Causal Convolutional Network and Wind Energy Conversion. Energies 2022, 15, 2881. https://doi.org/10.3390/en15082881