Article

Wind Speed Forecast Based on Post-Processing of Numerical Weather Predictions Using a Gradient Boosting Decision Tree Algorithm

1 Ministry of Education Key Laboratory for Earth System Modeling, Department of Earth System Science, Tsinghua University, Beijing 100084, China
2 Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
3 Yucheng Comprehensive Experiment Station, Chinese Academy of Sciences, Beijing 100101, China
* Author to whom correspondence should be addressed.
Atmosphere 2020, 11(7), 738; https://doi.org/10.3390/atmos11070738
Submission received: 18 March 2020 / Revised: 1 July 2020 / Accepted: 9 July 2020 / Published: 12 July 2020

Abstract
With the large-scale development of wind energy, wind power forecasting plays a key role in power dispatching in the electric power grid, as well as in the operation and maintenance of wind farms. The most important technology for wind power forecasting is wind speed forecasting. The current mainstream methods for wind speed forecasting combine mesoscale numerical meteorological models with a post-processing system. Our work uses the WRF model to obtain the numerical weather forecast and the gradient boosting decision tree (GBDT) algorithm to improve the near-surface wind speed post-processing results of the numerical weather model. We calculate the feature importance of GBDT in order to find out which features most affect the post-processed wind speed results. The results show that, after using about 300 features at different height and pressure layers, the GBDT algorithm can output more accurate wind speed forecasts than the original WRF results and other post-processing models, such as decision tree regression (DTR) and multi-layer perceptron regression (MLPR). Using GBDT, the root mean square error (RMSE) of wind speed, 2.7–3.5 m/s in the original WRF results, can be reduced by 1–1.5 m/s, a larger reduction than those of DTR and MLPR. Meanwhile, the index of agreement (IA) can be improved by 0.10–0.20, the correlation coefficient by 0.10–0.18, and the Nash–Sutcliffe efficiency coefficient (NSE) by −0.06–0.6. It is also found that the feature which most affects the GBDT results is the near-surface wind speed. Other variables, such as forecast month, forecast time, and temperature, also affect the GBDT results.

1. Introduction

Among the renewable energy technologies currently developed, wind power is a renewable energy with mature technology and large-scale development prospects. One of the key technologies for the development of wind power is forecasting of the amount of power generated by wind farms. As the output power of a wind turbine is directly related to wind speed, wind power forecasts strongly depend on wind speed forecasts.
In the development of wind power forecast technology for wind farms, mesoscale model simulation is a useful method for wind speed forecasting. Rife et al. [1] used the mesoscale numerical weather prediction (NWP) model MM5 to predict the low-level wind in the boundary layer. Storm et al. [2] evaluated the performance of the Weather Research and Forecast (WRF) model in predicting the low-level jet (LLJ) and pointed out that the LLJ results simulated by the WRF model were similar to observations. This result indicates that the mesoscale model is able to capture some characteristics of boundary layer wind. Marquis et al. [3] pointed out that the wind speed predicted by the WRF model can help wind resources to be better developed and utilized. Foley et al. [4] discussed the methods used in wind power generation forecasting and noted that the use of mesoscale models for dynamic downscaling of large-scale forecasts is the basis of wind power forecasting for wind farms. Zhao et al. [5] presented a day-ahead wind power forecasting system, in which the WRF model was used to forecast the wind speed. Mahoney et al. [6], Stathopoulos et al. [7], and Wyszogrodzki et al. [8] did similar work; mesoscale models are widely used in wind power forecasting.
Based on the NWP model, some studies have focused on improving wind speed forecasting results. Deppe et al. [9] used aggregated results of WRF model simulations with different planetary boundary layer (PBL) schemes to improve the turbine-height wind speed forecasts. Tateo et al. [10] also used an ensemble method combining different PBL scheme results. Cheng et al. [11] and Marjanovic et al. [12] explored the impact of the choice of physical schemes on wind forecasts, and data assimilation is also an effective way to improve wind speed forecasts [13,14,15,16,17,18,19,20].
In the current state of wind speed forecasting, combining a mesoscale numerical model with post-processing algorithms is an efficient method [21,22]. Such mesoscale numerical models mainly include the WRF model and other NWP models, and the post-processing algorithms mainly include statistical methods and machine learning methods.
The statistical methods used have been developed over many years. The Model Output Statistics (MOS) method has been widely used for a long time [23,24,25,26,27]. In addition to the MOS method, there have also been studies on error correction through analysis of the systematic errors of the numerical model. Stensrud et al. [28] compared the grid point temperature data of a mesoscale model with observed data and obtained the systematic deviation of the model; after subtracting the average temperature deviation from the model results, the corrected temperature was closer to the observed value. Another work by Stensrud et al. [29] integrated ensemble predictions into this error correction method: using the results of 23 ensemble members, a simple seven-day running average was used to correct the deviation, and the bias-corrected value of the ensemble result was compared with the temperature at 2 m above ground level (AGL). The corrected result was better than that of the MOS method. Eckel et al. [30] and Hacker et al. [31] also carried out similar studies on ensemble forecasting and the reduction of systematic error. In addition, other statistical methods, such as the Kalman filter, have also been widely used in numerical weather model post-processing [32,33,34,35,36].
In recent years, machine learning algorithms have been applied to wind forecasting in wind farms. Li et al. [37] used three typical neural networks to predict the 1-h-ahead wind speed. Ishak et al. [38] obtained a mesoscale weather forecast using the MM5 (fifth-generation Penn State/NCAR Mesoscale Model) model, and multiple linear regression (MLR), support vector machine (SVM), and artificial neural network (ANN) models were used to process the mesoscale model results and output the wind speed. The results showed that the SVM algorithm performed the best due to its ability to capture non-linear relations. Sweeney et al. [39] combined several post-processing methods (short-term bias correction, diurnal cycle correction, linear least squares, Kalman filter, mean and variance correction, directional-bias forecast, and ANN) and proposed a combined post-processing method to reduce the error in the wind speed forecast. Zjavka et al. [40] used a polynomial neural network method to process the output of a mesoscale numerical meteorological model and obtained a more accurate wind speed forecast. Zhao et al. [41] built a wind speed forecast system based on WRF ensemble results and post-processing algorithms. Zhao et al. [42] combined non-linear and non-parametric algorithms to correct the wind speed output of the numerical model. Papayiannis et al. [43] proposed a new method based on optimal transportation theory to fit the observed wind speed and model results.
From the previous works, it can be seen that the mainstream method for wind speed prediction uses a numerical weather model to predict the meteorological features and select some of the features as the input of a post-processing algorithm, then it uses the post-processing algorithm to output the corrected wind speed.
In this paper, the Weather Research and Forecast (WRF) model was used for numerical weather prediction in wind farm areas. The gradient boosting decision tree (GBDT) method was used to perform the post-processing task and output the post-processed wind speed results. Two additional machine learning models were used for comparison. By comparing the RMSE of the WRF model’s wind speed output with the RMSE of the post-processing models’ wind speed output, we evaluate the performance of GBDT as a post-processing method. Section 2 mainly introduces the WRF model, observation data, and GBDT algorithm. Section 3 analyzes the results of GBDT and its feature importance distribution. Section 4 presents our main conclusions.

2. Experiments

2.1. Numerical Weather Model

The numerical weather prediction model used in our tests is the Weather Research and Forecast (WRF) model (Version 3.9.1) [44]. We designed a nested-grid simulation of the WRF model, where the outer grid has a 25 km horizontal resolution and the inner grid has a 5 km horizontal resolution. Figure 1 shows domains 01 and 02 of our simulation. Figure 2 shows the domain 02 area, which contains several wind observation towers; these towers were used to evaluate the forecasting results of WRF and measure the wind speed forecast improvement of the post-processing algorithm.
In China, every wind farm needs to provide wind power forecasts to the power grid. Every morning (UTC + 8 h), the wind farm needs to forecast its own power generation for the next few days; the forecast starts at 00:00 (UTC + 8 h) of the next day and lasts for several days. Therefore, our WRF model starts running at 00:00 UTC every day, and the output starting at 16:00 UTC (00:00 UTC + 8 h of the next day) is taken as the wind speed forecast of the wind farm, where the output interval of the WRF model is 10 min. Figure 3 shows the WRF running time configuration. We start the WRF model every day and make 24, 48, and 72 h forecasts. The model runs from 1 June 2009 until 27 June 2010. From the results of the WRF model, we thus have 24, 48, and 72 h wind speed forecasts every day over a time range of one year.
The driving data of the WRF model are the final operational global analysis data (FNL) [45], which provide the initial field and boundary forcing of the WRF model. The FNL data are provided by the National Centers for Environmental Prediction (NCEP) and have a 1-degree spatial resolution and a 6-h temporal resolution.
Table 1 shows the domain configuration and parameter settings of the WRF model; in our study period, the models that were run every day used the same set of configurations.
After obtaining the model results, we interpolated the wind field into heights of 10, 30, 50, and 70 m using linear interpolation. We also interpolated the wind field into the wind tower’s location using bilinear interpolation. We used the results of the interpolation to compare with the observed wind speed.
The ETA values of the near-surface layers were 1.0000, 0.9960, 0.9920, 0.9900, 0.9851… The average altitudes of the near-surface layers in the model domain were 16.07, 48.22, 72.36, 101.16, and 134.84 m, so there are model layers near 50 m and 70 m. The wind tower data are at heights of 10, 30, 50, and 70 m. For the 10 m data, we use the 10 m wind speed output of the WRF model; for data at other heights, we use linear interpolation in the vertical direction. Despite the nonlinear characteristics of the wind in the boundary layer, the existence of model layers near the heights of 50 m and 70 m means that the nonlinear changes in the boundary layer have no significant effect on the interpolated results at 50 m and 70 m.
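The vertical interpolation step can be sketched as follows, using the mean layer altitudes quoted above. This is an illustrative sketch, not the paper's code; heights below the lowest layer are not handled here, since the 10 m wind comes directly from the WRF output.

```python
# Illustrative sketch: vertically interpolate model-layer wind speed to a
# tower height, assuming the mean layer altitudes quoted in the text.
MODEL_HEIGHTS = [16.07, 48.22, 72.36, 101.16, 134.84]  # mean layer altitudes (m)

def interp_to_height(target_h, heights, speeds):
    """Linearly interpolate wind speed to target_h between bracketing layers."""
    for lo, hi, s_lo, s_hi in zip(heights, heights[1:], speeds, speeds[1:]):
        if lo <= target_h <= hi:
            w = (target_h - lo) / (hi - lo)
            return s_lo + w * (s_hi - s_lo)
    raise ValueError("target height outside model layers")

speeds = [4.0, 5.2, 5.9, 6.4, 6.8]   # hypothetical layer wind speeds (m/s)
ws50 = interp_to_height(50.0, MODEL_HEIGHTS, speeds)
ws70 = interp_to_height(70.0, MODEL_HEIGHTS, speeds)
```

Because the 48.22 m and 72.36 m layers bracket both tower heights, the interpolation only spans one model layer gap, which is why the nonlinearity of the boundary layer profile has little effect here.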

2.2. Wind Observation Data

In order to evaluate the results of the post-processing algorithm, we used wind speed observation data from 14 wind observation towers to evaluate the model results and the post-processing results. The geographical distribution of wind towers is shown in Figure 2. These wind towers are distributed in the coastal areas of Jiangsu, China, where wind power has been widely developed.
Table 2 shows the terrain height, location, and sensor parameters of the 14 wind towers. The time interval of the wind tower observation data is 10 min. The observation period lasted from June 2009 to May 2010. Each wind tower had observation data at different heights near the ground: 10 m above ground level (AGL), 30 m AGL, 50 m AGL, and 70 m AGL.

2.3. Results Measurements

In order to evaluate the results of the different tests, the following evaluation metrics were calculated to evaluate the WRF model results and post-processing results.

2.3.1. Root Mean Square Error

The root mean square error (RMSE) is widely used in NWP to evaluate the error of wind speed and other meteorological variables. The calculation of RMSE is
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(M_i - O_i)^2}$$
where $n$ is the number of observations, $M_i$ is the wind speed in the model results or post-processing results, and $O_i$ is the wind speed observation.

2.3.2. Index of Agreement

Index of agreement (IA) is a standardized measure of the degree of model prediction error [46,47,48]. It can be calculated by
$$\mathrm{IA} = 1 - \frac{\sum_{i=1}^{n}(M_i - O_i)^2}{\sum_{i=1}^{n}\left(|M_i - \bar{O}| + |O_i - \bar{O}|\right)^2}, \qquad 0 \le \mathrm{IA} \le 1$$
where $\bar{O}$ is the average value of the wind speed observation.
The Index of Agreement varies between 0 and 1, where a value of IA close to 1 indicates well-matched results, and 0 indicates no agreement at all.
The index of agreement can detect additive and proportional differences in the observed and simulated means and variances [49].

2.3.3. Correlation Coefficient

The Pearson correlation coefficient (R) is also widely used to evaluate the performance of wind speed simulation of NWP. It reflects the correlation between wind speed simulation series and observation series.
$$R = \frac{\mathrm{Cov}(M, O)}{\sqrt{\mathrm{Var}[M]\,\mathrm{Var}[O]}}$$
Here, $\mathrm{Cov}(M, O)$ represents the covariance of the model results and the observed wind speed, and $\mathrm{Var}[M]$ and $\mathrm{Var}[O]$ represent the variance of the model results and of the observed wind speed, respectively. These quantities can be calculated as
$$\mathrm{Cov}(M, O) = \sum_{i=1}^{n}(M_i - \bar{M})(O_i - \bar{O})$$
$$\mathrm{Var}[M] = \sum_{i=1}^{n}(M_i - \bar{M})^2$$
$$\mathrm{Var}[O] = \sum_{i=1}^{n}(O_i - \bar{O})^2$$

2.3.4. Nash–Sutcliffe Efficiency Coefficient

The Nash–Sutcliffe efficiency coefficient (NSE) is used to assess the predictive ability of numerical models [50]. NSE can be calculated as
$$\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}(M_i - O_i)^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2}$$
NSE ranges from $-\infty$ to 1. A value of 1 ($\mathrm{NSE} = 1$) means that the model results match the observations perfectly, a value of 0 ($\mathrm{NSE} = 0$) means that the model results are as accurate as the mean of the observed data, and a negative value ($\mathrm{NSE} < 0$) means that the mean of the observed data is a better predictor than the model.
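The four metrics above can be sketched in plain Python; the function names are illustrative, not from the paper's code.

```python
# Illustrative implementations of the four evaluation metrics defined above,
# for model series m and observation series o of equal length.
import math

def rmse(m, o):
    """Root mean square error."""
    return math.sqrt(sum((mi - oi) ** 2 for mi, oi in zip(m, o)) / len(o))

def index_of_agreement(m, o):
    """IA: 1 = perfect match, 0 = no agreement."""
    obar = sum(o) / len(o)
    num = sum((mi - oi) ** 2 for mi, oi in zip(m, o))
    den = sum((abs(mi - obar) + abs(oi - obar)) ** 2 for mi, oi in zip(m, o))
    return 1 - num / den

def pearson_r(m, o):
    """Pearson correlation coefficient."""
    mbar, obar = sum(m) / len(m), sum(o) / len(o)
    cov = sum((mi - mbar) * (oi - obar) for mi, oi in zip(m, o))
    var_m = sum((mi - mbar) ** 2 for mi in m)
    var_o = sum((oi - obar) ** 2 for oi in o)
    return cov / math.sqrt(var_m * var_o)

def nse(m, o):
    """Nash-Sutcliffe efficiency; 1 = perfect, 0 = as good as the mean."""
    obar = sum(o) / len(o)
    num = sum((mi - oi) ** 2 for mi, oi in zip(m, o))
    den = sum((oi - obar) ** 2 for oi in o)
    return 1 - num / den

m = [1.0, 2.0, 3.0, 4.0]
o = [1.0, 2.0, 3.0, 4.0]
```

Note that the normalizing factors cancel in R, so the un-normalized covariance and variances above give the same result as the conventional definitions.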

2.4. Gradient Boosting Decision Tree

In our study, gradient boosting decision tree (GBDT) is used to conduct the post-processing of WRF model output. The original results of the WRF model were processed by horizontal and vertical interpolation into values of meteorological variables at different heights of each wind tower. In the GBDT model training process, these meteorological variables are used as input, and the wind speed observations are used as output.

2.4.1. Ensemble Learning Approach: Boosting

The GBDT algorithm is a kind of ensemble learning algorithm. Ensemble learning is not a specific machine learning algorithm but completes the learning task by building and combining multiple machine learners; we call these learners “weak learners” and the combined learner a “strong learner” [51].
Boosting is one such ensemble method [52,53]. At the beginning of boosting training, a weak learner is trained on a training data set with initial weights. The weights of the training data set are then updated according to the learning error of the weak learner, such that training samples with high learning errors receive higher weights; these poorly fitted samples get more attention from the weak learners trained later in the process. After updating the weights, the boosting algorithm continues to train new weak learners based on the updated training data set. This is repeated until the number of weak learners reaches a predetermined number; finally, these weak learners are integrated through a combination strategy to obtain the final strong learner. Figure 4 shows the overall process of the boosting algorithm.
In the history of the development of the boosting algorithm, AdaBoost (adaptive boosting) [54,55] is a common algorithm. In each iteration, AdaBoost calculates the error of each sample, calculates a new weight based on the error of each sample, and performs the next iteration. Unlike AdaBoost, GBDT determines the new weight distribution by calculating the error gradient in each iteration. Therefore, the GBDT algorithm can achieve more accurate results than the AdaBoost algorithm.

2.4.2. Classification and Regression Tree (CART)

The gradient boosting decision tree (GBDT) method is a kind of boosting algorithm. GBDT uses a CART (classification and regression tree) decision tree as its weak learner. A CART decision tree can take categorical and numerical features as its input and can be used for classification and regression tasks.
The decision tree algorithm is a classic machine learning algorithm proposed by Quinlan [56]. In the history of decision tree development, the ID3 and C4.5 decision trees are important types of decision tree algorithms [57]. The C4.5 algorithm addresses deficiencies of the ID3 algorithm, but it still has shortcomings, such as its inability to perform regression and its low computational efficiency. Compared with the C4.5 algorithm, CART has higher computational efficiency and can handle regression problems [58].
For categorical features, the CART decision tree calculates the Gini coefficient [58] to select split features and decide how to split them in the tree branches. The ID3 and C4.5 algorithms use information gain to select features; CART uses the Gini coefficient instead of the information gain ratio. The Gini coefficient represents the impurity of the model: the smaller the Gini coefficient, the lower the impurity and the better the feature.
Suppose a feature has $K$ categories and the probability of the $k$th category is $p_k$. Then, the Gini coefficient is
$$\mathrm{Gini}(p) = \sum_{k=1}^{K} p_k (1 - p_k) = 1 - \sum_{k=1}^{K} p_k^2$$
For a given sample set $D$ with $K$ categories, where the number of samples in the $k$th category is $C_k$, the Gini coefficient of the sample $D$ is
$$\mathrm{Gini}(D) = 1 - \sum_{k=1}^{K} \left(\frac{|C_k|}{|D|}\right)^2$$
If $D$ is divided into two parts, $D_1$ and $D_2$, by a value $a$ of a feature $A$, then, under the condition of feature $A$, the Gini coefficient of $D$ is
$$\mathrm{Gini}(D, A) = \frac{|D_1|}{|D|}\mathrm{Gini}(D_1) + \frac{|D_2|}{|D|}\mathrm{Gini}(D_2)$$
For the numerical features in the regression problem, the CART decision tree uses a sum-of-squared-error measurement. The goal of the CART regression tree is: for a feature $A$, a partition point $s$ divides the data set into $D_1$ and $D_2$; we need to find the feature and partition point that minimize the sum of the squared errors of the two resulting sets:
$$\min_{A,\,s}\left[\min_{c_1}\sum_{x_i \in D_1(A, s)}(y_i - c_1)^2 + \min_{c_2}\sum_{x_i \in D_2(A, s)}(y_i - c_2)^2\right]$$
where $c_1$ is the mean value of the data set $D_1$, and $c_2$ is the mean value of the data set $D_2$.
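The split criterion above can be sketched for a single numerical feature: try each candidate partition point, score it by the summed squared error around each side's mean, and keep the minimum. This is an illustrative sketch with hypothetical data, not the paper's implementation.

```python
# Illustrative CART regression split search over one numerical feature:
# choose the partition point s minimising SSE(D1) + SSE(D2).
def best_split(x, y):
    """Return (split_value, total_sse) minimising the summed squared error."""
    def sse(vals):
        if not vals:
            return 0.0
        c = sum(vals) / len(vals)          # the optimal constant is the mean
        return sum((v - c) ** 2 for v in vals)

    best = (None, float("inf"))
    for s in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= s]
        right = [yi for xi, yi in zip(x, y) if xi > s]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (s, total)
    return best

x = [1, 2, 3, 10, 11, 12]
y = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
s, err = best_split(x, y)   # the split lands between the two clusters
```

A full CART tree repeats this search over every feature at every node and recurses on the two resulting subsets.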

2.4.3. Training Process of GBDT

Unlike AdaBoost, GBDT iterates by calculating gradients [59,60]. Using the CART decision tree, we can perform multiple training iterations, where each iteration trains a new CART decision tree on the training data after the weights are updated. The GBDT algorithm can be expressed by the formula
$$f(x) = \sum_{t=1}^{m} \gamma_t h_t(x)$$
where $f(x)$ is the final strong learner, $h_t(x)$ is the weak learner of each iteration, and $\gamma_t$ is the weight of each weak learner in the strong learner. We suppose that the strong learner is $f_t(x)$ at the $t$th round of the GBDT iteration process and that the loss function is $L(y, f_t(x))$, where $x$ is the input data (WRF output) and $y$ is the label (wind speed observation).
When iterating to a new step, GBDT builds the strong learner in a greedy way:
$$f_t(x) = f_{t-1}(x) + \gamma_t h_t(x)$$
The newly added CART decision tree $h_t(x)$ tries to minimize the loss $L$, where the new loss function is
$$L(y, f_t(x)) = L(y, f_{t-1}(x) + h(x))$$
and the new CART decision tree is
$$h_t(x) = \arg\min_{h} \sum_{i=1}^{n} L(y_i,\; f_{t-1}(x_i) + h(x_i))$$
GBDT’s iteration process solves this minimization problem numerically using steepest descent: the steepest descent direction is the negative gradient of the loss function evaluated at the current model $f_{t-1}$, which can be calculated for any differentiable loss function:
$$f_t(x) = f_{t-1}(x) - \gamma_t \sum_{i=1}^{n} \nabla_f L(y_i,\; f_{t-1}(x_i))$$
where $\gamma_t$ is chosen using line search:
$$\gamma_t = \arg\min_{\gamma} \sum_{i=1}^{n} L\!\left(y_i,\; f_{t-1}(x_i) - \gamma \frac{\partial L(y_i, f_{t-1}(x_i))}{\partial f_{t-1}(x_i)}\right)$$
where $\gamma$ is the weight of the weak learner.
Figure 5 shows the training process of GBDT. When performing the $t$th round of CART decision tree training, the loss of the samples is used to calculate the negative gradient, and the negative gradient of the loss function for the $i$th sample can be expressed as
$$r_{ti} = -\left[\frac{\partial L(y_i, f_{t-1}(x_i))}{\partial f_{t-1}(x_i)}\right]$$
By using $(x_i, r_{ti})\ (i = 1, 2, 3, \ldots, n)$, a CART decision tree can be trained, and the $t$th CART decision tree in the integrated model is obtained as
$$c_t = \arg\min_{c} \sum_{x} L(y,\; f_{t-1}(x) + c)$$
where $c$ is the combination of leaves of the decision tree and $c_t$ is the leaf combination of the $t$th decision tree.
After we obtain the $t$th decision tree, we can update the strong learner:
$$f_t(x) = f_{t-1}(x) + c_t I(x)$$
where $I(x)$ is the leaf output of input $x$.
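For squared-error loss, the negative gradient $r_{ti}$ is simply the residual $y_i - f_{t-1}(x_i)$, so the loop above reduces to repeatedly fitting weak learners to residuals. The sketch below uses one-split regression stumps as weak learners and a fixed shrinkage rate in place of the line search; all names and data are illustrative, not the paper's implementation.

```python
# Illustrative GBDT loop for squared-error loss: each round fits a one-split
# stump to the current residuals (the negative gradient) and adds it with a
# fixed learning rate.
def fit_stump(x, y):
    """One-split stump minimising squared error; returns (s, left_mean, right_mean)."""
    best = None
    for s in sorted(set(x))[:-1]:                 # keep the right side non-empty
        left = [yi for xi, yi in zip(x, y) if xi <= s]
        right = [yi for xi, yi in zip(x, y) if xi > s]
        cl, cr = sum(left) / len(left), sum(right) / len(right)
        err = sum((yi - cl) ** 2 for yi in left) + sum((yi - cr) ** 2 for yi in right)
        if best is None or err < best[0]:
            best = (err, s, cl, cr)
    return best[1:]

def gbdt_fit(x, y, n_trees=50, lr=0.3):
    f0 = sum(y) / len(y)                          # initial constant model
    trees, pred = [], [f0] * len(y)
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # negative gradient
        s, cl, cr = fit_stump(x, resid)
        trees.append((s, cl, cr))
        pred = [pi + lr * (cl if xi <= s else cr) for xi, pi in zip(x, pred)]
    return f0, trees

def gbdt_predict(model, xi, lr=0.3):
    f0, trees = model
    return f0 + sum(lr * (cl if xi <= s else cr) for s, cl, cr in trees)

x = [1, 2, 3, 10, 11, 12]
y = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
model = gbdt_fit(x, y)
```

With shrinkage 0.3, the residual on each side decays by a factor of 0.7 per round, so fifty rounds drive the predictions very close to the two target levels.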

2.4.4. Feature Importance of GBDT

After training the GBDT model, we can calculate the feature importance distribution of the GBDT model to obtain each feature's importance. During the branching of the decision tree, the variable to be split and the split value of the variable are determined by calculating the information gain, which can be expressed as $I(A, D)$, where $A$ is the feature and $D$ is the data sample. After all decision trees are constructed, the importance (or contribution value) of a feature can be obtained by calculating the information gain of the feature over the decision trees and dividing by the total frequency of the feature in all of the trees of the GBDT strong learner:
$$S(a) = \frac{I(a, D)}{N_a}$$
where $N_a$ is the total frequency of feature $a$ in all trees.
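The score above can be sketched by walking all split records of all trees, accumulating each feature's gain and usage count. The `(feature, gain)` record layout is an assumption for illustration, not the paper's or LightGBM's internal format.

```python
# Illustrative feature-importance aggregation: total information gain of each
# feature across all trees, divided by how often the feature was split on.
from collections import defaultdict

def feature_importance(trees):
    gain, count = defaultdict(float), defaultdict(int)
    for tree in trees:
        for feature, g in tree:            # one (feature, gain) record per split
            gain[feature] += g
            count[feature] += 1
    return {f: gain[f] / count[f] for f in gain}

# Hypothetical ensemble: two trees with three splits in total.
trees = [[("ws10", 5.0), ("temp", 1.0)], [("ws10", 3.0)]]
imp = feature_importance(trees)
```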

2.5. Features Selection and Parameters Setting of GBDT

After we obtain the WRF model forecast results for the 0–24 h, 24–48 h, and 48–72 h forecasts, we need to decide which variables to use as the input features of GBDT. The output data interval of the WRF model is 10 min. Therefore, we take the output of each moment of the WRF model as one record of input data for GBDT.
In fitting the near-surface wind speeds, we used multiple variables at different heights and pressure layers as the input features of the GBDT model. We used the linear interpolation method to interpolate the model results into different layers. These variables consist of wind speed, wind direction, temperature, pressure, height, absolute vorticity (avo), and potential vorticity (pvo). Table 3 shows the variables at different pressure layers, the pressure layers being 850, 700, 500, and 300 hPa. Table 4 shows the variables at different height layers, with 29 height layers from 10 m to 5000 m; as our post-processing output result is the near-surface wind speed, we used more height layers in the lower atmosphere.
As the GBDT model is based on the CART decision tree, which can effectively deal with categorical features, we added several categorical features into the input of the GBDT model. Table 5 shows the categorical features used in GBDT. For the date of each forecast record, we took ‘month’ as a categorical feature, the month feature ranging from January to December. As for the time of each forecast record, we took ‘hour’ as a categorical feature, where this feature ranged from 1–24, which indicated the forecast record at different hours of the day.
When we completed the feature engineering, we built the GBDT ensemble model to carry out the post-processing work. We used LightGBM [61] to build our GBDT models, where the input of LightGBM was the features obtained from the output of each forecast time of the WRF model, and the training target of LightGBM was the wind speed observations from the wind towers.
We divided the total forecast records into two parts: one was training data, used to train the GBDT model, and the other was testing data, used to evaluate the error of the GBDT model.
In both the WRF results and the observation data, the interval between adjacent data records is 10 min. The state of the atmosphere is unlikely to change much within 10 min, so adjacent data records are very similar. Therefore, if all the data were randomly divided into training data and test data in the traditional machine learning way, the results would not reflect the true performance of the model. For this reason, we chose to use entire days of data as the training set or test set.
In each month, we chose the data on the 3rd, 7th, 11th, 15th, 19th, 23rd, and 27th as test data and the data on the other dates as training data. Table 6 shows the train and test data split in one month. For each wind tower, we trained the GBDT model on the WRF output and the observed wind speed at different forecast times (0–24 h, 24–48 h, and 48–72 h) and different heights (10, 30, 50, and 70 m).
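The day-based split described above can be sketched as follows; the record layout is an assumption for illustration.

```python
# Illustrative day-based train/test split: records from the 3rd, 7th, 11th,
# 15th, 19th, 23rd, and 27th of each month go to the test set.
from datetime import datetime

TEST_DAYS = {3, 7, 11, 15, 19, 23, 27}

def split_records(records):
    """records: list of (timestamp, features, observed_speed) tuples."""
    train = [r for r in records if r[0].day not in TEST_DAYS]
    test = [r for r in records if r[0].day in TEST_DAYS]
    return train, test

# One hypothetical record per day of June 2009 (features/labels omitted).
records = [(datetime(2009, 6, d, 0, 0), None, None) for d in range(1, 31)]
train, test = split_records(records)
```

Splitting on whole days rather than individual 10-min records keeps near-duplicate neighbouring records from leaking between the training and test sets.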
Before the training of the GBDT models, we had to set the parameters of the GBDT models. Our GBDT models mainly needed to tune two parameters: the number of leaves and the minimum number of data in each leaf. These two parameters were both related to the effect of fitting and could reduce overfitting. Table 7 shows the parameter configurations of LightGBM and the tuning ranges of the two parameters.
In Table 7, two parameters, the number of leaves and the minimum data in leaf, needed to be tuned. We tested pairwise combinations of the two parameters, where the number of leaves took values of 10, 20, 40, 80, and 160, and the minimum data in leaf took values of 10, 20, 40, and 80.
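The pairwise grid above can be built with a Cartesian product; the keys use LightGBM's actual parameter names (`num_leaves`, `min_data_in_leaf`), though the surrounding code is an illustrative sketch.

```python
# Illustrative tuning grid: every pairwise combination of the two LightGBM
# parameters named in the text.
from itertools import product

num_leaves = [10, 20, 40, 80, 160]
min_data_in_leaf = [10, 20, 40, 80]

grid = [{"num_leaves": l, "min_data_in_leaf": d}
        for l, d in product(num_leaves, min_data_in_leaf)]
```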

2.6. Models Used for Comparison

Most previous machine learning based post-processing algorithms used an artificial neural network (ANN) as their machine learning model or as part of an ensemble model, and the base model of our GBDT algorithm is the CART decision tree. We therefore chose multi-layer perceptron regression (MLPR) and decision tree regression (DTR) as the models used for comparison, to show the performance improvement of GBDT over traditional machine learning models. The DTR and MLPR models used the same training data and test data as GBDT, and the RMSEs of each model were calculated to compare the performance of the WRF post-processing results. Table 8 shows the parameter settings of the MLPR model and Table 9 shows the parameter settings of the DTR model.

2.7. Significance Test

When we obtain the statistical variables of all test results, we need to perform significance tests on these variables to verify whether the results of the different tests are significantly different. Among the statistical results of all tests, we need to test whether the following differences are significant:
  • Whether the statistical variables (RMSE, IA, R, NSE) of GBDT results have changed significantly compared to WRF results.
  • Whether the statistical variables (RMSE, IA, R, NSE) of GBDT results have changed significantly compared to the comparison models (DTR, MLPR).
For the significance test, a two-sample Student's t-test is used to calculate the p-value of the following hypothesis:
$$H(\mathrm{test1}\ \&\ \mathrm{test2}):\ S_{\mathrm{test1}} - S_{\mathrm{test2}} = 0$$
where $H(\mathrm{test1}\ \&\ \mathrm{test2})$ is the t-test hypothesis of the two tests, $S_{\mathrm{test1}}$ is the statistical variable of test 1, and $S_{\mathrm{test2}}$ is the same statistical variable of test 2. If the p-value is less than 0.01, the two results are significantly different at the 99% confidence level, while a p-value less than 0.05 means that the difference passes the 95% confidence level.
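The test statistic can be sketched by hand with a pooled-variance two-sample t statistic; the RMSE samples below are hypothetical, and in practice a library routine such as scipy.stats.ttest_ind would also return the p-value.

```python
# Illustrative pooled-variance two-sample t statistic for comparing a
# statistical variable (e.g. RMSE) between two tests. Data are hypothetical.
import math

def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

rmse_wrf = [3.1, 3.3, 2.9, 3.4, 3.0]     # hypothetical per-tower RMSEs
rmse_gbdt = [1.9, 2.1, 1.8, 2.2, 2.0]
t = t_statistic(rmse_wrf, rmse_gbdt)
```

A large positive t here would indicate that the GBDT RMSEs are systematically lower than the WRF RMSEs; the p-value is then read from the t distribution with $n_a + n_b - 2$ degrees of freedom.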

3. Results

3.1. GBDT Parameters Tuning Results

We used the WRF model output and the 10 m wind speed observations of tower ‘10001’ to perform the GBDT parameter tuning. The wind speed forecast time of the tuning work was 0–24 h. Figure 6 shows the tuning results for the parameters ‘number of leaves’ and ‘minimum data in leaf’. From Figure 6, we can see that when ‘L (number of leaves)’ is the same, the curves for different ‘D (minimum data in leaf)’ are very close, and the larger ‘L’ is, the faster the MSE curve decreases. Thus, a larger number of leaves can reduce the MSE and achieve better post-processing results.
Table 10 shows the results of parameter tuning after 2000 iterations. For each L and D, we have the results for the training data (train) and test data (val). It can be seen from Table 10 that, although increasing ‘L’ reduces the MSE of both the training data and the test data, it causes overfitting. Keeping ‘L’ unchanged and increasing ‘D’, ‘val’ shows no obvious change while the MSE of ‘train’ increases, indicating that increasing ‘D’ can weaken overfitting. Combining these results, we set the number of leaves to 80 and the minimum data in leaf to 80 in the GBDT wind speed post-processing model.

3.2. Post-Processing Results

After parameter tuning, we used the train data in Table 6 to train the GBDT model for 0–24 h, 24–48 h, and 48–72 h wind speed forecasts at different heights of different towers and evaluated the results with the test data. We calculated the RMSE, IA, R, and NSE of the wind speed output of WRF model results and post-processing results using the test data sets. Appendix A contains the RMSE results of all wind towers, including 0–24 h (Table A1), 24–48 h (Table A2), and 48–72 h (Table A3). Appendix B contains the IA results of all wind towers, including 0–24 h (Table A4), 24–48 h (Table A5), and 48–72 h (Table A6). Appendix C contains the NSE results of all wind towers, including 0–24 h (Table A7), 24–48 h (Table A8), and 48–72 h (Table A9). Appendix D contains the R results of all wind towers, including 0–24 h (Table A10), 24–48 h (Table A11), and 48–72 h (Table A12).
We calculated the average RMSE, IA, NSE, and R values of the 14 towers to obtain Figure 7, Figure 8, Figure 9 and Figure 10. These figures contain the results of WRF, GBDT, DTR, and MLPR at different heights (10, 30, 50, and 70 m) and different forecast times (0–24 h, 24–48 h, and 48–72 h). We also conducted significance tests using the statistical variables of the different towers; the significance test results are shown in Table 11.
Figure 9 shows the average NSE of the 14 towers in each test. The NSE of the WRF model is close to zero, which means that the simulation results are close to the average level of the observations: the overall results are credible, but the simulation errors are large. Post-processing of the model output is therefore clearly necessary.
From Figure 7, Figure 8, Figure 9 and Figure 10, the RMSE of the WRF output is about 2.7–3.5 m/s, the IA about 0.61–0.75, the NSE about −0.35–0.15, and the R about 0.51–0.67. The increasing RMSE and decreasing IA, R, and NSE from 0–24 h to 24–48 h and 48–72 h indicate that the WRF forecast error grows with forecast lead time.
Table 11 shows that all the significance tests for RMSE, IA, and R passed the 99% confidence level, meaning that the improvements of the GBDT model in RMSE, IA, and R are significant. However, some significance tests for NSE did not pass the 95% confidence level, especially for 24–48 h and 48–72 h, so the NSE improvement in those cases is not statistically significant.
Comparing the WRF results with the post-processing models, each post-processing method reduces the RMSE to some extent. The RMSE of the GBDT results is smaller than that of MLPR and DTR: GBDT reduces the RMSE by 1–1.5 m/s, compared with 0.2–1 m/s for DTR and 0.7–1.2 m/s for MLPR. GBDT also achieved the highest IA, R, and NSE in all tests. Compared with WRF, GBDT greatly improved these three metrics (IA by 0.10–0.20, R by 0.10–0.18, and NSE by −0.06–0.6). The reduction in RMSE and the improvements in IA and R indicate that GBDT fits the near-surface wind speed with a smaller error than the other two post-processing models. Thus, GBDT can be used for post-processing the WRF model to forecast the near-surface wind speed.
To further study how the errors of WRF and GBDT change across months, we calculated the average RMSE of the wind speed forecast for each month. Figure 11 shows the RMSE in different months, including the monthly variation of the WRF and GBDT RMSE and the percentage reduction in RMSE of GBDT relative to WRF.
Figure 11 shows that the RMSE of the WRF forecasts varies greatly between months, with large values in March, April, June, July, and December, whereas the RMSE of the GBDT results varies much less. In the months when the WRF RMSE is large, GBDT achieves a larger reduction, so that the final RMSE is roughly the same in every month.
To compare the post-processing effect of the GBDT model at different times of day, we calculated the RMSE and IA for each hour in each month. Figure 12 shows the RMSE and Figure 13 the IA as functions of month and hour. In July, September, October, and December, the WRF forecasts at hours 12–24 are worse than those at hours 0–12. The GBDT results do not show correspondingly larger errors during these poorly forecast periods: Figure 12c and Figure 13c show that, when the forecast error is larger, GBDT reduces the RMSE and improves the IA more than at other times.

3.3. Weibull Distributions

In general, the distribution of near-surface wind speed can be fitted with a Weibull distribution [62], whose probability density function is
f(x; λ, k) = (k/λ)(x/λ)^(k−1) exp[−(x/λ)^k],  x ≥ 0
Here x is the wind speed, k > 0 is the shape parameter, and λ > 0 is the scale parameter of the Weibull distribution.
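A two-parameter fit of this form can be obtained with `scipy.stats.weibull_min`, fixing the location at zero so that only the shape k and the scale λ are estimated; the synthetic sample below stands in for the observed 10 m wind speeds:

```python
import numpy as np
from scipy import stats

# Synthetic wind speed sample drawn from a known Weibull
# distribution (shape k = 2.0, scale lambda = 6.0 m/s).
k_true, lam_true = 2.0, 6.0
speeds = stats.weibull_min.rvs(k_true, scale=lam_true,
                               size=5000, random_state=42)

# Fit k and lambda with the location fixed at zero, as is
# usual for near-surface wind speed.
k_fit, loc, lam_fit = stats.weibull_min.fit(speeds, floc=0)
```

Comparing the fitted (k, λ) pairs of the observations, the WRF output, and each post-processing model is exactly the comparison reported in Table 12.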
The two parameters of the Weibull distribution can be used to judge whether the distribution of the NWP results or of the post-processing results resembles that of the observations. We used the 10 m wind speed results on the test data to fit Weibull distributions for WRF, GBDT, DTR, and MLPR; the fitted distributions are shown in Figure 14, and their parameters are listed in Table 12.
From Figure 14 and Table 12 we can see that, relative to the original WRF results, the Weibull distribution curves of the post-processing models are closer to the observations. Among the post-processing models, the GBDT curves are closest to the observations in both shape and peak value. These results show that the GBDT post-processing model captures the observed wind speed distribution better than the other post-processing models and the original WRF output.

3.4. GBDT Feature Importance Results

After training the GBDT models, we calculated the feature importance of each GBDT model. We first calculated the feature importance distribution of each GBDT model, and then calculated the average value of the feature importance distribution over the 14 towers. As the results in Figure 7 showed that the forecast results for 0–24 h, 24–48 h, and 48–72 h were not significantly different, we calculated the average of the results of the above three forecast times at different heights (10 m, 30 m, 50 m, and 70 m).
Figure 15 shows the feature importance results. It can be seen from Figure 15 that the 10 m wind speed output of the WRF model had the greatest effect on the GBDT post-processing results at 10 m, 30 m, and 50 m; the 30 m WRF output also had a large contribution. The 50 m and 70 m wind speed outputs of the WRF model could strongly affect the 50 m and 70 m GBDT results, and the 50 m wind speed output of WRF model was the largest contributor to the 70 m GBDT results. This means that the most important components of the near-surface wind speed post-processing model are the near-surface wind speeds in the WRF model.
At the same time, we can also see that the two features ‘month’ and ‘hour’ were also very important. In Figure 15, ‘month’ and ‘hour’ are in the top six features of importance. This means that changes in near-surface wind speeds were related to the forecast month and the forecast hour. We input ‘month’ and ‘hour’ into GBDT as two categorical features, such that these two features could contribute to the GBDT result.
Although the near-surface wind speed features contributed substantially to the GBDT results, the ‘other’ component still accounted for about half of the total importance in Figure 15. This means the remaining features also contributed to the final post-processing results. Thus, even if the effect of any single feature is limited, a large number of features together can still have a strong effect on the result.

3.5. Feature Importance Sensitivity Tests

To further investigate the effect of different features on the results, we set up three sensitivity tests with different input features, keeping the GBDT model parameters the same in each test. Table 13 lists the input features of each test: Test 1 uses all features, Test 2 uses only the ‘other’ features, and Test 3 uses the near-surface wind speed, ‘hour’, and ‘month’ features.
Figure 16 and Figure 17 show the results of each test: Figure 16 the average RMSE and Figure 17 the average IA. We also performed significance tests between Tests 1–3; Table 14 shows the results. The p-values of all significance tests are less than 0.01, meaning that all results passed the 99% confidence level. From Figure 16 and Figure 17, Test 1 has the smallest error and the highest IA, which means that the best post-processed wind speed is obtained when all features are input into the GBDT model. Compared with Test 2, Test 3 has a lower RMSE and a higher IA, so the near-surface wind speed, ‘hour’, and ‘month’ features have a greater impact on the post-processing results than the ‘other’ features. The significant improvement from Test 3 to Test 1 indicates that it is still necessary to add the ‘other’ features to the input of the GBDT post-processing model.
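The text reports only the resulting p-values, not the exact test used. As one plausible sketch, a paired comparison across the 14 towers could look like the following (a paired t-test is assumed here, and the per-tower RMSE values are illustrative):

```python
import numpy as np
from scipy import stats

# Hypothetical per-tower RMSE (m/s) for two input-feature
# tests; one value per tower, illustrative numbers only.
rmse_test1 = np.array([1.30, 1.29, 1.20, 1.35, 1.52, 1.17, 1.18,
                       1.35, 1.41, 1.51, 1.29, 1.25, 1.06, 1.11])
rmse_test3 = rmse_test1 + 0.25 + np.random.default_rng(3).normal(0, 0.03, 14)

# Paired t-test on the per-tower differences: is Test 1's
# RMSE significantly lower than Test 3's?
t_stat, p_value = stats.ttest_rel(rmse_test1, rmse_test3)
significant_99 = p_value < 0.01
```

Pairing by tower removes the between-tower variability, which is why the test can detect a consistent 0.25 m/s improvement with only 14 samples.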

3.6. GBDT Feature Split Value Distributions

In the feature splitting process for numerical features, each split has a split value. The distribution of a feature’s split values depends on the distribution of its values and on the distribution of its contribution. If the split values are densely distributed over an interval, it means that: (1) the feature values are widely distributed in that interval; and (2) a change of the feature value in that interval has a great effect on the GBDT result.
In Section 3.4, we found that the WRF model’s 10 m wind speed was the most important feature for the 10 m GBDT output. We plot the distributions of the 10 m wind speed observations and of the 10 m WRF wind speed feature split values in Figure 18. For the observation distributions, we fitted a Weibull distribution function to the 10 m wind speed observations.
Figure 18 shows the distributions for all towers (towers 10001–10014). The distribution of the 10 m wind speed observations is roughly similar to the distribution of the feature split values, because the distribution of the WRF output wind speed is similar to that of the observations. However, in the high-speed region (wind speed > 8 m/s), the Weibull distribution decreases significantly while the feature split value distribution remains high. This indicates that a change of the feature value in the high wind speed region still has an impact on the GBDT results, and that if the WRF model simulates high wind speeds inaccurately, large errors will appear in the GBDT results. Thus, in the area we studied, improving the WRF model’s performance for high wind speeds can improve the overall wind speed prediction.

4. Conclusions

In this work, based on the WRF model, we conducted a one-year wind speed forecast for the coastal area of Jiangsu, China. According to the power grid’s requirements for wind power forecasting at wind farms, we produced 0–24 h, 24–48 h, and 48–72 h wind speed forecasts. Based on the NWP results, we extracted multiple variables from the WRF output, at different height and pressure levels, for each time step. We built a GBDT regression model to correct the near-surface wind speed output of the WRF model and compared its performance with two other post-processing methods. Finally, we analyzed the feature importance of the GBDT model to find which features had the greatest impact on its results. Our main conclusions are as follows:
The Weibull distributions of the 10 m wind speed results show that, after post-processing, the wind speed distributions were closer to the observations, and that the GBDT model fitted the near-surface wind best.
The root mean square error (RMSE) of the wind speed forecast for the wind farm was approximately 2.7–3.5 m/s across the different wind towers and levels. Wind speed errors cause greater errors in wind power forecasting, which affects the operation of wind farms and power grids. After GBDT correction, the RMSE of the wind speed on the test data sets was reduced by 1–1.5 m/s. The IA and R results also improved: the IA by 0.10–0.20 and the R by 0.10–0.18. These improvements indicate that the GBDT model, using a large number of features as input, can reduce the wind speed forecast error of the wind farm.
The error of the WRF results varies greatly across months and hours. In the months and hours with large errors, the GBDT model achieves a larger error reduction, so that the error of the final wind speed forecast does not differ significantly between months or hours.
By analyzing the feature importance of each GBDT model, we found that the distribution of feature importance is different for the correction models at different heights. From the 10, 30, 50, and 70 m wind speed correction results, we found that the near-surface wind speed distribution has a strong impact on the correction results. The 10 m model output wind speed can greatly affect the correction results at 10, 30, and 50 m. The 30 m model output wind speed can affect 10, 30, 50, and 70 m correction results. The 50 m model output wind speed can affect the 50 m and 70 m correction results. The 70 m model output wind speed can affect the 70 m correction results.
At the same time, we found that two categorical features, ‘month’ and ‘hour’, also had a great impact on the result of the correction. This shows that WRF simulation errors have some characteristics in different months and at different hours of the day.
From the feature split distribution of the 10 m WRF output wind speed, it can be seen that the feature split distribution and the Weibull distribution of wind speed do not completely coincide: the split value distribution in the high wind speed region exceeds the Weibull distribution. This shows that the decision trees branch frequently at high wind speed values. Thus, high wind speeds simulated by the WRF model have a great impact on the GBDT results and can easily cause errors.
Many studies have used a numerical weather model to make weather forecasts and applied post-processing algorithms to correct them. Such post-processing algorithms used only a few features as input, for two main reasons: (1) in some machine learning algorithms, such as ANN and SVM, too many input features degrade performance because of less-effective features; and (2) in some algorithms, the amount of computation grows quadratically or exponentially with the number of features, making the model impossible to train in a reasonable time. The GBDT algorithm we used does not have these problems: it can take a huge number of features as input, pick out the important ones, and ignore the less important ones. Our results showed that even the most important feature (the near-surface wind speed) accounts for only a small portion of the total importance, and that smaller errors can be achieved with more features.
Categorical features, such as forecast month and forecast hour, are difficult to feed into post-processing algorithms such as neural networks and other statistical methods, whereas a decision tree can process them easily. The GBDT algorithm therefore also has an advantage over other algorithms in dealing with categorical features. Our results showed that the feature importance of the forecast month and forecast hour was high, so these categorical features improved our model’s performance.
Another advantage of GBDT is that it can calculate feature importance by analyzing the gain of information while the decision tree is split. Therefore, we can find which features are important and which are less important.
In summary, a more effective method for wind speed forecasting of wind farms is to use a mesoscale meteorological model to forecast wind speed, followed by use of the GBDT algorithm to correct the model simulation results.

Author Contributions

Conceptualization, Y.L.; Methodology, W.X.; Software, W.X.; Validation, W.X.; Formal analysis, W.X.; Investigation, W.X.; Resources, Y.L.; Data curation, W.X.; Writing—original draft preparation, W.X.; Writing—review and editing, Y.L. and L.N.; Visualization, W.X.; Supervision, Y.L. and L.N.; Project administration, Y.L.; Funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program of China (2018YFB1502803) and Scientific Research Program of Tsinghua University “Research on Wind Farm Weather Forecasting Technology for Power Grid”.

Acknowledgments

The FNL data were provided by the CISL Research Data Archive (RDA) website (https://rda.ucar.edu/datasets/ds083.2/). The wind observation data of the 14 wind towers in Jiangsu were provided by the National Climate Center (China Meteorological Administration). These data played a key role in our research, and we are grateful to the data providers.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The 0–24 h RMSE (m/s) of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 3.00 1.30 2.08 1.78 | 3.10 1.46 2.30 1.99 | 3.33 1.69 2.67 2.37 | 3.36 1.62 2.58 2.23
2 | 2.64 1.29 2.20 1.92 | 2.73 1.53 2.41 2.19 | 2.83 1.65 2.65 2.19 | 3.08 1.71 2.76 2.41
3 | 2.82 1.20 2.01 1.65 | 2.70 1.50 2.25 2.08 | 2.70 1.66 2.76 2.16 | 2.89 1.78 2.73 2.37
4 | 2.28 1.35 2.17 1.89 | 2.42 1.65 2.54 2.15 | 2.57 1.79 2.70 2.34 | 2.76 1.91 2.88 2.59
5 | 2.69 1.52 2.31 2.00 | 2.63 1.72 2.67 2.27 | 2.65 1.83 2.80 2.53 | 2.75 1.92 2.94 2.59
6 | 3.41 1.17 1.88 1.68 | 2.82 1.54 2.41 2.09 | 2.76 1.67 2.72 2.25 | 2.94 1.76 2.68 2.41
7 | 3.45 1.18 1.97 1.69 | 2.80 1.59 2.63 2.16 | 2.72 1.75 2.70 2.28 | 2.83 1.83 2.85 2.38
8 | 3.04 1.35 2.14 1.87 | 2.87 1.68 2.60 2.27 | 2.76 1.83 2.73 2.48 | 3.03 1.91 2.93 2.53
9 | 2.93 1.41 2.37 2.13 | 2.72 1.57 2.57 2.07 | 2.72 1.72 2.62 2.32 | 2.81 1.80 3.04 2.39
10 | 2.30 1.51 2.37 2.02 | 2.47 1.62 2.51 2.10 | 2.64 1.71 2.74 2.23 | 2.98 1.81 2.79 2.39
11 | 2.93 1.29 2.16 1.71 | 2.82 1.43 2.39 1.92 | 2.82 1.51 2.50 2.03 | 2.91 1.58 2.66 2.03
12 | 2.39 1.25 1.97 1.65 | 2.38 1.46 2.37 1.99 | 2.36 1.49 2.53 2.14 | 2.43 1.57 2.65 2.18
13 | 2.87 1.06 1.75 1.42 | 2.76 1.35 2.20 1.78 | 2.65 1.55 2.44 1.95 | 2.76 1.63 2.53 2.10
14 | 3.49 1.11 1.89 1.58 | 3.22 1.35 2.15 1.80 | 2.85 1.47 2.44 2.09 | 2.81 1.63 2.48 2.06
Ave | 2.87 1.28 2.09 1.78 | 2.75 1.53 2.43 2.06 | 2.74 1.66 2.64 2.24 | 2.88 1.75 2.75 2.33
Table A2. The 24–48 h RMSE (m/s) of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 3.13 1.34 2.24 2.00 | 3.26 1.50 2.36 2.18 | 3.50 1.70 2.79 2.40 | 3.54 1.66 2.80 2.45
2 | 2.78 1.40 2.34 2.01 | 2.92 1.61 2.80 2.36 | 3.05 1.77 2.93 2.43 | 3.27 1.85 2.97 2.63
3 | 3.03 1.26 2.15 1.78 | 2.94 1.60 2.56 2.30 | 2.98 1.75 2.87 2.68 | 3.19 1.84 3.11 2.72
4 | 2.61 1.48 2.40 2.06 | 2.82 1.87 3.07 2.42 | 3.00 2.02 3.07 2.68 | 3.23 2.14 3.20 2.84
5 | 3.01 1.62 2.50 2.17 | 3.03 1.91 2.88 2.53 | 3.07 2.03 3.22 2.70 | 3.20 2.13 3.39 2.76
6 | 3.82 1.39 2.19 1.90 | 3.37 1.80 2.74 2.40 | 3.33 1.94 2.96 2.60 | 3.54 2.05 3.18 2.74
7 | 3.83 1.39 2.23 1.90 | 3.32 1.85 2.88 2.45 | 3.28 2.01 3.02 2.72 | 3.42 2.06 3.19 2.77
8 | 3.47 1.53 2.40 2.06 | 3.37 1.89 2.93 2.55 | 3.31 2.03 2.99 2.66 | 3.58 2.15 3.54 2.87
9 | 3.33 1.64 2.46 2.24 | 3.19 1.83 2.76 2.39 | 3.24 1.94 3.05 2.55 | 3.35 2.03 3.00 2.81
10 | 2.57 1.69 2.79 2.26 | 2.85 1.80 2.80 2.46 | 3.09 1.93 2.83 2.60 | 3.47 2.07 3.18 2.85
11 | 3.33 1.46 2.29 2.02 | 3.28 1.61 2.72 2.19 | 3.30 1.71 2.71 2.35 | 3.40 1.82 2.92 2.45
12 | 2.77 1.46 2.24 1.96 | 2.84 1.73 2.93 2.42 | 2.86 1.80 2.90 2.45 | 2.97 1.85 2.86 2.47
13 | 3.15 1.23 1.95 1.62 | 3.10 1.51 2.45 2.13 | 3.03 1.71 2.64 2.34 | 3.18 1.76 2.72 2.41
14 | 3.69 1.26 2.37 1.76 | 3.54 1.59 2.62 2.06 | 3.26 1.73 2.69 2.32 | 3.26 1.85 2.97 2.47
Ave | 3.18 1.44 2.33 1.98 | 3.13 1.72 2.75 2.35 | 3.16 1.86 2.91 2.53 | 3.33 1.95 3.07 2.66
Table A3. The 48–72 h RMSE (m/s) of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 3.35 1.44 2.34 2.08 | 3.51 1.60 2.62 2.33 | 3.70 1.75 2.74 2.48 | 3.81 1.77 2.76 2.49
2 | 3.00 1.46 2.37 2.19 | 3.19 1.70 2.59 2.43 | 3.32 1.81 2.91 2.52 | 3.57 1.90 3.04 2.74
3 | 3.31 1.38 2.10 1.87 | 3.33 1.74 2.76 2.42 | 3.37 1.87 2.91 2.57 | 3.59 2.01 2.96 2.79
4 | 2.80 1.61 2.44 2.18 | 3.07 1.98 2.97 2.60 | 3.24 2.07 3.20 2.66 | 3.47 2.19 3.37 2.89
5 | 3.10 1.66 2.69 2.21 | 3.16 1.95 3.02 2.64 | 3.22 2.06 3.13 2.54 | 3.35 2.11 3.48 2.84
6 | 3.83 1.53 2.39 1.99 | 3.42 2.00 2.88 2.79 | 3.39 2.16 3.16 2.74 | 3.59 2.25 3.43 2.84
7 | 3.83 1.49 2.33 1.93 | 3.39 2.05 3.26 2.58 | 3.37 2.19 3.24 2.77 | 3.51 2.31 3.42 2.98
8 | 3.51 1.68 2.50 2.24 | 3.46 2.07 3.03 2.75 | 3.40 2.21 3.37 2.90 | 3.64 2.33 3.45 3.07
9 | 3.34 1.69 2.54 2.36 | 3.23 1.89 2.89 2.64 | 3.26 2.01 3.17 2.70 | 3.37 2.11 3.36 2.92
10 | 2.68 1.93 2.99 2.59 | 2.95 2.03 2.97 2.61 | 3.18 2.02 3.07 2.77 | 3.53 2.18 3.22 3.00
11 | 3.36 1.53 2.36 2.05 | 3.35 1.71 2.67 2.24 | 3.38 1.79 2.78 2.45 | 3.49 1.95 3.00 2.59
12 | 2.89 1.55 2.42 2.24 | 3.01 1.81 2.77 2.55 | 3.06 1.92 3.14 2.74 | 3.16 1.96 3.12 2.85
13 | 3.26 1.34 2.01 1.76 | 3.30 1.68 2.51 2.17 | 3.25 1.87 2.77 2.58 | 3.45 1.92 2.96 2.67
14 | 3.90 1.32 2.21 1.82 | 3.79 1.69 2.73 2.25 | 3.55 1.87 2.94 2.57 | 3.58 1.99 3.20 2.57
Ave | 3.30 1.54 2.41 2.11 | 3.30 1.85 2.83 2.50 | 3.34 1.97 3.04 2.64 | 3.51 2.07 3.20 2.80

Appendix B

Table A4. The 0–24 h IA of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 0.59 0.83 0.66 0.74 | 0.62 0.80 0.65 0.69 | 0.62 0.77 0.61 0.65 | 0.62 0.78 0.64 0.69
2 | 0.67 0.85 0.69 0.73 | 0.70 0.83 0.68 0.71 | 0.70 0.82 0.66 0.75 | 0.68 0.82 0.65 0.71
3 | 0.65 0.85 0.68 0.76 | 0.73 0.85 0.75 0.78 | 0.75 0.85 0.68 0.79 | 0.74 0.85 0.71 0.77
4 | 0.78 0.88 0.74 0.79 | 0.80 0.86 0.73 0.79 | 0.79 0.85 0.72 0.77 | 0.77 0.84 0.71 0.75
5 | 0.74 0.86 0.72 0.79 | 0.78 0.86 0.70 0.77 | 0.80 0.85 0.72 0.75 | 0.79 0.85 0.71 0.78
6 | 0.65 0.89 0.76 0.79 | 0.76 0.86 0.73 0.79 | 0.78 0.86 0.70 0.77 | 0.77 0.86 0.75 0.78
7 | 0.64 0.88 0.74 0.79 | 0.76 0.87 0.69 0.80 | 0.79 0.86 0.73 0.80 | 0.78 0.86 0.71 0.79
8 | 0.69 0.86 0.71 0.80 | 0.75 0.85 0.71 0.77 | 0.78 0.84 0.71 0.75 | 0.75 0.84 0.69 0.74
9 | 0.70 0.87 0.72 0.77 | 0.76 0.87 0.72 0.79 | 0.77 0.86 0.72 0.78 | 0.78 0.86 0.68 0.79
10 | 0.80 0.91 0.80 0.86 | 0.80 0.89 0.76 0.84 | 0.78 0.88 0.74 0.81 | 0.75 0.87 0.73 0.80
11 | 0.71 0.89 0.74 0.82 | 0.75 0.89 0.73 0.81 | 0.76 0.89 0.74 0.81 | 0.77 0.89 0.74 0.82
12 | 0.78 0.91 0.81 0.86 | 0.82 0.91 0.80 0.84 | 0.83 0.91 0.78 0.81 | 0.83 0.90 0.76 0.82
13 | 0.70 0.91 0.80 0.85 | 0.76 0.89 0.76 0.84 | 0.79 0.88 0.74 0.83 | 0.79 0.87 0.73 0.80
14 | 0.63 0.88 0.71 0.78 | 0.70 0.88 0.75 0.81 | 0.77 0.89 0.75 0.80 | 0.79 0.88 0.77 0.84
Ave | 0.70 0.88 0.73 0.80 | 0.75 0.87 0.73 0.79 | 0.76 0.86 0.71 0.78 | 0.76 0.86 0.71 0.78
Table A5. The 24–48 h IA of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 0.56 0.81 0.64 0.67 | 0.59 0.78 0.61 0.62 | 0.58 0.76 0.56 0.62 | 0.59 0.75 0.56 0.62
2 | 0.65 0.82 0.63 0.71 | 0.67 0.80 0.60 0.67 | 0.67 0.79 0.59 0.70 | 0.66 0.78 0.60 0.66
3 | 0.62 0.83 0.63 0.72 | 0.69 0.83 0.66 0.70 | 0.71 0.83 0.66 0.71 | 0.70 0.83 0.63 0.70
4 | 0.73 0.85 0.70 0.77 | 0.74 0.81 0.60 0.74 | 0.72 0.79 0.62 0.71 | 0.70 0.79 0.63 0.70
5 | 0.70 0.84 0.69 0.75 | 0.73 0.82 0.67 0.76 | 0.74 0.82 0.63 0.74 | 0.74 0.82 0.66 0.75
6 | 0.60 0.84 0.68 0.74 | 0.68 0.81 0.66 0.72 | 0.70 0.81 0.67 0.74 | 0.68 0.81 0.66 0.72
7 | 0.60 0.84 0.69 0.76 | 0.69 0.82 0.67 0.74 | 0.71 0.82 0.68 0.72 | 0.70 0.82 0.66 0.74
8 | 0.63 0.82 0.67 0.73 | 0.68 0.81 0.64 0.74 | 0.71 0.80 0.68 0.72 | 0.68 0.78 0.58 0.71
9 | 0.64 0.83 0.68 0.72 | 0.69 0.82 0.67 0.72 | 0.70 0.82 0.67 0.74 | 0.70 0.82 0.69 0.71
10 | 0.77 0.88 0.74 0.81 | 0.75 0.86 0.73 0.79 | 0.72 0.84 0.72 0.76 | 0.69 0.82 0.66 0.74
11 | 0.66 0.86 0.70 0.78 | 0.69 0.85 0.64 0.75 | 0.70 0.84 0.68 0.75 | 0.70 0.85 0.69 0.76
12 | 0.72 0.87 0.74 0.79 | 0.75 0.86 0.69 0.76 | 0.76 0.86 0.69 0.76 | 0.76 0.85 0.70 0.79
13 | 0.67 0.87 0.74 0.81 | 0.71 0.86 0.70 0.76 | 0.73 0.84 0.70 0.75 | 0.73 0.84 0.71 0.72
14 | 0.60 0.83 0.55 0.73 | 0.64 0.82 0.61 0.73 | 0.70 0.83 0.69 0.73 | 0.72 0.83 0.67 0.73
Ave | 0.65 0.84 0.68 0.75 | 0.69 0.83 0.65 0.73 | 0.70 0.82 0.66 0.72 | 0.70 0.81 0.65 0.72
Table A6. The 48–72 h IA of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 0.56 0.79 0.62 0.66 | 0.58 0.77 0.58 0.64 | 0.59 0.76 0.59 0.60 | 0.59 0.73 0.59 0.60
2 | 0.65 0.81 0.63 0.68 | 0.66 0.79 0.62 0.64 | 0.66 0.78 0.61 0.65 | 0.65 0.78 0.59 0.65
3 | 0.59 0.80 0.66 0.70 | 0.65 0.80 0.63 0.69 | 0.67 0.81 0.65 0.71 | 0.66 0.80 0.67 0.68
4 | 0.71 0.82 0.68 0.74 | 0.71 0.80 0.63 0.72 | 0.70 0.80 0.61 0.71 | 0.69 0.79 0.62 0.69
5 | 0.69 0.83 0.64 0.76 | 0.72 0.81 0.66 0.71 | 0.73 0.81 0.68 0.76 | 0.72 0.81 0.62 0.72
6 | 0.58 0.80 0.63 0.72 | 0.66 0.76 0.62 0.64 | 0.67 0.76 0.63 0.68 | 0.66 0.77 0.58 0.69
7 | 0.58 0.81 0.65 0.73 | 0.66 0.77 0.58 0.70 | 0.68 0.78 0.62 0.70 | 0.68 0.77 0.61 0.67
8 | 0.61 0.78 0.64 0.67 | 0.65 0.76 0.60 0.65 | 0.68 0.76 0.58 0.68 | 0.66 0.74 0.59 0.64
9 | 0.62 0.81 0.64 0.68 | 0.67 0.79 0.64 0.70 | 0.68 0.79 0.62 0.68 | 0.68 0.79 0.61 0.68
10 | 0.73 0.83 0.65 0.73 | 0.72 0.81 0.67 0.74 | 0.69 0.80 0.65 0.69 | 0.66 0.79 0.64 0.68
11 | 0.63 0.83 0.70 0.74 | 0.65 0.81 0.67 0.74 | 0.67 0.81 0.66 0.72 | 0.67 0.81 0.65 0.72
12 | 0.69 0.84 0.70 0.73 | 0.71 0.84 0.72 0.73 | 0.71 0.82 0.63 0.70 | 0.72 0.82 0.66 0.72
13 | 0.63 0.84 0.70 0.76 | 0.67 0.81 0.68 0.73 | 0.69 0.79 0.66 0.67 | 0.68 0.80 0.63 0.67
14 | 0.56 0.80 0.62 0.71 | 0.60 0.78 0.59 0.70 | 0.65 0.79 0.62 0.69 | 0.67 0.79 0.61 0.72
Ave | 0.63 0.81 0.66 0.72 | 0.66 0.79 0.64 0.69 | 0.68 0.79 0.63 0.69 | 0.67 0.78 0.62 0.68

Appendix C

Table A7. The 0–24 h NSE of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | −0.67 0.11 −0.09 0.11 | −0.46 −0.01 −0.07 −0.12 | −0.51 −0.19 −0.14 −0.21 | −0.39 −0.20 −0.07 −0.04
2 | −0.20 0.28 0.01 0.05 | −0.06 0.18 −0.01 −0.02 | −0.05 0.13 −0.02 0.14 | −0.13 0.16 −0.05 0.03
3 | −0.22 0.24 −0.02 0.13 | 0.07 0.26 0.16 0.23 | 0.14 0.28 −0.02 0.21 | 0.10 0.29 0.07 0.18
4 | 0.18 0.43 0.06 0.22 | 0.26 0.33 0.07 0.22 | 0.24 0.30 0.03 0.13 | 0.22 0.24 0.04 0.09
5 | 0.04 0.31 0.04 0.18 | 0.23 0.30 −0.05 0.08 | 0.28 0.28 0.03 0.03 | 0.29 0.30 0.01 0.23
6 | −0.49 0.46 0.15 0.16 | 0.14 0.28 0.04 0.22 | 0.23 0.28 −0.02 0.08 | 0.20 0.30 0.12 0.12
7 | −0.51 0.43 0.16 0.21 | 0.14 0.34 −0.10 0.26 | 0.27 0.31 0.06 0.18 | 0.26 0.30 −0.08 0.15
8 | −0.17 0.31 −0.02 0.29 | 0.11 0.25 −0.04 0.14 | 0.25 0.18 −0.08 0.03 | 0.16 0.15 −0.03 −0.09
9 | −0.16 0.37 0.04 0.24 | 0.16 0.37 0.04 0.18 | 0.22 0.26 −0.02 0.16 | 0.23 0.29 −0.05 0.21
10 | 0.11 0.58 0.22 0.45 | 0.19 0.49 0.08 0.34 | 0.17 0.43 0.11 0.25 | 0.05 0.40 0.02 0.19
11 | −0.16 0.47 0.06 0.27 | 0.09 0.46 0.08 0.24 | 0.17 0.45 0.15 0.28 | 0.19 0.45 0.14 0.25
12 | 0.13 0.56 0.37 0.45 | 0.31 0.57 0.32 0.35 | 0.38 0.56 0.24 0.21 | 0.40 0.55 0.18 0.29
13 | −0.16 0.58 0.26 0.42 | 0.13 0.49 0.16 0.36 | 0.26 0.39 0.07 0.31 | 0.28 0.35 0.01 0.19
14 | −0.50 0.43 0.01 0.17 | −0.05 0.42 0.13 0.28 | 0.24 0.48 0.17 0.23 | 0.32 0.43 0.22 0.39
Ave | −0.20 0.40 0.09 0.24 | 0.09 0.34 0.06 0.20 | 0.16 0.29 0.04 0.15 | 0.16 0.29 0.04 0.16
Table A8. The 24–48 h NSE of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | −0.76 −0.04 −0.07 −0.13 | −0.57 −0.22 −0.26 −0.39 | −0.63 −0.36 −0.33 −0.40 | −0.50 −0.44 −0.26 −0.27
2 | −0.22 0.14 −0.13 0.01 | −0.09 0.05 −0.16 −0.12 | −0.10 −0.05 −0.20 −0.03 | −0.15 −0.07 −0.18 −0.08
3 | −0.31 0.10 −0.16 0.02 | −0.02 0.19 −0.11 −0.07 | 0.04 0.15 −0.08 0.11 | 0.00 0.17 −0.16 0.04
4 | 0.02 0.22 0.02 0.16 | 0.08 0.01 −0.35 0.08 | 0.07 −0.04 −0.31 −0.01 | 0.03 −0.09 −0.19 −0.08
5 | −0.11 0.24 −0.02 0.10 | 0.07 0.12 −0.11 0.16 | 0.12 0.14 −0.21 0.09 | 0.14 0.15 −0.00 0.16
6 | −0.71 0.26 −0.01 0.07 | −0.10 0.08 −0.14 −0.05 | 0.01 0.10 −0.06 0.10 | −0.03 0.05 −0.06 0.00
7 | −0.69 0.27 0.06 0.18 | −0.07 0.14 0.02 0.07 | 0.05 0.12 −0.02 −0.02 | 0.05 0.11 −0.15 0.08
8 | −0.38 0.12 −0.08 0.04 | −0.09 0.08 −0.18 0.13 | 0.03 −0.02 −0.05 −0.07 | −0.04 −0.15 −0.23 −0.02
9 | −0.34 0.16 −0.12 −0.02 | −0.04 0.12 −0.05 −0.09 | 0.02 0.15 −0.04 0.07 | 0.03 0.12 −0.03 −0.05
10 | 0.03 0.41 0.15 0.25 | 0.07 0.32 0.08 0.26 | 0.02 0.18 0.03 0.16 | −0.11 0.12 −0.20 0.11
11 | −0.30 0.31 −0.02 0.22 | −0.06 0.25 −0.20 0.09 | 0.01 0.21 −0.09 0.09 | 0.03 0.24 0.00 0.09
12 | −0.04 0.36 0.14 0.24 | 0.13 0.35 0.03 0.12 | 0.20 0.30 −0.04 0.09 | 0.21 0.29 −0.01 0.25
13 | −0.31 0.37 0.10 0.29 | −0.04 0.26 −0.04 0.10 | 0.10 0.17 −0.02 0.05 | 0.11 0.16 0.02 −0.20
14 | −0.63 0.11 −0.44 0.03 | −0.24 0.01 −0.36 −0.10 | 0.03 0.09 −0.00 −0.12 | 0.12 0.11 −0.14 −0.10
Ave | −0.34 0.22 −0.04 0.10 | −0.07 0.13 −0.13 0.01 | −0.00 0.08 −0.10 0.01 | −0.01 0.05 −0.11 −0.01
Table A9. The 48–72 h NSE of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | −0.57 −0.00 −0.13 −0.12 | −0.40 −0.14 −0.21 −0.12 | −0.45 −0.26 −0.22 −0.40 | −0.34 −0.36 −0.17 −0.29
2 | −0.11 0.13 −0.14 0.01 | −0.01 0.02 −0.19 −0.18 | −0.00 −0.01 −0.11 −0.15 | −0.06 −0.02 −0.21 −0.06
3 | −0.19 0.01 −0.09 −0.01 | 0.01 0.05 −0.16 −0.05 | 0.07 0.09 −0.11 0.05 | 0.02 0.04 −0.05 −0.07
4 | 0.06 0.13 −0.08 0.13 | 0.11 0.05 −0.22 0.07 | 0.11 0.06 −0.21 −0.04 | 0.09 0.01 −0.16 −0.07
5 | −0.04 0.15 −0.18 0.16 | 0.11 0.05 −0.06 −0.02 | 0.16 0.07 −0.01 0.12 | 0.17 0.11 −0.16 0.04
6 | −0.72 0.03 −0.11 0.01 | −0.13 −0.25 −0.23 −0.18 | −0.02 −0.19 −0.17 −0.11 | −0.05 −0.12 −0.36 −0.13
7 | −0.71 0.09 −0.10 0.03 | −0.11 −0.18 −0.23 −0.06 | 0.00 −0.11 −0.30 −0.13 | 0.01 −0.10 −0.29 −0.22
8 | −0.44 −0.13 −0.13 −0.20 | −0.15 −0.27 −0.39 −0.28 | −0.02 −0.29 −0.35 −0.11 | −0.08 −0.39 −0.29 −0.26
9 | −0.39 0.05 −0.25 −0.16 | −0.09 −0.09 −0.16 0.01 | −0.03 −0.09 −0.23 −0.20 | −0.00 −0.11 −0.23 −0.16
10 | −0.12 0.12 −0.28 −0.08 | −0.03 0.03 −0.18 0.05 | −0.08 −0.08 −0.21 −0.18 | −0.19 −0.16 −0.23 −0.14
11 | −0.40 0.08 −0.01 0.00 | −0.15 0.03 −0.08 0.04 | −0.07 −0.04 −0.16 −0.00 | −0.04 −0.05 −0.18 −0.01
12 | −0.15 0.20 0.03 0.08 | 0.01 0.16 0.11 0.02 | 0.07 0.08 −0.27 −0.06 | 0.11 0.08 −0.07 0.11
13 | −0.40 0.13 −0.05 0.09 | −0.14 0.01 −0.07 −0.08 | 0.00 −0.14 −0.17 −0.27 | −0.02 −0.10 −0.30 −0.32
14 | −0.75 −0.06 −0.19 −0.04 | −0.34 −0.17 −0.34 −0.11 | −0.07 −0.13 −0.26 −0.10 | 0.01 −0.14 −0.29 −0.06
Ave | −0.35 0.07 −0.12 −0.01 | −0.09 −0.05 −0.17 −0.06 | −0.02 −0.07 −0.20 −0.11 | −0.03 −0.09 −0.21 −0.12

Appendix D

Table A10. The 0–24 h R of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 0.48* 0.73* 0.43* 0.56* | 0.52* 0.69* 0.41* 0.49* | 0.52* 0.64* 0.36* 0.41* | 0.52* 0.66* 0.39* 0.47*
2 | 0.60* 0.76* 0.45* 0.53* | 0.61* 0.71* 0.45* 0.50* | 0.60* 0.71* 0.43* 0.56* | 0.58* 0.70* 0.41* 0.50*
3 | 0.59* 0.76* 0.45* 0.59* | 0.63* 0.76* 0.55* 0.61* | 0.64* 0.76* 0.45* 0.63* | 0.63* 0.74* 0.50* 0.60*
4 | 0.72* 0.81* 0.54* 0.64* | 0.69* 0.78* 0.53* 0.64* | 0.67* 0.76* 0.51* 0.60* | 0.65* 0.74* 0.51* 0.57*
5 | 0.66* 0.77* 0.53* 0.63* | 0.68* 0.78* 0.50* 0.61* | 0.68* 0.77* 0.52* 0.58* | 0.67* 0.76* 0.50* 0.62*
6 | 0.70* 0.82* 0.58* 0.64* | 0.71* 0.80* 0.53* 0.64* | 0.70* 0.79* 0.49* 0.61* | 0.67* 0.79* 0.56* 0.61*
7 | 0.67* 0.82* 0.56* 0.64* | 0.69* 0.79* 0.47* 0.64* | 0.69* 0.79* 0.54* 0.65* | 0.68* 0.79* 0.50* 0.64*
8 | 0.68* 0.79* 0.51* 0.65* | 0.67* 0.78* 0.50* 0.61* | 0.68* 0.76* 0.50* 0.57* | 0.65* 0.75* 0.48* 0.56*
9 | 0.62* 0.80* 0.51* 0.61* | 0.67* 0.79* 0.51* 0.65* | 0.68* 0.78* 0.52* 0.61* | 0.68* 0.78* 0.46* 0.63*
10 | 0.70* 0.85* 0.64* 0.74* | 0.70* 0.83* 0.58* 0.71* | 0.68* 0.80* 0.55* 0.67* | 0.65* 0.80* 0.54* 0.64*
11 | 0.69* 0.83* 0.54* 0.69* | 0.70* 0.81* 0.53* 0.67* | 0.70* 0.81* 0.56* 0.67* | 0.69* 0.82* 0.55* 0.70*
12 | 0.75* 0.85* 0.67* 0.74* | 0.74* 0.85* 0.65* 0.71* | 0.74* 0.85* 0.60* 0.67* | 0.74* 0.83* 0.58* 0.68*
13 | 0.75* 0.85* 0.63* 0.74* | 0.73* 0.83* 0.58* 0.71* | 0.72* 0.81* 0.55* 0.70* | 0.72* 0.80* 0.54* 0.66*
14 | 0.68* 0.80* 0.49* 0.61* | 0.67* 0.81* 0.56* 0.67* | 0.70* 0.82* 0.57* 0.65* | 0.72* 0.81* 0.61* 0.71*
Ave | 0.66 0.80 0.54 0.64 | 0.67 0.79 0.53 0.63 | 0.67 0.77 0.51 0.61 | 0.66 0.77 0.51 0.61
* The correlation coefficient has a confidence level of more than 99%.
Table A11. The 24–48 h R of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 0.46* 0.70* 0.40* 0.44* | 0.48* 0.66* 0.34* 0.36* | 0.47* 0.63* 0.27* 0.36* | 0.46* 0.63* 0.28* 0.36*
2 | 0.56* 0.71* 0.38* 0.50* | 0.56* 0.68* 0.33* 0.43* | 0.54* 0.66* 0.32* 0.47* | 0.54* 0.64* 0.33* 0.43*
3 | 0.56* 0.73* 0.37* 0.53* | 0.58* 0.72* 0.42* 0.49* | 0.59* 0.73* 0.42* 0.51* | 0.57* 0.72* 0.37* 0.50*
4 | 0.65* 0.76* 0.48* 0.59* | 0.60* 0.71* 0.33* 0.56* | 0.57* 0.68* 0.36* 0.51* | 0.54* 0.66* 0.38* 0.48*
5 | 0.59* 0.74* 0.47* 0.57* | 0.59* 0.71* 0.44* 0.58* | 0.60* 0.71* 0.38* 0.55* | 0.59* 0.70* 0.44* 0.57*
6 | 0.57* 0.74* 0.46* 0.55* | 0.56* 0.70* 0.42* 0.52* | 0.56* 0.70* 0.44* 0.55* | 0.53* 0.70* 0.43* 0.52*
7 | 0.57* 0.73* 0.48* 0.58* | 0.56* 0.71* 0.46* 0.55* | 0.56* 0.71* 0.46* 0.52* | 0.55* 0.72* 0.42* 0.55*
8 | 0.56* 0.72* 0.43* 0.54* | 0.55* 0.70* 0.40* 0.55* | 0.56* 0.69* 0.46* 0.51* | 0.53* 0.67* 0.32* 0.50*
9 | 0.52* 0.72* 0.45* 0.52* | 0.56* 0.71* 0.45* 0.53* | 0.56* 0.71* 0.44* 0.55* | 0.56* 0.71* 0.47* 0.51*
10 | 0.64* 0.81* 0.55* 0.66* | 0.62* 0.78* 0.53* 0.63* | 0.59* 0.74* 0.51* 0.59* | 0.54* 0.72* 0.42* 0.55*
11 | 0.60* 0.77* 0.49* 0.61* | 0.60* 0.76* 0.39* 0.58* | 0.60* 0.75* 0.46* 0.57* | 0.59* 0.75* 0.47* 0.58*
12 | 0.67* 0.78* 0.55* 0.64* | 0.65* 0.77* 0.48* 0.59* | 0.64* 0.76* 0.47* 0.58* | 0.63* 0.76* 0.49* 0.62*
13 | 0.68* 0.80* 0.55* 0.67* | 0.65* 0.78* 0.48* 0.58* | 0.63* 0.76* 0.49* 0.57* | 0.62* 0.76* 0.50* 0.54*
14 | 0.59* 0.73* 0.23* 0.53* | 0.56* 0.72* 0.33* 0.54* | 0.58* 0.74* 0.48* 0.54* | 0.60* 0.75* 0.43* 0.55*
Ave | 0.59 0.75 0.45 0.57 | 0.58 0.72 0.41 0.54 | 0.57 0.71 0.43 0.53 | 0.56 0.70 0.41 0.52
* The correlation coefficient has a confidence level of more than 99%.
Table A12. The 48–72 h R of WRF model original output and post-processing model results.

Height | 10 m | 30 m | 50 m | 70 m
Tower | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR | WRF GBDT DTR MLPR
1 | 0.44* 0.66* 0.36* 0.43* | 0.47* 0.62* 0.30* 0.40* | 0.46* 0.61* 0.32* 0.34* | 0.45* 0.58* 0.32* 0.35*
2 | 0.57* 0.68* 0.37* 0.47* | 0.56* 0.65* 0.36* 0.40* | 0.55* 0.64* 0.36* 0.42* | 0.53* 0.62* 0.31* 0.41*
3 | 0.50* 0.67* 0.41* 0.49* | 0.52* 0.66* 0.37* 0.46* | 0.53* 0.68* 0.40* 0.51* | 0.51* 0.66* 0.43* 0.45*
4 | 0.60* 0.72* 0.46* 0.56* | 0.56* 0.67* 0.38* 0.52* | 0.54* 0.66* 0.36* 0.50* | 0.52* 0.65* 0.37* 0.48*
5 | 0.57* 0.72* 0.39* 0.58* | 0.57* 0.70* 0.43* 0.51* | 0.57* 0.70* 0.45* 0.59* | 0.56* 0.70* 0.37* 0.53*
6 | 0.52* 0.67* 0.40* 0.52* | 0.52* 0.62* 0.37* 0.41* | 0.52* 0.62* 0.37* 0.47* | 0.50* 0.63* 0.30* 0.48*
7 | 0.52* 0.69* 0.41* 0.54* | 0.51* 0.63* 0.31* 0.49* | 0.52* 0.64* 0.36* 0.49* | 0.51* 0.63* 0.35* 0.45*
8 | 0.51* 0.65* 0.40* 0.45* | 0.50* 0.63* 0.33* 0.42* | 0.51* 0.62* 0.31* 0.46* | 0.49* 0.60* 0.32* 0.41*
9 | 0.47* 0.70* 0.40* 0.47* | 0.51* 0.68* 0.40* 0.49* | 0.52* 0.68* 0.36* 0.47* | 0.52* 0.68* 0.35* 0.46*
10 | 0.58* 0.74* 0.41* 0.54* | 0.56* 0.71* 0.44* 0.56* | 0.52* 0.71* 0.41* 0.49* | 0.49* 0.68* 0.40* 0.47*
11 | 0.54* 0.74* 0.48* 0.56* | 0.53* 0.71* 0.44* 0.56* | 0.53* 0.72* 0.42* 0.53* | 0.52* 0.71* 0.41* 0.53*
12 | 0.57* 0.75* 0.49* 0.54* | 0.56* 0.75* 0.52* 0.54* | 0.55* 0.72* 0.37* 0.50* | 0.55* 0.72* 0.43* 0.53*
13 | 0.59* 0.75* 0.49* 0.59* | 0.55* 0.72* 0.46* 0.54* | 0.55* 0.70* 0.43* 0.46* | 0.52* 0.70* 0.37* 0.45*
14 | 0.50* 0.70* 0.35* 0.50* | 0.47* 0.68* 0.31* 0.49* | 0.50* 0.68* 0.36* 0.48* | 0.52* 0.69* 0.33* 0.52*
Ave | 0.53 0.70 0.41 0.52 | 0.53 0.67 0.39 0.49 | 0.53 0.67 0.38 0.48 | 0.51 0.66 0.36 0.47
* The correlation coefficient has a confidence level of more than 99%.
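The four verification scores used throughout (RMSE, index of agreement, Nash–Sutcliffe efficiency, and Pearson correlation) follow their standard definitions. A minimal pure-Python sketch (the helper name `verification_metrics` is ours, not the paper's):

```python
import math

def verification_metrics(obs, fcst):
    """Compute RMSE, index of agreement (IA, Willmott 1981),
    Nash-Sutcliffe efficiency (NSE), and Pearson correlation (R)
    between observed and forecast wind speeds."""
    n = len(obs)
    mean_o = sum(obs) / n
    mean_f = sum(fcst) / n
    sq_err = sum((f - o) ** 2 for o, f in zip(obs, fcst))
    rmse = math.sqrt(sq_err / n)
    # IA = 1 - sum((f-o)^2) / sum((|f - mean_o| + |o - mean_o|)^2)
    ia = 1 - sq_err / sum((abs(f - mean_o) + abs(o - mean_o)) ** 2
                          for o, f in zip(obs, fcst))
    # NSE = 1 - sum((f-o)^2) / sum((o - mean_o)^2)
    nse = 1 - sq_err / sum((o - mean_o) ** 2 for o in obs)
    cov = sum((o - mean_o) * (f - mean_f) for o, f in zip(obs, fcst))
    r = cov / math.sqrt(sum((o - mean_o) ** 2 for o in obs) *
                        sum((f - mean_f) ** 2 for f in fcst))
    return rmse, ia, nse, r
```

A forecast with a constant +1 m/s bias, for instance, keeps R = 1 while RMSE is 1 and NSE drops below 1, which is why the paper reports all four scores.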

Figure 1. Nested grid of WRF models. The d01 grid contains most of China and the d02 grid contains the area of wind towers.
Figure 2. Domain 02 of the WRF model. There are 14 wind observation towers (red dots) distributed along the coastal area of Jiangsu province; each tower records 10 m above ground level (AGL), 30 m AGL, 50 m AGL, and 70 m AGL wind observations.
Figure 3. WRF model running time for the wind power forecasting of wind farms in China.
Figure 4. Model training process of the boosting algorithm in ensemble learning.
Figure 5. Training process of the gradient boosting decision tree method.
Figure 6. Parameter tuning results for ‘number of leaves’ and ‘minimum data in leaf’. Each line represents a different parameter combination (e.g., ‘L10_D20’ denotes the number of leaves set to 10 and the minimum data in leaf set to 20). (a) Training data mean square error (MSE) versus iteration step; (b) test data MSE versus iteration step.
Figure 7. RMSEs of the WRF original output (WRF), GBDT output (GBDT), decision tree regression output (DTR), and multi-layer perceptron regression output (MLPR).
Figure 8. IAs of the WRF original output (WRF), GBDT output (GBDT), decision tree regression output (DTR), and multi-layer perceptron regression output (MLPR).
Figure 9. NSEs of the WRF original output (WRF), GBDT output (GBDT), decision tree regression output (DTR), and multi-layer perceptron regression output (MLPR).
Figure 10. Rs of the WRF original output (WRF), GBDT output (GBDT), decision tree regression output (DTR), and multi-layer perceptron regression output (MLPR).
Figure 11. RMSE results by month, including the WRF original output (WRF), the GBDT output (GBDT), and the RMSE reduction (%) achieved by GBDT.
Figure 12. RMSE results of the 14 towers by month and hour: (a) the WRF results; (b) the GBDT results; and (c) the reduction of GBDT relative to the WRF results.
Figure 13. IA results of the 14 towers by month and hour: (a) the WRF results; (b) the GBDT results; and (c) the improvement of GBDT relative to the WRF results.
Figure 14. Weibull distributions of 10 m wind speed for the observations, WRF, GBDT, DTR, and MLPR on the test data of Tower 10001.
Figure 15. Feature importance at 10 m (a), 30 m (b), 50 m (c), and 70 m (d). ‘spd_10 m’ represents the 10 m wind speed output of the WRF model; ‘tmp’ represents the temperature features; ‘dir’ represents the direction features; and ‘u’ and ‘v’ represent the U and V components of wind speed, respectively, rotated to earth coordinates.
Figure 16. Wind speed RMSE of sensitivity test results.
Figure 17. Wind speed IA of sensitivity test results.
Figure 18. Distribution of the 10 m wind speed feature split values and the Weibull distribution of the observed 10 m wind speed across all 14 towers.
Table 1. Domain configuration and parameter settings of the WRF model.

Domain                 01                  02
Grid number            252 × 207           81 × 96
Grid resolution        25 km               5 km
Vertical levels        41                  41
Microphysics           Thompson graupel
Longwave radiation     RRTMG
Shortwave radiation    RRTMG
Land surface           Noah
Cumulus convection     Kain–Fritsch
PBL                    YSU
Table 2. Wind observation tower locations, terrain height, and sensor parameters.

Tower ID   Terrain Height (m)   Longitude (E)   Latitude (N)   Sampling Frequency   Sensor Bias
10001      1                    119.2167        35.0175        1 s                  ±1%
10002      1                    119.2044        34.7666        1 s                  ±1%
10003      1                    119.7784        34.4695        1 s                  ±1%
10004      2                    120.3096        34.142         1 s                  ±1%
10005      1                    120.5754        33.6442        1 s                  ±1%
10006      0.5                  120.8807        33.0107        1 s                  ±1%
10007      0.5                  120.8904        33.0131        1 s                  ±1%
10008      0.5                  120.8955        33.0145        1 s                  ±1%
10009      2                    120.9377        32.6452        1 s                  ±1%
10010      1                    121.1993        32.47          1 s                  ±1%
10011      1                    121.4183        32.2547        1 s                  ±1%
10012      2                    121.5318        32.1059        1 s                  ±1%
10013      2                    121.7346        32.0139        1 s                  ±1%
10014      1.5                  121.8894        31.7003        1 s                  ±1%
Table 3. GBDT input features at different pressure layers.

Variables: wind speed, wind direction, temperature, height, avo, pvo.
Pressure layers: 850 hPa, 700 hPa, 500 hPa, 300 hPa.
Table 4. GBDT input features at different height layers.

Variables: wind speed, wind direction, temperature, pressure, avo, pvo.
Height levels: 10 m, 30 m, 50 m, 70 m, 90 m, 100 m, 120 m, 150 m, 200 m, 250 m, 300 m, 350 m, 400 m, 450 m, 500 m, 600 m, 700 m, 800 m, 1000 m, 1250 m, 1500 m, 1750 m, 2000 m, 2500 m, 3000 m, 3500 m, 4000 m, 4500 m, 5000 m.
Table 5. GBDT input categorical features.

Feature          Categories
Month            January, February, March, ..., December
Hour             1, 2, 3, ..., 24
Wind direction   N, S, E, W, NW, NE, SW, SE
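The paper does not state how the continuous WRF wind direction is mapped to these eight sectors; one conventional 45° binning (an assumption for illustration, not the authors' code) looks like this:

```python
def direction_sector(deg):
    """Map a wind direction in degrees (0 = N, increasing clockwise)
    to one of the eight compass sectors used as a categorical feature."""
    sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
    # Shift by half a sector (22.5 degrees) so each label is centred
    # on its nominal direction, then take 45-degree bins.
    return sectors[int(((deg % 360) + 22.5) // 45) % 8]
```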
Table 6. Train and test data split in each month.

Dates used as training data: 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14, 16, 17, 18, 20, 21, 22, 24, 25, 26, 28, (29), (30), (31)
Dates used as test data: 3, 7, 11, 15, 19, 23, 27
(29), (30), (31) indicate dates that do not exist in every month.
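The held-out dates are every fourth day of the month, giving roughly a 3:1 train/test ratio. A sketch of the rule (the helper name is hypothetical):

```python
# Days of the month held out as test data (Table 6).
TEST_DAYS = {3, 7, 11, 15, 19, 23, 27}

def split_role(day_of_month):
    """Return 'test' for held-out days, 'train' for all other days."""
    return "test" if day_of_month in TEST_DAYS else "train"
```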
Table 7. LightGBM parameter configuration and parameter tuning ranges.

Parameter              Value/Value Range
Number of iterations   2000
Learning rate          0.1
Number of leaves       10, 20, 40, 80, 160
Minimum data in leaf   10, 20, 40, 80
Bagging fraction       0.8
Bagging frequency      5
Feature fraction       0.9
Metric                 Mean square error
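A sketch of how these settings map onto LightGBM's Python parameter names; the `objective` entry is our assumption (the paper trains a regression model with an MSE metric), and the two tuned hyperparameters are swept over the listed ranges:

```python
# Fixed LightGBM settings from Table 7.
base_params = {
    "objective": "regression",  # assumption: standard L2 regression objective
    "metric": "mse",
    "num_iterations": 2000,
    "learning_rate": 0.1,
    "bagging_fraction": 0.8,
    "bagging_freq": 5,
    "feature_fraction": 0.9,
}

# The two hyperparameters swept during tuning (Figure 6, Table 10).
tuning_grid = {
    "num_leaves": [10, 20, 40, 80, 160],
    "min_data_in_leaf": [10, 20, 40, 80],
}
```

Each of the 5 × 4 = 20 combinations would be trained with `base_params` merged with one entry from `tuning_grid`, matching the L/D pairs evaluated in Table 10.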
Table 8. Parameter settings of the MLPR model.

Parameter               Value
Hidden layer sizes      100
Activation function     ReLU
Optimization method     Adam
Iterations              200
Loss function           Mean square error (MSE)
Initial learning rate   0.001
Table 9. Parameter settings of the DTR model.

Parameter               Value
Criterion               Mean square error (MSE)
Split method            Best split
Max depth               No limit
Iterations              200
Loss function           Mean square error (MSE)
Initial learning rate   0.001
Table 10. Mean square error (MSE) of number of leaves (L) and minimum data in leaf (D) pairs after 2000 iterations. Each cell lists training MSE / validation MSE.

L \ D        10              20              40              80
10       0.174 / 0.380   0.179 / 0.369   0.185 / 0.382   0.195 / 0.387
20       0.069 / 0.281   0.074 / 0.285   0.080 / 0.287   0.088 / 0.296
40       0.022 / 0.248   0.025 / 0.257   0.029 / 0.263   0.037 / 0.264
80       0.004 / 0.250   0.005 / 0.255   0.007 / 0.266   0.011 / 0.250
160      0.000 / 0.249   0.001 / 0.237   0.001 / 0.243   0.002 / 0.238
Table 11. p-values of the significance tests (e-notation). ‘WRF/GBDT’ denotes the significance test between the WRF and GBDT results; likewise for ‘DTR/GBDT’ and ‘MLPR/GBDT’.

                   ---------- 10 m ----------     ---------- 30 m ----------     ---------- 50 m ----------     ---------- 70 m ----------
Index  Lead time   WRF/GBDT  DTR/GBDT  MLPR/GBDT  WRF/GBDT  DTR/GBDT  MLPR/GBDT  WRF/GBDT  DTR/GBDT  MLPR/GBDT  WRF/GBDT  DTR/GBDT  MLPR/GBDT
RMSE   0–24 h      1.8e-10   3.1e-12   5.32e-8    4.09e-13  1.76e-14  2.31e-10   2.17e-13  3.26e-18  1.51e-10   4.15e-14  1.47e-15  1.23e-9
       24–48 h     9.96e-11  8.1e-13   4.32e-9    4.55e-15  3.67e-14  2.11e-11   1.87e-17  3.57e-16  1.64e-12   3.19e-18  1.24e-13  1.53e-11
       48–72 h     8.39e-12  1.01e-10  8.8e-8     2.4e-16   3.45e-13  7.56e-10   7.03e-19  1.56e-14  2.39e-12   4.79e-19  1.56e-13  1.24e-11
IA     0–24 h      1.68e-8   1.18e-9   1.57e-6    2.42e-7   6.31e-11  8.75e-6    7.36e-6   3.02e-10  1.09e-5    4.85e-6   7.99e-11  9.79e-6
       24–48 h     4.54e-9   4.89e-9   3e-7       3.98e-9   4.74e-12  3.82e-7    1.48e-8   3.21e-10  4.22e-8    5.23e-9   1.05e-10  4.03e-7
       48–72 h     3.6e-9    3.65e-14  4.78e-9    1.27e-9   6.3e-11   3.43e-8    2.14e-10  1.37e-15  1.01e-8    1.7e-10   3e-15     3.77e-9
R      0–24 h      2.87e-6   1.18e-10  1.2e-7     3.35e-6   1.78e-12  3.79e-7    2.63e-5   6.68e-12  4.53e-7    1.28e-5   2.84e-12  4.77e-7
       24–48 h     2.14e-8   1.28e-9   1.43e-8    1.17e-9   1.1e-12   1.92e-8    1.38e-9   1.06e-11  1.67e-9    1.86e-9   1.81e-12  7.65e-9
       48–72 h     1.57e-10  1.04e-15  7.08e-11   2.31e-10  2.08e-12  1.05e-9    3.28e-11  5.16e-17  2.25e-10   2.42e-10  2.08e-16  7.3e-11
NSE    0–24 h      3.76e-7   1.44e-6   0.003782   0.000748  9.14e-6   0.01537    0.0939    0.000184  0.021304   0.086145  0.000173  0.036681
       24–48 h     1.5e-6    4.24e-5   0.024931   0.00354   5.56e-5   0.067188   0.221716  0.001957  0.207006   0.366434  0.007758  0.347349
       48–72 h     4.62e-5   5.15e-6   0.063338   0.410048  0.016471  0.757702   0.311537  0.00555   0.435889   0.198474  0.014306  0.65035
Table 12. Shape (k) and scale (lambda) parameters of the 10 m wind speed Weibull distributions. Each cell lists k / lambda.

Tower    Observation    WRF            GBDT           DTR            MLPR
10001    2.07 / 4.21    2.66 / 6.52    1.95 / 4.21    2.37 / 4.30    2.92 / 4.16
10002    2.17 / 4.61    2.55 / 6.55    1.88 / 4.48    2.38 / 4.55    2.73 / 4.46
10003    2.27 / 4.44    2.43 / 6.58    2.00 / 4.23    2.14 / 4.17    3.02 / 4.33
10004    2.14 / 5.22    2.53 / 6.77    2.01 / 5.05    2.10 / 4.85    2.63 / 5.01
10005    2.26 / 5.77    2.60 / 7.56    2.24 / 5.75    2.71 / 5.94    2.95 / 5.66
10006    2.02 / 4.44    2.54 / 7.52    1.93 / 4.49    2.30 / 4.82    2.65 / 4.42
10007    2.05 / 4.47    2.54 / 7.55    1.94 / 4.48    2.33 / 4.79    2.66 / 4.43
10008    2.20 / 5.11    2.54 / 7.58    2.22 / 5.04    2.62 / 5.38    2.95 / 5.05
10009    2.05 / 5.15    2.51 / 7.27    2.03 / 5.15    2.38 / 5.34    2.75 / 5.16
10010    1.76 / 5.37    2.50 / 6.44    1.85 / 5.34    1.94 / 5.37    2.19 / 5.42
10011    2.05 / 4.99    2.57 / 7.38    2.08 / 4.95    2.19 / 5.01    2.63 / 4.89
10012    1.97 / 4.93    2.50 / 6.77    1.91 / 5.05    2.04 / 4.92    2.49 / 4.94
10013    2.03 / 4.42    2.47 / 6.96    1.94 / 4.37    2.37 / 4.46    2.51 / 4.35
10014    2.32 / 4.56    2.57 / 7.69    2.12 / 4.47    2.47 / 4.50    2.92 / 4.45
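The fits in Table 12 use the standard two-parameter Weibull density f(x) = (k/lam) (x/lam)^(k-1) exp(-(x/lam)^k). A sketch evaluating it with Tower 10001's observed parameters (k = 2.07, lam = 4.21):

```python
import math

def weibull_pdf(x, k, lam):
    """Two-parameter Weibull density with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

# Density of the observed 10 m wind speed at 4 m/s for Tower 10001 (Table 12).
density_at_4 = weibull_pdf(4.0, k=2.07, lam=4.21)
```

At x = lam the density reduces to (k/lam)·e^(-1), a convenient sanity check on any implementation.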
Table 13. Feature importance sensitivity test settings.

Test     Features
Test 1   All features
Test 2   ‘Other’ features
Test 3   10 m speed, 30 m speed, 50 m speed, 70 m speed, hour, month
Table 14. p-values of the significance tests (e-notation). ‘Tests 1 and 2’ denotes a t-test between Test 1 and Test 2; ‘Tests 1 and 3’ denotes a t-test between Test 1 and Test 3.

                   ------- 10 m -------    ------- 30 m -------    ------- 50 m -------    ------- 70 m -------
Index  Lead time   Tests 1&2  Tests 1&3    Tests 1&2  Tests 1&3    Tests 1&2  Tests 1&3    Tests 1&2  Tests 1&3
RMSE   0–24 h      4.35e-7    3.81e-5      2.48e-8    3.43e-8      1.62e-10   1.58e-8      3.49e-8    1.13e-7
       24–48 h     6.23e-6    5.28e-5      3.43e-5    2.34e-6      1.73e-6    2.8e-7       4.01e-6    3.74e-6
       48–72 h     9.36e-6    0.000277     5.95e-5    1.58e-5      1.84e-6    7.75e-7      6.45e-5    5.63e-6
IA     0–24 h      1.92e-13   6.21e-7      3.55e-13   2.43e-6      4.01e-16   4.17e-5      4.78e-15   2.49e-5
       24–48 h     1.44e-12   6.21e-9      1.51e-12   1.22e-8      1.2e-13    5.85e-9      3.82e-17   8.59e-8
       48–72 h     8.02e-12   8.47e-11     4.87e-10   3.24e-10     9.33e-10   3.25e-11     6.24e-9    8.31e-10


MDPI and ACS Style

Xu, W.; Ning, L.; Luo, Y. Wind Speed Forecast Based on Post-Processing of Numerical Weather Predictions Using a Gradient Boosting Decision Tree Algorithm. Atmosphere 2020, 11, 738. https://doi.org/10.3390/atmos11070738
