Article

Kernel Ridge Regression Model Based on Beta-Noise and Its Application in Short-Term Wind Speed Forecasting

1 College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China
2 The State-Owned Assets Management Office, Henan Normal University, Xinxiang 453007, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(2), 282; https://doi.org/10.3390/sym11020282
Submission received: 5 January 2019 / Revised: 14 February 2019 / Accepted: 19 February 2019 / Published: 22 February 2019
(This article belongs to the Special Issue Information Technology and Its Applications 2021)

Abstract

The kernel ridge regression (KRR) model aims to find the hidden nonlinear structure in raw data. It assumes that the noise in the data follows a Gaussian model. However, it has been pointed out that the noise in wind speed/power forecasting obeys the Beta distribution, so the classic regression techniques are not applicable in this case. Hence, we derive the empirical risk loss for the Beta distribution and propose a kernel ridge regression model based on Beta-noise (BN-KRR). Numerical experiments are carried out on real-world data. The results indicate that the proposed technique obtains good performance on short-term wind speed forecasting.

1. Introduction

Linear regression (LR) uses the least squares method to model the relationship between a scalar dependent variable and one or more explanatory variables: points in the plane are fitted with a straight line, and points in a high-dimensional space are fitted with a hyperplane. This method is very sensitive to predictors in a configuration of near-collinearity. Ridge regression (RR) is a variant of linear regression whose goal is to circumvent the problem of predictor collinearity. The ridge regression model is a powerful machine learning technique introduced by Hoerl [1] and Hastie et al. [2]; it is a method from classical statistics that implements a regularized form of least squares regression [3]. Ridge regression is thus an alternative method for learning functions based on a regularized extension of least squares techniques [4].
Given the data-set
$$D_N = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}, \quad (1)$$
where $x_i \in X = \mathbb{R}^n$ and $y_i \in \mathbb{R}$, $i = 1, \ldots, N$. Here $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^n$ is the $n$-dimensional Euclidean space, $N$ is the number of sample points, and the superscript $T$ denotes the matrix transpose. A multiple LR model is $f(x) = \varpi^T \cdot x + b$. LR and RR determine the parameter vector $\varpi \in \mathbb{R}^n$ by minimizing the objective functions, respectively:
$$g_{LR} = \sum_{i=1}^{N} (y_i - \varpi^T \cdot x_i - b)^2, \quad (2)$$
$$g_{RR} = \frac{1}{2} \varpi^T \cdot \varpi + C \cdot \sum_{i=1}^{N} (y_i - \varpi^T \cdot x_i - b)^2. \quad (3)$$
The objective function used in ridge regression implements a form of Tikhonov [5] regularization of a sum-of-squares error metric, where $C$ is a regularization parameter controlling the bias-variance trade-off [6]. This corresponds to penalized maximum likelihood estimation of $\varpi$, assuming the targets have been corrupted by independent and identically distributed (i.i.d.) samples from a Gaussian noise process with zero mean and variance $\sigma^2$, i.e., $y_i = f(x_i) + \xi_i$, $\xi_i \sim N(0, \sigma^2)$, $i = 1, \ldots, N$.
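For concreteness, the RR objective (3) admits a closed-form solution. The following sketch (an illustration, not part of the original paper) absorbs the bias $b$ into an appended constant feature, which mildly regularizes $b$ as well, a simplification relative to objective (3):

```python
import numpy as np

def ridge_fit(X, y, C):
    """Minimize (1/2) w'w + C * sum_i (y_i - w'x_i - b)^2, cf. objective (3).
    Setting the gradient to zero gives (X'X + I/(2C)) w = X'y once the
    bias is absorbed by appending a constant feature."""
    N = X.shape[0]
    Xb = np.hstack([X, np.ones((N, 1))])        # append a bias column
    lam = 1.0 / (2.0 * C)                       # effective ridge weight
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    w = np.linalg.solve(A, Xb.T @ y)
    return w[:-1], w[-1]                        # (weights, bias)

X = np.random.rand(50, 3)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.3
w, b = ridge_fit(X, y, C=100.0)
print(w, b)                                     # close to w_true and 0.3
```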
The KRR model based on the Gaussian-noise characteristic was derived by Saunders et al. [7]. RR [1,3,5] aims to find the hidden nonlinear structure in the raw data, while the nonlinear mapping is approximated by means of KRR based on kernel techniques [7,8,9,10,11]. A linear RR model is therefore constructed in a feature space $H$ ($\Phi: \mathbb{R}^n \to H$) induced by a nonlinear kernel function defining the inner product $K(x_i, x_j) = (\Phi(x_i) \cdot \Phi(x_j))$ ($i, j = 1, \ldots, N$). The kernel $K$ may be any positive definite Mercer kernel. The objective function of KRR based on Gaussian-noise (GN-KRR) can thus be written as
$$g_{GN\text{-}KRR} = \frac{1}{2} \varpi^T \cdot \varpi + C \cdot \sum_{i=1}^{N} (y_i - \varpi^T \cdot \Phi(x_i) - b)^2. \quad (4)$$
If the noise is Gaussian, the GN-KRR model meets the requirements. However, the noise in wind speed and wind power forecasting does not obey the Gaussian distribution but the Beta distribution, and the classic regression techniques are not applicable to this case. The uncertainty of wind power predictions was investigated in [12], where the statistics of the wind power forecasting error were found not to be Gaussian. The work in [13] also found that the output of wind turbine systems is limited between zero and the maximum power and that the error statistics do not follow a normal distribution; it further showed, via chi-squared tests, that using the Beta function for wind power prediction is justifiable. In [14], the standard deviation of the data set was a function of the normalized predicted power $p = p_{pred}/p_{inst}$, where $p_{pred}$ is the predicted power and $p_{inst}$ is the installed wind power capacity. Fabbri et al. [14] pointed out that the normalized production power $p$ lies within the interval $[0, 1]$ and that the Beta function is more suitable than the standard normal distribution. The work in [15] exhibited the advantages of using the Beta probability distribution function (pdf) instead of the Gaussian pdf for approximating the forecasting error. Based on the above literature [12,13,14,15,16], this work studies the Beta-distributed error between the predicted values $x_p$ and the measured values $x_m$ in wind speed forecasting; the pdf of the error $\varepsilon_i$ is $f(\varepsilon_i) = \varepsilon_i^{u-1} \cdot (1 - \varepsilon_i)^{v-1} \cdot h$, $\varepsilon_i \in (0, 1)$, $i = 1, 2, \ldots, N$, plotted in Figure 1, where $u, v$ are parameters, $h$ is a normalization factor, and the parameters $u, v$ may be determined from given values of the mean and standard deviation [17].
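As an illustration (not from the paper), the moment-matching formulas cited above can be used to fit $u$ and $v$ from the sample mean and standard deviation of normalized forecast errors; a minimal sketch, assuming SciPy is available and the errors have already been normalized into $(0, 1)$:

```python
import numpy as np
from scipy.stats import beta

def beta_params_from_moments(mu, sigma):
    """Moment matching (cf. [13,14]): u = (1 - mu) mu^2 / sigma^2 - mu,
    v = (1 - mu) / mu * u."""
    u = (1.0 - mu) * mu**2 / sigma**2 - mu
    v = (1.0 - mu) / mu * u
    return u, v

# Hypothetical normalized forecast errors in (0, 1).
errors = np.clip(np.random.beta(3.6, 3.1, size=1000), 1e-6, 1.0 - 1e-6)
u, v = beta_params_from_moments(errors.mean(), errors.std())
print(u, v, beta.pdf(0.5, u, v))   # fitted parameters and the pdf at 0.5
```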
It is not suitable to apply KRR based on the Gaussian-noise model (GN-KRR) to fit functions from data-sets with Beta-noise. To solve this problem, this work combines optimization theory with a Beta-noise loss function and derives a KRR method based on the Beta-noise characteristic (BN-KRR). It also yields a forecasting technique that can deal with high dimensionality and nonlinearity simultaneously.
This paper is organized as follows. In Section 2, we derive the Beta-noise empirical risk loss via the Bayesian principle. Section 3 describes the proposed KRR model based on Beta-noise. Section 4 gives the solution and algorithm design of the Beta-noise KRR based on the Genetic Algorithm. Section 5 reports numerical experiments applying BN-KRR to short-term wind speed and wind power prediction. Finally, conclusions and future work are given in Section 6.

2. Bayesian Principle to Beta-Noise Empirical Risk Loss

Learning to fit data with noise is an important problem in many real-world data mining applications. Given a training set $D_N$ as in (1), suppose the noise is additive:
$$y_i = f(x_i) + \xi_i, \quad i = 1, \ldots, N, \quad (5)$$
where the $\xi_i$ are i.i.d. random variables distributed as $P(\xi)$ with mean $\mu$ and standard deviation $\sigma$.
The objective is to find a regressor $f$ minimizing the expected risk [18,19] $R[f] = \int l(x, y, f(x)) \, dP(x, y)$ based on the empirical data $D_N$, where $l(x, y, f(x))$ is an empirical risk loss (determining how we penalize estimation errors). Since we do not know the distribution $P(x, y)$, we can only use the data-set $D_N$ to estimate a regressor $f$ that minimizes $R[f]$. A possible approximation consists of replacing the integration by the empirical estimate, yielding the empirical risk $R_{emp}[f] = \frac{1}{N} \sum_{i=1}^{N} l(x_i, y_i, f(x_i))$. In general, we should add a capacity control term in RR and KRR, which leads to the regularized risk functional [18,20]
$$R_{reg}[f] = R_{emp}[f] + \frac{\lambda}{2} \|\varpi\|^2, \quad (6)$$
where $\lambda > 0$ is a regularization constant and $R_{emp}[f]$ is the empirical risk. It is well known that $l(\xi_i) = \frac{1}{2}\xi_i^2$ is the empirical risk loss for the Gaussian-noise characteristic in LR (2), RR (3), and KRR (4). But what is the empirical risk loss of the KRR model under Beta-noise? We derive it from the Bayesian principle as follows.
The regressor $f(x)$ is unknown; the objective is to estimate it from $D_N$. According to the literature [20,21,22], the optimal empirical risk loss obtained from maximum likelihood is
$$l(x, y, f(x)) = -\log p(y - f(x)). \quad (7)$$
For the maximum likelihood estimation, let
$$X_f = \{(x_1, f(x_1)), (x_2, f(x_2)), \ldots, (x_N, f(x_N))\}, \quad (8)$$
$$p(X_f \,|\, X) = \prod_{i=1}^{N} p(f(x_i) \,|\, (x_i, y_i)) = \prod_{i=1}^{N} p(y_i - f(x_i)). \quad (9)$$
Maximizing $p(X_f \,|\, X)$ is equivalent to minimizing $-\log p(X_f \,|\, X)$. Using Equation (7), we have
$$l(x, y, f(x)) = -\log p(X_f \,|\, X). \quad (10)$$
Suppose the noise in Equation (5) adheres to a Beta distribution with mean $\mu \in (0, 1)$ and variance $\sigma^2$; then we can take $u = (1 - \mu) \cdot \mu^2 / \sigma^2 - \mu$ and $v = (1 - \mu)/\mu \cdot u$ [13,14], with normalization factor $h = \Gamma(u + v) / (\Gamma(u) \cdot \Gamma(v))$. By Equation (10), the Beta-noise empirical risk loss is
$$l(\xi) = l(y - f(x)) = (1 - u)\log(\xi) + (1 - v)\log(1 - \xi). \quad (11)$$
The empirical risk losses of Gauss-noise and Beta-noise with different parameters are shown in Figure 2.
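To make the comparison in Figure 2 concrete, here is a minimal sketch of the two losses, the Beta-noise loss (11) and the Gaussian loss $\frac{1}{2}\xi^2$; the clipping constant is an implementation detail, not from the paper:

```python
import numpy as np

def beta_noise_loss(xi, u, v):
    """Beta-noise empirical risk loss (11):
    l(xi) = (1 - u) log(xi) + (1 - v) log(1 - xi), for xi in (0, 1)."""
    xi = np.clip(xi, 1e-12, 1.0 - 1e-12)   # keep the logarithms finite
    return (1.0 - u) * np.log(xi) + (1.0 - v) * np.log(1.0 - xi)

def gauss_noise_loss(xi):
    """Gaussian-noise empirical risk loss: l(xi) = xi^2 / 2."""
    return 0.5 * xi**2

xi = np.linspace(0.01, 0.99, 99)
# The Beta loss is minimized at the mode (u-1)/(u+v-2) of the noise density.
print(xi[np.argmin(beta_noise_loss(xi, 3.6084, 3.0889))])
```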

3. KRR Model Based on Beta-Noise

It is not appropriate to apply the KRR model based on the Gaussian-noise characteristic (GN-KRR) to tasks with Beta-distributed noise. Consequently, we use the Beta-noise loss function and the maximum likelihood method to estimate the optimal loss function. We now derive the optimal empirical risk loss for the Beta-noise distribution and propose a new KRR model based on the Beta-noise characteristic (BN-KRR).
First, consider constructing an LR regressor $f(x) = \varpi^T \cdot x_i + b$, where $x_i = (1, x_{i1}, \ldots, x_{in})^T$ ($i = 1, 2, \ldots, N$) and $\varpi = (\varpi_0, \varpi_1, \ldots, \varpi_n)^T$. We use kernel techniques and construct a kernel function $K(\cdot, \cdot)$ with $K(x_i, x_j) = (\Phi(x_i) \cdot \Phi(x_j))$, where $\Phi: \mathbb{R}^n \to H$, $H$ is a Hilbert space, and $(\cdot \, , \cdot)$ denotes the inner product of $H$. We then extend the kernel techniques to the ridge regression model based on the Beta-noise characteristic.
Let the set of inputs be $\{(x_i, y_i), i = 1, \ldots, N\}$, where $i$ is the index of the $i$-th sample in $D_N$. For the general Beta-noise characteristic, the Beta-noise loss $c(\xi_i)$ at a sample point $(x_i, y_i)$ of $D_N$ is given by Formula (11). Since ridge regression and KRR with the Gaussian-noise characteristic (GN-KRR) are not suited to Beta-noise distributions in time series problems, Formula (11) is selected as the Beta empirical risk loss to overcome this shortcoming of GN-KRR. The primal problem of the KRR model with Beta-noise (denoted BN-KRR) can be described as follows ($C > 0$):
$$(P_{BN\text{-}KRR}): \quad \min_{\varpi, b, \xi} \; \frac{1}{2} \varpi^T \cdot \varpi + C \cdot \sum_{i=1}^{N} c(\xi_i) \quad \text{s.t.} \quad y_i - \varpi^T \cdot \Phi(x_i) - b = \xi_i, \; i = 1, \ldots, N, \quad (12)$$
where $c(\xi_i) = (1 - u)\log(\xi_i) + (1 - v)\log(1 - \xi_i)$ and $\xi_i = y_i - f(x_i)$, $i = 1, \ldots, N$.
Theorem 1.
The solution of the primal Problem (12) of model BN-KRR with respect to $\varpi$ exists and is unique.
Proof. 
The existence of a solution is trivial; uniqueness is shown below. Suppose $\bar{\varpi}$ and $\tilde{\varpi}$ are both solutions; then Problem (12) has solutions $(\bar{\varpi}, \bar{b}, \bar{\xi})$ and $(\tilde{\varpi}, \tilde{b}, \tilde{\xi})$. Define $(\varpi, b, \xi)$ as follows:
$$\varpi = \frac{1}{2}(\bar{\varpi} + \tilde{\varpi}), \quad b = \frac{1}{2}(\bar{b} + \tilde{b}), \quad \xi = \frac{1}{2}(\bar{\xi} + \tilde{\xi}). \quad (13)$$
We have
$$y_i - \varpi^T \cdot \Phi(x_i) - b - \xi_i = y_i - \frac{1}{2}(\bar{\varpi} + \tilde{\varpi})^T \cdot \Phi(x_i) - \frac{1}{2}(\bar{b} + \tilde{b}) - \frac{1}{2}(\bar{\xi}_i + \tilde{\xi}_i)$$
$$= \frac{1}{2}\big(y_i - \bar{\varpi}^T \cdot \Phi(x_i) - \bar{b} - \bar{\xi}_i\big) + \frac{1}{2}\big(y_i - \tilde{\varpi}^T \cdot \Phi(x_i) - \tilde{b} - \tilde{\xi}_i\big) = 0,$$
where $\frac{1}{2}(\bar{\xi}_i + \tilde{\xi}_i) \geq 0$ ($i = 1, \ldots, N$), so $(\varpi, b, \xi)$ is a feasible solution of Problem (12). Further,
$$\frac{1}{2}\|\varpi\|^2 + C \sum_{i=1}^{N} c(\xi_i) \leq \frac{1}{2}\|\bar{\varpi}\|^2 + C \sum_{i=1}^{N} c(\bar{\xi}_i), \quad (14)$$
$$\frac{1}{2}\|\varpi\|^2 + C \sum_{i=1}^{N} c(\xi_i) \leq \frac{1}{2}\|\tilde{\varpi}\|^2 + C \sum_{i=1}^{N} c(\tilde{\xi}_i). \quad (15)$$
By Inequalities (14) and (15), we get $2\|\varpi\|^2 \leq \|\bar{\varpi}\|^2 + \|\tilde{\varpi}\|^2$. Substituting $2\varpi = \bar{\varpi} + \tilde{\varpi}$ into this inequality gives
$$\|\bar{\varpi} + \tilde{\varpi}\|^2 \leq 2\big(\|\bar{\varpi}\|^2 + \|\tilde{\varpi}\|^2\big); \quad (16)$$
since $\|\bar{\varpi} + \tilde{\varpi}\| \leq \|\bar{\varpi}\| + \|\tilde{\varpi}\|$, the boundary case of (16) is
$$\big(\|\bar{\varpi}\| + \|\tilde{\varpi}\|\big)^2 \leq 2\big(\|\bar{\varpi}\|^2 + \|\tilde{\varpi}\|^2\big). \quad (17)$$
In addition, $2\|\bar{\varpi}\| \cdot \|\tilde{\varpi}\| \leq \|\bar{\varpi}\|^2 + \|\tilde{\varpi}\|^2$; combined with (17), in the equality case we get
$$2\|\bar{\varpi}\| \cdot \|\tilde{\varpi}\| = \|\bar{\varpi}\|^2 + \|\tilde{\varpi}\|^2, \quad \|\bar{\varpi}\| = \|\tilde{\varpi}\|, \quad \|\bar{\varpi} + \tilde{\varpi}\| = \|\bar{\varpi}\| + \|\tilde{\varpi}\|.$$
It follows that $\tilde{\varpi} = m \cdot \bar{\varpi}$, with $m = 1$ or $m = -1$. If $m = -1$, then $\bar{\varpi} + \tilde{\varpi} = 0$, and $\|\bar{\varpi} + \tilde{\varpi}\| = \|\bar{\varpi}\| + \|\tilde{\varpi}\|$ forces $\|\bar{\varpi}\| = \|\tilde{\varpi}\| = 0$, namely $\bar{\varpi} = \tilde{\varpi} = 0$. If $m = 1$, then $\bar{\varpi} = \tilde{\varpi}$.
In conclusion, the solution of Problem (12) with respect to $\varpi$ exists and is unique. □
Theorem 2.
The dual problem of the primal Problem (12) of model BN-KRR is
$$(D_{BN\text{-}KRR}): \quad \max_{\alpha} \; -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j K(x_i, x_j) + \sum_{i=1}^{N} \alpha_i y_i + C \sum_{i=1}^{N} \big( (1 - u)\log(\xi_i(\alpha_i)) + (1 - v)\log(1 - \xi_i(\alpha_i)) \big) \quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i = 0, \quad (18)$$
where $\xi_i(\alpha_i) = \dfrac{2 + \alpha_i/C - u - v - \Delta^{1/2}}{2\,\alpha_i/C}$, $\Delta = (\alpha_i/C + u - v)^2 + 4\,(1 + uv - u - v)$ ($i = 1, \ldots, N$), and $C > 0$ is a constant.
Proof. 
Introduce the Lagrange functional
$$L(\varpi, b, \alpha, \xi) = \frac{1}{2} \varpi^T \cdot \varpi + C \cdot \sum_{i=1}^{N} c(\xi_i) + \sum_{i=1}^{N} \alpha_i \big(y_i - \varpi^T \cdot \Phi(x_i) - b - \xi_i\big). \quad (19)$$
To minimize $L(\varpi, b, \alpha, \xi)$, take partial derivatives with respect to $\varpi$, $b$, and $\xi$. From the Karush-Kuhn-Tucker (KKT) conditions we obtain
$$\nabla_{\varpi}(L) = 0, \quad \nabla_{b}(L) = 0, \quad \nabla_{\xi}(L) = 0,$$
so $\varpi = \sum_{i=1}^{N} \alpha_i \Phi(x_i)$, $\sum_{i=1}^{N} \alpha_i = 0$, and $C \cdot c'(\xi_i) - \alpha_i = 0$.
Substituting these extremum conditions into $L(\varpi, b, \alpha, \xi)$ and maximizing over $\alpha$ yields the dual Problem (18) of Problem (12). □
Since $c'(\xi_i) = \frac{1 - u}{\xi_i} - \frac{1 - v}{1 - \xi_i}$, the condition $C \cdot c'(\xi_i) - \alpha_i = 0$ gives $(\alpha_i/C) \cdot \xi_i^2 - (2 + \alpha_i/C - u - v) \cdot \xi_i + 1 - u = 0$.
We then get
$$\xi_i^1(\alpha_i) = \frac{2 + \alpha_i/C - u - v + \Delta^{1/2}}{2\,\alpha_i/C}, \quad \xi_i^2(\alpha_i) = \frac{2 + \alpha_i/C - u - v - \Delta^{1/2}}{2\,\alpha_i/C}, \quad (20)$$
where $\Delta = (\alpha_i/C + u - v)^2 + 4\,(1 + uv - u - v)$. Because $0 < \xi_i(\alpha_i) < 1$, we reject $\xi_i^1(\alpha_i)$ and let $\xi_i(\alpha_i) = \xi_i^2(\alpha_i)$.
From $\varpi = \sum_{i=1}^{N} \alpha_i \Phi(x_i)$ and $b = y_i - \sum_{j=1}^{N} \alpha_j K(x_j, x_i) - \xi_i(\alpha_i)$, the decision function of BN-KRR is
$$f(x) = \varpi^T \cdot \Phi(x) + b = \sum_{i=1}^{N} \alpha_i K(x_i, x) + b. \quad (21)$$
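A small sketch (an illustration, not the paper's Matlab implementation) of the closed form $\xi_i(\alpha_i)$ from (20) and the decision function (21), assuming the dual coefficients $\alpha_i$ and the bias $b$ have already been obtained from (18):

```python
import numpy as np

def xi_of_alpha(alpha_i, C, u, v):
    """Smaller root of (alpha_i/C) xi^2 - (2 + alpha_i/C - u - v) xi + (1 - u) = 0,
    i.e. xi_i(alpha_i) from (20), chosen so that 0 < xi < 1.
    Assumes alpha_i != 0 (otherwise the quadratic degenerates)."""
    a = alpha_i / C
    delta = (a + u - v) ** 2 + 4.0 * (1.0 + u * v - u - v)
    return (2.0 + a - u - v - np.sqrt(delta)) / (2.0 * a)

def bn_krr_predict(x, X_train, alpha, b, kernel):
    """Decision function (21): f(x) = sum_i alpha_i K(x_i, x) + b."""
    return sum(a_i * kernel(x_i, x) for a_i, x_i in zip(alpha, X_train)) + b
```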
Note: The KRR with the Gaussian-noise characteristic (GN-KRR) was discussed in [9,10,11]. The Gaussian empirical risk loss at a sample point $(x_i, y_i) \in D_N$ is $c(\xi_i) = \frac{1}{2}\xi_i^2$; thus the dual problem of the KRR model based on the Gaussian-noise characteristic (GN-KRR) is
$$(D_{GN\text{-}KRR}): \quad \max_{\alpha} \; -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j K(x_i, x_j) + \sum_{i=1}^{N} \alpha_i y_i - \frac{1}{2C} \sum_{i=1}^{N} \alpha_i^2 \quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i = 0. \quad (22)$$
The dual problem of the RR model based on the Gaussian-noise characteristic (GN-RR) is
$$(D_{GN\text{-}RR}): \quad \max_{\alpha} \; -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j (x_i \cdot x_j) + \sum_{i=1}^{N} \alpha_i y_i - \frac{1}{2C} \sum_{i=1}^{N} \alpha_i^2 \quad \text{s.t.} \quad \sum_{i=1}^{N} \alpha_i = 0. \quad (23)$$
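For comparison, the Gaussian-noise dual (22) corresponds to standard kernel ridge regression. As a baseline sketch, one can use scikit-learn's KernelRidge (an assumption: this off-the-shelf solver omits the equality constraint $\sum_i \alpha_i = 0$ and the unpenalized bias, so it is only an approximation of (22)):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Toy data: recover sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 6.0, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# alpha is the ridge penalty, roughly 1/(2C) in objective (4);
# gamma = 1/sigma^2 matches the Gaussian kernel in (24) with sigma = 0.2.
model = KernelRidge(kernel="rbf", alpha=1.0 / (2 * 151), gamma=1.0 / 0.2**2)
model.fit(X, y)
print(model.predict([[1.5]]))   # approximately sin(1.5)
```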

4. Solution Based on Genetic Algorithm

The solution and algorithm design of the KRR model based on the Beta-noise characteristic (BN-KRR) are as follows.
(1) Let the training sample set be $D_N = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in X = \mathbb{R}^n$, $y_i \in \mathbb{R}$ ($i = 1, \ldots, N$).
(2) Select appropriate positive parameters $C$, $u$, $v$ and a suitable kernel $K(\cdot, \cdot)$.
(3) Solve the optimization Problem (18) to obtain the optimal solution $\alpha = (\alpha_1, \ldots, \alpha_N)$.
(4) Construct the decision function
$$f(x) = \varpi^T \cdot \Phi(x) + b = \sum_{i=1}^{N} \alpha_i K(x_i, x) + b,$$
with $b = y_i - \sum_{j=1}^{N} \alpha_j K(x_j, x_i) - \xi_i(\alpha_i)$.
Determining the unknown parameters of model BN-KRR is a complicated process, and an appropriate parameter combination can enhance the regression accuracy of the Beta-noise kernel ridge regression. The Genetic Algorithm (GA) [23,24,25] is a search heuristic that mimics the process of natural evolution and is routinely used to generate useful solutions to optimization and search problems. In a GA, the evolution usually starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated; multiple individuals are stochastically selected from the current population and modified to form a new population, which is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced or a satisfactory fitness level has been reached. If the algorithm terminates because the maximum number of generations was reached, a satisfactory solution may or may not have been found. A minimal sketch of such a parameter search is given below.
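The following sketch (illustrative only, not the paper's implementation) evolves candidate $(C, u, v)$ triples by truncation selection, blend crossover, and Gaussian mutation; the `fitness` function is a hypothetical placeholder standing in for the validation error of a trained BN-KRR model:

```python
import numpy as np

rng = np.random.default_rng(42)
BOUNDS = np.array([[1.0, 201.0], [0.1, 10.0], [0.1, 10.0]])  # ranges for (C, u, v)

def fitness(params):
    """Placeholder: should return the validation error (e.g. RMSE) of a
    BN-KRR model trained with params = (C, u, v). A toy surrogate with
    minimum at (181, 3.6, 3.1) stands in for the real objective."""
    C, u, v = params
    return (C - 181.0) ** 2 + (u - 3.6) ** 2 + (v - 3.1) ** 2

def ga_minimize(pop_size=30, max_gen=100, mut_sigma=0.1):
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(max_gen):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            pa, pb = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=3)
            child = w * pa + (1.0 - w) * pb                  # blend crossover
            child += rng.normal(0.0, mut_sigma * (hi - lo))  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmin(scores)]

print(ga_minimize())   # near (181, 3.6, 3.1) for the toy surrogate
```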
GA is considered one of the modern optimization algorithms for combinatorial optimization and is used here to determine the parameters of model BN-KRR. Based on fitness-driven survival and reproduction, the GA is applied repeatedly to obtain new and better solutions without any pre-assumptions such as continuity or unimodality [26,27,28]. The proposed model BN-KRR was implemented in the Matlab 7.8 programming language; the experiments were run on a personal computer with a 3.60 GHz Core (TM) i7-4790 CPU and 8.0 GB of memory under Microsoft Windows XP Professional. The initial parameters of the GA are $Maxcgen = 100$, $C \in [1, 201]$, $u, v \in (0, \infty)$. Many practical applications show that polynomial and Gaussian kernels perform well under general smoothness assumptions [29]. In this work, polynomial and Gaussian kernels are used as the kernels for models $\nu$-SVR, GN-KRR, and BN-KRR:
$$K(x_i, x_j) = ((x_i \cdot x_j) + 1)^d, \qquad K(x_i, x_j) = \exp\Big(-\frac{\|x_i - x_j\|^2}{\sigma^2}\Big), \quad (24)$$
where $d$ is a positive integer, taken as $d = 1, 2,$ or $3$, and $\sigma$ is positive, taken here as $\sigma = 0.2$.
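A direct transcription of the two kernels in (24), as a sketch:

```python
import numpy as np

def poly_kernel(xi, xj, d=2):
    """Polynomial kernel from (24): K(xi, xj) = ((xi . xj) + 1)^d."""
    return (np.dot(xi, xj) + 1.0) ** d

def gauss_kernel(xi, xj, sigma=0.2):
    """Gaussian kernel from (24): K(xi, xj) = exp(-||xi - xj||^2 / sigma^2)."""
    return np.exp(-np.sum((np.asarray(xi) - np.asarray(xj)) ** 2) / sigma**2)
```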
As is well known, no prediction model forecasts perfectly. Several criteria, namely the mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and standard error of prediction (SEP), are used to evaluate the predictive performance of models $\nu$-SVR, GN-KRR, and BN-KRR. The four criteria are defined as follows:
$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} |x_{p,i} - x_{m,i}|,$$
$$\mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \frac{|x_{p,i} - x_{m,i}|}{x_{m,i}},$$
$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_{p,i} - x_{m,i})^2},$$
$$\mathrm{SEP} = \frac{\sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_{p,i} - x_{m,i})^2}}{\frac{1}{N} \sum_{i=1}^{N} x_{m,i}},$$
where $N$ is the number of selected samples, $x_{m,i}$ is the measured value at data point $x_i$, and $x_{p,i}$ is the predicted value at data point $x_i$ ($i = 1, 2, \ldots, N$) [14,15,16].
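A compact sketch of the four criteria; the SEP here is implemented as RMSE divided by the mean measured value, matching the definition above:

```python
import numpy as np

def forecast_errors(x_p, x_m):
    """MAE, MAPE, RMSE, and SEP for predicted x_p and measured x_m."""
    x_p, x_m = np.asarray(x_p, float), np.asarray(x_m, float)
    mae = np.mean(np.abs(x_p - x_m))
    mape = np.mean(np.abs(x_p - x_m) / x_m)
    rmse = np.sqrt(np.mean((x_p - x_m) ** 2))
    sep = rmse / np.mean(x_m)          # RMSE relative to the mean measured level
    return mae, mape, rmse, sep

print(forecast_errors([8.2, 7.9], [8.0, 8.0]))  # small illustrative example
```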

5. Short-Term Wind Speed and Wind Power Forecasting with Real Data-Set

The model BN-KRR is applied to a multi-factor real data-set of wind speed sequences from Jilin Province. The wind speed data contain more than a year of samples collected at ten-minute intervals, 62,466 records in total; the column attributes of each sample are the mean, variance, minimum, and maximum, respectively. The short-term wind speed forecast is studied as follows.
The training set consists of 2160 samples (points 1 to 2160, covering 15 days), and the test set consists of 720 samples (points 2161 to 2880, covering 5 days). The input vector is $x_i = (x_i, x_{i+1}, x_{i+2}, \ldots, x_{i+11})$, the output value is $x_{i+11+step}$, and $step = 1, 3$. That is, this pattern is used to forecast the wind speed 10 and 30 min ahead of each point $x_{i+11}$, respectively [30,31].
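A sketch of the sliding-window construction described above (the file name is hypothetical; any 10-min wind speed series will do):

```python
import numpy as np

def make_windows(series, width=12, step=1):
    """Inputs are `width` consecutive speeds (x_i, ..., x_{i+11});
    the target is the value `step` points after the window's last element."""
    X, y = [], []
    for i in range(len(series) - width - step + 1):
        X.append(series[i : i + width])
        y.append(series[i + width - 1 + step])
    return np.array(X), np.array(y)

speeds = np.loadtxt("wind_speed.csv")                      # hypothetical series
X_train, y_train = make_windows(speeds[:2160], step=1)     # 15 days of training data
X_test, y_test = make_windows(speeds[2160:2880], step=1)   # 5 days of test data
```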
1. Forecasting wind speed at point $x_{i+11}$ at 10-min intervals
The short-term wind speed forecast results at point $x_{i+11}$ at 10-min intervals given by GN-KRR [7,8,32], $\nu$-SVR [33,34], and BN-KRR are illustrated in Figure 3. In GN-KRR, parameter $C = 151$; in $\nu$-SVR, $C = 151$ and $\nu = 0.54$; in BN-KRR, $C = 181$, $u = 3.6084$, and $v = 3.0889$.
The MAE, MAPE, RMSE, and SEP indicators for the prediction results of the three models at point $x_{i+11}$ at 10-min intervals are shown in Table 1.
2. Forecasting wind speed at point $x_{i+11}$ at 30-min intervals
The short-term wind speed forecast results at point $x_{i+11}$ at 30-min intervals given by GN-KRR, $\nu$-SVR, and BN-KRR are illustrated in Figure 4. In GN-KRR, parameter $C = 151$; in $\nu$-SVR, $C = 151$ and $\nu = 0.54$; in BN-KRR, $C = 181$, $u = 3.6084$, and $v = 3.0889$.
The MAE, MAPE, RMSE, and SEP indicators for the prediction results of the three models at point $x_{i+11}$ at 30-min intervals are shown in Table 2.
The results of the wind speed forecasting experiments indicate that BN-KRR performs better than GN-KRR and $\nu$-SVR in 10-min and 30-min short-term wind speed forecasting.
Having predicted the short-term wind speed for the Jilin Province wind farm, we can calculate the wind power according to Formula (25):
$$P_M = \begin{cases} 0, & v < v_{cut\text{-}in} \ \text{or} \ v > v_{cut\text{-}out}, \\ P_r \cdot \dfrac{v - v_{cut\text{-}in}}{v_r - v_{cut\text{-}in}}, & v_{cut\text{-}in} \leq v < v_r, \\ P_r, & v_r \leq v \leq v_{cut\text{-}out}, \end{cases} \quad (25)$$
where $v_{cut\text{-}in}$ and $v_{cut\text{-}out}$ represent the cut-in and cut-out wind speeds of the wind turbine, and $v_r$ and $P_r$ represent its rated wind speed and rated power, respectively. Substituting the predicted wind speed into Formula (25) yields the predicted wind power.
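A sketch of the power curve (25); the turbine constants below are hypothetical placeholders for the actual turbine specification:

```python
import numpy as np

def wind_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=1.0):
    """Piecewise power curve (25): zero outside [v_cut_in, v_cut_out],
    a linear ramp between cut-in and rated speed, rated power above."""
    v = np.asarray(v, dtype=float)
    ramp = p_rated * (v - v_cut_in) / (v_rated - v_cut_in)
    power = np.where((v >= v_cut_in) & (v < v_rated), ramp, 0.0)
    return np.where((v >= v_rated) & (v <= v_cut_out), p_rated, power)

print(wind_power([2.0, 7.5, 15.0, 26.0]))   # -> [0.  0.5 1.  0. ]
```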

6. Conclusions and Future Work

In this work, we propose a new kernel ridge regression model based on Beta-noise (BN-KRR) to predict systems with Beta-distributed uncertainty. Novel results have been obtained with the model BN-KRR, which applies the Bayesian principle to derive the Beta-noise empirical risk loss and improves prediction accuracy. Numerical experiments were carried out on real-world data (short-term wind speed). Comparing model BN-KRR with models GN-KRR and $\nu$-SVR under the MAE, MAPE, RMSE, and SEP criteria verifies the validity and feasibility of the proposed model. The forecasting results indicate that the proposed technique obtains good performance on short-term wind speed forecasting.
In practical regression problems, data uncertainty is inevitable; observed data are often described at linguistic levels or with ambiguous metrics, such as weather forecasts of dry versus wet or sunny versus cloudy. Future work should therefore consider developing fuzzy kernel ridge regression algorithms with different noise models.

Author Contributions

L.S. and C.L. conceived the algorithm and designed the experiments; T.Z. implemented the experiments and analyzed the results; S.Z. drafted the manuscript. All authors read and revised the manuscript.

Funding

This work was supported by the Natural Science Foundation Project of Henan (Nos. 182300410130, 182300410368, 162300410177), the Key Project of the Science and Technology Department of Henan Province (Nos. 17A520038, 142102210056, 17A520040), the Ph.D. Research Startup Foundation of Henan Normal University (Nos. qd15129, qd15130, qd15132), the National Natural Science Foundation of China (NSFC) (Nos. 61772176, 61402153, 11702087), the China Postdoctoral Science Foundation (No. 2016M602247), and the Key Scientific and Technological Project of Xinxiang City of China (Nos. CP150120160714034529271, CXGG17002).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoerl, A.E. Application of ridge analysis to regression problems. Chem. Eng. Prog. 1962, 58, 54–59.
  2. Hastie, T.; Tibshirani, R.; Friedman, J.H. The Elements of Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2001.
  3. Hoerl, A.E.; Kennard, R.W. Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics 1970, 12, 55–67.
  4. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: New York, NY, USA, 2009.
  5. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; Wiley: New York, NY, USA, 1977.
  6. Geman, S.; Bienenstock, E.; Doursat, R. Neural networks and the bias/variance dilemma. Neural Comput. 1992, 4, 1–58.
  7. Saunders, C.; Gammerman, A.; Vovk, V. Ridge Regression Learning Algorithm in Dual Variables. In Proceedings of the 15th International Conference on Machine Learning, Madison, WI, USA, 24–27 July 1998; Morgan Kaufmann: San Francisco, CA, USA, 1998; pp. 515–521.
  8. Suykens, J.A.K.; Lukas, L.; Vandewalle, J. Sparse Approximation using Least-Squares Support Vector Machines. IEEE Int. Symp. Circuits Syst. (Geneva) 2000, 2, 757–760.
  9. Cawley, G.C.; Talbot, N.L.C. Reduced rank kernel ridge regression. Neural Process. Lett. 2002, 16, 293–302.
  10. Zhang, Z.H.; Dai, G.; Xu, C.F. Regularized Discriminant Analysis, Ridge Regression and Beyond. J. Mach. Learn. Res. 2010, 11, 2199–2228.
  11. Orsenigo, C.; Vercellis, C. Kernel ridge regression for out-of-sample mapping in supervised manifold learning. Expert Syst. Appl. 2012, 39, 7757–7762.
  12. Lange, M. On the uncertainty of wind power predictions: Analysis of the forecast accuracy and statistical distribution of errors. J. Sol. Energy Eng. 2005, 127, 177–184.
  13. Bofinger, S.; Luig, A.; Beyer, H.G. Qualification of wind power forecasts. In Proceedings of the 2002 Global Wind Power Conference, Paris, France, 2 April 2002.
  14. Fabbri, A.; Roman, T.G.S.; Abbad, J.R.; Quezada, V.H.M. Assessment of the cost associated with wind generation prediction errors in a liberalized electricity market. IEEE Trans. Power Syst. 2005, 20, 1440–1446.
  15. Bludszuweit, H.; Dominguez-Navarro, J.A.; Llombart, A. Statistical analysis of wind power forecast error. IEEE Trans. Power Syst. 2008, 23, 983–991.
  16. Madhiarasan, M.; Deepa, S.N. A novel criterion to select hidden neuron numbers in improved back propagation networks for wind speed forecasting. Appl. Intell. 2016, 44, 878–893.
  17. Canavos, G.C. Applied Probability and Statistical Methods; Little, Brown and Company: Toronto, ON, Canada, 1984.
  18. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995.
  19. Smola, A.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
  20. Chu, W.; Keerthi, S.S.; Ong, C.J. Bayesian support vector regression using a unified loss function. IEEE Trans. Neural Netw. 2004, 15, 29–44.
  21. Girosi, F. Models of Noise and Robust Estimates; A.I. Memo No. 1287; Massachusetts Institute of Technology, Artificial Intelligence Laboratory: Cambridge, MA, USA, 1991.
  22. Pontil, M.; Mukherjee, S.; Girosi, F. On the Noise Model of Support Vector Machines Regression. In Proceedings of the 11th International Conference on Algorithmic Learning Theory, Sydney, NSW, Australia, 11–13 December 2000; pp. 316–324.
  23. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99.
  24. Bas, E.; Uslu, V.R.; Yolcu, U.; Egrioglu, E. A modified genetic algorithm for forecasting fuzzy time series. Appl. Intell. 2014, 41, 453–463.
  25. Wei, H.; Tang, X.S.; Liu, H. A genetic algorithm (GA)-based method for the combinatorial optimization in contour formation. Appl. Intell. 2015, 43, 112–131.
  26. Zojaji, Z. Semantic schema theory for genetic programming. Appl. Intell. 2016, 44, 67–87.
  27. Shi, K.S.; Li, L.M. High performance genetic algorithm based text clustering using parts of speech and outlier elimination. Appl. Intell. 2013, 38, 511–519.
  28. Trivedi, A.; Srinivasan, D.; Biswas, S.; Reindl, T. A genetic algorithm-differential evolution based hybrid framework: Case study on unit commitment scheduling problem. Inf. Sci. 2016, 354, 275–300.
  29. Wu, Q.; Law, R. Fuzzy support vector regression machine with penalizing Gaussian noises on triangular fuzzy number space. Expert Syst. Appl. 2010, 37, 7788–7795.
  30. Gajowniczek, K.; Zabkowski, T. Simulation study on clustering approaches for short-term electricity forecasting. Complexity 2018, 2018, 3683969.
  31. Massidda, L.; Marrocu, M. Smart meter forecasting from one minute to one year horizons. Energies 2018, 11, 3520.
  32. Petković, D.; Shamshirband, S.; Saboohi, H.; Ang, T.F.; Anuar, N.B.; Pavlović, N.D. Support vector regression methodology for prediction of input displacement of adaptive compliant robotic gripper. Appl. Intell. 2014, 41, 887–896.
  33. Chang, C.C.; Lin, C.J. Training ν-Support Vector Regression: Theory and Algorithms. Neural Comput. 2002, 14, 1959–1977.
  34. Chalimourda, A.; Schölkopf, B.; Smola, A.J. Experimentally optimal ν in support vector regression for different noise models and parameter settings. Neural Netw. 2004, 17, 127–141.
Figure 1. Beta pdf and Gauss pdf.
Figure 2. Empirical risk loss of Gauss-noise and Beta-noise.
Figure 3. The forecasting results of ν-SVR, GN-KRR, and BN-KRR (step = 1).
Figure 4. The forecasting results of ν-SVR, GN-KRR, and BN-KRR (step = 3).
Table 1. Error statistics of the three models (step = 1).

Model     MAE      RMSE     MAPE (%)   SEP (%)
ν-SVR     0.4280   0.5833   7.02       7.02
GN-KRR    0.4219   0.5768   7.94       7.06
BN-KRR    0.3668   0.4233   6.84       5.23
Table 2. Error statistics of the three models (step = 3).

Model     MAE      RMSE     MAPE (%)   SEP (%)
ν-SVR     0.7979   1.0116   23.36      12.53
GN-KRR    0.7109   0.9226   17.17      11.43
BN-KRR    0.6640   0.8417   18.82      10.43
