Article

Mixed Kernel Function Support Vector Regression with Genetic Algorithm for Forecasting Dissolved Gas Content in Power Transformers

1 Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
2 School of Electrical Engineering, Xinjiang University, Urumqi 830046, China
* Author to whom correspondence should be addressed.
Energies 2018, 11(9), 2437; https://doi.org/10.3390/en11092437
Submission received: 22 August 2018 / Revised: 5 September 2018 / Accepted: 7 September 2018 / Published: 14 September 2018

Abstract

Forecasting dissolved gas content in power transformers plays a significant role in detecting incipient faults and maintaining the safety of the power system. Though various forecasting models have been developed, there is still room to further improve prediction performance. In this paper, a new forecasting model is proposed by combining mixed kernel function-based support vector regression (MKF-SVR) and genetic algorithm (GA). First, forecasting performance of SVR models constructed with a single kernel are compared, and then Gaussian kernel and polynomial kernel are retained due to better learning and prediction ability. Next, a mixed kernel, which integrates a Gaussian kernel with a polynomial kernel, is used to establish a SVR-based forecasting model. Genetic algorithm (GA) and leave-one-out cross validation are employed to determine the free parameters of MKF-SVR, while mean absolute percentage error (MAPE) and squared correlation coefficient (r2) are applied to assess the quality of the parameters. The proposed model is implemented on a practical dissolved gas dataset and promising results are obtained. Finally, the forecasting performance of the proposed model is compared with three other approaches, including RBFNN, GRNN and GM. The experimental and comparison results demonstrate that the proposed model outperforms other popular models in terms of forecasting accuracy and fitting capability.

1. Introduction

Power transformers are among the most vital and expensive devices in power grids. They play a significant role in transferring energy and converting voltages to different levels. Any unexpected malfunction or failure of a power transformer may jeopardize the continuity of the power supply, cause catastrophic damage to electrical equipment and the power system, and bring economic losses to power utilities and society. Therefore, considerable efforts have been made to detect and monitor the operating conditions of power transformers and keep them working under safe conditions [1,2,3].
Due to thermal stresses, electrical stresses and aging, the insulation systems (i.e., mineral oil, cellulose and solid insulation) of power transformers inevitably deteriorate and decompose. As a result, several kinds of gases are produced and dissolve in the mineral oil during the degradation process. Numerous approaches and models based on dissolved gas concentrations and gas characteristics have been developed and utilized over the last decades [4,5,6,7]. The dissolved gas analysis (DGA) technique, a simple and effective method, has been widely applied to interpret the working conditions and incipient faults of power transformers. Key gases, including hydrogen (H2), methane (CH4), acetylene (C2H2), ethylene (C2H4) and ethane (C2H6), are commonly used by DGA methods to interpret working conditions and identify potential faults in power transformers [8,9]. Consequently, if the concentration of dissolved gas can be forecast accurately from historical data, incipient faults and their development trends can be determined in advance, so that corresponding maintenance plans can be implemented and latent losses minimized.
Many approaches based on artificial intelligence (AI) have been proposed and applied for forecasting the concentration of dissolved gases in power transformers, such as the grey model (GM) [10], artificial neural networks (ANN) [11,12,13,14,15], least squares support vector machine (LSSVM) [16,17,18,19] and support vector regression (SVR) [20,21]. Each approach has its own advantages and disadvantages. ANN methods, including the back propagation neural network (BPNN), radial basis function neural network (RBFNN) and generalized regression neural network (GRNN), have superior self-learning, acceptable generalization and non-linear data handling capabilities. However, accurate forecasting of dissolved gases in power transformers with ANN requires massive amounts of historical data, which is often infeasible in practice. Besides, the structure and related parameters of an ANN need to be set properly to ensure satisfactory performance. These requirements have restricted the application of ANN in the field of forecasting. The grey model method is capable of providing desirable forecasting results with small-scale data and has been used to predict dissolved gas concentrations in power transformers. However, grey models are only suitable for cases where the observed variables change monotonically with time, e.g., following exponential laws. In practice, the variation of dissolved gas content in power transformers does not follow this premise owing to external factors. Thus, an inherent error will always exist when GM is adopted to forecast dissolved gas content in power transformers. Support vector regression (SVR) is regarded as an extended version of the support vector machine method and has received increasing attention in function estimation. SVR is established on the principle of structural risk minimization instead of empirical risk minimization, which gives it a simpler structure, higher forecasting accuracy and better generalization performance [22].
In general, forecasting dissolved gas content in power transformers is a non-linear time series problem. As one of the most important components of SVR, kernel functions convert non-linear, inseparable problems into linearly divisible ones. Mixed-kernel functions (MKF) have recently attracted great attention since they are able to achieve better classification and regression performance [23,24]. According to the literature [10,11,12,13,14,15,16,17,18,19,20,21,22], previously proposed forecasting models are implemented with a single kernel, and MKF-SVR models for dissolved gas content forecasting have rarely been investigated. Therefore, in this paper we propose a novel forecasting model based on MKF-SVR to improve prediction performance. In addition, a genetic algorithm (GA) is applied to tune the parameters of MKF-SVR to further improve the forecasting performance.
The remainder of this paper is organized as follows: the methodology of MKF-SVR and GA is introduced in Section 2; Section 3 describes the main process of the proposed approach. The forecasting performance of the proposed approach is presented in Section 4. Finally, the conclusions are drawn in Section 5.

2. Methodology

2.1. Support Vector Regression

The support vector machine (SVM) method, first proposed by Vapnik in the 1990s, has been acknowledged worldwide as an effective and significant method for classification and regression. The subdivision of SVM that tackles regression problems and function estimation is known as support vector regression (SVR). SVR has been applied in many different fields and has made remarkable progress due to its advantages of simple structure and convenient application [25,26,27,28,29]. Compared with other AI approaches, the computational complexity of SVR is determined by the number of support vectors instead of the dimensionality of the input data, which not only prevents the curse of dimensionality but also reduces computational cost.
Given a dataset $D = \{(x_i, y_i)\}$, $x_i \in \mathbb{R}^n$, $y_i \in \mathbb{R}$, $i = 1, 2, \ldots, n$, where $x_i$ is the input vector, $y_i$ is the corresponding output value and $n$ represents the number of samples, a linear problem can be described by the function shown below:
$$f(x) = \langle \omega, x \rangle + b \tag{1}$$
where $\omega$ and $b$ denote the weight coefficient and the constant (bias) term, respectively, and $f(x)$ is the forecasting value. For a non-linear problem, a mapping function $\varphi(x)$ is applied to transform the low-dimensional nonlinear problem into a high-dimensional linear one. The regression function is shown as Equation (2):
$$f(x) = \langle \omega, \varphi(x) \rangle + b \tag{2}$$
The parameters ω and b can be estimated by minimizing the regularized risk function:
$$\min\ \frac{1}{2}\|\omega\|^2 + \frac{C}{n}\sum_{i=1}^{n}\varepsilon\big(y_i - f(x_i)\big) \quad \text{s.t.}\quad \varepsilon\big(y_i - f(x_i)\big) = \begin{cases} 0, & |y_i - f(x_i)| \le \varepsilon \\ |y_i - f(x_i)| - \varepsilon, & \text{otherwise} \end{cases} \tag{3}$$
where C is the penalty factor, which is used to balance the empirical risk and the confidence degree; $\varepsilon(\cdot)$ denotes the $\varepsilon$-insensitive loss function and $\varepsilon$ the $\varepsilon$-insensitive loss parameter. Two non-negative slack variables $\xi_i$ and $\xi_i^*$ are introduced to facilitate the solving process, and the optimization problem becomes:
$$\min_{\omega,\xi,\xi^*}\ \frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{n}(\xi_i + \xi_i^*) \quad \text{s.t.} \quad \begin{cases} \langle \omega, \varphi(x_i)\rangle + b - y_i \le \varepsilon + \xi_i \\ y_i - \langle \omega, \varphi(x_i)\rangle - b \le \varepsilon + \xi_i^* \\ \xi_i,\ \xi_i^* \ge 0, \quad i = 1, 2, \ldots, n \end{cases} \tag{4}$$
Lagrangian multipliers are introduced to convert the problem described above to a dual optimization problem, which is shown as Equation (5):
$$\begin{aligned} \min\ & \frac{1}{2}\sum_{i,j=1}^{n}(\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)K(x_i, x_j) + \varepsilon\sum_{i=1}^{n}(\alpha_i + \alpha_i^*) - \sum_{i=1}^{n} y_i(\alpha_i - \alpha_i^*) \\ \text{s.t.}\ & \sum_{i=1}^{n}(\alpha_i - \alpha_i^*) = 0, \quad 0 \le \alpha_i,\ \alpha_i^* \le C, \quad i = 1, 2, \ldots, n \end{aligned} \tag{5}$$
where $\alpha_i$ and $\alpha_i^*$ are the Lagrangian multipliers and $K(x_i, x_j)$ is the kernel function. Then, the support vector regression function $f(x)$ can be obtained as follows:
$$f(x) = \sum_{i=1}^{n}(\alpha_i - \alpha_i^*)K(x_i, x) + b \tag{6}$$
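As a minimal illustration of Equations (1)–(6) in code, the sketch below fits an ε-SVR with an RBF kernel using scikit-learn in Python. The original study was implemented in MATLAB with the LIBSVM toolbox, so this is only an assumed, equivalent setup; the toy data and parameter values are placeholders.

```python
# Minimal epsilon-SVR example (assumed Python/scikit-learn setup, not the authors' MATLAB/LIBSVM code)
import numpy as np
from sklearn.svm import SVR

X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)   # toy inputs x_i
y = np.sin(2.0 * np.pi * X).ravel()            # toy targets y_i

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=5.0)
svr.fit(X, y)                                  # solves the dual problem of Eq. (5)
y_hat = svr.predict(X)                         # evaluates f(x) of Eq. (6) from the support vectors
```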

2.2. Mixed-Kernel Function

The kernel function is the most significant component of SVR. It is used to project the original low-dimensional data into a higher-dimensional data space, converting a nonlinear problem into a linear one [30]. Different kernel functions have different mapping capabilities, which results in different prediction accuracy. Therefore, significant efforts have been made to choose proper kernels [31,32]. Four commonly applied kernel functions are listed below:
(1) Linear kernel function:
$$k(x_i, x) = x_i \cdot x \tag{7}$$
(2) Polynomial kernel function:
$$k(x_i, x) = (x_i \cdot x + 1)^d \tag{8}$$
(3) Gaussian kernel function (or RBF):
$$k(x_i, x) = \exp\left(-\gamma \|x_i - x\|^2\right) \tag{9}$$
(4) Sigmoid kernel function:
$$k(x_i, x) = \tanh\left(\gamma\, x_i \cdot x + \theta\right) \tag{10}$$
Generally, these kernel functions can be divided into two categories: local kernels and global kernels. There are pronounced differences in the mapping abilities of different kernel functions. For global kernel functions, such as the linear and polynomial kernels, data points far away from each other affect the kernel value; a higher-order polynomial kernel has better interpolation ability, while a lower-order polynomial kernel has better extrapolation ability. On the contrary, local kernel functions, including the Gaussian and sigmoid kernels, only allow data points close to each other to have a strong impact on the kernel value [23,24]. The data distribution characteristics of the polynomial kernel and the Gaussian kernel are shown in Figure 1 and Figure 2, respectively.
Considering the advantages and disadvantages of local and global kernels, we integrate different kernel functions to obtain one mixed-kernel function (MKF), shown as Equation (11). According to Mercer's conditions, if $k_1$ and $k_2$ are admissible kernel functions, then the combined kernel $k$ is also an admissible kernel:
$$k(x_i, x) = C_1 k_1(x_i, x) + C_2 k_2(x_i, x), \quad C_1 \ge 0,\ C_2 \ge 0 \tag{11}$$
A prevalent MKF is a mixture of the Gaussian kernel and the polynomial kernel, which is defined as Equation (12):
$$k_{mix}(x_i, x) = \omega \exp\left(-\gamma \|x_i - x\|^2\right) + (1 - \omega)(x_i \cdot x + 1)^d, \quad 0 \le \omega \le 1 \tag{12}$$
where $\gamma$ and $d$ are the kernel width and the power exponent, respectively, and $\omega$ is the mixing coefficient. Obviously, a single-kernel method can be regarded as a specific case of the MKF: the MKF reduces to a polynomial kernel when $\omega = 0$ and to a Gaussian kernel when $\omega = 1$. Figure 3 depicts the effect of mixing a polynomial kernel and a Gaussian kernel for test point X = 0.3, d = 1 and γ = 50. It can be seen that the mixed kernel function possesses the merits of both the local kernel and the global kernel, and is able to improve fitting and generalization ability.
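To make Equation (12) concrete, the sketch below expresses the mixed kernel as a callable Gram-matrix function and plugs it into an ε-SVR. This is an assumed Python/scikit-learn rendering rather than the authors' MATLAB/LIBSVM implementation, and the parameter values shown are placeholders only.

```python
# Mixed Gaussian + polynomial kernel of Eq. (12) as a callable kernel for scikit-learn's SVR
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def mixed_kernel(X, Y, w=0.5, gamma=50.0, degree=1):
    """k_mix = w * exp(-gamma * ||x - y||^2) + (1 - w) * (x . y + 1)^d."""
    gauss = rbf_kernel(X, Y, gamma=gamma)                                # local kernel
    poly = polynomial_kernel(X, Y, degree=degree, gamma=1.0, coef0=1.0)  # global kernel
    return w * gauss + (1.0 - w) * poly

# Illustrative MKF-SVR; w, gamma, degree, C and epsilon are the free parameters later tuned by GA
mkf_svr = SVR(kernel=lambda A, B: mixed_kernel(A, B, w=0.8, gamma=10.0, degree=2),
              C=10.0, epsilon=0.01)
```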

2.3. Genetic Algorithm

The genetic algorithm (GA), initially developed by John Holland in the 1970s, is a global heuristic search and optimization technique. GA is inspired by Darwin's principle of "survival of the fittest" and natural evolution. It has been applied to various optimization problems in many diverse fields and has achieved substantial progress [33,34,35,36]. Compared with other optimization algorithms, GA converges more easily, is computationally efficient and obtains a better global view of the search space because of its effective exploitation and exploration mechanisms [37].
In general, GA starts with a randomly generated population, which represents the candidate solutions of a specific problem. Each candidate solution is called a chromosome or individual. A chromosome is composed of all the parameters that need to be optimized. The quality of a chromosome is assessed by a fitness function, which is established according to the objective function of the optimization problem. Genetic operations, including selection, crossover and mutation, are employed to manipulate the genetic reproduction of the population during the optimization process. Selection chooses individuals with higher fitness to reproduce offspring for the next generation; by this process, the population size is controlled and fitter individuals are passed to the next generation with a higher probability. Crossover exchanges part of the genetic information of two chosen chromosomes in a specified way to generate new individuals, so that individuals of the next generation inherit characteristics from each parent. The mutation operation produces new individuals by randomly altering the genetic information of a chromosome; its main purpose is to maintain the genetic diversity of the population and avoid getting stuck in local minima. These genetic operations are repeated until the stopping criterion is met. A common optimization procedure of GA is shown in Figure 4.

3. Procedure for Forecasting Using the Proposed Regression Model

Usually, the concentrations of dissolved gases, including H2, CH4, C2H2, C2H4 and C2H6, are recorded or saved in chronological order. Therefore, forecasting dissolved gas content in power transformers is treated as a non-linear time series problem, and the historical dissolved gas data are used as the time sequence in the forecasting process. Two steps are required to establish an effective and accurate forecasting model in this study: data preprocessing, and training and verification of the forecasting model.

3.1. Data Preprocessing

For a time series problem, it is essential to preprocess the raw data due to the possibility of missing values or false data. First, the input data (the historical data of H2, CH4, C2H2, C2H4 and C2H6) need to be carefully examined in order to remove any singular values and fill in missing data with an interpolation technique. Then, normalization should be implemented prior to the construction of the training and testing data to reduce estimation error and improve generalization. In this study, the original data are normalized with Equation (13):
$$x_n = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}} \tag{13}$$
where $x_i$ and $x_n$ are the data before and after normalization, respectively; $x_{\max}$ and $x_{\min}$ represent the maximum and minimum of the primary data.
Considering that historical dissolved gas data may not be recorded at equal time intervals, it is necessary to convert such unequal-interval series into equal-interval series to build a more convenient forecasting model. Hermite spline interpolation [19] and linear interpolation [21] are the two most popular interpolation approaches. In this paper, Hermite spline interpolation is selected to resample the primary data.
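A minimal preprocessing sketch is given below, assuming Python with NumPy/SciPy (the original work used MATLAB); it resamples an unequally sampled series onto an equal-interval grid with shape-preserving Hermite (PCHIP) interpolation and then applies the min-max normalization of Equation (13). The helper name and its arguments are illustrative only.

```python
# Equal-interval resampling (Hermite spline) followed by min-max normalization, Eq. (13)
import numpy as np
from scipy.interpolate import PchipInterpolator

def preprocess(t, g, n_points=None):
    """t: measurement times (e.g. days), g: dissolved gas content at those times."""
    t, g = np.asarray(t, float), np.asarray(g, float)
    if n_points is None:
        n_points = len(t)
    t_eq = np.linspace(t[0], t[-1], n_points)                   # equal-interval time grid
    g_eq = PchipInterpolator(t, g)(t_eq)                        # piecewise cubic Hermite interpolation
    g_norm = (g_eq - g_eq.min()) / (g_eq.max() - g_eq.min())    # min-max normalization
    return t_eq, g_norm
```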

3.2. Training and Testing of the Forecasting Model

According to the historical dissolved gas sequence $G_n = \{g_1, g_2, \ldots, g_n\}$, the training set T can be built as below:
$$T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_{n-m}, y_{n-m})\} \subset (X \times Y)^{n-m} \tag{14}$$
where $x_i = (g_i, g_{i+1}, \ldots, g_{i+m-1})$ is the input vector, $y_i = g_{i+m}$ is the output value and $m$ is the dimension of the input vector:
$$X = \begin{bmatrix} g_1 & g_2 & \cdots & g_m \\ g_2 & g_3 & \cdots & g_{m+1} \\ \vdots & \vdots & & \vdots \\ g_{n-m} & g_{n-m+1} & \cdots & g_{n-1} \end{bmatrix}, \qquad Y = \begin{bmatrix} g_{m+1} \\ g_{m+2} \\ \vdots \\ g_n \end{bmatrix} \tag{15}$$
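The construction of Equations (14) and (15) is a sliding-window embedding of the gas sequence. A short, assumed Python helper (the function name and toy data are placeholders) is sketched below.

```python
# Sliding-window construction of the training matrices X and Y, Eqs. (14)-(15)
import numpy as np

def build_training_set(g, m):
    """g: (normalized) gas content sequence g_1..g_n; m: input vector dimension."""
    g = np.asarray(g, dtype=float)
    X = np.array([g[i:i + m] for i in range(len(g) - m)])   # row i is (g_i, ..., g_{i+m-1})
    Y = g[m:]                                               # target is the next value g_{i+m}
    return X, Y

# Example with m = 3: X -> [[1,2,3],[2,3,4],[3,4,5]], Y -> [4,5,6]
X, Y = build_training_set([1, 2, 3, 4, 5, 6], m=3)
```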
After the historical data are divided into a training set and a testing set, a forecasting model based on MKF-SVR is trained to predict the development trend of the dissolved gas content in a power transformer. As mentioned in Section 2, the free parameters of SVR and the kernel functions have a great impact on forecasting performance. Hence, GA is introduced in this paper to optimize these free parameters and thereby improve forecasting accuracy and generalization ability. The main details of training by GA are elaborated below:
(1) Initialization of GA and encoding parameters
In this investigation, the population size, maximum number of iterations, crossover probability and mutation probability are predefined in the initialization process. The chromosome, composed of the free parameters (such as the penalty factor C, kernel bandwidth σ, insensitive loss parameter ε, power exponent d and mixing coefficient ω), is initialized randomly. These parameters are encoded with real coding, as it is suitable for complex problems and makes it simple to apply genetic operators to individuals [38]. The ranges and values of the free parameters employed in the optimization are displayed in Table 1.
(2) Definition and calculation of fitness function
The fitness function is the core part of GA and is used to evaluate the performance of each individual. The leave-one-out cross-validation (LOO CV) method is adopted to calculate the forecasting accuracy: each sample of the training set is used as the validation set in turn while the remaining samples are used for training, so that every sample is validated exactly once. The mean absolute percentage error (MAPE) and the squared correlation coefficient (r2) are employed as fitness measures to assess the quality of each chromosome and evaluate the forecasting accuracy. Generally speaking, the smaller the MAPE, the higher the forecasting accuracy, while r2 is limited to the range [0,1] and the larger its value, the better the forecasting performance. MAPE and r2 are calculated as follows:
$$\mathrm{MAPE} = \frac{1}{l}\sum_{i=1}^{l}\left|\frac{f(x_i) - y_i}{y_i}\right| \tag{16}$$
$$r^2 = \frac{\left(l\sum_{i=1}^{l} f(x_i)\, y_i - \sum_{i=1}^{l} f(x_i)\sum_{i=1}^{l} y_i\right)^2}{\left(l\sum_{i=1}^{l} f(x_i)^2 - \left(\sum_{i=1}^{l} f(x_i)\right)^2\right)\left(l\sum_{i=1}^{l} y_i^2 - \left(\sum_{i=1}^{l} y_i\right)^2\right)} \tag{17}$$
where $x_i$ represents the training data; $y_i$ and $f(x_i)$ denote the actual value and the value forecast by the proposed model, respectively; and $l$ represents the size of the training set.
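A hedged sketch of this fitness evaluation, assuming Python/scikit-learn (the `make_model` factory is a placeholder for the mixed-kernel SVR built from a candidate chromosome), is given below; note that Equation (17) is the squared Pearson correlation between predicted and actual values.

```python
# Leave-one-out cross-validated MAPE (Eq. 16) and r^2 (Eq. 17) used as the GA fitness
import numpy as np
from sklearn.model_selection import LeaveOneOut

def loo_fitness(X, Y, make_model):
    """make_model() must return an unfitted regressor, e.g. the MKF-SVR of Section 2.2."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    preds = np.empty_like(Y)
    for train_idx, val_idx in LeaveOneOut().split(X):
        model = make_model()
        model.fit(X[train_idx], Y[train_idx])
        preds[val_idx] = model.predict(X[val_idx])
    mape = np.mean(np.abs((preds - Y) / Y))       # Eq. (16)
    r2 = np.corrcoef(preds, Y)[0, 1] ** 2         # Eq. (17)
    return mape, r2
```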
(3) Genetic operation
Based on the fitness values, chromosomes with higher fitness are more likely to be selected to reproduce offspring by crossover and mutation. In this paper, roulette-wheel selection, arithmetical crossover and uniform mutation are adopted to carry out the genetic operations [39]. The whole process is repeated until the maximum number of iterations is reached, and the best solution of the last generation is then taken as the optimal result. The entire optimization process of the proposed approach is shown in Figure 5.
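For illustration only, a minimal real-coded GA with the operators named above is sketched below in Python; the parameter bounds loosely follow Table 1, the fitness call is assumed to be something like 1/(LOO-CV MAPE + small constant), and none of the names come from the original MATLAB implementation.

```python
# Minimal real-coded GA: roulette-wheel selection, arithmetical crossover, uniform mutation
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[0.001, 100.0],   # penalty factor C
                   [0.001, 100.0],   # RBF bandwidth (gamma)
                   [1e-4, 0.1],      # epsilon
                   [1.0, 5.0],       # polynomial degree d
                   [0.0, 1.0]])      # mixing coefficient omega

def evolve(fitness, pop_size=50, n_iter=100, p_cross=0.8, p_mut=0.02):
    """fitness(chromosome) -> positive value to maximize, e.g. 1 / (LOO-CV MAPE + 1e-12)."""
    dim = len(BOUNDS)
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, dim))
    for _ in range(n_iter):
        fit = np.array([fitness(ind) for ind in pop])
        parents = pop[rng.choice(pop_size, size=pop_size, p=fit / fit.sum())]  # roulette wheel
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):                      # arithmetical crossover
            if rng.random() < p_cross:
                a = rng.random()
                children[i] = a * parents[i] + (1 - a) * parents[i + 1]
                children[i + 1] = (1 - a) * parents[i] + a * parents[i + 1]
        mask = rng.random(children.shape) < p_mut                # uniform mutation within bounds
        children[mask] = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1],
                                     size=children.shape)[mask]
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)]                                   # best chromosome of the last generation
```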
The parameters contained in the optimal solution (or chromosome) are used to establish the final forecasting model. Testing samples are constructed as described by Equation (15) and used to calculate the forecasting values with the established model. The MAPE index, defined in Equation (16), is used to assess the forecasting accuracy of the proposed method. After the dissolved gas contents of interest are obtained, a Chinese standard (GB/T 7252-2001) can be employed to diagnose the working condition and incipient faults of the power transformer.

4. Experimental Results for Forecasting Dissolved Gas Content in Power Transformer Oil

Several dissolved gas content sequences of 110 kV and 220 kV power transformers from China Southern Power Grid are used to demonstrate the forecasting performance of the proposed method. These DGA data are shown in Table 2.
The dissolved gas data are first divided into a training set and a testing set according to the data size and related references. Case 1 and case 2 are sampled every day, while case 3 is sampled every week. It should be noted that no singular values were eliminated in this study and that no more than 5% of the data (11 out of 272 samples) are missing across the three cases. Afterwards, normalization is implemented to improve generalization capability and reduce computational error. All experimental tests of the proposed approach are conducted in the MATLAB (R2016) environment with the aid of the LIBSVM toolbox [40]. After data preprocessing and normalization, the time sequences of the training set and testing set are established according to Equation (15). In this study, we first test the forecasting performance of SVR models established with the different single kernels shown in Equations (7)–(10). GA is utilized to optimize the kernel parameters of ε-SVR, while LOO CV is applied to estimate the fitness and select the best among the candidate solutions. The numerical experiments for each model are repeated 50 times to decrease randomness in the final results. Results of the forecasting models for the training sets of case 1 and case 2 are shown in Table 3 and Table 4, respectively.
It can be seen from Table 3 and Table 4 that the SVR models based on the linear and sigmoid kernels have relatively better average MAPE than the models established with the Gaussian and polynomial kernels for all cases. However, the average r2 of the ε-SVR models with the linear or sigmoid kernel is far lower than that of the models based on the other two kernels: the r2 provided by the linear or sigmoid kernel is generally no more than 0.2 and 0.3, while the values obtained with the Gaussian and polynomial kernels are no less than 0.6 and 0.8 for case 1 and case 2, respectively.
A small r2 indicates that the established model cannot effectively depict the developing trend of the time series. Therefore, it is concluded that the sigmoid and linear kernels are not suitable for forecasting the dissolved gas content of power transformers, and they are not studied further in the following investigation.
Models based on the Gaussian and polynomial kernels have a better squared correlation coefficient and acceptable forecasting accuracy. Therefore, we apply the Gaussian and polynomial kernels to develop a novel MKF-SVR model for predicting dissolved gas contents. Forecasting results of the training set are also presented in Table 3 and Table 4, respectively. Compared with the Gaussian and polynomial kernels, the MKF-SVR model has slightly lower MAPE and comparable r2. Among the forecasting results obtained with the mixed kernel, the worst average MAPE is no more than 2% and 5%, and the lowest r2 is no less than 0.95 and 0.97 for case 1 and case 2, respectively. Besides, the squared correlation coefficient r2 of the MKF-SVR model is far better than that of the models based on the linear and sigmoid kernels. The results reveal that the proposed MKF-SVR model integrates the advantages of the local and global kernels and manifest the superiority of the proposed approach. All cases described in Table 2 are examined with the MKF-SVR model (repeated 50 times). For the proposed model, the mixed kernel function is given by Equation (12), and the free parameters that need to be optimized are listed in Table 1. It should be pointed out that the dimension of the input vector, m, plays an important role in forecasting performance, and an improper value of m will lead to undesirable forecasting results [20]. Hence, the parameter m is also included in the optimization process. The optimal parameters for each gas and the corresponding prediction performance are presented in Table 5.
Table 5 shows that the optimal value of m varies from case to case, which confirms that it is necessary to tune the input vector dimension to gain better performance. Moreover, the variation of the mixing coefficient ω suggests that it is indispensable to integrate different kernels to obtain better forecasting performance. Take H2 of case 1 for example: according to Table 3, the minimum MAPE for the Gaussian and polynomial kernels is 0.4961 and 0.8222, respectively, while for the mixed kernel ω is equal to 0.9991, which means that the mapping characteristic of the kernel function is mainly determined by the Gaussian kernel. Although the weight of the polynomial kernel is negligible (1 − ω = 0.0009), the training-set MAPE is greatly improved to 0.1884, which indicates that the participation of the polynomial kernel has greatly improved the learning ability and decreased the forecasting error. Predicted values and absolute percentage errors (APE, obtained by Equation (18)) of the training set and testing set are shown in Figure 6 and Figure 7. Compared with models based on the Gaussian and polynomial kernels, the MKF-SVR model depicts the variation trend of dissolved gas more accurately and reliably. In addition, the APE of models based on the mixed kernel is generally lower than that of models established with the other kernels. Specific forecasting values and the corresponding MAPE for all training and testing sets are presented in Table 6, where, for each case, the smallest MAPE of the testing-set forecasts is displayed in bold. Most of the results obtained by MKF-SVR are preferable to those of the other methods. To sum up, the MKF-SVR model can generally depict the developing trends of dissolved gas accurately and improve the forecasting accuracy:
$$\mathrm{APE} = \left|\frac{f(x_i) - y_i}{y_i}\right| \tag{18}$$
Furthermore, we compare the forecasting performance of MKF-SVR with three other popular methods (GRNN, RBFNN and GM) in order to demonstrate the superiority of the proposed method. The experimental results and the forecasting values of the training and testing sets are presented in Table 7 and Figure 8 (for Case 1, H2). Table 7 shows that the proposed MKF-SVR method has better MAPE and r2 than the other traditional methods. According to Figure 8, the forecasting value of the grey model increases monotonically, which does not accord with the actual values at all and gives the biggest error and the lowest r2, owing to the limitation of the grey model mentioned in Section 1. In comparison with RBFNN, GRNN not only has better forecasting results but also depicts the developing trends better. Nevertheless, these models are based on the principle of empirical risk minimization, so their forecasting performance can only be further improved by adding extra samples. The proposed MKF-SVR method applies the principle of structural risk minimization, which gives it satisfactory generalization ability with fewer samples. Moreover, it has better learning and prediction ability owing to the combination of a local kernel and a global kernel, which is conducive to depicting the developing trends of dissolved gas in power transformers. In conclusion, the forecasting accuracy and fitting performance of the proposed MKF-SVR model outperform those of the other popular approaches.
In practice, there may be bias or noise in the measurement of the dissolved gas content, which might affect the reliability and accuracy of the proposed model. Hence, the robustness of the proposed model is examined with noisy data. The noisy data are obtained by Equation (19):
$$data_{noisy} = data_{ori} \times (1 + p \cdot rand), \quad 0\% < p \le 100\% \tag{19}$$
where $data_{ori}$ and $data_{noisy}$ are the original data and the noisy data, respectively; $p$ denotes the percentage level of noise; and $rand$ represents a random number drawn from a uniform distribution between 0 and 1.
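As a small illustration of Equation (19), assuming Python/NumPy (the helper name and arguments are placeholders), the noisy series can be generated as follows.

```python
# Inject multiplicative measurement noise following Eq. (19)
import numpy as np

def add_noise(data, p, rng=None):
    """data: original gas series; p: noise level as a fraction, e.g. 0.05 for 5%."""
    rng = np.random.default_rng() if rng is None else rng
    data = np.asarray(data, dtype=float)
    return data * (1.0 + p * rng.uniform(0.0, 1.0, size=data.shape))
```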
Once the noisy data are ready, the data preprocessing techniques mentioned in Section 3 are carried out and the proposed model with the optimal parameters is employed to forecast the dissolved gas content. Forecasting values of the training set and testing set are obtained, and APE is adopted to evaluate the forecasting performance. The H2 data of case 1 and case 2 are used to demonstrate the prediction capability, and the APE of the forecasting results at different noise levels p is shown in Figure 9.
It can be seen from Figure 9 that the APE increases as p increases, whereas there are only slight differences in the training sets for both case 1 and case 2. The APE of the testing set varies more markedly: for case 1, it increases noticeably once p exceeds 5%, while for case 2 there is only a minor difference when the noise level increases from 0 to 20%. Moreover, the maximum change in APE for the testing sets of both cases is no more than 10%, which is acceptable in practical applications. Therefore, it can be concluded from Figure 9 that the proposed model has remarkable forecasting performance and desirable robustness.

5. Conclusions

In this paper, a mixed-kernel function based support vector regression model (MKF-SVR) is proposed to forecast the dissolved gas content in power transformers. First, the forecasting performance of SVR models with a single kernel function is examined, and the results suggest that models based on the sigmoid or linear kernel are not suitable for prediction of dissolved gas content. A mixed kernel function, combining a Gaussian kernel and a polynomial kernel, is then applied to develop the novel MKF-SVR model, and a genetic algorithm with LOO-CV is adopted to optimize the free parameters. The forecasting performance of the proposed MKF-SVR model is tested on actual gas data, and the results indicate that the proposed model is generally superior to single-kernel SVR models. Moreover, the prediction results of RBFNN, GRNN and GM are compared with those of MKF-SVR, and the comparison demonstrates that the proposed model has better forecasting accuracy and fitting capability than the other models. Additionally, the forecasting results based on noisy data verify the desirable robustness of the proposed model. In the future, additional factors, including oil temperature, working load and environmental conditions, should be taken into consideration when forecasting the development trends of dissolved gas levels in power transformers. Besides, more kernel types and different optimization algorithms can also be investigated to improve the forecasting performance.

Author Contributions

T.K. conceived the experiment and wrote the manuscript. A.T. and Y.Y. debugged the code. W.G. supervised the research and Z.Z. edited the manuscript. All authors have approved the submitted manuscript.

Acknowledgments

The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (No. 61462082). We also thank the anonymous reviewers for their useful suggestions and corrections to the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Nomenclature

DGA	dissolved gas analysis
MKF	mixed-kernel function
SVR	support vector regression
GA	genetic algorithm
MAPE	mean absolute percentage error
r2	squared correlation coefficient
AI	artificial intelligence
RBFNN	radial basis function neural network
BPNN	back propagation neural network
GRNN	generalized regression neural network
GM	grey model
LSSVM	least squares support vector machine
H2	hydrogen
CH4	methane
C2H6	ethane
C2H4	ethylene
C2H2	acetylene
CV	cross validation
LOO	leave-one-out
APE	absolute percentage error

References

1. Kari, T.; Wen, S.; Zhao, D. An Integrated Method of ANFIS and Dempster-Shafer Theory for Fault Diagnosis of Power Transformer. IEEE Trans. Dielectr. Electr. Insul. 2018, 25, 360–371.
2. Faiz, J.; Soleimani, M. Dissolved gas analysis evaluation in electrical power transformer using conventional methods: A review. IEEE Trans. Dielectr. Electr. Insul. 2017, 24, 1239–1248.
3. Cheng, L.; Yu, T. Dissolved Gas Analysis Principle-Based Intelligent Approaches to Fault Diagnosis and Decision Making for Large Oil-Immersed Power Transformers: A Survey. Energies 2018, 11, 913.
4. Duval, M. Dissolved Gas Analysis: It Can Save Your Transformer. IEEE Electr. Insul. Mag. 1989, 5, 22–27.
5. Rogers, R. IEEE and IEC Codes to Interpret Incipient Faults in Transformers Using Gas in Oil Analysis. IEEE Trans. Electr. Insul. 1978, 13, 349–354.
6. Ghoneim, S.; Taha, I.; Elkalashy, N. Integrated ANN-Based Proactive Fault Diagnostic Scheme for Power Transformer Using Dissolved Gas Analysis. IEEE Trans. Dielectr. Electr. Insul. 2016, 23, 1838–1845.
7. Khatib, E.; Barco, R.; Andrades, A. Diagnosis Based on Genetic Fuzzy Algorithms for LTE Self-Healing. IEEE Trans. Veh. Technol. 2016, 65, 1639–1651.
8. Mansour, D. Development of a New Graphical Technique for Dissolved Gas Analysis in Power Transformers Based on the Five Combustible Gases. IEEE Trans. Dielectr. Electr. Insul. 2015, 22, 2507–2512.
9. Piotr, M.; Yann, L. Statistical machine learning and dissolved gas analysis: A review. IEEE Trans. Power Deliv. 2012, 27, 1791–1799.
10. Wang, M. Grey-Extension Method for Incipient Fault Forecasting of Oil-Immersed Power Transformer. Electr. Power Compon. Syst. 2010, 32, 959–975.
11. Pereira, F.; Bezerra, F.; Junior, S. Nonlinear Autoregressive Neural Network Models for Prediction of Transformer Oil-Dissolved Gas Concentrations. Energies 2018, 11, 1694.
12. Lin, J.; Sheng, G.; Yan, Y. Prediction of Dissolved Gas Concentrations in Transformer Oil Based on the KPCA-FFOA-GRNN Model. Energies 2018, 11, 225.
13. Shaban, K.B.; El-Hag, A.H.; Benhmed, K. Prediction of Transformer Furan Levels. IEEE Trans. Power Deliv. 2016, 31, 1778–1779.
14. Weron, R. Electricity price forecasting: A review of the state-of-the-art with a look into the future. Int. J. Forecast. 2014, 30, 1030–1081.
15. Cinotti, S.; Gallo, G.; Ponta, L. Modeling and forecasting of electricity spot-prices: Computational intelligence vs. classical econometrics. AI Commun. 2014, 27, 301–314.
16. Liao, R.; Zheng, H.; Grzybowski, S. Particle swarm optimization-least square support vector regression based forecasting model on dissolved gas in oil-filled power transformer. Electr. Power Syst. Res. 2011, 81, 2074–2080.
17. Liao, R.; Bian, J.; Yang, L. Forecasting dissolved gas content in power transformer oil based on weakening buffer operator and least square support vector machine–Markov. IET Gener. Transm. Distrib. 2011, 6, 142–151.
18. Zheng, H.; Zhang, Y.; Liu, J. A novel model based on wavelet LS-SVM integrated improved PSO algorithm for forecasting of dissolved gas contents in power transformer. Electr. Power Syst. Res. 2018, 155, 196–205.
19. Zhang, Y.; Wei, H.; Yang, Y. Forecasting of Dissolved gas in Oil-immersed Transformers Based upon Wavelet LS-SVM Regression and PSO with Mutation. Energy Procedia 2016, 104, 38–43.
20. Fei, S.; Liu, C.; Miao, Y. Support vector machine with genetic algorithm for forecasting of key-gas ratios in oil-immersed transformer. Expert Syst. Appl. 2009, 36, 6326–6331.
21. Fei, S.; Sun, Y. Forecasting dissolved gas content in power transformer oil based on support vector machine with genetic algorithm. Electr. Power Syst. Res. 2008, 78, 507–514.
22. Fei, S.; Wang, M.; Miao, Y. Particle swarm optimization-based support vector machine for forecasting dissolved gas content in power transformer oil. Energy Convers. Manag. 2009, 50, 1604–1609.
23. Cheng, K.; Lu, Z.; Wei, Y. Mixed kernel function support vector regression for global sensitivity analysis. Mech. Syst. Signal Process. 2017, 96, 201–214.
24. Zhu, X.; Huang, Z.; Shen, H. Dimensionality reduction by Mixed Kernel Canonical Correlation Analysis. Pattern Recognit. 2012, 45, 3003–3016.
25. Li, S.; Fang, H.; Liu, X. Parameter optimization of support vector regression based on sine cosine algorithm. Expert Syst. Appl. 2018, 91, 63–71.
26. Wu, L.; Cao, G. Seasonal SVR with FOA algorithm for single-step and multi-step ahead forecasting in monthly inbound tourist flow. Knowl.-Based Syst. 2016, 110, 157–166.
27. Li, W.; Xuan, Y.; Li, H. Hybrid Forecasting Approach Based on GRNN Neural Network and SVR Machine for Electricity Demand Forecasting. Energies 2017, 10, 44.
28. Peng, L.; Fan, G.; Huang, M. Hybridizing DEMD and Quantum PSO with SVR in Electric Load Forecasting. Energies 2016, 9, 221.
29. Huang, M. Hybridization of Chaotic Quantum Particle Swarm Optimization with SVR in Electric Demand Forecasting. Energies 2016, 9, 426.
30. Zhong, Z.; Carr, T. Application of mixed kernels function (MKF) based support vector regression model (SVR) for CO2—Reservoir oil minimum miscibility pressure prediction. Fuel 2016, 184, 590–603.
31. Wu, D.; Wang, Z.; Chen, Y. Mixed-kernel based weighted extreme learning machine for inertial sensor based human activity recognition with imbalanced dataset. Neurocomputing 2016, 190, 35–49.
32. Zhang, Y.; Wang, Y.; Zhou, G. Multi-kernel extreme learning machine for EEG classification in brain-computer interfaces. Expert Syst. Appl. 2018, 96, 302–310.
33. Liu, Y.; Wang, R. Study on network traffic forecast model of SVR optimized by GAFSA. Chaos Solitons Fractals 2016, 89, 153–159.
34. Wang, S.; Hae, H.; Kim, J. Development of Easily Accessible Electricity Consumption Model Using Open Data and GA-SVR. Energies 2018, 11, 373.
35. Gholamalizadeh, E.; Kim, M. Multi-Objective Optimization of a Solar Chimney Power Plant with Inclined Collector Roof Using Genetic Algorithm. Energies 2016, 9, 971.
36. Li, C.; Zhai, R.; Yang, Y. Optimization of a Heliostat Field Layout on Annual Basis Using a Hybrid Algorithm Combining Particle Swarm Optimization Algorithm and Genetic Algorithm. Energies 2017, 10, 1924.
37. Wang, B.; Yang, Z.; Lin, F. An Improved Genetic Algorithm for Optimal Stationary Energy Storage System Locating and Sizing. Energies 2014, 7, 6434–6458.
38. Haghrah, A.; Mohammadi, B.; Seyedmonir, S. Real coded genetic algorithm approach with random transfer vectors-based mutation for short-term hydrothermal scheduling. IET Gener. Transm. Distrib. 2013, 9, 75–89.
39. Herrera, F.; Lozano, M.; Verdegay, J. Tackling Real-Coded Genetic Algorithms: Operators and Tools for Behavioural Analysis. Artif. Intell. Rev. 1998, 12, 265–319.
40. Chang, C.; Lin, C. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27:1–27:27.
Figure 1. Data distribution of global kernel (polynomial kernel, test point X = 0.3).
Figure 2. Data distribution of local kernel (Gaussian kernel, test point X = 0.3).
Figure 3. Data distribution of a mixed kernel (Gaussian kernel & polynomial kernel, test point X = 0.3).
Figure 4. Flowchart of the optimization procedure of GA.
Figure 5. Flowchart of parameter optimization of MKF-SVR.
Figure 6. Forecasting value and APE of SVR model with different kernels (Case 1).
Figure 7. Forecasting value and APE of SVR model with different kernels (Case 2).
Figure 8. The forecasting values of H2 (Case 1).
Figure 9. APE of forecasting result with different p.
Table 1. Parameters of SVR and GA.

Algorithm | Parameter | Value
SVR | Mixing coefficient ω | [0, 1]
SVR | Penalty factor C | [0.001, 100]
SVR | RBF bandwidth σ | [0.001, 100]
SVR | Epsilon ε | [0.0001, 0.1]
SVR | Polynomial degree d | [1, 5]
GA | Population size | 50
GA | Iterations | 100
GA | Crossover probability | 0.8
GA | Mutation probability | 0.02
Table 2. The dissolved gas content in a power transformer (μL/L).

Case | Date | H2 | CH4 | C2H6 | C2H4 | C2H2 | Data Type
1 | 2015/7/8 | 3.79 | 80.57 | 97.44 | 167.51 | 0 | Training
1 | 2015/7/9 | 4.04 | 88.02 | 101.8 | 178.63 | 0 | Training
1 | 2015/7/10 | 4.04 | 86.55 | 101.4 | 179.98 | 0 | Training
1 | 2015/7/11 | 4.05 | 86.68 | 100.98 | 180.39 | 0 | Training
1 | 2015/7/12 | 4.02 | 85.83 | 100.45 | 176.25 | 0 | Training
1 | 2015/7/13 | 3.81 | 79.74 | 97.75 | 168.92 | 0 | Training
1 | 2015/7/14 | 3.87 | 77.81 | 96.51 | 165.95 | 0 | Training
1 | 2015/7/15 | 3.82 | 78.55 | 96.93 | 168.37 | 0 | Training
1 | 2015/7/16 | 3.78 | 76.61 | 95.84 | 166.54 | 0 | Training
1 | 2015/7/17 | 4.07 | 81.91 | 98.46 | 175.09 | 0 | Training
1 | 2015/7/18 | 4.11 | 83.81 | 99.59 | 180.88 | 0 | Training
1 | 2015/7/19 | 4.06 | 83.12 | 99.37 | 181.06 | 0 | Training
1 | 2015/7/20 | 4.06 | 83.53 | 99.49 | 182.5 | 0 | Training
1 | 2015/7/21 | 4.04 | 83.03 | 98.96 | 180.45 | 0 | Training
1 | 2015/7/22 | 4.09 | 84.51 | 99.62 | 183.36 | 0 | Training
1 | 2015/7/23 | 3.81 | 78.88 | 97.04 | 172.18 | 0 | Training
1 | 2015/7/24 | 4.06 | 83.81 | 100.15 | 183.58 | 0 | Training
1 | 2015/7/25 | 4.11 | 85.37 | 101.33 | 188.73 | 0 | Training
1 | 2015/7/26 | 4.1 | 85.78 | 101.16 | 188.25 | 0 | Training
1 | 2015/7/27 | 3.79 | 79.17 | 97.29 | 172.77 | 0 | Training
1 | 2015/7/28 | 4.09 | 86.08 | 101.86 | 186.71 | 0 | Training
1 | 2015/7/29 | 4.09 | 86.69 | 102.14 | 187.1 | 0 | Training
1 | 2015/7/30 | 4.03 | 84.85 | 101.19 | 184.53 | 0 | Testing
2 | 2016/11/5 | 17.40 | 37.30 | 40.80 | 10.70 | 2.89 | Training
2 | 2016/11/6 | 17.20 | 40.10 | 38.90 | 10.00 | 2.63 | Training
2 | 2016/11/7 | 18.60 | 39.90 | 39.50 | 10.80 | 2.59 | Training
2 | 2016/11/8 | 18.20 | 37.30 | 37.20 | 9.84 | 2.97 | Training
2 | 2016/11/9 | 20.80 | 34.50 | 40.10 | 9.73 | 2.55 | Training
2 | 2016/11/10 | 20.80 | 40.00 | 36.70 | 10.50 | 2.72 | Training
2 | 2016/11/11 | 17.40 | 34.50 | 40.70 | 9.20 | 2.60 | Training
2 | 2016/11/12 | 20.80 | 35.90 | 40.40 | 9.43 | 2.48 | Training
2 | 2016/11/13 | 20.20 | 38.00 | 41.60 | 9.89 | 2.73 | Training
2 | 2016/11/14 | 20.70 | 35.70 | 37.40 | 10.90 | 2.52 | Training
2 | 2016/11/15 | 18.50 | 39.00 | 40.30 | 9.87 | 2.62 | Training
2 | 2016/11/16 | 17.30 | 39.30 | 39.00 | 10.30 | 2.71 | Training
2 | 2016/11/17 | 18.80 | 37.00 | 43.70 | 10.40 | 2.26 | Training
2 | 2016/11/18 | 19.20 | 36.80 | 40.30 | 10.70 | 2.63 | Training
2 | 2016/11/19 | 16.40 | 38.70 | 45.90 | 9.72 | 2.26 | Training
2 | 2016/11/20 | 19.80 | 40.00 | 42.40 | 10.60 | 2.34 | Training
2 | 2016/11/21 | 19.70 | 38.20 | 41.60 | 11.40 | 2.69 | Training
2 | 2016/11/22 | 18.30 | 40.50 | 44.90 | 10.80 | 2.39 | Training
2 | 2016/11/23 | 18.20 | 34.70 | 43.90 | 11.30 | 2.28 | Training
2 | 2016/11/24 | 18.30 | 34.80 | 44.40 | 9.53 | 2.63 | Training
2 | 2016/11/25 | 16.50 | 40.80 | 42.70 | 10.60 | 2.50 | Training
2 | 2016/11/26 | 18.10 | 40.90 | 45.90 | 11.40 | 2.71 | Training
2 | 2016/11/27 | 19.80 | 36.20 | 45.00 | 11.00 | 2.39 | Testing
2 | 2016/11/28 | 19.60 | 35.10 | 46.00 | 9.61 | 2.45 | Testing
3 | 2015/7/15 | 8.58 | 7.58 | 6.39 | 1.65 | 0 | Training
3 | 2015/7/22 | 7.67 | 7.11 | 5.54 | 1.58 | 0 | Training
3 | 2015/7/29 | 8.25 | 7.33 | 6.59 | 1.85 | 0 | Training
3 | 2015/8/5 | 8.62 | 7 | 5.88 | 1.66 | 0 | Training
3 | 2015/8/12 | 7.92 | 7.64 | 6.56 | 1.62 | 0 | Training
3 | 2015/8/19 | 7.57 | 7.3 | 6.4 | 1.87 | 0 | Training
3 | 2015/8/26 | 8.52 | 7.68 | 5.67 | 1.59 | 0 | Training
3 | 2015/9/2 | 7.51 | 7.48 | 6.53 | 1.74 | 0 | Training
3 | 2015/9/9 | 8.14 | 7.67 | 5.64 | 1.6 | 0 | Training
3 | 2015/9/16 | 8.3 | 9.51 | 6.91 | 2.11 | 0 | Training
3 | 2015/9/23 | 7.84 | 9.89 | 7.14 | 2.11 | 0 | Training
3 | 2015/9/30 | 7.57 | 9.76 | 7.58 | 2.2 | 0 | Training
3 | 2015/10/7 | 8.68 | 9.52 | 6.95 | 2.11 | 0 | Training
3 | 2015/10/14 | 7.94 | 8.97 | 6.64 | 1.97 | 0 | Testing
3 | 2015/10/21 | 7.79 | 8.11 | 6.59 | 2.19 | 0 | Testing
Table 3. The average forecasting performance of the training set (Case 1, 50 times).

Dissolved Gas | Kernel Type | MAPE/% Max | MAPE/% Min | MAPE/% Average | Average r2
H2 | Linear | 0.9132 | 0.5176 | 0.8227 ± 0.1018 | 0.0403 ± 0.0064
H2 | Sigmoid | 0.6392 | 0.4035 | 0.5484 ± 0.0801 | 0.0649 ± 0.0188
H2 | Gaussian | 0.6917 | 0.4961 | 0.6077 ± 0.0484 | 0.9958 ± 0.0018
H2 | Polynomial | 1.4718 | 0.8222 | 1.1495 ± 0.1422 | 0.2311 ± 0.0834
H2 | Mixed | 0.7411 | 0.0653 | 0.4144 ± 0.1764 | 0.9881 ± 0.0272
CH4 | Linear | 1.7395 | 0.4885 | 1.1726 ± 0.3569 | 0.1582 ± 0.0178
CH4 | Sigmoid | 2.6103 | 0.3090 | 1.9075 ± 0.3687 | 0.0049 ± 0.0217
CH4 | Gaussian | 2.7872 | 2.5508 | 2.7202 ± 0.0631 | 0.9838 ± 0.0067
CH4 | Polynomial | 2.5717 | 0.2112 | 1.5787 ± 0.6075 | 0.7397 ± 0.2489
CH4 | Mixed | 2.3868 | 0.0176 | 1.1702 ± 0.6838 | 0.9035 ± 0.1975
C2H6 | Linear | 2.5265 | 1.1519 | 1.4769 ± 0.3788 | 0.1445 ± 0.0059
C2H6 | Sigmoid | 3.7174 | 3.6412 | 3.6703 ± 0.0201 | 0.0007 ± 0.0001
C2H6 | Gaussian | 2.1104 | 2.0794 | 2.1027 ± 0.0006 | 0.9854 ± 0.0042
C2H6 | Polynomial | 14.6658 | 0.1851 | 7.9456 ± 6.3304 | 0.6667 ± 0.3152
C2H6 | Mixed | 1.1391 | 0.0105 | 0.4101 ± 0.2536 | 0.9713 ± 0.0394
C2H4 | Linear | 1.1385 | 0.0957 | 0.4457 ± 0.3161 | 0.2592 ± 0.0135
C2H4 | Sigmoid | 9.4499 | 0.2294 | 3.3252 ± 1.9865 | 0.1056 ± 0.0514
C2H4 | Gaussian | 2.2305 | 1.2168 | 2.0206 ± 0.2003 | 0.9917 ± 0.0052
C2H4 | Polynomial | 1.7409 | 0.5681 | 1.5511 ± 0.3239 | 0.6648 ± 0.1375
C2H4 | Mixed | 2.8552 | 0.4731 | 1.4342 ± 0.4458 | 0.9590 ± 0.0180
Table 4. The average forecasting performance of the training set (Case 2, 50 times).

Dissolved Gas | Kernel Type | MAPE/% Max | MAPE/% Min | MAPE/% Average | Average r2
H2 | Linear | 3.8211 | 2.2219 | 3.2477 ± 0.3734 | 0.0474 ± 0.0055
H2 | Sigmoid | 3.9352 | 3.8265 | 3.8321 ± 0.0192 | 0.0252 ± 0.0392
H2 | Gaussian | 4.5223 | 3.9864 | 4.1491 ± 0.1088 | 0.9723 ± 0.0021
H2 | Polynomial | 11.6459 | 6.4116 | 10.7708 ± 0.773 | 0.9598 ± 0.0197
H2 | Mixed | 4.1166 | 3.9798 | 4.0138 ± 0.0311 | 0.9855 ± 0.0184
CH4 | Linear | 5.6681 | 0.9128 | 1.9240 ± 1.6286 | 0.1198 ± 0.0064
CH4 | Sigmoid | 35.2251 | 3.8304 | 19.7238 ± 4.9772 | 0.0037 ± 0.0041
CH4 | Gaussian | 5.7169 | 5.6574 | 5.6986 ± 0.0139 | 0.9921 ± 0.0007
CH4 | Polynomial | 38.498 | 12.885 | 34.8767 ± 7.934 | 0.9059 ± 0.1747
CH4 | Mixed | 5.4689 | 3.7155 | 4.9363 ± 0.4794 | 0.9877 ± 0.0309
C2H4 | Linear | 2.4426 | 0.1028 | 1.5430 ± 0.6544 | 0.5348 ± 0.0086
C2H4 | Sigmoid | 8.3909 | 0.1954 | 8.0604 ± 1.5451 | 0.0560 ± 0.0891
C2H4 | Gaussian | 5.5427 | 4.8443 | 5.2869 ± 0.1656 | 0.9694 ± 0.0239
C2H4 | Polynomial | 13.3713 | 9.7006 | 12.1296 ± 0.997 | 0.9009 ± 0.0084
C2H4 | Mixed | 3.18 | 1.5021 | 2.6799 ± 0.4240 | 0.9797 ± 0.0797
C2H6 | Linear | 10.2447 | 8.0635 | 9.2906 ± 0.6432 | 0.1107 ± 0.0112
C2H6 | Sigmoid | 11.3452 | 7.4219 | 10.1950 ± 1.2591 | 0.0051 ± 0.0074
C2H6 | Gaussian | 6.9159 | 6.7734 | 6.8489 ± 0.0324 | 0.9752 ± 0.0032
C2H6 | Polynomial | 20.9745 | 6.2449 | 19.6727 ± 2.644 | 0.8881 ± 0.0164
C2H6 | Mixed | 6.8225 | 6.8054 | 6.8085 ± 0.0035 | 0.9891 ± 0.0081
C2H2 | Linear | 2.5655 | 0.1149 | 1.6961 ± 0.6905 | 0.3469 ± 0.0059
C2H2 | Sigmoid | 5.2049 | 5.0081 | 5.1173 ± 0.0466 | 0.0006 ± 0.0005
C2H2 | Gaussian | 4.2828 | 4.1690 | 4.2138 ± 0.0299 | 0.9695 ± 0.0022
C2H2 | Polynomial | 10.714 | 4.92 | 9.6174 ± 1.3411 | 0.8397 ± 0.0134
C2H2 | Mixed | 3.1884 | 2.6363 | 2.9458 ± 0.1212 | 0.9934 ± 0.0079
Table 5. The optimal parameters of each dissolved gas sequence.

Case No. | Dissolved Gas | m | C | σ | ξ | d | ω | MAPE/% Training | MAPE/% Testing
1 | H2 | 3 | 45.2410 | 66.4078 | 0.0228 | 1.8197 | 0.9991 | 0.1884 | 0.0645
1 | CH4 | 3 | 64.0668 | 24.8862 | 0.0261 | 1.5696 | 0.3923 | 0.3509 | 1.0295
1 | C2H6 | 4 | 72.7747 | 68.2022 | 0.0051 | 2.8875 | 0.1179 | 0.0332 | 0.1292
1 | C2H4 | 3 | 51.1808 | 77.3654 | 0.0538 | 3.7792 | 0.2934 | 0.6412 | 0.4713
2 | H2 | 4 | 66.6143 | 59.5728 | 0.0273 | 1.1563 | 0.7490 | 0.6221 | 4.0578
2 | CH4 | 3 | 62.6013 | 53.9830 | 0.0033 | 2.0307 | 0.9092 | 0.0576 | 4.1765
2 | C2H6 | 5 | 44.8368 | 19.0071 | 0.0087 | 2.6108 | 0.9281 | 0.1917 | 1.5704
2 | C2H4 | 5 | 43.9790 | 68.0588 | 0.0134 | 1.0770 | 0.8621 | 0.2843 | 6.7836
2 | C2H2 | 5 | 63.0017 | 47.3237 | 0.0026 | 2.6083 | 0.7759 | 0.0754 | 2.9589
3 | H2 | 1 | 2.2736 | 91.1686 | 0.0639 | 1.0834 | 0.7572 | 1.0224 | 0.8085
3 | CH4 | 1 | 0.1742 | 55.3330 | 0.0577 | 3.0111 | 0.0381 | 4.9175 | 3.6875
3 | C2H6 | 5 | 6.3558 | 88.1067 | 0.0012 | 2.6507 | 0.9225 | 0.0430 | 6.3674
3 | C2H4 | 4 | 70.8075 | 95.6627 | 0.0269 | 1.7927 | 0.9516 | 0.8165 | 0.0185
Table 6. The comparison of the forecasting result (testing set).

Case | Kernel Type | H2 | CH4 | C2H6 | C2H4 | C2H2
1 | Actual/Mixed/RBF/Polynomial | 4.0300/4.0274/4.0101/4.0631 | 84.8500/85.7235/82.6856/85.0292 | 101.1900/101.3207/99.0858/102.5478 | 184.5300/185.4030/186.7754/187.6218 | --
1 | MAPE(%) (2015/7/30), Mixed/RBF/Polynomial | 0.0645/0.4938/0.8213 | 1.0295/2.5509/0.2112 | 0.1292/2.0794/1.3418 | 0.4731/1.2168/1.6755 | --
2 | Actual-1/Mixed/RBF/Polynomial | 19.8000/18.9596/18.913/20.434 | 36.2/37.2366/37.707/36.548 | 45.0000/44.4396/43.0012/51.855 | 11.0000/10.4411/10.3502/7.7145 | 2.3900/2.5220/2.5206/2.5333
2 | Actual-2/Mixed/RBF/Polynomial | 19.6000/18.8412/18.9117/17.7143 | 35.1000/37.0268/37.6111/36.5464 | 46.0000/45.1275/43.5865/43.442 | 9.6100/10.4255/10.3503/9.017 | 2.4500/2.4596/2.5258/2.1396
2 | MAPE1(%) (2016/11/27), Mixed/RBF/Polynomial | 4.2444/4.4848/10.5353 | 2.8635/4.1630/0.9613 | 1.2447/4.4418/15.2333 | 5.0809/5.9063/29.8636 | 5.5230/5.4644/5.9832
2 | MAPE2(%) (2016/11/28), Mixed/RBF/Polynomial | 3.8714/3.5102/9.6224 | 5.4894/7.1538/4.1197 | 1.896/5.2467/5.5608 | 8.4860/7.7034/6.1707 | 0.3918/3.0938/12.6938
3 | Actual-1/Mixed/RBF/Polynomial | 7.9400/7.9315/7.9160/7.5543 | 8.9700/9.4120/9.3905/9.2016 | 1.9700/2.0758/1.9234/1.4785 | 6.6400/6.6377/6.5933/5.0692 | --
3 | Actual-2/Mixed/RBF/Polynomial | 7.7900/7.6724/7.7044/7.8758 | 8.1100/8.5526/8.8907/8.4542 | 2.1900/2.0287/1.9234/1.1178 | 6.5900/6.5902/6.5933/4.0049 | --
3 | MAPE-1(%) (2015/10/14), Mixed/RBF/Polynomial | 0.1068/0.3025/4.8574 | 1.9171/4.6881/2.5814 | 5.3716/2.3650/24.9491 | 0.0340/0.7033/23.6561 | --
3 | MAPE-2(%) (2015/10/21), Mixed/RBF/Polynomial | 1.5102/1.0988/1.1014 | 5.4580/9.6268/4.2437 | 7.3631/12.1731/48.9590 | 0.0031/0.0501/39.2284 | --
Table 7. The comparison of experimental results for H2 (Case 1).

Method | Training Set MAPE | Training Set r2 | Testing Set MAPE
MKF-SVR | 0.4144 | 0.9881 | 0.0645
GRNN | 0.7625 | 0.8566 | 1.0893
RBFNN | 2.1712 | 0.3478 | 0.8734
GM | 2.6542 | 0.0911 | 0.2062
