1. Introduction
Electrical power dispatch has become more complex in recent years with the development of large electrical grids, ultra-high-voltage equipment, and long-distance power transmission. These trends have aggravated power system stability problems, and voltage collapse incidents have occurred frequently [1]. These incidents have caused enormous economic losses and social impacts, making voltage stability an increasingly important issue.
In studies of the static voltage stability of electrical systems, companies in the electrical sector recommend obtaining bus voltage profiles (P-V curves) as a function of system loading. These profiles are used to (1) determine the limits of power transfer between areas of a system; (2) adjust safety margins; (3) observe the behavior of the voltage in the buses of the analyzed electrical system; and (4) compare planning strategies for the adequate expansion and reinforcement of electrical networks to avoid loss of load. The P-V curves allow a qualitative evaluation of the different operating conditions of the electrical system under different loading and contingency conditions. Tracing the P-V curve is the most appropriate methodology for calculating stability margins. Thus, one of the main objectives of this study is to obtain the maximum loading point of the system.
Traditionally, P-V curves have been traced with conventional power flow based on the Newton method, increasing the load until the iterative process fails to converge (the Jacobian matrix becomes singular). For practical purposes, this last converged point is taken as the maximum loading point.
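As a hedged illustration of this procedure (not the paper's IEEE 14-bus study), the sketch below ramps the loading factor on a closed-form two-bus system, in which a slack bus of voltage E feeds a load (λP0, λQ0) through a reactance X; all numerical values are illustrative assumptions:

```python
# Illustrative two-bus system (not the paper's IEEE 14-bus network): a slack
# bus of voltage E feeds a load (lam*P0, lam*Q0) through a reactance X. The
# load-bus voltage magnitude V satisfies f(V, lam) = 0.
E, X, P0, Q0 = 1.0, 0.3, 1.0, 0.2

def f(V, lam):
    return V**4 + V**2 * (2*lam*Q0*X - E**2) + (X*lam)**2 * (P0**2 + Q0**2)

def fV(V, lam):  # derivative of f with respect to V (the "Jacobian" here)
    return 4*V**3 + 2*V * (2*lam*Q0*X - E**2)

def newton(lam, V):
    """Solve f(V, lam) = 0 for V; return None on divergence/singularity."""
    for _ in range(30):
        d = fV(V, lam)
        if abs(d) < 1e-9:
            return None          # singular Jacobian at the nose
        V -= f(V, lam) / d
        if abs(f(V, lam)) < 1e-10:
            return V
    return None                  # no convergence beyond the loading limit

# Ramp the loading factor until the power flow stops converging.
lam, V, step = 0.0, 1.0, 0.01
while True:
    sol = newton(lam + step, V)
    if sol is None:
        break
    lam, V = lam + step, sol
# lam now approximates the maximum loading point of this toy system
```

Beyond the nose of the P-V curve the mismatch equation has no real solution, so Newton's method stops converging, which is exactly the termination condition described above.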
Continuation power flow (CPF) is the most common static voltage stability analysis method [2,3,4] and includes four steps: prediction, step-size control, parameterization, and correction.
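These steps can be sketched on a closed-form two-bus example (a slack bus E feeding a load (λP0, λQ0) through reactance X; all values are illustrative assumptions, not the paper's system): a predictor steps the continuation parameter, the parameterization switches from λ to V near the nose, and a corrector restores the power-flow solution:

```python
import math

# Illustrative two-bus system (not the paper's IEEE 14-bus network): a slack
# bus E feeds a load (lam*P0, lam*Q0) through reactance X; V is the load-bus
# voltage magnitude and f(V, lam) = 0 is the power-flow condition.
E, X, P0, Q0 = 1.0, 0.3, 1.0, 0.2
A = X**2 * (P0**2 + Q0**2)

def f(V, lam):
    return V**4 + V**2 * (2*lam*Q0*X - E**2) + A*lam**2

def fV(V, lam):                       # df/dV, vanishes at the nose
    return 4*V**3 + 2*V * (2*lam*Q0*X - E**2)

def correct_V(lam, V):                # correction step with lam held fixed
    for _ in range(50):               # plain Newton iteration in V
        V -= f(V, lam) / fV(V, lam)
    return V

def correct_lam(V, lam_guess):        # correction step with V held fixed:
    b, c = 2*V**2*Q0*X, V**4 - V**2*E**2      # f is quadratic in lam
    r = math.sqrt(b*b - 4*A*c)
    return min(((-b + r) / (2*A), (-b - r) / (2*A)),
               key=lambda l: abs(l - lam_guess))

lam, V, param = 0.0, 1.0, 'lam'
curve = [(lam, V)]
for _ in range(300):
    if param == 'lam':
        lam += 0.05                   # prediction: step the parameter lam
        V = correct_V(lam, V)         # correction
        if abs(fV(V, lam)) < 0.4:     # parameterization: switch near nose
            param = 'V'
    else:
        V -= 0.02                     # prediction: step the parameter V
        lam = correct_lam(V, lam)     # correction
    curve.append((lam, V))
    if V < 0.3:                       # stop well down the lower branch
        break

lam_max = max(l for l, _ in curve)    # nose (maximum loading point)
```

Switching the continuation parameter from λ to V is what lets the trace pass around the nose, where the Jacobian with respect to V is singular, and continue onto the lower branch of the P-V curve.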
Systems that operate close to their operational limits are more exposed to contingencies. In this context, security analysis is critical for identifying contingencies that may affect the system. An electrical power system is subject to a large number of contingencies, but only a few are severe enough to cause instability [5].
The Western System Coordinating Council (WSCC) [6] requires its member companies to evaluate voltage stability margins through P-V and V-Q analyses, demanding a real power margin of at least 5% under any single (N-1) contingency and of at least 2.5% under a severe (N-2) contingency.
In Brazil, the grid procedures of the National Electric System Operator (ONS) [7], following the same WSCC criteria, suggest as an expansion planning criterion a stability margin of at least 6% under single contingencies.
These requirements have motivated companies to invest in tools aimed at improving the operation of electrical power systems (EPS); artificial neural networks (ANNs) are one such tool.
A relatively new and promising learning algorithm called the extreme learning machine (ELM) was proposed in [8] for more accurate and efficient voltage stability margin prediction. The inputs to the prediction model are the system operating parameters and the loading direction, and the output is the voltage stability margin. The average percentage error of the algorithm is only 3.32% and the average error is only 0.0495, both of which are satisfactory for practical use.
Promising results were found in [9], whose ANN reproduced the results of conventional voltage stability calculation methods with the same high precision and speed. For this purpose, the loading parameter and the voltage stability margin index were calculated using eight different input variables and fourteen different training functions, allowing the authors to verify which training function was the fastest and best suited to predicting the loading margin and the voltage stability index.
Many works involving artificial neural networks have been proposed in the literature, especially with MLP and RBF networks. Reference [10] used artificial intelligence (AI) to identify predictive biomarkers for diffuse large B-cell lymphoma prognosis; two neural networks, MLP and RBF, demonstrate a methodology for efficiently identifying biomarkers. Reference [11] presents an application of a multilayer perceptron (MLP) neural network model for the fast, automatic prediction of the geometry of finished products; the results indicate accurate training and testing (an accuracy rate exceeding 92%), demonstrating the feasibility of the proposed method.
In this context, this study presents a different approach to obtaining post-contingency P-V curves. The use of multilayer perceptron (MLP) and radial basis function (RBF) artificial neural networks (ANN) is proposed to estimate the voltage magnitude in a system subjected to a simple or severe contingency.
2. Materials and Methods
The system studied in this research corresponds to the IEEE 14-bus configuration, shown in Figure 1. The 1890 samples used for training, validation, and testing were obtained using the method presented in [3]. Each sample comprised 18 values: the ANN input data (4 values), represented by the loading factor λ, the real and reactive power generated at the reference bus (Pgslack and Qgslack), and the branch number (transmission line or transformer); and the output data (14 values), represented by the voltage magnitudes of all the buses in the system.
The IEEE 14-bus system has 20 branches, as shown in Figure 1. Each branch can be represented by a transmission line or a transformer connecting two buses of the system. Ninety samples were obtained for each branch removed from the system, representing the applied contingency. Branch 1 (r1) is subjected to a severe N-2 (double) contingency when it is removed from the system, resulting in a drastic reduction in the system loading margin, as shown in the results. The other contingencies are considered simple (N-1).
The ANNs used were an MLP [12] trained with the backpropagation algorithm [13] and an RBF network [14,15], both with three layers: an input layer with 4 neurons, represented by the loading factor λ, the real and reactive power generated at the reference bus (Pgslack and Qgslack), and the branch number (transmission line or transformer); an intermediate layer with 15 neurons for the MLP network and s neurons for the RBF network, where s is the number of centers of the network, which also corresponds to the number of radial basis functions (s ≤ p, in which p represents the number of samples); and an output layer with 14 neurons (the voltage magnitudes of all buses in the system), as shown in Figure 2. The software used to prepare the data and obtain the results was Matlab® [16].
The RBF network (Figure 2b) differs significantly from the MLP network (Figure 2a): its hidden layer contains radial basis functions whose activation depends on the distance between the input vector and a prototype vector (the centers). This gives it several advantages, such as faster and more efficient training, faster learning, and no need to start with random weights, because its training is incremental. RBF networks perform better than MLPs in function approximation and time-series problems, while MLPs perform better in classification problems. These networks belong to the family of Gaussian function techniques [17].
The continuation power flow, unlike a random classification experiment, consists of a series of nonlinear equations whose solution is obtained by varying the loading factor parameter; i.e., for each loading factor, a voltage magnitude value is obtained at each bus of the system. For this type of data, MLP networks, and especially RBF networks, which are better at approximating functions (the nonlinear functions of the continuation power flow), have good applicability.
The radial basis, hyperbolic tangent, and linear activation functions used in the two networks are represented in (1), (2), and (3), respectively:

φ(r) = exp(−r²/(2σ²))  (1)

f(x) = tanh(t·x)  (2)

f(x) = x  (3)

where σ is the width of the radial basis function and t is an arbitrary constant, corresponding to the slope of the curve.
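A minimal sketch of these three activation functions follows; the Gaussian form for the radial function and the default values of σ and t are standard-form assumptions, not values taken from the paper:

```python
import math

# Hedged sketch of the three activation functions; the Gaussian form and
# the width sigma for the radial function are standard-form assumptions.
def radial(r, sigma=1.0):             # (1) Gaussian radial basis function
    return math.exp(-r**2 / (2.0 * sigma**2))

def tanh_act(x, t=1.0):               # (2) hyperbolic tangent, slope t
    return (math.exp(t*x) - math.exp(-t*x)) / (math.exp(t*x) + math.exp(-t*x))

def linear(x):                        # (3) linear (identity) activation
    return x
```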
The distance between the central neuron and the input pattern is calculated when an input pattern is presented to the RBF network, and the network output for that neuron is the result of applying the radial basis function to this distance. The mean square error (MSE) vector of the neural networks is calculated using (4):

MSE = (1/p) Σₖ (Yob,k − Ydes,k)²  (4)

in which Yob and Ydes are the obtained and desired outputs of the ANN, compared during network training (MLP) and creation (RBF), and the sum runs over the p samples. The more similar they are to each other, the lower the error and the better the weight adjustment.
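A one-line sketch of this error measure, assuming an element-wise comparison of the obtained and desired outputs over the p samples:

```python
# Minimal sketch of the MSE criterion: the mean squared difference between
# the obtained (y_ob) and desired (y_des) outputs over p samples.
def mse(y_ob, y_des):
    p = len(y_ob)
    return sum((o - d)**2 for o, d in zip(y_ob, y_des)) / p
```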
Figure 3 presents the flowchart of the networks used in this paper. For the RBF and MLP networks, R corresponds to the number of inputs (4 inputs: the loading factor λ, the real and reactive power generated at the reference bus (Pgslack and Qgslack), and the branch number (transmission line or transformer)); p to the number of samples (1322 samples for training); s to the number of centers, i.e., weights (only for the RBF network; this is the quantity to be optimized for better performance, and 108 centers were used); and T to the number of desired outputs, i.e., targets (for both networks, 1322 samples of desired outputs, the same number as the training samples). For the MLP network only, m corresponds to the number of neurons in the middle layer (15 neurons) and i to the number of neurons in the output layer (14 neurons). ‖W1 − x‖ represents the Euclidean distance weight function.
Neural networks that use the backpropagation algorithm, like many other types of artificial neural networks, can be seen as “black boxes”: it is largely unknown why the network reaches a certain result, since the models do not justify their answers. With this in mind, many studies have aimed at extracting knowledge from artificial neural networks and creating explanatory procedures that justify the network’s behavior in certain situations [18]. It should also be noted that a different value will be obtained each time the network is retrained [18]. The values specified for the training and the number of layers were chosen after several tests were performed; other values could be used.
During training with the backpropagation algorithm, the network operates in a two-step sequence. First, a pattern is presented to the network’s input layer, and the resulting activity flows through the network, layer by layer, until the response is produced by the output layer. Second, the obtained output is compared to the desired output for that particular pattern; if they do not match, the error is calculated. The error is then propagated from the output layer back to the input layer, and the connection weights of the internal layers are modified as the error is backpropagated. This procedure underlines the application potential of ANNs, which can act both as classification and as prediction tools.
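The two-phase sequence can be sketched as follows, on an illustrative 1-3-1 network fitting a toy linear target; the network size, learning rate, and target function are assumptions for the sketch, not the paper's 4-15-14 configuration:

```python
import math, random

# Toy illustration of the two-phase backpropagation cycle: forward pass,
# then the error is propagated back and both weight layers are updated.
# The 1-3-1 shape, learning rate, and target y = 0.5*x are assumptions.
random.seed(0)
H, lr = 3, 0.1
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]   # hidden -> output
b2 = 0.0
data = [(k / 10.0, 0.5 * k / 10.0) for k in range(-10, 11)]

for _ in range(2000):
    for x, t in data:
        # phase 1: forward pass, layer by layer, to produce the output
        h = [math.tanh(w1[j]*x + b1[j]) for j in range(H)]
        y = sum(w2[j]*h[j] for j in range(H)) + b2
        # phase 2: compare with the desired output and backpropagate
        e = y - t                                    # output error
        for j in range(H):
            g = e * w2[j] * (1 - h[j]**2)            # error through tanh
            w2[j] -= lr * e * h[j]
            w1[j] -= lr * g * x
            b1[j] -= lr * g
        b2 -= lr * e

final_mse = sum((sum(w2[j]*math.tanh(w1[j]*x + b1[j]) for j in range(H))
                 + b2 - t)**2 for x, t in data) / len(data)
# final_mse should now be small: the network has learned the mapping
```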
For the RBF network, the error is propagated from the output layer to the input layer and the number of centers s is modified (increased) until the value of W2 is optimized and the error is minimized. The maximum value that could be used for the number of centers in this study is 1322 (number of samples p), which also corresponds to the number of radial basis functions. In this paper, only 108 centers (radial basis functions) were needed for MSE < 0.001.
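A hedged sketch of this incremental construction follows, on a one-dimensional toy target rather than the paper's power-flow data: centers are added one at a time (here, at the worst-fit training sample, one plausible selection rule) and the output weights W2 are refit by least squares until the MSE criterion is met:

```python
import numpy as np

# Hedged sketch of incremental RBF construction on a toy 1-D target (sin),
# not the paper's data. Centers are picked from the training samples (the
# worst-fit point) and the output weights W2 are refit until MSE < tol.
X = np.linspace(-1.0, 1.0, 60)
Y = np.sin(3.0 * X)                   # stand-in for the desired outputs
sigma, tol = 0.3, 1e-3                # illustrative width and MSE criterion

def design(x, c):                     # matrix of radial basis activations
    return np.exp(-(x[:, None] - c[None, :])**2 / (2.0 * sigma**2))

centers = [0]                         # indices of the chosen centers
for _ in range(len(X)):
    Phi = design(X, X[centers])
    W2, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    err = Phi @ W2 - Y
    train_mse = float(np.mean(err**2))
    if train_mse < tol:               # stopping criterion reached
        break
    rest = [i for i in range(len(X)) if i not in centers]
    centers.append(rest[int(np.argmax(np.abs(err[rest])))])
# len(centers) radial basis functions were enough for MSE < tol
```

The number of centers grows only until the error criterion is satisfied, mirroring the behavior described above, where 108 of a possible 1322 centers sufficed.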
3. Results and Discussion
Of the 1890 samples, 70% were randomly selected for training (1322 samples), 15% for validation (284 samples), and 15% for testing (284 samples). The results are shown for 100% of the samples, i.e., for all three phases simultaneously.
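The split described above can be reproduced as follows; the random seed is an arbitrary choice, and the counts match those reported (1322/284/284):

```python
import random

# Sketch of the 70/15/15 random split of the 1890 samples; the seed is an
# arbitrary choice, and the counts match those reported in the text.
random.seed(42)
idx = list(range(1890))
random.shuffle(idx)
train_idx = idx[:1322]                # 70% for training
val_idx = idx[1322:1322 + 284]        # 15% for validation
test_idx = idx[1322 + 284:]           # 15% for testing
```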
Figure 4 shows the training performance of the artificial neural networks used in this study. Figure 4a shows the MSE for each iteration of the MLP network: a value of 0.00092773, below the target of 0.001, obtained in 48 iterations and 7 s of processing. Figure 4b shows the error histogram (Yob relative to the desired Ydes), with 20 intervals for the 26,460 training data; the errors were concentrated around zero for most of the data. The error in estimating the voltage magnitudes using the MLP network was 1.023%. Better results were found for the RBF network, with an MSE of 0.000163906, well below the target (0.001), as shown in Figure 4c, and with a higher accumulation of errors around zero in the histogram, as shown in Figure 4d. The error in estimating the voltage magnitudes using the RBF network was 0.14%.
Table 1 confirms these results. The training parameters were the number of iterations, time, performance, and correlation. For the MLP network, after 7 s and 48 iterations (48 comparisons |Yob − Ydes| < 0.001), the error was smaller than the established 0.001, reaching 0.00092773, with a correlation of 0.98977 between the desired and obtained outputs. Similar results are presented for the RBF network.
In [10], as in this paper, the MLP and RBF networks were compared, and the MLP network performed better than the RBF network. This is explained by the fact that [10] focuses on data classification, in which the MLP network, depending on the established parameters, outperforms the RBF network. The application in this paper, however, focuses on function approximation, for which the RBF network is better and faster than the MLP [10,17].
Figure 5 shows a correlation analysis for both methodologies in the training, validation, and testing of the networks. Figure 5a shows the correlation between the obtained and desired outputs for the MLP network, and Figure 5b shows the same for the RBF network. Both networks produced good results, with a slight improvement for the RBF network, whose R value is very close to 1 (0.9986), showing that more than 99% of the variance of the Yob variables is explained by the Ydes variables.
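A minimal sketch of this metric, assuming R is the Pearson correlation between the obtained and desired outputs (so that R² is the fraction of explained variance):

```python
import math

# Sketch of the correlation metric, assuming R is the Pearson correlation
# between obtained (Y_ob) and desired (Y_des) outputs; R**2 is then the
# fraction of variance in Y_ob explained by Y_des.
def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma)**2 for x in a)
    vb = sum((y - mb)**2 for y in b)
    return cov / math.sqrt(va * vb)
```

For R = 0.9986, R² ≈ 0.997, consistent with the "more than 99% explained" reading above.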
Figure 6 shows the voltage magnitudes of critical bus 14 for all 1890 samples, that is, for all the contingencies of the system, with the desired output (Ydes) vs. the output obtained (Yob) via ANN in training, validation, and testing. Figure 6a,b show the performance of the MLP and RBF networks, respectively. Again, the best result is observed for the RBF network.
The number of centers s automatically adopted to create the RBF network was 108 for vector W1, and the processing time was 3 s after 100 iterations, as shown in Table 1. Although the number of iterations is higher than for the MLP network, the number of calculations performed is lower, resulting in less CPU time [19]. Other numbers of centers could have been adopted by the network, but 108 were already sufficient to reach the adopted criterion (0.001). The higher the number of centers, the more accurate the obtained value Yob.
Figure 7 shows the P-V curves of critical bus 14 for all the contingencies of the studied system, also presenting the similarity between the desired (Ydes) and obtained (Yob) outputs of the MLP neural network, whose MSE was 0.00092773.
In the operating phase (validation and testing), samples that were not part of the training process were estimated by the network. We observed an approximation error of around 0.01945 between the output obtained by the network (Yob) and the desired values (Ydes), showing that the network can act as an estimator of nodal voltage magnitudes.
For instance, Figure 7 shows three curves in the foreground relative to contingency r1: one corresponds to the pre-contingency P-V curve and the other two to the post-contingency curves (Ydes and Yob). Contingency r1, corresponding to the outage of the transmission line between buses 1 and 2 (N-2) (Figure 1), presents a significant reduction in the loading margin relative to the base case. Discrepancies can be observed at some points between the obtained and desired P-V curves. This behavior no longer occurs in Figure 8, where the P-V curves obtained by the RBF neural network are compared to the desired ones, showing a higher similarity between them. This is due to the better training of the RBF network, whose MSE of around 0.000163906 is well below that of the MLP network; the obtained P-V curves (Yob) are practically identical to the desired curves (Ydes).
Table 2 shows the values of the maximum loading points of bus 14 for both networks. Column 1 presents all the contingencies of the IEEE 14-bus system (r1–r20), including r0, the pre-contingency (normal operating) condition. Column 2 presents the loading factor (λ) of the system at the maximum loading point. Column 3 presents the voltage magnitude values (V14) obtained by the method presented in [3], representing the desired output (Ydes). Columns 4 and 5 present the voltage magnitude values (V14) obtained by the MLP and RBF methods (Yob). Better performance can be observed for the RBF network compared to the MLP network. The values at the maximum loading points correspond to the voltage magnitude at the maximum load value, i.e., at the maximum value of the loading factor λ (the critical point of the P-V curve). The desired output values (Ydes) (corresponding to the P-V curve) were obtained by the method proposed in [3], a modified parameterized Newton method, with a precision of 10⁻⁵.
Table 3 shows the differences between the desired and obtained maximum loading point values at bus 14. The average error was 0.0266 for the MLP network and 0.0122 for the RBF network, confirming the latter’s better performance in obtaining the maximum loading point and, consequently, the loading margins and P-V curves of the studied system. Similar results were obtained for the other buses of the system.
4. Conclusions
This study presented methodologies based on MLP and RBF artificial neural networks for obtaining the voltage magnitudes and complete P-V curves of electrical power systems subjected to a contingency, as a function of the loading factor λ, the real and reactive powers generated at the reference bus (Pgslack and Qgslack), and the branch number. The MLP network trained well, with an MSE of 0.00092773 at the forty-eighth iteration, a training time of 7 s, and an average R value of 0.98977 for all contingencies; the difference between the desired and obtained outputs was 0.026643 at bus 14 and the other buses of the system. In contrast, the RBF network reached a lower MSE of 0.000163906 in 3 s of training, showing that the obtained output was very close to the desired one; that is, the RBF network performed better than the MLP network, with a difference between the desired and obtained outputs of around 0.012257 at bus 14 and the other buses, for network training, validation, and testing alike. In the operating phase (validation and testing), where the network estimated samples that were not part of the training process, we observed an approximation error of around 0.01945 between the output obtained by the network (Yob) and the desired values (Ydes), showing that the network can act as an estimator. Moreover, the loading margins, i.e., all pre- and post-contingency P-V curves, were obtained, showing that the proposed tools have high potential for estimating nodal voltage magnitudes.