Article

Deep Neural Network-Based Removal of a Decaying DC Offset in Less Than One Cycle for Digital Relaying

Department of Electrical Engineering, Myongji University, Yongin 17058, Korea
* Author to whom correspondence should be addressed.
Energies 2022, 15(7), 2644; https://doi.org/10.3390/en15072644
Submission received: 23 February 2022 / Revised: 21 March 2022 / Accepted: 28 March 2022 / Published: 4 April 2022

Abstract
To make correct decisions during both normal and transient states, the signal processing for relay protection must be completed, and the correct task designated, within the shortest possible duration. This paper proposes a Deep Neural Network (DNN) autoencoder stack to estimate the phasor of a fault current containing a decaying DC offset, harmonics, and noise. The size of the data window was reduced to less than one cycle to ensure that the phasor is computed rapidly. The effects of different numbers of data samples per cycle are discussed. The simulations revealed that the DNN autoencoder stack reduced the size of the data window to approximately 90% of a cycle, and that the DNN accuracy depended on the number of samples per cycle (32, 64, or 128) and the training dataset used. The fewer the samples per cycle in the training dataset, the more training was required. After training on an adequate dataset, the delay in predicting the correct magnitude was shorter than that of the partial sums (PS) method, without requiring an additional filter. Similarly, the proposed DNN outperformed the DNN based on a full cycle of the decaying DC offset in terms of converging time. Taking advantage of its smaller size and rapid converging time, the proposed DNN could be deployed for real-time relay protection and centralized backup protection.

1. Introduction

The Internet of Things and real-time operation are key in the 21st century; even a 1 ms improvement in the response time of a single system is worth considering. When protecting power systems, it can be challenging to ensure rapid and accurate fault current estimation in the presence of a DC offset. In the 2000s, the Taiwan Power Company (TPC), which operates all power utilities in Taiwan, specified that a distance protection relay must act within 1.5 cycles. During normal operation, the discrete Fourier transform (DFT) adequately handles the task, but it fails during fault intervals featuring a DC offset and harmonics [1,2].
Since the 1970s, many methods have been developed to deal with a DC offset [3,4,5,6,7,8,9,10,11,12,13]. By assuming that the DC offset has a specific time constant that depends on the power system configuration and fault location, least-squares error methods, the Kalman filter, and recursive discrete-time filters have been proposed to tackle it [3,4,5,6]. The proposal in [7] avoids the time-constant issue by creating a new filter in which the time constant can vary, but this filter cannot deal with signals containing harmonics. The digital mimic filter in [8] can completely remove the DC offset, but only when the time constant of the DC offset matches the assumed one. A recursive least-squares error method applicable over a wide range of time constants was presented in [9], but the range is still limited. Other methods address particular DC offsets, such as those arising in gearbox fault diagnosis, PR current control of a single-phase grid-tied inverter, and biphasic stimulation [11,12,13].
To solve the time-constant problem, a modified phasor-domain DFT can be used to remove the DC offset from the fault current or voltage waveform [1,2,10]. Although the response is smooth and independent of the time constant, it fails if the fault signal features any damped high-frequency component.
To overcome this drawback, Nam et al. used an analytical method to accurately measure the fundamental frequency component of a fault current signal distorted by a DC offset, a characteristic frequency component, and harmonics [14]. Their method measures the fundamental frequency component well, but the data window is one cycle plus seven samples wide. Modified DFTs have also been proposed to remove the DC offset in the presence of noise [15,16,17,18]. Other similar methods [19,20,21] share the same characteristic: they provide a smooth signal and are robust against noise, but require more than one cycle to perform the task. DC-offset removal in the time domain has been proposed to reduce this time by using a half-cycle, a half-cycle plus one sample, or a half-cycle plus two samples [22,23,24,25]; the operating times are indeed shorter, but the algorithms fail if high-frequency components are present.
The use of a Deep Neural Network (DNN) has been suggested to eliminate dependence on a time constant and reduce fragility in the presence of high-frequency components and noise [26,27]. The DNN method performs better than a DC-offset filter; data from only a single cycle are required, and the method detects the fundamental component of the fault current with the DC offset, harmonics, and noise.
This paper proposes a Deep Neural Network (DNN) autoencoder stack to reconstruct the current signal using less than one cycle of the power system waveform. In this proposal, an autoencoder is applied to reconstruct the distorted signal by stacking the encoding layers of each autoencoder and fine-tuning. Reducing the size of the input to less than one cycle has several benefits for real applications:
  • A smaller DNN input produces a smaller DNN structure, which requires less computation and enables deployment on various Evaluation Module (EVM) boards.
  • Additional filters are not required during signal processing. The signal can be analyzed immediately after analog-to-digital conversion, which reduces the signal processing time.
  • Under the same testing conditions, the proposed DNN outperforms previous studies in terms of converging time.
Owing to the reduced signal-processing time, lower computational burden, and rapid converging time, the proposed DNN could be applied to both real-time relay protection and centralized backup protection. The details of the proposed DNN are as follows.

2. Deep Neural Network-Based Removal of a Decaying DC Offset

A DNN optimally handles nonlinear applications [26,27,28,29,30,31] and can remove DC offsets from various fault currents (featuring both noise and harmonics) when trained on a one-cycle data window (64 samples per cycle) [26,27]. In the previous study, the DNN had 64 inputs and 64 outputs. This paper proposes to remove the DC offset from a fault signal by training on less than one cycle of the waveform with different numbers of samples per cycle (32, 64, and 128). In this case, the required input is smaller than the number of samples per cycle, and the DNN structure is also smaller, which is beneficial for the application. Training a DNN involves two significant parts: the DNN structure and the training dataset.

2.1. DNN Architecture

The Deep Neural Network (DNN) is a multilayer Artificial Neural Network (ANN). A typical ANN consists of an input layer, one or two hidden layers, and an output layer. The training results of such an ANN often cannot improve further because of the limited number of hidden layers. Although more layers can be added to an ANN, the results are insufficient, since significant loss occurs during forward and backward training. The development of the DNN overcomes this limitation. A simple form of the DNN is presented in Equation (1), where Y is the output, X is the input, f is the activation function, and w and b are the weights and biases of the hidden layers indexed by the subscripts.
$$Y = f_2\left(w_2\, f_1(w_1 X + b_1) + b_2\right) \tag{1}$$
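For illustration, the following minimal NumPy sketch evaluates Equation (1) for a two-hidden-layer network. The dimensions, random weights, and tanh activation are illustrative assumptions, not the trained configuration.

```python
import numpy as np

def forward(X, w1, b1, w2, b2, f1, f2):
    # Equation (1): Y = f2(w2 * f1(w1 * X + b1) + b2)
    hidden = f1(w1 @ X + b1)       # first hidden layer
    return f2(w2 @ hidden + b2)    # second hidden layer -> output

# Illustrative dimensions only: 58 inputs, 40 hidden neurons, 58 outputs.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(40, 58)) * 0.1, np.zeros(40)
w2, b2 = rng.normal(size=(58, 40)) * 0.1, np.zeros(58)
Y = forward(rng.normal(size=58), w1, b1, w2, b2, np.tanh, lambda x: x)
```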
In each hidden layer, there is a component called the neuron or node. The number of nodes affects the overall training of the whole system. Too little or too many nodes might not lead to the desired outcome. Thus, the number of nodes should be carefully selected. If all the requirement training factors are good enough, such as the optimizer, activation function, learning rate, batch sizes, and epoch, the acquired results should be expected. In this study, the numbers of samples per cycle selected for training are 32, 64, and 128. The inputs and outputs of DNN of the previous study should be 32, 64, and 128, respectively, meanwhile, this study minimizes the number of inputs and outputs of the proposed DNN. Our DNN features an autoencoder stack that optimizes the number of layers and the number of neurons in each layer shown in Figure 1. Firstly, the number of nodes in each layer is chosen by applying a method called autoencoder. Autoencoder is a method that compresses the input into a smaller dimension and produces an output, which has the same quality as the input after decompressing the small dimension. The processes of compression and decompression are known as encoding and decoding. We apply 2 simple densely connected layers for this process: one is for encoding and another is for decoding. By using this method, we can reduce noise and avoid over fitting. From this application in Figure 1, we were able to select a suitable number of nodes in each hidden layer by comparing the training loss function in Equation (2) produced by the different nodes.
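The following sketch shows one way to build such a two-layer (encode/decode) autoencoder in Keras; the layer sizes and training calls are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

def build_autoencoder(n_inputs=58, n_code=40):
    # Two densely connected layers: one encodes the window into a smaller
    # dimension, the other decodes it back; sizes here are illustrative.
    inputs = tf.keras.Input(shape=(n_inputs,))
    x = tf.keras.layers.Dense(n_code)(inputs)
    code = tf.keras.layers.LeakyReLU(0.01)(x)        # encoding
    decoded = tf.keras.layers.Dense(n_inputs)(code)  # decoding
    autoencoder = tf.keras.Model(inputs, decoded)
    encoder = tf.keras.Model(inputs, code)  # reused to extract features later
    autoencoder.compile(optimizer='adam', loss='mse')
    return autoencoder, encoder

# Pre-training reconstructs the input itself (input == target):
# ae, enc = build_autoencoder()
# ae.fit(X_windows, X_windows, batch_size=100, epochs=30)
```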
The number of hidden layers is determined as follows. As shown in Figure 1, the encoded feature of the autoencoder in hidden layer 1 is extracted and used as the input for training hidden layer 2. This process could continue indefinitely, but training is stopped, and the number of hidden layers is fixed, once the results no longer show significant improvement according to the loss function in Equation (2), where L is the cost function and n is the number of samples. This loss function is known as the Root-Mean-Square Deviation (RMSD) or Root-Mean-Square Error (RMSE).
$$L = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\mathrm{output}_i - \mathrm{Label}_i\right)^2} \tag{2}$$
The parameters for training are as follows: the batch size is 100, the number of epochs is 30, and the learning rate is 0.001 with a decay rate of 0.4. The training is conducted for 10 loops. The Adam (adaptive moment) optimizer is selected for its fast and powerful convergence [32]. The first and second moments of the gradient are estimated and used to adapt the learning rate for every network weight. The nth moment of a random variable X is defined by Equation (3), where m is the moment.
$$m_n = E[X^n] \tag{3}$$
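A hedged sketch of this training setup in Keras follows. Interpreting the stated decay rate of 0.4 as an exponential learning-rate schedule is our assumption, and decay_steps is illustrative.

```python
import tensorflow as tf

# Learning-rate decay: our reading of "learning rate 0.001, decay rate 0.4".
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001, decay_steps=1000, decay_rate=0.4)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)

def rmse(y_true, y_pred):
    # Equation (2): root-mean-square error between DNN output and label.
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

# model.compile(optimizer=optimizer, loss=rmse)
# model.fit(X_train, y_train, batch_size=100, epochs=30)
```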
During the training session, the Leaky Rectified Linear Unit (Leaky ReLU) in Equation (4) is utilized as the activation function because, unlike the Rectified Linear Unit (ReLU) in Equation (5), it does not discard negative outputs.
$$f(x) = \begin{cases} ax & \text{if } x < 0 \\ x & \text{otherwise} \end{cases} \tag{4}$$
$$f(x) = \begin{cases} 0 & \text{if } x < 0 \\ x & \text{otherwise} \end{cases} \tag{5}$$
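A direct NumPy rendering of Equations (4) and (5) might read as follows; the negative slope a is illustrative.

```python
import numpy as np

def leaky_relu(x, a=0.01):
    # Equation (4): keeps a scaled negative output; slope `a` is illustrative.
    return np.where(x < 0, a * x, x)

def relu(x):
    # Equation (5): discards negative outputs entirely.
    return np.where(x < 0, 0.0, x)
```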
The procedure of finding the number of hidden layers and the respective number of nodes is known as pre-training. After each element has been defined in pre-training, the DNN is reconstructed as a fully connected DNN (FCDNN), shown in Figure 2. The FCDNN consists of an input layer, three hidden layers, and an output layer. After training, the DNN output signal served as the input to the least squares error (LSE) method, which determined the phasor. The results were compared to those of the partial sums (PS) method, the DFT, and the DNN in [26,27]. The PS is a powerful method for determining the magnitude of a DC-offset waveform; the DFT is commonly used to determine signal phasors.
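The paper does not spell out its LSE formulation, but a standard least-squares phasor fit of the cleaned window could look like the following sketch, in which the sampling assumptions and sign conventions are our own.

```python
import numpy as np

def lse_phasor(window, n_spc=64, f0=60.0):
    # Fit a*cos(wt) + b*sin(wt) to the cleaned samples by least squares,
    # then read off the fundamental magnitude and phase.
    t = np.arange(len(window)) / (n_spc * f0)   # sample instants
    w = 2.0 * np.pi * f0
    A = np.column_stack([np.cos(w * t), np.sin(w * t)])
    (a, b), *_ = np.linalg.lstsq(A, window, rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a)    # magnitude, phase angle
```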
To summarize: first, the autoencoder was applied to find the number of nodes and the number of hidden layers while reconstructing the exact input signal; this process is called pre-training. Then, the encoding layers of the autoencoders are rearranged into the FCDNN; this arrangement, enabled by the autoencoders, is called the DNN autoencoder stack. Lastly, a non-distorted signal is used as the target for training; this final training is known as fine-tuning. The clean, non-distorted signal is thus obtained after fine-tuning.
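A minimal sketch of this stacking and fine-tuning step, assuming Keras and the encoding layers produced during pre-training, is given below.

```python
import tensorflow as tf

def stack_to_fcdnn(encoding_layers, n_inputs=58):
    # Rearrange pre-trained encoding layers into one fully connected DNN;
    # reusing a trained Dense layer keeps its weights as initialization.
    inputs = tf.keras.Input(shape=(n_inputs,))
    x = inputs
    for layer in encoding_layers:   # encoders from autoencoders 1..k
        x = layer(x)
    outputs = tf.keras.layers.Dense(n_inputs)(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mse')
    return model

# Fine-tuning: the non-distorted signal is the target.
# fcdnn = stack_to_fcdnn([enc1, enc2, enc3])
# fcdnn.fit(distorted_windows, clean_windows, batch_size=100, epochs=30)
```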

2.2. Datasets

The DNN requires a set of inputs and output targets, called a dataset. In this paper, the input is a distorted signal (DC offset, harmonics, and noise), while the target is a non-distorted signal. Two datasets were simulated for training and evaluation, as presented in Table 1 and Table 2. The simulation uses the basic formulas in Equations (6)–(8), from which the DC offset, harmonics, and noise are calculated separately; their summation then creates the distorted signal. Note that the training dataset is obtained from the fault interval for 0.4 s and does not include the pre-fault period. The datasets in Table 1 and Table 2 satisfy the conditions for manually calculating the training and evaluation waveforms from a simple single-phase RL circuit (60 Hz) at a voltage of 22.9 kV with a line impedance of 80 Ω. The objective of the DNN is to reconstruct the N samples of a distorted input signal. Before training, pre-processing, including moving-window formation, normalization, and random shuffling, was carried out to generate training data in a form suitable for implementing the DNN. The number of training samples varies with the number of samples per cycle. In this study, a 90:10 ratio between the training and evaluation datasets was used.
The current DC offset during a fault developing at time t0 is given by Equation (6):
$$i(t) = \frac{E_m}{Z}\sin\left(\omega(t - t_0) + \alpha - \phi\right) + \left(i(t_0) - \frac{E_m}{Z}\sin(\alpha - \phi)\right)e^{-(t - t_0)/\tau} \tag{6}$$
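A hedged NumPy rendering of Equation (6) follows; all default parameter values (peak voltage, time constant, impedance angle) are illustrative, not the paper's settings.

```python
import numpy as np

def fault_current(t, t0=0.0, Em=18.7e3, Z=80.0, tau=0.05,
                  alpha=0.0, phi=np.deg2rad(89.0), i_t0=0.0, f0=60.0):
    # Equation (6): steady-state fundamental plus a decaying DC offset.
    # Defaults are illustrative (Em approximates the peak phase voltage
    # of a 22.9 kV system; tau = L/R is the circuit time constant).
    w = 2.0 * np.pi * f0
    steady = (Em / Z) * np.sin(w * (t - t0) + alpha - phi)
    dc = (i_t0 - (Em / Z) * np.sin(alpha - phi)) * np.exp(-(t - t0) / tau)
    return steady + dc

# Example: one 64-sample cycle starting at the fault instant.
t = np.arange(64) / (64 * 60.0)
i = fault_current(t)
```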
where φ is the impedance angle of the RL circuit (almost 90°), α is the inception angle, and τ is the time constant.
Harmonics are the sinusoidal components of a periodic wave (or quantity) with frequencies that are integral multiples of the fundamental frequency [33], as shown by Equation (7):
$$i_n(t) = A_n \sin\left[n(\omega t - \theta)\right] \tag{7}$$
where n is the harmonic order and An is the amplitude.
The last significant component is noise, which lacks a particular frequency. Very commonly, simulators add additive white Gaussian noise (AWGN), the amount of which can be modified by changing the signal-to-noise ratio (SNR), defined in the usual way as:
$$\mathrm{SNR}_{\mathrm{dB}} = 10\log_{10}\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\right) \tag{8}$$
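A small sketch of adding AWGN at a prescribed SNR, under the standard power-ratio definition above:

```python
import numpy as np

def add_awgn(signal, snr_db, seed=0):
    # Scale zero-mean Gaussian noise so the resulting SNR in dB equals snr_db.
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)
```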
A typical RL DC offset and its non-distorted current waveform from Table 1 are shown in Figure 3a. The DC-offset signal is the input, whereas its non-distorted waveform is the final target during fine-tuning. The moving-window technique is shown in Figure 3b; here, a window size of 58 is chosen to illustrate how the technique is applied. The illustration shows clearly that different window sizes on the same signal produce different numbers of training samples. The procedure for obtaining a minimal number of inputs and outputs is illustrated in Figure 4.
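The pre-processing chain described above (moving-window formation, normalization, shuffling, and the 90:10 split) can be sketched as follows; normalizing each window by its peak magnitude is our reading of the unity normalization mentioned in the conclusions.

```python
import numpy as np

def make_dataset(distorted, clean, window=58, train_ratio=0.9, seed=0):
    # Moving-window formation over the simulated signals.
    X = np.lib.stride_tricks.sliding_window_view(distorted, window)
    Y = np.lib.stride_tricks.sliding_window_view(clean, window)
    # Per-window unity normalization (assumption: divide by peak magnitude).
    scale = np.abs(X).max(axis=1, keepdims=True)
    X, Y = X / scale, Y / scale
    # Random shuffling, then the 90:10 train/evaluation split.
    idx = np.random.default_rng(seed).permutation(len(X))
    X, Y = X[idx], Y[idx]
    n_train = int(train_ratio * len(X))
    return (X[:n_train], Y[:n_train]), (X[n_train:], Y[n_train:])
```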

3. Results and Discussion

To keep the interpretation simple, we use a fault current with 64 samples per cycle to show how the DNN is trained. The other scenarios are similar and are not shown; however, the results of all case studies are shown for comparison.

3.1. Results of the First-Layer Autoencoder

The flowchart in Figure 4 shows that the first hidden layer is the only layer in which the autoencoder learns directly from the current input waveforms; the other layers learn from the features extracted from the previous layers. The results of training the first hidden layer are revealed after decoding and application of the Leaky ReLU activation function. During this step, the output should match the input while reducing the low-frequency noise. Figure 5 shows the waveforms with the maximum errors associated with different numbers of neurons in autoencoder layer 1.
As shown in Figure 5, fewer neurons perform as well as more neurons. Thus, we minimized the DNN size by reducing the number of neurons. Subsequently, the autoencoder features were extracted and used to train autoencoder layer 2. This process is repeated (for autoencoder layers 3, 4, etc.) until the result no longer improves significantly.

3.2. Validation of the 64-Sample Scenario

The validation result yielded by one entire cycle is shown in Figure 6 to render later comparisons easier.

3.3. Validation of the 60-Sample Scenario

Figure 7 presents the result of training the DNN using 60 samples (a decrease of four samples). The DNN target and worst-case outputs are very similar: there is no significant phase shift, and the maximum error is small compared to the current value.

3.4. Validation of the 58-Sample Scenario (DNN58)

Figure 8 presents the worst-case result for the 58-sample scenario (a decrease of six samples). The DNN still produced a clean sinusoidal waveform, but the output and target differed slightly. However, as the maximum cost function is 55.87, the error percentage is acceptable compared to the high value of the fault current.

3.5. Validation of the 57-Sample Scenario

Figure 9 presents a scenario in which the number of samples is reduced to 57. A very high maximum error and a major phase shift are apparent in the worst case. Therefore, we used a data window of 58 samples.
The results listed in Table 3 show that no significant error is apparent if the data window is decreased from 64 (a full cycle) to 58 (about 90% of a cycle). Significant errors occur at data windows of 57 and below.

3.6. DNN Results Using a 58-Sample Data Window from 64 Samples per Cycle

To show the potential of the trained DNN at different power levels, we tested it on a 154 kV system, in which various input fault signals were generated using PSCAD/EMTDC software. Without retraining on 154 kV data, the DNN trained on the 22.9 kV system successfully removed the DC offset current from the 154 kV system. This transfer is possible because of the normalization applied during pre-processing. A typical Korean 154 kV system was adopted, as shown in Figure 10.
The total distance between the sending and receiving ends is 50 km. The line parameters used in this simulation are presented in Table 4.
The results of the DNN were compared to those of the PS method, the DFT, and the DNN proposed in [26,27]. Here, the proposed DNN is referred to as DNN, and DNN3 denotes the DNN proposed in [26,27]. Table 5 summarizes the results in terms of converging times, where convergence means the estimate remains within ±5% of the steady-state value after the fault occurs. The newly proposed DNN outperformed DNN3, which used a full-cycle waveform, in terms of converging time in all the cases studied. The fault location does not influence the DNN results, and the fault inception angle affects them only minimally. In most cases, the DNN performs better than the other two methods, except for the 80° fault inception angle, where the DNN yields the worst result; however, 18.229 ms is within the permissible relay time. Figure 11 presents the magnitudes predicted by the DNN and the other methods when the faults developed 5 km from the source (at the relay location) at different fault inception angles. The DNN yields the same results as the PS method but usually requires less time to predict the fault current phasor. Different fault locations were also tested; the fault location did not affect the prediction of the proposed method (Figure 12).
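For reference, a converging time under the ±5% criterion could be computed as in the following sketch; the sample spacing dt_ms (about 0.26 ms at 64 samples per cycle and 60 Hz) and the function name are our own.

```python
import numpy as np

def converging_time_ms(estimates, steady_state, fault_idx, dt_ms, band=0.05):
    # Time after the fault at which the magnitude estimate enters the
    # +/-5% band around the steady-state value and stays there.
    err = np.abs(estimates[fault_idx:] - steady_state) / abs(steady_state)
    inside = err <= band
    for k in range(len(inside)):
        if inside[k:].all():      # remains inside the band afterwards
            return k * dt_ms
    return None                    # never converged within the record
```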

3.7. DNN Results Using Different Samples per Cycle

The above results show that neither the fault inception angle nor the fault location affects the DNN results. Following the same procedure, we determined that data window sizes of 29 and 113 are sufficient to train the DNN for 32 and 128 samples per cycle, respectively. As shown in Figure 13a, if the signal features 32 samples per cycle, the DNN method using 29 samples yields the correct answer more rapidly than the PS method. Similarly, with 128 samples per cycle, only 113 samples were needed to obtain a good DNN result. As shown in Figure 13b, the DNN method attains the correct result slightly more rapidly than the PS method.

4. Conclusions

This paper explored whether the DC offset can be removed using a DNN that estimates the current phasor within less than one waveform cycle (using 29/32, 58/64, and 113/128 samples). Adequate training of a well-structured DNN reduced the number of samples required to recover a clean waveform; the proposed DNN worked very well using only the samples available in about 90% of a cycle. Neither the fault location nor the inception angle greatly affected the results. The method reduces the time required for phasor estimation under a DC offset, and its performance is usually better than those of the PS and DFT methods. Under the same conditions, the DNN using the 58-sample data window converges faster than the full-cycle-waveform DNN. A limitation of the proposed DNN is that unity normalization generalizes the characteristics of the trained current waveform, which makes the method vulnerable to signals with a large pure-DC component; we intend to develop a DNN model that can work for all kinds of current waveforms. The aim of our future work is to implement the proposed DNN in real time. The CPU of the AM574x, the hardware platform considered for our future implementation, provides 40 GMAC per core (80 GFLOPs per core). The proposed DNN requires 23,600 floating-point operations per inference, which is well within the capability of such real-time devices.
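As a back-of-the-envelope check of this claim (our arithmetic, based on the figures quoted above):

```python
# 23,600 FLOPs per inference; 80 GFLOPs per core on the AM574x (paper's figures).
flops_per_window = 23_600
windows_per_second = 64 * 60   # one inference per sample at 64 samples/cycle, 60 Hz
load = flops_per_window * windows_per_second       # about 9.1e7 FLOPs/s
print(f"core utilization: {load / 80e9:.4%}")      # roughly 0.1% of one core
```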

Author Contributions

Conceptualization, V.S. and S.-R.N.; Methodology, V.S., S.-H.K. and S.-R.N.; Supervision, S.-R.N.; Validation, V.S. and S.-W.L.; Writing—original draft, V.S. and S.-R.N.; Writing—review and editing, S.-W.L. and S.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Korea Electric Power Corporation (KEPCO) (grant number: R17XA05-2). This research was also supported by the Korea Research Foundation with funding from the government (Ministry of Education) in 2021 (No. NRF-2021R1F1A1061798).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jyh-Cherng, G.; Sun-Li, Y. Removal of DC offset in current and voltage signals using a novel Fourier filter algorithm. IEEE Trans. Power Deliv. 2000, 15, 73–79.
  2. Jyh-Cherng, G. Removal of decaying DC in current and voltage signals using a modified Fourier filter algorithm. IEEE Trans. Power Deliv. 2001, 16, 372–379.
  3. Sachdev, M.S.; Baribeau, M.A. A New Algorithm for Digital Impedance Relays. IEEE Trans. Power Appar. Syst. 1979, PAS-98, 2232–2240.
  4. Girgis, A.A.; Brown, R.G. Application of Kalman Filtering in Computer Relaying. IEEE Power Eng. Rev. 1981, PER-1, 43–44.
  5. Sachdev, M.S.; Wood, H.C.; Johnson, N.G. Kalman Filtering Applied to Power System Measurements for Relaying. IEEE Power Eng. Rev. 1985, PER-5, 52–53.
  6. Dash, P.K.; Panda, D.K. Digital impedance protection of power transmission lines using a spectral observer. IEEE Trans. Power Deliv. 1988, 3, 102–110.
  7. Youssef, O.A.S. A fundamental digital approach to impedance relays. IEEE Trans. Power Deliv. 1992, 7, 1861–1870.
  8. Benmouyal, G. Removal of DC-offset in current waveforms using digital mimic filtering. IEEE Trans. Power Deliv. 1995, 10, 621–630.
  9. Sachdev, M.S.; Nagpal, M. A recursive least error squares algorithm for power system relaying and measurement applications. IEEE Trans. Power Deliv. 1991, 6, 1008–1015.
  10. Ferrer, H.J.A.; Verduzco, I.D.; Martinez, E.V. Fourier and Walsh digital filtering algorithms for distance protection. IEEE Trans. Power Syst. 1996, 11, 457–462.
  11. Zhang, X.; Zhao, J.; Bajrić, R.; Wang, L. Application of the DC Offset Cancellation Method and S Transform to Gearbox Fault Diagnosis. Appl. Sci. 2017, 7, 207.
  12. Lee, J.S.; Hwang, S.-H. DC Offset Error Compensation Algorithm for PR Current Control of a Single-Phase Grid-Tied Inverter. Energies 2018, 11, 2308.
  13. Aiello, O. On the DC Offset Current Generated during Biphasic Stimulation: Experimental Study. Electronics 2020, 9, 1198.
  14. Soon-Ryul, N.; Sang-Hee, K.; Jong-Keun, P. An analytic method for measuring accurate fundamental frequency components. IEEE Trans. Power Deliv. 2002, 17, 405–411.
  15. Yong, G.; Kezunovic, M.; Deshu, C. Simplified algorithms for removal of the effect of exponentially decaying DC-offset on the Fourier algorithm. IEEE Trans. Power Deliv. 2003, 18, 711–717.
  16. Kang, S.; Lee, D.; Nam, S.; Crossley, P.A.; Kang, Y. Fourier Transform-Based Modified Phasor Estimation Method Immune to the Effect of the DC Offsets. IEEE Trans. Power Deliv. 2009, 24, 1104–1111.
  17. Nam, S.; Park, J.; Kang, S.; Kezunovic, M. Phasor Estimation in the Presence of DC Offset and CT Saturation. IEEE Trans. Power Deliv. 2009, 24, 1842–1849.
  18. Zadeh, M.R.D.; Zhang, Z. A New DFT-Based Current Phasor Estimation for Numerical Protective Relaying. IEEE Trans. Power Deliv. 2013, 28, 2172–2179.
  19. Rahmati, A.; Adhami, R. An Accurate Filtering Technique to Mitigate Transient Decaying DC Offset. IEEE Trans. Power Deliv. 2014, 29, 966–968.
  20. Silva, K.M.; Nascimento, F.A.O. Modified DFT-Based Phasor Estimation Algorithms for Numerical Relaying Applications. IEEE Trans. Power Deliv. 2018, 33, 1165–1173.
  21. Silva, K.; Kusel, B.F. DFT based phasor estimation algorithm for numerical digital relaying. Electron. Lett. 2013, 49, 412–414.
  22. Nam, S.-R.; Kang, S.-H.; Sohn, J.-M.; Park, J.-K. Modified Notch Filter-based Instantaneous Phasor Estimation for High-speed Distance Protection. Electr. Eng. 2007, 89, 311–317.
  23. Mahari, A.; Sanaye-Pasand, M.; Hashemi, S.M. Adaptive Phasor Estimation Algorithm to Enhance Numerical Distance Protection. IET Gener. Transm. Distrib. 2017, 11, 1170–1178.
  24. Gopalan, S.A.; Mishra, Y.; Sreeram, V.; Iu, H.H. An Improved Algorithm to Remove DC Offsets from Fault Current Signals. IEEE Trans. Power Deliv. 2017, 32, 749–756.
  25. Cho, Y.S.; Lee, C.K.; Jang, G.; Lee, H.J. An Innovative Decaying DC Component Estimation Algorithm for Digital Relaying. IEEE Trans. Power Deliv. 2009, 24, 73–78.
  26. Kim, S.B.; Lee, S.W.; Son, D.H.; Nam, S.R. DC Offset Removal in Power Systems Using Deep Neural Network. In Proceedings of the 2019 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 18–21 February 2019; pp. 1–5.
  27. Kim, S.-B.; Sok, V.; Kang, S.-H.; Lee, N.-H.; Nam, S.-R. A Study on Deep Neural Network-Based DC Offset Removal for Phase Estimation in Power Systems. Energies 2019, 12, 1619.
  28. Nagamine, T.; Seltzer, M.; Mesgarani, N. On the Role of Nonlinear Transformations in Deep Neural Network Acoustic Models. In Proceedings of Interspeech 2016; ISCA: Baixas, France, 2016; pp. 803–807.
  29. Hu, W.; Liang, J.; Jin, Y.; Wu, F.; Wang, X.; Chen, E. Online Evaluation Method for Low Frequency Oscillation Stability in a Power System Based on Improved XGboost. Energies 2018, 11, 3238.
  30. Huang, X.; Hu, T.; Ye, C.; Xu, G.; Wang, X.; Chen, L. Electric Load Data Compression and Classification Based on Deep Stacked Auto-Encoders. Energies 2019, 12, 653.
  31. Kim, M.; Choi, W.; Jeon, Y.; Liu, L. A Hybrid Neural Network Model for Power Demand Forecasting. Energies 2019, 12, 931.
  32. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980.
  33. IEEE Std 519-2014 (Revision of IEEE Std 519-1992); IEEE Recommended Practice and Requirements for Harmonic Control in Electric Power Systems. IEEE: Danvers, MA, USA, 2014; pp. 1–29.
Figure 1. Applying the autoencoder to the input and extracting features.
Figure 2. Fully connected DNN. N represents the number of samples used for training the DNN.
Figure 3. How to apply the moving window to the current waveform: (a) a typical current waveform using data from Table 1, and (b) applying the moving window technique (window size: 58).
Figure 4. Flowchart of the simulation process.
Figure 5. Training results of layer 1, featuring different numbers of neurons using a window with 60 data samples.
Figure 6. DNN result of a full cycle (worst case).
Figure 7. DNN result for the 60-sample scenario (worst case).
Figure 8. DNN result for the 58-sample scenario (worst case).
Figure 9. DNN result for the 57-sample scenario (worst case).
Figure 10. Power system model for testing the datasets (PSCAD/EMTDC).
Figure 11. Results when the fault current develops 5 km from the source at two different inception angles: (a) fault inception angle of 0° and (b) fault inception angle of 180°.
Figure 12. Results when the fault current develops at different locations at an inception angle of 0°: (a) 5 km from the source, (b) 10 km from the source, and (c) 25 km from the source.
Figure 13. Results when the fault current develops 5 km from the source at an inception angle of 0° with different sampling numbers: (a) 29 data windows from 32 samples per cycle and (b) 113 data windows from 128 samples per cycle.
Table 1. Condition for generating the training dataset for 64 and 128 samples per cycle.

Parameter | Value
Time constant (ms) | 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
Fault inception angle (°) | 0, 90, 180, −90
Second harmonics ratio (%) | 0, 10, 20
Third harmonics ratio (%) | 0, 7, 14
Fourth harmonics ratio (%) | 0, 5, 10
Fifth harmonics ratio (%) | 0, 3, 6
Signal-to-noise ratio (dB) | 25, 40
Table 2. Condition for generating the training dataset for 32 samples per cycle.

Parameter | Value
Time constant (ms) | 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
Fault inception angle (°) | 0, 45, 90, 135, 180
Second harmonics ratio (%) | 0, 10, 20, 30
Third harmonics ratio (%) | 0, 7, 14, 21
Fourth harmonics ratio (%) | 0, 5, 10, 15
Fifth harmonics ratio (%) | 0, 3, 6
Signal-to-noise ratio (dB) | 25, 40
Table 3. Root-Mean-Square Deviation (RMSD) analysis by variation in the data window size.

Data Window Size | Maximum | Average | Variance
64 | 14.00531 | 1.13002 | 1.14886
63 | 14.48204 | 1.17732 | 1.32129
62 | 19.33602 | 1.22052 | 1.55644
61 | 24.69743 | 1.49729 | 1.87338
60 | 29.77590 | 2.30191 | 2.72297
59 | 43.23114 | 1.95650 | 2.89949
58 | 63.76960 | 3.19396 | 4.33714
57 | 327.6986 | 6.49721 | 6.43218
Table 4. Transmission line parameters used in the simulation.

Sequence | Parameters | Value | Unit
Positive and negative | R1, R2 | 0.0001 | Ω/m
Positive and negative | L1, L2 | 0.0004 | Ω/m
Positive and negative | C1, C2 | 265.258 | MΩ/m
Zero | R0 | 0.0002 | Ω/m
Zero | L0 | 0.0008 | Ω/m
Zero | C0 | 530.516 | MΩ/m
Table 5. Converging times after the faults at different fault locations and fault inception angles.

Fault Location (km) | Fault Inception Angle (°) | DNN3 (ms) | DNN (ms) | Partial Sum (ms) | DFT (ms)
5 | 0 | 14.062 | 13.541 | 14.322 | 38.541
5 | 20 | 14.322 | 13.802 | 14.322 | 38.281
5 | 45 | 15.104 | 14.843 | 15.364 | 30.208
5 | 80 | 19.270 | 18.229 | 16.927 | 17.708
5 | 180 | 14.062 | 13.541 | 14.322 | 38.541
10 | 0 | 13.802 | 13.281 | 14.062 | 38.020
25 | 0 | 14.322 | 13.541 | 14.322 | 37.760