Article

Enhanced Distributed Parallel Firefly Algorithm Based on the Taguchi Method for Transformer Fault Diagnosis

1 State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044, China
2 Guodian Nanjing Automation Co., Ltd., Nanjing 210032, China
3 School of Electric Power Engineering, Shanghai University of Electric Power, Shanghai 200090, China
* Author to whom correspondence should be addressed.
Energies 2022, 15(9), 3017; https://doi.org/10.3390/en15093017
Submission received: 14 February 2022 / Revised: 5 April 2022 / Accepted: 18 April 2022 / Published: 20 April 2022

Abstract: To improve the reliability and accuracy of a transformer fault diagnosis model based on a backpropagation (BP) neural network, this study proposes an enhanced distributed parallel firefly algorithm based on the Taguchi method (EDPFA). First, a distributed parallel firefly algorithm (DPFA) was implemented, and the Taguchi method was then used to enhance the original communication strategies in the DPFA. Second, to verify the performance of the EDPFA, this study compared the EDPFA with the firefly algorithm (FA) and the DPFA under the Congress on Evolutionary Computation 2013 (CEC2013) test suite. Finally, the proposed EDPFA was applied to a transformer fault diagnosis model by training the initial parameters of the BP neural network. The experimental results showed that (1) the Taguchi method effectively enhanced the performance of the algorithm: compared with the FA and DPFA, the proposed EDPFA had a faster convergence speed and better solution quality; and (2) the proposed EDPFA improved the accuracy of transformer fault diagnosis based on the BP neural network (by up to 11.11%).

1. Introduction

Since swarm intelligence optimization algorithms were proposed, they have been adopted by a growing number of researchers outside computer science because of their efficient optimization performance and, especially, because they do not need special information about the problems to be optimized [1]. Their application fields have rapidly expanded to scientific computing [2], workshop scheduling optimization [3], transportation configuration [4], combinatorial problems [5], digital image processing [6], engineering optimization design [7] and other areas, and they have become an indispensable part of artificial intelligence and computer science. However, compared with traditional optimization algorithms, swarm intelligence optimization algorithms have a relatively short development history and many imperfections; in particular, the lack of a rigorous mathematical foundation has hindered their development [8]. Therefore, many problems in this field remain to be explored and solved.
The Taguchi method is a robust industrial design method that is used to evaluate and implement improvements in products, processes and equipment [9]. It is an experimental design method that focuses on minimizing process variability and making products less sensitive to environmental variability [10]. The genetic algorithm (GA) is a well-known optimization algorithm [11]. It has good global search ability and can quickly explore the solution space, but its local search ability is poor and its search efficiency is low in the late stages of evolution [12]. Chou and his associates combined the Taguchi method with the GA, which improved the quality of the solutions [13]. Furthermore, Tsai applied the Taguchi method to parallel cat swarm optimization (PCSO) [14], which reduced the computational time. In 2011, Subbaraj combined the self-adaptive real-coded GA with the Taguchi method, which better exploited the potential offspring [15]. In summary, introducing the Taguchi method into swarm intelligence optimization algorithms is a promising idea. However, so far, no researcher has applied the Taguchi method to parallel techniques or used it to improve the performance of the FA.
The firefly algorithm (FA) was proposed by Professor Xin-She Yang and simulates the behavioral characteristics of fireflies [16]. A firefly represents a solution to the objective problem, and the fireflies use light intensity and attractiveness to continually update their positions in search of potentially better solutions. The advantage of the FA is that it has fewer control parameters and is easier to implement than other algorithms [18]. Studies [17,18] have shown that the FA has excellent global optimization ability. However, it still has some defects in concrete engineering problems, such as slow convergence and low solution accuracy [19]. Therefore, many changes and improvements have been made to the original FA, producing, for example, the Gaussian firefly algorithm (GD-FF) [20], the firefly algorithm with chaos (chaotic FA) [21], the binary firefly algorithm (BFA) [22], the parallel firefly algorithm (PFA) [23], the distributed parallel firefly algorithm (DPFA) [24] and so on. Among them, the DPFA was proposed by Pan in 2021; compared with the standard FA, its solution accuracy is better and its convergence speed is faster. However, the communication strategies in the DPFA are too simple, and the collaborative advantage of the groups in the distributed parallel strategy is not fully utilized. Therefore, in this study, the Taguchi method was introduced into the communication strategies of the DPFA and an enhanced distributed parallel firefly algorithm (EDPFA) was proposed. The proposed EDPFA further improves the convergence speed and solution accuracy of the DPFA and FA, and it was successfully applied to transformer fault diagnosis based on a BP neural network.
The firefly algorithm has been used to solve many optimization problems, such as the economic emission load dispatch problem [25], mobile robot navigation [26], optimal overcurrent relay coordination [27] and aeration modeling on spillways [28]. However, there are few studies on applying the FA to transformer fault diagnosis. Therefore, this study used the EDPFA to investigate transformer fault diagnosis.
Power transformers are key equipment in a power system [29,30]. In actual production, whether existing or latent transformer faults can be diagnosed or predicted quickly and accurately is closely related to the safety and stability of a power system [31]. In the field of transformers, dissolved gas analysis (DGA) is an important fault diagnosis technique that establishes a mathematical relationship between a fault type and the fault gases [32,33]. In recent years, many algorithms and theories have been proposed and utilized in the field of transformer fault diagnosis, including the fuzzy algorithm [34], support vector machines [35], clustering algorithms [36] and neural networks [37]. These methods have achieved some research results, but each has limitations, such as easily falling into local optima, slow convergence or supporting only a small amount of sample data. Therefore, this study constructed a transformer fault diagnosis model using a BP neural network, a mathematical model that can approximate complex nonlinear relationships and automatically correct its parameters. However, unreasonable initial weights and thresholds limit the performance of a BP neural network [7]. In this study, the proposed EDPFA was used to optimize these network parameters, which yielded high diagnostic accuracy and fast convergence.
This study applied the Taguchi method to the distributed parallel firefly algorithm (DPFA), proposed an enhanced distributed parallel firefly algorithm (EDPFA) and then applied the proposed EDPFA to transformer fault diagnosis. Compared with other research, the main contributions of this study are as follows:
  • The distributed parallel firefly algorithm (DPFA) was implemented and then a new enhanced distributed parallel firefly algorithm (EDPFA) based on the Taguchi method was proposed.
  • The Taguchi method selects the better dimensions of different solutions to obtain a new solution, which is used as the basis of the new communication strategies of the EDPFA.
  • The proposed EDPFA was tested by using the CEC2013 suite and had better performance than the standard FA and DPFA.
  • The proposed EDPFA was used to train the parameters of the BP neural network and improve the accuracy of the transformer fault diagnosis model based on the BP neural network.
The rest of the paper is structured as follows. Section 2 describes the original DPFA and the Taguchi method. Section 3 introduces the Taguchi method into the original DPFA and analyzes the details of the algorithm improvements. Section 4 tests the proposed EDPFA under the CEC2013 suite and compares it with other algorithms. Section 5 applies the proposed EDPFA in the field of transformer fault diagnosis. Section 6 concludes the paper.

2. Distributed Parallel Firefly Algorithm and Taguchi Method

This section provides a brief introduction to the original DPFA and Taguchi method.

2.1. Distributed Parallel Firefly Algorithm

The distributed parallel firefly algorithm (DPFA) was proposed by Pan and his associates in 2021 [24]. The DPFA is an updated version of the firefly algorithm (FA) proposed by Yang [16]. The core idea of the DPFA is that the initial solutions are divided into several subgroups, which share information through different communication strategies after a fixed number of iterations.

2.1.1. The Mathematical Form of the DPFA

The search process of the FA involves two significant concepts: attractiveness and brightness. Attractiveness exists between two fireflies and governs the position movement between them. Brightness is an individual characteristic of a firefly and is proportional to the fitness function. The standard FA makes the following three assumptions [16]: (1) all fireflies can attract each other; (2) a firefly's attractiveness is related only to distance and brightness, and a firefly with strong brightness attracts a firefly with weak brightness; (3) the fitness function determines the brightness.
The mathematical form of the DPFA is as follows:
$\beta(r) = \beta_0 e^{-\gamma r_{ij}^{2}}$    (1)
$r_{ij} = \lVert x_{i,g} - x_{j,g} \rVert = \sqrt{\sum_{k=1}^{d} (x_{i,g,k} - x_{j,g,k})^{2}}$    (2)
$x_{i,g}(t+1) = x_{i,g}(t) + \beta \big( x_{j,g}(t) - x_{i,g}(t) \big) + \alpha \big( \mathrm{rand} - \tfrac{1}{2} \big)$    (3)
In Formula (1), $\beta(r)$ represents the attractiveness between two fireflies, and $\beta_0$ represents the maximum attractiveness (at $r = 0$). Because brightness gradually weakens with increasing distance and absorption by the medium, the light absorption coefficient $\gamma$ can be set as a constant to reflect this behavior.
In Formula (2), $r_{ij}$ is the Cartesian distance between two fireflies, $x_{i,g}$ is the $i$th firefly in group $g$ and $x_{i,g,k}$ is the $k$th component of the spatial coordinate of firefly $x_{i,g}$.
In Formula (3), the value of $x_{i,g}$ represents the brightness of firefly $x_{i,g}$, $t$ represents the current iteration, $i, j = 1, 2, 3, \ldots, N_g$, and $N_g$ represents the number of fireflies in group $g$.
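To make the update rule concrete, the following MATLAB sketch implements one pairwise move according to Equations (1)–(3); the function and variable names are our own, and the parameter values in the usage example follow the settings of Section 4.1.

% Minimal sketch of one DPFA position update (Equations (1)-(3)).
function x_i = move_firefly(x_i, x_j, alpha, beta0, gamma)
    r2   = sum((x_i - x_j).^2);          % squared Cartesian distance, Eq. (2)
    beta = beta0 * exp(-gamma * r2);     % attractiveness beta(r), Eq. (1)
    % Move firefly i toward the brighter firefly j plus a random step, Eq. (3)
    x_i  = x_i + beta * (x_j - x_i) + alpha * (rand(size(x_i)) - 0.5);
end

For example, move_firefly(x_i, x_j, 0.25, 0.2, 1) performs one move with the parameter values later used in the experiments.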

2.1.2. Communication Strategies

In the DPFA, when $t = nR$ ($n = 1, 2, 3, \ldots$), the subgroups trigger the communication strategies, where $t$ and $R$ represent the current iteration and the fixed communication interval, respectively. The DPFA has four communication strategies, namely, the maximum of the same subgroup, the average of the same subgroup, the maximum of different subgroups and the average of different subgroups. The core idea of the communication strategies is to select better solutions to replace the poorer ones in the subgroups; the strategies differ in how the better solutions are selected. Take the maximum of the same subgroup as an example:
In strategy 1, every $R$ iterations (i.e., when $t = nR$, $n = 1, 2, 3, \ldots$), the brightest firefly $x_{\max,g}(t)$ in a group replaces the darkest $k$ fireflies of that group. Figure 1 shows strategy 1.
$x_{\max,g}(t) = \operatorname{Max}\{x_{1,g}(t), x_{2,g}(t), \ldots, x_{n,g}(t)\}$    (4)
where $x_{1,g}(t), x_{2,g}(t), \ldots, x_{n,g}(t)$ represent the positions of all fireflies in the $g$th group.
The other three communication strategies are as follows:
Strategy 2: The average of the same subgroup:
$x_{\mathrm{avg},g}(t) = \frac{x_{1,g}(t) + x_{2,g}(t) + \cdots + x_{k,g}(t)}{k}$    (5)
where $x_{1,g}(t), x_{2,g}(t), \ldots, x_{k,g}(t)$ represent the positions of the $k$ brightest fireflies in the $g$th group.
Strategy 3: The maximum of different subgroups:
$x_{\max}(t) = \operatorname{Max}\{x_{1}(t), x_{2}(t), \ldots, x_{N}(t)\}$    (6)
where $x_{1}(t), x_{2}(t), \ldots, x_{N}(t)$ represent the positions of all fireflies in all groups.
Strategy 4: The average of different subgroups:
$x_{\mathrm{avg}}(t) = \frac{x_{\max,1}(t) + x_{\max,2}(t) + \cdots + x_{\max,G}(t)}{G}$    (7)
where $x_{\max,1}(t), x_{\max,2}(t), \ldots, x_{\max,G}(t)$ represent the positions of the brightest fireflies of each group.
For more detail on the DPFA, please refer to [24]. Algorithm 1 shows the pseudocode of the DPFA.
Algorithm 1: The pseudocode of the DPFA.
Initialize the N fireflies and divide them evenly into G groups.
1:  while T < F do
2:    for g = 1:G do
3:      Calculate the light intensity I_{i,g} at x_{i,g} using f(x_{i,g}) and rank the fireflies
4:      for i = 1:N/G do
5:        for j = 1:i do
6:          if I_{j,g} > I_{i,g} then
7:            Move firefly i toward j in the gth subgroup in all D dimensions by using Equation (3)
8:          end if
9:          Evaluate distance r and update attractiveness
10:       end for
11:     end for
12:   end for
13:   if T = nR then
14:     Communication strategies: apply x_{max,g}, x_{avg,g}, x_max or x_avg to update x_{i,g} in all subgroups
15:   end if
16:   T = T + 1
17: end while
Output: the global best firefly x_{gbest} and the value of f(x_{gbest}).
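As a minimal illustration of lines 13–15 of Algorithm 1, the following MATLAB sketch applies communication strategy 1 under the assumption of minimization (the "brightest" firefly has the smallest objective value); X{g} is the (N/G) × D position matrix of group g, F{g} holds the matching objective values, and all names are our own.

% Sketch of communication strategy 1: every R iterations, the brightest
% firefly of each group overwrites that group's k darkest members.
if mod(t, R) == 0
    for g = 1:G
        [~, order] = sort(F{g});              % ascending: brightest first
        best  = X{g}(order(1), :);            % x_max,g(t), Equation (4)
        worst = order(end-k+1:end);           % indices of the k darkest fireflies
        X{g}(worst, :) = repmat(best, k, 1);  % replace them with the brightest
        F{g}(worst)    = F{g}(order(1));      % keep the fitness values in sync
    end
end

The other three strategies differ only in the solution that is broadcast: the group average (Equation (5)), the global best (Equation (6)) or the average of the group leaders (Equation (7)).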

2.2. The Taguchi Method

The Taguchi method includes two major tools: (1) orthogonal arrays and (2) the signal-to-noise ratio (SNR) [10]. The concepts of these two tools are reviewed below.
An array is said to be orthogonal if it satisfies two conditions: (1) each column represents a different level assignment of a considered factor, so that the considered factors can be evaluated independently, and (2) each row represents a set of parameters for one experiment. The orthogonal array can be described as
$L_M(Q^K)$
where $K$ represents the number of columns (factors), $Q$ represents the number of level values of each factor ($K$ and $Q$ are positive integers) and $M$ represents the number of experiments, where $M = K \times (Q - 1) + 1$.
For instance, suppose that there are three sets of solutions with four parameters in an experiment; this means that each of the four factors can take three levels. Table 1 shows the corresponding orthogonal array $L_9(3^4)$. Without the orthogonal array, finding the optimal combination of parameters by exhaustive search would require $3^4 = 81$ experiments, whereas the orthogonal array provides a set of just nine. The orthogonal array can thus effectively reduce the number of experiments required to obtain a good combination of parameters [12].
The SNR tool is used to find the optimal combination of parameters among the listed combinations; more specifically, the SNR determines the appropriate level for each factor. The SNR can be calculated in various ways; for optimization problems, the value of the objective function can generally be regarded as the SNR.

3. Enhanced DPFA and Communication Strategy

In the original DPFA, the four communication strategies improve the algorithm through the group optimal solution or the global optimal solution, which has a great influence on the performance of the algorithm [24]. However, these strategies ignore the influence of the individual dimensions (parameters) of the optimal solution. Therefore, this study extracted all the dimensions (parameters) of the optimal solutions and used the Taguchi method to recombine them into a better solution.

3.1. Operation Strategy of the Taguchi Method

The operation strategy of the Taguchi method is described as follows:
Step 1: Choose $k$ sets of solutions, denoted $x_{1,g,d}, x_{2,g,d}, \ldots, x_{k,g,d}$, where $g$ represents the $g$th group and $d$ represents the $d$th dimension of the solution space ($d = 1, 2, 3, \ldots, D$); $D$ is the total number of dimensions of the solution space.
Step 2: Each dimension of the candidate solutions corresponds to a factor (so the number of factors is $D$), and the different values of the candidate solutions in a dimension denote its level values (so the number of levels is $k$). The objective function value corresponding to each trial combination is used as its SNR to judge whether the combination is good or bad. The Taguchi method then combines these dimensions into a better solution $x_{better}$.
Step 3: The better solution $x_{better}$ replaces the worst solution in the original groups.
To facilitate the reader’s understanding, the following example is given.
Consider minimizing the objective function $f(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2$. Assume three solutions: $x_1 = [1, 2, 3, 0]$, $x_2 = [2, 0, 4, 3]$ and $x_3 = [3, 3, 0, 2]$, so that $f(x_1) = 14$, $f(x_2) = 29$ and $f(x_3) = 22$. The Taguchi method combines these three solutions to obtain a better one; Table 2 shows the resulting trial combinations. According to Table 2, the best combination is $x_{better} = [2, 0, 0, 0]$ with $f(x_{better}) = 4$.
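The worked example can be reproduced with a few lines of MATLAB. This sketch evaluates the nine trial combinations prescribed by the L9(3^4) array of Table 1 and keeps the best one, using the objective value as the SNR as described above; all names are our own.

% Sketch of the worked example: combine x1, x2, x3 via the L9(3^4) array.
f  = @(x) sum(x.^2);                  % objective function to minimize
C  = [1 2 3 0; 2 0 4 3; 3 3 0 2];     % candidate solutions (row = level)
L9 = [1 1 1 1; 1 2 2 2; 1 3 3 3;      % standard L9(3^4) orthogonal array (Table 1)
      2 1 2 3; 2 2 3 1; 2 3 1 2;
      3 1 3 2; 3 2 1 3; 3 3 2 1];
trials = zeros(9, 4);
for m = 1:9
    for d = 1:4
        trials(m, d) = C(L9(m, d), d); % dimension d takes its level-th candidate value
    end
end
snr = arrayfun(@(m) f(trials(m, :)), (1:9)');  % objective value used as the SNR
[fbest, idx] = min(snr);
x_better = trials(idx, :);  % returns [2 0 0 0] with fbest = 4 (experiment 5 in Table 2)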

3.2. New Communication Strategies

In the original DPFA, the communication strategies are divided into two types: intra-group information exchange (strategies 1 and 2) and inter-group information exchange (strategies 3 and 4). If the parameters of the solutions are independent, the former more easily obtains better results; if the parameters of the solutions are loosely correlated, the latter does [38]. To improve the efficiency of information exchange, the Taguchi method is used to enhance the original communication strategies.

3.2.1. New Strategy 1

New strategy 1 follows the three steps in Section 3.1. The candidate solutions are the best k solutions in the group. New strategy 1 is an enhanced version of strategies 1 and 2 in the original DPFA. Figure 2 shows the new communication strategy 1.

3.2.2. New Strategy 2

New strategy 2 also follows the three steps in Section 3.1. The candidate solutions are the best solutions of each group (one per group). New strategy 2 is an enhanced version of strategies 3 and 4 in the original DPFA. Figure 3 shows the new communication strategy 2.
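A sketch of how the two new strategies assemble their candidate sets is given below; taguchi_combine is an assumed helper wrapping the orthogonal-array search of Section 3.1 (it takes a matrix of candidate solutions, one per row, and the objective f, and returns x_better), and all other names are our own.

% New strategy 1 (intra-group): combine the k brightest fireflies of group g.
[~, order] = sort(F{g});                         % ascending: brightest first
x_better = taguchi_combine(X{g}(order(1:k), :), f);

% New strategy 2 (inter-group): combine the best firefly of every group.
leaders = zeros(G, D);
for g = 1:G
    [~, b] = min(F{g});                          % brightest firefly of group g
    leaders(g, :) = X{g}(b, :);
end
x_better = taguchi_combine(leaders, f);

In both cases, x_better then replaces the worst solutions, as in step 3 of Section 3.1.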

3.3. The Pseudocode of the EDPFA

In the EDPFA, all initial solutions are divided into $G$ subgroups. After a fixed number of iterations, these subgroups use new communication strategy 1 or 2 to exploit intra-group and inter-group collaboration. Algorithm 2 shows the pseudocode of the EDPFA.
Algorithm 2: The pseudocode of the EDPFA.
Objective function f(x), x = (x_1, x_2, ..., x_d);
Initialize a population of N fireflies x_i (i = 1, 2, ..., N);
Set the number of groups G.
1:  while t < MaxGeneration do
2:    for g = 1:G do
3:      Calculate the light intensity I_{i,g} using f(x_{i,g}) and rank the fireflies
4:      for i = 1:N/G do
5:        for j = 1:i do
6:          if I_{j,g} > I_{i,g} then
7:            Move firefly i toward j in the gth subgroup in all D dimensions by using Equation (3)
8:          end if
9:          Evaluate distance r by Equation (2) and update attractiveness using Equation (1)
10:       end for
11:     end for
12:   end for
13:   if t = nR then
14:     Communication strategies: apply x_better to update the worst solutions
15:   end if
16:   t = t + 1
17: end while

4. Experiment Using the EDPFA

4.1. Test Functions and Parameters Setting

This study chose the CEC2013 suite to test the proposed EDPFA. The CEC2013 suite includes unimodal functions (f1–f5), multimodal functions (f6–f20) and composite functions (f21–f28); their dimensions were set to 30 and the search range was set to [−100, 100]. More details of CEC2013 are presented in [39,40].
This study compared the proposed EDPFA with the FA and DPFA to test the performance of the algorithms. To ensure the fairness of the experiment, the 28 test functions were each evaluated over 51 runs and 500 iterations. Because the operation of the Taguchi method calls the test functions, the population size of the EDPFA was set to 94, whereas the population size of the FA and DPFA was set to 100; in the experimental comparison, the number of function calls was therefore the same for all algorithms. In addition, the three algorithms used consistent parameter settings (α = 0.25, β = 0.2, γ = 1, G = 4). The programming was based on MATLAB 2019a, and all simulations were performed on a laptop with an AMD Ryzen 7 2.90 GHz CPU and 16 GB of RAM.

4.2. Comparison with the Original FA and DPFA

Table 3 shows the performance comparison of the FA, DPFA and EDPFA in terms of the "Mean" of 51 runs; the smaller the "Mean", the better the final result. The results of the FA and DPFA on each test function were compared with those of the EDPFA: the symbol (=) indicates that the two algorithms performed similarly, (>) indicates that the EDPFA performed better and (<) indicates that the EDPFA performed worse. The last row of Table 3 counts the results over all benchmark functions.
According to the experimental results in Table 3, compared with the FA, the proposed EDPFA had 24 better results, 2 similar results and 2 worse results on the 28 test functions, which shows that the EDPFA had a competitive search ability and solution accuracy. Compared with the DPFA, the proposed EDPFA had 19 better results, 1 similar result and 8 worse results, which shows that the EDPFA outperformed the DPFA and that the Taguchi method indeed enhanced the DPFA. However, on the unimodal test functions f1–f5, the proposed EDPFA was not as good as the DPFA; the comparison results suggest that the EDPFA is less suitable for solving unimodal functions.
Next, to further evaluate the performance of the algorithms, the convergence curves of the FA, DPFA and EDPFA were compared. Each curve represents the convergence of the median value over the 51 runs of a given algorithm, and some of them are presented in Figure 4. Table 4 summarizes the convergence comparison under IEEE CEC2013 for the 30D optimization. As shown in Figure 4, the proposed EDPFA obtained a better convergence speed on some test functions (f5, f9, f11, f12, f13, f14, f15, f16, f17, f21, f22, f23, f24, f25, f27 and f28) but did not have the best convergence speed on others (f4, f6, f8, f19, f20 and f26). On the remaining test functions, the three algorithms had similar convergence performance. In general, compared with the FA and DPFA, the proposed EDPFA was more competitive in terms of convergence performance, and the convergence of the FA was the worst of the three algorithms.

4.3. Comparison with Other Algorithms

This section compares the performance of the EDPFA with some famous algorithms. All settings of the EDPFA were the same as in Section 4.1 and Section 4.2.
Table 5 shows the performance comparison with particle swarm optimization (PSO) [41], parallel particle swarm optimization (PPSO) [42], the genetic algorithm (GA) [11], the multi-verse optimizer (MVO) [43], the whale optimization algorithm (WOA) [44] and the ant lion optimizer (ALO) [45] in terms of the "Mean" of 51 runs. According to the data in Table 5, the proposed EDPFA clearly performed better under the CEC2013 test suite: compared with PSO, PPSO, the GA, the MVO, the WOA and the ALO, the proposed EDPFA achieved 24, 23, 24, 18, 26 and 21 better results, respectively.

5. Application for Transformer Fault Diagnosis

In machine learning, the backpropagation (BP) neural network has a strong ability to fit nonlinear systems and is well suited to prediction and classification problems [46]. Transformer fault diagnosis is essentially a fault classification problem, so introducing the BP neural network into the field of transformer fault diagnosis has been a research hotspot [47,48,49,50]. As described in this section, the proposed EDPFA was used to train the initial parameters of a BP neural network to improve the performance of a transformer fault diagnosis model based on a BP neural network.

5.1. Structure of Transformer Fault Diagnosis Model Based on a BP Neural Network

The steps to establish the transformer fault diagnosis model based on a BP neural network were as follows:
Step 1: First, the characteristic gas contents of transformers and the corresponding fault types were compiled into a data set.
Step 2: Then, 80% of the samples in the data set were used to train the BP neural network model. The other 20% of samples in the data set were used to test the trained BP neural network model.
Step 3: Finally, the transformer fault classification accuracy of the test set was counted to judge the performance of the model.
The transformer fault diagnosis data for dissolved gas in oil mainly include five fault gases (H2, CH4, C2H2, C2H4, C2H6) and the corresponding six fault types (normal state, NS; low-energy discharge, LED; arc discharge, AD; middle-and-low-temperature overheating, MLTO; high-temperature overheating, HTO; partial discharge, PD). Figure 5 shows the transformer fault diagnosis model based on a BP neural network.

5.2. Structure of Transformer Fault Diagnosis Model Based on EDPFA-BP Neural Network

Even though the fitting ability of a traditional BP neural network is very strong, it still has some inherent defects, including low accuracy and slow convergence, which can no longer meet the requirements of a power system regarding transformer reliability [33]. The main reason is that all the thresholds and weights are randomly generated before the training of a BP neural network, and these unoptimized initial values often lead to slow convergence and low accuracy of the fault diagnosis results. Therefore, this study adopted the EDPFA to optimize the initial values of the BP neural network to improve the performance of the model. Figure 6 shows the transformer fault diagnosis model based on the EDPFA-BP neural network.

5.3. Experiment Process and Analysis

5.3.1. The Data Collection and Pretreatment

In this study, there were 465 sets of transformer fault data (including labels and features), some of which are shown in Table 6. Table 7 shows the codes of the transformer fault types. Figure 7 shows the sample distribution of the transformer fault types, in which the HTO faults had the highest number and the PD faults had the lowest number. To verify the model, 80% of the data of each fault type was randomly selected as the training set and 20% as the test set. In total, there were 375 sets of training data and 90 sets of testing data.
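The per-class split described above can be sketched in MATLAB as follows, assuming labels is the 465 × 1 vector of fault codes; the variable names are illustrative.

% Sketch of the stratified 80/20 split: sample within each fault type.
trainIdx = []; testIdx = [];
for c = unique(labels)'
    idx = find(labels == c);
    idx = idx(randperm(numel(idx)));           % shuffle within the class
    nTrain = round(0.8 * numel(idx));
    trainIdx = [trainIdx; idx(1:nTrain)];      % 80% of each fault type for training
    testIdx  = [testIdx;  idx(nTrain+1:end)];  % remaining 20% for testing
end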

5.3.2. The Parameter Setting of a BP Neural Network

A BP neural network is a kind of mathematical model that can simulate complex nonlinear relations and automatically modify its parameters. A BP neural network consists of an input layer, hidden layers and an output layer: the signal travels from the input layer through the hidden layer to the output layer, and the relevant information is processed by adjusting the internal connections between the many nodes.
Figure 8 shows the topology of the BP neural network adopted in this study: the number of input nodes was 5 (five fault gases), the number of hidden layer nodes was 12 and the number of output nodes was 6 (six fault types). In addition, after many experimental trials, this study set the maximum number of iterations and the learning precision goal of the BP neural network to 1000 and 0.0001, respectively. The activation function was the sigmoid function, and the BP neural network used error backpropagation through the multilayer network.
To ensure the objectivity of the experiment process, all parameters in each transformer fault diagnosis model were the same. The parameters to be used for the EDPFA were consistent with Section 4.
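To clarify how a firefly encodes the network, the sketch below maps one EDPFA position vector to the 5–12–6 network's weights and thresholds and returns the training error used as the fitness. This is our own minimal illustration: it assumes the mean squared error on the training set as the objective, with Xtrain a 5 × n gas matrix and Ytrain a 6 × n one-hot fault matrix. With this topology, each firefly lives in a 5 × 12 + 12 + 12 × 6 + 6 = 150-dimensional search space.

% Sketch of the fitness function that the EDPFA minimizes when training
% the initial parameters of the BP neural network (assumed formulation).
function err = bp_fitness(x, Xtrain, Ytrain)
    W1 = reshape(x(1:60), 12, 5);        % input-to-hidden weights (12 x 5)
    b1 = reshape(x(61:72), 12, 1);       % hidden-layer thresholds
    W2 = reshape(x(73:144), 6, 12);      % hidden-to-output weights (6 x 12)
    b2 = reshape(x(145:150), 6, 1);      % output-layer thresholds
    sig = @(z) 1 ./ (1 + exp(-z));       % sigmoid activation
    H = sig(W1 * Xtrain + b1);           % hidden-layer outputs (12 x n)
    O = sig(W2 * H + b2);                % network outputs (6 x n)
    err = mean((O(:) - Ytrain(:)).^2);   % training MSE used as the fitness
end

The best position found by the EDPFA then seeds the BP neural network, which is trained further by standard backpropagation.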

5.3.3. Experiment Results and Analysis

Figure 9 shows the diagnosis results of the four models (the BP neural network, FA-BP neural network, DPFA-BP neural network and EDPFA-BP neural network). In Figure 9, the ordinate represents the six transformer fault types and the abscissa represents the 465 sets of transformer faults. One marker represents a predicted fault type and "✱" represents an actual fault type; if the two markers overlap, the transformer fault was predicted correctly, otherwise the prediction was wrong. To make the results more intuitive, "☐" marks samples that the improved BP neural network identified correctly but the original BP neural network misidentified, and "△" marks the opposite. Table 8 shows the diagnosis accuracy of each model.
As shown in Figure 9, compared with the models based on the improved BP neural networks (b–d), there were more misclassified faults in the unimproved BP model (a). This shows that the transformer fault diagnosis model based on the plain BP neural network had poor fault classification ability. Furthermore, compared with the other neural networks, the EDPFA-BP neural network clearly performed better on fault 4 (middle-and-low-temperature overheating), identifying it correctly more often. From Table 8, the fault classification accuracy of the EDPFA-BP neural network was the highest (up to 84.44%); compared with the other three models, its accuracy was higher by 11.11%, 7.77% and 4.44%, respectively. The recall and precision of each model are shown in Table 9 and Table 10. As shown in Table 9, the EDPFA-BP neural network had the highest recall for five of the six fault types; in particular, its recall for the PD fault reached 100%. From Table 10, the precision of the BP neural network was the lowest and that of the EDPFA-BP neural network was the highest overall. This indicates that the EDPFA-BP neural network had a better classification effect and fewer fault classification errors. From the above three aspects, it can be concluded that the proposed EDPFA better optimized the initial parameters of the BP neural network and improved the transformer fault diagnosis model based on a BP neural network.

6. Conclusions

An enhanced distributed parallel firefly algorithm (EDPFA) based on the Taguchi method was proposed and applied to transformer fault diagnosis. The Taguchi method was used to improve the effectiveness of the original communication strategies in the DPFA by exploiting the influence of the individual dimensions (parameters) of the optimal solutions. On the test functions, the implemented EDPFA achieved faster convergence and found better solutions: compared with the FA and DPFA, the EDPFA had 24 and 19 better results, respectively. Quickly and accurately diagnosing or predicting existing or latent transformer faults is important for the safety and stability of a power system. The proposed EDPFA was used to train the BP neural network to implement the diagnosis, and the experimental results showed that the proposed EDPFA effectively improved the accuracy of the transformer fault diagnosis model based on a BP neural network (by up to 11.11%). However, the EDPFA has not been fully studied and there is still considerable room for optimization, especially for solving unimodal optimization problems.

Author Contributions

Conceptualization, Z.-J.L. and W.-G.C.; methodology, J.S.; software, J.S.; validation, Z.-J.L.; formal analysis, Z.-Y.Y.; investigation, L.-Y.C.; resources, Z.-J.L.; data curation, Z.-J.L.; writing—original draft preparation, J.S.; writing—review and editing, Z.-J.L. All authors have read and agreed to the published version of the manuscript.

Funding

State Grid Corporation of China Science and Technology Project [5500-202099279A-0-0-00].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fister, I., Jr.; Yang, X.S.; Fister, I.; Brest, J.; Fister, D. A brief review of nature-inspired algorithms for optimization. arXiv 2013, arXiv:1307.4186. [Google Scholar]
  2. Blum, C.; Li, X. Swarm intelligence in optimization. In Swarm Intelligence; Springer: Berlin/Heidelberg, Germany, 2008; pp. 43–85. [Google Scholar]
  3. Shang, J.; Tian, Y.; Liu, Y.; Liu, R. Production scheduling optimization method based on hybrid particle swarm optimization algorithm. J. Intell. Fuzzy Syst. 2018, 34, 955–964. [Google Scholar] [CrossRef]
  4. Osorio, C.; Chong, L. A computationally efficient simulation-based optimization algorithm for large-scale urban transportation problems. Transp. Sci. 2015, 49, 623–636. [Google Scholar] [CrossRef]
  5. Xu, X.; Rong, H.; Trovati, M.; Liptrott, M.; Bessis, N. CS-PSO: Chaotic particle swarm optimization algorithm for solving combinatorial optimization problems. Soft Comput. 2018, 22, 783–795. [Google Scholar] [CrossRef] [Green Version]
  6. Wang, X.; Pan, J.S.; Chu, S.C. A parallel multi-verse optimizer for application in multilevel image segmentation. IEEE Access 2020, 8, 32018–32030. [Google Scholar] [CrossRef]
  7. Pan, J.S.; Shan, J.; Zheng, S.G.; Chu, S.C.; Chang, C.K. Wind power prediction based on neural network with optimization of adaptive multi-group salp swarm algorithm. Clust. Comput. 2021, 24, 2083–2098. [Google Scholar] [CrossRef]
  8. Abido, M.A. Optimal design of power-system stabilizers using particle swarm optimization. IEEE Trans. Energy Convers. 2002, 17, 406–413. [Google Scholar] [CrossRef]
  9. Karna, S.K.; Sahai, R. An overview on Taguchi method. Int. J. Eng. Math. Sci. 2012, 1, 1–7. [Google Scholar]
  10. Roy, R.K. A Primer on the Taguchi Method; Society of Manufacturing Engineers: Southfield, MI, USA, 2010. [Google Scholar]
  11. Labani, M.; Moradi, P.; Jalili, M. A multi-objective genetic algorithm for text feature selection using the relative discriminative criterion. Expert Syst. Appl. 2020, 149, 113276. [Google Scholar] [CrossRef]
  12. Saucedo-Dorantes, J.J.; Jaen-Cuellar, A.Y.; Delgado-Prieto, M.; de Jesus Romero-Troncoso, R.; Osornio-Rios, R.A. Condition monitoring strategy based on an optimized selection of high-dimensional set of hybrid features to diagnose and detect multiple and combined faults in an induction motor. Measurement 2021, 178, 109404. [Google Scholar] [CrossRef]
  13. Lin, H.L.; Chou, C.P. Optimization of the GTA welding process using combination of the Taguchi method and a neural-genetic approach. Mater. Manuf. Process. 2010, 25, 631–636. [Google Scholar] [CrossRef]
  14. Tsai, P.W.; Pan, J.S.; Chen, S.M.; Liao, B.Y. Enhanced parallel cat swarm optimization based on the Taguchi method. Expert Syst. Appl. 2012, 39, 6309–6319. [Google Scholar] [CrossRef]
  15. Subbaraj, P.; Rengaraj, R.; Salivahanan, S. Enhancement of self-adaptive real-coded genetic algorithm using Taguchi method for economic dispatch problem. Appl. Soft Comput. 2011, 11, 83–92. [Google Scholar] [CrossRef]
  16. Yang, X.S. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic Algorithms; Springer: Berlin/Heidelberg, Germany, 2009; pp. 169–178. [Google Scholar]
  17. Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 2013, 1, 36–50. [Google Scholar] [CrossRef] [Green Version]
  18. Xue, X. A compact firefly algorithm for matching biomedical ontologies. Knowl. Inf. Syst. 2020, 62, 2855–2871. [Google Scholar] [CrossRef]
  19. Shan, J.; Chu, S.C.; Weng, S.W.; Pan, J.S.; Jiang, S.J.; Zheng, S.G. A parallel compact firefly algorithm for the control of variable pitch wind turbine. Eng. Appl. Artif. Intell. 2022, 111, 104787. [Google Scholar] [CrossRef]
  20. Farahani, S.M.; Abshouri, A.A.; Nasiri, B.; Meybodi, M. A Gaussian firefly algorithm. Int. J. Mach. Learn. Comput. 2011, 1, 448. [Google Scholar] [CrossRef] [Green Version]
  21. Gandomi, A.H.; Yang, X.S.; Talatahari, S.; Alavi, A.H. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 89–98. [Google Scholar] [CrossRef]
  22. Crawford, B.; Soto, R.; Suárez, M.O.; Paredes, F.; Johnson, F. Binary firefly algorithm for the set covering problem. In Proceedings of the 2014 9th Iberian Conference on Information Systems and Technologies (CISTI), Barcelona, Spain, 18–21 June 2014; pp. 1–5. [Google Scholar]
  23. Sai, V.O.; Shieh, C.S.; Nguyen, T.T.; Lin, Y.C.; Horng, M.F.; Le, Q.D. Parallel Firefly Algorithm for Localization Algorithm in Wireless Sensor Network. In Proceedings of the 2015 Third International Conference on Robot, Vision and Signal Processing (RVSP), Kaohsiung, Taiwan, 18–20 November 2015. [Google Scholar] [CrossRef]
  24. Shan, J.; Pan, J.S.; Chang, C.K.; Chu, S.C.; Zheng, S.G. A distributed parallel firefly algorithm with communication strategies and its application for the control of variable pitch wind turbine. ISA Trans. 2021, 115, 79–94. [Google Scholar] [CrossRef]
  25. Apostolopoulos, T.; Vlachos, A. Application of the firefly algorithm for solving the economic emissions load dispatch problem. Int. J. Comb. 2010, 2011, 523806. [Google Scholar] [CrossRef] [Green Version]
  26. Patle, B.K.; Parhi, D.R.; Jagadeesh, A.; Kashyap, S.K. On firefly algorithm: Optimization and application in mobile robot navigation. World J. Eng. 2017, 14, 65–76. [Google Scholar] [CrossRef]
  27. Gokhale, S.S.; Kale, V.S. An application of a tent map initiated Chaotic Firefly algorithm for optimal overcurrent relay coordination. Int. J. Electr. Power Energy Syst. 2016, 78, 336–342. [Google Scholar] [CrossRef]
  28. Mahdavi-Meymand, A.; Zounemat-Kermani, M. A new integrated model of the group method of data handling and the firefly algorithm (GMDH-FA): Application to aeration modelling on spillways. Artif. Intell. Rev. 2020, 53, 2549–2569. [Google Scholar] [CrossRef]
  29. Wang, M.; Vandermaar, A.J.; Srivastava, K.D. Review of condition assessment of power transformers in service. IEEE Electr. Insul. Mag. 2002, 18, 12–25. [Google Scholar] [CrossRef]
  30. Zhang, D.; Li, C.; Shahidehpour, M.; Wu, Q.; Zhou, B.; Zhang, C.; Huang, W. A bi-level machine learning method for fault diagnosis of oil-immersed transformers with feature explainability. Int. J. Electr. Power Energy Syst. 2022, 134, 107356. [Google Scholar] [CrossRef]
  31. de Faria, H., Jr.; Costa, J.G.S.; Olivas, J.L.M. A review of monitoring methods for predictive maintenance of electric power transformers based on dissolved gas analysis. Renew. Sustain. Energy Rev. 2015, 46, 201–209. [Google Scholar] [CrossRef]
  32. Zhang, Y.; Chen, E.; Guo, P.J.; Ma, C. Application of improved particle swarm optimization BP neural network in transformer fault diagnosis. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017; pp. 6971–6975. [Google Scholar]
  33. Benmahamed, Y.; Kherif, O.; Teguar, M.; Boubakeur, A.; Ghoneim, S.S. Accuracy improvement of transformer faults diagnostic based on DGA data using SVM-BA classifier. Energies 2021, 14, 2970. [Google Scholar] [CrossRef]
  34. Arshad, M.; Islam, S.M.; Khaliq, A. Fuzzy logic approach in power transformers management and decision making. IEEE Trans. Dielectr. Electr. Insul. 2014, 21, 2343–2354. [Google Scholar] [CrossRef]
  35. Fei, S.; Zhang, X. Fault diagnosis of power transformer based on support vector machine with genetic algorithm. Expert Syst. Appl. 2009, 36, 11352–11357. [Google Scholar] [CrossRef]
  36. Wang, M.H.; Tseng, Y.F.; Chen, H.C.; Chao, K.H. A novel clustering algorithm based on the extension theory and genetic algorithm. Expert Syst. Appl. 2009, 36, 8269–8276. [Google Scholar] [CrossRef]
  37. Ou, M.; Wei, H.; Zhang, Y.; Tan, J. A dynamic adam based deep neural network for fault diagnosis of oil-immersed power transformers. Energies 2019, 12, 995. [Google Scholar] [CrossRef] [Green Version]
  38. Wang, R.-B.; Wang, W.-F.; Xu, L.; Pan, J.-S.; Chu, S.-C. An Adaptive Parallel Arithmetic Optimization Algorithm for Robot Path Planning. J. Adv. Transp. 2021, 2021, 1–22. [Google Scholar] [CrossRef]
  39. Liang, J.J.; Qu, B.Y.; Suganthan, P.N.; Hernández-Díaz, A.G. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013; pp. 281–295. [Google Scholar]
  40. Tanabe, R.; Fukunaga, A. Evaluating the performance of SHADE on CEC 2013 benchmark problems. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1952–1959. [Google Scholar]
  41. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165. [Google Scholar] [CrossRef]
  42. Chu, S.C.; Roddick, J.F.; Pan, J.S. A parallel particle swarm optimization algorithm with communication strategies. J. Inf. Sci. Eng. 2005, 21, 809–818. [Google Scholar]
  43. Aljarah, I.; Mafarja, M.; Heidari, A.A.; Faris, H.; Mirjalili, S. Multi-verse optimizer: Theory, literature review, and application in data clustering. In Nature-Inspired Optimizers; Springer: Berlin/Heidelberg, Germany, 2020; pp. 123–141. [Google Scholar]
  44. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  45. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  46. Li, J.; Cheng, J.H.; Shi, J.Y.; Huang, F. Brief introduction of back propagation (BP) neural network algorithm and its improvement. In Advances in Computer Science and Information Engineering; Springer: Berlin/Heidelberg, Germany, 2012; pp. 553–558. [Google Scholar]
  47. Sun, Y.J.; Zhang, S.; Miao, C.X.; Li, J.M. Improved BP neural network for transformer fault diagnosis. J. China Univ. Min. Technol. 2007, 17, 138–142. [Google Scholar] [CrossRef]
  48. Luo, Y.; Hou, Y.; Liu, G.; Tang, C. Transformer fault diagnosis method based on QIA optimization BP neural network. In Proceedings of the 2017 IEEE 2nd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chengdu, China, 15–17 December 2017; pp. 1623–1626. [Google Scholar]
  49. Yan, C.; Li, M.; Liu, W. Transformer fault diagnosis based on BP-Adaboost and PNN series connection. Math. Probl. Eng. 2019, 2019, 1019845. [Google Scholar] [CrossRef] [Green Version]
  50. Li, X.; Chen, Z.; Fan, X.; Xu, Q.; Lu, J.; He, T. Fault diagnosis of transformer based on BP neural network and ACS-SA. High Volt. Appar. 2018, 54, 134–139. [Google Scholar]
Figure 1. Strategy 1.
Figure 2. The new communication strategy 1.
Figure 3. The new communication strategy 2.
Figure 4. Convergence curves of the FA, DPFA and EDPFA.
Figure 5. The transformer fault diagnosis model based on a BP neural network.
Figure 6. The transformer fault diagnosis model based on the EDPFA-BP neural network.
Figure 7. The sample distribution of the transformer fault types.
Figure 8. The topology of the BP neural network.
Figure 9. The diagnosis results from the four models.
Table 1. The orthogonal array L9(3^4).

Experiment | A | B | C | D
1 | 1 | 1 | 1 | 1
2 | 1 | 2 | 2 | 2
3 | 1 | 3 | 3 | 3
4 | 2 | 1 | 2 | 3
5 | 2 | 2 | 3 | 1
6 | 2 | 3 | 1 | 2
7 | 3 | 1 | 3 | 2
8 | 3 | 2 | 1 | 3
9 | 3 | 3 | 2 | 1
Table 2. The results of solution combinations.

Experiment | d1 | d2 | d3 | d4 | f(x)
1 | 1 | 2 | 3 | 0 | 14
2 | 1 | 0 | 4 | 3 | 26
3 | 1 | 3 | 0 | 2 | 14
4 | 2 | 2 | 4 | 2 | 28
5 | 2 | 0 | 0 | 0 | 4
6 | 2 | 3 | 3 | 3 | 31
7 | 3 | 2 | 0 | 3 | 22
8 | 3 | 0 | 3 | 2 | 22
9 | 3 | 3 | 4 | 0 | 34
Table 3. Comparison of the EDPFA with the FA and DPFA.

Function | FA | vs | DPFA | vs | EDPFA
f1 | 5.60 × 10^−3 | > | 7.46 × 10^−4 | < | 2.50 × 10^−3
f2 | 2.49 × 10^7 | > | 1.49 × 10^7 | < | 1.72 × 10^7
f3 | 2.08 × 10^9 | > | 1.26 × 10^8 | < | 1.39 × 10^8
f4 | 6.08 × 10^4 | > | 4.54 × 10^4 | < | 6.02 × 10^4
f5 | 9.11 × 10 | > | 6.81 × 10 | < | 7.99 × 10
f6 | 6.92 × 10 | > | 4.99 × 10 | < | 5.06 × 10
f7 | 7.34 × 10 | > | 4.33 × 10 | > | 3.56 × 10
f8 | 2.10 × 10 | = | 2.12 × 10 | > | 2.10 × 10
f9 | 2.18 × 10 | > | 1.49 × 10 | > | 1.43 × 10
f10 | 5.07 | > | 1.48 | > | 1.47
f11 | 4.96 × 10 | > | 4.45 × 10 | > | 2.86 × 10
f12 | 4.40 × 10 | > | 4.34 × 10 | > | 1.73 × 10
f13 | 1.36 × 10^2 | > | 1.04 × 10^2 | > | 5.78 × 10
f14 | 3.65 × 10^3 | > | 3.71 × 10^3 | > | 3.63 × 10^3
f15 | 3.42 × 10^3 | > | 3.12 × 10^3 | > | 2.65 × 10^3
f16 | 9.70 × 10^−1 | > | 1.73 | > | 3.48 × 10^−1
f17 | 6.41 × 10 | > | 1.03 × 10^2 | > | 4.12 × 10
f18 | 7.96 × 10 | > | 9.38 × 10 | > | 5.21 × 10
f19 | 3.99 | > | 6.36 | > | 3.75
f20 | 1.48 × 10 | = | 1.48 × 10 | = | 1.48 × 10
f21 | 3.45 × 10^2 | > | 3.29 × 10^2 | > | 3.26 × 10^2
f22 | 5.12 × 10^3 | > | 3.92 × 10^3 | < | 4.40 × 10^3
f23 | 5.70 × 10^3 | > | 4.90 × 10^3 | > | 4.76 × 10^3
f24 | 2.27 × 10^2 | > | 2.10 × 10^2 | > | 2.03 × 10^2
f25 | 2.65 × 10^2 | > | 2.22 × 10^2 | > | 2.18 × 10^2
f26 | 2.77 × 10^2 | < | 2.57 × 10^2 | < | 2.99 × 10^2
f27 | 6.03 × 10^2 | > | 4.47 × 10^2 | > | 3.99 × 10^2
f28 | 2.83 × 10^2 | < | 3.01 × 10^2 | > | 2.90 × 10^2
>/=/< | 24/2/2 | | 19/1/8 | | –
Table 4. The summary of the convergence speed comparison of the FA, DPFA and EDPFA.

Name | Best (tie best) performance | Similar performance
FA | f6, f19 | f1, f2, f3, f7, f10, f18
DPFA | f4, f8, f20, f26 |
EDPFA | f5, f9, f11, f12, f13, f14, f15, f16, f17, f21, f22, f23, f24, f25, f27, f28 |
Table 5. Comparison of the EDPFA with some common algorithms.

Function | PSO | vs | PPSO | vs | GA | vs | EDPFA
f1 | 9.56 × 10 | > | 4.96 × 10^−1 | > | 3.74 × 10^−3 | > | 2.50 × 10^−3
f2 | 7.01 × 10^6 | < | 4.06 × 10^6 | < | 1.99 × 10^7 | > | 1.72 × 10^7
f3 | 9.16 × 10^9 | > | 2.32 × 10^9 | > | 3.71 × 10^8 | > | 1.39 × 10^8
f4 | 1.63 × 10^4 | < | 1.95 × 10^4 | < | 6.39 × 10^4 | > | 6.02 × 10^4
f5 | 1.09 × 10^2 | > | 4.19 × 10 | < | 4.11 × 10 | < | 7.99 × 10
f6 | 9.24 × 10 | > | 7.51 × 10 | > | 7.42 × 10 | > | 5.06 × 10
f7 | 1.37 × 10^2 | > | 1.06 × 10^2 | > | 4.37 × 10 | > | 3.56 × 10
f8 | 2.10 × 10 | = | 2.09 × 10 | < | 2.11 × 10 | > | 2.10 × 10
f9 | 3.25 × 10 | > | 3.16 × 10 | > | 1.32 × 10 | < | 1.43 × 10
f10 | 7.23 × 10 | > | 6.43 | > | 1.82 | > | 1.47
f11 | 2.66 × 10^2 | > | 2.23 × 10^2 | > | 3.09 × 10 | > | 2.86 × 10
f12 | 2.61 × 10^2 | > | 2.13 × 10^2 | > | 4.28 × 10 | > | 1.73 × 10
f13 | 3.66 × 10^2 | > | 3.16 × 10^2 | > | 5.52 × 10 | < | 5.78 × 10
f14 | 4.43 × 10^3 | > | 4.15 × 10^3 | > | 4.49 × 10^3 | > | 3.63 × 10^3
f15 | 4.35 × 10^3 | > | 3.97 × 10^3 | > | 3.05 × 10^3 | > | 2.65 × 10^3
f16 | 1.30 | > | 1.24 | > | 2.45 × 10^−1 | < | 3.48 × 10^−1
f17 | 2.70 × 10^2 | > | 1.96 × 10^2 | > | 4.56 × 10 | > | 4.12 × 10
f18 | 2.48 × 10^2 | > | 1.93 × 10^2 | > | 7.15 × 10 | > | 5.21 × 10
f19 | 2.17 × 10 | > | 1.22 × 10 | > | 3.99 | > | 3.75
f20 | 1.45 × 10 | < | 1.46 × 10 | < | 1.50 × 10 | > | 1.48 × 10
f21 | 3.76 × 10^2 | > | 3.56 × 10^2 | > | 3.03 × 10^2 | > | 3.26 × 10^2
f22 | 5.26 × 10^3 | > | 4.91 × 10^3 | > | 5.43 × 10^3 | > | 4.40 × 10^3
f23 | 5.52 × 10^3 | > | 5.19 × 10^3 | > | 5.27 × 10^3 | > | 4.76 × 10^3
f24 | 3.00 × 10^2 | > | 2.92 × 10^2 | > | 2.32 × 10^2 | > | 2.03 × 10^2
f25 | 3.36 × 10^2 | > | 3.26 × 10^2 | > | 2.78 × 10^2 | > | 2.18 × 10^2
f26 | 3.10 × 10^2 | > | 3.00 × 10^2 | > | 3.33 × 10^2 | > | 2.99 × 10^2
f27 | 1.17 × 10^3 | > | 1.16 × 10^3 | > | 4.87 × 10^2 | > | 3.99 × 10^2
f28 | 2.73 × 10^3 | > | 1.93 × 10^3 | > | 3.02 × 10^2 | > | 2.90 × 10^2
>/=/< | 24/1/3 | | 23/0/5 | | 24/0/4 | | –

Function | MVO | vs | WOA | vs | ALO | vs | EDPFA
f1 | 3.89 × 10^−1 | > | 2.08 × 10^2 | > | 1.49 × 10^−5 | < | 2.50 × 10^−3
f2 | 7.64 × 10^6 | < | 7.28 × 10^7 | > | 1.52 × 10^7 | < | 1.72 × 10^7
f3 | 4.97 × 10^8 | > | 3.04 × 10^10 | > | 1.03 × 10^9 | > | 1.39 × 10^8
f4 | 3.45 × 10^3 | < | 9.02 × 10^4 | > | 8.17 × 10^4 | > | 6.02 × 10^4
f5 | 7.22 | < | 4.56 × 10^2 | > | 6.04 × 10 | < | 7.99 × 10
f6 | 3.68 × 10 | < | 1.84 × 10^2 | > | 6.77 × 10 | > | 5.06 × 10
f7 | 6.97 × 10 | > | 4.47 × 10^2 | > | 1.38 × 10^2 | > | 3.56 × 10
f8 | 2.10 × 10 | = | 2.10 × 10 | = | 2.10 × 10 | = | 2.10 × 10
f9 | 2.01 × 10 | > | 3.90 × 10 | > | 3.06 × 10 | > | 1.43 × 10
f10 | 1.92 | > | 5.17 × 10^2 | > | 8.39 | > | 1.47
f11 | 1.01 × 10^2 | > | 5.07 × 10^2 | > | 2.44 × 10^2 | > | 2.86 × 10
f12 | 1.06 × 10^2 | > | 5.36 × 10^2 | > | 2.03 × 10^2 | > | 1.73 × 10
f13 | 1.94 × 10^2 | > | 6.53 × 10^2 | > | 3.19 × 10^2 | > | 5.78 × 10
f14 | 3.66 × 10^3 | > | 5.95 × 10^3 | > | 4.60 × 10^3 | > | 3.63 × 10^3
f15 | 3.75 × 10^3 | > | 5.53 × 10^3 | > | 4.41 × 10^3 | > | 2.65 × 10^3
f16 | 1.29 | > | 1.54 | > | 1.08 | > | 3.48 × 10^−1
f17 | 2.10 × 10^2 | > | 7.03 × 10^2 | > | 3.00 × 10^2 | > | 4.12 × 10
f18 | 2.07 × 10^2 | > | 6.70 × 10^2 | > | 2.38 × 10^2 | > | 5.21 × 10
f19 | 1.09 × 10 | > | 1.03 × 10^2 | > | 1.90 × 10 | > | 3.75
f20 | 1.39 × 10 | < | 1.48 × 10 | = | 1.44 × 10 | < | 1.48 × 10
f21 | 3.02 × 10^2 | < | 5.17 × 10^2 | > | 2.89 × 10^2 | < | 3.26 × 10^2
f22 | 4.13 × 10^3 | < | 6.73 × 10^3 | > | 5.50 × 10^3 | > | 4.40 × 10^3
f23 | 4.10 × 10^3 | < | 7.16 × 10^3 | > | 4.64 × 10^3 | < | 4.76 × 10^3
f24 | 2.59 × 10^2 | > | 3.16 × 10^2 | > | 2.79 × 10^2 | > | 2.03 × 10^2
f25 | 2.73 × 10^2 | > | 3.24 × 10^2 | > | 3.32 × 10^2 | > | 2.18 × 10^2
f26 | 2.95 × 10^2 | < | 4.00 × 10^2 | > | 3.40 × 10^2 | > | 2.99 × 10^2
f27 | 8.19 × 10^2 | > | 1.36 × 10^3 | > | 1.08 × 10^3 | > | 3.99 × 10^2
f28 | 3.45 × 10^2 | > | 4.76 × 10^3 | > | 1.18 × 10^3 | > | 2.90 × 10^2
>/=/< | 18/1/9 | | 26/2/0 | | 21/1/6 | | –
Table 6. Partial sample set of transformer fault diagnosis.

H2 | CH4 | C2H6 | C2H4 | C2H2 | Fault Type
14.673.6810.542.710.2NS
7.55.73.42.63.2NS
2203404248014NS
301101375222.3NS
801041.50NS
46.1311.5733.148.520.63NS
……
345112.2527.551.558.75LED
5659334470LED
5505334200LED
115.97514.725.36.8LED
781618635310LED
5477.48.65.4LED
……
217.5404.951.867.5AD
1678652.980.71005.9419.1AD
673.6423.577.5988.9344.4AD
60406.911070AD
2004814117131AD
4637.28.310771.9AD
……
1812622105280MLTO
16013033960MLTO
4.321931181250MLTO
170320535203.2MLTO
279042630.2MLTO
9259839726,78210,497−1MLTO
……
172.9334.1172.9812.537.7HTO
25.1411.91320.91832.818.4HTO
56286969287HTO
27437655100217HTO
15125.33.20.2HTO
5628596287HTO
……
9807358120PD
6505334200PD
15659334470PD
24.3216.361.6730.1827.47PD
2587.27.8824.7041.40PD
9807358120PD
Table 7. The codes of the transformer fault types.

Fault type | NS | LED | AD | MLTO | HTO | PD
Code | 01 | 02 | 03 | 04 | 05 | 06
Table 8. The accuracy of each model.

Transformer fault diagnosis model | Accuracy (%)
BP neural network | 73.33
FA-BP neural network | 76.67
DPFA-BP neural network | 80.00
EDPFA-BP neural network | 84.44
Table 9. The recall of each model.

Transformer fault diagnosis model | NS | LED | AD | MLTO | HTO | PD
BP neural network | 60.00% | 75.00% | 84.00% | 44.44% | 80.65% | 33.33%
FA-BP neural network | 70.00% | 75.00% | 84.00% | 55.56% | 80.65% | 66.66%
DPFA-BP neural network | 70.00% | 75.00% | 88.00% | 77.78% | 80.65% | 66.66%
EDPFA-BP neural network | 80.00% | 66.67% | 88.00% | 88.89% | 87.10% | 100%
Table 10. The precision of each model.

Transformer fault diagnosis model | NS | LED | AD | MLTO | HTO | PD
BP neural network | 75.00% | 69.23% | 77.78% | 33.33% | 89.29% | 50%
FA-BP neural network | 77.78% | 69.23% | 84.00% | 38.46% | 92.59% | 66.66%
DPFA-BP neural network | 77.78% | 75.00% | 84.62% | 46.67% | 100% | 66.66%
EDPFA-BP neural network | 88.89% | 80.00% | 81.48% | 61.54% | 100% | 75.00%