Article

Improving Electrical Fault Detection Using Multiple Classifier Systems

by José Oliveira 1, Dioéliton Passos 1, Davi Carvalho 1, José F. V. Melo 1, Eraylson G. Silva 2,* and Paulo S. G. de Mattos Neto 1,*
1 Centro de Informática, Universidade Federal de Pernambuco (CIn/UFPE), Recife 50740-560, Brazil
2 Campus Garanhuns—Universidade de Pernambuco, Garanhuns 55294-902, Brazil
* Authors to whom correspondence should be addressed.
Energies 2024, 17(22), 5787; https://doi.org/10.3390/en17225787
Submission received: 1 October 2024 / Revised: 23 October 2024 / Accepted: 6 November 2024 / Published: 20 November 2024
(This article belongs to the Section F1: Electrical Power System)

Abstract:
Machine Learning-based fault detection approaches in energy systems have gained prominence for their superior performance. These automated approaches can assist operators by highlighting anomalies and faults, providing a robust framework for improving Situation Awareness. However, existing approaches predominantly rely on monolithic models, which struggle with adapting to changing data, handling imbalanced datasets, and capturing patterns in noisy environments. To overcome these challenges, this study explores the potential of Multiple Classifier System (MCS) approaches. The results demonstrate that ensemble methods generally outperform single models, with dynamic approaches like META-DES showing remarkable resilience to noise. These findings highlight the importance of model diversity and ensemble strategies in improving fault classification accuracy under real-world, noisy conditions. This research emphasizes the potential of MCS techniques as a robust solution for enhancing the reliability of fault detection systems.

1. Introduction

Electrical power systems are vital to the functioning and advancement of modern societies, providing energy to critical infrastructure and services, including telecommunications, transportation, water supply, and emergency response systems [1,2,3]. Modern power systems consist of various electrical components [4], which are dynamic and susceptible to disturbances or failures. These failures may result from internal network issues, such as short circuits, or external factors, such as environmental conditions [5]. Large power generation plants must operate in synchronization with electrical grids, making it essential that the entire energy system functions safely and efficiently. In this context, fault detection is crucial, as it enables the prompt activation of protective measures, safeguarding equipment and preventing further damage to the network [6,7]. In electrical power systems, a fault refers to an abnormal condition, typically a short circuit [8]. Short circuits occur when the insulation of system components fails, often due to overvoltages caused by lightning strikes, switching surges, insulation contamination (such as salt spray or pollution), or other mechanical factors [9]. Faults are generally categorized into three types: single-phase (when one phase comes into contact with the ground), two-phase (when two phases come into contact, with or without ground involvement), and three-phase (when all three phases are shorted, potentially involving the ground as well) [9]. During a fault, there is a sudden drop in the impedance seen by the power system, leading to a significant increase in current. Since electric current is inversely proportional to the total resistance, the instantaneous increase in current can cause severe damage to the unaffected parts of the system still connected at the time of the fault.
Fault detection is crucial in improving Situation Awareness (SA) in energy systems. In this context, SA involves perceiving, understanding, and predicting changes within power systems [10,11]. SA supports operators and automated systems in understanding the system’s current state, detecting potential issues, and predicting future states. According to [12], the first level of SA is the perception of critical elements in the environment. In energy systems, this involves monitoring sensors, performance data, and operational parameters to detect when something is wrong—such as abnormal voltage, temperature spikes, or power outages [13]. Automated fault detection algorithms assist operators by highlighting anomalies that may indicate a fault. In this sense, energy operators can better manage system reliability and prevent cascading failures by integrating advanced fault detection systems (such as Machine Learning (ML) models and real-time monitoring) with a robust framework for SA. Enhanced SA improves the speed and accuracy of the response to faults, reducing downtime and improving overall system resilience. In energy systems, fault diagnosis and fault classification are two related but distinct processes within fault management. They serve complementary roles in maintaining system reliability and operational efficiency. Fault diagnosis in energy systems refers to the broader process of detecting, identifying, and locating faults [14]. Meanwhile, fault classification, as a subset of fault diagnosis, specifically focuses on categorizing the type of fault that has occurred based on its characteristics [15]. In this work, the focus is on the detection and identification of the failure subtasks in fault diagnosis.
In the field of energy fault detection, both data-driven and model-driven approaches have been employed to identify and classify system faults. Data-driven approaches [16,17] are used to discover patterns, make predictions, or draw insights from large amounts of data. They use statistical or ML models to learn from the data without relying on detailed prior knowledge of the system. Model-driven approaches [18] depend on predefined, often physics-based models representing the system’s underlying mechanisms. These models are built from theoretical knowledge, equations, or rules governing the system’s behavior. Model-driven approaches require expert knowledge and system principles [18]. Although they have strong logic and interpretability, applying them to energy systems with large uncertainties is challenging [18]. Numerous studies in the literature have applied a range of ML models for fault classification in transmission lines [16], including Decision Trees [19], Random Forests [20], XGBoost [21], CatBoost [15], LightGBM [22], and K-Nearest Neighbors (KNN) [23]. In addition, Hallmann et al. [24] reviewed the application of Artificial Intelligence (AI) and ML models in electric power system operations, including fault detection. They highlighted the usage of approaches based on Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) for this task. Jawad and Abid [25] combined wavelet transform, Ant Colony Optimization (ACO), and ANNs to detect different types of faults in high-voltage direct current (HVDC) transmission lines. Wang et al. [26] presented the use of a Stacked Sparse Autoencoder (SSAE), a Deep Learning (DL) model, for diagnosing power system line trip faults using operational data. The proposed SSAE-based method integrates SVM and PCA to improve accuracy. Wadi and Elmasry [27] proposed an anomaly-based technique for fault detection in electrical power systems based on the One-Class SVM and PCA.
The proposed method was trained and tested on a power line fault detection dataset. Veerasamy et al. [28] employed a combination of Discrete Wavelet Transform (DWT) and Long Short-Term Memory (LSTM) to detect high-impedance faults in solar photovoltaic integrated power systems. Alternatively, Ajagekar and You [29] proposed a hybrid quantum computing (QC)-based DL framework for the fault diagnosis of electrical power systems; this approach combines the feature extraction capabilities of a conditional restricted Boltzmann machine with the efficient classification of DL networks. Shadi, Ameli, and Azad [30] proposed a hierarchical framework to detect, classify, and locate frequency disturbance events by employing Recurrent Neural Network (RNN) and LSTM models. Alhanaf, Balik, and Farsadi [31] used ANNs and one-dimensional Convolutional Neural Networks (1D-CNNs) to detect, classify, and localize energy faults. Salehimehr, Miraftabzadeh, and Brenna [32] developed fault detection methods for DC microgrids utilizing compressed sensing and Decision Trees (DTs). Goni and Faruk [4] employed two different Extreme Learning Machines (ELMs) for fault detection and classification in transmission lines. Recently, Shakiba et al. [16] reviewed the literature and highlighted the wide usage of DT, Naive Bayes, Random Forest (RF), SVM, KNN, ANNs, and hybrid methods for fault detection and classification in power transmission lines. These models have been successfully applied to classify faults based on voltage and current signals. However, the performance of these ML models often considerably deteriorates when exposed to noisy data or signal distortions [33] due to issues related to overfitting, underfitting, and model selection. In this context, Multiple Classifier Systems (MCSs) are a promising alternative to address the limitations of single (or monolithic) approaches, improving the robustness and accuracy of fault classification models.
An MCS consists of a set of models, each specialized in recognizing different patterns, which are selected and combined to achieve a final classification. The motivation behind this approach is that no single model performs optimally across all possible scenarios [34].
Few studies have applied MCSs, also known as ensemble approaches, to fault classification in electrical transmission systems [33,35,36,37,38]; most have focused on traditional approaches, such as bagging or boosting [33,38]. These approaches combine the output of multiple models to improve overall accuracy but consider each classifier equally across the dataset. Conversely, relevant results in classification tasks [39] have been obtained by selecting specific classifiers for each test pattern, a technique known as dynamic classifier selection, as well as by combining models through stacking, which generates a meta-model that learns to combine the models in an ensemble, resulting in more accurate classifications [40]. This gap highlights the need to investigate whether MCS approaches, such as stacked learning and dynamic classifier selection, can significantly improve the performance of fault classification in electrical transmission systems, particularly with noisy data.
This work investigates the effectiveness of ensemble learning approaches, specifically stacked models and dynamic classifier selection. To achieve this, MCSs were developed, each employing a different approach to perform the final classification. These methods were benchmarked against traditional single models commonly used for fault classification in electrical transmission systems, which also offer interpretability. Some of these models allow for the extraction of rule sets for each decision (e.g., Decision Trees [19], XGBoost [21], CatBoost [15], and LightGBM [41]) or enable the identification of training instances that are most similar to a new instance (KNN [23]). These approaches provide interpretability, which is crucial for decision-making processes [42]. This comparison aimed to determine whether ensembles can provide superior performance under various noisy data conditions. Different noise levels were introduced into the datasets to evaluate the robustness of the approaches.
Through this comparison, it becomes clear that while prior works focus primarily on bagging, boosting, or static ensembles, our research is pioneering in applying dynamic classifier selection to electrical fault classification. Moreover, our results show that dynamic selection techniques outperform single models and static ensembles in noisy conditions, validating the importance of adaptive ensemble methods in the context of electrical fault classification.
The contributions of this work include the following:
  • The evaluation of well-established MCS approaches from the literature for the fault classification task in electrical transmission systems;
  • The assessment of the impact of different noise levels on the performance of static ensemble and dynamic selection approaches;
  • A comparison of the MCS approaches with various single models from the literature across 14 different scenarios;
  • The superior performance of dynamic selection approaches in dealing with noise and enhancing classification accuracy compared to single models.
The structure of this paper is as follows: Section 2 describes the MCS methods, Section 3 presents the experimental protocol and explains how the study was conducted, Section 4 discusses the results, and finally, Section 5 presents the conclusions and suggestions for future work.

2. Multiple Classifier Systems

The Multiple Classifier System (MCS), or ensemble learning, is an area of ML that stands out due to its remarkable theoretical and practical results [43]. Ensemble-based approaches aim to reduce the overall susceptibility of single ML models to bias and variance by combining multiple models, making them more robust. Therefore, these approaches must consider how they group and combine models to minimize their drawbacks in the final ensemble. The superiority of ensembles over single approaches has been demonstrated across several real-world problems, such as credit scoring [44], heart disease classification [45], correlative microscopy [46], fault classification [47], fingerprint analysis [48], and other applications [49]. The main motivation for using an MCS is based on the No Free Lunch theorem, which states that no single model is the best for all classes of real-world problems [34].
An MCS comprises three phases: generation, selection, and integration. In the generation phase, a pool of classifiers is created. The generated pool must be accurate and diverse [50]. Diversity occurs when the models in the pool differ in their performance for the same patterns. Several strategies can be employed to introduce diversity into the model pool, such as using different training samples, varying classifier types, selecting different features, or tuning distinct parameters for each classifier [50].
In the selection phase, one or more models are chosen based on a predefined criterion. This choice can be performed in two ways: static selection (SS) (Figure 1) or dynamic selection (DS) (Figure 2). In the first, the same set of models is used to classify all new instances in the test set. In the second, one specific model or set of models is chosen for each new instance.
When more than one classifier is selected, the outputs must be combined, making an integration phase necessary. In this phase, an aggregation strategy is applied to combine all outputs to generate the final classification for a given new pattern. For this phase, we evaluate two widely used approaches in the literature: majority voting [51] and stacked generalization [52].
In this work, we evaluate, for the first time, different MCS approaches for the electrical fault detection task. Figure 1 and Figure 2 show the MCS frameworks evaluated for static and dynamic ensembles, respectively.

2.1. Static Ensemble

In the static ensemble (Figure 1), the entire set of classifiers in the pool is considered for the final classification. The MCS developed in this work consists of two phases: (a) training and (b) testing. In the training phase, a set of classifiers P = {p1, p2, …, pn}, where n is the size of the pool, is trained using the training dataset, resulting in a pool P. In the case of a stacked model, the outputs C_train of the pool for the training dataset are sent as inputs to train an aggregation model A, which learns to combine the classifications from the pool. The output of this phase is the pool P containing the base classifiers and the aggregation function A that will be applied during the combination step.
In the testing phase, each new instance is passed to the pool, which returns C_test, a set of n classifications, one from each model. The aggregation function A is then applied to combine these classifications and produce the final classification.
As aggregation functions, we evaluated two approaches that are widely used in static ensemble methods: Majority Vote and stacked generalization. The Majority Vote method, introduced by Breiman (1996) [53], involves each base classifier predicting a class for a new data instance, with the final prediction determined by the class that receives the highest number of votes. This method enhances the overall predictive performance, making it particularly effective for unstable models where small changes in the training data can lead to significant variations in predictions [54].
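As an illustration, a hard majority vote over a small heterogeneous pool can be sketched with scikit-learn's VotingClassifier. The synthetic 12-feature, 5-class dataset below is only a stand-in for the fault data described in Section 3, not the authors' actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a fault dataset: 12 features (3-phase V/I), 5 classes.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=5, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Hard voting: each base model casts one vote; the majority class wins.
pool = [("dt", DecisionTreeClassifier(random_state=42)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=42)),
        ("knn", KNeighborsClassifier(n_neighbors=5))]
vote = VotingClassifier(estimators=pool, voting="hard").fit(X_tr, y_tr)
print(round(vote.score(X_te, y_te), 3))
```

Ties, if any, are broken by scikit-learn in ascending class order; the method assumes the pool members are reasonably diverse, otherwise their votes are redundant.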
Stacked generalization, introduced by Wolpert (1992) [55], involves training a meta-classifier to combine the predictions from base classifiers. This meta-classifier is trained using the outputs of the base models on the training sample. During inference, for a new pattern, the predictions of the base models are used as inputs to the meta-classifier, which then combines them to return the final prediction. This method enables the meta-classifier to effectively leverage the strengths of diverse base models, leading to improved predictive performance and robustness when these models exhibit high variability [40]. In this work, we evaluate two algorithms as meta-classifiers, Decision Trees and Logistic Regression, both of which are used in stacked generalization [40,56].
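A minimal stacked-generalization sketch, again on synthetic stand-in data, uses scikit-learn's StackingClassifier with a Logistic Regression meta-classifier trained on the base models' cross-validated predictions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

base = [("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("knn", KNeighborsClassifier())]
# The meta-classifier is trained on cross-validated base-model predictions,
# learning how to weigh and combine the pool for the final decision.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5).fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

Swapping final_estimator for a DecisionTreeClassifier reproduces the other meta-classifier variant evaluated in this work.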

2.2. Dynamic Ensemble

In the dynamic ensemble (Figure 2), a subset of classifiers from the pool is selected for each new test instance. This subset can be composed of one to n classifiers, where n is the size of the pool. The MCS developed in this work consists of two phases: (a) training and (b) testing. The training phase is composed of two steps: Classifier Generation and Meta-Classifier Generation. In the Classifier Generation step, similar to the static ensemble, a pool P of n classifiers is generated. The Meta-Classifier Generation step is executed only when the dynamic selection approach requires a meta-classifier to select the best model. In this step, the training samples are used to extract meta-features, and the algorithm M is applied to generate the meta-classifier.
The testing phase consists of three steps, and the path to the final classification depends on the chosen dynamic selection approach. The first step is RoC (Region of Competence) creation, which is executed by dynamic approaches that require a Region of Competence. The RoC is created using the validation instances most similar to the new test instance, in order to select the set of models with the best performance in classifying instances within this RoC. The second step is “Classifiers Evaluation”, which involves selecting the set of models based on specific criteria. Most approaches select classifiers based on their performance within the Region of Competence. However, we also evaluate the use of a meta-classifier, which extracts meta-features from the new instance and selects the set of models best suited to classify it. The output of this step is a set S of selected models, which can range from 1 to n models. Finally, in the classification step, the output of each model in the selected set is obtained. If only one classifier is selected, its classification is returned as the final classification (C_test). However, if more than one model is selected, C_test is achieved through majority voting in the final step.
In this work, we evaluate six state-of-the-art dynamic selection algorithms: OLA, DESP, KNORA-E, KNORA-U, MCB, and META-DES.

2.2.1. DCS-LA

Dynamic Classifier Selection by Local Accuracy (DCS-LA), also known as Overall Local Accuracy (OLA), introduced by Woods et al. (1997) [57], is a method for selecting the most competent classifier for each test sample based on local accuracy estimates within a Region of Competence (RoC). For each test instance, the RoC, composed of the K-Nearest Neighbors from the training set that are most similar to the new instance, is created. Then, the classifier with the highest accuracy within the RoC is selected to return the final classification.
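The selection rule can be made concrete with a deliberately minimal, from-scratch sketch (production use would rely on a library such as DESlib; the dataset and pool below are illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1200, n_features=12, n_informative=8,
                           n_classes=4, random_state=1)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5,
                                            random_state=1)

pool = [DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr),
        GaussianNB().fit(X_tr, y_tr),
        LogisticRegression(max_iter=1000).fit(X_tr, y_tr)]

# Precompute each classifier's hit/miss pattern on the validation set.
hits = np.array([clf.predict(X_val) == y_val for clf in pool])  # (n_clf, n_val)
nn = NearestNeighbors(n_neighbors=7).fit(X_val)

def ola_predict(x):
    # RoC = the 7 validation samples nearest to the test instance x.
    _, idx = nn.kneighbors(x.reshape(1, -1))
    local_acc = hits[:, idx[0]].mean(axis=1)  # local accuracy per classifier
    best = int(np.argmax(local_acc))          # most locally competent model
    return pool[best].predict(x.reshape(1, -1))[0]

y_pred = np.array([ola_predict(x) for x in X_te])
print(round(float((y_pred == y_te).mean()), 3))
```

The key design point is that competence is estimated locally, per test instance, rather than once globally over the whole validation set.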

2.2.2. DESP

The Dynamic Ensemble Selection Performance (DESP) method, introduced by Woloszynski et al. (2012) [58], entails selecting the classifiers that achieve a classification performance in the RoC superior to that of a Random Classifier (RC). The performance of the RC is defined as RC = 1/L, where L is the number of classes in the problem. If no base classifiers meet this criterion, then the full pool is utilized for classification, and the final classification is achieved through the majority voting method.
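The DESP selection rule itself is a one-liner once the local accuracies in the RoC are known; a minimal sketch (with made-up accuracy values):

```python
import numpy as np

def desp_select(local_acc, n_classes):
    """DESP rule: keep the classifiers whose local accuracy in the RoC beats
    a random classifier, RC = 1/L; otherwise fall back to the full pool."""
    rc = 1.0 / n_classes
    selected = np.flatnonzero(local_acc > rc)
    return selected if selected.size else np.arange(len(local_acc))

# Example: 5-class problem (RC = 0.2); three of four classifiers beat chance.
print(desp_select(np.array([0.10, 0.45, 0.30, 0.25]), 5))  # -> [1 2 3]
```

The fallback branch matters: without it, a hard test instance whose neighborhood defeats every classifier would leave the ensemble with nothing to vote.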

2.2.3. KNORA-E

The K-Nearest Oracles Eliminate (KNORA-E) method, introduced by Ko et al. (2008) [59], entails selecting classifiers that correctly classify all instances within the RoC. If no classifier correctly classifies all the samples in this region, then the size of the region is reduced until at least one classifier meets the criterion. If still no classifiers meet the criterion, the full pool is utilized. The final decision is made using the majority voting method.

2.2.4. KNORA-U

The K-Nearest Oracles Union (KNORA-U) method, as introduced by Ko et al. (2008) [59], involves selecting classifiers that correctly classify at least one instance within the Region of Competence (RoC). Each selected classifier is assigned a weighted vote based on the number of instances it correctly classifies within the RoC. The final decision is made by the class that accumulates the most votes from the selected classifiers.
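The weighted vote can be sketched in a few lines (the predictions and correct counts below are invented purely for illustration):

```python
from collections import Counter

def knora_u(predictions, correct_counts):
    """KNORA-U: every classifier that got at least one RoC sample right
    votes, weighted by how many RoC samples it classified correctly."""
    votes = Counter()
    for pred, weight in zip(predictions, correct_counts):
        if weight > 0:
            votes[pred] += weight
    return votes.most_common(1)[0][0]

# Four classifiers predict a class for one test instance; the weights are
# the number of RoC instances (out of k = 7) each classified correctly.
print(knora_u(predictions=[2, 0, 2, 1], correct_counts=[3, 0, 4, 5]))  # -> 2
```

Here class 2 wins with 3 + 4 = 7 votes against 5 for class 1, while the classifier with zero correct RoC samples is excluded entirely.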

2.2.5. MCB

The Multiple Classifier Behavior (MCB) method, introduced by Giacinto and Roli (2001) [60], involves dynamically selecting classifiers based on their local accuracy within a RoC, which is determined using similarity metrics and the concept of Behavioral Knowledge Space (BKS). The behavior of classifiers is represented by an MCB vector, which includes the class predictions made by each classifier for the test instance. The similarity between the MCB vector of the test instance and those of its neighbors is then computed. Neighbors with a similarity above a certain threshold are used to refine the RoC. Finally, the classifier with the highest accuracy within this refined RoC is chosen to make the final classification.

2.2.6. META-DES

The META-DES method, introduced by Cruz et al. (2014) [61], entails dynamically selecting classifiers based on multiple criteria that assess the competence of each classifier.
In the training phase, a meta-classifier M is generated through a meta-feature extraction process. This process involves generating multiple sets of meta-features, each representing a different criterion related to the performance of the base classifiers, such as their local accuracy, consensus in predictions, and confidence level for the input sample. These meta-features are then used to train the meta-classifier, which learns to predict whether a base classifier is competent enough to correctly classify new test instances.
In the testing phase, for each instance, the meta-classifier predicts whether a base model is competent to classify it; if more than one classifier is selected, the final classification is achieved through the majority voting method.

3. Experimental Protocol

This section is organized into two subsections: Dataset Description (Section 3.1) and Experimental Setup (Section 3.2). Section 3.1 provides detailed information about the two datasets used in the experiments, while Section 3.2 outlines the protocol followed to evaluate the single and ensemble approaches in this study.

3.1. Dataset Description

Two separate datasets were used to evaluate the single and ensemble approaches discussed in this paper. The first dataset, referred to here as Dataset 1, was generated using ATPDraw 7.4 software (https://www.atpdraw.net/, accessed on 15 November 2024). This dataset simulates the 138 kV section of a substation, including two transmission lines connected to the substation bus (Figure 3). Each line carried a load of 36 MW and 18 Mvar. Switches, resistors, and inductors connected to the line terminals were used to simulate faults. The switches were programmed to close and open, simulating faults in the system. The resistors and inductors have very small values and are used to correct possible calculation errors in the software since, without them, the associated impedance would be zero, and the current would tend to infinity. Various types of faults were simulated between circuit breakers and line outputs, covering phase-to-ground, phase-to-phase, phase-to-phase-to-ground, and three-phase faults. Current measurements were taken at the transformer connection breaker on the main bus and at the line connection breakers sharing the same bus. Additionally, the voltage at the line outputs was measured. The simulation lasted 200 s, with data collected at 0.01 s intervals, resulting in a database of 20,000 samples. This database is structured as follows: the feature columns represent voltage measurements at the outputs of the two lines for phases A, B, and C, followed by current measurements in the circuit above the circuit breakers. The classes are categorized as follows:
  • Class 0: No faults;
  • Class 1: Phase-to-ground faults;
  • Class 2: Phase-to-phase faults;
  • Class 3: Phase-to-phase-to-ground faults;
  • Class 4: Three-phase faults.
Figure 3. A diagram of the substation used to generate Dataset 1.
The second dataset (referred to as Dataset 2), derived from [62] (available at https://github.com/leandroensina/FADbF, accessed on 15 November 2024), was adapted to include only the energy variables of current and voltage signals, along with their Root Mean Square (RMS) values. The system studied was based on one of the transmission lines of the IEEE 09 Bus System (https://www.pscad.com/knowledge-base/article/25, accessed on 15 November 2024) and was simulated in the ATPDraw software; however, the authors did not present the simulation diagram. In this database, the letters A, B, and C represent the three phases of the system, and G represents the ground. The simulated faults involve the three phases and the ground. For our study, some manipulations were made to the database, such as excluding attributes that did not apply to our analysis and normalizing the original values, which were too high. These two manipulations brought the database closer to a real one, where only the current and voltage are measured in the substations, and these quantities are attenuated by instrumentation equipment. As the system comprises three phases, the final database contains twelve attributes and two variables that can be used as targets: the fault location and the fault type. The first is used to locate where the fault occurred and has discrete values between 4.14 and 414, each representing a distance in kilometers. The second variable has categorical values indicating the phases in which the fault occurred, such as AB, BC, and BG. In the present work, the fault type was used as the target. The classes are categorized as follows:
  • Class 0: Fault AB;
  • Class 1: Fault ABC;
  • Class 2: Fault ABG;
  • Class 3: Fault AC;
  • Class 4: Fault ACG.
Gaussian noise was introduced during preprocessing to better approximate real-world conditions and evaluate the robustness of the models. For the first dataset, noise values ranged from 10,000 to 60,000. For the second dataset, due to the extremely high original values (in the megawatt range), we normalized the data by dividing by 100,000, resulting in Gaussian noise values between 0.1 and 1.5. This distribution was chosen due to its similarity to noise encountered in real-world scenarios [63,64,65], making it easily applicable to the simulated cases.
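A noise-injection step of this kind can be sketched with NumPy; the standard deviations below follow the Dataset 2 range (0.1 to 1.5 after scaling), while the feature matrix is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def add_gaussian_noise(X, sigma):
    """Corrupt every feature with zero-mean Gaussian noise of std sigma."""
    return X + rng.normal(loc=0.0, scale=sigma, size=X.shape)

# Illustrative stand-in for 12 scaled voltage/current features.
X = rng.normal(size=(1000, 12))
for sigma in (0.1, 0.5, 1.0, 1.5):
    X_noisy = add_gaussian_noise(X, sigma)
    # The empirical std of the perturbation should track sigma.
    print(sigma, round(float(np.std(X_noisy - X)), 2))
```

Fixing the generator seed keeps every noise level reproducible across experiment runs.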
Each dataset was divided into two distinct samples: 80% for training and 20% for testing. This split ensures that the models have sufficient data to learn from while preserving a portion for evaluating the generalization power of the approaches on unseen data.
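The 80/20 hold-out split is a one-liner with scikit-learn; the stratification shown below is an assumption on our part (it keeps class proportions equal across splits, but the paper does not state whether it was used):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in sized like Dataset 1 (20,000 samples, 12 features).
X, y = make_classification(n_samples=20000, n_features=12, n_informative=8,
                           n_classes=5, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=7)
print(len(X_tr), len(X_te))  # -> 16000 4000
```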

3.2. Experimental Setup

The static ensemble and dynamic selection approaches were evaluated using the two datasets previously described. All selected approaches are well established in the ensemble learning literature and have achieved notable results in various application domains [50,66,67,68]. This study assessed the following static ensemble methods: Majority Vote, Stacked Decision Tree (DT), and Stacked Logistic Regression (LR). For dynamic selection, we evaluated the following approaches: DESP, KNORA-E, KNORA-U, MCB, META-DES, and OLA.
The static and dynamic ensemble approaches are compared with six individual classification models. To ensure a fair comparison, these classification models are also included in the pool of classifiers used in the evaluated ensembles. The individual models are described in detail below.
A Decision Tree (DT) is a supervised learning model that uses a rule-based approach to build a binary tree structure for decision-making. A DT maps a data domain to a response set by recursively dividing the domain into subdomains, ensuring that each division gains more significant information than the original node [69]. Figure 4 illustrates the structure of a DT, composed of multiple decision and leaf nodes. Each decision node represents a feature-based criterion that splits the data into subsets, while the leaf nodes correspond to the final classification outcomes. The hierarchical structure allows the model to traverse from the root to a specific leaf, determining the appropriate fault category based on the provided inputs. Through this structure, the interpretability of each decision can be extracted by the set of IF-ELSE rules constructed from the path between the initial decision node and the leaf node responsible for the classification. The DT is a white-box model for classification, offering the advantage of transparency in its decision-making process [70,71].
Random Forest (RF) is an ensemble learning model with multiple Decision Trees. Figure 5 illustrates the structure of an RF, consisting of several Decision Tree models, each trained on a different subset of training data using the bagging technique (random sampling with replacement). This process introduces diversity among the trees, producing independent and uncorrelated models, which helps reduce overfitting and improve generalization [72]. The final classification is determined by a Majority Vote from all the trees, with the class receiving the most votes being selected as the final prediction [20].
Gradient boosting (GB)-based models create an ensemble using the boosting technique, where new classifiers are trained based on the residuals of the current model. Figure 6 illustrates the structure of the Decision Tree ensemble generated by GB. The ensemble is built sequentially: the first tree is trained to predict the target class, and the subsequent trees are trained to predict the residuals from the previous tree. The final classification is achieved through the combination of the target prediction (ŷ) and the residual forecasts (ê), weighted by a learning rate (η). The core idea is that the models use gradient descent to minimize the loss function by iteratively adding models that correct the residuals of the combined ensemble.
The main algorithms in this category are Extreme Gradient Boosting (XGBoost) [21], Light Gradient Boosting Machine (LightGBM) [22], and CatBoost [15]. XGBoost is designed to optimize both computational speed and model performance. This model adds regularization terms to control the complexity of the model, which is helpful in preventing overfitting and improving the generalization of the model. LightGBM is similar to XGBoost but employs a distinct leaf-wise tree growth strategy. This method enables LightGBM to grow trees in a manner that more effectively reduces loss, often resulting in faster training times and improved accuracy. CatBoost is a classifier that simplifies data preparation by effectively handling missing values for numerical variables and non-encoded categorical variables, reducing the need for extensive preprocessing. Unlike XGBoost and LightGBM, which require manual encoding of categorical features, CatBoost processes these features natively, leading to potentially better performance and more straightforward implementation.
K-Nearest Neighbors (KNN) is a lazy learning model that requires no explicit training process. Figure 7 illustrates the decision scheme of KNN, which identifies the K data points in the training sample most similar to a new data point, forming a “nearest neighbors region” and making predictions based on this region. In classification tasks, for each new data point (illustrated in the figure as a blue ball), KNN assigns a class by determining the majority class among its nearest neighbors [23,73].
The value of K and the similarity metric are crucial hyperparameters for the model’s performance. The K value defines the size of the “nearest neighbors region”, while the similarity metric measures how similar the instances are.
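In scikit-learn terms, these two hyperparameters map directly onto the classifier's arguments (an illustrative sketch on synthetic data, not the tuned configuration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# n_neighbors is K, the size of the "nearest neighbors region";
# metric defines how similarity between instances is measured.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_tr, y_tr)   # lazy learning: fit only stores the training data
print(knn.score(X_te, y_te))
```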
Table 1 shows the hyperparameter values evaluated for each model. The candidate values were drawn from previous works addressing electrical fault detection with ML models [15,19,20,21,22,23], and the final values for each model were then selected through grid-search cross-validation with five folds. This technique helps mitigate overfitting by evaluating the models on multiple subsets of the data: for each candidate combination of values, the training data are split into five equal parts, with four parts used for training and the remaining part for validation in each iteration. The combination of hyperparameter values with the highest mean accuracy is selected.
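This selection procedure corresponds to scikit-learn's GridSearchCV with cv=5; the grid below is hypothetical, the actual candidate values being those in Table 1:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, random_state=0)

# Hypothetical candidate grid for one of the models (KNN).
grid = {"n_neighbors": [3, 5, 7], "metric": ["euclidean", "manhattan"]}

# Each combination is trained on 4/5 of the data and validated on the
# remaining 1/5, rotating over five folds; the best mean accuracy wins.
search = GridSearchCV(KNeighborsClassifier(), grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_)
```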
Table 2 and Table 3 show the hyperparameters used for static ensembles and dynamic selection approaches, respectively. The hyperparameters employed are the default values of the Python language’s DESlib package (https://deslib.readthedocs.io/en/latest/, accessed on 15 November 2024).
The evaluation of the models was conducted using three well-known metrics: accuracy, precision, and recall. Accuracy measures the overall correctness of the model, providing a general measure of how well the model performs across all classes. Precision and recall, on the other hand, offer insights into the model’s performance specifically on the positive class. Precision focuses on the correctness of positive predictions, indicating the proportion of true positive results among all positive predictions. Recall assesses the completeness of positive predictions, representing the proportion of true positive results out of the actual positive cases. Table 4 presents the equations, ranges, and acronyms for each metric. True positive (TP) refers to instances where the model correctly predicts the positive class. True negative (TN) refers to instances where the model correctly predicts the negative class. False positive (FP) occurs when the model incorrectly predicts the positive class for a negative instance. False negative (FN) occurs when the model incorrectly predicts the negative class for a positive instance.
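These definitions reduce to simple ratios over the confusion-matrix counts (binary form for brevity; multi-class scores average these per class):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Hypothetical counts: 90 TP, 85 TN, 5 FP, 10 FN.
a, p, r = metrics(tp=90, tn=85, fp=5, fn=10)
print(a, p, r)
```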

4. Results

The following sections (Section 4.1 and Section 4.2) analyze the experimental results of the evaluated approaches, encompassing single, static ensemble, and dynamic selection models for the two datasets used. The approaches are evaluated using the metrics accuracy (A), precision (P), and recall (R) across seven scenarios for Dataset 1 and eight scenarios for Dataset 2.

4.1. Dataset 1

Table 5 shows the metrics (A, P, and R) used to analyze the performance of the single models. In general, the values of all three metrics decreased as the noise level increased. RF and KNN achieved the highest A, P, and R values in the noise-free scenario, while DT produced the lowest. Nevertheless, KNN was the model most affected by the increase in the noise level, obtaining the second-worst result for noise levels from 10,000 to 60,000. This result shows KNN's sensitivity to noisy data. DT obtained the worst values in all scenarios, a result that may stem from overfitting to the training sample or from class imbalance.
LightGBM, XGBoost, and CatBoost, which ranked first, second, and third, respectively, demonstrated more stable performance with the addition of noise. All three models achieved A, P, and R values exceeding 99% across all scenarios. This result shows that these gradient boosting algorithms are able to handle noisy data by iteratively refining their predictions, which allows them to maintain robustness in the presence of noise.
Figure 8 shows the evolution of accuracy for the different noise levels. It is possible to note the performance degradation of all models, especially KNN, DT, and RF.
Table 6 shows the A, P, and R metrics attained by the static ensemble models (Majority Vote, Stacked DT, and Stacked LR). For all models, the metric values tend to decrease as the noise level increases. Both Stacked models attained more stable results, with all metrics varying around 99%, showing that the strategy of assigning weights was effective. Indeed, the single models LightGBM, XGBoost, and CatBoost obtained the best results and, therefore, received the highest weights in the Stacked ensembles. Conversely, the addition of noise negatively impacted the Majority Vote model, with the A, P, and R metrics dropping from 99.90% across the board to 97.92%, 98.00%, and 97.92%, respectively, suggesting that the weaker DT, KNN, and RF models influenced its decisions. These results highlight the importance of both the quality of the pool (its creation and training) and the combination strategy. Figure 9 shows the evolution of accuracy for the different noise levels; the performance degradation of all models, mainly of the Majority Vote, can be noted.
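A stacked ensemble of this kind can be sketched with scikit-learn's StackingClassifier, where a Logistic Regression meta-learner weighs each base model's out-of-fold predictions (the base pool below is illustrative, not the full pool used in the experiments):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)

# The LR meta-learner learns weights for the base models' predictions,
# so stronger base models dominate the final decision.
stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```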
Table 7 shows the performance of the dynamic selection models (DESP, KNORA-E, KNORA-U, MCB, M-DES, and OLA) for Dataset 1. For all models, the metric values tend to decrease with the increase in the noise level. KNORA-E, KNORA-U, and M-DES reached a more stable performance, with all metrics varying around 99%. On the other hand, the OLA model suffered from increasing noise since the A, P, and R metrics decreased from 99.90%, 99.90%, and 99.90% to 94.97%, 95.11%, and 94.97%. These results show that methods based on the Region of Competence (RoC), which rely on training instances similar to the test instance, like OLA, exhibit higher sensitivity to noise. In contrast, performance-based approaches, such as those employed by KNORA-E and KNORA-U, or the use of meta-features like the M-DES approach, demonstrate greater robustness. Figure 10 shows the evolution of accuracy for the different noise levels. The performance degradation of all models, especially DESP, MCB, and OLA, can be noted.
Figure 11 compares the metric values obtained by the best models of the evaluated approaches: single models, static ensemble, and dynamic selection. Models based on ensembles attained the best results in all scenarios. From the Without Noise scenario up to 50,000 Amp, Stacked LR attained the best A, P, and R values, while at 60,000 Amp, M-DES was superior for all metrics. It is worth highlighting that the ensemble approaches attained higher performance values than single models in most scenarios: in terms of A, for instance, Stacked LR and M-DES outperformed DT and KNN by around 48% and 41%, respectively.
In summary, the results for Dataset 1 demonstrate that the MCS approaches consistently outperformed single models in most scenarios, particularly in the presence of noise. Among the single models, gradient boosting methods such as LightGBM, XGBoost, and CatBoost showed more stable performance, maintaining high metric values even as the noise level increased, while KNN and DT were the most affected by noise. Static ensemble models, especially Stacked LR, proved effective in maintaining robust performance across noise levels, benefiting from the weighted combination of high-performing models. Among the dynamic selection approaches, KNORA-E, KNORA-U, and M-DES demonstrated resilience to noise, maintaining nearly consistent performance. The performance degradation of methods like OLA, MCB, and DESP highlights their sensitivity to noise. Thus, the MCS approaches evaluated, particularly Stacked LR and M-DES, provided the best results, validating the effectiveness of combining and dynamically selecting models to enhance robustness and accuracy in noisy environments.

4.2. Dataset 2

Table 8 shows the A, P, and R performance metrics obtained by the single models for Dataset 2. The metric values for all evaluated models trend downward as the noise level increases. XGBoost and LightGBM reached the best metric values in all scenarios: LightGBM ranked first in the first two scenarios, XGBoost was the best in the others, and both reached the same performance at a noise level of 0.3 (third scenario). KNN and DT obtained the two worst metric values in all scenarios, in that order. For instance, the A values of the KNN and DT models dropped by 11.90% and 16.78%, respectively, showing KNN's sensitivity to noisy data and the probable overfitting of the DT to the training sample. Figure 12 shows the degradation of all single models in terms of the A metric as the noise level increases. It is evident that KNN and DT produced the poorest results, while CatBoost, LightGBM, RF, and XGBoost were the models least affected by the increasing noise levels.
Table 9 shows the A, P, and R metrics attained by the static ensemble models. For all models, the metric values tend to decrease as the noise level increases. The results show that the Stacked LR and Majority Vote models demonstrate greater robustness across all noise levels. Initially, Stacked LR outperforms Majority Vote, delivering superior results with 99.80% A, P, and R in the noise-free scenario, compared to 99.66% for Majority Vote. However, from a noise level of 0.5 onward, Majority Vote showed competitive performance, with 78.58% accuracy at a noise level of 1.5, closely matching Stacked LR, which achieved 78.34% under the same conditions. In contrast, Stacked DT proved the most sensitive to noise, starting with an outstanding performance of 99.77% in the noise-free scenario but dropping significantly to 70.55% at a noise level of 1.5. This behavior likely stems from the nature of DTs, which are prone to overfitting, especially when dealing with noisy data. As noise increases, the trees within the ensemble may capture random fluctuations, leading to poorer generalization and significant performance degradation. Figure 13 shows the evolution of accuracy across noise levels; the degradation is especially pronounced for Stacked DT.
Table 10 shows the performance of the dynamic selection approaches (DESP, KNORA-E, KNORA-U, MCB, M-DES, and OLA) for Dataset 2. KNORA-E, KNORA-U, and M-DES exhibited more stable performance, starting with an accuracy of around 99% in the noise-free scenario but decreasing to approximately 77% at a noise level of 1.5. The DESP model showed competitive performance from a noise level of 0.5 onward, achieving a recall of 84.77%, compared to 84.19% for KNORA-U and 83.99% for M-DES. In contrast, OLA and MCB were more sensitive to noise, with OLA showing the most significant performance drop due to its reliance on local training instances, which become less reliable as noise increases.
Figure 14 shows the evolution of accuracy for the different noise levels. The performance degradation of all models, especially MCB and OLA, can be noted.
Figure 15 compares the metric values achieved by the best models from the evaluated approaches for Dataset 2: single models, static ensemble, and dynamic selection. The figure shows that, initially, with no noise (σ = 0), all models performed similarly, with metrics above 95%. As noise levels increase, Stacked LR exhibits a sharper decline than XGBoost and M-DES, which remain more resilient. From σ = 1.0 onward, the degradation becomes more pronounced for all models, with M-DES and XGBoost consistently outperforming Stacked LR, particularly in terms of precision and recall, demonstrating greater robustness to noise.
In summary, the results for Dataset 2 indicate that the ensemble and dynamic selection approaches consistently outperformed single models as noise levels increased. Among the single models, XGBoost and LightGBM demonstrated greater resilience, maintaining strong performance across various noise levels, while KNN and DT were the most affected by noise, with KNN showing higher sensitivity. Static ensemble models, particularly Stacked LR and Majority Vote, exhibited robust performance, with Stacked LR excelling in noise-free conditions and Majority Vote showing competitive performance as noise increased. Among the dynamic selection approaches, KNORA-E, KNORA-U, and M-DES demonstrated stability and resilience, while OLA and MCB experienced significant performance degradation under noisy conditions. Overall, M-DES and XGBoost were the most robust models, consistently providing superior results, validating the effectiveness of ensemble and dynamic selection methods in enhancing model performance and robustness in the presence of noise.

5. Conclusions

This work explored the application of Multiple Classifier Systems for fault classification in electrical transmission systems. Ensemble approaches, particularly Stacked and M-DES, consistently outperformed traditional single models, such as Decision Tree and K-Nearest Neighbors, demonstrating remarkable resilience in noisy environments. The robustness of gradient boosting models, such as LightGBM, XGBoost, and CatBoost, was evident, maintaining high accuracy levels even with the introduction of significant noise. Additionally, dynamic selection approaches, especially KNORA-E, KNORA-U, and M-DES, proved more effective at handling noise than static approaches. These findings highlight the potential of ensemble approaches in improving fault detection accuracy in challenging conditions, as well as the importance of model diversity and combination strategies in enhancing classification performance when dealing with noisy data. In addition, ensemble-based systems employed for fault detection can be a helpful tool in increasing Situational Awareness in electrical power systems.
The proposed approach can be practically applied in industrial and power transmission systems to detect and classify various fault types. When integrated with adequate computational resources for fast data processing, it enhances system operations as a complementary tool for protection, control, and supervision, ultimately improving Situational Awareness in energy systems. However, it is important to emphasize that careful selection of the classifier pool is essential for ensuring a reliable approach, enabling the MCS to generalize effectively across diverse energy system scenarios.
In future work, we plan to explore dynamic classifier selection and combination approaches to manage even higher noise levels and apply these techniques to different types of faults and components within electrical power systems. Additionally, we will investigate the integration of noise-reduction techniques with MCS approaches and examine the relationship between the noise impact and dataset size to gain deeper insights into model performance under varying conditions.

Author Contributions

Conceptualization, J.O., D.P., E.G.S. and P.S.G.d.M.N.; methodology, J.O., D.P., E.G.S. and P.S.G.d.M.N.; experimental process, J.O., D.P., D.C., J.F.V.M., E.G.S. and P.S.G.d.M.N.; formal analysis, J.O., D.P., E.G.S. and P.S.G.d.M.N.; writing—original draft preparation, J.O. and D.P.; writing—review and editing, E.G.S. and P.S.G.d.M.N.; supervision, P.S.G.d.M.N.; funding acquisition, P.S.G.d.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Research and Development and Innovation (R&D&I) Program regulated by the National Electric Energy Agency (ANEEL-PD-06908-0002/2021), as well as the company EVOLTZ, under the project “Improvement of Operator Situational Awareness using Data Mining and Artificial Intelligence Techniques”. This work also received support from the National Council for Scientific and Technological Development (CNPq) and the Coordination for the Improvement of Higher Education Personnel (CAPES). The authors would like to thank the Federal University of Pernambuco (UFPE) and the Advanced Institute of Technology and Innovation (IATI), Brazil.

Data Availability Statement

The original data used in this study are included in the article, and further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. O’Rourke, T.D. Critical infrastructure, interdependencies, and resilience. BRIDGE-Wash. Acad. Eng. 2007, 37, 22. [Google Scholar]
  2. Dobson, I.; Carreras, B.A.; Lynch, V.E.; Newman, D.E. Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization. Chaos Interdiscip. J. Nonlinear Sci. 2007, 17, 026103. [Google Scholar] [CrossRef] [PubMed]
  3. Yagan, O.; Qian, D.; Zhang, J.; Cochran, D. Optimal allocation of interconnecting links in cyber-physical systems: Interdependence, cascading failures, and robustness. IEEE Trans. Parallel Distrib. Syst. 2012, 23, 1708–1720. [Google Scholar] [CrossRef]
  4. Goni, M.F.; Nahiduzzaman, M.; Anower, M.; Rahman, M.; Islam, M.; Ahsan, M.; Haider, J.; Shahjalal, M. Fast and Accurate Fault Detection and Classification in Transmission Lines using Extreme Learning Machine. E-Prime Adv. Electr. Eng. Electron. Energy 2023, 3, 100107. [Google Scholar] [CrossRef]
  5. Janarthanam, K.; Kamalesh, P.; Basil, T.V.; Kovilpillai, A.K.J. Electrical Faults-Detection and Classification using Machine Learning. In Proceedings of the 2022 International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 16–18 March 2022; pp. 1289–1295. [Google Scholar]
  6. Jamil, M.; Sharma, S.K.; Singh, R. Fault detection and classification in electrical power transmission system using artificial neural network. SpringerPlus 2015, 4, 334. [Google Scholar] [CrossRef] [PubMed]
  7. Lusková, M.; Leitner, B. Societal vulnerability to electricity supply failure. Interdiscip. Descr. Complex Syst. INDECS 2021, 19, 391–401. [Google Scholar] [CrossRef]
  8. Grainger, J.J. Power System Analysis; McGraw-Hill: New York, NY, USA, 1999. [Google Scholar]
  9. Glover, J.D.; Overbye, T.J.; Sarma, M.S. Power System Analysis & Design; Cengage Learning: Boston, MA, USA, 2017. [Google Scholar]
  10. Ge, L.; Yan, J.; Sun, Y.; Wang, Z. Situation Awareness for Smart Distribution Systems; MDPI-Multidisciplinary Digital Publishing Institute: Basel, Switzerland, 2022. [Google Scholar]
  11. Panteli, M.; Kirschen, D.S. Situation awareness in power systems: Theory, challenges and applications. Electr. Power Syst. Res. 2015, 122, 140–151. [Google Scholar] [CrossRef]
  12. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  13. He, X.; Qiu, R.C.; Ai, Q.; Chu, L.; Xu, X.; Ling, Z. Designing for situation awareness of future power grids: An indicator system based on linear eigenvalue statistics of large random matrices. IEEE Access 2016, 4, 3557–3568. [Google Scholar] [CrossRef]
  14. Pinto, R.; Gonçalves, G. Application of artificial immune systems in advanced manufacturing. Array 2022, 15, 100238. [Google Scholar] [CrossRef]
  15. Ogar, V.N.; Hussain, S.; Gamage, K.A. Transmission line fault classification of multi-dataset using Catboost classifier. Signals 2022, 3, 468–482. [Google Scholar] [CrossRef]
  16. Shakiba, F.M.; Azizi, S.M.; Zhou, M.; Abusorrah, A. Application of machine learning methods in fault detection and classification of power transmission lines: A survey. Artif. Intell. Rev. 2023, 56, 5799–5836. [Google Scholar] [CrossRef]
  17. Guo, J.; Yang, Y.; Li, H.; Wang, J.; Tang, A.; Shan, D.; Huang, B. A hybrid deep learning model towards fault diagnosis of drilling pump. Appl. Energy 2024, 372, 123773. [Google Scholar] [CrossRef]
  18. Ruan, Y.; Zheng, M.; Qian, F.; Meng, H.; Yao, J.; Xu, T.; Pei, D. Fault detection and diagnosis of energy system based on deep learning image recognition model under the condition of imbalanced samples. Appl. Therm. Eng. 2024, 238, 122051. [Google Scholar] [CrossRef]
  19. Asman, S.H.; Ab Aziz, N.F.; Ungku Amirulddin, U.A.; Ab Kadir, M.Z.A. Decision tree method for fault causes classification based on RMS-DWT analysis in 275 Kv transmission lines network. Appl. Sci. 2021, 11, 4031. [Google Scholar] [CrossRef]
  20. Viswavandya, M.; Patel, S.; Sahoo, K. Analysis and comparison of machine learning approaches for transmission line fault prediction in power systems. J. Res. Eng. Appl. Sci. 2021, 6, 24–31. [Google Scholar] [CrossRef]
  21. Wang, B.; Yang, K.; Wang, D.; Chen, S.z.; Shen, H.j. The applications of XGBoost in fault diagnosis of power networks. In Proceedings of the 2019 IEEE Innovative Smart Grid Technologies-Asia (ISGT Asia), Chengdu, China, 21–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3496–3500. [Google Scholar]
  22. Atrigna, M.; Buonanno, A.; Carli, R.; Cavone, G.; Scarabaggio, P.; Valenti, M.; Graditi, G.; Dotoli, M. A machine learning approach to fault prediction of power distribution grids under heatwaves. IEEE Trans. Ind. Appl. 2023, 59, 4835–4845. [Google Scholar] [CrossRef]
  23. Abed, N.K.; Abed, F.T.; Al-Yasriy, H.F.; ALRikabi, H.T.S. Detection of power transmission lines faults based on voltages and currents values using K-Nearest neighbors. Int. J. Power Electron. Drive Syst. (IJPEDS) 2023, 14, 1033–1043. [Google Scholar] [CrossRef]
  24. Hallmann, M.; Pietracho, R.; Komarnicki, P. Comparison of Artificial Intelligence and Machine Learning Methods Used in Electric Power System Operation. Energies 2024, 17, 2790. [Google Scholar] [CrossRef]
  25. Jawad, R.S.; Abid, H. HVDC fault detection and classification with artificial neural network based on ACO-DWT method. Energies 2023, 16, 1064. [Google Scholar] [CrossRef]
  26. Wang, Y.; Liu, M.; Bao, Z.; Zhang, S. Stacked sparse autoencoder with PCA and SVM for data-based line trip fault diagnosis in power systems. Neural Comput. Appl. 2019, 31, 6719–6731. [Google Scholar] [CrossRef]
  27. Wadi, M.; Elmasry, W. An anomaly-based technique for fault detection in power system networks. In Proceedings of the 2021 International Conference on Electric Power Engineering–Palestine (ICEPE-P), Gaza, Palestine, 23–24 March 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar]
  28. Veerasamy, V.; Wahab, N.I.A.; Othman, M.L.; Padmanaban, S.; Sekar, K.; Ramachandran, R.; Hizam, H.; Vinayagam, A.; Islam, M.Z. LSTM recurrent neural network classifier for high impedance fault detection in solar PV integrated power system. IEEE Access 2021, 9, 32672–32687. [Google Scholar] [CrossRef]
  29. Ajagekar, A.; You, F. Quantum computing based hybrid deep learning for fault diagnosis in electrical power systems. Appl. Energy 2021, 303, 117628. [Google Scholar] [CrossRef]
  30. Shadi, M.R.; Ameli, M.T.; Azad, S. A real-time hierarchical framework for fault detection, classification, and location in power systems using PMUs data and deep learning. Int. J. Electr. Power Energy Syst. 2022, 134, 107399. [Google Scholar] [CrossRef]
  31. Alhanaf, A.S.; Balik, H.H.; Farsadi, M. Intelligent fault detection and classification schemes for smart grids based on deep neural networks. Energies 2023, 16, 7680. [Google Scholar] [CrossRef]
  32. Salehimehr, S.; Miraftabzadeh, S.M.; Brenna, M. A Novel Machine Learning-Based Approach for Fault Detection and Location in Low-Voltage DC Microgrids. Sustainability 2024, 16, 2821. [Google Scholar] [CrossRef]
  33. Harish, A.; Jayan, M. Classification of power transmission line faults using an ensemble feature extraction and classifier method. In Proceedings of the Inventive Communication and Computational Technologies: Proceedings of ICICCT 2020, Tamil Nadu, India, 25–26 June 2021; Springer: Singapore, 2021; pp. 417–427. [Google Scholar]
  34. Wolpert, D.H. The lack of a priori distinctions between learning algorithms. Neural Comput. 1996, 8, 1341–1390. [Google Scholar] [CrossRef]
  35. Nishat Toma, R.; Kim, C.H.; Kim, J.M. Bearing fault classification using ensemble empirical mode decomposition and convolutional neural network. Electronics 2021, 10, 1248. [Google Scholar] [CrossRef]
  36. Nishat Toma, R.; Kim, J.M. Bearing fault classification of induction motors using discrete wavelet transform and ensemble machine learning algorithms. Appl. Sci. 2020, 10, 5251. [Google Scholar] [CrossRef]
  37. Ghaemi, A.; Safari, A.; Afsharirad, H.; Shayeghi, H. Accuracy enhance of fault classification and location in a smart distribution network based on stacked ensemble learning. Electr. Power Syst. Res. 2022, 205, 107766. [Google Scholar] [CrossRef]
  38. Vaish, R.; Dwivedi, U.; Tewari, S.; Tripathi, S.M. Machine learning applications in power system fault diagnosis: Research advancements and perspectives. Eng. Appl. Artif. Intell. 2021, 106, 104504. [Google Scholar] [CrossRef]
  39. Fragoso, R.C.; Cavalcanti, G.D.; Pinheiro, R.H.; Oliveira, L.S. Dynamic selection and combination of one-class classifiers for multi-class classification. Knowl.-Based Syst. 2021, 228, 107290. [Google Scholar] [CrossRef]
  40. Hajihosseinlou, M.; Maghsoudi, A.; Ghezelbash, R. Stacking: A novel data-driven ensemble machine learning strategy for prediction and mapping of Pb-Zn prospectivity in Varcheh district, west Iran. Expert Syst. Appl. 2024, 237, 121668. [Google Scholar] [CrossRef]
  41. Bhattacharya, D.; Nigam, M.K. Energy efficient fault detection and classification using hyperparameter-tuned machine learning classifiers with sensors. Meas. Sensors 2023, 30, 100908. [Google Scholar] [CrossRef]
  42. Lim, S.H.; Kim, T.; Lee, K.Y.; Song, K.M.; Yoon, S.G. Two-Stage Fault Classification Algorithm for Real Fault Data in Transmission Lines. IEEE Access 2024, 12, 121156–121168. [Google Scholar] [CrossRef]
  43. Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A survey on ensemble learning. Front. Comput. Sci. 2020, 14, 241–258. [Google Scholar] [CrossRef]
  44. Moral-García, S.; Abellán, J. Improving the Results in Credit Scoring by Increasing Diversity in Ensembles of Classifiers. IEEE Access 2023, 11, 58451–58461. [Google Scholar] [CrossRef]
  45. Asif, D.; Bibi, M.; Arif, M.S.; Mukheimer, A. Enhancing heart disease prediction through ensemble learning techniques with hyperparameter optimization. Algorithms 2023, 16, 308. [Google Scholar] [CrossRef]
  46. Bitrus, S.; Fitzek, H.; Rigger, E.; Rattenberger, J.; Entner, D. Enhancing classification in correlative microscopy using multiple classifier systems with dynamic selection. Ultramicroscopy 2022, 240, 113567. [Google Scholar] [CrossRef]
  47. Zheng, J.; Liu, Y.; Ge, Z. Dynamic ensemble selection based improved random forests for fault classification in industrial processes. IFAC J. Syst. Control 2022, 20, 100189. [Google Scholar] [CrossRef]
  48. Walhazi, H.; Maalej, A.; Amara, N.E.B. A multi-classifier system for automatic fingerprint classification using transfer learning and majority voting. Multimed. Tools Appl. 2024, 83, 6113–6136. [Google Scholar] [CrossRef]
  49. Mienye, I.D.; Sun, Y. A survey of ensemble learning: Concepts, algorithms, applications, and prospects. IEEE Access 2022, 10, 99129–99149. [Google Scholar] [CrossRef]
  50. Cruz, R.M.; Sabourin, R.; Cavalcanti, G.D. Dynamic classifier selection: Recent advances and perspectives. Inf. Fusion 2018, 41, 195–216. [Google Scholar] [CrossRef]
  51. Aurangzeb, S.; Aleem, M. Evaluation and classification of obfuscated Android malware through deep learning using ensemble voting mechanism. Sci. Rep. 2023, 13, 3093. [Google Scholar] [CrossRef] [PubMed]
  52. Ganaie, M.A.; Hu, M.; Malik, A.K.; Tanveer, M.; Suganthan, P.N. Ensemble deep learning: A review. Eng. Appl. Artif. Intell. 2022, 115, 105151. [Google Scholar] [CrossRef]
  53. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  54. Aeeneh, S.; Zlatanov, N.; Yu, J. New Bounds on the Accuracy of Majority Voting for Multiclass Classification. IEEE Trans. Neural Netw. Learn. Syst. 2024, 1–5. [Google Scholar] [CrossRef]
  55. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
  56. Cui, S.; Yin, Y.; Wang, D.; Li, Z.; Wang, Y. A stacking-based ensemble learning method for earthquake casualty prediction. Appl. Soft Comput. 2021, 101, 107038. [Google Scholar] [CrossRef]
  57. Woods, K.; Kegelmeyer, W.P.; Bowyer, K. Combination of multiple classifiers using local accuracy estimates. IEEE Trans. Pattern Anal. Mach. Intell. 1997, 19, 405–410. [Google Scholar] [CrossRef]
  58. Woloszynski, T.; Kurzynski, M.; Podsiadlo, P.; Stachowiak, G.W. A measure of competence based on random classification for dynamic ensemble selection. Inf. Fusion 2012, 13, 207–213. [Google Scholar] [CrossRef]
  59. Ko, A.H.; Sabourin, R.; Britto, A.S. From dynamic classifier selection to dynamic ensemble selection. Pattern Recognit. 2008, 41, 1718–1731. [Google Scholar] [CrossRef]
  60. Giacinto, G.; Roli, F. Dynamic classifier selection based on multiple classifier behavior. Pattern Recognit. 2001, 34, 1879–1881. [Google Scholar] [CrossRef]
  61. Cruz, R.M.; Sabourin, R.; Cavalcanti, G.D. META-DES: A dynamic ensemble selection framework using meta-learning. Pattern Recognit. 2014, 48, 1925–1935. [Google Scholar] [CrossRef]
  62. Ensina, L.A.; Oliveira, L.E.d.; Cruz, R.M.; Cavalcanti, G.D. Fault distance estimation for transmission lines with dynamic regressor selection. Neural Comput. Appl. 2024, 36, 1741–1759. [Google Scholar] [CrossRef]
  63. Kurukuru, V.S.B.; Blaabjerg, F.; Khan, M.A.; Haque, A. A novel fault classification approach for photovoltaic systems. Energies 2020, 13, 308. [Google Scholar] [CrossRef]
Figure 1. A general framework of the Multiple Classifier System (MCS), also known as ensemble learning, utilizing static selection (SS). This framework comprises two main components: (a) generating a pool of classifiers and trainable aggregation functions, and (b) selecting one or more models (and their combinations) to classify all test patterns.
Figure 2. A general framework of the Multiple Classifier System (MCS), or ensemble learning, employing dynamic selection (DS). This framework consists of two parts: (a) generation of the pool of classifiers and (b) selection of one or more models from a Region of Competence (RoC) to classify each pattern of the test set.
Figure 4. The structure of a Decision Tree. Decision nodes represent points where the data are split based on rules or characteristics, and leaf nodes are the terminal points that provide the final classification.
Figure 5. The structure of a Random Forest. Random Forest is an ensemble learning method based on decision trees that employs majority voting among its trees to generate the final classification result.
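The majority-voting scheme in Figure 5 can be sketched with scikit-learn. The dataset and forest size below are illustrative, not those used in this study; note also that scikit-learn's `RandomForestClassifier.predict` averages class probabilities rather than counting hard votes, so the manual tally here illustrates the textbook scheme and may differ from `forest.predict` on rare ties.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic data, not the paper's fault datasets.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)

# Each tree casts a vote; the ensemble output is the majority class.
votes = np.stack([tree.predict(X[:5]) for tree in forest.estimators_])
majority = np.apply_along_axis(
    lambda v: np.bincount(v.astype(int)).argmax(), 0, votes
)
```

Here `votes` holds one row per tree, and `majority` collapses them column-wise into the final per-sample prediction.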
Figure 6. The structure of a gradient boosting model. Multiple decision trees are trained sequentially, with each tree learning to correct the errors of the previous ones. The output of each tree, represented as e₁, e₂, e₃, …, contributes to the final prediction by combining their individual outputs.
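The residual-fitting idea behind Figure 6 can be sketched in a few lines: each new tree is fit to the error left by the running ensemble, and the final prediction is the shrunken sum of all tree outputs. The data, depth, learning rate, and tree count below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

# Each tree e_i is fit to the residual left by the previous trees;
# the ensemble prediction is the learning-rate-scaled sum of outputs.
lr, pred, trees = 0.5, np.zeros_like(y), []
for _ in range(50):
    tree = DecisionTreeRegressor(max_depth=2).fit(X, y - pred)
    pred += lr * tree.predict(X)
    trees.append(tree)
```

After the loop, the training residual `y - pred` is much smaller than the original targets, which is exactly the sequential error correction the figure depicts.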
Figure 7. An example illustrating the decision scheme of the k-Nearest Neighbors (k-NN) algorithm. The figure shows a data point (in blue) being classified based on the nearest data points (neighbors) for different values of K. Two concentric circles represent the neighborhoods for K = 3 and K = 5. The data points are labeled as belonging to different classes (green and purple), with the classification outcome determined by the majority class among the nearest neighbors within each circle.
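The sensitivity to K shown in Figure 7 is easy to reproduce: on the toy layout below (an assumption for illustration, not the figure's exact coordinates), the same query point is assigned to one class with K = 3 and to the other with K = 5.

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy 2-D points from two classes ("green" = 0, "purple" = 1).
X = [[0, 0], [1, 0], [0, 1], [2, 2], [2, 3], [3, 2], [3, 3]]
y = [0, 0, 0, 1, 1, 1, 1]
query = [[1.4, 1.4]]

preds = {}
for k in (3, 5):
    # Majority vote among the k nearest training points.
    preds[k] = KNeighborsClassifier(n_neighbors=k).fit(X, y).predict(query)[0]
```

With K = 3 the neighborhood contains two class-0 points and one class-1 point, so the vote yields class 0; widening to K = 5 pulls in two more class-1 points and flips the decision to class 1.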
Figure 8. Accuracy obtained by evaluated single models for Dataset 1.
Figure 9. Accuracy obtained by the evaluated static ensemble models for Dataset 1.
Figure 10. Accuracy obtained by the evaluated dynamic selection models for Dataset 1.
Figure 11. Accuracy (A), Precision (P), and Recall (R) metrics for the best models of each approach across all noise-level scenarios in Dataset 1.
Figure 12. Accuracy obtained by evaluated single models for Dataset 2.
Figure 13. Accuracy obtained by the evaluated static ensemble models for Dataset 2.
Figure 14. Accuracy obtained by the evaluated dynamic selection models for Dataset 2.
Figure 15. Accuracy (A), Precision (P), and Recall (R) metrics for the best models of each approach across all noise-level scenarios in Dataset 2.
Table 1. Hyperparameters used in the grid search for each model.

| Model | Parameter | Values |
|---|---|---|
| KNN | n_neighbors | {1, 2, 3, 4, 5, 6, 7, 8, 9, 10} |
| | weights | {'uniform', 'distance'} |
| | metric | {'euclidean', 'manhattan', 'minkowski'} |
| Decision Tree | random_state | {0, 1, 2, 42} |
| | criterion | {'gini', 'entropy'} |
| | max_depth | {2, 3, 4, 5, 6, 7, 8, 9, 10, 11} |
| | min_samples_leaf | {1, 2, 4, 6} |
| Random Forest | n_estimators | {1, 10, 30, 100, 200} |
| | random_state | {0, 42} |
| | max_depth | {2, 10, 30, None} |
| | max_features | {'auto', 'sqrt', 'log2'} |
| | min_samples_leaf | {1, 2} |
| XGBoost | learning_rate | {0.1, 0.547, 0.6427} |
| | max_depth | {2, 4, 6, 8, 10} |
| | n_estimators | {2, 4, 8, 10, 200} |
| | min_child_weight | {1, 3, 5} |
| | subsample | {0.7, 0.8, 0.9} |
| LightGBM | learning_rate | {0.001, 0.01, 0.1} |
| | max_depth | {2, 4, 6, 8, 10} |
| | min_child_samples | {20} |
| | n_estimators | {2, 4, 8, 10, 200} |
| | num_leaves | {7, 31} |
| | boosting_type | {'gbdt', 'goss'} |
| CatBoost | learning_rate | {0.1, 0.01, 0.001} |
| | max_depth | {2, 4, 6, 8, 10} |
| | n_estimators | {2, 4, 8, 10, 200} |
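Grids like those in Table 1 are typically searched exhaustively with cross-validation. The sketch below runs only the KNN grid from the table, on synthetic data rather than the paper's datasets; the cross-validation fold count and scoring metric are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Illustrative synthetic data, not the paper's fault datasets.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

# KNN grid from Table 1.
grid = {
    "n_neighbors": list(range(1, 11)),
    "weights": ["uniform", "distance"],
    "metric": ["euclidean", "manhattan", "minkowski"],
}
search = GridSearchCV(
    KNeighborsClassifier(), grid, cv=5, scoring="accuracy"
).fit(X, y)
```

`search.best_params_` then holds the winning combination and `search.best_estimator_` the refitted model.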
Table 2. Parameters used for each static ensemble approach.

| Static Ensemble | Parameter | Value |
|---|---|---|
| Majority Vote | voting | {'hard'} |
| Stacked DT | meta_classifier | {DecisionTree} |
| | meta_classifier_criterion | {'gini'} |
| | meta_classifier_min_samples_leaf | {1} |
| | meta_classifier_min_samples_split | {2} |
| | meta_classifier_splitter | {'best'} |
| Stacked LR | meta_classifier | {LogisticRegression} |
Table 3. Parameters used for each dynamic selection approach.

| Dynamic Selection | Parameter | Value |
|---|---|---|
| DESP | DFP | {False} |
| | DESL_perc | {0.5} |
| | IH_rate | {0.3} |
| | k | {7} |
| | knn_classifier | {'knn'} |
| | knn_metric | {'minkowski'} |
| | mode | {'selection'} |
| KNORA-E, KNORA-U | DFP | {False} |
| | DESL_perc | {0.5} |
| | IH_rate | {0.3} |
| | k | {7} |
| | knn_classifier | {'knn'} |
| | knn_metric | {'minkowski'} |
| MCB | DFP | {False} |
| | DESL_perc | {0.5} |
| | IH_rate | {0.3} |
| | diff_thresh | {0.1} |
| | k | {7} |
| | knn_classifier | {'knn'} |
| | knn_metric | {'minkowski'} |
| | knne | {False} |
| M-DES | DFP | {False} |
| | DESL_perc | {0.5} |
| | Hc | {1.0} |
| | IH_rate | {0.3} |
| | Kp | {5} |
| | k | {7} |
| | knn_classifier | {'knn'} |
| | knn_metric | {'minkowski'} |
| | meta_classifier | {'Multinomial naive Bayes'} |
| | mode | {'selection'} |
| OLA | DFP | {False} |
| | DESL_perc | {0.5} |
| | k | {7} |
| | knn_classifier | {'knn'} |
| | knn_metric | {'minkowski'} |
| | knne | {False} |
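The parameter names in Table 3 (DFP, IH_rate, k, knne) match the DESlib library's dynamic selection API, which is the usual implementation choice. To keep the sketch dependency-free, the code below instead implements OLA (Overall Local Accuracy) from scratch with scikit-learn only: per query, the classifier that is most accurate on the query's k = 7 nearest neighbors in the DSEL set (its Region of Competence) is selected. The pool generation via bootstrap-sampled shallow trees is an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_dsel, y_tr, y_dsel = train_test_split(X, y, test_size=0.5, random_state=0)

# Diverse pool of weak trees via bootstrap sampling (illustrative).
rng = np.random.default_rng(0)
pool = []
for _ in range(10):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=2).fit(X_tr[idx], y_tr[idx]))

# Region of Competence: k = 7 nearest DSEL neighbors (Table 3).
nn = NearestNeighbors(n_neighbors=7).fit(X_dsel)

def ola_predict(X_query):
    roc = nn.kneighbors(X_query, return_distance=False)        # (n_q, 7)
    preds = np.stack([clf.predict(X_query) for clf in pool])   # (n_clf, n_q)
    # Local accuracy of each classifier inside each query's RoC.
    local_acc = np.stack(
        [(clf.predict(X_dsel)[roc] == y_dsel[roc]).mean(axis=1) for clf in pool]
    )                                                          # (n_clf, n_q)
    # Select, per query, the most locally competent classifier.
    return preds[local_acc.argmax(axis=0), np.arange(len(X_query))]

y_hat = ola_predict(X_dsel[:20])
```

DES methods such as KNORA-E/U or META-DES differ only in how competence is estimated and in selecting an ensemble rather than a single classifier per query.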
Table 4. Metrics for classification evaluation. For all metrics, the higher the value, the better the classification performance.

| Metric | Acronym | Equation | Limits |
|---|---|---|---|
| Accuracy | A | (TP + TN) / (TP + TN + FP + FN) | [0, 1] |
| Precision | P | TP / (TP + FP) | [0, 1] |
| Recall | R | TP / (TP + FN) | [0, 1] |
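The three equations in Table 4 reduce to a few lines of arithmetic over confusion-matrix counts; the counts in the example call are made up for illustration.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, and recall from confusion-matrix counts (Table 4)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Illustrative counts: 90 true positives, 85 true negatives,
# 15 false positives, 10 false negatives.
a, p, r = classification_metrics(tp=90, tn=85, fp=15, fn=10)
```

For these counts, A = 175/200 = 0.875, P = 90/105 ≈ 0.857, and R = 90/100 = 0.90, all inside the [0, 1] limits the table lists.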
Table 5. The evaluation of the single models on Dataset 1 for several noise levels. The best result for each metric is highlighted in bold.

| Noise Level | Metric | CatBoost | DT | KNN | LightGBM | RF | XGBoost |
|---|---|---|---|---|---|---|---|
| Without noise | A | 99.80 | 74.58 | **99.90** | 99.85 | **99.90** | 99.83 |
| | P | 99.80 | 77.24 | **99.90** | 99.85 | **99.90** | 99.83 |
| | R | 99.80 | 74.58 | **99.90** | 99.85 | **99.90** | 99.83 |
| 10,000 | A | 99.67 | 71.58 | 94.40 | 99.75 | **99.80** | 99.62 |
| | P | 99.67 | 74.52 | 94.36 | 99.75 | **99.80** | 99.63 |
| | R | 99.67 | 71.58 | 94.40 | 99.75 | **99.80** | 99.62 |
| 20,000 | A | 99.55 | 69.10 | 88.72 | **99.65** | 99.58 | 99.62 |
| | P | 99.55 | 72.02 | 88.45 | **99.65** | 99.58 | 99.62 |
| | R | 99.55 | 69.10 | 88.72 | **99.65** | 99.58 | 99.62 |
| 30,000 | A | 99.50 | 66.07 | 84.52 | 99.55 | 99.08 | **99.62** |
| | P | 99.50 | 68.64 | 83.94 | 99.55 | 99.08 | **99.63** |
| | R | 99.50 | 66.07 | 84.52 | 99.55 | 99.08 | **99.62** |
| 40,000 | A | **99.58** | 69.15 | 78.75 | 99.48 | 98.50 | 99.35 |
| | P | **99.58** | 98.50 | 77.58 | 99.48 | 98.50 | 99.35 |
| | R | **99.58** | 98.50 | 78.75 | 99.48 | 98.50 | 99.35 |
| 50,000 | A | 99.42 | 67.42 | 75.25 | **99.50** | 98.02 | 99.48 |
| | P | 99.42 | 67.60 | 73.40 | **99.50** | 98.05 | 99.48 |
| | R | 99.42 | 67.42 | 75.25 | **99.50** | 98.02 | 99.48 |
| 60,000 | A | **99.22** | 67.22 | 70.17 | 99.20 | 97.70 | 99.17 |
| | P | **99.22** | 60.77 | 67.34 | 99.20 | 97.74 | 99.18 |
| | R | **99.22** | 67.22 | 70.17 | 99.20 | 97.70 | 99.17 |
Table 6. The evaluation of the static ensemble on Dataset 1 for several noise levels. The best result for each metric is highlighted in bold.

| Noise Level | Metric | Majority Vote | Stacked DT | Stacked LR |
|---|---|---|---|---|
| Without noise | A | **99.90** | **99.90** | **99.90** |
| | P | **99.90** | **99.90** | **99.90** |
| | R | **99.90** | **99.90** | **99.90** |
| 10,000 | A | 99.65 | 99.72 | **99.85** |
| | P | 99.65 | 99.73 | **99.85** |
| | R | 99.65 | 99.72 | **99.85** |
| 20,000 | A | 99.50 | 99.78 | **99.85** |
| | P | 99.50 | 99.78 | **99.85** |
| | R | 99.50 | 99.78 | **99.85** |
| 30,000 | A | 99.17 | 99.55 | **99.72** |
| | P | 99.18 | 99.55 | **99.73** |
| | R | 99.17 | 99.55 | **99.72** |
| 40,000 | A | 98.80 | **99.78** | **99.78** |
| | P | 98.82 | **99.78** | **99.78** |
| | R | 98.80 | **99.78** | **99.78** |
| 50,000 | A | 98.40 | 99.55 | **99.70** |
| | P | 98.44 | 99.55 | **99.70** |
| | R | 98.40 | 99.55 | **99.70** |
| 60,000 | A | 97.92 | 99.45 | **99.60** |
| | P | 98.00 | 99.45 | **99.60** |
| | R | 97.92 | 99.45 | **99.60** |
Table 7. The evaluation of the dynamic selection approaches on Dataset 1 for several noise levels. The best result for each metric is highlighted in bold.

| Noise Level | Metric | DESP | KNORA-E | KNORA-U | MCB | M-DES | OLA |
|---|---|---|---|---|---|---|---|
| Without noise | A | 99.85 | 99.85 | **99.93** | 99.88 | 99.85 | 99.90 |
| | P | 99.85 | 99.85 | **99.93** | 99.88 | 99.85 | 99.90 |
| | R | 99.85 | 99.85 | **99.93** | 99.88 | 99.85 | 99.90 |
| 10,000 | A | 99.65 | 99.70 | 99.80 | **99.98** | 99.75 | 98.72 |
| | P | 99.65 | 99.70 | 99.80 | **99.98** | 99.75 | 98.72 |
| | R | 99.65 | 99.70 | 99.80 | **99.98** | 99.75 | 98.72 |
| 20,000 | A | 99.52 | 99.67 | 99.70 | 98.52 | **99.75** | 97.20 |
| | P | 99.53 | 99.68 | 99.70 | 98.53 | **99.75** | 97.26 |
| | R | 99.52 | 99.67 | 99.70 | 98.52 | **99.75** | 97.20 |
| 30,000 | A | 99.12 | 99.48 | 99.50 | 98.32 | **99.70** | 96.83 |
| | P | 99.14 | 99.48 | 99.50 | 98.32 | **99.70** | 96.90 |
| | R | 99.12 | 99.48 | 99.50 | 98.32 | **99.70** | 96.83 |
| 40,000 | A | 98.67 | 99.48 | 99.58 | 97.15 | **99.70** | 95.55 |
| | P | 98.70 | 99.48 | 99.58 | 97.15 | **99.70** | 95.61 |
| | R | 98.67 | 99.48 | 99.58 | 97.15 | **99.70** | 95.55 |
| 50,000 | A | 98.78 | 99.40 | 99.48 | 97.82 | **99.65** | 95.60 |
| | P | **99.79** | 99.40 | 99.48 | 97.82 | 99.65 | 95.64 |
| | R | **99.78** | 99.40 | 99.48 | 97.82 | 99.65 | 95.60 |
| 60,000 | A | 98.60 | 99.33 | 99.42 | 96.95 | **99.62** | 94.97 |
| | P | 98.63 | 99.33 | 99.43 | 96.96 | **99.63** | 95.11 |
| | R | 98.60 | 99.33 | 99.42 | 96.95 | **99.62** | 94.97 |
Table 8. The evaluation of the single models on Dataset 2 for several noise levels. The best result for each metric is highlighted in bold.

| Noise Level | Metric | CatBoost | DT | KNN | LightGBM | RF | XGBoost |
|---|---|---|---|---|---|---|---|
| Without noise | A | 99.40 | 53.87 | 72.00 | **99.83** | 99.77 | 99.79 |
| | P | 99.40 | 43.66 | 71.95 | **99.83** | 99.77 | 99.79 |
| | R | 99.40 | 53.87 | 72.00 | **99.83** | 99.77 | 99.79 |
| 0.1 | A | 97.79 | 53.79 | 71.44 | **99.21** | 98.45 | 99.10 |
| | P | 97.79 | 43.58 | 71.44 | **99.21** | 98.45 | 99.10 |
| | R | 97.79 | 53.79 | 71.44 | **99.21** | 98.45 | 99.10 |
| 0.3 | A | 95.45 | 53.10 | 69.91 | **97.14** | 95.89 | **97.14** |
| | P | 95.45 | 42.84 | 69.84 | **97.14** | 95.89 | **97.14** |
| | R | 95.45 | 53.10 | 69.91 | **97.14** | 95.89 | **97.14** |
| 0.5 | A | 92.16 | 52.56 | 68.32 | **94.16** | 92.42 | 92.43 |
| | P | 92.16 | 48.32 | 68.22 | 94.16 | 92.43 | **94.38** |
| | R | 92.16 | 52.56 | 68.32 | 94.16 | 92.42 | **94.38** |
| 0.7 | A | 89.52 | 51.67 | 66.35 | 90.53 | 89.19 | **90.83** |
| | P | 89.52 | 48.01 | 66.40 | 90.54 | 89.22 | **90.84** |
| | R | 89.52 | 51.67 | 66.35 | 90.53 | 89.19 | **90.83** |
| 1 | A | 84.91 | 48.75 | 63.96 | 85.76 | 84.02 | **86.35** |
| | P | 84.91 | 49.19 | 63.83 | 85.81 | 84.12 | **86.39** |
| | R | 84.91 | 48.75 | 63.96 | 85.76 | 84.02 | **86.35** |
| 1.3 | A | 80.53 | 48.74 | 61.69 | 81.63 | 79.99 | **81.70** |
| | P | 80.53 | 48.98 | 61.53 | 81.68 | 80.06 | **81.73** |
| | R | 80.53 | 48.74 | 61.69 | 81.63 | 79.99 | **81.70** |
| 1.5 | A | 77.65 | 47.46 | 59.92 | 78.71 | 76.85 | **78.87** |
| | P | 77.65 | 46.72 | 59.75 | 78.82 | 76.92 | **78.91** |
| | R | 77.65 | 47.46 | 59.72 | 78.71 | 76.95 | **78.87** |
Table 9. The evaluation of the static ensemble on Dataset 2 for several noise levels. The best result for each metric is highlighted in bold.

| Noise Level | Metric | Majority Vote | Stacked DT | Stacked LR |
|---|---|---|---|---|
| Without noise | A | 99.66 | 99.77 | **99.80** |
| | P | 99.66 | 99.77 | **99.80** |
| | R | 99.66 | 99.77 | **99.80** |
| 0.1 | A | 98.92 | 98.75 | **99.23** |
| | P | 98.92 | 98.74 | **99.23** |
| | R | 98.92 | 98.75 | **99.23** |
| 0.3 | A | 96.65 | 95.62 | **97.05** |
| | P | 96.66 | 95.62 | **97.05** |
| | R | 96.65 | 95.62 | **97.05** |
| 0.5 | A | 93.64 | 91.21 | **93.84** |
| | P | 93.67 | 91.21 | **93.84** |
| | R | 93.64 | 91.21 | **93.84** |
| 0.7 | A | 90.45 | 86.52 | **90.76** |
| | P | 90.50 | 86.52 | **90.77** |
| | R | 90.45 | 86.52 | **90.76** |
| 1 | A | **85.80** | 80.25 | 85.61 |
| | P | **85.95** | 80.10 | 85.45 |
| | R | **85.80** | 80.25 | 85.61 |
| 1.3 | A | **81.27** | 74.15 | **81.27** |
| | P | 81.41 | 74.35 | **81.47** |
| | R | **81.27** | 74.15 | **81.27** |
| 1.5 | A | **78.58** | 70.55 | 78.34 |
| | P | **78.74** | 71.05 | 78.56 |
| | R | **78.58** | 70.55 | 78.34 |
Table 10. The evaluation of the dynamic selection approaches on Dataset 2 for several noise levels. The best result for each metric is highlighted in bold.

| Noise Level | Metric | DESP | KNORA-E | KNORA-U | MCB | M-DES | OLA |
|---|---|---|---|---|---|---|---|
| Without noise | A | 98.36 | 99.64 | 99.64 | 97.19 | **99.74** | 96.79 |
| | P | 98.36 | 99.64 | 99.64 | 97.19 | **99.74** | 96.79 |
| | R | 98.36 | 99.64 | 99.64 | 97.19 | **99.74** | 96.79 |
| 0.1 | A | 98.86 | 98.79 | 98.74 | 96.24 | **99.02** | 95.43 |
| | P | 98.96 | 98.79 | 98.74 | 96.25 | **99.02** | 95.43 |
| | R | 98.86 | 98.79 | 98.74 | 96.24 | **99.02** | 95.43 |
| 0.3 | A | 94.27 | 95.69 | 96.28 | 93.60 | **96.76** | 93.02 |
| | P | 94.41 | 95.71 | 96.30 | 93.62 | **96.77** | 93.02 |
| | R | 94.27 | 95.69 | 96.28 | 93.60 | **96.76** | 93.02 |
| 0.5 | A | 90.84 | 91.34 | 92.83 | 90.14 | **92.97** | 89.06 |
| | P | 91.87 | 91.40 | 92.89 | 90.19 | **93.02** | 89.09 |
| | R | 90.84 | 91.34 | 92.83 | 90.14 | **92.97** | 89.06 |
| 0.7 | A | 87.86 | 87.72 | 89.53 | 86.10 | **89.54** | 85.57 |
| | P | 88.10 | 87.80 | 89.61 | 86.15 | **89.62** | 85.61 |
| | R | 87.86 | 87.72 | 89.53 | 86.10 | **89.54** | 85.57 |
| 1 | A | 84.77 | 82.41 | **84.95** | 80.90 | 83.99 | 80.08 |
| | P | **84.30** | 82.25 | 84.19 | 79.91 | 83.61 | 80.00 |
| | R | **84.77** | 82.41 | 84.19 | 80.90 | 83.99 | 80.08 |
| 1.3 | A | 80.47 | 77.70 | **80.62** | 76.01 | 79.16 | 75.51 |
| | P | 80.67 | 77.90 | **80.92** | 76.20 | 79.35 | 75.71 |
| | R | 80.47 | 77.70 | **80.62** | 76.01 | 79.16 | 75.51 |
| 1.5 | A | 77.67 | 74.86 | **77.97** | 74.15 | 75.90 | 72.92 |
| | P | 77.87 | 74.98 | **78.05** | 74.35 | 75.90 | 72.98 |
| | R | 77.67 | 74.86 | **77.97** | 74.15 | 75.90 | 72.92 |

Share and Cite

MDPI and ACS Style

Oliveira, J.; Passos, D.; Carvalho, D.; Melo, J.F.V.; Silva, E.G.; de Mattos Neto, P.S.G. Improving Electrical Fault Detection Using Multiple Classifier Systems. Energies 2024, 17, 5787. https://doi.org/10.3390/en17225787


