Article

A Filter-Based Feature-Engineering-Assisted SVC Fault Classification for SCIM at Minor-Load Conditions

by Chibuzo Nwabufo Okwuosa and Jang-wook Hur *
Department of Mechanical Engineering (Department of Aeronautics, Mechanical and Electronic Convergence Engineering), Kumoh National Institute of Technology, 61 Daehak-ro, Gumi-si 39177, Gyeongsang-buk-do, Korea
* Author to whom correspondence should be addressed.
Energies 2022, 15(20), 7597; https://doi.org/10.3390/en15207597
Submission received: 9 September 2022 / Revised: 30 September 2022 / Accepted: 11 October 2022 / Published: 14 October 2022

Abstract: In most manufacturing industries, squirrel cage induction motors (SCIMs) are essential due to their robust nature, high torque generation, and low maintenance costs, so their failure often affects productivity, profitability, reliability, etc. While various research studies have presented techniques for addressing most of these machines' prevailing issues, fault detection under low-slip, low-load, and no-load conditions for motor current signature analysis remains a great concern. Compared with the impact on the machine at full-load conditions, fault detection at low-load conditions helps mitigate the damage to the SCIM and reduces maintenance costs. Using stator current data acquired via the SCIM's direct online starter method, this study presents a feature-engineering-aided fault classification method for SCIM at minor-load conditions based on a filter approach, with the support vector classification (SVC) algorithm as the classifier. The method addresses the limitation of the Fourier transform at minor-load conditions by harnessing the uniqueness of the Hilbert transform (HT), combining different feature engineering technologies to excite, extract, and select 10 discriminant features, with a filter-based approach as the selection tool for fault classification. With the selected features, the SVC performed exceptionally well, achieving a significant diagnostic accuracy of 97.32%. Further testing with other well-known robust classifiers, such as decision tree (DT), random forest (RF), k-nearest neighbor (KNN), gradient boost classifier (GBC), and stochastic gradient descent (SGD), together with global assessment metrics, revealed that the SVC is reliable in terms of accuracy and computation speed.

1. Introduction

Every system degrades over time when subjected to stress or a heavy load, especially when the system runs continuously, and SCIMs are no exception. Reliability is one of a system's most crucial requirements for optimal performance and maximum productivity. Due to its high level of dependability, simplicity of use, high efficiency, and other factors, the SCIM has emerged as a key component that is frequently used to power and drive industrial equipment for maximum productivity [1]. Therefore, early mechanical and electrical fault detection in SCIMs is crucial for ensuring that industrial processes run safely and profitably. Because of this, prognostics and health management (PHM) operations are becoming increasingly common. Appropriate predictive maintenance practices are necessary to guarantee that systems are more productive and available as a result of their extended useful life [2]. PHM is crucial for modern machinery in general and SCIMs in particular because it enables the possibility of gauging the health of a given system component. Furthermore, factors such as the environment, mechanical stress, and electrical stresses can reduce the effectiveness, durability, and productivity of SCIMs. As a result, the system may fail gradually over time or shut down all at once [3].
A significantly easier, quicker, and less expensive method of prediction has been provided by data-driven approaches. The quality and nature of the data gathered from the system to train an AI-based model for predictive and/or FDI modeling are crucial for data-driven predictive maintenance to produce adequate FDI and prognostics [4]. These data are often acquired from these systems using sensors, either intrusively or non-intrusively placed on them or on their power sources, enabling periodic assessments for condition-based monitoring. Interestingly, motor current signature analysis (MCSA) has gained popularity over time due to its simplicity, ease of use, and lack of intrusion, even though vibration-based condition monitoring has been used for years due to the tendency of rotary machines to vibrate when in use. Again, research studies have shown that MCSA is compatible with the majority of widely used signal-processing tools for feature extraction that support prognostics and FDI [1,3,5]. The proper analysis of signals, especially current signals from SCIMs, requires techniques that can easily distinguish healthy signatures from faulty signatures to enable the extraction of features that could serve as discriminants for fault diagnosis. One of the most popular and efficient signal-processing methods, particularly for stationary signals, MCSA, and motor vibration signature analysis (MVSA), is the fast Fourier transform (FFT). FFT primarily deals with signal decomposition and/or transformation from its native state (time domain) to its corresponding frequency domain [2]. More advanced signal-processing techniques, which include the Hilbert transform (HT), the wavelet transform, etc., are used to mitigate some of the known drawbacks of FFT, such as poor performance at low slip or low load conditions and with non-stationary signals, spectral leakage, and so on [6,7,8]. Furthermore, because of HT's sensitivity to motor loading level, it is often a more suitable replacement for FFT, providing reliable data analysis even at low mechanical load tests [8].
One must comprehend the use of various methods and techniques to craft a methodology that fits the available data and achieves a robust framework for fault prediction and classification, whether using machine learning or statistical approaches. Therefore, "feature engineering" entails the capacity to draw on a variety of domain expertise to extract the most important variables from the available raw data [9]. The creation and introduction of computer intelligence such as artificial intelligence (AI), machine learning (ML), and deep learning (DL) has positively impacted research studies and resulted in a progressive CBM approach. Empirically, fault detection accuracy makes DL methods quite popular; on the other hand, DL methods have a number of shortcomings, such as high data requirements, limited applicability, high dependence on numerous parameters, overfitting/underfitting difficulties, and excessive computational costs, which have made ML algorithms a superior choice in most situations [10]. For instance, in many research studies, the support vector machine (SVM) demonstrated its prowess in both binary and multi-class classification, making it, along with many other machine learning classifiers such as kNN, RF, and SGD, preferable to DL algorithms, especially when few data are available and high computation speeds are required [2,5,10,11,12]. Overall, this research study, similarly to many others, investigated some well-known traditional ML-based algorithms in order to validate the robustness of the employed model as well as their individual efficiencies and uniqueness [2,3,5].

2. Motivation and Literature Review

One of the most crucial requirements of a system, as discussed in the previous section, is reliability. As more than 80% of industrial equipment is powered by SCIMs, it is imperative that any potential failure be mitigated at all costs to ensure a smooth production process, and losses from a system failure should be kept to a bare minimum [2,3]. As a result, there has been a constant demand for rapid reductions in the operating, maintenance, and repair costs of SCIMs; this calls for early fault detection and/or any kind of proactive measure that could aid in preventing any type of motor degradation. To provide a framework and standard by which these systems can be easily monitored, CBM has emerged as one of the most crucial methods [1,2,3,4,5]. Therefore, research studies need to be diverse and intellectually explorative in presenting and frequently updating techniques for adequate FDI in an effort to reduce downtime and revenue loss with these systems.
Understanding the nature, mode, and causes of failure in these systems is required to present a robust framework that could readily be helpful in industrial applications. According to both recent and older research reviews [1,3,5], fault sources in SCIM can be divided into internal, external, and environmental categories; the internal fault category is the most serious, and it is further subdivided into electrical and mechanical faults. Overall, the internal fault class of SCIMs is centered on stator, rotor, and bearing faults, which account for more than 70% of the total failures of induction motors generally [2,3]; bearing faults are the most commonly occurring, which can be linked to their tendency to be affected by overloading, misalignment, harsh weather, and inadequate lubrication [2,5]. Although bearing faults are the most frequent, their failure, which frequently results in high power consumption at low efficiency [1], is not as disastrous as the effects of a winding and/or stator fault, which can quickly cause a complete breakdown of the entire system and a significant amount of downtime. Phase-to-ground short circuits, coil-to-coil short circuits, turn-to-turn short circuits, phase-to-phase short circuits, and open circuits of the stator winding are some of the most predominant faults of an SCIM's stator, and their causes include (but are not limited to) system surges, insulation degradation due to moisture or ageing, and overload or overheating due to fan blocking [13].
Various CBM approaches have been presented in research studies based on the investigation of different dimensions such as torque, vibration, motor current signature, partial discharge, thermal, chemical, acoustic, and induced voltage [1,3,5,13]; the strength of the CBM employed depends on understanding the system's response in faulty and healthy conditions. To quickly identify fault signatures, the signals from these sources are often further evaluated using various established techniques. Due to MCSA's non-intrusive, straightforward, and affordable method of obtaining current signals from systems, it has proven to be an effective and reliable CBM paradigm for SCIM [1,2,5]. Again, deciphering current signals in their raw form as time-domain samples has always proven challenging, necessitating additional well-known signal-processing techniques to further enhance the fault signature for simple fault identification and adequate FDI [2]. FFT has been one of the most effective signal-processing methods known to be compatible with stationary signals; however, studies have shown FFT to be ineffective with signals produced by systems at no load, low load, and low slip conditions, opening a window for researchers to use other cutting-edge signal-processing methods [6,8,14]. In particular, for the MCSA of stationary signals for FDI, the Hilbert transform presents an effective signal-processing method that can be used in place of FFT with ease [8,14].
In the literature, numerous researchers have introduced fault diagnostic and classification schemes that utilize HT in no-load and low-slip scenarios, as well as in situations where faults are not yet significant but could eventually result in a complete system failure. For instance, in [14], the authors employed HT to exploit the motor current signal's envelope for broken rotor bar fault detection. Statistical analyses of the stator current's envelope for fault detection at an exact location were the focus of their proposed methodology. In [6], the authors employed the uniqueness of HT for fault detection at low slip in broken bars. In their methodology, the stator current envelope was extracted using HT, and faults were effectively detected when the features were fed into and trained on a neural network using the harmonics' amplitude and frequency as features. Abd-el-malek et al. [8] applied statistical analysis to signatures extracted by HT. In their study, the authors employed HT to extract fault signatures at full-load and no-load conditions to develop a novel technique for locating multiple broken bar faults in induction motors. In [15], the authors presented an offline framework for rotor fault detection in an IM using HT in light-load conditions. The authors' suggested methodology was based on an examination of the frequency domain of the HT-extracted stator current envelope. Their methodology successfully identified faults under a variety of operating conditions, including low-load conditions, and they concluded that their proposed method is an improved paradigm for broken rotor fault diagnosis in induction motors compared with the traditional MCSA diagnostic method. Salah et al. [16] presented a comparative study to show the efficacy of HT for closed-loop analyses of broken bar fault diagnoses in IMs. The authors compared the envelope of the stator current obtained from HT with the conventional MCSA via FFT over a variety of load conditions to detect broken rotor bar (BRB) faults in an SCIM. Their findings demonstrated that the traditional MCSA via FFT was unable to detect the BRB fault in the absence of any load, whereas the HT-driven MCSA successfully detected the fault in all load conditions, including the absence of any load whatsoever, even though only one of the three phases of the SCIM was used.
In addition, HT has proven to be versatile in a number of research studies, as it is easily combinable with other signal-processing techniques, with the aim of presenting effective diagnostic frameworks for both current and vibration signals of IMs. Based on this knowledge, Puche-Panadero et al. [17] combined HT and FFT to present a reliable methodology for detecting BRB faults in an induction motor operating at minor-slip or no-load conditions for MCSA. In their suggested method, the authors first applied HT to the motor current signal samples and then carried out a spectral analysis on the resultant time-dependent vector modulus in order to efficiently achieve an MCSA. The authors applied the HT technique to the resultant FFT (proposed method) and compared the output with the FFT of the motor current samples to further validate their study. The results demonstrate that their presented framework is significantly more valid and efficient than the traditional method. Inspired by the effectiveness of HT, the authors in [7] investigated the extraction capabilities of two advanced and distinctive signal-processing techniques, the wavelet transform (WT) and HT, to create a multi-class fault detection framework for IMs with the aid of radial vibration signals. They applied both advanced signal-processing tools separately and compared the results, and their findings indicated that HT had superior resolution to WT. However, according to their study, for HT to achieve the best results with multi-component signals, the signal must first ideally undergo a pre-processing stage to separate mono-components using either the empirical mode decomposition technique or other effective filters.
Traditional ML-based algorithms have recently gained considerable attention among researchers due to their benefits, which range from relatively low cost and interpretability to computational efficiency even on small amounts of data, which would be difficult to achieve with a deep learning approach. Some research studies that used traditional ML-based algorithms for fault classification can be found in [2,18,19]. Okwuosa et al. [2] in their study demonstrated the versatility of traditional ML-based algorithms in data comprehension. The authors compared several ML-based methods to determine the most efficient method with the lowest computational cost for MCSA fault detection and classification in a squirrel cage induction motor at a light-load condition, and the random forest classifier outperformed all others in terms of accuracy, with reasonable computational costs. The authors in [18] investigated the efficiencies of three traditional ML-based classifiers to validate their uniqueness. In their study, they fed the features extracted by a genetic algorithm to random forest, decision tree, and k-nearest neighbor classifiers and validated their proposed model against other recently adopted techniques, demonstrating that their proposed technique performs well for bearing fault diagnosis in induction motors. The study's goal was to demonstrate the efficacy of some traditional ML-based algorithms when combined with various feature extraction techniques for bearing fault diagnosis.
Generally, a key point to note from all the successful case studies presented is that HT is frequently used for fault detection at low-slip or no-load conditions, and for BRB fault detection in particular, because of the challenges often witnessed in detecting faults under these conditions, whether they occur individually or in combination. Due to the peculiarity of these conditions, this study investigates the effectiveness of HT in fault detection by adding additional fault conditions to the BRB fault to demonstrate the effectiveness of HT not only in detecting BRB faults but also in distinguishing them from other fault conditions. Interestingly, SVM outperforms most traditional machine-learning-based algorithms in terms of performance for fault detection and is one of the most effective tools for classification; this classification characteristic of the SVM is why it is often referred to as a support vector classifier (SVC). It is also renowned for its exceptional performance with small datasets [19,20]. Research studies in [5,20] provided an insightful analysis of SVM, highlighting its strengths, weaknesses, and the most effective ways to apply it for the best results based on prior research. According to the study in [5], risk depreciation, which was established in the review study, makes the SVM model more effective and accurate than the Artificial Neural Network (ANN) model, a well-known and highly effective tool for estimating and forecasting the remaining useful life of electrical rotary machines. On the other hand, in [20], the authors suggested in their survey that for SVM to perform at its best, it must be combined with other well-known, potent algorithms for increased efficiency and the anticipated results. The principal goal of this paper is to develop a highly efficient yet cost-aware fault detection and classification framework using some unique signal-processing techniques and approaches. This paper is an extended study of the work presented by Okwuosa et al. [2], the limitations of which inspired this study. As a result, in our quest to develop a multi-fault classification framework for SCIM at minor loads, this study contributes the following:
  • A magnitude envelope extracted from the three-phase currents of the SCIM via the Hilbert Transform approach.
  • A simple but efficient feature engineering framework using a statistical time-domain feature extraction technique on all three phases of the SCIM for both faulty and healthy operating conditions and a filter-based correlation feature selection approach. This process is aimed at selecting the most relevant features from the extracted features and removing those that inhibit efficient fault classification when the features are trained with the presented model.
  • An extensive comparison between the proposed ML algorithm and other known ML-based diagnostic models with the aim of presenting a cost-aware and efficient framework for SCIM diagnosis. Further validation and a robust assessment are conducted using global assessment metrics to validate the efficiency of the suggested model.
  • An efficient framework that outperforms the proposed model in [2] in terms of simplicity, accuracy, and computation speeds, which this study is motivated by.
The remainder of this paper is organized as follows: Section 3 discusses the theoretical background of the major components that make up the study, and Section 4 provides insights into the study's suggested filter-based feature engineering and signal-processing diagnostic model. Section 5 provides a breakdown of the experimental procedures based on a physical testbed setup in the laboratory, while Section 6 summarizes the entire study.

3. Theoretical Background of the Study

This section presents the theoretical background of the Hilbert transform for signal processing at no-load and low-slip/minor-load conditions, the support vector machine and its classifier model (SVC) as used in the study, and the other ML-based models that were used to validate the efficiency of the selected model.

3.1. Review of Hilbert Transform

The ability to portray a signal in such a way that its discriminant information is prominent enough to permit adequate feature extraction is one of the goals of signal processing in defect detection and isolation. Most of the time, the widely used signal-processing tool, FFT, fails to produce the appropriate output, especially when utilized in no-load and low-slip situations. For this reason, researchers choose to either use FFT in conjunction with other signal-processing tools or identify another, more effective processing technique that would produce the desired output.
In reality, signals are rarely stationary; however, signals with little or no significant change over time are frequently considered stationary. In particular, FFT is one of the best signal-processing techniques for stationary signals; despite its poor performance at no-load and low-slip conditions, it performs efficiently at other load conditions [21,22]. Since HT is known to perform excellently under no-load and low-slip conditions, it has been used in a variety of studies in the literature on stationary, non-stationary, and non-linear signals for different purposes such as fault diagnosis, signal transmission, etc. [7,8,14,17]. The Hilbert transform, first introduced by David Hilbert, is one of many known integral transforms (along with the Fourier and Laplace transforms), which frequently specialize in the solution of integral-related equations, particularly in the field of mathematical physics [23]. Not only is HT preferable for the reasons stated above, but it also provides a realistically useful presentation of a signal from its original form while not changing the domain of the signal [8,24]. Consequently, the original signal and the HT output are both functions of time and of the same domain. However, the HT and the original signal differ by a lag of 90° in their frequency components while maintaining the same amplitudes; this places the HT and the original signal in orthogonal positions. The combination of the original signal and its HT is often referred to as an "analytic signal", from which the amplitude and the phase angle can easily be extracted, and the instantaneous frequency can be determined as a derivative of the phase angle [7,14,24]. As a result of its nature as a linear operator, the HT of a time-domain signal is another time-domain signal, as previously explained. Furthermore, the HT of any constant produces a zero output, and a double application of HT (that is, the HT of an HT) produces the negative of the original function due to its phase-shifting properties [23]. The study in [24] provides a more in-depth understanding of HT application, properties, and uniqueness for readers interested in signal processing.
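As a quick illustration of these properties, the short sketch below applies the Hilbert transform to a synthetic 60 Hz tone; it is only an assumed example (the signal, sampling rate, and library calls are not taken from this study), using scipy.signal.hilbert, which returns the analytic signal whose imaginary part is the HT of the input.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic stationary current-like signal: a 60 Hz tone sampled at 10 kHz
# (illustrative values only, not the study's acquisition settings).
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 60 * t)

analytic = hilbert(x)          # analytic signal x(t) + j*HT(x(t))
ht = analytic.imag             # the Hilbert transform itself

# HT stays in the time domain, keeps the amplitude, and lags by 90 degrees:
# HT(sin(wt)) = -cos(wt).
print(np.allclose(ht, -np.cos(2 * np.pi * 60 * t), atol=1e-3))   # True

# Instantaneous amplitude (envelope) and frequency from the analytic signal.
envelope = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)                # ~60 Hz
```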

3.2. Support Vector Classifier (SVC)

The SVC is the classification variant of the support vector machine (SVM); that is, it exploits SVM's classification characteristics. In our study, we used SVM for fault classification, which is only one of the many applications of SVM. The theoretical background of SVM is, thus, discussed further below.

Theoretical Background of SVM

Vladimir Vapnik first proposed the SVM in 1994, and it is now widely recognized in academia as an efficient method for handling classification and regression issues [19,20]. In contrast to the mean square error, which is frequently the loss minimization step used by empirical-risk-minimization-based methods, SVM makes use of the Structural Risk Minimization (SRM) principle from statistical learning theory, a principle that focuses on minimizing a bound on the generalization error of the model [20,25]. SVM primarily serves as a tool that employs a linear separating hyperplane for categorizing training sets into two classes. From a boundary perspective, SVM employs two steps for this type of classification using its linear hyperplane. The first step involves determining the optimal position of the hyperplane that divides the two closest samples, while the second step involves adjusting the chosen decision hyperplane to maximize the distance between its two supporting planes. This classification process of the SVM is known as "linear classification" [19,20,25], as shown in Figure 1. The margin in SVM acts as a trade-off between error and margin points; interestingly, more than one hyperplane can be drawn to divide the classes, and the plane that is farthest from the nearest data points on both sides of the classes is considered the best-fitting and most accurate hyperplane.
The other popular type of classification carried out using SVM is "non-linear" classification. Nonlinearities or noise, which are often unavoidable, frequently lead to nonlinear data. For the circumstance in which the training data are not linearly separable, Corinna Cortes introduced the penalty function and non-negative slack variables, together with kernel functions that must meet a certain criterion—"Mercer's criterion"—in order to generalize the hyperplane so that SVM can still function [20,26]. Some of the efficient kernels often used both in studies and in real-world applications include the polynomial kernel, linear kernel, radial basis function (RBF) kernel, Gaussian kernel, etc.; however, as stated in [20,25], the Gaussian RBF kernel is one of the most efficient and most used of all.
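The contrast between the two classification modes can be sketched with scikit-learn on a synthetic, non-linearly separable dataset; the data, the penalty value C, and the gamma setting below are illustrative assumptions, not the configuration used in this study.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two concentric rings: impossible to split with a single straight hyperplane.
X, y = make_circles(n_samples=500, factor=0.4, noise=0.08, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Linear classification: one separating hyperplane, C trades margin vs. error.
linear_svc = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)

# Non-linear classification: the Gaussian RBF kernel implicitly maps the data
# to a higher-dimensional space where a linear hyperplane can separate them.
rbf_svc = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

print("linear kernel accuracy:", linear_svc.score(X_te, y_te))   # poor on rings
print("RBF kernel accuracy:   ", rbf_svc.score(X_te, y_te))      # close to 1.0
```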
Fault diagnosis by the SVC is often referred to as a classification approach, and the SVC can be utilized optimally for both binary classification and fault diagnosis (multi-classification). The primary goal of the SVC for fault diagnosis is to distinguish between the normal condition and the fault conditions, and/or to map the two into different data spaces. In essence, the SVC's defect identification process consists of three phases, as shown in Figure 2 [20]. The first is the data collection phase, shown in Figure 2a, followed by the training and learning phases, shown in Figure 2b, and then finally the integration phase, which holds the multi-classification characteristics of the SVC, as shown in Figure 2c; these classifiers are integrated into a multi-classifier. The classifier's performance is further validated by subjecting it to the test sample and then recognizing the corresponding area in the data-allocated space, as shown in Figure 2d.

3.3. Review of the Other ML-Based Classification Algorithms

Traditional ML algorithms are mostly sought for their ability to provide cost-effective and reliable models with performances that are rarely affected by data availability. Again, most ML algorithms are known for their ability to predict or classify a fault in a system based on prior knowledge of the system; however, the nature of the data, the quality of the data, and the adaptability of these models to the features extracted from these data are all critical. To obtain the most accurate model for a specific set of data, one must be familiar with its properties and understand how to use it optimally. Consequently, this study was inspired to explore the robustness and efficiency of various ML-based algorithms in current-based fault detection, which were then compared with the proposed ML algorithm model after the filter-based feature engineering (feature extraction and selection) techniques were implemented on the current signals. As a result, a number of known traditional ML-based classifiers are discussed below to highlight their theoretical structure for fault detection.
Among the most popular machine learning algorithms, decision trees (DTs) are easily applied to classification and regression cases. This type of ML-based algorithm mimics tree-like and/or pyramid-like decision-making structures, with each non-leaf node representing an input feature and the arcs branching from the inner nodes being mapped to potential values of the output features. Its performance may also be based on the idea that different combinations of hyperplanes with the coordinate axes define different separation limits [2,20]. Some of the well-known benefits of DT include simplified decision-making processes and ease of use. It is, however, prone to overfitting and underfitting issues, which can be alleviated in some cases by pruning [2,27]. DT has been used in a variety of studies for regression and classification problems; for example, in [28], the authors combined DT with an optimized stationary wavelet packet transform for incipient bearing fault diagnosis. RF is based on the grouping of trees for regression and classification and is thought to mitigate underfitting and overfitting, both of which are common problems in DT [2,9,19]. RF is an ensemble learning method in which each individual tree reveals a class prediction, with the class receiving the most votes becoming the model's prediction. Although RF offers efficient solutions to regression, classification, overfitting, and underfitting problems, it still faces complexity and high computing costs in some cases; nevertheless, it remains an excellent classifier. Okwuosa et al. [2], in their study, presented RF as the best fault classifier among the various compared classifiers for multi-class fault classification in an SCIM. Another ML-based classifier used in our study is the k-nearest neighbor (k-NN), a classic and simple ML method used for classification problems. k-NN, as the name implies, is a non-parametric and instance-based ML algorithm in which similar items are grouped together to learn their pattern [2,9,29]. Its performance is based on a mathematical sequence, which means that it readily assumes that any category of data with related features displays correlated characteristics and values. This inevitably affects the performance of k-NN models in cases where the features differ and/or are not evenly distributed [2,29]. Yang et al. [30], in their study, used the k-NN rule to solve fault detection issues in gas sensor arrays.
Booster algorithms are among the algorithms that can help weak learners perform as well as possible. Gradient boost is one of the most exceptional classifiers among boosters, with a high level of dependability and effectiveness. Because DT is used as one of its weak classifiers and is structured in a greedy manner, gradient boost is also known as gradient boosting trees or gradient-boosted trees [2,9,27]. Gradient boost is sometimes referred to as a "generic algorithm" because of its ability to minimize any differentiable loss function owing to its agnostic behavior when dealing with loss functions [27]. Gradient boost's most distinctive feature is that it operates by calculating the gradients of the loss function with respect to the prediction made by the current model at each iteration, hence the name gradient [27]. One of the difficulties with gradient boost is its high computation cost, but it is still a very good classifier [2,27]. Numerous studies have been conducted on the gradient boosting algorithm, which have successfully produced the extreme gradient boosting (XGB), light gradient boosting (LightGBM), and CatBoost algorithms. These algorithms, to various degrees, provide modifications toward the most ideal realizable result [31]. Stochastic gradient descent (SGD) is a type of algorithm that focuses on convex loss functions, which include logistic regression and the linear support vector machine [9,32]. SGD iteratively selects random samples to update the model's parameters; this accelerates the learning rate, which speeds up convergence at minimal training computational times [32,33]. Even though SGD has demonstrated success on both large and sparse datasets, one of its major drawbacks is convergence at a local optimum [33].

4. Proposed Filter-Based, Feature Engineering, and SVC Diagnostic Model

The proposed architecture of the filter-based, feature engineering, and SVC fault diagnostic framework is shown in Figure 3. In order to enhance and streamline the process of extracting discriminating features from the raw current signal of an SCIM, a filter-based feature engineering technique has been introduced using a statistical approach. In general, the proposed framework consists of the following processes: a data collection stage that involves raw signal collection from both healthy and faulty SCIMs; a data preparation stage that includes a unique signal-processing technique, HT; a feature engineering approach for significant feature extraction; a filter-based correlation approach for discriminant feature selection; and a data-training stage where the selected features are trained with the proposed ML algorithm. A further assessment by various performance metrics is introduced to validate the proposed ML algorithm's performance and effectiveness.
Due to its nature and characteristics as previously stated, the Hilbert transform was used as the signal-processing technique instead of FFT. The degree to which a signal can be used for FDI is typically determined by how easily it can be interpreted, and HT presents signals in such a visibly realistic representation. To present an output from which discriminant features can easily be extracted and fed to an ML-based classifier for effective fault diagnosis, the study concentrated on the analytic-signal characteristic of HT. As demonstrated in the proposed model, a correlation-filter-based feature selection approach was also used. Its goal is to eliminate redundant and/or unhelpful predictors from the extracted features. This positively influences the performance of ML-based classifiers and decreases the computational cost, since only the pertinent features remain. The core modules of the suggested architecture are outlined in the subsections that follow.

4.1. Stator Current Analytical Signal of Hilbert Transforms and Its Magnitude Envelope for Signal Processing

A signal pre-processing tool is required for induction motors because the current signature is frequently presented in such a way that it is difficult to see any differences or similarities between the signal from a healthy and a defective motor. As a result, it is necessary to develop a technique that would best present an output from which various rich parameters could be extracted to generate rich discriminative inferences for diagnosis.
As previously mentioned, HT thrives in low-slip and no-load conditions; with this in mind, its peculiar characteristics were exploited to present a straightforward but effective technique that can quickly and simply produce a visible and rich discriminative inference of the stator current signal. By applying a phase shift of ±π/2 to the original signal, HT converts the real signal x(t) into an analytic signal, as summarized in Table 1. Mathematically, HT is described as a convolution with the function 1/(πt), as can be seen in Equation (1) below:
$\mathrm{HT}(x(t)) = \frac{1}{\pi t} * x(t) = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x(\tau)}{t - \tau}\, d\tau.$   (1)
where the divergence at t = τ is accommodated by using the integral’s Cauchy principal value.
To exaggerate the output of the high-energy component while minimizing the lower-energy component of the analytical signal, we employed the magnitude of the analytical signal, which we called the “magnitude envelope”. The mathematical representation of the magnitude envelope can be seen in Equation (2) below.
$\text{Magnitude envelope} = |\mathrm{HT}|^{2}$   (2)
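A minimal sketch of this step is given below, assuming one phase of the stator current is available as a 1-D NumPy array (the array name and loading step are hypothetical); the analytic signal is obtained with scipy.signal.hilbert and its squared magnitude is taken as the magnitude envelope of Equation (2).

```python
import numpy as np
from scipy.signal import hilbert

def magnitude_envelope(phase_current: np.ndarray) -> np.ndarray:
    """Squared magnitude of the analytic signal, following Equation (2).

    Squaring exaggerates the high-energy components of the envelope while
    suppressing the lower-energy ones, as described above.
    """
    analytic = hilbert(phase_current)   # x(t) + j*HT(x(t))
    return np.abs(analytic) ** 2

# Hypothetical usage with one phase of the logged stator current:
# current_phase_a = np.loadtxt("phase_a.csv", delimiter=",")   # assumed file
# envelope_a = magnitude_envelope(current_phase_a)
```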

4.2. Statistical-Based Feature Extraction and Selection Methodology

Since HT is known to keep the signal's domain intact, a time-domain statistical technique was used to extract features from the pre-processed signal, using a reshaping format to change the dataset's shape and extract rich, trainable features. This method was used to extract a total of 17 features, as summarized in Table 2, which were then subjected to normalization, data cleaning, smoothing, concatenation, and labeling. The dataset was then prepared for use by the algorithm after a filter-based correlation technique was applied to remove features with high similarity, as such features could easily degrade the performance of an ML classifier.
The filter-based correlation approach that was effectively used for discriminant feature selection by elimination was derived using Equation (3) below. This approach owes its existence to Karl Pearson [35]. Its principle is based on the linear interaction between two variables, ranging from −1 (negative correlation) to +1 (positive correlation).
$\rho(X, Y) = \frac{\mathrm{cov}(X, Y)}{\sigma_X \, \sigma_Y}$   (3)
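The selection-by-elimination step can be sketched as below with pandas; the 0.7 cut-off anticipates the threshold reported in Section 5.2, and the function and variable names are illustrative assumptions rather than the study's implementation.

```python
import pandas as pd

def drop_correlated_features(features: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    """Filter-based selection: keep a feature only if its absolute Pearson
    correlation (Equation (3)) with every already-kept feature stays at or
    below `threshold`."""
    corr = features.corr(method="pearson").abs()
    kept = []
    for col in corr.columns:
        if all(corr.loc[col, other] <= threshold for other in kept):
            kept.append(col)
    return features[kept]

# e.g. reducing the 17 extracted statistical features to the retained subset:
# selected = drop_correlated_features(feature_table, threshold=0.7)
```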

4.3. Discriminative Performance Evaluation Metrics

It is essential to thoroughly investigate the classifier model's diagnostic capacity while taking into consideration other aspects such as model parameterization, computational time, and complexity, because every ML model is distinct with regard to its unique architecture; hence, there is a need to adopt excellent diagnostic and discriminatory performance evaluation metrics. Some of these known global evaluation metrics are model accuracy, model precision, model sensitivity/recall, model F1-score, and model false alarm rate (FAR). These metrics are defined in Equations (4)–(8), respectively.
$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$   (4)
$\text{Precision} = \frac{TP}{TP + FP}$   (5)
$\text{Sensitivity} = \frac{TP}{TP + FN}$   (6)
$\text{F1-Score} = \frac{2 \times \text{Sensitivity} \times \text{Precision}}{\text{Precision} + \text{Sensitivity}}$   (7)
$\mathrm{FAR} = \frac{FP}{FP + TN}$   (8)
TP, FP, TN, and FN stand for the number of samples correctly classified as belonging to a class (true positives), the number of samples incorrectly classified as belonging to a class (false positives), the number of samples correctly classified as not belonging to a class (true negatives), and the number of samples incorrectly classified as not belonging to a class (false negatives), respectively. Furthermore, for an improved understanding and appreciation of the evaluation metrics, their respective roles in performance evaluation must be highlighted. Accuracy is defined as the proportion of precise predictions for test data by an ML model, precision denotes the percentage of relevant results present in the classification output, sensitivity indicates the ML model's ability to detect positive instances, the F1-score is simply the weighted average (harmonic mean) of precision and sensitivity, and FAR denotes the rate of false positives.
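For concreteness, the sketch below evaluates Equations (4)–(8) from one-vs-rest counts for a single class; the counts used in the usage line are invented for illustration and are not taken from the study's confusion matrix.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Global assessment metrics of Equations (4)-(8) for one class,
    computed from one-vs-rest counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)                  # recall
    f1 = 2 * sensitivity * precision / (precision + sensitivity)
    far = fp / (fp + tn)                          # false alarm rate
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "f1_score": f1, "far": far}

# Illustrative counts only:
print(classification_metrics(tp=95, fp=3, tn=290, fn=5))
```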
Even though these global assessment metrics offer a broad point of view for assessing ML classifier performance, it may be imperative to assess each classifier's performance within each operating condition to achieve a more thorough evaluation of the output. A high-accuracy output could be a result of the model's strength in classifying some of the classes involved correctly while having weaknesses in the remaining few classes. On the other hand, a different model might produce results with a similar grade of accuracy, but its performance for each specific class might be satisfactorily consistent across all classes; as a result, it would be preferred over the other models in comparison. This situation often occurs, necessitating the use of a more detailed assessment tool known as a confusion matrix, which offers a means of assessing each classifier's fault diagnostic performance.

5. Experimental Study

This section presents the diagnostic framework of the proposed filter-based feature engineering approach carried out on various SCIMs running under healthy and faulty conditions, as summarized in Table 3. The flowchart of the filter-based diagnosis framework is presented in Figure 4 for an improved understanding of the fault classification process.
The experiment was carefully set up and carried out at the Defense Reliability Laboratory, Kumoh National Institute of Technology, Gumi, Korea, on a test rig consisting of several similar four-pole, 1/4-horsepower, 2-phase squirrel cage induction motors powered by a direct online starter and connected through their terminals using a delta connection in a secure and congenial environment. The driven motor was connected to a DC power source to generate a magnetic effect within the stator coils and the rotor bar, which induced a resistive effect in the driver and resulted in a minor-load effect for the induction motors. This process is known as "DC injection braking of the induction motor". The motors were run continuously at about 1780 revolutions per minute (30 Hz), and datasets were gathered from the supply terminals of the driver motor with the aid of an NI-9246 current input module connected via an NI cDAQ-9178 chassis to a desktop computer, as can be seen in Figure 5 below. Using a LabVIEW interface, digital current signals were collected and stored in .csv format.
The goal of condition-based monitoring is to minimize system failures, which is one of its main motivations. As a result, we prioritized some of SCIM’s common failure modes in our study by introducing these conditions on the SCIM that was already in our laboratory to create an enabling CBM for adequate FDI. In reality, failure modes result from a variety of factors, including environmental, thermal, electrical, and other factors. However, due to the nature and robustness of the SCIM, which enables efficient long-term operation, if any fault is left undetected while the system runs for such a long time, particularly under low load, it could easily escalate to a major fault or a complete breakdown of the entire system. Consequently, to present a model that could easily predict and isolate these faults, we have chosen to concentrate on some of the major and frequently occurring faults in this study. Figure 6 depicts these faults as they were created in the lab to mimic the motor broken rotor bar (BSC-2), turn-to-turn short circuit (ITS-3), and rotor misalignment (AMT-1) for CBM.
As used in our study, misalignment is one of the most frequently occurring fault conditions in SCIMs, and it often presents itself as a parallel misalignment, an angular misalignment, or a combination of both [2]. To create this fault condition, we first aligned both the driver and the driven SCIM with a precision alignment tool and then created a misalignment of 0.3 mm for both angular and parallel rotor misalignment to achieve AMT-1. The broken rotor bar fault often results from an accumulation of mechanical stress arising from various operating conditions [36]. To achieve this, the BSC-2 was created by boring two holes, each with a diameter and depth of 5 mm, in the rotor bar of an SCIM. In addition to these faults is the turn-to-turn short circuit fault; this is often one of the most severe faults and normally arises from thermal stress and from the aging of the insulation separating each turn from the others [5,37,38]. To imitate this fault, the ITS-3 was created by bridging/merging seven (7) turns in one phase of the motor. To serve as a reference for these fault-induced conditions, NOC-4 was employed as the healthy motor.

5.1. Signal-Processing Approach for Features Extraction

The visual representation of the current signals obtained from the three-phase SCIMs under various operating conditions used in our study is shown in Figure 7, where the phases are denoted by the colors red, blue, and green, respectively.
The waveforms across various operation conditions show great similarities in amplitude across the phases; however, there tends to be a difference in amplitude at one of the phases of the operating condition, as seen in Figure 7c. The phase 2 signal of that particular condition is shown to display maximum and minimum amplitudes of 2 and −2 Amperes, respectively, whereas the rest of the signals are shown to display 1 and −1 Amperes in their maximum and minimum ranges, respectively. Figure 8 presents a visualization of various operating conditions while being processed with FFT to demonstrate the inefficiency of FFT with signals from a low-load operating condition. As can be seen, the FFT was unable to detect a significant change in these conditions, which could result in subpar performance when features are extracted for fault detection from these conditions.
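The Figure 8 style FFT view can be reproduced with a short sketch like the one below; the sampling rate and array name are placeholders, since the acquisition settings are not restated here.

```python
import numpy as np

def amplitude_spectrum(phase_current: np.ndarray, fs: float):
    """One-sided FFT amplitude spectrum of a phase current, of the kind
    compared in Figure 8 (illustrative sketch only)."""
    n = len(phase_current)
    magnitude = np.abs(np.fft.rfft(phase_current)) / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs, magnitude

# Hypothetical usage:
# freqs, mag = amplitude_spectrum(current_phase_a, fs=sampling_rate)
```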
Consequently, the proposed diagnostic method discussed in Section 4 was used to process the current signals for feature extraction. The visualization of the magnitude envelope extracted from the analytic signal of the current signals under various motor operating conditions can be seen in Figure 9. As demonstrated, the various operating conditions produced high and low spikes with various amplitudes and at various time intervals. This suggests that the magnitude envelope contains discriminant information; it clearly illustrates the distinction between the raw signal and the processed signals, since the magnitude envelope extract may exhibit superior discriminative properties when features are extracted from it.
Since HT, as previously discussed, has the property of maintaining the domain of a signal after processing, statistical time-domain features were generally extracted from the magnitude envelope extract of the HT. In light of this, we decided to use 17 statistical features, as shown in Table 2, based on their superiority, widespread use, and significance for trustworthy empirical validation.
The following steps were used in the feature extraction processes for both the training and test datasets: the datasets for the various operating conditions were divided and arranged into uniform portions, with feature extraction carried out on each portion using a statistical approach. This was followed by data cleaning and normalization and then the filter-based correlation discriminative feature selection described in Section 4.2; this was performed using a window size of 200 samples, which is often chosen at the analyst's discretion.
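A minimal sketch of this windowed extraction is shown below; it computes only a representative subset of the 17 statistical features listed in Table 2 over non-overlapping 200-sample windows, and the envelope array it consumes is assumed from the earlier step.

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis, skew

def windowed_features(envelope: np.ndarray, window: int = 200) -> pd.DataFrame:
    """Statistical time-domain features per non-overlapping window of the
    magnitude envelope (representative subset of Table 2 only)."""
    rows = []
    for start in range(0, len(envelope) - window + 1, window):
        seg = envelope[start:start + window]
        rms = np.sqrt(np.mean(seg ** 2))
        rows.append({
            "mean": seg.mean(),
            "std": seg.std(),
            "rms": rms,
            "peak_to_peak": seg.max() - seg.min(),
            "kurtosis": kurtosis(seg),
            "skewness": skew(seg),
            "crest_factor": seg.max() / rms,
        })
    return pd.DataFrame(rows)

# feats_a = windowed_features(envelope_a, window=200)   # per phase and condition
```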

5.2. Feature Selection and Evaluation

Although the extracted statistical features for diagnosis are distinct and show a promising outcome, highly correlated features can affect performance, particularly in terms of computational cost and model efficiency; for this reason, the filter correlation-based approach used in our study was necessary. The extracted feature datasets were further fed into the filter-based correlation module for discriminant feature selection with the aid of Equation (3). The module then eliminated the very highly correlated features with ρ(X, Y) > 0.7, leaving the remaining features as highly discriminant features for FDI. As a result, 10 highly discriminant features were left for the support vector classifier to use, leaving 7 of the 17 extracted features unutilized. Figure 10 displays the correlation plots for feature selection both before and after this process.
As can be seen from the metrics in Figure 10a, some of the features have a strong correlation with one another. For instance, the peak-to-peak, wave factor, clearance factor, crest factor, maximum, and kurtosis features exhibit a high correlation with the impulse factor, with correlation values of 0.84, 0.92, 0.97, 1.00, 0.79, and 0.89, respectively; these features were eliminated because they exceeded the 0.7 threshold set in the filter-based correlation module. When fed to any suitable classifier, the remaining uncorrelated features, which possess high discriminance, lower the computational cost and boost the probability of high efficiency. Consequently, the outcome of the implemented correlation matrix can be seen in Figure 10b, as only 10 discriminative features are left following the proposed filter-based correlation approach for feature selection. Some dimension reduction tools, such as locally linear embedding (LLE), principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis, etc., can be used to further visualize how discriminative the feature sets are for FDI. Due to our familiarity with LLE and its simplicity of use, we used an LLE algorithm [29] (NN = 5) in our study to project the features onto a three-dimensional vector space for a proper visual assessment of their discriminative potential.
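A sketch of this projection with scikit-learn is shown below; `selected` and `labels` stand in for the 10 selected features and integer-encoded operating-condition labels assumed from the previous steps, and only the NN = 5, three-component setting is taken from the text.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import LocallyLinearEmbedding

# Project the selected features into 3-D for visual assessment (NN = 5).
lle = LocallyLinearEmbedding(n_neighbors=5, n_components=3, random_state=0)
embedded = lle.fit_transform(selected)   # `selected`: assumed feature table

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(embedded[:, 0], embedded[:, 1], embedded[:, 2], c=labels, s=10)
ax.set_title("LLE projection of the selected features")
plt.show()
```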
The level of discriminance between the features for each of the given operating conditions is reasonable, as illustrated in Figure 11. However, due to the overlapping effects of some of the features, as shown in the top right of Figure 11, there appears to be a higher probability of false positives between AMT-1 and NOC-4. This provides a visual intuition of what to expect from the FDI and might inspire further studies into the extraction of more discriminative features.

5.3. Proposed Classification Model Assessment Evaluation

In our study, we proposed the SVC as the classification algorithm for our model due to its unique nature, as discussed in Section 3. Table 4 displays the classifier's algorithm parameters and architecture as we trained it with the training feature sets and tested it with the test feature sets. Nevertheless, an evaluation is required to determine this classifier's overall performance and effectiveness. Employing the standard classification assessment metrics presented in Equations (4)–(8), the results of the evaluation can be seen in Table 5.
For further assessment of the proposed diagnostic framework, the confusion matrix is presented in Figure 12.
At operating conditions AMT-1 and BSC-2, the classifier returns the fewest FNs, as shown above with TP values of 98.2% and 98.9%, respectively, whereas the same conditions present the most FPs, as can be observed with 3.9% (2.9% + 1.0%) and 4.4% (4.0% + 0.4%), respectively. The FNs are observed as 7.2% (4.0% + 2.9% + 0.3%), 1.8%, 2% (0.4% + 0.7%), and 2.0% (1.1% + 1.0%) across operating conditions NOC-4, AMT-1, BSC-2, and ITS-3, respectively. Although the proposed model accurately diagnosed all operating conditions (both healthy and faulty SCIMs), as seen from the matrix, the false alarm rate is relatively low, with only a few FNs and FPs for each operating condition, indicating that the proposed framework is validated. Furthermore, the proposed model not only demonstrated good performance and efficiency, but it is also cost-efficient, as shown in Table 5, which further demonstrates its suitability for practical use, particularly in cases where cost-efficiency is paramount.

5.4. Comparative Assessments with Other Models

Other well-known, high-performing ML-based classifier models, such as decision tree (DT), random forest (RF), k-nearest neighbor (kNN), stochastic gradient descent (SGD), and gradient boost (GBC), as discussed in Section 3, were compared with the SVC on the discriminant features for training and testing to further evaluate the performance of the suggested diagnostic model. Each algorithm has a specific set of parameters and architectural requirements for optimum performance, as shown in Table 6, that enables it to produce the desired output. The test accuracy, precision, recall, F1-score, and training score for each model are summarized in Figure 13.
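Such a comparison can be sketched as a simple benchmarking loop; the hyperparameters below are scikit-learn defaults used purely for illustration (not the tuned values of Table 6), and the train/test splits of the selected features are assumed from the earlier steps.

```python
import time
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

models = {
    "SVC": SVC(kernel="rbf"),
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "kNN": KNeighborsClassifier(),
    "GBC": GradientBoostingClassifier(),
    "SGD": SGDClassifier(),
}

# X_train/X_test/y_train/y_test: assumed splits of the 10 selected features.
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={accuracy:.4f}, train+test time={elapsed:.3f}s")
```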
As observed, the SVC model displayed superior performance compared with the other models across all evaluation metrics. However, two models, kNN and GBC, exhibit excellent accuracy as well, with respective values of 92.91% and 92.41%. The remaining classifiers, on the other hand, have results between 84% and 86%. It is evident that the training scores and test results are closely correlated, with the exception of DT, whose training score and test result differ by 15.06% (100% − 84.94%). The difference between the training score and the test performance also indicates how well the models fit the data. For example, the SVC not only demonstrated the highest accuracy but also showed the smallest difference between the training score and the test result.
From a different standpoint, the computational score of the models can also offer a means of assessment for determining their suitability in real-world applications where the need for computational cost-effectiveness is critical. The average computational costs of the classifiers for training and testing processes are summarized in Table 7 (in seconds).
As shown, the SVC is computationally efficient, with a computational time of approximately 0.22 s, the second-lowest among the three best-performing classifiers (SVC, kNN, and GBC), with kNN having the lowest computational time of approximately 0.01 s. This demonstrates that kNN could be preferable in some cases due to its simplicity, low computation cost, and high accuracy, particularly in situations where very low computational time and good accuracy are a priority. At the other extreme, the GBC confirms its greedy nature, displaying a lengthy computation time compared with the other classifiers.
Beyond the global assessment metrics, further assessment is needed to properly compare and validate the efficiency of our proposed model. Figure 14 displays the confusion matrices of all classifiers, which disclose the probability of each ML classifier making accurate predictions for each of the operating conditions. This allows for a thorough evaluation of the predictive performance of the classifier models. A close cross-examination of Figure 14 reveals that DT and SGD returned the highest FPs for a particular class prediction, as shown in Figure 14b,e, with 13.7% and 20.3%, respectively. Unfortunately, the false alarm rate in the prediction of the NOC-4 operating condition is observed, on average, to be 9.03% false negatives relative to AMT-1, which is the highest across all classifier models. This hints that some data belonging to NOC-4 are mostly recognized by the models as belonging to AMT-1; this occurrence, although minimal for our proposed SVC model, is a major limitation of our proposed feature extraction mechanism, as revealed by the LLE visualization of the discriminative features in Figure 11.
In general, the proposed classifier’s efficiency has been demonstrated, as the SVC classifier outperformed all other models in all assessment evaluation metrics, validating our proposed model.

6. Discussion and Future Works

The SCIM is a well-known industrial power-driving device thanks to its strength and durability. However, due to heat generation, deterioration, and environmental and human factors, such equipment will eventually fail under constant use; therefore, CBM is required to ensure that failures that could cause downtime and revenue loss are significantly reduced. It is, however, essential to use techniques that can easily transform and present signals from this type of equipment in such a manner that discriminative features can be derived from them for efficient fault diagnosis (FDI).
This study presents a filter-based feature engineering technique that utilizes signals that have undergone prior HT processing; HT is a signal-processing tool that enhances the discriminative properties of the SCIM signal for feature extraction and selection, and in this case, four operating conditions were involved. This study particularly focused on the magnitude envelope extracted from the SCIM current signatures, which was obtained from the magnitude of the analytic signal of the HT. The selected features were further fed to the proposed classifier algorithm (SVC), which produced an accuracy of 97.32% at a computational time of 0.22 s. The performance of the suggested classifier was compared with that of other well-known, efficient, and distinctive classifiers, using global evaluation metrics to further assess the effectiveness of the classifier with the extracted features. The SVC outperformed all other classifiers on average across all evaluation metrics, but kNN showed the lowest computational time (roughly 0.01 s) and also high accuracy (92.91%), which could make it a recommended classifier in situations where speedy computations are crucial.
The effectiveness of each classifier's predictions for each operating condition was further demonstrated using a confusion matrix, which also served as an evaluation metric. The confusion matrices indicated that the NOC-4 condition was the most frequently misclassified, although only minimally so for the SVC. This, however, is the major limitation of this study and leaves room for further research, especially in the area of extracting more discriminative features.
Subsequent research should focus on providing more in-depth results and analyses of some of the operating conditions presented in this study, with the aim of generating various fault conditions for fault severity assessment. For instance, the ITS-3 operating condition, which occurs within the same phase, is not a fault severe enough to halt the operation of the system instantly. Further research could therefore focus on designing bridged turns with different numbers of shorted turns under different operating conditions for fault severity assessment using the proposed model. In addition, the proposed model could be exploited under varying load conditions as well as no-load conditions; the no-load case would aid off-site fault detection in industrial settings.
The motivation for this study was drawn from the limitation of the research presented in [2], where the best classifier achieved an accuracy of 79.25% with a computational time of 3.66 s. The superiority of the proposed model has been demonstrated with an accuracy of 97.32% at a computational time of 0.22 s on the same quantity and type of data, which validates the model. Finally, the experience and expertise of the researcher remain important in ML-based classification, as the discriminative feature extraction technique largely determines the output of the classifier.

Author Contributions

Conceptualization, C.N.O.; methodology, C.N.O.; software, C.N.O.; formal analysis, C.N.O.; investigation, C.N.O.; resources, C.N.O. and J.-w.H.; data curation, C.N.O.; writing—original draft, C.N.O.; writing—review and editing, C.N.O.; visualization, C.N.O.; supervision, J.-w.H.; project administration, J.-w.H.; funding acquisition, J.-w.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science and ICT (MSIT), Korea, under the Grand Information Technology Research Center support program (IITP-2020-2020-0-01612) supervised by the Institute for Information & Communications Technology Planning & Evaluation (IITP).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to laboratory regulations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Terron-Santiago, C.; Martinez-Roman, J.; Puche-Panadero, R.; Sapena-Bano, A. A Review of Techniques Used for Induction Machine Fault Modelling. Sensors 2021, 21, 4855.
  2. Okwuosa, C.N.; Akpudo, U.E.; Hur, J.-W. A Cost-Efficient MCSA-Based Fault Diagnostic Framework for SCIM at Low-Load Conditions. Algorithms 2022, 15, 212.
  3. Gundewar, S.K.; Kane, P.V. Condition Monitoring and Fault Diagnosis of Induction Motor. J. Vib. Eng. Technol. 2021, 9, 643–674.
  4. Shifat, T.A.; Hur, J.-W. ANN Assisted Multi Sensor Information Fusion for BLDC Motor Fault Diagnosis. IEEE Access 2021, 9, 9429–9441.
  5. Choudhary, A.; Goyal, D.; Shimi, S.L. Condition Monitoring and Fault Diagnosis of Induction Motors. Arch. Comput. Methods Eng. 2019, 26, 1221–1238.
  6. Bessam, B.; Menacer, A.; Boumehraz, M.; Cherif, H. Detection of broken rotor bar faults in induction motor at low load using neural network. ISA Trans. 2016, 64, 241–246.
  7. Konar, P.; Chattopadhyay, P. Multi-class fault diagnosis of induction motor using Hilbert and Wavelet Transform. Appl. Soft Comput. 2015, 30, 341–352.
  8. Abd-el-Malek, M.B.; Abdelsalam, A.K.; Hassan, O.E. Novel approach using Hilbert Transform for multiple broken rotor bars fault location detection for three phase induction motor. ISA Trans. 2018, 80, 439–457.
  9. Kareem, A.B.; Hur, J.-W. A Feature Engineering-Assisted CM Technology for SMPS Output Aluminium Electrolytic Capacitors (AEC) Considering D-ESR-Q-Z Parameters. Processes 2022, 10, 1091.
  10. Akpudo, U.E.; Hur, J.-W. A Cost-Efficient MFCC-Based Fault Detection and Isolation Technology for Electromagnetic Pumps. Electronics 2021, 10, 439.
  11. Shinde, P.P.; Shah, S. A Review of Machine Learning and Deep Learning Applications. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6.
  12. Jing, C.; Hou, J. SVM and PCA based fault classification approaches for complicated industrial process. Neurocomputing 2015, 167, 636–642.
  13. Liang, X.; Ali, M.Z.; Zhang, H. Induction Motors Fault Diagnosis Using Finite Element Method: A Review. IEEE Trans. Ind. Appl. 2020, 56, 1205–1217.
  14. Abd-el-Malek, M.B.; Abdelsalam, A.K.; Hassan, O.E. Induction motor broken rotor bar fault location detection through envelope analysis of start-up current using Hilbert transform. Mech. Syst. Signal Process. 2017, 90, 332–350.
  15. Khelfi, H.; Hamdani, S.; Nacereddine, K.; Chibani, Y. Stator Current Demodulation Using Hilbert Transform for Inverter-Fed Induction Motor at Low Load Conditions. In Proceedings of the International Conference on Electrical Sciences and Technologies in Maghreb (CISTEM), Algiers, Algeria, 28–31 October 2018; pp. 1–5.
  16. Salah, L.; Adel, G.; Khaled, K.; Ahmed, B. A comparative investigation between the MCSA method and the Hilbert transform for broken rotor bar fault diagnostics in a closed-loop three-phase induction motor. Univ. Politeh. Buchar. Sci. Bull. Ser. C-Electr. Eng. Comput. Sci. 2019, 80, 209–226.
  17. Puche-Panadero, R.; Pineda-Sanchez, M.; Riera-Guasp, M.; Roger-Folch, J.; Hurtado-Perez, E.; Perez-Cruz, J. Improved Resolution of the MCSA Method via Hilbert Transform, Enabling the Diagnosis of Rotor Asymmetries at Very Low Slip. IEEE Trans. Energy Convers. 2009, 24, 52–59.
  18. Toma, R.N.; Prosvirin, A.E.; Kim, J.-M. Bearing Fault Diagnosis of Induction Motors Using a Genetic Algorithm and Machine Learning Classifiers. Sensors 2020, 20, 1884.
  19. Kumar, P.; Hati, A.S. Review on Machine Learning Algorithm Based Fault Detection in Induction Motors. Arch. Comput. Methods Eng. 2021, 28, 1929–1940.
  20. Yin, Z.; Hou, J. Recent advances on SVM based fault diagnosis and process monitoring in complicated industrial processes. Neurocomputing 2016, 174, 643–650.
  21. Garcia-Bracamonte, J.E.; Ramirez-Cortes, J.M.; de Jesus Rangel-Magdaleno, J.; Gomez-Gil, P.; Peregrina-Barreto, H.; Alarcon-Aquino, V. An Approach on MCSA-Based Fault Detection Using Independent Component Analysis and Neural Networks. IEEE Trans. Instrum. Meas. 2019, 65, 1353–1361.
  22. Gaeid, K.S.; Ping, H.W.; Khalid, M.; Salih, A.L. Fault Diagnosis of Induction Motor Using MCSA and FFT. Sci. Acad. Publ. 2011, 1, 85–92.
  23. Feldman, M. Hilbert transform in vibration analysis. Mech. Syst. Signal Process. 2011, 25, 735–802.
  24. Johansson, M. The Hilbert Transform. Master's Thesis, Växjö University, Växjö, Sweden, 1999. Available online: http://yumpu.com/en/document/read/6683719/m-johansson-the-hilbert-transformpdf (accessed on 31 August 2022).
  25. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020, 408, 189–215.
  26. Ghosh, S.; Dasgupta, A.; Swetapadma, A. A Study on Support Vector Machine based Linear and Non-Linear Pattern Classification. In Proceedings of the 2019 International Conference on Intelligent Sustainable Systems (ICISS), Tirupur, India, 21 February 2019; pp. 24–28.
  27. Stavropoulos, G.; van Vorstenbosch, R.; van Schooten, F.; Smolinska, A. Random Forest and Ensemble Methods. Compr. Chemom. Chem. Biochem. Data Anal. 2020, 2, 661–672.
  28. Abid, F.B.; Sallem, M.; Braham, A. Optimized SWPT and Decision Tree for Incipient Bearing Fault Diagnosis. In Proceedings of the 19th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), Sousse, Tunisia, 24–26 March 2019; pp. 231–236.
  29. Akpudo, U.E.; Hur, J.-W. Intelligent Solenoid Pump Fault Detection based on MFCC Features, LLE and SVM. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 404–408.
  30. Yang, J.; Sun, Z.; Chen, Y. Fault Detection Using the Clustering-kNN Rule for Gas Sensor Arrays. Sensors 2016, 16, 2069.
  31. Zhao, L.; Huang, Y.; Xiao, D.; Li, Y.; Liu, C. A Novel Method for Induction Motor Fault Identification Based on MSST and LightGBM. In Proceedings of the 2019 International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC), Beijing, China, 15–17 August 2019; pp. 90–96.
  32. Huang, J.; Ling, S.; Wu, X.; Deng, R. GIS-Based Comparative Study of the Bayesian Network, Decision Table, Radial Basis Function Network and Stochastic Gradient Descent for the Spatial Prediction of Landslide Susceptibility. Land 2022, 11, 436.
  33. Ketkar, N. Stochastic Gradient Descent. In Deep Learning with Python; Apress: Berkeley, CA, USA, 2017; pp. 113–132.
  34. Kim, S.; Akpudo, U.E.; Hur, J.-W. A Cost-Aware DNN-Based FDI Technology for Solenoid Pumps. Electronics 2021, 10, 2323.
  35. Galton, F. Co-relations and their measurement, chiefly from anthropometric data. Proc. R. Soc. 1888, 45, 135–145.
  36. Garcia-Calva, T.A.; Morinigo-Sotelo, D.; Fernandez-Cavero, V.; Garcia-Perez, A.; Romero-Troncoso, R.D.J. Early Detection of Broken Rotor Bars in Inverter-Fed Induction Motors Using Speed Analysis of Startup Transients. Energies 2021, 14, 1469.
  37. Swana, E.F.; Doorsamy, W. Investigation of Combined Electrical Modalities for Fault Diagnosis on a Wound-Rotor Induction Generator. IEEE Access 2019, 7, 32333–32342.
  38. Sadeghi, R.; Samet, H.; Ghanbari, T. Detection of Stator Short-Circuit Faults in Induction Motors Using the Concept of Instantaneous Frequency. IEEE Trans. Ind. Inform. 2019, 15, 99.
Figure 1. Two-class classification using SVC.
Figure 2. Diagnosis process using SVC in phases: (a) data collection stage, (b) training and learning phase, (c) multi-classifier integration phase, and (d) test/validation phase.
Figure 3. The proposed diagnostic model.
Figure 4. The flow chart of the filter-based model.
Figure 5. A pictorial illustration of the experimental testbed.
Figure 6. A pictorial view showing a replication of the induced failure modes: (a) rotor bar misalignment, (b) motor broken rotor bar, and (c) turn-to-turn circuit winding.
Figure 7. Raw motor current signal plots collected for the squirrel cage induction motor: (a) AMT-1, (b) BSC-2, (c) ITS-3, and (d) NOC-4.
Figure 8. Three-phase motor current signal FFT spectra of the various operating conditions.
Figure 9. Magnitude envelope of the analytic signal.
Figure 10. A filter-based correlation plot of features: (a) all 17 extracted features and (b) the 10 selected discriminant features.
Figure 11. A three-dimensional LLE visualization of the discriminant features.
Figure 12. FDI evaluation: confusion matrix of the SVC model.
Figure 13. A global performance assessment plot of each classifier's output on the test data.
Figure 14. A full display of the confusion matrices of all ML classifiers on the datasets: (a) SVC, (b) DT, (c) RF, (d) kNN, (e) SGD, (f) GBC.
Table 1. Summary of the Hilbert transform process.

Steps | Labels
$y(t) \triangleq HT\{x(t)\}$ | Hilbert transform of $x$
$HT(t) \triangleq \frac{1}{\pi t}$ | Hilbert transform "function" (time-domain kernel)
$HT(f) \triangleq \begin{cases} -\pi/2, & f > 0 \\ +\pi/2, & f < 0 \end{cases}$ | Hilbert (phase) frequency response
$x_a(t) \triangleq x(t) + j\,y(t)$ | Analytic signal from $x$
$y(t) = HT\{x(t)\} = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{x(\tau)}{t - \tau}\, d\tau$ | Integral definition of the Hilbert transform
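The analytic-signal construction summarized in Table 1 can be realized with scipy.signal.hilbert, as in the following minimal sketch; the variable names are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of computing the magnitude envelope |x_a(t)| from a stator
# current signal; scipy.signal.hilbert returns the analytic signal directly.
import numpy as np
from scipy.signal import hilbert

def magnitude_envelope(x: np.ndarray) -> np.ndarray:
    """Return |x_a(t)|, the instantaneous amplitude of x_a(t) = x(t) + j*y(t)."""
    x_a = hilbert(x)        # analytic signal: x(t) + j * HT{x}(t)
    return np.abs(x_a)      # magnitude envelope used for feature extraction

# envelope = magnitude_envelope(stator_current_phase_a)
```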
Table 2. Statistically extracted features and their mathematical formulas [9,34].

Feature Name | Definition
nth percentile (n = 5, 25, 75, 95) | $P_x = \frac{100(x - 0.5)}{n}$
Median | the $\frac{n+1}{2}$th ordered sample
Mean | $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$
Root mean square | $X_{rms} = \sqrt{\frac{\sum_{i=1}^{n} x_i^{2}}{n}}$
Kurtosis | $X_{kurt} = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i - \mu}{\sigma}\right)^{4}$
Skewness | $X_{skew} = E\left[\left(\frac{x_i - \mu}{\sigma}\right)^{3}\right]$
Max | $X_{max} = \max(x_i)$
Min | $X_{min} = \min(x_i)$
Crest factor | $X_{CF} = \frac{x_{max}}{x_{rms}}$
Peak-to-peak | $X_{pp} = x_{max} - x_{min}$
Peak factor | $x_{PF} = \frac{x_{max}}{x_{s}}$
Wave factor | $x_{WF} = \frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^{2}}}{\frac{1}{n}\sum_{i=1}^{n} |x_i|}$
Clearance factor | $x_{CLF} = \frac{x_{max}}{\left(\frac{1}{N}\sum_{i=1}^{N}\sqrt{|x_i|}\right)^{2}}$
Impulse factor | $X_{IF} = \frac{x_{max}}{\frac{1}{N}\sum_{i=1}^{N} |x_i|}$
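The descriptors in Table 2 can be computed per signal window as in the sketch below, assuming NumPy/SciPy; the dictionary keys and the windowing strategy are assumptions made for illustration.

```python
# A minimal sketch of the Table 2 statistical features for one signal window.
import numpy as np
from scipy.stats import kurtosis, skew

def statistical_features(x: np.ndarray) -> dict:
    """Compute the per-window statistical descriptors listed in Table 2."""
    abs_x = np.abs(x)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "p5": np.percentile(x, 5), "p25": np.percentile(x, 25),
        "p75": np.percentile(x, 75), "p95": np.percentile(x, 95),
        "median": np.median(x), "mean": np.mean(x), "rms": rms,
        "kurtosis": kurtosis(x), "skewness": skew(x),
        "max": np.max(x), "min": np.min(x),
        "crest_factor": np.max(abs_x) / rms,
        "peak_to_peak": np.ptp(x),
        "wave_factor": rms / np.mean(abs_x),
        "clearance_factor": np.max(abs_x) / np.mean(np.sqrt(abs_x)) ** 2,
        "impulse_factor": np.max(abs_x) / np.mean(abs_x),
    }
```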
Table 3. Various operating conditions of the SCIM employed in the experimental study.

Code | Fault Mode | Definition
AMT-1 | Rotor misalignment | An occurrence where the center lines of the coupled shafts do not align properly.
BSC-2 | Broken rotor bar | A stress-related occurrence where the rotor bar fractures or cracks.
ITS-3 | Turn-to-turn short-circuit winding | An occurrence where two or more turns in a phase are in direct contact with each other.
NOC-4 | Normal operating condition | A condition where the motor is in its healthy state.
Table 4. SVC classification model and its parameters.

ML Model | Functional Parameters | Parameter Values
SVC | Regularizer (C), RBF kernel gamma ($\gamma$) | C = 1.00 (default), $\gamma$ = scale (default)
Table 5. FDI performance evaluation and computational cost of the SVC classifier.

ML Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Computational Time (s)
SVC | 97.32 | 97.00 | 97.00 | 97.00 | 0.2157
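The metrics in Table 5 can be reproduced with scikit-learn as sketched below; macro averaging over the four classes is an assumption, since the averaging scheme is not stated in this section.

```python
# A minimal sketch of the global assessment metrics reported in Table 5,
# assuming predictions from the fitted SVC on a held-out test split.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def global_metrics(y_true, y_pred) -> dict:
    """Accuracy plus macro-averaged precision, recall, and F1-score in percent."""
    return {
        "accuracy_%": 100 * accuracy_score(y_true, y_pred),
        "precision_%": 100 * precision_score(y_true, y_pred, average="macro"),
        "recall_%": 100 * recall_score(y_true, y_pred, average="macro"),
        "f1_%": 100 * f1_score(y_true, y_pred, average="macro"),
    }
```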
Table 6. The machine learning classifier models and their various parameters.

ML Classifier | Major Functional Parameters | Parameter Values
SVC | Regularization (C), gamma ($\gamma$) | C = 1.00, $\gamma$ = scale
DT | max_depth | None
RF | n_estimators | 70
KNN | k | 10
SGD | random_state, loss function | 101, modified huber
GBC | n_estimators | auto
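For reference, the Table 6 settings map onto scikit-learn roughly as in the sketch below; the GBC estimator count is left at the library default because "auto" is not a standard value for that parameter, and this mapping is an assumption rather than the authors' code.

```python
# A minimal sketch instantiating the compared classifiers with Table 6 settings.
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier

classifiers = {
    "SVC": SVC(C=1.0, gamma="scale"),
    "DT": DecisionTreeClassifier(max_depth=None),
    "RF": RandomForestClassifier(n_estimators=70),
    "kNN": KNeighborsClassifier(n_neighbors=10),
    "SGD": SGDClassifier(loss="modified_huber", random_state=101),
    "GBC": GradientBoostingClassifier(),  # n_estimators left at the default of 100
}
```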
Table 7. Average computational time of the training and testing processes for each classifier.

Classifier | SVC | DT | RF | kNN | SGD | GBC
Computational cost (s) | 0.2157 | 0.1016 | 0.3063 | 0.0142 | 0.0669 | 7.1720
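The combined training-plus-inference times of the kind reported in Table 7 could be measured as in the following sketch; the fixed train/test split and variable names are assumptions.

```python
# A minimal sketch of timing the fit + predict cycle of any estimator.
import time
from sklearn.svm import SVC

def timed_fit_predict(model, X_train, y_train, X_test):
    """Return predictions and the elapsed fit + predict wall-clock time in seconds."""
    start = time.perf_counter()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    return y_pred, time.perf_counter() - start

# y_pred, seconds = timed_fit_predict(SVC(C=1.0, gamma="scale"),
#                                     X_train, y_train, X_test)
```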
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
