Article

Detection of Inter-Turn Short Circuits in Induction Motors Using the Current Space Vector and Machine Learning Classifiers

by Johnny Rengifo 1,*, Jordan Moreira 2, Fernando Vaca-Urbano 2 and Manuel S. Alvarez-Alvarado 2

1 Departamento de Ingeniería Eléctrica, Universidad Técnica Federico Santa María, Av. Vicuña Mackenna 3939, San Joaquín 8940897, Santiago, Chile
2 Facultad de Ingeniería en Electricidad y Computación, Escuela Superior Politécnica del Litoral, Guayaquil 90902, Ecuador
* Author to whom correspondence should be addressed.
Energies 2024, 17(10), 2241; https://doi.org/10.3390/en17102241
Submission received: 13 March 2024 / Revised: 22 April 2024 / Accepted: 25 April 2024 / Published: 7 May 2024
(This article belongs to the Special Issue Applications of Machine Learning and Optimization in Energy Sectors)

Abstract

Electric motors play a fundamental role in various industries, and their relevance is strengthened in the context of the energy transition. Having efficient tools and techniques to detect and diagnose faults in electrical machines is crucial, as is providing early alerts to facilitate prompt decision-making. This study proposes indicators based on the magnitude of the stator current space vector for detecting and diagnosing incipient inter-turn short circuits (ITSCs) in induction motors (IMs). The effectiveness of these indicators was evaluated using four machine learning methods previously documented in the literature: random forests (RFs), support vector machines (SVMs), k-nearest neighbors (kNN), and feedforward and recurrent neural networks (FNNs and RNNs). This assessment was conducted using experimental data. The results were compared with indicators based on the discrete wavelet transform (DWT), demonstrating the viability of the proposed approach and opening a path toward detecting incipient ITSCs in three-phase IMs. Furthermore, the features derived from the magnitude of the space vector allowed the phase affected by the fault to be identified.

1. Introduction

Induction motors are extensively used in industry due to their comparatively low maintenance, robustness, and longevity. The monitoring and fault diagnosis of electric machinery is an active study area aimed at reducing unscheduled downtimes, which typically cause economic losses well beyond the costs of motor repair or replacement. Induction motor (IM) faults may be classified into four main groups [1,2]: stator faults, rotor cage failures, air gap irregularities, and bearing faults. Several authors have reported that approximately 30% of the failures identified in induction motors occur in the stator windings, while 40% to 50% are linked to mechanical issues and the remainder are associated with rotor failures and other causes [3,4]. Most stator-related failures are induced by thermal, electrical, mechanical, and environmental stresses [5,6], which lead to line-to-ground and line-to-line short circuits. This kind of fault typically starts as an inter-turn short circuit (ITSC), followed by large currents through the resulting low-impedance path, which produce high local temperatures and degrade the surrounding insulation [7].
The detection of ITSCs is conventionally encompassed within insulation studies of induction motor windings, such as insulation resistance, the polarization index, direct-current conductivity, capacitive impedance testing, and the dissipation factor [4]. Additionally, there are online detection alternatives, including the analysis of partial discharges [8,9], the monitoring of axially transmitted flux linkages [10], the analysis of voltage and current harmonics [11,12,13], the tracking of negative-sequence impedance [14,15], acoustic emissions [16,17], and Kalman filters [18], among other techniques [19,20].
Nowadays, the fault detection and diagnosis problem is addressed using two main approaches. One relies on detailed physical models of the process, while the other harnesses artificial intelligence to analyze data and identify fault patterns from information collected during system operation [21]. Methods grounded in physical models face inherent limitations in accurately representing the behavior of complex systems, which may diminish their efficacy. Conversely, data-driven methods select and process information that aptly describes the relevant process and assess various machine learning algorithms for their applicability in fault detection and diagnosis [22,23,24].
Supervised machine learning algorithms are widely used in data-driven IM fault detection and diagnosis methods for classification and pattern recognition from different kinds of data. Some of the most popular supervised algorithms include decision trees [25], random forests [26], support vector machines [27,28], and neural networks. Decision trees are simple, easy-to-interpret models that recursively partition the data based on feature values until a stopping criterion is met. Random forests (RFs) are an extension of decision trees that combine multiple trees to improve accuracy and reduce overfitting. Support vector machines (SVMs) are linear models that use a hyperplane to separate the classes; they aim to maximize the margin between the hyperplane and the closest examples of each class, making them robust to outliers and noise. Neural networks are nonlinear models inspired by the structure of the human brain. They consist of multiple layers of neurons that transform the input data into a form suitable for classification and are well suited to applications such as image recognition, speech recognition, and natural language processing. The performance of supervised machine learning models is evaluated on a test dataset separate from the training dataset. The most common evaluation metrics for classification problems are accuracy, precision, and recall; these metrics can be used to compare different models and fine-tune their hyperparameters.
The discussion on fault diagnosis using data has primarily centered on dealing with distinct categories of faults through the use of indicators, along with a detailed analysis of the performance of the corresponding classifiers. An early example of an approach to this problem can be found in [29], where neural networks were used to detect the presence of ITSCs, with the phase difference between the currents serving as the indicator. Similarly, in [30], an SVM was applied to diagnose bearing faults and combined with empirical mode decomposition to enhance the performance of the conventional approach. In [31], a method for diagnosing bearing faults using an LS-SVM was proposed, with parameters optimized through particle swarm optimization; this approach used intrinsic mode functions to decompose the vibration signals.
An application of the kNN, combined with ensemble learning algorithms, was presented in [32] for bearing fault detection, using indicators in the time and frequency domains. The Stockwell transform allowed for the calculation of characteristic indices in these domains, which were used in conjunction with an SVM [33] to classify mechanical faults. Neural networks have also been employed in electric motor diagnosis and prognosis. For instance, in [34], multilayer feedforward neural networks (FNNs) were applied to detect short circuits between coils in direct-on-line permanent magnet synchronous motors; this work compared statistical indicators based on steady-state currents with frequency-domain analysis. A similar study can be found in [35], in which the discrete wavelet transform (DWT) was applied to the motor's line currents to calculate descriptive features based on the energy of the obtained components, and the performance of multilayer perceptrons and recurrent neural networks was compared. In [36], SVMs and convolutional neural networks were used, trained on voltage and quadrature-axis current measurements. In both cases, precision levels exceeding 99% were achieved, although it was noted that convolutional neural networks require a larger volume of data than SVMs.
In a broader context, refs. [37,38] provided comprehensive reviews of the state of the art in detecting and diagnosing faults in electrical machines using artificial intelligence, covering the years 2021 and 2022, respectively. These compilations highlighted significant elements, such as the use of indicators based on frequency-domain analysis through the fast Fourier transform or the wavelet transform to diagnose various types of faults in electrical machines. Although some authors have addressed the detection of short circuits between turns using the Park vector method, taking advantage of the properties of the coordinate transformation [39,40,41], these proposals are limited because they have shown sensitivity to different load levels or have relied on image-based detection of the current locus. These issues were analyzed in [42], in which the authors identified the presence of a negative-sequence component in the current through trend adjustments using the Fryze–Buchholz–Depenbrock and least-squares algorithms, which became the motivation for this research. Therefore, this study proposes using indicators based on the Park vector method to detect incipient ITSCs in induction machines, employing supervised machine learning algorithms. Five different classification algorithms were implemented to assess the performance of the proposed indicators. This process uses experimental data that consider various load levels and fault resistances. Additionally, a comprehensive comparison was carried out with indicators based on the discrete wavelet transform (DWT), assessing their ability to identify faulty windings.
The main contributions of this paper can be summarized as follows:
  • Statistical indicators based on the magnitude of the stator current space vector are proposed for detecting inter-turn short circuits in induction motors, including faulty-phase identification.
  • The proposed indicators are evaluated using several classification algorithms to assess their performance in each case.
  • The performance of the proposed indicators is compared with results obtained by applying the discrete wavelet transform.
This work is structured as follows: Section 2 briefly describes the induction motor model operating with an ITSC and details the classification methods used, while Section 3 describes the procedure employed to calculate the indicators. Section 4 presents the experimental results along with their respective discussions. Finally, Section 5 states the conclusions of this study.

2. Theoretical Framework

2.1. Induction Motor Inter-Turn Short-Circuit Model

A model of a three-phase IM with an ITSC in phase $a$ is described by (1)–(3) [43,44,45], and its schematic is presented in Figure 1. To model this type of fault, the total number of turns of the faulty winding is defined as $N_s = N_{s1} + N_{s2}$, where $N_{s2}$ represents the shorted turns and $\mu = N_{s2}/N_s$ is the fault severity factor.
$\mathbf{v}_s = R_s \mathbf{i}_s + L_s \dfrac{d\mathbf{i}_s}{dt} + M \dfrac{d\mathbf{i}_r}{dt} - \dfrac{2}{3}\mu L_s \dfrac{di_f}{dt},$  (1)
$0 = R_r \mathbf{i}_r + L_r \dfrac{d\mathbf{i}_r}{dt} + M \dfrac{d\mathbf{i}_s}{dt} - \dfrac{2}{3}\mu M \dfrac{di_f}{dt} - jP\omega_m \left( M \mathbf{i}_s + L_r \mathbf{i}_r - \mu L_m i_f \right),$  (2)
$\mu v_{s\alpha} = \left( R_f + k_\mu R_s \right) i_f + k_\mu L_{ls} \dfrac{di_f}{dt}.$  (3)
Here, the voltage and current space vectors are obtained as $\mathbf{y}_k = y_{k\alpha} + j y_{k\beta} = \frac{2}{3}\left( y_{ka} + y_{kb} e^{j 2\pi/3} + y_{kc} e^{j 4\pi/3} \right)$, with $y = \{v, i\}$, $k = \{s, r\}$, and $k_\mu = \mu\left(1 - \frac{2}{3}\mu\right)$. $L_s$ and $L_r$ are the stator and rotor self-inductances; $M$ is the magnetizing inductance; $R_s$ and $R_r$ are the stator and rotor resistances; $R_f$ is the fault resistance; $\omega_m$ is the mechanical angular speed; and $P$ is the number of pole pairs.
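For readers implementing the model, a minimal NumPy sketch of this space-vector construction is given below; the function name and the commented usage are illustrative and not part of the original work.

```python
import numpy as np

def space_vector(y_a, y_b, y_c):
    """Complex space vector y = y_alpha + j*y_beta from sampled abc quantities."""
    a = np.exp(1j * 2 * np.pi / 3)                 # 120-degree rotation operator
    return (2.0 / 3.0) * (y_a + a * y_b + a**2 * y_c)

# Usage (hypothetical): i_a, i_b, i_c are NumPy arrays of sampled line currents.
# i_s = space_vector(i_a, i_b, i_c)               # stator current space vector
# i_s_mag = np.abs(i_s)                           # its magnitude, used in Section 3
```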

2.2. Supervised Machine Learning

2.2.1. Random Forest

Decision Trees (DTs) are machine learning models represented as tree structures. They are constructed recursively, dividing the dataset into smaller subsets based on specific features. Decisions are made based on simple rules and can be intuitively visualized, simplifying data interpretation. However, DTs are prone to overfitting, as they over-adapt to the training data and may lose generalization [25]. To overcome this deficiency, this paper employs a random forest (RF), a method based on constructing multiple DTs and combining them to achieve a more robust and accurate model [26]. Independent DTs are generated during training, each fitted to a random subset of the training dataset, and the final prediction of the model is obtained by aggregating the results of the individual trees. This methodology leverages the diversity of the trees to reduce the risk of overfitting.
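A minimal scikit-learn sketch of such a random forest classifier is shown below; the synthetic feature matrix, labels, and hyperparameter values are placeholders for illustration and are not the data or settings of this study, although the 40% training split mirrors Section 4.2.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the indicator features and healthy/faulty labels
rng = np.random.default_rng(0)
X = rng.normal(size=(61, 4))                 # 61 operating points, 4 indicators
y = rng.integers(0, 2, size=61)              # 0 = healthy, 1 = ITSC

# 40% of the samples for training, the rest for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.4, random_state=0)

rf = RandomForestClassifier(n_estimators=100, min_samples_leaf=1, random_state=0)
rf.fit(X_train, y_train)
print("Test accuracy:", rf.score(X_test, y_test))
```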

2.2.2. k-Nearest Neighbors

The kNN classification method is grounded in the idea that data points that are close in feature space tend to share the same label or class [46]. The algorithm begins by selecting a value for $k$, the number of nearest neighbors to consider. When predicting a new data point, the algorithm finds the $k$ nearest points in the training dataset using a distance measure between data points, such as the Euclidean, Manhattan, or Chebyshev distance. The kNN model follows a nonparametric approach, meaning that it makes no assumptions about the underlying data distribution, which makes it suitable for a wide range of problems. However, it has limitations, such as sensitivity to the choice of $k$: smaller values can be susceptible to noise, while larger values can reduce the model's adaptability. Additionally, the choice of distance metric can significantly influence the results.
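A corresponding kNN sketch, again with placeholder data and an illustrative choice of k and distance metric (scikit-learn is assumed), is shown below.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(61, 4))                      # placeholder indicator features
y = rng.integers(0, 2, size=61)                   # placeholder healthy/faulty labels

# k = 5 and the Euclidean metric are illustrative, not the tuned values of the study
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X, y)
print(knn.predict(X[:3]))                         # predicted classes for three samples
```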

2.2.3. Neural Networks

Neural Networks (NNs) are inspired by the structure and functioning of the human brain; hence, they operate as a network of interconnected nodes, known as neurons, that process input data, perform computations, and generate outputs [47]. The connections between neurons are weighted to adjust the influence of one neuron on another. Hidden layers within the network enable the learning of complex intermediate representations, allowing NNs to model complex nonlinear relationships. Hyperparameter selection is critical in network design, and the training process can be computationally expensive. Different types of NNs exist, but they can be broadly classified into two groups based on the direction of information flow through the layers. The first group comprises feedforward neural networks (FNNs), in which each hidden layer uses only data from the previous layer. In the second group, a neuron's activation function can also use information from the neuron itself or from a subsequent layer [48]; this type is known as a recurrent neural network (RNN). Figure 2 presents a schematic diagram of an NN.
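The difference between the two groups can be illustrated with a minimal Keras sketch; the layer sizes, activations, and the use of an LSTM cell for the recurrent case are assumptions for illustration, not the topologies selected by the hyperband search described in Section 4.

```python
import tensorflow as tf

n_features = 4                                    # e.g., the four statistical indicators

# Feedforward network: information flows only from one layer to the next
fnn = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # healthy / faulty
])

# Recurrent network: the LSTM layer also reuses its own past activations
rnn = tf.keras.Sequential([
    tf.keras.Input(shape=(1, n_features)),            # sequence length of 1 as a minimal case
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

fnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
rnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```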

3. Feature Extraction

3.1. Statistical Signal Analysis

The ITSC changes the symmetry of the IM windings, producing unbalanced line currents. Due to the nature of this unbalanced connection, the line currents only contain positive- and negative-sequence components [45]; therefore, the current space vector is described by $\mathbf{i}_s = \sqrt{2} I_1 e^{j\omega_s t} + \sqrt{2} I_2 e^{-j\omega_s t}$. The locus of the stator current space vector depicts an elliptical shape resulting from these two components rotating in opposite directions in the complex plane, as demonstrated in Figure 3. The positive- and negative-sequence components rotate in opposite directions, yet at equal speeds. As the level of imbalance increases, the elliptical shape becomes more pronounced. The trajectory of the current space vector magnitude, $|\mathbf{i}_s|$, as a function of time shows a helpful pattern, with a maximum value corresponding to half of the ellipse's major-axis length and a minimum that matches half of the minor-axis length. Figure 4 presents the magnitude of the stator space vector as a function of time for several severity factors, ranging from 0.5% to 5%, for two fault resistances, obtained using (1)–(3) and the parameters presented in Table 1. In both cases, a correlation between the level of current imbalance and the severity factor is observed.
The waveform of the stator current space vector magnitude carries valuable information for detecting ITSCs through supervised machine learning algorithms. Descriptive indices that summarize the main signal features are suitable candidates for training such algorithms, allow distinguishing between healthy and faulty operation, and can be instrumental in extending the results to other machines. This work uses the following statistical indicators as features:
The shape factor: the root mean square (RMS) divided by the mean of the absolute value of the signal,
$X_{SF} = \dfrac{\sqrt{\frac{1}{N}\sum_{k=1}^{N} x_k^2}}{\frac{1}{N}\sum_{k=1}^{N} |x_k|}.$  (4)
The impulse factor: the peak of the absolute value of the signal ($x_p$) divided by its mean absolute level,
$X_{IF} = \dfrac{x_p}{\frac{1}{N}\sum_{k=1}^{N} |x_k|}.$  (5)
The clearance factor: the peak value divided by the squared mean of the square roots of the absolute amplitudes,
$X_{clear} = \dfrac{x_p}{\left(\frac{1}{N}\sum_{k=1}^{N} \sqrt{|x_k|}\right)^2}.$  (6)
The crest factor: the peak value divided by the RMS,
$X_{crest} = \dfrac{x_p}{\sqrt{\frac{1}{N}\sum_{k=1}^{N} x_k^2}}.$  (7)
These features are suitable candidates for training machine learning algorithms, allowing them to distinguish between healthy and faulty operation.
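As an illustration, the four factors can be computed from a sampled waveform of $|\mathbf{i}_s|$ with a few lines of NumPy; the synthetic waveform below is only a placeholder for the measured signal.

```python
import numpy as np

def statistical_indicators(x):
    """Shape, impulse, clearance, and crest factors of a sampled signal x, per (4)-(7)."""
    x = np.asarray(x, dtype=float)
    abs_x = np.abs(x)
    rms = np.sqrt(np.mean(x**2))
    mean_abs = np.mean(abs_x)
    peak = abs_x.max()
    return {
        "shape": rms / mean_abs,
        "impulse": peak / mean_abs,
        "clearance": peak / np.mean(np.sqrt(abs_x))**2,
        "crest": peak / rms,
    }

# Placeholder |i_s| waveform: a constant level plus a small ripple, as in Figure 4
t = np.linspace(0, 0.2, 2048)
i_s_mag = 10 + 0.3 * np.abs(np.cos(2 * np.pi * 60 * t))
print(statistical_indicators(i_s_mag))
```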

3.2. Discrete Wavelet Transform

The wavelet transform (WT) is a signal-processing tool that simultaneously decomposes a signal into its frequency and time components. This method has been used in various applications to describe stationary and transient signals [49]. The two main approaches to analyzing a signal with the WT are the continuous (CWT) and discrete (DWT) domains [50]. The CWT generates coefficients representing the similarity between the signal and the wavelet function, extracting significant information from signals; however, it requires more computational resources than the DWT. The DWT, in turn, is based on the concept of multiscale decomposition: the signal is decomposed into different scales and locations using discrete wavelet functions, reducing the original signal to a series of approximations and details at varying resolutions [51]. The decomposition process uses low-pass and high-pass filters to obtain detail (D) and approximation (A) coefficients, and it is repeated recursively until the desired resolution is reached. The number of decomposition levels ($n_{ls}$) is based on the relation between the sampling frequency ($f_s$) and the fundamental frequency ($f_1$), as follows [35]:
$n_{ls} > \operatorname{int}\left(\dfrac{\log\left(f_s/f_1\right)}{\log 2}\right) + 1.$  (8)
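Assuming the 60 Hz fundamental implied by the bandwidths in Table 3 and the sampling rate of 256 points per cycle used here ($f_s = 15{,}360$ Hz), this rule gives $n_{ls} > \operatorname{int}\left(\log_2 256\right) + 1 = 9$, which motivates the ten-level decomposition summarized in Table 3.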
The signal energy related to each decomposition level is calculated as follows [35,52,53]:
$E_{d_j} = \sum_{k=1}^{N} D_{jk}^2,$  (9)
$E_{a_j} = \sum_{k=1}^{N} A_{jk}^2.$  (10)
Previous studies have successfully used indicators based on the energy calculated at the most energetic decomposition level of each line current to detect ITSCs in IMs [35]. These indicators, $S_1$, $S_2$, and $S_3$, are defined as follows:
$S_1 = \dfrac{E_d^{i_a}}{E_d^{i_b}},$  (11)
$S_2 = \dfrac{E_d^{i_b}}{E_d^{i_c}},$  (12)
$S_3 = \dfrac{E_d^{i_c}}{E_d^{i_a}}.$  (13)
The DWT-based indicators are used as a benchmark for the statistical indices defined in (4)–(7).
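A minimal sketch of this computation is given below, assuming the PyWavelets package and a Daubechies-4 mother wavelet; the original work does not specify the wavelet or software, so this choice and the synthetic balanced currents are illustrative only.

```python
import numpy as np
import pywt

def dwt_detail_energies(x, wavelet="db4", levels=10):
    """Energy of each detail vector D_1..D_n of a ten-level DWT, following (9)."""
    coeffs = pywt.wavedec(x, wavelet, level=levels)   # [A_n, D_n, ..., D_1]
    details = coeffs[1:][::-1]                        # reorder as D_1, ..., D_n
    return np.array([np.sum(d**2) for d in details])

# Placeholder line currents; in practice these are the measured i_a, i_b, i_c
fs, f1 = 15360, 60
t = np.arange(0, 1.0, 1 / fs)
i_a = 10 * np.cos(2 * np.pi * f1 * t)
i_b = 10 * np.cos(2 * np.pi * f1 * t - 2 * np.pi / 3)
i_c = 10 * np.cos(2 * np.pi * f1 * t + 2 * np.pi / 3)

j = 7 - 1                                             # seventh detail level (60-120 Hz band)
E_a, E_b, E_c = (dwt_detail_energies(i)[j] for i in (i_a, i_b, i_c))
S1, S2, S3 = E_a / E_b, E_b / E_c, E_c / E_a          # ratios S1, S2, S3 defined above
print(S1, S2, S3)
```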

4. Experimental Results and Discussion

4.1. Experimental Setup

A three-phase IM modified with auxiliary taps for applying a controlled ITSC was used. The stator has thirty-six slots, three pole pairs, and thirty-two turns per slot; taps were arranged in each phase to adjust the degree of severity and the faulty phase. Table 2 presents further dimensional details of the IM stator. For each phase, two incipient faults of $\mu = 4\%$ and $5\%$ with different fault resistances and load levels were set up. In every case, the motor line currents were measured using an AEMC PowerPad 3945-B network analyzer (AEMC, Dover, NH, USA). Figure 5 depicts the experimental test rig. A total of thirty-one operating points were collected with the faulty machine; likewise, operating points were measured with the healthy IM. At each operating point, the waveform of the motor phase currents was recorded with a sampling rate of 256 points per cycle. The motor was supplied at nominal voltage and frequency.

4.2. Results and Discussion

Sixty-one measurements were used for the training and evaluation of the classification methods using three groups of features, namely A, B, and C. Dataset A comprised the indicators associated with the statistical description of the magnitude of the stator current space vector. Dataset B included features calculated by applying a ten-level DWT to the IM line currents. Dataset C combined the statistical characteristics of the stator current space vector magnitude with its decomposition through the DWT; for this dataset, the energy of each decomposition level was calculated and the highest-energy component was selected. The frequency bandwidth of each DWT level is presented in Table 3, and Figure 6 shows an example of an experimental DWT decomposition of the phase $a$ current for a faulty state.
Levels D7 and D8 have higher energy because the fundamental frequency of the currents lies at the edge of both bandwidths. In this case, the indicators $S_1$, $S_2$, and $S_3$ are calculated using the seventh-level component. Finally, Figure 6b shows the DWT decomposition of $|\mathbf{i}_s|$ for the same faulty case previously analyzed. Similarly, the seventh level is the most energetic, since the fundamental frequency of the current magnitude falls within this bandwidth.
Before training the classification methods, dispersion plots were created using datasets A, B, and C, as depicted in Figure 7. In Figure 7a–c, a thorough analysis of the relationship between the statistical indicators was performed, revealing that the data corresponding to each operating condition form distinct subsets. Furthermore, dataset B (Figure 7d) distinguishes between healthy and faulty operations. Likewise, dataset C, which combines both types of indicators, also provides the capability to discern the presence or absence of faults.
Two classification approaches are presented using the previously described attributes. The first focuses on the early detection of ITSCs, regardless of the specific phase where the fault occurs. The second, in addition to detecting the fault's presence, aims to identify the phase affected by the short circuit. Hence, the classification methods should distinguish between the healthy state and ITSC states in phase $a$, $b$, or $c$.
The next step involved training the various classification methods to detect ITSCs using the three previously described datasets. Data were randomly partitioned in all cases, with 40% of the samples allocated to the training process and the remainder reserved for evaluation. An exhaustive search within predefined parameter settings was employed to perform hyperparameter tuning for the RF, kNN, and SVM [54]. For the feedforward and recurrent neural networks (FNNs and RNNs), the hyperband algorithm [55] was applied in Python [56] to determine the network topology for this problem. Table 4 summarizes the hyperparameters and metrics used for training each model.
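As a sketch of the exhaustive-search step, the snippet below assumes scikit-learn's GridSearchCV with an illustrative SVM grid and placeholder data; the actual grids and data of the study are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(61, 4))                  # placeholder indicator features
y = rng.integers(0, 2, size=61)               # placeholder healthy/faulty labels
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.4, random_state=0)

# Exhaustive search over a predefined grid, as done for the RF, kNN, and SVM models;
# the grid values here are illustrative only.
param_grid = {"kernel": ["linear", "poly", "rbf"], "C": [0.1, 1, 10], "degree": [2, 3]}
search = GridSearchCV(SVC(), param_grid, scoring="accuracy", cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```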
In every instance during training, the goal was to maximize the model's accuracy. To avoid overfitting, the quality of the results was evaluated on the test dataset using the accuracy and the area under the receiver operating characteristic curve (AUC) [57]. The resulting scores for each combination of classification method and dataset are presented in Table 5. The results obtained with dataset A showed high effectiveness, with accuracy rates exceeding 96% for all the techniques except the SVM, which achieved 89%. The decision-tree-based method (RF) and the kNN classified all the test data with 100% accuracy.
Additionally, both the feedforward and recurrent neural networks showed equivalent performance, with a high accuracy of 96%. The AUC values confirmed that the classifiers trained on dataset A can identify ITSC operation of the IM with a high probability of success, ranging from 94% for the RNN to 99% for the SVM and 100% for the other methods. The confusion matrices corresponding to each classification method are presented in Figure 8, where the test dataset was randomly selected.
These results demonstrate that the statistical indicators of the stator current space vector magnitude are suitable for training classification models, enabling the detection of incipient ITSCs even in the presence of varying fault resistances and different load levels.
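The accuracy/AUC evaluation described above can be sketched as follows, assuming scikit-learn metrics with placeholder data and classifier, so the printed values are meaningless and only the procedure is illustrated.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(61, 4))                    # placeholder features
y = rng.integers(0, 2, size=61)                 # placeholder labels
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.4, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
y_pred = clf.predict(X_test)
scores = clf.predict_proba(X_test)[:, 1]        # score for the faulty class

print("Accuracy:", accuracy_score(y_test, y_pred))
print("AUC:", roc_auc_score(y_test, scores))    # area under the ROC curve
print(confusion_matrix(y_test, y_pred))         # rows: true class, columns: predicted class
```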
Dataset B, composed of indicators derived from the energy of the DWT components, was used to train the evaluated machine learning methods. The results were satisfactory for the RF, SVM, and RNN classifiers, with accuracy rates of at least 92%. However, it is essential to note that no significant improvement over dataset A was observed in any of the analyzed cases. When examining the confusion matrices (Figure 8) corresponding to the models trained on this dataset, erroneous classifications, including false positives and false negatives, were observed for the SVM, kNN, FNN, and RNN methods. Among them, the kNN presented the highest number of incorrect classifications, producing three false positives and two false negatives. Similarly, as depicted in Figure 8k, the FNN yielded three false positives, while the RNN had two incorrect outcomes.
Dataset C combines the statistical indicators with the highest-energy level of the DWT components of the stator current space vector magnitude. It demonstrated performance equal to or better than that of dataset A: all the accuracy rates exceeded 96%, and the AUC reached at least 99%. This pattern is also reflected in the confusion matrices, where incorrect classifications were only observed for the SVM and RNN. Nevertheless, including the energy-related indicator led to a meaningful improvement only for the SVM, whose accuracy increased from 89% to 96%.
After confirming the effectiveness of the characteristics derived from the stator current space vector magnitude for detecting incipient failures, the ability of the proposed indicators to identify the specific phase where the fault occurs was evaluated. In this regard, the output vector used to train each algorithm was redefined to describe four states: healthy, and faulty in phase $a$, $b$, or $c$. For the RF, kNN, and SVM techniques, an output vector with four integer states was used, whereas for the FNN and RNN, a one-hot output matrix was employed, with four binary positions per sample of which only one column holds the value 1. Across all the examined methods, each dataset was split, allocating between 40% and 50% of the samples for training and reserving the remainder for evaluating performance.
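A small sketch of the two output encodings, with hypothetical integer labels, is shown below.

```python
import numpy as np

# Integer labels used for the RF, kNN, and SVM: 0 = healthy, 1/2/3 = ITSC in phase a/b/c
y_int = np.array([0, 1, 2, 3, 0, 3])            # illustrative labels

# One-hot targets used for the FNN/RNN: exactly one column equals 1 per sample
y_onehot = np.eye(4)[y_int]
print(y_onehot)
```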
The accuracy and confusion matrices obtained for each algorithm are presented in Table 6 and Figure 9, respectively. On this occasion, accuracy refers to the algorithms' ability to identify the state of each sample within the set of samples of that state, evaluated for each dataset. The results of the RF using dataset A indicate that 93% of the healthy samples were correctly classified. The classification accuracy for ITSCs in phases $a$ and $b$ was 100%; however, the accuracy for the samples with short circuits in phase $c$ was 50%. The overall accuracy of the method was 90.3%. Therefore, this method successfully classified healthy and faulty states for two of the three windings. Conversely, these results improved when dataset C was used, with all the data correctly classified. The SVM results showed a higher overall accuracy of 92.85% for dataset A.
Upon closer inspection, two mislabeled cases were detected, both linked to the same phase: a healthy sample was identified as faulty, while a fault in phase $c$ was diagnosed as if it were in phase $b$. With the inclusion of the energy of $|\mathbf{i}_s|$ (dataset C), only one sample was incorrectly labeled as a fault in phase $b$ instead of $c$, resulting in an overall accuracy of 96.42%.
Moreover, the kNN showed better accuracy for dataset C (92.86%) than for the first case study (89.86%). This improvement is attributed to the correct classification of one additional sample, as evidenced in the confusion matrices (Figure 9). Finally, when applying the FNN- and RNN-based classifiers with dataset A, the former followed the trend described by the previously discussed methods, while the latter allocated almost all samples to their corresponding class. Specifically, the overall accuracy for the RNN was 96.78%.
The results obtained when training all the classification methods with dataset B are comparable to those discussed with dataset C. This suggests that the proposed indicators allow for the reliable detection of incipient ITSCs and provide an objective diagnosis of the fault location with a high probability of success.

5. Conclusions

In this paper, a novel approach to detecting ITSCs in induction machines using supervised learning is presented. The methodology is based on indicators derived from the magnitude of the stator current space vector. The experimental results demonstrate the effectiveness of the evaluated classification methods (RF, kNN, SVM, FNN, and RNN) in detecting short circuits in any of the phases, regardless of the fault resistance. The accuracy of the proposed indicators was compared with that of indicators based on the discrete wavelet transform (DWT) and was found to be equivalent or superior for most of the classifiers evaluated, except the SVM.
The proposed approach was experimentally validated using an induction motor with ITSCs in all machine phases and with various fault resistances. In future research, efforts will be directed toward identifying additional characteristics related to failures, such as assessing the severity of faults. Furthermore, there are plans to expand the working dataset to include motors of different power ratings and those driven by voltage source inverters.

Author Contributions

Conceptualization, J.R.; Methodology, J.M.; Validation, J.M.; Formal analysis, J.R. and M.S.A.-A.; Investigation, J.M.; Resources, J.R.; Data curation, J.R. and J.M.; Writing—review & editing, J.R., F.V.-U. and M.S.A.-A.; Supervision, F.V.-U.; Project administration, F.V.-U. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the project FIEC-2-2023 at ESPOL, and the APC was funded by Universidad Técnica Federico Santa María.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nandi, S.; Toliyat, H.A.; Li, X. Condition Monitoring and Fault Diagnosis of Electrical Motors—A Review. IEEE Trans. Energy Convers. 2005, 20, 719–729. [Google Scholar] [CrossRef]
  2. Zhang, P.; Du, Y.; Habetler, T.G.; Lu, B. A Survey of Condition Monitoring and Protection Methods for Medium-Voltage Induction Motors. IEEE Trans. Ind. Appl. 2011, 47, 34–46. [Google Scholar] [CrossRef]
  3. Awadallah, M.A.; Morcos, M.M. Application of AI Tools in Fault Diagnosis of Electrical Machines and Drives—An Overview. IEEE Trans. Energy Convers. 2003, 18, 245–251. [Google Scholar] [CrossRef]
  4. Das, S.; Purkait, P.; Dey, D.; Chakravorti, S. Monitoring of Inter-Turn Insulation Failure in Induction Motor Using Advanced Signal and Data Processing Tools. IEEE Trans. Dielectr. Electr. Insul. 2011, 18, 1599–1608. [Google Scholar] [CrossRef]
  5. Bonnett, A.H.; Soukup, G.C. Cause and Analysis of Stator and Rotor Failures in Three-Phase Squirrel-Cage Induction Motors. IEEE Trans. Ind. Appl. 1992, 28, 921–936. [Google Scholar] [CrossRef]
  6. Siddique, A.; Yadava, G.S.; Singh, B. A Review of Stator Fault Monitoring Techniques of Induction Motors. IEEE Trans. Energy Convers. 2005, 20, 106–114. [Google Scholar] [CrossRef]
  7. Kliman, G.B.; Premerlani, W.J.; Koegl, R.A.; Hoeweler, D. New Approach to On-Line Turn Fault Detection in AC Motors. Conf. Rec. IAS Annu. Meet. (IEEE Ind. Appl. Soc.) 1996, 1, 687–693. [Google Scholar] [CrossRef]
  8. Cruz, J.d.S.; Fruett, F.; Lopes, R.d.R.; Cruz, J.; Fruett, F.; Lopes, R.; Takaki, F.L.; Tambascia, C.d.A.; Lima, E.R.d.; Giesbrecht, M. Partial Discharges Monitoring for Electric Machines Diagnosis: A Review. Energies 2022, 15, 7966. [Google Scholar] [CrossRef]
  9. Sheikh, M.A.; Bakhsh, S.T.; Irfan, M.; Nor, N.b.M.; Nowakowski, G. A Review to Diagnose Faults Related to Three-Phase Industrial Induction Motors. J. Fail. Anal. Prev. 2022, 22, 1546–1557. [Google Scholar] [CrossRef]
  10. Cao, W.; Huang, R.; Wang, H.; Lu, S.; Hu, Y.; Hu, C.; Huang, X. Analysis of Inter-Turn Short-Circuit Faults in Brushless DC Motors Based on Magnetic Leakage Flux and Back Propagation Neural Network. IEEE Trans. Energy Convers. 2023, 38, 2273–2281. [Google Scholar] [CrossRef]
  11. Park, J.K.; Hur, J. Detection of Inter-Turn and Dynamic Eccentricity Faults Using Stator Current Frequency Pattern in IPM-Type BLDC Motors. IEEE Trans. Ind. Electron. 2016, 63, 1771–1780. [Google Scholar] [CrossRef]
  12. Allal, A.; Khechekhouche, A. Diagnosis of Induction Motor Faults Using the Motor Current Normalized Residual Harmonic Analysis Method. Int. J. Electr. Power Energy Syst. 2022, 141, 108219. [Google Scholar] [CrossRef]
  13. Ghanbari, T.; Mehraban, A.; Farjah, E. Inter-Turn Fault Detection of Induction Motors Using a Method Based on Spectrogram of Motor Currents. Measurement 2022, 205, 112180. [Google Scholar] [CrossRef]
  14. Drif, M.; Drif, M.; Estima, J.O.; Cardoso, A.J.M. The Use of the Stator Instantaneous Complex Apparent Impedance Signature Analysis for Discriminating Stator Winding Faults and Supply Voltage Unbalance in Three-Phase Induction Motors. In Proceedings of the 2013 IEEE Energy Conversion Congress and Exposition, ECCE 2013, Denver, CO, USA, 15–19 September 2013; pp. 4403–4411. [Google Scholar] [CrossRef]
  15. Liu, J.; Tan, H.; Shi, Y.; Ai, Y.; Chen, S.; Zhang, C. Research on Diagnosis and Prediction Method of Stator Interturn Short-Circuit Fault of Traction Motor. Energies 2022, 15, 3759. [Google Scholar] [CrossRef]
  16. Mathew, S.K.; Zhang, Y. Acoustic-Based Engine Fault Diagnosis Using WPT, PCA and Bayesian Optimization. Appl. Sci. 2020, 10, 6890. [Google Scholar] [CrossRef]
  17. Lucas, G.B.; De Castro, B.A.; Ardila-Rey, J.A.; Glowacz, A.; Leao, J.V.F.; Andreoli, A.L. A Novel Approach Applied to Transient Short-Circuit Diagnosis in TIMs by Piezoelectric Sensors, PCA, and Wavelet Transform. IEEE Sens. J. 2023, 23, 8899–8908. [Google Scholar] [CrossRef]
  18. Namdar, A.; Samet, H.; Allahbakhshi, M.; Tajdinian, M.; Ghanbari, T. A Robust Stator Inter-Turn Fault Detection in Induction Motor Utilizing Kalman Filter-Based Algorithm. Measurement 2022, 187, 110181. [Google Scholar] [CrossRef]
  19. Sarkar, S.; Purkait, P.; Das, S. NI CompactRIO-Based Methodology for Online Detection of Stator Winding Inter-Turn Insulation Faults in 3-Phase Induction Motors. Measurement 2021, 182, 109682. [Google Scholar] [CrossRef]
  20. Cardenas-Cornejo, J.J.; Ibarra-Manzano, M.A.; González-Parada, A.; Castro-Sanchez, R.; Almanza-Ojeda, D.L. Classification of Inter-Turn Short-Circuit Faults in Induction Motors Based on Quaternion Analysis. Measurement 2023, 222, 113680. [Google Scholar] [CrossRef]
  21. Lu, X.; Lin, P.; Cheng, S.; Fang, G.; He, X.; Chen, Z.; Wu, L. Fault Diagnosis Model for Photovoltaic Array Using a Dual-Channels Convolutional Neural Network with a Feature Selection Structure. Energy Convers. Manag. 2021, 248, 114777. [Google Scholar] [CrossRef]
  22. Shi, M.; Ding, C.; Wang, R.; Shen, C.; Huang, W.; Zhu, Z. Graph Embedding Deep Broad Learning System for Data Imbalance Fault Diagnosis of Rotating Machinery. Reliab. Eng. Syst. Saf. 2023, 240, 109601. [Google Scholar] [CrossRef]
  23. Guo, J.; Wang, Z.; Li, H.; Yang, Y.; Huang, C.-G.; Yazdi, M.; Kang, S. A Hybrid Prognosis Scheme for Rolling Bearings Based on a Novel Health Indicator and Nonlinear Wiener Process. Reliab. Eng. Syst. Saf. 2024, 245, 110014. [Google Scholar] [CrossRef]
  24. Guo, J.; Yang, Y.; Li, H.; Dai, L.; Huang, B. A Parallel Deep Neural Network for Intelligent Fault Diagnosis of Drilling Pumps. Eng. Appl. Artif. Intell. 2024, 133, 108071. [Google Scholar] [CrossRef]
  25. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  26. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Routledge: London, UK, 2017; pp. 1–358. [Google Scholar] [CrossRef]
  27. Sain, S.R. The Nature of Statistical Learning Theory. Technometrics 1996, 38, 409. [Google Scholar] [CrossRef]
  28. Fan, R.-E.; Chen, P.-H.; Lin, C.-J. Working Set Selection Using Second Order Information for Training Support Vector Machines. J. Mach. Learn. Res. 2005, 6, 1889–1918. [Google Scholar]
  29. Ben, M.; Bouzid, K.; Champenois, G.; Bellaaj, N.M.; Signac, L.; Jelassi, K. An Effective Neural Approach for the Automatic Location of Stator Interturn Faults in Induction Motor. IEEE Trans. Ind. Electron. 2008, 55, 4277–4289. [Google Scholar] [CrossRef]
  30. Liu, X.; Bo, L.; Luo, H. Bearing Faults Diagnostics Based on Hybrid LS-SVM and EMD Method. Measurement 2015, 59, 145–166. [Google Scholar] [CrossRef]
  31. Deng, W.; Yao, R.; Zhao, H.; Yang, X.; Li, G. A Novel Intelligent Diagnosis Method Using Optimal LS-SVM with Improved PSO Algorithm. Soft Comput. 2019, 23, 2445–2462. [Google Scholar] [CrossRef]
  32. Yao, B.; Zhen, P.; Wu, L.; Guan, Y. Rolling Element Bearing Fault Diagnosis Using Improved Manifold Learning. IEEE Access 2017, 5, 6027–6035. [Google Scholar] [CrossRef]
  33. Singh, M.; Shaik, A.G. Faulty Bearing Detection, Classification and Location in a Three-Phase Induction Motor Based on Stockwell Transform and Support Vector Machine. Measurement 2019, 131, 524–533. [Google Scholar] [CrossRef]
  34. Maraaba, L.S.; Al-Hamouz, Z.M.; Milhem, A.S.; Abido, M.A. Neural Network-Based Diagnostic Tool for Detecting Stator Inter-Turn Faults in Line Start Permanent Magnet Synchronous Motors. IEEE Access 2019, 7, 89014–89025. [Google Scholar] [CrossRef]
  35. Cherif, H.; Benakcha, A.; Laib, I.; Chehaidia, S.E.; Menacer, A.; Soudan, B.; Olabi, A.G. Early Detection and Localization of Stator Inter-Turn Faults Based on Discrete Wavelet Energy Ratio and Neural Networks in Induction Motor. Energy 2020, 212, 118684. [Google Scholar] [CrossRef]
  36. Shih, K.J.; Hsieh, M.F.; Chen, B.J.; Huang, S.F. Machine Learning for Inter-Turn Short-Circuit Fault Diagnosis in Permanent Magnet Synchronous Motors. IEEE Trans. Magn. 2022, 58, 1–7. [Google Scholar] [CrossRef]
  37. Kumar, P.; Hati, A.S. Review on Machine Learning Algorithm Based Fault Detection in Induction Motors. Arch. Comput. Methods Eng. 2021, 28, 1929–1940. [Google Scholar] [CrossRef] [PubMed]
  38. Lang, W.; Hu, Y.; Gong, C.; Zhang, X.; Xu, H.; Deng, J. Artificial Intelligence-Based Technique for Fault Detection and Diagnosis of EV Motors: A Review. IEEE Trans. Transp. Electrif. 2022, 8, 384–406. [Google Scholar] [CrossRef]
  39. Cruz, S.M.A.; Marques Cardoso, A.J. Stator Winding Fault Diagnosis in Three-Phase Synchronous and Asynchronous Motors, by the Extended Park’s Vector Approach. IEEE Trans. Ind. Appl. 2001, 37, 1227–1233. [Google Scholar] [CrossRef]
  40. Sarkar, S.; Das, S.; Purkait, P. Wavelet and SFAM Based Classification of Induction Motor Stator Winding Short Circuit Faults and Incipient Insulation Failures. In Proceedings of the 2013 IEEE 1st International Conference on Condition Assessment Techniques in Electrical Systems, IEEE CATCON 2013—Proceedings; IEEE Computer Society, Kolkata, India, 6–8 December 2013; pp. 237–242. [Google Scholar]
  41. Zhao, Y.; Chen, Y.; Wang, L.; Ur Rehman, A.; Cheng, Y.; Zhao, Y.; Han, B.; Tanaka, T. Experimental Research and Feature Extraction on Stator Inter-Turn Short Circuit Fault in DFIG. In Proceedings of the 2016 IEEE International Conference on Dielectrics, ICD 2016, Montpellier, France, 3–7 July 2016; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2016; Volume 1, pp. 510–513. [Google Scholar]
  42. Wei, S.; Zhang, X.; Xu, Y.; Fu, Y.; Ren, Z.; Li, F. Extended Park’s Vector Method in Early Inter-Turn Short Circuit Fault Detection for the Stator Windings of Offshore Wind Doubly-Fed Induction Generators. IET Gener. Transm. Distrib. 2020, 14, 3905–3912. [Google Scholar] [CrossRef]
  43. Tallam, R.M.; Habetler, T.G.; Harley, R.G. Transient Model for Induction Machines with Stator Winding Turn Faults. IEEE Trans. Ind. Appl. 2002, 38, 632–637. [Google Scholar] [CrossRef]
  44. Berzoy, A.; Mohammed, O.A.; Restrepo, J. Analysis of the Impact of Stator Interturn Short-Circuit Faults on Induction Machines Driven by Direct Torque Control. IEEE Trans. Energy Convers. 2018, 33, 1463–1474. [Google Scholar] [CrossRef]
  45. Berzoy, A.; Mohamed, A.A.S.; Mohammed, O.A. Stator Winding Inter-Turn Fault in Induction Machines: Complex-Vector Transient and Steady-State Modelling. In Proceedings of the 2017 IEEE International Electric Machines and Drives Conference, IEMDC 2017, Miami, FL, USA, 21–24 May 2017; Institute of Electrical and Electronics Engineers Inc.: Piscataway Township, NJ, USA, 2017. [Google Scholar]
  46. Cover, T.M.; Hart, P.E. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar]
  47. Abraham, A. Artificial Neural Networks. In Handbook of Measuring System Design; Sydenham, P.H., Thorn, R., Eds.; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2005; ISBN 0470021438. [Google Scholar]
  48. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  49. Schneider, K.; Farge, M. Wavelets: Mathematical Theory. Encyclopedia of Mathematical Physics: Five-Volume Set; Academic Press: Cambridge, MA, USA, 2006; pp. 426–438. [Google Scholar] [CrossRef]
  50. He, Z. The Fundamental Theory of Wavelet Transform. Wavelet Analysis and Transient Signal Processing Applications for Power Systems; John Wiley & Sons: Hoboken, NJ, USA, 2016; pp. 21–44. [Google Scholar] [CrossRef]
  51. Konar, P.; Chattopadhyay, P. Multi-Class Fault Diagnosis of Induction Motor Using Hilbert and Wavelet Transform. Appl. Soft Comput. 2015, 30, 341–352. [Google Scholar] [CrossRef]
  52. Bouzida, A.; Touhami, O.; Ibtiouen, R.; Belouchrani, A.; Fadel, M.; Rezzoug, A. Fault Diagnosis in Industrial Induction Machines through Discrete Wavelet Transform. IEEE Trans. Ind. Electron. 2011, 58, 4385–4395. [Google Scholar] [CrossRef]
  53. Mathuranathan, V. Digital Modulations Using Matlab: Build Simulation Models from Scratch, 1st ed.; Independently Published: Chicago, IL, USA, 2020; ISBN 9781521493885. [Google Scholar]
  54. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  55. Li, L.; Jamieson, K.; Rostamizadeh, A.; Talwalkar, A. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. J. Mach. Learn. Res. 2018, 18, 1–52. [Google Scholar]
  56. O’Malley, T.; Bursztein, E.; Long, J.; Chollet, F.; Jin, H.; Invernizzi, L.; et al. KerasTuner. 2019. Available online: https://github.com/keras-team/keras-tuner (accessed on 5 November 2023).
  57. Fawcett, T. An Introduction to ROC Analysis. Pattern Recognit. Lett. 2006, 27, 861–874. [Google Scholar] [CrossRef]
Figure 1. Schematic of the three-phase windings of an induction machine with an inter-turn short circuit in phase $a$.
Figure 2. Neural network schematic diagram.
Figure 3. Stator current space vector locus: (a) interaction between positive and negative space vector components; (b) measurement of a healthy IM; and (c) measurement of a faulty IM.
Figure 4. Stator current space vector magnitude for different severity factors and fault resistances.
Figure 5. Experimental test rig.
Figure 6. DWT decomposition of the stator current for a faulty state: (a) the phase $a$ current and (b) the magnitude of the space vector ($|\mathbf{i}_s|$).
Figure 7. Dispersion plots for different combinations of features: (a) Impulse Factor, Shape Factor, and Crest Factor; (b) Impulse Factor, Inclination Factor, and Ratio; (c) $S_1$, $S_2$, and $S_3$; and (d) energy of the magnitude of the space vector ($|\mathbf{i}_s|$), Shape Factor, and Impulse Factor.
Figure 8. Confusion matrix for each classification method to distinguish between healthy and faulty states: (a) RF using set A; (b) RF using set B; (c) RF using set C; (d) SVM using set A; (e) SVM using set B; (f) SVM using set C; (g) kNN using set A; (h) kNN using set B; (i) kNN using set C; (j) FNN using set A; (k) FNN using set B; (l) FNN using set C; (m) RNN using set A; (n) RNN using set B; and (o) RNN using set C.
Figure 9. Confusion matrix for each classification method to categorize between healthy and faulty states with phase identification: (a) RF using set A; (b) RF using set B; (c) RF using set C; (d) SVM using set A; (e) SVM using set B; (f) SVM using set C; (g) kNN using set A; (h) kNN using set B; (i) kNN using set C; (j) FNN using set A; (k) FNN using set B; (l) FNN using set C; (m) RNN using set A; (n) RNN using set B; and (o) RNN using set C.
Table 1. Induction motor parameters.
$R_s$ = 2.9 Ω; $L_{ls}$ = 5.98 mH; $R_r$ = 0.97 Ω; $L_{lr}$ = 25.30 mH; $L_m$ = 264.02 mH; $J$ = 0.0026 kg·m²; $P$ = 3.
Table 2. Induction motor dimensions.
Axial length: 62.4 mm
Number of stator slots: 36
Stator inner diameter: 79.8 mm
Stator outer diameter: 142.6 mm
Rotor outer diameter: 79 mm
Turns per slot: 32
Total winding turns: 192
Number of pole pairs: 3
Table 3. Frequency decomposition for a DWT of ten levels.
Level 1: 3840–7680 Hz
Level 2: 1920–3840 Hz
Level 3: 960–1920 Hz
Level 4: 480–960 Hz
Level 5: 240–480 Hz
Level 6: 120–240 Hz
Level 7: 60–120 Hz
Level 8: 30–60 Hz
Level 9: 15–30 Hz
Level 10: 7.5–15 Hz
Table 4. Classification algorithms, hyperparameters, and metrics used for training.
RF. Hyperparameters: number of estimators and minimum number of samples per leaf. Metric: accuracy.
SVM. Hyperparameters: kernel (linear, polynomial, or radial basis function), regularization parameter C, weight of high-order versus low-order terms (coef0), and polynomial degree. Metric: accuracy.
kNN. Hyperparameters: number of neighbors, weights, metric, algorithm, and leaf size. Metric: accuracy.
FNN. Hyperparameters: activation function, number of layers, number of neurons, and learning rate. Metrics: accuracy, false negatives, and false positives.
RNN. Hyperparameters: activation function, number of layers, number of neurons, and learning rate. Metrics: accuracy, false negatives, and false positives.
Table 5. Classification method score comparison for each dataset to distinguish between healthy and faulty states. Values are accuracy (AUC) for datasets A, B, and C.
RF: A 1.0 (1.0); B 1.0 (1.0); C 1.0 (1.0)
SVM: A 0.89 (0.99); B 0.93 (0.88); C 0.96 (1.0)
kNN: A 1.0 (1.0); B 0.8 (0.91); C 0.96 (1.0)
FNN: A 0.96 (1.0); B 0.88 (0.93); C 1.0 (1.0)
RNN: A 0.96 (0.94); B 0.92 (0.90); C 0.96 (0.99)
Table 6. Classification method score comparison for each dataset to distinguish between healthy and faulty states, including phase localization. Values are per-class accuracy for the RF, SVM, kNN, FNN, and RNN, in that order.
Dataset A. Healthy: 0.93, 0.92, 0.87, 0.94, 1.0. Phase a: 1.0, 1.0, 0.86, 1.0, 1.0. Phase b: 1.0, 1.0, 1.0, 0.83, 0.86. Phase c: 0.5, 0.5, 1.0, 0.86, 1.0.
Dataset B. Healthy: 1.0, 1.0, 1.0, 1.0, 1.0. Phase a: 1.0, 1.0, 1.0, 1.0, 1.0. Phase b: 0.67, 1.0, 0.86, 0.83, 0.88. Phase c: 1.0, 1.0, 1.0, 1.0, 1.0.
Dataset C. Healthy: 1.0, 1.0, 1.0, 1.0, 1.0. Phase a: 1.0, 1.0, 1.0, 1.0, 1.0. Phase b: 1.0, 1.0, 1.0, 1.0, 0.80. Phase c: 1.0, 0.67, 0.33, 0.67, 0.67.
