Article

Feasibility of Six Metaheuristic Solutions for Estimating Induction Motor Reactance

Department of Electrical and Electronics Engineering, Faculty of Engineering, Hakkari University, 30000 Hakkari, Turkey
Mathematics 2024, 12(3), 483; https://doi.org/10.3390/math12030483
Submission received: 6 January 2024 / Revised: 30 January 2024 / Accepted: 31 January 2024 / Published: 2 February 2024

Abstract

Industry is the primary application area for induction machines. As such, it is essential to calculate the electrical parameters of induction machines accurately. The electrical parameters of induction motors can be evaluated empirically with DC tests, no-load tests, and locked-rotor tests; however, these tests are expensive and difficult to conduct. The equivalent-circuit parameters of induction machines can also be approximated accurately from the information supplied by machine manufacturers. This article successfully predicts the motor reactance (Xm) for both double- and single-cage models using artificial neural networks (ANNs). Although ANNs have been investigated in the literature, the ANN structures were trained without memorization. Besides the ANN, six approaches are proposed to address this problem: heap-based optimization (HBO), leagues championship algorithm (LCA), multi-verse optimization (MVO), osprey optimization algorithm (OOA), cuckoo optimization algorithm (COA), and sooty tern optimization algorithm (STOA). The effectiveness of the proposed approaches was compared with one another. Regarding the obtained outcomes, the proposed MVO-multi-layer perceptron (MLP) technique outperformed the other five methods in reactance prediction, with R2 of 0.99598 and 0.9962 and RMSE of 20.31492 and 20.80626 in the testing and training phases, respectively. The proposed ANNs produced excellent results for the projected model. The novelty lies in these methods' ability to tackle the complexities and challenges associated with induction motor reactance optimization, providing innovative approaches for finding optimal or near-optimal solutions. As researchers continue to explore and refine these techniques, their impact on motor design and efficiency will likely grow, driving advancements in electrical engineering.

1. Introduction

Induction machines are suitable for harsh operating environments due to their robust and uncomplicated design. Because of their adaptable, affordable, and dependable architecture, direct grid supply, and single- or multi-phase manufacturing capacity, induction machines of both small and large power ratings are widely utilized in daily life and in various industries [1]. Among the most favored basic varieties of induction machines are squirrel-cage asynchronous machines. Compared to comparable permanent magnet machines, squirrel-cage induction machines are easier to assemble, less expensive, and require no driver [2]. Understanding the equivalent circuit model and its parameters is crucial to operating squirrel-cage induction machines correctly and consistently. Whether the machine is to be used in real-world scenarios, examined and tested in laboratory or simulation settings, or fully optimized, these parameters must be known accurately or very close to reality [3]. Maximum torque and current data, together with no-load and locked-rotor tests or full-load tests, are utilized to calculate the equivalent circuit parameters of squirrel-cage machines [4]. However, running these tests is not always safe and feasible, especially for large, high-power machines. Machine manufacturers provide information about machine specifications in their catalogs [5].
However, the manufacturer’s data does not provide the machine’s corresponding circuit variables. It is possible to approximate the device’s equivalent circuit parameters by using information provided by the manufacturer. Recent studies on asynchronous machine parameter estimation have seen a sharp rise. The literature presents a few important and thorough investigations of methods based on calculating the machine parameters and performance of asynchronous machines [6,7,8]. A novel approach to determining the double-cage method parameters of squirrel-cage induction motors was presented in a study. The machine impedance was measured at several places to approximate the motor characteristics [9]. Through the use of machine labels and manufacturer catalog information, a new, simple, and non-repetitive methodology for deriving equivalent circuit characteristics of an asynchronous machine was described in another study. A slip function was used to simulate the variation in the rotor parameters [10].
The motor variables’ accuracy utilized by the control algorithm determines how successful the controller is, which is why parameter determination of the induction motor is a significant topic in the literature on electric drives [11]. Squirrel-cage induction motor models are often constructed using single- or double-cage versions. These methods’ variables can be acquired in two different models [12]:
  • Using information regarding the maximum torque, current, and comprehensive load test results [13].
  • Making use of the findings from the locked-rotor and no-load evaluations.
Accurately forecasting the behavior and efficiency of induction motors has challenged designers. Efficiency forecasting has become more accurate as electromagnetic field problems are solved numerically [14]. The finite element method is one of the most effective and widely utilized machine analysis and design tools [11,15]. However, the numerical approach's primary deficiency is its high processing time and resource requirements [16]. The finite element method can be used during machine design to examine performance and optimize the motor design iteratively.
Moreover, the complete computing procedure may be repeated while creating a special machine that satisfies various constraints and requirements [17]. Consequently, developing a model replicating finite element data and determining the necessity and adequate machine efficiency more quickly while retaining high precision would be an invaluable and crucial design instrument [18]. Artificial neural networks (ANNs) fit such a method well [19].
As previously mentioned, an induction motor’s reactance is a crucial factor influencing its efficiency and performance. It represents the resistance to the alternating current flow provided by the motor’s windings. In this way, the ANNs can be applied to several parts of the study of induction motors, such as the motor’s reactance prediction and modeling. ANNs are robust computational models modeled after the human brain’s composition and operation [20]. They are made up of artificial neurons that are networked together and can learn from data to anticipate the future or carry out tasks. Recently, various techniques for determining the characteristics of induction motors have been developed by researchers [12,21,22]. One of these techniques was used to identify the characteristics of induction motors operating in the electromechanical mode using the transient stator current.
Regarding the squirrel-cage model of induction motors presented in Ref. [23], new parameters are determined by estimating the mechanical speed and instantaneous electrical power using a free acceleration test. Authors have recently proposed a two-step method to detect the characteristics of an induction motor using a nonlinear method that incorporates magnetic saturation [24]. In Ref. [25], a genetic algorithm (GA) is presented as a unique method for predicting the electrical parameters of an equivalent circuit for a three-phase induction motor. In [26], the topic of parameter determination for induction motors is covered, and the proposed artificial bee colony (ABC) algorithm is compared with the most recent approaches. The shuffled frog-leaping algorithm, initially introduced in Ref. [27], and standard manufacturer data can be utilized to approximate the variables of the double-cage asynchronous machine. Depending on the number of physical factors, the variables of the induction motor were obtained from a poor initial approximation using nonlinear regression approaches based on a two-step least-squares strategy.
It is crucial to have a precise understanding of the electrical properties of induction machines due to their wide applications in industry [28]. The electrical equivalent circuit characteristics of the induction motor are necessary to compute the motor’s efficiency [29], load variations’ response, control driver parameters [30,31], and estimate breakdown behavior [32]. Consequently, determining electrical and mechanical parameters requires accurate parameter estimations and induction motor modeling [33]. However, their nonlinear models make obtaining these motors’ mathematical methods somewhat difficult [34]. This study examined how training affected the parameter determination of squirrel-cage IMs.
This research improves the modeling of IMs utilizing equivalent circuits, neural networks, and numerical simulations. ANNs have been used to solve numerous engineering problems [35,36]. A neural network was first utilized to model a single-sided linear IM [38] before being expanded to accommodate polyphase rotating induction motors [37]. In these early research articles, the neural network mapped the input machine's geometrical design variables to the output machine's efficiency. The closely linked and interdependent input–output variables are one of this model's drawbacks; this makes it challenging and inefficient to create data patterns and train the network. The equivalent circuit approach can considerably reduce this coupling effect. It links machine performance to geometric design elements through circuit variables. The circuit parameters of an induction motor can all be connected to particular machine parts [39]. As a result, the neural network can be trained more effectively. It is possible to predict and measure the impacts of the airgap, stator, and rotor independently on the behavior and performance of a machine. Using this method enables the necessary efficiency attributes to be identified swiftly. This model retains a high level of accuracy because the circuit variables are estimated using the finite element approach, which considers the complex geometry and excitations as well as the non-linear magnetic characteristics.
Using the novel approach proposed in this article, the parameters of the double- and single-cage models of induction motors are estimated. The proposed method uses six hybrid metaheuristic-ANN techniques. To do this, manufacturer data [4] for twenty induction motors with two to eight poles and a 400 V line voltage are used.

2. Structure and Equivalent Circuit Models of Squirrel-Cage Induction Motors

Induction motors are preferred in industry because they require less maintenance than many other motors, in addition to their simple, robust, and inexpensive structures. The structure of squirrel-cage induction motors, one of the most preferred types, is likewise straightforward, robust, and practical. In addition to these features, their load-dependent speed regulation is also quite good. Squirrel-cage rotors can be manufactured as single- or double-cage rotors. The rotor of a squirrel-cage machine is manufactured by cutting laminated steel sheets into the appropriate geometry and pressing them into blocks. The cage structure is formed by placing aluminum or copper bars, using casting or other techniques, in the slots opened along the surface of the rotor block [40]. A rotor formed in this way is called a squirrel-cage rotor. The squirrel-cage bars may have different combinations and geometries. The stator of the squirrel-cage induction motor and the double- and single-cage rotor structures are illustrated in Figure 1.
Obtaining equivalent circuit parameters with a high degree of accuracy from the manufacturer’s data sheet of a squirrel-cage machine allows electrical and mechanical analysis of the machine without purchasing or using it directly in a system—in other words, without risking it. This gives us significant advantages in terms of cost, time, and practicality in many applications.
The induction motor is one of the most commonly utilized motor types in high-efficiency drive applications, whose driver control schemes require full knowledge of some, if not all, of the induction motor parameters [41]. It is necessary to establish the equivalent circuit model of the machine and know the parameters of this model correctly for detecting critical operating points of the machine, performance analysis, control, protection, fault detection, and operation [7]. Depending on the single- or double-cage rotor structure, squirrel-cage induction motors can be modeled with a single- or double-cage equivalent circuit. The most commonly utilized equivalent circuit model of the squirrel-cage induction motor is the constant-parameter equivalent circuit model with a single-cage rotor structure [11]. Contrary to popular belief, obtaining the equivalent circuit parameters is complex, and it is even more challenging to find typical values of the double-cage induction motor parameters [42,43]. Since the single-cage equivalent circuit model does not represent induction motors adequately, the double-cage equivalent circuit model should be utilized. The equivalent circuit models of the induction motor with single- and double-cage rotor structures are shown in Figure 2 and Figure 3. Under steady-state conditions, the equivalent circuit model consists of five electrical parameters, namely Rr, Rs, Xsd, Xm, and Xrd, for the single-cage structure, and seven electrical parameters, namely Rs, R2, R1, Xsd, Xm, X2d, and X1d, for the double-cage structure. Among these parameters, X1d and R1 describe the inner cage, and X2d and R2 describe the outer cage [4,44].
Equations (1)–(10) present the mathematical model of the induction motor [45]. The mathematical modeling of an induction motor involves expressing its electrical and mechanical characteristics through a set of equations. An induction motor is an alternating current (AC) motor widely used in various industrial applications. The basic mathematical model consists of electrical equations representing the stator and rotor windings and mechanical equations describing the motor's motion. A proper predictive model can avoid complicated mathematical processing. The proposed predictive network obtained in this study can be implemented as a practical solution by producing design charts.
\[
\begin{bmatrix} V_{s0} \\ V_{sd} \\ V_{sq} \end{bmatrix}
= \sqrt{\frac{2}{3}}
\begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\cos\theta_s & \cos\left(\theta_s - \frac{2\pi}{3}\right) & \cos\left(\theta_s + \frac{2\pi}{3}\right) \\
-\sin\theta_s & -\sin\left(\theta_s - \frac{2\pi}{3}\right) & -\sin\left(\theta_s + \frac{2\pi}{3}\right)
\end{bmatrix}
\begin{bmatrix} V_{as} \\ V_{bs} \\ V_{cs} \end{bmatrix}
\tag{1}
\]
\[
\begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix}
= \sqrt{\frac{2}{3}}
\begin{bmatrix}
\frac{1}{\sqrt{2}} & \cos\theta_s & -\sin\theta_s \\
\frac{1}{\sqrt{2}} & \cos\left(\theta_s - \frac{2\pi}{3}\right) & -\sin\left(\theta_s - \frac{2\pi}{3}\right) \\
\frac{1}{\sqrt{2}} & \cos\left(\theta_s + \frac{2\pi}{3}\right) & -\sin\left(\theta_s + \frac{2\pi}{3}\right)
\end{bmatrix}
\begin{bmatrix} i_{s0} \\ i_{sd} \\ i_{sq} \end{bmatrix}
\tag{2}
\]
\[
R_E = R_s + R_r \left( \frac{L_m}{L_r} \right)^2
\tag{3}
\]
\[
\sigma = 1 - \frac{L_m^2}{L_s L_r}
\tag{4}
\]
\[
V_{sd} = R_s i_{sd} - \omega_s \left( \sigma L_s i_{sq} + \frac{L_m}{L_r} \psi_{rq} \right) + \frac{d}{dt} \left( \sigma L_s i_{sd} + \frac{L_m}{L_r} \psi_{rd} \right)
\tag{5}
\]
\[
V_{sq} = R_s i_{sq} + \omega_s \left( \sigma L_s i_{sd} + \frac{L_m}{L_r} \psi_{rd} \right) + \frac{d}{dt} \left( \sigma L_s i_{sq} + \frac{L_m}{L_r} \psi_{rq} \right)
\tag{6}
\]
\[
\frac{d\psi_{rd}}{dt} = R_r \frac{L_m}{L_r} i_{sd} - \frac{R_r}{L_r} \psi_{rd} + \omega_r \psi_{rq}
\tag{7}
\]
\[
\frac{d\psi_{rq}}{dt} = R_r \frac{L_m}{L_r} i_{sq} - \omega_r \psi_{rd} - \frac{R_r}{L_r} \psi_{rq}
\tag{8}
\]
\[
T_e - T_L = J \frac{d\omega}{dt} + B \omega
\tag{9}
\]
\[
T_e = p \frac{L_m}{L_r} \left( i_{sq} \psi_{rd} - i_{sd} \psi_{rq} \right)
\tag{10}
\]
For balanced systems, V_s0 and i_s0 are zero. Here, i_a, i_b, i_c and V_as, V_bs, V_cs are the stator phase currents and voltages; V_sd and V_sq are the d-q axis stator voltages; i_sd and i_sq are the d-q axis stator currents; ψ_rd and ψ_rq are the d-q axis rotor fluxes; R_r is the rotor winding resistance referred to the stator; R_E is the equivalent resistance; σ is the leakage factor; R_s is the stator phase winding resistance; ω_s and ω_r are the angular frequencies of the stator and rotor currents; ω is the angular speed of the rotor; T_e is the electromagnetic torque; T_L is the load torque; B is the damping coefficient; J is the moment of inertia; L_s is the stator inductance; L_r is the rotor inductance referred to the stator; L_m is the magnetizing inductance; and p is the number of pole pairs.
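To make the rotor-flux dynamics above concrete, the following minimal sketch integrates the two rotor-flux differential equations with a forward-Euler step. All parameter values are illustrative placeholders, not data from this study, and the rotor speed is held at zero for simplicity.

```python
# Illustrative sketch of the rotor-flux equations integrated with a
# forward-Euler step. All motor parameters below are placeholder values
# for demonstration, not data from the paper.
Rr, Lr, Lm = 1.2, 0.25, 0.23   # rotor resistance/inductance, magnetizing inductance
omega_r = 0.0                   # rotor electrical speed (held fixed here)

def rotor_flux_step(psi_rd, psi_rq, i_sd, i_sq, dt):
    """One Euler step of d/dt psi_rd and d/dt psi_rq."""
    dpsi_rd = Rr * (Lm / Lr) * i_sd - (Rr / Lr) * psi_rd + omega_r * psi_rq
    dpsi_rq = Rr * (Lm / Lr) * i_sq - omega_r * psi_rd - (Rr / Lr) * psi_rq
    return psi_rd + dt * dpsi_rd, psi_rq + dt * dpsi_rq

# With a constant d-axis current the d-axis flux settles toward Lm * i_sd.
psi_d, psi_q = 0.0, 0.0
for _ in range(20000):
    psi_d, psi_q = rotor_flux_step(psi_d, psi_q, i_sd=5.0, i_sq=0.0, dt=1e-3)
print(psi_d, psi_q)
```

At steady state the d-axis rotor flux converges to L_m · i_sd, which gives a quick sanity check on the signs of the reconstructed equations.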

3. Established Dataset

The data utilized in this study are taken from Ref. [4]. The manufacturer ratings, displayed in Table 1, are used to calculate the experimental data with a numerical method presented by Monjo, Kojooyan-Jafari [46] and Pedra and Corcoles [47]. The models' input parameters are the rated power P (kW), the full-load power factor cos( ρ FL), the ratio of maximum torque to full-load torque T m / T F L , the ratio of starting torque to full-load torque T S T / T F L , the ratio of starting current to full-load current I S T / I F L , the angular velocity ω F L (rpm), and the full-load efficiency η F L . The induction motor's output variable is X m . The empirical data are split into two sets to train and test the networks: eighty percent of the empirical data are employed for network training, and the remaining twenty percent are employed to assess the performance of the trained models. To find the best ANN structures, various architectures (networks with different numbers of concealed layers and neurons in each concealed layer) were tested and optimized in this research.
The experimental data are randomly split to form the training set (the larger data set) and the assessment set. The various ANN structures are first developed using the training data (the training procedure). The precision of the developed (trained) network is then evaluated using the assessment data set, which is unknown to the network. A computer program was created in MATLAB R2020 to train the ANN models. Table 2 displays the optimal architectures of the ANN method.
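The random 80/20 split described above can be sketched as follows; the 20 motors are represented here by synthetic index records, not the Table 1 data.

```python
import random

# Minimal sketch of the random 80/20 train/test split described above.
# The 20 induction motors are represented only by their indices here.
random.seed(42)
motors = list(range(20))            # indices of the 20 induction motors
random.shuffle(motors)
split = int(0.8 * len(motors))      # 80% for training
train_idx, test_idx = motors[:split], motors[split:]
print(len(train_idx), len(test_idx))
```

With 20 motors this yields 16 training samples and 4 test samples, matching the proportions stated in the text.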

4. Methodology

The graphical depiction of the techniques used in the current article is shown in Figure 4. The first stage of this study is to collect the data utilized in the simulations and analyses. These data include P (kW), cos( ρ FL), T m / T F L , T S T / T F L , I S T / I F L , ω F L (rpm), and η F L . To obtain the highest possible reliability, the model employs six hybrid methods: the MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP networks.

4.1. Artificial Neural Network (ANN)

A system relying on how biological neural networks function is known as an ANN [48]. The most popular neural networks comprising numerous processing units called neurons are multi-layer perceptron (MLP) networks [4,49]. An MLP network consists of at least three layers, usually called the layer of input, the concealed layer, and the layer of output, as demonstrated in Figure 5.
The number of neurons in the MLP network varies from layer to layer. The input of the dth node in the concealed layer is given as follows [4]:
\[
\rho_d = a_d + \sum_{k=1}^{i} X_k w_{kd}, \qquad d = 1, 2, \ldots, n
\tag{11}
\]
where X represents the inputs, n is the number of neurons in the concealed layer, i is the number of neurons in the input layer, a is the bias, and w is the weighting factor [4]. The output of the dth neuron of the concealed layer is given as follows:
\[
\theta_d = f(\rho_d)
\tag{12}
\]
where f represents the concealed layer’s activation function.
The output layer’s cth neuron output is presented as follows [4]:
\[
Y_c = b_c + \sum_{k=1}^{i} \theta_k w_{kc}, \qquad c = 1, 2, \ldots, m
\tag{13}
\]
Here, b is the bias term, w is the weighting factor, m is the number of neurons in the output layer, and i is the number of neurons in the concealed layer.
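The three expressions above amount to a single forward pass through an MLP: a hidden pre-activation, an activation function, and a linear output layer. The sketch below implements them directly; the weights, biases, and the sigmoid activation are arbitrary illustrative choices, not the trained networks from this study.

```python
import math

# Direct sketch of the MLP forward pass above: hidden pre-activation,
# sigmoid activation, and linear output. All weights/biases are arbitrary
# illustrative numbers, not the trained networks from the paper.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(X, W_h, a, W_o, b):
    # rho_d = a_d + sum_k X_k * w_kd, then theta_d = f(rho_d)
    theta = [sigmoid(a[d] + sum(X[k] * W_h[k][d] for k in range(len(X))))
             for d in range(len(a))]
    # Y_c = b_c + sum_k theta_k * w_kc
    return [b[c] + sum(theta[k] * W_o[k][c] for k in range(len(theta)))
            for c in range(len(b))]

# 2 inputs -> 3 concealed neurons -> 1 output
X = [0.5, -1.0]
W_h = [[0.1, 0.2, -0.3], [0.4, -0.5, 0.6]]
a = [0.0, 0.1, -0.1]
W_o = [[1.0], [-1.0], [0.5]]
b = [0.2]
print(mlp_forward(X, W_h, a, W_o, b))
```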

4.2. Hybrid Model Development

The Levenberg–Marquardt (LM) algorithm, which combines the gradient descent rule and the Gauss–Newton algorithm, is used to train neural networks [50]. The LM algorithm removes the constraints of these two techniques. However, as earlier research has shown, the ANN has many drawbacks, including premature convergence and becoming stuck in local minima [51,52]. Combining evolutionary algorithms (EAs), which belong to bio-inspired and evolutionary computing, with ANNs can partially offset these weaknesses. EAs are nature-inspired mechanisms that solve problems through procedures mimicking the behavior of living organisms. In this section, the six hybrid models (i.e., heap-based optimization (HBO), leagues championship algorithm (LCA), multi-verse optimization (MVO), osprey optimization algorithm (OOA), cuckoo optimization algorithm (COA), and sooty tern optimization algorithm (STOA)) combined with the conventional ANN are illustrated. The open-source codes for these algorithms can be found at mathworks.com. The calculation process was (i) to optimize the ANN (e.g., through a trial-and-error process, changing the number of hidden layers and neurons) and (ii) to combine the optimized ANN network with the proposed metaheuristic solutions. In each section, several statistical indices, namely the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2), were used to assess the accuracy of the proposed predictive networks. The equations of these indices are given as follows:
\[
RMSE = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( y_{measured,i} - y_{predicted,i} \right)^2 \right]^{0.5}
\tag{14}
\]
\[
MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_{measured,i} - y_{predicted,i} \right|
\tag{15}
\]
\[
R^2 = 1 - \frac{\sum_{i=1}^{N} \left( y_{predicted,i} - y_{measured,i} \right)^2}{\sum_{i=1}^{N} \left( y_{measured,i} - \bar{y}_{measured} \right)^2}
\tag{16}
\]
where N is the number of observations, y_measured,i is the measured value of the ith data point, y_predicted,i is the predicted value of the ith data point, and ȳ_measured is the average of the measured values. For an ideal model, RMSE equals 0 and R2 equals 1.
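The three indices above translate directly into code. The sketch below implements them with a small set of made-up measured/predicted values for illustration only.

```python
# Sketch implementations of the three statistical indices above.
def rmse(measured, predicted):
    n = len(measured)
    return (sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n) ** 0.5

def mae(measured, predicted):
    return sum(abs(m - p) for m, p in zip(measured, predicted)) / len(measured)

def r2(measured, predicted):
    mean_m = sum(measured) / len(measured)
    ss_res = sum((p - m) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

# Made-up values purely for demonstration, not results from the paper.
y_true = [100.0, 120.0, 140.0, 160.0]
y_pred = [102.0, 118.0, 141.0, 158.0]
print(rmse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```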

4.2.1. Multi-Verse Optimization (MVO)

The new metaheuristic MVO method of this article is founded on the three cosmological concepts of black holes, white holes, and wormholes. Local search, exploitation, and exploration are implemented based on analytic models of these concepts. The objects of the universe serve as the algorithm's solution variables, because the algorithm and the universe are analogous [53].
In the literature, the roulette-wheel scheme is employed to determine the best solution in the analytical model of the black and white hole tunnels and to exchange objects between the universes [53]. A universe in the solution space is represented randomly as follows:
\[
U =
\begin{bmatrix}
x_1^1 & x_1^2 & \cdots & x_1^d \\
x_2^1 & x_2^2 & \cdots & x_2^d \\
\vdots & \vdots & \ddots & \vdots \\
x_n^1 & x_n^2 & \cdots & x_n^d
\end{bmatrix}
\tag{17}
\]
Here, U represents the set of universes, n is the number of search agents (universes), d is the number of control parameters (dimensions), and x_i^j is the jth parameter of the ith universe, which takes the following form [53]:
\[
x_i^j =
\begin{cases}
x_k^j, & r_1 < NI(U_i) \\
x_i^j, & r_1 \ge NI(U_i)
\end{cases}
\tag{18}
\]
where U_i represents the ith universe, NI represents the normalized inflation rate, r_1 is a random number in the [0, 1] range, and x_k^j represents the jth variable of the kth universe, chosen by employing the roulette-wheel scheme.
The mechanism by which objects are transmitted through wormholes [53] is demonstrated as follows:
\[
x_i^j =
\begin{cases}
\begin{cases}
x_j + TDR \cdot \left( (ub_j - lb_j) \cdot r_4 + lb_j \right), & r_3 < 0.5 \\
x_j - TDR \cdot \left( (ub_j - lb_j) \cdot r_4 + lb_j \right), & r_3 \ge 0.5
\end{cases}
& r_2 < WEP \\
x_i^j, & r_2 \ge WEP
\end{cases}
\tag{19}
\]
where x_j is the jth variable of the optimum universe, WEP (wormhole existence probability) and TDR (traveling distance rate) are coefficients, ub_j and lb_j indicate the upper and lower limits of the jth variable, x_i^j indicates the jth parameter of the ith universe, and r_2, r_3, and r_4 are random numbers in [0, 1].
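The wormhole mechanism above can be sketched for a single object of a single universe as follows; the bounds and coefficient values are illustrative only.

```python
import random

# Sketch of the wormhole mechanism above: with probability WEP a universe's
# j-th object teleports to a point near the best universe found so far,
# otherwise it is left unchanged. Bounds and coefficients are illustrative.
random.seed(3)

def wormhole_update(x_ij, best_xj, lb_j, ub_j, wep, tdr):
    r2, r3, r4 = random.random(), random.random(), random.random()
    if r2 >= wep:
        return x_ij                         # no wormhole for this object
    offset = tdr * ((ub_j - lb_j) * r4 + lb_j)
    return best_xj + offset if r3 < 0.5 else best_xj - offset

x = wormhole_update(x_ij=0.3, best_xj=0.7, lb_j=0.0, ub_j=1.0, wep=1.0, tdr=0.5)
print(x)
```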
The WEP increases the exploitation over the iterations and is given as follows [53]:
\[
WEP = min + t \times \frac{max - min}{T_{max}}
\tag{20}
\]
where t indicates the current iteration, T_max indicates the maximum number of iterations, and min and max indicate the minimum and maximum values of the WEP coefficient. TDR is obtained as follows [53]:
\[
TDR = 1 - \frac{t^{1/p}}{T_{max}^{1/p}}
\tag{21}
\]
where p defines the exploitation accuracy over the iterations; as p increases, exploitation becomes more precise and the local search faster.
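The two coefficient schedules above can be sketched as follows. The values WEPmin = 0.2, WEPmax = 1, and p = 6 are defaults commonly used with MVO and are assumed here, not taken from the paper.

```python
# Sketch of the MVO coefficient schedules above: WEP grows linearly while
# TDR shrinks over the iterations. The min/max/p values are commonly used
# MVO defaults, assumed here rather than taken from the paper.
def wep(t, t_max, wep_min=0.2, wep_max=1.0):
    return wep_min + t * (wep_max - wep_min) / t_max

def tdr(t, t_max, p=6.0):
    return 1.0 - (t ** (1.0 / p)) / (t_max ** (1.0 / p))

T_max = 100
print(wep(1, T_max), wep(T_max, T_max))   # exploitation probability rises
print(tdr(1, T_max), tdr(T_max, T_max))   # travel distance shrinks to 0
```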

4.2.2. Cuckoo Optimization Algorithm (COA)

In 2009, Yang and Deb [54] developed the nature-inspired meta-heuristic approach of COA, the cuckoo optimization algorithm. The algorithm is based on the obligate brood parasitism of several cuckoo species, which lay their eggs in the nests of other host birds. Female cuckoos of the parasitic species have developed a specialization in imitating the color and pattern of the eggs of their host species. This reduces the likelihood of abandonment and boosts their reproductive capacity. Three idealized rules are established to explain their application in optimization [55].
A metaheuristic algorithm that randomly explores for optimal solutions in a search space relies heavily on randomization. This randomization is achieved with random walks, which consist of sequences of random steps drawn from a random distribution. To maximize the effectiveness of resource searches in unpredictable circumstances, the random walk in the cuckoo optimization algorithm employs Lévy flights, whose step lengths are drawn from the Lévy distribution and can be very long. As Equation (22) shows, the Lévy flight provides a walk with random step lengths drawn from Lévy's power-law distribution [54].
\[
\text{Lévy} \sim u = t^{-\lambda}, \qquad 1 < \lambda \le 3
\tag{22}
\]
In Equation (22), u denotes the step length drawn from the Lévy distribution. New solutions are generated around the best solution obtained so far, which accelerates the local search [54].
For an objective function f(x) with x = (x_1, …, x_d)^T and an initial cuckoo population of n nests x_i, a new solution is generated by a Lévy flight and its fitness is compared with that of the eggs remaining in the nests. A new egg substitutes an existing one if the newly developed solution is better than the one in the randomly selected nest. The best solutions are maintained for further analysis, while a fraction of the worst nests is discarded.
The new solution x_i^(t+1) is generated from a Lévy flight by employing the following equation [54]:
\[
x_i^{(t+1)} = x_i^{(t)} + \alpha \oplus \text{Lévy}(\lambda)
\tag{23}
\]
where α > 0 represents the step size related to the scale of the considered problem (here taken as one), and ⊕ denotes entry-wise multiplication.
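A minimal sketch of the heavy-tailed random walk in the spirit of the update above is given below. The step lengths are drawn from a power-law (Lévy-like) distribution via inverse-transform sampling; this is an illustrative stand-in, not the Mantegna sampler often used in published cuckoo-search implementations, and all parameter values are placeholders.

```python
import random

# Heavy-tailed random walk sketch: step lengths drawn from a power-law
# (Levy-like) distribution t^(-lambda) via inverse-transform sampling.
# Illustrative stand-in only; alpha and lambda values are placeholders.
random.seed(0)

def levy_step(lam=1.5):
    u = 1.0 - random.random()               # u in (0, 1]
    return u ** (-1.0 / (lam - 1.0))        # Pareto tail with exponent lam

def cuckoo_move(x, alpha=0.01, lam=1.5):
    # x_i(t+1) = x_i(t) + alpha (+) Levy(lambda), applied entry-wise
    return [xi + alpha * levy_step(lam) * random.choice([-1.0, 1.0]) for xi in x]

x = [0.0, 0.0, 0.0]
for _ in range(10):
    x = cuckoo_move(x)
print(x)
```

Most steps are small, but the heavy tail occasionally produces very long jumps, which is exactly what makes Lévy flights effective for global exploration.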

4.2.3. Heap-Based Optimization (HBO)

Askari [56] developed the heap-based optimization algorithm, which drew inspiration from human social attitudes toward organizational hierarchies. This algorithm emulates the corporate rank hierarchy (CRH). According to the CRH, team members in a given organization organize themselves hierarchically to accomplish a certain goal. The suggested HBO technique bases its hierarchical search candidate arrangement on fitness utilizing the CRH idea. The hierarchy is constructed using the heap-based data structure.
The entire concept involves three phases in addition to the modeling of the CRH: (i) modeling the cooperation among the subordinators and their direct manager; (ii) modeling the employee interaction, and (iii) modeling the subordinator’s self-contribution to accomplish the desired function. These steps are explained in brief in the following subsections.

The CRH Concept’s Modeling

The corporate rank hierarchy is constructed using a non-linear tree-shaped data structure. In the suggested method, the swarm is organized as the CRH. Throughout the simulation, a heap node represents a search agent in the search space, and the heap node's key is the fitness of the optimization problem's objective function for that agent. The population index of the search candidate is stored as the heap node's value.

The Interaction’s Modeling with the Direct Manager

In large organizations with centralized structures, senior leadership enforces regulations and laws uniformly on employees, and employees must follow directions from their superiors. This phase can be described mathematically by updating the search candidates' positions [56]:
\[
x_i^k(t+1) = B^k + \gamma \lambda^k \left| B^k - x_i^k(t) \right|
\tag{24}
\]
where B is the parent node, t and k are the current iteration and the component index, respectively, and x is the search agent's position. The term (2r − 1), which denotes the kth component of the randomly generated vector λ, is described by Equation (25) [56].
\[
\lambda^k = 2r - 1
\tag{25}
\]
where r is a random number uniformly distributed within the interval [0, 1]. γ is computed as follows [56]:
\[
\gamma = \left| 2 - \frac{t \bmod (T/C)}{T/(4C)} \right|
\tag{26}
\]
where T is the maximum number of iterations, and C is a user-defined parameter for the case under study, which depends on the number of iterations through the following formula [56]:
\[
C = \frac{T}{25}
\tag{27}
\]

Modeling of the Interactivity between the Subordinators

Colleagues who work together as subordinators in a given organization accomplish official responsibilities. In the suggested procedure, the nodes at the same level of the heap are regarded as colleagues. Equation (28) presents the updated position x_i^k of each search agent depending on a randomly picked colleague S_r [56].
\[
x_i^k(t+1) =
\begin{cases}
S_r^k + \gamma \lambda^k \left| S_r^k - x_i^k(t) \right|, & f(S_r) < f\left(x_i(t)\right) \\
x_i^k(t) + \gamma \lambda^k \left| S_r^k - x_i^k(t) \right|, & f(S_r) \ge f\left(x_i(t)\right)
\end{cases}
\tag{28}
\]

The Employee’s Self-Contribution Modeling

Equation (29) describes how each subordinator in the organization contributes on their own [56]:
\[
x_i^k(t+1) = x_i^k(t)
\tag{29}
\]

Position Update

The three position-update formulas covered in the preceding subsections are combined into one formula in this subsection. A roulette wheel was employed to balance the exploitation and exploration stages. The equilibrium between these phases was established by applying three probabilities, P_1, P_2, and P_3. The first probability, P_1, by which the search agents keep their positions within the population, may be stated as follows [56]:
\[
P_1 = 1 - \frac{t}{T}
\tag{30}
\]
The second probability, P_2, was determined as follows [56]:
\[
P_2 = P_1 + \frac{1 - P_1}{2}
\tag{31}
\]
In the end, the probability P 3 was determined using Equation (32) [56]:
\[
P_3 = P_2 + \frac{1 - P_1}{2} = 1
\tag{32}
\]
Equation (33) presents the position upgrade’s generic equation for the suggested HBO [56].
\[
x_i^k(t+1) =
\begin{cases}
x_i^k(t), & P \le P_1 \\
B^k + \gamma \lambda^k \left| B^k - x_i^k(t) \right|, & P_1 < P \le P_2 \\
S_r^k + \gamma \lambda^k \left| S_r^k - x_i^k(t) \right|, & P_2 < P \le P_3 \ \text{and} \ f(S_r) < f\left(x_i(t)\right) \\
x_i^k(t) + \gamma \lambda^k \left| S_r^k - x_i^k(t) \right|, & P_2 < P \le P_3 \ \text{and} \ f(S_r) \ge f\left(x_i(t)\right)
\end{cases}
\tag{33}
\]
where P is a random number in the interval [0, 1].
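The probabilities P1 and P2 and the combined position update above can be sketched for a single vector component as follows. B_k is the parent's component and S_rk a randomly picked colleague's component; all numeric values are illustrative.

```python
import random

# Sketch of the HBO probability thresholds and combined position update
# above, for a single vector component. B_k is the parent's component and
# S_rk a random colleague's; all numeric values are illustrative.
random.seed(1)

def gamma(t, T, C):
    return abs(2.0 - (t % (T / C)) / (T / (4.0 * C)))

def hbo_update(x_ik, B_k, S_rk, f_sr_better, t, T, C):
    p = random.random()
    p1 = 1.0 - t / T
    p2 = p1 + (1.0 - p1) / 2.0
    lam = 2.0 * random.random() - 1.0           # lambda^k = 2r - 1
    g = gamma(t, T, C)
    if p <= p1:                                 # keep current position
        return x_ik
    if p <= p2:                                 # follow the direct manager
        return B_k + g * lam * abs(B_k - x_ik)
    if f_sr_better:                             # follow a better colleague
        return S_rk + g * lam * abs(S_rk - x_ik)
    return x_ik + g * lam * abs(S_rk - x_ik)    # self-centered move

print(hbo_update(0.5, 1.0, 0.8, True, t=90, T=100, C=4.0))
```

Early in the run p1 is large, so most agents keep their positions (exploration); as t approaches T, p1 shrinks and the manager/colleague moves dominate (exploitation).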

4.2.4. Leagues Championship Algorithm (LCA)

Kashan [57] introduced the LCA technique, a novel metaheuristic approach for solving continuous optimization problems. It functions similarly to other nature-inspired algorithms, with a population of solutions evolving towards the best one. With n players equal to the number of variables, each team (individual) in the league (a swarm of L teams) holds a workable solution to the problem being solved. Following the construction of an artificial weekly league schedule, team i takes on team j, each with a playing strength corresponding to its fitness value. The clubs play each other in pairs for S × (L − 1) weeks according to the league schedule, where S is the number of seasons and t is the week. The outcome of play determines who wins and who loses. Each side forms a new team formation based on the outcomes of the previous week's matches to prepare for the upcoming game. The current best team formation is replaced by the anticipated productive team formation, chosen under the direction of a team formation with superior playing strength.

A League Schedule’s Generation

The first stage is to create a schedule that includes every game for every season. Under a single round-robin schedule, every team in the LCA plays every other team once per season. There are L(L − 1)/2 matches per season, and L must be an even integer. The championship then continues for S further seasons. Figure 6 illustrates the basic league schedule generated by the scheduling method for an eight-team league (L = 8).
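A single round-robin schedule of this kind can be generated with the standard circle method. The following Python sketch is illustrative; [57] does not mandate a particular construction.

```python
def round_robin(L):
    """Single round-robin schedule for L teams (L even), circle method.

    Returns a list of L - 1 weekly rounds, each a list of (team_i, team_j) pairs.
    """
    teams = list(range(L))
    rounds = []
    for _ in range(L - 1):
        # pair the list ends inward: first vs last, second vs second-to-last, ...
        pairs = [(teams[i], teams[L - 1 - i]) for i in range(L // 2)]
        rounds.append(pairs)
        # rotate every team except the first (the "circle" step)
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds
```

For L = 8 this yields 7 rounds of 4 matches each, i.e., the L(L − 1)/2 = 28 fixtures of one season.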

Evaluating Winner/Loser

A loser (and winner) is chosen according to the playing-strength criterion. Consider teams i and j playing in week t, with formations $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{in}^t)$ and $X_j^t$ and playing strengths $f(X_i^t)$ and $f(X_j^t)$, respectively. The probability $p_i^t$ that team i beats team j in week t is determined by Equation (34) [57].
$$p_i^t = \frac{f(X_j^t) - \hat{f}}{f(X_j^t) + f(X_i^t) - 2\hat{f}}$$
where $\hat{f}$ denotes the fitness of the global best team formation. To decide the match, a uniform random number in the range [0, 1) is drawn. Team i wins and team j loses if the number is less than or equal to $p_i^t$; otherwise, team j wins and team i loses.
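Equation (34) and the subsequent coin flip condense into a few lines. The sketch below assumes minimization, so that $\hat{f}$ is the smallest fitness seen so far; names are illustrative.

```python
import random

def lca_winner(f_i, f_j, f_best, rng=random):
    """Decide a match between teams i and j via Equation (34) (sketch).

    f_i, f_j : playing strengths (fitness values) of teams i and j at week t
    f_best   : fitness of the global best team formation (f_hat), f_best <= f_i, f_j
    Returns 'i' if team i wins, 'j' otherwise.
    """
    # probability that team i beats team j; closer-to-best fitness wins more often
    p_i = (f_j - f_best) / (f_j + f_i - 2.0 * f_best)
    return 'i' if rng.random() <= p_i else 'j'
```

Note that if team i currently holds the best formation ($f_i = \hat{f}$), then $p_i^t = 1$ and team i always wins.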

A New Team Formation

Based on the league schedule, three indices are linked to team i: l, the team that plays team i in week t + 1; j, the team that played team i in week t; and k, the team that played team l in week t. Presume that the best team formations of teams i, j, and k at week t are $B_i^t = (b_{i1}^t, b_{i2}^t, \ldots, b_{in}^t)$, $B_j^t$, and $B_k^t$, respectively. If team k overcame team l, team i can reasonably imitate the playing style that team k employed at week t, where $(B_k^t - B_i^t)$ indicates the gap vector between the playing styles of teams k and i. Conversely, it can make sense to avoid adopting a playing style akin to team k's and instead concentrate on that team's weaknesses, $(B_i^t - B_k^t)$. The vectors $B_j^t - B_i^t$ and $B_i^t - B_j^t$ may be interpreted similarly. The information in the two gap vectors is combined using two constant coefficients, the retreat parameter $\psi_1$ and the approach parameter $\psi_2$, to generate a new team formation. Team i employs the approach parameter when it wishes to move toward a rival's style and the retreat parameter when it wants to distance itself from a competitor.
The basic LCA is a swarm-based approach created to achieve global optimization. Despite its simplicity and effectiveness, the LCA is prone to becoming trapped in local optima, leading to an imbalance between global exploration and local exploitation.

4.2.5. Osprey Optimization Algorithm (OOA)

The osprey, often known as the sea hawk, is a widely distributed diurnal predatory bird. The osprey's strategy of catching fish and transporting them to a prime spot for consumption is a striking natural behavior that can serve as the basis for developing a novel optimization algorithm. These clever osprey actions were analyzed mathematically to develop the suggested OOA method.
Following the OOA launch, which is covered in detail in this subsection, the process of upgrading the ospreys’ positions in the two stages of exploitation and exploration based on the natural osprey treatment’s simulation is provided [58].
The suggested OOA is a swarm-based method that identifies a feasible solution through an iterative process driven by the search capacity of its swarm members in the problem-solving space. Each osprey, as a swarm member of the OOA, determines values for the problem variables based on its position in the search space. Consequently, every osprey represents a candidate solution to the problem, expressed mathematically as a vector. Equation (35) describes the OOA swarm, which consists of all ospreys and may be expressed as a matrix. At the beginning of the OOA implementation, the ospreys' positions in the search space are initialized randomly using Equation (36) [58].
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & & \vdots & & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & & \vdots & & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}$$
$$x_{i,j} = lb_j + r_{i,j} \cdot \left( ub_j - lb_j \right), \quad i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, m$$
where m denotes the number of problem variables, $r_{i,j}$ are random numbers in the interval [0, 1], $ub_j$ and $lb_j$ are the upper and lower bounds of the j th variable, and X is the population matrix of osprey locations. $X_i$ denotes the i th osprey and $x_{i,j}$ its j th dimension. Equation (37) states that the computed values of the problem's objective function can be presented as a vector [58].
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}$$
where F is the vector of objective function values and $F_i$ is the objective function value computed for the i th osprey.
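Equations (35)–(37) amount to a uniform random initialization of the swarm followed by one fitness evaluation per osprey. A compact sketch (function and variable names are illustrative):

```python
import numpy as np

def ooa_init(N, lb, ub, objective, seed=0):
    """Initialize the OOA population (sketch of Eqs. (35)-(37)).

    N         : number of ospreys (swarm size)
    lb, ub    : arrays of lower/upper bounds, one entry per problem variable
    objective : function mapping a position vector to a scalar fitness
    Returns (X, F): the N x m position matrix and the length-N fitness vector.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    m = lb.size
    X = lb + rng.random((N, m)) * (ub - lb)   # Eq. (36): random start in bounds
    F = np.array([objective(x) for x in X])   # Eq. (37): evaluate each osprey
    return X, F
```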
The values computed for the objective function serve as the main criterion for evaluating the quality of candidate solutions. The best value found for the objective function corresponds to the best member or candidate solution, while the worst value corresponds to the worst member. Since the positions of the ospreys in the search space are updated in every iteration, the best candidate solution must also be updated.

Stage 1: Positions’ Identification and Fish Hunting (Exploration)

Ospreys are strong predators whose excellent vision enables them to find fish underwater. They locate a fish, dive below the surface, and attack it. A simulation of this natural hunting behavior was used to model the first phase of the OOA's swarm update. Modeling the osprey's attack on the fish produces significant changes in the osprey's position in the search space, which increases the OOA's ability to explore, find the optimal region, and avoid local optima.
In the OOA’s design, each osprey’s placements beside other ospreys with a higher objective function amount were considered undersea fish. Equation (38) was used to calculate the fish’s set for each osprey [58].
$$FP_i = \left\{ X_k \mid k \in \{1, 2, \ldots, N\} \wedge F_k < F_i \right\} \cup \left\{ X_{best} \right\}$$
where $X_{best}$ is the best candidate solution (best osprey) and $FP_i$ is the set of fish positions for the i th osprey.
The osprey locates one of these fish at random and strikes it. Equation (39) was used to simulate the osprey's movement towards the fish and determine a new position for the corresponding osprey. Equation (40) states that the osprey relocates only if the new position improves the value of the objective function [58].
$$x_{i,j}^{P1} = x_{i,j} + r_{i,j} \cdot \left( SF_{i,j} - I_{i,j} \cdot x_{i,j} \right), \qquad x_{i,j}^{P1} = \begin{cases} x_{i,j}^{P1}, & lb_j \le x_{i,j}^{P1} \le ub_j \\ lb_j, & x_{i,j}^{P1} < lb_j \\ ub_j, & x_{i,j}^{P1} > ub_j \end{cases}$$
$$X_i = \begin{cases} X_i^{P1}, & F_i^{P1} < F_i \\ X_i, & \text{else} \end{cases}$$
where $X_i^{P1}$ is the new position of the i th osprey based on the first stage of the OOA, $x_{i,j}^{P1}$ is its j th dimension, $F_i^{P1}$ is its objective function value, $SF_{i,j}$ is the selected fish, $r_{i,j}$ are random numbers in the interval [0, 1], and $I_{i,j}$ are random integers from the set {1, 2}.
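Stage 1 can be sketched as a loop over the swarm: the fish set of Equation (38), the move of Equation (39), and the greedy acceptance of Equation (40) map directly onto a few lines. Minimization and illustrative names are assumed.

```python
import numpy as np

def ooa_explore(X, F, objective, lb, ub, rng):
    """First OOA phase (sketch of Eqs. (38)-(40)): each osprey attacks a random
    'fish' (a better swarm member or the global best) and keeps the move only
    if it improves the (minimized) objective. X and F are updated in place."""
    N, m = X.shape
    best = X[np.argmin(F)].copy()
    for i in range(N):
        better = [X[k].copy() for k in range(N) if F[k] < F[i]]  # fish set FP_i
        candidates = better + [best]                             # union with X_best
        fish = candidates[rng.integers(len(candidates))]
        r = rng.random(m)
        I = rng.integers(1, 3, size=m)                           # integers from {1, 2}
        x_new = np.clip(X[i] + r * (fish - I * X[i]), lb, ub)    # Eq. (39), bounded
        f_new = objective(x_new)
        if f_new < F[i]:                                         # Eq. (40): greedy accept
            X[i], F[i] = x_new, f_new
    return X, F
```

Because acceptance is greedy, no osprey's fitness can worsen during this phase.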

Stage 2: Transporting the Fish to the Appropriate Spot (Exploitation)

The osprey transports a hunted fish to a suitable feeding site. A simulation of this actual behavior was used to model the second stage of the swarm update in the OOA. Simulating the transport of the fish to a suitable spot produces minute adjustments to the osprey's position in the search space, which improves the OOA's local search (exploitation) ability and leads to convergence towards better solutions near the ones already found. In the OOA design, Equation (41) identifies a new random position for every swarm member as a "suitable position for eating the fish", mimicking the ospreys' natural behavior. Then, following Equation (42), if the objective function's value improves at this new position, it replaces the member's previous position [58].
$$x_{i,j}^{P2} = x_{i,j} + \frac{lb_j + r \cdot \left( ub_j - lb_j \right)}{t}, \quad i = 1, \ldots, N, \; j = 1, \ldots, m, \; t = 1, \ldots, T, \qquad x_{i,j}^{P2} = \begin{cases} x_{i,j}^{P2}, & lb_j \le x_{i,j}^{P2} \le ub_j \\ lb_j, & x_{i,j}^{P2} < lb_j \\ ub_j, & x_{i,j}^{P2} > ub_j \end{cases}$$
$$X_i = \begin{cases} X_i^{P2}, & F_i^{P2} < F_i \\ X_i, & \text{else} \end{cases}$$
where r are random numbers in the interval [0, 1], $X_i^{P2}$ is the osprey's new position based on the second stage of the OOA, $x_{i,j}^{P2}$ is its j th dimension, $F_i^{P2}$ is its objective function value, t is the iteration counter, and T is the total number of iterations.
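Stage 2 is a small random perturbation whose radius shrinks as 1/t, with the same greedy acceptance as the exploration stage. A sketch under the same assumptions:

```python
import numpy as np

def ooa_exploit(X, F, objective, lb, ub, t, rng):
    """Second OOA phase (sketch of Eqs. (41)-(42)): each osprey makes a small
    random move that shrinks with the iteration counter t and keeps it only if
    it improves the (minimized) objective. X and F are updated in place."""
    N, m = X.shape
    for i in range(N):
        step = (lb + rng.random(m) * (ub - lb)) / t        # Eq. (41): shrinking move
        x_new = np.clip(X[i] + step, lb, ub)               # keep within bounds
        f_new = objective(x_new)
        if f_new < F[i]:                                   # Eq. (42): greedy accept
            X[i], F[i] = x_new, f_new
    return X, F
```

The 1/t factor makes the perturbation coarse early on and progressively finer, which is what drives convergence near the solutions already found.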

4.2.6. Sooty Tern Optimization Algorithm (STOA)

The sooty tern’s natural behavior served as the model for the STOA algorithm. Dhiman originally suggested it for industrial engineering issues [59]. Omnivorous sooty terns consume earthworms, insects, fish, and other food items. The main advantages of STOA over other bionic optimization algorithms are its exploration and exploitation capabilities.

Migration Behavior (Exploration)

The three elements that make up the exploration portion of migrating behavior are as follows:
Avoiding collisions: $S_A$ is employed to compute the new position of a search agent so that collisions with other search agents are avoided [59].
$$C_{st} = S_A \times P_{st}(Z)$$
where Z denotes the current iteration, $S_A$ depicts the movement of the search agent within the search space, $P_{st}(Z)$ indicates the search agent's current position, and $C_{st}$ is the position of the search agent after avoiding collisions with the other search agents [59].
$$S_A = C_f - Z \times \frac{C_f}{Max_{iterations}}$$
where
$$Z = 0, 1, 2, \ldots, Max_{iterations}$$
where $C_f$ is a control variable that is linearly reduced from $C_f$ to 0 to modify $S_A$. In this paper, $C_f$ is set to 2.
Converge in the best neighbor’s direction: Following collision avoidance, the search agents proceed in the direction of the best neighbor [59].
$$M_{st} = C_B \times \left( P_{bst}(Z) - P_{st}(Z) \right)$$
where $M_{st}$ denotes the movement of search agent $P_{st}$ toward the best search agent $P_{bst}$ from its current position. $C_B$, which is responsible for better exploration, is defined as follows [59].
$$C_B = 0.5 \times Rand$$
where Rand is a random value in the interval [0, 1].
Lastly, the search agents update their positions toward the best sooty tern [59].
$$D_{st} = C_{st} + M_{st}$$
The distance between the search agent and the fittest search agent is denoted by $D_{st}$.
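The three migration elements combine into one short routine. The sketch below is illustrative; the uniform draw for Rand and the function signature are assumptions.

```python
import numpy as np

def stoa_migrate(P, P_best, z, max_iter, Cf=2.0, rng=None):
    """STOA migration (exploration) step: compute D_st from C_st and M_st.

    P        : current position vector of a search agent
    P_best   : position of the fittest search agent
    z        : current iteration (0 .. max_iter)
    Cf       : control variable, linearly decreased from Cf to 0 (paper uses 2)
    Returns D_st, the distance vector consumed by the attack phase.
    """
    rng = rng or np.random.default_rng()
    SA = Cf - z * (Cf / max_iter)      # collision-avoidance factor, shrinks to 0
    C_st = SA * P                      # collision-free position
    CB = 0.5 * rng.random()            # random weight, responsible for exploration
    M_st = CB * (P_best - P)           # move toward the best neighbour
    return C_st + M_st                 # D_st
```

At the final iteration SA reaches 0, so only the convergence term toward the best agent remains.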

Attacking Treatment (Exploitation)

Sooty terns can change their attack angle and speed while migrating. With the help of their wings, they can soar higher. Their spiral behavior when attacking prey is modeled as follows [60]:
$$x = Radius \times \sin(i)$$
$$y = Radius \times \cos(i)$$
$$z = Radius \times i$$
$$Radius = u \times e^{kv}$$
where i is a variable in the interval [0, 2π] and Radius is the radius of each turn of the spiral. The constants u and v determine the spiral's shape, and e is the base of the natural logarithm. As a result, the search agent's position is updated as follows [60]:
$$P_{st}(Z) = D_{st} \times \left( x + y + z \right) \times P_{bst}(Z)$$
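The spiral attack can then be sketched as follows. Taking the spiral variable k in the Radius expression equal to the angle i follows the related seagull-style formulation and is an assumption here, as are the unit constants.

```python
import numpy as np

def stoa_attack(D_st, P_best, i, u=1.0, v=1.0):
    """STOA attack (exploitation) sketch: spiral move around the best agent.

    D_st   : distance vector produced by the migration phase
    P_best : position of the fittest search agent
    i      : spiral angle, a random value in [0, 2*pi]
    u, v   : constants defining the spiral shape (assumed 1 here)
    """
    radius = u * np.exp(i * v)          # Radius = u * e^(k v), with k = i assumed
    x = radius * np.sin(i)
    y = radius * np.cos(i)
    z = radius * i
    return D_st * (x + y + z) * P_best  # updated position P_st(Z)
```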

5. Results and Discussion

The motor reactance (Xm) was predicted using several hybrid prediction models that combine multilayer perceptrons with MVO, COA, HBO, LCA, OOA, and STOA. The algorithms were developed in MATLAB. The correlation between the dependent and independent parameters was then determined using the networks fed by these datasets. A trial-and-error procedure was employed to determine the optimal complexity of the predictive models. The first step in executing an MLP is to define the testing and training phases. The training and testing datasets, which accounted for 80% and 20% of the dataset, respectively, were selected in line with earlier reports from other researchers. The designed models are dependable and straightforward and agree with the original experimental results, demonstrating that the developed technique is a versatile and reliable instrument. To determine a suitable population size, this study used a parametric study: several MLP analyses were run with population sizes of 500, 450, 400, 350, 300, 250, 200, 150, 100, and 50, and the outcome of each trial was assessed through the reduction in MSE. In light of the error curves depicted in Figure 7, the best efficiency, i.e., the smallest mean square error (MSE = 429.2693), was reached by the multi-verse optimization with a population size of 400. This structure was introduced as the ideal MVO-MLP architecture to facilitate subsequent assessments of motor reactance. The other five methods, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP, also showed appropriate results in terms of MSE, with values of 665.0085, 2080.2834, 795.005, 3298.092, and 642.6149 at population sizes of 350, 400, 450, 500, and 500, respectively.
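The 80%/20% partition used for all six hybrid models can be reproduced in a few lines. The shuffling step and the seed are assumptions, since the paper does not specify how the split was drawn.

```python
import numpy as np

def split_80_20(X, y, seed=0):
    """Shuffle a dataset and split it 80% training / 20% testing, matching
    the partition used for all six hybrid models (shuffle/seed assumed)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))            # random ordering of sample indices
    cut = int(round(0.8 * len(X)))           # 80% boundary
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]
```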
To evaluate the proposed ANN approaches’ performance, the attained results and the known results are compared with each other. Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 compare the outcomes of the suggested ANN approaches with the empirical data for testing and training.
Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 show a regression analysis of empirical and forecasted amounts to provide more detailed research into the suggested ANN models. The correlation factor (CF) can be used to validate the appropriateness of the suggested ANN models. The CF is determined as follows:
$$CF = 1 - \frac{\sum_{i=1}^{n} \left( X_i^{Exp} - X_i^{pred} \right)^2}{\sum_{i=1}^{n} \left( X_i^{Exp} \right)^2}$$
where n represents the number of data points, and $X^{Exp}$ and $X^{pred}$ indicate the empirical and predicted (ANN) values, respectively. Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 demonstrate a positive correlation between the empirical values of the motors' reactance and the recommended MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP methods, respectively. These figures make it clear that the outputs of the suggested ANN models are reasonably close to the empirical findings, demonstrating that ANN is a precise and dependable method for simulating squirrel-cage induction motors. The coefficient of determination was 0.9962, 0.9937, 0.9858, 0.9929, 0.9691, and 0.9946 in the testing phase, and 0.99598, 0.99358, 0.98561, 0.99252, 0.97008, and 0.99395 in the training phase for the MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP techniques, respectively. Based on these findings, the recommended MVO-MLP approach is more precise than the other approaches in terms of R2.
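The correlation factor defined above reduces to two array operations; a minimal sketch:

```python
import numpy as np

def correlation_factor(x_exp, x_pred):
    """Correlation factor CF: 1 minus the ratio of the squared prediction
    error to the squared magnitude of the experimental data."""
    x_exp = np.asarray(x_exp, float)
    x_pred = np.asarray(x_pred, float)
    return 1.0 - np.sum((x_exp - x_pred) ** 2) / np.sum(x_exp ** 2)
```

A perfect prediction gives CF = 1; larger errors relative to the data magnitude drive CF toward 0 and below.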
To confirm the consistency of the learning system, the training process was repeated several times for each structure. The six models introduced in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 all underwent the same procedure. It is worth noting that a greater number of neurons results in more complex networks with higher accuracy. Seven hidden neurons were chosen as the fittest structure based on the precision of the testing results, a modest increase in the testing R2 value, and a negligible reduction in the RMSE value. Consequently, an MLP architecture with a network structure of 8 × 7 × 1 was selected as the best structure for the overall hybridization process (i.e., MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP). For the testing and training datasets, the proposed MVO-MLP model yielded R2 of 0.9962 and 0.99598 and RMSE of 20.80626 and 20.31492, respectively. The COA-MLP, HBO-MLP, LCA-MLP, and OOA-MLP models, meanwhile, had R2 of 0.9937, 0.9858, 0.9929, and 0.9691 in training and 0.99358, 0.98561, 0.99252, and 0.97008 in testing, and RMSE values of 26.6653, 39.97074, 28.3406, and 58.72181 in training and 25.64152, 38.31342, 27.67516, and 55.03676 in testing. Additionally, for the training and testing stages of the STOA-MLP, the R2 values are 0.9946 and 0.99395 and the RMSE values are 24.80362 and 24.89665, respectively. According to Table 9, which summarizes the findings of all six tables, the hybrid MVO-MLP model precisely predicts the motor's reactance. In light of this, the proposed hybrid MVO-MLP model can be recommended as a cutting-edge, accurate model for predicting the motor's reactance.
Table 9 demonstrates that the optimal population sizes for the MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP were 400, 350, 400, 450, 500, and 500, respectively. The error distributions and minimum error values of the best-fitted MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP structures are shown in Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19, respectively. The outcomes obtained from the testing and training databases show close agreement between the observed and estimated values of motor reactance. During the training stage, MAE values of 13.6075, 19.9943, 33.1182, 21.0433, 43.0551, and 17.8025 were reached for the MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP, respectively. The corresponding MSE values are 433.1356, 712.2449, 1961.1447, 813.8206, 3683.0843, and 615.406. These error values make it evident that MVO-MLP is a more reliable predictive network than the other proposed algorithms for approximating real-world induction motor reactance.

Taylor Diagram

In addition to the statistical parameters (RMSE, MAE, and R2), the Taylor diagram [61] was utilized to evaluate the accuracy of the mentioned methods. This graph accurately maps the predicted and observed data [62]. The Taylor diagram represents several evaluation parameters in a single plot, which makes it possible to judge the methods' accuracy from a few plotted points. It shows the standard deviation, the RMSE, and the correlation coefficient between the observed and predicted values for better recognition of variations [63].
Figure 20 demonstrates Taylor graphs for different best-fit methods. The radial length from the observed value is the RMSE quantity [61]. As a result, the more accurate method is recognized by the point with the highest R2 amount (R2 = 1) and the RMSE with the minimum value. It was evident from Figure 20 that all six methods show high accuracy in predicting the induction motor reactance, but MVO-MLP produced the best prediction.

6. Practical Implementation

As stated earlier, ANNs (including all the hybridized techniques) can be trained on the historical data of induction motors, including their design specifications, geometrical parameters, operational conditions, and the corresponding reactance values. By learning from these data, the ANN can establish patterns and relationships between the motor characteristics and its reactance. Once trained, the ANN can predict a new motor's reactance based on its parameters. The ability to learn from large datasets and generalize from learned patterns makes ANNs valuable tools in motor analysis, optimization, and fault detection. It is noteworthy that ANNs have advantages such as flexibility, adaptability, accuracy, generalization, real-time and fast predictions, data-based optimization, and integration with automation systems. These advantages make ANNs useful for motor monitoring, control, and optimization in industrial applications. Estimating the equivalent circuit variables of an induction motor using an ANN can be applied in various ways in industry: motor operations can be optimized, reliability increased, maintenance costs reduced, and overall system efficiency improved. In addition, by monitoring the equivalent circuit parameters, ANNs can help detect and diagnose motor faults early, prevent unexpected failures, and minimize downtime. By analyzing the trends and patterns in the predicted parameters, it can be decided how to intervene when the motor's operation shows signs of deterioration or inefficiency.
As with ANNs in general, the hybrid methods used in this article are expected to contribute significantly to these applications. The preceding sections established that the MVO algorithm exhibits a marginally lower convergence curve than the other algorithms, as indicated in Table 6. This suggests that, compared with alternative optimization strategies, the algorithm achieved lower error rates when tuning the ANN parameters. Thus, the algorithm's outcome is used here to create a predictive formula. Regarding the ANN optimization computations, the output neuron involves eight variables (seven weights and one bias). Seven hidden neurons, each with nine variables (eight weights and one bias), supply this neuron. Metaheuristic techniques are used to optimize these sixty-three hidden-layer variables together with the eight output-layer variables. Equation (45) uses the MVO-tuned network to determine the reactance from the seven hidden-layer neural responses O1, O2, O3, O4, O5, O6, and O7.
Reactance MVO-MLP = 0.2437 × O1 + 0.0850 × O2 + 0.5791 × O3 + 0.0937 × O4 − 0.3849 × O5 − 0.4100 × O6 + 0.4853 × O7 + 0.1542, where considering the labels presented in Table 10,
$$O_i = Tansig\!\left( W_{i1} \times P + W_{i2} \times \cos(\rho_{FL}) + W_{i3} \times \frac{T_m}{T_{FL}} + W_{i4} \times \frac{T_{ST}}{T_{FL}} + W_{i5} \times \frac{I_{ST}}{I_{FL}} + W_{i6} \times \omega_{FL} + W_{i7} \times \eta_{FL} + W_{i8} \times cage + b_i \right)$$
in which $W_{i1}, W_{i2}, \ldots, W_{i8}$, and $b_i$ are given in Table 11.
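With the weights and biases of Table 11 (not reproduced here, so placeholders must be supplied by the reader), the tuned 8 × 7 × 1 network of Equation (45) can be evaluated as follows. Tansig is mathematically equivalent to the hyperbolic tangent; the function name and input ordering are assumptions matching Table 10.

```python
import numpy as np

# output-layer weights reported in Equation (45)
w_out = np.array([0.2437, 0.0850, 0.5791, 0.0937, -0.3849, -0.4100, 0.4853])

def predict_reactance(inputs, W, b, w_out=w_out, b_out=0.1542):
    """Evaluate the tuned 8-7-1 MVO-MLP of Equation (45) (sketch).

    inputs : length-8 vector (P, cos(rho_FL), Tm/TFL, TST/TFL, IST/IFL,
             omega_FL, eta_FL, cage), scaled as in the paper
    W, b   : 7x8 hidden weight matrix and length-7 bias vector (Table 11 values)
    """
    O = np.tanh(W @ np.asarray(inputs, float) + b)  # Tansig == tanh
    return float(w_out @ O + b_out)                 # linear output neuron
```

With all hidden weights and biases set to zero, every $O_i$ vanishes and the prediction reduces to the output bias 0.1542, which is a quick sanity check on the wiring.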
Indeed, the output of this study can be employed to optimize the design of induction motors by adjusting various parameters to achieve desired reactance values. By training the ANN on a dataset that includes different motor designs and their corresponding reactance values, the network can learn to identify the optimal combination of parameters that yield the desired reactance. This can aid in improving motor efficiency, reducing losses, and enhancing performance.

7. Conclusions

This research improves the modeling of induction motors using neural networks, equivalent circuits, and numerical simulations. One drawback of this model is its closely linked and interdependent input-output variables, which make creating data patterns and training the network challenging. In this study, the COA, LCA, HBO, MVO, OOA, and STOA were applied to estimate the reactance of the motor. Comparing the earlier investigation with the ANN estimates demonstrates that the suggested MVO-MLP method, with R2 of (0.9962 and 0.99598) and RMSE of (20.80626 and 20.31492), performed better than COA-MLP (R2 = 0.9937 and 0.99358 and RMSE = 26.6653 and 25.64152), HBO-MLP (R2 = 0.9858 and 0.98561 and RMSE = 39.97074 and 38.31342), LCA-MLP (R2 = 0.9929 and 0.99252 and RMSE = 28.3406 and 27.67516), OOA-MLP (R2 = 0.9691 and 0.97008 and RMSE = 58.72181 and 55.03676), and STOA-MLP (R2 = 0.9946 and 0.99395 and RMSE = 24.80362 and 24.89665). Not only has the suggested MVO-MLP generated a better outcome than the others, but its predictions are also very close to the actual results. The suggested ANNs have yielded excellent results for the projected model.
According to the findings, metaheuristic solutions can offer valuable perspectives for optimizing induction motor reactance. Metaheuristic algorithms used in this study are well-suited for handling complex problems. These algorithms can address multimodal optimization problems by searching for multiple solutions simultaneously or adapting their search strategies to explore different regions of the search space. This flexibility allows for comprehensive design space exploration, leading to a better understanding of the trade-offs involved. Optimizing an induction motor’s reactance can enhance its overall efficiency. Metaheuristic algorithms can assist in identifying reactance values that minimize losses, reduce energy consumption, and improve the motor’s overall performance. This perspective aligns with the growing emphasis on energy efficiency and sustainability in various industries. Also, metaheuristic algorithms offer a highly efficient alternative by leveraging stochastic search strategies, reducing the computational burden, and providing reasonably good solutions within a reasonable timeframe. By incorporating domain knowledge and problem-specific constraints, these algorithms can be tailored to address the unique challenges and objectives in motor design and control. Further research and experimentation in this area can help to refine and improve the application of metaheuristic solutions in induction motor reactance optimization, leading to more efficient and optimized motor designs.

Funding

This research received no external funding.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

ANN	Artificial neural networks
HBO	Heap-based optimization
LCA	Leagues championship algorithm
MVO	Multi-verse optimization
OOA	Osprey optimization algorithm
COA	Cuckoo optimization algorithm
STOA	Sooty tern optimization algorithm
MLP	Multi-layer perceptron
GA	Genetic algorithm
ABC	Artificial bee colony
AC	Alternating current
DC	Direct current
MAE	Mean absolute error
RMSE	Root mean square error
R2	Coefficient of determination
WEP	Wormhole existence probability
TDR	Traveling distance rate
CRH	Corporate rank hierarchy
CF	Correlation factor
IM	Induction motor
Xm	Motor reactance
P (kW)	Rated power
cos(ρFL)	Full load power factor
Tm/TFL	Maximum torque to full load torque
TST/TFL	Starting torque to full load torque
IST/IFL	Starting current to full load current
ωFL	Angular velocity
ηFL	Full load efficiency

References

  1. Nardo, M.D.; Marfoli, A.; Degano, A.; Gerada, C. Rotor slot design of squirrel cage induction motors with improved rated efficiency and starting capability. IEEE Trans. Ind. Appl. 2022, 58, 3383–3393. [Google Scholar] [CrossRef]
  2. Lee, K.S.; Lee, S.H.; Park, J.H.; Kim, J.M.; Choi, J.Y. Experimental and analytical study of single-phase squirrel-cage induction motor considering end-ring porosity rate. IEEE Trans. Magn. 2017, 53, 1–4. [Google Scholar] [CrossRef]
  3. Yang, M.; Wang, Y.; Xiao, X.; Li, Y. A Robust Damping Control for Virtual Synchronous Generators Based on Energy Reshaping. IEEE Trans. Energy Convers. 2023, 38, 2146–2159. [Google Scholar] [CrossRef]
  4. Jirdehi, M.A.; Rezaei, A. Parameters estimation of squirrel-cage induction motors using ANN and ANFIS. Alex. Eng. J. 2016, 55, 357–368. [Google Scholar] [CrossRef]
  5. Song, X.; Wang, H.; Ma, X.; Yuan, X.; Wu, X. Robust model predictive current control for a nine-phase open-end winding PMSM with high computational efficiency. IEEE Trans. Power Electron. 2023, 38, 13933–13943. [Google Scholar] [CrossRef]
  6. Çetin, O.; Dalcalı, A.; Temurtaş, F. A comparative study on parameters estimation of squirrel cage induction motors using neural networks with unmemorized training. Eng. Sci. Technol. Int. J. 2020, 23, 1126–1133. [Google Scholar] [CrossRef]
  7. Silva, A.M.; Alberto, J.; Antunes, C.H.; Ferreira, F.J.T.E. A Stochastic Optimization Approach to the Estimation of Squirrel-Cage Induction Motor Equivalent Circuit Parameters. In Proceedings of the 2020 International Conference on Electrical Machines (ICEM), Gothenburg, Sweden, 23–26 August 2020. [Google Scholar] [CrossRef]
  8. Shen, Y.; Liu, D.; Liang, W.; Zhang, X. Current reconstruction of three-phase voltage source inverters considering current ripple. IEEE Trans. Transp. Electrif. 2022, 9, 1416–1427. [Google Scholar] [CrossRef]
  9. Ocak, C. A FEM-Based Comparative Study of the Effect of Rotor Bar Designs on the Performance of Squirrel Cage Induction Motors. Energies 2023, 16, 6047. [Google Scholar] [CrossRef]
  10. Abunike, C.E.; Akuru, U.B.; Okoro, O.I.; Awah, C.C. Sizing, Modeling, and Performance Comparison of Squirrel-Cage Induction and Wound-Field Flux Switching Motors. Mathematics 2023, 11, 3596. [Google Scholar] [CrossRef]
  11. Agah, G.R.; Rahideh, A.; Faradonbeh, V.Z.; Hedayati, K.S. Stator Winding Inter-Turn Short-Circuit Fault Modeling and Detection of Squirrel-Cage Induction Motors. IEEE Trans. Transp. Electrif. 2023. [Google Scholar] [CrossRef]
  12. Du, J.; Li, Y. Analysis on the Variation Laws of Electromagnetic Force Wave and Vibration Response of Squirrel-Cage Induction Motor under Rotor Eccentricity. Electronics 2023, 12, 1295. [Google Scholar] [CrossRef]
  13. Pedra, J.; Sainz, L.; Córcoles, F. Study of aggregate models for squirrel-cage induction motors. IEEE Trans. Power Syst. 2005, 20, 1519–1527. [Google Scholar] [CrossRef]
  14. Zhang, H.; Wu, H.; Jin, H.; Li, H. High-Dynamic and Low-Cost Sensorless Control Method of High-Speed Brushless DC Motor. IEEE Trans. Ind. Inform. 2023, 19, 5576–5584. [Google Scholar] [CrossRef]
  15. Özsoy, M.; Kaplan, O.; Akar, M. FEM-based analysis of rotor cage material and slot geometry on double air gap axial flux induction motors. Ain Shams Eng. J. 2024, 15, 102393. [Google Scholar] [CrossRef]
  16. Yang, X.; Wang, X.; Wang, S.; Wang, K.; Sial, M.B. Finite-time adaptive dynamic surface synchronization control for dual-motor servo systems with backlash and time-varying uncertainties. ISA Trans. 2023, 137, 248–262. [Google Scholar] [CrossRef] [PubMed]
  17. Li, M.; Wang, T.; Chu, F.; Han, Q.; Qin, Z.; Zuo, M.J. Scaling-basis chirplet transform. IEEE Trans. Ind. Electron. 2021, 68, 8777–8788. [Google Scholar] [CrossRef]
  18. Zheng, W.; Gong, G.; Tian, J.; Lu, S.; Wang, R.; Yin, Z.; Li, X.; Yin, L. Design of a Modified Transformer Architecture Based on Relative Position Coding. Int. J. Comput. Intell. Syst. 2023, 16, 168. [Google Scholar] [CrossRef]
  19. Sun, Y.; Peng, Z.; Hu, J.; Ghosh, B.K. Event-triggered critic learning impedance control of lower limb exoskeleton robots in interactive environments. Neurocomputing 2024, 564, 126963. [Google Scholar] [CrossRef]
Figure 1. Structure of the induction motor: (a) Single-cage, (b) Double-cage.
Figure 2. The steady-state pattern of the single-cage induction motor.
Figure 3. The steady-state pattern of the double-cage induction motor.
Figure 4. Flowchart of the investigation, from the input parameters to the predicted motor-reactance output.
Figure 5. A simple ANN structure used in the present example.
Figure 6. A league-schedule simulation for an 8-team league.
Figure 7. The best-fit structure for the (a) MVO-MLP, (b) COA-MLP, (c) HBO-MLP, (d) LCA-MLP, (e) OOA-MLP, and (f) STOA-MLP.
Figure 8. Accuracy results on the training and testing datasets for the different proposed MVO-MLP structures.
Figure 9. Accuracy results on the training and testing datasets for the different proposed COA-MLP structures.
Figure 10. Accuracy results on the training and testing datasets for the different proposed HBO-MLP structures.
Figure 11. Accuracy results on the training and testing datasets for the different proposed LCA-MLP structures.
Figure 12. Accuracy results on the training and testing datasets for the different proposed OOA-MLP structures.
Figure 13. Accuracy results on the training and testing datasets for the different proposed STOA-MLP structures.
Figure 14. The MAE frequency and error for the best-fit MVO-MLP method.
Figure 15. The MAE frequency and error for the best-fit COA-MLP method.
Figure 16. The MAE frequency and error for the best-fit HBO-MLP method.
Figure 17. The MAE frequency and error for the best-fit LCA-MLP method.
Figure 18. The MAE frequency and error for the best-fit OOA-MLP method.
Figure 19. The MAE frequency and error for the best-fit STOA-MLP method.
Figure 20. Taylor diagram for the best-fit structures of MVO-MLP, COA-MLP, HBO-MLP, LCA-MLP, OOA-MLP, and STOA-MLP; (a) training dataset, and (b) testing dataset.
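The Taylor diagram in Figure 20 condenses three statistics per model into a single plot: the standard deviations of the observed and predicted reactance, their correlation, and the centered RMS difference. A minimal sketch of these quantities under their standard definitions (the variable names are illustrative, not taken from the paper):

```python
import numpy as np

def taylor_stats(obs, pred):
    """Compute the three statistics a Taylor diagram displays at once:
    standard deviations, Pearson correlation, and centered RMS difference."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    sd_obs, sd_pred = obs.std(), pred.std()
    r = np.corrcoef(obs, pred)[0, 1]
    # Centered RMSD: RMS of the mean-removed difference. The diagram's
    # geometry follows the law of cosines:
    #   crmsd^2 = sd_obs^2 + sd_pred^2 - 2 * sd_obs * sd_pred * r
    d = (pred - pred.mean()) - (obs - obs.mean())
    crmsd = float(np.sqrt((d ** 2).mean()))
    return {"sd_obs": float(sd_obs), "sd_pred": float(sd_pred),
            "r": float(r), "crmsd": crmsd}
```

Because of the law-of-cosines identity, a model's point on the diagram fixes all three quantities simultaneously, which is why a single marker per method suffices in Figure 20.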
Table 1. Input and output data values.

| P (kW) | cos φFL | Tm/TFL | TST/TFL | IST/IFL | ωFL (rpm) | ηFL | Cage Number | Reactance Xm (ohms) |
|---|---|---|---|---|---|---|---|---|
| 8 | 0.74 | 2.5 | 2.1 | 4.6 | 960 | 0.86 | 1 | 1.056 |
| 11 | 0.9 | 3.1 | 2.2 | 7 | 2945 | 0.91 | 1 | 2.5856 |
| 15 | 0.92 | 2.9 | 2.2 | 6.6 | 2910 | 0.904 | 1 | 3.1806 |
| 19 | 0.84 | 3.2 | 2.7 | 6.9 | 1460 | 0.905 | 1 | 1.6808 |
| 22 | 0.77 | 2.9 | 2.8 | 5.5 | 975 | 0.908 | 1 | 1.2526 |
| 30 | 0.88 | 2.7 | 2.3 | 6 | 2940 | 0.91 | 1 | 2.3497 |
| 37 | 0.86 | 3.1 | 2.5 | 7 | 1475 | 0.929 | 1 | 1.9975 |
| 45 | 0.81 | 2.3 | 2.1 | 6 | 740 | 0.92 | 1 | 1.6728 |
| 55 | 0.82 | 2.4 | 2.2 | 6 | 738 | 0.931 | 1 | 1.7594 |
| 75 | 0.86 | 2.4 | 2.1 | 6.3 | 1482 | 0.947 | 1 | 2.3228 |
| 90 | 0.86 | 2.7 | 2.2 | 6.8 | 1480 | 0.94 | 1 | 2.1573 |
| 110 | 0.86 | 3 | 2 | 7.6 | 2982 | 0.955 | 1 | 2.1294 |
| 132 | 0.86 | 3 | 2.7 | 7.2 | 1486 | 0.955 | 1 | 2.1209 |
| 160 | 0.86 | 2.7 | 2.4 | 7 | 1487 | 0.96 | 1 | 2.2378 |
| 200 | 0.87 | 2.7 | 2.7 | 7 | 1488 | 0.962 | 1 | 2.4082 |
| 250 | 0.83 | 2.2 | — | 7.3 | 991 | 0.91 | 1 | 1.4423 |
| 315 | 0.84 | 3 | 2 | 7.3 | 991 | 0.962 | 1 | 1.9026 |
| 355 | 0.87 | 2.7 | 2.2 | 6.8 | 1486 | 0.967 | 1 | 2.4236 |
| 400 | 0.82 | 2.6 | 2.1 | 6.5 | 742 | 0.962 | 1 | 1.7943 |
| 500 | 0.87 | 2.7 | 2.3 | 6.5 | 992 | 0.966 | 1 | 2.3801 |
| 8 | 0.74 | 2.5 | 2.1 | 4.6 | 960 | 0.86 | 2 | 1.0415 |
| 11 | 0.9 | 3.1 | 2.2 | 7 | 2945 | 0.91 | 2 | 2.5947 |
| 15 | 0.92 | 2.9 | 2.2 | 6.6 | 2910 | 0.904 | 2 | 3.0787 |
| 19 | 0.84 | 3.2 | 2.7 | 6.9 | 1460 | 0.905 | 2 | 1.6559 |
| 22 | 0.77 | 2.9 | 2.8 | 5.5 | 975 | 0.908 | 2 | 1.2665 |
| 30 | 0.88 | 2.7 | 2.3 | 6 | 2940 | 0.91 | 2 | 2.3576 |
| 37 | 0.86 | 3.1 | 2.5 | 7 | 1475 | 0.929 | 2 | 2.0088 |
| 45 | 0.81 | 2.3 | 2.1 | 6 | 740 | 0.92 | 2 | 1.7001 |
| 55 | 0.82 | 2.4 | 2.2 | 6 | 738 | 0.931 | 2 | 1.7804 |
| 75 | 0.86 | 2.4 | 2.1 | 6.3 | 1482 | 0.947 | 2 | 2.3514 |
| 90 | 0.86 | 2.7 | 2.2 | 6.8 | 1480 | 0.94 | 2 | 2.1727 |
| 110 | 0.86 | 3 | 2 | 7.6 | 2982 | 0.955 | 2 | 2.1472 |
| 132 | 0.86 | 3 | 2.7 | 7.2 | 1486 | 0.955 | 2 | 2.1405 |
| 160 | 0.86 | 2.7 | 2.4 | 7 | 1487 | 0.96 | 2 | 2.261 |
| 200 | 0.87 | 2.7 | 2.7 | 7 | 1488 | 0.962 | 2 | 2.4351 |
| 250 | 0.83 | 2.2 | — | 7.3 | 991 | 0.91 | 2 | 1.4611 |
| 315 | 0.84 | 3 | 2 | 7.3 | 991 | 0.962 | 2 | 1.9158 |
| 355 | 0.87 | 2.7 | 2.2 | 6.8 | 1486 | 0.967 | 2 | 2.442 |
| 400 | 0.82 | 2.6 | 2.1 | 6.5 | 742 | 0.962 | 2 | 1.8144 |
| 500 | 0.87 | 2.7 | 2.3 | 6.5 | 992 | 0.966 | 2 | 2.4021 |
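Each row of Table 1 supplies the eight input features and the target reactance used to train the networks. A minimal sketch of assembling such rows into arrays, using the first two single-cage motors; the min-max normalization shown is an assumed preprocessing step, since the excerpt does not state the exact scaling used:

```python
import numpy as np

# Two sample rows from Table 1: the 8 kW and 11 kW single-cage motors.
# Columns: P (kW), cos(phi_FL), Tm/TFL, TST/TFL, IST/IFL, w_FL (rpm), eta_FL, cage
X = np.array([
    [8.0, 0.74, 2.5, 2.1, 4.6, 960.0, 0.86, 1.0],
    [11.0, 0.90, 3.1, 2.2, 7.0, 2945.0, 0.91, 1.0],
])
y = np.array([1.056, 2.5856])  # target reactance Xm (ohms)

# Min-max scaling of each feature to [0, 1] (an assumed choice).
span = X.max(axis=0) - X.min(axis=0)
span[span == 0] = 1.0  # guard constant columns (e.g. cage number here)
X_scaled = (X - X.min(axis=0)) / span
```

With only two rows, every non-constant feature maps to exactly 0 and 1; on the full 40-row dataset the same code rescales all features onto a common range, which keeps the large rpm values from dominating the small per-unit ratios during training.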
Table 2. ANN network outcomes concerning RMSE output.

| Neurons' Number | RMSE Total | RMSE Train | RMSE Test | Score: RMSE Total | Score: RMSE Train | Score: RMSE Test | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.042 | 0.159 | 0.094 | 5 | 3 | 4 | 12 | 7 |
| 2 | 0.070 | 0.087 | 0.076 | 3 | 5 | 5 | 13 | 6 |
| 3 | 0.026 | 0.031 | 0.027 | 7 | 10 | 9 | 26 | 2 |
| 4 | 0.023 | 0.042 | 0.030 | 8 | 8 | 7 | 23 | 4 |
| 5 | 0.022 | 0.042 | 0.029 | 9 | 7 | 8 | 24 | 3 |
| 6 | 0.028 | 0.189 | 0.106 | 6 | 2 | 3 | 11 | 8 |
| 7 | 0.002 | 0.034 | 0.019 | 10 | 9 | 10 | 29 | 1 |
| 8 | 0.413 | 0.557 | 0.461 | 1 | 1 | 1 | 3 | 10 |
| 9 | 0.085 | 0.158 | 0.112 | 2 | 4 | 2 | 8 | 9 |
| 10 | 0.055 | 0.054 | 0.055 | 4 | 6 | 6 | 16 | 5 |
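The scoring in Tables 2–9 is rank-based: for each metric, the best of the n candidates earns n points and the worst earns 1, and a candidate's points are summed into its Total Score. A sketch of that scheme, with tie handling left as a simple assumption since the paper does not spell it out:

```python
def score_candidates(values):
    """Rank-based scoring for a lower-is-better metric: among n candidates,
    the best value earns n points and the worst earns 1.
    (Ties are broken by position here -- an assumption.)"""
    n = len(values)
    # Sort indices from worst (largest value) to best (smallest value).
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    scores = [0] * n
    for points, idx in enumerate(order, start=1):
        scores[idx] = points
    return scores

# RMSE (total) column of Table 2, neuron counts 1..10:
rmse_total = [0.042, 0.070, 0.026, 0.023, 0.022, 0.028, 0.002, 0.413, 0.085, 0.055]
print(score_candidates(rmse_total))  # -> [5, 3, 7, 8, 9, 6, 10, 1, 2, 4]
```

The printed scores reproduce the "Score: RMSE Total" column of Table 2; applying the same function to the train and test columns and summing yields the Total Score and hence the Rank.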
Table 3. Network outcomes for ten proposed MVO-MLP swarm sizes.

| Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 50 | 23.63823 | 0.9951 | 24.11876 | 0.99432 | 6, 6 | 5, 5 | 22 | 5 |
| 100 | 25.11094 | 0.9944 | 26.27564 | 0.99326 | 1, 1 | 1, 1 | 4 | 10 |
| 150 | 22.22501 | 0.9956 | 23.80023 | 0.99447 | 8, 8 | 7, 7 | 30 | 3 |
| 200 | 23.84449 | 0.9950 | 24.57425 | 0.99411 | 5, 5 | 4, 4 | 18 | 6 |
| 250 | 24.1811 | 0.9948 | 24.76236 | 0.99402 | 3, 3 | 3, 3 | 12 | 8 |
| 300 | 23.33629 | 0.9952 | 23.65243 | 0.99454 | 7, 7 | 8, 8 | 30 | 3 |
| 350 | 23.90106 | 0.9949 | 25.61831 | 0.99359 | 4, 4 | 2, 2 | 12 | 8 |
| 400 | 20.80626 | 0.9962 | 20.31492 | 0.99598 | 10, 10 | 10, 10 | 40 | 1 |
| 450 | 24.51922 | 0.9947 | 24.03486 | 0.99436 | 2, 2 | 6, 6 | 16 | 7 |
| 500 | 22.14183 | 0.9957 | 22.65417 | 0.99499 | 9, 9 | 9, 9 | 36 | 2 |
Table 4. Network outcomes for ten proposed COA-MLP swarm sizes.

| Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 50 | 34.08308 | 0.9897 | 34.06019 | 0.98865 | 4, 4 | 4, 4 | 16 | 7 |
| 100 | 31.44115 | 0.9912 | 29.71232 | 0.99137 | 5, 5 | 7, 7 | 24 | 5 |
| 150 | 28.26581 | 0.9929 | 30.61575 | 0.99084 | 8, 8 | 6, 6 | 28 | 4 |
| 200 | 31.10083 | 0.9914 | 30.84898 | 0.9907 | 6, 6 | 5, 5 | 22 | 6 |
| 250 | 27.41033 | 0.9933 | 27.70597 | 0.9925 | 9, 9 | 8, 8 | 34 | 2 |
| 300 | 37.17817 | 0.9877 | 38.33216 | 0.9856 | 1, 1 | 3, 3 | 8 | 8 |
| 350 | 26.6653 | 0.9937 | 25.64152 | 0.99358 | 10, 10 | 10, 10 | 40 | 1 |
| 400 | 36.48612 | 0.9882 | 38.98085 | 0.9851 | 2, 2 | 2, 2 | 8 | 8 |
| 450 | 35.18528 | 0.9890 | 40.22532 | 0.9841 | 3, 3 | 1, 1 | 8 | 8 |
| 500 | 28.3602 | 0.9929 | 27.34985 | 0.99269 | 7, 7 | 9, 9 | 32 | 3 |
Table 5. Network outcomes for ten proposed HBO-MLP swarm sizes.

| Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 50 | 49.45456 | 0.9782 | 46.48186 | 0.97875 | 1, 1 | 2, 2 | 6 | 9 |
| 100 | 47.41478 | 0.9799 | 46.34932 | 0.97887 | 4, 4 | 3, 3 | 14 | 7 |
| 150 | 47.87484 | 0.9796 | 44.38386 | 0.98064 | 3, 3 | 4, 4 | 14 | 7 |
| 200 | 40.9752 | 0.9851 | 38.16262 | 0.98573 | 9, 9 | 10, 10 | 38 | 1 |
| 250 | 48.13358 | 0.9793 | 48.15308 | 0.97718 | 2, 2 | 1, 1 | 6 | 9 |
| 300 | 46.59209 | 0.9806 | 39.55079 | 0.98466 | 5, 5 | 8, 8 | 26 | 4 |
| 350 | 45.74193 | 0.9813 | 43.03009 | 0.98182 | 7, 7 | 5, 5 | 24 | 5 |
| 400 | 39.97074 | 0.9858 | 38.31342 | 0.98561 | 10, 10 | 9, 9 | 38 | 1 |
| 450 | 46.52896 | 0.9807 | 42.55685 | 0.98222 | 6, 6 | 6, 6 | 24 | 5 |
| 500 | 42.47182 | 0.9839 | 41.92675 | 0.98275 | 8, 8 | 7, 7 | 30 | 3 |
Table 6. Network outcomes for ten proposed LCA-MLP swarm sizes.

| Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 50 | 34.47294 | 0.9895 | 35.40373 | 0.98773 | 1, 1 | 1, 1 | 4 | 10 |
| 100 | 30.54225 | 0.9917 | 30.22369 | 0.99107 | 4, 4 | 6, 6 | 20 | 6 |
| 150 | 31.50901 | 0.9912 | 31.1856 | 0.99049 | 3, 3 | 5, 5 | 16 | 8 |
| 200 | 28.6116 | 0.9927 | 31.57711 | 0.99025 | 8, 8 | 4, 4 | 24 | 5 |
| 250 | 30.26023 | 0.9919 | 28.95705 | 0.99181 | 5, 5 | 9, 9 | 28 | 3 |
| 300 | 29.91963 | 0.9921 | 29.47933 | 0.99151 | 7, 7 | 7, 7 | 28 | 3 |
| 350 | 31.53647 | 0.9912 | 34.09141 | 0.98863 | 2, 2 | 2, 2 | 8 | 9 |
| 400 | 28.26144 | 0.9929 | 29.3445 | 0.99159 | 10, 10 | 8, 8 | 36 | 2 |
| 450 | 28.3406 | 0.9929 | 27.67516 | 0.99252 | 9, 9 | 10, 10 | 38 | 1 |
| 500 | 30.24985 | 0.9919 | 31.94492 | 0.99002 | 6, 5 | 3, 3 | 17 | 7 |
Table 7. Network outcomes for ten proposed OOA-MLP swarm sizes.

| Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 50 | 80.24177 | 0.9414 | 82.22691 | 0.93189 | 5, 5 | 4, 4 | 18 | 6 |
| 100 | 99.9979 | 0.9074 | 93.41085 | 0.91114 | 2, 2 | 2, 2 | 8 | 9 |
| 150 | 60.22237 | 0.9674 | 61.38316 | 0.96264 | 9, 9 | 9, 9 | 36 | 2 |
| 200 | 75.74714 | 0.9480 | 74.62977 | 0.94425 | 6, 6 | 6, 6 | 24 | 5 |
| 250 | 85.32182 | 0.9335 | 81.59779 | 0.93296 | 4, 4 | 5, 5 | 18 | 6 |
| 300 | 101.28839 | 0.9049 | 105.2951 | 0.88556 | 1, 1 | 1, 1 | 4 | 10 |
| 350 | 71.63692 | 0.9536 | 71.28501 | 0.94926 | 7, 7 | 7, 7 | 28 | 4 |
| 400 | 70.02715 | 0.9557 | 70.14969 | 0.95091 | 8, 8 | 8, 8 | 32 | 3 |
| 450 | 85.97935 | 0.9324 | 86.26555 | 0.92475 | 3, 3 | 3, 3 | 12 | 8 |
| 500 | 58.72181 | 0.9691 | 55.03676 | 0.97008 | 10, 10 | 10, 10 | 40 | 1 |
Table 8. Network outcomes for ten proposed STOA-MLP swarm sizes.

| Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|
| 50 | 32.77473 | 0.9905 | 30.24719 | 0.99106 | 5, 5 | 5, 5 | 20 | 6 |
| 100 | 33.41437 | 0.9901 | 34.52783 | 0.98833 | 3, 3 | 3, 3 | 12 | 8 |
| 150 | 26.3452 | 0.9939 | 25.40095 | 0.9937 | 8, 8 | 8, 8 | 32 | 3 |
| 200 | 33.0703 | 0.9903 | 33.86622 | 0.98878 | 4, 4 | 4, 4 | 16 | 7 |
| 250 | 36.46323 | 0.9882 | 36.54035 | 0.98692 | 2, 2 | 2, 2 | 8 | 9 |
| 300 | 27.32019 | 0.9934 | 26.66101 | 0.99306 | 7, 7 | 6, 6 | 26 | 4 |
| 350 | 37.16543 | 0.9877 | 37.65716 | 0.9861 | 1, 1 | 1, 1 | 4 | 10 |
| 400 | 25.61274 | 0.9942 | 24.79169 | 0.9949 | 9, 9 | 10, 10 | 38 | 1 |
| 450 | 28.03065 | 0.9930 | 26.49107 | 0.99315 | 6, 6 | 7, 7 | 26 | 4 |
| 500 | 24.80362 | 0.9946 | 24.89665 | 0.99395 | 10, 10 | 9, 9 | 38 | 1 |
Table 9. R2 and RMSE results for different suggested hybrid approaches.

| Method | Swarm Size | Training RMSE | Training R2 | Testing RMSE | Testing R2 | Scoring (Training) | Scoring (Testing) | Total Score | Rank |
|---|---|---|---|---|---|---|---|---|---|
| MVO-MLP | 400 | 20.80626 | 0.9962 | 20.31492 | 0.99598 | 6, 6 | 6, 6 | 24 | 1 |
| COA-MLP | 350 | 26.6653 | 0.9937 | 25.64152 | 0.99358 | 4, 4 | 4, 4 | 16 | 3 |
| HBO-MLP | 400 | 39.97074 | 0.9858 | 38.31342 | 0.98561 | 2, 2 | 2, 2 | 8 | 5 |
| LCA-MLP | 450 | 28.3406 | 0.9929 | 27.67516 | 0.99252 | 3, 3 | 3, 3 | 12 | 4 |
| OOA-MLP | 500 | 58.72181 | 0.9691 | 55.03676 | 0.97008 | 1, 1 | 1, 1 | 4 | 6 |
| STOA-MLP | 500 | 24.80362 | 0.9946 | 24.89665 | 0.99395 | 5, 5 | 5, 5 | 20 | 2 |
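The RMSE and R2 values reported throughout Tables 2–9 follow their conventional definitions; a dependency-free sketch:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted reactance."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Note the two metrics can disagree on ranking when the training and testing targets have different spreads, which is why the tables score RMSE and R2 separately before summing.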
Table 10. Labels description.

| Input parameter | P (kW) | cos φFL | Tm/TFL | TST/TFL | IST/IFL | ωFL (rpm) | ηFL | Cage Number |
|---|---|---|---|---|---|---|---|---|
| Label | Wi1 | Wi2 | Wi3 | Wi4 | Wi5 | Wi6 | Wi7 | Wi8 |

Output: Reactance Xm (ohms).
Table 11. Values of Wi.

| i | Wi1 | Wi2 | Wi3 | Wi4 | Wi5 | Wi6 | Wi7 | Wi8 | bi |
|---|---|---|---|---|---|---|---|---|---|
| 1 | -0.2238 | 0.7354 | -0.9892 | 0.5626 | -0.4626 | -0.3124 | 1.2256 | 0.2762 | 1.4507 |
| 2 | -0.0923 | 1.0071 | 0.4144 | -0.2165 | 0.6689 | -0.9779 | 0.1880 | -0.6596 | -1.0351 |
| 3 | 0.3254 | 1.5829 | -0.5839 | 0.3391 | -0.7296 | 0.1982 | 0.5122 | 0.1488 | -0.7934 |
| 4 | 0.1890 | -0.7108 | -0.8677 | 0.1120 | -0.8135 | -1.0654 | 0.5661 | -1.0478 | 0.0897 |
| 5 | 0.7587 | -0.1117 | -0.2094 | 0.3578 | -0.8805 | -0.8035 | -0.4683 | -0.2130 | 0.8239 |
| 6 | -0.2406 | -0.1814 | -0.4286 | 1.2509 | -1.0440 | 0.6871 | -0.3261 | 0.3638 | -1.3322 |
| 7 | 0.2390 | -0.3724 | 0.1786 | 0.9425 | -0.5653 | 0.9022 | -0.1640 | -0.1949 | -2.1776 |
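Table 11 lists the input-to-hidden weights Wi1–Wi8 and biases bi of the seven hidden neurons in the selected network, with inputs labeled as in Table 10. A sketch of the hidden-layer computation for a normalized input vector; the tanh activation is an assumption, and the hidden-to-output weights needed to complete the Xm prediction are not reproduced in this excerpt:

```python
import math

# Input-to-hidden weights Wi1..Wi8 (rows i = 1..7) and biases bi from Table 11.
W = [
    [-0.2238, 0.7354, -0.9892, 0.5626, -0.4626, -0.3124, 1.2256, 0.2762],
    [-0.0923, 1.0071, 0.4144, -0.2165, 0.6689, -0.9779, 0.1880, -0.6596],
    [0.3254, 1.5829, -0.5839, 0.3391, -0.7296, 0.1982, 0.5122, 0.1488],
    [0.1890, -0.7108, -0.8677, 0.1120, -0.8135, -1.0654, 0.5661, -1.0478],
    [0.7587, -0.1117, -0.2094, 0.3578, -0.8805, -0.8035, -0.4683, -0.2130],
    [-0.2406, -0.1814, -0.4286, 1.2509, -1.0440, 0.6871, -0.3261, 0.3638],
    [0.2390, -0.3724, 0.1786, 0.9425, -0.5653, 0.9022, -0.1640, -0.1949],
]
b = [1.4507, -1.0351, -0.7934, 0.0897, 0.8239, -1.3322, -2.1776]

def hidden_layer(x):
    """Hidden activations h_i = f(sum_j Wij * x_j + b_i) for a normalized
    8-feature input vector x (labels as in Table 10). The tanh activation
    is an assumption; the excerpt does not restate the transfer function."""
    return [math.tanh(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]
```

A final output neuron would combine these seven activations linearly to produce the predicted reactance Xm; its weights are not part of Table 11, so the sketch stops at the hidden layer.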
Gör, H. Feasibility of Six Metaheuristic Solutions for Estimating Induction Motor Reactance. Mathematics 2024, 12, 483. https://doi.org/10.3390/math12030483
