Article

Deep Learning Network Based on Improved Sparrow Search Algorithm Optimization for Rolling Bearing Fault Diagnosis

Mechanical and Electrical Engineering, Changchun University of Technology, Yan’an Avenue, Changchun 130012, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(22), 4634; https://doi.org/10.3390/math11224634
Submission received: 5 October 2023 / Revised: 28 October 2023 / Accepted: 6 November 2023 / Published: 13 November 2023

Abstract

In recent years, deep learning has been increasingly used in the fault diagnosis of rotating machinery. However, rolling bearing fault signals acquired in practice often contain ambient noise, making it difficult to determine the optimal parameter values of the diagnosis model. In this paper, an improved sparrow search algorithm (LSSA) based on lens-imaging reverse learning and Gaussian–Cauchy mutation is proposed. The lens-imaging reverse learning strategy enhances the traversal capability of the algorithm and allows a better balance between exploration and exploitation. The performance of the proposed LSSA is then tested on benchmark functions. Finally, LSSA is used to find the optimal number of modal components K and the optimal penalty factor α in VMD-GRU, which in turn realizes the fault diagnosis of rolling bearings. The experimental results show that the model achieves 96.61% accuracy in rolling bearing fault diagnosis, which proves the effectiveness of the method.

1. Introduction

Rolling bearings are important components of mechanical transmission systems and are widely used in various types of rotating machinery, such as hydroelectric generators and underground exploration equipment [1,2,3,4]. Therefore, real-time monitoring and fault diagnosis of the bearing condition are very important for protecting machinery and equipment. Deep learning methods are commonly used in rolling bearing fault detection; recurrent neural networks (RNN) give the network a degree of memory by introducing memory units [5,6,7]. However, RNNs suffer from vanishing and exploding gradients, problems that the long short-term memory network (LSTM) and the gated recurrent unit neural network (GRU) solve. An, Yiyao et al. [8] introduced a sparse attention mechanism into LSTM for the fault diagnosis of rotating machinery, which has significant advantages in reducing random interference and enhancing feature information. Zhong, Cheng et al. [9] proposed a bi-directional long short-term memory fault diagnosis method for rolling bearings based on segmented interception spectrum analysis and information fusion, which reduces the impact on network training of feature redundancy arising from modal mixing by comparing the AR spectra of the components corresponding to different fault locations and fusing all features. Wang, Haitao et al. [10] introduced a self-calibrating convolutional module into the residual network and proposed a recurrent neural network based on two-stage attention, achieving good results in bearing fault diagnosis experiments.
In practice, rolling bearings produce abnormal vibration signals when they fail. However, because the equipment usually operates under multiple load conditions, the characteristic information of the fault is often weak and disturbed by the noise of the surrounding environment, and the vibration signal is mixed with a large amount of redundant information [11,12]. This makes it difficult to extract and recognize its characteristic frequency using traditional fault diagnosis techniques. In recent years, many researchers have proposed solutions. Huang et al. [13] proposed empirical mode decomposition (EMD) and applied it to process nonlinear signals, but EMD suffers from problems such as mode mixing and under-enveloping during the decomposition process. To solve these problems, methods such as local mean decomposition (LMD) [14,15] and ensemble empirical mode decomposition (EEMD) [16,17] have been proposed and achieved a certain degree of effectiveness; although LMD and EEMD can compensate for the defects of EMD to a certain extent, these methods still belong to recursive mode decomposition algorithms and cannot fundamentally solve the end effects and the error accumulation inherent in the recursive decomposition process [18]. Dragomiretskiy, K. et al. [19] proposed the variational modal decomposition (VMD) algorithm, which can decompose a signal into amplitude-modulated components with real physical significance, with high decomposition accuracy and fast convergence [20]. As an emerging time–frequency analysis method, VMD has certain advantages in solving the signal decomposition problem, but it still needs different characteristic parameters for different signals, and the modal decomposition will be insufficient if the parameters are not selected properly [21]. Therefore, many researchers have introduced swarm intelligence algorithms into the parameter selection of VMD. Ding, Jiakai et al. [22] proposed a genetic mutation particle swarm algorithm to optimize the variational modal decomposition algorithm (GMPSO-VMD), which finds the optimal combination of parameters for the VMD algorithm with the minimum envelope entropy as the objective function and accurately extracted the frequency characteristics of the four fault states of rolling bearings. Tan, Shuai et al. [23] introduced the cuckoo algorithm into the variational modal decomposition, calculating the envelope entropy with the maximum kurtosis between all components and the original signal as the feature input of an autoencoder, and the experimental results show that this method is more effective in extracting the initial fault characteristics of the bearing.
The Sparrow Search Algorithm (SSA) is a metaheuristic proposed in recent years [24], inspired by sparrows searching for food and escaping from their pursuers. The sparrow search algorithm has many advantages, such as simple implementation, few adjustable parameters, high search accuracy, robustness, and stability [25,26], and has therefore attracted a wide range of researchers. However, it also suffers from slow search speed, decreasing population diversity in the late stage of the search, and a tendency to fall into local optima [27,28]. To solve these problems, Chengtian Ouyang et al. [29] proposed a learning sparrow search algorithm that introduces an improved sine and cosine mechanism, which achieves favorable results in path planning. Farhad Soleimanian Gharehchopogh et al. [30] summarize the various improvements of the sparrow search algorithm in recent years and systematically discuss its application to neural networks and deep learning. Chenglong Zhang et al. [31] proposed a sparrow search algorithm based on a chaotic mechanism with excellent results in randomizing network configuration parameters. Yanlong Zhu et al. [32] proposed an adaptive sparrow search algorithm for parameter selection in a proton exchange membrane fuel cell model, and the results show that the method can accurately select the unknown parameters in the model.
Although researchers have tried various methods to improve the searching ability of SSA, its limitations in search capability and population diversity remain. In this paper, a deep-learning-based rolling bearing fault diagnosis method (LSSA-VMD-GRU) is proposed. The main work of this paper is as follows:
(1)
A lens imaging reverse learning strategy is used to find the initial population locations, improve global search, and enhance the quality of the initial population.
(2)
The Gauss–Cauchy mechanism of variation introduces variance factors into populations and enhances population diversity.
(3)
The proposed LSSA is used to optimize the hyperparameters of the VMD-GRU network to improve the accuracy of rolling bearing fault diagnosis.
The other sections of this paper are organized as follows: Section 2 presents the original SSA, the VMD algorithm, and the gated recurrent unit neural network. Section 3 presents the proposed LSSA algorithm. Section 4 presents the benchmark function experiments and the related experiments on rolling bearing fault detection. Section 5 summarizes the related work in this paper.

2. Related Work

2.1. Basic Sparrow Search Algorithm

The SSA models a sparrow flock as discoverers, followers, and vigilantes. A discoverer with a higher fitness value can find food-rich areas for the whole population and provides foraging directions and areas for the followers. The discoverer position update equation is:
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{max}}\right), & W < ST \\ X_{i,j}^{t} + Z \cdot L, & W \geq ST \end{cases}$$
where $X_{i,j}^{t}$ represents the current location of the sparrow, $W \in [0,1]$ is a uniform random number, $ST \in [0.5, 1]$ is the safety threshold, $Z$ is a random number drawn from a normal distribution, and $L$ is a $1 \times d$ matrix of ones. When $W < ST$, there is no predator in the vicinity and the sparrow can search globally over a wide range. When $W \geq ST$, the early-warning agents in the population have detected danger and send a warning signal to the other sparrows; the sparrow population quickly flees the area, engages in anti-predator behavior, and updates its position.
The follower will follow the discoverer in order to increase its fitness value, feeding with the discoverer and increasing its energy reserves. The follower position update formula is:
$$X_{i,j}^{t+1} = \begin{cases} Z \cdot \exp\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}}\right), & i > n/2 \\ X_{p}^{t+1} + \left| X_{i,j}^{t} - X_{p}^{t+1} \right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases}$$
where $X_{worst}^{t}$ is the globally worst position of the previous generation and $X_{p}^{t+1}$ is the most food-rich position found so far by the current sparrows. $A$ is a $1 \times d$ row vector whose elements are randomly assigned 1 or $-1$, with $A^{+} = A^{T}(AA^{T})^{-1}$. When $i > n/2$, the $i$-th follower is in a food-deficient state (low fitness value) and should immediately fly to a food-rich area to forage.
When aware of danger, the sparrows on watch (vigilantes) update their positions as:
$$X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \left| X_{i,j}^{t} - X_{best}^{t} \right|, & f_i > f_g \\ X_{i,j}^{t} + K \cdot \left( \dfrac{\left| X_{i,j}^{t} - X_{worst}^{t} \right|}{f_i - f_w + \varepsilon} \right), & f_i = f_g \end{cases}$$
where $X_{best}^{t}$ is the most food-rich position of the previous generation, $\beta$ is a step-size control parameter, $K \in [-1, 1]$ is a uniform random number, and $\varepsilon$ is a very small constant that avoids division by zero; $f_i$ is the fitness of the current sparrow, and $f_g$ and $f_w$ are the current global best and worst fitness values. If $f_i > f_g$, the sparrow at the edge of the group has to move toward the center. If $f_i = f_g$, the sparrow at the center moves closer to the other sparrows.
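As a rough illustration, the three update rules above can be sketched in Python. The function names, population layout, and random-number choices are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Sketch of the three SSA position updates (discoverer, follower, vigilante).
# Symbols follow the equations above.

def update_discoverer(X, i, alpha, iter_max, W, ST):
    if W < ST:                                   # no predator nearby: wide exponential search
        return X[i] * np.exp(-i / (alpha * iter_max))
    Z = np.random.randn()                        # danger detected: Gaussian step
    return X[i] + Z * np.ones_like(X[i])

def update_follower(X, i, n, X_worst, X_p):
    d = X.shape[1]
    if i > n / 2:                                # food-deficient follower flies elsewhere
        Z = np.random.randn()
        return Z * np.exp((X_worst - X[i]) / (i ** 2))
    A = np.random.choice([-1.0, 1.0], size=d)    # random +/-1 row vector
    A_plus = A / d                               # A^T (A A^T)^{-1} collapses to A/d for a +/-1 vector
    return X_p + np.abs(X[i] - X_p) * A_plus

def update_vigilante(X, i, X_best, X_worst, f_i, f_g, f_w, eps=1e-10):
    if f_i > f_g:                                # edge sparrow moves toward the centre
        beta = np.random.randn()
        return X_best + beta * np.abs(X[i] - X_best)
    K = np.random.uniform(-1.0, 1.0)             # centre sparrow moves closer to the others
    return X[i] + K * (np.abs(X[i] - X_worst) / (f_i - f_w + eps))
```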

2.2. Variational Modal Decomposition Algorithm (VMD)

VMD is an algorithm for signal processing and data analysis proposed in 2014. The input signal $f(t)$ is decomposed into multiple intrinsic mode functions (IMFs), where each IMF component after decomposition is an amplitude- and frequency-modulated component with a different center frequency; by demodulating each mode with an exponential term, the following variational constrained problem is obtained:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\}, \quad \text{s.t.} \ \sum_{k} u_k = f$$
where $u_k$ is each IMF component, $\omega_k$ is the corresponding center frequency, $\delta(t)$ is the impulse function, and $*$ denotes convolution.
The above problem is converted into an unconstrained variational problem by introducing the Lagrange multiplier operator $\lambda$ and the quadratic penalty factor $\alpha$. The expressions for the iterative solutions $\hat{u}_k^{n+1}(\omega)$ and $\omega_k^{n+1}$ are given below:
$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \hat{\lambda}(\omega)/2}{1 + 2\alpha(\omega - \omega_k)^2}$$
$$\omega_k^{n+1} = \frac{\int_0^{\infty} \omega \left| \hat{u}_k(\omega) \right|^2 d\omega}{\int_0^{\infty} \left| \hat{u}_k(\omega) \right|^2 d\omega}$$
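The alternating updates of Equations (5) and (6) can be sketched directly in the frequency domain. The following simplified version (no signal mirroring, illustrative default parameters) is an assumption-laden sketch, not the reference VMD implementation:

```python
import numpy as np

def vmd_sketch(f, K=3, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500):
    """Alternate the Wiener-filter mode update (Eq. 5) and the
    power-weighted centre-frequency update (Eq. 6)."""
    N = len(f)
    f_hat = np.fft.fftshift(np.fft.fft(f))
    freqs = np.arange(N) / N - 0.5                    # normalised axis aligned with fftshift
    u_hat = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.0, 0.5, K, endpoint=False)  # initial centre frequencies
    lam = np.zeros(N, dtype=complex)                  # Lagrange multiplier lambda
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Equation (5): Wiener filter centred at omega[k]
            u_hat[k] = (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            # Equation (6): centre of gravity of the mode's power spectrum
            half = freqs >= 0
            power = np.abs(u_hat[k, half]) ** 2
            omega[k] = np.sum(freqs[half] * power) / (np.sum(power) + 1e-16)
        lam = lam + tau * (u_hat.sum(axis=0) - f_hat)
        diff = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-16)
        if diff < tol:
            break
    u = np.real(np.fft.ifft(np.fft.ifftshift(u_hat, axes=-1), axis=-1))
    return u, omega
```

For a two-tone signal, the recovered center frequencies land near the tone frequencies, which is the behavior the paper relies on when tuning K and α.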

2.3. Gated Recurrent Unit Neural Network (GRU)

The structures of the GRU and LSTM networks are similar: both compute through gating mechanisms. The difference is that the GRU has only an update gate and a reset gate. The two perform comparably, but the GRU is faster because it has fewer parameters. The network structure of the GRU is shown in Figure 1.
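For concreteness, a single GRU step can be sketched in NumPy; the weight shapes, parameter names, and initialisation here are illustrative assumptions, not the network configuration used in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: only an update gate z and a reset gate r,
    versus the three gates plus cell state of an LSTM."""
    Wz, Uz, bz = params["z"]                   # update-gate weights
    Wr, Ur, br = params["r"]                   # reset-gate weights
    Wh, Uh, bh = params["h"]                   # candidate-state weights
    z = sigmoid(Wz @ x + Uz @ h + bz)          # how much of the new candidate to take
    r = sigmoid(Wr @ x + Ur @ h + br)          # how much old state feeds the candidate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1.0 - z) * h + z * h_tilde         # interpolate old state and candidate
```

The three weight triples (versus four in an LSTM) are what makes the GRU cheaper to train.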

3. The Proposed LSSA

3.1. Lens Imaging Reverse Learning Strategy

Lens imaging reverse learning is an improved method that expands the search range by calculating the reverse solution of the current position, as shown in Figure 2. This method can expand the scope of the global search and improve the overall quality of individuals in the initial population.
The model treats the y-axis as a convex lens, with [a, b] the search interval of the solution. There is a point P of height h whose projection on the x-axis is x. Imaged through the lens, P forms an inverted real image P* on the other side, the height of which is h′ and the projection of which on the x-axis is x′. The mapping method is as follows:
$$\frac{(a+b)/2 - x}{x' - (a+b)/2} = \frac{h}{h'}$$
Let $k = h / h'$ be the scaling factor of the lens; rearranging gives:
$$x' = \frac{a+b}{2} + \frac{a+b}{2k} - \frac{x}{k}$$
Let $x_j$ represent the current sparrow individual and $x_j'$ the individual after lens imaging reversal; the lens imaging reverse learning strategy can then be applied to population initialization, as shown in Equation (9):
$$x_j' = \frac{a_{max} + b_{max}}{2} + \frac{a_{max} + b_{max}}{2k} - \frac{x_j}{k}$$
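Assuming per-dimension bounds a and b, the mapping of Equation (9) is a one-liner; note that with k = 1 it reduces to the classic opposition-based learning point a + b − x:

```python
import numpy as np

def lens_opposition(x, a, b, k=1.0):
    """Lens-imaging reverse point of x in [a, b] per Equation (9).
    k is the lens scaling factor h/h'; k = 1 gives x' = a + b - x."""
    return (a + b) / 2.0 + (a + b) / (2.0 * k) - x / k
```

In initialization, one would typically keep the better of x and its lens-imaged counterpart for each individual.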

3.2. Gaussian Cauchy Variation Mechanism

When the SSA iterates into its later stages, the sparrow population moves closer to the best individual found so far; this group aggregation behavior leads to insufficient population diversity and premature convergence. Once trapped in a local optimum, the algorithm wastes computational effort or even terminates incorrectly at a local optimal solution.
In improving swarm intelligence algorithms, the evolutionary process of biological populations is often simulated, and introducing mutation factors into the population evolution is one of the common methods. In this paper, the Gauss–Cauchy mutation mechanism is introduced to enhance the diversity of populations.
The Gaussian distribution, also known as the normal distribution, is denoted as $Gauss(\mu, \sigma^2)$, and its probability density is given in Equation (10):
$$f_{Gauss}(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
where the random variable $x$ obeys a Gaussian distribution with mathematical expectation $\mu$ and variance $\sigma^2$; at $\mu = 0$ and $\sigma = 1$ it is the standard Gaussian distribution. A mutation factor drawn from a Gaussian distribution with a large variance enhances the global search ability of the population: individuals have a wider mutation range and more of them can reach different locations in the search space. If the algorithm is stuck in a local optimum, such a Gaussian mutation factor gives it a better ability to escape. A mutation factor drawn from a Gaussian distribution with a small variance gives the algorithm better local search ability and can locate accurate local extrema earlier, but this also sacrifices some of the escape ability, which can easily cause the algorithm to fall into a local optimum.
In order to solve the problems associated with the Gaussian distribution, researchers have also proposed the use of the Cauchy distribution, denoted as $Cauchy(x_0, \gamma)$, which is formulated as in Equation (11).
$$f_{Cauchy}(x) = \frac{1}{\pi} \cdot \frac{\gamma}{(x - x_0)^2 + \gamma^2}$$
where the random variable $x \in (-\infty, \infty)$ obeys a Cauchy distribution with scale parameter $\gamma$ and location parameter $x_0$; at $\gamma = 1$ and $x_0 = 0$ it is the standard Cauchy distribution. The cumulative distribution function of the Cauchy distribution is given in Equation (12).
$$F_{Cauchy}(x) = \frac{1}{\pi} \cdot \arctan\left(\frac{x - x_0}{\gamma}\right) + \frac{1}{2}$$
The curves of the Gaussian and Cauchy distributions are similar in shape, but the Gaussian curve is higher in the center than the Cauchy curve, while the Cauchy curve is higher in the tails. A comparison of the Gaussian and Cauchy distributions is shown in Figure 3.
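A minimal sketch of a Gauss–Cauchy mutation operator is given below; the linear weighting schedule between the two distributions (Cauchy-heavy early for escape, Gaussian-heavy late for refinement) is an illustrative assumption rather than the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_cauchy_mutate(x_best, t, T):
    """Perturb the current best individual with a blend of Gaussian noise
    (good local refinement) and heavy-tailed Cauchy noise (good escape)."""
    w = t / T                                      # grows from 0 to 1 over the run
    gauss = rng.normal(0.0, 1.0, size=x_best.shape)
    cauchy = rng.standard_cauchy(size=x_best.shape)
    return x_best * (1.0 + w * gauss + (1.0 - w) * cauchy)
```

As a sanity check on Equation (12), the standard Cauchy CDF evaluates to 1/2 at x = 0, i.e. the heavy-tailed noise is symmetric about the current position.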
The pseudo-code of the proposed LSSA is shown in Algorithm 1.
Algorithm 1 Pseudo-code for LSSA.
/* algorithm initialization phase */
1. Setting the basic parameters of the algorithm
2. Population Initialization by Lens Imaging Reverse Learning Algorithm
3. while ( t < T max )
4. Individuals are ranked according to their fitness values and the best individual $X_{best}$ and the worst individual $X_{worst}$ are identified.
5. for i = 1 : $n \cdot P_d$
6. Update discoverer position
7. end for
8. for i = ($n \cdot P_d$ + 1) : n
9. Update follower position
10. end for
11. for i = 1 : $V_d$
12. Update early-warning (vigilante) positions
13. end for
14. Calculate the fitness values of mutated individuals based on the Gauss–Cauchy mutation
15. Compare the fitness values of the mutated individuals and update them to the current optimal position if the position of the mutated individuals is better
16. i t e r = i t e r + 1
17. end while
18. Output the global optimal solution
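Putting the pieces of Algorithm 1 together, a toy LSSA-style loop on a simple test function might look as follows. The step rules, schedules, bounds, and proportions here are simplified sketches and illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    return float(np.sum(x ** 2))

def lssa_minimize(fn, dim=5, n=30, T=100, lb=-5.0, ub=5.0, pd=0.2, st=0.8):
    # Lens-imaging reverse initialisation (k = 1): keep the better of x and a + b - x
    X = rng.uniform(lb, ub, size=(n, dim))
    X_op = lb + ub - X
    for i in range(n):
        if fn(X_op[i]) < fn(X[i]):
            X[i] = X_op[i]
    fit = np.array([fn(x) for x in X])
    for t in range(1, T + 1):
        order = np.argsort(fit)
        X, fit = X[order], fit[order]
        best, worst = X[0].copy(), X[-1].copy()
        n_d = max(1, int(pd * n))
        for i in range(n):
            if i < n_d:                                   # discoverers
                if rng.random() < st:
                    X[i] = X[i] * np.exp(-i / (rng.random() * T + 1e-12))
                else:
                    X[i] = X[i] + rng.normal(size=dim)
            elif i > n / 2:                               # food-deficient followers
                X[i] = rng.normal() * np.exp((worst - X[i]) / (i ** 2))
            else:                                         # followers tracking the best
                A = rng.choice([-1.0, 1.0], size=dim)
                X[i] = best + np.abs(X[i] - best) * (A / dim)
            X[i] = np.clip(X[i], lb, ub)
            fit[i] = fn(X[i])
        # Gauss-Cauchy mutation of the current best (greedy acceptance)
        w = t / T
        trial = np.clip(best * (1 + w * rng.normal(size=dim)
                                + (1 - w) * rng.standard_cauchy(size=dim)), lb, ub)
        if fn(trial) < fit.min():
            j = int(np.argmin(fit))
            X[j], fit[j] = trial, fn(trial)
    j = int(np.argmin(fit))
    return X[j], float(fit[j])
```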

3.3. Rolling Bearing Fault Diagnosis Method Based on LSSA-VMD-GRU

For the problem that it is difficult to choose appropriate variational modal parameters for rolling bearing vibration signals, this paper adopts the proposed LSSA method to adaptively find the optimal parameters of VMD. Several modal components containing fault information are selected via the correlation coefficient, the energy entropy of each selected component is calculated, and the entropies are input into the GRU deep learning network as a feature vector for fault classification. The specific implementation steps are as follows:
(1)
Signal acquisition of rolling bearings in different states to collect raw data.
(2)
Find the optimal solution of the objective function by LSSA and obtain the optimal parameter combination of LSSA-VMD.
(3)
The optimal parameters are used to obtain IMF components from the variational modal decomposition of the fault signals of the four types of rolling bearings, and the energy entropy is extracted as the feature vector of the classifier by screening the IMF components that contain obvious fault information.
(4)
Input the feature vectors into the GRU fault diagnosis model, train to obtain the prediction model of each state, and input the collected test signal data set into the model to realize the fault diagnosis of rolling bearings.
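Steps (3) and (4) hinge on screening IMFs and building the energy-entropy feature vector; a minimal sketch of that feature extraction, with an assumed correlation cut-off of 0.3 (the paper does not state its threshold), might look like:

```python
import numpy as np

def select_imfs(imfs, signal, threshold=0.3):
    """Keep IMFs whose correlation coefficient with the raw signal exceeds
    a threshold; the 0.3 cut-off is an illustrative assumption."""
    keep = [u for u in imfs if abs(np.corrcoef(u, signal)[0, 1]) >= threshold]
    return np.array(keep)

def energy_entropy(imfs):
    """Energy entropy of a set of IMF components: normalise each
    component's energy and take the Shannon entropy."""
    E = np.sum(imfs ** 2, axis=1)            # energy of each IMF
    p = E / np.sum(E)                        # energy distribution across IMFs
    return float(-np.sum(p * np.log(p + 1e-12)))
```

The resulting scalar (or a vector of per-component entropies) is what gets fed to the GRU classifier.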

4. Experimental Verification and Analysis

4.1. Benchmark Experiments

In order to see more intuitively how LSSA performs in solving simple and complex mathematical problems, this paper compares LSSA with several common and recent heuristics, including Particle Swarm Optimization (PSO) [33], the Grey Wolf Optimizer (GWO) [34], the Whale Optimization Algorithm (WOA) [35], the original Sparrow Search Algorithm (SSA), and an improved sparrow search algorithm (NSSA) [36]. The experimental equipment is shown in Table 1.
In order to more fairly measure the performance of the PSO, GWO, WOA, SSA, NSSA, and LSSA algorithms and to ensure the objectivity of the evaluation, the maximum number of iterations T of the algorithms in this paper is uniformly set to 300. The parameter settings that each group intelligence algorithm has individually are shown in Table 2.
In order to verify the optimization-seeking performance of LSSA, it is simulated and tested on the 23 benchmark functions proposed by Xin-She Yang et al. [37], as shown in Table 3, Table 4 and Table 5. These functions test the optimization performance of swarm intelligence algorithms from multiple perspectives, so they have been widely used in algorithm performance testing with good results. The test functions in the set are continuous and can be categorized into single-peak benchmark functions, multi-peak benchmark functions, and multi-dimensional multimodal benchmark functions. The single-peak benchmark functions possess few local extrema and are suitable for testing the convergence accuracy of an algorithm; the multi-peak benchmark functions have more local extrema and can be used to evaluate an algorithm's performance characteristics; and the multi-dimensional multimodal benchmark functions have a large number of local extrema, under which an algorithm very easily falls into a local optimum, so they can test an algorithm's ability to escape local optima.
Table 6 shows the results of LSSA and other algorithms in the benchmark function experiments, of which the best, mean, and std obtained were counted separately. From the comparison result of the mean, it can be seen that LSSA has stronger mining and exploration ability performances compared to other algorithms, and from the comparison result of the standard deviation, it can be seen that LSSA possesses stronger stability. Taking the above results together, it can be concluded that LSSA has excellent results in balancing the ability of algorithms to develop globally and mine locally.
To further illustrate the performance of LSSA, the evolutionary curves of LSSA and the above algorithms are compared under the same conditions. The optimization-seeking evolutionary curves for each function are shown in Figure 4, Figure 5, Figure 6 and Figure 7. The quality of the LSSA solution is significantly higher than that of the other classical algorithms and NSSA for both the single-peak benchmark and multimodal functions, and the solution speed of LSSA is also improved to different degrees.

4.2. Fault Diagnosis Experiment of Rolling Bearings from Case Western Reserve University

In this paper, the bearing dataset from Case Western Reserve University (CWRU) [38] is used to validate the proposed methodology; the test rig is shown in Figure 8. A sampling frequency of 12 kHz is selected, with signals from the motor drive end covering the normal state and three kinds of fault signals with a fault diameter of 0.1778 mm: the outer ring fault, the inner ring fault, and the rolling element fault. The composition of the data samples is shown in Table 7.
The obtained data are uniformly decomposed using the LSSA-VMD algorithm, and the sample set is imported into the GRU neural network for training to obtain a well-performing training model. The VMD-GRU- and LSSA-VMD-GRU-based fault diagnosis models are obtained by selecting 400 rounds of iterations as experimental outputs and calculating the accuracy of the fault diagnosis methods. Its training curve is shown in Figure 9.
Analyzing the experimental results in Figure 9, the LSSA-VMD-GRU model converges rapidly relative to the VMD-GRU fault diagnosis model in the early stage of training; in the later stage, its accuracy curve stabilizes without significant fluctuation and the model achieves a high accuracy rate. The accuracy of VMD-GRU shows an overall increasing trend early in training, but in the later stage the model is unstable, with large fluctuations in accuracy. It can be concluded that the LSSA-VMD-GRU network model is able to obtain a high accuracy rate when dealing with complex vibration signal fault classification.
Figure 10 shows the training curves of the loss values of GRU and LSSA-VMD-GRU. Analyzing the training results in Figure 10, the loss of LSSA-VMD-GRU decreases dramatically early in training, and the loss value reaches a very small value and converges stably in the late iterations. The loss value of GRU decreases slowly in the early stage and fluctuates considerably in the later iterations, where a discrete phenomenon occurs.
In order to better validate the accuracy of the diagnostic method, this paper evaluates the performance of the algorithm using accuracy, precision, recall, and F1 score as the measures. Their calculation methods are as follows:
$$Accuracy = \frac{TP + TN}{N}$$
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
$$F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$$
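These four measures follow directly from the confusion-matrix counts; a small helper makes the arithmetic of Equations (13)–(16) concrete (TP, TN, FP, FN are the usual true/false positive/negative counts, and N is their total):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts,
    matching Equations (13)-(16)."""
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```

In the multi-class bearing experiments, these quantities are computed per class and averaged.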
Figure 11 shows the confusion matrix obtained by LSSA-VMD-GRU for the classification of faults in rolling bearings. Table 8 shows the average values of accuracy, precision, recall, and F1 score obtained from the two fault diagnosis methods after 20 experiments. The experiments show that LSSA-VMD-GRU improves the accuracy, precision, recall, and F1 score by 38.29%, 34.2%, 66.48%, and 52.25% over CNN, respectively. LSSA-VMD-GRU improved the accuracy, precision, recall, and F1 score by 24.95%, 20.94%, 67.02%, and 48.05% over VMD-GRU, respectively. From the above indicators, it can be concluded that the diagnostic accuracy and stability of the method proposed in this paper are significantly stronger and it can accurately classify the vibration signals under four different conditions in rolling bearing fault diagnosis. LSSA-VMD-GRU is an effective fault diagnosis method.

4.3. Fault Diagnosis of Bearing at Paderborn University

In this paper, in order to verify the general applicability of the proposed methodology in rolling bearing fault diagnosis methods, a data set from the University of Paderborn, Germany [39] is selected to conduct fault diagnosis experiments on the proposed method, and its test bed is shown in Figure 12.
The test bed consists of a motor, a torque-measurement shaft, rolling bearings, a flywheel, and a load motor. Each bearing has a different type and level of damage, and the whole set can be categorized into three states: inner ring failure, outer ring failure, and healthy. The rolling bearing failures can be categorized as artificial damage and damage from accelerated lifetime experiments. Due to the limited number of artificially damaged bearings, the vibration data of a normal bearing at a rotational speed of 1500 r/min, bearing outer ring failures of degree 1–2, and bearing inner ring failures of degree 1–2 are selected for the experiment; the samples are set as shown in Table 9.
As in the Case Western Reserve University rolling bearing experiment, a comparison of the proposed method with CNN and VMD-GRU is performed. Its training curve is shown in Figure 13 and its loss value curve is shown in Figure 14. From the results, it can be seen that LSSA-VMD-GRU still achieves the best accuracy with the fastest speed, and the iteration curve of the proposed algorithm is stable.
Figure 15 shows the confusion matrix obtained by the proposed method for rolling bearing fault diagnosis on this dataset, and several models mentioned above are evaluated with the evaluation metrics of Equations (13)–(16), of which the experimental results are shown in Table 10.
As shown in Table 10, LSSA-VMD-GRU improves the accuracy, precision, recall, and F1 score over CNN by 26.12%, 26.42%, 56.58%, and 44.6%, respectively. LSSA-VMD-GRU improves the accuracy, precision, recall, and F1 score over VMD-GRU by 24.13%, 26.39%, 48%, and 26.82%, respectively. In summary, the two sets of experiments show that the LSSA-VMD-GRU model can better realize the fault identification of rolling bearings, which provides a key technology for the intelligent fault diagnosis of rolling bearings.

5. Conclusions

In order to solve the problem that the parameters of traditional deep learning networks are difficult to select when dealing with complex rolling bearing fault diagnosis, this paper proposes a sparrow search algorithm based on lens imaging reverse learning and Gaussian–Cauchy mutation. On the 23 benchmark functions, LSSA shows the best optimization-seeking performance and excellent stability compared to the other algorithms. In addition, in a rolling bearing fault diagnosis experiment on the Case Western Reserve University dataset, the proposed LSSA is used to search for the optimal parameters of VMD-GRU. The accuracy of this model is 96.61%, the precision is 93.36%, the recall is 98.49%, and the F1 score is 92.19%. All indicators of the proposed method improve significantly compared to the pre-optimization network, and the experiments prove the effectiveness of LSSA in rolling bearing fault diagnosis.
In the future, with the further improvement and development of swarm intelligence algorithms as well as deep learning networks and other related sciences, the proposed method will be explored for applications within the fields of image processing [40], unmanned vehicles [41], production scheduling [42], and so on.

Author Contributions

Validation, S.L.; Investigation, J.Z.; Resources, Z.L.; Writing—original draft, G.M.; Writing—review & editing, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported in part by the Jilin Provincial Department of Education (JJKH20200654KJ) and the Science and Technology Department of Jilin Province (20220203091SF).

Institutional Review Board Statement

All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper. The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Li, W.; Chen, J.; Li, J.; Xia, K. Derivative and enhanced discrete analytic wavelet algorithm for rolling bearing fault diagnosis. Microprocess. Microsyst. 2021, 82, 103872. [Google Scholar] [CrossRef]
  2. Yu, H.T.; Kim, H.J.; Park, S.H.; Kim, M.H.; Jeon, I.S.; Choi, B.K. Classification of rotary machine fault considering signal differences. J. Mech. Sci. Technol. 2022, 36, 517–525. [Google Scholar] [CrossRef]
  3. Huo, Z.; Zhang, Y.; Jombo, G.; Shu, L. Adaptive multiscale weighted permutation entropy for rolling bearing fault diagnosis. IEEE Access 2020, 8, 87529–87540. [Google Scholar] [CrossRef]
  4. Yu, C.; Ning, Y.; Qin, Y.; Su, W.; Zhao, X. Multi-label fault diagnosis of rolling bearing based on meta-learning. Neural Comput. Appl. 2021, 33, 5393–5407. [Google Scholar] [CrossRef]
  5. Chen, T.; Li, S.; Yan, J. CS-RNN: Efficient training of recurrent neural networks with continuous skips. Neural Comput. Appl. 2022, 34, 16515–16532. [Google Scholar] [CrossRef]
  6. Li, C.; Li, D.; Zhang, Z.; Chu, D. MST-RNN: A Multi-Dimension Spatiotemporal Recurrent Neural Networks for Recommending the Next Point of Interest. Mathematics 2022, 10, 1838. [Google Scholar] [CrossRef]
  7. Hashmi, A.S.; Ahmad, T. GP-ELM-RNN: Garson-pruned extreme learning machine based replicator neural network for anomaly detection. J. King Saud-Univ. Comput. Inf. Sci. 2022, 34, 1768–1774. [Google Scholar] [CrossRef]
  8. An, Y.; Zhang, K.; Liu, Q.; Chai, Y.; Huang, X. Rolling bearing fault diagnosis method base on periodic sparse attention and LSTM. IEEE Sens. J. 2022, 22, 12044–12053. [Google Scholar] [CrossRef]
  9. Zhong, C.; Wang, J.S.; Liu, Y. Bi-LSTM fault diagnosis method for rolling bearings based on segmented interception AR spectrum analysis and information fusion. J. Intell. Fuzzy Syst. 2023, 44, 1–27. [Google Scholar] [CrossRef]
  10. Wang, H.; Guo, Y.; Liu, X.; Yang, J.; Zhang, X.; Shi, L. Fault diagnosis method for imbalanced data of rotating machinery based on time domain signal prediction and SC-ResNeSt. IEEE Access 2023, 11, 38875–38893. [Google Scholar] [CrossRef]
  11. Chen, Y.; Zhang, T.; Luo, Z.; Sun, K. A novel rolling bearing fault diagnosis and severity analysis method. Appl. Sci. 2019, 9, 2356. [Google Scholar] [CrossRef]
  12. Gan, X.; Lu, H.; Yang, G. Fault diagnosis method for rolling bearings based on composite multiscale fluctuation dispersion entropy. Entropy 2019, 21, 290. [Google Scholar] [CrossRef]
  13. Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar] [CrossRef]
  14. Liu, W.; Zhou, L.; Hu, N.; He, Z.; Yang, C.; Jiang, J. A novel integral extension LMD method based on integral local waveform matching. Neural Comput. Appl. 2016, 27, 761–768. [Google Scholar] [CrossRef]
  15. Liu, W.; Zhang, W.; Han, J.; Wang, G. A new wind turbine fault diagnosis method based on the local mean decomposition. Renew. Energy 2012, 48, 411–415. [Google Scholar] [CrossRef]
  16. Gao, L.; Li, X.; Yao, Y.; Wang, Y.; Yang, X.; Zhao, X.; Geng, D.; Li, Y.; Liu, L. A modal frequency estimation method of non-stationary signal under mass time-varying condition based on EMD algorithm. Appl. Sci. 2022, 12, 8187. [Google Scholar] [CrossRef]
  17. Cheng, X.; Mao, J.; Li, J.; Zhao, H.; Zhou, C.; Gong, X.; Rao, Z. An EEMD-SVD-LWT algorithm for denoising a lidar signal. Measurement 2021, 168, 108405. [Google Scholar] [CrossRef]
  18. Li, Z.; Li, S.; Mao, J.; Li, J.; Wang, Q.; Zhang, Y. A Novel Lidar Signal-Denoising Algorithm Based on Sparrow Search Algorithm for Optimal Variational Modal Decomposition. Remote Sens. 2022, 14, 4960. [Google Scholar] [CrossRef]
  19. Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2013, 62, 531–544. [Google Scholar] [CrossRef]
  20. Ur Rehman, N.; Aftab, H. Multivariate variational mode decomposition. IEEE Trans. Signal Process. 2019, 67, 6039–6052. [Google Scholar] [CrossRef]
  21. Miao, Y.; Zhao, M.; Lin, J. Identification of mechanical compound-fault based on the improved parameter-adaptive variational mode decomposition. ISA Trans. 2019, 84, 82–95. [Google Scholar] [CrossRef]
  22. Ding, J.; Huang, L.; Xiao, D.; Li, X. GMPSO-VMD algorithm and its application to rolling bearing fault feature extraction. Sensors 2020, 20, 1946. [Google Scholar] [CrossRef] [PubMed]
  23. Tan, S.; Wang, A.; Shi, H.; Guo, L. Rolling bearing incipient fault detection via optimized VMD using mode mutual information. Int. J. Control Autom. Syst. 2022, 20, 1305–1315. [Google Scholar] [CrossRef]
  24. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  25. Ma, G.; Yue, X.; Gao, X.; Liu, F. Application of an improved sparrow search algorithm in BP network classification of strip steel surface defect images. Multimed. Tools Appl. 2023, 82, 14403–14439. [Google Scholar] [CrossRef]
  26. Tang, Y.; Dai, Q.; Yang, M.; Du, T.; Chen, L. Software defect prediction ensemble learning algorithm based on adaptive variable sparrow search algorithm. Int. J. Mach. Learn. Cybern. 2023, 14, 1967–1987. [Google Scholar] [CrossRef]
  27. Yue, X.; Ma, G.; Liu, F.; Gao, X. Research on image classification method of strip steel surface defects based on improved Bat algorithm optimized BP neural network. J. Intell. Fuzzy Syst. 2021, 41, 1509–1521. [Google Scholar] [CrossRef]
  28. Zhu, Q.; Zhuang, M.; Liu, H.; Zhu, Y. Optimal control of chilled water system based on improved sparrow search algorithm. Buildings 2022, 12, 269. [Google Scholar] [CrossRef]
  29. Ouyang, C.; Zhu, D.; Wang, F. A learning sparrow search algorithm. Comput. Intell. Neurosci. 2021, 2021, 3946958. [Google Scholar] [CrossRef]
  30. Gharehchopogh, F.S.; Namazi, M.; Ebrahimi, L. Advances in sparrow search algorithm: A comprehensive survey. Arch. Comput. Methods Eng. 2023, 30, 427–455. [Google Scholar] [CrossRef]
  31. Zhang, C.; Ding, S. A stochastic configuration network based on chaotic sparrow search algorithm. Knowl. Based Syst. 2021, 220, 106924. [Google Scholar] [CrossRef]
  32. Zhu, Y.; Yousefi, N. Optimal parameter identification of PEMFC stacks using adaptive sparrow search algorithm. Int. J. Hydrogen Energy 2021, 46, 9541–9552. [Google Scholar] [CrossRef]
  33. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95, Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  34. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  35. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  36. Zhang, J.; Li, L.; Zhang, H.; Wang, F.; Tian, Y. A novel sparrow search algorithm with integrates spawning strategy. Clust. Comput. 2023, 1–21. [Google Scholar] [CrossRef]
  37. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
  38. Smith, W.A.; Randall, R.B. Rolling element bearing diagnostics using the Case Western Reserve University data: A benchmark study. Mech. Syst. Signal Process. 2015, 64, 100–131. [Google Scholar] [CrossRef]
  39. Lessmeier, C.; Kimotho, J.K.; Zimmer, D. Condition monitoring of bearing damage in electromechanical drive systems by using motor current signals of electric motors: A benchmark data set for data-driven classification. In Proceedings of the Third European Conference of the PHM Society, Bilbao, Spain, 5–8 July 2016; Volume 3. [Google Scholar]
  40. Hao, S.; Huang, C.; Heidari, A.A.; Chen, H.; Li, L.; Algarni, A.D.; Elmannai, H.; Xu, S. Salp swarm algorithm with iterative mapping and local escaping for multi-level threshold image segmentation: A skin cancer dermoscopic case study. J. Comput. Des. Eng. 2023, 10, 655–693. [Google Scholar] [CrossRef]
  41. Zhao, D.; Yu, H.; Fang, X.; Tian, L.; Han, P. A path planning method based on multi-objective cauchy mutation cat swarm optimization algorithm for navigation system of intelligent patrol car. IEEE Access 2020, 8, 151788–151803. [Google Scholar] [CrossRef]
  42. Pham, V.H.S.; Trang, N.T.N.; Dat, C.Q. Optimization of production schedules of multi-plants for dispatching ready-mix concrete trucks by integrating grey wolf optimizer and dragonfly algorithm. Eng. Constr. Archit. Manag. 2023. [Google Scholar] [CrossRef]
Figure 1. Gated Recurrent Unit Neural Network.
Figure 2. Schematic representation of lens imaging reverse learning strategy.
Figure 3. Comparison of Gaussian and Cauchy distributions.
Figure 4. Optimization curves of $g_1$–$g_6$.
Figure 5. Optimization curves of $g_7$–$g_{12}$.
Figure 6. Optimization curves of $g_{13}$–$g_{18}$.
Figure 7. Optimization curves of $g_{19}$–$g_{23}$.
Figure 8. Experimental setup for rolling bearings.
Figure 9. Accuracy training curve for experiment one.
Figure 10. Loss value training curve for experiment one.
Figure 11. Confusion matrix for experiments on the CWRU dataset.
Figure 12. Paderborn University test bed.
Figure 13. Accuracy training curve for experiment two.
Figure 14. Loss value training curve for experiment two.
Figure 15. Experimental confusion matrix for the Paderborn University dataset.
Table 1. Experimental equipment.

| Experimental Equipment | Detailed Information |
|---|---|
| Operating system | Microsoft Windows 10 Professional |
| Computer processor | Intel(R) Core(TM) i5-6300HQ CPU |
| Memory | 12 GB DDR4 |
| Graphics processor | NVIDIA GeForce GTX 960M |
| Simulation experiment platform | MATLAB R2016b |
Table 2. Parameter settings for each swarm intelligence algorithm.

| Algorithm | Parameter Type | Parameter Value |
|---|---|---|
| PSO | Initial population size | NP = 20 |
| | Learning factor $c_1$ | $c_1$ = 1.52 |
| | Learning factor $c_2$ | $c_2$ = 1.52 |
| GWO | Search scope $\alpha$ | $\alpha \in$ [0, 2] |
| WOA | Initial step | S = 6 |
| SSA | Initial population size | NP = 20 |
| | Proportion of discoverers | $p_d$ = 0.7 |
| | Proportion of early warners | $p_v$ = 0.2 |
| NSSA | Initial population size | NP = 20 |
| | Proportion of discoverers | $p_d$ = 0.7 |
| | Proportion of early warners | $p_v$ = 0.2 |
| LSSA | Initial population size | NP = 20 |
| | Proportion of discoverers | $p_d$ = 0.7 |
| | Proportion of early warners | $p_v$ = 0.2 |
| | Safety threshold | ST = 0.6 |
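For illustration, the PSO settings listed above (NP = 20, $c_1 = c_2$ = 1.52) can be dropped into a minimal particle swarm loop. This is a generic sketch, not the paper's implementation; the inertia weight w = 0.8 and the iteration budget are assumptions, since Table 2 does not list them:

```python
import random

def pso_minimize(f, dim, lo, hi, np_=20, c1=1.52, c2=1.52, w=0.8,
                 iters=200, seed=0):
    """Minimal PSO sketch using the Table 2 settings (w, iters assumed)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    vel = [[0.0] * dim for _ in range(np_)]
    pbest = [p[:] for p in pos]            # per-particle best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(np_), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(np_):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest_f

sphere = lambda z: sum(x * x for x in z)
best = pso_minimize(sphere, dim=5, lo=-100, hi=100)
print("converged below 100:", best < 100.0)
```

The other algorithms in the table differ only in their update rules; the population size and bound handling carry over unchanged.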
Table 3. Single-peak benchmark functions.

| Function | Dim | Range | $f_{\min}$ |
|---|---|---|---|
| $g_1(z)=\sum_{u=1}^{n} z_u^2$ | 30 | [−100, 100] | 0 |
| $g_2(z)=\sum_{u=1}^{n} \lvert z_u\rvert + \prod_{u=1}^{n} \lvert z_u\rvert$ | 30 | [−10, 10] | 0 |
| $g_3(z)=\sum_{u=1}^{n}\big(\sum_{v=1}^{u} z_v\big)^2$ | 30 | [−100, 100] | 0 |
| $g_4(z)=\max_u \{\lvert z_u\rvert,\ 1 \le u \le n\}$ | 30 | [−100, 100] | 0 |
| $g_5(z)=\sum_{u=1}^{n-1}\left[100(z_{u+1}-z_u^2)^2+(z_u-1)^2\right]$ | 30 | [−30, 30] | 0 |
| $g_6(z)=\sum_{u=1}^{n}(\lfloor z_u+0.5\rfloor)^2$ | 30 | [−30, 30] | 0 |
| $g_7(z)=\sum_{u=1}^{n} u z_u^4 + \mathrm{random}[0,1)$ | 30 | [−1.28, 1.28] | 0 |
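A few of these benchmarks are easy to state directly in code. The sketch below implements $g_1$ (sphere), $g_5$ (Rosenbrock), and $g_6$ (step) as defined above and checks that they vanish at their known minimizers:

```python
import math

def g1(z):
    """Sphere function: sum of squares; minimum 0 at the origin."""
    return sum(x * x for x in z)

def g5(z):
    """Rosenbrock function: minimum 0 at z = (1, ..., 1)."""
    return sum(100 * (z[u + 1] - z[u] ** 2) ** 2 + (z[u] - 1) ** 2
               for u in range(len(z) - 1))

def g6(z):
    """Step function: minimum 0 wherever every -0.5 <= z_u < 0.5."""
    return sum(math.floor(x + 0.5) ** 2 for x in z)

n = 30
print(g1([0.0] * n), g5([1.0] * n), g6([0.0] * n))  # all three minima are 0
```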
Table 4. Multi-peak benchmark functions.

| Function | Dim | Range | $f_{\min}$ |
|---|---|---|---|
| $g_8(z)=\sum_{u=1}^{n} -z_u \sin\big(\sqrt{\lvert z_u\rvert}\big)$ | 30 | [−500, 500] | −12,569.48 |
| $g_9(z)=\sum_{u=1}^{n}\left[z_u^2-10\cos(2\pi z_u)+10\right]$ | 30 | [−5.12, 5.12] | 0 |
| $g_{10}(z)=-20\exp\big(-0.2\sqrt{\tfrac{1}{n}\sum_{u=1}^{n} z_u^2}\big)-\exp\big(\tfrac{1}{n}\sum_{u=1}^{n}\cos(2\pi z_u)\big)+20+e$ | 30 | [−32, 32] | 0 |
| $g_{11}(z)=\tfrac{1}{4000}\sum_{u=1}^{n} z_u^2-\prod_{u=1}^{n}\cos\big(\tfrac{z_u}{\sqrt{u}}\big)+1$ | 30 | [−600, 600] | 0 |
| $g_{12}(z)=\tfrac{\pi}{n}\big\{10\sin^2(\pi y_1)+\sum_{u=1}^{n-1}(y_u-1)^2[1+10\sin^2(\pi y_{u+1})]+(y_n-1)^2\big\}+\sum_{u=1}^{n}u(z_u,10,100,4)$ | 30 | [−50, 50] | 0 |
| $g_{13}(z)=0.1\big\{\sin^2(3\pi z_1)+\sum_{u=1}^{n-1}(z_u-1)^2[1+\sin^2(3\pi z_{u+1})]+(z_n-1)^2[1+\sin^2(2\pi z_n)]\big\}+\sum_{u=1}^{n}u(z_u,10,100,4)$ | 30 | [−50, 50] | 0 |

where $y_u=1+\tfrac{z_u+1}{4}$ and the penalty term is $u(z_u,a,k,m)=k(z_u-a)^m$ for $z_u>a$; $0$ for $-a\le z_u\le a$; $k(-z_u-a)^m$ for $z_u<-a$.
Table 5. Multi-dimensional multi-modal benchmark functions.

| Function | Dim | Range | $f_{\min}$ |
|---|---|---|---|
| $g_{14}(z)=\Big(\tfrac{1}{500}+\sum_{v=1}^{25}\tfrac{1}{v+\sum_{u=1}^{2}(z_u-a_{uv})^6}\Big)^{-1}$ | 2 | [−65, 65] | 1 |
| $g_{15}(z)=\sum_{u=1}^{11}\Big[a_u-\tfrac{z_1(b_u^2+b_u z_2)}{b_u^2+b_u z_3+z_4}\Big]^2$ | 4 | [−5, 5] | 0.0003 |
| $g_{16}(z)=4z_1^2-2.1z_1^4+\tfrac{1}{3}z_1^6+z_1 z_2-4z_2^2+4z_2^4$ | 2 | [−5, 5] | −1.0316 |
| $g_{17}(z)=\big(z_2-\tfrac{5.1}{4\pi^2}z_1^2+\tfrac{5}{\pi}z_1-6\big)^2+10\big(1-\tfrac{1}{8\pi}\big)\cos z_1+10$ | 2 | [−5, 5] | 0.398 |
| $g_{18}(z)=\left[1+(z_1+z_2+1)^2(19-14z_1+3z_1^2-14z_2+6z_1z_2+3z_2^2)\right]\times\left[30+(2z_1-3z_2)^2(18-32z_1+12z_1^2+48z_2-36z_1z_2+27z_2^2)\right]$ | 2 | [−2, 2] | 3 |
| $g_{19}(z)=-\sum_{u=1}^{4}c_u\exp\big(-\sum_{v=1}^{3}a_{uv}(z_v-p_{uv})^2\big)$ | 3 | [1, 3] | −3.86 |
| $g_{20}(z)=-\sum_{u=1}^{4}c_u\exp\big(-\sum_{v=1}^{6}a_{uv}(z_v-p_{uv})^2\big)$ | 6 | [0, 1] | −3.32 |
| $g_{21}(z)=-\sum_{u=1}^{5}\left[(Z-a_u)(Z-a_u)^T+c_u\right]^{-1}$ | 4 | [0, 10] | −10.1532 |
| $g_{22}(z)=-\sum_{u=1}^{7}\left[(Z-a_u)(Z-a_u)^T+c_u\right]^{-1}$ | 4 | [0, 10] | −10.4028 |
| $g_{23}(z)=-\sum_{u=1}^{10}\left[(Z-a_u)(Z-a_u)^T+c_u\right]^{-1}$ | 4 | [0, 10] | −10.5363 |
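The fixed-dimension functions can be sanity-checked by evaluating them at their published minimizers. A sketch for $g_{16}$ (six-hump camel-back) and $g_{17}$ (Branin), using the standard minimizer coordinates (0.0898, −0.7126) and ($\pi$, 2.275):

```python
import math

def g16(z1, z2):
    """Six-hump camel-back; global minimum about -1.0316."""
    return (4 * z1**2 - 2.1 * z1**4 + z1**6 / 3
            + z1 * z2 - 4 * z2**2 + 4 * z2**4)

def g17(z1, z2):
    """Branin; global minimum about 0.398."""
    return ((z2 - 5.1 / (4 * math.pi**2) * z1**2
             + 5 / math.pi * z1 - 6) ** 2
            + 10 * (1 - 1 / (8 * math.pi)) * math.cos(z1) + 10)

print(round(g16(0.0898, -0.7126), 4))  # -1.0316
print(round(g17(math.pi, 2.275), 3))   # 0.398
```

Both values match the $f_{\min}$ column above, which is a quick way to confirm a transcription of these formulas before benchmarking an optimizer on them.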
Table 6. Benchmark function experiments of LSSA with other algorithms.

| Function | Item | PSO | GWO | WOA | SSA | NSSA | LSSA |
|---|---|---|---|---|---|---|---|
| $g_1$ | mean | 1.00×10^3 | 2.99×10^−15 | 5.70×10^−42 | 1.70×10^−70 | 9.44×10^−142 | 3.65×10^−198 |
| | std | 4.52×10^2 | 3.31×10^−15 | 3.11×10^−41 | 9.30×10^−70 | 5.17×10^−141 | 0.00×10^0 |
| | best | 2.58×10^2 | 1.48×10^−16 | 7.85×10^−53 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| $g_2$ | mean | 4.88×10^1 | 1.52×10^−9 | 6.44×10^−30 | 1.64×10^−38 | 2.78×10^−54 | 9.11×10^−80 |
| | std | 2.08×10^1 | 1.02×10^−9 | 2.84×10^−29 | 9.00×10^−38 | 1.52×10^−53 | 4.99×10^−79 |
| | best | 1.97×10^1 | 4.48×10^−10 | 6.79×10^−35 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| $g_3$ | mean | 1.39×10^4 | 1.22×10^1 | 6.45×10^4 | 2.37×10^−69 | 2.21×10^−98 | 6.00×10^−187 |
| | std | 7.01×10^3 | 3.86×10^1 | 1.74×10^4 | 1.30×10^−68 | 1.21×10^−97 | 0.00×10^0 |
| | best | 5.60×10^3 | 3.95×10^−4 | 3.77×10^4 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| $g_4$ | mean | 1.46×10^1 | 1.04×10^−3 | 6.23×10^1 | 1.76×10^−51 | 5.53×10^−52 | 2.18×10^−73 |
| | std | 3.98×10^0 | 9.29×10^−4 | 2.27×10^1 | 9.64×10^−51 | 3.03×10^−51 | 1.19×10^−72 |
| | best | 7.85×10^0 | 1.73×10^−4 | 9.91×10^0 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| $g_5$ | mean | 1.23×10^5 | 2.73×10^1 | 2.84×10^1 | 6.81×10^−2 | 1.04×10^−3 | 1.78×10^−4 |
| | std | 1.00×10^5 | 8.22×10^−1 | 2.94×10^−1 | 1.21×10^−2 | 1.82×10^−3 | 2.65×10^−4 |
| | best | 3.78×10^3 | 2.60×10^1 | 2.77×10^1 | 1.67×10^−5 | 1.41×10^−6 | 5.75×10^−7 |
| $g_6$ | mean | 1.15×10^3 | 1.10×10^0 | 8.28×10^−1 | 1.86×10^−3 | 3.87×10^−4 | 1.04×10^−6 |
| | std | 7.37×10^2 | 4.46×10^−1 | 3.77×10^−1 | 3.12×10^−3 | 9.33×10^−4 | 1.60×10^−6 |
| | best | 1.90×10^2 | 2.54×10^−1 | 2.96×10^−1 | 4.01×10^−6 | 1.18×10^−6 | 3.01×10^−10 |
| $g_7$ | mean | 1.21×10^0 | 4.16×10^−3 | 4.67×10^−3 | 4.48×10^−3 | 3.59×10^−3 | 3.11×10^−4 |
| | std | 2.72×10^0 | 1.99×10^−3 | 4.12×10^−3 | 4.08×10^−3 | 3.01×10^−3 | 3.05×10^−4 |
| | best | 1.07×10^−1 | 1.20×10^−3 | 2.56×10^−5 | 1.43×10^−4 | 2.80×10^−4 | 1.90×10^−5 |
| $g_8$ | mean | −7.09×10^3 | −6.00×10^3 | −1.01×10^4 | −8.44×10^3 | −9.44×10^3 | −1.09×10^4 |
| | std | 9.62×10^2 | 7.56×10^2 | 1.90×10^3 | 2.69×10^3 | 1.62×10^3 | 1.86×10^3 |
| | best | −9.43×10^3 | −7.32×10^3 | −1.26×10^4 | −1.26×10^4 | −1.26×10^4 | −1.26×10^4 |
| $g_9$ | mean | 2.55×10^2 | 9.19×10^0 | 7.58×10^−15 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| | std | 3.19×10^1 | 7.56×10^0 | 4.15×10^−14 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| | best | 1.81×10^2 | 3.35×10^−12 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| $g_{10}$ | mean | 1.24×10^1 | 1.27×10^−8 | 6.25×10^−14 | 3.24×10^−13 | 3.14×10^−14 | 4.44×10^−16 |
| | std | 5.38×10^0 | 9.16×10^−9 | 3.02×10^−15 | 8.23×10^−9 | 7.15×10^−9 | 0.00×10^0 |
| | best | 5.98×10^0 | 2.27×10^−9 | 4.44×10^−15 | 3.21×10^−14 | 3.33×10^−15 | 4.44×10^−16 |
| $g_{11}$ | mean | 8.20×10^0 | 6.54×10^−3 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| | std | 2.54×10^0 | 9.08×10^−3 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| | best | 3.06×10^0 | 2.33×10^−15 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 | 0.00×10^0 |
| $g_{12}$ | mean | 1.79×10^1 | 5.85×10^−2 | 3.45×10^−2 | 2.20×10^−5 | 2.01×10^−6 | 7.64×10^−7 |
| | std | 1.04×10^1 | 3.26×10^−2 | 1.39×10^−2 | 7.50×10^−5 | 4.31×10^−6 | 1.61×10^−7 |
| | best | 7.54×10^0 | 1.17×10^−2 | 1.18×10^−2 | 2.64×10^−9 | 2.54×10^−9 | 4.69×10^−11 |
| $g_{13}$ | mean | 1.25×10^4 | 8.87×10^−1 | 7.54×10^−1 | 3.36×10^−4 | 7.95×10^−5 | 1.46×10^−6 |
| | std | 2.96×10^4 | 1.91×10^−1 | 3.11×10^−1 | 7.86×10^−4 | 1.75×10^−4 | 2.12×10^−6 |
| | best | 2.13×10^−1 | 5.02×10^−1 | 2.94×10^−1 | 1.51×10^−8 | 9.45×10^−9 | 3.42×10^−11 |
| $g_{14}$ | mean | 7.38×10^0 | 5.90×10^0 | 3.10×10^0 | 9.90×10^0 | 7.27×10^0 | 9.98×10^−1 |
| | std | 5.13×10^0 | 4.62×10^0 | 3.00×10^0 | 4.73×10^0 | 4.94×10^0 | 1.26×10^−5 |
| | best | 9.98×10^−1 | 9.98×10^−1 | 9.98×10^−1 | 9.98×10^−1 | 9.98×10^−1 | 9.98×10^−1 |
| $g_{15}$ | mean | 1.31×10^−2 | 6.59×10^−3 | 1.29×10^−3 | 3.27×10^−3 | 3.54×10^−3 | 3.16×10^−4 |
| | std | 1.00×10^−2 | 9.18×10^−3 | 2.55×10^−3 | 2.53×10^−4 | 1.67×10^−3 | 1.24×10^−5 |
| | best | 8.30×10^−4 | 3.10×10^−4 | 3.11×10^−4 | 3.08×10^−3 | 3.08×10^−3 | 3.08×10^−4 |
| $g_{16}$ | mean | −7.31×10^−1 | −8.14×10^−1 | −9.73×10^−1 | −8.63×10^−1 | −9.32×10^−1 | −1.03×10^0 |
| | std | 5.15×10^−4 | 6.78×10^−8 | 1.72×10^−8 | 1.01×10^−9 | 1.95×10^−8 | 9.09×10^−10 |
| | best | −1.03×10^0 | −1.03×10^0 | −1.03×10^0 | −1.03×10^0 | −1.03×10^0 | −1.03×10^0 |
| $g_{17}$ | mean | 5.78×10^−1 | 4.68×10^−1 | 4.88×10^−1 | 5.12×10^−1 | 4.54×10^−1 | 3.98×10^−1 |
| | std | 5.88×10^−5 | 9.42×10^−6 | 4.90×10^−5 | 5.19×10^−6 | 1.20×10^−7 | 1.07×10^−8 |
| | best | 3.98×10^−1 | 3.98×10^−1 | 3.98×10^−1 | 3.98×10^−1 | 3.98×10^−1 | 3.98×10^−1 |
| $g_{18}$ | mean | 3.21×10^0 | 3.22×10^0 | 3.91×10^0 | 3.12×10^0 | 3.05×10^0 | 3.00×10^0 |
| | std | 1.60×10^−3 | 7.38×10^−5 | 4.96×10^0 | 1.33×10^−7 | 5.52×10^−7 | 1.40×10^−7 |
| | best | 3.00×10^0 | 3.00×10^0 | 3.00×10^0 | 3.00×10^0 | 3.00×10^0 | 3.00×10^0 |
| $g_{19}$ | mean | −2.86×10^0 | −3.26×10^0 | −3.35×10^0 | −3.46×10^0 | −3.66×10^0 | −3.84×10^0 |
| | std | 3.90×10^−3 | 2.80×10^−3 | 8.60×10^−3 | 2.64×10^−4 | 1.08×10^−2 | 1.41×10^0 |
| | best | −3.86×10^0 | −3.86×10^0 | −3.86×10^0 | −3.86×10^0 | −3.86×10^0 | −3.86×10^0 |
| $g_{20}$ | mean | −3.08×10^0 | −3.23×10^0 | −3.24×10^0 | −3.25×10^0 | −3.25×10^0 | −3.27×10^0 |
| | std | 8.79×10^−2 | 9.19×10^−2 | 9.70×10^−2 | 7.70×10^−2 | 9.66×10^−2 | 2.80×10^−1 |
| | best | −3.32×10^0 | −3.32×10^0 | −3.32×10^0 | −3.32×10^0 | −3.32×10^0 | −3.32×10^0 |
| $g_{21}$ | mean | −8.97×10^0 | −9.23×10^0 | −7.98×10^0 | −9.98×10^0 | −9.81×10^0 | −1.02×10^1 |
| | std | 2.00×10^0 | 2.14×10^0 | 2.64×10^0 | 9.31×10^−1 | 1.29×10^0 | 8.46×10^−6 |
| | best | −1.02×10^1 | −1.02×10^1 | −1.01×10^1 | −1.02×10^1 | −1.02×10^1 | −1.02×10^1 |
| $g_{22}$ | mean | −7.81×10^0 | −9.97×10^0 | −8.44×10^0 | −1.00×10^1 | −1.02×10^1 | −1.04×10^1 |
| | std | 3.21×10^0 | 1.67×10^0 | 2.64×10^0 | 1.35×10^0 | 9.70×10^−1 | 6.45×10^−6 |
| | best | −1.04×10^1 | −1.04×10^1 | −1.04×10^1 | −1.04×10^1 | −1.04×10^1 | −1.04×10^1 |
| $g_{23}$ | mean | −9.52×10^0 | −6.53×10^0 | −5.69×10^0 | −9.54×10^0 | −1.01×10^1 | −1.05×10^1 |
| | std | 2.05×10^0 | 4.49×10^−3 | 3.37×10^0 | 7.55×10^−4 | 2.94×10^−4 | 5.34×10^−6 |
| | best | −1.05×10^1 | −1.05×10^1 | −1.05×10^1 | −1.05×10^1 | −1.05×10^1 | −1.05×10^1 |
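Each cell in Table 6 summarizes repeated independent runs of one optimizer on one function by mean, standard deviation, and best value. A sketch of that bookkeeping follows; the run count of 30 and the stand-in random-search "optimizer" are assumptions for illustration only, not the paper's settings:

```python
import random
import statistics

def run_stats(optimize, runs=30, seed=0):
    """Summarize repeated independent runs the way each Table 6 row does."""
    results = [optimize(random.Random(seed + r)) for r in range(runs)]
    return {"mean": statistics.mean(results),
            "std": statistics.pstdev(results),
            "best": min(results)}

# Hypothetical stand-in optimizer: best of 200 random samples of the sphere.
def random_search(rng, dim=5, lo=-100, hi=100):
    sphere = lambda z: sum(x * x for x in z)
    return min(sphere([rng.uniform(lo, hi) for _ in range(dim)])
               for _ in range(200))

row = run_stats(random_search)
print(row["best"] <= row["mean"])  # the best run never exceeds the mean
```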
Table 7. Sample composition of the data.

| Fault Location | Fault Size (mm) | Load (hp) | Category Label |
|---|---|---|---|
| Normal | – | 0/1/2/3 | 0001 |
| Outer ring failure | 0.1778 | 0/1/2/3 | 0010 |
| Inner ring failure | 0.1778 | 0/1/2/3 | 0100 |
| Rolling body failure | 0.1778 | 0/1/2/3 | 1000 |
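The category labels in Table 7 are one-hot codes over the four bearing conditions. A minimal encoder matching the table (the class ordering is inferred from the rows):

```python
# Class order inferred from Table 7: normal -> 0001, ..., rolling body -> 1000.
CLASSES = ["normal", "outer ring", "inner ring", "rolling body"]

def one_hot(name):
    """Encode a bearing condition as the 4-bit label used in Table 7."""
    code = ["0"] * len(CLASSES)
    code[len(CLASSES) - 1 - CLASSES.index(name)] = "1"
    return "".join(code)

print(one_hot("normal"), one_hot("rolling body"))  # 0001 1000
```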
Table 8. Comparison of diagnostic methods.

| Diagnostic Method | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|
| CNN | 58.32% | 59.16% | 32.01% | 39.94% |
| VMD-GRU | 71.66% | 72.42% | 31.74% | 44.14% |
| LSSA-VMD-GRU | 96.61% | 93.36% | 98.49% | 92.19% |
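Accuracy, precision, recall, and F1 score here follow the usual multi-class definitions. A sketch computing macro-averaged values from a confusion matrix; the example matrix below is made up for illustration, not the paper's results:

```python
def classification_metrics(cm):
    """Macro-averaged metrics from a square confusion matrix.
    cm[i][j] = count of samples with true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total
    precisions, recalls = [], []
    for k in range(n):
        tp = cm[k][k]
        pred_k = sum(cm[i][k] for i in range(n))  # column sum: predicted as k
        true_k = sum(cm[k])                        # row sum: truly class k
        precisions.append(tp / pred_k if pred_k else 0.0)
        recalls.append(tp / true_k if true_k else 0.0)
    p = sum(precisions) / n
    r = sum(recalls) / n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return accuracy, p, r, f1

# Hypothetical two-class example.
acc, p, r, f1 = classification_metrics([[45, 5], [10, 40]])
print(round(acc, 2), round(p, 2), round(r, 2))  # 0.85 0.85 0.85
```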
Table 9. Sample setting.

| Location of Injury | Damage Degree | Label |
|---|---|---|
| Normal | – | 0 |
| Outer ring failure | 1 | 1 |
| Outer ring failure | 2 | 2 |
| Inner ring failure | 1 | 3 |
| Inner ring failure | 2 | 4 |
Table 10. Comparison of different classifiers.

| Diagnostic Method | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|
| CNN | 72.33% | 71.12% | 41.74% | 46.54% |
| VMD-GRU | 74.32% | 71.15% | 50.32% | 64.32% |
| LSSA-VMD-GRU | 98.45% | 97.54% | 98.32% | 91.14% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ma, G.; Yue, X.; Zhu, J.; Liu, Z.; Lu, S. Deep Learning Network Based on Improved Sparrow Search Algorithm Optimization for Rolling Bearing Fault Diagnosis. Mathematics 2023, 11, 4634. https://doi.org/10.3390/math11224634