Article

Grey Model Optimized by Particle Swarm Optimization for Data Analysis and Application of Multi-Sensors

College of Computer and Information, Hohai University, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(8), 2503; https://doi.org/10.3390/s18082503
Submission received: 6 July 2018 / Revised: 26 July 2018 / Accepted: 28 July 2018 / Published: 1 August 2018
(This article belongs to the Special Issue Multi-Sensor Fusion and Data Analysis)

Abstract
Data on the effective operation of newly commissioned pumping stations are scarce, the unit structure is complex, and the temperature changes in different parts of a unit are coupled with multiple factors. The multivariable grey prediction model can effectively predict changes in multiple parameters of a nonlinear system from a small amount of data, but its prediction accuracy depends strongly on the value of the parameter q. Therefore, the particle swarm optimization (PSO) algorithm is used to optimize q, and the multi-sensor temperature data of a pumping station unit are processed. The trends of the temperature data are then analyzed and predicted. Compared with the unoptimized multivariable grey model and a BP neural network trained under insufficient data conditions, the multivariable grey model with the optimized q parameter achieves a smaller relative error.

1. Introduction

In power machinery, analyzing and predicting the temperature changes reported by multiple sensors on different parts of a piece of equipment is an important basis for evaluating its running state [1,2]. Pumping stations are among the most widely used water facilities. China has more than 40 large and medium pumping stations, all of which urgently require effective assessment of their operation status. The pump unit of a pumping station is a typical power mechanical device, and the structure of a large pumping station is complex. Many factors can affect the temperature changes in various parts of the pump, such as hydraulic factors (water flow and cavitation), mechanical factors (spindle bending and asymmetry), short circuits of the stator winding, and overcurrent [3,4]. Temperature variation in various parts often arises from the complex coupling of these factors [5], and the coupling effects tend to overlap, influencing the temperature of different parts in different ways. Consequently, analyzing and predicting the temperature changes of multiple parts captured by multiple sensors on a pump unit is a multivariable, nonlinear problem [6,7], and it remains a difficult research hotspot [8,9,10].
Traditional forecasting methods mainly include time series models and regression analyses. These methods predict linear and stationary characteristic quantities well, but the temperature data captured by multiple sensors in a pumping station are nonlinear and non-stationary, which prevents traditional prediction methods from achieving good results. At present, data-driven neural network technology has made rapid progress in the field of prediction. Piotrowski et al. [11] used different neural network models to predict and compare river water temperatures. Drevetskyi et al. [12] used the back propagation (BP) neural network to predict urban water consumption. Tang et al. [13] used an improved BP neural network to predict the bearing bush temperature of hydropower units. However, neural-network-based prediction methods require abundant prior data as input to obtain accurate models that generalize well.
Pumping station prototypes and actual pumping stations differ because of their different physical conditions. In particular, the operating characteristics of pumping stations of the same type differ, so a state analysis model built for one pumping station cannot easily be transferred to another. Effective long-sequence operation data for the pump units of new pumping stations are scarce, especially fault data and other abnormal performance data. Therefore, temperature changes cannot be fully predicted with neural networks alone. The multivariable grey model MGM (1, n) was developed from the grey system theory proposed by Deng [14], where (1, n) denotes a first-order ordinary differential equation in n variables. It is a multidimensional generalization of the single-variable grey model GM (1, 1), where (1, 1) denotes a first-order ordinary differential equation in one variable. The MGM can describe, across multiple dimensions, the different characteristics that affect the operating state of a system; it overcomes the limitations posed by non-stationary signals and can effectively analyze and predict multiple correlated eigenvalues of the system when only a small amount of information is available. The model is therefore suitable for analyzing and predicting the temperature variation of multiple parts, monitored by multiple sensors, in pumping stations.
Although the MGM (1, n) can make predictions from a small amount of data, its prediction accuracy is strongly influenced by the value of the parameter q in the difference expansion of the model. Finding the most suitable q value improves the prediction accuracy, but the search for q is an NP-hard problem [15]. The particle swarm optimization (PSO) algorithm is a swarm intelligence optimization method. Compared with the genetic algorithm, PSO avoids complex operations such as crossover and mutation and offers rapid convergence and high accuracy [16]. For this reason, an MGM is developed based on the temperature data collected from the upper guide bearing, the stator winding, and the thrust bearing, and the PSO algorithm is used to find the optimal value of the parameter q. Finally, the MGM with the optimized q parameter is used to predict the temperature of each part. The procedure is shown in Figure 1. With the same amount of data, the optimized MGM (1, n) is compared with the traditional MGM and with a prediction model based on the BP neural network. The experimental results show that the MGM with the optimized q parameter outperforms the traditional MGM and the BP neural network, improving the prediction accuracy by 0.01% and 2.02%, respectively.

2. Multivariable Grey Model

MGM (1, n, q) was developed based on grey system theory. In 1982, Deng published his first paper on the control of grey systems in the Journal of System Control and Communication, and it received extensive attention. Since grey system theory was established, an increasing number of scholars have engaged in related research, applying the theory to many practical problems and achieving favorable results [17].
For a variable $X_i^{(0)}$, the sequence of observed values on the time axis is $X_i^{(0)} = \left\{ x_i^{(0)}(1), x_i^{(0)}(2), \ldots, x_i^{(0)}(m) \right\}$. The observation sequences of $n$ different variables on the time axis then constitute a data matrix $X^{(0)} = \left\{ X_1^{(0)}, X_2^{(0)}, \ldots, X_n^{(0)} \right\}$.
Accumulating the observation sequence of each variable separately yields a new data matrix, the first-order accumulated generation matrix of the original matrix $X^{(0)}$, written as $X^{(1)} = \left\{ X_1^{(1)}, X_2^{(1)}, \ldots, X_n^{(1)} \right\}$, where $X_i^{(1)}$ is the first-order accumulated generation sequence of the original data sequence $X_i^{(0)}$, i.e.:
$$X_i^{(1)} = \left\{ x_i^{(1)}(1),\ x_i^{(1)}(2),\ \ldots,\ x_i^{(1)}(m) \right\}, \tag{1}$$
$$x_i^{(1)}(j) = \sum_{k=1}^{j} x_i^{(0)}(k), \tag{2}$$
where i = 1, 2, …, n and j = 1, 2, …, m.
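As a minimal illustration (a Python sketch of my own, not code from the authors), the 1-AGO above is just a cumulative sum over each variable's observation sequence; the example values are the first four observations of each sensor in Table 1:

```python
import numpy as np

def ago(X0):
    """First-order accumulated generating operation (1-AGO), Equation (2).

    X0 : array of shape (n, m), one row per variable, m observations each.
    Returns X1 with X1[i, j] equal to the sum of X0[i, :j + 1].
    """
    return np.cumsum(np.asarray(X0, dtype=float), axis=1)

# First four observations of the three sensors in Table 1.
X0 = np.array([[24.24, 27.63, 29.62, 31.31],    # upper guide bearing
               [23.13, 25.32, 27.34, 29.03],    # stator winding
               [21.43, 21.93, 22.73, 23.34]])   # thrust bearing
print(ago(X0))   # e.g. first row: 24.24, 51.87, 81.49, 112.80
```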
Then the matrix form of the MGM (1, n) model is as follows:
$$\frac{dX^{(1)}(t)}{dt} = A X^{(1)}(t) + B, \tag{3}$$
where $X^{(1)}(t) = \left\{ x_1^{(1)}(t), x_2^{(1)}(t), \ldots, x_n^{(1)}(t) \right\}$, $A = (a_{ij})_{n \times n}$, and $B = (b_1, b_2, \ldots, b_n)^T$.
Solving the first-order ordinary differential equation in Equation (3) yields its time response formula:
$$X^{(1)}(t) = e^{A(t-1)} \left( X^{(1)}(1) + A^{-1} B \right) - A^{-1} B, \tag{4}$$
where $e^{At} = I + At + \frac{A^2}{2!}t^2 + \cdots = I + \sum_{k=1}^{\infty} \frac{A^k}{k!} t^k$ and $X^{(1)}(1) = \left\{ x_1^{(1)}(1), x_2^{(1)}(1), \ldots, x_n^{(1)}(1) \right\}$. Equation (4) can be used to predict the value at the next moment from the value at the previous moment.
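The time response in Equation (4) can be evaluated directly with a matrix exponential. The following sketch assumes that $A$ is invertible; the function names are illustrative only. The original series is recovered from the accumulated one by first differences (the inverse of the 1-AGO):

```python
import numpy as np
from scipy.linalg import expm

def mgm_response(A, B, x1_first, t):
    """Evaluate Equation (4): X1(t) = e^{A(t-1)} (X1(1) + A^{-1}B) - A^{-1}B.

    A        : (n, n) coefficient matrix, assumed invertible here.
    B        : (n,) constant vector.
    x1_first : X^{(1)}(1), the first column of the accumulated data.
    t        : time index; t = 1 returns x1_first.
    """
    Ainv_B = np.linalg.solve(A, B)                    # A^{-1} B without forming A^{-1}
    return expm(A * (t - 1.0)) @ (x1_first + Ainv_B) - Ainv_B

def restore(x1_series):
    """Inverse 1-AGO: x0(1) = x1(1), x0(k) = x1(k) - x1(k-1) for k >= 2."""
    x1_series = np.asarray(x1_series, dtype=float)
    return np.diff(x1_series, axis=-1, prepend=0.0)   # prepend 0 so x0(1) = x1(1)
```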
Let $\hat{A} = [A,\ B]$ and denote its $j$-th row by $a_j^T$. The least squares estimate of $a_j^T$ ($j$ = 1, 2, …, n) is as follows:
$$\hat{a}_j^T = (L^T L)^{-1} L^T Y_j, \tag{5}$$
where:
$$L = \begin{bmatrix}
\frac{1}{2}\left(x_1^{(1)}(2) + x_1^{(1)}(1)\right) & \frac{1}{2}\left(x_2^{(1)}(2) + x_2^{(1)}(1)\right) & \cdots & \frac{1}{2}\left(x_n^{(1)}(2) + x_n^{(1)}(1)\right) & 1 \\
\frac{1}{2}\left(x_1^{(1)}(3) + x_1^{(1)}(2)\right) & \frac{1}{2}\left(x_2^{(1)}(3) + x_2^{(1)}(2)\right) & \cdots & \frac{1}{2}\left(x_n^{(1)}(3) + x_n^{(1)}(2)\right) & 1 \\
\vdots & \vdots & & \vdots & \vdots \\
\frac{1}{2}\left(x_1^{(1)}(m) + x_1^{(1)}(m-1)\right) & \frac{1}{2}\left(x_2^{(1)}(m) + x_2^{(1)}(m-1)\right) & \cdots & \frac{1}{2}\left(x_n^{(1)}(m) + x_n^{(1)}(m-1)\right) & 1
\end{bmatrix}, \tag{6}$$
and $Y_j = \left( x_j^{(0)}(2),\ x_j^{(0)}(3),\ \ldots,\ x_j^{(0)}(m) \right)^T$.
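A compact sketch of this least-squares step, using the half-sum background values of Equation (6), could look as follows (an illustration under the definitions above, not the authors' code):

```python
import numpy as np

def estimate_AB(X0):
    """Least-squares estimate of A and B for MGM(1, n), Equations (5)-(6).

    X0 : (n, m) raw observations, one row per variable.
    Returns A of shape (n, n) and B of shape (n,).
    """
    X0 = np.asarray(X0, dtype=float)
    n, m = X0.shape
    X1 = np.cumsum(X0, axis=1)                      # 1-AGO
    Z = 0.5 * (X1[:, 1:] + X1[:, :-1])              # half-sum background values
    L = np.hstack([Z.T, np.ones((m - 1, 1))])       # (m-1) x (n+1) design matrix
    # Row j of [A, B] is the least-squares solution for Y_j = x_j^(0)(2..m).
    AB = np.vstack([np.linalg.lstsq(L, X0[j, 1:], rcond=None)[0] for j in range(n)])
    return AB[:, :n], AB[:, n]
```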
Applying the forward difference to Formula (3) gives $\frac{X_{t+1}^{(1)} - X_t^{(1)}}{(t+1) - t} = A X_t^{(1)} + B$. Collating terms yields the following:
$$X_{t+1}^{(1)} - A X_t^{(1)} - X_t^{(1)} = B. \tag{7}$$
Similarly, applying the backward difference to Equation (3) gives $\frac{X_t^{(1)} - X_{t-1}^{(1)}}{t - (t-1)} = A X_t^{(1)} + B$ or $\frac{X_{t+1}^{(1)} - X_t^{(1)}}{(t+1) - t} = A X_{t+1}^{(1)} + B$. Collating terms yields the following:
$$X_t^{(1)} - A X_t^{(1)} - X_{t-1}^{(1)} = B, \tag{8}$$
or
$$X_{t+1}^{(1)} - A X_{t+1}^{(1)} - X_t^{(1)} = B. \tag{9}$$
The difference schemes above establish the MGM (1, n, q). In the special case q = 0.5, the model degenerates into the standard MGM (1, n). When q takes a different value $q_0$, the matrix L in Equation (5) is changed as follows:
$$L = \begin{bmatrix}
q_0 x_1^{(1)}(2) + (1 - q_0) x_1^{(1)}(1) & q_0 x_2^{(1)}(2) + (1 - q_0) x_2^{(1)}(1) & \cdots & q_0 x_n^{(1)}(2) + (1 - q_0) x_n^{(1)}(1) & 1 \\
q_0 x_1^{(1)}(3) + (1 - q_0) x_1^{(1)}(2) & q_0 x_2^{(1)}(3) + (1 - q_0) x_2^{(1)}(2) & \cdots & q_0 x_n^{(1)}(3) + (1 - q_0) x_n^{(1)}(2) & 1 \\
\vdots & \vdots & & \vdots & \vdots \\
q_0 x_1^{(1)}(m) + (1 - q_0) x_1^{(1)}(m-1) & q_0 x_2^{(1)}(m) + (1 - q_0) x_2^{(1)}(m-1) & \cdots & q_0 x_n^{(1)}(m) + (1 - q_0) x_n^{(1)}(m-1) & 1
\end{bmatrix}.$$
Different values of $q_0$ thus change L and, in turn, the fitting and prediction accuracies of MGM (1, n, q). Selecting the most suitable $q_0$ is therefore necessary to obtain the most accurate model. The optimal value cannot easily be obtained by solving equations directly, because a complex nonlinear relationship exists between $q_0$ and the fitting accuracy of the model. Therefore, a swarm intelligence optimization method, PSO, is introduced to optimize $q_0$ and improve the fitting accuracy of MGM (1, n, q).
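Generalizing the background values with the weight $q_0$ only changes how $L$ is assembled; a minimal sketch (continuing the illustration above, with `build_L` an illustrative name) is:

```python
import numpy as np

def build_L(X1, q0):
    """Design matrix of MGM(1, n, q) for a given weight q0.

    X1 : (n, m) accumulated (1-AGO) data.
    q0 = 0.5 reproduces the classical MGM(1, n) design matrix of Equation (6).
    """
    X1 = np.asarray(X1, dtype=float)
    Z = q0 * X1[:, 1:] + (1.0 - q0) * X1[:, :-1]          # weighted background values
    return np.hstack([Z.T, np.ones((X1.shape[1] - 1, 1))])
```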

3. PSO-Based q Parameter Optimization

PSO was introduced in 1995 by Kennedy and Eberhart, who were inspired by the predation behavior of bird flocks. PSO is a typical swarm intelligence optimization method: it has a simple structure, is easy to implement, converges rapidly, and achieves high accuracy. After more than 20 years of development, the theoretical foundation of PSO has become well established. Many scholars have proposed improvements tailored to the needs of different optimization problems and have successfully applied them to various practical problems.
Before the PSO algorithm is used to optimize the MGM (1, n, q) model, the following definition is provided:
Definition 1:
The actual data collected are $X = (x_1, x_2, \ldots, x_n)$, and the corresponding values given by MGM (1, n, q) are $X' = (x_1', x_2', \ldots, x_n')$. The residual of the model is $D = (d_1, d_2, \ldots, d_n) = (x_1 - x_1', x_2 - x_2', \ldots, x_n - x_n')$, and the relative error is $R = (r_1, r_2, \ldots, r_n) = \mathrm{abs}(d_1/x_1, d_2/x_2, \ldots, d_n/x_n) \times 100\%$. In the PSO algorithm, the velocity and position of each particle are updated as follows:
$$v_i^{k+1} = \omega v_i^k + c_1 \xi \left( \tilde{p}_i^k - \chi_i^k \right) + c_2 \eta \left( \tilde{g}^k - \chi_i^k \right), \tag{10}$$
$$\chi_i^{k+1} = \chi_i^k + v_i^{k+1}, \tag{11}$$
where $\omega \in [0, 1]$ is the inertia weight; $c_1$ and $c_2$ are learning factors that enable a particle to learn from other excellent individuals; and $\xi$ and $\eta$ are two pseudo-random numbers uniformly distributed in the interval [0, 1]. $v_i^k$ denotes the velocity of particle $i$ at its $k$-th move and carries the inertial effect of the current velocity into the next move. $\tilde{p}_i^k$ is the best position visited by particle $i$ after $k$ moves, and the term $c_1 \xi (\tilde{p}_i^k - \chi_i^k)$ represents the particle's self-cognition: the direction of its next move partly follows the best position it has experienced. $\tilde{g}^k$ is the best position found by the whole swarm after $k$ moves, and the term $c_2 \eta (\tilde{g}^k - \chi_i^k)$ represents social learning: the next move also partly follows the best position experienced by all particles. $\chi_i^k$ is the position of particle $i$ after its $k$-th move. Formula (11) states that the position of a particle after the next move equals its current position plus the velocity of the next move.
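Equations (10) and (11) translate almost literally into code. The sketch below is illustrative; the array shapes and the fixed random seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, w, c1=1.5, c2=1.5):
    """One velocity/position update per Equations (10) and (11).

    x, v   : (swarm_size, dim) current positions and velocities.
    p_best : (swarm_size, dim) best position each particle has visited.
    g_best : (dim,) best position visited by the whole swarm.
    w      : inertia weight in [0, 1].
    """
    xi = rng.random(x.shape)                  # cognitive random factor, U[0, 1]
    eta = rng.random(x.shape)                 # social random factor, U[0, 1]
    v_next = w * v + c1 * xi * (p_best - x) + c2 * eta * (g_best - x)   # Eq. (10)
    return x + v_next, v_next                                            # Eq. (11)
```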
The steps of the PSO algorithm when optimizing the MGM (1, n, q) are as follows:
Step 1:
Initialize the population, including the particle positions and their velocities v.
Step 2:
Construct the objective function fit(q) as follows:
$$\mathrm{fit}(q(i,k)) = \sum_{i=1}^{n} d_i^2, \tag{12}$$
where $q(i,k)$ is the candidate q value (position) of particle $i$ after $k$ moves and $d_i$ are the model residuals of Definition 1. The fitness of each particle in the population is evaluated with this function (a code sketch of this objective inside the full optimization loop is given after the step list).
Step 3:
Save the individual historical optimal value $\tilde{p}_i^k$ of each particle.
Step 4:
Save the global historical optimal value $\tilde{g}^k$ of the swarm.
Step 5:
Judge whether the algorithm has reached the prescribed number of iterations. If so, output the global optimum; otherwise, proceed to Step 6.
Step 6:
Update the velocities and positions according to Formulas (10) and (11).
Step 7:
Return to Step 2.
The detailed procedure is shown in Figure 2.
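The steps above can be assembled into a short optimization loop. The sketch below is an illustration rather than the authors' implementation: the `fitness` callable is assumed to build MGM (1, n, q) for a candidate q and return the sum of squared residuals of Equation (12), the search interval [0, 1] for q is an assumption, and the linearly decaying inertia weight anticipates the schedule reported in Section 4:

```python
import numpy as np

def pso_optimize_q(fitness, swarm_size=10, max_gen=50, c1=1.5, c2=1.5,
                   q_lo=0.0, q_hi=1.0, seed=0):
    """PSO search for the scalar q minimizing fitness(q), following Steps 1-7.

    fitness : callable q -> sum of squared residuals of MGM(1, n, q), Eq. (12).
    """
    rng = np.random.default_rng(seed)
    q = rng.uniform(q_lo, q_hi, swarm_size)           # Step 1: positions
    v = np.zeros(swarm_size)                          #         and velocities
    p_best = q.copy()
    p_cost = np.array([fitness(qi) for qi in q])      # Step 2: fitness values
    g_best = p_best[np.argmin(p_cost)]                # Steps 3-4: best records
    for k in range(1, max_gen + 1):                   # Step 5: iteration budget
        w = 1.0 - 0.8 * k / max_gen                   # decaying inertia weight (assumed)
        xi, eta = rng.random(swarm_size), rng.random(swarm_size)
        v = w * v + c1 * xi * (p_best - q) + c2 * eta * (g_best - q)   # Step 6, Eq. (10)
        q = np.clip(q + v, q_lo, q_hi)                                  # Eq. (11), bounded
        cost = np.array([fitness(qi) for qi in q])    # Step 7: back to Step 2
        improved = cost < p_cost
        p_best[improved], p_cost[improved] = q[improved], cost[improved]
        g_best = p_best[np.argmin(p_cost)]
    return g_best, p_cost.min()
```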

4. Application of PSO to MGM in Temperature Prediction of the Pumping Station Unit

In this paper, the proposed algorithm is applied to the prediction of characteristic quantities during the operation of pump station units on the eastern route of the south-to-north water transfer project. Because the eastern route has only recently been completed, its effective operation time is short, and the accumulated effective data, especially data under different working conditions and fault data, are very scarce. The currently popular data-driven prediction methods for characteristic quantities (such as the BP neural network) can achieve high prediction accuracy, but they all require sufficient, effective data for training. When training data are scarce, the resulting model is often insufficiently trained and generalizes poorly because of over-fitting. In this application, the experimental results show that when little effective operation data are available, the data-driven BP neural network method performs poorly in predicting characteristic quantities, whereas the multivariable grey model achieves high prediction accuracy without requiring much historical data. The prediction accuracy of the multivariable grey model is further improved after the q parameter is optimized by the particle swarm optimization algorithm.
In the experiments, to verify the accuracy of the PSO-optimized multivariable grey model in multivariate prediction, temperature data of the guide bearing, stator winding, and thrust bearing of unit 3 were collected during a period of operation of Hongze Station in the south-to-north water transfer project; four significant digits are retained, and the sampling interval is 3 min. The temperature data of these three parts not only reflect the temperature of each part but also correlate with one another, so the advantages of the optimized multivariable grey model can be demonstrated.
The MGM (1, 3, q) is optimized by PSO on the given data, with $c_1 = c_2 = 1.5$, maximum iteration number Maxgen = 50, population size Sizepop = 10, and inertia weight $W = 1 - (0.8/\mathrm{Maxgen}) \times k$, where $k$ denotes the $k$-th movement of the particle swarm.
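As a small, self-contained check of this schedule (the printed endpoint values follow directly from the formula above):

```python
import numpy as np

max_gen, swarm_size, c1, c2 = 50, 10, 1.5, 1.5
# Inertia weight schedule reported in the paper: W = 1 - (0.8 / Maxgen) * k.
w = np.array([1.0 - (0.8 / max_gen) * k for k in range(1, max_gen + 1)])
print(round(w[0], 3), round(w[-1], 3))   # 0.984 at the first move, 0.2 at the last
```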
Through many experiments, it was found that the relative error of MGM (1, 3, q) is smallest when nine groups of data are used. Therefore, the nine data groups from T1 to T9 are taken as the benchmark data in this study. At this time:
$$L = \begin{bmatrix}
q_0 \times 51.87 + (1 - q_0) \times 24.24 & q_0 \times 48.45 + (1 - q_0) \times 23.13 & q_0 \times 43.36 + (1 - q_0) \times 21.43 & 1 \\
q_0 \times 81.49 + (1 - q_0) \times 51.87 & q_0 \times 75.79 + (1 - q_0) \times 48.45 & q_0 \times 66.09 + (1 - q_0) \times 43.36 & 1 \\
\vdots & \vdots & \vdots & \vdots \\
q_0 \times 246.74 + (1 - q_0) \times 212.22 & q_0 \times 269.77 + (1 - q_0) \times 234.46 & q_0 \times 214.41 + (1 - q_0) \times 188.48 & 1
\end{bmatrix},$$
and:
$$Y = \begin{bmatrix}
27.63 & 25.32 & 21.93 \\
29.62 & 27.34 & 22.73 \\
31.31 & 29.03 & 23.34 \\
32.32 & 30.52 & 23.95 \\
33.30 & 31.71 & 24.64 \\
33.80 & 33.11 & 25.03 \\
34.52 & 34.30 & 25.43 \\
34.81 & 35.31 & 25.93
\end{bmatrix}.$$
On the basis of numerous PSO calculations, when the parameter q = 0.5095, the objective function attains its best value, fit(0.5095) = 0.086. The fitted and predicted values of MGM (1, 3, 0.5095) and their relative errors with respect to the original multi-sensor data sequences can then be obtained, retaining two decimal places. The original experimental data, the forecasts of the optimized model, and the relative errors between them are listed in Table 1.
In Table 1, $x_1^{(0)}(k)$, $x_2^{(0)}(k)$, and $x_3^{(0)}(k)$ denote the original multi-sensor temperature data from the upper guide bearing, the stator winding, and the thrust bearing, respectively, and $x_1'^{(0)}(k)$, $x_2'^{(0)}(k)$, and $x_3'^{(0)}(k)$ denote the values fitted by MGM (1, 3, q), with the 10th row giving the model prediction. Table 1 shows that the MGM (1, 3, q) model fits well, with a mean relative error below 0.26% and a prediction error below 0.99%.
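The relative errors in Table 1 follow directly from Definition 1. A small sketch, using the first three rows of the upper guide bearing data from Table 1; the slight differences from the tabulated errors come from the fitted values being rounded to two decimals here:

```python
import numpy as np

def relative_error(real, fitted):
    """Relative error of Definition 1: |real - fitted| / real * 100 (%)."""
    real = np.asarray(real, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    return np.abs(real - fitted) / real * 100.0

# Upper guide bearing, rows k = 1..3 of Table 1 (real vs. fitted values).
r = relative_error([24.24, 27.63, 29.62], [24.24, 27.65, 29.71])
print(np.round(r, 2))          # approximately [0.00, 0.07, 0.30]
```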
To present the data in Table 1 more intuitively, they are plotted in Figure 3, which shows the time-varying curves of the original temperature data and of the data predicted by the optimized grey model proposed in this paper. The figure shows clearly that the predicted data fit the original data closely.

5. Comparison among Algorithms

To verify the superiority of the proposed algorithm, the multivariable grey model optimized by particle swarm optimization is compared with the general multivariable grey model without particle swarm optimization, the common single-variable grey model, and the BP neural network method described in [13]. The relative errors between the original and predicted data are listed in Table 2, Table 3 and Table 4 in the same format as Table 1. Tables 2–4 show that the prediction accuracy of each comparison method is lower than that of the PSO-optimized multivariable grey model; the single-variable grey model (Table 3) and the BP neural network (Table 4) are the least accurate. Accordingly, the fits between the original and predicted data for the three comparison methods are shown in Figure 4, Figure 5 and Figure 6, which reflect more intuitively that the latter two prediction methods produce large errors.
The comparison above analyzes the prediction errors of the different forecasting methods. To reflect the size of these errors more directly, this paper further analyzes the errors produced by the four methods when predicting the guide bearing, stator winding, and thrust bearing temperature data used in the experiment, and plots the relative errors as time series in Figure 7, Figure 8 and Figure 9, respectively. The three figures show that, for every set of temperature data at a different position, the multivariable grey model optimized by particle swarm optimization proposed in this paper always yields the smallest relative error, whereas the single-variable grey model and the BP neural network yield the largest. The reasons are analyzed in the following three aspects:
(1)
The single-variable grey model considers only the influence of its own variable and ignores the coupling relationships among multiple variables. This is its main deficiency relative to the multivariable grey model and limits its prediction accuracy.
(2)
In the practical application considered in this project, there are not enough temperature data, especially data under various operating modes, to train the BP neural network model. A BP neural network trained with only limited temperature data inevitably suffers from insufficient training and poor generalization caused by over-fitting. Therefore, the prediction accuracy of the BP neural network model is low.
(3)
The prediction accuracy of the general multivariable grey model is high but still lower than that of the optimized model, because the general multivariable grey model uses the default q parameter of 0.5, which is not the optimal value.

6. Conclusions

The MGM is used to process the original temperature data from the multiple sensors of a pumping station unit and to predict the changes in the temperature data. It effectively overcomes the difficulties of the traditional time series and regression analysis methods in handling non-stationary and nonlinear problems, as well as the inability of the neural network method to predict accurately when only a small amount of data from the pumping station unit is available.
PSO is used to optimize the q parameter in the MGM. The optimized MGM (1, n, q) is compared with the traditional MGM (1, n), the BP neural network method, and GM (1, 1).
Temperature is an important characteristic for evaluating the operating state of pumping station units; it can be used to diagnose unit failures and helps predict when faults such as cracking will occur and when temperatures will exceed the safety threshold.

Author Contributions

C.L. and H.G. conceived and designed the experiments; X.Q., Z.B. and C.L. presented tools and carried out the data analysis; H.G., and Y.Y. wrote the paper. J.Q. rewrote and improved the theoretical part. Y.W. collected the materials and did a lot of format editing work.

Funding

This study is supported by National Natural Science Foundation of China (No. 61701166), Projects in the National Science & Technology Pillar Program during the Twelfth Five-year Plan Period (No. 2015BAB07B01), Fundamental Research Funds for the Central Universities (No. 2018B16314), China Postdoctoral Science Foundation (No. 2018M632215), Regional Program of National Natural Science Foundation of China (No. 51669014).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sinay, J.; Pacaiova, H.; Oravec, M. Present state of machinery safety assessment. In Proceedings of the 16th International DAAAM Symposium: Intelligent Manufacturing and Automation: Focus on Young Researchers and Scientists, Opatija, Croatia, 19–21 October 2005; pp. 347–348. [Google Scholar]
  2. Carle, P.F.; Alford, T.; Bibelhausen, D. Machinery Condition Assessment Module. U.S. Patent No. 7,593,784, 22 September 2009. [Google Scholar]
  3. Liu, X.Y.; Qiu, D. Analyzing and Treating the Failure of Unit 1 Governor Screw Pump of Wuqiangxi Hydropower Station. North China Electr. Power 2010, 4, 016. [Google Scholar]
  4. Qiu, B.; Wang, T.; Wei, Q.L.; Tang, Z.J.; Gong, W.M. Common fault analysis of large pumping stations. J. Drain. Irrig. Eng. 1999, 2, 20–24. [Google Scholar]
  5. Bi, Z.; Li, C.; Li, X.; Gao, H. Research on Fault Diagnosis for Pumping Station Based on T-S Fuzzy Fault Tree and Bayesian Network. J. Electr. Comput. Eng. 2017, 2017, 1–11. [Google Scholar] [CrossRef]
  6. Song, M.; Wang, X.; Liao, L.; Deng, S. Termination Control Temperature Study for an Air Source Heat Pump Unit during Its Reverse Cycle Defrosting. Energy Procedia 2017, 105, 335–342. [Google Scholar]
  7. Wang, K.Y. Prediction of System Performance of Fankou Pump Unit Based on CFD Simulation. China Rural Water Hydropower 2009, 2, 027. [Google Scholar]
  8. Belmonte, L.M.; Morales, R.; Fernández-Caballero, A.; Somolinos, J.A. Robust Decentralized Nonlinear Control for a Twin Rotor MIMO System. Sensors 2016, 16, 1160. [Google Scholar] [CrossRef] [PubMed]
  9. Mao, Y.; Liu, Y.; Ding, F. Filtering based multi-innovation stochastic gradient identification algorithm for multivariable nonlinear equation-error autoregressive systems. In Proceedings of the 12th World Congress on Intelligent Control and Automation (WCICA 2016), Guilin, China, 12–15 June 2016; pp. 3027–3032. [Google Scholar]
  10. Ortiz, J.P.; Minchala, L.I.; Reinoso, M.J. Nonlinear Robust H-Infinity PID Controller for the Multivariable System Quadrotor. IEEE Lat. Am. Trans. 2016, 14, 1176–1183. [Google Scholar] [CrossRef]
  11. Piotrowski, A.P.; Napiorkowski, M.J.; Napiorkowski, J.J.; Osuch, M. Comparing various artificial neural network types for water temperature prediction in rivers. J. Hydrol. 2015, 529, 302–315. [Google Scholar] [CrossRef]
  12. Drevetskyi, V.; Klepach, M.; Kutia, V. Water consumption prediction for city pumping station using neural networks. In Proceedings of the 1st International Conference on Intelligent Systems in Production Engineering and Maintenance (ISPEM 2017), Wrocław, Poland, 28–29 September 2017; pp. 459–467. [Google Scholar]
  13. Tang, Y.; Chang, L. Prediction of bearing bush temperature of hydroelectric generating units based on improved BP algorithm. J. Huazhong Univ. Sci. Technol. 2002, 30, 78–80. [Google Scholar]
  14. Liu, S. The birth and development of grey system theory. J. Nanjing Univ. Aeronaut. Astronaut. 2004, 2, 267–272. [Google Scholar]
  15. Hochba, D.S. Approximation Algorithms for NP-Hard Problems. ACM SIGACT News 2004, 28, 40–52. [Google Scholar] [CrossRef]
  16. Zhang, S.; Jiang, H.; Yin, Y.; Zhao, B. The Prediction of the Gas Utilization Ratio Based on TS Fuzzy Neural Network and Particle Swarm Optimization. Sensors 2018, 18, 625. [Google Scholar] [CrossRef] [PubMed]
  17. Hamzacebi, C.; Es, H.A. Forecasting the annual electricity consumption of Turkey using an optimized grey model. Energy 2014, 70, 165–171. [Google Scholar] [CrossRef]
Figure 1. The steps of the PSO algorithm.
Figure 2. The procedure of the MGM model optimized by the PSO algorithm.
Figure 3. MGM (1, 3, q) model prediction effect.
Figure 4. MGM (1, 3) model prediction effect.
Figure 5. GM (1, 1) model prediction effect.
Figure 6. Prediction effect of the BP neural network model.
Figure 7. Relative error of the guide bearing.
Figure 8. Relative error of the stator winding.
Figure 9. Relative error of the thrust bearing.
Table 1. MGM (1, 3, q) model fitting value and error analysis.

No (k) | Real sequence x1(0)(k), x2(0)(k), x3(0)(k) | MGM (1, 3, q) prediction x1′(0)(k), x2′(0)(k), x3′(0)(k) | Relative error (%) r1(0)(k), r2(0)(k), r3(0)(k)
1 | 24.24, 23.13, 21.43 | 24.24, 23.13, 21.43 | 0, 0, 0
2 | 27.63, 25.32, 21.93 | 27.65, 25.34, 21.94 | 6.19 × 10⁻², 6.96 × 10⁻², 3.04 × 10⁻²
3 | 29.62, 27.34, 22.73 | 29.71, 27.33, 22.70 | 0.31, 2.14 × 10⁻⁴, 0.13
4 | 31.31, 29.03, 23.34 | 31.22, 29.01, 23.38 | 0.29, 7.61 × 10⁻⁴, 0.16
5 | 32.32, 30.52, 23.95 | 32.35, 30.47, 23.99 | 0.10, 0.15, 0.15
6 | 33.30, 31.71, 24.64 | 33.23, 31.81, 24.54 | 0.22, 0.31, 0.41
7 | 33.80, 33.11, 25.03 | 33.91, 33.05, 25.04 | 0.33, 0.19, 5.4 × 10⁻²
8 | 34.52, 34.30, 25.43 | 34.44, 34.22, 25.50 | 0.22, 0.22, 0.27
9 | 34.81, 35.31, 25.93 | 34.85, 35.35, 25.91 | 0.12, 0.12, 9.13 × 10⁻²
10 | 35.50, 36.11, 26.33 | 35.15, 36.44, 26.26 | 0.99, 0.91, 0.26
Mean relative error | | | 0.26, 0.19, 0.15
Table 2. MGM (1, 3) model fitting value and error analysis.

No (k) | Real sequence x1(0)(k), x2(0)(k), x3(0)(k) | MGM (1, 3) prediction x1′(0)(k), x2′(0)(k), x3′(0)(k) | Relative error (%) r1(0)(k), r2(0)(k), r3(0)(k)
1 | 24.24, 23.13, 21.43 | 24.24, 23.13, 21.43 | 0, 0, 0
2 | 27.63, 25.32, 21.93 | 27.67, 25.36, 21.94 | 0.13, 0.14, 6.39 × 10⁻²
3 | 29.62, 27.34, 22.73 | 29.72, 27.35, 22.71 | 0.34, 2.58 × 10⁻², 0.11
4 | 31.31, 29.03, 23.34 | 31.22, 29.02, 23.38 | 0.28, 4.16 × 10⁻², 0.18
5 | 32.32, 30.52, 23.95 | 32.35, 30.48, 23.99 | 0.11, 0.12, 0.17
6 | 33.30, 31.71, 24.64 | 33.23, 31.82, 24.54 | 0.22, 0.33, 0.40
7 | 33.80, 33.11, 25.03 | 33.91, 33.06, 25.05 | 0.32, 0.16, 6.22 × 10⁻²
8 | 34.52, 34.30, 25.43 | 34.44, 34.23, 25.50 | 0.23, 0.20, 0.28
9 | 34.81, 35.31, 25.93 | 34.85, 35.36, 25.91 | 0.11, 0.14, 9.12 × 10⁻⁴
10 | 35.50, 36.11, 26.33 | 35.14, 36.44, 26.26 | 1.00, 0.94, 0.26
Mean relative error | | | 0.27, 0.21, 0.15
Table 3. GM (1, 1) model fitting value and error analysis.

No (k) | Real sequence x1(0)(k), x2(0)(k), x3(0)(k) | GM (1, 1) prediction x1′(0)(k), x2′(0)(k), x3′(0)(k) | Relative error (%) r1(0)(k), r2(0)(k), r3(0)(k)
1 | 24.24, 23.13, 21.43 | 24.24, 23.13, 21.43 | 0, 0, 0
2 | 27.63, 25.32, 21.93 | 29.05, 26.41, 22.27 | 5.15, 4.30, 1.55
3 | 29.62, 27.34, 22.73 | 29.87, 27.54, 22.77 | 0.84, 0.73, 0.18
4 | 31.31, 29.03, 23.34 | 30.71, 28.72, 23.28 | 1.93, 1.07, 0.26
5 | 32.32, 30.52, 23.95 | 31.57, 29.95, 23.80 | 2.33, 1.87, 0.63
6 | 33.30, 31.71, 24.64 | 32.45, 31.24, 24.33 | 2.54, 1.48, 1.26
7 | 33.80, 33.11, 25.03 | 33.36, 32.57, 24.87 | 1.30, 1.63, 0.64
8 | 34.52, 34.30, 25.43 | 34.30, 33.97, 25.43 | 0.65, 0.96, 0
9 | 34.81, 35.31, 25.93 | 35.26, 35.43, 25.99 | 1.29, 0.34, 0.23
10 | 35.50, 36.11, 26.33 | 36.25, 36.94, 26.57 | 2.10, 2.30, 0.91
Mean relative error | | | 1.81, 1.47, 0.56
Table 4. Prediction value and error analysis of the BP neural network model.

No (k) | Real sequence x1(0)(k), x2(0)(k), x3(0)(k) | BP neural network prediction x1′(0)(k), x2′(0)(k), x3′(0)(k) | Relative error (%) r1(0)(k), r2(0)(k), r3(0)(k)
1 | 24.24, 23.13, 21.43 | 25.60, 24.62, 20.13 | 5.61, 6.44, 6.07
2 | 27.63, 25.32, 21.93 | 27.45, 25.91, 21.20 | 0.65, 2.33, 3.33
3 | 29.62, 27.34, 22.73 | 29.65, 27.64, 21.97 | 0.10, 1.10, 3.34
4 | 31.31, 29.03, 23.34 | 31.71, 28.79, 23.04 | 1.27, 0.83, 1.28
5 | 32.32, 30.52, 23.95 | 32.25, 29.85, 23.70 | 0.22, 2.20, 1.04
6 | 33.30, 31.71, 24.64 | 33.75, 30.54, 24.05 | 1.35, 3.69, 2.39
7 | 33.80, 33.11, 25.03 | 34.20, 32.17, 24.46 | 1.18, 2.84, 2.27
8 | 34.52, 34.30, 25.43 | 34.82, 33.85, 25.02 | 0.86, 1.31, 1.61
9 | 34.81, 35.31, 25.93 | 35.21, 35.24, 25.65 | 1.14, 0.20, 1.08
10 | 35.50, 36.11, 26.33 | 36.22, 36.86, 26.04 | 2.02, 2.08, 1.10
Mean relative error | | | 1.44, 2.30, 2.35
