Article

Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines

Department of Automation Measurement and Control, Harbin Institute of Technology, Harbin 150001, China
*
Author to whom correspondence should be addressed.
Sensors 2016, 16(5), 623; https://doi.org/10.3390/s16050623
Submission received: 27 February 2016 / Revised: 6 April 2016 / Accepted: 14 April 2016 / Published: 29 April 2016
(This article belongs to the Section Physical Sensors)

Abstract

In a complex system, condition monitoring (CM) collects the system's working status. The condition is mainly sensed by sensors pre-deployed in or on the system. Most existing works study how to utilize the condition information to predict upcoming anomalies, faults, or failures. Some research also focuses on faults or anomalies of the sensing elements (i.e., sensors) to enhance system reliability. However, existing approaches ignore the correlation between the sensor selection strategy and data anomaly detection, which can also improve system reliability. To address this issue, we study a new scheme that combines a sensor selection strategy with data anomaly detection by utilizing information theory and Gaussian Process Regression (GPR). The sensors that are more appropriate for system CM are first selected. Then, mutual information is utilized to measure the correlation among different sensors. Anomaly detection is carried out by using the correlation of sensor data. The sensor data sets utilized in the evaluation are provided by the National Aeronautics and Space Administration (NASA) Ames Research Center and were used as the Prognostics and Health Management (PHM) challenge data in 2008. By comparing two different sensor selection strategies, the effect of the selection method on data anomaly detection is demonstrated.


1. Introduction

In modern industry, systems are becoming more and more complex, especially mechanical systems. For example, an aircraft consists of several subsystems and millions of parts [1]. To enhance its reliability, the condition of the main subsystems should be monitored. As the heart of the aircraft, the engine's condition directly affects the aircraft's operation and safety. The engine works in a very harsh environment (e.g., high pressure, high temperature, high rotation speed, etc.). Therefore, its condition should be monitored thoroughly. The situation in other complex engineering systems is similar [2]. One effective strategy to enhance system reliability is to utilize Condition Monitoring (CM).
To improve equipment availability, many mathematical models and methodologies have been developed to realize and enhance the performance of CM. For example, a dynamic fault tree has been proposed to filter false warnings in helicopters. The methodology is based on operational data analysis and can help identify abnormal events of the helicopter [3,4]. For power conversion system CM, it is important to construct measurements of damage indicators, including threshold voltage, gate leakage current, etc., to estimate the current aging status of the power device [5]. The optimum type, number, and location of sensors for improving fault diagnosis are determined in [6]. The gamma process has been successfully applied to describe certain types of degradation processes [7,8]. CM can also be utilized to monitor a system's sudden failures, which is carried out by forming an appropriate evolution process [9,10].
In addition, CM can help provide scheduled maintenance, reduce life-cycle costs, etc. [11]. Hence, it is important to sense the condition of the engine. Much existing research has been carried out to realize CM. Among the available methods, one of the most promising technologies is Prognostics and Health Management (PHM). PHM has been applied in industrial systems [12,13] and avionics systems [14,15]. For the aircraft engine, PHM can provide failure warnings, extend the system life, etc. [16].
In summary, PHM methods can be classified into three major categories: model-based, experience-based, and data-driven methods [17]. If the system can be represented by an exact model, the model-based method is applicable [18]. However, an accurate model is difficult to identify in many practical applications. Hence, the model-based method is hard to apply to complex systems [19]. The experience-based approach requires a stochastic model, which is often not accurate for complex systems [20]. Compared with the model-based and experience-based methods, the data-driven method utilizes data collected directly by instruments (mostly based on sensors) and has become the primary choice for complex systems [21,22]. Many sensors are deployed on or inside the engine to sense various physical parameters (e.g., operating temperature, oil temperature, vibration, pressure, etc.) [23]. The operational, environmental, and working conditions of the aircraft engine can be monitored by utilizing these sensors.
The aim of CM is to identify unexpected anomalies, faults, and failures of the system [24]. In theory, more sensor data are more helpful for CM. However, too many sensors introduce a heavy data-processing load, higher system costs, etc. [11]. Therefore, one typical strategy is to select the sensors that can provide better CM results. One common method is to observe the degradation trend of the sensor data [25,26]; the appropriate sensors are then selected for CM. In our previous work [27], a metric based on information theory for sensor selection was proposed. This article extends that work [27] and aims at discovering the correlation between the sensor selection strategy and data anomaly detection. Reasonable sensor selection can be considered as choosing data for CM, and the correctness of the sensed data is significant for system CM. The influence of the sensor selection strategy on data anomaly detection is studied in this article. In this way, the correctness of the condition data can help enhance fault diagnosis and failure prognosis. Much work has been carried out on data anomaly detection [28,29,30]. However, to the best of our knowledge, no work considers the influence of the sensor selection strategy on data anomaly detection.
To demonstrate the correlation between the sensor selection strategy and data anomaly detection, we first select the sensors that are more suitable for system CM. The methodology is based on information theory, and the details can be found in our previous work [27]. Then, mutual information is utilized to measure the dependency among sensors. In probability theory, mutual information is a method for correlation analysis and an effective tool to measure the dependency between random variables [31], which matches the motivation of our study. To evaluate the influence of the sensor selection strategy on data anomaly detection, mutual information is utilized to find the target sensor and the training sensor.
Then, classical Gaussian Process Regression (GPR) is adopted to detect anomalies in the target sensor data [32]. The parameters of the GPR are calculated from the training sensor data, and the target sensor data are then checked by the trained GPR. For evaluation, the sensor data sets provided by the National Aeronautics and Space Administration (NASA) Ames Research Center for aircraft engine CM are utilized. The experimental results show the effect of reasonable sensor selection on data anomaly detection. The claimed correlation between the sensor selection strategy and data anomaly detection is a typical problem in engineering systems. The insights obtained by the proposed method are expected to help provide more reasonable CM for the system.
The rest of this article is organized as follows. Section 2 introduces the aircraft engine that is utilized as the CM target. Section 3 presents the related theories, including information theory, GPR, and anomaly detection metrics. Section 4 illustrates the detailed evaluation results and analysis. Section 5 concludes this article and points out future work.

2. Aircraft Engine for Condition Monitoring

The turbofan aircraft engine is utilized as the objective system in this study. An important requirement for an aircraft engine is that its working condition can be sensed correctly. Then, some classical methodologies can be adopted to predict the upcoming anomalies, faults or failures. The typical architecture of the engine is shown in Figure 1 [33]. The engine consists of Fan, Low-Pressure Compressor (LPC), High-Pressure Compressor (HPC), Combustor, High-Pressure Turbine (HPT), Low-Pressure Turbine (LPT), Nozzle, etc.
The engine illustrated in Figure 1 is simulated by C-MAPSS (Commercial Modular Aero-Propulsion System Simulation). C-MAPSS is a simulation tool that has been successfully utilized to imitate the realistic working process of a commercial turbofan engine. A number of input parameters can be edited to realize the expected operational profiles. Figure 2 shows the routine assembled in the engine.
The engine has a built-in control system, which includes a fan-speed controller and several regulators and limiters. The limiters include three high-limit regulators that prevent the engine from exceeding its operating limits, mainly core speed, engine-pressure ratio, and HPT exit temperature. The function of the regulators is to prevent the pressures from going too low. These behaviors in C-MAPSS are the same as in the real engine.
The aircraft engine directly influences the reliability and safety of the aircraft. Therefore, unexpected conditions of the engine should be monitored. The reliability can be understood from three factors. First, the failure of a main component (LPC, HPC, Combustor, etc.) can lead to the failure of the aircraft engine. Second, if the information transmitted to the actuators is faulty, it will cause the failure of the aircraft engine. Finally, external interference (e.g., bird strikes) can result in the failure of the aircraft engine.
In this article, the condition data of the aircraft engine are the concern, and the anomaly detection of the condition data is the focus. To monitor the condition of the engine, several types of physical parameters can be utilized, such as temperature, pressure, fan speed, core speed, air ratio, etc. A total of 21 sensors are installed on or inside different components of the aircraft engine to collect its working condition, as illustrated in Table 1. The deterioration and faults of the engine can be detected by analyzing these sensor data [34].

3. Related Theories

In this section, the related theories utilized in this study are introduced, including information theory (entropy, permutation entropy, and mutual information), Gaussian Process Regression and anomaly detection metrics.

3.1. Information Theory

3.1.1. Entropy

Every sensor $S_{x_i}$ ($i = 1, \ldots, n$), deployed at location $l_i$ to sense the target condition variable $x_i$, can be regarded as a random variable $X_{S_i}$. The acquisition result can be expressed as the time series $\{ y_i(t), y_i(t+1), \ldots, y_i(T) \}$, which can be viewed as a realization of $X_{S_i}$ on the time window $[t, T]$. Sensor data sets can be described by a probability distribution, and the information contained in the data can be measured by entropy, which is defined by Equation (1) [35]:
$$H = -\sum_{i=1}^{N} p_i(x) \log p_i(x) \qquad (1)$$
where $p_i(x)$ indicates the probability of the $i$th state, and $N$ denotes the total number of states that the process of $X_{S_i}$ exhibits.
For a continuous random variable $X$, the probability is expressed by the probability density function $f(x)$ and the entropy is defined as
$$H = -\int_{S} f(x) \log f(x) \, dx \qquad (2)$$
where $S$ is the support set of the random variable.
If the base of the logarithm is 2, the entropy is measured in bits; if the base is e, it is measured in nats. Other bases can also be used, and the definition of entropy can be adapted for different applications. For simplicity, the logarithms in our study are base 2, and the entropy is measured in bits.
To help understand entropy, a simple example is given as follows. Let
$$X = \begin{cases} 1, & \text{with probability } p \\ 0, & \text{with probability } 1-p \end{cases} \qquad (3)$$
Then
$$H = -p \log p - (1-p) \log (1-p) \qquad (4)$$
The graph of the entropy in Equation (4) is shown in Figure 3, from which some basic properties of entropy can be drawn. The function is concave, and the entropy is 0 when $p = 0$ or $p = 1$: the variable is then not random and there is no uncertainty, so the information contained in the data set is 0. On the other hand, the uncertainty is maximal when $p = 1/2$, which corresponds to the maximum entropy value.
Entropy can be applied to measure the information contained in the sensor data. For CM, the data that have the characteristics of degradation trend are more suitable. In the following subsection, the permutation entropy that is utilized to calculate the degradation trend of the sensor data will be illustrated.
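As a concrete illustration, the entropy of Equation (1) and the binary case of Equation (4) can be computed with a short script. This is a minimal sketch of the standard definition; the function name and NumPy implementation are ours, not the article's.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                              # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)) + 0.0)  # + 0.0 normalizes -0.0

# Binary case of Equation (4): maximal uncertainty at p = 1/2
print(entropy([0.5, 0.5]))                    # 1.0 bit
print(entropy([1.0, 0.0]))                    # 0.0 bits: no randomness
```

The two printed values reproduce the endpoints discussed above: a deterministic outcome carries no information, while a fair coin attains the maximum of one bit.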

3.1.2. Permutation Entropy

For the sensor data $\{ y_i(t), y_i(t+1), \ldots, y_i(T) \}$, $n$ consecutive values can take $n!$ possible order patterns (permutations). The relative frequency of each permutation $\pi$ can be calculated by
$$p(\pi) = \frac{\#\{ t \mid 0 \le t \le T-n, \ (x_{t+1}, \ldots, x_{t+n}) \text{ has type } \pi \}}{T-n+1} \qquad (5)$$
The permutation entropy of order $n \ge 2$ is defined as
$$H(n) = -\sum p(\pi) \log p(\pi) \qquad (6)$$
The permutation entropy reflects the information contained in comparing $n$ consecutive values of the sensor data. It is clear that
$$0 \le H(n) \le \log n! \qquad (7)$$
where the lower bound is attained for a monotonically increasing or decreasing data set.
The permutation entropy of order $n$ divided by $n-1$,
$$h(n) = H(n) / (n-1) \qquad (8)$$
can be used to determine some properties of the dynamics: it measures the average information contained in ranking the $n$th value among the previous $n-1$ values.
The increasing or decreasing trend of a sensor data set can be represented by the $2!$ permutation entropy, which can be calculated by
$$H(2) = -p \log p - (1-p) \log (1-p) \qquad (9)$$
where $p$ denotes the probability of an increasing (or decreasing) pair for order $n = 2$. If $p$ indicates the increasing probability, then $1-p$ is the decreasing probability.
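The $2!$ permutation entropy above can be sketched in a few lines of Python. This is a minimal illustration under the definition just given, not the article's implementation.

```python
import numpy as np

def perm_entropy_2(y):
    """2! permutation entropy in bits: the entropy of the probability p
    that two adjacent samples form an increasing pair."""
    p = float(np.mean(np.diff(np.asarray(y, dtype=float)) > 0))
    if p == 0.0 or p == 1.0:
        return 0.0            # monotone series: the lower bound is attained
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

print(perm_entropy_2([1, 2, 3, 4, 5]))   # 0.0: a pure monotone trend
print(perm_entropy_2([1, 2, 1, 2, 1]))   # 1.0: no trend information
```

A low value thus signals a clear monotone (degradation-like) trend, which is the feature the sensor selection strategy looks for.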

3.1.3. Mutual Information

To measure the remaining uncertainty of one random variable given another, the conditional entropy $H(Y|X)$ can be adopted, which is defined by
$$H(Y|X) = \sum_{i=1}^{n} p(x_i) H(Y \mid X = x_i) = -\sum_{i=1}^{n} \sum_{j=1}^{m} p(x_i, y_j) \log p(y_j \mid x_i) \qquad (10)$$
For two random variables $X$ and $Y$, the mutual information $I(Y;X)$ is the reduction in uncertainty about one variable due to knowledge of the other and can be calculated by
$$I(Y;X) = H(Y) - H(Y|X) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x) p(y)} \qquad (11)$$
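For continuous sensor readings, the mutual information of Equation (11) can be estimated from a histogram of paired samples. The sketch below is a plain plug-in estimator; the bin count and function name are our own choices, since the article does not specify its estimator.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram-based estimate of I(Y;X) in bits for paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                    # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)      # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)      # marginal p(y)
    nz = pxy > 0                             # skip empty cells (0 log 0 = 0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A sensor compared with itself attains the maximal value (the entropy of its own histogram), while independent readings give values near zero; this is the sense in which mutual information weighs the dependency between two sensors.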

3.2. Gaussian Process Regression

A Gaussian Process (GP) is a generalization of the Gaussian distribution and an important type of stochastic process [32,36]. No explicit parametric form is required when modeling with a GP. Given the input data set $D = \{ x_n \}_{n=1}^{N}$, $x \in \mathbb{R}^d$, the corresponding function values $\{ f(x_1), \ldots, f(x_n) \}$ comprise a collection of random variables that obey a joint Gaussian distribution and form a GP, as given by
$$f(x) \sim GP(m(x), k(x_i, x_j)) \qquad (12)$$
$$m(x) = E[f(x)] \qquad (13)$$
$$k(x_i, x_j) = E[(f(x_i) - m(x_i))(f(x_j) - m(x_j))] \qquad (14)$$
where $m(x)$ denotes the mean function and $k(x_i, x_j)$ indicates the covariance function.
In practical scenarios, the function values contain noise, which can be expressed by
$$y = f(x) + \varepsilon \qquad (15)$$
where $\varepsilon$ is white noise with $\varepsilon \sim N(0, \sigma_n^2)$, independent of $f(x)$. Moreover, if $f(x)$ forms a GP, the observation $y$ is also a GP, which can be represented by
$$y \sim GP(m(x), k(x_i, x_j) + \sigma_n^2 \delta_{ij}) \qquad (16)$$
where $\delta_{ij}$ is the Kronecker delta, which equals 1 when $i = j$ and 0 otherwise.
GPR is a probabilistic technique for the regression problem and is constrained by the prior distribution. By utilizing the available training data sets, an estimate of the posterior distribution can be obtained. Hence, this methodology makes use of the function space defined by the prior distribution of the GP. The predictive output of the GP under the posterior distribution can be calculated within the Bayesian framework [37].
Assume that $D_1 = \{ x_i, y_i \}_{i=1}^{N}$ is the training data set and $D_2 = \{ x_i^*, y_i^* \}_{i=1}^{N^*}$ is the testing data set, where $x_i, x_i^* \in \mathbb{R}^d$ and $d$ is the input dimension; $m$ and $m_*$ are the mean vectors of the training and testing data sets, respectively. $f(x_*)$ is the function output for a test input, collected in the vector $f_*$, and $y$ is the training output vector. According to Equation (15), $f_*$ and $y$ obey a joint Gaussian distribution, as illustrated by
$$\begin{bmatrix} y \\ f_* \end{bmatrix} \sim N \left( \begin{bmatrix} m \\ m_* \end{bmatrix}, \begin{bmatrix} C(X,X) & K(X,X_*) \\ K(X_*,X) & K(X_*,X_*) \end{bmatrix} \right) \qquad (17)$$
where $C(X,X) = K(X,X) + \sigma_n^2 I$ is the covariance matrix of the training data sets, $\sigma_n^2$ is the variance of the white noise, $I \in \mathbb{R}^{N \times N}$ is the identity matrix, $K(X,X_*) \in \mathbb{R}^{N \times N^*}$ denotes the covariance matrix between the training and testing data sets, and $K(X_*,X_*)$ is the covariance matrix of the testing data sets.
According to the properties of a GP, the posterior conditional distribution of $f_*$ is given by
$$f_* \mid X, y, X_* \sim N(\bar{f}_*, \mathrm{cov}(f_*)) \qquad (18)$$
$$\bar{f}_* = E[f_* \mid X, y, X_*] = m_* + K(X_*,X) \, C(X,X)^{-1} (y - m) \qquad (19)$$
$$\mathrm{cov}(f_*) = K(X_*,X_*) - K(X_*,X) \, C(X,X)^{-1} K(X,X_*) \qquad (20)$$
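The posterior mean and covariance translate directly into code. The sketch below assumes a squared-exponential covariance with illustrative hyperparameters and a zero mean function ($m = 0$); these choices are ours, since the article does not list its trained kernel values.

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Zero-mean GPR posterior: predictive mean and covariance at test
    inputs Xs, given 1-D training inputs X and noisy outputs y."""
    def k(a, b):  # squared-exponential covariance (an assumed choice)
        d = a[:, None] - b[None, :]
        return sigma_f ** 2 * np.exp(-0.5 * (d / length) ** 2)
    C = k(X, X) + sigma_n ** 2 * np.eye(len(X))  # noisy training covariance
    Kxs = k(X, Xs)
    Cinv = np.linalg.inv(C)
    mean = Kxs.T @ Cinv @ y                      # posterior mean
    cov = k(Xs, Xs) - Kxs.T @ Cinv @ Kxs         # posterior covariance
    return mean, cov

# Fit one noisy signal and predict at a new point
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X)
mean, cov = gpr_predict(X, y, np.array([2.5]))
```

In the article's setting, the training pairs would come from the training sensor's readings, and the prediction would be compared against the target sensor's data.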

3.3. Anomaly Detection Metrics

Three metrics are usually utilized to measure the accuracy of anomaly detection: the False Positive Ratio ($FPR$), the False Negative Ratio ($FNR$), and the Accuracy ($ACC$). $FPR$ is the ratio of normal data that are falsely detected as anomalous, which can be calculated by
$$FPR = \frac{FN}{TP + FN} \times 100\% \qquad (21)$$
where $FN$ is the number of normal data identified as anomalous, and $TP + FN$ is the total number of normal data.
$FNR$ is the ratio of anomalous data that are erroneously accepted as normal, which can be calculated by
$$FNR = \frac{FP}{FP + TN} \times 100\% \qquad (22)$$
where $FP$ is the number of anomalous data identified as normal, and $FP + TN$ is the total number of anomalous data. Smaller values of $FNR$ and $FPR$ indicate better performance of the anomaly detection method.
$ACC$ is the ratio of all data that are correctly classified, which can be calculated by
$$ACC = \frac{TP + TN}{FP + FN + TN + TP} \times 100\% \qquad (23)$$
where $TP + TN$ is the number of anomalous data correctly detected as anomalies plus the number of normal data correctly identified as normal, and $FP + FN + TN + TP$ is the total number of data detected.
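Note that these conventions differ slightly from the most common usage (here $FN$ counts normal data flagged as anomalous, and $FP$ counts anomalous data accepted as normal). Keeping the article's conventions, the three metrics can be written as a small sketch:

```python
def fpr(fn, tp):
    """FPR: share of normal data falsely flagged as anomalous (percent)."""
    return fn / (tp + fn) * 100.0

def fnr(fp, tn):
    """FNR: share of anomalous data erroneously accepted as normal (percent)."""
    return fp / (fp + tn) * 100.0

def acc(tp, tn, fp, fn):
    """ACC: share of all data classified correctly (percent)."""
    return (tp + tn) / (tp + fn + fp + tn) * 100.0

# Reproduces the data-set-1 FPR of the quantitative strategy in Section 4.3
print(round(fpr(fn=4, tp=188), 2))   # 2.08
```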

4. Experimental Results and Analysis

In this section, we first present an overview of the sensor data for CM of the aircraft engine. The suitable sensors for CM are selected first; the most correlated sensors are then used for the subsequent data anomaly detection. The framework of sensor selection and data anomaly detection is shown in Figure 4.
After the system condition is collected by the pre-deployed sensors, the sensor data sets can be utilized for the following analysis. The sensor selection strategy for CM is based on our previous work. Then, the mutual information among sensors is calculated to measure their correlation. The target sensor that will be used for data anomaly detection must be the same in both cases for the following comparison of detection performance. The sensor that has the largest mutual information with the target sensor is used to train the GPR. By analyzing the anomaly detection results on the target sensor, the influence of the sensor selection strategy on data anomaly detection is demonstrated.

4.1. Sensor Data Description

As introduced in Section 2, there are 21 sensors that are utilized to sense the engine condition. The experiments are carried out under four different combinations of operational conditions and failure modes [34]. The sensor data sets of the overall experiments are illustrated in Table 2.
Each data set is divided into training and testing subsets. The training set contains run-to-failure information, while the testing set contains data up to a time before failure. In order to evaluate the effectiveness of our method, data set 1, which has one fault mode (HPC degradation) and one operational condition (Sea Level), is picked first, as shown in Table 3.

4.2. Sensor Selection Procedure

The sensor selection procedure is based on our previous work [27], which proposed the quantitative sensor selection strategy. The procedure includes two steps. First, the information contained in the sensor data is measured by entropy, as introduced in Section 3.1.1. Then, the modified permutation entropy is calculated, which only considers the $2!$ permutation entropy value. This value can be utilized to describe the increasing or decreasing trend of the sensor data. In this way, the sensors that are more suitable for CM are selected.
The quantitative sensor selection strategy aims at finding the information contained in the sensor data sets. The output of every sensor can be considered as a random variable. To measure the information contained in the sensor data, the entropy, calculated from the probability of each data value, is utilized. A larger entropy value means that the data contain more information, as introduced in Section 3.1. Then, the sensors suitable for system CM are selected by utilizing the improved permutation entropy, which considers the probability that two adjacent sensor readings increase or decrease. This feature describes the increasing or decreasing trend of the sensor data and is preferred for system CM.
The work in [25] utilizes the observing sensor selection strategy and selects seven sensors for aircraft engine CM. The observing method is based on subjective judgement. For a fair comparison with the quantitative sensor selection strategy, the number of sensors selected in [27] is the same as in [25]. In this study, we also adopt the same data set and the same seven sensors as in our previous work [27], namely #3, #4, #8, #9, #14, #15, and #17. The sensors selected in [25] are #2, #4, #7, #8, #11, #12, and #15. In the following evaluation, the experiments are carried out on these two groups of sensors to demonstrate the merit of the quantitative sensor selection strategy.
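The trend-ranking step of the selection procedure can be sketched as follows. The synthetic readings and sensor ids ("#a", "#b") are hypothetical; the actual criteria combining entropy and the trend measure are detailed in [27].

```python
import numpy as np

def trend_score(y):
    """2! permutation entropy in bits; smaller means a clearer monotone trend."""
    p = float(np.mean(np.diff(np.asarray(y, dtype=float)) > 0))
    if p == 0.0 or p == 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def select_sensors(readings, k):
    """Pick the k channels whose data show the clearest degradation trend.
    `readings` maps sensor ids to time series (synthetic data here)."""
    return sorted(readings, key=lambda s: trend_score(readings[s]))[:k]

rng = np.random.default_rng(0)
readings = {
    "#a": rng.normal(size=200),             # trendless noise
    "#b": np.cumsum(rng.random(size=200)),  # steady drift, as in degradation
}
print(select_sensors(readings, 1))          # ['#b']
```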

4.3. Data Anomaly Detection and Analysis

In order to evaluate the effect of the sensor selection strategy on data anomaly detection, we first calculate the mutual information among the sensors in each of the two groups. Mutual information can be utilized to measure the correlation among sensors.
Mutual information values among the sensors selected by the quantitative sensor selection strategy for data set 1 are shown in Table 4.
Mutual information values among the sensors selected by the observing sensor selection strategy for data set 1 are shown in Table 5.
To compare the effectiveness of the quantitative sensor selection method with the observing sensor selection method, the testing sensor data should be the same while the training sensor data differ. By analyzing the sensors illustrated in Table 4 and Table 5, sensor #15 is selected as the target testing sensor. For the quantitative sensor selection, sensor #3 is chosen as the training sensor. For the observing sensor selection, sensor #2 is chosen as the training sensor.
In the following evaluation step, GPR is utilized to detect anomalies in the testing sensor data. The parameters of the GPR are trained with the training sensor data of the two sensor selection strategies, respectively, since these sensors have the maximal mutual information with the same target sensor. The experimental results of data anomaly detection for data set 1 are shown in Figure 5 and Figure 6.
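Given the GPR posterior for the target sensor, a decision rule is needed to label individual points as anomalous. The article does not spell out its threshold in this section, so the sketch below uses a common k-sigma band on the posterior as an assumed rule.

```python
import numpy as np

def flag_anomalies(y_obs, mean, var, k=3.0):
    """Flag observations lying outside mean +/- k*std of the GPR prediction.
    The 3-sigma band is an assumption, not the article's stated threshold."""
    y_obs, mean = np.asarray(y_obs, float), np.asarray(mean, float)
    std = np.sqrt(np.asarray(var, float))
    return np.abs(y_obs - mean) > k * std

# A point 5 standard deviations from the prediction is flagged
print(flag_anomalies([0.1, 5.0], mean=[0.0, 0.0], var=[1.0, 1.0]))  # [False  True]
```

Counting the flags raised on normal data then yields the FN term of the FPR computations that follow.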
The number of normal sensor data detected as anomalous is 4 and 24 for the two methods, respectively. For the quantitative sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{4}{4 + 188} \times 100\% = 2.08\%$$
For the observing sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{24}{24 + 168} \times 100\% = 12.50\%$$
Another sensor data set is also used to evaluate the proposed method. The mutual information values among the sensors selected by the quantitative sensor selection strategy for data set 2 are shown in Table 6.
Mutual information values among the sensors selected by the observing sensor selection strategy for data set 2 are shown in Table 7.
The experimental results of data anomaly detection for data set 2 are shown in the following Figure 7 and Figure 8.
For the quantitative sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{4}{4 + 175} \times 100\% = 2.23\%$$
For the observing sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{10}{10 + 169} \times 100\% = 5.59\%$$
To evaluate the effectiveness further, two additional data sets from another working condition are utilized in the following experiments. For the third data set, the mutual information values among the sensors selected by the quantitative sensor selection strategy are shown in Table 8.
Mutual information values among the sensors selected by the observing sensor selection strategy for data set 3 are shown in Table 9.
The experimental results of data anomaly detection for data set 3 are shown in the following Figure 9 and Figure 10.
For the quantitative sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{3}{3 + 180} \times 100\% = 1.64\%$$
For the observing sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{17}{17 + 166} \times 100\% = 9.29\%$$
For the fourth data set, mutual information values among the sensors selected by the quantitative sensor selection strategy are shown in Table 10.
Mutual information values among the sensors selected by the observing sensor selection strategy for data set 4 are shown in Table 11.
The experimental results of data anomaly detection for data set 4 are shown in the following Figure 11 and Figure 12.
For the quantitative sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{18}{18 + 213} \times 100\% = 7.79\%$$
For the observing sensor selection strategy, the $FPR$ of data anomaly detection is
$$FPR = \frac{25}{25 + 206} \times 100\% = 10.82\%$$
For the quantitative sensor selection strategy, the four values of $FPR$ are 2.08%, 2.23%, 1.64%, and 7.79%, respectively. For the observing sensor selection strategy, the four values of $FPR$ are 12.50%, 5.59%, 9.29%, and 10.82%, respectively. In the above-mentioned data sets, four pairs of sensors are also selected randomly to implement data anomaly detection; the resulting $FPR$ values are 32.81%, 50.84%, 54.10%, and 25.97%, respectively. To compare the performance of the three types of sensor selection, the mean and standard deviation of the $FPR$ are calculated, as illustrated in Table 12.
From the experimental results, it can be seen that the quantitative sensor selection strategy achieves smaller $FPR$ values than both the observing sensor selection strategy and the random sensor selection, as well as a smaller mean and standard deviation. The benefit of the sensor selection strategy for data anomaly detection is thus validated to a certain degree. However, the effectiveness needs to be validated further with more data, especially anomalous data.

5. Conclusions

In this article, we address a typical problem in engineering: how to select sensors for CM. Compared with the observing sensor selection method, the quantitative sensor selection method is more suitable for system CM and data anomaly detection. Its effectiveness is expected to enhance the reliability and performance of system CM by helping guarantee that the basic sensing information is correct, so that the system reliability can be enhanced further. The method that can be utilized for selecting sensors to carry out anomaly detection is also illustrated. Experimental results with sensor data sets obtained from aircraft engine CM show the correlation between the sensor selection strategy and data anomaly detection.
In future work, we will focus on how to utilize multidimensional data sets to carry out anomaly detection, with the expectation of further improving detection accuracy. Then, computing resources will be considered, especially for online anomaly detection and limited-computing-resource scenarios. The uncertainty of anomaly detection results will also be taken into account, and the influence of the detection results on system CM will be evaluated. Finally, the recovery of anomalous data will be considered. The false alarms produced by CM and the performance on practical systems will also be investigated. $FNR$ and $ACC$ will be utilized on data sets that include anomalous data to validate the effectiveness of sensor selection on data anomaly detection.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China under Grant No. 61571160 and the New Direction of Subject Development in Harbin Institute of Technology under Grant No. 01509421.

Author Contributions

Yu Peng proposed the framework and evaluation process; Liansheng Liu, Datong Liu and Yu Peng designed the experiments; Liansheng Liu and Yujie Zhang carried out the experiments and analyzed the results; Liansheng Liu and Datong Liu wrote the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, Q.; Feng, D. The aircraft service life and maintenance early warning management based on configuration. In Proceedings of the First International Conference on Reliability Systems Engineering, Beijing, China, 21–23 October 2015; pp. 1–9.
2. Hu, C.; Youn, B.D. Adaptive-sparse polynomial chaos expansion for reliability analysis and design of complex engineering systems. Struct. Multidiscip. Optim. 2011, 43, 419–442.
3. Bect, P.; Simeu-Abazi, Z.; Maisonneuve, P.L. Diagnostic and decision support systems by identification of abnormal events: Application to helicopters. Aerosp. Sci. Technol. 2015, 46, 339–350.
4. Simeu-Abazi, Z.; Lefebvre, A.; Derain, J.P. A methodology of alarm filtering using dynamic fault tree. Reliab. Eng. Syst. Saf. 2011, 96, 257–266.
5. Avenas, Y.; Dupont, L.; Baker, N.; Zara, H.; Barruel, F. Condition Monitoring: A Decade of Proposed Techniques. IEEE Ind. Electron. Mag. 2015, 9, 22–36.
6. Zhang, G. Optimum Sensor Localization/Selection in a Diagnostic/Prognostic Architecture. Ph.D. Thesis, Georgia Institute of Technology, Atlanta, GA, USA, 2005.
7. Van Noortwijk, J.M. A survey of the application of gamma processes in maintenance. Reliab. Eng. Syst. Saf. 2009, 94, 2–21.
8. Rausch, M.; Liao, H. Joint production and spare part inventory control strategy driven by condition based maintenance. IEEE Trans. Reliab. 2010, 59, 507–516.
9. Wang, Z.; Chen, X. Condition Monitoring of Aircraft Sudden Failure. Procedia Eng. 2011, 15, 1308–1312.
10. Wang, Z. Study of Evolution Mechanism on Aircraft Sudden Failure. Procedia Eng. 2011, 15, 1303–1307.
11. Kumar, S.; Dolev, E.; Pecht, M. Parameter selection for health monitoring of electronic products. Microelectron. Reliab. 2010, 50, 161–168.
12. Muller, A.; Suhner, M.C.; Iung, B. Formalisation of a new prognosis model for supporting proactive maintenance implementation on industrial system. Reliab. Eng. Syst. Saf. 2008, 93, 234–253.
13. Yoon, J.; He, D.; Van Hecke, B. A PHM Approach to Additive Manufacturing Equipment Health Monitoring, Fault Diagnosis, and Quality Control. In Proceedings of the Prognostics and Health Management Society Conference, Fort Worth, TX, USA, 29 September–2 October 2014; pp. 1–9.
14. Orsagh, R.F.; Brown, D.W.; Kalgren, P.W.; Byington, C.S.; Hess, A.J.; Dabney, T. Prognostic health management for avionic systems. In Proceedings of the Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006; pp. 1–7.
15. Scanff, E.; Feldman, K.L.; Ghelam, S.; Sandborn, P.; Glade, A.; Foucher, B. Life cycle cost impact of using prognostic health management (PHM) for helicopter avionics. Microelectron. Reliab. 2007, 47, 1857–1864.
16. Ahmadi, A.; Fransson, T.; Crona, A.; Klein, M.; Soderholm, P. Integration of RCM and PHM for the next generation of aircraft. In Proceedings of the Aerospace Conference, Big Sky, MT, USA, 7–14 March 2009; pp. 1–9.
17. Xu, J.P.; Xu, L. Health management based on fusion prognostics for avionics systems. J. Syst. Eng. Electron. 2011, 22, 428–436.
18. Hanachi, H.; Liu, J.; Banerjee, A.; Chen, Y.; Koul, A. A physics-based modeling approach for performance monitoring in gas turbine engines. IEEE Trans. Reliab. 2014, 64, 197–205.
  19. Liu, D.; Peng, Y.; Li, J.; Peng, X. Multiple optimized online support vector regression for adaptive time series prediction. Measurement 2013, 46, 2391–2404. [Google Scholar] [CrossRef]
  20. Tobon-Mejia, D.A.; Medjaher, K.; Zerhouni, N. CNC machine tool’s wear diagnostic and prognostic by using dynamic Bayesian networks. Mech. Syst. Signal Process. 2012, 28, 167–182. [Google Scholar] [CrossRef]
  21. Xing, Y.; Miao, Q.; Tsui, K.L.; Pecht, M. Prognostics and health monitoring for lithium-ion battery. In Proceedings of the International Conference on Intelligence and Security Informatics, Beijing, China, 10–12 May 2011; pp. 242–247.
  22. Chen, M.; Azarian, M.; Pecht, M. Sensor Systems for Prognostics and Health Management. Sensors 2010, 10, 5774–5797. [Google Scholar] [CrossRef] [PubMed]
  23. Francomano, M.T.; Accoto, D.; Guglielmelli, E. Artificial sense of slip—A review. IEEE Sens. J. 2013, 13, 2489–2498. [Google Scholar] [CrossRef]
  24. Orchard, M.; Brown, D.; Zhang, B.; Georgoulas, G.; Vachtsevanos, G. Anomaly Detection: A Particle Filtering Framework with an Application to Aircraft Systems. In Proceedings of the Integrated Systems Health Management Conference, Denver, CO, USA, 6–9 October 2008; pp. 1–8.
  25. Xu, J.P.; Wang, Y.S.; Xu, L. PHM-Oriented Integrated Fusion Prognostics for Aircraft Engines Based on Sensor Data. IEEE Sens. J. 2014, 14, 1124–1132. [Google Scholar] [CrossRef]
  26. Wang, T.Y.; Yu, J.B.; Siegel, D.; Lee, J. A similarity-based prognostics approach for remaining useful life estimation of engineered systems. In Proceedings of the International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; pp. 1–6.
  27. Liu, L.S.; Wang, S.J.; Liu, D.T.; Zhang, Y.J.; Peng, Y. Entropy-based sensor selection for condition monitoring and prognostics of aircraft engine. Microelectron. Reliab. 2015, 55, 2092–2096. [Google Scholar] [CrossRef]
  28. Hayes, M.A.; Capretz, M.A.M. Contextual Anomaly Detection in Big Sensor Data. In Proceedings of the IEEE International Congress on Big Data, Anchorage, AK, USA, 27 June–2 July 2014; pp. 64–71.
  29. Bosman, H.H.W.J.; Liotta, A.; Iacca, G.; Wortche, H.J. Anomaly Detection in Sensor Systems Using Lightweight Machine Learning. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 7–13.
  30. Marti, L.; Sanchez-Pi, N.; Molina, J.M.; Garcia, A.C.B. Anomaly Detection Based on Sensor Data in Petroleum Industry Applications. Sensors 2015, 15, 2774–2797. [Google Scholar] [CrossRef] [PubMed]
  31. Han, M.; Ren, W. Global mutual information-based feature selection approach using single-objective and multi-objective optimization. Neurocomputing 2015, 168, 47–54. [Google Scholar] [CrossRef]
  32. Pang, J.; Liu, D.; Liao, H.; Peng, Y.; Peng, X. Anomaly detection based on data stream monitoring and prediction with improved Gaussian process regression algorithm. In Proceedings of the IEEE Conference on Prognostics and Health Management, Cheney, WA, USA, 22–25 June 2014; pp. 1–7.
  33. Frederick, D.K.; DeCastro, J.A.; Litt, J.S. User’s Guide for the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS); National Technical Information Service: Springfield, VA, USA, 2007; pp. 1–47. [Google Scholar]
  34. Saxena, A.; Goebel, K.; Simon, D.; Eklund, N. Damage propagation modeling for aircraft engine run-to-failure simulation. In Proceedings of the International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; pp. 1–9.
  35. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  36. Liu, D.; Pang, J.; Zhou, J.; Peng, Y.; Pecht, M. Prognostics for state of health estimation of lithium-ion batteries based on combination Gaussian process functional regression. Microelectron. Reliab. 2013, 53, 832–839. [Google Scholar] [CrossRef]
  37. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006; pp. 13–16. [Google Scholar]
Figure 1. Diagram for the aircraft gas turbine engine [33].
Figure 2. Layout of various modules and their connections [33].
Figure 3. Entropy vs. probability.
Figure 4. Framework of sensor selection and data anomaly detection.
Figure 5. Anomaly detection for sensor data set 1 with Gaussian Process Regression (GPR) for the quantitative sensor selection.
Figure 6. Anomaly detection for sensor data set 1 with GPR for the observing sensor selection.
Figure 7. Anomaly detection for sensor data set 2 with GPR for the quantitative sensor selection.
Figure 8. Anomaly detection for sensor data set 2 with GPR for the observing sensor selection.
Figure 9. Anomaly detection for sensor data set 3 with GPR for the quantitative sensor selection.
Figure 10. Anomaly detection for sensor data set 3 with GPR for the observing sensor selection.
Figure 11. Anomaly detection for sensor data set 4 with GPR for the quantitative sensor selection.
Figure 12. Anomaly detection for sensor data set 4 with GPR for the observing sensor selection.
Table 1. Description of sensor signals [33].

| Index | Symbol | Description | Units |
|---|---|---|---|
| 1 | T2 | Total temperature at fan inlet | R |
| 2 | T24 | Total temperature at LPC outlet | R |
| 3 | T30 | Total temperature at HPC outlet | R |
| 4 | T50 | Total temperature at LPT outlet | R |
| 5 | P2 | Pressure at fan inlet | psia |
| 6 | P15 | Total pressure in bypass-duct | psia |
| 7 | P30 | Total pressure at HPC outlet | psia |
| 8 | Nf | Physical fan speed | rpm |
| 9 | Nc | Physical core speed | rpm |
| 10 | Epr | Engine pressure ratio | - |
| 11 | Ps30 | Static pressure at HPC outlet | psia |
| 12 | Phi | Ratio of fuel flow to Ps30 | pps/psi |
| 13 | NRf | Corrected fan speed | rpm |
| 14 | NRc | Corrected core speed | rpm |
| 15 | BPR | Bypass ratio | - |
| 16 | farB | Burner fuel-air ratio | - |
| 17 | htBleed | Bleed enthalpy | - |
| 18 | Nf_dmd | Demanded fan speed | rpm |
| 19 | PCNfR_dmd | Demanded corrected fan speed | rpm |
| 20 | W31 | HPT coolant bleed | lbm/s |
| 21 | W32 | LPT coolant bleed | lbm/s |

R: the Rankine temperature scale; psia: pounds per square inch absolute; rpm: revolutions per minute; pps: pulse per second; psi: pounds per square inch; lbm/s: pound mass per second.
Table 2. Sensor data set description.

|  | Set 1 | Set 2 | Set 3 | Set 4 |
|---|---|---|---|---|
| Failure modes | 1 | 1 | 2 | 2 |
| Operation conditions | 1 | 6 | 1 | 6 |
| Training units | 100 | 260 | 100 | 248 |
| Testing units | 100 | 259 | 100 | 249 |
Table 3. Sensor data set for evaluation experiments.

| Cycle | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 7 | Sensor 8 | Sensor 21 |
|---|---|---|---|---|---|---|
| 1 | 641.82 | 1589.7 | 1400.6 | 554.36 | 2388.1 | 23.4190 |
| 2 | 642.15 | 1591.8 | 1403.1 | 553.75 | 2388.0 | 23.4236 |
| 3 | 642.35 | 1588.0 | 1404.2 | 554.26 | 2388.1 | 23.3442 |
| … | … | … | … | … | … | … |
| 192 | 643.54 | 1601.4 | 1427.2 | 551.25 | 2388.3 | 22.9649 |
Table 4. Mutual information values among sensors selected by quantitative strategy for data set 1.

|  | Sensor 3 | Sensor 4 | Sensor 8 | Sensor 9 | Sensor 14 | Sensor 15 | Sensor 17 |
|---|---|---|---|---|---|---|---|
| Sensor 3 | 4.6740 | 4.0270 | 2.5950 | 4.0353 | 3.9717 | 4.1129 | 1.3478 |
| Sensor 4 | 4.0270 | 4.6104 | 2.5964 | 3.9717 | 3.9009 | 4.0421 | 1.3529 |
| Sensor 8 | 2.5950 | 2.5964 | 3.1496 | 2.5614 | 2.4906 | 2.6246 | 0.7188 |
| Sensor 9 | 4.0353 | 3.9717 | 2.5614 | 4.6187 | 3.9092 | 4.0576 | 1.2321 |
| Sensor 14 | 3.9717 | 3.9009 | 2.4906 | 3.9092 | 4.5479 | 3.9896 | 1.2506 |
| Sensor 15 | 4.1129 | 4.0421 | 2.6246 | 4.0576 | 3.9896 | 4.6820 | 1.3368 |
| Sensor 17 | 1.3478 | 1.3529 | 0.7188 | 1.2321 | 1.2506 | 1.3368 | 1.7409 |
Table 5. Mutual information values among sensors selected by observing strategy for data set 1.

|  | Sensor 2 | Sensor 4 | Sensor 7 | Sensor 8 | Sensor 11 | Sensor 12 | Sensor 15 |
|---|---|---|---|---|---|---|---|
| Sensor 2 | 4.6805 | 4.0335 | 4.0617 | 2.6015 | 3.6152 | 4.0455 | 4.1050 |
| Sensor 4 | 4.0335 | 4.6104 | 3.9916 | 2.5964 | 3.5523 | 3.9681 | 4.0421 |
| Sensor 7 | 4.0617 | 3.9916 | 4.6314 | 2.5984 | 3.5733 | 3.9891 | 4.0631 |
| Sensor 8 | 2.6015 | 2.5964 | 2.5984 | 3.1496 | 2.2341 | 2.5822 | 2.6246 |
| Sensor 11 | 3.6152 | 3.5523 | 3.5733 | 2.2341 | 4.1849 | 3.5643 | 3.6238 |
| Sensor 12 | 4.0455 | 3.9681 | 3.9891 | 2.5822 | 3.5643 | 4.6152 | 4.0541 |
| Sensor 15 | 4.1050 | 4.0421 | 4.0631 | 2.6246 | 3.6238 | 4.0541 | 4.6820 |
Table 6. Mutual information values among sensors selected by quantitative strategy for data set 2.

|  | Sensor 3 | Sensor 4 | Sensor 8 | Sensor 9 | Sensor 14 | Sensor 15 | Sensor 17 |
|---|---|---|---|---|---|---|---|
| Sensor 3 | 4.6683 | 3.9870 | 2.2048 | 3.8279 | 3.8155 | 4.2054 | 1.4488 |
| Sensor 4 | 3.9870 | 4.5060 | 2.0657 | 3.6655 | 3.6687 | 4.0508 | 1.3359 |
| Sensor 8 | 2.2048 | 2.0657 | 2.6745 | 1.9453 | 1.9717 | 2.2503 | 0.4851 |
| Sensor 9 | 3.8279 | 3.6655 | 1.9453 | 4.3314 | 3.5406 | 3.8685 | 1.2582 |
| Sensor 14 | 3.8155 | 3.6687 | 1.9717 | 3.5406 | 4.3346 | 3.8794 | 1.2694 |
| Sensor 15 | 4.2054 | 4.0508 | 2.2503 | 3.8685 | 3.8794 | 4.7167 | 1.4526 |
| Sensor 17 | 1.4488 | 1.3359 | 0.4851 | 1.2582 | 1.2694 | 1.4526 | 1.7994 |
Table 7. Mutual information values among sensors selected by observing strategy for data set 2.

|  | Sensor 2 | Sensor 4 | Sensor 7 | Sensor 8 | Sensor 11 | Sensor 12 | Sensor 15 |
|---|---|---|---|---|---|---|---|
| Sensor 2 | 4.6021 | 3.9207 | 3.9243 | 2.1695 | 3.5601 | 3.8305 | 4.1469 |
| Sensor 4 | 3.9207 | 4.5060 | 3.8282 | 2.0657 | 3.4486 | 3.7344 | 4.0508 |
| Sensor 7 | 3.9243 | 3.8282 | 4.5019 | 2.1235 | 3.4599 | 3.7380 | 4.0389 |
| Sensor 8 | 2.1695 | 2.0657 | 2.1235 | 2.6745 | 1.7284 | 1.9958 | 2.2503 |
| Sensor 11 | 3.5601 | 3.4486 | 3.4599 | 1.7284 | 4.1300 | 3.3584 | 3.6748 |
| Sensor 12 | 3.8305 | 3.7344 | 3.7380 | 1.9958 | 3.3584 | 4.4080 | 3.9451 |
| Sensor 15 | 4.1469 | 4.0508 | 4.0389 | 2.2503 | 3.6748 | 3.9451 | 4.7167 |
Table 8. Mutual information values among sensors selected by quantitative strategy for data set 3.

|  | Sensor 3 | Sensor 4 | Sensor 8 | Sensor 9 | Sensor 14 | Sensor 15 | Sensor 17 |
|---|---|---|---|---|---|---|---|
| Sensor 3 | 4.6506 | 4.0666 | 2.4369 | 4.1106 | 3.9275 | 3.9711 | 1.2757 |
| Sensor 4 | 4.0666 | 4.6255 | 2.3559 | 4.0855 | 3.8872 | 3.9460 | 1.2895 |
| Sensor 8 | 2.4369 | 2.3559 | 2.9247 | 2.4529 | 2.2395 | 2.2755 | 0.4737 |
| Sensor 9 | 4.1106 | 4.0855 | 2.4529 | 4.6694 | 3.9312 | 3.9672 | 1.3126 |
| Sensor 14 | 3.9275 | 3.8872 | 2.2395 | 3.9312 | 4.4712 | 3.7917 | 1.1760 |
| Sensor 15 | 3.9711 | 3.9460 | 2.2755 | 3.9672 | 3.7917 | 4.5072 | 1.2015 |
| Sensor 17 | 1.2757 | 1.2895 | 0.4737 | 1.3126 | 1.1760 | 1.2015 | 1.6499 |
Table 9. Mutual information values among sensors selected by observing strategy for data set 3.

|  | Sensor 2 | Sensor 4 | Sensor 7 | Sensor 8 | Sensor 11 | Sensor 12 | Sensor 15 |
|---|---|---|---|---|---|---|---|
| Sensor 2 | 4.5399 | 3.9559 | 3.8640 | 2.3338 | 3.4350 | 3.9346 | 3.8452 |
| Sensor 4 | 3.9559 | 4.6255 | 3.9571 | 2.3559 | 3.5129 | 4.0201 | 3.9460 |
| Sensor 7 | 3.8640 | 3.9571 | 4.5336 | 2.2639 | 3.4286 | 3.9207 | 3.8389 |
| Sensor 8 | 2.3338 | 2.3559 | 2.2639 | 2.9247 | 1.9770 | 2.3781 | 2.2755 |
| Sensor 11 | 3.4350 | 3.5129 | 3.4286 | 1.9770 | 4.0818 | 3.4689 | 3.4174 |
| Sensor 12 | 3.9346 | 4.0201 | 3.9207 | 2.3781 | 3.4689 | 4.5965 | 3.9019 |
| Sensor 15 | 3.8452 | 3.9460 | 3.8389 | 2.2755 | 3.4174 | 3.9019 | 4.5072 |
Table 10. Mutual information values among sensors selected by quantitative strategy for data set 4.

|  | Sensor 3 | Sensor 4 | Sensor 8 | Sensor 9 | Sensor 14 | Sensor 15 | Sensor 17 |
|---|---|---|---|---|---|---|---|
| Sensor 3 | 4.5710 | 3.8083 | 2.3952 | 3.8459 | 3.8228 | 3.8139 | 1.2177 |
| Sensor 4 | 3.8083 | 4.6736 | 2.4618 | 3.9545 | 3.9435 | 3.9285 | 1.2791 |
| Sensor 8 | 2.3952 | 2.4618 | 3.1526 | 2.4814 | 2.4460 | 2.4675 | 0.6542 |
| Sensor 9 | 3.8459 | 3.9545 | 2.4814 | 4.7112 | 3.9571 | 3.9541 | 1.2905 |
| Sensor 14 | 3.8228 | 3.9435 | 2.4460 | 3.9571 | 4.6882 | 3.9311 | 1.2809 |
| Sensor 15 | 3.8139 | 3.9285 | 2.4675 | 3.9541 | 3.9311 | 4.6793 | 1.3216 |
| Sensor 17 | 1.2177 | 1.2791 | 0.6542 | 1.2905 | 1.2809 | 1.3216 | 1.8265 |
Table 11. Mutual information values among sensors selected by observing strategy for data set 4.

|  | Sensor 2 | Sensor 4 | Sensor 7 | Sensor 8 | Sensor 11 | Sensor 12 | Sensor 15 |
|---|---|---|---|---|---|---|---|
| Sensor 2 | 4.6695 | 3.9008 | 3.9241 | 2.4397 | 3.6265 | 3.8682 | 3.9064 |
| Sensor 4 | 3.9008 | 4.6736 | 3.9342 | 2.4618 | 3.5946 | 3.8783 | 3.9285 |
| Sensor 7 | 3.9241 | 3.9342 | 4.6850 | 2.4570 | 3.6180 | 3.8776 | 3.9278 |
| Sensor 8 | 2.4397 | 2.4618 | 2.4570 | 3.1526 | 2.1680 | 2.4863 | 2.4675 |
| Sensor 11 | 3.6265 | 3.5946 | 3.6180 | 2.1680 | 4.3634 | 3.5620 | 3.6123 |
| Sensor 12 | 3.8682 | 3.8783 | 3.8776 | 2.4863 | 3.5620 | 4.6230 | 3.8659 |
| Sensor 15 | 3.9064 | 3.9285 | 3.9278 | 2.4675 | 3.6123 | 3.8659 | 4.6793 |
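The pairwise values in Tables 4–11 can in principle be reproduced with a histogram-based mutual information estimator over the selected sensor columns. The sketch below assumes such an estimator with an arbitrary bin count; the paper's exact estimator settings are not restated here, and the data below are synthetic, not the C-MAPSS sets.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram (plug-in) estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1)          # marginal of X
    py = pxy.sum(axis=0)          # marginal of Y
    nz = pxy > 0                  # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def mi_matrix(data, bins=32):
    """Symmetric matrix of pairwise MI values for the columns of `data`
    (rows = operating cycles, columns = selected sensors)."""
    n = data.shape[1]
    m = np.empty((n, n))
    for i in range(n):
        for j in range(i, n):
            m[i, j] = m[j, i] = mutual_information(data[:, i], data[:, j], bins)
    return m

# Illustrative use on synthetic sensor-like channels sharing a common trend.
rng = np.random.default_rng(0)
base = rng.normal(size=500)
data = np.column_stack([base + 0.1 * rng.normal(size=500) for _ in range(3)])
M = mi_matrix(data)
# Strongly correlated channels share high MI; the matrix is symmetric,
# with self-information (an entropy estimate) on the diagonal, as in the tables.
```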
Table 12. Mean and standard deviation of three selection strategies.

|  | Random Selection | Observing Selection | Quantitative Selection |
|---|---|---|---|
| Mean | 40.93% | 9.55% | 3.43% |
| Standard deviation | 13.68% | 2.95% | 2.91% |

Share and Cite

MDPI and ACS Style

Liu, L.; Liu, D.; Zhang, Y.; Peng, Y. Effective Sensor Selection and Data Anomaly Detection for Condition Monitoring of Aircraft Engines. Sensors 2016, 16, 623. https://doi.org/10.3390/s16050623
