Advancements in Gas Turbine Fault Detection: A Machine Learning Approach Based on the Temporal Convolutional Network–Autoencoder Model
Abstract
1. Introduction
2. Literature Review
3. Data Collection
Data Pre-Processing
4. TCN–Autoencoder
- (1) Limited adaptability: many existing models, including rule-based systems and early machine learning models, struggle to adapt to the variable operating modes of modern gas turbines; this limits their effectiveness in dynamic environments where operational parameters change frequently.
- (2) Scalability issues: traditional methods often do not scale well with the increasing volume of data generated by modern sensor technologies installed in turbines, leading to performance degradation as the data volume grows.
- (3) Insensitivity to subtle anomalies: most legacy systems are designed to detect large-scale deviations, which makes them less effective at detecting the subtle anomalies that can precede major faults.
- (4) High false alarm rate: owing to their generalized nature, these systems often produce frequent false alarms, which can lead to unnecessary shutdowns and maintenance activities and increase operating costs.
- (A) TCN architecture: Unlike traditional RNNs, TCNs offer efficient processing through parallelization and can capture long-range dependencies in time-series data. This makes our model highly effective in real-time fault detection scenarios.
- (B) MHA mechanism: This feature allows the model to focus on different parts of the data simultaneously, enhancing its ability to detect subtle anomalies that are indicative of potential faults. This directly addresses the insensitivity of many traditional models.
- (C) Anomaly detection through reconstruction: The autoencoder aspect of the model focuses on reconstructing the input data and identifying deviations from the norm. This approach reduces false positives by distinguishing between normal fluctuations and actual anomalies.
- (D) Scalability and adaptability: Our model is designed to be scalable by incorporating additional sensor data and adapting to new, unforeseen operational conditions without requiring extensive retraining.
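Point (A) hinges on dilated causal convolutions. The following pure-Python sketch (our illustration, not the authors' implementation) shows the core operation and how stacking dilations 1, 2, 4, ... widens the backward-looking receptive field exponentially:

```python
# Minimal pure-Python sketch of a 1-D dilated causal convolution, the
# building block behind a TCN's long receptive field. Illustrative only;
# not the authors' implementation.
def dilated_causal_conv(x, weights, dilation):
    """Convolve x with `weights`, looking only backwards in time and
    skipping (dilation - 1) samples between taps; zero-padded at the start."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation          # tap i looks i * dilation steps back
            acc += w * (x[j] if j >= 0 else 0.0)
        out.append(acc)
    return out

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially with depth while each layer stays cheap and parallelizable.
signal = [0.0] * 7 + [1.0]                # unit impulse at the last time step
layer1 = dilated_causal_conv(signal, [0.5, 0.5], dilation=1)
layer2 = dilated_causal_conv(layer1, [0.5, 0.5], dilation=2)
```

Because every tap looks only backwards, the layer never leaks future samples into its output, which is what makes TCNs usable for real-time monitoring.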
Algorithm 1 Anomaly Detection Model with TCN and Multi-Head Attention
1: Input: Time-series data X with shape (T, F), where T is the number of time steps and F is the number of features
2: Output: Reconstructed time-series X̂, anomaly scores
3: procedure TCNBRANCH(X, dilation_rate)
4:     Apply TCN layer with the specified dilation_rate
5:     Apply LayerNormalization
6:     return TCN output
7: end procedure
8: procedure ENCODER(X)
9:     for rate in [1, 2, 4] do
10:        TCN_output[rate] ← TCNBRANCH(X, rate)
11:    end for
12:    Encoded ← Concatenate(TCN_output[1], TCN_output[2], TCN_output[4])
13:    return Encoded
14: end procedure
15: procedure BOTTLENECK(Encoded)
16:    Bottleneck ← Dense layer on Encoded
17:    AttentionOutput ← MultiHeadAttention(Bottleneck, Bottleneck)
18:    return AttentionOutput
19: end procedure
20: procedure DECODER(AttentionOutput)
21:    Apply TCN with dilation rates [1, 2, 4]
22:    Decoded ← LayerNormalization output
23:    return Decoded
24: end procedure
25: procedure ANOMALYDETECTIONMODEL(X)
26:    Encoded ← ENCODER(X)
27:    AttentionOutput ← BOTTLENECK(Encoded)
28:    Decoded ← DECODER(AttentionOutput)
29:    X̂ ← Dense layer on Decoded to reconstruct the input shape
30:    return X̂, ComputeAnomalyScores(X, X̂)
31: end procedure
32: ReconstructedX, AnomalyScores ← ANOMALYDETECTIONMODEL(X)
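Algorithm 1 leaves ComputeAnomalyScores unspecified. A common choice, and the assumption made in this sketch (ours, not necessarily the authors' exact rule), is the per-time-step mean squared reconstruction error, thresholded a few standard deviations above its mean:

```python
# Hedged sketch of the ComputeAnomalyScores step in Algorithm 1. The exact
# scoring rule is not shown in the pseudocode; we assume per-time-step mean
# squared reconstruction error with a mean + n_sigma * std decision threshold.
def compute_anomaly_scores(x, x_hat):
    """Per-time-step MSE between input x and reconstruction x_hat,
    both given as T rows of F features."""
    return [sum((a - b) ** 2 for a, b in zip(row, rec)) / len(row)
            for row, rec in zip(x, x_hat)]

def flag_anomalies(scores, n_sigma=3.0):
    """Flag time steps whose score exceeds mean + n_sigma * std."""
    mean = sum(scores) / len(scores)
    std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return [s > mean + n_sigma * std for s in scores]
```

A well-reconstructed window yields a near-zero score, while a window the model cannot reconstruct stands out; this is how reconstruction-based detection distinguishes normal fluctuations from actual anomalies.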
5. Results and Discussion
5.1. A Brief Description of the Models Used for Comparison
Algorithm 2 Pseudocode for building a GRU-Autoencoder
1: function BUILD_GRU_AUTOENCODER(input_shape)
2:     inputs ← Input(shape=input_shape)
3:     encoded ← GRU(64, return_sequences=False)(inputs)
4:     decoded ← RepeatVector(input_shape[0])(encoded)
5:     decoded ← GRU(64, return_sequences=True)(decoded)
6:     denseLayer ← Dense(input_shape[-1])
7:     decoded ← TimeDistributed(denseLayer)(decoded)
8:     autoencoder ← Model(inputs, decoded)
9:     autoencoder.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error')
10:    return autoencoder
11: end function
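Lines 4 and 7 of Algorithm 2 rely on the Keras RepeatVector and TimeDistributed layers; the pure-Python sketch below (assumed semantics mirroring the Keras behavior, not the authors' code) shows the shape bookkeeping they perform:

```python
# Pure-Python sketch of the shape bookkeeping done by the RepeatVector and
# TimeDistributed layers used in Algorithm 2 (assumed Keras-like semantics).
def repeat_vector(v, n):
    """Tile one encoding vector v into a length-n sequence, one copy per step."""
    return [list(v) for _ in range(n)]

def time_distributed(fn, seq):
    """Apply the same layer fn independently at every time step."""
    return [fn(step) for step in seq]
```

RepeatVector feeds the same fixed-length encoding to every decoder step, and TimeDistributed applies one shared Dense layer per step; Algorithm 3 uses the identical pattern with LSTM cells.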
Algorithm 3 Pseudocode for building an LSTM-Autoencoder
1: function BUILD_LSTM_AUTOENCODER(input_shape)
2:     inputs ← Input(shape=input_shape)
3:     encoded ← LSTM(64, return_sequences=False)(inputs)
4:     decoded ← RepeatVector(input_shape[0])(encoded)
5:     decoded ← LSTM(64, return_sequences=True)(decoded)
6:     denseLayer ← Dense(input_shape[-1])
7:     decoded ← TimeDistributed(denseLayer)(decoded)
8:     autoencoder ← Model(inputs, decoded)
9:     autoencoder.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error')
10:    return autoencoder
11: end function
Algorithm 4 Pseudocode for building a Variational Autoencoder (VAE)
1: function BUILD_VAE(input_shape, latent_dim=32)
2:     inputs ← Input(shape=input_shape)
3:     encoded ← LSTM(64)(inputs)
4:     z_mean ← Dense(latent_dim)(encoded)
5:     z_log_var ← Dense(latent_dim)(encoded)
6:     z ← Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
7:     encoder ← Model(inputs, [z_mean, z_log_var, z])
8:     latent_inputs ← Input(shape=(latent_dim,))
9:     decoded ← RepeatVector(input_shape[0])(latent_inputs)
10:    decoded ← LSTM(64, return_sequences=True)(decoded)
11:    denseLayer ← Dense(input_shape[-1])
12:    decoded ← TimeDistributed(denseLayer)(decoded)
13:    decoder ← Model(latent_inputs, decoded)
14:    outputs ← decoder(encoder(inputs)[2])
15:    vae ← Model(inputs, outputs)
16:    vae_loss ← K.mean(K.square(inputs - outputs)) + K.mean(-0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1))
17:    vae.add_loss(vae_loss)
18:    vae.compile(optimizer='adam')
19:    return vae
20: end function
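Line 6 of Algorithm 4 calls a `sampling` helper that the pseudocode does not define. The usual reparameterization-trick implementation, assumed here (our sketch, not the authors' code), draws z = z_mean + exp(0.5 * z_log_var) * ε with ε ~ N(0, 1):

```python
# The `sampling` helper referenced on line 6 of Algorithm 4, implemented via
# the standard VAE reparameterization trick (our assumption; the paper's
# pseudocode does not show this function).
import math
import random

def sampling(z_mean, z_log_var, rng=random):
    """Draw z = mean + exp(0.5 * log_var) * eps, eps ~ N(0, 1), per dimension."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(z_mean, z_log_var)]
```

Sampling through this deterministic transform of an external noise source keeps the path from z_mean and z_log_var differentiable, which is what lets the VAE train with ordinary backpropagation.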
5.2. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Poullikkas, A. An overview of current and future sustainable gas turbine technologies. Renew. Sustain. Energy Rev. 2005, 9, 409–443.
- Tanaka, K.; Cavalett, O.; Collins, W.J.; Cherubini, F. Asserting the climate benefits of the coal-to-gas shift across temporal and spatial scales. Nat. Clim. Chang. 2019, 9, 389–396.
- Assareh, E.; Agarwal, N.; Arabkoohsar, A.; Ghodrat, M.; Lee, M. A transient study on a solar-assisted combined gas power cycle for sustainable multi-generation in hot and cold climates: Case studies of Dubai and Toronto. Energy 2023, 282, 128423.
- National Academies of Sciences, Engineering, and Medicine. Advanced Technologies for Gas Turbines; National Academies Press: Washington, DC, USA, 2020.
- Meher-Homji, C.B.; Chaker, M.; Bromley, A.F. The Fouling of Axial Flow Compressors: Causes, Effects, Susceptibility, and Sensitivity. In Volume 4: Cycle Innovations; Industrial and Cogeneration; Manufacturing Materials and Metallurgy; Marine, Proceedings of the ASME Turbo Expo 2009: Power for Land, Sea, and Air, Orlando, FL, USA, 8–12 June 2009; ASME: New York, NY, USA, 2009; pp. 571–590.
- De Michelis, C.; Rinaldi, C.; Sampietri, C.; Vario, R. Condition monitoring and assessment of power plant components. In Power Plant Life Management and Performance Improvement; John, E.O., Ed.; Woodhead Publishing: Sawston, UK, 2011; pp. 38–109.
- Mourad, A.H.I.; Almomani, A.; Sheikh, I.A.; Elsheikh, A.H. Failure analysis of gas and wind turbine blades: A review. Eng. Fail. Anal. 2023, 146, 107107.
- Waleed, K.M.; Reza, K.K.; Ghorbani, S. Common failures in hydraulic Kaplan turbine blades and practical solutions. Materials 2023, 16, 3303.
- Fahmi, A.T.W.K.; Kashyzadeh, K.R.; Ghorbani, S. A comprehensive review on mechanical failures cause vibration in the gas turbine of combined cycle power plants. Eng. Fail. Anal. 2022, 134, 106094.
- Kurz, R.; Meher, H.C.; Brun, K.; Moore, J.J.; Gonzalez, F. Gas turbine performance and maintenance. In Proceedings of the 42nd Turbomachinery Symposium, Houston, TX, USA, 1–3 October 2013; Texas A&M University, Turbomachinery Laboratories: College Station, TX, USA, 2013.
- Sun, J.; Bu, J.; Yang, J.; Hao, Y.; Lang, H. Wear failure analysis of ball bearings based on lubricating oil for gas turbine. Ind. Lubr. Tribol. 2023, 75, 36–41.
- Fentaye, A.D.; Baheta, A.T.; Gilani, S.I.; Kyprianidis, K.G. A review on gas turbine gas-path diagnostics: State-of-the-art methods, challenges and opportunities. Aerospace 2019, 6, 83.
- Volponi, A.J. Gas turbine engine health management: Past, present, and future trends. J. Eng. Gas Turbines Power 2014, 136, 051201.
- Matthaiou, I.; Khandelwal, B.; Antoniadou, I. Vibration monitoring of gas turbine engines: Machine-learning approaches and their challenges. Front. Built Environ. 2017, 3, 54.
- Abram, C.; Fond, B.; Beyrau, F. Temperature measurement techniques for gas and liquid flows using thermographic phosphor tracer particles. Prog. Energy Combust. Sci. 2018, 64, 93–156.
- Goebbels, K.; Reiter, H. Non-Destructive Evaluation of Ceramic Gas Turbine Components by X-Rays and Other Methods. In Progress in Nitrogen Ceramics; Riley, F.L., Ed.; Springer: Dordrecht, The Netherlands, 1983; Volume 65, pp. 627–634.
- Zhu, X.; Zhong, C.; Zhe, J. Lubricating oil conditioning sensors for online machine health monitoring–A review. Tribol. Int. 2017, 109, 473–484.
- DeSilva, U.; Bunce, R.H.; Schmitt, J.M.; Claussen, H. Gas turbine exhaust temperature measurement approach using time-frequency controlled sources. In Volume 6: Ceramics; Controls, Diagnostics and Instrumentation; Education; Manufacturing Materials and Metallurgy; Honors and Awards, Proceedings of the ASME Turbo Expo 2015: Turbine Technical Conference and Exposition, Montreal, QC, Canada, 15–19 June 2015; ASME: New York, NY, USA, 2015; p. V006T05A001.
- Mevissen, F.; Meo, M. A review of NDT/structural health monitoring techniques for hot gas components in gas turbines. Sensors 2019, 19, 711.
- Qaiser, M.T. Data Analysis and Prediction of Turbine Failures Based on Machine Learning and Deep Learning Techniques. Master's Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2023.
- Yan, W.; Yu, L. On accurate and reliable anomaly detection for gas turbine combustors: A deep learning approach. arXiv 2019, arXiv:1908.09238.
- Zhong, S.S.; Fu, S.; Lin, L. A novel gas turbine fault diagnosis method based on transfer learning with CNN. Measurement 2019, 137, 435–453.
- Zhou, D.; Yao, Q.; Wu, H.; Ma, S.; Zhang, H. Fault diagnosis of gas turbine based on partly interpretable convolutional neural networks. Energy 2020, 200, 117467.
- Rezaeian, N.; Gurina, R.; Saltykova, O.A.; Hezla, L.; Nohurov, M.; Reza Kashyzadeh, K. Novel GA-Based DNN Architecture for Identifying the Failure Mode with High Accuracy and Analyzing Its Effects on the System. Appl. Sci. 2024, 14, 3354.
- Tang, Y.; Sun, Z.; Zhou, D.; Huang, Y. Failure mode and effects analysis using an improved pignistic probability transformation function and grey relational projection method. Complex Intell. Syst. 2024, 10, 2233–2247.
- Babu, C.J.; Samuel, M.P.; Davis, A. Framework for development of comprehensive diagnostic tool for fault detection and diagnosis of gas turbine engines. J. Aerosp. Qual. Reliab. (Spec. Issue) 2016, 6, 35–47.
- Vatani, A. Degradation Prognostics in Gas Turbine Engines Using Neural Networks. Ph.D. Thesis, Concordia University, Montreal, QC, Canada, 2013.
- Lim, M.H.; Leong, M.S. Diagnosis for loose blades in gas turbines using wavelet analysis. J. Eng. Gas Turbines Power 2005, 127, 314–322.
- Arrigone, G.M.; Hilton, M. Theory and practice in using Fourier transform infrared spectroscopy to detect hydrocarbons in emissions from gas turbine engines. Fuel 2005, 84, 1052–1058.
- Santoso, B.; Anggraeni, W.; Pariaman, H.; Purnomo, M.H. RNN-Autoencoder approach for anomaly detection in power plant predictive maintenance systems. Int. J. Intell. Eng. Syst. 2022, 15, 363–381.
- Farahani, M. Anomaly Detection on Gas Turbine Time-Series' Data Using Deep LSTM-Autoencoder. Master's Thesis, Department of Computing Science, Faculty of Science and Technology, Umeå University, Umeå, Sweden, 2021.
- Zhultriza, F.; Subiantoro, A. Gas turbine anomaly prediction using hybrid convolutional neural network with LSTM in power plant. In Proceedings of the 2022 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), Malang, Indonesia, 16–18 June 2022; pp. 242–247.
- Liu, J.; Zhu, H.; Liu, Y.; Wu, H.; Lan, Y.; Zhang, X. Anomaly detection for time series using temporal convolutional networks and Gaussian mixture model. J. Phys. Conf. Ser. 2019, 1187, 042111.
- Fahmi, A.T.W.K.; Kashyzadeh, K.R.; Ghorbani, S. Fault detection in the gas turbine of the Kirkuk power plant: An anomaly detection approach using DLSTM-Autoencoder. Eng. Fail. Anal. 2024, 160, 108213.
- Meggitt PLC. CA202 Piezoelectric Accelerometer, Document Reference DS 262-020 Version 9, 15.06.2021. Available online: https://catalogue.meggittsensing.com/wp-content/uploads/2020/09/CA202-piezoelectric-accelerometer-data-sheet-English-.pdf (accessed on 14 May 2023).
- Guiñón, J.L.; Ortega, E.; García-Antón, J.; Pérez-Herranz, V. Moving average and Savitzky-Golay smoothing filters using Mathcad. Pap. ICEE 2007, 2007, 1–4.
- Das, K.; Jiang, J.; Rao, J.N.K. Mean squared error of empirical predictor. Ann. Stat. 2004, 32, 818–840.
- Mylonas, C.; Abdallah, I.; Chatzi, E. Conditional variational autoencoders for probabilistic wind turbine blade fatigue estimation using Supervisory, Control, and Data Acquisition data. Wind Energy 2021, 24, 1122–1139.
- Reza, K.K.; Amiri, N.; Ghorbani, S.; Souri, K. Prediction of concrete compressive strength using a back-propagation neural network optimized by a genetic algorithm and response surface analysis considering the appearance of aggregates and curing conditions. Buildings 2022, 12, 438.
- Kashyzadeh, K.R.; Ghorbani, S. New neural network-based algorithm for predicting fatigue life of aluminum alloys in terms of machining parameters. Eng. Fail. Anal. 2023, 146, 107128.
- Qi, J.; Du, J.; Siniscalchi, S.M.; Ma, X.; Lee, C.H. On mean absolute error for deep neural network based vector-to-vector regression. IEEE Signal Process. Lett. 2020, 27, 1485–1489.
Algorithm/Method | Advantages | Disadvantages |
---|---|---|
Traditional methods (physics-based and rule-based expert systems) [26] | Precision in identifying and characterizing faults. They provide a robust framework for fault analysis. | Computational demands. Rigidity in adapting to emerging or unconventional fault patterns. |
Signal processing techniques (e.g., Fourier analysis and wavelet transform) [28,29] | Effective in detecting anomalies in the data. | Difficulty distinguishing benign operational changes from real faults in real-time scenarios. |
Machine learning models [20] | Improved fault-detection accuracy. Greater adaptability. | Performance depends on data quality, and training requires significant computing resources. |
RNNs (i.e., LSTM and GRU) [30] | Suitable for sequential data. They can capture dependencies over time. | They may struggle with very long sequences or when long-term dependencies must be captured. |
Autoencoders (i.e., VAEs and VRAEs) [31] | Useful for dimensionality reduction. They can capture the distribution of normal data. | They may require significant training data. |
CNNs [32] | They can learn spatial patterns in time series. | They are primarily designed for image data. Adaptation to time series can be non-trivial. |
TCNs [33] | Efficient processing through parallelization. Capture long-range dependencies using dilated convolutions. | They may require special adjustments for specific time-series characteristics. |
Hybrid models [30,31] | They combine features of multiple models. Versatility in feature extraction and modeling. | Their complexity can lead to longer training times and may require fine-tuning. |
Layer (type) | Output Shape | Param # | Connected to |
---|---|---|---|
input_3 (InputLayer) | [(None, 100, 4)] | 0 | []
tcn_5 (TCN) | (None, 100, 32) | 13,088 | [input_3[0][0]]
tcn_6 (TCN) | (None, 100, 32) | 13,088 | [input_3[0][0]]
tcn_7 (TCN) | (None, 100, 32) | 13,088 | [input_3[0][0]]
layer_normalization_3 (LayerNormalization) | (None, 100, 32) | 64 | [tcn_5[0][0]]
layer_normalization_4 (LayerNormalization) | (None, 100, 32) | 64 | [tcn_6[0][0]]
layer_normalization_5 (LayerNormalization) | (None, 100, 32) | 64 | [tcn_7[0][0]]
concatenate_1 (Concatenate) | (None, 100, 96) | 0 | [layer_normalization_3[0][0], layer_normalization_4[0][0], layer_normalization_5[0][0]]
dense_2 (Dense) | (None, 100, 16) | 1552 | [concatenate_1[0][0]]
multi_head_attention_1 (MultiHeadAttention) | (None, 100, 16) | 2160 | [dense_2[0][0], dense_2[0][0]]
tcn_8 (TCN) | (None, 100, 32) | 45,032 | [multi_head_attention_1[0][0]]
dense_3 (Dense) | (None, 100, 4) | 132 | [tcn_8[0][0]]
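The Param # column above can be cross-checked with standard Keras parameter-count formulas. The arithmetic below is ours; num_heads=2 and key_dim=16 for the attention layer are assumptions that happen to reproduce the reported total:

```python
# Cross-check of selected "Param #" entries in the model summary above,
# using standard Keras parameter-count formulas (our arithmetic, not the
# authors' code; num_heads=2 and key_dim=16 are assumptions consistent
# with the reported attention total).
def dense_params(in_dim, units):
    return in_dim * units + units                      # kernel + bias

def layer_norm_params(channels):
    return 2 * channels                                # gain + bias per channel

def mha_params(model_dim, num_heads, key_dim):
    proj = model_dim * num_heads * key_dim + num_heads * key_dim  # one Q/K/V projection
    out = num_heads * key_dim * model_dim + model_dim             # output projection
    return 3 * proj + out

assert layer_norm_params(32) == 64       # layer_normalization_3..5
assert dense_params(96, 16) == 1552      # dense_2: 3 x 32 concatenated -> 16
assert mha_params(16, num_heads=2, key_dim=16) == 2160  # multi_head_attention_1
assert dense_params(32, 4) == 132        # dense_3: 32 filters -> 4 features
```

The concatenation width 96 = 3 × 32 also confirms that exactly three TCN branches (dilation rates 1, 2, and 4) feed the bottleneck.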
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Fahmi, A.-T.W.K.; Reza Kashyzadeh, K.; Ghorbani, S. Advancements in Gas Turbine Fault Detection: A Machine Learning Approach Based on the Temporal Convolutional Network–Autoencoder Model. Appl. Sci. 2024, 14, 4551. https://doi.org/10.3390/app14114551