Article

Neural Network Signal Integration from Thermogas-Dynamic Parameter Sensors for Helicopters Turboshaft Engines at Flight Operation Conditions

by
Serhii Vladov
1,
Lukasz Scislo
2,*,
Valerii Sokurenko
3,
Oleksandr Muzychuk
3,
Victoria Vysotska
4,5,
Serhii Osadchy
6 and
Anatoliy Sachenko
7,8
1
Department of Scientific Work Organization and Gender Issues, Kremenchuk Flight College of Kharkiv National University of Internal Affairs, 17/6 Peremohy Street, 39605 Kremenchuk, Ukraine
2
Faculty of Electrical and Computer Engineering, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
3
Kharkiv National University of Internal Affairs, Ministry of Internal Affairs of Ukraine, 61080 Kharkiv, Ukraine
4
Information Systems and Networks Department, Lviv Polytechnic National University, 12 Bandera Street, 79013 Lviv, Ukraine
5
Institute of Computer Science, Osnabrück University, 1 Friedrich-Janssen-Street, 49076 Osnabrück, Germany
6
Flight Operation and Flight Safety Department, Flight Academy of the National Aviation University, 1 Chobanu Stepana Street, 25005 Kropyvnytskyi, Ukraine
7
Research Institute for Intelligent Computer Systems, West Ukrainian National University, 11 Lvivska Street, 46009 Ternopil, Ukraine
8
Department of Teleinformatics, Kazimierz Pulaski University of Radom, 29, Malczewskiego Street, 26-600 Radom, Poland
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(13), 4246; https://doi.org/10.3390/s24134246
Submission received: 3 June 2024 / Revised: 25 June 2024 / Accepted: 27 June 2024 / Published: 29 June 2024

Abstract:
This article develops and applies a neural network method for integrating the signals from helicopter turboshaft engine thermogas-dynamic parameter sensors. The method corrects sensor data in real time, ensuring high accuracy and reliability of the readings. A neural network was developed that integrates the closed loops regulating the helicopter turboshaft engine parameters on the basis of the filtering method. It achieved an accuracy of 0.995 (99.5%) and reduced the loss function to 0.005 (0.5%) after 280 training epochs. A training algorithm based on error backpropagation was developed for the closed loops integrating the regulated helicopter turboshaft engine parameters using the filtering method. It combines improvement of accuracy on the validation set with overfitting control that considers the error dynamics, which preserves the model's generalization ability. An adaptive training rate improves adaptation to changes in the data and training conditions, improving performance. It is mathematically shown that the neural network integration of the closed loops regulating the helicopter turboshaft engine parameters using the filtering method is significantly more efficient than traditional filters (median-recursive, recursive and median). It reduces type I and type II errors by 2.11 times compared to the median-recursive filter, 2.89 times compared to the recursive filter, and 4.18 times compared to the median filter. The achieved results significantly increase the accuracy (up to 99.5%) and reliability of the helicopter turboshaft engine sensor readings, ensuring efficient and safe aircraft operation thanks to improved filtering methods and neural network data integration. These advances open up new prospects for the aviation industry, improving operational efficiency and overall helicopter flight safety through advanced data processing technologies.

1. Introduction and Related Work

Helicopter turboshaft engines (TEs) are complex technical devices requiring continuous monitoring of their condition [1]. Reliable operation of helicopter TEs depends on accurate monitoring of thermogas-dynamic parameters, such as temperature and pressure at various points of the engine, gas–generator rotor rpm, free turbine rotor speed, fuel consumption and others [2]. Modern monitoring and diagnostic systems use many sensors to obtain these data, which makes it possible to detect deviations from the norm and prevent possible malfunctions promptly. However, under flight operating conditions, sensors are exposed to various factors that affect measurement accuracy [3].
The helicopter turboshaft engine sensors must guarantee that data are received with high accuracy to ensure the helicopter's safe operation. Sensors that measure gas–generator rotor rpm, free turbine rotor speed, and the gas temperature in front of the compressor turbine are critical components of the engine control system [4]. In real operating conditions, sensors may begin to provide inadequate information due to various factors, such as noise, interference, and sensor malfunctions. Even when a sensor is recognized as operational, the information received may be distorted. This can lead to incorrect conclusions about the engine state and, as a result, to erroneous crew actions, which are critically dangerous in flight conditions.
Traditional methods for processing sensor signals often face problems associated with noise, sensor failures, and imperfect mathematical models [5,6,7,8]. As a result, a situation may arise when the control system receives distorted data, which leads to incorrect conclusions about the engine state [9,10]. This is especially critical for in-flight operating conditions, where the measurement accuracy can be affected by vibrations, temperature changes and other external factors [11,12].
To solve these problems, data integration methods [13,14] are of particular importance: they make it possible to combine information from various sensors and systems, minimizing the impact of errors and increasing diagnostic reliability. Data integration is based on algorithms that process signals while taking into account their relations and correlations [15]. This makes it possible to obtain a more accurate and complete picture of the engine state.
The relevance of data integration methods in helicopter TE control and diagnostic systems is due to several key factors. Firstly, increasing helicopter TE reliability is a priority task in aviation [16]. Data integration systems make it possible to combine and jointly process information from multiple sensors, which significantly increases the accuracy and reliability of engine condition diagnostics [17]. Such systems become indispensable in flight operating conditions, where failures and errors can lead to serious consequences. They provide continuous monitoring and allow the crew to respond quickly to any changes in engine operation, preventing potential emergencies.
Secondly, data integration methods significantly reduce the influence of external factors, such as vibrations and temperature fluctuations, which can distort the individual sensors’ readings [18]. It is possible to minimize measurement errors and increase the accuracy of the obtained data by taking into account the relations between various parameters and using correction algorithms [19]. This is especially important in helicopter operations, where the environment and flight conditions can vary significantly, affecting sensor performance.
Thirdly, data integration contributes to the early detection of faults and deviations from the norm [20]. Combining information from various sensors allows potential problems to be detected faster and more accurately, which makes it possible to carry out timely preventive and repair work [21]. This significantly reduces the risk of sudden failures and accidents, increasing overall flight safety. Early fault detection also helps to extend equipment life and reduce operating costs.
Finally, data fusion techniques are crucial for maintenance optimization [22]. Accurately determining the engine components’ conditions allows for effective maintenance planning, minimizing downtime and reducing costs. This enhances the helicopter’s operational performance and mission readiness.
In [23], an acoustic emission method for diagnosing the state of aircraft and spacecraft hull structures was developed using a distributed fibre-optic sensor system. It is shown that such systems can provide vital information for ensuring safety at aerospace facilities. Green's function for a single sensor is found, and its features under the pulsed influence of acoustic emission signals are studied. The sensor system's ability to estimate the coordinates and parameters of acoustic emission signals has been determined. Experimental research has confirmed the system's ability to detect radiation signals even under specific interference conditions. A key disadvantage is the difficulty of integrating distributed fibre-optic sensors into existing aerospace designs, which may limit their widespread use and require significant modifications during implementation.
In [24], challenges focused on flexibility and scalability are explored when integrating sensors for Industry 4.0 functions into manufacturing systems. These systems use reconfigurable machines with intelligent actuators connected through a single electromechanical interface. An adaptive sensor integration unit architecture is presented that supports interleaved communication protocols and reduces the number of signal lines and the need for protocol conversion units. A prototype system using dynamic partial FPGA reconfiguration has been demonstrated to be effective in an industrial environment. The key disadvantages are the complexity and high cost of developing and implementing a dynamic partial reconfiguration FPGA system and the need for specialized knowledge to operate and maintain it.
In [25], the multi-stage supercharging process is investigated as an effective method for improving the engine's volumetric efficiency at high altitudes, where the intercooler must withstand high thermal loads caused by high levels of supercharging. However, existing research on intercooler performance at high altitudes remains limited, and additional work is required to optimize its performance in actual flight conditions.
The research in [26] aims to advance knowledge on modifying engines to run on ammonia in order to promote decarbonization, demonstrating that compression ignition engines can efficiently burn ammonia using spark plugs. The disadvantage is that existing research is limited and the available experimental data are scattered without careful experimental design, increasing the uncertainty of the results.
Thus, the data integration methods used in monitoring and control systems for helicopter TE operation are an integral part of modern technology [23,24,25,26] for increasing the safety and efficiency of aircraft operation. The development and implementation of such methods require an interdisciplinary approach and recent advances in signal processing [27,28], mathematical modelling [29,30] and artificial intelligence [31,32]. One of these technologies is the neural network approach [33,34], which has several unique advantages over traditional methods [35,36].
Firstly, neural network algorithms can efficiently process large amounts of data and identify complex dependencies between various parameters, which is impossible when using classical signal processing methods. Neural networks can be trained on large amounts of data, which allows them to adapt to changing operating conditions and improve the accuracy of engine prediction [37].
Secondly, neural network models are highly resistant to noise and data failures, which is especially important in flight conditions, where data may be subject to various external influences. Thanks to the ability to self-train and adapt, neural networks can effectively compensate for distortions and provide more accurate diagnostic results [38].
In addition, neural network approaches make it possible to implement more complex and accurate mathematical models that consider nonlinear dependencies between parameters and their changes over time. This provides a more in-depth analysis and more accurate prediction of the helicopter TE operational status, which increases the reliability and safety of helicopter operation [39].
Introducing neural network technology into helicopter TE control and diagnostic systems is a promising development direction that opens up new opportunities for increasing aircraft operations efficiency and reliability.
The proposed method for neural network integration of helicopter TE thermogas-dynamic parameter sensor signals represents a significant improvement over traditional methods due to the use of a dynamic compensation circuit (real-time adjustment of sensor signals to improve control system operation) and a neural network for integrating the closed parameter control loops. Dynamic compensation with an adaptive noise suppression device effectively filters noise and interference while preserving the essential characteristics of the helicopter TE parameter sensor signals, improving the control system's accuracy and reliability.
The goal of this work is to develop a method for integrating the signals coming in real time from the helicopter TE thermogas-dynamic parameter sensors and correcting them if they contain noise or distortion, so as to ensure the accuracy and reliability of the sensor readings and thereby maintain safe and efficient helicopter control. To reach this goal, the following tasks must be solved:
  • Develop a diagram for integrating signals from the helicopter TE thermogas-dynamic parameter sensors based on the filtering method.
  • Develop a neural network that implements the diagram for integrating signals from the helicopter TE thermogas-dynamic parameter sensors using the filtering method.
  • Develop a neural network training algorithm.
  • Analyse and pre-process the signals from the helicopter TE thermogas-dynamic parameter sensors.
  • Conduct a computational experiment on filtering the helicopter TE thermogas-dynamic parameter sensor signals (using the gas–generator rotor rpm signal as an example).
  • Evaluate the effectiveness of the obtained results according to efficiency metrics (efficiency coefficient, quality coefficient, accuracy, recall, precision, F1-score, etc.).
  • Calculate the type I and type II errors and compare the obtained results with known analogues.

2. Materials and Methods

2.1. Diagram Development for Integrating Signals from Helicopter TE Thermogas-Dynamic Parameter Sensors Using the Filtering Method

It is known [40,41,42] that the onboard system for monitoring helicopter TE parameters uses the following sensors: the D-2M (records the gas–generator rotor rpm), 14 dual T-102 thermocouples (record the gas temperature in front of the compressor turbine), and the D-1M (records the free turbine rotor speed). These sensors transmit the recorded parameters to the onboard instrument panel, providing the pilot with important information about the engine's condition. However, all of these sensors are subject to noise, which reduces the accuracy and reliability of the recorded data. This requires neural network aggregation and filtering methods for signal processing, to reduce the influence of noise and increase the reliability of the information displayed on the instrument panel.
In the first stage, data reliability is analyzed, which makes it possible to identify and correct anomalies in the data received from the helicopter turboshaft engine sensors. From the helicopter TE sensor readings at times t1, t2, …, tn, the thermogas-dynamic parameter time series are formed:
$$N_{TC} = \left\{ n_{TC\,1}, n_{TC\,2}, \ldots, n_{TC\,n} \right\}, \quad T_G = \left\{ T^*_{G\,1}, T^*_{G\,2}, \ldots, T^*_{G\,n} \right\}, \quad N_{FT} = \left\{ n_{FT\,1}, n_{FT\,2}, \ldots, n_{FT\,n} \right\},$$
where nTC 1, nTC 2, …, nTC n are the gas–generator rotor rpm values at times t1, t2, …, tn; $T^*_{G\,1}$, $T^*_{G\,2}$, …, $T^*_{G\,n}$ are the values of the gas temperature in front of the compressor turbine at times t1, t2, …, tn; and nFT 1, nFT 2, …, nFT n are the free turbine rotor speed values at times t1, t2, …, tn.
The interquartile range (IQR) method is used [43] to remove outliers from the helicopter TE thermogas-dynamic parameters data:
$$IQR = Q_3 - Q_1, \quad BL = Q_1 - 1.5 \cdot IQR, \quad TL = Q_3 + 1.5 \cdot IQR,$$
where Q1 and Q3 are the 1st and 3rd quartiles, respectively, BL is the lower limit, and TL is the upper limit.
The IQR is a statistical tool used to identify and remove outliers from a data set, which is especially important when the helicopter TE thermogas-dynamic parameter time series are analysed. The IQR is defined as the difference between the 3rd quartile Q3 and the 1st quartile Q1, the values below which 75% and 25% of the data lie, respectively. Calculating the IQR allows the lower bound BL and the upper bound TL to be set. Data falling outside these boundaries are considered outliers: anomalous values that can skew the analysis. Removing outliers helps ensure data reliability and accuracy, which is critical for reliable monitoring and diagnosis of helicopter health, preventing errors and improving operational safety.
To apply the IQR for eliminating abnormal data during preprocessing, Q1 and Q3 are calculated for each data set. Then, values below BL or above TL are excluded from the analysis. Once outliers are removed, further processing and analysis can be carried out. This helps ensure data reliability and accuracy, which is critical for reliable monitoring and diagnostics of the helicopter TE operational status, preventing errors and increasing operational safety.
The reference range is based on statistical measures of central tendency and variability used to determine normal and anomalous values in the helicopter TE thermogas-dynamic parameter data sets recorded by sensors under onboard conditions. In particular, it applies the concept of quartiles, which divide the ordered data into four equal parts. The 1st quartile Q1 and the 3rd quartile Q3 are the basis for determining the IQR, which is the difference between Q3 and Q1. This range covers the middle 50% of the data and provides a measure of the value spread.
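As an illustration, a minimal Python sketch of this IQR-based outlier screening according to Equation (2) is given below; the random sample is only a placeholder for a recorded thermogas-dynamic parameter series, not data from Table 2, Table 3 or Table 4.

```python
import numpy as np

# Sketch of the IQR-based outlier removal in Equation (2); the sample is a
# random placeholder for a recorded thermogas-dynamic parameter series.
x = np.random.default_rng(9).normal(loc=95.0, scale=1.0, size=256)

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
bl, tl = q1 - 1.5 * iqr, q3 + 1.5 * iqr      # lower and upper bounds BL, TL
x_clean = x[(x >= bl) & (x <= tl)]           # keep only values inside the bounds
```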
To standardize the helicopter TE thermogas-dynamic parameter values, eliminating differences in their ranges and facilitating subsequent analysis, these values are brought to a single scale using normalization:
$$n_{TC}^{norm}(t) = \frac{n_{TC}(t) - \mu_{n_{TC}}}{\sigma_{n_{TC}}}, \quad T_G^{*\,norm}(t) = \frac{T_G^*(t) - \mu_{T_G^*}}{\sigma_{T_G^*}}, \quad n_{FT}^{norm}(t) = \frac{n_{FT}(t) - \mu_{n_{FT}}}{\sigma_{n_{FT}}},$$
where µ(•) and σ(•) are the average value and standard deviation of the helicopter TE thermogas-dynamic parameters nTC, $T_G^*$ and nFT, respectively.
Helicopter TE thermogas-dynamic parameter data that exceed the established BL and TL are considered anomalies, since they demonstrate significant deviations from the expected values. These deviations may indicate serious problems such as hardware faults or sensor malfunctions. Anomalous data require special attention and analysis, as they can lead to erroneous conclusions about the system state and, as a result, to potentially dangerous situations. Identifying and correcting such anomalies is critical to maintaining the reliability and safety of helicopter TE operation.
To do this, for each i-th helicopter TE thermogas-dynamic parameter xi(t) and each time moment t, the anomaly indicator ai(t) is calculated as:
$$a_i(t) = \begin{cases} 1, & \text{if } x_i(t) < BL \ \text{ or } \ x_i(t) > TL, \\ 0, & \text{if } BL \le x_i(t) \le TL. \end{cases}$$
Next, the deviation di(t) is calculated for each i-th helicopter TE thermogas-dynamic parameter xi(t) and each time moment t:
$$d_i(t) = \frac{x_i(t) - \mu_i}{\sigma_i}.$$
The value di(t) shows by how many standard deviations the measured value xi(t) differs from the average value μi. If ai(t) = 1 for any helicopter turboshaft engine thermogas-dynamic parameter (nTC, $T_G^*$, nFT), an alarm is generated about a possible sensor malfunction or deviation in the system operation.
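The following short sketch combines the normalization (3), deviation (5) and anomaly indicator (4); the data and the IQR bounds are placeholder assumptions, and the IQR step from the previous sketch is repeated inline for self-containment.

```python
import numpy as np

# Sketch combining normalization (3), deviation (5) and anomaly indicator (4);
# x, bl and tl are placeholder assumptions, not sensor data from the article.
def preprocess(x, bl, tl):
    mu, sigma = x.mean(), x.std()
    x_norm = (x - mu) / sigma                # Eq. (3): single-scale normalization
    d = (x - mu) / sigma                     # Eq. (5): deviation (coincides with x_norm here)
    a = ((x < bl) | (x > tl)).astype(int)    # Eq. (4): anomaly indicator a_i(t)
    return x_norm, d, a

x = np.random.default_rng(10).normal(loc=95.0, scale=1.0, size=256)
q1, q3 = np.percentile(x, [25, 75])
bl, tl = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
x_norm, d, alarm = preprocess(x, bl, tl)     # alarm[i] = 1 triggers a sensor warning
```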
The proposed mathematical model makes it possible to detect sensor data anomalies based on the parameters' statistical characteristics. Using average values and standard deviations to determine the deviations ensures reliable anomaly detection within the operating boundaries of the helicopter TE sensors. This, in turn, is closely related to the helicopter TE control loops (Figure 1) [44], since accurate sensor data are critical for the proper functioning of the engine control systems. Reliable detection and correction of anomalies ensure correct operation of the control loops, allowing a timely response to changes in engine operation and maintaining optimal operating conditions, which increases overall flight safety and efficiency.
It is worth noting that feedback has been introduced into the control loop (Figure 1), which plays a key role in maintaining the stability and accuracy of helicopter TE parameter control. Feedback allows the control system to correct deviations in real time using sensor data. This improves the reliability and safety of engine operation, ensuring optimal operating parameters and preventing emergencies.
The helicopter TE parameter regulator in the control loop (Figure 1), connected to the fuel-metering regulator, acts as a filter. A fuel-metering regulator is present in the control loop to ensure accurate and timely regulation of the fuel supply, which is necessary to maintain optimal engine operation and to respond quickly to changes in operating conditions (Figure 1) [44,45].
During the identification process, the parameters of all elements of the closed control loop change in real time (Figure 1). To ensure the desired behaviour, dynamic compensation of the helicopter TE parameter regulators [46] is carried out by replacing them with regulators of a similar structure configured in the desired way (Figure 2) [44,47,48].
In Figure 2, the compensator consists of the transfer function of the helicopter TE parameter regulator (nTC, $T_G^*$, nFT) with the desired settings $W_{reg}^*$ and a transfer function $W_{reg}^{-1}$ compensating the helicopter TE parameter regulator (nTC, $T_G^*$, nFT); the customized model consists of the transfer functions of the helicopter TE parameter regulator Wreg, the fuel-metering regulator WFMU, and the engine model.
The aim of dynamic compensation is to tune the system to provide the desired behaviour over the entire operating range and to minimize the static control error [49]. Transfer functions are used to describe the system. The closed-loop transfer function with a compensator and a tunable model can achieve zero static error and optimal control [50,51,52].
The desired system behaviour over the entire operating range is achieved by tuning the controllers. The helicopter TE customizable model structure optimizes them for the symmetrical operating mode [47,53]. In this case [47,54], zero static errors are ensured. For an open-loop system configured for the symmetrical mode, the transfer function has the form [55]:
$$W_1(p) = \frac{4 \cdot T_\mu \cdot p + 1}{8 \cdot T_\mu^2 \cdot p^2 \cdot \left( T_\mu \cdot p + 1 \right)},$$
where Tμ is a small uncompensated time constant, and p is the Laplace operator.
During dynamic compensation, the current controller Wreg is replaced by a controller with the desired settings $W_{reg}^*$, with the inverse transfer function $W_{reg}^{-1}$ added to compensate for the current controller. The overall closed-loop system transfer function, taking dynamic compensation into account, is then determined as Wclosed(p) = WFMU(p)·WTE(p)·C(p), where $C(p) = W_{reg}^* \cdot W_{reg}^{-1}$. The Wreg and WFMU controllers are tuned to minimize static errors and provide the required system dynamics to ensure the desired behaviour over the entire operating range.
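To illustrate how the compensator C(p) reshapes the closed loop, the symbolic sketch below uses SymPy with simple first-order placeholder transfer functions; the regulator, fuel-metering unit and engine models are assumptions for illustration only, not the actual TV3-117 models.

```python
import sympy as sp

# Symbolic sketch of the dynamic compensation idea. The transfer functions below
# are illustrative first-order placeholders (assumptions), not the real models.
p, T_mu, K = sp.symbols('p T_mu K', positive=True)

W_reg = K / (T_mu * p + 1)               # current regulator (assumed form)
W_reg_star = 2 * K / (T_mu * p + 1)      # regulator with desired settings (assumed)
W_FMU = 1 / (sp.Rational(1, 10) * p + 1) # fuel-metering regulator (assumed)
W_TE = 1 / (p**2 + p + 1)                # engine model (assumed)

# Compensator C(p) = W_reg* * W_reg^(-1): cancels the current regulator
# and imposes the desired one.
C = sp.simplify(W_reg_star / W_reg)
W_closed = sp.simplify(W_FMU * W_TE * C)  # W_closed(p) = W_FMU(p)*W_TE(p)*C(p)

# Symmetrical-optimum open-loop transfer function W1(p) from Equation (6)
W1 = (4 * T_mu * p + 1) / (8 * T_mu**2 * p**2 * (T_mu * p + 1))
print(C, W_closed, sp.simplify(W1), sep='\n')
```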
It is worth considering that various types of interference are possible during helicopter TE operation, which can significantly affect the accuracy of the data received from the sensors. This interference can occur due to external factors, such as electromagnetic fields, vibrations, or sudden changes in the environment, as well as internal factors, including wear of sensor components or electronic failures. Interference makes it difficult to read parameters correctly, leading to incorrect operation of the engine control circuits, increasing the risk of errors in system operation and reducing the overall reliability and safety of helicopter operation. Therefore, this work provides filtering and data processing methods that can effectively eliminate or minimize the influence of interference.
In the context of helicopter TE operation, where the reliability of the sensor data for the recorded parameters (nTC, $T_G^*$, nFT) plays a decisive role, it is crucial to consider possible interference that can distort the received signals. To combat this interference, it is effective to use an adaptive interference suppression device that applies filtering and signal processing methods (Figure 3). This device is highly adaptable to changing operating conditions and helps minimize the impact of interference on data quality. The operation of such a device involves passing signal components through a reference input, where the signal is compared with a reference value and then corrected or suppressed depending on the degree of deviation [56]. This approach allows for effective control and data management, ensuring reliable operation of the helicopter TE control loops and increasing overall flight safety.
Figure 3 shows that s(t) is the useful signal that needs to be restored; n0(t) is the noise added to the useful signal at the main input; d(t) = s(t) + n0(t) is the signal at the main input (contaminated with noise); x(t) is the noise correlated with n0(t) at the reference input; y(t) is the output of the adaptive filter, which tries to predict n0(t) based on x(t); and e(t) is the error signal. According to Figure 3, it is assumed that the filter has a nonlinearity, which is represented as a function f(•) acting on the output of a linear filter with a transfer function H(p). Thus, the filter output y(t) is expressed as:
$$y(t) = f\left( L^{-1}\left\{ H(p) \cdot X(p) \right\} \right),$$
where X(p) is the Laplace transform of the input signal x(t) of the helicopter TE recorded thermogas-dynamic parameters (nTC, $T_G^*$, nFT), and $L^{-1}$ is the inverse Laplace transform.
The error signal is defined as the difference between the signal at the main input and the nonlinear filter output, that is:
e(t) = d(t) − y(t).
Adaptation of the nonlinear filter weights is described by a generalized version of the LMS algorithm that takes the nonlinearity into account. Assuming that w(t) is the nonlinear filter parameter vector, the parameter update is defined as:
$$w(t+1) = w(t) + \mu \cdot \frac{\partial e(t)}{\partial w(t)},$$
where $\frac{\partial e(t)}{\partial w(t)}$ is the gradient of the error signal with respect to the filter parameters, and µ is the training rate.
Models (7)–(9) describe the adaptive noise suppression process using a nonlinear filter and transfer functions, which makes it possible to effectively suppress noise and restore the useful signal s(t) under complex nonlinear influences.
Models (7)–(9) form an adaptive system capable of effectively suppressing interference and extracting the useful signal s(t) from the complex nonlinear data characteristic of the helicopter TE thermogas-dynamic parameters. Equation (7) describes the nonlinear filter output, which is obtained in two stages: the input signal X(p), transformed into the Laplace domain, is passed through a linear filter with a transfer function H(p), and the linear filtering result is then subjected to a nonlinear transformation using the function f(•). Equation (8) defines the error signal as the difference between the desired signal d(t) and the nonlinear filter output signal y(t). Equation (9) describes the nonlinear filter parameter adaptation algorithm based on the modified LMS algorithm.
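For illustration, the sketch below implements a basic linear LMS noise canceller in the Figure 3 configuration; the nonlinearity f(•) and the transfer function H(p) of Equations (7)–(9) are omitted for brevity, and all signals are synthetic assumptions.

```python
import numpy as np

# Minimal sketch of the adaptive noise canceller in Figure 3 using a plain linear
# LMS filter; the waveform, noise model and filter settings are assumptions.
rng = np.random.default_rng(0)
n = 2000
t = np.arange(n)
s = np.sin(2 * np.pi * t / 200)                   # useful signal s(t) (assumed)
x = rng.normal(0, 1, n)                           # reference noise x(t)
n0 = np.convolve(x, [0.6, 0.3, 0.1], 'same')      # noise n0(t) correlated with x(t)
d = s + n0                                        # main input d(t) = s(t) + n0(t)

taps, mu = 8, 0.01                                # filter length and training rate
w = np.zeros(taps)
e = np.zeros(n)                                   # e(t) approximates the restored s(t)
for k in range(taps, n):
    xk = x[k - taps:k][::-1]                      # most recent reference samples
    y = w @ xk                                    # filter output y(t), estimate of n0(t)
    e[k] = d[k] - y                               # error signal e(t) = d(t) - y(t)
    w += mu * e[k] * xk                           # LMS weight update
```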
Thus, the regulator of the helicopter TE parameters nTC, $T_G^*$ and nFT plays the role of an interference suppression device. Therefore, the dynamic compensation circuit in the closed loops regulating the helicopter TE parameters takes the form shown in Figure 4. The sensors registering nTC, $T_G^*$ and nFT, respectively, thus include the helicopter TE parameters in three identical control loops (see Figure 4).
The transition to a unified integration diagram using the filtering method (Figure 5) makes it possible to integrate data from the three identical control loops for the helicopter TE parameters, providing more accurate and reliable control.
This approach allows information from various sensors to be combined into one central control unit, where the data are processed using filtering techniques to eliminate noise and improve measurement accuracy.
Thus, the data integration system provides more reliable and accurate control of the helicopter TE parameters, increasing engine performance and durability by optimizing the operation of the control systems. This approach ensures engine stability under various flight modes and external conditions.
The key to the resulting helicopter TE parameter control loop (see Figure 5) is the development of helicopter TE parameter regulators that act as adaptive filters. Using a neural network controller acting as a filter is advisable due to its ability to adapt to diverse and dynamic operating conditions [56]. Neural network filters have the unique ability to train on available data and automatically adjust their operation to changes in the input signals [57]. This allows them to effectively consider complex nonlinear relationships between parameters and to quickly adapt to new conditions without manual reconfiguration. The beneficial qualities mentioned, such as the ability to adapt to diverse and dynamic conditions and to automatically adjust to changes in input signals, are characteristic of neural networks in general. However, achieving these benefits depends on the specific implementation and training of the neural network used in this context.
Thus, the key task is the choice of the neural network architecture and structure, determining the activation functions and the number of hidden layers, which ensures monitoring of the engine operational status with type I and type II error probabilities at a minimum level.

2.2. Neural Network Architecture Development

To solve this task, a multilayer neural network has been developed that processes input data containing useful signals and noise, extracts the useful signals and provides feedback (Figure 6). The developed neural network input layer contains 6 neurons: 1 is the helicopter TE gas–generator rotor rpm nTC signal; 2 is the interference $n_{n_{TC}}$ of the nTC signal; 3 is the signal of the helicopter TE gas temperature in front of the compressor turbine $T_G^*$; 4 is the interference $n_{T_G^*}$ of the $T_G^*$ signal; 5 is the helicopter TE free turbine rotor speed nFT signal; and 6 is the interference $n_{n_{FT}}$ of the nFT signal. In the 1st hidden layer, the parameter signals (inputs 1, 3, 5) are summed with their noise (inputs 2, 4, 6). The 2nd hidden layer performs dynamic compensation. The following hidden layers filter the resulting signal, extracting the useful signal. The output layer contains feedback.
The developed neural network input layer does not perform any data transformation but is intended to receive the initial data (helicopter TE parameters nTC, $T_G^*$, nFT) from the sensors, which the network's subsequent layers will process. In this case, there are 6 input neurons, each of which receives one of the input signals: x1 is the helicopter TE gas–generator rotor rpm nTC signal, x2 is the interference $n_{n_{TC}}$ of the nTC signal, x3 is the signal of the helicopter TE gas temperature in front of the compressor turbine $T_G^*$, x4 is the interference $n_{T_G^*}$ of the $T_G^*$ signal, x5 is the helicopter TE free turbine rotor speed nFT signal, and x6 is the interference $n_{n_{FT}}$ of the nFT signal. Each input neuron passes its signal on unchanged, that is, yi = xi.
The neural network's 1st hidden layer sums the signals with their corresponding noise. Each neuron in this layer processes a signal–noise pair (the noisy signal is formed by adding random noise to the pure signal, which corresponds to actual helicopter flight conditions). This is necessary to prepare the data for subsequent dynamic compensation and filtering. The combined signals allow the model to be trained more efficiently to remove noise and extract the useful data. Thus,
$$h_1 = x_1 + x_2, \quad h_2 = x_3 + x_4, \quad h_3 = x_5 + x_6,$$
where h1, h2, h3 are the outputs of the first hidden layer neurons.
Based on the above, the neural network's 1st hidden layer has three neurons. This layer passes the signals through without activation, since simple summation is required.
The 2nd hidden layer performs the dynamic compensation task. This layer corrects the signals to compensate for noise and dynamic changes and to improve the data quality before filtering in subsequent layers. In this layer, neurons are trained to adjust the signals taking into account their dynamics and noise. This is achieved by applying trainable weights and activation functions to the summed signals, that is:
$$z_1 = f\left( w_{11} \cdot h_1 + w_{12} \cdot h_2 + w_{13} \cdot h_3 + b_1 \right), \quad z_2 = f\left( w_{21} \cdot h_1 + w_{22} \cdot h_2 + w_{23} \cdot h_3 + b_2 \right), \quad z_3 = f\left( w_{31} \cdot h_1 + w_{32} \cdot h_2 + w_{33} \cdot h_3 + b_3 \right),$$
where z1, z2, and z3 are the second hidden layer neuron outputs, wij are the weights trained during the network training process, bi are the biases trained during the network training process, and f(•) is the activation function.
For the 2nd hidden layer, selecting the ReLU (Rectified Linear Unit) activation function is advisable because ReLU effectively copes with dynamic changes in signals and noise due to its ability to pass positive values unchanged and to null out negative ones. This allows neurons to adapt to different levels of input signals and quickly adjust for dynamic changes, which is important for effective noise compensation. In addition, ReLU helps avoid the gradient fading problem, improving the deep networks training and providing more stable and faster model convergence, which is critical for tasks that require accurate dynamic compensation.
Note 1. A critical drawback of the ReLU activation function is the problem of “dying” neurons when input values are negative and the outputs become zero. In this case, neurons stop participating in training since their weight gradient becomes zero. This can cause a significant number of neurons in the network to remain inactive, reducing the model’s overall training ability and degrading its performance. Later in the work, this problem will be solved by modifying the ReLU function.
The 3rd hidden layer (Filtering Layer 1) performs the 1st stage of filtering the signals resulting from dynamic compensation. This layer is designed to further reduce noise and improve the quality of the desired signal. The 3rd hidden layer applies trainable weights and activation functions to the input signals to filter them and extract useful information. Mathematically, the neural network's 3rd hidden layer is represented as follows:
$$f_1 = g\left( w_{11} \cdot z_1 + w_{12} \cdot z_2 + w_{13} \cdot z_3 + b_1 \right), \quad f_2 = g\left( w_{21} \cdot z_1 + w_{22} \cdot z_2 + w_{23} \cdot z_3 + b_2 \right), \quad f_3 = g\left( w_{31} \cdot z_1 + w_{32} \cdot z_2 + w_{33} \cdot z_3 + b_3 \right),$$
where f1, f2, and f3 are the 3rd hidden layer neuron outputs, wij are the trainable weights applied to the input signals, bi are the trainable biases, and g(•) is the activation function that helps neurons process input signals nonlinearly, which improves the model's ability to isolate useful signals and eliminate noise.
For the 3rd hidden layer, choosing the ReLU activation function is advisable because it can handle noise and extract useful signals efficiently. ReLU allows neurons to only pass positive values through while nulling out negative ones, which helps eliminate unnecessary noise and improves overall signal quality. In addition, ReLU speeds up training by eliminating the gradient decay issues associated with other activation functions and allows the network to better model complex dependencies in data. This makes ReLU ideal for the 1st stage of filtering, effectively extracting useful information from the corrected signals.
The 4th hidden layer performs the 2nd stage of filtering the resulting signals, which follows the 1st filtering stage in the 3rd hidden layer. This layer further improves the quality of the wanted signal by suppressing the remaining noise and making the signal more distinguishable from the background. The 4th hidden layer also applies trainable weights and activation functions to the input signals to further filter the data and improve the quality of the desired signal. Mathematically, the neural network's 4th hidden layer is represented as follows:
$$g_1 = h\left( w_{11} \cdot f_1 + w_{12} \cdot f_2 + w_{13} \cdot f_3 + b_1 \right), \quad g_2 = h\left( w_{21} \cdot f_1 + w_{22} \cdot f_2 + w_{23} \cdot f_3 + b_2 \right), \quad g_3 = h\left( w_{31} \cdot f_1 + w_{32} \cdot f_2 + w_{33} \cdot f_3 + b_3 \right),$$
where g1, g2, and g3 are the 4th hidden layer neuron outputs, wij are the trainable weights applied to the input signals, bi are the trainable biases, and h(•) is the activation function.
For the 4th hidden layer, choosing the ReLU activation function is also advisable due to its ability to effectively suppress unnecessary negative values, thereby reducing the noise impact and preserving the signal’s positive aspects. This allows the network to more efficiently identify and store useful features present in the data, which is important for the task of integrating and improving signal quality.
Thus, the second, third and fourth hidden layers each have three neurons.
The neural network output layer predicts or classifies the input data according to the given task. In this context, the output layer predicts the system parameters or characteristics based on the input signals after they have been processed and filtered through the hidden layers. Given the presence of feedback, this layer must also take into account the error received in the previous stages and adjust the network outputs according to this error. For the output neuron, the following expression holds:
$$y = r\left( v_1 \cdot g_1 + v_2 \cdot g_2 + v_3 \cdot g_3 + c \right),$$
where y is the predicted output for the helicopter TE parameter (nTC, $T_G^*$ and nFT), vi are the output layer trainable weights, c is the trainable bias, and r(•) is the activation function.
Taking into account feedback, the error E at the network output can be defined as the difference between the predicted output y and the expected output d, that is:
E = yd.
The backpropagation algorithm is applied to update the weights vi and bias c of the output layer. The weights and bias are adjusted in the direction opposite to the gradient of the loss function with respect to these parameters. Thus, the weights vi and bias c of the output layer are updated according to gradient descent as:
$$v_i = v_i - \alpha \cdot \frac{\partial E}{\partial v_i}, \quad c = c - \alpha \cdot \frac{\partial E}{\partial c},$$
where α is the adaptive training rate, which changes adaptively depending on the current gradient and the history of weight updates and which, according to the AdaGrad (Adaptive Gradient) algorithm, is defined as:
$$\alpha_t = \frac{\eta}{\sqrt{G_t + \epsilon}},$$
where ϵ is a small constant added for numerical stability (ϵ ≈ 10−8 is assumed), αt is the training rate at time t, η is the initial training rate, and Gt is the accumulated sum of squared gradients up to time t, which is updated at each training iteration as follows:
$$G_t = G_{t-1} + \left( \nabla E \right)^2,$$
where ∇E is the loss function gradient over the model parameters.
In the developed neural network, where the main task is to predict the system parameters or characteristics based on the processed data, the use of a linear activation function at the output layer is appropriate. This is especially relevant since the output values are the continuous numerical values of the helicopter TE parameters nTC, $T_G^*$, and nFT. A linear activation function allows the neural network to flexibly adapt to different ranges of output parameter values without limiting them to any specific range.
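A minimal NumPy sketch of the forward pass described by Equations (10)–(14) is given below; the weights are random placeholders rather than trained values, and standard ReLU is used in the hidden layers for brevity instead of the Smooth ReLU introduced in the next subsection.

```python
import numpy as np

# Sketch of the forward pass of the network in Figure 6 (Equations (10)-(14)).
# Weight shapes follow the text (3 neurons per hidden layer, 1 output); the
# weights below are randomly initialized placeholders, not trained values.
rng = np.random.default_rng(1)

def relu(v):
    return np.maximum(0.0, v)

W2, b2 = rng.normal(size=(3, 3)), np.zeros(3)   # dynamic compensation layer
W3, b3 = rng.normal(size=(3, 3)), np.zeros(3)   # filtering layer 1
W4, b4 = rng.normal(size=(3, 3)), np.zeros(3)   # filtering layer 2
v, c = rng.normal(size=3), 0.0                  # linear output layer

def forward(x):
    """x = [n_TC, noise_TC, T_G*, noise_TG, n_FT, noise_FT] (normalized)."""
    h = np.array([x[0] + x[1], x[2] + x[3], x[4] + x[5]])  # Eq. (10): signal + noise
    z = relu(W2 @ h + b2)                                  # Eq. (11): compensation
    f = relu(W3 @ z + b3)                                  # Eq. (12): filtering 1
    g = relu(W4 @ f + b4)                                  # Eq. (13): filtering 2
    return v @ g + c                                       # Eq. (14) with linear r(.)

print(forward(np.array([0.2, 0.05, 0.1, -0.02, 0.3, 0.01])))
```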

2.3. The ReLU Activation Function Modification

This work proposes the use of the innovative Smooth ReLU activation function, developed by the authors' team, which is a modification of the ReLU function. The main aim of modifying the ReLU function is to create a smoother and more continuous activation function to improve the convergence process and training stability. The proposed modification can significantly affect neural network efficiency, especially in deep training problems, where stability and convergence speed play a key role. The Smooth ReLU activation function is described by the expression:
$$f(x) = \begin{cases} x, & \text{if } x > 0, \\ \dfrac{1}{1 + e^{-\gamma \cdot x}}, & \text{if } x \le 0, \end{cases}$$
where γ is a parameter that determines the function's "level of smoothness". For x > 0, the function behaves like a regular ReLU, and for x ≤ 0 it transitions smoothly via a sigmoid function. This avoids sudden gradient changes and can speed up neural network training. The proposed Smooth ReLU activation function retains the benefits of ReLU, such as a non-vanishing gradient for positive values, while adding smoothness for negative values. This can improve training, allowing for more stable and faster convergence.
Theorem 1. 
The Smooth ReLU function is continuous over the entire domain of definition.
Proof of Theorem 1. 
Let f(x) be the Smooth ReLU activation function defined according to (19). The Smooth ReLU activation function is continuous for x > 0 and x < 0, since for x > 0 the function f(x) = x is a linear function that is continuous over its entire domain, and for x < 0, $f(x) = \frac{1}{1 + e^{-\gamma \cdot x}}$ is a modified sigmoid function that is continuous over its entire domain.
To prove continuity at x = 0, it is necessary to show that the limit of f(x) as x tends to zero from the left is equal to the limit of f(x) as x tends to zero from the right, and that this limit is equal to the value of the function at x = 0. Consider the limit:
$$\lim_{x \to 0^-} f(x) = \lim_{x \to 0^-} \frac{1}{1 + e^{-\gamma \cdot x}},$$
in which, as x → 0, γ·x → 0 and $e^{-\gamma \cdot x} \to 1$, thus:
$$\lim_{x \to 0^-} \frac{1}{1 + e^{-\gamma \cdot x}} = \frac{1}{1 + 1} = \frac{1}{2}.$$
Consider the limit:
$$\lim_{x \to 0^+} f(x) = \lim_{x \to 0^+} x = 0.$$
Since the one-sided limits do not coincide, it is necessary to check the function value at the point x = 0. Let the value at the point x = 0 be set, for example, to $f(0) = \frac{1}{2}$. Continuity is then checked with the set value $f(0) = \frac{1}{2}$, that is:
$$\lim_{x \to 0^-} f(x) = \frac{1}{2}, \quad \lim_{x \to 0^+} f(x) = 0 \ne \frac{1}{2}.$$
Thus, the original function is not continuous at x = 0 under the given condition, and an adjustment is required to define continuity correctly. The Smooth ReLU function is then described by the expression:
$$f(x) = \begin{cases} x, & \text{if } x > 0, \\ 0, & \text{if } x = 0, \\ \dfrac{1}{1 + e^{-\gamma \cdot x}}, & \text{if } x < 0. \end{cases}$$
This function, like (19), is continuous for x > 0 and x < 0, and its behaviour at the point x = 0 is determined by:
$$\lim_{x \to 0^-} f(x) = \lim_{x \to 0^-} \frac{1}{1 + e^{-\gamma \cdot x}} = \frac{1}{2}, \quad \lim_{x \to 0^+} f(x) = 0.$$
And its value at point x = 0 is zero.
Thus, the adjustment assumes that the function is not smooth at the point 0 but remains continuous throughout its entire domain. The theorem is proven: the function f(x), defined with the correction in the form (24), is continuous over its entire domain. □
To research the neuron activation functions, it is imperative to analyze their derivatives. The derivative of an activation function allows us to estimate its rate of change in response to changes in the input data, which in turn helps optimize the neuron weight update process during neural network training. The derivative of the traditional ReLU neuron activation function f(x) = max(0, x) (Figure 7a) has the form:
$$f'(x) = \begin{cases} 1, & \text{if } x > 0, \\ 0, & \text{if } x \le 0. \end{cases}$$
The derivative of the proposed Smooth ReLU neuron activation function with correction (24) (Figure 7b) has the form:
$$f'(x) = \begin{cases} 1, & \text{if } x > 0, \\ 0, & \text{if } x = 0, \\ \dfrac{\gamma \cdot e^{-\gamma \cdot x}}{\left( 1 + e^{-\gamma \cdot x} \right)^2}, & \text{if } x < 0. \end{cases}$$
As can be seen from (26), (27), and Figure 7, the problem with the traditional ReLU function f(x) = max(0, x) is that its derivative is zero for all x ≤ 0. This can lead to “dead neurons” in the neural network when neurons stop updating due to the lack of gradient. The advantage of Smooth ReLU is that it always has a non-zero gradient for all values of x, including negative ones (except for x = 0). This avoids the problem of “dead neurons” and ensures more stable neural network training.
Thus, the proposed Smooth ReLU use with adjustment (24) is mathematically justified since it provides a smooth and continuous gradient throughout the definition domain, which can help improve the convergence and training efficiency of the model.
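The following sketch implements the corrected Smooth ReLU (24) and its derivative (27); the exponent sign e^(−γ·x) is an interpretation consistent with the limits used in the proof, and γ = 1 is an assumed default.

```python
import numpy as np

# Sketch of the corrected Smooth ReLU (24) and its derivative (27); the exponent
# sign e^(-gamma*x) is an assumption consistent with the limits in the proof.
def smooth_relu(x, gamma=1.0):
    x = np.asarray(x, dtype=float)
    out = np.where(x > 0, x, 1.0 / (1.0 + np.exp(-gamma * x)))
    return np.where(x == 0, 0.0, out)         # f(0) = 0 per correction (24)

def smooth_relu_grad(x, gamma=1.0):
    x = np.asarray(x, dtype=float)
    s = np.exp(-gamma * x)
    grad = np.where(x > 0, 1.0, gamma * s / (1.0 + s) ** 2)
    return np.where(x == 0, 0.0, grad)        # gradient taken as 0 at x = 0

print(smooth_relu([-2.0, -0.5, 0.0, 1.5]))
print(smooth_relu_grad([-2.0, -0.5, 0.0, 1.5]))
```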

2.4. A Neural Network Training Algorithm Development

The work proposes an algorithm for training a neural network (Figure 6), consisting of the following stages:
1. Weight initialization: initially setting the neural network weight values before training begins. Proper weight initialization is important for efficient and fast network training, as it helps avoid convergence problems and achieve more accurate results.
For the developed neural network, which uses Smooth ReLU in the hidden layers, it is appropriate to use the He initialization method [58]. Weight initialization of the 1st hidden layer is not required, since this layer performs simple summation. The weights of the remaining layers of the neural network are initialized as follows:
$$W \sim N\left( 0, \frac{2}{n_i} \right),$$
where N(μ, σ2) is a normal distribution with mean μ and variance σ2, and ni is the number of neurons in the i-th layer of the neural network.
2. Forward propagation: passing the input data through all layers of the neural network to the output to obtain predictions; at each layer, the weighted sums of the input signals are calculated, to which an activation function is then applied. This process transforms the input data into the network output values. Forward propagation is carried out according to (10)–(15).
3. Backpropagation: training the neural network to minimize the prediction error by propagating the error calculated at the network output back through the layers and updating the network's weights and biases. The main purpose of backpropagation is to adjust the weights in such a way as to reduce the difference between the predicted and actual values. For each layer, the gradient of the error function with respect to its weights and biases is calculated as:
$$\delta_{out} = \frac{\partial E}{\partial y} = y - d, \quad \frac{\partial E}{\partial v_i} = \delta_{out} \cdot g_i, \quad \frac{\partial E}{\partial c} = \delta_{out}.$$
For the remaining layers (1st–4th hidden layers) of the neural network, similar calculations are given in Table 1.
4. Updating weights and biases: the process of changing the neural network's weights and biases over time to minimize the loss function. The accumulation of squared gradients is defined as:
$$G_{w_{ij}^{(m)}} = G_{w_{ij}^{(m)}} + \left( \frac{\partial E}{\partial w_{ij}^{(m)}} \right)^2,$$
where m = 2…4 indexes the neural network hidden layers (the 2nd, 3rd and 4th hidden layers, respectively).
Weights and biases are updated according to the expressions:
$$w_{ij}^{(m)} = w_{ij}^{(m)} - \frac{\eta}{\sqrt{G_{w_{ij}^{(m)}} + \epsilon}} \cdot \frac{\partial E}{\partial w_{ij}^{(m)}},$$
$$c = c - \frac{\eta}{\sqrt{G_c + \epsilon}} \cdot \frac{\partial E}{\partial c}.$$
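A compact sketch of the He initialization (28) and the AdaGrad-style update (30)–(32) for a single weight matrix is shown below; the shapes, learning rate and gradient values are illustrative assumptions.

```python
import numpy as np

# Sketch of He initialization (28) and the AdaGrad-style update (30)-(31) for one
# weight matrix; shapes, learning rate and the gradient are illustrative assumptions.
eta, eps = 0.01, 1e-8
W = np.random.default_rng(2).normal(scale=np.sqrt(2 / 3), size=(3, 3))  # He init, n_i = 3
G = np.zeros_like(W)                       # accumulated squared gradients

def adagrad_step(W, G, grad):
    """Apply Equations (30)-(31) to one parameter tensor."""
    G += grad ** 2                         # accumulate squared gradient
    W -= eta / np.sqrt(G + eps) * grad     # adaptive-rate update
    return W, G

grad = np.random.default_rng(3).normal(size=(3, 3))  # placeholder gradient dE/dW
W, G = adagrad_step(W, G, grad)
```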
Forward propagation, backpropagation, and updating of the weights and biases are repeated for all training examples and throughout all training epochs until the convergence criterion is reached.
For the task posed in this work of filtering interference and integrating signals from the helicopter TE thermogas-dynamic parameter sensors using the developed neural network (Figure 6), which implements the diagram for integrating the closed loops regulating the helicopter TE parameters using the filtering method (Figure 5), the stopping criterion is a balanced criterion combining improvement of accuracy on the validation set and control of neural network overfitting.
To create a balanced convergence criterion that takes into account both the improvement in accuracy on the validation set and the control of overfitting, a weighted average of the two criteria can be used:
$$C = \alpha \cdot \varepsilon + (1 - \alpha) \cdot O,$$
where ε is the accuracy (the increase in model accuracy on the validation set, for example, the percentage of correct predictions), O is the measure of overfitting control (the ratio of the error on the validation set to the error on the training set), and 0 ≤ α ≤ 1 is a coefficient reflecting the importance of increasing accuracy compared to controlling overfitting.
Note 2. An appropriate value for α can be chosen depending on the specific task requirements and preferences. For example, if increasing accuracy is more important than controlling for overfitting, then α might be chosen closer to 1. If controlling for overfitting is more important, then α might be chosen closer to 0.
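The balanced criterion (33) can be sketched as follows; the value of α, the literal interpretation of O as the validation-to-training error ratio, and the way C is monitored between epochs to stop training are illustrative assumptions.

```python
# Sketch of the balanced convergence criterion in Equation (33). O is taken
# literally as the validation-to-training error ratio; alpha and the stopping
# rule based on successive values of C are assumptions for illustration.
def balanced_criterion(val_accuracy, val_loss, train_loss, alpha=0.7):
    O = val_loss / max(train_loss, 1e-12)           # overfitting control measure
    return alpha * val_accuracy + (1 - alpha) * O   # C = alpha*eps + (1 - alpha)*O

# Example: track C each epoch and stop when it stabilizes between epochs.
history = [balanced_criterion(0.97, 0.010, 0.009),
           balanced_criterion(0.99, 0.006, 0.005)]
converged = abs(history[-1] - history[-2]) < 1e-3
```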
The scientific novelty of the proposed neural network training algorithm lies in integrating two convergence criteria (increasing accuracy on the validation set and controlling overfitting) into a single criterion that allows balancing the two, improving model accuracy and preventing overfitting. The difference from the traditional backpropagation algorithm is that the proposed method not only minimizes the loss function but also takes into account the error dynamics on the validation sample, which allows the model to maintain its generalization ability while achieving a certain level of accuracy. Additionally, the adaptive training rate complements this approach by allowing the model to adapt quickly and efficiently to changes in the data and training conditions, which improves its performance in practice.

3. Case Study

3.1. Analysis and Preliminary Processing Results for Initial Signals from Helicopter TE Thermogas-Dynamic Parameter Sensors

To conduct a computational experiment, data on the TV3-117 TE thermogas-dynamic parameters [59,60,61] were obtained, recorded on board a Mi-8MTV helicopter during flight: the gas–generator rotor rpm nTC at times t1…tn; the gas temperature in front of the compressor turbine $T_G^*$ at times t1…tn; and the free turbine rotor speed nFT at times t1…tn. The nTC, $T_G^*$, and nFT values are given in absolute units (Table 2, Table 3 and Table 4).
In the 1st stage of preliminary processing of the helicopter TE thermogas-dynamic parameter (nTC, $T_G^*$ and nFT) values, the homogeneity of the training samples is assessed (Table 2, Table 3 and Table 4). According to [59,60,61], the criterion by which the training sample homogeneity is determined is the Fisher–Pearson criterion, which is defined as [62]:
$$\chi^2 = \frac{N \cdot (N - 1)}{N - 2} \cdot \frac{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^3}{\left( \frac{1}{N} \cdot \sum_{i=1}^{N} \left( x_i - \bar{x} \right)^2 \right)^{\frac{3}{2}}},$$
where N = 256 is the training sample size, xi is the value of the i-th element of the training sample (Table 2, Table 3 and Table 4), and $\bar{x} = \frac{1}{N} \cdot \sum_{i=1}^{N} x_i$ is the training sample average value.
The significance level adopted in the work is 0.01, which means the probability of a type I error (erroneously rejecting a true null hypothesis) is 1%. That is, when the statistical test indicates a significant result, there is only a 1% chance that the result is due to chance, noise, or random variations in the data. This significance level indicates strict requirements for the results' reliability, which is especially important in the context of the accuracy and reliability of the helicopter TE thermogas-dynamic parameter sensor readings. The number of degrees of freedom is 1 (one parameter type in each training sample: nTC, $T_G^*$ or nFT). Thus, the critical value of the Fisher–Pearson test for one degree of freedom at a significance level of 0.01 is 6.6. The obtained values of the Fisher–Pearson criterion, $\chi^2_{n_{TC}} = 4.727$, $\chi^2_{T_G^*} = 4.645$, and $\chi^2_{n_{FT}} = 5.619$, are less than the critical value $\chi^2_{critical} = 6.6$, which indicates the homogeneity of the helicopter TE thermogas-dynamic parameter training samples.
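The homogeneity check can be sketched as follows; the statistic is coded exactly as written in Equation (34), and the random sample is only a placeholder for the Table 2, Table 3 and Table 4 data.

```python
import numpy as np

# Sketch of the homogeneity check with the Fisher-Pearson statistic as written in
# Equation (34); the random sample stands in for the article's Table 2-4 data.
def fisher_pearson(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    m2 = np.mean((x - x.mean()) ** 2)        # second central moment
    m3 = np.mean((x - x.mean()) ** 3)        # third central moment
    return n * (n - 1) / (n - 2) * m3 / m2 ** 1.5

sample = np.random.default_rng(4).normal(loc=95.0, scale=1.5, size=256)  # placeholder data
chi2 = fisher_pearson(sample)
# The article compares the statistic with the critical value 6.6 (alpha = 0.01, df = 1).
```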
To confirm the results of assessing the homogeneity of the training samples (see Table 2, Table 3 and Table 4) using the Fisher–Pearson criterion, an identical experiment was carried out according to [59,60,61] using the Fisher–Snedecor criterion, which is defined as [63]:
$$F = \frac{S_1^2}{S_2^2} = \frac{\frac{1}{n_1 - 1} \cdot \sum_{i=1}^{n_1} \left( x_i^{(1)} - \bar{x}_1 \right)^2}{\frac{1}{n_2 - 1} \cdot \sum_{i=1}^{n_2} \left( x_i^{(2)} - \bar{x}_2 \right)^2},$$
where n1 and n2 are the sizes of the 1st and 2nd training samples, $x_i^{(1)}$ and $x_i^{(2)}$ are the i-th elements of the 1st and 2nd training samples, respectively, and $\bar{x}_1$ and $\bar{x}_2$ are the average values of the 1st and 2nd training samples, respectively.
To calculate the Fisher–Snedecor criterion according to (35), the helicopter TE thermogas-dynamic parameter training samples (see Table 2, Table 3 and Table 4), consisting of 256 elements each, are randomly divided into two equal samples of 128 elements each, that is, n1 = n2 = 128. The significance level for the Fisher–Snedecor criterion is also taken as 0.01, and the number of degrees of freedom is 1, reflecting one type of helicopter TE parameter in each training sample: nTC, $T_G^*$ or nFT. Thus, the critical value of the Fisher–Snedecor test for one degree of freedom at a significance level of 0.01 is 6.6. The obtained values of the Fisher–Snedecor criterion, $F_{n_{TC}} = 4.727$, $F_{T_G^*} = 4.645$, and $F_{n_{FT}} = 5.619$, are less than the critical value $F_{critical} = 6.6$, which confirms the homogeneity of the helicopter TE thermogas-dynamic parameter training samples.
In the 1st stage of preliminary processing of the helicopter TE thermogas-dynamic parameter (nTC, $T_G^*$ and nFT) values, according to [59,60,61], the representativeness of the training and test samples is also assessed using cluster analysis, whose aim is to separate the input data set X = {x1, x2, …, xn} (see Table 2, Table 3 and Table 4) into k disjoint clusters, where k is a predetermined number of clusters (k = 8 is assumed based on [59,60,61]). Each cluster is a group of objects that are considered more similar to each other than to objects from other clusters. Taking into account the results of [64], the k-means cluster analysis method was applied, based on minimizing the sum of squared distances between cluster objects and their centroids (the j-th cluster centres) C = {μ1, μ2, …, μk}, where $\mu_j \in \mathbb{R}^d$.
For each value xi of the helicopter TE thermogas-dynamic parameters nTC, $T_G^*$ and nFT, the distance to all centroids is calculated and the object is assigned to the cluster with the nearest centroid according to the expression:
$$r_{ij} = \begin{cases} 1, & \text{if } j = \arg\min_{l} \left\| x_i - \mu_l \right\|^2, \\ 0, & \text{otherwise}, \end{cases}$$
where $j = \arg\min_{l} \left\| x_i - \mu_l \right\|^2$ means that the object xi belongs to the j-th cluster, with $r_{ij} \in \{0, 1\}$.
The k-means method minimizes the sum of squared distances between objects and their corresponding centroids. The objective function is presented as:
$$J = \sum_{j=1}^{k} \sum_{i=1}^{N} r_{ij} \cdot \left\| x_i - \mu_j \right\|^2.$$
For each j-th cluster, the centroid μj is updated as the average of all objects assigned to that cluster:
$$\mu_j = \frac{\sum_{i=1}^{N} r_{ij} \cdot x_i}{\sum_{i=1}^{N} r_{ij}}.$$
Calculations according to (36) and (38) are repeated until the object assignments to clusters stop changing (until convergence).
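A compact NumPy sketch of this k-means procedure (Equations (36)–(38)) is given below; the data are random placeholders for the parameter values and k = 8 as in the article.

```python
import numpy as np

# Sketch of the k-means procedure in Equations (36)-(38); the data below are
# random placeholders for the Table 2-4 parameter values, with k = 8.
def kmeans(X, k=8, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), k, replace=False)]           # initial centroids
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - mu[None]) ** 2).sum(-1), axis=1)  # Eq. (36)
        new_mu = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else mu[j]
                           for j in range(k)])              # Eq. (38)
        if np.allclose(new_mu, mu):                         # stop at convergence
            break
        mu = new_mu
    return labels, mu

X = np.random.default_rng(5).normal(size=(256, 1))          # placeholder 1-D sample
labels, centroids = kmeans(X)
```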
A random selection procedure was used to split the helicopter TE thermogas-dynamic parameter samples (see Table 2, Table 3 and Table 4) into training and test samples in a 2:1 ratio (67% and 33%, respectively, i.e., 172 and 84 elements). The cluster analysis of the helicopter TE thermogas-dynamic parameter (Table 2, Table 3 and Table 4) training samples revealed 8 classes (classes I…VIII), that is, eight groups are present in them, which indicates the similar composition of both the training and test samples (Figure 8).
The results obtained made it possible to determine the optimal sample sizes for the helicopter TE thermogas-dynamic parameters: the training sample is 256 elements (100%), the control sample is 172 elements (67% of the training sample), and the test sample is 84 elements (33% of the training sample).

3.2. The Developed Neural Network Training Results

In the 1st stage of training the developed neural network (see Figure 6), the influence of the number of training epochs (Figure 9, Table 5) on the final standard deviation is assessed; this deviation is the criterion for assessing the training quality (the loss function) and is defined as [59,60,61]:
$$E_{epoch} = \frac{1}{N} \cdot \sum_{i=1}^{N} \frac{1}{2} \cdot \sum_{k=1}^{n} \left( y_k - \hat{y}_k \right)^2.$$
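This metric can be coded directly from Equation (39); the arrays below are placeholder predictions and targets, not results from the article.

```python
import numpy as np

# Sketch of the training-quality metric in Equation (39); y_true and y_pred are
# placeholder arrays of shape (N examples, n outputs).
def epoch_loss(y_true, y_pred):
    return np.mean(0.5 * np.sum((y_true - y_pred) ** 2, axis=1))

y_true = np.random.default_rng(6).normal(size=(84, 1))      # assumed test targets
y_pred = y_true + np.random.default_rng(7).normal(scale=0.05, size=y_true.shape)
print(epoch_loss(y_true, y_pred))
```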
The results obtained indicate that 280 training epochs are sufficient to achieve the minimum value of Eepoch = 2.005. (Figure 9a). It is worth noting that after 280 epochs, the neural network training error increases. The neural network training convergence demonstrated it was trained for 1000 epochs (Figure 9b). It can be seen that almost immediately after 320 training epochs, the loss function decreases to its minimum value Eepoch = 2.005 and remains stable over 1000 training epochs. The slight increase in training error after epoch 280 is due to the overfitting phenomenon, where the model begins to overfit the training data rather than generalize to the new data. However, after epoch 330, the error is reduced again due to hyperparameter adjustments using regularization techniques that stabilize training.
The situation described above, where a slight increase in the training error is observed after a given epoch due to model overtraining, is discussed in detail in [65,66]. These sources provide various methods for handling this state, including adjusting hyperparameters and using regularization methods, aimed at stabilizing the training process and improving the model's generalization ability. It is important to note that the temporary increase in training error is not critical and can be effectively managed using appropriate techniques, allowing the model to continue training and achieve optimal results.
At the next stage of training the developed neural network (see Figure 6), its performance, accuracy (Figure 10) and loss (Figure 11) are determined over 280 training epochs. Accuracy indicates how well the model classifies the data, while loss reflects how well the model minimizes the difference between predicted and actual values.
As can be seen from Figure 10, accuracy reaches 0.995 (almost 100%). Moreover, as Figure 11 shows, the loss function does not exceed 0.025 (2.5%) at the beginning of training and decreases to 0.005 (0.5%) after 280 training epochs. Achieving an accuracy of 0.995 indicates that the model classifies the data with high accuracy, while the decrease in the loss function from 0.025 (2.5%) to 0.005 (0.5%) shows that the model successfully reduces the difference between the actual and predicted values, which is a key indicator of the neural network training effectiveness. These results highlight the model's high-quality performance and its ability to make accurate and reliable predictions.
It is worth noting that the accuracy and loss were compared for the two activation functions. As mentioned above, with the Smooth ReLU activation function, accuracy reaches 0.995 and the loss function decreases from 0.025 (2.5%) to 0.005 (0.5%) after 280 training epochs. With the ReLU activation function, the same accuracy of 0.995 and the same loss reduction from 0.025 (2.5%) to 0.005 (0.5%) are achieved only after 490 training epochs, almost twice as many as with Smooth ReLU. Moreover, after 280 training epochs with the ReLU activation function, accuracy reaches only 0.972, and the loss function decreases from 0.025 (2.5%) to just 0.018 (1.8%).
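The exact Smooth ReLU adjustment used in the paper is defined by expressions (22) and (27) earlier in the article and is not reproduced here; purely as a hedged illustration, the sketch below contrasts the standard ReLU with a softplus-type smooth approximation, which is one common form a smooth ReLU can take (an assumption, not the authors' exact function).

```python
import numpy as np

def relu(x):
    """Standard ReLU: max(0, x)."""
    return np.maximum(0.0, x)

def smooth_relu(x, beta=1.0):
    """Softplus-type smooth approximation of ReLU (assumed form; the
    paper's Smooth ReLU uses its own adjustment (22))."""
    return np.log1p(np.exp(beta * x)) / beta

x = np.linspace(-2, 2, 5)
print(relu(x))
print(smooth_relu(x))  # smooth and differentiable everywhere, unlike ReLU at 0
```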

3.3. Helicopter Turboshaft Engines Thermogas-Dynamic Parameter Sensors Signal Neural Network Integration Results

To conduct a computational experiment based on the helicopter TE thermogas-dynamic parameter training samples (see Table 2, Table 3 and Table 4), the original nTC parameter signal received from the corresponding sensor and contaminated with noise (Figure 12) was restored as an example.
As can be seen from Figure 12, the original nTC parameter signal contains various distortions, interference, and noise, which can affect its accuracy in analysis and interpretation. Figure 13 shows the nTC parameter filtered signal after applying the developed neural network (see Figure 6).
As shown in Figure 13, filtering the nTC parameter signal effectively removed noise and distortion while preserving the signal's main characteristics. The filtered nTC signal appears cleaner and smoother, making it more suitable for further analysis and use.
To analyze the signal's frequency composition and to isolate or suppress certain frequency components of the nTC parameter, the transition from the time domain to the frequency domain was performed using the direct Fourier transform $F(\omega) = \int_{-\infty}^{\infty} f(t) \cdot e^{-j \omega t} \, dt$. This makes it possible to determine which frequencies are present in the signal and at what amplitude, which in turn helps decide which frequencies should be retained or excluded to achieve the desired filtering result. Moving into the frequency domain allows the signal structure to be better understood and informed decisions to be made about the necessary filtering actions, such as noise reduction, extraction of the frequency components of interest, or interference suppression. Figure 14a shows the nTC parameter's original signal spectrum, and Figure 14b shows the nTC parameter's filtered signal spectrum.
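A minimal NumPy sketch of this time-to-frequency transition (a discrete FFT approximation of the Fourier transform above) is shown below; the synthetic noisy signal and the 100 Hz sampling rate are assumptions for illustration, not the recorded nTC data. The last line also illustrates the repetition period T = 1/f discussed next.

```python
import numpy as np

fs = 100.0                                  # sampling frequency, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
# synthetic noisy signal standing in for the n_TC sensor record
signal = 0.95 + 0.02 * np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(t.size)

spectrum = np.fft.rfft(signal)              # discrete analogue of F(omega)
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
amplitude = np.abs(spectrum) / signal.size

dominant = freqs[np.argmax(amplitude[1:]) + 1]   # skip the DC component
print(f"dominant frequency: {dominant:.1f} Hz, period: {1.0 / dominant:.3f} s")
```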
Figure 14 thus presents the resulting spectra of the original and filtered nTC parameter signals, allowing their frequency characteristics to be compared visually. Analyzing the differences between the spectra makes it possible to determine the filtration effectiveness and evaluate how successfully the filtering aims were achieved. If the filtered signal spectrum shows a significant reduction in amplitude at the frequencies that need to be suppressed and retention or enhancement of the frequencies that are important to the signal, this indicates that the designed neural network (Figure 6) is functioning well.
Further in the work, the nTC parameter signal repetition period is determined as $T = \frac{1}{f}$, where $f = \frac{\omega}{2\pi}$ is the frequency. This helps to understand the main temporal characteristics of the nTC parameter signal, such as its frequency and frequency spectrum. Knowing the signal period helps analyze its dynamics and detect regular patterns. Figure 15a shows the nTC parameter's initial signal repetition period, and Figure 15b shows the nTC parameter's filtered signal repetition period.
At the next stage of the computational experiment, a transition was made from the nTC parameter signal, presented in the time-dependence form f(t) (see Figure 12), to the signal-to-noise ratio (SNR) according to the expression:
$SNR = \frac{\int_0^T f(t)^2 \, dt}{\int_0^T n(t)^2 \, dt},$
where n(t) is the noise signal (in this work it is taken as a random variable).
SNR provides a clear numerical value that shows how much the signal stands out from the background noise. A high SNR indicates that the signal dominates the noise, which means better data transmission and processing. A low SNR indicates that noise is having a significant impact, which can result in distortion or loss of important information. Figure 16a shows the SNR based on the original signal of the nTC parameter, and Figure 16b shows the SNR based on the filtered signal of the nTC parameter.
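A sketch of the SNR estimate defined above, in discrete form over sampled records, is given below; the arrays reuse the signal and noise values from the Table 2 and Table 8 fragments purely as an illustration.

```python
import numpy as np

def snr_ratio(signal, noise):
    """Discrete form of the expression above: ratio of signal energy to
    noise energy over the observation window."""
    signal = np.asarray(signal, dtype=float)
    noise = np.asarray(noise, dtype=float)
    return np.sum(signal ** 2) / np.sum(noise ** 2)

# fragment values standing in for f(t) and n(t)
f_t = np.array([0.943, 0.982, 0.948, 0.957, 0.962, 0.974, 0.935, 0.981])
n_t = np.array([0.021, 0.020, 0.025, 0.022, 0.020, 0.030, 0.023, 0.025])

ratio = snr_ratio(f_t, n_t)
print(ratio, 10 * np.log10(ratio), "dB")   # dB conversion shown for reference
```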
A comparison of the filtered and unfiltered result histograms was performed, together with a qualitative analysis of the dynamics of the filtered estimates of the nTC parameter signal (Figure 17, Figure 18, Figure 19 and Figure 20).
The analysis of the obtained data (see Figure 17, Figure 18, Figure 19 and Figure 20) shows that combining signals using the filtering method narrows the histogram relative to the unfiltered estimate, which in turn helps to increase accuracy. The histogram study shows that the greatest narrowing, and therefore the greatest accuracy improvement, is achieved for the pulse repetition period.

4. Discussion

4.1. Noise Variance Estimation

Estimating the noise dispersion of the nTC parameter signal f(t) (see Figure 12 and Figure 13) is an important aspect of time series analysis and signal processing since dispersion characterizes the degree of spread of the noise values around their average. In analyzing the signal f(t), which can be represented as the sum of the useful signal and noise, isolating and estimating the noise variance makes it possible to evaluate the quality and effectiveness of the signal filtering methods. A statistical method is used to calculate the noise variance: first, the average noise value is determined, and then the squared deviations of all noise values from this average are averaged. The noise dispersion is defined as:
$\sigma^2 = \frac{1}{N} \cdot \sum_{i=1}^{N} \left( n(t_i) - \bar{n} \right)^2,$
where n(ti) are the noise values at times ti, $\bar{n}$ is the average noise value, and N is the number of elements in the training set.
Figure 21 shows the resulting noise dispersion estimate as a function of the number of elements in the training set. It can be seen that with 156 elements in the training set (58% of the total volume), the noise dispersion becomes almost zero. This indicates that increasing the training set size significantly improves the model's accuracy in estimating noise and that, once a certain amount of data is reached, the model almost eliminates the uncertainties associated with noise. Thus, to achieve minimum noise variance, at least 156 elements should be used in the training set, which ensures that all possible signal variations are adequately represented and allows the model to take them into account effectively.
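A minimal sketch of the variance estimate above is shown below, using the noise values from the Table 8 fragment as input (the full 256-element set is not reproduced here).

```python
import numpy as np

def noise_variance(noise):
    """Population variance of the noise record: mean squared deviation
    from the average noise value, as defined above."""
    noise = np.asarray(noise, dtype=float)
    return np.mean((noise - noise.mean()) ** 2)

n_t = np.array([0.021, 0.020, 0.025, 0.022, 0.020, 0.030, 0.023, 0.025])
print(noise_variance(n_t))
```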

4.2. Comparative Analysis of Neural Network Signal Integration Based on the Filtering Method with Traditional Filters

When performing the comparative analysis of the neural network signal integration based on the filtering method against traditional filters, the following metrics are used (a computation sketch for several of these metrics is given after the list):
1. The mean square error (MSE) characterizes the arithmetic mean of the squared differences between the observed and predicted values and is defined as:
$MSE = \frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \hat{f}_i(t) \right)^2.$
2. The mean absolute error (MAE) characterizes the average absolute deviation of the estimated values from the true signal values and is defined as:
$MAE = \frac{1}{N} \cdot \sum_{i=1}^{N} \left| f_i(t) - \hat{f}_i(t) \right|.$
3. The coefficient of determination (R²) shows the proportion of variation in the dependent variable explained by the independent variables in the model and is defined as:
$R^2 = 1 - \frac{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \hat{f}_i(t) \right)^2}{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \bar{f} \right)^2}.$
4. The peak signal-to-noise ratio (PSNR) measures the reconstructed signal quality relative to the maximum possible signal value and is defined as:
$PSNR = 10 \cdot \log_{10} \frac{MAX^2}{MSE},$
where MAX is the maximum possible signal value.
5. The signal-to-noise ratio (SNR) measures the ratio between signal power and noise power and is defined as:
$SNR = 10 \cdot \log_{10} \frac{\sum_{i=1}^{N} f_i(t)^2}{\sum_{i=1}^{N} \left( f_i(t) - \hat{f}_i(t) \right)^2}.$
6. The correlation coefficient (r) measures the linear relation between the true and filtered values and is defined as:
$r = \frac{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \bar{f} \right) \cdot \left( \hat{f}_i(t) - \bar{\hat{f}} \right)}{\sqrt{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \bar{f} \right)^2} \cdot \sqrt{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( \hat{f}_i(t) - \bar{\hat{f}} \right)^2}},$
where $\bar{f}$ and $\bar{\hat{f}}$ are the average values of the true and filtered signals, respectively.
7. The root mean square error (RMSE), the square root of the MSE, gives an idea of the error magnitude and is defined as:
$RMSE = \sqrt{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \hat{f}_i(t) \right)^2}.$
8. The mean absolute percentage error (MAPE) measures the average absolute error as a percentage of the true value and is defined as:
$MAPE = \frac{100\%}{N} \cdot \sum_{i=1}^{N} \left| \frac{f_i(t) - \hat{f}_i(t)}{f_i(t)} \right|.$
9. The mean relative error (MRE) estimates the average relative error of the predicted values with respect to the true values and is defined as:
$MRE = \frac{1}{N} \cdot \sum_{i=1}^{N} \frac{\left| f_i(t) - \hat{f}_i(t) \right|}{\left| f_i(t) \right|}.$
10. The goodness-of-fit index (CCC) measures the agreement between the true and predicted values, taking into account both precision and deviation, and is defined as:
$CCC = \frac{2 \cdot r \cdot \sigma_f \cdot \sigma_{\hat{f}}}{\sigma_f^2 + \sigma_{\hat{f}}^2 + \left( \bar{f} - \bar{\hat{f}} \right)^2},$
where $\sigma_f$ and $\sigma_{\hat{f}}$ are the standard deviations of the true and predicted values, respectively.
11. The normalized mean squared error (NMSE) normalizes the MSE with respect to the true values' variance, allowing models to be compared across different data scales, and is defined as:
$NMSE = \frac{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \hat{f}_i(t) \right)^2}{\frac{1}{N} \cdot \sum_{i=1}^{N} \left( f_i(t) - \bar{f} \right)^2}.$
12. The signal reconstruction quality function (SQR) evaluates the signal reconstruction quality, taking into account the minimum of the true and predicted values, and is defined as:
$SQR = \frac{\frac{1}{N} \cdot \sum_{i=1}^{N} \min\left( f_i(t), \hat{f}_i(t) \right)}{\frac{1}{N} \cdot \sum_{i=1}^{N} f_i(t)}.$
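As an illustration, the sketch below computes several of the listed metrics for a pair of true and filtered records; the arrays are placeholders and do not reproduce the experimental data behind Table 6.

```python
import numpy as np

def quality_metrics(f_true, f_hat, max_value=1.0):
    """MSE, MAE, R^2, PSNR, RMSE and MAPE from the definitions above;
    max_value is the maximum possible signal value used in PSNR."""
    f_true = np.asarray(f_true, dtype=float)
    f_hat = np.asarray(f_hat, dtype=float)
    err = f_true - f_hat
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    r2 = 1.0 - mse / np.mean((f_true - f_true.mean()) ** 2)
    psnr = 10.0 * np.log10(max_value ** 2 / mse)
    rmse = np.sqrt(mse)
    mape = 100.0 * np.mean(np.abs(err / f_true))
    return {"MSE": mse, "MAE": mae, "R2": r2, "PSNR": psnr,
            "RMSE": rmse, "MAPE": mape}

# placeholder true and filtered records
f_true = np.array([0.943, 0.982, 0.948, 0.957, 0.962, 0.974, 0.935, 0.981])
f_hat  = np.array([0.940, 0.979, 0.950, 0.955, 0.960, 0.971, 0.938, 0.978])
print(quality_metrics(f_true, f_hat))
```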
Table 6 shows the results of the comparative analysis, according to metrics (42)–(53), of the neural network signal integration based on the filtering method against the recursive [67], median [68] and median-recursive [69] filters.
The results obtained (see Table 6) confirm that the neural network signal integration based on the filtering method is the best across all metrics, demonstrating minimal errors and maximum correspondence to the true signal, whereas the median filter performs worst, showing maximum errors and minimum correspondence. Table 7 shows the degree of improvement in the quality metrics of the neural network signal integration based on the filtering method relative to the traditional filters.

4.3. Results of a Trained Neural Network with Traditional Filtering Methods Comparison

To compare the trained neural network model (Figure 6) layer by layer with the traditional filtering method (for example, a median-recursive filter), the results of each step are compared, followed by checking the actual effectiveness of each hidden layer design. The following algorithm is used in the work.
The training samples' data (Table 2, Table 3 and Table 4) are divided into signals and noise so that traditional filtering methods can be applied and the results analyzed at each step. Here, Sclean = {x1, x3, x5} is the set of pure signals (x1 is the gas-generator rotor rpm nTC signal; x3 is the signal of the gas temperature in front of the compressor turbine T_G*; and x5 is the free turbine rotor speed nFT signal), and Nnoise = {x2, x4, x6} is the set of noise components (x2 is the gas-generator rotor rpm nTC noise; x4 is the noise of the gas temperature in front of the compressor turbine T_G*; and x6 is the free turbine rotor speed nFT noise). As a result, two sets are obtained: one containing the pure signals and the other their corresponding noise.
Next, the traditional filtering method performs each step on the prepared data. In this case, signals are sequentially filtered from interference, dynamic changes are compensated, and sequential filtering is carried out to reduce noise and highlight useful signals.
After this, the prepared data are passed through the trained neural network, and each hidden layer's output is saved: the 1st hidden layer outputs (summing signals with noise) are h1, h2, h3; the 2nd hidden layer outputs (dynamic compensation) are z1, z2, and z3; the 3rd hidden layer outputs (filtering first stage) are f1, f2, and f3; the 4th hidden layer outputs (filtering second stage) are g1, g2, and g3; and the output layer gives y.
The following direct comparison is then carried out (a sketch of this layer-by-layer comparison is given after the list):
  • The summing signals with noise results by the traditional method are compared with the neural network’s 1st hidden layer results.
  • The dynamic compensation results by the traditional method are compared with the neural network’s 2nd hidden layer results.
  • The filtering 1st stage results by the traditional method are compared with the neural network’s 3rd hidden layer results.
  • The filtering 2nd stage results by the traditional method are compared with the neural network’s 4th hidden layer results.
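As a hedged sketch of this layer-by-layer comparison, the snippet below reuses the stage outputs reported in Table 11 and Table 12 and computes the relative difference at each stage; the comparison in the paper itself is performed over the full samples, so these fragment-level figures are illustrative only.

```python
import numpy as np

# stage outputs reported in Table 11 (neural network) and Table 12 (traditional filter)
stages = {
    "summation / 1st hidden layer": ([0.898, 0.875, 0.880], [0.898, 0.875, 0.880]),
    "dynamic compensation / 2nd hidden layer": ([0.988, 1.172, 0.984], [0.932, 1.007, 0.918]),
    "filtration stage 1 / 3rd hidden layer": ([1.143, 1.396, 1.142], [0.699, 1.057, 0.780]),
    "filtration stage 2 / 4th hidden layer": ([1.319, 1.609, 1.319], [0.454, 0.740, 0.507]),
}

for name, (nn_out, filt_out) in stages.items():
    nn_out, filt_out = np.array(nn_out), np.array(filt_out)
    rel = 100.0 * (nn_out - filt_out) / filt_out   # relative difference, %
    print(f"{name}: max relative difference {rel.max():.1f}%")
```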
In a computational experiment, clear signals and noise interference were identified from the training samples data (Table 2, Table 3 and Table 4), which are shown in Table 8, Table 9 and Table 10.
Table 11 shows the results of processing the data (Table 8, Table 9 and Table 10) using the neural network, and Table 12 shows the corresponding results using the traditional filtering method, stage by stage.
The output parameter value obtained in the neural network at each layer is significantly higher than the values obtained using the traditional filtering method, which consists of summing signals with their noise, dynamic compensation, and the first and second filtering stages (see Table 13). This is because neural networks can train on and adapt to complex, nonlinear relations in the data, allowing them to compensate for interference and improve signal quality more effectively. The Smooth ReLU activation function allows the neural network to suppress negative values that may represent noise and emphasize positive values, thereby improving the output signal quality. The higher output values indicate a cleaner and more amplified signal, which is important for improving the overall system accuracy and reliability.

4.4. The I and II Type Errors Calculation

A type I error occurs when the null hypothesis H0 is rejected even though it is true, and is defined at a given significance level as:
$\alpha = P\left( \text{Reject } H_0 \mid H_0 \text{ is true} \right).$
A type II error occurs when the null hypothesis H0 is not rejected even though the alternative hypothesis H1 is true, and is defined as:
$\beta = P\left( \text{Do not reject } H_0 \mid H_1 \text{ is true} \right).$
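In practice, these probabilities are estimated from repeated trials; the hedged sketch below shows one common way to obtain empirical α and β from counts of false alarms and misses (the counts are hypothetical and are not the experimental values behind Table 14).

```python
def empirical_error_rates(false_alarms, true_h0_trials, misses, true_h1_trials):
    """Empirical type I rate alpha = false alarms among trials where H0 holds;
    empirical type II rate beta = misses among trials where H1 holds."""
    alpha = false_alarms / true_h0_trials
    beta = misses / true_h1_trials
    return alpha, beta

# hypothetical counts for illustration only
alpha, beta = empirical_error_rates(false_alarms=9, true_h0_trials=1000,
                                    misses=4, true_h1_trials=1000)
print(f"alpha = {alpha:.2%}, beta = {beta:.2%}")
```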
As mentioned above, the paper sets the significance level to 0.01, which means the type I error probability (erroneously rejecting a true null hypothesis) is 1%; that is, if a statistical test shows a significant result, there is only a 1% chance that this result is due to chance, noise, or random variation in the data. This significance level reflects the high reliability requirements imposed on the results, which is especially important for ensuring the accuracy and reliability of the helicopter TE thermogas-dynamic parameter sensor readings.
For the given task, the null hypothesis is “The developed neural network (see Figure 6) does not improve the helicopter TE thermogas-dynamic parameters from sensor signals integration accuracy in comparison with traditional filters (median-recursive, recursive and median filters)”, and the alternative hypothesis is “The developed neural network (see Figure 6) significantly improves the helicopter TE thermogas-dynamic parameter sensor signals integrating accuracy compared to traditional filters (median-recursive, recursive and median filters)”.
Table 14 shows the results of calculating the type I and type II errors for the neural network signal integration based on the filtering method and for the recursive [67], median [68] and median-recursive [69] filters according to metrics (42)–(53).
The results (see Table 14) show that the neural network signal integration based on the filtering method made it possible to reduce the type I and type II errors by 2.11 times compared with the median-recursive filter, by 2.89 times compared with the recursive filter, and by 4.18 times compared with the median filter.
Implementing the neural network approach in real helicopter operating conditions involves a number of challenges and advantages. The main challenges include the need for significant computing resources to train and operate the neural network, difficulties adapting to rapidly changing operating conditions, and interference of various types. In addition, careful model tuning and validation on large amounts of data are required to avoid overfitting and ensure that the system operates stably. However, the advantages of this approach are significant: it provides helicopter TE parameter signal filtering with higher accuracy and reliability through adaptive noise suppression and the integration of dynamic compensation methods. A neural network trained using a backpropagation algorithm with an adaptive training rate balances model accuracy against generalization ability, preventing overfitting. As a result, this method improves the signal filtering efficiency compared to traditional methods and reduces type I and type II errors several times, significantly increasing the control of helicopter TE performance and reliability in real operating conditions.
Thus, the research did not focus on creating a new helicopter or engine. Instead, it focused on analyzing the sensor data of the thermogas-dynamic parameters of a particular class of existing helicopter TE (the TV3-117 engine was used in the work) and its parameters (Table 2, Table 3 and Table 4 [40,41,42,59,60,61]), obtained during Mi-8MTV helicopter flight operation. This aim is achieved by collecting data from sensors during engine operation, analyzing the obtained data to identify deviations in engine operation, comparing the deviations with benchmarks for the specific engine type, and determining the causes of the resulting deviations.
A prospect for further research is to develop recommendations for eliminating the identified deviations of the helicopter TE parameters from the reference values.

5. Conclusions

The article develops a neural network method for integrating helicopter turboshaft engine thermogas-dynamic parameter signals, which allows sensor data to be corrected effectively in real time, ensuring high accuracy and reliability of the readings:
  • The relevance of the neural network method for integrating helicopter turboshaft engine thermogas-dynamic parameter signals is substantiated, since this method provides effective noise filtering, which makes it possible to increase the engine condition monitoring accuracy.
  • A scheme for integrating signals from helicopter turboshaft engine thermogas-dynamic parameter sensors has been developed using a filtering method, which achieves almost 100% (0.995 or 99.5%) accuracy and reduces the loss function to 0.005 (0.5%) after 280 training epochs.
  • Based on the backpropagation algorithm, a neural network training method has been developed for the closed loops integrating the helicopter turboshaft engine parameter regulation, which combines increasing the validation sample accuracy and controlling overtraining into a single criterion. This method minimizes the loss function and considers the error dynamics on the validation set, preserving the model's ability to generalize, while the adaptive training rate helps the model quickly adapt to data changes and improves performance. In this case, 280 training epochs are enough to achieve the loss function minimum value of 2.005, after which the error begins to increase; however, the loss function stabilizes shortly after 320 epochs and remains stable up to 1000 epochs.
  • It is proposed that a modified Smooth ReLU activation function be used, with which accuracy reaches 0.995 and the loss function decreases from 0.025 to 0.005 in 280 epochs, whereas with ReLU it takes 490 epochs to achieve the same accuracy and loss, and after 280 epochs the accuracy reaches only 0.972 and the loss is reduced only to 0.018.
  • It is mathematically substantiated that the neural network integration of the closed loops regulating the helicopter turboshaft engine parameters using the filtering method improves efficiency, compared with traditional filters, by 1.020…5.101 times relative to the median-recursive filter, 1.031…9.658 times relative to the recursive filter, and 1.082…20.325 times relative to the median filter.
  • It is mathematically substantiated that the use of neural network signal integration based on the filtering method made it possible to reduce the type I and type II errors by 2.11 times compared with the median-recursive filter, by 2.89 times compared with the recursive filter, and by 4.18 times compared with the median filter.

Author Contributions

Conceptualization, S.V. and V.V.; methodology, S.V., V.V. and S.O.; software, V.V.; validation, V.V., S.O. and A.S.; formal analysis, S.V.; investigation, L.S. and A.S.; resources, L.S., V.S., O.M. and A.S.; data curation, V.V. and S.O.; writing—original draft preparation, V.V.; writing—review and editing, S.V.; visualization, V.V.; supervision, V.S. and O.M.; project administration, V.S. and O.M.; funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Internal Affairs of Ukraine “Theoretical and applied aspects of the development of the aviation sphere” under Project No. 0123U104884.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, S.; Ma, A.; Zhang, T.; Ge, N.; Huang, X. A Performance Simulation Methodology for a Whole Turboshaft Engine Based on Throughflow Modelling. Energies 2024, 17, 494. [Google Scholar] [CrossRef]
  2. Gu, Z.; Pang, S.; Zhou, W.; Li, Y.; Li, Q. An Online Data-Driven LPV Modeling Method for Turbo-Shaft Engines. Energies 2022, 15, 1255. [Google Scholar] [CrossRef]
  3. Kim, S.; Im, J.H.; Kim, M.; Kim, J.; Kim, Y.I. Diagnostics using a physics-based engine model in aero gas turbine engine verification tests. Aerosp. Sci. Technol. 2023, 133, 108102. [Google Scholar] [CrossRef]
  4. Zhang, J.; Wang, Z.; Li, S.; Wei, P. A digital twin approach for gas turbine performance based on deep multi-model fusion. Appl. Therm. Eng. 2024, 246, 122954. [Google Scholar] [CrossRef]
  5. Catana, R.M.; Badea, G.P. Experimental Analysis on the Operating Line of Two Gas Turbine Engines by Testing with Different Exhaust Nozzle Geometries. Energies 2023, 16, 5627. [Google Scholar] [CrossRef]
  6. Aygun, H.; Turan, O. Application of genetic algorithm in exergy and sustainability: A case of aero-gas turbine engine at cruise phase. Energy 2022, 238 Pt A, 121644. [Google Scholar] [CrossRef]
  7. Liu, X.; Chen, Y.; Xiong, L.; Wang, J.; Luo, C.; Zhang, L.; Wang, K. Intelligent fault diagnosis methods toward gas turbine: A review. Chin. J. Aeronaut. 2024, 37, 93–120. [Google Scholar] [CrossRef]
  8. Li, B.; Zhao, Y.-P. Group reduced kernel extreme learning machine for fault diagnosis of aircraft engine. Eng. Appl. Artif. Intell. 2020, 96, 103968. [Google Scholar] [CrossRef]
  9. Balli, O. Exergetic, sustainability and environmental assessments of a turboshaft engine used on helicopter. Energy 2023, 276, 127593. [Google Scholar] [CrossRef]
  10. Abdalla, M.S.M.; Balli, O.; Adali, O.H.; Korba, P.; Kale, U. Thermodynamic, sustainability, environmental and damage cost analyses of jet fuel starter gas turbine engine. Energy 2023, 267, 126487. [Google Scholar] [CrossRef]
  11. Castiglione, T.; Perrone, D.; Strafella, L.; Ficarella, A.; Bova, S. Linear Model of a Turboshaft Aero-Engine Including Components Degradation for Control-Oriented Applications. Energies 2023, 16, 2634. [Google Scholar] [CrossRef]
  12. Liu, J. Gas path fault diagnosis of aircraft engine using HELM and transfer learning. Eng. Appl. Artif. Intell. 2022, 114, 105149. [Google Scholar] [CrossRef]
  13. Song, J.; Li, S.; Zhu, S.; Wang, Y.; Zhang, H. Low-emission optimization control method for coaxial compound helicopter/engine based on variable geometry adjustment. Aerosp. Sci. Technol. 2024, 151, 109263. [Google Scholar] [CrossRef]
  14. Liu, X.; Song, E.; Zhang, L.; Luan, Y.; Wang, J.; Luo, C.; Xiong, L.; Pan, Q. Design and implementation for the state time-delay and input saturation compensator of gas turbine aero-engine control system. Energy 2024, 288, 129934. [Google Scholar] [CrossRef]
  15. Yang, Y.; Nikolaidis, T.; Jafari, S.; Pilidis, P. Gas turbine engine transient performance and heat transfer effect modelling: A comprehensive review, research challenges, and exploring the future. Appl. Therm. Eng. 2024, 236 Pt A, 121523. [Google Scholar] [CrossRef]
  16. Gong, W.; Lei, Z.; Nie, S.; Liu, G.; Lin, A.; Feng, Q.; Wang, Z. A novel combined model for energy consumption performance prediction in the secondary air system of gas turbine engines based on flow resistance network. Energy 2023, 280, 127951. [Google Scholar] [CrossRef]
  17. Kim, S. A new performance adaptation method for aero gas turbine engines based on large amounts of measured data. Energy 2021, 221, 119863. [Google Scholar] [CrossRef]
  18. Hanachi, H.; Liu, J.; Mechefske, C. Multi-mode diagnosis of a gas turbine engine using an adaptive neuro-fuzzy system. Chin. J. Aeronaut. 2018, 31, 1–9. [Google Scholar] [CrossRef]
  19. Singh, R.; Maity, A.; Nataraj, P.S.V. Modeling, Simulation and Validation of Mini SR-30 Gas Turbine Engine. IFAC-Pap. 2018, 51, 554–559. [Google Scholar] [CrossRef]
  20. Ntantis, E.L.; Botsaris, P. Diagnostic methods for an aircraft engine performance. J. Eng. Sci. Technol. 2015, 8, 64–72. [Google Scholar] [CrossRef]
  21. Pang, S.; Li, Q.; Ni, B. Improved nonlinear MPC for aircraft gas turbine engine based on semi-alternative optimization strategy. Aerosp. Sci. Technol. 2021, 118, 106983. [Google Scholar] [CrossRef]
  22. Zeng, J.; Cheng, Y. An Ensemble Learning-Based Remaining Useful Life Prediction Method for Aircraft Turbine Engine. IFAC-Pap. 2020, 53, 48–53. [Google Scholar] [CrossRef]
  23. Zaletin, V.V.; Savitsky, O.A.; Silnikov, M.V.; Sorokovikov, V.N.; Yakushenko, E.I. Acoustic emission diagnostics of a hull structures by a system of integrating fiber-optic sensors for the aircraft and spacecraft safe operation. Acta Astronaut. 2024, in press. [Google Scholar] [CrossRef]
  24. Schade, F.; Karle, C.; Mühlbeier, E.; Gönnheimer, P.; Fleischer, J.; Becker, J. Dynamic Partial Reconfiguration for Adaptive Sensor Integration in Highly Flexible Manufacturing Systems. Procedia CIRP 2022, 107, 1311–1316. [Google Scholar] [CrossRef]
  25. Sun, M.; Liu, Z.; Liu, J. Numerical Investigation of the Intercooler Performance of Aircraft Piston Engines Under the Influence of High Altitude and Cruise Mode. ASME J. Heat Mass Transf. 2023, 145, 062901. [Google Scholar] [CrossRef]
  26. Liu, Z.; Liu, J. Machine Learning Assisted Analysis of an Ammonia Engine Performance. J. Energy Resour. Technol. 2022, 144, 112307. [Google Scholar] [CrossRef]
  27. Avrunin, O.G.; Nosova, Y.V.; Abdelhamid, I.Y.; Pavlov, S.V.; Shushliapina, N.O.; Wójcik, W.; Kisała, P.; Kalizhanova, A. Possibilities of Automated Diagnostics of Odontogenic Sinusitis According to the Computer Tomography Data. Sensors 2021, 21, 1198. [Google Scholar] [CrossRef] [PubMed]
  28. Baranovskyi, D.; Bulakh, M.; Michajłyszyn, A.; Myamlin, S.; Muradian, L. Determination of the Risk of Failures of Locomotive Diesel Engines in Maintenance. Energies 2023, 16, 4995. [Google Scholar] [CrossRef]
  29. Li, B.; Zhao, Y.-P.; Chen, Y.-B. Unilateral alignment transfer neural network for fault diagnosis of aircraft engine. Aerosp. Sci. Technol. 2021, 118, 107031. [Google Scholar] [CrossRef]
  30. Xu, M.; Wang, J.; Liu, J.; Li, M.; Geng, J.; Wu, Y.; Song, Z. An improved hybrid modeling method based on extreme learning machine for gas turbine engine. Aerosp. Sci. Technol. 2020, 107, 106333. [Google Scholar] [CrossRef]
  31. Zhu, X.; Li, M.; Liu, X.; Zhang, Y. A backpropagation neural network-based hybrid energy recognition and management system. Energy 2024, 297, 131264. [Google Scholar] [CrossRef]
  32. Hu, Z.; Kashyap, E.; Tyshchenko, O.K. GEOCLUS: A Fuzzy-Based Learning Algorithm for Clustering Expression Datasets. Lect. Notes Data Eng. Commun. Technol. 2022, 134, 337–349. [Google Scholar] [CrossRef]
  33. Talebi, S.S.; Madadi, A.; Tousi, A.M.; Kiaee, M. Micro Gas Turbine fault detection and isolation with a combination of Artificial Neural Network and off-design performance analysis. Eng. Appl. Artif. Intell. 2022, 113, 104900. [Google Scholar] [CrossRef]
  34. Lytvynenko, V.; Nikytenko, D.; Voronenko, M.; Savina, N.; Naumov, O. Assessing the Possibility of a Country’s Economic Growth Using Dynamic Bayesian Network Models. In Proceedings of the 2020 IEEE 15th International Conference on Computer Sciences and Information Technologies (CSIT), Zbarazh, Ukraine, 23–26 September 2020; pp. 36–39. [Google Scholar] [CrossRef]
  35. Rusyn, B.; Lutsyk, O.; Kosarevych, R.; Maksymyuk, T.; Gazda, J. Features extraction from multi-spectral remote sensing images based on multi-threshold binarization. Sci. Rep. 2023, 13, 19655. [Google Scholar] [CrossRef]
  36. Baranovskyi, D.; Myamlin, S. The criterion of development of processes of the self organization of subsystems of the second level in tribosystems of diesel engine. Sci. Rep. 2023, 13, 5736. [Google Scholar] [CrossRef]
  37. Sachenko, A.; Kochan, V.; Turchenko, V.; Tymchyshyn, V.; Vasylkiv, N. Intelligent nodes for distributed sensor network. In Proceedings of the 16th IEEE Instrumentation and Measurement Technology Conference (IMTC/99), Venice, Italy, 24–26 May 1999; pp. 1479–1484. [Google Scholar] [CrossRef]
  38. Babichev, S.; Korobchynskyi, M.; Lahodynskyi, O.; Korchomnyi, O.; Basanets, V.; Borynskyi, V. Development of a technique for the reconstruction and validation of gene network models based on gene expression. East. -Eur. J. Enterp. Technol. 2018, 1, 19–32. [Google Scholar] [CrossRef]
  39. Shen, Y.; Khorasani, K. Hybrid multi-mode machine learning-based fault diagnosis strategies with application to aircraft gas turbine engines. Neural Netw. 2020, 130, 126–142. [Google Scholar] [CrossRef]
  40. Gebrehiwet, L.; Nigussei, Y.; Teklehaymanot, T. A Review-Differentiating TV2 and TV3 Series Turbo Shaft Engines. Int. J. Res. Publ. Rev. 2022, 3, 1822–1838. [Google Scholar] [CrossRef]
  41. Catana, R.M.; Dediu, G. Analytical Calculation Model of the TV3-117 Turboshaft Working Regimes Based on Experimental Data. Appl. Sci. 2023, 13, 10720. [Google Scholar] [CrossRef]
  42. Vladov, S.; Shmelov, Y.; Yakovliev, R. Control and Diagnostics of TV3-117 Aircraft Engine Technical State in Flight Modes Using the Matrix Method for Calculating Dynamic Recurrent Neural Networks. CEUR Workshop Proc. 2021, 2864, 97–109. [Google Scholar] [CrossRef]
  43. Bernardet, U.; Blanchard, M.; Verschure, P.F.M.J. IQR: A distributed system for real-time real-world neuronal simulation. Neurocomputing 2002, 44–46, 1043–1048. [Google Scholar] [CrossRef]
  44. Bahirev, I.; Basargin, S.; Kavalerov, B. Adaptive control of a gas turbine plant with a reference model and signal tuning. Control Syst. Inf. Technol. 2015, 2, 71–76. [Google Scholar]
  45. Vladov, S.; Shmelov, Y.; Yakovliev, R. Helicopters Aircraft Engines Self-Organizing Neural Network Automatic Control System. CEUR Workshop Proc. 2022, 3137, 28–47. [Google Scholar] [CrossRef]
  46. Vasiliev, S.; Valeev, S. Design of intelligent control systems based on the principle of minimum complexity. Bull. USATU 2007, 9, 32–41. [Google Scholar]
  47. Bahirev, I.; Kavalerov, B. Adaptive control of a gas turbine plant with a reference model and a sigmoid function. Control Syst. Inf. Technol. 2015, 3, 118–123. [Google Scholar]
  48. Bahirev, I. Application of radial basis function networks for interpolating the equation factors of a gas turbine unit model. Innov. Process. Res. Educ. Act. 2014, 1, 40–41. [Google Scholar]
  49. Wang, X.; Xu, B.; Han, T.; Wang, Y. Sensor dynamic compensation method based on GAN and its application in shockwave measurement. Mech. Syst. Signal Process. 2023, 190, 110157. [Google Scholar] [CrossRef]
  50. Wang, Y.; Shi, Y.; Cai, M.; Xu, W.; Yu, Q. Efficiency optimized fuel supply strategy of aircraft engine based on air-fuel ratio control. Chin. J. Aeronaut. 2019, 19, 489–498. [Google Scholar] [CrossRef]
  51. Lutsenko, I.; Mykhailenko, O.; Dmytriieva, O.; Rudkovskyi, O.; Kukharenko, D.; Kolomits, H.; Kuzmenko, A. Development of a method for structural optimization of a neural network based on the criterion of resource utilization efficiency. East. -Eur. J. Enterp. Technol. 2019, 2, 57–65. [Google Scholar] [CrossRef]
  52. Wu, D.; Sun, Y.; Xia, R.; Lu, S. Improved Adaptive Fuzzy Control for Non-Strict Feedback Nonlinear Systems: A Dynamic Compensation System Approach. Appl. Math. Comput. 2022, 435, 127470. [Google Scholar] [CrossRef]
  53. Vijaya Kumar, M.; Suresh, S.; Omkar, S.N.; Ganguli, R.; Sampath, P. A direct adaptive neural command controller design for an unstable helicopter. Eng. Appl. Artif. Intell. 2009, 22, 181–191. [Google Scholar] [CrossRef]
  54. Hernandez-Gonzalez, M.; Hernandez-Vargas, E.A. Discrete-time super-twisting controller using neural networks. Neurocomputing 2021, 447, 235–243. [Google Scholar] [CrossRef]
  55. Vladov, S.; Shmelov, Y.; Yakovliev, R.; Petchenko, M. Neural Network Method for Parametric Adaptation Helicopters Turboshaft Engines On-Board Automatic Control. CEUR Workshop Proc. 2023, 3403, 179–195. Available online: https://ceur-ws.org/Vol-3403/paper15.pdf (accessed on 17 February 2024).
  56. Widrow, B.; Stearns, D.S. Adaptive Signal Processing; Prentice-Hall Inc.: New York, NY, USA, 1985; pp. 302–367. [Google Scholar]
  57. Karatzinis, G.D.; Boutalis, Y.S.; Van Vaerenbergh, S. Aircraft engine remaining useful life prediction: A comparison study of Kernel Adaptive Filtering architectures. Mech. Syst. Signal Process. 2024, 218, 111551. [Google Scholar] [CrossRef]
  58. Dumka, P.; Pawar, P.S.; Sauda, A.; Shukla, G.; Mishra, D.R. Application of He’s homotopy and perturbation method to solve heat transfer equations: A python approach. Adv. Eng. Softw. 2022, 170, 103160. [Google Scholar] [CrossRef]
  59. Vladov, S.; Yakovliev, R.; Bulakh, M.; Vysotska, V. Neural Network Approximation of Helicopter Turboshaft Engine Parameters for Improved Efficiency. Energies 2024, 17, 2233. [Google Scholar] [CrossRef]
  60. Vladov, S.; Yakovliev, R.; Hubachov, O.; Rud, J.; Stushchanskyi, Y. Neural Network Modeling of Helicopters Turboshaft Engines at Flight Modes Using an Approach Based on “Black Box” Models. CEUR Workshop Proc. 2024, 3624, 116–135. Available online: https://ceur-ws.org/Vol-3624/Paper_11.pdf (accessed on 10 March 2024).
  61. Vladov, S.; Shmelov, Y.; Yakovliev, R. Modified Method of Identification Potential Defects in Helicopters Turboshaft Engines Units Based on Prediction its Operational Status. In Proceedings of the 2022 IEEE 4th International Conference on Modern Electrical and Energy System (MEES), Kremenchuk, Ukraine, 20–22 October 2022; pp. 556–561. [Google Scholar] [CrossRef]
  62. Corotto, F.S. Appendix C—The method attributed to Neyman and Pearson. In Wise Use Null Hypothesis Tests; Corotto, F.S., Ed.; Academic Press: Cambridge, MA, USA, 2023; pp. 179–188. [Google Scholar] [CrossRef]
  63. Motsnyi, F.V. Analysis of Nonparametric and Parametric Criteria for Statistical Hypotheses Testing. Chapter 1. Agreement Criteria of Pearson and Kolmogorov. Stat. Ukr. 2018, 4, 14–24. [Google Scholar] [CrossRef]
  64. Babichev, S.; Krejci, J.; Bicanek, J.; Lytvynenko, V. Gene expression sequences clustering based on the internal and external clustering quality criteria. In Proceedings of the 2017 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT), Lviv, Ukraine, 5–8 September 2017. [Google Scholar] [CrossRef]
  65. Anfilets, S.; Bezobrazov, S.; Golovko, V.; Sachenko, A.; Komar, M.; Dolny, R.; Kasyanik, V.; Bykovyy, P.; Mikhno, E.; Osolinskyi, O. Deep multilayer neural network for predicting the winner of football matches. Int. J. Comput. 2020, 19, 70–77. [Google Scholar] [CrossRef]
  66. Pasieka, M.; Grzesik, N.; Kuźma, K. Simulation modeling of fuzzy logic controller for aircraft engines. Int. J. Comput. 2017, 16, 27–33. [Google Scholar] [CrossRef]
  67. Ferreira, H.H.; Gastal, E.S.L. Efficient 2D Tikhonov smoothness regularization with recursive filtering. Pattern Recognit. Lett. 2023, 175, 95–103. [Google Scholar] [CrossRef]
  68. Tay, D.B. Sensor network data denoising via recursive graph median filters. Signal Process. 2021, 189, 108302. [Google Scholar] [CrossRef]
  69. Yang, X.; Mu, Y.; Cao, K.; Lv, M.; Peng, B.; Zhang, Y.; Wang, G. Robust kernel recursive adaptive filtering algorithms based on M-estimate. Signal Process. 2023, 207, 108952. [Google Scholar] [CrossRef]
  70. Poirier, C.; Descoteaux, M. A unified filtering method for estimating asymmetric orientation distribution functions. NeuroImage 2024, 287, 120516. [Google Scholar] [CrossRef] [PubMed]
  71. Zhao, Y. An inverse Q filtering method with adjustable amplitude compensation operator. J. Appl. Geophys. 2023, 215, 105111. [Google Scholar] [CrossRef]
  72. Pellegrino, S.F. A filtered Chebyshev spectral method for conservation laws on network. Comput. Math. Appl. 2023, 151, 418–433. [Google Scholar] [CrossRef]
Figure 1. Diagram of closed loops for regulating helicopter turboshaft engine parameters (Wreg is regulator transfer function, WFMU is fuel dispenser model transfer function, WTE is helicopter turboshaft engine model transfer function): (a) gas–generator rotor rpm, (b) gas temperature in front of the compressor turbine, (c) free turbine rotor speed (author’s research, based on [44]).
Figure 2. Dynamic compensation diagram in closed loops for regulating helicopter turboshaft engine parameters: (a) gas–generator rotor rpm, (b) gas temperature in the compressor turbine front, (c) free turbine rotor speed (author’s research, based on [44,47,48]).
Figure 3. Adaptive device diagram for noise suppression with the helicopter turboshaft engine parameters signal components passage to the reference input (according to B. Widrow and S. Stearns) [56].
Figure 4. Dynamic compensation diagram in closed loops for regulating the helicopter turboshaft engine parameters with an adaptive noise suppression device with the component signals passage to the reference input: (a) gas–generator rotor rpm, (b) gas temperature in the compressor turbine front, (c) free turbine rotor speed (author’s research).
Figure 5. Diagram for integrating closed loops for regulating helicopter turboshaft engine parameters using the filtration method (author’s research).
Figure 6. The developed neural network architecture, which implements the closed-loop integration for regulating the helicopter turboshaft engines’ parameters using the filtering method (author’s research).
Figure 7. Derivative ReLU functions diagrams: (a) traditional ReLU max(0, x); (b) proposed Smooth ReLU with adjustment (22) (author’s research).
Figure 8. Cluster analysis results: (a) training sample of the parameter nTC, (b) test sample of the parameter nTC, (c) training sample of the parameter T G * , (d) test sample of the parameter T G * , (e) training sample of the parameter nFT, (f) test sample of the nFT parameter (author’s research).
Figure 9. The influence diagram for the number of epochs passed on the resulting error (author’s research). (a) Training for the 320 epochs (b) Training from 320 to 1000 epochs.
Figure 10. Accuracy metric diagram (author’s research).
Figure 11. Loss function diagram (author’s research).
Figure 12. Initial diagram of the nTC gas–generator rotor rpm signal (author’s research).
Figure 13. Resulting diagram of the nTC gas–generator rotor rpm signal (author’s research).
Figure 14. The nTC gas–generator rotor rpm signal spectrum diagram: (a) Original signal (b) Filtered signal (author’s research).
Figure 15. The nTC gas–generator rotor rpm signal repetition period diagram: (a) Original signal (b) Filtered signal (author’s research).
Figure 16. The nTC gas–generator rotor rpm signal signal-to-noise ratio diagram: (a) Original signal (b) Filtered signal (author’s research).
Figure 17. Signal histogram for the nTC gas–generator rotor rpm estimates: (a) Original signal (b) Filtered signal (author’s research).
Figure 18. The spectrum histogram for the nTC gas–generator rotor rpm signal estimates: (a) Original signal (b) Filtered signal (author’s research).
Figure 19. The sequence histogram for the gas–generator rotor rpm signal nTC estimates: (a) Original signal (b) Filtered signal (author’s research).
Figure 20. The nTC gas–generator rotor rpm signal signal/noise estimates histogram: (a) Original signal (b) Filtered signal (author’s research).
Figure 21. Noise dispersion diagram of the nTC gas–generator rotor rpm signal (author’s research).
Table 1. Analytical expressions for the developed neural network hidden layers parameters calculating (author’s research).
Neural Network Layer | Parameter | Analytical Expression
4th hidden layer | Neuron weight error | $\delta_{g_i} = \delta_{out} \cdot w_{ij}^{(4)}$, where $w_{ij}^{(4)}$ are the 4th hidden layer neuron weights (see expression (13))
4th hidden layer | Gradient error | $\delta'_{g_i} = \delta_{g_i} \cdot Smooth\,ReLU'(x)$, where $Smooth\,ReLU'(x)$ is defined according to (27)
4th hidden layer | Gradients by weights | $\frac{\partial E}{\partial w_{ij}^{(4)}} = \delta'_{g_i} \cdot f_i$
3rd hidden layer | Neuron weight error | $\delta_{f_i} = \delta'_{g_i} \cdot w_{ij}^{(3)}$, where $w_{ij}^{(3)}$ are the 3rd hidden layer neuron weights (see expression (12))
3rd hidden layer | Gradient error | $\delta'_{f_i} = \delta_{f_i} \cdot Smooth\,ReLU'(x)$, where $Smooth\,ReLU'(x)$ is defined according to (27)
3rd hidden layer | Gradients by weights | $\frac{\partial E}{\partial w_{ij}^{(3)}} = \delta'_{f_i} \cdot z_i$
2nd hidden layer | Neuron weight error | $\delta_{z_i} = \delta'_{f_i} \cdot w_{ij}^{(2)}$, where $w_{ij}^{(2)}$ are the 2nd hidden layer neuron weights (see expression (11))
2nd hidden layer | Gradient error | $\delta'_{z_i} = \delta_{z_i} \cdot Smooth\,ReLU'(x)$, where $Smooth\,ReLU'(x)$ is defined according to (27)
2nd hidden layer | Gradients by weights | $\frac{\partial E}{\partial w_{ij}^{(2)}} = \delta'_{z_i} \cdot h_i$
Table 2. The training sample fragment for nTC gas–generator rotor rpm (author’s research).
Number | 1 | 2 | 37 | 84 | 115 | 172 | 202 | 256
Value | 0.943 | 0.982 | 0.948 | 0.957 | 0.962 | 0.974 | 0.935 | 0.981
Table 3. The training sample fragment for T G * gas temperature in the compressor turbine front (author’s research).
Number | 1 | 2 | 29 | 73 | 109 | 164 | 200 | 256
Value | 0.932 | 0.964 | 0.975 | 0.926 | 0.918 | 0.905 | 0.902 | 0.953
Table 4. The training sample fragment for nFT free turbine rotor speed (author’s research).
Number | 1 | 2 | 32 | 80 | 105 | 181 | 207 | 256
Value | 0.929 | 0.933 | 0.909 | 0.932 | 0.941 | 0.955 | 0.926 | 0.973
Table 5. Determining the influence results for the number of epochs passed on the resulting error (author’s research).
Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Epoch | 0 | 40 | 80 | 120 | 160 | 200 | 240 | 280 | 320
Eepoch | 17.352 | 14.018 | 10.342 | 8.665 | 5.229 | 4.315 | 3.399 | 2.005 | 3.767
Table 6. The neural network signal integration based on the filtering method with traditional filters comparative analysis results (author’s research).
Metrics | Neural Network Integration | Median-Recursive Filter | Recursive Filter | Median Filter
MSE | 0.000992 | 0.00255 | 0.00912 | 0.0116
MAE | 0.0079 | 0.0403 | 0.0763 | 0.1622
R² | 0.9495 | 0.7358 | 0.6892 | 0.5171
PSNR | 40.02 dB | 25.91 dB | 20.38 dB | 13.80 dB
SNR | 39.35 dB | 25.25 dB | 19.72 dB | 13.13 dB
r | 0.9761 | 0.6519 | 0.4219 | 0.2740
RMSE | 0.0315 | 0.0505 | 0.0955 | 0.1077
MAPE | 0.8615% | 1.365% | 4.250% | 17.50%
MRE | 0.00861 | 0.0436 | 0.0825 | 0.1750
CCC | 0.9756 | 0.6009 | 0.4998 | 0.1097
NMSE | 0.505 | 1.299 | 2.641 | 4.121
SQR | 0.9960 | 0.9766 | 0.9656 | 0.9207
Table 7. The neural network signal integration improvement degree calculating results based on the filtering method with traditional filters (author’s research).
Metrics | Improvement Compared to the Median-Recursive Filter | Improvement Compared to the Recursive Filter | Improvement Compared to the Median Filter
MSE | 2.571 | 9.194 | 11.694
MAE | 5.101 | 9.658 | 20.532
R² | 1.290 | 1.378 | 1.836
PSNR | 1.545 | 1.964 | 2.900
SNR | 1.558 | 1.995 | 2.997
r | 1.497 | 2.314 | 3.562
RMSE | 1.603 | 3.032 | 3.419
MAPE | 1.584 | 4.933 | 20.313
MRE | 5.064 | 9.582 | 20.325
CCC | 1.624 | 1.952 | 8.893
NMSE | 2.572 | 5.230 | 8.160
SQR | 1.020 | 1.031 | 1.082
Table 8. The training sample fragment for nTC gas–generator rotor rpm with the separation of clean signal and noise interference (author’s research).
Number | 1 | 2 | 37 | 84 | 115 | 172 | 202 | 256
Sclean | 0.922 | 0.962 | 0.923 | 0.935 | 0.942 | 0.944 | 0.912 | 0.956
Nnoise | 0.021 | 0.020 | 0.025 | 0.022 | 0.020 | 0.030 | 0.023 | 0.025
Table 9. The training sample fragment for T G * gas temperature in the compressor turbine front with the separation of clean signal and noise interference (author’s research).
Number | 1 | 2 | 29 | 73 | 109 | 164 | 200 | 256
Sclean | 0.903 | 0.933 | 0.955 | 0.895 | 0.898 | 0.875 | 0.880 | 0.922
Nnoise | 0.029 | 0.031 | 0.020 | 0.031 | 0.020 | 0.030 | 0.022 | 0.031
Table 10. The training sample fragment for nFT-free turbine rotor speed with the separation of clean signal and noise interference (author’s research).
Number | 1 | 2 | 32 | 80 | 105 | 181 | 207 | 256
Sclean | 0.907 | 0.913 | 0.888 | 0.911 | 0.921 | 0.936 | 0.903 | 0.952
Nnoise | 0.022 | 0.020 | 0.021 | 0.021 | 0.020 | 0.019 | 0.023 | 0.021
Table 11. Data processing results from training samples using a neural network (author’s research).
Stage Number | Stage Name | Results
1 | 1st hidden layer | Parameters h1, h2, h3 are calculated according to (10). The final values are h1 = 0.898, h2 = 0.875, h3 = 0.880.
2 | 2nd hidden layer | The accepted weight and bias matrices are $W = \begin{pmatrix} 0.5 & 0.3 & 0.2 \\ 0.4 & 0.6 & 0.1 \\ 0.3 & 0.3 & 0.4 \end{pmatrix}$, $b = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.1 \end{pmatrix}$. Using the Smooth ReLU activation function, the parameters z1, z2, z3 are calculated according to (11). The final values are z1 = 0.988, z2 = 1.172, z3 = 0.984.
3 | 3rd hidden layer | By applying the Smooth ReLU activation function to the linear combinations zi, the parameters f1, f2, f3 are calculated according to (12). The final values are f1 = 1.143, f2 = 1.396, f3 = 1.142.
4 | 4th hidden layer | By applying the Smooth ReLU activation function to the linear combinations fi, the parameters g1, g2, g3 are calculated according to (13). The final values are g1 = 1.319, g2 = 1.609, g3 = 1.319.
5 | Output layer | The accepted weight and bias matrices are $v = \begin{pmatrix} 0.4 & 0.3 & 0.3 \end{pmatrix}$, c = 0.5. The neural network output signal is calculated according to (14). The final value is y = 1.907.
Table 12. Data processing results from training samples using a traditional filtration method (author’s research).
Stage Number | Stage Name | Results
1 | Summation of signals with their noise | The summation of signals with their noise is carried out in the same way as in the neural network method (Table 11). The final values are $n_{TC}^{(1)} = 0.898$, $T_G^{*(1)} = 0.875$, and $n_{FT}^{(1)} = 0.880$, similar to h1 = 0.898, h2 = 0.875, and h3 = 0.880.
2 | Dynamic compensation | Signals and interference are adjusted using coefficients for each parameter. For a median-recursive filter, according to [70], it is advisable to use the following coefficients: 0.8 for the nTC parameter, 1.2 for the T_G* parameter, and 0.9 for the nFT parameter. Then $Corrected_{n_{TC}} = 0.8 \cdot Clean_{n_{TC}} + Noise_{n_{TC}}$, $Corrected_{T_G^*} = 1.2 \cdot Clean_{T_G^*} + Noise_{T_G^*}$, and $Corrected_{n_{FT}} = 0.9 \cdot Clean_{n_{FT}} + Noise_{n_{FT}}$. The total values are $n_{TC}^{(2)} = 0.932$, $T_G^{*(2)} = 1.007$, and $n_{FT}^{(2)} = 0.918$, similar to z1, z2, and z3.
3 | Filtration 1st stage | Signals and interference are adjusted using coefficients for each parameter. For a median-recursive filter, according to [71], it is advisable to use the following coefficients: 0.75 for the nTC parameter, 1.05 for the T_G* parameter, and 0.85 for the nFT parameter. Then $Filtered_{n_{TC}} = 0.75 \cdot Corrected_{n_{TC}}$, $Filtered_{T_G^*} = 1.05 \cdot Corrected_{T_G^*}$, and $Filtered_{n_{FT}} = 0.85 \cdot Corrected_{n_{FT}}$. The total values are $n_{TC}^{(3)} = 0.699$, $T_G^{*(3)} = 1.057$, and $n_{FT}^{(3)} = 0.780$, similar to f1, f2, and f3.
4 | Filtration 2nd stage | Signals and interference are adjusted using coefficients for each parameter. For a median-recursive filter, according to [72], it is advisable to use the following coefficients: 0.65 for the nTC parameter, 0.70 for the T_G* parameter, and 0.60 for the nFT parameter. Then $Filtered_{n_{TC}} = 0.65 \cdot Corrected_{n_{TC}}$, $Filtered_{T_G^*} = 0.70 \cdot Corrected_{T_G^*}$, and $Filtered_{n_{FT}} = 0.60 \cdot Corrected_{n_{FT}}$. The total values are $n_{TC}^{(4)} = 0.454$, $T_G^{*(4)} = 0.740$, and $n_{FT}^{(4)} = 0.507$, similar to g1, g2, and g3.
5 | Final result | The output signal is calculated as $y = n_{TC}^{(4)} + T_G^{*(4)} + n_{FT}^{(4)} = 1.701$.
Table 13. The obtained data comparison results (author’s research).
Stage Number | Neural Network (Layer: Output Values) | Traditional Filtration Method (Stage: Output Values) | Comparison Results
1 | 1st hidden layer: h1 = 0.898, h2 = 0.875, h3 = 0.880 | Summation of signals with their noise: $n_{TC}^{(1)} = 0.898$, $T_G^{*(1)} = 0.875$, $n_{FT}^{(1)} = 0.880$ | The results obtained in the neural network's 1st hidden layer are identical to the results obtained using traditional filtering methods.
2 | 2nd hidden layer: z1 = 0.988, z2 = 1.172, z3 = 0.984 | Dynamic compensation: $n_{TC}^{(2)} = 0.932$, $T_G^{*(2)} = 1.007$, $n_{FT}^{(2)} = 0.918$ | The results obtained in the neural network's 2nd hidden layer are up to 44.1% higher than the results obtained using traditional filtering methods.
3 | 3rd hidden layer: f1 = 1.143, f2 = 1.396, f3 = 1.142 | Filtration 1st stage: $n_{TC}^{(3)} = 0.699$, $T_G^{*(3)} = 1.057$, $n_{FT}^{(3)} = 0.780$ | The results obtained in the neural network's 3rd hidden layer are up to 44.1% higher than the results obtained using traditional filtering methods.
4 | 4th hidden layer: g1 = 1.319, g2 = 1.609, g3 = 1.319 | Filtration 2nd stage: $n_{TC}^{(4)} = 0.454$, $T_G^{*(4)} = 0.740$, $n_{FT}^{(4)} = 0.507$ | The results obtained in the neural network's 4th hidden layer are up to 68.5% higher than the results obtained using traditional filtering methods.
5 | Output layer: y = 1.907 | Final result: y = 1.701 | The output signal value obtained in the neural network's output layer is 10.8% higher than its value obtained using traditional filtering methods.
Table 14. The neural network signal integration based on the filtering method 1st and 2nd types errors calculating results with traditional filters (author’s research).
Error Type | Neural Network Integration | Median-Recursive Filter | Recursive Filter | Median Filter
Type I error, % | 0.86 | 1.82 | 2.49 | 3.60
Type II error, % | 0.38 | 0.80 | 1.10 | 1.59