Article

Data-Driven Rotary Machine Fault Diagnosis Using Multisensor Vibration Data with Bandpass Filtering and Convolutional Neural Network for Signal-to-Image Recognition

Faculty of Automatic Control, Robotics and Electrical Engineering, Poznan University of Technology, 60-965 Poznań, Poland
Electronics 2024, 13(15), 2940; https://doi.org/10.3390/electronics13152940
Submission received: 30 June 2024 / Revised: 20 July 2024 / Accepted: 23 July 2024 / Published: 25 July 2024

Abstract

This paper proposes a novel data-driven method for machine fault diagnosis, named multisensor-BPF-Signal2Image-CNN2D. This method uses multisensor data, bandpass filtering (BPF), and a 2D convolutional neural network (CNN2D) for signal-to-image recognition. The proposed method is particularly suitable for scenarios where traditional time-domain analysis might be insufficient due to the complexity or similarity of the data. The results demonstrate that the multisensor-BPF-Signal2Image-CNN2D method achieves high accuracy in fault classification across the three datasets (constant-velocity fan imbalance, variable-velocity fan imbalance, Case Western Reserve University Bearing Data Center). In particular, the proposed multisensor method exhibits a significantly faster training speed compared to the reference IMU6DoF-Time2GrayscaleGrid-CNN, IMU6DoF-Time2RGBbyType-CNN, and IMU6DoF-Time2RGBbyAxis-CNN methods, which use the signal-to-image approach, requiring fewer iterations to achieve the desired level of accuracy. The interpretability of the model is also explored. This research demonstrates the potential of bandpass filters in the signal-to-image approach with a CNN2D to be robust and interpretable in selected frequency bandwidth machine fault diagnosis using multiple sensor data.

1. Introduction

Modern industrial environments are heavily reliant on complex electromechanical machinery that plays a vital role in various sectors from manufacturing to infrastructure. Ensuring the longevity and operational efficiency of this equipment is paramount to maintaining productivity, minimising waste, and ultimately contributing to economic and environmental sustainability. Proactive fault diagnosis strategies are essential to prevent costly equipment failures and damage, allowing for timely maintenance interventions.
The selection of appropriate sensors for machine fault diagnosis is crucial for effective monitoring. Different sensor types target various aspects of a machine’s health. Sensors of mechanical quantities measure the physical characteristics of the machine’s operation. Vibration analysis is a popular choice due to its high sensitivity to subtle changes in a machine’s condition, often indicative of developing faults [1,2,3,4,5,6]. Other mechanical sensors include those for displacement [7], torque [8,9], and angular velocity/position [10,11]. Electrical quantity sensors monitor the electrical health of the machine. Current [12,13] and voltage [14,15] measurements can reveal issues related to power delivery and motor health. Sensors of other quantities can provide valuable insights for specific fault types. Temperature (internal and external) [16,17], sound [18,19,20], and even chemical analysis [21,22] can be used depending on the targeted faults. Beyond traditional sensors, recent research has explored two innovative approaches: image-based diagnostics, in which cameras capture visual representations of the machine’s condition for fault detection [23,24,25,26], and signal-to-image conversion techniques, which transform sensor signals into virtual images, enabling the application of image recognition methods to fault diagnosis [13,27,28,29,30,31,32].
Vibration analysis has emerged as a well-established technique for diagnosing faults in rotating machinery due to its sensitivity to subtle changes in the machine’s condition. However, traditional analysis methods can be limited in complex scenarios or when dealing with nonstationary faults. This research proposes a novel data-driven approach to rotary machine fault diagnosis named multisensor-BPF-Signal2Image-CNN2D. This method leverages data from multiple sensors, bandpass filtering (BPF) for target feature extraction, and a 2D convolutional neural network (CNN2D) for signal-to-image recognition. Using CNNs, the approach aims to automate fault classification processes with improved accuracy and interpretability. By incorporating data from multiple sensors, the method can capture a more comprehensive picture of the machine’s health, potentially improving fault detection capabilities. BPF allows for the extraction of specific frequency bands relevant to targeted fault types, reducing noise and enhancing the signal-to-noise ratio. Converting sensor data to greyscale and RGB images facilitates spatial representation and feature extraction by the CNN2D, potentially leading to more robust fault classification. This research investigates the effectiveness of the multisensor-BPF-Signal2Image-CNN2D method for fault diagnosis in rotary machines. The manuscript details the technical aspects of the method, evaluates its performance using various datasets, and discusses the results in the context of existing approaches. Finally, the manuscript proposes future research directions for the further development and application of this method at higher technology readiness levels (TRLs). Recent research in signal-to-image recognition [13,27,28,29,30,31,32] has explored the potential of image-based approaches for machine fault diagnosis using sensor data, demonstrating promising results. 
However, these methods often face limitations that the multisensor-BPF-Signal2Image-CNN2D method seeks to overcome. A key limitation of existing approaches lies in their feature extraction capabilities. While converting time-series data into greyscale or RGB images offers a path to leveraging powerful convolutional neural networks (CNNs), it does not explicitly address the issue of frequency content within the signals. This can hinder the CNN’s ability to effectively extract features specifically relevant to different fault types. Additionally, the “black-box” nature of CNNs can be a challenge. Although these models may achieve high accuracy in fault classification, their inner workings remain opaque. It becomes difficult to pinpoint the specific features or frequency bands used for classification, limiting interpretability. The multisensor-BPF-Signal2Image-CNN2D method addresses these shortcomings by incorporating several key aspects. This method leverages multisensor data fusion, providing a richer and more comprehensive picture of the machine’s health compared to relying solely on a single sensor. By strategically applying BPF, the method focusses on specific frequency bands that are most likely to contain signatures indicative of faults. Interpretability via signal-to-image conversion with BPFs allows for visualisation of the filtered data, aiding in understanding the features learnt by the CNN2D in chosen frequency bandwidths. Finally, the proposed method requires a small number of training iterations to achieve good accuracy.
Following the introduction, the manuscript delves into the core aspects of the research. Section 2, “Description of the proposed multisensor-BPF-Signal2Image-CNN2D fault diagnosis method”, forms the key foundation. Here, the detailed architecture of the proposed method is explained. This includes the data acquisition process and the specific sensor data utilised (e.g., inertial measurement unit (IMU) data with 6 degrees of freedom (6DoF)). In addition, the design and implementation of the bandpass filtering (BPF) stage are described, along with justifications for the chosen frequency ranges. Furthermore, this section details the conversion process from filtered signal sequences into greyscale images, providing insights into the spatial representation employed. The methodology for sensor fusion techniques, which combine data from multiple sensors (e.g., accelerometers) into a single RGB image, is also elaborated upon. Finally, the architecture and training process of the 2D convolutional neural network (CNN2D) are described in detail. Section 3, “Demonstrator of Rotary Machine Fault Diagnosis and Datasets Description”, shifts the focus to the practical implementation and evaluation framework. This section introduces the datasets used to assess the effectiveness of the proposed method. Three specific datasets (constant-velocity fan imbalance, variable-velocity fan imbalance, and the publicly available dataset of the Bearing Data Center of Case Western Reserve University) are meticulously described. Section 4, “Results of the Multisensor-BPF-Signal2Image-CNN2D Method”, presents the performance evaluation of the proposed method. This section is further divided into subsections that mirror the datasets used. Section 5, “Discussion”, critically analyses the results obtained in Section 4.
It delves into the implications of the findings, addressing key points of the effectiveness of the proposed multisensor-BPF-Signal2Image-CNN2D method compared to existing signal-to-image approaches for machine fault diagnosis. Finally, Section 6, “Conclusions”, summarises the main findings of the investigation and reaffirms the contributions of the proposed method to machine fault diagnosis.

2. Description of Proposed Multisensor-BPF-Signal2Image-CNN2D Fault Diagnosis Method

The proposed method named multisensor-BPF-Signal2Image-CNN2D corresponds to data-driven machine fault diagnosis using multisensor data with bandpass filtering (BPF) and a 2D convolutional neural network (CNN2D) for signal-to-image recognition (Signal2Image). The proposed multisensor-BPF-Signal2Image-CNN2D is illustrated in Figure 1. Data collected from multiple channels, in particular from an inertial measurement unit (IMU) with 6 degrees of freedom (6DoF), are processed by multiple bandpass filters. The number of BPFs, marked as n in Figure 1, is a design parameter and should be set depending on the complexity of the classification task. An open question is the optimal number n of bandpass filters (BPFs) and their impact on fault diagnosis performance. Increasing the number n of BPFs leads to higher frequency resolution (narrower BPF frequency bands), allowing for more detailed analysis of specific frequency bands in the time domain. However, it also increases the computational complexity. Finding the optimal balance between these factors is challenging. The number n of BPFs can be chosen, based on prior knowledge of potential fault frequencies, to effectively capture the frequency components of interest related to the fault diagnosis. On the other hand, the main motivation behind using n equal-width bands is to simplify the selection process compared to determining an optimal number of BPFs with different bandwidths. The proposed approach offers a more straightforward implementation. As a result of filtering, for each channel of input time-series data there will be n new sequences of the same length. For example, the collection of 256 IMU 6DoF samples corresponds to a matrix of 6 × 256, where there are 6 channels and 256 samples in each channel. The BPF operation results in an increase in the size of the data matrix. The new dimensions are 6 · n × 256, where n represents the number of BPF channels used.
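As a rough sketch of this filtering step, the filter bank below splits each channel of a 6 × 256 IMU window into n equal-width bands, producing the 6 · n × 256 matrix described above. This is a minimal Python/SciPy illustration under stated assumptions: the filter length (129 taps, shorter than the order used in the paper for brevity) and the use of `firwin`/`lfilter` are assumptions, not the exact implementation.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def bpf_bank(x, n_bands, fs, numtaps=129):
    """Split each channel of x (channels x samples) into n equal-width bands.

    Returns an array of shape (channels * n_bands, samples), matching the
    6*n x 256 matrix described in the text. The 129-tap FIR length is an
    illustrative assumption.
    """
    nyq = fs / 2.0
    edges = np.linspace(0.0, nyq, n_bands + 1)        # equal-width band edges
    out = []
    for ch in x:                                       # one channel at a time
        for k in range(n_bands):
            lo = max(edges[k], 1e-3)                   # keep cutoff above 0 Hz
            hi = min(edges[k + 1], nyq - 1e-3)         # keep cutoff below Nyquist
            taps = firwin(numtaps, [lo, hi], pass_zero=False, fs=fs)
            out.append(lfilter(taps, 1.0, ch))
    return np.vstack(out)

imu = np.random.randn(6, 256)                          # mock 6DoF window
banked = bpf_bank(imu, n_bands=9, fs=200.0)
print(banked.shape)                                    # (54, 256)
```

For n = 9 and six IMU channels, the bank yields 54 filtered sequences, one per channel–band pair.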
Each sequence of 256 samples within the 6 · n matrix is reshaped into a 16 × 16 greyscale image. This process transforms the one-dimensional data from each BPF channel into a two-dimensional spatial representation. The research uses a sensor-based fusion approach defined and validated in [32]. This method combines the reshaped greyscale images from sensors of the same type (e.g., accelerometers) to create a single 16 × 16 × 3 RGB (red, green, blue) image. The three dimensions of this image correspond to the x, y, and z axes, respectively, at a single BPF bandwidth. In the case of data from an IMU with 6DoF, all RGB subimages (size 16 × 16 × 3) are further aggregated into a single, larger RGB image. Here, the columns of the image correspond to the data from the accelerometer and gyroscope, respectively, while the rows represent the different BPF bandwidths. However, to stay closer to the proportional width and height of the final image, another arrangement can be applied, for example, for nine (n = 9) BPFs selected based on preliminary investigation, the subimages can be arranged as three following BPFs in each row. This final image provides a combined representation of the sensor data across all axes and frequency bands. This data reshaping and fusion process creates a more informative representation suitable for visualisation and further analysis of the sensor data. The specific details of the sensor fusion approach and the final image structure may vary depending on the number of available sensors and their type. Finally, the resulting image (containing combined information across sensors, axes, and frequency bands) is fed into a CNN2D for machine fault diagnosis. The CNN2D is trained to recognise patterns in the image that correspond to different types of faults.
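The reshaping and axis fusion described above can be sketched as follows. This is a hypothetical Python illustration; the per-channel min–max scaling to [0, 1] is an assumption added for display purposes and is not specified in the text.

```python
import numpy as np

def axes_to_rgb(x_axis, y_axis, z_axis):
    """Reshape three 256-sample sequences into 16x16 planes and stack them
    as the R, G, B channels of a single 16x16x3 image, as described above.
    Min-max scaling of each colour channel to [0, 1] is an assumption."""
    img = np.stack([s.reshape(16, 16) for s in (x_axis, y_axis, z_axis)], axis=-1)
    mn = img.min(axis=(0, 1), keepdims=True)
    mx = img.max(axis=(0, 1), keepdims=True)
    return (img - mn) / np.maximum(mx - mn, 1e-12)

rgb = axes_to_rgb(*np.random.randn(3, 256))            # mock X, Y, Z sequences
print(rgb.shape)                                       # (16, 16, 3)
```

One such 16 × 16 × 3 image is produced per sensor type and per BPF band; these subimages are then aggregated into the final larger RGB image.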
The selection of the filter type and its parameters is important for the division of the full bandwidth into n bands. In the conducted research, the full frequency range from 0 Hz to the Nyquist frequency (half of the sampling frequency) is divided into n equal-width frequency ranges. The properties of BPF processing depend on the digital filter type: FIR (Finite Impulse Response) or IIR (Infinite Impulse Response). Key aspects of a BPF are ripples in the passband and attenuation in the stopband. The selection of a BPF type significantly influences the filter’s performance characteristics, particularly its phase response. Another important consideration is the signal delay caused by the filter [33]. The phase response of a filter allows for the calculation of both phase delay and group delay. Phase delay refers to the time difference introduced by the filter for a single sinusoidal component of a signal at a specific frequency. Crucially, the phase delay at each frequency is directly proportional to the negative of the phase shift experienced by that frequency component and inversely proportional to the frequency itself. It can be calculated from the phase shift using Equation (1).
τp(ω) = −φ(ω) / ω        (1)
where
  • τp(ω) is the phase delay at angular frequency ω;
  • φ(ω) is the unwrapped phase response (shift) of the filter at ω;
  • ω is the angular frequency.
The group delay, on the other hand, represents the average time delay experienced by all frequency components within a composite signal as they pass through the filter. Group delay τg(ω) is defined as the negative of the first derivative of the filter phase response φ(ω) with respect to frequency ω:
τg(ω) = −dφ(ω) / dω        (2)
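For a linear-phase FIR filter with N taps, the group delay is constant and equal to (N − 1)/2 samples at every frequency. This property can be checked numerically; the sketch below uses SciPy with illustrative filter parameters that are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import firwin, group_delay

# A linear-phase FIR bandpass: its group delay is constant and equal to
# (numtaps - 1) / 2 samples, so the signal envelope is merely shifted in time.
fs, numtaps = 200.0, 129
taps = firwin(numtaps, [10.0, 30.0], pass_zero=False, fs=fs)
w, gd = group_delay((taps, 1.0), fs=fs)     # gd is in samples, w in Hz
band = (w > 12.0) & (w < 28.0)              # evaluate inside the passband
print(np.round(gd[band].mean(), 3))         # ~64.0 samples = (129 - 1) / 2
```

This constant delay is why FIR filters with linear phase are a natural fit for the time-domain signal-to-image approach: every frequency component of the window is delayed equally.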
For applications requiring a constant group delay, a linear phase response is crucial. The constant group delay ensures that the signal’s overall shape (envelope) is preserved. Preserving the signal’s temporal characteristics is important in fault diagnosis when it is based on a signal-to-image conversion approach. Selection of these parameters requires careful consideration of the specific application and the characteristics of the faults being targeted. Balancing factors such as the frequency resolution, computational efficiency, and phase linearity is essential to achieve accurate and robust machine fault diagnosis using the multisensor-BPF-Signal2Image-CNN2D approach. The primary function of BPFs is to allow the passage of signals within a specific frequency band (passband) while attenuating signals outside this band (stopband). The order of the FIR digital filter (the number of zeros used in the filter transfer function) affects the complexity of the filter response and its processing characteristics. Higher-order filters can achieve sharper transitions between the passband and stopband, leading to better selectivity and potentially deeper stopband attenuation. However, they can also introduce a higher computational cost and lead to a longer delay. On the basis of preliminary investigations, the order of the FIR BPFs was selected as twice the length of the time-domain window. The frequency width of the passband was selected as the Nyquist frequency divided by the number n of filters.
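The effect of the filter order on the sharpness of the transition band can be illustrated numerically by comparing a short and a long FIR bandpass over the same region outside the passband. This is a SciPy sketch with illustrative parameters; 513 taps corresponds to an order of twice a 256-sample window, as selected above.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Compare a short and a long FIR bandpass (40-60 Hz at fs = 200 Hz): the
# longer filter has a much narrower transition band, so the response 10 Hz
# outside the passband edges is far more strongly attenuated.
fs = 200.0
atten = {}
for numtaps in (33, 513):                   # 513 taps ~ twice a 256-sample window
    taps = firwin(numtaps, [40.0, 60.0], pass_zero=False, fs=fs)
    w, h = freqz(taps, worN=4096, fs=fs)
    stop = np.abs(h)[(w < 30.0) | (w > 70.0)]       # region outside the passband
    atten[numtaps] = 20 * np.log10(stop.max())      # worst-case level, in dB
    print(numtaps, "taps:", round(atten[numtaps], 1), "dB")
```

The longer filter pushes the worst-case out-of-band level well below that of the short one, at the cost of a proportionally longer delay.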

3. Demonstrator of Rotary Machine Fault Diagnosis and Datasets Description

This section details the microcontroller-based demonstrator system, shown in Figure 2, developed to evaluate the effectiveness of the multisensor-BPF-Signal2Image-CNN2D approach for rotary machine fault diagnosis. Additionally, this section describes the datasets employed to train and validate the proposed method. The fan demonstrator setup, established in previous research [1] and extended to variable velocity in [32], offers a controlled environment for method validation.
This work explores the potential of the multisensor-BPF-Signal2Image-CNN2D approach for machine fault diagnosis using a microcontroller-based demonstrator. This allows consistent results and facilitates comparison with previous studies [1,32,34,35]. The demonstration setup, designed for basic research purposes and reflecting a low technology readiness level (TRL), mimics a real-world scenario by employing an IMU sensor for vibration analysis. An imbalance caused by the addition of a mass, in the form of a paper clip on one of the fan blades, simulates a machine fault, generating controlled vibrations that the proposed method can be trained to identify. This controlled environment allows for replicating experiments and facilitates consistent evaluation of the method’s core principles. This offers a practical approach to simulate and mimic real-world scenarios where imbalances might occur. Applying additional mass to one or two fan blades allows two levels of imbalance to be mimicked. However, it does limit the ability to achieve perfectly controlled levels of imbalance mass. The demonstrator in its present form enables rapid verification of the proposed multisensor-BPF-Signal2Image-CNN2D method with limited computational resources, which suffices at the first stage of method verification. Both imbalance and bearing failure can cause increased vibration in the fan system. Imbalance creates a centrifugal force that varies with the rotational speed, leading to vibrations at the rotational frequency and its harmonics. Bearing failure can introduce localised wear or looseness, causing erratic vibrations with a broader frequency spectrum. The setup, designed for research purposes, uses a Yate Loon Electronics (Taiwan) fan model GP-D12SH-12(F) operating at a nominal 12 V DC and 0.3 A. The IMU sensor continuously acquires data at a constant sampling rate, building a buffer for analysis.
These data represent the vibrational state of the machine and serve as the input for the fault diagnosis algorithm. A microcontroller unit (STM32F746ZG) forms the core of the demonstrator system, responsible for data acquisition and MQTT (Message Queuing Telemetry Transport) communication with MATLAB R2013a (MathWorks, Natick, MA, USA), hosted on a remote machine, which acts as the fault diagnosis system. The microcontroller communicates with the remote machine via the MQTT protocol to publish data from the IMU sensor. Therefore, the prepared demonstrator can, in the future, be adopted for fault diagnosis at a higher technology readiness level (TRL) as a business service product. Details of the MQTT communication of the demonstrator are shown in [1,32,34,35]. This microcontroller-based approach offers several advantages for basic research. The controlled environment allows for replicating experiments, ensuring consistent and reliable evaluation of the fundamental concepts of the proposed method. The use of a microcontroller makes the demonstrator system relatively inexpensive, making it suitable for early-stage research and development. The core concept of using a microcontroller for data acquisition can be easily scaled to more complex industrial environments with minimal modifications, paving the way for future research advancements. Using this demonstrator system in a controlled setting, researchers can thoroughly assess the feasibility and effectiveness of the multisensor-BPF-Signal2Image-CNN2D approach for machine fault diagnosis. This initial exploration lays the groundwork for further development and refinement, ultimately aiming at application in real-world scenarios.

3.1. Constant Velocity of Fan Imbalance

This dataset details sensor data collected from a controlled experiment that simulates a constant-velocity fan imbalance caused by the addition of mass, in the form of a paper clip, at one of the fan blades. The nominal rotational speed of the fan, 3000 RPM (revolutions per minute), corresponds to a fundamental frequency of approximately 50 Hz. By applying a reduced voltage of 5 V, the rotational speed was successfully reduced to around 21 Hz. Consequently, a sampling frequency of 200 Hz was considered sufficient to capture the relevant signal components. The IMU sensor continuously acquires data at a constant sampling rate of 200 Hz, corresponding to a sampling interval of 5 ms (milliseconds). An example time-series window of 6 channels that contains 256 samples each is presented in Figure 3. This represents a time window of 1280 ms (256 samples/axis · 1/200 Hz). The first usage of this fan unbalanced dataset was shown in [1], where data were processed using a short-time Fourier transform (STFT) and a sliding Fourier transform (SDFT). Both techniques are valuable tools, but their suitability depends on the specific aspects that are to be investigated in the data. The main issue is the time–frequency resolution trade-off. A wider time window captures more frequency detail but loses time resolution. A narrower window offers better time resolution but might not capture the full spectrum of frequencies. Moreover, spectral leakage occurs when the frequency content of a signal component “leaks” into neighbouring frequencies in the spectrum. This happens because a finite window function abruptly truncates the signal at the edge of the window. The abrupt truncation introduces discontinuities that are not present in the original signal, and these discontinuities create additional frequency components in the spectrum that were not originally there. 
Another key limitation of STFT, and by extension a sliding window Fourier transform, is the loss of phase information when a magnitude spectrogram is used. The magnitude spectrogram of STFT analyses the magnitude of the signal’s frequency components within localised time windows. Although this provides valuable insights into the spectral content over time, the phase relationship between these components is not captured. This limitation can lead to situations where two distinct signals with different phase relationships can produce identical spectrograms (magnitude-based time–frequency representations) using STFT. For instance, consider two sine waves with the same frequency and amplitude but differing in phase shift. In STFT analysis, both signals will exhibit the same frequency component at a specific time instant, resulting in identical spectral representations despite their inherent phase differences. It is important to note that the limitations mentioned above apply to STFT and similar magnitude-based time–frequency analysis techniques. In contrast, the proposed multisensor-BPF-Signal2Image-CNN2D method operates in the time domain. While it does not explicitly utilise phase information, it leverages the inherent time-series characteristics of the sensor data to achieve an accurate fault diagnosis.

3.2. Variable Velocity of Fan Imbalance

This dataset expands on the constant-velocity fan imbalance dataset by incorporating variations in the fan speed. The objective remains to evaluate the effectiveness of the proposed method in identifying machine faults using an IMU sensor for vibration analysis, but under more realistic operating conditions with fluctuating speeds in 10% increments, ranging from 10% to 100% of its nominal speed. To capture the dynamic nature of variable-speed operation, with a nominal fan speed of 3000 RPM (50 Hz), the sampling frequency was increased to 2000 Hz compared to the constant-velocity dataset (200 Hz). Each data segment comprises 1024 samples for each of the three gyroscope and accelerometer axes (X, Y, Z), resulting in a total of 6 × 1024 data points per segment. This represents a time window of 512 ms (1024 samples/axis · 1/2000 Hz). As illustrated in Figure 4 and Figure 5, each row in the data matrix corresponds to a specific fan speed (10%, 20%, …, 100%) while the columns represent the different operational states (fault 1—single paper clip, fault 2—two paper clips, normal). This allows for analysing the impact of both varying speeds and introduced faults on the IMU 6DoF data. The first usage of this variable-speed fan unbalanced dataset was shown in [32], where the data were processed using a signal-to-image conversion and fusion of different images, such as a greyscale grid image and RGB image aligned by axis or sensor type. This dataset provides a more comprehensive picture of machine health by capturing the effects of variable-speed operation on the vibration signatures associated with normal and faulty conditions. Analysing these data will be crucial for evaluating the ability of the proposed method to differentiate between these states in realistic operating scenarios where the fan speed is not constant.

3.3. Publicly Available Dataset of Bearing Data Center of Case Western Reserve University

This section introduces a third dataset that is used to evaluate the effectiveness of the proposed method in a broader context. The dataset originates from the well-established Bearing Data Center at Case Western Reserve University (CWRU) [36] and is primarily focused on the detection of bearing failures in machinery. All data files and their description are available on the CWRU website [36]; that information was used to prepare the description in the remainder of this paragraph. Single-axis accelerometers were employed to capture vibration data. These sensors were securely attached to the motor housing at designated locations using magnetic bases. The specific placement involved positioning the accelerometers at the 12 o’clock position on both the drive end and the fan end of the housing. In certain experimental configurations, an additional single-axis accelerometer was mounted on the motor’s supporting base plate. All collected vibration data are stored in the MATLAB Level 5 MAT-file (*.mat) format (MathWorks, Natick, MA, USA). The primary sampling rate adopted for digital data acquisition was 12 kHz. However, for experiments focussing on drive-end bearing faults, a higher sampling rate of 48 kHz was utilised to ensure adequate resolution of the high-frequency vibration components associated with such faults.
In the paper, a subset of data sampled at 12 kHz was used. Time-domain signals, as exemplified by 1024 samples per channel in Figure 6, present sixteen different conditions under four different motor loads. Each line represents data from a specific accelerometer (red: drive end, green: fan end, blue: base) for a particular operating condition. The rows correspond to different motor loads (0 HP to 3 HP), and the columns represent various bearing fault conditions (inner race (IR) fault, ball (B) fault, outer race (OR) fault, at different diameters (0.007″, 0.014″, 0.021″, and 0.028″) and positions (@6—centred, @3—orthogonal, @12—opposite)). This dataset focusses on capturing the intricacies of bearing failures, which often involve high-frequency vibrations. The sampling frequency of 12 kHz ensures that these rapid vibrations are accurately captured in the data. The time window represented by 1024 samples in the dataset is approximately 0.0853 s (1024 samples/channel · 1/12,000 Hz) or 85.33 milliseconds. From this subset, the classes normal, B028, and IR028, which do not contain data from the base accelerometer (BA), were excluded. Finally, thirteen different classes were used under four different motor loads to validate the proposed method.
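The segmentation of a long vibration record into 1024-sample diagnosis windows can be sketched as follows. This Python illustration uses a synthetic signal in place of a real CWRU record (which, per the dataset description above, would be loaded from a MAT-file, e.g. with `scipy.io.loadmat`); the record length is an arbitrary assumption.

```python
import numpy as np

# A synthetic 12 kHz record stands in for a real CWRU accelerometer signal;
# the record is cut into non-overlapping 1024-sample diagnosis windows.
fs, n = 12_000, 1024
record = np.random.randn(120_000)                       # ~10 s mock signal
windows = record[: len(record) // n * n].reshape(-1, n) # drop the tail remainder
print(windows.shape, round(n / fs * 1000, 2), "ms")     # (117, 1024), 85.33 ms each
```

Each row of `windows` then corresponds to one 85.33 ms analysis window, matching the time window quoted in the text.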

4. Results of Multisensor-BPF-Signal2Image-CNN2D Method

This section presents the results obtained by applying the multisensor-BPF-Signal2Image-CNN2D method, introduced in Section 2, to the datasets described in Section 3. The analysis focusses on evaluating the effectiveness of the proposed method in identifying machine faults under various operating conditions. The number of samples in a single time window used for fault diagnosis of the datasets shown in Section 3.1, Section 3.2 and Section 3.3 should be interpreted in the context of the sampling frequency, which differs between the datasets. First, the constant-speed fan dataset is used for fast evaluation of the proposed method. Therefore, the time window is longer compared to the subsequent datasets, equal to 1280 ms (256 samples per sensor, which are reshaped into a 16 × 16 matrix). In the next stage, the proposed method is verified at a shorter time window of 512 ms (1024 samples per sensor, which are reshaped into a 32 × 32 matrix). The time window is shorter; however, more samples are used because of the higher sampling frequency of 2000 Hz. The shorter time window is a challenge because it captures less information, which increases the complexity of the fault diagnosis task. On the other hand, a short time window allows the fault detection time to be reduced. Therefore, the time window is gradually decreased across the datasets to 1280 ms, 512 ms, and 85.33 ms (1024 samples per sensor, reshaped into a 32 × 32 matrix) for the constant fan velocity, variable fan velocity, and Bearing Data Center of CWRU, respectively, to validate the proposed multisensor-BPF-Signal2Image-CNN2D method. This shows that the proposed method can be used for a wide range of sampling frequencies and time window durations.
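The window durations quoted above follow directly from the samples-per-window and sampling-frequency pairs of the three datasets, as the small Python check below shows.

```python
# Window duration in ms = samples per channel / sampling frequency * 1000.
datasets = {
    "constant-velocity fan": (256, 200),
    "variable-velocity fan": (1024, 2000),
    "CWRU bearing": (1024, 12_000),
}
durations = {name: s / fs * 1000 for name, (s, fs) in datasets.items()}
for name, ms in durations.items():
    print(f"{name}: {ms:.2f} ms")     # 1280.00, 512.00, 85.33
```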

4.1. Constant-Velocity Fan Imbalance

The multisensor-BPF-Signal2Image-CNN2D method was first evaluated using the constant-velocity fan imbalance dataset (Section 3.1). This dataset comprised data collected from a controlled experiment in which an imbalance was introduced to a fan blade at a constant rotational speed. The analysis assessed the method’s ability to differentiate between the idle state (no vibration), normal operation (healthy fan), and fault condition (imbalanced blade). Nine bandpass filters were chosen based on an empirical evaluation to effectively capture the relevant frequency components associated with the targeted machine faults within the vibration data. The bandwidths of these filters were designed to be equal, effectively dividing the entire frequency spectrum into nine non-overlapping sections, as illustrated in Figure 7. The specific centre frequencies fc1 to fc9 of these filters are presented in Figure 7. According to sampling theory, a sampling frequency of 200 Hz ensures that all frequency components below the Nyquist frequency (100 Hz in this case) are captured.
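Under this equal-width division, the band edges and centre frequencies follow directly from the Nyquist frequency. The NumPy sketch below assumes the uniform scheme described above; the exact centre frequencies shown in Figure 7 are taken to match it.

```python
import numpy as np

# Nine equal-width bands covering 0 Hz to the 100 Hz Nyquist frequency
# (fs = 200 Hz); centre frequencies correspond to fc1...fc9 in Figure 7.
fs, n = 200.0, 9
edges = np.linspace(0.0, fs / 2, n + 1)    # band edges, width 100/9 Hz each
centres = (edges[:-1] + edges[1:]) / 2     # midpoints of each band
print(np.round(centres, 2))                # 5.56, 16.67, ..., 94.44 Hz
```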
Applying the nine bandpass filters divides the original vibration signal into nine distinct frequency bands. Consequently, instead of a single RGB image for fault diagnosis, the proposed method generates a set of nine RGB images, one per filter band for a given sensor type. The IMU 6DoF sensor incorporates two sensor modalities, accelerometers and gyroscopes, each comprising three axes (X, Y, and Z), resulting in a total of six individual sensor signals (2 sensor types × 3 axes per sensor). In total, for the IMU 6DoF, 54 greyscale images are produced (2 sensor types × 3 axes per sensor × 9 filters). For each sensor type and each of the nine frequency bands, an RGB image is generated by transforming the corresponding three-axis signal into an RGB image; this transformation uses the three colour channels (red, green, blue) to represent the three spatial axes (X, Y, Z) within a single image. The eighteen individual RGB images (9 bands per sensor type × 2 sensor types) are then combined into a single, larger RGB image. This final image provides a comprehensive representation of the vibration data across multiple frequency bands and sensor modalities, facilitating enhanced fault diagnosis. The process of combining these images is illustrated in Figure 8.
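The axis-to-channel mapping and the tiling of the eighteen subimages can be sketched as follows; the specific tile layout (nine band columns, two sensor-type rows) is an assumption for illustration:

```python
import numpy as np

def xyz_to_rgb(x_img, y_img, z_img):
    """Stack three single-axis grayscale images (uint8) into one RGB image:
    red <- X axis, green <- Y axis, blue <- Z axis."""
    return np.stack([x_img, y_img, z_img], axis=-1)

def tile_bands(rgb_per_type):
    """rgb_per_type: dict {sensor_type: list of 9 RGB band images}.
    Concatenate the nine band images horizontally per sensor type, then
    stack the sensor-type rows vertically into one large RGB image."""
    rows = [np.concatenate(bands, axis=1) for bands in rgb_per_type.values()]
    return np.concatenate(rows, axis=0)

side, n_bands = 16, 9
rng = np.random.default_rng(1)
rgb_per_type = {
    sensor: [xyz_to_rgb(*rng.integers(0, 256, (3, side, side), dtype=np.uint8))
             for _ in range(n_bands)]
    for sensor in ("accelerometer", "gyroscope")
}
big = tile_bands(rgb_per_type)
print(big.shape)  # (32, 144, 3): 2 sensor-type rows x 9 band columns
```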
Figure 9 presents the exemplar RGB images generated by the multisensor-BPF-Signal2Image-CNN2D method (Section 2) for the constant-velocity fan imbalance dataset (Section 3.1). These images effectively visualise the distribution of vibration energy across different frequency bands and sensor modalities. By analysing these images, researchers can gain insight into the characteristic spectral signatures associated with healthy and faulty fan operating conditions, aiding in the development of robust fault classification algorithms.
The training progress of the CNN2D component within the multisensor-BPF-Signal2Image-CNN2D method (Section 2) when applied to the dataset described in Section 3.1 is shown in Figure 10. This visualisation tracks metrics such as loss and accuracy over training iterations, providing insights into the model’s learning behaviour.
In machine learning, classification tasks involve assigning an observation (data instance) to a specific class label based on its features. For example, a classification model might classify the operational state of a fan as fault, normal, or idle based on the characteristics of the sensor data. A confusion matrix is a valuable tool for evaluating the performance of a multiclass classification model. It is a table whose rows represent the actual (true) classes of the observations and whose columns represent the classes predicted by the model; each entry corresponds to the number of instances that fall into a specific combination of true class and predicted class. Figure 11 complements the training progress by presenting the confusion matrix of the trained model on the same dataset (Section 3.1). The confusion matrix offers a detailed breakdown of the classification performance, showing how many data points in each class were correctly classified (diagonal elements) and how many were misclassified into other classes (off-diagonal elements). By examining this matrix, one can identify potential class imbalances or weaknesses in the model’s ability to differentiate between specific classes within the dataset.
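A confusion matrix of this form can be computed directly; a minimal NumPy sketch with the three example classes (the class ordering is an arbitrary choice for illustration):

```python
import numpy as np

CLASSES = ["fault", "normal", "idle"]

def confusion_matrix(y_true, y_pred, classes=CLASSES):
    """Rows: actual classes; columns: predicted classes; each cell counts
    observations with that (true, predicted) combination."""
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    return cm

y_true = ["fault", "fault", "normal", "idle", "idle", "normal"]
y_pred = ["fault", "normal", "normal", "idle", "idle", "normal"]
cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()  # diagonal = correct classifications
print(cm)
print(accuracy)  # 5 of 6 correct
```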
To gain deeper insights into the decision-making process of the proposed multisensor-BPF-Signal2Image-CNN2D method (Section 2), Figure 12 presents a comprehensive interpretability analysis for the dataset described in Section 3.1. This figure employs various visualisation techniques to unveil the reasoning of the model. Each row corresponds to a specific fault class (fault, idle, normal), allowing a class-wise analysis of the interpretability methods. The first column shows the original RGB image generated by the method, representing the vibration data for a particular classification instance. The second column presents gradient-weighted class activation mapping (Grad-CAM) [37], which provides a visual representation of the model’s attention on specific regions within the input image, highlighting the areas that contribute most to the classification decision. The third column contains occlusion sensitivity, presented in [38], a technique that analyses how the model’s prediction changes when parts of the input image are masked (occluded); the resulting visualisation reveals which image regions are most crucial for accurate classification. The last column presents local interpretable model-agnostic explanations (LIME) [39], which creates a local, interpretable explanation for a specific prediction; the output is a simplified image that provides insight into the model’s reasoning for that particular classification instance.
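Of these techniques, occlusion sensitivity is the most straightforward to sketch: slide a mask over the image and record how much the class score drops. The patch size, stride, fill value, and the dummy scoring function below are illustrative assumptions standing in for the trained CNN2D:

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=4, stride=4, fill=0.0):
    """Return a map of score drops: larger values mark regions whose
    occlusion hurts the class score most, i.e., regions the model relies on."""
    h, w = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, r in enumerate(range(0, h - patch + 1, stride)):
        for j, c in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = fill  # mask one patch
            heat[i, j] = base - score_fn(occluded)     # score drop
    return heat

# Dummy "model": score proportional to the mean of the top-left quadrant,
# so occlusion there should dominate the sensitivity map.
def score_fn(img):
    return float(img[:8, :8].mean())

image = np.ones((16, 16))
heat = occlusion_sensitivity(image, score_fn)
print(heat.shape)  # (4, 4) occlusion map
```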

4.2. Variable-Velocity Fan Imbalance

The multisensor-BPF-Signal2Image-CNN2D method was subsequently evaluated using the variable-velocity fan imbalance dataset described in Section 3.2. This dataset incorporated data collected at various fan speeds. The analysis aimed to determine the method’s robustness in identifying faults across different operational speeds. To visualise the impact of varying speeds on the generated image representations, Figure 13 presents the exemplar RGB images obtained from the variable-velocity fan imbalance dataset. By comparing these images (Figure 13) with those from the constant-velocity dataset (Figure 9), it can be observed how the distribution of vibration energy across frequency bands changes with different fan speeds. This visual comparison can provide insight into the model’s ability to adapt to these variations and maintain accurate fault classification performance.
Figure 14 comprehensively presents the training and testing performance of the multisensor-BPF-Signal2Image-CNN2D method (Section 2) when applied to the variable-velocity fan imbalance dataset (Section 3.2). The figure incorporates three key elements: the training progress (left), which depicts the model’s learning behaviour during the training phase; the training confusion matrix (middle); and the testing confusion matrix (right), which complements the training evaluation by presenting the model’s performance on unseen testing data from the variable-velocity dataset. Ideally, the testing confusion matrix should closely resemble the training confusion matrix, indicating that the model generalises well to unseen data and maintains its effectiveness under varying operational speeds.

4.3. Case Western Reserve University Bearing Data Center Results

Finally, this section presents the results of applying the multisensor-BPF-Signal2Image-CNN2D method to the CWRU Bearing Data Center dataset (Section 3.3). To assess the generalisability of the method (Section 2) beyond the controlled fan imbalance scenarios, the model was evaluated on vibration data covering various types of bearing faults and operating conditions, providing a more comprehensive test of its ability to identify different machine faults. Figure 15 presents exemplar RGB images generated from the CWRU dataset. Comparing these images with those from the fan imbalance datasets (Figure 9 and Figure 13) reveals the distinct characteristics of the vibration data associated with bearing faults. This visual comparison provides insight into the model’s ability to differentiate between fault types based on the frequency and spatial information encoded within the multichannel images.
Figure 16 delves into the generalisability of the multisensor-BPF-Signal2Image-CNN2D method (Section 2) by presenting its training and testing performance in the CWRU Bearing Data Center dataset (Section 3.3). This figure mirrors the comprehensive evaluation structure employed in Figure 14.

5. Discussion

Choosing the appropriate sampling frequency depends on the specific application and the desired level of detail in the captured data. Higher sampling frequencies capture a wider range of frequencies, allowing the analysis of faster-occurring phenomena such as bearing faults; however, they also increase the data volume and computational demands. Lower sampling frequencies are more efficient in terms of data storage and processing but may miss crucial high-frequency details. The selection of datasets for evaluating the proposed method reflects this trade-off: the fan imbalance datasets, with lower sampling frequencies, served as a good starting point for controlled evaluation, while the CWRU dataset, with a higher sampling frequency, provides a more rigorous test for real-world bearing fault detection.
The proposed multisensor-BPF-Signal2Image-CNN2D method was compared with reference methods that also use a CNN2D (IMU6DoF-Time2GrayscaleGrid-CNN, IMU6DoF-Time2RGBbyType-CNN, and IMU6DoF-Time2RGBbyAxis-CNN), which allows a like-for-like comparison among deep learning methods. The author’s previous investigations, published in [13], demonstrated the effectiveness of a CNN2D compared to baseline methods such as the decision tree, Naive Bayes, SVM (support vector machine), and KNN (k-nearest neighbours). Table 1 summarises the training progress of the methods for vibration-based fault diagnosis using the dataset described in Section 3.1. All methods employed 80% (6144 images) of the total dataset (7680 images) for training, with the images selected at random. An 80/20 split between training and validation of the generated images is common practice in machine learning: it balances providing the model with sufficient training data to learn effectively against reserving a portion of the data for an unbiased evaluation of its performance. The training iterations column reflects the number of iterations required for each method to achieve a specific level of performance. The proposed multisensor-BPF-Signal2Image-CNN2D method achieved the desired validation accuracy in the fewest training iterations (96). In contrast, the reference methods [32] (IMU6DoF-Time2GrayscaleGrid, IMU6DoF-Time2RGBbyType, and IMU6DoF-Time2RGBbyAxis) required a significantly higher number of iterations (7200) to reach a similar level of accuracy. The column reporting the iteration with a validation accuracy greater than 90% indicates when each method first surpassed that threshold: the proposed method reached this milestone earlier (20th iteration) than the reference methods (between the 60th and 150th iterations). The last column presents the final validation accuracy achieved by each method after training.
All methods achieved a high validation accuracy (above 99.93%). Notably, two reference methods (IMU6DoF-Time2RGBbyType and IMU6DoF-Time2RGBbyAxis) achieved a perfect validation accuracy of 100% but required considerably longer training. The methods were trained multiple times, and the observed numbers of training iterations are repeatable.
The results in Table 1 suggest that the proposed multisensor-BPF-Signal2Image-CNN2D method learns faster than the reference methods, as indicated by the fewer training iterations required to reach a high validation accuracy. However, it is important to acknowledge that all methods achieved an excellent final validation accuracy, suggesting their effectiveness for fault diagnosis on the given dataset. Further research is necessary to assess the generalisability of these findings to real-world scenarios with variations in operating conditions, sensor noise, and machine types (Technology Readiness Level (TRL) 4: Technology Demonstration). To achieve higher TRLs, such as TRL 6 (System/Subsystem Model or Prototype Demonstration) or TRL 7 (System Prototype Demonstration in Operational Environment), additional research efforts are recommended; at these TRLs, researchers can gain deeper insight into the practical applicability of the proposed method and its potential for real-world fault diagnosis in industrial machinery.
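The random 80/20 train/validation split used throughout these comparisons can be sketched as follows; the seed and function name are illustrative:

```python
import numpy as np

def train_val_split(n_images, train_frac=0.8, seed=42):
    """Randomly permute image indices and split them into training and
    validation sets (returns indices, not the images themselves)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_images)
    n_train = int(round(train_frac * n_images))
    return perm[:n_train], perm[n_train:]

train_idx, val_idx = train_val_split(7680)  # constant-velocity fan dataset
print(len(train_idx), len(val_idx))         # 6144 1536
```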
Additional vibrations, for example, from a loose machine foundation or from loose safety covers, can introduce noise into the sensor data, potentially masking the subtle signatures of actual faults within the machine. This can lead to misdiagnosis or to faults being missed altogether. The BPF stage of the method aims to isolate frequency bands relevant to specific fault types; however, additional vibrations may overlap with these targeted bands, making it more difficult for the CNN2D to distinguish fault signatures from extraneous noise and thereby decreasing classification accuracy. A key strength of the multisensor-BPF-Signal2Image-CNN2D method is its ability to leverage data from multiple sensors. By incorporating data from sensors less susceptible to extraneous vibrations (e.g., accelerometers mounted directly on the machine body rather than on loose covers), the method can potentially improve its ability to isolate the machine’s internal condition. In some cases, additional vibrations can serve as early indicators of developing problems that could lead to more significant failures; for example, vibrations from a loose foundation may precede bearing wear. Although such additional vibrations may initially complicate the diagnosis of the primary fault, the information they provide can be valuable for preventive maintenance strategies. This opens further research directions for raising the TRL of the method when extraneous vibrations are considered in industrial field applications.
Furthermore, the multisensor-BPF-Signal2Image-CNN2D method was evaluated using the variable-velocity fan imbalance dataset described in Section 3.2. The RGB image generation process resulted in a total of 36,900 images (3 classes × 10 velocities × 1230 images per class at a given velocity) for the multisensor-BPF-Signal2Image-CNN2D method. The number of images and the 80% split ratio were the same for the proposed and reference methods. The CWRU Bearing Data Center dataset employed in this research is a rich resource for machine fault diagnosis, particularly for bearing fault types. Applying the multisensor-BPF-Signal2Image-CNN2D method to the CWRU Bearing Data Center dataset involved the generation of a significant amount of training data: a total of 34,788 images (13 fault classes × 4 loads × 669 images per class at a given load). This substantial dataset provides a rich foundation for training the CNN2D model and facilitates effective fault classification. Following common practice in machine learning, 80% of the generated images (approximately 27,830 images) were designated for training the CNN2D model, and the remaining 20% (approximately 6958 images) were reserved for testing and validation. This split ensures that the model is trained on a representative portion of the data while a separate set is kept for unbiased evaluation of its performance. The number of images and the training split ratio were the same for the proposed and reference methods. The difference in training efficiency observed in Table 1 for the constant-velocity fan imbalance dataset (Section 3.1) is visualised in Figure 17 (left) for the variable-velocity fan imbalance dataset (Section 3.2).
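The image counts quoted above follow directly from the class/load/velocity combinatorics; a quick arithmetic check (the rounding of the 80% split to whole images is an assumption):

```python
# Variable-velocity fan imbalance: 3 classes x 10 velocities x 1230 images
fan_total = 3 * 10 * 1230
assert fan_total == 36_900

# CWRU Bearing Data Center: 13 fault classes x 4 loads x 669 images
cwru_total = 13 * 4 * 669
assert cwru_total == 34_788

# 80/20 split of the CWRU images
train = round(0.8 * cwru_total)  # ~27,830 training images
val = cwru_total - train         # ~6,958 validation images
print(train, val)
```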
Furthermore, a comparison of the proposed multisensor-BPF-Signal2Image-CNN2D method with the reference method adapted to the CWRU dataset, named accelerometers-Time2RGBbyType-CNN, is shown in Figure 17 (right). Figure 17 presents the training progress of the proposed method on two datasets: the variable-velocity fan imbalance dataset (Section 3.2, left side) and the CWRU Bearing Data Center dataset (Section 3.3, right side). The figure highlights that the reference methods required a significantly higher number of iterations to achieve a similar level of accuracy on both the variable-velocity fan imbalance dataset and the CWRU Bearing Data Center dataset. This observation aligns with the finding that the proposed method trains faster than the reference methods.

6. Conclusions

This research introduces the multisensor-BPF-Signal2Image-CNN2D method, a novel approach to machine fault diagnosis using multiple sensor data. The aim is to address the limitations of existing image-based diagnostic methods that rely solely on converting time-series data to greyscale or RGB images. The manuscript details the technical aspects of the method, evaluates its performance on several datasets, presents the results, analyses the interpretability of the CNN2D, and discusses the results in the context of existing approaches.
The proposed multisensor-BPF-Signal2Image-CNN2D method offers several advantages for data-driven machine fault diagnosis. By incorporating data from multiple sensors, the method takes advantage of a richer set of features that can be crucial for accurate fault identification; different sensors may capture complementary information about the machine’s health, providing a more comprehensive picture of its operational state. Bandpass filtering allows the system to focus on specific frequency bands that are potentially associated with particular fault types, and this targeted analysis can improve the method’s sensitivity to subtle fault signatures within the sensor data. Transforming the sensor data into 2D greyscale images and aggregating them into RGB images enables the application of a powerful CNN2D for fault classification; CNN2Ds have demonstrated remarkable capabilities in image recognition tasks, and this method effectively translates time-series sensor data into a format suitable for their exploitation. These combined benefits (enriched feature extraction, frequency-specific analysis, and effective use of a CNN2D) contribute to a more robust and accurate machine fault diagnosis system compared to approaches that rely on individual sensors or cannot leverage the power of image-based classification. Beyond these advantages, the multisensor-BPF-Signal2Image-CNN2D method exhibits demonstrably faster training than the reference methods, as evidenced by its performance on three distinct datasets.
The current stage of development corresponds to Technology Readiness Level (TRL) 4 (Technology Demonstration). Future work should aim for higher TRLs, such as TRL 6 (System/Subsystem Model or Prototype Demonstration) or TRL 7 (System Prototype Demonstration in an Operational Environment). This will involve developing a deployable prototype system and testing it in real-world industrial settings. Further research is also necessary to assess the generalisability of these findings to real-world scenarios at higher TRL levels, since industrial machinery operates under diverse conditions, with variations in operating speeds, sensor noise characteristics, and machine types.

Funding

This research was funded by the Poznan University of Technology, grant number 0214/SBAD/0249.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Łuczak, D.; Brock, S.; Siembab, K. Cloud Based Fault Diagnosis by Convolutional Neural Network as Time–Frequency RGB Image Recognition of Industrial Machine Vibration with Internet of Things Connectivity. Sensors 2023, 23, 3755. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, H.-Y.; Lee, C.-H. Vibration Signals Analysis by Explainable Artificial Intelligence (XAI) Approach: Application on Bearing Faults Diagnosis. IEEE Access 2020, 8, 134246–134256. [Google Scholar] [CrossRef]
  3. Wang, Y.; Yang, M.; Li, Y.; Xu, Z.; Wang, J.; Fang, X. A Multi-Input and Multi-Task Convolutional Neural Network for Fault Diagnosis Based on Bearing Vibration Signal. IEEE Sens. J. 2021, 21, 10946–10956. [Google Scholar] [CrossRef]
  4. Rauber, T.W.; da Silva Loca, A.L.; Boldt, F.d.A.; Rodrigues, A.L.; Varejão, F.M. An Experimental Methodology to Evaluate Machine Learning Methods for Fault Diagnosis Based on Vibration Signals. Expert Syst. Appl. 2021, 167, 114022. [Google Scholar] [CrossRef]
  5. Meyer, A. Vibration Fault Diagnosis in Wind Turbines Based on Automated Feature Learning. Energies 2022, 15, 1514. [Google Scholar] [CrossRef]
  6. Ruan, D.; Wang, J.; Yan, J.; Gühmann, C. CNN Parameter Design Based on Fault Signal Analysis and Its Application in Bearing Fault Diagnosis. Adv. Eng. Inform. 2023, 55, 101877. [Google Scholar] [CrossRef]
  7. Li, Z.; Zhang, Y.; Abu-Siada, A.; Chen, X.; Li, Z.; Xu, Y.; Zhang, L.; Tong, Y. Fault Diagnosis of Transformer Windings Based on Decision Tree and Fully Connected Neural Network. Energies 2021, 14, 1531. [Google Scholar] [CrossRef]
  8. Gao, S.; Xu, L.; Zhang, Y.; Pei, Z. Rolling Bearing Fault Diagnosis Based on SSA Optimized Self-Adaptive DBN. ISA Trans. 2022, 128, 485–502. [Google Scholar] [CrossRef] [PubMed]
  9. Wang, C.-S.; Kao, I.-H.; Perng, J.-W. Fault Diagnosis and Fault Frequency Determination of Permanent Magnet Synchronous Motor Based on Deep Learning. Sensors 2021, 21, 3608. [Google Scholar] [CrossRef]
  10. Feng, Z.; Gao, A.; Li, K.; Ma, H. Planetary Gearbox Fault Diagnosis via Rotary Encoder Signal Analysis. Mech. Syst. Signal Process. 2021, 149, 107325. [Google Scholar] [CrossRef]
  11. Ma, J.; Li, C.; Zhang, G. Rolling Bearing Fault Diagnosis Based on Deep Learning and Autoencoder Information Fusion. Symmetry 2022, 14, 13. [Google Scholar] [CrossRef]
  12. Huang, W.; Du, J.; Hua, W.; Lu, W.; Bi, K.; Zhu, Y.; Fan, Q. Current-Based Open-Circuit Fault Diagnosis for PMSM Drives with Model Predictive Control. IEEE Trans. Power Electron. 2021, 36, 10695–10704. [Google Scholar] [CrossRef]
  13. Łuczak, D.; Brock, S.; Siembab, K. Fault Detection and Localisation of a Three-Phase Inverter with Permanent Magnet Synchronous Motor Load Using a Convolutional Neural Network. Actuators 2023, 12, 125. [Google Scholar] [CrossRef]
  14. Jiang, L.; Deng, Z.; Tang, X.; Hu, L.; Lin, X.; Hu, X. Data-Driven Fault Diagnosis and Thermal Runaway Warning for Battery Packs Using Real-World Vehicle Data. Energy 2021, 234, 121266. [Google Scholar] [CrossRef]
  15. Chang, C.; Zhou, X.; Jiang, J.; Gao, Y.; Jiang, Y.; Wu, T. Electric Vehicle Battery Pack Micro-Short Circuit Fault Diagnosis Based on Charging Voltage Ranking Evolution. J. Power Sources 2022, 542, 231733. [Google Scholar] [CrossRef]
  16. Wang, Z.; Tian, B.; Qiao, W.; Qu, L. Real-Time Aging Monitoring for IGBT Modules Using Case Temperature. IEEE Trans. Ind. Electron. 2016, 63, 1168–1178. [Google Scholar] [CrossRef]
  17. Dhiman, H.S.; Deb, D.; Muyeen, S.M.; Kamwa, I. Wind Turbine Gearbox Anomaly Detection Based on Adaptive Threshold and Twin Support Vector Machines. IEEE Trans. Energy Convers. 2021, 36, 3462–3469. [Google Scholar] [CrossRef]
  18. Cao, Y.; Sun, Y.; Xie, G.; Li, P. A Sound-Based Fault Diagnosis Method for Railway Point Machines Based on Two-Stage Feature Selection Strategy and Ensemble Classifier. IEEE Trans. Intell. Transp. Syst. 2022, 23, 12074–12083. [Google Scholar] [CrossRef]
  19. Shiri, H.; Wodecki, J.; Ziętek, B.; Zimroz, R. Inspection Robotic UGV Platform and the Procedure for an Acoustic Signal-Based Fault Detection in Belt Conveyor Idler. Energies 2021, 14, 7646. [Google Scholar] [CrossRef]
  20. Karabacak, Y.E.; Gürsel Özmen, N.; Gümüşel, L. Intelligent Worm Gearbox Fault Diagnosis under Various Working Conditions Using Vibration, Sound and Thermal Features. Appl. Acoust. 2022, 186, 108463. [Google Scholar] [CrossRef]
  21. Maruyama, T.; Maeda, M.; Nakano, K. Lubrication Condition Monitoring of Practical Ball Bearings by Electrical Impedance Method. Tribol. Online 2019, 14, 327–338. [Google Scholar] [CrossRef]
  22. Wakiru, J.M.; Pintelon, L.; Muchiri, P.N.; Chemweno, P.K. A Review on Lubricant Condition Monitoring Information Analysis for Maintenance Decision Support. Mech. Syst. Signal Process. 2019, 118, 108–132. [Google Scholar] [CrossRef]
  23. Zhou, Q.; Chen, R.; Huang, B.; Liu, C.; Yu, J.; Yu, X. An Automatic Surface Defect Inspection System for Automobiles Using Machine Vision Methods. Sensors 2019, 19, 644. [Google Scholar] [CrossRef] [PubMed]
  24. Yang, L.; Fan, J.; Liu, Y.; Li, E.; Peng, J.; Liang, Z. A Review on State-of-the-Art Power Line Inspection Techniques. IEEE Trans. Instrum. Meas. 2020, 69, 9350–9365. [Google Scholar] [CrossRef]
  25. Davari, N.; Akbarizadeh, G.; Mashhour, E. Intelligent Diagnosis of Incipient Fault in Power Distribution Lines Based on Corona Detection in UV-Visible Videos. IEEE Trans. Power Deliv. 2021, 36, 3640–3648. [Google Scholar] [CrossRef]
  26. Kim, S.; Kim, D.; Jeong, S.; Ham, J.-W.; Lee, J.-K.; Oh, K.-Y. Fault Diagnosis of Power Transmission Lines Using a UAV-Mounted Smart Inspection System. IEEE Access 2020, 8, 149999–150009. [Google Scholar] [CrossRef]
  27. Ullah, Z.; Lodhi, B.A.; Hur, J. Detection and Identification of Demagnetization and Bearing Faults in PMSM Using Transfer Learning-Based VGG. Energies 2020, 13, 3834. [Google Scholar] [CrossRef]
  28. Long, H.; Xu, S.; Gu, W. An Abnormal Wind Turbine Data Cleaning Algorithm Based on Color Space Conversion and Image Feature Detection. Appl. Energy 2022, 311, 118594. [Google Scholar] [CrossRef]
  29. Xie, T.; Huang, X.; Choi, S.-K. Intelligent Mechanical Fault Diagnosis Using Multisensor Fusion and Convolution Neural Network. IEEE Trans. Ind. Inform. 2022, 18, 3213–3223. [Google Scholar] [CrossRef]
  30. Zhou, Y.; Wang, H.; Wang, G.; Kumar, A.; Sun, W.; Xiang, J. Semi-Supervised Multiscale Permutation Entropy-Enhanced Contrastive Learning for Fault Diagnosis of Rotating Machinery. IEEE Trans. Instrum. Meas. 2023, 72, 3525610. [Google Scholar] [CrossRef]
  31. Xu, M.; Gao, J.; Zhang, Z.; Wang, H. Bearing-Fault Diagnosis with Signal-to-RGB Image Mapping and Multichannel Multiscale Convolutional Neural Network. Entropy 2022, 24, 1569. [Google Scholar] [CrossRef] [PubMed]
  32. Łuczak, D. Machine Fault Diagnosis through Vibration Analysis: Time Series Conversion to Grayscale and RGB Images for Recognition via Convolutional Neural Networks. Energies 2024, 17, 1998. [Google Scholar] [CrossRef]
  33. Luczak, D. Delay of Digital Filter Tuned for Mechanical Resonant Frequency Reduction in Multi-Mass Mechanical Systems in Electrical Direct Drive. In Proceedings of the 2015 IEEE European Modelling Symposium (EMS), Madrid, Spain, 6–8 October 2015; pp. 195–200. [Google Scholar] [CrossRef]
  34. Łuczak, D. Data-Driven Machine Fault Diagnosis of Multisensor Vibration Data Using Synchrosqueezed Transform and Time-Frequency Image Recognition with Convolutional Neural Network. Electronics 2024, 13, 2411. [Google Scholar] [CrossRef]
  35. Łuczak, D. Machine Fault Diagnosis through Vibration Analysis: Continuous Wavelet Transform with Complex Morlet Wavelet and Time–Frequency RGB Image Recognition via Convolutional Neural Network. Electronics 2024, 13, 452. [Google Scholar] [CrossRef]
  36. Case Western Reserve University Bearing Data Center Website. Available online: https://engineering.case.edu/bearingdatacenter/welcome (accessed on 9 May 2024).
  37. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359. [Google Scholar] [CrossRef]
  38. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Proceedings of the Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 818–833. [Google Scholar]
  39. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1135–1144. [Google Scholar]
Figure 1. Proposed multisensor-BPF-Signal2Image-CNN2D method.
Figure 2. Microcontroller-based demonstrator of machine fault diagnosis.
Figure 3. IMU 6DoF sensor data time-series window with 256 samples, capturing temporal measurements from the three axes (X, Y, and Z).
Figure 4. Accelerometer time-series data window from the IMU 6DoF sensor, where the window displays 1024 measurements across the three axes (X, Y, and Z).
Figure 5. Gyroscope time-series data window from the IMU 6DoF sensor, where the window displays 1024 measurements across the three axes (X—red, Y—green, and Z—blue).
Figure 6. Vibration time-series data window from Bearing Data Center drive end bearing fault data at 12 kHz, where data channels are represented by colour: red (DE) for drive-end accelerometer, green (FE) for fan-end accelerometer, and blue (BA) for base accelerometer.
Figure 7. Nine bandpass filters for dataset sampled at 200 Hz.
Figure 8. Aggregation of RGB subimages into one large RGB image of nine bandpass filters.
Figure 9. The RGB images for the constant-velocity fan imbalance (see Section 3.1) dataset of the multisensor-BPF-Signal2Image-CNN2D method described in Section 2.
Figure 10. The training progress of the CNN2D of multisensor-BPF-Signal2Image-CNN2D method described in Section 2 with the dataset shown in Section 3.1.
Figure 11. The matrix of confusion after training multisensor-BPF-Signal2Image-CNN2D with the dataset shown in Section 3.1 (training—(left); testing—(right)).
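The confusion matrices in Figure 11 tally predicted versus true class labels; a minimal sketch of how such a matrix, and the accuracy along its diagonal, are computed:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels for a 3-class fault classification problem
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
cm = confusion_matrix(y_true, y_pred, 3)
accuracy = cm.trace() / cm.sum()
print(accuracy)  # 5 of 6 correct, i.e. about 0.833
```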
Figure 12. The interpretability of the proposed multisensor-BPF-Signal2Image-CNN2D with the dataset shown in Section 3.1 (training—(left); testing—(right)).
Figure 13. The RGB images generated by the multisensor-BPF-Signal2Image-CNN2D method described in Section 2 for the variable-velocity fan imbalance dataset (see Section 3.2).
Figure 14. The training results for the multisensor-BPF-Signal2Image-CNN2D method described in Section 2 with the dataset shown in Section 3.2 (training progress—(left); confusion matrix for training—(middle); confusion matrix for testing—(right)).
Figure 15. The RGB images generated by the multisensor-BPF-Signal2Image-CNN2D method described in Section 2 for the bearing fault dataset (see Section 3.3).
Figure 16. The training results for the multisensor-BPF-Signal2Image-CNN2D method described in Section 2 with the dataset shown in Section 3.3 (training progress—(left); confusion matrix for training—(middle); confusion matrix for testing—(right)).
Figure 17. A comparison of the training progress of the proposed multisensor-BPF-Signal2Image-CNN2D method with the dataset described in Section 3.2 (left) and with the dataset presented in Section 3.3 (right).
Table 1. A comparison of the training progress of the proposed multisensor-BPF-Signal2Image-CNN2D method with the dataset shown in Section 3.1.
| Method | Number of Images Used for Training | Training Iterations | Iteration with Validation Accuracy More than 90% | Final Validation Accuracy |
|---|---|---|---|---|
| Proposed method multisensor-BPF-Signal2Image-CNN2D | 6144 images (80% of total 7680 images) | 96 | 20th iteration | 99.93% |
| Reference method IMU6DoF-Time2GrayscaleGrid [32] | 6144 images (80% of total 7680 images) | 7200 | 60th iteration | 99.93% |
| Reference method IMU6DoF-Time2RGBbyType [32] | 6144 images (80% of total 7680 images) | 7200 | 150th iteration | 100% |
| Reference method IMU6DoF-Time2RGBbyAxis [32] | 6144 images (80% of total 7680 images) | 7200 | 130th iteration | 100% |

Łuczak, D. Data-Driven Rotary Machine Fault Diagnosis Using Multisensor Vibration Data with Bandpass Filtering and Convolutional Neural Network for Signal-to-Image Recognition. Electronics 2024, 13, 2940. https://doi.org/10.3390/electronics13152940
