Article

Fault Detection in Industrial Equipment through Analysis of Time Series Stationarity

1
Polytechnic Institute of Coimbra, Coimbra Institute of Engineering, Rua Pedro Nunes-Quinta da Nora, 3030-199 Coimbra, Portugal
2
RCM2+ Research Centre for Asset Management and Systems Engineering, ISEC/IPC, Rua Pedro Nunes, 3030-199 Coimbra, Portugal
*
Authors to whom correspondence should be addressed.
Algorithms 2024, 17(10), 455; https://doi.org/10.3390/a17100455
Submission received: 11 August 2024 / Revised: 30 September 2024 / Accepted: 10 October 2024 / Published: 12 October 2024
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

Abstract:
Predictive maintenance has gained importance with increasing industrialization. Harnessing advanced technologies such as sensors and data analytics enables proactive interventions, preventing unplanned downtime, reducing costs, and enhancing workplace safety. These technologies play a crucial role in optimizing industrial operations, ensuring the efficiency, reliability, and longevity of equipment. The analysis of time series stationarity is a powerful and model-agnostic approach to studying variations and trends that may indicate imminent equipment failures, thus contributing to the effectiveness of predictive maintenance in industrial environments. The present paper explores the temporal variation of the Augmented Dickey–Fuller p-value as a possible method for determining trends in sensor time series and thus anticipating possible failures of a wood chip pump in the paper industry.

1. Introduction

Stopping the industrial equipment used on production lines is normally detrimental to companies. Downtime means a loss of production, plus costs due to stopping and restarting the equipment. This underscores the importance of predictive approaches to anticipate behaviours corresponding to potential failures that could trigger these stoppages.
In this context, predictive modelling emerges as a fundamental tool, allowing companies to anticipate equipment failures based on historical and real-time data. By employing advanced data analysis and machine learning techniques, organizations can identify hidden patterns in equipment’s operational data and develop accurate predictive models that alert operators about potential impending failures. This way it is possible to minimize unexpected equipment downtime, improve service quality for customers, and also reduce the additional costs caused by over-maintenance in preventive maintenance policies [1]. There are a variety of approaches to this, each with its advantages and disadvantages [2], but in all of them it is essential to follow the best practices for data preparation, model training, and validation to ensure the accuracy and reliability of their predictions. Some approaches stand out for their ability to capture temporal dependencies in sequential data, which is essential for predicting equipment failures over time. Outstanding state-of-the-art models include recurrent neural networks, namely Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs) [3,4].
Other models are effective in modelling linear and stationary relationships in time series data, such as the Autoregressive Integrated Moving Average (ARIMA) or the Seasonal Autoregressive Integrated Moving Average (SARIMA), although these models are normally more efficient in short-term prediction [5,6]. It is also common to use ensemble approaches, which combine multiple models, such as Random Forests (RFs) and Gradient Boosting Machines (GBMs) [7,8]. However, each model should be developed according to the specific needs and characteristics of a problem.
The prediction of future sensor values alone is not sufficient to identify trends automatically. Predictive models only forecast the future, but additional tools are needed to classify features and determine whether the patterns that are being anticipated are normal or abnormal. This classification is normally accomplished through classical machine learning models, such as neural networks, Support Vector Machines, and other classifiers [9].
The present work aims to detect possible failures in an industrial wood chip pump. The methodology consisted of analysing the p-value produced by the Augmented Dickey–Fuller (ADF) test for stationarity. The ADF test establishes that a p-value above a certain level, typically 0.05, indicates a possibly non-stationary time series. In industrial equipment, vibration should be eliminated or contained as much as possible, so that the equipment’s vibration signature will have a profile similar to that recommended by the manufacturer. Over time, however, it normally increases at some point because screws, joints, and fragile parts in general start to become loose. When it reaches a certain value, maintenance interventions are required to fix the problem and reduce vibration again. Hence, the hypothesis is that a trend which may lead to an equipment failure might be detected by the observation of the p-value of a time series in near real time using a rolling window for the test. Specific patterns of increase or decrease or large amplitude variations might reveal a possibly approaching failure in the equipment. These patterns must, therefore, be analysed, aiming to prevent failures from happening and reduce downtime and maintenance costs. To the best of the authors’ knowledge, no previous predictive maintenance study has been reported using the ADF test on chip pump time series. However, this method could be a useful approach for all variables which are stationary in normal conditions, offering great potential for the predictive maintenance of chip pumps and possibly other industrial equipment.
Section 2 describes the state of the art. Section 3 describes the materials and methods used. Section 4 shows the results obtained. Section 5 presents a balance and comparison of the results to the state of the art. Section 6 draws some conclusions and suggests directions for future work.

2. Literature Review

Fault prediction in industrial equipment is an area of research and practice that encompasses a wide range of approaches and methodologies. Given the complexity of industrial systems and the diversity of factors that can lead to faults, there is no single approach that is universally applicable. Instead, engineers and researchers have explored a variety of techniques and strategies to anticipate, avoid, and mitigate equipment failures [10].
Statistical models aim at analysing the behaviour of random variables based on recorded data. For predictive maintenance, statistical models are used to determine the current degradation and the expected remaining life of the equipment. This type of model is often implemented in multi-model approaches [11].
Jie Zhao et al. [12] presented a novel methodology based on the Autoregressive Moving Average (ARMA) model. Instead of using the most common data transformation methods, they developed their own, creating a transfer function and taking the transformed sequence as the input data to build an ARMA model. They achieved a Mean Absolute Percentage Error (MAPE) of 2.48%, lower than that of an ARMA model with traditional methods.
ARIMA is the most common forecast model used in time series, due to the adaptability of its linear patterns to all time series strategies, according to Chafak Tarmanini et al. [13]. Based on daily real electricity load data for 709 individual households that were randomly chosen over an 18 month period, they compared the performance of two forecasting methods: ARIMA and an Artificial Neural Network (ANN). After tests, the ANN’s MAPE was 1.80% and ARIMA’s was approximately 2.61%.
To demonstrate the good performance of the ARIMA algorithm when applied to anomaly detection and technological parameter forecasting, Karthick Thiyagarajan et al. [14] developed a model using sparse data obtained from an urban sewer system and obtained appealing results. Over a time span of 30 days, of which 24 were used for training, they achieved a Mean Absolute Error (MAE) of approximately 0.0962.
Jing Xu and Yongbo Zhang [15] developed a model, aiming to study quality predictions in power grid equipment based on key inspection parameters. By gathering live monitoring data from power grid equipment and creating an intelligent predictive alerting algorithm using time series and a trend analysis, they achieved a MAPE of 5.62% for the leakage current and 4.16% for the resistive current.
Luigi De Simone et al. [16] proposed an LSTM methodology for the predictive maintenance of railway rolling stock equipment. As the dataset contained some imperfections, they developed one algorithm for filtering the spikes found and another for standardizing them. Using 10-fold cross-validation and Adaptive Moment Estimation (ADAM), they obtained an MAE of 0.0184 for a 60 min prediction window covering both low- and high-severity diagnostic events (L&H), and an MAE of 0.0008 for high-severity diagnostic events only (H).
Based on supervisory control and data acquisition (SCADA) data, Wisdom Udo and Yar Muhammad developed eight models using extreme gradient boosting (XGBoost) and LSTM to build the characteristic behaviour of critical wind turbine components [17]. They tested them on two real case studies regarding six different wind turbines with the aim of predicting the gearbox bearing temperature and obtained a MAPE between 0.8% and 6.2%.
Hansika et al. [18] report that RNNs have been shown to be promising options for prediction tasks, often surpassing the current state-of-the-art statistical benchmarks in the field. One existing type of RNN is the GRU. Mateus et al. [19] employed neural networks to develop two models aimed at predicting the behaviour of an industrial paper press. Their achievement lies in accurately forecasting the press's behaviour over a 30-day period, achieving a MAPE of less than 10% when utilizing a GRU. Mateus et al. also developed a model using LSTM to compare it with a GRU network. Using the same training and test data in both experiments, with changes such as different resample rates, different layer sizes, and different activation functions in the hidden layer, they concluded that the LSTM model showed some instability, with its results varying significantly. To examine the most efficient pre-processing approaches for predicting sensory data trends based on GRU neural networks, Mateus et al. developed a model that can anticipate future values with a MAPE of 1.2%. They proved that it is possible to forecast the future behaviour of industrial paper pulp presses up to 30 days in advance.
Martins et al. [20] also developed a model using a GRU, successfully predicting, within a 7-day timeframe and with a MAPE of 2.79%, the three states classified by a Hidden Markov Model and the observable states obtained from clustering. The methodology also involved the use of a Principal Component Analysis (PCA) and K-Means algorithm during data clustering.
With a variety of models and ways to predict time series values, and, in turn, equipment failures, Peter T. Yamak et al. [21] conducted a study on the performance of LSTM, GRU, and ARIMA models. They used data on Bitcoin transactions from a crypto data download website. The data were normalized and forced into stationarity, with trend and seasonality removed. All models were tested over a time span of 500 days, yielding the following MAPE results: LSTM, 6.80%; GRU, 3.97%; and ARIMA, 2.76%, with the latter being both the most accurate and the fastest.
Although the introduction of stationarity tests into predictive maintenance has not yet been the subject of a more in-depth study, there are already some articles on the topic. Phong B. Dao and Wieslaw J. Staszewski [22] investigated the application of tests such as the ADF and Kwiatkowski–Phillips–Schmidt–Shin (KPSS) tests to detect damage in structures using Lamb wave data. This study emphasized the challenges imposed by environmental conditions, such as temperature variations, that affect wave responses. The KPSS test was found to be more sensitive in detecting damage, although it is more affected by temperature changes. Their study highlights the need to mitigate these effects for reliable detection. In this type of study, it is also very common to use techniques such as cointegration, which is capable of removing both natural and artificial common trends, and then use peak-to-peak and variance values to detect and classify damages more efficiently [23].
More recently, Phong B. Dao et al. [24] also introduced a new method for early fault detection through the analysis of time series stationarity across sliding windows of SCADA data with a high level of confidence. This method demonstrated some superiority over others in detecting unusual or abrupt changes and being able to monitor multiple parameters simultaneously, once again showcasing the simplicity, efficiency, and promise of such methods.
Table 1 shows a comparison of some of the relevant approaches to time series prediction and fault detection, including both classical statistical models and modern approaches based on deep neural networks. As the table shows, predictive models already achieve very reasonable accuracy, with errors below 10%. Nonetheless, these predictions still need additional analysis in order to determine a potential failure. No studies were found based on the analysis of the p-value to determine ongoing trends, as we aim to do in the present study.

3. Materials and Methods

3.1. Methodology

The CRoss Industry Standard Process for Data Mining (CRISP-DM) methodology was followed during this research. The first steps are described in the present section, while the analysis of the results is described in Section 4. The method’s deployment is left for future work.
The first step in the project was the study of the problem, which consisted of a visit to the company and a literature review. The second step, “Data understanding”, consisted of a statistical analysis of the variables involved, as well as visualization experiments. The third step, “Data preparation”, consisted of data selection and cleaning. After that, seasonal decomposition and a stationarity analysis were performed in the modeling phase. The analysis of the results consisted of the evaluation of the p-value over different time windows.

3.2. Experimental Setup

In the experimental part, the Python programming language was used in Spyder, a multiplatform open-source integrated development environment (IDE) for scientific programming. Additionally, the following libraries were used in the development of this project: pandas, numpy, matplotlib, pickle, statsmodels, and seaborn. The experiments were run on a machine with an Intel(R) Core(TM) i5-10300H CPU @ 2.50 GHz and approximately 8.00 GB of RAM.

3.3. Dataset

The dataset used in this project contains sensor data from three industrial wood chip pumps operating in a large Portuguese paper mill. The data were collected by a set of sensors installed in the pumps, logging readings at a 1 min sampling rate. The dataset covers the years 2017, 2018, and 2019. Thirty-one variables are monitored, but for simplicity only three were used in this project: the vibration of chip pump 1 (VCP 1), the vibration of chip pump 2 (VCP 2), and the vibration of chip pump 3 (VCP 3). Vibration was chosen because of its importance for condition monitoring. It is one of the most telling variables and also one of the most difficult to analyse because of the influence of noise. Increasing vibration in industrial equipment is a serious risk, so this is one variable where a stationarity analysis may be especially important.
Some statistical results from different years and different chip pumps, taken from the provided dataset, are shown in Table 2. As the table shows, there are some negative numbers, which obviously correspond to noise and outliers in the data. The table also shows that VCP1 has the lowest mean and the highest standard deviation in 2017, which means that it potentially had problems in this year. VCP2 has the highest mean of the three chip pumps. It also reaches its highest values in 2017 and 2018, which means that it may need maintenance interventions to improve its working condition and prevent potential failures.
Table 2 also shows there are some missing samples, otherwise the AF (absolute frequency) would be 525k for all these variables, considering the sampling period is 1 min. Some of the missing samples correspond to periods of planned maintenance, when the equipment and sensors are turned off. Other missing values, however, correspond to failures in the process of reading or recording the data. When this happens, there are still records in the database; however, they are incomplete and may pose challenges to data analysis. Table 3 summarizes the number of missing values that may need attention during the data analysis process.

3.4. Data Preparation

In order to make the dataset less extensive, and to reduce noise at the same time, downsampling was performed, changing the period to 5 min instead of the original 1 min—i.e., the mean of each period of 5 min was taken from the original dataset. Thus, the dataset was reduced to one fifth of its original size; approximately 100k rows instead of approximately 500k. This will not affect the information that is needed for the present research, since any high-frequency components possibly lost are not necessary for the stationarity analysis. Maximum and minimum values may be affected, but again they are not relevant for this goal. Additionally, many maximum and minimum values may be outliers caused by reading errors or possible external interference in the sensors, or other extreme values which can be smoothed without affecting the result of this analysis. Table 4 shows a summary of the statistical parameters of the dataset after downsampling.
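The downsampling step described above can be sketched with pandas; the series values and date range below are illustrative stand-ins, not the actual dataset.

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for one vibration channel sampled every minute.
idx = pd.date_range("2018-02-04", periods=1440, freq="min")  # one day of 1-min data
vcp1 = pd.Series(np.random.default_rng(0).normal(1.0, 0.1, len(idx)), index=idx)

# Downsample to 5-min means: one fifth of the rows, high-frequency noise averaged out.
vcp1_5min = vcp1.resample("5min").mean()
```

Taking the mean of each 5 min bin preserves the low-frequency behaviour needed for the stationarity analysis while discarding high-frequency components.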
After dataset reduction, it is clear that the average values remained largely unchanged, but the maximum values were affected. After further analysis, some periods were identified as candidates for subsequent analysis, given their small percentage of missing values and their temporal extent. The following ranges were selected:
  • Range 1 (R1): 5 November 2017–20 January 2018;
  • Range 2 (R2): 4 February 2018–27 October 2018;
  • Range 3 (R3): 4 November 2018–16 February 2019.
These periods are of interest because of their low number of missing data samples. Table 5 shows a summary of the statistical parameters of these intervals. As the table shows, there were still some missing points in the second interval, so they were imputed using linear interpolation. Moreover, it is also possible to notice a difference between the values of absolute frequency observed in the second interval due to its longer time span.
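The imputation of the remaining gaps can be sketched as follows (hypothetical values; the real series is far longer):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2018-02-04", periods=8, freq="5min")
vib = pd.Series([1.0, 1.1, np.nan, np.nan, 1.4, np.nan, 1.2, 1.1], index=idx)

# Linear interpolation fills each gap along a straight line between its neighbours.
vib_filled = vib.interpolate(method="linear")
```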
As Table 6 shows, after applying linear interpolation to the second time interval, the values hardly changed.

3.5. ADF Test

As mentioned in Section 1, the main objective of this study is to understand the variation of the p-value through stationarity tests, namely the ADF test. This test indicates whether a time series can be considered stationary. It tests the null hypothesis that there is a unit root in the time series sample and therefore that the series is not stationary. If the null hypothesis cannot be rejected, then it is not possible to rule out the existence of a unit root, and the series must be treated as non-stationary [26]. The lower the test's p-value, the stronger the rejection of the null hypothesis; conversely, a higher p-value indicates weaker evidence against non-stationarity.
The ADF test is applied to the time series model
Δy_t = y_t − y_{t−1} = α + βt + γ y_{t−1} + δ_1 Δy_{t−1} + … + δ_{p−1} Δy_{t−p+1} + ε_t,
where y_t is the time series, α is a constant, β is the coefficient of a time trend, p is the lag order of the autoregressive process, t is time, the δ_j are the coefficients of the lagged differences, and ε_t is the error term. γ is the coefficient most directly associated with stationarity: the regression above is estimated and the null hypothesis γ = 0 is tested. If γ = 0, the process contains a unit root (it behaves as a random walk) and is therefore non-stationary; if γ is significantly negative, the series is stationary.
Based on the results obtained from the analyses of the previous time series, it was concluded that the time series corresponding to the interval from 2 April 2018 to 27 October 2018 would be the most suitable for the modelling phase, considering its temporal extent and statistical properties.

4. Results

4.1. Variable Analysis

Through individual analyses of the variables, an attempt was made to understand their behaviour over time. Figure 1 shows the plots of the three variables chosen for the year 2018. Regarding variable VCP 1, it is possible to observe a growing trend over time, but as shown in Figure 2, it also presents some discrepant values, which may happen for different reasons. As for variable VCP 2, it exhibits some irregularity and instability and shows seasonal behaviour due to the repetition of patterns throughout the time series, as revealed in Figure 3. Notably, there are values that are nearly zero and values associated with the mean value. Variable VCP 3, between February and May, demonstrates dynamic behaviour with some randomness in its values. From there, similar to the analysis conducted on the behaviour of variable VCP 2, some seasonal patterns start to emerge. The charts give an idea of the long-term behaviour of these variables, but it is clear that the signals are very noisy and it is difficult to identify clear patterns just from this visual information.

4.2. Seasonal Decomposition

Individual decompositions of the data were performed, focusing on their trend, seasonality, and residual components. The previous analysis of the charts shows that the series do not vary much over time, but even so they may contain some seasonal components, an additive form being more probable than a multiplicative one, considering that the average and standard deviation do not change much over time. Therefore, seasonal decomposition was performed using the additive model. Different values were tried for the period parameter. The value 2016 (288 × 7) was used to produce the charts shown in Figure 2, corresponding to the number of 5 min intervals in a week, in order to capture the weekly pattern. As the figure shows, the variable VCP 1 exhibits no significant trend in the time series; there are no clear upward or downward movements. Additionally, seasonality is reduced, with the seasonal component fluctuating between −0.02 and 0.03.
Figure 3 shows the seasonal decomposition of variable VCP 2. Again, despite the presence of numerous oscillations in the trend, no clear long-term growth or decline is visible. The oscillations show that there is more noise in this signal than in VCP 1, but the noise does not show clear patterns that could be identified at this point. Despite the noise and the small amplitude of most mid-term trend variations, the chart suggests a growing trend until mid-April, followed by a sudden decline. After this decline, there is a slow growth trend until late September, again followed by a decline, this time with a period of instability. These faint variations over time may be important for further analysis. As for the seasonality, it is similar to the behaviour observed previously in variable VCP 1, with no clear seasonal components of significant amplitude.
Regarding variable VCP 3, its seasonal decomposition is shown in Figure 4. The charts show the peculiar characteristics of this signal, which is different from VCP 1 and 2. Again, the seasonal component is negligible. The trend is variable in the first months and more stable after April. Until May, a quite pronounced but unstable trend is observed, which denotes problems in the equipment’s normal operation. At the end of April, something changed and the trend decreased suddenly, before slowly increasing in subsequent months.

4.3. Stationarity Analysis

The ADF test, with the lag order selected by the Akaike Information Criterion, was employed to analyse the stationarity of the signals. Table 7 shows the p-values for the three pumps over the whole period under analysis (R2). As the table shows, all the ADF test statistics are strongly negative, indicating strong evidence against the null hypothesis of non-stationarity. In other words, the time series are stationary over this period, which is consistent with the visual evidence in Figure 1, Figure 2, Figure 3 and Figure 4 and with the information that comes from the factory: the pumps receive periodic maintenance interventions so that their vibrations are kept at safe levels over time.
Nonetheless, when the analysis is performed over smaller time windows, stationarity may not be guaranteed, in line with the initial research question of the present study. The most important question, however, is which window sizes will reveal relevant non-stationarity in the time series, if there is any. Considering this project's objectives, we decided to experiment with two distinct sizes of rolling windows: weekly and daily. This segmentation aims to obtain information that could lead to a more holistic approach.
The weekly variation of the p-value throughout the time series is illustrated in Figure 5. The figure shows that the p-value varies from approximately zero to values above 0.75 in VCP 2 and 0.6 in VCP 1 and VCP 3. This indicates that there are weeks when the vibration follows a non-stationary path, which is the pattern being sought after in the present research.
An example of the daily variation of the p-value throughout the time series is illustrated in Figure 6. As this figure shows, daily variations exhibit constant fluctuations, which make any analysis or monitoring very difficult to perform. The weekly signals produce more stable results, but even so there is certainly a lot of noise, and a more stable signal is required for more confidence in the results.

4.4. Impact of Outliers on the p-Value

Discrepant values are often called outliers. They are normal in industrial sensor readings and they have an impact on the data analysis process, as they affect data averages, standard deviation, and many other statistical characteristics of a series.
The relationship between the discrepant data and the p-value was also investigated. We are particularly interested in outliers because, in the absence of labelled data about the failures' true timestamps, outliers are possible indicators of failures. They may be sources of true or false positives.
In this study, a discrepant value was defined as one 200% higher than the mean recorded for each analysed variable. In our data, 5 days of discrepant values were identified. Figure 7 shows the discrepant values, which are marked with bounding boxes. They correspond to the following dates:
  • 6 March 2018 (VCP 3);
  • 16 May 2018 (VCP 2);
  • 8 June 2018 (VCP 1);
  • 17 July 2018 (VCP 1);
  • 25 September 2018 (VCP 1).
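Reading "200% higher than the mean" as values exceeding three times the series mean, the flagging rule can be sketched as follows (this threshold interpretation is our assumption, and the toy series is illustrative):

```python
import pandas as pd

def flag_discrepant(series: pd.Series, pct_above_mean: float = 200.0) -> pd.Series:
    """Boolean mask of samples more than `pct_above_mean` percent above the mean."""
    threshold = series.mean() * (1.0 + pct_above_mean / 100.0)
    return series > threshold

# Toy series with one obvious spike; only the spike exceeds 3x the mean.
vib = pd.Series([1.0, 1.1, 0.9, 1.0, 6.0, 1.0])
mask = flag_discrepant(vib)
```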
Performing a more specific analysis of the p-values on the different dates identified, it is possible to observe some variations in the p-value during or around the week that includes the date of the discrepant values. As an example, Figure 8 shows the weekly and daily p-values for VCP 3, where a spike in vibration was observed on 6 March. As the figure shows, there is an increase in the p-value in the first week of March, followed by a steady decrease until the end of the month. The daily chart shows that the highest p-values were registered on days 12, 16, and 30, in other words, after the discrepant value was registered. These values are compared in Figure 6, which shows the variation of the p-value and the mean of variable VCP 3 on the identified day, highlighting some peaks of the p-value over four months, including March.
The sources of these spikes may be numerous, and during the present study it was not possible to explain them. Hence, the conclusion from this observation is that outliers have an influence on the p-value, but whether they are true or false positives, regarding fault diagnosis, is left for future work, where more data may be available.

4.5. Minimizing Noise

In order to minimize the impact of discrepant data on the variation of the p-value, two noise filters were applied. They consisted of applying a rolling average, with sliding windows of 1, 3, 7, or 15 days, followed by the application of a LOWESS filter with frac = 0.15. The impact of those filters can be observed in Figure 9, Figure 10 and Figure 11.
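The two-stage filter can be sketched with pandas and statsmodels; the window of 288 samples corresponds to roughly one day of 5 min data, and frac=0.15 matches the fraction reported above, but the input series is synthetic.

```python
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

idx = pd.date_range("2018-02-04", periods=1000, freq="5min")
rng = np.random.default_rng(3)
noisy = pd.Series(1.0 + 0.1 * np.sin(np.arange(1000) / 50.0)
                  + rng.normal(0.0, 0.2, 1000), index=idx)

# Stage 1: rolling average (roughly a 1-day window in samples).
rolled = noisy.rolling(window=288, min_periods=1).mean()

# Stage 2: LOWESS over the rolled signal, keeping the original ordering.
smooth = lowess(rolled.values, np.arange(len(rolled)), frac=0.15, return_sorted=False)
```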
Figure 9 and Figure 10 show a portion of the VCP1 signal, smoothed with rolling averages of 1- and 3-day windows, respectively, and then a LOWESS filter with a fraction of 15%. Analysing Figure 9 and Figure 10, it is possible to observe some stability in the mean value of VCP1 throughout most of the time window. However, towards the end of the window, there is a sudden increase. Looking at the variation of the p-value, it is possible to see that the same pattern occurs. Additionally, in Figure 9, it is possible to observe an increase in the p-value immediately after the mean of the variable rises. This shows one example of the pattern being sought, as the series becomes non-stationary because of an increasing rolling average due to an increased vibration amplitude.
In Figure 10, it is also possible to see that, before the final increase in the VCP1 mean, the p-value reacts twice, rising above the threshold value of 0.05 twice in a short time span. This is the same signal as in Figure 9. The figures show that the width of the rolling window can have a significant impact on the results and on the possible alarms generated, in cases where the p-value is used to trigger an alarm when it rises above a certain threshold.
This same p-value behaviour can also be observed in Figure 11, which shows the vibration of VCP3 with a rolling average of 15 days. In this case, before a negative peak in the mean value of the variable is observed, it is possible to see a significant increase in the p-value followed by a sudden drop. The p-value reacted to a spike in vibration that lasted several days, and at the end of May it also rose above 0.05, reacting to a consistent increase in vibration levels.
In summary, the pictures show the following:
  • The impact of short spikes on the p-value when no noise-filtering methods are used. The p-value responds to the spikes and can generate true or false positives, with more data being necessary to determine which is the case.
  • When applying noise-filtering methods such as a rolling average and LOWESS filter, the impact of short spikes is minimized. The size of the rolling window has a direct impact on the number of times the p-value crosses the significance threshold defined.
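A minimal alarm rule over the rolling p-value, assuming a simple upward crossing of the 0.05 threshold (our sketch, not the paper's exact rule), could look like this:

```python
import pandas as pd

def pvalue_alarms(pvalues: pd.Series, threshold: float = 0.05) -> list:
    """Timestamps where the rolling ADF p-value rises above the threshold,
    i.e., the window stops looking stationary."""
    above = pvalues > threshold
    rises = above & ~above.shift(1, fill_value=False)
    return list(pvalues.index[rises])

# Toy weekly p-value series with two excursions above 0.05: one alarm each.
pv = pd.Series([0.01, 0.02, 0.20, 0.30, 0.04, 0.01, 0.60],
               index=pd.date_range("2018-05-01", periods=7, freq="W"))
alarms = pvalue_alarms(pv)
```

Raising an alarm only on the upward crossing, rather than on every sample above the threshold, avoids repeated alerts for a single sustained excursion.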

5. Discussion

The p-value of the ADF test is one indicator of the stationarity of a time series. In the case of industrial equipment, some variables, such as vibration, would be stationary in ideal conditions. However, what normally happens is that vibration increases with increased workload or between maintenance interventions. The present research shows that analysis of the p-value may be a promising method to detect trends, or even outliers, in the vibration signals of three industrial pumps. The ADF test is conducted using a rolling window, thus avoiding direct repetition of the test on the same data points and reducing the likelihood of false positive inflation. Balancing false positives and false negatives is always challenging, and a domain-specific cost analysis of detection errors must be conducted before this approach is used in industry.
To the best of the authors’ knowledge, this method has never been tried on wood chip pumps. Nonetheless, the analysis of the p-value could have several advantages compared to other methods of detecting and predicting possible faults:
  • One advantage is that a p-value above 0.05 means the null hypothesis of a unit root cannot be rejected, i.e., the series is considered non-stationary in that window. This threshold is easier to establish than fixing limits for each variable, so the method could serve as a near-universal approach for detecting the non-stationarity of a variable in a given time window.
  • Another advantage is that the ADF test is able to work with noisy data, simplifying the analysis process. The presence of noise in the data has some impact on the results but does not cause a catastrophic failure of the method.
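One way such a threshold could drive an alarm is sketched below. The persistence rule, which requires the p-value to stay above 0.05 for several consecutive observations before flagging, is an assumption added here to show how one-off spikes might be suppressed; it is not part of the paper's method.

```python
# Hypothetical alarm rule: flag only when the ADF p-value stays above
# the 0.05 threshold for `persistence` consecutive observations, so a
# single short excursion does not trigger an alarm.
import pandas as pd

def pvalue_alarms(pvalues: pd.Series, threshold: float = 0.05,
                  persistence: int = 3) -> pd.Series:
    """True where the p-value has exceeded `threshold` for
    `persistence` consecutive observations."""
    above = pvalues > threshold
    return above.rolling(persistence).sum() == persistence

# Toy p-value trace: one isolated excursion (index 2), then a
# sustained run (indices 4 to 6)
p = pd.Series([0.01, 0.02, 0.20, 0.01, 0.10, 0.30, 0.40, 0.02])
alarms = pvalue_alarms(p, persistence=3)
# Only index 6, where the run reaches three consecutive steps, is flagged
```

The trade-off mirrors the windowing discussion above: a longer persistence requirement filters more spikes but delays detection of genuine trends.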
On the other hand, the method also has some drawbacks, as the experimental results demonstrate. One important drawback is the need to determine the best time windows, both for calculating the p-value and for any noise filters applied, because different window sizes produce different results and different numbers of possible alarms. Moreover, even though the p-value method may generalize across different types of equipment, the noise and quality of the data still require careful analysis and specific cleaning or treatment methods. An analysis of possible cointegration issues may also be necessary to rule out the influence of external factors [27]. Table 8 summarizes the advantages and disadvantages of the proposed method, making its interpretation more straightforward.

6. Conclusions

Fault detection and prediction in industrial equipment is an important area subject to intensive research. To the best of the authors’ knowledge, the analysis of the p-value from an ADF test of a time series is a novel approach that has never been applied to industrial chip pump vibration analysis. It offers great potential for monitoring and determining trends in the condition of equipment. The hypothesis was investigated through the analysis of the p-value of three time series, using rolling windows of different sizes both to calculate the p-value and to apply noise-filtering techniques that stabilize the results.
The results showed a sustained increase in the p-value, which rose above the 0.05 threshold when there were large variations in the vibration of the chip pumps. Equipment prognostics and fault prediction using the p-value is an appealing idea, because it can be a universal method that is simpler than defining limits for each variable, and the present study showed that vibration, although stationary in the long term, is non-stationary during some periods.
Future work requires the analysis of additional data, namely to study in greater detail the behaviour of the p-value in situations of actual faults, both for vibration and for other variables.

Author Contributions

Conceptualization, M.M. and N.L.; methodology, M.M. and N.L.; software, D.F. and F.R.; validation, M.M., N.L. and J.F.; formal analysis, M.M. and N.L.; investigation, D.F. and F.R.; resources, J.F.; data curation, D.F.; writing—original draft preparation, D.F.; writing—review and editing, M.M., N.L. and J.F.; supervision, M.M. and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets used to produce this article are not readily available because of non-disclosure agreements. Requests for more information about the datasets should be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AF: Absolute frequency
ANN: Artificial Neural Network
ARIMA: Autoregressive Integrated Moving Average
ARMA: Autoregressive Moving Average
CRISP-DM: Cross-Industry Standard Process for Data Mining
GBM: Gradient Boosting Machine
GRU: Gated Recurrent Unit
IDE: Integrated Development Environment
LSTM: Long Short-Term Memory
MAPE: Mean Absolute Percentage Error
PCA: Principal Component Analysis
RF: Random Forest
SARIMA: Seasonal Autoregressive Integrated Moving Average
SCADA: Supervisory Control and Data Acquisition
VCP: Vibration of chip pump
XGBoost: Extreme Gradient Boosting

References

  1. Wang, J.; Li, C.; Han, S.; Sarkar, S.; Zhou, X. Predictive maintenance based on event-log analysis: A case study. IBM J. Res. Dev. 2017, 61, 11:121–11:132.
  2. Zhu, T.; Ran, Y.; Zhou, X.; Wen, Y. A Survey of Predictive Maintenance: Systems, Purposes and Approaches. arXiv 2024, arXiv:1912.07383.
  3. Han, Z.; Zhao, J.; Leung, H.; Ma, K.F.; Wang, W. A review of deep learning models for time series prediction. IEEE Sens. J. 2019, 21, 7833–7848.
  4. Yan, H.; Ouyang, H. Financial time series prediction based on deep learning. Wirel. Pers. Commun. 2018, 102, 683–700.
  5. Liu, Y.; Wang, Y.; Yang, X.; Zhang, L. Short-term travel time prediction by deep learning: A comparison of different LSTM-DNN models. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–8.
  6. Lindemann, B.; Müller, T.; Vietz, H.; Jazdi, N.; Weyrich, M. A survey on long short-term memory networks for time series prediction. Procedia CIRP 2021, 99, 650–655.
  7. Daurenbayeva, N.; Nurlanuly, A.; Atymtayeva, L.; Mendes, M. Survey of Applications of Machine Learning for Fault Detection, Diagnosis and Prediction in Microclimate Control Systems. Energies 2023, 16, 3508.
  8. Abid, A.; Khan, M.T.; Iqbal, J. A review on fault detection and diagnosis techniques: Basics and beyond. Artif. Intell. Rev. 2021, 54, 3639–3664.
  9. Miljković, D. Fault detection methods: A literature survey. In Proceedings of the 2011 34th International Convention MIPRO, Opatija, Croatia, 23–27 May 2011; pp. 750–755.
  10. Dashti, R.; Daisy, M.; Mirshekali, H.; Shaker, H.R.; Hosseini Aliabadi, M. A survey of fault prediction and location methods in electrical energy distribution networks. Measurement 2021, 184, 109947.
  11. Montero Jimenez, J.J.; Schwartz, S.; Vingerhoeds, R.; Grabot, B.; Salaün, M. Towards multi-model approaches to predictive maintenance: A systematic literature survey on diagnostics and prognostics. J. Manuf. Syst. 2020, 56, 539–557.
  12. Zhao, J.; Xu, L.; Liu, L. Equipment Fault Forecasting Based on ARMA Model. In Proceedings of the 2007 International Conference on Mechatronics and Automation, Harbin, China, 5–9 August 2007; pp. 3514–3518.
  13. Tarmanini, C.; Sarma, N.; Gezegin, C.; Ozgonenel, O. Short term load forecasting based on ARIMA and ANN approaches. Energy Rep. 2023, 9, 550–557.
  14. Thiyagarajan, K.; Kodagoda, S.; Van Nguyen, L. Predictive analytics for detecting sensor failure using autoregressive integrated moving average model. In Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia, 18–20 June 2017; pp. 1926–1931.
  15. Xu, J.; Zhang, Y. Device Fault Prediction Model based on LSTM and Random Forest. arXiv 2024, arXiv:2403.05179.
  16. De Simone, L.; Caputo, E.; Cinque, M.; Galli, A.; Moscato, V.; Russo, S.; Cesaro, G.; Criscuolo, V.; Giannini, G. LSTM-based failure prediction for railway rolling stock equipment. Expert Syst. Appl. 2023, 222, 119767.
  17. Udo, W.; Muhammad, Y. Data-Driven Predictive Maintenance of Wind Turbine Based on SCADA Data. IEEE Access 2021, 9, 162370–162388.
  18. Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent Neural Networks for Time Series Forecasting: Current status and future directions. Int. J. Forecast. 2021, 37, 388–427.
  19. Mateus, B.C.; Mendes, M.; Farinha, J.T.; Assis, R.; Cardoso, A.M. Comparing LSTM and GRU Models to Predict the Condition of a Pulp Paper Press. Energies 2021, 14, 6958.
  20. Martins, A.; Mateus, B.; Fonseca, I.; Farinha, J.T.; Rodrigues, J.; Mendes, M.; Cardoso, A.M. Predicting the Health Status of a Pulp Press Based on Deep Neural Networks and Hidden Markov Models. Energies 2023, 16, 2651.
  21. Yamak, P.T.; Yujian, L.; Gadosey, P.K. A Comparison between ARIMA, LSTM, and GRU for Time Series Forecasting. In Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence, ACAI ’19, Sanya, China, 20–22 December 2019; pp. 49–55.
  22. Dao, P.B.; Staszewski, W.J. Lamb Wave Based Structural Damage Detection Using Stationarity Tests. Materials 2021, 14, 6823.
  23. Dao, P.B.; Staszewski, W.J. Cointegration and how it works for structural health monitoring. Measurement 2023, 209, 112503.
  24. Dao, P.B.; Barszcz, T.; Staszewski, W.J. Anomaly detection of wind turbines based on stationarity analysis of SCADA data. Renew. Energy 2024, 232, 121076.
  25. Mateus, B.C.; Mendes, M.; Torres Farinha, J.; Marques Cardoso, A.; Assis, R.; Soltanali, H. Improved GRU prediction of paper pulp press variables using different pre-processing methods. Prod. Manuf. Res. 2023, 11, 2155263.
  26. Menegaki, A. Chapter 2—Stationarity and an alphabetical directory of unit roots often used in the energy-growth nexus. In A Guide to Econometrics Methods for the Energy-Growth Nexus; Menegaki, A., Ed.; Academic Press: Cambridge, MA, USA, 2021; pp. 31–61.
  27. Cross, E.J.; Worden, K.; Chen, Q. Cointegration: A novel approach for the removal of environmental trends in structural health monitoring data. Proc. R. Soc. A Math. Phys. Eng. Sci. 2011, 467, 2712–2732.
Figure 1. Data plots for 2018.
Figure 2. Seasonal decomposition of variable VCP 1.
Figure 3. Seasonal decomposition of variable VCP 2.
Figure 4. Seasonal decomposition of variable VCP 3.
Figure 5. Evolution of the p-value over time, using weekly moving windows, from February to October.
Figure 6. Daily variation of p-value and variable VCP3, with overlap.
Figure 7. Discrepant values marked with bounding boxes.
Figure 8. p-value variation in March 2018.
Figure 9. p-value and VCP1 mean variation (1-day rolling average).
Figure 10. p-value and VCP1 mean variation (3-day rolling average).
Figure 11. p-value and VCP3 mean variation (15-day rolling average).
Table 1. Comparison of different methods used for time series predictions.
Authors | Model | MAPE | MAE
Mateus et al. [19] | GRU | <10.00% | -
Martins et al. [20] | GRU | 2.79% | -
Mateus et al. [25] | GRU | 1.2% | -
Xu and Zhang [15] | LSTM | 5.16%, 4.16% | -
De Simone et al. [16] | LSTM | - | 0.0184 (L&H), 0.0008 (H)
Wisdom and Yar [17] | LSTM | 0.8–6.2% | -
Jie Zhao et al. [12] | ARMA | 2.48% | -
Yamak et al. [21] | ARIMA | 2.76% | -
Tarmanini et al. [13] | ARIMA, ANN | 2.61%, 1.80% | -
Yamak et al. [21] | ARIMA | - | 0.0962
GRU—Gated Recurrent Unit, LSTM—Long Short-Term Memory, ARMA—Autoregressive Moving Average, ARIMA—Autoregressive Integrated Moving Average, ANN—Artificial Neural Network.
Table 2. Statistical data related to 2017, 2018, and 2019. Vibration values, in mm/s, for the three chip pumps.
Variable | VCP 1 | VCP 2 | VCP 3
AF | 516k; 517k; 508k | 487k; 517k; 511k | 512k; 517k; 497k
Mean | 0.69; 1.03; 1.47 | 2.13; 2.04; 1.89 | 1.03; 1.85; 1.54
STD | 0.84; 0.57; 0.54 | 0.47; 0.54; 0.67 | 0.46; 0.81; 0.76
Min | −0.05; −0.049; −0.05 | −0.07; −0.04; −0.04 | −0.04; −0.01; −0.05
25% | 0.56; 0.69; 1.21 | 1.96; 1.87; 1.55 | 0.81; 1.53; 1.07
50% | 0.67; 0.85; 1.52 | 2.25; 2.12; 1.99 | 0.96; 1.81; 1.36
75% | 0.84; 1.20; 1.85 | 2.49; 2.37; 2.34 | 1.11; 2.03; 1.74
Max | 5.34; 23.90; 11.58 | 25.11; 19.89; 10.03 | 2.60; 9.88; 20.69
AF—absolute frequency, STD—standard deviation, Min—minimum, Max—maximum, VCP n—vibration of chip pump n.
Table 3. Missing values from 2017, 2018, and 2019 data.
Year | VCP1 | VCP2 | VCP3
2017 | 8964 (1.71%) | 37,721 (7.18%) | 13,130 (2.50%)
2018 | 8026 (1.53%) | 7916 (1.51%) | 8220 (1.56%)
2019 | 16,893 (3.21%) | 14,555 (2.77%) | 28,027 (5.33%)
Table 4. Statistical data related to 2017, 2018, and 2019 data after downsampling.
Var | VCP1 | VCP2 | VCP3
AF | 103,549; 103,734; 101,886 | 97,883; 103,752; 102,354 | 102,715; 103,692; 99,658
Mean | 0.69; 1.03; 1.47 | 2.14; 2.04; 1.89 | 1.03; 1.85; 1.54
SD | 0.28; 0.56; 0.53 | 0.88; 0.52; 0.65 | 0.45; 0.80; 0.74
Min | −0.05; −0.05; −0.05 | −0.05; −0.04; −0.04 | −0.02; −0.01; −0.01
25% | 0.58; 0.72; 1.21 | 2.00; 1.89; 1.56 | 0.83; 1.55; 1.07
50% | 0.67; 0.85; 1.51 | 2.25; 2.12; 2.00 | 0.96; 1.82; 1.36
75% | 0.84; 1.12; 1.84 | 2.47; 2.36; 2.34 | 1.08; 2.00; 1.73
Max | 4.53; 21.28; 5.81 | 24.41; 6.63; 7.45 | 2.55; 5.94; 17.25
AF—absolute frequency, SD—standard deviation, Min—minimum, Max—maximum.
Table 5. Statistical data related to previous intervals (R1; R2; R3).
Var | VCP 1 | VCP 2 | VCP 3
AF | 22,176; 76,479; 30,240 | 22,176; 76,525; 30,240 | 22,176; 76,507; 30,240
Mean | 0.65; 0.86; 1.92 | 1.98; 2.05; 2.09 | 1.83; 1.87; 1.56
SD | 0.18; 0.40; 0.43 | 0.43; 0.53; 0.48 | 0.36; 0.89; 0.38
Min | −0.05; −0.05; −0.04 | −0.04; −0.04; −0.04 | −0.01; −0.01; −0.01
25% | 0.59; 0.71; 1.91 | 1.89; 1.88; 2.03 | 1.80; 1.42; 1.48
50% | 0.66; 0.83; 2.02 | 2.09; 2.12; 2.20 | 1.87; 1.81; 1.64
75% | 0.73; 0.97; 2.11 | 2.21; 2.39; 2.33 | 1.96; 2.01; 1.77
Max | 1.97; 21.28; 3.27 | 2.69; 6.63; 2.94 | 2.37; 5.94; 2.18
AF—absolute frequency, SD—standard deviation, Min—minimum, Max—maximum.
Table 6. Statistical data related to the second interval (R2), after linear interpolation.
Var | VCP 1 | VCP 2 | VCP 3
AF | 76,608 | 76,608 | 76,608
Mean | 0.87 | 2.05 | 1.87
SD | 0.43 | 0.53 | 0.89
Min | −0.05 | −0.04 | −0.01
25% | 0.71 | 1.88 | 1.42
50% | 0.83 | 2.12 | 1.81
75% | 0.97 | 2.39 | 2.01
Max | 21.28 | 6.63 | 5.94
AF—absolute frequency, SD—standard deviation, Min—minimum, Max—maximum.
Table 7. Results of the ADF test.
Variable Analysed | VCP1 | VCP2 | VCP3
ADF test statistic | −18.90 | −11.43 | −6.16
p-value | 0.0 | 6.53 × 10⁻²¹ | 7.18 × 10⁻⁸
Table 8. Advantages and disadvantages of the proposed method.
Advantages | Disadvantages
Innovative and simple method | Need to determine the best time windows
Universal approach | Need for more in-depth study
Effective in noisy data | Not immune to external factors

Share and Cite

MDPI and ACS Style

Falcão, D.; Reis, F.; Farinha, J.; Lavado, N.; Mendes, M. Fault Detection in Industrial Equipment through Analysis of Time Series Stationarity. Algorithms 2024, 17, 455. https://doi.org/10.3390/a17100455

