Article

A Novel Dynamic Process Monitoring Algorithm: Dynamic Orthonormal Subspace Analysis

1
School of Information and Control Engineering, Liaoning Petrochemical University, Fushun 113005, China
2
Institute of Intelligence Science and Engineering, Shenzhen Polytechnic, Shenzhen 518055, China
3
Faculty of Engineering, Technology & Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
4
Postgraduate Department, Universitas Bina Darma, Palembang 30111, Indonesia
*
Author to whom correspondence should be addressed.
Processes 2023, 11(7), 1935; https://doi.org/10.3390/pr11071935
Submission received: 21 April 2023 / Revised: 1 June 2023 / Accepted: 8 June 2023 / Published: 27 June 2023
(This article belongs to the Section Process Control and Monitoring)

Abstract
Orthonormal subspace analysis (OSA) was proposed to handle the subspace decomposition and principal component selection issues in traditional key performance indicator (KPI)-related process monitoring methods such as partial least squares (PLS) and canonical correlation analysis (CCA). However, it is not appropriate to apply the static OSA algorithm to a dynamic process, since OSA pays no attention to the auto-correlation relationships among variables. Therefore, a novel dynamic OSA (DOSA) algorithm is proposed to capture the auto-correlative behavior of process variables while still monitoring KPIs accurately. This study also discusses whether it is necessary to expand the dimensions of both the process variables matrix and the KPI matrix in DOSA. Test results on a mathematical model and the Tennessee Eastman (TE) process show that DOSA can address the dynamic issue and retain the advantages of OSA.

1. Introduction

Process monitoring and fault detection are two important aspects of process systems engineering because they are key to ensuring the safety and normal operation of industrial processes [1]. As such, traditional data-driven algorithms such as principal component analysis (PCA) [2] and independent component analysis (ICA) [3] have been proposed to monitor processes and improve product quality. PCA and ICA can effectively detect faults in a process. However, in the actual production process at a modern industrial plant, a large number of controllers, sensors, and actuators are widely distributed, and not all data need to be analyzed [4,5]. That is to say, not all process variables directly affect safety and product quality. The information highly relevant to product quality and economic benefit is captured by key performance indicators (KPIs), whose role should be emphasized in process monitoring [6,7]. It is worth mentioning that both PCA and ICA monitor KPI-related and KPI-unrelated components simultaneously, and they perform poorly in detecting faults in KPI-related components because the fault information might be submerged in the disturbances of numerous KPI-unrelated components. As such, KPI-related process monitoring algorithms such as partial least squares (PLS) [8] and canonical correlation analysis (CCA) [9] have developed rapidly in recent decades, and this development is essential for ensuring production safety and obtaining superior operating performance.
However, there are still some drawbacks to these traditional KPI algorithms. First, the residual subspace calculated by the PLS algorithm is non-orthogonal to the principal components (PCs) subspace, which means that some KPI-related information may leak into the residual spaces [10,11]. Second, the CCA algorithm requires KPIs to be available during both offline training and online monitoring stages as it uses KPI variables to construct indices [12,13]. Third, both PLS and CCA algorithms are unable to extract PCs [14,15].
To address the above issues in traditional KPI-related algorithms, Lou et al. proposed orthonormal subspace analysis (OSA) [16]. OSA divides the process data and KPIs into three orthonormal subspaces, namely, the subspace of KPI-related components, that of KPI-unrelated components in the process data, and that of process-unrelated components in the KPIs. Furthermore, the cumulative percent variance (CPV) method is used to select the number of PCs in the OSA algorithm. Because OSA can monitor each subspace independently, it is not limited by the availability of KPIs during the offline and online stages.
The original OSA was proposed for addressing the monitoring issues in static process problems, so it assumes that the observations are time-independent. However, dynamic features widely exist in most industrial processes, and, hence, the auto-correlation relationships in variables interfere with the extraction of the KPI-related information [17,18]. Therefore, the subspaces obtained by the OSA algorithm are not orthonormal in dynamic processes.
The “time lag shift” method, which lists the historical data as additional variables to the original variable set, is an effective measure for handling the dynamic issue, and it has been adopted in the PLS and CCA algorithms, i.e., the dynamic PLS (DPLS) and dynamic CCA (DCCA) algorithms. Therefore, in this paper, the “time lag shift” method is also combined with the OSA algorithm, named the dynamic OSA (DOSA) algorithm, and is applied to the Tennessee Eastman (TE) process to illustrate its efficiency.
The contributions of this study are as follows. First, this study proposes DOSA for dealing with the low detection rates caused by process dynamics. DOSA can determine whether a fault in a dynamic process originates from KPI-related or KPI-unrelated process variables or from the measurement of the KPIs. Second, this study discusses whether it is necessary to expand the dimensions of both the process variables matrix and the KPI matrix in order to reduce the computation. At the same time, a new method to select the time lag number in the “time lag shift” structure is proposed. Additionally, we analyze the impact of the sampling period on DOSA. Third, we place an emphasis on the real-time nature of information and design new monitoring indices. Finally, this study compares the detection rates of the OSA, DOSA, DPLS, and DCCA algorithms.
The remainder of this paper is organized into five sections. Section 2 discusses the classical OSA algorithm and the “time lag shift” method. Section 3 proposes DOSA for dynamic process monitoring. Section 4 compares the DOSA algorithm with other KPI-related algorithms on the TE process. Section 5 reviews the contributions of this work.

2. Methods

2.1. Orthonormal Subspace Analysis

Here, we take $X \in \mathbb{R}^{n \times s}$ as the process variables matrix (where $n$ is the number of samples and $s$ is the number of process variables), and the standard PLS identification technique introduces the KPI matrix $Y \in \mathbb{R}^{n \times r}$ (where $r$ is the number of KPIs). OSA decomposes both $X$ and $Y$ into the following bilinear terms:

$$\begin{cases} X = T_{com}\,\Xi_X^{T} + E_{OSA} \\ Y = T_{com}\,\Xi_Y^{T} + F_{OSA}, \end{cases} \tag{1}$$

where $T_{com} \in \mathbb{R}^{n \times \phi}$ ($\phi$ is the number of principal components) holds the common latent variables shared by $X$ and $Y$; $\Xi_X \in \mathbb{R}^{s \times \phi}$ and $\Xi_Y \in \mathbb{R}^{r \times \phi}$ are the transformation matrices; and $E_{OSA} \in \mathbb{R}^{n \times s}$ and $F_{OSA} \in \mathbb{R}^{n \times r}$ are the residual matrices.
OSA, along with PLS and CCA, is therefore called a ‘KPI-related algorithm’. Unlike with PLS and CCA, the extracted subspaces of OSA are proven to be orthogonal [16]. That is to say, $T_{com}$, $E_{OSA}$, and $F_{OSA}$ in Equation (1) are orthogonal and, most importantly, can be monitored independently.

2.2. The “Time Lag Shift” Method

The OSA algorithm in Section 2.1 implicitly assumes that the current observations are statistically independent of the historical observations [19,20]. That is to say, OSA only considers the correlation between variables at the same time and ignores the mutual influence of variables across different times. However, most data from industrial processes show some degree of dynamic behavior; that is, the sampling data at different times are correlated. For such a process, the static OSA algorithm is not applicable.
The most common way to address this problem is to use an autoregressive (AR) model to describe the dynamic characteristics. Similarly, the OSA algorithm can be extended to account for serial correlations by augmenting each observation vector, $X(t) \in \mathbb{R}^{1 \times s}$ or $Y(t) \in \mathbb{R}^{1 \times r}$, at the current time $t$ with the previous $l_x$ or $l_y$ observations in the following manner [21]:

$$\begin{cases} \tilde{X}(t) = [X(t), X(t-1), \ldots, X(t-l_x)] \in \mathbb{R}^{1 \times [(l_x+1) \times s]} \\ \tilde{Y}(t) = [Y(t), Y(t-1), \ldots, Y(t-l_y)] \in \mathbb{R}^{1 \times [(l_y+1) \times r]}. \end{cases} \tag{2}$$

As Equation (2) shows, the first $s$ columns of $\tilde{X}(t)$ and the first $r$ columns of $\tilde{Y}(t)$ represent the data at the current time, and the rest represent the data at past times. For $n$ sampling times, one obtains the augmented matrices $\tilde{X} \in \mathbb{R}^{n \times [(l_x+1) \times s]}$ and $\tilde{Y} \in \mathbb{R}^{n \times [(l_y+1) \times r]}$.
By performing dimension expansion on the data matrix in Equation (2), the static OSA methods can be used to analyze the autocorrelation, cross-correlation, and hysteresis correlation among the data synchronously. That is to say, X ˜ and Y ˜ will be decomposed by OSA. More details can be found in Section 3.
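As a concrete illustration, the augmentation of Equation (2) can be sketched in a few lines of NumPy (the function name and array layout are our own; rows without a full lag history are simply dropped):

```python
import numpy as np

def time_lag_shift(X, l):
    """Augment each observation X[t] with its l predecessors, as in
    Equation (2): row t becomes [X[t], X[t-1], ..., X[t-l]].
    The first l rows, which lack a full history, are dropped."""
    n = X.shape[0]
    return np.hstack([X[l - i : n - i] for i in range(l + 1)])

# Example: 100 samples of 4 process variables, lag l_x = 3
X = np.random.randn(100, 4)
X_aug = time_lag_shift(X, 3)   # shape (97, (3 + 1) * 4) = (97, 16)
```

The same function applied to the KPI matrix with lag $l_y$ yields $\tilde{Y}$.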

3. Dynamic Orthonormal Subspace Analysis

3.1. Determination of the Lag Number

As the traditional lag determination methods, such as the Akaike information criterion (AIC) [22] and the Bayesian information criterion (BIC) [23], are only suitable for a steady state, a new lag determination method should be proposed for DOSA.
Suppose the relationship between the data at the current time and the past time is as follows:
$$\begin{cases} X(t) = X(t-1)A_1 + X(t-2)A_2 + \cdots + X(t-l_x)A_{l_x} + D_x(t) = \bar{X}(t)\bar{A} + D_x(t) \\ Y(t) = Y(t-1)B_1 + Y(t-2)B_2 + \cdots + Y(t-l_y)B_{l_y} + D_y(t) = \bar{Y}(t)\bar{B} + D_y(t), \end{cases} \tag{3}$$

where $\bar{X}(t) = [X(t-1), X(t-2), \ldots, X(t-l_x)] \in \mathbb{R}^{1 \times (l_x \times s)}$, $\bar{Y}(t) = [Y(t-1), Y(t-2), \ldots, Y(t-l_y)] \in \mathbb{R}^{1 \times (l_y \times r)}$, $\bar{A} = [A_1; A_2; \ldots; A_{l_x}] \in \mathbb{R}^{(l_x \times s) \times s}$, and $\bar{B} = [B_1; B_2; \ldots; B_{l_y}] \in \mathbb{R}^{(l_y \times r) \times r}$. $D_x(t) \in \mathbb{R}^{1 \times s}$ and $D_y(t) \in \mathbb{R}^{1 \times r}$ denote the disturbances introduced at each time, which are statistically independent of the past data. The coefficient matrices $\bar{A}$ and $\bar{B}$ can be estimated by least squares as follows:

$$\begin{cases} \bar{A} = [\bar{X}^{T}(t)\bar{X}(t)]^{-1}\bar{X}^{T}(t)X(t) \\ \bar{B} = [\bar{Y}^{T}(t)\bar{Y}(t)]^{-1}\bar{Y}^{T}(t)Y(t). \end{cases} \tag{4}$$

Therefore, $D_x(t)$ and $D_y(t)$ can be estimated as follows:

$$\begin{cases} D_x(t) = X(t) - \bar{X}(t)\bar{A} = X(t) - \bar{X}(t)[\bar{X}^{T}(t)\bar{X}(t)]^{-1}\bar{X}^{T}(t)X(t) \\ D_y(t) = Y(t) - \bar{Y}(t)\bar{B} = Y(t) - \bar{Y}(t)[\bar{Y}^{T}(t)\bar{Y}(t)]^{-1}\bar{Y}^{T}(t)Y(t). \end{cases} \tag{5}$$
The optimal lag number is then the one that minimizes the following indices:

$$\begin{cases} Lag_x = \sum_{t=1}^{n} \|D_x(t)\|^2 = \|X - \bar{X}[\bar{X}^{T}\bar{X}]^{-1}\bar{X}^{T}X\|^2 \\ Lag_y = \sum_{t=1}^{n} \|D_y(t)\|^2 = \|Y - \bar{Y}[\bar{Y}^{T}\bar{Y}]^{-1}\bar{Y}^{T}Y\|^2, \end{cases} \tag{6}$$

beyond which the indices no longer change significantly as the time lag is increased further.
Unlike $X(t)$ and $Y(t)$, $D_x(t)$ and $D_y(t)$ are time-uncorrelated and independent of the initial states of $X(t)$ and $Y(t)$, so they can be applied to dynamic processes in both steady and unsteady states.
Additionally, we set up an index to quantify what ‘the value of $Lag_x$ or $Lag_y$ would not change significantly’ means, as shown in Equation (7):

$$RC\% = \frac{|Lag_{i-1} - Lag_i|}{Lag_{i-1}} \times 100\%, \tag{7}$$

where $Lag_i$ represents the value of $Lag_x$ or $Lag_y$ when the lag number is $l_x$ ($l_x > 1$) or $l_y$ ($l_y > 1$), and $Lag_{i-1}$ represents its value when the lag number is $l_x - 1$ or $l_y - 1$. Once the value of $RC\%$ falls below 5%, we say that ‘the value of $Lag_x$ or $Lag_y$ would not change significantly’.
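The lag-selection rule of Equations (6) and (7) can be prototyped as follows. This is a sketch under our own naming; the residual is obtained by stacked least squares over all samples rather than per time step:

```python
import numpy as np

def lag_index(X, l):
    """Lag index of Equation (6): squared Frobenius norm of the
    residual after regressing X(t) on X(t-1), ..., X(t-l)."""
    n = X.shape[0]
    past = np.hstack([X[l - i : n - i] for i in range(1, l + 1)])  # past data
    current = X[l:]                                                # current data
    coef, *_ = np.linalg.lstsq(past, current, rcond=None)
    return float(np.sum((current - past @ coef) ** 2))

def select_lag(X, l_max=6, tol=0.05):
    """Smallest lag l whose relative change RC% (Equation (7))
    falls below tol when the lag is increased from l to l + 1."""
    lag = [lag_index(X, l) for l in range(1, l_max + 1)]
    for l in range(1, l_max):
        rc = abs(lag[l - 1] - lag[l]) / lag[l - 1]
        if rc < tol:
            return l
    return l_max
```

Applying `select_lag` to the process data gives $l_x$; applying it to the KPI data gives $l_y$.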

3.2. DOSA Procedure

  • Step 1. Apply the “time lag shift” method described in Section 2.2. Calculate the lag numbers $l_x$ and $l_y$ using Equation (6). Then, augment $X(t)$ and $Y(t)$ with the previous observations as shown in Equation (2). In doing so, we obtain the augmented matrices $\tilde{X}$ and $\tilde{Y}$ with $n$ samples.
  • Step 2. Apply the traditional OSA algorithm described in Section 2.1.
(a)
Calculate the Y-related component $X_{OSA} \in \mathbb{R}^{n \times [(l_x+1) \times s]}$ and the X-related component $Y_{OSA} \in \mathbb{R}^{n \times [(l_y+1) \times r]}$ using Equation (8). $X_{OSA}$ and $Y_{OSA}$ are both called ‘the common component’ and are shown to be equal in reference [16]:

$$\begin{cases} X_{OSA} = \tilde{Y}(\tilde{Y}^{T}\tilde{Y})^{-1}\tilde{Y}^{T}\tilde{X} \\ Y_{OSA} = \tilde{X}(\tilde{X}^{T}\tilde{X})^{-1}\tilde{X}^{T}\tilde{Y}. \end{cases} \tag{8}$$
We tend to focus on process variables related to KPIs in industrial processes. By extracting common components and monitoring them (Step 3), one can know whether there are faults in the variables related to KPIs.
(b)
Calculate the non-Y-related component $E_{OSA} \in \mathbb{R}^{n \times [(l_x+1) \times s]}$ and the non-X-related component $F_{OSA} \in \mathbb{R}^{n \times [(l_y+1) \times r]}$ as

$$\begin{cases} E_{OSA} = \tilde{X} - X_{OSA} \\ F_{OSA} = \tilde{Y} - Y_{OSA}, \end{cases} \tag{9}$$

where $E_{OSA}$ and $F_{OSA}$ are both called ‘the unique component’. By extracting and monitoring the unique components (Step 3), one can know whether there are faults in the variables unrelated to KPIs.
(c)
Extract the PCs in $X_{OSA}$ using the PCA decomposition because the variables in $X_{OSA}$ might be highly correlated:

$$\begin{cases} X_{OSA} = T_{com}^{x} P_{com}^{T} + E_f \\ T_{com}^{x} = X_{OSA} P_{com}, \end{cases} \tag{10}$$

where $T_{com}^{x} \in \mathbb{R}^{n \times k}$ represents the score matrix of the common component; $P_{com} \in \mathbb{R}^{[(l_x+1) \times s] \times k}$ is the loading matrix of the common component; $E_f \in \mathbb{R}^{n \times [(l_x+1) \times s]}$ is the residual matrix; and $k$ is the number of PCs. In this step, the PCs are selected using the cumulative percent variance (CPV) method, with the threshold value following the usual PCA criterion, e.g., 85%.
In theory, the score matrices of the common components $X_{OSA}$ and $Y_{OSA}$ are equal unless there is something wrong with the relationship between X and Y. We use the sum of squares of the difference between the score matrices to monitor whether there are faults in this relationship (Step 3). Similarly to Equation (10), the score matrix of $Y_{OSA}$ is $T_{com}^{y} = Y_{OSA}P_{com}$.
  • Step 3. Monitoring indices calculation.
Taking into account the real-time nature of the information, PCA monitoring is not directly performed on $X_{OSA}$, $E_{OSA}$, and $F_{OSA}$ because these components contain a great amount of past-time information. The calculation of the indices is as follows:
(a)
The first $s$ columns of $X_{OSA}$ are monitored by the PCA approach and are used to generate the $T_C^2$ and $SPE_C$ indices. That is to say, we only monitor the data at the current time.
(b)
Similarly, the first $s$ columns of $E_{OSA}$ and the first $r$ columns of $F_{OSA}$ are monitored by the PCA approach and are used to generate the indices $T_E^2$, $T_F^2$, $SPE_E$, and $SPE_F$.
(c)
Furthermore, if there is something wrong with the relationship between X and Y, there will be significant differences between the score matrices $T_{com}^{x}$ and $T_{com}^{y}$. Therefore, the following index can be used to test for an abnormal relationship:

$$SPE_{XY} = (T_{com}^{x} - T_{com}^{y})(T_{com}^{x} - T_{com}^{y})^{T}. \tag{11}$$
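Step 3 can be sketched as follows. We use the per-sample form of Equation (11) (the diagonal of the matrix product), and the $T^2$/SPE statistics are the standard PCA ones; the names and training-stage simplifications are our own assumptions:

```python
import numpy as np

def t2_spe(Z, cpv=0.85):
    """Standard PCA T^2 and SPE statistics for the current-time
    columns of a component matrix (training-stage version)."""
    Zc = Z - Z.mean(axis=0)
    _, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    lam = s**2 / (len(Z) - 1)                   # PC variances
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), cpv)) + 1
    T = Zc @ Vt[:k].T                           # scores
    t2 = np.sum(T**2 / lam[:k], axis=1)         # Hotelling's T^2
    spe = np.sum((Zc - T @ Vt[:k])**2, axis=1)  # squared residual
    return t2, spe

def spe_xy(T_com_x, T_com_y):
    """Per-sample relationship index from Equation (11): squared
    distance between the two score vectors."""
    d = T_com_x - T_com_y
    return np.sum(d * d, axis=1)
```

In practice, `t2_spe` would be applied to the first $s$ columns of $X_{OSA}$ and $E_{OSA}$ and to the first $r$ columns of $F_{OSA}$, with control limits set from normal operating data.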
Figure 1 summarizes the procedure presented above.

3.3. A Dynamic Model Analyzed with DOSA

3.3.1. Dynamic Model

To analyze the characteristics of the DOSA method and compare its performance with the OSA algorithm, we use a simple simulated process to illustrate their monitoring performance. Consider a large-scale process in which each single subprocess can be expressed using a time-invariant state-space model as follows:

$$\begin{cases} X(t) = C\,[X(t-1), X(t-2), X(t-3)] + D\,[s_1, s_2] + \xi \\ Y(t) = E\,[Y(t-1), Y(t-2), Y(t-3)] + F\,[s_2, s_3] + \zeta, \end{cases} \tag{12}$$

where $s_1$, $s_2$, and $s_3$ are independent Gaussian-distributed vectors ($s_1$ enters only $X$, $s_3$ enters only $Y$, and $s_2$ is shared); $\xi$ and $\zeta$ are noise components that are independent of the process measurements; and $C$, $E$ and $D$, $F$ are the coefficient matrices of the dynamic and static parts, respectively. Here, we take three algorithms into consideration: OSA; the DOSA that expands the dimension of $X$ only, denoted DOSA-X; and the DOSA that expands the dimensions of both $X$ and $Y$, denoted DOSA-XY.
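A data generator in the spirit of Equation (12) might look as follows. All coefficient values and the exact source-to-output wiring here are illustrative assumptions, not the paper's parameters; $s_2$ is the source shared by $X$ and $Y$:

```python
import numpy as np

def simulate(n, C, D, E, F, noise=0.05, seed=0):
    """Third-order autoregressions for X(t) and Y(t) driven by the
    Gaussian sources s1, s2 (for X) and s2, s3 (for Y), plus noise,
    following the structure of Equation (12)."""
    rng = np.random.default_rng(seed)
    s_dim, r_dim = C.shape[1], E.shape[1]
    X = np.zeros((n, s_dim))
    Y = np.zeros((n, r_dim))
    for t in range(3, n):
        s = rng.standard_normal(3)                    # s1, s2, s3
        x_past = np.concatenate([X[t-1], X[t-2], X[t-3]])
        y_past = np.concatenate([Y[t-1], Y[t-2], Y[t-3]])
        X[t] = x_past @ C + s[:2] @ D + noise * rng.standard_normal(s_dim)
        Y[t] = y_past @ E + s[1:] @ F + noise * rng.standard_normal(r_dim)
    return X, Y
```

Here $C \in \mathbb{R}^{3s \times s}$, $E \in \mathbb{R}^{3r \times r}$, $D \in \mathbb{R}^{2 \times s}$, and $F \in \mathbb{R}^{2 \times r}$; the AR coefficients must keep the spectral radius below 1 so the process is stable.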

3.3.2. The Optimal Numbers of Time Lag

To determine the lag numbers, the dynamic model is fitted to the normal data with several different lag numbers. Here, $l_x$ and $l_y$ are the lag numbers for matrices X and Y, respectively. In this work, we set $l_x \in \{0, 1, \ldots, 6\}$ and $l_y \in \{0, 1, \ldots, 6\}$, and several values of $Lag_x$ and $Lag_y$ are shown in Figure 2 and Figure 3.
From the analyses shown in Figure 2 and Figure 3, the values of $Lag_x$ and $Lag_y$ decreased sharply until $l_x = 3$ and $l_y = 3$, after which neither would decrease significantly if we continued increasing $l_x$ and $l_y$. Therefore, the optimal lag numbers were $l_x = 3$ and $l_y = 3$, and this can be seen intuitively in the diagrams. Furthermore, several values of $Lag_x$, $Lag_y$, and $RC\%$ are listed in Table 1 and Table 2.
From the data presented in Table 1 and Table 2, the values of $RC\%$ were less than 5% once $l_x$ and $l_y$ increased beyond 3. This also means that the optimal lag numbers were $l_x = 3$ and $l_y = 3$, which is consistent with the true model order.
Here, we take the traditional BIC method, which has a larger penalty than the AIC, as an example to calculate the optimal number of this model. When selecting the best model from a set of alternative models, the model with the lowest BIC should be chosen.
From the data presented in Table 3 and Table 4, the BIC selects the lag numbers $l_x = 2$ and $l_y = 3$. However, a third-order lag was introduced, as mentioned in Section 3.3.1. Therefore, instead of the BIC, the method proposed in this work was applied to test the algorithm.

3.3.3. Testing Results

(a) Fault 1: a step change with an amplitude of 3 in $s_1$. The static parameter $s_1$ belongs to the unique part of $X$. The detection rates and false alarm rates of the three algorithms are shown in Table 5. In Table 5, the detection rate of $T_E^2$ was extremely high, so we could correctly infer that the fault occurred in the unique part of $X$. In other words, the fault lay in the process variables rather than in the measurement of the KPIs. More importantly, the detection rates of the two dynamic monitoring methods were higher than that of OSA. Thus, the dynamic issue could be solved by DOSA in this case. Furthermore, expanding the dimensions of both $X$ and $Y$ worked better than expanding the dimension of $X$ alone. It can be hypothesized that expanding the dimension of the matrix improves the sensitivity of the algorithm to the fault.
(b) Fault 2: a step change with an amplitude of 3 in $s_3$. The static parameter $s_3$ belongs to the unique part of $Y$. The results are shown in Table 6. As can be seen in Table 6, even after expanding the dimension of $X$, the detection rates of all of the indices were extremely low. The index $T_F^2$ performed better only when the dimensions of both $X$ and $Y$ were expanded. This means that the fault occurred in the unique part of $Y$; that is to say, there was a fault in the measurement of the KPIs rather than in the process variables. In addition, the detection rate of DOSA-XY was considerably higher than those of the other two algorithms. Thus, expanding the dimensions of both data matrices performs well in dynamic processes while also solving the dynamic issue.
(c) Fault 3: a step change with an amplitude of 3 in $s_2$. The static parameter $s_2$ belongs to the common part of both $X$ and $Y$. The results are shown in Table 7. As can be seen in Table 7, we could not judge the location of the fault without expanding the dimension of $Y$, because the detection rates of most of the indices were about 50%. The index $T_C^2$ performed better when the dimensions of both $X$ and $Y$ were expanded. This means that the fault occurred in the common part of both $X$ and $Y$; that is to say, there was a fault in both the process variables and the measurement of the KPIs. In addition, the detection rate of DOSA-XY was considerably higher than those of the other two algorithms, again showing that expanding the dimensions of both data matrices handles the dynamic issue well.
(d) Fault 4: the matrix $D$ changed to $D_f$:

$$D = \begin{bmatrix} 0.1 & 0.1 & 0.1 & 0 \\ 0.2 & 0.1 & 0 & 0.1 \end{bmatrix}, \qquad D_f = \begin{bmatrix} 0.1 & 0.1 & 0.3 & 0 \\ 0.2 & 0.1 & 0 & 0.1 \end{bmatrix}. \tag{13}$$
Generally, the coefficient matrix $D$ affects the relationship between $X$ and $Y$. The results are shown in Table 8. In Table 8, the index $SPE_{XY}$, which specifically detects the relationship between $X$ and $Y$, performed well. We could infer that there was a high probability of a fault in $D$ or $F$. Moreover, the detection rate of DOSA-XY was considerably higher than those of the other two algorithms. Thus, expanding the dimensions of both $X$ and $Y$ performs well while also solving the dynamic issue.

3.3.4. The Influence of Sampling Period on DOSA

In sum, it is necessary to expand the dimensions of both $X$ and $Y$. In this section, we take the effect of the sampling rate on the DOSA algorithm into account. The dynamic models and faults in Section 3.3.1 and Section 3.3.3 still apply to this section.
Firstly, this section discusses the effect of doubling the sampling period on the selection of the lag number. We still set $l_x \in \{0, 1, \ldots, 6\}$ and $l_y \in \{0, 1, \ldots, 6\}$; several values of $Lag_x$ and $Lag_y$ and the corresponding rates of change are listed in Table 9 and Table 10.
As shown in Table 9 and Table 10, the optimal lag numbers were l x = 1 and l y = 1 because the values of R C % were less than 5% when l x and l y gradually increased from 1. That is to say, the optimal lag numbers were affected by the sampling period. Thus, the effect of the sampling period on the detection rates of the DOSA was also a concern.
(a)
Fault 1: the fault occurs in the unique part of X . The experimental comparison of the primitive and doubled sampling periods is shown in Table 11. As also shown in the table, the detection rate of T E 2 decreased by about 9%, and the detection rate of S P E E decreased by about 4%.
(b)
Fault 2: the fault occurs in the unique part of Y . The experimental comparison of the primitive and doubled sampling periods is shown in Table 12. As also shown in the table, the detection rate of T F 2 decreased by about 8%, and the detection rate of S P E F decreased by about 3%.
(c)
Fault 3: the fault occurs in the common part of X and Y . The experimental comparison of the primitive and doubled sampling periods is shown in Table 13. As also shown in the table, the detection rate of T C 2 decreased by about 8%, and the detection rate of S P E C decreased by about 5%.
(d)
Fault 4: the fault occurs in the coefficient matrix D , which affects the relationship of X and Y . The experimental comparison of the primitive and doubled sampling periods is shown in Table 14. As can be seen in Table 14, there was no significant change in the detection rate of S P E X Y .
Based on the above testing results, we can see that the change in sampling period affected the determination of the lag numbers. The detection rates were also slightly affected. That is to say, the DOSA algorithm is sensitive to the change in sampling period because the AR model, which is constructed by the DOSA, will be different with the change in sampling period. We hope to solve this problem as we continue our improvement of this project in the future.

3.4. Conclusion

As shown by the above results, we can conclude the following:
(1)
It is necessary to expand the dimension of both X and Y .
(2)
DOSA can adequately solve the dynamic issue.
(3)
DOSA is able to directly locate the fault. Thus, we can know whether a fault actually occurs in KPI-related process variables, KPI-unrelated process variables, or the measurement of the KPIs.
(4)
DOSA is sensitive to the change in sampling period.

4. Comparison Study Based on Tennessee Eastman Process

4.1. Tennessee Eastman Process

In this section, we would like to briefly introduce an industrial benchmark of the Tennessee Eastman (TE) process [24,25]. All the discussed methods will be further applied to demonstrate their efficiencies. The TE process model is a realistic simulation program of a chemical plant, which is widely accepted as a benchmark for control and monitoring studies [26]. The flow diagram of the process is described in [27,28], and the FORTRAN code of the process is available on the Internet. The process has two products from four reactants as shown in Equation (14):
$$\begin{cases} A(g) + C(g) + D(g) \rightarrow G(liq) \\ A(g) + C(g) + E(g) \rightarrow H(liq) \\ A(g) + E(g) \rightarrow F(liq) \\ 3D(g) \rightarrow 2F(liq). \end{cases} \tag{14}$$
The TE process has 52 variables, including 41 process variables and 11 manipulated variables. Table 15 lists a set of 15 known faults introduced to the TE process. Training and test sets have been collected by running 25 and 48 h simulations, respectively, in which faults have been introduced 1 and 8 h into the simulation, and each variable is sampled every 3 min. Thus, training sets consist of 500 samples, whereas test sets contain 960 samples per set of simulation [29,30].

4.2. The Numbers of Time Lag in TE Process

Here, $L_x$ and $L_y$ are the lag numbers in the augmented process variables matrix and the augmented KPI matrix, respectively. In this work, we set $L_x \in \{0, 1, \ldots, 6\}$ and $L_y \in \{0, 1, \ldots, 6\}$. Several values of $Lag_x$ and $Lag_y$ and their corresponding rates of change are listed in Table 16 and Table 17.
From the data presented in Table 16 and Table 17, the values of $Lag_x$ and $Lag_y$ decreased until $L_x = 3$ and $L_y = 3$, after which the rates of change were less than 5%. That is to say, $Lag_x$ and $Lag_y$ would not decrease significantly if $L_x$ and $L_y$ continued to increase. Therefore, the optimal lag numbers were $L_x = 3$ and $L_y = 3$.

4.3. Simulation Study

We tend to focus on the ability to detect KPI-related faults in the TE process. Table 18 lists a set of nine KPI-related faults introduced to the TE process, together with the detection and false alarm rates of four algorithms: OSA, DOSA, dynamic CCA (DCCA), and dynamic PLS (DPLS).
Considering the data presented in Table 18, DOSA shows better performance compared to the other algorithms for KPI-related faults. Meanwhile, the DOSA algorithm showed a great advantage in Faults 1–2, 8, and 12–13 over the OSA algorithm. From this analysis, it can be concluded that the DOSA algorithm performs better than the OSA algorithm on dynamic problems. Figure 4 shows the simulation diagram of OSA and DOSA monitoring in these faults. The blue line represents the value of the statistic, and the red line represents the value of the control limit. When the blue line is higher than the red line, a fault has occurred. It is obvious that the DOSA algorithm is more sensitive to these faults.

5. Conclusions

In this paper, we have presented an improved algorithm of OSA for conducting large-scale process monitoring, called the DOSA algorithm, and compared its performance against DPLS and DCCA, which are KPI-related algorithms that are also used to solve dynamic problems.
Considering the testing results of the dynamic model, this article showed that it is necessary to expand the dimensions of both the process variables matrix and the KPI matrix when using the DOSA algorithm. Furthermore, the DOSA algorithm is able to adequately solve the dynamic issue; thus, we can know whether a fault actually occurs in the KPI-related or KPI-unrelated process variables or in the measurement of the KPIs.
The comparative study was conducted using the Tennessee Eastman benchmark process, and we can conclude that the DOSA algorithm achieves better detection rates of faults from the analysis of the results obtained. However, the DOSA algorithm is sensitive to the change in sampling period. We intend to solve this problem as we continue the improvement of this project in the future.

Author Contributions

Conceptualization, W.H.; methodology, W.H. and Z.L.; validation, S.L.; formal analysis, W.H.; resources, Y.W. and S.L.; writing—original draft preparation, W.H.; writing—review and editing, Z.L. and W.H.; visualization, W.H.; supervision, S.L., X.J., and S.D.; project administration, Z.L.; funding acquisition, S.L. and Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Guangdong Province, China (NO. 2022A1515011040), the Natural Science Foundation of Shenzhen, China (NO. 20220813001358001) and the Young Talents program offered by the Department of Education of Guangdong Province, China (2021KQNCX210).

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhu, J.; Jiang, M.; Liu, Z. Fault Detection and Diagnosis in Industrial Processes with Variational Autoencoder: A Comprehensive Study. Sensors 2022, 22, 227.
  2. Zhao, F.; Rekik, I.; Lee, S.W.; Liu, J.; Zhang, J.; Shen, D. Two-Phase Incremental Kernel PCA for Learning Massive or Online Datasets. Complexity 2019, 2019, 5937274.
  3. Zhang, S.; Zhao, C. Hybrid Independent Component Analysis (H-ICA) with Simultaneous Analysis of High-Order and Second-Order Statistics for Industrial Process Monitoring. Chemom. Intell. Lab. Syst. 2019, 185, 47–58.
  4. Qin, Y.; Lou, Z.; Wang, Y.; Lu, S.; Sun, P. An Analytical Partial Least Squares Method for Process Monitoring. Control. Eng. Pract. 2022, 124, 105182.
  5. Yin, S.; Zhu, X.; Kaynak, O. Improved PLS Focused on Key-Performance-Indicator-Related Fault Diagnosis. IEEE Trans. Ind. Electron. 2015, 62, 1651–1658.
  6. Wang, H.; Gu, J.; Wang, S.; Saporta, G. Spatial Partial Least Squares Autoregression: Algorithm and Applications. Chemom. Intell. Lab. Syst. 2019, 184, 123–131.
  7. Tao, Y.; Shi, H.; Song, B.; Tan, S. Parallel Quality-Related Dynamic Principal Component Regression Method for Chemical Process Monitoring. J. Process Control 2019, 73, 33–45.
  8. Sim, S.F.; Jeffrey Kimura, A.L. Partial Least Squares (PLS) Integrated Fourier Transform Infrared (FTIR) Approach for Prediction of Moisture in Transformer Oil and Lubricating Oil. J. Spectrosc. 2019, 2019, e5916506.
  9. Kanatsoulis, C.I.; Fu, X.; Sidiropoulos, N.D.; Hong, M. Structured SUMCOR Multiview Canonical Correlation Analysis for Large-Scale Data. IEEE Trans. Signal Process. 2019, 67, 306–319.
  10. Cai, J.; Dan, W.; Zhang, X. ℓ0-Based Sparse Canonical Correlation Analysis with Application to Cross-Language Document Retrieval. Neurocomputing 2019, 329, 32–45.
  11. Su, C.H.; Cheng, T.W. A Sustainability Innovation Experiential Learning Model for Virtual Reality Chemistry Laboratory: An Empirical Study with PLS-SEM and IPMA. Sustainability 2019, 11, 1027.
  12. Alvarez, A.; Boente, G.; Kudraszow, N. Robust Sieve Estimators for Functional Canonical Correlation Analysis. J. Multivar. Anal. 2019, 170, 46–62.
  13. de Cheveigné, A.; Di Liberto, G.M.; Arzounian, D.; Wong, D.D.E.; Hjortkjær, J.; Fuglsang, S.; Parra, L.C. Multiway Canonical Correlation Analysis of Brain Data. NeuroImage 2019, 186, 728–740.
  14. Tong, C.; Lan, T.; Yu, H.; Peng, X. Distributed Partial Least Squares Based Residual Generation for Statistical Process Monitoring. J. Process Control 2019, 75, 77–85.
  15. Si, Y.; Wang, Y.; Zhou, D. Key-Performance-Indicator-Related Process Monitoring Based on Improved Kernel Partial Least Squares. IEEE Trans. Ind. Electron. 2021, 68, 2626–2636.
  16. Lou, Z.; Wang, Y.; Si, Y.; Lu, S. A Novel Multivariate Statistical Process Monitoring Algorithm: Orthonormal Subspace Analysis. Automatica 2022, 138, 110148.
  17. Song, Y.; Liu, J.; Chu, N.; Wu, P.; Wu, D. A Novel Demodulation Method for Rotating Machinery Based on Time-Frequency Analysis and Principal Component Analysis. J. Sound Vib. 2019, 442, 645–656.
  18. Zhang, C.; Guo, Q.; Li, Y. Fault Detection Method Based on Principal Component Difference Associated with DPCA. J. Chemom. 2019, 33, e3082.
  19. Dong, Y.; Qin, S.J. A Novel Dynamic PCA Algorithm for Dynamic Data Modeling and Process Monitoring. J. Process Control 2018, 67, 1–11.
  20. Oyama, D.; Kawai, J.; Kawabata, M.; Adachi, Y. Reduction of Magnetic Noise Originating from a Cryocooler of a Magnetoencephalography System Using Mobile Reference Sensors. IEEE Trans. Appl. Supercond. 2022, 32, 1–5.
  21. Lou, Z.; Shen, D.; Wang, Y. Two-step Principal Component Analysis for Dynamic Processes Monitoring. Can. J. Chem. Eng. 2018, 96, 160–170.
  22. Sakamoto, W. Bias-reduced Marginal Akaike Information Criteria Based on a Monte Carlo Method for Linear Mixed-effects Models. Scand. J. Stat. 2019, 46, 87–115. [Google Scholar] [CrossRef]
  23. Gu, J.; Fu, F.; Zhou, Q. Penalized Estimation of Directed Acyclic Graphs from Discrete Data. Stat. Comput. 2019, 29, 161–176. [Google Scholar] [CrossRef] [Green Version]
  24. Wan, J.; Li, S. Modeling and Application of Industrial Process Fault Detection Based on Pruning Vine Copula. Chemom. Intell. Lab. Syst. 2019, 184, 1–13. [Google Scholar] [CrossRef]
  25. Huang, J.; Ersoy, O.K.; Yan, X. Fault Detection in Dynamic Plant-Wide Process by Multi-Block Slow Feature Analysis and Support Vector Data Description. ISA Trans. 2019, 85, 119–128. [Google Scholar] [CrossRef]
  26. Plakias, S.; Boutalis, Y.S. Exploiting the Generative Adversarial Framework for One-Class Multi-Dimensional Fault Detection. Neurocomputing 2019, 332, 396–405. [Google Scholar] [CrossRef]
  27. Zhao, H.; Lai, Z. Neighborhood Preserving Neural Network for Fault Detection. Neural Netw. 2019, 109, 6–18. [Google Scholar] [CrossRef]
  28. Suresh, R.; Sivaram, A.; Venkatasubramanian, V. A Hierarchical Approach for Causal Modeling of Process Systems. Comput. Chem. Eng. 2019, 123, 170–183. [Google Scholar] [CrossRef]
  29. Amin, M.T.; Khan, F.; Imtiaz, S. Fault Detection and Pathway Analysis Using a Dynamic Bayesian Network. Chem. Eng. Sci. 2019, 195, 777–790. [Google Scholar] [CrossRef]
  30. Cui, P.; Zhan, C.; Yang, Y. Improved Nonlinear Process Monitoring Based on Ensemble KPCA with Local Structure Analysis. Chem. Eng. Res. Des. 2019, 142, 355–368. [Google Scholar] [CrossRef]
Figure 1. The flow chart of DOSA.
Figure 2. The values of Lag_x under different l_x values.
Figure 3. The values of Lag_y under different l_y values.
Figure 4. The simulation comparison of OSA and DOSA monitoring in Faults 1–2, 8, and 12–13.
Table 1. The values of Lag_x under different l_x values.

         l_x = 0   l_x = 1   l_x = 2   l_x = 3   l_x = 4   l_x = 5   l_x = 6
Lag_x    8000.4    5489.7    3620.8    1540.9    1540      1539.7    1538.6
RC%      /         31.38%    34.04%    57.44%    0.06%     0.19%     0.71%
Table 2. The values of Lag_y under different l_y values.

         l_y = 0   l_y = 1   l_y = 2   l_y = 3   l_y = 4   l_y = 5   l_y = 6
Lag_y    7999.1    6327.2    5864.1    5276.4    5275      5274.3    5270.1
RC%      /         20.9%     7.32%     10.02%    0.03%     0.01%     0.08%
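As a check on the RC% rows above: RC% is the relative change of the Lag statistic between consecutive lag orders. A minimal sketch (the function and variable names are ours), using the Lag_y values from Table 2:

```python
def relative_change(prev_lag, curr_lag):
    # Relative change (in %) of the Lag statistic when the lag order increases by one.
    return (prev_lag - curr_lag) / prev_lag * 100

# Lag_y values from Table 2 (l_y = 0..6)
lag_y = [7999.1, 6327.2, 5864.1, 5276.4, 5275, 5274.3, 5270.1]
rc = [relative_change(a, b) for a, b in zip(lag_y, lag_y[1:])]
print([round(v, 2) for v in rc])  # [20.9, 7.32, 10.02, 0.03, 0.01, 0.08]
```

Beyond l_y = 3 the relative change collapses to well under 1%, so adding further lags contributes little.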
Table 3. The values of BIC under different l_x values.

      l_x = 0      l_x = 1      l_x = 2      l_x = 3      l_x = 4      l_x = 5      l_x = 6
BIC   −11,427.54   −11,423.19   −11,453.90   −11,447.21   −11,442.30   −11,439.81   −11,433.15
Table 4. The values of BIC under different l_y values.

      l_y = 0    l_y = 1    l_y = 2    l_y = 3    l_y = 4    l_y = 5    l_y = 6
BIC   −9208.71   −9214.21   −9237.06   −9237.20   −9230.39   −9223.74   −9218.98
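Read as Python lists, the lag orders suggested by Tables 3 and 4 are simply the BIC minimizers (a sketch; the variable names are ours):

```python
# BIC values from Table 3 (over l_x = 0..6) and Table 4 (over l_y = 0..6);
# the selected lag order is the one that minimizes BIC.
bic_x = [-11427.54, -11423.19, -11453.90, -11447.21, -11442.30, -11439.81, -11433.15]
bic_y = [-9208.71, -9214.21, -9237.06, -9237.20, -9230.39, -9223.74, -9218.98]

l_x = min(range(len(bic_x)), key=bic_x.__getitem__)
l_y = min(range(len(bic_y)), key=bic_y.__getitem__)
print(l_x, l_y)  # 2 3
```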
Table 5. Fault 1 detection rates and false alarm rates of three algorithms.

Methods: OSA
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     1.2     2.4     61.68   15.17   1.8     1       14.97
False alarm rate   1.6     0.6     0.8     0.8     1       0.4     1

Methods: DOSA-X
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     1.6     2.2     87.62   55.69   2       1.2     2.4
False alarm rate   1.8     0.6     1.2     2.2     0.8     0.4     0.8

Methods: DOSA-XY
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     0.8     1.8     93.21   53.29   2.2     1.2     10.58
False alarm rate   0.8     0.4     2.4     1.2     0.4     0.8     0.2
Table 6. Fault 2 detection rates and false alarm rates of three algorithms.

Methods: OSA
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     0.6     1       1       0.2     42.9    18.58   32.73
False alarm rate   1       0.8     1.6     1       0.8     1.2     2

Methods: DOSA-X
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     0.4     0.8     1       0.8     44.3    15.6    44.71
False alarm rate   1       1.6     1.2     2       1.2     1.2     1

Methods: DOSA-XY
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     0.6     0.2     2.4     0.4     91.8    29.58   62.48
False alarm rate   2.8     10.6    3.6     11.4    1.6     1.6     1
Table 7. Fault 3 detection rates and false alarm rates of three algorithms.

Methods: OSA
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     45.51   29.34   30.94   30.94   45.51   12.38   16.97
False alarm rate   1.4     12.6    1       1.6     1.4     1.2     0.8

Methods: DOSA-X
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     45.51   15.17   50.75   2.5     45.51   25.55   1.4
False alarm rate   1.4     1.6     1.2     2       1.4     2.4     1.6

Methods: DOSA-XY
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     90.82   65.67   7.19    37.72   49.15   2.5     11.98
False alarm rate   3.4     11.4    0.8     2.6     12.2    1.6     1.2
Table 8. Fault 4 detection rates and false alarm rates of three algorithms.

Methods: OSA
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     1.2     1.8     1.4     2       1.2     0.2     64.27
False alarm rate   1.2     0.6     1.2     1.2     1.2     0.8     0.4

Methods: DOSA-X
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     1.2     1       0.6     1       1.2     1.6     77.45
False alarm rate   1.2     1.2     1.2     1.8     1.2     1.2     1

Methods: DOSA-XY
Indices            T_C^2   SPE_C   T_E^2   SPE_E   T_F^2   SPE_F   SPE_XY
Detection rate     0.4     1.6     1       0.6     1       0.8     99.6
False alarm rate   0.4     1.4     2.8     10.4    1.2     1       1.6
Table 9. The values of Lag_x for doubling the sampling period.

        l_x = 0   l_x = 1   l_x = 2   l_x = 3   l_x = 4   l_x = 5   l_x = 6
Lag_x   501       202.26    194.27    186.8     183.66    179.02    176.04
RC%     /         59.63%    3.95%     3.84%     1.68%     2.53%     1.66%
Table 10. The values of Lag_y for doubling the sampling period.

        l_y = 0   l_y = 1   l_y = 2   l_y = 3   l_y = 4   l_y = 5   l_y = 6
Lag_y   501       472.91    468.78    468.64    466.8     465.47    464.08
RC%     /         5.61%     0.87%     0.03%     0.39%     0.28%     0.30%
Table 11. Comparison of primitive and doubled sampling periods (Fault 1).

Condition: Primitive sampling period
Indices            T_E^2   SPE_E
Detection rate     93.21   53.29
False alarm rate   2.4     1.2

Condition: Doubled sampling period
Indices            T_E^2   SPE_E
Detection rate     84.6    49.36
False alarm rate   1.6     1.2
Table 12. Comparison of primitive and doubled sampling periods (Fault 2).

Condition: Primitive sampling period
Indices            T_F^2   SPE_F
Detection rate     91.8    29.58
False alarm rate   1.6     1.6

Condition: Doubled sampling period
Indices            T_F^2   SPE_F
Detection rate     83.1    36.43
False alarm rate   1.6     1.2
Table 13. Comparison of primitive and doubled sampling periods (Fault 3).

Condition: Primitive sampling period
Indices            T_C^2   SPE_C
Detection rate     90.82   65.67
False alarm rate   3.4     11.4

Condition: Doubled sampling period
Indices            T_C^2   SPE_C
Detection rate     82.33   60.84
False alarm rate   2       1.6
Table 14. Comparison of primitive and doubled sampling periods (Fault 4).

Condition: Primitive sampling period
Indices            SPE_XY
Detection rate     99.6
False alarm rate   1.6

Condition: Doubled sampling period
Indices            SPE_XY
Detection rate     98.39
False alarm rate   2.4
Table 15. Descriptions of known faults in TE process.

Fault ID   Process Variable                              Type               KPI-Related
1          A/C feed ratio, B composition constant        Step               Yes
2          B composition, A/C ratio constant             Step               Yes
3          D feed temperature                            Step
4          Reactor cooling water inlet temperature       Step
5          Condenser cooling water inlet temperature     Step               Yes
6          A feed loss                                   Step               Yes
7          C header pressure loss-reduced availability   Step               Yes
8          A, B and C feed composition                   Random variation   Yes
9          D feed temperature                            Random variation
10         C feed temperature                            Random variation   Yes
11         Reactor cooling water inlet temperature       Random variation
12         Condenser cooling water inlet temperature     Random variation   Yes
13         Reaction kinetics                             Slow drift         Yes
14         Reactor cooling water valve                   Sticking
15         Condenser cooling water valve                 Sticking
Table 16. The values of Lag_x under different L_x values.

        L_x = 0   L_x = 1   L_x = 2   L_x = 3   L_x = 4     L_x = 5     L_x = 6
Lag_x   159       62.95     28.3      23.6      13,104.51   66,817.24   34,678.86
Table 17. The values of Lag_y under different L_y values.

        L_y = 0   L_y = 1   L_y = 2   L_y = 3   L_y = 4   L_y = 5   L_y = 6
Lag_y   159       112.86    100.61    85.87     83.91     79.84     79
RC%     /         29.02%    10.85%    14.65%    2.28%     4.85%     1.05%
Table 18. Testing results of KPI-related faults for the TE process.

Methods            DPLS            DCCA    OSA             DOSA
Indices            T^2     SPE_1   SPE_2   T_C^2   SPE_C   T_C^2    SPE_C
False alarm rate   0       1.3     1.3     0       0       0        0.63
Fault 1            42.625  73.7    91.4    61.75   88.25   99.375   97.375
Fault 2            98.75   86      89      15.375  53.75   97.125   96.375
Fault 5            20.125  98.9    99.9    16.875  11.25   22.375   15.125
Fault 6            96.5    100     100     99.125  100     100      100
Fault 7            38      17.5    34.5    21.5    89      63.75    29.125
Fault 8            68      43.3    53.1    67      51.625  92.875   74.75
Fault 10           5.375   21.9    37.2    60.875  13.125  66.875   70.25
Fault 12           31      66.2    85.2    69.125  51.125  94.625   77.375
Fault 13           66.125  78.6    85.2    80      70.625  90.75    76.625
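The detection and false alarm rates reported in Tables 5–14 and 18 are, by the usual convention, the alarm percentages after and before the fault is introduced. A minimal sketch of that computation (the function name, argument names, and alarm convention are ours; the paper's exact sample counts are not assumed):

```python
import numpy as np

def alarm_rates(alarms, fault_start):
    """Detection rate: % of alarmed samples at/after the fault start.
    False alarm rate: % of alarmed samples before the fault start."""
    alarms = np.asarray(alarms, dtype=bool)
    detection = alarms[fault_start:].mean() * 100
    false_alarm = alarms[:fault_start].mean() * 100
    return detection, false_alarm

# The alarm sequence would typically come from comparing a monitoring
# statistic against its control limit, e.g. alarms = (t2_stat > t2_limit).
det, fa = alarm_rates([0, 0, 1, 0, 1, 1, 1, 0], fault_start=4)
print(det, fa)  # 75.0 25.0
```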
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
