Recurrent Neural Network-Based Hybrid Modeling Method for Digital Twin of Boiler System in Coal-Fired Power Plant

Faculty of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 4905; https://doi.org/10.3390/app13084905
Submission received: 15 March 2023 / Revised: 7 April 2023 / Accepted: 12 April 2023 / Published: 13 April 2023
(This article belongs to the Section Applied Thermal Engineering)

Abstract

Due to simplified assumptions or unascertained equipment parameters, traditional mechanism models of the boiler system in coal-fired power plants usually have predictive errors that cannot be ignored. To further improve the predictive accuracy of the model, this paper proposes a novel recurrent neural network (RNN)-based hybrid modeling method for the digital twin of the boiler system. First, the mechanism model of the boiler system is described through an RNN to facilitate training and updating of parameters without degrading the interpretability of the model. Second, for the time-varying parameters in the mechanism model, the functional relationship between them and the state variables is constructed with additional neurons to improve the predictive accuracy. Third, a long short-term memory (LSTM) neural network model is established to describe the unascertained dynamic characteristics and compensate the predictive residual of the mechanism model. Fourth, an update architecture and training algorithm applicable to the hybrid model are established to realize the iterative optimization of model parameters. Finally, experimental results show that the proposed hybrid modeling method effectively improves the predictive performance of traditional models.

1. Introduction

With the development of the Industry 4.0 Plan in various countries, it has become inevitable to establish digital intelligent factories by using advanced methods and technologies such as big data, artificial intelligence and the industrial Internet [1,2,3,4]. Digital twin, first proposed by Grieves in 2003 [5], is an effective way to interconnect information space and the physical world, which can realize the deep integration of information technology and traditional industry [6,7,8]. Originally, NASA used digital twins to monitor and predict the flight status of spacecraft to help engineers make correct decisions [9]. Since then, digital twin has gradually received attention from researchers in various fields, especially in manufacturing industries such as aerospace [10,11,12], automotive [13,14,15] and electronic manufacturing [16,17,18]. According to the statistical results of Psarommatis [19], we have the following information. First, the proportion of digital twin research is approximately 24% for aerospace and automotive, about 10% for electronics and component manufacturing, and less than 1% for the energy industry. This indicates that few references study digital twins for coal-fired power plants. Second, approximately 55.5% of the references focus on ways to improve and ensure the quality of production processes and products by using digital twin, and about 18% of the existing references study the applications of digital twin for process optimization and production control. These issues are often highly correlated with the accuracy of the digital twin model. Therefore, a high-fidelity model is the solid foundation and powerful support for improving system control performance, optimizing production strategies, predicting equipment health status, increasing efficiency and reducing pollutant emissions [20,21,22,23,24,25]. Although digital twin architectures [26,27,28], hybrid modeling of thermal systems [29] and modeling of steam turbines [30] for coal-fired power plants have gradually been proposed, further in-depth research is still needed on digital twin modeling for the boiler system in coal-fired power plants.
Guided by the aforementioned obstacles and gaps, this paper proposes a novel hybrid modeling method to improve the accuracy of the digital twin model for the boiler system. The main contributions of this paper lie in the following four aspects. (1) On the basis of ensuring the interpretability of the mechanism model, and in order to facilitate training and updating of parameters, this paper uses a recurrent neural network to describe the mechanism dynamic characteristics of the boiler system. (2) By adding new neurons to the mechanism model, the functional relationship between time-varying parameters and input variables is properly described to decrease the errors caused by parameter uncertainty. (3) For the sake of reducing the predictive residual of the mechanism model, an LSTM-based error-compensation model is formed. (4) To achieve the iterative optimization of the hybrid model parameters, the update architecture and training algorithm are established. Finally, the experimental results show that the proposed method can ensure the high fidelity of the digital twin hybrid model.
The rest of this paper is organized as follows. In Section “State of the Art”, the existing modeling methods and technologies are discussed in detail. The hybrid modeling method is established and described in detail in Section “Recurrent Neural Network-Based Hybrid Modeling Method for Digital Twin of Boiler System”. Section “Simulation Results” provides the simulation results to illustrate the effectiveness of the proposed method. The conclusions and future work are discussed in Section “Conclusions and Future Work”.

2. State of the Art

Presently, the main modeling methods include mechanism analysis, data-driven identification, and a combination of the two. First, as the traditional modeling method, mechanism analysis can well interpret the internal operating principles of the system and fully reflect the physical characteristics of the plant [31,32]. Unfortunately, a coal-fired power plant is a huge and complex system with characteristics such as large inertia, large delay, non-linearity and strong coupling, owing to its numerous pieces of equipment and the mutual coupling of its operation processes [33,34,35]. It is therefore inevitable to introduce numerous simplified assumptions, including simplified assumptions about equipment structure, working fluid status and lumped equipment parameters, which doubtlessly leads to poor predictive accuracy of the model [36]. In addition, due to slagging or oxidation, some parameters such as the heat transfer coefficient and the pipeline damping coefficient may change with equipment operation, deviating from the nominal values provided by the equipment manufacturers. This may affect the accuracy of the model to some extent [37]. Second, the data-driven modeling method, also called the black-box modeling method, learns the dynamic characteristic information of the plant through the input and output data [38]. Theoretically, if the number of neurons is sufficient, a neural network model can approximate a non-linear function with arbitrary accuracy [39]. In recent years, based on the data-driven method, the coal-fired boiler combustion model [40,41], the unburned carbon model of the boiler [42] and the output power prediction model [43,44] have all achieved significant improvement. However, the accuracy of the black-box model depends too much on data quality. Insufficient data, excessive data noise and incomplete coverage of datasets may lead to poor generalization performance of the model [45]. Furthermore, the parameters are not interpretable and lack mechanism formulas to describe the relationships between variables [46].
Obviously, using only mechanism modeling or only the data-driven method can hardly meet the requirements of digital twins for high-fidelity models. Compared with the above two methods, a hybrid modeling scheme can not only retain the physical explanations, but also improve the predictive accuracy of the digital twin model. Presently, there are two main approaches. One is to establish the mechanism model and identify its parameters through input and output data [47,48,49]. This method can compensate most of the errors caused by unascertained parameters, but it cannot address the model errors introduced by mechanism simplification. The other is to compensate all errors of the mechanism model by using learning algorithms such as neural networks [37,50,51]. Although this method can improve the predictive accuracy of the digital twin hybrid model, it ignores the improvement of the predictive performance of the mechanism model itself.
To deal with the above problems while ensuring the high fidelity of the digital twin model, this paper proposes a novel recurrent neural network-based hybrid modeling method which updates the mechanism parameters and the data model parameters together. The main work of this paper includes the following four parts. First, the mechanism model of the boiler system is established and then described through a recurrent neural network (RNN) structure. In this way, the parameters in the mechanism model can be updated iteratively through the backpropagation algorithm, while the interpretability does not deteriorate. Second, the time-varying parameters in the mechanism model vary with the input and state variables. By using neurons, the functional relationship between the time-varying parameters and the input variables is properly described to improve the predictive accuracy. Third, a long short-term memory (LSTM) neural network model is established to compensate the predictive residual of the mechanism model. Due to the simplified assumptions, the predictive error caused by unmodeled dynamics cannot be reduced by updating the parameters in the mechanism model only. Therefore, building a data-driven model to compensate the predictive residual of the mechanism model is an effective solution. Fourth, the hybrid model-oriented update architecture and the back-propagation through time (BPTT) training algorithm are established to realize the iterative optimization of parameters. Finally, the experimental results show that the RNN-based hybrid modeling method proposed in this article has better predictive performance than both the mechanism model and the black-box model.

3. Recurrent Neural Network-Based Hybrid Modeling Method for Digital Twin of Boiler System

In this section, the RNN-based hybrid modeling method for the digital twin of the boiler system is established and described in detail, including the design process of the digital twin model (DTM) for the boiler system, the construction of the overall DTM framework, the definitions of the input and output parameters for the DTM and the establishment of the DTM.

3.1. The Design Process of Digital Twin Model for Boiler System

To make it easier for readers to understand the design ideas of DTM for boiler system, we present the design process of DTM clearly and completely according to the unified DT design method proposed in [19], as shown in Figure 1. First, in Section 3.2.1, the purpose of the DTM is defined, and then the DTM framework with basic information is constructed in Section 3.2.2. Second, in Section 3.3, we define the input and output parameters of DTM according to the mechanism analysis of boiler system. Third, the DTM of boiler system is established through the above information and suitable technologies in Section 3.4. Fourth, based on the existing data, historical data and real-time data, we develop the DTM and optimize corresponding parameters in Section 4.1. Finally, in Section 4.2, we evaluate and analyze whether the performance of DTM is acceptable.

3.2. The Construction of DTM Framework for Boiler System

3.2.1. The Purpose of DTM

Due to the complexity of the boiler system in a coal-fired power plant, it is inevitable to introduce simplified assumptions before mechanism modeling, which ignore the high-order dynamics (usually called unmodeled dynamics) of the system. Moreover, some parameters often cannot be known accurately, which may affect the predictive precision of the mechanism model. Therefore, it is necessary to establish a more accurate predictive model to ensure the high fidelity of the digital twin model. The structure of the actual system can be described as
$$F(x,u,p) = f_M(x,u,p) + f_P(x,u) + f_W(x,u), \tag{1}$$
where $x$, $u$ and $p$ are the state vector, input vector and parameter vector, respectively; $F(x,u,p)$ represents the model of the actual system; $f_M(x,u,p)$ is the mechanism model; $f_P(x,u)$ represents the errors caused by inaccurate parameters; and $f_W(x,u)$ represents the unmodeled dynamics of the system ignored due to the simplified assumptions.
The main purpose of hybrid modeling is to estimate the error terms $f_P(x,u)$ and $f_W(x,u)$ from measurement data, and thereby achieve a high-precision approximation of the actual model $F(x,u,p)$ by the digital twin model $F_D(x,u,p)$. The two existing hybrid modeling methods are shown in Figure 2a,b, respectively. The method in Figure 2a establishes a mechanism model and then obtains the estimated error term $f_{PE}(x,u)$ through parameter identification. However, this method cannot address the error term $f_W(x,u)$. The method in Figure 2b establishes a data-driven model to compensate the error terms $f_P(x,u)$ and $f_W(x,u)$. Nevertheless, it neglects to improve the predictive performance of the mechanism model. This paper combines the above two methods to establish a hybrid model, as shown in Figure 2c. Most of the error term $f_P(x,u)$ is compensated by training the mechanism model parameters, and then the data model $f_{WE}(x,u)$ is established to approximate the error term $f_W(x,u)$. This method not only improves the predictive performance of the mechanism model, making it more interpretable, but also enhances the predictive accuracy of the overall digital twin model. Thereby, the digital twin model can be written as
$$F_D(x,u,p) = f_M(x,u,p) + f_{PE}(x,u) + f_{WE}(x,u). \tag{2}$$
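For readers who prefer code, the additive structure of Equation (2) can be sketched in a few lines of Python. The three functions below are illustrative toy stand-ins, not the authors' implementation; the concrete mechanism, parameter-error and residual models are developed in Sections 3.3 and 3.4.

```python
import numpy as np

# Toy stand-ins for the three terms of Equation (2); the real models are
# developed in Sections 3.3 and 3.4.
def mechanism_model(x, u, p):          # f_M(x, u, p): first-principles prediction
    return p[0] * x + p[1] * u

def parameter_error_estimate(x, u):    # f_PE(x, u): error recovered by parameter identification
    return 0.02 * x

def residual_estimate(x, u):           # f_WE(x, u): data-driven compensation of unmodeled dynamics
    return 0.01 * np.tanh(u)

def digital_twin_model(x, u, p):
    """Equation (2): F_D(x, u, p) = f_M(x, u, p) + f_PE(x, u) + f_WE(x, u)."""
    return mechanism_model(x, u, p) + parameter_error_estimate(x, u) + residual_estimate(x, u)

print(digital_twin_model(x=1.0, u=0.5, p=(0.9, 0.3)))
```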

3.2.2. The DTM Framework with Basic Information

The purpose of the DTM established in this paper is to accurately reflect the dynamic processes of the physical boiler system in real time, as shown in Figure 3. The physical boiler system includes the pulverizing system, furnace and drum. Initially, the pulverized coal in the coal mills is sent into the furnace by the primary air and mixed with the secondary air for combustion. Then, the water in the drum is heated to saturated vapor by the radiation energy generated by fuel combustion in the furnace. Finally, the saturated vapor flowing out of the drum is heated into super-heated vapor in the super-heater. The inputs of each device in Figure 3 include not only the outputs of the preceding device, but also the outputs of other devices.
In the DTM of the boiler system, the dynamic model corresponding to each device is established. The usage mode of the DTM is continuous because the dynamic processes of the real physical system need to be reflected synchronously. To identify suitable technologies for the DTM development, we need to specify the requirements for the DTM. The first requirement is to achieve iterative updating of the DTM parameters using input and output data while preserving the interpretability of the DTM. The solution proposed in this paper is to establish the mechanism model of the boiler system and describe it through a neural network structure. In this way, the parameters in the mechanism model can be updated iteratively through the backpropagation algorithm, while the interpretability does not deteriorate. Since the outputs of the previous time step need to be used, the mechanism model is best described through a recurrent neural network (RNN). The second requirement for the DTM is to ensure high fidelity after adopting the simplified mechanism model. To this end, an LSTM-based error-compensation model is formed to reduce the predictive residual of the mechanism model.

3.3. Definitions of Input and Output Parameters for DTM

In this subsection, we define the inputs and outputs of the DTM, and then establish the discrete-time model of each block (the pulverizing system, furnace and drum) based on the mechanism analysis in [52] using the Euler method. The input and output parameters are shown in Table 1. The model of each block in the boiler system can then be derived as follows.
1. Pulverizing system
The function of the pulverizing system is to crush the coal and send it into the furnace with the primary air. The inputs include the normalized speed of the coal feeder $N_g$, the primary cool air flow $W_{lk}$ and the hot air flow $W_{rk}$. The outputs include the coal flow $W_{cf}$ and the temperature of the air–coal mixture $T_o$. The discrete-time model can be written as
$$W_{cf}(k) = \left(1 - 1/K_{cf}\right) W_{cf}(k-1) + W_g/K_{cf}, \tag{3}$$
$$T_o(k) = T_o(k-1) + \left(Q_{ai} + Q_{rc} - Q_{ao} - Q_{mo}\right)/K_T, \tag{4}$$
where $Q_{ai} = H_{lk} W_{lk} + H_{rk} W_{rk}$, $Q_{rc} = H_g W_g$, $Q_{ao} = C_{pa} W_{pa} T_o(k-1)$, $W_{pa} = W_{lk} + W_{rk}$, $W_g = K_g N_g$, $Q_{mo} = C_{cf} W_{cf}(k-1)\,\Delta T_m$ and $\Delta T_m = T_o(k-1) - T_g$. $K_{cf}$ and $K_T$ are inertia time constants, $W_g$ is the inlet coal flow, and $Q_{ai}$, $Q_{rc}$ and $Q_{out}$ are the energy of the inlet air flow, inlet coal flow and outlet mixture flow, respectively. $H_{lk}$, $H_{rk}$ and $H_g$ are the enthalpy of the cool air flow, hot air flow and coal, respectively. $C_{pa}$ and $C_{cf}$ are the specific heat capacities of air and coal. $T_g$ is the inlet temperature of the coal. $K_g$ is the maximum inlet coal flow. $k$ is the sampling instant. (A short simulation sketch of this recurrence is given after the drum model below.)
2. Furnace
In the furnace, the pulverized coal mixes with the secondary air and heats the water wall through combustion. The inputs include the coal flow $W_{cf}$, the secondary air flow $W_{sa}$, the gas flow $W_{gs}$, the net calorific value $Q_{net,ar}$ and the temperature of the evaporation area $T_{sv}$. The outputs include the gas density $\rho_b$, the average temperature of the gas in the furnace $T_{gs}$, the furnace pressure $P_b$, the gas oxygen content $O_{cp}$ and the heat absorption of the water wall $Q_{sl}$. The discrete-time model can be written as
$$\rho_b(k) = \rho_b(k-1) + \left(W_{cf}(k) + W_{sa}(k) - W_{gs}(k)\right)/V_b, \tag{5}$$
$$T_{gs}(k) = T_{gs}(k-1) + \left(Q_{ci} + Q_{si} - Q_{go} - Q_{sl}\right)/K_{bq}, \tag{6}$$
$$P_b(k) = P_b(k-1) + \left(W_{cf}(k) + W_{sa}(k) - W_{gs}(k)\right)/C_b, \tag{7}$$
$$O_{cp}(k) = O_{cpi}/K_o + \left(1 - 1/K_o\right) O_{cp}(k-1), \tag{8}$$
$$Q_{sl}(k) = K_{sl}\,\Delta T_s/K_{sq} + \left(1 - 1/K_{sq}\right) Q_{sl}(k-1), \tag{9}$$
where $Q_{ci} = Q_{net,ar}(k) W_{cf}(k)$, $Q_{si} = H_{sa} W_{sa}(k)$, $Q_{go} = C_{gs} W_{gs}(k) T_{gs}(k-1)$, $V_{sa} = W_{sa}/\rho_a$, $O_{cpi} = \left(V_{sa} - V_0 W_{cf}\right) \cdot 21/V_{sa}$ and $\Delta T_s = T_{gs}^4(k-1) - T_{sv}^4(k)$. $V_b$ is the furnace volume. $K_{bq}$, $K_o$ and $K_{sq}$ are inertia time constants. $H_{sa}$ is the enthalpy of the secondary air flow, $C_{gs}$ is the specific heat capacity of the gas, $C_b$ is the flow capacity coefficient of the furnace, $O_{cpi}$ is the inlet gas oxygen content at the furnace bottom, $Q_{sc}$ is the radiant energy near fuel combustion, $V_{sa}$ is the volume flow of secondary air, $V_0$ is the theoretical air consumption, $K_{sl}$ is the radiant heat transfer coefficient, $\rho_a$ is the air density and 21 is the oxygen content of air.
3. Drum
In the drum, the riser absorbs energy from the furnace to heat the water into saturated vapor, and the saturated vapor then flows out to the super-heater through the top of the drum. The inputs include the inlet water flow from the economizer $W_e$, the enthalpy of the inlet water $H_e$, the heat absorption of the water wall $Q_{sl}$ and the pressure of the super-heater $P_s$. The outputs include the mass of the liquid region in the drum $M_{dl}$, the saturated vapor density $\rho_v$, the enthalpy of the water in the drum $H_w$ and the enthalpy of the saturated vapor–water mixture in the riser $H_r$. The discrete-time model can be written as
$$M_{dl}(k) = M_{dl}(k-1) + W_e(k) - q_v W_{ro} - W_{dv}, \tag{10}$$
$$\rho_v(k) = \rho_v(k-1) + V_v^{-1}\,\Delta W_v, \tag{11}$$
$$H_w(k) = H_w(k-1) + \left(Q_e + Q_{wv} - Q_d - Q_{dv}\right)/M_{dl}, \tag{12}$$
$$H_r(k) = H_r(k-1) + \left(Q_{sl} + Q_d - Q_{ro}\right)/K_r, \tag{13}$$
where $q_v = F_q(H_r, H_v, H_{wv}) = (H_r - H_{wv})/(H_v - H_{wv})$, $V_v = V_{drum} - M_{dl}/\rho_w$, $W_{vp} = q_v W_{ro}$, $W_{dv} = K_{ec}(P_{dr} - P_{drex})$, $H_{wv} = T_p(\rho_v)$, $H_v = T_p(\rho_v)$, $\rho_w = T_p(\rho_v)$, $P_{dr} = T_p(\rho_v)$, $Q_{dv} = W_{dv} H_v$, $\Delta W_v = W_{vp} + W_{dv} - W_v$, $W_v = (P_{dr} - P_s)/R_f$, $Q_e = H_e(k) W_e(k)$, $Q_{wv} = H_{wv}(1 - q_v) W_{ro}$, $Q_d = H_w(k-1) W_{ro}$ and $Q_{ro} = H_r(k-1) W_{ro}$. $q_v$ is the vapor content of the vapor–water mixture at the riser outlet, $W_{ro}$ is the vapor–water mixture flow at the riser outlet, $V_v$ is the volume of the vapor region in the drum, $W_{dv}$ is the dynamic evaporation in the drum, $W_v$ is the outlet vapor flow, $R_f$ is the pipe damping coefficient, $H_{wv}$ is the enthalpy of saturated water, $W_d$ is the inlet water flow of the down tube, $K_r$ is the inertia time constant of the riser, $V_{drum}$ is the volume of the drum, $\rho_w$ is the water density in the drum, $H_v$ is the enthalpy of saturated vapor, $K_{ec}$ is a coefficient, $P_{dr}$ is the drum pressure, $P_{drex}$ is the drum pressure at the previous time step, and $T_p(\cdot)$ represents the thermodynamic property functions of saturated vapor and saturated water. $P_{dr}$, $H_{wv}$, $H_v$ and $\rho_w$ can all be derived from $T_p(\rho_v)$. $Q_e$, $Q_{wv}$, $Q_{dv}$, $Q_d$ and $Q_{ro}$ are the energy of the inlet water flow, saturated water, dynamic vapor, inlet water flow of the down tube and vapor–water mixture flow at the riser outlet, respectively. Additionally, to simplify the drum model, we neglect the dynamic equation of the down tube; that is, the inlet water flow of the down tube is equal to the outlet flow of the riser.
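As referenced after Equation (4), the following Python sketch shows how such a discrete-time block can be rolled out in simulation, using the pulverizing-system recurrence of Equations (3) and (4) as an example; the furnace and drum blocks follow the same Euler-step pattern. All numerical values here are illustrative placeholders, not the plant parameters of Section 4.

```python
import numpy as np

def pulverizing_step(W_cf, T_o, N_g, W_lk, W_rk, p):
    """One Euler step of Equations (3) and (4) for the pulverizing block."""
    W_g  = p["K_g"] * N_g                       # inlet coal flow
    W_pa = W_lk + W_rk                          # total primary air flow
    Q_ai = p["H_lk"] * W_lk + p["H_rk"] * W_rk  # energy of inlet air flow
    Q_rc = p["H_g"] * W_g                       # energy of inlet coal flow
    Q_ao = p["C_pa"] * W_pa * T_o               # energy of outlet air flow
    Q_mo = p["C_cf"] * W_cf * (T_o - p["T_g"])  # energy absorbed by the coal
    W_cf_next = (1 - 1 / p["K_cf"]) * W_cf + W_g / p["K_cf"]   # Equation (3)
    T_o_next  = T_o + (Q_ai + Q_rc - Q_ao - Q_mo) / p["K_T"]   # Equation (4)
    return W_cf_next, T_o_next

# Illustrative placeholder parameters and inputs (not the plant values of Section 4).
p = dict(K_cf=60.0, K_T=5.0e3, K_g=25.0, H_lk=30.0, H_rk=100.0,
         H_g=50.0, C_pa=1.0, C_cf=1.3, T_g=25.0)
W_cf, T_o = 10.0, 70.0
for k in range(600):                            # roll the block out for 600 samples
    W_cf, T_o = pulverizing_step(W_cf, T_o, N_g=0.6, W_lk=20.0, W_rk=40.0, p=p)
print(W_cf, T_o)
```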

3.4. Establishment of Digital Twin Model for Boiler System

3.4.1. Recurrent Neural Network Description of Boiler System

Because some parameters in the mechanism model cannot be known accurately, it is necessary to update them iteratively by using learning algorithms so that they continuously approach their true values. If the mechanism model is described through a neural network structure, the parameters can easily be trained through the backpropagation algorithm. Before building the neural network of the mechanism model, the writing rules for the neuron symbols and weights in the network should be defined. Let $w_{Gm}^{hu(l)}$ represent the general form of the weights, where $G$ represents the equipment block (i.e., the pulverizing block $pz$, furnace block $bo$ and drum block $vp$), $m$ represents the order number of the output corresponding to the block, and $h$, $u$ and $(l)$ represent the input order number of the neuron, the order number of the neuron in the corresponding layer, and the order number of the layer containing the neuron, respectively. Briefly speaking, $w_{Gm}^{hu(l)}$ represents the weight of the $h$th input of the $u$th neuron in the $l$th layer of the $m$th output of block $G$. Similarly, $b_{Gm}^{u(l)}$ represents the bias of the $u$th neuron in the $l$th layer of the $m$th output of block $G$. In addition, $w_{G0}^{n}$ represents a weight shared by every output of block $G$, where $n$ is the order number of the weight. In the neurons, the symbols on the left represent the operation on the weighted inputs; their specific meanings are shown in Table 2. The symbols on the right represent the output of the neuron after calculation.
1. Pulverizing system
According to Equations (3) and (4), the neural network structures of the pulverizing block are shown in Figure 4. In the two figures, the weights are the model parameters, including known parameters and unascertained parameters. The known and unascertained weights are listed in Table 3. In Equation (4), the heat dissipation of the pulverizing system is ignored for simplification. Therefore, the neuron $Q_{bu}^{pz}$ is added to represent the heat dissipation, which is assumed to be a function of the inlet coal flow $W_g$, that is, $Q_{bu}^{pz} = w_{pz2}^{14(2)} W_g + b_{pz2}^{4(2)}$. The values or initial values of the parameters are provided in the “Simulation Results” section. In addition, it should be noted that all unmarked weights in the networks are equal to 1.
2. Furnace
Based on Equations (5)–(9), the neural network structures of the furnace block are shown in Figure 5, Figure 6 and Figure 7. The known and unascertained weights are listed in Table 4. In the furnace block, generally speaking, $K_Q = 1/K_{bq}$ and $K_{sl}$ are time-varying parameters, and their specific dynamic characteristics and functional relationships cannot be known accurately. Thus, by adding neurons to the network, these two parameters are expressed as $K_Q = w_{bo2}^{11(2)} \rho_b^{-1}$ and $K_{sl} = w_{bo5}^{11(1)} W_{gs} + b_{bo5}^{1(1)}$, respectively.
3. Drum
Based on Equations (10)–(13), the neural network structures of the drum block are shown in Figure 8, Figure 9 and Figure 10. The known and unascertained weights are listed in Table 5. The specific thermodynamic functions $T_p(\cdot)$ are provided in the “Simulation Results” section.
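To illustrate the idea of this subsection in code (a sketch only; the authors implement their networks in MATLAB), the pulverizing block of Equations (3) and (4) can be written with PyTorch as a recurrent cell whose unascertained parameters, together with the added heat-dissipation neuron $Q_{bu}^{pz} = w_{pz2}^{14(2)} W_g + b_{pz2}^{4(2)}$, are exposed as trainable weights, while the known parameters stay fixed. All numerical values are placeholders, and the assumption that $Q_{bu}^{pz}$ enters the energy balance as a loss term is ours.

```python
import torch
import torch.nn as nn

class PulverizingCell(nn.Module):
    """Pulverizing block of Equations (3) and (4) as a recurrent cell whose
    unascertained parameters are trainable; known parameters stay fixed."""
    def __init__(self):
        super().__init__()
        # Known parameters (fixed buffers, placeholder values).
        for name, val in dict(H_lk=30.0, H_rk=100.0, H_g=50.0,
                              C_pa=1.0, C_cf=1.3, T_g=25.0).items():
            self.register_buffer(name, torch.tensor(val))
        # Unascertained parameters, updated by backpropagation.
        self.inv_K_cf = nn.Parameter(torch.tensor(1.0 / 60.0))
        self.inv_K_T  = nn.Parameter(torch.tensor(1.0 / 5.0e3))
        self.K_g      = nn.Parameter(torch.tensor(25.0))
        # Added neuron for the neglected heat dissipation: Q_bu = w * W_g + b.
        self.w_bu = nn.Parameter(torch.tensor(0.0))
        self.b_bu = nn.Parameter(torch.tensor(0.0))

    def forward(self, state, inputs):
        W_cf, T_o = state
        N_g, W_lk, W_rk = inputs
        W_g  = self.K_g * N_g
        Q_ai = self.H_lk * W_lk + self.H_rk * W_rk
        Q_rc = self.H_g * W_g
        Q_ao = self.C_pa * (W_lk + W_rk) * T_o
        Q_mo = self.C_cf * W_cf * (T_o - self.T_g)
        Q_bu = self.w_bu * W_g + self.b_bu              # added dissipation neuron
        W_cf_next = (1 - self.inv_K_cf) * W_cf + self.inv_K_cf * W_g
        T_o_next  = T_o + (Q_ai + Q_rc - Q_ao - Q_mo - Q_bu) * self.inv_K_T
        return W_cf_next, T_o_next
```

Rolling such a cell forward over a measured input sequence and backpropagating the prediction error then updates $1/K_{cf}$, $1/K_T$, $K_g$ and the two added weights in the manner described in Sections 3.4.3 and 3.4.4.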

3.4.2. LSTM-Based Error Compensation Model for Digital Twin

As previously mentioned, the predictive error caused by unmodeled dynamics cannot be reduced by updating the parameters in the mechanism model only. Moreover, in practice, most of the predictive errors caused by unascertained parameters can be reduced through mechanism parameter identification, but it is difficult to eliminate them completely. For this reason, it is necessary to establish a data-driven model to compensate the predictive residual of the mechanism model. The long short-term memory (LSTM) network is a special kind of RNN which can avoid the problems of gradient vanishing and explosion during long-sequence training [53]. In other words, compared with a common RNN, the LSTM has better performance in long time-series prediction [54,55,56].
Therefore, for each block, an error-compensation model is established based on the LSTM. The inputs of the LSTM are the same as those of the corresponding block in the mechanism network, and the outputs are the compensation for the predictive error of the mechanism model.
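A minimal sketch of such a per-block compensation network, written with PyTorch for illustration (the paper's models are trained with the MATLAB Deep Learning Toolbox), using the one-hidden-layer, 90-neuron configuration reported later in Section 4.1.2:

```python
import torch
import torch.nn as nn

class ErrorCompensationLSTM(nn.Module):
    """LSTM that predicts the residual of the mechanism model for one block."""
    def __init__(self, n_inputs, n_outputs, hidden_size=90):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_inputs, hidden_size=hidden_size,
                            num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden_size, n_outputs)

    def forward(self, u_seq):
        # u_seq: (batch, time, n_inputs), the same inputs as the mechanism network
        h_seq, _ = self.lstm(u_seq)
        return self.head(h_seq)       # (batch, time, n_outputs) residual estimate
```

The hybrid prediction of a block is then obtained by adding this residual estimate to the output of the mechanism network.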

3.4.3. Parameter Updating Structure for Digital Twin Model of Boiler System

The parameter updating structure for the digital twin hybrid model is shown in Figure 11. Given the input $x_G$, the actual system, the mechanism model and the error-compensation model generate the real output $y_G$, the predictive output of the mechanism model $\tilde{y}_G^{mc}$ and that of the error-compensation model $\tilde{y}_G^{ec}$, respectively. The subscript $G$ denotes the equipment block. The parameters in the mechanism model are updated based on the difference between $\tilde{y}_G^{mc}$ and $y_G$, that is, $E_G^{mc} = \tilde{y}_G^{mc} - y_G$, because we want the predictive output of the mechanism model to approach the real value as closely as possible. For the error-compensation model, we want it to reduce the predictive residual of the mechanism model as much as possible. Therefore, the parameters in the compensation model are updated based on the difference between $\tilde{y}_G^{hy}$ and $y_G$, which can be expressed as $E_G^{ec} = \tilde{y}_G^{hy} - y_G$. The advantages of this structure can be summarized in the following three aspects. First, the model parameters are updated to improve the predictive performance of the mechanism model. Second, due to the improvement in predictive accuracy, the mechanism model has better interpretability. Third, the error-compensation model is added and updated to ensure the high fidelity of the digital twin model.
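Under the same PyTorch assumption as in the earlier sketches, one joint update step of Figure 11 could look as follows: the mechanism parameters are driven by the mechanism error $E_G^{mc}$ and the compensation parameters by the hybrid error $E_G^{ec}$, each through its own optimizer. Detaching the mechanism output in the second loss, so that the hybrid error only updates the compensation model, is an assumption of this sketch.

```python
import torch

def update_step(mech_model, ecm_model, opt_mech, opt_ecm, u_seq, y_true):
    """One joint update following Figure 11: the mechanism parameters are trained
    on the mechanism error, the compensation parameters on the hybrid error."""
    # Mechanism branch: E_mc = y_mc - y
    y_mc = mech_model(u_seq)
    loss_mc = 0.5 * ((y_mc - y_true) ** 2).sum()
    opt_mech.zero_grad()
    loss_mc.backward()
    opt_mech.step()

    # Compensation branch: E_ec = (y_mc + y_ec) - y, with y_mc detached so that
    # only the compensation parameters receive this gradient.
    y_ec = ecm_model(u_seq)
    loss_ec = 0.5 * ((y_mc.detach() + y_ec - y_true) ** 2).sum()
    opt_ecm.zero_grad()
    loss_ec.backward()
    opt_ecm.step()
    return loss_mc.item(), loss_ec.item()
```

Here `opt_mech` and `opt_ecm` would typically be two `torch.optim.SGD` optimizers over the mechanism and compensation parameters, respectively, using the learning rate specified in Section 4.1.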

3.4.4. Training Algorithm for Digital Twin Model of Boiler System

Because the mechanism model and the error-compensation model mentioned above are both established in RNN form, the backpropagation through time (BPTT) method [57], the most common optimization algorithm for RNNs [58], is adopted in this paper for the iterative training and updating of the network parameters in these two models. The time-sequence propagation structure of the network for the mechanism model is shown in Figure 12. Assume that the total length of the time sequence is $T$. The black arrows represent the forward propagation through the network and through time, while the red arrows represent the backpropagation of errors through the network and through time. The general form of the network equation at a certain moment $t$ can be written as
$$\tilde{y}_G^{mc}(t) = f\!\left(x_G(t),\, \tilde{y}_G^{mc}(t-1),\, W_G,\, b_G\right), \tag{14}$$
where $W_G$ and $b_G$ are the weight and bias matrices of block $G$. Since the loss function at time $t$ is related to all previous moments, using the chain rule, the gradient of $L_G^{mc}$ with respect to $W_G$ is written as
$$\frac{\partial L_G^{mc}}{\partial W_G} = \sum_{t=0}^{T} \sum_{k=0}^{t} \frac{\partial L_G^{mc}(t)}{\partial \tilde{y}_G^{mc}(t)} \left( \prod_{j=k+1}^{t} \frac{\partial \tilde{y}_G^{mc}(j)}{\partial \tilde{y}_G^{mc}(j-1)} \right) \frac{\partial \tilde{y}_G^{mc}(k)}{\partial W_G}. \tag{15}$$
Similarly, we can also obtain
$$\frac{\partial L_G^{mc}}{\partial b_G} = \sum_{t=0}^{T} \sum_{k=0}^{t} \frac{\partial L_G^{mc}(t)}{\partial \tilde{y}_G^{mc}(t)} \left( \prod_{j=k+1}^{t} \frac{\partial \tilde{y}_G^{mc}(j)}{\partial \tilde{y}_G^{mc}(j-1)} \right) \frac{\partial \tilde{y}_G^{mc}(k)}{\partial b_G}. \tag{16}$$
Finally, the parameters can be updated iteratively through the gradient descent method, which can be expressed as $W_G = W_G - \eta\,\partial L_G^{mc}/\partial W_G$ and $b_G = b_G - \eta\,\partial L_G^{mc}/\partial b_G$, where $\eta$ is the learning rate.
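To make Equations (15) and (16) and the update rule concrete, the following worked miniature applies BPTT to a scalar recurrence $\tilde{y}(t) = w\,x(t) + a\,\tilde{y}(t-1)$, accumulating the chain-rule products of Equation (15) explicitly and then taking a gradient-descent step. It is a toy illustration, not the boiler network itself.

```python
import numpy as np

def bptt_step(x, y_true, w, a, eta=0.001):
    """Manual BPTT for y(t) = w*x(t) + a*y(t-1), loss L = sum_t 0.5*(y(t) - y_true(t))^2."""
    T = len(x)
    y = np.zeros(T)
    for t in range(T):                          # forward propagation through time
        y[t] = w * x[t] + (a * y[t - 1] if t > 0 else 0.0)

    grad_w = grad_a = 0.0
    for t in range(T):                          # double sum of Equation (15)
        dL_dy_t = y[t] - y_true[t]
        for k in range(t + 1):
            chain = a ** (t - k)                # prod_{j=k+1}^{t} dy(j)/dy(j-1)
            grad_w += dL_dy_t * chain * x[k]                          # dy(k)/dw
            grad_a += dL_dy_t * chain * (y[k - 1] if k > 0 else 0.0)  # dy(k)/da
    # Gradient-descent update: W <- W - eta * dL/dW
    return w - eta * grad_w, a - eta * grad_a

x = np.sin(0.1 * np.arange(50))
y_true = 0.8 * x                                # toy target sequence
w, a = 0.5, 0.3
for _ in range(200):
    w, a = bptt_step(x, y_true, w, a)
print(w, a)
```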

4. Simulation Results

In this section, the parameter training results of the DTM are analyzed and the predictive performance of different models is compared. Before model training, preprocessing of the data and parameters is required. It should be noted that almost all parameters in the gradient terms have physical meaning, so it is necessary to determine a reasonable range for the time-varying parameters that need to be updated. Additionally, because the orders of magnitude of the neurons (i.e., the state variables) differ widely, gradient explosion or gradient vanishing can easily occur during backpropagation. Therefore, the ranges of the corresponding parameters or variables in the gradient terms should be determined and their normalized values derived.

4.1. Preprocessing of Data and Parameters

4.1.1. Mechanism Model

The values of the parameters in the three blocks are shown in Table 6. For unascertained parameters, the initial values are given. According to the thermodynamic properties of saturated water and saturated vapor, the functions $T_p(\cdot)$ can be written as
$$P_{dr} = A_{dr}\rho_v^3 + B_{dr}\rho_v^2 + C_{dr}\rho_v + D_{dr}, \tag{17}$$
$$H_v = A_v\rho_v^3 + B_v\rho_v^2 + C_v\rho_v + D_v, \tag{18}$$
$$H_{wv} = A_{wv}\rho_v^4 + B_{wv}\rho_v^3 + C_{wv}\rho_v^2 + D_{wv}\rho_v + E_{wv}, \tag{19}$$
$$\rho_w = A_w H_w^3 + B_w H_w^2 + C_w H_w + D_w, \tag{20}$$
where $A_{dr} = 1.104 \times 10^{-6}$, $B_{dr} = 8.206 \times 10^{-4}$, $C_{dr} = 0.2265$, $D_{dr} = 0.2319$, $A_v = 2.331 \times 10^{-5}$, $B_v = 6.728 \times 10^{-3}$, $C_v = 2.151$, $D_v = 2860$, $A_{wv} = 7.772 \times 10^{-7}$, $B_{wv} = 4.381 \times 10^{-4}$, $C_{wv} = 0.102$, $D_{wv} = 14.12$, $E_{wv} = 870.3$, $A_w = 1.458 \times 10^{-7}$, $B_w = 4.729 \times 10^{-4}$, $C_w = 0.8501$ and $D_w = 1355$. Then, the derivatives $\partial P_{dr}/\partial \rho_v$, $\partial H_v/\partial \rho_v$, $\partial H_{wv}/\partial \rho_v$ and $\partial \rho_w/\partial H_w$ in the gradient terms can easily be derived. The ranges of the time-varying parameters, inputs and outputs are shown in Table 7. It should be noted that a range such as $[a \times 10^{\pm c}, b \times 10^{\pm c}]$ is written as $[a, b]\mathrm{E}{\pm c}$ for simplicity. In the training process, the signs of the parameters affect the direction of the gradient terms. Therefore, for parameters with different ranges, different normalization ranges and methods need to be adopted to retain their signs. In general, the normalized intervals of the parameters can be divided into three categories: $[0, 1]$, $[-1, 1]$ and $[-1, 0]$. First, for a parameter with interval $[a, b]$, where $a \geq 0$ and $b > 0$, the normalized interval is set to $[0, 1]$ and the normalization can be written as $\bar{x} = (x - a)/(b - a)$, where $\bar{x}$ is the normalized value of $x$. Second, for a parameter with interval $[a, b]$, where $a < 0$, $b > 0$ and $a = -b$, the normalized interval is set to $[-1, 1]$ and the normalization can be written as $\bar{x} = x/b$. Third, for a parameter with interval $[a, b]$, where $a < 0$ and $b < 0$, the normalized interval is set to $[-1, 0]$ and the normalization can be written as $\bar{x} = -\left(|x| - |b|\right)/\left(|a| - |b|\right)$. In Table 7, the normalized (std.) range of parameters such as $w_{pz2}^{14(2)}$, $b_{pz2}^{4(2)}$, $w_{bo5}^{11(1)}$ and $b_{bo5}^{1(1)}$ is $[-1, 1]$, and that of the other parameters is $[0, 1]$. Moreover, the learning rate $\eta$ is set to 0.001 and the gradient threshold is set to one.
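A small Python helper implementing the three sign-preserving normalization rules described above (a sketch only; the actual interval boundaries come from Table 7):

```python
def normalize(x, a, b):
    """Sign-preserving normalization of x from its physical range [a, b].

    [a, b] with a >= 0, b > 0         -> [0, 1]
    [a, b] with a < 0, b > 0, a = -b  -> [-1, 1]
    [a, b] with a < 0, b < 0          -> [-1, 0]
    """
    if a >= 0 and b > 0:
        return (x - a) / (b - a)
    if a < 0 and b > 0 and a == -b:
        return x / b
    if a < 0 and b < 0:
        return -(abs(x) - abs(b)) / (abs(a) - abs(b))
    raise ValueError("interval not covered by the three normalization rules")

print(normalize(5.0, 0.0, 10.0))    # 0.5
print(normalize(-2.0, -4.0, 4.0))   # -0.5
print(normalize(-3.0, -4.0, -2.0))  # -0.5
```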

4.1.2. Error Compensation Model

The ranges of the outputs of the error-compensation model are shown in Table 8. The ranges of the inputs are the same as those of the mechanism model, shown in Table 7. Since the error-compensation model is based on a general LSTM network, the normalized range can simply be selected as $[0, 1]$ for all outputs, without considering their signs. The LSTM network structure is set to 1-1-1, that is, an input layer, one hidden layer with 90 neurons and an output layer. The learning rate is set to 0.001, and the gradient threshold is set to one. The loss function is defined as
$$L_G^{ec} = \sum_{t=0}^{T} L_G^{ec}(t) = \sum_{t=0}^{T} \frac{1}{2}\left(\tilde{y}_G^{hy}(t) - y_G(t)\right)^{\mathrm{T}}\left(\tilde{y}_G^{hy}(t) - y_G(t)\right), \tag{21}$$
where $\tilde{y}_G^{hy}(t) = \tilde{y}_G^{mc}(t) + \tilde{y}_G^{ec}(t)$, and $\tilde{y}_G^{ec}$ and $\tilde{y}_G^{hy}$ represent the predictive outputs of the error-compensation model and of the digital twin hybrid model of block $G$, respectively.

4.2. Model Training and Analysis of Predictive Performance

4.2.1. Model Training

The training and testing datasets used in this paper were collected from a 600 MW coal-fired power plant simulator. The training process of the parameters in the three blocks is shown in Figure 13. To display the parameters of each block in the same picture, the iterative curves of the normalized parameters are provided. The training curves of some parameters first fall and then rise, which may be related to the initial values of the block parameters. Additionally, some parameters converge more slowly than others, which may be related to the setting of the gradient threshold and the normalized range of the derivatives in the gradient terms. The convergence (con.) value, normalized (std.) value and actual value of all parameters are listed in Table 9. Due to the simplification of the mechanism model established in this paper, some of the corresponding parameters, especially dynamic parameters such as $K_{cf}$, $K_T$, $K_o$, $K_{sq}$ and $K_r$, cannot be read from the simulator directly. Therefore, for these parameters, we obtain their approximate true values through the derivation and calculation of other relevant parameters in the simulator. The convergence values of most parameters are very close to the real values, except for the parameters $w_{bo2}^{11(2)}$ and $w_{vp0}^{3}$. The reasons for the large deviation of these two parameters may be the complex gradient terms, an inappropriate normalized range of the parameters and derivatives, and an insufficient number of additional neurons. The additional neurons associated with $w_{pz2}^{14(2)}$, $b_{pz2}^{4(2)}$, $w_{bo5}^{11(1)}$ and $b_{bo5}^{1(1)}$ are used for mapping the relationship between the time-varying parameters and the state variables. Because they have no physical meaning, there are no corresponding actual values in the simulator. With regard to the training of the error-compensation models, we use the Deep Learning Toolbox of MATLAB R2022a. The training processes are shown in Figure 13.

4.2.2. Analysis of Predictive Performance

The predictive results for $W_{cf}$ and $T_o$ of the pulverizing system are shown in Figure 14, including the curves of the actual value, the mechanism model with nominal parameters (MMNP), the LSTM-based empirical model (EM-LSTM), the GRU-based empirical model (EM-GRU) and the hybrid model with parameter identification only (HMPIO). The upper subgraphs in Figure 14 depict the overall trend of the outputs. As a supplement, the lower subgraphs show the details of the corresponding outputs as well as the differences between the models. Because the time-varying parameters in the MMNP retain their initial nominal values and are never updated, they differ from the true values, which undoubtedly leads to unsatisfactory predictive accuracy. The EM-LSTM and EM-GRU are data-driven models, which learn the dynamic characteristic information of the system from the input and output data. The HMPIO can be regarded as a mechanism model in the form of a neural network, which is not only interpretable, but also makes the parameters easy to train through the BPTT algorithm. Compared to the MMNP, the HMPIO achieves higher predictive accuracy by training the parameters to approach the true values gradually. The iterative training results of the parameters are shown in Figure 13. According to the evaluation indexes proposed by Psarommatis [59], the performance of the digital twin model can be analyzed comprehensively. The results are listed in Table 10. The indexes include the accuracy of each output parameter (AOP), the global DT accuracy (GDTA), the accuracy variance of each output parameter (ACVAR), the global accuracy variance (GAVAR), the response time (ART) and the response time variance (RTVAR). From the results in Table 10, for the pulverizing system, the HMPIO has the highest predictive accuracy and the fastest response time. In addition, for this block, the predictive accuracy of the HMPIO meets the requirements, so no error-compensation model (ECM) is added. The predictive results of the furnace, including $\rho_b$, $T_{gs}$, $P_b$, $O_{cp}$ and $Q_{sl}$, are shown in Figure 15 and Figure 16. The predictive error of the MMNP is much larger than that of the other methods, especially for $O_{cp}$ and $Q_{sl}$. As for the results of the HMPIO for the variables $\rho_b$ and $P_b$, there are predictive errors in both the steady and dynamic processes. Part of the reason is that the parameter optimization of the mechanism model can bring the parameters close to the true values, which significantly reduces the predictive error, but cannot make them coincide with the true values completely. Another part of the reason may be that there are errors which cannot be eliminated due to the simplification of the mechanism model. Because of the non-negligible deviation in the optimization result of the parameter $w_{bo2}^{11(2)}$, there are large predictive errors in the dynamic processes of the HMPIO for $T_{gs}$, which indirectly leads to the dynamic error of $Q_{sl}$. Therefore, the ECM is added to compensate the error of the HMPIO. The green line in the figures represents the hybrid model with the ECM. The hybrid model (HM) proposed in this paper reduces the predictive residual of the HMPIO effectively. The GDTA results in Table 11 demonstrate that the HM has the highest accuracy, with a GDTA value only one tenth of that of the EM. Due to the ECM, the response time of the HM is slower than that of the MMNP and HMPIO, but only half of that of the EM. Although the improvement in HM accuracy comes at the cost of some response time, it can still meet the requirements for the boiler system.
Similarly, the predictive results of the drum, including $M_{dl}$, $\rho_v$, $H_w$ and $H_r$, are shown in Figure 17 and Figure 18. The reason why the MMNP curve for the output $M_{dl}$ keeps rising is that the initial value of the parameter $w_{vp2}^{14(3)}$ is inaccurate, resulting in a mass imbalance of the liquid in the drum. Due to the mutual conversion and calculation between vapor and liquid, the drum is a complex non-linear block, which leads to poor predictive performance of the simplified mechanism model MMNP, and even of the HMPIO. For the outputs $M_{dl}$ and $H_w$, even though the HMPIO predicts better than the MMNP, it still has obvious predictive errors, especially in the dynamic processes. Meanwhile, for the output $M_{dl}$, the predictive result of the EM is also not very satisfactory. Comparing the results of these models, we can see that the hybrid model has relatively excellent predictive performance. From the GDTA results in Table 12, we can conclude that the accuracy of the HM is improved by 66% and 76% compared to the EM-LSTM and EM-GRU, respectively. Finally, we provide the flexibility comparison in Table 13 according to reference [59]. In terms of the total score, the EM has the highest flexibility, followed by the HMPIO and HM, and the MMNP has the lowest. The HM, MMNP and HMPIO are not as flexible as the EM because their mechanism models limit the number of and relationships between input and output parameters. Although the interpretability of the HM comes at the cost of some flexibility, compared to the EM, the HM improves the accuracy and response time of the digital twin model effectively. In conclusion, the hybrid model established in this paper has relatively excellent predictive performance, which ensures the high fidelity of the digital twin model for the boiler system. However, the datasets used for training and verifying the model in this paper are limited and may not cover all working conditions. Therefore, the generalization performance of the hybrid model, especially the ECM, needs to be studied further in future work.

5. Conclusions and Future Work

In this paper, an RNN-based hybrid modeling method for the digital twin of a boiler system is established to improve the predictive accuracy of the mechanism model. The mechanism model of the main blocks in the boiler system is described through an RNN to facilitate training and updating of the parameters. By adding neurons to the mechanism model, the mapping between the time-varying parameters and the state variables is constructed. To compensate the predictive residual of the mechanism model, an LSTM-based error-compensation model is established. Moreover, the update architecture and training algorithm for the hybrid model are built to realize the iterative optimization of the model parameters. Finally, the experimental results show that the hybrid model has better predictive performance.
However, some shortcomings need to be studied further. First, for a few parameters, the optimization results are not satisfactory. In future work, we will explore corresponding solutions. Second, in the training algorithm, the calculation of some gradient terms is complex, which affects the optimization results of the parameters. In the future, we will try to develop an effective approach to simplify the gradient calculation. Finally, the datasets used for training and verifying the model are collected from a 600 MW power plant simulator. In the future, we hope to further validate the hybrid model on actual power plant datasets.

Author Contributions

Y.Z. and Y.C. conceived the modeling method and designed the experiments; Y.Z. and H.J. performed the experiments. All the authors analyzed the data. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation of China (grant no. 62203349) and in part by the Key Technologies Research and Development Program of China (grant no. 2018YFB1700100).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zheng, T.; Ardolino, M.; Bacchetti, A. The applications of Industry 4.0 technologies in manufacturing context: A systematic literature review. Int. J. Prod. Res. 2021, 59, 1922–1954.
2. Abdulaziz, Q.A.; Mad Kaidi, H.; Masrom, M.; Hamzah, H.S.; Sarip, S.; Dziyauddin, R.A.; Muhammad-Sukki, F. Developing an IoT Framework for Industry 4.0 in Malaysian SMEs: An Analysis of Current Status, Practices, and Challenges. Appl. Sci. 2023, 13, 3658.
3. Ryalat, M.; ElMoaqet, H.; AlFaouri, M. Design of a Smart Factory Based on Cyber-Physical Systems and Internet of Things towards Industry 4.0. Appl. Sci. 2023, 13, 2156.
4. Ahmad, T.; Zhu, H.; Zhang, D. Energetics systems and artificial intelligence: Applications of industry 4.0. Energy Rep. 2022, 8, 334–361.
5. Grieves, M. Digital twin: Manufacturing excellence through virtual factory replication. White Pap. 2014, 1, 1–7.
6. Jones, D.; Snider, C.; Nassehi, A. Characterising the Digital Twin: A systematic literature review. CIRP J. Manuf. Sci. Technol. 2020, 29, 36–52.
7. Liu, M.; Fang, S.; Dong, H. Review of digital twin about concepts, technologies, and industrial applications. J. Manuf. Syst. 2021, 58, 346–361.
8. Errandonea, I.; Beltran, S.; Arrizabalaga, S. Digital Twin for maintenance: A literature review. Comput. Ind. 2020, 123, 103316.
9. Shafto, M.; Conroy, M.; Doyle, R. DRAFT Modelling, Simulation, Information Technology & Processing Roadmap—Technology Area 11; National Aeronautics and Space Administration: Washington, DC, USA, 2010.
10. Phanden, R.; Sharma, P.; Dubey, A. A review on simulation in digital twin for aerospace, manufacturing and robotics. Mater. Today Proc. 2021, 38, 174–178.
11. Hanel, A.; Schnellhardt, T.; Wenkler, E. The development of a digital twin for machining processes for the application in aerospace industry. Procedia CIRP 2020, 93, 1399–1404.
12. Guo, J.; Hong, H.; Zhong, K. Production management and control method of aerospace manufacturing workshops based on digital twin. China Mech. Eng. 2020, 31, 808.
13. Piromalis, D.; Kantaros, A. Digital twins in the automotive industry: The road toward physical-digital convergence. Appl. Syst. Innov. 2022, 5, 65.
14. Rajesh, K.; Manikandan, N.; Ramshankar, C. Digital twin of an automotive brake pad for predictive maintenance. Procedia Comput. Sci. 2019, 165, 18–24.
15. Son, Y.; Park, T.; Lee, D. Digital twin–based cyber-physical system for automotive body production lines. Int. J. Adv. Manuf. Technol. 2021, 115, 291–310.
16. Yan, D.; Sha, W.; Wang, D. Digital twin-driven variant design of a 3C electronic product assembly line. Sci. Rep. 2022, 12, 3846.
17. Xu, J.; Guo, T. Application and research on digital twin in electronic cam servo motion control system. Int. J. Adv. Manuf. Technol. 2021, 112, 1145–1158.
18. Leng, J.; Zhang, H.; Yan, D. Digital twin-driven manufacturing cyber-physical system for parallel controlling of smart workshop. J. Ambient Intell. Humaniz. Comput. 2019, 10, 1155–1166.
19. Psarommatis, F.; May, G. A literature review and design methodology for digital twins in the era of zero defect manufacturing. Int. J. Prod. Res. 2022, 1–21.
20. Dong, Y.; Chen, Q.; Ding, W.; Shao, N.; Chen, G.; Li, G. State Evaluation and Fault Prediction of Protection System Equipment Based on Digital Twin Technology. Appl. Sci. 2022, 12, 7539.
21. Guo, K.; Wan, X.; Liu, L.; Gao, Z.; Yang, M. Fault Diagnosis of Intelligent Production Line Based on Digital Twin and Improved Random Forest. Appl. Sci. 2021, 11, 7733.
22. Peri, D.; Campana, E. High-fidelity models and multi-objective global optimization algorithms in simulation-based design. J. Ship Res. 2005, 49, 159–175.
23. Xia, M.; Shao, H.; Williams, D. Intelligent fault diagnosis of machinery using digital twin-assisted deep transfer learning. Reliab. Eng. Syst. Saf. 2021, 215, 107938.
24. Steindl, G.; Stagl, M.; Kasper, L.; Kastner, W.; Hofmann, R. Generic Digital Twin Architecture for Industrial Energy Systems. Appl. Sci. 2020, 10, 8903.
25. Piltan, F.; Kim, C.-H.; Kim, J.-M. Bearing Crack Diagnosis Using a Smooth Sliding Digital Twin to Overcome Fluctuations Arising in Unknown Conditions. Appl. Sci. 2022, 12, 6770.
26. Sleiti, K.; Kapat, S.; Vesely, L. Digital twin in energy industry: Proposed robust digital twin for power plant and other complex capital-intensive large engineering systems. Energy Rep. 2022, 8, 3704–3726.
27. Lei, Z.; Zhou, H.; Hu, W. Toward a web-based digital twin thermal power plant. IEEE Trans. Ind. Inform. 2021, 18, 1716–1725.
28. Xu, B.; Wang, J.; Wang, X. A case study of digital-twin-modelling analysis on power-plant-performance optimizations. Clean Energy 2019, 3, 227–234.
29. Yu, J.; Petersen, N.; Liu, P. Hybrid modelling and simulation of thermal systems of in-service power plants for digital twin development. Energy 2022, 260, 125088.
30. Yu, J.; Liu, P.; Li, Z. Hybrid modelling and digital twin development of a steam turbine control stage for online performance monitoring. Renew. Sustain. Energy Rev. 2020, 133, 110077.
31. Sunil, P.; Barve, J.; Nataraj, P. Mathematical modeling, simulation and validation of a boiler drum: Some investigations. Energy 2017, 126, 312–325.
32. Glushkov, D.; Paushkina, K.; Vershinina, K.; Vysokomornaya, O. Slagging Characteristics of a Steam Boiler Furnace with Flare Combustion of Solid Fuel When Switching to Composite Slurry Fuel. Appl. Sci. 2023, 13, 434.
33. Astrom, K.; Eklund, K. A simplified non-linear model of a drum boiler-turbine unit. Int. J. Control 1972, 16, 145–169.
34. Astrom, K.; Bell, R. Drum-boiler dynamics. Automatica 2000, 36, 363–378.
35. Bhambare, K.; Mitra, S.; Gaitonde, U. Modeling of a coal-fired natural circulation boiler. J. Energy Resour. Technol. 2007, 129, 159–167.
36. Yang, Y.; Liu, J. Research on the algorithm of the coal mill primary air flow prediction based on the hybrid modelling. Chin. J. Sci. Instrum. 2016, 37, 1913–1919.
37. Li, J.; Zhou, D.; Xiao, W. Hybrid Modeling of Gas Turbine based on Neural Network. J. Eng. Therm. Energy Power 2019, 34, 33–39.
38. Jain, A.; Mao, J.; Mohiuddin, K. Artificial neural networks: A tutorial. Computer 1996, 29, 31–44.
39. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257.
40. Smrekar, J.; Assadi, M.; Fast, M. Development of artificial neural network model for a coal-fired boiler using real plant data. Energy 2009, 34, 144–152.
41. Deepika, K.K.; Varma, P.S.; Reddy, C.R.; Sekhar, O.C.; Alsharef, M.; Alharbi, Y.; Alamri, B. Comparison of Principal-Component-Analysis-Based Extreme Learning Machine Models for Boiler Output Forecasting. Appl. Sci. 2022, 12, 7671.
42. Ilamathi, P.; Selladurai, V.; Balamurugan, K. Modeling and optimization of unburned carbon in coal-fired boiler using artificial neural network and genetic algorithm. J. Energy Resour. Technol. 2013, 135, 032201.
43. Smrekar, J.; Pandit, D.; Fast, M. Prediction of power output of a coal-fired power plant by artificial neural network. Neural Comput. Appl. 2010, 19, 725–740.
44. Grimaccia, F.; Niccolai, A.; Mussetta, M.; D’Alessandro, G. ISO 50001 Data Driven Methods for Energy Efficiency Analysis of Thermal Power Plants. Appl. Sci. 2023, 13, 1368.
45. Zhang, Y.; Liu, P.; Li, Z. Hybrid Modeling of Gas Turbine Based on Dominant Factors Method. J. Eng. Thermophys. 2021, 42, 2787–2795.
46. Li, S.; Tan, P.; Rao, D. Modeling of Coal-fired Generating Unit Coordination System with Data and Mechanism. J. Eng. Thermophys. 2022, 43, 19–26.
47. Wu, X.; Shen, J.; Li, Y. Data-driven modeling and predictive control for boiler–turbine unit. IEEE Trans. Energy Convers. 2013, 28, 470–481.
48. Dong, X.; Shen, J.; He, G. A general radial basis function neural network assisted hybrid modeling method for photovoltaic cell operating temperature prediction. Energy 2021, 234, 121212.
49. Heo, J.; Lee, K.; Garduno-Ramirez, R. Multi-objective control of power plants using particle swarm optimization techniques. IEEE Trans. Energy Convers. 2006, 21, 552–561.
50. Zhao, G.; Cui, Z.; Xu, J. Hybrid modeling-based digital twin for performance optimization with flexible operation in the direct air-cooling power unit. Energy 2022, 254, 124492.
51. Ghosh, D.; Hermonat, E.; Mhaskar, P. Hybrid modeling approach integrating first-principles models with subspace identification. Ind. Eng. Chem. Res. 2019, 58, 13533–13543.
52. Lv, C. System Simulation and Modelling of Large Thermal Power Unit; Tsinghua University Press: Beijing, China, 2002.
53. Graves, A. Long short-term memory. Supervised Seq. Label. Recurr. Neural Netw. 2012, 37–45.
54. Jozefowicz, R.; Zaremba, W.; Sutskever, I. An empirical exploration of recurrent network architectures. Int. Conf. Mach. Learn. 2015, 37, 2342–2350.
55. Chung, J.; Gulcehre, C.; Cho, K.H. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555.
56. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
57. Werbos, P. Backpropagation through time: What it does and how to do it. Proc. IEEE 1990, 78, 1550–1560.
58. Gonzalez, J.; Yu, W. Non-linear system modeling using LSTM neural networks. IFAC 2018, 51, 485–489.
59. Psarommatis, F.; May, G. A standardized approach for measuring the performance and flexibility of digital twins. Int. J. Prod. Res. 2022, 1–16.
Figure 1. The design process of digital twin model for boiler system.
Figure 2. Three hybrid modeling methods. (a) Hybrid modeling method with parameter identification only. (b) Hybrid modeling method with data-driven identification only. (c) Hybrid modeling method with both parameter and data-driven identification.
Figure 3. The DTM framework for boiler system.
Figure 4. Neural network structure of Equations (3) and (4) in pulverizing block.
Figure 5. Neural network structures of Equations (5) and (7) in furnace block.
Figure 6. Neural network structures of Equations (6) and (8) in furnace block.
Figure 7. Neural network structures of Equation (9) in furnace block.
Figure 8. Neural network structures of Equations (10) and (13) in drum block.
Figure 9. Neural network structures of Equation (11) in drum block.
Figure 10. Neural network structures of Equation (12) in drum block.
Figure 11. Parameter updating structure for digital twin model of boiler system.
Figure 12. Time sequence propagation structure of the network for mechanism model.
Figure 13. The iterative process of parameters in three blocks and error compensation models.
Figure 14. The predictive results of pulverizing system based on different models.
Figure 15. The predictive results (part 1) of furnace based on different models.
Figure 16. The predictive results (part 2) of furnace based on different models.
Figure 17. The predictive results (part 1) of drum based on different models.
Figure 18. The predictive results (part 2) of drum based on different models.
Table 1. Input and output parameters for DTM.
Inputs | Meanings | Inputs | Meanings | Outputs | Meanings | Outputs | Meanings
N_g | normalized speed of coal feeder | Q_net,ar | net calorific value | W_cf | coal flow | Q_sl | heat absorption of water wall
W_lk | primary cool air flow | T_slvp | the temperature of evaporation area | T_o | temperature of air–coal mixture | M_dl | the mass of liquid region in drum
W_rk | primary hot air flow | W_e | the inlet water flow | ρ_b | gas density | ρ_v | saturated vapor density
W_cf | coal flow | H_e | the enthalpy of inlet water | T_gs | average temperature of gas in furnace | H_w | the enthalpy of water in drum
W_sa | secondary air flow | Q_sl | the heat absorption of water wall | P_b | furnace pressure | H_r | the enthalpy of saturated vapor–water mixture in riser
W_gs | gas flow | P_s | the pressure of super-heater | O_cp | gas oxygen content
Table 2. Meanings of symbols in neurons.
Symbols | Operation | Symbols | Operation | Symbols | Operation
∑ | Summation | ∏ | Product | P_w | Exponential
T_p | Thermodynamic property functions | F_q | Function of vapor content of vapor–water mixture | S_q | Square root
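To make the neuron operations of Table 2 concrete, the following is a minimal, illustrative Python sketch of how such special-purpose nodes could be realized in software. The function names, signatures, and the interpolation-based placeholder for the thermodynamic property lookup are assumptions for exposition only, not the authors' implementation.

```python
import numpy as np

# Illustrative node operations corresponding to the symbols in Table 2.
# Names are hypothetical; the thermodynamic property and vapor-content
# functions are placeholders for problem-specific lookups.

def summation(inputs, weights, bias=0.0):
    """Weighted-sum neuron: sum_i w_i * x_i + b."""
    return float(np.dot(weights, inputs) + bias)

def product(inputs):
    """Product neuron: multiplies all of its inputs."""
    return float(np.prod(inputs))

def exponential(x, w=1.0):
    """P_w neuron: exponential of a weighted input."""
    return float(np.exp(w * x))

def square_root(x):
    """S_q neuron: square root of a non-negative input."""
    return float(np.sqrt(x))

def thermo_property(t, table_t, table_v):
    """T_p neuron: thermodynamic property lookup, approximated here
    by 1-D interpolation over tabulated data."""
    return float(np.interp(t, table_t, table_v))

def vapor_content(h_mix, h_liquid, h_latent):
    """F_q neuron: vapor quality of a vapor-water mixture, written here
    with the usual enthalpy-based definition."""
    return (h_mix - h_liquid) / h_latent
```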
Table 3. The known weights and unascertained weights in pulverizing block.
Known | Values | Known | Values | Unascertained | Values | Unascertained | Values
w p z 2 11 ( 1 ) H l k w p z 2 13 ( 2 ) H g w p z 0 1 K g w p z 2 11 ( 4 ) 1 / K T
w p z 2 21 ( 1 ) H r k w p z 2 21 ( 3 ) 1 w p z 1 11 ( 2 ) 1 / K c f w p z 2 14 ( 2 ) /
b p z 2 3 ( 1 ) T g w p z 2 31 ( 3 ) 1 w p z 1 21 ( 2 ) 1 w p z 1 11 ( 2 ) b p z 2 4 ( 2 ) /
w p z 2 11 ( 2 ) C p a w p z 2 12 ( 2 ) C c f
Table 4. The known weights and unascertained weights in furnace block.
Known | Values | Known | Values | Unascertained | Values | Unascertained | Values
w b o 0 1 1 w b o 2 13 ( 1 ) H s a w b o 1 11 ( 2 ) 1 / V b w b o 2 14 ( 1 ) C g s
w b o 2 32 ( 2 ) 1 w b o 2 42 ( 2 ) 1 w b o 3 11 ( 2 ) 1 / C b w b o 4 11 ( 2 ) V 0
w b o 4 21 ( 2 ) 21 ρ s a b b o 4 1 ( 2 ) 21 w b o 4 11 ( 3 ) 1 / K o w b o 4 21 ( 3 ) 1 w b o 4 11 ( 3 )
w b o 5 21 ( 2 ) 1 w b o 5 11 ( 4 ) 1 / K s q w b o 5 21 ( 4 ) 1 w b o 5 11 ( 4 )
w b o 2 11 ( 2 ) / w b o 5 11 ( 1 ) /
b b o 5 1 ( 1 ) /
Table 5. The known weights and unascertained weights in drum block.
Known | Values | Known | Values | Known | Values | Unascertained | Values
w v p 0 1 1 w v p 0 2 K e c w v p 1 21 ( 4 ) 1 w v p 0 3 W r o
w v p 1 31 ( 4 ) 1 w v p 2 11 ( 3 ) 1 b v p 2 1 ( 3 ) V d r u m w v p 2 14 ( 3 ) 1 / R f
w v p 2 24 ( 2 ) 1 w v p 2 32 ( 4 ) 1 w v p 3 12 ( 3 ) 1 w v p 4 11 ( 3 ) 1 / K r
b v p 3 2 ( 3 ) 1 w v p 3 21 ( 5 ) 1 w v p 3 41 ( 5 ) 1
w v p 4 11 ( 2 ) 1
Table 6. The value of parameters in mechanism models.
Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value | Parameter | Value
H l k 41 H r k 276 T g 30 C p a 1.022 C c f 1.3
H g 56.4 K g 100 K c f 2.5 K e c −600 H s a 200
K T 1667 w p z 2 14 ( 2 ) 0.001 b p z 2 4 ( 2 ) 0.001 K r 2.97 × 10 4 C g s 1.5
V 0 5.25 ρ s a 1.416 V b 16,855 b b o 5 1 ( 1 ) 5 × 10 7 w b o 2 11 ( 2 ) 5 × 10 5
C b 14.286 K o 20 K s q 1.25 w b o 5 11 ( 1 ) 2.5 × 10 10 V d r u m 118.53
W r o 1290 R f 2.27 × 10 6
Table 7. The range of time-varying parameters, inputs and outputs.
Parameter | Range | Std. Range | Parameter | Range | Std. Range | Parameter | Range | Std. Range
N g [ 0.54 , 0.75 ] [ 0 , 1 ] W l k [ 19 , 32 ] [ 0 , 1 ] W r k [ 78 , 92 ] [ 0 , 1 ]
W c f [ 54 , 80 ] [ 0 , 1 ] T o [ 120 , 145 ] [ 0 , 1 ] w p z 1 11 ( 1 ) [ 90 , 105 ] [ 0 , 1 ]
w p z 1 11 ( 2 ) [ 0.1 , 0.5 ] [ 0 , 1 ] w p z 2 12 ( 2 ) [ 1.2 , 2.2 ] [ 0 , 1 ] w p z 2 11 ( 4 ) [ 4 , 9 ] Ε 3 [ 0 , 1 ]
w p z 2 14 ( 2 ) [ 1 , 1 ] [ 1 , 1 ] b p z 2 4 ( 2 ) [ 2 , 2 ] [ 1 , 1 ] W s a [ 500 , 700 ] [ 0 , 1 ]
W g s [ 590 , 770 ] [ 0 , 1 ] Q n e t , a r [ 19 , 21 ] [ 0 , 1 ] T s l v p [ 329 , 332 ] [ 0 , 1 ]
ρ b [ 1.3 , 1.4 ] [ 0 , 1 ] T g s [ 1 , 1.1 ] Ε + 3 [ 0 , 1 ] P b [ 1.01 , 1.02 ] Ε + 5 [ 0 , 1 ]
O c p [ 2.6 , 3.8 ] [ 0 , 1 ] Q s l [ 5.2 , 6.2 ] Ε + 5 [ 0 , 1 ] w b o 1 11 ( 2 ) [ 5 , 8.5 ] Ε 5 [ 0 , 1 ]
w b o 2 14 ( 1 ) [ 1 , 1.6 ] [ 0 , 1 ] w b o 2 11 ( 2 ) [ 2 , 8 ] Ε 5 [ 0 , 1 ] w b o 3 11 ( 2 ) [ 0.03 , 0.08 ] [ 0 , 1 ]
w b o 4 11 ( 2 ) [ 4.5 , 6 ] [ 0 , 1 ] w b o 4 11 ( 3 ) [ 0.01 , 0.09 ] [ 0 , 1 ] w b o 5 11 ( 1 ) [ 5 , 5 ] Ε 10 [ 1 , 1 ]
w b o 5 11 ( 4 ) [ 0.7 , 1 ] [ 0 , 1 ] b b o 5 1 ( 1 ) [ 6 , 6 ] Ε 7 [ 1 , 1 ] W e [ 440 , 562 ] [ 0 , 1 ]
H e [ 1280 , 1350 ] [ 0 , 1 ] P s [ 17.8 , 18.3 ] [ 0 , 1 ] M d l [ 3.78 , 3.87 ] Ε + 4 [ 0 , 1 ]
ρ v [ 137 , 149 ] [ 0 , 1 ] H w [ 1581 , 1596 ] [ 0 , 1 ] H r [ 2010 , 2074 ] [ 0 , 1 ]
w v p 0 3 [ 1280 , 1295 ] [ 0 , 1 ] w v p 2 14 ( 3 ) [ 4.2 , 4.5 ] Ε + 5 [ 0 , 1 ] w v p 4 11 ( 3 ) [ 3.2 , 4 ] Ε 5 [ 0 , 1 ]
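Table 7 pairs each physical range with a standardized range of [0, 1] (or [−1, 1] for the signed weights). Below is a minimal sketch of the implied affine min-max scaling; the function names are illustrative, and the assumption that plain min-max scaling is used is supported by the round trip between the standardized and converged values reported later in Table 9.

```python
def standardize(x, lo, hi, std_lo=0.0, std_hi=1.0):
    """Affine min-max map from the physical range [lo, hi] of Table 7
    to the standardized range [std_lo, std_hi]."""
    return std_lo + (x - lo) * (std_hi - std_lo) / (hi - lo)

def destandardize(x_std, lo, hi, std_lo=0.0, std_hi=1.0):
    """Inverse map from the standardized range back to physical units."""
    return lo + (x_std - std_lo) * (hi - lo) / (std_hi - std_lo)

# Example: the coal-feeder speed N_g has the physical range [0.54, 0.75],
# so a measured value of 0.60 maps to roughly 0.286 in standardized units.
n_g_std = standardize(0.60, 0.54, 0.75)
```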
Table 8. The range of outputs in error compensation model.
Parameter | Range | Std. Range | Parameter | Range | Std. Range | Parameter | Range | Std. Range
Δ ρ b e c [ 5 , 1 ] Ε 3 [ 0 , 1 ] Δ T g s e c [ 3 , 4 ] [ 0 , 1 ] Δ P b e c [ 4.5 , 0.5 ] [ 0 , 1 ]
Δ O c p e c [ 0.04 , 0.02 ] [ 0 , 1 ] Δ Q s l e c [ 6 , 8 ] Ε + 3 [ 0 , 1 ] Δ M d l e c [ 150 , 150 ] [ 0 , 1 ]
Δ ρ v e c [ 1 , 4 ] [ 0 , 1 ] Δ H w e c [ 8 , 15 ] [ 0 , 1 ] Δ H r e c [ 8 , 15 ] [ 0 , 1 ]
Table 9. The convergence value, normalized value and actual value of all parameters.
Parameter | Std. Value | Con. Value | Actual Value | Parameter | Std. Value | Con. Value | Actual Value
w p z 1 11 ( 1 ) 0.466696.998897 w p z 1 11 ( 2 ) 0.36940.24770.245
w p z 2 12 ( 2 ) 0.86172.06171.988 w p z 2 11 ( 4 ) 0.8076 8 × 10 4 7.93 × 10 4
w p z 2 14 ( 2 ) 0.17740.1774/ b p z 2 4 ( 2 ) 0.18360.3672/
w b o 1 11 ( 2 ) 0.3447 6.21 × 10 5 6.13 × 10 5 w b o 2 14 ( 1 ) 0.51691.31011.31
w b o 2 11 ( 2 ) 0.1650 2.99 × 10 5 3.38 × 10 5 w b o 3 11 ( 2 ) 0.44610.05230.05
w b o 4 11 ( 2 ) 0.60895.41345.4135 w b o 4 11 ( 3 ) 0.24760.02980.0287
w b o 5 11 ( 1 ) −0.3812 1.91 × 10 10 / w b o 5 11 ( 4 ) 0.93280.97980.99
b b o 5 1 ( 1 ) 0.9738 5.84 × 10 7 / w v p 0 3 0.33341285.011283.85
w v p 2 14 ( 3 ) 0.3048429144.41429150 w v p 4 11 ( 3 ) 0.8271 3.862 × 10 5 3.85 × 10 5
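As a consistency check, the "Con. Value" column of Table 9 can be recovered by mapping the standardized values back through the ranges of Table 7 with the same affine scaling sketched above; for example, 90 + 0.4666 × (105 − 90) ≈ 96.999, which matches the reported 96.9988 up to rounding. A short, self-contained version of this spot check:

```python
# Spot-check a few converged parameters: map the standardized values of
# Table 9 back to physical units using the corresponding ranges of Table 7.
checks = [
    # (standardized value, (range low, range high), reported converged value)
    (0.4666, (90.0, 105.0), 96.9988),
    (0.3694, (0.1, 0.5), 0.2477),
    (0.8617, (1.2, 2.2), 2.0617),
]
for std_val, (lo, hi), reported in checks:
    recovered = lo + std_val * (hi - lo)
    print(f"standardized {std_val:.4f} -> {recovered:.4f} (reported {reported})")
```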
Table 10. The evaluation indexes of digital twin model for pulverizing system.
Indexes | Output | MMNP | EM-LSTM | EM-GRU | HMPIO
AOP (%) W c f 85.512.661.410.06
T o 75.010.20.50.17
ACVAR W c f 0.04380.03910.03940.0363
T o 0.67710.01470.01490.0148
GDTA (%)global80.261.430.950.12
GAVARglobal0.36050.02690.02710.0256
ART (s)global 2.84 × 10 5 0.00440.0041 7.43 × 10 6
RTVARglobal 2.20 × 10 7 2.27 × 10 5 9.29 × 10 6 5.23 × 10 8
Table 11. The evaluation indexes of digital twin model for furnace.
Indexes | Output | MMNP | EM-LSTM | EM-GRU | HMPIO | HM
AOP (%) ρ b 5.230.800.633.990.01
T g s 9.640.070.070.280.02
P b 3.680.530.382.690.01
O c p 61.550.40.360.240.04
Q s l 164.940.130.070.560.05
ACVAR ρ b 0.0018 7.09 × 10 4 8.39 × 10 4 0.0011 7.63 × 10 4
T g s 0.00740.00120.00120.00120.0013
P b 0.0014 4.99 × 10 4 5.94 × 10 4 8.05 × 10 4 5.41 × 10 4
O c p 0.1985 4.22 × 10 5 4.69 × 10 5 7.22 × 10 5 4.51 × 10 5
Q s l 0.62210.00250.00250.00280.0025
GDTA (%)global49.010.390.301.550.03
GAVARglobal0.1662 9.90 × 10 4 0.00100.00120.0010
ART (s)global 3.25 × 10 6 0.00480.0045 2.69 × 10 6 0.0024
RTVARglobal 4.51 × 10 9 1.42 × 10 6 1.10 × 10 6 2.01 × 10 9 3.15 × 10 7
Table 12. The evaluation indexes of digital twin model for drum.
Indexes | Output | MMNP | EM-LSTM | EM-GRU | HMPIO | HM
AOP (%) M d l 35.723.074.8011.920.48
ρ v 3.200.270.190.220.09
H w 9.140.420.522.480.62
H r 2.240.240.220.850.18
ACVAR M d l 0.03500.00220.00290.00350.0021
ρ v 0.00220.00160.00160.00170.0017
H w 0.00800.00100.00100.00380.0014
H r 0.00410.00390.00380.00390.0038
GDTA (%)global12.581.001.433.870.34
GAVARglobal0.01230.00220.00230.00320.0023
ART (s)global 4.75 × 10 6 0.00440.0046 7.24 × 10 6 0.0022
RTVARglobal 1.45 × 10 9 6.41 × 10 7 8.70 × 10 6 5.19 × 10 9 1.04 × 10 7
Table 13. The flexibility index for digital twin model.
Question | MMNP | HMPIO | EM | HM
Q1: Can the DTM be used in other use cases without any changes? (YES (Y) or NO (N)) | N (−0.8) | N (−0.8) | Y (+0.8) | N (−0.8)
Q2: Does the DTM require specific experiments to develop the DT (S) or can it be developed using real-time or historical data (NS)? | NS (+0.5) | NS (+0.5) | NS (+0.5) | NS (+0.5)
Q3: Can the number of input parameters be increased? | N (−0.6) | Y (+0.6) | Y (+0.6) | Y (+0.6)
Q3.1: Are there any limitations to increasing the number of input parameters? (if Q3 is YES) | / | N (+0.2) | N (+0.2) | N (+0.2)
Q3.3: Does increasing the number of input parameters require modification of the DTM? (if Q3 is YES) | / | Y (−0.2) | N (+0.2) | Y (−0.2)
Q3.4: Does increasing the number of input parameters affect the output parameters? (if Q3 is YES) | / | Y (−0.2) | Y (−0.2) | Y (−0.2)
Q4: Can the number of input parameters be decreased? | N (−0.4) | N (−0.4) | Y (+0.4) | N (−0.4)
Q4.1: Does decreasing the number of input parameters affect the output parameters? (if Q4 is YES) | / | / | Y (−0.2) | /
Q4.2: Does decreasing the number of input parameters require modification of the DTM? (if Q4 is YES) | / | / | N (+0.2) | /
Q5: Can the DTM handle more than one output parameter at the same time? (No need for aggregation of multiple parameters into a single value.) | Y (+0.6) | Y (+0.6) | Y (+0.6) | Y (+0.6)
Q6: Can the number of output parameters be increased? | N (−0.5) | N (−0.5) | Y (+0.5) | N (−0.5)
Q6.1: Are there any limitations on increasing the number of output parameters in the DTM? (if Q6 is YES) | / | / | Y (−0.2) | /
Q7: Can the DTM adapt to new operational conditions? | Y (+0.8) | Y (+0.8) | Y (+0.8) | Y (+0.8)
Q7.1: Is the adaptation of DTM performed automatically (A)/manually (M)? (if Q7 is YES) | A (+0.3) | A (+0.3) | A (+0.3) | A (+0.3)
Q7.2: Does the adaptation of the DTM require additional data? (if Q7 is YES) | Y (−0.3) | Y (−0.3) | Y (−0.3) | Y (−0.3)
Q8: Can the DTM be executed using a conventional hardware computer? | Y (+0.3) | Y (+0.3) | Y (+0.3) | Y (+0.3)
Q9: Does the use of the DTM require special training? | N (+0.3) | N (+0.3) | N (+0.3) | N (+0.3)
Total score | 5 | 6 | 9.6 | 6
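The per-question scores of Table 13 can be tallied programmatically. The sketch below reproduces only the raw sums of the bracketed weights (taking 0 where a question is not applicable); the totals reported in the last row follow the standardized flexibility scoring of [59], which appears to apply an additional baseline shift on top of these raw sums, so the printed values are illustrative rather than the published index.

```python
# Raw tally of the per-question weights in Table 13 for the four models.
scores = {
    #        MMNP  HMPIO    EM    HM
    "Q1":   [-0.8, -0.8, +0.8, -0.8],
    "Q2":   [+0.5, +0.5, +0.5, +0.5],
    "Q3":   [-0.6, +0.6, +0.6, +0.6],
    "Q3.1": [ 0.0, +0.2, +0.2, +0.2],
    "Q3.3": [ 0.0, -0.2, +0.2, -0.2],
    "Q3.4": [ 0.0, -0.2, -0.2, -0.2],
    "Q4":   [-0.4, -0.4, +0.4, -0.4],
    "Q4.1": [ 0.0,  0.0, -0.2,  0.0],
    "Q4.2": [ 0.0,  0.0, +0.2,  0.0],
    "Q5":   [+0.6, +0.6, +0.6, +0.6],
    "Q6":   [-0.5, -0.5, +0.5, -0.5],
    "Q6.1": [ 0.0,  0.0, -0.2,  0.0],
    "Q7":   [+0.8, +0.8, +0.8, +0.8],
    "Q7.1": [+0.3, +0.3, +0.3, +0.3],
    "Q7.2": [-0.3, -0.3, -0.3, -0.3],
    "Q8":   [+0.3, +0.3, +0.3, +0.3],
    "Q9":   [+0.3, +0.3, +0.3, +0.3],
}
models = ["MMNP", "HMPIO", "EM", "HM"]
raw = [round(sum(col), 2) for col in zip(*scores.values())]
print(dict(zip(models, raw)))  # raw tallies: {'MMNP': 0.2, 'HMPIO': 1.2, 'EM': 4.8, 'HM': 1.2}
```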