Article

System Entropy Measurement of Stochastic Partial Differential Systems

Lab of Control and Systems Biology, National Tsing Hua University, Hsinchu 30013, Taiwan
* Author to whom correspondence should be addressed.
Entropy 2016, 18(3), 99; https://doi.org/10.3390/e18030099
Submission received: 24 December 2015 / Revised: 7 March 2016 / Accepted: 8 March 2016 / Published: 18 March 2016

Abstract
System entropy describes the dispersal of a system's energy and is an indication of the disorder of a physical system. Several system entropy measurement methods have been developed for dynamic systems. However, most real physical systems are modeled using stochastic partial differential dynamic equations in the spatio-temporal domain, and no efficient method currently exists that can calculate the system entropy of stochastic partial differential systems (SPDSs) in consideration of the effects of intrinsic random fluctuation and compartment diffusion. In this study, a novel indirect measurement method is proposed for calculating the system entropy of SPDSs using a Hamilton-Jacobi integral inequality (HJII)-constrained optimization method. In other words, we solve a nonlinear HJII-constrained optimization problem to measure the system entropy of nonlinear stochastic partial differential systems (NSPDSs). To simplify the system entropy measurement of NSPDSs, the global linearization technique and a finite difference scheme were employed to approximate the nonlinear stochastic spatial state space system. This allows the nonlinear HJII-constrained optimization problem for the system entropy measurement to be transformed into an equivalent linear matrix inequalities (LMIs)-constrained optimization problem, which can be easily solved using the MATLAB LMI toolbox (MATLAB R2014a, version 8.3). Finally, several examples are presented to illustrate the system entropy measurement of SPDSs.

1. Introduction

Information entropy is considered a measure of uncertainty, and its maximization yields the least-biased solution under maximal uncertainty [1,2,3,4,5]. Information entropy characterizes the uncertainty caused by the random parameters of a random system and by measurement noise in the environment [6]. Entropy has been used for information retrieval, such as parametric and nonparametric estimation of systems based on real data, which is an important topic in advanced scientific disciplines such as econometrics [1,2], financial mathematics [4], mathematical statistics [3,4,6], control theory [5,7,8], signal processing [9], and mechanical engineering [10,11]. Methods developed within this framework consider model parameters as random quantities and employ the informational entropy maximization principle to estimate these model parameters [6,9].
System entropy describes the disorder or uncertainty of a physical system and can be considered a significant system property [12]. Real physical systems are typically modeled using stochastic partial differential dynamic equations in the spatio-temporal domain [12,13,14,15,16,17]. The entropy of thermodynamic systems has been discussed in [18,19,20], the maximum entropy generation of irreversible open systems in [20,21,22], and the entropy of living systems in [19,23]. The system entropy of stochastic partial differential systems (SPDSs) can be measured as the logarithm of the system randomness, which, from the entropic point of view, is the ratio of the output signal randomness to the input signal randomness. Therefore, if the system randomness can be measured, the system entropy can be easily obtained from its logarithm. The system entropy of biological systems modeled using ordinary differential equations was discussed in [24]. However, since many real physical and biological systems are modeled using partial differential dynamic equations, in this study we discuss the system entropy of SPDSs. In general, we can measure the system entropy from the characteristics of a system without measuring the system signal or input noise. For example, the low-pass characteristic of a filter can be determined from its transfer function or frequency response without measuring its input/output signals. Hence, in this study, we measure the system entropy of SPDSs from the system's characteristics. Moreover, many real physical and biological systems are nonlinear, such as large-scale systems [25,26,27,28], multiple time-delay interconnected systems [29], tunnel diode circuit systems [30,31], and single-link rigid robot systems [32]. Therefore, we also discuss the system entropy of nonlinear systems as a special case in this paper.
However, because direct measurement of the system entropy of SPDSs in the spatio-temporal domain using current methods is difficult, in this study an indirect method for system entropy measurement was developed through the minimization of its upper bound. That is, we first determine an upper bound of the system entropy and then decrease it to the minimum possible value to approach the actual system entropy. For simplicity, we first measure the system entropy of linear stochastic partial differential systems (LSPDSs) and then the system entropy of nonlinear stochastic partial differential systems (NSPDSs) by solving a nonlinear Hamilton-Jacobi integral inequality (HJII)-constrained optimization problem. We found that the intrinsic random fluctuation of SPDSs increases the system entropy.
To overcome the difficulty in solving the system entropy measurement problem due to the complexity of the nonlinear HJII, a global linearization technique was employed to interpolate several local LSPDSs to approximate an NSPDS, and a finite difference scheme was employed to approximate the partial differential operator with a finite difference operator at all grid points. Hence, the LSPDSs at all grid points can be represented by a spatial stochastic state space system, and the system entropy of the LSPDSs can be measured by solving a linear matrix inequalities (LMIs)-constrained optimization problem using the MATLAB LMI toolbox [12]. Next, the NSPDS at all grid points can be represented by an interpolation of several local linear spatial state space systems; therefore, the system entropy of NSPDSs can be measured by solving the LMIs-constrained optimization problem.
Finally, based on the proposed systematic analysis and measurement of the system entropy of SPDSs, two simulation examples, a heat transfer system and a biochemical system, are given to illustrate the proposed system entropy measurement procedure for SPDSs.

2. General System Entropy of LSPDSs

For simplicity, we will first calculate the entropy of linear partial differential systems (LPDSs). Then, the result will be extended to the entropy measurement of NSPDSs. Consider the following LPDS [15,16]:
$$\frac{\partial y(x,t)}{\partial t} = \kappa \nabla^2 y(x,t) + A y(x,t) + B v(x,t), \qquad z(x,t) = C y(x,t), \tag{1}$$
where $x = [x_1\ x_2]^T \in U$ is the space variable, $y(x,t) = [y_1(x,t), \ldots, y_n(x,t)]^T \in \mathbb{R}^n$ is the state variable, $v(x,t) = [v_1(x,t), \ldots, v_l(x,t)]^T \in \mathbb{R}^l$ is the random input signal, and $z(x,t) = [z_1(x,t), \ldots, z_m(x,t)]^T \in \mathbb{R}^m$ is the output signal; $x$ and $t$ are the space and time variables, respectively. The space domain $U$ is a two-dimensional bounded domain. The system coefficients are $\kappa \in \mathbb{R}^{n \times n}$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times l}$, and $C \in \mathbb{R}^{m \times n}$. The Laplace (diffusion) operator $\nabla^2$ is defined as follows [15,16]:
$$\nabla^2 y(x,t) := \sum_{k=1}^{2} \frac{\partial^2 y(x,t)}{\partial x_k^2}, \qquad \frac{\partial^2 y(x,t)}{\partial x_k^2} := \left[ \frac{\partial^2}{\partial x_k^2} y_1(x,t), \ldots, \frac{\partial^2}{\partial x_k^2} y_n(x,t) \right]^T \in \mathbb{R}^n. \tag{2}$$
Suppose the initial value is $y(x,0) := y_0(x)$. For simplicity, the boundary condition is usually given by the Dirichlet boundary condition, i.e., $y(x,t) = \text{constant}$ on $\partial U$, or by the Neumann boundary condition $\partial y(x,t)/\partial n = 0$ on $\partial U$, where $n$ is a normal vector to the boundary $\partial U$ [15,16]. The randomness of the random input signal is measured by its average energy in the domain $U$, and the entropy of the random input signal is measured by the logarithm of the input signal randomness as follows [1,2,24]:
$$\log \frac{1}{U t_f} E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\},$$
where $E$ denotes the expectation operator and $t_f$ denotes the period of the random input signal, i.e., $v(x,t)$ is defined on $U \times [0, t_f]$. Similarly, the entropy of the random output signal $z(x,t)$ is obtained as:
$$\log \frac{1}{U t_f} E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}.$$
In this situation, the system entropy $S$ of the LPDS given in Equation (1) can be obtained as the difference between the output signal entropy and the input signal entropy, i.e., the net signal entropy of the LPDS [33]:
$$S = \log \frac{1}{U t_f} E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\} - \log \frac{1}{U t_f} E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\} = \log \frac{E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}}{E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\}}. \tag{3}$$
Let us denote the system randomness as the following normalized randomness:
$$S_0 = \frac{E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}}{E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\}}, \quad \text{if } y_0(x) = 0. \tag{4}$$
Then, the system entropy is $S = \log S_0$. That is, if the system randomness can be obtained, the system entropy can be determined from its logarithm. Therefore, our major task in measuring the entropy of the LPDS given in Equation (1) is first the calculation of the system randomness $S_0$ given in Equation (4). However, it is not easy to directly calculate the normalized randomness $S_0$ in Equation (4) in the spatio-temporal domain. Suppose there exists an upper bound of $S_0$ as follows:
$$S_0 = \frac{E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}}{E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\}} \le \bar{S}_0, \tag{5}$$
and we will determine the condition under which $S_0$ has an upper bound $\bar{S}_0$. Then, we will decrease the upper bound $\bar{S}_0$ as much as possible so that it approaches $S_0$, and finally obtain the system entropy as $S = \log S_0$.
Remark 1. (i) From the system entropy of the LPDS in Equation (1), if the randomness of the input signal $v(x,t)$ is larger than the randomness of the output signal $z(x,t)$, i.e.:
$$E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\} > E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}, \tag{6}$$
then $S_0 < 1$ and $S < 0$. A negative system entropy implies that the system can absorb external energy to increase the structural order of the system. All biological systems are of this type; according to Schrödinger's viewpoint, biological systems consume negative entropy, leading to the construction and maintenance of their system structures, i.e., life can access negative entropy to produce high structural order. (ii) If the randomness of the output signal $z(x,t)$ is larger than the randomness of the input signal $v(x,t)$, i.e.:
$$E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\} < E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}, \tag{7}$$
then $S_0 > 1$ and $S > 0$. A positive system entropy indicates that the system structural disorder increases and the system disperses entropy to the environment. (iii) If the randomness of the input signal $v(x,t)$ is equal to the randomness of the output signal $z(x,t)$, i.e.:
$$E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\} = E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}, \tag{8}$$
then $S_0 = 1$ and $S = 0$. In this case, the system structural order is maintained constant with zero system entropy. (iv) If the initial value $y_0(x) \neq 0$, then the system randomness $S_0$ in Equation (4) should be modified as:
$$S_0 = \frac{E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\}}{\int_U V(y_0(x))/\bar{S}_0\, dx + E\left\{ \int_U \int_0^{t_f} v^T(x,t)\, v(x,t)\, dt\, dx \right\}} \le \bar{S}_0 \tag{9}$$
for a positive Lyapunov function $V(y(x,t)) > 0$; the randomness due to the initial condition $y_0(x) \neq 0$ should be considered a type of input randomness.
Based on the upper bound $\bar{S}_0$ of the system randomness as given in Equation (5), we get the following result:
Proposition 1. For the LPDS in Equation (1), if the following HJII holds for a Lyapunov function $V(y(x,t)) > 0$ with $V(0) = 0$:
$$\int_U \left[ y^T(x,t) C^T C y(x,t) + \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T \left( \kappa \nabla^2 y(x,t) + A y(x,t) \right) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T B B^T \left( \frac{\partial V(y(x,t))}{\partial y} \right) \right] dx < 0, \tag{10}$$
then the system randomness $S_0$ has an upper bound $\bar{S}_0$ as given in Equation (5).
Proof. See Appendix A.                   ☐
Since $\bar{S}_0$ is the upper bound of $S_0$, $S_0$ can be calculated by solving the following HJII-constrained optimization problem:
$$S_0 = \min_{V(y(x,t)) > 0} \bar{S}_0 \quad \text{subject to the HJII in Equation (10)}. \tag{11}$$
Consequently, we can calculate the system entropy using $S = \log S_0$.
Remark 2. If the system in Equation (1) is free of the partial differential term $\nabla^2 y(x,t)$, i.e., in the case of the following conventional linear dynamic system:
$$\frac{dy(t)}{dt} = A y(t) + B v(t), \qquad z(t) = C y(t), \tag{12}$$
then the system entropy of the linear dynamic system in Equation (12) is written as [24]:
$$S = \log \frac{E\left\{ \int_0^{t_f} z^T(t)\, z(t)\, dt \right\}}{E\left\{ \int_0^{t_f} v^T(t)\, v(t)\, dt \right\}}. \tag{13}$$
Therefore, the result of Proposition 1 is modified as the following corollary.
Corollary 1. For the linear dynamic system in Equation (12), if the following Riccati-like inequality holds for a positive definite symmetric matrix $P > 0$:
$$P A + A^T P + C^T C + \frac{1}{\bar{S}_0} P B B^T P < 0, \tag{14}$$
or equivalently (by the Schur complement [12]):
$$\begin{bmatrix} P A + A^T P + C^T C & P B \\ B^T P & -\bar{S}_0 I \end{bmatrix} < 0, \tag{15}$$
then the system randomness $S_0$ of the linear dynamic system in Equation (12) has an upper bound $\bar{S}_0$.
Proof. See Appendix B.                       ☐
Thus, the randomness $S_0$ of the linear dynamic system in Equation (12) is obtained by solving the following LMI-constrained optimization problem:
$$S_0 = \min_{P > 0} \bar{S}_0 \quad \text{subject to the LMI in Equation (15)}. \tag{16}$$
Hence, the system entropy of the linear dynamic system in Equation (12) can be calculated using $S = \log S_0$. The LMI-constrained optimization problem given in Equation (16) is easily solved by decreasing $\bar{S}_0$ until no positive definite solution $P$ exists for the LMI given in Equation (15), which can be done using the MATLAB LMI toolbox [12]. Substituting $S_0$ for $\bar{S}_0$ in Equation (14), we get:
$$C^T C + \frac{1}{S_0} P B B^T P < -(P A + A^T P). \tag{17}$$
The right-hand side of Equation (17) can be considered an indication of system stability. If the eigenvalues of $A$ are more negative (more stable), i.e., the right-hand side is larger, then $S_0$, and thus the system entropy $S$, is smaller. Obviously, the system entropy is inversely related to the stability of the dynamic system. If $A$ is fixed, then an increase in the input coupling $B$ may increase $S_0$ and $S$.
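The following is a minimal numerical sketch of the optimization in Equation (16), assuming the Python package cvxpy as a stand-in for the MATLAB LMI toolbox; the matrices $A$, $B$, $C$, the bisection tolerances, and the feasibility margins are illustrative assumptions, not values from the paper:

```python
# Sketch: estimate the system randomness S0 of dy/dt = Ay + Bv, z = Cy
# by bisecting on the upper bound S0_bar subject to the LMI in Equation (15).
import numpy as np
import cvxpy as cp

def randomness_upper_bound(A, B, C, tol=1e-6, s_hi=1e4, eps=1e-9):
    n, m = A.shape[0], B.shape[1]
    def feasible(s_bar):
        # LMI (15): [[PA + A'P + C'C, PB], [B'P, -s_bar*I]] < 0 with P > 0
        P = cp.Variable((n, n), symmetric=True)
        M = cp.bmat([[P @ A + A.T @ P + C.T @ C, P @ B],
                     [B.T @ P, -s_bar * np.eye(m)]])
        M = 0.5 * (M + M.T)   # enforce structural symmetry for the PSD constraint
        prob = cp.Problem(cp.Minimize(0),
                          [P >> eps * np.eye(n), M << -eps * np.eye(n + m)])
        prob.solve(solver=cp.SCS)
        return prob.status in ("optimal", "optimal_inaccurate")
    lo, hi = 0.0, s_hi
    while hi - lo > tol:                        # bisection on S0_bar
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi                                   # minimal feasible S0_bar ~ S0

A = np.array([[-2.0, 0.5], [0.0, -1.0]])        # illustrative stable system
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
S0 = randomness_upper_bound(A, B, C)
print("S0 =", S0, " entropy S = log S0 =", np.log10(S0))  # log base 10, as in Section 6
```

The same bisection idea carries over to all the LMI-constrained optimization problems below; only the LMI block changes.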
Remark 3. If the LPDS in Equation (1) suffers from the following intrinsic random fluctuation:
$$\frac{\partial y(x,t)}{\partial t} = \kappa \nabla^2 y(x,t) + A y(x,t) + B v(x,t) + H y(x,t) w(x,t), \qquad z(x,t) = C y(x,t), \tag{18}$$
where the constant matrix $H \in \mathbb{R}^{n \times n}$ denotes the deterministic part of the parametric variation of the system matrix $A$ and $w(x,t)$ is a stationary spatio-temporal white noise denoting the random source of intrinsic parametric variation [34,35], then the LSPDS in Equation (18) can be rewritten in the following Itô differential form:
$$dy(x,t) = \left( \kappa \nabla^2 y(x,t) + A y(x,t) + B v(x,t) \right) dt + H y(x,t)\, dW(x,t), \qquad z(x,t) = C y(x,t), \tag{19}$$
where $dW(x,t) = w(x,t)\, dt$, with $W(x,t)$ being a Wiener process (Brownian motion) in a zero-mean Gaussian random field with unit variance at each location $x$ [15].
For the LSPDS in Equation (19), we get the following result.
Proposition 2. For the LSPDS in Equation (19), if the following HJII holds for a Lyapunov function $V(y(x,t)) > 0$ with $V(0) = 0$:
$$E\left\{ \int_U \left[ y^T(x,t) C^T C y(x,t) + \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T \left( \kappa \nabla^2 y(x,t) + A y(x,t) \right) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T B B^T \left( \frac{\partial V(y(x,t))}{\partial y} \right) + \frac{1}{2} y^T(x,t) H^T \left( \frac{\partial^2 V(y(x,t))}{\partial y^2} \right)^T H y(x,t) \right] dx \right\} < 0, \tag{20}$$
then the system randomness $S_0$ has an upper bound $\bar{S}_0$ as given in Equation (5).
Proof. See Appendix C.          ☐
Since $\bar{S}_0$ is the upper bound of $S_0$, it can be calculated by solving the following HJII-constrained optimization problem:
$$S_0 = \min_{V(y(x,t)) > 0} \bar{S}_0 \quad \text{subject to the HJII in Equation (20)}. \tag{21}$$
Hence, the system entropy of the LSPDS in Equations (18) or (19) can be obtained using $S = \log S_0$, where $S_0$ is the system randomness solved from Equation (21).
Remark 4. Comparing the HJII in Equation (20) with the HJII in Equation (10), we find that Equation (20) has an extra positive term $\frac{1}{2} y^T(x,t) H^T \left( \partial^2 V(y(x,t)) / \partial y^2 \right)^T H y(x,t)$ due to the intrinsic random parametric fluctuation in Equation (18). To keep the left-hand side of Equation (20) negative, the system randomness $S_0$ solved from Equation (20) must be larger than the randomness $S_0$ solved from Equation (10); i.e., the system entropy of the LSPDS in Equations (18) or (19) is larger than that of the LPDS in Equation (1), because the intrinsic random parametric variation $H y(x,t) w(x,t)$ in Equation (18) increases the system randomness and hence the system entropy.
Remark 5. If the LSPDS in Equation (18) is free of the partial differential term $\nabla^2 y(x,t)$, i.e., in the case of the conventional linear stochastic dynamic system:
$$\frac{dy(t)}{dt} = A y(t) + B v(t) + H y(t) w(t), \qquad z(t) = C y(t), \tag{22}$$
or the following Itô form:
$$dy(t) = \left( A y(t) + B v(t) \right) dt + H y(t)\, dW(t), \qquad z(t) = C y(t), \tag{23}$$
then we modify Proposition 2 as the following corollary.
Corollary 2. For the linear stochastic dynamic system in Equations (22) or (23), if the following Riccati-like inequality holds for a positive definite symmetric matrix $P = P^T > 0$:
$$P A + A^T P + C^T C + H^T P H + \frac{1}{\bar{S}_0} P B B^T P < 0, \tag{24}$$
or equivalently:
$$\begin{bmatrix} P A + A^T P + C^T C + H^T P H & P B \\ B^T P & -\bar{S}_0 I \end{bmatrix} < 0, \tag{25}$$
then the system randomness $S_0$ of the linear stochastic dynamic system in Equations (22) or (23) has an upper bound $\bar{S}_0$.
Proof. See Appendix D.          ☐
Therefore, the system randomness $S_0$ of the linear stochastic system in Equations (22) or (23) can be obtained by solving the following LMI-constrained optimization problem:
$$S_0 = \min_{P > 0} \bar{S}_0 \quad \text{subject to the LMI in Equation (25)}. \tag{26}$$
Hence, the system entropy in Equation (13) of the linear stochastic system in Equations (22) or (23) can be calculated using $S = \log S_0$, where the system randomness $S_0$ is the optimal solution of Equation (26).
By substituting the $S_0$ calculated from Equation (26) into Equation (24), we get:
$$C^T C + H^T P H + \frac{1}{S_0} P B B^T P < -(P A + A^T P). \tag{27}$$
Remark 6. Comparing Equation (27) with Equation (17), it can be seen that the term $H^T P H$ due to the intrinsic random parametric fluctuation $H y(t) w(t)$ in Equation (22) increases the system randomness $S_0$, which consequently increases the system entropy $S$.
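Remark 6 can also be checked empirically. Below is a minimal Monte Carlo sketch, assuming an Euler-Maruyama discretization of Equations (22)/(23); the matrices, noise realization, and step sizes are illustrative assumptions, and the absolute value of the ratio depends on the bandwidth of the simulated input, so only the relative comparison between $H = 0$ and $H \neq 0$ is meaningful here:

```python
# Sketch: compare the empirical output/input energy ratio of Equation (12)
# (H = 0) against Equation (22)/(23) (H != 0) via Euler-Maruyama simulation.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])

def empirical_randomness(H, dt=1e-3, tf=20.0, runs=10):
    num = den = 0.0
    for _ in range(runs):
        y = np.zeros(1)
        for _ in range(int(tf / dt)):
            v = rng.standard_normal(1)          # unit-variance input sample
            dW = np.sqrt(dt) * rng.standard_normal(1)
            y = y + dt * (A @ y + B @ v) + H * y * dW
            num += (C @ y).item() ** 2 * dt     # accumulate output energy
            den += v.item() ** 2 * dt           # accumulate input energy
    return num / den

print("ratio without fluctuation:", empirical_randomness(0.0))
print("ratio with fluctuation   :", empirical_randomness(0.5))
```

The ratio with $H \neq 0$ comes out larger, in line with the extra $H^T P H$ term in Equation (24).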

3. The System Entropy Measurement of LSPDSs via a Semi-Discretization Finite Difference Scheme

Even though the entropy of the linear systems in Equations (12) and (22) can be easily measured by solving the optimization problems in Equations (16) and (26), respectively, using the LMI toolbox in MATLAB, it is still not easy to solve the HJII-constrained optimization problems in Equations (11) and (21) for the system entropy of the LPDS in Equation (1) and the LSPDS in Equation (18), respectively. To simplify this system entropy problem, the main idea is to obtain a suitable spatial state space model to represent the LPDS. For this purpose, the finite difference method and the Kronecker product are used together in this study. The finite difference method is employed to approximate the partial differential term $\nabla^2 y(x,t)$ in Equation (1) in order to simplify the entropy measurement procedure [14,16].
Consider a typical mesh grid as shown in Figure 1. The state variable $y(x,t)$ is represented by $y_{k,l}(t) \in \mathbb{R}^n$ at the grid node $x_{k,l} = (x_1 = k \Delta x,\ x_2 = l \Delta x)$, where $k = 1, \ldots, N_1$ and $l = 1, \ldots, N_2$, i.e., $y(x,t)|_{x = x_{k,l}} = y_{k,l}(t)$ at the grid point $x_{k,l}$, and the finite difference approximation scheme for the partial differential operator can be written as follows [14,16]:
$$\kappa \nabla^2 y(x,t) \approx \kappa \frac{y_{k+1,l}(t) + y_{k-1,l}(t) - 2 y_{k,l}(t)}{\Delta x^2} + \kappa \frac{y_{k,l+1}(t) + y_{k,l-1}(t) - 2 y_{k,l}(t)}{\Delta x^2}. \tag{28}$$
Based on the finite difference approximation in Equation (28), the LPDS in Equation (1) can be represented by the following finite difference system:
$$\frac{d}{dt} y_{k,l}(t) \approx \kappa \frac{1}{\Delta x^2} \left[ y_{k+1,l}(t) + y_{k-1,l}(t) + y_{k,l+1}(t) + y_{k,l-1}(t) - 4 y_{k,l}(t) \right] + A y_{k,l}(t) + B v_{k,l}(t), \quad k = 1, \ldots, N_1,\ l = 1, \ldots, N_2, \tag{29}$$
where $y_{k,l}(t) = y(x,t)|_{x = x_{k,l}}$ and $v_{k,l}(t) = v(x,t)|_{x = x_{k,l}}$.
Let us denote:
$$T_{k,l}\, y_{k,l}(t) = \frac{1}{\Delta x^2} \left[ y_{k+1,l}(t) + y_{k-1,l}(t) + y_{k,l+1}(t) + y_{k,l-1}(t) - 4 y_{k,l}(t) \right]; \tag{30}$$
then we get:
$$\frac{d}{dt} y_{k,l}(t) \approx \kappa T_{k,l}\, y_{k,l}(t) + A y_{k,l}(t) + B v_{k,l}(t), \qquad z_{k,l}(t) = C y_{k,l}(t). \tag{31}$$
To simplify the entropy measurement for the LPDS in Equation (1), we will define a spatial state vector collecting $y_{k,l}(t) \in \mathbb{R}^n$ at all grid nodes in Figure 1. For the Dirichlet boundary conditions [16], the values of $y_{k,l}(t)$ at the boundary are fixed. For example, if $y(x,t) = 0$ for $x \in \partial U$, we have $y_{k,l}(t) = 0$ at $k = 0, N_1 + 1$ or $l = 0, N_2 + 1$. Therefore, the spatial state vector $y(t) \in \mathbb{R}^{nN}$ of the state variables at all grid nodes is defined as follows:
$$y(t) = \left[ y_{1,1}^T(t), \ldots, y_{k,1}^T(t), \ldots, y_{N_1,1}^T(t), \ldots, y_{k,l}^T(t), \ldots, y_{1,N_2}^T(t), \ldots, y_{k,N_2}^T(t), \ldots, y_{N_1,N_2}^T(t) \right]^T, \tag{32}$$
where $N := N_1 \times N_2$. Note that $n$ is the dimension of the vector $y_{k,l}(t)$ at each grid node and $N_1 \times N_2$ is the number of grid nodes. For example, with $N_1 = 2$ and $N_2 = 2$, we have $y(t) = [y_{1,1}^T(t), y_{2,1}^T(t), y_{1,2}^T(t), y_{2,2}^T(t)]^T \in \mathbb{R}^{4n}$. To simplify the indexing of the node $y_{k,l}(t) \in \mathbb{R}^n$ in the spatial state vector $y(t) \in \mathbb{R}^{nN}$, we will use the symbol $y_j(t) \in \mathbb{R}^n$ in place of $y_{k,l}(t)$, where the index $j$ runs from 1 to $N$, i.e.:
$$y_1(t) := y_{1,1}(t),\quad y_2(t) := y_{2,1}(t),\quad \ldots,\quad y_j(t) := y_{k,l}(t),\quad \ldots,\quad y_N(t) := y_{N_1,N_2}(t),$$
where $j = (l-1) N_1 + k$ in Equation (32). Thus, the linear difference model with two indices in Equation (31) can be represented with only one index as follows:
$$\frac{d}{dt} y_j(t) = \kappa T_j\, y(t) + A y_j(t) + B v_j(t), \quad j = 1, 2, \ldots, N, \qquad z_j(t) = C y_j(t), \tag{33}$$
where $v_j(t) = v_{k,l}(t)$ with $j = (l-1) N_1 + k$, and $T_j$ is the block row operator defined as follows:
$$T_j\, y(t) = \frac{1}{\Delta x^2} \left[ O_n\ \cdots\ O_n\ \underset{j - N_1}{I_n}\ O_n\ \cdots\ O_n\ \underset{j-1}{I_n}\ \underset{j}{-4 I_n}\ \underset{j+1}{I_n}\ O_n\ \cdots\ O_n\ \underset{j + N_1}{I_n}\ O_n\ \cdots\ O_n \right] y(t), \tag{34}$$
where the under-labels indicate the block-column positions $j - N_1$, $j - 1$, $j$, $j + 1$, and $j + N_1$ among the $N_1 N_2$ block columns, and $O_n$ and $I_n$ denote the $n \times n$ zero matrix and identity matrix, respectively.
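As a quick sanity check of the single-index mapping (the grid sizes below are arbitrary illustrative values):

```python
# Verify the mapping j = (l - 1) * N1 + k between the two-index and
# single-index labels of the grid nodes (1-based, as in the text).
N1, N2 = 3, 2
for l in range(1, N2 + 1):
    for k in range(1, N1 + 1):
        print(f"(k={k}, l={l}) -> j={(l - 1) * N1 + k}")
```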
We will collect all states $y_j(t)$ of the grid nodes given in Equation (33) into the spatial state vector given in Equation (32). The Kronecker product can be used to simplify the representation. Using the Kronecker product, the systems at all grid nodes given in Equation (33) can be represented by the following spatial state space system (i.e., the linear dynamic systems of Equation (33) at all grid points within the domain $U$ in Figure 1 are represented by one spatial state space system [14]):
$$\frac{dy(t)}{dt} = \left\{ [I_N \otimes \kappa] T + [I_N \otimes A] \right\} y(t) + [I_N \otimes B] v(t), \qquad z(t) = [I_N \otimes C] y(t), \tag{35}$$
where $T = [T_1^T \cdots T_N^T]^T \in \mathbb{R}^{nN \times nN}$, $v(t) = [v_1^T(t) \cdots v_N^T(t)]^T \in \mathbb{R}^{lN}$, and $I_N \otimes \kappa$ denotes the Kronecker product of $I_N$ and $\kappa$.
Definition 1 [17,36]. Let $M \in \mathbb{R}^{a \times b}$ and $N \in \mathbb{R}^{c \times d}$. Then the Kronecker product of $M$ and $N$ is defined as the following matrix:
$$M \otimes N = \begin{bmatrix} m_{11} N & \cdots & m_{1b} N \\ \vdots & \ddots & \vdots \\ m_{a1} N & \cdots & m_{ab} N \end{bmatrix} \in \mathbb{R}^{ac \times bd}.$$
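A minimal sketch of assembling the spatial state space matrices of Equation (35) with numpy; the grid sizes and the matrices $\kappa$, $A$, $B$, $C$ are illustrative assumptions (note that $[I_N \otimes \kappa] T = L \otimes \kappa$ when $T = L \otimes I_n$ for the scalar stencil matrix $L$):

```python
# Sketch: build A_bar = [I_N (x) kappa] T + [I_N (x) A], B_bar, C_bar of Eq. (35).
import numpy as np

def laplacian_2d(N1, N2, dx):
    # N x N five-point stencil matrix L (Dirichlet boundary), N = N1 * N2,
    # mirroring Equation (28); zero-based j = l * N1 + k matches j = (l-1)N1 + k.
    N = N1 * N2
    L = -4.0 * np.eye(N)
    for j in range(N):
        k, l = j % N1, j // N1
        for dk, dl in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            kk, ll = k + dk, l + dl
            if 0 <= kk < N1 and 0 <= ll < N2:
                L[j, ll * N1 + kk] = 1.0
    return L / dx**2

N1, N2, dx = 3, 3, 0.125                     # illustrative grid
kappa = np.array([[1e-4]]); A = np.array([[-0.1]])
B = np.array([[0.1]]);      C = np.array([[1.0]])
L = laplacian_2d(N1, N2, dx)
A_bar = np.kron(L, kappa) + np.kron(np.eye(N1 * N2), A)
B_bar = np.kron(np.eye(N1 * N2), B)
C_bar = np.kron(np.eye(N1 * N2), C)
print(A_bar.shape, B_bar.shape, C_bar.shape)  # (9, 9) (9, 9) (9, 9)
```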
Remark 7. Since the spatial state vector $y(t)$ in Equation (32) represents $y(x,t)$ at all grid points, the terms $E\{\int_U \int_0^{t_f} z^T(x,t) z(x,t)\, dt\, dx\}$, $E\{\int_U V(y_0(x))\, dx\}$, and $E\{\int_U \int_0^{t_f} v^T(x,t) v(x,t)\, dt\, dx\}$ in the system randomness measures in Equations (5) or (9) can be replaced by the temporal forms $E\{\int_0^{t_f} z^T(t) z(t) \Delta x^2\, dt\}$, $E\{V(y(0)) \Delta x^2\}$, and $E\{\int_0^{t_f} v^T(t) v(t) \Delta x^2\, dt\}$, respectively, for the spatial state space system in Equation (35), where the Lyapunov function $V(y(t))$ is related to the Lyapunov function $V(y(x,t))$ by $V(y(t)) = \sum_{j=1}^{N} V(y_j(t))$. Therefore, for the spatial state space system in Equation (35), the system randomness in Equations (5) or (9) is modified as follows:
$$S_0 = \frac{E\left\{ \int_0^{t_f} z^T(t)\, z(t)\, dt \right\}}{E\left\{ \int_0^{t_f} v^T(t)\, v(t)\, dt \right\}} \le \bar{S}_0 \tag{36}$$
or:
$$S_0 = \frac{E\left\{ \int_0^{t_f} z^T(t)\, z(t)\, dt \right\}}{E\left\{ V(y(0))/\bar{S}_0 + \int_0^{t_f} v^T(t)\, v(t)\, dt \right\}} \le \bar{S}_0, \quad \text{for } y(0) \neq 0. \tag{37}$$
Hence, our entropy measurement problem for the LPDS in Equation (1) becomes the measurement of the entropy of the spatial state space system in Equation (35), as given below.
Proposition 3. For the linear spatial state space system in Equation (35), if the following Riccati-like inequality holds for a positive definite matrix $\bar{P} > 0$:
$$\bar{P} \bar{A} + \bar{A}^T \bar{P} + \bar{C}^T \bar{C} + \frac{1}{\bar{S}_0} \bar{P} \bar{B} \bar{B}^T \bar{P} < 0 \tag{38}$$
or equivalently:
$$\begin{bmatrix} \bar{P} \bar{A} + \bar{A}^T \bar{P} + \bar{C}^T \bar{C} & \bar{P} \bar{B} \\ \bar{B}^T \bar{P} & -\bar{S}_0 I \end{bmatrix} < 0, \tag{39}$$
where $\bar{A} = [I_N \otimes \kappa] T + [I_N \otimes A]$, $\bar{B} = [I_N \otimes B]$, and $\bar{C} = [I_N \otimes C]$, then the system randomness $S_0$ in Equations (36) or (37) of the linear spatial state space system in Equation (35) has the upper bound $\bar{S}_0$.
Proof. The proof is similar to the proof of Corollary 1 in Appendix B and can be obtained by replacing $A$, $B$, $C$, and $P$ with $\bar{A}$, $\bar{B}$, $\bar{C}$, and $\bar{P}$, respectively.      ☐
Therefore, the randomness $S_0$ of the linear spatial state space system in Equation (35) can be obtained by solving the following LMI-constrained optimization problem:
$$S_0 = \min_{\bar{P} > 0} \bar{S}_0 \quad \text{subject to the LMI in Equation (39)}. \tag{40}$$
Hence, the system entropy $S$ of the linear spatial state space system in Equation (33) can be calculated using $S = \log S_0$.
Remark 8. (i) The Riccati-like inequality in Equation (38) or the LMI in Equation (39) is an approximation of the HJII in Equation (10) under the finite difference scheme given in Equation (28). As the finite difference grid spacing $\Delta x \to 0$ in Equation (28), $S_0$ in Equation (40) will approach $S_0$ in Equation (11). (ii) Substituting $S_0$ into Equation (38), we get:
$$\bar{C}^T \bar{C} + \frac{1}{S_0} \bar{P} \bar{B} \bar{B}^T \bar{P} < -(\bar{P} \bar{A} + \bar{A}^T \bar{P}). \tag{41}$$
If the eigenvalues of $\bar{A}$ are more negative (more stable), the randomness $S_0$, and hence the entropy $S$, is smaller. Similarly, the LSPDS in Equation (18) can be approximated by the following stochastic spatial state space system via the finite difference scheme [14]:
$$dy(t) = \left\{ [I_N \otimes \kappa] T + [I_N \otimes A] \right\} y(t)\, dt + [I_N \otimes B] v(t)\, dt + [I_N \otimes H] y(t) \circ dW(t), \qquad z(t) = [I_N \otimes C] y(t), \tag{42}$$
where $dW = [dW_1(t), \ldots, dW_N(t)]^T \in \mathbb{R}^{nN}$, and the Hadamard product of matrices (or vectors) $X = [X_{ij}] \in \mathbb{R}^{m \times n}$ and $Y = [Y_{ij}] \in \mathbb{R}^{m \times n}$ of the same size is the entry-wise product $X \circ Y = [X_{ij} Y_{ij}] \in \mathbb{R}^{m \times n}$.
Then we can get the following result.
Corollary 3. For the linear stochastic spatial state space system in Equation (42), if the following Riccati-like inequality holds for a positive definite symmetric matrix $\bar{P} > 0$:
$$\bar{P} \bar{A} + \bar{A}^T \bar{P} + \bar{C}^T \bar{C} + \bar{H}^T \bar{P} \bar{H} + \frac{1}{\bar{S}_0} \bar{P} \bar{B} \bar{B}^T \bar{P} < 0, \tag{43}$$
or equivalently, the following LMI has a positive definite symmetric solution $\bar{P} > 0$:
$$\begin{bmatrix} \bar{P} \bar{A} + \bar{A}^T \bar{P} + \bar{C}^T \bar{C} + \bar{H}^T \bar{P} \bar{H} & \bar{P} \bar{B} \\ \bar{B}^T \bar{P} & -\bar{S}_0 I \end{bmatrix} < 0, \tag{44}$$
then the system randomness $S_0$ of the stochastic spatial state space system in Equation (42) has an upper bound $\bar{S}_0$, where $\bar{H} = [I_N \otimes H]$.
Proof. The proof is similar to the proof of Corollary 2 in Appendix D.               ☐
Therefore, the system randomness $S_0$ of the linear stochastic spatial state space system in Equation (42) can be obtained by solving the following LMI-constrained optimization problem:
$$S_0 = \min_{\bar{P} > 0} \bar{S}_0 \quad \text{subject to the LMI in Equation (44)}, \tag{45}$$
and hence the system entropy $S$ of the stochastic spatial state space system in Equation (42) can be obtained using $S = \log S_0$. Substituting $S_0$ into Equation (43), we get:
$$\bar{C}^T \bar{C} + \bar{H}^T \bar{P} \bar{H} + \frac{1}{S_0} \bar{P} \bar{B} \bar{B}^T \bar{P} < -(\bar{P} \bar{A} + \bar{A}^T \bar{P}). \tag{46}$$
Remark 9. Comparing Equation (41) with Equation (46): because of the term $\bar{H}^T \bar{P} \bar{H}$ from the intrinsic random fluctuation, the LSPDS with random fluctuation leads to a larger $S_0$ and a larger system entropy $S$.         ☐

4. System Entropy Measurement of NSPDSs

Most partial differential dynamic systems are nonlinear; hence, the measurement of the system entropy of nonlinear partial differential systems (NPDSs) is discussed in this section. Consider the following NPDS on the domain $U$:
$$\frac{\partial y(x,t)}{\partial t} = \kappa(y(x,t)) \nabla^2 y(x,t) + f(y(x,t)) + g(y(x,t)) v(x,t), \qquad z(x,t) = C(y(x,t)) y(x,t), \tag{47}$$
where $f(y(x,t)) \in \mathbb{R}^n$, $C(y(x,t)) \in \mathbb{R}^{m \times n}$, and $g(y(x,t)) \in \mathbb{R}^{n \times l}$ are nonlinear functions with $f(0) = 0$, $C(0) = 0$, and $g(0) = 0$, respectively. The nonlinear diffusion function $\kappa(y(x,t)) \in \mathbb{R}^{n \times n}$ satisfies $\kappa(y(x,t)) \ge 0$ and $\kappa(0) = 0$. If the equilibrium point of interest is not at the origin, for the convenience of analysis, the origin of the NPDS must be shifted to the equilibrium point (shifted to zero). The initial and boundary conditions are the same as for the LPDS in Equation (1); then, we get the following result.
Proposition 4. For the NPDS in Equation (47), if the following HJII holds for a Lyapunov function $V(y(x,t)) > 0$ with $V(0) = 0$:
$$E\left\{ \int_U \int_0^{t_f} \left[ z^T(x,t) z(x,t) + \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T \left( \kappa(y(x,t)) \nabla^2 y(x,t) + f(y(x,t)) \right) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T g(y(x,t)) g^T(y(x,t)) \left( \frac{\partial V(y(x,t))}{\partial y} \right) \right] dt\, dx \right\} < 0, \tag{48}$$
then the system randomness $S_0$ of the NPDS in Equation (47) has an upper bound $\bar{S}_0$ as given in Equation (5).
Proof. See Appendix E.                    ☐
Based on the upper bound condition in Equation (48), the system randomness $S_0$ can be obtained by solving the following HJII-constrained optimization problem:
$$S_0 = \min_{V(y(x,t)) > 0} \bar{S}_0 \quad \text{subject to the HJII in Equation (48)}. \tag{49}$$
Hence, the system entropy of the NPDS in Equation (47) can be obtained using $S = \log S_0$. If the NPDS in Equation (47) is free of the diffusion term $\kappa(y(x,t)) \nabla^2 y(x,t)$, as in the following conventional nonlinear dynamic system:
$$\frac{dy(t)}{dt} = f(y(t)) + g(y(t)) v(t), \qquad z(t) = C(y(t)) y(t), \tag{50}$$
then the result of Proposition 4 is reduced to the following corollary.
Corollary 4. For the nonlinear dynamic system in Equation (50), if the following HJII holds for a positive Lyapunov function $V(y(t)) > 0$ with $V(0) = 0$:
$$E\left\{ \int_0^{t_f} \left[ z^T(t) z(t) + \left( \frac{\partial V(y(t))}{\partial y} \right)^T f(y(t)) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(t))}{\partial y} \right)^T g(y(t)) g^T(y(t)) \left( \frac{\partial V(y(t))}{\partial y} \right) \right] dt \right\} < 0, \tag{51}$$
then the system randomness $S_0$ of the nonlinear dynamic system in Equation (50) has an upper bound $\bar{S}_0$.
Proof. The proof is similar to that of Proposition 4, without consideration of the diffusion term $\nabla^2 y(x,t)$ and the spatial integration over the domain $U$.
Hence, the system randomness of the nonlinear dynamic system in Equation (50) can be obtained by solving the following HJII-constrained optimization problem:
$$S_0 = \min_{V(y(t)) > 0} \bar{S}_0 \quad \text{subject to the HJII in Equation (51)}, \tag{52}$$
and the system entropy is obtained using $S = \log S_0$. If the NPDS in Equation (47) suffers from random intrinsic fluctuation, as in the following NSPDS:
$$\frac{\partial y(x,t)}{\partial t} = \kappa(y(x,t)) \nabla^2 y(x,t) + f(y(x,t)) + g(y(x,t)) v(x,t) + H(y(x,t)) y(x,t) w(x,t), \qquad z(x,t) = C(y(x,t)) y(x,t), \tag{53}$$
where $H(y(x,t)) y(x,t) w(x,t)$ denotes the random intrinsic fluctuation, then the NSPDS in Equation (53) can be written in the following Itô form:
$$dy(x,t) = \left( \kappa(y(x,t)) \nabla^2 y(x,t) + f(y(x,t)) + g(y(x,t)) v(x,t) \right) dt + H(y(x,t)) y(x,t)\, dW(x,t), \qquad z(x,t) = C(y(x,t)) y(x,t). \tag{54}$$
Therefore, we can get the following result:
Proposition 5. For the NSPDS in Equations (53) or (54), if the following HJII holds for a Lyapunov function $V(y(x,t)) > 0$ with $V(0) = 0$:
$$E\left\{ \int_U \int_0^{t_f} \left[ z^T(x,t) z(x,t) + \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T \left( \kappa(y(x,t)) \nabla^2 y(x,t) + f(y(x,t)) \right) + \frac{1}{2} y^T(x,t) H^T(y(x,t)) \left( \frac{\partial^2 V(y(x,t))}{\partial y^2} \right)^T H(y(x,t)) y(x,t) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T g(y(x,t)) g^T(y(x,t)) \left( \frac{\partial V(y(x,t))}{\partial y} \right) \right] dt\, dx \right\} < 0, \tag{55}$$
then the system randomness $S_0$ of the NSPDS in Equations (53) or (54) can be obtained by solving the following HJII-constrained optimization problem:
$$S_0 = \min_{V(y(x,t)) > 0} \bar{S}_0 \quad \text{subject to the HJII in Equation (55)}. \tag{56}$$
Proof. See Appendix F.                           ☐
Remark 10. Comparing the HJII in Equation (48) with the HJII in Equation (55): due to the extra term $\frac{1}{2} y^T(x,t) H^T(y(x,t)) \left( \partial^2 V(y(x,t)) / \partial y^2 \right)^T H(y(x,t)) y(x,t)$ from the random intrinsic fluctuation $H(y(x,t)) y(x,t) w(x,t)$ in Equation (53), the system randomness of the NSPDS in Equation (53) must be larger than the system randomness of the NPDS in Equation (47). Hence, the system entropy of the NSPDS in Equation (53) is larger than that of the NPDS in Equation (47).

5. System Entropy Measurement of NSPDS via Global Linearization and Semi-Discretization Finite Difference Scheme

In general, it is very difficult to solve the HJII in Equations (48) or (55) for the system entropy measurement of the NPDS in Equation (47) or the NSPDS in Equation (53), respectively. In this study, the global linearization technique and a finite difference scheme were employed to simplify the entropy measurement of the NPDS in Equation (47) and the NSPDS in Equation (53). Consider the following global linearization of the NPDS in Equation (47), which is bounded by a polytope consisting of $L$ vertices [12,37]:
$$\begin{pmatrix} \partial \kappa(y(x,t)) / \partial y \\ \partial f(y(x,t)) / \partial y \\ \partial g(y(x,t)) / \partial y \\ \partial C(y(x,t)) / \partial y \end{pmatrix} \in C_0 \left( \begin{bmatrix} \kappa_1 \\ A_1 \\ B_1 \\ C_1 \end{bmatrix}, \ldots, \begin{bmatrix} \kappa_i \\ A_i \\ B_i \\ C_i \end{bmatrix}, \ldots, \begin{bmatrix} \kappa_L \\ A_L \\ B_L \\ C_L \end{bmatrix} \right), \quad \forall y(x,t), \tag{57}$$
where $C_0$ denotes the convex hull of the polytope with $L$ vertices defined in Equation (57). Then, the trajectories of $y(x,t)$ for the NPDS in Equation (47) will belong to the convex combination of the state trajectories of the following $L$ linearized PDSs derived from the vertices of the polytope in Equation (57):
$$\frac{\partial y(x,t)}{\partial t} = \kappa_i \nabla^2 y(x,t) + A_i y(x,t) + B_i v(x,t), \quad i = 1, 2, \ldots, L, \qquad z(x,t) = C_i y(x,t). \tag{58}$$
From the global linearization theory [16,37], if Equation (57) holds, then every trajectory of the NPDS in Equation (47) is a trajectory of a convex combination of the $L$ linearized PDSs in Equation (58), and can be represented by the convex combination of the $L$ linearized PDSs in Equation (58) as follows:
$$\frac{\partial y(x,t)}{\partial t} = \sum_{i=1}^{L} \alpha_i(y) \left[ \kappa_i \nabla^2 y(x,t) + A_i y(x,t) + B_i v(x,t) \right], \qquad z(x,t) = \sum_{i=1}^{L} \alpha_i(y) C_i y(x,t), \tag{59}$$
where the interpolation functions are selected as $\alpha_i(y) = \left( 1 / \| y_i - y \|_2^2 \right) / \left( \sum_{i=1}^{L} 1 / \| y_i - y \|_2^2 \right)$ and satisfy $0 \le \alpha_i(y) \le 1$ and $\sum_{i=1}^{L} \alpha_i(y) = 1$. That is, the trajectory of the NPDS in Equation (47) can be approximated by the trajectory of the interpolated local LPDSs given in Equation (59).
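A small sketch of these interpolation weights, assuming numpy; the vertex states $y_i$ below are illustrative, and the small epsilon (an assumption, not in the paper) guards against division by zero when $y$ coincides with a vertex:

```python
# Sketch: interpolation functions alpha_i(y) of the global linearization in Eq. (59):
# alpha_i(y) = (1/||y_i - y||^2) / sum_j(1/||y_j - y||^2), in [0, 1], summing to 1.
import numpy as np

def interpolation_weights(y, vertices, eps=1e-12):
    inv = np.array([1.0 / (np.linalg.norm(yi - y) ** 2 + eps) for yi in vertices])
    return inv / inv.sum()

vertices = [np.array([0.0]), np.array([0.5]), np.array([1.0])]  # illustrative y_i
print(interpolation_weights(np.array([0.3]), vertices))         # closest vertex dominates
```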
Following the semi-discretization finite difference scheme in Equations (28)-(34), the spatial state space system of the interpolated PDS in Equation (59) can be represented as follows:
$$\frac{dy(t)}{dt} = \sum_{i=1}^{L} \alpha_i(y) \left\{ \left( [I_N \otimes \kappa_i] T + [I_N \otimes A_i] \right) y(t) + [I_N \otimes B_i] v(t) \right\}, \qquad z(t) = \sum_{i=1}^{L} \alpha_i(y) [I_N \otimes C_i] y(t), \tag{60}$$
where $y(t)$ and $v(t)$ are defined in Equation (35). That is, the NPDS in Equation (47) is approximated by interpolating the local linearized PDSs in Equation (59), using the global linearization technique and the semi-discretization finite difference scheme.
Remark 11. In fact, there are many interpolation schemes for approximating a nonlinear dynamic system by several local linear dynamic systems as in Equation (60), for example, fuzzy interpolation and cubic spline interpolation methods [13]. Then, we get the following result.                   ☐
Proposition 6. For the interpolated linear dynamic systems in Equation (60), if the following Riccati-like inequalities hold for a positive definite symmetric matrix $\bar{P} > 0$:
$$\bar{P} \bar{A}_i + \bar{A}_i^T \bar{P} + \bar{C}_i^T \bar{C}_i + \frac{1}{\bar{S}_0} \bar{P} \bar{B}_i \bar{B}_i^T \bar{P} < 0, \quad i = 1, \ldots, L, \tag{61}$$
or equivalently:
$$\begin{bmatrix} \bar{P} \bar{A}_i + \bar{A}_i^T \bar{P} + \bar{C}_i^T \bar{C}_i & \bar{P} \bar{B}_i \\ \bar{B}_i^T \bar{P} & -\bar{S}_0 I \end{bmatrix} < 0, \quad i = 1, \ldots, L, \tag{62}$$
where $\bar{A}_i = [I_N \otimes \kappa_i] T + [I_N \otimes A_i]$, $\bar{B}_i = [I_N \otimes B_i]$, and $\bar{C}_i = [I_N \otimes C_i]$, then the system randomness $S_0$ of the NPDS in Equation (47) or the interpolated dynamic systems in Equation (60) has an upper bound $\bar{S}_0$.
Proof. See Appendix G.          ☐
Therefore, the system randomness $S_0$ of the NPDS in Equation (47) or the interpolated dynamic systems in Equation (60) can be obtained by solving the following LMIs-constrained optimization problem:
$$S_0 = \min_{\bar{P} > 0} \bar{S}_0 \quad \text{subject to the LMIs in Equation (62)}. \tag{63}$$
Hence, the system entropy $S$ of the NPDS in Equation (47) or the interpolated dynamic systems in Equation (60) can be obtained using $S = \log S_0$. By substituting $S_0$ into the Riccati-like inequalities in Equation (61), we obtain:
$$\bar{C}_i^T \bar{C}_i + \frac{1}{S_0} \bar{P} \bar{B}_i \bar{B}_i^T \bar{P} < -(\bar{P} \bar{A}_i + \bar{A}_i^T \bar{P}). \tag{64}$$
Obviously, if the eigenvalues of the local system matrices $\bar{A}_i$ are more negative (more stable), the randomness $S_0$ is smaller and the corresponding system entropy $S$ is also smaller, and vice versa.
The NSPDS given in Equation (54) can be approximated using the following global linearization technique [12,37]:
$$\begin{pmatrix} \partial \kappa(y(x,t)) / \partial y \\ \partial f(y(x,t)) / \partial y \\ \partial g(y(x,t)) / \partial y \\ \partial H(y(x,t)) / \partial y \\ \partial C(y(x,t)) / \partial y \end{pmatrix} \in C_0 \left( \begin{bmatrix} \kappa_1 \\ A_1 \\ B_1 \\ H_1 \\ C_1 \end{bmatrix}, \ldots, \begin{bmatrix} \kappa_i \\ A_i \\ B_i \\ H_i \\ C_i \end{bmatrix}, \ldots, \begin{bmatrix} \kappa_L \\ A_L \\ B_L \\ H_L \\ C_L \end{bmatrix} \right), \quad \forall y(x,t). \tag{65}$$
Then, the NSPDS with the random intrinsic fluctuation given in Equation (53) can be approximated by the following interpolated stochastic spatial state space system [14]:
$$dy(t) = \sum_{i=1}^{L} \alpha_i(y) \left\{ \left( [I_N \otimes \kappa_i] T + [I_N \otimes A_i] \right) y(t)\, dt + [I_N \otimes B_i] v(t)\, dt + [I_N \otimes H_i] y(t) \circ dW(t) \right\}, \qquad z(t) = \sum_{i=1}^{L} \alpha_i(y) [I_N \otimes C_i] y(t), \tag{66}$$
i.e., we interpolate $L$ local stochastic spatial state space systems to approximate the NSPDS in Equation (53). Then, we get the following result.
Proposition 7. For the NSPDS in Equation (54) or the interpolated stochastic spatial state space systems in Equation (66), if the following Riccati-like inequalities hold for a positive definite symmetric matrix $\bar{P} > 0$:
$$\bar{P} \bar{A}_i + \bar{A}_i^T \bar{P} + \bar{C}_i^T \bar{C}_i + \bar{H}_i^T \bar{P} \bar{H}_i + \frac{1}{\bar{S}_0} \bar{P} \bar{B}_i \bar{B}_i^T \bar{P} < 0, \quad i = 1, \ldots, L, \tag{67}$$
or equivalently:
$$\begin{bmatrix} \bar{P} \bar{A}_i + \bar{A}_i^T \bar{P} + \bar{C}_i^T \bar{C}_i + \bar{H}_i^T \bar{P} \bar{H}_i & \bar{P} \bar{B}_i \\ \bar{B}_i^T \bar{P} & -\bar{S}_0 I \end{bmatrix} < 0, \quad i = 1, \ldots, L, \tag{68}$$
where $\bar{H}_i = [I_N \otimes H_i]$, then the system randomness $S_0$ of the NSPDS in Equation (53) or the interpolated stochastic systems in Equation (66) can be obtained by solving the following LMIs-constrained optimization problem:
$$S_0 = \min_{\bar{P} > 0} \bar{S}_0 \quad \text{subject to the LMIs in Equation (68)}. \tag{69}$$
Then, the system entropy $S$ of the NSPDS in Equation (53) or the interpolated stochastic systems in Equation (66) can be obtained as $S = \log S_0$.
Proof. See Appendix H.                   ☐
Substituting $S_0$ into Equation (67), we get:
$$\bar{C}_i^T \bar{C}_i + \bar{H}_i^T \bar{P} \bar{H}_i + \frac{1}{S_0} \bar{P} \bar{B}_i \bar{B}_i^T \bar{P} < -(\bar{P} \bar{A}_i + \bar{A}_i^T \bar{P}). \tag{70}$$
Comparing Equation (64) with Equation (70), the $S_0$ of the NSPDS in Equation (53) is larger than the $S_0$ of the NPDS in Equation (47); i.e., the random intrinsic fluctuation $H(y(x,t)) y(x,t) w(x,t)$ increases the system entropy of the NSPDS. Based on the above analysis, the proposed system entropy measurement procedure for NSPDSs is given as follows:
Step 1: Specify the initial value of the state variable, the number of finite difference grid nodes, the vertices of the global linearization, and the boundary conditions.
Step 2: Construct the spatial state space system in Equation (60) by the finite difference scheme.
Step 3: Construct the interpolated state space system in Equation (66) by the global linearization method.
Step 4: If the error between the original model in Equation (54) and the approximated model in Equation (66) is too large, adjust the density of grid nodes of the finite difference scheme and the number of vertices of the global linearization technique, and return to Step 1.
Step 5: Solve the LMIs-constrained optimization problem in Equation (69) to obtain $\bar{P}$ and the minimal $\bar{S}_0$ (i.e., $S_0$); the system entropy is then $S = \log S_0$ (a sketch of this step is given below).
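The following is a minimal sketch of Step 5, again assuming cvxpy in place of the MATLAB LMI toolbox; the vertex matrices $\bar{A}_i$, $\bar{B}_i$, $\bar{C}_i$, $\bar{H}_i$ are taken as given inputs (e.g., assembled as in the earlier Kronecker-product sketch), and the tolerances and the one-vertex demo values are assumptions:

```python
# Sketch: solve the LMIs-constrained optimization in Equation (69) by bisection
# on S0_bar, with one LMI block per vertex system and a common P_bar.
import numpy as np
import cvxpy as cp

def nspds_randomness(A_bars, B_bars, C_bars, H_bars, tol=1e-6, s_hi=1e4, eps=1e-9):
    n = A_bars[0].shape[0]
    def feasible(s_bar):
        P = cp.Variable((n, n), symmetric=True)
        cons = [P >> eps * np.eye(n)]
        for Ai, Bi, Ci, Hi in zip(A_bars, B_bars, C_bars, H_bars):
            m = Bi.shape[1]
            blk = cp.bmat([[P @ Ai + Ai.T @ P + Ci.T @ Ci + Hi.T @ P @ Hi, P @ Bi],
                           [Bi.T @ P, -s_bar * np.eye(m)]])   # LMI (68), vertex i
            blk = 0.5 * (blk + blk.T)   # structural symmetry for the PSD constraint
            cons.append(blk << -eps * np.eye(n + m))
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve(solver=cp.SCS)
        return prob.status in ("optimal", "optimal_inaccurate")
    lo, hi = 0.0, s_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi            # minimal S0_bar = S0; entropy S = log(S0)

# Illustrative one-vertex demo (L = 1, scalar system):
A_bars = [np.array([[-2.0]])]; B_bars = [np.array([[1.0]])]
C_bars = [np.array([[1.0]])];  H_bars = [np.array([[0.3]])]
print("S0 =", nspds_randomness(A_bars, B_bars, C_bars, H_bars))
```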

6. Computational Example

Based on the aforementioned analyses for the system entropy of the considered PDSs, two computational examples are given below for measuring the system entropy.
Example 1. Consider a heat transfer system in a 1 m × 0.5 m thin plate with a surrounding temperature of 0 °C, as follows [38]:
$$\frac{\partial y(x,t)}{\partial t} = \kappa \nabla^2 y(x,t) + A y(x,t) + B v(x,t), \qquad z(x,t) = C y(x,t), \tag{71}$$
with initial condition $y(x,0) = 20 \times e^{-(10 \times |0.5 - x_1| - 0.6738)} \times e^{-(30 \times |0.5 - 2 x_2|)}$ and boundary condition $y(x,t) = 0$ °C, $\forall t$, $\forall x$ on the boundary of $U = [0,1] \times [0, 0.5]$. Here, $y(x,t)$ is the temperature function, the location $x$ is in meters, the time $t$ is in seconds, $\kappa = 10^{-4}\ \mathrm{m^2/s}$ is the thermal diffusivity [4,5,6,7,9], and the term $A y(x,t)$ with $A = -0.1\ \mathrm{s^{-1}}$ denotes the thermal dissipation when the temperature of the plate is greater than the surrounding temperature, i.e., $y(x,t) > 0$ °C, or the thermal absorption when the temperature of the plate is less than the surrounding temperature, i.e., $y(x,t) < 0$ °C. The output coupling is $C = 1$, and $B v(x,t)$ is the environmental thermal fluctuation input with $B = 0.1$. We can estimate the system entropy of the heat transfer system in Equation (71). Based on Proposition 3 and the LMI-constrained optimization problem in Equation (40), the system entropy of the heat transfer system in Equation (71) is $S = \log S_0 = \log(0.0046) = -2.3372$. In this calculation of the system entropy, the grid spacing $\Delta x$ of the finite difference scheme was chosen as 0.125 m, such that there are $N = 7 \times 3 = 21$ interior grid points and 24 boundary points in $U$. The temperature distributions $y(x,t)$ of the heat transfer system in Equation (71) at $t = 1$, 10, 30, and 50 s are shown in Figure 2 with $v(x,t) = 30 \sin(t)$. Due to the diffusion term $\kappa \nabla^2 y(x,t)$, the temperature of the heat transfer system in Equation (71) will gradually become uniformly distributed. Even if the thin plate has an initial heat source or other influences such as the input signal and intrinsic random fluctuation, the temperature of the thin plate will gradually approach a uniform distribution, increasing the system entropy. This phenomenon can be seen in Figure 2, Figure 3, Figure 4 and Figure 5.
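A brief simulation sketch of Equation (71), assuming forward-Euler time stepping (the time step, the random placeholder initial profile, and the spatially uniform input are simplifying assumptions; the paper's exact initial profile and figures are not reproduced here):

```python
# Sketch: integrate the semi-discretized heat transfer system of Equation (71)
# on the 7 x 3 interior grid (dx = 0.125 m) with input v(x,t) = 30 sin(t).
import numpy as np

def lap1d(n, dx):
    # 1D Dirichlet finite-difference Laplacian
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / dx**2

N1, N2, dx = 7, 3, 0.125
kappa, A, B = 1e-4, -0.1, 0.1
L2 = np.kron(np.eye(N2), lap1d(N1, dx)) + np.kron(lap1d(N2, dx), np.eye(N1))
A_bar = kappa * L2 + A * np.eye(N1 * N2)   # semi-discretized system matrix
y = 20.0 * np.random.rand(N1 * N2)         # placeholder for the profile y(x,0)
dt, t = 0.01, 0.0
for _ in range(int(50.0 / dt)):            # integrate to t = 50 s
    v = 30.0 * np.sin(t) * np.ones(N1 * N2)
    y = y + dt * (A_bar @ y + B * v)
    t += dt
print("temperature range at t = 50 s:", y.min(), y.max())
```

As in Figure 2, the diffusion term drives the profile toward a spatially uniform distribution.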
Suppose that the heat transfer system in Equation (71) suffers from the following random intrinsic fluctuation:
$$\frac{\partial y(x,t)}{\partial t} = \kappa \nabla^2 y(x,t) + A y(x,t) + B v(x,t) + H y(x,t) w(x,t), \qquad z(x,t) = C y(x,t), \tag{72}$$
where the term $H y(x,t) w(x,t)$ with $H = 0.02$ is due to the random parameter variation of the term $A y(x,t)$. The temperature distributions $y(x,t)$ of the heat transfer system in Equation (72) at $t = 1$, 10, 30, and 50 s are shown in Figure 3. Based on Corollary 3 and the LMI-constrained optimization problem in Equation (45), the system entropy of the stochastic heat transfer system in Equation (72) is $S = \log S_0 = \log(0.0339) = -1.4698$. Obviously, the system entropy of the stochastic heat transfer system in Equation (72) is larger than that of the heat transfer system in Equation (71) without intrinsic random fluctuation.
Example 2. A biochemical enzyme system is used to describe the concentration distribution of the substrate in a biomembrane. For this enzyme system, the thickness of the artificial biomembrane is 1 μm. The concentration of the substrate is uniformly distributed inside the artificial biomembrane. Since the biomembrane is immersed in the substrate solution, the reference axis is chosen to be perpendicular to the biomembrane. The biochemical system can be formulated as follows [13]:
$$\frac{\partial y(x,t)}{\partial t} = \kappa(y(x,t)) \nabla^2 y(x,t) - \frac{V_M\, y(x,t)}{K_M + y(x,t) + y^2(x,t)/K_S} + g(y(x,t)) v(x,t), \qquad z(x,t) = C y(x,t), \tag{73}$$
where $y(x,t)$ is the concentration of the substrate in the biomembrane, $\kappa$ is the substrate diffusion coefficient, $V_M$ is the maximum activity in one unit of the biomembrane, $K_M$ is the Michaelis constant, and $K_S$ is the substrate inhibition constant. The parameters of the biochemical enzyme system are given by $\kappa(y(x,t)) = e^{y(x,t)}$, $V_M = 0.5$, $K_M = 1$, $K_S = 1$, and the output coupling $C = 1$. Note that the equilibrium point in Example 2 is at zero. The initial substrate concentration is given by $y_0(x) = 0.3 \sin(\pi x)$. The boundary conditions restrict the concentration to zero at $x = 0$ and $x = 1$, i.e., $y(0,t) = 0$ and $y(1,t) = 0$. A more detailed discussion of the enzyme can be found in [13]. Suppose that the biochemical enzyme system is under the effect of an external signal $v(x,t)$. For the convenience of computation, the external signal $v(t)$ is assumed to be zero-mean Gaussian noise with unit variance. The influence function of the external signal is defined as $g(y(x,t)) = 0.5\, y(x,t)$ at $x = 4/9$, $5/9$, and $6/9$ (μm). Based on the global linearization in Equation (57), we get $\bar{A}_1$-$\bar{A}_3$ and $\bar{B}_1$-$\bar{B}_3$, as shown in detail in Appendix I. The concentration distributions $y(x,t)$ of the real system and the approximated system are given in Figure 4 with $\Delta x = 0.125$, i.e., $y(t) = [y_1(t), y_2(t), \ldots, y_9(t)]$ corresponding to the grid values $y(0,t), y(0.125,t), y(0.25,t), y(0.375,t), y(0.5,t), y(0.625,t), y(0.75,t), y(0.875,t), y(1,t)$. Clearly, the approximated system based on the global linearization technique and the finite difference scheme efficiently approximates the nonlinear system. Based on Proposition 6 and the LMIs-constrained optimization given in Equation (63), we obtain $\bar{P}$, as shown in detail in Appendix J, and calculate the system entropy of the enzyme system in Equation (73) as $S = \log S_0 = \log(7.6990 \times 10^{-7}) = -6.1136$.
Therefore, it is clear that the approximated system in Equation (60) can efficiently approximate the biochemical enzyme system in Equation (73); in this simulation, $\Delta x = 0.125$. Suppose that the biochemical system in Equation (73) suffers from the following random intrinsic fluctuation:
$$\frac{\partial y(x,t)}{\partial t} = \kappa(y(x,t)) \nabla^2 y(x,t) - \frac{V_M\, y(x,t)}{K_M + y(x,t) + y^2(x,t)/K_S} + g(y(x,t)) v(x,t) + H(y(x,t)) y(x,t) w(x,t), \qquad z(x,t) = C y(x,t), \tag{74}$$
where the term $H(y(x,t)) y(x,t) w(x,t)$ with $H(y(x,t)) = y(x,t)$ is the random parameter variation from the term $V_M\, y(x,t) / (K_M + y(x,t) + y^2(x,t)/K_S)$. Based on the global linearization in Equation (65), we get $\bar{H}_1$-$\bar{H}_3$, as shown in detail in Appendix K. Based on Proposition 7 and the LMIs-constrained optimization given in Equation (69), we solve for $\bar{P}$, as shown in detail in Appendix L, and calculate the system entropy of the enzyme system in Equation (74) as $S = \log S_0 = \log(1.3177 \times 10^{-6}) = -5.8802$.
Clearly, because of the intrinsic random parameter fluctuation, the system entropy of the stochastic enzyme system given in Equation (74) is larger than that of the enzyme system given in Equation (73).
The computational complexity of the proposed LMI-based indirect entropy measurement method is about $O(r\, n(n+1)/2)$ for solving the LMIs, where $n$ is the dimension of $\bar{P}$ and $r$ is the number of global interpolation points. We also measured the elapsed time of the simulation examples using MATLAB. The computation times, including the drawing of the corresponding figures, for solving the LMI-constrained optimization problems are as follows: in Example 1, the heat transfer system in Equation (71) took 183.9 s and the heat transfer system with random fluctuation in Equation (72) took 184.6 s; in Example 2, the biochemical system in Equation (73) took 17.7 s and the biochemical system with random fluctuation in Equation (74) took 18.6 s. The computer used had 4.00 GB of RAM and an AMD A4-5000 CPU with Radeon(TM) HD Graphics at 1.50 GHz. These results are reasonable: since the grid-node matrix dimension in Example 1 is 45 × 45 while that in Example 2 is 9 × 9, the computation time in Example 1 is much larger than in Example 2. Further, the systems without random fluctuation are solved slightly faster than those with random fluctuation. Conventional algorithms for calculating entropy have been applied in image processing, digital signal processing, and particle filters [39,40,41]. However, these conventional algorithms can only be used for linear discrete systems, whereas many real systems are nonlinear and continuous. The indirect entropy measurement method proposed here can deal with nonlinear stochastic continuous systems. Although the study in [24] addresses continuous nonlinear stochastic systems, many physical systems are modeled using stochastic partial differential dynamic equations in the spatio-temporal domain; the proposed indirect entropy measurement method can solve the system entropy measurement problem for nonlinear stochastic partial differential systems.

7. Conclusions

In this study, the system entropy of stochastic partial differential systems (SPDSs) was introduced as the difference between the output signal entropy and the input signal entropy, which equals the logarithm of the output-to-input signal randomness ratio. We found that the system entropy is inversely related to system stability and that intrinsic random fluctuation increases the system entropy. If the eigenvalues of the system matrices lie farther into the left half of the complex s-plane, then the SPDS has lower system entropy, and vice versa. If the output and input signal randomness values are equal and the system is independent of the initial value, then the system entropy is zero. To estimate the system entropy of nonlinear stochastic partial differential systems (NSPDSs), the global linearization technique and the finite difference scheme were employed to represent the NSPDS by the spatial state space system given in Equation (66). Therefore, the system entropy measurement problem of NSPDSs became the problem of solving the HJII-constrained optimization problem given in Equation (55), which can be replaced by the simpler LMIs-constrained optimization problem given in Equation (69). Hence, using the LMI toolbox of MATLAB, we can easily calculate the system entropy of NSPDSs. Finally, two examples were provided to illustrate the system entropy measurement procedure and to confirm that PDSs with intrinsic random fluctuation possess greater system entropy.

Acknowledgments

This work was supported by National Science Council under contract No. MOST-104-2221-E-007-124-MY3.

Author Contributions

Bor-Sen Chen: Methodology development, conception and design, data interpretation and improved the scientific quality of the manuscript. Chao-Yi Hsieh: Manuscript writing, computational simulation and interpretation. Shih-Ju Ho: Simulation assistance, conception and design and data interpretation. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendixes

Lemma 1 [12]. For any matrices (or vectors) $X$, $Y$ and a symmetric matrix $P = P^T > 0$ with appropriate dimensions, we have:
$$X^T P Y + Y^T P X \le \xi X^T P X + \frac{1}{\xi} Y^T P Y$$
for any positive constant $\xi$.             ☐
Lemma 2 [42]. Let $M_i$ be any matrix with appropriate dimension, $\alpha_i(z)$ be the interpolation function for the $i$th local system, and $P = P^T > 0$. Then, we have:
$$\left( \sum_{i=1}^{L} \alpha_i(z) M_i \right)^T P \left( \sum_{j=1}^{L} \alpha_j(z) M_j \right) \le \sum_{i=1}^{L} \alpha_i(z) M_i^T P M_i.$$
With Lemma 2, the LMIs-constrained optimization in Equations (62) or (68) can be solved efficiently.                   ☐

Appendix A.

Proof of Proposition 1.

$$E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\} = E\left\{ \int_U \left[ V(y_0(x)) - V(y(x,t_f)) + \int_0^{t_f} \left( z^T(x,t)\, z(x,t) + \frac{\partial V(y(x,t))}{\partial t} \right) dt \right] dx \right\}. \tag{A1}$$
From the fact that $V(y(x,t_f)) \ge 0$, we have:
$$E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\} \le E\left\{ \int_U V(y_0(x))\, dx + \int_U \int_0^{t_f} \left[ y^T(x,t) C^T C y(x,t) + \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T \left( \kappa \nabla^2 y(x,t) + A y(x,t) + B v(x,t) \right) \right] dt\, dx \right\}. \tag{A2}$$
From Lemma 1:
$$\left( \frac{\partial V(y(x,t))}{\partial y} \right)^T B v(x,t) = \frac{1}{2} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T B v(x,t) + \frac{1}{2} v^T(x,t) B^T \frac{\partial V(y(x,t))}{\partial y} \le \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T B B^T \frac{\partial V(y(x,t))}{\partial y} + \bar{S}_0\, v^T(x,t)\, v(x,t). \tag{A3}$$
Substituting Equation (A3) into Equation (A2), we get:
$$E\left\{ \int_U \int_0^{t_f} z^T(x,t)\, z(x,t)\, dt\, dx \right\} \le E\left\{ \int_U \left[ V(y_0(x)) + \int_0^{t_f} \left( y^T(x,t) C^T C y(x,t) + \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T \left( \kappa \nabla^2 y(x,t) + A y(x,t) \right) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(x,t))}{\partial y} \right)^T B B^T \frac{\partial V(y(x,t))}{\partial y} + \bar{S}_0\, v^T(x,t)\, v(x,t) \right) dt \right] dx \right\}. \tag{A4}$$
If the HJII given in Equation (10) holds, then the system randomness bound in Equation (9) holds. If $y_0(x) = 0$ and thus $V(y_0(x)) = 0$, then the HJII in Equation (10) leads to the inequality in Equation (5). ☐

Appendix B.

Proof of Corollary 1

For the conventional linear dynamic system in Equation (12), which is independent of $x$, the HJII in Equation (10) for the system randomness to have an upper bound $\bar{S}_0$ becomes the following inequality:
$$y^T(t) C^T C y(t) + \left( \frac{\partial V(y(t))}{\partial y} \right)^T A y(t) + \frac{1}{4 \bar{S}_0} \left( \frac{\partial V(y(t))}{\partial y} \right)^T B B^T \frac{\partial V(y(t))}{\partial y} < 0. \tag{B1}$$
If we choose the Lyapunov function as $V(y(t)) = y^T(t) P y(t)$, then the HJII for the existence of the upper bound $\bar{S}_0$ in Equation (B1) becomes:
$$y^T(t) C^T C y(t) + 2 y^T(t) P A y(t) + \frac{1}{\bar{S}_0} y^T(t) P B B^T P y(t) < 0 \tag{B2}$$
or:
$$y^T(t) \left( P A + A^T P + C^T C + \frac{1}{\bar{S}_0} P B B^T P \right) y(t) < 0. \tag{B3}$$
Therefore, if the Riccati-like inequality in Equation (14) holds, then the inequality in Equation (B3) also holds and the system randomness of the linear dynamic system in Equation (12) has an upper bound $\bar{S}_0$.              ☐

Appendix C.

Proof of Proposition 2

For the LSPDS given in Equation (18), from the It o ^ formula [34,35], we get
V ( y ( x , t ) ) t = ( V ( y ( x , t ) ) y ) T ( κ 2 y ( x , t ) + A y ( x , t ) + B v ( x , t ) ) + H y ( x , t ) w ( x , t ) + 1 2 y T ( x , t ) H T 2 V ( y ( x , t ) ) y 2 H y ( x , t ) .
In this situation, we will follow the proof procedure in Appendix A:
E { U 0 t f z T ( x , t ) z ( x , t ) d t d x } = E { U [ V ( y 0 ( x ) ) V ( y ( x , t f ) ) + E 0 t f ( z T ( x , t ) z ( x , t ) + V ( y ( x , t ) ) t ) d t ] d x } .
From the fact that V ( y ( x , t f ) ) 0 and E { d W ( x , t ) } = 0 , substituting Equation (C1) into Equation (C2), we get:
E { U 0 t f z T ( x , t ) z ( x , t ) d x d t } E { U [ V ( y 0 ( x ) ) + 0 t f ( y T ( x , t ) C T C y ( x , t ) + ( V ( y ( x , t ) ) y ) T ( κ 2 y ( x , t ) + A y ( x , t ) ) + ( V ( y ( x , t ) ) y ) T B v ( x , t ) + 1 2 y T ( x , t ) H T 2 V ( y ( x , t ) ) y 2 H y ( x , t ) ) d t d x } .
By using the inequality Equation (A3), we get:
E { U 0 t f z T ( x , t ) z ( x , t ) d x d t } E { U [ V ( y 0 ( x ) ) + 0 t f [ y T ( x , t ) C T C y ( x , t ) + ( V ( y ( x , t ) ) y ) T ( κ 2 y ( x , t ) + A y ( x , t ) ) + 1 4 S ¯ 0 ( V ( y ( x , t ) ) y ) T B B T V ( y ( x , t ) ) y + S ¯ 0 v T ( x , t ) v ( x , t ) + 1 2 y T ( x , t ) H T 2 V ( y ( x , t ) ) y 2 H y ( x , t ) ] d t ] d x } .
Therefore, if the HJII given in Equation (20) holds, then the inequality for the system randomness in Equation (9) holds. If the initial condition $y_0(x) = 0$, then $V(y_0(x)) = 0$, and the inequality for the system randomness in Equation (5) holds. ☐
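A remark on where the extra quadratic term comes from: for the quadratic Lyapunov function $V(y) = y^TPy$ used in Appendix D below, the Hessian is $\partial^2V/\partial y^2 = 2P$, so the Itô correction $\frac{1}{2}y^TH^T(\partial^2V/\partial y^2)Hy$ in Equations (C1) and (C4) reduces exactly to the $H^TPH$ term of the Riccati-like inequality (D2). A short MATLAB check of this identity, with placeholder data, is:

```matlab
% For V(y) = y'*P*y the Hessian is 2*P, so the Ito correction term
% (1/2)*y'*H'*(d2V/dy2)*H*y equals y'*(H'*P*H)*y, as used in (D2).
rng(2);  n = 4;
P = randn(n);  P = P*P' + eye(n);    % a positive definite P (placeholder)
H = randn(n);  y = randn(n,1);       % placeholder fluctuation matrix/state
ito_term  = 0.5 * y' * H' * (2*P) * H * y;
quad_term = y' * (H'*P*H) * y;
assert(abs(ito_term - quad_term) < 1e-9);
```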

Appendix D.

Proof of Corollary 2.

For the linear stochastic system in Equation (22), the HJII in Equation (20) for $S_0$ to have an upper bound $\bar{S}_0$ becomes:
$$
y^T(t)C^TCy(t) + \left(\frac{\partial V(y(t))}{\partial y}\right)^TAy(t) + \frac{1}{4\bar{S}_0}\left(\frac{\partial V(y(t))}{\partial y}\right)^TBB^T\frac{\partial V(y(t))}{\partial y} + \frac{1}{2}y^T(t)H^T\frac{\partial^2V(y(t))}{\partial y^2}Hy(t) < 0. \tag{D1}
$$
If we choose the Lyapunov function as $V(y(t)) = y^T(t)Py(t)$, then the condition in Equation (D1) for the upper bound $\bar{S}_0$ becomes:
$$
PA + A^TP + C^TC + H^TPH + \frac{1}{\bar{S}_0}PBB^TP < 0. \tag{D2}
$$
Therefore, if the Riccati-like inequality in Equation (24) holds, then the system randomness $S_0$ has the upper bound $\bar{S}_0$. ☐
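Given a candidate pair $(P, \bar{S}_0)$, the Riccati-like inequality (D2) can be checked directly by an eigenvalue test, as in the hedged MATLAB sketch below; the system matrices and the trial $P$ (obtained here from a Lyapunov solve, which requires the Control System Toolbox) are placeholders rather than values from the paper.

```matlab
% Direct eigenvalue test of (D2):
%   P*A + A'*P + C'*C + H'*P*H + (1/Sbar)*P*B*B'*P < 0
A = [-3 1; 0 -4];  B = [1; 0];  C = [1 0];  H = 0.1*eye(2);  % placeholders
Sbar = 2;
P = lyap(A', C'*C + eye(2));     % trial P from A'*P + P*A = -(C'*C + I)
M = P*A + A'*P + C'*C + H'*P*H + (P*(B*B')*P)/Sbar;
if max(eig((M + M')/2)) < 0
    disp('(D2) holds: the system randomness S0 is bounded by Sbar');
else
    disp('(D2) fails for this (P, Sbar): enlarge Sbar or search P via LMIs');
end
```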

Appendix E.

Proof of Proposition 4.

$$
E\left\{\int_U\int_0^{t_f} z^T(x,t)z(x,t)\,dt\,dx\right\} = E\left\{\int_U\left[V(y_0(x)) - V(y(x,t_f)) + \int_0^{t_f}\left(z^T(x,t)z(x,t) + \frac{\partial V(y(x,t))}{\partial t}\right)dt\right]dx\right\}. \tag{E1}
$$
From the fact that $V(y(x,t_f)) \ge 0$, we have:
$$
E\left\{\int_U\int_0^{t_f} z^T(x,t)z(x,t)\,dt\,dx\right\} \le E\left\{\int_U V(y_0(x))\,dx + \int_U\int_0^{t_f}\left[z^T(x,t)z(x,t) + \left(\frac{\partial V(y(x,t))}{\partial y}\right)^T\left(\kappa(y(x,t))\nabla^2y(x,t) + f(y(x,t)) + g(y(x,t))v(x,t)\right)\right]dt\,dx\right\}. \tag{E2}
$$
From Lemma 1:
$$
\left(\frac{\partial V(y(x,t))}{\partial y}\right)^Tg(y(x,t))v(x,t) = \frac{1}{2}\left(\frac{\partial V(y(x,t))}{\partial y}\right)^Tg(y(x,t))v(x,t) + \frac{1}{2}v^T(x,t)g^T(y(x,t))\frac{\partial V(y(x,t))}{\partial y} \le \frac{1}{4\bar{S}_0}\left(\frac{\partial V(y(x,t))}{\partial y}\right)^Tg(y(x,t))g^T(y(x,t))\frac{\partial V(y(x,t))}{\partial y} + \bar{S}_0v^T(x,t)v(x,t). \tag{E3}
$$
Substituting Equation (E3) into Equation (E2), we get:
$$
E\left\{\int_U\int_0^{t_f} z^T(x,t)z(x,t)\,dt\,dx\right\} \le E\left\{\int_U V(y_0(x))\,dx + \int_U\int_0^{t_f}\left[z^T(x,t)z(x,t) + \left(\frac{\partial V(y(x,t))}{\partial y}\right)^T\left(\kappa(y(x,t))\nabla^2y(x,t) + f(y(x,t))\right) + \frac{1}{4\bar{S}_0}\left(\frac{\partial V(y(x,t))}{\partial y}\right)^Tg(y(x,t))g^T(y(x,t))\frac{\partial V(y(x,t))}{\partial y} + \bar{S}_0v^T(x,t)v(x,t)\right]dt\,dx\right\}. \tag{E4}
$$
If the HJII in Equation (48) holds, then $S_0$ has an upper bound $\bar{S}_0$ in the sense of Equation (9). If $y_0(x) = 0$, then $V(y_0(x)) = 0$, and the HJII in Equation (48) together with Equation (E4) leads to Equation (5). ☐

Appendix F.

Proof of Proposition 5.

For the NSPDS given in Equation (54), by using the Itô formula, we get:
$$
\frac{\partial V(y(x,t))}{\partial t} = \left(\frac{\partial V(y(x,t))}{\partial y}\right)^T\left(\kappa(y(x,t))\nabla^2y(x,t) + f(y(x,t)) + g(y(x,t))v(x,t) + H(y(x,t))y(x,t)\,dW(x,t)\right) + \frac{1}{2}y^T(x,t)H^T(y(x,t))\frac{\partial^2V(y(x,t))}{\partial y^2}H(y(x,t))y(x,t). \tag{F1}
$$
From the fact that $E\{dW(x,t)\} = 0$ and by following a procedure similar to that in Appendix E, we get:
$$
E\left\{\int_U\int_0^{t_f} z^T(x,t)z(x,t)\,dt\,dx\right\} \le E\left\{\int_U V(y_0(x))\,dx + \int_U\int_0^{t_f}\left[z^T(x,t)z(x,t) + \left(\frac{\partial V(y(x,t))}{\partial y}\right)^T\left(\kappa(y(x,t))\nabla^2y(x,t) + f(y(x,t))\right) + \frac{1}{2}y^T(x,t)H^T(y(x,t))\frac{\partial^2V(y(x,t))}{\partial y^2}H(y(x,t))y(x,t) + \bar{S}_0v^T(x,t)v(x,t) + \frac{1}{4\bar{S}_0}\left(\frac{\partial V(y(x,t))}{\partial y}\right)^Tg(y(x,t))g^T(y(x,t))\frac{\partial V(y(x,t))}{\partial y}\right]dt\,dx\right\}. \tag{F2}
$$
If the HJII in Equation (55) holds, then the system randomness $S_0$ of the NSPDS in Equation (53) or (54) has an upper bound $\bar{S}_0$ in the sense of Equation (5) or (9). ☐

Appendix G.

Proof of Proposition 6.

$$
E\left\{\int_0^{t_f} z^T(t)z(t)\,dt\right\} = E\left\{V(y(0)) - V(y(t_f)) + \int_0^{t_f}\left(\sum_{i=1}^L\sum_{j=1}^L\alpha_i(y)\alpha_j(y)y^T(t)\bar{C}_i^T\bar{C}_jy(t) + \frac{\partial V(y(t))}{\partial t}\right)dt\right\} \le E\left\{V(y(0)) + \int_0^{t_f}\left(\sum_{i=1}^L\sum_{j=1}^L\alpha_i(y)\alpha_j(y)y^T(t)\bar{C}_i^T\bar{C}_jy(t) + \left(\frac{\partial V(y(t))}{\partial y}\right)^T\sum_{i=1}^L\alpha_i(y)\left(\bar{A}_iy(t) + \bar{B}_iv(t)\right)\right)dt\right\} \quad(\text{by the fact that } V(y(t_f)) \ge 0). \tag{G1}
$$
Using the following inequality:
$$
\left(\frac{\partial V(y(t))}{\partial y}\right)^T\bar{B}_iv(t) \le \frac{1}{4\bar{S}_0}\left(\frac{\partial V(y(t))}{\partial y}\right)^T\bar{B}_i\bar{B}_i^T\frac{\partial V(y(t))}{\partial y} + \bar{S}_0v^T(t)v(t), \tag{G2}
$$
together with Lemma 2 and the choice of $V(y(t)) = y^T(t)\bar{P}y(t)$, we get:
$$
E\left\{\int_0^{t_f} z^T(t)z(t)\,dt\right\} \le E\left\{y^T(0)\bar{P}y(0) + \int_0^{t_f}\left[\sum_{i=1}^L\sum_{j=1}^L\alpha_i(y)\alpha_j(y)y^T(t)\left(\bar{C}_i^T\bar{C}_j + \bar{P}\bar{A}_i + \bar{A}_i^T\bar{P} + \frac{1}{\bar{S}_0}\bar{P}\bar{B}_i\bar{B}_i^T\bar{P}\right)y(t) + \bar{S}_0v^T(t)v(t)\right]dt\right\}. \tag{G3}
$$
If the inequality in Equation (61) or (62) holds, then we get Equation (36) if $y(0) = 0$ or Equation (37) if $y(0) \ne 0$; i.e., $S_0$ has an upper bound $\bar{S}_0$ as shown in Equation (36) or (37). ☐

Appendix H.

Proof of Proposition 7.

$$
E\left\{\int_0^{t_f} z^T(t)z(t)\,dt\right\} = E\left\{V(y(0)) - V(y(t_f)) + \int_0^{t_f}\left(\sum_{i=1}^L\sum_{j=1}^L\alpha_i(y)\alpha_j(y)y^T(t)\bar{C}_i^T\bar{C}_jy(t) + \frac{\partial V(y(t))}{\partial t}\right)dt\right\}. \tag{H1}
$$
Using the Itô formula [34,35]:
$$
\frac{\partial V(y(t))}{\partial t} = \left(\frac{\partial V(y(t))}{\partial y}\right)^T\frac{dy(t)}{dt} + \frac{1}{2}\sum_{i=1}^L\alpha_i(y)y^T(t)\bar{H}_i^T\frac{\partial^2V(y(t))}{\partial y^2}\bar{H}_iy(t) = \left(\frac{\partial V(y(t))}{\partial y}\right)^T\left(\sum_{i=1}^L\alpha_i(y)\left(\bar{A}_iy(t) + \bar{B}_iv(t) + \bar{H}_iy(t)w(t)\right)\right) + \frac{1}{2}\sum_{i=1}^L\alpha_i(y)y^T(t)\bar{H}_i^T\frac{\partial^2V(y(t))}{\partial y^2}\bar{H}_iy(t). \tag{H2}
$$
From the fact that $V(y(t_f)) \ge 0$ and $E\{dW(t)\} = 0$, using Equation (G2), Lemma 2, and the choice of $V(y(t)) = y^T(t)\bar{P}y(t)$, we get:
$$
E\left\{\int_0^{t_f} z^T(t)z(t)\,dt\right\} \le E\left\{V(y(0)) + \int_0^{t_f}\left[\sum_{i=1}^L\sum_{j=1}^L\alpha_i(y)\alpha_j(y)y^T(t)\left(\bar{C}_i^T\bar{C}_j + \bar{P}\bar{A}_i + \bar{A}_i^T\bar{P} + \bar{H}_i^T\bar{P}\bar{H}_i + \frac{1}{\bar{S}_0}\bar{P}\bar{B}_i\bar{B}_i^T\bar{P}\right)y(t) + \bar{S}_0v^T(t)v(t)\right]dt\right\}. \tag{H3}
$$
From the Riccati-like inequalities in Equation (67), we get Equation (36) if $y(0) = 0$ or Equation (37) if $y(0) \ne 0$. Hence $S_0$ has the upper bound $\bar{S}_0$ given in Equation (36) or (37). ☐
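In practice, Proposition 7 is applied by checking the Riccati-like inequality of Equation (67) at every vertex of the global linearization with one common $\bar{P}$; the interpolation through $\alpha_i(y)$ then extends the bound to the whole state space. The MATLAB sketch below illustrates this vertex-wise test; the vertices $(\bar{A}_i, \bar{B}_i, \bar{C}_i, \bar{H}_i)$, the common $\bar{P}$, and the bound are all placeholders, not values from the paper.

```matlab
% Vertex-wise check of the interpolated Riccati-like inequalities (67):
% if every vertex satisfies the inequality with a common P, then the
% bound Sbar holds for the interpolated (globally linearized) system.
Sbar = 5;  n = 2;  L = 3;
Av = {[-2 1; 0 -3], [-2.5 1; 0 -3.5], [-1.8 0.9; 0 -2.8]};  % placeholders
Bv = {[1; 0], [1; 0.1], [0.9; 0]};
Cv = {[1 0], [1 0], [1 0]};
Hv = {0.05*eye(n), 0.06*eye(n), 0.04*eye(n)};
P  = eye(n);                       % candidate common P (placeholder)
ok = true;
for i = 1:L
    Mi = P*Av{i} + Av{i}'*P + Cv{i}'*Cv{i} + Hv{i}'*P*Hv{i} ...
         + (P*(Bv{i}*Bv{i}')*P)/Sbar;
    ok = ok && (max(eig((Mi + Mi')/2)) < 0);   % eigenvalue test per vertex
end
if ok
    disp('all vertex inequalities hold: S0 <= Sbar for the interpolated system');
else
    disp('a vertex fails: increase Sbar or search a common P with the LMI toolbox');
end
```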

Appendix I. The Values of the Matrices $\bar{A}_1$–$\bar{A}_3$ and $\bar{B}_1$–$\bar{B}_3$ in Example 2

$$
\bar{A}_1 = \begin{bmatrix}
248.5904 & 73.4140 & 6.3298 & 11.3422 & 13.6801 & 11.3705 & 6.2259 & 1.6850 & 0.0085 \\
63.9888 & 306.2888 & 73.1145 & 10.7050 & 12.9056 & 10.7479 & 5.9590 & 1.6144 & 0.0008 \\
0.0155 & 72.3758 & 349.5368 & 89.6968 & 6.2517 & 5.2527 & 2.7918 & 0.6792 & 0.0062 \\
0.0146 & 2.9022 & 89.4423 & 369.4511 & 109.1560 & 18.9602 & 10.3184 & 2.7872 & 0.0025 \\
0.0061 & 2.9075 & 10.8228 & 104.1017 & 377.9499 & 104.1334 & 10.8170 & 2.9033 & 0.0042 \\
0.0056 & 3.2568 & 12.0847 & 22.2675 & 112.9763 & 366.1589 & 91.1738 & 3.2818 & 0.0045 \\
0.0027 & 1.5983 & 5.5212 & 9.9943 & 12.0014 & 74.4233 & 357.7766 & 70.1692 & 0.0057 \\
0.0021 & 1.4880 & 5.1789 & 9.2718 & 11.0895 & 9.2263 & 73.8587 & 306.0651 & 63.9957 \\
0.0031 & 0.4562 & 1.9859 & 3.6201 & 4.3747 & 3.7056 & 1.9064 & 71.2250 & 248.6023
\end{bmatrix},
$$
$$
\bar{A}_2 = \begin{bmatrix}
248.6356 & 77.5705 & 21.6610 & 40.7595 & 48.9249 & 40.8425 & 22.1227 & 5.7088 & 0.0148 \\
63.9970 & 297.5356 & 108.7288 & 54.0796 & 65.1006 & 54.1566 & 29.6209 & 7.7505 & 0.0034 \\
0.0148 & 77.7395 & 331.5775 & 123.5166 & 47.1578 & 39.1062 & 21.4110 & 5.7625 & 0.0122 \\
0.0137 & 4.8852 & 61.4647 & 420.3573 & 47.4347 & 32.2401 & 17.4056 & 4.6695 & 0.0172 \\
0.0202 & 0.9506 & 2.8227 & 79.5339 & 407.6558 & 79.2352 & -2.9837 & 0.7131 & 0.0024 \\
0.0131 & 4.9633 & 18.9866 & 35.6314 & 44.1130 & 423.4611 & 60.1927 & 5.1723 & 0.0202 \\
0.0118 & 8.4312 & 30.9949 & 56.7717 & 67.9977 & 141.1667 & 322.2049 & 80.2604 & 0.0386 \\
0.0090 & 10.0080 & 36.6276 & 66.7149 & 80.0874 & 66.7529 & 115.8006 & 295.6419 & 63.9684 \\
0.0205 & 3.8063 & 15.4396 & 28.0445 & 33.7817 & 28.3198 & 14.9776 & 75.8867 & 248.5651
\end{bmatrix},
$$
$$
\bar{A}_3 = \begin{bmatrix}
248.5929 & 61.8110 & 37.6184 & 70.3762 & 84.5701 & 70.4963 & 38.2200 & 9.8546 & 0.0113 \\
64.0064 & 309.7453 & 60.6536 & 34.2427 & 41.2822 & 34.2495 & 18.5503 & 4.6302 & 0.0051 \\
0.0092 & 66.0761 & 374.0286 & 45.3930 & 47.1019 & 39.0665 & 21.1461 & 5.5868 & 0.0076 \\
0.0070 & 9.7655 & 115.8537 & 321.4285 & 167.1271 & 66.9193 & 36.3552 & 9.6716 & 0.0226 \\
0.0211 & 3.1529 & 10.8493 & 104.0141 & 377.9706 & 104.3839 & 11.0994 & 2.7932 & 0.0005 \\
0.0086 & 8.5989 & 33.1513 & 61.7400 & 159.9973 & 327.0113 & 112.2483 & 8.8910 & 0.0247 \\
0.0221 & 5.8946 & 22.0825 & 40.8491 & 48.8130 & 43.6805 & 374.7487 & 65.8380 & 0.0468 \\
0.0129 & 8.5827 & 31.6336 & 58.0069 & 69.7027 & 58.1225 & 47.4858 & 313.3645 & 64.0353 \\
0.0283 & 1.7713 & 8.0733 & 14.4285 & 17.4896 & 14.7019 & 7.4802 & 69.7287 & 248.6651
\end{bmatrix}.
$$
$$
\bar{B}_1 = \begin{bmatrix}
0.0000 & 0.0003 & 0.0009 & 0.0018 & 0.0023 & 0.0020 & 0.0012 & 0.0003 & 0.0000 \\
0.0000 & 0.0012 & 0.0043 & 0.0079 & 0.0094 & 0.0079 & 0.0043 & 0.0011 & 0.0000 \\
0.0000 & 0.0004 & 0.0017 & 0.0030 & 0.0036 & 0.0030 & 0.0017 & 0.0004 & 0.0000 \\
0.0000 & 0.0000 & 0.0000 & 0.1385 & 0.0001 & 0.0000 & 0.0001 & 0.0000 & 0.0000 \\
0.0000 & 0.0010 & 0.0038 & 0.0068 & 0.1583 & 0.0068 & 0.0038 & 0.0010 & 0.0000 \\
0.0000 & 0.0002 & 0.0009 & 0.0015 & 0.0019 & 0.1370 & 0.0008 & 0.0002 & 0.0000 \\
0.0000 & 0.0000 & 0.0003 & 0.0005 & 0.0006 & 0.0004 & 0.0002 & 0.0000 & 0.0000 \\
0.0000 & 0.0002 & 0.0008 & 0.0013 & 0.0015 & 0.0013 & 0.0008 & 0.0003 & 0.0000 \\
0.0000 & 0.0003 & 0.0013 & 0.0023 & 0.0028 & 0.0023 & 0.0012 & 0.0003 & 0.0000
\end{bmatrix},
$$
$$
\bar{B}_2 = \begin{bmatrix}
0.0000 & 0.0004 & 0.0007 & 0.0017 & 0.0023 & 0.0021 & 0.0012 & 0.0003 & 0.0001 \\
0.0000 & 0.0025 & 0.0093 & 0.0167 & 0.0201 & 0.0168 & 0.0093 & 0.0023 & 0.0001 \\
0.0000 & 0.0018 & 0.0067 & 0.0119 & 0.0144 & 0.0121 & 0.0066 & 0.0017 & 0.0000 \\
0.0000 & 0.0008 & 0.0030 & 0.1441 & 0.0065 & 0.0054 & 0.0031 & 0.0007 & 0.0000 \\
0.0000 & 0.0029 & 0.0108 & 0.0197 & 0.1263 & 0.0198 & 0.0109 & 0.0028 & 0.0000 \\
0.0000 & 0.0009 & 0.0034 & 0.0057 & 0.0069 & 0.1443 & 0.0031 & 0.0008 & 0.0000 \\
0.0000 & 0.0008 & 0.0035 & 0.0064 & 0.0077 & 0.0063 & 0.0034 & 0.0008 & 0.0000 \\
0.0000 & 0.0020 & 0.0063 & 0.0121 & 0.0147 & 0.0124 & 0.0065 & 0.0017 & 0.0000 \\
0.0000 & 0.0004 & 0.0012 & 0.0022 & 0.0025 & 0.0022 & 0.0013 & 0.0004 & 0.0000
\end{bmatrix},
$$
$$
\bar{B}_3 = \begin{bmatrix}
0.0000 & 0.0005 & 0.0028 & 0.0047 & 0.0054 & 0.0044 & 0.0023 & 0.0007 & 0.0001 \\
0.0000 & 0.0017 & 0.0067 & 0.0118 & 0.0144 & 0.0121 & 0.0066 & 0.0016 & 0.0000 \\
0.0000 & 0.0023 & 0.0086 & 0.0156 & 0.0188 & 0.0158 & 0.0086 & 0.0022 & 0.0000 \\
0.0000 & 0.0007 & 0.0025 & 0.1339 & 0.0054 & 0.0045 & 0.0025 & 0.0006 & 0.0000 \\
0.0000 & 0.0024 & 0.0090 & 0.0164 & 0.1699 & 0.0166 & 0.0091 & 0.0023 & 0.0000 \\
0.0000 & 0.0009 & 0.0033 & 0.0057 & 0.0069 & 0.1328 & 0.0031 & 0.0008 & 0.0000 \\
0.0000 & 0.0009 & 0.0038 & 0.0069 & 0.0083 & 0.0069 & 0.0038 & 0.0010 & 0.0000 \\
0.0000 & 0.0030 & 0.0098 & 0.0187 & 0.0224 & 0.0190 & 0.0100 & 0.0028 & 0.0001 \\
0.0000 & 0.0013 & 0.0048 & 0.0088 & 0.0103 & 0.0087 & 0.0048 & 0.0013 & 0.0000
\end{bmatrix}.
$$

Appendix J. The Value of the Matrix $\bar{P}$ in Example 2 without the Random Fluctuation $H(y(x,t))w(x,t)$

$$
\bar{P} = \begin{bmatrix}
1.8901 & 0.4570 & 0.0541 & 0.0081 & 0.0004 & 0.0150 & 0.1708 & 0.1305 & 0.0528 \\
0.4570 & 1.3381 & 0.2589 & 0.0123 & 0.0009 & 0.0017 & 0.0051 & 0.0779 & 0.0963 \\
0.0541 & 0.2589 & 0.2453 & 0.0228 & 0.0003 & 0.0031 & 0.0031 & 0.0866 & 0.1054 \\
0.0081 & 0.0123 & 0.0228 & 0.0063 & 0.0003 & 0.0002 & 0.0008 & 0.0161 & 0.0157 \\
0.0004 & 0.0009 & 0.0003 & 0.0003 & 0.0034 & 0.0000 & 0.0032 & 0.0093 & 0.0238 \\
0.0150 & 0.0017 & 0.0031 & 0.0002 & 0.0000 & 0.0076 & 0.0324 & 0.0237 & 0.0049 \\
0.1708 & 0.0051 & 0.0031 & 0.0008 & 0.0032 & 0.0324 & 0.3140 & 0.3007 & 0.2484 \\
0.1305 & 0.0779 & 0.0866 & 0.0161 & 0.0093 & 0.0237 & 0.3007 & 1.2421 & 0.2637 \\
0.0528 & 0.0963 & 0.1054 & 0.0157 & 0.0238 & 0.0049 & 0.2484 & 0.2637 & 1.8658
\end{bmatrix}.
$$

Appendix K. The Values of the Matrices $\bar{H}_1$–$\bar{H}_3$ in Example 2

$$
\bar{H}_1 = \begin{bmatrix}
0.0037 & 0.0825 & 0.5457 & 0.9808 & 1.1872 & 0.9620 & 0.6517 & 0.1674 & 0.0080 \\
0.0000 & 0.0006 & 0.0021 & 0.0037 & 0.0045 & 0.0038 & 0.0020 & 0.0005 & 0.0000 \\
0.0000 & 0.0016 & 0.0060 & 0.0108 & 0.0131 & 0.0109 & 0.0060 & 0.0016 & 0.0000 \\
0.0000 & 0.0002 & 0.0008 & 0.0016 & 0.0019 & 0.0016 & 0.0009 & 0.0002 & 0.0000 \\
0.0000 & 0.0002 & 0.0007 & 0.0014 & 0.0016 & 0.0014 & 0.0007 & 0.0002 & 0.0000 \\
0.0000 & 0.0009 & 0.0033 & 0.0059 & 0.0071 & 0.0060 & 0.0032 & 0.0008 & 0.0000 \\
0.0000 & 0.0004 & 0.0013 & 0.0023 & 0.0029 & 0.0023 & 0.0012 & 0.0003 & 0.0000 \\
0.0000 & 0.0004 & 0.0013 & 0.0023 & 0.0027 & 0.0024 & 0.0012 & 0.0003 & 0.0000 \\
0.0083 & 0.2632 & 1.2275 & 2.1739 & 2.6014 & 2.0976 & 1.2483 & 0.3517 & 0.0076
\end{bmatrix},
$$
$$
\bar{H}_2 = \begin{bmatrix}
0.0092 & 0.5677 & 1.3331 & 2.6171 & 3.2422 & 2.6149 & 1.2459 & 0.2840 & 0.0249 \\
0.0000 & 0.0001 & 0.0005 & 0.0007 & 0.0004 & 0.0004 & 0.0002 & 0.0000 & 0.0000 \\
0.0000 & 0.0031 & 0.0119 & 0.0213 & 0.0256 & 0.0214 & 0.0120 & 0.0031 & 0.0000 \\
0.0000 & 0.0008 & 0.0033 & 0.0061 & 0.0073 & 0.0061 & 0.0034 & 0.0008 & 0.0000 \\
0.0000 & 0.0009 & 0.0031 & 0.0059 & 0.0068 & 0.0058 & 0.0031 & 0.0007 & 0.0000 \\
0.0000 & 0.0023 & 0.0081 & 0.0149 & 0.0178 & 0.0151 & 0.0181 & 0.0021 & 0.0000 \\
0.0000 & 0.0003 & 0.0013 & 0.0022 & 0.0025 & 0.0021 & 0.0016 & 0.0004 & 0.0000 \\
0.0001 & 0.0011 & 0.0038 & 0.0066 & 0.0080 & 0.0068 & 0.0036 & 0.0008 & 0.0000 \\
0.0117 & 1.7045 & 6.9326 & 12.4667 & 14.9464 & 12.2852 & 6.9683 & 1.8089 & 0.0294
\end{bmatrix},
$$
$$
\bar{H}_3 = \begin{bmatrix}
0.0131 & 0.7755 & 2.1170 & 4.1004 & 5.0706 & 4.0306 & 2.1002 & 0.4409 & 0.0221 \\
0.0000 & 0.0017 & 0.0072 & 0.0130 & 0.0150 & 0.0127 & 0.0069 & 0.0017 & 0.0000 \\
0.0000 & 0.0032 & 0.0122 & 0.0219 & 0.0263 & 0.0219 & 0.0123 & 0.0032 & 0.0000 \\
0.0000 & 0.0006 & 0.0026 & 0.0046 & 0.0056 & 0.0045 & 0.0026 & 0.0005 & 0.0000 \\
0.0000 & 0.0012 & 0.0044 & 0.0081 & 0.0095 & 0.0080 & 0.0043 & 0.0010 & 0.0000 \\
0.0000 & 0.0015 & 0.0053 & 0.0097 & 0.0117 & 0.0098 & 0.0052 & 0.0013 & 0.0000 \\
0.0000 & 0.0008 & 0.0030 & 0.0052 & 0.0062 & 0.0050 & 0.0033 & 0.0008 & 0.0000 \\
0.0001 & 0.0011 & 0.0035 & 0.0059 & 0.0073 & 0.0060 & 0.0033 & 0.0006 & 0.0001 \\
0.0140 & 2.1995 & 8.8562 & 15.9906 & 19.1852 & 15.8464 & 8.8599 & 2.3156 & 0.0024
\end{bmatrix}.
$$

Appendix L. The Value of the Matrix $\bar{P}$ in Example 2 with the Random Fluctuation $H(y(x,t))w(x,t)$

$$
\bar{P} = \begin{bmatrix}
0.9065 & 0.2235 & 0.0382 & 0.0043 & 0.0012 & 0.0074 & 0.1056 & 0.0015 & 0.0784 \\
0.2235 & 0.5981 & 0.1408 & 0.0073 & 0.0004 & 0.0002 & 0.0024 & 0.0512 & 0.0181 \\
0.0382 & 0.1408 & 0.0826 & 0.0073 & 0.0001 & 0.0005 & 0.0026 & 0.0067 & 0.0036 \\
0.0043 & 0.0073 & 0.0826 & 0.0017 & 0.0002 & 0.0000 & 0.0008 & 0.0012 & 0.0003 \\
0.0012 & 0.0004 & 0.0001 & 0.0002 & 0.0009 & 0.0000 & 0.0016 & 0.0015 & 0.0002 \\
0.0074 & 0.0002 & 0.0005 & 0.0000 & 0.0000 & 0.0022 & 0.0093 & 0.0052 & 0.0004 \\
0.1056 & 0.0024 & 0.0026 & 0.0008 & 0.0016 & 0.0093 & 0.0761 & 0.0434 & 0.0077 \\
0.0015 & 0.0512 & 0.0067 & 0.0012 & 0.0015 & 0.0052 & 0.0434 & 0.1011 & 0.0013 \\
0.0784 & 0.0181 & 0.0036 & 0.0003 & 0.0002 & 0.0004 & 0.0077 & 0.0013 & 0.0081
\end{bmatrix}.
$$

References

1. Golan, A.; Judge, G.G.; Miller, D. Maximum Entropy Econometrics: Robust Estimation with Limited Data; Wiley: New York, NY, USA, 1996.
2. Golan, A. Information and entropy econometrics—Volume overview and synthesis. J. Econ. 2007, 138, 379–387.
3. Racine, J.S.; Maasoumi, E. A versatile and robust metric entropy test of time-reversibility, and other hypotheses. J. Econ. 2007, 138, 547–567.
4. Ruiz, M.D.; Guillamon, A.; Gabaldon, A. A new approach to measure volatility in energy markets. Entropy 2012, 14, 74–91.
5. Lebiedz, D. Entropy-related extremum principles for model reduction of dissipative dynamical systems. Entropy 2010, 12, 706–719.
6. Gupta, M.; Srivastava, S. Parametric Bayesian estimation of differential entropy and relative entropy. Entropy 2010, 12, 818–843.
7. Popkov, Y.S. New class of multiplicative algorithms for solving of entropy-linear programs. Eur. J. Oper. Res. 2006, 174, 1368–1379.
8. Powell, G.; Percival, I. A spectral entropy method for distinguishing regular and irregular motion of Hamiltonian systems. J. Phys. A Math. Gen. 1979, 12.
9. Popkov, Y.; Popkov, A. New methods of entropy-robust estimation for randomized models under limited data. Entropy 2014, 16, 675–698.
10. Chen, B.; Yan, Z.L.; Chen, W. Defect detection for wheel-bearings with time-spectral kurtosis and entropy. Entropy 2014, 16, 607–626.
11. Yan, R.Q.; Gao, R.X. Approximate entropy as a diagnostic tool for machine health monitoring. Mech. Syst. Signal Process. 2007, 21, 824–839.
12. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; SIAM: Philadelphia, PA, USA, 1994.
13. Chen, B.S.; Chang, Y.T. Fuzzy state-space modeling and robust observer-based control design for nonlinear partial differential systems. IEEE Trans. Fuzzy Syst. 2009, 17, 1025–1043.
14. Chen, W.H.; Chen, B.S. Robust stabilization design for stochastic partial differential systems under spatio-temporal disturbances and sensor measurement noises. IEEE Trans. Circuits Syst. I Regul. Pap. 2013, 60, 1013–1026.
15. Chow, P.L. Stochastic Partial Differential Equations; Chapman & Hall/CRC: Boca Raton, FL, USA, 2007.
16. Pao, C.V. Nonlinear Parabolic and Elliptic Equations; Plenum Press: New York, NY, USA, 1992.
17. Chen, B.-S.; Chen, W.-H.; Zhang, W. Robust filter for nonlinear stochastic partial differential systems in sensor signal processing: Fuzzy approach. IEEE Trans. Fuzzy Syst. 2012, 20, 957–970.
18. Lucia, U. Thermodynamic paths and stochastic order in open systems. Phys. A Stat. Mech. Appl. 2013, 392, 3912–3919.
19. Lucia, U.; Ponzetto, A.; Deisboeck, T.S. A thermodynamic approach to the ‘mitosis/apoptosis’ ratio in cancer. Phys. A Stat. Mech. Appl. 2015, 436, 246–255.
20. Lucia, U. Irreversible entropy variation and the problem of the trend to equilibrium. Phys. A Stat. Mech. Appl. 2007, 376, 289–292.
21. Lucia, U. Maximum entropy generation and κ-exponential model. Phys. A Stat. Mech. Appl. 2010, 389, 4558–4563.
22. Lucia, U. Irreversibility, entropy and incomplete information. Phys. A Stat. Mech. Appl. 2009, 388, 4025–4033.
23. Lucia, U. The Gouy-Stodola theorem in bioenergetic analysis of living systems (irreversibility in bioenergetics of living systems). Energies 2014, 7, 5717–5739.
24. Chen, B.-S.; Wong, S.-W.; Li, C.-W. On the calculation of system entropy in nonlinear stochastic biological networks. Entropy 2015, 17, 6801–6833.
25. Chang, W.; Wang, W.J. H∞ fuzzy control synthesis for a large-scale system with a reduced number of LMIs. IEEE Trans. Fuzzy Syst. 2015, 23, 1197–1210.
26. Lin, W.-W.; Wang, W.-J.; Yang, S.-H. A novel stabilization criterion for large-scale T-S fuzzy systems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2007, 37, 1074–1079.
27. Hsiao, F.-H.; Hwang, J.-D. Stability analysis of fuzzy large-scale systems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2001, 32, 122–126.
28. Wang, W.-J.; Luoh, L. Stability and stabilization of fuzzy large-scale systems. IEEE Trans. Fuzzy Syst. 2004, 12, 309–315.
29. Chen, C.-W.; Chiang, W.; Hsiao, F. Stability analysis of T–S fuzzy models for nonlinear multiple time-delay interconnected systems. Math. Comput. Simul. 2004, 66, 523–537.
30. Saat, S.; Nguang, S.K. Nonlinear H∞ output feedback control with integrator for polynomial discrete-time systems. Int. J. Robust Nonlinear Control 2013, 25, 1051–1065.
31. Saat, S.; Nguang, S.K.; Darsono, A.; Azman, N. Nonlinear H∞ feedback control with integrator for polynomial discrete-time systems. J. Frankl. Inst. 2014, 351, 4023–4038.
32. Chae, S.; Nguang, S.K. SOS based robust fuzzy dynamic output feedback control of nonlinear networked control systems. IEEE Trans. Cybern. 2014, 44, 1204–1213.
33. Johansson, R. System Modeling & Identification; Prentice Hall: Upper Saddle River, NJ, USA, 1993.
34. Zhang, W.H.; Chen, B.S. State feedback H∞ control for a class of nonlinear stochastic systems. SIAM J. Control Optim. 2006, 44, 1973–1991.
35. Zhang, W.H.; Chen, B.S.; Tseng, C.S. Robust H∞ filtering for nonlinear stochastic systems. IEEE Trans. Signal Process. 2005, 53, 589–598.
36. Laub, A.J. Matrix Analysis for Scientists and Engineers; SIAM: Philadelphia, PA, USA, 2005.
37. Chen, B.S.; Chen, W.H.; Wu, H.L. Robust H2/H∞ global linearization filter design for nonlinear stochastic systems. IEEE Trans. Circuits Syst. I Regul. Pap. 2009, 56, 1441–1454.
38. Incropera, F.P.; DeWitt, D.P. Introduction to Heat Transfer, 3rd ed.; Wiley: New York, NY, USA, 1996.
39. Nemcsics, Á.; Nagy, S.; Mojze, I.; Turmezei, P. Fractal and Structural Entropy Calculations on the Epitaxially Grown Fulleren Structures with the Help of Image Processing. In Proceedings of the 7th International Symposium on Intelligent Systems and Informatics (SISY 2009), Subotica, Serbia, 25–26 September 2009; pp. 65–67.
40. Orguner, U. Entropy Calculation in Particle Filters. In Proceedings of the IEEE 17th Signal Processing and Communications Applications Conference (SIU 2009), Antalya, Turkey, 9–11 April 2009; pp. 628–631.
41. Voronych, A.; Pastukh, T. Methods of Digital Signal Processing Based on Calculation of Entropy Technologies. In Proceedings of the 13th International Conference on the Experience of Designing and Application of CAD Systems in Microelectronics (CADSM 2015), Lviv, Ukraine, 24–27 February 2015; pp. 379–381.
42. Chen, B.-S.; Wu, C.-F. Robust scheduling filter design for a class of nonlinear stochastic Poisson signal systems. IEEE Trans. Signal Process. 2015, 63, 6245–6257.
Figure 1. Finite difference grids on the spatio-domain U.
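To make the role of the grid in Figure 1 concrete, the following MATLAB sketch builds the standard second-order finite-difference approximation of the diffusion operator on a uniform one-dimensional grid; the grid size, spacing, and Dirichlet boundary treatment are illustrative assumptions (Example 2 uses nine grid points, but the parameter values below are not taken from the paper).

```matlab
% Sketch: finite-difference surrogate of the diffusion term kappa*grad^2(y).
% On an N-point uniform grid with spacing dx and Dirichlet boundaries, the
% Laplacian becomes the tridiagonal second-difference matrix L2, and the
% SPDS reduces to the finite-dimensional form  dy/dt = (kappa*L2 + A)*y + B*v.
N = 9;  dx = 0.1;  kappa = 1;                      % placeholder parameters
e  = ones(N,1);
L2 = spdiags([e, -2*e, e], -1:1, N, N) / dx^2;     % tridiag(1,-2,1)/dx^2
full(L2(1:3,1:3))                                  % inspect the stencil
```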
Figure 2. The temperature distribution y(x,t) of the heat transfer system given in Equation (71) at t = 1, 10, 30, and 50 s. Due to the diffusion term κ∇²y(x,t), the temperature gradually becomes uniformly distributed, increasing the system entropy.
Figure 3. The temperature distribution y(x,t) of the heat transfer system in Equation (72) at t = 1, 10, 30, and 50 s. The temperature distribution of the stochastic heat transfer system in Equation (72) exhibits more random fluctuation, and hence more system entropy, than that of the heat transfer system in Equation (71). As time goes on, the temperature again gradually becomes uniformly distributed, increasing the system entropy. Overall, the temperature in Figure 3 is more random than in Figure 2, i.e., it has more system randomness and entropy.
Figure 4. (a) Spatio-temporal profile of the real biochemical system in Equation (73); (b) spatio-temporal profile of the approximated system in Equation (60) based on the finite difference scheme and the global linearization technique; (c) the error between the real biochemical system in Equation (73) and the approximated system in Equation (60). The approximated system based on the finite difference scheme and the global linearization method approximates the biochemical enzyme system quite well.
Figure 5. (a) Spatio-temporal profile of the real biochemical system in Equation (74); (b) spatio-temporal profile of the approximated system in Equation (66) based on the finite difference scheme and the global linearization technique; (c) the error between the real biochemical system in Equation (74) and the approximated system in Equation (66). The approximated system in Equation (66) approximates the real system in Equation (74) quite well.
