Article

Estimating Failure Probability with Neural Operator Hybrid Approach

1 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
2 School of Statistics and Mathematics, Shanghai Lixin University of Accounting and Finance, Shanghai 201209, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(12), 2762; https://doi.org/10.3390/math11122762
Submission received: 21 May 2023 / Revised: 15 June 2023 / Accepted: 16 June 2023 / Published: 18 June 2023
(This article belongs to the Special Issue Advances in Numerical Linear Algebra and Its Applications)

Abstract

Evaluating failure probability for complex engineering systems is a computationally intensive task. While the Monte Carlo method is easy to implement, it converges slowly and, hence, requires numerous repeated simulations of a complex system to generate sufficient samples. To improve the efficiency, methods based on surrogate models are proposed to approximate the limit state function. In this work, we reframe the approximation of the limit state function as an operator learning problem and utilize the DeepONet framework with a hybrid approach to estimate the failure probability. The numerical results show that our proposed method outperforms the prior neural hybrid method.

1. Introduction

In practice, evaluating failure probability for systems that inherently contain uncertainty is a fundamental problem encountered in various fields, such as structural safety, risk management, reliability-based optimization, etc. Uncertainties in such systems are abstracted in terms of the failure mode, and failure probability estimation is essentially a problem of evaluating multivariate integrals in domains defined by certain failure modes. While the mathematical formulation of the problem is well defined, evaluating such integrals remains a challenging task in practice.
The most straightforward approach to evaluating the failure probability is to use the Monte Carlo sampling (MCS) method [1,2]. However, due to its slow convergence, MCS requires numerous samples, resulting in a heavy computational burden. This computational burden becomes even more pronounced when complex stochastic PDEs represent the failure modes, since MCS necessitates repeatedly solving the model to estimate the failure probability.
To address this issue, various approaches have been developed, including the first-order reliability method (FORM) [3], the second-order reliability method (SORM) [4], and the response surface method (RSM) [5,6]. These methods replace the limit state function with a surrogate model that is easy to evaluate, thereby greatly reducing the simulation time. Following this idea, a hybrid method was proposed in [7,8] that estimates the probability based on the surrogate model while re-evaluating samples in a given suspicious region. The design of the hybrid method significantly reduces the time complexity while ensuring accuracy.
There are various methodologies for constructing surrogate models, such as the stochastic Galerkin method [9,10], the reduced basis method [11,12], and deep learning. Deep learning has developed rapidly in recent decades, particularly in scientific and engineering applications. Physics-informed neural networks (PINNs), which build upon the well-known universal approximation capability of neural networks (NNs) for continuous functions [13,14], were introduced in [15] and have demonstrated their efficiency in numerous studies [16,17,18]. By utilizing established deep learning and machine learning techniques, NN models can be employed as surrogate models to approximate the limit state function, outperforming traditional surrogate models in certain problems, such as those of high-dimensional systems [19,20,21].
As a significant area within the domain of deep learning, operator learning has emerged in recent years. The underlying principle of operator learning resides in the observation that nonlinear operators can be effectively approximated by employing single-layer neural networks [22,23] (Theorem 1). Operator learning aims to map infinite-dimensional functions to infinite-dimensional functions. Since it is more expressive and can break the curse of dimensionality in input space [24], operator learning has gained much attention in recent years [24,25,26]. Among the operator learning techniques, the DeepONet introduced in [24,26] has been demonstrated to be effective in numerous applications, including [27,28,29].
In this work, we present a novel approach to failure probability estimation by reframing the approximation problem of the limit state function as an operator learning problem and subsequently adapting the DeepONet framework to address it. The operator learning formulation provides a more effective and generalized approach to constructing a surrogate model for the limit state function, resulting in enhanced precision and reduced simulation numbers. To further ensure the precision, we employed a hybrid method [7] for estimating the failure probability. Our proposed neural operator hybrid (NOH) approach significantly reduces the time complexity while maintaining high accuracy compared to earlier neural hybrid and Monte Carlo simulation approaches. We posit that the efficiency of our approach in estimating failure probability demonstrates the potential of operator learning in various tasks.
This paper is structured as follows: In Section 2, we present the problem setting and introduce a hybrid method for evaluating the failure probability. The neural operator learning and proposed algorithm are then fully described in Section 3. To demonstrate the effectiveness of our approach, we describe numerical experiments in Section 4 that cover a variety of scenarios, including ODEs, PDEs, and multivariate models. Finally, we offer concluding remarks and observations in Section 5.

2. Preliminaries

This section will provide an overview of the mathematical framework for failure probability and introduce a hybrid method for solving this problem.

2.1. Problem Setting

Let $Z = (Z_1, Z_2, \ldots, Z_{n_z})$ be an $n_z$-dimensional random vector with the distribution function $F_Z(z) = \mathrm{Prob}(Z \le z)$. The image of $Z$, i.e., the set of all possible values that $Z$ can take, is denoted by $\Omega$. It is our interest to evaluate the failure probability $P_f$ defined by Equation (1):

$P_f = \mathrm{Prob}(Z \in \Omega_f) = \int_{\Omega_f} \mathrm{d}F_Z(z) = \int_{\Omega} \chi_{\Omega_f}(z)\, \mathrm{d}F_Z(z) = \mathbb{E}[\chi_{\Omega_f}(Z)],$  (1)

where the characteristic function $\chi_{\Omega_f}(z)$ is defined as:

$\chi_{\Omega_f}(z) = \begin{cases} 1 & \text{if } z \in \Omega_f, \\ 0 & \text{if } z \notin \Omega_f, \end{cases}$  (2)

and the failure domain $\Omega_f$, where failure occurs, is defined as:

$\Omega_f = \{ Z : g(Z) < 0 \}.$  (3)

Here, $g(Z)$ is a scalar limit state function (also known as a performance function) that characterizes the failure domain. It should be emphasized that, in many real-world systems, $g(Z)$ does not have an analytical expression and is instead characterized by a complex system that requires expensive simulations to evaluate. Consequently, the evaluation of $g(Z)$ can be computationally expensive, leading to significant time complexity.

2.2. Hybrid Method

The most straightforward approach to estimating failure probability is the Monte Carlo sampling (MCS) method [1,2], which is given by:

$P_f^{mc} = \frac{1}{M} \sum_{i=1}^{M} \chi_{\{g(z)<0\}}\big(z^{(i)}\big),$  (4)

where $\{z^{(i)}\}_{i=1}^{M}$ is a set of sample points for the random vector $Z$. The characteristic function $\chi_{\{g(z)<0\}}(z^{(i)})$ takes a value of 1 if the limit state function $g(Z)$ evaluated at $z^{(i)}$ is less than zero and 0 otherwise. The failure probability $P_f^{mc}$ is estimated as the average of the characteristic function over the $M$ sample points.
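For concreteness, the estimator in Equation (4) amounts to a few lines of NumPy. The sketch below is illustrative only: the limit state function g and the sampler for Z are placeholders to be supplied by the user, and the commented toy example simply mirrors the ODE problem of Section 4.1.

```python
import numpy as np

def mcs_failure_probability(g, sample_Z, M=10**6):
    """Plain Monte Carlo estimate: P_f ~ (1/M) * number of samples with g(z^(i)) < 0."""
    z = sample_Z(M)                 # samples z^(1), ..., z^(M), shape (M, n_z)
    return np.mean(g(z) < 0.0)      # fraction of samples falling in the failure domain

# Toy usage (cf. Section 4.1): Z ~ N(2, 1), g(Z) = exp(Z) - 0.5
# p_f = mcs_failure_probability(lambda z: np.exp(z[:, 0]) - 0.5,
#                               lambda M: np.random.normal(2.0, 1.0, size=(M, 1)))
```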
However, evaluating the limit state function $g(Z)$ at numerous sample points can be a computationally intensive task, especially when dealing with complex stochastic systems, resulting in significant simulation time complexity. To address this issue, a surrogate model can be used to approximate the limit state function $g(Z)$ and avoid the need for direct evaluation at each sample point. Specifically, a surrogate model of $g(Z)$ is denoted by $\hat{g}(Z)$, which can be rapidly evaluated. The failure probability can then be estimated as:

$\hat{P}_f^{mc} = \frac{1}{M} \sum_{i=1}^{M} \chi_{\{\hat{g}(z)<0\}}\big(z^{(i)}\big).$  (5)

While surrogate models can significantly reduce computational costs in Monte Carlo methods, relying solely on them for estimating the failure probability may result in poor precision or even failure. To address this issue, a hybrid approach that combines the surrogate model $\hat{g}$ and the limit state function $g$ was proposed in [7,8]. In the following, we give a brief review of the hybrid method.
Suppose that $(-\gamma, \gamma)$ is a suspicious region, where $\gamma$ is a non-negative real number. In this case, we can approximate the failure domain $\Omega_f$ with $\tilde{\Omega}_f$ as follows:

$\tilde{\Omega}_f = \{\hat{g}(Z) < -\gamma\} \cup \big( \{|\hat{g}(Z)| \le \gamma\} \cap \{g(Z) < 0\} \big),$  (6)

where $g$ is the limit state function, and $\hat{g}$ represents the surrogate model of $g$. Enhanced with the hybrid method, the failure probability can be estimated by MCS:

$P_f^h = \frac{1}{M} \sum_{i=1}^{M} \chi_{\tilde{\Omega}_f}\big(z^{(i)}\big) = \frac{1}{M} \sum_{i=1}^{M} \Big[ \chi_{\{\hat{g} < -\gamma\}}\big(z^{(i)}\big) + \chi_{\{|\hat{g}| \le \gamma\}}\big(z^{(i)}\big) \cdot \chi_{\{g < 0\}}\big(z^{(i)}\big) \Big].$  (7)
The hybrid method can be considered as an approach for estimating $P_f$ by using a surrogate $\hat{g}$, followed by a re-evaluation of the samples within the suspicious domain. While increasing the value of $\gamma$ leads to higher time complexity, it also results in more accurate estimation. In Ref. [7], it was proved that for any surrogate $\hat{g}(Z)$ and for all $\epsilon > 0$, there exists a critical value $\gamma_N > 0$ such that for all $\gamma > \gamma_N$, the difference between the estimated $P_f^h$ and the true $P_f$ is less than $\epsilon$, i.e.,

$\big| P_f - P_f^h \big| < \epsilon.$  (8)

To be more precise,

$\gamma_N = \frac{1}{\epsilon^{1/p}} \big\| g(Z) - \hat{g}(Z) \big\|_{L^p_{\Omega}},$  (9)

where the approximation is measured in the $L^p$-norm with $p \ge 1$:

$\big\| g(Z) - \hat{g}(Z) \big\|_{L^p_{\Omega}} = \left( \int_{\Omega} \big| g(z) - \hat{g}(z) \big|^p \, \mathrm{d}F_Z(z) \right)^{1/p}.$  (10)
Selecting an appropriate value of $\gamma$ that balances accuracy and computational efficiency can be a challenging task. To address this challenge, an iterative algorithm, as demonstrated in Algorithm 1, is commonly employed in practice instead of directly selecting $\gamma$. In Algorithm 1, the surrogate $\hat{g}$ samples are gradually replaced with $g$ samples in the iteration procedure until either the stopping criterion is reached or the iteration step reaches $M/\delta M$, which is equivalent to expanding the suspicious region at each iteration. When $k$ reaches $M/\delta M$, the iterative hybrid algorithm degenerates to the Monte Carlo method (4), indicating that convergence is achieved, i.e., $P_f^{(k)} \to P_f^{mc}$ as $k \to M/\delta M$. It is obvious that the time complexity of the iterative hybrid algorithm is heavily influenced by the accuracy of the surrogate model used in Algorithm 1.
Algorithm 1 Iterative Hybrid Method [7].
Input: surrogate model $\hat{g}$; samples $S = \{z^{(i)}\}_{i=1}^{M}$ from the random variable $Z$; tolerance $\epsilon$; sample size in each iteration $\delta M$.
1: Initialization: $k = 0$.
2: Compute $P_f^{(0)} = \frac{1}{M} \sum_{i=1}^{M} \chi_{\{\hat{g}(z)<0\}}(z^{(i)})$.
3: Sort $\{\hat{g}(z^{(i)})\}_{i=1}^{M}$ in ascending order; sort the corresponding samples in $S$ accordingly.
4: for $k$ from 1 to $M/\delta M$ do
5:     $\delta S_k = \{z^{(j)}\}_{j=(k-1)\delta M + 1}^{k\delta M}$.
6:     $\delta P = \frac{1}{M} \sum_{z^{(j)} \in \delta S_k} \big[ -\chi_{\{\hat{g}<0\}}(z^{(j)}) + \chi_{\{g<0\}}(z^{(j)}) \big]$.
7:     $P_f^{(k)} = P_f^{(k-1)} + \delta P$.
8:     if $|\delta P| \le \epsilon$ for several consecutive iterations then
9:         break
10:    end if
11: end for
Output: $P_f^{(k)}$
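The following Python sketch mirrors Algorithm 1. It is a sketch under stated assumptions: g and g_hat are vectorized callables, samples is a NumPy array of shape (M, n_z), and the informal "for several times" stopping rule of line 8 is realized with a simple patience counter, which is our own illustrative choice rather than something prescribed by the original algorithm.

```python
import numpy as np

def iterative_hybrid(g, g_hat, samples, eps=0.0, delta_M=25, patience=5):
    """Sketch of Algorithm 1: start from the surrogate-only estimate and re-evaluate
    the true limit state function g batch by batch, beginning with the samples whose
    surrogate values are smallest (the most "suspicious" ones)."""
    M = len(samples)
    g_hat_vals = g_hat(samples)
    p_f = np.mean(g_hat_vals < 0.0)            # P_f^(0): surrogate-only estimate
    order = np.argsort(g_hat_vals)             # ascending surrogate values
    samples, g_hat_vals = samples[order], g_hat_vals[order]

    n_calls, small_steps = 0, 0
    for k in range(1, M // delta_M + 1):
        idx = slice((k - 1) * delta_M, k * delta_M)
        g_vals = g(samples[idx])               # expensive re-evaluation (PF calls)
        n_calls += delta_M
        # replace the surrogate indicator by the true indicator on this batch
        delta_p = (np.sum(g_vals < 0.0) - np.sum(g_hat_vals[idx] < 0.0)) / M
        p_f += delta_p
        small_steps = small_steps + 1 if abs(delta_p) <= eps else 0
        if small_steps >= patience:            # "|dP| <= eps for several times"
            break
    return p_f, n_calls
```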

3. Neural Operator Hybrid Algorithm

In Section 2, we described the failure probability problem and the hybrid algorithm for solving it. As mentioned, the accuracy of the surrogate model greatly affects the performance of the iterative hybrid algorithm. In this section, we introduce neural operator learning and present the neural operator hybrid (NOH) algorithm, which reframes the approximation problem of the limit state function as an operator learning problem. Unlike prior studies that used neural networks as surrogate models for mappings (as in [19,20,21,30,31]), our algorithm constructs a surrogate model by using operator learning techniques. The benefits of our method are twofold: first, it improves the generalization of the surrogate model; second, it improves the precision of the surrogate model by incorporating more information, which results in a more effective and generalized approach to estimating failure probability.

3.1. Neural Operator Learning

Neural operator learning aims to accurately represent linear and nonlinear operators that map input functions into output functions. More specifically, let $U$ be a vector space of functions on a set $K_1$, and let $V$ be a vector space of functions on a set $K_2$; $G$ is an operator map from $U$ to $V$, i.e.,

$G : u \mapsto G(u) \in V, \quad \text{for } u \in U,$  (11)

where $u$ is a function defined on the domain $K_1$, i.e.,

$u : x \mapsto u(x) \in \mathbb{R}, \quad \text{for } x \in K_1,$  (12)

and $G(u)$ is a function defined on the domain $K_2$, i.e.,

$G(u) : y \mapsto G(u)(y) \in \mathbb{R}, \quad \text{for } y \in K_2.$  (13)
In the context of this paper, U is referred to as the input function space, and V is referred to as the output function space. It is of interest to design neural networks that can approximate the mapping of the operator G from the input function space to the output function space.
In this work, we employ the DeepONet framework [26], a prominent operator learning approach based on the following theorem, to construct the surrogate of the operator $G$.
Theorem 1
(Universal Approximation Theorem for the Operator [23]). Suppose that $\sigma$ is a continuous non-polynomial function, $X$ is a Banach space, $K_1 \subset X$ and $K_2 \subset \mathbb{R}^d$ are two compact sets in $X$ and $\mathbb{R}^d$, respectively, $U$ is a compact set in $C(K_1)$, and $G$ is a nonlinear continuous operator that maps $U$ into $C(K_2)$. Then, for any $\epsilon > 0$, there are positive integers $n, p, m$ and constants $c_i^k, \xi_{ij}^k, \theta_i^k, \zeta_k \in \mathbb{R}$, $w_k \in \mathbb{R}^d$, $x_j \in K_1$, $i = 1, \ldots, n$, $k = 1, \ldots, p$, $j = 1, \ldots, m$, such that

$\Bigg| G(u)(y) - \sum_{k=1}^{p} \underbrace{\sum_{i=1}^{n} c_i^k \, \sigma\Big( \sum_{j=1}^{m} \xi_{ij}^k \, u(x_j) + \theta_i^k \Big)}_{\text{branch}} \, \underbrace{\sigma\big( w_k \cdot y + \zeta_k \big)}_{\text{trunk}} \Bigg| < \epsilon$  (14)

holds for all $u \in U$ and $y \in K_2$.
In DeepONet, the operator $G$ is approximated by taking the inner product of two components, which can be expressed as follows:

$G(u)(y) \approx G_{\Theta}(u)(y) := \sum_{k=1}^{p} \underbrace{b_k(u)}_{\text{branch}} \, \underbrace{t_k(y)}_{\text{trunk}},$  (15)

where $b_k(u)$ is the output of the branch network for a given input function $u$ in $U$, and $t_k(y)$ is the output of the trunk network for a given $y$ in $K_2$. Figure 1 illustrates this architecture.
In practical applications, the input function $u$ is vectorized as $[u(x_1), \ldots, u(x_m)]^T$, where the sample points $[x_1, x_2, \ldots, x_m]^T$ are referred to as sensors. The input of the trunk network takes $P$ specific values $y_1^{(i)}, \ldots, y_P^{(i)}$ in $K_2$ for each $G(u^{(i)})$. The neural network can be trained by using the training data:

$T = \big\{ \big( u^{(1)}, G(u^{(1)}) \big), \big( u^{(2)}, G(u^{(2)}) \big), \ldots, \big( u^{(N)}, G(u^{(N)}) \big) \big\},$  (16)

and by minimizing the following loss function:

$L(\Theta) = \frac{1}{NP} \sum_{l=1}^{P} \sum_{i=1}^{N} \Big| G_{\Theta}(u^{(i)})(y_l^{(i)}) - G(u^{(i)})(y_l^{(i)}) \Big|^2,$  (17)

where the neural network $G_{\Theta}$ with parameters $\Theta$ approximates the operator $G$.
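To make the architecture concrete, the following PyTorch sketch implements an unstacked DeepONet of the form in Equation (15), with the branch network acting on the vectorized input function and the trunk network acting on the query point y. The Tanh activation, the layer widths, and the number of basis functions p are illustrative assumptions, not prescriptions from this paper.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet: G_Theta(u)(y) = sum_k b_k(u) * t_k(y)."""

    def __init__(self, m, dim_y, width=40, p=40):
        super().__init__()
        # branch network: takes the vectorized input function [u(x_1), ..., u(x_m)]
        self.branch = nn.Sequential(nn.Linear(m, width), nn.Tanh(), nn.Linear(width, p))
        # trunk network: takes the query location y in K_2
        self.trunk = nn.Sequential(nn.Linear(dim_y, width), nn.Tanh(), nn.Linear(width, p))

    def forward(self, u, y):
        b = self.branch(u)              # (batch, p) coefficients b_k(u)
        t = self.trunk(y)               # (batch, p) basis values t_k(y)
        return (b * t).sum(dim=-1)      # inner product over k

# Training sketch: minimize the mean-squared loss of Equation (17) with Adam.
# model = DeepONet(m=100, dim_y=2)
# loss = ((model(u_batch, y_batch) - g_batch) ** 2).mean()
```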

3.2. Neural Operator Hybrid Algorithm

Neural operator learning involves functions as both inputs and outputs, requiring us to reframe our problem accordingly. Let $U$ be the collection of all functions defined on $[0, 1]$ such that

$u(x) = \bar{Z} - k\bar{\sigma} + 2k\bar{\sigma}x, \quad \text{for } x \in [0, 1],$  (18)

where $k$ is a selective parameter, and $\bar{Z}$ and $\bar{\sigma}$ are defined as:

$\bar{Z} = \frac{Z_1 + \cdots + Z_{n_z}}{n_z}, \qquad \bar{\sigma} = \frac{\sigma_1 + \cdots + \sigma_{n_z}}{n_z}.$  (19)

Here, $\sigma_1, \ldots, \sigma_{n_z}$ are the variances of the random variables $Z_1, \ldots, Z_{n_z}$, respectively. We can then define an operator $G$ from $U$ to $V$ as follows:

$G : u \mapsto G(u) \in V, \quad \text{for } u \in U,$  (20)

where $V = \mathrm{span}\{g(x)\}$, and $G(u)$ is defined by

$G(u) : y \mapsto g(y) \in \mathbb{R}, \quad \text{for } y \in \mathbb{R}^{n_z}.$  (21)

Here, $g$ is the limit state function discussed in Section 2.1. Then, designing a surrogate model for $G$ is a standard operator learning problem.
The inspiration for this reframing is the establishment of a relationship between the input function $u(x)$ and the random variable $Z$. It is important to note that the prior distribution of $u(x)$ is entirely known, which enables us to generate training data by using the following process: Firstly, we randomly sample $z^{(i)}$ according to the distribution of $Z$, where $i = 1, \ldots, N$. Next, we define $u^{(i)}(x)$ by using the following equation:

$u^{(i)}(x) = \bar{z}^{(i)} - k\bar{\sigma} + 2k\bar{\sigma}x, \quad x \in [0, 1],$  (22)

where

$\bar{z}^{(i)} = \frac{z_1^{(i)} + \cdots + z_{n_z}^{(i)}}{n_z}.$  (23)

In practical implementation, we vectorize the input function $u^{(i)}(x)$ on a uniform grid of the interval $[0, 1]$. Specifically, the vectorized input function is given by $[u^{(i)}(x_1), \ldots, u^{(i)}(x_m)]$, where $x_j$ denotes the $j$-th sensor and is defined by

$x_j = \frac{j-1}{m-1}, \quad j = 1, \ldots, m.$  (24)

Suppose that there are $P$ observations $y_l^{(i)} \in \mathbb{R}^{n_z}$, $l = 1, \ldots, P$, for $G(u^{(i)})$; then, by Equation (21), we have $G(u^{(i)})(y_l^{(i)}) = g(y_l^{(i)})$. Once the dataset is generated, the model is trained by minimizing the following loss function:

$L(\Theta) = \frac{1}{NP} \sum_{l=1}^{P} \sum_{i=1}^{N} \Big| G_{\Theta}(u^{(i)})(y_l^{(i)}) - g(y_l^{(i)}) \Big|^2,$  (25)

where $G_{\Theta}$ is a neural network with parameters $\Theta$ that approximates the operator $G$.
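A sketch of this data-generation step is given below. It assumes, for illustration, that the trunk inputs $y_l^{(i)}$ are drawn from the distribution of $Z$ (the text above does not tie them to $u^{(i)}$); sample_Z, g, and sigma_bar are placeholders to be supplied for a concrete problem.

```python
import numpy as np

def make_training_data(sample_Z, g, sigma_bar, N=500, P=500, m=100, k=4):
    """Generate (u^(i), y_l^(i), g(y_l^(i))) triples following Equations (22)-(24)."""
    x = np.linspace(0.0, 1.0, m)                       # sensors x_j = (j - 1)/(m - 1)
    z = sample_Z(N)                                    # samples z^(i), shape (N, n_z)
    z_bar = z.mean(axis=1, keepdims=True)              # component averages z_bar^(i)
    # u^(i)(x_j) = z_bar^(i) - k*sigma_bar + 2*k*sigma_bar*x_j, Equation (22)
    u = z_bar - k * sigma_bar + 2.0 * k * sigma_bar * x[None, :]       # (N, m)
    y = sample_Z(N * P).reshape(N, P, -1)              # P query points per input function
    targets = g(y.reshape(N * P, -1)).reshape(N, P)    # G(u^(i))(y_l^(i)) = g(y_l^(i))
    return u, y, targets
```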
After constructing the surrogate for $G$ by using the neural network $G_{\Theta}$, we can integrate it into the hybrid algorithm to estimate the failure probability. This whole process is called the neural operator hybrid (NOH) method, and it is shown in Figure 2.
As discussed in Section 2.2, the convergence of Algorithm 1 depends on the norm measurement of the difference between the surrogate and the limit state function. Therefore, it is crucial to have an accurate approximation g ^ ( Z ) for reliable estimation. The surrogate model constructed for the reformulated operator learning problem using DeepONet may achieve higher accuracy than that of the surrogate model constructed by using neural networks that do not incorporate information from random variables, as the former model utilizes additional information from random distribution functions. Moreover, DeepONet exhibits less generalization error than that of simple neural networks [26].

4. Numerical Experiments

In this section, we present three numerical examples to demonstrate the efficiency and effectiveness of the proposed neural operator hybrid (NOH) method. Furthermore, we compare the NOH method with the neural hybrid (NH) method. For the purpose of clarity in presentation, we refer to the surrogate model constructed by using fully connected neural networks for $g$ in the NH method as the neural surrogate and the surrogate constructed by using DeepONet for $g$ in the NOH method as the neural operator surrogate. Both surrogate models are designed to approximate the limit state function $g$, and their main difference lies in the structure of the neural networks utilized.
For the NOH method, we use the simplest unstacked DeepONet to construct the neural operator surrogate for the operator $G$, with the branch and trunk networks implemented as fully connected neural networks (FNNs); both networks have a depth of 2 and a width of 40. To facilitate a comparative analysis with the NH method, we built a neural surrogate for $g$ by using a simple FNN with a parameter size comparable to that of the NOH method. Specifically, in the NH method, the FNN utilized for the neural surrogate has a depth of 3, and its width is adjusted to achieve a number of parameters similar to that of the DeepONet. Both models were optimized by using the Adam optimizer [32] with a learning rate of 0.001 on identical datasets.
The code was run by using PyTorch [33] and MATLAB 2019b on a workstation with an Nvidia GTX 1080Ti graphics card and an Intel Core i5-7500 processor with 16 GB of RAM. It is noteworthy that the evaluation of time complexity is based on the number of performance function (PF) calls $N_{call}$, which refers to the number of system simulations that need to be executed, rather than the running time of the programs, as program running speeds may vary significantly across different programming languages and platforms. The PF calls consist of the evaluations of the hybrid algorithm in line 6 of Algorithm 1 and the simulations used to generate the training data in Equation (16). We do not evaluate the computational time required for the neural surrogate or the neural operator surrogate, as a model trained by using batch techniques can evaluate $10^5$ samples in less than a second.

4.1. Ordinary Differential Equation

In this test problem, we consider a random ordinary differential equation (ODE) proposed in [7]. The ODE is given by:

$\frac{\mathrm{d}s}{\mathrm{d}t} = Z s, \qquad s(0) = s_0,$  (26)

where $s_0 = 1$, and $Z \sim \mathcal{N}(\mu, \sigma^2)$ is a Gaussian random variable with a mean of $\mu = 2$ and a standard deviation of $\sigma = 1$. The limit state function is defined as $g(Z) = g(s(t, Z)) = s(t, Z) - s_d$, where $s_d = 0.5$ and $t = 1$. The exact failure probability $P_f = 0.003539$ is regarded as the reference solution, which can be computed by using the analytic solution $s(t, Z) = s_0 e^{Zt}$.
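As a quick sanity check, the reference value follows directly from the analytic solution: failure, $s(1, Z) < s_d$, is equivalent to $Z < \ln(s_d/s_0)$, so $P_f$ is a one-dimensional Gaussian tail probability. A short SciPy computation (a sketch, assuming the setup above) reproduces it:

```python
import numpy as np
from scipy.stats import norm

# Failure s(1, Z) = s0 * exp(Z) < s_d  <=>  Z < ln(s_d / s0), with Z ~ N(mu, sigma^2).
s0, s_d, mu, sigma = 1.0, 0.5, 2.0, 1.0
p_exact = norm.cdf((np.log(s_d / s0) - mu) / sigma)
print(p_exact)   # ~3.539e-3, matching the reference value P_f = 0.003539
```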
To demonstrate the efficiency and effectiveness of the proposed NOH method, we compare it with a Monte Carlo simulation (MCS) and the NH method. We used DeepONet to train the neural operator surrogate in the NOH method and set the parameter $k$ in Equation (22) to 4, the number of input functions for training $N$ to 500, $P$ to 500, and the number of sensors $m$ to 100. In the NH method, we used an FNN as the neural surrogate.
Both surrogates in the NH and NOH methods were trained with identical datasets, epochs, and optimizers. Additionally, in the MCS, $10^6$ samples were generated to estimate the failure probability.
Table 1 presents the performance of the MCS, the NOH method, and the NH method. As shown in the table, the NOH method matched the estimation precision of the MCS while requiring only about 0.23% of the PF calls $N_{call}$ needed by the MCS. The NH method failed to estimate the failure probability, as all of the outputs of the neural surrogate were greater than 0. In this special case, the hybrid iterative procedure always terminated too early, while the estimated failure probability remained at 0.
Figure 3 illustrates the convergence of the NOH method and the NH method. In order to compare the two methods, the iterative procedure in the hybrid algorithm was not terminated until the limit state function $g$ had been recomputed for at least $10^5$ samples. The figure shows that the estimate of the failure probability by the NH method remained at 0 until around 100 iterations, and it converged after approximately 7000 iterations. In contrast, the NOH method converged after only 70 iterations, demonstrating its superior efficiency compared to that of the NH method.
In Figure 4, we compare the performance of the neural surrogate and the neural operator surrogate in predicting the limit state function g. We observed that all of the outputs of the neural surrogate were greater than 0, which led to the failure of the NH method in estimating the failure probability. It is evident that the neural operator surrogate outperformed the neural surrogate in predicting the limit state function g.

4.2. Multivariate Benchmark

Next, we consider a high-dimensional multivariate benchmark problem (dimensionality: $n = 50$) in the field of structural safety [19,34]:

$g(Z) = \beta \sqrt{n} - \sum_{i=1}^{n} Z_i,$  (27)

where $\beta = 3.5$ and each random variable $Z_i \sim \mathcal{N}(0, 1)$, $i = 1, \ldots, n$; $g(Z)$ is the limit state function. In this test problem, the reference failure probability is $P_f^{mc} = 2.218 \times 10^{-4}$, which was obtained by using MCS with $5 \times 10^6$ samples.
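Because the sum of i.i.d. standard Gaussians is itself Gaussian, the failure probability of this benchmark reduces to a one-dimensional tail, $P_f = \Phi(-\beta) \approx 2.33 \times 10^{-4}$, which is consistent with the MCS reference up to sampling error. The sketch below (assuming the form of $g$ given above) checks this both analytically and with a crude Monte Carlo run; a smaller sample size than the reference computation is used to keep memory modest.

```python
import numpy as np
from scipy.stats import norm

beta, n = 3.5, 50
# Sum of Z_i ~ N(0, n), so g < 0  <=>  (sum Z_i)/sqrt(n) > beta, i.e. P_f = Phi(-beta).
print(norm.cdf(-beta))                          # ~2.326e-4

# Crude Monte Carlo check (10**6 samples here; the reference used 5e6).
Z = np.random.standard_normal((1_000_000, n))
g = beta * np.sqrt(n) - Z.sum(axis=1)
print((g < 0).mean())                           # close to 2.3e-4 within sampling error
```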
The proposed NOH method is compared with the NH method in terms of accuracy and efficiency. For the NOH method, we set the parameter k in Equation (22) to 4, the number of input functions N to 1000, P to 1000, and the number of sensors m to 100. In comparison, a naive neural surrogate employing an FNN with a similar number of parameters was also constructed and trained under conditions identical to those for the NH method.
The performance of the MCS, the NOH method, and the NH method is illustrated in Table 2. Both the NOH method and the NH method demonstrated a substantial reduction in the number of samples required, down to roughly 0.1% of the computational cost of the MCS. Notably, the NOH method outperformed the NH method by evaluating only 3% of the $N_{call}$ while achieving a relative error of 0.81%, compared with the NH method's relative error of 8.92%. This indicates that the NOH method achieved higher accuracy with significantly fewer samples, making it a more efficient and effective approach for the given task.
In Figure 5, the convergence of the NOH method is depicted and compared with that of the NH method. Figure 5a demonstrates that the NOH method achieved an estimation of P f h = 2.2 × 10 4 with a relative error of ϵ e = 0.81 % in less than six iterations. Figure 5b provides a comparison of convergent behaviors between the NH and NOH methods, clearly demonstrating that the NOH method converged in significantly fewer iterations. As a consequence, the NOH method required only 3 % of the total evaluations N c a l l . These findings strongly suggest that neural operator surrogates offer improved precision and ease of training when compared to FNNs. The reduced iteration times and lower number of evaluations highlight the superior efficiency and accuracy of neural operator surrogates in approximating complex functions or operators.
Figure 6 depicts the superior accuracy of the neural operator surrogates. It compares the neural surrogate and the neural operator surrogate approximations from 50 randomly selected samples. As illustrated in the figure, the predictions of the limit function g by the neural operator surrogate are more accurate than those of the neural surrogate, as the former closely approximated the ground truth.
The experimental results provide evidence of the precision exhibited by the neural operator surrogate. The combination of reduced iteration times and a lower number of evaluations further accentuates the efficiency and accuracy of neural operator surrogates in approximating complex functions or operators. These findings emphasize that the NOH method achieved higher levels of accuracy while utilizing significantly fewer samples.

4.3. Helmholtz Equation

We consider the Helmholtz equation on a disk with a square hole [19]. The equation is given by:

$-\Delta u - \kappa^2 u = 0,$  (28)

where the coefficient $\kappa$ is a Gaussian random variable with a mean of $\mu = 60$ and a variance of $\sigma = 1$, i.e., $\kappa \sim \mathcal{N}(60, 1)$. The system was set with a homogeneous term, and Dirichlet boundary conditions ($u = 0$) were applied on the edges of the square hole, while generalized Neumann conditions ($\xi \cdot r - i\kappa r = 0$, where $\xi$ is the radial distance from the object) were applied on the edge of the disk. The system was numerically solved by using the MATLAB PDE solver to obtain an accurate solution. A snapshot of the solution of the Helmholtz equation is shown in Figure 7. A point sensor was placed at $x_p = [0.7264; 0.4912]$, and the failure probability was defined as $\mathrm{Prob}(u(x_p, \kappa) > 1.00)$. The reference solution was $P_f^{mc} = 2.70 \times 10^{-4}$, which was obtained with a Monte Carlo simulation with $10^5$ samples.
With a setup similar to that in the previous experiment, we used DeepONet to train the neural operator surrogate in the NOH method and set the parameter k in Equation (22) to 4, the number of input functions N to 1000, P to 1000, and the number of sensors m to 100. We used a fully connected neural network (FNN) as the neural surrogate in the NH method and trained both models under identical conditions.
In Table 3, we present a performance analysis of the Monte Carlo simulation (MCS), the NOH method, and the NH method. Similarly to the previous experiments, the results indicate that the NOH method required fewer $N_{call}$ (about 11% of those of the NH method) while providing a more accurate estimation, with a relative error of 3.70% versus 11.11%.
The convergence of the NOH method is illustrated and compared with that of the NH method in Figure 8. Although both methods converged quickly, the NOH method converged in noticeably fewer iterations than the NH method. With only 100 evaluations, the NOH method could accurately estimate the failure probability as $P_f^h = 2.80 \times 10^{-4}$, with a relative error of 3.70%, whereas the NH method required 875 evaluations and attained a relative error of 11.11%. The faster convergence and lower relative error observed in the NOH method signify the precision and capabilities of the neural operator surrogate when compared to the neural surrogate.
These results highlight the potential of neural operator surrogates to significantly enhance computational efficiency and accuracy in a variety of applications. Consequently, the NOH method emerges as a more efficient and effective approach for estimating failure probability.

5. Conclusions

This paper introduced a neural operator hybrid method for the estimation of failure probability. Instead of approximating the limit state function directly, we reframed the problem as an operator learning task. This allowed us to construct a highly efficient and precise surrogate operator model that can accurately approximate the limit state function. By integrating the surrogate operator model into the hybrid algorithm, we created the neural operator hybrid method. The numerical results demonstrate that the proposed method provides an efficient strategy for estimating failure probability, particularly in systems governed by ODEs, multivariate functions, and the Helmholtz equation. Our proposed method exhibited superior performance to that of the basic MCS approach, particularly in terms of efficiency. Furthermore, it surpassed the previous neural hybrid method in both efficiency and accuracy. Consequently, it is applicable and beneficial for addressing general failure probability estimation problems. The obtained results not only demonstrate the efficacy of neural operator learning frameworks in the context of failure probability estimation, but also imply their promising potential in other areas, such as Bayesian inverse problems and partial differential equations with random inputs. In future work, techniques such as importance sampling (IS) [8,35] or adaptive learning [20,36] could further reduce the sample size required in the estimation process.

Author Contributions

Conceptualization, M.L. and G.W.; methodology, M.L.; software, M.L.; validation, M.L.; investigation, M.L., Y.F. and G.W.; data curation, M.L.; writing—original draft preparation, M.L., Y.F. and G.W.; writing—review and editing, M.L., Y.F. and G.W.; visualization, M.L.; supervision, M.L.; project administration, M.L.; funding acquisition, M.L., Y.F. and G.W. All authors have read and agreed to the published version of the manuscript.

Funding

M. Li and Y. Feng were supported by the National Natural Science Foundation of China (No. 12071291), the Science and Technology Commission of Shanghai Municipality (No. 20JC1414300), and the Natural Science Foundation of Shanghai (No. 20ZR1436200). G. Wang was supported by the Colleges and Universities Young Teachers’ Training and Funding Program of Shanghai (No. ZZLX21039).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bjerager, P. Probability integration by directional simulation. J. Eng. Mech. 1988, 114, 1285–1302.
2. Ditlevsen, O.; Bjerager, P. Methods of structural systems reliability. Struct. Saf. 1986, 3, 195–229.
3. Der Kiureghian, A.; Dakessian, T. Multiple design points in first and second-order reliability. Struct. Saf. 1998, 20, 37–49.
4. Hohenbichler, M.; Gollwitzer, S.; Kruse, W.; Rackwitz, R. New light on first- and second-order reliability methods. Struct. Saf. 1987, 4, 267–284.
5. Rajashekhar, M.R.; Ellingwood, B.R. A new look at the response surface approach for reliability analysis. Struct. Saf. 1993, 12, 205–220.
6. Khuri, A.I.; Mukhopadhyay, S. Response surface methodology. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 128–149.
7. Li, J.; Xiu, D. Evaluation of failure probability via surrogate models. J. Comput. Phys. 2010, 229, 8966–8980.
8. Li, J.; Li, J.; Xiu, D. An efficient surrogate-based method for computing rare failure probability. J. Comput. Phys. 2011, 230, 8683–8697.
9. Ghanem, R.G.; Spanos, P.D. Stochastic Finite Elements: A Spectral Approach; Courier Corporation: Chelmsford, MA, USA, 2003.
10. Xiu, D.; Karniadakis, G.E. The Wiener-Askey polynomial chaos for stochastic differential equations. SIAM J. Sci. Comput. 2002, 24, 619–644.
11. Boyaval, S.; Bris, C.L.; Lelièvre, T.; Maday, Y.; Nguyen, N.C.; Patera, A.T. Reduced basis techniques for stochastic problems. Arch. Comput. Methods Eng. 2010, 4, 435–454.
12. Quarteroni, A.; Manzoni, A.; Negri, F. Reduced Basis Methods for Partial Differential Equations: An Introduction; Springer: Berlin/Heidelberg, Germany, 2016.
13. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals Syst. 1989, 2, 303–314.
14. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
15. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
16. Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228.
17. Pang, G.; Lu, L.; Karniadakis, G.E. fPINNs: Fractional physics-informed neural networks. SIAM J. Sci. Comput. 2019, 41, A2603–A2626.
18. Zhang, D.; Guo, L.; Karniadakis, G.E. Learning in modal space: Solving time-dependent stochastic PDEs using physics-informed neural networks. SIAM J. Sci. Comput. 2020, 42, A639–A665.
19. Li, K.; Tang, K.; Li, J.; Wu, T.; Liao, Q. A hierarchical neural hybrid method for failure probability estimation. IEEE Access 2019, 7, 112087–112096.
20. Lieu, Q.X.; Nguyen, K.T.; Dang, K.D.; Lee, S.; Kang, J.; Lee, J. An adaptive surrogate model to structural reliability analysis using deep neural network. Expert Syst. Appl. 2022, 189, 116104.
21. Yao, C.; Mei, J.; Li, K. A Mixed Residual Hybrid Method For Failure Probability Estimation. In Proceedings of the 2022 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 11–13 December 2022; IEEE: New York, NY, USA, 2022; pp. 119–124.
22. Chen, T.; Chen, H. Approximations of continuous functionals by neural networks with application to dynamic systems. IEEE Trans. Neural Netw. 1993, 4, 910–918.
23. Chen, T.; Chen, H. Approximation capability to functions of several variables, nonlinear functionals, and operators by radial basis function neural networks. IEEE Trans. Neural Netw. 1995, 6, 904–910.
24. Lu, L.; Meng, X.; Cai, S.; Mao, Z.; Goswami, S.; Zhang, Z.; Karniadakis, G.E. A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data. Comput. Methods Appl. Mech. Eng. 2022, 393, 1–42.
25. Li, Z.; Kovachki, N.; Azizzadenesheli, K.; Liu, B.; Bhattacharya, K.; Stuart, A.; Anandkumar, A. Fourier neural operator for parametric partial differential equations. arXiv 2020, arXiv:2010.08895.
26. Lu, L.; Jin, P.; Pang, G.; Zhang, Z.; Karniadakis, G.E. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat. Mach. Intell. 2021, 3, 218–229.
27. Lin, C.; Li, Z.; Lu, L.; Cai, S.; Maxey, M.; Karniadakis, G.E. Operator learning for predicting multiscale bubble growth dynamics. J. Chem. Phys. 2021, 154, 104118.
28. Cai, S.; Wang, Z.; Lu, L.; Zaki, T.A.; Karniadakis, G.E. DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks. J. Comput. Phys. 2021, 436, 110296.
29. Mao, Z.; Lu, L.; Marxen, O.; Zaki, T.A.; Karniadakis, G.E. DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators. J. Comput. Phys. 2021, 447, 110698.
30. Papadrakakis, M.; Lagaros, N.D. Reliability-based structural optimization using neural networks and Monte Carlo simulation. Comput. Methods Appl. Mech. Eng. 2002, 191, 3491–3507.
31. Kutyłowska, M. Neural network approach for failure rate prediction. Eng. Fail. Anal. 2015, 47, 41–48.
32. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
33. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32.
34. Engelund, S.; Rackwitz, R. A benchmark study on importance sampling techniques in structural reliability. Struct. Saf. 1993, 12, 255–276.
35. Tabandeh, A.; Jia, G.; Gardoni, P. A review and assessment of importance sampling methods for reliability analysis. Struct. Saf. 2022, 97, 102216.
36. Teixeira, R.; Nogal, M.; O'Connor, A. Adaptive approaches in metamodel-based reliability analysis: A review. Struct. Saf. 2021, 89, 102019.
Figure 1. Illustrations of stacked and unstacked architectures of DeepONets [26]. DeepONets with unstacked structures replace branch networks with a single multi-layer neural network.
Figure 2. A descriptive flowchart for the NOH method.
Figure 3. Convergence of the NOH method and the NH method. In the hybrid algorithm, we set $\delta M$ to 25, and the iterative procedure was not terminated until the limit state function $g$ had been recomputed for $4 \times 10^4$ iterations.
Figure 4. Comparison of the neural surrogate and the neural operator surrogate. Both models are compared to the ground truth $g(Z)$ from the analytic solution. The right figure is the left figure on the log scale. (a) Comparison of different surrogates. (b) Comparison on the log scale.
Figure 5. Convergence of the NOH method and the NH method. Panel (a) demonstrates the convergence of the NOH method, where NOH terminates in six iterations, while in (b), a comparison of the convergence rates between the NOH and the NH methods is conducted. The NH method requires significantly more iterations than the NOH method to converge because its neural surrogate lacks precision. For both methods, we set $\delta M = 25$ in each iteration. For NOH, $\epsilon$ was set to 0, and the iterative procedure was terminated when $\delta P \le \epsilon$ for the first time. (a) Convergence of the NOH method. (b) Comparison between the NOH and NH methods.
Figure 6. Comparison of the neural surrogate and the neural operator surrogate by using 50 randomly selected samples.
Figure 7. A snapshot of the solution of the Helmholtz equation.
Figure 8. Convergence of the NOH method and the NH method. Panel (a) demonstrates the convergence of the NOH method, while in (b), a comparison of the convergence rates between the NOH and the NH methods is conducted. For both methods, we set $\delta M = 25$ in each iteration. For NOH, $\epsilon$ was set to 0, and the iterative procedure was terminated when $\delta P \le \epsilon$ for the first time. (a) Convergence of the NOH method. (b) Comparison between the NOH and NH methods.
Table 1. Comparison of a Monte Carlo simulation (MCS), the neural hybrid (NH) method, and the neural operator hybrid (NOH) method. In the hybrid algorithm, we set $\delta M$ to 25 and $\epsilon$ to 0, and we terminated the iterative procedure when $\delta P \le \epsilon$ five times. The relative error is denoted by $\epsilon_e$.

Method | $P_f^h$ | $N_{call}$ | $\epsilon_e$
MCS | $3.578 \times 10^{-3}$ | $10^6$ | 0.11%
NOH | $3.578 \times 10^{-3}$ | 500 (Training) + 1750 (Evaluating) | 0.11%
NH | - | - | -
Table 2. A performance evaluation was conducted to compare the MCS, the NH, and the NOH approaches. The NOH method was found to outperform the other methods, evaluating only 3% of the $N_{call}$ of the NH method. The NOH method also achieved the best relative error of 0.81% while requiring the lowest number of samples with respect to the other methods.

Method | $P_f^h$ | $N_{call}$ | $\epsilon_e$
MCS | $2.22 \times 10^{-4}$ | $5 \times 10^6$ | -
NOH | $2.20 \times 10^{-4}$ | 1000 (Training) + 150 (Evaluating) | 0.81%
NH | $2.02 \times 10^{-4}$ | 1000 (Training) + 4175 (Evaluating) | 8.92%
Table 3. A performance evaluation was conducted to compare the NH and the NOH approaches. The NOH method was found to outperform the NH method in both efficiency and accuracy. Specifically, the NOH method achieved a relative error of just 3.70% while requiring fewer $N_{call}$ than the NH method did.

Method | $P_f^h$ | $N_{call}$ | $\epsilon_e$
MCS | $2.70 \times 10^{-4}$ | $10^5$ | -
NOH | $2.80 \times 10^{-4}$ | 1000 (Training) + 100 (Evaluating) | 3.70%
NH | $3.00 \times 10^{-4}$ | 1000 (Training) + 875 (Evaluating) | 11.11%
