Article

Fourth-Order Comprehensive Adjoint Sensitivity Analysis (4th-CASAM) of Response-Coupled Linear Forward/Adjoint Systems: I. Theoretical Framework

by
Dan Gabriel Cacuci
Department of Mechanical Engineering, College of Engineering and Computing, University of South Carolina, Columbia, SC 29208, USA
Energies 2021, 14(11), 3335; https://doi.org/10.3390/en14113335
Submission received: 3 May 2021 / Revised: 1 June 2021 / Accepted: 3 June 2021 / Published: 6 June 2021

Abstract
The most general quantities of interest (called “responses”) produced by the computational model of a linear physical system can depend on both the forward and adjoint state functions that describe the respective system. This work presents the Fourth-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) for linear systems, which enables the efficient computation of the exact expressions of the 1st-, 2nd-, 3rd- and 4th-order sensitivities of a generic system response, which can depend on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective forward/adjoint systems. Among the best known such system responses are various Lagrangians, including the Schwinger and Roussopoulos functionals for analyzing ratios of reaction rates, the Rayleigh quotient for analyzing eigenvalues and/or separation constants, etc., which require the simultaneous consideration of both the forward and adjoint systems when computing them and/or their sensitivities (i.e., functional derivatives) with respect to the model parameters. Evidently, such responses encompass, as particular cases, responses that may depend just on the forward or just on the adjoint state functions pertaining to the linear system under consideration. This work also compares the CPU-times needed by the 4th-CASAM versus other deterministic methods (e.g., finite-difference schemes) for computing response sensitivities. These comparisons underscore the fact that the 4th-CASAM is the only practically implementable methodology for obtaining and subsequently computing the exact expressions (i.e., free of methodologically-introduced approximations) of the 1st-, 2nd-, 3rd- and 4th-order sensitivities (i.e., functional derivatives) of responses to system parameters, for coupled forward/adjoint linear systems. By enabling the practical computation of any and all of the 1st-, 2nd-, 3rd- and 4th-order response sensitivities to model parameters, the 4th-CASAM makes it possible to compare the relative values of the sensitivities of various orders, in order to assess which sensitivities are important and which may actually be neglected, thus enabling future investigations of the convergence of the (multivariate) Taylor series expansion of the response in terms of parameter variations, as well as of the range of validity of other important quantities (e.g., response variances/covariances, skewness, kurtosis, etc.) that are derived from the Taylor expansion of the response as a function of the model’s parameters. The 4th-CASAM presented in this work provides the basis for significant future advances towards overcoming the “curse of dimensionality” in sensitivity analysis, uncertainty quantification and predictive modeling.

1. Introduction

The functional derivatives of the results (customarily called “responses”) produced by computational models of physical systems with respect to the model’s parameters are called the “response sensitivities.” The first-order sensitivities have been used for a variety of purposes, including: (i) understanding the model by ranking the importance of the various parameters; (ii) performing “reduced-order modeling” by eliminating unimportant parameters and/or processes; (iii) quantifying the uncertainties induced in a model response due to model parameter uncertainties; (iv) performing “model validation” by comparing computations to experiments to address the question “does the model represent reality?”; (v) prioritizing improvements in the model; (vi) performing data assimilation and model calibration as part of forward “predictive modeling” to obtain best-estimate predicted results with reduced predicted uncertainties; (vii) performing inverse “predictive modeling”; and (viii) designing and optimizing the system.
As is well known, non-linear operators do not admit adjoint operators; only linear operators admit corresponding adjoint operators. For this reason, many of the most important responses for linear systems involve the solutions of both the forward and the adjoint linear models that correspond to the respective physical system. Among the most widely used system responses that involve both the forward and adjoint functions are the various forms of Lagrangian functionals, the Rayleigh quotient for computing eigenvalues and/or separation constants when solving partial differential equations, the Schwinger functional for first-order “normalization-free” solutions, and many others (see, e.g., [1,2]). These functionals play a fundamental role in optimization and control procedures, in the derivation of numerical methods for solving equations (differential, integral, integro-differential), etc. The analysis of responses that simultaneously involve both forward and adjoint functions makes it necessary to treat linear systems in their own right, rather than treating them as particular cases of nonlinear systems. This is in contradistinction to responses for a nonlinear system, which can depend only on the forward functions, since nonlinear operators do not admit bona-fide adjoint operators; only a linearized form of a nonlinear operator admits an adjoint operator.
As is well known, even the approximate determination of the first-order sensitivities $\partial R / \partial \alpha_i$, $i = 1, \ldots, N_\alpha$, of a model response $R$ to $N_\alpha$ parameters $\alpha_i$ using conventional finite-difference methods would require at least $N_\alpha$ large-scale computations with altered parameter values. The computation of the distinct second-order response sensitivities would require $N_\alpha (N_\alpha + 1)/2$ large-scale computations, which rapidly becomes unfeasible for large-scale models comprising many parameters, even using supercomputers. The computation of higher-order sensitivities by conventional methods is limited in practice by the so-called “curse of dimensionality” [3], since the number of large-scale computations needed for computing higher-order response sensitivities increases exponentially with the order of the response sensitivities. The First-Order Adjoint Sensitivity Analysis Methodology for Nonlinear Systems, conceived and developed by Cacuci [4,5,6], already provided a considerable step forward in the direction of overcoming the “curse of dimensionality”. For the exact computation of the first- and second-order response sensitivities to parameters, the “curse of dimensionality” has been overcome by the Second-Order Adjoint Sensitivity Analysis Methodology conceived and developed by Cacuci [7,8,9], as was demonstrated by the application of this methodology to compute [10,11,12,13,14,15], comprehensively and efficiently, the exact expressions of the first- and second-order sensitivities of the leakage response (which has also been measured experimentally) to the model parameters of an OECD/NEA reactor physics benchmark [16]. The neutron transport code PARTISN [17] has been used to computationally model this reactor physics benchmark, which comprises 21,976 model parameters. Hence, there are 21,976 first-order sensitivities (of which 7477 have nonzero values) and 241,483,276 second-order sensitivities (of which 27,956,503 have nonzero values) of the benchmark’s leakage response to the model parameters. The work presented in [10,11,12,13,14,15] has been extended to third-order in [18], which has subsequently been applied in [19,20,21] to the computation of the third-order sensitivities of the leakage response of the OECD/NEA benchmark [16] to the benchmark’s total microscopic nuclear cross sections, which turned out to be the most important model parameters.
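To make the combinatorial growth invoked above concrete, the number of distinct sensitivities of order $k$ with respect to $N_\alpha$ parameters equals the number of multisets of size $k$, i.e., the binomial coefficient $\binom{N_\alpha + k - 1}{k}$. The short Python sketch below (added here for illustration only; only the parameter count 21,976 is taken from the benchmark cited above) reproduces the first- and second-order counts quoted in this paragraph:

from math import comb

def num_distinct_sensitivities(n_params: int, order: int) -> int:
    # Distinct mixed partial derivatives of the given order: multisets of size
    # `order` drawn from `n_params` parameters.
    return comb(n_params + order - 1, order)

n_alpha = 21976  # number of model parameters of the benchmark cited above
print(num_distinct_sensitivities(n_alpha, 1))  # 21976 first-order sensitivities
print(num_distinct_sensitivities(n_alpha, 2))  # 241483276 = N_alpha*(N_alpha+1)/2
print(num_distinct_sensitivities(n_alpha, 3))  # about 1.8e12 third-order sensitivities ("curse of dimensionality")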
This work presents the Fourth-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) for general response-coupled forward/adjoint systems. The 4th-CASAM enables the efficient computation of the exact expressions of the 1st-, 2nd-, 3rd- and 4th-order sensitivities of a generic system response, which depends on both the forward and adjoint state functions with respect to all of the parameters underlying the respective systems. The qualifier “comprehensive” is used because this methodology provides exact expressions for the sensitivities of a system response not only to the system’s internal parameters, but also to its (possibly uncertain) boundaries and internal interfaces in phase-space. The development of the 4th-CASAM for coupled forward/adjoint linear system will provide the basis for significant advances towards overcoming the “curse of dimensionality” in sensitivity analysis, uncertainty quantification, as well as forward and inverse predictive modeling [22,23,24].
This work is structured as follows: Section 2 presents the generic mathematical formulation of the forward and adjoint equations that underlies the computational model of a linear physical system, having a response that depends nonlinearly on the forward and adjoint state functions and parameters. Section 3 presents the development of the novel 4th-CASAM, which is developed successively from the 1st-, 2nd- and 3rd-CASAM. Section 3 also presents, for comparison, the expressions and number of large-scale computations that would be needed if the 1st-, 2nd-, 3rd-, and 4th-order response sensitivities were computed by using finite-difference formulas (which would not only be impractical for large-scale systems, but would provide only approximate values for the respective sensitivities) and the Forward Sensitivity Analysis Methodology (FSAM), which would provide exact expressions for the respective sensitivities, but would also be prohibitively expensive in terms of computational costs for large-scale systems. Finally, Section 4 offers conclusions regarding the significance of this work’s novel results in the quest to overcome the curse of dimensionality in sensitivity analysis, uncertainty quantification and predictive modeling.

2. Background: Mathematical Description of the Physical System

A physical system is modeled by using independent variables, dependent variables (“state functions”), as well as parameters which are seldom, if ever, known precisely. Without loss of generality, the model parameters can be considered to be real scalar quantities, having known nominal (or mean) values and, possibly, known higher-order moments or cumulants (i.e., variances/covariances, skewness, kurtosis), which are determined outside the model, e.g., from experimental data. These imprecisely known model parameters will be denoted as $\alpha_1, \ldots, \alpha_{N_\alpha}$, where $N_\alpha$ denotes the total number of imprecisely known parameters underlying the model under consideration. These model parameters are considered to include imprecisely known geometrical parameters that characterize the physical system’s boundaries in the phase-space of the model’s independent variables. For subsequent developments, it is convenient to consider that these parameters are components of a “vector of parameters” denoted as $\boldsymbol{\alpha} \equiv (\alpha_1, \ldots, \alpha_{N_\alpha})^\dagger \in \mathbb{R}^{N_\alpha}$, where $\mathbb{R}^{N_\alpha}$ denotes the $N_\alpha$-dimensional subset of the set of real scalars. The vector $\boldsymbol{\alpha} \in \mathbb{R}^{N_\alpha}$ is considered to include any imprecisely known model parameters that may enter into defining the system’s boundary in the phase-space of independent variables. The symbol “$\equiv$” will be used to denote “is defined as” or “is by definition equal to.” Matrices and vectors will be denoted using bold letters. All vectors in this work are considered to be column vectors, and transposition will be indicated by a dagger ($\dagger$) superscript.
The model is considered to comprise $N_x$ independent variables, which are denoted as $x_i$, $i = 1, \ldots, N_x$, and are considered to be components of the vector $x \equiv (x_1, \ldots, x_{N_x})^\dagger \in \mathbb{R}^{N_x}$. The vector $x \in \mathbb{R}^{N_x}$ of independent variables is considered to be defined on a phase-space domain, denoted as $\Omega_x$ and defined as follows: $\Omega_x \equiv \{ \lambda_i(\boldsymbol{\alpha}) \le x_i \le \omega_i(\boldsymbol{\alpha}); \; i = 1, \ldots, N_x \}$. The lower boundary-point of an independent variable is denoted as $\lambda_i(\boldsymbol{\alpha})$ (e.g., the inner radius of a sphere or cylinder, the lower range of an energy-variable, etc.), while the corresponding upper boundary-point is denoted as $\omega_i(\boldsymbol{\alpha})$ (e.g., the outer radius of a sphere or cylinder, the upper range of an energy-variable, etc.). A typical example of boundaries that depend on imprecisely known parameters is provided by the boundary conditions needed for models based on diffusion theory, in which the respective “flux and/or current conditions” for the “boundaries facing vacuum” are imposed on the “extrapolated boundary” of the respective spatial domain. As is well known, the “extrapolated boundary” depends not only on the imprecisely known physical dimensions of the problem’s domain, but also on the medium’s microscopic transport cross sections and atomic number densities. The boundary of $\Omega_x$, which will be denoted as $\partial\Omega_x(\boldsymbol{\alpha})$, comprises the set of all of the endpoints $\lambda_i(\boldsymbol{\alpha})$, $\omega_i(\boldsymbol{\alpha})$, $i = 1, \ldots, N_x$, of the respective intervals on which the components of $x$ are defined, i.e., $\partial\Omega_x(\boldsymbol{\alpha}) \equiv \{ \lambda_i(\boldsymbol{\alpha}) \cup \omega_i(\boldsymbol{\alpha}), \; i = 1, \ldots, N_x \}$.
The model is considered to comprise $N_\varphi$ dependent variables (also called “state functions”), denoted as $\varphi_i(x)$, $i = 1, \ldots, N_\varphi$, which are considered to be the components of the “vector of dependent variables” defined as $\boldsymbol{\varphi}(x) \equiv [\varphi_1(x), \ldots, \varphi_{N_\varphi}(x)]^\dagger$.
A linear physical system is generally modeled by a system of $N_\varphi$ linear operator-equations, which can be represented as follows:
$\mathbf{L}(x; \boldsymbol{\alpha}) \, \boldsymbol{\varphi}(x) = \mathbf{q}_{\varphi}(x; \boldsymbol{\alpha}), \quad x \in \Omega_x,$   (1)
where $\mathbf{L}(x; \boldsymbol{\alpha}) \equiv [L_{ij}(x; \boldsymbol{\alpha})]$, $i, j = 1, \ldots, N_\varphi$, is a matrix of dimensions $N_\varphi \times N_\varphi$, while $\mathbf{q}_\varphi(x; \boldsymbol{\alpha}) \equiv [q_{\varphi,1}(x; \boldsymbol{\alpha}), \ldots, q_{\varphi,N_\varphi}(x; \boldsymbol{\alpha})]^\dagger$ is a column vector of dimension $N_\varphi$. The components $L_{ij}(x; \boldsymbol{\alpha})$ are operators that act linearly on the dependent variables $\varphi_j(x)$ and are, in general, nonlinear functions of the imprecisely known parameters $\boldsymbol{\alpha} \in \mathbb{R}^{N_\alpha}$. The components $q_{\varphi,i}(x; \boldsymbol{\alpha})$ of $\mathbf{q}_\varphi(x; \boldsymbol{\alpha})$, where the subscript “$\varphi$” indicates sources associated with the “forward” system of equations, are also nonlinear functions of $\boldsymbol{\alpha} \in \mathbb{R}^{N_\alpha}$. Since the right-side of Equation (1) may contain distributions, the equality in this equation is considered to hold in the weak (“distributional”) sense. Similarly, all of the equalities that involve differential equations in this work will be considered to hold in the weak/distributional sense.
When $\mathbf{L}(x; \boldsymbol{\alpha})$ contains differential operators, a set of boundary and/or initial conditions which define the domain of $\mathbf{L}(x; \boldsymbol{\alpha})$ must also be given. Since the complete mathematical model is considered to be linear in $\boldsymbol{\varphi}(x)$, the boundary and/or initial conditions needed to define the domain of $\mathbf{L}(x; \boldsymbol{\alpha})$ must also be linear in $\boldsymbol{\varphi}(x)$. Such linear boundary and/or initial conditions are represented in the following operator form:
$\mathbf{B}_{\varphi}(x; \boldsymbol{\alpha}) \, \boldsymbol{\varphi}(x) - \mathbf{c}_{\varphi}(x; \boldsymbol{\alpha}) = \mathbf{0}, \quad x \in \partial\Omega_x(\boldsymbol{\alpha}).$   (2)
In Equation (2), the operator $\mathbf{B}_\varphi(x; \boldsymbol{\alpha}) \equiv [b_{ij}(x; \boldsymbol{\alpha})]$, $i = 1, \ldots, N_B$, $j = 1, \ldots, N_\varphi$, is a matrix comprising, as components, operators that act linearly on $\boldsymbol{\varphi}(x)$ and nonlinearly on $\boldsymbol{\alpha}$; the quantity $N_B$ denotes the total number of boundary and initial conditions. The operator $\mathbf{c}_\varphi(x; \boldsymbol{\alpha}) \equiv [c_{\varphi,1}(x; \boldsymbol{\alpha}), \ldots, c_{\varphi,N_B}(x; \boldsymbol{\alpha})]^\dagger$ is an $N_B$-dimensional vector comprising components that are operators acting, in general, nonlinearly on $\boldsymbol{\alpha}$. The subscript “$\varphi$” in Equation (2) indicates boundary conditions associated with the forward state function $\boldsymbol{\varphi}(x)$. In this work, capital bold letters will be used to denote matrices (whose components may be operators rather than just functions), while lower-case bold letters will be used to denote vectors.
The nominal solution of Equations (1) and (2) is denoted as $\boldsymbol{\varphi}^0(x)$, and is obtained by solving these equations at the nominal (or mean) values $\boldsymbol{\alpha}^0$ of the model parameters. The superscript “zero” will henceforth be used to denote “nominal” (or, equivalently, “expected” or “mean”) values. Thus, the vectors $\boldsymbol{\varphi}^0(x)$ and $\boldsymbol{\alpha}^0$ satisfy the following equations:
$\mathbf{L}(x; \boldsymbol{\alpha}^0) \, \boldsymbol{\varphi}^0(x) = \mathbf{q}_{\varphi}(x; \boldsymbol{\alpha}^0), \quad x \in \Omega_x,$   (3)
$\mathbf{B}_{\varphi}(x; \boldsymbol{\alpha}^0) \, \boldsymbol{\varphi}^0(x) - \mathbf{c}_{\varphi}(x; \boldsymbol{\alpha}^0) = \mathbf{0}, \quad x \in \partial\Omega_x(\boldsymbol{\alpha}^0).$   (4)
Linear systems differ fundamentally from nonlinear systems in that linear operators admit adjoint operators, whereas nonlinear operators do not. Furthermore, important model responses of linear systems can be functions of both the forward and the adjoint state functions, a situation that cannot occur for nonlinear problems. Therefore, linear physical systems cannot simply be considered to be particular cases of nonlinear systems (although they may be just that in particular cases), but need to be treated comprehensively in their own right.
Physical problems modeled by linear systems and/or operators are naturally defined in Hilbert spaces. Thus, for physical systems represented by Equations (1) and (2), the components $\varphi_i(x)$, $i = 1, \ldots, N_\varphi$, are considered to be square-integrable functions and $\boldsymbol{\varphi}(x) \in H_0$, where $H_0$ is the “original”, or “zeroth-level”, Hilbert space, as denoted by the subscript “zero”. Subsequently in this work, higher-level Hilbert spaces, which will be denoted as $H_1$, $H_2$, etc., will also be introduced. Evidently, all of the elements of $H_0$ are $N_\varphi$-dimensional vectors that are functions of the independent variables $x$. For two elements $\boldsymbol{\varphi}(x) \in H_0$ and $\boldsymbol{\psi}(x) \in H_0$, the Hilbert space $H_0$ is endowed with an inner product that will be denoted as $\langle \boldsymbol{\varphi}(x), \boldsymbol{\psi}(x) \rangle_0$ and which is defined as follows:
$\langle \boldsymbol{\varphi}(x), \boldsymbol{\psi}(x) \rangle_0 \equiv \int_{\Omega_x} \boldsymbol{\varphi}(x) \cdot \boldsymbol{\psi}(x) \, dx \equiv \prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \boldsymbol{\varphi}(x) \cdot \boldsymbol{\psi}(x) \, dx = \sum_{i=1}^{N_\varphi} \int_{\lambda_1(\boldsymbol{\alpha})}^{\omega_1(\boldsymbol{\alpha})} \cdots \int_{\lambda_{N_x}(\boldsymbol{\alpha})}^{\omega_{N_x}(\boldsymbol{\alpha})} \varphi_i(x) \, \psi_i(x) \, dx_1 \cdots dx_{N_x}.$   (5)
In Equation (5), the product-notation $\prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} [\;] \, dx_i$ compactly denotes the respective multiple integrals, while the dot indicates the “scalar product of two vectors” defined as follows:
$\boldsymbol{\varphi}(x) \cdot \boldsymbol{\psi}(x) \equiv \sum_{i=1}^{N_\varphi} \varphi_i(x) \, \psi_i(x).$   (6)
In most practical situations the Hilbert space $H_0$ is self-dual. The operator $\mathbf{L}(x; \boldsymbol{\alpha})$ admits an adjoint (operator), which will be denoted as $\mathbf{L}^*(x; \boldsymbol{\alpha})$, and which is defined through the following relation for an arbitrary vector $\boldsymbol{\psi}(x) \in H_0$:
$\langle \boldsymbol{\psi}(x), \mathbf{L}(x; \boldsymbol{\alpha}) \, \boldsymbol{\varphi}(x) \rangle_0 = \langle \mathbf{L}^*(x; \boldsymbol{\alpha}) \, \boldsymbol{\psi}(x), \boldsymbol{\varphi}(x) \rangle_0.$   (7)
In Equation (7), the formal adjoint operator $\mathbf{L}^*(x; \boldsymbol{\alpha})$ is the $N_\varphi \times N_\varphi$ matrix
$\mathbf{L}^*(x; \boldsymbol{\alpha}) \equiv [L^*_{ji}(x; \boldsymbol{\alpha})], \quad i, j = 1, \ldots, N_\varphi,$   (8)
comprising elements $L^*_{ji}(x; \boldsymbol{\alpha})$ which are obtained by transposing the formal adjoints of the operators $L_{ij}(x; \boldsymbol{\alpha})$. Thus, the system adjoint to the linear system represented by Equations (1) and (2) has the following general representation in operator form:
$\mathbf{L}^*(x; \boldsymbol{\alpha}) \, \boldsymbol{\psi}(x) = \mathbf{q}_{\psi}(x; \boldsymbol{\alpha}), \quad x \in \Omega_x,$   (9)
$\mathbf{B}_{\psi}(x; \boldsymbol{\alpha}) \, \boldsymbol{\psi}(x) - \mathbf{c}_{\psi}(x; \boldsymbol{\alpha}) = \mathbf{0}, \quad x \in \partial\Omega_x(\boldsymbol{\alpha}).$   (10)
The domain of L * ( x ; α ) is determined by selecting the adjoint boundary and/or initial conditions represented in operator form in Equation (10), where the subscript “ ψ ” indicates adjoint boundary and/or initial conditions associated with the adjoint state function ψ ( x ) . These adjoint boundary and/or initial conditions are selected so as to ensure that the boundary terms that arise in the so-called “bilinear concomitant” when going from the left-side to the right side of Equation (7) vanish, in conjunction with the forward boundary conditions given in Equation (2).
The nominal solution of Equations (9) and (10) is denoted as ψ 0 ( x ) , and is obtained by solving these equations at the nominal parameter values α 0 , i.e.,
$\mathbf{L}^*(x; \boldsymbol{\alpha}^0) \, \boldsymbol{\psi}^0(x) = \mathbf{q}_{\psi}(x; \boldsymbol{\alpha}^0), \quad x \in \Omega_x,$   (11)
$\mathbf{B}_{\psi}(x; \boldsymbol{\alpha}^0) \, \boldsymbol{\psi}^0(x) - \mathbf{c}_{\psi}(x; \boldsymbol{\alpha}^0) = \mathbf{0}, \quad x \in \partial\Omega_x(\boldsymbol{\alpha}^0).$   (12)
In view of Equations (1) and (9), the relationship shown in Equation (7), which is the basis for defining the adjoint operator, also provides the following fundamental “reciprocity-like” relation between the sources of the forward and the adjoint equations, respectively:
$\langle \boldsymbol{\psi}(x), \mathbf{q}_{\varphi}(x; \boldsymbol{\alpha}) \rangle_0 = \langle \mathbf{q}_{\psi}(x; \boldsymbol{\alpha}), \boldsymbol{\varphi}(x) \rangle_0.$   (13)
The functional on the right-side of Equation (13) represents a “detector response”, i.e., a particle reaction-rate representative of the “count” of particles incident on a detector, measuring the respective particle flux $\boldsymbol{\varphi}(x)$. Thus, the source term $\mathbf{q}_\psi(x; \boldsymbol{\alpha}) \equiv [q_{\psi,1}(x; \boldsymbol{\alpha}), \ldots, q_{\psi,N_\varphi}(x; \boldsymbol{\alpha})]^\dagger$ in Equation (11) is usually associated with the “result of interest” to be measured and/or computed, which is customarily called the system’s “response”.
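In a discretized setting, where $\mathbf{L}$ becomes a matrix and the adjoint operator reduces to the transpose, the reciprocity relation of Equation (13) can be verified directly. The following minimal Python/NumPy sketch (an illustrative toy example under these assumptions, not part of the original formulation) checks that $\langle \boldsymbol{\psi}, \mathbf{q}_\varphi \rangle = \langle \mathbf{q}_\psi, \boldsymbol{\varphi} \rangle$:

import numpy as np

rng = np.random.default_rng(0)
n = 5
L = rng.normal(size=(n, n)) + n * np.eye(n)   # discrete forward operator (well conditioned)
q_phi = rng.normal(size=n)                    # forward source
q_psi = rng.normal(size=n)                    # adjoint source (the "detector")

phi = np.linalg.solve(L, q_phi)               # forward solution:  L phi = q_phi
psi = np.linalg.solve(L.T, q_psi)             # adjoint solution:  L* psi = q_psi  (L* -> L^T)

# Reciprocity relation of Equation (13): <psi, q_phi> = <q_psi, phi>
print(np.dot(psi, q_phi), np.dot(q_psi, phi))  # the two numbers agree to round-off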
The system response generally depends on the model’s state-functions and on the system parameters, which are considered to also include parameters that may specifically appear only in the definition of the response under consideration (but which may not appear in the definition of the model). Thus, the (physical) “system” is defined in this work to comprise both the system’s computational model and the system’s response. The system’s response will be denoted as R [ φ ( x ) , ψ ( x ) ; α ] and, in the most general case, is a nonlinear operator acting on the model’s forward and adjoint state functions, as well as on imprecisely known parameters, both directly and indirectly through the state functions. The nominal value of the response, R [ φ 0 ( x ) , ψ 0 ( x ) ; α 0 ] , is determined by using the nominal parameter values α 0 , the nominal value φ 0 ( x ) of the forward state function [obtained by solving Equations (3) and (4)] and the nominal value ψ 0 ( x ) of the adjoint function [obtained by solving Equations (11) and (12)].
A particularly important class of system responses comprises (scalar-valued) functionals of the forward and adjoint state functions. Such responses occur in many fields, including optimization, control, model verification, data assimilation, model validation and calibration, predictive modeling, etc. For example, the well-known Lagrangian functional is usually represented in the following form:
$\prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \left\{ \boldsymbol{\varphi}(x) \cdot \mathbf{q}_{\psi}(\boldsymbol{\alpha}) + \boldsymbol{\psi}(x) \cdot \big[ \mathbf{L}(\boldsymbol{\alpha}) \, \boldsymbol{\varphi}(x) - \mathbf{q}_{\varphi}(\boldsymbol{\alpha}) \big] \right\} dx_i.$   (14)
In particular, the Schwinger “normalization-free Lagrangian” is usually represented in the following form [1,2]:
$\frac{ \left\{ \prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \boldsymbol{\varphi}(x) \cdot \mathbf{q}_{\psi}(\boldsymbol{\alpha}) \, dx_i \right\} \left\{ \prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \boldsymbol{\psi}(x) \cdot \mathbf{q}_{\varphi}(\boldsymbol{\alpha}) \, dx_i \right\} }{ \prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \boldsymbol{\psi}(x) \cdot \mathbf{L}(\boldsymbol{\alpha}) \, \boldsymbol{\varphi}(x) \, dx_i }.$   (15)
When Equation (1) takes on the eigenvalue (or “separated”) form $\mathbf{M}[\boldsymbol{\alpha}(x)] \, \boldsymbol{\varphi}(x) = \mu \, \mathbf{N}[\boldsymbol{\alpha}(x)] \, \boldsymbol{\varphi}(x)$, which implies that the adjoint function $\boldsymbol{\psi}(x)$ is the solution of the corresponding adjoint equation $\mathbf{M}^*[\boldsymbol{\alpha}(x)] \, \boldsymbol{\psi}(x) = \mu \, \mathbf{N}^*[\boldsymbol{\alpha}(x)] \, \boldsymbol{\psi}(x)$, the well-known Rayleigh quotient is usually represented in the following form:
$\frac{ \prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \boldsymbol{\psi}(x) \cdot \mathbf{M}[\boldsymbol{\alpha}(x)] \, \boldsymbol{\varphi}(x) \, dx_i }{ \prod_{i=1}^{N_x} \int_{\lambda_i(\boldsymbol{\alpha})}^{\omega_i(\boldsymbol{\alpha})} \boldsymbol{\psi}(x) \cdot \mathbf{N}[\boldsymbol{\alpha}(x)] \, \boldsymbol{\varphi}(x) \, dx_i }.$   (16)
Actually, any response can be represented in terms of functionals using the inner product underlying the Hilbert space in which the physical problem is formulated (in this case, H 0 ). For example, a measurement of a physical quantity can be represented as a response R p [ φ ( x p ) , ψ ( x p ) ; α ] which is located at a specific point, x p , in phase-space. Such a response can be represented mathematically as a functional of the following form:
$R_p[\boldsymbol{\varphi}(x_p), \boldsymbol{\psi}(x_p); \boldsymbol{\alpha}] \equiv \int_{\lambda_1(\boldsymbol{\alpha})}^{\omega_1(\boldsymbol{\alpha})} \cdots \int_{\lambda_{N_x}(\boldsymbol{\alpha})}^{\omega_{N_x}(\boldsymbol{\alpha})} R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] \, \delta(x - x_p) \, dx_1 \cdots dx_{N_x},$   (17)
where $\delta(x - x_p)$ denotes the multidimensional Dirac-delta functional. Furthermore, a function-valued (operator) response $R_{op}[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ can be represented by a spectral expansion (in multidimensional orthogonal polynomials or Fourier series) of the form:
$R_{op}[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] = \sum_{m_1} \cdots \sum_{m_{N_x}} c_{m_1 \cdots m_{N_x}} \, P_{m_1}(x_1) \, P_{m_2}(x_2) \cdots P_{m_{N_x}}(x_{N_x}),$   (18)
where the quantities $P_{m_i}(x_i)$, $i = 1, \ldots, N_x$, denote the corresponding spectral functions (e.g., orthogonal polynomials or Fourier exponential/trigonometric functions) and where the spectral (Fourier) coefficients $c_{m_1 \cdots m_{N_x}}$ are defined as follows:
$c_{m_1 \cdots m_{N_x}} \equiv \int_{\lambda_1}^{\omega_1} \cdots \int_{\lambda_{N_x}}^{\omega_{N_x}} R_{op}[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] \, P_{m_1}(x_1) \cdots P_{m_{N_x}}(x_{N_x}) \, dx_1 \cdots dx_{N_x}.$   (19)
The coefficients c m 1 m N x can themselves be considered as system responses since the spectral polynomials P m i ( x i ) are perfectly well known while the expansion coefficients will contain all of the dependencies of the respective response on the imprecisely known model and response parameters. Consequently, the sensitivity analysis of operator-valued responses can be reduced to the sensitivity analysis of scalar-valued responses. The expressions in both Equations (17) and (19) are functionals of the forward and adjoint state functions and can be represented in the following general form:
$R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] \equiv \int_{\lambda_1(\boldsymbol{\alpha})}^{\omega_1(\boldsymbol{\alpha})} \cdots \int_{\lambda_{N_x}(\boldsymbol{\alpha})}^{\omega_{N_x}(\boldsymbol{\alpha})} S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] \, dx_1 \cdots dx_{N_x},$   (20)
where S [ φ ( x ) , ψ ( x ) ; α ] denotes a suitably Gateaux- (G-) differentiable function of the indicated arguments. Specific examples that illustrate the effects of the first- and second-order sensitivities of responses of the form shown in Equations (14)–(20) to imprecisely known model and domain-boundary parameters are provided in [25,26,27,28,29,30,31,32,33,34].
The generic response defined in Equation (20) provides the basis for constructing any other responses of specific interest, and will therefore be used for the generic “Fourth-Order Comprehensive Sensitivity Analysis Methodology (4th-CASAM) for Linear Systems” to be developed in the remainder of this work. Note that the generic response defined in Equation (20) is, in general, a nonlinear function of all of its arguments, i.e., S [ φ ( x ) , ψ ( x ) ; α ] is nonlinear in φ ( x ) , ψ ( x ) , and α .
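As a concrete one-dimensional illustration of the spectral representation in Equations (18) and (19), the sketch below (a hypothetical toy example on the interval [-1, 1]; the function `response` merely stands in for the operator-valued response evaluated on the forward/adjoint solutions) computes Legendre expansion coefficients by Gauss-Legendre quadrature and verifies the reconstruction:

import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def legendre_coefficients(response, n_terms, n_quad=64):
    # c_m = (2m+1)/2 * integral_{-1}^{1} response(x) * P_m(x) dx  (cf. Equation (19) in 1-D)
    nodes, weights = leggauss(n_quad)
    return np.array([
        (2 * m + 1) / 2.0 * np.sum(weights * response(nodes) * Legendre.basis(m)(nodes))
        for m in range(n_terms)
    ])

response = lambda x: np.exp(-x) * np.sin(2.0 * x)   # stand-in for R_op[phi, psi; alpha](x)
coeffs = legendre_coefficients(response, n_terms=12)

x = np.linspace(-1.0, 1.0, 201)
reconstruction = sum(c * Legendre.basis(m)(x) for m, c in enumerate(coeffs))
print(np.max(np.abs(reconstruction - response(x))))  # small truncation error

Once the coefficients are computed, they carry the entire dependence of the response on the model parameters, which is the observation made in the preceding paragraph.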

3. The Fourth-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) for Linear Systems

The model parameters $\alpha_i$ are imprecisely known quantities, so their actual values may differ from their nominal values by quantities denoted as $\delta\alpha_i \equiv \alpha_i - \alpha_i^0$, $i = 1, \ldots, N_\alpha$. Since the model parameters $\boldsymbol{\alpha}$ and the state functions are related to each other through the forward and adjoint systems, it follows that variations $\delta\boldsymbol{\alpha} \equiv (\delta\alpha_1, \ldots, \delta\alpha_{N_\alpha})^\dagger$ in the model parameters will cause corresponding variations $\delta\boldsymbol{\varphi} \equiv (\delta\varphi_1, \ldots, \delta\varphi_{N_\varphi})^\dagger$, $\delta\varphi_i \equiv \varphi_i - \varphi_i^0$, $i = 1, \ldots, N_\varphi$, and $\delta\boldsymbol{\psi} \equiv (\delta\psi_1, \ldots, \delta\psi_{N_\varphi})^\dagger$, $\delta\psi_i \equiv \psi_i - \psi_i^0$, $i = 1, \ldots, N_\varphi$, in the forward and, respectively, adjoint state functions. In turn, the variations $\delta\boldsymbol{\alpha}$, $\delta\boldsymbol{\varphi}$, and $\delta\boldsymbol{\psi}$ cause a response variation $R(\boldsymbol{\varphi}^0 + \delta\boldsymbol{\varphi}; \boldsymbol{\psi}^0 + \delta\boldsymbol{\psi}; \boldsymbol{\alpha}^0 + \delta\boldsymbol{\alpha})$ around the nominal response value $R[\boldsymbol{\varphi}^0(x), \boldsymbol{\psi}^0(x); \boldsymbol{\alpha}^0]$.

3.1. Computation of the First-Order Sensitivities of $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$

Since the closed-form analytical expression of the response R [ φ ( x ) , ψ ( x ) ; α ] is not available in practice, the first-order sensitivities of the response cannot be computed directly from Equation (20), but need to be computed by other means. There are three methods for computing response sensitivities: (i) by using finite-difference formulas or statistical procedures to approximate Equation (20) in conjunction with “brute-force” re-computations using altered parameter values in Equations (1) and (2); (ii) by using the “forward sensitivity analysis methodology (FSAM),” which requires solving the differentiated forms of Equations (1), (2), (9) and (10), evaluated at the known nominal and response parameter values; (iii) by applying the “comprehensive adjoint sensitivity analysis methodology” which has been developed based on the pioneering work of Cacuci [4,5,6]. These methods will be discussed in Section 3.1.1, Section 3.1.2 and Section 3.1.3 below.

3.1.1. Finite-Difference Approximation Using Re-Computations with User-Modified Parameters

The first-order sensitivities of the response, R [ φ ( x ) , ψ ( x ) ; α ] , can be computed approximately using the well-known finite-difference formula presented below:
$\frac{\partial R(\boldsymbol{\alpha})}{\partial \alpha_j} \approx \frac{1}{2 h_j} \big( R_{j+1} - R_{j-1} \big) + O\big(h_j^2\big),$   (21)
where $R_{j+1} \equiv R[\boldsymbol{\varphi}(\alpha_j + h_j; x), \boldsymbol{\psi}(\alpha_j + h_j; x); \alpha_j + h_j]$, $R_{j-1} \equiv R[\boldsymbol{\varphi}(\alpha_j - h_j; x), \boldsymbol{\psi}(\alpha_j - h_j; x); \alpha_j - h_j]$, and where $h_j$ denotes a “judiciously-chosen” variation in the parameter $\alpha_j$ around its nominal value $\alpha_j^0$. The values $R_{j+1}$ and $R_{j-1}$ are obtained by re-solving the original forward and adjoint systems of equations repeatedly, using the changed parameter values $(\alpha_j \pm h_j)$. The value of the variation $h_j$ is chosen by “trial and error” for each parameter $\alpha_j$, and varies from parameter to parameter. If the value of $h_j$ is too large or too small, the result produced by Equation (21) will be in considerable error relative to the exact value of the derivative $\partial R(\boldsymbol{\alpha}) / \partial \alpha_j$, and this negative outcome may be exacerbated by the fact that the user may not be aware of this error, since the exact result for the respective sensitivity can be computed only by using the forward or the adjoint sensitivity analysis methods (to be presented in Section 3.1.2 and Section 3.1.3). It is important to note that finite-difference formulas introduce their intrinsic “methodological errors,” such as the error $O(h_j^2)$ indicated in Equation (21), which are in addition to, and independent of, the errors that might be incurred in the computation of $R[\boldsymbol{\varphi}^0(x), \boldsymbol{\psi}^0(x); \boldsymbol{\alpha}^0]$. In other words, even if the computation of $R[\boldsymbol{\varphi}^0(x), \boldsymbol{\psi}^0(x); \boldsymbol{\alpha}^0]$ were perfect (error-free), the finite-difference formulas would nevertheless introduce their own, intrinsic, numerical errors into the computation of the sensitivity $\{ dR(\boldsymbol{\alpha}) / d\alpha_j \}_{\boldsymbol{\alpha}^0}$. This statement is also valid for any of the statistical methods (e.g., Latin hypercube sampling, etc.) that might be used in conjunction with re-computations.
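A minimal sketch of the central-difference procedure of Equation (21) is given below (illustrative only; the function `response` is a placeholder for the full chain of forward/adjoint solutions followed by the evaluation of $R$, and the nominal parameters and step sizes are arbitrary choices for this example):

import numpy as np

def response(alpha):
    # Placeholder for the full computational chain: solve the forward and adjoint
    # systems for the given parameters, then evaluate R[phi, psi; alpha].
    return np.sin(alpha[0]) * np.exp(alpha[1])

def central_difference_sensitivities(response, alpha_nominal, steps):
    # dR/d(alpha_j) ~ [R(alpha_j + h_j) - R(alpha_j - h_j)] / (2 h_j) + O(h_j^2),  cf. Equation (21)
    grad = np.zeros_like(alpha_nominal)
    for j, h in enumerate(steps):
        a_plus, a_minus = alpha_nominal.copy(), alpha_nominal.copy()
        a_plus[j] += h
        a_minus[j] -= h
        grad[j] = (response(a_plus) - response(a_minus)) / (2.0 * h)  # two re-computations per parameter
    return grad

alpha0 = np.array([0.7, 0.2])
print(central_difference_sensitivities(response, alpha0, steps=[1e-5, 1e-5]))
print(np.cos(alpha0[0]) * np.exp(alpha0[1]), np.sin(alpha0[0]) * np.exp(alpha0[1]))  # exact derivatives for this placeholder

Each parameter requires two re-computations of the placeholder model; when both the forward and the adjoint systems must be re-solved, this is the origin of the $4 N_\alpha$ large-scale computations tallied in Section 3.1.4.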

3.1.2. Forward Sensitivity Analysis Methodology (FSAM)

The aim of the Forward Sensitivity Analysis Methodology (FSAM) is to compute the response sensitivities without introducing methodological approximations (errors) of their own, as is the case when using methods that need brute-force re-computations (e.g., finite-difference errors, statistical errors). The mathematical framework of the FSAM is set in the vector-space of the model parameters $\boldsymbol{\alpha}$. In this vector-space, the first-order partial sensitivity of $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ to a generic model parameter $\alpha_i$ can in principle be obtained by taking the first-order partial derivative of the response and of the equations underlying the model with respect to the generic model/response parameter $\alpha_i$, evaluated at the nominal parameter values $\boldsymbol{\alpha}^0$. Recall that, in a linear vector space comprising function-valued elements of the form $\boldsymbol{\eta}(x) \equiv [\eta_1(x), \ldots, \eta_K(x)]^\dagger$, the most comprehensive—and practically computable—concept of a “partial derivative” of a nonlinear operator $F[\eta_1(x), \ldots, \eta_K(x)]$ with respect to any one of its arguments, $\eta_i$, stems from the properties of the 1st-order partial Gateaux- (G-) variation, $\delta F_i[\eta_1(x), \ldots, \eta_K(x); h_i(x)]$, of $F[\eta_1(x), \ldots, \eta_K(x)]$, which is defined as follows:
$\delta F_i[\eta_1(x), \ldots, \eta_K(x); h_i(x)] \equiv \left\{ \frac{d}{d\varepsilon} F[\eta_1(x), \ldots, \eta_i(x) + \varepsilon h_i(x), \ldots, \eta_K(x)] \right\}_{\varepsilon = 0},$   (22)
where $\varepsilon$ is a scalar (i.e., $\varepsilon \in \mathbb{F}$, where $\mathbb{F}$ denotes the underlying field of real scalars) and where $h_i(x)$ denotes an arbitrary variation in $\eta_i(x)$.
The total G-variation $\delta F[\eta_1(x), \ldots, \eta_K(x); h_1(x), \ldots, h_K(x)]$ of $F[\eta_1(x), \ldots, \eta_K(x)]$ for arbitrary variations $\mathbf{h} \equiv (h_1, \ldots, h_K)$ is defined as follows:
$\delta F[\eta_1(x), \ldots, \eta_K(x); h_1(x), \ldots, h_K(x)] \equiv \left\{ \frac{d}{d\varepsilon} F[\eta_1(x) + \varepsilon h_1(x), \ldots, \eta_K(x) + \varepsilon h_K(x)] \right\}_{\varepsilon = 0}.$   (23)
The first-order G-variation $\delta F(\boldsymbol{\eta}; \mathbf{h})$ is an operator defined on the same domain as $F(\boldsymbol{\eta})$, and has the same range as $F(\boldsymbol{\eta})$. The G-variation $\delta F(\boldsymbol{\eta}; \mathbf{h})$ satisfies the relation $F(\boldsymbol{\eta} + \varepsilon \mathbf{h}) - F(\boldsymbol{\eta}) = \delta F(\boldsymbol{\eta}; \varepsilon \mathbf{h}) + \Delta(\varepsilon \mathbf{h})$, with $\lim_{\varepsilon \to 0} [\Delta(\varepsilon \mathbf{h}) / \varepsilon] = 0$. The existence of the G-variation $\delta F(\boldsymbol{\eta}; \mathbf{h})$ does not guarantee its numerical computability. Numerical methods most often require that $\delta F(\boldsymbol{\eta}; \mathbf{h})$ be linear in the variations $\mathbf{h}$ in a neighborhood $(\boldsymbol{\eta} + \varepsilon \mathbf{h})$ around $\boldsymbol{\eta}$. The necessary and sufficient conditions for the G-differential $\delta F(\boldsymbol{\eta}; \mathbf{h})$ of a nonlinear operator $F(\boldsymbol{\eta})$ to be linear in the variations $\mathbf{h}$ in a neighborhood $(\boldsymbol{\eta} + \varepsilon \mathbf{h})$ around $\boldsymbol{\eta}$ are as follows:
(i) $F(\boldsymbol{\eta})$ satisfies a weak Lipschitz condition at $\boldsymbol{\eta}$;   (24)
(ii) For two arbitrary vectors of variations $\mathbf{h}_1$ and $\mathbf{h}_2$, the operator $F(\boldsymbol{\eta})$ satisfies the relation
$F(\boldsymbol{\eta} + \varepsilon \mathbf{h}_1 + \varepsilon \mathbf{h}_2) - F(\boldsymbol{\eta} + \varepsilon \mathbf{h}_1) - F(\boldsymbol{\eta} + \varepsilon \mathbf{h}_2) + F(\boldsymbol{\eta}) = o(\varepsilon).$   (25)
In practice, it is not necessary to investigate whether $F(\boldsymbol{\eta})$ satisfies the conditions presented in Equations (24) and (25), since it is usually evident whether the right-side of Equation (23) is linear (or not) in the variations $\mathbf{h}$. When the total variation $\delta F(\boldsymbol{\eta}; \mathbf{h})$ is linear in $\mathbf{h}$, it is called the “total differential of $F(\boldsymbol{\eta})$” and is denoted as $D F(\boldsymbol{\eta}; \mathbf{h})$. In this case, the partial variation $\delta F_i(\eta_1, \ldots, \eta_K; h_i)$ is called the “partial differential of $F(\boldsymbol{\eta})$ with respect to $\eta_i$” and the following representation holds:
$D F(\eta_1, \ldots, \eta_K; h_1, \ldots, h_K) = \sum_{i=1}^{K} \frac{\partial F(\eta_1, \ldots, \eta_K)}{\partial \eta_i} \, h_i,$   (26)
where the quantities $\partial F(\eta_1, \ldots, \eta_K) / \partial \eta_i$ denote the partial derivatives of $F(\boldsymbol{\eta})$ with respect to its arguments $\eta_i$. It will henceforth be assumed that all of the operators considered in this work satisfy the conditions presented in Equations (24) and (25), so that they admit partial G-derivatives and the representation shown in Equation (26) exists.
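Numerically, the total G-variation of Equation (23) is simply a directional derivative with respect to the scalar $\varepsilon$. The toy Python sketch below (with a hypothetical functional `F` chosen only for illustration) approximates Equation (23) by differencing in $\varepsilon$ and checks it against the representation of Equation (26):

import numpy as np

def F(eta):
    # A simple nonlinear functional of two "state functions" represented as vectors.
    eta1, eta2 = eta
    return np.dot(eta1, eta1) * np.sum(np.exp(eta2))

def total_g_variation(F, eta, h, eps=1e-6):
    # { d/d(eps) F(eta + eps*h) }_{eps=0}, approximated by a central difference in eps,  cf. Equation (23)
    eta_plus = [e + eps * dh for e, dh in zip(eta, h)]
    eta_minus = [e - eps * dh for e, dh in zip(eta, h)]
    return (F(eta_plus) - F(eta_minus)) / (2.0 * eps)

eta = [np.array([1.0, 2.0]), np.array([0.1, -0.3])]
h = [np.array([0.5, -1.0]), np.array([0.2, 0.4])]

# Equation (26) for this F: D F = 2 (eta1.h1) sum(exp(eta2)) + (eta1.eta1) (exp(eta2).h2)
exact = 2.0 * np.dot(eta[0], h[0]) * np.sum(np.exp(eta[1])) \
      + np.dot(eta[0], eta[0]) * np.dot(np.exp(eta[1]), h[1])
print(total_g_variation(F, eta, h), exact)  # agree to round-off/truncation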
The sensitivity $\{ \partial R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] / \partial \alpha_j \}_{\boldsymbol{\alpha}^0}$ of $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ to a model parameter $\alpha_j$, evaluated at the nominal parameter values $\boldsymbol{\alpha}^0$, is obtained by determining the partial G-differential $\delta R[\boldsymbol{\varphi}^0(x), \boldsymbol{\psi}^0(x); \boldsymbol{\alpha}^0; \delta\alpha_j] = \{ \partial R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] / \partial \alpha_j \}_{\boldsymbol{\alpha}^0} \, \delta\alpha_j$ of $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ with respect to the respective parameter $\alpha_j$, for each $j = 1, \ldots, N_\alpha$. Since the forward state function $\boldsymbol{\varphi}(x)$ is the solution of Equations (1) and (2), while the adjoint state function $\boldsymbol{\psi}(x)$ is the solution of Equations (9) and (10), it follows that both $\boldsymbol{\varphi}(x)$ and $\boldsymbol{\psi}(x)$ are implicit functions of the model parameters $\boldsymbol{\alpha} \equiv (\alpha_1, \ldots, \alpha_{N_\alpha})^\dagger$. Hence, considering that $\boldsymbol{\varphi}(x) = \boldsymbol{\varphi}(x; \boldsymbol{\alpha})$ and $\boldsymbol{\psi}(x) = \boldsymbol{\psi}(x; \boldsymbol{\alpha})$, and applying the definitions provided in Equations (20)–(22), yields the following expression for the 1st-order partial sensitivity $\{ \partial R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] / \partial \alpha_j \}_{\boldsymbol{\alpha}^0}$, for each $j = 1, \ldots, N_\alpha$:
$\left\{ \frac{\partial R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \alpha_j} \right\}_{\boldsymbol{\alpha}^0} \equiv \frac{1}{\delta\alpha_j} \left\{ \frac{d}{d\varepsilon} R\big[ \boldsymbol{\varphi}(\alpha_j^0 + \varepsilon \, \delta\alpha_j); \boldsymbol{\psi}(\alpha_j^0 + \varepsilon \, \delta\alpha_j); \alpha_j^0 + \varepsilon \, \delta\alpha_j \big] \right\}_{\varepsilon = 0}$
$= \prod_{k=1}^{N_x} \left\{ \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \, \frac{\partial S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \alpha_j} \right\}_{\boldsymbol{\alpha}^0}$
$+ \sum_{k=1}^{N_x} \prod_{m=1, m \neq k}^{N_x} \left\{ \int_{\lambda_m(\boldsymbol{\alpha})}^{\omega_m(\boldsymbol{\alpha})} dx_m \left[ S[\boldsymbol{\varphi}(x_1, \ldots, \omega_k(\boldsymbol{\alpha}), \ldots, x_{N_x}), \boldsymbol{\psi}(x_1, \ldots, \omega_k(\boldsymbol{\alpha}), \ldots, x_{N_x}); \boldsymbol{\alpha}] \, \frac{\partial \omega_k(\boldsymbol{\alpha})}{\partial \alpha_j} - S[\boldsymbol{\varphi}(x_1, \ldots, \lambda_k(\boldsymbol{\alpha}), \ldots, x_{N_x}), \boldsymbol{\psi}(x_1, \ldots, \lambda_k(\boldsymbol{\alpha}), \ldots, x_{N_x}); \boldsymbol{\alpha}] \, \frac{\partial \lambda_k(\boldsymbol{\alpha})}{\partial \alpha_j} \right] \right\}_{\boldsymbol{\alpha}^0}$
$+ \prod_{k=1}^{N_x} \left\{ \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \left[ \frac{\partial S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \boldsymbol{\varphi}} \cdot \frac{\partial \boldsymbol{\varphi}}{\partial \alpha_j} + \frac{\partial S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \boldsymbol{\psi}} \cdot \frac{\partial \boldsymbol{\psi}}{\partial \alpha_j} \right] \right\}_{\boldsymbol{\alpha}^0}, \quad j = 1, \ldots, N_\alpha.$   (27)
The functions $\partial \boldsymbol{\varphi}(x) / \partial \alpha_j$ and $\partial \boldsymbol{\psi}(x) / \partial \alpha_j$ are obtained by solving the system of equations obtained by taking the first partial G-derivative of Equations (1), (2), (9) and (10) with respect to the model parameter $\alpha_j$. Applying the definition provided in Equation (22) to Equations (1), (2), (9) and (10) yields, after cancelling the scalar parameter variation $\delta\alpha_j$ on both sides of the equal signs in the resulting expressions, the following First-Order Forward Sensitivity System (1st-OFSS):
$\mathbf{L}(x; \boldsymbol{\alpha}^0) \, \frac{\partial \boldsymbol{\varphi}}{\partial \alpha_j} = \left\{ \mathbf{f}^{(1)}(j; \boldsymbol{\varphi}; \boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0}; \quad \mathbf{f}^{(1)}(j; \boldsymbol{\varphi}; \boldsymbol{\alpha}) \equiv \frac{\partial \mathbf{q}_\varphi(x; \boldsymbol{\alpha})}{\partial \alpha_j} - \frac{\partial \mathbf{L}(x; \boldsymbol{\alpha})}{\partial \alpha_j} \, \boldsymbol{\varphi}(x); \quad x \in \Omega_x; \quad j = 1, \ldots, N_\alpha;$   (28)
$\left\{ \mathbf{B}_\varphi(x; \boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0} \frac{\partial \boldsymbol{\varphi}}{\partial \alpha_j} = \left\{ \mathbf{c}_\varphi^{(1)}(j; \boldsymbol{\varphi}; \boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0}; \quad \mathbf{c}_\varphi^{(1)}(j; \boldsymbol{\varphi}; \boldsymbol{\alpha}) \equiv - \frac{\partial \big[ \mathbf{B}_\varphi(x; \boldsymbol{\alpha}) \, \boldsymbol{\varphi} - \mathbf{c}_\varphi(x; \boldsymbol{\alpha}) \big]}{\partial \alpha_j}; \quad x \in \partial\Omega_x(\boldsymbol{\alpha}^0);$   (29)
$\mathbf{L}^*(x; \boldsymbol{\alpha}^0) \, \frac{\partial \boldsymbol{\psi}(x)}{\partial \alpha_j} = \left\{ \mathbf{g}^{(1)}(j; \boldsymbol{\psi}; \boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0}; \quad \mathbf{g}^{(1)}(j; \boldsymbol{\psi}; \boldsymbol{\alpha}) \equiv \frac{\partial \mathbf{q}_\psi(x; \boldsymbol{\alpha})}{\partial \alpha_j} - \frac{\partial \mathbf{L}^*(x; \boldsymbol{\alpha})}{\partial \alpha_j} \, \boldsymbol{\psi}(x); \quad x \in \Omega_x; \quad j = 1, \ldots, N_\alpha;$   (30)
$\left\{ \mathbf{B}_\psi(x; \boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0} \frac{\partial \boldsymbol{\psi}(x)}{\partial \alpha_j} = \left\{ \mathbf{c}_\psi^{(1)}(j; \boldsymbol{\psi}; \boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0}; \quad \mathbf{c}_\psi^{(1)}(j; \boldsymbol{\psi}; \boldsymbol{\alpha}) \equiv - \frac{\partial \mathbf{B}_\psi(x; \boldsymbol{\alpha})}{\partial \alpha_j} \, \boldsymbol{\psi}(x) + \frac{\partial \mathbf{c}_\psi(x; \boldsymbol{\alpha})}{\partial \alpha_j}; \quad x \in \partial\Omega_x(\boldsymbol{\alpha}^0).$   (31)
The system of equations comprising Equations (28) through (31) is called the 1st-OFSS because its solutions are the first-order derivatives $\partial \boldsymbol{\varphi}(x) / \partial \alpha_j$ and $\partial \boldsymbol{\psi}(x) / \partial \alpha_j$ of the forward and adjoint state functions, respectively. Evidently, the 1st-OFSS would need to be solved $2 N_\alpha$ times, with different right-sides, in order to compute the 1st-order derivatives of the forward and, respectively, adjoint state functions with respect to all model parameters $\alpha_j$, $j = 1, \ldots, N_\alpha$. Subsequently, the solutions $\partial \boldsymbol{\varphi}(x) / \partial \alpha_j$ and $\partial \boldsymbol{\psi}(x) / \partial \alpha_j$ of the 1st-OFSS would be used in Equation (27) to compute the first-order response sensitivity $\{ \partial R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] / \partial \alpha_j \}_{\boldsymbol{\alpha}^0}$, for each $j = 1, \ldots, N_\alpha$, using quadrature formulas. The quadrature computations are small-scale, inexpensive computations, but solving the 1st-OFSS entails large-scale computations, even though only the same linear operators, namely $\mathbf{L}(x; \boldsymbol{\alpha})$ and $\mathbf{L}^*(x; \boldsymbol{\alpha})$, need to be inverted each time (which means that the same codes/solvers can be used, without additional programming for inverting these operators). Thus, the FSAM is advantageous to employ only if, in the problem under consideration, the number $N_\alpha$ of model parameters is considerably smaller than the number of responses of interest. This is rarely the case in practice, however, since most problems of practical interest are characterized by many model parameters and comparatively few responses.
It is evident from Equations (27)–(31) that the 1st-order partial sensitivities $\{ \partial R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] / \partial \alpha_j \}_{\boldsymbol{\alpha}^0}$ are functions of the quantities $\partial \boldsymbol{\varphi}(x) / \partial \alpha_j$ and $\partial \boldsymbol{\psi}(x) / \partial \alpha_j$, as well as of the index $j = 1, \ldots, N_\alpha$. Therefore, for the subsequent derivation of the 2nd- and higher-order sensitivities of the response with respect to the model parameters, it is convenient to introduce the following notation for the 1st-order response sensitivities obtained by using the FSAM:
$R^{(1)}\!\left( j; \boldsymbol{\varphi}; \boldsymbol{\psi}; \boldsymbol{\alpha}; \frac{\partial \boldsymbol{\varphi}}{\partial \alpha_j}; \frac{\partial \boldsymbol{\psi}}{\partial \alpha_j} \right) \equiv \frac{\partial R(\boldsymbol{\varphi}; \boldsymbol{\psi}; \boldsymbol{\alpha})}{\partial \alpha_j}.$   (32)

3.1.3. First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM)

The aim of the First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM) is to find an alternative way of expressing the contribution of the variations in the state functions (the so-called “indirect-effect term”) to the total response sensitivity, so as to avoid the need to compute the derivatives $\partial \boldsymbol{\varphi}(x) / \partial \alpha_j$ and $\partial \boldsymbol{\psi}(x) / \partial \alpha_j$ of the state functions with respect to the model’s parameters, as is the case when using the FSAM. In contradistinction to the FSAM, which is framed in the vector-space of the parameters $\boldsymbol{\alpha}$, the 1st-CASAM is framed in the combined phase-space of the parameters $\boldsymbol{\alpha}$ and the state functions $\boldsymbol{\varphi}(x)$ and $\boldsymbol{\psi}(x)$.
The 1st-order Gateaux- (G-) variation, denoted as $\delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}, \delta\boldsymbol{\alpha})$, of the response $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ for arbitrary variations $\delta\boldsymbol{\varphi}(x)$, $\delta\boldsymbol{\psi}(x)$, $\delta\boldsymbol{\alpha}$ in the model parameters and state functions, in a neighborhood $[\boldsymbol{\varphi}^0(x) + \varepsilon \, \delta\boldsymbol{\varphi}(x), \boldsymbol{\psi}^0(x) + \varepsilon \, \delta\boldsymbol{\psi}(x); \boldsymbol{\alpha}^0 + \varepsilon \, \delta\boldsymbol{\alpha}]$ around $(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0)$, where $\varepsilon \in \mathbb{F}$ is a real scalar ($\mathbb{F}$ denotes the underlying field of scalars), is defined as follows:
$\delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}, \delta\boldsymbol{\alpha}) \equiv \left\{ \frac{d}{d\varepsilon} R\big[ \boldsymbol{\varphi}^0(x) + \varepsilon \, \delta\boldsymbol{\varphi}(x), \boldsymbol{\psi}^0(x) + \varepsilon \, \delta\boldsymbol{\psi}(x); \boldsymbol{\alpha}^0 + \varepsilon \, \delta\boldsymbol{\alpha} \big] \right\}_{\varepsilon = 0} = \left\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\alpha}) \right\}_{dir} + \left\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}) \right\}_{ind},$   (33)
where the “direct-effect” term { δ R ( φ 0 , ψ 0 , α 0 ; δ α ) } d i r depends only on the parameter variations δ α and is defined as follows:
$\left\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\alpha}) \right\}_{dir} \equiv \left\{ \frac{\partial}{\partial \boldsymbol{\alpha}} \prod_{k=1}^{N_x} \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \, S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}] \right\}_{\boldsymbol{\alpha}^0} \delta\boldsymbol{\alpha} = \prod_{k=1}^{N_x} \left\{ \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \, \frac{\partial S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \boldsymbol{\alpha}} \, \delta\boldsymbol{\alpha} \right\}_{\boldsymbol{\alpha}^0}$
$+ \sum_{k=1}^{N_x} \prod_{m=1, m \neq k}^{N_x} \left\{ \int_{\lambda_m(\boldsymbol{\alpha})}^{\omega_m(\boldsymbol{\alpha})} dx_m \, S[\boldsymbol{\varphi}(x_1, \ldots, \omega_k(\boldsymbol{\alpha}), \ldots, x_{N_x}), \boldsymbol{\psi}(x_1, \ldots, \omega_k(\boldsymbol{\alpha}), \ldots, x_{N_x}); \boldsymbol{\alpha}] \, \frac{\partial \omega_k(\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}} \right\}_{\boldsymbol{\alpha}^0} \delta\boldsymbol{\alpha}$
$- \sum_{k=1}^{N_x} \prod_{m=1, m \neq k}^{N_x} \left\{ \int_{\lambda_m(\boldsymbol{\alpha})}^{\omega_m(\boldsymbol{\alpha})} dx_m \, S[\boldsymbol{\varphi}(x_1, \ldots, \lambda_k(\boldsymbol{\alpha}), \ldots, x_{N_x}), \boldsymbol{\psi}(x_1, \ldots, \lambda_k(\boldsymbol{\alpha}), \ldots, x_{N_x}); \boldsymbol{\alpha}] \, \frac{\partial \lambda_k(\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}} \right\}_{\boldsymbol{\alpha}^0} \delta\boldsymbol{\alpha},$   (34)
while the “indirect-effect” term { δ R ( φ 0 , ψ 0 , α 0 ; δ φ , δ ψ ) } i n d depends only on the variations δ φ and δ ψ in the state functions, and is defined as follows:
$\left\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}) \right\}_{ind} = \prod_{k=1}^{N_x} \left\{ \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \left[ \frac{\partial S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \boldsymbol{\varphi}} \, \delta\boldsymbol{\varphi} + \frac{\partial S[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]}{\partial \boldsymbol{\psi}} \, \delta\boldsymbol{\psi} \right] \right\}_{\boldsymbol{\alpha}^0}.$   (35)
In Equations (34) and (35), the notation { } α 0 has been used to indicate that the quantity within the brackets is to be evaluated at the nominal values of the parameters and state functions. This simplified notation is justified by the fact that when the parameters take on their nominal values, it implicitly means that the corresponding state functions also take on their corresponding nominal values, in view of Equations (3), (4), (11) and (12). This simplified notation will be used throughout this work.
The “direct-effect” term $\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\alpha}) \}_{dir}$ defined in Equation (34) depends directly on the parameter variations $\delta\boldsymbol{\alpha}$, and can be computed immediately since it does not depend on the variations $\delta\boldsymbol{\varphi}(x)$ and $\delta\boldsymbol{\psi}(x)$. On the other hand, the “indirect-effect” term $\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}) \}_{ind}$ defined in Equation (35) depends indirectly on the parameter variations $\delta\boldsymbol{\alpha}$, through the variations $\delta\boldsymbol{\varphi}(x)$ and $\delta\boldsymbol{\psi}(x)$ in the forward and adjoint state functions $\boldsymbol{\varphi}(x)$ and $\boldsymbol{\psi}(x)$, and can be computed only after having computed the values of the variations $\delta\boldsymbol{\varphi}(x)$ and $\delta\boldsymbol{\psi}(x)$.
The variations δ φ ( x ) and δ ψ ( x ) are the solutions of the system of equations obtained by taking the G-differentials of Equations (1), (2), (9) and (10), which can be written in the following matrix-vector form:
$\left\{ \mathbf{V}^{(1)}(\boldsymbol{\alpha}) \, \delta\mathbf{u}^{(1)}(x) \right\}_{\boldsymbol{\alpha}^0} = \left\{ \mathbf{q}^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0}, \quad x \in \Omega_x,$   (36)
$\left\{ \mathbf{b}_V^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\mathbf{u}^{(1)}; \delta\boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0} = \mathbf{0}, \quad x \in \partial\Omega_x(\boldsymbol{\alpha}^0),$   (37)
where
$\mathbf{V}^{(1)}(\boldsymbol{\alpha}) \equiv \begin{pmatrix} \mathbf{L}(\boldsymbol{\alpha}) & \mathbf{0} \\ \mathbf{0} & \mathbf{L}^*(\boldsymbol{\alpha}) \end{pmatrix}; \quad \mathbf{u}^{(1)}(x) \equiv \begin{pmatrix} \boldsymbol{\varphi}(x) \\ \boldsymbol{\psi}(x) \end{pmatrix}; \quad \delta\mathbf{u}^{(1)}(x) \equiv \begin{pmatrix} \delta\boldsymbol{\varphi}(x) \\ \delta\boldsymbol{\psi}(x) \end{pmatrix}; \quad \mathbf{q}^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \equiv \begin{pmatrix} \mathbf{q}_1^{(1)}(\boldsymbol{\varphi}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \\ \mathbf{q}_2^{(1)}(\boldsymbol{\psi}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \end{pmatrix}; \quad \mathbf{b}_V^{(1)}(\mathbf{u}^{(1)}; \delta\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \equiv \begin{pmatrix} \mathbf{b}_1^{(1)}(\boldsymbol{\varphi}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \\ \mathbf{b}_2^{(1)}(\boldsymbol{\psi}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \end{pmatrix}.$   (38)
$\left\{ \mathbf{q}_1^{(1)}(\boldsymbol{\varphi}, \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0} \equiv \left\{ \frac{\partial \big[ \mathbf{q}_\varphi(\boldsymbol{\alpha}) - \mathbf{L}(\boldsymbol{\alpha}) \, \boldsymbol{\varphi} \big]}{\partial \boldsymbol{\alpha}} \right\}_{\boldsymbol{\alpha}^0} \delta\boldsymbol{\alpha},$   (39)
$\left\{ \mathbf{q}_2^{(1)}(\boldsymbol{\psi}, \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \right\}_{\boldsymbol{\alpha}^0} \equiv \left\{ \frac{\partial \big[ \mathbf{q}_\psi(\boldsymbol{\alpha}) - \mathbf{L}^*(\boldsymbol{\alpha}) \, \boldsymbol{\psi}(x) \big]}{\partial \boldsymbol{\alpha}} \right\}_{\boldsymbol{\alpha}^0} \delta\boldsymbol{\alpha}.$   (40)
The matrices $\partial \mathbf{q}_\varphi(\boldsymbol{\alpha}) / \partial \boldsymbol{\alpha}$ and $\partial [\mathbf{L}(\boldsymbol{\alpha}) \boldsymbol{\varphi}] / \partial \boldsymbol{\alpha}$, which appear on the right-side of Equation (36), are defined as follows:
$\frac{\partial \mathbf{q}_\varphi(\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}} \equiv \begin{pmatrix} \dfrac{\partial q_{\varphi,1}}{\partial \alpha_1} & \cdots & \dfrac{\partial q_{\varphi,1}}{\partial \alpha_{N_\alpha}} \\ \vdots & & \vdots \\ \dfrac{\partial q_{\varphi,N_\varphi}}{\partial \alpha_1} & \cdots & \dfrac{\partial q_{\varphi,N_\varphi}}{\partial \alpha_{N_\alpha}} \end{pmatrix}; \qquad \frac{\partial [\mathbf{L}(\boldsymbol{\alpha}) \boldsymbol{\varphi}]}{\partial \boldsymbol{\alpha}} \equiv \begin{pmatrix} \dfrac{\partial \big[ \sum_{j=1}^{N_\varphi} L_{1,j}(\boldsymbol{\alpha}) \varphi_j \big]}{\partial \alpha_1} & \cdots & \dfrac{\partial \big[ \sum_{j=1}^{N_\varphi} L_{1,j}(\boldsymbol{\alpha}) \varphi_j \big]}{\partial \alpha_{N_\alpha}} \\ \vdots & & \vdots \\ \dfrac{\partial \big[ \sum_{j=1}^{N_\varphi} L_{N_\varphi,j}(\boldsymbol{\alpha}) \varphi_j \big]}{\partial \alpha_1} & \cdots & \dfrac{\partial \big[ \sum_{j=1}^{N_\varphi} L_{N_\varphi,j}(\boldsymbol{\alpha}) \varphi_j \big]}{\partial \alpha_{N_\alpha}} \end{pmatrix}.$   (41)
The system of equations comprising Equations (36) and (37) will be called the “1st-Level Variational Sensitivity System” (1st-LVSS) and its solution, δ u ( 1 ) ( x ) , will be called the “1st-Level variational sensitivity function.” The superscript “(1)” used in Equations (36) and (37) indicates “1st-Level” (as opposed to “first-order”) because the 1st-LVSS differs from the 1st-OFSS in that the 1st-LVSS involves the total first-order differential δ u ( 1 ) ( x ) of the state functions whereas the 1st-OFSS involves the first-order partial derivatives of the state functions with respect to a single model parameter α j . In principle, the 1st-LVSS could be solved for each possible component of the vectors of parameter variations δ α to obtain the functions δ φ ( x ) and δ ψ ( x ) . Subsequently, the solutions δ φ ( x ) and δ ψ ( x ) of the 1st-LVSS could be used together with the known parameter variations δ α in Equation (35) to compute the indirect-effect term { δ R ( φ 0 , ψ 0 , α 0 ; δ φ , δ ψ ) } i n d . Computing the indirect-effect term { δ R ( φ 0 , ψ 0 , α 0 ; δ φ , δ ψ ) } i n d by solving the 1st-LVSS would require at least 2 N α large-scale computations (to solve the 1st-LVSS) for every independent component of the vectors of parameter variations δ α . Therefore, solving the 1st-LVSS is advantageous to employ only if, in the problem under consideration, the number N α of model and boundary parameters is considerably less than the number of responses of interest. This is rarely the case in practice, however, since most problems of practical interest are characterized by many model parameters and comparatively few responses.
The indirect-effect term $\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}) \}_{ind}$ defined in Equation (35) can be computed by applying the First-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM), which avoids the need for computing the functions $\delta\boldsymbol{\varphi}(x)$ and $\delta\boldsymbol{\psi}(x)$. The principles underlying the 1st-CASAM are as follows:
1. Introduce a Hilbert space, denoted as $H_1$, comprising square-integrable, vector-valued elements of the form $\boldsymbol{\eta}^{(1)}(x) \equiv [\boldsymbol{\eta}_1^{(1)}(x), \boldsymbol{\eta}_2^{(1)}(x)]^\dagger$, with $\boldsymbol{\eta}_i^{(1)}(x) \equiv [\eta_{i,1}^{(1)}(x), \ldots, \eta_{i,j}^{(1)}(x), \ldots, \eta_{i,N_\varphi}^{(1)}(x)]^\dagger$, $i = 1, 2$, and endowed with an inner product between two elements, $\boldsymbol{\eta}^{(1)}(x) \in H_1$ and $\boldsymbol{\xi}^{(1)}(x) \in H_1$, which will be denoted as $\langle \boldsymbol{\eta}^{(1)}(x), \boldsymbol{\xi}^{(1)}(x) \rangle_1$ and defined as follows:
$\big\langle \boldsymbol{\eta}^{(1)}(x), \boldsymbol{\xi}^{(1)}(x) \big\rangle_1 \equiv \sum_{i=1}^{2} \big\langle \boldsymbol{\eta}_i^{(1)}(x), \boldsymbol{\xi}_i^{(1)}(x) \big\rangle_0.$   (42)
2. In the Hilbert space $H_1$, form the inner product of Equation (36) with a yet undefined vector-valued function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger \in H_1$ to obtain the following relation:
$\left\{ \big\langle \mathbf{a}^{(1)}(x), \mathbf{V}^{(1)}(\boldsymbol{\alpha}) \, \delta\mathbf{u}^{(1)}(x) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0} = \left\{ \big\langle \mathbf{a}^{(1)}(x), \mathbf{q}^{(1)}(\boldsymbol{\varphi}; \boldsymbol{\psi}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0}.$   (43)
3. Using the definition of the adjoint operator in the Hilbert space $H_1$, recast the left-side of Equation (43) as follows:
$\left\{ \big\langle \mathbf{a}^{(1)}(x), \mathbf{V}^{(1)}(\boldsymbol{\alpha}) \, \delta\mathbf{u}^{(1)}(x) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0} = \left\{ \big\langle \delta\mathbf{u}^{(1)}(x), \mathbf{A}^{(1)}(\boldsymbol{\alpha}) \, \mathbf{a}^{(1)}(x) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0} + \left\{ P^{(1)}\big[ \delta\mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0},$   (44)
where $\{ P^{(1)}[\delta\mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}] \}_{\boldsymbol{\alpha}^0}$ denotes the bilinear concomitant defined on the phase-space boundary $x \in \partial\Omega_x(\boldsymbol{\alpha}^0)$, and where $\mathbf{A}^{(1)}(\boldsymbol{\alpha})$ is the operator formally adjoint to $\mathbf{V}^{(1)}(\boldsymbol{\alpha})$, i.e.,
$\mathbf{A}^{(1)}(\boldsymbol{\alpha}) \equiv \big[ \mathbf{V}^{(1)}(\boldsymbol{\alpha}) \big]^* = \begin{pmatrix} \mathbf{L}^*(\boldsymbol{\alpha}) & \mathbf{0} \\ \mathbf{0} & \mathbf{L}(\boldsymbol{\alpha}) \end{pmatrix}.$   (45)
4. Require the first term on the right-side of Equation (44) to represent the indirect-effect term defined in Equation (35), to obtain the following relation:
$\mathbf{A}^{(1)}(\boldsymbol{\alpha}) \, \mathbf{a}^{(1)}(x) = \mathbf{s}^{(1)}\big[ \mathbf{u}^{(1)}(x); \boldsymbol{\alpha} \big],$   (46)
where
$\mathbf{s}^{(1)}\big[ \mathbf{u}^{(1)}(x); \boldsymbol{\alpha} \big] \equiv \big[ \mathbf{s}_1^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}), \mathbf{s}_2^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}) \big]^\dagger \equiv \begin{pmatrix} \partial S(\mathbf{u}^{(1)}; \boldsymbol{\alpha}) / \partial \boldsymbol{\varphi} \\ \partial S(\mathbf{u}^{(1)}; \boldsymbol{\alpha}) / \partial \boldsymbol{\psi} \end{pmatrix}.$   (47)
5. Implement the boundary conditions given in Equation (37) into Equation (44) and eliminate the remaining unknown boundary-values of the functions $\delta\boldsymbol{\varphi}$ and $\delta\boldsymbol{\psi}$ from the expression of the bilinear concomitant $\{ P^{(1)}[\delta\mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}] \}_{\boldsymbol{\alpha}^0}$ by selecting appropriate boundary conditions for the function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$, so as to ensure that Equation (46) is well-posed while being independent of unknown values of $\delta\boldsymbol{\varphi}$, $\delta\boldsymbol{\psi}$, and $\delta\boldsymbol{\alpha}$. The boundary conditions thus chosen for the function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$ can be represented in operator form as follows:
$\left\{ \mathbf{b}_A^{(1)}\big[ \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0} = \mathbf{0}, \quad x \in \partial\Omega_x(\boldsymbol{\alpha}^0).$   (48)
6. The selection of the boundary conditions for the adjoint function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$ represented by Equation (48) eliminates the appearance of the unknown values of $\delta\mathbf{u}^{(1)}(x)$ in $\{ P^{(1)}[\delta\mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}] \}_{\boldsymbol{\alpha}^0}$ and reduces this bilinear concomitant to a residual quantity that contains boundary terms involving only known values of $\boldsymbol{\varphi}(x)$, $\boldsymbol{\psi}(x)$, $\mathbf{a}^{(1)}(x)$, $\boldsymbol{\alpha}$ and $\delta\boldsymbol{\alpha}$. This residual quantity will be denoted as $\{ \hat{P}^{(1)}[\mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}] \}_{\boldsymbol{\alpha}^0}$. In general, this residual quantity does not automatically vanish, although it may do so occasionally.
7. The system of equations comprising Equation (46) together with the boundary conditions represented by Equation (48) constitutes the 1st-Level Adjoint Sensitivity System (1st-LASS). The solution $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$ of the 1st-LASS will be called the 1st-level adjoint function. The 1st-LASS is called “first-level” (as opposed to “first-order”) because it does not contain any differential or functional derivatives, but its solution $\mathbf{a}^{(1)}(x)$ will be used below to compute the first-order sensitivities of the response with respect to the model parameters. This terminology will also be used in the sequel, when deriving the expressions for the 2nd- and 3rd-order sensitivities.
8. It follows from Equations (43) and (44) that the following relation holds:
$\left\{ \big\langle \mathbf{a}^{(1)}(x), \mathbf{q}^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0} = \left\{ \big\langle \delta\mathbf{u}^{(1)}(x), \mathbf{A}^{(1)}(\boldsymbol{\alpha}) \, \mathbf{a}^{(1)}(x) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0} + \left\{ \hat{P}^{(1)}\big[ \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0}.$   (49)
9. Recalling that the first term on the right-side of Equation (49) is, in view of Equation (46), the indirect-effect term $\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}) \}_{ind}$, it follows from Equation (49) that the indirect-effect term can be expressed in terms of the 1st-level adjoint function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$ as follows:
$\left\{ \delta R(\boldsymbol{\varphi}^0, \boldsymbol{\psi}^0, \boldsymbol{\alpha}^0; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}) \right\}_{ind} = \left\{ \big\langle \mathbf{a}^{(1)}(x), \mathbf{q}^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \big\rangle_1 \right\}_{\boldsymbol{\alpha}^0} - \left\{ \hat{P}^{(1)}\big[ \mathbf{u}^{(1)}; \mathbf{a}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0} = \left\{ \sum_{i=1}^{2} \big\langle \mathbf{a}_i^{(1)}(x), \mathbf{q}_i^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \big\rangle_0 \right\}_{\boldsymbol{\alpha}^0} - \left\{ \hat{P}^{(1)}\big[ \mathbf{u}^{(1)}; \mathbf{a}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0} \equiv \left\{ \delta R\big( \mathbf{u}^{(1),0}; \mathbf{a}^{(1),0}; \boldsymbol{\alpha}^0; \delta\boldsymbol{\alpha} \big) \right\}_{ind}.$   (50)
As indicated by the last equality in Equation (50), the variations $\delta\boldsymbol{\varphi}$ and $\delta\boldsymbol{\psi}$ have been eliminated from the original expression of the indirect-effect term, which now depends instead on the 1st-level adjoint function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$. As indicated in Equation (46), solving the 1st-Level Adjoint Sensitivity System (1st-LASS) entails the following operations: (i) inverting (i.e., solving) the original left-side of the adjoint equation with the source $\partial S(\mathbf{u}^{(1)}; \boldsymbol{\alpha}) / \partial \boldsymbol{\varphi}$ to obtain the 1st-level adjoint function $\mathbf{a}_1^{(1)}(x)$; and (ii) inverting the original left-side of the forward equation with the source $\partial S(\mathbf{u}^{(1)}; \boldsymbol{\alpha}) / \partial \boldsymbol{\psi}$ to obtain the 1st-level adjoint function $\mathbf{a}_2^{(1)}(x)$. It is very important to note that the 1st-LASS is independent of the parameter variations $\delta\boldsymbol{\alpha}$. Hence, the 1st-LASS needs to be solved only once (as opposed to the 1st-LVSS, which would need to be solved anew for each parameter variation) to determine the 1st-level adjoint function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$. Subsequently, the “indirect-effect term” $\{ \delta R(\mathbf{u}^{(1),0}; \mathbf{a}^{(1),0}; \boldsymbol{\alpha}^0; \delta\boldsymbol{\alpha}) \}_{ind}$ is computed efficiently and exactly by simply performing the integrations involving the adjoint function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$, as indicated on the right-side of Equation (50).
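The equivalence exploited above can be checked directly in a discretized toy setting, in which the operators become matrices, boundary terms are absent, and the bilinear concomitant therefore vanishes. The Python sketch below (an illustration under these simplifying assumptions, not the paper's algorithm verbatim) computes the indirect-effect term once via the 1st-LVSS route and once via the 1st-LASS route, for a single scalar parameter and the response $R = \boldsymbol{\varphi} \cdot \boldsymbol{\psi}$:

import numpy as np

rng = np.random.default_rng(1)
n = 4
L0 = 3.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))   # L(alpha) at the nominal parameter value
dL = 0.05 * rng.normal(size=(n, n))                     # dL/d(alpha) for a single scalar parameter
q_phi, q_psi = rng.normal(size=n), rng.normal(size=n)   # sources (assumed parameter-independent here)
d_alpha = 1e-3                                          # parameter variation

phi = np.linalg.solve(L0, q_phi)      # forward solution
psi = np.linalg.solve(L0.T, q_psi)    # adjoint solution
# Response R = S(phi, psi) = phi . psi, so dS/dphi = psi and dS/dpsi = phi.

# Route 1: solve the (discrete) 1st-LVSS, Equation (36), for d_phi and d_psi.
d_phi = np.linalg.solve(L0, -dL @ phi * d_alpha)
d_psi = np.linalg.solve(L0.T, -dL.T @ psi * d_alpha)
indirect_lvss = np.dot(psi, d_phi) + np.dot(phi, d_psi)

# Route 2: solve the (discrete) 1st-LASS, Equation (46): A^(1) = diag(L^T, L), sources (dS/dphi, dS/dpsi).
a1 = np.linalg.solve(L0.T, psi)       # adjoint-like solve with source dS/dphi
a2 = np.linalg.solve(L0, phi)         # forward-like solve with source dS/dpsi
q1 = -dL @ phi * d_alpha              # right-sides of the 1st-LVSS, cf. Equations (39) and (40)
q2 = -dL.T @ psi * d_alpha
indirect_lass = np.dot(a1, q1) + np.dot(a2, q2)

print(indirect_lvss, indirect_lass)   # identical to round-off

The decisive practical difference is that the two 1st-LASS solves are independent of $\delta\boldsymbol{\alpha}$ and are therefore performed only once, whereas the 1st-LVSS route must be repeated for every parameter variation.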
As indicated in Equation (33), the total 1st-order sensitivity of the response R [ φ ( x ) , ψ ( x ) ; α ] to the model parameters is obtained by adding the expressions of the direct-effect term defined in Equation (34) and indirect-effect term as obtained in Equation (50), which yields the following expression:
$\left\{ \delta R\big[ \mathbf{u}^{(1)}(x), \boldsymbol{\alpha}; \delta\boldsymbol{\varphi}, \delta\boldsymbol{\psi}, \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0} = \prod_{k=1}^{N_x} \left\{ \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \, \frac{\partial S[\mathbf{u}^{(1)}(x); \boldsymbol{\alpha}]}{\partial \boldsymbol{\alpha}} \, \delta\boldsymbol{\alpha} \right\}_{\boldsymbol{\alpha}^0}$
$+ \sum_{k=1}^{N_x} \prod_{m=1, m \neq k}^{N_x} \left\{ \int_{\lambda_m(\boldsymbol{\alpha})}^{\omega_m(\boldsymbol{\alpha})} dx_m \, S[\boldsymbol{\varphi}(x_1, \ldots, \omega_k(\boldsymbol{\alpha}), \ldots, x_{N_x}), \boldsymbol{\psi}(x_1, \ldots, \omega_k(\boldsymbol{\alpha}), \ldots, x_{N_x}); \boldsymbol{\alpha}] \, \frac{\partial \omega_k(\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}} \, \delta\boldsymbol{\alpha} \right\}_{\boldsymbol{\alpha}^0}$
$- \sum_{k=1}^{N_x} \prod_{m=1, m \neq k}^{N_x} \left\{ \int_{\lambda_m(\boldsymbol{\alpha})}^{\omega_m(\boldsymbol{\alpha})} dx_m \, S[\boldsymbol{\varphi}(x_1, \ldots, \lambda_k(\boldsymbol{\alpha}), \ldots, x_{N_x}), \boldsymbol{\psi}(x_1, \ldots, \lambda_k(\boldsymbol{\alpha}), \ldots, x_{N_x}); \boldsymbol{\alpha}] \, \frac{\partial \lambda_k(\boldsymbol{\alpha})}{\partial \boldsymbol{\alpha}} \, \delta\boldsymbol{\alpha} \right\}_{\boldsymbol{\alpha}^0}$
$+ \left\{ \sum_{i=1}^{2} \big\langle \mathbf{a}_i^{(1)}(x), \mathbf{q}_i^{(1)}(\mathbf{u}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}) \big\rangle_0 \right\}_{\boldsymbol{\alpha}^0} - \left\{ \hat{P}^{(1)}\big[ \mathbf{u}^{(1)}; \mathbf{a}^{(1)}; \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0} \equiv \left\{ \delta R\big[ \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0}.$   (51)
The expression in Equation (51) no longer depends on the variations $\delta\boldsymbol{\varphi}$ and $\delta\boldsymbol{\psi}$, which are expensive to compute, but instead depends on the 1st-level adjoint function $\mathbf{a}^{(1)}(x) \equiv [\mathbf{a}_1^{(1)}(x), \mathbf{a}_2^{(1)}(x)]^\dagger$. In particular, this expression also reveals that the sensitivities of the response $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ to parameters that characterize the system’s boundary and/or internal interfaces can arise from both the direct-effect and the indirect-effect terms. It also follows from Equation (51) that the total 1st-order sensitivity $\delta R[\mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha}]$ can be expressed in terms of the 1st-level adjoint functions as follows:
$\left\{ \delta R\big[ \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}; \delta\boldsymbol{\alpha} \big] \right\}_{\boldsymbol{\alpha}^0} = \sum_{j=1}^{N_\alpha} \left\{ \frac{\partial R\big[ \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha} \big]}{\partial \alpha_j} \right\}_{\boldsymbol{\alpha}^0} \delta\alpha_j,$   (52)
where the quantities
$\frac{\partial R\big[ \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha} \big]}{\partial \alpha_j} = \prod_{k=1}^{N_x} \int_{\lambda_k(\boldsymbol{\alpha})}^{\omega_k(\boldsymbol{\alpha})} dx_k \, S^{(1)}\big[ j; \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha} \big] \equiv R^{(1)}\big[ j; \mathbf{u}^{(1)}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha} \big], \quad j = 1, \ldots, N_\alpha,$   (53)
represent the 1st-order response sensitivities, evaluated at the nominal values of the state functions and model parameters. In particular, if the residual bilinear concomitant is non-zero, the functions $S^{(1)}[j; \boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}]$ would contain suitably defined Dirac delta-functionals for expressing the respective non-zero boundary terms as volume-integrals over the phase-space of the independent variables. Dirac-delta functionals would also be needed to represent, within $S^{(1)}[j; \boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \mathbf{a}^{(1)}(x); \boldsymbol{\alpha}]$, the terms containing the derivatives of the boundary end-points with respect to the model and/or response parameters.
In the particular case in which the model’s response depended only on the forward state function φ ( x ) , the sensitivities R [ φ ( x ) ; a 1 ( 1 ) ( x ) ; α ] / α j , j = 1 , , N α , would depend only on the 1st-level adjoint function a 1 ( 1 ) ( x ) , which is the solution of an adjoint-like 1st-LASS. Conversely, if the model’s response depended only on the adjoint state function ψ ( x ) , then the sensitivities R [ ψ ( x ) ; a 2 ( 1 ) ( x ) ; α ] / α j , j = 1 , , N α , would depend only on the 1st-level adjoint function a 2 ( 1 ) ( x ) , which is the solution of a forward-like 1st-LASS.

3.1.4. Comparison of Computational Requirements for Computing the First-Order Response Sensitivities with Respect to the Model Parameters

From the derivations presented in Section 3.1.1, Section 3.1.2 and Section 3.1.3, it is evident that the computational requirements of the three deterministic methods discussed therein for computing the sensitivities of a scalar-valued response $R[\boldsymbol{\varphi}(x), \boldsymbol{\psi}(x); \boldsymbol{\alpha}]$ with respect to the $N_\alpha$ model parameters $\boldsymbol{\alpha} \equiv (\alpha_1, \ldots, \alpha_{N_\alpha})^\dagger$ are as follows:
  • The 1st-Order Comprehensive Adjoint Sensitivity Analysis Methodology (1st-CASAM) requires 2 large-scale computations: one large-scale computation for computing the first-level adjoint state function $\mathbf{a}_1^{(1)}(x)$ and one large-scale computation for computing the first-level adjoint state function $\mathbf{a}_2^{(1)}(x)$.
  • The Forward Sensitivity Analysis Method (FSAM) requires $2 N_\alpha$ large-scale computations: $N_\alpha$ large-scale computations for computing the first-order derivatives of the forward state function $\boldsymbol{\varphi}(\boldsymbol{\alpha}; x)$ and $N_\alpha$ large-scale computations for computing the first-order derivatives of the adjoint state function $\boldsymbol{\psi}(\boldsymbol{\alpha}; x)$.
  • The finite-difference (FD) method requires $4 N_\alpha$ large-scale computations: $2 N_\alpha$ large-scale computations for computing the forward state functions $\boldsymbol{\varphi}(\alpha_j \pm h_j; x)$ and $2 N_\alpha$ large-scale computations for computing the adjoint state functions $\boldsymbol{\psi}(\alpha_j \pm h_j; x)$.
If the response depends only on the forward state function, the following simplifications occur:
V ( 1 ) ( α ) = L ( α ) ; A ( 1 ) ( α ) = L * ( α ) ; a ( 1 ) ( x ) = a 1 ( 1 ) ( x ) .
A similar simplification occurs in the boundary conditions represented by Equation (48).
If the response depends only on the adjoint state function, the following simplifications occur:
V ( 1 ) ( α ) = L * ( α ) ; A ( 1 ) ( α ) = L ( α ) ; a ( 1 ) ( x ) = a 2 ( 1 ) ( x ) .
A similar simplification occurs in the boundary conditions represented by Equation (48). Hence, if the response depends only on the forward or only on the adjoint state function, then only half of the number of large-scale computations listed in the three points above would be needed.
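A minimal Python sketch of this tally is provided below; the helper name and its arguments are purely illustrative, and each counted "solve" stands for one large-scale forward or adjoint computation:

def first_order_solve_counts(n_alpha, depends_on_both=True):
    """Tally of the large-scale solves needed for all 1st-order sensitivities.

    n_alpha         : number of model parameters
    depends_on_both : True if the response depends on both the forward and the
                      adjoint state functions; False halves each count.
    """
    factor = 1.0 if depends_on_both else 0.5
    return {
        "1st-CASAM (adjoint)": int(2 * factor),            # the 1st-level adjoint functions a1(1), a2(1)
        "FSAM (forward)":      int(2 * n_alpha * factor),   # d(phi)/d(alpha_j) and d(psi)/d(alpha_j)
        "Finite differences":  int(4 * n_alpha * factor),   # phi and psi evaluated at alpha_j +/- h_j
    }

print(first_order_solve_counts(7477))  # counts for a problem with 7477 parameters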
It is evident that the 1st-CASAM is the most efficient of all of the above methods for computing the first-order sensitivities of scalar-valued responses to parameters, and the relative superiority of the 1st-CASAM over the other methods increases as the number of model parameters increases. This efficiency was clearly illustrated in [10,11,12,13,14,15], which presented the CPU times needed to compute the first-order sensitivities of the response to the model parameters for an OECD/NEA reactor physics benchmark [16], which consists of a plutonium metal sphere surrounded by a spherical shell made of polyethylene. The benchmark’s response of interest was the experimentally measured leakage of particles (neutrons) out of the sphere’s outer boundary, which depends only on the forward state function, but does not depend on the adjoint state function (adjoint flux), so this response was modeled using a particular form of the right-side of Equation (13) within the PARTISN neutron transport code [17]. To compute the 7477 non-zero 1st-order sensitivities of the benchmark’s neutron leakage response to the benchmark’s parameters, the 1st-CASAM needed a single large-scale adjoint computation for obtaining the 1st-level adjoint function [the equivalent of the function a 1 ( 1 ) ( x ) ], which required 85 CPU-seconds. In addition, the 1st-CASAM required 10 CPU-seconds for computing the 7477 integrals over the adjoint functions [i.e., the equivalent of computing Equation (50)], thereby obtaining the numerical values of all 7477 1st-order sensitivities. In contradistinction, the FSAM required 7477 large-scale computations for solving the respective 1st-OFSS [i.e., the equivalent of solving Equations (28) and (29)], which required 694 CPU-hours. The finite-difference (FD) approximation would have required 1388 CPU-hours for obtaining approximate values for the 7477 1st-order sensitivities.

3.2. Computation of the Second-Order Sensitivities of R [ φ ( x ) , ψ ( x ) ; α ]

Since this work ultimately aims at deriving the explicit expressions of the 4th-order sensitivities of the response R [ φ ( x ) , ψ ( x ) ; α ] with respect to the model parameters, the proliferation of indices, superscripts and subscripts is unavoidable. Nevertheless, “subscripted-subscripts” can be avoided by using subscripts of the form j 1 = 1 , , N α ; j 2 = 1 , , j 1 , where the index j 1 will replace the index j (which was used in Section 3.1) to index the 1st-order sensitivities, and where the index j 2 will be used (in addition to j 1 ) to index the 2nd-order sensitivities. Furthermore, anticipating the notation to be used in subsequent subsections, the index j 3 will be used (in addition to the indices j 1 and j 2 ) to index the 3rd-order sensitivities and the index j 4 will be used (in addition to j 1 , j 2 and j 3 ) to index the 4th-order sensitivities.
There are N α ( N α + 1 ) / 2 distinct second-order sensitivities of the response with respect to the model and response parameters. Section 3.2.1, below, summarizes the computational aspects of using finite-difference formulas and Section 3.2.2 describes the “forward sensitivity analysis methodology” for computing the 2nd-order response sensitivities to the model parameters. Section 3.2.3 presents the 2nd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (2nd-CASAM) for computing the 2nd-order response sensitivities, following the concepts originally developed by Cacuci [7,8,9].

3.2.1. Finite-Difference Approximation Using Re-Computations with User-Modified Parameters

The second-order response sensitivities could be computed approximately by “brute-force” re-computations using standard finite-difference formulas. Thus, the second-order unmixed sensitivities can be calculated using the following central-difference formula:
∂²R(α)/∂α_{j1}² ≅ (1/h_{j1}²) ( R_{j1+1} − 2 R_{j1} + R_{j1−1} ) + O(h_{j1}²) ,   j1 = 1, …, N_α ,
where R_{j1+1} ≡ R(α_{j1} + h_{j1}), R_{j1} ≡ R(α_{j1}), R_{j1−1} ≡ R(α_{j1} − h_{j1}), and h_{j1} denotes a “judiciously-chosen” variation in the parameter α_{j1} around its nominal value α_{j1}^0.
The 2nd-order mixed sensitivities, ∂²R(α)/∂α_{j1}∂α_{j2}, can be calculated by using the following finite-difference formula:
∂²R(α)/∂α_{j1}∂α_{j2} ≅ (1/(4 h_{j1} h_{j2})) ( R_{j1+1, j2+1} − R_{j1−1, j2+1} − R_{j1+1, j2−1} + R_{j1−1, j2−1} ) + O(h_{j1}², h_{j2}²) ,   for j1 = 1, …, N_α ; j2 = 1, …, j1 ,
where h_{j1} and h_{j2} denote the variations in the parameters α_{j1} and α_{j2}, respectively. The values of the quantities R_{j1+1, j2+1} ≡ R(α_{j1}+h_{j1}, α_{j2}+h_{j2}), R_{j1−1, j2+1} ≡ R(α_{j1}−h_{j1}, α_{j2}+h_{j2}), R_{j1+1, j2−1} ≡ R(α_{j1}+h_{j1}, α_{j2}−h_{j2}), and R_{j1−1, j2−1} ≡ R(α_{j1}−h_{j1}, α_{j2}−h_{j2}) are obtained by repeatedly re-solving Equations (1), (2), (9) and (10), using the changed parameter values (α_{j1} ± h_{j1}) and (α_{j2} ± h_{j2}). The values of h_{j1} and h_{j2}, respectively, must be chosen “judiciously” by “trial and error” for each of the parameters α_{j1} and α_{j2}; otherwise, the result produced by Equation (57) will be erroneous, significantly removed from the exact value of the derivative ∂²R(α)/∂α_{j1}∂α_{j2}. These finite-difference formulas introduce their intrinsic “methodological errors” of order O(h_{j1}²) and/or O(h_{j1}², h_{j2}²), as indicated in Equations (56) and (57), which are in addition to, and independent of, the errors that might be incurred in the computation of R(α). In other words, even if the computation of R(α) were perfect (error-free), the finite-difference formulas nevertheless introduce their own, intrinsic, numerical errors into the computation of ∂²R(α)/∂α_{j1}∂α_{j2}.
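A minimal Python sketch of the two formulas above is given below; the function names are purely illustrative, and in an actual application each evaluation of R would entail re-solving the full forward and adjoint models:

import numpy as np

def d2R_unmixed(R, alpha, j1, h):
    """Central-difference estimate of the unmixed 2nd-order sensitivity, cf. Equation (56)."""
    a = np.asarray(alpha, dtype=float)
    e = np.zeros_like(a)
    e[j1] = h
    return (R(a + e) - 2.0 * R(a) + R(a - e)) / h ** 2

def d2R_mixed(R, alpha, j1, j2, h1, h2):
    """Central-difference estimate of the mixed 2nd-order sensitivity, cf. Equation (57)."""
    a = np.asarray(alpha, dtype=float)
    e1 = np.zeros_like(a); e1[j1] = h1
    e2 = np.zeros_like(a); e2[j2] = h2
    return (R(a + e1 + e2) - R(a - e1 + e2) - R(a + e1 - e2) + R(a - e1 - e2)) / (4.0 * h1 * h2)

R = lambda a: a[0] ** 2 * np.exp(a[1])          # toy stand-in for a model response
alpha0 = np.array([2.0, 0.5])
print(d2R_unmixed(R, alpha0, 0, 1e-3))          # exact value: 2 * exp(0.5) ~ 3.2974
print(d2R_mixed(R, alpha0, 0, 1, 1e-3, 1e-3))   # exact value: 2 * 2.0 * exp(0.5) ~ 6.5949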

3.2.2. Forward Sensitivity Analysis Methodology (FSAM)

Within the framework of the FSAM, the second-order sensitivities 2 R ( α ) / α j 1 α j 2 , j 1 , j 2 = 1 , , N α , of the response to the model and response parameters are obtained by computing the partial G-derivatives of the first-order sensitivities represented by Equation (32), which yields the following result:
2 R ( φ ; ψ ; α ) α j 1 α j 2 1 δ α j 2 { d d ε R ( 1 ) [ j 1 ; φ ( α j 2 0 + ε δ α j 2 ) ; ψ ( α j 2 0 + ε δ α j 2 ) ; α j 2 0 + ε δ α j 2 ; φ ( α j 2 0 + ε δ α j 2 ) α j 1 ; ψ ( α j 2 0 + ε δ α j 2 ) α j 1 ] } ε = 0 = R ( 2 ) [ j 2 ; j 1 ; φ ; ψ ; α ; φ α j 2 ; ψ α j 2 ; 2 φ α j 1 α j 2 ; 2 ψ α j 1 α j 2 ] ; j 1 , j 2 = 1 , , N α .
As indicated by the functional dependence of the last term in Equation (58), the evaluation of the second-order response sensitivities requires prior knowledge of the 1st- and 2nd-order derivatives of the state functions with respect to the model parameters. The functions φ ( x ) / α j 1 and ψ ( x ) / α j 1 are obtained by solving the 1st-OFSS. The functions 2 φ / α j 1 α j 2 and 2 ψ / α j 1 α j 2 are the solutions of the following Second-Order Forward Sensitivity System (2nd-OFSS), which is obtained by taking the partial G-derivative of the 1st-OFSS with respect to a generic model parameter α j 2 :
L ( x ; α 0 ) 2 φ α j 2 α j 1 = { f ( 2 ) ( j 2 ; j 1 ; φ ; α ) } α 0 ; x Ω x ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
{ B φ ( x ; α ) } ( e 0 ) 2 φ α j 2 α j 1 = { c φ ( 2 ) ( j 2 ; j 1 ; φ ; α ) } α 0 ; x Ω x ( α 0 ) ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
L * ( x ; α 0 ) 2 ψ α j 2 α j 1 = { g ( 2 ) ( j 2 ; j 1 ; ψ ; α ) } α 0 ; x Ω x ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
{ B ψ ( x ; α ) } ( e 0 ) 2 ψ α j 2 α j 1 = { c ψ ( 2 ) ( j 2 ; j 1 ; ψ ; α ) } α 0 ; x Ω x ( α 0 ) ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
where
f ( 2 ) ( j 2 ; j 1 ; φ ; α ) f ( 1 ) ( j 1 ; φ ; α ) α j 2 L ( x ; α ) α j 2 φ α j 1 ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
g ( 2 ) ( j 2 ; j 1 ; ψ ; α ) g ( 1 ) ( j 1 ; ψ ; α ) α j 2 L * ( x ; α ) α j 2 ψ α j 1 ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
c φ ( 2 ) ( j 2 ; j 1 ; φ ; α ) c φ ( 1 ) ( j 1 ; φ ; α ) α j 2 B φ ( x ; α ) α j 2 φ α j 1 ; x Ω x ( α 0 ) ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 ;
c ψ ( 2 ) ( j 2 ; j 1 ; ψ ; α ) c ψ ( 1 ) ( j 1 ; ψ ; α ) α j 2 B ψ ( x ; α ) α j 2 ψ ( x ) α j 1 ; x Ω x ( α 0 ) ; j 1 = 1 , , N α ; j 2 = 1 , , j 1 .
Evidently, the 1st-OFSS would need to be solved 2 N α times, with different right-sides, in order to compute the 1st-order derivatives φ ( x ) / α j 1 and ψ ( x ) / α j 1 with respect to all model parameters, all of which are large-scale computations. Furthermore, there are N α ( N α + 1 ) / 2 distinct 2nd-order functions 2 φ / α j 1 α j 2 and just as many distinct 2nd-order functions 2 ψ / α j 1 α j 2 . This means that the 2nd-OFSS would need to be solved N α ( N α + 1 ) times to compute these 2nd-order functions. Hence, the computation of the N α ( N α + 1 ) / 2 distinct 2nd-order response sensitivities to the model parameters using the FSAM would necessitate a total of N α 2 + 3 N α large-scale computations, evidently highlighting the impact of the “curse of dimensionality” [Bellman, 1957].
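The growth of this computational burden with the number of parameters can be illustrated by the following minimal Python sketch, in which the helper name is purely illustrative and each counted solve stands for one large-scale forward computation:

def fsam_second_order_solves(n_alpha):
    """Large-scale solves needed by the FSAM for all distinct 2nd-order sensitivities."""
    first_order = 2 * n_alpha               # 1st-OFSS: d(phi)/d(alpha_j) and d(psi)/d(alpha_j)
    second_order = n_alpha * (n_alpha + 1)  # 2nd-OFSS: both sets of 2nd-order derivative functions
    return first_order + second_order       # equals n_alpha**2 + 3*n_alpha

for n in (10, 100, 7477):
    print(n, fsam_second_order_solves(n))   # e.g., 7477 parameters -> about 5.6e7 solves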

3.2.3. The 2nd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (2nd-CASAM)

The starting point for the development of the 2nd-CASAM is the expression of the 1st-order sensitivities in terms of the 1st-level adjoint functions, namely Equation (53), rather than the expression of the 1st-order sensitivities produced by the FSAM [cf. Equation (32)]. Thus, the 2nd-order total sensitivity of the model response is obtained by applying the definition of the 1st-order G-differential to Equation (53), which yields the following result:
{ δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ; δ α ] } α 0 { d d ε [ k = 1 N x λ k ( α 0 + ε δ α ) ω k ( α 0 + ε δ α ) d x k S ( 1 ) [ j 1 ; φ 0 + ε δ φ ; ψ 0 + ε δ ψ ; a ( 1 ) , 0 ( x ) + ε δ a ( 1 ) ; α 0 + ε δ α ] ] } ε = 0 = { δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ α ] } d i r + { δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d , j 1 = 1 , , N α ,
where the direct-effect term { δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ α ] } d i r is defined as follows:
{ δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ α ] } d i r { α k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ] } α 0 δ α ,
while the “indirect-effect term” { δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d is defined as follows:
{ δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) ( j 1 ; u ( 1 ) ; a ( 1 ) ; α ) φ δ φ ( x ) + k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) ( j 1 ; u ( 1 ) ; a ( 1 ) ; α ) ψ δ ψ ( x ) + k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) ( j 1 ; u ( 1 ) ; a ( 1 ) ; α ) a 1 ( 1 ) δ a 1 ( 1 ) ( x ) + k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) ( j 1 ; u ( 1 ) ; a ( 1 ) ; α ) a 2 ( 1 ) δ a 2 ( 1 ) ( x ) .
Note that the left-side of Equation (69) is to be computed at nominal parameter and state function values but the corresponding indication (namely, the superscript “zero”) has been omitted in Equation (69) in order to simplify the notation.
The direct-effect term { δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ α ] } d i r can be computed immediately, while the “indirect-effect term” { δ R ( 1 ) [ j 1 ; u ( 1 ) ( x ) ; a ( 1 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d depends on the variations δ φ ( x ) and δ ψ ( x ) in the forward and adjoint state functions and on the variation δ a ( 1 ) ( x ) in the components of the first-level adjoint function. Recall that the functions δ φ ( x ) and δ ψ ( x ) are the solutions of the 1st-LVSS defined by Equations (36) and (37). On the other hand, the function δ a ( 1 ) ( x ) is the solution of the system of equations obtained by G-differentiating the 1st-LASS. By definition, the first G-differential of the 1st-LASS is obtained as follows:
{ d d ε [ L * ( α 0 + ε δ α ) ( a 1 ( 1 ) , 0 + ε δ a 1 ( 1 ) ) S ( φ 0 + ε δ φ ; ψ 0 + ε δ ψ ; α 0 + ε δ α ) φ ] } ε = 0 = 0 ,
{ d d ε [ L ( α 0 + ε δ α ) ( a 2 ( 1 ) , 0 + ε δ a 2 ( 1 ) ) S ( φ 0 + ε δ φ ; ψ 0 + ε δ ψ ; α 0 + ε δ α ) ψ ] } ε = 0 = 0 ,
              { d d ε b A ( 1 ) [ φ 0 + ε δ φ ; ψ 0 + ε δ ψ ; a ( 1 ) , 0 + ε δ a ( 1 ) ; α 0 + ε δ α ] } ε = 0 = 0 , x Ω x ( α 0 ) .
Carrying out the operations in Equations (70)–(72) yields the following equations:
L * ( α 0 ) δ a 1 ( 1 ) { 2 S ( u ( 1 ) ; α ) φ φ } α 0 δ φ { 2 S ( u ( 1 ) ; α ) φ ψ } α 0 δ ψ = { p 1 ( 2 ) ( u ( 1 ) ; a 1 ( 1 ) ; α ; δ α ) } α 0 , p 1 ( 2 ) ( φ ; ψ ; a 1 ( 1 ) ; α ; δ α ) 2 S ( u ( 1 ) ; α ) α φ δ α [ L * ( α ) a 1 ( 1 ) ] α δ α ,
L ( α 0 ) δ a 2 ( 1 ) { 2 S ( u ( 1 ) ; α ) φ ψ } α 0 δ φ { 2 S ( u ( 1 ) ; α ) ψ ψ } α 0 δ ψ = { p 2 ( 2 ) ( u ( 1 ) ; a 2 ( 1 ) ; α ; δ α ) } α 0 , p 2 ( 2 ) ( u ( 1 ) ; a 2 ( 1 ) ; α ; δ α ) 2 S ( u ( 1 ) ; α ) α ψ δ α [ L ( α ) a 2 ( 1 ) ] α δ α ,
{ δ b A ( 1 ) ( u ( 1 ) ; a ( 1 ) ; α ) } α 0 { b A ( 1 ) ( u ( 1 ) ; a ( 1 ) ; α ) φ } α 0 δ φ + { b A ( 1 ) ( u ( 1 ) ; a ( 1 ) ; α ) ψ } α 0 δ ψ + { b A ( 1 ) ( u ( 1 ) ; a ( 1 ) ; α ) a 1 ( 1 ) } α 0 δ a 1 ( 1 ) + { b A ( 1 ) ( u ( 1 ) ; a ( 1 ) ; α ) a 2 ( 1 ) } α 0 δ a 2 ( 1 ) + { b A ( 1 ) ( u ( 1 ) ; a ( 1 ) ; α ) α } α 0 δ α = 0 .
Recalling the 1st-LVSS defined in Equations (36) and (37), and considering Equations (73) and (74), it follows that the functions δ φ , δ ψ ( x ) and δ a ( 1 ) ( x ) [ δ a 1 ( 1 ) ( x ) , δ a 2 ( 1 ) ( x ) ] are the solution of the following system of equations:
{ V ( 2 ) ( u ( 1 ) ; α ) δ u ( 2 ) ( x ) } α 0 = { q ( 2 ) ( u ( 2 ) ; α ; δ α ) } α 0 , x Ω x ,
{ b v ( 2 ) [ u ( 2 ) ; α ; δ u ( 2 ) ( x ) ; δ α ] } α 0 { ( b v ( 1 ) ( u ( 1 ) ; α ; δ u ( 1 ) ; δ α ) δ b A ( 1 ) ( u ( 2 ) ; α ; δ u ( 2 ) ; δ α ) ) } α 0 = ( 0 0 ) , x Ω x ( α 0 ) ,
where
V ( 2 ) ( u ( 1 ) ; α ) ( L ( α ) 0 0 0 0 L * ( α ) 0 0 2 S ( u ( 1 ) ; α ) φ φ 2 S ( u ( 1 ) ; α ) φ ψ L * ( α ) 0 2 S ( u ( 1 ) ; α ) ψ φ 2 S ( u ( 1 ) ; α ) ψ ψ 0 L ( α ) ) ;
u ( 2 ) ( x ) ( u ( 1 ) ( x ) a ( 1 ) ( x ) ) = ( φ ( x ) ψ ( x ) a 1 ( 1 ) ( x ) a 2 ( 1 ) ( x ) ) ; δ u ( 2 ) ( x ) ( δ u ( 1 ) ( x ) δ a ( 1 ) ( x ) ) = ( δ φ ( x ) δ ψ ( x ) δ a 1 ( 1 ) ( x ) δ a 2 ( 1 ) ( x ) ) ; q ( 2 ) ( u ( 2 ) ; α ; δ α ) ( q 1 ( 2 ) ( u ( 1 ) ; α ; δ α ) q 2 ( 2 ) ( u ( 1 ) ; α ; δ α ) q 3 ( 2 ) ( u ( 2 ) ; α ; δ α ) q 4 ( 2 ) ( u ( 2 ) ; α ; δ α ) ) ( q 1 ( 1 ) ( φ ; α ; δ α ) q 2 ( 1 ) ( ψ ; α ; δ α ) p 1 ( 2 ) ( u ( 2 ) ; α ; δ α ) p 2 ( 2 ) ( u ( 2 ) ; α ; δ α ) ) .
All of the components of the matrices and vectors defined in Equations (76) and (77) are to be computed at nominal parameter and state function values. The matrix V ( 2 ) ( φ ; ψ ; α ) , which is defined in Equation (78), has the following structure:
V ( 2 ) ( u ( 1 ) ; α ) = ( V ( 1 ) [ 0 ] 2 × 2 V 21 ( 2 ) V 22 ( 2 ) ) ; V 21 ( 2 ) ( 2 S ( u ( 1 ) ; α ) φ φ 2 S ( u ( 1 ) ; α ) φ ψ 2 S ( u ( 1 ) ; α ) ψ φ 2 S ( u ( 1 ) ; α ) ψ ψ ) , V 22 ( 2 ) = A ( 1 ) ( α ) .
The matrix V 21 ( 2 ) depends only on the system’s response and is responsible for coupling the forward and adjoint systems; nevertheless, the equations can still be solved successively rather than simultaneously, because the matrix V ( 2 ) ( u ( 1 ) ; α ) is block lower-triangular.
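The following minimal NumPy sketch illustrates this successive (block forward-substitution) solution strategy on a small finite-dimensional stand-in, in which the matrices V11, V21 and V22 merely play the roles of the blocks V ( 1 ) , V 21 ( 2 ) and A ( 1 ) ; all names and sizes are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n = 3                                                  # size of each block (stand-in for a discretized operator)
V11 = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # plays the role of the block V(1)
V21 = rng.standard_normal((n, n))                      # response-dependent coupling block V21(2)
V22 = np.eye(n) + 0.1 * rng.standard_normal((n, n))    # plays the role of the block A(1)
q1, q2 = rng.standard_normal(n), rng.standard_normal(n)

# Successive solution exploiting the block lower-triangular structure:
du1 = np.linalg.solve(V11, q1)                         # solve the 1st-LVSS block first
da1 = np.linalg.solve(V22, q2 - V21 @ du1)             # then the block for the adjoint-function variations

# The result coincides with solving the monolithic block system all at once:
V = np.block([[V11, np.zeros((n, n))], [V21, V22]])
assert np.allclose(np.concatenate([du1, da1]), np.linalg.solve(V, np.concatenate([q1, q2])))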
The system of equations represented by Equation (76) together with the boundary conditions represented by Equation (77) will be called the “2nd-Level Variational Sensitivity System” (2nd-LVSS). The solution, δ u ( 2 ) ( x ) , of the 2nd-LVSS will be called the “2nd-level variational sensitivity function”. The 2nd-LVSS differs from the 2nd-OFSS in that the 2nd-LVSS involves first-order differentials of the state functions, whereas the 2nd-OFSS involves the second-order partial derivatives of the state functions with respect to model parameters α j 2 and α j 1 , j 1 = 1 , , N α ; j 2 = 1 , , j 1 . In principle, the 2nd-LVSS could be solved for each possible component of the vectors of parameter variations δ α to obtain the 2nd-level variational vector δ u ( 2 ) ( x ) . Subsequently, δ u ( 2 ) ( x ) could be used together with the known parameter variations δ α in Equation (69) to compute the indirect-effect term { δ R ( 1 ) [ j 1 ; u ( 2 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d . Computing { δ R ( 1 ) [ j 1 ; u ( 2 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d by solving the 2nd-LVSS would require at least 2 N α ( N α + 1 ) large-scale computations (to solve the 2nd-LVSS) for every independent component of the vectors of parameter variations δ α . Therefore, solving the 2nd-LVSS is advantageous only if, in the problem under consideration, the number N α of model and boundary parameters is considerably less than the number of responses of interest. This is rarely the case in practice, however, since most problems of practical interest are characterized by many model parameters and comparatively few responses.
Following the principles introduced by Cacuci [7,8,9], the application of the 2nd-CASAM avoids the need for solving the 2nd-LVSS, by expressing the indirect-effect term { δ R ( 1 ) ( j 1 ) } i n d defined in Equation (69) in an alternative manner, in terms of the solution of a 2nd-Level Adjoint Sensitivity System (2nd-LASS), so as to eliminate the appearance of the 2nd-level variational vector δ u ( 2 ) ( x ) in the alternative expression of the indirect-effect term { δ R ( 1 ) ( j 1 ) } i n d . The construction of the requisite 2nd-LASS commences by considering a Hilbert space, denoted as H 2 , comprising square-integrable vector-valued elements of the form η ( 2 ) ( x ) [ η 1 ( 2 ) ( x ) , η 2 ( 2 ) ( x ) , η 3 ( 2 ) ( x ) , η 4 ( 2 ) ( x ) ] H 2 , with η i ( 2 ) ( x ) [ η i , 1 ( 2 ) ( x ) , , η i , j ( 2 ) ( x ) , , η i , N φ ( 2 ) ( x ) ] , i = 1 , 2 , 3 , 4 , . The inner product between two elements, η ( 2 ) ( x ) H 2 and ξ ( 2 ) ( x ) H 2 , of this Hilbert space, will be denoted as η ( 2 ) ( x ) , ξ ( 2 ) ( x ) 2 and is defined as follows:
η ( 2 ) ( x ) , ξ ( 2 ) ( x ) 2 i = 1 2 η i ( 2 ) ( x ) , ξ i ( 2 ) ( x ) 2 = i = 1 4 η i ( 2 ) ( x ) , ξ i ( 2 ) ( x ) 0 .
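For orientation, the following minimal Python sketch evaluates such an inner product numerically for a single independent variable on [0, 1], with the trapezoidal rule standing in for the multidimensional phase-space integrals; the function name and the chosen component functions are purely illustrative:

import numpy as np

def inner_product_H2(eta, xi, x):
    """Sum, over the four block components, of the integrals of eta_i(x) * xi_i(x)
    over the (here one-dimensional) phase space, using the trapezoidal rule."""
    dx = np.diff(x)
    total = 0.0
    for eta_i, xi_i in zip(eta, xi):
        f = eta_i * xi_i
        total += np.sum(0.5 * (f[1:] + f[:-1]) * dx)
    return total

x = np.linspace(0.0, 1.0, 201)
eta = [np.sin(np.pi * x), np.cos(np.pi * x), x, 1.0 - x]   # four block components of eta(2)
xi  = [x ** 2, np.ones_like(x), np.exp(-x), np.sin(x)]     # four block components of xi(2)
print(inner_product_H2(eta, xi, x))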
In the Hilbert space H 2 , form the inner product of Equation (76) with a set of yet-undefined vector-valued functions a ( 2 ) ( j 1 ; x ) [ a 1 ( 2 ) ( j 1 ; x ) , a 2 ( 2 ) ( j 1 ; x ) , a 3 ( 2 ) ( j 1 ; x ) , a 4 ( 2 ) ( j 1 ; x ) ] H 2 , j 1 = 1 , , N α , to obtain the following relation:
{ a ( 2 ) ( j 1 ; x ) , V ( 2 ) ( u ( 1 ) ; α ) δ u ( 2 ) ( x ) 2 } α 0 = { a ( 2 ) ( j 1 ; x ) , q ( 2 ) ( u ( 2 ) ; α ; δ α ) 2 } α 0 .
Using the definition of the adjoint operator in the Hilbert space H 2 , recast the left-side of Equation (82) as follows:
{ a ( 2 ) ( j 1 ; x ) , V ( 2 ) ( u ( 1 ) ; α ) δ u ( 2 ) ( x ) 2 } α 0 = { δ u ( 2 ) ( x ) , A ( 2 ) ( u ( 1 ) ; α ) a ( 2 ) ( j 1 ; x ) 2 } α 0 + { P ( 2 ) [ δ u ( 2 ) ( x ) ; u ( 1 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] } α 0 ,
where { P ( 2 ) [ δ u ( 2 ) ( x ) ; u ( 1 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] } α 0 denotes the bilinear concomitant defined on the phase-space boundary x Ω x ( α 0 ) and where A ( 2 ) ( u ( 1 ) ; α ) [ V ( 2 ) ( u ( 1 ) ; α ) ] * is the operator formally adjoint to V ( 2 ) ( u ( 1 ) ; α ) and has therefore the following form:
A ( 2 ) ( u ( 1 ) ; α ) = ( A ( 1 ) ( α ) [ V 21 ( 2 ) ( u ( 1 ) ; α ) ] * [ 0 ] ( 2 × 2 ) V ( 1 ) ( α ) ) = ( L * ( α ) 0 2 S ( u ( 1 ) ; α ) φ φ 2 S ( u ( 1 ) ; α ) ψ φ 0 L ( α ) 2 S ( u ( 1 ) ; α ) φ ψ 2 S ( u ( 1 ) ; α ) ψ ψ 0 0 L ( α ) 0 0 0 0 L * ( α ) ) .
The first term on the right-side of Equation (83) is now required to represent the indirect-effect term { δ R ( 1 ) ( j 1 ) } i n d defined in Equation (69). This requirement is satisfied by imposing the following relation on each element a ( 2 ) ( j 1 ; x ) [ a 1 ( 2 ) ( j 1 ; x ) , a 2 ( 2 ) ( j 1 ; x ) , a 3 ( 2 ) ( j 1 ; x ) , a 4 ( 2 ) ( j 1 ; x ) ] H 2 , j 1 = 1 , , N α :
{ A ( 2 ) ( u ( 1 ) ; α ) a ( 2 ) ( j 1 ; x ) } α 0 = { s ( 2 ) ( j 1 ; u ( 2 ) ; α ) } α 0 , j 1 = 1 , , N α ,
where the block-vector s ( 2 ) ( j 1 ; u ( 2 ) ; α ) comprises, for each j 1 = 1 , , N α , four vector-components which are defined as follows:
s ( 2 ) ( j 1 ; u ( 2 ) ; α ) [ s 1 ( 2 ) ( j 1 ; u ( 2 ) ; x ) , s 2 ( 2 ) ( j 1 ; u ( 2 ) ; x ) , s 3 ( 2 ) ( j 1 ; u ( 2 ) ; x ) , s 4 ( 2 ) ( j 1 ; u ( 2 ) ; x ) ] [ S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ , S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ , S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) , S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) ] .
The definition of the set of vectors a ( 2 ) ( j 1 ; x ) [ a 1 ( 2 ) ( j 1 ; x ) , a 2 ( 2 ) ( j 1 ; x ) , a 3 ( 2 ) ( j 1 ; x ) , a 4 ( 2 ) ( j 1 ; x ) ] will be completed by selecting boundary conditions for this set of vectors, which will be represented in operator form as follows:
{ b A ( 2 ) [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] } α 0 = 0 , x Ω x ( α 0 ) , j 1 = 1 , , N α .
The boundary conditions represented by Equation (87) are selected so as to satisfy the following requirements:
(i)
The boundary conditions Equation (87) together with the operator Equation (85) constitute a well posed problem for the functions a ( 2 ) ( j 1 ; x ) .
(ii)
The implementation in Equation (83) of the boundary conditions provided in Equation (77) together with those provided in Equation (87) eliminates all of the unknown values of the functions δ u ( 2 ) ( x ) and a ( 2 ) ( j 1 ; x ) in the expression of the bilinear concomitant { P ( 2 ) [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ u ( 2 ) ( x ) ; δ α ] } α 0 . The bilinear concomitant may vanish after these boundary conditions are implemented, but if it does not, it will be reduced to a residual quantity which will be denoted as P ^ ( 2 ) [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] and which will comprise only known values of u ( 2 ) ( x ) , a ( 2 ) ( j 1 ; x ) , α and δ α . In principle, the bilinear concomitant can always be “forced” to vanish by introducing delta-function terms in the definition of the adjoint operator, but such a practice could lead to severe (and usually unnecessary) difficulties when attempting to solve the equations that involve such extended adjoint operators.
The system of equations represented by Equation (85) together with the boundary conditions represented by Equation (87) constitute the 2nd-Level Adjoint Sensitivity System (2nd-LASS). The solution a ( 2 ) ( j 1 ; x ) [ a 1 ( 2 ) ( j 1 ; x ) , a 2 ( 2 ) ( j 1 ; x ) , a 3 ( 2 ) ( j 1 ; x ) , a 4 ( 2 ) ( j 1 ; x ) ] H 2 , j 1 = 1 , , N α , of the 2nd-LASS will be called the 2nd-level adjoint function. The results provided in Equations (82), (83) and (85) are employed in Equation (69) to obtain the following expression for the indirect-effect term { δ R ( 1 ) [ j 1 ; u ( 2 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d in terms of the 2nd-level adjoint functions a ( 2 ) ( j 1 ; x ) , for j 1 = 1 , , N α :
{ δ R ( 1 ) [ j 1 ; u ( 2 ) ( x ) ; α ; δ u ( 1 ) ( x ) ; δ a ( 1 ) ( x ) ] } i n d = { a ( 2 ) ( j 1 ; x ) , q ( 2 ) ( u ( 2 ) ; α ; δ α ) 2 } α 0 { P ^ ( 2 ) [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] } α 0 , j 1 = 1 , , N α .
Inserting the expressions that define the vector q ( 2 ) ( u ( 2 ) ; α ; δ α ) from Equation (78) into Equation (88) and adding the resulting expression for the indirect-effect term with the expression of the direct-effect term given in Equation (68) yields the following expression for the total second-order G-differential of the response R [ φ ( x ) , ψ ( x ) ; α ] :
{ δ R ( 1 ) [ j 1 ; u ( 2 ) ( x ) ; α ; δ u ( 2 ) ( x ) ; δ α ] } α 0 = { α k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) ( j 1 ; u ( 2 ) ; α ) } α 0 δ α + { a 1 ( 2 ) ( j 1 ; x ) , q 1 ( 1 ) ( φ ; α ; δ α ) 0 } α 0 + { a 2 ( 2 ) ( j 1 ; x ) , q 2 ( 1 ) ( ψ ; α ; δ α ) 0 } α 0 + { a 3 ( 2 ) ( j 1 ; x ) , q 3 ( 2 ) ( u ( 2 ) ; α ; δ α ) 0 } α 0 + { a 4 ( 2 ) ( j 1 ; x ) , q 4 ( 2 ) ( u ( 2 ) ; α ; δ α ) 0 } α 0 { P ^ ( 2 ) [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] } α 0 j 2 = 1 N α { R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] } α 0 δ α j 2 ,
where R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] denotes the 2nd-order partial sensitivity of the response with respect to the model parameters, evaluated at the nominal parameter values α 0 , and has the following expression for j 1 , j 2 = 1 , , N α :
R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] R ( 1 ) [ j 1 ; φ , ψ ; a ( 1 ) ; α ] α j 2 2 R ( φ ; ψ ; α ) α j 2 α j 1 = α j 2 { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 1 ) ( j 1 ; u ( 2 ) ; α ) } α 0 + { a 1 ( 2 ) ( j 1 ; x ) , [ q φ ( α ) L ( α ) φ ( x ) ] α j 2 0 } α 0 + { a 2 ( 2 ) ( j 1 ; x ) , [ q ψ ( α ) L * ( α ) ψ ( x ) ] α j 2 0 } α 0 + { a 3 ( 2 ) ( j 1 ; x ) , 2 S ( u ( 1 ) ( x ) ; α ) α j 2 φ [ L * ( α ) a 1 ( 1 ) ] α j 2 0 } α 0 + { a 4 ( 2 ) ( j 1 ; x ) , 2 S ( u ( 1 ) ( x ) ; α ) α j 2 ψ [ L ( α ) a 2 ( 1 ) ] α j 2 0 } α 0 { α j 2 P ^ ( 2 ) [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] } α 0 .
Since the 2nd-LASS is independent of parameter variations δ α , the exact computation of all of the partial second-order sensitivities R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] requires at most N α large-scale (adjoint) computations using the 2nd-LASS, rather than O ( N α 2 ) large-scale computations as would be required by forward methods. In component form, the equations comprising the 2nd-LASS are solved, for each j 1 = 1 , , N α , in the following order:
L ( α ) a 3 ( 2 ) ( j 1 ; x ) = S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) ,
L * ( α ) a 4 ( 2 ) ( j 1 ; x ) = S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) ,
L * ( α ) a 1 ( 2 ) ( j 1 ; x ) = 2 S ( u ( 1 ) ; α ) φ φ a 3 ( 2 ) ( j 1 ; x ) + 2 S ( u ( 1 ) ; α ) ψ φ a 4 ( 2 ) ( j 1 ; x ) + S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ ,
L ( α ) a 2 ( 2 ) ( j 1 ; x ) = 2 S ( u ( 1 ) ; α ) φ ψ a 3 ( 2 ) ( j 1 ; x ) + 2 S ( u ( 1 ) ; α ) ψ ψ a 4 ( 2 ) ( j 1 ; x ) + S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ .
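This solution order reflects the block upper-triangular structure of the adjoint operator A ( 2 ) shown in Equation (84): the components a 3 ( 2 ) and a 4 ( 2 ) are obtained first and then appear as known sources for a 1 ( 2 ) and a 2 ( 2 ) . A minimal NumPy sketch of this ordering on a small finite-dimensional stand-in is given below; the matrices L, S_ff, S_fp and S_pp merely play the roles of the operator L ( α ) and of the second derivatives of S with respect to ( φ , ψ ) , and all names are illustrative:

import numpy as np

rng = np.random.default_rng(1)
n = 4
L  = np.eye(n) + 0.1 * rng.standard_normal((n, n))       # stand-in for the forward operator L(alpha)
Ls = L.T                                                  # its formal adjoint L*(alpha)
S_ff, S_fp, S_pp = (rng.standard_normal((n, n)) for _ in range(3))  # stand-ins for the second-derivative blocks of S
s1, s2, s3, s4 = (rng.standard_normal(n) for _ in range(4))         # stand-ins for the four source terms

# Solve in the order dictated by the four component equations listed above:
a3 = np.linalg.solve(L,  s3)                              # first pair: only known sources on the right-side
a4 = np.linalg.solve(Ls, s4)
a1 = np.linalg.solve(Ls, S_ff @ a3 + S_fp.T @ a4 + s1)    # a3, a4 now act as known sources
a2 = np.linalg.solve(L,  S_fp @ a3 + S_pp @ a4 + s2)      # (the mixed block w.r.t. (psi, phi) is taken as the transpose of S_fp)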
It is important to note that by solving the 2nd-LASS N α -times, the “off-diagonal” 2nd-order mixed sensitivities 2 R / α j 1 α j 2 will be computed twice, in two different ways (i.e., using distinct 2nd-level adjoint functions), thereby providing an independent intrinsic (numerical) verification that the 1st- and 2nd-order response sensitivities are computed accurately. The information provided by the 1st-order sensitivities usually indicates which 2nd-order sensitivities are important and which could be neglected. Therefore, it is useful to prioritize the computation of the 2nd-order sensitivities by using the rankings of the relative magnitudes of the 1st-order sensitivities as a “priority indicator”: the larger the magnitude of the relative 1st-order sensitivity, the higher the priority for computing the corresponding 2nd-order sensitivities. Also, since vanishing 1st-order sensitivities may indicate critical points of the response in the phase-space of model parameters, it is also of interest to compute the 2nd-order sensitivities that correspond to vanishing 1st-order sensitivities. In practice, only those 2nd-order partial sensitivities which are deemed important would need to be computed.
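One possible way of acting on this priority indicator, not prescribed by the methodology itself but often convenient in practice, is sketched below: the parameter pairs (j1, j2) are ranked by the product of the magnitudes of the corresponding relative 1st-order sensitivities, and only the top-ranked pairs are selected for 2nd-order computations (all names are illustrative):

import numpy as np

def rank_second_order_pairs(first_order_rel_sens, top_k):
    """Rank parameter pairs (j1, j2), j2 <= j1, by the product of the magnitudes of
    their relative 1st-order sensitivities; return the top_k highest-ranked pairs."""
    s = np.abs(np.asarray(first_order_rel_sens, dtype=float))
    pairs = [(j1, j2, s[j1] * s[j2]) for j1 in range(len(s)) for j2 in range(j1 + 1)]
    return sorted(pairs, key=lambda p: -p[2])[:top_k]

# Example with four parameters; the last 1st-order sensitivity vanishes and would be
# examined separately, since it may indicate a critical point of the response.
print(rank_second_order_pairs([0.9, -0.05, 0.4, 0.0], top_k=3))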
Dirac delta-functionals in Equation (90) may need to be used for expressing the non-zero residual terms in the respective residual bilinear concomitant and/or the terms containing derivatives with respect to the lower- and upper-boundary points, so that the expression of the partial second-order sensitivities R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] can be written in the following form, in preparation for computing the 3rd-order sensitivities:
R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] = k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] .

3.2.4. Comparison of Computational Requirements for Computing the Second-Order Response Sensitivities with Respect to the Model Parameters

Numerical results have been presented in [10,11,12,13,14,15] for the 27,956,503 nonzero 2nd-order sensitivities that correspond to the 7477 non-zero first-order sensitivities of the PERP benchmark’s leakage response to the benchmark’s parameters. These numerical results were computed using the PARTISN neutron transport code [17] running on a DELL desktop computer (AMD FX-8350) with an 8-core processor. A typical large-scale (forward or adjoint) computation required ca. 160 s CPU-time while the computation of a single 2nd-order sensitivity, including reading the adjoint functions and the integration over the adjoint functions, required ca. 0.046 s CPU-time. Altogether, the computation of all of the 27,956,503 nonzero 2nd-order sensitivities of the PERP leakage response using the 2nd-CASAM required ca. 929 CPU-hours, comprising 735 CPU-hours used for the 14,843 large-scale computations (needed to compute the 2nd-level adjoint functions) and 194 CPU-hours used for performing the integrations needed to compute the respective unmixed and mixed second-order sensitivities.
In contradistinction, the FSAM would have required ca. 38,000 CPU-hours; using finite-differences (FD) would have required over 147,000 CPU-hours to compute the 27,956,503 nonzero 2nd-order sensitivities. Evidently, the 2nd-CASAM is the only method that can be used in practice for computing accurately and efficiently the 2nd-order response sensitivities for large-scale systems involving many model parameters.

3.3. Computation of the Third-Order Sensitivities of R [ φ ( x ) , ψ ( x ) ; α ]

There are N α ( N α + 1 ) ( N α + 2 ) / 3 ! distinct 3rd-order sensitivities of the response R [ φ ( x ) , ψ ( x ) ; α ] with respect to the model and response parameters. The finite-difference formulas for computing these sensitivities are presented in Section 3.3.1. The Forward Sensitivity Analysis Methodology (FSAM) for computing the 3rd-order response sensitivities is presented in Section 3.3.2 and the 3rd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (3rd-CASAM) is presented in Section 3.3.3. Section 3.3.4 presents a comparison of the computational requirements for computing the third-order response sensitivities with respect to the model parameters in the case of the OECD/NEA benchmark [16].

3.3.1. Finite-Difference Approximation Using Re-Computations with User-Modified Parameters

The 3rd-order unmixed sensitivities, ∂³R(α)/∂α_{j1}³, of the response can be computed approximately by using the well-known finite-difference formula presented below:
∂³R(α)/∂α_{j1}³ ≅ (1/(2 h_{j1}³)) ( R_{j1+2} − 2 R_{j1+1} + 2 R_{j1−1} − R_{j1−2} ) + O(h_{j1}²) ,   j1 = 1, …, N_α ,
where R_{j1+2} ≡ R(α_{j1} + 2h_{j1}), R_{j1+1} ≡ R(α_{j1} + h_{j1}), R_{j1−1} ≡ R(α_{j1} − h_{j1}), R_{j1−2} ≡ R(α_{j1} − 2h_{j1}), and h_{j1} denotes a “judiciously-chosen” variation in the parameter α_{j1} around its nominal value α_{j1}^0. The 3rd-order mixed sensitivities, ∂³R(α)/∂α_{j1}∂α_{j2}∂α_{j3}, can be calculated by using the following finite-difference formula:
∂³R(α)/∂α_{j1}∂α_{j2}∂α_{j3} ≅ (1/(8 h_{j1} h_{j2} h_{j3})) ( R_{j1+1, j2+1, j3+1} − R_{j1+1, j2+1, j3−1} − R_{j1+1, j2−1, j3+1} + R_{j1+1, j2−1, j3−1} − R_{j1−1, j2+1, j3+1} + R_{j1−1, j2+1, j3−1} + R_{j1−1, j2−1, j3+1} − R_{j1−1, j2−1, j3−1} ) + O(h_{j1}², h_{j2}², h_{j3}²) ,
where R_{j1+1, j2+1, j3+1} = R(α_{j1}+h_{j1}, α_{j2}+h_{j2}, α_{j3}+h_{j3}), R_{j1−1, j2+1, j3+1} = R(α_{j1}−h_{j1}, α_{j2}+h_{j2}, α_{j3}+h_{j3}), etc. The values of the quantities on the right-sides of the expressions shown in Equations (96) and (97) are obtained by re-solving Equations (1), (2), (9) and (10) repeatedly, using the changed parameter values (α_{j1} ± h_{j1}), (α_{j2} ± h_{j2}) and (α_{j3} ± h_{j3}). As has also been previously discussed, the values of h_{j1}, h_{j2} and h_{j3}, respectively, must be chosen judiciously by trial and error for each of the parameters α_{j1}, α_{j2} and α_{j3}. The finite-difference formulas introduce their intrinsic “methodological errors” of order O(h_{j1}², h_{j2}², h_{j3}²), which are in addition to, and independent of, the errors that might be incurred in the computation of R(α).
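A minimal Python sketch of the mixed-derivative formula above is given below; the function names are purely illustrative, and in practice each evaluation of R would entail a full large-scale forward and adjoint computation:

import numpy as np
from itertools import product

def d3R_mixed(R, alpha, j, h):
    """Central-difference estimate of the mixed 3rd-order sensitivity with respect to
    the parameters indexed by j = (j1, j2, j3), using the step sizes h = (h1, h2, h3)."""
    a = np.asarray(alpha, dtype=float)
    total = 0.0
    for signs in product((+1, -1), repeat=3):        # the 8 perturbed response evaluations
        shift = np.zeros_like(a)
        for jk, hk, s in zip(j, h, signs):
            shift[jk] += s * hk
        total += np.prod(signs) * R(a + shift)       # sign pattern of the formula above
    return total / (8.0 * h[0] * h[1] * h[2])

R = lambda a: a[0] * a[1] * a[2] ** 2                # toy stand-in for a model response
print(d3R_mixed(R, [1.0, 2.0, 3.0], j=(0, 1, 2), h=(1e-2, 1e-2, 1e-2)))  # exact value: 2 * 3.0 = 6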

3.3.2. Forward Sensitivity Analysis Methodology

Within the framework of the FSAM, the third-order sensitivities 3 R ( α ) / α j 1 α j 2 α j 3 , j 1 , j 2 , j 3 = 1 , , N α , of the response to the model parameters are obtained by computing the partial G-derivatives of the second-order sensitivities represented by Equation (58), which yields the following result:
3 R ( φ ; ψ ; α ) α j 1 α j 2 α j 3 1 δ α j 3 { d d ε R ( 2 ) [ j 2 ; j 1 ; φ ( α j 3 0 + ε δ α j 3 ) ; ψ ( α j 3 0 + ε δ α j 3 ) ; α j 3 0 + ε δ α j 3 ; φ ( α j 3 0 + ε δ α j 3 ) α j 1 ; ψ ( α j 3 0 + ε δ α j 3 ) α j 1 ; 2 φ ( α j 3 0 + ε δ α j 3 ) α j 1 α j 2 ; 2 ψ ( α j 3 0 + ε δ α j 3 ) α j 1 α j 2 ] } ε = 0 = R ( 3 ) [ j 3 ; j 2 ; j 1 ; φ ; ψ ; α ; φ α j 2 ; ψ α j 2 ; 2 φ α j 1 α j 2 ; 2 ψ α j 1 α j 2 ; 3 φ α j 1 α j 2 α j 3 ; 3 ψ α j 1 α j 2 α j 3 ] .
As indicated by the functional dependence of the last term in Equation (98), the evaluation of the 3rd-order response sensitivities requires prior knowledge of the 1st-, 2nd- and 3rd-order derivatives of the state functions with respect to the model parameters. The functions φ ( x ) / α j 2 and ψ ( x ) / α j 2 are obtained by solving the 1st-OFSS. The functions 2 φ / α j 1 α j 2 and 2 ψ / α j 1 α j 2 are the solutions of the 2nd-OFSS. The functions 3 φ / α j 1 α j 2 α j 3 and 3 ψ / α j 1 α j 2 α j 3 are the solutions of the following “Third-Order Forward Sensitivity System” (3rd-OFSS), which is obtained by taking the partial G-derivative of the 2nd-OFSS with respect to a generic model parameter α j 3 :
L ( x ; α 0 ) 3 φ α j 3 α j 2 α j 1 = { f ( 3 ) ( j 3 ; j 2 ; j 1 ; φ ; α ) } α 0 ; j 1 , j 2 , j 3 = 1 , , N α ; x Ω x ;
{ b φ ( x ; α ) } α 0 3 φ α j 3 α j 2 α j 1 = { c φ ( 3 ) ( j 3 ; j 2 ; j 1 ; φ ; α ) } α 0 ; j 1 , j 2 , j 3 = 1 , , N α ; x Ω x ( α 0 ) ;
L * ( x ; α 0 ) 3 ψ α j 3 α j 2 α j 1 = { g ( 3 ) ( j 3 ; j 2 ; j 1 ; ψ ; α ) } α 0 ; j 1 , j 2 , j 3 = 1 , , N α ; x Ω x ;
{ b ψ ( x ; α ) } α 0 3 ψ α j 3 α j 2 α j 1 = { c ψ ( 3 ) ( j 3 ; j 2 ; j 1 ; ψ ; α ) } α 0 ; j 1 , j 2 , j 3 = 1 , , N α ; x Ω x ( α 0 ) ;
where
f ( 3 ) ( j 3 ; j 2 ; j 1 ; φ ; α ) f ( 2 ) ( j 2 ; j 1 ; φ ; α ) α j 3 L ( x ; α ) α j 3 2 φ α j 2 α j 1 ;
g ( 3 ) ( j 3 ; j 2 ; j 1 ; ψ ; α ) g ( 2 ) ( j 2 ; j 1 ; ψ ; α ) α j 3 L * ( x ; α ) α j 3 2 ψ α j 2 α j 1 ;
c φ ( 3 ) ( j 3 ; j 2 ; j 1 ; φ ; α ) c φ ( 2 ) ( j 2 ; j 1 ; φ ; α ) α j 3 b φ ( x ; α ) α j 3 2 φ α j 2 α j 1 ; x Ω x ( α 0 ) .
c ψ ( 3 ) ( j 3 ; j 2 ; j 1 ; ψ ; α ) c ψ ( 2 ) ( j 2 ; j 1 ; ψ ; α ) α j 3 b ψ ( x ; α ) α j 3 2 ψ α j 2 α j 1 ; x Ω x ( α 0 ) .
Evidently, the computations of the third-order sensitivities 3 R ( α ) / α j 3 α j 2 α j 1 require 2 N α ( N α + 1 ) ( N α + 2 ) / 3 ! large-scale computations to solve Equations (99)–(102), followed by 2 N α ( N α + 1 ) ( N α + 2 ) / 3 ! small-scale computations for performing the integrations that define the various sensitivities. These computations would be in addition to the 2 N α large-scale computations needed to determine the functions φ ( x ) / α j 2 and ψ ( x ) / α j 2 by solving the 1st-OFSS, and the N α ( N α + 1 ) large-scale computations needed to determine the functions 2 φ / α j 1 α j 2 and 2 ψ / α j 1 α j 2 by solving the 2nd-OFSS.
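The counts invoked above follow from elementary combinatorics (combinations with repetition) and can be tallied with the short Python sketch below; the helper names are purely illustrative:

from math import comb

def n_distinct_sensitivities(n_alpha, order):
    """Number of distinct sensitivities of the given order (combinations with repetition)."""
    return comb(n_alpha + order - 1, order)

def fsam_third_order_large_scale_solves(n_alpha):
    """Cumulative large-scale solves needed by the FSAM up to 3rd order, cf. the tally above."""
    return (2 * n_alpha                                  # 1st-OFSS
            + n_alpha * (n_alpha + 1)                    # 2nd-OFSS
            + 2 * n_distinct_sensitivities(n_alpha, 3))  # 3rd-OFSS, for both phi and psi

for n in (10, 100, 7477):
    print(n, n_distinct_sensitivities(n, 3), fsam_third_order_large_scale_solves(n))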

3.3.3. The 3rd-Order Comprehensive Adjoint Sensitivity Analysis Methodology (3rd-CASAM)

The third-order sensitivities of the response R [ φ ( x ) , ψ ( x ) ; α ] with respect to the model parameters are obtained by determining the first-order G-differential of the 2nd-order sensitivities R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] = 2 R ( φ ; ψ ; α ) / α j 1 α j 2 , which were computed in Equation (90) in Section 3.2.3. By definition, the total G-differential of R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] is obtained as follows:
{ δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ] } α 0 { d d ε [ k = 1 N x λ k ( α 0 + ε δ α ) ω k ( α 0 + ε δ α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 + ε δ u ( 2 ) ; a ( 2 ) , 0 ( j 1 ) + ε δ a ( 2 ) ( j 1 ) ; α 0 + ε δ α ] } ε = 0 = { δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 ; a ( 2 ) , 0 ( j 1 ) ; α 0 ; δ α ] } d i r + { δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 ; a ( 2 ) , 0 ( j 1 ) ; α 0 ; δ u ( 2 ) ; δ a ( 2 ) ( j 1 ) ] } i n d ,
where the direct-effect term { δ R ( 2 ) ( j 2 ; j 1 ) } d i r depends directly on the parameter variations and is defined as follows:
{ δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 ; a ( 2 ) , 0 ( j 1 ) ; α 0 ; δ α ] } d i r { α k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] } α 0 δ α ,
while the “indirect-effect term” { δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 ; a ( 2 ) , 0 ( j 1 ) ; α 0 ; δ u ( 2 ) ; δ a ( 2 ) ( j 1 ) ] } i n d depends indirectly on the parameter variations through the variations in the forward and adjoint state functions and is defined as follows:
{ δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 ; a ( 2 ) , 0 ( j 1 ) ; α 0 ; δ u ( 2 ) ; δ a ( 2 ) ( j 1 ) ] } i n d { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] φ } α 0 δ φ ( x ) + { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] ψ } α 0 δ ψ ( x ) + { m = 1 2 k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] a m ( 1 ) } α 0 δ a m ( 1 ) ( x ) + { m = 1 4 k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; α ] a m ( 2 ) ( j 1 ) } α 0 δ a m ( 2 ) ( j 1 ; x ) .
The indirect-effect term defined in Equation (109) can be computed only after having computed the functions δ u ( 2 ) ( x ) and δ a ( 2 ) ( j 1 ; x ) [ δ a 1 ( 2 ) ( j 1 ; x ) , δ a 2 ( 2 ) ( j 1 ; x ) , δ a 3 ( 2 ) ( j 1 ; x ) , δ a 4 ( 2 ) ( j 1 ; x ) ] . The vector of variations δ u ( 2 ) ( x ) is the solution of the 2nd-LVSS while the vector δ a ( 2 ) ( x ) is the solution of the system of equations obtained by G-differentiating the 2nd-LASS defined by Equations (85) and (87).
Applying the definition of the total G-differential, cf. Equation (23), to the 2nd-LASS yields the following set of equations for the vector of variations δ a ( 2 ) ( x ) , all to be computed at the nominal values of the parameters and state functions (although this will not be indicated explicitly below, in order to keep the notation as simple as possible):
L * ( α ) δ a 1 ( 2 ) ( j 1 ; x ) 2 S ( u ( 1 ) ; α ) φ φ δ a 3 ( 2 ) ( j 1 ; x ) [ 3 S ( u ( 1 ) ; α ) φ φ φ δ φ + 3 S ( u ( 1 ) ; α ) φ φ ψ δ ψ ] a 3 ( 2 ) ( j 1 ; x ) 2 S ( u ( 1 ) ; α ) ψ φ δ a 4 ( 2 ) ( j 1 ; x ) [ 3 S ( u ( 1 ) ; α ) ψ φ φ δ φ + 2 S ( u ( 1 ) ; α ) ψ ψ φ δ ψ ] a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ φ δ φ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ ψ δ ψ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; a ( 1 ) ; α ) φ a 1 ( 1 ) δ a 1 ( 1 ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 2 ( 1 ) δ a 2 ( 1 ) = p 1 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] ,
L ( α ) δ a 2 ( 2 ) ( j 1 ; x ) 2 S ( u ( 1 ) ; α ) φ ψ δ a 3 ( 2 ) ( j 1 ; x ) [ 3 S ( u ( 1 ) ; α ) φ φ ψ δ φ + 3 S ( u ( 1 ) ; α ) φ ψ ψ δ ψ ] a 3 ( 2 ) ( j 1 ; x ) 2 S ( u ( 1 ) ; α ) ψ ψ δ a 4 ( 2 ) ( j 1 ; x ) [ 3 S ( u ( 1 ) ; α ) φ ψ ψ δ φ + 3 S ( u ( 1 ) ; α ) ψ ψ ψ δ ψ ] a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ ψ δ φ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ ψ δ ψ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) ψ δ a 1 ( 1 ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) ψ δ a 2 ( 1 ) = p 2 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] ,
L ( α ) δ a 3 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 1 ( 1 ) δ φ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ a 1 ( 1 ) δ ψ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) a 1 ( 1 ) δ a 1 ( 1 ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) a 2 ( 1 ) δ a 2 ( 1 ) = p 3 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] ,
L * ( α ) δ a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 2 ( 1 ) δ φ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ a 2 ( 1 ) δ ψ 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) a 2 ( 1 ) δ a 1 ( 1 ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) a 2 ( 1 ) δ a 2 ( 1 ) = p 4 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] ,
where:
p 1 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] [ L * ( α ) a 1 ( 2 ) ( j 1 ; x ) ] α δ α + [ 3 S ( u ( 1 ) ; α ) α φ φ δ α ] a 3 ( 2 ) ( j 1 ; x ) + [ 3 S ( u ( 1 ) ; α ) α ψ φ δ α ] a 4 ( 2 ) ( j 1 ; x ) + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α φ δ α ,
p 2 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] [ L ( α ) a 2 ( 2 ) ( j 1 ; x ) ] α δ α + [ 3 S ( u ( 1 ) ; α ) α φ ψ δ α ] a 3 ( 2 ) ( j 1 ; x ) + [ 3 S ( u ( 1 ) ; α ) α ψ ψ δ α ] a 4 ( 2 ) ( j 1 ; x ) + S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α ψ δ α ,
p 3 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] [ L ( α ) a 3 ( 2 ) ( j 1 ; x ) ] α δ α + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α a 1 ( 1 ) δ α ,
p 4 ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ α ] [ L * ( α ) a 4 ( 2 ) ( j 1 ; x ) ] α δ α + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α a 2 ( 1 ) δ α .
The vector of variations δ a ( 2 ) ( x ) is subject to the boundary conditions obtained by G-differentiating Equation (87), which will be represented in operator form as follows:
{ δ b A ( 2 ) [ u ( 2 ) ; a ( 2 ) ( j 1 ) ; α ; δ u ( 1 ) ; δ a ( 2 ) ( j 1 ) ; δ α ] } α 0 = 0 , x Ω x ( α 0 ) , j 1 = 1 , , N α .
As Equations (110) through (118) indicate, the vectors of variations δ a ( 2 ) ( x ) and δ u ( 2 ) ( x ) [ δ u ( 1 ) ( x ) ; δ a 1 ( 1 ) ( x ) , δ a 2 ( 1 ) ( x ) ] are related to each other and must be determined by solving simultaneously the coupled system of equations obtained by concatenating Equations (110) through (117) to the 2nd-LVSS, subject to the boundary conditions obtained by concatenating the boundary conditions which belong to the 2nd-LVSS with the boundary conditions represented by Equation (118). This coupled system comprises 8 coupled equations, which can be represented in the following matrix form:
V ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] δ u ( 3 ) ( j 1 ; x ) = q ( 3 ) [ j 1 ; u ( 2 ) ( j 1 ; x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] , x Ω x , j 1 = 1 , , N α ,
b v ( 3 ) [ j 1 ; u ( 3 ) ( j 1 ; x ) ; α ; δ u ( 3 ) ( j 1 ; x ) ; δ α ] ( b V ( 2 ) [ u ( 2 ) ( j 1 ; x ) ; α ; δ u ( 2 ) ( j 1 ; x ) ; δ α ] δ b A ( 2 ) [ j 1 ; u ( 3 ) ( j 1 ; x ) ; α ; δ u ( 3 ) ( j 1 ; x ) ; δ α ] ) = ( 0 0 ) , x Ω x ( α 0 ) ,
where all of the components of the matrices and vectors are to be computed at nominal parameter and state function values, although the corresponding indication (namely, the superscript “zero”) has been omitted in order to simplify the notation. The 8 × 8 block-matrix V ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] , and the 8-component block-vectors δ u ( 3 ) ( j 1 ; x ) and q ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] have the following structures:
u ( 3 ) ( j 1 ; x ) = [ u ( 2 ) ( x ) ; a ( 2 ) ( j 1 ; x ) ; ] ; δ u ( 3 ) ( j 1 ; x ) = [ δ u ( 2 ) ( x ) ; δ a ( 2 ) ( j 1 ; x ) ] ; q ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] ( q ( 2 ) ( u ( 2 ) ; α ; δ α ) p ( 2 ) [ j 1 ; u ( 2 ) ( j 1 ; x ) ; a ( 2 ) ( j 1 ; x ) ; α ; δ α ] ) [ q 1 ( 3 ) , q 2 ( 3 ) , q 3 ( 3 ) , q 4 ( 3 ) , q 5 ( 3 ) ( j 1 ) , q 6 ( 3 ) ( j 1 ) , q 7 ( 3 ) ( j 1 ) , q 8 ( 3 ) ( j 1 ) ] ;
V ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] ( V ( 2 ) ( u ( 1 ) ) [ 0 ] ( 4 × 4 ) V 21 ( 3 ) ( j 1 ; u ( 2 ) ) V 22 ( 3 ) ( u ( 1 ) ) ) ; V 22 ( 3 ) ( u ( 1 ) ) ( L * ( α ) 0 2 S ( u ( 1 ) ; α ) φ φ 2 S ( u ( 1 ) ; α ) ψ φ 0 L ( α ) 2 S ( u ( 1 ) ; α ) φ ψ 2 S ( u ( 1 ) ; α ) ψ ψ 0 0 L ( α ) 0 0 0 0 L * ( α ) ) ;
and where the components v 21 ( 3 ) ( i , j ) of the matrix V 21 ( 3 ) ( j 1 ) [ v 21 ( 3 ) ( i , j ) ] ( 4 × 4 ) ; i , j = 1 , , 4 are defined as follows:
v 21 ( 3 ) ( 1 , 1 ) = 3 S ( u ( 1 ) ; α ) φ φ φ a 3 ( 2 ) ( j 1 ; x ) 3 S ( u ( 1 ) ; α ) ψ φ φ a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ φ ,
v 21 ( 3 ) ( 1 , 2 ) = 3 S ( u ( 1 ) ; α ) φ φ ψ a 3 ( 2 ) ( j 1 ; x ) 3 S ( u ( 1 ) ; α ) ψ ψ φ a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ ψ ,
v 21 ( 3 ) ( 1 , 3 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 1 ( 1 ) , v 21 ( 3 ) ( 1 , 4 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 2 ( 1 ) ,
v 21 ( 3 ) ( 2 , 1 ) = 3 S ( u ( 1 ) ; α ) φ φ ψ a 3 ( 2 ) ( j 1 ; x ) 3 S ( u ( 1 ) ; α ) φ ψ ψ a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ ψ ,
v 21 ( 3 ) ( 2 , 2 ) = 3 S ( u ( 1 ) ; α ) φ ψ ψ a 3 ( 2 ) ( j 1 ; x ) 3 S ( u ( 1 ) ; α ) ψ ψ ψ a 4 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ ψ ,
v 21 ( 3 ) ( 2 , 3 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) ψ , v 21 ( 3 ) ( 2 , 4 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) ψ ,
v 21 ( 3 ) ( 3 , 1 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 1 ( 1 ) , v 21 ( 3 ) ( 3 , 2 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ a 1 ( 1 ) ,
v 21 ( 3 ) ( 3 , 3 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) a 1 ( 1 ) , v 21 ( 3 ) ( 3 , 4 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) a 1 ( 1 ) ,
v 21 ( 3 ) ( 4 , 1 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) φ a 2 ( 1 ) , v 21 ( 3 ) ( 4 , 2 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) ψ a 2 ( 1 ) ,
v 21 ( 3 ) ( 4 , 3 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 1 ( 1 ) a 2 ( 1 ) , v 21 ( 3 ) ( 4 , 4 ) = 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) a 2 ( 1 ) a 2 ( 1 ) .
The matrix-operator equations represented by Equation (119) together with the boundary conditions represented by Equation (120) will be called the 3rd-Level Variational Sensitivity System (3rd-LVSS). The solution, δ u ( 3 ) ( x ) , of the 3rd-LVSS will be called the “3rd-level variational sensitivity function”.
In the particular case when the response depends only on the forward function φ ( x ) , i.e., if the response has the functional dependence R [ φ ( x ) ; α ] , the quantities ψ , a 2 ( 1 ) , a 2 ( 2 ) ( j 1 ; x ) , a 4 ( 2 ) ( j 1 ; x ) and the related quantities would not exist. Consequently, the vector δ u ( 3 ) ( j 1 ; x ) would reduce to the following 4-component vector: δ u ( 3 ) ( j 1 ; x ) = [ δ φ ( x ) , δ a 1 ( 1 ) ( x ) , δ a 1 ( 2 ) , δ a 3 ( 2 ) ] and the 3rd-LVSS would reduce correspondingly to the following 4 × 4 block-matrix system:
( L ( α ) 0 0 0 2 S ( φ ; α 0 ) φ φ L * ( α ) 0 0 F 31 ( 3 ) [ j 1 ] F 32 ( 3 ) [ j 1 ] L * ( α ) 2 S ( φ ; α ) φ φ F 41 ( 3 ) [ j 1 ] F 42 ( 3 ) [ j 1 ] 0 L ( α ) ) ( δ φ ( x ) δ a 1 ( 1 ) δ a 1 ( 2 ) ( j 1 ; x ) δ a 3 ( 2 ) ( j 1 ; x ) ) = ( q 1 ( 1 ) q 3 ( 2 ) q 5 ( 3 ) q 7 ( 3 ) ) ,
where
F 31 ( 3 ) [ j 1 ] = 3 S ( φ ; ψ ; α ) φ φ φ a 3 ( 2 ) ( j 1 ; x ) 2 S ( 1 ) ( j 1 ; φ ; ψ ; a ( 1 ) ; α ) φ φ ;   F 32 ( 3 ) [ j 1 ] = 2 S ( 1 ) ( j 1 ; φ ; a ( 1 ) ; α ) φ a 1 ( 1 ) ; F 41 ( 3 ) [ j 1 ] = 2 S ( 1 ) ( j 1 ; φ ; ψ ; a ( 1 ) ; α ) φ a 1 ( 1 ) ;   F 42 ( 3 ) [ j 1 ] = 2 S ( 1 ) ( j 1 ; φ ; ψ ; a ( 1 ) ; α ) a 1 ( 1 ) a 1 ( 1 ) ; q 1 ( 1 ) = [ q φ ( α ) L ( α ) φ ] α ;   q 3 ( 2 ) = 2 S ( φ ; ψ ; α 0 ) α φ δ α [ L * ( α 0 ) a 1 ( 1 ) ] α δ α ; q 5 ( 3 ) = [ L * ( α ) a 1 ( 2 ) ( j 1 ; x ) ] α δ α + 2 S ( 1 ) ( j 1 ) α φ δ α + [ 3 S ( φ ; α ) α φ φ δ α ] a 3 ( 2 ) ( j 1 ; x ) ; q 7 ( 3 ) = [ L ( α ) a 3 ( 2 ) ( j 1 ; x ) ] α δ α + 2 S ( 1 ) ( j 1 ; φ ; a ( 1 ) ; α ) α a 1 ( 1 ) δ α .
A reduction of the 3rd-LVSS to a 4 × 4 matrix-equation, which would be similar to the reduction shown in Equation (133), would also occur for a response R [ ψ ( x ) ; α ] which depended only on the adjoint function ψ ( x ) .
Since the 3rd-LVSS equations depend on the parameter variations δ α , solving them is prohibitively expensive computationally for large-scale systems involving many parameters. The need for solving the 3rd-LVSS can be avoided by expressing the indirect-effect term { δ R ( 2 ) ( j 2 ; j 1 ) } i n d defined in Equation (109) in an alternative way, namely in terms of the solutions of a 3rd-Level Adjoint Sensitivity System (3rd-LASS), which will not involve any variations in the state functions. This 3rd-LASS is constructed by implementing the same sequence of logical steps as used to construct the 1st-LASS and the 2nd-LASS, namely:
1
Define a Hilbert space, denoted as H 3 , comprising vector-valued elements of the form η ( 3 ) ( x ) [ η 1 ( 3 ) ( x ) , , η 8 ( 3 ) ( x ) ] H 3 , with η i ( 3 ) ( x ) [ η i , 1 ( 3 ) ( x ) , , η i , j ( 3 ) ( x ) , , η i , N φ ( 3 ) ( x ) ] , i = 1 , .. , 8 . The inner product between two elements, η ( 3 ) ( x ) H 3 and ξ ( 3 ) ( x ) H 3 , of this Hilbert space, will be denoted as η ( 3 ) ( x ) , ξ ( 3 ) ( x ) 3 and is defined as follows:
η ( 3 ) ( x ) , ξ ( 3 ) ( x ) 3 i = 1 8 η i ( 3 ) ( x ) , ξ i ( 3 ) ( x ) 0 .
2
In the Hilbert space H 3 , form the inner product of Equation (119) with a set of yet-undefined vector-valued functions a ( 3 ) ( j 2 ; j 1 ; x ) [ a 1 ( 3 ) ( j 2 ; j 1 ; x ) , , a 8 ( 3 ) ( j 2 ; j 1 ; x ) ] H 3 , j 1 , j 2 = 1 , , N α , to obtain the following relation:
{ a ( 3 ) ( j 2 ; j 1 ; x ) , V ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] δ u ( 3 ) ( x ) 3 } α 0 = { a ( 3 ) ( j 2 ; j 1 ; x ) , q ( 3 ) [ j 1 ; u ( 3 ) ( j 1 ; x ) ; α ; δ α ] 3 } α 0 .
3
Using the definition of the adjoint operator in the Hilbert space H 3 , recast the left-side of Equation (135) as follows:
{ a ( 3 ) ( j 2 ; j 1 ; x ) , V ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] δ u ( 3 ) ( x ) 3 } α 0 = { δ u ( 3 ) ( x ) , A ( 3 ) ( j 1 ) a ( 3 ) ( j 2 ; j 1 ; x ) 3 } α 0 + { P ( 3 ) [ δ u ( 3 ) ( x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; u ( 3 ) ( j 1 ; x ) ; α ; δ α ] } α 0 ,
where { P ( 3 ) [ δ u ( 3 ) ( x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; u ( 3 ) ( j 1 ; x ) ; α ; δ α ] } α 0 denotes the bilinear concomitant defined on the phase-space boundary x Ω x ( α 0 ) and where A ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] [ V ( 3 ) ( j 1 ; u ( 2 ) ) ] * is the operator formally adjoint to V ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] and has therefore the following form:
A ( 3 ) [ j 1 ; u ( 2 ) ( x ) ] = ( A ( 2 ) [ V 21 ( 3 ) ( j 1 ; u ( 2 ) ) ] * [ 0 ] ( 4 × 4 ) [ V 22 ( 3 ) ] * ) .
4
The first term on the right-side of Equation (136) is now required to represent the indirect-effect term { δ R ( 2 ) [ j 2 ; j 1 ; u ( 2 ) , 0 ; a ( 2 ) , 0 ( j 1 ) ; α 0 ; δ u ( 2 ) ; δ a ( 2 ) ( j 1 ) ] } i n d defined in Equation (109). This requirement is satisfied by imposing the following relation on each element a ( 3 ) ( j 2 ; j 1 ; x ) [ a 1 ( 3 ) ( j 2 ; j 1 ; x ) , , a 8 ( 3 ) ( j 2 ; j 1 ; x ) ] H 3 , j 1 , j 2 = 1 , , N α :
{ A ( 3 ) [ j 1 ; u ( 2 ) ( j 1 ; x ) ] a ( 3 ) ( j 2 ; j 1 ; x ) } α 0 = { s ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] } α 0 , j 1 , j 2 = 1 , , N α ,
where the block-vector s ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] comprises, for each j 1 , j 2 = 1 , , N α , eight components (which are themselves block-vectors) defined as follows:
s ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] [ s 1 ( 3 ) ( j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ) , , s 8 ( 3 ) ( j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ) ] ,
where
s 1 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] φ ,
s 2 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] ψ ,
s 3 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] a 1 ( 1 ) ,
s 4 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] a 2 ( 1 ) ( j 1 ) ,
s 5 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] a 1 ( 2 ) ,
s 6 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] a 2 ( 2 ) ( j 1 ) ,
s 7 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] a 3 ( 2 ) ( j 1 ) ,
s 8 ( 3 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] a 4 ( 2 ) ( j 1 ) .
5
The definition of the set of vectors a ( 3 ) ( j 2 ; j 1 ; x ) [ a 1 ( 3 ) ( j 2 ; j 1 ; x ) , , a 8 ( 3 ) ( j 2 ; j 1 ; x ) ] H 3 will now be completed by selecting boundary conditions for this set of vectors, which will be represented in operator form as follows:
{ b A ( 3 ) [ a ( 3 ) ( j 2 ; j 1 ; x ) ; u ( 3 ) ( j 1 ; x ) ; α ] } α 0 = 0 , x Ω x ( α 0 ) , j 1 , j 2 = 1 , , N α .
The boundary conditions represented by Equation (148) are selected so as to satisfy the following requirements:
(i)
The boundary conditions Equation (148) together with the operator Equation (138) constitute a well posed problem for the functions a ( 3 ) ( j 2 ; j 1 ; x ) .
(ii)
Implementation in Equation (136) of the boundary conditions provided in Equation (120) together with those provided in Equation (148) eliminates all of the unknown values of the functions δ u ( 3 ) ( x ) and a ( 3 ) ( j 2 ; j 1 ; x ) in the expression of the bilinear concomitant { P ( 3 ) [ δ u ( 3 ) ( x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; u ( 3 ) ( j 1 ; x ) ; α ; δ α ] } ( α 0 ) . This bilinear concomitant may vanish after implementing the boundary conditions represented by Equation (148), but if it does not, it will be reduced to a residual quantity which will be denoted as P ^ ( 3 ) [ a ( 3 ) ( j 2 ; j 1 ; x ) ; u ( 3 ) ( j 1 ; x ) ; α ; δ α ] and which will comprise only known values of a ( 3 ) ( j 2 ; j 1 ; x ) , u ( 3 ) ( j 1 ; x ) , α and δ α .
The system of equations represented by Equation (138) together with the boundary conditions represented by Equation (148) constitute the 3rd-Level Adjoint Sensitivity System (3rd-LASS). The solution a ( 3 ) ( j 2 ; j 1 ; x ) [ a 1 ( 3 ) ( j 2 ; j 1 ; x ) , , a 8 ( 3 ) ( j 2 ; j 1 ; x ) ] H 3 , j 1 , j 2 = 1 , , N α , of the 3rd-LASS will be called the 3rd-level adjoint function.
The 3rd-LASS together with the results provided in Equations (135) and (136) are employed in Equation (109) to obtain the following expression for the indirect-effect term { δ R ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ; δ u ( 2 ) ; δ a ( 2 ) ( j 1 ) ] } i n d in terms of the 3rd-level adjoint functions a ( 3 ) ( j 2 ; j 1 ; x ) , for j 1 , j 2 = 1 , , N α :
{ δ R ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ; δ u ( 3 ) ( j 1 ; x ) ] } i n d = { a ( 3 ) ( j 2 ; j 1 ; x ) , q ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] 3 } α 0 { P ^ ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 { δ R ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } i n d .
As the identity in the last line of Equation (149) indicates, the dependence of the indirect-effect term on the variations δ u ( 2 ) and δ a ( 2 ) ( j 1 ) in the state functions has been eliminated, having been replaced by the dependence on the 3rd-level adjoint function a ( 3 ) ( j 2 ; j 1 ; x ) .
Inserting the expressions of the components of the vector q ( 3 ) [ u ( 3 ) ( j 1 ; x ) α ; δ α ] , cf. Equation (122), into Equation (149) and adding the resulting expression for the indirect-effect term { δ R ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ; δ u ( 2 ) ; δ a ( 2 ) ( j 1 ) ] } i n d to the expression of the direct-effect term given in Equation (108) yields the following expression for the total third-order G-differential of the response R [ φ ( x ) , ψ ( x ) ; α ] :
{ δ R ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 = { α k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] } α 0 δ α + { a 1 ( 3 ) ( j 2 ; j 1 ; x ) , q 1 ( 1 ) ( φ ; α ; δ α ) 0 } α 0 + { a 2 ( 3 ) ( j 2 ; j 1 ; x ) , q 2 ( 1 ) ( ψ ; α ; δ α ) 0 } α 0 + { a 3 ( 3 ) ( j 2 ; j 1 ; x ) , q 3 ( 2 ) ( u ( 2 ) ; α ; δ α ) 0 } α 0 + { a 4 ( 3 ) ( j 2 ; j 1 ; x ) , q 4 ( 2 ) ( u ( 2 ) ; α ; δ α ) 0 } α 0 + { a 5 ( 3 ) ( j 2 ; j 1 ; x ) , q 5 ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] 0 } α 0 + { a 6 ( 3 ) ( j 2 ; j 1 ; x ) , q 6 ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] 0 } α 0 + { a 7 ( 3 ) ( j 2 ; j 1 ; x ) , q 7 ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] 0 } α 0 + { a 8 ( 3 ) ( j 2 ; j 1 ; x ) , q 8 ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] 0 } α 0 { P ^ ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 , f o r j 1 , j 2 = 1 , , N α .
Inserting the explicit expressions for the components of q ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; α ; δ α ] into Equation (150) leads to the following expression of the total third-order G-differential { δ R ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 in terms of the third-order partial sensitivities (G-derivatives) of the response with respect to the model parameters:
{ δ R ( 2 ) [ u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 = j 3 = 1 N α { R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] } α 0 δ α j 3 ,
where R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] denotes the 3rd-order partial sensitivity of the response with respect to the model parameters, evaluated at the nominal parameter values α 0 . The explicit expression of R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] is obtained by using Equation (150) in Equation (151) and subsequently identifying the expression that multiplies the parameter δ α j 3 . This process indicates that the 3rd-order partial sensitivity  R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] has the following explicit expression, for each j 1 , j 2 , j 3 = 1 , , N α :
R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] 3 R ( φ ; ψ ; α ) α j 3 α j 2 α j 1 = { α j 3 P ^ ( 3 ) [ u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 + α j 3 { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 2 ) [ j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; α ] } α 0 + { a 1 ( 3 ) ( j 2 ; j 1 ; x ) , [ q φ ( α ) L ( α ) φ ( x ) ] α j 3 0 + a 2 ( 3 ) ( j 2 ; j 1 ; x ) , [ q ψ ( α ) L * ( α ) ψ ( x ) ] α j 3 0 } α 0 + { a 3 ( 3 ) ( j 2 ; j 1 ; x ) , 2 S ( u ( 1 ) ; α ) α j 3 φ α [ L * ( α ) a 1 ( 1 ) ] α j 3 0 + a 4 ( 3 ) ( j 2 ; j 1 ; x ) , 2 S ( u ( 1 ) ; α ) α j 3 ψ [ L ( α ) a 2 ( 1 ) ] α j 3 0 } α 0 + { a 5 ( 3 ) ( j 2 ; j 1 ; x ) , [ L * ( α ) a 1 ( 2 ) ( j 1 ; x ) ] α j 3 + 3 S ( u ( 1 ) ; α ) α j 3 φ φ a 3 ( 2 ) ( j 1 ; x ) + 3 S ( u ( 1 ) ; α ) α j 3 ψ φ a 4 ( 2 ) ( j 1 ; x ) + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α j 3 φ 0 } α 0 + { a 6 ( 3 ) ( j 2 ; j 1 ; x ) , [ L ( α ) a 2 ( 2 ) ( j 1 ; x ) ] α j 3 + 3 S ( u ( 1 ) ; α ) α j 3 φ ψ a 3 ( 2 ) ( j 1 ; x ) + 3 S ( u ( 1 ) ; α ) α j 3 ψ ψ a 4 ( 2 ) ( j 1 ; x ) + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; a ( 1 ) ; α ) α j 3 ψ 0 } α 0 + { a 7 ( 3 ) ( j 2 ; j 1 ; x ) , [ L ( α ) a 3 ( 2 ) ( j 1 ; x ) ] α j 3 + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α j 3 a 1 ( 1 ) 0 } α 0 + { a 8 ( 3 ) ( j 2 ; j 1 ; x ) , [ L * ( α ) a 4 ( 2 ) ( j 1 ; x ) ] α j 3 + 2 S ( 1 ) ( j 1 ; u ( 2 ) ; α ) α j 3 a 2 ( 1 ) 0 } α 0 { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] } α 0 .
The expression in Equation (152) can be computed after having obtained the 3rd-level adjoint function a ( 3 ) ( j 2 ; j 1 ; x ) ≡ [ a 1 ( 3 ) ( j 2 ; j 1 ; x ) , … , a 8 ( 3 ) ( j 2 ; j 1 ; x ) ] . Since the 3rd-LASS is independent of parameter variations δ α , the exact computation of all of the partial third-order sensitivities R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] requires at most O ( N α 2 ) large-scale (adjoint) computations using the 3rd-LASS, rather than O ( N α 3 ) large-scale computations as would be required by forward methods. In component form, the equations comprising the 3rd-LASS are solved (for each j 1 = 1 , … , N α ; j 2 = 1 , … , j 1 ), at the nominal values of the model parameters and state functions, in the following order:
$$L(\alpha)\, a_5^{(3)}(j_2;j_1;x) = s_5^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big], \qquad (153)$$

$$L^{*}(\alpha)\, a_6^{(3)}(j_2;j_1;x) = s_6^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big], \qquad (154)$$

$$L^{*}(\alpha)\, a_7^{(3)}(j_2;j_1;x) = s_7^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big] + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\varphi\,\partial\varphi}\, a_5^{(3)}(j_2;j_1;x) + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\varphi\,\partial\psi}\, a_6^{(3)}(j_2;j_1;x), \qquad (155)$$

$$L(\alpha)\, a_8^{(3)}(j_2;j_1;x) = s_8^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big] + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\psi\,\partial\varphi}\, a_5^{(3)}(j_2;j_1;x) + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\psi\,\partial\psi}\, a_6^{(3)}(j_2;j_1;x), \qquad (156)$$

$$L(\alpha)\, a_3^{(3)}(j_2;j_1;x) = s_3^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big] + \sum_{k=1}^{4} v_{21}^{(3)}(k,3)\, a_{k+4}^{(3)}(j_2;j_1;x), \qquad (157)$$

$$L^{*}(\alpha)\, a_4^{(3)}(j_2;j_1;x) = s_4^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big] + \sum_{k=1}^{4} v_{21}^{(3)}(k,4)\, a_{k+4}^{(3)}(j_2;j_1;x), \qquad (158)$$

$$L^{*}(\alpha)\, a_1^{(3)}(j_2;j_1;x) = s_1^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big] + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\varphi\,\partial\varphi}\, a_3^{(3)}(j_2;j_1;x) + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\psi\,\partial\varphi}\, a_4^{(3)}(j_2;j_1;x) + \sum_{k=1}^{4} v_{21}^{(3)}(k,1)\, a_{k+4}^{(3)}(j_2;j_1;x), \qquad (159)$$

$$L(\alpha)\, a_2^{(3)}(j_2;j_1;x) = s_2^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, \alpha \big] + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\varphi\,\partial\psi}\, a_3^{(3)}(j_2;j_1;x) + \frac{\partial^2 S(u^{(1)};\alpha)}{\partial\psi\,\partial\psi}\, a_4^{(3)}(j_2;j_1;x) + \sum_{k=1}^{4} v_{21}^{(3)}(k,2)\, a_{k+4}^{(3)}(j_2;j_1;x). \qquad (160)$$
The equations underlying the 3rd-LASS are solved successively, starting with Equations (153) and (154) to compute the 3rd-level adjoint functions a 5 ( 3 ) ( j 2 ; j 1 ; x ) and a 6 ( 3 ) ( j 2 ; j 1 ; x ) . Note that Equations (153) and (154) are solved by using the same forward and adjoint solvers (i.e., computer codes) as are used for solving the original forward and adjoint systems, namely Equations (3) and (11), respectively; only the corresponding source terms and boundary conditions differ. After the 3rd-level adjoint functions a 5 ( 3 ) ( j 2 ; j 1 ; x ) and a 6 ( 3 ) ( j 2 ; j 1 ; x ) have been obtained, the next round of computations solves Equations (155) and (156) to determine the 3rd-level adjoint functions a 7 ( 3 ) ( j 2 ; j 1 ; x ) and a 8 ( 3 ) ( j 2 ; j 1 ; x ) , respectively; these equations are also solved with the same forward and adjoint solvers. The next round of computations employs the forward and adjoint solvers to solve Equations (157) and (158), which determine the 3rd-level adjoint functions a 3 ( 3 ) ( j 2 ; j 1 ; x ) and a 4 ( 3 ) ( j 2 ; j 1 ; x ) , respectively. The final round of computations uses the forward and adjoint solvers to solve Equations (159) and (160), which determine the 3rd-level adjoint functions a 1 ( 3 ) ( j 2 ; j 1 ; x ) and a 2 ( 3 ) ( j 2 ; j 1 ; x ) , respectively. Thus, solving the 3rd-LASS to determine the 3rd-level adjoint functions does not require any significant “code development,” since the original forward and adjoint solvers (codes) need not be modified; only the right-hand sides (i.e., “sources”) and boundary conditions need to be programmed accordingly. Using the 3rd-LASS enables the computation of specific 3rd-order sensitivities in the priority order set by the user, so that only the important 3rd-order partial sensitivities of R [ φ ( x ) , ψ ( x ) ; α ] need to be computed. Information provided by the first- and second-order sensitivities is very likely to indicate which 3rd-order sensitivities can be neglected.
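The solution sequence just described can be summarized schematically. The following minimal sketch (not part of the original work) assumes that the original forward and adjoint solvers are available as generic callables, here named forward_solve and adjoint_solve, and that the right-hand sides of Equations (153)–(160) are supplied by user-provided routines collected in "sources"; all names and signatures are hypothetical placeholders.

```python
# Sketch of the sequential solution of the 3rd-LASS, Eqs. (153)-(160), for fixed (j1, j2).
# forward_solve(source): solves L(alpha) w = source   (the original forward solver)
# adjoint_solve(source): solves L*(alpha) w = source  (the original adjoint solver)
# sources[k](...): evaluates the right-hand side of the equation for a_k^(3),
#                  including the coupling terms involving previously computed components.
def solve_3rd_lass(forward_solve, adjoint_solve, sources, j1, j2):
    a = {}
    # First round: Eqs. (153)-(154) involve only the sources s_5^(3), s_6^(3).
    a[5] = forward_solve(sources[5](j1, j2))
    a[6] = adjoint_solve(sources[6](j1, j2))
    # Second round: Eqs. (155)-(156) couple to a_5^(3), a_6^(3).
    a[7] = adjoint_solve(sources[7](j1, j2, a[5], a[6]))
    a[8] = forward_solve(sources[8](j1, j2, a[5], a[6]))
    # Third round: Eqs. (157)-(158) couple to a_5^(3), ..., a_8^(3).
    a[3] = forward_solve(sources[3](j1, j2, a[5], a[6], a[7], a[8]))
    a[4] = adjoint_solve(sources[4](j1, j2, a[5], a[6], a[7], a[8]))
    # Final round: Eqs. (159)-(160) couple to all previously computed components.
    a[1] = adjoint_solve(sources[1](j1, j2, a[3], a[4], a[5], a[6], a[7], a[8]))
    a[2] = forward_solve(sources[2](j1, j2, a[3], a[4], a[5], a[6], a[7], a[8]))
    return [a[k] for k in range(1, 9)]
```

In an actual application, each call to forward_solve or adjoint_solve corresponds to one large-scale computation of the kind quantified in Section 3.3.4 below.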

3.3.4. Comparison of Computational Requirements for Computing the Third-Order Response Sensitivities with Respect to Model Parameters

The largest third-order sensitivities of the leakage response of the OECD/NEA reactor physics benchmark measured in [16] have been computed in [19,20,21], where it was shown that these large sensitivities (which reached relative values of the order of 10^4) involve the benchmark’s 180 imprecisely known microscopic total cross sections. Using a DELL computer (AMD FX-8350) with an 8-core processor, the CPU-time for a typical adjoint computation (which represents a large-scale computation) of one of the eight adjoint functions needed for computing the 3rd-order response sensitivities to the benchmark’s total microscopic cross sections is ca. 20 s, while the CPU-time for computing an integral (which represents a small-scale computation) over the adjoint function(s) appearing in the formula for a 3rd-order sensitivity is ca. 0.003 s. By applying the 3rd-CASAM, the computation of the 3rd-order sensitivities of the benchmark’s leakage response to the 180 total microscopic cross sections required 33,301 large-scale adjoint computations, which took 175 CPU-hours. By comparison, computing these 3rd-order sensitivities using the FSAM would have required 1,004,730 large-scale forward computations (ca. 12,559 CPU-hours), while using finite differences would have required 7,905,360 large-scale forward computations (ca. 98,817 CPU-hours) to obtain, at best, approximate values for these 3rd-order sensitivities. Evidently, the 3rd-CASAM is the only practical methodology capable of computing, without introducing methodological errors, 3rd-order sensitivities for large-scale systems involving many parameters.

3.4. Computation of the Fourth-Order Sensitivities of R [ φ ( x ) , ψ ( x ) ; α ]

There are N α ( N α + 1 ) ( N α + 2 ) ( N α + 3 ) / 4 ! distinct 4th-order sensitivities of the response with respect to the model and response parameters. It is not practicable to compute these sensitivities using finite-difference formulas or the forward sensitivity analysis methodology. The finite-difference formulas for computing these sensitivities are presented in Section 3.4.1. The Forward Sensitivity Analysis Methodology (FSAM) for computing the 4th-order response sensitivities is presented in Section 3.4.2, while the 4th-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM) is presented in Section 3.4.3.
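As a quick check of this count, the short sketch below (illustrative only, not part of the original work) evaluates the number of distinct 4th-order sensitivities both from the product formula quoted above and from the equivalent binomial coefficient, using as an example the 180 parameters of the benchmark discussed in Section 3.3.4.

```python
from math import comb

def number_of_distinct_sensitivities(n_alpha: int, order: int) -> int:
    """Distinct mixed partial derivatives of the given order with respect to n_alpha
    parameters: each unordered combination (with repetition) is counted once."""
    return comb(n_alpha + order - 1, order)

n = 180  # example: the benchmark's 180 total microscopic cross sections
assert number_of_distinct_sensitivities(n, 4) == n * (n + 1) * (n + 2) * (n + 3) // 24
print(number_of_distinct_sensitivities(n, 4))  # 45212895 distinct 4th-order sensitivities
```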

3.4.1. Finite-Difference Approximation Using Re-Computations with User-Modified Parameters

The 4th-order unmixed sensitivities, $\partial^4 R(\alpha)/\partial\alpha_{j_1}^4$, of a response $R(\alpha)$ with respect to each of the model parameters can be approximately computed by re-computations using the well-known finite-difference formula presented below:
$$\frac{\partial^4 R(\alpha)}{\partial\alpha_{j_1}^4} \cong \frac{1}{h_{j_1}^4}\left( R_{j_1+2} - 4R_{j_1+1} + 6R_{j_1} - 4R_{j_1-1} + R_{j_1-2} \right) + O\!\left(h_{j_1}^2\right), \qquad j_1 = 1,\dots,N_\alpha, \qquad (161)$$
where $R_{j_1+2} \equiv R(\alpha_{j_1}+2h_{j_1})$, $R_{j_1+1} \equiv R(\alpha_{j_1}+h_{j_1})$, $R_{j_1-1} \equiv R(\alpha_{j_1}-h_{j_1})$, $R_{j_1-2} \equiv R(\alpha_{j_1}-2h_{j_1})$, and $h_{j_1}$ denotes a “judiciously-chosen” variation in the parameter $\alpha_{j_1}$ around its nominal value $\alpha_{j_1}^0$. The 4th-order mixed sensitivities, $\partial^4 R(\alpha)/\partial\alpha_{j_1}\partial\alpha_{j_2}\partial\alpha_{j_3}\partial\alpha_{j_4}$, can be calculated by using the following finite-difference formula:
$$\begin{aligned}
\frac{\partial^4 R(\alpha)}{\partial\alpha_{j_1}\,\partial\alpha_{j_2}\,\partial\alpha_{j_3}\,\partial\alpha_{j_4}} \cong \frac{1}{16\, h_{j_1} h_{j_2} h_{j_3} h_{j_4}} \Big( &\; R_{j_1+1,j_2+1,j_3+1,j_4+1} - R_{j_1+1,j_2+1,j_3+1,j_4-1} - R_{j_1+1,j_2+1,j_3-1,j_4+1} + R_{j_1+1,j_2+1,j_3-1,j_4-1} \\
&- R_{j_1+1,j_2-1,j_3+1,j_4+1} + R_{j_1+1,j_2-1,j_3+1,j_4-1} + R_{j_1+1,j_2-1,j_3-1,j_4+1} - R_{j_1+1,j_2-1,j_3-1,j_4-1} \\
&- R_{j_1-1,j_2+1,j_3+1,j_4+1} + R_{j_1-1,j_2+1,j_3+1,j_4-1} + R_{j_1-1,j_2+1,j_3-1,j_4+1} - R_{j_1-1,j_2+1,j_3-1,j_4-1} \\
&+ R_{j_1-1,j_2-1,j_3+1,j_4+1} - R_{j_1-1,j_2-1,j_3+1,j_4-1} - R_{j_1-1,j_2-1,j_3-1,j_4+1} + R_{j_1-1,j_2-1,j_3-1,j_4-1} \Big) \\
&+ O\!\left(h_{j_1}^2, h_{j_2}^2, h_{j_3}^2, h_{j_4}^2\right), \qquad (162)
\end{aligned}$$
where $R_{j_1+1,\,j_2+1,\,j_3+1,\,j_4+1} \equiv R(\alpha_{j_1}+h_{j_1},\,\alpha_{j_2}+h_{j_2},\,\alpha_{j_3}+h_{j_3},\,\alpha_{j_4}+h_{j_4})$, etc. The finite-difference formulas introduce their intrinsic “methodological errors” of order $O(h_{j_1}^2, h_{j_2}^2, h_{j_3}^2, h_{j_4}^2)$, which are in addition to, and independent of, the numerical errors incurred in computing the response values that enter these formulas.
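For concreteness, the two finite-difference formulas above can be implemented as follows. This is a minimal sketch which assumes that the response is available as a black-box callable R(alpha) returning a scalar for a full parameter vector, so that each call corresponds to one large-scale forward computation; the function names and arguments are illustrative, not part of the original work.

```python
from itertools import product
import numpy as np

def fd_unmixed_4th(R, alpha, j1, h):
    """Five-point central-difference estimate of d^4 R / d alpha_j1^4, cf. Eq. (161)."""
    def shifted(s):
        a = np.array(alpha, dtype=float)
        a[j1] += s * h
        return R(a)
    return (shifted(+2) - 4 * shifted(+1) + 6 * shifted(0)
            - 4 * shifted(-1) + shifted(-2)) / h**4

def fd_mixed_4th(R, alpha, js, hs):
    """Sixteen-point estimate of the mixed 4th-order derivative, cf. Eq. (162);
    js = (j1, j2, j3, j4) are parameter indices, hs the corresponding step sizes."""
    total = 0.0
    for signs in product((+1, -1), repeat=4):
        a = np.array(alpha, dtype=float)
        for j, h, s in zip(js, hs, signs):
            a[j] += s * h
        total += np.prod(signs) * R(a)   # sign of each term = product of the four +/- shifts
    return total / (16.0 * np.prod(hs))
```

The unmixed formula requires five response evaluations per parameter, and the mixed formula sixteen evaluations per parameter quadruple, which is the origin of the large re-computation counts discussed in Section 3.3.4.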

3.4.2. Forward Sensitivity Analysis Methodology

Within the framework of the FSAM, the fourth-order sensitivities $\partial^4 R(\varphi;\psi;\alpha)/\partial\alpha_{j_1}\partial\alpha_{j_2}\partial\alpha_{j_3}\partial\alpha_{j_4}$, $j_1, j_2, j_3, j_4 = 1,\dots,N_\alpha$, of the response to the model parameters are obtained by computing the partial G-derivatives of the third-order sensitivities represented by Equation (98), which yields the following result:
4 R ( φ ; ψ ; α ) α j 1 α j 2 α j 3 α j 4 1 δ α j 4 { d d ε R ( 3 ) [ j 3 ; j 2 ; j 1 ; φ ( α j 4 0 + ε δ α j 4 ) ; ψ ( α j 4 0 + ε δ α j 4 ) ; α j 4 0 + ε δ α j 4 ; φ ( α j 4 0 + ε δ α j 4 ) α j 1 ; ψ ( α j 4 0 + ε δ α j 4 ) α j 1 ; 2 φ ( α j 4 0 + ε δ α j 4 ) α j 1 α j 2 ; 2 ψ ( α j 4 0 + ε δ α j 4 ) α j 1 α j 2 ; 3 φ ( α j 4 0 + ε δ α j 4 ) α j 1 α j 2 α j 3 ; 3 ψ ( α j 4 0 + ε δ α j 4 ) α j 1 α j 2 α j 3 ] } ε = 0 = R ( 4 ) [ j 4 ; j 3 j 2 ; j 1 ; φ ; ψ ; α ; φ α j 2 ; ψ α j 2 ; 2 φ α j 1 α j 2 ; 2 ψ α j 1 α j 2 ; 3 φ α j 1 α j 2 α j 3 ; 3 ψ α j 1 α j 2 α j 3 ; 4 φ α j 1 α j 2 α j 3 α j 4 ; 4 ψ α j 1 α j 2 α j 3 α j 4 ] .
As indicated by the functional dependence of the last term in Equation (163), the evaluation of the 4th-order sensitivities requires prior knowledge of the 1st-, 2nd-, and 3rd-order partial derivatives of the state functions with respect to the model parameters. The functions $\partial\varphi(x)/\partial\alpha_{j_1}$ and $\partial\psi(x)/\partial\alpha_{j_1}$ are obtained by solving the 1st-OFSS. The functions $\partial^2\varphi/\partial\alpha_{j_1}\partial\alpha_{j_2}$ and $\partial^2\psi/\partial\alpha_{j_1}\partial\alpha_{j_2}$ are the solutions of the 2nd-OFSS. The functions $\partial^3\varphi/\partial\alpha_{j_1}\partial\alpha_{j_2}\partial\alpha_{j_3}$ and $\partial^3\psi/\partial\alpha_{j_1}\partial\alpha_{j_2}\partial\alpha_{j_3}$ are the solutions of the 3rd-OFSS. The functions $\partial^4\varphi/\partial\alpha_{j_1}\partial\alpha_{j_2}\partial\alpha_{j_3}\partial\alpha_{j_4}$ and $\partial^4\psi/\partial\alpha_{j_1}\partial\alpha_{j_2}\partial\alpha_{j_3}\partial\alpha_{j_4}$ are the solutions of the following Fourth-Order Forward Sensitivity System (4th-OFSS), which is obtained by taking the partial G-derivative of the 3rd-OFSS with respect to a generic model parameter $\alpha_{j_4}$:
$$L(x;\alpha^0)\, \frac{\partial^4 \varphi}{\partial\alpha_{j_4}\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}} = \left\{ f^{(4)}(j_4;j_3;j_2;j_1;\varphi;\alpha) \right\}_{\alpha^0}; \quad j_1,\dots,j_4 = 1,\dots,N_\alpha; \quad x \in \Omega_x; \qquad (164)$$

$$\left\{ B_\varphi(x;\alpha) \right\}_{\alpha^0} \frac{\partial^4 \varphi}{\partial\alpha_{j_4}\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}} = \left\{ c_\varphi^{(4)}(j_4;j_3;j_2;j_1;\varphi;\alpha) \right\}_{\alpha^0}; \quad j_1,\dots,j_4 = 1,\dots,N_\alpha; \quad x \in \partial\Omega_x(\alpha^0); \qquad (165)$$

$$L^{*}(x;\alpha^0)\, \frac{\partial^4 \psi}{\partial\alpha_{j_4}\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}} = \left\{ g^{(4)}(j_4;j_3;j_2;j_1;\psi;\alpha) \right\}_{\alpha^0}; \quad j_1,\dots,j_4 = 1,\dots,N_\alpha; \quad x \in \Omega_x; \qquad (166)$$

$$\left\{ B_\psi(x;\alpha) \right\}_{\alpha^0} \frac{\partial^4 \psi}{\partial\alpha_{j_4}\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}} = \left\{ c_\psi^{(4)}(j_4;j_3;j_2;j_1;\psi;\alpha) \right\}_{\alpha^0}; \quad j_1,\dots,j_4 = 1,\dots,N_\alpha; \quad x \in \partial\Omega_x(\alpha^0); \qquad (167)$$
where
$$f^{(4)}(j_4;j_3;j_2;j_1;\varphi;\alpha) \equiv \frac{\partial f^{(3)}(j_3;j_2;j_1;\varphi;\alpha)}{\partial\alpha_{j_4}} - \frac{\partial L(x;\alpha)}{\partial\alpha_{j_4}}\, \frac{\partial^3 \varphi}{\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}}; \qquad (168)$$

$$g^{(4)}(j_4;j_3;j_2;j_1;\psi;\alpha) \equiv \frac{\partial g^{(3)}(j_3;j_2;j_1;\psi;\alpha)}{\partial\alpha_{j_4}} - \frac{\partial L^{*}(x;\alpha)}{\partial\alpha_{j_4}}\, \frac{\partial^3 \psi}{\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}}; \qquad (169)$$

$$c_\varphi^{(4)}(j_4;j_3;j_2;j_1;\varphi;\alpha) \equiv \frac{\partial c_\varphi^{(3)}(j_3;j_2;j_1;\varphi;\alpha)}{\partial\alpha_{j_4}} - \frac{\partial B_\varphi(x;\alpha)}{\partial\alpha_{j_4}}\, \frac{\partial^3 \varphi}{\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}}; \quad x \in \partial\Omega_x(\alpha^0); \qquad (170)$$

$$c_\psi^{(4)}(j_4;j_3;j_2;j_1;\psi;\alpha) \equiv \frac{\partial c_\psi^{(3)}(j_3;j_2;j_1;\psi;\alpha)}{\partial\alpha_{j_4}} - \frac{\partial B_\psi(x;\alpha)}{\partial\alpha_{j_4}}\, \frac{\partial^3 \psi}{\partial\alpha_{j_3}\partial\alpha_{j_2}\partial\alpha_{j_1}}; \quad x \in \partial\Omega_x(\alpha^0). \qquad (171)$$
Evidently, the computations of the fourth-order sensitivities 4 R ( φ ; ψ ; α ) / α j 4 α j 3 α j 2 α j 1 require 2 N α ( N α + 1 ) ( N α + 2 ) ( N α + 3 ) / 4 ! large-scale computations to solve Equations (164)–(167), followed by just as many small-scale computations for performing the integrations that define the various sensitivities. These computations would be in addition to the 2 N α large-scale computations needed to determine the functions φ ( x ) / α j 1 and ψ ( x ) / α j 1 by solving the 1st-OFSS, the N α ( N α + 1 ) large-scale computations needed to determine the functions 2 φ / α j 1 α j 2 and 2 ψ / α j 1 α j 2 by solving the 2nd-OFSS, and the N α ( N α + 1 ) ( N α + 2 ) / 3 large-scale computations needed to determine the functions 3 φ / α j 1 α j 2 α j 3 and 3 ψ / α j 1 α j 2 α j 3 by solving the 3rd-OFSS.
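The operation count quoted above can be tallied explicitly. The following sketch (illustrative only, not part of the original work) reproduces, for a given number of parameters, the numbers of large-scale forward computations required at each level of the FSAM, using exactly the expressions stated in the text.

```python
from math import comb

def fsam_large_scale_solves(n_alpha: int) -> dict:
    """Large-scale forward solves required by the FSAM at each order, per the text:
    two systems (for the forward and adjoint state functions) per distinct
    derivative combination."""
    return {
        "1st-OFSS": 2 * n_alpha,
        "2nd-OFSS": n_alpha * (n_alpha + 1),
        "3rd-OFSS": n_alpha * (n_alpha + 1) * (n_alpha + 2) // 3,
        "4th-OFSS": 2 * comb(n_alpha + 3, 4),   # = 2 N(N+1)(N+2)(N+3)/4!
    }

counts = fsam_large_scale_solves(180)   # example: 180 parameters
print(counts, "cumulative total:", sum(counts.values()))
```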

3.4.3. The Fourth-Order Adjoint Sensitivity Analysis Methodology (4th-CASAM)

The fourth-order sensitivities of the response R [ φ ( x ) , ψ ( x ) ; α ] with respect to the model parameters are obtained by determining the first-order G-differential of the 3rd-order sensitivities R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] 3 R ( φ ; ψ ; α ) / α j 3 α j 2 α j 1 , which were computed in Section 3.3 and defined in Equation (152). By definition, the total G-differential of R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] is obtained as follows:
{ δ R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ u ( 3 ) ; δ a ( 3 ) ; δ α ] } α 0 { d d ε k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) , 0 ( j 1 ; x ) + ε δ u ( 3 ) ( j 1 ; x ) ; a ( 3 ) , 0 ( j 2 ; j 1 ; x ) + ε δ a ( 3 ) ( j 2 ; j 1 ) ; α 0 + ε δ α ] } ε = 0 = { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ; a ( 3 ) ; δ α ) } d i r + { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ; a ( 3 ) ; δ u ( 3 ) ; δ a ( 3 ) ) } i n d , j 1 , j 2 ; j 3 = 1 , , N α ,
where the direct-effect term { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ; a ( 3 ) ; δ α ) } d i r depends directly on the parameter variations and is defined as follows:
{ δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ; a ( 3 ) ; δ α ) } d i r { α k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] } α 0 δ α ,
while the indirect-effect term { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ; a ( 3 ) ; δ u ( 3 ) ; δ a ( 3 ) ) } i n d depends indirectly on the parameter variations through the variations in the forward and adjoint state functions and is defined as follows:
{ δ R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; δ u ( 3 ) ( j 1 ; x ) ; δ a ( 3 ) ( j 2 ; j 1 ; x ) ] } i n d { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] u ( 3 ) ( j 1 ; x ) } α 0 δ u ( 3 ) ( j 1 ; x ) + { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] a ( 3 ) ( j 2 ; j 1 ; x ) } α 0 δ a ( 3 ) ( j 2 ; j 1 ; x ) ,
where
[   ] u ( 3 ) ( j 1 ; x ) δ u ( 3 ) ( j 1 ; x ) [   ] φ δ φ ( x ) + [   ] ψ δ ψ ( x ) + m = 1 2 S ( 3 ) [   ] a m ( 1 ) δ a m ( 1 ) ( x ) + m = 1 4 [   ] a m ( 2 ) ( j 1 ) δ a m ( 2 ) ( j 1 ; x ) ,
[   ] a ( 3 ) ( j 2 ; j 1 ; x ) δ a ( 3 ) ( j 2 ; j 1 ; x ) m = 1 8 [   ] a m ( 3 ) ( j 2 ; j 1 ; x ) δ a m ( 3 ) ( j 2 ; j 1 ; x ) .
The indirect-effect term { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; δ u ( 3 ) ; δ a ( 3 ) ) } i n d defined in Equation (174) can be computed only after having computed the variations δ u ( 3 ) ( j 1 ; x ) and δ a ( 3 ) ( j 2 ; j 1 ; x ) . Recall that the 3rd-level variational sensitivity function δ u ( 3 ) ( j 1 ; x ) is the solution of the 3rd-LVSS. On the other hand, the vector δ a ( 3 ) ( j 2 ; j 1 ; x ) is the solution of the system of equations obtained by G-differentiating the 3rd-LASS. Applying the definition of the total G-differential, cf. Equation (23), to the 3rd-LASS, cf. Equations (138) and (148), yields the following set of equations for the vector of variations δ a ( 3 ) ( j 2 ; j 1 ; x ) , all to be evaluated at the nominal values for the parameters and state functions, for j 1 , j 2 = 1 , , N α :
$$\left\{ \frac{\partial \big[ A^{(3)}[j_1;u^{(2)}(j_1;x)]\, a^{(3)}(j_2;j_1;x) - s^{(3)}[j_2;j_1;u^{(3)};\alpha] \big]}{\partial u^{(3)}} \right\} \delta u^{(3)} + \left\{ A^{(3)}(j_1;u^{(2)}) \right\}_{\alpha^0} \delta a^{(3)} = p^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, a^{(3)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big], \qquad (177)$$

$$\left\{ \delta b_A^{(3)}\big[ a^{(3)}(j_2;j_1;x);\, u^{(3)}(j_1;x);\, \alpha;\, \delta a^{(3)};\, \delta u^{(3)};\, \delta\alpha \big] \right\}_{\alpha^0} = 0, \quad x \in \partial\Omega_x(\alpha^0), \quad j_1, j_2 = 1,\dots,N_\alpha, \qquad (178)$$

where

$$p^{(3)}\big[ j_2;j_1;\, u^{(3)}(j_1;x);\, a^{(3)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] \equiv \left\{ \frac{\partial \big[ s^{(3)}[j_2;j_1;u^{(3)}(j_1;x);\alpha] - A^{(3)}(u^{(2)})\, a^{(3)}(j_2;j_1;x) \big]}{\partial\alpha} \right\} \delta\alpha. \qquad (179)$$
As Equations (177) and (178) indicate, the vectors of variations δ u ( 3 ) ( j 1 ; x ) and δ a ( 3 ) ( j 2 ; j 1 ; x ) are related to each other, and must be determined by solving simultaneously the coupled system of equations obtained by concatenating the 3rd-LVSS to Equation (177) while being subject to the boundary conditions obtained by concatenating the boundary conditions which belong to the 3rd-LVSS with the boundary conditions represented by Equation (178). Concatenating Equations (119) and (177) yields the following 16 × 16 block-matrix equation:
$$V^{(4)}(u^{(3)})\, \delta u^{(4)}(j_2;j_1;x) = q^{(4)}\big[ j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big], \qquad (180)$$
where:
$$u^{(4)}(j_2;j_1;x) \equiv \begin{pmatrix} u^{(3)}(j_1;x) \\ a^{(3)}(j_2;j_1;x) \end{pmatrix}; \qquad \delta u^{(4)}(j_2;j_1;x) \equiv \begin{pmatrix} \delta u^{(3)}(j_1;x) \\ \delta a^{(3)}(j_2;j_1;x) \end{pmatrix}; \qquad (181)$$
$$V^{(4)}(u^{(4)}) = \begin{pmatrix} V^{(3)}(j_1;u^{(2)}) & [0]_{(8\times 8)} \\ V_{21}^{(4)}(j_2;j_1) & V_{22}^{(4)}(j_1) \end{pmatrix}; \quad V_{22}^{(4)}(j_1) \equiv A^{(3)}(j_1;u^{(2)}); \quad V_{21}^{(4)} \equiv \frac{\partial \big\{ A^{(3)}[j_1;u^{(2)}(j_1;x)]\, a^{(3)}(j_2;j_1;x) - s^{(3)}[j_2;j_1;u^{(3)}(j_1;x);\alpha] \big\}}{\partial u^{(3)}}; \qquad (182)$$

$$q^{(4)}\big[ j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] \equiv \begin{pmatrix} q^{(3)}\big[ j_1;\, u^{(3)}(j_1;x);\, \alpha;\, \delta\alpha \big] \\ p^{(3)}\big[ j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] \end{pmatrix} \equiv \begin{pmatrix} q_1^{(4)}\big[ j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] \\ \vdots \\ q_{16}^{(4)}\big[ j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] \end{pmatrix}. \qquad (183)$$
Concatenating the boundary conditions which belong to the 3rd-LVSS, cf. Equation (120), with the boundary conditions represented by Equation (178) yields the following boundary conditions for the function δ u ( 4 ) ( j 2 ; j 1 ; x ) :
$$b_v^{(4)}\big[ u;\, a^{(1)};\, a^{(2)}(j_1);\, \alpha;\, \delta\varphi(x);\, \delta\psi(x);\, \delta a^{(2)}(j_1;x);\, \delta\alpha \big] \equiv \begin{pmatrix} b_v^{(3)}\big[ j_1;\, u^{(3)}(j_1;x);\, \alpha;\, \delta u^{(3)}(j_1;x);\, \delta\alpha \big] \\ \delta b_A^{(3)}\big[ u^{(2)}(j_1;x);\, \alpha;\, \delta a^{(3)}(j_2;j_1;x);\, \delta\alpha \big] \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad x \in \partial\Omega_x(\alpha^0). \qquad (184)$$
The matrix-operator equations represented by Equation (180) together with the boundary conditions represented by Equation (184) will be called the 4th-Level Variational Sensitivity System (4th-LVSS). The solution, δ u ( 4 ) ( j 2 ; j 1 ; x ) , of the 4th-LVSS will be called the “4th-level variational sensitivity function”.
Since the 4th-LVSS equations depend on the parameter variations δ α i , solving them is prohibitively expensive computationally for large-scale systems involving many parameters. The need for solving the 4th-LVSS can be avoided by expressing the indirect-effect term { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; δ u ( 3 ) ; δ a ( 3 ) ) } i n d defined in Equation (174) in an alternative way, which does not involve the function δ u ( 4 ) ( j 2 ; j 1 ; x ) . This alternative expression will be derived by using the solution of a 4th-Level Adjoint Sensitivity System (4th-LASS), which will be constructed by implementing the same sequence of logical steps as were followed for constructing the 1st-, 2nd-, and 3rd-LASS. The first step is to define a Hilbert space, denoted as H 4 , comprising block-vector elements of the form η ( 4 ) ( x ) ≡ [ η 1 ( 4 ) ( x ) , … , η 16 ( 4 ) ( x ) ] ∈ H 4 , with η i ( 4 ) ( x ) ≡ [ η i , 1 ( 4 ) ( x ) , … , η i , N φ ( 4 ) ( x ) ] , i = 1 , … , 16 . The inner product between two elements, η ( 4 ) ( x ) ∈ H 4 and ξ ( 4 ) ( x ) ∈ H 4 , of this Hilbert space will be denoted as ⟨ η ( 4 ) ( x ) , ξ ( 4 ) ( x ) ⟩ 4 and is defined as follows:
$$\left\langle \eta^{(4)}(x),\, \xi^{(4)}(x) \right\rangle_4 \equiv \sum_{i=1}^{16} \left\langle \eta_i^{(4)}(x),\, \xi_i^{(4)}(x) \right\rangle_0. \qquad (185)$$
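A discrete analogue of this inner product may help fix ideas. The sketch below (not part of the original formulation) assumes that each of the 16 block-components has been discretized on a common phase-space mesh and that the lower-level inner product ⟨·,·⟩₀ is approximated by a quadrature sum with given weights; the array shapes and names are illustrative assumptions.

```python
import numpy as np

def inner_product_H4(eta, xi, weights):
    """Discrete approximation of <eta, xi>_4 of Eq. (185): sum over the 16 block
    components of the lower-level inner products <.,.>_0, each approximated here
    by a quadrature sum over the discretized phase-space.

    eta, xi : arrays of shape (16, n_dof) holding the 16 block-components on a mesh.
    weights : array of shape (n_dof,) of quadrature weights (cell volumes included).
    """
    eta = np.asarray(eta, dtype=float)
    xi = np.asarray(xi, dtype=float)
    # <eta_i, xi_i>_0 ~ sum_k w_k * eta_i(x_k) * xi_i(x_k), then summed over i = 1..16.
    return float(np.sum(weights * eta * xi))
```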
Using the definition provided in Equation (185), form the inner product in H 4 of Equation (180) with a set of yet undefined vector-valued functions a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ≡ [ a 1 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , … , a 16 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ] ∈ H 4 , j 1 , j 2 , j 3 = 1 , … , N α , to obtain the following relation:
$$\left\{ \left\langle a^{(4)}(j_3;j_2;j_1;x),\, V^{(4)}(u^{(3)})\, \delta u^{(4)}(j_2;j_1;x) \right\rangle_4 \right\}_{\alpha^0} = \left\{ \left\langle a^{(4)}(j_3;j_2;j_1;x),\, q^{(4)}\big[ j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] \right\rangle_4 \right\}_{\alpha^0}. \qquad (186)$$
Using the definition of the adjoint operator in the Hilbert space H 4 , recast the left-side of Equation (186) as follows:
{ a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , V ( 4 ) ( u ( 3 ) ) δ u ( 4 ) ( j 2 ; j 1 ; x ) 4 } α 0 = { δ u ( 4 ) ( j 2 ; j 1 ; x ) , A ( 4 ) ( u ( 3 ) ) a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) 4 } α 0 + { P ( 4 ) [ δ u ( 4 ) ( j 2 ; j 1 ; x ) ; a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 ,
where { P ( 4 ) [ δ u ( 4 ) ( j 2 ; j 1 ; x ) ; a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 denotes the bilinear concomitant defined on the phase-space boundary x ∈ ∂ Ω x ( α 0 ) , and where A ( 4 ) ( j 2 ; j 1 ; u ( 3 ) ) = { V ( 4 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ] } * is the operator formally adjoint to V ( 4 ) [ j 2 ; j 1 ; u ( 2 ) ( x ) ] and therefore has the following form:
$$A^{(4)}(j_2;j_1;u^{(3)}) = \begin{pmatrix} V^{(3)}(j_1;u^{(2)}) & [0]_{(8\times 8)} \\ V_{21}^{(4)}(j_2;j_1) & V_{22}^{(4)}(j_1) \end{pmatrix}^{*} = \begin{pmatrix} A^{(3)} & \big[ V_{21}^{(4)} \big]^{*} \\ [0]_{(8\times 8)} & \big[ V_{22}^{(4)} \big]^{*} \end{pmatrix}. \qquad (188)$$
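The block structure of this adjoint operator mirrors the elementary fact that the adjoint of a block lower-triangular operator is block upper-triangular, with the adjoints of the diagonal blocks on the diagonal and the adjoint of the coupling block moved above the diagonal. The following finite-dimensional illustration (with random 8 × 8 matrices standing in for the operator blocks, purely for demonstration; it is not part of the original derivation) verifies this structure numerically.

```python
import numpy as np

# For real matrices the adjoint reduces to the transpose; the 8x8 block size matches
# the 16x16 structure of Eq. (180).
rng = np.random.default_rng(0)
V3 = rng.normal(size=(8, 8))    # stands in for V^(3)
V21 = rng.normal(size=(8, 8))   # stands in for V_21^(4)
V22 = rng.normal(size=(8, 8))   # stands in for V_22^(4)

V4 = np.block([[V3, np.zeros((8, 8))], [V21, V22]])          # block lower-triangular
A4 = np.block([[V3.T, V21.T], [np.zeros((8, 8)), V22.T]])    # block upper-triangular
assert np.allclose(V4.T, A4)    # (V^(4))^T has exactly the block structure of A^(4)
```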
The first term on the right-side of Equation (187) is now required to represent the indirect-effect term { δ R ( 3 ) ( j 3 ; j 2 ; j 1 ; u ( 3 ) ; a ( 3 ) ; δ u ( 3 ) ; δ a ( 3 ) ) } i n d defined in Equation (174). This requirement is satisfied by imposing the following relation on each element a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ≡ [ a 1 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , … , a 16 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ] , j 3 , j 2 , j 1 = 1 , … , N α :
$$A^{(4)}(u^{(3)})\, a^{(4)}(j_3;j_2;j_1;x) = s^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big], \qquad j_1, j_2, j_3 = 1,\dots,N_\alpha, \qquad (189)$$
where the block-vector
$$s^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] \equiv \big[ s_1^{(4)}(j_3;j_2;j_1;x),\, \dots,\, s_{16}^{(4)}(j_3;j_2;j_1;x) \big], \qquad (190)$$
comprises, for each j 3 , j 2 , j 1 = 1 , , N α , sixteen components defined as follows:
$$\begin{aligned}
s_1^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] &\equiv \frac{\partial S^{(3)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big]}{\partial\varphi}, \\
s_2^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] &= \frac{\partial S^{(3)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big]}{\partial\psi}, \\
s_{2+m}^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] &= \frac{\partial S^{(3)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big]}{\partial a_m^{(1)}}, \quad m = 1, 2; \\
s_{4+m}^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] &= \frac{\partial S^{(3)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big]}{\partial a_m^{(2)}(j_1)}, \quad m = 1,\dots,4; \\
s_{8+m}^{(4)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] &= \frac{\partial S^{(3)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big]}{\partial a_m^{(3)}(j_2;j_1)}, \quad m = 1,\dots,8. \qquad (191)
\end{aligned}$$
The definition of the set of vectors a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) [ a 1 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , , a 16 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ] will now be completed by selecting boundary conditions for this set of vectors, which will be represented in operator form as follows:
$$\left\{ b_A^{(4)}\big[ a^{(4)}(j_3;j_2;j_1;x);\, u^{(4)}(j_2;j_1;x);\, \alpha \big] \right\}_{\alpha^0} = 0, \quad x \in \partial\Omega_x(\alpha^0), \quad j_1, j_2, j_3 = 1,\dots,N_\alpha. \qquad (192)$$
The boundary conditions represented by Equation (192) are selected so as to satisfy the following requirements:
(i)
The boundary conditions represented by Equation (192), together with the operator Equation (189), constitute a well-posed problem for the functions a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) .
(ii)
Implementation in Equation (187) of the boundary conditions (for the 4th-LVSS) provided in Equation (184) together with those provided in Equation (192) eliminates all of the unknown values of the functions δ u ( 4 ) ( j 2 ; j 1 ; x ) and a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) in the expression of the bilinear concomitant { P ( 4 ) [ δ u ( 4 ) ( j 2 ; j 1 ; x ) ; a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 . This bilinear concomitant may vanish after implementing the boundary conditions represented by Equation (192), but if it does not, it will be reduced to a residual quantity which will be denoted as { P ^ ( 4 ) [ a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ; u ( 4 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 and which will comprise only known values of a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , u ( 4 ) ( j 2 ; j 1 ; x ) , α and δ α .
The system of equations represented by Equation (189) together with the boundary conditions represented by Equation (192) constitute the 4th-Level Adjoint Sensitivity System (4th-LASS). The solution a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) [ a 1 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , , a 16 ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ] , j 1 , j 2 , j 3 = 1 , , N α , of the 4th-LASS will be called the 4th-level adjoint function.
The 4th-LASS together with the results provided in Equations (186) and (187) are employed in Equation (174) to obtain the following expression for the indirect-effect term { δ R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; δ u ( 3 ) ( j 1 ; x ) ; δ a ( 3 ) ( j 2 ; j 1 ; x ) ] } i n d in terms of the 4th-level adjoint functions a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , for j 1 , j 2 , j 3 = 1 , , N α :
{ δ R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; δ u ( 3 ) ( j 1 ; x ) ; δ a ( 3 ) ( j 2 ; j 1 ; x ) ] } i n d = { a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , q ( 4 ) [ j 2 ; j 1 ; u ( 4 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] 4 } α 0 { P ^ ( 4 ) [ a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) ; u ( 4 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } α 0 { δ R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 4 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] } i n d .
As the identity in the last line of Equation (193) indicates, the dependence of the indirect-effect term on the variations δ u ( 3 ) ( j 1 ; x ) and δ a ( 3 ) ( j 2 ; j 1 ; x ) has been eliminated by having replaced these functional-dependences by the dependence on the 4th-level adjoint function a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) .
Representing the components of q ( 4 ) [ j 2 ; j 1 ; u ( 4 ) ( j 2 ; j 1 ; x ) ; α ; δ α ] in the following form:
q m ( 4 ) ( j 2 ; j 1 ; u ( 4 ) ; α ; δ α ) j 4 = 1 N α { q m ( 4 ) [ j 4 ; j 3 ; j 2 ; j 1 ; u ( 4 ) ( j 2 ; j 1 ; x ) ; α ] α j 4 } ( α 0 ) δ α j 4 , m = 1 , , 16 .
and adding the expression obtained in Equation (193) for the indirect-effect term to the expression of the direct-effect term given in Equation (173) yields the following expression for the 4th-order total G-differential { δ R ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ; δ u ( 3 ) ; δ a ( 3 ) ; δ α ] } α 0 of the response with respect to the model parameters, for j 1 , j 2 , j 3 = 1 , … , N α :
$$\delta R^{(3)}\big[ j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha;\, \delta\alpha \big] = \sum_{j_4=1}^{N_\alpha} \left\{ R^{(4)}\big[ j_4;j_3;j_2;j_1;\, u^{(4)}(j_2;j_1;x);\, \alpha \big] \right\}_{\alpha^0} \delta\alpha_{j_4}, \qquad (195)$$
where R ( 4 ) [ j 4 ; j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] denotes the 4th-order partial sensitivity of the response R [ u ( 1 ) ( x ) ; α ] with respect to the model parameters, evaluated at the nominal parameter values α 0 , and can be represented as follows:
R ( 4 ) [ j 4 ; j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] = 4 R ( φ ; ψ ; α ) α j 4 α j 3 α j 2 α j 1 = { α j 4 P ^ ( 4 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] } ( α 0 ) + α j 4 { k = 1 N x λ k ( α ) ω k ( α ) d x k S ( 3 ) [ j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] } ( α 0 ) + α j 4 { m = 1 16 a m ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) , Q m ( 4 ) [ j 4 ; j 3 ; j 2 ; j 1 ; u ( 3 ) ( j 1 ; x ) ; a ( 3 ) ( j 2 ; j 1 ; x ) ; α ] 0 } ( α 0 ) .
After obtaining the 4th-level adjoint function a ( 4 ) ( j 3 ; j 2 ; j 1 ; x ) by performing at most O ( N α 3 ) large-scale computations, the 4th-order response sensitivities to model parameters are obtained using the expression provided in Equation (196).
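To place this operation count next to the forward-method counts of Sections 3.4.1 and 3.4.2, the following sketch (illustrative only) tabulates, order by order, the upper bound on the number of large-scale adjoint solves quoted in the text, assumed here to follow the pattern N_α^(k−1) for the k-th order (stated explicitly above for the 3rd and 4th orders; its extension to the 1st and 2nd orders is an assumption made for illustration), against the number of large-scale forward solves required by the FSAM.

```python
from math import comb

def solve_counts(n_alpha: int) -> dict:
    """Order-by-order comparison (illustrative): assumed upper bound of
    n_alpha**(order-1) large-scale adjoint solves for the adjoint (CASAM) route,
    versus 2*C(n_alpha+order-1, order) large-scale forward solves for the FSAM."""
    return {
        order: {
            "adjoint solves (at most)": n_alpha ** (order - 1),
            "FSAM forward solves": 2 * comb(n_alpha + order - 1, order),
        }
        for order in (1, 2, 3, 4)
    }

for order, counts in solve_counts(180).items():   # example: 180 parameters
    print(order, counts)
```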

4. Discussion and Conclusions

This work has presented the 4th-Order Comprehensive Adjoint Sensitivity Analysis Methodology (4th-CASAM), which enables the efficient computation of the exact expressions of the 4th-order functional derivatives (“sensitivities”) of a general system response, which depends on both the forward and adjoint state functions, with respect to all of the parameters underlying the respective forward and adjoint systems. Since nonlinear operators do not admit adjoint operators (only linearized versions of nonlinear operators can admit a bona-fide adjoint operator), responses that simultaneously depend on forward and adjoint functions can arise only in conjunction with linear systems, thus providing the fundamental motivation for treating linear systems in their own right rather than just as particular cases of nonlinear systems. Very importantly, the computation of the 2nd-, 3rd-, and 4th-level adjoint functions uses the same forward and adjoint solvers (i.e., computer codes) as used for solving the original forward and adjoint systems, thus requiring relatively minor additional software development for computing the various-order sensitivities.
The 4th-CASAM presented in this work is the only practically implementable methodology for obtaining and subsequently computing the exact expressions (i.e., free of methodologically-introduced approximations) of the 1st-, 2nd-, 3rd-, and 4th-order sensitivities (i.e., functional derivatives) of responses to system parameters, for coupled forward/adjoint linear systems. By enabling the practical computation of any and all of the 1st-, 2nd-, 3rd-, and 4th-order response sensitivities to model parameters, the 4th-CASAM makes it possible to compare the relative values of the sensitivities of various order, in order to assess which sensitivities are important and which may actually be neglected, thus enabling future investigations of the convergence of the (multivariate) Taylor series expansion of the response in terms of parameter variations, as well as investigating the actual validity of expressions that are derived from Taylor-expansion of the response (e.g., response variances/covariance, skewness, kurtosis, etc.) as a function of the model’s parameters.
The 4th-CASAM presented in this work provides a fundamentally important step in the quest to overcome the “curse of dimensionality” in sensitivity analysis, uncertainty quantification, and predictive modeling. Ongoing work aims at generalizing the 4th-CASAM, attempting to provide a general methodology for the practical, efficient, and exact computation of arbitrarily-high order sensitivities of responses to model parameters, both for linear and nonlinear systems.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Lewins, J. IMPORTANCE: The Adjoint Function; Pergamon Press Ltd.: Oxford, UK, 1965.
2. Stacey, W.M. Variational Methods in Nuclear Reactor Physics; Academic Press, Inc.: New York, NY, USA, 1974.
3. Bellman, R.E. Dynamic Programming; Rand Corporation: Santa Monica, CA, USA; Princeton University Press: Princeton, NJ, USA, 1957; ISBN 978-0-691-07951-6.
4. Cacuci, D.G. Sensitivity theory for nonlinear systems: I. Nonlinear functional analysis approach. J. Math. Phys. 1981, 22, 2794–2802.
5. Cacuci, D.G. Sensitivity theory for nonlinear systems: II. Extensions to additional classes of responses. J. Math. Phys. 1981, 22, 2803–2812.
6. Cacuci, D.G. Sensitivity and Uncertainty Analysis: Theory, Volume 1; Chapman & Hall/CRC: Boca Raton, FL, USA, 2003.
7. Cacuci, D.G. Second-order adjoint sensitivity analysis methodology (2nd-ASAM) for computing exactly and efficiently first- and second-order sensitivities in large-scale linear systems: I. Computational methodology. J. Comput. Phys. 2015, 284, 687–699.
8. Cacuci, D.G. Second-order adjoint sensitivity analysis methodology (2nd-ASAM) for large-scale nonlinear systems: I. Theory. Nucl. Sci. Eng. 2016, 184, 16–30.
9. Cacuci, D.G. The Second-Order Adjoint Sensitivity Analysis Methodology; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2018.
10. Cacuci, D.G.; Fang, R.; Favorite, J.A. Comprehensive second-order adjoint sensitivity analysis methodology (2nd-ASAM) applied to a subcritical experimental reactor physics benchmark: I. Effects of imprecisely known microscopic total and capture cross sections. Energies 2019, 12, 4219.
11. Fang, R.; Cacuci, D.G. Comprehensive second-order adjoint sensitivity analysis methodology (2nd-ASAM) applied to a subcritical experimental reactor physics benchmark: II. Effects of imprecisely known microscopic scattering cross sections. Energies 2019, 12, 4114.
12. Cacuci, D.G.; Fang, R.; Favorite, J.A.; Badea, M.C.; Di Rocco, F. Comprehensive Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) Applied to a Subcritical Experimental Reactor Physics Benchmark: III. Effects of Imprecisely Known Microscopic Fission Cross Sections and Average Number of Neutrons per Fission. Energies 2019, 12, 4100.
13. Fang, R.; Cacuci, D.G. Comprehensive second-order adjoint sensitivity analysis methodology (2nd-ASAM) applied to a subcritical experimental reactor physics benchmark: IV. Effects of imprecisely known source parameters. Energies 2020, 13, 1431.
14. Fang, R.; Cacuci, D.G. Comprehensive second-order adjoint sensitivity analysis methodology (2nd-ASAM) applied to a subcritical experimental reactor physics benchmark: V. Computation of mixed 2nd-order sensitivities involving isotopic number densities. Energies 2020, 13, 2580.
15. Cacuci, D.G.; Fang, R.; Favorite, J.A. Comprehensive second-order adjoint sensitivity analysis methodology (2nd-ASAM) applied to a subcritical experimental reactor physics benchmark: VI. Overall impact of 1st- and 2nd-order sensitivities on response uncertainties. Energies 2020, 13, 1674.
16. Valentine, T.E. Polyethylene-Reflected Plutonium Metal Sphere Subcritical Noise Measurements, SUB-PU-METMIXED-001. In International Handbook of Evaluated Criticality Safety Benchmark Experiments, NEA/NSC/DOC(95)03/I-IX; Organization for Economic Co-Operation and Development, Nuclear Energy Agency: Paris, France, 2006.
17. Alcouffe, R.E.; Baker, R.S.; Dahl, J.A.; Turner, S.A.; Ward, R. PARTISN: A Time-Dependent, Parallel Neutral Particle Transport Code System; LA-UR-08-07258 (Revised November 2008); Los Alamos National Laboratory: Los Alamos, NM, USA, 2008.
18. Cacuci, D.G. Third order adjoint sensitivity analysis of reaction rate responses in a multiplying nuclear system with source. Ann. Nucl. Energy 2021, 151, 107924.
19. Cacuci, D.G.; Fang, R. Third-order adjoint sensitivity analysis of an OECD/NEA reactor physics benchmark: I. Mathematical framework. Am. J. Comput. Math. (AJCM) 2020, 10, 503–528.
20. Fang, R.; Cacuci, D.G. Third-Order Adjoint Sensitivity Analysis of an OECD/NEA Reactor Physics Benchmark: II. Computed Sensitivities. Am. J. Comput. Math. (AJCM) 2020, 10, 529–558.
21. Fang, R.; Cacuci, D.G. Third-order adjoint sensitivity analysis of an OECD/NEA reactor physics benchmark: III. Response moments. Am. J. Comput. Math. (AJCM) 2020, 10, 559–570.
22. Cacuci, D.G. BERRU Predictive Modeling: Best Estimate Results with Reduced Uncertainties; Springer: Heidelberg, Germany; New York, NY, USA, 2018.
23. Cacuci, D.G. Inverse predictive modeling of radiation transport through optically thick media in the presence of counting uncertainties. Nucl. Sci. Eng. 2017, 186, 199–223.
24. Cacuci, D.G.; Fang, R.; Badea, M.C. MULTI-PRED: A software module for predictive modeling of coupled multi-physics systems. Nucl. Sci. Eng. 2018, 191, 187–202.
25. Cacuci, D.G. Second-order sensitivities of a general functional of the forward and adjoint fluxes in a multiplying nuclear system with source. Nucl. Eng. Des. 2019, 344, 83–106.
26. Cacuci, D.G. Application of the second-order comprehensive adjoint sensitivity analysis methodology to compute 1st- and 2nd-order sensitivities of flux functionals in a multiplying system with source. Nucl. Sci. Eng. 2019, 193, 555–600.
27. Cacuci, D.G. The Roussopoulos and Schwinger functionals for nuclear systems involving imprecisely known fluxes and/or parameters: Distinctions and equivalences. Nucl. Sci. Eng. 2019, 193, 681–721.
28. Cacuci, D.G. Nuclear Thermal-Hydraulics Applications Illustrating the Key Roles of Adjoint-Computed Sensitivities for Overcoming the Curse of Dimensionality in Sensitivity Analysis, Uncertainty Quantification and Predictive Modeling. Nucl. Eng. Des. 2019, 351, 20–32.
29. Cacuci, D.G. Second-order adjoint sensitivity analysis of ratios of functionals of the forward and adjoint fluxes in a multiplying nuclear system with source. Ann. Nucl. Energy 2020, 135, 106956.
30. Cacuci, D.G. The first-order comprehensive sensitivity analysis methodology (1st-CASAM) for scalar-valued responses: I. Theory. Am. J. Comput. Math. (AJCM) 2020, 10, 275–289.
31. Cacuci, D.G. The first-order comprehensive sensitivity analysis methodology (1st-CASAM) for scalar-valued responses: II. Illustrative application to a heat transport benchmark model. Am. J. Comput. Math. (AJCM) 2020, 10, 290–310.
32. Cacuci, D.G. Second-order adjoint sensitivity analysis methodology for computing exactly response sensitivities to uncertain parameters and boundaries of linear systems: Mathematical framework. Am. J. Comput. Math. (AJCM) 2020, 10, 329–354.
33. Cacuci, D.G. Illustrative application of the 2nd-order adjoint sensitivity analysis methodology to a paradigm linear evolution/transmission model: Point-detector response. Am. J. Comput. Math. (AJCM) 2020, 10, 355–381.
34. Cacuci, D.G. Illustrative Application of the 2nd-Order Adjoint Sensitivity Analysis Methodology to a Paradigm Linear Evolution/Transmission Model: Reaction-Rate Detector Response. Am. J. Comput. Math. (AJCM) 2020, 10, 382–397.