Article

Multi-Objective Optimization of Experiments Using Curvature and Fisher Information Matrix

by Erica Manesso 1,2,†, Srinath Sridharan 3 and Rudiyanto Gunawan 1,2,*

1 Institute for Chemical and Bioengineering, ETH Zurich, 8093 Zurich, Switzerland
2 Swiss Institute of Bioinformatics, 1015 Lausanne, Switzerland
3 Saw Swee Hock School of Public Health, National University of Singapore, Singapore 117549, Singapore
* Author to whom correspondence should be addressed.
† Current address: Bayer AG, 65926 Frankfurt am Main, Germany.
Processes 2017, 5(4), 63; https://doi.org/10.3390/pr5040063
Submission received: 14 September 2017 / Revised: 20 October 2017 / Accepted: 23 October 2017 / Published: 1 November 2017
(This article belongs to the Special Issue Biological Networks)

Abstract:
The bottleneck in creating dynamic models of biological networks and processes often lies in estimating unknown kinetic model parameters from experimental data. In this regard, experimental conditions have a strong influence on parameter identifiability and should therefore be optimized to give the maximum information for parameter estimation. Existing model-based design of experiment (MBDOE) methods commonly rely on the Fisher information matrix (FIM) for defining a metric of data informativeness. When the model behavior is highly nonlinear, FIM-based criteria may lead to suboptimal designs, as the FIM only accounts for the linear variation in the model outputs with respect to the parameters. In this work, we developed a multi-objective optimization (MOO) MBDOE, for which the model nonlinearity was taken into consideration through the use of curvature. The proposed MOO MBDOE involved maximizing data informativeness using a FIM-based metric and at the same time minimizing the model curvature. We demonstrated the advantages of the MOO MBDOE over existing FIM-based and other curvature-based MBDOEs in an application to the kinetic modeling of fed-batch fermentation of baker’s yeast.

1. Introduction

Dynamic models of biological networks and processes are often created to gain a better understanding of the system behavior. The creation of dynamic biological models requires the values of kinetic parameters, many of which are system-specific and typically not known a priori. These parameters are commonly estimated by calibrating model simulations to the available experimental data. Such parameter fitting is known to be challenging, as there often exist multiple parameter combinations that fit the available data equally well; that is, the model parameters are not identifiable [1,2,3,4,5]. While there exist a number of reasons for such lack of parameter identifiability, experimental conditions have a strong influence on this issue and thus should be carefully designed. In addition, biological experiments and data collection are often costly and time-consuming, further motivating the need for well-planned experiments that would give the maximum information given finite resources.
Model-based designs of experiments (MBDOEs) offer a means for integrating dynamic modeling with experimental efforts, as illustrated by the iterative procedure in Figure 1. The role of the model here is to capture the knowledge and information about the system up to a given iteration. By using MBDOEs, one could harness this knowledge to guide experiments in the next iteration. MBDOE techniques have been used extensively for chemical process modeling [6], and more recently, they have been applied to the modeling of cellular processes [7,8]. For the purpose of parameter estimation, experiments are generally designed to improve the precision of the estimated parameters. In this regard, the Fisher information matrix (FIM), whose inverse provides a lower bound on the parameter variance-covariance matrix by the Cramér-Rao inequality [9], has been commonly used to define the objective function in the optimal experimental design (see [8] and references therein). Since the turn of the century, FIM-based MBDOE methods have had newfound applications in the emerging area of systems biology [10,11,12,13,14,15,16]. Besides FIM, Bayesian approaches have also been used for MBDOEs, where given the prior distribution of the model parameters, experiments are designed to minimize the posterior parameter variance [8]. Bayesian MBDOE strategies have been applied to the modeling of biological networks for reducing parametric uncertainty [17,18,19]. While our work is concerned with MBDOEs for the purpose of parameter estimation, MBDOE strategies have also been developed and applied for discriminating between biological model structures [20,21,22,23,24] and reducing cellular process output uncertainty [19,25,26].
In this work, we focused on FIM-based MBDOEs for parameter estimation. The FIM relies on a linear approximation of the model behavior as a function of the parameters. More precisely, the FIM is computed as a function of the first-order parametric sensitivity coefficients (Jacobian matrix) of the model outputs. For systems with a high degree of nonlinearity, the optimal experimental design using the FIM may perform poorly [27]. For this reason, Bates and Watts proposed a MBDOE based on minimizing model curvature by using the second-order parametric sensitivities (Hessian matrix) [28]. Hamilton and Watts further introduced a design criterion, called Q-optimality, based on a quadratic approximation of the volume of the parameter confidence region [29]. More recently, Benabbas et al. proposed two curvature-based MBDOEs [30]. In one design, the authors used a minimization of the root mean square (RMS) of the Hessian matrix, while in another design, they employed a constrained optimization guaranteeing the RMS to be lower than a given level. While the second strategy using a curvature threshold was demonstrated to give more informative experiments, how to set the appropriate RMS threshold value in a particular application was not described.
Recently, Maheshwari et al. described a multi-objective optimization (MOO) formulation for optimizing the design of the experiment using a combination of a FIM-based metric and parameter correlation [15]. Because parameter correlations cannot account for model nonlinearity, the strategy has the same drawback as FIM-based methods when applied to nonlinear models. In this work, we proposed a MOO MBDOE method using a combination of a FIM criterion and model curvature. We demonstrated the advantages of the proposed MOO MBDOE over FIM-based and other curvature-based methods in an application to the kinetic modeling of the fed-batch fermentation of baker’s yeast [30,31].

2. Model-Based Optimal Design of Experiments

We assume that the experimental data $y \in \mathbb{R}^n$ are contaminated by additive random noise, as follows:
$$y = \mu + \epsilon \tag{1}$$
where $\mu$ and $\epsilon$ denote the mean of the measurement data and the random noise, respectively. When the total number of data points $n$ is greater than the number of parameters $p$, $\mu$ spans a $p$-dimensional space $\Omega \subset \mathbb{R}^n$, where
$$\Omega = \left\{ \mu : \mu = F(x, u, \theta),\ \theta \in \Theta \subset \mathbb{R}^p \right\} \tag{2}$$
Here, $x \in \mathbb{R}^s$ denotes the state vector, $\theta \in \mathbb{R}^p$ denotes the parameter vector, $u \in \mathbb{R}^m$ denotes the input and $F(x, u, \theta)$ denotes the vector of nonlinear model equations. The subspace $\Omega$ is also called the expectation surface or the solution locus. For a dynamic system, the state $x$ is often described by a set of ordinary differential equations (ODEs):
$$\frac{dx(\theta, t)}{dt} = g(x(\theta, t), u, \theta), \qquad x(\theta, 0) = x_0 \tag{3}$$
The estimation of model parameters θ from a given set of data y is typically formulated as a minimization of the weighted sum of squares of the difference between the model prediction F ( x , u , θ ) and the measurement data y . For example, the maximum likelihood estimator (MLE) of the model parameters for normally distributed data with known variance V is given by the minimum of the following objective function:
$$\Phi(\theta) = \left[ y - F(x, u, \theta) \right]^T V^{-1} \left[ y - F(x, u, \theta) \right] \tag{4}$$
When the model is a linear function of the parameters, $F(x, u, \theta) = X\theta$ with $X \in \mathbb{R}^{n \times p}$, then the parameter estimates are given by $\hat{\theta} = (X^T V^{-1} X)^{-1} X^T V^{-1} y$. In this case, the MLE is the minimum variance unbiased estimator of $\theta$, where the covariance matrix of the parameter estimates is given by $V_\theta = (X^T V^{-1} X)^{-1}$. When the model is nonlinear (with respect to the parameters), the parameter estimates $\hat{\theta} = \arg\min_\theta \Phi(\theta)$ do not necessarily correspond to the minimum variance estimator. According to the Cramér-Rao inequality [9], the inverse of the FIM provides a lower bound for the covariance of the parameter estimates $\hat{\theta}$, that is
$$V_\theta \ge \mathrm{FIM}^{-1} = \left( \hat{\dot{F}}^T V^{-1} \hat{\dot{F}} \right)^{-1} \tag{5}$$
where $\hat{\dot{F}} = \dot{F}(\hat{\theta}, x) = \left. \frac{\partial F(x, u, \theta)}{\partial \theta} \right|_{\theta = \hat{\theta}}$ is the first-order sensitivity matrix of $F(x, u, \theta)$ with respect to the parameters $\theta$.
On the basis of the Cramér-Rao inequality, the FIM has been commonly used as a criterion of data informativeness in MBDOEs. Many methods for MBDOEs, such as those listed in Table 1, are based on finding experimental conditions that optimize a FIM-based information metric. As shown in Equation (5), the FIM relies on a linearization of the model behavior with respect to the parameters. Essentially, the linearization replaces the expectation surface $\Omega$ by its tangent plane at $\hat{\theta}$. The performance of an experimental design using a FIM-based criterion therefore depends on whether (1) the model outputs vary proportionally with the parameter values (planar assumption), and (2) this proportionality is constant (uniform coordinate assumption) [32]. When the model is highly nonlinear with respect to the parameters, FIM-based MBDOEs may produce suboptimal designs [33,34]. A recent MOO MBDOE using a combination of a FIM criterion and parameter correlation has been shown to provide an improvement over FIM-based MBDOE methods [15]. However, this method also relies on the first-order parametric sensitivity matrix, and thus it cannot account for model nonlinearity.
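For illustration, the short Python sketch below (not part of the original study; the helper name fim_criteria and the toy inputs are ours) computes the FIM of Equation (5) from a given sensitivity matrix and noise covariance and evaluates the four eigenvalue-based metrics listed in Table 1.

```python
import numpy as np

def fim_criteria(J, V):
    """Compute the FIM from a Jacobian J (n x p) and noise covariance V (n x n),
    and return the four eigenvalue-based design metrics of Table 1."""
    fim = J.T @ np.linalg.solve(V, J)          # FIM = J^T V^{-1} J, cf. Equation (5)
    lam = np.linalg.eigvalsh(fim)              # eigenvalues (real, since the FIM is symmetric)
    return {
        "D-optimal": np.prod(lam),             # maximize product of eigenvalues (det FIM)
        "A-optimal": np.sum(lam),              # maximize sum of eigenvalues (trace FIM)
        "E-optimal": np.min(lam),              # maximize the smallest eigenvalue
        "modified E-optimal": np.min(lam) / np.max(lam),  # maximize the eigenvalue ratio
    }

# Toy usage with a random sensitivity matrix (10 data points, 4 parameters)
rng = np.random.default_rng(0)
J = rng.normal(size=(10, 4))
V = 0.04 * np.eye(10)                          # i.i.d. measurement noise variance
print(fim_criteria(J, V))
```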
Curvature-based design of experiment methods such as the Q-optimality have been introduced to account for model nonlinearity by employing a second-order approximation of the model output. Here, the curvature of the expectation surface Ω is captured using the second-order sensitivities of F ( x , u , θ ) based on the Taylor series expansion:
$$F(x, u, \theta) = F(x, u, \hat{\theta}) + \hat{\dot{F}} (\theta - \hat{\theta}) + \frac{1}{2} (\theta - \hat{\theta})^T \hat{\ddot{F}} (\theta - \hat{\theta}) + O\!\left( \|\theta - \hat{\theta}\|^3 \right) \tag{6}$$
where $[\hat{\ddot{F}}]_{ijk} = \left. \frac{\partial^2 F_i(x, u, \theta)}{\partial \theta_j \partial \theta_k} \right|_{\theta = \hat{\theta}}$ is the $n \times p \times p$ Hessian matrix. As mentioned in the Introduction, several curvature-based MBDOE methods are available, for example, by minimizing curvature or using a curvature threshold [30]. In this work, we employed a MOO approach based on curvatures for designing optimal experiments. The basic premise of our MBDOE is to select experimental conditions that maximize the informativeness of data and ensure that the model behaves relatively linearly with respect to the parameters. More specifically, our MBDOE uses two objective functions, the first of which involves the maximization of a FIM-based information metric, and the second of which involves the minimization of relative curvature measures [28]. The second objective function ensures that the FIM can provide a reliable measure of data informativeness.

2.1. Multi-Objective Design of Experiments Based on Curvatures

In this section, we derive the relative curvature measures by following the work of Bates and Watts [28]. We consider an arbitrary straight line in the parameter space passing through θ ^ :
$$\theta(b) = \hat{\theta} + b\,h \tag{7}$$
where $h = [h_1, h_2, \ldots, h_p]$ is a non-zero vector. As the scalar parameter $b$ varies, a curve is traced through the expectation surface, also referred to as the lifted line, according to
$$\mu_h(b) = \mu(\hat{\theta} + b\,h) \tag{8}$$
The tangent line of this curve at b = 0 is given by
$$\dot{\mu}_h = \left. \frac{d\mu_h(b)}{db} \right|_{\theta = \hat{\theta},\, b = 0} = \sum_{r=1}^{p} \left. \frac{\partial F(x, u, \theta)}{\partial \theta_r} \frac{\partial \theta_r(b)}{\partial b} \right|_{\theta = \hat{\theta},\, b = 0} = \hat{\dot{F}}\, h \tag{9}$$
The set of all such tangent lines, that is, the column space of F ˙ ^ , describes the tangent (hyper)plane at μ ( θ ^ ) .
Meanwhile, the curvature measures come from a quadratic approximation of μ . In this case, the acceleration of μ ( b ) at b = 0 can be written as follows:
$$\ddot{\mu}_h = h^T \hat{\ddot{F}}\, h = \sum_{i=1}^{p} \sum_{j=1}^{p} \frac{\partial^2 F(x, u, \theta)}{\partial \theta_i \partial \theta_j}\, h_i h_j \tag{10}$$
The acceleration vector μ ¨ h can be subsequently decomposed into two components:
$$\ddot{\mu}_h = \ddot{\mu}_h^t + \ddot{\mu}_h^n \tag{11}$$
where at μ ( θ ^ ) , μ ¨ h t is tangential to the tangent plane and μ ¨ h n is normal to the tangent plane. The tangential acceleration μ ¨ h t is also called the parameter-effect curvature [28] and provides a measure of nonlinearity along the parameter vector h . The degree of the parameter-effect curvature can change upon reparameterization of the model. Meanwhile, the normal acceleration μ ¨ h n does not vary with model parameterization, and hence it is called the intrinsic curvature. Finally, the relative curvature measures in the direction of h are given by [28,32]:
$$K_h^t = \frac{\| \ddot{\mu}_h^t \|}{\| \dot{\mu}_h \|^2} \tag{12}$$
$$K_h^n = \frac{\| \ddot{\mu}_h^n \|}{\| \dot{\mu}_h \|^2} \tag{13}$$
Below, we describe the decomposition of the Hessian into the tangential and the normal component. We consider the QR-factorization of the Jacobian $\hat{\dot{F}}$, that is, $\hat{\dot{F}} = QR = Q \begin{bmatrix} \tilde{R} \\ 0 \end{bmatrix}$. By rotating the parameter axes $(\theta - \hat{\theta})$ into $\varphi = \tilde{R}(\theta - \hat{\theta})$, a new Jacobian matrix $\dot{U} = \left. \frac{dF(x, u, \varphi)}{d\varphi} \right|_{\varphi = 0}$ can be computed as $\dot{U} = \hat{\dot{F}} \tilde{R}^{-1}$, which comprises the first $p$ column vectors of $Q$ (i.e., $Q = [\dot{U}\ \ N]$). The remaining column vectors of $Q$ (i.e., $N$) are orthonormal to the tangent surface at $\varphi = 0$. In the same manner, the Hessian matrix in the rotated axes can be written as $\ddot{U} = L^T \hat{\ddot{F}} L$, where $L = \tilde{R}^{-1}$ and $[\ddot{U}]_{ijk} = \left. \frac{\partial^2 F_i(x, u, \varphi)}{\partial \varphi_j \partial \varphi_k} \right|_{\varphi = 0}$. The decomposition of the Hessian into the tangential and normal components is given by the following equation [28]:
$$\ddot{A} = Q^T \ddot{U} = \begin{bmatrix} \dot{U} & N \end{bmatrix}^T \ddot{U} = \begin{bmatrix} \ddot{A}^t \\ \ddot{A}^n \end{bmatrix} \tag{14}$$
The matrices A ¨ t and A ¨ n respectively correspond to the parameter-effect and intrinsic curvature components of the Hessian.
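As a concrete illustration of Equation (14), the following Python sketch (our own; the function name curvature_arrays is hypothetical) splits a given Hessian array into its parameter-effect and intrinsic components using the QR factorization of the Jacobian. The sign conventions of the numerical QR routine may differ from those in [28], but the decomposition is the same up to the choice of orthonormal basis.

```python
import numpy as np

def curvature_arrays(F_dot, F_ddot):
    """Decompose the Hessian (n x p x p) into parameter-effect (A_t) and
    intrinsic (A_n) components via the QR factorization of the Jacobian (n x p)."""
    n, p = F_dot.shape
    Q, R = np.linalg.qr(F_dot, mode="complete")     # Q: n x n, R: n x p
    R_tilde = R[:p, :]                              # upper p x p block of R
    L = np.linalg.inv(R_tilde)                      # L = R_tilde^{-1}
    # Hessian in the rotated parameter axes: U_ddot[i] = L^T F_ddot[i] L
    U_ddot = np.einsum("ja,ijk,kb->iab", L, F_ddot, L)
    # Project each p x p face onto the columns of Q: A_ddot = Q^T U_ddot
    A_ddot = np.einsum("iq,iab->qab", Q, U_ddot)
    return A_ddot[:p], A_ddot[p:]                   # (parameter-effect, intrinsic)

# Toy usage: 6 model outputs and 2 parameters with random sensitivities
rng = np.random.default_rng(1)
A_t, A_n = curvature_arrays(rng.normal(size=(6, 2)), rng.normal(size=(6, 2, 2)))
```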
To normalize the relative curvatures in Equations (12) and (13), Bates and Watts [28] used the scaling factor $\rho = s\sqrt{p}$, where $s^2 = \frac{(y - \hat{\mu})^T (y - \hat{\mu})}{n - p}$. Following the same procedure, we define the normalized relative curvatures as follows:
$$\gamma_h^t = \rho\, K_h^t \tag{15}$$
$$\gamma_h^n = \rho\, K_h^n \tag{16}$$
In addition, recasting $h$ in the rotated axes as $h = L d$, the tangent line $\dot{\mu}_{Ld}$ will have a unit norm (i.e., $\| \dot{\mu}_{Ld} \| = 1$) when $d$ is a unit vector. The computation of $\gamma_h^t$ and $\gamma_h^n$ is thus simplified into
$$\gamma_{Ld}^t = \rho\, \| d^T \ddot{A}^t d \|, \qquad d : \|d\| = 1 \tag{17}$$
$$\gamma_{Ld}^n = \rho\, \| d^T \ddot{A}^n d \|, \qquad d : \|d\| = 1 \tag{18}$$
In the proposed experimental design, the maxima of these curvature measures are used, where
$$\gamma_{max}^t = \max_{\|d\| = 1} \gamma_{Ld}^t \tag{19}$$
$$\gamma_{max}^n = \max_{\|d\| = 1} \gamma_{Ld}^n \tag{20}$$
As mentioned above, in formulating the MOO for the design of experiments, two design criteria have been taken into account. The first is that the experiment should be designed to maximize the informativeness of the data for parameter estimation. In this case, we employ an information metric based on the FIM. Meanwhile, the second design criterion in the MOO aims to minimize both the parameter-effect and intrinsic curvatures. The MOO formulation offers certain advantages, for example, that there is no need to prioritize any one of the criteria beforehand. Instead, we generate the Pareto set or Pareto frontier representing the set of solutions for which we cannot improve the value of one objective function without negatively affecting the other(s) [35].
Considering the kinetic ODE model given in Equation (3), our multi-objective formulation using the D-optimal criterion is given by
$$\max_{x_0,\, t_{sp},\, u(t)} \ \prod_i \lambda_i, \qquad \min_{x_0,\, t_{sp},\, u(t)} \ \left( \gamma_{max}^t + \gamma_{max}^n \right) \tag{21}$$
subject to
$$\frac{dx(\hat{\theta}, t)}{dt} = g(x(\hat{\theta}, t), u, \hat{\theta}), \qquad x(\hat{\theta}, 0) = x_0, \qquad x_0^L \le x_0 \le x_0^U, \qquad u_j^L \le u_j \le u_j^U \tag{22}$$
where λ i is the ith eigenvalue of the FIM (Equation (5)). The first objective function can be substituted with other FIM-based metrics (see Table 1). The parameter vector θ ^ is either an initial guess of the parameter values or the parameter estimates from the current iteration of an iterative model identification procedure [6]. The decision variables may include the initial condition of the states x 0 , the sampling time points of measurements t s p , and the dynamic input u ( t ) . In the case study below, we considered a control vector parametrization (CVP) of the input u i ( t ) as illustrated in Figure 2.
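To make the CVP concrete, the minimal Python sketch below (ours, with illustrative values; the function name piecewise_constant_input is hypothetical) constructs a piecewise-constant input profile from a set of levels and switching times, as in Figure 2.

```python
import numpy as np

def piecewise_constant_input(levels, switch_times):
    """Return u(t) for a piecewise-constant control vector parametrization:
    `levels` are the input values on each interval and `switch_times` are the
    (sorted) times at which the input switches."""
    levels = np.asarray(levels, dtype=float)
    switch_times = np.asarray(switch_times, dtype=float)

    def u(t):
        # the number of switching times already passed selects the active level
        return levels[np.searchsorted(switch_times, t, side="right")]
    return u

# Toy usage: a dilution factor with 5 levels and 4 switching times (hypothetical values)
u1 = piecewise_constant_input([0.05, 0.12, 0.20, 0.08, 0.15], [5.0, 12.0, 20.0, 30.0])
print(u1(3.0), u1(15.0))   # 0.05 on [0, 5), 0.20 on [12, 20)
```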

2.2. Numerical Implementation of the Curvature-Based MOO Design

As described in the previous section, the parameter-effect and intrinsic curvatures require the computation of the first- and second-order model sensitivities. For the ODE model in Equation (3), the first-order sensitivities can be calculated according to
$$\hat{\dot{F}} = \dot{F}(\hat{\theta}, x) = \left. \frac{\partial F(x(t, u, \theta))}{\partial x}\, \frac{\partial x(t, u, \theta)}{\partial \theta / \theta} \right|_{\theta = \hat{\theta}} \tag{23}$$
The sensitivities in the above equation are normalized with respect to the parameter values. The last term on the right-hand side is the first-order sensitivities of the ODE model, which obey the following differential equation:
$$\frac{d}{dt}\!\left( \frac{\partial x}{\partial \theta} \right) = \frac{\partial g}{\partial x}\, \frac{\partial x}{\partial \theta} + \frac{\partial g}{\partial \theta}, \qquad \left. \frac{\partial x}{\partial \theta} \right|_{t = 0} = 0 \tag{24}$$
Here, we have assumed that $x_0$ is not part of the parameter estimation, but this assumption can be easily relaxed. In the case study, the sensitivities $\partial x / \partial \theta$ were computed by solving the ODE in Equation (24) simultaneously with Equation (3), following a procedure known as the direct differential method [36]. Meanwhile, the Hessian matrix was approximated using a finite-difference method, as follows:
$$[\hat{\ddot{F}}]_{ijk} = \begin{cases} \dfrac{F_i(\theta + \Delta\theta_j e_j) - 2 F_i(\theta) + F_i(\theta - \Delta\theta_j e_j)}{(\Delta\theta_j / \theta_j)^2}, & j = k \\[2ex] \dfrac{F_i(\theta + \Delta\theta_j e_j + \Delta\theta_k e_k) - F_i(\theta + \Delta\theta_j e_j - \Delta\theta_k e_k) - F_i(\theta - \Delta\theta_j e_j + \Delta\theta_k e_k) + F_i(\theta - \Delta\theta_j e_j - \Delta\theta_k e_k)}{4\, (\Delta\theta_j / \theta_j)(\Delta\theta_k / \theta_k)}, & j \ne k \end{cases} \tag{25}$$
where $e_j$ is the $j$th elementary vector, and we used 1% parameter perturbations (i.e., $\Delta\theta_j / \theta_j = 0.01$). The second-order sensitivities above are also normalized with respect to the parameter values.
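A minimal Python sketch of this finite-difference approximation is given below (ours, not the authors' code); it assumes a user-supplied function F(theta) that returns the stacked vector of model outputs and applies relative perturbations of 1% by default.

```python
import numpy as np

def normalized_hessian(F, theta, rel_step=0.01):
    """Finite-difference approximation of the normalized Hessian of Equation (25).
    F(theta) returns the n-vector of model outputs; theta is the p-vector of
    parameters; rel_step is the relative perturbation (1% by default)."""
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    d = rel_step * theta                       # absolute perturbations Delta_theta_j
    F0 = np.asarray(F(theta))
    H = np.zeros((F0.size, p, p))
    for j in range(p):
        ej = np.zeros(p)
        ej[j] = d[j]
        # diagonal entries: central second difference, normalized by (Delta/theta)^2
        H[:, j, j] = (F(theta + ej) - 2.0 * F0 + F(theta - ej)) / rel_step**2
        for k in range(j):
            ek = np.zeros(p)
            ek[k] = d[k]
            # off-diagonal entries: mixed central difference
            mixed = (F(theta + ej + ek) - F(theta + ej - ek)
                     - F(theta - ej + ek) + F(theta - ej - ek)) / (4.0 * rel_step**2)
            H[:, j, k] = H[:, k, j] = mixed
    return H

# Toy usage on a simple analytic model (hypothetical)
H = normalized_hessian(lambda th: np.array([th[0] * th[1], th[0] ** 2]), np.array([0.5, 0.5]))
```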
Meanwhile, the curvature measures γ m a x t and γ m a x n in Equations (19) and (20) were calculated from the Hessian matrix using the alternating least squares (ALS) method [37], an algorithm created to find the maximum singular value σ m a x of a three-dimensional matrix. Based on the definitions in Equations (19) and (20), the maximum curvature measures can be determined by computing the maximum singular values of the matrices ρ A ¨ t and ρ A ¨ n , respectively. More specifically, we implemented the ALS method to solve for
$$\sigma_{max}(B) = \max_{\|r\| = \|s\| = 1} \sum_{i=1}^{m} r_i\, s^T B_i\, s \tag{26}$$
where B is either ρ A ¨ t or ρ A ¨ n . The ALS algorithm started with initial guess values of the vectors r and s and used the above equation to solve for one variable while fixing the other in an alternating manner. Zhang and Golub showed that the method linearly converges in a neighbourhood of the optimal solution [37].
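The sketch below gives one possible Python implementation of this alternating scheme (ours; it follows the update logic described above rather than the authors' original code): for a fixed s, the optimal unit-norm r is proportional to the vector of quadratic forms sᵀB_i s, and for a fixed r, s is updated to the eigenvector of the symmetrized matrix Σ_i r_i B_i with the largest-magnitude eigenvalue.

```python
import numpy as np

def als_sigma_max(B, n_iter=200, seed=0):
    """Approximate sigma_max(B) of Equation (26) for a 3-D array B of shape (m, p, p)
    by alternating updates of the unit vectors r and s."""
    rng = np.random.default_rng(seed)
    m, p, _ = B.shape
    s = rng.normal(size=p)
    s /= np.linalg.norm(s)
    for _ in range(n_iter):
        # Optimal r for fixed s is proportional to the quadratic forms s^T B_i s
        q = np.einsum("j,ijk,k->i", s, B, s)
        r = q / np.linalg.norm(q)
        # For fixed r, take the eigenvector of the symmetrized sum_i r_i B_i with the
        # largest-magnitude eigenvalue (the sign is absorbed by the next r update)
        M = np.einsum("i,ijk->jk", r, B)
        M = 0.5 * (M + M.T)
        eigval, eigvec = np.linalg.eigh(M)
        s = eigvec[:, np.argmax(np.abs(eigval))]
    # Objective value attained by the final s (with the corresponding optimal r)
    return float(np.linalg.norm(np.einsum("j,ijk,k->i", s, B, s)))

# gamma_max^t would then be approximated as als_sigma_max(rho * A_t), and similarly for A_n
```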
In the case study, the MOO problem was solved using the non-dominated sorting genetic algorithm II (NSGA-II) in MATLAB, producing a Pareto frontier in the space of the objective functions [38]. We employed a population size of 300 and set the number of generations to 50 times the number of design parameters (i.e., 1450). We recast each maximization of an objective function as the minimization of its negative counterpart. The optimal design was selected from the Pareto frontier by balancing the trade-offs among the objective functions. More specifically, we first normalized the objective functions such that their values on the Pareto frontier ranged between 0 and 1. We then chose as the final design the solution on the Pareto frontier that minimized the Euclidean norm of the (normalized) objective function values.
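The final selection step can be summarized by the small Python helper below (ours; select_design is a hypothetical name), which normalizes each objective over the Pareto set to [0, 1] and returns the index of the point closest to the origin in the normalized objective space.

```python
import numpy as np

def select_design(pareto_F):
    """Pick the compromise solution from a Pareto front.
    pareto_F is an (n_points, n_objectives) array of minimized objective values."""
    F = np.asarray(pareto_F, dtype=float)
    span = F.max(axis=0) - F.min(axis=0)
    span[span == 0.0] = 1.0                       # guard against a flat objective
    F_norm = (F - F.min(axis=0)) / span           # normalize each objective to [0, 1]
    return int(np.argmin(np.linalg.norm(F_norm, axis=1)))

# Toy front: (negative D-optimality, curvature), both to be minimized
front = np.array([[-9.0, 3.0], [-7.0, 1.5], [-4.0, 0.4]])
print(select_design(front))
```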

3. Results

3.1. MBDOEs of the Baker’s Yeast Fermentation Model

We evaluated the performance of the proposed MBDOE in an application to a kinetic model of a fed-batch fermentation of baker’s yeast [30,31]. In addition to a D-optimal criterion, we also implemented A-optimal, E-optimal and modified E-optimal criteria (see Table 1) with our MOO MBDOE. We compared the performance of our method to other MBDOEs, including (a) FIM-based MBDOEs, that is, D-optimal, A-optimal, E-optimal and modified E-optimal designs; (b) a D-optimal design with a curvature threshold [30]; (c) a Q-optimal MBDOE [29]; and (d) a MOO MBDOE using parameter correlation [15]. In total, we applied and compared 14 MBDOE methods. For the optimizations in (a), (b) and (c), we employed the enhanced scatter search metaheuristic (eSSm) algorithm [39,40,41]. For the MOO in (d), we used the optimization algorithm and optimal Pareto point selection, as described in the previous section.
In the fed-batch fermenter model, cellular growth and product formation are captured by the biomass variable x 1 , which is assumed to rely on a single substrate variable x 2 . The fermenter operates at a constant temperature and the feed is free from product. The model equations are given by
$$\frac{dx_1}{dt} = (r - u_1 - \theta_4)\, x_1, \qquad x_1(0) = x_{10}$$
$$\frac{dx_2}{dt} = -\frac{r\, x_1}{\theta_3} + u_1 (u_2 - x_2), \qquad x_2(0) = 0.1$$
$$r = \frac{\theta_1 x_2}{\theta_2 + x_2}$$
where the input $u_1$ is the dilution factor (in the range of 0.05–0.20 h$^{-1}$) and the input $u_2$ is the substrate concentration in the feed (in the range of 5–35 g/L). In the model, the biomass growth follows Monod-type kinetics. The parameters $\theta_1$ and $\theta_2$ are the Monod kinetic parameters, $\theta_3$ is the yield coefficient, and $\theta_4$ is the cell death rate constant.
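For reference, a minimal Python/SciPy sketch of this model is shown below (ours; the time horizon, initial biomass, and constant input levels are illustrative only).

```python
import numpy as np
from scipy.integrate import solve_ivp

def fermenter_rhs(t, x, theta, u1, u2):
    """Right-hand side of the fed-batch baker's yeast model.
    x = [biomass x1, substrate x2]; u1(t), u2(t) are the dilution factor and
    feed substrate concentration; theta = [theta1, theta2, theta3, theta4]."""
    x1, x2 = x
    r = theta[0] * x2 / (theta[1] + x2)            # Monod-type growth rate
    dx1 = (r - u1(t) - theta[3]) * x1
    dx2 = -r * x1 / theta[2] + u1(t) * (u2(t) - x2)
    return [dx1, dx2]

# Toy usage with constant inputs and the design-stage parameter guess [0.5, 0.5, 0.5, 0.5]
theta_d = [0.5, 0.5, 0.5, 0.5]
sol = solve_ivp(fermenter_rhs, (0.0, 40.0), [1.0, 0.1],
                args=(theta_d, lambda t: 0.05, lambda t: 20.0), dense_output=True)
```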
In the MBDOE, the design variables consisted of the initial condition of the biomass $x_1(0)$ in the range between 1 and 10 g/L, 10 measurement sampling times ($t_{sp}$), and the inputs $u_1(t)$ and $u_2(t)$. The piecewise-constant dynamic inputs were each parametrized using the CVP, as shown in Figure 2. Thus, the MOO was performed with 29 design parameters ($x_1(0)$, 10 $t_{sp}$’s, 10 $u_{i,j}$’s, and 8 $t_{sw}$’s). The length of the time interval between two successive measurement sampling points was constrained to be between 1 and 20 h, while that between two input switching times was bounded between 2 and 20 h. The calculations of the Jacobian and Hessian matrices in the MBDOEs used the parameter values $\theta_d = [\theta_1, \theta_2, \theta_3, \theta_4] = [0.5, 0.5, 0.5, 0.5]$ [15,42], which differ from the “true” parameter values used for noisy data generation in the next section. The reason for using a parameter set in the MBDOE different from the true values was to emulate the typical scenario in practice, in which one starts with only an estimate or guess of the model parameters. Figure 3 and Figure 4 show the optimal dynamic inputs and data sampling times resulting from all the MBDOE methods mentioned above (see also the Pareto frontiers in Figures S1 and S2 in the Supplementary Materials). Meanwhile, Table 2 gives the optimal initial biomass concentration $x_1(0)$.

3.2. Performance Evaluation

For each of the optimal experimental designs above, we generated in silico datasets by simulating the ODE model using the parameter values θ = [0.31, 0.18, 0.55, 0.05], as reported in previous publications [15,42]. We subsequently added independent and identically distributed (i.i.d.) Gaussian random white noise to the model simulations using a relative variance of 0.04 for both x 1 ( t ) and x 2 ( t ) [15,42]. For each in silico dataset, we then performed a parameter estimation using the resulting data ( y 1 and y 2 ) by maximum likelihood estimation, that is, by minimizing
$$\Phi(\theta) = \frac{1}{\sigma^2} \sum_{i=1}^{10} \left\{ \left[ y_1(t_i) - x_1(t_i, \theta) \right]^2 + \left[ y_2(t_i) - x_2(t_i, \theta) \right]^2 \right\}$$
We also employed the following constraints for θ in the optimization above:
$$0.05 \le \theta_1, \theta_2, \theta_3 \le 0.98$$
and
$$0.01 \le \theta_4 \le 0.98$$
Finding the globally optimal solution to the parameter estimation in Equation (4) is challenging. Here, we solved the constrained parameter optimization problem using the interior-point algorithm (implemented in the fmincon function in MATLAB) with the true parameter values $\theta$ as the initial guess. By employing the true values as the initial starting point of the optimization, we expected that the parameter accuracy would mainly be affected by the experimental design and not by the ability of the parameter optimization algorithm to find the globally optimal solution.
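A hedged Python analog of this estimation step is sketched below (ours): it uses scipy.optimize.minimize with the bound-constrained L-BFGS-B method in place of MATLAB's interior-point fmincon, and assumes a user-supplied simulate(theta, t_sp) function that returns the model predictions of x1 and x2 at the sampling times.

```python
import numpy as np
from scipy.optimize import minimize

def fit_parameters(y1, y2, t_sp, simulate, theta0, sigma2=1.0):
    """Weighted least-squares fit of the objective above; simulate(theta, t_sp)
    must return the model predictions (x1, x2) at the sampling times.
    sigma2 only rescales the objective and does not change the minimizer."""
    def objective(theta):
        x1, x2 = simulate(theta, t_sp)
        return np.sum((y1 - x1) ** 2 + (y2 - x2) ** 2) / sigma2

    # Parameter bounds as given above
    bounds = [(0.05, 0.98), (0.05, 0.98), (0.05, 0.98), (0.01, 0.98)]
    res = minimize(objective, theta0, method="L-BFGS-B", bounds=bounds)
    return res.x
```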
We repeated the in silico data generation and parameter estimation as described above 100 times, which resulted in a set of 100 parameter estimates. The performance of each MBDOE was assessed by the average accuracy of the parameter estimates, measured by the average of the normalized mean-square error (nMSE):
$$\overline{\mathrm{nMSE}} = \frac{1}{4} \sum_{i=1}^{4} \mathrm{nMSE}_i$$
where
$$\mathrm{nMSE}_i = \frac{\mathrm{variance}(\hat{\theta}_i) + \mathrm{bias}^2(\hat{\theta}_i)}{\theta_i^2}, \qquad i = 1, 2, 3, 4$$
The variance of θ ^ i was computed using the set of 100 parameter estimates, while the bias was calculated as the difference between the average of θ ^ i and θ i . Table 3 gives the average nMSE of the parameter estimates from each MBDOE under consideration.
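The accuracy metric can be computed with a few lines of Python (ours; nmse is a hypothetical helper), given the matrix of repeated parameter estimates and the true parameter values.

```python
import numpy as np

def nmse(theta_hats, theta_true):
    """Normalized mean-square errors of the parameter estimates and their average.
    theta_hats: (n_runs, p) array of estimates from repeated fits; theta_true: length-p vector."""
    theta_hats = np.asarray(theta_hats, dtype=float)
    theta_true = np.asarray(theta_true, dtype=float)
    var = theta_hats.var(axis=0)                   # variance over the repeated runs
    bias = theta_hats.mean(axis=0) - theta_true    # bias of the average estimate
    nmse_i = (var + bias ** 2) / theta_true ** 2
    return nmse_i.mean(), nmse_i
```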

4. Discussion

As shown in Figure 3, the MBDOEs prescribed manipulating the input u 1 ( t ) mostly at the beginning of the experiment and the input u 2 ( t ) for the entire duration of the experiment. For the majority of the MBDOEs in this study, the optimal sampling times spread unevenly over the duration of the experiment (see Figure 4). A more detailed comparison between Figure 3 and Figure 4 showed that the optimal sampling points were typically placed before and after a change in the dynamic inputs u 1 ( t ) and u 2 ( t ) . The exception to this observation was for the optimal design using the A-optimal criterion, which gave the worst parameter accuracy among the MBDOEs considered.
The consideration of model curvature using the proposed MOO MBDOE generally led to improved parameter accuracy over using only model curvature (i.e., Q-optimal and threshold curvature) or using only FIM-based criteria. The lowest nMSE came from the MOO MBDOE design using the modified E-optimal with model curvature. In comparison to MOO MBDOE using parameter correlation, employing model curvature in the MOO framework gave better experimental designs with lower average nMSEs. Meanwhile, Q-optimality and curvature thresholding strategies provided better nMSEs than the majority of the FIM-based criteria, except the D-optimal design. Finally, the optimal experiments based on the A-optimal criterion, either alone or in MOO MBDOE, performed poorly. The poor performance of the A-optimal design has also been reported in a previous publication [15].
The obvious drawback of curvature-based MBDOEs in comparison to FIM-based strategies is the higher computational cost associated with computing the Hessian matrix. While the number of first-order sensitivities (Jacobian) increases linearly with the number of parameters p, the number of second-order sensitivities scales with p 2 . Fortunately, the calculation of the Hessian matrix can be easily parallelized and implemented using multiple computing cores. In practice, one often focuses on only a subset of the model parameters, and therefore the MBDOE is typically done for a handful of parameters.
We note that the MBDOE methods considered in this work consider only parametric uncertainty in the models and assume that uncertainty in model equations, that is, structural uncertainty, is not significant. For certain types of models, such as generalized mass action and S-system models [43,44], model structural uncertainty can be treated as parametric uncertainty, and therefore the MBDOE strategies developed here could be applied. As mentioned in the Introduction, MBDOE methods for discriminating between model structures have been developed, many of which are based on the Bayesian approach. Furthermore, in applications for which there exists intrinsic parametric variability, for example, batch-to-batch variability in cell culture fermentation processes, Bayesian MBDOE methods would be more suitable than FIM-based strategies, as Bayesian methods are able to incorporate the prior (intrinsic) distribution of the parameter values in the design. Nevertheless, as demonstrated in the case study, even when the MOO MBDOEs were performed using model parameters that were quite different from the true values, the resulting optimal designs led to precise and accurate parameter estimates. Meanwhile, biological systems, like other complex systems, have been argued to be sloppy. In the context of our work, sloppy systems lead to mathematical models whose FIMs have eigenvalues that are logarithmically spread evenly over large orders of magnitude [45]. In other words, the system behavior is sensitive to or is controlled by a small number of parameter combinations (along the FIM eigenvectors corresponding to large eigenvalues). At the same time, there exist many parameter combinations that can be varied without affecting the system behavior. Such sloppiness could arise in a system governed by processes that span large and evenly distributed length and/or time scales, such that there exists no clear separation between relevant and irrelevant mechanisms. A recent study demonstrated that in the case of sloppy systems, reducing the model parametric uncertainty by MBDOEs beyond a certain point might not necessarily translate to any improvement in model prediction accuracy [45]. However, it is possible to construct reduced-order models of sloppy systems, whose parameters correspond to the important parameter combinations [46,47]. Parameter estimation and MBDOE strategies can then be applied to these reduced models.

5. Conclusions

Existing MBDOE methods for parameter estimation mostly rely on the FIM to define information criteria. Because the FIM is based on first-order sensitivities with respect to the model parameters, the related MBDOEs may perform poorly for nonlinear models. Here, a new MBDOE using a MOO framework was presented, combining the maximization of a FIM-based information metric with the minimization of model curvatures. The application to a model of the fed-batch fermentation of baker’s yeast demonstrated that accounting for model nonlinearity through model curvatures in designing the experiment could lead to improved parameter accuracy over using only a FIM-based criterion. The proposed MOO MBDOE also outperformed other curvature-based designs, including Q-optimality and curvature thresholding, as well as another MOO MBDOE strategy using parameter correlation. The MOO framework further provides the flexibility to accommodate other design criteria that may arise in a particular application.

Supplementary Materials

The following are available online at https://www.mdpi.com/2227-9717/5/4/63/s1. Figure S1: Pareto frontier of the MOO MBDOE using curvatures and a FIM-based criterion. Figure S2: Pareto frontier of the MOO MBDOE using correlation and a FIM-based criterion.

Acknowledgments

The authors would like to acknowledge funding from ETH Zurich and the Ministry of Education, Singapore.

Author Contributions

R.G. conceived and designed the study; E.M. and S.S. performed the design of the experiments and parameter estimations; E.M., S.S. and R.G. analyzed the data; E.M., S.S. and R.G. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
MBDOE: Model-based design of experiments
FIM: Fisher information matrix
MOO: Multi-objective optimization
RMS: Root mean square
ODE: Ordinary differential equation
MLE: Maximum likelihood estimator
CVP: Control vector parametrization
nMSE: Normalized mean-square error

References

1. Srinath, S.; Gunawan, R. Parameter identifiability of power-law biochemical system models. J. Biotechnol. 2010, 149, 132–140.
2. Jia, G.; Stephanopoulos, G.; Gunawan, R. Ensemble kinetic modeling of metabolic networks from dynamic metabolic profiles. Metabolites 2012, 2, 891–912.
3. Gábor, A.; Hangos, K.; Banga, J.; Szederkényi, G. Reaction network realizations of rational biochemical systems and their structural properties. J. Math. Chem. 2015, 53, 1657–1686.
4. Liu, Y.; Manesso, E.; Gunawan, R. REDEMPTION: Reduced dimension ensemble modeling and parameter estimation. Bioinformatics 2015, 31, 3387–3389.
5. Villaverde, A.F.; Banga, J.R. Structural properties of dynamic systems biology models: Identifiability, reachability and initial conditions. Processes 2017, 5, 29.
6. Franceschini, G.; Macchietto, S. Model-based design of experiments for parameter precision: State of the art. Chem. Eng. Sci. 2008, 63, 4846–4872.
7. Kreutz, C.; Timmer, J. Systems biology: Experimental design. FEBS J. 2009, 276, 923–942.
8. Chakrabarty, A.; Buzzard, G.T.; Rundell, A.E. Model-based design of experiments for cellular processes. WIREs Syst. Biol. Med. 2013, 5, 181–203.
9. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2006.
10. Faller, D.; Klingmüller, U.; Timmer, J. Simulation methods for optimal experimental design in systems biology. Simulation 2003, 79, 717–725.
11. Gadkar, K.G.; Gunawan, R.; Doyle, F.J., III. Iterative approach to model identification of biological networks. BMC Bioinform. 2005, 6, 155.
12. Balsa-Canto, E.; Alonso, A.A.; Banga, J.R. Computational procedures for optimal experimental design in biological systems. IET Syst. Biol. 2008, 2, 163–172.
13. Chung, M.; Haber, E. Experimental design for biological systems. SIAM J. Control Optim. 2012, 50, 471–489.
14. Transtrum, M.K.; Qiu, P. Optimal experiment selection for parameter estimation in biological differential equation models. BMC Bioinform. 2012, 13, 181.
15. Maheshwari, V.; Rangaiah, G.P.; Samavedham, L. A multi-objective framework for model based design of experiments to improve parameter precision and minimize parameter correlation. Ind. Eng. Chem. Res. 2013, 52, 8289–8304.
16. Sinkoe, A.; Hahn, J. Optimal experimental design for parameter estimation of an IL-6 signaling model. Processes 2017, 5, 49.
17. Vanlier, J.; Tiemann, C.A.; Hilbers, P.A.J.; van Riel, N.A.W. A Bayesian approach to targeted experiment design. Bioinformatics 2012, 28, 1136–1142.
18. Weber, P.; Kramer, A.; Dingler, C.; Radde, N. Trajectory-oriented Bayesian experiment design versus Fisher A-optimal design: An in-depth comparison study. Bioinformatics 2012, 28, i535–i541.
19. Liepe, J.; Filippi, S.; Komorowski, M.; Stumpf, M.P.H. Maximizing the information content of experiments in systems biology. PLoS Comput. Biol. 2013, 9, e1002888.
20. Apgar, J.F.; Toettcher, J.E.; Endy, D.; White, F.M.; Tidor, B. Stimulus design for model selection and validation in cell signaling. PLoS Comput. Biol. 2008, 4, e30.
21. Daunizeau, J.; Preuschoff, K.; Friston, K.; Stephan, K. Optimizing experimental design for comparing models of brain function. PLoS Comput. Biol. 2011, 7, e1002280.
22. Flassig, R.J.; Sundmacher, K. Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks. Bioinformatics 2012, 28, 3089–3096.
23. Busetto, A.G.; Hauser, A.; Krummenacher, G.; Sunnaker, M.; Dimopoulos, S.; Ong, C.S.; Stelling, J.; Buhmann, J.M. Near-optimal experimental design for model selection in systems biology. Bioinformatics 2013, 29, 2625–2632.
24. Silk, D.; Kirk, P.D.W.; Barnes, C.P.; Toni, T.; Stumpf, M.P.H. Model selection in systems biology depends on experimental design. PLoS Comput. Biol. 2014, 10, e1003650.
25. Bazil, J.N.; Buzzard, G.T.; Rundell, A.E. A global parallel model based design of experiments method to minimize model output uncertainty. Bull. Math. Biol. 2012, 74, 688–716.
26. Mdluli, T.; Buzzard, G.T.; Rundell, A.E. Efficient optimization of stimuli for model-based design of experiments to resolve dynamical uncertainty. PLoS Comput. Biol. 2015, 11, e1004488.
27. Cochran, W.G. Experiments for nonlinear functions. J. Am. Stat. Assoc. 1973, 68, 771–781.
28. Bates, D.M.; Watts, D.G. Relative curvature measures of nonlinearity. J. R. Stat. Soc. Ser. B 1980, 42, 1–25.
29. Hamilton, D.C.; Watts, D.G. A quadratic design criterion for precise estimation in nonlinear regression models. Technometrics 1985, 27, 241–250.
30. Benabbas, L.; Asprey, S.P.; Macchietto, S. Curvature-based methods for designing optimally informative experiments in multiresponse nonlinear dynamic situations. Ind. Eng. Chem. Res. 2005, 44, 7120–7131.
31. Asprey, S.P.; Macchietto, S. Statistical tools for optimal dynamic model building. Comput. Chem. Eng. 2000, 24, 1261–1267.
32. Seber, G.A.F.; Wild, C.J. Nonlinear Regression; John Wiley & Sons: Hoboken, NJ, USA, 2003.
33. Merlé, Y.; Tod, M. Impact of pharmacokinetic-pharmacodynamic model linearization on the accuracy of population information matrix and optimal design. J. Pharmacokinet. Pharmacodyn. 2001, 28, 363–388.
34. Bogacka, B.; Wright, F. Comparison of two design optimality criteria applied to a nonlinear model. J. Biopharm. Stat. 2004, 14, 909–930.
35. Rangaiah, G.P. Multi-Objective Optimization: Techniques and Applications in Chemical Engineering; World Scientific: Singapore, 2008; Volume 1.
36. Varma, A.; Morbidelli, M.; Wu, H. Parametric Sensitivity in Chemical Systems; Cambridge University Press: Cambridge, UK, 1999.
37. Zhang, T.; Golub, G.H. Rank-one approximation to high order tensors. SIAM J. Matrix Anal. Appl. 2001, 23, 534–550.
38. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
39. Egea, J.A.; Martí, R.; Banga, J.R. An evolutionary method for complex-process optimization. Comput. Oper. Res. 2010, 37, 315–324.
40. Egea, J.A.; Rodriguez-Fernandez, M.; Banga, J.R.; Martí, R. Scatter search for chemical and bioprocess optimization. J. Glob. Optim. 2007, 37, 481–503.
41. Rodriguez-Fernandez, M.; Egea, J.A.; Banga, J.R. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinform. 2006, 7, 483.
42. Zhang, Y.; Edgar, T.F. PCA combined model-based design of experiments (DOE) criteria for differential and algebraic system parameter estimation. Ind. Eng. Chem. Res. 2008, 47, 7772–7783.
43. Chou, I.C.; Voit, E. Recent developments in parameter estimation and structure identification of biochemical and genomic systems. Math. Biosci. 2009, 219, 57–83.
44. Voit, E. Biochemical systems theory: A review. ISRN Biomath. 2013, 2013, 897658.
45. White, A.; Tolman, M.; Thames, H.D.; Withers, H.R.; Mason, K.A.; Transtrum, M.K. The limitations of model-based experimental design and parameter estimation in sloppy systems. PLoS Comput. Biol. 2016, 12, e1005227.
46. Transtrum, M.K.; Qiu, P. Model reduction by manifold boundaries. Phys. Rev. Lett. 2014, 113, 098701.
47. Transtrum, M.K.; Qiu, P. Bridging mechanistic and phenomenological models of complex biological systems. PLoS Comput. Biol. 2016, 12, e1004915.
Figure 1. Iterative model identification cycle. The model building process involves the following key steps: experimental design, model structure formulation, parameter estimation, and model validation.
Figure 2. Control vector parametrization of input profiles. In the baker’s yeast case study, we implemented piecewise-constant input profiles with $u_i = [u_{i,1}, u_{i,2}, u_{i,3}, u_{i,4}, u_{i,5}]$ and four switching times: $t_{sw1}$, $t_{sw2}$, $t_{sw3}$, and $t_{sw4}$.
Figure 3. Optimal dilution factor ($u_1$ in h$^{-1}$, left panels) and feed substrate concentration ($u_2$ in g/L, right panels). (A,B) D-optimal (blue); (C,D) A-optimal (red); (E,F) E-optimal (green); (G,H) modified E-optimal (black). In panels (A–H), the optimal $u_1$ and $u_2$ using Fisher information matrix (FIM)-based criteria are shown by solid lines, those using FIM-based criteria combined with curvatures by dashed lines, and those using FIM-based criteria combined with parameter correlation by dash-dotted lines. (I,J) Threshold curvature (magenta, solid line) and Q-optimal design (magenta, dashed line).
Figure 4. Optimal sampling grid from the model-based designs of experiments (MBDOEs). Fisher information matrix (FIM)-based criteria are shown by solid lines, FIM-based criteria combined with curvatures by dashed lines, and FIM-based criteria combined with parameter correlation by dash-dotted lines. Dots indicate the sampling times.
Table 1. Model-based designs of experiments (MBDOEs) using the Fisher information matrix (FIM).

| FIM-Based MBDOE | Criterion |
| --- | --- |
| D-optimal | $\max \prod_i \lambda_i$ |
| A-optimal | $\max \sum_i \lambda_i$ |
| E-optimal | $\max\, \min_i(\lambda_i)$ |
| Modified E-optimal | $\max\, \min_i(\lambda_i) / \max_i(\lambda_i)$ |
Table 2. Optimal initial condition of the biomass $x_1(0)$ (g/L) from the model-based designs of experiments (MOO: multi-objective optimization).

| Design Criterion | $x_1(0)$ |
| --- | --- |
| D-optimal | 10.0 |
| MOO D-optimal and curvatures | 10.0 |
| MOO D-optimal and correlation | 10.0 |
| A-optimal | 10.0 |
| MOO A-optimal and curvatures | 9.9 |
| MOO A-optimal and correlation | 10.0 |
| E-optimal | 10.0 |
| MOO E-optimal and curvatures | 10.0 |
| MOO E-optimal and correlation | 10.0 |
| Modified E-optimal | 10.0 |
| MOO modified E-optimal and curvatures | 10.0 |
| MOO modified E-optimal and correlation | 10.0 |
| Threshold curvature | 8.2 |
| Q-optimal | 5.5 |
Table 3. Model-based design of experiment (MBDOE) performance on the fed-batch fermentation of baker’s yeast model. The overall parameter accuracy is represented by the average of the normalized mean-square error (nMSE). The reported parameter values and errors are the averages and standard deviations from 100 repeated runs of parameter estimation.

| Design Criterion | Average nMSE | θ1 ± SD | θ2 ± SD | θ3 ± SD | θ4 ± SD |
| --- | --- | --- | --- | --- | --- |
| D-optimal | 7.06 × 10⁻³ | 0.3107 ± 0.0102 | 0.1831 ± 0.0276 | 0.5505 ± 0.0125 | 0.0502 ± 0.0026 |
| MOO D-optimal and curvatures | 4.71 × 10⁻³ | 0.3099 ± 0.0056 | 0.1825 ± 0.0233 | 0.5496 ± 0.0099 | 0.0499 ± 0.0018 |
| MOO D-optimal and correlation | 5.36 × 10⁻³ | 0.3117 ± 0.0134 | 0.1781 ± 0.0151 | 0.5543 ± 0.0270 | 0.0508 ± 0.0049 |
| A-optimal | 2.35 × 10⁻¹ | 0.3294 ± 0.0659 | 0.2399 ± 0.1387 | 0.5841 ± 0.1083 | 0.0558 ± 0.0181 |
| MOO A-optimal and curvatures | 1.42 | 0.3669 ± 0.0947 | 0.5267 ± 0.2230 | 0.5548 ± 0.1333 | 0.0510 ± 0.0244 |
| MOO A-optimal and correlation | 4.82 | 0.0863 ± 0.0499 | 0.8927 ± 0.2555 | 0.2879 ± 0.1928 | 0.0177 ± 0.0263 |
| E-optimal | 8.01 × 10⁻² | 0.3180 ± 0.0420 | 0.2026 ± 0.0956 | 0.5473 ± 0.0159 | 0.0496 ± 0.0026 |
| MOO E-optimal and curvatures | 3.33 × 10⁻³ | 0.3083 ± 0.0095 | 0.1829 ± 0.0164 | 0.5502 ± 0.0183 | 0.0500 ± 0.0026 |
| MOO E-optimal and correlation | 8.19 × 10⁻³ | 0.3108 ± 0.0164 | 0.1824 ± 0.0213 | 0.5552 ± 0.0304 | 0.0509 ± 0.0055 |
| Modified E-optimal | 6.99 × 10⁻² | 0.3137 ± 0.0165 | 0.1986 ± 0.0920 | 0.5498 ± 0.0144 | 0.0502 ± 0.0033 |
| MOO modified E-optimal and curvatures | 3.44 × 10⁻⁴ | 0.3095 ± 0.0036 | 0.1789 ± 0.0034 | 0.5491 ± 0.0073 | 0.0500 ± 0.0013 |
| MOO modified E-optimal and correlation | 2.27 × 10⁻³ | 0.3088 ± 0.0048 | 0.1820 ± 0.0160 | 0.5486 ± 0.0047 | 0.0496 ± 0.0013 |
| Threshold curvature | 1.29 × 10⁻² | 0.3144 ± 0.0307 | 0.1857 ± 0.0339 | 0.5500 ± 0.0155 | 0.0502 ± 0.0032 |
| Q-optimal | 1.91 × 10⁻² | 0.3085 ± 0.0178 | 0.1757 ± 0.0216 | 0.5514 ± 0.0236 | 0.0504 ± 0.0119 |
