Article

A Numerical Stability Analysis in the Inclusion of an Inverse Term in the Design of Experiments for Mixtures

by Javier Cruz-Salgado 1, Sergio Alonso-Romero 2, Edgar Augusto Ruelas-Santoyo 3, Israel Miguel-Andrés 2, Roxana Zaricell Bautista-López 2 and Amir Hossein Nobil 4,*
1 School of Engineering, Industrial Engineering and Mechanical Engineering Department, Universidad de las Américas Puebla (UDLAP), Puebla 72810, Mexico
2 Centro de Innovación Aplicada en Tecnologías Competitivas CIATEC, Biomecánica, León 37545, Mexico
3 Department of Industrial Engineering, Instituto Tecnológico de Celaya, Celaya 38010, Mexico
4 Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Ave. Eugenio Garza Sada 2501, Monterrey 64849, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3587; https://doi.org/10.3390/math12223587
Submission received: 20 September 2024 / Revised: 19 October 2024 / Accepted: 20 October 2024 / Published: 16 November 2024
(This article belongs to the Special Issue Advances in Mathematical Analytics and Operations Research)

Abstract

A mixture experiment is one in which the response depends only on the relative proportions of the ingredients present in the mixture. Different regression models are used to analyze mixture experiments, such as the Scheffé model, the Slack Variable model, and models with inverse terms. Models with inverse terms are worthy of consideration in certain applications. These models have been analyzed in terms of their fit quality, but not their numerical stability. This article analyzes the numerical stability of the model with inverse terms and the use of pseudo components. Likewise, a criterion is defined for selecting the regression model with inverse terms, based on both quality of fit and numerical stability.

1. Introduction

Many products are formed by mixing two or more ingredients. One or more properties of each product are generally of interest to the manufacturer, who is responsible for mixing the ingredients. In every case, the measured property of the final product depends on the percentages or proportions of the individual ingredients that are present in the formulation [1]. In the general mixture problem, the measured response is assumed to depend only on the proportions of the ingredients present in the mixture and not on the amount of the mixture. These proportions are connected by a linear constraint of the following type:
$$x_1 + x_2 + \cdots + x_q = 1 \qquad (1)$$
In some cases, the proportions of the mixture components may be subject to additional restrictions such as
$$a_i \le x_i \le b_i \qquad (2)$$
for one or more components. Experiments with mixtures are generally modeled using Scheffé polynomial models [2]. The linear Scheffé model has the following form:
$$E(Y) = \sum_{i=1}^{q} \beta_i x_i \qquad (3)$$
Likewise, Scheffé’s quadratic model can be described as follows:
$$E(Y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i=1}^{q-1} \sum_{j=i+1}^{q} \beta_{ij} x_i x_j \qquad (4)$$
where $\beta_i$ and $\beta_{ij}$ are unknown parameters that must be estimated, generally using least squares. Due to Equation (1), the quadratic Scheffé model contains only linear terms and cross-product terms.
A general mixture model, in matrix terms, can be written as $Y = X\beta + \epsilon$, or $E(Y) = X\beta$. To estimate the parameters in $\beta$ via least squares, the expression $\hat{\beta} = (X'X)^{-1}X'y$ is commonly used, where the covariance matrix is $V(\hat{\beta}) = (X'X)^{-1}\sigma^2$. The vector of fitted values is given by $\hat{y} = X\hat{\beta}$, and the residual vector is $\epsilon = y - \hat{y} = y - X\hat{\beta}$. The residual vector is assumed to follow a normal distribution, that is, $\epsilon \sim N(0, \sigma^2 I)$. In regression analysis, the least squares method is widely employed for parameter estimation due to its simplicity and efficiency under certain conditions. However, several alternative methods offer advantages in specific scenarios. One such method is Maximum Likelihood Estimation (MLE), which estimates parameters by maximizing the likelihood function, provided the data follow a specific probability distribution. MLE is broadly applicable and efficient, particularly in complex models or under non-normal distributions, though it demands more computational resources than least squares. Robust regression techniques, including Huber regression and M-estimators, are designed to handle outliers and violations of assumptions such as homoscedasticity. These methods diminish the influence of extreme values on parameter estimates, making them more reliable when outliers are present. Ridge regression is another alternative, particularly useful when multicollinearity exists among the predictor variables. By introducing a regularization term (λ) into the least squares objective function, ridge regression shrinks the regression coefficients, thereby reducing overfitting, a feature especially valuable in high-dimensional datasets.
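The matrix computations above can be sketched in a few lines of NumPy. The three-component design and response values below are hypothetical, chosen only to illustrate $\hat{\beta} = (X'X)^{-1}X'y$ and its covariance, not to reproduce any example in the article.

```python
import numpy as np

def ols_fit(X, y):
    """Least squares: beta_hat = (X'X)^-1 X'y, with V(beta_hat) = (X'X)^-1 s^2."""
    XtX = X.T @ X
    # Solve the normal equations instead of forming the inverse explicitly.
    beta_hat = np.linalg.solve(XtX, X.T @ y)
    resid = y - X @ beta_hat
    n, p = X.shape
    s2 = resid @ resid / (n - p)          # unbiased estimate of sigma^2
    cov_beta = np.linalg.inv(XtX) * s2    # estimated covariance of beta_hat
    return beta_hat, cov_beta

# Hypothetical three-component mixture design (each row sums to 1).
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
y = np.array([10.0, 12.0, 8.0, 11.2, 9.3, 10.1])
beta_hat, cov = ols_fit(X, y)
```

At the solution, the residual vector is orthogonal to the columns of $X$, which is a quick numerical check on the normal equations.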
If an exact linear dependence exists between the columns of $X$, that is, if there is a set of constants $c_j$, not all zero, such that $\sum_{j=1}^{p} c_j x_j = 0$, then the matrix $X$ has rank less than $p$ (the number of predictor variables), and the inverse of $X'X$ does not exist. In this case, many software packages give an error message and do not attempt the inversion. However, if the linear dependence is only approximate, that is, $\sum_{j=1}^{p} c_j x_j \approx 0$, then we have the condition usually identified as collinearity or ill-conditioning (some authors use the term multicollinearity) [3]. In this case, many software packages proceed to calculate $(X'X)^{-1}$ without any warning of the potential problem [4]. In the presence of ill-conditioning, the computer routines used to calculate $(X'X)^{-1}$ can give erroneous results; that is, the least squares solution may be incorrect. Even if $(X'X)^{-1}$ is correct, the variances of the $\hat{\beta}_j$, given by the diagonal terms of $V(\hat{\beta}) = (X'X)^{-1}\sigma^2$, can be inflated by ill-conditioning [5].
Prescott et al. (2002) [3] have shown that the quadratic Kronecker model is a quadratic model specification less susceptible to ill-conditioning than the Scheffé model. They concluded that the quadratic Kronecker model always reduces the maximum eigenvalue of the information matrix ($X'X$), reducing ill-conditioning. When the presence of collinearity among the terms of the Scheffé model is a possibility, and the appearance of the complete model form is of concern, it makes sense to use the model known as the Slack Variable model. References to the Slack Variable model form appear in [4,6,7,8,9,10,11,12,13,14,15,16]. The justification for using the Slack Variable model is that it offers a lower degree of collinearity in the fitted model, which represents less numerical instability in the estimation of the model coefficients [17,18]; likewise, it reduces the variances of the individual estimated regression coefficients, reduces the correlation between the estimators, and makes the model less dependent on the precise location of the design points [4]. The authors of [4] have also presented a literature review on Slack Variable models for mixture experiments.
Emphasizing the practical aspects of selecting appropriate mixture designs, numerous techniques have been developed for both the design and analysis of mixture experiments. These advancements focus on constructing and interpreting mixture models as well as analyzing mixture data. While substantial progress has been made in designing experiments for Scheffé models [19], the optimal design for more general blending models, particularly those extending the Becker class, has not yet been systematically addressed. These models, which account for nonlinear blending effects, are crucial in practical applications where the relationships between response variables, parameters, and mixture factors are more complex. To address this gap, ref. [20] proposed methods for determining D- and A-optimal designs for general blending models. By reformulating the optimal experimental design problem into an optimization problem—using either Nonlinear Programming or Mixed Integer Nonlinear Programming—their approach is applicable to quadratic and special cubic blending models, including those in the H2 class introduced by Becker. These methods are exemplified through applications in combustion science and fuel property characterization. Furthermore, a new class of models has been introduced to unify and extend existing statistical methodologies for analyzing mixture experiments [21]. This class integrates the Scheffé and Becker models, thereby broadening the range of mixture component effects that can be captured. These models maintain simplicity when appropriate but are flexible enough to accommodate more complex phenomena as required. The proposed methodology significantly extends traditional approaches, with supplementary material available for further exploration.
Draper and John (1977) [22] have presented another model alternative that proposes the inclusion of an inverse term. Models that include inverse terms are augmentations of the Scheffé polynomial with additional terms of the form $x_i^{-1}$, included to account for a possible extreme change in the response as $x_i$ approaches zero [22]. Draper and John (1977) [22] have argued that a model with inverse terms offers a better quality of fit than the Scheffé polynomial model when an extreme change in the response behavior exists.
When any component of the mixture is equal to zero ($x_i = 0$), in order to include an inverse term such as $x_i^{-1}$ in the model, it is necessary to add a small positive quantity, say $c_i$, to each component of the mixture [22]. As a general rule for defining $c_i$, Draper and John (1977) [22] suggested simply using a value between 0.02 and 0.05, with $c_1 = c_2 = \cdots = c_q = c$; they mentioned that values of $c_i$ in that range are suitable for most problems. It is important to highlight, however, that the value of $c_i$ is not determined formally or based on any mathematical criterion; it is left entirely to the discretion of the researcher, without any reference or guideline. Therefore, defining $c_i$ through some formal criterion is a research opportunity.

Although the model with inverse terms can offer a better fit, in this article, we show that this model has certain disadvantages regarding the conditioning of the information matrix used to estimate the parameters by least squares. Accordingly, a strategy is proposed to minimize the problems derived from the poor conditioning of the information matrix. This strategy consists of determining the value of $c_i$ for which the model presents the best numerical stability.

As a result of this research, it was observed that the conditioning of the information matrix, and consequently the numerical stability of the model, varies with different values of $c_i$. Therefore, it is proposed that the value of $c_i$ be determined based on the model's numerical stability. Specifically, the value of $c_i$ should be selected to optimize the conditioning of the information matrix, ensuring the most accurate least squares estimates.

2. Methods

In this section, we discuss various mathematical methods used in modeling mixture experiments, specifically focusing on the limitations and enhancements of the Scheffé polynomial model when dealing with extreme responses as certain components of the mixture approach boundary values (often zero). We also aim to introduce techniques such as pseudo components and how they can improve the numerical stability of the model, as well as provide a method to determine the adequacy of polynomial models in mixture designs.
In Section 2.1, we discuss how the Scheffé polynomial model can be modified using inverse terms to address extreme changes in the response when one or more components of a mixture approach zero. The equations provided extend the Scheffé model to account for these conditions, provided that no component value exactly reaches zero, although it may approach very small positive values. The need to handle these situations arises because the basic Scheffé model struggles when the response changes drastically near the boundaries. Section 2.2 introduces the concept of pseudo components, which are used to simplify mixture models when the components have lower limits (i.e., boundary constraints). The transformation of components into pseudo components helps to handle extreme changes in the responses. The equations provided show how to transform the original component values into pseudo components to account for boundary effects, and this section emphasizes the need to add a small value (denoted $c_i$) when a component's value is zero in order to stabilize the model. Next, Section 2.3 explains how to assess the adequacy of polynomial models for mixtures in terms of numerical stability and collinearity. It describes how to calculate the condition number (CN) of the information matrix to measure numerical stability, with smaller CN values indicating better conditioning. Additionally, it proposes the use of the CN to determine an appropriate value of $c_i$, which is essential for improving the stability of the pseudo component transformation. Finally, Section 2.4 presents a discussion of alternative problem formulations and regularization methods.

2.1. Mixture Model with Inverse Terms

As mentioned before, the Scheffé model can be re-parameterized; in fact, there are several ways to write a polynomial model of any order using the mixture restriction of Equation (1) [3].
When there exists an extreme change in the response as the value of a certain component ( x i ) tends to a boundary (usually zero), the Scheffé model cannot cope well with this situation [6]. For modeling extreme changes in the response as the value of some components tends to zero, [22] proposed the following models:
$$E(Y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i=1}^{q} \beta_{-i} x_i^{-1} \qquad (5)$$
and
$$E(Y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j}^{q} \beta_{ij} x_i x_j + \sum_{i=1}^{q} \beta_{-i} x_i^{-1} \qquad (6)$$
Equations (5) and (6) are modifications of the Scheffé polynomial model with additional terms of the form $x_i^{-1}$. In these models, it is assumed that the value of $x_i$ never reaches zero; however, it can be very close to zero, that is, $x_i \ge \varepsilon_i > 0$, where $\varepsilon_i$ is some small quantity defined for each application of the model. Likewise, if only a few components are likely to produce extreme changes in the response as $x_i \to 0$, then only those inverse terms are included in the model [1].
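Building the design matrix for the first-degree inverse-term model (5) amounts to appending an $x_i^{-1}$ column for each component. A minimal sketch follows; the interior simplex points are hypothetical, chosen so that no component is zero.

```python
import numpy as np

def inverse_term_design(X):
    """Design matrix for model (5): the linear columns x_i plus the inverse
    columns x_i^-1. Requires every proportion strictly positive."""
    X = np.asarray(X, float)
    if np.any(X <= 0.0):
        raise ValueError("all components must be strictly positive to include 1/x_i terms")
    return np.hstack([X, 1.0 / X])

# Hypothetical interior points of a 3-component simplex (rows sum to 1).
X = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.6, 0.2],
              [0.2, 0.2, 0.6],
              [1/3, 1/3, 1/3]])
M = inverse_term_design(X)
```

The explicit positivity check mirrors the requirement $x_i \ge \varepsilon_i > 0$: if any component can be exactly zero, the pseudo component transformation of Section 2.2 must be applied first.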

2.2. Use of Pseudo Components

When lower limit restrictions of the form $0 \le L_i \le x_i$ for all $i$, with $\sum_{i=1}^{q} L_i < 1$, are considered, lower pseudo components (L-pseudo components) are suggested for use in place of the components in original units ($x_i$) [1]:
$$x_i' = \frac{x_i - L_i}{1 - \sum_{j=1}^{q} L_j} \qquad (7)$$
If an extreme change in the response occurs as $x_i$ approaches the bound $L_i$ (the smallest value of component $x_i$), we define $L_i^* = L_i - \varepsilon_i$, where $L_i > \varepsilon_i > 0$, so instead of the lower pseudo component definition shown in Equation (7), the transformation is redefined as follows [1]:
$$x_i' = \frac{x_i - L_i^*}{1 - \sum_{j=1}^{q} L_j^*} \qquad (8)$$
Draper and John (1977) [22] mentioned that when the value of any component of the mixture is equal to zero ($x_i = 0$), then in order to include a term such as $x_i^{-1}$ in the model, a small positive amount, say $c_i$, must be added to each value of $x_i$. This is equivalent to working again with pseudo components $x_i'$, this time defined as follows [1]:
$$x_i' = \frac{(x_i + c_i) - L_i}{1 - \sum_{j=1}^{q} (L_j - c_j)} \qquad (9)$$
As mentioned in the Introduction, the value of $c_i$ has not been defined by a formal criterion, leaving it to the discretion of the researcher. In the Results and Discussion, we will describe how to define the value of $c_i$ so that an improvement in the numerical stability of the model is obtained.
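The Equation (9) transformation can be sketched directly. The lower bounds and design points below are hypothetical, chosen so that one component has a lower limit of exactly zero (the case that motivates adding $c_i$); a useful sanity check is that the transformed rows still sum to one and every pseudo component is strictly positive.

```python
import numpy as np

def l_pseudo_components(X, L, c):
    """Eq. (9): x'_i = ((x_i + c_i) - L_i) / (1 - sum_j (L_j - c_j)).
    X: (n, q) original proportions (rows sum to 1); L, c: length-q arrays."""
    X = np.asarray(X, float)
    L = np.asarray(L, float)
    c = np.asarray(c, float)
    denom = 1.0 - np.sum(L - c)
    return (X + c - L) / denom

# Hypothetical bounds: x2 may be exactly zero, so c = 0.02 is added everywhere.
L = np.array([0.10, 0.00, 0.20])
c = np.full(3, 0.02)
X = np.array([[0.50, 0.00, 0.50],
              [0.10, 0.40, 0.50],
              [0.30, 0.30, 0.40]])
Z = l_pseudo_components(X, L, c)
```

Because $x_i \ge L_i$ and $c_i > 0$, every numerator $(x_i + c_i) - L_i$ is at least $c_i > 0$, so the inverse terms $1/x_i'$ are always well defined after the transformation.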

2.3. Adequacy of Polynomial Models for Mixtures

Next, we describe the procedure for determining the adequacy of the polynomial model used in terms of the conditioning of the information matrix. In this case, the objective is to determine the presence of collinearity (see the works of Cornell and Gorman (2003) [23] and Prescott et al. (2002) [3]).
Let $\lambda_{max} > \lambda_2 > \cdots > \lambda_{p-1} > \lambda_{min}$ be the $p$ eigenvalues of the information matrix ($X'X$), that is, the $p$ solutions of the determinantal equation $|X'X - \lambda I| = 0$, which is a polynomial with $p$ roots.
There are several definitions of the condition number (CN) of a matrix. The definition generally used in statistical applications is the square root of the ratio of the maximum to the minimum eigenvalue of $X'X$ [3]:
$$CN(X'X) = \sqrt{\frac{\lambda_{max}}{\lambda_{min}}} \qquad (10)$$
(Some authors omit the square root and simply take the ratio of the maximum to the minimum eigenvalue.) If $X'X$ is close to singular, then $\lambda_{min}$ is close to 0 and $CN(X'X)$ is extremely large. Small values of $\lambda_{min}$ and large values of $\lambda_{max}$ indicate the presence of collinearity. A smaller CN indicates more stability (better conditioning) in the least squares estimates.
This article uses the CN to determine the level of numerical stability of the model. Likewise, it is proposed to define the value of $c_i$ based on the CN of the information matrix. That is, the value of $c_i$ which results in the lowest value of the CN is the value that will be used for the transformation to pseudo components of the form of Equation (9).
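Equation (10) is a direct eigenvalue computation. The sketch below uses synthetic matrices to show both extremes: orthonormal columns give the ideal CN of 1, while near-duplicate columns inflate it sharply.

```python
import numpy as np

def condition_number(X):
    """Eq. (10): square root of lambda_max / lambda_min of X'X."""
    eig = np.linalg.eigvalsh(X.T @ X)  # eigenvalues in ascending order
    return np.sqrt(eig[-1] / eig[0])

# Orthonormal columns: X'X = I, so CN = 1 (perfect conditioning).
Q = np.eye(4)[:, :3]
cn_good = condition_number(Q)

# Two nearly identical columns: X'X is almost singular, so CN explodes.
X_bad = np.column_stack([np.ones(4), np.ones(4) + 1e-4 * np.arange(4)])
cn_bad = condition_number(X_bad)
```

For badly conditioned matrices, `np.linalg.cond(X)` (the 2-norm condition number of $X$ itself, computed via the SVD) returns the same quantity more robustly, since it avoids squaring the matrix.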
When there is no constant column in the $X$ matrix, the adjusted total sum of squares in the denominator of the multiple correlation coefficient ($R_j^2$) must be replaced [3]. We define $x_j$ as the $j$-th column of $X$ and $X_j$ as the matrix that results when column $x_j$ is deleted from $X$. Then, $R_j^2$ is the multiple correlation coefficient obtained by regressing $x_j$ on $X_j$. When the first column of $X$ is a constant column, $R_j^2$ is usually calculated, for $j = 2, \dots, p$, as
$$R_j^2 = \frac{x_j' X_j (X_j' X_j)^{-1} X_j' x_j - x_j' \mathbf{1}\mathbf{1}' x_j / n}{x_j' x_j - x_j' \mathbf{1}\mathbf{1}' x_j / n} \qquad (11)$$
When there is no constant column in the $X$ matrix, the unadjusted multiple correlation coefficient can be obtained by
$$R_j^2 = \frac{x_j' X_j (X_j' X_j)^{-1} X_j' x_j}{x_j' x_j}, \quad \text{for } j = 1, \dots, p \qquad (12)$$
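Equation (12) is a regression of one column on the others without mean correction. The sketch below uses synthetic data with a deliberately near-collinear third column, for which $R_j^2$ should be close to 1.

```python
import numpy as np

def unadjusted_r2(X, j):
    """Eq. (12): regress column x_j on the remaining columns X_j and return
    x_j' X_j (X_j'X_j)^-1 X_j' x_j / (x_j' x_j), with no mean correction."""
    xj = X[:, j]
    Xj = np.delete(X, j, axis=1)
    # Projection of x_j onto the column space of X_j.
    fitted = Xj @ np.linalg.solve(Xj.T @ Xj, Xj.T @ xj)
    return (xj @ fitted) / (xj @ xj)

# Synthetic example: the third column lies almost exactly in the span of the
# first two, so its unadjusted R^2 is nearly 1 (strong collinearity).
rng = np.random.default_rng(0)
A = rng.random((10, 2))
x3 = A @ np.array([0.5, 0.5]) + 1e-6 * rng.standard_normal(10)
X = np.column_stack([A, x3])
r2 = unadjusted_r2(X, 2)
```

Values of $R_j^2$ near 1 flag the same ill-conditioning that a large CN detects, from the complementary per-column viewpoint.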

2.4. Alternative Problem Formulations and Regularization Methods

While model reformulation, such as the inclusion of inverse terms or the Slack Variable model, can help mitigate collinearity and improve stability, there are additional approaches that can further enhance the conditioning of the Fisher information matrix and ensure more reliable results. One of the primary concerns with mixture models, especially in the presence of ill-conditioning, is the direct inversion of the matrix $X'X$, which may fail in the case of near-linear dependencies between the predictor variables. In such cases, reformulating the problem can be highly beneficial. For instance, instead of solving the linear equations directly, regularization methods can be introduced to modify the objective function and reduce the influence of poorly conditioned parameters [24].
As a first step, ridge regression can be applied to the least squares problem. This method involves adding a penalty term proportional to the square of the regression coefficients to the least squares objective function. The regularization parameter (λ) effectively shrinks the coefficients, which helps stabilize the inversion of the $X'X$ matrix, improving the conditioning of the Fisher information matrix [25]. By incorporating ridge regression into the mixture model estimation, the overall numerical stability of the model is enhanced, particularly in the presence of collinearity among the mixture components. Alternatively, Lasso regression, which uses L1 regularization, can also help in reducing collinearity issues. This method introduces a sparsity constraint, where some coefficients are driven to zero, thereby effectively reducing the number of predictors and mitigating the risk of ill-conditioning. In the context of mixture models, where the components may be highly correlated, Lasso can help identify the most influential predictors, ensuring better stability and interpretability [26].
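A minimal ridge sketch, assuming NumPy and synthetic data: the closed form $\hat{\beta}_\lambda = (X'X + \lambda I)^{-1}X'y$ shows directly why the penalty improves conditioning, since every eigenvalue of the regularized matrix is bounded below by $\lambda$.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge estimate (X'X + lam*I)^-1 X'y; lam > 0 shrinks the coefficients
    and bounds lambda_min of the regularized information matrix below by lam."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Synthetic data: shrinkage check against the unpenalized (OLS) solution.
rng = np.random.default_rng(1)
X = rng.random((20, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(20)
b_ols = ridge_fit(X, y, 0.0)     # lam = 0 reduces to ordinary least squares
b_ridge = ridge_fit(X, y, 10.0)  # heavy regularization shrinks the estimate
```

The norm of the ridge estimate is non-increasing in λ, so the regularized coefficients are always no larger than the OLS ones; in practice λ is chosen by cross-validation.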
In addition to ridge and Lasso regression, robust regression techniques, such as Huber regression and M-estimators, can also be employed to enhance the numerical stability of mixture models. These methods are designed to handle outliers and deviations from the assumptions of homoscedasticity, which can improve the overall conditioning of the Fisher matrix by reducing the influence of extreme values. For mixture experiments where the response is sensitive to outliers or noise, robust regression techniques are highly recommended [27]. Moreover, quantile regression could be considered in cases where the central tendency of the data is not fully representative of the underlying process. Unlike the ordinary least squares method, which focuses on the mean of the distribution, quantile regression estimates conditional quantiles, offering a more flexible and robust approach that can improve the robustness and conditioning of the estimated parameters [28].
The conditioning of the Fisher information matrix can also be influenced by the choice of optimality criteria used to fit the model. For example, instead of minimizing the sum of squared residuals, more robust or alternative loss functions can be considered. Huber loss, for example, is less sensitive to outliers and provides a more stable estimation when the response is prone to extreme values or non-normal errors. By minimizing the Huber loss, the impact of outliers is reduced, which in turn helps in improving the conditioning of the Fisher matrix. Additionally, alternative design criteria, such as D-optimality or A-optimality, can be applied when selecting mixture designs. These criteria aim to maximize the information provided by the experimental design, improving the conditioning of the Fisher information matrix by ensuring that the mixture components are well spread and sufficiently informative. By optimizing the experimental design based on these criteria, it is possible to achieve better numerical stability and more reliable parameter estimates in the presence of collinearity [29].

3. Results and Discussion

In this section, we illustrate how to define the value of $c_i$ based on the CN. For this, we use an example described in [1] regarding the octane rating of different gasoline blends.

Example of Octane Number of Gasoline

In a study, the octane numbers of nine blends consisting of an olefin ($x_1$), an aromatic ($x_2$), and a saturate ($x_3$), each at 1.5 mL of lead per gallon, are used as an example. The octane numbers and compositions of the blends, in their original units and coded as pseudo components, are shown in Table 1.
As can be noted in Table 1, this experiment presents lower limits. In such cases, the literature recommends using pseudo components of the form shown in Equation (7). However, point 2 in Table 1 shows that the lower limit of $x_2$ is equal to zero. This would make it impossible to include an inverse term of the form $x_i^{-1}$, which motivates the transformation to pseudo components shown in Equation (9). The solution proposed by [22] consists of selecting a lower limit for $x_i$ slightly greater than zero, for example, 0.02. This value corresponds to $c_i$.
Draper and John (1977) [22] recommend $c_i = 0.02$. The transformation to pseudo components is carried out as $x_i' = \frac{(x_i + c_i) - L_i}{1 - \sum_{j=1}^{q} (L_j - c_j)}$.
Substituting $c_i = 0.02$ and the values of $L_i$, we have $x_i' = \frac{(x_i + 0.02) - L_i}{1.017}$.
The transformation to pseudo components, for all values of $x_i$, is shown in Table 1.
The following six models were obtained from a sequential fit of the Scheffé polynomial models proposed in [22], with units in L-pseudo components:
Model 1: $E(y) = 108.52x_1 + 109.46x_2 + 71.51x_3$
Model 2: $E(y) = 140.62x_1 + 112.26x_2 + 74.07x_3 - 87.79x_1x_2 - 71.46x_1x_3 - 1.04x_2x_3$
Model 3: $E(y) = 117.25x_1 + 99.14x_2 + 61.62x_3 + 0.25x_1^{-1}$
Model 4: $E(y) = 95.90x_1 + 111.28x_2 + 67.42x_3 + 0.34x_2^{-1}$
Model 5: $E(y) = 95.4x_1 + 115.6x_2 + 65.7x_3 + 0.4x_2^{-1} - 0.1x_3^{-1}$
Model 6: $E(y) = 109.9x_1 + 107.0x_2 + 52.2x_3 + 0.3x_1^{-1} + 0.3x_2^{-1} - 0.3x_3^{-1}$
Table 2 presents the values of the adjusted correlation coefficient $R_A^2$ and the mean square of the error $S^2$. Taking the values of $R_A^2$ and $S^2$ as references, the best fit is obtained with model 6, with $R_A^2 = 0.88$ and $S^2 = 14.8$. However, it can be seen in the last column of Table 2 that model 4 offers a greater degree of numerical stability, since it has a CN equal to 95, while model 6 presents a CN equal to 298, more than three times that of model 4. The single-inverse-term model, model 4, also appears to be better fitted than the Scheffé second-degree polynomial, model 2. Thus, with this set of data, we have an example where a second-degree equation does not improve the fit of the surface obtained with the first-degree model, but the addition of inverse terms to the first-degree model does improve the fit.
Model 6 is selected, given that it presents the best fit based on the values of $R_A^2$ and $S^2$. Next, different values of $c_i$ are assigned in Equation (9). The results of this exercise, together with the quality of fit and numerical stability measures of the model, are presented in Table 3.
Table 2 shows that model 6 presents a CN value of 298, which suggests a certain degree of collinearity and, consequently, poor numerical stability of the model. This could result in an inflation of the variance of the model predictions. Likewise, Table 3 shows that as the value of $c_i$ increases, the CN decreases. In the case of $c_i = 0.07$, there is a satisfactory quality of fit, with $R_A^2 = 0.80$ and $S^2 = 25$, while the CN decreases from 298 to 201. This represents an improvement in the numerical stability of the model, as well as a more formal criterion for determining the value of $c_i$.
Figure 1 shows graphically how both the quality of fit and the numerical stability of the model change with different values of $c_i$. As can be seen in Figure 1, as the value of $c_i$ increases, the numerical stability of the model (with respect to the value of the CN) improves until reaching a turning point, after which it begins to worsen. In addition, regarding the quality of fit, higher values of $c_i$ yield worse values of $R_A^2$ and $S^2$.
Now suppose that model 4 is selected, since it is the model that presents the greatest numerical stability based on the value of the CN. Assigning different values of $c_i$ in Equation (9), the quality of fit and numerical stability measures of the model are obtained, as shown in Table 4.
Model 4 presents a CN equal to 95 (see Table 2), which is much smaller than the CN of model 6 (CN = 298). In Table 4, once again, it can be seen that the value of the CN changes with different values of $c_i$. For $c_i = 0.03$, there is an adequate quality of fit and a decrease in the value of the CN (from 95 to 76). This again represents an improvement in the numerical stability of the model, and a more formal criterion for determining the value of $c_i$.
Figure 2 shows graphically how both the quality of fit and the numerical stability of the model change with different values of $c_i$. Figure 2 confirms that the numerical stability of the model changes with different values of $c_i$; this behavior can be exploited to decrease the value of the CN and, consequently, improve the numerical stability of the model.
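The selection procedure described above, scanning candidate values of $c_i$ and keeping the one that minimizes the CN of the information matrix, can be sketched as follows. The lower bounds and design points are hypothetical stand-ins for the octane data (not the actual Table 1 values), and a single common $c$ is used for all components, following the $c_1 = \cdots = c_q = c$ convention of [22].

```python
import numpy as np

def select_c_by_cn(X, L, c_grid):
    """Scan a grid of candidate c values (common to all q components), apply the
    Eq. (9) pseudo component transform, append the x_i^-1 columns of the
    inverse-term model, and keep the c that minimizes the condition number."""
    q = X.shape[1]
    best_c, best_cn = None, np.inf
    for c in c_grid:
        cv = np.full(q, c)
        Z = (X + cv - L) / (1.0 - np.sum(L - cv))  # Eq. (9) transform
        M = np.hstack([Z, 1.0 / Z])                # linear + inverse terms
        # np.linalg.cond(M) = sqrt(lambda_max / lambda_min) of M'M, i.e. Eq. (10).
        cn = np.linalg.cond(M)
        if cn < best_cn:
            best_c, best_cn = c, cn
    return best_c, best_cn

# Hypothetical design: x2 has a lower bound of zero, as in the octane example.
L = np.array([0.10, 0.00, 0.20])
X = np.array([[0.50, 0.00, 0.50],
              [0.10, 0.40, 0.50],
              [0.30, 0.30, 0.40],
              [0.40, 0.10, 0.50],
              [0.20, 0.20, 0.60],
              [0.10, 0.10, 0.80],
              [0.60, 0.20, 0.20],
              [0.15, 0.35, 0.50]])
c_star, cn_star = select_c_by_cn(X, L, np.arange(0.02, 0.11, 0.01))
```

The quality-of-fit trade-off discussed above is not encoded here; in practice one would report $R_A^2$ and $S^2$ alongside the CN for each candidate $c$ and pick a value, as in Tables 3 and 4, that balances both.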

4. Conclusions

In this article, it was found that the conditioning of the information matrix, which reflects the numerical stability of the model fitted by least squares, changes with different values of $c_i$ in the transformation to lower limit pseudo components. An information matrix with poor conditioning can result in highly correlated estimators with high standard errors. Likewise, the estimated model becomes highly dependent on the precise location of the experimental points, due to the inflation of the variance of the estimators. This reflects the importance of taking the numerical stability of the model into consideration when selecting the model to be used. For the data used in this article, the first-order inverse model is a useful model to be considered; it also contains fewer terms.
We can conclude that the numerical stability of the model can be improved by defining the value of $c_i$ based on the CN. It is important to highlight that this determination offers a mathematical criterion for practitioners who wish to include an inverse term in the regression model. Based on the results derived in this article, we recommend its use in the mixture experiment setting, both in design search algorithms and in model fitting.
There are several potential extensions of this work that could lead to further advancements in statistical analysis, particularly in the context of mixture experiments and regression models. Building on the findings related to the correlation of estimators and variance inflation, extensions of this work could integrate regularization techniques, such as ridge regression or Lasso, to further control for multicollinearity and stabilize parameter estimates. This could be particularly useful in complex mixture systems with many variables or when data are scarce. Investigating the robustness of the inverse models and their numerical stability in the presence of noisy or incomplete data could be another useful extension. Since experimental data are often imperfect, evaluating how well these models perform under different levels of noise or outliers could refine their applicability in real-world scenarios.

Author Contributions

Conceptualization, J.C.-S. and S.A.-R.; methodology, E.A.R.-S.; software, J.C.-S.; validation, I.M.-A. and R.Z.B.-L.; formal analysis, A.H.N.; investigation, J.C.-S.; resources, J.C.-S.; data curation, R.Z.B.-L.; writing—original draft preparation, S.A.-R.; writing—review and editing, S.A.-R.; visualization, R.Z.B.-L.; supervision, J.C.-S.; project administration, A.H.N.; funding acquisition, J.C.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The APC was funded by Universidad de las Americas Puebla UDLAP.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

The authors thank CONACYT, Universidad de las Américas Puebla UDLAP, and Tecnologico de Monterrey for the support provided in carrying out this research.

Conflicts of Interest

The authors declare no conflict of interest.

List of Abbreviations

$\beta_i$, $\beta_{ij}$: Unknown parameters to be estimated using least squares.
$X'X$: Information matrix.
$\epsilon$: Residual vector.
$N$: Normal distribution.
$\sigma^2$: Population variance.
$x_i$: Mixture component.
$c_i$: Small positive quantity added to each mixture component.
$L_i$: Lower limit used in the pseudo component transformation.
$\lambda_i$: Eigenvalues of the information matrix.
CN: Condition number.
$R_j^2$: Multiple correlation coefficient.
$Y$: Response variable.
$S^2$: Mean square error.

References

  1. Cornell, J.A. Experiments with Mixtures; Wiley: New York, NY, USA, 1990. [Google Scholar]
  2. Scheffé, H. Experiments with mixtures. J. R. Stat. Soc. 1958, 20, 344–360. [Google Scholar] [CrossRef]
  3. Prescott, P.; Dean, A.M.; Draper, N.R.; Lewis, S.M. Ill-conditioning and quadratic model specification. Technometrics 2002, 44, 260–268. [Google Scholar] [CrossRef]
  4. Cruz-Salgado, J.; Alonso-Romero, S.; Zitzumbo-Guzmán, R.; Domínguez-Domínguez, J. Optimization of the tensile and flexural strength of a wood-PET composite. Ing. Investig. Tecnol. 2015, 16, 105–111. [Google Scholar] [CrossRef]
  5. John, R.C.S. Experiments with mixtures, Ill-conditioning and ridge regression. J. Qual. Technol. 1984, 16, 81–96. [Google Scholar] [CrossRef]
  6. Cain, M.; Price, M.L.R. Optimal mixture choice. Appl. Stat. 1986, 35, 1–7. [Google Scholar] [CrossRef]
  7. Cornell, J.A. Fitting a slack-variable model to mixture data: Some questions raised. J. Qual. Technol. 2000, 32, 133–147. [Google Scholar] [CrossRef]
  8. Javier, C.-S. Selecting the slack variable in mixture experiment. Ing. Investig. Tecnol. 2015, 16, 613–623. [Google Scholar] [CrossRef]
  9. Cruz-Salgado, J. Comparing the intercept mixture model with the slack-variable mixture model. Ing. Investig. Tecnol. 2016, 17, 383–393. [Google Scholar] [CrossRef]
  10. Cruz-Salgado, J.; Alonso-Romero, S.; Estrada-Monje, A. Mechanical properties optimization of a PET/wood composite by mixture experiments. Rev. Mex. Ing. Química 2016, 15, 643–654. [Google Scholar] [CrossRef]
  11. Khuri, A.I. Slack-variable models versus Scheffé’s mixture models. J. Appl. Stat. 2005, 32, 887–908. [Google Scholar] [CrossRef]
  12. Kang, L.; Salgado, J.C.; Brenneman, W.A. Comparing the slack-variable mixture model with other alternatives. Technometrics 2016, 58, 255–268. [Google Scholar] [CrossRef]
  13. Piepel, G.F.; Cornell, J.A. Mixture experiment approaches: Examples, discussion, and recommendations. J. Qual. Technol. 1994, 26, 177–196. [Google Scholar] [CrossRef]
  14. Piepel, G.L.; Landmesser, S.M. Mixture experimental alternatives to the slack variable approach. Qual. Eng. 2009, 21, 262–276. [Google Scholar] [CrossRef]
  15. Snee, R.D. Experimental designs for quadratic models in constrained mixture spaces. Technometrics 1975, 17, 149. [Google Scholar] [CrossRef]
  16. Snee, R.D.; Rayner, A.A. Assessing the accuracy of mixture model regression calculations. J. Qual. Technol. 1982, 14, 67–79. [Google Scholar] [CrossRef]
  17. Cruz-Salgado, J.; Alonso-Romero, S.; Augusto-Ruelas, E.; Bautista-López, R.Z.; Alvarez-Rodriguez, S. Slack-variable model in mixture experimental design applied to wood plastic composite. J. King Saud Univ. Eng. Sci. 2023, 35, 110–115. [Google Scholar] [CrossRef]
  18. Piepel, G.F.; Hoffmann, D.C.; Cooley, S.K. Slack-variable models versus component-proportion models for mixture experiments: Literature review, evaluations, and recommendations. Qual. Eng. 2020, 33, 221–239. [Google Scholar] [CrossRef]
  19. Snee, R.D. Design and analysis of mixture experiments. J. Qual. Technol. 2018, 3, 159–169. [Google Scholar] [CrossRef]
  20. Duarte, B.P.; Atkinson, A.C.; Granjo, J.F.; Oliveira, N.M. Optimal design of mixture experiments for general blending models. Chemom. Intell. Lab. Syst. 2021, 217, 104400. [Google Scholar] [CrossRef]
  21. Brown, L.; Donev, A.N.; Bissett, A.C. General blending models for data from mixture experiments. Technometrics 2015, 57, 449–456. [Google Scholar] [CrossRef]
  22. Draper, N.R.; John, R.C.S. A mixtures model with inverse terms. Technometrics 1977, 19, 37–46. [Google Scholar] [CrossRef]
  23. Cornell, J.A.; Gorman, J. Two new mixture models: Living with collinearity but removing its influence. J. Qual. Technol. 2003, 35, 78–88. [Google Scholar] [CrossRef]
  24. Stewart, G.W. Introduction to Matrix Computations; Academic Press: New York, NY, USA, 1980. [Google Scholar]
  25. Tikhonov, A.N. Regularization of ill-posed problems. Sov. Math. Dokl. 1963, 4, 1624–1627. [Google Scholar]
  26. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [Google Scholar] [CrossRef]
  27. Huber, P.J. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101. [Google Scholar] [CrossRef]
  28. Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50. [Google Scholar]
  29. Wheeler, R.E.; Atkinson, A.C.; Donev, A.N. Optimum Experimental Designs; Oxford University Press: Oxford, UK, 1992. [Google Scholar]
Figure 1. Quality of fit and numerical stability, with respect to the value of c i , for model 6.
Figure 2. Quality of fit and numerical stability, with respect to the value of c i , for model 4.
Table 1. Octane numbers in original units and in pseudo components.

Run | Original Units (x1, x2, x3) | Pseudo Components (x1, x2, x3) | Octanes (y)
 1  | 0.010, 0.870, 0.120 | 0.023, 0.875, 0.102 | 111.5
 2  | 0.541, 0.000, 0.459 | 0.545, 0.020, 0.435 | 101.3
 3  | 0.427, 0.061, 0.512 | 0.433, 0.080, 0.487 | 80.6
 4  | 0.022, 0.464, 0.514 | 0.034, 0.476, 0.490 | 91.0
 5  | 0.007, 0.957, 0.036 | 0.020, 0.961, 0.019 | 107.0
 6  | 0.414, 0.278, 0.308 | 0.420, 0.293, 0.287 | 97.0
 7  | 0.648, 0.030, 0.322 | 0.650, 0.049, 0.301 | 98.6
 8  | 0.162, 0.514, 0.324 | 0.172, 0.525, 0.303 | 92.2
 9  | 0.008, 0.068, 0.924 | 0.021, 0.087, 0.892 | 77.8
 10 | 0.010, 0.870, 0.120 | 0.023, 0.875, 0.102 | 111.5
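The pseudo-component coding in Table 1 follows the standard lower-bound transform x′_i = (x_i − L_i)/(1 − ΣL_j). The actual bounds L_i used in the study are not stated in this excerpt, so the blend and bounds in the sketch below are illustrative only.

```python
def to_pseudo(x, lower):
    """Lower-bound pseudo-component coding: x'_i = (x_i - L_i) / (1 - sum(L))."""
    total_l = sum(lower)
    if total_l >= 1.0:
        raise ValueError("lower bounds must sum to less than 1")
    return [(xi - li) / (1.0 - total_l) for xi, li in zip(x, lower)]

# Hypothetical three-component blend with illustrative lower bounds.
blend = [0.40, 0.35, 0.25]
bounds = [0.10, 0.10, 0.05]
pseudo = to_pseudo(blend, bounds)
print([round(p, 3) for p in pseudo])  # → [0.4, 0.333, 0.267]
```

Like the original proportions, the pseudo components sum to one, but they fill the whole simplex, which improves the conditioning of the information matrix for constrained designs.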
Table 2. Measurements of the quality of fit and numerical stability of the models.

Model (c_i = 0.02) | R_A² | S²   | λ_min | λ_max | CN  | p-value
 1                 | 0.65 | 42.5 | 0.50  | 3     | 2   | 0.000
 2                 | 0.52 | 59.2 | 0.004 | 3     | 28  | 0.000
 3                 | 0.69 | 38.4 | 0.20  | 7783  | 212 | 0.030
 4                 | 0.79 | 25.4 | 0.40  | 3314  | 95  | 0.011
 5                 | 0.78 | 27.1 | 0.40  | 2731  | 78  | 0.032
 6                 | 0.88 | 14.8 | 0.10  | 9458  | 298 | 0.031
Table 3. Measures of the quality of fit and numerical stability of model 6.

Model 6: y(x) = 109.9 x1 + 107.0 x2 + 52.2 x3 + 0.3 x1⁻¹ + 0.3 x2⁻¹ − 0.3 x3⁻¹

c_i  | R_A² | S²   | λ_min | λ_max | CN  | p-value
0.03 | 0.87 | 16.6 | 0.10  | 1615  | 248 | 0.036
0.04 | 0.85 | 18.9 | 0.10  | 2797  | 224 | 0.044
0.05 | 0.83 | 21   | 0.10  | 1911  | 212 | 0.051
0.06 | 0.81 | 23   | 0.10  | 1407  | 204 | 0.058
0.07 | 0.80 | 25   | 0.00  | 1092  | 201 | 0.065
0.08 | 0.79 | 26   | 0.04  | 879   | 199 | 0.072
Table 4. Measures of the quality of fit and numerical stability of model 4.

Model 4: y(x) = 95.90 x1 + 111.28 x2 + 67.42 x3 + 0.34 x2⁻¹

c_i  | R_A² | S²   | λ_min | λ_max | CN | p-value
0.03 | 0.79 | 25.7 | 0.30  | 1792  | 76 | 0.012
0.04 | 0.78 | 26.6 | 0.30  | 1199  | 68 | 0.013
0.05 | 0.77 | 27.7 | 0.20  | 893   | 63 | 0.014
0.06 | 0.76 | 28.9 | 0.19  | 710   | 60 | 0.015
0.07 | 0.76 | 29.9 | 0.16  | 589   | 59 | 0.017
0.08 | 0.75 | 30.9 | 0.14  | 504   | 58 | 0.018
