Article

A New Extension of the Kumaraswamy Exponential Model with Modeling of Food Chain Data

by Eman A. Eldessouky 1,*, Osama H. Mahmoud Hassan 2, Mohammed Elgarhy 3, Eid A. A. Hassan 4, Ibrahim Elbatal 5 and Ehab M. Almetwally 6

1 Department of Quantitative Methods, Applied College, King Faisal University, Al-Ahsa 31982, Saudi Arabia
2 Department of Quantitative Methods, School of Business, King Faisal University, Al-Ahsa 31982, Saudi Arabia
3 Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni-Suef 62521, Egypt
4 Department of Accounting, Applied College, King Faisal University, Al-Ahsa 31982, Saudi Arabia
5 Department of Mathematics and Statistics, College of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
6 Faculty of Business Administration, Delta University for Science and Technology, Gamasa 11152, Egypt
* Author to whom correspondence should be addressed.
Axioms 2023, 12(4), 379; https://doi.org/10.3390/axioms12040379
Submission received: 18 March 2023 / Revised: 9 April 2023 / Accepted: 12 April 2023 / Published: 16 April 2023
(This article belongs to the Special Issue Mathematical Modelling in Sustainable Global Supply Chain Management)

Abstract

Statistical models are useful in explaining and forecasting real-world occurrences. Various extended distributions have been widely employed for modeling data in a variety of fields throughout the last few decades. In this article, we introduce a new extension of the Kumaraswamy exponential (KE) model called the Kavya–Manoharan KE (KMKE) distribution. Some statistical and computational features of the KMKE distribution, including the quantile (QUA) function, moments (MOms), incomplete MOms (INMOms), conditional MOms (COMOms) and MOm generating functions, are computed. Classical maximum likelihood and Bayesian estimation approaches are employed to estimate the parameters of the KMKE model. The simulation experiment examines the accuracy of the model parameters by employing Bayesian and maximum likelihood estimation methods. We utilize two real datasets related to food chain data in this work to demonstrate the importance and flexibility of the proposed model. The new proposed KMKE distribution is very flexible, more so than numerous well-known distributions.

1. Introduction

The so-called “food chain”, a university topic taught in Germany since 1987, provides the foundation for the relatively new academic field that describes the intricate connections between food and environment [1]. It is included in environmental science, agricultural science or “agroecology” in other European nations [2]. However, the “grey” literature initially appeared in the 1970s [3], and one of the earliest scientific articles from the 1980s [4] was inspired by the 1972 United Nations environment summit in Stockholm. Since then, different food products have undergone refinement and application of environmental evaluation methods in order to assist producers and companies in improving food production from an environmental standpoint. Recently, many papers have discussed modeling of food chain data, such as [5,6,7,8,9,10].
The Kumaraswamy (K) model was first known as the double-bounded model. It was first mentioned by [11]. The cumulative distribution function (cdf) of this model has a closed form. The K-G family of distributions was presented by [12] as a novel approach for developing a new continuous generating family of statistical models utilizing the K model. The K-G family can be constructed from any continuous baseline cdf H(z), and its cdf G(z) is given by
G(z) = 1 - \left[1 - H(z)^{\beta}\right]^{\gamma}, \quad z \in \mathbb{R}, \ \beta, \gamma > 0,
where H(z; δ) denotes the cdf of the baseline model and β and γ are two shape parameters.
The K-exponential (KE) model was introduced in [12] by taking the cdf of the exponential model, H(z; α) = 1 − exp(−αz), as the baseline. The probability density function (pdf) and cdf of the KE model are given by the equations below:
g(z; \alpha, \beta, \gamma) = \alpha\beta\gamma\, e^{-\alpha z}\left(1-e^{-\alpha z}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma-1}, \quad z>0, \ \alpha, \beta, \gamma > 0,
and
G(z; \alpha, \beta, \gamma) = 1 - \left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}, \quad z>0, \ \alpha, \beta, \gamma > 0.
Many authors have studied the KE model: Ref. [13] proposed a new generalization of the KE model called the exponentiated KE model, studied its statistical properties and used medical data to show the application of the KE model. Ref. [14] introduced a bivariate extension of the KE model and applied it to the amount of overtime performed by 20 frigorific personnel before and after the introduction of an incentive campaign. Ref. [15] investigated maximum likelihood and Bayesian estimates of the KE parameters under a progressive type-II censored sample. Ref. [16] studied the beta KE model and some of its statistical properties, such as moments, quantile function, median, mode, skewness, kurtosis, mean deviation and order statistics; they also used medical data to show the importance of their model. Ref. [17] discussed the truncated bivariate KE model, computed some statistical features, and used real data related to the lifetimes of forty animals to show the flexibility of their model. Ref. [18] introduced the sine K-G family of distributions and discussed the sine KE model as a special model from their generated family of distributions; they also applied the sine KE model to two real datasets related to physics and engineering to show the importance of the sine K-G family of distributions. Ref. [19] discussed the K extended exponential (KEE) distribution as a generalization of the KE distribution and studied some important mathematical properties of the KEE distribution; in addition, they used two real datasets related to engineering and physics to illustrate the importance of the KEE distribution. Ref. [20] introduced the Topp–Leone K-G family of distributions and discussed the Topp–Leone KE model as a special model from their generated family of distributions; they also applied the Topp–Leone KE model to two real datasets related to medicine to show the importance of the Topp–Leone K-G family of distributions. Ref. [21] introduced the gamma KE model as a sub-model of the gamma Kumaraswamy-G family of distributions; they also demonstrated the flexibility of the family in different fields, such as engineering, survival and lifetime data, hydrology, and economics, by using real data.
Various strategies for adding a parameter to distributions have been presented and explored in recent years. These expanded distributions are one way to solve the problem of modeling data and to provide greater flexibility in a variety of applications, including food, agriculture, COVID-19, engineering, economics, biomedicine, biology, physics, environmental sciences, and many others. Several famous families are the odd Dagum-G [22], odd generalized exponential-G [23], sine Topp–Leone-G [24], generalized odd log-logistic-G [25], flexible BurrX-G [26], truncated Cauchy power Weibull-G [27], transmuted Gompertz-G [28], transmuted odd Fréchet-G [29], transmuted odd Lindley-G [30], odd Perks-G [31], a new power Topp–Leone-G by [32], extended odd Fréchet-G [33], an extension of the Burr XII by [34], Marshall–Olkin odd Burr III-G [35], exponentiated M-G [36], exponential TX family of distributions [37], truncated inverted Kumaraswamy generated-G by [38], Marshall–Olkin alpha power family of distributions [39] and the unit exponentiated half logistic power series class of distributions introduced by [40]. Some recent classes of distributions were discussed in [41,42,43,44,45,46]. Refs. [47,48] studied the Weibull model under a repetitive group sampling plan based on truncated tests and progressively censored group sampling, among others.
Kavya and Manoharan [49] recently presented a novel transformation, the KM transformation class of statistical models. Its cdf and pdf are provided in the next two equations:
F(z) = \frac{e}{e-1}\left[1 - e^{-G(z)}\right], \quad z \in \mathbb{R},
and
f(z) = \frac{e}{e-1}\, g(z)\, e^{-G(z)}, \quad z \in \mathbb{R}.
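As a quick illustration of the transformation in the last two equations, the following R sketch applies the KM construction to an arbitrary baseline cdf and pdf; the function names km_cdf and km_pdf are ours, and the exponential baseline is used only as an example.

# KM transformation of a baseline cdf G and pdf g (illustrative sketch).
km_cdf <- function(z, G, ...) exp(1) / (exp(1) - 1) * (1 - exp(-G(z, ...)))
km_pdf <- function(z, g, G, ...) exp(1) / (exp(1) - 1) * g(z, ...) * exp(-G(z, ...))
# Example with a unit-rate exponential baseline: the transformed pdf still integrates to one.
km_cdf(1.5, pexp, rate = 1)
integrate(km_pdf, 0, Inf, g = dexp, G = pexp, rate = 1)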
Recently, [9] introduced the sine exponentiated Weibull exponential (SEWE) model to fit food data in the United Kingdom (UK) and the SEWE model was found to have an excellent fit for these data. However, in this article, we hope that the suggested model gives a better fit to the food data used by [9]. In addition, Figure 1 offers a comprehensive description of the work.
The following considerations provide sufficient motivation to investigate the suggested model:
  • The new KMKE distribution gives more flexibility than the SEWE model and other well-known statistical models for food chain data as we prove in Section 7.
  • The new recommended distribution is quite versatile and comprises three sub-models.
  • The shapes of the pdf for the KMKE model can be decreasing, right-skewed and uni-modal. However, the hazard rate function (hrf) for the KMKE model can be decreasing, increasing or J-shaped.
  • Numerous statistical and computational characteristics of the recently proposed model are investigated.
  • The parameters of the KMKE model are estimated utilizing maximum likelihood and Bayesian techniques.
The rest of this article is structured as follows: some relevant literature on extensions of the K model and their application to real data is discussed in Section 2. We present the novel proposed model, designated the KMKE model, and its sub-models in Section 3. Several statistical and computational features of the KMKE model, including the QUA function, MOms, INMOms, COMOms and MOm generating functions, are computed in Section 4. The parameters of the KMKE model are estimated utilizing maximum likelihood and Bayesian techniques in Section 5. In Section 6, the numerical simulations used to evaluate the efficiency of the various estimation approaches are described. In Section 7, we apply the KMKE model to two real datasets to demonstrate its usefulness and applicability. Eventually, in Section 8, some final thoughts are offered.

2. Relevant Literature

Statistical models are very useful for describing and predicting real-world events. Various extended distributions have been extensively used for data modeling in a wide range of areas throughout the last few decades. Many authors have used Equation (1) to generate new extensions of the K model and have applied these statistical models to different real datasets, such as engineering, physics, medicine, failure times, reliability, survival, income and COVID-19 data. Table 1 lists some relevant literature on extensions of the K model and their application to real data. We note that none of the previous authors who studied extensions of the K model used their models to fit food chain data. However, in this article, we generate a new extension of the K model and hope that it gives a good fit to the food chain data.

3. The Construction of the Kavya–Manoharan Kumaraswamy Exponential Model

In this section, we create the Kavya–Manoharan Kumaraswamy exponential (KMKE) model by inserting Formula (3) into Formula (4), and we obtain the cdf shown below
F(z; \alpha, \beta, \gamma) = \frac{e}{e-1}\left[1 - e^{-\left(1-\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}\right)}\right], \quad z>0, \ \alpha, \beta, \gamma > 0,
where β and γ are two shape parameters and α is a scale parameter. The pdf of the KMKE model can be obtained by inserting Equations (2) and (3) into (5) as
f(z; \alpha, \beta, \gamma) = \frac{\alpha\beta\gamma}{e-1}\, e^{-\alpha z}\left(1-e^{-\alpha z}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma-1} e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}}.
The reliability function, the hazard rate function (hrf), and the reversed and cumulative hrfs (see [86]) for the KMKE model are
S(z; \alpha, \beta, \gamma) = 1 - \frac{e}{e-1}\left[1 - e^{-\left(1-\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}\right)}\right] = \frac{e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}} - 1}{e-1},
h(z; \alpha, \beta, \gamma) = \frac{\alpha\beta\gamma\, e^{-\alpha z}\left(1-e^{-\alpha z}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma-1} e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}}}{e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}} - 1},
\tau(z; \alpha, \beta, \gamma) = \frac{\alpha\beta\gamma\, e^{-\alpha z}\left(1-e^{-\alpha z}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma-1} e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}}}{e - e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}}},
and
H(z; \alpha, \beta, \gamma) = -\ln\left\{1 - \frac{e}{e-1}\left[1 - e^{-\left(1-\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}\right)}\right]\right\}.
The KMKE model is very flexible and has three sub-models, see Table 2.
Figure 2 shows the plots of the pdf and hrf for the KMKE model in 2D. Furthermore, Figure 3 and Figure 4 show the plots of the pdf and hrf for the KMKE model in 3D.
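To make the above expressions concrete, the following R sketch implements the cdf in Equation (6) and the pdf in Equation (7); the function names pkmke and dkmke are ours and are used only for illustration.

pkmke <- function(z, alpha, beta, gamma) {
  G <- 1 - (1 - (1 - exp(-alpha * z))^beta)^gamma      # KE cdf, Equation (3)
  exp(1) / (exp(1) - 1) * (1 - exp(-G))
}
dkmke <- function(z, alpha, beta, gamma) {
  u <- 1 - exp(-alpha * z)
  alpha * beta * gamma / (exp(1) - 1) * exp(-alpha * z) * u^(beta - 1) *
    (1 - u^beta)^(gamma - 1) * exp((1 - u^beta)^gamma)
}
# Sanity check: the pdf should integrate to one for any admissible parameter values.
integrate(dkmke, 0, Inf, alpha = 0.5, beta = 1.5, gamma = 0.7)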

4. Statistical and Computational Features

In this section, we focus on the statistical and computational characteristics of the KMKE model, particularly the QUA function, MOms, INMOms, COMOms and MOm generating functions.

4.1. Quantile Function

The quantile function of the KMKE model is a useful tool for generating simulated samples, and it can be calculated by inverting Equation (6), where u ∼ Uniform(0, 1); then
u = \frac{e}{e-1}\left[1 - e^{-\left(1-\left[1-\left(1-e^{-\alpha Q(u)}\right)^{\beta}\right]^{\gamma}\right)}\right].
After some simplification, we can obtain the quantile function of the KMKE model as
Q(u) = -\frac{1}{\alpha}\ln\left(1 - \left\{1 - \left[1 + \ln\left(1 - u\left(1-e^{-1}\right)\right)\right]^{\frac{1}{\gamma}}\right\}^{\frac{1}{\beta}}\right).
The median of the KMKE model is obtained by setting u = 0.5 in Equation (8),
Q(0.5) = -\frac{1}{\alpha}\ln\left(1 - \left\{1 - \left[1 + \ln\left(1 - 0.5\left(1-e^{-1}\right)\right)\right]^{\frac{1}{\gamma}}\right\}^{\frac{1}{\beta}}\right).
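For completeness, a small R sketch of Equation (8) is given next; the names qkmke and rkmke are ours, and the parameter values are illustrative. It can be used to generate simulated samples by the inversion method.

qkmke <- function(u, alpha, beta, gamma) {
  inner <- (1 + log(1 - u * (1 - exp(-1))))^(1 / gamma)
  -log(1 - (1 - inner)^(1 / beta)) / alpha               # Equation (8)
}
rkmke <- function(n, alpha, beta, gamma) qkmke(runif(n), alpha, beta, gamma)
qkmke(0.5, alpha = 0.5, beta = 1.5, gamma = 0.7)         # median of the KMKE model
set.seed(2023); head(rkmke(5, 0.5, 1.5, 0.7))            # a few simulated values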

4.2. Moments

In this subsection, we derive the wth moment (MOm) (see [87]) of the KMKE model. The first four MOms are the most important for describing the shape and monotonicity of the distribution curve. Suppose Z is a random variable that follows KMKE(α, β, γ); then the wth MOm about zero of the KMKE model is
\mu_w^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\,\frac{\Gamma(w+1)}{\left[\alpha(k+1)\right]^{w+1}}.
The proof of Equation (9) is given in Appendix A. By putting w = 1, 2, 3 and 4 into Equation (9), we obtain the first four MOms
\mu_1^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{2}},
\mu_2^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{2\,\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{3}},
\mu_3^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{6\,\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{4}},
and
\mu_4^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{24\,\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{5}}.
As a consequence, the mean and variance of the KMKE model are calculated via
\mu = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{2}},
and
\operatorname{Var}(Z) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{2\,\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{3}} - \left(\sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{\pi_{i,j,k}}{\left[\alpha(k+1)\right]^{2}}\right)^{2}.
The moment-generating function (see [87]) of the KMKE model can be computed from the next equation
M_Z(t) = E\left(e^{tZ}\right) = \int_0^{\infty} e^{tz} f(z; \alpha, \beta, \gamma)\,dz.
After some simplification we obtain
M_Z(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\int_0^{\infty} e^{-\left[\alpha(k+1)-t\right]z}\,dz.
Then the moment-generating function of the KMKE model is
M_Z(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\frac{\pi_{i,j,k}}{\alpha(k+1)-t}.
The mth incomplete MOm of the KMKE model can be computed from the next equation
\eta_m(t) = \int_0^{t} z^m f(z; \alpha, \beta, \gamma)\,dz.
After some simplification we obtain
\eta_m(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\int_0^{t} z^m e^{-\alpha(k+1)z}\,dz.
Then the mth incomplete MOm (see [87]) of the KMKE model is
\eta_m(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\,\frac{\gamma\left(m+1,\,t\alpha(k+1)\right)}{\left[\alpha(k+1)\right]^{m+1}},
where \gamma(\cdot,\cdot) denotes the lower incomplete gamma function.
The mth conditional MOm (see [87]) of the KMKE model can be computed from the next equation
\tau_m(t) = \int_t^{\infty} z^m f(z; \alpha, \beta, \gamma)\,dz.
After some simplification we obtain
\tau_m(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\int_t^{\infty} z^m e^{-\alpha(k+1)z}\,dz.
Then the mth conditional MOm of the KMKE model is
\tau_m(t) = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\,\frac{\Gamma\left(m+1,\,t\alpha(k+1)\right)}{\left[\alpha(k+1)\right]^{m+1}},
where \Gamma(\cdot,\cdot) denotes the upper incomplete gamma function.
Figure 5 shows the mean, variance (var), skewness (SK), kurtosis (KU), coefficient of variation (CV) and index of dispersion (ID) (see [87]).
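Because the series in Equation (9) involves nested expansions, a direct numerical check is often convenient. The following R sketch (kmke_moment is our name, and it relies on the dkmke sketch given in Section 3) evaluates raw MOms by numerical integration and then computes the mean and variance.

kmke_moment <- function(w, alpha, beta, gamma) {
  integrate(function(z) z^w * dkmke(z, alpha, beta, gamma), 0, Inf)$value
}
m1 <- kmke_moment(1, 0.5, 1.5, 0.7)
m2 <- kmke_moment(2, 0.5, 1.5, 0.7)
c(mean = m1, variance = m2 - m1^2)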

5. Estimation Methods

The maximum likelihood and Bayesian methods are the most widely used estimation approaches. Based on Bayes' theorem, Bayesian statistics is a method for analyzing data and estimating parameters. The prior and data distributions, which are a special feature of Bayesian statistics, are specified for all observable and unobserved quantities in a statistical model. In this section, maximum likelihood estimation and Bayesian estimation are discussed for estimating the parameters of the KMKE model. Recently, several papers have discussed maximum likelihood and Bayesian estimation methods, such as [88,89].

5.1. Maximum Likelihood Estimation

In this section, we focus on how the maximum likelihood technique (see [87]) can be employed to estimate the parameters α, β and γ of the KMKE model. Suppose that z_1, …, z_n is a random sample of size n from the KMKE model (7). Then, the total log-likelihood function for Ω = (α, β, γ) is given below
\ln L = n\ln\alpha + n\ln\beta + n\ln\gamma - n\ln(e-1) - \alpha\sum_{i=1}^{n}z_i + (\beta-1)\sum_{i=1}^{n}\ln\left(1-e^{-\alpha z_i}\right) + (\gamma-1)\sum_{i=1}^{n}\ln\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right] + \sum_{i=1}^{n}\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right]^{\gamma}.
The first partial derivatives U_n(\Omega) = \left(\frac{\partial \ln L}{\partial\alpha}, \frac{\partial \ln L}{\partial\beta}, \frac{\partial \ln L}{\partial\gamma}\right)^{T} are provided via
\frac{\partial \ln L}{\partial\alpha} = \frac{n}{\alpha} - \sum_{i=1}^{n}z_i + (\beta-1)\sum_{i=1}^{n}\frac{z_i e^{-\alpha z_i}}{1-e^{-\alpha z_i}} - \beta(\gamma-1)\sum_{i=1}^{n}\frac{z_i e^{-\alpha z_i}\left(1-e^{-\alpha z_i}\right)^{\beta-1}}{1-\left(1-e^{-\alpha z_i}\right)^{\beta}} - \beta\gamma\sum_{i=1}^{n}z_i e^{-\alpha z_i}\left(1-e^{-\alpha z_i}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right]^{\gamma-1},
\frac{\partial \ln L}{\partial\beta} = \frac{n}{\beta} + \sum_{i=1}^{n}\ln\left(1-e^{-\alpha z_i}\right) - (\gamma-1)\sum_{i=1}^{n}\frac{\left(1-e^{-\alpha z_i}\right)^{\beta}\ln\left(1-e^{-\alpha z_i}\right)}{1-\left(1-e^{-\alpha z_i}\right)^{\beta}} - \gamma\sum_{i=1}^{n}\left(1-e^{-\alpha z_i}\right)^{\beta}\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right]^{\gamma-1}\ln\left(1-e^{-\alpha z_i}\right),
and
\frac{\partial \ln L}{\partial\gamma} = \frac{n}{\gamma} + \sum_{i=1}^{n}\ln\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right] + \sum_{i=1}^{n}\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right]^{\gamma}\ln\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right].
By setting the nonlinear system of equations ∂lnL/∂α = ∂lnL/∂β = ∂lnL/∂γ = 0 and solving these equations simultaneously, we can obtain the MLE Ω̂. Because an exact solution is not achievable, these equations can be solved numerically by employing iterative approaches and statistical software.
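As a concrete illustration of this numerical step, the following R sketch codes the log-likelihood above and maximizes it with base-R optim. The paper reports results obtained with the "maxLik" package, so this is only an assumed, simplified alternative; the starting values are illustrative, and rkmke is the sampling sketch from Section 4.1.

loglik_kmke <- function(par, z) {
  alpha <- par[1]; beta <- par[2]; gamma <- par[3]
  if (any(par <= 0)) return(-1e10)                # keep the search in the admissible region
  u <- 1 - exp(-alpha * z)
  n <- length(z)
  n * (log(alpha) + log(beta) + log(gamma) - log(exp(1) - 1)) - alpha * sum(z) +
    (beta - 1) * sum(log(u)) + (gamma - 1) * sum(log(1 - u^beta)) + sum((1 - u^beta)^gamma)
}
set.seed(1)
z_sim <- rkmke(200, 0.5, 1.5, 0.7)                # simulated data for illustration
fit <- optim(c(1, 1, 1), loglik_kmke, z = z_sim, control = list(fnscale = -1), hessian = TRUE)
fit$par                                           # approximate MLEs of (alpha, beta, gamma)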

5.2. Bayesian Estimation

The Bayesian approach is a well-known non-classical inference technique in statistics. It describes uncertainty about the distribution parameters using a joint prior distribution and some proposed symmetric and asymmetric loss functions. It is assumed that the three parameters, α, β and γ, are independent and follow gamma prior distributions:
C(\alpha, \beta, \gamma) \propto \alpha^{w_1-1}\,\beta^{w_2-1}\,\gamma^{w_3-1}\exp\left\{-\left(\ell_1\alpha + \ell_2\beta + \ell_3\gamma\right)\right\}, \quad \alpha, \beta, \gamma > 0; \ \ell_j, w_j > 0; \ j = 1, 2, 3.
The hyper-parameters will be elicited using the prior parameters ℓ_j and w_j; for more information, see [90]. The mean and variance of the maximum likelihood estimates of α, β and γ of the KMKE distribution will be equated to the mean and variance of the considered gamma priors, using the estimates α_j, β_j and γ_j, where j = 1, …, N and N is the number of samples available from the KMKE distribution. By equating the sample mean and variance of α_j, β_j and γ_j with the mean and variance of the gamma priors, we obtain
\frac{1}{N}\sum_{j=1}^{N}\alpha_j = \frac{w_1}{\ell_1}, \quad \text{and} \quad \frac{1}{N-1}\sum_{j=1}^{N}\left(\alpha_j - \frac{1}{N}\sum_{j=1}^{N}\alpha_j\right)^2 = \frac{w_1}{\ell_1^{\,2}},
\frac{1}{N}\sum_{j=1}^{N}\beta_j = \frac{w_2}{\ell_2}, \quad \text{and} \quad \frac{1}{N-1}\sum_{j=1}^{N}\left(\beta_j - \frac{1}{N}\sum_{j=1}^{N}\beta_j\right)^2 = \frac{w_2}{\ell_2^{\,2}},
and
\frac{1}{N}\sum_{j=1}^{N}\gamma_j = \frac{w_3}{\ell_3}, \quad \text{and} \quad \frac{1}{N-1}\sum_{j=1}^{N}\left(\gamma_j - \frac{1}{N}\sum_{j=1}^{N}\gamma_j\right)^2 = \frac{w_3}{\ell_3^{\,2}}.
The estimated hyper-parameters can now be stated as follows after solving the preceding two equations:
w_1 = \frac{\left(\frac{1}{N}\sum_{j=1}^{N}\alpha_j\right)^2}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\alpha_j - \frac{1}{N}\sum_{j=1}^{N}\alpha_j\right)^2}, \quad \text{and} \quad \ell_1 = \frac{\frac{1}{N}\sum_{j=1}^{N}\alpha_j}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\alpha_j - \frac{1}{N}\sum_{j=1}^{N}\alpha_j\right)^2},
w_2 = \frac{\left(\frac{1}{N}\sum_{j=1}^{N}\beta_j\right)^2}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\beta_j - \frac{1}{N}\sum_{j=1}^{N}\beta_j\right)^2}, \quad \text{and} \quad \ell_2 = \frac{\frac{1}{N}\sum_{j=1}^{N}\beta_j}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\beta_j - \frac{1}{N}\sum_{j=1}^{N}\beta_j\right)^2},
and
w_3 = \frac{\left(\frac{1}{N}\sum_{j=1}^{N}\gamma_j\right)^2}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\gamma_j - \frac{1}{N}\sum_{j=1}^{N}\gamma_j\right)^2}, \quad \text{and} \quad \ell_3 = \frac{\frac{1}{N}\sum_{j=1}^{N}\gamma_j}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\gamma_j - \frac{1}{N}\sum_{j=1}^{N}\gamma_j\right)^2}.
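The elicitation above amounts to matching the sample mean and variance of pilot estimates to the gamma prior mean w/ℓ and variance w/ℓ². A minimal R sketch follows; the function name elicit_gamma and the numeric pilot estimates are purely illustrative.

elicit_gamma <- function(est) {
  m <- mean(est); v <- var(est)        # var() uses the 1/(N - 1) divisor, as in the equations above
  c(w = m^2 / v, ell = m / v)          # shape and rate hyper-parameters of the gamma prior
}
alpha_hat <- c(0.48, 0.53, 0.46, 0.51, 0.50)    # hypothetical pilot estimates of alpha
elicit_gamma(alpha_hat)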
The likelihood function and the joint prior function Equation (14) can be used to express the joint posterior distribution. Consequently, Ω ’s joint posterior density function is
G(\Omega \mid \mathbf{z}) = C\,(e-1)^{-n}\,\alpha^{\,n+w_1-1}\,\beta^{\,n+w_2-1}\,\gamma^{\,n+w_3-1}\, e^{-\left(\ell_1\alpha+\ell_2\beta+\ell_3\gamma\right)}\, e^{-\alpha\sum_{i=1}^{n}z_i}\, e^{\sum_{i=1}^{n}\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right]^{\gamma}} \prod_{i=1}^{n}\left(1-e^{-\alpha z_i}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z_i}\right)^{\beta}\right]^{\gamma-1}.
In actuality, the posterior density’s normalization constant C is often intractable, requiring an integral over the parameter space.
The squared-error loss function (SELF) is the symmetric loss function:
L_S(\tilde{\Omega}, \Omega) \propto \left(\tilde{\Omega} - \Omega\right)^2.
The posterior mean is then the Bayesian estimator of Ω under the SELF:
\tilde{\Omega}_S = E_{\Omega}\left(\Omega\right).
Two well-known asymmetric loss functions, the LINEX and entropy loss functions, are also covered.
Varian [91] introduced a useful asymmetric loss function, which has recently been used in several publications [92,93,94]. This function is the linear exponential (LINEX) loss function. Assuming that the minimal loss occurs at Ω̃ = Ω, the LINEX loss function can be expressed as follows:
L_L(\tilde{\Omega}, \Omega) \propto e^{c\left(\tilde{\Omega} - \Omega\right)} - c\left(\tilde{\Omega} - \Omega\right) - 1; \quad c \neq 0,
where c is the shape parameter and Ω̃ is any estimate of the parameter Ω. The shape of this loss function depends on the value of c. Under the LINEX loss function, the Bayes estimator of Ω is
\tilde{\Omega}_L = -\frac{1}{c}\ln\left[E_{\Omega}\left(e^{-c\Omega}\right)\right].
According to Calabria and Pulcini [95], the entropy loss function is another useful asymmetric loss function. The entropy loss function is of the form
L_E(\tilde{\Omega}, \Omega) \propto \left(\frac{\tilde{\Omega}}{\Omega}\right)^{b} - b\ln\left(\frac{\tilde{\Omega}}{\Omega}\right) - 1,
whose minimum is found at Ω ˜ = Ω . When the entropy loss function is used, the Bayes estimator of Ω is
\tilde{\Omega}_E = \left[E_{\Omega}\left(\Omega^{-b}\right)\right]^{-1/b}.
Since it is challenging to solve these integrals analytically, the MCMC method will be used. The most important sub-classes of MCMC algorithms are Gibbs sampling and the more general Metropolis-within-Gibbs samplers. This algorithm was first presented by Metropolis et al. [96]. As with acceptance–rejection sampling, at each iteration the Metropolis–Hastings (MH) algorithm draws a candidate value from a proposal distribution and then accepts or rejects it.
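A minimal random-walk Metropolis sketch for drawing from the joint posterior above is given next; it reuses the loglik_kmke and z_sim sketches from Section 5.1, and the gamma prior hyper-parameters, proposal standard deviations and chain length are illustrative choices rather than the settings used in the paper.

log_post <- function(par, z, w = c(1, 1, 1), ell = c(1, 1, 1)) {
  if (any(par <= 0)) return(-Inf)
  loglik_kmke(par, z) + sum((w - 1) * log(par) - ell * par)   # independent gamma priors
}
mh_kmke <- function(z, n_iter = 10000, start = c(0.5, 1, 1), sd_prop = c(0.05, 0.1, 0.1)) {
  draws <- matrix(NA, n_iter, 3, dimnames = list(NULL, c("alpha", "beta", "gamma")))
  cur <- start; cur_lp <- log_post(cur, z)
  for (t in 1:n_iter) {
    prop <- cur + rnorm(3, 0, sd_prop)                        # random-walk proposal
    prop_lp <- log_post(prop, z)
    if (log(runif(1)) < prop_lp - cur_lp) { cur <- prop; cur_lp <- prop_lp }
    draws[t, ] <- cur
  }
  draws
}
post <- mh_kmke(z_sim)[-(1:2000), ]               # drop a burn-in period
colMeans(post)                                    # SELF (posterior mean) estimates
-log(colMeans(exp(-0.5 * post))) / 0.5            # LINEX estimates with c = 0.5
1 / colMeans(1 / post)                            # entropy-loss estimates with b = 1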

6. Simulation

Monte Carlo simulations are used to compare the performance of the suggested estimators of the KMKE model parameters. In this section, the estimation of the KMKE parameters is discussed using Bayesian and likelihood estimation techniques, and the results are compared using a simulation study. In the Bayesian technique, symmetric and asymmetric loss functions are used. The LINEX and entropy loss functions (ELF) are used as asymmetric loss functions.

6.1. Simulation Study

We investigate several sample sizes with n = 40, 75 and 150 for different α , β and γ parameter selections. We take 5000 random samples from the KMKE distribution. For each estimate, we calculate the bias values, mean square error (MSE) and length of confidence interval (LCI). The LCI of MLE is an asymptotic CI which can be denoted as LACI. The LCI of the Bayesian technique is the credible CI which can be denoted as LCCI.
Bias, MSE and LCI are used to quantify the efficacy of the various estimators, with bias and MSE values close to zero indicating the most efficient techniques. The simulation results are obtained using the R programming language. The “maxLik” package computes the MLE using the Newton–Raphson approach. Additionally, the “CODA” package is used to perform the Bayesian estimation with various loss functions. This package evaluates the Markov chain Monte Carlo (MCMC) outputs and diagnoses lack of convergence. The estimated bias, MSE and LCI of the parameters of the KMKE distribution are displayed in Table 3, Table 4, Table 5 and Table 6.
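The following R sketch reproduces one cell of this design; it relies on the rkmke and loglik_kmke sketches above and on base-R optim rather than the "maxLik" and "CODA" packages used in the paper, so it is an illustration only.

sim_cell <- function(n, true = c(alpha = 0.5, beta = 0.4, gamma = 0.5), reps = 1000) {
  est <- t(replicate(reps, {
    z <- rkmke(n, true[1], true[2], true[3])
    optim(true, loglik_kmke, z = z, control = list(fnscale = -1))$par   # start at the true values for simplicity
  }))
  rbind(bias = colMeans(est) - true, MSE = colMeans(sweep(est, 2, true)^2))
}
sim_cell(40)     # the paper uses 5000 replications; fewer are used here to keep the sketch fast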

6.2. Final Thoughts on the Simulation Results

Figure 6, Figure 7 and Figure 8 show heatmaps of the MSE for the parameters of the KMKE distribution. The X-axis shows the MSE of the different estimation methods for each parameter α, β and γ, respectively (MLE1 is the MSE for α, MLE2 is the MSE for β and MLE3 is the MSE for γ), while the Y-axis shows the MSE for different cases and sample sizes; for example, C1n40 denotes the actual parameter values in Table 3 with α = 0.5, β = 0.4, γ = 0.5 and n = 40; C1n70 denotes the actual parameter values in Table 3 with α = 0.5, β = 0.4, γ = 0.5 and n = 70; and C2n70 denotes the actual parameter values in Table 3 with α = 0.5, β = 0.4, γ = 1.7 and n = 70.
From the simulation results in Table 3, Table 4, Table 5 and Table 6 and Figure 6, Figure 7 and Figure 8, all estimation techniques perform very well, with very small bias and MSE, and their mean estimates tend to be quite close to the parameters' actual values.
  • We observe that the Bayesian estimation is superior to the MLE in every situation.
  • We note that the Bayesian estimation with a positive-weight asymmetric loss function is superior to the Bayesian estimation with a negative-weight asymmetric loss function.
  • We note that the Bayesian estimation method with a positive-weight asymmetric loss function is better than the other estimation methods.
  • The Bayesian estimation with a symmetric loss function is superior to the Bayesian estimation with a negative-weight asymmetric loss function in some simulations.
  • The Bayesian credible (HPD) intervals are the shortest.

7. Modeling Food Data

Several methods for adding a parameter to distributions have been presented and debated in recent years. These expanded distributions provide flexibility for specific food data applications. In this application, the problem is finding the best and most efficient model fitting the food data. This section shows how the KMKE distribution outperforms traditional distributions such as the SEWE by [9], exponentiated generalized Weibull–Gompertz (EGWG) by [97], Kumaraswamy exponentiated Burr XII (KEBXII) by [98], Weibull–Lomax (WL) by [99], Marshall–Olkin alpha power Weibull (MOAPW) by [100], extended odd Weibull–Lomax (EOWL) by [101], modified Kies inverted Topp–Leone (MKITL) by [102], odd Weibull inverted Topp–Leone (OWITL) by [103] and extended Weibull (EW) [104].
The tables below report the MLEs and various goodness-of-fit statistics for all fitted models based on the two real datasets, including the Kolmogorov–Smirnov distance (KSD) with its p-value (PVKS), the Cramér–von Mises (CVM) and Anderson–Darling (AD) statistics, the Akaike information criterion (AIC), Bayesian information criterion (BIC), consistent AIC (CAIC), and Hannan–Quinn information criterion (HQIC). These tables also contain the MLEs of the parameters for the models being examined.
Firstly, the first dataset describes the food chain in the UK from 2000 to 2019; it can be found at https://www.gov.uk/government/statistics/food-chain-productivity and was accessed on 18 July 2022. Furthermore, these data have been cited in [9]. The data are as follows: “102.9, 104.1, 104.8, 105.5, 107.2, 108.6, 104.7, 105.8, 103.4, 104.1, 100, 99.9, 98.5, 100.1, 101.9, 101.4, 103.1, 103.2, 104.2, 109”. The results for these data are given in Table 7 and Table 8, and Figure 9, Figure 10, Figure 11 and Figure 12.
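For reproducibility, a short R sketch of the fit to this first dataset is given below; it uses the loglik_kmke and pkmke sketches from earlier sections, base-R optim, and illustrative starting values, so the returned numbers are approximate rather than the exact values reported in Table 7.

food <- c(102.9, 104.1, 104.8, 105.5, 107.2, 108.6, 104.7, 105.8, 103.4, 104.1,
          100, 99.9, 98.5, 100.1, 101.9, 101.4, 103.1, 103.2, 104.2, 109)
fit <- optim(c(0.05, 50, 1), loglik_kmke, z = food, control = list(fnscale = -1))
k <- length(fit$par); n <- length(food)
c(AIC = 2 * k - 2 * fit$value, BIC = k * log(n) - 2 * fit$value)
ks.test(food, function(q) pkmke(q, fit$par[1], fit$par[2], fit$par[3]))   # KSD and its p-value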
Secondly, the second dataset describes food and drink wholesaling in the UK from 2000 to 2019, as one component of total factor productivity; see https://www.gov.uk/government/statistics/food-chain-productivity, accessed on 18 July 2022. Furthermore, these data have been cited in [9]. The data are as follows: “101.1, 104.2, 104.6, 106.3, 100, 101.7, 99.6, 101, 102.7, 104.8, 109.1, 112, 114.4, 105.6, 107.1, 107.5, 108.6, 107.5, 106.6, 112.5”. The results for these data are given in Table 9 and Table 10, and Figure 13, Figure 14, Figure 15 and Figure 16.
Figure 10 and Figure 14 show the fitted pdf, cdf, PP-plot and QQ-plot of the KMKE model for the two datasets, respectively. Figure 9 and Figure 13 confirm that the profile likelihood of the KMKE model has a maximum and that the estimators are unique for the two datasets, respectively.
A total of 10,000 MCMC samples are produced using the MCMC algorithm that is discussed in Section 5. The MLEs and BEs of the unknown parameters of the KMKE distribution were determined using the two datasets and are reported in Table 8 and Table 10, respectively. Furthermore, two-sided 95% ACI/HPD credible intervals for the MLE and Bayesian estimates, respectively, are generated and provided in Table 8 and Table 10. They demonstrate how close to one another the point estimates of the unknown parameters obtained by the MLE and Bayesian estimation are. Additionally, there are similarities in the interval estimates determined by the 95% ACI/HPD credible intervals.
Figure 11 and Figure 15 provide trace plots of the posterior distributions of the parameters for the two datasets to track the convergence of the MCMC outputs. They suggest that the MCMC method converges quite effectively and demonstrate how closely spaced the 95% ACI/HPD credible interval boundaries are. Figure 12 and Figure 16 also show the marginal posterior density estimates of the KMKE distribution's parameters together with their histograms based on 10,000 chain values. The estimates clearly show that all of the generated posteriors are symmetric with respect to the theoretical posterior density functions.

8. Concluding Remarks

The SEWE model [9] was introduced to fit food data in the United Kingdom (UK), and the SEWE model gave an excellent fit for these data. However, in this article, we investigate a new lifetime model called the KMKE model, which gives a better fit than the SEWE model for the food data. Three special sub-models of the KMKE model are proposed and discussed. Some important statistical and computational features of the new model are investigated, such as the QUA function, MOms, INMOms, COMOms and MOm generating functions. Classical maximum likelihood estimation and Bayesian estimation approaches are utilized to estimate the parameters of the KMKE model. The simulation experiment examines the accuracy of the model parameters by employing Bayesian and maximum likelihood estimation methods. In this article, we use two real datasets related to food to show the relevance and flexibility of the suggested model. The KMKE model gives the best fit for the food data; we compare it with the SEWE model, which was introduced by [9] for fitting food data, and also with various other known statistical models. This allows the model to be used to predict future food and drink wholesaling data, the extent of its validity and the expected risks when using different quantities of food and beverages. By studying the KMKE model for food chain data, we can say that the KMKE model is an excellent model for evaluating in-depth food data and avoiding erroneous conclusions, by using prior information on the parameters of the proposed model (Bayesian) in the form of gamma distributions, where the Bayesian estimation method has the smallest SE values of the parameters. A limitation of our new suggested model is that we estimate its parameters with complete samples only. Future works can use our new model to study statistical inference for its parameters under different censoring schemes and different ranked set sampling designs. Some authors may study the stress–strength model using our model because the KMKE model is very simple and has only three parameters.

Author Contributions

Conceptualization, E.A.A.H., E.M.A., I.E. and M.E.; methodology, E.A.E., O.H.M.H., E.M.A., E.A.A.H. and M.E.; software, E.M.A. and M.E.; validation, E.A.A.H., E.M.A., I.E. and M.E.; formal analysis, E.A.E., O.H.M.H., E.M.A., E.A.A.H. and M.E.; investigation, E.A.A.H., E.M.A., I.E. and M.E.; resources, E.A.E., O.H.M.H., E.M.A., I.E. and M.E.; data curation, E.A.E., O.H.M.H., E.M.A., E.A.A.H. and M.E.; writing-original draft preparation, E.A.E., O.H.M.H., E.M.A., I.E., E.A.A.H. and M.E.; writing—review and editing, E.A.E., O.H.M.H., E.M.A., I.E., E.A.A.H. and M.E.; visualization, E.A.E. and O.H.M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia grant number INST 171.

Data Availability Statement

Datasets are available in the application section.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number INST 171.

Conflicts of Interest

The authors declare no conflict of interest.

List of Symbols

Z: Random variable
G(z): Cumulative distribution function of the Kumaraswamy generated family
H(z): Cumulative distribution function of the exponential distribution
g(z; α, β, γ): Probability density function of the Kumaraswamy exponential distribution
G(z; α, β, γ): Cumulative distribution function of the Kumaraswamy exponential distribution
α: Scale parameter
β: Shape parameter
γ: Shape parameter
F(z; α, β, γ): Cumulative distribution function of the Kavya–Manoharan Kumaraswamy exponential distribution
f(z; α, β, γ): Probability density function of the Kavya–Manoharan Kumaraswamy exponential distribution
S(z; α, β, γ): Reliability function of the Kavya–Manoharan Kumaraswamy exponential distribution
h(z; α, β, γ): Hazard rate function of the Kavya–Manoharan Kumaraswamy exponential distribution
τ(z; α, β, γ): Reversed hazard rate function of the Kavya–Manoharan Kumaraswamy exponential distribution
H(z; α, β, γ): Cumulative hazard rate function of the Kavya–Manoharan Kumaraswamy exponential distribution
Q(u): Quantile function
μ′_w: The wth moment
M_Z(t): Moment generating function
η_m(t): The mth incomplete moment
τ_m(t): The mth conditional moment
ln L: Log-likelihood function
n: Sample size
w_j: Shape parameter of the gamma hyper-prior
ℓ_j: Scale parameter of the gamma hyper-prior
N: The number of samples
C: Normalizing constant of the posterior distribution
L_S: Squared-error loss function
Ω̃_S: Bayesian estimator under SELF
E_Ω: Posterior expectation
L_L: LINEX loss function
Ω̃_L: Bayesian estimator under LINEX
c: Shape parameter of the LINEX loss function
L_E: Entropy loss function
Ω̃_E: Bayesian estimator under the entropy loss function

Appendix A

Proof of the wth MOm about zero of the KMKE model:
\mu_w^{\prime} = \int_0^{\infty} z^w f(z; \alpha, \beta, \gamma)\,dz.
By inserting Equation (7) into Equation (A1), we can rewrite the above Equation as
\mu_w^{\prime} = \frac{\alpha\beta\gamma}{e-1}\int_0^{\infty} z^w e^{-\alpha z}\left(1-e^{-\alpha z}\right)^{\beta-1}\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma-1} e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}}\,dz.
By applying the next exponential expansion to the above equation (see [105])
e^{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma}} = \sum_{i=0}^{\infty}\frac{\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma i}}{i!},
then, we obtain
\mu_w^{\prime} = \frac{\alpha\beta\gamma}{e-1}\int_0^{\infty} z^w e^{-\alpha z}\left(1-e^{-\alpha z}\right)^{\beta-1}\sum_{i=0}^{\infty}\frac{1}{i!}\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma(i+1)-1}\,dz.
Employing the next binomial expansion to the last term of the previous equation
\left[1-\left(1-e^{-\alpha z}\right)^{\beta}\right]^{\gamma(i+1)-1} = \sum_{j=0}^{\gamma(i+1)-1}(-1)^{j}\binom{\gamma(i+1)-1}{j}\left(1-e^{-\alpha z}\right)^{\beta j}.
By employing the last binomial expansion in Equation (A2) we have
\mu_w^{\prime} = \frac{\alpha\beta\gamma}{e-1}\int_0^{\infty} z^w e^{-\alpha z}\sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\frac{(-1)^{j}}{i!}\binom{\gamma(i+1)-1}{j}\left(1-e^{-\alpha z}\right)^{\beta(j+1)-1}\,dz.
Again employing the next binomial expansion to the last term of the previous equation
\left(1-e^{-\alpha z}\right)^{\beta(j+1)-1} = \sum_{k=0}^{\beta(j+1)-1}(-1)^{k}\binom{\beta(j+1)-1}{k}e^{-\alpha k z}.
By inserting the previous expansion in Equation (A4), then we obtain
\mu_w^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\int_0^{\infty} z^w e^{-\alpha(k+1)z}\,dz,
where
\pi_{i,j,k} = \frac{\alpha\beta\gamma\,(-1)^{j+k}}{(e-1)\,i!}\binom{\gamma(i+1)-1}{j}\binom{\beta(j+1)-1}{k}.
Then the wth MOm about zero of the KMKE model is
\mu_w^{\prime} = \sum_{i=0}^{\infty}\sum_{j=0}^{\gamma(i+1)-1}\sum_{k=0}^{\beta(j+1)-1}\pi_{i,j,k}\,\frac{\Gamma(w+1)}{\left[\alpha(k+1)\right]^{w+1}}.

References

  1. Schneider, K.; Hoffmann, I. Nutrition ecology—A concept for systemic nutrition research and integrative problem solving. Ecol. Food Nutr. 2011, 50, 1–17. [Google Scholar] [CrossRef] [PubMed]
  2. Wezel, A.; Bellon, S.; Doré, T.; Francis, C.; Vallod, D.; David, C. Agroecology as a science, a movement and a practice. A review. Agron. Sustain. Dev. 2009, 29, 503–515. [Google Scholar] [CrossRef]
  3. Lappé, F.M. Diet for a Small Planet: How to Enjoy a Rich Protein Harvest by Getting Off the Top of the Food Chain. 1971. [Google Scholar]
  4. Gussow, J.D.; Clancy, K.L. Dietary guidelines for sustainability. J. Nutr. Educ. 1986, 18, 1–5. [Google Scholar] [CrossRef]
  5. Djekic, I.; Sanjuán, N.; Clemente, G.; Jambrak, A.R.; Djukić-Vuković, A.; Brodnjak, U.V.; Pop, E.; Thomopoulos, R.; Tonda, A. Review on environmental models in the food chain-Current status and future perspectives. J. Clean. Prod. 2018, 176, 1012–1025. [Google Scholar] [CrossRef]
  6. Almetwally, E.M.; Meraou, M.A. Application of Environmental Data with New Extension of Nadarajah-Haghighi Distribution. Comput. J. Math. Stat. Sci. 2022, 1, 26–41. [Google Scholar] [CrossRef]
  7. Abubakari, A.G.; Anzagra, L.; Nasiru, S. Chen Burr-Hatke Exponential Distribution: Properties, Regressions and Biomedical Applications. Comput. J. Math. Stat. Sci. 2023, 2, 80–105. [Google Scholar] [CrossRef]
  8. Marín, S.; Freire, L.; Femenias, A.; Sant’Ana, A.S. Use of predictive modelling as tool for prevention of fungal spoilage at different points of the food chain. Curr. Opin. Food Sci. 2021, 41, 1–7. [Google Scholar] [CrossRef]
  9. Alyami, S.A.; Elbatal, I.; Alotaibi, N.; Almetwally, E.M.; Elgarhy, M. Modeling to Factor Productivity of the United Kingdom Food Chain: Using a New Lifetime-Generated Family of Distributions. Sustainability 2022, 14, 8942. [Google Scholar] [CrossRef]
  10. Yu, J.; Chen, H.; Zhang, X.; Cui, X.; Zhao, Z. A Rice Hazards Risk Assessment Method for a Rice Processing Chain Based on a Multidimensional Trapezoidal Cloud Model. Foods 2023, 12, 1203. [Google Scholar] [CrossRef]
  11. Kumaraswamy, P. A generalized probability density function for double-bounded random processes. J. Hydrol. 1980, 46, 79–88. [Google Scholar] [CrossRef]
  12. Cordeiro, G.M.; de Castro, M. A new family of generalized distributions. J. Stat. Comput. Simul. 2011, 81, 883–898. [Google Scholar] [CrossRef]
  13. de Araujo Rodrigues, J.; Madeira Silva, A.P.C. The Exponentiated Kumaraswamy-Exponential Distribution. Curr. J. Appl. Sci. Technol. 2015, 10, 1–12. [Google Scholar] [CrossRef]
  14. Bakouch, H.; Moala, F.; Saboor, A.; Samad, H. A bivariate Kumaraswamy-exponential distribution with application. Math. Slovaca 2019, 69, 1185–1212. [Google Scholar] [CrossRef]
  15. Chacko, M.; Mohan, R. Estimation of parameters of Kumaraswamy-Exponential distribution under progressive type-II censoring. J. Stat. Comput. Simul. 2017, 87, 1951–1963. [Google Scholar] [CrossRef]
  16. Al-saiary, Z.A.; Bakoban, R.A.; Al-zahrani, A.A. Characterizations of the Beta Kumaraswamy Exponential Distribution. Mathematics 2020, 8, 23. [Google Scholar] [CrossRef]
  17. El-Damrawy, H.H.; Teamah, A.A.M.; El-Shiekh, B.M. Truncated Bivariate Kumaraswamy Exponential Distribution. J. Stat. Appl. Pro. 2022, 11, 461–469. [Google Scholar]
  18. Chesneau, C.; Jamal, F. The Sine Kumaraswamy-G Family of Distributions. J. Math. Ext. 2021, 15, 1–33. [Google Scholar]
  19. Hassan, A.S.; Mohamed, R.E.; Kharazmi, O.; Nagy, H.F. A New Four Parameter Extended Exponential Distribution with Statistical Properties and Applications. Pak. J. Stat. Oper. Res. 2022, 18, 179–193. [Google Scholar] [CrossRef]
  20. Sule, I.; Doguwa, S.; Isah, A.; Jibril, H. The Topp Leone Kumaraswamy-G Family of Distributions with Applications to Cancer Disease Data. JBE 2020, 6, 40–51. [Google Scholar] [CrossRef]
  21. Arshad, R.M.I.; Tahir, M.H.; Chesneau, C.; Jamal, F. The Gamma Kumaraswamy-G family of distributions: Theory, inference and applications. Stat. Transit. New Ser. 2020, 21, 17–40. [Google Scholar] [CrossRef]
  22. Afify, A.Z.; Alizadeh, M. The odd Dagum family of distributions: Properties and applications. J. Appl. Probab. Stat. 2020, 15, 45–72. [Google Scholar]
  23. Tahir, M.H.; Cordeiro, G.M.; Alizadeh, M.; Mansoor, M.; Zubair, M.; Hamedani, G.G. The odd generalized exponential family of distributions with applications. J. Stat. Distrib. Appl. 2015, 2, 1–28. [Google Scholar] [CrossRef]
  24. Al-Babtain, A.A.; Elbatal, I.; Chesneau, C.; Elgarhy, M. Sine Topp-Leone-G family of distributions: Theory and applications. Open Phys. 2020, 18, 74–593. [Google Scholar] [CrossRef]
  25. Cordeiro, G.M.; Alizadeh, M.; Ozel, G.; Hosseini, B.; Ortega, E.M.M.; Altun, E. The generalized odd log-logistic class of distributions: Properties, regression models and applications. J. Stat. Comput. Simul. 2017, 87, 908–932. [Google Scholar] [CrossRef]
  26. Al-Babtain, A.A.; Elbatal, I.; Al-Mofleh, H.; Gemeay, A.M.; Afify, A.Z.; Sarg, A.M. The Flexible BurrX-G Family: Properties, Inference, and Applications in the Engineering Science. Symmetry 2021, 13, 474. [Google Scholar] [CrossRef]
  27. Alotaibi, N.; Elbatal, I.; Almetwally, E.M.; Alyami, S.A.; Al-Moisheer, A.S.; Elgarhy, M. Truncated Cauchy Power Weibull-G Class of Distributions: Bayesian and Non-Bayesian Inference Modelling for COVID-19 and Carbon Fiber Data. Mathematics 2022, 10, 1565. [Google Scholar] [CrossRef]
  28. Reyad, H.; Jamal, F.; Othman, S.; Hamedani, G.G. The transmuted Gompertz-G family of distribu-tions: Properties and applications. Tbil. Math. J. 2018, 11, 47–67. [Google Scholar]
  29. Badr, M.; Elbatal, I.; Jamal, F.; Chesneau, C.; Elgarhy, M. The Transmuted Odd Fréchet-G class of Distributions: Theory and Applications. Mathematics 2020, 8, 958. [Google Scholar] [CrossRef]
  30. Reyad, H.; Jamal, F.; Othman, S.; Hamedani, G.G. The transmuted odd Lindley-G family of distributions. Asian J. Probab. Stat. 2018, 1, 1–25. [Google Scholar] [CrossRef]
  31. Elbatal, I.; Alotaibi, N.; Almetwally, E.M.; Alyami, S.A.; Elgarhy, M. On Odd Perks-G Class of Distributions: Properties, Regression Model, Discretization, Bayesian and Non-Bayesian Estimation, and Applications. Symmetry 2022, 14, 883. [Google Scholar] [CrossRef]
  32. Bantan, R.A.; Jamal, F.; Chesneau, C.; Elgarhy, M. A New Power Topp–Leone Generated Family of Distributions with Applications. Entropy 2019, 21, 1177. [Google Scholar] [CrossRef]
  33. Yousof, H.M.; Rasekhi, M.; Altun, E.; Alizadeh, M. The extended odd Fréchet family of distributions: Properties, applications and regression modeling. Int. J. Math. Comput. 2019, 30, 1–16. [Google Scholar]
  34. Ocloo, S.K.; Brew, L.; Nasiru, S.; Odoi, B. On the Extension of the Burr XII Distribution: Applications and Regression. Comput. J. Math. Stat. Sci. 2023, 2, 1–30. [Google Scholar] [CrossRef]
  35. Afify, A.Z.; Cordeiro, G.M.; Ibrahim, N.A.; Jamal, F.; Nasir, M.A. Marshall-Olkin odd Burr III-G family: Theory, estimation, and engineering applications. IEEE Access 2021, 9, 4376–4387. [Google Scholar] [CrossRef]
  36. Bantan, R.A.; Chesneau, C.; Jamal, F.; Elgarhy, M. On the analysis of new COVID-19 cases in Pakistan using an exponentiated version of the M family of distributions. Mathematics 2020, 8, 953. [Google Scholar] [CrossRef]
  37. Ahmad, Z.; Mahmoudi, E.; Alizadeh, M.; Roozegar, R.; Afify, A.Z. The exponential TX family of distributions: Properties and an application to insurance data. J. Math. 2021, 2021, 3058170. [Google Scholar] [CrossRef]
  38. Bantan, R.A.; Jamal, F.; Chesneau, C.; Elgarhy, M. Truncated inverted Kumaraswamy generated family of distributions with applications. Entropy 2019, 21, 1089. [Google Scholar] [CrossRef]
  39. Nassar, M.; Kumar, D.; Dey, S.; Cordeiro, G.M.; Afify, A.Z. The Marshal-Olkin alpha power family of distributions with applications. J. Comput. Appl. Math. 2019, 351, 41–53. [Google Scholar] [CrossRef]
  40. Alghamdi, S.M.; Shrahili, M.; Hassan, A.S.; Mohamed, R.E.; Elbatal, I.; Elgarhy, M. Analysis of Milk Production and Failure Data: Using Unit Exponentiated Half Logistic Power Series Class of Distributions. Symmetry 2023, 15, 714. [Google Scholar] [CrossRef]
  41. Shah, Z.; Khan, D.M.; Khan, Z.; Faiz, N.; Hussain, S.; Anwar, A.; Ahmad, T.; Kim, K.-I. A New Generalized Logarithmic-X Family of Distributions with Biomedical Data Analysis. Appl. Sci. 2023, 13, 3668. [Google Scholar] [CrossRef]
  42. Abbas, S.; Muhammad, M.; Jamal, F.; Chesneau, C.; Muhammad, I.; Bouchane, M. A New Extension of the Kumaraswamy Generated Family of Distributions with Applications to Real Data. Computation 2023, 11, 26. [Google Scholar] [CrossRef]
  43. Ampadu, C.B. Some Structural Properties of the Generalized Kumaraswamy (GKw) qT-X Class of Distributions. Earthline J. Math. Sci. 2023, 12, 27–52. [Google Scholar] [CrossRef]
  44. Ghosh, I. A New Class of Alternative Bivariate Kumaraswamy-Type Models: Properties and Applications. Stats 2023, 6, 232–252. [Google Scholar] [CrossRef]
  45. Nik, A.S.; Chesneau, C.; Bakouch, H.S.; Asgharzadeh, A. A new truncated (0, b)-F family of lifetime distributions with an extensive study to a submodel and reliability data. Afr. Mat. 2023, 34, 3. [Google Scholar] [CrossRef]
  46. Oluyede, B.; Liyanage, G.W. The Gamma Odd Weibull Generalized-G Family of Distributions: Properties and Applications. Rev. Colomb. EstadíStica 2023, 46, 1–44. [Google Scholar]
  47. Aslam, M.; Jun, C.-H.; Fernandez, A.J.; Ahmad, M.; Rasool, M. Repetitive group sampling plan based on truncated tests for Weibull models. Res. J. Appl. Sci. Eng. Technol. 2014, 7, 1917–1924. [Google Scholar] [CrossRef]
  48. Fernandez, A.J.; Perez-Gonzalez, C.J.; Aslam, M.; Jun, C.H. Design of progressively censored group sampling plans for Weibull distributions: An optimization problem. Eur. J. Oper. Res. 2011, 211, 525–532. [Google Scholar] [CrossRef]
  49. Kavya, P.; Manoharan, M. Some parsimonious models for lifetimes and applications. J. Statist. Comput. Simul. 2021, 91, 3693–3708. [Google Scholar] [CrossRef]
  50. Cordeiro, G.M.; Ortega, E.M.; Nadarajah, S. The kumaraswamy weibull distribution with application to failure data. J. Frankl. Inst. 2010, 347, 1399–1429. [Google Scholar] [CrossRef]
  51. Gomes, A.E.; de-Silva, C.Q.; Cordeiro, G.M.; Ortega, E.M. A new lifetime model: The Kumaraswamy generalized Rayleigh distribution. J. Stat. Comput. Simul. 2014, 84, 290–309. [Google Scholar] [CrossRef]
  52. Cordeiro, G.M.; Ortega, E.M.; Silva, G.O. The Kumaraswamy modified Weibull distribution: Theory and applications. J. Stat. Comput. Simul. 2014, 84, 1387–1411. [Google Scholar] [CrossRef]
  53. Al-Babtain, A.; Fattah, A.A.; A-Hadi, N.A.; Merovci, F. The Kumaraswamy-transmuted exponentiated modified Weibull distribution. Commun. Stat.-Simul. Comput. 2017, 46, 3812–3832. [Google Scholar]
  54. Mansour, M.M.; Hamed, M.S.; Mohamed, S.M. A New Kumaraswamy transmuted modified Weibull Distribution: With Application. J. Stat. Adv. Theory Appl. 2015, 13, 101–133. [Google Scholar]
  55. Chukwu, A.U.; Ogunde, A.A. On Kumaraswamy Gompertz Makeham distribution. Am. J. Math. Stat. 2016, 6, 122–127. [Google Scholar]
  56. Cordeiro, G.M.; Nadarajah, S.; Ortega, E.M. The Kumaraswamy Gumbel distribution. Stat. Methods Appl. 2012, 21, 139–168. [Google Scholar] [CrossRef]
  57. De Pascoa, M.A.; Ortega, E.M.; Cordeiro, G.M. The Kumaraswamy generalized gamma distribution with application in survival analysis. Stat. Methodol. 2011, 8, 411–433. [Google Scholar] [CrossRef]
  58. Nagarjuna, V.B.V.; Vardhan, R.V.; Chesneau, C. Kumaraswamy generalized power Lomax distribution and Its Applications. Stats 2021, 4, 28–45. [Google Scholar] [CrossRef]
  59. Paranaíba, P.F.; Ortega, E.M.; Cordeiro, G.M.; Pascoa, M.A.D. The Kumaraswamy Burr XII distribution: Theory and practice. J. Stat. Comput. Simul. 2013, 83, 2117–2143. [Google Scholar] [CrossRef]
  60. Ogunde, A.A.; Chukwu, A.U.; Oseghale, I.O. The Kumaraswamy generalized inverse Lomax distribution and Applications to Reliability and survival data. Sci. Afr. 2022, 5, e01483. [Google Scholar] [CrossRef]
  61. Huang, S.; Oluyede, B.O. Exponentiated Kumaraswamy-Dagum distribution with applications to income and lifetime data. J. Stat. Distrib. App. 2014, 1, 8. [Google Scholar] [CrossRef]
  62. Bantan, R.A.R.; Chesneau, C.; Jamal, F.; Elgarhy, M.; Almutiry, W.; Alahmadi, A. Study of a Modified Kumaraswamy Distribution. Mathematics 2021, 9, 2836. [Google Scholar] [CrossRef]
  63. Elgarhy, M.; Sharma, V.K.; Elbatal, I. Transmuted Kumaraswamy Lindley distribution with application. J. Stat. Manag. Syst. 2018, 21, 1083–1104. [Google Scholar] [CrossRef]
  64. George, R.; Thobias, S. Kumaraswamy Marshall-Olkin Exponential distribution. Commun. Stat.-Theory Methods 2019, 48, 1920–1937. [Google Scholar] [CrossRef]
  65. Usman, R.M.; Haq, M.; Junaid, T. Kumaraswamy half-logistic distribution: Properties and applications. J. Stat. Appl. Prob. 2017, 6, 597–609. [Google Scholar] [CrossRef]
  66. De Santana, T.V.F.; Ortega, E.M.M.; Cordeiro, G.M.; Silva, G.O. The Kumaraswamy-log-logistic distribution. J. Stat. Theory Appl. 2012, 11, 265–291. [Google Scholar]
  67. Cakmakyapan, S.; Ozel, G.; El Gebaly, Y.M.; Hamedani, G.G. The Kumaraswamy Marshall-Olkin log-logistic distribution with application. J. Stat. Theory Appl. 2018, 17, 59–76. [Google Scholar] [CrossRef]
  68. Mead, M.E.; Afify, A.; Butt, N. The Modified Kumaraswamy Weibull Distribution: Properties and Applications in Reliability and Engineering Sciences. Pak. J. Stat. Oper. Res. 2020, 16, 433–446. [Google Scholar] [CrossRef]
  69. Hassan, A.S.; Almetwally, E.M.; Ibrahim, G.M. Kumaraswamy inverted Topp-Leone distribution with applications to COVID-19 data. Comput. Mater. Contin. 2021, 68, 337–358. [Google Scholar] [CrossRef]
  70. Alotaibi, N.; Elbatal, I.; Shrahili, M.; Al-Moisheer, A.S.; Elgarhy, M.; Almetwally, E.M. Statistical Inference for the Kavya- Manoharan Kumaraswamy Model under Ranked Set Sampling with Applications. Symmetry 2023, 15, 587. [Google Scholar] [CrossRef]
  71. Ishaq, A.I.; Usman, A.; Musa, T.; Agboola, S. On some properties of Generalized Transmuted Kumaraswamy distribution. Pak. J. Stat. Oper. Res. 2019, 15, 577–586. [Google Scholar] [CrossRef]
  72. Jamal, F.; Nasir, M.A.; Ozel, G.; Elgarhy, M.; Khan, N.M. Generalized inverted Kumaraswamy generated family of distributions: Theory and applications. J. Appl. Stat. 2019, 46, 2927–2944. [Google Scholar] [CrossRef]
  73. Reyad, H.; Jamal, F.; Othman, S.; Yahia, N. The Topp Leone Generalized Inverted Kumaraswamy Distribution: Properties and Applications. Asian Res. J. Math. 2019, 13, 48226. [Google Scholar] [CrossRef]
  74. Mdlongwa, P.; Oluyede, B.O.; Amey, A.K.; Fagbamigbe, A.F.; Makubate, B. Kumaraswamy log-logistic Weibull distribution: Model, theory and application to lifetime and survival data. Heliyon 2019, 5, e01144. [Google Scholar] [CrossRef]
  75. Kawsar, F.; Jan, U.; Ahmad, S.P. Statistical Properties and Applications of the Exponentiated Inverse Kumaraswamy Distribution. J. Reliab. Stat. Stud. 2018, 11, 93–102. [Google Scholar]
  76. Madaki, U.Y.; Abu Bakar, M.R.; Handique, L. Beta Kumaraswamy Burr Type X Distribution and Its Properties. ASEANA Sci. Educ. J. 2022, 2, 9–36. [Google Scholar]
  77. Usman, R.M.; Haq, M.A.U. The Marshall-Olkin extended inverted Kumaraswamy distribution: Theory and applications. J. King Saud-Univ.-Sci. 2018, 32, 356–365. [Google Scholar] [CrossRef]
  78. Nawaz, T.; Hussain, S.; Ahmad, T.; Naz, F.; Abid, M. Kumaraswamy generalized Kappa distribution with application to stream flow data. J. King Saud-Univ.-Sci. 2018, 32, 172–182. [Google Scholar] [CrossRef]
  79. Saracoglu, B.; Tanis, C. A new statistical distribution: Cubic rank transmuted Kumaraswamy distribution Weibull its properties. J. Natl. Sci. Found. SriLanka 2018, 46, 505–518. [Google Scholar] [CrossRef]
  80. Kaile, N.K.; Isah, A.; Dikko, H.G. Odd Generalized Exponential Kumaraswamy distribution: Its properties and application to real-life data. Atbu J. Sci. Technol. Educ. (JOSTE) 2018, 6, 137–148. [Google Scholar]
  81. Muhammad, M.; Muhammad, I.; Yaya, A.M. The Kumaraswamy Exponentiated U-Quadratic Distribution: Properties and Application. Asian J. Probab. Stat. 2018, 1, 41224. [Google Scholar] [CrossRef]
  82. Nasir, A.; Bakouch, H.S.; Jamal, F. Kumaraswamy odd Burr G family of distributions with applications to reliability data. Stud. Sci. Math. Hung. 2018, 55, 94–114. [Google Scholar] [CrossRef]
  83. Elgarhy, M.; Haq, M.A.; Ain, Q.U. Exponentiated Generalized Kumaraswamy Distribution with Applications. Ann. Data Sci. 2018, 5, 273–292. [Google Scholar] [CrossRef]
  84. Sharma, D.; Chakrabarty, T.K. On Size Biased Kumaraswamy Distribution. Stat. Opt. Inform. Comput. 2016, 4, 252–264. [Google Scholar] [CrossRef]
  85. Selim, M.A.; Badr, A.M. The Kumaraswamy Generalized Power Weibull Distribution. Math. Theory Model. 2016, 6, 110–124. [Google Scholar]
  86. Elsayed, E.A. Reliability Engineering, 2nd ed.; John Wiley Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  87. Mood, A.M.; Graybill, F.A.; Boes, D.C. Introduction to the Theory of Statistics, 3rd ed.; McGraw Hill: New York, NY, USA, 1974. [Google Scholar]
  88. Tolba, A. Bayesian and Non-Bayesian Estimation Methods for Simulating the Parameter of the Akshaya Distribution. Comput. J. Math. Stat. Sci. 2022, 1, 13–25. [Google Scholar] [CrossRef]
  89. Abo-Kasem, O.E.; Salem, S.; Hussien, A. On Joint Type-II Generalized Progressive Hybrid Censoring Scheme. Comput. J. Math. Stat. Sci. 2023, 2, 123–158. [Google Scholar]
  90. Dey, S.; Singh, S.; Tripathi, Y.M.; Asgharzadeh, A. Estimation and prediction for a progressively censored generalized inverted exponential distribution. Stat. Methodol. 2016, 32, 185–202. [Google Scholar] [CrossRef]
  91. Varian, H.R. A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage; 1975; pp. 195–208. [Google Scholar]
  92. Algarni, A.; Almarashi, A.M.; Elbatal, I.; Hassan, A.S.; Almetwally, E.M.; Daghistani, A.M.; Elgarhy, M. Type I Half Logistic Burr X-G Family: Properties, Bayesian, and Non-Bayesian Estimation under Censored Samples and Applications to COVID-19 Data. Math. Probl. Eng. 2021, 2021, 5461130. [Google Scholar] [CrossRef]
  93. Khatun, N.; Matin, M.A. A study on LINEX loss function with different estimating methods. Open J. Stat. 2020, 10, 52. [Google Scholar] [CrossRef]
  94. Arshad, M.; Abdalghani, O. On estimating the location parameter of the selected exponential population under the LINEX loss function. Braz. J. Probab. Stat. 2020, 34, 167–182. [Google Scholar] [CrossRef]
  95. Calabria, R.; Pulcini, G. An engineering approach to Bayes estimation for the Weibull distribution. Microelectron. Reliab. 1994, 34, 789–802. [Google Scholar] [CrossRef]
  96. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef]
  97. El-Damcese, M.A.; Mustafa, A.; Eliwa, M.S. Exponentaited Generalized Weibull Gompertz Distribution. arXiv 2014, arXiv:1412.0705. [Google Scholar]
  98. Afify, A.Z.; Mead, M.E. On five-parameter Burr XII distribution: Properties and applications. S. Afr. Stat. J. 2017, 51, 67–80. [Google Scholar]
  99. Tahir, M.H.; Cordeiro, G.M.; Mansoor, M.; Zubair, M. The Weibull-Lomax distribution: Properties and applications. Hacet. J. Math. Stat. 2015, 44, 455–474. [Google Scholar] [CrossRef]
  100. Almetwally, E.M.; Sabry, M.A.; Alharbi, R.; Alnagar, D.; Mubarak, S.A.; Hafez, E.H. Marshall-olkin alpha power Weibull distribution: Different methods of estimation based on type-I and type-II censoring. Complexity 2021, 2021, 5533799. [Google Scholar] [CrossRef]
  101. Alsuhabi, H.; Alkhairy, I.; Almetwally, E.M.; Almongy, H.M.; Gemeay, A.M.; Hafez, E.H.; Sabry, M. A superior extension for the Lomax distribution with application to COVID-19 infections real data. Alex. Eng. J. 2022, 61, 11077–11090. [Google Scholar] [CrossRef]
  102. Almetwally, E.M.; Alharbi, R.; Alnagar, D.; Hafez, E.H. A new inverted topp-leone distribution: Applications to the COVID-19 mortality rate in two different countries. Axioms 2021, 10, 25. [Google Scholar] [CrossRef]
  103. Almetwally, E.M. The odd Weibull inverse topp–leone distribution with applications to COVID-19 data. Ann. Data Sci. 2022, 9, 121–140. [Google Scholar] [CrossRef]
  104. Zhang, T.; Xie, M. Failure data analysis with extended Weibull distribution. Commun. Stat. Comput. 2007, 36, 579–592. [Google Scholar] [CrossRef]
  105. Khan, W.A.; Alatawi, M.S.; Ryoo, C.S.; Duran, U. Novel Properties of q-Sine-Based and q-Cosine-Based q-Fubini Polynomials. Symmetry 2023, 15, 356. [Google Scholar] [CrossRef]
Figure 1. A detailed graphic representation of the article.
Figure 2. Plots of pdf and hrf for the KMKE model.
Figure 3. Plots of the pdf for the KMKE model in 3D.
Figure 4. Plots of the hrf for the KMKE model in 3D.
Figure 5. Plots of mean, var, SK, KU, CV and ID in 3D for the KMKE model.
Figure 6. Heatmaps of MSE values for the parameters of the KMKE distribution for different sample cases: α = 0.5, β = 0.4.
Figure 7. Heatmaps of MSE values for the parameters of the KMKE distribution for different sample cases: α = 0.5, β = 1.5.
Figure 8. Heatmaps of MSE values for the parameters of the KMKE distribution for different sample cases: α = 0.5, β = 1.5.
Figure 9. Profile likelihood plots of the MLEs of the KMKE model for the food chain data.
Figure 10. Fitted cdf with the empirical cdf, fitted pdf with the histogram, and QQ and PP plots of the KMKE model for the food chain data.
Figure 11. MCMC trace plots with convergence lines for the parameters of the KMKE model for the food chain data.
Figure 12. Histograms with fitted normal curves for the parameters of the KMKE model for the food chain data.
Figure 13. Profile likelihood plots of the MLEs of the KMKE model for the food and drink wholesaling data.
Figure 14. Fitted cdf with the empirical cdf, fitted pdf with the histogram, and QQ and PP plots of the KMKE model for the food and drink wholesaling data.
Figure 15. MCMC trace plots with convergence lines for the parameters of the KMKE model for the food and drink wholesaling data.
Figure 16. Histograms with fitted normal curves for the parameters of the KMKE model for the food and drink wholesaling data.
Table 1. Relevant literature.
Model | Modeling | Authors
The new suggested model (KMKE model) | Food chain data | New
K-Weibull model | Failure times data | [50]
K-generalized Rayleigh model | Engineering data | [51]
K-modified Weibull model | Failure times data | [52]
K-transmuted exponentiated modified Weibull model | Medical data | [53]
K-transmuted modified Weibull model | Failure times data | [54]
K-Gompertz Makeham model | Physics data | [55]
K-Gumbel model | Engineering data | [56]
K-generalized gamma model | Industrial and medical data | [57]
K-generalized power Lomax model | Physics data | [58]
K-Burr XII model | Engineering, physics and medical data | [59]
K-generalized inverse Lomax model | Reliability and survival data | [60]
K-Dagum model | Income and lifetime data | [61]
Modified K model | Engineering data | [62]
Transmuted K-Lindley model | Medical data | [63]
K-Marshall–Olkin exponential model | Medical data | [64]
K-half logistic model | Physics and medical data | [65]
K-log logistic model | Medical data | [66]
K-Marshall–Olkin log-logistic model | Physics data | [67]
Modified K Weibull model | Reliability and engineering data | [68]
K-inverted Topp–Leone model | COVID-19 data | [69]
Kavya–Manoharan-K model | COVID-19 and physics data | [70]
Transmuted K model | Medical and environmental data | [71]
Generalized inverted K-G | Physics data | [72]
Topp–Leone generalized inverted K model | Physics data | [73]
K log-logistic Weibull model | Failure times data | [74]
Exponentiated inverse K model | Economic data | [75]
Beta K Burr Type X model | Physics and medical data | [76]
Marshall–Olkin extended inverted K model | Physics, failure and medical data | [77]
K generalized Kappa model | Geological data | [78]
Cubic rank transmuted K model | Food and industrial data | [79]
K Marshall–Olkin log-logistic model | Physics data | [67]
Odd generalized exponential K model | Geological and environmental data | [80]
K exponentiated U-quadratic model | Medical data | [81]
K odd Burr-G | Physics and engineering data | [82]
Exponentiated generalized K model | Environmental, agriculture and engineering data | [83]
Size-biased K model | Engineering data | [84]
K generalized power Weibull model | Engineering data | [85]
Exponentiated K-Dagum model | Income and lifetime data | [61]
Table 2. Some sub-models of the KMKE model.
Model | α | β | γ
KMKE | – | – | –
KM-Topp–Leone exponential | – | 2 | –
KM-exponentiated exponential | – | – | 1
KM-exponential | – | 1 | 1
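To make the sub-model structure in Table 2 concrete, the following minimal sketch (Python/NumPy) evaluates the cdf, pdf and quantile function of the KMKE model. It applies the Kavya–Manoharan transform F(x) = [e/(e − 1)][1 − exp(−G(x))] to a baseline Kumaraswamy exponential cdf that is assumed here to be G(x) = 1 − [1 − (1 − e^{−ax})^b]^c, with a the exponential rate and b, c the Kumaraswamy shapes; the correspondence between (a, b, c) and the paper's (α, β, γ) should be checked against the definitions given earlier in the text before the code is reused.

```python
import numpy as np

KM = np.e / (np.e - 1.0)   # normalising constant of the Kavya–Manoharan transform

def ke_cdf(x, a, b, c):
    """Baseline Kumaraswamy exponential cdf (assumed parametrization:
    a = exponential rate, b and c = Kumaraswamy shape parameters)."""
    return 1.0 - (1.0 - (1.0 - np.exp(-a * x)) ** b) ** c

def ke_pdf(x, a, b, c):
    u = 1.0 - np.exp(-a * x)
    return a * b * c * np.exp(-a * x) * u ** (b - 1.0) * (1.0 - u ** b) ** (c - 1.0)

def kmke_cdf(x, a, b, c):
    return KM * (1.0 - np.exp(-ke_cdf(x, a, b, c)))

def kmke_pdf(x, a, b, c):
    return KM * ke_pdf(x, a, b, c) * np.exp(-ke_cdf(x, a, b, c))

def kmke_quantile(q, a, b, c):
    """Invert the KM transform first, then the baseline KE cdf."""
    g = -np.log(1.0 - q * (np.e - 1.0) / np.e)
    return -np.log(1.0 - (1.0 - (1.0 - g) ** (1.0 / c)) ** (1.0 / b)) / a
```

Under this assumed parametrization, setting c = 1 reduces the sketch to the KM exponentiated exponential and b = c = 1 to the KM exponential, in line with the sub-models listed in Table 2.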
Table 3. Bias, MSE and LCI for the MLE and Bayesian estimation methods for α = 0.5, β = 0.4.
Column groups (left to right): MLE (Bias, MSE, LACI); Bayesian estimates under SELF, LINEX (c = −1.2), LINEX (c = 1.2), ELF (c = −1.2) and ELF (c = 1.2), each reported as Bias, MSE, LCCI. Rows are indexed by γ, the sample size n and the parameter; in the rows below, the leading figures give γ and n (e.g., "0.540" denotes γ = 0.5, n = 40), followed by the eighteen values in the column order above.
0.540 α 0.14450.15401.43110.09400.05100.76380.11770.06200.80080.07050.04150.72090.10660.05460.76330.02370.03810.7249
β 0.13060.02690.38870.09190.01840.33020.10230.02240.34300.08160.01500.31270.09840.02020.33400.05710.01080.3083
γ 0.16400.14421.43110.07910.03950.64640.09790.04800.70200.06070.03220.59600.08890.04240.66700.02790.02780.5870
70 α 0.06080.06810.99540.04920.01720.46050.05600.01880.46720.04240.01580.45440.05320.01780.46050.02280.01470.4560
β 0.11790.01830.26020.05820.00660.21390.06180.00710.21800.05470.00600.21060.06070.00690.21570.04540.00500.2083
γ 0.15960.09220.99540.04130.01170.36620.04680.01270.36740.03600.01080.35980.04450.01210.36470.02510.00980.3621
150 α 0.04490.04510.81420.04370.00870.31830.04700.00920.32590.04040.00810.31470.04560.00890.32120.02230.00740.3126
β 0.11650.01590.18960.05780.00520.15890.05970.00540.16250.04560.00490.15650.05910.00530.16060.04510.00430.1543
γ 0.14360.06620.81420.03440.00620.27080.03700.00660.27590.03180.00590.26470.03600.00640.27310.02460.00540.2616
1.740 α 0.32600.19311.15540.14860.07290.84990.18050.09350.92560.11690.05580.77390.16390.07960.86400.06070.04810.8048
β 0.08640.01130.24300.05650.01040.21460.06820.01100.22540.04580.00910.20410.06090.01050.22500.03390.00910.2036
γ 0.09680.10191.15540.05030.09011.13500.08740.10061.14950.01310.08311.12210.05750.09081.12620.01320.08901.1664
70 α 0.27400.14191.01380.06040.01860.46420.06850.02050.47160.05240.01690.45780.06510.01940.46390.03610.01580.4654
β 0.08110.00900.19340.03540.00440.14870.03770.00540.14950.03310.00360.14720.03690.00470.14910.02770.00290.1474
γ 0.14870.10051.01380.02120.02100.56700.02980.02190.56990.01260.02030.55070.02290.02110.56510.01290.02060.5597
150 α 0.26880.10750.73650.05140.00910.32130.05510.00980.32900.04780.00850.31410.05360.00940.32490.04040.00770.3116
β 0.07400.00660.13180.02940.00160.10230.03020.00160.10290.02860.00150.10120.03000.00160.10260.02630.00140.0989
γ 0.07100.08250.73650.01790.01020.38910.02180.01050.38930.01400.00990.38900.01870.01020.38910.01420.01000.3902
340 α 0.43280.28181.20540.14280.07750.85710.17170.09800.92340.11430.06040.79360.14980.08050.86200.05580.05230.8153
β 0.09410.01160.20390.08290.01050.12370.10900.10340.19250.05820.00940.19230.08550.01060.20040.04980.00940.1822
γ 0.22760.14851.20540.03330.10371.19060.06520.11011.20400.00110.09961.17740.03510.10381.19100.01340.10331.2095
70 α 0.42550.24230.97070.05740.02080.50850.06410.02240.51660.05070.01920.49980.05940.02110.50950.03460.01830.5118
β 0.09080.01000.16390.03330.00740.13380.03580.01030.13490.03100.00530.13300.03400.00770.13420.02640.00420.1305
γ 0.18160.11770.97070.00480.02270.56390.01180.02300.5656-0.00230.02260.56640.00510.02270.56400.00040.02270.5683
150 α 0.37730.16070.53050.05140.01020.33280.05450.01090.34010.04820.00960.32860.05230.01040.33400.03140.00880.3255
β 0.08250.00750.10400.02530.00110.07830.02580.00110.07910.02490.00100.07780.02550.00110.07850.02350.00100.0774
γ 0.14400.02580.53050.00310.01110.39800.01050.01130.39820.00800.01100.39770.00410.01110.39810.00040.01110.3957
Table 4. Bias, MSE and LCI for the MLE and Bayesian estimation methods for α = 0.5, β = 1.5.
Columns as in Table 3: MLE (Bias, MSE, LACI), followed by the Bayesian estimates under SELF, LINEX (c = −1.2), LINEX (c = 1.2), ELF (c = −1.2) and ELF (c = 1.2), each as Bias, MSE, LCCI; rows are indexed by γ, n and the parameter.
0.540 α 0.16300.12981.25990.06060.04010.71780.07510.04640.74950.04630.03460.67630.06440.04110.72050.01670.03090.6627
β 0.64170.59191.66450.08640.09511.07740.11820.11231.12130.05500.08431.03490.08990.09541.08060.04770.09371.0550
γ 0.11610.11521.25990.07510.05230.77780.09360.06140.82490.05700.04440.75110.07990.05370.77480.02000.03980.7325
70 α 0.04170.04820.84520.02050.01040.36660.02460.01100.37160.01640.00990.36100.02180.01050.36830.00640.00950.3620
β 0.50480.33851.13470.03980.02370.57580.04720.02540.58350.03250.02240.56700.04060.02390.57840.03120.02270.5702
γ 0.18420.11440.84520.02990.01400.42640.03480.01480.43260.02490.01320.42210.03140.01410.42650.01280.01270.4249
150 α 0.14100.03910.80450.01680.00520.25690.01870.00540.26210.01490.00510.25580.01740.00530.25780.00610.00490.2524
β 0.55640.32821.05330.03550.01100.36670.03890.01190.36910.03200.01030.36120.03580.01110.36690.03050.01030.3626
γ 0.06040.05461.04490.02360.00820.29910.02620.00920.30130.02110.00730.29770.02440.00830.29860.01250.00690.2997
1.740 α 0.25620.20021.43860.07860.02710.53050.08950.03120.55990.06780.02340.50550.08150.02780.53290.04580.01980.5047
β 0.54090.45771.59380.12480.12631.04580.16020.18871.09430.09070.08940.97390.12830.12811.04570.08770.10271.0044
γ 0.01020.31301.4386−0.00460.12021.31930.03010.12251.3294−0.03940.12041.3188−0.00090.11951.3142−0.04600.13071.3629
70 α 0.24800.19251.36830.03070.00660.28590.03380.00700.28970.02760.00630.28180.03170.00670.28570.02000.00590.2807
β 0.47940.34151.31080.04870.02280.52280.05590.02530.52930.04160.02090.51320.04950.02290.52450.03980.02280.5152
γ −0.09240.16671.26830.00370.02430.60740.01420.02450.6077−0.00010.02430.61300.00780.02430.6071−0.00080.02470.6169
150 α 0.19050.06630.67910.02910.00320.18520.03060.00330.18650.02770.00310.18340.02960.00330.18510.01920.00290.1820
β 0.41870.20660.69320.04710.01120.34770.05070.01140.35040.04040.01110.34360.04760.01110.34690.03840.01400.3442
γ −0.06780.12160.67910.00190.01040.39290.00530.01050.3931−0.00010.01040.39390.00220.01040.3930−0.00070.01050.3951
340 α 0.25340.14261.09800.08010.02580.53050.09030.02890.54580.07010.02300.51500.08300.02630.53030.04740.02120.5222
β 0.50370.36081.28310.14100.15421.11070.18250.23171.16440.10190.10871.04850.14540.15581.11430.09630.13471.0918
γ −0.06340.28331.0980−0.00790.11791.33280.02800.11831.3187−0.04380.12131.3426−0.00580.11751.3283−0.03110.12311.3580
70 α 0.21460.08040.72710.03200.00680.28870.03510.00720.29580.02880.00650.28750.03290.00690.28880.02110.00610.2931
β 0.45840.26730.93720.06390.02900.56070.07240.03340.57110.05540.02580.53890.06480.02910.56170.05350.02890.5414
γ −0.06810.12430.7271−0.00440.02590.62450.00330.02570.6209−0.01200.02620.6263−0.00390.02580.6247−0.00910.02620.6267
150 α 0.20830.05950.49780.02710.00320.19000.02850.00330.19200.02570.00310.18860.02760.00320.19010.02020.00290.1864
β 0.42200.20140.59920.05410.01180.33860.05770.01310.34290.05060.01070.33200.05450.01190.33940.05010.01080.3319
γ −0.05130.11730.49780.00220.01120.41000.00330.01120.4079−0.00130.01120.41160.00240.01120.40980.00010.01120.4120
Table 5. Bias, MSE and LCI for the MLE and Bayesian estimation methods for α = 2, β = 1.5.
Columns as in Table 3: MLE (Bias, MSE, LACI), followed by the Bayesian estimates under SELF, LINEX (c = −1.2), LINEX (c = 1.2), ELF (c = −1.2) and ELF (c = 1.2), each as Bias, MSE, LCCI; rows are indexed by γ, n and the parameter.
0.540 α 0.32040.86683.4284−0.02300.09981.21060.00990.10091.2289−0.05580.10151.1995−0.02010.09941.2044−0.05530.10641.2332
β 0.73500.76541.86100.11980.10961.08230.15370.14071.12660.08620.08871.04330.12330.11081.08500.08180.09541.0746
γ 0.17200.15553.42840.08100.02890.53300.09460.03430.56270.06770.02420.50750.08460.02990.54020.04080.01990.4891
70 α 0.27460.84033.4300−0.00380.02310.59060.00360.02300.5856−0.01120.02320.5944−0.00310.02300.5895−0.01070.02350.5982
β 0.65110.57621.53040.04420.02400.57590.05160.02510.58320.03680.02330.56740.04500.02410.57690.03490.02480.5713
γ 0.21210.12673.43000.02430.00780.31370.02790.00820.31640.02070.00740.31070.02540.00780.31380.01150.00710.3114
150 α 0.07610.12781.36980.00440.01090.38580.00350.01100.38700.00110.01090.38780.00350.01090.38560.00130.01090.3885
β 0.56330.35210.73150.04890.01130.35960.05220.01180.36390.04550.01080.35780.04920.01130.36010.03340.01070.3584
γ 0.14410.07021.36980.02320.00350.21050.02490.00370.21180.02140.00330.20900.02370.00350.21030.01110.00310.2043
1.740 α 0.80602.10204.72660.04280.08671.14780.07230.09331.15750.01350.08251.13260.04520.08681.14580.01560.08631.1548
β 0.56080.49561.66900.08740.06070.83780.11110.06740.87720.06420.05560.79730.09000.06060.84360.05920.06030.8156
γ 0.27740.84884.72660.05180.09601.16940.08210.10601.20570.02140.08851.14710.05470.09631.16790.01910.09331.1781
70 α 0.65431.14613.32320.01170.02290.59060.01870.02340.59550.00470.02240.58110.01230.02290.59050.00530.02270.5864
β 0.49020.31461.06900.04730.01900.48250.05370.01960.49240.04090.01870.47510.04810.01900.48370.03870.02150.4757
γ 0.12170.23253.32320.01210.02390.59730.01930.02440.59760.00480.02350.59640.01280.02390.59690.00420.02390.6000
150 α 0.52670.80223.16680.01100.01050.40680.01670.01060.40630.00380.01030.40670.01200.01050.40700.00410.01040.4079
β 0.35600.23640.88540.04690.00930.31960.05000.00980.32330.03780.00890.31550.04720.00940.32010.03430.00880.3155
γ -0.10460.20503.06680.01200.01070.40500.01560.01090.40740.00390.01060.40350.01270.01070.40440.00390.01060.4055
340 α 0.82341.45453.45590.07400.09711.17550.10590.10941.19470.04240.08781.14520.07660.09771.17570.04570.09231.1783
β 0.48730.34321.27530.09220.06540.79680.11540.08810.83050.06960.05040.75600.09460.06620.79910.06610.05720.7675
γ 0.21950.80113.45590.00580.11801.36090.04200.12161.3737−0.03020.11801.36140.00790.11781.3566−0.01700.12091.3838
70 α 0.62720.65061.98890.01030.02110.55490.01740.02160.55450.00310.02080.54820.01090.02120.55420.00370.02110.5534
β 0.42890.23120.85230.03370.02250.47430.04120.02240.48100.02630.02340.46640.03470.02190.47460.02150.03290.4695
γ 0.22170.18091.98890.00470.02490.61480.01440.02520.6171−0.00120.02460.60830.00710.02490.61620.00190.02480.6135
150 α 0.57380.61871.82510.01030.00940.37020.01840.00970.37210.00290.00930.36880.01050.00950.36980.00230.00930.3696
β 0.41700.19990.63330.03300.00750.30500.04240.00780.30670.02540.00720.30260.03400.00750.30520.02040.00720.3030
γ 0.04640.12631.82510.00460.01060.39800.00820.01070.39870.00120.01050.39750.00510.01060.39790.00180.01060.3994
Table 6. Bias, MSE and LCI for the MLE and Bayesian estimation methods for α = 2, β = 0.4.
Columns as in Table 3: MLE (Bias, MSE, LACI), followed by the Bayesian estimates under SELF, LINEX (c = −1.2), LINEX (c = 1.2), ELF (c = −1.2) and ELF (c = 1.2), each as Bias, MSE, LCCI; rows are indexed by γ, n and the parameter.
0.540 α 0.00830.33812.28010.00330.11691.30990.03860.11911.3204−0.03200.11811.31320.00640.11631.3090−0.03230.12681.3544
β 0.13790.02850.38210.09670.03280.33170.10680.05700.34330.08670.01790.31780.09940.03440.33620.06470.01480.3092
γ 0.19910.12622.28010.11470.03600.55850.12950.04280.57700.10000.02990.53910.11830.03730.56260.07220.02340.4975
70 α 0.04500.18681.68610.00780.02530.61770.01540.02570.62190.00020.02510.61640.00840.02530.61860.00080.02540.6197
β 0.12530.02070.27750.05860.00780.24000.06180.00840.24400.05540.00730.23660.05970.00800.23980.04590.00610.2308
γ 0.14780.05451.68610.05790.01100.33040.06230.01190.33970.05350.01020.32300.05920.01120.33120.04360.00890.3162
150 α −0.01860.29282.12090.01050.01160.41420.01400.01160.41530.00710.01150.41440.01080.01160.41400.00730.01160.4153
β 0.12090.01700.19230.05550.00490.16840.05700.00510.16990.05390.00470.16480.05600.00500.16930.04930.00410.1606
γ 0.16920.07422.12090.04840.00590.22180.05060.00620.23050.04620.00560.21790.04910.00600.22390.04130.00490.2118
1.740 α 0.34650.19531.07570.01810.11071.31600.05070.11431.3186−0.01420.10961.29980.02090.11041.3154−0.01330.11591.3541
β 0.10950.01590.24460.04430.00920.18890.04810.01560.19120.04070.00580.18610.04540.00970.18930.03250.00480.1839
γ 0.50750.40211.07570.08600.09251.10490.11650.10521.14010.05560.08231.08060.08880.09321.10160.05440.08641.1150
70 α 0.27390.18241.05940.00580.02580.59420.01310.02610.6032−0.00150.02560.59830.00640.02580.5922−0.00090.02590.6005
β 0.11060.01450.18560.03320.00250.13110.03440.00260.13280.03190.00240.12980.03360.00250.13160.02780.00210.1288
γ 0.62350.35961.05940.02650.02180.55310.03320.02250.55460.01970.02110.55110.02710.02180.55170.01920.02140.5540
150 α 0.29960.13790.86080.00410.01150.41270.01530.01170.41740.00870.01130.41120.01230.01150.41270.00900.01140.4123
β 0.10260.01160.12870.03230.00200.09170.03300.00230.09250.03160.00190.09130.03260.00210.09200.02960.00160.0903
γ 0.51190.32130.86080.02330.01110.40420.03620.01160.40830.02940.01070.39990.03310.01120.40440.02930.01080.4013
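The Monte Carlo results in Tables 3–6 follow the usual recipe: draw repeated samples from the KMKE model through its quantile function, maximise the log-likelihood for each sample, and average the estimation errors. The sketch below illustrates the MLE columns (Bias and MSE) under the same assumed parametrization as the sketch after Table 2; the optimiser, starting values, seed and number of replications are illustrative choices, not those used by the authors.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2023)
KM = np.e / (np.e - 1.0)

def kmke_pdf(x, a, b, c):
    u = 1.0 - np.exp(-a * x)
    g = 1.0 - (1.0 - u ** b) ** c                       # baseline KE cdf
    return KM * a * b * c * np.exp(-a * x) * u ** (b - 1.0) \
              * (1.0 - u ** b) ** (c - 1.0) * np.exp(-g)

def kmke_rvs(n, a, b, c):
    q = rng.uniform(size=n)
    g = -np.log(1.0 - q * (np.e - 1.0) / np.e)          # inverse of the KM transform
    return -np.log(1.0 - (1.0 - (1.0 - g) ** (1.0 / c)) ** (1.0 / b)) / a

def negloglik(log_theta, x):
    a, b, c = np.exp(log_theta)                         # log scale keeps parameters positive
    return -np.sum(np.log(kmke_pdf(x, a, b, c) + 1e-300))

def mle(x):
    res = minimize(negloglik, np.zeros(3), args=(x,), method="Nelder-Mead")
    return np.exp(res.x)

true = np.array([0.5, 0.4, 0.5])                        # one of the Table 3 settings
est = np.array([mle(kmke_rvs(70, *true)) for _ in range(500)])
print("Bias:", np.round(est.mean(axis=0) - true, 4))
print("MSE :", np.round(((est - true) ** 2).mean(axis=0), 4))
```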
Table 7. Estimates of MLE and various measures of fit for food chain data.
Models | α | β | γ | ρ | θ | AIC | CAIC | BIC | HQIC | CVM | AD | KSD | PVKS
KMKE | 0.085 | 34,770.449 | 89.070 | – | – | 103.606 | 106.593 | 105.106 | 104.189 | 0.033 | 0.246 | 0.094 | 0.994
SEWE | 25.458 | 5.854 | 0.097 | 0.010 | – | 105.516 | 108.183 | 109.499 | 106.294 | 0.032 | 0.232 | 0.097 | 0.991
EGWGP1 | 2.999 | 0.003 | 0.282 | 0.123 | 0.907 | 119.739 | 124.025 | 124.718 | 120.711 | 0.032 | 0.232 | 0.197 | 0.420
EGWGP2 | 72.716 | 45.047 | 1048.387 | 22.000 | 0.073 | 140.606 | 144.892 | 145.585 | 141.578 | 0.033 | 0.238 | 0.331 | 0.025
WL | 39.638 | 94.626 | 0.209 | 4.361 | – | 108.018 | 110.685 | 112.001 | 108.796 | 0.068 | 0.481 | 0.142 | 0.818
MOAPW | 8.685 | 13.482 | 14.556 | 94.164 | – | 108.963 | 111.629 | 112.946 | 109.740 | 0.049 | 0.370 | 0.131 | 0.880
EOWL | 57.762 | 0.923 | 1.414 | – | 163.848 | 106.082 | 108.749 | 110.065 | 106.860 | 0.028 | 0.218 | 0.100 | 0.988
MKITL | 112.748 | 0.174 | – | – | – | 104.023 | 104.729 | 106.014 | 104.412 | 0.068 | 0.482 | 0.142 | 0.817
OWITL | 113.746 | 82.382 | – | – | 0.170 | 106.022 | 107.522 | 109.009 | 106.605 | 0.068 | 0.482 | 0.142 | 0.817
EW | 38.762 | 132.052 | – | – | 55.135 | 106.086 | 107.586 | 109.073 | 106.669 | 0.069 | 0.488 | 0.142 | 0.813
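The fit measures reported in Table 7 (and Table 9 below) can be recomputed from a maximised log-likelihood with a few lines of code. The sketch below uses the standard definitions of AIC, CAIC (small-sample corrected AIC), BIC and HQIC together with SciPy's Kolmogorov–Smirnov and Cramér–von Mises tests; the exact CAIC/HQIC variants used in the paper should be checked against the formulas stated in the application section, and kmke_cdf, negloglik and mle_estimates in the commented usage are placeholders taken from the earlier sketches.

```python
import numpy as np
from scipy.stats import kstest, cramervonmises

def fit_measures(min_negll, k, x, fitted_cdf):
    """min_negll: minimised negative log-likelihood; k: number of fitted parameters;
    x: the data; fitted_cdf: the cdf evaluated at the MLEs, passed as a callable."""
    n = len(x)
    aic  = 2.0 * k + 2.0 * min_negll
    caic = aic + 2.0 * k * (k + 1.0) / (n - k - 1.0)   # small-sample corrected AIC
    bic  = k * np.log(n) + 2.0 * min_negll
    hqic = 2.0 * k * np.log(np.log(n)) + 2.0 * min_negll
    ks   = kstest(x, fitted_cdf)                       # KSD and its p-value (PVKS)
    cvm  = cramervonmises(x, fitted_cdf)               # Cramér–von Mises statistic
    return {"AIC": aic, "CAIC": caic, "BIC": bic, "HQIC": hqic,
            "CVM": cvm.statistic, "KSD": ks.statistic, "PVKS": ks.pvalue}

# Hypothetical usage with the earlier sketches:
# fit_measures(negloglik(np.log(mle_estimates), data), 3, data,
#              lambda t: kmke_cdf(t, *mle_estimates))
```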
Table 8. Point and interval estimates and SE for parameters of the KMKE model for food chain data.
Methods | Parameter | Estimates | SE | Lower | Upper | CV
MLE | α | 0.0849 | 0.0110 | 0.0632 | 0.1065 | 13.01%
MLE | β | 34,770.4490 | 2973.6521 | 28,942.0909 | 40,598.8070 | 8.55%
MLE | γ | 89.0704 | 40.1604 | 10.3561 | 167.7847 | 45.09%
Bayesian | α | 0.0848 | 0.0088 | 0.0674 | 0.1014 | 10.38%
Bayesian | β | 34,769.9281 | 172.2018 | 34,449.7473 | 35,119.9013 | 0.50%
Bayesian | γ | 89.0796 | 12.2706 | 64.5373 | 113.3496 | 13.77%
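The Bayesian rows of Table 8 can be approximated with a random-walk Metropolis sampler in the spirit of [96]. The sketch below is deliberately generic: logpost stands for the log-posterior (KMKE log-likelihood plus log-priors), built for instance from the pdf sketch given after Table 2, and mle_estimates is a placeholder for the starting values. The posterior mean gives the SELF point estimate, the 2.5% and 97.5% posterior quantiles give credible bounds, and the commented lines show the usual closed-form LINEX and general entropy (ELF) estimators with shape c (c = ±1.2 in Tables 3–6).

```python
import numpy as np

rng = np.random.default_rng(379)

def metropolis(logpost, start, n_iter=20_000, burn=5_000, step=0.05):
    """Random-walk Metropolis on the log-parameter scale (a minimal sketch)."""
    theta = np.array(start, dtype=float)
    lp = logpost(theta)
    draws = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject step
            theta, lp = prop, lp_prop
        draws[i] = theta
    return np.exp(draws[burn:])                         # back to the original scale

# Hypothetical usage, given a log-posterior built from the KMKE pdf:
# post = metropolis(logpost, start=np.log(mle_estimates))
# self_est     = post.mean(axis=0)                                  # SELF (posterior mean)
# lower, upper = np.quantile(post, [0.025, 0.975], axis=0)          # 95% credible bounds
# linex_est    = -np.log(np.mean(np.exp(-c * post), axis=0)) / c    # LINEX loss, shape c
# elf_est      = np.mean(post ** (-c), axis=0) ** (-1.0 / c)        # general entropy loss, shape c
```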
Table 9. Estimates of MLE and various measures of fit for food and drink wholesaling data.
Models | α | β | γ | ρ | θ | AIC | CAIC | BIC | HQIC | CVM | AD | KSD | PVKS
KMKE | 0.082 | 20,539.668 | 18.838 | – | – | 118.941 | 121.928 | 120.441 | 119.524 | 0.027 | 0.232 | 0.092 | 0.996
SEWE | 27.567 | 2.619 | 0.017 | 0.020 | – | 121.234 | 123.900 | 125.217 | 122.011 | 0.029 | 0.251 | 0.094 | 0.995
EGWGP | 7.494 | 0.054 | 4.458 | 1.189 | 0.650 | 123.381 | 127.667 | 128.359 | 124.353 | 0.031 | 0.267 | 0.100 | 0.989
WL | 0.002 | 45.047 | 0.350 | 13.751 | – | 124.276 | 126.942 | 128.259 | 125.053 | 0.072 | 0.523 | 0.149 | 0.765
MOAPW | 378.169 | 5.184 | 449.679 | 71.020 | – | 123.167 | 125.833 | 127.149 | 123.944 | 0.037 | 0.318 | 0.106 | 0.977
EOWL | 46.765 | 1.246 | 1.120 | – | 122.998 | 121.761 | 124.428 | 125.744 | 122.539 | 0.029 | 0.239 | 0.100 | 0.989
MKITL | 76.658 | 0.173 | – | – | – | 120.276 | 120.982 | 122.268 | 120.665 | 0.072 | 0.523 | 0.149 | 0.769
OWITL | 77.449 | 38.926 | – | – | 0.167 | 122.275 | 123.775 | 125.262 | 122.858 | 0.072 | 0.523 | 0.149 | 0.766
EW | 26.184 | 153.169 | – | – | 63.485 | 122.379 | 123.879 | 125.366 | 122.962 | 0.074 | 0.532 | 0.150 | 0.757
Table 10. Point and interval estimates and SE for parameters of the KMKE distribution: data 2.
Methods | Parameter | Estimates | SE | Lower | Upper | CV
MLE | α | 0.082 | 0.007 | 0.067 | 0.101 | 8.58%
MLE | β | 20,539.668 | 123.556 | 34,449.747 | 35,119.901 | 0.60%
MLE | γ | 18.838 | 7.919 | 64.537 | 113.350 | 42.04%
Bayesian | α | 0.082 | 0.007 | 0.068 | 0.095 | 8.46%
Bayesian | β | 20,539.536 | 11.230 | 20,517.754 | 20,561.557 | 0.05%
Bayesian | γ | 18.831 | 2.834 | 13.536 | 24.629 | 15.05%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
