Bayesian Estimation of a New Pareto-Type Distribution Based on Mixed Gibbs Sampling Algorithm
Abstract
1. Introduction
2. Maximum Likelihood Estimation of NP Distribution
2.1. Complete Samples Case
2.2. Type II Censored Samples Case
3. Bayesian Estimation of NP Distribution
3.1. Complete Samples Case
- Step 0
- Choose an initial value of (α⁽⁰⁾, β⁽⁰⁾). Denote the value of the ith iteration as (α⁽ⁱ⁾, β⁽ⁱ⁾). Then, the value of the (i+1)th iteration can be obtained using the following Step 1 and Step 2.
- Step 1
- Obtain α⁽ⁱ⁺¹⁾ from the full conditional distribution of α using the Metropolis–Hastings algorithm in this step, which consists of the following four steps (1)∼(4).
- (1)
- Draw a proposal α* ~ N(α⁽ⁱ⁾, σ₁²), where α⁽ⁱ⁾ is the current state and σ₁ is the standard deviation of the proposal distribution. Obtain a sample α*; if α* ≤ 0, then a new sampling is required.
- (2)
- Calculate the acceptance probability p = min{1, π(α* | β⁽ⁱ⁾, x)/π(α⁽ⁱ⁾ | β⁽ⁱ⁾, x)} as (10) below.
- (3)
- Generate a random number u from the uniform distribution U(0, 1), and then obtain α⁽ⁱ⁺¹⁾ according to the following conditions (11): α⁽ⁱ⁺¹⁾ = α* if u ≤ p, and α⁽ⁱ⁺¹⁾ = α⁽ⁱ⁾ otherwise.
- (4)
- Set α⁽ⁱ⁾ = α⁽ⁱ⁺¹⁾, then return to Step 1 (1).

The resulting sequence {α⁽ⁱ⁾} is a Markov chain. Step 1 ends when the Markov chain reaches equilibrium, and then we can obtain α⁽ⁱ⁺¹⁾.
- Step 2
- Obtain β⁽ⁱ⁺¹⁾ from the full conditional distribution of β using the Metropolis–Hastings algorithm in this step.
- (1)
- Draw a proposal β* ~ N(β⁽ⁱ⁾, σ₂²), where β⁽ⁱ⁾ is the current state and σ₂ is the standard deviation of the proposal distribution. Obtain a sample β*; if β* ≤ 0 or β* > x₍₁₎, then a new sampling is required.
- (2)
- Calculate the acceptance probability p = min{1, π(β* | α⁽ⁱ⁺¹⁾, x)/π(β⁽ⁱ⁾ | α⁽ⁱ⁺¹⁾, x)} as (12).
- (3)
- Generate a random number u from the uniform distribution U(0, 1), and then obtain β⁽ⁱ⁺¹⁾ according to the following conditions (13): β⁽ⁱ⁺¹⁾ = β* if u ≤ p, and β⁽ⁱ⁺¹⁾ = β⁽ⁱ⁾ otherwise.
- (4)
- Set β⁽ⁱ⁾ = β⁽ⁱ⁺¹⁾, then return to Step 2 (1).

Similarly, the resulting sequence {β⁽ⁱ⁾} is a Markov chain. We can obtain β⁽ⁱ⁺¹⁾ when the Markov chain reaches equilibrium.
- Step 3
- Let i = i + 1, and repeat Step 1 and Step 2 in sequence until the Markov chains reach equilibrium.
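The complete-sample algorithm above can be sketched in Python. The sketch assumes the NP density of Bourguignon et al. (2016), f(t) = 2αβ^α t^(−α−1)(1 + (β/t)^α)^(−2) for t ≥ β, with flat priors; the function names, proposal standard deviations, and initial values are illustrative assumptions, and the paper's own priors and conditionals in (10)–(13) may differ.

```python
import math
import random

def log_post(alpha, beta, x):
    # Log-posterior under flat priors (an assumption; the paper's priors
    # may differ).  Assumed NP density (Bourguignon et al., 2016):
    # f(t) = 2*alpha*beta**alpha * t**(-alpha-1) / (1 + (beta/t)**alpha)**2.
    n = len(x)
    if alpha <= 0 or beta <= 0 or beta > min(x):
        return -math.inf
    ll = n * math.log(2 * alpha) + n * alpha * math.log(beta)
    for xi in x:
        ll -= (alpha + 1) * math.log(xi)
        ll -= 2 * math.log1p((beta / xi) ** alpha)
    return ll

def mh_step(value, logpost, sigma, lower=0.0, upper=math.inf):
    # One random-walk Metropolis-Hastings update; proposals outside
    # (lower, upper) are redrawn, mirroring the "new sampling" rule in
    # Steps 1 (1) and 2 (1).
    while True:
        prop = random.gauss(value, sigma)
        if lower < prop < upper:
            break
    # Symmetric proposal: accept with probability min{1, pi(prop)/pi(value)}.
    if math.log(random.random()) < logpost(prop) - logpost(value):
        return prop
    return value

def gibbs(x, n_iter=2000, sigma_a=0.5, sigma_b=0.05, seed=1):
    random.seed(seed)
    alpha, beta = 1.0, 0.9 * min(x)          # Step 0: initial values
    chain = []
    for _ in range(n_iter):                  # Step 3: alternate Steps 1 and 2
        alpha = mh_step(alpha, lambda a: log_post(a, beta, x), sigma_a)
        beta = mh_step(beta, lambda b: log_post(alpha, b, x), sigma_b,
                       upper=min(x))
        chain.append((alpha, beta))
    return chain
```

A usage sketch: `chain = gibbs(data)`; discarding a burn-in and averaging the remaining draws gives the Bayes estimates under squared-error loss.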
3.2. Type II Censored Samples Case
- Step 0
- Give an initial value for the Gibbs sampling. For simplicity, the value of the ith iteration is still denoted as (α⁽ⁱ⁾, β⁽ⁱ⁾). Then, we can use the following Step 1 and Step 2 to obtain (α⁽ⁱ⁺¹⁾, β⁽ⁱ⁺¹⁾).
- Step 1
- Obtain a sample α⁽ⁱ⁺¹⁾ from the full conditional distribution of α under the type II censored likelihood using the Metropolis–Hastings algorithm in this step.
- (1)
- Draw a proposal α* ~ N(α⁽ⁱ⁾, σ₁²), where α⁽ⁱ⁾ is the current state and σ₁ is the standard deviation of the proposal distribution. Obtain a sample α*; if α* ≤ 0, then a new sampling is required.
- (2)
- Calculate the acceptance probability p = min{1, π(α* | β⁽ⁱ⁾, x)/π(α⁽ⁱ⁾ | β⁽ⁱ⁾, x)}, denoted as (17).
- (3)
- Generate a random number u from the uniform distribution U(0, 1), and then obtain α⁽ⁱ⁺¹⁾ according to the following (18): α⁽ⁱ⁺¹⁾ = α* if u ≤ p, and α⁽ⁱ⁺¹⁾ = α⁽ⁱ⁾ otherwise.
- (4)
- Set α⁽ⁱ⁾ = α⁽ⁱ⁺¹⁾, then return to Step 1(1).

The resulting sequence {α⁽ⁱ⁾} is a Markov chain, and we can obtain α⁽ⁱ⁺¹⁾ when the Markov chain reaches equilibrium.
- Step 2
- Obtain a sample β⁽ⁱ⁺¹⁾ from the full conditional distribution of β under the type II censored likelihood using the Metropolis–Hastings algorithm in this step.
- (1)
- Draw a proposal β* ~ N(β⁽ⁱ⁾, σ₂²), where β⁽ⁱ⁾ is the current state and σ₂ is the standard deviation of the proposal distribution. Obtain a sample β*; if β* ≤ 0 or β* > x₍₁₎, then a new sampling is required.
- (2)
- Calculate the acceptance probability p = min{1, π(β* | α⁽ⁱ⁺¹⁾, x)/π(β⁽ⁱ⁾ | α⁽ⁱ⁺¹⁾, x)} as (19).
- (3)
- Generate a random number u from the uniform distribution U(0, 1), and then obtain β⁽ⁱ⁺¹⁾ according to the following conditions (20): β⁽ⁱ⁺¹⁾ = β* if u ≤ p, and β⁽ⁱ⁺¹⁾ = β⁽ⁱ⁾ otherwise.
- (4)
- Set β⁽ⁱ⁾ = β⁽ⁱ⁺¹⁾, then return to Step 2(1).

Similarly, the resulting sequence {β⁽ⁱ⁾} is a Markov chain, and we can obtain β⁽ⁱ⁺¹⁾ when the Markov chain reaches equilibrium.
- Step 3
- Let i = i + 1, and repeat Step 1 and Step 2 in sequence until the Markov chains reach equilibrium.
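For the type II censored case, only the target density changes: the r observed order statistics contribute density terms and the n − r censored observations contribute survival terms. A sketch of the censored log-posterior, again assuming the NP form of Bourguignon et al. (2016) and flat priors (the helper names are hypothetical); the same Metropolis–Hastings machinery as in the complete-sample sketch can then be run against it.

```python
import math

def np_logpdf(x, alpha, beta):
    # log f(x) under the assumed NP density (Bourguignon et al., 2016)
    t = (beta / x) ** alpha
    return (math.log(2 * alpha) + alpha * math.log(beta)
            - (alpha + 1) * math.log(x) - 2 * math.log1p(t))

def np_logsf(x, alpha, beta):
    # log survival: log(1 - F(x)) = log(2t / (1 + t)), t = (beta/x)^alpha
    t = (beta / x) ** alpha
    return math.log(2 * t) - math.log1p(t)

def censored_log_post(alpha, beta, xr, n):
    # Log-posterior (flat priors assumed) under type II censoring: only
    # the r smallest order statistics xr of a sample of size n are seen.
    r = len(xr)
    if alpha <= 0 or beta <= 0 or beta > min(xr):
        return -math.inf
    ll = sum(np_logpdf(x, alpha, beta) for x in xr)
    ll += (n - r) * np_logsf(max(xr), alpha, beta)   # censored tail
    return ll
```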
4. Numerical Studies
4.1. Simulation Studies
4.2. Real Data Analysis
5. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Qian, F.; Zheng, W. An evolutionary nested sampling algorithm for Bayesian model updating and model selection using modal measurement. Eng. Struct. 2017, 140, 298–307.
- Kuipers, J.; Suter, P.; Moffa, G. Efficient Sampling and Structure Learning of Bayesian Networks. J. Comput. Graph. Stat. 2022, 31, 639–650.
- Alhamzawi, R.; Ali, H.T.M. A New Gibbs Sampler for Bayesian Lasso. Commun. Stat. Simul. Comput. 2020, 49, 1855–1871.
- Pareto, V. Cours d’Économie Politique; F. Rouge: Lausanne, Switzerland, 1897.
- Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions; John Wiley and Sons: London, UK, 1994; Volume 1, Chapter 18.
- Bourguignon, M.; Saulo, H.; Fernandez, R.N. A new Pareto-type distribution with applications in reliability and income data. Physica A 2016, 457, 166–175.
- Raof, A.S.A.; Haron, M.A.; Safari, A.; Siri, Z. Modeling the Incomes of the Upper-Class Group in Malaysia using New Pareto-Type Distribution. Sains Malays. 2022, 51, 3437–3448.
- Karakaya, K.; Akdoğan, Y.; Nik, A.S.; Kuş, C.; Asgharzadeh, A. A Generalization of New Pareto-Type Distribution. Ann. Data Sci. 2022, 2022, 1–15.
- Sarabia, J.M.; Jordá, V.; Prieto, F. On a new Pareto-type distribution with applications in the study of income inequality and risk analysis. Physica A 2019, 527, 121277.
- Nik, A.S.; Asgharzadeh, A.; Nadarajah, S. Comparisons of methods of estimation for a new Pareto-type distribution. Statistics 2019, 79, 291–319.
- Nik, A.S.; Asgharzadeh, A.; Raqab, M.Z. Estimation and prediction for a new Pareto-type distribution under progressive type-II censoring. Math. Comput. Simul. 2021, 190, 508–530.
- Soliman, A.A. Reliability estimation in a generalized life-model with application to the Burr-XII. IEEE Trans. Reliab. 2002, 51, 337–343.
- Soliman, A.A. Estimation of parameters of life from progressively censored data using Burr-XII model. IEEE Trans. Reliab. 2005, 54, 34–42.
- Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. 1984, 6, 721–741.
- Ahmed, H.H. Best Prediction Method for Progressive Type-II Censored Samples under New Pareto Model with Applications. J. Math. 2021, 2021, 1355990.
- Ibrahim, M.; Ali, M.M.; Yousof, H.M. The Discrete Analogue of the Weibull G Family: Properties, Different Applications, Bayesian and Non-Bayesian Estimation Methods. Ann. Data Sci. 2023, 10, 1069–1106.
- Hastings, W.K. Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika 1970, 57, 97–109.
- Linhart, H.; Zucchini, W. Model Selection; John Wiley and Sons: New York, NY, USA, 1986.
- Murthy, D.N.P.; Xie, M.; Jiang, R. Weibull Models; John Wiley and Sons: Hoboken, NJ, USA, 2004.
Loss Function | True (α, β) | Estimate of α | Estimate of β | 95% Credible Interval for α | 95% Credible Interval for β
---|---|---|---|---|---
LINEX loss | (2, 1) | 1.8271 | 0.9938 | [1.5392, 2.1614] | [0.9631, 1.0052]
 | (3, 1.5) | 2.9647 | 1.4964 | [2.5138, 3.4898] | [1.4691, 1.5067]
 | (4, 1) | 3.9284 | 0.9955 | [3.3583, 4.6654] | [0.9819, 1.0006]
Square loss | (2, 1) | 2.0986 | 0.9977 | [1.7768, 2.4561] | [0.9718, 1.0067]
 | (3, 1.5) | 2.8759 | 1.4952 | [2.4206, 3.3609] | [1.4685, 1.5056]
 | (4, 1) | 3.9840 | 0.9977 | [3.3579, 4.6719] | [0.9845, 1.0027]
Loss Function | True (α, β) | Estimate of α | Estimate of β | 95% Credible Interval for α | 95% Credible Interval for β
---|---|---|---|---|---
LINEX loss | (2, 1) | 1.9896 | 0.9908 | [1.2957, 3.0046] | [0.9630, 1.0008]
 | (3, 1.5) | 2.9361 | 1.4974 | [1.9785, 4.5394] | [1.4681, 1.5077]
 | (4, 1) | 4.1126 | 0.9988 | [2.8326, 6.5555] | [0.9855, 1.0037]
Square loss | (2, 1) | 1.9576 | 0.9946 | [1.2310, 2.8416] | [0.9637, 1.0051]
 | (3, 1.5) | 3.1365 | 1.4977 | [1.9785, 4.5394] | [1.4681, 1.5077]
 | (4, 1) | 4.3141 | 0.9955 | [3.5397, 6.2427] | [0.9816, 1.0002]
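The two loss functions in the tables lead to different point estimates from the same posterior draws: squared-error loss gives the posterior mean, while LINEX loss L(Δ) = e^(aΔ) − aΔ − 1 gives the estimator −(1/a) ln E[e^(−aθ)]. A minimal sketch; the LINEX shape parameter a = 1.0 is an illustrative choice, not a value taken from the paper.

```python
import math

def bayes_estimates(draws, a=1.0):
    # Bayes estimates from posterior draws: posterior mean under
    # squared-error loss, and -(1/a) * log E[exp(-a*theta)] under
    # LINEX loss with shape parameter a (illustrative value).
    n = len(draws)
    mean = sum(draws) / n
    linex = -math.log(sum(math.exp(-a * d) for d in draws) / n) / a
    return {"square": mean, "linex": linex}
```

For a > 0, Jensen's inequality implies the LINEX estimate computed from a given set of draws is never larger than their mean, so LINEX penalizes overestimation more heavily.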
Sample Case | n | Estimation Method | (Mean of α Estimate, MSE) | (Mean of β Estimate, MSE)
---|---|---|---|---
Complete samples | 20 | BE | (4.0457, 0.0389) | (0.9896, 0.0002)
 | | MLE | (4.4374, 0.4789) | (1.0287, 0.0014)
Type II censored samples (20%) | 100 | BE | (4.1136, 0.1671) | (0.9989, 1.6214 × 10) |
 | | MLE | (4.0985, 0.4564) | (1.0056, 5.3148 × 10)
r | Parameters | BE | 2.5% Percentile | 50% Percentile | 97.5% Percentile
---|---|---|---|---|---
 | | 0.3329 | 0.2152 | 0.3304 | 0.4645
 | | 0.7609 | 0.3180 | 0.7963 | 0.9923
 | | 0.4279 | 0.3152 | 0.4264 | 0.5590
 | | 0.8723 | 0.5896 | 0.9016 | 0.9956
Estimation Method | Sample Case | Parameter | Estimation Results
---|---|---|---
MLE | Complete samples | | 2.871
 | | | 0.067
BE | Complete samples | | 2.823
 | | | 0.065
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Li, F.; Wei, S.; Zhao, M. Bayesian Estimation of a New Pareto-Type Distribution Based on Mixed Gibbs Sampling Algorithm. Mathematics 2024, 12, 18. https://doi.org/10.3390/math12010018