🤖 AI Summary
This paper establishes the first objective Bayesian inference framework for the Dhillon distribution. Methodologically, it derives and systematically compares Jeffreys' prior, the reference prior, and the maximal data information prior; rigorously establishes sufficient conditions for posterior propriety and for the existence of posterior moments under the first two priors; and shows that the third prior induces an improper posterior. Bayesian estimation is implemented via the Metropolis–Hastings algorithm, and comprehensive simulation studies assess bias, mean squared error, and credible interval coverage, demonstrating that Bayesian estimators substantially outperform maximum likelihood estimators in small samples. The methodology's robustness and practical utility are further validated on real-world reliability data. This work fills a critical theoretical gap in objective Bayesian analysis of the Dhillon distribution and provides a generalizable Bayesian toolkit for reliability modeling.
📝 Abstract
In this work, we develop an objective Bayesian framework for the Dhillon probability distribution. We explicitly derive three objective priors: the Jeffreys prior, the overall reference prior, and the maximal data information prior. We show that both the Jeffreys and reference priors yield proper posterior distributions, whereas the maximal data information prior leads to an improper posterior. Moreover, we establish sufficient conditions for the existence of the corresponding posterior moments. Bayesian inference is carried out via Markov chain Monte Carlo, using the Metropolis–Hastings algorithm. A comprehensive simulation study compares the Bayesian estimators to maximum likelihood estimators in terms of bias, mean squared error, and coverage probability. Finally, a real-data application illustrates the practical utility of the proposed Bayesian approach.
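The Metropolis–Hastings step used for posterior sampling can be sketched as below. This is a minimal random-walk sampler for illustration only: the paper's actual target would be the Dhillon posterior under the Jeffreys or reference prior, whose density is not reproduced in this abstract, so a toy standard-normal log-density stands in for it. All function and parameter names here are illustrative, not the authors' code.

```python
import numpy as np

def metropolis_hastings(log_post, init, n_iter=20000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler (illustrative sketch).

    log_post : callable returning the unnormalized log posterior density;
               for the paper this would be the Dhillon posterior under
               the Jeffreys or reference prior (assumed, not given here).
    """
    rng = np.random.default_rng(seed)
    x = init
    lp = log_post(x)
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()            # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy stand-in target: standard normal (NOT the Dhillon posterior).
draws = metropolis_hastings(lambda x: -0.5 * x**2, init=0.0)
post = draws[5000:]  # discard burn-in before summarizing
```

Because the proposal is symmetric, the Hastings correction cancels and only the ratio of target densities enters the acceptance step; posterior means and credible intervals are then read off the retained draws.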