🤖 AI Summary
This paper addresses the insufficient robustness of historical–current data fusion under the Bayesian framework. We propose the generalized power prior, which replaces the Kullback–Leibler divergence in the conventional power prior with the Amari α-divergence—the first integration of α-divergence into the power prior paradigm. This reformulation reveals its geometric interpretation as a generalized geodesic on the probability manifold, endowing the prior with adaptive tolerance to historical data bias. Theoretically, we establish the optimality of the resulting generalized power posterior and characterize its Riemannian geometric structure. Empirically, the α-parameter can be adaptively estimated from data, yielding significant improvements in inference accuracy—particularly in small-sample and heterogeneous historical-data settings.
📝 Abstract
The power prior is a class of informative priors designed to incorporate historical data alongside current data in a Bayesian framework. It includes a power parameter that controls the influence of historical data, providing flexibility and adaptability. A key property of the power prior is that the resulting posterior minimizes a linear combination of KL divergences between two pseudo-posterior distributions: one ignoring historical data and the other fully incorporating it. We extend this framework by identifying the posterior distribution as the minimizer of a linear combination of Amari's $alpha$-divergence, a generalization of KL divergence. We show that this generalization can lead to improved performance by allowing for the data to adapt to appropriate choices of the $alpha$ parameter. Theoretical properties of this generalized power posterior are established, including behavior as a generalized geodesic on the Riemannian manifold of probability distributions, offering novel insights into its geometric interpretation.