🤖 AI Summary
This study clarifies when loss-driven generalized Bayesian posterior updates admit a genuine Bayesian interpretation. Within a decision-theoretic framework, it distinguishes, for the first time, between “belief posteriors” and “decision posteriors.” Building on the Savage and Anscombe–Aumann axiomatic systems, exponential tilting, and sequential-coherence assumptions, and leveraging variational representations with entropy regularization, the work shows that generalized Bayesian updating coincides with standard Bayesian updating if and only if the loss is, up to scale and a data-only term, the negative log-likelihood. The paper establishes necessary and sufficient conditions for generalized Bayes to be an optimal decision rule, shows that non-degenerate posteriors require nonlinear preferences over decision rules, and argues that marginal likelihoods and Bayes factors lack intrinsic evidential meaning within the decision-posterior framework.
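For intuition, here is a minimal numerical sketch of the exponential-tilting update on a parameter grid (the helper name `gibbs_posterior` and the learning-rate parameter `lam` are illustrative choices, not the paper's notation). It also illustrates the characterization above: squared-error loss is the Gaussian negative log-likelihood up to scale and a data-only constant, so the tilted posterior it produces coincides with ordinary Bayes, whereas an absolute-error loss yields a genuinely different "decision posterior."

```python
import numpy as np
from scipy.stats import norm

# Generalized (Gibbs) posterior via exponential tilting on a grid:
#   pi_n(theta)  ∝  pi(theta) * exp(-lam * sum_i loss(theta, x_i))
# With loss = negative log-likelihood and lam = 1, this is ordinary Bayes.

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=50)    # data from N(1, 1), variance known
theta = np.linspace(-3.0, 4.0, 2001)           # grid over the unknown mean
prior = norm.pdf(theta, loc=0.0, scale=2.0)    # N(0, 4) prior on the mean

def gibbs_posterior(loss, lam):
    """Tilt the prior by exp(-lam * total loss) and normalize on the grid."""
    log_post = np.log(prior) - lam * loss.sum(axis=1)
    log_post -= log_post.max()                 # stabilize before exponentiating
    post = np.exp(log_post)
    return post / np.trapz(post, theta)

nll = -norm.logpdf(x[None, :], loc=theta[:, None], scale=1.0)  # neg log-likelihood
sq = (x[None, :] - theta[:, None]) ** 2                        # squared error
ab = np.abs(x[None, :] - theta[:, None])                       # absolute error

bayes = gibbs_posterior(nll, lam=1.0)
sq_post = gibbs_posterior(sq, lam=0.5)   # sq = 2*(nll - data-only constant)
abs_post = gibbs_posterior(ab, lam=1.0)  # not nll up to scale and a data term

print(np.max(np.abs(bayes - sq_post)))   # ~0: coincides with ordinary Bayes
print(np.max(np.abs(bayes - abs_post)))  # > 0: a genuinely different posterior
```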
📝 Abstract
Loss-based updating, including generalized Bayes, Gibbs, and quasi-posteriors, replaces the likelihood with a user-chosen loss and produces a posterior-like distribution via exponential tilting. We give a decision-theoretic characterization that separates \emph{belief posteriors} -- conditional beliefs justified by the foundations of Savage and Anscombe-Aumann under a joint probability model -- from \emph{decision posteriors} -- randomized decision rules justified by preferences over decision rules. We make explicit that a loss-based posterior coincides with ordinary Bayes if and only if the loss is, up to scale and a data-only term, the negative log-likelihood. We then show that the generalized marginal likelihood is not evidence for decision posteriors, and that Bayes factors are not well-defined without additional structure. In the decision-posterior regime, non-degenerate posteriors require nonlinear preferences over decision rules. Under sequential coherence and separability, such preferences lead to an entropy-penalized variational representation that yields generalized Bayes as the optimal rule.
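For concreteness, a standard statement of the entropy-penalized variational representation the abstract alludes to (the notation $\lambda$ for the learning rate and $\ell$ for the loss is ours; the paper may parameterize differently): the generalized Bayes posterior is the unique minimizer of expected cumulative loss plus a Kullback-Leibler penalty toward the prior, and its minimizer is exactly the exponentially tilted prior.

```latex
\hat{\pi}_n \;=\; \operatorname*{arg\,min}_{\rho \ll \pi}
\Big\{ \lambda\, \mathbb{E}_{\theta \sim \rho}\Big[\textstyle\sum_{i=1}^{n} \ell(\theta, x_i)\Big]
\;+\; \mathrm{KL}(\rho \,\|\, \pi) \Big\},
\qquad
\frac{d\hat{\pi}_n}{d\pi}(\theta) \;\propto\; \exp\!\Big(-\lambda \sum_{i=1}^{n} \ell(\theta, x_i)\Big).
```

When $\ell$ is the negative log-likelihood and $\lambda = 1$, the tilt is the likelihood itself and $\hat{\pi}_n$ reduces to the ordinary Bayes posterior, matching the characterization stated above.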