Calibrated Test-Time Guidance for Bayesian Inference

📅 2026-02-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a fundamental limitation in existing test-time guidance methods for diffusion models, which fail to correctly sample from the Bayesian posterior due to structural approximation errors, thereby introducing inference bias. The paper is the first to identify and rectify this structural bias by proposing a theoretically consistent alternative estimator that enables calibrated posterior sampling. Built upon the test-time guidance framework of diffusion models, the method introduces a consistency-aware estimator designed to preserve the fidelity of Bayesian inference. Experimental results demonstrate that the proposed approach significantly outperforms current methods across multiple Bayesian inference tasks and achieves state-of-the-art performance in black hole image reconstruction.

Technology Category

Application Category

📝 Abstract
Test-time guidance is a widely used mechanism for steering pretrained diffusion models toward outcomes specified by a reward function. Existing approaches, however, focus on maximizing reward rather than sampling from the true Bayesian posterior, leading to miscalibrated inference. In this work, we show that common test-time guidance methods do not recover the correct posterior distribution and identify the structural approximations responsible for this failure. We then propose consistent alternative estimators that enable calibrated sampling from the Bayesian posterior. We significantly outperform previous methods on a set of Bayesian inference tasks, and match state-of-the-art in black hole image reconstruction.
Problem

Research questions and friction points this paper is trying to address.

Bayesian inference
test-time guidance
posterior calibration
diffusion models
miscalibrated inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time guidance
Bayesian inference
posterior calibration
diffusion models
consistent estimators
🔎 Similar Papers
No similar papers found.
D
Daniel Geyfman
Department of Computer Science, University of California, Irvine, CA, USA
Felix Draxler
Felix Draxler
University of California, Irvine
Machine LearningGenerative ModelingNormalizing Flows
J
Jan Groeneveld
Department of Computer Science, University of California, Irvine, CA, USA
H
Hyunsoo Lee
Seoul National University, Seoul, South Korea
Theofanis Karaletsos
Theofanis Karaletsos
Head of AI, CZI-Science | Achira.ai
Generative AIAI x ScienceProbabilistic Modeling
Stephan Mandt
Stephan Mandt
Associate Professor, University of California, Irvine
Artificial IntelligenceMachine LearningCompressionAI for ScienceGenerative Models