Learnable Chernoff Baselines for Inference-Time Alignment

📅 2026-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing inference-time reward-guided alignment methods often rely on specific model architectures or incur substantial computational overhead. This work proposes Learnable Chernoff Baselines (LCB), which approximates the exponentially tilted kernel via black-box rejection sampling, enabling efficient and fine-grained trade-offs between inference quality and computational cost without modifying the original model architecture. By integrating KL-regularized reward alignment with an adaptive acceptance probability mechanism, LCB provides theoretical guarantees on total variation error. Empirical results demonstrate that, in both continuous and discrete diffusion models, LCB achieves alignment performance approaching that of ideal rejection sampling while requiring significantly fewer queries to the pretrained model.

📝 Abstract
We study inference-time reward-guided alignment for generative models. Existing methods often rely on either architecture-specific adaptations or computationally costly inference procedures. We introduce Learnable Chernoff Baselines (LCBs) as a method for efficiently and approximately sampling from the exponentially tilted kernels that arise from KL-regularized reward alignment. Using only black-box sampling access to the pretrained model, LCBs implement a form of rejection sampling with adaptively selected acceptance probabilities, which allows fine-grained control over inference-compute scaling. We establish total-variation guarantees to the ideal aligned model, and demonstrate in both continuous and discrete diffusion settings that LCB sampling closely matches ideal rejection sampling while using substantially fewer queries to the pretrained model.
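The abstract's core mechanism — rejection sampling from the exponentially tilted kernel \(\pi^*(x) \propto \pi_0(x)\exp(r(x)/\beta)\) induced by KL-regularized reward alignment, with an acceptance probability governed by a tunable baseline — can be illustrated with a toy sketch. Everything concrete below (the Gaussian stand-in for the pretrained model `pi0`, the reward `reward`, the constant `BETA`, and the function `tilted_rejection_sample`) is an illustrative assumption, not the paper's actual models or algorithm:

```python
import math
import random

rng = random.Random(0)

BETA = 0.5  # hypothetical KL-regularization strength


def reward(x):
    """Toy reward peaked at x = 1 (the paper's rewards are task-specific)."""
    return -abs(x - 1.0)


def tilted_rejection_sample(n, baseline, max_queries=100_000):
    """Approximately sample pi*(x) ∝ pi0(x) exp(r(x)/BETA) by rejection.

    Draw x from pi0 (here a standard Gaussian, standing in for black-box
    sampling access to the pretrained model) and accept with probability
    min(1, exp((r(x) - baseline) / BETA)).  A baseline at or above max r
    reproduces exact rejection sampling; a smaller baseline accepts more
    often (fewer pretrained-model queries) at the cost of total-variation
    bias -- the compute/quality trade-off the abstract describes.
    """
    samples, queries = [], 0
    while len(samples) < n and queries < max_queries:
        x = rng.gauss(0.0, 1.0)  # one black-box query to pi0
        queries += 1
        if rng.random() < min(1.0, math.exp((reward(x) - baseline) / BETA)):
            samples.append(x)
    return samples, queries


# baseline = 0.0 equals max r here, so this run is exact rejection sampling;
# the looser baseline = -1.0 run trades some bias for far fewer queries.
exact, q_exact = tilted_rejection_sample(2000, baseline=0.0)
cheap, q_cheap = tilted_rejection_sample(2000, baseline=-1.0)
```

In this sketch the looser baseline accepts strictly more often pointwise, so it reaches the same sample count with fewer queries, while the accepted distribution drifts away from the ideal tilted target — the quantity the paper bounds in total variation.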
Problem

Research questions and friction points this paper is trying to address.

inference-time alignment
reward-guided generation
generative models
efficient sampling
KL-regularized alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable Chernoff Baselines
inference-time alignment
rejection sampling
KL-regularized reward alignment
black-box sampling