PITA: Preference-Guided Inference-Time Alignment for LLM Post-Training

📅 2025-07-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of aligning large language model (LLM) outputs with user preferences at inference time—without fine-tuning the base model or requiring access to a pre-trained reward model. Methodologically, it formulates preference alignment as a gradient-free, iterative probabilistic correction process grounded in stochastic search and optimization. The framework integrates KL-divergence regularization against the base LLM, a lightweight guidance model, explicit preference-guidance strategies, and token-level probability reweighting—enabling dynamic control over generation without fine-tuning the base LLM. Its key contribution is formulating preference-distribution learning as a low-overhead, inference-time optimization problem that avoids gradient computation through the base model entirely. Experiments on mathematical reasoning and sentiment classification show improved consistency of outputs with user preferences, together with low computational overhead, robustness across diverse prompts, and practical deployability.

📝 Abstract
Inference-time alignment enables large language models (LLMs) to generate outputs aligned with end-user preferences without further training. Recent post-training methods achieve this by using small guidance models to modify token generation during inference. These methods typically optimize a reward function KL-regularized by the original LLM taken as the reference policy. A critical limitation, however, is their dependence on a pre-trained reward model, which requires fitting to human preference feedback--a potentially unstable process. In contrast, we introduce PITA, a novel framework that integrates preference feedback directly into the LLM's token generation, eliminating the need for a reward model. PITA learns a small preference-based guidance policy to modify token probabilities at inference time without LLM fine-tuning, reducing computational cost and bypassing the pre-trained reward model dependency. The problem is framed as identifying an underlying preference distribution, solved through stochastic search and iterative refinement of the preference-based guidance model. We evaluate PITA across diverse tasks, including mathematical reasoning and sentiment classification, demonstrating its effectiveness in aligning LLM outputs with user preferences.
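The KL-regularized objective the abstract describes has a well-known closed form: the aligned policy is proportional to the reference policy times an exponentiated score, π(t) ∝ π_ref(t)·exp(β·g(t)). The sketch below illustrates how a small guidance model's per-token scores could reweight the base LLM's next-token distribution at decode time; the function and parameter names (`guidance_scores`, `beta`) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def reweight_token_probs(base_logits, guidance_scores, beta=1.0):
    """Combine base-LLM logits with a guidance model's per-token scores.

    Samples from pi(t) proportional to pi_ref(t) * exp(beta * g(t)),
    the closed-form solution of a KL-regularized objective that keeps
    the base LLM as the reference policy. Toy sketch, not PITA's exact code.
    """
    combined = base_logits + beta * guidance_scores
    combined = combined - combined.max()   # shift for numerical stability
    probs = np.exp(combined)
    return probs / probs.sum()
```

With `beta = 0` (or zero guidance scores) this reduces to the base model's own softmax, so the guidance model only perturbs generation where the learned preference signal is non-zero.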
Problem

Research questions and friction points this paper is trying to address.

Aligns LLM outputs with user preferences without fine-tuning the base model
Eliminates dependency on pre-trained reward models
Modifies token probabilities at inference time efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Directly integrates preference feedback into token generation
Eliminates need for pre-trained reward models
Uses stochastic search for preference distribution identification
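The last point—recovering a latent preference distribution from pairwise feedback via stochastic search, without gradients through the LLM—can be sketched with a toy Elo-style update under a Bradley–Terry preference model. The `prefer` callback stands in for the user feedback PITA consumes; the update rule and constants here are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def refine_preferences(candidates, prefer, rounds=200, lr=0.1, seed=0):
    """Iteratively estimate latent preference scores from pairwise feedback.

    prefer(a, b) -> True if a is preferred over b (e.g. user feedback).
    Randomly sampled pairs are compared and scores nudged toward the
    observed outcome under a Bradley-Terry model. Toy illustration only.
    """
    rng = random.Random(seed)
    scores = {c: 0.0 for c in candidates}
    for _ in range(rounds):
        a, b = rng.sample(candidates, 2)
        # Bradley-Terry probability that a beats b under current scores
        p_a = 1.0 / (1.0 + math.exp(scores[b] - scores[a]))
        outcome = 1.0 if prefer(a, b) else 0.0
        delta = lr * (outcome - p_a)
        scores[a] += delta
        scores[b] -= delta
    return scores
```

Because only sampled comparisons and score nudges are involved, no gradients flow through the base LLM—the spirit of the gradient-free, inference-time framing above.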