Inference-time Alignment in Continuous Space

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing inference-time alignment methods rely on discrete response sampling, which struggles to explore high-quality outputs when the base policy is weak or the candidate set is small, limiting alignment effectiveness. This paper proposes Simple Energy Adaptation (SEA), the first approach to formulate inference-time alignment as iterative optimization of an energy function over a continuous latent space. SEA uses gradient-driven sampling grounded in energy-based models to directly optimize the latent representations of the base policy, removing the dependence on multi-response generation and discrete search. Its core innovation is defining an energy function over actions that is oriented toward the optimal policy, enabling efficient gradient-based updates in the continuous space. SEA achieves relative performance gains of up to 77.51% on AdvBench and 16.36% on MATH over the second-best baseline. The code is publicly available.

📝 Abstract
Aligning large language models with human feedback at inference time has received increasing attention due to its flexibility. Existing methods rely on generating multiple responses from the base policy for search using a reward model, which can be considered as searching in a discrete response space. However, these methods struggle to explore informative candidates when the base policy is weak or the candidate set is small, resulting in limited effectiveness. In this paper, to address this problem, we propose Simple Energy Adaptation (**SEA**), a simple yet effective algorithm for inference-time alignment. In contrast to expensive search over the discrete space, SEA directly adapts original responses from the base policy toward the optimal one via gradient-based sampling in continuous latent space. Specifically, SEA formulates inference as an iterative optimization procedure on an energy function over actions in the continuous space defined by the optimal policy, enabling simple and effective alignment. For instance, despite its simplicity, SEA outperforms the second-best baseline with a relative improvement of up to **77.51%** on AdvBench and **16.36%** on MATH. Our code is publicly available at https://github.com/yuanyige/SEA.
Problem

Research questions and friction points this paper is trying to address.

Aligning language models with human feedback flexibly
Overcoming limitations of discrete response space search
Improving alignment via continuous latent space optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based sampling in continuous latent space
Iterative optimization on energy function
Direct adaptation of base policy responses
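The innovations above can be sketched as noisy gradient descent (Langevin-style dynamics) on an energy function over a continuous latent vector. Everything in this sketch is a toy illustration, not the paper's actual formulation: the quadratic energy, the `beta` weight, the step sizes, and the stand-in "reward" are all hypothetical, and a real system would define the energy via the base policy's latent representations and a reward model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: z_base is the latent the base policy would emit;
# z_star is a latent that a (toy) reward function prefers.
z_base = np.array([0.0, 0.0])
z_star = np.array([2.0, 1.0])
beta = 4.0  # illustrative reward weight, not a value from the paper

def energy(z):
    # Toy energy: stay close to the base policy's latent, while a
    # quadratic "reward" term pulls toward the preferred latent.
    return 0.5 * np.sum((z - z_base) ** 2) + 0.5 * beta * np.sum((z - z_star) ** 2)

def grad_energy(z):
    # Analytic gradient of the toy energy above.
    return (z - z_base) + beta * (z - z_star)

def langevin_adapt(z0, steps=200, lr=0.05, noise=0.01):
    """Iteratively lower the energy with noisy gradient steps."""
    z = z0.copy()
    for _ in range(steps):
        z = z - lr * grad_energy(z) + noise * rng.standard_normal(z.shape)
    return z

# Adapt the base latent toward the low-energy region.
z_adapted = langevin_adapt(z_base)
```

In this toy setting the energy minimizer is the weighted average `(z_base + beta * z_star) / (1 + beta)`, so the adapted latent lands between the base policy's output and the reward-preferred point, which is the intuition behind adapting responses "toward the optimal one" without any discrete search.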