Nudging: Inference-time Alignment of LLMs via Guided Decoding

📅 2024-10-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) require alignment to follow user instructions effectively and safely, yet training a dedicated aligned variant for every base model incurs significant computational cost. This paper proposes *nudging*, a simple, plug-and-play, training-free inference-time alignment method: whenever the base model is uncertain about its next token (which happens disproportionately on the small subset of stylistic tokens, e.g., discourse markers, that alignment primarily affects), a small aligned model generates corrective “nudge” tokens to steer the decoding. Because nudging operates at the token level, it enables zero-shot, off-the-shelf collaboration across model families without any training. Experiments on three model families show that nudging a large base model (e.g., Gemma-2-27b) with an aligned model 7x–14x smaller achieves zero-shot instruction-following performance comparable to, and sometimes surpassing, that of large aligned models such as Llama-2-70b-chat, offering a modular and cost-efficient alternative to training an aligned version of every base model.

📝 Abstract
Large language models (LLMs) require alignment to effectively and safely follow user instructions. This process necessitates training an aligned version for every base model, resulting in significant computational overhead. In this work, we propose nudging, a simple, plug-and-play, and training-free algorithm that aligns any base model at inference time using a small aligned model. Nudging is motivated by recent findings that alignment primarily alters the model's behavior on a small subset of stylistic tokens (e.g., discourse markers). We find that base models are significantly more uncertain when generating these tokens. Building on this insight, nudging employs a small aligned model to generate nudging tokens to guide the base model's output during decoding when the base model's uncertainty is high. We evaluate nudging across 3 model families on a diverse range of open-instruction tasks. Without any training, nudging a large base model with a 7x-14x smaller aligned model achieves zero-shot performance comparable to, and sometimes surpassing, that of large aligned models. By operating at the token level, nudging enables off-the-shelf collaboration between model families. For instance, nudging Gemma-2-27b with Llama-2-7b-chat outperforms Llama-2-70b-chat on various tasks. Overall, our work offers a modular and cost-efficient solution to LLM alignment. Our project website: https://fywalter.github.io/nudging/ .
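The decoding loop the abstract describes can be sketched as follows. This is a minimal, runnable illustration of the idea, not the paper's implementation: the two "models" are toy stand-ins (functions returning next-token probability distributions), and the confidence threshold, helper names, and toy vocabulary are all illustrative assumptions.

```python
NUDGE_THRESHOLD = 0.5  # assumed: nudge when the base model's top-1 probability falls below this


def top1(dist):
    """Return (token, probability) for the most likely next token."""
    tok = max(dist, key=dist.get)
    return tok, dist[tok]


def nudging_decode(base_model, aligned_model, prompt, max_tokens=10):
    """Token-level guided decoding: the base model proposes each token;
    when its confidence is low, the small aligned model's token is used instead."""
    out = list(prompt)
    for _ in range(max_tokens):
        tok, p = top1(base_model(tuple(out)))
        if p < NUDGE_THRESHOLD:                        # base model is uncertain
            tok, _ = top1(aligned_model(tuple(out)))   # take the nudge token
        if tok == "<eos>":
            break
        out.append(tok)
    return out[len(prompt):]


# Toy stand-in models: the base model is uncertain on the opening
# stylistic token but confident on content tokens.
def base_model(ctx):
    if not ctx:                          # first (stylistic) token: uncertain
        return {"Sure,": 0.3, "The": 0.3, "I": 0.4}
    if ctx[-1] == "Sure,":               # content token: confident
        return {"answer": 0.9, "I": 0.1}
    return {"<eos>": 0.95, "x": 0.05}


def aligned_model(ctx):
    return {"Sure,": 0.95, "x": 0.05}    # supplies the stylistic nudge token


print(nudging_decode(base_model, aligned_model, []))  # ['Sure,', 'answer']
```

Note that the aligned model is only consulted on the few low-confidence (stylistic) steps, so most tokens still come from the large base model, which is what keeps the approach cheap.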
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs requires training a separate aligned version of every base model, incurring significant computational overhead
Aligned models are tightly coupled to their base models, preventing modular reuse across model families
Can a base model be aligned at inference time, without any training?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free alignment via small model guidance
Token-level uncertainty-based nudging technique
Cross-model family plug-and-play collaboration
🔎 Similar Papers
2024-02-05 · Annual Meeting of the Association for Computational Linguistics · Citations: 37