Guaranteed Generation from Large Language Models

📅 2024-10-09
🏛️ International Conference on Learning Representations
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) struggle to strictly satisfy hard constraints, such as forced lexical inclusion or sentiment reversal, while staying faithful to the original model's output distribution. Method: The paper formally defines the ideal distribution (the distribution closest to the original model that always satisfies the constraint) as the goal of guaranteed generation, and proves that this goal cannot be reached through autoregressive training alone, motivating a combination of training-time and inference-time methods. On this basis it proposes GUARD, a simple approach that couples an autoregressive proposal distribution with rejection sampling; controlling the KL divergence between the proposal and the ideal distribution jointly optimizes inference speed and distributional closeness. Contribution/Results: On a lexical constraint scenario and a sentiment reversal scenario, GUARD achieves perfect (100%) constraint satisfaction, stays close to the ideal distribution, and is substantially more inference-efficient than naive rejection sampling.

📝 Abstract
As large language models (LLMs) are increasingly used across various applications, there is a growing need to control text generation to satisfy specific constraints or requirements. This raises a crucial question: Is it possible to guarantee strict constraint satisfaction in generated outputs while preserving the distribution of the original model as much as possible? We first define the ideal distribution - the one closest to the original model, which also always satisfies the expressed constraint - as the ultimate goal of guaranteed generation. We then state a fundamental limitation, namely that it is impossible to reach that goal through autoregressive training alone. This motivates the necessity of combining training-time and inference-time methods to enforce such guarantees. Based on this insight, we propose GUARD, a simple yet effective approach that combines an autoregressive proposal distribution with rejection sampling. Through GUARD's theoretical properties, we show how controlling the KL divergence between a specific proposal and the target ideal distribution simultaneously optimizes inference speed and distributional closeness. To validate these theoretical concepts, we conduct extensive experiments on two text generation settings with hard-to-satisfy constraints: a lexical constraint scenario and a sentiment reversal scenario. These experiments show that GUARD achieves perfect constraint satisfaction while almost preserving the ideal distribution with highly improved inference efficiency. GUARD provides a principled approach to enforcing strict guarantees for LLMs without compromising their generative capabilities.
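The mechanism the abstract describes (an autoregressive proposal filtered by rejection sampling, so that every emitted output is guaranteed to satisfy the constraint) can be sketched with a toy stand-in for the LLM. This is a minimal illustration, not the paper's implementation: the candidate pool, the example lexical constraint, and all function names below are hypothetical.

```python
import random

def satisfies_constraint(text):
    """Hard lexical constraint (illustrative): output must contain 'amazing'."""
    return "amazing" in text

def sample_proposal(rng):
    """Stand-in for sampling from an autoregressive proposal distribution;
    here just a fixed pool of candidate outputs."""
    pool = [
        "this movie was amazing",
        "an amazing experience overall",
        "the plot was dull",
        "i fell asleep halfway through",
    ]
    return rng.choice(pool)

def guaranteed_generate(rng, max_tries=1000):
    """Rejection sampling: resample from the proposal until the constraint
    holds, so every returned output satisfies it by construction."""
    for _ in range(max_tries):
        candidate = sample_proposal(rng)
        if satisfies_constraint(candidate):
            return candidate
    raise RuntimeError("no valid sample found within max_tries")

rng = random.Random(0)
outputs = [guaranteed_generate(rng) for _ in range(100)]
assert all(satisfies_constraint(o) for o in outputs)  # 100% constraint satisfaction
```

The better the proposal concentrates its mass on constraint-satisfying outputs, the fewer resamples each call needs, which is the efficiency side of the trade-off the abstract describes.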
Problem

Research questions and friction points this paper is trying to address.

Control text generation to satisfy specific constraints
Guarantee strict constraint satisfaction in generated outputs
Preserve original model distribution while enforcing constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines training-time and inference-time methods
Uses autoregressive proposal with rejection sampling
Controls KL divergence between the proposal and the ideal distribution to jointly optimize inference speed and distributional closeness
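The speed side of the bullets above follows from a standard property of rejection sampling, illustrated numerically below with a hypothetical toy proposal (this is an illustration of the general mechanism, not the paper's proof): the acceptance rate equals the proposal's total mass on the constraint set, and accepted samples follow the proposal renormalized on that set.

```python
# Hypothetical proposal distribution over four possible outputs.
proposal = {"a": 0.5, "b": 0.3, "c": 0.1, "d": 0.1}
# Outputs assumed to satisfy the hard constraint.
constraint_set = {"a", "b"}

# Probability a single proposal draw is accepted: mass on the constraint set.
accept_rate = sum(p for x, p in proposal.items() if x in constraint_set)

# Mean number of proposal draws per accepted sample (geometric distribution).
expected_draws = 1.0 / accept_rate

# Distribution of accepted samples: proposal restricted to the constraint
# set and renormalized.
accepted = {x: p / accept_rate for x, p in proposal.items() if x in constraint_set}

print(accept_rate)     # 0.8
print(expected_draws)  # 1.25
print(accepted)        # {'a': 0.625, 'b': 0.375}
```

A proposal closer (in KL divergence) to the ideal constrained distribution puts more mass on the constraint set, which raises the acceptance rate and lowers the expected number of draws, so one quantity governs both fidelity and speed.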