Adaptive Decoding via Test-Time Policy Learning for Self-Improving Generation

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing static decoding strategies—such as greedy search or fixed-temperature/top-p sampling—struggle to simultaneously satisfy the diverse stylistic and structural requirements across multiple tasks and domains. This work formulates decoding as a sequential decision-making problem and introduces a lightweight reinforcement-learning decoding policy that is learned at test time. Operating with frozen large language model weights, the approach dynamically adjusts sampling parameters without model retraining, thereby enabling domain adaptation and user-controllable generation. By optimizing a composite reward function built from structured metrics—output length, coverage, repetition rate, and completeness—the method achieves substantial improvements over static baselines, with relative gains of up to +88% on BookSum (Granite) and +79% on WikiHow (Qwen).
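The composite reward described above can be sketched in a few lines. This is a minimal illustration, assuming the paper's four shaping terms (length, coverage, repetition, completeness); the weights and the exact definition of each term here are hypothetical, not the authors' implementation.

```python
# Illustrative composite decoding reward with four shaping terms.
# All term definitions and weights are assumptions for the sketch.

def composite_reward(summary_tokens, source_tokens,
                     target_len=200, weights=(0.25, 0.25, 0.25, 0.25)):
    w_len, w_cov, w_rep, w_comp = weights

    # Length term: penalize deviation from a target length.
    length_score = max(0.0, 1.0 - abs(len(summary_tokens) - target_len) / target_len)

    # Coverage term: fraction of distinct source tokens echoed in the summary.
    src_vocab = set(source_tokens)
    cov_score = len(src_vocab & set(summary_tokens)) / max(1, len(src_vocab))

    # Repetition term: distinct-token ratio (less repetition -> higher score).
    rep_score = len(set(summary_tokens)) / max(1, len(summary_tokens))

    # Completeness term: crude proxy -- does the output end on sentence-final punctuation?
    comp_score = 1.0 if summary_tokens and summary_tokens[-1] in {".", "!", "?"} else 0.0

    return (w_len * length_score + w_cov * cov_score
            + w_rep * rep_score + w_comp * comp_score)
```

Because each term is bounded in [0, 1], the weighted sum stays in [0, 1] as well, which keeps the policy's learning signal on a stable scale across documents of very different lengths.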

📝 Abstract
Decoding strategies largely determine the quality of Large Language Model (LLM) outputs, yet widely used heuristics such as greedy or fixed temperature/top-p decoding are static and often task-agnostic, leading to suboptimal or inconsistent generation quality across domains that demand stylistic or structural flexibility. We introduce a reinforcement learning-based decoder sampler that treats decoding as sequential decision-making and learns a lightweight policy to adjust sampling parameters at test time while keeping LLM weights frozen. We evaluate on summarization datasets including BookSum, arXiv, and WikiHow with Granite-3.3-2B and Qwen-2.5-0.5B. Our policy sampler consistently outperforms greedy and static baselines, achieving relative gains of up to +88% (BookSum, Granite) and +79% (WikiHow, Qwen). Reward ablations show that overlap-only objectives underperform composite rewards, while structured shaping terms (length, coverage, repetition, completeness) enable stable and sustained improvements. These findings highlight reinforcement learning as a practical mechanism for test-time adaptation in decoding, enabling domain-aware and user-controllable generation without retraining large models.
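The abstract's core loop (a lightweight policy adjusting sampling parameters against a frozen generator) can be illustrated with a deliberately simple stand-in. The paper does not specify the policy architecture here; this sketch uses an epsilon-greedy bandit over a few discrete (temperature, top_p) configurations, and `generate` and `reward` are placeholder callables, not the authors' components.

```python
# Test-time adaptation sketch: a bandit policy picks (temperature, top_p)
# for a frozen generator and updates its value estimates from a reward.
# The arm set, epsilon-greedy rule, and callables are all assumptions.
import random

ARMS = [(0.7, 0.9), (1.0, 0.95), (1.3, 1.0)]  # candidate (temperature, top_p) configs

def adapt_sampling(generate, reward, prompt, steps=50, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(ARMS)
    values = [0.0] * len(ARMS)
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the best-scoring arm, sometimes explore.
        if rng.random() < eps:
            a = rng.randrange(len(ARMS))
        else:
            a = max(range(len(ARMS)), key=lambda i: values[i])
        temperature, top_p = ARMS[a]
        output = generate(prompt, temperature=temperature, top_p=top_p)
        r = reward(output)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    # Return the sampling configuration with the best estimated reward.
    return ARMS[max(range(len(ARMS)), key=lambda i: values[i])]
```

The key property the abstract emphasizes is preserved: `generate` (the LLM) is never updated, so all adaptation happens in the tiny policy state, which is cheap enough to run per domain or per user at inference time.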
Problem

Research questions and friction points this paper is trying to address.

decoding strategies
large language models
generation quality
task-agnostic decoding
stylistic flexibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive decoding
test-time policy learning
reinforcement learning
self-improving generation
structured reward shaping