🤖 AI Summary
Static decoding strategies such as greedy search or fixed-temperature/top-p sampling struggle to satisfy the diverse stylistic and structural requirements of different tasks and domains. This work formulates decoding as a sequential decision-making problem and introduces a lightweight reinforcement learning-based decoding strategy that is learnable at test time. With the large language model's weights kept frozen, the approach dynamically adjusts sampling parameters without retraining, enabling domain adaptation and user-controllable generation. By optimizing a composite reward that combines structured metrics (output length, coverage, repetition rate, and completeness), the method substantially outperforms static baselines, with relative gains of up to +88% on BookSum (Granite) and +79% on WikiHow (Qwen).
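The core idea of adjusting sampling parameters per decoding step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state feature (normalized entropy of the base distribution), the linear form of `policy`, and the parameter ranges are all assumptions made for the example.

```python
import math
import random

def softmax(logits, temperature):
    # Temperature-scaled softmax over raw logits.
    m = max(l / temperature for l in logits)
    exps = [math.exp(l / temperature - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def top_p_filter(probs, top_p):
    # Keep the smallest set of tokens whose cumulative mass reaches top_p,
    # then renormalize over that set.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

def policy(state, theta):
    # Hypothetical tiny policy: maps a scalar state feature to
    # (temperature, top_p). `theta` stands in for learned parameters.
    temperature = 0.5 + theta[0] * state        # temperature >= 0.5
    top_p = min(1.0, 0.7 + theta[1] * state)    # top_p in [0.7, 1.0]
    return temperature, top_p

def sample_step(logits, theta, rng):
    # One decoding step with frozen model logits: the policy observes the
    # normalized entropy of the T=1 distribution, picks sampling
    # parameters, and the next token is drawn under those parameters.
    base = softmax(logits, 1.0)
    entropy = -sum(q * math.log(q) for q in base if q > 0)
    state = entropy / math.log(len(logits))  # normalize to [0, 1]
    temperature, top_p = policy(state, theta)
    filtered = top_p_filter(softmax(logits, temperature), top_p)
    tokens, weights = zip(*filtered.items())
    return rng.choices(tokens, weights=weights)[0]
```

In a full system the policy parameters would be updated by a reinforcement learning objective over completed generations, while the language model producing `logits` stays frozen.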
📝 Abstract
Decoding strategies largely determine the quality of Large Language Model (LLM) outputs, yet widely used heuristics such as greedy or fixed temperature/top-p decoding are static and often task-agnostic, leading to suboptimal or inconsistent generation quality across domains that demand stylistic or structural flexibility. We introduce a reinforcement learning-based decoder sampler that treats decoding as sequential decision-making and learns a lightweight policy to adjust sampling parameters at test time while keeping the LLM weights frozen. We evaluate on summarization datasets (BookSum, arXiv, and WikiHow) with Granite-3.3-2B and Qwen-2.5-0.5B. Our policy sampler consistently outperforms greedy and static baselines, achieving relative gains of up to +88% (BookSum, Granite) and +79% (WikiHow, Qwen). Reward ablations show that overlap-only objectives underperform composite rewards, while structured shaping terms (length, coverage, repetition, completeness) enable stable and sustained improvements. These findings highlight reinforcement learning as a practical mechanism for test-time adaptation in decoding, enabling domain-aware and user-controllable generation without retraining large models.
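A composite reward with structured shaping terms of the kind described (length, coverage, repetition, completeness) might look like the sketch below. The functional forms, the target length, and the weights are illustrative assumptions, not the paper's actual reward.

```python
def repetition_rate(tokens, n=3):
    # Fraction of repeated n-grams in the output (0.0 = no repetition).
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return 1.0 - len(set(ngrams)) / len(ngrams)

def composite_reward(summary, source, target_len=100,
                     weights=(1.0, 1.0, 1.0, 0.5)):
    # Weighted sum of four structured terms; weights and target_len are
    # hypothetical hyperparameters for this sketch.
    s_tok = summary.split()
    src_vocab = set(source.split())
    w_len, w_cov, w_rep, w_comp = weights
    # Length: penalize deviation from a target output length.
    length_term = max(0.0, 1.0 - abs(len(s_tok) - target_len) / target_len)
    # Coverage: fraction of summary tokens grounded in the source text.
    coverage_term = sum(t in src_vocab for t in s_tok) / max(len(s_tok), 1)
    # Repetition: reward a low repeated-trigram rate.
    repetition_term = 1.0 - repetition_rate(s_tok)
    # Completeness: reward outputs that end on a sentence boundary.
    completeness_term = 1.0 if summary.rstrip().endswith(('.', '!', '?')) else 0.0
    return (w_len * length_term + w_cov * coverage_term
            + w_rep * repetition_term + w_comp * completeness_term)
```

An overlap-only objective would correspond to keeping just the coverage-style term; the ablations reported above suggest the remaining shaping terms are what make training stable.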