Let it Calm: Exploratory Annealed Decoding for Verifiable Reinforcement Learning

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In reinforcement learning with verifiable rewards (RLVR), balancing sample quality and training stability during exploration remains challenging. To address this, we propose Exploratory Annealed Decoding (EAD): a dynamic temperature-scheduling strategy that employs higher softmax temperatures early in autoregressive generation to encourage semantic diversity, then gradually anneals to lower temperatures toward the end to ensure output fidelity—effectively realizing “exploration at the beginning, exploitation at the end.” EAD introduces no additional hyperparameters and integrates seamlessly with diverse RLVR algorithms and large language models of varying scales. Experiments across multiple RLVR benchmarks and model sizes demonstrate that EAD consistently outperforms fixed-temperature sampling, significantly improving sample efficiency and training stability. Moreover, it enhances the reasoning capabilities of large models and strengthens the robustness of policy optimization under sparse or noisy reward signals.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) is a powerful paradigm for enhancing the reasoning capabilities of large language models (LLMs), yet its success hinges on effective exploration. An ideal exploration strategy must navigate two fundamental challenges: it must preserve sample quality while also ensuring training stability. While standard fixed-temperature sampling is simple, it struggles to balance these competing demands, as high temperatures degrade sample quality and low temperatures limit discovery. In this work, we propose a simpler and more effective strategy, Exploratory Annealed Decoding (EAD), grounded in the insight that exploration is most impactful on early tokens which define a sequence's semantic direction. EAD implements an intuitive **explore-at-the-beginning, exploit-at-the-end** strategy by annealing the sampling temperature from high to low during generation. This dynamic schedule encourages meaningful, high-level diversity at the start, then gradually lowers the temperature to preserve sample quality and keep the sampling distribution close to the target policy, which is essential for stable training. We demonstrate that EAD is a lightweight, plug-and-play method that significantly improves sample efficiency, consistently outperforming fixed-temperature sampling across various RLVR algorithms and model sizes. Our work suggests that aligning exploration with the natural dynamics of sequential generation offers a robust path to improving LLM reasoning.
Problem

Research questions and friction points this paper is trying to address.

Balancing exploration and exploitation in verifiable reinforcement learning for LLMs
Improving sample quality while ensuring training stability during exploration
Addressing limitations of fixed-temperature sampling in RLVR algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Anneals the sampling temperature from high to low over the course of generation
Explores on early tokens, which set a sequence's semantic direction, then exploits on later tokens to preserve sample quality
Improves sample efficiency and training stability as a plug-and-play addition to existing RLVR algorithms
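The core mechanism above, a temperature that decays as decoding proceeds, can be sketched as follows. The paper does not specify its exact schedule here, so this is a minimal illustration assuming a linear anneal; the endpoint temperatures (`t_high`, `t_low`) and the schedule shape are hypothetical, not the authors' settings.

```python
import numpy as np

def annealed_temperature(step, max_steps, t_high=1.2, t_low=0.7):
    """Linearly anneal temperature from t_high (first token) to t_low
    (last token). Hypothetical schedule and endpoints for illustration."""
    frac = min(step / max(max_steps - 1, 1), 1.0)
    return t_high + frac * (t_low - t_high)

def sample_token(logits, temperature, rng):
    """Temperature-scaled softmax sampling over a logit vector."""
    z = logits / temperature
    z = z - z.max()            # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))

def generate(logits_fn, max_steps, rng):
    """Autoregressive sampling loop: high temperature (exploration) on
    early tokens, low temperature (exploitation) on late tokens."""
    tokens = []
    for t in range(max_steps):
        temp = annealed_temperature(t, max_steps)
        tokens.append(sample_token(logits_fn(tokens), temp, rng))
    return tokens
```

Early steps thus sample from a flatter distribution (more diverse continuations), while late steps sharpen toward the model's own policy, which keeps the behavior distribution close to the target policy for stable training.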