🤖 AI Summary
Existing RLVR frameworks suffer from rapid policy-entropy collapse: repeated static initial-state sampling from the dataset distribution impairs exploration and limits long-term performance gains. To address this, we propose CURE, a two-stage framework that balances exploration and exploitation in mathematical reasoning tasks via critical-token identification and high-entropy regeneration. In Stage I, the model re-generates at high-entropy critical tokens, branching rollouts into novel yet coherent contexts and jointly optimizing the original and branched trajectories to enhance output diversity; in Stage II, training resumes with standard static initial-state sampling to gradually consolidate exploitation. CURE builds on the DAPO algorithm and requires no architectural modifications or reward-function redesign. Evaluated on Qwen-2.5-Math-7B, CURE achieves an average 5% accuracy improvement across six mathematical reasoning benchmarks while substantially delaying entropy collapse, attaining state-of-the-art performance in both accuracy and entropy preservation.
📝 Abstract
Recent advances in Reinforcement Learning with Verified Reward (RLVR) have driven the emergence of more sophisticated cognitive behaviors in large language models (LLMs), thereby enhancing their reasoning capabilities. However, prior RLVR pipelines repeatedly use static initial-state sampling drawn directly from the dataset distribution at every sampling phase, producing overly deterministic, low-diversity model behavior that manifests as rapid entropy collapse and hinders sustained performance gains during prolonged training. To address this issue, we introduce CURE (Critical-token-gUided Re-concatenation for Entropy-collapse prevention), a two-stage framework that balances exploration and exploitation. In the first stage, to deliberately steer the model toward novel yet coherent contexts, we re-generate at high-entropy critical tokens and jointly optimize the original and the branched trajectories. A further comparison with vanilla DAPO shows that this regeneration process achieves better performance on math reasoning tasks while sustaining a high level of entropy for exploration. In the second stage, we continue training with DAPO's static initial-state sampling, intentionally placing the model in familiar states to gradually strengthen exploitation. Extensive experiments on Qwen-2.5-Math-7B show that, compared to other RLVR methods, CURE achieves a 5% performance gain across six math benchmarks, establishing state-of-the-art results in both entropy and accuracy. A series of ablation experiments further validates the effectiveness of our approach. Code is available at https://github.com/CURE-Project/CURE.
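To make the Stage I mechanism concrete, below is a minimal sketch of one way to identify high-entropy critical tokens and branch new rollouts from them. It assumes a Hugging Face-style causal LM; the helper names (`token_entropies`, `branch_rollouts`), the top-k selection rule, and the hyperparameter `k` are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def token_entropies(logits: torch.Tensor) -> torch.Tensor:
    """Per-position entropy of the next-token distribution.
    logits: (seq_len, vocab_size) over the sampled response tokens."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1)  # (seq_len,)

def critical_positions(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k highest-entropy ('critical') response positions."""
    return torch.topk(token_entropies(logits), k).indices

def branch_rollouts(model, prompt_ids, full_ids, logits, k=4, **gen_kwargs):
    """Re-generate at each critical token: keep the prefix up to (but not
    including) the high-entropy position, then sample a fresh continuation."""
    prompt_len = prompt_ids.shape[-1]
    branches = []
    for pos in critical_positions(logits, k).tolist():
        prefix = full_ids[: prompt_len + pos]  # prompt + partial response
        out = model.generate(prefix.unsqueeze(0), do_sample=True, **gen_kwargs)
        branches.append(out[0])
    return branches
```

In CURE, the branched trajectories produced this way would then be optimized jointly with the original rollout under the DAPO objective, which is what sustains entropy during Stage I training.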