🤖 AI Summary
Existing RL methods such as GRPO suffer from reward sparsity and training failure in long-chain reasoning with LLMs, primarily due to a severe distributional mismatch, termed "low training affinity," between external guidance (e.g., SFT data or prompts) and the model's policy. The paper introduces the concept of *training affinity* along with, for the first time, a quantitative metric to diagnose and mitigate this mismatch. Building on this insight, the authors propose HINT, an adaptive prompting framework that dynamically injects heuristic hints into rollouts to improve exploration efficiency and training stability. Unlike prior approaches, HINT operates entirely on-policy and avoids error-prone off-policy data. Evaluated across multiple model scales and mathematical reasoning benchmarks, HINT achieves state-of-the-art performance while significantly improving training convergence speed and data utilization efficiency.
📝 Abstract
Reinforcement Learning (RL) has become a key driver for enhancing the long chain-of-thought (CoT) reasoning capabilities of Large Language Models (LLMs). However, prevalent methods like GRPO often fail when task difficulty exceeds the model's capacity, leading to reward sparsity and inefficient training. While prior work attempts to mitigate this with off-policy data, such as mixing RL with Supervised Fine-Tuning (SFT) or injecting hints, these methods often misguide policy updates. In this work, we identify a core issue underlying these failures, which we term low training affinity: a large distributional mismatch between external guidance and the model's policy. To diagnose it, we introduce Affinity, the first quantitative metric for monitoring exploration efficiency and training stability. To improve Affinity, we propose HINT (Helping Ineffective rollouts Navigate Towards effectiveness), an adaptive hinting framework. Instead of providing direct answers, HINT supplies heuristic hints that guide the model to discover solutions on its own, preserving its autonomous reasoning capabilities. Extensive experiments on mathematical reasoning tasks show that HINT consistently outperforms existing methods, achieving state-of-the-art results across model scales, while also demonstrating significantly more stable learning and greater data efficiency. Code is available on GitHub.
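The abstract describes hints that are injected adaptively when rollouts are ineffective, rather than unconditionally mixed into training. As a minimal illustrative sketch only (the function name, reward encoding, and hint format are assumptions, not the paper's implementation), such gating might look like:

```python
def hint_gated_prompt(prompt, rewards, hint):
    """Hypothetical sketch of adaptive hint injection: when every
    rollout in a group fails (reward sparsity), prepend a heuristic
    hint to the prompt instead of revealing the answer; otherwise
    leave the prompt unchanged so training stays fully on-policy."""
    if not any(r > 0 for r in rewards):  # all rollouts ineffective
        return f"Hint: {hint}\n\n{prompt}"
    return prompt

# A fully failed group triggers the hint; a group with any success does not.
sparse = hint_gated_prompt("Solve: x^2 = 16", [0, 0, 0, 0], "consider both signs")
mixed = hint_gated_prompt("Solve: x^2 = 16", [0, 1, 0, 0], "consider both signs")
```

Because the hint only reshapes the prompt for subsequent sampling, the gradient is still computed on the model's own completions, which is what distinguishes this style of guidance from SFT-mixed off-policy updates.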