Angles Don't Lie: Unlocking Training-Efficient RL Through the Model's Own Signals

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low sample efficiency of reinforcement fine-tuning (RFT) for large language models (LLMs) under uniform data sampling, this paper proposes GAIN-RL, a dynamic curriculum learning framework grounded in the geometric properties of model hidden states. The authors establish, theoretically and empirically, that angle concentration (the degree of directional alignment among token hidden-state vectors) is positively correlated with gradient effectiveness, so it can serve as an intrinsic, self-generated learning signal. Building on this insight, GAIN-RL uses the signal to rank and select training data in each epoch and integrates the resulting curriculum with the GRPO reinforcement learning algorithm. Experiments show over a 2.5x acceleration in training efficiency across mathematical reasoning and code generation tasks at varying model scales, and the selective sampling is data-efficient: GAIN-RL outperforms vanilla GRPO trained on the full dataset while using only half the data. To the authors' knowledge, this is the first RFT method to construct a dynamic curriculum from geometric signals in the model's latent space.
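
To make the signal concrete, here is a minimal sketch, assuming angle concentration is measured as the mean pairwise cosine similarity among one query's token hidden states; the layer choice and exact aggregation used by GAIN-RL may differ from this assumption.

```python
# A minimal sketch of the angle-concentration signal, assuming it is the mean
# pairwise cosine similarity among one query's token hidden states. The layer
# choice and exact aggregation used by GAIN-RL may differ; see the paper.
import torch
import torch.nn.functional as F

def angle_concentration(hidden_states: torch.Tensor) -> float:
    """hidden_states: (seq_len, d_model) hidden vectors for one query's tokens."""
    # Unit-normalize each token vector so dot products become cosines of the
    # angles between hidden states.
    h = F.normalize(hidden_states, dim=-1)
    cos = h @ h.T  # (seq_len, seq_len) pairwise cosine similarities
    n = cos.size(0)
    # Average the off-diagonal entries: higher values mean the vectors point
    # in more similar directions, i.e. higher angular concentration.
    off_diag_sum = cos.sum() - cos.diagonal().sum()
    return (off_diag_sum / (n * (n - 1))).item()
```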

📝 Abstract
Current Reinforcement Fine-tuning (RFT) paradigms for Large Language Models (LLMs) suffer from sample inefficiency due to the redundant exposure of identical queries under uniform data sampling. While previous work has explored curriculum learning via heuristic difficulty metrics, these strategies exhibit limitations by neglecting the intrinsic learning signals generated by the model itself, thus leading to suboptimal training regimes. In this paper, we identify a model-inherent signal termed angle concentration that effectively reflects an LLM's capacity to learn from specific data. We theoretically and empirically demonstrate a correlation between the angular distribution of token hidden state vectors and the resulting gradient, revealing a learning preference for data exhibiting higher angle concentration. Inspired by this finding, we propose GAIN-RL, a Gradient-driven Angle-Informed Navigated RL framework. By leveraging the model's intrinsic angle concentration signal, GAIN-RL dynamically selects training data in each epoch, ensuring consistently impactful gradient updates and thus significantly enhancing overall training efficiency. Empirical evaluations show that GAIN-RL (GRPO) achieves over a 2.5x acceleration in training efficiency across diverse mathematical and coding tasks and varying model scales. Furthermore, GAIN-RL (GRPO)'s efficient sampling yields data-efficient training, achieving better performance with half the original data compared to vanilla GRPO with full training data. Code is released at https://github.com/wangqinsi1/GAINRL/tree/main.
Problem

Research questions and friction points this paper is trying to address.

Improving sample efficiency in Reinforcement Fine-tuning for LLMs
Utilizing model-inherent angle concentration for data selection
Enhancing training efficiency via dynamic gradient-driven data sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses angle concentration as learning signal
Dynamically selects data via GAIN-RL framework
Enhances training efficiency with gradient-driven updates (a sketch follows this list)
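
Below is a hedged sketch of how this gradient-driven selection could wrap one GRPO training epoch. The `angle_concentration` helper is the one sketched earlier, while `model.hidden_states`, `grpo_update`, and the softmax-temperature sampling are illustrative assumptions rather than the paper's exact procedure.

```python
# A hedged sketch of one training epoch with angle-informed data navigation
# wrapped around GRPO. `angle_concentration` is the helper sketched earlier;
# `model.hidden_states`, `grpo_update`, and the softmax-temperature sampling
# are illustrative assumptions, not the paper's exact procedure.
import torch

def train_epoch(model, dataset, grpo_update, batch_size=32, temperature=1.0):
    # 1) Score every query with the model's own angle-concentration signal
    #    (hypothetical model.hidden_states returns (seq_len, d_model) states).
    scores = torch.tensor(
        [angle_concentration(model.hidden_states(q)) for q in dataset]
    )
    # 2) Convert scores into sampling probabilities that favor queries with
    #    high angle concentration, which the paper links to more effective
    #    gradient updates.
    probs = torch.softmax(scores / temperature, dim=0)
    # 3) Draw a batch without replacement and run one GRPO policy update.
    idx = torch.multinomial(probs, num_samples=batch_size, replacement=False)
    grpo_update(model, [dataset[i] for i in idx])
```

Sampling from a softmax rather than taking a hard top-k keeps some probability mass on lower-scoring queries, and re-scoring at every epoch lets the curriculum shift as the model learns; the paper's actual navigation and scheduling mechanism may differ in both respects.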