🤖 AI Summary
To address a critical limitation of static data selection, namely its inability to adapt to a model's dynamically evolving reasoning capabilities in mathematical reasoning, this paper proposes SAI-DPO, a model self-aware dynamic data sampling algorithm. Departing from conventional paradigms that rely on external, predefined metrics (e.g., problem difficulty or diversity), SAI-DPO performs fine-grained reasoning-capability diagnosis, multi-stage performance monitoring, and online dynamic difficulty reweighting to iteratively identify model strengths and weaknesses, enabling closed-loop, feedback-driven data selection. Its core innovation is the deep coupling of the data selection policy with the model's real-time internal state, closing the gap between data curation and model evolution that is prevalent in online reinforcement learning frameworks (e.g., R1). Evaluated on eight mathematical benchmarks, including AIME24 and AMC23, SAI-DPO achieves an average improvement of up to 21.3 percentage points, with gains of 10 and 15 points on AIME24 and AMC23, respectively.
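The closed-loop reweighting idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual implementation: `SelfAwareSampler`, the per-skill accuracy statistics, and the weight function that peaks near 50% accuracy (the model's current learning frontier) are all assumptions made here for clarity.

```python
import random
from collections import defaultdict


class SelfAwareSampler:
    """Toy sketch of self-aware data selection (illustrative, not the
    paper's code): sampling weights are reweighted online from the
    model's recent per-skill accuracy, favoring skills that are neither
    mastered nor hopeless."""

    def __init__(self, pool, seed=0):
        # pool: list of (problem_id, skill_tag) pairs
        self.pool = pool
        self.stats = defaultdict(lambda: [0, 0])  # skill -> [n_correct, n_total]
        self.rng = random.Random(seed)

    def record(self, skill, correct):
        """Feed back one rollout result for a skill (closed loop)."""
        s = self.stats[skill]
        s[0] += int(correct)
        s[1] += 1

    def _weight(self, skill):
        correct, total = self.stats[skill]
        if total == 0:
            return 1.0  # unexplored skill keeps a default weight
        acc = correct / total
        # 4*acc*(1-acc) peaks at acc=0.5: the learning frontier;
        # mastered (acc≈1) and unsolvable (acc≈0) skills get tiny weight.
        return max(1e-3, 4.0 * acc * (1.0 - acc))

    def sample(self, k):
        """Draw k problems with probability proportional to skill weight."""
        weights = [self._weight(tag) for _, tag in self.pool]
        return self.rng.choices(self.pool, weights=weights, k=k)
```

In this sketch, `record` plays the role of the performance-monitoring step and `_weight` the dynamic difficulty reweighting: as the model masters a skill, that skill's sampling probability decays, shifting the training mix toward current weaknesses.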
📝 Abstract
In data selection for reasoning tasks, existing approaches predominantly rely on externally predefined static metrics such as difficulty and diversity, which are typically designed for supervised fine-tuning (SFT) and lack adaptability to continuous training. A critical limitation of these methods is their inability to dynamically align with a model's evolving capabilities during online training, a gap that becomes increasingly pronounced with the rise of dynamic training paradigms and online reinforcement learning (RL) frameworks (e.g., R1 models). To address this, we introduce SAI-DPO, an algorithm that dynamically selects training data by continuously assessing a model's stage-specific reasoning abilities across training phases. By integrating real-time performance feedback, SAI-DPO adapts data selection to the model's evolving strengths and weaknesses, enhancing both data utilization efficiency and final task performance. Extensive experiments on three state-of-the-art models and eight mathematical reasoning benchmarks, including challenging competition-level datasets (e.g., AIME24 and AMC23), show that SAI-DPO achieves an average performance boost of up to 21.3 percentage points, with particularly notable improvements of 10 and 15 points on AIME24 and AMC23, respectively. These results highlight the superiority of dynamic, model-adaptive data selection over static, externally defined strategies for advancing reasoning.