🤖 AI Summary
This work addresses the degradation of multi-turn tool-use capabilities in large language models (LLMs) during reinforcement learning, a pathology known as interaction collapse. To mitigate it, the authors propose the ASTER framework, which uses only 4K high-interaction-density expert trajectories to construct a cold-start behavioral prior. ASTER integrates supervised fine-tuning with reinforcement learning, introducing an interaction-density-guided initialization mechanism alongside an optimized tool-calling strategy at inference time. Experimental results show that ASTER-4B achieves 90.0% accuracy on the AIME 2025 mathematical benchmark, substantially outperforming leading open-source models such as DeepSeek-V3.2-Exp and validating its effectiveness in long-horizon tool-integrated reasoning.
📝 Abstract
Reinforcement learning (RL) has emerged as a dominant paradigm for eliciting long-horizon reasoning in Large Language Models (LLMs). However, scaling Tool-Integrated Reasoning (TIR) via RL remains challenging due to interaction collapse: a pathological state where models fail to sustain multi-turn tool usage, instead degenerating into heavy internal reasoning with only trivial, post-hoc code verification. We systematically study three questions: (i) how cold-start SFT induces an agentic, tool-using behavioral prior, (ii) how the interaction density of cold-start trajectories shapes exploration and downstream RL outcomes, and (iii) how the RL interaction budget affects learning dynamics and generalization under varying inference-time budgets. We then introduce ASTER (Agentic Scaling with Tool-integrated Extended Reasoning), a framework that circumvents this collapse through a targeted cold-start strategy prioritizing interaction-dense trajectories. We find that a small expert cold-start set of just 4K interaction-dense trajectories yields the strongest downstream performance, establishing a robust prior that enables superior exploration during extended RL training. Extensive evaluations demonstrate that ASTER-4B achieves state-of-the-art results on competitive mathematical benchmarks, reaching 90.0% on AIME 2025, surpassing leading frontier open-source models, including DeepSeek-V3.2-Exp.
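The cold-start strategy the abstract describes — selecting a small set of interaction-dense expert trajectories — could be sketched as below. This is an illustrative assumption, not the paper's implementation: the message schema (`role == "tool_call"`) and the density metric (tool calls per trajectory) are hypothetical stand-ins for whatever definition the paper actually uses.

```python
# Hypothetical sketch of interaction-density-guided cold-start selection.
# Assumption: each trajectory is a list of message dicts, and "interaction
# density" is approximated as the number of tool-call turns. The paper's
# exact metric and data format are not specified in the abstract.

def interaction_density(trajectory):
    """Count tool-call turns in a trajectory (a list of message dicts)."""
    return sum(1 for turn in trajectory if turn.get("role") == "tool_call")

def select_cold_start(trajectories, k=4000):
    """Keep the k most interaction-dense trajectories for cold-start SFT."""
    ranked = sorted(trajectories, key=interaction_density, reverse=True)
    return ranked[:k]

# Toy example: three trajectories with 3, 1, and 2 tool calls respectively.
tc = {"role": "tool_call", "content": "python(...)"}
msg = {"role": "assistant", "content": "reasoning..."}
pool = [[msg, tc, tc, tc], [msg, tc], [msg, tc, tc]]
top2 = select_cold_start(pool, k=2)
print([interaction_density(t) for t in top2])  # [3, 2]
```

Ranking by density rather than sampling uniformly reflects the paper's finding that interaction-dense trajectories form a stronger behavioral prior against collapse than a larger, unfiltered cold-start set.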