ASTER: Agentic Scaling with Tool-integrated Extended Reasoning

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation of multi-turn tool-use capabilities in large language models (LLMs) during reinforcement learning, a phenomenon often caused by interaction collapse. To mitigate this issue, the authors propose the ASTER framework, which uses only 4K high-interaction-density expert trajectories to construct a cold-start behavioral prior. ASTER integrates supervised fine-tuning with reinforcement learning, introducing an interaction-density-guided initialization mechanism alongside an optimized tool-calling strategy during inference. Experimental results show that ASTER-4B achieves 90.0% accuracy on the AIME 2025 mathematical benchmark, substantially outperforming existing open-source models such as DeepSeek-V3.2-Exp and demonstrating its effectiveness in long-horizon tool-integrated reasoning.

📝 Abstract
Reinforcement learning (RL) has emerged as a dominant paradigm for eliciting long-horizon reasoning in Large Language Models (LLMs). However, scaling Tool-Integrated Reasoning (TIR) via RL remains challenging due to interaction collapse: a pathological state where models fail to sustain multi-turn tool usage, instead degenerating into heavy internal reasoning with only trivial, post-hoc code verification. We systematically study three questions: (i) how cold-start SFT induces an agentic, tool-using behavioral prior, (ii) how the interaction density of cold-start trajectories shapes exploration and downstream RL outcomes, and (iii) how the RL interaction budget affects learning dynamics and generalization under varying inference-time budgets. We then introduce ASTER (Agentic Scaling with Tool-integrated Extended Reasoning), a framework that circumvents this collapse through a targeted cold-start strategy prioritizing interaction-dense trajectories. We find that a small expert cold-start set of just 4K interaction-dense trajectories yields the strongest downstream performance, establishing a robust prior that enables superior exploration during extended RL training. Extensive evaluations demonstrate that ASTER-4B achieves state-of-the-art results on competitive mathematical benchmarks, reaching 90.0% on AIME 2025, surpassing leading frontier open-source models, including DeepSeek-V3.2-Exp.
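The core of the cold-start strategy described above is ranking expert trajectories by interaction density and keeping only the densest ones (about 4K) for SFT. A minimal sketch of that selection step, under assumed field names (`turns`, role `"tool_call"`) that are illustrative and not the paper's actual data schema:

```python
# Hypothetical sketch of interaction-density-guided cold-start selection:
# keep the trajectories with the highest fraction of tool-call turns.
# The trajectory schema here is an assumption for illustration only.

def interaction_density(trajectory):
    """Fraction of turns in a trajectory that are tool calls."""
    turns = trajectory["turns"]
    tool_calls = sum(1 for t in turns if t["role"] == "tool_call")
    return tool_calls / max(len(turns), 1)

def select_cold_start(trajectories, k=4000):
    """Pick the k most interaction-dense trajectories for cold-start SFT."""
    ranked = sorted(trajectories, key=interaction_density, reverse=True)
    return ranked[:k]

# Toy usage: the dense trajectory is selected over the sparse one.
dense = {"turns": [{"role": "tool_call"}] * 6 + [{"role": "think"}] * 2}
sparse = {"turns": [{"role": "think"}] * 7 + [{"role": "tool_call"}]}
picked = select_cold_start([sparse, dense], k=1)
assert picked[0] is dense
```

The intuition, per the abstract, is that this interaction-dense prior keeps the policy from degenerating into heavy internal reasoning with only trivial post-hoc code verification during subsequent RL training.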
Problem

Research questions and friction points this paper is trying to address.

Tool-Integrated Reasoning
Interaction Collapse
Reinforcement Learning
Large Language Models
Long-horizon Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tool-Integrated Reasoning
Reinforcement Learning
Cold-start Strategy
Interaction Collapse
Agentic Scaling
Xuqin Zhang
Department of Foundation Model, 2012 Labs, Huawei; National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China
Quan He
Department of Foundation Model, 2012 Labs, Huawei
Zhenrui Zheng
The Chinese University of Hong Kong, Shenzhen
Zongzhang Zhang
Nanjing University
Artificial Intelligence, Reinforcement Learning, Probabilistic Planning, Multi-Agent Systems
Xu He
Huawei Noah's Ark Lab
Reinforcement learning, Artificial intelligence
Dong Li
Huawei Noah's Ark Lab
Reinforcement learning, LLM Alignment