Scaling Agentic Capabilities, Not Context: Efficient Reinforcement Finetuning for Large Toolspaces

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges small language models face in large-scale tool-augmented environments, where long-horizon tasks are hindered by context saturation, compounding execution errors, and sparse rewards. To overcome these limitations, the authors propose ATLAS, a framework that treats context acquisition and action execution as learnable decisions. ATLAS combines rubric-based reinforcement finetuning with iterative tool loading and programmatic tool orchestration, enabling efficient task planning under strict limits on context length and model size. Evaluated on MCP benchmarks, the approach significantly outperforms generic reinforcement learning baselines, allowing a 4B-parameter model to approach the performance of state-of-the-art frontier agents despite far tighter resource budgets.

📝 Abstract
Agentic systems operating over large tool ecosystems must plan and execute long-horizon workflows under weak or non-verifiable supervision. While frontier models mitigate these challenges through scale and large context budgets, small language models (SLMs) remain brittle: eager tool loading saturates context, execution errors compound over time, and sparse rewards limit learning. We introduce ATLAS, a reinforcement finetuning framework that enables SLMs to operate effectively in large-scale toolspace environments by learning how to acquire context and how to execute actions. Our approach makes two key contributions. First, we treat context control and execution structure as learnable decisions, combining iterative tool loading with programmatic tool orchestration to bound context growth and stabilize long-horizon trajectories. Second, we propose rubric-based reinforcement finetuning, which decomposes task success into structured, task-aligned criteria and enables scalable training using small judge models. Across MCP benchmarks, these design choices yield large and consistent gains over generic RL baselines, allowing a 4B SLM to approach frontier-agent performance under far tighter parameter and context budgets.
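The abstract's second contribution, rubric-based reinforcement finetuning, decomposes task success into structured criteria scored by small judge models. A minimal sketch of how such a reward could be assembled is below; the `Criterion` dataclass, `rubric_reward` function, and `toy_judge` are hypothetical illustrations, not the paper's actual implementation (in practice the judge would be a small language model scoring each criterion).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criterion:
    """One rubric item: a question the judge scores, plus its weight."""
    name: str
    prompt: str
    weight: float

def rubric_reward(trajectory: str,
                  rubric: List[Criterion],
                  judge: Callable[[str, str], float]) -> float:
    """Aggregate per-criterion judge scores (each in [0, 1]) into a
    weighted scalar reward, replacing a single sparse success signal."""
    total_weight = sum(c.weight for c in rubric)
    score = sum(c.weight * judge(c.prompt, trajectory) for c in rubric)
    return score / total_weight

# Stand-in for a small judge model: scores one criterion on one trajectory.
def toy_judge(criterion_prompt: str, trajectory: str) -> float:
    return 1.0 if "tool_call" in trajectory else 0.0

rubric = [
    Criterion("tool-use", "Did the agent invoke the correct tools?", 2.0),
    Criterion("final-answer", "Does the output satisfy the user request?", 1.0),
]
reward = rubric_reward("plan -> tool_call -> answer", rubric, toy_judge)  # -> 1.0
```

Because each criterion yields its own partial credit, a trajectory that loads the right tools but produces a wrong answer still receives a nonzero learning signal, which is the mechanism the abstract credits for mitigating sparse rewards.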
Problem

Research questions and friction points this paper is trying to address.

small language models
large toolspaces
long-horizon workflows
context saturation
sparse rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement finetuning
small language models
tool orchestration
context control
rubric-based learning
Karan Gupta
Microsoft Research
Pranav Vajreshwari
Microsoft Research
Yash Pandya
Microsoft Research
Raghav Magazine
Microsoft Research
Akshay Nambi
Principal Researcher, Microsoft Research
Machine Learning, LLMs, AI for Social Good, Mobile Computing & Systems
Ahmed Awadallah
Microsoft Research