AI Summary
Small-scale language models struggle to develop strong agentic capabilities due to limited training tasks and unstable real-world API environments. To address this, this work proposes SYNTHAGENT, a novel framework that, for the first time, jointly synthesizes diverse tool-use tasks, simulates user interaction environments, and incorporates evaluation criteria to establish a scalable and stable reinforcement learning training loop. The framework leverages a teacher model to generate tasks and convert them into ambiguous instructions, prompting the agent to actively seek clarification. It further integrates an LLM-based user simulator and a virtual tool system to provide consistent and reliable feedback. Evaluated across 14 datasets spanning mathematical reasoning, search, and tool invocation, the approach significantly enhances the performance of small models, with some results surpassing those of larger baseline models.
Abstract
Small LLMs often struggle to match the agentic capabilities of large, costly models. While reinforcement learning can help, progress has been limited by two structural bottlenecks: existing open-source agentic training data are narrow in task variety and easily solved, and real-world APIs lack diversity and are too unstable for large-scale reinforcement learning rollouts. We address these challenges with SYNTHAGENT, a framework that jointly synthesizes diverse tool-use training data and simulates complete environments. Specifically, a strong teacher model creates novel tasks and tool ecosystems, then rewrites them into intentionally underspecified instructions, compelling agents to actively query users for missing details. During rollouts on these synthetic tasks, an LLM-based user simulator supplies private user information, while a mock tool system delivers stable tool responses. For rewards, task-level rubrics are constructed from required subgoals, user-agent interactions, and forbidden behaviors. Across 14 challenging datasets in math, search, and tool use, models trained on our synthetic data achieve substantial gains, with small models outperforming larger baselines.
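The rubric-based reward described above can be illustrated with a minimal sketch. This is not the paper's implementation: the `Rubric` fields, event names, and the equal weighting of subgoal completion and user interaction are all hypothetical assumptions chosen for illustration.

```python
# Hypothetical sketch of a task-level rubric reward: required subgoals,
# a minimum number of clarification queries, and forbidden behaviors.
from dataclasses import dataclass, field

@dataclass
class Rubric:
    subgoals: list            # events that must appear in the trajectory
    required_queries: int     # minimum clarification questions to ask the user
    forbidden: list = field(default_factory=list)  # behaviors that zero the reward

def score_trajectory(events: list, rubric: Rubric) -> float:
    """Return a reward in [0, 1] for one agent trajectory (assumed scheme)."""
    # Any forbidden behavior invalidates the whole episode.
    if any(f in events for f in rubric.forbidden):
        return 0.0
    # Fraction of required subgoals the agent actually completed.
    hit = sum(g in events for g in rubric.subgoals) / max(len(rubric.subgoals), 1)
    # Did the agent ask the user for the missing details?
    asked = sum(e == "ask_user" for e in events)
    interaction = min(asked / max(rubric.required_queries, 1), 1.0)
    # Equal weighting of subgoal completion and interaction (an assumption).
    return 0.5 * hit + 0.5 * interaction

rubric = Rubric(subgoals=["search_flights", "book_ticket"],
                required_queries=1,
                forbidden=["leak_payment_info"])
events = ["ask_user", "search_flights", "book_ticket"]
print(score_trajectory(events, rubric))  # -> 1.0
```

An underspecified instruction thus only earns full reward when the agent both queries the simulated user and completes every subgoal, while a single forbidden action zeroes the episode.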