EnvScaler: Scaling Tool-Interactive Environments for LLM Agent via Programmatic Synthesis

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
📄 PDF
🤖 AI Summary
This work addresses the scarcity of scalable, high-quality tool-interaction environments for training large language model (LLM) agents, a limitation imposed by restricted access to real-world systems, simulation hallucinations, and prohibitive human annotation costs. To overcome this, the authors propose EnvScaler, a novel framework featuring a two-stage automated synthesis pipeline: it first constructs structured environment skeletons through topic mining and logical modeling, then generates diverse task scenarios paired with rule-based trajectory-validation functions. These synthetic data are used for supervised fine-tuning and reinforcement learning on the Qwen3 series of models. The pipeline automatically produces 191 distinct environments and approximately 7,000 task scenarios, yielding significant performance gains across three benchmarks on complex, multi-turn, multi-tool interaction tasks.

📝 Abstract
Large language models (LLMs) are expected to be trained to act as agents in various real-world environments, but this process relies on rich and varied tool-interaction sandboxes. However, access to real systems is often restricted; LLM-simulated environments are prone to hallucinations and inconsistencies; and manually built sandboxes are hard to scale. In this paper, we propose EnvScaler, an automated framework for scalable tool-interaction environments via programmatic synthesis. EnvScaler comprises two components. First, SkelBuilder constructs diverse environment skeletons through topic mining, logic modeling, and quality evaluation. Then, ScenGenerator generates multiple task scenarios and rule-based trajectory validation functions for each environment. With EnvScaler, we synthesize 191 environments and about 7K scenarios, and apply them to Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) for Qwen3 series models. Results on three benchmarks show that EnvScaler significantly improves LLMs' ability to solve tasks in complex environments involving multi-turn, multi-tool interactions. We release our code and data at https://github.com/RUC-NLPIR/EnvScaler.
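To make the "rule-based trajectory validation functions" mentioned in the abstract concrete, here is a minimal sketch of what such a verifier might look like. All names (`validate_trajectory`, the trajectory tuple layout, the example tools) are illustrative assumptions, not the paper's actual implementation.

```python
def validate_trajectory(trajectory, required_calls, final_state_check):
    """Check a tool-call trajectory against scenario-specific rules.

    trajectory: list of (tool_name, args, result) tuples in call order
    required_calls: tool names that must appear as a subsequence, in order
    final_state_check: predicate over the result of the last tool call
    """
    called = [tool for tool, _, _ in trajectory]
    # Rule 1: required tools must be invoked in the given order
    # (iterator trick: each `req in it` consumes up to its match).
    it = iter(called)
    if not all(req in it for req in required_calls):
        return False
    # Rule 2: the final environment state must satisfy the scenario goal.
    final_state = trajectory[-1][2] if trajectory else None
    return final_state_check(final_state)


# Example: a booking scenario must search before booking and end confirmed.
traj = [
    ("search_flights", {"dest": "SFO"}, {"results": 3}),
    ("book_flight", {"flight_id": 1}, {"status": "confirmed"}),
]
ok = validate_trajectory(
    traj,
    required_calls=["search_flights", "book_flight"],
    final_state_check=lambda s: s is not None and s.get("status") == "confirmed",
)
```

Because such checks are deterministic functions rather than LLM judges, they can serve directly as verifiable reward signals for RL, which is the role the abstract assigns them.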
Problem

Research questions and friction points this paper is trying to address.

tool-interaction environments
LLM agents
environment scalability
sandbox simulation
hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

programmatic synthesis
tool-interaction environments
LLM agents
scalable environment generation
trajectory validation