🤖 AI Summary
High-quality, multi-step task data for training Large Action Models (LAMs) is scarce, hindering scalable and effective LAM development.
Method: This paper proposes an intelligent agent simulation framework supporting online exploration and closed-loop feedback. It introduces two novel mechanisms: dynamic task query generation and trajectory self-evolution, enabling LLM-based agents to autonomously invoke tools, respond to real-time feedback, and generate diverse action trajectories within an interactive simulation environment. Trajectory distillation and quality filtering further ensure efficient, high-fidelity data construction with minimal human involvement.
Contribution/Results: The approach drastically reduces reliance on manual annotation; data production requires near-zero human intervention. LAMs trained on the generated data improve by up to 49.3% on the ToolBench and CRMArena benchmarks, consistently surpassing their original baselines across all evaluated metrics.
Abstract
Large Action Models (LAMs) for AI Agents offer incredible potential but face challenges due to the need for high-quality training data, especially for multi-step tasks that involve planning, executing tool calls, and responding to feedback. To address these issues, we present LAM SIMULATOR, a comprehensive framework designed for online exploration of agentic tasks with high-quality feedback. Our framework features a dynamic task query generator, an extensive collection of tools, and an interactive environment where Large Language Model (LLM) Agents can call tools and receive real-time feedback. This setup enables LLM Agents to explore and solve tasks autonomously, facilitating the discovery of multiple approaches to tackle any given task. The resulting action trajectory data are then used to create high-quality training datasets for LAMs. Our experiments on popular agentic benchmarks, ToolBench and CRMArena, highlight the effectiveness of LAM SIMULATOR: models trained with self-generated datasets using our framework achieve significant performance gains, up to a 49.3% improvement over their original baselines. LAM SIMULATOR requires minimal human input during dataset creation, underscoring its efficiency and effectiveness in speeding up the development of AI agents.
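The closed-loop pipeline described above (task query generation, agentic tool use with real-time feedback, and quality filtering of the resulting trajectories) can be sketched as follows. This is a minimal illustrative mock, not the paper's actual implementation; every name here (`generate_query`, `run_agent`, `quality_filter`, the toy tool set) is a hypothetical stand-in:

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical seed templates for dynamic task query generation.
SEED_TASKS = ["What is the weather in {city}?", "Find hotels in {city}."]

def generate_query(seed_tasks):
    """Dynamic task query generation: instantiate a new task from a seed template."""
    template = random.choice(seed_tasks)
    return template.format(city=random.choice(["Paris", "Tokyo"]))

def run_agent(query, tools, max_steps=5):
    """Agent exploration: at each step pick a tool, call it, and record the
    environment's real-time feedback as one step of the action trajectory."""
    trajectory = []
    for _ in range(max_steps):
        name, fn = random.choice(list(tools.items()))
        feedback = fn(query)  # interactive environment returns feedback
        trajectory.append({"tool": name, "feedback": feedback})
        if feedback.get("done"):  # task solved, stop exploring
            break
    return trajectory

def quality_filter(trajectory):
    """Quality filtering: keep only trajectories that reached a terminal success."""
    return bool(trajectory) and trajectory[-1]["feedback"].get("done", False)

# Toy tool collection standing in for the framework's extensive tool library.
tools = {
    "search": lambda q: {"result": f"docs for {q}", "done": False},
    "answer": lambda q: {"result": f"answer to {q}", "done": True},
}

# Autonomous data construction loop: explore many tasks, keep good trajectories.
dataset = []
for _ in range(20):
    query = generate_query(SEED_TASKS)
    trajectory = run_agent(query, tools)
    if quality_filter(trajectory):
        dataset.append({"query": query, "trajectory": trajectory})
```

Filtered `dataset` entries would then serve as supervised training examples for a LAM; the random tool choice above is a placeholder for an LLM agent's actual action selection.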